

  1. Vladimir Protasov (Moscow State University, Russia) Joint spectral radius of matrices: applications and computation

  2. The Joint spectral radius (JSR). The geometric sense: JSR is the measure of simultaneous contractibility of the family of matrices; taking the unit ball in the corresponding norm gives the picture of simultaneous contraction.
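
The definition itself appears on the slide only as a formula image. A standard statement, for a finite family A = {A_1, ..., A_m} of d x d matrices, is the following (my reconstruction, not the slide's own wording):

```latex
% Joint spectral radius (Rota--Strang, 1960): the limit exists and is
% independent of the matrix norm used.
\[
  \hat\rho(\mathcal{A}) \;=\; \lim_{k\to\infty}\;
  \max_{d_1,\dots,d_k\in\{1,\dots,m\}}
  \bigl\| A_{d_1} A_{d_2}\cdots A_{d_k} \bigr\|^{1/k}.
\]
```

The JSR is below one exactly when all long products of the matrices tend to zero, i.e. when there is a norm in which every A_i is a contraction; this is the "simultaneous contractibility" mentioned on the slide.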

  3. The Joint spectral radius (JSR). J.C. Rota, G. Strang (1960) -- Normed algebras. A. Molchanov, E. Pyatnitsky, N. Barabanov, D. Liberzon, L. Gurvits, ... (1988) -- Linear switching systems. C. Micchelli, H. Prautzsch, W. Dahmen, A. Levin, N. Dyn, P. Oswald, ... (1989) -- Subdivision algorithms. I. Daubechies, J. Lagarias, C. Heil, G. Strang, ... (1991) -- Wavelets.

  4. Applications of the Joint Spectral Radius: signal and image processing (wavelets); combinatorics, graphs; coding, channel capacity; mathematical economics (Leontief model, Cantor-Lippman model); ecological models (population dynamics).

  5. The Joint spectral radius (JSR). The same is true if all the matrices commute or are upper (lower) triangular.
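
The formula referred to by "the same" is not reproduced in the transcript; the standard fact behind this slide is that for commuting families, and for families of simultaneously upper (or lower) triangular matrices, the JSR reduces to the ordinary spectral radii:

```latex
% In the commuting or triangular case the JSR is attained on the
% individual matrices:
\[
  \hat\rho(A_1,\dots,A_m) \;=\; \max_{1\le i\le m}\ \rho(A_i).
\]
```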

  6. Application. Binary codes that avoid prohibited differences. Codes in binary communication channels; magnetic recording channels. Karabed, Nazari (1996), Kurtas, Proakis, Salehi (1997), Immink, Wolf (1998), Moision, Orlitsky, Siegel (2001, 2007). Fact: the error probability is dominated by a small set of potential difference patterns.
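
The link to the JSR (used later, on slide 22, for computing the capacity of codes) is not written out in the transcript. As far as I recall the Moision-Orlitsky-Siegel / Blondel-Jungers-Protasov formulation, the capacity of codes avoiding a prohibited difference set D is expressed through the JSR of a finite family of 0-1 matrices built from D:

```latex
% Capacity of difference-avoiding codes via the JSR (schematic statement;
% the 0-1 matrices A_1, ..., A_m are constructed explicitly from the set D
% of prohibited difference patterns).
\[
  \mathrm{cap}(D) \;=\; \log_2 \hat\rho(A_1,\dots,A_m).
\]
```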

  7. Application: The Euler binary partition function
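
The definition on this slide is a formula image. A common form of the Euler binary partition function, assumed in the sketch below (the exact variant used in the talk may differ), counts representations of n as sums of powers of two with each digit bounded by s - 1; the function name and the recursion are mine, for experimentation only.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def euler_binary_partitions(n: int, s: int) -> int:
    """Number of representations n = sum_j a_j * 2**j with 0 <= a_j <= s - 1.

    Assumed definition (the slide's own formula is not in the transcript).
    For s > n this equals the classical unrestricted binary partition function.
    """
    if n == 0:
        return 1
    total = 0
    # Choose the lowest digit a_0; it must have the same parity as n.
    for a0 in range(n % 2, min(n, s - 1) + 1, 2):
        total += euler_binary_partitions((n - a0) // 2, s)
    return total

if __name__ == "__main__":
    # First values of the classical binary partition function: 1, 1, 2, 2, 4, 4, 6, ...
    print([euler_binary_partitions(n, n + 1) for n in range(8)])
```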

  8. L.Euler (1728), A.Tanturri (1918), K.Mahler (1940), N.de Bruijn (1948), W.Pennington (1953), L.Carlitz (1965), D.Knuth (1966), R.Churchhouse (1969), B.Reznick (1990), Pfaltz (1995), C.Froberg (1997), V.P. (2000), D.Feng (2011), N.Sidorov (2011), A.Thomas (2014), etc. The question: what is the asymptotic growth of this function?

  9. Applications. Wavelets with compact support. Haar (1909), Kotelnikov (1933), Shannon (1949); 1980-90: S.Mallat, Y.Meyer, I.Daubechies, C.Chui, A.Cohen, W.Dahmen, etc. I.Daubechies (1988) -- wavelets with compact support. Advantages of wavelets: localization (compact supports), fast algorithms for computing the coefficients, characterization of functional spaces by coefficients. Signal processing. Approximation and interpolation algorithms. Numerical PDE (Wavelet Galerkin method, etc.)

  10. How to construct wavelets. To construct a system of compactly supported wavelets one needs to solve a refinement equation, whose coefficient sequence is a sequence of complex numbers satisfying some constraints.
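
The equation itself is not reproduced in the transcript; in the standard form (assumed here) it reads:

```latex
% Refinement (two-scale) equation with a finite coefficient sequence
% c_0, ..., c_N; the normalization sum_k c_k = 2 makes the compactly
% supported solution unique up to scaling.
\[
  \varphi(x) \;=\; \sum_{k=0}^{N} c_k\,\varphi(2x-k),
  \qquad \sum_{k=0}^{N} c_k \;=\; 2 .
\]
```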

  11. Examples of wavelets. 1. Haar wavelets (1909). Refinement equation: see below. 2. Shannon-Kotelnikov wavelets (1933, 1949) (not compactly supported). 3. Meyer wavelets (1986) (not compactly supported). 4. Daubechies wavelets (1988): the second wavelet of Daubechies. Refinement equation: see below.
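
The two refinement equations on this slide appear only as images; in the normalization used above they are, as far as I can reconstruct them:

```latex
% Haar scaling function (support [0,1]):
\[
  \varphi(x) \;=\; \varphi(2x) + \varphi(2x-1).
\]
% Scaling function of the second Daubechies wavelet (support [0,3]):
\[
  \varphi(x) \;=\; \tfrac{1+\sqrt{3}}{4}\,\varphi(2x)
            \;+\; \tfrac{3+\sqrt{3}}{4}\,\varphi(2x-1)
            \;+\; \tfrac{3-\sqrt{3}}{4}\,\varphi(2x-2)
            \;+\; \tfrac{1-\sqrt{3}}{4}\,\varphi(2x-3).
\]
```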

  12. Smoothness of wavelets. I.Daubechies, J.Lagarias, 1991; A.Cavaretta, W.Dahmen, C.Micchelli, 1991; C.Heil, G.Strang, 1994. Example.
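
The result is shown on the slide only as a formula. Under the usual assumptions it can be summarized as follows: from the refinement coefficients one forms two transition matrices T_0 and T_1 (with entries c_{2i-j} and c_{2i-j+1}, up to an indexing convention), restricts them to a suitable common invariant subspace W, and the Hölder exponent of the refinable function is then expressed through their JSR. This schematic statement is my paraphrase, not the slide's exact formulation.

```latex
% Hoelder regularity of the refinable function via the JSR
% (Daubechies--Lagarias; schematic form):
\[
  \alpha_\varphi \;=\; -\log_2 \hat\rho\bigl(T_0|_W,\; T_1|_W\bigr).
\]
```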

  13. I.Daubechies, J.Lagarias (1991); G.Gripenberg (1996); N.Guglielmi, V.P. (2015).

  14. How to compute or estimate the JSR? Blondel, Tsitsiklis (1997-2000): the problem of JSR computation for two 0-1 matrices is NP-hard; the problem whether the JSR of two rational matrices is greater than one is algorithmically undecidable whenever d > 46; there is no polynomial-time algorithm with respect to both the dimension d and the accuracy.

  15. The main inequality for JSR. The convergence to JSR is very slow; see the bounds below.
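
The inequality is shown only as an image; the standard two-sided bounds, valid for every product length k and converging to the JSR as k grows, are:

```latex
% Lower bound: spectral radii of products; upper bound: norms of products.
% Both tend to \hat\rho(\mathcal{A}) as k -> infinity, but the upper bound
% typically converges slowly (the error in the exponent decays like 1/k).
\[
  \max_{d_1,\dots,d_k} \rho\bigl(A_{d_1}\cdots A_{d_k}\bigr)^{1/k}
  \;\le\; \hat\rho(\mathcal{A}) \;\le\;
  \max_{d_1,\dots,d_k} \bigl\| A_{d_1}\cdots A_{d_k}\bigr\|^{1/k}.
\]
```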

  16. Methods for estimating JSR. The slide shows the tree of products: A1, A2; A1A1, A1A2, A2A1, A2A2; and so on for longer products.
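
As an illustration of that brute-force tree (not code from the talk; the function name, the norm choice and the test matrices are mine), the sketch below enumerates all products of a given length and evaluates both sides of the inequality from slide 15.

```python
import itertools
import numpy as np

def jsr_bounds(matrices, k):
    """Brute-force two-sided bound on the joint spectral radius.

    Enumerates all len(matrices)**k products of length k and returns
    (max spectral radius)**(1/k) <= JSR <= (max spectral norm)**(1/k).
    Purely illustrative: the cost grows exponentially with k.
    """
    lower = upper = 0.0
    for word in itertools.product(matrices, repeat=k):
        P = word[0] if k == 1 else np.linalg.multi_dot(word)
        lower = max(lower, np.max(np.abs(np.linalg.eigvals(P))))
        upper = max(upper, np.linalg.norm(P, 2))
    return lower ** (1.0 / k), upper ** (1.0 / k)

if __name__ == "__main__":
    # A standard example pair; its JSR equals the golden ratio
    # (1 + sqrt(5)) / 2, attained by the product A1 @ A2.
    A1 = np.array([[1.0, 1.0], [0.0, 1.0]])
    A2 = np.array([[1.0, 0.0], [1.0, 1.0]])
    for k in (2, 4, 8, 12):
        lo, up = jsr_bounds([A1, A2], k)
        print(f"k = {k:2d}:  {lo:.6f} <= JSR <= {up:.6f}")
```

Even on this 2x2 pair the upper bound closes in slowly with k, which illustrates the slow convergence of the norm bound mentioned on slide 15.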

  17. The concept of extremal norm
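
The definition on the slide is not in the transcript; in the usual formulation (which the later slides rely on), a norm on R^d is called extremal for the family if the whole family is Lipschitz in it with the constant hat-rho, which no norm can improve:

```latex
% Extremal (invariant) norm: equivalently, its unit ball B satisfies
%   conv(A_1 B, ..., A_m B)  contained in  \hat\rho(\mathcal{A}) B.
% Such a norm exists whenever the family is irreducible.
\[
  \max_{1\le i\le m}\ \|A_i x\| \;\le\; \hat\rho(\mathcal{A})\,\|x\|
  \qquad \text{for all } x\in\mathbb{R}^d .
\]
```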

  18. When we are not able to find an extremal norm, we find the best possible one within a given class of norms.

  19. Sometimes it is easier to prove more. George Polya, «Mathematics and Plausible Reasoning» (1954). When trying to prove something, often a good strategy is to try to prove more. When trying to compute something approximately, often a good strategy is to… find it precisely. Sometimes it is easier to find precisely what was supposed to be approximated.

  20. The extremal polytope algorithm (Guglielmi, P., 2013). It appears that in practice the invariant polytope ‘’almost always’’ exists. In practical experiments it was found for 100% of: randomly generated matrices; “spoiled” randomly generated matrices; matrices from problems in applications.

  21. The algorithm for exact JSR computation. N.Guglielmi, V.P. (Found. Comput. Math. 13 (2013), 37-97). The ‘’dead’’ branches: every time we check whether the new vertex is in the convex hull of the previous ones (this is a linear programming problem); if it is, that branch is not extended further. The algorithm terminates when there are no new vertices. The invariant polytope P is the convex hull of all vertices produced by the algorithm.
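
A minimal sketch of this procedure, under strong simplifying assumptions: real matrices, a candidate s.m.p. with a real leading eigenvector, and the symmetric (absolutely convex) polytope case. The balancing, complex-polytope and efficiency refinements of the Guglielmi-Protasov paper are omitted, and all names below (in_abs_conv_hull, invariant_polytope) are mine, not the authors'.

```python
import numpy as np
from scipy.optimize import linprog

def in_abs_conv_hull(p, V, tol=1e-9):
    """Is p in the absolutely convex hull of the columns of V?

    Solves the LP  min sum(t)  s.t.  V @ lam = p,  |lam_i| <= t_i,
    and answers yes iff the optimum is <= 1.  This is the linear
    programming problem mentioned on the slide.
    """
    d, n = V.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_eq = np.hstack([V, np.zeros((d, n))])
    A_ub = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),    #  lam - t <= 0
                      np.hstack([-np.eye(n), -np.eye(n)])])  # -lam - t <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=p,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.success and res.fun <= 1.0 + tol

def invariant_polytope(matrices, candidate_word, max_vertices=500):
    """Simplified invariant-polytope iteration (sketch only).

    candidate_word: indices of the candidate s.m.p., e.g. [0, 1].
    Returns (jsr, vertices) if the polytope closes up within the budget,
    or None if no termination certificate was obtained.
    """
    factors = [matrices[i] for i in candidate_word]
    Pi = factors[0] if len(factors) == 1 else np.linalg.multi_dot(factors)
    eigvals, eigvecs = np.linalg.eig(Pi)
    j = int(np.argmax(np.abs(eigvals)))
    rho = np.abs(eigvals[j]) ** (1.0 / len(factors))   # candidate JSR value
    scaled = [A / rho for A in matrices]               # normalize the family
    v0 = np.real(eigvecs[:, j])                        # assumed real eigenvector
    vertices = [v0 / np.linalg.norm(v0)]
    frontier = list(vertices)
    while frontier and len(vertices) < max_vertices:
        new_frontier = []
        for v in frontier:
            for A in scaled:
                w = A @ v
                if in_abs_conv_hull(w, np.column_stack(vertices)):
                    continue                 # a ''dead'' branch: drop it
                vertices.append(w)           # genuinely new vertex
                new_frontier.append(w)       # keep expanding this branch
        frontier = new_frontier
    if not frontier:
        return rho, vertices  # the polytope closed up: rho is the exact JSR
    return None
```

It can be tried, for instance, on the pair from the earlier sketch via invariant_polytope([A1, A2], [0, 1]).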

  22. Example. Computing the capacity of codes

  23. Example. The problem of density of ones in the Pascal rhombus (S.Finch, P.Sebah, and Z.-Q.Bai, 2008). Actually, (the algorithm runs in less than one second).

  24. Computing the exponent of growth of the ternary Euler partition function:

  25. Results of JSR computation for randomly generated matrices

  26. JSR computation for positive matrices of dimension d = 100.

  27. Conditions for finite termination of the algorithm: a dominant s.m.p. (spectrum maximizing product). Is this condition ‘’almost always’’ satisfied?
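
Neither term is defined in the transcript; in the terminology of the Guglielmi-Protasov paper, as I understand it, a spectrum maximizing product (s.m.p.) is a finite product whose averaged spectral radius attains the JSR, and it is dominant when, after dividing the family by the JSR, every product not obtained from it by powers and cyclic permutations has spectral radius strictly below one.

```latex
% Spectrum maximizing product: a product Pi = A_{d_1} ... A_{d_k} with
\[
  \rho\bigl(A_{d_1}\cdots A_{d_k}\bigr)^{1/k} \;=\; \hat\rho(\mathcal{A}).
\]
```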

  28. The classical simplex method (for linear programming, G.Dantzig, 1947). In practice, it converges extremely fast. G.Dantzig believed that the number of steps is linear in N and d. On average, the number of iterations is indeed linear in N and d (S.Smale, 1983). What is the ``average'' complexity of the Invariant Polytope algorithm?

  29. Thank you!
