
Degree and Sensitivity: tails of two distributions

This paper explores the relationship between the degree and sensitivity of Boolean functions, introducing new parameters and providing insights into their properties. It also examines the Fourier distribution and its approximation, as well as the concept of sensitive trees. The paper concludes by discussing potential applications and open problems.


Presentation Transcript


  1. Degree and Sensitivity: tails of two distributions. Parikshit Gopalan (Microsoft Research), Rocco Servedio (Columbia Univ.), Avi Wigderson (IAS, Princeton), and Avishay Tal (IAS, Princeton)* (*see ECCC version)

  2. (Real) degree of Boolean functions. f : {-1,1}^n → {-1,1}, viewed in R[x1, x2, …, xn].
  deg(f) = min d such that there is a real polynomial p of degree d with p(x) = f(x) for all x ∈ {-1,1}^n.
  Ex1: Maj(x,y,z) = ½(x + y + z − xyz), deg = 3.
  Ex2: NAE(x,y,z) = ½(xy + yz + xz − 1), deg = 2.
  p_f = the unique multilinear polynomial with p_f(x) = f(x) for all x ∈ {-1,1}^n:
  p_f = Σ_{T⊆[n]} f̂(T)·∏_{i∈T} x_i, where the f̂(T) are the Fourier coefficients.
  deg(f) = deg(p_f) = max |T| such that f̂(T) ≠ 0.
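As a sanity check on these definitions (my illustration, not part of the talk), the Fourier coefficients and degree of a small function can be computed by brute force; the helper names below are assumptions:

```python
from itertools import product

def fourier_coeffs(f, n):
    """Brute force: f_hat(T) = E_x[f(x) * prod_{i in T} x_i], with T encoded as a bitmask."""
    coeffs = {}
    for T in range(1 << n):
        total = 0
        for x in product((-1, 1), repeat=n):
            chi = 1
            for i in range(n):
                if T >> i & 1:
                    chi *= x[i]
            total += f(x) * chi
        coeffs[T] = total / 2 ** n
    return coeffs

maj = lambda x: 1 if sum(x) > 0 else -1
c = fourier_coeffs(maj, 3)
# Maj(x,y,z) = 1/2 (x + y + z - xyz): each singleton gets 1/2, the set {1,2,3} gets -1/2.
deg = max(bin(T).count("1") for T, v in c.items() if abs(v) > 1e-9)
print(deg)  # 3
```

This recovers the expansion in Ex1: the nonzero coefficients sit exactly on the three singletons and on {x,y,z}, so deg(Maj) = 3.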

  3. Complexity measures independent of n. f : {-1,1}^n → {-1,1}.
  D(f) – deterministic decision tree complexity
  R(f) – probabilistic decision tree complexity
  Q(f) – quantum decision tree complexity
  N(f) – certificate complexity
  deg(f) – real degree
  deg∞(f) – L∞ approximate degree
  bs(f) – block sensitivity
  …… [Nisan, …] All of these parameters are polynomially related.
  sen(f) – sensitivity ??

  4. Sensitivity. f : {-1,1}^n → {-1,1}. G_f = the graph on {-1,1}^n whose edges are the pairs of Hamming neighbors x, x^i with f(x) ≠ f(x^i).
  • s(x) = sensitivity of x = vertex degree of x in G_f
  • sen(f) = max_x s(x)
  • [Nisan-Szegedy] sen(f) ≤ deg(f)^2
  • [Sens-Conjecture] deg(f) ≤ sen(f)^c
  Understand G_f!! Low sensitivity = smooth; smooth = simple.
  (figure: G_f drawn on the cube, with ±1 values of f labeling the vertices)
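A minimal brute-force sketch (mine, not the speakers') that computes sen(f) as just defined and checks the Nisan-Szegedy direction on Maj:

```python
from itertools import product

def sensitivity(f, n):
    """sen(f) = max over x of s(x), where s(x) counts coordinates i with f(x) != f(x^i)."""
    best = 0
    for x in product((-1, 1), repeat=n):
        s = sum(f(x) != f(x[:i] + (-x[i],) + x[i + 1:]) for i in range(n))
        best = max(best, s)
    return best

maj = lambda x: 1 if sum(x) > 0 else -1
# sen(Maj) = 2 (attained at inputs like (1,1,-1)); 2 <= 3^2 = deg(Maj)^2,
# consistent with the Nisan-Szegedy bound sen(f) <= deg(f)^2.
print(sensitivity(maj, 3))  # 2
```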

  5. Fourier distribution & real approximation. f : {-1,1}^n → {-1,1}.
  L2-approximation: deg_ε(f) = min d such that there is a real polynomial q of degree d with E_{x∈{-1,1}^n}[|q(x) − f(x)|^2] ≤ ε.
  Σ_T f̂(T)^2 = 1, giving the Fourier distribution: draw T ⊆ [n] with probability f̂(T)^2.
  ε_t = Σ_{|T|>t} f̂(T)^2: the tails of the Fourier distribution.
  deg_ε(f) = min d such that ε_d ≤ ε.
  – The best approximator q is a truncation of p_f.
  – deg_0(f) = deg(f).
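The tails ε_t can likewise be tabulated exhaustively on a toy example; this is an illustrative sketch under my own helper names, not code from the paper:

```python
from itertools import product

def chi(x, T):
    """Character: product of x_i over the bits i set in the mask T."""
    p = 1
    for i, xi in enumerate(x):
        if T >> i & 1:
            p *= xi
    return p

def fourier_weights(f, n):
    """W[d] = sum of f_hat(T)^2 over |T| = d; by Parseval the W[d] sum to 1 for Boolean f."""
    W = [0.0] * (n + 1)
    xs = list(product((-1, 1), repeat=n))
    for T in range(1 << n):
        f_hat = sum(f(x) * chi(x, T) for x in xs) / 2 ** n
        W[bin(T).count("1")] += f_hat ** 2
    return W

maj = lambda x: 1 if sum(x) > 0 else -1
W = fourier_weights(maj, 3)
eps = [sum(W[d + 1:]) for d in range(4)]  # eps[t] = sum over |T| > t of f_hat(T)^2
print(eps)  # [1.0, 0.25, 0.25, 0.0]
```

So truncating p_f at degree 1 already leaves L2 error only 1/4 (the dropped ½·xyz term contributes (½)^2), i.e. deg_{1/4}(Maj) = 1 while deg_0(Maj) = 3.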

  6. Main result. f : {-1,1}^n → {-1,1}.
  • deg_0(f) = deg(f)
  • deg_ε(f) = min d: ε_d ≤ ε (best approximation in L2)
  • [Sens-Conj] deg_0(f) ≤ sen(f)^c
  • [Thm1] ∀ε>0: deg_ε(f) ≤ sen(f)·log(1/ε)
  • [Thm2] This is “optimal”: (∃c>0, δ<1 such that deg_ε(f) ≤ sen(f)^c·log(1/ε)^δ) ⇒ [Sens-Conj]

  7. Sensitive trees. f : {-1,1}^n → {-1,1}.
  • A sensitive tree in G_f is a subgraph H of G_f such that:
   – H is a tree
   – the dimensions of the edges of H are all distinct
  • ts(f) = max {dim H : H a sensitive tree}; sen(f) = max {dim H : H a sensitive star}
  • [Thm3] deg(f) ≤ ts(f)^2
  • [TS-Conj] deg(f) ≤ ts(f)
  (figure: a sensitive tree in G_f with edge dimensions 1, 2, 3 labeled)

  8. Moments: Fourier vs. sensitivity. f : {-1,1}^n → {-1,1}, p_f = Σ_T f̂(T)·χ_T.
  D: draw T ⊆ [n] with probability f̂(T)^2. D_k = E_D[|T|^k]: the Fourier moments.
  S: draw x ∈ {-1,1}^n uniformly. S_k = E_x[s(x)^k]: the sensitivity moments.
  These are average-case variants of deg(f) and sen(f).
  [Moment-Conj] ∀k: D_k ≤ a^k·S_k (with a independent of f and n)
  [Fact] D_1 = S_1 (total influence); [Kalai] D_2 = S_2
  [Thm4] (deg(f) ≤ ts(f)) ⇒ [Moment-Conj]
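Both identities can be checked numerically on a small function; a brute-force sketch (my helper names, not from the talk):

```python
from itertools import product

def chi(x, T):
    p = 1
    for i, xi in enumerate(x):
        if T >> i & 1:
            p *= xi
    return p

def D_moment(f, n, k):
    """D_k = E over the Fourier distribution of |T|^k = sum_T f_hat(T)^2 * |T|^k."""
    xs = list(product((-1, 1), repeat=n))
    return sum((sum(f(x) * chi(x, T) for x in xs) / 2 ** n) ** 2
               * bin(T).count("1") ** k for T in range(1 << n))

def S_moment(f, n, k):
    """S_k = E_x[s(x)^k], where s(x) counts the sensitive coordinates of x."""
    total = 0
    for x in product((-1, 1), repeat=n):
        s = sum(f(x) != f(x[:i] + (-x[i],) + x[i + 1:]) for i in range(n))
        total += s ** k
    return total / 2 ** n

maj = lambda x: 1 if sum(x) > 0 else -1
print(D_moment(maj, 3, 1), S_moment(maj, 3, 1))  # both 1.5 (total influence, D_1 = S_1)
print(D_moment(maj, 3, 2), S_moment(maj, 3, 2))  # both 3.0 (Kalai's identity, D_2 = S_2)
```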

  9. Proof of the main result. f : {-1,1}^n → {-1,1}. deg_ε(f) = min d: ε_d ≤ ε (best approximation in L2).
  [Thm1] ∀ε>0: deg_ε(f) ≤ 100·sen(f)·log(1/ε).
  Fix s = sen(f) and k, and set t = 100sk. Then
  ε_t = Σ_{|T|>t} f̂(T)^2 = Pr_D[|T| > t] = Pr_D[|T|^k > t^k] ≤ E_D[|T|^k]/t^k ≤ (n/t)^k·Pr_ρ[deg(f_ρ) = k] ≤ …… ≤ exp(−k),
  where step (1) bounds the Fourier moment via a random restriction ρ leaving k variables alive, and step (2) is a switching lemma.

  10. Random restrictions. Draw ρ: {x1, x2, …, xn} → {-1,1,*} at random from
  R_k = {ρ = (K, y) : K = ρ^{-1}(*), |K| = k, y ∈ {-1,1}^{[n]∖K}}.
  (1) E_D[|T|^k] ≤ n^k·Pr_ρ[deg(f_ρ) = k]
  Proof: Pr_ρ[deg(f_ρ) = k] = Pr_ρ[f̂_ρ(K) ≠ 0]
   ≥ 2^{−2k}·E_ρ[f̂_ρ(K)^2]  (granularity of Fourier coefficients)
   = Σ_T Pr_ρ[K ⊆ T]·f̂(T)^2  (heredity of Fourier coefficients)
   ≥ (k/n)^k·E_D[|T|^k].
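For intuition, a single restriction and deg(f_ρ) can be computed directly; the encoding of ρ below is my own illustrative choice, not the paper's:

```python
from itertools import product

def chi(x, T):
    p = 1
    for i, xi in enumerate(x):
        if T >> i & 1:
            p *= xi
    return p

def degree(f, n):
    """deg(f) = max |T| with f_hat(T) != 0, found by brute force."""
    xs = list(product((-1, 1), repeat=n))
    return max((bin(T).count("1") for T in range(1 << n)
                if abs(sum(f(x) * chi(x, T) for x in xs)) > 1e-9), default=0)

def restrict(f, n, alive, fixed):
    """f_rho: coordinates in `alive` stay free (in order); the rest take values from `fixed`."""
    def f_rho(y):
        ya, fa, z = iter(y), iter(fixed), []
        for i in range(n):
            z.append(next(ya) if i in alive else next(fa))
        return f(tuple(z))
    return f_rho

maj = lambda x: 1 if sum(x) > 0 else -1
f_rho = restrict(maj, 3, alive={0, 1}, fixed=[1])  # K = {x1, x2}, x3 fixed to 1
# Maj(x, y, 1) = 1/2 (x + y + 1 - xy), so f_hat_rho(K) != 0 and deg(f_rho) = |K| = 2.
print(degree(f_rho, 2))  # 2
```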

  11. A switching lemma (compare Håstad’s switching lemma).
  (2) Pr_ρ[deg(f_ρ) = k] ≤ (10sk/n)^k, where s = sen(f)
  • In fact (2’): Pr_ρ[ts(f_ρ) = k] ≤ (10sk/n)^k
  • Bad = {ρ : ts(f_ρ) = k} ⊆ R_k
  • Encode Bad into [2^n]×[s]^k×[2]^k: map each bad ρ to a DFS path in its sensitive tree (G_f has max degree ≤ s; in the paper we use “proper walks”)
  • |Bad|/|R_k| < (10sk/n)^k
  • ⇒ [Moment-Conj]

  12. Applications.
  • Learning algorithm for low-sensitivity functions in time (1/ε)^{poly(s)}:
   under the uniform distribution, using [LMN]; exact learning, using [GNSTW].
  • New proof of the switching lemma.
  • Better bounds toward the Entropy-Influence conjecture:
   [EntInf-Conj] Ent(f) ≤ c·Inf(f)
   [Fact] Ent(f) ≤ c·Inf(f)·log n
   [New] Ent(f) ≤ c·Inf(f)·log sen(f)

  13. Conclusions & open problems.
  • Prove consequences of [Sens-Conj]:
   [GSTW] deg_ε(f) ≤ sen(f)·log(1/ε)
   [GNSTW] depth(f) ≤ poly(sen(f))
  • Prove [Sens-Conj]!!! If not…
  • deg(f) ≤ ts(f)?
  • Relate the new parameters:
   ∀k: E_D[|T|^k] ≤ a^k·E_x[s(x)^k]?
   ∀k: E_x[s(x)^k] ≤ b^k·E_D[|T|^k]?
