Sumsets and entropy revisited

Ben Green, Freddie Manners and I have just uploaded to the arXiv our preprint “Sumsets and entropy revisited”. This paper uses entropy methods to attack the Polynomial Freiman-Ruzsa (PFR) conjecture, which we study in the following two forms:

Conjecture 1 (Weak PFR over {{\bf Z}}) Let {A \subset {\bf Z}^D} be a finite non-empty set whose doubling constant {\sigma[A] := |A+A|/|A|} is at most {K}. Then there is a subset {A'} of {A} of density {\gg K^{-O(1)}} that has affine dimension {O(\log K)} (i.e., it is contained in an affine space of dimension {O(\log K)}).

Conjecture 2 (PFR over {{\bf F}_2}) Let {A \subset {\bf F}_2^D} be a non-empty set whose doubling constant {\sigma[A]} is at most {K}. Then {A} can be covered by {O(K^{O(1)})} cosets of a subspace of cardinality at most {|A|}.

Our main results are then as follows.

Theorem 3 If {A \subset {\bf Z}^D} with {\sigma[A] \leq K}, then
(i) there is a subset {A'} of {A} of density {\gg K^{-O(1)}} with skew-dimension {O(\log K)};
(ii) there is a subset {A'} of {A} of density {\gg \exp(-O(\log^{5/3+o(1)} K))} with affine dimension {O(\log K)};
(iii) if Conjecture 2 holds, then Conjecture 1 holds (i.e., there is a subset {A'} of {A} of density {\gg K^{-O(1)}} with affine dimension {O(\log K)}).

The skew-dimension of a set is a quantity smaller than the affine dimension which is defined recursively; the precise definition is given in the paper, but suffice it to say that singleton sets have dimension {0}, and a set {A \subset {\bf Z}^{D_1} \times {\bf Z}^{D_2}} whose projection to {{\bf Z}^{D_1}} has skew-dimension at most {d_1}, and whose fibers in {\{x\} \times {\bf Z}^{D_2}} have skew-dimension at most {d_2} for any {x}, will have skew-dimension at most {d_1+d_2}. (In fact, the skew-dimension is basically the largest quantity which obeys all of these properties.)
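To make the recursion concrete, here is a small Python sketch (not from the paper, and not the paper's exact definition) that computes an upper bound on the skew-dimension using only the properties just listed, always splitting off the first coordinate; the function name and the choice of splitting are purely illustrative.

```python
def skew_dim_upper_bound(points):
    """Upper bound on the skew-dimension of a finite set of points in Z^D, using only
    the properties quoted above:
      * a singleton has skew-dimension 0;
      * skew-dimension is at most the affine dimension, so a subset of Z has it at most 1;
      * splitting Z^D = Z x Z^{D-1}: dim(A) <= dim(projection) + max over fibers.
    The greedy split over the first coordinate is an illustration, not the paper's definition."""
    points = {tuple(p) for p in points}
    if len(points) == 1:
        return 0
    D = len(next(iter(points)))
    if D == 1:
        return 1
    proj = {p[0] for p in points}           # projection to the first coordinate
    d1 = 0 if len(proj) == 1 else 1
    d2 = max(skew_dim_upper_bound({p[1:] for p in points if p[0] == x}) for x in proj)
    return d1 + d2

print(skew_dim_upper_bound({(0, 0), (1, 0), (2, 1)}))   # prints 1
```

For instance, the three points {(0,0), (1,0), (2,1)} have affine dimension {2}, but the recursion bounds their skew-dimension by {1}, illustrating how the skew-dimension can be strictly smaller than the affine dimension.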

Part (i) of this theorem was implicitly proven by Pálvölgyi and Zhelezov by a different method. Part (ii) with {5/3+o(1)} replaced by {2} was established by Manners. To our knowledge, part (iii) is completely new.

Our proof strategy is to establish these combinatorial additive combinatorics results by using entropic additive combinatorics, in which we replace sets {A} with random variables {X}, and cardinality with (the exponential of) Shannon entropy. This is in order to take advantage of some superior features of entropic additive combinatorics, most notably its good behaviour with respect to homomorphisms.

For instance, the analogue of the combinatorial doubling constant {\sigma[A] := |A+A|/|A|} of a finite non-empty subset {A} of an abelian group {G} is the entropic doubling constant

\displaystyle  \sigma_{\mathrm{ent}}[X] := \exp( {\bf H}(X_1+X_2) - {\bf H}(X) )

of a finitely-valued random variable {X} in {G}, where {X_1,X_2} are independent copies of {X} and {{\bf H}} denotes Shannon entropy. There is also an analogue of the Ruzsa distance

\displaystyle  d(A,B) := \log \frac{|A+B|}{|A|^{1/2} |B|^{1/2}}

between two finite non-empty subsets {A,B} of {G}, namely the entropic Ruzsa distance

\displaystyle  d_{\mathrm{ent}}(X,Y) := {\bf H}(X'+Y') - \frac{1}{2} {\bf H}(X) - \frac{1}{2} {\bf H}(Y)

where {X',Y'} are independent copies of {X,Y} respectively. (Actually, one thing we show in our paper is that the independence hypothesis can be dropped, and this only affects the entropic Ruzsa distance by a factor of three at worst.) Many of the results about sumsets and Ruzsa distance have entropic analogues, but the entropic versions are slightly better behaved; for instance, we have a contraction property

\displaystyle  d_{\mathrm{ent}}(\phi(X),\phi(Y)) \leq d_{\mathrm{ent}}(X,Y)

whenever {\phi: G \rightarrow H} is a homomorphism. In fact we have a refinement of this inequality in which the gap between these two quantities can be used to control the entropic distance between the “fibers” of {X,Y} (in which one conditions {\phi(X)} and {\phi(Y)} to be fixed). On the other hand, there are direct connections between the combinatorial and entropic sumset quantities. For instance, if {U_A} is a random variable drawn uniformly from {A}, then

\displaystyle  \sigma_{\mathrm{ent}}[U_A] \leq \sigma[A].

Thus if {A} has small doubling, then {U_A} has small entropic doubling. In the converse direction, if {X} has small entropic doubling, then {X} is close (in entropic Ruzsa distance) to a uniform random variable {U_S} drawn from a set {S} of small doubling; a version of this statement was proven in a previous paper of mine, but here we establish a quantitatively efficient version, obtained by rewriting the entropic Ruzsa distance in terms of certain Kullback-Leibler divergences.
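As a purely illustrative aside (none of the following code appears in the paper), here is a short Python sketch that evaluates these definitions numerically, with entropy measured in nats and with the sum-based conventions displayed above; the example set {A} and all function names are made up for the illustration. Running it confirms {\sigma_{\mathrm{ent}}[U_A] \leq \sigma[A]} for this particular {A}.

```python
import math
from collections import Counter

def shannon_entropy(dist):
    """Shannon entropy (in nats) of a distribution given as {value: probability}."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def sum_of_independent(dist_x, dist_y):
    """Distribution of X + Y for independent integer-valued X and Y."""
    out = Counter()
    for x, px in dist_x.items():
        for y, py in dist_y.items():
            out[x + y] += px * py
    return out

def entropic_doubling(dist):
    """sigma_ent[X] = exp(H(X_1 + X_2) - H(X)) with X_1, X_2 independent copies of X."""
    return math.exp(shannon_entropy(sum_of_independent(dist, dist)) - shannon_entropy(dist))

def entropic_ruzsa_distance(dist_x, dist_y):
    """d_ent(X, Y) = H(X' + Y') - H(X)/2 - H(Y)/2 with X', Y' independent copies."""
    return (shannon_entropy(sum_of_independent(dist_x, dist_y))
            - 0.5 * shannon_entropy(dist_x) - 0.5 * shannon_entropy(dist_y))

# Illustrative finite set A in Z, its combinatorial doubling, and the uniform variable U_A.
A = [0, 1, 2, 3, 10]
sigma_A = len({a + b for a in A for b in A}) / len(A)   # sigma[A] = |A+A| / |A|
U_A = {a: 1 / len(A) for a in A}

print(entropic_doubling(U_A), "<=", sigma_A)            # sigma_ent[U_A] <= sigma[A]
print(entropic_ruzsa_distance(U_A, U_A))                # entropic Ruzsa distance from U_A to itself
```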

Our first main result is a “99% inverse theorem” for entropic Ruzsa distance: if {d_{\mathrm{ent}}(X,Y)} is sufficiently small, then there exists a finite subgroup {H} of {G} such that

\displaystyle  d_{\mathrm{ent}}(X,U_H), d_{\mathrm{ent}}(Y,U_H) \leq 12 d_{\mathrm{ent}}(X,Y). \ \ \ \ \ (1)

This result uses the facts just mentioned to relate {X,Y} to a set {S} of small doubling, which can then be related to a subgroup {H} by standard inverse theorems; this gives a weak version of (1) (roughly speaking, losing a square root in the bound), and some further analysis is needed to bootstrap this initial estimate back to (1).

We now sketch how these tools are used to prove our main theorem. For (i), we reduce matters to establishing the following bilinear entropic analogue: given two non-empty finite subsets {A,B} of {{\bf Z}^D}, one can find subsets {A' \subset A}, {B' \subset B} with

\displaystyle  |A'| |B'| \geq e^{-C d_{\mathrm{ent}}(U_A, U_B)} |A| |B|

such that {A', B'} have skew-dimension at most {C d_{\mathrm{ent}}(U_A, U_B)}, for some absolute constant {C}. This can be shown by an induction on {D} (say). One applies a non-trivial coordinate projection {\pi: {\bf Z}^D \rightarrow {\bf Z}} to {A,B}. If {\pi(U_A)} and {\pi(U_B)} are very close in entropic Ruzsa distance, then the 99% inverse theorem shows that these random variables must each concentrate at a point (because {{\bf Z}} has no non-trivial finite subgroups), and one can pass to a fiber of these points and use the induction hypothesis. If instead {\pi(U_A)} and {\pi(U_B)} are far apart, then by the behaviour of entropy under projections one can show that the fibers of {A} and {B} under {\pi} are closer on average in entropic Ruzsa distance than {A} and {B} are themselves, and one can again proceed using the induction hypothesis.
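The “behaviour of entropy under projections” used here rests on conditioning identities such as the chain rule {{\bf H}(X) = {\bf H}(\pi(X)) + {\bf H}(X|\pi(X))} (together with submodularity). As a small illustration (not from the paper; the set {A} and all names are arbitrary), the following Python sketch checks this chain rule numerically for a uniform random variable on a subset of {{\bf Z}^2}, with {\pi} the projection to the first coordinate.

```python
import math
from collections import Counter, defaultdict

def entropy(weights):
    """Shannon entropy (nats) of a finite distribution given by nonnegative weights."""
    total = sum(weights)
    return -sum(w / total * math.log(w / total) for w in weights if w > 0)

# Uniform random variable U_A on a small subset A of Z^2, projected to the first coordinate.
A = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 5), (7, 7)]
p = {a: 1 / len(A) for a in A}

H_X = entropy(p.values())
marginal = Counter()
fibers = defaultdict(list)
for (x1, x2), w in p.items():
    marginal[x1] += w
    fibers[x1].append(w)

H_proj = entropy(marginal.values())
# Conditional entropy H(X | pi(X)): average over values of pi(X) of the fiber entropies.
H_cond = sum(marginal[x1] * entropy([w / marginal[x1] for w in ws]) for x1, ws in fibers.items())

# Chain rule: H(X) = H(pi(X)) + H(X | pi(X)); the two printed numbers agree.
print(H_X, H_proj + H_cond)
```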

For parts (ii) and (iii), we first use an entropic version of an observation of Manners that sets of small doubling in {{\bf Z}^D} must be irregularly distributed modulo {2}. A clean formulation of this in entropic language is the inequality

\displaystyle  d_{\mathrm{ent}}(X, 2Y) \leq 5 d_{\mathrm{ent}}(X,Y)

whenever {X,Y} take values in a torsion-free abelian group such as {{\bf Z}^D}; this turns out to follow from two applications of the entropy submodularity inequality. One corollary of this (and the behaviour of entropy under projections) is that

\displaystyle  {\bf H}( X \hbox{ mod } 2 ), {\bf H}( Y \hbox{ mod } 2 ) \leq 10 d_{\mathrm{ent}}(X,Y).

This is the key link between the {{\bf Z}^D} and {{\bf F}_2^D} worlds that is used to prove (ii) and (iii): while (iii) relies on the still unproven PFR conjecture over {{\bf F}_2}, (ii) uses the unconditional progress on PFR by Konyagin, as detailed in this survey of Sanders. The argument has a similar inductive structure to that used to establish (i) (and if one is willing to replace {5/3+o(1)} by {2} then the argument is in fact relatively simple and does not need any deep partial results on the PFR).
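As a quick numerical sanity check of the two displayed inequalities (again not from the paper), the following self-contained Python sketch evaluates both sides for a pair of illustrative integer-valued random variables {X,Y}; entropy is in nats and all names are illustrative.

```python
import math
from collections import Counter

def entropy(dist):
    """Shannon entropy (nats) of {value: probability}."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def add_independent(dist_x, dist_y):
    """Distribution of X + Y for independent integer-valued X, Y."""
    out = Counter()
    for x, px in dist_x.items():
        for y, py in dist_y.items():
            out[x + y] += px * py
    return out

def d_ent(dist_x, dist_y):
    """Entropic Ruzsa distance, with independent copies, as displayed above."""
    return (entropy(add_independent(dist_x, dist_y))
            - 0.5 * entropy(dist_x) - 0.5 * entropy(dist_y))

# Two illustrative integer-valued random variables (torsion-free setting).
X = {n: 0.25 for n in (0, 1, 2, 3)}
Y = {n: 0.2 for n in (0, 1, 2, 3, 10)}
Y_doubled = {2 * n: p for n, p in Y.items()}            # the random variable 2Y
X_mod_2 = Counter()
for n, p in X.items():
    X_mod_2[n % 2] += p

# The displayed inequalities predict:
#   d_ent(X, 2Y) <= 5 * d_ent(X, Y)   and   H(X mod 2) <= 10 * d_ent(X, Y).
print(d_ent(X, Y_doubled), "vs", 5 * d_ent(X, Y))
print(entropy(X_mod_2), "vs", 10 * d_ent(X, Y))
```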

As one byproduct of our analysis we also obtain an appealing entropic reformulation of Conjecture 2, namely that if {X} is an {{\bf F}_2^D}-valued random variable, then there exists a subspace {H} of {{\bf F}_2^D} such that

\displaystyle  d_{\mathrm{ent}}(X, U_H) \ll \sigma_{\mathrm{ent}}[X].

Right now the best result in this direction is

\displaystyle  d_{\mathrm{ent}}(X, U_H) \ll_\varepsilon \sigma_{\mathrm{ent}}[X] + \sigma_{\mathrm{ent}}^{3+\varepsilon}[X]

for any {\varepsilon > 0}, by using Konyagin's partial result towards the PFR.
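For intuition about the quantities appearing in this reformulation, here is a toy Python sketch (not from the paper) that brute-forces over the subspaces {H} of {{\bf F}_2^2} for a given random variable {X}, computing {\min_H d_{\mathrm{ent}}(X, U_H)} alongside {\sigma_{\mathrm{ent}}[X]}; the distribution of {X}, the list of subspaces, and all function names are illustrative assumptions.

```python
import math
from collections import Counter
from itertools import product

def entropy(dist):
    """Shannon entropy (nats) of {value: probability}."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def xor_sum(dist_x, dist_y):
    """Distribution of X' + Y' for independent X', Y' in F_2^2 (componentwise XOR)."""
    out = Counter()
    for x, px in dist_x.items():
        for y, py in dist_y.items():
            out[(x[0] ^ y[0], x[1] ^ y[1])] += px * py
    return out

def d_ent(dist_x, dist_y):
    return entropy(xor_sum(dist_x, dist_y)) - 0.5 * entropy(dist_x) - 0.5 * entropy(dist_y)

def sigma_ent(dist_x):
    return math.exp(entropy(xor_sum(dist_x, dist_x)) - entropy(dist_x))

# All subspaces of F_2^2: the zero subspace, the three lines, and the whole plane.
subspaces = [
    [(0, 0)],
    [(0, 0), (0, 1)],
    [(0, 0), (1, 0)],
    [(0, 0), (1, 1)],
    list(product((0, 1), repeat=2)),
]

# A slightly non-uniform F_2^2-valued random variable X.
X = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}

best = min(d_ent(X, {h: 1 / len(H) for h in H}) for H in subspaces)
print("min_H d_ent(X, U_H) =", best, " sigma_ent[X] =", sigma_ent(X))
```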
