Scalar and fermion contributions to the vacuum energy

Dimitrios Metaxas ([email protected])
Department of Physics, National Technical University of Athens, Zografou Campus, 15780 Athens, Greece

arXiv:1303.0470, 3 Mar 2013

Abstract: I consider a theory of a real scalar and a fermion field, with a Yukawa interaction and a potential term that admits two degenerate minima at the tree level. I calculate the quantum vacuum energy difference between these two vacua and find a finite, non-zero result, with scalar and fermion contributions whose origin and physical significance I discuss.

I will start by reviewing the problem of the vacuum energy for renormalizable quantum field theories, in four-dimensional flat spacetime, that contain a scalar field, φ, endowed at tree level with a standard kinetic term and a general potential term, U(φ), which is bounded below.

(A) If the potential has a single minimum (vacuum) at φ = φ_min, then quantization can be performed around it after expanding

U(φ) = U(φ_min) + (1/2) U″(φ_min) (φ − φ_min)² + · · · ,

discarding the constant term, using the quadratic term to describe a scalar excitation of mass m around the minimum, with m² = U″(φ_min), and treating the higher-order terms in perturbation theory as interactions with the respective coupling constants. The constant term, also called the vacuum energy term, along with the mass and the coefficients of the higher-order interactions, has no meaning at this point; these are bare quantities that get regularized by (infinite) multiplications or subtractions, along with a similar treatment of the kinetic term, in the usual process of renormalization. Associated with this procedure are two parameters, both with dimensions of mass: Λ, which is used to cut off divergent expressions, and µ, which sets the scale at which the physical parameters of the theory, masses and coupling constants, are defined or measured.
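The expansion step above can be checked numerically. A minimal sketch, using an illustrative double-well potential whose parameters (lam, v) are arbitrary sample values, not taken from the text:

```python
import math

# Illustrative double-well potential; lam and v are arbitrary sample values.
lam, v = 0.5, 2.0

def U(phi):
    """U(phi) = (lam/4!) (phi^2 - v^2)^2, with minima at phi = +/- v."""
    return lam / 24.0 * (phi**2 - v**2)**2

def d2U(phi, h=1e-4):
    """Second derivative by central finite differences."""
    return (U(phi + h) - 2.0 * U(phi) + U(phi - h)) / h**2

m2 = d2U(v)                    # numerical m^2 = U''(phi_min)
m2_exact = lam * v**2 / 3.0    # analytic curvature at the minimum

# Near the minimum the quadratic (free-field) term dominates the expansion:
phi = v + 0.01
U_quad = U(v) + 0.5 * m2_exact * (phi - v)**2
print(m2, m2_exact, U(phi) - U_quad)
```

The residual U(φ) − U_quad is of cubic order in (φ − φ_min), which is exactly the part treated as interaction terms in perturbation theory.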
Then one proceeds by calculating, order by order in the perturbation expansion, the various Green's functions of the theory, as well as the related functional expressions of the effective action with the corresponding effective potential [1]. The cut-off, Λ, was just a mathematical convention and should be absent from any final result of these calculations. The theory is defined by specifying the values of the masses and the coupling constants at a scale µ; although the Green's functions and the effective action depend on this scale, any physical result derived from them should be µ-independent. For example: one may measure and define the masses and coupling constants of the theory using scattering experiments at a "reference" scale µ_ref = 1 GeV. Then one may predict and measure the outcomes of experiments at any other scale, say µ_exp = 10 GeV. The result should be the same as what one would have obtained starting from a different µ_ref. This is embodied in the renormalization group formalism and is expressed mathematically by the fact that the total derivative of any physical quantity with respect to µ, given by the sum of the various partial derivatives, must vanish. We see immediately why the constant, vacuum energy, term was discarded: there is no physical process or experiment that depends on it; it can be set to zero, or to any other value if one is not worried about the semiclassical expansion around an infinite constant. Once this is done (here I will consider it set to zero) there is no prediction for a different value, nor can there be any process to verify such a prediction. If one wants to use the renormalization group formalism consistently, however, one must take care of the constant term too; that is, in our case, subtract its value at the minimum at every order in the perturbation expansion of the effective potential [2].
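The µ-independence statement can be illustrated with the textbook one-loop running of a quartic coupling. The beta-function coefficient and the schematic "observable" below are standard φ⁴-theory ingredients used purely as an illustration; they are not taken from this paper:

```python
import math

# One-loop beta function of a quartic coupling: d(lambda)/d(ln mu) = 3*lambda^2/(16*pi^2)
# (the textbook phi^4 value, used here only to illustrate RG invariance).
BETA_COEFF = 3.0 / (16.0 * math.pi**2)

def lam_running(mu, lam_ref, mu_ref):
    """Analytic solution of the one-loop running coupling."""
    return lam_ref / (1.0 - BETA_COEFF * lam_ref * math.log(mu / mu_ref))

def observable(s, mu, lam_ref, mu_ref):
    """A schematic one-loop 'physical' amplitude at energy-squared s.

    The explicit ln(s/mu^2) compensates the mu-dependence of lambda(mu),
    so the total mu-derivative vanishes at this order.
    """
    lam = lam_running(mu, lam_ref, mu_ref)
    return lam + 0.5 * BETA_COEFF * lam**2 * math.log(s / mu**2)

# Define the theory at mu_ref = 1 GeV, then evaluate the same observable
# using two different renormalization scales:
A1 = observable(s=100.0, mu=1.0, lam_ref=0.1, mu_ref=1.0)
A2 = observable(s=100.0, mu=10.0, lam_ref=0.1, mu_ref=1.0)
print(A1, A2)   # agree up to higher-order (O(lambda^3)) terms
```

The two evaluations differ only by two-loop pieces, which is the numerical content of "the total derivative with respect to µ must vanish" at a fixed order.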
When the theory under consideration is coupled to gravity, whether the latter is treated classically or quantized, the value of the vacuum energy becomes a physical observable that can be measured in the cosmological expansion rate and contributes to the cosmological constant [3]. The quantum theory of gravity is not renormalizable; it can be viewed as an effective quantum field theory [4], with a limited range of predictability, as all effective quantum field theories have, and its implications will not be considered here. As far as renormalizable quantum field theories are concerned, there can be no prediction for the vacuum energy defined as the value of the renormalized effective potential at its minimum. It is sometimes argued that the sum of the zero-point energies of the field modes at the minimum contributes

(1/4π²) ∫₀^Λ dk k² √(k² + m²) ≈ Λ⁴/(16π²)    (1)

when a momentum cut-off regularization scheme is employed, or

(µ^(4−d)/(2π)^(d−1)) (1/2) ∫ d^(d−1)k √(k² + m²) ≈ (m⁴/64π²) ln(m²/µ²)    (2)

when dimensional regularization with the minimal subtraction prescription is used. In (2), a fermion field would have given a contribution with the opposite sign, involving, of course, the fermion mass at the minimum. The cut-off, Λ, is usually considered to be related to the Planck or a Grand Unified Theory (GUT) scale, and the scale µ to the radiation associated with the supernova observations or the Cosmic Microwave Background [5]. Although these expressions are suggestive of contributions to the vacuum energy that drive it away from a zero value when non-renormalizable interactions such as gravity are considered, they can hardly be considered a prediction of a renormalizable quantum field theory. Higher energy scales, such as the GUT scale, may or may not leave an imprint on processes at the electroweak scale, depending on the details of the decoupling procedure; none of the contributions, however, may depend explicitly on the cut-off in the way implied by (1).
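Equation (1) is easy to check numerically. The sketch below evaluates the cut-off integral using its exact antiderivative and compares it with the Λ⁴/16π² estimate; the sample values m = 1, Λ = 100 are arbitrary:

```python
import math

def zero_point_integral(cutoff, m):
    """(1/4 pi^2) * integral_0^Lambda dk k^2 sqrt(k^2 + m^2), via the exact antiderivative."""
    def F(k):
        r = math.sqrt(k * k + m * m)
        return k * r**3 / 4.0 - m * m / 8.0 * (k * r + m * m * math.asinh(k / m))
    return (F(cutoff) - F(0.0)) / (4.0 * math.pi**2)

m, cutoff = 1.0, 100.0
exact = zero_point_integral(cutoff, m)
leading = cutoff**4 / (16.0 * math.pi**2)   # the Lambda^4 / 16 pi^2 estimate of Eq. (1)
ratio = exact / leading
print(ratio)   # close to 1 for Lambda >> m
```

The subleading corrections are O(m²/Λ²), which is why the quartic cut-off term dominates for Λ ≫ m.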
As far as the expression in (2) is concerned, one also sees that it cannot, by itself, correspond to a well-defined prediction; it is rather a one-loop result that should be subtracted if the perturbation expansion around the vacuum is to be done consistently.

(B) Let us now consider the case where the potential energy term, U(φ), has, besides the global minimum at φ_min, a second, local minimum at φ_met, such that U(φ_met) > U(φ_min). This local minimum corresponds to a "false", metastable vacuum, and the energy difference between the two vacua is a physical observable that can, in principle, be measured if an appropriate metastable state is prepared. The perturbation expansion of the effective potential must account for this fact; the renormalization group equation [2] will ensure that the vacuum energy difference can be consistently defined. The value of the "true" vacuum energy, however, is still undetermined and can be set to zero; only the energy difference between the two vacua is a meaningful, physical quantity. This vacuum energy difference is also an input of the theory, much like the various masses and coupling constants; it is not a prediction of the quantum field theory. Similar considerations apply when the global minimum of the potential is not present at tree level but is induced by radiative, quantum effects [1]. The dimensionful parameter that defines the location of the absolute minimum and its energy difference with respect to the metastable one is again an input of the theory, although "camouflaged" at the tree level. As an additional, important note for these cases, one should mention that the energy of a metastable state has an imaginary part that is related to the rate of its decay [6]; this is a non-perturbative effect, however, and will not show up at any order of the perturbation expansion.
(C) One may also consider a theory where the potential energy term at tree level has a discrete or continuous family of degenerate minima that are related by a symmetry. Two simple examples to keep in mind involve a complex scalar field with a "Mexican-hat" potential, or a real scalar field with a "reflection" symmetry φ → −φ. Quantization can again be performed by picking one of these minima and following the same procedure as above. The value of the potential at the minimum is again undefined and can consistently be set to zero. Once this is done, by symmetry considerations, the value of the renormalized potential at any other minimum will be zero as well.

(D) Finally, coming to the case that is relevant to the present work, one can imagine a potential term with a set of minima that have the same value of the energy at tree level but are not otherwise related by any symmetry. A simple example would be a potential with two minima at φ₁ and φ₂, such that U(φ₁) = U(φ₂) but U″(φ₁) ≠ U″(φ₂). Then the elementary excitations around each minimum would have different masses. If one were to pick one minimum, say φ₁, to quantize the theory, all the subtractions described before would have to be performed at this point, and the difference of terms such as (2) around the two minima should give a finite, possibly non-zero result for φ₂. This would be a definite prediction for the energy of the second vacuum, similar to well-known phenomena like the Casimir effect [7]. Obviously, it is not possible to have a renormalizable quantum field theory in four dimensions with such a potential term at tree level (it is interesting, however, that the effective potential in the Standard Model allows for the possibility of a second minimum, other than the one at the electroweak scale, close to the Planck scale and degenerate in energy [8]).
Even so, there are other examples where asymmetries between classically degenerate vacua can be seen, and this is investigated further below. In order to examine a simple case of the aforementioned asymmetries, I will consider here a theory with a real scalar and a fermion field with a Yukawa interaction and the Lagrangian

L = (1/2)(∂φ)² − U(φ) + i ψ̄ ∂̸ ψ − g φ ψ̄ψ,    (3)

where the potential term,

U(φ) = (λ/4!) φ² (φ − φ₀)²,    (4)

has two degenerate minima, at φ = 0 and φ = φ₀. There are two sources of asymmetry in this case. First, as is obvious, the fermion acquires a mass, m_f = g φ₀, around the second minimum, while it is massless around the first. Second, the scalar potential is, in fact, asymmetric in field space. The masses of the scalar excitations are the same around the two vacua,

U″(0) = U″(φ₀) = (λ/12) φ₀² ≡ M²;    (5)

since renormalization involves a scale, µ, however, there is a resulting asymmetry between the zero and the non-zero vacuum, depending on where the renormalization conditions are imposed. As a final result, we will therefore find a difference in the renormalized vacuum energies of these two vacua, although they are degenerate at tree level. The effective potential at one loop, after dimensional regularization, is given by the well-known expression

U_eff(φ) = U(φ) + (1/64π²) [ (U″)² (ln(U″/µ²) − 1/2) − 4 g⁴φ⁴ (ln(g²φ²/µ²) − 1/2) ]
         + c₀ + c₁ φ + c₂ φ²/2 + c₃ φ³/3! + c₄ φ⁴/4!.    (6)

I have included the four counterterms, proportional to c₄, c₃, c₂, c₁, in order to impose the four renormalization conditions

U⁗_eff(φ₀) = λ,    (7)
U‴_eff(φ₀) = λφ₀/2,    (8)
U″_eff(φ₀) = M²,    (9)
U′_eff(φ₀) = 0,    (10)

and the constant counterterm, c₀, to account for the vacuum energy. One is only allowed a single counterterm to adjust the latter, and once a condition is imposed at one vacuum there is a definite, possibly non-zero, prediction for the value at the other vacuum.
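The degeneracy statement of Eqs. (4)-(5), together with the fermionic asymmetry m_f = gφ₀, can be verified directly. A minimal sketch with arbitrary sample couplings:

```python
import math

# Arbitrary sample values for the couplings of Eqs. (3)-(4):
lam, phi0, g = 0.5, 1.0, 0.2

def U(phi):
    """Tree-level potential of Eq. (4): two degenerate minima at 0 and phi0."""
    return lam / 24.0 * phi**2 * (phi - phi0)**2

def d2U(phi, h=1e-4):
    """Second derivative by central finite differences."""
    return (U(phi + h) - 2.0 * U(phi) + U(phi - h)) / h**2

M2 = lam * phi0**2 / 12.0          # Eq. (5)

# Degenerate minima with equal scalar curvatures ...
print(U(0.0), U(phi0))             # both vanish at tree level
print(d2U(0.0), d2U(phi0), M2)     # all (numerically) equal

# ... but an asymmetric fermion spectrum, m_f = g*phi at each vacuum:
m_f_first, m_f_second = g * 0.0, g * phi0
print(m_f_first, m_f_second)
```

The scalar sector alone cannot distinguish the two vacua at tree level; only the fermion mass, and (as shown next) the renormalization conditions, break the degeneracy.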
All the other counterterms, from linear to quartic, are allowed, since there is no symmetry, such as reflection with respect to the origin (evenness of the potential), which is usually imposed for simplicity. The linear counterterm is not strictly necessary, since it corresponds merely to a shift of the field; it has been included, however, for clarity. Using it, I have imposed the conditions that keep φ₀ as one of the minima; the second minimum will then be slightly displaced from φ = 0. This effect can also be calculated from the linear term in the potential for small enough values of the couplings; as will be seen shortly, however, it is a subleading effect. I should mention at this point that I consider values of the couplings that do not destroy the vacuum structure of the theory; that is, I take the Yukawa coupling small enough, g² < λ/4, as is required for stability. The four renormalization conditions stated above can be solved to give the four coefficients c₄, c₃, c₂ and c₁, and the final result for the effective potential at one loop, without the c₀ term, is

U_eff(φ) = U(φ) + (1/64π²) [ (U″)² (ln(U″/µ²) − 1/2) − 4 g⁴φ⁴ (ln(g²φ²/(g²φ₀²)) − 1/2) ]
         + (1/64π²) { [ (1/12) λ²φ₀³ ln(M²/µ²) + (3/2) λ²φ₀³ − (32/3) g⁴φ₀³ ] φ
                    − [ (1/3) λ²φ₀² ln(M²/µ²) + (13/4) λ²φ₀² − 24 g⁴φ₀² ] φ²
                    + [ (1/2) λ²φ₀ ln(M²/µ²) + 3 λ²φ₀ − 32 g⁴φ₀ ] φ³
                    − [ (1/4) λ² ln(M²/µ²) + λ² − (44/3) g⁴ ] φ⁴ }.    (11)
Now we have a definite expression for the value of the potential at φ₀:

U_eff(φ₀) = (1/64π²) [ M⁴ (ln(M²/µ²) − 1/2) + (1/4) λ²φ₀⁴ − 2 g⁴φ₀⁴ ].    (12)

The second minimum, as mentioned before, is not located exactly at φ = 0; its position, however, can be calculated in the small-coupling expansion, and the effect of its displacement on the vacuum energy can be seen to be subleading compared to

U_eff(0) = (1/64π²) M⁴ (ln(M²/µ²) − 1/2).    (13)

The displacement of the second minimum from zero can be shown to be of order λφ₀, and the resulting change in the vacuum energy of order λ³φ₀⁴. One has, therefore, a definite prediction for the vacuum energy difference between the two vacua,

δU = U_eff(φ₀) − U_eff(0) = (1/64π²) ( (1/4) λ²φ₀⁴ − 2 g⁴φ₀⁴ ),    (14)

regardless of the choice of c₀ (which can be chosen so as to cancel the term in (13) for consistency [2]). This is a quantum result that was absent at tree level, where one would have to put in by hand the value of the vacuum energy, or indeed any vacuum energy difference between two or more vacua. Before embarking on the discussion of the result, I should mention that, as is well known, there is a region in field space where the final expression for the one-loop effective potential has an imaginary part [9]. It is the region where U″(φ) < 0, and one has to be more careful when deriving physical results associated with this part of field space. Our regions of interest, however, near φ = 0 and φ = φ₀, have no overlap with the problematic region in this case. Now we can proceed to investigate the origins and implications of the final result. As far as the second, fermionic contribution to (14) is concerned, one might have expected the result qualitatively, as well as its sign. It is also interesting, however, that there is a non-zero scalar contribution to the vacuum energy difference. This arises from the fact that the potential is asymmetric with respect to the renormalization conditions imposed.
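A quick numerical check of Eqs. (12)-(14): the individual vacuum energies depend on the scale µ, while their difference does not. The sample couplings below are arbitrary:

```python
import math

def U_eff_min(lam, g, phi0, mu, at_phi0):
    """Renormalized one-loop vacuum energies of Eqs. (12)-(13)."""
    M2 = lam * phi0**2 / 12.0
    base = M2**2 * (math.log(M2 / mu**2) - 0.5) / (64.0 * math.pi**2)
    if at_phi0:
        base += (0.25 * lam**2 - 2.0 * g**4) * phi0**4 / (64.0 * math.pi**2)
    return base

lam, g, phi0 = 0.5, 0.2, 1.0

# The individual vacuum energies depend on mu; their difference does not:
for mu in (0.5, 1.0, 2.0):
    dU = U_eff_min(lam, g, phi0, mu, True) - U_eff_min(lam, g, phi0, mu, False)
    print(mu, dU)

dU_formula = (0.25 * lam**2 - 2.0 * g**4) * phi0**4 / (64.0 * math.pi**2)  # Eq. (14)
print(dU_formula)
```

The µ-dependent pieces of (12) and (13) are identical by construction, which is the numerical counterpart of δU being scale-independent.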
This is true even without the fermion field; a fermion without a mass term at tree level was considered here merely in order to get a simple and suggestive quantitative result. Without the fermion, one can equally well impose the previous renormalization conditions at φ = 0; then the second minimum, near a non-zero φ₀, would show the same effect, that is, an energy difference equal to the first term in (14). This calculation is easy to do and will not be reproduced here. The final result for just the scalar field with the potential term in (4) is that the vacuum at φ₀ in the quantum theory has higher energy than the one at 0 by the amount given by the first term in (14), regardless of where the renormalization conditions are imposed. With the fermion term used here, and the condition g² < λ/4 that has to be fulfilled for stability, one sees that the energy at φ₀ is always higher than that at φ = 0. It should be kept in mind, however, that the model considered is quite simple and that even slightly more elaborate models, with more fermion species or fermion mass terms, will give a more general expression with a greater range of final values. It is important that the final result in (14) is independent of any cut-off or renormalization scale, and is given, as expected, in terms of the parameters that define the theory, couplings and masses. Any "running" from renormalization group effects appears at higher orders, as it should. One may also view the expression derived here as a definite, finite result that comes from keeping the finite factors included in (2) and then taking the difference of two such terms. In any case, it is a prediction of a renormalizable quantum field theory and a purely quantum effect. The fact that the two classically degenerate vacua are energetically inequivalent because of quantum corrections gives this simple model a structure that is richer than expected.
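The sign analysis can be made explicit: within the stability window g² < λ/4 the difference (14) is strictly positive, and it would first vanish at the larger value g² = λ/(2√2), which lies outside the window. A short scan (sample λ, φ₀):

```python
import math

# Arbitrary sample values:
lam, phi0 = 0.5, 1.0

def delta_U(g):
    """Vacuum-energy difference of Eq. (14)."""
    return (0.25 * lam**2 - 2.0 * g**4) * phi0**4 / (64.0 * math.pi**2)

# Scan Yukawa couplings throughout the stability window g^2 < lam/4:
gs = [math.sqrt(lam / 4.0) * i / 100.0 for i in range(1, 100)]
assert all(delta_U(g) > 0 for g in gs)

# delta_U would only change sign at g^2 = lam/(2*sqrt(2)),
# which lies outside the stability window:
g2_zero = lam / (2.0 * math.sqrt(2.0))
print(lam / 4.0, g2_zero)   # the window edge is below the zero crossing
```

This is why, in this simple model, the vacuum at φ₀ is always the higher-energy one for allowed couplings.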
The vacuum with the higher energy, φ₀ in this case, becomes metastable, although it was classically stable. One can accordingly calculate its rate of decay; the appropriate formalism is related to the results of [10], although the physical situation here is different. Since the vacuum energy difference is a quantum effect, the resulting vacuum decay rate is extremely small; it is proportional to the exponential of minus the "bounce" action, which, in our case, turns out to be of order 1/λ². It would be interesting, as a problem for further research, to study the evolution of the vacua and the effective potential in a finite-temperature and cosmological setting, in this or related problems where the breaking or the lack of symmetry plays an important role [11]. As a final note, I should discuss the possibility of a "landscape" of vacua, a large number of which are degenerate, with zero energy at the classical level or even after some quantum corrections have been taken into account. Unless they are all related by the same symmetries, it does not seem possible to have zero energy in all of them once higher-order quantum effects are considered; the energy difference between two adjacent vacua, if one literally translates the results obtained here, would be proportional to powers of coupling constants times their distance in field space. This suggests an attractive scenario: if the value of the vacuum energy of a particular minimum is fixed for some reason to be zero, the value of the vacuum energy of any nearby minimum will be a highly suppressed and calculable number.
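The size of the suppression can be illustrated by taking the quoted scaling B ∼ 1/λ² at face value; kappa below is a hypothetical O(1) constant standing in for the detailed bounce calculation, not a value from the text:

```python
import math

# Schematic suppression of the false-vacuum decay rate per unit volume,
# Gamma/V ~ A * exp(-B), with a bounce action B = kappa / lambda^2 as quoted.
# kappa is a hypothetical O(1) placeholder for the actual calculation.
kappa = 1.0

def log10_suppression(lam):
    """log10 of the exponential suppression factor exp(-kappa/lambda^2)."""
    B = kappa / lam**2
    return -B / math.log(10.0)

for lam in (0.5, 0.3, 0.1):
    print(lam, log10_suppression(lam))
```

Already for λ = 0.1 and kappa = 1 the exponential factor is of order 10⁻⁴³, so the metastable vacuum is extremely long-lived for weak coupling.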
One frequently encounters the problem, however, that some of the interactions involved, in this or other physically important situations, are non-renormalizable, the most important example being the gravitational interaction. When these are treated as effective quantum field theories [4], instead of the coupling-constant expansion that was the basic tool of renormalizable theories, one has an expansion in powers of the energy, and it is possible that well-defined results for the vacuum energy, or for energy differences, exist in these situations as well. It would be interesting, therefore, as a subject of future work, to consider the results of similar considerations in effective quantum field theories.

Acknowledgements: This work was completed while visiting the National Technical University of Athens. I would like to thank the people of the Physics Department for their hospitality.

References

S. Coleman, Aspects of Symmetry, Cambridge Univ. Press (1985).
S. Coleman and E. J. Weinberg, Phys. Rev. D7, 1888 (1973).
E. J. Weinberg, hep-th/0507214.
C. Ford, D. R. T. Jones, P. W. Stephenson and M. B. Einhorn, Nucl. Phys. B395, 17 (1993).
M. B. Einhorn and D. R. T. Jones, JHEP 0704, 051 (2007).
S. Weinberg, Rev. Mod. Phys. 61, 1 (1989).
T. Padmanabhan, Phys. Rep. 380, 235 (2003).
J. F. Donoghue, Phys. Rev. D50, 3874 (1994).
M. M. Anber, J. F. Donoghue and M. El-Houssieny, Phys. Rev. D83, 124003 (2011).
J. F. Donoghue, arXiv:1209.3511 [gr-qc].
J. Martin, Comptes Rendus Physique 13, 566 (2012).
I. L. Shapiro and J. Sola, arXiv:0808.0315 [hep-th].
S. Coleman, Phys. Rev. D15, 2929 (1977).
A. D. Linde, Nucl. Phys. B216, 421 (1983).
G. Plunien, B. Muller and W. Greiner, Phys. Rept. 134, 87 (1986).
K. A. Milton, S. A. Fulling, P. Parashar, A. Romeo, K. V. Shajesh and J. A. Wagner, J. Phys. A41, 164052 (2008).
M. Sher, Phys. Rept. 179, 273 (1989).
C. D. Froggatt and H. B. Nielsen, Phys. Lett. B368, 96 (1996).
D. L. Bennett and H. B. Nielsen, Int. J. Mod. Phys. A9, 5155 (1994).
J. Elias-Miro, J. R. Espinosa, G. F. Giudice, G. Isidori, A. Riotto and A. Strumia, Phys. Lett. B709, 222 (2012).
F. Bezrukov, M. Yu. Kalmykov, B. A. Kniehl and M. Shaposhnikov, JHEP 1210, 140 (2012).
I. Masina, arXiv:1209.0393 [hep-ph].
F. Bezrukov, G. K. Karananas, J. Rubio and M. Shaposhnikov, arXiv:1212.4148 [hep-ph].
R. Armillis, A. Monin and M. Shaposhnikov, arXiv:1302.5619 [hep-th].
R. Jackiw, Phys. Rev. D9, 1686 (1974).
E. J. Weinberg and A. Wu, Phys. Rev. D36, 2474 (1987).
E. J. Weinberg, Phys. Rev. D47, 4614 (1993).
D. Metaxas and E. J. Weinberg, Phys. Rev. D53, 836 (1996).
D. Metaxas, Phys. Rev. D63, 083507 (2001).
D. Metaxas, Phys. Rev. D75, 047701 (2007).
J. Alexandre, Int. J. Mod. Phys. A26, 4523 (2011).
J. Alexandre and A. Tsapalis, Phys. Rev. D87, 025028 (2013).
K. Farakos, Int. J. Mod. Phys. A27, 1250168 (2012).
K. Farakos and D. Metaxas, Phys. Lett. B711, 76 (2012).
THE EXTERNAL-INTERNAL GROUP QUOTIENT STRUCTURE FOR THE STANDARD MODEL IN ANALOGY TO GENERAL RELATIVITY

Heinrich Saller
Max-Planck-Institut für Physik und Astrophysik, Werner-Heisenberg-Institut für Physik, München

arXiv:hep-th/9805052v1, 11 May 1998

Abstract: In analogy to the class structure GL(IR⁴)/O(1,3) for general relativity, with a local Lorentz group as stabilizer and a basic tetrad field for the parametrization, a corresponding class structure GL(ℂ²)/U(2) is investigated for the standard model, with a local hyperisospin group U(2). The lepton, quark, Higgs and gauge fields used in the standard model cannot be basic in a coset interpretation; they may be taken as first-order terms in a flat-spacetime, particle-oriented expansion of a basic field (the analogue of the tetrad) and of its products.

The Coset Structure in Relativity

Usually, general relativity, as the dynamics of a metric on a Lorentz manifold, is characterized with concepts from differential geometry. To prepare a comparison of relativity and the standard model from a common coset point of view, I present in this section the well-known [16] Lorentz-group class structure of relativity in a more algebraically oriented language. Special relativity distinguishes a Lorentz group O(1,3), with its causal-order-preserving orthochronous subgroup SO⁺(1,3), as the invariance group of a symmetric² pseudometric g with signature (1,3) on a real 4-dimensional vector space IM ≅ IR⁴ of spacetime translations (Minkowski space):

g : IM ∨ IM −→ IR,  sign g = (1,3),  g(v,w) = g(w,v),
O(1,3) ∋ Λ : IM −→ IM  ⇐⇒  g = g ∘ (Λ ∨ Λ).

The inverse metric is used for the dual³ energy-momentum space IM^T,

g⁻¹ : IM^T ∨ IM^T −→ IR,  g⁻¹ = g⁻¹ ∘ (Λ ∨ Λ)⁻¹ᵀ,

on which the contragredient representation Λ⁻¹ᵀ acts.
A Lorentz metric induces an isomorphism⁴ between translations and energy-momenta,

g : IM −→ IM^T,  v ↦ g(v, ·),  g = g^T.

It defines⁵ a linear g-involution (Lorentz 'conjugation') f ↔ f^g for all endomorphisms f : IM −→ IM of the translations,

f^g = g⁻¹ ∘ f^T ∘ g,  (f^g)^g = f,  and for all v, w ∈ IM:  g(v, f(w)) = g(f^g(v), w).

The g-invariance Lorentz group is defined by g-unitarity⁶:

Λ ∈ O(1,3)  ⇐⇒  Λ^g = Λ⁻¹.

Footnotes:
² For a vector space V, the totally symmetric and antisymmetric tensor product subspaces are denoted V ∨ V and V ∧ V, respectively, in the 2nd tensor power V ⊗ V; correspondingly for higher powers, e.g. V ∨ V ∨ V and V ∧ V ∧ V in V ⊗ V ⊗ V, etc.
³ V^T denotes the algebraic dual, with the linear forms on the vector space V; f^T : W^T −→ V^T is the dual (transposed) linear mapping for f : V −→ W. For finite dimensions, the linear mappings {f : V −→ W} are naturally isomorphic to the tensor product W ⊗ V^T.
⁴ The sloppy notation g : IM ∨ IM −→ IR and g : IM −→ IM^T, with the same symbol g ∈ IM^T ∨ IM^T, should not lead to confusion.
⁵ All diagrams are commutative.
⁶ Any involutive antiautomorphism a of a group G, with (g^a)^a = g ∈ G and (gh)^a = h^a g^a, defines the associated unitary subgroup U(G,a) = {g^a = g⁻¹}. The inversion is the canonical antiautomorphism. In the quotient G/U(G,a) the unitary group is the stabilizer [17].
The invariance Lorentz Lie algebra⁷ is g-antisymmetric and, therefore, as a vector space, isomorphic to the antisymmetric square of the translations:

l ∈ log O(1,3)  ⇐⇒  l^g = −l,
IR¹⁶ ≅ IM ⊗ IM^T ⊃ log O(1,3) ≅ IM ∧ IM ≅ IR⁶.

There is a manifold (symmetric space) GL(IR⁴)/O(1,3) of Lorentz groups in the general linear group of a real 4-dimensional vector space, as illustrated by the different invariance groups of the three metric matrices [3,5] in one reference basis of the translations, the general symmetric form being

g ≅ ( g₀₀ g₀₁ g₀₂ g₀₃ )
    ( g₀₁ g₁₁ g₁₂ g₁₃ )
    ( g₀₂ g₁₂ g₂₂ g₂₃ )
    ( g₀₃ g₁₃ g₂₃ g₃₃ ).

After the Stern-Gerlach experiment, leading to the introduction of the spin operations with half-integer SU(2)-quantum numbers, spacetime also has to come with a local 'half-integer' Lorentz structure SL(ℂ²). The tetrad field, introduced by Weyl [19] as the basic field for general relativity, maps a real 4-dimensional differentiable spacetime manifold D, parametrized with four real coordinates (x^µ), µ = 0,...,3, in IR⁴, into the real 10-dimensional manifold of metrics. It gives an isomorphism between the tangent space, definable by the derivations der C(x) = IM(x) ≅ IR⁴ of the differentiable functions at each spacetime point x ∈ D, and one reference translation space IM(0) ≅ IR⁴ with metric g(0):

h(x) : IM(x) −→ IM(0),  h⁻¹ᵀ(x) : IM^T(x) −→ IM^T(0).

Therewith all multilinear⁸ structures of IM(0) and IM(x) are bijectively related to each other, e.g. the metric and its invariance group:

g(x) = g(0) ∘ (h ∨ h)(x),  Λ(x) = h⁻¹(x) ∘ Λ(0) ∘ h(x).

Footnotes:
⁷ The Lie algebra [4,10] of a Lie group G is denoted log G, which is also reminiscent of log ∼ lag ∼ Lie algebra.
⁸ (IM ⊗ IM^T)(x) = IM(x) ⊗ IM^T(x), (IM ∨ IM)^T(x) = IM^T(x) ∨ IM^T(x), etc., are vector subspaces of the local tensor algebra over (IM ⊕ IM^T)(x) = IM(x) ⊕ IM^T(x).
With dual (IM(x), IM^T(x))-bases, e.g. {∂_µ, dx^µ}, one obtains as tensor components

h(x) ∼ h^j_µ(x),  h⁻¹(x) ∼ h^µ_j(x) = ε^{µνρλ} ε_{jikl} h^i_ν h^k_ρ h^l_λ / (3! det h)(x),
g(0) ∼ η_{jk},  g⁻¹(0) ∼ η^{jk},
g(x) ∼ g_{µν}(x) = η_{jk} h^j_µ h^k_ν(x),  g⁻¹(x) ∼ g^{µν}(x).

With the Lorentz-metric-induced isomorphisms between the tangent space and its dual, these relations can be written in the form

g(0) ∘ h(x) ∼ η_{jk} h^k_µ(x) = h_{jµ}(x),  g⁻¹(0) ∘ h⁻¹ᵀ(x) ∼ η^{jk} h^µ_k(x) = h^{µj}(x).

Because of the invariance of the local metric under the local Lorentz transformations,

g(x) = g(x) ∘ (Λ ∨ Λ)(x) = g(0) ∘ (h ∨ h)(x) ∘ (Λ ∨ Λ)(x),

the tetrad field, as coset representative, is determined only up to local Lorentz transformations:

Λ(x) ∈ O(1,3)(x) :  h(x) −→ h(x) ∘ Λ(x).

This Lorentz gauge freedom of the tetrad is made compatible with the translations (local derivations) by using an O(1,3)-gauge field O(x), a linear mapping from the translations into the Lorentz Lie algebra log O(1,3)(0) of the reference space:

O(x) : IM(x) −→ (IM ⊗ IM^T)(0),  O(x) ∼ O_µ{}^i{}_j(x).

Because of the Lorentz invariance of the metric, a gauge field is g(0)-antisymmetric:

O(x) : IM(x) −→ (IM ∧ IM)(0),  g⁻¹(0) ∘ O(x) ∼ η^{ik} O_µ{}_k{}^j(x) = O^{ij}_µ(x) = −O^{ji}_µ(x).

General relativity uses no fundamental O(1,3)-gauge field, but a 'composite' one: the local Lorentz freedom for the tetrad defines the tetrad-induced gauge field O(x) = O(h)(x) by requiring a covariantly constant tetrad,

Dh(x) : IM(x) ⊗ IM(x) −→ IM(0),
Dh(x) = ∂h(x) − h ∘ Γ(x) − O ∘ h(x) = 0,
D_µ h^i_ν(x) = ∂_µ h^i_ν(x) − h^i_λ Γ^λ_{µν}(x) − O_µ{}^i{}_j h^j_ν(x) = 0,

with a manifold connection Γ(x).
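The relation g_{µν} = η_{jk} h^j_µ h^k_ν and the Lorentz gauge freedom of the tetrad can be checked in plain matrix form. A sketch assuming the gauge action is implemented as multiplication by a reference-space Lorentz matrix; the tetrad itself is a random sample:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # reference metric g(0), signature (1, 3)

rng = np.random.default_rng(0)
h = np.eye(4) + 0.1 * rng.standard_normal((4, 4))   # a generic sample tetrad h^j_mu

# g_{mu nu} = eta_{jk} h^j_mu h^k_nu, i.e. g = h^T eta h in matrix form:
g = h.T @ eta @ h

# A reference-space Lorentz boost, satisfying Lambda^T eta Lambda = eta:
chi = 0.7
boost = np.eye(4)
boost[0, 0] = boost[1, 1] = np.cosh(chi)
boost[0, 1] = boost[1, 0] = np.sinh(chi)

# Replacing h by (boost @ h) leaves the induced metric unchanged:
g_gauged = (boost @ h).T @ eta @ (boost @ h)
print(np.max(np.abs(g - g_gauged)))   # ~0: the metric is gauge invariant
```

This is the matrix-level content of "the tetrad is determined only up to local Lorentz transformations": the whole Lorentz orbit of tetrads induces one and the same metric.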
A covariantly constant tetrad leads, with g(x) = g(0) ∘ (h ∨ h)(x), to a covariantly constant metric:

Dh(x) = 0  ⇒  Dg(x) = 0 = D_µ g_{νρ}(x) = ∂_µ g_{νρ}(x) − Γ^λ_{µν} g_{λρ}(x) − Γ^λ_{µρ} g_{νλ}(x).

If the log GL(IR⁴)-valued connection is assumed to be g(x)-symmetric (torsion-free manifold), it is expressible through the tetrad and its derivative:

Γ^λ_{µν}(x) = Γ^λ_{νµ}(x)  ⇒  Γ^λ_{µν}(x) = (g^{λρ}/2)(∂_µ g_{νρ} + ∂_ν g_{µρ} − ∂_ρ g_{µν})(x).

Therewith, the tetrad-induced O(1,3)-gauge field is determined:

O(x) = h⁻¹ ∘ (∂h − h ∘ Γ)(x),
O^{ij}_µ(x) = h^{νi}(∂_µ h^j_ν − h^j_λ Γ^λ_{µν})(x)
            = (1/2) h^{λi} h^{νj} (h_{µk} ∂_{[λ} h^k_{ν]} + h_{λk} ∂_{[µ} h^k_{ν]} − h_{νk} ∂_{[µ} h^k_{λ]})(x).

The tetrad-induced O(1,3)-curvature field relates the antisymmetric square of the tangent space and the local Lie algebra log O(1,3)(x) ≅ (IM ∧ IM)(x) to the antisymmetric square of the reference space and the reference Lie algebra:

R(x) : (IM ∧ IM)(x) −→ (IM ∧ IM)(0),  R^{ij}_{µν}(x) = ∂_{[µ} O^{ij}_{ν]}(x) − O^{ik}_{[µ} η_{kl} O^{lj}_{ν]}(x),
R(x) : (IM ⊗ IM^T)(x) −→ (IM ⊗ IM^T)(0),  R(x) = g(0) ∘ R ∘ g⁻¹(x) ∼ R^j{}_k{}^λ{}_µ(x) = η_{ki} R^{ij}_{µν} g^{νλ}(x).

With the tetrad isomorphisms, it can be related to a transformation of the reference Lorentz Lie algebra:

R ∘ (h ∧ h)⁻¹(x) : (IM ∧ IM)(0) −→ (IM ∧ IM)(0),  R ∘ (h ∧ h)⁻¹(x) ∼ R^{ij}_{µν} h^µ_k h^ν_l(x),
R ∘ (h ⊗ h⁻¹)(x) : (IM ⊗ IM^T)(0) −→ (IM ⊗ IM^T)(0),  R ∘ (h ⊗ h⁻¹)(x) ∼ R^{ij}_{µν} h^µ_k h^ν_l(x).

The coupling of the curvature R to the tetrad h ⊗ h⁻¹ determines the familiar 2nd-order derivative action:

A(h, ∂h) = ∫ det h(x) d⁴x  tr R ∘ (h ⊗ h⁻¹)(x),
tr R ∘ (h ⊗ h⁻¹)(x) = R^{ij}_{µν} h^µ_i h^ν_j(x) = R^j{}_k{}^λ{}_µ h^µ_j h^k_λ(x) = tr R ∘ (h ∧ h)⁻¹(x).

The integration over the manifold uses the invariant volume element:

d⁴x −→ (ε_{jikl} h^j_µ h^i_ν h^k_ρ h^l_λ / 4!)(x) dx^µ ∧ dx^ν ∧ dx^ρ ∧ dx^λ.

A Lorentz group O(1,3) is a semidirect product ×̃ of a reflection group⁹ II(2) = {±1}, e.g.
a time reflection, and its special normal subgroup SO(1, 3), which by itself is the direct product of the spacetime translations reflection group {±1_4} ≅ II(2) and its orthochronous group SO^+(1, 3)

O(1, 3) ≅ II(2) ⋉ SO(1, 3) ≅ II(2) ⋉ [II(2) × SO^+(1, 3)]

The general linear group g ∈ GL(IR^4) contains, via the modulus of the 4th root of the determinant |det g|^{1/4}, the abelian dilatation group D(1_4) = 1_4 exp IR as a direct factor, with the other factor UL(IR^4) (unimodular linear group) containing the elements with |det g| = 1

GL(IR^4) = D(1_4) × UL(IR^4)
UL(IR^4) ≅ II(2) ⋉ SL(IR^4) ≅ II(2) ⋉ [II(2) × SL_0(IR^4)]

SO^+(1, 3) = SO_0(1, 3) and SL_0(IR^4) are the connection components of the group unit in O(1, 3) and UL(IR^4) resp. and the adjoint groups^10 of SO(1, 3) and SL(IR^4) resp. The tetrad manifold is the product of the dilatation group and the quotient of the connection components of the units

GL(IR^4)/O(1, 3) ≅ D(1) × SL_0(IR^4)/SO^+(1, 3)

The real 9-dimensional manifold SL_0(IR^4)/SO^+(1, 3) is the manifold of nontrivial natural order structures v ≻ 0 on the translations IM ≅ IR^4 as induced by the natural order of the scalars IR: A natural translation order has to be characterized by IR-multilinear forms; the even-linear forms characterize the pairs (≻, ≺), consisting of an order and its reverse. Only the signature (1, 3)-bilinear forms g define nontrivial order pairs: v ≻ 0 or v ≺ 0 ⟺ g(v, v) ≥ 0. The orbit {h^{-1}(x) • O(1, 3)(0) • h(x) | h(x) ∈ GL(IR^4)}

^9 II(n) = {z ∈ IC | z^n = 1} designates the n-th cyclotomic group.
^10 The adjoint group of a group G consists of its classes G/centr G with respect to the centrum.
^11 A cyclic representation generates by its tensor products all representations (up to equivalence).
^12 The IR^4-volume element is a symmetric bilinear form ε(4) ∼ ε_ijkl = ε_klij with signature (3, 3) on IR^4 ∧ IR^4 ≅ IR^6.
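The direct factorization GL(IR^4) = D(1_4) × UL(IR^4) can be made concrete numerically. This is my own illustration, assuming only a generic invertible matrix: every g splits uniquely into a dilatation |det g|^{1/4} 1_4 and a unimodular factor.

```python
# Numerical illustration of GL(R^4) = D(1_4) x UL(R^4): every invertible g
# splits into a dilatation |det g|^{1/4} * 1_4 times a factor u with |det u| = 1.
import numpy as np

rng = np.random.default_rng(1)
g = rng.normal(size=(4, 4))
assert abs(np.linalg.det(g)) > 1e-12   # a generic real matrix is invertible

scale = abs(np.linalg.det(g)) ** 0.25  # the D(1_4) factor
u = g / scale                          # the UL(R^4) factor
assert np.isclose(abs(np.linalg.det(u)), 1.0)
assert np.allclose(scale * u, g)
print("g = |det g|^{1/4} * u with |det u| = 1")
```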
13 The representations [2J L |2J R ] ⊕ [2J R |2J L ] are decomposable as complex representations. dimensional representations, those with equal integer or half-integer 'left' and 'right' spin numbers J L = J R = J = 0, 1 2 , 1, . . . and those with different 'left' and 'right' spin numbers J L = J R , but integer sum irrep SO + (1, 3) = {[2J|2J]    2J = 0, 1, . . .} ∪ {[2J L |2J R ] ⊕ [2J R |2J L ]     2J L,R = 0, 1, . . . , J L = J R , J L + J R = 0, 1, . . .} The dimensions for the representation spaces are J L = J R = J : dim I R [2J|2J] = (2J + 1) 2 J L = J R : dim I R ([2J L |2J R ] ⊕ [2J R |2J L ]) = 2(2J L + 1)(2J R + 1) All representations are selfdual, i.e. they have an SO + (1, 3)-invariant bilinear form, symmetric as tensor product of the Lorentz metric. The equivalence classes of the irreducible real finite dimensional representations [6,8] of the special group SL 0 ( IR 4 ), locally isomorphic to SO(3, 3), with a simple rank 3 Lie algebra, are built by three fundamental representations, the real 4-dimensional cyclic representations [1, 0, 0] and [0, 0, 1], dual to each other, and the real 6-dimensional representation [0, 1, 0] ∼ = [1, 0, 0] ∧ [1, 0, 0], selfdual with the volume form ǫ(4) irrep SL 0 ( IR 4 ) = {[n 1 , n 2 , n 3 ]    n 1,2,3 = 0, 1, . . .} dim I R [n 1 , n 2 , n 3 ] = (n 1 +1)(n 3 +1)(n 2 +1)(n 1 +n 2 +2)(n 3 +n 2 +2)(n 1 +n 3 +n 2 +3) 2!3! The three natural numbers in [n 1 , n 2 , n 3 ] are the linear combination coefficients of the dominant representation weight from the three fundamental weights. The real 15-dimensional adjoint representation is [1, 0, 1]. 
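The dimension formula for the irreducible real SL_0(IR^4)-representations [n_1, n_2, n_3] quoted above can be checked against all the representations used in the text. The check below is mine, not from the paper.

```python
# Check of dim [n1, n2, n3] = (n1+1)(n3+1)(n2+1)(n1+n2+2)(n3+n2+2)(n1+n2+n3+3) / (2! 3!)
def dim_sl4(n1, n2, n3):
    return ((n1 + 1) * (n3 + 1) * (n2 + 1)
            * (n1 + n2 + 2) * (n3 + n2 + 2)
            * (n1 + n2 + n3 + 3)) // (2 * 6)   # 2! * 3!

assert dim_sl4(1, 0, 0) == 4      # fundamental (cyclic) representation
assert dim_sl4(0, 0, 1) == 4      # its dual
assert dim_sl4(0, 1, 0) == 6      # selfdual, [1,0,0] ^ [1,0,0]
assert dim_sl4(2, 0, 0) == 10     # tangent space of the metric manifold
assert dim_sl4(0, 2, 0) == 20     # curvature
assert dim_sl4(1, 0, 1) == 15     # adjoint representation
print("all quoted representation dimensions match")
```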
The decomposition of the SL_0(IR^4)-representations into SO^+(1, 3)-representations is given for the simplest cases, relevant in relativity

SL_0(IR^4)   dimension                 SO^+(1, 3)
[1,0,0]      4                         [1|1]
[0,0,1]      4                         [1|1]
[0,1,0]      6 = C(4,2)                [2|0] ⊕ [0|2]
[2,0,0]      10 = C(4+1,2)             [0|0] ⊕ [2|2]
[0,0,2]      10 = C(4+1,2)             [0|0] ⊕ [2|2]
[0,2,0]      20 = C(6+1,2) − 1         [0|0] ⊕ [2|2] ⊕ [4|0] ⊕ [0|4]
[1,0,1]      15 = 4² − 1 = C(6,2)      [2|2] ⊕ [2|0] ⊕ [0|2]

The tangent space of the tetrad (metric) manifold is the quotient of the corresponding Lie algebras

log GL(IR^4)/log O(1, 3) ≅ IM ∨ IM ≅ IR^10

It carries the irreducible representation [2, 0, 0] of SL_0(IR^4). The curvature R_µνκλ(x) = R^ij_µν h_iκ h_jλ(x), with its familiar (anti)symmetry properties as a traceless element of (IM ∧ IM)(x) ∨ (IM ∧ IM)(x), transforms with the 20-dimensional representation [0, 2, 0], the symmetric Ricci tensor R_µλ(x) = R_µνκλ g^νκ(x) with the 10-dimensional [2, 0, 0]. In general, a representation ψ of a group quotient G/U will be defined as a mapping from the classes, ψ : G/U → V_U ⊗ V_G^T, into the linear mappings ψ_gU : V_G → V_U of two vector spaces with linear representations of the groups involved, G → GL(V_G) and U → GL(V_U). If the vector spaces are isomorphic, V_G ≅ V_U ≅ V, the mappings ψ_gU ∈ GL(V) are assumed to be isomorphisms. The tetrad h(x), h^{-1}(x) ∈ GL(IR^4)

N   Grassmann power ∧^N IM    field                   SO^+(1, 3)      SL_0(IR^4)
0   IR                        id_IR ∼ 1               [0|0]           [0, 0, 0]
1   IM ≅ IR^4                 h(x) ∼ h^j_µ(x)         [1|1]           [1, 0, 0]
2   IM ∧ IM ≅ IR^6            R(x) ∼ R^ij_µν(x)       [2|0] ⊕ [0|2]   [0, 1, 0]
3   IM ∧ IM ∧ IM ≅ IR^4       h^{-1}(x) ∼ h^µ_j(x)    [1|1]           [0, 0, 1]
4   IR                        det h(x)                [0|0]           [0, 0, 0]

Lorentz and special linear representation properties of the relativity fields

The Grassmann degree N is the D(1)-grading, by Weyl [20] called 'weight of a tensor density'.
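The dimension bookkeeping of these branchings can be verified directly; this is my own check, not part of the paper. With dim_R [2J|2J] = (2J+1)^2 and 2(2J_L+1)(2J_R+1) for a pair [2J_L|2J_R] ⊕ [2J_R|2J_L], the SO^+(1, 3) pieces must sum to the SL_0(IR^4) dimensions. Note in particular that the traceless adjoint [1, 0, 1] can contain no [0|0] singlet, since [2|2] ⊕ [2|0] ⊕ [0|2] already gives 9 + 6 = 15.

```python
# Bookkeeping check of the SL_0(R^4) -> SO+(1,3) branching dimensions.
def d_eq(tJ):            # real dim of [2J|2J]
    return (tJ + 1) ** 2

def d_pair(tJL, tJR):    # real dim of the pair [2JL|2JR] + [2JR|2JL]
    return 2 * (tJL + 1) * (tJR + 1)

assert d_eq(1) == 4                               # [1|1]       <- [1,0,0], [0,0,1]
assert d_pair(2, 0) == 6                          # [2|0]+[0|2] <- [0,1,0]
assert d_eq(0) + d_eq(2) == 10                    # [0|0]+[2|2] <- [2,0,0], [0,0,2]
assert d_eq(0) + d_eq(2) + d_pair(4, 0) == 20     # <- [0,2,0]
assert d_eq(2) + d_pair(2, 0) == 15               # [2|2]+[2|0]+[0|2] <- [1,0,1]
print("branching dimensions consistent")
```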
The Scales for Relativity The rank of the symmetric space GL( IR 4 )/O(1, 3) (tetrad or metric manifold) will be defined as the difference 4 − 2 of the ranks for the 'nominator' and 'denominator' Lie algebra rank I R GL( IR 4 )/O(1, 3) = 2, rank I R D(1) = 1 The rank gives the number of invariants for the representations of the manifold -one abelian invariant for D(1) and one simple invariant for the quotient SL 0 ( IR 4 )/SO + (1, 3). Those invariants can be used as overall normalization and relative space-time normalization resp. or as fundamental intrinsic length scale ℓ (Newton's constant) and fundamental velocity scale c g(x) ∼ = ℓ 2 c 1 c 0 0 −c1 3 = h(ℓ, c) 1 0 0 −1 3 h T (ℓ, c) h(ℓ, c) = ℓ c 0 0 ℓ1 3 with ℓ, c > 0 The abelian invariant is given by the determinant of the tetrad h(ℓ, c) or, in the Lie algebra, by the trace, the simple invariant arises from the 'double 14 IM is isomorphic as vector space, not as associative algebra, to the Clifford algebra over IM. trace' as familiar from the Killing form and the quadratic Casimir element for semisimple Lie algebras h(ℓ, c) = exp l(ℓ, c),    det h(ℓ, c) = exp tr l(ℓ, c) = ℓ 4 c exp 4 tr l(ℓ,c)•l(ℓ,c)−( tr l(ℓ,c)) 2 3 = 1 c The flat spacetime expansion for general relativity uses the 10-dimensional tangent space of the tetrad manifold. It expands the GL( IR 4 )-tetrad with its Lie algebra around a reference Lorentz group O(1, 3). A tetrad from the unit connection component GL 0 ( IR 4 ) = D(1 4 ) × SL 0 ( IR 4 ) can be written with an exponent h(x) = exp l(x), l(x) ∈ log GL( IR 4 ) Because of the local invariance, the Lie algebra element l(x) is determined up to gauge translations l(x) + log O(1, 3)(x). The flat spacetime expansion is characterized by h(x) = 1 4 + l(x) + . . . , h j µ (x) = δ k µ [δ j k + l j k (x) + . . .] 
The Operation Groups of the Standard Model

Before trying an interpretation with coset structures also for the standard model of the electroweak and strong interactions, its relevant operational symmetries will be summarized. The standard model implements the electroweak and strong interactions as gauge structures, relating the spacetime translations to the internal transformation groups

hypercharge: U(1), isospin: SU(2), colour: SU(3)

In the lepton, quark, Higgs and gauge fields, the internal groups meet with the external transformation groups^15, as given by the quantum numbers in the following table [15]

field              symbol  U(1)   SU(2)  SU(3)       U(1)   SL(IC^2)
Ψ                          [y]    [2T]   [C_1,C_2]   [c]    [2J_L|2J_R]
left lepton        l       −1/2   [1]    [0,0]       1/2    [1|0]
right lepton       e       −1     [0]    [0,0]       3/2    [0|1]
left quark         q       1/6    [1]    [1,0]       −1/2   [1|0]
right down quark   d       −1/3   [0]    [1,0]       1/2    [0|1]
right up quark     u       2/3    [0]    [1,0]       −3/2   [0|1]
Higgs              H       −1/2   [1]    [0,0]       1      [0|0]
hypercharge gauge  A       0      [0]    [0,0]       0      [1|1]
isospin gauge      B       0      [2]    [0,0]       0      [1|1]
colour gauge       G       0      [0]    [1,1]       0      [1|1]

quantum numbers of the standard model fields

dim_IC [2J_L|2J_R] = (2J_L + 1)(2J_R + 1), 2J_{L,R} = 0, 1, ...
dim_IC [2T] = 2T + 1, 2T = 0, 1, ...
dim_IC [C_1, C_2] = (C_1 + 1)(C_2 + 1)(C_1 + C_2 + 2)/2, C_{1,2} = 0, 1, ...

Fields and antifields have reflected quantum numbers

Ψ with [y||2T; C_1, C_2] • [c||2J_L|2J_R]
Ψ* with [−y||2T; C_2, C_1] • [−c||2J_R|2J_L]

The chirality property [c] will be discussed below in more detail. The gauge interaction of the fermion fields is effected by the local Lie algebra invariants^16 (current-gauge field products)

g_1 J(1)A + g_2 J(2)B + g_3 J(3)G

for U(1): J(1) = (1/6)[q* 1_6 q − 2 d* 1_3 d − 3 l* 1_2 l + 4 u* 1_3 u − 6 e* e]
for SU(2): J(2) = (1/2)[q* τ ⊗ 1_3 q + l* τ l]
for SU(3): J(3) = (1/2)[q* 1_2 ⊗ λ q + d* λ d + u* λ u]

involving as a basis e.g. the three Pauli and eight Gell-Mann matrices τ = (τ^a)^3_{a=1} and λ = (λ^c)^8_{c=1} resp.
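The dimension formulas quoted for the representation labels can be checked against the field content of the table. This small check is my own addition.

```python
# Check of the dimension formulas for the standard model representation labels.
def dim_lorentz(tJL, tJR):        # complex dim of [2JL|2JR] for SL(C^2)
    return (tJL + 1) * (tJR + 1)

def dim_isospin(tT):              # dim of [2T] for SU(2)
    return tT + 1

def dim_colour(C1, C2):           # dim of [C1, C2] for SU(3)
    return (C1 + 1) * (C2 + 1) * (C1 + C2 + 2) // 2

assert dim_lorentz(1, 0) == 2     # left handed Weyl doublet (l, q)
assert dim_lorentz(1, 1) == 4     # Lorentz vector (gauge fields A, B, G)
assert dim_isospin(1) == 2        # isodoublet (l, q, H)
assert dim_isospin(2) == 3        # isotriplet (B)
assert dim_colour(1, 0) == 3      # colour triplet (quarks)
assert dim_colour(1, 1) == 8      # colour octet (gluons G)
print("label dimensions match the field content")
```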
The coupling constants g^2_{1,2,3} > 0 are the normalizations of the corresponding Lie algebras [15]. At face value, the relevant group seems to be a product of five unrelated direct factors

U(1) × SU(2) × SU(3) (internal) × U(1) × SL(IC^2) (external)

A closer look, however, suggests a common origin for all those groups: the three internal factors are related to each other, as are the two external ones, and, highly interesting, there exists also an internal-external correlation. In general, a standard model field does not represent faithfully all operations. If a group G is represented, the faithfully represented group is the quotient G/N, consisting of classes with respect to the trivially represented invariant subgroup N ⊆ G. To find those groups in the standard model, one has to consider the four central correlations of its operation group [9, 15]. The two internal correlations connect hypercharge with both isospin and colour: the colourless fields l, e, H, A and B show a (half)integer hypercharge-(half)integer isospin correlation; the isospin-less fields u, d and G show an II(3) correlation. Therefore the faithfully represented groups arise from the full unitary groups U(n) for n = 2, 3. U(n) is a product, not direct, of two normal subgroups with II(n) as discrete intersection^17.
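The non-direct product structure U(n) = U(1_n) • SU(n), with central intersection II(n), can be illustrated numerically. The sketch below is my own and assumes a generic unitary: any u ∈ U(2) splits into a phase times a special unitary, with the phase fixed only up to an n-th root of unity.

```python
# Illustration of U(n) = U(1_n) . SU(n) with intersection II(n) = centr SU(n):
# u = e^{i phi} s with det s = 1, phi ambiguous modulo 2 pi / n.
import numpy as np

def random_unitary(n, seed):
    a = (np.random.default_rng(seed).normal(size=(n, n))
         + 1j * np.random.default_rng(seed + 1).normal(size=(n, n)))
    q, r = np.linalg.qr(a)
    return q * (np.diag(r) / abs(np.diag(r)))   # phase-fixed, Haar-like unitary

n = 2
u = random_unitary(n, seed=2)
phi = np.angle(np.linalg.det(u)) / n            # one choice of n-th root
s = np.exp(-1j * phi) * u
assert np.isclose(np.linalg.det(s), 1.0)        # s lies in SU(n)

# the other choice of root differs by the central element -1_2 of II(2):
s2 = np.exp(-1j * (phi + 2 * np.pi / n)) * u
assert np.isclose(np.linalg.det(s2), 1.0)
assert np.allclose(s2, -s)                      # the II(2) ambiguity
print("u = phase * special unitary, phase ambiguous by II(2)")
```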
Its quotient groups are the phase group U(1 n ) = 1 n exp i IR and the adjoint group SU(n)/ II(n) U(n) = U(1 n ) • SU(n) U(1 n ) ∩ SU(n) = centr SU(n) ∼ = II(n) ⇒ U(n) ∼ = U(1)×SU(n) I I(n) normal subgroup U(1) SU(n) quotient group SU(n)/ II(n) U(1) internal operation groups from U(n), n = 2, 3 Furthermore, the internal colour and isospin properties of the left handed quark field q show that the internal faithfully represented group, defined in U(6), is a product of three normal subgroups with an II(2) × II(3) ∼ = II(6) correlation U(2 × 3) = U(1 6 ) • [SU(2) ⊗ 1 3 × 1 2 ⊗ SU(3)] U(1 6 ) ∩ [SU(2) ⊗ 1 3 ] ∼ = II(2) U(1 6 ) ∩ [1 2 ⊗ SU(3)] ∼ = II(3) ⇒ U(2 × 3) ∼ = U(1)×SU(2)×SU(3) I I(2)× I I(3) normal subgroup U(1) SU(2) SU(3) quotient group SO(3) × SU(3)/ II(3) U(3) U(2) normal subgroup U(2) U(3) SU(2) × SU(3) quotient group SU(3)/ II(3) SO(3) U(1) internal operation groups from U(2 × 3) The external correlation is seen in the fact that halfinteger spin J L + J R comes with halfinteger chirality number c and integer J L + J R with integer c. Therefore, the faithfully represented external group is the unimodular group UL(2) = {g ∈ GL( I C 2 )    | det g| = 1} (phase Lorentz group). Its quotient 17 The somewhat ambiguous notation G 1 ×G 2 H denotes a common normal subgroup H ⊆ G 1 ∩ G 2 in contrast to e.g. G 1 × G 2 /H. groups are the phase group (chirality group) and the orthochronous Lorentz group as adjoint group UL(2) = U(1 2 ) • SL( I C 2 ) U(1 2 ) ∩ SL( I C 2 ) = centr SL( I C 2 ) ∼ = II(2) ⇒ UL(2) ∼ = U(1)×SL( I C 2 ) I I(2) ⇒ UL(2)/SL( I C 2 ) ∼ = U(1)/ II(2) ∼ = U(1) UL(2)/U(1 2 ) ∼ = SL( I C 2 )/ II(2) ∼ = SO + (1, 3) normal subgroup U(1) SL( I C 2 ) quotient group SO + (1, 3) U(1) external operation groups from UL (2) Before discussing the internal-external correlation, the standard model fields will be arranged with respect to the external and internal quotient groups of UL(2) and U(2 × 3) resp. 
they are representing faithfully

                 UL(2)   U(1)_ext   SO^+(1, 3)
U(2)             l       H          ×
U(1)_int         e       −          ×
U(2×3)           q       −          ×
U(3)             d, u    −          ×
SO(3)            ×       ×          B
{1}              ×       ×          A
SU(3)/II(3)      ×       ×          G

faithfully represented homogeneous groups in the standard model

Some entries are missing: First of all, there are no coloured Lorentz scalar fields, analogous to the Higgs isodoublet. Secondly: A field of the standard model has nontrivial hypercharge if, and only if, it has nontrivial chirality. The chirality (U(1)_ext) number c is determined from the Yukawa interaction

(µ_e e* l + µ_u q* u + µ_d d* q)H + h.c. with Yukawa couplings µ_{e,u,d} ∈ IR

With an integer c_H for the Higgs field, the chiral numbers for the quark fields q, d, u and for the lepton fields l, e are given up to integers z_q and z_l resp.

         U(1)_int   U(1)_ext               U(1)_ext with           U(1)_ferm
         y          c                      c_H = 1, z_{q,l} = 0    f = −c − 2y
l        −1/2       1/2 + z_l              1/2                     1/2
e        −1         1/2 + z_l + c_H        3/2                     1/2
q        1/6        −1/2 + z_q             −1/2                    1/6
d        −1/3       −1/2 + z_q + c_H       1/2                     1/6
u        2/3        −1/2 + z_q − c_H       −3/2                    1/6
H        −1/2       c_H                    1                       0
A, B, G  0          0                      0                       0

hypercharge, chirality and fermion numbers for the standard model fields

The choice of the three integers c_H, z_l, z_q is not obvious. z_l and z_q will be determined by opposite chirality and hypercharge for the lepton isodoublet field l and opposite chirality and threefold hypercharge for the quark isodoublet field q

c_l = −y_l, c_q = −3y_q ⟹ z_l, z_q = 0

The chirality c_H for the Higgs field is determined in such a way that the hypercharge-chirality combination (fermion number) f = −c − 2c_H y, trivial for the Higgs field, gives a ratio 1 : 3 for quark and lepton fields

f_l = 3 f_q ⟺ c_l + 2c_H y_l = 3(c_q + 2c_H y_q) ⟹ c_H = 1

Those conditions will be discussed in sections 4 and 6. Both U(1)'s, chirality and hypercharge, have to be represented in the only one phase group of a field.
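The internal consistency of this table can be checked with exact rational arithmetic; the check below is my own, using the quantum numbers quoted above with c_H = 1 and z_l = z_q = 0.

```python
# Consistency check of the hypercharge/chirality/fermion-number table:
# with c_H = 1 and z_l = z_q = 0, the fermion number f = -c - 2y is
# 1/2 on leptons, 1/6 on quarks and 0 on bosons, so f_l = 3 f_q.
from fractions import Fraction as F

c_H = 1
y = {'l': F(-1, 2), 'e': F(-1), 'q': F(1, 6), 'd': F(-1, 3),
     'u': F(2, 3), 'H': F(-1, 2), 'A': 0, 'B': 0, 'G': 0}
c = {'l': F(1, 2), 'e': F(1, 2) + c_H, 'q': F(-1, 2),
     'd': F(-1, 2) + c_H, 'u': F(-1, 2) - c_H, 'H': c_H,
     'A': 0, 'B': 0, 'G': 0}
f = {k: -c[k] - 2 * y[k] for k in y}

assert c['l'] == -y['l'] and c['q'] == -3 * y['q']     # the z_l = z_q = 0 conditions
assert all(f[k] == F(1, 2) for k in ('l', 'e'))        # leptons
assert all(f[k] == F(1, 6) for k in ('q', 'd', 'u'))   # quarks
assert f['H'] == 0 and f['A'] == f['B'] == f['G'] == 0 # bosons
assert f['l'] == 3 * f['q']                            # the condition fixing c_H = 1
print("fermion numbers: leptons 1/2, quarks 1/6, bosons 0")
```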
The combination of chirality and hypercharge with trivial value for the Higgs field defines a fermion number group U(1) which correlates external and internal U(1) U(1) ext ⊂ UL(2) U(1) int ⊂ U(2 × 3) , U(1) ferm ∼ = U(1)ext×U(1) int U(1) f = −c − 2y =      Symmetries for Particles One has to make a clear distinction between the operation group (symmetry) for fields and the operation group (symmetry) for particles [21]: Going from the standard model fields for the description of the dynamics to the in-and outfields for the description of particles, the homogeneous real 18-dimensional Lie group U(2×3)×UL (2) U (1) with both external and internal operations is dramatically reduced. With colour confinement and ground state frozen electroweak symmetries there remains from the 12-dimensional U(2 × 3) only a 1-dimensional abelian U(1)-symmetry, faithfully represented by particles with nontrivial electromagnetic charge or fermion number, e.g. by the electron or the neutron. The establishment of a laboratory distinguishes a reference rest system and reduces the 6-dimensional external Lorentz group operations SL( I C 2 ) for fields in the case of massive halfinteger and integer spin particles to a faithfully represented 3-dimensional group SU(2) and SU(2)/ II(2) ∼ = SO(3) resp. Massless particles represent faithfully only a 1-dimensional polarization subgroup SO(2) ∼ = U(1) ⊂ SU(2), which -possibly reflecting the external-internal U(1)correlation -are all chargeless, e.g. the photon and the neutrinos D(1) U(1) ⊂ SU(2) U(1) U(1) particle symbol mass spin polarization el.mgn. 
direction (helicity) charge massive electron e ∓ m e + 1 2 , − 1 2 − ∓1 electron neutrino ν e , ν e 0 − ±1 0 charged weak boson W ± m W +1, 0, −1 − ±1 neutral weak boson Z m Z +1, 0, −1 − 0 photon γ 0 − ±1 0 particles from standard fields The Coset Structure in the Standard Model After the coset formulation for relativity in sections 1,2,3 and the exposition of the standard model operation groups in section 4, I come to the main purpose of this paper. An attempt to characterize the standard model for the electroweak and strong interactions with coset structures and symmetric spaces in analogy to relativity encounters characteristic differences: Relativity is a real theory with orthogonal groups and bilinear forms (metrics) whereas the standard model and quantum theory come in a complex formulation with unitary groups and sesquilinear forms (scalar products, probability amplitudes). The local operation Lorentz group O(1, 3) for relativity has no true normal Lie subgroup whereas the internal standard model operation group U(2 × 3) has the normal Lie subgroups U(1) (hypercharge), SU(2) (isospin) and SU(3) (colour). The main apparent obstacle for a symmetric space interpretation for the standard model is the colour group SU(3): It prevents a naive embedding of the internal group U(2 × 3) as subgroup of the external phase Lorentz group UL(2) -as compared to the tetrad manifold quotient structure GL( IR 4 )/O (1, 3). Therefore, Weinberg's 'Model of Leptons' [18] is considered first: There, the colourless group U(2)×UL (2) U (1) with hyperisospin and phase Lorentz group is represented by the lepton fields l, e, the hypercharge and isospin gauge fields A, B and the Higgs field H. 
A group U(2) (hyperisospin) is the invariance group of a definite scalar product d for a complex 2-dimensional vector space IU ≅ IC^2

d : IU × IU → IC, d(v, v) > 0 ⟺ v ≠ 0, d(v, w) = d(w, v)* (complex conjugate)

U(2) ∋ u : IU → IU ⟺ d = d ∘ (u × u)

A scalar product d for quantum theory is the analogue of a signature (1, 3) metric g of the real translation vector space IM ≅ IR^4 in relativity with O(1, 3)-invariance (section 1). A scalar product defines a conjugation f ↔ f* for all linear mappings f : IU → IU

for all v, w ∈ IU: d(v, f(w)) = d(f*(v), w), f** = f

with u ∈ U(2) ⟺ u* = u^{-1} and l ∈ log U(2) ⟺ l = −l*. Antilinear structures like a sesquilinear complex scalar product d are more complicated than linear ones. In general, for a complex linear space IU ≅ IC^n, one has to consider the complex quartet^18 of associated vector spaces IU, IU^T, IU*, IU*^T ≅ IC^n, consisting of space, dual space, antispace and dual antispace resp. [2, 7], to take care of the conjugations in a basis independent form. The canonical IC-conjugation defines canonical antilinear isomorphisms between antispaces, IU ≅ IU* and IU^T ≅ IU*^T. With an additional vector space conjugation, i.e. an antilinear isomorphism between duals, d : IU → IU^T, v ↦ d(v, ·),

d ≅ ( d_0 + d_3     d_1 − i d_2
      d_1 + i d_2   d_0 − d_3 ) ≻ 0 ⟺ d = d* and tr d, det d > 0, i.e.
d j ∈ IR and d 0 , d 2 = d 2 0 − d 2 > 0 In analogy to α = ǫ(α)α for a positive number α > 0, the positivity of the matrix d is expressable with its signature ǫ(d) = ǫ(d 0 )ϑ(d 2 ) The full linear group GL( I C 2 ) is the direct product of its dilatation group D(1 2 ) = 1 2 exp IR and its unimodular group UL(2) GL( I C 2 ) = D(1 2 ) × UL(2) D(2) = GL( I C 2 )/U(2) ∼ = D(1) × SD(2) The spacetime manifold D(2) involves as direct nonabelian factor the real 3-dimensional boost manifold SD(2) = UL(2)/U(2) ∼ = SL( I C 2 )/SU(2) ∼ = SO + (1, 3)/SO(3) The IM ∼ = D(2) = exp IM, IR 4 ∼ = exp IR 4 x = x * = x 0 + x 3 x 1 − ix 2 x 1 + ix 2 x 0 − x 3 d(x) = exp x = (cosh | x| + σ x | x| sinh | x|) exp x 0 = d 0 (x) + d 3 (x) d 1 (x) − id 2 (x) d 1 (x) + id 2 (x) d 0 (x) − d 3 (x) In the special manifold factors SL 0 ( IR 4 )/SO + (1, 3) (manifold of natural orders) and SL( I C 2 )/SU(2) (manifold of conjugations), the orthogonal stability group SO + (1, 3) has a signature (1, 3) invariant Lorentz form g on the translations IM ∼ = IR 4 whereas the unitary group SU(2) has, in addition to an invariant scalar product d on U ∼ = I C 2 , an invariant antisymmetric bilinear form ǫ(v, w) = −ǫ(w, v) ('spinor metric'). The I C 2 -volume form ǫ is invariant also with respect to SL( I C 2 ), it leads to the bilinear symmetric orthochronous SO + (1, 3)-forms g ∼ = ǫ ⊗ ǫ −1 . No SL 0 ( IR 4 )-invariant bilinear form exists on the translations IM. Spacetime as Basic Field Quantization In analogy to the relativity tetrad h as basic representation of the real 10-dimensional metric manifold GL( IR 4 )/O (1, 3), a basic field ψ is introduced as fundamental representation for the real 4-dimensional manifold GL( I C 2 )/U(2) of scalar products. 
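Two statements in this passage can be verified numerically: the eigenvalues of d = d_0 1_2 + d⃗·σ are d_0 ± |d⃗|, so positivity is equivalent to tr d > 0 and det d = d_0^2 − d⃗^2 > 0; and the closed form exp x = (cosh|x⃗| + (σ·x⃗/|x⃗|) sinh|x⃗|) e^{x_0} for hermitian x. The sketch is my own, with arbitrarily chosen sample values.

```python
# Check of the positivity criterion for d = d0 + vec d . sigma and of the
# closed form for exp x on hermitian 2x2 matrices.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], complex)]

def mat(x0, xv):
    """x0 * 1_2 + xv . sigma (hermitian for real x0, xv)."""
    return x0 * np.eye(2) + sum(xi * si for xi, si in zip(xv, sigma))

x0, xv = 0.3, np.array([0.4, -0.2, 0.7])   # sample values (assumption)
r = np.linalg.norm(xv)

# positivity: eigenvalues of d0 + xv.sigma are d0 +- |xv|
d = mat(1.0, xv)                           # d0 = 1 > |xv|, so d is positive
evals = np.linalg.eigvalsh(d)
assert np.allclose(evals, [1.0 - r, 1.0 + r])
assert np.trace(d).real > 0 and np.linalg.det(d).real > 0

# closed form for exp x, compared with exp via eigendecomposition
x = mat(x0, xv)
w, v = np.linalg.eigh(x)
expx = v @ np.diag(np.exp(w)) @ v.conj().T
closed = (np.cosh(r) * np.eye(2) + np.sinh(r) / r * mat(0.0, xv)) * np.exp(x0)
assert np.allclose(expx, closed)
print("positivity criterion and exp closed form verified")
```

The closed form follows from (x⃗·σ)^2 = x⃗^2 1_2, which terminates the power series of the exponential after the even and odd parts.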
It associates to each point of the real 4-dimensional spacetime D = D(2), parametrizable with d(x) = exp x for x ∈ IR 4 , a class representative ψ : D(2) −→ GL( I C 2 ), d(x) ∼ = x −→ ψ(x) With the basic field ψ, a complex vector space I U(x) ∼ = I C 2 at each spacetime point can be related to a reference space. ψ * (x) gives an isomorphism between the reference antispace I U * (0) and the antispace I U * (x) ψ(x) : I U(x) −→ I U(0), ψ * −1 (x) : I U * (x) −→ I U * (0) with the scalar products d(x) ( I U × I U)(x) −→ I C (ψ×ψ)(x)     id I C ( I U × I U)(0) −→ I C d(0) , d(x) = d(0) • (ψ × ψ)(x) Bases are given with α, A = 1, 2 ψ(x) ∼ ψ α A (x) ∼ ψ T (x), ψ * −1 (x) ∼ ψ * Ȧ β (x) = δ αβ ψ * α A (x)δ AȦ ∼ ψ * −1T (x) d(0) ∼ δ αβ d(x) ∼ d AB (x) = δ βα ψ * α A ψ β B (x) ∼ = dȦ B (x) = d AB (x)δ AȦ = ψ * Ȧ β ψ β B (x) The basic fields ψ and ψ * transform under the two conjugated fundamental complex 2-dimensional UL (2) . Such an expansion in the standard model is performed by the transition from the operation group representing fields for the dynamics to the tangent particle fields (in-and out fields) involving the dramatic symmetry reduction mentioned above and requires the definition of a ground state and a reference system (spontaneous symmetry breakdown). By an expansion of the coset representative ψ in flat spacetime IM ∼ = IR 4 with the standard model lepton fermion field l ψ(x) = l(x) + . . . , ψ α A (x) = l α A (x) + . . . ψ * (x) = l * (−x) + . . . , ψ * Ȧ α (x) = l * Ȧ α (−x) + . . . the spacetime defining scalar product can be related to the anticommutator quantization condition 19 log d(x) = {ψ * (x), ψ(x)} = {l * (−x), l(x)} + . . . = xǫ(x 0 )δ ′ (x 2 ) + . . . log dȦ B (x) = {ψ * Ȧ β (x), ψ β B (x)} = {l * Ȧ α (−x), l α B (x)} + . . . = xȦ B ǫ(x 0 )δ ′ (x 2 ) + . . . 
with x = x * ∼ xȦ B = (σ j )Ȧ B x j = x 0 + x 3 x 1 − ix 2 x 1 + ix 2 x 0 − x 3 With the canonically quantized flat space standard model fields alone a coset interpretation breaks down at this point. A quantization involving lightcone supported distributions does not allow an interpretation as a spacetime dependent scalar product d(x). Additional nonparticle contributions [13,14] can lead to an expansion for the basic field quantization without lightcone supported distribution log d(x) = {ψ * (x), ψ(x)} = xǫ(x 0 )ϑ(x 2 ) + . . . The parametrization of the spacetime manifold D(2) is effected by the quantization of the basic field ψ. (3) U(2)-gauge fields G associate to each spacetime point an isomorphism to a reference tensor space The Scales for the Standard Model u ∈ U(2) : u ⊗ u : I U ⊗ I U * −→ I U ⊗ I U * u ⊗ u ∼ = id I C ⊕ O 3 (u) ∈ {1} ⊕ SOG(x) : ( I U ⊗ I U * )(x) −→ ( I U ⊗ I U * )(0) G(x) = (ψ ⊗ ψ * )(x) = A(x) ⊕ B(x) GḂ α Aβ (x) = ψ α A ψ * Ḃ β (x) = (σ j )Ḃ A [δ α β A j (x) + τ α β B j (x)] Therewith, the manifold of (1 ⊕3)-decomposable 4-dimensional isospin SO(3)representations on the tensor product is considered in the orthochronous Lorentz group SO + (1, 3) ∼ = UL(2)/U(1). The hyperisospin U(2)-gauge fields of the standard model might be taken as one term in the particle oriented flat spacetime approximation A j (x) = 1 4 ψ α A δ β α (σ j ) Ȧ B ψ * Ḃ β (x) = A j (x) + . . . B j (x) = 1 4 ψ α A τ β α (σ j ) Ȧ B ψ * Ḃ β (x) = B j (x) + . . . In general, the standard model fields seem to be the particle related and ground state respecting contributions in a flat spacetime expansion for the more basic fields ψ, ψ * which parametrize the U(2)-operations in UL(2) ⊂ GL( I C 2 ) acting on the tensor powers of the vector spaces I U, I U * . This is done for the basic space I U with faithful hyperisospin U(2) action by the standard lepton field ψ = l + . . . 
and for the tensor space I U ⊗ I U * with adjoint isospin group U(2)/U(1)-action by the standard hypercharge and isospin gauge fields ψ ⊗ ψ * = A ⊕ B + . . .. The Grassmann Algebra for Spacetime The local Grassmann algebra IM ∼ = IR 16 over the translations at each point of the spacetime manifold in relativity has as analogue the local Grassmann algebra over I U ⊕ I U * ∼ = I C 4 for the standard model. In contrast to the translations IM ∼ = IR 4 , the vector space I U ∼ = I C 2 does not arise as a tangent space. The totally antisymmetric tensor powers N ( I U ⊕ I U * ) with Grassmann degree N = 0, 1, 2, 3, 4 carry all fundamental representations of hyperisospin U(2) and its quotient groups U(1) and SO(3). Their direct sum constitutes the complex Grassmann (exterior) algebra [11,12] ( I U ⊕ I U * ) = GRASS ∼ = I C 16 N subspaces of N ( I U ⊕ I U * ) ∼ = I C ( 4 N ) faithfully represented internal group faithfully represented external group 0 I C {1} {1} 1 I U, I U * ∼ = I C 2 U(2) UL(2) 2 I U ⊗ I U * ∼ = I C 4 I U ∧ I U, I U * ∧ I U * ∼ = I C SO(3) U(1) SO + (1, 3) U(1) 3 I U ⊗ I U * ∧ I U * ∼ = I C 2 I U ∧ I U ⊗ I U * ∼ = I C 2 U(2) UL(2) 4 ( I U ∧ I U) ⊗ ( I U ∧ I U) * ∼ = I C {1} {1} U(2)-and UL(2)-properties of the Grassmann algebra GRASS With the basic fermion field ψ the internal hyperisospin U(2)-properties of a reference Grassmann algebra for ( I U ⊕ I U * )(0) are considered in the external Lorentz phase group UL(2)-properties of a Grassmann algebra for ( I U⊕ I U * )(x) (ψ ⊕ ψ * )(x) : GRASS(x) −→ GRASS(0) with isomorphism between corresponding vector subspaces with an corresponding external and internal representation structure N basic field U(2) = U(1 2 ) • SU(2) [y||2T ] UL(2) = U(1 2 ) • SL( I C 2 ) [c||2J L |2J R ] 0 id I C [0||0] [0||0|0] 1 ψ(x), ψ * (x) [− 1 2 ||1], [+ 1 2 ||1] [+ 1 2 ||1|0], [− 1 2 ||0|1] 2 (ψ ⊗ ψ * )(x) (ψ ∧ ψ)(x), (ψ ∧ ψ) * (x) [0||0] ⊕ [0||2] [∓1||0] [0||1|1] [±1||0|0] 3 (ψ ⊗ ψ * ∧ ψ * )(x) (ψ ∧ ψ ⊗ ψ * )(x) [+ 1 2 ||1] [− 1 2 
||1] [− 1 2 |1||0] [+ 1 2 |0||1] 4 (ψ ∧ ψ) ⊗ (ψ ∧ ψ) * (x) [0||0] [0||0|0] quantum numbers of the basic field products A basic field ψ, quantized with anticommutators, cannot imbed the U(1)properties of I U ∧ I U ∼ = I C with Grassmann degree N = 2, since the scalar combination vanishes ψ ∧ ψ(x) : ψ α A ǫ AB ǫ αβ ψ β B (x) = 1 2 ǫ AB ǫ αβ {ψ α A (x), ψ β B (x)} = 0 Only the combination leading to an SU(2)-triplet is nontrivial ψ ∧ ψ(x) ∼ ψ α A ǫ AB τ αβ ψ β B (x), τ αβ = ǫ αγ τ γ β = τ βα Therewith one has to consider four types of nontrivial fields -two fermionic fields with odd Grassmann degree 1 and 3 and two bosonic fields with even Grassmann degree 2 and 4. Only N = 1, 2, 3 characterize nontrivial symmetric spaces and representations of the nonabelian boost manifold (conjugation manifold) UL(2)/U(2) N z 2 basic field manifold representation 1 ∓ 1 2 ψ(x), ψ * (x) UL(2)/U(2) 2 0 (ψ ⊗ ψ * )(x) SO + (1, 3)/{1} ⊕ SO(3) 3 ± 1 2 (ψ ⊗ ψ * ∧ ψ * )(x) (ψ ∧ ψ ⊗ ψ * )(x) UL(2)/U(2) 4 0 (ψ ∧ ψ) ⊗ (ψ ∧ ψ) * (x) {1} UL(2)/U(2)-representations by basic field products In addition to the D(1)-grading with the natural number Grassmann degree N ∈ IN, a Grassmann algebra over a selfdual complex space I U ⊕ I U * ∼ = I C 2n has a U(1)-grading with z ∈ Z Z 2n+1 . The U(1)-property defines the hypercharge and chirality Z Z 5 -grading with y, c = z 2 = 0, ± 1 2 , ±1 GRASS = 2 z=−2 I U (z) ,                I U (0) = I C ⊕ [ I U ⊗ I U * ] ⊕ [ I U ∧ I U ⊗ I U * ∧ I U * ] ∼ = I C 6 I U (1) = I U * ⊕ [ I U ⊗ I U * ∧ I U * ] ∼ = I C 4 I U (−1) = I U ⊕ [ I U ∧ I U ⊗ I U * ] ∼ = I C 4 I U (2) = I U * ∧ I U * ∼ = I C I U (−2) = I U ∧ I U ∼ = I C A basic theory for the symmetric space GL( I C 2 )/U(2) has to use only the field ψ in analogy to the tetrad h for minimal relativity GL( IR 4 )/O(1, 3). The standard model is not basic in this sense. 
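The claim that the scalar combination ε_AB ε_αβ ψ^α_A ψ^β_B vanishes identically for an anticommuting field, while the symmetric τ-combination survives, can be verified with a toy Grassmann algebra. This is my own sketch: the generator labelling and the choice of the symmetric form ε·τ^1 = diag(1, −1) are illustrative assumptions.

```python
# Toy Grassmann-algebra check: for four anticommuting generators psi[a][A]
# (labelled 0..3 by 2*a + A), the eps.eps combination vanishes, the
# symmetric eps.tau combination does not.
import itertools
import numpy as np

def grassmann_mul(m1, m2):
    """Multiply two monomials (sign, tuple-of-generators); sort picks up signs."""
    s1, g1 = m1
    s2, g2 = m2
    gens = list(g1 + g2)
    if len(set(gens)) < len(gens):
        return (0, ())                       # repeated generator -> zero
    sign = s1 * s2
    for i in range(len(gens)):               # bubble sort, one sign per swap
        for j in range(len(gens) - 1 - i):
            if gens[j] > gens[j + 1]:
                gens[j], gens[j + 1] = gens[j + 1], gens[j]
                sign = -sign
    return (sign, tuple(gens))

eps = np.array([[0, 1], [-1, 0]])            # antisymmetric spinor metric
tau_sym = np.array([[1, 0], [0, -1]])        # eps.tau^1, a symmetric isospin form

def combine(iso_form):
    """Sum over eps_AB * iso_form[a,b] * psi^a_A psi^b_B, as a monomial dict."""
    total = {}
    for a, A, b, B in itertools.product(range(2), repeat=4):
        coeff = eps[A, B] * iso_form[a, b]
        s, g = grassmann_mul((1, (2 * a + A,)), (1, (2 * b + B,)))
        if coeff * s:
            total[g] = total.get(g, 0) + coeff * s
    return {g: c for g, c in total.items() if c != 0}

assert combine(eps) == {}                    # scalar combination vanishes
assert combine(tau_sym) != {}                # triplet combination survives
print("eps.eps combination = 0, eps.tau combination != 0")
```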
But, at least, the correspondence between the relevant basic field products and the effective particle oriented standard fields can be found. External-Internal Cosets in the Lepton Model The 'colourless' standard model, i.e. without quark and gluon fields, parametrizes all nontrivial external-internal or internal-external cosets, G ext /G int and G int /G ext resp., which are possible with the UL(2) and U(2)-representations in the Grassmann algebra UL(2) U(1) SO + (1, 3) U(2) l H × U(1) e − × {1} ⊕ SO(3) × × A ⊕ B One has to consider the possibilities to embed into each other the nontrivial external groups UL (2) The fields in the diagonal, the 2 × 2 lepton fields l (isodoublet Lorentzdoublet) and the 4×4 gauge fields A⊕B (U(2)-quartet Lorentz-vector) connect spaces with equal dimensions. The pair (H, e) in the skew-diagonal with the 2 × 1 Higgs fields H (isodoublet Lorentz-scalar) and the 1 × 2 lepton fields e (isosinglet Lorentz-doublet) come together as a 'doublet property swapping pair' [y||2T ] • [c||2J L |2J R ] =      [1||0] • [− 3 2 ||1|0] for e * [− 1 2 ||1] • [1||0|0] for H [ 1 2 ||1] • [− 1 2 ||1|0] for e * ⊗ H The internal SU(2) for the Higgs Lorentz singlet field H corresponds to the external SU(2) ⊂ SL( I C 2 ) for the lepton isosinglet The SU(2)-swapping pair can arise from the isomorphisms for the tensors of Grassmann degree 3 χ(x) : ( I U ⊗ I U * ∧ I U * )(x) −→ ( I U ⊗ I U * ∧ I U * )(0) χ(x) = (ψ ⊗ ψ * ∧ ψ * )(x) ∼ χ α A (x) = ψ β A τ α β ψ * Ċ γ ǫĊḊ τ γδ ψ * Ḋ δ (x) as a particle oriented twofold factorization in the flat spacetime expansion (ψ ⊗ ψ * ∧ ψ * )(x) = (e * ⊗ H)(x) + . . . , χ α A (x) = ǫ AB e * B H α (x) + . . . (ψ ∧ ψ ⊗ ψ * )(x) = (H * ⊗ e)(x) + . . . , χ * Ȧ α (x) = ǫȦḂH * α eḂ(x) + . . . Quark Fields as Grassmann Roots The main problem for an interpretation of the standard model in the framework of a basic GL( I C 2 )/U(2) coset structure are the coloured fields, the quark fields q, d, u and the gluon fields G. 
The only natural relation of U(2 × 3) to U(2) seems to arise in the Grassmann algebra GRASS ∼ = I C 16 over I U ⊕ I U * ∼ = I C 4 which gives rise to two types of faithful U(2)-representations with Grassmann degree N = 1 and N = 3 which may reflect colour singlet and colour triplet properties resp. In analogy to the representation of I U⊗( I U∧ I U) * with the Higgslepton two factor product e * ⊗ H the quarks may arise from a parametrization with a three factor product I U ⊗ I U * ∧ I U * : U(2) ⊗ U(2)) ∧ U(2) ∼ = U(2) ⊗ U(1) ∼ = U (2) UL(2) ⊗ UL(2)) ∧ UL(2) ∼ = UL(2) ⊗ U(1) ∼ = UL (2) taking care of the GL( I C) = D(1) × U(1)-properties given by the two gradings of the Grassmann algebra. Originally, the quarks were introduced as 'cubic root'-representations of the nucleons with colour SU(3) as gauge group for the strong interactions. As seen in the standard model central correlation II(3) ∼ = SU(3) ∩ U(1 3 ) (section 4), a colour SU(3)-property with nontrivial triality [1], i.e. an SU(3)-representation [C 1 , C 2 ] with C 1 − C 2 = 3 Z Z, e.g. triplets [1,0] or sextets [2,0], not, however, octets [1,1] or decuplets [3,0], cannot be separated from a third integer hypercharge U(1)-property. The U(3)-hypercharge-coulour group can be considered to be the continuous phase generalization of the discrete cyclotomic root exp 2πi 3 ∈ II (3) GL( IR 1+s )/O(1, s) with s ≥ 1 space dimensions can be visualized for s = 1, 2 by all possible s-dimensional 2-component hyperbola (hyperboloids) in IR 1+s . It associates a GL( IR 4 )/O(1, 3)-class representative to each spacetime point h : D −→ GL( IR 4 ), x −→ h(x) of a reference Lorentz group by inner automorphisms with GL( IR 4 )-operations does not fill the full group GL( IR 4 ) because of the nontrivial centralizer, isomorphic to GL( IR) = D(1) × II(2). 
The equivalence classes irrep SO + (1, 3) of the irreducible real finite dimensional representations of an orthochronous Lorentz group with its simple rank 2 Lie algebra are built by two fundamental representations: the real 4-dimensional Minkowski representation [1|1] (cyclic representation), selfdual with the symmetric signature (1, 3) Lorentz metric g, and the real 6-dimensional adjoint representation [2|0] ⊕ [0|2] ∼= [1|1] ∧ [1|1], selfdual with two symmetric bilinear forms, the definite metric g ∧ g and the signature (3, 3) Killing metric ǫ(4). Correspondingly, there are two types of real irreducible finite dimensional representations. The tetrad h(x) and the curvature R(x) ∈ GL( IR 6 ), as representations of the quotient GL( IR 4 )/O(1, 3), relate to each other vector spaces with the fundamental representations of the orthogonal and special group. In general, the (n − 1) fundamental SL 0 ( IR n )-representations act on the (n − 1) Grassmann powers Λ N IR n for N = 1, . . . , n − 1. Therefore the reference Grassmann algebra IM(0) ∼= IR 16 over the translations, with the powers Λ N IM(0) ∼= IR (4 over N) as direct summands, and the isomorphic local partners IM(x) are related to each other by the fields in relativity.

Lorentz group: SL( I C 2 ), chirality: U(1)

The fundamental standard model fields transform internally with irreducible representations [y], [2T] and [C 1 , C 2 ] for hypercharge, isospin and colour group resp. and, externally, with [c] and [2J L |2J R ] for chirality and Lorentz group. The unspecified name 'Lorentz group' is used for the locally isomorphic real Lie groups O(1, 3), SO(1, 3) (special), SO + (1, 3) (orthochronous) and SL( I C 2 ) (covering). The complex finite dimensional representations of the real dimension 6, rank 2 simple Lie algebra log SL( I C 2 ) are denoted with 2 natural numbers [2J L |2J R ] for the linear combination of its dominant weight from the 2 fundamental weights for the Weyl representations.
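For orientation, the real dimensions quoted above for [1|1] and [2|0] ⊕ [0|2] follow from the standard dimension formula for these representations (a textbook identity, not specific to this paper):

```latex
\dim\,[2J_L|2J_R] = (2J_L+1)(2J_R+1),\qquad
\dim\,[1|1] = 2\cdot 2 = 4 \ \text{(Minkowski vector)},\qquad
\dim\bigl([2|0]\oplus[0|2]\bigr) = 3+3 = 6 \ \text{(adjoint)}.
```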
quark fields q, d, u; 0 for boson fields H, A, B, G.

Summarizing the operation groups of the standard model: the external-internal homogeneous symmetry group, faithfully represented with the standard model fields, is a product of five normal subgroups with a fourfold central correlation.

One obtains linear isomorphisms I U ∼= I U*T and I U T ∼= I U*. There is a real 4-dimensional manifold (symmetric space) GL( I C 2 )/U(2) of positive unitary groups in the general linear group, considered as real 8-dimensional Lie group. With a reference basis, this manifold is parametrizable by all positive 2 × 2-matrices for the scalar products, d ≻ 0 (⇐⇒ d ≠ 0 and d = ǫ(d 0 )ϑ(d 2 )d).

Besides the analogies, there are important differences between the real-orthogonal quotient structure of relativity and the complex-compact one proposed for the standard model: in contrast to the different dimensions of the spacetime and tetrad manifold in relativity, where for IM ∼= IR 4

   4 = 1 + s = dim IR D < dim IR GL( IR 1+s )/O(1, s),

one has here

   dim IR D = dim IR GL( I C n )/U(n) = n 2 = 4.

Consequently, the symmetric space D(2) can be used [14] as a model for the spacetime manifold:

   D = D(2) = GL( I C 2 )/U(2) ∼= exp IR 4

With this interpretation, spacetime arises as the manifold of compact operations U(2) in general linear operations GL( I C 2 ). The tangent spaces of the homogeneous space, as the quotient of the corresponding Lie algebras,

   log GL( I C 2 )/ log U(2) ∼= IM ∼= IR 4,

can be taken for the Minkowski translations carrying the irreducible SL( I C 2 )-representations [1|1] of the adjoint group GL( I C 2 )/GL( I C) ∼= SO + (1, 3). The Cartan representation of the spacetime translations IM by U(2)-hermitian complex 2 × 2-matrices x = x* shows the local U(2)-structure. (Left and right handed Weyl spinor representations are, as usual, denoted with undotted and dotted indices.)
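The Cartan representation just mentioned can be written explicitly with Pauli matrices (a standard identity, added here for orientation):

```latex
x = x^{*} = x_0\,\mathbf{1}_2 + \vec{x}\cdot\vec{\sigma}
  = \begin{pmatrix} x_0+x_3 & x_1-\mathrm{i}x_2 \\ x_1+\mathrm{i}x_2 & x_0-x_3 \end{pmatrix},
\qquad \det x = x_0^2 - \vec{x}^{\,2},
```

so the determinant reproduces the Lorentz square of the translation x ∈ IM, and the U(2)-action x ↦ uxu* leaves it invariant.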
The GL( I C 2 )/U(2)-analogue to the flat spacetime expansion in general relativity GL( IR 4 )/O(1, 3) with the tetrad expansion h = 1 4 + . . . around a reference O(1, 3) requires an expansion of the external group UL(2) around a compact local reference group U(2).

Representations of the scalar product manifold D(2) with real rank 2 as model for spacetime are characterized by two real invariants, an abelian dilatation invariant M and a simple 'boost' invariant m:

   rank IR GL( I C 2 )/U(2) = rank IR D(1) + rank IR SO + (1, 3) = 1 + 1 = 2

The two invariants, given in the Lie algebra structure by the abelian trace and the simple 'double trace',

   M = tr log ψ(M, m),   m 2 = 2 tr log ψ(m) • log ψ(m),

can be used [14] as fundamental mass scale M and fundamental interaction range 1/m in the representations of the spacetime manifold D(2) ∼= D(1) × SD(2) by quantum fields.

Hyperisospin Gauge Fields

The curvature in relativity R : ( IM ⊗ IM T )(x) −→ ( IM ⊗ IM T )(0) relates to each other the Lorentz Lie algebras acting on the tangent spaces. The analogue for the standard model considers the tensor product I U ⊗ I U* ∼= I C 4 for a scalar product space I U with the represented group U(2) and its Lie algebra. The two real 4-dimensional subspaces {f = ±f* | f ∈ I U ⊗ I U*} of the endomorphisms I U ⊗ I U* ∼= IR 4 ⊕ i IR 4 are both stable under the action of U(2).
The product representation of U(2) decomposes into a 3-dimensional representation, faithful for the adjoint group SO(3) ∼= U(2)/U(1), and a 1-dimensional trivial one.

Internal U(2) can be embedded only in external UL(2), done by the lepton isodoublet fields, as flat space contribution for the basic fields ψ, ψ*:

   UL(2)/U(2) : l(x) : I U(x) −→ I U(0), l(x) ∼ l α A (x)
                l*(x) : I U*(x) −→ I U*(0), l*(x) ∼ l*Ȧ α (x)

Internal SO(3) can be embedded only in external SO + (1, 3), done by the gauge fields, corresponding to the basic field ψ ⊗ ψ*:

   SO + (1, 3)/{1} ⊕ SO(3) : A(x) ⊕ B(x) : ( I U ⊗ I U*)(x) −→ ( I U ⊗ I U*)(0)
                             A(x) ⊕ B(x) ∼ A j (x) + B j (x)

The embedding of external U(1) in internal U(1) is trivial, U(1)/U(1) ∼= {1}. Internal U(2) can embed only external U(1), done by the Higgs isodoublet fields:

   U(2)/U(1) : H(x) : ( I U ∧ I U)(x) −→ I U(0), H(x) ∼ H α (x)
               H*(x) : ( I U ∧ I U)*(x) −→ I U*(0), H*(x) ∼ H* α (x)

Internal U(1) can be embedded only in external UL(2), done by the lepton isosinglet fields:

   UL(2)/U(1) : e(x) : I U*(x) −→ ( I U ∧ I U)(0), e(x) ∼ eȦ(x)
                e*(x) : I U(x) −→ ( I U ∧ I U)*(0), e*(x) ∼ e* A (x)

Or, in general for U(N) with centr U(N) ∼= II(N): II(N), as power of the cyclic root exp 2πi/N, has its correspondence in the group which is defined by the U(N)-representation Λ k U(N) on a complex (N over k)-dimensional space as k-th Grassmann power of the cyclic defining representation, for k = 1, . . . , N, with U(N) ∼= (U(1) × SU(N))/ II(N). With respect to the Lorentz group, [0|0] designates scalar fields, [1|0] and [0|1] are left and right handed Weyl spinor fields resp., [1|1] vector fields. The external and internal multiplicity (singlet, doublet, triplet, quartet, octet, etc.)
of the Lorentz-group, isospin and colour representations can be computed from the natural numbers 2J L,R , 2T, C 1,2 . For a Lie algebra representation D : L −→ V ⊗ V T in the endomorphism algebra of a vector space (L and V finite dimensional), the tensor D ∈ V ⊗ V T ⊗ L T is the associated invariant. The complex quartet structure leads also to the fourfold concept 'particle creation, particle annihilation, antiparticle creation and antiparticle annihilation'. For the left-handed part of the massive lepton particle field one has the anticommutator

   {l*(0), l(x)} = ∫ d 4 q/(8π 2 ) q ǫ(q 0 ) δ(q 2 − M 2 ) exp iqx
                 = x ǫ(x 0 ) [δ′(x 2 ) − (M 2 /4) δ(x 2 ) + (M 4 /16) ϑ(x 2 ) + . . .]

As examples one has the 2nd, 3rd and 6th Grassmann roots of U(1). The root allows the distribution of the U(1)-phase in U(n) on k ≤ N factors, e.g. for (n, N) = (2, 3). The Grassmann root of U(1) for any natural number m = 1, 2, . . . is obtained by using its Sylow decomposition m = p 1 k 1 · · · p r k r in powers of primes.

The quark fields as cubic Grassmann roots take care of the basic field products with Grassmann degree N = 3 in I U ⊗ I U* ∧ I U*, I U ∧ I U ⊗ I U* ∼= I C 2 . The quark isodoublet field q parametrizes the k = 1 member of the internal U(2)-roots 3 U(2) with U(2 × 3)-degrees of freedom; the two quark isosinglets d, u parametrize the k = 2 member of 3 U(2) with U(3)-degrees of freedom. If, for an effective linearization of the basic GL( I C 2 )/U(2) coset structure as realized with the Grassmann algebra GRASS ∼= I C 16 , the basic internal operation group U(2) is extended by a cubic Grassmann root to U(2 × 3), one has to provide also for a gauge field for the additional local U(2 × 3)/U(2) ∼= SU(3)/ II(3) operations. This is done in the standard model with the gluon fields G(x).

Acknowledgments

I benefitted from discussions with David Finkelstein and Tony Smith, both at Georgia Tech, Atlanta.
References

G.E. Baird, L.C. Biedenharn, in Proceedings of the 1st Coral Gables Conference on Symmetry Principles (1964), p. 68, Freeman
N. Bourbaki, Algebra I, Chapters 1-3 (1989), Springer, Berlin, Heidelberg, New York, London, Paris, Tokyo
N. Bourbaki, Algèbre, Chapitre 9 (Formes sesquilinéaires et formes quadratiques) (1959), Hermann, Paris
N. Bourbaki, Lie Groups and Lie Algebras, Chapters 1-3 (1989), Springer, Berlin, Heidelberg, New York, London, Paris, Tokyo
D.R. Finkelstein, Quantum Relativity (1996), Springer
W. Fulton, J. Harris, Representation Theory (1991), Springer
M. Haft, Conjugations and Discrete Symmetries, Thesis, LM-University München (1997)
S. Helgason, Differential Geometry, Lie Groups and Symmetric Spaces (1978), Academic Press, New York etc.
J. Hucks, Physical Review D 43 (1991), 2709
L. O'Raifeartaigh, Group Structure of Gauge Theories (1986), Cambridge University Press, Cambridge
H. Saller, Nuovo Cimento 108B (1993), 603
H. Saller, Nuovo Cimento 109B (1993), 255
H. Saller, International Journal of Theoretical Physics 36 (1997), 1033
H. Saller, International Journal of Theoretical Physics 36 (1997), 2783
H. Saller, The Central Correlations of Hypercharge, Isospin, Colour and Chirality in the Standard Model, MPI-PhT/98-14 (hep-th/9802112)
R. Utiyama, Physical Review 101 (1956), 1597
N.Ja. Vilenkin, A.V. Klimyk, Representations of Lie Groups and Special Functions (1991), Kluwer Academic Publishers, Dordrecht, Boston, London
S. Weinberg, Physical Review Letters 18 (1967), 507
H. Weyl, Zeitschrift für Physik 56 (1929), 330
H. Weyl, Raum, Zeit, Materie (1923), Wissenschaftliche Buchgesellschaft, Darmstadt
E.P. Wigner, Annals of Mathematics 40 (1939), 149
THE EXTERNAL-INTERNAL GROUP QUOTIENT STRUCTURE FOR THE STANDARD MODEL IN ANALOGY TO GENERAL RELATIVITY

Heinrich Saller, Max-Planck-Institut für Physik and Astrophysik, Werner-Heisenberg-Institut für Physik, München
arXiv: hep-th/9805052; doi: 10.1023/a:1026610924096

Abstract: In analogy to the class structure GL( IR 4 )/O(1, 3) for general relativity with a local Lorentz group as stabilizer and a basic tetrad field for the parametrization, a corresponding class structure GL( I C 2 )/U(2) is investigated for the standard model with a local hyperisospin group U(2). The lepton, quark, Higgs and gauge fields, used in the standard model, cannot be basic in a coset interpretation; they may be taken as first order terms in a flat spacetime, particle oriented expansion of a basic field (as the analogue to the tetrad) and its products.
A Machine Learning Approach for Hierarchical Classification of Software Requirements

Manal Binkhonain ([email protected]), College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
Liping Zhao ([email protected]), Department of Computer Science, University of Manchester, Manchester M13 9PL, UK

Preprint submitted to Machine Learning with Applications, February 24, 2023

Abstract. Context: Classification of software requirements into different categories is a critically important task in requirements engineering (RE). Developing machine learning (ML) approaches for requirements classification has attracted great interest in the RE community since the 2000s. Objective: This paper aims to address two related problems that have been challenging real-world applications of ML approaches: the problems of class imbalance and high dimensionality with low sample size data (HDLSS). These problems can greatly degrade the classification performance of ML methods. Method: The paper proposes HC4RC, a novel ML approach for multiclass classification of requirements. HC4RC solves the aforementioned problems through semantic-role based feature selection, dataset decomposition and hierarchical classification. We experimentally compare the effectiveness of HC4RC with three closely related approaches: two are based on a traditional statistical classification model, while the third uses an advanced deep learning model. Results: Our experiment shows: 1) The class imbalance and HDLSS problems present a challenge to both traditional and advanced ML approaches. 2) The HC4RC approach is simple to use and can effectively address the class imbalance and HDLSS problems compared to similar approaches. Conclusion: This paper makes an important practical contribution to addressing the class imbalance and HDLSS problems in multiclass classification of software requirements.

Keywords: Requirements Engineering, Requirements Classification, Machine Learning, Hierarchical Classification, Imbalanced Classes, High Dimensional Data with Low Sample Size (HDLSS)

Introduction

In recent years, machine learning (ML) approaches have achieved visible successes in a wide range of real-world applications, from fake news detection (Agarwal et al., 2020), to opinion mining (Jin et al., 2009), sentiment analysis (Ravi and Ravi, 2015), spam email filtering (Jin et al., 2009), traffic prediction (Sarker, 2021), and medical diagnosis (Sidey-Gibbons and Sidey-Gibbons, 2019), to name just a few. Away from these general applications, ML approaches have also attracted great interest in the requirements engineering (RE) community, with more and more RE researchers actively seeking to develop practical ML applications for requirements analysis tasks. Such tasks include requirements classification (Cleland-Huang et al., 2006), requirements prioritization (Perini et al., 2012), requirements detection (Abualhaija et al., 2019), and requirements traceability (Cleland-Huang et al., 2007a). In this paper, we focus on requirements classification, a task central and critical to successful software development projects (Glinz, 2007; Chung and do Prado Leite, 2009; Broy, 2015). A requirement for a software system development project is a statement of what the system should do or how well the system should perform. Examples of requirements are: "The system must provide an online help function" and "It must be possible to completely restore a running configuration when the system crashes". An average software project normally has a few hundred requirements (Eckhardt et al., 2016). The complete set of requirements for a specific system is called a "requirements document" or a "requirements specification".
Requirements classification is the task of assigning a given set of requirements in a document to different categories, or classes, according to a specific classification scheme. This typically involves classifying each requirement as either a functional requirement (FR) or a non-functional requirement (NFR) (Chung and do Prado Leite, 2009). An NFR can be further classified as a security, reliability, performance, or usability requirement. In the aforementioned examples, "The system must provide an online help function" should be classified as a FR, whereas "It must be possible to completely restore a running configuration when the system crashes" should be classified as a reliability requirement, a specific NFR. As shown above, requirements are stated in natural language (Zhao et al., 2021), which means that requirements documents are text documents. Consequently, ML approaches to text classification (Sebastiani, 2002; Kowsari et al., 2019) can be adapted to requirements classification, whereby we train a requirements classifier with a set of labelled requirements examples (i.e., the training set) (Binkhonain and Zhao, 2019). In other words, ML approaches to requirements classification are based on supervised text classification. Furthermore, requirements classification is typically a multiclass classification task, as it deals with more than two classes (usually more than 10 different classes). Since the publication of the landmark work by Cleland-Huang et al. (2006) on ML-based requirements classification more than a decade ago, RE researchers have proposed a large number of ML approaches.
While most of these approaches are based on traditional ML algorithms (Cleland-Huang et al., 2007c; Ko et al., 2007; Casamayor et al., 2010; Kurtanović and Maalej, 2017; Dalpiaz et al., 2019; Abualhaija et al., 2020; Dias Canedo and Cordeiro Mendes, 2020), more recent proposals are exploring the use of deep learning (DL) models for requirements classification (Hey et al., 2020a; Mekala et al., 2021). However, regardless of what ML approach is used, a common problem with requirements classification is class imbalance in the training data (He and Garcia, 2009), as requirements categories are naturally uneven, usually with a small percentage of categories containing a large percentage of the requirements (Kurtanović and Maalej, 2017; Eckhardt et al., 2016). Class imbalance in requirements classification is known as relative class imbalance, as the minority classes are not necessarily rare in their own right but rather relative to the majority classes (He and Garcia, 2009). Relative class imbalance occurs frequently in real-world applications, such as the detection of oil spills in satellite radar images, the detection of fraudulent telephone calls, information retrieval and filtering, and diagnoses of rare medical conditions (Japkowicz and Stephen, 2002). Imbalanced classes in the training set can cause imbalanced learning (He and Garcia, 2009), as ML classifiers will have more examples to learn in the majority classes than in the minority classes (He and Garcia, 2009; Seiffert et al., 2014; Li et al., 2020; Jiang et al., 2013). Consequently, imbalanced classes can lead to misclassification. There exist different techniques for dealing with imbalanced classes (Li et al., 2020). These techniques generally attempt to reduce the severity of imbalance within the training data by providing a more equivalent statistical representation of the majority and minority classes (Mills et al., 2018).
Among them are data sampling (or resampling) techniques (He and Garcia, 2009), used either to remove excessive samples from the majority classes (known as under-sampling) or to add more samples to the minority classes (known as oversampling). Both over- and under-sampling techniques have also been used in ML approaches for requirements classification (Kurtanović and Maalej, 2017; Hey et al., 2020a). However, sampling techniques have their own drawbacks. In particular, oversampling can cause a classifier to over-fit to the minority classes, whereas under-sampling can affect the performance of the classifier on the majority classes, due to the risk of removing good representative samples from these classes (Wang and Yao, 2012; Li et al., 2020). Additionally, oversampling can be an issue for requirements classification, due to the lack of labelled requirements (Alhoshan et al., 2022). The class imbalance problem can become even worse when combined with the problem of high dimensionality and low sample size (HDLSS) datasets (He and Garcia, 2009; Shen et al., 2022). The HDLSS problem is concerned with the scenario where the sample size n in a training dataset is dramatically smaller than the feature dimension d, that is, n << d (Shen et al., 2022). HDLSS data can seriously degrade the classification performance of classical statistical methods (Shen et al., 2022). HDLSS data are common in many real-world applications such as data mining, image processing and computer vision, bioinformatics, and gene expression (Shen et al., 2022). The problem is also present in the training data of requirements classification. The combination of class imbalance and HDLSS data presents a critical challenge to many ML approaches, as HDLSS can amplify the imbalance in the data and make the classifier even more closely, or even exactly, fitted to a specific training set (He and Garcia, 2009; Liu et al., 2017).
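The n << d situation described above is easy to reproduce with requirements-like text: even a toy bag-of-words vocabulary over three requirements (our own illustration; the third requirement below is invented for the example) already yields far more feature dimensions than samples.

```python
def bow_vocabulary(docs):
    """Collect the distinct lowercase tokens of a document collection;
    each token becomes one feature dimension in a bag-of-words encoding."""
    vocab = set()
    for doc in docs:
        vocab.update(doc.lower().split())
    return sorted(vocab)

reqs = [
    "The system must provide an online help function",
    "It must be possible to completely restore a running configuration when the system crashes",
    "The system shall encrypt all stored user data",
]
vocab = bow_vocabulary(reqs)
n, d = len(reqs), len(vocab)   # sample size vs feature dimension: d is much larger than n
```

Real requirements datasets have the same shape on a larger scale: a few hundred samples against thousands of word features.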
To address the HDLSS problem, researchers have turned to feature selection techniques (Wasikowski and Chen, 2009; Sima and Dougherty, 2006; Liu et al., 2017; Huang et al., 2017), as these techniques can not only successfully reduce the dimensions in texts, but also reduce the over-fitting problem (Wasikowski and Chen, 2009; Zheng et al., 2004; Chen et al., 2009; Yin et al., 2013; Liu et al., 2017; Huang et al., 2017; Fu et al., 2020). In this paper, we propose a novel ML approach for requirements classification. The proposed approach, called HC4RC (Hierarchical Classification for Requirements Classification), aims to address the class imbalance and HDLSS problems by means of three novel techniques, namely Semantic Role-Based Feature Selection (SR4FS), Dataset Decomposition and Hierarchical Classification. Specifically, SR4FS addresses the HDLSS problem in the requirements training data using a small set of semantic roles, such as agent, action and goal (Gildea and Jurafsky, 2002), so as to reduce the number of word features in each requirement statement. Dataset Decomposition and Hierarchical Classification work together to handle the class imbalance problem in requirements classes, with the former rebalancing the training set by decomposing it into two approximately balanced sets and the latter performing hierarchical classification on the decomposed datasets. The rest of this paper is organized as follows: Section 2 justifies the novelty of the above three techniques by reviewing related techniques used in general text classification applications as well as in requirements classification. Section 3 explains the principles of each of these techniques in detail and how we combine them into a coherent approach, HC4RC. Section 4 describes the procedures and methods by which we experimentally compare HC4RC with three closely related ML approaches, whereas Section 5 presents and analyzes the experiment results.
Our implementation code for all the compared approaches used in our experiments is provided at the Code Ocean platform (https://codeocean.com, with doi:10.24433/CO.6887783.v1). Section 6 then discusses the validity and limitations of the proposed approach and our evaluation methods. Finally, Section 7 concludes our paper and summarises our contributions.

Related Work and Our Contributions

In this section, we review some prominent techniques that have been used to solve the class imbalance and HDLSS problems. In Section 2.1, we review common feature selection techniques used in text classification as solutions to the HDLSS problem; in Section 2.2, we present feature selection techniques used in requirements classification and justify our novel contribution. In Section 2.3, we present common data re-balancing techniques used in classification tasks as solutions to the class imbalance problem; finally, Section 2.4 reviews re-balancing techniques used in requirements classification and justifies our novel contribution.

Feature Selection for Text Classification

A major challenge of text classification is the high dimensionality of the feature space (Deng et al., 2019). A text document usually contains hundreds or thousands of distinct words that are regarded as features for classifiers; however, many of them may be noisy, less informative, or redundant with respect to the class labels. This may mislead the classifiers and degrade their performance in general (Sebastiani, 2002; Deng et al., 2019). Therefore, feature selection must be applied to eliminate irrelevant features, so as to reduce the feature space to a manageable level, thus improving the efficiency and accuracy of the classifiers used (Kowsari et al., 2019; Deng et al., 2019). In this paper, feature selection plays a specific role in addressing the HDLSS problem (Wasikowski and Chen, 2009; Sima and Dougherty, 2006; Liu et al., 2017; Huang et al., 2017).
Feature selection techniques for text classification broadly fall into three categories: syntactic word representation, weighted words and semantic word representation (Kowsari et al., 2019). The most basic form of syntactic word representation is the n-gram (e.g., 1-gram, 2-gram, 3-gram, etc.), a set of n words occurring consecutively in a text. Other syntactic word representations include syntactic features of the text, such as part-of-speech (POS) tags (Deng et al., 2019). The most common weighted word feature selection techniques are TF (Term Frequency), TF-IDF (Term Frequency-Inverse Document Frequency) and BOW (Bag-of-Words). These techniques use word frequency to calculate the weight (importance) of each word in a text (Kowsari et al., 2019). The current approach to semantic word representation is word embeddings, where each word or phrase from the vocabulary is mapped to an N-dimensional vector of real numbers (Kowsari et al., 2019). Examples of common word embedding techniques are Word2Vec (Mikolov et al., 2013a,b), GloVe (Pennington et al., 2014) and FastText (Bojanowski et al., 2017). However, each of these three types of feature selection technique has its own limitations. For example, n-gram relies on an extensive dictionary of words to identify features, whereas word frequency techniques such as BOW will fail if none of the words in the training set are included in the testing set. Word embeddings require a large corpus to train an embedding model.

Feature Selection in Requirements Classification and Our Contribution

Due to the domain-specific nature of requirements texts and the lack of labelled data (Ferrari et al., 2017), requirements classification normally employs ad hoc techniques for feature selection.
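Before turning to the requirements-specific techniques, the weighted-word idea reviewed above can be made concrete with a minimal TF-IDF sketch (our own toy illustration, not a production vectoriser).

```python
import math
from collections import Counter

def tfidf(docs):
    """Toy TF-IDF over whitespace-tokenised documents:
    weight(w, doc) = tf(w, doc) * log(N / df(w))."""
    n = len(docs)
    tokenised = [doc.lower().split() for doc in docs]
    df = Counter()                      # document frequency of each word
    for tokens in tokenised:
        df.update(set(tokens))
    vectors = []
    for tokens in tokenised:
        tf = Counter(tokens)
        vectors.append({w: (tf[w] / len(tokens)) * math.log(n / df[w]) for w in tf})
    return vectors

reqs = ["the system must provide an online help function",
        "the system must restore a running configuration"]
vecs = tfidf(reqs)
# words shared by every document ("the", "system", "must") get weight 0,
# while distinctive words ("help", "configuration") get positive weight
```

The sketch also makes the BOW limitation above visible: a test requirement whose words never appear in `reqs` would map to an empty vector.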
These include keyword-based techniques (Cleland-Huang et al., 2007c), which use a dictionary of requirements keywords for feature selection, and syntactic feature-based techniques, which derive word features from various syntactic features, such as POS tags, n-grams, verbs, and syntactic dependency rules (Kurtanović and Maalej, 2017; Abualhaija et al., 2020; Dalpiaz et al., 2019). However, as these feature selection techniques also involve the frequency analysis of features, they suffer drawbacks similar to the aforementioned techniques. In this paper, we propose a simple semantic word representation technique for feature selection. The technique, SR4FS, uses a small set of semantic roles to identify meaningful and representative word features from requirements statements. Semantic roles, also known as thematic roles, are the various roles or positions that words in a sentence may play with respect to the action or state described by a governing verb, commonly the sentence's main verb (Gildea and Jurafsky, 2002). The set of semantic roles we formulate is based on our knowledge of the semantic concepts of requirements (Letsholo et al., 2013), rather than the frequencies of the words in the dataset. Consequently, SR4FS is independent of the frequency of words in the training set and the size of the training set. This novel feature selection technique is presented in Section 3.

Class Rebalancing and Hierarchical Classification

Most existing techniques for the imbalanced learning problem are designed to address binary-class problems, in which imbalances exist between two classes (He and Garcia, 2009), but these solutions are found to be less effective, or even to cause negative effects, on multiclass classification tasks (Wang and Yao, 2012). Existing solutions for multiclass imbalance problems are very limited, among which are the aforementioned data sampling (oversampling, under-sampling and a combination of both) and data decomposition techniques (Feng et al., 2018; Li et al., 2020).
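The two re-sampling directions just mentioned can be sketched in a few lines; this toy version (our own illustration, not the exact procedure of any approach cited here) equalises class sizes by duplicating minority-class instances or discarding majority-class instances at random.

```python
import random

def rebalance(samples, labels, mode="over", seed=0):
    """Toy random re-sampling: equalise class sizes by duplicating
    minority-class instances ("over") or discarding majority-class
    instances ("under")."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    sizes = [len(xs) for xs in by_class.values()]
    target = max(sizes) if mode == "over" else min(sizes)
    resampled = []
    for y, xs in by_class.items():
        if len(xs) >= target:                       # under-sample: keep a random subset
            chosen = rng.sample(xs, target)
        else:                                       # over-sample: duplicate at random
            chosen = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        resampled.extend((x, y) for x in chosen)
    return resampled

# 7 "FR" instances vs 3 "NFR" instances: oversampling brings NFR up to 7
balanced = rebalance(list(range(10)), ["FR"] * 7 + ["NFR"] * 3, mode="over")
```

The drawbacks discussed above are visible even here: in "over" mode the minority class consists largely of duplicates (a recipe for over-fitting), while in "under" mode majority instances are discarded blindly.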
Data sampling techniques either increase (oversampling) or decrease (under-sampling) the number of instances in the sampled classes. Data sampling can be carried out randomly (random sampling) or with targeted majority or minority classes. However, while oversampling increases the risk of over-fitting to the minority classes, under-sampling is sensitive to the number of minority classes and can cause performance loss on majority classes (Wang and Yao, 2012), as it may remove good representative instances from majority classes, which ultimately misleads the classification (Li et al., 2020). Data decomposition techniques generally entail decomposing a multiclass classification problem into a series of smaller two-class sub-problems (He and Garcia, 2009) and then applying binary-class classification to these sub-problems. These techniques include One-Versus-All (OVA) (also known as One-Versus-Rest), One-Versus-One (OVO) (Li et al., 2020) and a class decomposition technique proposed by Yin et al. (2013). However, data decomposition techniques can aggravate imbalanced class distributions (Żak and Woźniak, 2020; Li et al., 2020), and the combined results from the binary classifiers learned from the different sub-datasets can cause potential classification errors, as each individual classifier is trained without the full knowledge of the entire dataset (Feng et al., 2018). In recent years, ensemble-based imbalance learning techniques have been adapted to multiclass imbalance problems, with positive results. For example, Wang and Yao (2012) show that a boosting-based ensemble that combines AdaBoost with random oversampling can improve the prediction accuracy on the minority class without losing the overall performance compared to other existing class imbalance learning methods. Feng et al.
(2018) show that a bagging-based ensemble that combines margin ordering with under-sampling can improve a classifier's recognition of minority class instances without decreasing accuracy on the majority classes. All of these techniques, however, use class decomposition to convert a multiclass imbalance problem into a series of binary-class sub-problems and then apply a set of binary classifiers to these sub-problems (Galar et al., 2011). Although ensemble-based techniques can improve classification performance on imbalanced classes, their success depends on creating diverse classifiers while maintaining their consistency with the training set (Galar et al., 2011), which is not easy (Brown et al., 2005), as the concept of diversity is still ill-defined in ML (Galar et al., 2011). Originally designed for classification over hierarchical class structures, hierarchical classification (Kiritchenko et al., 2006) has also shown promise for class imbalance problems in text classification (Ghazi et al., 2010; Zheng and Zhao, 2020). In hierarchical classification, classes are organized into a tree structure with levels and nodes (Kiritchenko et al., 2006), and the classification task is divided into a set of sub-tasks, one per node. The construction of a classification hierarchy can be informed by domain knowledge (e.g., relationships between the classes (Ghazi et al., 2010)) or constraints (e.g., a cost-sensitive factor (Zheng and Zhao, 2020)), with the aim of addressing the class imbalance problem.

Class Rebalancing in Requirements Classification and Our Contribution

In requirements classification, we found only two approaches that explicitly address the class imbalance problem, one by Kurtanović and Maalej (2017) and another by Hey et al. (2020a). Both adopt data sampling, using oversampling for the minority class and under-sampling for the majority class.
The sampling techniques used in these approaches are for binary classification, in which requirements are classified into functional and non-functional requirements. They do not address the class imbalance problem in multiclass classification tasks, which are inherent to requirements classification. In this paper, we propose HC4RC, a novel approach to multiclass imbalanced learning that combines dataset decomposition with hierarchical classification (Ghazi et al., 2010). Under this approach, we first decompose the training dataset into two relatively balanced subsets, one holding the majority classes and the other the minority classes. In doing so, we divide a "flat", imbalanced classification problem into a hierarchy of two smaller, balanced problems. We then train a hierarchy of three classifiers, one binary and two multiclass, to perform three sub-classification tasks: the binary classifier assigns each requirement to either the majority or the minority class set, and each multiclass classifier then performs classification within its corresponding subset. The basic idea is to partition the training dataset into two subsets so as to reduce between-class imbalance within each subset, as the classes in each subset are relatively balanced. The decomposition step is similar to solving a two-class (binary) imbalance problem. This novel approach is presented in Section 3.

The HC4RC Approach

As stated earlier, the HC4RC approach uses three novel techniques to solve the class imbalance and HDLSS problems in requirements classification. These techniques are summarised below:
• Semantic Role-based Feature Selection. This technique uses a small number of semantic roles to identify the most relevant semantic features in the requirements, addressing the high-dimensionality and low-sample-size problems.
• Dataset Decomposition.
This technique rebalances a given training dataset by labelling it into two approximately balanced subsets, one containing the majority classes and the other the minority classes.
• Hierarchical Classification. This technique works with Dataset Decomposition to perform hierarchical classification on the decomposed subsets.
These techniques are organized as a series of steps and integrated into a coherent training process, as shown in Figure 1. These steps, and the principles and rationale behind the techniques, are described in the sections below.

Text Pre-Processing

This is a necessary first step in text classification, as text data contain much noise and many unwanted words. The purpose of this step is to clean and standardize the text for processing in the subsequent steps (Sarkar, 2016; Dias Canedo and Cordeiro Mendes, 2020). Various natural language processing (NLP) techniques are available for text pre-processing; some commonly used for requirements classification are described in a survey by Binkhonain and Zhao (2019). In HC4RC, we apply the following NLP techniques to the requirements text, in order: tokenization, lowercase conversion, lemmatization, and removal of stop words and short words (words with fewer than three characters).

Semantic Role-Based Feature Selection

This step selects a small number of the most relevant features for each requirement statement, using our semantic role-based feature selection technique, SR4FS. Below we introduce the set of semantic roles used in our approach and the principles behind them, and then explain how they can be identified in requirements statements. A semantic role is a word or phrase in a sentence that plays a certain role in relation to the sentence's main verb.
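The pre-processing pipeline described above (tokenization, lowercasing, and removal of stop words and short words) can be sketched as follows; a minimal stdlib illustration in which the toy stop-word list is our assumption and lemmatization is omitted (real pipelines use a full stop-word list and a proper lemmatizer, e.g. from NLTK or spaCy):

```python
import re

# Hypothetical, abbreviated stop-word list for illustration only.
STOP_WORDS = {"the", "shall", "a", "an", "to", "of", "be", "and", "when"}

def preprocess(requirement: str) -> list[str]:
    """Tokenize, lowercase, then drop stop words and words shorter
    than three characters (lemmatization omitted in this sketch)."""
    tokens = re.findall(r"[A-Za-z]+", requirement.lower())
    return [t for t in tokens if t not in STOP_WORDS and len(t) >= 3]
```

For example, `preprocess("The system shall send a verification email to the user.")` keeps only the content words `system`, `send`, `verification`, `email`, `user`.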
There are many kinds of semantic roles (Gildea and Jurafsky, 2002), but we adopt only six of them for SR4FS, as they are similar to the concepts used in requirements modelling (Rolland and Proix, 1992). These six semantic roles are introduced here:
1. Agent - the volitional causer of an event or action (Jurafsky and Martin, 2020). This role is played by the main subject of a sentence. For example, in the requirement statement "The system shall send a verification email to the users when they log on to their account from an unfamiliar computer", the word "system" is an agent. An agent is also called an actor (Rolland and Proix, 1992).
2. Action - the cause of an action, event or state. This role is fulfilled by the verbs of a sentence. In the example requirement above, the words "send" and "log on" play the action role.
3. Theme - the participant most directly affected by an event or action (Jurafsky and Martin, 2020). This role is played by the direct object of a sentence. In the example requirement, the word "email" takes the theme role. In requirements modelling, the theme is also referred to as the key object (Sutcliffe and Maiden, 1998).
4. Goal - the destination of an object of a transfer event or action (Jurafsky and Martin, 2020). This role is fulfilled by the indirect object of a sentence. In the example requirement, the word "users" is the goal. In requirements modelling, the goal describes a future, required state which the system should satisfy, maintain or sometimes avoid (Sutcliffe and Maiden, 1998).
5. Manner - the manner in which an action takes place (Xue, 2008).
This role is fulfilled by an adjective, adverb, determiner or preposition phrase. For example, in the requirement "The system should be easy to use", the adjective phrase "easy to use" plays the manner role. If a term is an adjective, adverb or determiner, the term and its headwords represent a manner; if a term is a preposition (e.g., from, with, without, after), the preposition and all its dependents correspond to a manner.
6. Measure - the degree of control of an action or the quantification of an event (Jurafsky and Martin, 2020). This role is typically fulfilled by an adverb (e.g., rather), a number or a quantity (e.g., 99%). If a term is a named entity (e.g., date, time, percent, money or cardinal), the term and all its dependents represent a measure; if it is an adverb, the term and its headwords are mapped onto a measure. For example, in the requirement "The system must be available to the users 98% of the time every month during business hours", the percentage "98%" plays the measure role.
The above semantic roles are sufficient to answer a range of questions in requirements analysis, as they cover the concerns: "Who (agent) did (action) what (theme) to whom (goal), how (manner) and how much (measure)". The underlying words of these roles can thus serve as relevant features for requirements classification. In particular, subjects, verbs and objects are highly relevant to the identification of FRs, whereas adjectives, adverbs and quantities are relevant to NFRs. As can be seen, semantic roles map onto different parts of speech and grammatical features in sentences; consequently, they can be automatically identified using NLP tools such as POS tagging, dependency parsing and named entity recognition (NER). The mapping rules between the six semantic roles and their corresponding POS tags and grammatical features are presented in Table 1.
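To illustrate the mapping, here is a deliberately simplified sketch, assuming each token has already been tagged by an NLP pipeline (the tag names follow spaCy-style conventions and the one-role-per-token rule is our simplification; the actual Table 1 rules, and the manual checking step, are richer):

```python
# Each token is (word, pos, dep, is_quantity_entity); in practice these
# come from a POS tagger, dependency parser and NER tagger.
def map_roles(tagged_tokens):
    """Map POS/dependency tags onto the six SR4FS semantic roles
    (simplified: first matching rule wins, one role per token)."""
    roles = {}
    for word, pos, dep, is_qty in tagged_tokens:
        if is_qty:                    # numbers, percentages, dates...
            roles.setdefault("measure", []).append(word)
        elif dep == "nsubj":          # sentence subject
            roles.setdefault("agent", []).append(word)
        elif pos == "VERB":           # main and secondary verbs
            roles.setdefault("action", []).append(word)
        elif dep == "dobj":           # direct object
            roles.setdefault("theme", []).append(word)
        elif dep == "dative":         # indirect object
            roles.setdefault("goal", []).append(word)
        elif pos in {"ADJ", "ADV", "DET", "ADP"}:
            roles.setdefault("manner", []).append(word)
    return roles
```

On the running example, the subject "system" maps to agent, "send" to action, "email" to theme, "user" to goal, and "98%" to measure.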
SR4FS performs feature selection automatically in two steps: 1. Process each requirement statement in the training set using a POS tagger, dependency parser and NER tagger. 2. Extract the POS tags and grammatical features from the above process and map them onto semantic roles using the mapping rules. The features selected by SR4FS are then manually checked to correct any errors or inaccuracies from the automatic process. This manual checking is needed because: 1) NLP tools have yet to achieve 100% accuracy; 2) NER tools perform worse at recognising domain-specific entities; and 3) the relationship between a semantic role and its underlying syntactic realisation is not a strict one-to-one mapping.

Dataset Decomposition

The Dataset Decomposition process decomposes a flat, imbalanced training set into two subsets with balanced numbers of requirements. Instead of physically splitting the training set, the process assigns a label to each requirement to indicate whether it belongs to the majority or the minority subset. The process consists of these steps: 1. Sort the classes in the training set in descending order of the number of requirements in each class. 2. Starting from the top of the list, assign each requirement in each class a "maj" label, denoting that it belongs to the majority class subset; this labelling ends when the majority class subset holds at least half of the requirements in the training set. 3. Finally, assign each remaining requirement a "min" label, denoting that it belongs to the minority class subset. This decomposition divides the original flat classification task over an imbalanced dataset into two balanced subtasks, which can then be solved by the hierarchical classification approach described below.
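The three labelling steps above can be sketched directly (a minimal illustration; the function name is ours):

```python
from collections import Counter

def decompose(labels):
    """Assign each requirement a 'maj'/'min' label: walk the classes
    from largest to smallest, marking them majority until that subset
    holds at least half of all requirements; the rest are minority."""
    counts = Counter(labels)
    total = sum(counts.values())
    majority, covered = set(), 0
    for cls, n in counts.most_common():   # descending by class size
        if covered >= total / 2:
            break
        majority.add(cls)
        covered += n
    return ["maj" if y in majority else "min" for y in labels]
```

For instance, with class sizes 5, 3, 2 and 1, the two largest classes (8 of 11 requirements) become the majority subset and the remaining two the minority subset.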
Hierarchical Classification

With the training set labelled into two subsets, the Hierarchical Classification process trains a classification model that classifies each requirement in a hierarchical fashion, as Figure 2 shows. The main steps are: 1. At the top level, we train a binary classifier, F_super, to classify each requirement in the training set into either the majority or the minority class subset, based on the "maj" and "min" labels, resulting in two balanced subsets. 2. At the second level, we train two multiclass classifiers, F_maj for the majority subset and F_min for the minority subset, each classifying the requirements in its corresponding subset into one of several categories. These three classifiers form a hierarchical classification model that collectively performs multiclass classification of requirements. We have implemented the HC4RC approach in the Python programming language and made the source code of this implementation publicly available in Zenodo (Binkhonain and Zhao, 2022).

Evaluation of HC4RC

To evaluate HC4RC, we experimentally compare it with three closely related ML approaches. This involves implementing HC4RC and the three related approaches, and then comparing their performance results to assess the strengths and weaknesses of HC4RC against its three peer approaches. In this section, we detail our experimental procedures and methods; we present and analyse the results obtained in Section 5.
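The two-level routing described above can be sketched as a thin wrapper; the `HierarchicalClassifier` class below is our illustration (the paper's own implementation uses scikit-learn's LinearSVC for the three models, but any fit/predict estimators can be plugged in):

```python
class HierarchicalClassifier:
    """Sketch of the three-classifier hierarchy: f_super routes each
    requirement to the majority or minority subset, then the matching
    multiclass classifier assigns the final class."""

    def __init__(self, f_super, f_maj, f_min):
        self.f_super, self.f_maj, self.f_min = f_super, f_maj, f_min

    def fit(self, X, y, subset_labels):   # subset_labels: 'maj' / 'min'
        self.f_super.fit(X, subset_labels)
        maj = [i for i, s in enumerate(subset_labels) if s == "maj"]
        mins = [i for i, s in enumerate(subset_labels) if s == "min"]
        self.f_maj.fit([X[i] for i in maj], [y[i] for i in maj])
        self.f_min.fit([X[i] for i in mins], [y[i] for i in mins])
        return self

    def predict(self, X):
        routes = self.f_super.predict(X)   # top-level binary decision
        return [self.f_maj.predict([x])[0] if r == "maj"
                else self.f_min.predict([x])[0]
                for x, r in zip(X, routes)]
```

The top-level classifier only decides which of the two balanced sub-problems a requirement belongs to; the final class always comes from one of the two second-level classifiers.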
Three Related ML Approaches for Evaluating HC4RC

The approaches we sought for comparison must meet the following criteria: 1) they must be closely related to ours; 2) they should explicitly deal with imbalanced classes or feature selection; and 3) their description should be clear and detailed enough for us to reimplement them, or their source code should be available for us to adapt. Most existing ML approaches for requirements classification, such as those included in a recent survey (Binkhonain and Zhao, 2019), do not meet these criteria. Here we introduce the three selected approaches that do.

The K&M Approach

Proposed by Kurtanović and Maalej (2017), the K&M approach performs both binary and multiclass classification. A binary classifier was trained on the PROMISE NFR dataset to classify a requirement as an FR or NFR. For multiclass classification, the K&M approach considered only the four most frequent NFR classes in the PROMISE NFR dataset: Usability, Security, Operational and Performance. Two types of classifiers were developed for multiclass classification: a set of binary classifiers, one per requirements category, and a single multiclass classifier covering all categories. Both were SVM-based. However, the K&M approach addressed only the two-class imbalance between Usability and the rest of the NFRs (treating non-Usability requirements as one class). Data sampling was employed by adding supplementary samples derived from Amazon software reviews to the minority class (Usability) and randomly removing samples from the majority class (non-Usability). For feature selection, the K&M approach used different types of features, including word n-grams, POS-tag-based n-grams and syntactic features.
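For illustration, word n-gram extraction of the kind K&M rely on can be sketched as follows (a hypothetical helper, not the authors' code):

```python
def word_ngrams(tokens, n_values=(1, 2, 3)):
    """Extract word n-grams for n in {1, 2, 3}, the feature set K&M
    reported as performing best for binary FR/NFR classification."""
    grams = []
    for n in n_values:
        grams += [" ".join(tokens[i:i + n])
                  for i in range(len(tokens) - n + 1)]
    return grams
```

Unlike SR4FS, such n-gram features are frequency-driven and grow quickly with vocabulary size, which is why K&M additionally rank and truncate the feature set.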
The authors of the K&M approach reported that the best performance was achieved when all word n-gram features, with n ∈ {1, 2, 3}, were used for binary classification (FRs vs NFRs), and the next best performance was achieved using the top 500 selected word features (out of 1,000).

The NoRBERT Approach

Proposed by Hey et al. (2020a), this transfer learning approach fine-tunes two pre-trained BERT models (BERT-base and BERT-large) (Devlin et al., 2018) for requirements classification. The PROMISE NFR dataset was again used for fine-tuning. Apart from using a different kind of classification model, NoRBERT has many similarities to the K&M approach: both perform binary and multiclass classification; both use the PROMISE NFR dataset as the training set; and both apply under-sampling and oversampling to imbalanced classes for binary classification. However, NoRBERT outperformed the K&M approach, owing to its use of a state-of-the-art deep learning model. One issue we found in the source code of NoRBERT (Hey et al., 2020b) is that it used a weighted average F1 to calculate the overall performance for both binary and multiclass classification, which is biased towards the majority classes. We discuss this issue in Section 4.4 and propose a different way to calculate the average performance results for multiclass classifiers.

The Yin Approach

Proposed by Yin et al. (2013), the Yin approach was originally developed for binary classification of medical image data, not for requirements classification. We selected it for comparison with HC4RC because we were interested in its unique class decomposition technique, which decomposes the majority class of a binary classification dataset into several relatively balanced pseudo-subclasses for the purpose of feature selection.
Afterwards, the Yin approach applies a Hellinger distance-based feature selection technique (Cieslak et al., 2012) to the decomposed classes. This feature selection technique is said to be independent of the class distributions and able to handle high-dimensional class-imbalanced data (Fu et al., 2020). In our comparison, we reimplemented the Yin approach for multiclass classification of requirements so that it can be compared with HC4RC.

The Requirements Dataset

The training set for our evaluation was the PROMISE-exp dataset (Lima et al., 2019), an expansion of the PROMISE NFR dataset (Cleland-Huang et al., 2007b). The original PROMISE NFR dataset contains 625 labelled requirements distributed across 12 classes: one FR class with 255 requirements and 11 NFR classes with 370 requirements in total. These requirements were collected from 15 requirements documents (i.e., 15 software projects). This dataset has become the de facto dataset for training new ML approaches for requirements classification (Hussain et al., 2008; Kurtanović and Maalej, 2017; Abad et al., 2017; Dalpiaz et al., 2019). The PROMISE-exp dataset, in contrast, contains 969 requirements from 47 requirements documents, with the same 12 classes as the original. Figure 3 depicts these classes and their requirements distribution in PROMISE-exp. The FR class is denoted Functional (F); the NFR classes are Security (SE), Usability (US), Operability (O), etc. As Figure 3 shows, the PROMISE-exp dataset is imbalanced: the largest class, Functional (F), contains 444 requirements, whereas the smallest class, Portability (PO), contains only 12. The dataset also exhibits the HDLSS problem, with a sample size of 969 requirements against a feature dimension of 2,133 features; clearly, 969 ≪ 2133.
Implementations

We implemented our HC4RC approach following the description in the previous section. We reimplemented the K&M approach based on the source code provided by Dalpiaz et al. (2019) and used imbalanced-learn, a Python package, for random over- and under-sampling of the training set. For the NoRBERT approach, we adopted the source code provided by its authors (Hey et al., 2020b). We implemented the Yin approach from scratch based on its description, as its source code is not available. We implemented the classifiers for the HC4RC, K&M and Yin approaches using the linear SVM implementation provided by scikit-learn's LinearSVC (Pedregosa et al., 2011), and we tuned the parameters of these classifiers using scikit-learn's GridSearchCV. For NoRBERT, we fine-tuned only the BERT-base model, due to a lack of computational resources. We trained all four approaches on the PROMISE-exp dataset. The source code of our implementations of all four approaches is available on the Code Ocean platform (https://codeocean.com, doi:10.24433/CO.6887783.v1). We employed both 10-fold and p-fold (project-specific fold) cross-validation (CV) to test each approach. The 10-fold CV reduces the bias and variance of an approach across different parts of the data, whereas the p-fold CV reduces its bias and variance across the requirements projects in the dataset. In other words, 10-fold CV assesses the generalizability of each classification approach to unseen requirements, whereas p-fold CV assesses its generalizability to unseen requirements documents. The two CV methods thus complement one another.
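The distinctive part of p-fold CV is that whole projects, not individual requirements, are assigned to folds. A minimal round-robin sketch of that assignment (our simplification; the evaluation itself uses scikit-learn's StratifiedKFold):

```python
def project_folds(project_ids, k=10):
    """Assign whole projects (documents) to k folds round-robin, so
    that no project is split between training and test data. With the
    47 PROMISE-exp projects this yields 4-5 projects per fold."""
    folds = [[] for _ in range(k)]
    for i, pid in enumerate(sorted(set(project_ids))):
        folds[i % k].append(pid)
    return folds
```

Keeping each document intact within a fold is what makes the p-fold estimate a measure of generalization to unseen requirements documents rather than unseen requirements.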
For 10-fold CV, we divided the PROMISE-exp dataset into 10 equal parts by number of requirements and ran each approach 10 times, each time using a different fold for testing and the remaining nine for training. For p-fold CV, we divided the PROMISE-exp dataset into 10 parts by number of projects (i.e., the number of documents in the dataset) (Cleland-Huang et al., 2007c; Dalpiaz et al., 2019). As PROMISE-exp contains 47 projects, we assigned 4-5 documents to each fold. The p-fold CV process is otherwise the same as the 10-fold CV. We used scikit-learn's StratifiedKFold to divide the dataset into folds. We carried out the training and testing of all four approaches on a standard laptop with an Intel Core i5 1.6 GHz and 8 GB RAM. The computational efficiency of each approach was measured by the time taken and the memory used to train and test it; the Python time and psutil libraries were used to measure execution time and memory usage, respectively. The measurements for the four approaches are given in Table 2, which shows that for execution time the Yin approach is the fastest, followed by HC4RC and then K&M, with NoRBERT taking the longest; for memory load, Yin consumes the least space and NoRBERT the most. We measured the classification performance (effectiveness) of each approach using the metrics described in the section below.

Evaluation Metrics

We measure the classification performance of each approach on individual classes using the unweighted precision (P), recall (R) and F1 score (F1). These metrics assess the performance of an approach on each individual class by statistically comparing the predicted class of each requirement with its true label. We then measure the overall performance of each approach across all classes using the recommended multiclass classification metrics of macro and micro average P, R and F1 (Grandini et al., 2020).
Macro average P and R are simply computed as the arithmetic mean of the per-class metrics:

\[ \text{Macro-Average-P} = \frac{\sum_{k=1}^{K} P_k}{K}, \tag{1} \]

\[ \text{Macro-Average-R} = \frac{\sum_{k=1}^{K} R_k}{K}. \tag{2} \]

Macro F1 is the harmonic mean of Macro-Average-P and Macro-Average-R:

\[ \text{Macro-F1} = \frac{2 \cdot \text{Macro-Average-P} \cdot \text{Macro-Average-R}}{\text{Macro-Average-P} + \text{Macro-Average-R}}. \tag{3} \]

Micro average P, R and F1 are all equal to Accuracy:

\[ \text{Micro-Average-P} = \text{Micro-Average-R} = \text{Micro-Average-F1} = \frac{\sum_{k=1}^{K} TP_k}{\text{Grand Total}}. \tag{4} \]

In these formulae, K is the number of classes in the dataset, k denotes an individual class, TP_k is the number of true positives for class k, and Grand Total is the total number of instances. The formulae are interpreted as follows. Macro average P, R and F1 evaluate the performance of a multiclass approach at the class level, without regard to class size: a higher macro-F1 score indicates that the approach performs well on all classes, large or small, whereas a lower macro-F1 score indicates poorer performance (Grandini et al., 2020). Micro average P, R and F1, on the other hand, are all measured by the same Accuracy metric and thus share the same score (Grandini et al., 2020); they take the size of each class into account and therefore give more importance to majority classes. In other words, for micro averages, poor performance on small classes matters little, because those classes contribute few instances to the overall total (Grandini et al., 2020). Under these metrics, a higher micro-F1 score indicates that the approach is more accurate overall, whereas a lower micro-F1 score indicates it is less accurate overall. Macro average F1 and micro average F1 thus complement one another: the former weighs each class equally, whereas the latter weighs each instance equally (Sokolova and Lapalme, 2009).
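Equations (1)-(4) can be computed directly from predictions; a self-contained sketch (the function name is ours):

```python
def macro_micro(y_true, y_pred):
    """Per-class P/R plus macro (unweighted class mean, Eqs. 1-3) and
    micro (= accuracy, Eq. 4) averages."""
    classes = sorted(set(y_true) | set(y_pred))
    ps, rs = [], []
    for c in classes:
        tp = sum(t == p == c for t, p in zip(y_true, y_pred))
        fp = sum(p == c != t for t, p in zip(y_true, y_pred))
        fn = sum(t == c != p for t, p in zip(y_true, y_pred))
        ps.append(tp / (tp + fp) if tp + fp else 0.0)
        rs.append(tp / (tp + fn) if tp + fn else 0.0)
    macro_p, macro_r = sum(ps) / len(ps), sum(rs) / len(rs)
    macro_f1 = (2 * macro_p * macro_r / (macro_p + macro_r)
                if macro_p + macro_r else 0.0)
    micro = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return macro_p, macro_r, macro_f1, micro
```

Note that the macro scores treat a 12-instance class exactly like a 444-instance class, which is precisely why they are used here alongside the micro score.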
Based on the aforementioned evaluation metrics, we measured the classification performance of the HC4RC, K&M, Yin and NoRBERT approaches. The measurements are presented in Table 3, and Figure 4 depicts the macro and micro averages of these approaches. We discuss these results in the next section.

Table 3: Classification performance of HC4RC, K&M, Yin, and NoRBERT on 12 requirements classes in the PROMISE-exp dataset. The highest F1 score for each class is in bold.

Results Analysis and Discussion

The classification performance results obtained by the four approaches are presented in Table 3. In this section, we compare, analyze and interpret these results. Where appropriate, we explain why our approach performs better or worse than its peer approaches.

Comparing HC4RC with K&M

Table 3 shows that HC4RC outperformed K&M on all but one class in both 10-fold and p-fold CV. In both cases the under-performing class is a minority class: Portability (PO) in 10-fold and Legal (L) in p-fold. As both HC4RC and K&M are based on the SVM model, the better performance of our approach suggests that hierarchical classification incorporating semantic role-based feature selection and dataset decomposition handles imbalanced classes better than the K&M approach. Figure 4 shows that HC4RC has higher macro and micro averages than K&M. In particular, the macro averages show that HC4RC outperformed K&M considerably on individual classes (0.51 vs 0.44 in 10-fold and 0.50 vs 0.39 in p-fold), whereas the micro averages show that HC4RC achieved better overall performance than K&M on all 12 classes (0.63 vs 0.58 in 10-fold and 0.64 vs 0.61 in p-fold). These results also show that HC4RC generalizes better than K&M to both unseen requirements (10-fold) and unseen requirements projects (p-fold).

Comparing HC4RC with Yin

Table 3 shows that HC4RC also outperformed the Yin approach on all but one class in 10-fold CV and on all classes in p-fold CV. Both approaches apply decomposition to address the class imbalance problem, but the better performance of our approach indicates that its dataset decomposition handles imbalance in multiclass problems better than the Yin approach's class decomposition. Figure 4 shows that HC4RC has higher macro and micro averages than Yin: the macro averages show that HC4RC outperformed Yin considerably on individual classes (0.51 vs 0.47 in 10-fold and 0.50 vs 0.22 in p-fold), whereas the micro averages show that HC4RC achieved better overall performance than Yin on all 12 classes (0.63 vs 0.62 in 10-fold and 0.64 vs 0.50 in p-fold). These results also show that HC4RC generalizes better than Yin to both unseen requirements (10-fold) and unseen requirements projects (p-fold).

Comparing HC4RC with NoRBERT

Table 3 shows that NoRBERT outperformed HC4RC on almost all classes under both 10-fold and p-fold CV. However, we notice that NoRBERT's performance fluctuates on small classes: it performed worse than HC4RC on the Maintainability (MN) class under 10-fold and on Availability (A) under p-fold. The poorer performance of NoRBERT on some minority classes indicates that even a state-of-the-art deep learning model is still limited when classifying small classes in multiclass classification. Figure 4 shows that HC4RC performed worse than NoRBERT both on individual classes and on all 12 classes as a whole. These results clearly show that the combination of class imbalance and HDLSS data seriously degrades the classification performance of classical statistical methods such as the SVM model used in HC4RC. They also show that NoRBERT generalizes much better than HC4RC to both unseen requirements (10-fold) and unseen requirements projects (p-fold).

Further Analysis and Discussion

We first compare the overall classification performance of the four approaches from the viewpoint of their macro and micro average scores (see Figure 4). The findings are as follows. Of the four compared approaches, NoRBERT is the best overall for requirements classification: its macro averages show the best performance on individual classes, and its micro averages show the best performance on all 12 classes. Its 10-fold and p-fold results show that NoRBERT has the best generalizability to both unseen requirements and unseen requirements projects. These results also suggest that NoRBERT is the best approach for dealing with class imbalance and HDLSS data in requirements documents. As NoRBERT applied the same data sampling techniques to imbalanced data as K&M, we assume that its strong performance is due to its underlying deep learning model, BERT. Of the three approaches that use the SVM model as their classification model (HC4RC, K&M and Yin), HC4RC is the best overall: its macro averages show the best performance on individual classes, and its micro averages show the best performance on all 12 classes. Its 10-fold and p-fold results show that HC4RC has the best generalizability among the three to both unseen requirements and unseen requirements projects. These results also suggest that HC4RC is the best SVM-based approach for dealing with class imbalance and HDLSS data in requirements documents.
As the three SVM-based approaches applied different techniques for handling class imbalance and HDLSS data, we assume that our semantic role-based feature selection, combined with dataset decomposition and hierarchical classification, is more effective than the data sampling and hybrid feature selection techniques used in K&M, and than the class decomposition and Hellinger distance-based feature selection used in the Yin approach. We now look into the performance of the four approaches on individual classes from the viewpoint of their unweighted P, R and F1 scores (see Table 3). The findings are as follows. First, on the largest class, F (functional requirements), all four approaches performed relatively well. NoRBERT in particular achieved an F1 score of 0.89 in 10-fold and 0.91 in p-fold. HC4RC and K&M achieved F1 scores of 0.74 and 0.73 respectively in 10-fold, and both achieved 0.76 in p-fold; Yin achieved 0.73 in 10-fold and 0.67 in p-fold. These scores also show that NoRBERT, HC4RC and K&M generalize better to unseen requirements projects than to unseen requirements. We believe this ability is critically important for requirements classification, as an ML approach should be able to differentiate requirements across different projects. Second, on the remaining classes (SE, US, etc.), NoRBERT performed better in p-fold than in 10-fold; HC4RC and K&M performed similarly in 10-fold and p-fold; Yin performed better in 10-fold than in p-fold. These results suggest that NoRBERT generalizes better to unseen requirements projects than to unseen requirements; HC4RC and K&M generalize similarly to unseen requirements and unseen requirements documents; and Yin generalizes better to unseen requirements than to unseen requirements documents.
Under 10-fold CV, we notice that HC4RC outperformed the other three approaches on three small classes, A, MN and L; Yin outperformed the other three approaches on one small class, SC; and NoRBERT outperformed the other three approaches on two small classes, FT and PO. These results show that HC4RC achieved better performance than NoRBERT on the small classes for unseen requirements. Under p-fold CV, we notice that HC4RC outperformed the other three approaches on every class, while NoRBERT suffered a performance loss on the small classes MN, SC, L, and PO. These results show that the classification performance of the deep learning model BERT is also degraded on HDLSS requirements, a key finding from our evaluation. Finally, we attribute the better performance of HC4RC over K&M and Yin to the aggregate effect of the three key techniques employed by HC4RC. Threats to Validity In this section, we discuss potential threats to the validity of our evaluation of HC4RC and explain why we believe such threats are minimal. Reimplementation of related approaches. One potential validity threat is our reimplementation of the K&M, Yin and NoRBERT approaches, as we modified these approaches so that they can be used to perform multiclass classification on the same dataset. While we cannot avoid this threat entirely, we are making the source code for our reimplementations publicly available (Binkhonain and Zhao, 2022), so that other researchers can assess their validity. The quality of the training set. The PROMISE NFR dataset, on which the PROMISE-exp dataset was built, is known for its mislabelling issues, as the dataset was labelled by students (Hey et al., 2020a). While we believe that the poor quality of the dataset can affect the performance of the approaches in our evaluation, it should not affect their generalizability, as we applied this dataset consistently to all the approaches.
Furthermore, since both the original PROMISE NFR and PROMISE-exp datasets have been widely used in the RE community, using them for research evaluation should be a strength, not a weakness, as they allow us to compare our results directly with other approaches (Kurtanović and Maalej, 2017; Hey et al., 2020a). We make our code and data publicly available so that further replication or reproduction of our approach can be carried out. We recognize that the lack of gold-standard labelled requirements datasets has been an open challenge to using ML approaches for RE tasks (Binkhonain and Zhao, 2019). Performance measure. Another concern is how well metrics can really measure what they are intended to measure (Ralph and Tempero, 2018). In RE, we noticed that researchers normally use unweighted F1 (Kurtanović and Maalej, 2018; Dalpiaz et al., 2019) or weighted F1 (Hey et al., 2020a) to measure the performance of both binary and multiclass classifiers. For example, Hey et al. (2020a) used the average F1-score over all classes, weighted by each class's frequency of appearance; that is, larger classes are weighted more than smaller classes. We believe such a weighting can inflate the performance on the larger classes and skew the overall results. In our evaluation, we deliberately chose macro and micro average metrics to evaluate and compare the multiclass approaches, to avoid bias towards large classes (Grandini et al., 2020), as discussed in Section 4. However, we agree that in RE, achieving higher recall is more important than higher precision, and in this context a weighted F1-score that gives more importance to R than to P is desirable (Berry, 2021). While we have not used such a weighted metric in our experiments, we believe our metrics have minimized the classifier bias. Generalizability. As our comparison has been limited to a single dataset, a potential threat is the validity of our evaluation conclusions.
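The recall-oriented weighting discussed above corresponds to the F-beta measure with beta > 1, which counts recall more heavily than precision. A small illustration with scikit-learn (the binary labels below are invented for illustration only):

```python
# Sketch of a recall-weighted F-measure: F-beta with beta=2 weighs
# recall four times as much as precision. Toy labels, not real results.
from sklearn.metrics import fbeta_score, precision_score, recall_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 1, 0, 0]  # perfect recall, imperfect precision

p = precision_score(y_true, y_pred)       # 4/6
r = recall_score(y_true, y_pred)          # 4/4
f1 = fbeta_score(y_true, y_pred, beta=1)  # balanced harmonic mean
f2 = fbeta_score(y_true, y_pred, beta=2)  # favors recall over precision

# Because recall is perfect and precision is not, F2 exceeds F1 here.
print(round(f1, 3), round(f2, 3))
```

Such a measure would reward a requirements classifier that misses few requirements, even at the cost of some false positives.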
To mitigate this threat, we used both 10-fold and p-fold CV to test all the approaches, as these cross-validation methods were designed for evaluating ML models on limited data samples (Bengio et al., 2003). Conclusion This paper proposes HC4RC, a novel machine learning approach for multiclass classification of requirements. HC4RC is designed to address two specific problems in requirements classification: class imbalance and HDLSS. These problems, common to requirements classification tasks, can greatly degrade the performance of ML methods. HC4RC addresses the first problem through dataset decomposition and hierarchical classification, and the second through a novel semantic role-based feature selection method. The novelty of HC4RC thus lies in its combination of these three techniques into a simple and practical approach that can effectively address class imbalance and HDLSS. The key findings of this paper are summarized as follows: • Overall, HC4RC performs better than the two SVM-based approaches, K&M and Yin, and only slightly worse than the BERT-based approach, NoRBERT. This finding shows that our semantic role-based feature selection, combined with dataset decomposition and hierarchical classification, provides a more effective solution to class imbalance and HDLSS data than the data sampling and hybrid feature selection techniques used in K&M and the class decomposition and Hellinger distance-based feature selection used in the Yin approach. As NoRBERT applied the same data sampling techniques to imbalanced data as K&M, we assume that the strong performance of NoRBERT is due to its underlying deep learning model, BERT. • On individual classes, HC4RC performs better than all the other compared approaches on small classes for unseen requirements. This shows that the classification performance of the deep learning-based NoRBERT can also be degraded on imbalanced classes.
• HC4RC has better generalizability on both unseen requirements (shown in 10-fold CV) and unseen requirements projects (shown in p-fold CV) than its closest peers, K&M and Yin, but worse generalizability than NoRBERT. This suggests that the DL-based approach generalizes better than the traditional ML approaches. In conclusion, our results, while still preliminary as they are based on only one dataset, suggest that multiclass classification of requirements with the class imbalance and HDLSS problems presents a challenge to ML approaches in general, even for advanced deep learning models. This paper has made a practical contribution to addressing these problems. We suggest that future work on requirements classification focus on the following areas: • For ML approaches based on traditional statistical classification models, more work is needed to develop better feature selection techniques, such as those based on semantic representations and roles of requirements. We believe the work presented in this paper has made a start in this area. • For ML approaches based on advanced deep learning models, more work is needed to train these models on requirements-specific data. Some pioneering work has already started in this area (Ajagbe and Zhao, 2022). • For both traditional and advanced learning approaches, more work is needed to investigate different data re-balancing techniques, such as those presented in this paper.

Figure 1: The training process of HC4RC and its key techniques.
Figure 2: Dataset decomposition and hierarchical classification.
Figure 3: Requirements classes and their instances in the PROMISE-exp dataset.
Figure 4: Macro and micro averages of the four approaches on classification of 12 requirements classes.

Table 1: Six semantic roles of SR4FS and their mapping to corresponding grammatical features.

Semantic Role | Grammatical Feature(s)                                          | Mapping Rule
Agent         | 1. Subject                                                      | If a term is the subject of the head verb, it corresponds to an agent.
Action        | 2. Action Verb                                                  | If a term is the verb and its head is a verb, it corresponds to an action.
Theme         | 3. Direct Object                                                | If a term is the direct object of the main verb, it corresponds to a theme.
Goal          | 4. Indirect Object                                              | If a term is an indirect object of a dative preposition, it corresponds to a goal.
Manner        | 5. Adverb; 6. Adjective; 7. Determiner; 8. Prepositional Phrase |

Table 2: Computation efficiency of HC4RC, K&M, Yin, and NoRBERT.

               | HC4RC         | K&M           | Yin           | NoRBERT
Execution time | 10.73 seconds | 23.70 minutes | 10.52 seconds | 1.09 hours
Memory load    | 1.7 GB        | 3.9 GB        | 0.75 GB       | 5.7 GB

The current state-of-the-art POS tagger (e.g., spaCy) achieves only 95.1% accuracy, whereas the current state-of-the-art NER models (spaCy and RoBERTa) achieve 89.8% accuracy (https://spacy.io/usage/factsfigures).
https://pypi.org/project/imblearn/

Acknowledgments We wish to thank the three reviewers for their expert comments on our paper. We thank the University of Manchester for providing an Open Access fund for this paper.

References

Abad, Z. S. H., Karras, O., Ghazi, P., Glinz, M., Ruhe, G., and Schneider, K. (2017). What works better? A study of classifying requirements. In 2017 IEEE 25th International Requirements Engineering Conference (RE), pages 496-501. IEEE.

Abualhaija, S., Arora, C., Sabetzadeh, M., Briand, L. C., and Traynor, M. (2020). Automated demarcation of requirements in textual specifications: a machine learning-based approach. Empirical Software Engineering, 25(6):5454-5497.

Abualhaija, S., Arora, C., Sabetzadeh, M., Briand, L. C., and Vaz, E. (2019). A machine learning-based approach for demarcating requirements in textual specifications. In 2019 IEEE 27th International Requirements Engineering Conference (RE), pages 51-62. IEEE.
Agarwal, A., Mittal, M., Pathak, A., and Goyal, L. M. (2020). Fake news detection using a blend of neural networks: An application of deep learning. SN Computer Science, 1:1-9.

Ajagbe, M. and Zhao, L. (2022). Retraining a BERT model for transfer learning in requirements engineering: A preliminary study. In 2022 IEEE 30th International Requirements Engineering Conference (RE), pages 309-315. IEEE.

Alhoshan, W., Zhao, L., Ferrari, A., and Letsholo, K. J. (2022). A zero-shot learning approach to classifying requirements: A preliminary study. In International Working Conference on Requirements Engineering: Foundation for Software Quality, pages 52-59. Springer.

Bengio, Y., Ducharme, R., Vincent, P., and Jauvin, C. (2003). A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137-1155.

Berry, D. M. (2021). Empirical evaluation of tools for hairy requirements engineering tasks. Empirical Software Engineering, 26(6):1-77.

Binkhonain, M. and Zhao, L. (2019). A review of machine learning algorithms for identification and classification of non-functional requirements. Expert Systems with Applications.

Binkhonain, M. and Zhao, L. (2022). Supplementary material of "Multiclass classification of software requirements with imbalanced, high dimensional and low sample size data".

Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.

Brown, G., Wyatt, J., Harris, R., and Yao, X. (2005). Diversity creation methods: a survey and categorisation. Information Fusion, 6(1):5-20.

Broy, M. (2015). Rethinking nonfunctional software requirements. Computer, 48(05):96-99.

Casamayor, A., Godoy, D., and Campo, M. (2010). Identification of non-functional requirements in textual specifications: A semi-supervised learning approach. Information and Software Technology, 52(4):436-445.

Chen, J., Huang, H., Tian, S., and Qu, Y. (2009). Feature selection for text classification with naïve Bayes. Expert Systems with Applications, 36(3):5432-5435.

Chung, L. and do Prado Leite, J. C. S. (2009). On non-functional requirements in software engineering. In Conceptual Modeling: Foundations and Applications, pages 363-379. Springer.

Cieslak, D. A., Hoens, T. R., Chawla, N. V., and Kegelmeyer, W. P. (2012). Hellinger distance decision trees are robust and skew-insensitive. Data Mining and Knowledge Discovery, 24(1):136-158.

Cleland-Huang, J., Berenbach, B., Clark, S., Settimi, R., and Romanova, E. (2007a). Best practices for automated traceability. Computer, 40(6):27-35.

Cleland-Huang, J., Mazrouee, S., Liguo, H., and Port, D. (2007b). NFR.

Cleland-Huang, J., Settimi, R., Zou, X., and Solc, P. (2006). The detection and classification of non-functional requirements with application to early aspects. In 14th IEEE International Requirements Engineering Conference (RE'06), pages 39-48. IEEE.

Cleland-Huang, J., Settimi, R., Zou, X., and Solc, P. (2007c). Automated classification of non-functional requirements. Requirements Engineering, 12(2):103-120.

Dalpiaz, F., Dell'Anna, D., Aydemir, F. B., and Çevikol, S. (2019). Requirements classification with interpretable machine learning and dependency parsing. In 2019 IEEE 27th International Requirements Engineering Conference (RE), pages 142-152. IEEE.

Deng, X., Li, Y., Weng, J., and Zhang, J. (2019). Feature selection for text classification: A review. Multimedia Tools and Applications, 78(3):3797-3816.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Dias Canedo, E. and Cordeiro Mendes, B. (2020). Software requirements classification using machine learning algorithms. Entropy, 22(9):1057.

Eckhardt, J., Vogelsang, A., and Fernández, D. M. (2016). Are non-functional requirements really non-functional? An investigation of non-functional requirements in practice. In 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), pages 832-842. IEEE.

Feng, W., Huang, W., and Ren, J. (2018). Class imbalance ensemble learning based on the margin theory. Applied Sciences, 8(5):815.

Ferrari, A., Spagnolo, G. O., and Gnesi, S. (2017). PURE: A dataset of public requirements documents. In 2017 IEEE 25th International Requirements Engineering Conference (RE), pages 502-505. IEEE.

Fu, G.-H., Wu, Y.-J., Zong, M.-J., and Pan, J. (2020). Hellinger distance-based stable sparse feature selection for high-dimensional class-imbalanced data. BMC Bioinformatics, 21(1):1-14.

Galar, M., Fernandez, A., Barrenechea, E., Bustince, H., and Herrera, F. (2011). A review on ensembles for the class imbalance problem: bagging-, boosting-, and hybrid-based approaches. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(4):463-484.

Ghazi, D., Inkpen, D., and Szpakowicz, S. (2010). Hierarchical versus flat classification of emotions in text. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 140-146. Association for Computational Linguistics.

Gildea, D. and Jurafsky, D. (2002). Automatic labeling of semantic roles. Computational Linguistics, 28(3):245-288.

Glinz, M. (2007). On non-functional requirements. In 15th IEEE International Requirements Engineering Conference (RE 2007), pages 21-26. IEEE.

Grandini, M., Bagli, E., and Visani, G. (2020). Metrics for multi-class classification: an overview. arXiv preprint arXiv:2008.05756.

He, H. and Garcia, E. A. (2009). Learning from imbalanced data. IEEE Transactions on Knowledge and Data Engineering, 21(9):1263-1284.

Hey, T., Keim, J., Koziolek, A., and Tichy, W. F. (2020a). NoRBERT: Transfer learning for requirements classification. In 2020 IEEE 28th International Requirements Engineering Conference (RE), pages 169-179. IEEE.

Hey, T., Keim, J., Koziolek, A., and Tichy, W. F. (2020b). Supplementary material of "NoRBERT: Transfer learning for requirements classification". Online; accessed 3 August 2020.

Huang, Y., Zhao, C., Yang, H., Song, X., Chen, J., and Li, Z. (2017). Feature selection solution with high dimensionality and low-sample size for land cover classification in object-based image analysis. Remote Sensing, 9(9):939.

Hussain, I., Kosseim, L., and Ormandjieva, O. (2008). Using linguistic knowledge to classify non-functional requirements in SRS documents. In International Conference on Application of Natural Language to Information Systems, pages 287-298. Springer.

Japkowicz, N. and Stephen, S. (2002). The class imbalance problem: A systematic study. Intelligent Data Analysis, 6(5):429-449.

Jiang, L., Li, C., Cai, Z., and Zhang, H. (2013). Sampled Bayesian network classifiers for class-imbalance and cost-sensitive learning. In 2013 IEEE 25th International Conference on Tools with Artificial Intelligence, pages 512-517. IEEE.

Jin, W., Ho, H. H., and Srihari, R. K. (2009). OpinionMiner: a novel machine learning system for web opinion mining and extraction. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1195-1204.

Jurafsky, D. and Martin, H. J. (2020). Speech and Language Processing. Pearson Education India, third edition.

Kiritchenko, S., Matwin, S., Nock, R., and Famili, A. F. (2006). Learning and evaluation in the presence of class hierarchies: Application to text categorization. In Conference of the Canadian Society for Computational Studies of Intelligence, pages 395-406. Springer.

Ko, Y., Park, S., Seo, J., and Choi, S. (2007). Using classification techniques for informal requirements in the requirements analysis-supporting system. Information and Software Technology, 49(11-12):1128-1140.

Kowsari, K., Jafari Meimandi, K., Heidarysafa, M., Mendu, S., Barnes, L., and Brown, D. (2019). Text classification algorithms: A survey. Information, 10(4):150.

Kurtanović, Z. and Maalej, W. (2017). Automatically classifying functional and non-functional requirements using supervised machine learning. In 2017 IEEE 25th International Requirements Engineering Conference (RE), pages 490-495. IEEE.

Kurtanović, Z. and Maalej, W. (2018). On user rationale in software engineering. Requirements Engineering, 23(3):357-379.

Letsholo, K. J., Zhao, L., and Chioasca, E.-V. (2013). TRAM: A tool for transforming textual requirements into analysis models. In 2013 28th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 738-741. IEEE.

Li, Q., Song, Y., Zhang, J., and Sheng, V. S. (2020). Multiclass imbalanced learning with one-versus-one decomposition and spectral clustering. Expert Systems with Applications, 147:113152.

Lima, M., Valle, V., Costa, E., Lira, F., and Gadelha, B. (2019). Software engineering repositories: Expanding the PROMISE database. In Proceedings of the XXXIII Brazilian Symposium on Software Engineering, pages 427-436. ACM.

Liu, B., Wei, Y., Zhang, Y., and Yang, Q. (2017). Deep neural networks for high dimension, low sample size data. In IJCAI, pages 2287-2293.

Mekala, R. R., Irfan, A., Groen, E. C., Porter, A., and Lindvall, M. (2021). Classifying user requirements from online feedback in small dataset environments using deep learning. In 2021 IEEE 29th International Requirements Engineering Conference (RE), pages 139-149. IEEE.

Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013a). Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 26:3111-3119.

Mikolov, T., Yih, W.-t., and Zweig, G. (2013b). Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746-751.

Mills, C., Escobar-Avila, J., and Haiduc, S. (2018). Automatic traceability maintenance via machine learning classification. In 2018 IEEE International Conference on Software Maintenance and Evolution (ICSME), pages 369-380. IEEE.

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.

Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.

Perini, A., Susi, A., and Avesani, P. (2012). A machine learning approach to software requirements prioritization. IEEE Transactions on Software Engineering, 39(4):445-461.

Ralph, P. and Tempero, E. (2018). Construct validity in software engineering research and software metrics. In Proceedings of the 22nd International Conference on Evaluation and Assessment in Software Engineering 2018, pages 13-23.

Ravi, K. and Ravi, V. (2015). A survey on opinion mining and sentiment analysis: tasks, approaches and applications. Knowledge-Based Systems, 89:14-46.

Rolland, C. and Proix, C. (1992). A natural language approach for requirements engineering. In International Conference on Advanced Information Systems Engineering, pages 257-277. Springer.

Sarkar, D. (2016). Text Analytics with Python. Springer.

Sarker, I. H. (2021). Machine learning: Algorithms, real-world applications and research directions. SN Computer Science, 2(3):1-21.

Sebastiani, F. (2002). Machine learning in automated text categorization. ACM Computing Surveys (CSUR), 34(1):1-47.

Seiffert, C., Khoshgoftaar, T. M., Van Hulse, J., and Folleco, A. (2014). An empirical study of the classification performance of learners on imbalanced and noisy software quality data. Information Sciences, 259:571-595.

Shen, L., Er, M. J., and Yin, Q. (2022). Classification for high-dimension low-sample size data. Pattern Recognition, page 108828.

Sidey-Gibbons, J. A. and Sidey-Gibbons, C. J. (2019). Machine learning in medicine: a practical introduction. BMC Medical Research Methodology, 19(1):64.

Sima, C. and Dougherty, E. R. (2006). What should be expected from feature selection in small-sample settings. Bioinformatics, 22(19):2430-2436.

Sokolova, M. and Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks. Information Processing & Management, 45(4):427-437.

Sutcliffe, A. and Maiden, N. (1998). The domain theory for requirements engineering. IEEE Transactions on Software Engineering, 24(3):174-196.

Wang, S. and Yao, X. (2012). Multiclass imbalance problems: Analysis and potential solutions. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 42(4):1119-1130.

Wasikowski, M. and Chen, X.-w. (2009). Combating the small sample class imbalance problem using feature selection. IEEE Transactions on Knowledge and Data Engineering, 22(10):1388-1400.

Xue, N. (2008). Labeling Chinese predicates with semantic roles. Computational Linguistics, 34(2):225-255.

Yin, L., Ge, Y., Xiao, K., Wang, X., and Quan, X. (2013). Feature selection for high-dimensional imbalanced data. Neurocomputing, 105:3-11.

Żak, M. and Woźniak, M. (2020). Performance analysis of binarization strategies for multi-class imbalanced data classification. In International Conference on Computational Science, pages 141-155. Springer.

Zhao, L., Alhoshan, W., Ferrari, A., Letsholo, K. J., Ajagbe, M. A., Chioasca, E.-V., and Batista-Navarro, R. T. (2021). Natural language processing for requirements engineering: A systematic mapping study. ACM Computing Surveys (CSUR), 54(3):1-41.

Zheng, W. and Zhao, H. (2020). Cost-sensitive hierarchical classification for imbalance classes. Applied Intelligence, pages 1-11.

Zheng, Z., Wu, X., and Srihari, R. (2004). Feature selection for text categorization on imbalanced data. ACM SIGKDD Explorations Newsletter, 6(1):80-89.
{'fraction_non_alphanumeric': 0.048422613954019054, 'fraction_numerical': 0.0271550002852416, 'mean_word_length': 4.906462699642833, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 0, 'https://': 4, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 2, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 3, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'Context: Classification of software requirements into different categories is a critically important task in requirements engineering (RE). Developing machine learning (ML) approaches for requirements classification has attracted great interest in the RE community since the 2000s. Objective: This paper aims to address two related problems that have been challenging real-world applications of ML approaches: the problems of class imbalance and high dimensionality with low sample size data (HDLSS). These problems can greatly degrade the classification performance of ML methods. Method: The paper proposes HC4RC, a novel ML approach for multiclass classification of requirements. HC4RC solves the aforementioned problems through semantic-role based feature selection, dataset decomposition and hierarchical classification. We experimentally compare the effectiveness of HC4RC with three closely related approaches -two of which are based on a traditional statistical classification model whereas one using an advanced deep learning model. Results: Our experiment shows: 1) The class imbalance and HDLSS problems present a challenge to both traditional and advanced ML approaches.2) The HC4RC approach is simple to use and can effectively address the class imbalance and HDLSS problems compared to similar approaches.', 'arxivid': '2302.12599', 'author': ['Manal Binkhonain [email protected] \nCollege of Computer and Information Sciences\nKing Saud University\nRiyadhSaudi Arabia\n', 'Liping Zhao [email protected]. 
\nDepartment of Computer Science\nDepartment of Computer Science\nUniversity of Manchester\nManchesterUK\n\nUniversity of Manchester\nManch-esterM13 9PLUK\n'], 'authoraffiliation': ['College of Computer and Information Sciences\nKing Saud University\nRiyadhSaudi Arabia', 'Department of Computer Science\nDepartment of Computer Science\nUniversity of Manchester\nManchesterUK', 'University of Manchester\nManch-esterM13 9PLUK'], 'corpusid': 257205842, 'doi': '10.1016/j.mlwa.2023.100457', 'github_urls': [], 'n_tokens_mistral': 23352, 'n_tokens_neox': 20053, 'n_words': 12763, 'pdfsha': '69b0c1031e24ab8d813e8b3ddbaca709881ad0ab', 'pdfurls': ['https://export.arxiv.org/pdf/2302.12599v1.pdf'], 'title': ['A Machine Learning Approach for Hierarchical Classification of Software Requirements', 'A Machine Learning Approach for Hierarchical Classification of Software Requirements'], 'venue': []}
arxiv
On Statistical Properties of Sharpness-Aware Minimization: Provable Guarantees

Kayhan Behdin, Rahul Mazumder
MIT Sloan School of Management and MIT Operations Research Center, Cambridge, MA

Abstract

Sharpness-Aware Minimization (SAM) is a recent optimization framework that aims to improve deep neural network generalization by obtaining flatter (i.e., less sharp) solutions. As SAM has been numerically successful, recent papers have studied the theoretical aspects of the framework and have shown that SAM solutions are indeed flat. However, there has been limited theoretical exploration of the statistical properties of SAM. In this work, we directly study the statistical performance of SAM and present a new theoretical explanation of why SAM generalizes well. To this end, we study two statistical problems, neural networks with one hidden layer and kernel regression, and prove that, under certain conditions, SAM has smaller prediction error than Gradient Descent (GD). Our results concern both convex and non-convex settings, and show that SAM is particularly well-suited for non-convex problems. Additionally, we prove that in our setup SAM solutions are also less sharp, showing that our results agree with previous work. Our theoretical findings are validated by numerical experiments on numerous scenarios, including deep neural networks.

Introduction

Training Deep Neural Networks (DNNs) can be challenging, as it requires minimizing non-convex loss functions with numerous local minima (and saddle points). Since different local minima have different generalization properties, recent research has focused on developing optimization methods and techniques that improve the quality of DNN training, leading to better generalization on unseen data. An important property of a DNN solution's landscape is its sharpness, defined as how rapidly the loss value changes locally.
A flatter solution is one where the highest and lowest loss values in a local region do not differ too much. Sharpness measures used in practice include the largest eigenvalue [38] or the trace [21] of the Hessian of the loss. Sharpness-Aware Minimization (SAM) [15] is an optimization framework built on the observation that the sharpness of the training loss correlates with the generalization performance of a DNN. Specifically, flatter solutions of DNNs have been found to generalize better [14,24,40,39,15]. Thus, in SAM, the loss function is modified in a way that encourages convergence to flatter regions of the loss. SAM has been shown to be empirically successful in numerous tasks [9,6,3] and has been extended to several variations [41,13]. Consequently, there has been growing interest in understanding the theoretical underpinnings of SAM. In this paper, our goal is to further the theoretical understanding of SAM by exploring the implicit regularization implications of the algorithm's dynamics.

Related Work. [15] introduced SAM and presented upper bounds on the generalization performance of SAM. Their bound suggests that SAM should generalize well; however, their result does not completely explain why SAM performs better than vanilla training with Stochastic Gradient Descent (SGD). Most current papers explain the statistical performance of SAM by analyzing the loss landscape and geometry of the SAM solution, especially its sharpness. In particular, continuous-time analysis (i.e., when the step size is infinitesimally small) has been used to show that SAM can choose sparser solutions [2], can regularize the eigenvalues of the Hessian of the loss [37], and eventually selects flatter solutions [10]. The recent work of [35] also explores the connections between SAM and variational inference, and how SAM seeks flatter solutions.

[Figure caption: Note that our theoretical plots capture the relative profile of real data.]
[1] show that SAM regularizes the eigenvalues of the Hessian, resulting in a flatter solution. Another interesting work is [4], which explores SAM's trajectory for quadratic loss functions and explains how SAM can lead to flatter minima. Although the fact that SAM solutions are flatter partially explains SAM's good generalization, we note that sharp minima can generalize well too [12,23], and sharpness can in general be manipulated by reparameterizing the network [21]. This shows the need for a statistical analysis of SAM, rather than a geometric one.

Summary of Results and Approach. In a departure from the literature, we directly study the statistical properties and performance of SAM. To this end, we consider two statistical problems, a neural network with one hidden layer and (indefinite) kernel regression, as kernel methods have been shown to be closely related to DNNs and understanding kernel methods is valuable in the DNN literature [8,16,17,7,22]. We present a crisp characterization of the prediction error of SAM and Gradient Descent (GD) for these two problems over the course of the algorithm, and show that under certain conditions SAM can have lower prediction error than GD. In our analysis, we study both convex and non-convex problems and show that SAM works particularly well in the non-convex cases, where GD might have unbounded error, unlike SAM. Moreover, we show that SAM solutions in our setup tend to be flatter than GD's, which theoretically establishes the correlation between statistical performance and sharpness. On a technical level, we characterize the SAM trajectory for the aforementioned problems and show a bias-variance trade-off for the prediction error of the algorithm, where the bias generally decreases over iterations and the variance increases. We show that SAM has lower bias than GD, while GD's variance can be lower than SAM's.
This shows that SAM performs better when the bias is the dominant term, for example when the noise is not too large or the total number of epochs is finite, as is the case in practice [32], especially for large models [19,5]. Moreover, we show that in non-convex settings GD can have unbounded bias and variance while SAM is able to keep the error bounded, showing better performance. Our numerical results on several models, including deep neural networks, agree with our theoretical insights.

We use numerical experiments to illustrate some of our results. In Figure 1(a), we compare the SAM and GD classification error on the validation set over epochs when training a ResNet50 network on the CIFAR100 dataset (see Section 5 for more details on the numerical experiments). We see that SAM has better accuracy than GD in almost all epochs, especially in the earlier phases of training, which can be explained by our theory. As the training labels are not noisy in this case, the bias is likely to be dominant and, as we show, SAM's bias is less than GD's at all iterations under our model assumptions. In fact, in Figure 1(b) we show the error plot calculated from our theory for a noiseless model, which follows the same trends as Figure 1(a), showing how our theory can explain the differences between GD and SAM in practice. In another case, we compare the performance of SAM and GD on CIFAR10 with training label noise in Figure 1(c). Both methods perform worse in later epochs, which can be due to the variance becoming dominant. However, GD performs even worse than SAM in the noisy setup. As we show, in non-convex settings GD can have larger (and even unbounded) variance than SAM, which explains the performance gap seen here. In particular, Figure 1(d) plots the error from our theory for a noisy model, which again shows trends similar to the real-data plots, such as the non-monotonicity of the error and the increasing gap between SAM and GD in later iterations. We note that our approach is different from previous work.
Instead of studying geometric properties of SAM's solution, such as its sharpness, which can partially explain why SAM generalizes better, we directly study the statistical performance of SAM. Hence, we present a direct explanation for SAM's performance in practice, rather than relying on the correlation between flatness and generalization. Moreover, our analysis differs from previous work in that it does not require the step size to be infinitesimally small, unlike most current work [10,2,37,35]. This provides insights for the non-infinitesimal step sizes used in practice.

Our contributions. Our contributions in this paper can be summarized as follows: (i) we study the statistical performance of SAM for one-hidden-layer neural networks and (indefinite) kernel regression; (ii) we show that for these two problem classes, SAM has lower prediction error than GD under certain conditions, especially in non-convex settings; (iii) we show that in our settings SAM solutions tend to be flatter, confirming the correlation between generalization and flatness; (iv) we verify our theoretical findings using numerical experiments on synthetic and real data, and on models including DNNs.

SAM: An Overview

Let $f : \mathbb{R}^p \to \mathbb{R}$ be the objective function that we seek to minimize. In many machine learning applications in particular, we have $f(w) = \sum_{i=1}^n f_i(w)/n$, where $f_i$ is the loss value corresponding to the $i$-th observation. A standard approach to minimizing $f$ is the GD approach, where the model parameters, or weights, are updated by the iterations

$$w^{\mathrm{GD}}_{k+1} = w^{\mathrm{GD}}_k - \eta \nabla f(w^{\mathrm{GD}}_k), \qquad (1)$$

where $\eta > 0$ is the step size or learning rate. In SAM [15], the goal is to find a flatter solution that does not fluctuate too much in a neighborhood of the solution. Therefore, SAM modifies $f$ as

$$f^{\mathrm{SAM}}(w) = \max_{\|\varepsilon\|_2 \le \rho} f(w + \varepsilon), \qquad (2)$$

and GD is then applied to $f^{\mathrm{SAM}}$, which captures the locally worst objective.
The hope is that by minimizing $f^{\mathrm{SAM}}$, a solution is found that does not perform badly locally, and hence the local loss landscape is flat. As calculating $f^{\mathrm{SAM}}$ in closed form is difficult, [15] suggest approximating $f$ with a linear function, i.e.,

$$\operatorname*{argmax}_{\|\varepsilon\|_2 \le \rho} f(w + \varepsilon) \approx \operatorname*{argmax}_{\|\varepsilon\|_2 \le \rho} f(w) + \varepsilon^T \nabla f(w).$$

The linear approximation leads to [15]: $f^{\mathrm{SAM}}(w) \approx f\big(w + \rho \nabla f(w) / \|\nabla f(w)\|_2\big)$. Taking the gradient of this approximation and ignoring second-order terms, the SAM updates are given as (we refer to [15] for the details of the derivation)

$$w^{\mathrm{SAM}}_{k+1} = w^{\mathrm{SAM}}_k - \eta \nabla f\big(w^{\mathrm{SAM}}_k + \rho \nabla f(w^{\mathrm{SAM}}_k)\big). \qquad (3)$$

We note that in (3) we ignored the normalization of the inner gradient; recent work [2] has shown that the effect of this normalization can be neglected, and we follow suit. We also note that our analysis in this work is done directly on (3) (based on the linear approximation to $f$), which is what is implemented in practice, unlike the original loss $f^{\mathrm{SAM}}$, which is hard to compute.

Overview of Results

Throughout the paper, we assume $n$ data points $(y_i, x_i)_{i=1}^n$ are given with $x_i \in \mathbb{R}^d$. In our statistical model, each observation is $y_i = y^*_i + \varepsilon_i$, where $y^*_i$ is the true noiseless observation and the $\varepsilon_i$ are zero-mean independent noise values with $\mathbb{E}[\varepsilon \varepsilon^T] = \sigma^2 I$. We let $\Phi(w; x_i)$ be our predicted value for observation $i$, where $w \in \mathbb{R}^p$ parameterizes the model. We consider the least-squares loss

$$f(w) = \frac{1}{2} \sum_{i=1}^n \big(y_i - \Phi(w; x_i)\big)^2. \qquad (4)$$

The expected prediction error for a solution $w$ is therefore defined as

$$\mathrm{Error}(w) = \mathbb{E}\Big[\frac{1}{n} \sum_{i=1}^n \big(y^*_i - \Phi(w; x_i)\big)^2\Big]. \qquad (5)$$

One can hence decompose the error as

$$\mathrm{Error}(w) = \underbrace{\frac{1}{n} \sum_{i=1}^n \big(y^*_i - \mathbb{E}[\Phi(w; x_i)]\big)^2}_{\mathrm{Bias}^2(w)} + \underbrace{\frac{1}{n} \, \mathbb{E}\Big[\sum_{i=1}^n \big(\Phi(w; x_i) - \mathbb{E}[\Phi(w; x_i)]\big)^2\Big]}_{\mathrm{Var}(w)}. \qquad (6)$$

The bias term in (6) captures how far the expected predicted value is from the true model, while the variance term is the variance of the prediction resulting from the noise.
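The GD iteration (1) and the SAM iteration (3) differ only in the point at which the gradient is evaluated. A minimal sketch in plain Python on a toy one-dimensional quadratic (the function names and values here are our own, not the paper's):

```python
def gd_step(w, grad_f, eta):
    """One GD step, eq. (1): w - eta * grad f(w)."""
    return w - eta * grad_f(w)

def sam_step(w, grad_f, eta, rho):
    """One (un-normalized) SAM step, eq. (3):
    w - eta * grad f(w + rho * grad f(w))."""
    w_ascent = w + rho * grad_f(w)  # inner ascent toward the locally worst point
    return w - eta * grad_f(w_ascent)

# Toy quadratic f(w) = 0.5 * d * w**2 with grad f(w) = d * w.
d, eta, rho = 1.0, 0.1, 0.5
grad = lambda w: d * w
w_gd = w_sam = 2.0
for _ in range(10):
    w_gd = gd_step(w_gd, grad, eta)          # shrinks by 1 - eta*d = 0.9 per step
    w_sam = sam_step(w_sam, grad, eta, rho)  # shrinks by 1 - eta*d - eta*rho*d**2 = 0.85
```

On $f(w) = \frac{1}{2} d w^2$ the SAM step contracts by the factor $1 - \eta d - \eta \rho d^2$ per iteration versus GD's $1 - \eta d$; these are exactly the per-eigenvalue factors that reappear in the closed forms of Theorems 3 and 4.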
We discuss the details of the models we study in Section 3.1 for the neural network model and in Section 3.2 for the kernel regression case. Our goal is to show that, under certain conditions, SAM has lower statistical error than GD. To this end, we will characterize the bias and variance terms in (6). Specifically, we show that in all cases we consider, SAM has a lower bias than GD. Moreover, SAM has higher variance in convex settings, but significantly lower variance in non-convex settings. This quantifies the sense in which SAM is well-suited for non-convex problems.

Statistical Models

Before stating our results, we discuss the two statistical models that we consider and present a formal problem definition for each.

Neural Networks with a Hidden Layer

Let $\varphi : \mathbb{R} \to \mathbb{R}$ be a possibly non-linear activation function. A neural network with one hidden layer and $L$ hidden neurons can be defined as $\Phi(w; x) = \sum_{l=1}^L \varphi(x^T w^{(l)})$, where $w^{(l)} \in \mathbb{R}^d$ and $w = (w^{(1)}, \cdots, w^{(L)}) \in \mathbb{R}^p$ with $p = dL$. For the rest of the paper, we take the activation to be the ReLU, $\varphi(x) = \max(0, x)$. Let $a(w; x) = (a_1(w; x), \cdots, a_L(w; x)) \in \mathbb{R}^p$ where, for $l \in [L]$, $a_l(w; x) \in \mathbb{R}^d$ is

$$a_l(w; x) = \begin{cases} 0 & \text{if } x^T w^{(l)} \le 0 \\ x & \text{if } x^T w^{(l)} > 0. \end{cases} \qquad (7)$$

Under this notation, for the ReLU activation we have $\Phi(w; x) = a(w; x)^T w$. We study the sequence $w^{\mathrm{SAM}}_k$ from (3), where $f(w)$ is given in (4). In particular, we let $w^{\mathrm{GD}}_k$ be the sequence from (1), i.e., (3) with $\rho = 0$. We assume both SAM and GD use the same step size $\eta$ and both start from the same initial solution $w_0$. We also consider the following assumptions.

(A1) There exists $\bar{k} \ge 1$ such that for $0 \le k \le \bar{k}$ and $i \in [n]$, we have $a(w^{\mathrm{SAM}}_k; x_i) = a(w^{\mathrm{GD}}_k; x_i) = a(w_0; x_i)$.

(A2) There exists $\bar{w} \in \mathbb{R}^p$ such that $a(\bar{w}; x_i) = a(w_0; x_i)$ for $i \in [n]$ and $y^*_i = a(\bar{w}; x_i)^T \bar{w}$.

Assumption (A1) states that the quantities $x^T w^{(l)}$ do not change sign over the course of the algorithm, to avoid the non-differentiability of the ReLU.
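To make the feature-map notation concrete, here is a small NumPy sketch of $a(w; x)$ from (7) and the identity $\Phi(w; x) = a(w; x)^T w$ (the flat array layout and function names are our own assumptions):

```python
import numpy as np

def relu_features(w, x, d, L):
    """a(w; x) from eq. (7): for each hidden unit l, the block equals x if
    x^T w^(l) > 0 and the zero vector otherwise."""
    a = np.zeros(d * L)
    for l in range(L):
        w_l = w[l * d:(l + 1) * d]
        if x @ w_l > 0:
            a[l * d:(l + 1) * d] = x
    return a

def phi(w, x, d, L):
    """One-hidden-layer ReLU network: Phi(w; x) = sum_l max(0, x^T w^(l))."""
    return sum(max(0.0, x @ w[l * d:(l + 1) * d]) for l in range(L))

# Check the identity Phi(w; x) = a(w; x)^T w on random data.
rng = np.random.default_rng(0)
d, L = 3, 4
w = rng.standard_normal(d * L)
x = rng.standard_normal(d)
assert np.isclose(relu_features(w, x, d, L) @ w, phi(w, x, d, L))
```

Assumption (A1) states that these active-unit indicator patterns stay fixed along the iterates, so the network behaves like a linear model with design rows $a(w_0; x_i)$.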
Assumption (A1) can be ensured by choosing a sufficiently small step size, or by studying the method near a local minimum where the solution does not change significantly, a common approach to studying DNNs [39,38,29]. Moreover, as $a(\bar{w}; x) \in \mathbb{R}^{dL}$, Assumption (A2) is likely to hold if $L$ is sufficiently large (i.e., the total number of hidden neurons is large). It is worth noting that if $\varphi(x) = x$ and $L = 1$, then $\Phi(w; x) = w^T x$, which reduces the model described above to the ordinary least-squares problem. Taking $a(w; x) = x$, we have $\Phi(w; x) = a(w; x)^T w = x^T w$, similar to the ReLU case above. Moreover, Assumption (A1) holds trivially in the linear regression case, and Assumption (A2) simplifies to the existence of $\bar{w} \in \mathbb{R}^d$ such that $y^*_i = x_i^T \bar{w}$ for $i \in [n]$, which is standard in the linear regression literature. Therefore, the framework developed here for ReLU networks can be readily applied to the linear regression problem.

Kernel Regression

Kernel methods and feature mappings have been a staple of machine learning algorithms in many applications [20]. Moreover, kernel methods have been studied to better understand optimization and generalization in machine learning [26]. This is especially interesting as a long line of work has explored connections and similarities between DNNs and kernels [8,16,17,7,22], making the analysis of kernel methods even more important. Let $K : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ be a kernel and $X \in \mathbb{R}^{n \times d}$ be the model matrix with rows $x_1, \cdots, x_n$. We define the Gram matrix associated with this kernel and data as $K_X = [K(x_i, x_j)]$. A classical assumption in kernel learning is that $K$ is Positive Semidefinite (PSD), that is, $K_X$ is PSD for any $X \in \mathbb{R}^{n \times d}$ and $n \ge 1$. However, there has been growing interest in learning with indefinite kernels, as they often appear in practice due to noisy observations and/or certain data structures (see [27,28,30,31] and references therein). Therefore, throughout this paper, we do not assume $K$ is PSD.
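As a concrete illustration (toy kernels of our own choosing, not from the paper), the difference of two PSD kernels readily produces an indefinite Gram matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))

def rbf(xi, xj, gamma):
    """Gaussian RBF kernel, PSD for any gamma > 0."""
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

def k_indef(xi, xj):
    """Toy indefinite kernel: difference of two PSD (RBF) kernels."""
    return rbf(xi, xj, 0.1) - rbf(xi, xj, 2.0)

K = np.array([[k_indef(xi, xj) for xj in X] for xi in X])
eigs = np.linalg.eigvalsh(K)
# The Gram matrix is symmetric but indefinite: eigenvalues of both signs
# (its diagonal, hence its trace, is zero while the matrix is nonzero).
assert eigs.min() < 0 < eigs.max()
```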
In fact, we assume $K = K_+ - K_-$, where $K_+, K_-$ are two PSD kernels, so that $K$ may be indefinite. We use $\mathcal{H}_K$ to denote the Reproducing Kreĭn Kernel Space (RKKS) for which $K$ is the reproducing kernel. Note that $\mathcal{H}_K = \mathcal{H}_{K_+} \oplus \mathcal{H}_{K_-}$, where $\mathcal{H}_{K_+}, \mathcal{H}_{K_-}$ are the Reproducing Kernel Hilbert Spaces (RKHS) associated with $K_+, K_-$ and $\oplus$ denotes the orthogonal direct sum [27]. We also assume $K$ is symmetric, that is, $K(x_i, x_j) = K(x_j, x_i)$ for all $x_i, x_j$. Given pairs of observations $(y_i, x_i)_{i=1}^n$, we seek to learn a function $h \in \mathcal{H}_K$ such that $h(x_i) \approx y_i$ for all $i$. To this end, for $h \in \mathcal{H}_K$ we define the loss

$$L[h] = \frac{1}{2} \sum_{i=1}^n \big(h(x_i) - y_i\big)^2. \qquad (8)$$

We note that $L[h]$ is a function of $h \in \mathcal{H}_K$. The gradient of this loss can then be calculated as

$$\nabla L[h] = \sum_{i=1}^n \big(h(x_i) - y_i\big) K(x_i, \cdot) \in \mathcal{H}_K, \qquad (9)$$

where $K(x, \cdot) : \mathbb{R}^d \to \mathbb{R}$ denotes the evaluation function, $K(x, \cdot)(y) = K(x, y)$. (We provide a short review of kernel gradients in Appendix B.) Although the SAM algorithm was introduced in the context of losses on $\mathbb{R}^p$, one can mimic SAM in the RKKS. Specifically, we define KernelSAM, an equivalent of the SAM algorithm in the RKKS, by the iterations

$$h^{\mathrm{SAM}}_{k+1} = h^{\mathrm{SAM}}_k - \eta \nabla L\big[h^{\mathrm{SAM}}_k + \rho \nabla L[h^{\mathrm{SAM}}_k]\big]. \qquad (10)$$

Our first result is a representer theorem for KernelSAM. For $w \in \mathbb{R}^n$, we will use the notation $w^T K(X, \cdot) := \sum_{i=1}^n w_i K(x_i, \cdot) \in \mathcal{H}_K$.

Theorem 1. Suppose $h^{\mathrm{SAM}}_0 = 0$. Then, for $k \ge 1$, there exists $w^{\mathrm{SAM}}_k \in \mathbb{R}^n$ such that $h^{\mathrm{SAM}}_k = (w^{\mathrm{SAM}}_k)^T K(X, \cdot)$.

Theorem 1 shows that at each iteration the SAM solution can be represented as a linear combination of the $K(x_i, \cdot)$, which allows us to study $w^{\mathrm{SAM}}_k$ directly. Therefore, using the notation from Section 2.1,

$$\Phi(w^{\mathrm{SAM}}_k; x) = \sum_{j=1}^n (w^{\mathrm{SAM}}_k)_j K(x_j, x) = h^{\mathrm{SAM}}_k(x). \qquad (11)$$

Similar to the case of ReLU networks, we seek to characterize the error of KernelSAM.
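By the representer form $h_k = w_k^T K(X, \cdot)$, the functional gradient (9) of $h = w^T K(X, \cdot)$ has coefficient vector $K_X w - y$, so iteration (10) can be run directly on $w \in \mathbb{R}^n$. A sketch with a toy PSD Gram matrix of our own construction (in this convex case the iteration converges to an interpolant with $K_X w = y$, which we check):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n))
K = B @ B.T + np.eye(n)            # toy symmetric PSD Gram matrix K_X
y = rng.standard_normal(n)

def grad_coef(w):
    """Coefficient vector of the functional gradient (9) at h = w^T K(X, .):
    grad L[h] = sum_i (h(x_i) - y_i) K(x_i, .) has coefficients K w - y."""
    return K @ w - y

eta = 0.9 / np.linalg.eigvalsh(K).max()  # conservative step size
rho = 0.1 * eta
w = np.zeros(n)                    # h_0 = 0, as in Theorem 1
for _ in range(2000):
    w = w - eta * grad_coef(w + rho * grad_coef(w))  # KernelSAM (10) in coefficients

# For a PSD Gram matrix the fixed point satisfies K w = y.
assert np.allclose(K @ w, y, atol=1e-6)
```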
To this end, we assume the model is well-specified: there exists $\bar{w} \in \mathbb{R}^n$ such that

$$y_i = \underbrace{\sum_{j=1}^n \bar{w}_j K(x_j, x_i)}_{y^*_i} + \varepsilon_i, \qquad (12)$$

where the $\varepsilon_i$ are noise values, independent of $X$, with $\mathbb{E}[\varepsilon] = 0$ and $\mathbb{E}[\varepsilon \varepsilon^T] = \sigma^2 I$. With this notation, we let $\bar{h} = \bar{w}^T K(X, \cdot)$ be the noiseless estimator. The expected error for $h^{\mathrm{SAM}}_k = (w^{\mathrm{SAM}}_k)^T K(X, \cdot) \in \mathcal{H}_K$ is defined as

$$\mathrm{Error}(w^{\mathrm{SAM}}_k) = \mathbb{E}\Big[\frac{1}{n} \sum_{i=1}^n \big(\bar{h}(x_i) - h^{\mathrm{SAM}}_k(x_i)\big)^2\Big] = \mathbb{E}\Big[\frac{1}{n} \sum_{i=1}^n \big(y^*_i - \Phi(w^{\mathrm{SAM}}_k; x_i)\big)^2\Big],$$

with $\Phi(\cdot; \cdot)$ defined in (11). Our final result in this section shows that, under the model discussed here, KernelSAM is equivalent to applying SAM to a (non-convex) quadratic objective.

Theorem 2. The solution $w^{\mathrm{SAM}}_k$ defined in Theorem 1 follows (3) with

$$f(w) = \frac{1}{2} (w - \bar{w})^T K_X (w - \bar{w}) - w^T \varepsilon. \qquad (13)$$

As we study indefinite kernels, $K_X$ may be indefinite and therefore $f(w)$ in Theorem 2 can be non-convex. This shows that our analysis of SAM applies both to convex functions (as in the linear regression case discussed in Section 3.1 and for PSD kernels) and to non-convex functions, as for indefinite kernels.

Main Results

ReLU Networks

In this section, we review our theoretical results for the ReLU networks discussed in Section 3.1. We note that, as discussed, this model also applies readily to the least-squares linear regression problem, and therefore we do not study that problem separately. Let $A \in \mathbb{R}^{n \times p}$ be the matrix whose $i$-th row equals $a(w_0; x_i)$, and consider the Singular Value Decomposition (SVD) $A = V \Sigma U^T$, partitioned as $V = [V_1, V_2]$ and $U = [U_1, U_2]$, where $\Sigma_1 \in \mathbb{R}^{r \times r}$ collects the nonzero singular values of $A$ and $r$ is the rank of $A$. We let $D_1 = \Sigma_1^2$. Theorem 3 characterizes the error for the neural model discussed in Section 3.1.

Theorem 3. Suppose $w_0 = U_1 U_1^T w_0$ and $0 \prec I - \eta D_1 - \eta \rho D_1^2 \prec I$, and let $u = U_1^T(\bar{w} - w_0)$.
Then, under the model from Section 3.1, one has for $k \le \bar{k}$

$$\mathrm{Bias}^2(w^{\mathrm{SAM}}_k) = \frac{1}{n} \sum_{i=1}^r (1 - \eta d_i - \eta \rho d_i^2)^{2k} d_i u_i^2, \qquad \mathrm{Var}(w^{\mathrm{SAM}}_k) = \frac{\sigma^2}{n} \mathrm{Tr}\Big[\big(I - (I - \eta D_1 - \eta \rho D_1^2)^k\big)^2\Big]. \qquad (14)$$

In particular, for $\bar{k} \ge k \ge 0$ one has $\mathrm{Bias}^2(w^{\mathrm{SAM}}_k) \le \mathrm{Bias}^2(w^{\mathrm{GD}}_k)$ and $\mathrm{Var}(w^{\mathrm{SAM}}_k) \ge \mathrm{Var}(w^{\mathrm{GD}}_k)$.

(An explicit expression for the updates can be found in (E.8).)

We note that Theorem 3 applies to GD by setting $\rho = 0$. Theorem 3 precisely characterizes the expected SAM trajectory and its corresponding bias and variance terms for the neural network model. Specifically, we see that the bias of SAM at each iteration is smaller than that of GD, while the variance of SAM is larger. As $k$ increases, the bias term decreases while the variance increases. Therefore, if the optimization is run for finitely many steps, the bias term is more likely to be dominant, and as SAM has lower bias, SAM is more likely to outperform GD. This intuitive argument is formalized in Proposition 1.

Proposition 1. Suppose there exists a numerical constant $c_0 > 1$ such that

$$1 - \eta d_r \le c_0 (1 - \eta d_1 - \eta \rho d_1^2), \qquad 1 - \eta d_1 \ge \sqrt{c_0} \, (1 - \eta d_r - \eta \rho d_r^2). \qquad (15)$$

Let $\mathrm{SNR} = \|X(\bar{w} - w_0)\|_2^2 / (r \sigma^2)$ and assume $\mathrm{SNR} \ge 1$. Under the assumptions of Theorem 3, if

$$k \le \frac{\log[2/(\mathrm{SNR} + 1)]}{\log\big[(1 - \eta d_1 - \eta \rho d_1^2)^2 / (1 - \eta d_r - \eta \rho d_r^2)\big]} \wedge \bar{k},$$

one has $\mathrm{Error}(w^{\mathrm{SAM}}_k) \le \mathrm{Error}(w^{\mathrm{GD}}_k)$.

Proposition 1 shows that, assuming the noise is not too large, SAM has lower error than GD if the optimization is run for finitely many steps.

Remark 1. As noted, in practice DNNs are trained for a limited number of epochs [32], and it is believed [19,5] that recent large neural networks, especially language models, tend to be undertrained due to resource limitations. This shows that the assumption of finite $k$ is realistic.

Remark 2. An interesting special case of Theorem 3 is the noiseless case $\sigma = 0$. Theorem 3 implies that SAM has lower error than GD for all iterations $k \ge 1$ in this case.

Remark 3.
In Appendix C, we discuss the selection of $\eta, \rho$ to ensure that condition (15) holds. At a high level, condition (15) suggests taking $\rho \ge \eta$ to take advantage of SAM's performance.

Remark 4. Proposition 1 suggests that the total number of iterations should be smaller in noisy cases. As we demonstrate numerically in Section 5, this is necessary to avoid overfitting to the noise.

Kernel Regression

Assume the eigenvalue decomposition $K_X = U D U^T$. For simplicity, we assume $\mathrm{rank}(K_X) = n$. We let $U_1, D_1$ and $U_2, D_2$ collect the eigenvectors and eigenvalues of $K_X$ corresponding to positive and negative eigenvalues, respectively. We also let $D = \mathrm{diag}(d_1, \cdots, d_n)$ with $d_1 \ge \cdots \ge d_n$.

Theorem 4. Suppose $h^{\mathrm{SAM}}_0 = 0$ and let $u = U^T \bar{w}$. Then, $\mathrm{Var}(w^{\mathrm{SAM}}_k) = \mathrm{Var}_+(w^{\mathrm{SAM}}_k) + \mathrm{Var}_-(w^{\mathrm{SAM}}_k)$, where

$$\mathrm{Bias}^2(w^{\mathrm{SAM}}_k) = \frac{1}{n} \sum_{i=1}^n (1 - \eta d_i - \eta \rho d_i^2)^{2k} d_i^2 u_i^2,$$
$$\mathrm{Var}_+(w^{\mathrm{SAM}}_k) = \frac{\sigma^2}{n} \mathrm{Tr}\Big[\big(I - (I - \eta D_1 - \eta \rho D_1^2)^k\big)^2\Big],$$
$$\mathrm{Var}_-(w^{\mathrm{SAM}}_k) = \frac{\sigma^2}{n} \mathrm{Tr}\Big[\big(I - (I - \eta D_2 - \eta \rho D_2^2)^k\big)^2\Big]. \qquad (16)$$

In Theorem 4, $\mathrm{Var}_+$ and $\mathrm{Var}_-$ capture the variance from the positive and negative eigenvalues of $K_X$, respectively. As we see, the behavior in a non-convex case where some eigenvalues are negative is markedly different from the case where all eigenvalues are non-negative. In particular, if $d_n < 0$, not only is $\mathrm{Bias}^2(w^{\mathrm{SAM}}_k) \le \mathrm{Bias}^2(w^{\mathrm{GD}}_k)$, but the GD bias actually diverges to infinity, while the SAM bias converges to zero under the assumptions of Theorem 4. In terms of variance, we see that, similarly, $\mathrm{Var}_-$ for SAM stays bounded, while it can diverge to infinity for GD. This shows that in the indefinite setting, GD can have unbounded error in the limit $k \to \infty$ while SAM can keep the error bounded. We also see that $\mathrm{Var}_+$ behaves similarly to the variance in the ReLU case (Theorem 3), implying that when the number of iterations is limited, SAM has smaller error than GD.
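The closed forms in (16) make this contrast easy to check numerically. A sketch with a made-up indefinite spectrum (all values our own): the SAM bias sits below the GD bias at every iteration, the GD bias blows up along the negative eigenvalue, and the SAM bias still vanishes because choosing $\rho |d| > 1$ turns the SAM factor $1 - \eta d - \eta \rho d^2$ into a contraction even for $d < 0$:

```python
import numpy as np

def factor(d, eta, rho):
    """Per-eigenvalue factor 1 - eta*d - eta*rho*d^2 appearing in (16)."""
    return 1 - eta * d - eta * rho * d**2

def bias2(d, u, eta, rho, k, n):
    """Bias^2 from (16): (1/n) sum_i factor_i^(2k) d_i^2 u_i^2."""
    return np.sum(factor(d, eta, rho) ** (2 * k) * d**2 * u**2) / n

# Toy indefinite spectrum with one negative eigenvalue.
d = np.array([2.0, 0.5, -1.0])
u = np.ones(3)
n, eta, rho = 3, 0.05, 2.0   # rho * |d_min| = 2 > 1, so SAM contracts on d = -1 too

# SAM bias is below GD bias (rho = 0) at every k.
for k in [1, 5, 20, 100]:
    assert bias2(d, u, eta, rho, k, n) <= bias2(d, u, eta, 0.0, k, n)

# GD bias diverges along the negative eigenvalue (|1 - eta*d| = 1.05 > 1),
# while the SAM bias vanishes (all SAM factors lie strictly inside (-1, 1)).
assert bias2(d, u, eta, 0.0, 1000, n) > 1e6
assert bias2(d, u, eta, rho, 1000, n) < 1e-6
```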
These observations show that SAM is even better suited to the non-convex case, as it performs well for both finite and infinite numbers of iterations. This explanation is formalized in Proposition 2.

Proposition 2. Suppose there exists a numerical constant $c_0 > 1$ such that

$$1 - \eta d_r \le c_0 (1 - \eta d_1 - \eta \rho d_1^2), \qquad 1 - \eta d_1 \ge \sqrt{c_0} \, (1 - \eta d_r - \eta \rho d_r^2), \qquad (17)$$

where $r$ is such that $d_r > 0$ and $d_{r+1} < 0$. Moreover, assume there exists $\epsilon > 0$ such that for $j \ge r + 1$,

$$1 - \eta d_j - \eta \rho d_j^2 \le 1 \le 1 + \epsilon \le 1 - \eta d_j.$$

Let $\mathrm{SNR} = \|K_X \bar{w}\|_2^2 / (r \sigma^2)$ and assume $\mathrm{SNR} \ge 1$. Under the assumptions of Theorem 4, if

$$k \le \frac{\log[2/(\mathrm{SNR} + 1)]}{\log\big[(1 - \eta d_1 - \eta \rho d_1^2)^2 / (1 - \eta d_r - \eta \rho d_r^2)\big]} \quad \text{and} \quad k \ge \frac{\log 2}{\log(1 + \epsilon)}, \qquad (18)$$

one has $\mathrm{Error}(w^{\mathrm{SAM}}_k) \le \mathrm{Error}(w^{\mathrm{GD}}_k)$. Moreover, if $d_n < 0$,

$$\lim_{k \to \infty} \mathrm{Error}(w^{\mathrm{GD}}_k) = \infty, \qquad \lim_{k \to \infty} \mathrm{Error}(w^{\mathrm{SAM}}_k) < \infty.$$

Similar to Proposition 1, Proposition 2 shows that when the total number of iterations is not too large, SAM performs better. Moreover, as discussed, SAM keeps the error bounded in the non-convex case, while GD's error diverges as $k \to \infty$.

SAM solutions are flat. As discussed, numerous papers have numerically observed a correlation between flatness and generalization, where flatter solutions tend to generalize better. In our work, we directly explained how SAM can perform better statistically than GD. However, one might ask whether such a correlation between flatness and error exists in our setup. Here, we answer this question in the affirmative. Let us define the sharpness for SAM (and similarly for GD) as the expected local fluctuation of the loss,

$$\kappa^{\mathrm{SAM}}_k = \max_{\|\varepsilon\|_2 \le \rho_0} \mathbb{E}\big[f(\mathbb{E}[w^{\mathrm{SAM}}_k] + \varepsilon) - f(\mathbb{E}[w^{\mathrm{SAM}}_k])\big] \qquad (19)$$

for some $\rho_0 > 0$ which may differ from $\rho$, where $f$ is given in (4) for the ReLU case and in (13) for the kernel regression setup. Note that this can be considered the expected value of the sharpness defined by [15], $\max_{\|\varepsilon\|_2 \le \rho_0} f(w + \varepsilon) - f(w)$, which motivates the SAM algorithm.

Proposition 3.
(1) Under the assumptions of Theorem 3, for $k \ge 1$,

$$\kappa^{\mathrm{GD}}_k - \kappa^{\mathrm{SAM}}_k \ge \rho_0^2 \, \frac{d_r - d_1}{2} + \rho_0 \left( \sqrt{\sum_{i=1}^r (1 - \eta d_i)^{2k} d_i^2 u_i^2} - \sqrt{\sum_{i=1}^r (1 - \eta d_i - \eta \rho d_i^2)^{2k} d_i^2 u_i^2} \right).$$

(2) Under the assumptions of Theorem 4, if $d_n < 0$,

$$\lim_{k \to \infty} \kappa^{\mathrm{GD}}_k = \infty > \kappa^{\mathrm{SAM}}_k \quad \forall k \ge 1.$$

Proposition 3 shows that for the ReLU setup, SAM has lower sharpness than GD for sufficiently small $\rho_0$ and $k$. In particular, if $d_r = d_1$, SAM has lower sharpness for all $k, \rho_0 > 0$. Moreover, for the indefinite kernel setup, the proposition shows that GD has unbounded sharpness, unlike SAM. This further confirms the connections between generalization and flatness observed theoretically [11] and numerically [15,39].

Numerical Experiments

We conduct various numerical experiments on linear models, kernel methods, and deep neural networks to examine the theory we developed and to gain further insight into SAM. Due to space limitations, we only discuss the main insights from our DNN experiments here. We use the CIFAR10/100 datasets [25] and the noisy versions of CIFAR10 provided by [36] to train a ResNet50 network [18]. Additional results for a ResNet18 network, as well as experiments on linear/kernel models, can be found in Appendix F.

Large ρ vs. small ρ: First, we consider the case with clean (noiseless) labels, where one can expect the bias to be the dominant term. In this case, our theory suggests that taking a larger $\rho$ lowers the error. Moreover, our theory anticipates that SAM performs especially better than (S)GD in the earlier epochs, where the difference in bias is even larger. We show that these insights hold true in our experiments. In particular, in Figure 2 [Two Left Panels] we observe that when $\rho > 0$, SAM performs better than GD in almost all epochs. We see that as we increase $\rho$, SAM performs quite well over the first 150 epochs. However, the gains from a large $\rho$ tend to fade in later epochs, as smaller values of $\rho$ reach low bias values as well.
Nevertheless, we see that in terms of accuracy it is better to choose a larger ρ rather than a small one (the accuracy values are given in the figure legends; see also Figure F.6 for more details). In the case of CIFAR10, ρ = 0.1 is the best value of ρ in our experiments, and taking ρ = 0.5 results in a smaller loss of accuracy than taking ρ = 0 (i.e. GD). In the case of CIFAR100, ρ = 0.5 results in better accuracy than ρ = 0.1, which shows that, generally, overestimating ρ is less harmful than underestimating it. This mostly agrees with the theory that taking a larger ρ in noiseless settings is better, although we note that in practice the variance might not be exactly zero, so a large ρ might perform slightly worse than a smaller one, as is the case for CIFAR10.

Early stopping in noisy settings: Next, we consider a noisy setting and show that, to avoid overfitting, training has to be stopped early, showing that the assumption of a finite number of epochs is realistic. We use two versions of noisy CIFAR10, the random label 1 and worse labels sets from [36], which we call random and worse, respectively. The random version has about 17% noise in the training labels, while worse has about 40% noise. The validation labels for both datasets are noiseless. As we see in Figure 2 [Two Right Panels], as the noise increases both methods tend to overfit in the later stages of training, and the overfitting is stronger when the noise is higher. This shows that in noisy settings training has to be stopped earlier as the noise increases.

Performance under noise: As we see from Figure 2, in noisy settings the gap between SAM and GD is even larger. This can be explained by the fact that in non-convex settings GD can have unbounded variance (cf. Theorem 4), which leads to worse performance of GD, especially in later epochs.

Decaying ρ helps: We observe that having a large ρ helps in the initial phases of training, while having a smaller ρ might help in the later phases.
Therefore, we propose to start SAM with a large value of ρ to decrease the bias, and then decay ρ over the course of the algorithm to limit the increase of the variance (the details are discussed in Appendix F). The results for this case are shown in Table 1. Full is the accuracy at the end of training (epoch 200) and Early corresponds to early stopping (epoch 120 for SGD and 50 for SAM-based methods). As can be seen, starting with a larger-than-optimal ρ and decaying it leads to accuracy results similar to or slightly better than using the optimal fixed ρ. Interestingly, using a large ρ leads to considerably better performance if training has to be stopped early, which is often the case in practice, especially for large models [19,5], due to resource limitations.

Conclusion and Future Work

We presented a direct explanation of why SAM generalizes well by studying the statistical performance of SAM/GD for two classes of problems. Specifically, we showed that SAM works well for neural networks with a hidden ReLU layer if the noise is not too high. We also showed that in indefinite kernel regression, corresponding to a non-convex optimization problem, SAM can have bounded error while GD has unbounded error. An interesting question is how the stochastic version of SAM differs from the full-batch setting studied here. In Appendix D, we study a stochastic version of SAM and compare it to SGD for a special case. As we see, SAM tends to benefit from stochasticity even more, especially in high-dimensional settings.

A Details of the example from Section 1

Under the notation used in Section 4, the noiseless example shows a one-dimensional case with d_1 = 1, η = 0.015, u = 1 and ρ = 1. The plot shows the Error for GD and SAM. In the noisy setting, we set n = 2, σ² = 0.2. The model follows η = 0.0045, d_2 = −0.0007/η and ρ = −1/d_2. We also set u_1 = u_2 = 1. The plot similarly shows the error.

B Review of Kernel Gradients

Note that, as discussed in Section 3.2, H_K = H_{K+} ⊖ H_{K−}.
Therefore, for f ∈ H_K there exist f_+ ∈ H_{K+}, f_− ∈ H_{K−} such that f = f_+ − f_−. Moreover, the inner product in H_K is defined as

⟨f, g⟩ = ⟨f_+, g_+⟩ − ⟨f_−, g_−⟩.   (B.1)

Note that, similar to the RKHS case, for f ∈ H_K we have ⟨K(x, ·), f⟩ = ⟨K_+(x, ·), f_+⟩ − ⟨K_−(x, ·), f_−⟩ = f_+(x) − f_−(x) = f(x). Let x ∈ R^p, y ∈ R and L[h] = (h(x) − y)² for h ∈ H_K. The gradient of L[h] is a function k ∈ H_K such that k gives a good first-order approximation to L[h]. In particular, for any bounded g ∈ H_K,

L[h + εg] = (h(x) + εg(x) − y)² = (h(x) − y)² + ε²g(x)² + 2εg(x)(h(x) − y)
          = L[h] + 2εg(x)(h(x) − y) + O(ε²) = L[h] + 2ε(h(x) − y)⟨K(x, ·), g⟩ + O(ε²).   (B.2)

Therefore,

lim_{ε→0} (L[h + εg] − L[h])/ε = 2(h(x) − y)⟨K(x, ·), g⟩.   (B.3)

Hence, we take ∇L[h] = 2(h(x) − y)K(x, ·).

C Discussion on Propositions 1 and 2

In this section, we study what condition (17) implies about the model. In particular, we set d_1 = 1. As two examples, we take d_r ∈ {0.8, 0.95} and d_n ∈ {−0.6, −1}. We would also like the bounds of Proposition 2 to be valid for at least k ≥ 20; we therefore take ε = 2^{1/20} − 1, so that log 2 / log(1 + ε) = 20. Next, we sweep η and ρ and choose the values that satisfy (17) for some c_0 > 1. We plot the results in Figure C.1 for different values of d_r, d_n, where we highlight every pair (η, ρ) that satisfies the condition in dark blue. As can be seen in this figure, taking η small and ρ ≫ η results in (17) being satisfied. This makes intuitive sense, as taking η small helps to satisfy 1 − ηd_i − ηρd_i² < 1, while taking ρ ≫ η helps to take advantage of the SAM regularization.

D Effect of Stochasticity: A Special Case

In this section, we study SAM when stochastic mini-batches are used and discuss how stochasticity helps SAM's performance. To this end, we limit our analysis to the linear regression case and assume that for k ≥ 1, y_k = w̄ᵀx_k + ε_k, where the ε_k's are iid noise values as before, and the x_k's are independent of each other and of the noise.
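Before specializing the distribution of x_k, here is a minimal NumPy sketch (ours, with arbitrary constants) of the stochastic SAM recursion on this model, i.e. the update made formal in (D.1) below:

```python
import numpy as np

# Minimal sketch (our illustration; all constants arbitrary) of stochastic SAM
# on the linear model. Each step sees one fresh sample (x_k, y_k) with
# y_k = <w_bar, x_k> + eps_k and applies
#   w <- w - eta * grad_fk(w + rho * grad_fk(w)),
# where f_k(w) = 0.5 * (y_k - <x_k, w>)^2.

rng = np.random.default_rng(0)
p, sigma = 20, 0.1
w_bar = rng.normal(size=p)
w_bar /= np.linalg.norm(w_bar)

def grad_fk(w, x, y):
    # gradient of the per-sample squared loss
    return (x @ w - y) * x

w = np.zeros(p)
eta, rho, steps = 0.01, 0.02, 3000
for _ in range(steps):
    x = rng.normal(size=p)
    y = x @ w_bar + sigma * rng.normal()
    w_adv = w + rho * grad_fk(w, x, y)    # inner (ascent) step
    w = w - eta * grad_fk(w_adv, x, y)    # outer (descent) step

print(np.linalg.norm(w - w_bar))   # small relative to ||w_bar|| = 1
```

The expected per-step contraction of this recursion, 1 − η − ηρ(p + 2), is exactly the quantity appearing in Proposition D.1 below.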
In fact, we assume that for k ≥ 1, x_k ∼ N(0, I) follows a standard normal distribution. The loss corresponding to the point k is defined as f_k(w) = (1/2)(y_k − x_kᵀw)². The stochastic versions of the algorithms hence follow

w^SAM_{k+1} = w^SAM_k − η∇f_k(w^SAM_k + ρ∇f_k(w^SAM_k)).   (D.1)

To better understand stochastic SAM, following the recent work on SGD [34], we consider the expected trajectory the algorithm takes, that is, IE[w^SAM_k], where the expectation is taken over x_i, ε_i for i ≥ 1. Moreover, as the observations are random, we define the error over an unseen data point that follows the same distribution as the training data, i.e., we consider a random design. Specifically, we define

Error(w^SAM_k) = IE_{x_0, ε_0}[(x_0ᵀ(IE[w^SAM_k] − w̄) − ε_0)²]   (D.2)

where (x_0, ε_0) follow the same distribution as (x_k, ε_k) and are independent of them.

Proposition D.1. Suppose 0 < 1 − η − ηρ(p + 2) ≤ 1 − η < 1. Then, under the stochastic setup,

Error(w^SAM_k) − Error(w^GD_k) = [(1 − η − ηρ(p + 2))^{2k} − (1 − η)^{2k}] ‖w̄‖_2² ≤ 0.   (D.3)

Note that Proposition D.1 shows that SAM outperforms GD in the stochastic setup for this linear regression problem. The term (p + 2)ηρ appearing in (D.3) is an effect of stochasticity; in the full-batch setting this term would be ηρ (cf. Theorem 3 when D_1 = I). In the high-dimensional setting where p ≫ 1, this additional term resulting from stochasticity improves the SAM error significantly, showing the suitability of SAM for both stochastic and high-dimensional settings. A deeper analysis of stochastic SAM on more complex models is left for future work.

E Proof of Main Results

E.1 A Preliminary Result

Theorem 5. Let w^SAM_k follow (3) with f(w) = (1/2)(w − w̄)ᵀH(w − w̄) + gᵀ(w − w̄). Then,

w^SAM_{k+1} = η Σ_{i=0}^{k} (I − ηH − ηρH²)^i (I + ρH)(Hw̄ − g) + (I − ηH − ηρH²)^{k+1} w_0.   (E.1)

Proof. As
f (w) = 1 2 (w −w) T H(w −w) + g T (w −w) we have w SAM k + ρ∇f (w SAM k ) = w SAM k + ρg + ρH(w SAM k −w) = (I + ρH)w SAM k + ρ(g − Therefore, by writing SAM updates: w SAM k+1 = w SAM k − η∇f (w SAM k + ρ∇f (w SAM k )) = w SAM k − η(g + H(w SAM k + ρ∇f (w SAM k ) −w)) = w SAM k − η g + H (I + ρH)w SAM k + ρ(g − Hw) −w = (I − ηH − ηρH 2 )w SAM k + η(I + ρH)(Hw − g) (E.2) = η k i=0 (I − ηH − ηρH 2 ) i (I + ρH)(Hw − g) + (I − ηH − ηρH 2 ) k+1 w 0 (E.3) where the last equality is a result of an inductive argument. E.2 Proof of Theorem 1 Proof. By the definition, L[h] = 1 2 n i=1 (y i − h(x i )) 2 (E.4) and ∇L[h] = n i=1 (h(x i ) − y i )K(x i , ·). (E.5) Hence, the KernelSAM gradient can be written as ∇L[h + ρ∇L[h]] = n i=1 [(h + ρ∇L[h])(x i ) − y i ] K(x i , ·) = n i=1     h + ρ n j=1 (h(x j ) − y j )K(x j , ·)   (x i ) − y i   K(x i , ·) (a) = n i=1   h(x i ) + ρ n j=1 (h(x j ) − y j )K(x i , x j ) − y i   K(x i , ·) = n i=1 [h(x i ) − y i ] K(x i , ·) + ρ n i=1 n j=1 [(h(x j ) − y j )K(x i , x j )] K(x i , ·) (E.6) where in (a), we used the fact K(x i , ·)(x j ) = K(x i , x j ). As a result, we have ∇L[h + ρ∇L[h]] = v(h) T K(X, ·) where v(h) ∈ R n , and v i (h) = h(x i ) − y i + ρ n j=1 [(h(x j ) − y j )K(x i , x j )] . Note that h SAM 0 = 0 = 0 T K(X, ·). Suppose for the sake of induction that h SAM k = (w SAM k ) T K(X, ·). Then, h SAM k+1 = h SAM k − η∇L[h SAM k + ρ∇L[h SAM k ]] = w SAM k − ηv(h SAM k ) T K(X, ·) = (w SAM k+1 ) T K(X, ·) (E.7) where w SAM k+1 = w SAM k − ηv(h SAM k ). (E.8) This completes the proof. E.3 Proof of Theorem 2 The proof of this theorem is based on the following technical lemma. Lemma E.1. Under the assumptions of Theorem 2, one has h SAM k = (w SAM k ) T K(X, ·) where w SAM k+1 = (I − ηK X − ηρK 2 X )w SAM k + η(I + ρK X )(K Xw + ). Proof. Suppose h = n i=1 w i K(x i , ·). 
Then, the residual corresponding to the i-th observation is given as 9) or in the matrix/vector notation, R i = h(x i ) − y i = h(x i ) −h(x i ) − i = n j=1 (w j −w j )K(x i , x j ) − i (E.R = K X (w −w) − . (E.10) Thus, n i=1 [h(x i ) − y i ] K(x i , ·) = n i=1 R i K(x i , ·) = n i=1   n j=1 (w j −w j )K(x i , x j ) − i   K(x i , ·) = n i=1 n j=1 (w j −w j )K(x i , x j )K(x i , ·) − n i=1 i K(x i , ·) = (w −w) T K X K(X, ·) − T K(X, ·) (E.11) using our vector notation. Next, n j=1 [(h(x j ) − y j )K(x i , x j )] = n j=1 n l=1 (w l −w l )K(x j , x l ) − j K(x i , x j ) = n l=1 (w l −w l ) n j=1 K(x i , x j )K(x j , x l ) − n j=1 K(x i , x j ) j = n l=1 (K 2 X ) i,l (w l −w l ) − [K X ] i = K 2 X (w −w) i − [K X ] i . (E.12) This leads to n i=1 n j=1 [(h(x j ) − y j )K(x i , x j )] K(x i , ·) = (w −w) T K 2 X K(X, ·) − T K X K(X, ·). (E.13) Therefore, from (E.6), ∇L[h + ρ∇L[h]] = K X (w −w) − + ρK 2 X (w −w) − ρK X T K(X, ·). (E.14) In particular, if h SAM k = (w SAM k ) T K(X, ·), then h SAM k+1 = (w SAM k+1 ) T K(X, ·) where w SAM k+1 = w SAM k − η K X (w SAM k −w) − + ρK 2 X (w SAM k −w) − ρK X = (I − ηK X − ηρK 2 X )w SAM k + η(I + ρK X )(K Xw + ). (E.15) Proof of Theorem 2. The proof follows from comparing Lemma E.1 to (E.2). E.4 Proof of Theorem 3 Proof. Note that under the setup, f (w SAM k ) = 1 2 n i=1 (y i − Φ(w SAM k ; x i )) 2 (a) = 1 2 n i=1 i + a(w; x i ) Tw − a(w SAM k ; x i ) T w SAM k 2 (b) = 1 2 + A(w − w SAM k ) 2 2 = 1 2 (w − w SAM k ) T A T A(w − w SAM k ) − T A(w SAM k −w) + 1 2 where (a) is by Assumption (A2) and (b) is by Assumption (A1). Comparing (E.16) to Theorem 5, we see that H = A T A and g = −A T . 
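(As an aside, Theorem 5 is easy to check numerically; the following sketch — our addition, with arbitrary problem data, not part of the original argument — verifies that unrolling the one-step recursion (E.2) reproduces the closed form (E.1).)

```python
import numpy as np

# Numerical check (ours, not part of the proof) that the SAM recursion
#   w_{k+1} = (I - eta H - eta rho H^2) w_k + eta (I + rho H)(H w_bar - g)
# matches the closed form (E.1) of Theorem 5.

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n))
H = A @ A.T + np.eye(n)              # symmetric positive definite H
g = rng.normal(size=n)
w_bar = rng.normal(size=n)
w0 = rng.normal(size=n)
eta, rho, k = 0.01, 0.1, 40

M = np.eye(n) - eta * H - eta * rho * H @ H
b = eta * (np.eye(n) + rho * H) @ (H @ w_bar - g)

# unroll the recursion k+1 times
w = w0.copy()
for _ in range(k + 1):
    w = M @ w + b

# closed form: w_{k+1} = sum_{i=0}^{k} M^i b + M^{k+1} w0
closed = sum(np.linalg.matrix_power(M, i) @ b for i in range(k + 1)) \
         + np.linalg.matrix_power(M, k + 1) @ w0

assert np.allclose(w, closed)
print("Theorem 5 closed form verified")
```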
Next, note that k i=0 (I − ηH − ηρH 2 ) i (I + ρH)(Hw − g) (a) = U k i=0 (I − ηD − ηρD 2 ) i (I + ρD)U T (U 1 D 1 U T 1w + U 1 Σ 1 V T 1 ) (b) =U 1 k i=0 (I − ηD 1 − ηρD 2 1 ) i (I + ρD 1 )U T 1 (U 1 D 1 U T 1w + U 1 Σ 1 V T 1 ) (c) = U 1 diag   1 − (1 − ηd j − ηρd 2 j ) k+1 ηd j + ηρd 2 j r j=1   diag {1 + ρd j } r j=1 (D 1 U T 1w + Σ 1 V T 1 ) (d) = U 1 diag   1 − (1 − ηd j − ηρd 2 j ) k+1 ηd j + ηρd 2 j r j=1   diag {1 + ρd j } r j=1 (D 1 U T 1w + D 1 Σ −1 1 V T 1 ) =U 1 diag   1 − (1 − ηd j − ηρd 2 j ) k+1 ηd j + ηρd 2 j r j=1   diag {d j + ρd 2 j } r j=1 (U T 1w + Σ −1 1 V T 1 ) = 1 η U 1 diag 1 − (1 − ηd j − ηρd 2 j ) k+1 r j=1 (U T 1w + Σ −1 1 V T 1 ) = 1 η U 1 (I − (I − ηD 1 − ηρD 2 1 ) k+1 )(U T 1w + Σ −1 1 V T 1 ) (E.17) where (a) is by substituting H = U 1 D 1 U T 1 = U DU T , (b) is by the fact the fact U T 2 U 1 = 0, (c) is using U T 1 U 1 = I and k i=0 (1 − x) i = 1 − (1 − x) k+1 x and (d) is true as D 1 is invertible. Moreover, U (I − η 1 D 1 − ηρD 2 1 ) k U T w 0 = U (I − η 1 D 1 − ηρD 2 1 ) k U T U 1 U T 1 w 0 = U 1 (I − η 1 D 1 − ηρD 2 1 ) k U T 1 w 0 (E.18) where the first equation is by the assumption w 0 = U 1 U T 1 w 0 . By (E.1), (E.17) and (E.18), we achieve: w SAM k+1 = U 1 (I − (I − ηD 1 − ηρD 2 1 ) k+1 )(U T 1w + Σ −1 1 V T 1 ) + U 1 (I − η 1 D 1 − ηρD 2 1 ) k+1 U T 1 w 0 . (E.19) By taking the expectation, we have IE [w SAM k ] = U 1 (I − (I − ηD 1 − ηρD 2 1 ) k )U T 1w + U 1 (I − η 1 D 1 − ηρD 2 1 ) k U T 1 w 0 . (E.20) This implies IE [w SAM k ] − U 1 U T 1w = −U 1 (I − ηD 1 − ηρD 2 1 ) k U T 1 (w − w 0 ). 
(E.21) Next, by the definition of bias in (5), nBias 2 (w SAM k ) = (IE [w SAM k ] −w) T H(IE [w SAM k ] −w) (a) = (IE [w SAM k ] −w) T U 1 D 1 U T 1 (IE [w SAM k ] −w) (b) = (IE [w SAM k ] − U 1 U T 1w ) T U 1 D 1 U T 1 (IE [w SAM k ] − U 1 U T 1w ) (c) = (w − w 0 ) T U T 1 (I − ηD 1 − ηρD 2 1 ) k D 1 (I − ηD 1 − ηρD 2 1 ) k U T 1 (w − w 0 ) = r i=1 (1 − ηd i − ηρd 2 i ) 2k d i u 2 i (E.22) where (a) is by the SVD of H, (b) is true asw = U 1 U T 1w + U 2 U T 2w and the fact U T 1 U 2 = 0, and (c) is by (E.21). This completes the bias part of the theorem. Next, note that w SAM k − U 1 U T 1w = −U 1 (I − ηD 1 − ηρD 2 1 ) k U T 1 (w − w 0 ) + U 1 (I − (I − ηD 1 − ηρD 2 1 ) k )Σ −1 1 V T 1 . As a result, nError(w SAM k ) =IE (w SAM k −w) T H(w SAM k −w) =IE (w SAM k −w) T U 1 D 1 U T 1 (w SAM k −w) =IE (w SAM k − U 1 U T 1w ) T U 1 D 1 U T 1 (w SAM k − U 1 U T 1w ) =IE (w − w 0 ) T U 1 (I − ηD 1 − ηρD 2 1 ) k D 1 (I − ηD 1 − ηρD 2 1 ) k U T 1 (w − w 0 ) (E.23) + T V 1 Σ −2 1 D 1 I − (I − ηD 1 − ηρD 2 1 ) k 2 V T 1 (E.24) − 2(w − w 0 ) T U 1 (I − ηD 1 − ηρD 2 1 ) k D 1 (I − (I − ηD 1 − ηρD 2 1 ) k )Σ −1 1 V T 1 . (E.IE [ T V 1 I − (I − ηD 1 − ηρD 2 1 ) k 2 V T 1 ] = IE Tr T V 1 I − (I − ηD 1 − ηρD 2 1 ) k 2 V T 1 = Tr IE T V 1 I − (I − ηD 1 − ηρD 2 1 ) k 2 V T 1 (a) = σ 2 Tr V 1 I − (I − ηD 1 − ηρD 2 1 ) k 2 V T 1 = σ 2 Tr I − (I − ηD 1 − ηρD 2 1 ) k 2 (E.26) where (a) uses IE [ T ] = σ 2 I. For the next part of the proof, Bias 2 (w SAM k ) = 1 n r j=1 (1 − ηd j − ηρd 2 j ) 2k d j u 2 j ≤ 1 n r j=1 (1 − ηd j ) 2k d j u 2 j = Bias 2 (w GD k ). (E.27) The proof for variance follows. E.5 Proof of Proposition 1 Before proceeding with the proof of the proposition, we first show a few technical lemmas. Lemma E.2. Let q(x) = a exp(x log b) − c exp(x log d) where a ≥ c and b < d. Then, q(x) ≥ 0 for x ∈ [0, log(c/a)/ log(b/d)]. Proof. 
One has q(x) = a exp(x log b) − c exp(x log d) = 0 ⇒ exp(x log(b/d)) = c a ⇒ x = log(c/a) log(b/d) ≥ 0 (E1 n (1 − ηd 1 − ηρd 2 1 ) 2k X(w − w 0 ) 2 2 ≤ Bias 2 (w SAM k ) ≤ 1 n (1 − ηd r − ηρd 2 r ) 2k X(w − w 0 ) 2 2 and σ 2 r n (1 − ηd 1 − ηρd 2 1 ) 2k − 2σ 2 r n (1 − ηd r − ηρd 2 r ) k ≤ Var(w SAM k ) − σ 2 r n ≤ σ 2 r n (1 − ηd r − ηρd 2 r ) 2k − 2σ 2 r n (1 − ηd 1 − ηρd 2 1 ) k . Proof. From Theorem 3, 1 n (1 − ηd 1 − ηρd 2 1 ) 2k X(w − w 0 ) 2 2 ≤ Bias 2 (w SAM k ) ≤ 1 n (1 − ηd r − ηρd 2 r ) 2k X(w − w 0 ) 2 2 where we used X(w − w 0 ) 2 2 = (w − w 0 ) T X T X(w − w 0 ) = (w − w 0 ) T U 1 D 1 U T 1 (w − w 0 ) = u T D 1 u = r j=1 d j u 2 j . Moreover, from Theorem 3 we also have Var(w SAM k ) = σ 2 n Tr((I − (I − ηD 1 − ηρD 2 1 ) k ) 2 ) = σ 2 r n + σ 2 n Tr((I − ηD 1 − ηρD 2 1 ) 2k ) − 2σ 2 n Tr((I − ηD 1 − ηρD 2 1 ) k ). The rest of the proof follows. Error(w SAM k ) − σ 2 r n ≤ X(w − w 0 ) 2 2 + σ 2 r n (1 − ηd r − ηρd 2 r ) 2k − 2σ 2 r n (1 − ηd 1 − ηρd 2 1 ) k (E.29) and Error(w GD k ) − σ 2 r n ≥ X(w − w 0 ) 2 2 + σ 2 r n (1 − ηd 1 ) 2k − 2σ 2 r n (1 − ηd r ) k (E.30) by setting ρ = 0. From (15), Error(w GD k ) − σ 2 r n ≥ X(w − w 0 ) 2 2 + σ 2 r n c k 0 (1 − ηd r − ηρd 2 r ) 2k − 2σ 2 r n c k 0 (1 − ηd 1 − ηρd 2 1 ) k =c k 0 X(w − w 0 ) 2 2 + σ 2 r n (1 − ηd r − ηρd 2 r ) 2k − 2σ 2 r n (1 − ηd 1 − ηρd 2 1 ) k ≥c k 0 Error(w SAM k ) − σ 2 r n (E.31) where the last inequality is by (E.29). As a result, Error(w GD k ) ≥ c k 0 Error(w SAM k ) + (1 − c k 0 ) σ 2 r n or equivalently, Error(w SAM k ) − Error(w GD k ) ≤ (1 − c k 0 ) Error(w SAM k ) − σ 2 r n . (E.32) Next, let a = ( X(w − w 0 ) 2 2 + σ 2 r)/n, b = (1 − ηd 1 − ηρd 2 1 ) 2 , c = 2σ 2 r/n and d = (1 − ηd r − ηρd 2 r ). Define q(x) as in Lemma E.2. Then, from Lemma E.3 we have Error(w SAM k ) − σ 2 r n ≥ X(w − w 0 ) 2 2 + σ 2 r n (1 − ηd 1 − ηρd 2 1 ) 2k − 2σ 2 r n (1 − ηd r − ηρd 2 r ) k =q(k). 
(E.33) From Lemma E.2, if k ≤ log(c/a) log(b/d) = log(2σ 2 r/( X(w − w 0 ) 2 2 + σ 2 r)) log((1 − ηd 1 − ηρd 2 1 ) 2 /(1 − ηd r − ηρd 2 r ) then q(k) ≥ 0 or equivalently, Error(w SAM k ) − σ 2 r/n ≥ 0. Therefore, from (E.32), Error(w SAM k ) − Error(w GD k ) ≤ (1 − c k 0 ) ≤0 Error(w SAM k ) − σ 2 r n ≥0 ≤ 0 (E.34) which completes the proof. E.6 Proof of Theorem 4 Proof. First, suppose h = w T K(X, ·) for some w ∈ R n . Then, n i=1 h(x i ) −h(x i ) 2 = n i=1   n j=1 (w j −w j )K(x i , x j )   2 = K X (w −w) 2 2 = (w −w) T K 2 X (w −w). (E.35) Moreover, Bias 2 (h) = 1 n n i=1 IE[h(x i )] −h(x i ) 2 (E.36) = 1 n n i=1 IE[w T [K X ] i ] −w T [K X ] i 2 = 1 n n i=1 [K X ] T i (IE[w] −w) 2 = 1 n (IE[w] −w) T K 2 X (IE[w] −w) (E.37) where [K X ] i denotes the i-th column of K X . From Lemma E.1, one has w SAM k+1 = η k i=0 (I − ηK X − ηρK 2 X )(I + ρK X )(K Xw + ) = η k i=0 (I − ηK X − ηρK 2 X ) i (K X + ρK 2 X )(w + K −1 X ) = U I − I − ηD − ηρD 2 k+1 (U Tw + D −1 U T ). (E.38) As a result, IE[w SAM k+1 ] = U I − I − ηD − ηρD 2 k+1 U Tw (E.39) and w SAM k+1 −w = −U I − ηD − ηρD 2 k+1 U Tw + U I − I − ηD − ηρD 2 k+1 D −1 U T . (E.40) From (E.35), n i=1 h SAM k (x i ) −h(x i ) 2 =(w SAM k −w) T K 2 X (w SAM k −w) (E.41) =(w SAM k −w) T U D 2 U T (w SAM k −w) (a) =w T U I − ηD − ηρD 2 2k D 2 U Tw (E.42) + T U I − I − ηD − ηρD 2 k 2 U T (E.43) − 2w T U I − ηD − ηρD 2 k D I − I − ηD − ηρD 2 k U T (E.44) where (a) is by (E.40). On the other hand, from (E.37) and (E.39), we have Bias 2 (h) = 1 n (IE[w] −w) T K 2 X (IE[w] −w) =w T U I − ηD − ηρD 2 2k D 2 U Tw . E.7 Proof of Proposition 2 Proof. Let Error + (w SAM k ) = Bias 2 + (w SAM k ) + Var + (w SAM k ), Error − (w SAM k ) = Bias 2 − (w SAM k ) + Var − (w SAM k ) where Bias 2 + (w SAM k ) = 1 n r i=1 (1 − ηd i − ηρd 2 i ) 2k d 2 i u 2 i Bias 2 − (w SAM k ) = 1 n n i=r+1 (1 − ηd i − ηρd 2 i ) 2k d 2 i u 2 i . 
(E.46) Note that by following the same steps of the proof of Proposition 1, we have under the assumptions of the proposition, Error + (w SAM k ) ≤ Error + (w GD k ). Moreover, note that Bias 2 − (w SAM k ) ≤ Bias 2 − (w GD k ) similar to bias corresponding to the convex part. Finally, Var − (w GD k ) − Var − (w SAM k ) = σ 2 n n i=r+1 (1 − ηd i ) k − 1 2 − 1 − (1 − ηd i − ηρd 2 i ) k 2 ≥ σ 2 n n i=r+1 (1 + ε) k − 1 2 − 1 = (n − r)σ 2 n (1 + ε) k (1 + ε) k − 2 ≥ 0 (E.47) where the last inequality follows the lower bound on k from the proposition. E.8 Proof of Proposition 3 Proof. Let f (w) be defined as in (4) for the ReLU case and in (13) for the kernel case. Then, IE [f (w)] = 1 2 (w −w) T H(w −w) where H = A T A or H = K X for these two cases. Consider the eigenvalue decomposition H = U DU T = U 1 D 1 U T 1 + U 2 D 2 U T 2 where D 1 0 D 2 for the kernel case and D 1 D 2 = 0 for the ReLU case. Then, IE [f (w + ε)] − IE [f (w)] = 1 2 (w + ε −w) T H(w + ε −w) − 1 2 (w −w) T H(w −w) = 1 2 ε T Hε + ε T H(w −w) = 1 2 ε T (U 1 D 1 U T 1 + U 2 D 2 U T 2 )ε + ε T U DU T (w −w) = 1 2 (U T 1 ε) T D 1 (U T 1 ε) + 1 2 (U T 2 ε) T D 2 (U T 2 ε) + ε T U DU T (w −wIE [f (w + ε)] − IE [f (w)] ≥ max ε 2=ρ0 IE [f (w + ε)] − IE [f (w)] (a) ≥ max ε 2=ρ0 1 2 λ min (D 1 ) U T 1 ε 2 2 + (U T 1 ε) T D 1 U T 1 (w −w) ≥ max ε 2=ρ0 ε=U 1v 1 2 λ min (D 1 ) U T 1 ε 2 2 + (U T 1 ε) T D 1 U T 1 (w −w) (b) ≥ max v 2=ρ0 1 2 λ min (D 1 ) v 2 2 + v T D 1 U T 1 (w −w) = 1 2 λ min (D 1 )ρ 2 0 + max v 2=ρ0 v T D 1 U T 1 (w −w) (c) = 1 2 λ min (D 1 )ρ 2 0 + ρ 0 D 1 U T 1 (w −w) 2 (E.49) where λ min denotes the smallest eigenvalue of the matrix, (a) is by (E.48), (b) is true as if ε = U 1 v, ρ 2 0 = ε 2 2 = v T U T 1 U 1 v = v 2 2 and (c) follows v T D 1 U T 1 (w −w) ≤ v 2 D 1 U T 1 (w −w) 2 . 
Similarly, max ε 2≤ρ0 IE [f (w + ε)] − IE [f (w)] ≤ max ε 2≤ρ0 1 2 λ max (D 1 ) U T 1 ε 2 2 + (U T 1 ε) T D 1 U T 1 (w −w) (a) ≤ 1 2 λ max (D 1 )ρ 2 0 + max ε 2≤ρ0 (U T 1 ε) T D 1 U T 1 (w −w) ≤ 1 2 λ max (D 1 )ρ 2 0 + ρ 0 D 1 U T 1 (w −w) 2 . (E.50) where λ max denotes the largest eigenvalue and (a) is true as U T 1 ε 2 ≤ ε 2 . Note that from (E.20) U T 1 (IE [w SAM k ] −w) = −(I − ηD 1 − ηρD 2 1 ) k U T 1 (w − w 0 ) (E.51) which implies D 1 U T 1 (IE [w SAM k ] −w) 2 = D 1 U T 1 (IE [w SAM k ] −w) 2 2 = (w − w 0 ) T U 1 D 2 1 (I − ηD 1 − ηρD 2 1 ) 2k U T 1 (w − w 0 ) = r i=1 (1 − ηd i − ηρd 2 i ) 2k d 2 i u 2 i . (E.52) The proof follows from (E.49), (E.50) and (E.52). IE [f (w + ε)] − IE [f (w)] ≥ max ε 2=ρ0 IE [f (w + ε)] − IE [f (w)] ≥ max ε 2=ρ0 1 2 λ min (D) U T ε 2 2 + (U T ε) T DU T (w −w) ≥ 1 2 λ min (D)ρ 2 0 + max U T ε 2=ρ0 (U T ε) T DU T (w −w) = 1 2 λ min (D)ρ 2 0 + ρ 0 DU T (w −w) 2 (E.53) as U T ε 2 = ε 2 . Next, max ε 2≤ρ0 IE [f (w + ε)] − IE [f (w)] ≤ max ε 2≤ρ0 1 2 λ max (D) ε 2 2 + ε T U DU T (w −w) ≤ 1 2 λ max (D)ρ 2 0 + ρ 0 DU T (w −w) 2 . (E.54) From (E.39), U T (IE [w SAM k ] −w) = (I − ηD − ηρD 2 ) k U Tw (E.55) implying DU T (IE [w SAM k ] −w) 2 = DU T (IE [w SAM k ] −w) 2 2 = w T U D 2 (I − ηD − ηρD 2 ) 2k U Tw = n i=1 (1 − ηd i − ηρd 2 i ) 2k d 2 i u 2 i . (E.56) The proof follows from (E.53), (E.54) and (E.56) as 1 − ηd i − ηρd 2 i ≤ 1 for i ∈ [n] and 1 < 1 − ηd i for any i ≥ r + 1. Θ i,j = IE[Θ i,j ] = IE[(Ĥ 2 ) i,j ] = IE p l=1Ĥ i,lĤl,j = p l=1 IE[x i x j x 2 l ]. (E.57) Next, we consider the following cases in (E.57): 1. i = j: In this case, p l=1 IE[x i x j x 2 l ] = IE[x i x 3 l ] =0 + IE[x j x 3 l ] =0 + l =i,j IE[x i x j x 2 l ] =0 = 0. 2. i = j: Then, p l=1 IE[x i x j x 2 l ] = IE[x 4 i ] =3 + l =i IE[x 2 i x 2 l ] =1 = p + 2. Therefore, Θ = (p + 2)I. Proof of Proposition D.1. Note that f k = 1 2 (x T k (w − w) + k ) 2 = 1 2 (w −w) T x k x T k (w −w) − k w T x k + · · · . 
(E.58) We start by writing SAM updates for the stochastic case. The intermediate solution of SAM is given as w SAM k + ρ∇f k (w SAM k ) = w SAM k + ρ g k + H k (w SAM k −w) = (I + ρH k )w SAM k + ρ(g k − H kw ). (E.59) where from (E.58), we have H k = x k x T k and g k = − k x k . Therefore, the SAM update direction is given as ν k = ∇f k (w SAM k + ρ∇f k (w SAM k )) = g k + H k ((I + ρH k )w SAM k + ρ(g k − H kw ) −w) = (H k + ρH 2 k )w SAM k + (I + ρH k )(g k − H kw ). (E.60) Hence, w SAM k+1 = w SAM k − ην k = (I − ηH k − ηρH 2 k )w SAM k − η(I + ρH k )(g k − H kw ). (E.61) In the next step, we take expectation: IE[w SAM k+1 ] = IE x k , k IE x1, 1,··· ,x k−1 , k−1 w SAM k+1 x k , k = IE x k , k IE x1, 1,··· ,x k−1 , k−1 (I − ηH k − ηρH 2 k )w SAM k − η(I + ρH k )(g k − H kw ) x k , k = IE x k , k (I − ηH k − ηρH 2 k )IE x1, 1,··· ,x k−1 , k−1 w SAM k x k , k − η(I + ρH k )(g k − H kw ) (a) = IE x k , k (I − ηH k − ηρH 2 k )IE w SAM k − η(I + ρH k )(g k − H kw ) (b) = (I − ηIE[H k ] − ηρIE[H 2 k ])IE[w SAM k ] + η(IE[H k ] + ρIE[H 2 k ])w = (I − ηH − ηρΘ)IE[w SAM k ] + η(H + ρΘ)w. (E.62) with Θ = IE[H 2 k ], where in (a) we used the fact that w SAM k is independent of x k , k , x k+1 ,IE[w SAM k+1 ] = η k i=0 (I − ηH − ηρΘ) i (H + ρΘ)w = η k i=1 (1 − η − ηρ(p + 2)) i (1 + ρ(p + 2))w = 1 − (1 − η − ηρ(p + 2)) k+1 w (E.63) Error GD /Error SAM k GD − k SAM Error SAMIE x0, 0 x T 0 (IE[w SAM k ] −w) − 0 2 = (IE[w SAM k ] −w) T IE[x 0 x T 0 ](IE[w SAM k ] −w) + IE[ 2 0 ] = (IE[w SAM k ] −w) T (IE[w SAM k ] −w) + σ 2 = (1 − η − ηρ(p + 2)) 2k w 2 2 (E.64) which completes the proof. F Additional Numerical Experiments F.1 Linear Regression Experiments First, we examine the theory we developed on the linear regression problem. To this end, we set p = 100, n = 60 corresponding to an over-parameterized regime, with observations as y i = x T iw + i . 
We assume (x_i, ε_i) are independent, with ε_i ∼ N(0, σ²) and x_i ∼ N(0, Σ), where Σ is an exponential covariance matrix, Σ_{i,j} = 0.5^{|i−j|}. Each coordinate of w̄ is independently chosen from Unif[0, 1], and w̄ is then normalized to have norm one. We run GD and SAM with η = 1/(2σ_max(X)²) and ρ = η/6 for 500 iterations on the least-squares loss. We draw 600 noiseless validation samples, denoted y_test, X_test, from the same model. We record the validation error, defined as Error(w) = ‖y_test − X_test w‖_2²/600, for GD (and for SAM, similarly). We run the whole process described above for 100 independent repetitions and report the average and standard deviation of the results. We let Error_GD, Error_SAM denote the best validation error achieved by GD and SAM over all iterations, respectively. We also let k_GD, k_SAM denote the number of iterations leading to the best error. In Figure F.1, we compare the best errors of GD and SAM, and when they are achieved, against the noise standard deviation. Figure F.1 [Left Panel] shows that if the noise is small, GD and SAM perform similarly, although Figure F.1 [Middle Panel] shows that SAM achieves the best error marginally earlier than GD. When the noise is higher but not too high, SAM has a smaller error compared to GD, and this error is achieved earlier. If we continue to increase the noise, SAM performs worse and early stopping does not seem to help SAM much. This result is consistent with the theory we developed. In particular, our theory in Section 4 shows that in the noiseless case or the small-noise regime, SAM has a lower error compared to GD in earlier iterations. Our experiment confirms this: in small-noise regimes, SAM both achieves its best error earlier and has a smaller error compared to GD. However, as we increase the noise, the variance increases and, as SAM has higher variance, it performs worse. We also show the error of SAM in Figure F.1 [Right Panel], showing that increasing the noise leads to a larger error, as expected. To further examine our theory, we run the model described above for two values of σ = 0.05, 1 and plot the ratio of GD error to SAM error for all iterations in Figure F.2. We see that when the noise is smaller, SAM
On the other hand, when the noise is high, SAM performs better in early iterations, as SAM's bias is lower, while GD starts to perform better than SAM in later iterations, once the variance becomes dominant. Again, this is in agreement with our theory.

F.1.1 Effect of stochasticity

Next, we explore the effect of stochasticity in the algorithm. We use the same model from the previous section. However, we set n = 200 and draw 2000 validation samples. We run the algorithm for one epoch, with a batch size of 1. The results for this case are shown in Figure F.3. We see from the left panel that in this case, even for large noise variance values, stochastic SAM performs better than SGD. As we discussed in Appendix D, stochastic SAM can have stronger regularization than SGD due to the way SAM is implemented in practice.

F.2 Kernel Regression Experiments

Next, we investigate the effect of SAM in the kernel regression case. To this end, we set p = 200, n = 100 and draw X as described in Section F.1. Next, we expand the set of features by adding terms of the form x_i², x_i³ and x_i x_j for 400 pairs of randomly chosen (i, j), overall increasing the dimension of the model to 1000. Then, the observations are drawn as y = x̃ᵀw̄ + ε, where x̃ is the vector of expanded features and w̄ has iid uniform coordinates and is then normalized to have norm 1. The rest of the setup is similar to Section F.1. We use an indefinite kernel, defined as the difference of a Gaussian kernel with variance 100 and the kernel 0.8 exp(−‖x − y‖²), and run the training for 1500 iterations. The whole process is repeated 100 times and then averaged. We use 200 validation points and record the error, defined as

Error(h) = (1/200) Σ_{i=1}^{200} (y_test,i − h(x_test,i))².

Similar to the case of linear regression, we report the best error over the course of the algorithm and the iteration leading to the best error. The results for this case are shown in Figure F.4.
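To reproduce the flavor of this construction, the sketch below (our illustration; the bandwidths and sizes are made up, not the exact configuration above) builds an indefinite Gram matrix as the difference of two Gaussian kernels and checks that it has eigenvalues of both signs:

```python
import numpy as np

# Sketch (our illustration, not the exact experiment configuration) of an
# indefinite kernel of the type used here: the difference of two Gaussian
# kernels. The resulting Gram matrix generally has both positive and negative
# eigenvalues, which is the non-convex regime of Section 3.2.

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 5))

def gauss_gram(X, bandwidth):
    # pairwise squared distances, then the Gaussian kernel
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / bandwidth)

K = gauss_gram(X, 100.0) - 0.8 * gauss_gram(X, 1.0)   # indefinite difference
eigs = np.linalg.eigvalsh(K)
print(eigs.min() < 0 < eigs.max())   # indefinite: expect True
```

In the notation of the paper, the negative eigenvalues of K_X are the d_j < 0 directions along which GD's error can grow without bound while SAM's stays bounded.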
Overall, we see that GD and SAM perform closely in terms of error. Similar to the case of linear regression, unless the noise is too large, SAM performs better than GD, although the improvements are marginal. However, we see that the best error for SAM is achieved earlier than for GD, which overall agrees with the insight from our bias-variance analysis, as SAM's bias is smaller than GD's and, in noiseless cases, the bias is dominant. We also show the SAM performance in Figure F.4 [Right Panel], showing that increasing the noise variance leads to worse performance. Finally, we compare the error trajectories of SAM/GD for two values of noise in Figure F.5. We see that in almost all iterations SAM performs better than GD, which agrees with Proposition 2. In particular, we see that in later iterations GD performs significantly worse, which agrees with our analysis that GD has unbounded error in the non-convex case.

F.3 Deep Learning Experiments

F.3.1 Experimental setup

Our deep learning experiments are run on the MIT Supercloud cluster [33] using a single NVIDIA V100 GPU. In all experiments, we use a batch size of 256, with a peak starting learning rate of 0.35 scheduled to decay linearly. We train the networks for 200 epochs, unless stated otherwise. We use a momentum coefficient of 0.9 and a weight-decay coefficient of 0.0005 in all experiments. We run all experiments for three repetitions and report the average and standard deviation.

F.3.2 Comparison of SGD and SAM

First, we repeat our experiments from Section 5 on ResNet18. In Figure F.6 we explore the effect of ρ on the accuracy. We see that overly large or small values of ρ lead to worse performance, as expected. However, as we discussed in Section 5, the loss of accuracy from large values of ρ is smaller than the loss of accuracy from taking ρ = 0, i.e. GD. In particular, on CIFAR100, the loss of accuracy is negligible when going from ρ = 0.3 to ρ = 0.5. A similar situation can be observed in Figure F.6 for the ResNet50 network. As we discussed, this aligns with our theory for the noiseless case. Next, we take a look at the error trajectory for the noiseless and noisy setups for ResNet18 in Figure F.7.
As we see, the error profiles are similar to the ones from Section 5. In particular, we see that the error decreases over epochs in the noiseless case, while it starts to increase in the noisy setup. Therefore, our insight regarding early stopping in the noisy case holds true here as well. We also see the increasing performance gap between SAM and SGD in the noisy setup, which, as we discussed, can be explained by the increase of the variance of GD in the non-convex setup.

F.3.3 Effect of decaying ρ

Based on our observations so far, we see that choosing large values of ρ improves the performance in early epochs, while resulting in worse accuracy in later iterations. Therefore, our hypothesis is that starting the training with a large ρ and then decaying ρ might result in better performance. To test this, we start the training with the value of ρ in our search grid of {0.1, 0.3, 0.5} that is one step larger than the optimal ρ. For example, if the optimal ρ is 0.1, we start training with ρ = 0.3. Then, we decay ρ. We consider two decay patterns: the value of ρ is multiplied by a coefficient α ∈ {0.7, 0.8, 0.9} at either epoch 175, or at epochs 150 and 200. We choose the best scenario among the 6 possible choices outlined above and report the results in Table F.1 under the SAM-Decay row. SAM-Optimal shows the results for the best value of ρ. We run the training for the full 200 epochs, but record the validation accuracy over the course of the epochs. The early-stopping column in Table F.1 corresponds to the best accuracy over the first 120 epochs for SGD, and the first 50 epochs for SAM-based methods (i.e. 100 forward-backward passes). We can see that for the full training, decaying ρ results in better or similar accuracy compared to the optimal ρ, whereas using the larger value of ρ for the full training leads to worse performance.
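The decay patterns described above amount to a simple step schedule for ρ; a hypothetical helper (our sketch, not the authors' code) could look as follows:

```python
# Hypothetical step-decay schedule for rho (our sketch, not the authors' code):
# rho is multiplied by alpha once for each milestone epoch that has passed.

def rho_schedule(rho0, alpha, milestones):
    def rho_at(epoch):
        rho = rho0
        for m in milestones:
            if epoch >= m:
                rho *= alpha
        return rho
    return rho_at

# e.g. start one grid step above the optimal rho and decay by alpha = 0.8
rho_at = rho_schedule(0.3, 0.8, milestones=(150, 200))
print(rho_at(0), rho_at(160), rho_at(200))   # 0.3, 0.24, 0.192 (up to rounding)
```

The milestone epochs and α here are just the grid values named in the text; any monotone decay with the same start value would match the spirit of the experiment.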
Moreover, we see that the optimal value of ρ generally performs worse than the decay case when training is stopped early, and using a large ρ shows a considerable improvement in early epochs. As discussed, this is in agreement with the insights from our theory and our observations above.

F.3.4 Effect of sample splitting

Finally, we explore the effect of stochasticity in SAM. To this end, we consider a version of SAM where two different sets of samples are used to calculate the inner and outer gradients. That is, we break each batch into two parts and use one part for the ascent step and the other for the descent step. This ensures independence between the data used for these two gradient steps, which can reduce the effect of stochasticity in SAM. We keep the experimental setup the same as before, but we use two batch-size values of 256 (i.e. 128 samples per gradient) and 512 (i.e. 256 samples per gradient), as well as training for 200 and 400 epochs. We also shuffle batches randomly after each epoch. The results on ResNet50 for this case are shown in Table F.2. We see that the results in this case are worse than those of optimal SAM in Table F.1. This shows that sample splitting does not seem to be helpful in practice.

Figure 1: Comparison of GD and SAM classification error on the validation set, over epochs, for different cases. (a): CIFAR100 with clean labels on ResNet50. (b): Theoretical error curve for a particular model from our analysis for a noiseless case. (c): CIFAR10 with noisy training labels on ResNet50. (d): Theoretical error curve for a noisy case.

Figure 2: Accuracy over epochs for SAM and GD with ResNet50 and different datasets. The number in parentheses in the legend shows the average best accuracy.

Figure C.1: Comparison of the values of (η, ρ) that satisfy condition (17) for different values of d_r, d_n. See Appendix C for more details.

(E.28) as c/a ≤ 1 and d/b > 1.
Hence, q(x) does not change sign between 0 and log(c/a)/log(b/d) by the intermediate value theorem. Moreover, q(0) = a − c ≥ 0, implying q(x) ≥ 0 in this interval.

Lemma E.3. Under the assumptions of Theorem 3, IE[(E.42)] = n Bias²(h), IE[(E.44)] = 0, and IE[(E.41)] = n Error(h), showing IE[(E.43)] = n Var(h). The rest of the proof follows from the proof of Theorem 3.

Lemma E.4. Let x ∼ N(0, I). Then IE[x xᵀ x xᵀ] = (p + 2)I.

Proof (fragment). Let Ĥ = x xᵀ, Θ̂ = Ĥ², and Θ = IE[Θ̂]. […] (b) is true as IE[g_k] = IE[H_k g_k] = 0, due to the independence of […]_k and x_k. Thus, from (E.62) […] where the second equality is by Lemma E.4. Finally, we calculate the error from (D.2).

Figure F.1: Results for the full-batch linear regression. Left: ratio of the best error achieved by GD and SAM. Middle: the difference between the numbers of iterations leading to the best error. Right: best SAM error.

In Figure F.1, we compare the best errors of GD and SAM, and when they are achieved, against the noise standard deviation. From Figure F.1 [Left Panel], we see that if the noise is small, GD and SAM perform similarly, although Figure F.1 [Middle Panel] […]. Figure F.1 [Right Panel] shows that increasing the noise leads to larger error, as expected. From Figure F.2, we see that when the noise is smaller, SAM […].

Figure F.2: Comparison of the ratio of the errors of GD and SAM over iterations, Error(w_GD)[…].

Figure F.3: Results for the stochastic linear regression. Left: ratio of the best error achieved by SGD and stochastic SAM. Right: best SAM error.

Figure F.4: Results for the kernel regression, with kernel 0.8 exp(−‖x − y‖²). Left: ratio of the best error achieved by GD and SAM. Middle: the difference between the numbers of iterations leading to the best error. Right: best SAM error. Figure F.4 [Right Panel] shows that increasing the noise variance leads to worse performance.

Figure F.5: Comparison of the ratio of the errors of GD and SAM over iterations, Error(w_GD)[…].
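Lemma E.4 is easy to sanity-check numerically. The following Monte-Carlo sketch (ours, not from the paper) estimates IE[x xᵀ x xᵀ] for p = 3 and compares it with (p + 2)I = 5I:

```python
import numpy as np

# Monte-Carlo check of Lemma E.4: for x ~ N(0, I_p),
# IE[x x^T x x^T] = IE[||x||^2 x x^T] = (p + 2) I.
rng = np.random.default_rng(1)
p, n = 3, 200_000
x = rng.normal(size=(n, p))
sq_norms = (x ** 2).sum(axis=1, keepdims=True)
est = (x * sq_norms).T @ x / n   # empirical mean of ||x||^2 x x^T
print(np.round(est, 2))          # close to (p + 2) I = 5 I
```

The diagonal entries estimate IE[x_i² ‖x‖²] = 3 + (p − 1) = p + 2, and the off-diagonal entries average to 0, in agreement with the lemma.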
Finally, we compare the error trajectories of SAM/GD for two values of noise in Figure F.[…].

Figure F.6: Effect of varying ρ on accuracy, on ResNet50 and ResNet18.

Figure F.7: Accuracy over epochs for SAM and SGD (ResNet18). The number in parentheses in the legend gives the average best accuracy.

[…] in the literature. Moreover, this is in agreement with previous work showing that SAM leads to flatter solutions compared to GD [4, 1, 37, 35, 15]. A deeper analysis of stochastic SAM is left for future work.

Table 1: Comparison of SGD, SAM with the optimal ρ, and SAM with ρ decaying over the course of the algorithm.

Dataset    Method     Full            Early
CIFAR10    SGD        95.44 ± 0.06    82.99 ± 0.75
           SAM        96.31 ± 0.06    81.43 ± 2.73
           SAM-Decay  96.42 ± 0.10    86.79 ± 0.38
CIFAR100   SGD        79.50 ± 0.33    58.87 ± 0.62
           SAM        82.01 ± 0.09    60.20 ± 0.97
           SAM-Decay  82.02 ± 0.27    61.92 ± 1.63

[34] Samuel L. Smith, Benoit Dherin, David G. T. Barrett, and Soham De. On the origin of implicit regularization in stochastic gradient descent. arXiv preprint arXiv:2101.12176, 2021.
[35] Szilvia Ujváry, Zsigmond Telek, Anna Kerekes, Anna Mészáros, and Ferenc Huszár. Rethinking sharpness-aware minimization as variational inference. arXiv preprint arXiv:2210.10452, 2022.
Juntang Zhuang, Boqing Gong, Liangzhe Yuan, Yin Cui, Hartwig Adam, Nicha Dvornek, Sekhar Tatikonda, James Duncan, and Ting Liu. Surrogate gap minimization improves sharpness-aware training. arXiv preprint arXiv:2203.08065, 2022.
Table F.1: Effect of starting training with a large ρ and decaying it over the course of the epochs.

Architecture  Dataset   Method       Full Training   Early Stopping
ResNet18      CIFAR10   SGD          95.45 ± 0.13    85.98 ± 1.26
                        SAM-Optimal  96.20 ± 0.02    85.07 ± 0.09
                        SAM-Decay    96.14 ± 0.07    88.60 ± 1.68
              CIFAR100  SGD          79.37 ± 0.12    62.58 ± 1.23
                        SAM-Optimal  80.01 ± 0.03    59.38 ± 1.14
                        SAM-Decay    80.15 ± 0.34    63.37 ± 0.60
ResNet50      CIFAR10   SGD          95.44 ± 0.06    82.99 ± 0.75
                        SAM-Optimal  96.31 ± 0.06    81.43 ± 2.73
                        SAM-Decay    96.42 ± 0.10    86.79 ± 0.38
              CIFAR100  SGD          79.50 ± 0.33    58.87 ± 0.62
                        SAM-Optimal  82.01 ± 0.09    60.20 ± 0.97
                        SAM-Decay    82.02 ± 0.27    61.92 ± 1.63

Table F.2: Effect of sample splitting on SAM (ResNet50).

Dataset   Epochs  Batch Size 256  Batch Size 512
CIFAR10   200     94.39 ± 0.11    95.03 ± 0.09
          400     94.73 ± 0.11    95.50 ± 0.04
CIFAR100  200     78.63 ± 0.48    78.72 ± 0.42
          400     78.95 ± 0.30    79.55 ± 0.33

Footnote: The details of the plots in Figures 1(b,d) are discussed in Appendix A. The plots show the error for a kernel regression problem with least-squares loss.

Acknowledgments. This research is supported in part by a grant from the Office of Naval Research (N000142112841). The authors would like to thank MIT SuperCloud for providing computational resources for this work.

References

Atish Agarwala and Yann N. Dauphin. SAM operates far from home: eigenvalue regularization as a dynamical phenomenon, 2023.

Maksym Andriushchenko and Nicolas Flammarion. Towards understanding sharpness-aware minimization. In International Conference on Machine Learning, pages 639-668. PMLR, 2022.

Dara Bahri, Hossein Mobahi, and Yi Tay.
Sharpness-aware minimization improves language model generalization. arXiv preprint arXiv:2110.08529, 2021.

Peter L. Bartlett, Philip M. Long, and Olivier Bousquet. The dynamics of sharpness-aware minimization: Bouncing across ravines and drifting towards wide minima. arXiv preprint arXiv:2210.01513, 2022.

Mohammad Bavarian, Heewoo Jun, Nikolas Tezak, John Schulman, Christine McLeavey, Jerry Tworek, and Mark Chen. Efficient training of language models to fill in the middle. arXiv preprint arXiv:2207.14255, 2022.

Kayhan Behdin, Qingquan Song, Aman Gupta, David Durfee, Ayan Acharya, Sathiya Keerthi, and Rahul Mazumder. Improved deep neural network generalization using m-sharpness-aware minimization. In OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), 2022.

Mikhail Belkin, Siyuan Ma, and Soumik Mandal. To understand deep learning we need to understand kernel learning. In International Conference on Machine Learning, pages 541-549. PMLR, 2018.

Lin Chen and Sheng Xu. Deep neural tangent kernel and Laplace kernel have the same RKHS.
arXiv preprint arXiv:2009.10683, 2020.

Xiangning Chen, Cho-Jui Hsieh, and Boqing Gong. When vision transformers outperform ResNets without pre-training or strong data augmentations. arXiv preprint arXiv:2106.01548, 2021.

Enea Monzio Compagnoni, Antonio Orvieto, Luca Biggio, Hans Kersting, Frank Norbert Proske, and Aurelien Lucchi. An SDE for modeling SAM: Theory and insights. arXiv preprint arXiv:2301.08203, 2023.

Lijun Ding, Dmitriy Drusvyatskiy, and Maryam Fazel. Flat minima generalize for low-rank matrix recovery. arXiv preprint arXiv:2203.03756, 2022.

Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1019-1028. PMLR, 2017.

Jiawei Du, Daquan Zhou, Jiashi Feng, Vincent Y. F. Tan, and Joey Tianyi Zhou. Sharpness-aware training for free. arXiv preprint arXiv:2205.14083, 2022.
Gintare Karolina Dziugaite and Daniel M. Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. arXiv preprint arXiv:1703.11008, 2017.

Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. arXiv preprint arXiv:2010.01412, 2020.

Amnon Geifman, Abhay Yadav, Yoni Kasten, Meirav Galun, David Jacobs, and Basri Ronen. On the similarity between the Laplace and neural tangent kernels. Advances in Neural Information Processing Systems, 33:1451-1461, 2020.

Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. When do neural networks outperform kernel methods? Advances in Neural Information Processing Systems, 33:14820-14830, 2020.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.

Thomas Hofmann, Bernhard Schölkopf, and Alexander J. Smola. Kernel methods in machine learning. The Annals of Statistics, 36(3):1171-1220, 2008.

H. Ibayashi, T. Hamaguchi, and M. Imaizumi. Minimum sharpness: Scale-invariant parameter-robustness of neural networks. arXiv preprint arXiv:2106.12612, 2021.

Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31, 2018.

Jean Kaddour, Linqing Liu, Ricardo Silva, and Matt Kusner. When do flat minima optimizers work? In Advances in Neural Information Processing Systems, 2022.
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Tengyuan Liang and Alexander Rakhlin. Just interpolate: Kernel "ridgeless" regression can generalize. The Annals of Statistics, 48(3):1329-1347, 2020.

Fanghui Liu, Lei Shi, Xiaolin Huang, Jie Yang, and Johan A. K. Suykens. Analysis of regularized least-squares in reproducing kernel Krein spaces. Machine Learning, 110:1145-1173, 2021.

Ronny Luss and Alexandre d'Aspremont. Support vector machine classification with indefinite kernels. Advances in Neural Information Processing Systems, 20, 2007.

Chao Ma and Lexing Ying. On linear stability of SGD and input-smoothness of neural networks. In Advances in Neural Information Processing Systems, volume 34, pages 16805-16817. Curran Associates, Inc., 2021.
Dino Oglic and Thomas Gartner. Learning in reproducing kernel Krein spaces. In International Conference on Machine Learning, pages 3859-3867. PMLR, 2018.

Dino Oglic and Thomas Gartner. Scalable learning in reproducing kernel Krein spaces. In International Conference on Machine Learning, pages 4912-4921. PMLR, 2019.

Lutz Prechelt. Early Stopping - But When?, pages 55-69. Springer Berlin Heidelberg, Berlin, Heidelberg, 1998.

Albert Reuther, Jeremy Kepner, Chansup Byun, Siddharth Samsi, William Arcand, David Bestor, Bill Bergeron, Vijay Gadepally, Michael Houle, Matthew Hubbell, Michael Jones, Anna Klein, Lauren Milechin, Julia Mullen, Andrew Prout, Antonio Rosa, Charles Yee, and Peter Michaleas. Interactive supercomputing on 40,000 cores for machine learning and data analysis. In 2018 IEEE High Performance Extreme Computing Conference (HPEC), pages 1-6. IEEE, 2018.
On Statistical Properties of Sharpness-Aware Minimization: Provable Guarantees
Kayhan Behdin and Rahul Mazumder
MIT Sloan School of Management and MIT Operations Research Center, Cambridge, MA
arXiv:2302.11836

Abstract. Sharpness-Aware Minimization (SAM) is a recent optimization framework aiming to improve deep neural network generalization through obtaining flatter (i.e., less sharp) solutions. As SAM has been numerically successful, recent papers have studied the theoretical aspects of the framework and have shown that SAM solutions are indeed flat. However, there has been limited theoretical exploration regarding the statistical properties of SAM. In this work, we directly study the statistical performance of SAM, and present a new theoretical explanation of why SAM generalizes well. To this end, we study two statistical problems, neural networks with a hidden layer and kernel regression, and prove that, under certain conditions, SAM has smaller prediction error than Gradient Descent (GD). Our results concern both convex and non-convex settings, and show that SAM is particularly well-suited for non-convex problems. Additionally, we prove that in our setup SAM solutions are less sharp as well, showing that our results are in agreement with the previous work. Our theoretical findings are validated using numerical experiments on numerous scenarios, including deep neural networks.
Reply to the Comment of R. M. Cavalcanti on "Resonant Spectra and the Time Evolution of the Survival and Nonescape Probabilities"
arXiv:quant-ph/9804041v1, 17 Apr 1998

In our paper [1] we derived an exact expression for the nonescape probability P(t) (see Eq. (14)) as an expansion in terms of resonant states and M functions,

$$P(t) = \sum_{n=-\infty}^{\infty} \sum_{\ell=-\infty}^{\infty} C_n C^*_\ell \, I_{n\ell} \, M(k_n,t)\, M^*(k_\ell,t), \tag{1}$$

where the integral $I_{n\ell}$ is defined by Eq. (15) of Ref. [1],

$$I_{n\ell} = \int_0^R u^*_\ell(r)\, u_n(r)\, dr. \tag{2}$$

The long-time limit of P(t) leads to an asymptotic expansion whose leading term reads

$$\sum_{n=-\infty}^{\infty} \sum_{\ell=-\infty}^{\infty} \frac{C_n C^*_\ell \, I_{n\ell}}{k_n k^*_\ell}\, \frac{1}{t}. \tag{3}$$

So we concluded that at long times P(t) ∼ t⁻¹. Cavalcanti [2] instead has proven that the above coefficient vanishes, and concludes that the leading term of P(t) ∼ t⁻³. His procedure corresponds to interchanging the integral over r in the expression of P(t), Eq. (1), with the long-time limit. The vanishing of the term proportional to t⁻¹ then follows from the sum rule

$$\sum_{m=-\infty}^{\infty} \frac{C_m u_m(r)}{k_m} = 0, \qquad (r \le R). \tag{4}$$

In our approach we perform first the integration over r and then take the long-time limit. We provide below an argument showing that, in dealing with resonant-state expansions, the interchange of the above operations does not lead to the same result. This is the case for expansions that do not converge uniformly. Consider the n-th resonant function u_n(r), which is a solution of the Schrödinger equation [3],

$$u''_n(r) + \left[k_n^2 - V(r)\right] u_n(r) = 0, \tag{5}$$

where the prime stands for the derivative with respect to r, $k_n^2$ is a squared complex wavenumber, and V(r) is an arbitrary potential that vanishes beyond r = R.
The function u_n(r) satisfies the boundary conditions

$$u_n(0) = 0; \qquad u'_n(r)\big|_{r=R} = i k_n u_n(R). \tag{6}$$

Consider also the analogous equation for the complex-conjugate function $u^*_\ell(r)$,

$$u''^{\,*}_\ell(r) + \left[k_\ell^{*2} - V(r)\right] u^*_\ell(r) = 0, \tag{7}$$

which obeys the boundary conditions

$$u^*_\ell(0) = 0; \qquad u'^{\,*}_\ell(r)\big|_{r=R} = -i k^*_\ell u^*_\ell(R). \tag{8}$$

Now multiply Eq. (5) by $u^*_\ell(r)$ and subtract from it Eq. (7) multiplied by $u_n(r)$. Integrating the resulting expression from r = 0 to r = R yields

$$\Big[ u^*_\ell(r)\, u'_n(r) - u_n(r)\, u'^{\,*}_\ell(r) \Big]_{r=0}^{r=R} + \left(k_n^2 - k_\ell^{*2}\right) I_{n\ell} = 0. \tag{9}$$

Using Eqs. (6) and (8) allows one to write $I_{n\ell}$ in closed form,

$$I_{n\ell} = \frac{u_n(R)\, u^*_\ell(R)}{i\left(k_n - k^*_\ell\right)}. \tag{10}$$

Substituting Eq. (10) into Eq. (1) leads to the following exact expression for the nonescape probability:

$$P(t) = \sum_{n=-\infty}^{\infty} \sum_{\ell=-\infty}^{\infty} C_n C^*_\ell\, \frac{u_n(R)\, u^*_\ell(R)}{i\left(k_n - k^*_\ell\right)}\, M(k_n,t)\, M^*(k_\ell,t). \tag{11}$$

Taking now the long-time limit allows one to write P(t), at leading order in inverse powers of t, as

$$P(t) \sim \sum_{n=-\infty}^{\infty} \sum_{\ell=-\infty}^{\infty} \frac{C_n}{k_n}\, \frac{C^*_\ell}{k^*_\ell}\, \frac{u_n(R)\, u^*_\ell(R)}{i\left(k_n - k^*_\ell\right)}\, \frac{1}{t}. \tag{12}$$

The sum rule given by Eq. (4) does not lead to the vanishing of Eq. (12) because of the presence of the factor $1/(k_n - k^*_\ell)$. This shows that interchanging the integration and the long-time limit on the resonant expansions yields different results. In our opinion, according to the definition of P(t), the integration over r should precede the long-time limit, and consequently P(t) ∼ t⁻¹.

[1] G. García-Calderón, J. L. Mateos and M. Moshinsky, Phys. Rev. Lett. 74, 337 (1995).
[2] R. M. Cavalcanti, Phys. Rev. Lett., preceding Comment.
[3] G. García-Calderón and R. E. Peierls, Nucl. Phys. A 265, 441 (1976); G. García-Calderón, J. L. Mateos and M. Moshinsky, Ann. Phys. (N.Y.) 249, 430 (1996).
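The non-commutation of the two operations can be illustrated with a standard calculus analogue (our example, not from the Reply): for f_n(r) = 2nr e^{−nr²} on [0, 1], with n playing the role of t, the pointwise limit is 0 for every r, yet the integrals tend to 1, so "limit first, then integrate" and "integrate first, then take the limit" disagree for this non-uniformly convergent family:

```python
import numpy as np

# f_n(r) = 2 n r exp(-n r^2) on [0, 1] converges to 0 pointwise,
# but integral_0^1 f_n dr = 1 - exp(-n) tends to 1.
r = np.linspace(0.0, 1.0, 200_001)

def trapezoid(y, x):
    # simple trapezoidal rule (avoids depending on a particular NumPy version)
    return float(((y[:-1] + y[1:]) / 2 * np.diff(x)).sum())

for n in (10, 100, 1000):
    f = 2 * n * r * np.exp(-n * r ** 2)
    integral = trapezoid(f, r)
    # integrate first, then let n grow: the value approaches 1
    assert abs(integral - (1 - np.exp(-n))) < 1e-3

# limit first (f_n -> 0 pointwise at, e.g., r = 1/2), then integrate: gives 0
assert 2 * 1000 * 0.5 * np.exp(-1000 * 0.25) < 1e-50
```

The family fails to converge uniformly because the peak of f_n, of height ∼ √n near r ∼ 1/√n, carries the whole integral toward r = 0 as n grows, which is the same mechanism invoked above for the resonant-state expansion.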
{'fraction_non_alphanumeric': 0.11185442868611185, 'fraction_numerical': 0.03398447952903398, 'mean_word_length': 3.1672240802675584, 'pattern_counts': {'":': 0, '<': 0, '<?xml version=': 0, '>': 0, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 2, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'In our paper [1] we derived an exact expression for the nonescape probability P (t), (see Eq.(14)), as an expansion in terms of resonant states and M functions,', 'arxivid': 'quant-ph/9804041', 'author': [], 'authoraffiliation': [], 'corpusid': 37146915, 'doi': '10.1103/physrevlett.80.4354', 'github_urls': [], 'n_tokens_mistral': 1457, 'n_tokens_neox': 1356, 'n_words': 737, 'pdfsha': '7f93b8aa379e1f4011613887f94279d36dd74a8a', 'pdfurls': ['https://export.arxiv.org/pdf/quant-ph/9804041v1.pdf'], 'title': ['Reply to the Comment of R. M. Cavalcanti on "Resonant Spectra and the Time Evolution of the Survival and Nonescape Probabilities"', 'Reply to the Comment of R. M. Cavalcanti on "Resonant Spectra and the Time Evolution of the Survival and Nonescape Probabilities"'], 'venue': []}
arxiv
THE ADDITION OF TEMPORAL NEIGHBORHOOD MAKES THE LOGIC OF PREFIXES AND SUB-INTERVALS EXPSPACE-COMPLETE*

8 Jan 2023

Laura Bozzelli ([email protected]), Angelo Montanari ([email protected]), Adriano Peron ([email protected]), and Pietro Sala ([email protected])
University of Napoli "Federico II", Napoli, Italy; University of Udine, Udine, Italy; University of Trieste, Trieste, Italy; University of Verona, Verona, Italy

© L. Bozzelli, A. Montanari, A. Peron, P. Sala. Released under a Creative Commons (CC) license. Preprint submitted to Logical Methods in Computer Science.

*This paper is a revised and extended version of [BMPS20] and [BMPS21a].

Key words and phrases: Interval Temporal Logic, Star-Free Regular Languages, Satisfiability, Complexity.

Abstract. A classic result by Stockmeyer [Sto74] gives a non-elementary lower bound to the emptiness problem for generalized *-free regular expressions. This result is intimately connected to the satisfiability problem for interval temporal logic, notably for formulas that make use of the so-called chop operator. Such an operator may indeed be interpreted as the inverse of the concatenation operation on regular languages, and this correspondence enables reductions between non-emptiness of generalized *-free regular expressions and satisfiability of formulas of the interval temporal logic of chop under the homogeneity assumption [HMM83].

In this paper, we study the complexity of the satisfiability problem for suitable weakenings of the chop interval temporal logic, which can be equivalently viewed as fragments of Halpern and Shoham's interval logic. We first introduce the logic BD_hom, featuring modalities B (for begins), corresponding to the prefix relation on pairs of intervals, and D (for during), corresponding to the infix relation, whose satisfiability problem has recently been shown to be PSPACE-complete [BMPS21b].
The homogeneous models of BD_hom naturally correspond to languages defined by restricted forms of generalized *-free regular expressions, which use union, complementation, and the inverses of the prefix and infix relations. Then, we focus our attention on the extension of BD_hom with the temporal neighborhood modality A, corresponding to the Allen relation Meets, and prove that such an addition increases both the expressiveness and the complexity of the logic. In particular, we show that the resulting logic BDA_hom is EXPSPACE-complete.

Introduction

Interval temporal logics (ITLs for short) are versatile and expressive formalisms for specifying properties of sequences of states and their durations. When it comes to fundamental problems like satisfiability, their high expressive power often comes at the price of undecidability. As an example, the satisfiability problems of the most widely known ITLs, namely Halpern and Shoham's HS [HS91] and Venema's CDT [Ven91a], turn out to be highly undecidable. Despite these negative results, a number of decidable logics have been identified by suitably weakening ITLs (see [BMM+14] for a complete classification of HS fragments). Here the term "weakening" refers to a set of syntactic and/or semantic restrictions imposed on the formulas of the logic and/or on the temporal structures over which such formulas are interpreted. Among the plethora of possible weakenings, in this paper we focus on (the combination of) the following two natural and well-studied restrictions:

• Restrict the set of interval relations. Many decidable fragments of ITLs are obtained by considering a restricted set of Allen's relations over pairs of intervals. This approach naturally induces fragments of HS with modalities corresponding to the selected subset of interval relations.
As an example, the logic of temporal neighborhood (PNL for short) features only 2 modalities, corresponding to 2 interval relations among the 13 possible ones, namely, A (adjacent to the right) and its inverse Ā [CH97]. PNL has been shown to be decidable over all meaningful classes of linear orders [BMSS11, MS12].

• Restrict the class of models. As an alternative, it is possible to tame the complexity of ITLs by restricting to classes of models that satisfy certain specific assumptions. An example of such an approach can be found in a recent series of papers that study the model checking problem for ITLs (see, e.g., the seminal paper [MMM+16]), as well as their expressiveness compared to that of classical point-based temporal logics, like LTL, CTL, and CTL* [BMM+19a]. In this setting, models are represented as Kripke structures, and are inherently point-based rather than interval-based. The very same models can be obtained from interval temporal structures by making the so-called homogeneity assumption, that is, by assuming that a proposition letter holds over an interval if and only if it holds at all of its points [Roe80]. Under such an assumption, full HS has a decidable satisfiability problem (as a matter of fact, the model checking procedures proposed in the aforementioned series of papers can easily be turned into satisfiability procedures, often retaining the same complexity) [MMM+16]. Because of this, the focus in studying HS fragments under the homogeneity assumption shifted from decidability to complexity.

Under the homogeneity assumption, a natural connection to generalized *-free regular languages emerges from the analysis of the complexity of ITLs over finite linear orders.
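The homogeneity assumption has a very direct computational reading: the valuation of an interval is the intersection of the point valuations it spans. The following minimal sketch (ours; the helper name and the toy valuation are illustrative, not from the paper) makes this concrete:

```python
def homogeneous_valuation(pi, x, y):
    """Letters true over the interval [x, y] under homogeneity:
    exactly those true at every point of the interval."""
    return set.intersection(*(pi[z] for z in range(x, y + 1)))

# a toy point valuation over the finite order 0 < 1 < 2
pi = {0: {"p", "q"}, 1: {"p"}, 2: {"p", "q"}}
assert homogeneous_valuation(pi, 0, 2) == {"p"}   # q fails at point 1
assert homogeneous_valuation(pi, 1, 1) == pi[1]   # point-intervals agree with pi
```

In particular, point-intervals [x, x] recover the underlying point-based valuation, which is why homogeneous interval models correspond so tightly to Kripke-structure models.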
A classic result by Stockmeyer states that the emptiness problem for generalized *-free regular expressions is non-elementarily decidable (tower-complete) for unbounded nesting of negation [Sch16, Sto74] (it is (k−1)-EXPSPACE-complete for expressions where the nesting of negation is at most k ∈ N+). Such a problem can easily be turned into the satisfiability problem for the logic C of the chop modality, over finite linear orders, under the homogeneity assumption [HMS08, Mos83, Ven91b], and vice versa. C has only one binary modality, the so-called chop operator, which allows one to split the current interval in two parts and to state what is true over the first part and what over the second one. It can easily be shown that there is a LOGSPACE reduction of the emptiness problem for generalized *-free regular expressions to the satisfiability problem for C with unbounded nesting of the chop operator, and vice versa. The close relationships between formal languages and ITLs have already been pointed out in [MS13a, MS13b], where the ITL counterparts of regular languages, ω-regular languages, and extensions of them (ωB- and ωS-regular languages) have been provided. Here, we focus on some meaningful fragments of C (under the homogeneity assumption).¹ Modalities for the prefix, suffix, and infix relations over (finite) intervals can easily be defined in C. A formula holds over a prefix of the current interval if and only if it is possible to split the interval in such a way that the formula holds over the first part and the second part contains at least two points. The case of suffixes is completely symmetric. Infixes can be defined in terms of prefixes and suffixes: a proper sub-interval of the current interval is a suffix of one of its prefixes or, equivalently, a prefix of one of its suffixes. The satisfiability problem for the logic D_hom of the infix relation has recently been shown to be PSPACE-complete by a suitable contraction method [BMM+17].
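The observation that a proper infix is exactly a suffix of a prefix (and, symmetrically, a prefix of a suffix) can be checked by brute force over a small finite order. The sketch below is ours, for illustration only; the relation names mirror the B (begins), E (ends), D (during), and A (meets) modalities:

```python
# Intervals [x, y] over the finite order 0 < 1 < ... < N.
N = 6
intervals = [(x, y) for x in range(N + 1) for y in range(x, N + 1)]

def begins(i, j):   # i is a proper prefix of j
    return i[0] == j[0] and i[1] < j[1]

def ends(i, j):     # i is a proper suffix of j
    return i[1] == j[1] and j[0] < i[0]

def during(i, j):   # i is a proper infix (strict sub-interval) of j
    return j[0] < i[0] and i[1] < j[1]

def meets(i, j):    # i meets j: j starts exactly where i ends
    return i[1] == j[0]

# "a proper sub-interval of the current interval is a suffix of one
#  of its prefixes or, equivalently, a prefix of one of its suffixes"
for j in intervals:
    for i in intervals:
        suffix_of_prefix = any(begins(p, j) and ends(i, p) for p in intervals)
        prefix_of_suffix = any(ends(s, j) and begins(i, s) for s in intervals)
        assert during(i, j) == suffix_of_prefix == prefix_of_suffix
```

The enumeration confirms the equivalence on every pair of intervals over this small order; the general proof is the obvious endpoint calculation.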
The same problem has been proved to be EXPSPACE-hard for the logic BE_hom of prefixes and suffixes, by a polynomial-time reduction from a domino-tiling problem for grids with rows of single exponential length [BMM+19b]. As for the upper bound, the only available one is given by the non-elementary decision procedure for HS_hom [MMM+16] (BE_hom is a small fragment of it). Despite several attempts, no progress has been made in reducing or closing this very large gap.² A couple of additional elements that help in understanding why BE_hom is such a peculiar beast are the following: (i) as shown in [BMM+19b], the only known fragments of HS_hom whose satisfiability problem has been given an EXPSPACE lower bound contain both the B and E modalities; (ii) the satisfiability problem for the logic DE_hom (and the symmetric logic BD_hom), which is a maximal proper fragment of BE_hom, has recently been proved to be PSPACE-complete [BMM+17, BMPS20, BMPS21b].

Goals and structure of the paper. In this paper, we identify the first EXPSPACE-complete fragment of HS_hom that does not include both the B and E modalities. Such a fragment is the logic BDA_hom, which extends BD_hom with the meet (adjacent to the right) modality A. As a preparatory step, we apply the proposed model-theoretic proof technique to the simpler fragment BD_hom; then, we show that it can be tailored to the logic BDA_hom without any increase in complexity.

The paper is organized as follows. In Section 2, we provide a gentle introduction to ITLs. We first introduce in an informal way the two main propositional ITLs, namely CDT and HS, interpreted over finite linear orders. Then, by making use of a simple example, we compare their expressive power with that of Linear Temporal Logic (LTL). Next, in Section 3, we specify the syntax and semantics of BD_hom, and we point out some interesting connections between BD_hom formulas and restricted forms of generalized *-free regular expressions. Then, we prove a small model theorem for the satisfiability of BD_hom formulas over finite linear orders, which provides a doubly exponential bound (in the size of the formula) on their models. By exploiting such a small model theorem, we show that there exists a decision procedure for checking the satisfiability of BD_hom formulas that works in exponential space with respect to the size of the input formula. The proof consists of the following sequence of

¹ Hereafter, for any ITL X, we write X_hom to indicate that we are considering X under the homogeneity assumption.
² In fact, the only achieved result was a negative one, showing that there is no hope of tailoring the proof techniques exploited for HS_hom, which are based on the notion of BE-descriptor, to BE_hom, as it is not possible to give an elementary upper bound on the size of BE-descriptors (in the case of BE_hom) [BMP19].
Then, we prove a small model theorem for the satisfiability of BD hom formulas over finite linear orders, which provides a doubly exponential bound (in the size of the formula) on the size of their models. By exploiting such a small model theorem, we show that there exists a decision procedure to check the satisfiability of BD hom formulas that works in exponential space with respect to the size of the input formula. The proof consists of the following sequence of steps. In Section 4, we introduce and discuss a spatial representation of the models of BD hom formulas, called compass structure. Then, in Section 5, we prove a series of spatial properties of compass structures for formulas involving modalities B and D. Next, in Section 6, by making use of the properties stated in Section 5, we prove the small model theorem for BD hom, which allows us to devise a procedure to check the satisfiability of BD hom formulas over finite linear orders in EXPSPACE. It is worth pointing out that such a decision procedure is sub-optimal, given the results proved in [BMPS21b], where a PSPACE decision procedure for the very same problem is provided; however, it plays an instrumental role in the proof of the main result of the paper about BDA hom. In Section 7, we introduce modality A and formally define the logic BDA hom; in addition, we define and discuss its counterpart in terms of generalized *-free regular expressions.

1 Hereafter, for any ITL X, we will write X hom to indicate that we are considering X under the homogeneity assumption.
2 In fact, the only achieved result was a negative one, showing that there is no hope in trying to tailor the proof techniques exploited for HS hom, which are based on the notion of BE-descriptor, to BE hom, as it is not possible to give an elementary upper bound on the size of BE-descriptors in the case of BE hom [BMP19].
In Section 8, we first prove that the properties stated in Section 5 still hold for BDA hom, and then we show that an EXPSPACE decision procedure for BDA hom, over finite linear orders, can be obtained from the one developed in Section 6 with a few small adjustments. In Section 9, we prove EXPSPACE-hardness of the satisfiability problem for BDA hom, over finite linear orders, by providing a reduction from the exponential corridor tiling problem, thus allowing us to conclude that the EXPSPACE complexity bound for BDA hom finite satisfiability is tight. In Section 10, we provide an assessment of the work and outline future research directions.

A gentle introduction to Interval Temporal Logics (ITLs)

In this section, we provide a gentle introduction to ITLs, focusing on the features that distinguish them from point-based temporal logics. As a term of comparison, we choose LTL. For the sake of simplicity, we restrict our attention to totally ordered finite models, that is, finite prefixes 0 < 1 < . . . < N of N. With a little abuse of notation, we denote such an order by N. In such a setting, the focus is on LTL formulas interpreted on finite traces (we will refer to the set of finite traces simply as models). In the literature, LTL over finite traces is commonly referred to as LTL f [GV13, GMM14]. Let Prop be a set of proposition letters. The first, crucial difference between ITLs and LTL f is the way in which Prop is interpreted over models. Let I N = {[x, y] : 0 ≤ x ≤ y ≤ N} be the set of all and only the intervals on N. In the case of LTL f, we have a function π : N → 2^Prop, while, in the case of ITLs, we have V : I N → 2^Prop. It is easy to see that V is, in fact, a generalization of π, as the point-based semantics π can be embedded in the interval-based one V by assuming π(x) = V([x, x]), for all x ∈ N. From now on, we will refer to intervals of the form [x, x] as point-intervals and to intervals of the form [x, y], with x < y, as strict-intervals.
Whenever we do not need to distinguish between point- and strict-intervals, we will simply refer to them as intervals. In its full generality, ITL interval-based semantics does not impose any constraint on the relationships between the proposition letters that hold over a strict-interval and those that hold over the point-intervals that it includes, that is, the set of proposition letters V([x, y]) that hold over a strict-interval [x, y] is completely unrelated to the sets V([z, z]) of the points x ≤ z ≤ y it includes. One of the first ITLs proposed in the literature was CDT [Ven90], whose name comes from its three binary modalities C (Chopping), D (Dawning), and T (Terminating). Their semantics is graphically depicted in Figure 2. Intuitively speaking, if we take a point z inside an interval [x, y] and we consider the ternary relation stating that [x, y] may be split into [x, z] and [z, y], the three CDT modalities allow one to talk about the properties of such a relation starting from any of the three intervals. More precisely, a formula ψ1 C ψ2 (chopping between ψ1 and ψ2) holds over an interval [x, y] if and only if there exists a point z, with x ≤ z ≤ y, such that ψ1 holds over [x, z] and ψ2 holds over [z, y] (see Figure 2). CDT turns out to be very expressive. It can be easily checked that it allows one to specify a number of advanced properties in a straightforward way. As an example, it is easy to write a CDT formula that forces one or more proposition letters to behave like an equivalence relation over the points of the underlying linear order. However, such expressiveness comes at the price of an undecidable satisfiability problem on every interesting linear order, that is, any linear order but the bounded ones, where the problem is trivially decidable. Such a statement holds even if we consider any of the fragments of CDT that contain just one modality among C, D, and T [GMSS06]. A meaningful fragment of CDT is HS [HS91], which features a unary modality for each ordering relation between a pair of intervals (the so-called Allen's relations [All81]), as shown in Figure 3.
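The embedding π(x) = V([x, x]) of the point-based valuation into the interval-based one can be sketched in a few lines. This is a minimal illustration with a valuation of our own choosing; the decision to leave strict-intervals unlabelled is just one arbitrary way of completing the embedding.

```python
# Point-based valuation pi : {0..N} -> 2^Prop, as in LTL_f
# (the concrete letters below are our own example, not from the paper).
N = 4
pi = {0: {"p"}, 1: {"p"}, 2: {"p", "q"}, 3: set(), 4: {"q"}}

# Interval-based valuation V : I_N -> 2^Prop generalizing pi:
# point-intervals [x, x] inherit pi(x); strict-intervals may be
# labelled freely (here, for illustration, we leave them empty).
V = {(x, y): (set(pi[x]) if x == y else set())
     for x in range(N + 1) for y in range(x, N + 1)}

# The point-based semantics is recovered on point-intervals:
assert all(V[(x, x)] == pi[x] for x in range(N + 1))
# I_N has (N+1)(N+2)/2 intervals, versus N+1 points:
assert len(V) == (N + 1) * (N + 2) // 2
```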
For the sake of simplicity, in Figure 3, we deliberately omitted the modality for the inverse of each considered relation, namely the inverses of ⟨A⟩, ⟨B⟩, ⟨D⟩, ⟨E⟩, ⟨L⟩, and ⟨O⟩. The semantics of each HS modality can be captured by a suitable combination of CDT modalities, as shown in Figure 4. The converse is not true. In Figure 4, we make extensive use of the modal constant π, which holds over an interval [x, y] if and only if x = y, that is, [x, y] is a point-interval. It immediately follows that ¬π holds on all and only the strict-intervals. It is worth pointing out that some HS modalities can be defined as suitable combinations of other ones (a complete account of the definability equations for the most significant classes of linear orders is given in [BMM+14, BMM+19c]). As for the HS fragments considered in this paper, namely those featuring the unary modalities ⟨A⟩, ⟨B⟩, and ⟨D⟩ (which should not be mistaken for the binary modality D of CDT), we have that modality ⟨L⟩ can be defined in terms of modality ⟨A⟩, and modality ⟨D⟩ can be expressed by means of a suitable combination of modalities ⟨B⟩ and ⟨E⟩. Notice that the opposite is not true, e.g., ⟨A⟩ cannot be expressed by means of modality ⟨L⟩. Moreover, in BDA it is not possible to define ⟨A⟩ in terms of ⟨L⟩, ⟨D⟩, and ⟨B⟩, and it is not possible to express ⟨B⟩ (resp., ⟨D⟩) in terms of ⟨A⟩ and ⟨D⟩ (resp., ⟨B⟩). We conclude the section by showing how both LTL f modalities Until (ψ1 U ψ2) and Next (◯ψ) can be easily encoded by means of a combination of modalities ⟨A⟩ and ⟨B⟩ (no need to bring up modality ⟨D⟩). In Figure 5, we give the formulas that define ψ1 U ψ2 (above) and ◯ψ (below) in AB, together with a graphical account of how they "operate" on an interval model.
Then, in Figure 6, we apply these encodings to translate the formula p U (¬p ∧ ¬q) (resp., ◯(¬p ∧ ¬q)) into an equivalent formula of AB and, by means of the example of Figure 1, we show how the interval model is constrained when the resulting formula holds over an interval.

[Figure 2: a graphical account of the semantics of the chop operator, illustrated on the model of Figure 1: ψ1 C ψ2 holds over [x, y] if and only if [x, y] can be split at some z so that ψ1 holds over [x, z] and ψ2 holds over [z, y].]

[Figure 3: Allen's relations and the corresponding HS operators:
[x, y] MEETS [v, z] ⇔ y = v (⟨A⟩);
[x, y] STARTED-BY [v, z] ⇔ x = v ∧ z < y (⟨B⟩);
[x, y] CONTAINS [v, z] ⇔ x < v ≤ z < y (⟨D⟩);
[x, y] FINISHED-BY [v, z] ⇔ x < v ∧ z = y (⟨E⟩);
[x, y] BEFORE [v, z] ⇔ y < v (⟨L⟩);
[x, y] OVERLAPS [v, z] ⇔ x < v ≤ y < z (⟨O⟩).]

As shown in Figure 5 (top), the LTL f formula ψ1 U ψ2 is translated into the conjunction of [B]⟨A⟩(π ∧ ψ1) and ⟨A⟩(π ∧ ψ2). Let us recall that ψ1 U ψ2 holds at a point x if there exists a point y, with x ≤ y, where ψ2 holds, and, for each point x_i, with x ≤ x_i < y, ψ1 holds at x_i. The idea behind the translation (a graphical account of it is given in Figure 5) exploits the generality of interval semantics to force the translation of ψ1 U ψ2 to hold over the whole interval [x, y]. Then, it constrains the formula ψ2 to hold on [y, y], that is, on the right endpoint of the interval, by means of the conjunct ⟨A⟩(π ∧ ψ2), which literally says that there exists an interval [y, y′], which begins exactly where the current one ends (modality ⟨A⟩) and is a point-interval (constant π), where ψ2 holds. Such an interval [y, y′] can thus only be the interval [y, y]. The first conjunct [B]⟨A⟩(π ∧ ψ1) forces the formula ⟨A⟩(π ∧ ψ1) to hold on each proper prefix (modality [B] = ¬⟨B⟩¬) of the interval [x, y], that is, on each interval [x, x_i], with x ≤ x_i < y. Then, by the very same argument we used for ⟨A⟩(π ∧ ψ2), we have that ψ1 holds on each point-interval [x_i, x_i], with x ≤ x_i < y.
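The translation of Until just described can be checked mechanically on a small homogeneous model. The evaluator below is a sketch under our own tuple encoding (all names are ours); the point valuation mimics the shape of the running example, with ¬p ∧ ¬q first holding at point 3.

```python
# Evaluate AB formulas over the homogeneous model induced by a point
# valuation pi (hypothetical encoding, ours: "pi" is the point-interval
# constant, ("A", f) is <A>, ("boxB", f) is [B]).
N = 4
pi = {0: {"p"}, 1: {"p"}, 2: {"p", "q"}, 3: set(), 4: {"q"}}

def holds(x, y, f):
    if f == "pi":                                   # point-interval constant
        return x == y
    op = f[0]
    if op == "prop":   # homogeneity: a letter holds iff it holds at all points
        return all(f[1] in pi[z] for z in range(x, y + 1))
    if op == "not":
        return not holds(x, y, f[1])
    if op == "and":
        return all(holds(x, y, g) for g in f[1:])
    if op == "A":                                   # <A>: some [y, y']
        return any(holds(y, y2, f[1]) for y2 in range(y, N + 1))
    if op == "boxB":                                # [B]: all proper prefixes
        return all(holds(x, y2, f[1]) for y2 in range(x, y))
    raise ValueError(f)

# Translation of p U (not p and not q):
#   [B]<A>(pi and p)  and  <A>(pi and not p and not q)
psi = ("and",
       ("boxB", ("A", ("and", "pi", ("prop", "p")))),
       ("A", ("and", "pi",
              ("not", ("prop", "p")), ("not", ("prop", "q")))))

assert holds(0, 3, psi)        # the translation holds over [0, 3] ...
assert not holds(0, 4, psi)    # ... but not over [0, 4], whose endpoint is q
```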
In Figure 6 (top), we give an example of the application of the proposed translation that makes use of the model of Figure 1. In particular, we analyze the translation of the LTL f formula p U (¬p ∧ ¬q), which is true at time point 0 according to the point-based semantics π, into the AB formula ψ = [B]⟨A⟩(π ∧ p) ∧ ⟨A⟩(π ∧ ¬p ∧ ¬q), which holds over the interval [0, 3] according to the interval-based semantics V. Let us assume that the formula ψ holds over the interval [0, 3]. Its second conjunct ⟨A⟩(π ∧ ¬p ∧ ¬q) forces the existence of an interval [3, y], with y ≥ 3, where π, ¬p, and ¬q hold. The truth of π on the interval [3, y] allows us to conclude that y = 3, and thus both ¬p and ¬q hold over the point-interval [3, 3]. The first conjunct [B]⟨A⟩(π ∧ p) forces p to belong to V([x′, x′]) for all point-intervals [x′, x′], with x′ ∈ {0, 1, 2}, that is, p belongs to V([0, 0]), V([1, 1]), and V([2, 2]), as shown in Figure 6 (top).

[Figure 4: a graphical account of the encoding of HS modalities in CDT, namely ⟨A⟩ψ = ψ T ⊤, ⟨B⟩ψ = ψ C ¬π, ⟨D⟩ψ = ¬π C (ψ C ¬π) (equivalently, ⟨B⟩⟨E⟩ψ or ⟨E⟩⟨B⟩ψ), ⟨E⟩ψ = ¬π C ψ, ⟨L⟩ψ = (¬π ∧ ψ T ⊤) T ⊤ (equivalently, ⟨A⟩(¬π ∧ ⟨A⟩ψ)), and ⟨O⟩ψ = ¬π C (¬π ∧ ¬π T ψ).]

[Figure 5: the AB encodings of the LTL f modalities, namely ψ1 U ψ2 = [B]⟨A⟩(π ∧ ψ1) ∧ ⟨A⟩(π ∧ ψ2) (top) and ◯ψ = ⟨A⟩(¬π ∧ [B]π ∧ ⟨A⟩(π ∧ ψ)) (bottom), together with a graphical account of how they operate on an interval model.]

Let us now consider the LTL f modality ◯ (Next). In Figure 5 (bottom), we provide the translation of ◯ψ into ψ′ = ⟨A⟩(¬π ∧ [B]π ∧ ⟨A⟩(π ∧ ψ)). According to the semantics of ◯, ◯ψ holds at a point x if and only if ψ holds at the point x + 1.
As a matter of fact, for the sake of generality and simplicity, the proposed translation ψ′ of ◯ψ holds on an interval [x, y] if and only if ψ holds at the point-interval [y + 1, y + 1], regardless of whether [x, y] is a strict-interval or a point-interval. It is possible to force [x, y] to be a point-interval by adding π as a conjunct of the translation, that is, by defining ψ′ as π ∧ ⟨A⟩(¬π ∧ [B]π ∧ ⟨A⟩(π ∧ ψ)). A graphical account of the translation is given in Figure 5 (bottom): the outermost modality ⟨A⟩, together with the conjuncts ¬π and [B]π, identifies the two-point interval [y, y + 1]; then, the innermost conjunct ⟨A⟩(π ∧ ψ) guarantees that there exists an interval [y + 1, y′] where both π and ψ hold. The truth of π over [y + 1, y′] allows us to conclude that y′ = y + 1, and thus ψ holds over the point-interval [y + 1, y + 1]. In Figure 6 (bottom), we give an example of the application of the above translation that makes use of the model of Figure 1. We focus our attention on the translation of the LTL f formula ◯(¬p ∧ ¬q), which is true at time point 2 according to the point-based semantics π, into the AB formula ψ = ⟨A⟩(¬π ∧ [B]π ∧ ⟨A⟩(π ∧ ¬p ∧ ¬q)), which holds over the interval [0, 2]. The outermost modality ⟨A⟩ constrains the three conjuncts ¬π, [B]π, and ⟨A⟩(π ∧ ¬p ∧ ¬q) to simultaneously hold over an interval [2, y]. From the truth of ¬π, it follows that y > 2, and from the truth of [B]π, we can conclude that y = 3. Now, from the truth of ⟨A⟩(π ∧ ¬p ∧ ¬q) over [2, 3], it follows that there exists y ≥ 3 such that the conjuncts π, ¬p, and ¬q simultaneously hold over [3, y]. Once more, π is true on [3, y] if and only if y = 3, that is, [3, y] = [3, 3], and thus both ¬p and ¬q hold over [3, 3].
[Figure 6: the AB translations of p U (¬p ∧ ¬q) (top) and ◯(¬p ∧ ¬q) (bottom), evaluated on the model of Figure 1.]

Last but not least, it is worth pointing out that the truth values of proposition letters on strict-intervals do not come into play in the proposed translations. It immediately follows that such translations still work properly under the homogeneity assumption that we will make in all the following sections.

The logic BD of prefixes and infixes

In this section, we introduce the logic BD of prefixes and infixes, we formally state the homogeneity assumption, and we define the relation of finite satisfiability under such an assumption. We conclude the section with a short analysis of the relationships between such a logic and a suitable restriction of generalized *-free regular expressions. BD formulas are built up from a countable set Prop of proposition letters according to the following grammar:

ϕ ::= p | ¬ϕ | ϕ ∨ ϕ | ⟨B⟩ϕ | ⟨D⟩ϕ,

where p ∈ Prop and ⟨B⟩ and ⟨D⟩ are the modalities for Allen's relations Begins and During, respectively. In the following, given a formula ϕ, we denote by |ϕ| the size of the parse tree for ϕ generated by the above grammar. It is straightforward to show that |ϕ| is less than or equal to the number of symbols used to encode ϕ. Let N ∈ N be a natural number and let I N = {[x, y] : 0 ≤ x ≤ y ≤ N} be the set of all intervals over the prefix 0 . . . N of N. A (finite) model for BD formulas is a pair M = (I N, V), where V : I N → 2^Prop is a valuation that maps intervals in I N to sets of proposition letters.
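The parse-tree size |ϕ| can be computed by a one-line recursion. This is a sketch with a tuple encoding of our own (strings for proposition letters, tuples for connectives and modalities), not a data structure from the paper.

```python
# Parse-tree size |phi| of a BD formula.
# Hypothetical encoding (ours): strings are proposition letters;
# ("not", f), ("or", f, g), ("B", f), ("D", f) are the connectives.

def size(f):
    if isinstance(f, str):          # a proposition letter: one leaf node
        return 1
    # one node for the operator plus the sizes of its subformulas
    return 1 + sum(size(g) for g in f[1:])

# phi = <B>(p or not q): five parse-tree nodes (<B>, or, p, not, q)
phi = ("B", ("or", "p", ("not", "q")))
assert size(phi) == 5
```

As the text notes, |ϕ| never exceeds the number of symbols used to write ϕ down, since each node consumes at least one symbol.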
Given a model M and an interval [x, y], the semantics of a BD formula is defined as follows:
• M, [x, y] ⊧ p iff p ∈ V([x, y]);
• M, [x, y] ⊧ ¬ψ iff M, [x, y] ⊭ ψ;
• M, [x, y] ⊧ ψ1 ∨ ψ2 iff M, [x, y] ⊧ ψ1 or M, [x, y] ⊧ ψ2;
• M, [x, y] ⊧ ⟨B⟩ψ iff there is y′, with x ≤ y′ < y, such that M, [x, y′] ⊧ ψ;
• M, [x, y] ⊧ ⟨D⟩ψ iff there are x′ and y′, with x < x′ ≤ y′ < y, such that M, [x′, y′] ⊧ ψ.

The logical constants ⊤ (true) and ⊥ (false), the Boolean operators ∧, →, and ↔, and the (universal) dual modalities [B] and [D] can be derived in the standard way. We say that a BD formula ϕ is finitely satisfiable if and only if there exist a (finite) model M and an interval [x, y] such that M, [x, y] ⊧ ϕ (w.l.o.g., [x, y] can be assumed to be the maximal interval [0, N]). Hereafter, whenever we use the term satisfiable, we always mean finitely satisfiable, that is, satisfiable over the class of finite linear orders.

Definition 1 (Homogeneity). We say that a model M = (I N, V) satisfies the homogeneity property (M is homogeneous, for short) if and only if, for all p ∈ Prop and all [x, y] ∈ I N, p ∈ V([x, y]) ⇔ ∀z ∈ [x, y] (p ∈ V([z, z])).

In Figure 7, we give an example of a homogeneous model (a) and of an arbitrary non-homogeneous one (b). For the sake of readability, we will refer to them as M a = (I 7, V a) and M b = (I 7, V b), respectively. The complete definitions of V a and V b are given in Figure 7 below the respective models. It is easy to check that the definition of V a satisfies the homogeneity property as stated by Definition 1. To begin with, we observe that, in homogeneous models, the labelling V of the intersection of two intervals contains the labellings of the two intervals. This is the case, for instance, with intervals [1, 4] and [2, 6] in Figure 7 (a), whose intersection is the interval [2, 4]. This is not the case with arbitrary models. Consider, for instance, the very same intervals in Figure 7 (b).
Interval [1, 4] violates the homogeneity property because r ∈ V b([1, 4]) but r ∉ V b([1, 1]), thus violating the ⇒ direction of Definition 1. Interval [2, 4] violates the homogeneity property as well, because q ∈ V b([2, 2]) ∩ V b([3, 3]) ∩ V b([4, 4]) but q ∉ V b([2, 4]) (the same holds for r), thus violating the ⇐ direction of Definition 1. All the other intervals, including interval [2, 6], in Figure 7 (b) satisfy the homogeneity property, but this is obviously not sufficient for the model to be homogeneous, since every interval of the model must satisfy such a property. It is worth pointing out that the homogeneity property does not entail, in general, a similar containment property for formulas ψ ∉ Prop, as can easily be checked in the homogeneous model of Figure 7 (a). Finally, we would like to observe that, in homogeneous models, for any proposition letter, the labelling of the point-intervals determines that of arbitrary intervals. This is not the case with arbitrary models: counterexamples are intervals [1, 4] and [2, 4] in Figure 7 (b).

[Figure 7: the homogeneous model M a and the non-homogeneous model M b, together with the tables defining their valuations V a and V b.]

Satisfiability can be relativized to homogeneous models. We say that a BD formula ϕ is satisfiable under homogeneity if there is a homogeneous model M such that M, [0, N] ⊧ ϕ. Satisfiability under homogeneity is clearly more restricted than plain satisfiability. We know from [MM14, MMK10] that dropping the homogeneity assumption makes D undecidable. This is not the case with the fragment B, which remains decidable [GMS04], as its expressive power is quite limited. Hereafter, we will always refer to BD under the homogeneity assumption, denoted by BD hom. We conclude the section with a short account of the relationships between BD hom and generalized *-free regular expressions. Let Σ be a finite alphabet.
A generalized *-free regular expression (hereafter, simply expression) e over Σ is a term of the form:

e ::= ∅ | a | ¬e | e + e | e ⋅ e,

for any a ∈ Σ. We exclude the empty word from the syntax, as this makes the correspondence between finite words and finite models of BD hom formulas easier (such a simplification is quite common in the literature). An expression e defines a language Lang(e) ⊆ Σ+, which is inductively defined as follows:
• Lang(∅) = ∅;
• Lang(a) = {a}, for every a ∈ Σ;
• Lang(¬e) = Σ+ \ Lang(e);
• Lang(e1 + e2) = Lang(e1) ∪ Lang(e2);
• Lang(e1 ⋅ e2) = {w1 w2 : w1 ∈ Lang(e1), w2 ∈ Lang(e2)}.

In [Sto74], Stockmeyer proves that the problem of deciding the emptiness of Lang(e), for a given expression e, is non-elementarily hard. Let us now consider the logic C of the chop operator (under the homogeneity assumption). As informally described in Section 2, C features one binary modality, the "chop" operator ⟨C⟩, plus the modal constant π. As already pointed out (see Figure 4), modalities ⟨B⟩ and ⟨D⟩ of BD hom can be easily encoded in C as follows: ⟨B⟩ψ = ψ⟨C⟩¬π and ⟨D⟩ψ = ¬π⟨C⟩(ψ⟨C⟩¬π). It can be shown that, for any expression e over Σ, there exists a formula ϕ e of C whose set of models is the language Lang(e), that is, Lang(e) = {V([0, 0]) . . . V([N, N]) : (I N, V) ⊧ ϕ e}. Such a formula ϕ e is the conjunction of two sub-formulas ψ Σ and ψ e, where ψ Σ guarantees that each unitary interval of the model is labelled by exactly one proposition letter from Σ, and ψ e constrains the valuation on the basis of the inductive structure of (the translation of) e. As an example, if e = e1 ⋅ e2, then ψ e = ψ e1 ⟨C⟩((¬π ∧ ¬(¬π⟨C⟩¬π))⟨C⟩ψ e2). Such a mapping of expressions into C formulas allows one to conclude that the satisfiability problem for C is non-elementarily hard (its non-elementary decidability follows from the opposite mapping).
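The inductive definition of Lang(e) can be turned into an executable check on words of bounded length: since membership of a word w depends on w only, restricting every clause to Σ+ words of length at most K is exact on that set. The encoding and names below are ours, a sketch rather than anything from the paper.

```python
from itertools import product

SIGMA = ("a", "b")
K = 4  # maximum word length considered

# All non-empty words over SIGMA of length <= K (the empty word is excluded,
# as in the syntax above).
ALL = frozenset("".join(w) for n in range(1, K + 1)
                for w in product(SIGMA, repeat=n))

def lang(e):
    """Lang(e) restricted to words of length <= K (exact on that set)."""
    op = e[0]
    if op == "empty":                    # the expression for the empty language
        return frozenset()
    if op == "sym":                      # a single letter a in SIGMA
        return frozenset({e[1]})
    if op == "not":                      # complement within SIGMA^+
        return ALL - lang(e[1])
    if op == "+":                        # union
        return lang(e[1]) | lang(e[2])
    if op == ".":                        # concatenation
        return frozenset(u + v for u in lang(e[1]) for v in lang(e[2])
                         if len(u) + len(v) <= K)
    raise ValueError(e)

# Pre(a) = a . (not empty): the words with prefix "a"
pre_a = (".", ("sym", "a"), ("not", ("empty",)))
assert "ab" in lang(pre_a) and "ba" not in lang(pre_a)
```

Note that bounded emptiness checking of this kind is only a semi-test: a language may be empty up to length K yet non-empty overall, which is precisely why the general emptiness problem is so hard.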
A careful look at the expression-to-formula mapping reveals that modality C only comes into play in the translation of expressions featuring the concatenation operator. In view of that, it is worth looking for subclasses of generalized *-free regular expressions where the concatenation operation is used in a very restricted manner, so as to avoid the need for modality C in the translation. Let us focus our attention on the following class of restricted expressions:

e ::= ∅ | a | ¬e | e + e | Pre(e) | Inf(e),

for any a ∈ Σ, where Pre(e) and Inf(e) are respectively a shorthand for e ⋅ (¬∅) (thus defining the language Lang(Pre(e)) = {wv : w ∈ Lang(e), v ∈ Σ+}) and (¬∅) ⋅ e ⋅ (¬∅) (thus defining the language Lang(Inf(e)) = {uwv : u, v ∈ Σ+, w ∈ Lang(e)}). Every restricted expression e of the above form can be mapped into an equivalent formula ϕ e of BD hom by applying the usual constructions for the empty language, letters, negation, and union, plus the following two rules: (i) ϕ Pre(e) = ⟨B⟩ϕ e, and (ii) ϕ Inf(e) = ⟨D⟩ϕ e. In the next sections, we will show that the satisfiability problem for BD hom belongs to EXPSPACE. From the above mapping, it immediately follows that the emptiness problem for the considered subclass of expressions, which only uses prefixes and infixes, can be decided in exponential space (rather than in non-elementary time).

Homogeneous compass structures

In this section, we introduce a spatial representation of homogeneous models, called homogeneous compass structures, that will considerably simplify the proofs of the next sections. Let ϕ be a BD hom formula. We define the closure of ϕ, denoted by Cl(ϕ), as the set of all its subformulas and of their negations, plus the formulas ⟨B⟩⊤ and [B]⊥. For every BD hom formula ϕ, it holds that |Cl(ϕ)| ≤ 2|ϕ| + 2.
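The closure Cl(ϕ) and the bound |Cl(ϕ)| ≤ 2|ϕ| + 2 can be checked directly. The snippet below is a self-contained sketch under our own tuple encoding; `neg` performs the usual identification of ¬¬ψ with ψ.

```python
# Closure of a BD formula: all subformulas, their negations, and the
# two extra formulas <B>true and [B]false (here encoded as not <B>true).
# Hypothetical tuple encoding, ours: strings are letters, tuples operators.

TRUE = "true"

def neg(f):                      # identify not(not(psi)) with psi
    return f[1] if isinstance(f, tuple) and f[0] == "not" else ("not", f)

def subformulas(f):
    yield f
    if isinstance(f, tuple):
        for g in f[1:]:
            yield from subformulas(g)

def closure(f):
    cl = set()
    for g in subformulas(f):
        cl.add(g)
        cl.add(neg(g))
    cl.add(("B", TRUE))          # <B>true
    cl.add(neg(("B", TRUE)))     # its negation, playing the role of [B]false
    return cl

def size(f):                     # parse-tree size |phi|
    return 1 if isinstance(f, str) else 1 + sum(size(g) for g in f[1:])

phi = ("and", ("B", "p"), ("D", ("not", "q")))
assert len(closure(phi)) <= 2 * size(phi) + 2
```

On this example |ϕ| = 6, so the bound 2|ϕ| + 2 = 14 is respected (the closure actually has 12 elements, since "q" and ¬q are shared between two subformulas).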
A ϕ-atom (atom, for short) is a maximal subset F of Cl(ϕ) that, for all ψ ∈ Cl(ϕ), satisfies the following two conditions (as usual, we identify every formula of the form ¬¬ψ with ψ): (i) ψ ∈ F if and only if ¬ψ ∉ F, and (ii) if ψ = ψ1 ∨ ψ2, then ψ ∈ F if and only if {ψ1, ψ2} ∩ F ≠ ∅. Let At(ϕ) be the set of all ϕ-atoms. We have that |At(ϕ)| ≤ 2^(|ϕ|+1), where |ϕ| = |Cl(ϕ)|/2. It is easy to see that, given a model M = (I N, V), we can always univocally associate an atom F [x,y] in At(ϕ) with each interval [x, y] ∈ I N by simply putting F [x,y] = {ψ ∈ Cl(ϕ) : M, [x, y] ⊧ ψ}. An example of such an extension of the labelling V to atoms is provided in Figure 8, in both a graphical (top) and a tabular (bottom) form. For the sake of readability, in the graphical representation of Figure 8 we only provide the value for positive formulas, since the presence of negative ones follows from the absence of their negation in the atom. As an example, consider the atom F [1,2] associated with the interval [1, 2] in Figure 8.

For R ∈ {B, D}, we introduce the functions Req R, Obs R, and Box R, that map each atom F ∈ At(ϕ) to the following subsets of Cl(ϕ):
• Req R(F) = {ψ ∈ Cl(ϕ) : ⟨R⟩ψ ∈ F};
• Obs R(F) = {ψ ∈ Cl(ϕ) : ⟨R⟩ψ ∈ Cl(ϕ), ψ ∈ F};
• Box R(F) = {ψ ∈ Cl(ϕ) : [R]ψ ∈ F}.

Notice that, for each F ∈ At(ϕ) and each formula ψ, with ψ ∈ {ψ′ : ⟨B⟩ψ′ ∈ Cl(ϕ)}, either ψ ∈ Req B(F) or ¬ψ ∈ Box B(F); the same holds for D (this means that, per se, Box B(⋅) and Box D(⋅) are not strictly necessary; we introduce them to simplify some proofs). The sets Req R(F), Obs R(F), and Box R(F) will be extensively used to prove most of the results of the paper. For that reason, we would like to illustrate their behaviour by means of the example in Figure 8. First, let us observe that all these sets are univocally determined by the atom given as their argument; however, while Obs R(F) ⊆ F, this is not the case with Req R(F) and Box R(F).
As an example, it holds that q ∈ Req D(F [1,4]), even though q ∉ F [1,4]. On the other hand, if ⟨R⟩ψ ∈ Cl(ϕ) and ψ ∉ Req R(F), then, necessarily, [R]¬ψ ∈ F and thus ¬ψ ∈ Box R(F). It is easy to prove that Box R(F) ∩ Req R(F) = ∅ and Box R(F) ∪ Req R(F) = {ψ : ⟨R⟩ψ ∈ Cl(ϕ)}, that is, (Req R(F), Box R(F)) is always a partition of the whole set of temporal requests R in Cl(ϕ).

[Figure 8: a graphical (top) and tabular (bottom) account of the behaviour of Req R(F), Obs R(F), and Box R(F), for F ∈ At(ϕ) and R ∈ {B, D}, with ϕ = ⟨B⟩(p ∧ ¬r) ∧ ⟨D⟩(¬q ∧ ⟨D⟩q), where ψ1 = p ∧ ¬r and ψ2 = ¬q ∧ ⟨D⟩q.]

Consider, for instance, interval [1, 4] in Figure 8. We have that Req D(F [1,4]) = {q} and, since {ψ : ⟨D⟩ψ ∈ Cl(ϕ)} = {q, ψ2}, it holds that Box D(F [1,4]) = {¬ψ2}. As opposed to what we stated above for Req R, for every ¬ψ ∈ Box B(F [x,y]) (resp., ¬ψ ∈ Box D(F [x,y])) and every prefix [x, y′] (resp., infix [x′, y′]) of [x, y], we have that ψ ∉ Obs B(F [x,y′]) (resp., ψ ∉ Obs D(F [x′,y′])). In the considered case, for instance, since ¬ψ2 ∈ Box D(F [1,4]), we can conclude that ψ2 ∉ Obs D(F [2,2]) ∪ Obs D(F [2,3]) ∪ Obs D(F [3,3]).
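The three functions can be computed mechanically from an atom. The snippet below is a sketch with our own tuple encoding; the closure and the "atom" are hand-built toy sets (restricted to the B-relevant formulas), not the ones of Figure 8.

```python
# Req_B, Obs_B and Box_B of an atom F with respect to a closure cl.
# Hypothetical encoding (ours): ("B", psi) stands for <B>psi, and, by
# the maximality of atoms, [B]not psi is in F iff <B>psi is not in F.

def neg(f):
    return f[1] if isinstance(f, tuple) and f[0] == "not" else ("not", f)

def req_B(F):
    return {f[1] for f in F if isinstance(f, tuple) and f[0] == "B"}

def obs_B(cl, F):
    return {f[1] for f in cl
            if isinstance(f, tuple) and f[0] == "B" and f[1] in F}

def box_B(cl, F):
    return {neg(f[1]) for f in cl
            if isinstance(f, tuple) and f[0] == "B" and f not in F}

cl = {("B", "p"), ("not", ("B", "p")), ("B", "q"), ("not", ("B", "q")),
      "p", ("not", "p"), "q", ("not", "q")}
F = {("B", "p"), ("not", ("B", "q")), "p", ("not", "q")}  # a toy atom

assert req_B(F) == {"p"}            # <B>p is requested
assert obs_B(cl, F) == {"p"}        # p itself is observed
assert box_B(cl, F) == {("not", "q")}
# (Req_B(F), Box_B(F)) partitions the requests {psi : <B>psi in cl}:
assert req_B(F) | {neg(f) for f in box_B(cl, F)} == {"p", "q"}
```

The final assertion mirrors the partition property stated above: every ψ with ⟨B⟩ψ in the closure lands either in Req_B(F) or (negated) in Box_B(F), never in both.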
We would like to further explain the relation between Req R and Obs R by considering the example in Figure 8 from another angle. Suppose that, for a given N ∈ N (in our example, N = 4), we want to find, for each [x, y] in I N, a "labelling" F [x,y] ∈ At(ϕ) such that:

(*1) M, [x, y] ⊧ ψ if and only if ψ ∈ F [x,y], where M = (I 4, V) and V([x, y]) = F [x,y] ∩ Prop.

With the additional property:

(*2) ϕ ∈ F [0,N],

such a problem turns out to be the bounded satisfiability problem, which is simpler than the problem we are addressing in this paper, namely, the finite satisfiability problem; in the latter, indeed, N is not given as a parameter. It can be easily shown that the labellings for which the following property holds:

(*3) Req B(F [x,y]) = ⋃_{x ≤ y′ < y} Obs B(F [x,y′]) and Req D(F [x,y]) = ⋃_{x < x′ ≤ y′ < y} Obs D(F [x′,y′]), for all [x, y] ∈ I N,

are all and only the labellings that satisfy property (*1). This means that all the requests that we associate with an interval [x, y] by means of its labelling F [x,y] must be satisfied (fulfilled). Consider, for instance, the B relation. It holds that Req B(F [x,y]) ⊆ ⋃_{x ≤ y′ < y} Obs B(F [x,y′]) (fulfilment). On the other hand, there cannot exist a formula ψ such that ψ ∈ ⋃_{x ≤ y′ < y} Obs B(F [x,y′]) and ψ ∉ Req B(F [x,y]), as this would mean ¬ψ ∈ Box B(F [x,y]), which implies ψ ∉ ⋃_{x ≤ y′ < y} Obs B(F [x,y′]) (contradiction). Thus, we can conclude that Req B(F [x,y]) ⊇ ⋃_{x ≤ y′ < y} Obs B(F [x,y′]) as well (consistency). The very same observations hold for modality D and all the other HS hom modalities. In fact, this is a general property which holds even without the homogeneity assumption. Thus, we can conclude that (*3) is a necessary and sufficient condition for M to satisfy ϕ. By making use of Req B, Req D, Obs B, and Obs D, we define two binary relations → B and → D over At(ϕ) as follows.

Definition 2.
For all F, G ∈ At(ϕ), we let:
• F → B G iff Req B(F) = Req B(G) ∪ Obs B(G);
• F → D G iff Req D(F) ⊇ Req D(G) ∪ Obs D(G).

Relations → B and → D are often referred to as view-to-type dependencies, since they constrain the labelling of a state (an interval) according to the labellings of the states that it can access via certain relations (interval relations). As already pointed out, for every ψ ∈ {ψ′ : ⟨B⟩ψ′ ∈ Cl(ϕ)}, we have either ψ ∈ Req B(F) or ¬ψ ∈ Box B(F) (and vice versa). Given two atoms F and G, with F → B G, and a formula ¬ψ ∈ Box B(F), it immediately follows that ψ ∉ Req B(F), and thus, from Req B(F) = Req B(G) ∪ Obs B(G), it follows that ψ ∉ Obs B(G). Now, from ¬ψ ∈ Box B(F), it follows that ⟨B⟩ψ ∈ Cl(ϕ), and from ⟨B⟩ψ ∈ Cl(ϕ) and ψ ∉ Obs B(G), it follows that ψ ∉ G. By the maximality of atoms, it follows that ¬ψ ∈ G. This allows us to conclude that, for every pair of atoms F and G with F → B G, we have Box B(F) ⊆ G. The same argument can be applied to the relation → D, and thus, for every pair of atoms F and G with F → D G, it holds that Box D(F) ⊆ G. In addition, relation → D is transitive (by the definition of atom, from Req R(F) ⊇ Req R(G) it immediately follows that Box R(F) ⊆ Box R(G)), while → B is not. A graphical account of relations → B and → D is given in Figure 9 and Figure 10, respectively. As for relation → B (Figure 9), we may observe how it is used to constrain the Req B(F [x,y]) part of the labelling, for each interval [x, y] and its maximal proper prefix (if any) [x, y − 1].
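Definition 2 translates directly into code. The following sketch uses our own encoding, and the "atoms" are hand-built toy sets rather than genuine maximal atoms; it is only meant to exercise the two conditions (equality for →B, containment for →D) and the transitivity of →D on a concrete example.

```python
# The view-to-type dependencies of Definition 2.
# Hypothetical encoding (ours): ("B", psi) is <B>psi, ("D", psi) is <D>psi;
# atoms are frozensets of formulas, cl is the set of request formulas.

def req(R, F):
    return {f[1] for f in F if isinstance(f, tuple) and f[0] == R}

def obs(R, cl, F):
    return {f[1] for f in cl
            if isinstance(f, tuple) and f[0] == R and f[1] in F}

def to_B(cl, F, G):   # F ->_B G: requests of F = requests + observables of G
    return req("B", F) == req("B", G) | obs("B", cl, G)

def to_D(cl, F, G):   # F ->_D G: requests of F contain those of G
    return req("D", F) >= req("D", G) | obs("D", cl, G)

cl = {("D", "p"), ("D", "q")}
F = frozenset({("D", "p"), ("D", "q")})      # requests both p and q
G = frozenset({("D", "p"), "q"})             # requests p, observes q
H = frozenset({"p"})                         # observes p only

assert to_D(cl, F, G) and to_D(cl, G, H)
assert to_D(cl, F, H)        # ->_D is transitive on these toy atoms
assert not to_D(cl, H, F)
assert to_B(cl, F, F)        # trivially, since cl has no <B> requests
```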
This means that, in a "consistent model", we expect that, for each strict-interval [x, y], F [x,y] → B F [x,y−1]. Notice that → B is intended to constrain only the maximal prefix of an interval, not all its prefixes.

[Figure 9: the interval model and the compass structure for ϕ = ⟨B⟩ψ1 ∧ ⟨B⟩ψ2 ∧ ⟨B⟩⟨D⟩ψ1, with ψ1 = r ∧ ¬p ∧ ¬q and ψ2 = ¬p ∧ ¬q ∧ ¬r, together with the table reporting, for each interval [x, y], the sets Req B(F [x,y]), Obs B(F [x,y]), Req D(F [x,y]), and Obs D(F [x,y]).]

Now, let us unravel the definition of → B. For a strict-interval [x, y], the following three conditions are satisfied: (1) Box B(F [x,y]) ⊆ Box B(F [x,y−1]), since Req B(F [x,y]) ⊇ Req B(F [x,y−1]); (2) Req B(F [x,y]) strictly extends Req B(F [x,y−1]) whenever Obs B(F [x,y−1]) \ Req B(F [x,y−1]) ≠ ∅; (3) from Req B(F [x,y]) = Req B(F [x,y−1]) ∪ Obs B(F [x,y−1]), every ψ ∈ Req B(F [x,y]) is either newly observed, that is, ψ ∈ Obs B(F [x,y−1]) \ Req B(F [x,y−1]), or is featured again as a request, that is, ψ ∈ Req B(F [x,y−1]).

In Figure 9, both in the interval model and in its compass structure counterpart, we show the labelling of intervals which is required to satisfy relation → B. We are faced with three ⟨B⟩ requests, namely, ψ1 = r ∧ ¬p ∧ ¬q, ψ2 = ¬p ∧ ¬q ∧ ¬r, and ψ3 = ⟨D⟩ψ1. We focus on the intervals starting at point 0, that is, on the atoms F [0,x′], with 0 ≤ x′ ≤ 4. Intervals [0, 0] and [0, 1] satisfy neither ψ1 nor ψ2, as, for instance, p holds on both of them. Then, Req B(F [0,0]) = Req B(F [0,1]) = ∅. Similarly, it holds that Obs B(F [0,0]) = Obs B(F [0,1]) = ∅.
In such a case, it trivially holds that F [0,0] → B F [0,1], that is, the two labellings can be swapped without any consequence on the consistency of B requests. In situations like this one, we will say that the involved atoms are B-reflexive. Reflexive atoms will play a crucial role in the proofs of the results of the next sections. They are denoted by a self-loop in the compass structure of Figure 9. Then, we have that F [0,3] → B F [0,2], since Req B(F [0,3]) = Req B(F [0,2]) ∪ Obs B(F [0,2]), that is, {ψ1} = ∅ ∪ {ψ1}; ψ1 is transferred by F [0,2] to the proper prefixes of [0, 2] by means of ψ1 ∈ Req B(F [0,2]). Notice that F [0,3] → B F [0,1] does not hold, since {ψ1} ≠ ∅ ∪ ∅, that is, F [0,1] neither satisfies ψ1 nor features ⟨B⟩ψ1. As for the observables, compared to atom F [0,2], atom F [0,3] "loses" ψ1, which is transferred to its ⟨B⟩ψ1 request, but it satisfies two more requests, namely, ψ2 and ⟨D⟩ψ1 in Obs B(F [0,3]), for the labelling of the intervals that feature [0, 3] as proper prefix. Finally, we have that F [0,4] → B F [0,3], that is, Req B(F [0,4]) = Req B(F [0,3]) ∪ Obs B(F [0,3]), as {ψ1, ψ2, ⟨D⟩ψ1} = {ψ1} ∪ {ψ2, ⟨D⟩ψ1}. As for F [0,4], Obs B(F [0,4]) = ∅ and Req B(F [0,4]) has two more formulas than Req B(F [0,3]).

Lemma 1. Let ϕ be a BD hom formula. For any atom F ∈ At(ϕ) and any sequence of atoms F h → B . . . → B F 1 → B F 0 = F, where, for each 0 ≤ i ≠ j ≤ h, Obs B(F i) \ Req B(F i) ≠ Obs B(F j) \ Req B(F j) or Req B(F i) ≠ Req B(F j), it holds that h ≤ 2|{ψ : ⟨B⟩ψ ∈ Cl(ϕ)}| − 2|Req B(F)| − |Obs B(F) \ Req B(F)|.

Proof. Let us consider the sequence of pairs (Req B(F h), Obs B(F h) \ Req B(F h)) . . . (Req B(F 0), Obs B(F 0) \ Req B(F 0)) induced by F h → B . . . → B F 1 → B F 0 = F. By Definition 2, it holds that Req B(F i) = Req B(F i−1) ∪ Obs B(F i−1), for every 0 < i ≤ h.
Moreover, by recursively unravelling the right-hand side of the equation Req_B(F_i) = Req_B(F_{i−1}) ∪ Obs_B(F_{i−1}), replacing Req_B(F_{i−j}) by Req_B(F_{i−j−1}) ∪ Obs_B(F_{i−j−1}), for 1 ≤ j < i, we obtain an alternative formulation of Req_B(F_i) as Req_B(F_0) ∪ ⋃_{0≤j<i} Obs_B(F_j). Now, for each ψ ∈ Req_B(F_h), let us define the index ireq(ψ) ∈ {0, . . . , h} as follows: ireq(ψ) = i if ψ ∈ Req_B(F_i) \ Req_B(F_{i−1}), and ireq(ψ) = 0 otherwise. The fact that ireq is well defined immediately follows from Req_B(F_i) ⊇ Req_B(F_{i−1}), for all 0 < i ≤ h. Similarly, for each ψ ∈ Req_B(F_h) ∪ Obs_B(F_h), let us define the index iobs(ψ) ∈ {0, . . . , h} as follows: iobs(ψ) = i if ψ ∈ Obs_B(F_i) \ Req_B(F_{i−1}), and iobs(ψ) = 0 otherwise. The fact that iobs is well defined follows from Req_B(F_i) ⊇ Req_B(F_{i−1}), for all 0 < i ≤ h, and Req_B(F_i) = Req_B(F_0) ∪ ⋃_{0≤j<i} Obs_B(F_j), for all 0 ≤ i ≤ h. We now prove that there exists no index i > 0 such that i ∉ Img(ireq) ∪ Img(iobs). By contradiction, let us assume that such an index exists (we deal with the case i > 0; the case i = 0 is symmetric). It follows that:

• from i ∉ Img(ireq), for each ψ ∈ Req_B(F_h), either ireq(ψ) > i, and thus ψ ∉ Req_B(F_i) ∪ Req_B(F_{i−1}), or i > ireq(ψ), and thus ψ ∈ Req_B(F_i) ∩ Req_B(F_{i−1}); hence, Req_B(F_i) = Req_B(F_{i−1});

• from i ∉ Img(iobs), for each ψ ∈ Req_B(F_h) ∪ Obs_B(F_h), either iobs(ψ) > i, and thus ψ ∉ Obs_B(F_i) ∪ Obs_B(F_{i−1}) ∪ Req_B(F_i) ∪ Req_B(F_{i−1}), or i > iobs(ψ), and then i − 1 > iobs(ψ), because if ψ ∈ Obs_B(F_{i−1}) \ Req_B(F_{i−1}), then ireq(ψ) = i (contradiction). Hence, Obs_B(F_i) \ Req_B(F_i) = Obs_B(F_{i−1}) \ Req_B(F_{i−1}) = ∅.

From the above two cases, we can conclude that (Req_B(F_i), Obs_B(F_i) \ Req_B(F_i)) = (Req_B(F_{i−1}), Obs_B(F_{i−1}) \ Req_B(F_{i−1})), and thus we obtain a contradiction.
Finally, we have that h ≤ |Img(ireq)| + |Img(iobs)|, with |Img(ireq)| ≤ |{ψ : ⟨B⟩ψ ∈ Cl(ϕ)}| − |Req_B(F_0)| and |Img(iobs)| ≤ |{ψ : ⟨B⟩ψ ∈ Cl(ϕ)}| − |Req_B(F_0)| − |Obs_B(F_0) \ Req_B(F_0)|, and thus h ≤ 2|{ψ : ⟨B⟩ψ ∈ Cl(ϕ)}| − 2|Req_B(F_0)| − |Obs_B(F_0) \ Req_B(F_0)|.

Let us now consider relation →_D. By Definition 2, given two atoms F and G, the condition imposed by F →_D G is weaker than the one imposed by →_B, that is, containment (superset) instead of full equality of the two sets. This is because with F →_D G we want to express the fact that G may label any sub-interval [x, y] of an interval [x′, y′] (x′ < x ≤ y < y′), not just its maximal proper sub-interval [x′ + 1, y′ − 1]; on the contrary, relation →_B only refers to the maximal proper prefix [x′, y′ − 1] of [x′, y′].

In Figure 10, both in the interval model and in its compass structure counterpart, we show the labelling of intervals which are required to satisfy relation →_D. We cope with three ⟨D⟩ requests: ψ_1 = p ∧ q, ψ_2 = ¬p ∧ q, and ψ_3 = p ∧ ¬q. Let us consider all the proper sub-intervals of the largest interval in the model.

Figure 10. The interval model and the compass structure counterpart for ϕ = ⟨D⟩(p ∧ q) ∧ ⟨D⟩(¬p ∧ q) ∧ ⟨D⟩(p ∧ ¬q) over the points 0, . . . , 4, together with the table of the atoms F[x,y] and their sets Req_D and Obs_D. In particular, Req_D(F[0,4]) = {ψ_1, ψ_2, ψ_3} with Obs_D(F[0,4]) = ∅, Obs_D(F[1,1]) = Obs_D(F[1,2]) = {ψ_3}, Req_D(F[1,3]) = {ψ_1}, Obs_D(F[2,2]) = {ψ_1}, and Obs_D(F[2,3]) = {ψ_2}.
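Both successor relations admit a direct transcription on this running material. The following Python sketch is ours, not the paper's (the set-based encoding of atoms is illustrative); it assumes that F →_B G amounts to the equality Req_B(F) = Req_B(G) ∪ Obs_B(G) used in the examples, and F →_D G to the corresponding containment Req_D(F) ⊇ Req_D(G) ∪ Obs_D(G):

```python
def to_B(req_F, req_G, obs_G):
    """F ->_B G: F requests exactly what G requests plus what G observes."""
    return req_F == req_G | obs_G

def to_D(req_F, req_G, obs_G):
    """F ->_D G: containment in place of equality (G may label any sub-interval)."""
    return req_F >= req_G | obs_G

# Figure 9: Req_B(F[0,2]) = {}, Obs_B(F[0,2]) = {psi1}, Req_B(F[0,3]) = {psi1}.
print(to_B({'psi1'}, set(), {'psi1'}))                  # True:  F[0,3] ->_B F[0,2]
print(to_B({'psi1'}, set(), set()))                     # False: F[0,3] ->_B F[0,1] fails
# Figure 10: Req_D(F[0,4]) = {psi1, psi2, psi3} covers Req_D and Obs_D of F[1,3].
print(to_D({'psi1', 'psi2', 'psi3'}, {'psi1'}, set()))  # True:  F[0,4] ->_D F[1,3]
```

The asymmetry between equality and containment is exactly what makes →_B suitable for the unique maximal prefix and →_D for arbitrary sub-intervals.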
The relations L(x, y) →_D L(x′, y′) holding among the intervals of the model (for instance, L(0, 4) →_D L(1, 3), L(0, 4) →_D L(1, 2), and L(0, 4) →_D L(2, 3)), as well as those that do not hold, are graphically depicted in the top right part of Figure 10. Notice that F[0,4] →_D F[0,3] cannot be turned into F[0,3] →_D F[0,4], because Req_D(F[0,3]) ⊉ Obs_D(F[0,4]) ∪ Req_D(F[0,4]), as {ψ_1} ⊉ ∅ ∪ {ψ_1, ψ_2, ψ_3}. It is worth pointing out that, in general, the following stronger consistency property, involving equality in place of containment, holds for ⟨D⟩ requests: for all [x, y], it holds that Req_D(F[x,y]) = Req_D(F[x+1,y−1]) ∪ Obs_D(F[x+1,y−1]) ∪ Obs_D(F[x+1,y−2]) ∪ Obs_D(F[x+2,y−1]). Such a property states that the ⟨D⟩ requests that hold over an interval [x, y] must be completely "covered" by those holding over its maximal proper sub-interval [x + 1, y − 1] and the union of all the observables of [x + 1, y − 1], [x + 1, y − 2] (maximal proper prefix of [x + 1, y − 1]), and [x + 2, y − 1] (maximal proper suffix of [x + 1, y − 1]). As an example, in Figure 10, we have that Req_D(F[0,4]) = Req_D(F[1,3]) ∪ Obs_D(F[1,3]) ∪ Obs_D(F[1,2]) ∪ Obs_D(F[2,3]) = {ψ_1} ∪ ∅ ∪ {ψ_3} ∪ {ψ_2} = {ψ_1, ψ_2, ψ_3}. Observe that, instead, neither F[1,3] →_D F[1,2] nor F[1,3] →_D F[2,3] hold.

The next proposition reduces the equality condition for any pair of atoms to the equality of their propositional components and their respective sets of ⟨B⟩ and ⟨D⟩ requests.

Proposition 1. For each pair of atoms F, G ∈ At(ϕ), F = G if and only if Req_B(F) = Req_B(G), Req_D(F) = Req_D(G), and F ∩ Prop = G ∩ Prop.
The proof of Proposition 1 trivially follows from the fact that, for each atom F and each ψ ∈ F, either ψ ∈ Prop ∪ {⟨B⟩ψ′ : ψ′ ∈ Req_B(F)} ∪ {⟨D⟩ψ′ : ψ′ ∈ Req_D(F)} or ψ is a Boolean combination of formulas in Prop ∪ {⟨B⟩ψ′ : ψ′ ∈ Req_B(F)} ∪ {⟨D⟩ψ′ : ψ′ ∈ Req_D(F)}.

Given a formula ϕ, a ϕ-compass structure (simply compass structure, when ϕ is clear from the context) is a pair G = (G_N, L), where N ∈ N, G_N = {(x, y) : 0 ≤ x ≤ y ≤ N}, and L : G_N → At(ϕ) is a labelling function that satisfies the following conditions:
• (initial formula) ϕ ∈ L(0, N);
• (B-consistency) for all 0 ≤ x ≤ y < N, L(x, y + 1) →_B L(x, y), and for all 0 ≤ x ≤ N, Req_B(L(x, x)) = ∅;
• (D-consistency) for all 0 ≤ x < x′ ≤ y′ < y ≤ N, L(x, y) →_D L(x′, y′);
• (D-fulfilment) for all 0 ≤ x ≤ y ≤ N and all ψ ∈ Req_D(L(x, y)), there exist x < x′ ≤ y′ < y such that ψ ∈ L(x′, y′).

Observe that the definition of →_B and B-consistency guarantee that all the existential requests via relation B (hereafter B-requests) are fulfilled in a compass structure. We say that an atom F ∈ At(ϕ) is B-reflexive (resp., D-reflexive) if F →_B F (resp., F →_D F); if F is not B-reflexive (resp., D-reflexive), it is B-irreflexive (resp., D-irreflexive).

Let G = (G_N, L) be a compass structure. We define the function P : G_N → 2^Prop such that P(x, y) = {p ∈ Prop : p ∈ L(x′, x′) for all x ≤ x′ ≤ y}. We say that a ϕ-compass structure G = (G_N, L) is homogeneous if, for all (x, y) ∈ G_N, L(x, y) ∩ Prop = P(x, y). The proof of the following theorem is straightforward and thus omitted. Hereafter, we will often write compass structure for homogeneous ϕ-compass structure.

In Figure 11, we depict the homogeneous model M = (I_7, V) (left) with the corresponding compass structure G = (G_7, L) (right), for a given formula ϕ.
As for the compass structure G, we first observe that each interval [x, y] in M is mapped to a point in the second octant of the N × N grid (in Figure 11, we depict the first quadrant of such a grid, where the first octant is shaded). Thanks to such a mapping, interval relations are mapped into special relations between points (by a slight abuse of terminology, we borrow the names of the interval relations). As an example, in Figure 11, point (0, 2) begins (0, 3). Similarly, as highlighted by the hatched triangle, point (1, 6) has points (2, 2), (2, 3), (3, 3), (2, 4), (3, 4), (4, 4), (2, 5), (3, 5), (4, 5), and (5, 5) as sub-intervals. In general, all points (x, x) are labelled with irreflexive atoms containing [B]⊥, while all points (x, y), with x < y, are labelled with atoms containing ⟨B⟩⊤. The variety of atoms is exemplified by the following cases. Atom L(0, 3) is both B-irreflexive (Box_B(L(0, 3)) = {p} and ¬p ∈ L(0, 3)) and D-irreflexive (Box_D(L(0, 3)) = {q} and ¬q ∈ L(0, 3)); atom L(4, 6) is both B-reflexive (Box_B(L(4, 6)) = {p} and p ∈ L(4, 6)) and D-reflexive (Box_D(L(4, 6)) = {q} and q ∈ L(4, 6)); atom L(4, 7) is B-irreflexive (Box_B(L(4, 7)) = {p} and ¬p ∈ L(4, 7)) and D-reflexive (Box_D(L(4, 7)) = {q} and q ∈ L(4, 7)); and atom L(0, 2) is B-reflexive (Box_B(L(0, 2)) = {p} and p ∈ L(0, 2)) and D-irreflexive (Box_D(L(0, 2)) = {q} and ¬q ∈ L(0, 2)). Finally, we point out that L(4, 7) →_B L(4, 6) (Req_B(L(4, 7)) = ∅ and Obs_B(L(4, 6)) ∪ Req_B(L(4, 6)) = ∅) and L(0, 3) →_D L(1, 2) (Req_D(L(0, 3)) = ∅ and Obs_D(L(1, 2)) ∪ Req_D(L(1, 2)) = ∅).

In the next sections, we will prove a small model theorem about compass structures for an input BD_hom formula ϕ. In particular, we will prove that a model can be built by contracting a larger one in such a way that the resulting model is still a compass structure for ϕ.
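The four defining conditions of a compass structure can be checked mechanically on a labelled grid. The following sketch is ours and assumes a toy encoding of atoms as dictionaries of request/observable sets plus the set 'sat' of formulas the atom satisfies; →_B and →_D are rendered by the equality and containment conditions discussed above:

```python
def is_compass(L, N, phi):
    """Check the four conditions of a phi-compass structure on labelling L."""
    if phi not in L[(0, N)]['sat']:                    # initial formula
        return False
    for x in range(N + 1):
        if L[(x, x)]['req_b']:                         # point intervals carry no <B> requests
            return False
        for y in range(x, N):                          # B-consistency: L(x,y+1) ->_B L(x,y)
            F, G = L[(x, y + 1)], L[(x, y)]
            if F['req_b'] != G['req_b'] | G['obs_b']:
                return False
    for x in range(N + 1):                             # D-consistency: L(x,y) ->_D L(x',y')
        for y in range(x, N + 1):
            for xs in range(x + 1, y):
                for ys in range(xs, y):
                    if not L[(x, y)]['req_d'] >= L[(xs, ys)]['req_d'] | L[(xs, ys)]['obs_d']:
                        return False
    for (x, y), F in L.items():                        # D-fulfilment on proper sub-intervals
        for psi in F['req_d']:
            if not any(psi in L[(xs, ys)]['sat']
                       for xs in range(x + 1, y) for ys in range(xs, y)):
                return False
    return True

def atom(sat=frozenset()):
    return {'req_b': set(), 'obs_b': set(), 'req_d': set(), 'obs_d': set(), 'sat': set(sat)}

# A trivial structure for N = 1 whose maximal interval satisfies phi.
L = {(0, 0): atom(), (0, 1): atom({'phi'}), (1, 1): atom()}
print(is_compass(L, 1, 'phi'))   # True
```

A real checker would of course derive the request and observable sets from the atoms themselves; here they are supplied directly, since the point is only the shape of the four conditions.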
To achieve such a goal, we need to state some spatial properties of compass structures that involve the distinction between B-reflexive (resp., D-reflexive) and B-irreflexive (resp., D-irreflexive) atoms. Intuitively, if a point is labelled with an atom which is both B-reflexive and D-reflexive, its only purpose is to "fill the gaps" in the model, as each B/D-request that it possibly solves for other points is transferred to its prefixes/sub-intervals. On the other hand, a point that is B-irreflexive, D-irreflexive, or both must be treated carefully, since it features at least one B/D-request in its observables that is solved once and for all, and is not transferred to its prefixes/sub-intervals.

Spatial properties of compass structures for BD_hom formulas

In this section, we prove a series of spatial properties of compass structures that turn out to be very useful in the proofs of the results of Sections 6 and 8. Each property is proved by making use of the previous one, as follows:

Section 5.1 - We first show that, for any compass structure and any of its X-axis coordinates x, the sequence L(x, 0) . . . L(x, N) is monotonic, that is, for any triplet 0 ≤ y_1 < y_2 < y_3 ≤ N, it cannot be the case that L(x, y_1) = L(x, y_3) and L(x, y_1) ≠ L(x, y_2). Such a property allows us to represent the relevant information associated with any column x in space (polynomially) bounded in |ϕ|.

Section 5.2 - Next, we define an equivalence relation over columns such that two columns are equivalent if they feature the same set of atoms. It is easy to verify that such an equivalence relation is of finite index, and that its index is exponentially bounded in |ϕ|. By exploiting the representation of Section 5.1, we first define a partial order over equivalent columns, and then we prove that, in a compass structure, such a relation totally orders equivalent columns.
Section 5.3 - By exploiting the total order over the elements of each equivalence class, we show a crucial property of the rows of a compass structure, which is the cornerstone of the proof. First, we associate with each point (x, y) on row y, with 0 ≤ x ≤ y, a tuple consisting of: (i) L(x, y), (ii) the equivalence class ∼_x of column x, and (iii) the set of pairs (L(x′, y), ∼_{x′}), for all x < x′ ≤ y. Then, we prove that, for every pair of points (x, y), (x′, y) that feature the same tuple, L(x, y′) = L(x′, y′) for all y′ > y, that is, columns x and x′ behave the same way (i.e., exhibit the same labelling) from y to the upper end.

5.1. A finite characterisation of columns and of their relationships. In this section, we first show that, in every compass structure, the atoms that appear in a column x must respect a certain order, that is, they cannot be interleaved. Let F, G, and H be three pairwise distinct atoms. In Figure 12.(a), we give a graphical account of the property that we are going to prove, while, in Figure 12.(b), we show a violation of it (atom H appears before and after atom G moving upward along the column). We preliminarily prove a fundamental property of B-irreflexive atoms.

Lemma 2. Let G = (G_N, L) be a compass structure. For all x ≤ y < N, if Req_B(L(x, y)) ⊂ Req_B(L(x, y + 1)), then L(x, y) is B-irreflexive.

Proof. Let us assume by contradiction that L(x, y) is B-reflexive. This means that Box_B(L(x, y)) ⊆ L(x, y). Since Req_B(L(x, y)) ⊂ Req_B(L(x, y + 1)), there exists a formula ψ ∈ Req_B(L(x, y + 1)) \ Req_B(L(x, y)), and thus we have ¬ψ ∈ Box_B(L(x, y)) and, by B-reflexivity of L(x, y), ¬ψ ∈ L(x, y). Since G is a compass structure, it holds that L(x, y) →_B L(x, y − 1) →_B . . . →_B L(x, x), and thus ¬ψ ∈ Box_B(L(x, y′)) and ¬ψ ∈ L(x, y′), for all x ≤ y′ ≤ y. Since, by definition of →_B, all B-requests are fulfilled in a compass structure, we can conclude that ψ ∉ Req_B(L(x, y + 1)) (contradiction).
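The no-interleaving property of column sequences can be phrased as a one-pass check. The sketch below is ours; atoms are opaque values, and the function rejects exactly the situation of Figure 12.(b), where an atom reappears after a different atom has shown up:

```python
def is_monotonic(column):
    """True iff no atom reappears after a different atom has interrupted it."""
    closed = set()        # atoms that have been left for good
    prev = None
    for a in column:
        if a != prev:
            if a in closed:           # a was left earlier and now reappears
                return False
            if prev is not None:
                closed.add(prev)
            prev = a
    return True

print(is_monotonic(['F', 'F', 'G', 'H', 'H']))  # True  (as in Figure 12.(a))
print(is_monotonic(['F', 'H', 'G', 'H']))       # False (H reappears after G)
```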
Let us now provide a bound on the number of distinct atoms that can be placed above a given atom F in a column, which takes into account B-requests, D-requests, and negative literals in F. Formally, we define a function ∆↑ : At(ϕ) → N as follows:

∆↑(F) = (2|{ψ : ⟨B⟩ψ ∈ Cl(ϕ)}| − 2|Req_B(F)| − |Obs_B(F) \ Req_B(F)|) + (|{ψ : ⟨D⟩ψ ∈ Cl(ϕ)}| − |Req_D(F)|) + (|{¬p : p ∈ Cl(ϕ) ∩ Prop}| − |{¬p : p ∈ Cl(ϕ) ∩ Prop ∧ ¬p ∈ F}|).

The statement and the proof of Lemma 1 in Section 4 help us to understand why the factor 2 comes into play in the case of B-requests: informally, from the proof of Lemma 1 it immediately follows that, in order to move down from an atom including ⟨B⟩ψ to an atom including ¬ψ, [B]¬ψ, one must pass through an atom including ψ, [B]¬ψ. It can be easily checked that, for each F ∈ At(ϕ), 0 ≤ ∆↑(F) ≤ 4|ϕ| + 1. To explain how ∆↑ works, we give a simple example. Let {ψ : ⟨B⟩ψ ∈ Cl(ϕ)} = {ψ_1} and let . . .

We say that an atom F is initial if and only if Req_B(F) = ∅. A B-sequence is a sequence of atoms Sh_B = F_0 . . . F_n such that F_0 is initial and, for all 0 < i ≤ n, we have F_i →_B F_{i−1}, Req_D(F_i) ⊇ Req_D(F_{i−1}), and F_i ∩ Prop ⊆ F_{i−1} ∩ Prop. It is worth pointing out that atoms in a B-sequence are monotonically non-increasing in ∆↑, that is, ∆↑(F_0) ≥ . . . ≥ ∆↑(F_n).

Definition 3. We say that a B-sequence F_0 . . . F_n is flat if and only if it can be written as a sequence F_0^{k_0} . . . F_m^{k_m}, where k_i > 0, for all 1 ≤ i ≤ m, and F_i ≠ F_j, for all 1 ≤ i < j ≤ m.

For the sake of clarity, it is worth mentioning that, in this paper, the overline notation F̄ is not used to denote the complement of F, but simply as an alias for atoms. Moreover, we say that a flat B-sequence F_0^{k_0} . . . F_m^{k_m} is decreasing if . . . (Definition 4); the shading of a column x in G, written Sh_G(x), is the sequence of atoms L(x, x) . . . L(x, N). The next lemma easily follows from Definition 3 and Definition 4. It allows us to abstract the shadings in a compass structure into flat B-sequences (proof omitted).
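The definition of ∆↑ above is straightforward arithmetic over set sizes and can be transcribed directly. The sketch below is ours; the closure is abstracted to the three counts it contributes (the numbers of ⟨B⟩-formulas, ⟨D⟩-formulas, and proposition letters in Cl(ϕ)):

```python
def delta_up(n_B, n_D, n_prop, req_b, obs_b, req_d, neg_lits):
    """Upper bound on the number of distinct atoms that may appear above F.

    n_B, n_D, n_prop: |{psi : <B>psi in Cl(phi)}|, |{psi : <D>psi in Cl(phi)}|,
    and the number of proposition letters in Cl(phi). The remaining arguments
    describe F itself (neg_lits: negative literals occurring in F).
    """
    return ((2 * n_B - 2 * len(req_b) - len(obs_b - req_b))
            + (n_D - len(req_d))
            + (n_prop - len(neg_lits)))

# One <B>-formula psi1, one <D>-formula, one letter: an atom with no requests
# that observes psi1 and has no negative literal gets (2-0-1)+(1-0)+(1-0).
print(delta_up(1, 1, 1, set(), {'psi1'}, set(), set()))   # 3
```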
Figure 12. (a) Atoms F, G, and H stacked upward along a column, with F ∩ Prop ⊇ G ∩ Prop ⊇ H ∩ Prop, Req_B(F) ⊆ Req_B(G) ⊆ Req_B(H), and Req_D(F) ⊆ Req_D(G) ⊆ Req_D(H); (b) a violation of the order.

Lemma 3. Let G = (G_N, L) be a compass structure and 0 ≤ x ≤ N. It holds that Sh_G(x) is a B-sequence.

The next lemma is the missing piece that allows us to restrict our attention to decreasing flat B-sequences when abstracting shadings in a compass structure.

Lemma 4. Let G = (G_N, L) be a compass structure (for a formula ϕ). For every x ≤ y < N, we have that L(x, y) = L(x, y + 1) if and only if L(x, y) is B-reflexive, P(x, y) = P(x, y + 1), and Req_D(L(x, y)) = Req_D(L(x, y + 1)).

Proof. The left-to-right direction is proved via a case analysis. If P(x, y) ≠ P(x, y + 1) or Req_D(L(x, y)) ≠ Req_D(L(x, y + 1)), then L(x, y) ≠ L(x, y + 1) immediately follows. If L(x, y) is B-irreflexive, then one gets a contradiction by observing that having two occurrences of the same B-irreflexive atom stacked one above the other violates the consistency of the compass structure (with respect to the →_B relation). Let us now prove the right-to-left direction. Suppose, by way of contradiction, that L(x, y) ≠ L(x, y + 1). Then, there exists a formula ψ ∈ Cl(ϕ) such that ψ ∈ L(x, y + 1) and ¬ψ ∈ L(x, y). By Proposition 1, for all 0 ≤ x ≤ y ≤ N, the truth of ψ ∈ L(x, y) is uniquely determined by P(x, y), Req_B(L(x, y)), and Req_D(L(x, y)). By the assumption, we get Req_B(L(x, y + 1)) ⊃ Req_B(L(x, y)). To reach the contradiction, we then proceed as in the proof of Lemma 2.

The next corollary immediately follows from Lemma 1 and Lemma 4. It allows us to give a bound on the distinct atoms that may appear on a shading. More precisely, it states that the shading of each column x in G is a decreasing flat B-sequence, and it gives a polynomial bound on the number of distinct atoms occurring in it.

Corollary 1. Let G = (G_N, L) be a compass structure (for a formula ϕ). Then, for all 0 ≤ x ≤ N, Sh_G(x) is a decreasing flat B-sequence F_0^{k_0} . . . F_m^{k_m}, with 0 ≤ m ≤ 4|ϕ| + 1.

Figure 13. Two equivalent columns that respect the order (a) and two equivalent columns that violate it (b); the four columns are, from left to right, F_1 F_2 F_3^3 F_4^2, F_1 F_2 F_3^2 F_4, F_1 F_2 F_3^4 F_4, and F_1 F_2 F_3 F_4^2.

5.2. A suitable equivalence relation over columns of a compass structure. By exploiting the above (finite) characterisation of columns, we can define a natural equivalence relation of finite index over columns: two columns x, x′ are equivalent if and only if they feature the same set of atoms. Thanks to Corollary 1, if multiple copies of the same atom are present in a column, their occurrences are consecutive, and thus can be represented as blocks. Moreover, these blocks appear in the same order in equivalent columns because of the monotonicity of Req_B, Req_D, and Prop, the latter being forced by the homogeneity assumption (see Fig. 12.(a)). In the following, we prove that equivalent columns can be totally ordered according to a given partial order relation over their shadings. Formally, for any two equivalent columns x and x′, Sh_G(x) ≤ Sh_G(x′) if and only if, for every row y, the atom L(x′, y) is equal to the atom L(x, y′) for some row y′, with 0 ≤ y′ ≤ y. Intuitively, this means that, moving upward along column x′, an atom cannot appear until it has appeared on column x. In Fig. 13.(a), we depict two equivalent columns that satisfy such a condition. In general, when moving upward, atoms on x′ are often "delayed" with respect to atoms on x, the limit case being when atoms on the same row are equal. In Fig. 13.(b), a violation of the condition (boxed atoms) is shown. We are going to prove that this latter situation never occurs in a compass structure. Let us now define an equivalence relation ∼ over decreasing flat B-sequences. Two decreasing flat B-sequences Sh_B = F_0^{k_0} . . . F_m^{k_m} and Sh′_B = G_0^{h_0} . . .
G_{m′}^{h_{m′}} are equivalent, written Sh_B ∼ Sh′_B, if and only if m = m′ and F_i = G_i for all 0 ≤ i ≤ m; moreover, Sh_B ≤ Sh′_B if and only if Sh_B ∼ Sh′_B and, for all 0 ≤ i ≤ m, Σ_{0≤j≤i} k_j ≤ (|Sh_B| − |Sh′_B|) + Σ_{0≤j≤i} h_j. Let us consider, for instance, the four equivalent decreasing flat B-sequences shown in Figure 13; from left to right, they are Sh_B^0 = F_1 F_2 F_3^3 F_4^2, Sh_B^1 = F_1 F_2 F_3^2 F_4, Sh_B^2 = F_1 F_2 F_3^4 F_4, and Sh_B^3 = F_1 F_2 F_3 F_4^2. In the following, we will prove that the latter scenario (that of Fig. 13.(b)) cannot occur in the case of compass structures.

Finally, we introduce a notation for atom retrieval. Let Sh_B = F_0^{k_0} . . . F_m^{k_m} be a decreasing flat B-sequence and 0 ≤ i ≤ |Sh_B|. We put Sh_B[i] = F_j, where j is such that Σ_{0≤j′<j} k_{j′} < i ≤ Σ_{0≤j′≤j} k_{j′}. The next lemma constrains the relationships between pairs of equivalent shadings (decreasing flat B-sequences) appearing in a compass structure.

Lemma 5. Let G = (G_N, L) be a compass structure. For all 0 ≤ x < x′ ≤ N such that Sh_G(x) ∼ Sh_G(x′), it holds that Sh_G(x) ≤ Sh_G(x′).

Proof. Let ∆ = x′ − x, Sh_G(x) = F_0^{k_0} . . . F_m^{k_m}, and Sh_G(x′) = F_0^{h_0} . . . F_m^{h_m}. Let us suppose by contradiction that Sh_G(x) ≰ Sh_G(x′). From Sh_G(x) ∼ Sh_G(x′), we have that both B-sequences feature the same atoms in the same order; they can differ just in their multiplicities (i.e., the exponents). From Sh_G(x) ∼ Sh_G(x′), Sh_G(x) ≰ Sh_G(x′), and |Sh_G(x)| > |Sh_G(x′)| (x′ is closer to N than x, and thus its shading is shorter), there exists an index 0 ≤ i ≤ N − x′ such that one of the following conditions holds:
(1) Sh_G(x)[∆ + i] ∩ Prop ≠ Sh_G(x′)[i] ∩ Prop;
(2) Req_D(Sh_G(x)[∆ + i]) ≠ Req_D(Sh_G(x′)[i]);
(3) Req_B(Sh_G(x)[∆ + i]) = Req_B(Sh_G(x′)[i]) and Sh_G(x′)[i] is B-irreflexive;
(4) Req_B(Sh_G(x)[∆ + i]) ⊂ Req_B(Sh_G(x′)[i]).
The above cases stem from the fact that we are claiming that, for a certain index i, there exists j such that Sh_G(x′)[i] = F_j and Sh_G(x)[∆ + i] = F_{j−1}, and thus Sh_G(x′)[i] ≠ Sh_G(x)[∆ + i].
This is the case, for instance, for x and x′ in Figure 13, for which we have ∆ = 2, Sh_G(x′)[3] = F_4, and Sh_G(x)[5] = F_3. Let us assume that i is the minimum index which satisfies one of the above conditions. In the following, we will assume that Sh_G(x′)[i] = F_j and Sh_G(x)[∆ + i] = F_{j−1}, for some 0 < j ≤ m. Before proving that in each case we reach a contradiction, let us spend a few more words on how such cases are derived. Since Sh_G(x) ∼ Sh_G(x′) but Sh_G(x) ≰ Sh_G(x′), and since |Sh_G(x)| > |Sh_G(x′)|, we have a situation analogous to the one depicted in Figure 13.(b). In particular, Sh_G(x) starts "before" Sh_G(x′) by unravelling the common sequence of atoms F_0 . . . F_m (which is F_1 . . . F_4 in Figure 13.(b)). Due to the fact that Sh_G(x) ∼ Sh_G(x′), when Sh_G(x′) starts, it must unravel the same sequence, and then it is easy to see that either, for every i, there exists k′ ≤ k such that Sh_G(x′)[i] = F_{k′} and Sh_G(x)[i + ∆] = F_k, i.e., Sh_G(x′) "waits" for Sh_G(x) before showing any new atom in the sequence F_0 . . . F_m, or there exist i and k′ > k such that Sh_G(x′)[i] = F_{k′} and Sh_G(x)[i + ∆] = F_k (this is the case with i = 3, k = 3, and k′ = 4 in Figure 13.(b)). The first case is a sufficient condition for concluding that Sh_G(x) ≤ Sh_G(x′) holds, while in the latter case Sh_G(x) ≰ Sh_G(x′) holds. In the latter case, if we take the minimal i that satisfies the condition, it is easy to see that k′ = k + 1. Then, we have F_{k′} →_B F_k but F_{k′} ≠ F_k. Cases (1)-(4) above are all the possible ways in which we may have F_{k′} →_B F_k but F_{k′} ≠ F_k, with the additional constraint that F_{k′} = L(x′, x′ + i) and F_k = L(x, x + ∆ + i), which implies, since [x′, x′ + i] finishes [x, x + ∆ + i] and we are in a compass structure, that Prop ∩ L(x′, x′ + i) ⊇ Prop ∩ L(x, x + ∆ + i) (by the homogeneity condition) and Req_D(L(x′, x′ + i)) ⊆ Req_D(L(x, x + ∆ + i)).
For case (1), since Sh_G(x)[∆ + i] = L(x, x + ∆ + i), Sh_G(x′)[i] = L(x′, x′ + i), and x + ∆ + i = x′ + i, we have, by definition of compass structure, P(x, x + ∆ + i) ⊆ P(x′, x′ + i); but, since k′ > k, there exists i′ > i for which P(x, x + ∆ + i′) = P(x′, x′ + i), and, since [x, x + ∆ + i] begins [x, x + ∆ + i′], we have P(x, x + ∆ + i) ⊇ P(x, x + ∆ + i′) = P(x′, x′ + i), and thus P(x, x + ∆ + i) = P(x′, x′ + i) (contradiction).

For case (2), by definition of compass structure, Req_D(Sh_G(x)[∆ + i]) ⊇ Req_D(Sh_G(x′)[i]). From Req_D(Sh_G(x)[∆ + i]) ≠ Req_D(Sh_G(x′)[i]), we have Req_D(Sh_G(x)[∆ + i]) ⊃ Req_D(Sh_G(x′)[i]), which means that Req_D(F_{j−1}) ⊃ Req_D(F_j); thus, there exists ψ ∈ Req_D(F_{j−1}) \ Req_D(F_j). At the level of the intervals of the compass structure, this translates into ⟨D⟩ψ ∈ L(x′, x′ + i) and [D]¬ψ ∈ L(x, x + ∆ + i). Since G is a compass structure, there exists a proper sub-interval [x′′, x′′′], with x′ < x′′ ≤ x′′′ < x′ + i, such that ψ ∈ L(x′′, x′′′). However, since x < x′ < x′ + i ≤ x + ∆ + i, we have that [x′′, x′′′] is also a proper sub-interval of [x, x + ∆ + i]; then, since [D]¬ψ ∈ L(x, x + ∆ + i), we have ¬ψ ∈ L(x′′, x′′′) (contradiction).

Let us assume that Req_B(Sh_G(x)[∆ + i]) = Req_B(Sh_G(x′)[i]) and Sh_G(x′)[i] is B-irreflexive (case 3). Then, from Req_B(F_j) = Req_B(F_{j−1}), F_j →_B F_{j−1}, and F_j B-irreflexive, we have that F_{j−1} is B-reflexive. From Sh_G(x′) ∼ Sh_G(x), we have Sh_G(x)[∆ + i − 1] = F_{j−1}; thus, Sh_G(x)[∆ + i] ∩ Prop = Sh_G(x′)[i] ∩ Prop = Sh_G(x)[∆ + i − 1] ∩ Prop and Req_D(Sh_G(x)[∆ + i]) = Req_D(Sh_G(x′)[i]) = Req_D(Sh_G(x)[∆ + i − 1]); thus, we can apply Lemma 4, obtaining Sh_G(x′)[i] = F_{j−1} (contradiction).

Let us assume Req_B(Sh_G(x)[∆ + i]) ⊂ Req_B(Sh_G(x′)[i]) (case 4). Then, from Lemma 2, we have that one among F_j and F_{j−1} is B-irreflexive. If F_j is B-irreflexive and F_{j−1} is B-reflexive, we can use exactly the previous argument to prove a contradiction.
If F_j is B-irreflexive and F_{j−1} is B-irreflexive, one of the above conditions holds for i − 1, violating the minimality of i. It remains the case in which F_{j−1} is B-irreflexive and F_j is B-reflexive; but, in such a case, one of the above conditions holds for i − 1 as well, again violating the minimality of i (contradiction).

5.3. A spatial property of columns in homogeneous compass structures. In this section, we provide a very strong characterization of the rows of a compass structure by making use of a covering property, depicted in Fig. 14, which states that the sequences of atoms on two equivalent columns x < x′ must respect a certain order. To start with, we define the intersection of row y and column x, with 0 ≤ x ≤ y, as the pair consisting of the equivalence class of x and the labelling of (x, y). We associate with each point (x, y) its intersection as well as the set S_→(x, y) of intersections of row y with the columns x′, for all x < x′ ≤ y. Let us denote by fp(x, y) (fp stands for fingerprint) the triplet associated with point (x, y). We prove that, if a point (x, y) has n + 1 columns (x <) x_0 < . . . < x_n ≤ y on its right (with n large enough, but polynomially bounded by |ϕ|) such that, for all 0 ≤ i ≤ n, fp(x_i, y) is equal to fp(x, y), then the sequence of atoms that goes from (x, y) to (x, N) is exactly the same as the sequence of atoms that goes from (x_0, y) to (x_0, N).

Let G = (G_N, L) be a compass structure and let 0 ≤ x ≤ y. We define S_→(x, y) as the set {([Sh_G(x′)]_∼, L(x′, y)) : x′ > x}. S_→(x, y) collects the equivalence classes of ∼ which are witnessed to the right of x on row y, plus a "pointer" to the "current atom", that is, the atom they are exposing on y. If G = (G_N, L) is homogeneous (as in our setting), for all 0 ≤ x ≤ y ≤ N, the number of possible sets S_→(x, y) is bounded by 2^(2^(4|ϕ|² + 6|ϕ| + 2) · 2^(|ϕ| + 1)) = 2^(2^(4|ϕ|² + 7|ϕ| + 3)), that is, it is doubly exponential in |ϕ|.
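The fingerprint of a point is easy to compute once columns are abstracted by their ∼-class. The sketch below is ours; eq maps each column to an opaque representative of its class, and atom maps each point of the grid to its atom (both names are illustrative):

```python
def s_right(eq, atom, x, y):
    """S_->(x, y): intersections of row y with the columns strictly right of x."""
    return frozenset((eq[xs], atom[(xs, y)]) for xs in range(x + 1, y + 1))

def fingerprint(eq, atom, x, y):
    """fp(x, y): the atom of (x, y), the class of column x, and S_->(x, y)."""
    return (atom[(x, y)], eq[x], s_right(eq, atom, x, y))

eq = {0: 'C', 1: 'C', 2: 'C'}                   # three ~-equivalent columns
atom = {(0, 2): 'F', (1, 2): 'F', (2, 2): 'F'}  # row y = 2
print(fingerprint(eq, atom, 0, 2) == fingerprint(eq, atom, 1, 2))   # True
```

Since S_→ is a set, points that see the same classes with the same exposed atoms get the same fingerprint regardless of how many columns realise each intersection; this is exactly what makes the number of fingerprints bounded.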
The next lemma constrains the way in which two columns x, x′, with x < x′ and Sh_G(x) ∼ Sh_G(x′), evolve from a given row y on, when S_→(x, y) = S_→(x′, y).

Lemma 6. Let G = (G_N, L) be a compass structure and let 0 ≤ x < x′ ≤ y ≤ N. If fp(x, y) = fp(x′, y) and y′ is the smallest point greater than y such that L(x, y′) ≠ L(x, y), if any, and N otherwise, then, for all y ≤ y′′ ≤ y′, L(x, y′′) = L(x′, y′′).

Proof. Let ȳ be the minimum point ȳ > y such that L(x′, ȳ) ≠ L(x′, y). Let us assume by contradiction that ȳ ≠ y′. By Lemma 5, we have that ȳ > y′.

Figure 14. A graphical account of the behaviour of covered points. We have that x is covered by x_0 < . . . < x_n on row y, with S_→(x, y) = S_→(x_0, y) = . . . = S_→(x_n, y), and thus the labelling of the points on column x above (x, y) is exactly the same as that of the corresponding points on column x_0 above (x_0, y), that is, L(x, y′) = L(x_0, y′), for all y ≤ y′ ≤ N.

Let Sh_G(x) = F_0^{k_0} . . . F_m^{k_m}, and let 0 ≤ i < m be the index such that Sh_G(x)[y − x] = Sh_G(x′)[y − x′] = F_i. Then, we have L(x, y) = F_i = L(x′, y), L(x, y′) = F_{i+1}, and L(x′, y′) = F_i. Moreover, for every y ≤ y′′ < y′, we have L(x, y′′) = L(x′, y′′) = F_i; hence, F_i is B-reflexive. Let us notice that P(x, y′ − 1) = P(x′, y′) = F_i ∩ Prop; then, we have that P(x, y′) = P(x′, y′). Since L(x, y′ − 1) is B-reflexive, we have that Req_D(L(x, y′)) ⊃ Req_D(L(x, y′ − 1)) = Req_D(L(x′, y′)) = Req_D(L(x′, y′ − 1)); otherwise, the conditions of Lemma 4 would apply, and L(x, y′) = L(x, y′ − 1) (contradiction). This means that there exist x̄, with x < x̄ < x′, and ψ such that ψ ∈ L(x̄, y′ − 1) ∩ (Req_D(L(x, y′)) \ Req_D(L(x′, y′))) and, for every x′ ≤ x′′ ≤ y′ − 1, ψ ∉ L(x′′, y′ − 1). The simpler case is when y′ = y + 1.
In such a case, from S_→(x, y) = S_→(x′, y), we have that there exists x̄′ > x′ such that L(x̄′, y) = L(x̄, y) (contradiction). Let us consider now the case in which y′ > y + 1. Since ¬ψ ∈ Box_D(L(x, y′ − 1)), we have that ψ ∉ L(x′′, y′′) for every x < x′′ ≤ y′′ < y′ − 1. Two cases arise:
• there exists y ≤ y′′ < y′ − 1 such that L(x̄, y′′) is B-reflexive. If it is the case, since F_i ∩ Prop = P(x, y′) ⊆ P(x̄, y′) ⊆ P(x′, y′) = F_i ∩ Prop and Req_D(F_i) = Req_D(L(x, y′)) ⊇ Req_D(L(x̄, y′)) ⊇ Req_D(L(x′, y′)) = Req_D(F_i), we have P(x, ȳ) = P(x̄, ȳ) = P(x′, ȳ) and Req_D(L(x, ȳ)) = Req_D(L(x̄, ȳ)) = Req_D(L(x′, ȳ)) for every y′′ ≤ ȳ ≤ y′ − 1. Then, by Lemma 4, we have that L(x̄, y′ − 1) = L(x̄, y′ − 2) = . . . = L(x̄, y′′); this means that L(x̄, y′ − 1) is not the first atom featuring ψ on the column x̄ (contradiction);
• for every y ≤ y′′ < y′ − 1, we have that L(x̄, y′′) is B-irreflexive. Then, from S_→(x, y) = S_→(x′, y), there exists x̄′ > x′ such that Sh_G(x̄′) ∼ Sh_G(x̄) and L(x̄′, y) = L(x̄, y). Let us observe that, by definition of B-sequence, for every B-sequence F_0^{h_0} . . . F_n^{h_n} and for every 1 ≤ i ≤ n, if F_i is B-irreflexive, then h_i = 1 (i.e., B-irreflexive atoms are unique in every B-sequence). Then, for every y ≤ y′′ ≤ y′ − 1, we have L(x̄, y′′) = L(x̄′, y′′), and thus ψ ∈ L(x̄′, y′ − 1); this implies ψ ∈ Req_D(L(x′, y′)) (contradiction).

From Lemma 6, the next corollary follows.

Corollary 2. Let G = (G_N, L) be a compass structure and let 0 ≤ x < x′ ≤ y ≤ N. If fp(x, y) = fp(x′, y) and y′ is the smallest point greater than y such that L(x, y′) ≠ L(x, y), if any, and N otherwise, then, for every pair of points x̄, x̄′, with x < x̄ < x̄′ < x′, with L(x̄, y) = L(x̄′, y) and Sh_G(x̄) ∼ Sh_G(x̄′) ≁ Sh_G(x), it holds that L(x̄, y′′) = L(x̄′, y′′), for all y ≤ y′′ ≤ y′.
The above results lead us to the identification of those points (x, y) whose behaviour perfectly reproduces that of a number of points (x′, y) on their right with fp(x, y) = fp(x′, y). These points (x, y), like all points "above" them, are useless with respect to fulfilment in a compass structure. We call them covered points.

Definition 6. Let G = (G_N, L) be a compass structure and 0 ≤ x ≤ y ≤ N. We say that (x, y) is covered if and only if there exist n + 1 = ∆↑(L(x, y)) distinct points x_0 < . . . < x_n ≤ y, with x < x_0, such that, for all 0 ≤ i ≤ n, fp(x, y) = fp(x_i, y). In such a case, we say that x is covered by x_0 < . . . < x_n on y.

Lemma 7. Let G = (G_N, L) be a compass structure and let x, y, with 0 ≤ x ≤ y ≤ N, be two points such that x is covered by points x_0 < . . . < x_n on y. Then, for all y ≤ y′ ≤ N, it holds that Sh_G(x)[y′] = Sh_G(x_0)[y′].

Proof. Let Sh_G(x) = F_0^{k_0} . . . F_m^{k_m}; the proof is by induction on n = ∆↑(L(x, y)). If n = 0, we have that L(x, y) = F_m; since L(x, y) = L(x_0, y), we have F_m = L(x_0, y). Since we are on the last atom of the sequence Sh_G(x) and Sh_G(x) ∼ Sh_G(x_0), we have L(x, y′) = L(x_0, y′) for every y < y′ ≤ N. If n > 0, let L(x, y) = F_i with 0 ≤ i < m (if i = m, we can argue as in the base case); by Lemma 6, we have that there exists a single minimum point y′ > y for which L(x, y′) = L(x_0, y′) = . . . = L(x_n, y′) = F_{i+1}, and thus, for every y ≤ y′′ ≤ y′, we have L(x, y′′) = L(x_0, y′′). Moreover, by Corollary 2, for every x′ > x_n such that Sh_G(x′) ≁ Sh_G(x) and for which there exists x < x′′ < x_n with Sh_G(x′′) ∼ Sh_G(x′) and L(x′, y) = L(x′′, y), we have that L(x′, y′) = L(x′′, y′). Then, we have S_→(x, y′) = S_→(x_i, y′) for every 0 ≤ i < n (every one but x_n). Since ∆↑(F_{i+1}) < ∆↑(F_i), we can apply the inductive hypothesis, as x is covered by x_0 < . . .
< x n−1 on y ′ , and thus, for every y ′ ≤ y ′′ ≤ N , we have L(x, y ′′ ) = L(x 0 , y ′′ ).
In Figure 15, we give an intuitive account of the notion of covered point and of the statement of Lemma 7. First of all, we observe that, since S → (x, y) = S → (x 0 , y) = . . . = S → (x n , y) and, for all 0 ≤ j, j ′ ≤ n, it holds that (Sh G (x j ), L(x j , y)) = (Sh G (x j ′ ), L(x j ′ , y)), there exists x n < x̂ ≤ y such that (Sh G (x n ), L(x n , y)) = (Sh G (x̂), L(x̂, y)), and x̂ is the smallest point greater than x n that satisfies such a condition. Now, it may happen that S → (x n , y) ⊃ S → (x̂, y), and all points x ′ > x n with (Sh G (x ′ ), L(x ′ , y)) = (Sh G (x̄), L(x̄, y)), for some x < x̄ < x n , are such that x n < x ′ < x̂. Then, it can be the case that, for all 0 ≤ i ≤ n, L(x i , y ′ ) = F i+1 , as all points (x i , y ′ ) satisfy some D-request ψ that only belongs to L(x ′ , y ′ − 1). In such a case, as shown in Figure 15, L(x̂, y ′ ) = F i , because for all points (x̂ ′ , ŷ ′ ), with x̂ < x̂ ′ ≤ ŷ ′ < y ′ , ψ ∉ L(x̂ ′ , ŷ ′ ). Hence, (Sh G (x n ), F i+1 ) ∈ S → (x j , y ′ ) for all 0 ≤ j < n, but (Sh G (x n ), F i+1 ) ∉ S → (x n , y ′ ). Then, by applying Corollary 2, we have that S → (x 0 , y ′ ) = S → (x n−1 , y ′ ). Since ∆ ↑ (F i+1 ) < ∆ ↑ (F i ) (= n), it holds that ∆ ↑ (F i+1 ) ≤ n − 1. The same argument can then be applied to x, x 0 , . . . , x n−1 on y ′ , and so on.
Figure 15. An intuitive account of the statement of Lemma 7.

6. The satisfiability problem for BD hom is decidable in EXPSPACE

In this section, by exploiting the properties proved in Section 5, we show that the problem of checking whether a BD hom formula ϕ is satisfied by some homogeneous model can be decided in exponential space.
First, by means of a suitable small model theorem, we prove that either ϕ is unsatisfiable or it is satisfied by a model (a compass structure) of at most doubly-exponential size in |ϕ|; then, we show that this model of doubly-exponential size can be guessed in single exponential space.
Theorem 2. Let ϕ be a BD hom formula. The problem of deciding whether or not it is satisfiable belongs to EXPSPACE.
The proof of Theorem 2 follows from Corollary 3, Lemma 8, and Lemma 9 below. First of all, thanks to the property proved in Section 5.3, we know that, for every row y, there is a finite set of columns C y = {x 1 , . . . , x n } that behave pairwise differently for the portion of the compass structure above y. This means that each column 0 ≤ x ≤ y, with x ∉ C y , behaves exactly as some x i ∈ C y above y, that is, for all y ′ > y, L(x, y ′ ) = L(x i , y ′ ). We prove that n is exponentially bounded in |ϕ|, from which it immediately follows that, in any large enough model, there are two rows y and y ′ that exhibit the same behaviour. Then, we can suitably contract the model into one whose Y-size is y ′ − y shorter. By (possibly) repeatedly applying such a contraction, we obtain a model whose Y-size satisfies a doubly exponential bound. To complete the proof, it suffices to show that there exists a procedure that checks whether or not such a model exists in exponential space. By exploiting Lemma 7, we can show that, for each row y, the cardinality of the set of columns x 1 , . . . , x m which are not covered on y is exponential in |ϕ|. Then, the sequence of triplets for non-covered points that appear on y is bounded by an exponential value in |ϕ|. It follows that, in a compass structure of size more than doubly exponential in |ϕ|, there exist two rows y, y ′ , with y < y ′ , such that the sequences of the triplets for non-covered points that appear on y and y ′ are exactly the same. This allows us to apply a "contraction" between y and y ′ on the compass structure.
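The row-contraction argument can be illustrated by a toy sketch (our own abstraction, not the construction used in the proofs: rows are reduced to opaque hashable "blueprints", and whenever a blueprint repeats, the rows strictly above the first occurrence up to and including the repeat are spliced out, shortening the structure by y ′ − y):

```python
def contract(rows):
    """Repeatedly remove the segment between two rows sharing the same
    blueprint, keeping the lower row (a toy analogue of the contraction)."""
    changed = True
    while changed:
        changed = False
        seen = {}
        for i, bp in enumerate(rows):
            if bp in seen:
                # rows seen[bp]+1 .. i carry no new blueprint information:
                # splice them out, shortening the structure by i - seen[bp]
                rows = rows[:seen[bp] + 1] + rows[i + 1:]
                changed = True
                break
            seen[bp] = i
    return rows

# After contraction every blueprint occurs exactly once, mirroring the
# bound "Row_G(y) != Row_G(y') for all y < y'".
print(contract(["A", "B", "C", "B", "D", "A", "E"]))
```

Since the number of possible blueprints is bounded, repeated contraction yields a structure of bounded height, which is exactly how the small model theorem below is obtained.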
An example of how contraction works is given in Figure 16. First of all, notice that rows 7 and 11 feature the same sequences of triplets for non-covered points, and that, on any row, each covered point is connected by an edge to the non-covered point that "behaves" in the same way. More precisely, we have that column 2 behaves as column 4 between y = 7 and y ′ = 15, columns 3, 5, and 7 behave as column 8 between y = 11 and y ′ = 15, and column 4 behaves as column 6 between y = 11 and y ′ = 15. The compass structure in Figure 16.(a) can thus be shrunk into the compass structure in Figure 16.(b), where each column of non-covered points x on y ′ is copied above the corresponding non-covered point x ′ on y. Moreover, the column of a non-covered point x on y ′ is copied over all the points which are covered by the non-covered point x ′ corresponding to x on y. This is the case with point 2 in Figure 16.(b), which takes the new column of its "covering" point 4. The resulting compass structure is y ′ − y shorter than the original one, and we can repeatedly apply such a contraction until we achieve the desired bound.
The next corollary, which easily follows from Lemma 7, turns out to be crucial for the proof of Theorem 2. Roughly speaking, it states that the property of "being covered" propagates upward.
Corollary 3. Let G = (G N , L) be a compass structure. Then, for every covered point (x, y), it holds that, for all y ≤ y ′ ≤ N , point (x, y ′ ) is covered as well.
From Corollary 3, it immediately follows that, for every covered point (x, y) and every y ≤ y ′ ≤ N , there exists x ′ > x such that L(x ′ , y ′ ) = L(x, y ′ ). Hence, for all x̄, ȳ, with x < x̄ ≤ ȳ < y, and any D-request ψ ∈ Req D (L(x, y)) ∩ Obs D (L(x̄, ȳ)), we have that ψ ∈ L(x ′ , ȳ), with x ′ > x̄. This allows us to conclude that if (x, y) is covered, then all points (x, y ′ ), with y ′ ≥ y, are "useless" from the point of view of D-requests. Let G = (G N , L) be a compass structure and 0 ≤ y ≤ N .
We define the set of witnesses of y as the set Wit G (y) = {x ∶ (x, y) is not covered}. Corollary 3 guarantees that, for any row y, the shading Sh G (x) and the labelling L(x, y) of the witnesses x ∈ Wit G (y) are sufficient, bounded, and unambiguous pieces of information that one needs to maintain about y. Given a compass structure G = (G N , L) and 0 ≤ y ≤ N , we define the row blueprint of y in G, written Row G (y), as the sequence of pairs ([Sh G (x)] ∼ , L(x, y)), for x ∈ Wit G (y), ordered according to any bijection b such that, for all x, x ′ in Wit G (y), b(x) < b(x ′ ) ↔ x < x ′ .
Given a compass structure G = (G N , L), the next lemma allows us to prove the existence of a smaller compass structure G ′ = (G N ′ , L ′ ), with N ′ < N , whenever G features two distinct rows y < y ′ which share the same blueprint.
Lemma 8. Let G = (G N , L) be a compass structure. If there exist two points y, y ′ , with 0 ≤ y < y ′ ≤ N , such that Row G (y) = Row G (y ′ ), then there exists a compass structure G ′ = (G N ′ , L ′ ) with N ′ = N − (y ′ − y).
Proof. From Row G (y) = Row G (y ′ ), by composing bijections, we have that there exists a bijection b ∶ Wit G (y) → Wit G (y ′ ) such that, for every x ∈ Wit G (y), we have that L(x, y) = L(b(x), y ′ ), Sh G (x) ∼ Sh G (b(x)), and S → (x, y) = S → (b(x), y ′ ). Moreover, for every x, x ′ ∈ Wit G (y), we have that x ≤ x ′ ↔ b(x) ≤ b(x ′ ). We define the function Closest wit ∶ {0, . . . , y} → {0, . . . , y} as follows: Closest wit (x) = x, if x ∈ Wit G (y); otherwise, Closest wit (x) = min {x ′ ∶ x ′ > x, x ′ ∈ Wit G (y), L(x ′ , y) = L(x, y), Sh G (x ′ ) ∼ Sh G (x), S → (x ′ , y) = S → (x, y)}.
Let δ = y ′ − y. We define L ′ as follows: (1) L ′ (x, ȳ) = L(x, ȳ) for all 0 ≤ x ≤ ȳ ≤ y; (2) L ′ (x̄, ȳ) = L(x̄ + δ, ȳ + δ) for all y < x̄ ≤ ȳ ≤ N ′ ; (3) L ′ (x̄, ȳ) = L(Closest wit (x̄), ȳ + δ) for all points (x̄, ȳ) with 0 ≤ x̄ ≤ y and y < ȳ ≤ N ′ . Now we have to prove that the resulting structure G ′ = (G N ′ , L ′ ) is a homogeneous compass structure. This part is omitted, since it is pretty simple but extremely long.
Let us just say that it can be proved by exploiting Corollary 3 and the definition of witnesses for a row y. To conclude the proof of Theorem 2, it suffices to show that if a BD hom formula is satisfiable, then it is satisfied by a doubly exponential compass structure, whose existence can be checked in exponential space. The following result provides both the small model theorem and the complexity of checking whether or not a BD hom formula ϕ admits such a model.
Lemma 9. Let ϕ be a BD hom formula. It holds that ϕ is satisfiable if and only if there is a compass structure G = (G N , L) for it such that N ≤ 2^(2(|ϕ|+1)(4|ϕ|^2+7|ϕ|+3)2^(8|ϕ|^2+14|ϕ|+6)), whose existence can be checked in EXPSPACE.
Proof. To start with, let us consider the problem of determining how many possible different Row G (y) we can have in a compass structure G = (G N , L). Let us first observe that, by the monotonicity of the function S → , we have, for every 0 ≤ y ≤ N , S → (0, y) ⊇ . . . ⊇ S → (y, y). Then, since we cannot have two sets S → (x, y) and S → (x ′ , y) that are incomparable w.r.t. the ⊆ relation, we have at most 2^(4|ϕ|^2+6|ϕ|+2) ⋅ 2^(|ϕ|+1) = 2^(4|ϕ|^2+7|ϕ|+3) possible distinct fingerprints f p(x, y), each of which can be associated to at most 4|ϕ| + 2 (i.e., the maximum value for ∆ ↑ plus one) distinct points in Wit G (y). Summing up, we have that the maximum length of Row G (y) is bounded by 2^(4|ϕ|^2+7|ϕ|+3) ⋅ 2^(4|ϕ|^2+7|ϕ|+3) ⋅ (4|ϕ| + 2), and thus the number of possible distinct Row G (y) is bounded by 2^(2(|ϕ|+1)(4|ϕ|^2+7|ϕ|+3)2^(8|ϕ|^2+14|ϕ|+6)), which is doubly exponential in |ϕ|. Finally, given a ϕ-compass structure G = (G N , L), by repeatedly applying Lemma 8, we can obtain a ϕ-compass structure G ′ = (G N ′ , L ′ ) such that, for every 0 ≤ y < y ′ ≤ N ′ , we have Row G ′ (y) ≠ Row G ′ (y ′ ); then, by means of the above considerations on the maximum cardinality of the set of all possible Row G (y), we may conclude that ϕ is satisfiable iff there is a compass structure G = (G N , L) for it such that N ≤ 2^(2(|ϕ|+1)(4|ϕ|^2+7|ϕ|+3)2^(8|ϕ|^2+14|ϕ|+6)).
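The magnitude of the bound in Lemma 9 can be made concrete with a few lines of code (a sketch; we write n for |ϕ| and compute only the base-2 logarithm of the bound, since the bound itself is far too large to materialize):

```python
def log2_bound(n):
    """log2 of the small-model bound N <= 2^(2(n+1)(4n^2+7n+3) * 2^(8n^2+14n+6))."""
    return 2 * (n + 1) * (4 * n**2 + 7 * n + 3) * 2**(8 * n**2 + 14 * n + 6)

for n in (1, 2, 3):
    # even the exponent of the bound needs this many bits to write down,
    # i.e. the bound itself is doubly exponential in n
    print(n, log2_bound(n).bit_length())
```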
To complete the proof, it suffices to show that checking the existence of such a doubly exponential compass structure can be done in exponential space. Let M = 2^(2(|ϕ|+1)(4|ϕ|^2+7|ϕ|+3)2^(8|ϕ|^2+14|ϕ|+6)) + 1 be the bound (plus 1) on the size of a candidate compass structure for the input BD hom formula ϕ, according to the small model theorem just proved. In the following, we briefly describe a decision procedure that decides, for some N ≤ M , whether or not there exists a compass structure G = (G N , L) for the input BD hom formula ϕ. If such a procedure works in exponential space with respect to |ϕ|, we can immediately conclude that the satisfiability problem for BD hom belongs to the EXPSPACE complexity class.
The decision procedure begins at step y = 0 by guessing Row G (y) = ([Sh 0 B ] ∼ , F 0 ), where F 0 = [Sh 0 B ] 0 ∼ , and then iterates the following steps:
(1) if there exists i for which ϕ ∈ F i , then return true;
(2) if y = M , then return false;
(3) non-deterministically guess a pair ([Sh k+1 B ] ∼ , F k+1 ) such that F k+1 = [Sh k+1 B ] 0 ∼ ;
(4) for every 0 ≤ i ≤ k, let F̄ i = F i if Req D (F i ) = ⋃ i<j≤k (Obs D (F j ) ∪ Req D (F j )), Req B (F i ) = Obs B (F i ) ∪ Req B (F i ), and F i ∩ Prop = F k+1 ∩ F i ∩ Prop; otherwise, let F̄ i = F̄ , where F̄ = next([Sh i B ] ∼ , F i ), provided that Req D (F̄ ) = ⋃ i<j≤k (Obs D (F j ) ∪ Req D (F j )), Req B (F̄ ) = Req B (F i ) ∪ Obs B (F i ), and F̄ ∩ Prop = F k+1 ∩ F̄ ∩ Prop; and F̄ i = ⊥ otherwise. By Lemma 4, F̄ i is well defined.
(5) if there exists i for which F̄ i = ⊥, then return false;
(6) let i 0 < . . . < i h be the maximal sub-sequence of indexes in 0 . . . k + 1 such that, for every 0 ≤ j ≤ h, ([Sh i j B ] ∼ , F i j ) is not covered in ([Sh 0 B ] ∼ , F 0 ) . . . ([Sh k+1 B ] ∼ , F k+1 ); then we define Row G (y + 1) = ([Sh i 0 B ] ∼ , F i 0 ) . . . ([Sh i h B ] ∼ , F i h );
(7) update y to y + 1 and restart from step (1).
Soundness and completeness of the above procedure can be proved using the results given in this section. In particular, Corollary 3 comes into play in the completeness proof (item (6) keeps track of all and only the non-covered points on row y + 1). Moreover, notice that, at each step 0 ≤ y ≤ M , we have to keep track of: (1) the current value of y, which cannot exceed 2^(2(|ϕ|+1)(4|ϕ|^2+7|ϕ|+3)2^(8|ϕ|^2+14|ϕ|+6)) + 1 and can thus be encoded in binary using an exponential number of bits; (2) two rows, namely Row G (y) and Row G (y + 1), whose maximum length is bounded by 2^(4|ϕ|^2+7|ϕ|+3) ⋅ 2^(4|ϕ|^2+7|ϕ|+3) ⋅ (4|ϕ| + 2) = (4|ϕ| + 2)2^(8|ϕ|^2+14|ϕ|+6) (exponential in |ϕ|). Moreover, each position in such sequences holds a pair ([F 0 . . . F m ], F i ). Since m ≤ 4|ϕ| + 1, we have that each position holds at most 4|ϕ| + 3 atoms. Each atom can be represented using exactly |ϕ| + 1 bits. Summing up, we have that the total space needed for keeping the two rows y and y + 1 (step 0 ≤ y ≤ M ) consists of 2 ⋅ (|ϕ| + 1)(4|ϕ| + 3) ⋅ 2^(4|ϕ|^2+7|ϕ|+3) ⋅ 2^(4|ϕ|^2+7|ϕ|+3) ⋅ (4|ϕ| + 2) bits, which, simplified, turns out to be 4(8|ϕ|^3 + 18|ϕ|^2 + 13|ϕ| + 3)2^(8|ϕ|^2+14|ϕ|+6) bits, which is still exponential in |ϕ|. This shows that we can decide the satisfiability of ϕ in exponential space.

7. Adding modality A to BD hom : the logic BDA hom

In this section, we introduce the logic BDA hom , which extends BD hom with modality ⟨A⟩. The semantics of modality ⟨A⟩ has already been given in Section 2 in terms of modality T of CDT as ⟨A⟩ψ = ψ T ⊤. Formally, the syntax and semantics of BDA hom are obtained from those of BD hom by simply adding the syntactic rule and the semantic clause for modality ⟨A⟩, respectively. BDA hom formulas are built up from a countable set Prop of proposition letters according to the grammar: ϕ ∶∶= p | ¬ψ | ψ ∨ ψ | ⟨B⟩ψ | ⟨D⟩ψ | ⟨A⟩ψ, where p ∈ Prop and ⟨B⟩, ⟨D⟩, and ⟨A⟩ are the modalities for Allen's relations Begins, During, and Meets, respectively.
The semantics of a BDA hom formula is specified by the semantic clauses for BD hom plus the following one: • M, [x, y] ⊧ ⟨A⟩ψ iff there is y ′ , with y ′ ≥ y, such that M, [y, y ′ ] ⊧ ψ. In the rest of the section, in analogy to what we did for modalities ⟨B⟩ and ⟨D⟩ in Section 3, we investigate the counterpart of modality ⟨A⟩ in terms of a suitable extension of generalized * -free regular expressions. Basically, we enrich the semantics of generalized * -free regular expressions with what we call a "right context". We will prove that the resulting semantics subsumes the original one, that is, the notion of generalized * -free regular expression given in Section 3 is just a specialization of it. In particular, the encoding of both Pre(e) and Inf(e) directly transfers to this new semantics without any modification. We will conclude the section by providing an example that shows how the operator corresponding to modality ⟨A⟩ has an explicit counterpart in the generalized * -free regular expressions used in real-world programming languages. As a preliminary remark, we would like to observe that one may be tempted to interpret modality ⟨A⟩ as a logical counterpart of the concatenation operator. This is wrong. Informally speaking, modality ⟨A⟩ characterizes words with a specific "right context". Such an idea can be formalized as follows. In order to identify the right generalized * -free regular expression for modality ⟨A⟩, we provide an alternative, yet equivalent, semantics for these expressions. In such a semantics, the language − −− → Lang(e) of a generalized * -free regular expression e is interpreted over pairs of finite words, that is, − −− → Lang(e) ⊆ Σ + × Σ * . A pair (w, w ′ ) ∈ − −− → Lang(e) represents the word w belonging to the language Lang(e), according to the semantics given in Section 3, together with its "right context" word w ′ , which is the word that must appear immediately after w.
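The semantic clause above (together with the ones for BD hom and the homogeneity assumption) can be prototyped by brute force. The following sketch uses an illustrative encoding of our own devising (formulas as nested tuples, a valuation mapping points to proposition letters) and is not taken from the paper:

```python
def holds(val, N, x, y, f):
    """Evaluate formula f over the interval [x, y] (0 <= x <= y <= N).
    val maps each point to the set of proposition letters true at it;
    a letter holds over [x, y] iff it holds at every point (homogeneity)."""
    if isinstance(f, str):                       # proposition letter
        return all(f in val.get(p, set()) for p in range(x, y + 1))
    op = f[0]
    if op == 'not':
        return not holds(val, N, x, y, f[1])
    if op == 'or':
        return holds(val, N, x, y, f[1]) or holds(val, N, x, y, f[2])
    if op == 'B':   # Begins: proper prefixes [x, y'] with y' < y
        return any(holds(val, N, x, yp, f[1]) for yp in range(x, y))
    if op == 'D':   # During: strictly contained intervals [x', y']
        return any(holds(val, N, xp, yp, f[1])
                   for xp in range(x + 1, y) for yp in range(xp, y))
    if op == 'A':   # Meets: intervals [y, y'] with y' >= y
        return any(holds(val, N, y, yp, f[1]) for yp in range(y, N + 1))
    raise ValueError(f"unknown operator: {f!r}")

# p at points 0-1, q at points 2-4
val = {0: {'p'}, 1: {'p'}, 2: {'q'}, 3: {'q'}, 4: {'q'}}
print(holds(val, 4, 0, 1, ('A', 'q')))   # no q-interval starts at point 1
print(holds(val, 4, 0, 2, ('A', 'q')))   # the point interval [2, 2] satisfies q
```

Note how ⟨A⟩ inspects only the right context of the current interval, which is the intuition developed in the rest of the section.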
Formally, the generalized * -free regular expressions of Section 3 are extended as follows: e ∶∶= ∅ | a | ¬e | e + e | Pre(e) | Inf(e) | − − → Con(e), for any a ∈ Σ. Their semantics is defined as follows: (i) − −− → Lang(∅) = ∅; (ii) − −− → Lang(a) = {(a, w) ∶ w ∈ Σ * }; (iii) − −− → Lang(¬e) = Σ + × Σ * \ − −− → Lang(e). Let us denote the empty word by ε. With a little abuse of notation, we say that, for every w ∈ Σ + , w ∈ Lang(e) if and only if (w, ε) ∈ − −− → Lang(e). Then, it is easy to prove that, for any expression e ∶∶= ∅ | a | ¬e | e + e | Pre(e) | Inf(e), we have w ∈ Lang(e) if and only if (w, ε) ∈ − −− → Lang(e). In such a way, the original (restricted) semantics turns out to be a specialization of the extended one. It can be easily shown that the extended semantics preserves the mapping from a restricted expression e to an equivalent BD hom formula ϕ e outlined in Section 3. In order to capture the language − −− → Lang( − − → Con(e)) in BDA hom , we extend the mapping with the rule: ϕ − −− → Con(e) = ⟨A⟩(⟨B⟩⊤ ∧ [B][B]⊥ ∧ ⟨A⟩ψ e ). Let us assume that ϕ − −− → Con(e) holds over an interval [x, y]. Then, it predicates over "the right context" of [x, y] by stating that there exists an interval [y, y + 1] (the constraint on the length of such an interval is imposed by the first two conjuncts ⟨B⟩⊤ ∧ [B][B]⊥) which has an adjacent-to-the-right interval [y + 1, y ′ ] where ψ e holds (third conjunct ⟨A⟩ψ e ). In order to show the significance of the proposed extension of generalized * -free regular expressions, we explore an interesting correspondence between the operator − − → Con (and thus, indirectly, modality ⟨A⟩) and an operator of the regular expressions typically used in popular programming languages like, for instance, Python [VRDJ95]. It is easy to see that the − − → Con operator corresponds to the lookahead operation. Such an operation is usually implemented as positive lookahead, whose syntax is (?=e), and negative lookahead, whose syntax is (?!
e), where e is a regular expression. In many real-world applications, regular expressions are used to execute pattern matching inside a long text as an effective alternative to the task of checking whether such a long text belongs to a certain language. This is the case especially in the domain of natural language processing, from which the following toy example is borrowed. Let us suppose that we want to capture a pattern that consists of an English word followed by a list of words in English separated by commas and whose last word is prefixed by the word "and". An example of a sentence containing such a pattern is the following: "This paper deals with HS operators meets, begins, and during under homogeneity assumption." In such a toy example, a motivation for matching the word operators may be related to the fact that the noun preceding a natural language description of items may represent their type. In the above sentence, "meets", "begins", and "during" are indeed of type "operators". In such an interpretation, we are assuming that the word denoting the type is put immediately before the list of words and thus conjunctions like, e.g., "such as" or "like" are not contemplated. However, they may be captured by regular expressions longer, but not much more complex, than the one we are going to show. For the sake of simplicity, we assume that the number of words in the list is greater than or equal to 3 and each word is a single one. As an example, the pattern "Concepts such as atoms and requests will be introduced in this section" is not captured.
A regular expression re, which works in any modern programming language, that captures such a pattern is: re = (\w+)(?=␣(?:\w+,␣){2,}and␣\w+), where ␣ is used to highlight the single white space " ". Since it is outside the scope of this paper, we will not delve too much into the syntax of this kind of regular expressions. For that matter, wonderful websites such as [Reg] exist (they provide a quick reference for syntax and semantics together with examples and, more importantly, a full on-line environment for testing and debugging regular expressions). Let us briefly explain how re captures the desired pattern. First of all, we have that (e) is used to capture any pattern in e. The (?=e) operator checks whether the current position is followed by a pattern belonging to the language of e. The \w variable represents any word-character, both lower and upper case. The operator + is analogous to the operator e + = ee * in standard regular expressions; thus, \w+ means any single word. The operator (e){n,}, with n ≥ 0, captures a sequence of n or more occurrences of pattern e. Finally, the operator (?:e) represents just standard parentheses. A graphical account of the various parts of the regular expression re is shown in Figure 17. Let Σ = W ∪ S, where W = {a, . . . , z, A, . . . , Z} (word symbols) and S = {␣, ., ','} (separator symbols). For the sake of brevity, we omit the intermediate phase of translating re into our * -free restricted fragment and we jump directly to the translation into BDA hom . For the sake of simplicity, we do not apply the literal translation here; instead, we make use of a shorter, more understandable encoding which is tailored to the structure of the specific regular expression re. As a preliminary step, we provide some shorthands and assumptions that make the encoding formulas more compact.
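Before turning to the encoding, the behaviour of re, and of negative lookahead, can be checked directly with Python's re module (the test strings are variants of the toy sentence above):

```python
import re

# Positive lookahead: a word followed by two or more "word, " groups and
# a final "and <word>"; the lookahead itself consumes no characters.
pattern = re.compile(r"(\w+)(?= (?:\w+, ){2,}and \w+)")
text = ("This paper deals with HS operators meets, begins, and during "
        "under homogeneity assumption.")
print(pattern.findall(text))

# Negative lookahead (?!e) succeeds exactly where (?=e) would fail:
# here we keep the words not followed by a comma or by more letters.
print(re.findall(r"\w+(?![\w,])", "meets, begins, and during"))
```

Only "operators" is captured by the first pattern, since it is the only word whose right context matches the lookahead, which is precisely the role played by modality ⟨A⟩ in the encoding below.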
First, we introduce a global modality [G]ψ, whose semantics is as follows: M, [x, y] ⊧ [G]ψ iff M, [x ′ , y ′ ] ⊧ ψ for every interval [x ′ , y ′ ] of the model. Moreover, let us observe that the right context checked by the lookahead in re consists of the concatenation of: (i) a whitespace; (ii) a sequence of two or more concatenations of a single word, a comma, and a whitespace; (iii) the concatenation of the word "and", a whitespace, and a single word. In BDA hom , we may capture the semantics of len ≥n and len n by means of the formulas ⟨B⟩ n π and len ≥n ∧ [B] n+1 ⊥, respectively. 4 Since in the proposed encoding we will make use of proposition letters in Σ to represent words as points of an interval model (Figure 18), we need to force each point to hold exactly one symbol σ ∈ Σ. Such a constraint is imposed by putting the formula [G](π → ⋁ σ∈Σ (σ ∧ ⋀ σ ′ ∈Σ\{σ} ¬σ ′ )) in conjunction with the encoding of re. For the sake of brevity, we will tacitly assume that this is the case. Finally, with a little abuse of notation, in the encoding of re we will make use of W as a shorthand for ⋁ σ∈W σ, which basically allows us to state that a certain (point-)interval holds a word symbol. Now, we are ready to encode re by a formula ψ re . More precisely, we will make use of ⟨D⟩ψ re as the main formula, where ψ re just encodes the matching part. Thus, by "reading" a model M = (I N , V) for ⟨D⟩ψ re , we can easily retrieve every matching by taking all and only those intervals [x, y] such that M, [x, y] ⊧ ψ re . As an example, in Figure 18 we have that M, [0, 90] ⊧ ⟨D⟩ψ re , while [24, 34] ⊧ ψ re . In fact, [24, 34] is the only interval that satisfies ψ re in the model of Figure 18 and, as we will see when we discuss ψ re in more detail, this is determined both by the points belonging to [24, 34] and by the formulas that hold in its "right context", that is, the intervals [x, y], with 34 ≤ x ≤ 90. Let ψ re = ψ gm w + ∧ ψ (w + , ) 2+ ∧ ψ gm and w + , where: • ψ gm w + holds over an interval [x, y] only if word symbols label all the point-intervals [x ′ , x ′ ], with x < x ′ < y (conjunct [B](len ≥1 → ⟨A⟩W )). Finally, it constrains the interval [x, y] to contain at least one word symbol (conjunct ⟨B⟩⟨A⟩W ).
Intuitively, ψ gm w + encodes the greedy match (gm) of a single non-empty word preceded and followed by two singleton separator symbols. As an example, in Figure 18, we have that ψ gm w + holds over the interval [24, 34].
• ψ (w + , ) 2+ = ⟨A⟩␣ ∧ ⟨B⟩(len 1 ∧ ⟨B⟩␣ ∧ ⟨A⟩W ) ∧ [D](len 1 → (⟨B⟩␣ ∧ ⟨A⟩W ) ∨ (⟨B⟩, ∧ ⟨A⟩␣) ∨ (⟨B⟩W ∧ ⟨A⟩W ) ∨ (⟨B⟩W ∧ ⟨A⟩, )) ∧ ⟨B⟩(⟨A⟩, ∧ ⟨B⟩⟨A⟩, ). Among other things, this formula forces the symbol "," to appear as a label of at least two distinct point-intervals [x ′ , x ′ ] and [x ′′ , x ′′ ] in [x, y], i.e., with x < x ′ < x ′′ < y (conjunct ⟨B⟩(⟨A⟩, ∧ ⟨B⟩⟨A⟩, )). Intuitively, the conjunct [D](len 1 → (⟨B⟩␣ ∧ ⟨A⟩W ) ∨ (⟨B⟩, ∧ ⟨A⟩␣) ∨ (⟨B⟩W ∧ ⟨A⟩W ) ∨ (⟨B⟩W ∧ ⟨A⟩, )) constrains the word underlying [x + 1, y + 1] to belong to the language of ((w) + , ) * , while the conjunct ⟨B⟩(⟨A⟩, ∧ ⟨B⟩⟨A⟩, ) forces at least two iterations of the * operation in such a language. Thus, together they force such a word to belong to ((w) + , ) 2+ ;
• ψ gm and w + = ⟨B⟩a ∧ ⟨B⟩(len 1 ∧ ⟨A⟩n) ∧ ⟨B⟩(len 2 ∧ ⟨A⟩d) ∧ ⟨B⟩(len 3 ∧ ⟨A⟩␣) ∧ [B](len ≥4 → ⟨A⟩W ) ∧ ⟨A⟩¬W . This formula holds over an interval [x, y] if and only if the word underlying the interval [x, x + 3] is exactly "and␣" (conjuncts ⟨B⟩a, ⟨B⟩(len 1 ∧ ⟨A⟩n), ⟨B⟩(len 2 ∧ ⟨A⟩d), and ⟨B⟩(len 3 ∧ ⟨A⟩␣)), followed by an uninterrupted sequence of word symbols underlying the interval [x + 4, y − 1] (conjunct [B](len ≥4 → ⟨A⟩W )). In addition, it imposes the word underlying the interval [x + 4, y − 1] to be a greedy match, that is, an entire word is captured, since we force a separator symbol on [y + 1, y + 1] by means of the conjunct ⟨A⟩¬W .
We conclude the section with some remarks about the practical use of regular expressions. To the best of our knowledge, the majority of existing programming languages do not support the free use of negation in their implementations of regular expressions, but they allow for positive/negative lookahead/lookbehind. In this section, we showed how to deal with positive/negative lookahead by means of modality ⟨A⟩.
Moreover, we argued that positive/negative lookbehind may be captured by adding modality ⟨Ā⟩, the converse of modality ⟨A⟩, to BDA hom , thus obtaining the logic BDAĀ hom . For the sake of simplicity, we did not take modality ⟨Ā⟩ into consideration in this work, as its introduction involves a number of technicalities. However, in view of the results established in the next section, we may conjecture with a certain confidence that, under the homogeneity assumption, the satisfiability problem for BDAĀ hom belongs to the same complexity class as its proper fragment BDA hom .

8. The satisfiability problem for BDA hom is decidable in EXPSPACE

In this section, we go through the definitions and proofs of Sections 5.1, 5.2, and 5.3 in order to identify the changes that must be made to extend them to the fragment BDA hom . To begin with, we state a lemma that establishes a fundamental property of modality A, which will be extensively used in the following definitions and proofs. Lemma 10 has been proved on several occasions (see [BMS07], for example); here we provide a graphical account of it in Figure 19, where it is shown that intervals (resp., points) sharing their right endpoint (resp., lying on the same row) must feature the same A-requests. Let us now consider a homogeneous ϕ-compass structure G = (G N , L) for a BDA hom formula ϕ. It is easy to see that, as a direct consequence of Lemma 10, for every point 0 ≤ y ≤ N we have Req A (L(x, y)) = Req A (L(y, y)) for every 0 ≤ x ≤ y, and thus Box A (L(x, y)) = Box A (L(y, y)). For the sake of brevity, in the following definitions we will make extensive use of the special constant π which, as we recall from the previous sections, holds over an interval [x, y] if and only if x = y. Let us notice that in BDA hom the constant π is just an alias for the formula [B]⊥; from now on, we will assume that π ∈ Cl(ϕ).
Let us now extend the notion of ϕ-atom introduced in Section 4 to the notion of marked ϕ-atom. Let T F ϕ A = {ψ ∶ ⟨A⟩ψ ∈ Cl(ϕ)} be the set of all the arguments ψ of ⟨A⟩ψ formulas in Cl(ϕ). A marked ϕ-atom (atom, from now on) is a pair F α = (F, α) where: (1) F is a maximal subset of Cl(ϕ) that, for all ψ ∈ Cl(ϕ), satisfies the following three conditions: (i) ψ ∈ F if and only if ¬ψ ∉ F , (ii) if ψ = ψ 1 ∨ ψ 2 , then ψ ∈ F if and only if {ψ 1 , ψ 2 } ∩ F ≠ ∅, and (iii) if π ∈ F then for every [A]ψ ∈ F we have ψ ∈ F ; (2) α is a function α ∶ T F ϕ A → {◊, ⧫, □} such that, among other conditions: (iii) if π ∈ F and α(ψ) = ◊ then ⟨A⟩ψ ∈ F and ψ ∉ F ; (iv) if π ∈ F and α(ψ) = ⧫ then ψ ∈ F .
Figure 19. A graphical account of the intuition behind the proof of Lemma 10 from both interval and spatial perspectives.
For the sake of simplicity, from now on, when we refer to F α as a set, we refer to its first component F . For instance, when we write ψ ∈ F α , we mean ψ ∈ F . Let At(ϕ) be the set of all ϕ-atoms. We have that |At(ϕ)| ≤ 2^(|ϕ|+1) ⋅ 2^(|ϕ|−1) = 2^(2|ϕ|), where |ϕ| = |Cl(ϕ)|/2. While, by considering just the first component of a newly defined atom, we keep the functions Req R , Obs R , and Box R , for all R ∈ {A, B, D}, the same as the ones introduced in Section 4, we introduce the following specializations of the relations → B and → D : • F α → B G β iff Req B (F α ) = Req B (G β ) ∪ Obs B (G β ) and, for every ψ ∈ T F ϕ A , we have α(ψ) = β(ψ) if β(ψ) ∈ {⧫, □} or ψ ∉ F ; • F α → D G β iff Req D (F α ) ⊇ Req D (G β ) ∪ Obs D (G β ). In Figure 20, we provide an example of a consistent atom labelling of a model of a BDA hom formula ϕ. As for Req R (⋅), Box R (⋅), and Obs R (⋅) with R ∈ {B, D}, we can make the same considerations made in the description of the example of Figure 8 in Section 4. For the example in Figure 20, we focus on describing how the behaviour of the sets Req A (⋅), Box A (⋅), and Obs A (⋅) differs from
their counterparts Req R (⋅), Box R (⋅), and Obs R (⋅) with R ∈ {B, D}, as well as on giving an initial account of the behaviour of the marking functions α [x,y] . Let us first observe that Req R is "monotone", for R ∈ {B, D}, over atoms labelling intervals which are in the same R-relation. This is not the case when R = A, as a direct consequence of the fact that Allen's relations STARTED-BY and CONTAINS are transitive, while relation MEETS is not. For instance, in Figure 20 we have that: (1) α [x,y ′ ] (¬ψ 1 ) = ◊ for every x ≤ y ′ < y, which means that the pending ⟨A⟩-request ¬ψ 1 is not fulfilled for the intervals ending in x if we consider the model up to y ′ ; (2) α [x,y ′ ] (¬ψ 1 ) = ⧫ for every x ≤ y ≤ y ′ , which means that the pending ⟨A⟩-request ¬ψ 1 is fulfilled for the intervals ending in x if we consider the model up to y ′ and, obviously, will stay fulfilled for such intervals ever after. We can extend the claim on labellings made in Section 4 to BDA hom formulas and say that all and only the labellings which respect property ( * 1 ) are the ones for which the following property holds: ( * 3 -b) Req B (F [x,y] ) = ⋃ x≤y ′ <y Obs B (F [x,y ′ ] ), Req D (F [x,y] ) = ⋃ x≤x ′ ≤y ′ <y Obs D (F [x ′ ,y ′ ] ), and Req A (F [x,y] ) = ⋃ y≤y ′ Obs A (F [y,y ′ ] ), for each [x, y] ∈ I N . An account of how the second component of an atom behaves w.r.t. the relations → B and → D is given in Figure 21.
Informally speaking, we have that the second component of an atom associated to an interval [x, y] keeps track of the A-requests featured by [x, x] which have been satisfied by intervals [x, y ′ ] with y ′ ≤ y (i.e., the ones marked with ⧫), as opposed to the ones still pending (i.e., the ones marked with ◊). Moreover, the second component of an atom keeps track of the formulas ψ that are forced to appear negated in every interval starting in x due to the presence of [A]ψ in the labelling of [x, x] (i.e., the formulas marked with □). Since we cannot consider a model fulfilled until all the A-requests are satisfied for all points x in the model, we introduce the notion of final atom. An atom F α is final iff for every ψ ∈ T F ϕ A we have α(ψ) ∈ {⧫, □}. Now we can provide the notion of compass structure for a BDA hom formula ϕ by extending the one for BD hom with the following requirements: • (initial formula) ϕ ∈ L(0, N ); • (A-consistency) for all 0 ≤ x ≤ y ≤ N , Req A (L(x, y)) = Req A (L(y, y)); • (A-fulfilment) for every 0 ≤ x ≤ N , the atom L(x, N ) is final. Then we have the very same result for compass structures on BDA hom formulas.
Theorem 3. A BDA hom formula ϕ is satisfiable iff there is a homogeneous ϕ-compass structure.
Now we are ready to point out the minor differences in the steps for generalizing the small model theorem of Section 5 to the BDA hom case. First of all, it is easy to prove, using Lemma 10, that Lemma 2 holds also for BDA hom homogeneous compass structures. In order to take into account the second component of atoms, we redefine the function ∆ ↑ ∶ At(ϕ) → N as follows: ∆ ↑ (F α ) = (2|{⟨B⟩ψ ∈ Cl(ϕ)}| − 2|Req B (F α )| − |Obs B (F α ) \ Req B (F α )|) + (|{⟨D⟩ψ ∈ Cl(ϕ)}| − |Req D (F α )|) + (|{¬p ∶ p ∈ Cl(ϕ) ∩ Prop}| − |{¬p ∶ p ∈ Cl(ϕ) ∩ Prop ∧ ¬p ∈ F α }|) + |{ψ ∈ T F ϕ A ∶ α(ψ) = ◊}|. The main complication that arises from the introduction of the ⟨A⟩ operator consists of the fact that a B-sequence that can be instantiated in a compass structure may
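The redefined ∆ ↑ translates directly into code (an illustrative transcription of our own: formulas are plain strings, and the marks ◊, ⧫, □ are written '<>', '#', and '[]'):

```python
def delta_up(n_b, n_d, n_props, req_b, obs_b, req_d, neg_props, alpha):
    """Progress measure of a marked atom F^alpha, as redefined for BDA_hom.
    n_b, n_d, n_props: number of <B>-formulas, <D>-formulas, and
    proposition letters in Cl(phi); req_b/obs_b/req_d/neg_props: the
    corresponding sets for the atom; alpha: dict mapping each A-request
    argument to its mark ('<>' pending, '#' fulfilled, '[]' blocked)."""
    return ((2 * n_b - 2 * len(req_b) - len(obs_b - req_b))
            + (n_d - len(req_d))
            + (n_props - len(neg_props))
            + sum(1 for m in alpha.values() if m == '<>'))

# The measure decreases as requests get fulfilled going up a column:
bottom = delta_up(2, 1, 1, set(), set(), set(), set(), {'psi1': '<>'})
upper = delta_up(2, 1, 1, {'<B>q'}, {'<B>q'}, {'<D>p'}, {'~p'}, {'psi1': '#'})
print(bottom, upper)   # bottom > upper
```

The last summand is the only genuinely new ingredient with respect to the BD hom measure: pending A-requests, too, now contribute to the progress still to be made.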
not be flat (though it is still forced to be decreasing). We therefore introduce the concept of minimal B-sequence. A B-sequence Sh B = F 0 α 0 . . . F n α n is minimal iff, for every 0 ≤ i < n, ∆ ↑ (F i α i ) > ∆ ↑ (F i+1 α i+1 ). Let us observe that for every minimal B-sequence Sh B = F 0 α 0 . . . F n α n we have n ≤ 5|ϕ| (i.e., the length of a minimal B-sequence is at most 5|ϕ| + 1). A minimal B-sequence will not represent the whole sequence of atoms on a "column" x of a given compass structure, as was the case for flat decreasing B-sequences in Section 5. In this case, a minimal B-sequence represents the labellings of the sequence of points sharing the same "column" x at which the function ∆ ↑ decreases as we move up on y. For capturing such a behaviour we provide the following notion of shading. Let G = (N, L) be a compass structure for ϕ and 0 ≤ x ≤ N . We define the shading of x in G, written Sh G (x), as the sequence of pairs (L(x, y 0 ), y 0 ) . . . (L(x, y m ), y m ) such that: (1) y i < y i+1 for every 0 ≤ i < m; (2) {∆ ↑ (L(x, y)) ∶ 0 ≤ y ≤ N } = {∆ ↑ (L(x, y i )) ∶ 0 ≤ i ≤ m}; (3) for every 0 ≤ i ≤ m we have y i = min {0 ≤ y ≤ N ∶ ∆ ↑ (L(x, y)) = ∆ ↑ (L(x, y i ))}, i.e., y i is the minimum height on the column x that exhibits its value of ∆ ↑ . For every 0 ≤ x ≤ N with Sh G (x) = (L(x, y 0 ), y 0 ) . . . (L(x, y m ), y m ), we denote by Sh G B (x) the sequence of atoms L(x, y 0 ) . . . L(x, y m ), that is, the projection of Sh G (x) on its first component. The above (finite) characterisation works just as well as the one defined in Section 5 for defining a natural equivalence relation of finite index over columns: we say that two columns x and x′ are equivalent, written x ∼ x′, if and only if Sh G B (x) = Sh G B (x′). Then, taking advantage of Lemma 10, we can prove that Lemma 5 also holds for BDA hom compass structures. The definitions of S → (x, y) and, consequently, of the fingerprint f p(x, y), for all 0 ≤ x ≤ y ≤ N , are the same as the ones given in Section 5.
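The redefined measure ∆ ↑ and the shading of a column can be sketched as follows (illustrative Python; the dictionary layout for atoms is an assumption, not the paper's notation). The shading simply records, bottom-up, the first height at which each value of ∆ ↑ appears.

```python
def delta_up(cl_B, cl_D, props, atom, alpha):
    """Delta-up of an A-marked atom F^alpha (a sketch).
      cl_B, cl_D : sets of <B>-/<D>-formulas in Cl(phi)
      props      : Cl(phi) intersected with Prop
      atom       : dict with sets 'req_B', 'obs_B', 'req_D', 'neg_props'
      alpha      : dict mapping each psi in TF^A_phi to '◊' (pending),
                   '⧫' (fulfilled), or '□' (forbidden)"""
    return ((2 * len(cl_B) - 2 * len(atom['req_B'])
             - len(atom['obs_B'] - atom['req_B']))
            + (len(cl_D) - len(atom['req_D']))
            + (len(props) - len(atom['neg_props']))
            + sum(1 for m in alpha.values() if m == '◊'))

def shading_heights(deltas):
    """Heights y_i of the shading of a column, given deltas[y] = Delta-up of
    the atom at height y: the first height at which each value occurs."""
    seen, ys = set(), []
    for y, d in enumerate(deltas):
        if d not in seen:
            seen.add(d)
            ys.append(y)
    return ys
```

On a column where ∆ ↑ takes the values 3, 3, 2, 2, 1 bottom-up, the shading heights are 0, 2, 4: one representative interval per distinct value, matching the bound of at most 5|ϕ| + 1 atoms per minimal B-sequence.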
Let us observe that, due to the specialization of atoms, the number of possible sets S → (x, y) is bounded by 2^(6^(5|ϕ|²+2|ϕ|) ⋅ 2^(3⋅5|ϕ|+2)) in this case. For two atoms F α and G β , we say that they are equivalent modulo A, written F α ≡ ¬A G β , if and only if F \ Req A (F α ) = G \ Req A (G β ) and α = β (i.e., F α and G β differ at most in their ⟨A⟩ requests). Then we may prove the analogues of Lemma 6 and of the related Corollary 2 in the case of BDA hom compass structures. Lemma 12. Let G = (N, L) be a compass structure and let 0 ≤ x < x′ ≤ y ≤ N . If f p(x, y) = f p(x′, y) and y′ is the smallest point greater than y such that L(x, y′) ≢ ¬A L(x, y), if any, and N otherwise, then, for all y ≤ y′′ ≤ y′, L(x, y′′) = L(x′, y′′). Corollary 4. Let G = (N, L) be a compass structure and let 0 ≤ x < x′ ≤ y ≤ N . If f p(x, y) = f p(x′, y) and y′ is the smallest point greater than y such that L(x, y′) ≢ ¬A L(x, y), if any, and N otherwise, then, for every pair of points x̄, x̄′ with x < x̄ < x̄′ < x′, L(x̄, y) = L(x̄′, y), and x̄ ∼ x̄′ ≁ x, it holds that L(x̄, y′′) = L(x̄′, y′′), for all y ≤ y′′ ≤ y′. For BDA hom compass structures the definition of covered point, as well as those of the witnesses Wit G (y) and of the row blueprint Row G (y), are the same as the ones given in Definition 6 and in Section 6; then, Lemma 7, Corollary 2, and Theorem 8 can also be proved in the case of BDA hom compass structures. On the basis of such results we can provide an algorithm very similar to the one proposed in the proof of Theorem 9, and thus obtain the following analogous result. Theorem 4. Let ϕ be a BDA hom formula. It holds that ϕ is satisfiable iff there is a compass structure G = (N, L) for it such that N ≤ 2^(5|ϕ| ⋅ (6^(10|ϕ|²+4|ϕ|) ⋅ 2^(3⋅10|ϕ|+4))), whose existence can be checked in EXPSPACE.

EXPSPACE-hardness of BDA hom over finite linear orders

In this section we prove that the satisfiability problem for BDA hom interpreted over finite linear orders is EXPSPACE-hard.
The result is obtained by a reduction from the exponential-corridor tiling problem, which is known to be EXPSPACE-complete [vEB97]. Such a problem (Problem 1) can be stated as follows: given a tuple T = (T, ⇒, ⇑, C), where T, C ∈ N (C is expressed in binary) and ⇒, ⇑ ⊆ {0, . . . , T } × {0, . . . , T }, decide whether or not there exists a function tile ∶ N × {0, . . . , C} → {0, . . . , T } such that: (1) for every x ∈ N we have tile(x, 0) = 0 and tile(x, C) = T ; (2) for every x ∈ N and every 0 ≤ y ≤ C we have (tile(x, y), tile(x + 1, y)) ∈ ⇒; (3) for every x ∈ N and every 0 ≤ y < C we have (tile(x, y), tile(x, y + 1)) ∈ ⇑. The following classical result will be exploited to prove the main goal of this section. Theorem 5. [vEB97] The exponential-corridor tiling problem is EXPSPACE-hard. To define a reduction from Problem 1 to the finite satisfiability of BDA hom , we have to face the problem that formulas of BDA hom are interpreted over finite domains, whereas the function tile is defined over an infinite domain. Roughly speaking, we will solve Problem 1 by means of an infinite "unfolding" of a finite portion of the tiling space that can be encoded by a (finite) model of a suitable BDA hom formula. The following result is crucial to that purpose. Lemma 13. Given an instance T = (T, ⇒, ⇑, C) of Problem 1, T is a positive instance if and only if there exists a function tile ∶ N × {0, . . . , C} → {0, . . . , T } that fulfills conditions 1, 2, and 3 of Problem 1 together with the following one: (4) there exist prefix ∈ N and period ∈ N + s.t. for every x ≥ prefix and every 0 ≤ y ≤ C we have tile(x, y) = tile(x + period, y). The proof of Lemma 13 is straightforward and omitted. Lemma 13 allows us to bound the search space for the existence of the function tile to a finitely representable function tile ∶ {0, . . . , prefix + period} × {0, . . . , C} → {0, . . . , T }, for some prefix ≥ 0 and period > 0. Such a function witnesses that T is a positive instance of Problem 1 if it satisfies conditions 1, 2, and 3 restricted to the pairs (x, y) ∈ N × {0, . . . , C} with x < prefix + period, plus the condition that tile(prefix, y) = tile(prefix + period, y) for every y ∈ {0, . . . , C}.
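The finite witness condition of Lemma 13 amounts to a direct check on the finite portion of the tiling. A minimal sketch, with illustrative argument names (`right` for ⇒, `up` for ⇑) that are assumptions of this example:

```python
def is_positive_witness(tile, prefix, period, T, C, right, up):
    """Check, on the finite grid {0,...,prefix+period} x {0,...,C}, the
    conditions under which the eventually periodic unfolding of `tile`
    witnesses a positive instance (a sketch of Lemma 13's criterion).
      tile  : dict (x, y) -> tile index
      right : set of pairs allowed horizontally (the => relation)
      up    : set of pairs allowed vertically  (the up-arrow relation)"""
    last = prefix + period
    for x in range(last + 1):
        # condition 1: bottom row tiled by 0, top row by T
        if tile[(x, 0)] != 0 or tile[(x, C)] != T:
            return False
        for y in range(C + 1):
            # condition 2, restricted to x < prefix + period
            if x < last and (tile[(x, y)], tile[(x + 1, y)]) not in right:
                return False
            # condition 3: vertical compatibility inside each column
            if y < C and (tile[(x, y)], tile[(x, y + 1)]) not in up:
                return False
    # condition 4: column `prefix` equals column `prefix + period`
    return all(tile[(prefix, y)] == tile[(last, y)] for y in range(C + 1))
```

If the check succeeds, the infinite tiling is obtained by repeating the columns from prefix to prefix + period − 1 forever.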
Given an instance T = (T, ⇒, ⇑, C) of Problem 1, we provide a BDA hom formula ϕ T that is satisfiable over finite models if and only if there exists a function tile that satisfies the aforementioned properties and thus, by Lemma 13, if and only if T is a positive instance of Problem 1. In the proposed encoding we force each point of the model to represent exactly one tile. This is done by exploiting T + 1 propositional variables t 0 , . . . , t T , called tile variables, constrained by the following formulas:

ψ ∃ = [G](π → ⋁ i=0..T t i ), i.e., at least one tile variable holds over each point of the model;

ψ ! = [G] ⋀ i=0..T ((t i ∧ π) → ⋀ j=0..T, j≠i ¬t j ), i.e., at most one tile variable holds over each point of the model (mutual exclusion).

Let us assume w.l.o.g. that C = 2 c − 1 for some c ∈ N. Then, we associate with each point of the model a number in {0, . . . , C} by a binary encoding via the c propositional variables b 1 , . . . , b c , where b 1 is the most significant bit: given a model M = (N, V), we define the function bit V ∶ {0, . . . , N } × {b 1 , . . . , b c } → {0, 1} with bit V (n, b i ) = 1 if b i ∈ V([n, n]) and bit V (n, b i ) = 0 otherwise. For the sake of brevity, we denote by y n the natural number whose c-bit binary encoding is bit V (n, b 1 ) . . . bit V (n, b c ). We encode the domain of a general function tile ∶ {0, . . . , prefix + suffix} × {0, . . . , C} → {0, . . . , T } into a finite model M = (N, V) by enumerating all the points of the grid {0, . . . , prefix + suffix} × {0, . . . , C} along the timepoints {0, . . . , N } of the model in lexicographical order. The formula ψ tile = ψ ∃ ∧ ψ ! ∧ ψ boundaries ∧ ψ ↑ is used to force such a constraint, where ψ boundaries and ψ ↑ are defined as follows:

ψ boundaries = ⟨B⟩(π ∧ ⋀ i=1..c ¬b i ) ∧ [A] ⋀ i=1..c b i , i.e., every model M = (N, V) for ψ boundaries satisfies y 0 = 0 and y N = C;

ψ ↑ = [G]([B]π → ((⋀ i=1..c ⟨B⟩b i ∧ ([A]⊥ ∨ ⋀ i=1..c ⟨A⟩(π → ¬b i ))) ∨ ψ 1 + )), i.e., for every n ∈ {0, . . . , N }: if y n = C then either n = N or y n+1 = 0, and if y n < C then y n+1 = y n + 1;

ψ i + = (⟨B⟩b i → (⟨A⟩(π ∧ ¬b i ) ∧ ψ i+1 + )) ∧ (⟨B⟩¬b i → (⟨A⟩b i ∧ ψ i+1 = )), for 1 ≤ i < c, which encodes the bit-wise increment for bit b i ; ψ 1 + is triggered by ψ ↑ on every interval [n, n + 1] with y n < C;

ψ c + = ¬⟨B⟩b c ∧ ⟨A⟩b c , which encodes the bit-wise increment for the bit b c ; it is triggered by ψ c−1 + on every interval [n, n + 1] for which bit V (n, b i ) = 1 for every 1 ≤ i < c; let us notice that it does not propagate further and that it handles overflows by creating a contradiction;

ψ i = = ⋀ j=i..c (⟨B⟩(π ∧ b j ) ↔ ⟨A⟩(π ∧ b j )), which holds over an interval [n, n′] if and only if n < n′ and bit V (n, b j ) = bit V (n′, b j ) for every i ≤ j ≤ c.

Note that if ψ 1 = holds over [n, n′] then y n = y n′ .
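The counter behaviour forced by ψ boundaries and ψ ↑ can be pictured concretely: along the points 0, . . . , N the c-bit value y n starts at 0, increments at each step, and wraps from C = 2^c − 1 back to 0. A sketch, under the assumption that N has the right shape (N = M ⋅ (C + 1) + C for some M, so that y N = C):

```python
def bits(y, c):
    """c-bit encoding of y, most significant bit first (b_1 ... b_c)."""
    return [(y >> (c - 1 - i)) & 1 for i in range(c)]

def column_counter(N, c):
    """The sequence y_0, ..., y_N intended by psi_boundaries and psi_up:
    a c-bit counter starting at 0, incremented at each point, wrapping
    from C = 2^c - 1 back to 0 (a sketch of the model shape)."""
    C = (1 << c) - 1
    ys = [n % (C + 1) for n in range(N + 1)]
    assert ys[0] == 0 and ys[N] == C  # the psi_boundaries constraints
    return [bits(y, c) for y in ys]
```

For instance, with c = 2 (so C = 3) and N = 7, the counter runs 0, 1, 2, 3, 0, 1, 2, 3: two full columns of the grid.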
Formula ψ i = is used for guaranteeing the correct bit-wise increment in the formulas ψ i + ; moreover, it will also be used below for correctly identifying tiles which are in the ⇒ relation. It is worth noticing that any model M = (N, V) that satisfies ψ tile = ψ ∃ ∧ ψ ! ∧ ψ boundaries ∧ ψ ↑ fulfils some useful properties. First of all, the interplay between ψ boundaries and ψ ↑ guarantees that N + 1 is a multiple of C + 1 and thus, for suitably chosen prefix and suffix, we can associate each point (x, y) ∈ {0, . . . , prefix + suffix} × {0, . . . , C} with a point n ∈ {0, . . . , N } by means of a bijection map ∶ {0, . . . , prefix + suffix} × {0, . . . , C} → {0, . . . , N } defined as map(x, y) = x ⋅ (C + 1) + y (i.e., map −1 (n) = (⌊n/(C + 1)⌋, n % (C + 1)), where % is the integer remainder operation). Moreover, let us observe that, for every element (x, y) of the grid, x is only implicitly encoded in the model by map(x, y) (i.e., x = ⌊map(x, y)/(C + 1)⌋), while y is both implicitly encoded (i.e., y = map(x, y) % (C + 1)) and explicitly encoded by the values of the variables b 1 . . . b c , since it is easy to prove that ψ boundaries ∧ ψ ↑ forces y = y map(x,y) . Finally, the conjuncts ψ ∃ ∧ ψ ! ensure that each point n ∈ {0, . . . , N }, and thus, by means of map, any point of the grid, is associated with exactly one tile, namely the unique tile variable that belongs to V([n, n]). For the aforementioned properties, if we consider the function f that maps a function tile ∶ {0, . . . , M } × {0, . . . , C} → {0, . . . , T } to the model M = (M ⋅ (C + 1) + C, V) where, for every (x, y) ∈ {0, . . . , M } × {0, . . . , C}, t i ∈ V([map(x, y), map(x, y)]) if and only if tile(x, y) = i and y map(x,y) = y, it is easy to prove that f is a bijection between the set of all such tile functions, for every M ∈ N + , and the set of all finite models of ψ tile . In summary, the detailed description above shows that any model of ψ tile is basically a way to represent a generic function tile ∶ {0, . . . , M } × {0, . . . , C} → {0, . . . , T } and that, vice versa, each such function is represented by exactly one model of ψ tile .
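The bijection map and its inverse are one line each; a sketch that can be used to sanity-check the lexicographic enumeration:

```python
def grid_to_point(x, y, C):
    """Lexicographic enumeration of the grid: map(x, y) = x*(C+1) + y."""
    return x * (C + 1) + y

def point_to_grid(n, C):
    """Inverse mapping: map^{-1}(n) = (n // (C+1), n % (C+1))."""
    return n // (C + 1), n % (C + 1)
```

Note that two horizontally adjacent grid points (x, y) and (x + 1, y) are mapped to model points exactly C + 1 apart, which is what the formula ψ ⇒ below exploits.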
The next step is the encoding, in AB hom , of the constraints of Lemma 13, which allows us to check whether there exists a function tile witnessing that T is a positive instance. Such conditions, restricted to the finite case, are imposed by the following formulas:

ψ 0,C = [G]( ((π ∧ ⋀ i=1..c ¬b i ) → t 0 ) ∧ ((π ∧ ⋀ i=1..c b i ) → t T ) ), which forces condition 1 of Problem 1, that is, the bottom tile of each column is 0 and the top tile of each column is T ;

ψ ⇒ = [G]( (π ∧ ⟨A⟩¬π) → ⟨A⟩(ψ min = ∧ ⋁ (i,j)∈⇒ (⟨B⟩t i ∧ ⟨A⟩t j )) ), which forces condition 2 of Problem 1, that is, each pair of grid points of type (x, y), (x + 1, y) must be labelled with two tiles that are in the ⇒ relation. This is done by taking, for each point n < N , the minimal interval [n, n′] with n < n′ and y n = y n′ , that is, the interval [n, n′] such that y n = y n′ and there does not exist n < n′′ < n′ with y n = y n′′ ; then, the ⇒ relation is forced between the pair of tile variables that hold over [n, n] and [n′, n′]. Let us notice that, by the constraints imposed by ψ tile , we have that n′ − n = C + 1 and thus, according to the definition of map, map −1 (n′) = (⌊n/(C + 1)⌋ + 1, n % (C + 1)); hence, ψ min = holds on all and only those intervals whose endpoints represent horizontally adjacent points of the original grid;

ψ ⇑ = [G]( ([B]π ∧ ⋁ i=1..c ¬b i ) → ⋁ (i,j)∈⇑ (⟨B⟩t i ∧ ⟨A⟩t j ) ), which forces condition 3 of Problem 1, that is, each pair of grid points of type (x, y), (x, y + 1) must be labelled with two tiles that are in the ⇑ relation. The constraint can be easily imposed since the encoding ensures that vertically consecutive points of the grid correspond to consecutive points of the model. The constraint is triggered on all the intervals of the type [n, n + 1], with the exception of the ones with y n = C. It imposes that the unique (thanks to ψ ∃ ∧ ψ ! ) pair of tile variables (t i , t j ) with t i ∈ V([n, n]) and t j ∈ V([n + 1, n + 1]) must satisfy (i, j) ∈ ⇑.
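Which intervals of the model are targeted by ψ ⇒ and ψ ⇑ can be computed directly from the enumeration; the following sketch (illustrative names, not from the paper) lists them for a model of N + 1 points:

```python
def vertical_pairs(N, C):
    """Model intervals representing vertically adjacent grid cells: exactly
    the [n, n+1] with y_n < C, i.e. the intervals on which psi_up-arrow
    triggers the vertical tiling constraint."""
    return [(n, n + 1) for n in range(N) if n % (C + 1) < C]

def horizontal_pairs(N, C):
    """Model intervals representing horizontally adjacent grid cells: the
    minimal [n, n'] with n < n' and y_n = y_{n'}, i.e. n' = n + C + 1,
    the intervals on which psi_min-equals holds."""
    return [(n, n + C + 1) for n in range(N - C)]
```

With C = 3 and N = 7 (two columns of four cells), the vertical pairs skip the wrap-around step (3, 4), and every horizontal pair spans exactly one column width.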
ψ prefix = ⟨B⟩⟨A⟩( p ∧ ⋀ i=1..c (⟨B⟩(π ∧ ¬b i ) ∧ ⟨A⟩b i ) ) ∧ [G]( (p ∧ π) → ⟨A⟩( ψ 1 = ∧ [A]¬ψ 1 = ∧ ⋀ i=0..T (⟨B⟩t i ↔ ⟨A⟩t i ) ) ), which forces condition 4 of Lemma 13, imposing that there are two distinct columns of the grid which are tiled identically and that one of such columns is the last one. This is done by means of a propositional letter p. The first conjunct of ψ prefix imposes that there exists an interval [n, n′] in the model for which p ∈ V([n, n′]), y n = 0, and y n′ = C (i.e., p "covers" at least one column); moreover, by the homogeneity assumption, we have that p ∈ V([n′′, n′′]) for every n ≤ n′′ ≤ n′. The second conjunct imposes that for each p-labelled point n there must exist a point n′ > n with y n = y n′ (this implicitly implies that n is associated with a grid point which does not belong to the last column). Moreover, the formula [A]¬ψ 1 = imposes that n′ must belong to the last column. Finally, it is required that there exists 0 ≤ i ≤ T s.t. t i ∈ V([n, n]) ∩ V([n′, n′]). Notice that in the above definitions the use of the ⟨A⟩ operator enables us to deal with two key aspects: (1) we can predicate on all the intervals [n, n′] for any n, n′ ∈ {0, . . . , N }, whereas, by using the ⟨B⟩ operator alone, we could predicate only on intervals of the form [0, n]; (2) we can predicate on the ending point of any current interval [n, n′], i.e., on the interval [n′, n′]. Such a feature is missing in the logic BD hom , where we can predicate only on the beginning point of any current interval. For instance, the logic BD hom cannot express properties like ψ 1 = , which checks whether the same set of propositional letters holds over the two endpoints of an interval. Let us now define the formula ϕ T as ϕ T = ψ tile ∧ ψ 0,C ∧ ψ ⇒ ∧ ψ ⇑ ∧ ψ prefix .
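On the tiling side, the property that ψ prefix enforces is just the existence of a repeated column, with the repetition closing at the last column. A sketch of the corresponding search over an explicit list of columns (illustrative, not part of the reduction itself):

```python
def find_repeated_column(columns):
    """Search for two indices prefix < last with identical columns, as the
    propositional letter p in psi_prefix requires. Returns the first pair
    (prefix, prefix + period) found, or None if every column is distinct."""
    seen = {}
    for i, col in enumerate(map(tuple, columns)):
        if col in seen:
            return seen[col], i
        seen[col] = i
    return None
```

Since each column ranges over a finite set of tilings, a long enough grid always contains such a pair; this is exactly why the search space of Lemma 13 can be bounded.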
Since the models of ψ tile represent all and only the possible finite tiling functions for T , and ψ 0,C , ψ ⇒ , ψ ⇑ , and ψ prefix select the subset of such functions/models where conditions 1, 2, and 3 of Problem 1, together with condition 4 of Lemma 13, are fulfilled, we can prove the next result. Theorem 6. Let T = (T, ⇒, ⇑, C) be an instance of Problem 1. Then, T is a positive instance if and only if the AB hom formula ϕ T is satisfiable over finite linear orders. It is easy to see that ϕ T may be generated in LOGSPACE. To this end, it suffices to observe that we may define a multitape Turing machine that performs the reduction using just a constant number of working tapes, each one holding either ⌈log 2 T ⌉ bits or c bits. From such an observation and Theorem 5, we obtain the main result of the section. Theorem 7. The satisfiability problem for the logic AB hom over finite linear orders is EXPSPACE-hard. We conclude the section with some remarks that allow us to better understand how the homogeneity assumption affects the satisfiability problem of the considered HS fragments. First of all, we observe that the complexity of the satisfiability problem for AB hom over finite linear orders does not change if we replace it by full AB, that is, if we remove the homogeneity assumption [BMM + 14]. Moreover, we would like to point out that the proof of the EXPSPACE-hardness of the satisfiability problem for AB hom , that is, the proof of Theorem 7 to which this entire section is devoted, does not make use of the homogeneity assumption. On the contrary, the homogeneity assumption marks a deep difference in BDA: we proved that the satisfiability problem for BDA hom is decidable in exponential space, whereas the problem is known to be undecidable for full BDA [MM14, MMK10].
As for model checking, the model checking problem for AB hom over finite Kripke structures has been proved to be PSPACE-complete [BMM + 19b], while here we proved that the satisfiability problem, over finite linear orders, is complete for a higher complexity class, namely, EXPSPACE. The tight complexity bound for the model checking problem over finite Kripke structures for BDA hom is still open: we only know that for its three maximal proper fragments AB hom , DA hom , and BD hom it is PSPACE-complete [BMM + 19b, BMPS21b].

Conclusions

In this paper, we proved that, under the homogeneity assumption, the satisfiability problem for BDA hom over finite linear orders is EXPSPACE-complete. This result gives rise to a number of observations about the complexity landscape of the satisfiability and model checking problems for HS fragments under homogeneity (HS hom ): (1) it improves the previously known non-elementary upper bound [MMM + 16]; (2) it identifies the first EXPSPACE-complete fragment of HS hom with respect to the satisfiability problem [BMM + 19b]. As far as the satisfiability problem for BDA hom is concerned, we already observed that the homogeneity assumption plays a crucial role only in the proof of the EXPSPACE membership of the problem (upper bound), while it does not play any role in the proof of the EXPSPACE-hardness of the problem (lower bound). The results for BDA hom also shed some light on the problem of determining the exact complexity of the satisfiability problem for BE hom , which is still open. As a matter of fact, BDA hom and BE hom are not comparable from the point of view of their expressiveness [BMM + 14]. However, BDA hom captures a fragment of BE hom , that is, BD hom extended with a restricted version of modality ⟨E⟩, namely, ⟨E⟩ π ψ = ⟨A⟩(π ∧ ψ), that allows one to predicate on the right endpoint of an interval.
As shown in Section 9, this is the key property that causes the increase in complexity of the satisfiability problem from BD hom (PSPACE-complete) to BDA hom (EXPSPACE-complete). The result given here can be easily extended to the case of homogeneous structures isomorphic to N. From a more practical standpoint, we showed how BDA hom may encode a very expressive fragment of generalized *-free regular expressions, namely, the fragment that features prefix, infix, and lookahead. Thanks to the result obtained in this work, the emptiness problem for the languages expressed by means of such a fragment is elementary (EXPSPACE-complete), in contrast to the non-elementary-hardness result known for the emptiness problem for full generalized *-free regular expressions [Sto74]. As for future work, we plan to investigate the satisfiability/model checking problems for (fragments of) HS hom interpreted over the linear orders Q and R. However, the precise characterization of the complexity of the satisfiability problem for BE hom remains the main open problem on the path to determining the exact complexity of the satisfiability problem for full HS hom over finite linear orders.

Figure 1. Point-based (π) vs. interval-based (V) labelling over the same finite linear order.
in the ITL semantics). The evaluation of proposition letters p and q on strict-intervals does not depend on that on their sub-intervals. See, for instance, the interval [1, 4] of Figure 1. Its labelling is V([1, 4]) = {p, q} and it features all the possible subsets of {p, q} as the labels of its point intervals [1, 1], . . . [4, 4]. As for its strict-subintervals, it holds that V([1, 2]) = V([2, 3]) = V([3, 4]) = ∅, V([1, 3]) = {p}, and V([2, 4]) = {p, q}. Figure 2 . 2The semantics of CDT binary modalities C, D, and T. Figure 3 . 3Allen's relations and the corresponding HS modalities (the relations/modalities considered in this work are highlighted). Figure 6 . 6The proposed translation at work on the model ofFigure 1. Figure 7 . 7A homogeneous model (a -left) vs. a general one (b -right). (a) ⟨B⟩(p ∧ ¬q) holds over the interval [1, 4], that is, M a , [1, 4] ⊧ ⟨B⟩(p ∧ ¬q), but it does not hold over the interval [2, 4], that is, M a , [2, 4] ⟨B⟩(p ∧ ¬q), and ⟨D⟩(q ∧ ¬r) holds over the interval [2, 6], that is, M a , [2, 6] ⊧ ⟨D⟩(q ∧ ¬r), but it does not hold over the interval [2, 4], that is, M a , [2, 4] ⟨D⟩(q ∧ ¬r). For any model M and any interval [x, y], M, [x, y] ⊧ ψ 1 ⟨C⟩ψ 2 if and only if there exists z ∈ [x, y] such that M, [x, z] ⊧ ψ 1 and M, [z, y] ⊧ ψ 2 , and M, [x, y] ⊧ π if and only if x = y. ¬q, ¬r, [D]¬q, ψ 1 , ¬ψ 2 , ⟨B⟩ψ 1 , [D]¬ψ 2 , ¬ϕ}, where ψ 1 = p ∧ ¬r and ψ 2 = ¬q ∧ ⟨D⟩q. Figure 9 . 9. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . L(0, 4) → B L(0, 4) L(0, 4) → B L(0, 3) L(0, 2) → B L(0, 1) L(0, 2) → B L(0, 0) L(0, 1) → B L(0, 1) L(0, 1) → B L(0, 0) L(0, 0) → B L(0, 0) A graphical account of relation → B from both the interval point of view (left) and the spatial one (right). [0, 0], [0, 1], [0, 2], [0, 3], and [0, 4]. On the bottom part of Figure 9, we report the atoms assoociated with these intervals, plus the interval [1, 2] which shows when the request ⟨D⟩ψ 1 is satisfied for the intervals [0, 3] and [0, 4]. 
Morever, At the very bottom of Figure 9, we show when F [0,x] is true and when it is not. A graphical account of the same pieces of information is given in the top right part of Figure 9. Let us now analyse in detail how Req B (F [0,x] ) and Obs B (F [0,x] ) behave moving up from x = 0 to x = 4. We first observe that, since π does not occur in ϕ, [0, 0] and [0, 1] may have the same labelling, as it is actually the case in our example. Both F Figure 10 . 10A graphical account of relation → D from both the interval point of view (left) and the spatial one (right). For the sake of readability, we only highlight the sub-intervals of [0, 4]. [1, 1], [1, 2], [1, 3], [2, 2], [2, 3], and [3, 3] of interval [0, 4]. At the very bottom of Figure 10, we show when F [x,y] Figure 11 . 11A homogeneous model and the corresponding compass structure.Theorem 1. A BD hom formula ϕ is satisfiable if and only if there is a homogeneous ϕ-compass structure. We have that Cl(ϕ) ∩ Prop = {p, q}, {⟨B⟩ψ ∈ Cl(ϕ)} = {⟨B⟩⊤, ⟨B⟩¬p}, and {⟨D⟩ψ ∈ Cl(ϕ)} = {⟨D⟩¬q}. We know that, by the homogeneity assumption, the valuation of proposition letters at point-intervals determines that at non-point ones. As an example, if an interval [x, y] contains time point 3, as, e.g., interval [1, 6], then {p, q} ∩ V([x, y]) = ∅. Similarly, if an interval [x, y] contains time point 7 (resp., 0), then it must satisfy {p} ∩ V([x, y]) = ∅ (resp., {q} ∩ V([x, y]) = ∅). F 3 → 3B F 2 → B F 1 , with Req B (F 3 ) = {ψ 1 } and Req B (F 2 ) = Req B (F 1 ) = ∅. For simplicity, let {ψ ∶ ⟨D⟩ψ ∈ Cl(ϕ)} = ∅, and thus Req D (F 3 ) = Req D (F 2 ) = Req D (F 1 ) = ∅, and (F 3 ∩F 2 ∩F 1 )∩Prop = Prop = {p}. It holds that ∆ ↑ (F 1 ) = (2⋅1−2⋅0−0)+(0−0)+(1−0) = 3, ∆ ↑ (F 2 ) = (2⋅1−2⋅0−1)+(0−0)+(1−0) = 2, and ∆ ↑ (F 3 ) = (2⋅1−2⋅1−0)+(0−0)+(1−0) = 1. decreasing if and only if ∆ ↑ (F 0 ) > . . . > ∆ ↑ (F m ). 
Flat (decreasing) Bsequence are the cornerstone of the following results for compass-structures, they constitute a suitable abstraction for the labelling of intervals [x, y 1 ], . . . , [x, y n ] which share the same beginning point. In particular, we will prove that, if we ignore the k i exponents the representation of a flat (decreasing) B-sequence is bounded by size of the input formula ϕ.The following definition is a key piece for allowing us to abstract intervals/points of the type [x, y 1 ], . . . , [x, y n ] in a compass structure into flat (decreasing) B-sequences. Definition 4 . 4Let G = (G N , L) be a compass structure for ϕ and 0 ≤ x ≤ N . We define the shading of x in G, written Sh G (x), as the sequence of atoms L(x, x) . . . Figure 12 . 12(a) Monotonicity of atoms along a column in a compass structure, together with a graphical account of the corresponding intervals and of how proposition letters and B/D requests must behave. (b) An example of a violation of monotonicity. ′B , if and only if m = m ′ and, for all 0 ≤ i ≤ m, F i = G i . This amounts to say that two decreasing flat B-sequences are equivalent if and only if they feature exactly the same sequence of atoms regardless of their exponents. Then, we can represent equivalence classes as decreasing flat B-sequences where each exponent is equal to one, e.g., the B-to the equivalence class[F 0 . . . F m ] ∼ . Given an equivalence class [F 0 . . . F m ] ∼ and 0 ≤ i ≤ m, we denote by [F 0 . . . F m ] i ∼ the i th atom in its sequence, i.e., [F 0 . . . F m ] i ∼ = F i for all 0 ≤ i ≤ m.We also define a function next that, given an equivalence class [F 0 . . . F m ] ∼ and one of its atom F i , returns the successor of F i in the sequence [F 0 . . . F m ] ∼ (for i = n, it is undefined). It can be easily checked that ∼ is of finite index. 
From Corollary 1, it follows that its index is (roughly) bounded by |At(ϕ)| 4|ϕ|+2 = 2 (|ϕ|+1)(4|ϕ|+2) = 2 4|ϕ| 2 +6|ϕ|+2 (function ∆ ↑ is deterministic, so ∆ ↑ (F ) can assume at most 4|ϕ| + 2 distinct values). a decreasing flat B-sequence. We define the length of Sh B , written |Sh B |, as ∑ 0≤i≤m k m . A partial order < over the elements of each equivalence class [Sh B ] ∼ can be defined as follows.Definition 5. Let Sh B = F two equivalent decreasing flat B-sequences. We say that Sh B is dominated by Sh′ B , written Sh B < Sh ′ B , if and only if (i) |Sh B | > |Sh ′ B | and, (ii) for all 0 BBB , by condition (i) of Definition 5 we have that the only possible domination relation may be Sh0 B < Sh 1 B . Then, let us check if condition condition (ii) . In general, one possible possible intuition may be given by the following representation of flat shading. Let us assume (i.e., the shorter sequence) as a suffix of Sh 0 B . Such an alignment is obtained by prefixing Sh 1 B with a word of suitable blank symbol ' ' with length |Sh 0 B | − |Sh 1 B | . Then, in our example, we have that the required alignment and only if the first occurrence of each atom F i in Sh 0 B does not occur in a strictly smaller position inŜh 1 B . In our example, we have that: • F 1 occurs for the first time at position 0 we have that atom F 4 occurs for the first time at position Lemma 5 . 5Let G = (G N , L) be a compass structure. For every pair of columns 0 0 B 0] ∼ , F 0 ) . . . ([Sh m B ] ∼ , F m ) such that m + 1 = |Wit G (y)| and there exists a bijection b ∶ Wit G (y) → {0, . . . , m} such that, for every x ∈ Wit G (y), it holds that Sh G (x) ∈ [Sh b(x) B ] ∼ and F b(x) = L(x, y), and for every Figure 16 . 16An example of contraction, where compass structure (a) is contracted into compass structure (b). 
4|ϕ| 2 4|ϕ|+7|ϕ|+3 possible distinct S → (x, y), that is an upper bound of the length of the longest possible ⊆-ascending sequence in the set of pairs ([Sh G ] ∼ , F ) (i.e., equivalence class and atom). Moreover, each one of the possible witnesses is a pair ([Sh G ] ∼ , F ) and, since Wit G (y) does not contain covered points, each fingerprint f p(x, y) = ([Sh G (x)] ∼ , L(x, y), S → (x, y)) . In each of such positions we can put a pair ([Sh G ] ∼ , F ) and thus the cardinality of the set of all possible Row G (y) ∼ and updates y to y + 1. For every y > 0, the procedure proceeds inductively as follows (let Row G (y) = ([Sh0 B ] ∼ , F 0 ) . . . ([Sh k B ] ∼ , F k )): Pre(e)) = {(wv, u) ∶ v ∈ Σ + , (w, vu) ∈ Inf(e)) = {(uwv, z) ∶ u, v ∈ Σ + , (w, vz) ∈ Con(e)) = {(w, u) ∶ u ∈ − −− → Lang(e)}. given a model M = (I N , V), M, [0, y] ⊧ [G]ψ if and only if ψ ∈ V([x ′ , y ′ ]) for every 0 ≤ x ′ ≤ y ′ ≤ N , that is, if [G]ψ holds on an initial interval of the model (an interval whose left endopoint is 0), then ψ holds on every interval of the model. In BDA hom , we may capture the semantics of [G]ψ by means of the formula ψ ∧ [A]ψ ∧ [A][A]ψ ∧ [B]ψ ∧ [B][A]ψ. Moreover, in the encoding, we will make use of (\w+) (? = (? ∶ \w+, ){2, } and \w+ ) Figure 17 . 17A graphical account of re and its sub-expressions. the shorthands len ≥n and len n for any n ∈ N, that constrain the length of the interval on which they hold to be greater than or equal to n and exactly equal to n, respectively.More precisely, given a model M = (I N , V), we have that [x, y] ⊧ len ≥n if and only if y − x ≥ n, and M, [x, y] ⊧ len n if and only if y − x = n. gm w + ∧ ⟨A⟩(ψ (w , ) 2+ ∧ ⟨A⟩ψ gm and w + ). Intuitively, ψ re requires the presence of three adjacent intervals [x, y], [y, z], and [z, w] such that M, [x, y] ⊧ ψ gm w + , M, [y, z] ⊧ 4 Notice that we provide a unary encoding of the length constraints. It is possible to make a binary encoding analogous to the one proposed in [BMM + 22]. 
ψ (w , ) 2+ , and M, [z, w] ⊧ ψ gm and w + . These sub-formulas constrain the three regular expressions whose concatenation forms re as follows: • ψ gm w + = ⟨B⟩¬W ∧[B](len ≥1 → ⟨A⟩W )∧⟨B⟩⟨A⟩W ∧⟨A⟩¬W . This formula holds over an interval [x, y] if and only if point-intervals [x, x] and [y, y] do not hold a word symbol (conjuncts ⟨B⟩¬W and ⟨A⟩¬W , respectively), but a word symbol holds at all the internal point , such a condition is satisfied by point-intervals [40, 40] and [48, 48], which are included in the interval [34, 49]. Figure 18 . 18a graphical account of how a ⟨D⟩ψ re holds over an interval model representing a text. Lemma 10 . 10For every interval structure M = (I N , V), every triplet of points x ≤ y ≤ z in {0, . . . , N }, and every HS formula ψ M, [x, z] ⊧ ⟨A⟩ψ if and only if M, [y, z] ⊧ ⟨A⟩ψ. ϕ A → {◊, ⧫, □} that, for all ψ ∈ T F ϕ A , satisfies the following four conditions: (i) if α(ψ) = □ then ¬ψ ∈ F ; (ii) if ψ ∈ F then α(ψ) = ⧫; (iii) if , we have that [0, 1] MEETS [1, 2] MEETS [2, 3] but Req A (F [0,1] ) = Req A (F [2,3] ) = ∅ and Req A (F [0,1] ) = {¬ψ 1 }. Let us now focus on the newly introduced second component α [x,y] of each atom which is reported on the very bottom of Figure 20. In the example of Figure 20 we have T F ϕ A = {¬ψ 1 } and thus α [x,y] ϕ = [A]( ⟨B⟩⟨B⟩q ⟶ ⟨D⟩p ) Figure 21 . 21A graphical account of the extension of the → B relation to A-marked atoms both from the interval point of view (left) and the spatial one (right). GB (x) the sequence of atoms L(x, y 0 ) . . . L(x, y m ), and with Sh G N (x) the sequence of natural numbers y 0 . . . y m , that is, the projections of Sh G (x) of on the first and the second components of its elements, respectively. The next lemma represents the BDA hom counterpart of Lemma 3. Lemma 11. Let G = (N, L) be a compass structure and 0 ≤ x ≤ N , then Sh G B (x) is a minimal B-sequence. Problem 1 . 1Given a tuple T = (T, ⇒, ⇑, C) where T, C ∈ N (C is expressed in binary), and ⇒, ⇑ ⊆ {0, . . . 
, T } × {0, . . . , T }, the exponential-corridor tiling problem consists of determining whether or not there exists a function tile ∶ N × {0, . . . , C} → {0, . . . , T } such that: point a number in {0, . . . , C} by a binary encoding via c-propositional variables b 1 , . . . , b c , where b 1 is the most significant bit. Formally, given a model M = (N, V) and a point we define a function withbit V ∶ {0, . . . , N } × {b 1 , . . . , b c } → {0, 1} where bit V (n, b i ) = 1 if b i ∈ V([n, n]) 0 otherwise . ( the bit-wise increment for every bit b i with i ∈ {1, . . . , c−1}; ψ 1 + is triggered by ψ ↑ on every interval [n, n + 1] with y n < C; ψ c + = ¬⟨B⟩b i ∧ ⟨A⟩b i , formula ψ c + encodes the bit-wise increment for the bit b c ; it is triggered by ψ c−1 + on every interval [n, n+1] for which bit V (n, b i ) = 1 for every 1 ≤ i < c; let us notice that it does not propagate and it handles overflows by creating a contradiction; ⟨B⟩(π ∧ b i ) ↔ ⟨A⟩(π ∧ b i )) , formula ψ i = holds over an interval [n, n ′ ] For the aforementioned properties, if we consider the function f that maps a function tile ∶ {0, . . . , M } × {0, . . . , C} → {0, . . . , T } in the model M = (M ⋅ (C + 1), V) where for every (x, y) ∈ {0, . . . , M } × {0, . . . , C} we have t i ∈ V([map(x, y), map(x, y)]) if and only if an interval [n, n ′ ] if and only if n < n ′ , if [x, y] can be split into [x, z] and [z, y], ψ 1 holds over [x, z], and ψ 2 holds over [z, y] (topmost part of Figure 2). A formula ψ 1 D ψ 2 (ψ 1 dawning ψ 2 ) holds over an interval [x, y] if there exists an interval [z, x] such that ψ 1 holds over [z, x] and ψ 2 holds over the interval [z, y] covering both [z, x] and[x, y] (middle part of Figure 2). A formula ψ 1 T ψ 2 (ψ 1 terminating ψ 2 ) holds over an interval [x, y] if there exists an interval [y, z] such that ψ 1 holds over [y, z] and ψ 2 holds over the interval [x, z] covering both [x, y] and [y, z] (bottom part of y ] yif and only if exists z with x ≤ z ≤ y s.t. 
[Figure 2: the semantics of the modalities D and T, and the encoding of the LTL_f modalities U and … in AB. ψ_1 D ψ_2 holds over [x, y] if and only if there exists z with z ≤ x s.t. ψ_1 holds over [z, x] and ψ_2 holds over [z, y]; ψ_1 T ψ_2 holds over [x, y] if and only if there exists z with y ≤ z s.t. ψ_1 holds over [y, z] and ψ_2 holds over [x, z].]

As already pointed out, π is true over [x′, y] if and only if y = x′. It immediately follows that the interval [3, y] restricts the number of possible candidates to [3, 3] only; since V([3, 3]) = ∅, both ¬p and ¬q hold over [3, 3] as well. The first conjunct [B]⟨A⟩(π ∧ p) makes use of modality [B], which forces the formula ⟨A⟩(π ∧ p) to be true over each proper prefix of [0, 3], namely, the intervals [0, 2], [0, 1], and [0, 0]. This amounts to saying that, for each interval [0, x′], with x′ ∈ {0, 1, 2}, ⟨A⟩(π ∧ p) holds on [0, x′] if and only if there exists an interval [x′, y], with y ≥ x′, which makes both π and p true.

Whenever ⟨A⟩(¬π ∧ [B]π ∧ ⟨A⟩(π ∧ ψ)) holds over an interval [x, y], the outermost modality ⟨A⟩ imposes the existence of an interval [y, y′], with y ≤ y′, where ¬π, [B]π, and ⟨A⟩(π ∧ ψ) hold. The first two conjuncts ¬π and [B]π respectively force y′ > y (¬π) and all proper prefixes [y, y″] of [y, y′] to be point-intervals ([B]π). The only way to satisfy both conditions is to constrain y′ to be equal to y + 1. From the truth of ⟨A⟩(π ∧ ψ) on [y, y + 1], it follows that ψ holds on the point-interval [y + 1, y + 1]. Both ψ_2 and ⟨D⟩ψ_1 are satisfied locally by the observables of F_{[0,2]}.
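The semantics of C, D, and T recalled above can be mirrored by a toy evaluator over a finite interval model. All names and the sample model below are illustrative (not from the paper); propositional letters are evaluated under the homogeneity assumption, and degenerate splits are allowed, as in the non-strict semantics:

```python
# A minimal evaluator for the modalities C (chop), D (dawning), and T
# (terminating) over a finite interval model with points 0 .. n_points - 1.
# A model maps each point to the set of propositions holding at that point;
# a letter holds on [x, y] iff it holds at every point of [x, y] (homogeneity).

def holds(model, n_points, phi, x, y):
    kind = phi[0]
    if kind == "prop":
        return all(phi[1] in model.get(t, set()) for t in range(x, y + 1))
    if kind == "C":   # exists z with x <= z <= y splitting [x, y]
        _, p1, p2 = phi
        return any(holds(model, n_points, p1, x, z)
                   and holds(model, n_points, p2, z, y)
                   for z in range(x, y + 1))
    if kind == "D":   # exists z <= x with p1 on [z, x] and p2 on [z, y]
        _, p1, p2 = phi
        return any(holds(model, n_points, p1, z, x)
                   and holds(model, n_points, p2, z, y)
                   for z in range(0, x + 1))
    if kind == "T":   # exists z >= y with p1 on [y, z] and p2 on [x, z]
        _, p1, p2 = phi
        return any(holds(model, n_points, p1, y, z)
                   and holds(model, n_points, p2, x, z)
                   for z in range(y, n_points))
    raise ValueError(kind)

# Toy model over points 0..4: p holds at 0, 1, 2; q holds at 2, 3, 4.
model = {0: {"p"}, 1: {"p"}, 2: {"p", "q"}, 3: {"q"}, 4: {"q"}}
print(holds(model, 5, ("C", ("prop", "p"), ("prop", "q")), 0, 4))  # True: split at z = 2
```

For instance, p C q holds over [0, 4] because the split point z = 2 gives p on [0, 2] and q on [2, 4], exactly as in the topmost part of Figure 2.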
Before proceeding with the analysis of the relation →_D, we state an important lemma that, given an atom F_{[x,y]}, determines how many atoms F_{[x,y+k]}, with k ≥ 1, with a distinct pair (Req_B(F_{[x,y+k]}), Obs_B(F_{[x,y+k]})), can be placed "above" F_{[x,y]} in a compass structure, that is, may have F_{[x,y]} as a prefix.

Let us now describe how Req_D(F_{[x,y]}) and Obs_D(F_{[x,y]}) behave moving from an interval [x, y] to its maximal sub-interval [x + 1, y − 1], starting from the largest interval [0, 4]. First, we observe that, since Obs_D(F_{[0,4]}) = ∅, it trivially holds that Req_D(F_{[0,4]}) ⊇ Req_D(F_{[0,4]}) ∪ Obs_D(F_{[0,4]}), and thus F_{[0,4]} →_D F_{[0,4]}. In analogy with the case of relation →_B, we call D-reflexive all and only those atoms F which satisfy F →_D F; such is F_{[0,4]}. Being D-reflexive means that the labelling F_{[0,4]} may be associated with a proper sub-interval of an interval labelled by F_{[0,4]}. Atom F_{[1,3]} is D-reflexive as well. However, since both the maximal prefix [x, y − 1] and the maximal suffix [x + 1, y] of the current interval [x, y] are not proper sub-intervals of it, it may be the case that F_{[x,y]} →_B F_{[x,y−1]} and/or the analogous relation toward F_{[x+1,y]} do not hold in a consistent model (for instance, in Figure 10, this is the case for F_{[1,3]}).

For case (2), since [x′, x′ + i] is a proper suffix of [x, x + ∆ + i], all the proper sub-intervals of [x′, x′ + i] are also proper sub-intervals of [x, x + ∆ + i]. Then, since G is a compass structure, we have Req_D …

[Figure 20: a graphical (above) and tabular (below) account of the behaviour of Req_R(F), Obs_R(F), and Box_R(F), for F ∈ At(ϕ) and R ∈ {A, B, D}, with ϕ = [A](⟨B⟩⟨B⟩q → ⟨D⟩p).]

The second component α_{[x,y]} assigns to the interval [x, y] the "status" of ¬ψ_1 on it. More precisely, ¬ψ_1 ∈ Req_A(F_{[x,x]}) means that ¬ψ_1 is among the requests pending on the point [x, x], which are the same, according to Lemma 10, for all the intervals of the type [x′, x]. We have α_{[x,y]}(¬ψ_1) = □ if and only if ¬ψ_1 ∉ Req_A(F_{[x,y]}), which means that ¬ψ_1 is not requested by F_{[x,x]}, and thus ψ_1 must be satisfied on all the intervals [x, y]. This is the case, for instance, of the intervals [0, 0], [1, 1], [3, 3], and [4, 4] in Figure 20, which impose α_{[x,y]}(¬ψ_1) = □ and, consequently, ψ_1 ∈ F_{[x,y]} for all [x, y] ∈ {[x, y] : 0 ≤ x ≤ y ≤ 4, x ≠ 2}. If α_{[x,x]}(¬ψ_1) ≠ □, we have α_{[x,y]}(¬ψ_1) ∈ {◊, ⧫} for every y ≥ x, which means that the request ⟨A⟩¬ψ_1 is pending on [x, x] (i.e., ¬ψ_1 ∈ Req_A(F_{[x,y]})) and must be satisfied by some interval of the form [x, y], for some y ≥ x; ⧫ marks the minimum y for which ¬ψ_1 ∈ Obs_A(F_{[x,y]}). This is the case for the interval [2, 2], for which ¬ψ_1 ∈ Req_A(F_{[2,2]}) holds. However, since ¬ψ_1 ∉ Obs_A(F_{[2,2]}) and ¬ψ_1 ∉ Obs_A(F_{[2,3]}), it turns out that α_{[2,2]}(¬ψ_1) = α_{[2,3]}(¬ψ_1) = ◊. On the other hand, ¬ψ_1 appears "for the first time" in Obs_A(F_{[2,y]}) when y = 4, and thus α_{[2,4]}(¬ψ_1) = ⧫. It is worth pointing out that there may be atoms F and G such that Req_B(F) = Req_B(G) ∪ Obs_B(G) (that is, F →_B G) and Req_B(G) ∩ Obs_B(G) ≠ ∅; that is, a ⟨B⟩ request may be at the same time locally satisfied by G and featured as a request for its proper prefixes.
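The closing remark on →_B can be made concrete with a deliberately simplified sketch; it ignores most of the atom machinery, and req_b, obs_b, and the formula encoding below are our own illustrative choices, not the paper's definitions:

```python
# An illustrative sketch of the relation F ->_B G from the closing remark:
# Req_B(F) = Req_B(G) ∪ Obs_B(G). Atoms are frozensets of formulas;
# a formula <B>psi is encoded as the pair ("B", psi), and plain strings
# stand for the formulas that a <B>-request may ask for.

def req_b(atom):
    """B-requests of an atom: the formulas psi with <B>psi in the atom."""
    return {f[1] for f in atom if isinstance(f, tuple) and f[0] == "B"}

def obs_b(atom):
    """B-observables of an atom: here, simply its plain (non-modal) formulas."""
    return {f for f in atom if not isinstance(f, tuple)}

def arrow_b(f_atom, g_atom):
    """F ->_B G: every B-request of F is either still a request of G
    or locally observed by G, and nothing more."""
    return req_b(f_atom) == req_b(g_atom) | obs_b(g_atom)

F = frozenset({("B", "q"), ("B", "p")})
G = frozenset({("B", "q"), "p"})
print(arrow_b(F, G))  # req_b(F) = {p, q}; req_b(G) ∪ obs_b(G) = {q} ∪ {p}
```

The remark itself is witnessed by an atom such as G′ = {⟨B⟩p, p}: there, req_b(G′) ∩ obs_b(G′) = {p}, so the request p is locally satisfied by G′ and still pending for its proper prefixes.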
In the encoding we will make extensive use of the "global" operator whose semantics was introduced in Section 7.
Random horizon principal-agent problems *

Yiqing Lin, Zhenjie Ren, Nizar Touzi, Junjian Yang

February 11, 2022

Abstract. We consider a general formulation of the random horizon principal-agent problem with a continuous payment and a lump-sum payment at termination. In the European version of the problem, the random horizon is chosen solely by the principal, with no other possible action from the agent than exerting effort on the dynamics of the output process. We also consider the American version of the contract, where the agent can also quit by optimally choosing the termination time of the contract. Our main result reduces such non-zero-sum stochastic differential games to appropriate stochastic control problems, which may be solved by standard methods of stochastic control theory. This reduction is obtained by following the Sannikov [22] approach, further developed in [6]. We first introduce an appropriate class of contracts for which the agent's optimal effort is immediately characterized by the standard verification argument in stochastic control theory. We then show that this class of contracts is dense in an appropriate sense, so that the optimization over this restricted family of contracts represents no loss of generality. The result is obtained by using the recent well-posedness result of random horizon second-order backward SDE in [15].

MSC 2010 Subject Classification: 91B40, 93E20

Key words: Moral hazard, first-best and second-best contracting, second-order backward SDE, random horizon.

1 Introduction

The principal-agent problem is a classical moral hazard problem in economics with many applications in corporate governance and industrial economics, which is formulated as a Stackelberg game. The principal (she) delegates the management of an output process to the agent (he).
A contract is signed beforehand, stipulating the terms of an incentive payment. The agent devotes a costly effort to the management of the output. Then, given the contract offered by the principal, he returns an optimal effort response which best balances his cost of effort against the proposed compensation. Finally, the principal chooses the optimal contract so as to incite the agent's effort to serve her interest. A crucial feature of the problem is that the principal only observes the output process, and has no access to the amount of effort exerted by the agent. There is a tremendous literature on this topic, mainly in the one-period setting; we refer to the seminal book [4]. The first continuous time formulation of this problem was introduced by Holmström and Milgrom [12]. The importance of the continuous time formulation was best illustrated by the simplicity of the results. Since then, there has been a stream of research in this direction using the technique of calculus of variations. We refer to the book by Cvitanić and Zhang [8] for the main achievements with this point of view. An original method was introduced by Sannikov [22], which exploits in a very clever way the agent's dynamic value process. This method was related by Cvitanić, Possamaï and Touzi [6] to the theory of backward stochastic differential equations, and extended to the setting where the agent is allowed to control the diffusion of the output process. Such an extension is particularly relevant in portfolio management, as illustrated in Cvitanić, Possamaï and Touzi [5]. We also refer to Aïd, Possamaï and Touzi [1] for an application to the demand-response problem in electricity pricing. Sannikov's approach consists of deriving a representation of the dynamic value process by means of the dynamic programming principle, and then reformulating the principal's objective as a control problem on the coefficients of this representation.
By this methodology, the initial Stackelberg stochastic differential game is reduced to a stochastic control problem. Notice that this representation is nothing but the non-Markovian version of the Hamilton-Jacobi-Bellman equation corresponding to the agent's problem. The extension to the controlled diffusion setting follows the same idea but requires, in addition, a density result for second-order backward SDEs. The main objective of this paper is to extend the reduction result of [6] to the random horizon context. In particular, this allows one to cover the seminal paper of Sannikov [22]. The random horizon setting is commonly used in applications in order to reduce the dimensionality of control problems, as the time variable disappears in homogeneous formulations. Consequently, if the controlled state is one-dimensional, the HJB partial differential equation reduces to a nonlinear ordinary differential equation whose analysis is usually simpler, and which may be found in explicit form in several cases. We shall introduce two versions of the random horizon principal-agent problem. The first is a direct extension of the finite horizon one, and is named the European contracting problem. The second one corresponds to the setting of [22], and is named the American contracting problem, due to the possibility offered to the agent and the principal to terminate the contract at some chosen stopping time. In other words, both actors are faced with an optimal stopping problem in addition to optimally controlling the coefficients of the controlled output process. As in [6], our main results, both for the European and the American contracting problems, rely on a density property of second-order backward SDEs in an appropriate family of solutions of the non-Markovian version of the agent's HJB equation. The corresponding well-posedness result is obtained in our accompanying paper [15].
However, while the density argument for the European contracting problem follows the corresponding argument in [6], the American contracting argument requires a new justification, based on understanding the principal's choice of the optimal termination time of the contract, given the optimal stopping response of the agent. This paper is organized as follows. The random horizon principal-agent problem is described in Section 2, both in its European and American formulations. Section 3 shows that our European contracting problem does not coincide with the corresponding first best contracting problem in the context where the discount factors of both actors are deterministic. This is in contrast with the deterministic horizon situation. In Section 4, we state our main reduction results, and we report their proof based on a density property of second order backward SDEs. We illustrate the usefulness of our reduction result through a solvable example in Section 5. Finally, Section 6 contains the proof of the key density result.

Preliminaries and notation
Given an integer d and some initial condition X_0 ∈ R^d, we introduce the canonical space of continuous paths Ω := {ω ∈ C(R_+, R^d) : ω_0 = X_0}, equipped with the distance defined by ‖ω − ω′‖_∞ := Σ_{n≥0} 2^{−n} ( sup_{0≤t≤n} |ω_t − ω′_t| ∧ 1 ). We denote by M_1^+(Ω) the collection of all probability measures on Ω. The canonical process X is defined by X_t(ω) := ω_t, for all ω ∈ Ω, with corresponding canonical filtration F = (F_t)_{t≥0}. We also introduce the right limit F^+ = (F_t^+)_{t≥0} of F, and, for a measure P ∈ M_1^+(Ω), the augmentation F^{+,P} of the filtration F^+ under P. For a subset P ⊆ M_1^+(Ω), we introduce F^P := (F_t^P)_{t≥0} and F^{+,P} := (F_t^{+,P})_{t≥0}, where F_t^P := ∩_{P∈P} F_t^P and F_t^{+,P} := ∩_{P∈P} F_t^{+,P}. We say that a property holds P-quasi-surely, abbreviated as P-q.s., if it holds P-a.s. for all P ∈ P.
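For intuition, the truncated sup-norm distance on the canonical space is easy to evaluate for paths sampled on a common finite time grid. The following is a minimal numerical sketch (the helper name and the truncation level n_max are assumptions of this illustration, not part of the paper):

```python
import numpy as np

def path_distance(omega1, omega2, t_grid, n_max=20):
    """Truncated version of the canonical-space metric
    sum_{n>=0} 2^{-n} * min( sup_{0<=t<=n} |w_t - w'_t|, 1 ),
    for paths sampled on a common time grid; the tail n > n_max is dropped."""
    diff = np.abs(np.asarray(omega1) - np.asarray(omega2))
    t_grid = np.asarray(t_grid)
    total = 0.0
    for n in range(n_max + 1):
        mask = t_grid <= n
        # sup of |w - w'| over [0, n], capped at 1
        sup_n = diff[mask].max() if mask.any() else 0.0
        total += 2.0 ** (-n) * min(sup_n, 1.0)
    return total
```

Since each summand is capped at 2^{−n}, the distance is bounded by 2, and two paths that agree on [0, n] contribute nothing up to level n.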
The universal filtration F^U := (F_t^U)_{t≥0} and the corresponding (right-continuous) completion F^{+,U} := (F_t^{+,U})_{t≥0} correspond to the case P = M_1^+(Ω). We denote by P_loc ⊆ M_1^+(Ω) the collection of probability measures P such that X is a continuous P-local martingale whose quadratic variation process is absolutely continuous in t with respect to the Lebesgue measure, with corresponding density σ_t^2 := limsup_{n→∞} n ( ⟨X, X⟩_t − ⟨X, X⟩_{(t−1/n)∨0} ), t > 0. Here, the quadratic covariation process ⟨X⟩ is pathwise well-defined by Karandikar [13]. Then, for all P ∈ P_loc, we may find a Brownian motion W such that X_t = ∫_0^t σ_s dW_s, t ≥ 0, P-a.s. For a stopping time τ we define the stochastic interval [[0, τ]] := {(t, ω) ∈ R_+ × Ω : t ≤ τ(ω)}. We next enlarge the canonical space to Ω̄ := Ω × Ω and denote by (X, W) the coordinate process on Ω̄. Denote by F̄ the filtration generated by (X, W). For each P ∈ P_loc we may construct a probability measure P̄ on Ω̄ such that P̄ ∘ X^{−1} = P, W is a P̄-Brownian motion, and dX_t = σ_t dW_t, P̄-a.s. From now on, we abuse notation and keep using P to represent P̄ on Ω̄. Denote by Q_L(P) the set of all probability measures Q^λ such that D_t^{Q^λ|P} := (dQ^λ/dP)|_{F_t} = exp( ∫_0^t λ_s · dW_s − (1/2) ∫_0^t |λ_s|^2 ds ), t ≥ 0, (1.1) for some F^{+,P}-progressively measurable process λ = (λ_t)_{t≥0} uniformly bounded by L. By Girsanov's theorem, W^λ := W − ∫_0^· λ_s ds is a Q^λ-Brownian motion on any finite horizon, and thus X^λ := X − ∫_0^· σ_t λ_t dt is a Q^λ-martingale on any finite horizon. We denote E^P[·] := sup_{Q∈Q_L(P)} E^Q[·] for P ∈ P_loc, and E^P[·] := sup_{P∈P} E^P[·] for a subset P ⊆ P_loc. Let p > 1 and α ∈ R, and let τ be an F^{+,P}-stopping time. Let G := (G_t)_{t≥0} be a filtration with G_t ⊇ F_t for all t ≥ 0, so that τ is also a G-stopping time. We denote the following: • L^p_{α,τ}(P, G), the space of R-valued, G_τ-measurable random variables ξ such that ‖ξ‖^p_{L^p_{α,τ}(P)} := E^P[ |e^{ατ} ξ|^p ] < ∞.
• D^p_{α,τ}(P, G), the space of scalar càdlàg G-adapted processes Y such that ‖Y‖^p_{D^p_{α,τ}} := E^P[ sup_{0≤t≤τ} |e^{αt} Y_t|^p ] < ∞. • H^p_{α,τ}(P, G), the space of R^d-valued, F^+-progressively measurable processes Z such that ‖Z‖^p_{H^p_{α,τ}} := E^P[ ( ∫_0^τ e^{αt} |σ_t Z_t|^2 dt )^{p/2} ] < ∞.

2 Principal-agent problem
Controlled state equation
The agent's effort ν = (α, β) is an F-optional process with values in A × B, for some subsets A and B of finite-dimensional spaces. We denote the set of such effort processes by U. The output process takes values in R^d, with distribution defined by means of the controlled coefficients: λ : R_+ × Ω × A → R^d, bounded, with λ(·, a) F-optional for every a ∈ A, and σ : R_+ × Ω × B → M_d(R), bounded, with σ(·, b) F-optional for every b ∈ B, where M_d(R) denotes the space of all d × d matrices with real entries. The controlled state equation is defined by the SDE: X_t = X_0 + ∫_0^t σ_r(X, β_r) ( λ_r(X, α_r) dr + dW_r ), t ≥ 0, (2.1) where W is a d-dimensional Brownian motion. Notice that the processes α and β are functions of the path of X. As is standard in probability theory, the dependence on the canonical process will be suppressed. A control model is a weak solution of (2.1), defined as a pair M := (P, ν) ∈ M_1^+(Ω) × U. We denote by M the collection of all such control models, as opposed to control processes. We assume throughout this paper the following implicit condition on σ (see Remark 2.1 below): M ≠ ∅. (2.2) This condition is satisfied, for instance, if x ↦ σ_t(x, b) is bounded and continuous for some constant control b ∈ B, see, e.g., [14, Theorem 5.4.22, Remark 5.4.23]. Notice that we do not restrict the controls to those for which weak uniqueness holds. Moreover, by Girsanov's theorem, two weak solutions of (2.1) associated with (α, β) and (α′, β) are equivalent. However, different diffusion coefficients induce mutually singular weak solutions of the corresponding stochastic differential equations.
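As a concrete illustration, a one-dimensional Euler-Maruyama discretization of the state equation (2.1) under constant-effort coefficients might look as follows. The constants lam and sigma stand in for λ_r(X, α_r) and σ_r(X, β_r); all names and the constant-coefficient simplification are assumptions of this sketch:

```python
import numpy as np

def simulate_output(x0, lam, sigma, T=1.0, n_steps=1000, seed=0):
    """Euler-Maruyama sketch of dX_t = sigma * (lam dt + dW_t), a
    constant-coefficient special case of the controlled state equation (2.1)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = x[i] + sigma * (lam * dt + dw)
    return x
```

In the weak formulation of the paper, the effort only changes the distribution of this path, not the path map itself; the simulation above fixes one such distribution.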
We finally introduce the following sets: P(ν) := { P ∈ M_1^+(Ω) : (P, ν) ∈ M }, P := ∪_{ν∈U} P(ν), U(P) := { ν ∈ U : (P, ν) ∈ M }, U := ∪_{P∈M_1^+(Ω)} U(P). Remark 2.1. By the boundedness of λ, we may connect any admissible model (P, ν) ∈ M to a subset of P_loc as follows. Let (Q, β) be an arbitrary weak solution of the driftless SDE X_t = X_0 + ∫_0^t σ_r(X, β_r) dW_r, t ≥ 0, (2.3) for some optional B-valued process β. Then Q ∈ P_loc, and we may use the Girsanov change of measure theorem to define, for every A-valued optional process α, a pair M := (P, (α, β)) which solves the SDE (2.1), by setting (dP/dQ)|_{F_t} = exp( ∫_0^t λ_s(X, α_s) · dW_s − (1/2) ∫_0^t |λ_s(X, α_s)|^2 ds ), t ≥ 0. Conversely, any admissible model M = (P, (α, β)) ∈ M induces a probability measure Q ∈ P_loc by the last Girsanov equivalent change of measure.

Agent's problem
The effort exerted by the agent is costly, with cost of effort measured by the function c : R_+ × Ω × A × B → R_+, measurable, with c(·, u) F-optional for all u ∈ A × B and c(·, 0) = 0. Let (P, ν) ∈ M be fixed. The canonical process X is called the output process, and the control ν is called the agent's effort or action. The agent exerts the effort process ν to control the (distribution of the) output process defined by the state equation (2.1), while being subject to cost of effort at rate c(X, ν). The agent values future income through the discount factor K^ν := e^{−∫_0^· k_r(ν_r) dr}, where k : R_+ × Ω × A × B → R is bounded, with k(·, u) F-optional for all u ∈ A × B, and k(·, 0) = k_0 for some constant k_0 > 0. A contract is a triple C = (τ_P, π, ξ) composed of the following: • a finite stopping time τ_P, representing the termination time of the contract; • an optional process (π_{t∧τ_P})_{t≥0}, representing a rate of payment from the principal to the agent; and • an F_{τ_P}-measurable random variable ξ, representing the final compensation at retirement. The principal observes only the output process X and has no access to information on the agent's effort.
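The stochastic exponential used in Remark 2.1 (and in (1.1)) can be sanity-checked by Monte Carlo: for a constant λ it is a true martingale, so its time-T sample mean should be close to 1. A minimal sketch, with all parameter names and the constant-λ simplification being assumptions of this illustration:

```python
import numpy as np

def girsanov_density(lam=0.5, T=1.0, n_steps=100, n_paths=200_000, seed=1):
    """Samples of the Doleans-Dade exponential
    D_T = exp( int_0^T lam dW - (1/2) int_0^T lam^2 dt )
    for a constant lam; their mean estimates E[D_T] = 1 (martingale property)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    log_d = lam * dw.sum(axis=1) - 0.5 * lam**2 * T
    return np.exp(log_d)
```

Positivity of D_T is what makes the new measure Q^λ equivalent to P, which is why drift changes preserve null sets while changes of the diffusion coefficient do not.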
Consequently, the components of the contract C can only be contingent on X, which is immediately encoded in our weak formulation setting. The set of admissible contracts C consists of all such contracts which satisfy, in addition, the technical requirements reported in subsection 2.4 below. The agent's preferences are defined by a continuous, strictly increasing utility function U_A : R → R. Given a contract C = (τ_P, π, ξ), we shall consider in this paper two possible contracting problems which are both relevant in the economics literature. Agent cannot quit. We first consider the contracting problem as in Sannikov [22]. By analogy with derivative securities, we refer to this setting as that of a European contracting problem. The objective function is defined by J_E(M, C) := E^P[ K^ν_{τ_P} U_A(ξ) + ∫_0^{τ_P} K^ν_t ( U_A(π_t) − c_t(ν_t) ) dt ] for all M = (P, ν) ∈ M. (2.4) Throughout this paper, we adopt the convention ∞ − ∞ = −∞, implying that the above expectation J_E(M, C) is well-defined. The European agent aims at optimally choosing the effort, given the promised compensation contract C: V_E(C) := sup_{M∈M} J_E(M, C), C ∈ C, with the convention sup ∅ = −∞, which also prevails throughout this paper. A control model M̂ = (P̂, ν̂) ∈ M is an optimal response to contract C if V_E(C) = J_E(M̂, C). We denote by M̂_E(C) the (possibly empty) set of all such optimal control models. Agent can quit. We now introduce a new setting, which we name that of the American contracting problem. We assume that the agent may choose a retirement time τ before the contract terminates. After retirement, the agent receives no further transfers from the principal, i.e., ξ = 0 and π = 0 on {t ≥ τ ∧ τ_P}. As c_t(0) = 0 and k_t(0) = k_0, the (dynamic) value function of the agent at retirement is given by ∫_{τ∧τ_P}^∞ e^{−k_0(t−τ∧τ_P)} U_A(0) dt = U_A(0)/k_0 =: U_A(ρ). Given this definition of the constant ρ, we denote by C_A the collection of all pairs C_A = (τ_P, π) such that C := (C_A, ρ) ∈ C.
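The closed-form retirement value ∫_0^∞ e^{−k_0 t} U_A(0) dt = U_A(0)/k_0 is a plain exponential integral. A quick numerical cross-check (trapezoidal rule with a truncated tail; the argument u0 stands for U_A(0), and all names are assumptions of this sketch):

```python
import numpy as np

def retirement_value(u0, k0, t_max=200.0, dt=1e-3):
    """Trapezoidal approximation of int_0^infty e^{-k0 t} * u0 dt,
    which should match the closed form u0 / k0 once e^{-k0 t_max} is negligible."""
    t = np.arange(0.0, t_max, dt)
    f = u0 * np.exp(-k0 * t)
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))
```

With k0 = 0.1 and U_A(0) = −1 the integral is −10, which is the constant perpetual value U_A(ρ) promised to a retired agent.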
We denote by M_A the collection of all decision variables (τ, M) for the agent, where τ is an F-stopping time and M = (P, ν) ∈ M. The American agent's objective function is defined by J_A(τ, M, C_A) := E^P[ K^ν_{τ∧τ_P} U_A(ρ) + ∫_0^{τ∧τ_P} K^ν_t ( U_A(π_t) − c_t(ν_t) ) dt ] for all (τ, M) ∈ M_A, and the agent aims at optimally choosing the effort and the quitting time, given the promised compensation contract C_A: V_A(C_A) := sup_{(τ,M)∈M_A} J_A(τ, M, C_A), C_A ∈ C_A. (2.5) We say that (τ̂, M̂) ∈ M_A is an optimal response to contract C_A if V_A(C_A) = J_A(τ̂, M̂, C_A). We denote by M̂_A(C_A) the (possibly empty) set of all such optimal responses.

Principal's problem
The contracts which can be offered by the principal are those admissible contracts which are subject to the additional restriction: C_R^a := { C ∈ C^a : V_a(C) ≥ U_A(R) }, a ∈ {E, A}, (2.6) where C^E := C, and R is a given participation threshold representing the minimum satisfaction level required by the agent in order to accept the contract. The principal benefits from the value of the output X and pays the agent as promised in the contract C, namely, she pays a continuous compensation at the rate π, and • in the case of a European contract, a final compensation ξ at the termination time τ_P, • in the case of an American contract, ξ = 0 at the agent's quitting time τ ∧ τ_P. This leads to the following definitions for the second best principal's problem under European and American contracts, respectively: V^PE := sup_{C∈C_R^E} sup_{M∈M̂_E(C)} J_P(M, C), V^PA := sup_{(τ_P,π)∈C_R^A} sup_{(τ̂,M̂)∈M̂_A(τ_P,π)} J_P(M̂, (τ̂ ∧ τ_P, π, 0)), where, for all C = (τ, π, ξ): J_P(M, C) := E^P[ K_τ^P U_P(ℓ_τ − ξ) + ∫_0^τ K_r^P U_P(−π_r) dr ]. Here, U_P : R → R is a given nondecreasing utility function, ℓ : Ω → R is a liquidation function with linear growth, ℓ_τ := ℓ(X_{·∧τ}), and K_t^P := e^{−∫_0^t k_r^P dr}, t ≥ 0, is a discount factor, defined by means of a discount rate function k^P : R_+ × Ω → R, bounded and F-optional.
By our convention sup ∅ = −∞, notice that the principal only offers those admissible contracts which induce a nonempty set of optimal responses, i.e., M̂_E(C) ≠ ∅ in the European case and M̂_A(τ_P, π) ≠ ∅ in the American case. We also observe that, following the standard economic convention, the above definition of the principal's criterion assumes that, in the case where the agent is indifferent between various optimal responses, he implements the one that is best for the principal. Remark 2.2. Careful readers may have noted that in this paper we analyze both the cases where the agent can and cannot quit the contract. However, we always allow the principal to end the contract at a chosen time τ_P. What would happen if the principal had no right to end the contract before maturity? In fact, the principal can always induce the agent to quit the contract at a stopping time τ chosen by the principal (without ending the contract herself), by offering a low payment at any time t ≥ τ. We refer the interested reader to Remark 2.2 in Cvitanić, Wan and Zhang [7] for a detailed discussion.

Admissible contracts
We now provide the precise definition of the set of admissible contracts C. We need the following additional notation: Ω_t^ω := { ω′ ∈ Ω : ω′|_{[0,t]} = ω|_{[0,t]} }, (ω ⊗_t ω′)_s := 1_{{s≤t}} ω_s + 1_{{s>t}} (ω_t + ω′_{s−t}), and ξ^{t,ω}(ω′) := ξ(ω ⊗_t ω′), X_s^{t,ω}(ω′) := X_{t+s}(ω ⊗_t ω′), τ^{t,ω}(ω′) := τ(ω ⊗_t ω′) − t. We also introduce the dynamic version of P by considering the controlled SDE on [t, ∞) issued from the path ω ∈ Ω: P(t, ω) := { P ∈ M_1^+(Ω_t^ω) : dX_s^{t,ω} = σ_s^{t,ω}(X^{t,ω}, β_s) ( λ_s^{t,ω}(X^{t,ω}, α_s) ds + dW_s ), P-a.s., (α, β) ∈ U }. In particular, P = P(0, 0).
We shall use the nonlinear expectations E^{P(t,ω)}[·] := sup_{P∈P(t,ω)} E^P[·]. An admissible contract is a triple C = (τ, π, ξ), with τ ∈ T, satisfying E^{P(t,ω)}[ |e^{ρ τ^{t,ω}} U_A(ξ^{t,ω})|^q ] + E^{P(t,ω)}[ ( ∫_0^{τ^{t,ω}} e^{ρr} |U_A(π_r^{t,ω})|^2 dr )^{q/2} ] < ∞, (2.8) for some q > 1 and ρ > −μ, where μ := inf_{u∈A×B} inf_{t≥0} ess inf_{ω∈Ω} k_t(ω, u). We denote by C the set of admissible contracts. The following condition ensures that J_E and J_A are finite for each contract C ∈ C. Assumption 2.4. The cost function c is bounded by some c̄ satisfying, for some ρ > −μ and q > 1, E^{P(t,ω)}[ ( ∫_0^{τ^{t,ω}} e^{ρr} |c̄_r^{t,ω}|^2 dr )^{q/2} ] < ∞, for all (t, ω) ∈ [[0, τ]] and τ ∈ T. (2.9) Remark 2.5. For (t, ω) = (0, 0), we have P(t, ω) = P and E^P[ ( ∫_0^τ e^{ρr} |c̄_r|^2 dr )^{q/2} ] + E^P[ |e^{ρτ} U_A(ξ)|^q ] + E^P[ ( ∫_0^τ |e^{ρr} U_A(π_r)|^2 dr )^{q/2} ] < ∞.

Comparison with first best contracts
We first observe that our reduction result for the European contracting problem extends to the case where the set of admissible contracts C is replaced by C[T] := { C = (τ_P, π, ξ) ∈ C_R^E : τ_P ∈ T }, for some subset T of finite stopping times. In the economics literature, it is well known that, in the risk-neutral agent setting with deterministic maturity τ_P = T, the European principal-agent problem reduces to one single optimization problem, corresponding to the case where the principal imposes the amount of effort that the agent devotes. This is the so-called first best optimal contract problem, where the principal has full power to choose both the contract and the agent's effort. Under the European contracting rule, the first best risk sharing problem is defined by V_fb^PE := sup_{(C,M)∈C[T]×M, J_A(M,C)≥R} E^P[ K_τ^P U_P(ℓ_τ − ξ) + ∫_0^τ K_t^P U_P(−π_t) dt ]. (3.1) It is clear that V_fb^PE ≥ V^PE. Example 3.2 below shows that, when the termination time τ_P is not deterministic, the first and second best contracting problems do not coincide in general. In this section, we provide precise conditions under which equality holds.
We first need to assume that the agent's discount rate k is independent of the effort, so that the agent's discount factor is independent of the effort process: K_t := K_t^ν is independent of ν, and we denote η_t := K_t / K_t^P, t ∈ [0, T]. (3.2) This condition is necessary in order to identify directly the optimal first best compensations (ξ, π) of the principal, independently of the agent's effort. We also assume that the principal's utility function U_P is C^1, increasing, and strictly concave, with U_P′(−∞) = ∞, U_P′(∞) = 0, (3.3) and we introduce the corresponding convex conjugate U_P^*(y) := sup_{x∈R} { U_P(x) − xy } = U_P ∘ (U_P′)^{−1}(y) − y (U_P′)^{−1}(y). Finally, we shall denote, for any function F : R_+ → R with appropriate measurability, J_τ^F(λ) := K_τ^P F(λη_τ) + ∫_0^τ K_t^P F(λη_t) dt, λ ≥ 0. Proposition 3.1. Consider a risk neutral agent, i.e., U_A = Id_R, with discount factor satisfying (3.2), and a principal with utility function satisfying (3.3). Let ξ^λ := ℓ_τ − (U_P′)^{−1}(λη_τ), π^λ := −(U_P′)^{−1}(λη), and assume (C1) for all λ ≥ 0, the problem v_fb(λ) := sup_{τ∈T, M∈M(τ,π^λ,ξ^λ)} E^P[ J_τ^{U_P^*}(λ) + λ H_τ(ν) ] has a solution (τ^λ, M^λ), with H_τ(ν) := K_τ ℓ_τ − ∫_0^τ K_t c_t(ν_t) dt − R. (1) Then, assuming, in addition, that (C2) 0 = E^{P^λ̂}[ J_{τ^λ̂}^{U_P^*}(λ̂) − J_{τ^λ̂}^{U_P∘(U_P′)^{−1}}(λ̂) + λ̂ H_{τ^λ̂}(ν^λ̂) ] for some λ̂ > 0, (1-i) we have V_fb^PE = v_fb(λ̂), with optimal contract and effort Ĉ := (τ̂_P, π̂, ξ̂) := (τ^λ̂, π^λ̂, ξ^λ̂) and M̂ := M^λ̂ = (P^λ̂, ν^λ̂). (1-ii) V^PE = V_fb^PE if and only if M̂ is also a solution of the problem v̂ := sup_{M∈M(Ĉ)} E^P[ J_{τ^λ̂}^{U_P^* − U_P∘(U_P′)^{−1}}(λ̂) + λ̂ H_{τ^λ̂}(ν) ]; in this case, (τ̂_P, π̂, ξ̂) is also a second best optimal contract with optimal effort M̂. (2) Let T = {T} for some fixed deterministic maturity T, and let K, K^P be deterministic functions. Then condition (C2) is satisfied, and the problems v_fb and v̂ have the same set of solutions.
Consequently, V PE = V PE fb , and the first best and second best optimal contracting problems have the same solution. Proof. (1-i) Let C = (τ, π, ξ) and M = (P, ν) satisfy the participation constraint J A (M, C) − R = E P K τ ξ + τ 0 K t π t − c t (ν t ) dt − R ≥ 0. As λ ≥ 0, as defined in condition (C2), we have, J P (M, C) ≤ E P K P τ U P ( τ − ξ) + τ 0 K P t U P (−π t )dt + λ K τ ξ + τ 0 K t π t − c t (ν t ) dt − R (3.4) = E P K P τ U P ( τ − ξ) − λη τ ( τ − ξ) + τ 0 K P t U P (−π t ) − λη t (−π t ) dt + λ K τ τ − τ 0 K t c t (ν t )dt − R ≤ E P K P τ U * P λη τ + τ 0 K P t U * P λη t dt + λH τ (ν) (3.5) = E P J U * P τ λ + λH τ (ν) ≤ v fb λ , (3.6) where the last inequality follows from the definition of U * P . By the arbitrariness of (τ, π, ξ, M), this implies that v fb λ defines an upper bound for V PE fb . Clearly, the contract C = τ P , π, ξ with effort M, whose existence is guaranteed by condition (C1), restores the equality both in (3.5) and in (3.6). By direct verification, we also see that the choice of λ by means of condition (C2) restores equality in (3.4). Hence, the last upper bound is achieved, and therefore τ P , π, ξ, M is a solution of the first best problem. (1-ii) The inequality V PE fb ≥ V PE is obvious. In order to show the equality, a necessary and sufficient condition is that the optimal agent response M C = M, i.e. the agent's optimal response to the first best optimal contract coincides with the first best optimal effort. In order to complete the proof, we now show that given the contract τ P , π, ξ , the effort M = P, ν is an optimal response for the agent problem. 
Indeed, we directly compute that J E M, C = E P K τ P ξ + τ P 0 K t π t − c t (ν t ) dt = E P K τ P τ − (U P ) −1 λη τ + τ P 0 K t − (U P ) −1 λη t − c t (ν t ) dt = R + 1 λ E P λ K τ P τ P − τ P 0 K t c t (ν t )dt − R − K P τ P λη τ P (U P ) −1 ( λη τ P ) − τ P 0 K P t λη t (U P ) −1 λη t dt = R + 1 λ E P λH τ P (ν) + J U * P −U P •(U P ) −1 τ P λ ≤ R + v λ , where we used the fact that U * P (y) = U P • (U P ) −1 (y) − y(U P ) −1 (y). By the arbitrariness of M ∈ M( C) this provides the upper bound V E ( C) ≤ R + v λ , which is achieved by the maximizer of v. Hence, a necessary and sufficient condition for equality between the first and the second best contracting problems is that the optimal first best effort M is also a maximizer of v. (2) In the present setting, notice that J U * P T (λ) and J U P •(U P ) −1 T (λ) are deterministic. Then, v fb (λ) = J U * P T (λ) + λ sup M∈M(T,π λ ,ξ λ ) E P H T (ν) . Similarly, we have v = J U * P −U P •(U P ) −1 T ( λ) + λ sup M∈M( C) E P H T (ν) , which reduces to the same maximization problem as in v fb ( λ). Let us finally check that condition (C2) is verified. Indeed, notice that in the present case, the optimal controls ( τ , M) = (τ λ , M λ ) are independent of λ. Condition (C2) reduces to 0 = E P J U * P T ( λ) − J U P •(U P ) −1 T ( λ) + λH T ( ν) = V PE fb − J U P •(U P ) −1 T ( λ). Hence, the existence of a unique solution to the last equation follows from our condition (3.3) on the principal's utility function. We conclude this section with an example of European contracting problem with risk neutral agent and one single possible stopping T = {τ P }, where the first best and second best coincide for deterministic τ P , but do not coincide, in general, in the context of a random horizon τ P . Example 3.2. The Holmström and Milgrom contracting problem models the output process under effort α by the dynamics dX t = (rX t + α t )dt + dW α t , P α -a.s. 
We consider the following random horizon extension of the criteria for the agent and the principal, respectively: J(ξ, α) := E^{P^α}[ ξ e^{−rτ_P} − (1/2) ∫_0^{τ_P} e^{−rt} α_t^2 dt ], and J_P(ξ, α) := E^{P^α}[ e^{−rτ_P} U_P(X_{τ_P} − ξ) ]. Following the same argument as in the previous proof, we find that the first best optimal contract is ξ̂ := X_{τ_P} − (U_P′)^{−1}(λ̂), with corresponding constant optimal effort α̂_t = 1 for all t ≤ τ_P, where the Lagrange multiplier λ̂ is the solution of R = E^{P^α̂}[ e^{−rτ_P} ( X_{τ_P} − (U_P′)^{−1}(λ̂) ) − (1/2)(1 − e^{−rτ_P}) ] = (1/2)( 1 − E^{P^α̂}[e^{−rτ_P}] ) − E^{P^α̂}[e^{−rτ_P}] (U_P′)^{−1}(λ̂). In order to check whether the first and second best contracts coincide, we only need to verify whether the agent's optimal response to the first best contract ξ̂ is also the unit constant effort α̂. Direct calculation provides J(ξ̂, α) = E^{P^α}[ −e^{−rτ_P} (U_P′)^{−1}(λ̂) + ∫_0^{τ_P} e^{−rt} ( α_t − (1/2) α_t^2 ) dt ]. For a deterministic finite horizon τ_P = T, the maximum of J(ξ̂, α) is achieved by the constant unit effort process α̂, thus proving the identity between the first and the second best problems. However, this is no longer the case for a random horizon τ_P, in general. Consider, for instance, the example τ_P := inf{ t > 0 : X_t ≤ 0 }, with X_0 > 0. By standard control theory, the HJB equation corresponding to this problem is r v − (1/2) v″ − sup_a { a (1 + v′) − (1/2) a^2 } = 0 on R_+, with boundary condition v(0) = −(U_P′)^{−1}(λ̂). Suppose to the contrary that the unit constant effort α̂ is optimal; then â = 1 must be the maximizer in the last HJB equation, which happens if and only if v′ = 0, meaning that the value function v is constant. But the HJB equation then reduces to v = 1/(2r), which does not match the boundary condition at the origin.

Reduction to a standard stochastic control problem
In this section, we extend the result of Cvitanić, Possamaï, and Touzi [6] to the present random horizon setting.
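The contradiction argument concluding Example 3.2 above amounts to elementary arithmetic: the inner supremum of the HJB equation equals (1 + v′)²/2, attained at a* = 1 + v′, so any constant solution of the interior equation must equal 1/(2r). A minimal numerical check (the helper name is an assumption of this sketch):

```python
import numpy as np

def hjb_residual(v, dv, d2v, r):
    """Residual of the HJB equation in Example 3.2,
    r v - (1/2) v'' - sup_a { a (1 + v') - a^2 / 2 },
    where the inner supremum equals (1 + v')^2 / 2, attained at a* = 1 + v'."""
    return r * v - 0.5 * d2v - 0.5 * (1.0 + dv) ** 2
```

A constant candidate v = 1/(2r) (so v′ = v″ = 0 and a* = 1) makes this residual vanish, which is exactly why it cannot also satisfy the boundary condition v(0) = −(U_P′)^{−1}(λ̂) unless that value happens to equal 1/(2r).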
The key argument, introduced by Sannikov [22], is to reduce the principal optimization problem by using the dynamic programming representation of the agent's value process. As is standard in stochastic control theory, such a representation involves the agent's (path-dependent) Hamiltonian: H t (ω, y, z, γ) := sup u∈A×B h t (ω, y, z, γ, u); (t, ω) ∈ [0, ∞) × Ω, (y, z, γ) ∈ R × R d × S d (R), (4.1) where S d (R) is the set of symmetric matrices in M d (R), and for u = (a, b) ∈ A × B h t (ω, y, z, γ, u) := −c t (ω, u) − k t (ω, u)y + σ t (ω, b)λ t (ω, a) · z + 1 2 Tr (σ t σ t )(ω, b)γ ,(4.2) where Tr[M ] denotes the trace of a matrix M ∈ M d (R). We next introduce for an arbitrary initial value Y 0 ∈ R and F-predictable processes (Z, Γ) with values in R d × S d (R) the process Y Y 0 ,Z,Γ defined by the random ODE: Y Y 0 ,Z,Γ t := Y 0 + t 0 Z r · dX r + 1 2 Tr Γ r d X r − H r Y Y 0 ,Z,Γ r , Z r , Γ r dr − U A (π r )dr (4.3) under appropriate integrability. We shall see that the process Y Y 0 ,Z,Γ turns out to represent the agent's value process, and will be shown to be a convenient parameterization of the contracts by setting (τ P , π, ξ) = (τ P , π, ξ Y 0 ,Z,Γ ) with ξ Y 0 ,Z,Γ := U −1 A Y Y 0 ,Z,Γ τ P . Definition 4.1. We denote by V the collection of all such processes (Z, Γ) satisfying, in addition, the following: (i) Z H p α,τ P (P) + Y Y 0 ,Z,Γ D p α,τ P (P) < ∞, for some p > 1 and α ∈ R. (ii) There exists a weak solution P Y 0 ,Z,Γ , ν Y 0 ,Z,Γ ∈ M such that H t (Y t , Z t , Γ t ) = h t Y t , Z t , Γ t , ν Y 0 ,Z,Γ t , dt ⊗ P Y 0 ,Z,Γ -a.e. on 0, τ P . (4.4) Condition (i) guarantees that the process Y Y 0 ,Z,Γ of (4.3) is well-defined P-a.s. for all P ∈ P. First, as k is bounded, the Hamiltonian H is Lipschitz in the y variable. It guarantees that Y Y 0 ,Z,Γ is well-defined as the unique solution of the ODE with random coefficients (4.3), provided that the integrals are well-defined. 
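A discrete-time sketch of the random ODE (4.3) makes the role of (Y_0, Z, Γ) transparent: given increments of the output X, the candidate value process is advanced by an Euler step. The constant coefficients (z, gamma, sigma), the user-supplied Hamiltonian, and the constant utility rate pi_util are all simplifying assumptions of this illustration:

```python
import numpy as np

def agent_value_path(y0, z, gamma, x_incr, sigma, pi_util, hamiltonian, dt):
    """Euler sketch of the agent value process (4.3):
    dY_t = Z_t dX_t + (1/2) Gamma_t d<X>_t - H_t(Y, Z, Gamma) dt - U_A(pi_t) dt,
    in one dimension with constant (z, gamma, sigma)."""
    y = [y0]
    for dx in x_incr:
        dy = (z * dx + 0.5 * gamma * sigma**2 * dt
              - hamiltonian(y[-1], z, gamma) * dt - pi_util * dt)
        y.append(y[-1] + dy)
    return np.array(y)
```

With a trivial Hamiltonian and no payments, the recursion collapses to Y = Y_0 + Z · (cumulative increments of X), which is the martingale part of the representation.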
Moreover, as in [6], the integrals are indeed well-defined, without further condition on the process Γ, as we see by applying Itô's formula that K ν t Y Y 0 ,Z,Γ t + t 0 K ν r U A (π r ) − c r (ν r ) dr = Y 0 + t 0 K ν r Z r · σ r (β r )dW P r − A ν t , t ≤ τ, P-a.s., (4.5) for all (P, ν) ∈ M, where A ν := . 0 K ν r H r − h r (., ν r ) Y Y 0 ,Z,Γ r , Z r , Γ r dr is a nondecreasing process. Due to Assumption 2.4 and the admissibility condition (2.8), the first integral is welldefined. Now, the only issue is with the existence of the stochastic integral . 0 K ν r Z r · σ r (β r )dW P r under each P ∈ P. We emphasize that, as a consequence of the main result of Nutz [17], the stochastic integral . 0 K ν r Z r · dX r is defined pathwisely on Ω without exclusion of any null set. This is a crucial fact as our main result below states that the principal's problem can be reduced to choosing among contracts of the form τ P , π, U −1 A (Y Y 0 ,Z,Γ τ P ) , which requires that such contracts be independent from the agent's control model. Condition (ii) states the existence of a maximizer of the Hamiltonian H, defined in (4.1), that induces an admissible control model for the agent's problem. The existence of a maximizer is a standard condition in the verification argument in stochastic control theory, which allows one to identify the optimal control. As in [6], we shall see that, given C = τ P , π, U −1 A (Y Y 0 ,Z,Γ τ P ) , the process Y Y 0 ,Z,Γ is the dynamic value function of the agent's control problem, and is precisely expressed in the required Itô decomposition form (4.3). In particular, Y 0 = V E (C). As the principal problem restricts to those admissible contracts which induce existence for the agent's problem M E (C) = ∅, condition (ii) is necessary to characterize the agent's optimal response which needs to be plugged in the principal's problem V PE . A similar discussion applies to the American principal-agent problem. 
By Condition (ii) together with the continuity of h, we deduce from a classical measurable selection argument (see e.g. [2,3]), the existence of measurable maps u t (ω, y, z, γ) := ( α, β) t (ω, y, z, γ) which maximize H H t (ω, y, z, γ) = h t ω, y, z, γ, u t (ω, y, z, γ) . We next denote by U the collection of all such measurable maximizers, and we introduce the optimal feedback controls ν Y 0 ,Z,Γ t := u t X, Y Y 0 ,Z,Γ t , Z t , Γ t , which induce the following coefficients for the optimal output process λ t (ω, y, z, γ) := λ t ω, α t (ω, y, z, γ) , σ t (ω, y, z, γ) := σ t ω, β t (ω, y, z, γ) . By Condition (ii) of Definition 4.1, it follows that for all (Z, Γ) ∈ V and any u ∈ U, the following stochastic differential equation driven by a d-dimensional Brownian motion W X t = X 0 + t 0 σ r (X, Y Y 0 ,Z,Γ r , Z r , Γ r ) λ r (X, Y Y 0 ,Z,Γ r , Z r , Γ r )dr + dW r , t ≤ τ,(4.6) has at least one weak solution M Y 0 ,Z,Γ = ( P Y 0 ,Z,Γ , ν Y 0 ,Z,Γ ). Our main result is the following extension of Cvitanić, Possamaï, and Touzi [6] reduction result to the present random horizon context. Recall the notation ξ Y 0 ,Z,Γ : = U −1 A (Y Y 0 ,Z,Γ τ P ) for (Y 0 , Z, Γ) ranging in R × V. Theorem 4.2. Assume that V = ∅. Then, (i) V PE = sup Y 0 ≥R V PE (Y 0 ), where V PE (Y 0 ) := sup (τ P ,π)∈T ×Π (Z,Γ)∈V sup (P,ν)∈ M E τ P ,π,ξ Y 0 ,Z,Γ E P K P τ P U P τ P − ξ Y 0 ,Z,Γ + τ P 0 K P r U P (−π r )dr . Moreover, if (Y * 0 , Z * , Γ * , τ * , π * ) is a solution of the last optimal control problem, then the triple (τ * , π * , ξ Y * 0 ,Z * ,Γ * ) is an optimal contract for the European principal-agent problem. (ii) V PA = sup Y 0 ≥R V PA (Y 0 ), where, denoting h 0 := h Y 0 ,Z,Γ 0 := inf t ≥ 0 : Y Y 0 ,Z,Γ t ≤ U (ρ) , V PA (Y 0 ) := sup π∈Π (Z,Γ)∈V sup (P,ν)∈ M A (h 0 ,π) E P K P h 0 U P h 0 + h 0 0 K P r U P (−π r )dr . 
Moreover, if (Y_0^*, Z^*, Γ^*, π^*) is a solution of the last optimal control problem, then, denoting τ^* := h_0^{Y_0^*,Z^*,Γ^*}, the pair (τ^*, π^*) is an optimal contract for the American principal-agent problem. Remark 4.3. Once the main theorem above is proved, we can treat the principal's problem as a standard stochastic control problem, using dynamic programming arguments. If the coefficients of the principal's problem are Markovian, then the dynamic programming principle links the control problem to the HJB equation (see, e.g., the examples in Section 5). Otherwise, in the case where the coefficients are path-dependent, other tools such as backward stochastic differential equations (BSDEs) [9], second order BSDEs [18], backward stochastic PDEs [16] and path-dependent PDEs [20] can be used to characterize the value functions. The key argument for this reduction result is the following density property of the class of contracts C = (τ, π, ξ^{Y_0,Z,Γ}). Proposition 4.4. Let C = (τ, π, ξ) ∈ C_R^E. Then we may find Y_0^ε ≥ R and (Z^ε, Γ^ε) ∈ V such that, with ξ^ε := U_A^{−1}(Y_τ^{Y_0^ε,Z^ε,Γ^ε}), we have C^ε := (τ, π, ξ^ε) ∈ C_R^E, M̂_E(C^ε) = M̂_E(C), and ξ^ε = ξ, P̂-a.s., for all (P̂, ν̂) ∈ M̂_E(C). We postpone the proof of this result to the next section, and we use it now for the proof of Theorem 4.2 (i) and (ii). Proof of Theorem 4.2 (i). We organize the proof in two steps. We first establish the inequality V^PE ≥ V^PE(Y_0) by following the classical verification argument in stochastic control theory, and we next prove equality by using the density result of Proposition 4.4. Step 1. We first show that V^PE ≥ V^PE(Y_0) for all Y_0 ∈ R. Let (Z, Γ) ∈ V, and fix some stopping time τ_P and an optional process π satisfying the integrability condition in (2.8). The required inequality is a direct consequence of the following two steps. 1.a. We first verify that C^{Y_0,Z,Γ} = (τ_P, π, ξ^{Y_0,Z,Γ}) ∈ C, (P^{Y_0,Z,Γ}, ν^{Y_0,Z,Γ}) ∈ M̂_E(C^{Y_0,Z,Γ}) and Y_0 = V_E(C^{Y_0,Z,Γ}).
From the definition of Y Y 0 ,Z,Γ τ P in (4.3), it is clear that ξ Y 0 ,Z,Γ is an F τ P - measurable random variable. The integrability of Y Y 0 ,Z,Γ τ P = U A (ξ Y 0 ,Z,Γ ) follows from Definition 4.1 (i). For any M = (P, ν) ∈ M, it follows from a direct application of Itô's formula that K ν τ P Y Y 0 ,Z,Γ τ P = Y 0 + τ P 0 K ν r Z r · σ βr r dW P r − τ P 0 K ν r H r Y Y 0 ,Z,Γ r , Z r , Γ r dr − τ P 0 K ν r U A (π r )dr + τ P 0 K ν r − k νr r Y Y 0 ,Z,Γ r + Z r · σ βr r λ αr r + 1 2 Tr σ 2 r Γ r dr, where we used the simplifying notation ϕ u r := ϕ r (x, u) for ϕ = k, σ, λ. As (Z, Γ) ∈ V 0 , the stochastic integral · 0 K ν r Z r · σ βr r dW P r defines a martingale. By the definition of the agent's optimization criterion J E and the definition of h, we may write the last equation as J E M, C Y 0 ,Z,Γ = E P K ν τ P U A ξ Y 0 ,Z,Γ τ P + τ P 0 K ν r U A (π r ) − c r (ν r ) dr = Y 0 − E P τ P 0 K ν r H r Y Y 0 ,Z,Γ r , Z r , Γ r − h r Y Y 0 ,Z,Γ r , Z r , Γ r , ν r dr . (4.7) It follows by the definition of H that J E M, C Y 0 ,Z,Γ ≤ Y 0 , and thus V E C Y 0 ,Z,Γ ≤ Y 0 by the arbitrariness of M ∈ M. Finally, the equality J E P Y 0 ,Z,Γ , ν Y 0 ,Z,Γ , C Y 0 ,Z,Γ = Y 0 holds in (4.7) with the control (P Y 0 ,Z,Γ , ν Y 0 ,Z,Γ ) introduced in the admissibility condition (ii) of Definition 4.1. This shows that (P Y 0 ,Z,Γ , ν Y 0 ,Z,Γ ) ∈ M E C Y 0 ,Z,Γ = ∅, and therefore C Y 0 ,Z,Γ ∈ C. 1.b. We next show ( P, ν) ∈ M E C Y 0 ,Z,Γ if and only if H t (Y t , Z t , Γ t ) = h t (Y t , Z t , Γ t , ν t ), dt ⊗ P- a.e. on 0, τ P , i.e., the control process ν is a maximizer of the Hamiltonian on the support of P. It follows from (4.7) and the equality V E C Y 0 ,Z,Γ = Y 0 , established in Step 1.a, that we must have for all ( P, ν) ∈ M E C Y 0 ,Z,Γ that E P τ P 0 K ν r H r Y Y 0 ,Z,Γ r , Z r , Γ r − h r Y Y 0 ,Z,Γ r , Z r , Γ r , ν r dr = 0. By the definition of H in (4.1), this holds if and only if ν is a maximizer of H r Y Y 0 ,Z,Γ r , Z r , Γ r , dt ⊗ P-a.e. on 0, τ P . 
To summarize: for (τ P , π) ∈ T × Π, Y 0 ≥ R and (Z, Γ) ∈ V, we have that C Y 0 ,Z,Γ = (τ P , π, ξ Y 0 ,Z,Γ ) ∈ C, i.e., C Y 0 ,Z,Γ is an admissible contract, and M E (C Y 0 ,Z,Γ ) = ∅ as well as V E (C Y 0 ,Z,Γ ) = Y 0 . Therefore, it follows immediately that V PE ≥ sup Y 0 ≥R V PE (Y 0 ). Step 2. By Proposition 4.4, for any C = (τ P , π, ξ) ∈ C E R with M E = ∅, we may define a contract C ε = (τ P , π, ξ ε ) ∈ C E R , where ξ ε = U −1 A Y Y ε 0 ,Z ε ,Γ ε τ P for some (Z ε , Γ ε ) ∈ V, such that M E (C ε ) = M E (C) and ξ ε = ξ, P-a.s. for all ( P, ν) ∈ M E (C). Therefore, for each ( P, ν) ∈ M E (C) = M E (C ε ) we obtain that J P (C ε ) = sup ( P, ν)∈ M E (C ε ) E P K P τ P U P ( τ P − ξ ε ) + τ P 0 K P r U P (−π r )dr = sup ( P, ν)∈ M E (C) E P K P τ P U P ( τ P − ξ) + τ P 0 K P r U P (−π r )dr = J P (C). By Step 1, notice that, the agent's problem with the contract C ε can be explicitly solved and we obtain V A (C ε ) = Y ε 0 . By arbitrariness of C, we obtain that V PE ≤ sup Y 0 ≥R V PE (Y 0 ). In order to obtain a similar reduction result for the American principal-agent problem, we follow Sannikov's [22] idea by proceeding to a first reduction of the principal problem which allows to transform the corresponding agent problem into that of a European contract as no early exercise is optimal for the agent. Proof of Theorem 4.2 (ii). Similar to the proof of Theorem 4.2 (i), we proceed in three steps, following the classical verification argument in stochastic control theory. Step 1. We first prove that V PA ≥ sup Y 0 ≥R V PA (Y 0 ). Let Y 0 ≥ R, (Z, Γ) ∈ V, π ∈ Π, and h 0 := h Y 0 ,Z,Γ 0 = inf{t ≥ 0 : Y Y 0 ,Z,Γ ≤ U A (ρ)} ∈ [0, ∞] be as defined in the statement of the theorem, and consider the principal contract C := (h 0 , π, ρ). 
For M ∈ M and τ ≤ h 0 we have J A τ, M, C = E P K ν τ U A (ρ) + τ 0 K ν r U A (π r ) − c r (ν r ) dr ≤ E P K ν τ Y Y 0 ,Z,Γ τ + τ 0 K ν r U A (π r ) − c r (ν r ) dr = Y 0 − E P τ 0 K ν r H r − h r (., ν r ) Y Y 0 ,Z,Γ r , Z r , Γ r dr ≤ Y 0 . The last inequality is due to the definition of H. Moreover, as τ ≤ h 0 , it is clear that the only way to turn both inequalities above into equalities is to take M Y 0 ,Z,Γ = P Y 0 ,Z,Γ , ν Y 0 ,Z,Γ and τ = h 0 , P Y 0 ,Z,Γ -a.s., (4.8) where we use the notations of Definition 4.1, together with the condition that the set V is non-empty. Therefore V A (C) = Y 0 with optimal American agent response given by the pair h 0 , M Y 0 ,Z,Γ . By the same argument as in Step 1 of the proof of Theorem 4.2 (i), this provides the inequality V PA ≥ sup Y 0 ≥R V PA (Y 0 ). Step 2. In order to prove that equality holds, we introduce the dynamic version of the American agent problem for an arbitrary C A = (τ P , π): V A t (C A ) := ess sup τ ≥t, M∈M E P t K ν t,τ ∧τ P U A (ρ) + τ ∧τ p t K ν t,s U A (π s ) − c s (ν s ) ds , where K ν t,s := (K ν t ) −1 K ν s . Then define τ := inf t ≥ 0 : V A t ≤ U A (ρ) , where V A t = lim s↓t,s∈Q V A s . (4.9) Note that τ ≤ τ P . We claim and shall prove in Step 3 that τ is an optimal stopping time for the agent, i.e. V A 0 (C A ) = sup M∈M E P K ν τ U A (ρ) + τ 0 K ν s U A (π s ) − c s (ν s ) ds . (4.10) Therefore, we may reduce the principal to offer contracts of the form C A = ( τ , π), as her utility criterion is not changed by fixing τ P := τ , and the agent's problem reduces to V A C A = sup M∈M J A τ , M, C A = sup M∈M J E M, C , with C := C A , ρ . We have thus transformed the American agent problem into a stochastic control problem (without optimal stopping) as in the European agent context of Theorem 4.2 (i), and we may now continue by adapting the same argument as in Step 2 of the proof of Theorem 4.2 (i). 
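The stopping rule of (4.9)-(4.10) — stop the first time the dynamic value process touches the stopping reward — can be illustrated on a discrete toy model. The sketch below is entirely hypothetical (a ±1 random walk state, per-step discount d, a state-dependent running reward and zero stopping reward): it computes the value function by backward induction and then verifies, by exhaustive path enumeration, that stopping at the first hitting time of the region {V = reward} attains the value.

```python
import itertools

T, d, g = 12, 0.95, 0.0              # horizon, per-step discount, stopping reward
def u(x):                            # running reward at state x (hypothetical)
    return 0.05 * x

# backward induction for the value function V(t, x) of the stopping problem
V = {(T, x): g for x in range(-T, T + 1, 2)}
for t in range(T - 1, -1, -1):
    for x in range(-t, t + 1, 2):
        cont = u(x) + d * 0.5 * (V[(t + 1, x + 1)] + V[(t + 1, x - 1)])
        V[(t, x)] = max(g, cont)

# forward evaluation: stop the first time V(t, x) equals the reward g
total = 0.0
for steps in itertools.product((-1, 1), repeat=T):
    x, disc, payoff = 0, 1.0, 0.0
    for t in range(T + 1):
        if V[(t, x)] <= g + 1e-12:   # stopping region reached
            payoff += disc * g
            break
        payoff += disc * u(x)
        disc *= d
        x += steps[t] if t < T else 0
    total += payoff
avg = total / 2 ** T
assert V[(0, 0)] > 0 and abs(avg - V[(0, 0)]) < 1e-9
```

The exact match between the forward average and V(0, 0) is the discrete analogue of the optimality of τ̂ claimed in Step 3.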
Namely, Proposition 4.4 guarantees the existence of a contract C^ε = (τ̂, π, ξ^ε) ∈ C_R, where ξ^ε = U^{−1}_A(Y^{Y^ε_0,Z^ε,Γ^ε}_τ̂) for some (Z^ε, Γ^ε) ∈ V, such that M^E(C^ε) = M^E(C) and ξ^ε = ρ, P̂-a.s. for all (P̂, ν̂) ∈ M^E(C). Next, define the new contract C̃^ε := (τ̂^ε, π, ρ) where τ̂^ε := τ̂ ∧ inf{t ≥ 0 : Y^ε_t ≤ U_A(ρ)}, and we observe that for all (P̂, ν̂) ∈ M^E(C) = M^E(C^ε), we have τ̂^ε = τ̂, P̂-a.s., which is exactly the condition (4.8) required for the verification argument in Step 1 of the present proof. We continue by following exactly the same line of argument as in Step 2 of the proof of Theorem 4.2 (i), and we obtain the required equality. Step 3. We now complete the proof by showing that τ̂ in (4.9) is an optimal stopping time for the agent. First, by the definition of V^A, we have for any t' ≥ t that V^A_t(C^A) ≥ E^P_t[K^ν_{t,t'} V^A_{t'}(C^A)]. Therefore, K^ν_{0,t} V^A_t is a P-supermartingale for all (P, ν) ∈ M. Then, it is a classical result (see e.g. [14, Proposition 1.3.14]) that the right limit of the process V^A exists P-a.s. for all P ∈ P. In particular, the process V̄^A defined in (4.9) is right-continuous P-a.s. for all P ∈ P, and thus τ̂ is a stopping time. Further, let (P̂, ν̂) be an optimal control, so that V^A_t(C^A) = ess sup_{τ≥t} E^{P̂}_t[K^ν̂_{t,τ∧τ_P} U_A(ρ) + ∫_t^{τ∧τ_P} K^ν̂_{t,s}(U_A(π_s) − c_s(ν̂_s)) ds]. It then follows from the standard theory of optimal stopping that the optimal stopping time is equal to τ̂, P̂-a.s. Therefore, we obtain (4.10).

5 Examples

Sannikov [22]

This section reports our understanding of the model in Sannikov [22].
Given a European contract C = (τ, π, ξ) proposed by the principal, the agent has a nonnegative increasing strictly concave utility function U A and a nonnegative increasing convex cost function h, and is solving: sup α E P α e −rτ U A (ξ) + τ 0 e −rt U A (π t ) − h(α t ) dt , where X t = X 0 + t 0 α s ds + dW α s , t ≥ 0, P α -a.s., and, as in the previous example, the agent's effort α is an arbitrary progressively measurable process taking values in some subset A ⊆ R and satisfying E P 0 D P α |P 0 T = 1. The Hamiltonian is given by H(y, z, γ) = −ry + 1 2 Tr[γ] + H 0 (z), where H 0 (z) := sup a∈A az − h(a) , and we assume for simplicity that the supremum is attained by the unique optimal response a(z). Then, similar to the example from the previous section, the lump sum payment ξ promised at τ takes the form U A (ξ) = Y Y 0 ,Z τ = Y 0 + τ 0 Z t dX t + τ 0 rY t − H 0 (Z t ) − U A (π t ) dt, and Y represents the continuation utility of the agent. Remark 5.1. Before continuing, we make the crucial observation that the non-negativity condition on U A and h implies that Y ≥ 0. As the dynamics of the process Y are given by dY t = rY t + h • a(Z t ) − U A (π t ) dt + Z t dW a(Z) t , P a(Z) -a.s. under the optimal response of the agent, we see that 0 is an absorption point for the continuation utility with optimal effort a = 0. By the main reduction result of Theorem 4.2 we have V PE := sup Z∈V sup τ ∈T E P a(Z) τ 0 e −rt a(Z t ) − π t dt − e −rτ U −1 A Y R,Z τ , where dX t = a(Z t )dt + dW a(Z) t and dY t = rY t + h • a(Z t ) − U A (π t ) dt + Z t dW a(Z) t , P a(Z) -a.s. thus leading to a mixed stochastic control and optimal stopping problem with reward function upon stopping (or obstacle) v 0 := −U −1 A . 
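The map z ↦ H_0(z) = sup_a {az − h(a)} is a Legendre transform, and for a quadratic cost it is available in closed form. A small sanity check — the cost h(a) = a²/2 and the grid A = [0, 5] are hypothetical choices, not imposed by the model — compares a grid-search supremum with the closed form a(z) = z, H_0(z) = z²/2:

```python
def H0(z, h, grid):
    # grid-search evaluation of the Legendre transform sup_a { a z - h(a) }
    return max(a * z - h(a) for a in grid)

grid = [i / 1000.0 for i in range(5001)]        # a in [0, 5], step 1e-3
for z in (0.3, 1.0, 2.5):
    # quadratic cost: optimal response a(z) = z and H_0(z) = z^2 / 2
    assert abs(H0(z, lambda a: 0.5 * a * a, grid) - 0.5 * z * z) < 1e-3
    best = max(grid, key=lambda a: a * z - 0.5 * a * a)
    assert abs(best - z) < 1e-3
```

The same brute-force check works for any convex cost for which the optimal response a(z) is unique, as assumed in the text.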
By classical stochastic control theory, the HJB equation corresponding to this problem is

0 = min{ v − v_0 , rv − ryv' − sup_π [−π − U_A(π) v'] − sup_z [a(z) + h∘a(z) v' + ½ z² v''] }
  = min{ v − v_0 , r(v − yv') + inf_π [π + U_A(π) v'] − sup_a [a + h(a) v' + ½ γ(a)² v''] }, y ≥ 0,

by using the inverse optimal response function γ := a^{−1}. Finally, it follows from Remark 5.1 together with the definition of the principal's problem that the boundary condition at the left boundary of the domain is v(0) = 0. We are then reduced to the obstacle problem

0 = min{ v − v_0 , r(v − yv') + I(v') − J(v', v'') }, y > 0, and v(0) = 0, (5.1)

where, assuming further that U_A is C¹ with U_A'(0) = ∞ and U_A'(∞) = 0,

I(p) := [(U_A')^{−1} + p · U_A ∘ (U_A')^{−1}](−1/p) and J(p, q) := sup_{a∈A} { a + h(a) p + ½ γ(a)² q }. (5.2)

We also refer the interested reader to the recent work [19] for a more detailed analysis of this model.

An American contracting version of Sannikov [22]

In the context of the previous example, let the agent utility function be such that U_A(0) = 0. Given an American contract C = (τ_P, π), the agent problem is defined by:

V^A(τ_P, π) := sup_{τ,α} E^{P^α}[ ∫_0^{τ∧τ_P} e^{−rt} (U_A(π_t) − h(α_t)) dt ], where X_t = X_0 + ∫_0^t α_s ds + W^α_t, t ≥ 0, P^α-a.s.

The principal chooses the contract optimally by solving:

V^PA := sup_{τ_P, π : V^A(τ_P,π) ≥ U_A(R)} E^{P^α̂}[ ∫_0^{τ_P ∧ τ̂} e^{−rt} (α̂_t − π_t) dt ],

where (τ̂, α̂) denotes the optimal response of the agent to the proposed contract (τ_P, π). Applying the result of our main theorem, and following calculations similar to those of the previous example, we see that V^PA = sup_{Y_0 ≥ R} V_0(Y_0), where

V_0(Y_0) := sup_{Z,π} E[ ∫_0^{T_0} e^{−rt} (a(Z_t) − π_t) dt ],

where a is the maximizer of the Hamiltonian, as defined in the previous example, T_0 := inf{t > 0 : Y_t ≤ 0}, and the controlled state Y is defined by the dynamics:

dY_t = (rY_t + h∘a(Z_t) − U_A(π_t)) dt + Z_t dW^{a(Z)}_t, P^{a(Z)}-a.s.
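The function I in (5.2) collects the partial optimization inf_π {π + U_A(π) p} over the continuous payment. For the hypothetical choice U_A(π) = √π and a slope p < 0, the infimum is attained at π* = p²/4 with value I(p) = −p²/4, which can be confirmed both against the formula I(p) = [(U_A')^{−1} + p · U_A ∘ (U_A')^{−1}](−1/p) and by grid search:

```python
import math

p = -2.0                                   # slope v' < 0 (hypothetical test value)
UA = math.sqrt                             # agent utility U_A(pi) = sqrt(pi), assumed
UAp_inv = lambda y: 1.0 / (4.0 * y * y)    # inverse of U_A'(pi) = 1/(2 sqrt(pi))

# closed form: evaluate (U_A')^{-1} + p * U_A o (U_A')^{-1} at -1/p
pi_star = UAp_inv(-1.0 / p)
I_formula = pi_star + p * UA(pi_star)

# brute force: inf over a payment grid of pi + U_A(pi) p
grid = [i / 1000.0 for i in range(5001)]   # pi in [0, 5]
I_grid = min(pi + UA(pi) * p for pi in grid)

assert abs(pi_star - p * p / 4.0) < 1e-12          # argmin pi* = p^2/4 = 1
assert abs(I_formula - (-p * p / 4.0)) < 1e-12     # I(p) = -p^2/4 = -1
assert abs(I_grid - I_formula) < 1e-6
```

The first-order condition behind the closed form is 1 + U_A'(π*) p = 0, i.e. π* = (U_A')^{−1}(−1/p), which is exactly the evaluation point appearing in (5.2).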
By standard stochastic control theory, we see that the dynamic programming equation corresponding to this problem is

v(0) = 0, and r(v − yv') + I(v') − J(v', v'') = 0 on (0, ∞),

where I and J are defined in (5.2). Notice that the last equation differs from (5.1) by the absence of the obstacle constraint. However, it is shown in [19] that the two equations are equivalent, so that by their uniqueness result, the American contracting version of Sannikov coincides with the original Sannikov contracting problem.

An explicit example without optimal contract

This section illustrates the use of our main result in the context of the European contracting problem. In order to gain in simplicity and to favour explicit results, the following example intentionally violates the technical conditions of the general contracting problem. We refrain from giving a fully rigorous proof of the solution provided in the present example, and we shall point out how our main results may be extended to the present context. Suppose that the contract has no continuous payment component, and that the agent solves the simple problem, with τ := τ_P possibly taking the value τ = ∞ with positive probability:

sup_α E^{P^α}[ ξ − ½ ∫_0^τ α_t² dt ],

where α is any progressively measurable process which guarantees the existence of a weak solution P^α of the following SDE:

X_t = X_0 + ∫_0^t α_s ds + W^α_t, 0 ≤ t ≤ τ, P^α-a.s.

Clearly, this requires that E^{P^0}[dP^α/dP^0|_T] = 1, so that existence follows from the Girsanov theorem. In the present context, we observe that we also have uniqueness of such a weak solution. The Hamiltonian is given by H(y, z, γ) = ½ Tr[γ] + H_0(z), where H_0(z) := sup_{a∈R} {az − ½a²} = ½z², and the supremum is attained by the optimal response a(z) = z. In particular, the agent's optimal response is unique. In the present setting, the lump-sum payment ξ takes the form

ξ = Y^{Y_0,Z}_τ = Y_0 + ∫_0^τ Z_t dX_t − ∫_0^τ H_0(Z_t) dt = Y_0 + ∫_0^τ Z_t dX_t − ∫_0^τ ½ Z_t² dt.
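The reduced problem V_0(Y_0) above is a control problem for the continuation utility Y absorbed at 0. A rough Monte Carlo sketch — freezing the controls at constant values Z and π, with quadratic cost so that a(Z) = Z and h∘a(Z) = Z²/2, and U_A = √·, all hypothetical simplifications since the actual problem optimizes over Z and π — estimates the principal's payoff up to the absorption time T_0:

```python
import math, random

r, Z, pi, Y0 = 0.1, 0.5, 0.25, 0.5         # hypothetical constant controls
UA_pi, cost, a = math.sqrt(pi), 0.5 * Z * Z, Z
dt, T_max, N = 0.05, 40.0, 500
rng = random.Random(7)

total = 0.0
for _ in range(N):
    Y, t, payoff = Y0, 0.0, 0.0
    while t < T_max and Y > 0.0:           # T_0 = first time Y hits 0
        payoff += math.exp(-r * t) * (a - pi) * dt
        # dY = (r Y + h(a(Z)) - U_A(pi)) dt + Z dW under the optimal response
        Y += (r * Y + cost - UA_pi) * dt + Z * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    total += payoff
estimate = total / N
# sanity bounds: positive, and below the never-absorbed value (a - pi)/r
assert 0.0 < estimate < (a - pi) / r
```

With these parameter values the drift of Y at Y_0 is negative, so absorption occurs quickly and the estimate sits well below the unconstrained bound (a − π)/r.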
This representation may be proved by means of the standard dynamic programming principle satisfied by the agent's dynamic value process, together with appropriate transversality conditions satisfied by the stopping time τ and the admissible controls. This is in fact related to the corresponding backward SDE, which allows for possibly infinite stopping, see [8, Section 6.3] and [15, 21]. Given ξ = Y^{Y_0,Z}_τ, the agent's optimal control is α = Z, and V^E_0(τ, ξ) = Y_0. Then, the main reduction result of Theorem 4.2 applies and provides

V^PE = sup_{(τ,ξ)∈C^E_R} E^{P^{α*}}[ ∫_0^τ e^{−βt} dX_t − e^{−βτ} ξ ] = sup_{Z∈V} sup_{τ∈T} E^{P^Z}[ ∫_0^τ e^{−βt} Z_t dt − e^{−βτ} Y^{R,Z}_τ ],

where dX_t = Z_t dt + dW^Z_t and dY_t = ½ Z_t² dt + Z_t dW^Z_t, P^Z-a.s. By classical stochastic control theory, the HJB equation corresponding to this combined optimal control and optimal stopping problem is

0 = min{ v − v_0 , βv − sup_{z∈R} [½ z² (v' + v'') + z] }, with v_0(y) := −y, (5.3)
  = min{ v + y , βv + ½ (v' + v'')^{−1} }, with v' + v'' < 0,

where the supremum is attained at z(y) := −1/(v'(y) + v''(y)). Notice that the strict concavity of u together with βu + 1/(2u'') ≥ 0 imply that u > 0, and therefore u must be increasing. We may explore the region where the solution u possibly coincides with the obstacle u_0(s) = −s ln s:

{u = u_0} ⊆ { βu_0 + 1/(2u_0'') ≥ 0 } = { −βs ln s − s/2 ≥ 0 } = { s ≤ s* := e^{−1/(2β)} },

and we look for some C² function u_n ≥ u_0 satisfying

βu_n + 1/(2u_n'') = 0 on (s_n, ∞), with u_n(s_n) = u_0(s_n), u_n'(s_n) = u_0'(s_n). (5.8)

The last ODE is equivalent to 2βu_n + 1/u_n'' = 0 which, after multiplying by u_n' and direct integration and using the boundary condition in (5.8), provides

β u_n'(s)² = c_n − ln u_n(s), s ≥ s_n, where c_n := β u_0'(s_n)² + ln u_0(s_n). (5.9)

By the smooth fit condition, we have u_n'(s_n) = u_0'(s_n) ≥ u_0'(s*) = 1/(2β) − 1 > 0 for β ∈ (0, ½). We then search for an increasing candidate solution of the ODE

√β u_n'(s) = √(c_n − ln u_n(s)), s ≥ s_n,

and we argue that the sequence (u_n)_n is increasing.
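The closed-form solution of (5.9)-(5.10) can be checked numerically: integrate the first-order ODE u'(s) = √((c_n − ln u)/β) forward and compare with the Γ(1/2, 1) representation, using that the Γ(1/2, 1) distribution function is F(x) = erf(√x). The parameter values below (β = 0.25 and the starting point s_n = 0.05 < s* = e^{−1/(2β)}) are an arbitrary test choice.

```python
import math

beta = 0.25
s_star = math.exp(-1.0 / (2.0 * beta))      # s* = e^{-1/(2 beta)} ~ 0.135
s_n = 0.05                                  # starting point, s_n < s*
u0 = -s_n * math.log(s_n)                   # u_0(s_n), obstacle u_0(s) = -s ln s
u0p = -math.log(s_n) - 1.0                  # u_0'(s_n), smooth-fit slope
c = beta * u0p ** 2 + math.log(u0)          # c_n from (5.9)

def F(x):
    # CDF of the Gamma(1/2, 1) distribution: F(x) = erf(sqrt(x))
    return math.erf(math.sqrt(x))

def rhs(u):
    # first-order form of (5.9): u'(s) = sqrt((c_n - ln u) / beta)
    return math.sqrt((c - math.log(u)) / beta)

# classical RK4 integration from (s_n, u_0(s_n)) up to s = 0.25
s, u, h = s_n, u0, 1e-4
while s < 0.25 - 1e-12:
    k1 = rhs(u); k2 = rhs(u + 0.5 * h * k1)
    k3 = rhs(u + 0.5 * h * k2); k4 = rhs(u + h * k3)
    u += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    s += h

# closed form (5.10): s - s_n = e^{c_n} sqrt(beta pi) [F(beta u_0'(s_n)^2) - F(c_n - ln u(s))]
lhs = s - s_n
rhsval = math.exp(c) * math.sqrt(beta * math.pi) * (F(beta * u0p ** 2) - F(c - math.log(u)))
assert s_n < s_star and abs(lhs - rhsval) < 1e-5
```

The quantity c_n − ln u decreases along the integration but stays strictly positive on this range, consistently with the solution being defined only up to the finite point s̄_n of (5.11).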
Indeed, u_n > u_0 on (s_n, s̄_n] because u_0''(s_n) < u_n''(s_n), as s_n < s*. Then, u_{n+1}(s_n) > u_n(s_n) and, by standard comparison of solutions of the ODE, we see that u_{n+1} > u_n on (s_{n+1}, ∞). Direct integration of this equation leads to the explicit representation (5.10). Consequently, there exists a strictly concave increasing function u on R_+ such that u_n → u, pointwise and uniformly on compact subsets of R_+. Figure 1 shows a numerical result for u and u_0 with β = 0.05. Finally, by following a classical verification argument, we may show that the solution of the optimal control-stopping problem is given by the feedback control ẑ_s = z(Y_s) and the stopping time τ̂ := inf{t > 0 : Y_t = −∞} = ∞, with the optimal controlled dynamics

Y_t = R + ∫_0^t z(Y_s) dX_s − ∫_0^t ½ z(Y_s)² ds,

where z is defined in (5.4), and the value of the problem is v(R). See Figure 2 for the numerical result for z. We conclude that in this example there is no optimal contract with finite terminal time.

6 Density of revealing contracts

For (t, ω) ∈ [0, τ], define Σ_t(ω, b) := (σ_t σ_t^T)(ω, b) and Σ_t(ω) := {Σ_t(ω, b) ∈ S^+_d(R) : b ∈ B}. We also introduce the inverse map which assigns to every squared diffusion Σ ∈ Σ_t(ω) the corresponding set of generating controls B_t(ω, Σ) := {b ∈ B : (σ_t σ_t^T)(ω, b) = Σ}. This allows us to isolate the partial maximization with respect to the squared diffusion in the Hamiltonian H in (4.1):

H_t(ω, y, z, γ) = sup_{Σ∈Σ_t(ω)} { F_t(ω, y, z, Σ) + ½ Tr[Σγ] }, where F_t(ω, y, z, Σ) := sup_{(a,b)∈A×B_t(ω,Σ)} { −c_t(ω, a, b) − k_t(ω, a, b) y + σ_t(ω, b) λ_t(ω, a) · z }.

We see that 2H is the convex conjugate of −2F. Let Σ_t(ω, b)^{1/2} denote the corresponding square root and consider

X_t = X_0 + ∫_0^t Σ_r(ω, β_r)^{1/2} dW_r. (6.1)

Clearly, any weak solution (P, β) of (6.1) is also a solution of (2.3).
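The split of the Hamiltonian into a partial maximization F at fixed squared diffusion, followed by a maximization over Σ, is an iterated supremum: since the term ½ Tr[Σγ] is constant on each set of generating controls B_t(ω, Σ), maximizing in two stages gives the same value as maximizing over (a, b) directly. This can be checked on a toy discrete control set (all numerical values below are hypothetical):

```python
# hypothetical discrete control sets and coefficients
A = [0.0, 0.5, 1.0]                  # drift controls a
B = [0.4, 0.8, 1.2]                  # volatility controls b
sigma = lambda b: b
c = lambda a, b: 0.5 * a * a + 0.1 * b
k = 0.1
lam = lambda a: a
y, z, g = 1.0, 0.7, -0.3             # evaluation point (y, z, gamma)

def integrand(a, b):
    # -c - k y + sigma lambda . z + (1/2) sigma^2 gamma
    return -c(a, b) - k * y + sigma(b) * lam(a) * z + 0.5 * sigma(b) ** 2 * g

H_direct = max(integrand(a, b) for a in A for b in B)

def F(S):
    # partial maximization at fixed squared diffusion Sigma = S
    return max(-c(a, b) - k * y + sigma(b) * lam(a) * z
               for a in A for b in B if abs(sigma(b) ** 2 - S) < 1e-12)

H_split = max(F(S) + 0.5 * S * g for S in sorted({sigma(b) ** 2 for b in B}))
assert abs(H_direct - H_split) < 1e-12
```

In the continuous setting the same identity underlies the convex-conjugacy relation between 2H and −2F stated above.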
Let P o := P o ∈ M 1 + (Ω) : (P o , β) is a weak solution of (2.3) for some β , and notice that for any weak solution (P o , β) of (6.1) we have that for P o -almost every ω ∈ Ω σ 2 t (ω) ∈ Σ t (ω) and β t (ω) ∈ B t ω, σ 2 t (ω) . For any fixed diffusion coefficient, there is a one-to-one correspondence between the solutions of (2.1) and (2.3) through Girsanov's theorem. Define Notice that we have a one-to-one correspondence between the set of control models M and the set M o by means of Girsanov's theorem. We may rewrite the agent's problem U o (P o ) := ν = (α, β), F-optional: α t (ω) ∈ A(ω), β t (ω) ∈ B t ω, σ 2 t (ω) on R + , P o -a.V E (C) = sup P o ∈P o V E (C, P o ) with V E (C, P o ) := sup ν∈U o (P o ) E P ν K ν τ U A (ξ) + τ 0 K ν r U A (π r ) − c r (ν r ) dr . where the measure P ν is defined by the Girsanov transformation dP ν dP o Ft = E · 0 λ r (α r ) · dW r t , t ≥ 0. We now provide a representation of the agent's value function by means of second-order backward SDEs (2BSDEs) as introduced by Soner, Touzi and Zhang [23]. We apply our recent development of 2BSDE with random horizon and without the regularity conditions [15], based on the work of Possamaï, Tan and Zhou [18]. Given a final payment ξ, we consider the 2BSDE Y t∧τ = U A (ξ) + τ t∧τ F s (Y s , Z s , σ 2 s ) + U A (π s )E P K t 2 ∧τ F +,P t 1 ∧τ , where P o + (σ, P) := h>0 P o (σ + h) ∧ τ, P , P o (σ, P) := P ∈ P o : P = P on F σ . The definition of 2BSDE here is slightly different from that in [15]: the nondecreasing process K is assumed to be aggregated, i.e., K is given as a unique process, and not as a family of processes indexed by P o . Indeed, in general a family of processes K P o P o ∈P o is given through a nonlinear Doob-Meyer or optional decomposition theorem, applied under each P o ∈ P o . 
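The one-to-one correspondence between M and M^o rests on Girsanov's theorem: expectations under the drift-changed measure can be computed under the driftless one by weighting with the stochastic exponential E(∫ λ dW). A one-step sanity check — constant λ, horizon T = 1, hypothetical values — verifies that the weighted driftless average of W_T recovers the drifted mean λT:

```python
import math, random

lam, T, N = 0.3, 1.0, 200000
rng = random.Random(123)

total = 0.0
for _ in range(N):
    w = math.sqrt(T) * rng.gauss(0.0, 1.0)             # W_T under the driftless P^o
    density = math.exp(lam * w - 0.5 * lam * lam * T)  # dP^nu/dP^o = E(lam W)_T
    total += density * w                               # weighted payoff g(W) = W_T
estimate = total / N
# under P^nu the canonical process has drift lam, so E[X_T] = lam * T
assert abs(estimate - lam * T) < 0.03
```

This is exactly the mechanism by which the measures P^ν are defined from P^o in the agent's problem above.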
Under the usual set-theoretic Zermelo-Fraenkel set theory (ZFC) framework and the continuum hypotheses, as in Nutz [17], the stochastic integral t 0 Z s · dX s can be defined pathwisely on Ω without the need for exclusion of any null set and therefore does not depend on P o . Consequently, K does not depend on P o . In other words, K P o P o ∈P o can be aggregated into the resulting medial limit K, i.e., K P o = K, P o -a.s. for all P o ∈ P o . Proposition 6.2. For all C ∈ C the 2BSDE (6.2) has a unique solution. Proof. For (t, ω) ∈ R + × Ω with t ≤ τ (ω) we introduce the dynamic versions M o (t, ω) and P o (t, ω) of the sets M o and P o by considering the SDE (6.1) on t, τ starting at time t from the path ω ∈ Ω. (i). We first show that the family P o (t, ω) : (t, ω) ∈ 0, τ is saturated, i.e., for all P o 1 ∈ P o (t, ω) we have P o 2 ∈ P o (t, ω) for every probability measure P o 2 ∼ P o 1 such that X is a P o 2local martingale. To verify this, notice that the equivalence between P o 1 and P o 2 implies that the quadratic variation of X is not changed by passing from P o 1 to P o 2 . As X is a P o 2 -local martingale, it follows that if (P o 1 , β) ∈ M o (t, ω), then (P o 2 , β) ∈ M o (t, ω). (ii). We next verify that the generator F s (Y s , Z s , σ 2 s ) + U A (π s ) satisfies the conditions of Lipschitz-continuity, monotonicity, and integrability. For all (t, ω) ∈ 0, τ and Σ ∈ Σ t (ω), |F t (ω, y, z, Σ) − F t (ω, y , z , Σ)| ≤ k t ∞ |y − y | + λ t ∞ sup b∈Bt(ω,Σ) σ t (ω, b) (z − z ) = k t ∞ |y − y | + λ t ∞ Σ 1 2 (z − z ) , for (y, z), (y , z ) ∈ R × R d , and (y − y ) F t (ω, y, z, Σ) − F t (ω, y , z, Σ) ≤ (y − y ) sup for y, y ∈ R. As k, σ, λ are bounded, the generator is Lipschitz-continuous in (y, z) and monotone in y. 
Notice that For f 0 s (ω) := F s (ω, 0, 0, σ 2 s ) + U (π s ) and f 0,t,ω s (ω ) := F t+s ω ⊗ t ω , 0, 0, σ 2 s (ω ) + U A π t+s (ω ⊗ t ω ) = sup (a,b)∈A×B t+s (ω⊗tω , σ 2 s (ω )) − c t+s (ω ⊗ t ω , a, b) + U A π t,ω s (ω ) we obtain for τ = τ t,ω − t that E P o (t,ω) τ 0 e 2ρr f 0,t,ω The dynamic programming requirements of [18, Assumption 2.1] and [15, Lemma 6.6] follow from the more general results given in El Karoui and Tan [10,11]. Finally, as ξ satisfies the integrability condition (2.8), the required well-posedness result is a direct consequence of [15,Theorem 3.3]. Now, we have the relation of the agent's problem and 2BSDE. Proposition 6.3. Let (Y, Z, K) be the solution of the 2BSDE (6.2). Then, we have V E (C) = sup P o ∈P o E P o [Y 0 ]. Moreover, ( P, ν) ∈ M E (C) if and only if • ν is a maximizer in the definition of F (Y, Z, σ 2 ), dt ⊗ P-a.e.; • K τ = 0, P-a.s. where for all P o ∈ P o , (Y P o , Z P o ) is the solution of the following BSDE under P o : where c ν r := c r (ν r ) and similar notation apply to k ν , σ β , λ α . Let P ν be the probability measure equivalent to P o such that Y P o 0 = U A (ξ) + τ 0 F r Y P o r , Z P o r , σ 2 r + U A (π r ) dr − τ 0 Z P o r · dX r −dP ν dP o Ft = E · 0 λ α r dW r t , t ≥ 0, where W is a Brownian motion under P o . By Itô's formula K ν r U A (π r ) − c ν r dr F + 0 , P o -a.s. Observe that the affine generators in (6.4) are equi-Lipschitz by boundedness of k, σ, α, and there is an ε-maximizer ν 0 ∈ U o (P o ) for all ε > 0 which obviously induces a weak solution of the corresponding SDE by Girsanov's theorem. This means that the conditions of [9, Corollary 3.1] are satisfied and provides a representation of Y P o 0 as a stochastic control representation for all P o ∈ P o as Y P o 0 = P o ess sup ν∈U o (P o ) Y P o ,ν 0 = P o ess sup ν∈U o (P o ) E P ν K ν τ U A (ξ) + τ 0 K ν r U A (π r ) − c ν r dr F + 0 , P o -a.s. (6.5) Then, for all P o ∈ P o , we obtain P o -a.s. 
Y_0 = P^o-ess sup_{(P',ν') ∈ P^o_+(0,P^o) × U^o(P')} E^{P'^{ν'}}[ K^{ν'}_τ U_A(ξ) + ∫_0^τ K^{ν'}_r (U_A(π_r) − c^{ν'}_r) dr | F^+_0 ] = P^o-ess sup_{(P',ν') ∈ M^o, P' = P^o on F^+_0} E^{P'^{ν'}}[ K^{ν'}_τ U_A(ξ) + ∫_0^τ K^{ν'}_r (U_A(π_r) − c^{ν'}_r) dr | F^+_0 ]. By similar arguments as in the proof of [18, Lemma 3.5], we may show that the family

{ E^{P'^{ν'}}[ K^{ν'}_τ U_A(ξ) + ∫_0^τ K^{ν'}_r (U_A(π_r) − c^{ν'}_r) dr | F^+_0 ], (P', ν') ∈ M^o }

is upward directed. Therefore, we may conclude that

sup_{P^o∈P^o} E^{P^o}[Y_0] = sup_{P^o∈P^o} E^{P^o}[ P^o-ess sup_{(P',ν')∈M^o, P'=P^o on F^+_0} E^{P'^{ν'}}[ K^{ν'}_τ U_A(ξ) + ∫_0^τ K^{ν'}_r (U_A(π_r) − c^{ν'}_r) dr | F^+_0 ] ] = sup_{P^o∈P^o} sup_{ν∈U^o(P^o)} E^{P^ν}[ K^ν_τ U_A(ξ) + ∫_0^τ K^ν_r (U_A(π_r) − c^ν_r) dr ] = sup_{P^o∈P^o} V^E(C, P^o) = V^E(C).

Definition 2.3. (i) We denote by T the collection of all stopping times τ satisfying lim_{n→∞} E^P[1_{τ≥n}] = 0. (2.7)

Introducing the function u(s) := s v(ln s), for s > 0, we compute that u'(s) = (v + v')(ln s) and u''(s) = (1/s)(v' + v'')(ln s), thus reducing the last ODE to

0 = min{ u − u_0 , βu + 1/(2u'') }, with u'' < 0, and u_0(s) := −s ln s, (5.5)

searching for a solution of (5.5) of the form

u_n(s) = 1_{s≤s_n} u_0(s) + 1_{s>s_n} ū(s), for some s_n ∈ (0, s*]. (5.7)

Figure 1: the functions u and u_0. Figure 2: the optimal control z(y).

s − s_n = √β ∫_{u_0(s_n)}^{u_n(s)} (c_n − ln t)^{−1/2} dt = e^{c_n} √(βπ) ∫_{c_n − ln u_n(s)}^{c_n − ln u_0(s_n)} γ(t) dt, s ≥ s_n,

where γ(t) := e^{−t}/√(πt) is the density function of the Γ(1/2, 1) distribution. Denoting by F the corresponding cumulative distribution function, and recalling that c_n − ln u_0(s_n) = β u_0'(s_n)², we see that

ln u_n(s) = c_n − F^{−1}( F(β u_0'(s_n)²) − (s − s_n)/(e^{c_n} √(βπ)) ), s ∈ [s_n, s̄_n), (5.10)

where s̄_n is the maximum value of s such that the last equation has a solution:

s̄_n := s_n + e^{c_n} √(βπ) F(β u_0'(s_n)²), and u_n(s̄_n) = e^{c_n}, u_n'(s̄_n) = 0. (5.11)

At this point, we observe that s̄_n < ∞, so that the maximal increasing solution of the ODE started from an arbitrary s_n ∈ (0, s*] is only defined up to the finite point s̄_n.
However, if we choose a sequence s_n converging to zero, then u_0'(s_n) → ∞ and c_n → ∞, so that s̄_n → ∞. For this reason, in order to construct a solution of the ODE on the positive real line, we now set s_n := 1/n, and we extend u_n to R_+ by u_n(s) := u_n(s̄_n) = e^{c_n} for s ≥ s̄_n, for all n ≥ 1; the convergence of the sequence (u_n)_n is uniform on compact sets by the Dini theorem. The limiting function satisfies u(0) = 0, u'(0+) = ∞, u(∞) = ∞, u'(∞) = 0, and u > u_0, βu + 1/(2u'') = 0 on (0, ∞), and therefore induces the required classical solution v(y) = e^{−y} u(e^y) of the dynamic programming equation (5.3).

…-a.s., and M^o := {(P^o, ν) : P^o ∈ P^o and ν ∈ U^o(P^o)}.

… ≤ sup_{(a,b)∈A×B_t(ω,Σ)} (−k_t(ω, a, b)) (y − y')², and F_t(ω, 0, 0, Σ) = sup_{(a,b)∈A×B_t(ω,Σ)} {−c_t(ω, a, b)}.

Proof. (i). By [15, Proposition 5.2], the solution of the 2BSDE (6.2) can be represented as the supremum of the solutions of BSDEs:

Y_0 = P^o-ess sup_{P' ∈ P^o_+(0,P^o)} Y^{P'}_0, P^o-a.s. for all P^o ∈ P^o, (6.3)

…, P^o-a.s., with a càdlàg (F^{+,P^o}, P^o)-martingale M^{P^o} orthogonal to X. For all (P^o, ν) ∈ M^o, consider the linear BSDE with …

(ii). Recall that we have a one-to-one correspondence between the set of control models M and the set M^o. From (i), (P̂^o, ν̂) ∈ M^o is optimal if and only if V^E(C) = E^{P̂^o}[Y_0] = E^{P̂^ν̂}[Y_0]. Consider M̂^o = (P̂^o, ν̂) and

J^E(M̂^o, C) = E^{P̂^ν̂}[ K^ν̂_τ U_A(ξ) + ∫_0^τ K^ν̂_r (U_A(π_r) − c^ν̂_r) dr ].

… ds − dK_s, P^o-a.s., (6.2) for each P^o ∈ P^o.

Definition 6.1. For 1 < p < q and −μ ≤ η < ρ, the process (Y, Z, K) ∈ D^p_{η,τ}(P^o, F^{+,P^o}) × H^p_{η,τ}(P^o, F^{P^o}) × I^p_{η,τ}(P^o, F^{P^o}) is the solution of the 2BSDE (6.2) if:

• for each P^o ∈ P^o, (Y, Z, K) satisfies (6.2) P^o-a.s.;

• the nondecreasing process K satisfies the minimality condition: for all P^o ∈ P^o,

K_{t_1∧τ} = P^o-ess inf_{P' ∈ P^o_+(t_1∧τ, P^o)} E^{P'}[ K_{t_2∧τ} | F^{+,P'}_{t_1∧τ} ].

… ∫_{t∧τ}^τ Z_s · dX_s + ∫_{t∧τ}^τ … Using Itô's formula and (6.2), we obtain that … Therefore, (P̂^o, ν̂) is optimal if and only if ν̂ is a maximizer in the definition of F, dt ⊗ P̂^o-a.s., and K_τ = 0, P̂^o-a.s.

Proof of Proposition 4.4. Let C = (τ, π, ξ) ∈ C^E_R.
By definition, C ∈ C, M^E(C) ≠ ∅ and V^E(C) ≥ R. Consider the 2BSDE (6.2) with terminal condition ξ. Notice that the integrability conditions, Assumption 2.4, and Definition 2.3 imply that the 2BSDE admits a unique solution (Y, Z, K), for p ∈ (1, q). By Proposition 6.3, we have K_τ = 0, P̂-a.s., for every (P̂, ν̂) ∈ M^E(C). We fix some ε > 0 and define the absolutely continuous approximation K^ε of K. We now define the process Y^ε. We may verify that (Y^ε, Z, K^ε) solves the 2BSDE (6.2) with terminal condition ξ^ε := Y^ε_τ and generator F(y, z, σ̂²) + U_A(π). Indeed, as in the proof of the stability of SDEs, since K^ε_τ ≤ K_τ and the norms of K and Y are bounded, we may prove that ξ^ε satisfies the integrability condition. It follows by (6.6) that K^ε satisfies the required minimality condition. By a priori estimates we obtain that

‖Y^ε‖_{D^{p'}_{η',τ}(P^o)} + ‖Z‖_{H^{p'}_{η',τ}(P^o)} < ∞, for p' ∈ (1, p) and η' ∈ [−μ, η). (6.8)

We observe that a probability measure P̂ satisfies K_τ = 0, P̂-a.s. if and only if it satisfies K^ε_τ = 0, P̂-a.s. Define C^ε := (τ, π, ξ^ε). As K^ε = K = 0, P̂-a.s., for any (P̂, ν̂) ∈ M^E(C), we have Y^ε = Y, P̂-a.s., and in particular ξ^ε = ξ, P̂-a.s. Since, by Proposition 6.3, ν̂ is a maximizer in the definition of F(Y, Z, σ̂²), ν̂ is also a maximizer in the definition of F(Y^ε, Z, σ̂²), dt ⊗ P̂-a.e., which again implies by Proposition 6.3 that (P̂, ν̂) ∈ M^E(C^ε). The reverse implication also holds. For (t, ω, y, z) ∈ [0, τ] × R × R^d, notice that the map under consideration is surjective on (0, ∞). Indeed, it is nonnegative by the definition of H and F, convex, continuous on the interior of its domain, and coercive by the boundedness of λ, σ, k. Let K̇^ε denote the density of the absolutely continuous process K^ε with respect to the Lebesgue measure. The continuity allows us to use the classical measurable selection to find an F-predictable process Γ^ε matching the density K̇^ε through this map. For K̇^ε_t > 0, this is a consequence of surjectivity.
In the case that K̇^ε_t = 0, as M^E(C) = M^E(C^ε) ≠ ∅, it follows from Proposition 6.3 that Γ^ε_t can be chosen arbitrarily, for instance Γ^ε_t = 0. Substituting in (6.7), we see that the representation of Y^ε takes the required form (4.3). It follows from (6.8) that the control process (Z, Γ^ε) satisfies the integrability condition required in Definition 4.1 (i). By the same argument as in Step 1.b in the proof of Theorem 4.2 (i), we obtain the corresponding identity for all (P̂, ν̂) ∈ M^E(C^ε). Consequently, the requirement of Definition 4.1 (ii) is satisfied, and therefore (Z, Γ^ε) ∈ V and C^ε ∈ C.

References

[1] R. Aid, D. Possamaï, and N. Touzi. A principal-agent model for pricing electricity volatility demand. Working paper, 2019.
[2] V. E. Beneš. Existence of optimal strategies based on specified information, for a class of stochastic decision problems. SIAM J. Control, 8:179-188, 1970.
[3] V. E. Beneš. Existence of optimal stochastic control laws. SIAM J. Control, 9:446-472, 1971.
[4] P. Bolton and M. Dewatripont. Contract theory. MIT Press, 2005.
[5] J. Cvitanić, D. Possamaï, and N. Touzi. Moral hazard in dynamic risk management. Management Science, 63(10):3328-3346, 2017.
[6] J. Cvitanić, D. Possamaï, and N. Touzi. Dynamic programming approach to principal-agent problems. Finance Stoch., 22(1):1-37, 2018.
[7] J. Cvitanić, X. Wan, and J. Zhang. Principal-agent problems with exit options. The B.E. Journal of Theoretical Economics, 8(1):1-43, 2008.
[8] J. Cvitanić and J. Zhang. Contract theory in continuous-time models. Springer Finance. Springer, Heidelberg, 2013.
[9] N. El Karoui, S. Peng, and M. C. Quenez. Backward stochastic differential equations in finance. Math. Finance, 7(1):1-71, 1997.
[10] N. El Karoui and X. Tan. Capacities, measurable selection and dynamic programming. Part I: Abstract framework. Preprint, 2013.
[11] N. El Karoui and X. Tan. Capacities, measurable selection and dynamic programming. Part II: Application in stochastic control problems. Preprint, 2015.
[12] B. Holmström and P. Milgrom. Aggregation and linearity in the provision of intertemporal incentives. Econometrica, 55(2):303-328, 1987.
[13] R. Karandikar. On pathwise stochastic integration. Stochastic Processes and their Applications, 57(1):11-18, 1995.
[14] I. Karatzas and S. E. Shreve. Brownian motion and stochastic calculus, volume 113 of Graduate Texts in Mathematics. Springer-Verlag, 1991.
[15] Y. Lin, Z. Ren, N. Touzi, and J. Yang. Second order backward SDE with random terminal time. Electronic Journal of Probability, 25:1-43, 2020.
[16] M. Mania and R. Tevzadze. Backward stochastic PDEs related to the utility maximization problem. Georgian Math. J., 17(4):705-740, 2010.
[17] M. Nutz. Pathwise construction of stochastic integrals. Electron. Commun. Probab., 17(24):1-7, 2012.
[18] D. Possamaï, X. Tan, and C. Zhou. Stochastic control for a class of nonlinear kernels and applications. Annals of Probability, 46(1):551-603, 2018.
[19] D. Possamaï and N. Touzi. Is there a Golden Parachute in Sannikov's principal-agent problem? Preprint, 2020.
[20] Z. Ren, N. Touzi, and J. Zhang. An overview of viscosity solutions of path-dependent PDEs. In Stochastic analysis and applications 2014, volume 100 of Springer Proc. Math. Stat., pages 397-453. Springer, Cham, 2014.
[21] M. Royer. BSDEs with a random terminal time driven by a monotone generator and their links with PDEs. Stoch. Stoch. Rep., 76(4):281-307, 2004.
[22] Y. Sannikov. A continuous-time version of the principal-agent problem. Rev. Econom. Stud., 75(3):957-984, 2008.
[23] H. Soner, N. Touzi, and J. Zhang. Wellposedness of second order backward SDEs. Probability Theory and Related Fields, 153(1-2):149-190, 2012.
{'fraction_non_alphanumeric': 0.09016789295758643, 'fraction_numerical': 0.020695434788784408, 'mean_word_length': 3.0253287592910234, 'pattern_counts': {'":': 0, '<': 16, '<?xml version=': 0, '>': 27, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 183, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': "We consider a general formulation of the random horizon principal-agent problem with a continuous payment and a lump-sum payment at termination. In the European version of the problem, the random horizon is chosen solely by the principal with no other possible action from the agent than exerting effort on the dynamics of the output process. We also consider the American version of the contract, where the agent can also quit by optimally choosing the termination time of the contract. Our main result reduces such non-zero-sum stochastic differential games to appropriate stochastic control problems which may be solved by standard methods of stochastic control theory. This reduction is obtained by following the Sannikov [22] approach, further developed in[6]. We first introduce an appropriate class of contracts for which the agent's optimal effort is immediately characterized by the standard verification argument in stochastic control theory. We then show that this class of contracts is dense in an appropriate sense, so that the optimization over this restricted family of contracts represents no loss of generality. The result is obtained by using the recent well-posedness result of random horizon second-order backward SDE in[15].MSC 2010 Subject Classification: 91B40, 93E20Key words: Moral hazard, first-best and second-best contracting, second-order backward SDE, random horizon.", 'arxivid': '2002.10982', 'author': ['Yiqing Lin ', 'Zhenjie Ren ', 'Nizar Touzi ', 'Junjian Yang '], 'authoraffiliation': [], 'corpusid': 246706414, 'doi': '10.1137/20m1321620', 'github_urls': [], 'n_tokens_mistral': 25787, 'n_tokens_neox': 23006, 'n_words': 14861, 'pdfsha': '3425d3b7fe522efbebe485143c2009ef14b61060', 'pdfurls': ['https://arxiv.org/pdf/2002.10982v2.pdf'], 'title': ['Random horizon principal-agent problems *', 'Random horizon principal-agent problems *'], 'venue': []}
arxiv
Spin-orbit entanglement in time evolution of radial wave packets in hydrogenic systems

Marcin Turek* and Piotr Rozmej
Institute of Physics, Maria Curie-Skłodowska University, 20-031 Lublin, Poland
Institute of Physics, University of Zielona Góra, 65-246 Zielona Góra, Poland
* presenting author

14 Apr 2004

Abstract: Time evolution of radial wave packets built from the eigenstates of the Dirac equation for hydrogenic systems is considered. Radial wave packets are constructed from states of different quantum number n and the same lowest angular momentum. In general they exhibit a kind of breathing motion with dispersion and (partial) revivals. Calculations show that for some particular preparations of the wave packet one can observe interesting effects in the spin motion, coming from the inherent entanglement of the spin and orbital degrees of freedom. These effects manifest themselves through oscillations in the mean values of spin operators and through changes of the spatial probability density carried by the upper and lower components of the wave function. It is also shown that the characteristic time scale of the predicted effects (called T_ls) is much smaller for radial wave packets than in other cases, reaching values comparable to (or even less than) the time scale of the wave packet revival.

Introduction

For more than fifteen years large efforts [1-7] have been made toward a detailed understanding of the quantum dynamics of wave packets in simple systems such as H atoms and hydrogenic atoms, as well as in simple molecules. These theoretical investigations have resulted in a good understanding of such subtle interference effects as the collapse, revivals, and fractional revivals of wave packets created in a variety of systems.
With the development of experimental techniques allowing one to tailor many different desired initial states, many signatures of these phenomena have been observed. A rich survey of early studies is given in the review papers [8]. Radial wave packets (RWP) can be excited relatively easily by short laser pulses [9]. At the beginning their motion resembles a classical one; during later stages of the evolution the packets undergo dispersion and (partial) revivals as well as fractional revivals [10,11]. The construction of the RWP in this paper is similar to that presented in the articles cited above. However, we include in our investigation the spin degrees of freedom, whose motion for an RWP can manifest itself much earlier than for other kinds of wave packets. The most natural framework for our considerations is relativistic wave packets built from the solutions of the Dirac equation for hydrogenic systems. Such an approach was used by us in [12], where we investigated circular wave packets, and in [13], where we considered elliptic wave packets, both in hydrogenic systems. The results of [12-14] show that the contributions of the small components of the wave functions are negligibly small. Therefore, for the relevant time scales of the motion one can safely use an approximation in which the time evolution is calculated with non-relativistic wave functions and relativistic energies. Such an approximation is simpler both for analytical presentation and for numerical calculations, and we will use it throughout the paper.

Construction of the wave packet

Assume that just after creation the radial wave packet has the form

\Psi_r(t=0) = \sum_n w_n \, |n\,l\,l\rangle \otimes \binom{a}{b}, \qquad (1)

where |nll⟩ are eigenstates of the non-relativistic hydrogenic system with low angular momentum (usually l = 1) and m = l. The weight coefficients

w_n = (-1)^n c_n = (-1)^n (2\pi\sigma^2)^{-1/4} \exp\left[-(n-n_{av})^2/4\sigma^2\right]

are given by a Gaussian distribution with mean n_av and dispersion σ.
Distributions of this type describe the population of Rydberg states excited by a short laser pulse. The phase (−1)^n is added to obtain the initial localization of the wave packet at its external turning point. The spinor \binom{a}{b} determines the initial direction of the spin. The wave packet (1) is an approximation of the full relativistic bispinor whose initial small components are set to zero.

Time evolution

After transformation to the basis |n, l, j, m_j⟩ one obtains (with the notation j_> = l + 1/2 and j_< = l − 1/2)

|\Psi_r\rangle = \sum_n w_n \left[ a\, |n,l,j_>,j_>\rangle + b\left( \sqrt{\tfrac{1}{2l+1}}\, |n,l,j_>,j_<\rangle + \sqrt{\tfrac{2l}{2l+1}}\, |n,l,j_<,j_<\rangle \right) \right]. \qquad (2)

In the basis |n, l, j, m_j⟩ the time evolution of each state is given by an exponential factor exp(−iE⁺_nl t/ℏ) or exp(−iE⁻_nl t/ℏ), where E⁺_nl and E⁻_nl are the energy eigenvalues for j_> = l + 1/2 and j_< = l − 1/2, respectively. Precisely,

E^{\pm}_{nl} = m_0 c^2 \left[ 1 + \left( \frac{Z\alpha}{\,n - j_{\gtrless} - \tfrac{1}{2} + \sqrt{\left(j_{\gtrless} + \tfrac{1}{2}\right)^2 - (Z\alpha)^2}\,} \right)^{2} \right]^{-1/2}. \qquad (3)

Applying the time evolution in that basis and transforming back to the |n, l, s, m_s⟩ basis, one obtains the wave packet after time t in the form |\Psi_r(t)\rangle = \binom{\Psi_1}{\Psi_2}, where the upper Ψ₁(t) and lower Ψ₂(t) components of the spinor are given by

\Psi_1(t) = \sum_n w_n \left[ a\, e^{-iE^{+}_{nl}t/\hbar}\, |nll\rangle + b\, \frac{\sqrt{2l}}{2l+1} \left( e^{-iE^{+}_{nl}t/\hbar} - e^{-iE^{-}_{nl}t/\hbar} \right) |nl\,l{-}1\rangle \right],
\Psi_2(t) = \sum_n w_n\, \frac{b}{2l+1} \left( e^{-iE^{+}_{nl}t/\hbar} + 2l\, e^{-iE^{-}_{nl}t/\hbar} \right) |nll\rangle. \qquad (4)

Such a wave packet is localized only in the radial coordinate; hence the radial probability densities of the components,

\rho_1(r) = r^2 \int d\Omega\, |\Psi_1(r,\theta,\phi)|^2 \quad \text{and} \quad \rho_2(r) = r^2 \int d\Omega\, |\Psi_2(r,\theta,\phi)|^2, \qquad (5)

are convenient quantities for illustrating the wave packet motion.
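The Gaussian weights of Eq. (1) and the relativistic energies of Eq. (3) are easy to evaluate numerically. The following is a minimal sketch (our own illustration; function names and parameter values are chosen only for this example, not taken from the paper):

```python
import numpy as np

# Illustration (not code from the paper) of the Gaussian weight
# coefficients w_n of Eq. (1) and the relativistic energies E^{+-}_{nl}
# of Eq. (3). Energies are in units of m0*c^2.

ALPHA = 1.0 / 137.035999  # fine-structure constant

def weights(n, n_av, sigma):
    """w_n = (-1)^n (2 pi sigma^2)^(-1/4) exp(-(n - n_av)^2 / (4 sigma^2))."""
    n = np.asarray(n)
    sign = np.where(n % 2 == 0, 1.0, -1.0)
    return sign * (2.0 * np.pi * sigma**2) ** (-0.25) \
        * np.exp(-((n - n_av) ** 2) / (4.0 * sigma**2))

def dirac_energy(n, l, Z, upper=True):
    """E^{+-}_{nl} / (m0 c^2), Eq. (3): j = l + 1/2 (upper) or l - 1/2."""
    j = l + 0.5 if upper else l - 0.5
    za = Z * ALPHA
    denom = n - j - 0.5 + np.sqrt((j + 0.5) ** 2 - za**2)
    return (1.0 + (za / denom) ** 2) ** -0.5

n = np.arange(60, 101)
w = weights(n, n_av=80, sigma=2.0)      # sum of |w_n|^2 is ~1
e_plus = dirac_energy(80, l=1, Z=92, upper=True)
e_minus = dirac_energy(80, l=1, Z=92, upper=False)
# The j = l - 1/2 level lies below the j = l + 1/2 level: e_minus < e_plus.
```

The small splitting e_plus − e_minus between the two fine-structure levels is what sets the spin-orbit time scale discussed later in the text.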
The integration over the angular coordinates leads to

\rho_1(r) = r^2 \left[ a^2 \left| \sum_n w_n R_n(r)\, e^{-iE^{+}_{nl}t/\hbar} \right|^2 + b^2 \frac{2l}{(2l+1)^2} \left| \sum_n w_n R_n(r) \left( e^{-iE^{+}_{nl}t/\hbar} - e^{-iE^{-}_{nl}t/\hbar} \right) \right|^2 \right],
\rho_2(r) = r^2\, b^2 \frac{1}{(2l+1)^2} \left| \sum_n w_n R_n(r) \left( e^{-iE^{+}_{nl}t/\hbar} + 2l\, e^{-iE^{-}_{nl}t/\hbar} \right) \right|^2, \qquad (6)

where R_n(r) denotes the radial part of the wave function ⟨r|nlm⟩. The periodicity of the motion is well seen with the help of the autocorrelation function A(t) = ⟨Ψ(0)|Ψ(t)⟩. For a radial wave packet it reads

A(t) = \sum_n w_n^2 \left[ \left( a^2 + \frac{b^2}{2l+1} \right) e^{-iE^{+}_{nl}t/\hbar} + \frac{2l\, b^2}{2l+1}\, e^{-iE^{-}_{nl}t/\hbar} \right]. \qquad (7)

The plot of |A(t)|² for wave packets with n_av = 80, Z = 92 and two different values of σ is presented in Fig. 2. Even in the short time evolution presented in Fig. 1 one can see a transfer of the probability density from one component of the spinor to the other already after one classical period. It shows that the period of the spin-orbit motion is substantially smaller for RWP than for circular or elliptic WP. The period of the spin-orbit motion is determined by the splitting of the energy levels with n = n_av (the maximally populated level in the WP) and opposite spin projections,

T_{ls} = \frac{2\pi\hbar}{|E^{+}_{n_{av}l} - E^{-}_{n_{av}l}|} \simeq \frac{4\pi\, n_{av}^3\, l(l+1)}{Z^4\alpha^2} = \frac{2l(l+1)}{(Z\alpha)^2}\, T_{cl}, \qquad (8)

where the middle expression is in atomic units. The result is obtained using the lowest-order approximation for the relativistic energies in a hydrogenic system. It is clear that for RWP, with l = 1, T_ls can be even smaller than T_rev, particularly for large Z. A hierarchy of time scales is defined as in [12]. Writing the energy as a function of the single quantum number n, for n = n_av we define a hierarchy of times

\left| \frac{1}{k!} \frac{d^k E}{dn^k} \right|_{n=n_{av}} = \frac{2\pi\hbar}{T_k}, \qquad k = 1, 2, 3, \ldots \qquad (9)

For k = 1 we obtain the classical Kepler time T_cl, for k = 2 the revival time T_rev, and so on. Fig. 3 presents the three time scales T_cl, T_rev and T_ls for RWP as functions of n_av for different Z. It is clear that for Z > 60, T_ls becomes comparable to T_rev, and even shorter for n_av > 50.
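The time-scale hierarchy of Eqs. (8)-(9) can be illustrated with a rough numerical sketch (our own, in atomic units, using the standard nonrelativistic energy E_n = −Z²/(2n²) for T_cl and T_rev and the lowest-order estimate of Eq. (8); parameter values are illustrative):

```python
import math

# Rough illustration of the time-scale hierarchy, Eqs. (8)-(9), in
# atomic units (hbar = m_e = 1). E_n = -Z^2/(2 n^2) gives T_cl and
# T_rev; T_ls follows the lowest-order estimate of Eq. (8).

ALPHA = 1.0 / 137.035999

def t_cl(n_av, Z):
    """Classical Kepler period: 2*pi / |dE/dn| = 2*pi*n^3 / Z^2."""
    return 2.0 * math.pi * n_av**3 / Z**2

def t_rev(n_av, Z):
    """Revival time: 2*pi / |(1/2) d^2E/dn^2| = (2*n/3) * T_cl."""
    return (2.0 * n_av / 3.0) * t_cl(n_av, Z)

def t_ls(n_av, Z, l=1):
    """Spin-orbit time scale, Eq. (8)."""
    return 2.0 * l * (l + 1) / (Z * ALPHA) ** 2 * t_cl(n_av, Z)
```

For n_av = 80, Z = 92 and l = 1 this gives T_cl < T_ls < T_rev, i.e. the spin-orbit time scale is already shorter than the revival time, consistent with the behaviour described in the text.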
Because the lifetimes of the wave packets with respect to radiative decay are about two orders of magnitude larger than T_ls (according to [15], T_ls/T_raddec ≈ 0.06), the chances of observing effects of spin-orbit entanglement are much better for RWP than for any other WP.

Expectation values of spin operators

Expectation values of spin operators can easily be obtained using equations (4). For RWP they have the simple structure

\langle\sigma_x\rangle_t = ab \sum_n |w_n|^2 \left[ \frac{2}{2l+1} + \frac{4l}{2l+1}\cos(\omega_n t) \right], \qquad (10)
\langle\sigma_y\rangle_t = ab \sum_n |w_n|^2\, \frac{4l}{2l+1}\, \sin(\omega_n t), \qquad (11)
\langle\sigma_z\rangle_t = \sum_n |w_n|^2 \left[ a^2 - b^2 \frac{(2l-1)^2}{(2l+1)^2} - b^2 \frac{8l}{(2l+1)^2} \cos(\omega_n t) \right], \qquad (12)

where ω_n = (E⁺_nl − E⁻_nl)/ℏ. The terms containing cos(ω_n t) and sin(ω_n t) indicate that at the beginning one can expect spin precession, followed by a spin collapse implied by the nonlinear dependence of the frequencies ω_n on n. This behaviour is clearly seen in the upper part of Fig. 4, where for t/T_ls ∈ (0, 5) the spin vector makes several rotations, while for t/T_ls ∈ (5, 20) it stays almost constant. At those times the length of the spin vector is reduced to ⟨σ⟩ ≈ 0.55, which means that part of the spin angular momentum is dynamically transferred into the orbital motion. Later, for t/T_ls ∈ (20, 33), the spin revives (at half of T_ls2 = (2/3) n_av T_ls, that is, t ≈ 26.7 T_ls for n_av = 80). The high peak of the autocorrelation function and the larger length of the spin vector presented in the lower part of Fig. 4 confirm the predicted time of the spin revival. The spin precession is accompanied by revivals of the spatial probability density. This is visible in the autocorrelation function and the spin components (Fig. 4), and in detail in Fig. 5, which displays the spatial probability density at some particular times. The inherent entanglement of the spatial and spin degrees of freedom, manifested already at short times in Fig. 1, can be illustrated with the help of quantum carpets, i.e. space-time plots of the WP evolution [16].
Such a space-time plot, presenting the time evolution of ρ₁ and ρ₂ separately, is shown in Fig. 6. One sees that if the initial WP has only the Ψ₂ component, there is a transfer of probability density to the other component and back. This transfer is governed precisely by the T_ls time scale.

Conclusions

We have discussed the time evolution of RWP in hydrogenic systems using a suitable approximation of the relativistic approach. The main relativistic effect is the appearance of a new time scale due to the spin-orbit coupling. As shown above, this time scale can be much smaller for radial WP than for the previously discussed cases of circular [12] or elliptic [13] WP. This fact implies that, in principle, experimental observations of some spin-orbit effects may become possible with existing techniques.

http://arxiv.org/ps/quant-ph/0404084v1

Figure captions

Fig. 1 illustrates the time evolution of the radial wave packet with n_av = 80, a = 0, b = 1, which corresponds to an initial spin antiparallel to the Oz axis. The wave packet exhibits a kind of breathing motion, moving towards the center and reassembling itself (approximately) after one classical period at the external turning point.

Figure 1: Short-time-scale evolution of the radial wave packet with n_av = 80, a = 0, b = 1.
Figure 2: Autocorrelation function (squared) for radial wave packets with n_av = 80, Z = 92, σ = 1 and 2.
Figure 3: Log-log plot of the time scales (in seconds) T_cl, T_rev and T_ls of radial WP as functions of n_av for different Z.
Figure 4: Time evolution of the spin expectation values for RWP with n_av = 80, l = 1 and a = b (upper part), and the square of the autocorrelation function and the 'length' of the spin vector (lower part).
Figure 5: Radial probability density ρ = ρ₁ + ρ₂ for RWP with n_av = 80, l = 1 and a = b at several time instants corresponding to large values of the autocorrelation function (revivals).
Figure 6: Time evolution of the radial probability densities ρ₁ and ρ₂ for RWP with n_av = 80, l = 1, a = 0, b = 1.
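The spin expectation values of Eqs. (10)-(12) are straightforward to evaluate numerically; below is a minimal sketch (our own illustration; the weights, splitting frequencies and spinor values are placeholder choices, not values from the paper):

```python
import numpy as np

# Numerical sketch of the spin expectation values, Eqs. (10)-(12),
# for l = 1. The weights w, frequencies omega and spinor (a, b)
# below are illustrative placeholders.

def spin_expectations(t, w, omega, a, b, l=1):
    """Return (<sigma_x>, <sigma_y>, <sigma_z>) at time t, Eqs. (10)-(12)."""
    p = np.abs(w) ** 2
    c = np.cos(omega * t)
    s = np.sin(omega * t)
    sx = a * b * np.sum(p * (2.0 / (2*l + 1) + 4.0*l / (2*l + 1) * c))
    sy = a * b * np.sum(p * (4.0*l / (2*l + 1) * s))
    sz = np.sum(p * (a**2 - b**2 * (2*l - 1)**2 / (2*l + 1)**2
                     - b**2 * 8.0*l / (2*l + 1)**2 * c))
    return sx, sy, sz

n = np.arange(70, 91)
w = np.exp(-((n - 80.0) ** 2) / (4.0 * 2.0**2))
w = w / np.sqrt(np.sum(w**2))        # normalize: sum |w_n|^2 = 1
omega = 1e-3 * (80.0 / n) ** 4       # toy n-dependence of the splitting
a = b = 1.0 / np.sqrt(2.0)
sx0, sy0, sz0 = spin_expectations(0.0, w, omega, a, b)
# At t = 0 the spin points along Ox: sx0 = 1, sy0 = 0, sz0 = 0.
```

As a sanity check, at t = 0 the formulas reduce to the expectation values of a bare spinor (a, b), so an equal superposition points along Ox; at later times the spread of ω_n over n produces the collapse and revival of the spin vector described above.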
References

[1] J. Parker and C.R. Stroud Jr., Phys. Rev. Lett. 56, 716 (1986).
[2] I.Sh. Averbukh and N.F. Perelman, Phys. Lett. A 139, 449 (1989); Zh. Eksp. Teor. Fiz. 96, 818 (1989) [Sov. Phys. JETP 69, 464 (1989)]; Usp. Fiz. Nauk 161, 41 (1991) [Sov. Phys. Usp. 34, 572 (1991)].
[3] M. Nauenberg, J. Phys. B: At. Mol. Opt. Phys. 23, L385 (1990).
[4] Z. Dacic-Gaeta and C.R. Stroud Jr., Phys. Rev. A 42, 6803 (1990).
[5] A. Peres, Phys. Rev. A 47, 5196 (1993).
[6] R. Bluhm and V.A. Kostelecky, Phys. Rev. A 50, R4445 (1994); Phys. Lett. A 200, 308 (1995); Phys. Rev. A 51, 4767 (1995).
[7] R. Bluhm, V.A. Kostelecky and J.A. Porter, Am. J. Phys. 64, 944 (1996).
[8] G. Alber and P. Zoller, Phys. Rep. 199, 231 (1991); B.M. Garraway and K.A. Suominen, Rep. Prog. Phys. 58, 365 (1995).
[9] G. Alber, H. Ritsch, and P. Zoller, Phys. Rev. A 34, 1058 (1986).
[10] J.A. Yeazell, M. Mallalieu, J. Parker, and C.R. Stroud Jr., Phys. Rev. A 40, 5040 (1989).
[11] J.A. Yeazell, M. Mallalieu, and C.R. Stroud Jr., Phys. Rev. Lett. 64, 2007 (1990).
[12] R. Arvieu, P. Rozmej, and M. Turek, Phys. Rev. A 62, 022514 (2000).
[13] P. Rozmej, M. Turek, R. Arvieu, and I.Sh. Averbukh, J. Phys. A: Math. Gen. 35, 7803 (2002).
[14] M. Turek, Relativistic effects in time evolution of wave packets, PhD thesis (in Polish), Maria Curie-Skłodowska University, Lublin, 2002.
[15] C.E. Chang, Phys. Rev. A 31, 495 (1985).
[16] F. Grossman, J.-M. Rost and W.P. Schleich, J. Phys. A 30, L227 (1997); P. Rozmej and R. Arvieu, Eur. Phys. J. A 5, 357 (1999); R. Bonifacio, I. Marzoli and W.P. Schleich, J. Mod. Optics 47, 2891 (2000).
IMKGA-SM: Interpretable Multimodal Knowledge Graph Answer Prediction via Sequence Modeling

Yilin Wen, Senior Member, IEEE, Biao Luo, and Yuqian Zhao

THIS WORK HAS BEEN SUBMITTED TO THE IEEE FOR POSSIBLE PUBLICATION. COPYRIGHT MAY BE TRANSFERRED WITHOUT NOTICE, AFTER WHICH THIS VERSION MAY NO LONGER BE ACCESSIBLE.

Index Terms: Knowledge graph, link prediction, multimodal, interpretability, sequence modeling, reinforcement learning.

Abstract: Multimodal knowledge graph link prediction aims to improve the accuracy and efficiency of link prediction tasks on multimodal data. However, for complex multimodal information and sparse training data, it is usually difficult for most methods to achieve interpretability and high accuracy simultaneously. To address this difficulty, a new model is developed in this paper, namely Interpretable Multimodal Knowledge Graph Answer Prediction via Sequence Modeling (IMKGA-SM). First, a multimodal fine-grained fusion method is proposed, and Vgg16 and Optical Character Recognition (OCR) techniques are adopted to effectively extract image information and the text information embedded in images. Then, the knowledge graph link prediction task is modelled as an offline reinforcement learning Markov decision process, which is then abstracted into a unified sequence framework. An interactive-perception-based reward expectation mechanism and a special causal masking mechanism are designed, which "convert" the query into an inference path. Then, an autoregressive dynamic gradient adjustment mechanism is proposed to alleviate the insufficient optimization of the multimodal components. Finally, two datasets are adopted for experiments, and popular SOTA baselines are used for comparison. The results show that the developed IMKGA-SM achieves much better performance than SOTA baselines on multimodal link prediction datasets of different sizes.
INTRODUCTION

The knowledge graph is a technology and tool for carrying and representing background knowledge. It structures real-world knowledge into entities and relations in the form of graphs and organizes them into networks. In a knowledge graph, knowledge is represented in the form of triples (h, r, t), where h is the head entity, r is the relation connecting the two entities, and t is the tail entity. Knowledge graphs are used in various artificial intelligence tasks across different domains [1], such as named entity disambiguation [2] in natural language processing [3], visual relation detection [4], and collaborative filtering [5]. However, it is well known that even state-of-the-art knowledge graphs are often incomplete (i.e., they lack real facts or contain false facts). Machine learning algorithms aimed at addressing this problem therefore attempt to infer missing triples from observed connectivity patterns, a task known as link prediction [6]: for example, given a head entity and a relation (h, r), predict the tail entity t. Existing approaches to link prediction can be divided into four categories: deductive logic and rules, reasoning based on graph structure, knowledge graph embedding representations, and deep neural network models. Rule-based reasoning methods, such as AMIE [7] and AnyBURL [8], transform natural-language queries into combinations of logical operators, which are then implemented in a specific programming language to answer the query. These methods are accurate and interpretable, but require experts to formulate a large number of inference rules and generalize poorly to unknown rules. Reasoning based on graph structure comes in two flavours. The first relies on path features; its representative algorithm is PRA [9]. Path features between nodes are extracted by graph traversal or random walk methods, and node connections are predicted from these path features.
Its advantage is that it provides path interpretability while reasoning; its problem is that reasoning fails when the relevant nodes are not connected. The second is a graph-structure-based approach that uses a message-passing mechanism to extract the structural information of target entities and provides subgraph interpretability; its representative algorithm is DeepPath [10]. However, because knowledge graphs are usually very large, traversing all the subgraph structures in the graph is extremely expensive. Knowledge graph embedding methods embed the high-dimensional, discrete data of the knowledge graph into a low-dimensional continuous vector space by designing a scoring function, and then represent entities and relations as numerical vectors for computation. Representative models are of the TransE type, for example TransE [11], TransH [12], TransD [13], and TransR [14]. More recent research concerns bilinear models, e.g., RESCAL [15], DistMult [16], TuckER [17], and ComplEx [18]. These methods are characterized by a shallow neural network, and the semantic representation of the knowledge graph is realized through a specific structure of the embedding space. Deep neural network models, e.g., CoKE [19] and ConvE [20], organize entities and relations into query pairs, match query pairs with entities and relations, and obtain inference similarity scores through deep neural networks to make inference judgments. Both knowledge graph embedding models and deep network models can be regarded as neural network models; what they have in common is that both design a scoring function and are trained by gradient backpropagation in a data-driven manner. Their advantage is relatively better generalization performance, and they effectively alleviate the curse of dimensionality of the graph structure.
Their disadvantage is that they only capture the similarity between input and output values, lack interpretability, and perform single-step reasoning. In summary, as shown in Table 1, the methods based on logical deduction rules and on graph structure are symbol-based methods, which have better interpretability but poor generalization performance. In contrast, methods based on knowledge graph embeddings and deep neural network models generalize better but lack interpretability. Therefore, studying how to integrate symbolist and connectionist models is the key to obtaining an interpretable knowledge graph reasoning model. With the development of deep learning, the model structures of knowledge reasoning methods are becoming more and more complex. Because it is difficult for users to gain an intuitive understanding of the parameters, structure and characteristics of such models, and because they understand little of the decision-making process and the basis of the reasoning, it is difficult for users to trust the models' predictions. Therefore, in order to establish trust between users and reasoning models and to balance the tension between model accuracy and interpretability, multi-hop reasoning methods are used for explainable knowledge reasoning [21]. The rationale of multi-hop reasoning is to imitate the multi-hop thinking of human beings. A common approach is to apply reinforcement learning frameworks to multi-hop reasoning in knowledge graphs. Reinforcement learning has received a great deal of attention in the past ten years and has been widely used in control [22], games [23], and robotics [24]. It models a learning process as a Markov process and trains the model by maximizing the long-term cumulative reward through the interaction between the agent and the environment.
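A toy sketch (our own illustration, not the paper's implementation) makes this Markov decision formulation concrete: states are entities, actions are outgoing edges, and a terminal reward is given when the agent reaches the answer of the query.

```python
from typing import Dict, List, Tuple

# Toy MDP view of multi-hop knowledge graph reasoning: states are
# entities, actions are outgoing (relation, tail) edges, and a reward
# of 1.0 is given when the agent steps onto the target entity.

class KGEnvironment:
    def __init__(self, triples: List[Tuple[str, str, str]]):
        self.out_edges: Dict[str, List[Tuple[str, str]]] = {}
        for h, r, t in triples:
            self.out_edges.setdefault(h, []).append((r, t))

    def actions(self, entity: str) -> List[Tuple[str, str]]:
        """Available actions at a state = outgoing (relation, tail) edges."""
        return self.out_edges.get(entity, [])

    def step(self, action: Tuple[str, str], target: str) -> Tuple[str, float]:
        """Take an edge; reward 1.0 iff the new state is the target entity."""
        _relation, next_entity = action
        return next_entity, (1.0 if next_entity == target else 0.0)

# Hypothetical two-hop example: answer (shoes, style, ?) via a walk.
env = KGEnvironment([("shoes", "similar_to", "dress"),
                     ("dress", "style", "sweet")])
state, total_reward = "shoes", 0.0
while env.actions(state):
    action = env.actions(state)[0]      # a greedy stand-in for the policy
    state, reward = env.step(action, target="sweet")
    total_reward += reward
```

In a real system the stand-in greedy choice would be replaced by a learned policy network, and the visited edges form the interpretable reasoning path.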
Modelling the knowledge graph as a reinforcement learning process yields not only the result of the reasoning but also the reasoning path, which explains the knowledge graph reasoning. The specific fusion method is to regard the knowledge graph as the environment and model the agent as a deep neural network, combining the strengths of symbolism and connectionism so that the model has both the generalization performance of neural networks and path interpretability. However, methods based on reinforcement learning, such as DeepPath [10], MINERVA [25], DIVINE [26], and AttnPath [27], generally suffer from slow convergence and low accuracy, and most of them are inferior to some traditional methods. The reason for this may be the sparse rewards of reinforcement learning. Moreover, the sparse rewards, sparse data, and insufficient exploration of knowledge graphs make reinforcement learning even more difficult and challenging in multimodal knowledge graph reasoning tasks [28]. Therefore, improving the accuracy of reinforcement learning in knowledge graph reasoning is meaningful and promising. Recently, the cross-domain application of the Transformer [29] has attracted wide attention, with breakthroughs in image classification [30], semantic segmentation [31], object detection [32] and other fields. Currently, the Transformer has been employed as a pre-training model in offline reinforcement learning, e.g., Decision Transformer [33], Trajectory Transformer [34], and Gato [35]. These methods regard reinforcement learning data as unstructured sequences and train with supervised or self-supervised learning, which avoids the unstable gradient signal of traditional reinforcement learning and performs better in the offline setting.
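The sequence abstraction used by such methods can be sketched as follows (our own minimal illustration, not the IMKGA-SM code; the token formats are arbitrary placeholders): an RL trajectory is flattened into one token stream of (return-to-go, state, action) triplets, which a causally masked transformer can then model autoregressively.

```python
from typing import List, Tuple

# Decision-Transformer-style flattening of a reasoning trajectory into
# one token sequence: (return-to-go, state, action) per hop.

def trajectory_to_sequence(hops: List[Tuple[str, str]],
                           final_reward: float) -> List[str]:
    """hops: (entity, relation) pairs; a single reward arrives at the
    end, so the return-to-go is constant along the trajectory."""
    tokens: List[str] = []
    for entity, relation in hops:
        tokens.append(f"R:{final_reward}")
        tokens.append(f"S:{entity}")
        tokens.append(f"A:{relation}")
    return tokens

seq = trajectory_to_sequence([("shoes", "similar_to"),
                              ("dress", "style")], final_reward=1.0)
# seq == ['R:1.0', 'S:shoes', 'A:similar_to', 'R:1.0', 'S:dress', 'A:style']
```

Training then reduces to next-token prediction over such sequences; at inference time, conditioning on a high target return steers the model toward successful reasoning paths.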
Deep reinforcement learning is a sequential process; the multi-hop reasoning process can therefore be handled by state-of-the-art reinforcement learning sequence models, which may achieve better results than traditional reinforcement learning. For complex knowledge graph reasoning tasks involving multimodal data, the core idea of most existing algorithms is to reason by integrating existing triple-structure knowledge, so knowledge about the entities themselves is often ignored. However, information about the entities themselves, such as image and textual information, is usually beneficial for link prediction tasks. As shown in Fig. 1, for example, when performing the link prediction task for the triple <shoes, style, ?>, the answer is predicted from the triple <dress, style, sweet> whose head entity has a similar image, and the answer ? is finally concluded to be sweet. It is worth noting that the text in an image also contains a lot of knowledge; in particular, when the knowledge graph is applied in e-commerce, the text in a product image is often the brand information of the product. Therefore, to address multimodal explainable knowledge graph reasoning tasks with high efficiency and high performance, a new sequential reinforcement learning model, IMKGA-SM, is developed, in which a reward mechanism based on perceptual interaction is designed together with fine-grained multimodal information extraction.

Figure 1: When performing the <shoes, style, ?> link prediction task, the answer is predicted from the triple <dress, style, sweet>, whose head entity image is similar, and it is finally concluded that ? is sweet.

RELATED WORKS

Single-modal Knowledge Graph Reasoning

Single-modal knowledge graph reasoning mainly revolves around relational reasoning. The AMIE [7] and AMIE+ [36] algorithms are derived from early inductive logic programming systems [37], emphasizing automatic rule learning.
These methods are strongly interpretable; however, all of them require experts to design the rules. Graph-structure-based reasoning methods (e.g., the path ranking algorithm [9]) are also used to tackle such problems; they are interpretable but computationally intensive and time-consuming. Embedding-based methods include TransE [11], ConvE [20], RotatE [38], and TuckER [17]. Each of these models is simple and fast to train, but they are not interpretable. Reasoning methods based on neural networks include neural tensor networks [39], R-GCN [40], implicit ReasoNets [41], etc. They learn to reason through implicit processing, which results in poor interpretability and unstable performance. In addition, there are reinforcement learning methods, e.g., DeepPath [10], MINERVA [25], RLH [42], GaussianPath [43], etc., which offer better interpretability than representation-learning-based methods, but whose accuracy is poor.

Multimodal Knowledge Graph Link Prediction

Compared with the single-modal knowledge graph link prediction task, the main contribution of multimodal knowledge graph link prediction is to integrate multimodal knowledge into the plain-text knowledge graph. In multimodal link prediction tasks, it is necessary to combine the textual semantics of entities with other modalities, such as vision and hearing. IKRL [44] is the first knowledge representation model that includes image information. For each entity, it learns two different representations, based on triple structure information and on image information, respectively. DKRL [45] is a knowledge representation model that fuses descriptions. Similar to IKRL, DKRL also learns, for each entity, one representation based on structural information and one based on text descriptions.
Based on the single-modal link prediction model TransE [11], TransAE [46] employs an autoencoder to jointly encode visual and textual information into entity representations. RSME [47] is a multimodal knowledge graph reasoning model based on the traditional knowledge graph embedding model ComplEx [18]. However, most of these multimodal approaches are uninterpretable and have low accuracy.

Reinforcement Learning with Transformers

In [33], the Decision Transformer is proposed, modelling reinforcement learning tasks within a sequence-framework transformer; building on it, SQUIRE [48] handles single-modal knowledge graph link prediction. However, these works generalize poorly and underutilize the reward information. Based on the Decision Transformer [33], the Trajectory Transformer [34] uses beam search for model-based planning, but generating new trajectories with it is complicated. Therefore, a simple random masking mechanism is proposed in this paper, which achieves data augmentation by randomly masking historical actions generated in the past. Recently, DeepMind proposed a generalist agent, Gato [35], which made a further breakthrough in multimodal tasks; extending such models to multimodal multi-hop reasoning is promising.

METHODOLOGY

In this section, the overall framework of IMKGA-SM is introduced. It treats the multi-hop reasoning problem as a sequence-to-sequence task derived from autoregressively modelling trajectories and applies it to multimodal link prediction. The hybrid transformer architecture of IMKGA-SM mainly comprises five stacked modules. (1) The underlying multimodal feature extraction module, shown in Fig. 2, obtains basic structural information, image information, and textual information in images from the database and the images, and combines the three into a state feature.
(2) The reinforcement learning sequence module, shown in the bottom part of Fig. 3: the knowledge graph link prediction task is modelled as an offline reinforcement learning problem, which is then abstracted into a sequential framework. (3) The upper multimodal encoder (fusion encoder) module, shown in Fig. 4, fuses the underlying features, the reward features based on perceptual interaction, and the action features through a self-attention mechanism. (4) The mask mechanism module, shown in the upper part of Fig. 3, includes three mechanisms that shape the input and output of the encoder and prevent overfitting. (5) The loss function module adopts an autoregressive self-adjusting mechanism to maximize multimodal performance, as shown in Fig. 5. In the following subsections, each module of IMKGA-SM is analyzed and discussed in detail. The multimodal feature extraction module and the reinforcement learning sequence architecture are developed in Subsections 3.1 and 3.2, respectively; the fusion encoder module is proposed in Subsection 3.3; the mask and loss function modules are designed in Subsections 3.4 and 3.5, respectively.

Multimodal Feature Extraction Module

In multimodal knowledge graph tasks with only single-image data, most existing methods learn only simple image information. However, many visual scenes contain text with key information, such as product brand, price, and target consumers, so understanding the text in images is crucial for downstream reasoning tasks. To jointly learn multimodal knowledge and inter-entity relations, knowledge from each single modality is extracted and combined in a multimodal transformer. In this paper, two modalities are considered, visual and textual, where the text is extracted from the image. The multimodal part includes the image input, the text-in-image input, and the query input (i.e., head entity and relation).
VGG16 pre-trained on ImageNet is used to process the head entity image. VGG16 consists of several VGG blocks and three fully connected layers; the vector output by the last fully connected layer is used as the image feature vector. For the text-in-image input, OCR technology is used for text extraction [49]. Generally, OCR consists of two steps: (1) text detection, locating the position of the text in the image; and (2) text recognition, identifying the located text area and converting it into character information. In this paper, the CTPN method [50] is adopted for text detection, and the CRNN method [51] for text recognition. If the image corresponding to the head entity is missing, ∅ is used instead. For the query input, the head entity and relation of the knowledge graph are encoded into a vector. Eq. 1 models the original multimodal feature φ, realized as the fusion of the structural information (h, r), the image information, and the text information:

φ : G × G → G    (1)

Here h_fig, h_ocr, h, r ∈ G. Let * denote a grouping operation, h the structure embedding of the head entity, h_fig the image embedding of the head entity, and h_ocr the text embedding of the head entity extracted by OCR. Then, as formalized in Eq. 2, the characteristic entity q of φ(h, h_fig, h_ocr) and r is written as:

q = φ(h, h_fig, h_ocr) * r    (2)

Multimodal fusion is widely used in computer vision [52] and natural language processing [53]. Since the transformer framework is adopted as the core module of IMKGA-SM, and the number of parameters in the learning process largely determines the operation speed, it is necessary to preprocess the transformer's input features.
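The fusion in Eqs. 1-2 leaves φ and the grouping operation * abstract. A minimal sketch, assuming both are plain concatenation, that ∅ becomes a zero vector, and that the 8-/3-dimensional image/OCR sizes from Subsection 3.1 apply, might look like:

```python
import numpy as np

def build_query(h, h_fig, h_ocr, r):
    """Sketch of q = phi(h, h_fig, h_ocr) * r (Eqs. 1-2).

    Assumption: phi and the grouping operation * are both modelled as
    concatenation; the paper leaves them abstract. A missing image or
    OCR embedding (the paper's null symbol) becomes a zero vector.
    """
    if h_fig is None:                        # head entity has no image
        h_fig = np.zeros(8)                  # 8-dim image slot (assumed)
    if h_ocr is None:                        # no text recovered by OCR
        h_ocr = np.zeros(3)                  # 3-dim OCR slot (assumed)
    phi = np.concatenate([h, h_fig, h_ocr])  # fused multimodal feature
    return np.concatenate([phi, r])          # group with the relation

q = build_query(np.ones(3), None, None, np.ones(2))
print(q.shape)  # (16,)
```

The zero-vector fallback keeps the feature length fixed even when a head entity lacks an image, which the later modules rely on.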
Therefore, the multimodal feature module serves as a pre-training stage for the core transformer framework, aiming to filter irrelevant or redundant features out of the original feature data. Specifically, three self-attention blocks receive the original multimodal feature vectors φ, and three autoencoders finally compress them into a 14-dimensional vector. First, each original multimodal feature φ_i passes through a fully connected feed-forward network, consisting of two linear transformations and a ReLU activation, to obtain the modal feature μ_i via Eq. 3:

μ_i = conv{ReLU[conv(φ_i)]},  i = 1, ..., L    (3)

Then, as Eq. 4 shows, the modal features μ_i are passed through a softmax layer to compute the attention of each modality a_i:

a_i = Softmax(μ_i),  i = 1, ..., L    (4)

The sum of the attention weights a_i multiplied by the modal feature embeddings μ_i is the self-attention Q^s_φ, formalized in Eq. 5:

Q^s_φ = Σ_{i=1}^{L} a_i μ_i    (5)

In this way Q^s_h, Q^s_{h_fig}, and Q^s_{h_ocr} are obtained for h, h_fig, and h_ocr, respectively. Q^s_φ is used as a query for the corresponding feature: the attention weights guided by Q^s_φ are computed and put through a softmax layer, and the weights are multiplied by the original modal features φ_k to obtain the filtered vector. The output of the attention block, g_φ, is expressed via Eq. 6:

p_k = W ReLU(W_s Q^s_φ) · ReLU(W_x φ_k),
s_k = Softmax(p_k),  k = 1, ..., N,
g_φ = Σ_{k=1}^{N} s_k φ_k,  φ ∈ G    (6)

After g_φ is obtained, it is input into the autoencoder for dimensionality reduction. The final feature h_φ is given by Eq. 7, in which h_fig is 8-dimensional and h_ocr is 3-dimensional:

h_φ = σ(W · g_φ + b)    (7)

This output serves as the definition of the reinforcement learning state s, as shown in Fig. 3, which is described in detail below.
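A NumPy sketch of the data flow of Eqs. 3-7 follows; random matrices stand in for the learned weights (W, W_s, W_x and the FFN layers), so only the shapes and the pipeline, not the trained behaviour, are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def filter_features(phi, rng):
    """Eqs. 3-6 sketch: attention-guided filtering of the features.
    phi has one row per modality; the matrices are random stand-ins."""
    L, d = phi.shape
    W1 = rng.standard_normal((d, d))
    W2 = rng.standard_normal((d, d))
    mu = np.maximum(phi @ W1, 0.0) @ W2        # Eq. 3: linear-ReLU-linear
    a = softmax(mu.sum(axis=1))                # Eq. 4: modality attention
    Q = (a[:, None] * mu).sum(axis=0)          # Eq. 5: self-attention query
    p = np.array([Q @ np.maximum(row, 0.0) for row in phi])  # Eq. 6 scores
    s = softmax(p)
    return (s[:, None] * phi).sum(axis=0)      # filtered feature g_phi

def autoencode(g, W, b):
    """Eq. 7 sketch: sigmoid bottleneck for dimensionality reduction."""
    return 1.0 / (1.0 + np.exp(-(W @ g + b)))

rng = np.random.default_rng(0)
phi = rng.standard_normal((3, 6))     # 3 modalities, 6-dim toy features
g = filter_features(phi, rng)
h = autoencode(g, rng.standard_normal((2, 6)), np.zeros(2))
print(g.shape, h.shape)               # (6,) (2,)
```

The toy dimensions (6-dim inputs, 2-dim bottleneck) are placeholders for the paper's 14-dimensional target.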
Reinforcement Learning Sequence Framework

In this subsection, an offline reinforcement learning framework is developed for the knowledge graph link prediction task. Specific Markov tuples are then designed, and a reward expectation mechanism based on perceptual interaction is proposed. Finally, the whole reinforcement learning process is abstracted into a sequential framework, which is the core module of IMKGA-SM.

Offline Reinforcement Learning Design

The knowledge graph link prediction problem is modelled as a Markov decision process. A Markov decision process tuple consists of a state s ∈ S, an action a ∈ A, a transition dynamic P(s'|s, a), and a reward function r. s_n, a_n, and r_n denote the state, action, and reward at time step n, respectively. A trajectory is a sequence of states, actions, and rewards: τ = (s_0, a_0, r_0, s_1, a_1, r_1, ..., s_N, a_N, r_N). The goal of reinforcement learning is to learn a policy that maximizes the expected return E[Σ_{n=1}^{N} r_n] in the Markov decision process. In this paper, the process of generating a path for each triple link in the knowledge graph is regarded as one reinforcement learning episode. Since the data are fixed datasets and new data cannot easily be obtained through environmental interaction, the task is treated as an offline reinforcement learning problem.

Trajectory Representation

The inference accuracy of reinforcement-learning-based methods is usually much lower than that of traditional TransE-style methods. This is because the amount of data for offline reinforcement learning is very limited and the rewards for knowledge graph link prediction are sparse, leading to serious decision bias. These methods are therefore not suitable for direct transfer to multimodal knowledge graph link prediction.
To address this problem, a new reinforcement learning sequence framework with a perceptual-interaction expected reward mechanism is proposed in this subsection. Different from traditional reinforcement learning, the reward here is the expected future reward, that is, the maximum reward value expected to be obtained from the current state. There are two main differences: (1) an expected reward mechanism is proposed to mitigate reward sparsity by incorporating the perceptual similarity of knowledge graph entities; (2) a multimodal perception interface is introduced into the Decision Transformer framework for the first time, making full use of multimodal features. Through the pre-training process above, the knowledge graph link prediction process is transformed into a Markov decision process whose goal is to find a path to the target entity; this pathfinding makes the multi-hop reasoning interpretable. The knowledge graph link prediction task is thus modelled as an offline reinforcement learning task and then transferred to a sequential framework for solving. As in offline reinforcement learning, the Markov tuple <R̂, S, A, P_r> of IMKGA-SM is defined as follows.

State s_n: For a knowledge graph triple < h, r, t > in the dataset, the state of IMKGA-SM is s = (query, h_fig, h_ocr) ∈ S, where S is the state space and query = (< bos >, h, r) consists of the beginning token, head entity, and relation. h_fig and h_ocr are the image embedding of the head entity and the text embedding in the image, obtained via Eq. 7.

Action a_n: The action space for a given s_n is the set of all entities, relations, and < eos >. The purpose is to infer the path from the head entity h and relation r to the tail entity t by generating action outputs; a_n is the n-th action, represented by the n-th token of the path generated by the rule.
Here, AnyBURL [8] is used as the rule-based method to find paths between h and t, r(h, t) → r_1(h, t_1) ∧ r_2(t_1, t_2) ∧ ... ∧ r_n(t_n, t), decomposing a single relation into a combination of multiple relations and entities. If the model makes an error during prediction, the inferred entity or relation does not conform to the expected attribute; the rule path is then modified to remember only the final target entity. Taking three hops as an example, the modified set of actions is A = (0, ∅, t, 0, 0, 0, 0).

Transition P_r: The transition function P_r maps the current state s_n to the next state s_{n+1}. Formally, P_r : S × A → S is defined as P_r(s_n, A_n) = P_r(query, h_fig, h_ocr, a_0, ..., a_{n-1}). When entering the next state, the actions of the previous steps are appended to the state as history, realizing the state change; this specific step is performed by Mask mechanism I. Therefore, unlike the traditional Decision Transformer [33], the state transition mechanism P_r(s_n, A_n) is designed to let the model focus on the states, actions, and return-to-go of the previous steps, thereby improving the policy.

Return-to-go R̂: Since the purpose of knowledge graph link prediction is to infer the tail entity, whether the reasoning succeeds is unknown until the last step of the reasoning path is reached. The reinforcement learning sequence method therefore suffers from sparse rewards on this task. To solve this, a reward expectation mechanism based on perceptual similarity is designed, taking the expected reward of the current state as input for interactive learning. R̂ is defined as the maximum reward expected from the current state, so after an action is taken, the value of the next R̂ decreases (or increases) by the reward of the previous action: R̂_n = Σ_{n'=n}^{T} r_{n'}. Thus R̂ changes with state and action.
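The transition P_r above amounts to appending the last action to the state's history. A small illustrative sketch, with a hypothetical tuple layout for the state and hypothetical token names:

```python
def transition(state, action):
    """P_r(s_n, a_n) sketch: the chosen action joins the history so the
    next prediction conditions on all previous steps (realized in the
    paper by Mask mechanism I). The state layout here is hypothetical."""
    query, h_fig, h_ocr, history = state
    # Build a new state rather than mutating the old one, so earlier
    # states in the trajectory remain intact.
    return (query, h_fig, h_ocr, history + [action])

s0 = (("<bos>", "shoes", "style"), None, None, [])
s1 = transition(s0, "r1")
s2 = transition(s1, "t1")
print(s2[3])  # ['r1', 't1']
```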
After the initial triple query = (< bos >, h, r) is obtained and the corresponding reasoning path is derived from the rules, the return-to-go R̂_n for each step is computed. τ(h, r, t) is defined as a set of triples in the dataset, and ψ(< bos >, r_1, t_1, r_2, ..., r_n, t_n) is the rule-derived path satisfying τ. All dataset entities and relations are stored in the collections E and R. Specifically, R̂ is generated as follows.

(1) At the initial step (n = 0), before any action is taken, the maximum expected reward of the task, R̂_0, is the reward for reaching the target entity, a fixed value defined in Eq. 8:

R̂_0 = r_good    (8)

(2) When n = 1, the first action is < bos >, marking the beginning. As Eq. 9 shows, the first return-to-go R̂_1 is defined according to whether the correct tail entity is finally inferred; here r_good is a positive constant and r_bad a negative one:

R̂_1 = { r_good, if ψ(t_n) = t;  r_bad, if ψ(t_n) ≠ t }    (9)

(3) For each action with n > 1, the reward at step n is defined in Eq. 10, in which a base penalty r_step (negative) is applied, since as few hops as possible are desired. When the current action a_n is the entity t_n, an additional reward r_addn (positive) is generated:

r_n = r_step + r_addn,  n > 1    (10)

(4) r_addn is defined as:

r_addn = { R̂_1 × sim(f_{t_n}, f_t), if a_n ∈ E;  0, if a_n ∈ R;  (1/2) R̂_1, if em_{t_n} = ∅ or em_t = ∅ }    (11)

Here sim(·) denotes the cosine similarity, sim(u, v) = u^T v / (‖u‖ · ‖v‖), and f_{t_n} and f_t are the image embeddings of the current and target entities, respectively. Eq. 11 states that the more similar the generated entity action is to the target entity, the greater the reward for that action. If the current action is a relation r_n, the additional reward r_addn is 0.
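Steps (3)-(4) can be sketched as follows; the constants (r_step, R̂_1) and the embeddings are placeholders, and the half-R̂_1 fallback for missing images follows Eq. 11:

```python
import numpy as np

def cosine(u, v):
    """sim(u, v) = u^T v / (||u|| * ||v||)."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def additional_reward(action_is_entity, f_cur, f_target, R1):
    """Eq. 11 sketch: entity actions earn a bonus scaled by visual
    similarity to the target entity; relation actions earn nothing;
    if either image embedding is missing, the bonus falls back to R1/2."""
    if not action_is_entity:
        return 0.0
    if f_cur is None or f_target is None:
        return 0.5 * R1
    return R1 * cosine(f_cur, f_target)

def step_reward(action_is_entity, f_cur, f_target, R1, r_step=-0.1):
    """Eq. 10 sketch: base per-hop penalty plus the additional reward."""
    return r_step + additional_reward(action_is_entity, f_cur, f_target, R1)

f = np.array([1.0, 0.0])
print(step_reward(True, f, f, R1=1.0))      # identical images: -0.1 + 1.0
print(step_reward(False, None, None, 1.0))  # relation action:  -0.1 + 0.0
```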
If the entity action a_n or the target entity t has no image information, the similarity value takes the intermediate value 0.5.

(5) An additional penalty r_bad is imposed if the recommended action does not conform to the attribute (entity or relation) that should be recommended. When action a_n is performed in state s_n, Eq. 12 defines the next return-to-go input R̂_n as the previous step's return-to-go R̂_{n-1} minus the reward r_{n-1} caused by action a_{n-1}:

R̂_n = { R̂_{n-1} − r_{n-1} − r_bad, if a_{n-1} ∉ E and a_{n-1} ∉ R;  R̂_{n-1} − r_{n-1}, otherwise }    (12)

(6) The iteration is repeated until the end of the episode; if the path is shorter than three hops, the last R̂ is repeated to pad the vector to length 7, ensuring the same embedding length. The trajectory τ is expressed in Eq. 13:

τ = (R̂_1, s_1, a_1, R̂_2, s_2, a_2, ..., R̂_N, s_N, a_N)    (13)

Fusion Encoder Architecture

The token embedding sequences of the three modes, return-to-go, state, and action, are concatenated and fed to the transformer. Different from the positional embedding of the traditional transformer [29], a time step (return-to-go, state) shares the same positional embedding, and the positions are processed as a complete sequence. The positional embedding process is expressed in Eq. 14, where X_pc^U is the projected embedding vector, C is the concat operation, U_pos is the position embedding of the corresponding embedding layer, and U = C(R̂, s, a). To avoid the increased computational complexity of long concatenated sequences, Eq. 15 models X̃_τ^{C(R̂,s)} by adding an embedding linear layer for each modality, projecting the original input to the embedding dimension, followed by layer normalization LN and a sigmoid:

X̃_τ^{C(R̂,s)} = Sigmoid(LN(X_pc^{C(R̂,s)})),  X̃_τ^a = Sigmoid(LN(X_pc^a))    (15)

These tokens are processed by an encoder model that predicts future action tokens via autoregressive modelling.
Since the multi-hop inference is a fixed-length sequence, the transformer's encoder structure is selected, consisting of L stacked blocks. As shown in Fig. 4, each block mainly includes two sublayers: multi-head self-attention MHA and a fully connected feed-forward network FFN. The transformer contains many parameters, including the W_V, W_Q, and W_K matrices, whose values are designed per stack and head. As formalized in Eq. 16, the multi-head attention mechanism Attn defines the head head_{M_r}. Here r and a denote the return-to-go and action features of the reinforcement learning sequence framework, q denotes the knowledge graph query (< bos >, h, r) feature, and I and O denote the image and OCR features of the multimodal feature extraction module:

head_{M_r} = Attn(x_r W_Q^r; x_r W_K^r, x_q W_K^q, x_I W_K^I, x_o W_K^o; x_r W_V^r, x_q W_V^q, x_I W_V^I, x_o W_V^o)    (16)

X̃_l^U = FFN(LN(X_l^U)) + X_l^U    (19)

Next, with the mask shown in Fig. 3, the encoder focuses only on the labels a_{<k} previous to the current return-to-go, the multimodal fusion input, and the output paths. The specific details of the mask are described in the next subsection.

Mask Mechanism Design

In the link prediction task of recommendation systems, feature redundancy, insufficient training data, and complex model design make the system extremely prone to the one-epoch phenomenon, that is, overfitting. Therefore, three mask mechanisms are designed to overcome overfitting; they also satisfy the input and output requirements of the reinforcement learning framework established in Subsection 3.2.

Mask mechanism I: As shown in the shaded area in Fig. 3, Mask mechanism I shapes the input and output of the transformer and realizes the Markov decision process. Through step-by-step prediction, the path is predicted sequentially to obtain the final target entity.
When predicting the next action, the previous action history is added to the state to achieve the state transition; that is, the real result of the previous step is used as input to predict the next path token. In this way, the model effectively uses the context information, ensuring accuracy, so that a one-step error does not cause a large error in the final result.

Mask mechanism II: As shown by the blue dots in Fig. 3, Mask mechanism II addresses model overfitting. After multimodal dimensionality reduction, the training feature data are sparse, which easily leads to fast premature convergence and overfitting, so data dropout is performed on this part of the embedded input. A double data-dropout mechanism is introduced for data augmentation, which helps retain the original high-quality samples as much as possible. Specifically, for a given sequence, the dropout scheme is enabled with a certain probability p_k, and when it applies, tokens in the sequence are randomly masked with a certain probability p_m.

Mask mechanism III: As shown by the red dot in Fig. 3, Mask mechanism III makes the model generate more new trajectories by itself: trajectories generated by the transformer in the past are randomly masked out for the next action prediction task, so that the model gradually learns from self-generated trajectories. Mask mechanism III is simple to implement and adds no extra computational cost or parameters. In the masking mechanism, as formalized in Eq. 20, multiplying each token x_i ∈ x by the mask gives the autoregressive log maximum likelihood, where η is the mask ratio:

log p(x) = Σ_{i=1}^{n} log p(x_i | [I[m_j ≤ η] · x_j]_{j=0}^{i−1}),  η ∈ [0, 1]    (20)

Mask mechanism I adopts a complete mask, so η = 1.
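Mask mechanisms II and III both reduce to random token masking. A sketch of the double-dropout scheme, with the probabilities p_k and p_m as in the text and a hypothetical mask token name:

```python
import random

def random_mask(tokens, p_k, p_m, mask_token="<mask>", seed=0):
    """Mask mechanism II sketch: with probability p_k the dropout scheme
    fires for the whole sequence; if it does, each token is independently
    replaced by the mask token with probability p_m (Eq. 20's m_j <= eta).
    The seed is fixed here only to make the sketch reproducible."""
    rng = random.Random(seed)
    if rng.random() >= p_k:                 # scheme not enabled this pass
        return list(tokens)
    return [mask_token if rng.random() < p_m else t for t in tokens]

seq = ["<bos>", "h", "r", "t1", "r2", "t"]
print(random_mask(seq, p_k=0.0, p_m=0.5))   # disabled: sequence unchanged
print(random_mask(seq, p_k=1.0, p_m=1.0))   # enabled: every token masked
```

Mask mechanism III would apply the same operation, restricted to the previously generated action tokens.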
Mask mechanisms II and III are random masks, masked according to a certain probability value m_j.

Loss Function Design

The multimodal information of entities (image features and text-in-image features) is expected to enhance learning through fusion. However, experiments show that after incorporating multiple modalities, optimization suffers because one modality dominates in some scenarios. For example, image information dominates when the relation to infer concerns tasks such as "colour" or "item type", while the textual information in images dominates for relations such as "brand". Therefore, inspired by [54], a dynamic gradient adjustment mechanism is introduced: three models are trained separately, taking the two modalities and their concatenation as three inputs. By monitoring the contribution of each modality to the learning objective, the optimization of each modality is adaptively controlled, alleviating the imbalance of modality optimization. Three transformer encoders, denoted Enc(·), accept the three modal features. When decoding ψ_k := (r_1, t_1, ..., r_n, t, < eos >), Softmax is used in Eq. 21 to compute the distribution p_i^χ, where χ ∈ {fig, ocr}, b is the bias of the prediction model [55], and the addition of b/2 serves as bias compensation for single-modal prediction:

p_i^χ = Σ_{k=1}^{|ψ|} Softmax(MLP(Enc(ψ_n^χ; θ^χ), x_n^χ + b/2) · W_n^χ)_k    (21)

In the same way, the distribution of the concat feature p_i^concat is given in Eq. 22:

p_i^concat = Σ_{k=1}^{|ψ|} Softmax(MLP(Enc(ψ_n^concat; θ^concat), x_n^concat) · W_n^concat)_k    (22)

As Eqs. 23-25 show, a cross-entropy loss is used, where ε is a label smoothing hyperparameter ranging from 0 to 1 to avoid overfitting. A single-modal image feature, a single-modal OCR feature, and the multimodal feature defined as the concatenation of both are fed in respectively to compute three different losses.
At the same time, since the previously designed mask mechanisms shield some features, the loss caused by the masked tokens must be excluded. Also, to prevent the model from giving higher scores to shorter paths, the sum of log-likelihoods is divided by the length of the path:

L_fig = − (1 / (|p_mask^fig| · N)) Σ_{i=1}^{N} [ ε log p_t^fig + ((1 − ε)/(N − 1)) log p_i^fig ]    (23)

L_ocr = − (1 / (|p_mask^ocr| · N)) Σ_{i=1}^{N} [ ε log p_t^ocr + ((1 − ε)/(N − 1)) log p_i^ocr ]    (24)

L_concat = − (β / (|p_mask^concat| · N)) Σ_{i=1}^{N} [ ε log p_t^concat + ((1 − ε)/(N − 1)) log p_i^concat ]    (25)

Here β is a weight that encourages exploration: if Mask mechanism III is activated during training, i.e., part of the input path is masked out, a new trajectory is generated, so the weight of p_t^concat should be increased. To mitigate the multimodal contribution imbalance, the modal contribution difference ratio parameters (ρ_n^fig, ρ_n^ocr) are introduced in Eq. 26 to adaptively adjust the gradient of each modality, where ρ_n^fig is defined analogously as L_fig / L_ocr. As shown in Fig. 5, the coefficient coeff_n^u is integrated into the network of the corresponding modality via Eq. 27, following [54]; at the same time, Gaussian noise N is introduced to enhance the generalization ability of the model:

ρ_n^ocr = L_ocr / L_fig    (26)

coeff_n^u = { 1 − tanh(α · relu(ρ_n^u)), if ρ_n^u > 1;  1, otherwise }    (27)

∇W_u^{n+1} = ∇W_u^n × coeff_n^u + N(0, std(∇W_u^n) + e^{−8})    (28)

Here u ∈ {fig, ocr}, and α is a hyperparameter that controls the degree of modulation.

EXPERIMENTS

In this section, experiments are conducted on two newly established datasets, and several state-of-the-art (SOTA) baselines are used for comparison. The experiments comprise four parts: the main link prediction experiment, the ablation experiment, the training-set masking experiment, and the parameter interpretability analysis.
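The modulation of Eqs. 26-28 can be sketched as below. Treating ρ_fig as the reciprocal of ρ_ocr is an assumption, and NumPy arrays stand in for the gradients:

```python
import math
import numpy as np

def modulation_coeff(loss_fig, loss_ocr, alpha, mode):
    """Eqs. 26-27 sketch: when a modality's loss ratio rho exceeds 1,
    its gradient coefficient is shrunk by 1 - tanh(alpha * relu(rho));
    otherwise it stays 1. rho_fig = L_fig / L_ocr is an assumption."""
    rho = loss_ocr / loss_fig if mode == "ocr" else loss_fig / loss_ocr
    if rho > 1.0:
        return 1.0 - math.tanh(alpha * max(rho, 0.0))
    return 1.0

def modulated_gradient(grad, coeff, rng):
    """Eq. 28 sketch: scale the gradient by the coefficient and add
    Gaussian noise whose std follows the gradient's own spread."""
    noise = rng.normal(0.0, np.std(grad) + 1e-8, size=grad.shape)
    return grad * coeff + noise

rng = np.random.default_rng(0)
c_ocr = modulation_coeff(loss_fig=1.0, loss_ocr=2.0, alpha=1.0, mode="ocr")
c_fig = modulation_coeff(loss_fig=1.0, loss_ocr=2.0, alpha=1.0, mode="fig")
print(round(c_ocr, 3), c_fig)            # only the rho > 1 modality is damped
g = modulated_gradient(np.ones(4), c_fig, rng)
print(g.shape)                           # (4,)
```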
Datasets

In this subsection, two newly established datasets, OpenBG-IMG+ and OpenBG-Complete-IMG+, are introduced.

OpenBG-IMG+

A new dataset, OpenBG-IMG+, is created from part of the OpenBG-IMG dataset [56], a multimodal dataset in the e-commerce field released in the CCKS2022 Task Three competition. Since the data released by the competition carry no correct answers, the OpenBG-IMG valid set is used as the test set in this paper, and the training set is divided to provide a valid set. The resulting dataset contains 28,891 entities and 136 relations; only some of the head entities have image information, while none of the tail entities does. Each image corresponds to exactly one entity, with no duplication. Table 2 shows the specific statistics.

OpenBG-Complete-IMG+

A new repository, OpenBG-Complete-IMG+, is created on the basis of OpenBG-IMG+. Its training and valid sets are obtained from OpenBG-IMG+ by deleting the triples whose head entity has no image information; the test set remains unchanged. As in OpenBG-IMG+, no tail entity contains image information, but every head entity in the training set of OpenBG-Complete-IMG+ does. This new dataset contains 136 relations and 22,297 entities. Table 2 shows the specific statistics.

Baselines

To study the performance of IMKGA-SM, three categories of methods are used for comparison: (1) translation-based models: TransE [11], TransH [12], and TransD [13]; (2) nonlinear-based models: DistMult [16], ComplEx [18], and TuckER [17]; (3) a multimodal knowledge graph linking model: TransAE [46].

Evaluation Protocol
Similar to recent works [10] [25], as formalized in Eqs. 29 and 30, the mean reciprocal rank MRR and Hits@n, the average proportion of triples with rank at most n, are used to evaluate inference performance:

MRR = (1/|Q|) Σ_{i=1}^{|Q|} 1/rank_i = (1/|Q|) (1/rank_1 + 1/rank_2 + ... + 1/rank_{|Q|})    (29)

Here Q is the set of test queries, |Q| is the number of queries, and rank_i is the link prediction rank of the i-th triple [57]. The larger the MRR indicator, the better the prediction. I(·) is the indicator function: it is 1 if the condition is true and 0 otherwise. The three indicators HIT@1, HIT@3, and HIT@10 describe the probability that the top K (K = 1, 3, 10) highest-scoring entities in link prediction contain the correct entity [11]:

HIT@n = (1/|Q|) Σ_{i=1}^{|Q|} I(rank_i ≤ n)    (30)

Implementation Details

The experiments are based mainly on the knowledge graph link prediction task. To augment the training data, each original training triple is reversed to generate an inverse triple. The knowledge graph triples in the test dataset are ranked over all entities in descending order of probability, keeping the top ten predicted entities. Models are trained with the Adam [58] optimizer and analyzed with respect to hyperparameters and feature vectors.

Link Prediction Results

Link prediction results are shown in Table 3 (all scores are expressed as percentages), where the results of the most competitive baseline, TransAE [46], are underlined and the best results are highlighted in bold. The following points are observed from Table 3.
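Both metrics follow directly from the per-query ranks; a small self-contained implementation:

```python
def mrr(ranks):
    """Eq. 29: mean reciprocal rank over the test queries."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at(ranks, n):
    """Eq. 30: fraction of queries whose correct entity ranks in the top n."""
    return sum(1 for r in ranks if r <= n) / len(ranks)

ranks = [1, 2, 5, 12]             # illustrative link prediction ranks
print(round(mrr(ranks), 3))       # (1 + 1/2 + 1/5 + 1/12) / 4 = 0.446
print(hits_at(ranks, 1))          # 0.25
print(hits_at(ranks, 3))          # 0.5
print(hits_at(ranks, 10))         # 0.75
```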
It is seen that the accuracy of IMKGA-SM is better than that of all other models, while IMKGA-SM uses interpretable multi-hop reasoning; the model is thus shown to be both interpretable and highly accurate, achieving state-of-the-art performance. The benefit of introducing the multimodal knowledge graph on the OpenBG-IMG+ dataset is not obvious enough, so the new dataset retains only the triples whose head entity has image information. The results show that the improvement of IMKGA-SM on OpenBG-Complete-IMG+ is generally larger than on OpenBG-IMG+; this is speculated to be because the help of other multimodal information is drowned out when the dataset already has rich structural information. To further analyze the influence of image information, the new dataset OpenBG-Complete-IMG+, in which all head entities have image information, is created and the experiment is performed again.

Ablation Learning

In this section, the ablation study is divided into three parts, examining the influence of multimodality, reward expectation, and the mask mechanism on the model, with specific data analysis of the complete IMKGA-SM model.

IMKGA-SM (No Img) vs IMKGA-SM (MKG): Image embedding and OCR embedding are added to IMKGA-SM (MKG), together with the dynamic loss function adjustment mechanism; the link prediction results are shown in Table 3. The experiments show that the improvement on OpenBG-Complete-IMG+ is more stable than on OpenBG-IMG+.

IMKGA-SM (No Img) vs IMKGA-SM (RL): To verify the improvement brought by the reward expectation, IMKGA-SM (No Img) and IMKGA-SM (RL) are compared on the two datasets, with the link prediction results shown in Table 3. IMKGA-SM (RL) adds the reward expectation mechanism based on perceptual interaction and a reward-related mask. The results show that adding the reward expectation mechanism R̂ improves the model, though not by as much as MKG does.
IMKGA-SM (MKG+RL): IMKGA-SM (MKG+RL) adds the masking mechanisms on top of IMKGA-SM (MKG) and IMKGA-SM (RL); its effect on the OpenBG-IMG+ and OpenBG-Complete-IMG+ datasets is shown in Table 3. Compared with the multimodal knowledge graph linking baseline TransAE [46], IMKGA-SM improves by 17.67% on the OpenBG-IMG+ dataset and by 9.01% on the OpenBG-Complete-IMG+ dataset. Compared with the nonlinear-based baseline DistMult [16], IMKGA-SM improves by 50.39% on the OpenBG-IMG+ dataset and by 46.681% on the OpenBG-Complete-IMG+ dataset.

Table 4. Statistics of the masked training datasets.

Dataset          #Ent   #Rel  #Train   #Valid  #Test
OpenBG-IMG+80%   27839  136   158527   8344    10930
OpenBG-IMG+70%   22297  136   138479   7289    10930
OpenBG-IMG+47%   21817  136   92648    4877    10930
OpenBG-IMG+35%   21469  136   68599    3611    10930
OpenBG-IMG+28%   21215  136   55397    2916    10930

Training Data Masking

To further explore the impact of image and structural information on the results, part of the training data is masked and IMKGA-SM is used for experiments. It is speculated that the current dataset OpenBG-IMG+ may contain enough structural information for prediction, interfering with the analysis of visual information. To highlight the role of visual information, part of the training data is masked out to create datasets by controlling the frequency of head entities with image information, giving OpenBG-IMG+28%, OpenBG-IMG+35%, OpenBG-IMG+47%, OpenBG-IMG+70%, OpenBG-IMG+80%, and OpenBG-IMG+100%; the dataset information is shown in Table 4. The link prediction experiment is then carried out again, with the results shown in Figs. 6 and 7. IMKGA-SM shows clear advantages on datasets of different scales: the traditional methods improve significantly after structural information is added, while IMKGA-SM remains relatively stable across scales.
IMKGA-SM not only has generalization ability comparable to neural network models, but is also more interpretable than the other baseline methods.

Parameter Interpretability

This subsection analyzes parameter interpretability in three parts, examining the impact of different batch sizes, of label smoothing, and of the modulation impact.

The Influence of Different Batch Sizes N. Fig. 8 investigates the effect of different batch sizes N, set to N ∈ {16, 32, 64, 128, 256, 512}. It is observed that as N increases, the performance of IMKGA-SM first rises and then declines steadily in most cases, presumably because undertraining and overfitting negatively affect the model. The results show that an appropriate training batch size improves the effectiveness of the inference model; from the experimental results, the optimal parameter is N = 16.

The Influence of Different Label Smoothing ε. Label smoothing is used in the loss function as a regularization method to prevent overfitting. With α = 0.5, the effect of the label smoothing parameter ε is shown in Fig. 9, where ε is varied over {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}. It is observed that as ε decreases, the performance of IMKGA-SM first rises and then declines steadily in most cases, probably because MRR is the main evaluation indicator and shrinking its proportion negatively affects the final result. The results show that the optimal parameter is ε = 0.7.

The Influence of Different Modulation Impact α. With ε = 0.9, the effect of the modulation impact α on IMKGA-SM is demonstrated in Fig. 10. From the results, α = 0.6 is observed to be the optimal value on this dataset.

Figure 9. Link prediction results on OpenBG-Complete-IMG+ under different ε.
Figure 10. Link prediction results on OpenBG-Complete-IMG+ under different α.
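The label smoothing swept by ε above follows the standard formulation: the true class keeps probability mass 1 − ε and the remaining ε is spread uniformly over all classes. The paper's full loss is not reproduced here; this is only a generic sketch of that smoothing term (the function name is ours):

```python
import math


def label_smoothed_nll(log_probs, target, eps, num_classes):
    """Cross-entropy with label smoothing.

    log_probs:   per-class log-probabilities for one example.
    target:      index of the true class.
    eps:         smoothing parameter (eps = 0 recovers plain NLL).
    num_classes: number of classes the eps mass is spread over.
    """
    smooth = eps / num_classes
    loss = 0.0
    for c, lp in enumerate(log_probs):
        # Target gets (1 - eps) plus its uniform share; others get eps/K.
        weight = (1.0 - eps) + smooth if c == target else smooth
        loss -= weight * lp
    return loss
```

With eps = 0 the smoothed loss reduces exactly to the negative log-likelihood of the target class.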
CONCLUSIONS AND DISCUSSION

In this paper, we investigate how to effectively utilize multimodal auxiliary features for multi-hop knowledge graph inference, aiming to improve the accuracy of model inference and achieve interpretability simultaneously. An efficient model, IMKGA-SM, is proposed, which outperforms existing methods on the multimodal knowledge graph inference task. In IMKGA-SM, structural features and multimodal data are first extracted in depth; a return-to-go mechanism based on perceptual similarity is then constructed and applied within a large sequence-modeling framework. In addition, three mask mechanisms are designed to alleviate the problem of data sparsity, and a multimodal autoregressive loss-function adjustment mechanism is introduced to take full advantage of the multimodal signals. Finally, experimental results show that IMKGA-SM achieves higher effectiveness and interpretability than other trending rivals on knowledge graph link prediction tasks. IMKGA-SM still requires effective methods to minimize the negative impact of sparse data; these tasks are left for future work.

Figure 2. The multimodal feature extraction module.
Figure 3. Unified interpretable multimodal knowledge graph sequence framework.
Figure 4. The fusion encoder.
Figure 5. Autoregressive dynamic loss regulation.
Figure 6. Link prediction results (MRR) on OpenBG-Complete-IMG+x%.
Figure 7. Link prediction results (HIT@1) on OpenBG-Complete-IMG+x%.
Figure 8. Link prediction results on OpenBG-Complete-IMG+ under different N.

4.6.1 IMKGA-SM (No Img) vs IMKGA-SM (MKG). To further explore the role of image information, IMKGA-SM (No Img) and IMKGA-SM (MKG) are compared on the two datasets, and the link prediction results are shown in Table 3.
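Figures 6 and 7 report MRR and HIT@1. For reference, these standard rank-based link prediction metrics are computed from the rank of the true entity for each test query; a minimal sketch (the function name is ours):

```python
def mrr_and_hits(ranks, k=1):
    """Mean Reciprocal Rank and Hits@k over a set of test queries.

    ranks: 1-based rank of the true entity in each query's candidate list.
    k:     cutoff for Hits@k (k=1 gives HIT@1 as in Figure 7).
    """
    n = len(ranks)
    mrr = sum(1.0 / r for r in ranks) / n
    hits = sum(1 for r in ranks if r <= k) / n
    return mrr, hits
```

For example, ranks [1, 2, 4] give MRR = (1 + 1/2 + 1/4) / 3 ≈ 0.583 and HIT@1 = 1/3.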
Table 1. Summary of Existing Methods for Knowledge Graph Link Prediction (reasoning algorithms based on logical rules, graph structure, knowledge graph embedding, deep neural networks, and reinforcement learning, compared in terms of interpretability, performance, robustness, scalability, and dependence on expert experience).

The calculation of head_i^{Mq}, head_i^{MI}, and head_i^{Mo} is similar to that of head_i^{Mr}, where head_i^{Ma} is redefined via Eq. (17):

    head^{Ma} = Attn(x^a W_Q^a, x^a W_K^a, x^a W_V^a)    (17)

Hence, Eqs. (18) and (19) model the hidden state of encoder layer l:

    X_l^U = MHA(LN(X_τ^U)) + X_{l-1}^U    (18)

Table 2. Statistics of the Experimental Datasets

    Dataset               #Ent   #Rel  #Train   #Valid  #Test  #images
    OpenBG-IMG+           28891  136   197269   10383   10930  14718
    OpenBG-Complete-IMG+  22297  136   138479   7289    10930  14718

Table 3. Results of Knowledge Graph Link Prediction on the OpenBG-IMG+ and OpenBG-Complete-IMG+ Datasets.

Table 4. Statistics of the Datasets in the Training Data Masking Experiment.

REFERENCES

[1] Q. Wang, Z. Mao, B. Wang, and L. Guo, "Knowledge graph embedding: A survey of approaches and applications," IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 12, pp. 2724-2743, 2017.
[2] X. Han and J. Zhao, "Structural semantic relatedness: A knowledge-based method to named entity disambiguation," in Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, 2010, pp. 50-59.
[3] S. Hu, L. Zou, J. X. Yu, H. Wang, and D. Zhao, "Answering natural language questions by subgraph matching over knowledge graphs," IEEE Transactions on Knowledge and Data Engineering, vol. 30, no. 5, pp. 824-837, 2017.
[4] S. Baier, Y. Ma, and V. Tresp, "Improving visual relationship detection using semantic modeling of scene descriptions," in International Semantic Web Conference, 2017, pp. 53-68.
[5] M. Hildebrandt, S. S. Sunder, S. Mogoreanu, M. Joblin, A. Mehta, I. Thon, and V. Tresp, "A recommender system for complex real-world applications with nonlinear dependencies and knowledge graph context," in European Semantic Web Conference, 2019, pp. 179-193.
[6] A. Rossi, D. Barbosa, D. Firmani, A. Matinata, and P. Merialdo, "Knowledge graph embedding for link prediction: A comparative analysis," ACM Transactions on Knowledge Discovery from Data (TKDD), vol. 15, no. 2, pp. 1-49, 2021.
[7] J. Li, R. Wang, N. Zhang, W. Zhang, F. Yang, and H. Chen, "Logic-guided semantic representation learning for zero-shot relation classification," in Proceedings of the 28th International Conference on Computational Linguistics, 2020, pp. 2967-2978.
[8] C. Meilicke, M. W. Chekol, D. Ruffinelli, and H. Stuckenschmidt, "Anytime bottom-up rule learning for knowledge graph completion," in Proceedings of the 28th International Joint Conference on Artificial Intelligence, 2019, pp. 3137-3143.
[9] N. Lao and W. W. Cohen, "Relational retrieval using a combination of path-constrained random walks," Machine Learning, vol. 81, no. 1, pp. 53-67, 2010.
[10] W. Xiong, T. Hoang, and W. Y. Wang, "DeepPath: A reinforcement learning method for knowledge graph reasoning," in Conference on Empirical Methods in Natural Language Processing, 2017, pp. 564-573.
[11] A. Bordes, N. Usunier, A. Garcia-Durán, J. Weston, and O. Yakhnenko, "Translating embeddings for modeling multi-relational data," in Proceedings of the 26th International Conference on Neural Information Processing Systems, vol. 2, 2013, pp. 2787-2795.
[12] Z. Wang, J. Zhang, J. Feng, and Z. Chen, "Knowledge graph embedding by translating on hyperplanes," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28, no. 1, 2014, pp. 1112-1119.
[13] G. Ji, S. He, L. Xu, K. Liu, and J. Zhao, "Knowledge graph embedding via dynamic mapping matrix," in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2015, pp. 687-696.
[14] Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu, "Learning entity and relation embeddings for knowledge graph completion," in Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015, pp. 2181-2187.
[15] M. Nickel, V. Tresp, and H.-P. Kriegel, "A three-way model for collective learning on multi-relational data," in Proceedings of the 28th International Conference on Machine Learning, 2011, pp. 809-816.
[16] B. Yang, W.-t. Yih, X. He, J. Gao, and L. Deng, "Embedding entities and relations for learning and inference in knowledge bases," in International Conference on Learning Representations, 2015.
[17] I. Balažević, C. Allen, and T. Hospedales, "TuckER: Tensor factorization for knowledge graph completion," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 5185-5194.
[18] T. Trouillon, J. Welbl, S. Riedel, É. Gaussier, and G. Bouchard, "Complex embeddings for simple link prediction," in Proceedings of the 33rd International Conference on Machine Learning, vol. 48, 2016, pp. 2071-2080.
[19] Q. Wang, P. Huang, H. Wang, S. Dai, W. Jiang, J. Liu, Y. Lyu, Y. Zhu, and H. Wu, "CoKE: Contextualized knowledge graph embedding," arXiv preprint arXiv:1911.02168, 2019.
[20] T. Dettmers, P. Minervini, P. Stenetorp, and S. Riedel, "Convolutional 2D knowledge graph embeddings," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.
[21] J. Liu and L. Duan, "A survey on knowledge graph-based recommender systems," in 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), vol. 5, 2021, pp. 2450-2453.
[22] J. Ke, F. Xiao, H. Yang, and J. Ye, "Learning to delay in ride-sourcing systems: A multi-agent deep reinforcement learning framework," IEEE Transactions on Knowledge and Data Engineering, vol. 34, no. 5, pp. 2280-2292, 2022.
[23] Ł. Kaiser, M. Babaeizadeh, P. Miłos, B. Osiński, R. H. Campbell, K. Czechowski, D. Erhan, C. Finn, P. Kozakowski, S. Levine et al., "Model based reinforcement learning for Atari," in International Conference on Learning Representations, 2019.
[24] W. Huang, J. Liu, T. Li, T. Huang, S. Ji, and J. Wan, "FedDSR: Daily schedule recommendation in a federated deep reinforcement learning framework," IEEE Transactions on Knowledge and Data Engineering, pp. 1-1, 2021.
[25] R. Das, S. Dhuliawala, M. Zaheer, L. Vilnis, I. Durugkar, A. Krishnamurthy, A. Smola, and A. McCallum, "Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning," in International Conference on Learning Representations, 2018.
[26] R. Li and X. Cheng, "DIVINE: A generative adversarial imitation learning framework for knowledge graph reasoning," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 2642-2651.
[27] H. Wang, S. Li, R. Pan, and M. Mao, "Incorporating graph attention mechanism into knowledge graph reasoning based on deep reinforcement learning," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 2623-2631.
[28] C. Finn, P. Abbeel, and S. Levine, "Model-agnostic meta-learning for fast adaptation of deep networks," in Proceedings of the 34th International Conference on Machine Learning, vol. 70, 2017, pp. 1126-1135.
[29] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[30] Y. Liu, S. Pan, Y. G. Wang, F. Xiong, L. Wang, Q. Chen, and V. C. Lee, "Anomaly detection in dynamic graphs via transformer," IEEE Transactions on Knowledge and Data Engineering, pp. 1-1, 2021.
[31] S. Zheng, J. Lu, H. Zhao, X. Zhu, Z. Luo, Y. Wang, Y. Fu, J. Feng, T. Xiang, P. H. Torr, and L. Zhang, "Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers," in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 6877-6886.
[32] J. Jiang, J. Zhu, M. Bilal, Y. Cui, N. Kumar, R. Dou, F. Su, and X. Xu, "Masked Swin Transformer UNet for industrial anomaly detection," IEEE Transactions on Industrial Informatics, vol. 19, no. 2, pp. 2200-2209, 2023.
[33] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, and I. Mordatch, "Decision transformer: Reinforcement learning via sequence modeling," Advances in Neural Information Processing Systems, vol. 34, pp. 15084-15097, 2021.
[34] M. Janner, Q. Li, and S. Levine, "Offline reinforcement learning as one big sequence modeling problem," Advances in Neural Information Processing Systems, vol. 34, pp. 1273-1286, 2021.
[35] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez, Y. Sulsky, J. Kay, J. T. Springenberg et al., "A generalist agent," arXiv preprint arXiv:2205.06175, 2022.
[36] G. Niu, Y. Zhang, B. Li, P. Cui, S. Liu, J. Li, and X. Zhang, "Rule-guided compositional representation learning on knowledge graphs," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 3, 2020, pp. 2950-2958.
[37] S. Muggleton and L. De Raedt, "Inductive logic programming: Theory and methods," The Journal of Logic Programming, vol. 19, pp. 629-679, 1994.
[38] Z. Sun, Z.-H. Deng, J.-Y. Nie, and J. Tang, "RotatE: Knowledge graph embedding by relational rotation in complex space," arXiv preprint arXiv:1902.10197, 2019.
[39] R. Socher, D. Chen, C. D. Manning, and A. Y. Ng, "Reasoning with neural tensor networks for knowledge base completion," in Proceedings of the 26th International Conference on Neural Information Processing Systems, vol. 1, 2013, pp. 926-934.
[40] M. Schlichtkrull, T. N. Kipf, P. Bloem, R. v. d. Berg, I. Titov, and M. Welling, "Modeling relational data with graph convolutional networks," in European Semantic Web Conference, 2018, pp. 593-607.
[41] Y. Shen, P.-S. Huang, M.-W. Chang, and J. Gao, "Implicit ReasoNet: Modeling large-scale structured relationships with shared memory," arXiv preprint arXiv:1611.04642, 2017.
[42] G. Wan, S. Pan, C. Gong, C. Zhou, and G. Haffari, "Reasoning like human: Hierarchical reinforcement learning for knowledge graph reasoning," in Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2021.
[43] G. Wan and B. Du, "GaussianPath: A Bayesian multi-hop reasoning framework for knowledge graph reasoning," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 5, 2021, pp. 4393-4401.
[44] R. Xie, Z. Liu, H. Luan, and M. Sun, "Image-embodied knowledge representation learning," in Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), 2017, pp. 3140-3146.
[45] R. Xie, Z. Liu, J. Jia, H. Luan, and M. Sun, "Representation learning of knowledge graphs with entity descriptions," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30, no. 1, 2016.
[46] Z. Wang, L. Li, Q. Li, and D. Zeng, "Multimodal data enhanced representation learning for knowledge graphs," in 2019 International Joint Conference on Neural Networks (IJCNN), 2019, pp. 1-8.
[47] M. Wang, S. Wang, H. Yang, Z. Zhang, X. Chen, and G. Qi, "Is visual context really helpful for knowledge graph? A representation learning perspective," in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 2735-2743.
[48] Y. Bai, X. Lv, J. Li, L. Hou, Y. Qu, Z. Dai, and F. Xiong, "SQUIRE: A sequence-to-sequence framework for multi-hop knowledge graph reasoning," arXiv preprint arXiv:2201.06206, 2022.
[49] X. Zhu, X. Wu, A. K. Elmagarmid, Z. Feng, and L. Wu, "Video data mining: Semantic indexing and event detection from the association perspective," IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 5, pp. 665-677, 2005.
[50] Z. Tian, W. Huang, T. He, P. He, and Y. Qiao, "Detecting text in natural image with connectionist text proposal network," in European Conference on Computer Vision, 2016, pp. 56-72.
[51] B. Shi, X. Bai, and C. Yao, "An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 11, pp. 2298-2304, 2016.
[52] Y. Wang, X. Chen, L. Cao, W. Huang, F. Sun, and Y. Wang, "Multimodal token fusion for vision transformers," in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 12176-12185.
[53] G. V. Singh, M. Firdaus, A. Ekbal, and P. Bhattacharyya, "EmoInt-Trans: A multimodal transformer for identifying emotions and intents in social conversations," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 31, pp. 290-300, 2023.
[54] X. Peng, Y. Wei, A. Deng, D. Wang, and D. Hu, "Balanced multimodal learning via on-the-fly gradient modulation," in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 8228-8237.
[55] R. Salakhutdinov and H. Larochelle, "Efficient learning of deep Boltzmann machines," in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010, pp. 693-700.
[56] S. Deng, H. Chen, Z. Li, F. Xiong, Q. Chen, M. Chen, X. Liu, J. Chen, J. Z. Pan, H. Chen et al., "Construction and applications of open business knowledge graph," arXiv preprint arXiv:2209.15214, 2022.
[57] M. Nayyeri, G. M. Cil, S. Vahdati, F. Osborne, M. Rahman, S. Angioni, A. Salatino, D. R. Recupero, N. Vassilyeva, E. Motta et al., "Trans4E: Link prediction on scholarly knowledge graphs," Neurocomputing, vol. 461, pp. 530-542, 2021.
[58] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," CoRR, vol. abs/1412.6980, 2014.
arxiv
Challenges and Opportunities in Providing Small Farmers Equal Access to Wealth via Rural Credit in Brazil

Vagner Figueredo de Santana, IBM Research, Brazil
Raquel Zarattini Chebabi, IBM Research, Brazil
David Millen, IBM Research, Brazil

Manuscript submitted to ACM. arXiv:2304.11255v1 [cs.HC] 21 Apr 2023

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

ACM Reference Format: Vagner Figueredo de Santana, Raquel Zarattini Chebabi, and David Millen. 2023. Challenges and Opportunities in Providing Small Farmers Equal Access to Wealth via Rural Credit in Brazil. In Proceedings of . ACM, New York, NY, USA, 18 pages. https://doi.org/...

CCS Concepts: • Human-centered computing → Field studies

Additional Key Words and Phrases: agriculture; rural credit; fairness in credit offerings; equitable access to loans

Fig. 1. Photo diary picture showing the landscape of the region where the study took place, Eastern State of São Paulo, Brazil.

Agriculture is impacted by multiple variables such as weather, soil, crop, stocks, socioeconomic context, cultural aspects, and supply and demand, just to name a few. Hence, understanding this domain and identifying the challenges faced by stakeholders is hard to scale due to its highly localized nature.
This work builds upon six months of field research and presents challenges and opportunities for stakeholders acting in the rural credit ecosystem in Brazil, highlighting how small farmers struggle to access higher values in credit. This study combined two methods for understanding challenges and opportunities in the rural credit ecosystem in Brazil: (1) a study that took place in a community of farmers in Brazil, based on participant observation of the work processes and interactions of 20 informants (bank employees and farmers); (2) design thinking workshops with teams from 3 banks, with 15-20 participants each. The results show that key user experience challenges are tightly connected to the heterogeneity of farmer profiles and contexts of use, involving the technology available, domain skills, level of education, and connectivity, among others. In addition to presenting data collected from interactions with informants and experiences resulting from active participant observation, we discuss a holistic view of how recommender systems could be used to promote better bank-farmer interactions, improve the farmer experience in the whole process, and promote equitable access to loans beyond microcredit.

INTRODUCTION

Agriculture involves business, science, or the activity of farming 1. As it continues to play a key role in our society, there are still places where crops are managed as they were centuries ago, making agriculture one of the most heterogeneous work activities still in place in the twenty-first century in terms of tools, technologies, skills, and socioeconomic contexts. This complex ecosystem 2 creates challenges and opportunities for Human-Computer Interaction (HCI) practitioners in understanding, designing, and evaluating products and services in this realm. Moreover, this intrinsically context-dependent aspect of agriculture (weather, soil, crop, stocks, socioeconomic context, supply, demand, cultural aspects, etc.)
poses challenges to studies aiming at any type of generalization of research results. Thus, in this work, we convey this challenge by using the term localized challenges to refer to this context-dependent aspect, intrinsic to agriculture. Moreover, promoting agriculture is strategic for multiple economies around the globe, and Brazil is not an exception. However, few solutions are designed with adoption by small farmers in mind, or with the infrastructure and cultural challenges faced in the Global South [6]. Hence, this work discusses the role of technology and its potential to provide small farmers equal access to wealth in Brazil. The biggest agricultural credit initiative in Brazil is Rural Credit, and it has the potential to change the lives of small farmers, but there is a gap between the credit lines offered and the credit they actually obtain; this is where this research is situated. More details on Rural Credit in Brazil are provided next.

Rural Credit in Brazil

Rural Credit is the main component of Brazil's strategy for agricultural development [2]. The first law related to rural credit in Brazil dates back to the 1960s. Currently, the main document guiding the whole process is the Manual of Rural Credit 3. It is an instrument with approximately 500 pages defining the following:

• Stakeholders in the credit ecosystem (e.g., public banks, private banks, cooperatives, farmers 4);
• Lines of credit to cover costing (e.g., seeds and pesticides), industrialization (e.g., tractors and machinery), and commercialization (e.g., packaging and transport);
• Credit types;
• Sources of money;
• Laws guiding processes;
• Cash flow;
• Responsibilities and liabilities for farmers and banks.

This provides a glimpse of the ecosystem's interconnectedness and complexity. For instance, banks participating in the rural credit ecosystem must employ 30% of cash deposits (updated monthly) as rural credit.
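The lending quota above, together with the fines and credit-size ceilings detailed in this section, can be sketched in a few lines. The thresholds are the ones quoted in the text (R$ 360,000 and R$ 1,760,000; fines of up to 40% of the 30% deposit quota); the flat fine formula itself is a simplified illustration, not the Manual's actual rule:

```python
# Sketch of the credit-size tiers and lending-quota fine described in this
# section. The flat fine formula is a simplified illustration ("up to 40%").

def farmer_tier(loan_brl: float) -> str:
    """Classify a producer by the rural-credit ceilings quoted in the text."""
    if loan_brl <= 360_000:
        return "small"
    if loan_brl <= 1_760_000:
        return "medium"
    return "large"

def lending_fine(deposits_brl: float, lent_brl: float,
                 quota: float = 0.30, fine_rate: float = 0.40) -> float:
    """Fine on the non-lent part of the 30% deposit quota."""
    required = quota * deposits_brl
    shortfall = max(0.0, required - lent_brl)
    return fine_rate * shortfall

print(farmer_tier(300_000))              # small
print(lending_fine(1_000_000, 200_000))  # 40% of (300k - 200k) = 40000.0
```

A bank that lends at least its full quota pays nothing: `lending_fine(1_000_000, 400_000)` returns `0.0`.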
In cases where this amount is not met, the bank is subject to fines of up to 40% of the non-lent money. In addition, banks offering rural credit are co-responsible for the proper use of the money. This involves the responsibility of performing inspections throughout the harvest and being subject to fines in case of misuse of the funds (e.g., deforestation). Regarding support offered to farmers, the organization promoting the farming activity (e.g., banks and cooperatives) can associate credit with technical support (i.e., Educative Rural Credit) as a way of mitigating risks related to yield and environmental impacts. In sum, banks need to reach and engage farmers, provide support from planning to commercialization, and inspect the proper use of land and credit, all the while respecting the environment.

1 https://dictionary.cambridge.org/us/dictionary/english/agriculture
2 The use of the term ecosystem refers to the complex network of rural credit business processes, its multiple stakeholders, the digital systems in place, and its connection with the environment.

The amount of money lent to small farmers is up to R$ 360.000,00 (approximately U$ 65,500.00 at the current exchange rate), to medium farmers up to R$ 1.760.000,00 (approximately U$ 320,000.00 at the current exchange rate), and to large producers more than R$ 1.760.000,00. These characteristics highlight how challenging it is to design support systems for stakeholders in the rural credit ecosystem and how they differ from the microcredit initiatives present in the literature (please refer to the Related Work section) when equitable access to wealth is a key objective.

Context of the Study

This field study was performed in Brazil, a country with a population of approximately 210 million people 5, an area of 8.5 million km² (3.3 million sq mi), and a yearly agriculture production of $140 billion dollars (approximately 21% of the gross domestic product).
Agriculture in Brazil is a highly concentrated industry, with 1% of the farmers owning 45% of the agricultural area [16]. Rural credit is highly skewed towards these large farms, as 75% of rural credit ($31B dollars) is lent to them 6. This reinforces the importance of rural credit in the country and the opportunity of using technology to ease the population's access to multiple lines of credit reaching up to large amounts of money, potentially reducing existing inequality and stimulating economic development in their communities. This work presents an in-depth six-month study involving small farmers from a rural community in the Eastern State of São Paulo, Brazil (Figure 2) and the experiences they have had with rural credit. Ethnographic methods, including participant observation, were employed, aiming at understanding the challenges, opportunities, and experiences people had during the rural credit process. We used field observations, interviews, and personal experiences to understand the localized agricultural knowledge, workflows, and ways to navigate the rural credit process, its requirements, and bureaucracies.

Related Work

Participant Observation has been used in multiple ethnographic studies, allowing researchers to learn through different types of observation, data collection, analysis, and interpretation [20]. In the field of HCI, multiple Participant Observation studies have been performed considering different populations and contexts. Examples include children with severe motor impairments [8], business intelligence analysts [7], and therapists for children with autism [9]. In the context of agriculture, ethnographic methods and participant observation have been applied in studies aiming at understanding the urban agriculture community in Australia [10], information-sharing practices among rural users in China [15], and the role that collaborative technologies play in skill sharing in gardening activities [12].
This work builds on prior studies of credit systems (e.g., microfinance, microcredit) and farm management technologies. For example, the HCI literature considers gaps in handheld technologies for rural microfinance [17], the digitization of rural microfinance processes in India [19], the informal microcredit practices of small business owners in Brazil [5], a farm management platform dealing with low connectivity and adoption challenges [21], and experiences and perceived obligations related to the use of instant loan platforms in India [18]. Having in mind aspects that result in unequal access to wealth, Marlow [13] discusses how women entrepreneurs are disadvantaged by their gender, and Adams et al. [2] present the impacts on poor farmers of unequal access to loans, how initial wealth results in differential access to loans, and how farmers are impacted by different types of subsidies. Xiao-Hong [23] presents how farmers created credit cooperatives to deal with access to loans and reduce interest rates. Yang et al. [24] report that, in Togo, less experience, lack of membership in farmers' organizations, low income from the main activity, lack of participation in training, or absence from meetings to set up projects are the main attributes considered by financial institutions when assessing loan requests. These characteristics end up preventing farmers from accessing high-value loans. In addition, Neves et al. [14] argue that rural credit in Brazil alone cannot raise the social welfare of low-income farmers and that this support should be systemic, for instance, including education and other types of support. In this work, we aim at detailing some of the complexity of the rural credit ecosystem in Brazil and identifying opportunities for supporting small farmers in a systemic way.
Contribution Statement

Whilst previous works have mapped multiple practices of farmers around the world, there is still a valuable path to reveal the experiences farmers have had with rural credit in an ecosystem that counts on the following key differentiators:

• The government is a major player (legislation and credit offering via public banks);
• Private and public banks collaborate when avoiding fines and compete when offering rural credit as a product;
• Producers span different scales, from small properties to huge farms;
• From low-tech support to highly educated workers, costly machinery, and state-of-the-art technologies.

This said, the present work provides two main contributions:

(1) A mapping of the rural credit workflow in Brazil, its stakeholders (e.g., banks, farmers, credit specialists, credit inspectors, technicians), and their social interactions.
(2) Implications for design and opportunities for recommender systems aiming at improving equality in loan offerings in a systemic way.

The presented outcomes combine results from the Participant Observation study (bottom-up perspective) with requirements identified in Design Thinking workshops our lab ran with staff from 3 banks (top-bottom perspective). This paper is structured as follows: section 2 details the method followed in the study, section 3 discusses the findings in terms of challenges and opportunities for HCI researchers, and section 4 presents the conclusions and future research directions.

METHOD

This study combined two methods for understanding challenges and opportunities in the rural credit ecosystem in Brazil: (1) a study that took place in a community of farmers in Brazil, based on participant observation of the work processes and interactions of 20 informants (bank employees and farmers); (2) design thinking workshops with teams from 3 banks, with 15-20 participants each.
Next, we present how Participant Observation and Design Thinking were employed to understand localized challenges connecting rural credit and farmers. Participant Observation is a qualitative method, with roots in ethnographic research, whose objective is to help researchers learn the in-depth perspectives held by studied populations, being interested both in knowing what those diverse perspectives are and in understanding the interplay among them [11]. Design Thinking has multiple definitions, and it is hard to find a single definition covering its plurality of methods, activities, and artifacts. However, some aspects of Design Thinking converge: its aim at innovation, human needs, business success, and problem-solving exploratory practices for products/services, leveraging designers' toolkits deeply centered on human processes [3,4] 7. Activities covered in the Design Thinking workshops run in our lab with teams from 3 banks include As-is Scenario Map, To-be Scenario Map, Empathy Map, Hopes and Fears, Stakeholders Map, and Prioritization Grid 8. Next, we provide details of the Participant Observation study. Then, in the results section, we triangulate requirements and needs identified in both Participant Observation (bottom-up) and Design Thinking workshops (top-bottom).

Participant Observation

Participants: The people contacted during the study came mainly from two different profiles: small farmers (16 people) and bank employees (4 people). The small farmers are people from the small community that one of the authors of this paper visits regularly, where he owns a small property. A Participant Observation study can be covert or overt. In a covert study, people from the studied community/group are not aware of the researcher's goal or activity. In an overt study, people are aware of the researcher's activity/background and know that a participant observation study is being performed. Different studies involve different characteristics regarding covert vs. overt observation.
In this study, the farmers know the researcher and know that he works with information technology, but they did not know about the study itself.

7 https://designthinking.ideo.com/
8 https://www.ibm.com/design/thinking/page/toolkit

This previous knowledge about the researcher prevents any valuation (positive or negative) of the researcher trying to be native. The author in contact with the farmers has been involved with agriculture for the past 5 years. This engagement also included past interactions on different topics. Thus, during this study, the researcher added an interest in rural credit to the list of topics to talk about. In this sense, the study was partially covert considering the interactions and questions related to the process and experiences regarding rural credit. Other interactions involving agriculture, the environment, and real estate were already in place prior to this study, in an overt way, supporting rapport building. These aspects and the already established link supported the active participation applied and the observation criteria presented in the procedure subsection.

Demographics: Due to the covert aspect of the study, detailed demographics were not accessible, and asking such questions could impact the study. However, from the interactions with the participants, it is possible to highlight the following participants' characteristics:

• 16 farmers and producers (from 3 different cities);
• Farmers: 11 (as primary or secondary occupation); artisans: 3; microbrewery owners: 2;
• Ages ranging from 38 to 68.

All farmers and 1 artisan mentioned they had had in-person credit assistance. Two of the farmers informed us that they had tried using automated credit assistance but were not able to obtain it and had to go to the bank in person.
Procedure: The social situations in which this study took place followed the five criteria for participant observation (i.e., simplicity, accessibility, unobtrusiveness, permissibleness, and frequently recurring activities), as presented in [20]. Moreover, active participation was applied considering the objective of learning about rural credit based on what members of the community have already done. The interactions with farmers were in the form of unstructured interviews conducted by convenience, in the most natural possible way. The interactions occurred during gatherings and visits to each other's properties, in public or semi-public settings. The topics in these conversations included:

• Prior good/bad experiences with rural credit;
• Type of credit considered;
• Line of credit;
• Blockers faced;
• Suggestions about doing business in the region;
• The whole credit process.

The visits to the community were performed similarly to how other people owning properties perform them, i.e., weekly or bi-weekly. In addition, interactions were planned to occur in the most credible way, showing interest in starting in agribusiness as a secondary occupation, which is indeed a plan of the researcher who was performing the participant observation. In order to learn how bankers and small farmers interact during the rural credit process, we decided to propose a feasible project and seek a small farm loan. The goal of the study was to dive deep into the process of requesting rural credit, going from obtaining the required documentation to just before signing a contract. Regarding bank employees, they were four bank operators from the same bank brand but from three different cities. The bank brand is one of the key players in the rural credit ecosystem in Brazil. In preparation for these interactions, the author performing the participant observation attended different courses about the chosen production prior to preparing a concrete project.
As a first step in selecting a small farm project, a survey of common crops and products from the region was performed. The goal was to select a product under the following restrictions:

• To require management that matches the periods of visits to the small property;
• To fit the area of the small property (approximately 3500 m² or 0.86 acre);
• To be profitable/economically viable.

The region is known for handcraft works (Figure 3), crops that need cold weather (e.g., atemoya, strawberry, grape), and also agritourism (e.g., fishing weirs alongside family-owned restaurants, grape plantations from wine producers, micro-breweries). Given that the area considered is small, the initial goal was to identify a perennial/semi-perennial crop. However, the identified crops would require constant management. An additional possibility was related to handcraft or fermented/distilled beverages. Thus, having in mind that the region already has breweries (Figure 4) and wineries, the opportunity was to propose something in the area of spirits. Once the production was selected, the project could be prepared.

Materials: In terms of the materials used and how the study was documented, notes were taken post-interaction (in a covert way) and, when possible, pictures were taken to register places and outcomes of interactions with stakeholders in the form of a photo diary (e.g., Figure 6). No pictures were taken of people, to respect privacy and the validity of the study. The goal of using the photo diary was to support recalling the conversations' content in post-interaction notes and during result analysis.

Analysis: The result analysis considered processing the photo diary, consolidated field notes, and stakeholders' quotes and moods. Then, experiences with rural credit and opportunities for HCI practitioners were mapped onto an end-to-end workflow of rural credit in Brazil.
The workflow details steps, stakeholders, social interactions, design implications, and opportunities for researchers designing/developing recommender systems in the Brazilian rural credit ecosystem aiming at providing equal access. Finally, we also summarize the characteristics of the Brazilian rural credit ecosystem in a mind map (Figure 8) by connecting the main entities, terms, processes, and stakeholders, and highlighting (in bold edges) opportunities for designing recommender systems providing equal access to small farmers.

Design Thinking Workshops

The Design Thinking workshops were organized to bring in specialists from banks to detail the challenges they face in the rural credit process and to brainstorm solutions. From our side, we also had scientists and technical specialists in these sessions to understand the problems and opportunities in this realm. Each workshop was a one-day event comprising multiple activities such as Stakeholder Mapping, Scenario Mapping (As-is and To-be), Empathy Map, and Prioritization Grid. Each workshop involved 15-20 people, including our staff and bank participants.

Fig. 6. Photo diary picture of a bank employee's handwritten note suggesting a contact in a different agency. The employee mentioned that in that agency it was hard to find anyone knowledgeable about rural credit and that I should look for agencies in another city; blur applied to the note due to privacy issues. The words in Portuguese refer to agency and manager.

RESULTS

In this section, we present the identified nuances of the rural credit workflow in Brazil (Figure 7) and the implications for HCI research, from a design perspective, as well as opportunities for fair recommender system technologies that could change the status quo.
The identified workflow encompasses:

• Simulation of credit;
• Optional technical (e.g., agronomist, veterinarian) support as part of educative credit;
• Diagnosis support for the farmer to improve production;
• Warranties and associated documentation required;
• Project setup and fine-tuning;
• A loop involving project execution and verification of credit use until the end of the project.

Figure 7 also highlights that most of the social interactions occur between the producer and the credit specialist, followed by the producer-credit inspector and producer-technician interactions. In addition to the results presented from the participant observation study, we also triangulate these outcomes with insights from the Design Thinking workshops our lab ran with bank staff. In the period in which this work took place, our lab interacted with 3 banks interested in brainstorming solutions for credit and related products/services. The Design Thinking sessions were run for a single bank at a time; bank names are omitted due to confidentiality terms. The rationale for combining these two sources of information is to triangulate the bottom-up requirements provided by the farmers with the top-bottom requirements provided by the bank staff. The next subsections present each of the steps of the rural credit workflow and the respective challenges and opportunities for equal access to wealth via high-value loans with low interest rates.

Rural Credit Simulation

Often, the first step for farmers requesting rural credit is the simulation, including payment terms, lines of credit, and documentation, among others. It can be done on bank websites or in person by talking to bank employees. Thus, this first step is connected to the design of tools, and the adoption of such technologies, that clearly communicate rural credit terms and caveats. In rural Brazil, 61% of producers use smartphones, and WhatsApp® is the main communication channel, used by 96% of farmers with internet access 9.
One of the probable root causes is that, in Brazil, multiple telecommunication companies provide plans in which such a service does not count towards the monthly data quota. Hence, this is one platform with potential for easing the adoption of credit simulation technologies. Although, when asking a participant about how to perform credit simulation and about any existing mobile app, a farmer

In this step, it is also key to identify whether farmers comply with all the requirements. However, it was identified that this process is cumbersome and information is often scattered, in the sense that few bank employees know about rural credit; please see Figure 6 for a case in which a bank employee indicated a different agency that might know the required documents. Two farmers mentioned that in past seasons it was easier to obtain rural credit. They reported that the process' bureaucracy increased and that it is becoming more and more difficult to obtain credit. One participant said: "In the last years, I've just had to show some of my handwritten notes about the production that the bank accepted. Now, I have to prepare documentation about the property, production, and warranties that were not needed in the past."

Beyond the bureaucracy, during social interactions with bank employees, it was hard to get information in person. Only after visiting 4 different agencies was it possible to identify the documents the bank required. In the first agency visited, the bank employee was not able to provide the list of documents required and tried to connect us to another credit specialist (Figure 6). After this contact, this happened one more time, showing that the lack of information about the process occurs even for client-facing bank employees. After talking to credit specialists, it was possible to obtain the list of required documents.

9 https://www.embrapa.br/visao/
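As a concrete illustration of how such document requirements could be automated (the decision-tree idea discussed next), a minimal rule-based checklist might look as follows. The questions and document names here are hypothetical, not the bank's actual requirements:

```python
# Hypothetical sketch: each intake answer narrows the document checklist a
# small farmer must gather. Questions and document names are illustrative.

def document_checklist(answers: dict) -> list:
    docs = ["ID and proof of residence", "Property deed or lease contract"]
    if answers.get("had_prior_harvest"):
        docs.append("Production records for the past season")
    if answers.get("line") == "costing":
        docs.append("Budget for seeds and inputs")
    elif answers.get("line") == "industrialization":
        docs.append("Quotes for machinery/equipment")
    if answers.get("amount_brl", 0) > 360_000:  # above the small-farmer ceiling
        docs.append("Formal warranty appraisal")
    return docs

print(document_checklist({"had_prior_harvest": True,
                          "line": "costing",
                          "amount_brl": 50_000}))
```

Each branch here is one node of the decision tree; a step-by-step form or chatbot would simply walk the same branches interactively.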
In terms of technology for automating this process, this step could be structured as a decision tree covering, for instance, production in the past year, warranties, amount of money, payment terms, etc. Regarding design implications, a step-by-step form (auto-complete enabled) or a chatbot could convert the steps of this decision tree into a user interface that could result in a good user experience from the first step. In a Design Thinking session with a bank, it became clear that it is necessary to provide automatic credit for producers in some cases. If the bank has documents and information about the farmer, income, and rural production, it is possible to approve credit automatically. However, in some cases, the farmer needs to go to the bank to request new credit. Another possibility involves providing the farmer with self-education content related to this pre-approved credit.

Educative Credit

Rural credit in Brazil has different types and different services offered as part of the loan. One type considers technical support for the farmer (i.e., Educative Rural Credit). The goal of this credit type is to mitigate risks for the producer and for the bank, since technical support (e.g., from an agronomist or veterinarian, depending on the production) is offered as part of the planning steps. Under the lens of decision-making support systems, this type of credit creates an opportunity for providing educational/technical content to small farmers, from planning to harvesting; for instance, supporting the best time window to plant/harvest in a data-driven way, and suggesting what types of pesticides to use in an environment-respecting way. The design implications here are mostly connected to learning systems and lifelong learning, exploring multiple modalities and connectivity restrictions that may apply in a country with challenging contexts of diversity, (digital) literacy, and connectivity.
In such a diverse socioeconomic context, support/educational content might be transformative for some people when quality content meets the need to know. In Design Thinking sessions with banks, it was reported that small farmers usually do not have enough knowledge about historical information involving soil characteristics, climate, yield on other farms in the region, and other rich datasets. Often, they repeat the same maintenance and crop without knowing that they could be more successful with a different crop, especially when mitigating risks related to price oscillations.

Diagnosis Support

Small farmers usually plan, manage, and harvest based on tacit knowledge. Diagnosis of the field is hardly considered in rural credit projects due to its associated costs (e.g., soil tests). In Brazil, there are producers who still use fire as a method for preparing the soil, which has potential environmental impacts. In this sense, low-cost technologies supporting diagnosis and soil preparation are key for understanding the underlying factors that might increase productivity/quality while respecting the environment. The implications for HCI researchers in this step are related to the use of sensors and low-cost devices. Diagnosis support might include characteristics of the soil, seeds, precipitation, satellite images, and weather forecasts, among others. Thus, beyond the use of sensors, accessible information visualization could play a key role in supporting the understanding of the characteristics of the field. In Design Thinking sessions with banks, a need emerged for an easy-to-use geo-referenced management system, providing recommendations to the farmer and giving the bank information that the farmer is employing credit responsibly. Such a solution has the potential to facilitate further processes due to the seamless compliance information provided throughout the harvest cycle.
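A data-driven planting window of the kind mentioned above could start as simply as scoring forecast weeks against crop-specific ranges. The forecast values and the "ideal" rain/temperature ranges below are invented for illustration; a real system would use agronomic data per crop and region:

```python
# Illustrative decision-support sketch: pick a planting week by scoring a
# weather forecast against crop-specific ranges. Numbers are made up.

def window_score(rain_mm, temp_c, rain_range=(20, 60), temp_range=(15, 25)):
    """One point for rainfall in range, one for temperature in range."""
    ok_rain = rain_range[0] <= rain_mm <= rain_range[1]
    ok_temp = temp_range[0] <= temp_c <= temp_range[1]
    return int(ok_rain) + int(ok_temp)

forecast = [  # (week, expected rain in mm, mean temp in C)
    ("week 1", 10, 22),
    ("week 2", 35, 18),
    ("week 3", 80, 16),
]
best = max(forecast, key=lambda w: window_score(w[1], w[2]))
print(best[0])  # week 2: both rain and temperature fall in range
```

The same scoring idea extends to the scenario analysis/simulation needs raised by bank staff, by swapping weather ranges for price or yield ranges.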
Beyond diagnosis support for the crop itself, bank staff also highlighted that producers often need support to manage future income, combine profitable crops, identify the best-selling moment, and perform scenario analysis/simulation.

Warranties

As part of the credit, small farmers must present a warranty to the bank prior to obtaining the loan. For small farmers with good credit scores, the risk evaluation is straightforwardly performed by banks. However, for newcomers or digitally excluded producers, the lack of history poses a challenge for both ends, farmers and banks. In this sense, technologies for finding analogous profiles (e.g., by crop, region, field area) and designs exposing how such credit performed (explainability 10) could support this step. These analogous profiles could also be a data source for recommending the warranties usually considered, including project templates, project recommendations, and insurance pricing. Bank employees and producers are the main stakeholders in the rural credit workflow presented; hence, trust in this relationship is fundamental. However, on two occasions, informants mentioned that their banks often offer products with the rural credit as part of the warranties, e.g., life insurance, disability insurance, among others. One participant said: "If you contract [a life] insurance, you'll get the credit right away." Another participant mentioned: "At some point, I had 3 different life insurances to pay and I'd lost track of how much I was paying for it"; the latter fact is connected to the automatic renewal for producers contracting rural credit yearly. The design implications here include clearly communicating the renewal terms and conditions in an accessible way.

Project Recommendation

The complexity involving rural credit was highlighted in the simulation step, when farmers are trying out different amounts of money, payment terms, and lines of credit.
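The "analogous profiles" idea from the Warranties step can be sketched as a simple similarity ranking over crop, region, and field area. The profiles and weights below are invented for illustration; a production system would add credit history, yield, and many more features:

```python
# Sketch of analogous-profile matching: rank known producers by similarity
# to a newcomer. Profiles, weights, and names are invented for illustration.

def similarity(a: dict, b: dict) -> float:
    score = 2.0 if a["crop"] == b["crop"] else 0.0
    score += 1.0 if a["region"] == b["region"] else 0.0
    # Area closeness in [0, 1]: identical areas score 1.
    big = max(a["area_m2"], b["area_m2"])
    small = min(a["area_m2"], b["area_m2"])
    return score + small / big

known = [
    {"name": "P1", "crop": "strawberry", "region": "east-SP", "area_m2": 4000},
    {"name": "P2", "crop": "grape", "region": "east-SP", "area_m2": 3500},
    {"name": "P3", "crop": "strawberry", "region": "south-MG", "area_m2": 20000},
]
newcomer = {"crop": "strawberry", "region": "east-SP", "area_m2": 3500}
ranked = sorted(known, key=lambda p: similarity(newcomer, p), reverse=True)
print(ranked[0]["name"])  # P1: same crop, same region, similar area
```

The top-ranked profiles could then seed warranty suggestions, project templates, or insurance pricing, as discussed above.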
However, creating the project also poses an additional challenge due to the technical requirements, including the details about money usage in each of the planned activities. In this sense, the number of details required might create a barrier for newcomers, one that the researcher performing the participant observation faced when creating the first project. Thus, one opportunity for recommender systems is providing project templates based on analogous successful past projects, for instance, projects involving similar crops, regions, weather, or fields with similar characteristics. These analogous projects could also be valuable educational material for farmers getting in touch with rural credit for the first time. The main design implication of such a feature is communicating with users so as to obtain enough information for finding an analogous project in a privacy-respecting and accessible way. In Design Thinking workshops with banks, participants discussed the possibility of using climate prediction, sensors, and drone/satellite images to assess and recommend equipment to purchase for lines of credit related to industrialization and commercialization. This way, project recommendation can go beyond the current farm capabilities and increase productivity.

Project Tuning

This step includes adjustments that might be needed after the producer submits the documentation and the project to the bank. In this step, the credit specialists analyze the proposed planning and request adjustments in case of any discrepancy according to bank metrics. In such iterations, minor changes can delay loan approval by days or weeks, which could impact the planting date and, consequently, the yield.
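One way to surface discrepancies before submission, in the spirit of the adjustment requests described above, is to compare each budget line against analogous past projects. The sketch below is a hypothetical illustration: item names and reference figures are invented, and a simple z-score stands in for whatever metrics a bank actually uses.

```python
# Hypothetical sketch: flag budget lines that deviate from analogous projects
# before submission, so discrepancies surface during creation rather than
# during the bank's review. Reference figures are invented for illustration.
import statistics

def flag_outliers(budget, reference_budgets, z_cut=2.0):
    """budget: {item: value}; reference_budgets: list of same-shaped dicts."""
    flags = {}
    for item, value in budget.items():
        ref = [b[item] for b in reference_budgets if item in b]
        if len(ref) < 2:
            continue  # not enough history to judge this item
        mu, sd = statistics.mean(ref), statistics.stdev(ref)
        if sd > 0 and abs(value - mu) / sd > z_cut:
            flags[item] = f"{value} vs typical {mu:.0f} (+/- {sd:.0f})"
    return flags
```

Feeding such flags back with educational material, instead of silently rejecting the project, is what would make the loop more agile for both producers and credit specialists.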
Bearing in mind opportunities for decision-making support technologies, it is straightforward to think about preventing these discrepancies from occurring at project creation/recommendation time by providing user feedback and educational material for any outlier identified, so that credit specialists can support producers in a more agile way. Design implications involve, for instance, providing user feedback about these outliers based on analogous projects considering weather, crop, region, etc.

Project Execution

After minor adjustments and approval, the project execution loop starts. In this step, the challenge is to guarantee that the plan is being followed. Hence, differently from the previous steps, which occur once per project, this step might get as complicated as daily management activities, depending on the production considered. Thus, striking a balance between too detailed and too vague, considering farmers' tacit knowledge, seems to be the ultimate goal; in sum, providing personalized support. The implications for design include connecting to external data sources (e.g., sensors, weather services) to support farmers in making well-informed decisions. Accessible charts and dashboards could be provided based on interaction history and farmers' daily engagement with services (e.g., weather, precipitation, temperature). Big producers count on teams of highly skilled professionals to perform continuous data analysis. Supporting small producers, however, may be the real challenge and opportunity; for instance, mobile technologies easing crop inspection activities and project execution reporting, such as taking pictures of the crop and applying computer vision algorithms to identify proper crop development, pests, and diseases.
Moreover, low-tech support materials such as precipitation charts, weather forecasts, task lists, paper-based soil tests, and printed week schedules attached in visible areas can increase awareness of the plan and support all stakeholders in following it. Results from Design Thinking workshops point to the need for easy ways for the farmer to verify cash flow, credit limit, and pricing, and to improve production management.

Verification of Credit Use

As presented before, in the ecosystem of rural credit in Brazil, banks and credit cooperatives are also liable for the proper use of the credit and for the land where the crop will take place. These stakeholders must verify that the credit is being used for the planned crop and that it is not damaging the environment, just to name a few responsibilities. In this sense, decision-making support technologies can help small farmers follow the plan, and remote sensing can be applied to verify the proper use of the land and credit. Moreover, Educative Rural Credit and technician support could close this loop. For instance, a producer could ask for technician support by sending a geo-referenced picture of a pest or a plant with disease; this would allow seamless verification of credit use associated with the technician's inspection.

Renewal

The last step mapped in the rural credit workflow is the renewal in-between seasons. One participant mentioned that renewing was easy: "I used the rural credit 3 years in a row. And it was easy to renew it. At the end of the season, the money was there again." However, the same participant reported: "At some point it was difficult for me to fall asleep due to the multiple payment installments I had." In this aspect, the Educative Rural Credit could support small farmers not only on how to renew credit, but also with accounting when dealing with multiple loans.
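The geo-referenced picture idea above suggests a simple automated check. As a minimal sketch (the coordinates and the 1 km radius are illustrative assumptions, not values from the study), the haversine distance between the photo's GPS tag and the registered field can gate the verification:

```python
# Hypothetical sketch: check that a geo-tagged inspection photo falls within
# a given radius of the registered field, supporting seamless verification of
# credit use. Coordinates and radius are illustrative.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def photo_within_field(photo_latlon, field_latlon, radius_km=1.0):
    """True if the photo's GPS tag lies within radius_km of the field centre."""
    return haversine_km(*photo_latlon, *field_latlon) <= radius_km
```

A real deployment would check against the field's polygon rather than a radius, but the point stands: the same geo-referenced picture can serve both the technician's diagnosis and the bank's compliance record.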
A cohesive view of loans and of possibilities for improving productivity (via commercialization and industrialization) opens new possibilities for finance and education support as well. In a Design Thinking workshop, bank staff mentioned that banks face a huge challenge in asserting that the farmer is using credit for the production proposed in the project. This activity was usually performed by credit inspectors, in person, but currently remote sensing and image recognition are used to enhance the analysis and prevent frauds in a more scalable way. Finally, at the renewal, banks value proactive/automatic actions to support credit renewals based on the client's history of credit and production.

CONCLUSION

Differently from microcredit offerings, rural credit in Brazil is a line of credit present in multiple bank brands, aimed at all kinds of production, from small farmers up to huge producers. For small farmers it has an important socioeconomic role, as it provides financial support for producers in costing, industrialization, and commercialization. While rural credit charges annual fees ranging from 2.5% to 4.5%, banks in Brazil charge monthly fees of approximately 2.0% for other lines of credit. In addition, banks are responsible for maintaining a credit flow according to cash deposits, which presents the challenge of offering more credit, reducing risk, dealing with the adoption of recommendation technologies, supporting small farmers to grow via Educative Rural Credit, and, last but not least, respecting the environment. Responsible Innovation is often described in terms of three main dimensions: (1) avoid harm, (2) 'do good', and (3) support ethical governance to promote the former two dimensions [22].
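To make the fee comparison above concrete: a monthly fee of about 2.0% compounds to an effective annual rate far above the 2.5-4.5% annual range quoted for rural credit. A small arithmetic sketch:

```python
# Worked example of the rate comparison above: a monthly rate of about 2.0%
# compounds to a far higher effective annual rate than the 2.5-4.5% annual
# fees quoted for rural credit. Pure arithmetic, no external data.

def effective_annual_rate(monthly_rate):
    """Convert a monthly compound rate to its effective annual equivalent."""
    return (1 + monthly_rate) ** 12 - 1

other_credit = effective_annual_rate(0.02)   # ~0.268, i.e. ~26.8% per year
rural_credit_max = 0.045                     # 4.5% per year
print(f"other lines: {other_credit:.1%} vs rural credit: {rural_credit_max:.1%}")
```

This roughly five-fold gap is what makes access to rural credit, rather than generic consumer credit, so consequential for small farmers.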
Initiatives to promote access to wealth and social mobility count on practices and regulations to avoid harm; however, there is a gap between planned outcomes and actual results in access to wealth in the case of rural credit in Brazil (i.e., 'doing good' in a socioeconomic way). Bureaucracy and lack of support to small farmers in terms of technology, project preparation, and guidance are some of the key barriers between small farmers and access to rural credit in Brazil. In this work, we presented how participant observation was employed as a way of identifying localized challenges inherent to agriculture, and how fair recommender systems (from the design perspective) can support multiple stages of the rural credit workflow. The initial objective of the study was to experience the rural credit process up to just before the contract signing. The blocker experienced by the researcher performing the participant observation was presenting the history of production and obtaining the documentation proving previous activity in agribusiness. These aspects resonate with results from the literature about blockers small farmers may face [24]. In a complementary way, different outcomes from Design Thinking workshops were presented to depict a top-down perspective (from banks to producers) on requirements, interests, and goals. This study presented challenges faced by farmers in a small community in the Eastern State of São Paulo, Brazil, in multiple rural credit stages, such as requesting credit, settling payments, and renewal. Agriculture is very localized due to all its impacting variables. For HCI researchers studying the agriculture domain for the first time, the presented findings can be a starting point for identifying practices and pain points that multiple stakeholders face. More importantly, they show that small farmers should also have access to different lines of credit, beyond microcredit.
The mind map presented in Figure 8 summarizes multiple aspects described and discussed throughout this paper. Its goal is to show the connections among terms, stakeholders, and processes so that HCI practitioners can grasp the overall ecosystem complexity and the impacts of unequal access to credit, and identify opportunities as well. Bold edges highlight these opportunities, encompassing the design and development of recommender systems, as detailed in previous sections. After connecting with multiple banks and agribusiness companies, we noticed that people in this domain usually mention two distinct types of interaction with agriculture: inside the fences vs. outside the fences. The idea behind this differentiation is to emphasize the importance of getting closer to the subject matter and the real challenges faced by the people. Figure 9 shows the safety equipment used against snake bites during one of those interactions with clients. Knowing that the HCI literature advocates using Ethnographic methods, User-Centered Design, and Participatory Design in such scenarios, we emphasize the value of these approaches based on this study, especially in agriculture. In this sense, this study pointed out promising directions for actionable results in the rural credit ecosystem, taking into account multiple stakeholders, e.g., public banks, private banks, cooperatives, input companies, and developers of recommender systems. This paper contributed a reflection on challenges and opportunities in providing small farmers access to a line of credit with the potential to impact wealth (i.e., rural credit). In addition, we provided insights on the design of recommender systems aiming at equal access to loans, based on the experiences reported by informants and on the situations faced during the study. We also detailed the rural credit ecosystem in Brazil, depicting stakeholders and their interactions, as well as technologies that could be employed in a systematic way to improve decision-making support.
This paper also triangulates (bottom-up) requirements emerging from interaction with farmers with (top-down) requirements elicited during Design Thinking sessions with bank staff. This combination grounded the paths we proposed regarding recommender systems aiming at increasing loan signings, promoting better bank-farmer interactions, improving the farmer experience in the whole process, and promoting fair access to loans. We believe that recommender systems in the rural credit ecosystem have different purposes, such as easing the population's access to multiple lines of credit reaching up to large amounts of money. In addition, due to the heterogeneity of profiles, a non-negligible group of small farmers may not benefit from access to rural credit, which potentially impacts their economic ascension. We also identified that such recommender systems can be beneficial for both producers and banks. On the one hand, they might increase access to credit, potentially reducing existing inequality numbers and stimulating economic and educational development. On the other hand, they might improve the bank-client relationship and the overall economy. In sum: access, equity, and justice. The main lesson learned worth sharing with designers and HCI researchers is that, due to its localized nature and heterogeneous stakeholder profiles, it would be difficult to obtain a user interface fitting all stakeholders' needs around the globe. Hence, when dealing with agriculture, we advocate that designers and HCI researchers should focus on employing methods to tackle localized design challenges (e.g., Ethnographic methods, User-Centered Design, Participatory Design). Research related to agribusiness stakeholders requires in-depth study of their activities and of the ecosystem they are embedded in.
Agriculture counts on specific, localized challenges, requiring HCI researchers to go to the field to observe, survey, interview, understand pain points, listen to needs, and envision solutions and technologies that materialize these solutions, inside the fences. Next steps of this research involve exploring recommender systems technologies aiming at equal access, providing explainable outcomes about Machine Learning models, and supporting credit specialists, farmers, and rural credit newcomers to benefit from and be part of the rural credit ecosystem, having equal access to wealth.

Challenges and Opportunities in Providing Small Farmers Equal Access to Wealth via Rural Credit

Fig. 2. Photo diary picture showing one of the properties where interactions with farmers took place.
Fig. 3. Photo diary picture from a handcrafted mat made from banana tree straws.
Fig. 4. Photo diary picture taken during the visit to a micro-brewery from the region.
Fig. 5. Photo diary picture taken in a store selling a small handcrafted copper distiller.

3 https://www3.bcb.gov.br/mcr
4 Farmers and producers are used interchangeably throughout the text.

The researcher performing participant observation started to enroll in courses on the selected matter, research about cost of production, equipment (Figure 5), etc. All of this supported the elaboration of a project meant to be economically viable, considering all the informed restrictions. As part of the project, there were multiple conversations with bank employees about the rural credit loan. Conversations took place with four bank operators of the same bank brand, but from three different cities. The main goal here was to interact with bank employees in the most natural and credible way as well. The interactions were towards obtaining more information and pointers about rural credit and the required documents. These employees were all in client-facing credit operations.
Fig. 6. Photo diary picture from a bank employee's handwritten note suggesting a contact in a different agency. The employee mentioned that in that agency it was hard to find anyone knowing about rural credit and that I should look for agencies in another city; blur applied to the note due to privacy issues. The words in Portuguese refer to agency and manager.
Fig. 7. Rural credit workflow in Brazil, stakeholders, social interactions, and design implications.

One informant reported: "I tried using an app for recommendations about crop management, but it kept asking me the same questions over and over", showing that a bug tarnished his experience with this chatbot. Having the adoption of technologies in mind, HCI researchers could leverage existing communication channels to speed up the adoption of decision-making technologies, for instance, connecting chatbots with WhatsApp® users/public groups, bridging the gap between established communication channels and support technologies.

Fig. 8. Mind map highlighting the main entities in the Rural Credit ecosystem in Brazil. Bold edges and nodes highlight opportunities for designing recommender systems.
Fig. 9. Photo diary photo showing protective shin guards used against snake bites during a visit to a coffee farm.

Vagner Figueredo de Santana, Raquel Zarattini Chebabi, and David Millen

5 https://www.ibge.gov.br/
6 http://www.agricultura.gov.br/assuntos/politica-agricola/credito-rural
10 Explainability or Explainable Artificial Intelligence (XAI) is a research field that aims to turn AI results and models more understandable to humans [1].

REFERENCES
[1] Amina Adadi and Mohammed Berrada. 2018. Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6 (2018), 52138-52160.
[2] Dale W. Adams. 2021. Undermining rural development with cheap credit. Routledge.
[3] Tim Brown. 2008. Design Thinking.
[4] Tim Brown and Jocelyn Wyatt. 2010. Design Thinking for Social Innovation.
[5] Heloisa Candello, David Millen, Claudio Pinhanez, and Silvia Bianchi. 2018. Design Insights and Opportunities from a Field Study to Digitally Enhance Microcredit Practices in Brazil. In Design Research Society Conference 2018.
[6] Ranveer Chandra and Stewart Collis. 2021. Digital agriculture for small-scale producers: challenges and opportunities. Commun. ACM 64, 12 (2021), 75-84.
[7] George Chin, Olga A. Kuchar, and Katherine E. Wolf. 2009. Exploring the Analytical Processes of Intelligence Analysts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09). ACM, New York, NY, USA, 11-20. https://doi.org/10.1145/1518701.1518704
[8] Anthony J. Hornof. 2009. Designing with Children with Severe Motor Impairments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09). ACM, New York, NY, USA, 2177-2180. https://doi.org/10.1145/1518701.1519032
[9] Julie A. Kientz and Gregory D. Abowd. 2008. When the Designer Becomes the User: Designing a System for Therapists by Becoming a Therapist. In CHI '08 Extended Abstracts on Human Factors in Computing Systems (CHI EA '08). ACM, New York, NY, USA, 2071-2078. https://doi.org/10.1145/1358628.1358639
[10] Peter Lyle, Jaz Hee-jeong Choi, and Marcus Foth. 2014. Designing for Grassroots Food Production: An Event-Based Urban Agriculture Community. In Proceedings of the 26th Australian Computer-Human Interaction Conference on Designing Futures (OzCHI '14). ACM, New York, NY, USA, 362-365. https://doi.org/10.1145/2686612.2686666
[11] Natasha Mack, Cynthia Woodsong, Kathleen M. MacQueen, Greg Guest, and Emily Namey. 2005. Qualitative Research Methods: A Data Collector's Field Guide. Family Health International.
[12] Hanuma Teja Maddali and Amanda Lazar. 2020. Sociality and Skill Sharing in the Garden. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). ACM, New York, NY, USA, 1-13. https://doi.org/10.1145/3313831.3376246
[13] Susan Marlow and Dean Patton. 2005. All Credit to Men? Entrepreneurship, Finance, and Gender. Entrepreneurship Theory and Practice 29, 6 (2005), 717-735. https://doi.org/10.1111/j.1540-6520.2005.00105.x
[14] Mateus de Carvalho Reis Neves, Carlos Otávio Freitas, Felipe de Figueiredo Silva, Davi Rogério de Moura Costa, and Marcelo José Braga. 2020. Does Access to Rural Credit Help Decrease Income Inequality in Brazil? Journal of Agricultural and Applied Economics 52, 3 (2020), 440-460. https://doi.org/10.1017/aae.2020.11
[15] Elisa Oreglia, Ying Liu, and Wei Zhao. 2011. Designing for Emerging Rural Users: Experiences from China. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 1433-1436. https://doi.org/10.1145/1978942.1979152
[16] Oxfam. 2016. Terrenos da Desigualdade: Terra, Agricultura e Desigualdades no Brasil Rural. Technical Report. https://oxfam.org.br/wp-content/uploads/2019/08/relatorio-terrenos_desigualdade-brasil.pdf
[17] Tapan S. Parikh. 2006. Rural Microfinance Service Delivery: Gaps, Inefficiencies and Emerging Solutions. In 2006 International Conference on Information and Communication Technologies and Development. 223-232.
[18] Divya Ramesh, Vaishnav Kameswaran, Ding Wang, and Nithya Sambasivan. 2022. How Platform-User Power Relations Shape Algorithmic Accountability: A Case Study of Instant Loan Platforms and Financially Stressed Users in India. In 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). ACM, New York, NY, USA, 1917-1928. https://doi.org/10.1145/3531146.3533237
[19] Aishwarya Lakshmi Ratan, Kentaro Toyama, Sunandan Chakraborty, Keng Siang Ooi, Mike Koenig, Pushkar V. Chitnis, and Matthew Phiong. 2010. Managing Microfinance with Paper, Pen and Digital Slate. In Proceedings of the 4th ACM/IEEE International Conference on Information and Communication Technologies and Development (ICTD '10). ACM, New York, NY, USA, Article 37, 11 pages. https://doi.org/10.1145/2369220.2369255
[20] James P. Spradley. 1980. Participant Observation.
[21] Deepak Vasisht, Zerina Kapetanovic, Jong-ho Won, Xinxin Jin, Ranveer Chandra, Ashish Kapoor, Sudipta N. Sinha, Madhusudhan Sudarshan, and Sean Stratman. 2017. Farmbeats: An IoT Platform for Data-Driven Agriculture. In Proceedings of the 14th USENIX Conference on Networked Systems Design and Implementation (NSDI '17). USENIX Association, USA, 515-528.
[22] Christian Voegtlin and Andreas Georg Scherer. 2017. Responsible innovation and the innovation of responsibility: Governing sustainable development in a globalized world. Journal of Business Ethics 143, 2 (2017), 227-243.
[23] Dong Xiao-Hong. 2012. Analysis of the Evolutionary Game about Loans between Rural Credit Cooperatives and Farmers in China. In Proceedings of the 2012 3rd International Conference on E-Business and E-Government, Volume 03 (ICEE '12). IEEE Computer Society, USA, 171-174.
[24] Songling Yang, Abide Tchewafei, Leleingda Tchewafei, Lengue Sambiani, Agoura Badja Tchewafei, and Steve-Harold Wendikuuni Kagembega. 2021. Analyses of the Determinants of Access to Credit by Smallholder Farmers in Togo. In Proceedings of the 2021 International Conference on E-Business and Mobile Commerce (ICEMC '21). ACM, New York, NY, USA, 58-68. https://doi.org/10.1145/3472349.3472357
Renormalisation Group Flows of the SYK Model

Dionysios Anninos ([email protected]), Damián A. Galante ([email protected]), Sameer U. Sheorey ([email protected])
Department of Mathematics, King's College London, Strand, London WC2R 2LS, UK

Abstract. We explore computationally tractable deformations of the SYK model. The deformed theories are described by the sum of two SYK Hamiltonians with differing numbers, q and q̃, of interacting fermions. In the large N limit, employing analytic and numerical tools, we compute finite temperature correlation functions and thermodynamic quantities. We identify a novel analytically solvable RG flow in the large q limit. We find that, under certain circumstances, the RG flow in the strongly coupled infrared phase exhibits two regions of linear-in-temperature entropy, which we interpret in terms of Schwarzian actions. Using conformal perturbation theory we compute the leading relevant correction away from the intermediate near-conformal fixed point. Holographic spacetimes in two spacetime dimensions that reproduce the thermodynamics of the microphysical theory are discussed. These are flow geometries that interpolate between two Euclidean near-AdS_2 spacetimes with different radii. The Schwarzian soft mode corresponding to the AdS_2 region in the deep interior resides entirely within the geometric regime.

Introduction

Given the description of a theory at a conformally invariant fixed point, one is naturally led to examine deformations causing the theory to flow toward a novel phase in the infrared. Sufficiently close to the fixed point, one can quantify the deformations by the set of primary operators which are relevant with respect to the original fixed point.
The richer the space of relevant operators, the more elaborate the landscape of renormalisation group (RG) flows away from the underlying fixed point, and the more ample the opportunity to design particular infrared behaviour. In this paper, we examine renormalisation group flows away from the near-fixed point of the Sachdev-Ye-Kitaev (SYK) model [1][2][3] of N interacting Majorana fermions subject to randomly disordered couplings. The type of deformation we consider is itself disordered, and further to this, we study the problem at both vanishing and finite temperature. The essential motivation behind our work is to develop a new direction in the study of holographic renormalisation [4,5] by identifying tractable renormalisation group flows for strongly coupled theories at large N . From the perspective of the gravitational description, the renormalisation group flow manifests itself in a geometry that flows away from the asymptotically AdS boundary describing the fixed point. The microphysical flow imposes, directly from its underlying quantum consistency conditions, constraints on the space of deformed holographic bulk theories whose description is often restricted to low energy effective field theory. The basic challenge is that strongly coupled fixed points with tractable renormalisation group flows are hard to come across. To address this challenge, we consider the SYK model whose strongly coupled low temperature phase has been argued to exhibit holographic properties [6][7][8] at large N . Although relevant deformations of SYK have not been studied extensively in the literature, there are exceptions [9][10][11][12][13]. 
Moreover, there have been a host of interesting variations of SYK, including entangling a pair of SYK theories to each other [14,15], endowing SYK-type models with internal global symmetries [16][17][18][19], non-Hermitian SYK Hamiltonians modelling open quantum systems [20,21], models of SYK chains and higher-dimensional analogues [22,23], and supersymmetric extensions [24,25]. In this work we employ a variety of analytical and numerical techniques to analyse a class of tractable strongly coupled renormalisation group flows away from the near-fixed point of SYK for a variety of deformations. Concretely, we examine the Hamiltonian

H_q = i^{q/2} Σ_{1≤i_1<i_2<···<i_q≤N} J_{i_1 i_2 ···i_q} ψ_{i_1} ψ_{i_2} · · · ψ_{i_q} ,  q ∈ 2Z^+ , (1.1)

deformed by the operator s H_q̃ with q̃ < q, where s is a dimensionless coupling and the ψ_i are N Majorana fermions. The couplings of both H_q and H_q̃ are drawn from a Gaussian ensemble. The deformation is implemented at the level of the ultraviolet degrees of freedom. Nonetheless, concrete evidence is provided that for sufficiently small s, the deformation can be viewed as a relevant deformation by a specific conformal operator of the near-fixed point describing the low energy physics of the undeformed SYK model. Previous work [12] has established this in the large q limit with q/q̃ = 2. Here, we establish this phenomenon at both large and finite q, q̃. Moreover, the effect is seen for several values of n ≡ q/q̃. The flow is shown to end at a near-fixed point in the deep infrared, where the theory is captured by an SYK theory governed by H_q̃. Interestingly, the Schwarzian sector of the theory in the deep infrared resides entirely within the strongly coupled sector of the theory. From a holographic point of view, this can be viewed as a soft mode emerging in the interior of a bulk asymptotically AdS_2 spacetime. The paper is structured as follows. In section 2 we briefly review the SYK model for general q.
We discuss the large N saddle-point Schwinger-Dyson equations and the large q limit. In section 3 we introduce the deformations of interest, and the corresponding large N saddle-point Schwinger-Dyson equations. The theory is considered at finite temperature. In section 4 we study the low temperature behaviour of the deformed models, as well as providing evidence for the existence of an intermediate near-conformal fixed point for a subclass of these models. When possible, we consider analytical expressions, including a new regime of small ε ≡ n − 1 permitting analytic treatment. In section 5 we explore the structure of the renormalisation group flow in the vicinity of each near-fixed point and uncover the soft-mode theories governing the leading thermodynamic behaviour. We also show that conformal perturbation theory can be applied to study the leading relevant deformation away from the intermediate near-fixed point. In the outlook section 6 we discuss how our results can be interpreted from a holographic point of view, in the form of a JT gravity theory with a deformed dilaton potential.

Brief review of the SYK model

The SYK model is a quantum mechanical model with random all-to-all interactions. The observables of the theory are built from N Majorana fermions, ψ_i, that obey the equal time anti-commutation relations

{ψ_i , ψ_j} = δ_ij ,  i, j = 1, . . . , N . (2.1)

The Hamiltonian of the model is given by

H_q = (i)^{q/2} Σ_{1≤i_1<i_2<...<i_q≤N} J_{i_1 i_2 ...i_q} ψ_{i_1} ψ_{i_2} . . . ψ_{i_q} ,  q ∈ 2Z^+ , (2.2)

where the coupling constants of the theory are all independently drawn from the same probability distribution, which satisfies

⟨J_{i_1 i_2 ···i_q}⟩ = 0 ,  ⟨J²_{i_1 i_2 ···i_q}⟩ = (2^{q−1}/q) J² (q − 1)! / N^{q−1} . (2.3)

The dimensionality of the Hilbert space is 2^{N/2} and the theory is numerically amenable to exact diagonalisation procedures for reasonably large values of N. [Footnote 1: A review of the SYK model can be found in [28,29], among other articles.]
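As a concrete illustration of this setup (a sketch, not part of the paper), the Majorana algebra (2.1) and the Hamiltonian (2.2)-(2.3) can be realised for small N via a Jordan-Wigner construction; the illustrative values N = 6, q = 4 below build the 2^{N/2}-dimensional representation, check the anti-commutation relations, and verify that H_q is Hermitian.

```python
import math
from functools import reduce
from itertools import combinations

import numpy as np

# Pauli matrices and identity
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def majoranas(N):
    """N Majorana fermions on N//2 qubits with {psi_i, psi_j} = delta_ij."""
    K = N // 2
    psis = []
    for site in range(K):
        string = [Z] * site                     # Jordan-Wigner string
        tail = [I2] * (K - site - 1)
        psis.append(reduce(np.kron, string + [X] + tail) / np.sqrt(2))
        psis.append(reduce(np.kron, string + [Y] + tail) / np.sqrt(2))
    return psis

N, q, J = 6, 4, 1.0          # illustrative values
psis = majoranas(N)
dim = 2 ** (N // 2)

# check the algebra (2.1)
for i in range(N):
    for j in range(N):
        anti = psis[i] @ psis[j] + psis[j] @ psis[i]
        assert np.allclose(anti, (1.0 if i == j else 0.0) * np.eye(dim))

# draw couplings with the variance (2.3) and assemble H_q as in (2.2)
rng = np.random.default_rng(0)
var = 2 ** (q - 1) / q * J**2 * math.factorial(q - 1) / N ** (q - 1)
H = np.zeros((dim, dim), dtype=complex)
for idx in combinations(range(N), q):
    term = reduce(np.matmul, [psis[k] for k in idx])
    H += rng.normal(0.0, np.sqrt(var)) * (1j) ** (q // 2) * term

assert np.allclose(H, H.conj().T)   # H_q is Hermitian
```

At these sizes the 8-dimensional H can be diagonalised exactly; larger N requires sparse methods.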
Large N limit

From the perspective of the path integral, it is useful to express the theory in terms of the bi-local fields G(τ_1, τ_2), Σ(τ_1, τ_2) [2,3,30]. The Euclidean time coordinate τ ∼ τ + β is periodically identified with period given by the inverse temperature β. Physically, G(τ_1, τ_2) computes the (time-ordered) thermal two point function

G(τ_1, τ_2) = (1/N) Σ_{i=1}^{N} ⟨T ψ_i(τ_1) ψ_i(τ_2)⟩ . (2.4)

In terms of G and Σ the action reads

I = −(1/2) log det (δ(τ_1 − τ_2)∂_{τ_2} − Σ(τ_1, τ_2)) + (1/2) ∫_0^β ∫_0^β dτ_1 dτ_2 [ Σ(τ_1, τ_2)G(τ_1, τ_2) − J² (2^{q−1}/q²) G(τ_1, τ_2)^q ] , (2.5)

and the disorder averaged partition function of the theory is given by

⟨Z(β)⟩_J = ∫ [DG DΣ] e^{−N I[G,Σ]} , (2.6)

where we indicate a disorder average by ⟨•⟩_J. At large N, the theory permits a saddle point approximation. The resulting Schwinger-Dyson equations are the following integro-differential equations

G^{−1}(τ_1, τ_2) = δ(τ_1 − τ_2)∂_{τ_2} − Σ(τ_1, τ_2) , (2.7)
Σ(τ_1, τ_2) = (2^{q−1}/q) J² G(τ_1, τ_2)^{q−1} . (2.8)

The above equations can be solved numerically using a recursive algorithm and the fast Fourier transform [3]. In the IR of the theory, given by |τ_1 − τ_2| ≫ 1/J, we can self-consistently drop the δ(τ_1 − τ_2)∂_{τ_2} term in (2.7), resulting in an effective theory described by the equations

∫_0^β dτ G(τ_1, τ)Σ(τ, τ_2) = −δ(τ_1 − τ_2) , (2.9)
Σ(τ_1, τ_2) = (2^{q−1}/q) J² G(τ_1, τ_2)^{q−1} . (2.10)

Provided ∆ = 1/q, the above equations are invariant under the transformations

G(τ_1, τ_2) → G̃(τ_1, τ_2) = φ'(τ_1)^∆ G(φ(τ_1), φ(τ_2)) φ'(τ_2)^∆ , (2.11)
Σ(τ_1, τ_2) → Σ̃(τ_1, τ_2) = φ'(τ_1)^{∆(q−1)} Σ(φ(τ_1), φ(τ_2)) φ'(τ_2)^{∆(q−1)} , (2.12)

with φ(τ) a smooth, monotonically increasing function that maps the thermal circle to the thermal circle with a single unit of winding. The structure of φ(τ) is that of a reparameterisation of the circle to itself. In the IR, the SYK model is approximated by a one-dimensional conformal field theory [2,3].
The fermions ψ_i transform as primary operators of conformal weight ∆ = 1/q. At the level of the action, the low-energy effective description is given by

I_CFT = −(1/2) log det(−Σ(τ_1, τ_2)) + (1/2) ∫_0^β ∫_0^β dτ_1 dτ_2 [ Σ(τ_1, τ_2)G(τ_1, τ_2) − J² (2^{q−1}/q²) G(τ_1, τ_2)^q ] . (2.13)

The solution to the IR Schwinger-Dyson equations (2.9) and (2.10) is given by

G_φ(τ_1, τ_2) = φ'(τ_1)^∆ [ b sgn(τ_1 − τ_2) ( π / ( βJ sin(π(φ(τ_1) − φ(τ_2))/β) ) )^{2∆} ] φ'(τ_2)^∆ , (2.14)

with

b = (1/2) [ (1 − 2∆) tan(π∆) / (π∆) ]^∆ . (2.15)

All solutions G_φ have the same action when evaluated on the conformal action (2.13). As such, the saddle approximation naively diverges as the volume of the reparameterisation group. To get a finite answer we must account for the effect of the leading 'irrelevant' correction away from the conformal action. It is given by the Schwarzian action [3]

I_Sch = −( α(q)/(2J) ) ∫_0^β dτ [ (2π/β)² φ'(τ)² − ( φ''(τ)/φ'(τ) )² ] . (2.16)

The constant α(q) has to be determined numerically by solving the full Schwinger-Dyson equations, as discussed further in Appendix A, as its precise value does not follow from IR considerations. The Schwarzian action explicitly breaks the reparametrisation symmetry of the conformal action down to an SL(2, R) subgroup; this residual SL(2, R) is unphysical, and the final path integral must still be divided by its volume to be made sense of [31,32]. Given the Schwarzian theory (2.16), one can compute thermodynamic quantities to leading order in the saddle point approximation. For instance, given the on-shell solution φ(τ) = τ, the free energy F_Sch is found by evaluating the Schwarzian action on shell,

−βF_Sch/N = 2π² α(q) / (βJ) , (2.17)

and is quadratic in the temperature. Given an expression for the free energy F, the thermodynamic entropy S can be computed as

S = (1 − β∂_β)(−βF) . (2.18)

It is straightforward from (2.17) to verify that the entropy of the Schwarzian theory is linear in the temperature,

S_Sch/N = 4π² α(q) / (βJ) .
(2.19)

Additionally, the zero temperature entropy of the SYK model can be computed explicitly [2,3], such that the entropy admits the following small temperature expansion

S/N = S_0^free − ∫_0^{1/q} dx π(1/2 − x) tan(πx) + 4π² α(q)/(βJ) + · · · , (2.20)

where S_0^free ≡ (log 2)/2 is the zero temperature entropy of a free fermion.

Large q limit

The SYK model admits further computational control if, after taking the large N limit, we also take the large q limit. [Footnote 2: Another solvable case is known as the double-scaled SYK model, obtained by taking both the large N and large q limits with N/q² fixed. See, for instance, [33,34].] In this case, we can expand the two-point function G(τ_1, τ_2) = G(τ_1 − τ_2) as

G(τ) = (sgn(τ)/2) [ 1 + g(τ)/q + O(1/q²) ] . (2.21)

To leading order in q, the Schwinger-Dyson equations (2.7) and (2.8) become a single ordinary differential equation for g(τ), namely

∂²_τ g(τ) = 2J² e^{g(τ)} . (2.22)

Supplemented by thermal boundary conditions, g(0) = g(β) = 0, this equation can be solved analytically and yields

e^{g(τ)} = cos²ν / cos²( 2ν(1/2 − |τ|/β) ) ,  βJ = 2ν/cos ν . (2.23)

Given g(τ), we can compute the complete thermodynamics of the theory by evaluating the action (2.5) on-shell to leading order in the large q expansion,

βF/N = −S_0^free − (β/(8q²)) ∫_0^β dτ [ (1/2)(∂_τ g(τ))² + 2J² e^{g(τ)} ] + · · · . (2.24)

For large βJ, we obtain that the entropy at large q is given by

S/N = S_0^free − π²/(4q²) + (π²/q²)(1/(βJ)) + · · · . (2.25)

By comparing (2.25) with (2.19), we see that α(q) → 1/(4q²) as q → ∞. Next order corrections in the large q limit have been studied in [35].

Deformed SYK models

In this section we introduce a family of deformations of the single SYK model in which the Hamiltonian is the sum of two SYK Hamiltonians with different numbers of fermions in the interactions. The behaviour of the deformed models can be thought of in terms of an RG flow of the original SYK theory.
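The closed form (2.23) can be checked directly (an independent numerical verification with illustrative values of β and J): fix ν by bisection from βJ = 2ν/cos ν, then confirm the boundary conditions and the residual of (2.22) by finite differences.

```python
import numpy as np

beta, J = 10.0, 1.0   # illustrative values

# solve beta*J = 2*nu/cos(nu) for nu in (0, pi/2) by bisection
lo, hi = 1e-9, np.pi / 2 - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if 2 * mid / np.cos(mid) < beta * J:
        lo = mid
    else:
        hi = mid
nu = 0.5 * (lo + hi)

tau = np.linspace(0.0, beta, 20001)
g = np.log(np.cos(nu) ** 2 / np.cos(2 * nu * (0.5 - tau / beta)) ** 2)
assert abs(g[0]) < 1e-12 and abs(g[-1]) < 1e-12      # g(0) = g(beta) = 0

h = tau[1] - tau[0]
gpp = (g[2:] - 2 * g[1:-1] + g[:-2]) / h ** 2        # finite-difference g''
res = gpp - 2 * J ** 2 * np.exp(g[1:-1])             # residual of (2.22)
assert np.max(np.abs(res)) < 1e-3
```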
The deformed models can be solved either numerically or analytically for a wide range of parameters of the theory. At finite q, aspects of these models have been studied in [9,11,13], while at large q analytically tractable examples have been considered in [10,12].

Deformed Hamiltonian and effective action

The Hamiltonian of the deformed SYK models is given by

H_def = H_q + s H_q̃ , (3.1)

where s is a tuneable dimensionless parameter and the Hamiltonian H_x denotes the Hamiltonian (2.2) of a single SYK model with x fermion interactions. We will assume that q ≥ q̃. Unitarity imposes that s ∈ R, and without loss of generality we can further restrict to s ∈ R^+. The term s H_q̃ can be viewed as a relevant deformation of the model H_q that induces an RG flow and modifies the thermodynamic behaviour of the model in the infrared. Similar to the single SYK case, in the large N limit, the deformed action can be described in terms of bi-local fields [10,12],

I = −(1/2) log det(∂_τ − Σ) + (1/2) ∫_0^β ∫_0^β dτ_1 dτ_2 [ ΣG − J² ( (2^{q−1}/q²) G^q + s² (2^{q̃−1}/q̃²) G^{q̃} ) ] , (3.2)

from which we get a set of deformed Schwinger-Dyson equations

G^{−1}(τ_1, τ_2) = δ(τ_1 − τ_2)∂_{τ_2} − Σ(τ_1, τ_2) , (3.3)
Σ(τ_1, τ_2) = J² ( (2^{q−1}/q) G(τ_1, τ_2)^{q−1} + s² (2^{q̃−1}/q̃) G(τ_1, τ_2)^{q̃−1} ) . (3.4)

As for the case of the single SYK model, these deformed models also simplify in the large q, q̃ limit. In particular, they exhibit solvable properties [10,12] when both q and q̃ are taken to infinity while keeping their ratio q/q̃ finite and fixed. From now onwards, we will call this ratio n ≡ q/q̃ ≥ 1. In this limit, we can again expand the two-point function as in (2.21) and obtain that the Schwinger-Dyson equations simplify to the following ordinary differential equation

∂²_τ g(τ) = 2n s² J² e^{g(τ)/n} + 2J² e^{g(τ)} . (3.5)

To leading order in the large q and q̃ expansion, the free energy of the deformed model reduces to

βF/N = −S_0^free − (β/(8q²)) ∫_0^β dτ [ (1/2)(∂_τ g(τ))² + J² ( 2n² s² e^{g(τ)/n} + 2 e^{g(τ)} ) ] .
(3.6)

An analytically solvable deformation

When n = 2, the differential equation (3.5) reduces to

∂²_τ g(τ) = 4s²J² e^{g(τ)/2} + 2J² e^{g(τ)} . (3.7)

Provided that g(0) = g(β) = 0, we obtain the following two-point function:

e^{g(τ)} = 4ν⁴ / ( √( (βJ)²ν² + s⁴(βJ)⁴ ) cos(ν(2τ/β − 1)) + s²(βJ)² )² ,  cos ν = ( 2ν² − s²(βJ)² ) / √( (βJ)²ν² + s⁴(βJ)⁴ ) . (3.8)

Note that this equation provides a solution for the full RG flow for all values of βJ and s, even in the strongly coupled regime of the theory. We can obtain the free energy by substituting this solution into the on-shell action (3.6) with n = 2. A key observation [12] is that at low temperatures, βJ ≫ 1, and small s ≪ 1, there are two different regimes where the entropy is linear in the temperature. Both regimes can be described analytically. We refer to them as the deep IR and the intermediate IR regimes, given that they both appear in the infrared sector of the theory. First, let us consider the very small temperature regime, βJ ≫ 1/s², which we refer to as the deep IR regime. In this regime, the entropy is given by [12]

Deep IR: S/N = S_0^free − π²/(4q̃²) + ℵ (π²/q̃²) (1/(sβJ)) + · · · , (3.9)

where

ℵ = √(1 + 4s²) / (2s) . (3.10)

We can compare (3.9) to the IR behaviour of the single SYK model sH_q̃, which is (2.25) with q → q̃ and J → sJ. While the zero temperature entropy is unchanged, the deformation dramatically changes the coefficient of the entropy term that is linear in the temperature. This is parameterised by the constant ℵ. Note that in the limit s → ∞, ℵ → 1 and we recover the single SYK result, as expected. We can also study an intermediate regime in which 1 ≪ βJ ≪ 1/s². Given that βJ ≫ 1, we are still in the infrared, so we call this regime the intermediate IR. The leading order thermodynamics can also be computed analytically here, and the entropy is given by

Intermediate IR: S/N = S_0^free − π²/(4q²) + (π²/q²)(1/(βJ)) + · · · .
(3.11)

Note that, to leading order, this entropy is independent of s and corresponds to a single SYK Hamiltonian with a q fermion interaction. The first deviation from the linear behaviour will depend on s and is studied in section 5. In the remainder of the paper we discuss different properties of these deformed models away from this solvable limit.

Thermodynamics of deformed SYK

In this section we analyse the deformed models (3.1) for general values of n = q/q̃, both at finite and large q. An emphasis is placed on the deep IR behaviour of the deformed model, given by βJ ≫ 1/s². When n ≠ 2, we must resort to a combination of analytical and numerical techniques to compute thermodynamic quantities. We begin by analysing the large q limit. We compute the large q entropy at low temperatures, from which we can numerically extract the coefficient, ℵ(s, n), of the linear-in-temperature part of the entropy, for various values of n. We conjecture that a similar structure for the entropy holds for finite values of q and check it against numerical data for n = 2, 3, 4 and different finite values of q, finding good agreement. We also provide evidence for the existence of models with two near-conformal regimes at both large and finite q, characterised by two linear-in-temperature regimes for the entropy. Finally, we uncover a novel analytically tractable window for n = 1 + ε, with ε small.

Large q

We start by computing ℵ(s, n) numerically for general n, in the large q limit. To do so, we need to solve equation (3.5), with boundary conditions g(0) = g(β) = 0. Given a numerical solution g(τ), we can compute the free energy following equation (3.6). The entropy can then be obtained using (2.18). Instead of computing the thermodynamic derivative numerically, we use that β∂_β = J∂_J [3] to compute the entropy directly as

S/N = S_0^free + (β/(8n²q̃²)) ∫_0^β dτ [ (1/2)(∂_τ g(τ))² − J² ( 2n² s² e^{g(τ)/n} + 2 e^{g(τ)} ) ] .
(4.1)

In the deep IR, it is more convenient to parameterise formulas in terms of q̃ instead of q, as H_q̃ is the dominating term in the Hamiltonian in this regime. Our numerical results confirm that at low enough temperatures, βJ ≫ 1/s², the entropy is linear in the temperature, taking the form

S/N = S_0^free + S_0(s, n) + ℵ(s, n) (π²/q̃²) (1/(sβJ)) + · · · , (4.2)

where now ℵ(s, n) can in general depend on s and n, but is independent of βJ. The zero temperature entropy is shifted by a factor S_0(s, n) that may also generally depend on s and n.

Zero temperature entropy. We can find q̃² S_0(s, n) numerically by performing a linear fit of q̃² βJ (S/N − S_0^free) as a function of βJ, for large values of βJ. In figure 1, we show results for s² = 0.1, 1, 4 with 1 ≤ n ≤ 3, using values of βJ between 2000 and 3000 for the linear fit. We find that for n ≥ 2, the shift in the zero temperature entropy is given by q̃² S_0(s, n) = −π²/4, the same as that of a single SYK model with Hamiltonian sH_q̃. As shown in figure 1, there are deviations from the single SYK result within the interval 1 < n < 2, but they vanish as n → 2. The s dependence of the entropy at vanishing temperature, as well as the transition at n = 2, and their potential holographic interpretation, merit a deeper understanding, perhaps along the lines of [36]. We will return to this in future work.

The deep IR phase at large q. We numerically compute the entropy q̃²(S/N − S_0^free) at a single low temperature point. Subtracting the previously obtained values of q̃² S_0(s, n) from this, the leading contribution to the difference is a term proportional to (βJ)^{−1}, from which we can numerically extract ℵ(s, n) in (4.2). For n = 2, there is an analytic answer for ℵ, given by (3.10). We use this as a consistency check of our numerical procedure. In figure 2, we show agreement between our numerical algorithm and the analytic result for n = 2. For n ≠ 2, there are no known analytic solutions.
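The fitting procedure just described can be illustrated on synthetic data (the values of q̃, s, S_0 and ℵ below are hypothetical, not the paper's data): a linear fit of q̃²βJ(S/N − S_0^free) against βJ has slope q̃²S_0 and intercept ℵπ²/s, so both quantities are recovered at once.

```python
import numpy as np

# Synthetic entropy data of the deep-IR form (4.2), with known inputs.
S0_free = np.log(2) / 2
qt, s = 4.0, 0.5                       # hypothetical q-tilde and coupling
S0_true = -np.pi**2 / (4 * qt**2)      # the n >= 2 value found numerically
aleph_true = 0.9                       # hypothetical linear-in-T coefficient

betaJ = np.linspace(2000.0, 3000.0, 50)   # same window as used for figure 1
S = S0_free + S0_true + aleph_true * np.pi**2 / (qt**2 * s * betaJ)

# fit qt^2 * betaJ * (S/N - S0_free) = qt^2*S0*betaJ + aleph*pi^2/s
y = qt**2 * betaJ * (S - S0_free)
slope, intercept = np.polyfit(betaJ, y, 1)

assert abs(slope / qt**2 - S0_true) < 1e-6        # recovers S0
assert abs(intercept * s / np.pi**2 - aleph_true) < 1e-4   # recovers aleph
```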
However, we do expect a certain behaviour of ℵ(s, n) in a variety of limits. Namely:

1. For s → ∞ and fixed n, we expect the leading entropy to be that of a single SYK model with Hamiltonian sH_q̃, and so ℵ(s → ∞, n) → 1 in this limit.

2. At fixed s but n → ∞, we also expect ℵ(s, n → ∞) → 1. To see this, note that n → ∞ implies q → ∞ with q̃ finite. The contribution to the free energy from H_q is given by (2^{q−1}/q²) G^q, see (3.2). Given that |G(τ)| ≤ 1/2, if we take q to infinity this contribution is negligible and only the terms with q̃ will contribute. Thus, ℵ(s, n → ∞) → 1.

3. When n = 1, the theory is equivalent to a single SYK model with Hamiltonian √(1 + s²) H_q. We therefore expect that

ℵ(s, n = 1) = s/√(1 + s²) . (4.3)

4. Finally, as discussed, when n = 2, we know analytically that

ℵ(s, n = 2) = √(1 + 4s²)/(2s) . (4.4)

In figure 3 we plot numerical values of ℵ(s, n) as a function of n for different values of s². We see that the numerical results behave as expected in the limits mentioned above. When n = 1 and n = 2, the numerical values agree with the analytically known values. We also observe that as s² grows, deviations from ℵ(s, n) = 1 decrease for all values of n, consistent with the expectation that when s becomes large, ℵ(s, n) → 1. Furthermore, as n becomes large we see that ℵ(s, n) → 1, as expected. We also notice an interesting behaviour of ℵ(s, n) between n = 1 and n = 2, characterised by a peak whose position depends on s. Following the analytic arguments in section 5.1, we expect the peak to move towards n = 3/2 as s becomes smaller. Though we were unable to find a general analytic form for ℵ(s, n), the numerical results suggest that, at least at small s and n ≥ 2, the empirical formula

ℵ(s, n) ≈ a(n) s^{−4/n²} , (4.5)

holds, with 1/2 ≤ a(n) ≤ 1. More details on this are provided in Appendix B.

The intermediate IR phase at large q. For large values of q and n ≥ 2, the RG flow at small enough s develops two near-fixed points.
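Points 3 and 4 above, together with the closed form (3.8), can be cross-checked numerically (illustrative parameter values): the sketch below verifies that (3.8) solves (3.7) with g(0) = g(β) = 0, and that the limiting values (4.3) and (4.4) behave as stated.

```python
import numpy as np

# --- verify that the closed form (3.8) solves (3.7) for sample parameters ---
beta, J, s = 10.0, 1.0, 0.3
bJ = beta * J

def F(nu):   # a root of F enforces the transcendental constraint in (3.8)
    return np.cos(nu) * np.sqrt(bJ**2 * nu**2 + s**4 * bJ**4) - 2 * nu**2 + s**2 * bJ**2

lo, hi = 1e-6, np.pi - 1e-6
assert F(lo) > 0 > F(hi)
for _ in range(200):                       # bisection for nu
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) > 0 else (lo, mid)
nu = 0.5 * (lo + hi)

tau = np.linspace(0.0, beta, 40001)
denom = np.sqrt(bJ**2 * nu**2 + s**4 * bJ**4) * np.cos(nu * (2 * tau / beta - 1)) + s**2 * bJ**2
g = np.log(4 * nu**4 / denom**2)
assert abs(g[0]) < 1e-9 and abs(g[-1]) < 1e-9      # boundary conditions

h = tau[1] - tau[0]
gpp = (g[2:] - 2 * g[1:-1] + g[:-2]) / h**2
res = gpp - (4 * s**2 * J**2 * np.exp(g[1:-1] / 2) + 2 * J**2 * np.exp(g[1:-1]))
assert np.max(np.abs(res)) < 1e-3                   # ODE (3.7) is satisfied

# --- limiting behaviour of aleph from (4.3) and (4.4) ---
aleph_n1 = lambda s_: s_ / np.sqrt(1 + s_**2)
aleph_n2 = lambda s_: np.sqrt(1 + 4 * s_**2) / (2 * s_)
assert abs(aleph_n1(1e6) - 1) < 1e-9                # s -> infinity limits
assert abs(aleph_n2(1e6) - 1) < 1e-9
assert abs(aleph_n2(1e-4) * 2e-4 - 1) < 1e-6        # small s: aleph ~ 1/(2s)
```

The small-s behaviour of ℵ(s, 2) ~ 1/(2s) is what the empirical formula (4.5) reproduces at n = 2 with a(2) = 1/2.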
At finite temperature this is revealed by the presence of two linear-in-temperature regimes for the entropy. We find that, just as in the n = 2 case, the leading order entropy in the intermediate IR regime is given by (3.11). An example of this behaviour, for n = 3, is given in figure 4. A systematic analysis of the behaviour in the proximity of the two near-fixed points is discussed in section 5.

Finite q

Given the results in the large q limit, we now analyse the case of finite q. This is numerically more involved than the previous case, as the Schwinger-Dyson equations no longer reduce to an ordinary differential equation. Instead, we need to solve the Schwinger-Dyson equations (3.3) and (3.4) numerically. This set of equations is amenable to numerical computations using a recursive algorithm and the fast Fourier transform. In Appendix C we outline the details of this procedure, which is analogous to the one described in Appendix G of [3] for the single SYK model. The simplest deformed model at finite q has q = 4 and q̃ = 2, which was first studied in [9]. In the present work, we extend this analysis to include smaller values of s², allowing us to observe two different near-conformal regimes. We also present results for a more general class of models with different values of q and q̃.

The deep IR phase at finite q. We start by focussing on the form of the entropy in the deep IR limit. We have numerical access to this regime provided s is not very small. For a single SYK model with q̃ and coupling sJ, the entropy in the limit βJ ≫ 1/s is given by

S/N = S_0^free − ∫_0^{1/q̃} dx π(1/2 − x) tan(πx) + 4π² α(q̃)/(sβJ) + · · · , (4.6)

where α(q̃) is the same (numerical) coefficient that appeared in the Schwarzian action in (2.16) (see Appendix A for more detail). Moving to the case of the deformed Hamiltonian, we first discuss the case of n = 2. In section 4.1, we found that for n ≥ 2 the zero temperature entropy of the deformed model was the same as that of a single SYK model.
Assuming this is the case even at finite q, we propose that the entropy in the deformed theory should be generalised to

S/N = S_0^free − ∫_0^{1/q̃} dx π(1/2 − x) tan(πx) + ℵ · 4π² α(q̃)/(sβJ) + · · · . (4.7)

Namely, the zero temperature entropy remains the same and the linear-in-temperature term gets an extra coefficient of ℵ (as defined in (3.10)) with respect to the single SYK theory. We numerically find that for large s and low temperatures, (S/N − S_0^free) approaches the predicted value of −0.346 obtained from setting q̃ = 2 in (4.7) (see for example figure 7). To test the linear-in-temperature coefficient, we compute the entropy at a single low temperature point and subtract the zero temperature entropy. In figure 5, we show the numerical results for the coefficient and compare to the analytic prediction, as in (4.7), for different values of q and q̃, with fixed n = 2. To compute the predicted coefficient, we use values of α(q̃) obtained from the Padé approximation as described in Appendix A and the analytic value of ℵ for n = 2 in the large q limit. We find remarkable agreement, suggesting the possibility of using large q (analytical) results to extract finite q information. The results for n = 2 hint towards the possibility of generalising the form of the low temperature entropy even away from the n = 2 point. In fact, following the results at large q, we propose that the only change in the form of the entropy (4.7) for n > 2 is to take ℵ → ℵ(s, n), where ℵ(s, n) is the coefficient obtained numerically in the large q limit, see figure 3. Note that for 1 < n < 2 we would also expect a change in the zero temperature entropy, as is seen at large q. The proposal, then, is that, at finite q, for n ≥ 2, the low temperature entropy takes the form

S/N = S_0^free − ∫_0^{1/q̃} dx π(1/2 − x) tan(πx) + ℵ(s, n) · 4π² α(q̃)/(sβJ) . (4.8)

We test this conjecture for n = 3 and n = 4 by numerically computing the entropy for q = 12, q̃ = 4 and q = 16, q̃ = 4, respectively.
As before, we use a single low temperature point and subtract the zero temperature entropy to isolate the linear-in-temperature coefficient. To compute the predicted linear-in-temperature coefficient, as in (4.8), we again use values of α(q̃) from the Padé approximant described in Appendix A, but now use values of ℵ(s, n) obtained numerically at large q. The results are shown in figure 6, showing strong agreement.

The intermediate IR phase at finite q. We now provide evidence that even at finite q, the RG flow at small enough s develops two near-conformal regimes. We consider the cases of n = 2, with q = 4 and q̃ = 2, and n = 3, with q = 6 and q̃ = 2. In figure 7, we plot the entropy as a function of (βJ)^{−1} for different values of the coupling s², from s² = 1 to s² = 10^{−6}, for both models. In each case, at large temperatures, all the curves approximate the entropy of the free fermions. As we move towards the IR, and similar to what happens at large q, there are two clearly different behaviours depending on the value of s². When s² ∼ 1, the entropy goes directly into the deep IR phase. When s² ≪ 1, a different intermediate IR phase appears, with a linear-in-temperature regime. It is natural to suspect that at even lower temperatures, these theories will also end up flowing into the deep IR phase. However, the numerical techniques employed are only powerful enough to reach (βJ)^{−1} ∼ 10^{−3}. This does not permit us to compute a full RG flow exhibiting both the intermediate and the deep IR phase. Implementing an algorithm based on spectral methods might provide an efficient way of reaching even lower temperatures, of order at least (βJ)^{−1} ∼ 10^{−4} [37]. We leave such an approach for future work.

Large q with n = 1 + ε

To finish this section, we discuss a novel analytically tractable RG flow at large q, for n = 1 + ε, with ε a small positive number. We first discuss the leading order solution g_0(τ) with n = 1.
At the level of the effective action (3.2), the deformed model with n = 1 is equivalent to a single SYK model with random couplings averaged over a Gaussian distribution with a variance proportional to J²(1 + s²). In fact, at large q, the differential equation (3.5) for n = 1 becomes

∂²_τ g_0(τ) = 2J²(1 + s²) e^{g_0(τ)} , (4.9)

which, after imposing thermal boundary conditions, g_0(0) = g_0(β) = 0, is solved by

e^{g_0(τ)} = cos²ν / cos²( 2ν(1/2 − |τ|/β) ) ,  βJ = 2ν / ( √(1 + s²) cos ν ) . (4.10)

We now consider n = 1 + ε, perturbatively in ε. We can expand g(τ) as

g(τ) = g_0(τ) + ε g_1(τ) + O(ε²) . (4.11)

Substituting this into the differential equation (3.5), we find, at leading order in ε, a differential equation for g_1(τ),

∂²_τ g_1(τ) = 2 e^{g_0(τ)} J² [ (1 − g_0(τ)) s² + g_1(τ)(1 + s²) ] . (4.12)

It is straightforward to show that

g_1(τ) = ( s²/(1 + s²) ) g_0(τ) , (4.13)

is the solution to (4.12) with boundary conditions g_1(0) = g_1(β) = 0. To see this, note that if we plug this expression for g_1(τ) into (4.12), we get

∂²_τ g_0(τ) = 2J²(1 + s²) e^{g_0(τ)} , (4.14)

which is exactly (4.9), so it is satisfied by g_0(τ). Next, we consider the corrections to the free energy coming from this deformation. Expanding (3.6) to leading order in ε we obtain

βF/N |_{n=1+ε} = −S_0^free + ν(ν − 2 tan ν)/q̃² − 2ν(ν − 2 tan ν)/(1 + s²) · ε/q̃² + O(ε²) . (4.15)

Using (2.18) we find that the entropy to leading order in ε is given by

S/N |_{n=1+ε} = S_0^free − ν²/q̃² + 2ν²/(1 + s²) · ε/q̃² + O(ε²) . (4.16)

This can be used to find the entropy as a function of temperature for the full RG flow. Though we do not observe an intermediate IR regime at this order in ε, we are able to access some interesting features of the deep IR.
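That (4.13) indeed solves (4.12) can be confirmed numerically: build g_0 from (4.10), form g_1 = s²/(1+s²) g_0, and check the residual of (4.12) by finite differences (illustrative parameter values).

```python
import numpy as np

beta, J, s = 8.0, 1.0, 0.7          # illustrative values
Jeff = J * np.sqrt(1 + s**2)        # n = 1 effective coupling

# nu from beta*J = 2 nu / (sqrt(1+s^2) cos nu), i.e. beta*Jeff = 2 nu / cos nu
lo, hi = 1e-9, np.pi / 2 - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if 2 * mid / np.cos(mid) < beta * Jeff:
        lo = mid
    else:
        hi = mid
nu = 0.5 * (lo + hi)

tau = np.linspace(0.0, beta, 40001)
g0 = np.log(np.cos(nu) ** 2 / np.cos(2 * nu * (0.5 - tau / beta)) ** 2)
g1 = s**2 / (1 + s**2) * g0          # claimed first-order solution (4.13)
assert abs(g1[0]) < 1e-12 and abs(g1[-1]) < 1e-12   # g1(0) = g1(beta) = 0

# residual of (4.12): g1'' = 2 e^{g0} J^2 [ (1 - g0) s^2 + (1 + s^2) g1 ]
h = tau[1] - tau[0]
g1pp = (g1[2:] - 2 * g1[1:-1] + g1[:-2]) / h**2
rhs = 2 * np.exp(g0[1:-1]) * J**2 * ((1 - g0[1:-1]) * s**2 + (1 + s**2) * g1[1:-1])
assert np.max(np.abs(g1pp - rhs)) < 1e-3
```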
Expanding (4.16) in powers of (βJ)^{−1} we find the correction to the entropy,

S/N |_{n=1+ε} = S/N |_{n=1} + [ π²/(2(1 + s²)) − 2π²/(1 + s²)^{3/2} · 1/(βJ) + O((βJ)^{−2}) ] ε/q̃² + O(ε²) , (4.17)

where the entropy at low temperatures for n = 1 is given by equation (2.25) with J → √(1 + s²) J and q → q̃. Equation (4.17) provides two predictions that can be tested against numerical computations. We study these next.

Zero temperature entropy. Note that the correction to the zero temperature entropy at large q̃ is given by

lim_{βJ→∞} [ q̃² S(βJ)/N |_{n=1+ε} − q̃² S(βJ)/N |_{n=1} ] = π²/(2(1 + s²)) ε + O(ε²) . (4.18)

Linear-in-temperature entropy. We can also find analytically the correction to the linear-in-temperature term in the entropy, and from this the correction to ℵ(s, n) near n = 1. From (4.17), we find

ℵ(s, 1 + ε) − ℵ(s, 1) = −ε · 2s/(1 + s²)^{3/2} + O(ε²) , (4.19)

whereas ℵ(s, 1) is given by (4.3). Note that the expected value of ℵ(s, n) is lower than the value for n = 1. In figure 9, we test the predicted correction in (4.19) against numerical computations for s² = 0.1 and small values of ε, finding remarkable agreement. Note that they match at small ε, showing that ℵ(s, n) initially decreases as n moves away from n = 1. For larger ε, ℵ(s, n) starts increasing again, which agrees with the results shown in figure 3. We do not see the initial decrease of ℵ(s, n) in figure 3, since the lowest ε considered there is ε = 0.05, much larger than the values shown in this plot.

Conformal perturbation theory

In this section we explore thermodynamic contributions to the free energy and entropy of the deformed SYK model near each fixed point. We argue that the leading terms in the entropy expansions (3.9) and (3.11) can be understood as perturbations to the conformal actions of the single SYK models sH_q̃ and H_q, respectively. In particular, we will argue that in both cases the leading irrelevant correction to the free energy, which is proportional to the temperature, stems from a Schwarzian action.
Moreover, in the intermediate IR regime, the leading relevant correction away from the intermediate fixed point can be understood as coming from a relevant conformal operator in conformal perturbation theory.

Schwarzian for the deep IR

In section 4.2 numerical evidence was presented indicating that the entropy, S, for the finite q deformed model in the deep IR takes the low temperature expansion

S/N = const + ℵ(s, n) · 4π² α(q̃)/(sβJ) + · · · . (5.1)

The linear-in-temperature part of S is modified from that of an undeformed SYK model with Hamiltonian sH_q̃ by ℵ(s, n). We would like to understand the linear-in-temperature part of S as coming from the leading correction to a conformal piece of the action associated with the SYK Hamiltonian sH_q̃ [29]. More explicitly, by taking Σ → Σ + ∂_τ, we can re-write the GΣ-action (3.2) as I = Ĩ_CFT + Ĩ_UV, where

Ĩ_CFT = −(1/2) log det(−Σ) + (1/2) ∫_0^β ∫_0^β dτ_1 dτ_2 [ ΣG − s²J² (2^{q̃−1}/q̃²) G^{q̃} ] , (5.2)
Ĩ_UV = (1/2) ∫_0^β ∫_0^β dτ_1 dτ_2 [ δ(τ_1 − τ_2)∂_{τ_2} G − J² (2^{q−1}/q²) G^q ] . (5.3)

The CFT action (5.2) is the same as the action (2.13) discussed in section 2 upon making the replacements J → sJ and q → q̃. The UV action, Ĩ_UV, has an additional term as compared to that of the undeformed SYK model. We have a continuous family of saddle solutions of Ĩ_CFT, written in terms of reparameterisations φ(τ) of the circle to itself with a single unit of winding,

G_φ(τ_1, τ_2) = φ'(τ_1)^∆ [ b sgn(τ_1 − τ_2) ( π / ( βsJ sin(π(φ(τ_1) − φ(τ_2))/β) ) )^{2∆} ] φ'(τ_2)^∆ ,  ∆ ≡ 1/q̃ , (5.4)

where the constant b is given by (2.15). We now argue that the leading correction to Ĩ_CFT due to the effect of Ĩ_UV takes the form of a Schwarzian action and gives a linear-in-temperature contribution to the specific heat. The argument we make is analogous to the one used for the single SYK model [2,29]. It will be convenient to rewrite the reparameterisation modes φ(τ) in terms of modes on the line f(τ), defined by

f(τ) = tan( πφ(τ)/β ) .
After this transformation we find our solutions (5.4) are parameterised as

G_f(\tau_1,\tau_2) = \frac{b}{(s\mathcal{J})^{2\Delta}}\,\frac{f'(\tau_1)^{\Delta}\, f'(\tau_2)^{\Delta}}{|f(\tau_1)-f(\tau_2)|^{2\Delta}}\,. \qquad (5.6)

We will want to use (5.6) in Ĩ_UV, so that we only pick out contributions to the path integral along the conformal saddle solutions. We expand G_f(τ₁, τ₂) around (τ₊, τ₊), where τ₊ ≡ (τ₁ + τ₂)/2, giving a series in powers of τ₁₂ ≡ τ₁ − τ₂,

G_f(\tau_1,\tau_2) = \frac{1}{(s\mathcal{J})^{2\Delta}\,|\tau_{12}|^{2\Delta}}\left[1 + \frac{\Delta}{6}\,\tau_{12}^2\,\mathrm{Sch}(f(\tau_+),\tau_+) + \mathcal{O}(\tau_{12}^3)\right], \qquad (5.7)

where the Schwarzian derivative is defined by

\mathrm{Sch}(f(\tau_+),\tau_+) \equiv \frac{f'''(\tau_+)}{f'(\tau_+)} - \frac{3}{2}\left(\frac{f''(\tau_+)}{f'(\tau_+)}\right)^2 = \frac{1}{2}\left(\frac{2\pi}{\beta}\right)^2 \varphi'(\tau_+)^2 + \frac{\varphi'''(\tau_+)}{\varphi'(\tau_+)} - \frac{3}{2}\left(\frac{\varphi''(\tau_+)}{\varphi'(\tau_+)}\right)^2. \qquad (5.8)

We now substitute the expansion (5.7) into Ĩ_UV while changing the integration variables from (τ₁, τ₂) to (τ₊, τ₁₂). Due to the periodicity of our fields in β we can take the new region of integration as 0 ≤ τ₁₂ < β and 0 ≤ τ₊ < β. We then carry out the integral over τ₁₂ by imposing a cutoff at short time scales at τ₁₂ = ε/sJ, where ε is a small positive number (the range of integration is taken to be ε/sJ ≤ τ₁₂ < β − ε/sJ). Assuming n ≡ q/q̃ ≠ 3/2, we find a term proportional to the Schwarzian action in terms of the cutoff ε,

I_{\text{Sch}} = \left[\frac{b\,n(n-\tilde{q})\,\varepsilon^{1-2\Delta}}{6\tilde{q}^2}\,\frac{1}{s\mathcal{J}} - \frac{n}{2n-3}\,\frac{(2b)^{\tilde{q}}\,\varepsilon^{3-2n}}{24\tilde{q}^2 s^2}\,\frac{1}{s\mathcal{J}}\right]\int_0^\beta d\tau_+\,\mathrm{Sch}(f(\tau_+),\tau_+)\,. \qquad (5.9)

Here, we have kept only terms in the coefficient of the Schwarzian that are constant in β, as these contribute to the linear-in-temperature specific heat when the Schwarzian is evaluated on shell. The first term in the Schwarzian coefficient (5.9) stems from the kinetic term in Ĩ_UV, while the second stems from the non-kinetic term in Ĩ_UV. Notice that in the large q limit the cutoff dependence of the first term goes like ε whilst that of the second term goes like ε^{3−2n}. This suggests that for n close to 1 both terms are important as we take the cutoff ε → 0. For larger values of n, the second term dominates.⁶
For the sake of concreteness, let us focus on the case n = 2. Equation (5.9) becomes

I_{\text{Sch}} = \left[\frac{b(2-\tilde{q})\,\varepsilon^{1-2\Delta}}{3\tilde{q}^2}\,\frac{1}{s\mathcal{J}} - \frac{(2b)^{\tilde{q}}}{12\tilde{q}^2 s^2\,\varepsilon}\,\frac{1}{s\mathcal{J}}\right]\int_0^\beta d\tau_+\,\mathrm{Sch}(f(\tau_+),\tau_+)\,. \qquad (5.10)

The non-kinetic term goes like 1/ε and so provides the most significant correction to the conformal part of the action. The reparametrisation symmetry is broken by choosing the saddle of the Schwarzian, which occurs at φ(τ) = τ. Substituting this into (5.10) we find the linear-in-temperature contribution to the entropy to leading order in ε,

\frac{S_{\text{Sch}}}{N} = \frac{(2b)^{\tilde{q}}}{6\tilde{q}^2 s^2\,\varepsilon}\,\frac{2\pi^2}{s\beta\mathcal{J}}\,. \qquad (5.11)

The takeaway message of this analysis is that, due to the dominance of the second term in (5.10), the correction to the conformal action comes from the strongly coupled phase of the theory rather than from the weakly coupled UV regime, as is customary for the undeformed SYK model. Holographically, for those deformed SYK models having both an intermediate and a deep IR near-fixed point, we anticipate the emergence of the Schwarzian mode in the interior of an asymptotically AdS₂ spacetime flowing to a distinct infrared AdS₂ region.

5.2 Schwarzian for the intermediate IR

We now proceed to consider the conformal fixed point associated to H_q with a small perturbation near the fixed point. By taking Σ → Σ + ∂_τ in (3.2) we can write I = I_CFT + I_pert, where

I_{\text{CFT}} = -\frac{1}{2}\log\det(-\Sigma) + \frac{1}{2}\int_0^\beta\!\!\int_0^\beta d\tau_1\, d\tau_2 \left[\Sigma G - \frac{\mathcal{J}^2\, 2^{q-1}}{q^2}\, G^{q}\right], \qquad (5.12)

I_{\text{pert}} = \frac{1}{2}\int_0^\beta\!\!\int_0^\beta d\tau_1\, d\tau_2 \left[\delta(\tau_1-\tau_2)\,\partial_{\tau_2} G - \frac{s^2\mathcal{J}^2\, 2^{\tilde{q}-1}}{\tilde{q}^2}\, G^{\tilde{q}}\right]. \qquad (5.13)

As in the previous section, we make an expansion of the saddle solution to I_CFT in powers of τ₁₂, written in terms of soft modes f(τ₊),

G_f(\tau_1,\tau_2) = \frac{1}{\mathcal{J}^{2\Delta}\,|\tau_{12}|^{2\Delta}}\left[1 + \frac{\Delta}{6}\,\tau_{12}^2\,\mathrm{Sch}(f(\tau_+),\tau_+) + \mathcal{O}(\tau_{12}^3)\right], \qquad (5.14)

where now Δ = 1/q. Substituting this into I_pert, we change variables to (τ₁₂, τ₊) and carry out the τ₁₂ integral with a short time scale cutoff ε/J.
We keep only terms constant in β, since these are the terms that contribute to the linear-in-temperature part of the entropy when the Schwarzian is evaluated on shell. Again focusing on n = 2 for the sake of concreteness, we find

I_{\text{Sch}} = \frac{b(1-q)\,\varepsilon^{1-2\Delta}}{6q^2}\,\frac{1}{\mathcal{J}}\int_0^\beta d\tau_+\,\mathrm{Sch}(f(\tau_+),\tau_+)\,. \qquad (5.15)

The coefficient of the Schwarzian is seen to come purely from the kinetic term in (5.13), mimicking the behaviour of the undeformed SYK model with Hamiltonian H_q. Accordingly, the linear-in-temperature terms in the entropy expansions (3.11) and (2.25) are found to be the same, and do not depend on s.

5.3 Conformal relevant perturbation theory

We would now like to test the hypothesis that the leading infrared correction away from conformality of the intermediate IR phase can be studied using conformal perturbation theory. The starting point [37, 38] is to view the deformed SYK model near the intermediate fixed point as a conformal field theory perturbed by a series of relevant primary operators O_h(τ) of weight h ∈ (0, 1).⁷ More explicitly,

I = I_{\text{CFT}} + \sum_{h\,\in\,\text{rel.}} g_h \int_0^\beta d\tau\, \mathcal{O}_h(\tau)\,, \qquad (5.16)

where h denotes the conformal weight of the given operator. We note here that the spectrum of conformal operators discussed in [3, 16, 37-39] does not contain any relevant operators with h ∈ (0, 1). They are in fact all irrelevant and are encoded in the operator product expansion of the fusion of two fermionic operators. Motivated by the structure of the Hamiltonian deformation (3.1), here we will focus instead on the following microscopic operator,

\mathcal{O}_h(\tau) \equiv N_h\, i^{\tilde{q}/2} \sum_{1\le i_1<\cdots<i_{\tilde{q}}\le N} J_{i_1 i_2\cdots i_{\tilde{q}}}\, \psi_{i_1}\psi_{i_2}\cdots\psi_{i_{\tilde{q}}}\,. \qquad (5.17)

This operator is to be understood in an averaged sense, since it depends on the couplings J_{i₁i₂⋯i_q̃} which are averaged over.⁸ The operator O_h(τ) involves a product of q̃ fermions.
In the undeformed model each fermion has scaling dimension Δ_ψ = 1/q, so the naive estimate of the total weight of the operator (5.17) is h = 1/n up to small corrections, which is within the relevant window h ∈ (0, 1). We fix the value of N_h implicitly by our choice of normalisation for the conformal two-point function averaged over the couplings,

\langle \mathcal{O}_h(\tau_1)\, \mathcal{O}_h(\tau_2)\rangle_\beta = N \left[\frac{\pi}{\beta\mathcal{J}\,\sin\frac{\pi\tau_{12}}{\beta}}\right]^{2h}. \qquad (5.18)

The action I_CFT in (5.16) governs the intermediate IR fixed point. According to conformal perturbation theory we find the following free energy,

\beta F = \beta F_{\text{CFT}} + g_h \int_0^\beta d\tau\, \langle \mathcal{O}_h\rangle_\beta - \frac{g_h^2}{2}\int_0^\beta\!\!\int_0^\beta d\tau_1\, d\tau_2\, \langle \mathcal{O}_h(\tau_1)\,\mathcal{O}_h(\tau_2)\rangle_\beta + \cdots\,. \qquad (5.19)

Here O_h is the relevant operator (5.17), and again it is understood that we are averaging over the couplings. The one-point function of O_h vanishes under the assumption of conformal invariance of the vacuum. Using the conformal form of the two-point function (5.18), the second order correction is given by [37, 38, 41]

-\frac{\beta\,\delta_2 F_h}{N} = \frac{\pi^{2h-\frac{1}{2}}\,\Gamma\!\left(\frac{1}{2}-h\right)}{2\,\Gamma(1-h)}\,\frac{g_h^2}{\mathcal{J}^2}\,(\beta\mathcal{J})^{2-2h}\,. \qquad (5.20)

We will now provide evidence that the above correction indeed gives the leading correction to the intermediate CFT as we flow towards the IR. First, we consider the large q limit with n = 2, where we have the analytical form of the correction. We then consider general n in the large q limit and at finite q, where we compare to numerics.

Case I: n = 2. The intermediate IR CFT free energy is known analytically [12] at large q with q/q̃ = 2. Concretely, in the regime 1 ≪ βJ ≪ 1/s², the free energy of the deformed model can be written as

-\frac{\beta F}{N} = \left[\frac{\beta\mathcal{J}}{q^2} + S_0^{\text{free}} - \frac{\pi^2}{4q^2} + \frac{\pi^2}{2q^2}\,\frac{1}{\beta\mathcal{J}} + \cdots\right] + \left[\frac{2s^2}{q^2}\,\beta\mathcal{J}\,\log\frac{2\beta\mathcal{J}}{\pi} + \cdots\right], \qquad (5.21)

where the terms in the first square bracket are derivable from I_CFT given by (5.12), accompanied by the leading irrelevant operators [37, 38], and they grow with increasing temperature. The terms in the second square bracket stem from the corrections due to relevant operators.
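The h → 1/2 limit of (5.20), used in the matching below, can be checked numerically: setting h = 1/2 − h_ε, the coefficient should reduce to a 1/(2h_ε) pole plus log(2βJ/π), with no finite constant left over. A minimal sketch of this check; the tolerance values and the sample point βJ = 7 are illustrative choices, not taken from the text:

```python
import math

def coefficient(h, x):
    """The correction -beta*delta_2 F_h / N from (5.20), per unit g_h^2/J^2,
    divided by one power of x = beta*J for convenience."""
    return (math.pi ** (2 * h - 0.5) * math.gamma(0.5 - h)
            / (2 * math.gamma(1 - h)) * x ** (1 - 2 * h))

def residual(h_eps, x):
    """Deviation from the predicted 1/(2 h_eps) + log(2x/pi) structure; O(h_eps)."""
    return coefficient(0.5 - h_eps, x) - 1 / (2 * h_eps) - math.log(2 * x / math.pi)
```

The residual shrinks linearly with h_ε, confirming that the divergence only shifts the zero point energy while the finite piece reproduces the log(2βJ/π) term of (5.21).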
We will now argue that the leading relevant correction to the free energy arises from a relevant operator of weight h = 1/2. Given that the expression (5.20) diverges when we take h = 1/2, we are led to a divergent contribution to the free energy which requires regularisation. As a simple regularisation scheme, we take h = 1/2 − h_ε for some small number h_ε > 0, such that

-\frac{\beta\,\delta_2 F_{h=1/2-h_\varepsilon}}{N} = \frac{\Gamma(h_\varepsilon)^2\,(g_{1/2}^2/\mathcal{J}^2)}{4\,\Gamma(2h_\varepsilon)}\,\beta\mathcal{J}\left(\frac{2\beta\mathcal{J}}{\pi}\right)^{2h_\varepsilon}. \qquad (5.22)

Expanding in small h_ε gives

-\frac{\beta\,\delta_2 F_{h=1/2}}{N} = \frac{g_{1/2}^2}{2h_\varepsilon\,\mathcal{J}^2}\,\beta\mathcal{J} + \frac{g_{1/2}^2}{\mathcal{J}^2}\,\beta\mathcal{J}\,\log\frac{2\beta\mathcal{J}}{\pi} + \mathcal{O}(h_\varepsilon)\,. \qquad (5.23)

Consequently, the divergent term only affects the zero point energy, whose contribution to the free energy is independent of β. The remaining h_ε-independent terms agree with (5.21) provided we take

g_{1/2}^2 \to \frac{2s^2\mathcal{J}^2}{q^2} \quad \text{as} \quad q \to \infty\,. \qquad (5.24)

This provides evidence that for n = 2, and in the large q limit, we can view O_h in (5.17) as a relevant conformal primary of conformal dimension h = q̃/q = 1/2. We now consider the case of general n.

Case II: General n. For general n we do not have access to an analytic form of the free energy near the intermediate IR fixed point. Nonetheless, we can test the prediction from conformal perturbation theory against numerical results. To do so, we compute the entropy of the model numerically in the large q limit with q = nq̃, as described in section 4.1. Taking h = 1/n in (5.20) and using the formula S = (1 − β∂_β)(−βF), we find that the correction to the entropy due to the relevant perturbation is given by

\frac{\delta_2 S_{h=1/n}}{N} = \left[1 - \left(2 - \frac{2}{n}\right)\right]\frac{\pi^{\frac{2}{n}-\frac{1}{2}}\,\Gamma\!\left(\frac{1}{2}-\frac{1}{n}\right)(g_{1/n}^2/\mathcal{J}^2)}{2\,\Gamma\!\left(1-\frac{1}{n}\right)}\,(\beta\mathcal{J})^{2-\frac{2}{n}}\,. \qquad (5.25)

From this it follows that the entropy near the intermediate IR fixed point, as predicted by conformal perturbation theory, can be expressed as

q^2\left(\frac{S}{N} - S_0^{\text{free}}\right) = \left[-\frac{\pi^2}{4} + \frac{\pi^2}{\beta\mathcal{J}} + \cdots\right] + q^2\,\frac{\delta_2 S_{h=1/n}}{N} + \cdots\,. \qquad (5.26)
The terms in the first square bracket are derivable from I_CFT and the irrelevant operators, whilst the terms in the second square bracket are proposed to come from the relevant deformation. In figure 10 we plot numerical results for the entropy in the intermediate IR phase against the analytic prediction (5.26), as well as the linear-in-temperature curve without the correction from the relevant perturbation. We show plots for s² = 10⁻⁶ and s² = 10⁻⁸, both with curves for n = 3, 4, 5, 6 and 10. Provided

g_{1/n}^2 \to \frac{n^2 s^2 \mathcal{J}^2}{2q^2} \quad \text{as} \quad q \to \infty\,, \qquad (5.27)

there is strong agreement with the numerics. We can also study higher order corrections from conformal perturbation theory. By dimensional analysis the k-th order correction is found to be of the form

\frac{\delta_k S_{h=1/n}}{N} \propto s^k\,(\beta\mathcal{J})^{k-\frac{k}{n}}\,, \qquad k \ge 2\,. \qquad (5.28)

To find the sub-leading relevant correction we subtract the prediction (5.26), up to and including the leading relevant correction, from the numerically calculated entropy and perform a numerical fit. For the values of n we have tested we find the sub-leading relevant correction to be proportional to s⁴(βJ)^{4−4/n}. We also find evidence, as discussed below, that this is true even at finite q. The absence of a term proportional to s³(βJ)^{3−3/n} leads us to believe that the conformal three-point function is sub-leading in the large N expansion, as is seen to be the case for the conformal three-point functions discussed in [42]. Finally, it is also interesting to note that we also find an intermediate IR regime for values of n such that 1 < n < 2, whose behaviour is in agreement with (5.26). In figure 11 we plot the intermediate IR regime for n = 1.3 and various values of s², again seeing excellent agreement with the prediction from conformal perturbation theory. From our analysis in section 4.1 we would also expect the zero temperature entropy of such flows to have a non-trivial s dependence, giving them an additional richness compared to the case n ≥ 2.

Case III: Finite q. We now test whether the perturbative correction (5.25) still applies at finite q, q̃ and large N.
In this case, the predicted entropy near the intermediate IR fixed point is given by

\frac{S}{N} - S_0^{\text{free}} = \left[-\int_0^{1/q} dx\, \pi\left(\frac{1}{2}-x\right)\tan(\pi x) + \frac{4\pi^2\alpha(q)}{\beta\mathcal{J}} + \cdots\right] + \frac{\delta_2 S_{h=1/n}}{N} + \cdots\,, \qquad (5.29)

and the coupling constant of our conformal operator takes the form

g_{1/n}^2 = \gamma(q,\tilde{q})\, s^2 \mathcal{J}^2\,, \qquad (5.30)

where γ(q, q̃) is an unknown function which, from (5.27), we know tends to 1/(2q̃²) in the large q̃ limit. The value of γ(q, q̃) can be found by fitting the prediction (5.29) to numerically determined values for the entropy in the intermediate IR phase. In figure 12 we plot numerical results against the prediction (5.29) and (5.30), with the values for γ(q, q̃) shown in Table 1. We show plots with s² = 10⁻⁴ and s² = 10⁻³.

  q    q̃    γ(q, q̃)
  4    2    0.098
  6    2    0.111
  8    2    0.116
  8    4    0.028

As for the large q limit, we can also find the sub-leading relevant correction by performing a numerical fit. In all cases considered, we again find that the sub-leading relevant correction is proportional to s⁴(βJ)^{4−4/n}.

6 Outlook - Geometrisation of an RG flow

The goal of this paper has been to explore RG flows at strong coupling, and in particular at finite temperature, for deformations of SYK models. We have identified a class of models permitting a robust treatment by means of both numerical and analytic methods. Given the holographic character of SYK models, our analysis opens up an interesting chapter in the story of holographic renormalisation [4, 5], which has so far been explored mostly from the bulk perspective. We have identified models exhibiting RG flows between two near-fixed points and provided an interpretation in terms of conformal perturbation theory. The general character of the models is a sum of two ordinary SYK Hamiltonians (3.1), but with differing numbers of interacting fermions. As for the ordinary SYK model, the flows we study preserve a rich thermodynamic structure and exhibit an extensive entropy all the way into the deep infrared/small temperature regime.
Our analysis is performed entirely from the perspective of the microphysical theory. From a holographic perspective, it is interesting to assess what features the putative holographic dual will exhibit. In the vicinity of each near-fixed point, it is natural to postulate that the bulk theory will mimic that of an ordinary SYK model, whose thermodynamic features in the large N limit are captured by a JT gravity theory governed by the classical Euclidean action

S_E = -\frac{1}{2\kappa}\int_{\mathcal{M}} d^2x\,\sqrt{g}\,\big(\phi R + U(\phi)\big) - \frac{1}{\kappa}\int_{\partial\mathcal{M}} \sqrt{h}\,\phi K\,, \qquad (6.1)

with dilaton potential U(φ) = −2αφ, with α real and positive. For U(φ) = −2αφ, one finds that the two-dimensional metric g_ij is Euclidean AdS₂ at the classical level. At finite temperature, M is taken to have a disk topology with S¹ boundary ∂M, and we have the metric on the Poincaré disk. The thermodynamic properties of asymptotically AdS₂ geometries follow readily from the form of U(φ). The specific heat C_U and temperature, for instance, are given by [43-45]

C_U = \frac{2\pi}{\kappa}\,\frac{U(\phi_h)}{\partial_\phi U(\phi_h)}\,, \qquad \beta = \frac{4\pi}{U(\phi_h)}\,, \qquad (6.2)

where φ_h is the value of the dilaton field φ at the Euclidean horizon. It follows that the near-fixed point exhibits a specific heat linear in the temperature. For two near-fixed points, the ratio of the specific heats fixes the ratio, R ≡ α_UV/α_IR, of the slopes for the two linear regimes of U(φ). For the models we have studied, we find R > 1. This is in line with an increasing number of degrees of freedom as we go to higher temperatures and can be viewed as a consequence of unitarity and thermal equilibrium. Interestingly, as we increase the temperature, we remove an increasingly large portion of the interior AdS₂. At large enough temperatures, the remaining geometry becomes a pure AdS₂ and there is a boundary soft mode governed by the Schwarzian action. This is the bulk dual of the Schwarzian associated to the intermediate near-fixed point discussed in section 5.2.
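The linearity of the specific heat can be verified symbolically from (6.2) for a linear dilaton potential. The sketch below takes U(φ) = 2αφ with a positive slope purely for illustration (sign conventions for the potential differ between references), so it is an assumption-laden sketch rather than a fixed convention:

```python
import sympy as sp

T, kappa, alpha, phi = sp.symbols('T kappa alpha phi', positive=True)

U = 2 * alpha * phi                              # linear dilaton potential (illustrative sign choice)
beta = 4 * sp.pi / U                             # inverse temperature from (6.2)
phi_h = sp.solve(sp.Eq(beta, 1 / T), phi)[0]     # horizon value of the dilaton at temperature T

C = (2 * sp.pi / kappa) * U / sp.diff(U, phi)    # specific heat from (6.2)
C_T = sp.simplify(C.subs(phi, phi_h))            # C as a function of temperature
```

The result, C_T = 4π²T/(κα), is linear in T with a slope inversely proportional to the slope of U, so comparing the two linear regimes of a generalised potential relates the ratio of specific heats to R = α_UV/α_IR.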
Continuity of the thermodynamic quantities along the RG flow, throughout which the theory remains in the strongly coupled phase, suggests that the geometric picture continues to hold between the two near-fixed points. For this to occur, one can invoke [12] a more general dilaton potential U(φ), as studied for example in [43-46], with linear behaviour at the two endpoints. The classical geometry will be asymptotically, but not isometrically, Euclidean AdS₂. The presence of a macroscopic entropy in the deep infrared/low temperature regime of the flow leads us to postulate that the dual geometry retains a horizon. In section 5 we argued that the RG flow is triggered by a relevant operator of weight Δ_rel = q̃/q < 1. Thus, the bulk theory should have a corresponding field associated to the relevant operator. Moreover, as one flows to the interior of the geometry an additional AdS₂ region emerges, corresponding to the near-fixed point in the deep infrared. Associated to this is the presence of a soft mode residing at the boundary of the near-AdS₂ geometry in the deep interior region, governed by the Schwarzian action. It is interesting that this soft mode resides entirely within the geometric description.⁹ We depict this phenomenon in figure 13. The appearance of a worldline theory in the midst of a gravitating spacetime is a phenomenon worth pursuing in more detail. Looking forward, it will be interesting to test the hypothesis that the microscopic RG flow is captured by a dilaton-gravity theory with a generalised dilaton potential by computing other observables, such as the correlation functions of the fermionic operators. Moreover, one can consider larger classes of deformations. A particular family of such deformations is given by concatenating multiple SYK Hamiltonians,

H_{\text{tot}} = \sum_{i=1}^{k} \lambda_i\, H_{q_i}\,, \qquad (6.3)

with q₁ > q₂ > … > q_k and λ_i ∈ ℂ.
Although unitarity enforces λ_i ∈ ℝ, it is interesting to consider the more general complex case, as such models make contact with the physics of open quantum systems [20, 21, 50] which, in turn, may bear relevance to the problem of de Sitter. The case k = 3 is particularly interesting, given the recent realisation [48] of a thermodynamically stable macroscopic portion of dS₂ suspended between two approximately AdS₂ geometries, one near the boundary and the other in the deep interior. Technically, this requires reaching lower temperatures in the numerical computations. This might be achieved by incorporating new techniques such as spectral [37] or Krylov [27] methods and/or new approximate models such as the sparse models studied in [51-53]. Building a microphysical model¹⁰ for two-dimensional de Sitter from the ingredients of SYK, as originally envisioned in [44, 47], is left to near-future work.

A Numerical computation of α(q) in a single SYK model

In section 2, we saw that the entropy of the single SYK model has a small temperature expansion given by

\frac{S}{N} = S_0^{\text{free}} - \int_0^{1/q} dx\, \pi\left(\frac{1}{2}-x\right)\tan(\pi x) + \frac{4\pi^2\alpha(q)}{\beta\mathcal{J}} + \cdots\,. \qquad (A.1)

Here we describe how to compute the coefficient α(q) numerically. The first step is to numerically compute the large N entropy, S/N, of the model at a single low temperature point. We then subtract off the temperature independent piece of (A.1) and multiply the answer by βJ/(4π²) to obtain a value for α(q) up to corrections of order (βJ)⁻². To find the entropy, we numerically solve the Schwinger-Dyson equations with s = 0, as described in Appendix C. In figure 14 we plot the numerical values of α(q) and compare them with the two-sided Padé approximant found in [35],

\alpha(q) = \frac{3(3\pi-2)q + \pi^2 - 18\pi + 24}{6q^2\big(2(3\pi-2)q + \pi^3 + 8\big)}\,. \qquad (A.2)

Given the agreement with the numerics, we directly use (A.2) in our numerical computations.
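The Padé approximant (A.2) is simple to transcribe. As a sanity check, it reproduces the exact q = 2 value α(2) = 1/(24π) implied by (D.1), and approaches the large q behaviour α(q) → 1/(4q²). A direct transcription:

```python
import math

def alpha_pade(q):
    """Two-sided Pade approximant (A.2) for the linear-in-temperature coefficient alpha(q)."""
    num = 3 * (3 * math.pi - 2) * q + math.pi**2 - 18 * math.pi + 24
    den = 6 * q**2 * (2 * (3 * math.pi - 2) * q + math.pi**3 + 8)
    return num / den
```

Here alpha_pade(2) agrees with 1/(24π) to machine precision, and 4q² alpha_pade(q) tends to 1 as q grows.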
B Small s expansion for ℵ(s, n)

In this appendix, we provide an analytic form for ℵ(s, n) when n ≥ 2 and s ≪ 1 by fitting the numerical data. For n = 2, we know ℵ(s, n = 2) analytically; it is given in equation (3.10). It is straightforward to obtain

\aleph(s, n=2) \to \frac{s}{2} + \cdots \qquad (B.1)

in the small s expansion. Given the shape of the curves from the numerical results, we propose the following structure for general n in the small s limit,

\aleph(s, n) \to a(n)\, s^{b(n)} + \cdots\,, \qquad (B.2)

where a(n), b(n) can depend on n but are independent of s. To test this proposal and find the form of the functions a(n), b(n), we compute ℵ(s, n) for small values of s, such that 0.01 ≤ s² ≤ 0.02, and different values of n. This is done numerically using the same methodology as described in section 4. For each n, we perform a fit on the data to find a(n) and b(n). For n = 2, we find a(n = 2) = 0.482 and b(n = 2) = 1.02, which are close to the analytic values of 1/2 and 1, respectively. We repeat the procedure for 2 ≤ n ≤ 10. The results for a(n) and b(n) are shown in figure 16. Note that as n → ∞, a(n → ∞) → 1 and b(n → ∞) → 0, so ℵ(s, n → ∞) → 1, as expected from the considerations in section 4.1. Moreover, a simple fit in figure 16(b) shows that b(n) ≈ 4/n². We conclude that

\aleph(s, n) \to a(n)\, s^{4/n^2} + \cdots \quad \text{as} \quad s \to 0\,, \qquad (B.3)

with 1/2 ≤ a(n) ≤ 1.

C Details on the numerical algorithm to solve the SD equations

In this appendix we outline the numerical procedure used to solve the Schwinger-Dyson equations (3.3) and (3.4). The procedure is analogous to the one described in Appendix G of [3] for the single SYK model. The idea is to start with the free solution of the single SYK model as an initial seed for an iterative algorithm that has fast convergence properties.
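A minimal self-contained sketch of this iteration is given below. It implements the weighted update and weight-halving described in this appendix, and the equations it solves are spelled out in (C.1)-(C.3) below, but it uses direct O(M²) Fourier sums on a small grid instead of the FFT, and illustrative values for the grid size, couplings and tolerances; it is a schematic of the method, not the code used for the results in the text.

```python
import numpy as np

def solve_sd(beta, J=1.0, s=0.0, q=4, qt=2, M=256, a=0.5, tol=1e-8, max_iter=2000):
    """Iteratively solve the deformed SYK Schwinger-Dyson equations at inverse temperature beta."""
    n = np.arange(-M // 2, M // 2)
    omega = 2 * np.pi * (n + 0.5) / beta                     # fermionic Matsubara frequencies
    tau = (np.arange(M) + 0.5) * beta / M                    # midpoint grid on (0, beta)
    to_tau = np.exp(-1j * np.outer(tau, omega)) / beta       # G(omega_n) -> G(tau)
    to_omega = np.exp(1j * np.outer(omega, tau)) * beta / M  # Sigma(tau) -> Sigma(omega_n)
    G_w = 1.0 / (-1j * omega)                                # free propagator as the initial seed
    err = err_prev = np.inf
    for _ in range(max_iter):
        G_t = (to_tau @ G_w).real
        Sig_t = (J**2 * 2**(q - 1) / q * G_t**(q - 1)
                 + s**2 * J**2 * 2**(qt - 1) / qt * G_t**(qt - 1))
        Sig_w = to_omega @ Sig_t
        G_new = 1.0 / (-1j * omega - Sig_w)
        err = np.sum(np.abs(G_new - G_w)**2)                 # full absolute error squared
        if err > err_prev:
            a /= 2.0                                         # halve the weight if the error grows
        err_prev = err
        G_w = G_w + a * (G_new - G_w)                        # weighted update
        if err < tol:
            break
    return tau, (to_tau @ G_w).real, err

# Free check: with J = 0 the seed is exact and G(tau) = 1/2 in the interior of (0, beta).
tau, G_free, _ = solve_sd(beta=5.0, J=0.0)
```

For J > 0 the interactions suppress G(τ) in the interior of the interval while G(τ → 0⁺) remains 1/2, which provides a simple qualitative check of a converged solution.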
For the numerical procedure, it is convenient to write (3.3) in frequency space, so that at finite temperature the Schwinger-Dyson equations can be written as

\frac{1}{G(\omega_n)} = -i\omega_n - \Sigma(\omega_n)\,, \qquad (C.1)

\Sigma(\tau) = \mathcal{J}^2\,\frac{2^{q-1}}{q}\,G(\tau)^{q-1} + s^2\mathcal{J}^2\,\frac{2^{\tilde{q}-1}}{\tilde{q}}\,G(\tau)^{\tilde{q}-1}\,, \qquad (C.2)

where ω_n = 2π(n + 1/2)/β are Matsubara frequencies and β is the inverse temperature. At each step in the procedure we update G(ω_n) by a proportion of the error in (C.1),

G_{j+1}(\omega_n) = G_j(\omega_n) + a\left[\frac{1}{-i\omega_n - \Sigma_j(\omega_n)} - G_j(\omega_n)\right], \qquad (C.3)

where the weight a is initially set to a = 0.5. We then use (C.2) to get an update for Σ(ω_n), using the fast Fourier transform (FFT) to switch between frequency and position space. The iteration is continued until the error in (C.1) is deemed to be sufficiently small. We implemented the algorithm in Python using the inbuilt FFT and IFFT functions from the NumPy module. To get good convergence, it is important to discretise the τ-interval into many points, particularly near 0 and β, where we found the largest deviation from the expected solution. 20,000 points is enough to see good plots, but we could go up to 2,000,000 and still run the algorithm in reasonable time. This allowed us to reach inverse temperatures of the order of βJ ∼ 10². Reaching much larger βJ requires significant time and memory. Another important aspect of the numerical code is to keep track of the full absolute error squared, Σ_n |G_{j+1}(ω_n) − G_j(ω_n)|², at each iteration. If it increases, we halve the value of the weighting parameter a. We found that around 50 iterations was sufficient to get convergence to the solution.

D Schwarzian action and entropy for the q = 2 SYK model

In this appendix we use the methodology employed in sections 5.1 and 5.2 to derive the Schwarzian action, and use it to correctly reproduce the linear-in-temperature entropy of the q = 2 SYK model at large N, which is known to be integrable.
For q = 2, we can solve the Schwinger-Dyson equations (2.7) and (2.8) exactly to find that at low temperatures [3]

\frac{S}{N}\Big|_{q=2} = \frac{\pi}{6}\,\frac{1}{\beta\mathcal{J}} + \cdots\,. \qquad (D.1)

Note that the zero-temperature entropy vanishes for q = 2. We want to derive this formula from a Schwarzian action perspective. For that, we take Σ → Σ + ∂_τ in (2.5) and write I = I_CFT + I_UV [2, 29], where

I_{\text{CFT}} = -\frac{1}{2}\log\det(-\Sigma) + \frac{1}{2}\int_0^\beta\!\!\int_0^\beta d\tau_1\, d\tau_2 \left[\Sigma G - \frac{\mathcal{J}^2\, 2^{q-1}}{q^2}\, G^{q}\right], \qquad (D.2)

I_{\text{UV}} = \frac{1}{2}\int_0^\beta\!\!\int_0^\beta d\tau_1\, d\tau_2\, \delta(\tau_1-\tau_2)\,\partial_{\tau_2} G\,. \qquad (D.3)

We then make an expansion of the saddle solution to I_CFT in powers of τ₁₂. It can be written in terms of soft modes f(τ₊), see (5.14). We can substitute this expansion into I_UV, which now becomes an integral over τ₁₂ and τ₊. Carrying out the τ₁₂ integral with a short time scale cutoff ε/J, we are left with the following Schwarzian action,

I_{\text{Sch}} = \frac{b(1-q)\,\varepsilon^{1-2\Delta}}{6q^2}\,\frac{1}{\mathcal{J}}\int_0^\beta d\tau_+\,\mathrm{Sch}(f(\tau_+),\tau_+) = -\frac{1}{24\pi}\,\frac{1}{\mathcal{J}}\int_0^\beta d\tau_+\,\mathrm{Sch}(f(\tau_+),\tau_+)\,, \qquad (D.4)

where for the last equality we used that b = 1/π and Δ = 1/2 in the q = 2 model. Note that the cutoff dependence drops out. Upon taking this Schwarzian action on-shell, we obtain that the entropy becomes

\frac{S_{\text{Sch}}}{N}\Big|_{q=2} = \frac{\pi}{6}\,\frac{1}{\beta\mathcal{J}}\,, \qquad (D.5)

which correctly reproduces (D.1).

Fig. 1: The zero temperature entropy q̃²S₀(s, n) as a function of n, for s² = 0.1, 1, 4. The circles are numerical computations for different values of s², while the dashed black line indicates the analytic value of q̃²S₀(s, n) for a single SYK model.

Fig. 2: ℵ as a function of s² for the deformed SYK model in the large q limit with n = 2. The circles are numerical computations while the blue solid curve shows the analytic result in (3.10), for comparison. At large s, we expect the numerics to tend towards the black dashed line at ℵ = 1.

Fig. 3: ℵ(s, n) as a function of n for s² = 0.1, 1, 4. The circles are numerical computations. For large n, ℵ(s, n) tends towards the expected value of ℵ(s, n) = 1, shown as a dashed black line.

Fig. 4: The entropy as a function of temperature (in logarithmic scale) for the deformed SYK model at large N and large q with n = 3.
In 4(a) we plot the full RG flow accessible to our numerics. The dashed line gives the expected zero temperature entropy (see figure 1, noting that in this case q = 3q̃). In 4(b) we zoom into the intermediate IR regime. The dashed line gives the expected analytic form (3.11).

Fig. 5: The linear-in-temperature coefficient of the entropy as a function of s², in the deformed SYK with n = 2 for finite q and q̃. The circles correspond to numerical computations while the blue solid curve corresponds to (4.7), conjectured from the large q limit behaviour.

Fig. 6: The linear-in-temperature coefficient of the entropy as a function of s², in the deformed SYK for n = 3, 4 with finite q and q̃. The circles correspond to numerical computations while the crosses correspond to (4.8) with the value of ℵ(s, n) obtained numerically in the large q, q̃ limit.

Fig. 7: The entropy as a function of temperature (in logarithmic scale) for the deformed SYK model at large N and finite q. Different colours correspond to different values of s². Circles correspond to numerical computations.

We numerically compute the large q, q̃ entropy for n = 1 and for n = 1 + ε at large βJ for small values of ε and compare with the analytic prediction. We show the results for s² = 0.1, 1, 4 at βJ = 2000 in figure 8, showing agreement between the analytical predictions and the numerical computations.

Fig. 8: The difference in the zero temperature entropy between the large q, q̃ model with n = 1 + ε and n = 1, as a function of small ε, for different values of s². The circles correspond to numerical computations at βJ = 2000, while the solid lines are the analytic prediction from (4.18). For small enough ε, both overlap.

Fig. 9: Difference in the values of ℵ(s, n) between n = 1 + ε and n = 1, as a function of small values of ε, with s² = 0.1. The circles correspond to numerical computations, while the solid blue line is the analytic result from (4.19).
Fig. 10: Entropy as a function of temperature (in logarithmic scale) for the intermediate IR phase in the large N and q expansion. The circles give numerical results. The solid lines give the analytical prediction (5.26) with both the leading irrelevant and relevant corrections. The dashed line gives the analytical prediction (5.26) with only the leading irrelevant correction.

Fig. 11: Entropy as a function of temperature (in logarithmic scale) for n = 1.3 and s² = 10⁻⁵, 10⁻⁴, 10⁻³ in the intermediate IR phase in the large N and q expansion. The circles give numerical results. The solid lines give the analytical prediction (5.26) with both the leading irrelevant and relevant corrections. The dashed line gives the analytical prediction (5.26) with only the leading irrelevant correction.

Fig. 12: Entropy as a function of temperature (in logarithmic scale) for the intermediate IR phase at finite q in the large N expansion. The circles give numerical results. The solid lines give the analytical prediction (5.29) and (5.30) with values for γ(q, q̃) shown in Table 1. The dashed line gives the analytical prediction (5.29) with only the leading irrelevant correction.

Fig. 13: Pictorial representation of the two Schwarzian soft modes appearing inside Euclidean AdS₂. (a) For βJ larger than some critical (βJ)* there is a Schwarzian soft mode appearing in the deep interior of AdS₂. (b) For 1 ≪ βJ ≪ (βJ)*, there is a Schwarzian soft mode residing closer to the AdS boundary. In the large q model with n = 2, (βJ)* ∼ s⁻², with s ≪ 1.

Fig. 14: The coefficient α(q) as a function of q. The circles are numerical computations while the solid blue curve is given by the Padé approximation (A.2). We used βJ ∼ 10^2.3 for the numerical computations.

The results for n = 2, 3, 4, 5 are shown in logarithmic scale in figure 15. The linear form of the plots supports the ansatz in equation (B.2).
Fig. 15: Log-log plots of ℵ(s, n) as a function of s for different values of n. Circles correspond to numerical computations. Dashed lines are fitted curves for the ansatz ℵ(s, n) = a(n) s^{b(n)}.

Fig. 16: (a) Fitted values for a(n) in the ansatz ℵ(s, n) = a(n) s^{b(n)} for small s. (b) Fitted values for b(n) in the ansatz ℵ(s, n) = a(n) s^{b(n)} for small s.

Table 1: Numerical values for γ(q, q̃) in (5.30), found by fitting the prediction (5.29) to numerically determined values for the entropy in the intermediate IR phase.

In [26], for instance, results for N = 34 are reported. For certain observables, it is also possible to partially diagonalise the Hamiltonian using Krylov methods to get up to N = 60, see [27].

This solution is slightly different to the one appearing in [12]. The reason is that the model studied there has fermions with two different flavours, and so effectively the number q of fermionic interactions in each term of the Hamiltonian was twice the one considered here.
It is possible to recover the solution in [12] by simply taking g(τ) → 2g(τ).

In figures 2 and 3 we present numerical results for βJ = 3000. We have also performed this procedure for other values of βJ between 2000 and 3000, allowing us to test the postulated β-dependence of (4.2).

In Appendix D we show that this argument gives the correct low temperature entropy in the integrable case of a single SYK model with q = 2.

Since the coefficient of the Schwarzian governs the linear-in-temperature specific heat, this competition of factors could perhaps underlie the transition we see in the value of ℵ(s, n) for small values of n in figure 3.

Since in one dimension we can conformally map the line to the circle, we can employ conformal perturbation theory methods on the circle.

This is somewhat in the spirit of [40].

It is interesting that, in contrast to operators associated to large black holes, which are highly irrelevant, O_h(τ) is a complicated operator that is relevant. Perhaps, given its averaged nature, one can associate an entropy, different from that of the horizon, to its effect on the bulk spacetime.

A similar emergence of a soft mode in the interior of an interpolating geometry was also discussed for the centaur geometries studied in [47-49].

Rearrangements of a microscopic dual of quantum AdS to obtain dS microstates also play an interesting role in the approach of [54-56], and also [57, 58].
Acknowledgements

It is a pleasure to acknowledge Alexandre Belin, Nikolay Bobev, Shira Chapman, Luca Delacretaz, Masanori Hanada, Eleanor Harris, Diego Hofman, Beatrix Mühlmann, Ben Pethybridge, Andrew Scull, and David Vegh for useful discussions. D.A. is funded by the Royal Society under the grant "The Atoms of a deSitter Universe". The work of D.A.G. is funded by UKRI Stephen Hawking Fellowship "Quantum Emergence of an Expanding Universe". S.U.S. is funded by the Royal Society under the grant "The Resonances of a deSitter Universe". D.A. would like to thank the BEL center, KU Leuven and ULB for their kind hospitality during the completion of this work. D.A.G. would like to further thank the University of Amsterdam, the University of Kentucky and the Perimeter Institute for kind hospitality during the completion of this work. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities. We also acknowledge the use of King's Computational Research, Engineering and Technology Environment (CREATE) [59].

References

[1] S. Sachdev and J. Ye, Gapless spin fluid ground state in a random, quantum Heisenberg magnet, Phys. Rev. Lett. 70 (1993) 3339 [cond-mat/9212030].
[2] A. Kitaev and S. J. Suh, The soft mode in the Sachdev-Ye-Kitaev model and its gravity dual, JHEP 05 (2018) 183 [1711.08467].
[3] J. Maldacena and D. Stanford, Remarks on the Sachdev-Ye-Kitaev model, Phys. Rev. D 94 (2016) 106002 [1604.07818].
[4] J. de Boer, E. P. Verlinde and H. L. Verlinde, On the holographic renormalization group, JHEP 08 (2000) 003 [hep-th/9912012].
[5] S. de Haro, S. N. Solodukhin and K. Skenderis, Holographic reconstruction of space-time and renormalization in the AdS/CFT correspondence, Commun. Math. Phys. 217 (2001) 595 [hep-th/0002230].
[6] J. Maldacena, D. Stanford and Z. Yang, Conformal symmetry and its breaking in two dimensional Nearly Anti-de-Sitter space, PTEP 2016 (2016) 12C104 [1606.01857].
[7] K. Jensen, Chaos in AdS2 Holography, Phys. Rev. Lett. 117 (2016) 111601 [1605.06098].
[8] J. Engelsöy, T. G. Mertens and H. Verlinde, An investigation of AdS2 backreaction and holography, JHEP 07 (2016) 139 [1606.03438].
[9] A. M. García-García, B. Loureiro, A. Romero-Bermúdez and M. Tezuka, Chaotic-Integrable Transition in the Sachdev-Ye-Kitaev Model, Phys. Rev. Lett. 120 (2018) 241603 [1707.02197].
[10] J. Jiang and Z. Yang, Thermodynamics and Many Body Chaos for generalized large q SYK models, JHEP 08 (2019) 019 [1905.00811].
[11] A. Lunkin, A. Kitaev and M. Feigel'man, Perturbed Sachdev-Ye-Kitaev model: a polaron in the hyperbolic plane, 2006.14535.
[12] D. Anninos and D. A. Galante, Constructing AdS2 flow geometries, JHEP 02 (2021) 045 [2011.01944].
[13] D. K. Nandy, T. Cadez, B. Dietz, A. Andreanov and D. Rosa, Delayed Thermalization in Mass-Deformed SYK, 2206.08599.
[14] J. Maldacena and X.-L. Qi, Eternal traversable wormhole, 1804.00491.
[15] A. M. García-García, T. Nosaka, D. Rosa and J. J. M. Verbaarschot, Quantum chaos transition in a two-site Sachdev-Ye-Kitaev model dual to an eternal traversable wormhole, Phys. Rev. D 100 (2019) 026002 [1901.06031].
[16] D. J. Gross and V. Rosenhaus, A Generalization of Sachdev-Ye-Kitaev, JHEP 02 (2017) 093 [1610.01569].
[17] D. Anninos, T. Anous and R. D'Agnolo, Marginal deformations & rotating horizons, JHEP 12 (2017) 095 [1707.03380].
[18] J. Yoon, SYK Models and SYK-like Tensor Models with Global Symmetry, JHEP 10 (2017) 183 [1707.01740].
[19] Y. Gu, A. Kitaev, S. Sachdev and G. Tarnopolsky, Notes on the complex Sachdev-Ye-Kitaev model, JHEP 02 (2020) 157 [1910.14099].
[20] C. Liu, P. Zhang and X. Chen, Non-unitary dynamics of Sachdev-Ye-Kitaev chain, SciPost Phys. 10 (2021) 048 [2008.11955].
[21] A. M. García-García, L. Sá and J. J. M. Verbaarschot, Symmetry Classification and Universality in Non-Hermitian Many-Body Quantum Chaos by the Sachdev-Ye-Kitaev Model, Phys. Rev. X 12 (2022) 021040 [2110.03444].
[22] Y. Gu, X.-L. Qi and D. Stanford, Local criticality, diffusion and chaos in generalized Sachdev-Ye-Kitaev models, JHEP 05 (2017) 125 [1609.07832].
[23] A. Goel, H. T. Lam, G. J. Turiaci and H. Verlinde, Expanding the Black Hole Interior: Partially Entangled Thermal States in SYK, JHEP 02 (2019) 156 [1807.03916].
[24] D. Anninos, T. Anous and F. Denef, Disordered Quivers and Cold Horizons, JHEP 12 (2016) 071 [1603.00453].
[25] W. Fu, D. Gaiotto, J. Maldacena and S. Sachdev, Supersymmetric Sachdev-Ye-Kitaev models, Phys. Rev. D 95 (2017) 026009 [1610.08917].
[26] J. S. Cotler, G. Gur-Ari, M. Hanada, J. Polchinski, P. Saad, S. H. Shenker et al., Black Holes and Random Matrices, JHEP 05 (2017) 118 [1611.04650].
[27] B. Kobrin, Z. Yang, G. D. Kahanamoku-Meyer, C. T. Olund, J. E. Moore, D. Stanford et al., Many-Body Chaos in the Sachdev-Ye-Kitaev Model, Phys. Rev. Lett. 126 (2021) 030602 [2002.05725].
[28] G. Sárosi, AdS2 holography and the SYK model, PoS Modave2017 (2018) 001 [1711.08482].
[29] V. Rosenhaus, An introduction to the SYK model, J. Phys. A 52 (2019) 323001 [1807.03334].
[30] S. Sachdev, Bekenstein-Hawking Entropy and Strange Metals, Phys. Rev. X 5 (2015) 041025 [1506.05111].
[31] D. Stanford and E. Witten, Fermionic Localization of the Schwarzian Theory, JHEP 10 (2017) 008 [1703.04612].
[32] D. Anninos, D. M. Hofman and S. Vitouladitis, One-dimensional Quantum Gravity and the Schwarzian theory, JHEP 03 (2022) 121 [2112.03793].
[33] M. Berkooz, P. Narayan and J. Simon, Chord diagrams, exact correlators in spin glasses and black hole bulk reconstruction, JHEP 08 (2018) 192 [1806.04380].
[34] M. Berkooz, M. Isachenkov, V. Narovlansky and G. Torrents, Towards a full solution of the large N double-scaled SYK model, JHEP 03 (2019) 079 [1811.02584].
[35] G. Tarnopolsky, Large q expansion in the Sachdev-Ye-Kitaev model, Phys. Rev. D 99 (2019) 026010 [1801.06871].
[36] I. Affleck and A. W. W. Ludwig, Universal noninteger 'ground state degeneracy' in critical quantum systems, Phys. Rev. Lett. 67 (1991) 161.
[37] E. A. Cruz and G. Tarnopolsky, Precise Low-Temperature Expansions for the Sachdev-Ye-Kitaev model, 2206.13547.
[38] M. Tikhanovskaya, H. Guo, S. Sachdev and G. Tarnopolsky, Excitation spectra of quantum matter without quasiparticles I: Sachdev-Ye-Kitaev models, Phys. Rev. B 103 (2021) 075141 [2010.09742].
[39] J. Polchinski and V. Rosenhaus, The Spectrum in the Sachdev-Ye-Kitaev Model, JHEP 04 (2016) 001 [1601.06768].
[40] A. Belin and J. de Boer, Random statistics of OPE coefficients and Euclidean wormholes, Class. Quant. Grav. 38 (2021) 164001 [2006.05499].
[41] L. V. Delacretaz, A. L. Fitzpatrick, E. Katz and M. T. Walters, Thermalization and hydrodynamics of two-dimensional quantum field theories, SciPost Phys. 12 (2022) 119 [2105.02229].
[42] D. J. Gross and V. Rosenhaus, All point correlation functions in SYK, JHEP 12 (2017) 148 [1710.08113].
[43] D. Grumiller and R. McNees, Thermodynamics of black holes in two (and higher) dimensions, JHEP 04 (2007) 074 [hep-th/0703230].
[44] D. Anninos and D. M. Hofman, Infrared Realization of dS2 in AdS2, Class. Quant. Grav. 35 (2018) 085003 [1703.04622].
[45] E. Witten, Deformations of JT Gravity and Phase Transitions, 2006.03494.
[46] H. Maxfield and G. J. Turiaci, The path integral of 3D gravity near extremality; or, JT gravity with defects as a matrix integral, JHEP 01 (2021) 118 [2006.11317].
[47] D. Anninos, D. A. Galante and D. M. Hofman, De Sitter Horizons & Holographic Liquids, JHEP 07 (2019) 038 [1811.08153].
[48] D. Anninos and E. Harris, Interpolating geometries and the stretched dS2 horizon, 2209.06144.
[49] D. Anninos, D. A. Galante and B. Mühlmann, Finite Features of Quantum De Sitter Space, 2206.14146.
[50] G. S. Bentsen, S. Sahu and B. Swingle, Measurement-induced purification in large-N hybrid Brownian circuits, Phys. Rev. B 104 (2021) 094304 [2104.07688].
[51] S. Xu, L. Susskind, Y. Su and B. Swingle, A Sparse Model of Quantum Holography, 2008.02303.
[52] A. M. García-García, Y. Jia, D. Rosa and J. J. M. Verbaarschot, Sparse Sachdev-Ye-Kitaev model, quantum chaos and gravity duals, Phys. Rev. D 103 (2021) 106002 [2007.13837].
[53] M. Tezuka, O. Oktay, E. Rinaldi, M. Hanada and F. Nori, Binary-coupling sparse SYK: an improved model of quantum chaos and holography, 2208.12098.
[54] V. Shyam, TT + Λ2 Deformed CFT on the Stretched dS3 Horizon, 2106.10227.
[55] E. Coleman, E. A. Mazenc, V. Shyam, E. Silverstein, R. M. Soni, G. Torroba et al., De Sitter microstates from TT + Λ2 and the Hawking-Page transition, JHEP 07 (2022) 140 [2110.14670].
[56] E. Silverstein, Black hole to cosmic horizon microstates in string/M theory: timelike boundaries and internal averaging, 2212.00588.
[57] L. Susskind, Entanglement and Chaos in De Sitter Holography: An SYK Example, Journal of Holography Applications in Physics 1 (2021) 1 [2109.14104].
[58] F. Ecker, D. Grumiller and R. McNees, dS2 as excitation of AdS2, 2204.00045.
[59] King's Computational Research, Engineering and Technology Environment (CREATE), https://doi.org/10.18742/rnvf-m076, March 2, 2022.
[arXiv:2212.04944 — "Renormalisation Group Flows of the SYK Model", D. Anninos, D. A. Galante and S. U. Sheorey (Department of Mathematics, King's College London). Abstract: We explore computationally tractable deformations of the SYK model. The deformed theories are described by the sum of two SYK Hamiltonians with differing numbers, q and q̃, of interacting fermions. In the large N limit, employing analytic and numerical tools, we compute finite temperature correlation functions and thermodynamic quantities. We identify a novel analytically solvable RG flow in the large q limit. We find that, under certain circumstances, the RG flow in the strongly coupled infrared phase exhibits two regions of linear-in-temperature entropy, which we interpret in terms of Schwarzian actions. Using conformal perturbation theory we compute the leading relevant correction away from the intermediate near-conformal fixed point. Holographic spacetimes in two spacetime dimensions that reproduce the thermodynamics of the microphysical theory are discussed. These are flow geometries that interpolate between two Euclidean near-AdS2 spacetimes with different radii. The Schwarzian soft mode corresponding to the AdS2 region in the deep interior resides entirely within the geometric regime.]
Neutrino self-polarization effect in matter
2 Aug 2004

Andrey Lobanov, Department of Theoretical Physics, Moscow State University, 119992 Moscow, Russia
Alexander Studenikin ([email protected]), Department of Theoretical Physics, Moscow State University, 119992 Moscow, Russia

The quasi-classical theory of the spin light of neutrino (SLν) in background matter, accounting for the neutrino polarization, is developed. The rates of the neutrino transitions ν_L → ν_R and ν_R → ν_L in matter are calculated. It is shown that the SLν in matter leads to neutrino conversion from active ν_L to sterile ν_R states (the neutrino self-polarization effect in matter).

Convincing evidence in favour of a non-zero neutrino mass, obtained during the last few years in atmospheric and solar-neutrino experiments, has also been confirmed by the reactor KamLAND and long-baseline accelerator experiments (see [1] for a review of the present status of neutrino mixing and oscillations). Even within the standard model (minimally extended with an SU(2)-singlet right-handed neutrino), a massive neutrino inevitably has a non-zero magnetic moment µ, generated by the one-loop diagram [2]. Recent studies of the electromagnetic properties of a massive neutrino at the one-loop level, including a discussion of the neutrino magnetic moment, can be found in [3]. It should also be noted that a rather detailed discussion of the neutrino charge radius is presented in the two recent papers [4, 5]. In a series of papers [6-11] we have developed a Lorentz-invariant approach to neutrino oscillations which enables us to study, in particular, the neutrino spin precession in background matter, with the effects of electromagnetic and gravitational fields also accounted for. A review of these studies can be found in [12].
In [10, 11] we predicted a new mechanism of electromagnetic radiation by a neutrino moving in background matter and/or electromagnetic and gravitational fields. We have named this radiation the "spin light of neutrino" and introduced the abbreviation SLν, which we shall use below. The SLν originates from the neutrino spin precession, which can be induced either by weak interactions of the neutrino with the background matter or by external electromagnetic or gravitational fields present in the background environment. It should be noted that this mechanism of electromagnetic radiation by a neutrino moving in a constant magnetic field was also studied previously in [13]. As we have shown in [10], the total power of the SLν in matter is not washed out when the refractive index of the emitted photon is equal to unity, so the SLν cannot be regarded as neutrino Cherenkov radiation (see, for example, [14] and references therein). It was also emphasized in [10] that an initially unpolarized neutrino beam (an equal mixture of active left-handed and sterile right-handed neutrinos) can be converted into a totally polarized beam composed only of ν_R due to the spin light, in contrast to the Cherenkov radiation, which cannot produce the neutrino spin self-polarization effect. The important properties of the SLν (such as the strong beaming of the radiation along the neutrino momentum, the rapid growth of the total radiation power with the neutrino energy and the density of matter, and the possibility of emitting photons with energies up to gamma-rays) enable us to predict that this radiation should be important in various astrophysical environments (quasars, gamma-ray bursts, etc.) and in the dense plasma of the early Universe. In this paper we present a detailed study of the neutrino spin self-polarization effect in matter, which was predicted in our previous paper [10].
In [10] we considered the SLν in matter in the case of unpolarized neutrinos. In the present paper we make a step forward and study the SLν in matter accounting for the neutrino polarization. It is essential that the SLν in matter (like the similar radiation by neutrinos moving in a magnetic field [13]) originates from the spin-flip transitions ν_L → ν_R. Within the quantum approach, the corresponding Feynman diagram of the proposed new process is the standard one-photon emission diagram, with the initial and final neutrino states described by "broad lines" that account for the neutrino interaction with matter. We show below how to derive the transition rate of polarized neutrinos using the quasi-classical method [15] for the description of spin wave functions in the presence of an electromagnetic field, given by the tensor F_µν, and apply it to the case when a massive neutrino with a non-zero magnetic moment is moving and radiating the spin light in background matter. As has been shown in [7], the quasi-classical Bargmann-Michel-Telegdi equation [16], describing the spin evolution of a neutral particle under the influence of an electromagnetic field, can be generalized to the case of a neutrino moving in electromagnetic fields and matter by substituting the external electromagnetic field tensor, F_µν = (E, B), according to the prescription

F_µν → F_µν + G_µν, (1)

where the antisymmetric tensor G_µν = (−P, M) describes the neutrino interaction with the background matter. We have also shown in [8] how to construct the tensor G_µν from the neutrino speed, matter speed, and matter polarization four-vectors.
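The construction of G_µν from the current and speed four-vectors can be checked with a few lines of numerics. The sketch below builds G^µν = ρ⁽¹⁾ ε^µνρλ j_ρ u_λ (the unpolarized-electron form used later in eqs. (10)-(12)) for matter at rest and verifies its antisymmetry and the β-dependence of its entries. The parameter values are illustrative, and the sign convention ε^0123 = +1 with metric (+,−,−,−) is an assumption of this sketch:

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    # parity of a permutation, by counting inversions
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

# Levi-Civita symbol with eps[0,1,2,3] = +1 (convention assumed here)
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = perm_sign(p)

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric (+,-,-,-)

# Illustrative (dimensionless) values, not taken from the paper
rho1, n, beta3 = 0.5, 2.0, 0.6
gamma = 1.0 / np.sqrt(1.0 - beta3**2)

u_up = np.array([gamma, 0.0, 0.0, gamma * beta3])  # neutrino four-velocity
j_up = np.array([n, 0.0, 0.0, 0.0])                # electron current, matter at rest
j_dn, u_dn = eta @ j_up, eta @ u_up                # lowered indices

# G^{mu nu} = rho1 * eps^{mu nu rho lambda} j_rho u_lambda
G = rho1 * np.einsum('mnrl,r,l->mn', eps, j_dn, u_dn)

assert np.allclose(G, -G.T)                            # antisymmetric, as claimed
assert np.allclose(G[0], 0.0)                          # first row vanishes
assert np.isclose(G[1, 2], -gamma * rho1 * n * beta3)  # matches the pattern of eq. (12)
```

For a neutrino moving along the third axis, the only non-zero entries are G^12 = −G^21 = −γρ⁽¹⁾nβ₃, reproducing the structure of the matrix quoted below in eq. (12).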
It is worth noting that the substitution (1) implies that the magnetic field B and the electric field E are shifted by the vectors M and P, respectively:

B → B + M,  E → E − P. (2)

From the new generalized BMT equation for the neutrino spin evolution in an electromagnetic field and matter we finally derive the following equation for the evolution of the three-dimensional neutrino spin vector S (see [8, 10]):

dS/dt = (2µ/γ) [S × (B₀ + M₀)], (3)

B₀ = γ[ B_⊥ + (1/γ) B_∥ + (E_⊥ × β) ],  γ = (1 − β²)^(−1/2), (4)

M₀ = M₀_∥ + M₀_⊥, (5)

M₀_∥ = γβ (n₀/√(1 − v_e²)) { ρ⁽¹⁾_e [ 1 − (v_e·β)/(1 − γ⁻²) ] − ρ⁽²⁾_e [ (ζ_e·β)√(1 − v_e²) + (ζ_e·v_e)(β·v_e)/(1 + √(1 − v_e²)) ] (1/(1 − γ⁻²)) }, (6)

M₀_⊥ = −(n₀/√(1 − v_e²)) { v_e⊥ [ ρ⁽¹⁾_e + ρ⁽²⁾_e (ζ_e·v_e)/(1 + √(1 − v_e²)) ] + ζ_e⊥ ρ⁽²⁾_e √(1 − v_e²) }, (7)

where t is the time in the laboratory frame, β is the neutrino speed, and F_⊥ and F_∥ (F = B, E) are the transversal and longitudinal (with respect to the direction n of the neutrino motion) components of the electromagnetic field in the laboratory frame. For simplicity we neglect here the neutrino electric dipole moment, ε = 0, and also consider the case when matter is composed of only one type of fermion (which we choose to be electrons for definiteness). Here n₀ = n_e √(1 − v_e²) is the invariant number density of matter, given in the reference frame in which the total speed of the matter is zero. The vectors v_e and ζ_e (0 ≤ |ζ_e|² ≤ 1) denote, respectively, the speed of the reference frame in which the mean momentum of the matter (electrons) is zero, and the mean value of the polarization vector of the background electrons in that reference frame. The coefficients ρ⁽¹⁾_e and ρ⁽²⁾_e are

ρ⁽¹⁾_e = G̃_F / (2√2 µ),  ρ⁽²⁾_e = −G_F / (2√2 µ), (8)

where G̃_F = G_F (1 + 4 sin²θ_W). Our observation [7, 8] that the neutrino spin evolution in the presence of matter can be described by the generalized BMT equation with the substitutions given by eqs.
(1) and (2) makes it possible to use the quasi-classical approach (developed in [15] for the spin evolution of a neutral particle in electromagnetic fields) for the study of the neutrino spin-polarization effect in matter. Below we suppose that the effect of an external electromagnetic field (if any is present in the background environment) can be neglected in comparison with the effect of the neutrino interaction with the background matter. Then the equation for the neutrino quasi-classical spin wave function Ψ(τ) is

i dΨ/dτ = (1/2) µ ε_µνρλ G^ρλ u^ν γ^µ γ⁵ û Ψ, (9)

where τ is the neutrino proper time and we use the notation û = γ_µ u^µ, with u^µ = (γ, γβ). If for simplicity we neglect effects of the neutrino electric dipole moment (ε = 0) and consider unpolarized matter composed of electrons, then we have [10]

G^µν = ε^µνρλ j_ρ u_λ ρ⁽¹⁾, (10)

where

j^µ = (n, n v) (11)

is the electron current, v is the speed of the matter (the average speed of the electrons), and we use the notation n = n₀/√(1 − v_e²). In the case of non-moving matter, which will also be considered below, the tensor G^µν becomes

G^µν = γρ⁽¹⁾ n
( 0    0    0    0  )
( 0    0   −β₃   β₂ )
( 0    β₃   0   −β₁ )
( 0   −β₂   β₁   0  ).  (12)

To derive the total transition probability of the neutrino spin light radiation in matter, we define the density matrix of a partially polarized neutrino in the form

̺(τ, τ′) = (1/2) U(τ, τ₀) (p̂(τ₀) + m_ν)(1 − γ⁵Ŝ(τ₀)) U⁻¹(τ′, τ₀), (13)

where p(τ₀) is the neutrino initial momentum and U(τ, τ₀) is the neutrino spin evolution operator corresponding to equation (9). For a pure state the density matrix reduces to a direct product of bispinors, normalized by the condition Ψ̄(τ)Ψ(τ) = 2m_ν. The total transition probability of the neutrino spin light radiation in matter is

P = − ∫ d⁴x d⁴y ∫ (d⁴p d⁴q d⁴k)/(2π)⁶ δ(k²) δ(p² − m_ν²) δ(q² − m_ν²) × ̺^µν_ph(x, y; k) Sp[ Γ_µ(x) ̺_i(x, y; p) Γ_ν(y) ̺_f(y, x; q) ].
(14)

Here ̺_i(x, y; p) and ̺_f(y, x; q) are the density matrices of the initial, i, and final, f, neutrino states, ̺^µν_ph(x, y; k) is the density matrix of the emitted photon, Γ^µ = −√(4π) µ σ^µν k_ν is the vertex function, and σ^µν = (1/2)(γ^µγ^ν − γ^νγ^µ). In order to pass to the quasi-classical approximation, it is necessary to replace the exact density matrices by those of (13) and to neglect the recoil in the photon radiation process. After calculations similar to those performed in [15] for the transition of a neutral fermion under the influence of an electromagnetic field, we get the quasi-classical expression for the total neutrino transition probability in matter,

P = (µ²/4π²) ∫ dΩ ∫₀^∞ k³ dk ∫ dτ dτ′ e^{ik(lu)(τ−τ′)} T(τ, τ′; u), (15)

where

T(τ, τ′; u) = V_i V_f − A_i A_f, (16)

and

V_{i,f} = (1/4) Sp[ l̂ U(τ)(1 + û)(1 − γ⁵Ŝ_{0i,f}) U⁻¹(τ′) ],
A_{i,f} = (1/4) Sp[ γ⁵ l̂ U(τ)(1 + û)(1 − γ⁵Ŝ_{0i,f}) U⁻¹(τ′) ]. (17)

The integrations in eq. (15) are performed over the solid angle Ω and the energy k of the photon, and over the proper times τ and τ′ of the neutrino. The four-dimensional vector l^ν = {1, l} is fixed by the three-dimensional unit vector l that points in the direction of the radiation. Performing the integration over the angular variables, we get from (15) the neutrino transition probability in the form

P = (4µ²/3π) ∫ dτ dτ′ [1/(2(τ − τ′ + i0))] (∂_τ ∂²_τ′ − ∂²_τ ∂_τ′) Ṽ_i Ṽ_f, (18)

where

Ṽ_{i,f} = (1/4) Sp[ U(τ)(1 − γ⁵Ŝ_{0i,f}) U⁻¹(τ′) ]. (19)

Let us consider a neutrino propagating in matter composed of unpolarized electrons. In this case the tensor G^µν is given by (10). Then the neutrino spin evolution operator U(τ) corresponding to equation (9) is

U(τ, τ₀) = cos ω(τ − τ₀) + i γ⁵ Ŝ_tp û sin ω(τ − τ₀), (20)

where

S^µ_tp = −( j^µ − u^µ (uj) ) / √( (uj)² − j² ) (21)

is the four-dimensional vector determining the axis of the total neutrino spin self-polarization.
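The precession encoded in the evolution operator (20) can be cross-checked by integrating the classical spin equation (3) directly: with the electromagnetic term switched off, the spin rotates about the matter-induced axis M₀ at frequency 2µ|M₀|/γ while |S| stays fixed. All values below are illustrative, dimensionless stand-ins:

```python
import numpy as np

# Illustrative dimensionless parameters (not physical values)
mu, gamma_f = 0.3, 2.0
M0 = np.array([0.0, 0.0, 1.2])      # effective matter "field" in the rest frame

def rhs(S):
    # eq. (3) with B0 = 0: dS/dt = (2 mu / gamma) S x M0
    return (2.0 * mu / gamma_f) * np.cross(S, M0)

def rk4_step(S, dt):
    k1 = rhs(S)
    k2 = rhs(S + 0.5 * dt * k1)
    k3 = rhs(S + 0.5 * dt * k2)
    k4 = rhs(S + dt * k3)
    return S + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

Omega = 2.0 * mu * np.linalg.norm(M0) / gamma_f   # precession frequency
T = 2.0 * np.pi / Omega                           # one full precession period

S = np.array([1.0, 0.0, 0.0])
dt = T / 2000
for _ in range(2000):
    S = rk4_step(S, dt)

# |S| is conserved and S returns to its initial direction after one period
print(np.linalg.norm(S), S)
```

The closed-form operator (20) carries the same information; the numerical integration merely confirms that the motion is a pure precession, so any net polarization must come from the radiation itself rather than from the precession.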
The neutrino spin precession frequency ω is determined by the neutrino speed four-vector u^α and the tensor G^µν:

ω = µ √( u_α G^αµ G_µν u^ν ). (22)

In the case of moving, unpolarized matter composed of electrons, we have from eqs. (10) and (11)

ω = µρ⁽¹⁾ n γ √( (1 − v·β)² − (1 − v²)/γ² ). (23)

Note that the latter expression for the frequency ω in the case of non-moving matter is in agreement with the estimate of ref. [10] for the energy of the spin light photon in the laboratory reference frame,

ω_γ ∼ G̃_F n γ². (24)

From the previous discussion it is evident that the neutrino spin polarization axis in the rest frame of the neutrino is given by the vector M₀ (see eqs. (5), (6) and (7)). It follows from (21) that in the case of unpolarized, non-moving matter the direction of the neutrino spin polarization coincides with the direction of the neutrino speed,

S_tp = β/β. (25)

The neutrino spin light radiation therefore leads to total spin polarization along the direction of the neutrino motion, i.e. initially left-handed polarized neutrinos are converted into right-handed polarized neutrinos,

ν_L → ν_R. (26)

From (18) we get that the neutrino transition rate (the probability per unit time) from the ν_L to the ν_R state is

Γ_{ν_L→ν_R} = (32/3) µ² γ⁻¹ ω³, (27)

whereas the rate of the transition ν_R → ν_L is zero,

Γ_{ν_R→ν_L} = 0. (28)

It should be noted here that, within the extended standard model supplied with an SU(2)-singlet right-handed neutrino ν_R, eqs. (27) and (28), with ω determined by (22), reproduce the neutrino transition rates for unpolarized background matter of arbitrary particle composition moving with arbitrary common speed, if the appropriate form (see [8]) of the tensor G^µν is chosen. If matter is not moving and is composed only of electrons, then from eqs. (13), (22), (23) and (27) we get

Γ_{ν_L→ν_R} = (2√2/3) µ² γ² G̃_F³ n³, (29)

and, obviously, the rate of the transition ν_R → ν_L is again zero. The obtained result (29) exceeds the value of the neutrino spin light rate derived in [10] by a factor of two, because it gives the emission rate of totally polarized left-handed neutrinos ν_L, whereas the corresponding rate of ref. [10] was derived under the assumption that the neutrinos in the initial state were not polarized.
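To get a feel for the numbers, one can evaluate the estimate (24) in natural units. The density and Lorentz factor below are illustrative choices of this sketch (not values quoted in the paper): a number density of order 10³⁷ cm⁻³, characteristic of very dense astrophysical matter, and γ = 10³:

```python
# Order-of-magnitude evaluation of eq. (24): omega_gamma ~ G_F n gamma^2
G_F = 1.166e-23          # Fermi constant in eV^-2 (natural units)
hbar_c = 1.97327e-5      # eV*cm, converts cm^-3 to eV^3

# Illustrative (assumed) parameters, not taken from the paper:
n_cm3 = 1e37             # electron number density, cm^-3
gamma = 1e3              # neutrino Lorentz factor

n_eV3 = n_cm3 * hbar_c**3            # density in eV^3
omega_gamma = G_F * n_eV3 * gamma**2 # photon energy in eV

print(f"omega_gamma ~ {omega_gamma/1e6:.2f} MeV")
```

With these inputs the photon energy comes out of order an MeV, i.e. in the gamma-ray range, which is consistent with the statement above that the SLν can emit photons with energies up to gamma-rays. Since the rate (29) scales as n³γ², the effect grows very quickly with both density and energy.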
For the ultra-relativistic neutrino (which is the most interesting case for various astrophysical and cosmological applications), the interactions of right-handed polarized neutrinos, ν_R, with the background particles are suppressed with respect to the interactions of left-handed polarized neutrinos, ν_L, by a factor of ∼ γ⁻¹. Therefore we conclude that, in fact, the "spin light of neutrino" in matter leads to neutrino conversion from active to sterile states. As follows from the above discussion, the rate of the "spin light of neutrino" depends significantly on the density of the background matter. That is why the considered neutrino self-polarization effect is expected to be important at the early stages of the evolution of the Universe.

References

[1] S. M. Bilenky, Proc. R. Soc. Lond. A 460 (2004) 403.
[2] K. Fujikawa, R. Shrock, Phys. Rev. Lett. 45 (1980) 963.
[3] M. Dvornikov, A. Studenikin, Phys. Rev. D 69 (2004) 073001 [hep-ph/0305206]; JETP 99 (2004) 254.
[4] K. Fujikawa, R. Shrock, Phys. Rev. D 69 (2004) 013007.
[5] J. Bernabéu, J. Papavassiliou, D. Binosi, hep-ph/0405288.
[6] G. Likhachev, A. Studenikin, unpublished, 1995.
[7] A. Egorov, A. Lobanov, A. Studenikin, Phys. Lett. B 491 (2000) 137 [hep-ph/9902447, hep-ph/9910476].
[8] A. Lobanov, A. Studenikin, Phys. Lett. B 515 (2001) 94 [hep-ph/0106101].
[9] M. Dvornikov, A. Studenikin, JHEP 09 (2002) 016 [hep-ph/0202113].
[10] A. Lobanov, A. Studenikin, Phys. Lett. B 564 (2003) 27 [hep-ph/0212393].
[11] M. Dvornikov, A. Grigoriev, A. Studenikin, hep-ph/0406114.
[12] A. Studenikin, Phys. Atom. Nucl. 67 (2004) 1014 [hep-ph/0306280, hep-ph/0407010].
[13] A. V. Borisov, V. Ch. Zhukovsky, A. I. Ternov, Sov. Phys. J. 31 (1988) 228.
[14] A. Ioannisian, G. G. Raffelt, Phys. Rev. D 55 (1997) 7038.
[15] A. Lobanov, hep-ph/0311021.
[16] V. Bargmann, L. Michel, V. Telegdi, Phys. Rev. Lett. 2 (1959) 435.
Neutrino self-polarization effect in matter

Andrey Lobanov, Alexander Studenikin
Department of Theoretical Physics, Moscow State University, 119992 Moscow, Russia
astro-ph/0408026; doi:10.1016/j.physletb.2004.09.037

Abstract: The quasi-classical theory of the spin light of neutrino (SLν) in background matter, accounting for the neutrino polarization, is developed. The rates of the neutrino transitions ν_L → ν_R and ν_R → ν_L in matter are calculated. It is shown that the SLν in matter leads to neutrino conversion from active ν_L to sterile ν_R states (the neutrino self-polarization effect in matter).

Convincing evidence in favour of non-zero neutrino mass, obtained during the last few years in atmospheric and solar-neutrino experiments, has also been confirmed in the reactor KamLAND and long-baseline accelerator experiments (see [1] for a review of the present status of neutrino mixing and oscillations). Even within the standard model (minimally extended with an SU(2)-singlet right-handed neutrino) a massive neutrino inevitably has a non-zero magnetic moment µ, generated by the one-loop diagram [2]. Recent studies of the electromagnetic properties of a massive neutrino at the one-loop level, including a discussion of the neutrino magnetic moment, can be found in [3]. It should also be noted that a rather detailed discussion of the neutrino charge radius is presented in two recent papers [4,5]. In a series of papers [6-11] we have developed a Lorentz-invariant approach to neutrino oscillations which enables us to study, in particular, neutrino spin precession in background matter, with the effects of the presence of electromagnetic and gravitational fields also accounted for. A review of these studies can be found in [12].
Critical-point finite-size scaling in the microcanonical ensemble

2 Aug 1999

A.D. Bruce, Department of Physics and Astronomy, The University of Edinburgh, Edinburgh EH9 3JZ, Scotland, United Kingdom
N.B. Wilding, Department of Physics and Astronomy, The University of Edinburgh, Edinburgh EH9 3JZ, Scotland, United Kingdom

PACS numbers: 05.20.Gg, 05.70.Jk, 64.60.Fr

We develop a scaling theory for the finite-size critical behavior of the microcanonical entropy (density of states) of a system with a critically-divergent heat capacity. The link between the microcanonical entropy and the canonical energy distribution is exploited to establish the former, and corroborate its predicted scaling form, in the case of the 3d Ising universality class. We show that the scaling behavior emerges clearly when one accounts for the effects of the negative background constant contribution to the canonical critical specific heat. We show that this same constant plays a significant role in determining the observed differences between the canonical and microcanonical specific heats of systems of finite size, in the critical region.

I. INTRODUCTION

Statistical mechanics can be formulated in any of a set of ensembles distinguished by the relationship between the system and its environment [1]. The principal members of this set are the microcanonical (prescribed energy) and canonical (prescribed temperature) ensembles. In the thermodynamic limit (when it exists) the ensembles yield the same predictions (and are, in this sense, equivalent), and the choice of ensemble is a matter of practical convenience. The canonical ensemble tends to win this contest because it circumvents the hard constant-energy constraint imposed by the microcanonical ensemble. The two ensembles are, however, not always equivalent [2].
They differ for systems which are 'small' in some sense: inherently small systems such as nuclei or clusters [3]; systems with unscreened long-range forces [4], where the thermodynamic limit is problematic; and systems at critical points [5], which are our principal concern here. Theoretical studies of critical phenomena are almost invariably conducted within the framework of the canonical ensemble [6]. In consequence there is no substantive framework within which to interpret computational studies of microcanonical critical behavior. Such studies do, nevertheless, exist, having been motivated, variously, by the belief that the microcanonical framework may have some computational advantages [7] and by the discovery [8] that, apparently, critical anomalies in the microcanonical heat capacity are significantly enhanced with respect to their canonical counterparts. This paper goes some way towards supplying the missing framework. We develop (section 2) a finite-size-scaling theory [9] for the microcanonical entropy (the density of states) of a system with a critically-divergent heat capacity. In so doing we have, of necessity, to consider more general questions about the structure of the density of states of a finite-size system, in particular the implications of well-established results for the finite-size structure of the canonical partition function [10]. Though somewhat more than a phenomenology, our theory falls short of being microscopically explicit: to determine an explicit form for the relevant scaling function we need to appeal (section 3) to Monte Carlo (MC) measurements of the critical canonical energy probability distribution (pdf). The canonical energy pdf itself has a near-critical finite-size-scaling form which has featured in a number of studies of critical points in fluids [11] and lattice gauge theories [12].
Since energy fluctuations (like the critical anomaly in the canonical specific heat which they control) are relatively weak (by comparison with the fluctuations of the order parameter, and the divergence of its response function), the degree of 'scaling' reported in previously measured energy pdfs has been relatively poor, unsatisfactorily so for our purposes here. This problem is addressed in section 3. We show that one can fold out (from the measured distributions) the sub-dominant (but significant) non-scaling effects that are associated with the constant background contribution to the canonical heat capacity, negative in the case of the 3d Ising model [13]. This procedure exposes the underlying behavior, which manifests scaling to an impressive degree. In addition to providing us with the platform needed for this work, this procedure may offer the basis for improving the mixed-scaling-field theory [11] of critical points in systems that belong to the Ising universality class but which do not have full Ising symmetry; recent studies [12] have suggested that the current framework is not fully satisfactory. The scaling form for the critical energy pdf allows us to determine the scaling form of the microcanonical entropy. In section 4 we explore this form and show that it is consistent with predictions for both the bulk-critical limit (as regards the parameters characterizing the specific-heat singularity [13]) and the finite-size critical limit (the Fisher-Privman constant [14]). The microcanonical entropy also provides us with a unified basis for dealing with both the canonical and the microcanonical specific heats (section 5). We show that the 'corrections' to the scaling behavior of the canonical specific heat (the negative background constant) have subtle consequences for the microcanonical behavior.
In particular they serve to amplify the difference between microcanonical and canonical behavior, and are at least partially responsible for the strength of the anomaly observed in some microcanonical studies [8].

A. The microcanonical scaling ansatz

We consider a d-dimensional many-body system of linear dimension L; we assume hypercubic geometry with periodic boundary conditions. The canonical partition function is, in principle, a discrete sum over system microstates (r) or system energy levels (s):

Z(β, L) = Σ_r e^{−βE_r} = Σ_s Ω_s e^{−βE_s},  (1a)

where Ω_s is the degeneracy of level s. We shall suppose that the system is sufficiently large that the sum over levels can be replaced by an integral:

Z(β, L) = ∫ dǫ Ω(ǫ, L) e^{−βL^d ǫ},  (1b)

where ǫ ≡ E/L^d is the energy density. The function Ω(ǫ, L) is the density of states; as we have defined it, it is a true density, having dimensions of inverse energy. We note that the transition from the discrete representation (Eq. 1a) to its continuum counterpart (Eq. 1b) requires some care: it is discussed in Appendix A.

Our microcanonical scaling theory comprises a proposal for the form of the density-of-states function. We formulate it in two stages. Consider first a regime remote from critical points or lines of phase coexistence. In such a regime we make the general finite-size ansatz [15]:

Ω(ǫ, L) ≃ [−L^d s''(ǫ)/(2π)]^{1/2} e^{L^d s(ǫ)}.  (2)

The structure proposed for the prefactor makes this a little more than simply a definition of the microcanonical entropy density s(ǫ). In its support we note, first, that one may readily verify it explicitly (Appendix B) in the case of some simple model systems. Secondly we note the implications for the associated canonical partition function. Inserting Eq. (2) into Eq. (1b), a saddle-point integration gives

Z(β, L) = (L^d/2π)^{1/2} ∫ dǫ [−s''(ǫ)]^{1/2} e^{L^d [s(ǫ) − βǫ]} = e^{−L^d f(β)} [1 + O(L^{−d})],  (3)

where

f(β) ≡ βǭ − s(ǭ)  (4)

and ǭ is the solution of

β = s'(ǭ).  (5)

Eq.
(3) recovers the prefactor-free form of the canonical partition function believed to be widely appropriate in regions (those where the saddle point integration is to be trusted) remote from critical points or lines of phase coexistence [10]. We note that this form is achieved by virtue of the prefactor that does appear in the density of states ansatz (Eq. 2), which is just such as to cancel the contributions made by the fluctuations about the saddle point [16]. The argument we have given leaves open the possibility of power-law corrections to Eq. (3). It has long been believed, and more recently established rather generally [10], that the corrections to the leading form are actually exponentially small in the system size. Since the saddle-point integration necessarily generates power-law corrections, one must suppose that there are compensating power-law corrections to the ansatz (Eq. 2) for the density of states. This conclusion serves as a warning (already suggested by the double appearance of the function s(ǫ) in Eq. (2)) that the microcanonical framework faces problems which are skirted in the canonical formalism [17]. Now, more specifically, consider a system, of the kind specified above, in the vicinity of a critical point. We will suppose that the critical point has a divergent heat capacity; where we need to be more specific we shall assume it is a member of the d=3 Ising universality class (or, more specially still, the d=3 Ising model itself). Within the microcanonical framework the critical point of such a system is located by a critical value ǫ c of the energy density, sharply-defined in the thermodynamic limit. We are concerned with the behavior of the microcanonical entropy for energies in the vicinity of this critical value. 
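The cancellation of the saddle-point fluctuation factor by the prefactor in Eq. (2) can be checked numerically. The sketch below is an illustrative toy (not from the paper): it takes a quadratic entropy s(ǫ) = s0 − (ǫ − ǫ0)²/2, for which the saddle-point form Z = e^{−L^d f(β)} of Eq. (3) is exact, and compares the direct integral (1b), evaluated with the ansatz (2), against e^{−L^d f(β)} from Eqs. (4)-(5).

```python
import numpy as np

# Toy entropy density s(e) = S0 - (e - E0)^2 / 2, so s''(e) = -1 (concave).
S0, E0 = 0.0, 1.0

def log_Z_direct(beta, Ld):
    # Eq. (1b) with the density-of-states ansatz, Eq. (2):
    # Omega(e, L) = [-L^d s''(e)/(2 pi)]^(1/2) * exp(L^d s(e))
    e = np.linspace(E0 - 10.0, E0 + 10.0, 200001)
    s = S0 - 0.5 * (e - E0) ** 2
    log_integrand = 0.5 * np.log(Ld / (2.0 * np.pi)) + Ld * (s - beta * e)
    m = log_integrand.max()              # factor out the peak for stability
    de = e[1] - e[0]
    return m + np.log(np.sum(np.exp(log_integrand - m)) * de)

def log_Z_saddle(beta, Ld):
    # Eqs. (4)-(5): beta = s'(e*) gives e* = E0 - beta; f(beta) = beta*e* - s(e*)
    e_star = E0 - beta
    f = beta * e_star - (S0 - 0.5 * (e_star - E0) ** 2)
    return -Ld * f

print(log_Z_direct(0.3, 50.0), log_Z_saddle(0.3, 50.0))  # agree to integration accuracy
```

For this gaussian toy the prefactor exactly cancels the fluctuation determinant, so the two numbers coincide to numerical accuracy; for a general concave s(ǫ) the agreement holds up to the O(L^{−d}) corrections noted in Eq. (3).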
To describe this regime we introduce the dimensionless scaling variable [18]

x ≡ a_ǫ L^{1/ν_ǫ} (ǫ − ǫ_c),  (6)

where a_ǫ is an appropriate scale factor and the index is defined by

1/ν_ǫ = (1 − α)/ν,  (7)

with α the index (assumed positive) characterizing the heat-capacity divergence, and ν the correlation-length index [19]. We now reformulate and extend our basic ansatz (Eq. 2) with the proposal that, in a region of sufficiently large L and sufficiently small |ǫ − ǫ_c| [20],

Ω(ǫ, L) ≃ [−L^d s''(ǫ, L)/(2π)]^{1/2} e^{L^d s(ǫ, L)},  (8a)

with

L^d s(ǫ, L) ≃ L^d [s_c + β_c (ǫ − ǫ_c)] + S̃(x).  (8b)

Here s_c is an unimportant constant, β_c is the critical inverse temperature and S̃(x) is a finite-size-scaling function, universal given some convention on the scale factor a_ǫ introduced in Eq. (6). The remainder of this paper is devoted to providing support for this proposal, and exploring the structure of the microcanonical entropy scaling function which it introduces.

II. DETERMINING THE SCALING FUNCTION

It should be possible to determine the finite-size-scaling function S̃(x) within the renormalization group framework [21]. We have not done that. Instead we have chosen to learn what we can about this function from its signatures in MC studies of the canonical ensemble. Consider, then, the implications of the scaling form Eq. (8b) for the canonical partition function, Eq. (1b). We suppose initially (we shall have to refine the supposition shortly) that the relevant part of the energy spectrum is adequately captured by Eq. (8b). Then

Z(β, L) ≃ e^{−L^d f_0(β)} Z̃(y),  (9)

where

f_0(β) = βǫ_c − s_c  (10)

and

Z̃(y) = ∫ dx [−S̃''(x)/(2π)]^{1/2} e^{−xy + S̃(x)},  (11)

while

y = a_ǫ^{−1} L^{1/ν} (β − β_c)  (12)

provides a scaling measure of the deviation from the critical temperature.
We have made use of the hyperscaling relation [19], which links the correlation-length index ν and the heat-capacity index α through

1/ν + 1/ν_ǫ = (2 − α)/ν = d.  (13)

The scaling form of the free energy follows:

F(β, L) ≡ −ln Z(β, L) ≃ L^d f_0(β) − ln Z̃(y) ≡ F̃(β, L).  (14)

The canonical energy pdf

P(ǫ|β, L) ≡ Z^{−1}(β, L) Ω(ǫ, L) e^{−βL^d ǫ}  (15)

may also be written in scaling form:

P(ǫ|β, L) dǫ ≡ P(x|y, L) dx,  (16)

with

P(x|y, L) ≃ Z̃^{−1}(y) [−S̃''(x)/(2π)]^{1/2} e^{−xy + S̃(x)} ≡ P̃(x|y).  (17)

The scaling predictions for the pdf may be tested by examining its cumulants [22], for which the free energy is a generator:

ǫ^{(n)}(β, L) ≡ (−1)^{n+1} L^{−nd} ∂^n F(β, L)/∂β^n.  (18)

Eq. (14) then implies that the cumulants have the scaling form

ǫ^{(n)}(β, L) ≃ [a_ǫ L^{1/ν_ǫ}]^{−n} x̃^{(n)}(y) + ǫ_c δ_{n,1},  (19)

where the scaled cumulants x̃^{(n)}(y) are universal functions:

x̃^{(n)}(y) = (−1)^n ∂^n ln Z̃(y)/∂y^n.  (20)

The canonical mean of the energy density at criticality (β = β_c) follows as

ǭ_c ≡ ǫ^{(1)}(β_c, L) ≃ ǫ_c + [a_ǫ L^{(1−α)/ν}]^{−1} x̃^{(1)}(y = 0).  (21)

MC measurements on the 3d Ising model using a range of system sizes (Fig. 1) are fully consistent with this behavior. Eq. (14) implies, likewise, that the canonical variance of the energy density should have the power-law behavior

⟨ǫ²⟩_c − ⟨ǫ⟩²_c ≡ ǫ^{(2)}(β_c, L) ≃ a_ǫ^{−2} L^{−d+α/ν} x̃^{(2)}(y = 0).  (22)

MC measurements (Fig. 2) are only partially consistent with this prediction: the power law is confirmed, but with an extrapolation whose intercept is far from zero. This inconsistency is reflected in the rather limited success (Fig. 3) of attempts to collapse the measured energy pdfs for different system sizes onto a single scaling form. The source of these problems can be guessed from the implications of Eq. (22) for the canonical specific heat, which it mirrors: the scaling form fails to capture the effects associated with the constant background which constitutes the dominant correction to pure scaling (power-law divergence) of the canonical specific heat.
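Equation (18) identifies the energy cumulants as β-derivatives of the free energy F = −ln Z. A minimal numerical sanity check (a made-up five-level toy spectrum, not the Ising model) compares finite-difference derivatives of ln Z(β) with the mean and variance computed directly from the Boltzmann weights:

```python
import numpy as np

# Toy spectrum: a handful of levels E with degeneracies g (illustrative only)
E = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 4.0, 6.0, 4.0, 1.0])

def ln_Z(beta):
    return np.log(np.sum(g * np.exp(-beta * E)))

def cumulants_direct(beta):
    w = g * np.exp(-beta * E)
    p = w / w.sum()
    mean = np.sum(p * E)
    var = np.sum(p * (E - mean) ** 2)
    return mean, var

beta, h = 0.7, 1e-4
# F = -ln Z, so <E> = -d(ln Z)/d(beta) and var(E) = d^2(ln Z)/d(beta)^2, cf. Eq. (18)
mean_fd = -(ln_Z(beta + h) - ln_Z(beta - h)) / (2.0 * h)
var_fd = (ln_Z(beta + h) - 2.0 * ln_Z(beta) + ln_Z(beta - h)) / h ** 2
```

The finite-difference values reproduce the directly computed first and second cumulants, which is all that Eq. (18) asserts for n = 1, 2.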
There are two ways to rectify this failure. One might extend the theory to predict the behavior of the (very) finite systems accessible to MC study; or one might seek to correct the MC results to expose the true scaling behavior. We adopt the latter strategy. Define

∆f(β, L) ≡ L^{−d} [F(β, L) − F̃(β, L)],  (23)

the difference between the true free energy density and its asymptotic scaling form (see Eq. (14)). We shall ignore the effects of confluent singularities: they are not the dominant 'corrections to scaling' here. Then ∆f(β, L) is analytic and may be approximated, near β_c, by the expansion

∆f(β, L) ≃ Σ_{n=0}^{∞} ∆f_c^{(n)} (β − β_c)^n / n!.  (24)

These additional contributions to the free energy imply additional contributions to the energy cumulants (Eq. 18):

∆ǫ^{(n)}(β, L) ≡ (−1)^{n+1} L^{−(n−1)d} ∂^n ∆f(β, L)/∂β^n.  (25)

At criticality Eq. (19) must then be modified to read

ǫ^{(n)}(β_c, L) = [a_ǫ L^{1/ν_ǫ}]^{−n} [x̃^{(n)}(y = 0) + ∆x_c^{(n)}(L)] + ǫ_c δ_{n,1} ≡ [a_ǫ L^{1/ν_ǫ}]^{−n} x^{(n)}(y = 0, L) + ǫ_c δ_{n,1},  (26)

where

∆x_c^{(n)}(L) ≡ x^{(n)}(y = 0, L) − x̃^{(n)}(y = 0) = (−1)^{n+1} a_ǫ^n L^{n/ν_ǫ − (n−1)d} ∆f_c^{(n)}.  (27)

The n = 1 correction is absent by fiat: the choice of ǫ_c ensures this. The n ≥ 3 corrections are sufficiently strongly 'irrelevant' (they vanish sufficiently strongly with L) that they may reasonably be neglected. But the n = 2 correction decays only slowly:

∆x_c^{(2)}(L) = −a_ǫ² L^{−α/ν} ∆f_c^{(2)} = a_ǫ² L^{−α/ν} c_{0c} ≡ −g(L),  (28)

where the last step defines g(L) (a convenient parameter), while

c_{0c} ≡ −∆f_c^{(2)} = −∂² ∆f(β, L)/∂β² |_{β_c}  (29)

is identifiable as the constant 'background' to the near-critical canonical specific heat. With this addition, Eq. (22) is modified to read

L^d [⟨ǫ²⟩_c − ⟨ǫ⟩²_c] ≃ L^{α/ν} a_ǫ^{−2} x^{(2)}(y = 0, L) = L^{α/ν} a_ǫ^{−2} x̃^{(2)}(y = 0) + c_{0c},  (30)

which is now fully consistent with the MC measurements of Fig. 2, with (it is to be noted) a negative value for c_{0c} [23].
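Equation (30) says that a plot of L^d times the critical energy variance against L^{α/ν} should be a straight line whose ordinate intercept is the background constant c_0c. The sketch below uses synthetic data (the values of α/ν, the amplitude and c_0c are made up for illustration) and recovers the intercept by linear regression, which is essentially the analysis behind Fig. 2:

```python
import numpy as np

alpha_over_nu = 0.175            # assumed 3d-Ising-like value, for illustration
amp, c0c_true = 1.3, -0.45       # made-up scaling amplitude and background

L = np.array([8.0, 12.0, 16.0, 24.0, 32.0, 48.0, 64.0])
scaled_var = amp * L ** alpha_over_nu + c0c_true   # L^d * variance, Eq. (30)

# Regress L^d*var against L^(alpha/nu): slope -> amplitude, intercept -> c0c
slope, intercept = np.polyfit(L ** alpha_over_nu, scaled_var, 1)
```

With real MC data the intercept would carry statistical errors; here the data are exactly linear, so the fit returns the inputs.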
From a thermodynamic point of view these results simply reflect the fact that, for any system size practically accessible to computer simulation, the 'critical' contribution to the canonical specific heat is not large enough to dominate the 'non-critical background'. But the argument also shows us how to eliminate the effects of this 'background' from the energy pdf. Consider the cumulant representation [22] of the scaling energy pdf (Eq. 17) at criticality:

P̃(x|y = 0) = (1/2π) ∫_{−∞}^{∞} dτ e^{ixτ} exp[ Σ_{n=1}^{∞} ((−iτ)^n/n!) x̃^{(n)}(y = 0) ].  (31)

The corresponding relation for the observed energy pdf at criticality, written in its inverse form, is

exp[ Σ_{n=1}^{∞} ((−iτ)^n/n!) x^{(n)}(y = 0, L) ] = ∫_{−∞}^{∞} dx' e^{−ix'τ} P(x'|y = 0, L).  (32)

Appealing to our conclusion that, for large enough L, the cumulants of the two pdfs differ significantly only in the n = 2 case, and using Eqs. (27) and (28), we find that

P̃(x|y = 0) = (1/2π) ∫_{−∞}^{∞} dτ ∫_{−∞}^{∞} dx' e^{i(x−x')τ − g(L)τ²/2} P(x'|y = 0, L)
            = [2πg(L)]^{−1/2} ∫_{−∞}^{∞} dx' e^{−(x−x')²/[2g(L)]} P(x'|y = 0, L).  (33)

This result shows that the scaling form of the critical pdf may be exposed by convolution of the observed (and thus, generally, non-scaling) pdfs with gaussians whose widths are controlled by the specific-heat background. Note that the argument rests on the fact that this background is negative (so that g(L) as defined in Eq. (28) is positive). If the background constant were positive our argument would have to be restructured to prescribe the scaling form by a process of deconvolution, which is numerically problematic. As it is, the convolution process can be implemented easily. With c_{0c} fixed by the ordinate intercept in Fig. 2, the pdfs measured on different system sizes can each be corrected in this way to yield estimates of the scaling pdf. The results are shown in Fig. 4. The improvement with respect to the raw data (Fig. 3) is striking.
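The gaussian smearing of Eq. (33) is straightforward to implement on a histogrammed pdf. The toy below (illustrative, with an arbitrary value of g) convolves a tabulated distribution with the gaussian kernel and checks the cumulant bookkeeping behind Eq. (33): the convolution adds exactly g to the second cumulant and leaves the mean alone.

```python
import numpy as np

def smear(x, p, g):
    """Convolve a tabulated pdf p(x) with a gaussian of variance g, Eq. (33)."""
    dx = x[1] - x[0]
    kernel = np.exp(-x**2 / (2.0 * g)) / np.sqrt(2.0 * np.pi * g)
    out = np.convolve(p, kernel, mode="same") * dx
    return out / (out.sum() * dx)          # renormalize against edge losses

x = np.linspace(-10.0, 10.0, 4001)
p = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)   # unit-variance gaussian "data"
q = smear(x, p, g=0.5)

dx = x[1] - x[0]
var_q = np.sum(q * x**2) * dx - (np.sum(q * x) * dx) ** 2   # expect 1.0 + 0.5
```

In the paper's procedure the role of g is played by g(L) from Eq. (28), fixed by the measured background constant c_0c; here g = 0.5 is arbitrary.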
This improvement reflects not only the removal of the non-scaling contribution to the second cumulant but also the fact that the requisite convolution process provides a natural smoothing of the MC data [28]. The consequences of this correction for the shape of the distribution are also striking. The skewness [29] clearly visible in the raw distributions (Fig. 3) is largely suppressed, to expose a scaling form that is, at first appearance, gaussian. Indeed the portion of the distribution evident on the scale of Fig. 4 is gaussian to within deviations of a few per cent. However the behavior in the wings (evident on the logarithmic scale utilized in Fig. 5) is markedly different on the high- and low-energy sides. The scaling of the critical energy pdf corroborates the scaling of the microcanonical entropy (cf. Eq. (17)). Given the double appearance of S̃(x) in Eq. (17), it is practical to infer only the 'effective' microcanonical entropy scaling function

S̃_eff(x) ≡ S̃(x) + (1/2) ln[−S̃''(x)/(2π)],  (34)

through

S̃_eff(x) = S̃_eff(x = 0) + ln[ P̃(x|y = 0) / P̃(x = 0|y = 0) ].  (35)

We note as a matter of empirical fact that S̃_eff(x) is concave. The concavity of S̃(x) itself is already assumed in our basic scaling ansatz [15].

III. THE SCALING THEORY: IMPLICATIONS AND TESTS

Although we have no first-principles calculation of the scaling function S̃(x) to offer here, we can identify, and test, some of the properties it must have to match anticipated behavior in both the thermodynamic and finite-size-critical limits. We consider, in particular, the limiting large-|x| behavior. In this regime we anticipate that

S̃(x) ≃ −b_± |x|^θ + r_±    (|x| ≫ 1),  (36)

where the + and − subscripts refer, respectively, to the regions of positive and negative x. To make explicit identifications of the new quantities introduced in this equation (the exponent θ and the amplitudes b_±, r_±), consider the scaling part of the partition function (Eq. 11). In the limit of large |y| the integral in Eq.
(11) is dominated by one or the other of the large-|x| regimes. Substituting Eq. (36), a saddle-point integration yields

ln Z̃(y) ≃ a_± |y|^{θ/(θ−1)} + r_±    (|y| ≫ 1),  (37)

where the + and − subscripts now refer, respectively, to the regions of negative and positive y [30], and

a_+/a_− = (b_−/b_+)^{1/(θ−1)}.  (38)

As in the argument leading to Eq. (3), the fluctuations about the saddle point are canceled by the pre-exponential factor in Eq. (11), to leave power-law ('ln-free') behavior [31]. The thermodynamic limit of the near-critical free energy, defined by Eq. (14), now follows as

F(β, L) ≃ L^d f_0(β) − A_± |β − β_c|^{2−α} − r_±,  (39)

where we have identified

θ = (2 − α)/(1 − α)  (40)

and (given Eq. 38)

A_+/A_− = a_+/a_− = (b_−/b_+)^{1−α}.  (41)

To establish the role of the remaining constants (r_±) in Eq. (36) we consider the anomalous contribution to the free energy [14], defined by

F_a(β) ≡ lim_{L→∞} [ F(β, L) − L^d lim_{L→∞} F(β, L)/L^d ].  (42)

Appealing to Eq. (39), and recalling our sign convention [30], we identify

F_a(β) = −r_+ (β < β_c);  F_a(β) = −r_− (β > β_c).  (43)

On the basis of rather general arguments [10] we expect that away from a critical point the free energy anomaly is just minus the logarithm of the number of coexisting phases, so that

r_+ = −F_a(β < β_c) = 0,  (44a)
r_− = −F_a(β > β_c) = ln 2.  (44b)

In the critical finite-size limit we find from Eq. (14)

F(β_c, L) = L^d f_0(β_c) − ln Z̃(0).  (45)

The critical value of the free energy anomaly, defining the Privman-Fisher constant U_0 [14], follows as

U_0 ≡ F_a(β_c) = −ln Z̃(0).  (46)

These predictions are testable to varying degrees through both the energy-dependence of the energy pdf and the temperature-dependence of the associated free energy. Figure 6 shows the results for the ratio of the specific-heat amplitudes that follow from Eq. (41) when the measured decay of the critical energy pdf (Figs. 4, 5) is matched to the prediction (36), in conjunction with Eq. (17).
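The asymptotics (37)-(38) are easy to check numerically in a toy case. The sketch below (illustrative only, not the Ising scaling function) takes S̃(x) = −b_± x², i.e. θ = 2 (which by Eq. (40) corresponds to α = 0), with b_+ ≠ b_−; it evaluates Z̃(y) from Eq. (11) by direct quadrature and confirms that ln Z̃(y) → y²/(4 b_∓) at large |y|, so that a_+/a_− = b_−/b_+ = (b_−/b_+)^{1/(θ−1)}.

```python
import numpy as np

b_plus, b_minus = 1.0, 2.0     # S(x) = -b_+ x^2 for x > 0, -b_- x^2 for x < 0

def ln_Z_tilde(y):
    # Z~(y) = INT dx [-S''(x)/(2 pi)]^(1/2) e^(-x y + S(x)), Eq. (11); S'' = -2 b
    x = np.linspace(-40.0, 40.0, 400001)
    b = np.where(x >= 0.0, b_plus, b_minus)
    log_int = 0.5 * np.log(b / np.pi) - x * y - b * x**2
    m = log_int.max()                       # peel off the peak for stability
    dx = x[1] - x[0]
    return m + np.log(np.sum(np.exp(log_int - m)) * dx)

y = 12.0
a_minus_est = ln_Z_tilde(+y) / y**2   # y > 0 branch: expect 1/(4 b_minus)
a_plus_est = ln_Z_tilde(-y) / y**2    # y < 0 branch: expect 1/(4 b_plus)
```

The prefactor again cancels the gaussian fluctuation integral, so ln Z̃ is 'ln-free' as claimed below Eq. (38); the estimated amplitude ratio reproduces b_−/b_+ directly.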
We can expect the predictions and observations to match up only in a window of x values. Clearly, x must be large enough to lie within the thermodynamic critical region; but it must also not be so large that the associated energy lies outside the domain of validity of the basic scaling ansatz (Eq. 8b). The size of this window should increase with increasing system size. The location of this window on the x-axis may also be expected to be different for the positive and negative x-branches, if, as seems reasonable, one regards the correlation length ξ (rather than x ∼ ǫ − ǫ_c or y ∼ β − β_c) as the measure of criticality. This is, indeed, the view we have adopted [33]. Thus Fig. 6 shows the results for the 'effective' amplitude ratio, obtained by fitting over ranges of x values, with each pair of (positive and negative) ranges being centered on the same value of z = L/ξ, used as the abscissa [32]. On the basis of this data [34] we make the assignment A_+/A_− = 0.575(10), which is to be compared with A_+/A_− = 0.523(9) in reference [13] and 0.567(16) in reference [25]. In Fig. 7 we show the results for ln Z̃(y) that follow (cf. Eqs. (11), (34), (35)) from the measured energy pdfs, using Eq. (35). The latter determines S̃_eff(x) only to within an additive constant, which must be fixed by appeal to the predicted value of either r_+ or r_−. We have chosen the latter, so that Eq. (44b) is satisfied by fiat. The motivation for this choice is that it provides us with an inherently more reliable estimate of the parameter U_0 (which, unlike r_±, is not known a priori). Since −U_0 = ln Z̃(0) is closer to r_− than to r_+, the function ln Z̃(y) converges more quickly to its y > 0 asymptote than it does to its y < 0 asymptote. Fixing r_− (the intercept of the y > 0 asymptote) thus tethers the value assigned to U_0 more effectively than fixing r_+.
As with the amplitude ratio considered above, the value assigned to U_0 depends upon the range of y-values used in the fit to the anticipated asymptotic form (Eq. 37). Again we have chosen to characterize the temperature range utilized through the value of the ratio z = L/ξ; again we can expect the analysis to be trustworthy only if it is based upon data lying within the thermodynamic-critical window. Our data (Fig. 8) do not allow a systematic analysis of the approach to the desired limit; but they provide the basis for the assignment U_0 = −0.57(2) [34]. The assignment of the uncertainty limit is subjective but, we think, conservative. We note the close correspondence with the assignment (U_0 = −0.57) emerging from an earlier study [36], similar in concept, but utilizing the distribution of the order parameter. However our assignment differs (in what would seem to be a statistically significant fashion) from the result U_0 = −0.625(5) obtained by Mon [35] on the basis of altogether different techniques.

IV. MICROCANONICAL AND CANONICAL SPECIFIC HEATS

A. Generalities

Thus far we have focused on the implications of the microcanonical entropy for observations made in the canonical ensemble. We now turn to consider their implications for observations made within ensembles that are (or are approximations to) microcanonical. We will assume (in keeping with e.g. [2,8]) that the temperature of a microcanonical system should be identified from the relation

β_µe(ǫ, L) = L^{−d} ∂ ln Ω(ǫ, L)/∂ǫ.  (47)

This identification is certainly required in the thermodynamic limit; but in the context of systems of finite size it is, it seems, a matter of convention [37]. It is illuminating to link this temperature with canonical observables. Appealing to Eq. (15) we may write

β_µe(ǫ, L) = β + L^{−d} ∂ ln P(ǫ|β, L)/∂ǫ,  (48)

where (notwithstanding appearances to the contrary) the rhs depends on ǫ but not β.
This result shows that the equation prescribing the microcanonical temperature for a given energy is just the inverse of the equation prescribing the most probable energy for a given temperature:

β = β_µe(ǫ, L)  ⟺  ǫ = ǫ̂_ce(β, L).  (49)

By comparison, within the canonical ensemble itself, 'the' energy for a given temperature is customarily identified with the canonical mean:

ǫ = ǭ_ce(β, L).  (50)

Eqs. (49) and (50) make it immediately plain that the energy-temperature relationships associated with the two ensembles will coincide to the extent that the canonical energy distribution is gaussian (and thus has coincident mean ǭ_ce and mode ǫ̂_ce). This correspondence is guaranteed in the thermodynamic limit, but not (in general) when finite-size effects are significant. The energy-temperature relationships are most usually probed through their derivatives, the associated specific heats. In the microcanonical case

c_µe(ǫ, L) = −[∂β_µe(ǫ, L)/∂ǫ]^{−1} = −L^d [∂² ln P(ǫ|β, L)/∂ǫ²]^{−1},  (51)

where, again, the β-dependence of the rhs is illusory. In the canonical case (appealing to Eq. (18))

c_ce(β, L) = −∂ǭ_ce(β, L)/∂β = L^d ǫ^{(2)}(β, L).  (52)

Like the two 'caloric equations of state' (Eqs. (49) and (50)), these two specific heats are guaranteed to agree in the thermodynamic limit; but they differ (in general) in the finite-size critical regime, to which we now turn.

B. Scaling forms

First we examine the asymptotic scaling regime, where the 'background' contribution to the specific heat can be neglected. We will consider the consequences of the corrections associated with the latter in the following section. In the scaling regime, where the canonical energy pdf may be represented by its scaling form (Eqs. (16) and (17)), Eq. (48) can be rewritten in terms of the energy and temperature scaling variables (Eqs.
(6) and (12)) as

y_µe(x) = y + ∂ ln P̃(x|y)/∂x = ∂S̃_eff(x)/∂x,  (53)

where in the last step we have exercised the right to set y = 0 (the result is independent of y) and have made use of Eqs. (17) and (34). The microcanonical specific heat (Eq. 51) follows in scaling form:

c_µe(ǫ, L) ≃ L^{α/ν} a_ǫ^{−2} c̃_µe(x),  (54a)

with

c̃_µe(x) = −[∂² ln P̃(x|y = 0)/∂x²]^{−1} = −[∂² S̃_eff(x)/∂x²]^{−1}.  (54b)

The scaling form of the canonical specific heat follows in a similar fashion, using Eqs. (19) and (52):

c_ce(β, L) ≃ L^{α/ν} a_ǫ^{−2} c̃_ce(y),  (55a)

with (Eq. 22)

c̃_ce(y) = x̃^{(2)}(y).  (55b)

The forms of both the scaling functions c̃_µe(x) and c̃_ce(y) can be determined from the scaling form for the microcanonical entropy (Fig. 5) or, equivalently, the critical canonical energy pdf (Fig. 4), established in the preceding section. They are compared in Fig. 9. In the microcanonical case we have used Eq. (53) to identify the microcanonical temperature y = y_µe(x) to be associated with a given value of the energy variable x. In the thermodynamic limit, realised at large values of |y|, the two functions are, necessarily, consistent with one another, and approach the asymptotic behavior implied by Eq. (39). In the finite-size-critical (small |y|) regime, however, clear differences between the two scaling forms are apparent. In particular, the microcanonical maximum exceeds the canonical maximum by some 10%. One can show (Appendix C) that this (the fact that the microcanonical maximum is the larger one) follows necessarily if the scaling function S̃(x) is concave. The two scaling functions cross very close to the point (y = 0) identifying the bulk critical temperature. One can see this already in published microcanonical data [8]; a similar 'ensemble-independence' has also been noted in studies of the 'gaussian ensemble' [38]. We have been unable to see any deep reason for this correspondence; but we do not discount the possibility that there is one.
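The inversion (49) is easy to see in a toy model. Take a density of states Ω(ǫ) ∝ ǫ^a (an illustrative choice, not the paper's model; units chosen so that L^d = 1): the canonical pdf (15) is then a gamma distribution, the microcanonical temperature (47) is β_µe(ǫ) = a/ǫ, and solving β_µe(ǫ) = β returns the mode a/β of the canonical pdf, while the canonical mean is the larger value (a + 1)/β, so the two 'caloric equations of state' (49) and (50) differ at finite size:

```python
import numpy as np

a, beta = 8.0, 2.0   # toy density-of-states exponent and inverse temperature

eps = np.linspace(1e-4, 40.0, 400000)
de = eps[1] - eps[0]
p = eps**a * np.exp(-beta * eps)   # canonical pdf, Eq. (15), unnormalized
p /= p.sum() * de

beta_mue = a / eps                 # Eq. (47): d ln(eps^a)/d eps = a/eps

eps_mode = eps[np.argmax(p)]                              # most probable energy
eps_mean = np.sum(p * eps) * de                           # canonical mean, Eq. (50)
eps_inv = eps[np.argmin(np.abs(beta_mue - beta))]         # solve Eq. (49) numerically
```

Here eps_inv and eps_mode coincide (both equal a/β = 4), while eps_mean = (a + 1)/β = 4.5; the gap closes as a grows, the toy analogue of the thermodynamic limit.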
Though clearly visible, the differences between the two scaling functions are smaller than suggested by existing MC data [8]. The next section explains why.

C. Beyond scaling: the role of the 'background'

To understand the behavior observed in MC studies of microcanonical behavior, we must allow for the 'corrections to scaling' which, in the canonical ensemble, are reflected simply in the existence of the additive negative background contribution to the heat capacity; their signature in the microcanonical ensemble is more subtle. The difference between the canonical and microcanonical results is effectively a strongly anharmonic effect: in a system of finite size, critical fluctuations sample a region of the entropy surface sufficiently large that the variation of its curvature becomes significant. We can expose the consequences analytically within an anharmonic perturbation theory in the cumulants of the energy pdf. The calculation is straightforward and we describe it in outline only. We appeal to the cumulant representation of the energy pdf at some (general) temperature:

$P(\epsilon|\beta, L) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\tau\, e^{i\epsilon\tau} \exp\left[\sum_{n=1}^{\infty} \frac{(-i\tau)^{n}}{n!}\, \epsilon^{(n)}(\beta, L)\right]$  (56)

We expand perturbatively to first order in the fourth cumulant and to second order in the third. We evaluate the second derivative of the logarithm of this function, which determines (cf. Eq. (51)) the microcanonical specific heat associated with a given energy density. We evaluate this function at the modal energy $\epsilon = \hat\epsilon(\beta, L)$ associated with the chosen temperature, prescribed by the (perturbative) solution of the microcanonical caloric equation of state (Eq. 48). The result is

$c_{\mu e}(\hat\epsilon, L) = c_{ce}(\beta, L)\left[1 - \frac{\epsilon^{(4)}(\beta, L)\,\epsilon^{(2)}(\beta, L) - \left(\epsilon^{(3)}(\beta, L)\right)^{2}}{2\left(\epsilon^{(2)}(\beta, L)\right)^{3}} + \ldots\right]$  (57)

Eq. (18) shows that the cumulant correction terms displayed in this equation are $O(L^{-d})$ in the thermodynamic limit, confirming the equality of microcanonical and canonical predictions in this limit.
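The structure of the cumulant correction in Eq. (57) can be checked on a toy energy pdf for which everything is known in closed form. Take a Gamma distribution (purely illustrative, not the paper's data): its cumulants are $\kappa_n = k (n-1)!\,\theta^n$, so the bracketed correction evaluates to $1/k$, predicting $c_{\mu e} = c_{ce}(1 - 1/k)$; and the exact mode-curvature of $\ln p$ (the analogue of Eq. (51)) reproduces exactly the same ratio.

```python
import math

# Hedged check of the structure of Eq. (57) on a toy pdf P(eps) ~ Gamma(k, theta).
# Cumulants: kappa_n = k * (n-1)! * theta**n, so the bracketed correction
#   (kappa4*kappa2 - kappa3**2) / (2*kappa2**3)
# equals 1/k, predicting c_mue = c_ce * (1 - 1/k).

k, theta = 6.0, 0.5

kappa2 = k * theta**2
kappa3 = 2.0 * k * theta**3
kappa4 = 6.0 * k * theta**4
correction = (kappa4 * kappa2 - kappa3**2) / (2.0 * kappa2**3)   # = 1/k

# 'Canonical' specific heat ~ the variance; 'microcanonical' ~ -1/(d^2 ln p)
# at the mode (cf. Eq. 51), here by a central finite difference.
def lnp(x):
    return (k - 1.0) * math.log(x) - x / theta   # unnormalized Gamma ln pdf

mode = (k - 1.0) * theta
h = 1e-4
d2 = (lnp(mode + h) - 2.0 * lnp(mode) + lnp(mode - h)) / h**2
c_mue = -1.0 / d2          # = (k-1)*theta**2 exactly for the Gamma pdf
c_ce = kappa2              # = k*theta**2

print(correction)                        # 1/k
print(c_mue / c_ce, 1.0 - correction)    # both equal (k-1)/k
```

For this particular toy pdf the perturbative result happens to be exact; for the critical energy pdf the higher cumulants contribute the $+\ldots$ terms of Eq. (57).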
To see what happens in the finite-size-critical region we focus (for simplicity) on the temperature $\beta_{m}$ for which the canonical specific heat is maximal, identified by the solution of

$\epsilon^{(3)}(\beta, L) = 0 = \frac{d c_{ce}(\beta, L)}{d\beta}$  (58)

where $\hat\epsilon_{m} = \hat\epsilon(\beta_{m}, L)$, and we have used Eq. (52). At this temperature Eq. (57) simplifies to

$c_{\mu e}(\hat\epsilon_{m}, L) = c_{ce}(\beta_{m}, L) - L^{d}\, \frac{\epsilon^{(4)}(\beta_{m}, L)}{2\,\epsilon^{(2)}(\beta_{m}, L)} + \ldots$  (59)

Now we appeal to the scaling forms for the cumulants (Eq. 19), and fold in the effects of the additional non-scaling contribution to the second cumulant (Eq. 28), to conclude that

$c_{\mu e}(\hat\epsilon_{m}, L) = c_{ce}(\beta_{m}, L) - L^{\alpha/\nu} a_{\epsilon}^{-2}\, \frac{\tilde x^{(4)}(y_{m})}{2\left[\tilde x^{(2)}(y_{m}) - g(L)\right]} + \ldots$  (60)

This result makes clear (albeit perturbatively) that, in the finite-size-limited regime, the temperature-independent additive 'background constant' in the canonical specific heat (manifested in the parameter $g(L)$) does not simply translate into an additive energy-independent background in its microcanonical counterpart. To expose the implications for the difference between canonical and microcanonical specific heats we introduce the dimensionless parameter

$R(L) \equiv \frac{c_{\mu e}(\hat\epsilon_{m}, L) - c_{ce}(\beta_{m}, L)}{c_{ce}(\beta_{m}, L)} = -\frac{\tilde x^{(4)}(y_{m})}{2\left[\tilde x^{(2)}(y_{m}) - g(L)\right]^{2}} + \ldots$  (61)

Then

$\frac{R(L)}{R(\infty)} = \left[\frac{\tilde x^{(2)}(y_{m})}{\tilde x^{(2)}(y_{m}) - g(L)}\right]^{2} = \left[1 - \frac{c_{0c}}{c_{ce}(\beta_{m}, L)}\right]^{2}$  (62)

where $R(\infty)$ is the scaling limit of $R(L)$. The significance of the background constant $c_{0c}$, in particular its sign, is now apparent. The negative value of this constant results in an amplification of the difference between the microcanonical and canonical results (at $\beta_{m}$), to a degree that diminishes with increasing system size. This is not simply the trivial effect that would arise from a uniform (downward) shift of both functions: Eq. (60) shows that this is not what happens, as does the power of two on the rhs of Eq. (62). It is not hard to track down the origins of this effect.
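The amplification encoded in Eq. (62) is easy to read off numerically. The values below are illustrative stand-ins (not measured amplitudes): with a negative background of magnitude comparable to the scaling part of the specific heat, the ratio is 4, consistent with the $R(L)/R(\infty) \sim 4$ quoted in the text for $L = 10$, and it decays to 1 as $c_{ce}$ grows with system size.

```python
# Hedged numerical reading of Eq. (62): a negative background constant c0c
# amplifies the fractional microcanonical/canonical specific-heat
# difference.  All numbers are illustrative, not fitted values.

def amplification(c_ce, c0c):
    """R(L)/R(inf) = (1 - c0c / c_ce)**2, cf. Eq. (62)."""
    return (1.0 - c0c / c_ce) ** 2

c0c = -1.0                             # negative background, 3d-Ising-like sign
for c_ce in (1.0, 3.0, 10.0, 100.0):   # c_ce grows ~ L^{alpha/nu} with L
    print(c_ce, amplification(c_ce, c0c))
# c_ce comparable to |c0c| gives a factor ~4; the amplification
# approaches 1 from above as the system size grows.
```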
The difference between the canonical and microcanonical specific heats is, we have noted, an anharmonic effect; in the present context the 'corrections to scaling' reduce (only) the second cumulant of the energy pdf and thus, in a relative sense, enhance the anharmonic (non-gaussian) character of the energy pdf, as one can see immediately from a comparison of Figures 3 and 4. The effect is significant. For L = 10 (as used in the simulations reported in [8]), estimating c ce (β m , L) by c ce (β c , L) one can read off from Fig. 2 that R(L)/R(∞) ∼ 4. The somewhat unexpected conclusion that the fractional difference between c ce and c µe at bulk criticality actually decreases for increasing L is consistent with some MC studies [39]. V. CONCLUSIONS We review, briefly, the three principal strands of this work. First, we have broached the general question of the finite-size corrections to the density of states of a many-body system. The explicit proposal for the pre-exponential structure advanced in Eq. (2) is consistent with the prefactor-free structure of the canonical partition function [10] and with the behavior of the simple models discussed in Appendix B. Given the growing interest in the behavior of mesoscopically-sized systems, this proposal seems to merit some further study, with more rigor than we have attempted to offer here. Second, we have shown how one can fold out from the canonical energy fluctuation spectrum the principal corrections to scaling. The underlying behavior exhibits scale-invariance to a degree that seems remarkable, given the relative weakness of energy fluctuations. It is, we have seen, also largely consistent with established 3d-Ising critical properties. Third, we have provided a finite-size scaling theory of the microcanonical ensemble. 
This was the original motivation for this work: specifically, the suggestion [8] that the finite-size-smearing of critical behavior characteristic of the canonical ensemble is 'greatly reduced' within the microcanonical framework. Reference [8] offers two pieces of supporting evidence for this contention, which merit final comment. Reference [8] suggests, firstly, that, in the vicinity of $\epsilon_{c}$, the microcanonical entropy (measured with the techniques described in [40]) can be adequately represented by a form (Eq. 6 of reference [8]) which allows for no finite-size corrections at all, and which corresponds essentially to the large-$x$ limit (Eq. 36) of our scaling function. In fact the quality of the fit provided by this representation is rather poor. And we would expect it to be so. The measured microcanonical entropy evolves in a manifestly smooth way [41] between the limiting thermodynamic forms appropriate above and below $\epsilon_{c}$; Eq. 6 of reference [8] is non-analytic at $\epsilon_{c}$. Moreover, in analyzing data for the entropy and its derivatives, it is, we have seen, essential (on all systems practically accessible) to do justice to the corrections associated with the background constant $c_{0c}$. Even in the thermodynamic limit the corrections allowed for in Eq. (8) of reference [8] do not do this. The second piece of supporting evidence offered in reference [8] is a striking enhancement of the critical peak in the microcanonical specific heat, with respect to its canonical counterpart. As we have seen, this behavior is at least partly due [42] to the effects associated with $c_{0c}$; Figure 9 indicates that the underlying differences are rather less dramatic.

[FIG. 1 caption, continued:] ...[24]. The statistical errors are an order of magnitude smaller than the symbol size. The points marked ⋄ are taken from reference [25]. The relevant parameters have been assigned the values [26]: $\alpha = 0.108$, $\nu = 0.63067$ and $\beta_{c} = 0.2216544$.
[FIG. 1 caption, continued:] The arrow identifies the best-fit value for the intercept, prescribing the constant $\epsilon_{c}$ (Eq. 21).

[FIG. 7 caption, continued:] ...deduced from the microcanonical entropy (Fig. 5). The dimensionless variable $y$ provides a scaled representation of the reduced (inverse) temperature (Eq. 12). The straight lines represent fits to the predicted asymptotic forms (Eq. 37). The arrows identify the roles of the parameters $r_{\pm}$ (Eqs. 44a,b) and $U_{0}$ (Eq. 46).

[FIG. 9 caption:] Comparison of the dimensionless microcanonical and canonical specific heat scaling functions, $\tilde c_{\mu e}$ (Eq. 54a) and $\tilde c_{ce}$ (Eq. 55a), plotted as a function of the scaled dimensionless temperature (Eq. 12). The light dashed line shows the power-law behavior characterizing the thermodynamic (large $|y|$) limit, extended back into the finite-size-limited region.

ACKNOWLEDGMENTS

[C2A] $\Omega_{s}\, e^{-\beta E_{s}}$ is slowly varying over the interval $E \to E + \delta E$, if that interval lies in a range contributing significantly to the thermal properties at temperature $\beta$. The range contributing significantly is centered on the saddle point $\hat E = L^{d} \epsilon$ (Eq. 5). As a result, while condition C2 requires Eq. (A2), condition C2A requires only that

$\left|\frac{d^{2} \ln \Omega(\epsilon, L)}{d\epsilon^{2}}\right| \delta\epsilon^{2} = L^{-d}\left[c_{\mu e}(\epsilon, L)\right]^{-1} \delta E^{2} \ll 1$  (A5)

where we have used Eq. (51). Thus, in place of Eq. (A3), we need simply

$\delta E_{I} \ll \left[L^{d}\, c_{\mu e}(\epsilon, L)\right]^{1/2} \simeq L^{d}\left[\epsilon^{(2)}(\beta_{\mu e}, L)\right]^{1/2}$  (A6)

where the last step uses Eq. (52), and $\beta_{\mu e} = \beta_{\mu e}(\epsilon, L)$ (Eq. 47). This equation expresses more explicitly the implications of condition C2A. A density of states function will exist in the operational sense (Eq. 1b) that it may be used to compute thermal properties at a given temperature as long as the canonical energy distribution (for that temperature) is broad on the scale of the intrinsic discreteness of the energy spectrum.

APPENDIX B: DENSITY OF STATES OF SIMPLE MODELS

Here we show that the general form for the density of states function proposed in Eq. (2) is consistent with exact results for two simple models.

1.
Quasi-continuous energy spectrum: harmonic lattice model

Consider a system (a harmonic model of the vibrations of a crystal structure, for example) whose energy spectrum is that of $N$ weakly-interacting harmonic oscillators, with associated frequencies $\nu_{j}$, $j = 1 \ldots N$. Then

$E(\{n\}) = h \sum_{j=1}^{N} n_{j} \nu_{j} \equiv \sum_{j=1}^{N} \epsilon_{j}$

gives the energy of a microstate in which (for each $j$) mode $j$ has quantum number $n_{j}$. We consider the classical ($h \to 0$) limit, in which the energy levels are quasicontinuous. In this case $\delta E_{I} \sim h\nu_{min}$, Eq. (A3) is satisfied, and we may proceed as in Eq. (...). Writing an integral representation of the $\delta$-function we find [44]

$I(N) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} dh\, e^{-ih} \left(\frac{e^{ih} - 1}{ih}\right)^{N} = \frac{2}{\pi} \int_{0}^{\infty} dh\, \cos[(N-2)h] \left(\frac{\sin h}{h}\right)^{N} = \frac{1}{\Gamma(N)}$

which may be approximated using the asymptotic expansion for the $\Gamma$ function [44]

$\Gamma(z) = \sqrt{2\pi}\, z^{z - \frac{1}{2}}\, e^{-z} \left[1 + O(z^{-1})\right]$  (B1)

In analyzing the remaining (energy-independent) contribution we suppose that the frequency spectrum is that of a $d = 1$ system of particles, with a gap. Then

$\ln Q(N) = -N \ln h - N q_{N}$

where the sum $q_{N}$ may be written in the form

$q_{N} = \frac{1}{N} \sum_{j=1}^{N} \ln \nu_{j} = \frac{1}{N} \sum_{r=0}^{N-1} H\!\left(\frac{2\pi r}{N}\right)$

where $H(\theta)$ is periodic, and (invoking the assumed gap) $H(0)$ is non-zero. It can be shown [45] that the $N \to \infty$ limit of this sum,

$q_{\infty} = \frac{1}{2\pi} \int_{0}^{2\pi} H(\theta)\, d\theta$

has finite-size corrections that are exponentially small in $N$. Gathering these results together we conclude that the density of states has the form of Eq. (2).

APPENDIX C: A BOUND ON THE CANONICAL SPECIFIC HEAT

We outline here an argument establishing that the maximum value of the microcanonical specific heat provides an upper bound for the canonical specific heat, within the asymptotic scaling region. The argument assumes the concavity of the function $\tilde S_{eff}(x)$; the concavity of $\tilde S(x)$ is already presupposed in the formulation of Eq. (2). We write the scaling function for the energy pdf (Eq.
17) in the form

$\tilde P(x|y) = Q(x, y)\, G\!\left(x - \hat x(y),\, -1/\tilde S''_{eff}(x^{\star})\right)$  (C1)

where $G(z, b)$ is a gaussian of zero mean and variance $b$; $\hat x(y)$ is the modal scaled energy for a given $y$, the solution of

$\frac{d\tilde S_{eff}(x)}{dx} = y$  (C2)

and $x^{\star}$ locates the maximum of the microcanonical specific heat, identified by the condition (Eq. 54b)

$\tilde S''_{eff}(x) \leq \tilde S''_{eff}(x^{\star})$  (C3)

The function $Q(x, y)$ introduced in Eq. (C1) is defined by

$Q(x, y) = Q_{0}\, e^{T(x, y)}$ where $T(x, y) = -xy + \tilde S_{eff}(x) - \frac{\tilde S''_{eff}(x^{\star})}{2}\left(x - \hat x(y)\right)^{2}$  (C4)

while $Q_{0}$ is an $x$-independent constant, defined by normalization. From the assumed concavity of $\tilde S_{eff}(x)$ it is straightforward to show that, for any given $y$, $T(x, y)$ is concave in $x$, with a single maximum at $x = \hat x(y)$. Now appealing to Eqs. (C1) and (55b) we can write

$\tilde c_{ce}(y) = \tilde x^{(2)}(y) = \int dx\, \tilde P(x|y) \left[x - \tilde x^{(1)}(y)\right]^{2} \leq \int dx\, \tilde P(x|y) \left[x - \hat x(y)\right]^{2} = \int dz\, \tilde Q(z, y)\, G(z, -1/\tilde S''_{eff}(x^{\star}))\, z^{2}$  (C5-C7)

where

$\tilde Q(z, y) \equiv \frac{1}{2}\left[Q(\hat x(y) + z, y) + Q(\hat x(y) - z, y)\right] = \frac{Q_{0}}{2}\left[e^{T(\hat x(y)+z,\, y)} + e^{T(\hat x(y)-z,\, y)}\right]$

From the properties of the function $T(x, y)$ it is straightforward to show that $\tilde Q(z, y)$ has a single turning point (at $z = 0$), and that there exists some $z_{0}(y)$ such that

$\tilde Q(z, y) > 1$ if $|z| < z_{0}(y)$; $\tilde Q(z, y) < 1$ if $|z| > z_{0}(y)$  (C8)

Then, finally, appealing to Eqs. (C6) and (54b),

$\tilde c_{ce}(y) - \tilde c_{\mu e}(x^{\star}) < \int dz\, \left[\tilde Q(z, y) - 1\right] G(z, -1/\tilde S''_{eff}(x^{\star}))\, z^{2} < z_{0}^{2}(y) \int dz\, \left[\tilde Q(z, y) - 1\right] G(z, -1/\tilde S''_{eff}(x^{\star})) = 0$  (C9)

where the last step exploits normalization conditions. It follows that the microcanonical specific heat maximum $\tilde c_{\mu e}(x^{\star})$ provides an upper bound for the canonical specific heat.

Fig. 5 shows the form implied by Eq. (17).

ACKNOWLEDGMENTS: NBW acknowledges the financial support of the Royal Society (grant no. 19076), the EPSRC (grant no. GR/L91412) and the Royal Society of Edinburgh.

[FIG. 1 caption, beginning:] The canonical mean of the energy density for the critical d=3 Ising model as a function of system size ...
[FIG. 2 caption:] The canonical variance of the energy density for the critical d=3 Ising model as a function of system size [24]. The statistical errors are an order of magnitude smaller than the symbol size. The parameters are as specified in Fig. 1. The arrow identifies the best-fit value for the intercept, prescribing the constant $c_{0c}$ (Eq. 30).

[FIG. 3 caption:] The canonical pdf (Eqs. 15-17) of the scaled dimensionless energy density $x$ of the critical d=3 Ising model, for a range of system sizes. The scaled variable is defined in Eq. (6) with the choice (cf. Fig. 1) $\epsilon_{c} = -0.9909$. The scale factor $a_{\epsilon}$ implicit in the scale of the $x$-variable is chosen such that (cf. Eq. (6)) $x = \epsilon - \epsilon_{c}$ for $L = 10$.

[FIG. 4 caption:] The data of Figure 3 with the effects of the non-scaling background convoluted out as prescribed by Eq. (33).

[FIG. 6 caption:] Estimates of the specific heat amplitude ratio, deduced from the decay of the energy pdf (for different system sizes) at 'large' (positive and negative) $x$-values. The estimates were determined by fitting to pairs of ranges of $x$ values, with the ranges forming each pair being centered on a common value of $L/\xi$, which forms the abscissa.

[FIG. 7 caption, beginning:] The function $\ln \tilde Z(y)$ defined in Eq. (...

[FIG. 8 caption:] Estimates of $U_{0}$ (Eq. 46) determined, as in Fig. 6, for a sequence of different ranges of $L/\xi$ values; the mid-point of the range defines the abscissa.

Here $\epsilon \equiv E/N$ is the energy per oscillator. In the $h \to 0$ limit the sums on $n_{j}$ can be replaced by integrals on $\epsilon_{j}$ to give

$\Omega(\epsilon, N) = \epsilon^{-1} E^{N} I(N)\, Q(N)$

Using Eq. (3) one can readily recover, as a check, the canonical partition function

$Z(\beta, N) = (\beta h)^{-N} e^{-N q_{\infty}}$

2. Discrete energy spectrum: 1d Ising model

Consider a $d = 1$ Ising model of $N$ sites, with periodic boundary conditions. Choosing the ground state as the energy-zero, the energy density for a macrostate of $M$ domain walls is $\epsilon = M \epsilon_{I}/N$, where $\epsilon_{I}$ is the domain wall energy. The number of microstates corresponding to macrostate $M, N$ is $\Omega_{M,N} = 2 \times N! \,/$
$[(N - M)!\, M!]$

Appealing to the asymptotic form (B1) once more we find that

$\Omega_{M,N} = 2\left[2\pi N x(1-x)\right]^{-1/2} x^{-Nx} (1-x)^{-N(1-x)} \left[1 + O(N^{-1})\right]$

where $x \equiv M/N = \epsilon/\epsilon_{I}$. In this case $\delta E_{I} = \epsilon_{I}$ and Eq. (A3) is not in general satisfied. But, since Eq. (A6) is, we may still identify a density of states by

$\Omega(\epsilon, N) = \frac{1}{2\epsilon_{I}}\, \Omega_{M,N}$

which one may then readily recast in the form of Eq. (2) with the identifications $N = L^{d}$ and

$\tilde s(\epsilon) = -(1-x)\ln(1-x) - x \ln x$  (B3)

Again, as a check, one can readily use this result to recover the canonical free energy density in the form

$f(\beta, N) = -\frac{1}{N} \ln Z(\beta, N) = -\ln(2\cosh K) + K$

where $2K = \beta\epsilon_{I}$, and the last term reflects our choice of ground state energy.

[Displaced fragment of Appendix C, cf. Eqs. (C5)-(C7):] $\int dx\, \tilde P(x|y)\,[x - \tilde x^{(1)}(y)]^{2} \leq \int dx\, \tilde P(x|y)\,[x - \hat x(y)]^{2} = \int dz\, \tilde Q(z, y)\, G(z, -1/\tilde S''_{eff}(x^{\star}))\, z^{2}$, with $\tilde Q(z, y) = \frac{1}{2}[Q(\hat x(y)+z, y) + Q(\hat x(y)-z, y)] = \frac{Q_{0}}{2}[e^{T(\hat x(y)+z,\, y)} + e^{T(\hat x(y)-z,\, y)}]$.

[FIG. 5 caption:] The finite-size scaling function for the 'effective' microcanonical entropy $\tilde S_{eff}(x)$ defined by Eq. (34) and deduced from the critical canonical energy pdf, with the aid of Eq. (35). Multi-histogram methods [27] have been used to allow access to an extended range of $x$-values.

APPENDIX A: DEFINING A DENSITY OF STATES

We discuss here, in general terms, the issues arising in defining a density of states function for a system in which the energy spectrum is discrete. The conventional argument [43] makes the identification

$\Omega(\epsilon, L)\, \delta\epsilon = \sum_{E<E_{s}<E+\delta E} \Omega_{s}$  (A1)

with the implicit assumption that the right hand side is proportional to $\delta E$ ($\equiv L^{d} \delta\epsilon$). This requires that:

[C1] There exist many distinct levels $s$ within the interval $\delta E$.

[C2] The level degeneracy $\Omega_{s}$ is slowly varying over the interval $E \to E + \delta E$.

The fractional variation of $\Omega_{s}$ over the interval $\delta E$ can be estimated using Eq. (47); condition C2 can then be expressed in the form of Eq. (A2). Taken together, conditions C1 and C2 thus amount to a requirement (Eq. (A3)) bounding $\delta E$ both below and above, where $\delta E_{I}$ characterizes the intrinsic discreteness of the energy spectrum. This condition is trivially satisfied in the classical limit (Appendix B1 considers one case explicitly).
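The free-energy check quoted for the 1d Ising model (Appendix B2) is easy to verify numerically. The sketch below (assumptions: $M$-parity constraints ignored, exactly as in the appendix; parameter values illustrative) sums the exact microstate count $\Omega_{M,N} = 2\,C(N,M)$ with $E = M\epsilon_{I}$ and compares against $f = -\ln(2\cosh K) + K$, $2K = \beta\epsilon_{I}$; the residual is the expected $(\ln 2)/N$ finite-size term.

```python
import math

# Hedged numerical check of Appendix B2: the macrostate sum
#   Z(beta, N) = sum_M 2*C(N, M) * exp(-beta * M * eps_I)
# (ground state as energy zero) should reproduce the 1d Ising free
# energy density f = -ln(2 cosh K) + K with 2K = beta*eps_I, up to a
# (ln 2)/N finite-size correction.

N = 200
beta, eps_I = 0.7, 2.0          # illustrative values
K = 0.5 * beta * eps_I

# log-sum-exp over domain-wall numbers M = 0..N, via lgamma binomials
terms = [math.log(2.0) + math.lgamma(N + 1) - math.lgamma(M + 1)
         - math.lgamma(N - M + 1) - beta * M * eps_I
         for M in range(N + 1)]
tmax = max(terms)
lnZ = tmax + math.log(sum(math.exp(t - tmax) for t in terms))

f_N = -lnZ / N
f_exact = -math.log(2.0 * math.cosh(K)) + K
print(f_N - f_exact)   # -(ln 2)/N, vanishing in the thermodynamic limit
```

The surviving $-(\ln 2)/N$ piece is exactly the pre-exponential factor of 2 in $\Omega_{M,N}$, illustrating the kind of sub-extensive structure addressed by Eq. (2).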
But there are obvious exceptions: in the Ising model (Appendix B2) Eq. (A3) is satisfied only at energies corresponding to microcanonical temperatures that are 'high' on the scale of the critical temperature. Or, to put it another way, $\Omega_{s}$ is certainly not slowly varying over a range wide enough to embrace many system energy levels. We must now recognize, however, that Eq. (A1) (along with its implicit assumptions) does not faithfully reflect the conditions needed to legitimize the transition from discrete (Eq. 1a) to continuum (Eq. 1b) representations. Instead of Eq. (A1) we require, rather, that we can consistently write

$\Omega(\epsilon, L)\, e^{-\beta E}\, \delta\epsilon = \sum_{E<E_{s}<E+\delta E} \Omega_{s}\, e^{-\beta E_{s}}$  (A4)

where (while retaining condition C1) we must replace condition C2 by condition C2A.

REFERENCES

[1] See for example T.L. Hill, Statistical Mechanics (McGraw-Hill, 1956).
[2] D.H.E. Gross, Physics Reports 279, 119 (1997).
[3] X.Z. Zhang, D.H.E. Gross, S.Y. Yu & Y.M. Zheng, Nucl. Phys. A 461, 668 (1987).
[4] W. Thirring, Z. Phys. 235, 339 (1970).
[5] R.C. Desai, D.W. Heermann & K. Binder, J. Stat. Phys. 53, 795 (1988).
[6] This choice reflects the belief that 'fields' rather than 'densities' provide the natural coordinates of the critical region: see M.E. Fisher, Phys. Rev. 176, 257 (1968).
[7] M. Creutz, Phys. Rev. Lett. 50, 1411 (1983).
[8] M. Promberger & A. Hüller, Z. Phys. B 97, 341 (1995).
[9] For a review of finite size scaling see Finite Size Scaling and Numerical Simulation of Statistical Systems, edited by V.
Privman (World Scientific Publishing, Singapore, 1990).
[10] C. Borgs and R. Kotecký, J. Stat. Phys. 61, 79 (1990); Phys. Rev. Lett. 68, 1734 (1992).
[11] A.D. Bruce and N.B. Wilding, Phys. Rev. Lett. 68, 193 (1992); N.B. Wilding and A.D. Bruce, J. Phys.: Condens. Matter 4, 3087 (1992); N.B. Wilding, Phys. Rev. E 52, 602 (1995).
[12] K. Rummukainen, M. Tsypin, K. Kajantie, M. Laine & M. Shaposhnikov, Nuclear Physics B 532, 283 (1998).
[13] A.J. Liu & M.E. Fisher, Physica 156, 35 (1989).
[14] V. Privman and M.E. Fisher, Phys. Rev. B 30, 322 (1984).
[15] Eq. (2) assumes that the entropy density is concave, so that −s′′(ǫ) is positive. This concavity condition is guaranteed only in the thermodynamic limit. It is violated in the two-phase region of a finite-sized system exhibiting a first order phase transition with a latent heat. See e.g. A. Hüller, Z. Phys. B 95, 63 (1994). In these circumstances Eq. (2) cannot be correct.
[16] Compare Eqs. (2) and (...).
[17] Such problems are already apparent, for example, in the correlations induced in ideal gases by the microcanonical energy constraint: see e.g. J.L. Lebowitz and J.K. Percus, Phys. Rev. 124, 1673 (1961).
[18] We assume that our Ising system has the full ('particle-hole') symmetry of the Ising model itself. In those cases [11,12] which fall into the Ising universality class but do not have this symmetry one must allow for scaling-field-mixing: see [11] and references therein. We expect that the key results of the present work, notably the scaling ansatz for the density of states (Eq. 8), remain applicable provided the scaling energy density defined in Eq. 6 is replaced with an appropriate linear combination of the energy density and the density of the ordering variable (e.g. the particle density in the case of a fluid). The generalisation of our analysis to incorporate simultaneous 'microcanonical' constraints on both energy and ordering density is beyond the scope of the present work.
[19] See for example M.E. Fisher, Rep. Prog. Phys. 30, 615 (1967).
[20] We will use ≃ to signify equality in such an asymptotic sense.
[21] E. Brézin and J. Zinn-Justin, Nucl. Phys. B 257 (FS14), 867 (1985).
[22] H. Cramer, Mathematical Methods of Statistics (Princeton University Press, Princeton, 1946).
[23] See reference [13]. The results of that paper indicate that the range of reduced temperatures below criticality (y > 0), in which the subdominant 'background' contribution to the specific heat is effectively constant, is small.
[24] The Ising system energy is measured in units of the coupling constant J.
[25] M. Hasenbusch and K. Pinn, J. Phys. A: Math. Gen. 31, 6157 (1998).
[26] H.W.J. Blöte, E. Luijten & J.R. Heringa, J. Phys. A: Math. Gen. 28, 6289 (1995).
[27] A.M. Ferrenberg and R.H. Swendsen, Phys. Rev. Lett. 63, 1195 (1989); R.H. Swendsen, Physica A 194, 53 (1993).
[28] Note, in contrast, that the procedures adopted in reference [8] require a phenomenological smoothing of raw estimates of the microcanonical entropy, with results that depend sensitively upon the values chosen for the smoothing parameters.
[29] This skewed energy distribution has appeared (without finite-size-scaling analysis) in a number of papers: see for example Figure 1 of N.A. Alvez, B.A. Berg & R. Villanova, Phys. Rev. B 41, 383 (1990); Figure 2 of A.M. Ferrenberg & D.P. Landau, Phys. Rev. B 44, 5081 (1991).
[30] The sign convention for the subscripts is chosen to match the conventions of critical phenomena, where amplitudes are labeled according to the sign of t ∼ T − Tc rather than y ∼ β − βc.
[31] The pre-exponential factor also leads to a $(\epsilon - \epsilon_{c})^{\alpha/[2(1-\alpha)]}$ power-law modulation of the exponential decay of the energy pdf, in the large $|x|$ regime. Our data provides no significant direct support for this term. The existence of such a term in the pdf of the order parameter has been noted by many authors, and is relatively well established: see [21], [36] and G.R. Smith & A.D. Bruce, J. Phys. A: Math. Gen. 28, 6623 (1995); M.M. Tsypin, e-print hep-lat/9401034; J. Rudnick, W. Lay & D. Jasnow, Phys. Rev. E 58, 2902 (1998).
[32] The conversion between x, y and z = L/ξ scales uses the amplitudes (of correlation length and specific heat singularities) assigned in reference [13].
[33] The tuning of the choice of high-temperature series-expansion-variable in reference [13] has qualitatively similar consequences for amplitude-ratio-determination.
[34] Note that this assignment (including its uncertainty) is conditional upon the choices of the other parameters specified in the caption to Fig. 1.
[35] K.K. Mon, Phys. Rev. B 39, 467 (1989).
[36] A.D. Bruce, J. Phys. A: Math. Gen. 28, 3345 (1995).
[37] See for example B.B. Mandelbrot, Physics Today 42, 71 (1989).
[38] M.S. Challa & J.H. Hetherington, Phys. Rev. A 38, 6324 (1988).
[39] J.R. Ray & C. Frelechoz, Phys. Rev. E 53, 3402 (1996). Note that, in common with others, these authors impose the microcanonical constant energy constraint on a system comprising the intrinsic (Ising spin) degrees of freedom and a set of artificial conjugate momenta.
[40] G.R. Gerling and A. Hüller, Z. Phys. B 90, 207 (1993).
[41] If plotted as a function of $E = L^{d}\epsilon$ the entropy displays an effectively straight-line section (as it does at a first order phase transition) extending over an interval (in which $|x| \ll 1$) that grows as $L^{1/\nu}$, in contrast to a first-order transition, where the interval grows as $L^{d}$. This bears out a conjecture by L. Stodolsky and J. Wosiek, Nucl. Phys. B 413, 813 (1994).
[42] Note that the sharpness of the microcanonical peak reported in Fig. 1 of [8] is also heavily dependent upon the parameters of the smoothing procedure used there: see [28].
[43] See for example F. Reif, Fundamentals of Statistical and Thermal Physics (McGraw-Hill, New York, 1965) p. 61, being wary of the notational differences.
[44] I.S. Gradshteyn & I.M. Ryzhik, Table of Integrals, Series and Products (Academic Press, New York, 1980).
[45] M.N. Barber & M.E. Fisher, Ann. Phys. 77, 1 (1973).
{'abstract': "We develop a scaling theory for the finite-size critical behavior of the microcanonical entropy (density of states) of a system with a critically-divergent heat capacity. The link between the microcanonical entropy and the canonical energy distribution is exploited to establish the former, and corroborate its predicted scaling form, in the case of the 3d Ising universality class. We show that the scaling behavior emerges clearly when one accounts for the effects of the negative background constant contribution to the canonical critical specific heat. We show that this same constant plays a significant role in determining the observed differences between the canonical and microcanonical specific heats of systems of finite size, in the critical region.PACS numbers: 05.20. Gg, 05.70.Jk, 64.60.Fr Statistical mechanics can be formulated in any of a set of ensembles distinguished by the relationship between the system and its environment[1]. The principal members of this set are the microcanonical (prescribed energy) and canonical (prescribed temperature) ensembles. In the thermodynamic limit (when it exists) the ensembles yield the same predictions (and are, in this sense, equivalent) and the choice of ensemble is a matter of practical convenience. The canonical ensemble tends to win this contest because it circumnavigates the hard-constant-energy constraint imposed by the microcanonical ensemble.The two ensembles are, however, not always equivalent[2]. They differ for systems which are 'small' in some sense: inherently small systems such as nuclei or clusters [3]; systems with unscreened long-range forces [4] where the thermodynamic limit is problematic; and systems at critical points[5], which are our principal concern here. Theoretical studies of critical phenomena are almost invariably conducted within the framework of the canonical ensemble[6]. 
In consequence there is no substantive framework within which to interpret computational studies of microcanonical critical behavior. Such studies do, nevertheless, exist, having been motivated, variously, by the belief that the microcanonical framework may have some computational advantages [7] and by the discovery [8] that, apparently, critical anomalies in the microcanonical heat capacity are significantly enhanced with respect to their canonical counterparts. This paper goes some way towards supplying the missing framework. We develop (section 2) a finite-size-scaling theory [9] for the microcanonical entropy (the density of states) of a system with a critically-divergent heat capacity. In so doing we have, of necessity, to consider more general questions about the structure of the density of states of a finite-size system, in particular the implications of well-established results for the finite-size structure of the canonical partition function [10]. Though somewhat more than a phenomenology, our theory falls short of being microscopically explicit: to determine an explicit form for the relevant scaling function we need to appeal (section 3) to Monte Carlo (MC) measurements of the critical canonical energy probability distribution (pdf). The canonical energy pdf itself has a near-critical finite-size-scaling form which has featured in a number of studies of critical points in fluids [11] and lattice gauge theories [12]. Since energy fluctuations (like the critical anomaly in the canonical specific heat which they control) are relatively weak (by comparison with the fluctuations of the order parameter, and the divergence of its response function) the degree of 'scaling' reported in previously measured energy pdfs has been relatively poor - unsatisfactorily so for our purposes here. This problem is addressed in section 3.
We show that one can fold out (from the measured distributions) the sub-dominant (but significant) non-scaling effects that are associated with the constant background contribution to the canonical heat capacity, negative in the case of the 3d Ising model [13]. This procedure exposes the underlying behavior, which manifests scaling to an impressive degree. In addition to providing us with the platform needed for this work, this procedure may offer the basis for improving the mixed-scaling-field theory [11] of critical points in systems that belong to the Ising universality class but which do not have full Ising symmetry; recent studies [12] have suggested that the current framework is not fully satisfactory. The scaling form for the critical energy pdf allows us to determine the scaling form of the microcanonical entropy. In section 4 we explore this form and show that it is consistent with predictions for both the bulk-critical limit (as regards the parameters characterizing the specific heat singularity [13]) and the finite-size critical limit (the Fisher-Privman constant [14]). The microcanonical entropy also provides us with a unified basis for dealing with both the canonical and the microcanonical specific heats (section 5). We show that the 'corrections' to the scaling behavior of the canonical specific heat (the negative background constant) have subtle consequences for the microcanonical behavior. In particular they serve to amplify the difference between microcanonical and canonical behavior, and are at least partially responsible for the strength of the anomaly observed in some microcanonical studies [8].

A. The microcanonical scaling ansatz

We consider a d-dimensional many-body system of linear dimension L; we assume hypercubic geometry with periodic boundary conditions.
The canonical partition function is, in principle, a discrete sum over system microstates (r) or system energy levels (s):
$$Z(\beta, L) = \sum_r e^{-\beta E_r} = \sum_s \Omega_s e^{-\beta E_s} \quad (1a)$$

[Critical-point finite-size scaling in the microcanonical ensemble, A. D. Bruce and N. B. Wilding, Department of Physics and Astronomy, The University of Edinburgh, Edinburgh EH9 3JZ, Scotland, United Kingdom; arXiv:cond-mat/9908025.]
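The sum over levels in Eq. (1a) links the density of states $\Omega_s$ directly to the canonical energy distribution $P(E) \propto \Omega(E)\, e^{-\beta E}$. A minimal sketch of that link (the two-level density of states here is an illustrative stand-in, not the Ising $\Omega$):

```python
import math

def canonical_energy_pdf(omega, beta):
    """Canonical energy distribution P(E) ∝ Ω(E) e^{-βE} for a discrete
    spectrum given as a dict {E: Ω(E)}, normalized by Z of Eq. (1a)."""
    weights = {E: g * math.exp(-beta * E) for E, g in omega.items()}
    Z = sum(weights.values())  # canonical partition function
    return {E: w / Z for E, w in weights.items()}

# Stand-in density of states: N independent two-level units with unit gap,
# so Ω(E) = C(N, E) for E = 0..N (purely illustrative).
N = 20
omega = {E: math.comb(N, E) for E in range(N + 1)}
P = canonical_energy_pdf(omega, beta=0.5)
mean_E = sum(E * p for E, p in P.items())
```

For this factorizing spectrum the mean energy can be checked in closed form, $\langle E\rangle = N/(e^{\beta}+1)$, which is the same as $-\partial \ln Z/\partial\beta$.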
Measuring the halo mass function in loose groups

6 Dec 2010

D. J. Pisano, D. G. Barnes, B. K. Gibson, L. Staveley-Smith, K. C. Freeman, V. A. Kilborn

D. J. Pisano: Dept. of Physics, West Virginia University, P.O. Box 6315, Morgantown, WV 26506, USA. D. G. Barnes and V. A. Kilborn: Centre for Astrophysics and Supercomputing, Swinburne University, Hawthorn, VIC 3122, Australia. B. K. Gibson: University of Central Lancashire, Preston PR1, UK. L. Staveley-Smith: School of Physics, M013, University of Western Australia, Crawley, WA 6009, Australia. K. C. Freeman: RSAA, Mount Stromlo Observatory, Cotter Road, Weston, ACT 2611, Australia.

Using data from our Parkes & ATCA HI survey of six groups analogous to the Local Group, we find that the HI mass function and velocity distribution function for loose groups are the same as those for the Local Group. Both mass functions confirm that the "missing satellite" problem exists in other galaxy groups.

Project Overview

Cold dark matter (CDM) models of galaxy formation predict that the Local Group should contain about 300 dark matter halos, but an order of magnitude fewer galaxies are observed [4,5]. While the "missing satellite" problem can be mitigated by the inclusion of baryon physics in CDM models or alternate forms of dark matter, it is important to establish how this problem is manifest beyond the Local Group. We have conducted a HI survey of six loose groups of galaxies that are analogous to the Local Group. The six groups are composed of only spiral and irregular galaxies that have mean separations of ∼550 kpc. The groups have average diameters of 1.6 Mpc and have M_virial ∼ 10^(11.7-13.6) M_⊙; they are similar to the Local Group in all these ways. Details on our observations, data reduction, and our search for HI clouds in the groups can be found in [6]. The survey identified a total of 63 group galaxies, with all of the new detections having properties consistent with being typical dwarf irregular galaxies. Fig.
1 Left: The HIMF for loose groups as compared to that for the Local Group galaxies with HI detections and Local Group galaxies with HI detections and upper limits. Also shown is the HIMF from HIPASS [7] and a flat HIMF. Right: The VDF for the loose groups compared to all Local Group galaxies, only those detected in HI, the HIPASS VDF from [7], field galaxies from [3], cluster galaxies from [1], and the theoretical predictions from Via Lactea II [2]. Aside from the loose and Local Group data, all other functions have been arbitrarily renormalized.

Halo Mass Functions

Using the survey completeness from [6] and our catalog of group galaxies, we constructed both a HI mass function (HIMF) and a circular velocity distribution function (VDF) for the six loose groups, as shown in Figure 1. The figure shows that both the HIMF and VDF for the Local Group are not atypical, but are consistent with those for the six loose groups. The HIMF for low density regions, such as loose groups (and including the Local Group), is consistent with being flatter than the HIMF in the field, as was found by [7]. The VDF for loose groups has a low-mass slope consistent with that of field galaxies and HIPASS galaxies [3,8], but is much shallower than is predicted by simulations [2] or observed in galaxy clusters [1]. For a more complete discussion of these results, see Pisano et al. (2011, in preparation).

References

[1] Desai, V., Dalcanton, J. J., Mayer, L., Reed, D., Quinn, T., & Governato, F. 2004, MNRAS, 351, 265
[2] Diemand, J., Kuhlen, M., Madau, P., Zemp, M., Moore, B., Potter, D., & Stadel, J. 2008, Nature, 454, 735
[3] Gonzalez, A. H., Williams, K. A., Bullock, J. S., Kolatt, T. S., & Primack, J. R. 2000, ApJ, 528, 145
[4] Klypin, A., Kravtsov, A. V., Valenzuela, O., & Prada, F. 1999, ApJ, 522, 82
[5] Moore, B., Ghigna, S., Governato, F., Lake, G., Quinn, T., Stadel, J., & Tozzi, P. 1999, ApJ, 524, L19
[6] Pisano, D. J., Barnes, D. G., Gibson, B. K., Staveley-Smith, L., Freeman, K. C., & Kilborn, V. A. 2007, ApJ, 662, 959
[7] Zwaan, M. A., Meyer, M. J., Staveley-Smith, L., & Webster, R. L. 2005, MNRAS, 359, L30
[8] Zwaan, M. A., Meyer, M. J., & Staveley-Smith, L. 2010, MNRAS, 403, 1969
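The completeness-corrected binning behind a mass function like the HIMF can be sketched as follows. Everything numerical here is an illustrative stand-in (the masses, the ramp completeness function, and the volume are made up, not the survey's values); the weighting scheme, one detection contributing 1/C to its log-mass bin, is the generic construction, not code from the paper:

```python
import random

def mass_function(log_masses, completeness, bin_edges, volume):
    """Completeness-corrected mass function: each detection contributes
    1/C(logM) to its log-mass bin, normalized per dex per Mpc^3."""
    counts = [0.0] * (len(bin_edges) - 1)
    for lm in log_masses:
        c = completeness(lm)
        if c <= 0.0:
            continue  # below the survey detection limit
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= lm < bin_edges[i + 1]:
                counts[i] += 1.0 / c
                break
    widths = [bin_edges[i + 1] - bin_edges[i] for i in range(len(counts))]
    return [n / (w * volume) for n, w in zip(counts, widths)]

# Hypothetical inputs: 63 detections, a linear-ramp completeness,
# and a rough total volume for six ~1.6 Mpc diameter groups.
random.seed(1)
logM = [random.uniform(7.0, 10.0) for _ in range(63)]
ramp = lambda lm: min(1.0, max(0.0, (lm - 7.0) / 1.5))  # assumed completeness
edges = [7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0]
phi = mass_function(logM, ramp, edges, volume=6 * 2.1)
```

Since 1/C ≥ 1 wherever a source is counted, the corrected counts always lie at or above the raw ones, steepening the faint end relative to a naive histogram.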
Transport properties for driven granular fluids in situations close to homogeneous steady states

21 Feb 2013

Vicente Garzó, Moisés G. Chamorro, and Francisco Vega Reyes
Departamento de Física, Universidad de Extremadura, E-06071 Badajoz, Spain

(Dated: May 5, 2014)
PACS numbers: 05.20.Dd, 45.70.Mg, 51.10.+y, 05.60.-k

The transport coefficients of a granular fluid driven by a stochastic bath with friction are obtained by solving the inelastic Enskog kinetic equation from the Chapman-Enskog method. The heat and momentum fluxes as well as the cooling rate are determined to first order in the deviations of the hydrodynamic field gradients from their values in the homogeneous steady state. Since the collisional cooling cannot be compensated locally by the heat produced by the external driving force, the reference distribution $f^{(0)}$ (zeroth-order approximation) depends on time through its dependence on temperature. This fact gives rise to conceptual and practical difficulties not present in the undriven case. On the other hand, to simplify the analysis, and given that we are interested in computing transport to first order in deviations from the reference state, the steady-state conditions are considered to get explicit forms for the transport coefficients and the cooling rate. A comparison with recent Langevin dynamics simulations for driven granular fluids shows an excellent agreement for the kinematic viscosity, although some discrepancies are observed for the longitudinal viscosity and the thermal diffusivity at large densities. Finally, a linear stability analysis of the hydrodynamic equations with respect to the homogeneous steady state is performed.
As expected, no instabilities are found thanks to the presence of the external bath.

I. INTRODUCTION

It is well established that when a granular material is externally excited (rapid flow conditions) the motion of grains resembles the random motion of atoms or molecules in an ordinary gas. In these conditions, kinetic theory together with numerical simulations are the best tools to describe the behavior of granular flows. However, in contrast to ordinary fluids, the collisions between grains are inelastic and so, one has to feed energy into the system to achieve a steady non-equilibrium state. This can be done either by driving through the boundaries, for example, shearing the system or vibrating its walls [1], or alternatively by bulk driving, as in air-fluidized beds [2,3]. On the other hand, this way of supplying energy causes in most cases strong spatial gradients in the system. To avoid the difficulties associated with inhomogeneous states, it is quite usual in computer simulations to homogeneously heat the system by the action of an external driving force [4][5][6][7]. Borrowing a terminology often used in nonequilibrium molecular dynamics of ordinary fluids [8], this type of external force is usually called a "thermostat". Nevertheless, in spite of its practical importance, the effect of the external driving force on the dynamical properties of the system (such as the transport coefficients) is still not completely understood [9][10][11]. In particular, recent computer simulations [6,7] have computed some transport coefficients by measuring the static and dynamical structure factors for shear and longitudinal modes in a driven granular fluid. Given that the expressions for the transport coefficients were not known in this driven problem, the simulation data were compared with their corresponding elastic forms. (Contact: [email protected], http://www.unex.es/eweb/fisteor/vicente/; [email protected], http://www.unex.es/eweb/fisteor/fran/)
Thus, it would be desirable to provide simulators with the appropriate theoretical tools for studying problems in granular fluids driven by thermostats. The aim of this paper is to determine the transport coefficients of a dense driven granular gas of inelastic hard spheres in the framework of the Enskog kinetic equation. As in the undriven case [12,13], the transport coefficients are obtained by solving the Enskog equation by means of the Chapman-Enskog expansion [14] around a certain reference state $f^{(0)}$ (zeroth-order approximation). While in the undriven case the distribution $f^{(0)}$ is chosen to be the local version of the homogeneous cooling state (HCS), there is some flexibility in the choice of $f^{(0)}$ for a driven gas. For simplicity, one possibility is to take a local thermostat such that the distribution $f^{(0)}$ is still stationary at any point of the system. This was the choice assumed in previous works [15,16] to compute the transport coefficients of a heated granular gas. On the other hand, for general small deviations from the steady reference state, the zeroth-order distribution $f^{(0)}$ is not in general a stationary distribution, since the collisional cooling cannot be compensated locally by the heat injected by the driving force. This fact introduces additional difficulties not present in previous studies [15,16]. In this paper, we will adopt this point of view and will consider this kind of thermostat, which seems to be closer to the one used in computer simulations. The underlying motivation of such a study is twofold. On the one hand, given that most of the computer simulations [4] are performed by driving the fluid by means of a thermostat, it is important to know the effect of the thermostat on the transport properties.
In this sense, the question then arises as to whether, and if so to what extent, the conclusions drawn in previous works [15,16] may be altered when the time-dependent distribution $f^{(0)}(\mathbf{r},\mathbf{v},t)$ is considered in the Chapman-Enskog expansion instead of the steady distribution. On the other hand, it is also of interest to evaluate the transport coefficients in a situation where a direct comparison with computer simulations is more likely to occur. Here, in the same way as the simulations carried out in Ref. [7], we have assumed that the fluid is driven by a stochastic bath with friction. This allows us to compare directly our theoretical predictions for the kinematic and longitudinal viscosities and the thermal diffusivity with those obtained from Langevin dynamics simulations [7]. The derivation of macroscopic equations and associated transport coefficients from kinetic theory is limited in practice to weakly coupled systems (dilute and moderately dense gases, ideal plasmas, anharmonic crystals). In the case of ordinary fluids, the application of formally exact methods from nonequilibrium statistical mechanics has provided exact expressions for the transport coefficients known as Green-Kubo formulas [17]. Green-Kubo expressions are time integrals of equilibrium time correlation functions for the fluxes associated with conserved quantities. These expressions have the advantage of being formally exact and closely related to properties measured in experiments. The derivation of the corresponding Green-Kubo formulas for the transport coefficients of an undriven granular fluid has been a subject of great interest in the past few years [18]. On the other hand, much less is known in the case of driven granular fluids. To the best of our knowledge, the only derivation of Green-Kubo formulas for the Navier-Stokes transport coefficients of a dilute granular gas heated by the stochastic thermostat has been recently carried out by García de Soria et al. [19].
Starting from the Boltzmann equation, these authors obtain explicit expressions for the transport coefficients as a function of the inelasticity and the spatial dimension. As we will show below, in the low-density limit our results agree with those obtained in Ref. [19], showing the equivalence between the Chapman-Enskog and linear response methods for driven granular gases. The plan of the paper is as follows. In Sec. II, the Enskog kinetic equation for a granular gas fluidized by an external force is introduced. Then, before considering inhomogeneous situations, Sec. III analyzes the steady homogeneous state. As in the HCS [20], a scaling solution $\varphi_s$ is proposed at the steady state. However, the new feature is that the dependence of the reduced distribution $\varphi_s$ on the granular temperature occurs through two dimensionless parameters (dimensionless velocity and the reduced noise strength) instead of a single parameter (dimensionless velocity) as in the HCS. Once the steady homogeneous distribution is well characterized, Sec. IV addresses the Chapman-Enskog expansion around the unsteady reference distribution $f^{(0)}(\mathbf{r},\mathbf{v};t)$. The details of the calculations to first order in the spatial gradients are given in several Appendices, and the explicit expressions of the transport coefficients and the cooling rate are displayed in Sec. V. In reduced form, they are given in terms of the volume fraction, the coefficient of restitution and the parameters of the thermostat. A comparison with recent Langevin dynamics simulations [7] for hard disks is done in Sec. VI, showing an excellent agreement in the case of the kinematic viscosity and some discrepancies for the longitudinal viscosity and the thermal diffusivity, especially at large solid fractions. Section VII is devoted to the linear stability analysis around the steady homogeneous state.
As in previous studies [5,7] based on the elastic forms of the Enskog transport coefficients, our results also indicate that the steady homogeneous state is linearly stable. The paper is closed in Sec. VIII with a discussion of the results derived here.

II. ENSKOG KINETIC THEORY FOR DRIVEN SYSTEMS

We consider a system of inelastic hard spheres in d dimensions with mass m and diameter σ. Collisions are characterized by a (constant) coefficient of normal restitution 0 < α ≤ 1, with α = 1 in the elastic limit. As said in the Introduction, in order to maintain a stationary fluidized state, the granular fluid is driven by means of an external force or thermostat that acts locally on each particle [21]. This is a quite usual choice in computer simulations [4]. Under these conditions, the equation of motion for a particle i with peculiar velocity $\mathbf{V}_i$ can be written as
$$m\dot{\mathbf{V}}_i = \mathbf{F}_i^{\rm th}(t) + \mathbf{F}_i^{\rm coll}, \quad (1)$$
where $\mathbf{F}_i^{\rm th}$ is the thermostat force and $\mathbf{F}_i^{\rm coll}$ is the force due to inelastic collisions. Here, $\mathbf{V}_i = \mathbf{v}_i - \mathbf{U}$, where $\mathbf{v}_i$ is the velocity of particle i and $\mathbf{U}$ is the local mean flow velocity. We assume that $\mathbf{F}_i^{\rm th}$ is composed of two different terms: (i) a stochastic force where the particles are randomly kicked between collisions [21] and (ii) a viscous drag force which mimics the interaction of the particles with an effective viscous "bath" at temperature $T_b$. More explicitly, $\mathbf{F}_i^{\rm th}$ is given by [7]
$$\mathbf{F}_i^{\rm th}(t) = -\gamma_b \mathbf{V}_i(t) + \mathbf{F}_i^{\rm st}(t), \quad (2)$$
where $\gamma_b$ is a drag coefficient that defines the characteristic interaction time with the external bath, $\tau_b^{-1} = \gamma_b/m$. As usual, the stochastic force $\mathbf{F}_i^{\rm st}$ is assumed to have the form of a Gaussian white noise [21]:
$$\langle \mathbf{F}_i^{\rm st}(t)\rangle = \mathbf{0}, \qquad \langle \mathbf{F}_i^{\rm st}(t)\mathbf{F}_j^{\rm st}(t')\rangle = m^2 \xi_b^2\, \delta_{ij}\, \delta(t-t')\, \mathbb{1}, \quad (3)$$
where $\mathbb{1}$ is the $d\times d$ unit matrix and $\xi_b^2$ represents the strength of the correlation. The forcing term in the Enskog equation associated with $\mathbf{F}_i^{\rm st}$ is represented by a Fokker-Planck operator [20] of the form $-\frac{1}{2}\xi_b^2\,\partial^2/\partial v^2$.
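Ignoring collisions, the thermostat of Eqs. (2) and (3) is an Ornstein-Uhlenbeck process per velocity component, and can be integrated with a simple Euler-Maruyama step. A minimal sketch (pure Python, not the simulation code of Ref. [7]; the parameter values m = 1, γ_b = 1, ξ_b² = 2 are the ones quoted later in the text, so the bath temperature is T_b = m²ξ_b²/(2γ_b) = 1):

```python
import random

def thermostat_step(v, m, gamma_b, xi2_b, dt, rng):
    """One Euler-Maruyama step for dv = -(gamma_b/m) v dt + xi_b dW:
    a drag toward zero plus a Gaussian kick of variance xi2_b*dt per component."""
    kick = (xi2_b * dt) ** 0.5
    return [vi - (gamma_b / m) * vi * dt + kick * rng.gauss(0.0, 1.0)
            for vi in v]

def granular_temperature(vs, m, d):
    """Kinetic temperature T = (m/d) <V^2> (zero mean flow assumed)."""
    return m * sum(sum(vi * vi for vi in v) for v in vs) / (d * len(vs))

rng = random.Random(0)
d, m, gamma_b, xi2_b, dt = 2, 1.0, 1.0, 2.0, 0.01
vs = [[0.0] * d for _ in range(800)]   # start all particles at rest
for _ in range(1000):                  # ~10 relaxation times m/gamma_b
    vs = [thermostat_step(v, m, gamma_b, xi2_b, dt, rng) for v in vs]
T = granular_temperature(vs, m, d)     # relaxes to T_b = 1
```

Without the collisional term the steady state is the bath Maxwellian of Eq. (15); including $J_E$ with α < 1 would lower T below $T_b$, as the steady-state analysis of Sec. III quantifies.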
One of the advantages of using the model (1) instead of other kinds of thermostats is that the temperature of the thermostat $T_b$ (different from the kinetic temperature of the fluid, $T < T_b$) is always well defined. In particular, this kind of thermostat is able to equilibrate the system when collisions are elastic. Moreover, an external driving force similar to that of Eq. (2) has been recently proposed to model the effect of the interstitial fluid on grains in monodisperse gas-solid suspensions [22]. Thus, the corresponding Enskog kinetic equation for the one-particle velocity distribution function $f(\mathbf{r},\mathbf{v},t)$ reads
$$\partial_t f + \mathbf{v}\cdot\nabla f - \frac{\gamma_b}{m}\frac{\partial}{\partial\mathbf{v}}\cdot\mathbf{V}f - \frac{1}{2}\xi_b^2\frac{\partial^2}{\partial v^2}f = J_E[\mathbf{r},\mathbf{v}|f,f], \quad (4)$$
where
$$J_E[\mathbf{r},\mathbf{v}_1|f,f] = \sigma^{d-1}\int d\mathbf{v}_2\int d\widehat{\boldsymbol{\sigma}}\,\Theta(\widehat{\boldsymbol{\sigma}}\cdot\mathbf{g}_{12})(\widehat{\boldsymbol{\sigma}}\cdot\mathbf{g}_{12})\left[\alpha^{-2}\chi(\mathbf{r},\mathbf{r}-\boldsymbol{\sigma})f(\mathbf{r},\mathbf{v}_1';t)f(\mathbf{r}-\boldsymbol{\sigma},\mathbf{v}_2';t) - \chi(\mathbf{r},\mathbf{r}+\boldsymbol{\sigma})f(\mathbf{r},\mathbf{v}_1;t)f(\mathbf{r}+\boldsymbol{\sigma},\mathbf{v}_2;t)\right] \quad (5)$$
is the Enskog collision operator. Here, d is the dimensionality of the system (d = 2 for disks and d = 3 for spheres), $\boldsymbol{\sigma} = \sigma\widehat{\boldsymbol{\sigma}}$, $\widehat{\boldsymbol{\sigma}}$ being a unit vector, Θ is the Heaviside step function, and $\mathbf{g}_{12} = \mathbf{v}_1 - \mathbf{v}_2$. The primes on the velocities in Eq. (5) denote the initial values $\{\mathbf{v}_1',\mathbf{v}_2'\}$ that lead to $\{\mathbf{v}_1,\mathbf{v}_2\}$ following a binary collision:
$$\mathbf{v}_1' = \mathbf{v}_1 - \frac{1}{2}\left(1+\alpha^{-1}\right)(\widehat{\boldsymbol{\sigma}}\cdot\mathbf{g}_{12})\widehat{\boldsymbol{\sigma}}, \qquad \mathbf{v}_2' = \mathbf{v}_2 + \frac{1}{2}\left(1+\alpha^{-1}\right)(\widehat{\boldsymbol{\sigma}}\cdot\mathbf{g}_{12})\widehat{\boldsymbol{\sigma}}.$$
In addition, $\chi[\mathbf{r},\mathbf{r}+\boldsymbol{\sigma}|n(t)]$ is the equilibrium pair correlation function at contact as a functional of the nonequilibrium density field $n(\mathbf{r},t)$ defined by
$$n(\mathbf{r},t) = \int d\mathbf{v}\, f(\mathbf{r},\mathbf{v},t). \quad (6)$$
The macroscopic balance equations for the system are obtained when one multiplies the Enskog equation (4) by $\{1, m\mathbf{v}, mv^2\}$ and integrates over velocity. After some algebra one gets [12,22]
$$D_t n + n\nabla\cdot\mathbf{U} = 0, \quad (7)$$
$$D_t\mathbf{U} = -\rho^{-1}\nabla\cdot\mathsf{P}, \quad (8)$$
$$D_t T + \frac{2}{dn}\left(\nabla\cdot\mathbf{q} + \mathsf{P}:\nabla\mathbf{U}\right) = -\frac{2T}{m}\gamma_b + m\xi_b^2 - \zeta T. \quad (9)$$
In the above equations, $D_t = \partial_t + \mathbf{U}\cdot\nabla$ is the material derivative and ρ = mn is the mass density. The cooling rate ζ is proportional to $1-\alpha^2$ and is due to dissipative collisions.
The pressure tensor $\mathsf{P}(\mathbf{r},t)$ and the heat flux $\mathbf{q}(\mathbf{r},t)$ have both kinetic and collisional transfer contributions, i.e., $\mathsf{P} = \mathsf{P}^k + \mathsf{P}^c$ and $\mathbf{q} = \mathbf{q}^k + \mathbf{q}^c$. The kinetic contributions are given by
$$\mathsf{P}^k = \int d\mathbf{v}\, m\mathbf{V}\mathbf{V}\, f(\mathbf{r},\mathbf{v},t), \qquad \mathbf{q}^k = \int d\mathbf{v}\, \frac{m}{2}V^2\mathbf{V}\, f(\mathbf{r},\mathbf{v},t), \quad (10)$$
and the collisional transfer contributions are [12]
$$\mathsf{P}^c = \frac{1+\alpha}{4}m\sigma^d\int d\mathbf{v}_1\int d\mathbf{v}_2\int d\widehat{\boldsymbol{\sigma}}\,\Theta(\widehat{\boldsymbol{\sigma}}\cdot\mathbf{g}_{12})(\widehat{\boldsymbol{\sigma}}\cdot\mathbf{g}_{12})^2\,\widehat{\boldsymbol{\sigma}}\widehat{\boldsymbol{\sigma}}\int_0^1 dx\, f^{(2)}[\mathbf{r}-x\boldsymbol{\sigma},\mathbf{r}+(1-x)\boldsymbol{\sigma},\mathbf{v}_1,\mathbf{v}_2;t], \quad (11)$$
$$\mathbf{q}^c = \frac{1+\alpha}{4}m\sigma^d\int d\mathbf{v}_1\int d\mathbf{v}_2\int d\widehat{\boldsymbol{\sigma}}\,\Theta(\widehat{\boldsymbol{\sigma}}\cdot\mathbf{g}_{12})(\widehat{\boldsymbol{\sigma}}\cdot\mathbf{g}_{12})^2(\mathbf{G}_{12}\cdot\widehat{\boldsymbol{\sigma}})\widehat{\boldsymbol{\sigma}}\int_0^1 dx\, f^{(2)}[\mathbf{r}-x\boldsymbol{\sigma},\mathbf{r}+(1-x)\boldsymbol{\sigma},\mathbf{v}_1,\mathbf{v}_2;t]. \quad (12)$$
Here, $\mathbf{G}_{12} = \frac{1}{2}(\mathbf{V}_1+\mathbf{V}_2)$ is the velocity of the center of mass and
$$f^{(2)}(\mathbf{r}_1,\mathbf{r}_2,\mathbf{v}_1,\mathbf{v}_2,t) \equiv \chi(\mathbf{r}_1,\mathbf{r}_2|n(t))\, f(\mathbf{r}_1,\mathbf{v}_1,t)\, f(\mathbf{r}_2,\mathbf{v}_2,t). \quad (13)$$
Finally, the cooling rate is given by
$$\zeta = \frac{1-\alpha^2}{4dnT}m\sigma^{d-1}\int d\mathbf{v}_1\int d\mathbf{v}_2\int d\widehat{\boldsymbol{\sigma}}\,\Theta(\widehat{\boldsymbol{\sigma}}\cdot\mathbf{g}_{12})(\widehat{\boldsymbol{\sigma}}\cdot\mathbf{g}_{12})^3\, f^{(2)}(\mathbf{r},\mathbf{r}+\boldsymbol{\sigma},\mathbf{v}_1,\mathbf{v}_2;t). \quad (14)$$
The model (4) can be seen as the dilute version of the Fokker-Planck model studied previously by Hayakawa [23] when both model parameters $\gamma_b$ and $\xi_b^2$ are related by $\xi_b^2 = 2\gamma_b T_b/m^2$ [17]. Thus, for homogeneous situations, the steady distribution in the absence of collisions relaxes to the Maxwellian distribution
$$f_M(\mathbf{V}) = n\left(\frac{m}{2\pi T_b}\right)^{d/2} e^{-mV^2/2T_b}, \quad (15)$$
where $\mathbf{V} = \mathbf{v} - \mathbf{U}$ is the peculiar velocity.

III. STEADY HOMOGENEOUS STATES

Before considering inhomogeneous situations, it is quite instructive to analyze first the homogeneous state. In this situation, the density $n(\mathbf{r},t) \equiv n_s$ is constant, the granular temperature $T(\mathbf{r},t) \equiv T(t)$ is spatially uniform, and the mean flow vanishes ($\mathbf{U} = \mathbf{0}$). As a consequence, the one-particle distribution function $f(\mathbf{v},t)$ verifies the kinetic equation
$$\partial_t f - \frac{\gamma_b}{m}\frac{\partial}{\partial\mathbf{v}}\cdot\mathbf{v}f - \frac{1}{2}\xi_b^2\frac{\partial^2}{\partial v^2}f = J_E[f,f], \quad (16)$$
where
$$J_E[f,f] = \chi\sigma^{d-1}\int d\mathbf{v}_2\int d\widehat{\boldsymbol{\sigma}}\,\Theta(\widehat{\boldsymbol{\sigma}}\cdot\mathbf{g}_{12})(\widehat{\boldsymbol{\sigma}}\cdot\mathbf{g}_{12})\left[\alpha^{-2}f(\mathbf{v}_1')f(\mathbf{v}_2') - f(\mathbf{v}_1)f(\mathbf{v}_2)\right]. \quad (17)$$
Here, χ is the pair correlation function evaluated at the (homogeneous) density $n_s$.
The collision operator (17) can be recognized as the Boltzmann operator for inelastic collisions multiplied by the factor χ. The energy balance equation reads simply
$$\partial_t T = -\frac{2\gamma_b}{m}T + m\xi_b^2 - \zeta T. \quad (18)$$
In the hydrodynamic regime, f qualifies as a normal solution and so, its time dependence only occurs through the granular temperature T:
$$\partial_t f = \frac{\partial f}{\partial T}\partial_t T = -\left(\frac{2\gamma_b}{m} - \frac{m}{T}\xi_b^2 + \zeta\right)T\frac{\partial f}{\partial T}. \quad (19)$$
Substitution of Eq. (19) into Eq. (16) yields
$$-\left(\frac{2\gamma_b}{m} - \frac{m}{T}\xi_b^2 + \zeta\right)T\frac{\partial f}{\partial T} - \frac{\gamma_b}{m}\frac{\partial}{\partial\mathbf{v}}\cdot\mathbf{v}f - \frac{1}{2}\xi_b^2\frac{\partial^2}{\partial v^2}f = J_E[f,f]. \quad (20)$$
After a transient regime, the gas will achieve a steady state characterized by a constant temperature $T_s$. According to Eq. (18), the steady granular temperature $T_s$ is given by the equation
$$\zeta_s T_s + \frac{2\gamma_b}{m}T_s = m\xi_b^2, \quad (21)$$
where the (steady) cooling rate $\zeta_s$ is defined by Eq. (14) by using the stationary distribution function $f_s(\mathbf{v})$. Equation (21) establishes a relation between the model parameters $\gamma_b$ and $\xi_b^2$, so that only one of the above parameters is independent in the steady state. Here, we will take the noise strength $\xi_b^2$ as the relevant external parameter. By using the relation (21), in the steady state Eq. (20) becomes
$$\frac{1}{2}\zeta_s\frac{\partial}{\partial\mathbf{v}}\cdot\mathbf{v}f_s - \frac{m\xi_b^2}{2T_s}\frac{\partial}{\partial\mathbf{v}}\cdot\mathbf{v}f_s - \frac{1}{2}\xi_b^2\frac{\partial^2}{\partial v^2}f_s = J_E[f_s,f_s]. \quad (22)$$
Equation (22) shows clearly that the steady distribution $f_s$ also depends on the model parameter $\xi_b^2$ (apart from its dependence on the coefficient of restitution and the granular temperature). Thus, although the explicit form of $f_s$ is not known so far, dimensional analysis requires that $f_s$ has the scaled form [24]
$$f_s(\mathbf{v},\xi_b^2) = n_s v_0^{-d}\,\varphi_s(\mathbf{c},\xi_s^*), \quad (23)$$
where $\varphi_s$ is an unknown function of the dimensionless parameters
$$\mathbf{c} \equiv \frac{\mathbf{v}}{v_0}, \qquad \xi_s^* \equiv \frac{m\ell}{T_s v_0}\xi_b^2. \quad (24)$$
Here, $v_0 = \sqrt{2T_s/m}$ is the thermal speed and $\ell = 1/(n_s\sigma^{d-1})$ is the mean free path for hard spheres.
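As a quick numerical sanity check (not part of the paper), one can verify that the two thermostat terms on the left-hand side of Eq. (22) cancel exactly on a Maxwellian in the elastic limit, where $\zeta_s = 0$, $T_s = T_b$, and $J_E$ also vanishes. For a radial function, $\partial/\partial\mathbf{v}\cdot(\mathbf{v}f) = df + vf'$ and $\partial^2 f/\partial v^2 = f'' + (d-1)f'/v$:

```python
import math

def maxwellian(v, m, T):
    # radial part of f_M ∝ exp(-m v^2 / 2T); normalization is irrelevant here
    return math.exp(-m * v * v / (2.0 * T))

def thermostat_residual(v, m, T, xi2_b, d, h=1e-4):
    """Residual of -(m ξ_b²/2T) ∇·(v f) - (ξ_b²/2) ∇²f at radius v for f = f_M,
    with derivatives taken by central finite differences."""
    f = maxwellian(v, m, T)
    fp = (maxwellian(v + h, m, T) - maxwellian(v - h, m, T)) / (2 * h)
    fpp = (maxwellian(v + h, m, T) - 2 * f + maxwellian(v - h, m, T)) / h ** 2
    div_vf = d * f + v * fp            # radial form of ∇·(v f)
    lap_f = fpp + (d - 1) * fp / v     # radial form of ∇²f
    return -(m * xi2_b / (2 * T)) * div_vf - 0.5 * xi2_b * lap_f

# Elastic-limit parameters consistent with the text: m = 1, T = T_b = 1, ξ_b² = 2.
m, T, xi2_b, d = 1.0, 1.0, 2.0, 2
residuals = [abs(thermostat_residual(v, m, T, xi2_b, d)) for v in (0.3, 0.7, 1.2, 2.0)]
```

The residuals are at finite-difference noise level, confirming that for α = 1 the stationary solution of Eq. (22) is the Maxwellian, as stated below for Eq. (27).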
Note that the dependence of the scaled distribution function $\varphi_s$ on the temperature is encoded through two parameters: the dimensionless velocity $\mathbf{c}$ and the (reduced) noise strength $\xi_s^*$. This scaling differs from the one assumed in the case of the HCS [20], where only the dimensionless velocity $\mathbf{c}$ is required to characterize the distribution $\varphi_s$. A similar scaling solution to the form (23) has been recently proposed [25] for a driven homogeneous granular gas before reaching the stationary regime. In terms of the (reduced) distribution function $\varphi_s$, Eq. (22) can be finally rewritten as
$$\frac{1}{2}\zeta_s^*\frac{\partial}{\partial\mathbf{c}}\cdot\mathbf{c}\varphi_s - \frac{1}{2}\xi_s^*\frac{\partial}{\partial\mathbf{c}}\cdot\mathbf{c}\varphi_s - \frac{1}{4}\xi_s^*\frac{\partial^2}{\partial c^2}\varphi_s = J_E^*[\varphi_s,\varphi_s], \quad (25)$$
where we have introduced the dimensionless quantities
$$\zeta_s^* \equiv \frac{\ell\zeta_s}{v_0}, \qquad J_E^* \equiv \frac{\ell v_0^{d-1}}{n_s}J_E. \quad (26)$$
Since the cooling rate vanishes for elastic collisions, in that limit the solution of Eq. (25) is the Maxwellian distribution
$$\varphi_s(\mathbf{c}) = \pi^{-d/2}e^{-c^2}. \quad (27)$$
However, if the particles collide inelastically (α < 1), $\zeta^* \neq 0$ and the exact form of $\varphi_s(\mathbf{c})$ is not known. In particular, the deviation of $\varphi_s(\mathbf{c},\xi_s^*)$ from its Maxwellian form (27) is measured through the kurtosis or fourth cumulant
$$a_{2,s} = \frac{4}{d(d+2)}\langle c^4\rangle - 1, \quad (28)$$
where
$$\langle c^k\rangle = \int d\mathbf{c}\, c^k\,\varphi_s(\mathbf{c}). \quad (29)$$
The steady-state value $a_{2,s}$ of the kurtosis can be determined by considering the leading Sonine approximation for $\varphi_s(\mathbf{c},\xi_s^*)$ [20]:
$$\varphi_s \simeq \frac{e^{-c^2}}{\pi^{d/2}}\left[1 + a_{2,s}\left(\frac{c^4}{2} - \frac{(d+2)c^2}{2} + \frac{d(d+2)}{8}\right)\right]. \quad (30)$$
The approximation (30) is justified because the coefficient $a_{2,s}$ is expected to be small [20]. With the use of the form (30) and neglecting nonlinear terms in $a_{2,s}$, the dependence of $a_{2,s}$ on α and $\xi_s^*$ can be explicitly determined. After some algebra, one gets [24]
$$a_{2,s} = \frac{16(1-\alpha)(1-2\alpha^2)}{9 + 24d - \alpha(41-8d) + 30(1-\alpha)\alpha^2 + \dfrac{C_d\,\xi_s^*}{\chi(1+\alpha)}}, \quad (31)$$
where $C_d = 16d(d+2)\sqrt{2}\,\Gamma(d/2)/\pi^{(d-1)/2}$. In the absence of friction ($\gamma_b = 0$), the steady-state condition yields $\zeta_s^* = \xi_s^*$ and Eq.
(31) agrees with the results obtained when the system is only driven by the stochastic thermostat [20]. Moreover, when $\xi_s^* = 0$, we also recover the results of the undriven case [20]. Once the kurtosis is known, the cooling rate $\zeta_s$ can be written in terms of $a_{2,s}$ as
$$\zeta_s = \frac{2}{d}\frac{\pi^{(d-1)/2}}{\Gamma(d/2)}(1-\alpha^2)\chi\left(1+\frac{3}{16}a_{2,s}\right)n_s\sigma^{d-1}\sqrt{\frac{T_s}{m}}, \quad (32)$$
where the steady granular temperature $T_s$ obeys the equation
$$T_s = \frac{m^2\xi_b^2}{2\gamma_b} - \frac{2^{d-1}}{\sigma}\sqrt{\frac{m}{\pi}}\frac{\chi\phi}{\gamma_b}(1-\alpha^2)\left(1+\frac{3}{16}a_{2,s}\right)T_s^{3/2}. \quad (33)$$
Here,
$$\phi = \frac{\pi^{d/2}}{2^{d-1}d\,\Gamma(d/2)}n_s\sigma^d \quad (34)$$
is the solid volume fraction. Equation (33) gives the granular temperature $T_s$ in the non-equilibrium stationary state. Figure 1 shows the (reduced) steady temperature $T_s/T_b$ versus the volume fraction φ for two different values of the coefficient of restitution α. The theoretical results obtained from Eq. (33) for hard disks (d = 2) are compared with those obtained by numerically solving the Enskog-Boltzmann equation from the direct simulation Monte Carlo (DSMC) method [26]. As in Ref. [7], the fixed parameters of the simulations are m = 1, σ = 0.01, $\gamma_b$ = 1, $\xi_b^2$ = 2, and $T_b$ = 1. For a two-dimensional system, we have chosen the following form for χ(φ) [27]:
$$\chi(\phi) = \frac{1-\frac{7}{16}\phi}{(1-\phi)^2}. \quad (35)$$
We observe an excellent agreement between theory and simulation in the complete range of values of φ considered. As expected, at a given value of the solid fraction, the steady granular temperature $T_s$ decreases as the gas becomes more inelastic. The dependence of the kurtosis $a_{2,s}$ on α is shown in Fig. 2 for φ = 0.25, d = 2, and the same simulation parameters as in Fig. 1. It is quite apparent that the simulation data compare very well with the theoretical result (31), even for extreme values of dissipation. This good agreement suggests that the scaled distribution $\varphi_s(\mathbf{c},\xi_s^*)$ can be well represented by the leading Sonine approximation (30) in the region of thermal velocities.
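Equations (31)-(33) form a small self-consistent system, since $\xi_s^*$ depends on $T_s$, which in turn depends on $a_{2,s}$ through $\zeta_s$. A minimal fixed-point iteration (pure Python; the parameter values are the ones quoted above for Fig. 1, and the iteration scheme itself is just one convenient choice written from Eqs. (21), (24), (31) and (32), not code from the paper):

```python
import math

def chi_2d(phi):
    """Pair correlation function at contact for hard disks, Eq. (35)."""
    return (1.0 - 7.0 * phi / 16.0) / (1.0 - phi) ** 2

def steady_state(alpha, phi, m=1.0, sigma=0.01, gamma_b=1.0, xi2_b=2.0, d=2):
    """Solve T_s and a_{2,s} self-consistently from Eqs. (21), (31), (32)."""
    chi = chi_2d(phi)
    # invert Eq. (34) for the number density n_s
    n_s = 2 ** (d - 1) * d * math.gamma(d / 2) * phi / (math.pi ** (d / 2) * sigma ** d)
    ell = 1.0 / (n_s * sigma ** (d - 1))  # mean free path
    C_d = 16 * d * (d + 2) * math.sqrt(2) * math.gamma(d / 2) / math.pi ** ((d - 1) / 2)
    T, a2 = m ** 2 * xi2_b / (2 * gamma_b), 0.0  # start from the bath value T_b
    for _ in range(200):
        v0 = math.sqrt(2 * T / m)
        xi_star = m * ell * xi2_b / (T * v0)                      # Eq. (24)
        a2 = 16 * (1 - alpha) * (1 - 2 * alpha ** 2) / (
            9 + 24 * d - alpha * (41 - 8 * d) + 30 * (1 - alpha) * alpha ** 2
            + C_d * xi_star / (chi * (1 + alpha)))                # Eq. (31)
        zeta_s = (2 / d) * math.pi ** ((d - 1) / 2) / math.gamma(d / 2) \
            * (1 - alpha ** 2) * chi * (1 + 3 * a2 / 16) \
            * n_s * sigma ** (d - 1) * math.sqrt(T / m)           # Eq. (32)
        T = m ** 2 * xi2_b / (2 * gamma_b + m * zeta_s)           # Eq. (21)
    return T, a2

T_s, a2_s = steady_state(alpha=0.9, phi=0.25)
```

In the elastic limit α = 1 the iteration returns $T_s = T_b$ and $a_{2,s} = 0$, as it must; for α < 1 it gives $T_s < T_b$ and a small kurtosis, consistent with Figs. 1 and 2.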
In addition, the values of a 2,s in the driven case are generally smaller than in the undriven case [20,28]. A similar model (inelastic Enskog equation with a stochastic bath with friction) has been used in Ref. [29] to study the nonequilibrium statistical properties of a one-dimensional inelastic hard-rod fluid. The authors consider a self-consistent density functional approach and compare their theoretical predictions with Brownian dynamics simulations for the hydrodynamic profiles (density, temperature and pressure) and the kurtosis of the velocity distribution function. The agreement found in Ref. [29] between theory and simulation is similar to the one observed here in Figs. 1 and 2, which shows the reliability of both approaches (kinetic theory and dynamic density functional theory) to reproduce the steady temperature and the fourth cumulant. However, no attention is devoted in Ref. [29] to transport coefficients, so a direct comparison with the results derived in the next Sections cannot be carried out. IV. SMALL SPATIAL PERTURBATIONS AROUND THE HOMOGENEOUS STEADY STATE The homogeneous steady state described in the previous Section can be perturbed by small spatial gradients. The response of the system to these perturbations gives rise to nonzero contributions to the heat and momentum fluxes, which are characterized by transport coefficients. The main goal of this paper is to determine the transport coefficients of the driven granular fluid. In order to obtain them, we consider states that deviate from the steady homogeneous state by small spatial gradients. In this case, the Enskog kinetic equation (4) is solved by means of the Chapman-Enskog method [14] conveniently adapted to dissipative dynamics.
The Chapman-Enskog method assumes the existence of a normal solution such that all space and time dependence of the distribution function occurs through the hydrodynamic fields f (r, v, t) = f [v|n(r, t), T (r, t), U(r, t)] .(36) The notation on the right hand side indicates a functional dependence on the density, temperature and flow velocity. For small spatial variations (i.e., low Knudsen numbers), this functional dependence can be made local in space through an expansion in the gradients of the hydrodynamic fields. To generate it, f is written as a series expansion in a formal parameter ǫ measuring the non-uniformity of the system, f = f (0) + ǫ f (1) + ǫ 2 f (2) + · · · ,(37) where each factor of ǫ means an implicit gradient of a hydrodynamic field. The uniformity parameter ǫ is related to the Knudsen number defined by the length scale for variation of the hydrodynamic fields. Note that while the strength of the gradients can be controlled by the initial or the boundary conditions in the case of elastic collisions, the problem is more complicated for granular fluids since in some cases (e.g., steady states such as the simple shear flow [30,31]) there is an intrinsic relation between dissipation and some hydrodynamic gradient. Here, however we consider situations where the spatial gradients are sufficiently small (low Knudsen number). Moreover, in ordering the different level of approximations in the kinetic equation, one has to characterize the magnitude of the external (thermostat) forces relative to the gradients as well. As usual, it is assumed that the external forces (drag and stochastic forces) do not induce any flux in the system and only modify the form of the transport coefficients. As a consequence, γ b and ξ 2 b are taken to be of zeroth order in gradients. 
According to the expansion (37) for the distribution function, the Enskog collision operator and time derivative are also expanded in powers of ǫ: J E = J (0) E + ǫJ (1) E + · · · , ∂ t = ∂ (0) t + ǫ∂ (1) t + · · · . (38) The coefficients in the time derivative expansion are identified by a representation of the fluxes and the cooling rate in the macroscopic balance equations as a similar series through their definitions as functionals of f . This is the usual CE method [11,14] for solving kinetic equations. The expansions (38) lead to similar expansions for the heat and momentum fluxes when substituted into Eqs. (10)-(12), P ij = P (0) ij + ǫP (1) ij + · · · , q = q (0) + ǫq (1) + · · · . (39) In this paper, we shall restrict our calculations to the first order in the uniformity parameter ǫ. A. Zeroth-order approximation To zeroth order in the expansion, the distribution f (0) obeys the kinetic equation (17) with the replacements n s → n(r, t) and f s → f (0) (r, v, t), namely ∂ (0) t f (0) − γ b m ∂ ∂v · Vf (0) − 1 2 ξ 2 b ∂ 2 ∂v 2 f (0) = J (0) E [f (0) , f (0) ],(40) where J (0) E [f (0) , f (0) ] denotes the Enskog collision operator evaluated with f (0) . The conservation laws at this order give ∂ (0) t n = 0, ∂ (0) t U = 0,(41) ∂ (0) t T = − 2T m γ b + mξ 2 b − ζ (0) T,(42) where ζ (0) is the cooling rate determined by Eq. (14) to zeroth order (namely, it is given by Eq. (32) in the first Sonine approximation). The time derivative ∂ (0) t f (0) can be more explicitly written as ∂ (0) t f (0) = ∂f (0) ∂n ∂ (0) t n + ∂f (0) ∂U i ∂ (0) t U i + ∂f (0) ∂T ∂ (0) t T = − 2 m γ b − m T ξ 2 b + ζ (0) T ∂f (0) ∂T .(43) With this result, Eq. (40) becomes − 2 m γ b − m T ξ 2 b + ζ (0) T ∂f (0) ∂T − γ b m ∂ ∂v · Vf (0) − 1 2 ξ 2 b ∂ 2 ∂v 2 f (0) = J (0) E [f (0) , f (0) ].(44) It is important to remark that, for given values of γ b , ξ 2 b and α, the steady-state condition (21) establishes a mapping between the density and the temperature, so that every density corresponds to one and only one temperature.
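The relaxation implied by the zeroth-order temperature equation (42) can be illustrated by a simple explicit integration. Here we read Eq. (42) as dT/dt = −(2γ b /m)T + mξ 2 b − ζ (0) T and model the cooling rate as ζ (0) = ζ 0 √T (cf. the T 1/2 scaling of Eq. (32)); all parameter values are illustrative assumptions:

```python
import math

def relax_temperature(gamma_b, m, xi_b2, zeta0, T0=1.0, dt=1e-3, steps=200_000):
    """Euler integration of Eq. (42),
    dT/dt = -(2*gamma_b/m)*T + m*xi_b2 - zeta0*sqrt(T)*T,
    with a modeled cooling rate zeta(T) = zeta0*sqrt(T).
    Returns the numerically stationary temperature."""
    T = T0
    for _ in range(steps):
        T += dt * (-(2.0 * gamma_b / m) * T + m * xi_b2 - zeta0 * math.sqrt(T) * T)
    return T
```

The integration settles on the steady value at which the right-hand side of Eq. (42) vanishes, i.e., the steady-state condition (21).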
Since the density n(r, t) and temperature T (r, t) are specified separately in the local reference state f (0) , the collisional cooling is only partially compensated by the heat injected into the system by the external driving force and so, ∂ (0) t T ≠ 0. Consequently, the zeroth-order distribution function f (0) depends in general on time through its dependence on the temperature. On the other hand, for simplicity, one could impose the steady-state condition (21) at any point of the system and so, ∂ (0) t T = 0. This was the choice used in previous theoretical works [15,16] in the case of the stochastic thermostat (γ b = 0), where the relation mξ 2 b = ζ (0) T was assumed to apply also in the inhomogeneous state. As we will see below, while the expressions of the shear and bulk viscosities are the same in both choices, the forms of the heat flux transport coefficients are different. The former choice (∂ (0) t T ≠ 0) will be referred to here as choice A, while the latter (∂ (0) t T = 0) will be referred to as choice B. Note that the choice A has the advantage of a simpler implementation in computer simulations. However, at the level of kinetic theory, the fact that ∂ (0) t T ≠ 0 gives rise to conceptual and practical difficulties not present in the previous analysis [15,16] carried out by using the choice B. The above difficulties are also present in a recent Chapman-Enskog-like method proposed to analyze rheological properties around the steady shear flow state [32,33]. Although for granular gases the drag parameter γ b and the white noise parameter ξ 2 b can in general be considered as independent parameters, to make contact here with previous results obtained for dilute gases [19,23] we will assume that both parameters are related by γ b = β m 2 ξ 2 b T b ,(45) where β is an arbitrary constant. Thus, when β = 0 our thermostat reduces to the usual stochastic thermostat [19], while the choice β = 1 2 reduces to the conventional Fokker-Planck model for ordinary gases [23].
In the unsteady state, the zeroth-order distribution function f (0) obeys Eq. (20). Dimensional analysis requires that f (0) is also given by the scaled form (23) (once one uses the relation (45)), namely f (0) (r, v, t) = n(r, t)v 0 (r, t) −d ϕ (c, ξ * ) ,(46) where now c ≡ V/v 0 , V = v − U being the peculiar velocity. Here, the thermal velocity v 0 and the reduced model parameter ξ * are defined as in Sec. III with the replacement T s → T (r, t). As in the steady state, the temperature dependence of f (0) is not only through v 0 and c but also through ξ * (see Eq. (23)). Thus, T ∂f (0) ∂T = − 1 2 ∂ ∂V · Vf (0) − 3 2 ξ * ∂f (0) ∂ξ * ,(47) and in dimensionless form Eq. (40) can be written as 3 2 [(2βT * − 1)ξ * + ζ * 0 ] ξ * ∂ϕ ∂ξ * + 1 2 (ζ * 0 − ξ * ) ∂ ∂c · cϕ − 1 4 ξ * ∂ 2 ∂c 2 ϕ = J * E [ϕ, ϕ],(48) where T * ≡ T T b , ζ * 0 ≡ ζ (0) nσ d−1 2T /m .(49) Upon writing Eq. (48) use has been made of the relation (45). Note that the reduced temperature T * ∝ ξ * −2/3 . As before, the explicit form of ϕ is not known. An indirect information on the scaled distribution ϕ is given through its fourth-cumulant a 2 (ξ * ) which is defined by Eq. (28). This cumulant can be obtained by multiplying both sides of Eq. (48) by c 4 and integrating over velocity. The result is − 3d(d + 2) 8 [(2βT * − 1)ξ * + ζ * 0 ] ξ * ∂a 2 ∂ξ * + d(d + 2) 2 [ζ * 0 (1 + a 2 ) − ξ * a 2 ] = µ 4 ,(50) where µ ℓ = − dc c ℓ J * E [ϕ, ϕ].(51) In the steady-state, Eq. (21) applies and the first term on the left hand side of Eq. (50) vanishes. In this case, the solution to Eq. (50) is given by Eq. (31). In general, Eq. (50) must be solved numerically to get the dependence of a 2 on ξ * (or equivalently, on the reduced temperature T * ). An analytical expression of ∂a 2 /∂ξ * in the steady state has been obtained in the Appendix A. Thus, in what follows a 2 (ξ * ) will be considered as a known function of ξ * . V. 
TRANSPORT COEFFICIENTS The analysis to first order in spatial gradients is similar to the one worked out in the undriven case [12,13,34]. Some technical details on the determination of the transport coefficients and the cooling rate are provided in the Appendices B and C. The form of the first-order velocity distribution function f (1) is given by f (1) = A (V) · ∇ ln T + B (V) · ∇ ln n +C ij (V) 1 2 ∂ i U j + ∂ j U i − 2 d δ ij ∇ · U +D (V) ∇ · U,(52) where the quantities A (V), B (V), C ij (V) and D (V) are the solutions of the linear integral equations (B14)-(B17), respectively. However, the evaluation of the transport coefficients from the above integral equations requires to know the complete time dependence of the first order contributions to the pressure tensor and the heat flux vector. This is quite an intricate problem. On the other hand, some simplifications occur if attention is restricted to linear deviations from the steady state described in Section II. In particular, since the kinetic and collisional contributions to the heat and momentum fluxes are already of first order in the deviations from the steady state, one only needs to know the transport coefficients to zeroth order in the deviations. This means we can evaluate the transport coefficients in the steady state conditions, namely, when the condition (21) applies. In this case, Eqs. (B14)-(B17) become − m T ξ 2 b 1 − 3 2 ∂ζ * 0 ∂ξ * + 1 2 ζ (0) A − γ b m ∂ ∂v · VA − 1 2 ξ 2 b ∂ 2 ∂v 2 A + LA = A,(53)− γ b m ∂ ∂v · VB − 1 2 ξ 2 b ∂ 2 ∂v 2 B + LB = B + ζ (0) 1 + φ ∂ ∂φ ln χ A + φ ∂χ ∂φ ∂ ∂χ ζ (0) χ − ξ * ∂ζ (0) ∂ξ * A,(54)− γ b m ∂ ∂v · VC ij − 1 2 ξ 2 b ∂ 2 ∂v 2 C ij + LC ij = C ij ,(55)− γ b m ∂ ∂v · VD − 1 2 ξ 2 b ∂ 2 ∂v 2 D + LD = D,(56) where it is understood that the quantities A, B, C ij , and D (defined by Eqs. (B6)-(B9), respectively) are evaluated in the steady state. Consequently, all the transport coefficients are given in terms of the steady granular temperature T s . 
The forms of the collisional contributions to the momentum and heat fluxes are exactly the same as those previously obtained in the undriven case [12,13] except that a 2,s depends on ξ * s (see Eq. (31)). Thus, we will focus here our attention in the evaluation of the kinetic contributions to the transport coefficients and the cooling rate. Technical details on this calculation are given in the Appendix C. A. Pressure tensor To first order in the spatial gradients, the pressure tensor is given by P (1) ij = −η ∂ i U j + ∂ j U i − 2 d δ ij ∇ · U −λδ ij ∇·U,(57) where η is the shear viscosity and λ is the bulk viscosity. While the shear viscosity has kinetic and collisional contributions, the bulk viscosity has only a collisional contribution. The bulk viscosity λ is given by λ = 2 2d+1 π(d + 2) φ 2 χ(1 + α) 1 − a 2,s 16 η 0 ,(58) where η 0 = d + 2 8 Γ d 2 π (d−1)/2 σ 1−d mT s(59) is the low density value of the shear viscosity in the elastic limit. The shear viscosity η can be written as η = η 0 ν 0 ν η + 2βm T b ξ 2 b 1 − 2 d−2 d + 2 (1 + α)(1 − 3α)φχ × 1 + 2 d−1 d + 2 (1 + α)φχ + d d + 2 λ,(60) where ν 0 = n s T s /η 0 and the collision frequency ν η is [35] ν η = 3ν 0 4d χ 1 − α + 2 3 d (1 + α) 1 + 7 16 a 2,s . (61) B. Heat Flux The constitutive form for the heat flux in the Navier-Stokes approximation is q (1) = −κ∇T − µ∇n,(62) where κ is the thermal conductivity and µ is a new coefficient not present in the elastic case (α = 1). The thermal conductivity κ is given by κ = κ k 1 + 3 2 d−2 d + 2 φχ(1 + α) + 2 2d+1 (d − 1) (d + 2) 2 π φ 2 χ(1 + α) 1 + 7 16 a 2,s κ 0 ,(63) where κ 0 = d(d + 2) 2(d − 1) η 0 m(64) is the thermal conductivity coefficient of an elastic dilute gas. The expression of the kinetic part κ k appearing in Eq. (63) is κ k = κ 0 ν 0 d − 1 d ν κ + 1 2 mξ 2 b T s 1 + 3ζ M ∂a 2 ∂ξ * s − 2ζ (0) s −1 1 + 2a 2,s − 3 2 ξ * s ∂a 2 ∂ξ * s + 3 2 d−3 d + 2 φχ(1 + α) 2 [2α − 1 +a 2,s (1 + α) − 1 4 10 + 2d − 3α + 3α 2 1 + α ξ * s ∂a 2 ∂ξ * s .(65) In Eq. 
(65), ζ (0) s is given by Eq. (32), ζ M = 3 √ 2 16d π (d−1)/2 Γ d 2 (1 − α 2 )χ,(66) and the value of the derivative (∂a 2 /∂ξ * ) s in the steady state is provided in Appendix A. Moreover, the collision frequency ν κ is [35] ν κ = ν 0 1 + α d χ d − 1 2 + 3 16 (d + 8)(1 − α) + 296 + 217d − 3(160 + 11d)α 256 a 2,s .(67) The coefficient µ is µ = µ k 1 + 3 2 d−2 d + 2 φχ(1 + α) ,(68) where its kinetic contribution µ k is µ k = κ 0 ν 0 T s n s ν κ − 3 2 ζ (0) s − mξ 2 b T s −1 κ k κ 0 ν 0 ζ (0) s (1 + φ∂ φ ln χ) + d − 1 d a 2,s +3 2 d−2 (d − 1) d(d + 2) φχ(1 + α) 1 + 1 2 φ∂ φ ln χ × α(α − 1) + a 2,s 6 (10 + 2d − 3α + 3α 2 ) . (69) C. Cooling rate The cooling rate ζ is given by ζ = ζ (0) s + ζ U ∇ · U.(70) At first order in spatial gradients, the proportionality constant ζ U is a new transport coefficient for granular fluids [12,13]. For a driven gas, ζ U can be written as ζ U = ζ 10 + ζ 11 ,(71) where ζ 10 = −3 2 d−2 d χφ(1 − α 2 ),(72)ζ 11 = 9(d + 2) 2 d−8 d 2 χ(1 − α 2 ) ν γ + 2mξ 2 b T − 2ζ (0) s −1 ωφχ 2(d + 2) − 2 2−d d + 3 3 ξ * s ∂a 2 ∂ξ * s ν 0 −(1 + α) 1 3 − α 2a 2,s − 3 2 ξ * s ∂a 2 ∂ξ * s φχν 0 ,(73) and the collision frequencies ω and ν γ are ω = (1 + α)ν 0 (1 − α 2 )(5α − 1) − a 2,s 6 15α 3 − 3α 2 + 3(4d + 15)α − (20d + 1) ,(74)ν γ = − 1 + α 192 χν 0 30α 3 − 30α 2 + (105 + 24d)α −56d − 73] .(75) Note that the first-order contribution ζ U to the cooling rate vanishes for elastic gases (α = 1, arbitrary solid volume fraction φ). However, for dilute inelastic gases (φ = 0, arbitrary values of the coefficient of restitution α), at variance with the undriven case [36], there is here a nonzero contribution to ζ U proportional to (∂a 2 /∂ξ * ) s [see Eq. (73)]. This result is consistent with those obtained [19] from the Boltzmann equation. The expressions for the Navier-Stokes transport coefficients obtained by using the choice B (i.e., when the condition (42) holds locally and so, ∂ (0) t T = 0) are displayed in Appendix D. D.
Some special limits It is quite apparent that the expressions of the transport coefficients are rather complicated, given the different parameters (inelasticity, density and the model parameter ξ 2 b ) involved in them. Thus, in order to show more clearly the dependence of each parameter on transport, it is instructive to consider some simple cases. In the elastic limit (α = 1), T s = m 2 ξ 2 b /2γ b , ζ (0) s = a 2,s = 0, ν η = χν 0 , and ν κ = (1 − d −1 )χν 0 . In this case, µ = ζ U = 0 and the coefficients λ, η and κ become, respectively, λ = 2 2(d+1) π(d + 2) φ 2 χη 0 ,(76)η = η 0 χ + 2βm T b ν0 ξ 2 b 1 + 2 d d + 2 φχ 2 + d d + 2 λ,(77)κ = κ 0 1 + 3 2 d−1 d+2 φχ 2 χ + d d−1 γ b mν0 + 2 2(d+1) (d − 1) (d + 2) 2 π φ 2 χκ 0 . (78) Note that the expressions (77) and (78) for η and κ differ from their corresponding elastic counterparts for undriven gases. We consider now a low-density granular gas (φ = 0). In this limit case, λ = 0 while η, κ and µ are given, respectively, by η = η 0 ν 0 ν η + 2βm T b ξ 2 b ,(79)κ = κ 0 ν 0 d − 1 d ν κ + 1 2 mξ 2 b T s 1 + 3ζ M ∂a 2 ∂ξ * s − 2ζ (0) s −1 × 1 + 2a 2,s − 3 2 ξ * s ∂a 2 ∂ξ * s ,(80)µ = κ 0 ν 0 T s n s ν κ − 3 2 ζ (0) s − mξ 2 b T s −1 × κ κ 0 ν 0 ζ (0) s + d − 1 d a 2,s ,(81) where ν η and ν κ are defined by Eqs. (61) and (67), respectively, with χ = 1. The expressions (79) and (80) agree with recent results [19] derived from the linearized Boltzmann equation for a granular gas heated by the stochastic thermostat (β = 0). In addition, as mentioned before, when β = 1 2 in Eq. (45), our model reduces to the Fokker-Planck model studied previously by Hayakawa [23] for dilute gases. In this paper, Hayakawa determines the transport coefficients η, κ, and µ by neglecting the dependence of the fourth cumulant a 2 on the (reduced) model parameters γ * and ξ * . In particular, in the steady state, Eqs. (79)-(81) agree with the results obtained in Ref. [23] when (∂a 2 /∂ξ * ) s = 0. 
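As a numerical self-consistency check of the limits above, one can verify that the general expression (58) for the bulk viscosity reduces to the elastic form (76) when α → 1 and a 2,s → 0 (a sketch; the function signatures are ours):

```python
import math

def bulk_viscosity(phi, chi, alpha, a2s, d, eta0):
    """Bulk viscosity lambda, Eq. (58)."""
    return (2 ** (2 * d + 1) / (math.pi * (d + 2))
            * phi ** 2 * chi * (1 + alpha) * (1 - a2s / 16) * eta0)

def bulk_viscosity_elastic(phi, chi, d, eta0):
    """Elastic limit of the bulk viscosity, Eq. (76)."""
    return 2 ** (2 * (d + 1)) / (math.pi * (d + 2)) * phi ** 2 * chi * eta0
```

At α = 1 the factor (1 + α) supplies the extra power of two, 2^(2d+1) · 2 = 2^(2(d+1)), so the two formulas coincide.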
All the above limit situations confirm the self-consistency of the results derived here for a dense granular fluid. VI. COMPARISON WITH COMPUTER SIMULATIONS The expressions derived in Sec. V for the transport coefficients and the cooling rate depend on the (steady) granular temperature T s , the coefficient of restitution α, and the solid volume fraction φ, along with the parameter ξ 2 b characterizing the external energy source. In this Section we will compare our theoretical predictions for the choices A and B with recent Langevin dynamics simulations carried out by Gradenigo et al. [7] for hard disks (d = 2). In these simulations, the fluid is also driven by a stochastic bath with friction and the two external parameters γ b and ξ 2 b are related by Eq. (45) with β = 1 2 . In the steady state, they measured the static and dynamic structure factors for shear and longitudinal modes for several values of the coefficient of restitution α and volume fraction φ. The corresponding best fits of the simulation results for the above structure factors allow them to identify the kinematic viscosity ν = η/ρ, the longitudinal viscosity ν l = 1 ρ 2 d − 1 d η + λ ,(82) and the thermal diffusivity D T = 2 dn κ. (83) Figure 3 shows the kinematic viscosity ν for disks as a function of the volume fraction φ for α = 0.6. Symbols refer to the values of ν obtained from Langevin dynamics simulations [7] by using two different procedures: (i) via the equal-time correlation of the transversal shear mode (static correlations) and (ii) via the correlation of the transversal shear mode at different times (dynamical correlations). As in Fig. 1, the parameters of the simulation are γ b = 1, T b = 1, m = 1 and σ = 0.01. We observe first that the simulation data obtained with the two independent procedures are compatible. Regarding the theoretical results, note that for the kinematic viscosity the results obtained by using both choices of thermostat are the same.
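The mapping between the Enskog coefficients and the observables fitted in the simulations, Eqs. (82) and (83) together with ν = η/ρ, is direct (a sketch with our own naming):

```python
def simulation_observables(eta, lam, kappa, n, m, d):
    """Kinematic viscosity nu = eta/rho, longitudinal viscosity
    nu_l (Eq. (82)) and thermal diffusivity D_T (Eq. (83))."""
    rho = m * n  # mass density
    nu = eta / rho
    nu_l = (2.0 * (d - 1) / d * eta + lam) / rho
    D_T = 2.0 * kappa / (d * n)
    return nu, nu_l, D_T
```

For d = 3 and λ = 0 one recovers the familiar longitudinal viscosity (4/3)η/ρ of a dilute simple fluid.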
The theoretical prediction for η in the elastic limit (i.e., Eq. (60) with α = 1 and γ b = ξ b = 0) but considering the α-dependence of the granular temperature given by Eq. (33) is also plotted. This was the theoretical expression for ν used in Ref. [7] to compare with simulation data. At a qualitative level, we observe that both theories (the elastic Enskog theory and the one derived here) reproduce in general the trends of simulation data. However, at a more quantitative level, it is quite apparent that the analytical results obtained in this paper for granular fluids agree much better with simulation data than those obtained in the elastic case, since the latter clearly overestimates the value of ν. This is the expected result since the simulations were carried out for inelastic gases in the presence of a stochastic bath with friction. The longitudinal viscosity ν l is plotted in Fig. 4 versus the volume fraction φ for the same systems as in Fig. 3. We observe that, in general, the influence of the thermostat on the longitudinal viscosity is less significant than for the kinematic viscosity ν since both theories agree relatively well. However, the discrepancies with computer simulations are more important than in the case of ν, specially in the low-density limit (φ = 0.1). While the elastic theory is closer to the simulation data than the inelastic theory when α = 0.8 (panel (a) of Fig. 4), the opposite happens at α = 0.6 for denser systems (see the panel (b) of Fig. 4). Since the dependence of the shear viscosity η on φ is well captured by the inelastic Enskog theory (see Fig. 3), it is evident that the discrepancies between theory and simulations are essentially due to the bulk viscosity λ, whose value is specially underestimated at low-density. This is a quite surprising result since one would expect that the influence of λ on the value of ν l increases with increasing density since λ = 0 for a dilute gas (φ = 0). The thermal diffusivity is shown in Fig. 
5 for the same cases as those considered in Figs. 3 and 4. Surprisingly, for strong dissipation and quite dense systems (see the panel (b) of Fig. 5), the agreement between theory and simulation is in general better when one uses the elastic form of D T instead of its inelastic expression (63). These results contrast with the ones recently obtained [16] for the stochastic driving (i.e., when γ b → 0, keeping γ b T b finite), where the accuracy of the inelastic Enskog theory was demonstrated (see Fig. 1 of Ref. [16]) for moderate densities and finite collisional dissipation. It is important to note that the identification of the transport coefficients from Langevin dynamics simulations requires fitting the simulation results for small but not zero values of the wave number k. Given that the expressions for the Enskog transport coefficients are independent of the wave number (since the hydrodynamic regime only strictly holds in the limit k → 0), it is possible that the transport coefficients measured in the simulations are still functions of k, especially when the smallest value of k considered to get the fit results is not close to 0. From Table 3 of Ref. [7], one could conclude that the true value of D T (i.e., the one extrapolated to kσ = 0) would be smaller than the one shown in this figure. More simulations would be needed to clarify this point. Now we consider the α-dependence of the transport coefficient µ and the first-order contribution ζ U to the cooling rate. Given that both coefficients vanish in the elastic limit, they were also neglected in previous studies for heated granular fluids [5,7]. To assess the impact of the term −µ∇n in the heat flux, the reduced coefficient µn/(T κ) is plotted in Fig. 6 versus the coefficient of restitution for two different values of the volume fraction φ in the case of the choice A. The results derived for µ by using the choice B are also plotted for comparison in the case φ = 0.1.
We observe that the coefficient µ is negative in the case of the choice B, although its magnitude is practically zero. This drawback (µ ≤ 0) of choice B is not present in the case of the choice A since µ is always positive for any value of α and φ, similarly to what happens in the undriven case [12,13]. In addition, although the magnitude of µ is in general smaller than that of the thermal conductivity κ, we observe that the influence of µ on the heat transport could not be considered negligible as the degree of dissipation increases. The α-dependence of the magnitude of ζ U derived from the choice A is plotted in Fig. 7 for several values of the volume fraction. It is quite apparent that the influence of dissipation on |ζ U | is more significant than in the case of µ, specially at large densities. Consequently, the contribution of ζ U to the cooling rate should be considered as the rate of dissipation increases. VII. LINEAR STABILITY ANALYSIS OF THE HYDRODYNAMIC EQUATIONS The closed hydrodynamic equations for n, U, and T can be obtained by replacing the constitutive forms of the pressure tensor, the heat flux, and the cooling rate into the balance equations (7)- (9). They are given by D t n + n∇ · U = 0,(84)D t U i + ρ −1 ∂ i p = ρ −1 ∂ j [η (∂ i U j + ∂ j U i − 2 3 δ ij ∇ · U + λδ ij ∇ · U ,(85)D t + 2γ b m − mξ 2 b T + ζ (0) T + 2 dn p∇ · U = 2 dn ∇ · (κ∇T + µ∇n) + 2 dn [η (∂ i U j + ∂ j U i − 2 d δ ij ∇ · U + λδ ij ∇ · U ∂ i U j − T ζ U ∇ · U. (86) Note that consistency would require to consider up to second order in the gradients in the expression (70) for the cooling rate, since this is the order of the terms in Eqs. (57) and (62) coming from the pressure tensor and the heat flux, respectively. However, it has been shown for a dilute gas that the contributions from the cooling rate of second order are negligible [36] as compared with the corresponding contributions from Eqs. (57) and (62). It is assumed here that the same holds in the dense case [37]. 
The form of the Navier-Stokes equations (84)-(86) for a driven granular fluid is analogous to that of an ordinary fluid, except for the presence of the external bath parameters γ b and ξ 2 b , the contributions to the cooling rate ζ (0) and ζ U and the new transport coefficient µ in the energy balance equation. In addition, as shown in Sec. V and depending on the values of the coefficient of restitution α, the transport coefficients are in general different from those obtained for elastic collisions. Equations (84)-(86) can be linearized around the stationary homogeneous state, where the hydrodynamic fields take the steady values n s ≡ const., T s ≡ const. and U s = 0. A linear stability analysis of the hydrodynamic equations (84)-(86) have also been carried out in Ref. [7] but neglecting any dependence of the transport coefficients on inelasticity and assuming that µ = ζ U = 0. As mentioned in the Introduction, the only impact of inelasticity on their hydrodynamic equations [7] is through the α-dependence of the (steady) granular temperature T s (see Eq. (33) with a 2,s = 0). Thus, it is worth to assess to what extent the previous theoretical results [7] are indicative of what happens when the correct expressions for the transport coefficients and the cooling rate are considered. This is the main motivation of this Section. We assume that the deviations δy α (r, t) = y α (r, t) − y sα (t) are small, where δy α (r, t) denotes the deviations of n, U, and T from their values in the steady homogeneous state. To recover previous linear stability results [37] derived in the undriven case, let us consider the following (reduced) time and space variables: τ = 1 2 n s σ d−1 T s m t, r ′ = 1 2 n s σ d−1 r.(87) The dimensionless time scale τ is a measure of the average number of collisions per particle in the time interval between 0 and t. The unit length introduced in the second equality of (87) corresponds to the mean free path of gas particles. 
A set of Fourier transformed dimensionless variables are then introduced by ρ k (τ ) = δn k (τ ) n s , w k (τ ) = δU k (τ ) T s /m , θ k (τ ) = δT k (τ ) T s ,(88)where δy kα ≡ {ρ k , w k (τ ), θ k (τ )} is defined as δy kα (τ ) = dr ′ e −ik·r ′ δy α (r ′ , τ ).(89) Note that in Eq. (89) the wave vector k is dimensionless. In Fourier space, as expected, Eq. (85) shows that the d − 1 transverse velocity components w k⊥ = w k − (w k · k) k (orthogonal to the wave vector k) decouple from the other three modes and hence can be obtained more easily. Their evolution equation can be written as ∂ ∂τ + 1 2 η * k 2 w k⊥ = 0,(90) where η * = η σ 1−d √ mT s (91) The solution to Eq. (90) is w k⊥ (k, τ ) = w k⊥ (0) exp [Λ ⊥ (k)τ ] ,(92) where Λ ⊥ (k) = − 1 2 η * k 2 .(93) Since the (reduced) shear viscosity coefficient η * is positive, then Λ ⊥ (k) becomes negative for any finite wave number k and so the transversal shear modes of the driven gas are linearly stable. This result contrasts with the ones obtained in the undriven case [37] where it was shown that the transversal shear modes become unstable for values of k smaller than a certain critical wave number. The remaining (longitudinal) modes correspond to ρ k , θ k , and the longitudinal velocity component of the velocity field, w k|| = w k · k (parallel to k). These modes are coupled and obey the equation ∂δy kα (τ ) ∂τ + M αβ δy kβ (τ ) = 0,(94) where δy kα (τ ) denotes now the set ρ k , θ k , w k|| and M is the square matrix M =   0 0 ik 2 √ 2ζ * 0 g + µ * k 2 √ 2(ζ * 0 + 2ξ * ) + D * T k 2 2 d ik(p * + d 2 ζ U ) ikp * C ρ ikp * ν * l k 2   ,(95) where ζ * 0 = ℓζ (0) s 2T s /m , ξ * = mℓξ 2 b T s 2T s /m (96) p * = p s n s T s = 1 + 2 d−2 (1 + α)χφ,(97) and ν * l = ρ s ν l 2σ 1−d √ mT s , D * T = n s D T 2σ 1−d T s /m ,(98)µ * = ρ s dσ 1−d T s √ mT s µ.(99) Here, ρ s = mn s is the mass density. 
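The decoupled transversal modes of Eqs. (90)-(93) decay for any finite wave number, since η* > 0; a one-line check (names ours):

```python
import math

def lambda_perp(eta_star, k):
    """Decay rate of the d-1 transversal shear modes, Eq. (93)."""
    return -0.5 * eta_star * k ** 2

def w_perp(w0, eta_star, k, tau):
    """Transversal shear mode evolution, Eq. (92):
    w_perp(tau) = w_perp(0) * exp(Lambda_perp * tau)."""
    return w0 * math.exp(lambda_perp(eta_star, k) * tau)
```

For any positive η* and nonzero k the exponent is negative, which is the linear stability of the transversal modes stated above.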
In the above equations, it is understood that the transport coefficients η, ν l , D T , and µ are evaluated in the homogeneous steady state. In addition, the quantities g(φ) and C ρ (α, φ) appearing in the matrix M are given by g(φ) = 1 + φ ∂ ∂φ ln χ(φ),(100)C ρ (α, φ) = 1 + φ ∂ ∂φ ln p * (α, φ) = 1 + g(φ) − g(φ) 1 + 2 d−2 (1 + α)φχ(φ) ,(101) where in the last equality use has been made of the ex-plicit expression of p * given by Eq. (97). If one assumes µ * = ζ U = 0, the matrix (95) agrees with the dynamical matrix obtained when the gas is heated by a stochastic thermostat (γ b = 0 but γ b T b = finite and ζ * 0 = ξ * ) [5]. The longitudinal three modes have the form exp[Λ ℓ (k)τ ] for ℓ = 1, 2, 3, where Λ ℓ (k) are the eigenvalues of the matrix M, namely, they are the solutions of the cubic equation Λ 3 + A(k)Λ 2 + B(k)Λ + C(k) = 0,(102) where A(k) = √ 2(ζ * 0 + 2ξ * ) + k 2 (ν * l + D * T ) ,(103)B(k) = k 4 ν * l D * T + k 2 p * C ρ + p * 2 d p * + ζ U + √ 2(ζ * 0 + 2ξ * )ν * l ,(104)C(k) = p * k 2 √ 2C ρ (ζ * 0 + 2ξ * ) − 2 √ 2gζ * 0 + (C ρ D * T − µ * ) k 2 .(105) One of the longitudinal modes (the heat mode) could be unstable for k < k h , where k h is obtained from Eq. (102) when Λ = 0, namely, C(k h ) = 0. The result is k 2 h = √ 2 2gζ * 0 − C ρ (ζ * 0 + 2ξ * ) C ρ D * T − µ * .(106) On the other hand, an analysis of the dependence of k 2 h on the coefficient of restitution α and the volume fraction φ shows that k 2 h < 0 for any value of α and φ. Thus, there are no physical values of k h for which the heat mode becomes unstable. Consequently, all the eigenvalues of the dynamical matrix M have a positive real part and no instabilities are found due to the presence of the external bath. This conclusion agrees with the results obtained in Refs. [5] and [7] for driven granular fluids. 
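Stability of the longitudinal modes can be checked directly from the cubic (102). For modes behaving as exp[Λτ], decay of all three modes is equivalent to the Routh-Hurwitz conditions A > 0, C > 0 and AB > C, and the marginal case C = 0 corresponds to the heat-mode condition C(k h ) = 0 leading to Eq. (106). A sketch using numpy (the coefficient values in the test are illustrative, not computed from Eqs. (103)-(105)):

```python
import numpy as np

def dispersion_roots(A, B, C):
    """Roots Lambda_l of the dispersion cubic (102):
    Lambda**3 + A*Lambda**2 + B*Lambda + C = 0."""
    return np.roots([1.0, A, B, C])

def longitudinal_stable(A, B, C):
    """Routh-Hurwitz criterion: all roots of the cubic have negative
    real parts (all longitudinal modes decay) iff A > 0, C > 0, A*B > C."""
    return A > 0.0 and C > 0.0 and A * B > C
```

Setting C = 0 produces a zero root (a marginal heat mode), which is exactly how the critical wave number k h is located in Eq. (106).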
In summary, the results obtained here including the complete α-dependence of the transport coefficients show no new surprises relative to the earlier works [5,7] by considering the elastic Enskog expressions for the above coefficients. Of course, the quantitative forms for the dispersion relations can be quite different in both (elastic and inelastic) approaches since the impact of dissipation on the transport coefficients and the cooling rate is significant and so, their functional forms differ appreciably from their elastic forms. VIII. DISCUSSION In this paper, we have determined the transport coefficients of a granular fluid driven by a stochastic bath with friction. The results have been obtained within the framework of the (inelastic) Enskog kinetic theory and they are expected to apply over a wide range of densities. Our goal is not only academic since, from a practical standpoint, many of the simulations reported [4][5][6][7] for flowing granular materials have used external driving forces to fluidize the system. For this reason, it would be convenient to provide to simulators with the corresponding expressions of the transport coefficients when the granular fluid is heated by a thermostat. In fact, due to the lack of the above expressions, in most of the cases it is assumed that the forms of the transport coefficients of the driven granular fluid are the same as those given by the elastic Enskog theory [38]. However, as expected from previous theoretical works [15,16], the present results show again that the expressions for the transport coefficients clearly differ from those obtained for ordinary fluids so that, one should use the true inelastic Enskog coefficients to analyze granular flows driven by thermostats. The transport processes considered are those for a driven fluid with small spatial gradients of the hydrodynamic fields. 
In this situation, the Enskog equation has been solved by means of the Chapman-Enskog method [14] up to first order in the spatial gradients. Since these gradients have been assumed to be independent of the coefficient of restitution, the transport coefficients appearing in the hydrodynamic equations hold a priori for arbitrary degree of dissipation, even though the equations themselves are restricted to first order in the gradients. An important but subtle point is the generalization of the driving external forces (which are mainly used in homogeneous situations) to inhomogeneous states. This is a crucial step, since one has to consider situations close to steady homogeneous states to determine the transport coefficients from the Chapman-Enskog expansion. Although the above generalization is a matter of choice, it has important implications for the final expressions of the transport coefficients. For simplicity, in previous works on heated granular gases [15,16] it was assumed that the external driving force has the same form as in the homogeneous case, except that its parameters are local quantities. As a consequence, the parameters of the force are chosen to impose a stationary temperature in the zeroth-order solution (i.e., ∂_t^{(0)}T = 0). However, for general small perturbations around the steady homogeneous state, it is expected that the density and temperature are specified separately in the local reference state f^{(0)}, and so the temperature cannot be stationary at any point of the system (i.e., ∂_t^{(0)}T ≠ 0). This choice is more general than the previous one (∂_t^{(0)}T = 0) and has the advantage of a simpler implementation in computer simulations, since the parameters of the driving external force are constant, even for inhomogeneous states. As mentioned in the Introduction, the fact that ∂_t^{(0)}T ≠ 0 gives rise to conceptual and practical difficulties not present in the case of the choice B.
One of them is that the evaluation of the complete nonlinear dependence of the transport coefficients on dissipation requires in principle the analysis of the hydrodynamic behavior of the unsteady reference state. This involves the numerical integration of the differential equations obeyed by the velocity moments of the zeroth-order distribution f^{(0)} (see, for instance, Eq. (50) for the fourth-degree moment a_2 of f^{(0)}), which is quite an intricate problem. However, given that here we are interested in evaluating the momentum and heat fluxes to first order in the deviations from the steady reference state, the transport coefficients need only be determined to zeroth order in the deviations. As a consequence, the steady-state condition (21) applies and the transport coefficients and the cooling rate can be defined in terms of the hydrodynamic fields in the steady state. Explicit expressions for these quantities have been obtained by considering the leading terms in a Sonine polynomial expansion. These explicit forms have been displayed in Sec. V and Appendix D for the choices A and B, respectively. More specifically, in the case of the choice A, the bulk viscosity λ and the shear viscosity η are given by Eqs. (58) and (60), respectively, the thermal conductivity κ is given by Eqs. (63) and (65), the coefficient μ is given by Eqs. (68) and (69), and the cooling rate ζ is defined by Eqs. (70)-(75). All these expressions clearly show the complex dependence of the set {λ, η, κ, μ, ζ} on the granular temperature T, the coefficient of restitution α, the solid volume fraction φ, and the model parameter ξ_b². In the case of the choice B, our results show that the expressions of λ and η are the same as those obtained from the choice A, but the forms of κ and μ are different (they are given by Eqs. (D1) and (D2), respectively). An important drawback of the results derived from the choice B is that the coefficient μ can be negative (see Fig.
6), although its magnitude is very small. A comparison with recent Langevin dynamics simulations [7], carried out for a granular fluid driven also by a stochastic bath with friction, has been made in Sec. VI. The comparison has been displayed in Fig. 3 for the kinematic viscosity ν, Fig. 4 for the longitudinal viscosity ν_l, and Fig. 5 for the thermal diffusivity D_T. It is quite apparent that, while the predictions of the driven kinetic theory compare very well with simulation data for ν in a wide range of densities, some discrepancies appear in the cases of ν_l and D_T as the gas becomes denser. Surprisingly, in the case of D_T, the comparison agrees better when one uses the elastic form of D_T in the more inelastic system (α = 0.6) studied. We think that this disagreement is in part due to the fact that, while the simulation data have been obtained for small but finite values of the wave number k, the Enskog expressions for the transport coefficients strictly apply only in the limit k → 0. Moreover, given that these discrepancies appear at sufficiently high densities, they could also reflect the limitations of the Enskog equation (which is based on the molecular chaos hypothesis) as the granular fluid becomes denser. With these new expressions for the momentum and heat fluxes and the cooling rate, a closed set of hydrodynamic equations for situations close to homogeneous steady states has been derived. A stability analysis of these linearized hydrodynamic equations with respect to the homogeneous steady state has been carried out to identify the conditions for stability in terms of dissipation. Our results show that the driven homogeneous state is stable for any value of dissipation at sufficiently long wavelengths. This conclusion agrees with previous findings [5,7] obtained by using the elastic expressions of the transport coefficients.
An interesting point is the usefulness of the theoretical results derived in this paper to model experiments performed with boundary driving conditions. As usual in computer simulations [4], in this paper we have fluidized the system by means of a thermostat composed of a friction term, which mimics the presence of an interstitial fluid, and a stochastic force, which models the effect of a vibrating wall. The main advantage of using this type of driving mechanism is the possibility of making theoretical progress. In addition, although the relationship of the latter external force to real vibrating walls is not clear to date, some theoretical results (see, for instance, Fig. 2 of Ref. [39]) obtained for the temperature ratio of a granular impurity immersed in a granular gas heated by the stochastic thermostat compare quite well with molecular dynamics simulations of shaken mixtures [40]. This agreement could stimulate the use of this simple stochastic driving for qualitative comparisons with experimental results. On the other hand, more comparisons between kinetic theory results for heated granular gases and computer simulations performed in realistic vibrating beds are needed before quantitative conclusions can be drawn. Finally, an extension of the results derived in this paper to the more realistic shear flow problem could be an interesting project for the near future. Another possible future work is the extension of the present results to the important subject of granular mixtures. Given the difficulties associated with multicomponent systems, tracer diffusion could perhaps be a good starting point to provide some insight into the general problem. Work along the above lines will be carried out in the near future.

ACKNOWLEDGMENTS

The authors are grateful to Maribel García de Soria and Pablo Maynar for valuable discussions and for sending a preprint of their unpublished results.
The present work has been supported by the Ministerio de Educación y Ciencia (Spain) through Grants No. FIS2010-16587 (V.G., M.G.Ch., and F.V.) and No. MAT2009-14351-C02-02 (F.V.). The first Grant has been partially financed by FEDER funds and by the Junta de Extremadura (Spain) through Grant No. GRU10158. The research of M. G. Chamorro has been supported by the predoctoral fellowship BES-2011-045869 from the Spanish Government (Spain).

∂_t^{(0)} f^{(1)} + L f^{(1)} − (γ_b/m) ∂/∂v·(V f^{(1)}) − (1/2) ξ_b² ∂²/∂v² f^{(1)} = −(∂_t^{(1)} + v·∇) f^{(0)} − J_E^{(1)}[f].  (B1)

Here,

D_t^{(1)} U_i = −(mn)^{−1} ∇_i p,  (B2)

D_t^{(1)} T = −(2p/dn) ∇·U − ζ^{(1)} T,  (B3)

where D_t^{(1)} denotes the first-order contribution to the material derivative. The coefficient e_D is determined by substituting Eq. (C21) into the integral equation (56), multiplying by F(V), and integrating over V. After some algebra one gets the expression (73) for ζ_{11}. Here, as before, we have neglected the contribution proportional to the derivative ∂a_2/∂χ.

FIG. 1. Plot of the reduced granular temperature T_s/T_b versus the volume fraction φ for a two-dimensional (d = 2) granular fluid and two different values of the coefficient of restitution: α = 0.8 (solid line) and α = 0.6 (dashed line). The symbols are the Monte Carlo simulation results (circles for α = 0.8 and triangles for α = 0.6).

FIG. 2. Plot of the steady fourth cumulant a_{2,s} versus the coefficient of restitution α for a two-dimensional (d = 2) granular fluid with φ = 0.25. The line is the theoretical result and the symbols are the Monte Carlo simulation results.

For the choice B of thermostat (∂_t^{(0)}T = 0), the transport coefficients of the heat flux are different from those of the former choice; they are displayed in Appendix D. While the expressions of η and λ are also given by Eqs. (58)-(60), the forms of κ and μ are different from those derived from the choice A.

FIG. 3. Plot of the kinematic viscosity ν = η/ρ as a function of the volume fraction φ for α = 0.6. The solid line is the theoretical prediction given by Eq.
(60) while the dashed line is the theoretical result obtained by assuming the elastic form of the shear viscosity η. Symbols are the simulation results obtained by Gradenigo et al. [7] from the static (circles) and dynamical (triangle) correlations of transversal shear modes.

FIG. 4. Plot of the longitudinal viscosity ν_l as a function of the volume fraction φ for two values of the coefficient of restitution: α = 0.8 (panel a) and α = 0.6 (panel b). The solid lines are the theoretical predictions for ν_l obtained by using Eqs. (58) and (60), while the dashed lines are the theoretical results obtained by assuming the elastic forms of the shear viscosity η and the bulk viscosity λ. Symbols are the simulation results obtained by Gradenigo et al. [7] by fitting their numerical data for the dynamical correlations of the longitudinal modes.

FIG. 5. Plot of the thermal diffusivity D_T = 2κ/dn as a function of the volume fraction φ for two values of the coefficient of restitution: α = 0.8 (panel a) and α = 0.6 (panel b). Symbols are the simulation results obtained by Gradenigo et al. [7] by fitting their numerical data for the dynamical correlations of the longitudinal modes. The solid lines are the theoretical predictions for D_T obtained by using Eqs. (63)-(65), the dotted lines are the theoretical predictions for D_T obtained by using Eq. (D1), and the dashed lines are the theoretical results obtained by assuming the elastic form of the thermal conductivity κ.

FIG. 6. (color online) Plot of the dimensionless quantity nμ/Tκ versus the coefficient of restitution α for hard disks (d = 2) with m = 1, σ = 0.01, γ_b = T_b = 1 and two different values of the solid volume fraction φ: (a) φ = 0.1, and (b) φ = 0.3. The dashed line corresponds to the results obtained by considering the choice B for φ = 0.1. Note that μ = 0 in the elastic case (α = 1).

FIG. 7.
(color online) Plot of the magnitude of the first-order contribution ζ_U to the cooling rate versus the coefficient of restitution α for hard disks (d = 2) with m = 1, σ = 0.01, γ_b = T_b = 1 and three different values of the solid volume fraction φ: (a) φ = 0.1, (b) φ = 0.3, and (c) φ = 0.5. Note that ζ_U = 0 in the elastic case (α = 1).

In particular, the simulation data for φ = 0.3 and 0.5 in the panel (b) of Fig. 4 were obtained for kσ = 0.4 and 0.5, respectively. In this sense, if one would extrapolate the data shown in

J_E^{(1)}[f] means the first-order contribution to the expansion of the Enskog collision operator and L is the linear operator Lf^{(1)} = − J + U · ∇ and ζ^{(1)} is the first-order contribution to the cooling rate. Use of Eqs. (B3) in (B1) and taking into account the form of J

Appendix A: Behavior of the fourth cumulant of the zeroth-order distribution in the vicinity of the steady state

Although the determination of a_2(ξ*) requires numerical work, one can obtain this quantity analytically in the vicinity of the steady state by means of the derivative ∂a_2/∂ξ* evaluated at the steady state. This derivative appears in the expressions of the thermal conductivity κ (see Eq. (65)) and the first-order contribution ζ_U to the cooling rate (see Eq. (73)). This Appendix addresses the evaluation of the above derivative.

In order to determine ∂a_2/∂ξ* from Eq. (50), we first assume that ϕ can be well described by the lowest Sonine approximation (30). Then, approximate forms for ζ₀* = (2/d)μ_2 and μ_4 can be obtained when one uses the distribution (30) and neglects nonlinear terms in a_2. The results are [20]

where

With the use of the approximations (A1) and retaining only linear terms in a_2 in Eq. (50), the derivative ∂a_2/∂ξ* is given by

However, some care must be taken in Eq. (A5) at the steady state, since the numerator and denominator of Eq. (A5) vanish and so the corresponding expression for the derivative ∂a_2/∂ξ* becomes indeterminate.
This difficulty can be solved by means of l'Hôpital's rule. After some algebra, it is straightforward to see that the steady-state value of the derivative ∆ ≡ (∂a_2/∂ξ*)_s obeys the quadratic equation (A6), where T_s* = T_s/T_b. Since a_{2,s} is in general very small, it is expected that the magnitude of ∆ is also quite small. An analysis of the solutions to Eq. (A6) shows that in general one of its roots is much larger than a_{2,s} while the other is of the order of a_{2,s}. We take the latter as the physical root of the quadratic equation (A6).

Appendix B: First-order approximation

The application of the Chapman-Enskog method up to the first-order approximation follows similar mathematical steps as those made before in the undriven case [12,13,34]. Some details of this derivation are provided in this Appendix. Up to first order in the expansion, the velocity distribution function f^{(1)} obeys the kinetic equation

where

Here, φ and p* are given by Eqs. (34) and (97), respectively, and K_i is the operator

where ζ^{(0)} is the cooling rate evaluated at zeroth order. In addition, upon deriving Eqs. (B5)-(B9), use has been made of the spherical symmetry of f^{(0)}, which allows us to write the tensor derivative of the flow field ∂_i U_j in terms of its independent trace and traceless parts, e.g.,

and a similar analysis of the contribution from

The solution to Eq. (B5) can be written in the form (52), where A, B, C_ij, and D are unknown functions of the peculiar velocity. Since the gradients of the hydrodynamic fields are all independent, Eq. (B5) can be separated into independent equations for each coefficient. This yields the following set of linear, inhomogeneous integral equations:

Upon deriving Eqs. (B14)-(B17) use has been made of the result

As noted in Section V, in the first order of the deviations from the steady state we only need to know the transport coefficients to zeroth order in the deviations (steady-state conditions).
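The root-selection step of Appendix A — keeping the root of the quadratic (A6) that is of the order of the small cumulant a_{2,s} — has a simple numerical counterpart. The sketch below uses a toy quadratic with widely separated roots (its coefficients are invented for illustration, not the actual coefficients of Eq. (A6)) and computes the small root through Vieta's formula, which avoids the catastrophic cancellation the naive quadratic formula would suffer.

```python
import math

def physical_root(a, b, c, scale):
    """Return the root of a*x^2 + b*x + c = 0 whose magnitude is
    comparable to `scale` (playing the role of a_{2,s}).  The large root
    is computed without cancellation, the small one via Vieta's formula
    r_big * r_small = c/a."""
    disc = math.sqrt(b * b - 4 * a * c)
    r_big = (-b - math.copysign(disc, b)) / (2 * a)  # larger-magnitude root
    r_small = c / (a * r_big)
    return r_small if abs(r_small - scale) < abs(r_big - scale) else r_big

# toy quadratic with roots 1e-3 (order of a_{2,s}) and 1e3
a2s = 1e-3
r = physical_root(1.0, -(1e3 + 1e-3), 1.0, a2s)
print(r)
```

The same selection rule (pick the root closest in magnitude to a_{2,s}) is what the text applies to Eq. (A6).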
This means that the term

Appendix C: Kinetic contributions to the transport coefficients

In this Appendix we determine from Eqs. (53)-(56) the kinetic contributions to the transport coefficients η, κ, and μ, as well as the first-order contribution ζ_U to the cooling rate. Given that all these coefficients are evaluated in the steady state, the subscript s appearing along the main text will be omitted in this Appendix for the sake of brevity.

We start with the shear viscosity η. Its kinetic part η_k is given by

where

To obtain η_k, we multiply Eq. (55) by D_ij and integrate over velocity. The result is

where

The collision integral on the right-hand side of Eq. (C3) has been evaluated in previous works [12,13]:

Thus, the kinetic part η_k can be written as

In order to get an explicit expression for η_k, one has to evaluate the (reduced) collision frequency ν_η*. It can be evaluated by considering the leading terms in a Sonine polynomial expansion of the function C_ij(V). Here, we have considered a recent modified version of the standard method [35,41] that yields good agreement with computer simulations even for quite strong values of dissipation [42]. The final form (60) of the shear viscosity η is obtained when one takes into account the relation (45).

The kinetic parts κ_k and μ_k of the transport coefficients characterizing the heat flux are defined, respectively, as

where

We obtain first the kinetic part κ_k. It is obtained by multiplying Eq. (53) by S(V) and integrating over V. The result is

where

The right-hand side of Eq. (C10) is given by

where use has been made of Eq. (47). The last two terms on the right-hand side of Eq. (C12) can be evaluated more explicitly, and the result is [34]

With the above results, the kinetic part κ_k can finally be written as

where κ_0 is the low-density value of the thermal conductivity of an elastic gas (defined by Eq. (64)). The expression (C15) for κ_k is still exact.
In order to get explicit results, one considers the form (32) for ζ^{(0)} and evaluates ν_κ by considering again the leading terms in a Sonine polynomial expansion of A(V). With these approaches, one gets the expression (67) for ν_κ, while

where ζ_M is defined by Eq. (66). Use of (C16) in Eq. (C15) gives the final result.

The evaluation of the coefficient μ is quite intricate, since it involves the derivatives ∂a_2/∂ξ* and ∂a_2/∂χ. Since both derivatives are in general very small, for the sake of simplicity we will neglect contributions proportional to those derivatives in the calculation of the coefficient μ. With this approximation, in particular, the term n∂_n f^{(0)} appearing in Eq. (B7) becomes simply n∂_n f^{(0)} = f^{(0)}. In order to determine μ_k, multiply Eq. (54) by S(V) and integrate over velocity to get

where

The last term on the right-hand side of Eq. (C17) is

The final expression of μ_k is obtained from Eq. (C17) when one substitutes Eq. (C14) into Eq. (C17). However, this expression is not explicit unless one knows the collision frequency ν_μ. To determine it, one takes the leading terms in a Sonine polynomial expansion of B(V) and gets ν_μ = ν_κ. This finally yields Eq. (69) for μ_k.

We consider finally the first-order contribution ζ_U to the cooling rate. This coefficient is given by Eq. (71), where

and the unknown D verifies the integral equation (56). An approximate solution to (56) can be obtained by taking the Sonine approximation

where

Appendix D: Transport coefficients for the choice B

In this Appendix we display the expressions for the Navier-Stokes transport coefficients η, λ, κ, and μ by using the choice B defined by the condition ∂_t^{(0)}T = 0:

∂_φ χ (1 + α) [1 + (1/2) φ ∂_φ ln χ] [α(α − 1) + (a_{2,s}/6)(10 + 2d − 3α + 3α²)],

where the collision frequency ν_κ is defined by Eq. (67). In the steady state, the left-hand side of Eqs. (B14)-(B17) vanishes.
The differential equations for the transport coefficients thus become simple coupled algebraic equations. They are given by Eqs. (53)-(56).

[1] X. Yang, C. Huan, D. Candela, R. W. Mair, and R. L. Walsworth, Phys. Rev. Lett. 88, 044301 (2002); C. Huan, X. Yang, D. Candela, R. W. Mair, and R. L. Walsworth, Phys. Rev. E 69, 041302 (2004).
[2] A. R. Abate and D. J. Durian, Phys. Rev. E 74, 031308 (2006).
[3] M. Schröter, D. I. Goldman, and H. L. Swinney, Phys. Rev. E 71, 030301(R) (2005).
[4] See for instance, A. Puglisi, V. Loreto, U. M. B. Marconi, A. Petri, and A. Vulpiani, Phys. Rev. Lett. 81, 3848 (1998); A. Puglisi, V. Loreto, U. Marini Bettolo Marconi, and A. Vulpiani, Phys. Rev. E 59, 5582 (1999); R. Cafiero, S. Luding, and H. J. Herrmann, Phys. Rev. Lett. 84, 6014 (2000); A. Prevost, D. A. Egolf, and J. S. Urbach, Phys. Rev. Lett. 89, 084301 (2002); A. Puglisi, A. Baldassarri, and V. Loreto, Phys. Rev. E 66, 061305 (2002); P. Visco, A. Puglisi, A. Barrat, E. Trizac, and F. van Wijland, J. Stat. Phys. 125, 533 (2006); A. Fiege, T. Aspelmeier, and A. Zippelius, Phys. Rev. Lett. 102, 098001 (2009); A. Sarracino, D. Villamaina, G. Gradenigo, and A.
Puglisi, Europhys. Lett. 92, 34001 (2010); W. T. Kranz, M. Sperl, and A. Zippelius, Phys. Rev. Lett. 104, 225701 (2010); A. Puglisi, A. Gnoli, G. Gradenigo, A. Sarracino, and D. Villamaina, J. Chem. Phys. 136, 014704 (2012).
[5] T. P. C. van Noije, M. H. Ernst, E. Trizac, and I. Pagonabarraga, Phys. Rev. E 59, 4326 (1999).
[6] K. Vollmayr-Lee, T. Aspelmeier, and A. Zippelius, Phys. Rev. E 83, 011301 (2011).
[7] G. Gradenigo, A. Sarracino, D. Villamaina, and A. Puglisi, J. Stat. Mech. P08017 (2011).
[8] D. J. Evans and G. P. Morriss, Statistical Mechanics of Nonequilibrium Liquids (Academic Press, London, 1990).
[9] J. W. Dufty, A. Santos, J. J. Brey, and R. F. Rodríguez, Phys. Rev. A 33, 459 (1986).
[10] V. Garzó, A. Santos, and J. J. Brey, Physica A 163, 651 (1990).
[11] V. Garzó and A. Santos, Kinetic Theory of Gases in Shear Flows. Nonlinear Transport (Kluwer Academic, Dordrecht, 2003).
[12] V. Garzó and J. W. Dufty, Phys. Rev. E 59, 5895 (1999).
[13] J. F. Lutsko, Phys. Rev. E 72, 021306 (2005).
[14] S. Chapman and T. G. Cowling, The Mathematical Theory of Nonuniform Gases (Cambridge University Press, Cambridge, 1970).
[15] V.
Garzó and J. M. Montanero, Physica A 313, 336 (2002).
[16] V. Garzó, Phys. Rev. E 84, 012301 (2011).
[17] J. A. McLennan, Introduction to Nonequilibrium Statistical Mechanics (Prentice-Hall, New Jersey, 1989).
[18] N. V. Brilliantov and T. Pöschel, Phys. Rev. E 61, 1716 (2000); J. W. Dufty and V. Garzó, J. Stat. Phys. 105, 723 (2001); J. W. Dufty, J. J. Brey, and J. Lutsko, Phys. Rev. E 65, 051303 (2002); J. W. Dufty and J. J. Brey, J. Stat. Phys. 109, 433 (2002); A. Baskaran, J. W. Dufty, and J. J. Brey, J. Stat. Mech. (2007) P12002; J. W. Dufty, A. Baskaran, and J. J. Brey, Phys. Rev. E 77, 031310 (2008); A. Baskaran, J. W. Dufty, and J. J. Brey, Phys. Rev. E 77, 031311 (2008).
[19] M. I. García de Soria, P. Maynar, and E. Trizac, e-print arXiv:1211.5790.
[20] T. P. C. van Noije and M. H. Ernst, Granular Matter 1, 57 (1998).
[21] D. R. M. Williams and F. C. MacKintosh, Phys. Rev. E 54, R9 (1996).
[22] V. Garzó, S. Tenneti, S. Subramaniam, and C. M. Hrenya, J. Fluid Mech. 712, 129 (2012).
[23] H. Hayakawa, Phys. Rev. E 68, 031304 (2003).
[24] M. G. Chamorro, F. Vega Reyes, and V.
Garzó, in 28th International Symposium on Rarefied Gas Dynamics 2012, edited by M. Mareschal and A. Santos, AIP Conf. Proc. Vol. 1501 (2012), pp. 1024-1030.
[25] M. I. García de Soria, P. Maynar, and E. Trizac, Phys. Rev. E 85, 051301 (2012).
[26] G. A. Bird, Molecular Gas Dynamics and the Direct Simulation Monte Carlo of Gas Flows (Clarendon, Oxford, 1994).
[27] S. Torquato, Phys. Rev. E 51, 3170 (1995).
[28] J. M. Montanero and A. Santos, Granular Matter 2, 53 (2000).
[29] U. Marini Bettolo Marconi, P. Tarazona, and F. Cecconi, J. Chem. Phys. 126, 164904 (2007).
[30] I. Goldhirsch, Annu. Rev. Fluid Mech. 35, 267 (2003).
[31] A. Santos, V. Garzó, and J. Dufty, Phys. Rev. E 69, 061303 (2004).
[32] J. F. Lutsko, Phys. Rev. E 73, 021302 (2006).
[33] V. Garzó, Phys. Rev. E 73, 021304 (2006).
[34] V. Garzó, e-print arXiv:1204.5114.
[35] V. Garzó, A. Santos, and J. M. Montanero, Physica A 376, 94 (2007).
[36] J. J. Brey, J. W. Dufty, C. S. Kim, and A. Santos, Phys. Rev. E 58, 4638 (1998).
[37] V. Garzó, Phys. Rev. E 72, 021106 (2005).
[38] J. Ferziger and H. Kaper, Mathematical Theory of Transport Processes in Gases (North-Holland, Amsterdam, 1972).
[39] V. Garzó, Eur. Phys. J. E 29, 261 (2009).
[40] M. Schröter, S. Ulrich, J. Kreft, J. B. Swift, and H. L. Swinney, Phys.
Rev. E 74, 011307 (2006).
[41] V. Garzó, F. Vega Reyes, and J. M. Montanero, J. Fluid Mech. 623, 387 (2009).
[42] J. M. Montanero, A. Santos, and V. Garzó, Physica A 376, 75 (2007).
A linear state feedback switching rule for global stabilization of switched nonlinear systems about a nonequilibrium point

Oleg Makarenkov
Department of Mathematical Sciences, The University of Texas at Dallas, 800 West Campbell Road, Richardson, TX 75080

3 Mar 2013

Abstract. A switched equilibrium of a switched system of two subsystems is a point at which the vector fields of the two subsystems point strictly towards one another. Using the concept of stable convex combination that was developed by Wicks-Peleties-DeCarlo (1998) for linear systems, Bolzern-Spinelli (2004) offered a design of a state feedback switching rule that is capable of stabilizing an affine switched system to any switched equilibrium. The state feedback switching rule of Bolzern-Spinelli gives a nonlinear (quadratic) switching threshold passing through the switched equilibrium. In this paper we prove that the switching threshold (i.e. the associated switching rule) can be chosen linear if each of the subsystems of the switched system under consideration is stable.

Keywords: switched system, switched equilibrium, global quadratic stabilization
2010 MSC: 34H15, 93D15, 34A36

1. Introduction

Using the concept of stable convex combination that was developed by Wicks et al [12] for linear systems, Bolzern-Spinelli [2] offered a design of a state feedback switching rule that is capable of stabilizing an affine switched system¹

ẋ = A_σ x + b_σ,  x ∈ ℝⁿ,  σ ∈ {−1, 1},  (1)

to any point x₀ (called a switched equilibrium) that satisfies

λ(A₊ x₀ + b₊) + (1 − λ)(A₋ x₀ + b₋) = 0,  (2)

for some λ ∈ [0, 1].
If the matrix λA + + (1 − λ)A − is Hurwitz, then, according to Bolzern-Spinelli [2], the switching signal σ(x) can be defined as σ(x) = arg min i∈{−1,1} {V (x)(A i x + b i )} = = sign (V (x)(A − x + b − ) − V (x)(A + x + b + )) ,(3) where V is the quadratic Lyapunov function of the linear systeṁ x = λ A + x + b + + (1 − λ) A − x + b − . When A − = A + , the rule (3) reduces to σ(x) = sign V (x)b − − V (x)b + ,(4) whose switching threshold {x ∈ R n : V (x)b − − V (x)b + } x 0 is a hyperplane, but in general the state feedback switching rule (3) gives a nonlinear switching threshold (quadratic surface) passing through the switched equilibrium x 0 . Email address: [email protected] (Oleg Makarenkov) 1 Bolzern-Spinelli [2] actually considered a slightly more general case σ : [0, ∞) → {1, ..., m}, but in this paper we stick to just two discrete states. In this paper we provide a wider class of switched systems (1) that can be stabilized to a switched equilibrium by a linear switching rule. Specifically, we show that the nonlinear switching rule (3) can always be replaced with the linear one σ(x) = sign x − x 0 , V (x 0 )(A − (x 0 ) + b − ) T ,(5) when the subsystemsẋ = A + x andẋ = A − x admit a common quadratic Lyapunov function. Here V (x 0 ) doesn't depend on x 0 because V is assumed quadratic. We also note that (5) coincides with (4) when A − = A + . The paper is organized as follows. In the next section of the paper we discuss the main idea behind the switching rule (3), which is based on construction of suitable sets Ω − and Ω + , such that any switching rule σ(x) with the property σ(x) = −1 if x ∈ Ω − , 1 if x ∈ Ω + , stabilizes (1) to x 0 . In section 3 we prove our main result (Theorem 3.1), which offers a linear state feedback switching rule to stabilize a nonlinear switched systeṁ x = f σ (x), x ∈ R n , σ ∈ {−1, 1},(6) to a switched equilibrium x 0 . We recall that, according to Demidovich [3, Ch. 
IV, §281], nonlinear systems (6) admit a common quadratic Lyapunov function, if the simmetrized derivative f σ x (x) + f σ x (x) T is uniformly negative definite uniformly in x ∈ R n , and σ ∈ {−1, 1}, see also Pavlov et al [9]. The switching rule (14) proposed in Theorem 3.1 takes the form (5) when switched system (6) is affine. The main discovery used in Theorem 3.1 is that, for subsystems of (6) that admit a common quadratic Lyapunov function, the boundaries of Ω − and Ω + are contained in ellipsoids that touch one another at the point x 0 , see Fig. 2. The proof uses a standard Lyapunov stability theorem that is also implicitly used in Bolzern-Spinelli [2]. Specifically, we use a Lyapunov stability theorem for Filippov systems with smooth Lyapunov functions, which is a particular case of more general results available e.g. in Shevitz-Paden [11] or M.-Aguilara-Garcia [7]. But since deriving the required Lyapunov theorem (Theorem 3.2) from [7,11] is not very straightforward (and since we didn't find the exact required theorem elsewhere in the literature), we added a proof for completeness, that we placed in the Appendix section. In section 4 we consider an application of Theorem 3.1 to a model of boost converter and, for illustration purposes, also implement the Bolzern-Spinelli rule (3) for the same model. Some further discussion on when the switching rule (5) coincides with (3) is carried out in the conclusions section. 2. The idea of Wicks et al [12] and Bolzern-Spinelli [2] Recall that x 0 is a switched equilibrium for the nonlinear switched system (6), if there exists λ 0 ∈ [0, 1] such that λ 0 f − (x 0 ) + (1 − λ 0 ) f + (x 0 ) = 0.(7) Assume that the equilibrium x 0 of the convex combinatioṅ x = λ 0 f − (x) + (1 − λ 0 ) f + (x).(8) is asymptotically stable and let V be the respective Lyapunov function satisfying V (x) (λ 0 f − (x) + (1 − λ 0 ) f + (x)) < 0 for all x x 0 . 
(9) The fundamental idea of Bolzern-Spinelli [2] (who extended Wicks et al [12] to affine linear systems) is that, for (6) to stabilize to x_0, the switching rule σ(x) must take the value σ(x) = −1 in the region

Ω− = {x : V′(x) f−(x) < 0} (10)

and the value σ(x) = +1 in the region

Ω+ = {x : V′(x) f+(x) < 0}. (11)

The following lemma discusses the geometry of the intersection Ω− ∩ Ω+; in particular, it clarifies that there are situations where one cannot draw a hyperplane in Ω− ∩ Ω+ passing through x_0 (Fig. 1a) and situations where one can (Fig. 1b). The existence of a hyperplane in Ω− ∩ Ω+ passing through x_0 corresponds to the existence of a linear switching rule σ(x) that stabilizes (6) to x_0. Therefore, what this paper will really prove in Section 3 is that it is Fig. 1b which takes place when both of the subsystems of (6) are stable.

Lemma 2.1. (ideas of [12, 2]) Consider f−, f+ ∈ C^1(R^n, R^n). Let x_0 be a switched equilibrium for the vector fields f− and f+, i.e. (7) holds. Assume that the equilibrium x_0 of system (8) is asymptotically stable and the respective Lyapunov function V ∈ C^1(R^n, R) satisfies (9). Then the sets Ω− and Ω+ satisfy the properties:

1) Ω− ∪ Ω+ ∪ {x_0} = R^n, Ω− ∪ Ω+ ≠ R^n,
2) ∂Ω−\{x_0} ⊂ Ω+, ∂Ω+\{x_0} ⊂ Ω−,
3) x_0 ∈ ∂Ω−, x_0 ∈ ∂Ω+.

Proof. Part 1. Follows directly from (9). Part 2. Consider x ∈ ∂Ω−. Then x ∉ Ω− because Ω− is open. Then x ∈ Ω+ by Part 1. The property ∂Ω+\{x_0} ⊂ Ω− can be proved by analogy. Part 3. It is sufficient to show that V′(x_0) = 0. To observe this, fix an arbitrary j ∈ {1, ..., n} and consider the vector ξ^j ∈ R^n defined as ξ^j_i = 0 for i ≠ j, and ξ^j_j = 1.
Since V(x) > 0, x x 0 , we have 0 < V(x 0 + kξ j ) − V(x 0 ) = V (x 0 + k * ξ j )ξ j · k = = ∂V ∂x j (x 0 + k * ξ j )k, 0 < V(x 0 − kξ j ) − V(x 0 ) = −V (x 0 − k * * ξ j )ξ j · k = = − ∂V ∂x j (x 0 − k * * ξ j )k, for any k > 0 and for some k * , k * * ∈ [0, k] (that depend on k). Passing to the limit as k → 0, one gets ∂V ∂x j (x 0 ) = 0. The proof of the lemma is complete. The main result In this section we assume that the switched equilibrium x 0 admits a common quadratic Lyapunov function V(x) = (x − x 0 ) T P(x − x 0 ) with respect to each of the two systemṡ x = f − (x) − f − (x 0 ) andẋ = f + (x) − f + (x 0 ),(12) where P is an n×n symmetric matrix and the following standard properties hold: V (x) ( f − (x) − f − (x 0 )) ≤ −α x − x 0 2 , V (x) ( f + (x) − f + (x 0 )) ≤ −α x − x 0 2 ,(13) for some fixed constant α > 0. Theorem 3.1. Consider f − , f + ∈ C 1 (R n , R n ) . Let x 0 be a switched equilibrium for the vector fields f + and f − , i.e. (7) holds. Assume that the systems of (12) admit a common quadratic Lyapunov function V ∈ C 2 (R n , R) that satisfies (13). Then the switching signal σ(x) = sign x − x 0 , V (x 0 ) f − (x 0 ) T(14) makes x 0 quadratically globally stable switched equilibrium of switched system (6). Note that rule (14) takes the form (5) when the nonlinear switched system (6) takes the form (1). Also, using (7) the switching rule (14) can be rewritten as σ(x) = sign x − x 0 , V (x 0 ) f − (x 0 ) − f + (x 0 ) T . In order to prove the theorem, we introduce two sets Ω − α = x ∈ R n : −α x − x 0 2 + V (x) f − (x 0 ) < 0 , Ω + α = x ∈ R n : −α x − x 0 2 + V (x) f + (x 0 ) < 0 and establish the following lemma about the relative properties of the sets Ω i α and Ω i as introduced in (10)-(11). Lemma 3.1. Assume that the conditions of Theorem 3.1 hold. 
Then Ω−α and Ω+α satisfy the following properties:

1) Ω− ⊃ Ω−α, Ω+ ⊃ Ω+α,
2) x_0 ∈ ∂Ω−α, x_0 ∈ ∂Ω+α,
3) both ∂Ω−α and ∂Ω+α are ellipsoids,
4) the hyperplane σ(x) = 0 is tangent to both Ω−α and Ω+α at x_0,
5) Ω−α ⊂ {x : σ(x) < 0}, Ω+α ⊂ {x : σ(x) > 0}.

The notations and statements of Lemma 3.1 are illustrated in Fig. 2.

Proof. Part 1. Let x ∈ Ω−α. Then

V′(x) f−(x) = V′(x)(f−(x) − f−(x_0)) + V′(x) f−(x_0) ≤ −α‖x − x_0‖² + V′(x) f−(x_0) < 0.

Therefore, x ∈ Ω−. The proof for Ω+α and Ω+ is analogous.

Part 2. Follows from V′(x_0) = 0, established in the proof of Part 3 of Lemma 2.1.

Part 3. We execute the proof for x_0 = 0; the proof in the general case doesn't differ. The change of coordinates y = x − ∆ transforms the equation

−α‖x‖² + V′(x) f−(0) = 0

into

−α‖y‖² − 2α⟨∆, y⟩ + 2⟨P f−(0), y⟩ − α‖∆‖² + 2⟨∆, P f−(0)⟩ = 0.

If ∆ = P f−(0)/α, then we further get

−α‖y‖² − (1/α)‖P f−(0)‖² + (2/α)‖P f−(0)‖² = 0,

which is the equation of an ellipsoid centered at 0 of radius (1/α)‖P f−(0)‖. The proof for ∂Ω+α is analogous.

Part 4. This follows from the equality

d/dx [ −α‖x − x_0‖² + V′(x) f−(x_0) ] |_{x = x_0} = V′(x_0) f−(x_0)

and the property (7) of the switched equilibrium.

Part 5. Let H(x) = −α‖x − x_0‖² + V′(x) f−(x_0). The interior of the ellipsoid ∂Ω−α corresponds to H(x) > 0. Therefore, the exterior of the ellipsoid ∂Ω−α (which, by definition, coincides with the set Ω−α) corresponds to H(x) < 0. This proves the statement of Part 5 for Ω−α. Since (1 − λ_0) f+(x_0) = −λ_0 f−(x_0) by (7), the proof for Ω+α follows the same lines. The proof of the lemma is complete.

The proof of our main result uses the following Lyapunov stability theorem for discontinuous systems with smooth Lyapunov functions, which is implicitly used in [12, 2].

Theorem 3.2. (Lyapunov stability theorem for discontinuous systems with smooth Lyapunov functions; similar to [11, Theorem 3.1], [7, Theorem 2.3]) Consider a system of differential equations with discontinuous right-hand side

ẋ = g(x), with g(x) = g+(x) if H(x) > 0, g−(x) if H(x) < 0, x ∈ R^n, (15)

where g−, g+, and H are C^1-functions. Consider x_0 ∈ R^n satisfying H(x_0) = 0. Let V be a C^1-smooth Lyapunov function with V(x_0) = 0 and V(x) > 0 for x ≠ x_0.
Consider a piecewise continuous scalar function x → w(x), strictly positive for x ≠ x_0, such that for any ρ > 0 there exists ε > 0 for which w(x) ≥ ε as long as ‖x − x_0‖ ≥ ρ. If V′(x)ξ ≤ −w(x) for any ξ ∈ K[g](x) and any x ≠ x_0, then x_0 is an asymptotically globally stable stationary point of (15). Here K[g](x) stands for the convexification of the discontinuous function g at x, see e.g. Shevitz-Paden [11]. The proof of Theorem 3.2 is given in the Appendix.

Figure 3: Boost converter from Fribourg-Soulat [4] and Beccuti et al [1].

Proof of Theorem 3.1. We will show that the conditions of Theorem 3.2 hold with

w(x) = −V′(x) f−(x), if σ(x) < 0,
       −max{V′(x) f−(x), V′(x) f+(x)}, if σ(x) = 0,
       −V′(x) f+(x), if σ(x) > 0.

If x ∈ D−\{x_0}, then x ∈ Ω−α ⊂ Ω− by statements 5 and 1 of Lemma 3.1, which implies w(x) > 0. Analogously, w(x) > 0 if x ∈ D+\{x_0}. This implies that max_{x : ‖x − x_0‖ = ρ} w(x) is a positive function of ρ that approaches 0 as ρ → 0. Since K[f](x) = {f−(x)} when σ(x) < 0, and K[f](x) = {f+(x)} when σ(x) > 0, the condition V′(x)ξ ≤ −w(x) of Theorem 3.2 holds for σ(x) ≠ 0. Consider σ(x) = 0. Then each ξ ∈ K[f](x) has the form ξ = λ f−(x) + (1 − λ) f+(x), where λ is a constant from the interval [0, 1]. We have

V′(x)ξ = λ V′(x) f−(x) + (1 − λ) V′(x) f+(x) ≤ max{V′(x) f−(x), V′(x) f+(x)} = −w(x),

which completes the proof of the theorem.

Application to a model of boost converter

Consider a dc-dc boost converter of Fig. 3 with a switching feedback σ(x). Denoting the inductor current i_L by x_1 and the capacitor voltage u_C by x_2, the differential equations of the converter read (see e.g. Fribourg-Soulat [4], Beccuti et al [1])

ẋ_1 = ( −(r_L + σ r_0 r_C/(r_0 + r_C)) x_1 − σ (r_0/(r_0 + r_C)) x_2 + u_s ) / x_L,
ẋ_2 = ( σ (r_0/(r_0 + r_C)) x_1 − x_2/(r_0 + r_C) ) / x_C,  σ ∈ {0, 1}. (16)

Let us view the right-hand side of (16) with σ = 0 and σ = 1 as f−(x) and f+(x), respectively. The equation (7) for the switched equilibrium x_0 yields the system (17) below, which can be solved for (x_01, λ_0) when the reference voltage x_02 is fixed.
−r_L x_01 + u_s − (1 − λ_0)(r_0 r_C/(r_0 + r_C)) x_01 − (1 − λ_0)(r_0/(r_0 + r_C)) x_02 = 0,
−x_02 + (1 − λ_0) r_0 x_01 = 0. (17)

The conditions of Theorem 3.1 hold with the Lyapunov function

V(x) = (1/(2 x_C)) (x_1 − x_01)² + (1/(2 x_L)) (x_2 − x_02)².

Therefore,

V′(x_0) f−(x_0) = ( (1/x_C)( −(r_L/x_L) x_01 + u_s/x_L ), −(1/x_L)(1/(x_C(r_0 + r_C))) x_02 ),

whose transpose will be denoted by n. Plugging n into (14), we conclude that any point x_0 that satisfies the switched equilibrium condition (17) with λ_0 ∈ (0, 1) can be stabilized using the switching rule

σ(x) = 1, if (x − x_d) n > 0,
       0, if (x − x_d) n < 0. (18)

For comparison, Fig. 4 (bottom) shows stabilization of (16) to the switched equilibrium x_0 = (0.79, 10) using the switching rule (3), which can be shown to simplify to

σ(x) = sign( r_L r_C x_1² − (x_01 r_L r_C − x_02) x_1 − x_01 x_2 ). (20)

The parameters (19) are slightly artificial, but simulations similar to Fig. 4 are achieved in the case of more realistic parameters, e.g. taken from [1], [4], or [5]. The parameters (19) are chosen in such a way that the nonlinear behavior of the Bolzern-Spinelli rule (20) is clearly seen in Fig. 4 (bottom). The top and bottom figures of Fig. 4 turn out to be indistinguishable (on the screen) for the parameters from [1, 4, 5].

Conclusions

In this paper we showed that the switching rule (3) of Bolzern-Spinelli [2] for quadratic stabilization of a switched equilibrium x_0 of switched system (1) can be replaced by a linear switching rule (5) when the subsystems of (1) admit a common quadratic Lyapunov function. Moreover, our main result (Theorem 3.1) applies to nonlinear switched systems (6), complementing the work by Mastellone et al [8], which proposes a nonlinear extension of Bolzern-Spinelli [2] in the case where the subsystems of (6) are shifts of one another (at the same time, the work [8] addresses the case of an arbitrary number of subsystems, while the present paper focuses on just two subsystems).
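The linear rule is easy to exercise numerically. The sketch below is not from the paper; it integrates a toy planar system with A_+ = A_− = −I and b_± = (∓1, 0), for which x_0 = 0 is a switched equilibrium with λ = 1/2 and V(x) = ‖x‖² is a common quadratic Lyapunov function. Following the boost-converter computation above, V′(x_0) is read as the action of the matrix P (here P = I), so the threshold normal is P f−(x_0); all constants are illustrative assumptions.

```python
# Toy planar affine switched system dx/dt = -x + b_sigma, sigma in {-1, +1}.
# Both subsystems share A = -I, so V(x) = |x|^2 is a common quadratic
# Lyapunov function (P = I). All numbers are illustrative.
b_minus = (1.0, 0.0)   # f_-(x) = -x + b_-
b_plus = (-1.0, 0.0)   # f_+(x) = -x + b_+
x0 = (0.0, 0.0)        # switched equilibrium: (f_-(x0) + f_+(x0))/2 = 0

# Linear rule in the spirit of (5): threshold hyperplane through x0
# with normal P f_-(x0), reading V'(x0) as the action of P.
n1, n2 = -x0[0] + b_minus[0], -x0[1] + b_minus[1]  # f_-(x0)

def sigma(x):
    s = (x[0] - x0[0]) * n1 + (x[1] - x0[1]) * n2
    return 1 if s > 0 else -1

def step(x, h=1e-3):
    # forward-Euler step of the subsystem selected by sigma(x)
    b = b_plus if sigma(x) == 1 else b_minus
    return (x[0] + h * (-x[0] + b[0]), x[1] + h * (-x[1] + b[1]))

x = (1.0, 1.0)
for _ in range(10000):  # integrate to t = 10
    x = step(x)
# x now chatters in a small neighborhood of the switched equilibrium x0
```

The forward-Euler chattering across the hyperplane x_1 = 0 approximates the Filippov sliding motion; the trajectory settles near x_0 without either subsystem having an equilibrium there.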
We would like to note that seemingly nonlinear switching rule (3) of Bolzern-Spinelli [2] simplifies to linear in wide classes of particular applications, e.g. in applications to buck converters (see e.g. Lu et al [6]), where A + = A − in (1), or in applications to boost converters of Fig. 3 with neglected resistance r C of the capacitor (see e.g. Schild et al [10]). Still, the switching rule (3) stays nonlinear in some other classes of applications, e.g. in more general boost converters such as the one of Fig. 3 or its further extensions (see Gupta-Patra [5] and references therein). In these classes of applications the linear switching rules (5) and (14) proposed in this paper may simplify the engineering implementation of the feedback control. 6. Appendix: Lyapunov stability theorem for discontinuous systems with smooth Lyapunov functions Proof of Theorem 3.2. Let x be a Filippov solution of (15), see e.g. Shevitz-Paden [11]. We pick ρ > 0 and prove that x(t) ∈ W ρ beginning some t = t ρ , where W ρ = {x ∈ R n : V(x) < ρ}. Step 1. Let r > 0 be such a constant that x(0) ∈ ∂W r . We claim that x(t) ∈ W r for all t > 0. We prove by contradiction, i.e. assume that x(τ) W r for some τ > 0. Without loss of generality we can assume that x([0, τ]) ⊂ W, where W is an open neighborhood of W r , such that w(x) is strictly positive in W\{x 0 }. For the function v(t) = V(x(t)) we have v(0) = r and v(τ) ≥ r. Step 1.1 We claim that v(t) > r/2 for all t ∈ [0, τ]. Indeed, if the latter is wrong, then defining s = max {t ∈ [0, τ] : v(t) ≤ r/2} , one gets v(s) = r/2, v(τ) = r, v(t) ∈ [r/2, r] , for any t ∈ [s, τ]. In particular, x(t) x 0 for all t ∈ [s, τ] and, therefore, v (t) = V (x(t))ξ < 0, for some ξ ∈ K[ f ](x(t)) and almost any t ∈ [s, τ]. This contradicts (22) and proves that v(t) > r/2 for all t ∈ [0, τ]. Step 1. 
2. Step 1.1 implies that x(t) ≠ x_0 for any t ∈ [0, τ] and, as a consequence, v′(t) < 0 for any t ∈ [0, τ], which contradicts (21) and completes the proof of the fact that x(t) ∈ W_r for all t > 0.

Step 2. Let us show that x(t) reaches W_ρ at some time moment. Assume that x(t) never reaches W_ρ. Then v′(t) = V′(x(t))ξ < −w(x(t)) for some ξ ∈ K[f](x(t)) and almost any t > 0. The definition of the function w implies that w_min = min{w(x) : x ∈ W_r\W_ρ} > 0. Therefore, v(t) = v(0) + ∫_0^t v′(s) ds < v(0) − w_min t, and v(t) becomes negative if x(t) never reaches W_ρ. Since ρ ∈ (0, r) was chosen arbitrarily, our conclusion implies that x(t) → x_0 as t → ∞. The proof of the theorem is complete.

Figure 1: Relative locations of the sets Ω_L and Ω_R.

Figure 2: Top figure: locations of the boundaries of Ω+, Ω+α, Ω−, Ω−α with respect to the hyperplane σ(x) = 0 and with respect to each other. Bottom figures: the sets Ω+ and Ω+α (grey regions).

Figure 4: The solution (bold curve) of switched system (16) with the initial condition x(0) = 0, the parameters (19), and the switching signal σ(x) given by (18) (top figure) and by (20) (bottom figure).
The thin curve is the switching manifold σ(x) = 0 and the bold point is the switched equilibrium x 0 . An implementation of switching rule (18) with the parameters r L = 20, r C = 5, x L = 600, x C = 70, r 0 = 200, u s = 8, (19) and the reference voltage x 02 = 10 (which, when plugged into (17), yields x 01 = 0.79 and λ 0 = 0.367 as one of the two possible solutions) is given inFig. 4 (top). AcknowledgementsThe research was supported by NSF Grant CMMI-1436856.References Optimal Control of the Boost dc-dc Converter. A G Beccuti, G Papafotiou, M Morari, Proceedings of 44th IEEE Conference on Decision and Control, and the European Control Conference. 44th IEEE Conference on Decision and Control, and the European Control ConferenceA. G. Beccuti, G. Papafotiou, M. Morari, Optimal Control of the Boost dc-dc Converter, Proceedings of 44th IEEE Conference on Decision and Control, and the European Control Conference 2005, 4457-4462. Quadratic stabilization of a switched affine system about a nonequilibrium point. P Bolzern, W Spinelli, Proceedings of the American Control Conference. 5P. Bolzern, W. Spinelli, Quadratic stabilization of a switched affine sys- tem about a nonequilibrium point, Proceedings of the American Control Conference 5 (2004) 3890-3895. B P Demidovich, Lectures on Stability Theory. MoscowNaukaB. P. Demidovich, Lectures on Stability Theory, Nauka, Moscow, 1967. Limit Cycles of Controlled Switched Systems: Existence, Stability, Sensitivity. L Fribourg, R Soulat, J. Phys.: Conf. Ser. 46412007L. Fribourg, R. Soulat, Limit Cycles of Controlled Switched Systems: Existence, Stability, Sensitivity, J. Phys.: Conf. Ser. 464 (2013) 012007. Hybrid Mode-Switched Control of DC-DC Boost Converter Circuits. P Gupta, A Patra, IEEE Trans. Circuits and Systems. 5211P. Gupta, A. Patra, Hybrid Mode-Switched Control of DC-DC Boost Converter Circuits, IEEE Trans. Circuits and Systems 52 (2005), no. 11, 734-738. 
Y M Lu, X F Huang, B Zhang, L Y Yin, Hybrid Feedback Switching Control in a Buck Converter, IEEE International Conference on Automation and Logistics. Y. M. Lu, X. F. Huang, B. Zhang, L. Y. Yin, Hybrid Feedback Switching Control in a Buck Converter, IEEE International Conference on Automa- tion and Logistics (2008) 207-210. An extension of LaSalles invariance principle for switched systems. J L Mancilla-Aguilara, R A Garcia, Systems & Control Letters. 55J. L. Mancilla-Aguilara, R. A. Garcia, An extension of LaSalles in- variance principle for switched systems, Systems & Control Letters 55 (2006) 376-384. Stability and Convergence for Systems with Switching Equilibria. S Mastellone, D M Stipanovic, M W Spong, Proceedings of the 46th IEEE Conference on Decision and Control. the 46th IEEE Conference on Decision and ControlS. Mastellone, D. M. Stipanovic, M. W. Spong, Stability and Conver- gence for Systems with Switching Equilibria, Proceedings of the 46th IEEE Conference on Decision and Control (2007) 4013-4020. Convergent dynamics, a tribute to Boris Pavlovich Demidovich. A Pavlov, A Pogromsky, N Van De Wouw, H Nijmeijer, Systems Control Lett. 523-4A. Pavlov, A. Pogromsky, N. van de Wouw, H. Nijmeijer, Convergent dynamics, a tribute to Boris Pavlovich Demidovich. Systems Control Lett. 52 (2004), no. 3-4, 257-261. Design of generalized hysteresis controllers for dc-dc switching power converters. A Schild, J Lunze, J Krupar, W Schwarz, IEEE Trans. Power Electron. 241A. Schild, J. Lunze, J. Krupar, and W. Schwarz, Design of generalized hysteresis controllers for dc-dc switching power converters, IEEE Trans. Power Electron. 24 (2009), no. 1, 138-146. Lyapunov Stability Theory of Nonsmooth Systems. D Shevitz, B Paden, IEEE Transactions on Automatic Control. 399D. Shevitz, B. Paden, Lyapunov Stability Theory of Nonsmooth Sys- tems, IEEE Transactions on Automatic Control 39 (1994), no. 9, 1910- 1914. 
Switched Controller Synthesis for the Quadratic Stabilisation of a Pair of Unstable Linear Systems. M A Wicks, P Peleties, R A Decarlo, European J. Control. 4M. A. Wicks, P. Peleties, and R. A. DeCarlo. Switched Controller Syn- thesis for the Quadratic Stabilisation of a Pair of Unstable Linear Sys- tems, European J. Control 4 (1998) 140-147.
Fluctuations of a weakly interacting Bose-Einstein condensate

Zbigniew Idziaszek, Institute of Theoretical Physics, University of Warsaw, 00-681 Warsaw, Poland, and Center for Theoretical Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668 Warsaw, Poland

Lukasz Zawitkowski, Center for Theoretical Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668 Warsaw, Poland

Mariusz Gajda, Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668 Warsaw, Poland, and Faculty of Mathematics and Sciences, Cardinal Stefan Wyszyński University, Warsaw, Poland

Kazimierz Rzażewski, Center for Theoretical Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668 Warsaw, Poland, and Faculty of Mathematics and Sciences, Cardinal Stefan Wyszyński University, Warsaw, Poland

23 Dec 2008

Fluctuations of the number of condensed atoms in a finite-size, weakly interacting Bose gas confined in a box potential are investigated for temperatures up to the critical region. The canonical partition functions are evaluated using a recursive scheme for smaller systems, and a saddle-point approximation for larger samples, which makes it possible to treat realistic-size systems containing up to N ∼ 10^5 particles. We point out the importance of the particle-number constraint and of interactions between out-of-condensate atoms for the statistics near the critical region. For sufficiently large systems the crossover from the anomalous to the normal scaling of the fluctuations is observed. The excitations are described in a self-consistent way within the Bogoliubov-Popov approximation, and the interactions between thermal atoms are described by means of the Hartree-Fock method.
A breakdown of the standard, grand-canonical ensemble in describing fluctuations of an ideal Bose gas, and the necessity of a canonical or microcanonical description, was noticed a long time ago [1], but only in the recent decade has the problem of fluctuations received renewed attention, owing to the experimental achievement of Bose-Einstein condensation (BEC) in ultracold trapped gases. For ideal gases, the canonical and microcanonical fluctuations have been thoroughly investigated [2,3,4,5,6,7,8,9,10], and several powerful techniques, like the Maxwell Demon ensemble [4,6,7], have been developed. For interacting particles the fluctuations have been studied mainly within the Bogoliubov approximation [11] of weakly interacting gases [12,13,14,15,16,17], which has proved extremely successful in describing many other properties of BEC. The exact treatments, so far applied only to one-dimensional systems [18], confirmed an excellent agreement with the predictions of the Bogoliubov method. We note that some controversy exists about the applicability of the mean-field theory to this problem [19]; on the other hand, other approaches, like perturbation theory, lead to qualitatively different results for the fluctuations of relatively small condensates [20,21]. The ultimate verification will be done in experiments. However, to date only the statistics of the total number of atoms has been measured [22], and a technique involving scattering of short laser pulses has been proposed [23] but not realized. So far the studies of fluctuations in weakly interacting gases have been limited to the regime of low temperatures, and only recently has the critical region (close to the critical temperature T_c) in a finite-size system been explored [24].
In this case the Bogoliubov-Popov (B-P) approximation [25] has been applied to account for the condensate depletion at finite temperatures and to obtain a description that smoothly interpolates between the degenerate regime below T c and an ideal gas statistics above T c . In this Letter we reinvestigate the problem of fluctuations for weakly interacting gas, putting a special emphasis on the interactions of out of condensate atoms, that apart from the critical region, turn out to be important even at moderate temperatures. Following the Bogoliubov-Popov approximation for a uniform Bose gas of N atoms confined in a three-dimensional box of size L with periodic boundary conditions we start with the Hamiltonian: H =Ĥ B + E ex (N, N 0 ) = k =0 ǫ kb † kb k + E ex (N, N 0 ). (1) Operatorsb k = U kâk + V kâ † −k are the Bogoliubov quasiparticle annihilation operators, obeying Bose commutation relations [b k ,b † k ′ ] = δ k,k ′ ,â k represent annihilation operators for a mode with quantized momentum k. The celebrated Bogoliubov-Popov energy spectrum: ǫ k = (ǫ 0 k + gn 0 ) 2 − (gn 0 ) 2(2) depends on the condensate density n 0 = N 0 /V . Here, ǫ 0 k = 4π 2 2 k 2 /mL 2 is the kinetic energy of a mode k, m is the mass of atoms, g = 4π 2 a/m is the interaction strength, and a is the s-wave scattering length characterizing the contact potential V (r − r ′ ) = gδ (3) (r − r ′ ). Bogoliubov coefficients satisfy equations: [25]. Finally, E ex (N, N 0 ) describes the interaction energy between out of condensate atoms, which in the B-P model can be calculated on the level of Hartree-Fock (HF) approximation: [26], with N ex = N − N 0 denoting the number of atoms in excited (k = 0) modes. This corresponds to taking only secular part of interactions between thermal atoms. The considered Hamiltonian neglects a finite life-time of quasiparticle excitations arising from interaction between quasiparticles [27]. 
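For concreteness, the spectrum (2) and the Bogoliubov coefficients can be tabulated as in the minimal sketch below. Units ħ = m = L = 1 and the values of g and n_0 are illustrative assumptions, not taken from the paper; the checks exploit U_k² − V_k² = 1 and the identity 4U_k²V_k² = (gn_0/ǫ_k)², which follow from W_k = (gn_0 + ǫ⁰_k)/ǫ_k.

```python
import math

# Bogoliubov-Popov spectrum (2) for a uniform gas in a box with periodic
# boundary conditions, in units hbar = m = L = 1 (an assumption of this
# sketch); g and n0 are illustrative values, not taken from the paper.
g, n0 = 0.1, 50.0

def eps0(k2):
    # free-particle energy of mode k; k2 = |k|^2 in units of (2*pi/L)^2
    return 0.5 * (2.0 * math.pi) ** 2 * k2

def eps(k2):
    # eps_k = sqrt((eps0_k + g*n0)^2 - (g*n0)^2)
    return math.sqrt((eps0(k2) + g * n0) ** 2 - (g * n0) ** 2)

def WUV(k2):
    # W_k = U_k^2 + V_k^2 = (g*n0 + eps0_k)/eps_k, with U_k^2 - V_k^2 = 1
    W = (g * n0 + eps0(k2)) / eps(k2)
    U2, V2 = (W + 1.0) / 2.0, (W - 1.0) / 2.0
    return W, U2, V2
```

Since W_k² − 1 = (gn_0/ǫ_k)², the product 4U_k²V_k² entering the quantum fluctuations below is fixed entirely by the interaction strength and the mode energy.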
U²_k + V²_k = (gn_0 + ǫ⁰_k)/ǫ_k ≡ W_k and U²_k − V²_k = 1, while the Hartree-Fock interaction energy of the thermal cloud reads E_ex(N, N_0) = (g/2V) N²_ex. The canonical-ensemble partition function for a system with N atoms and temperature k_B T = 1/β is

Z(N, β) = Σ_{N_ex=0}^{N} Σ_{n_1=0}^{∞} ... Σ_{n_k=0}^{∞} ... e^{−βE} δ_{N_ex, Ñ_ex}, (3)

where n_k are the populations of quasiparticle excitations, E = Σ_{k≠0} ǫ_k(N_0) n_k + E_ex(N_ex) is the energy, and Ñ_ex = Σ_{k≠0} (n_k W_k + V²_k) is the number of thermal atoms of a given configuration of excitations, which differs from the total number of excitations due to the Bogoliubov transformation [32]. Although the condensate mode does not appear in the sum in the Hamiltonian, its population affects the energy spectrum and the interaction energy of atoms in excited modes. In order to enforce the constraint on the total number of particles rigorously, we keep the energy spectrum dependent on the actual number of condensed atoms, as follows from Eq. (3). We calculate the conditional statistical partition function

Z_{N_0}(N_ex) = Σ_{n_1=0}^{∞} ... Σ_{n_k=0}^{∞} ... e^{−βE} δ_{N_ex, Ñ_ex}, (4)

which corresponds to a case with N_0 condensed atoms and N_ex thermal atoms. In terms of these functions the probability of finding N_0 condensed atoms is P(N_0) = Z_{N_0}(N − N_0)/Z and Z = Σ_{N_0=0}^{N} Z_{N_0}(N − N_0). The recurrence algorithm used in our calculations is an enhanced version of the earlier algorithm applied to the ideal Bose gas (IBG) [6], and it will be presented in detail elsewhere. It makes use of the fact that Z_{N_0}(N_ex) treats the numbers of condensed and thermal atoms as independent variables, the number of condensed atoms becoming a parameter. As an intermediate step one obtains the following result for the mean number of quasiparticle excitations in mode q, provided that there are N_0 condensed atoms and N_ex thermal atoms in the system:

⟨n_q⟩^{N_ex}_{N_0} = Σ_{l=1}^{∞} e^{−βlǫ_q} Z_{N_0}(N_ex − lW_q) / Z_{N_0}(N_ex). (5)

We calculate the mean condensate population ⟨N_0⟩ = Σ_{N_0=0}^{N} P(N_0) N_0 and its fluctuations δ²N_0 = ⟨N_0²⟩ − ⟨N_0⟩².
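The enhanced recurrence itself is deferred to a later publication, but the ideal-Bose-gas recursion it builds on (ref. [6]) is standard: Z(N, β) = (1/N) Σ_{k=1}^{N} Z_1(kβ) Z(N − k, β) with Z(0) = 1, where Z_1 is the single-particle partition function. A minimal sketch, with an assumed single-particle spectrum and momentum cutoff:

```python
import math
from functools import lru_cache

# Standard canonical-ensemble recursion for an IDEAL Bose gas (the scheme
# the paper's algorithm builds on, per ref. [6]); the interacting version
# with conditional partition functions Z_{N0}(N_ex) is not reproduced here.
# Units and the mode cutoff are illustrative assumptions.
beta = 1.0
kmax = 8  # momentum cutoff for the single-particle spectrum

def Z1(b):
    # single-particle partition function for a box with periodic boundary
    # conditions, eps_k = |k|^2 in arbitrary units (assumption)
    s = 0.0
    for kx in range(-kmax, kmax + 1):
        for ky in range(-kmax, kmax + 1):
            for kz in range(-kmax, kmax + 1):
                s += math.exp(-b * (kx * kx + ky * ky + kz * kz))
    return s

@lru_cache(maxsize=None)
def Z(N):
    # Z(N, beta) = (1/N) * sum_{k=1}^{N} Z1(k*beta) * Z(N-k, beta)
    if N == 0:
        return 1.0
    return sum(Z1(k * beta) * Z(N - k) for k in range(1, N + 1)) / N
```

For N = 2 the recursion reproduces the known two-boson identity Z(2) = ½(Z_1(β)² + Z_1(2β)), which is a convenient correctness check.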
In the canonical ensemble ⟨N_0⟩ = N − ⟨N_ex⟩ and δ²N_0 = δ²N_ex. The fluctuations can be written as a sum of two contributions: δ²N_ex = ⟨δ²N_ex⟩_T + ⟨δ²N_ex⟩_Q. The first term represents the thermal fluctuations, which we calculate from the probability distribution P(N_0):

⟨δ²N_ex⟩_T = Σ_{k,q≠0} W_k W_q (⟨n_k n_q⟩ − ⟨n_k⟩⟨n_q⟩) (6)
           = Σ_{N_0=0}^{N} N_0² P(N_0) − ⟨N_0⟩². (7)

The second term, ⟨δ²N_ex⟩_Q, describes the quantum part of the fluctuations, a non-vanishing component at T = 0 in an interacting gas [12]. It results from the Bogoliubov transformation applied to the quantum average of the N̂²_ex operator:

⟨δ²N_ex⟩_Q = 4 Σ_{k≠0} U²_k V²_k (⟨n_k n_−k⟩ + ⟨n_k⟩ + 1/2). (8)

In the equations above, the average of an arbitrary operator can be expressed in terms of conditional averages: ⟨X⟩ = Σ_{N_0=0}^{N} ⟨X⟩^{N−N_0}_{N_0} P(N_0), with the mean occupation numbers ⟨n_q⟩^{N_ex}_{N_0} given by (5), and the correlation of modes with opposite momenta ⟨n_k n_−k⟩^{N_ex}_{N_0} = Σ_{l,j=1}^{∞} e^{−β(l+j)ǫ_q} Z_{N_0}(N_ex − (l + j)W_q)/Z_{N_0}(N_ex). From the practical point of view the recursive method is applicable to systems of at most a few hundred particles. For larger N the calculations become numerically very demanding, and to treat larger samples we have developed a semi-analytical approach based on a saddle-point approximation to the contour-integral representation of Z(N_ex), known in the literature as the Darwin-Fowler method [28]. The derivation proceeds basically in the same manner as for an ideal gas [4,8] and yields

Z_{N_0}(N_ex, β) ≈ Ξ_{N_0}(z_0, β) z_0^{−N_ex} [2π ∂²_{λ_0} ln Ξ_{N_0}(z_0, β)]^{−1/2}, (9)

where Ξ_{N_0}(z, β) is the grand-canonical partition function for the excited subsystem and z_0 = e^{λ_0} denotes the position of the saddle point, determined by ⟨N_ex⟩_GC = N_ex, with ⟨N_ex⟩_GC ≡ ∂_{λ_0} ln Ξ_{N_0}(z_0, β) denoting the grand-canonical expectation value of the number of excited atoms.
In analogy to the ideal gas, Ξ_{N₀}(z, β) can be written in a closed form:

Ξ_{N₀}(z, β) = Π_{k≠0} z^{V_k²} [1 − z^{W_k} exp(−βε_k)]^{−1}.  (10)

A similar saddle-point method can be applied to determine ⟨n_k⟩ and ⟨n̂_k n̂_−k⟩ entering formula (8) for δ²⟨N_ex⟩_Q. In this way we have obtained a scheme that allows us to calculate statistical properties of the weakly interacting condensate at all temperatures. While we keep only the HF contribution to interactions between quasiparticles, we otherwise preserve the number of atoms throughout the calculations, which requires inclusion of an energy spectrum dependent on the actual number of condensed atoms. This can be contrasted with the common approximation assuming an energy spectrum dependent on the mean number of condensed atoms, ε_k(N₀) = ε_k(⟨N₀⟩) [14, 24, 30], used in simple but useful models of thermal equilibrium with Bose-populated excitations [12, 20]. In this way one would obtain analogous, numerically less demanding formulas for Z and P(N₀), which, however, require self-consistent determination of ⟨N₀⟩. Such a seemingly natural simplification leads to observable distortions of the results. This is illustrated in Fig. 1, presenting the canonical partition function of the excited subsystem in the parameter space (N₀, N_ex). Inclusion of the excitation spectrum dependent on the actual number of condensed atoms corresponds to performing a cut along the line N₀ + N_ex = N, whereas the approximation assuming the average spectrum corresponds to a cut along N₀ = ⟨N₀⟩ = const, with ⟨N₀⟩ determined in a self-consistent way. These two approaches yield probability distributions of the number of condensed atoms (see right panel) that differ both in the position of the maximum and in the width of the peak, which determine the values of ⟨N₀⟩ and δ²N₀, respectively. The comparison of the mean condensate populations and fluctuations for a relatively small system of N = 200 and an^{1/3} = 0.1, calculated for the rigorous and the average spectrum, is presented in Fig. 2.
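A minimal sketch of the saddle-point step: assuming the closed form of Eq. (10), the condition ⟨N_ex⟩_GC = N_ex is solved for λ₀ by bisection, with ∂_λ ln Ξ = Σ_k [V_k² + W_k/(exp(βε_k − W_kλ) − 1)] obtained by differentiating Eq. (10). The quasiparticle data (ε_k, W_k, V_k²) are placeholders, not a physical spectrum.

```python
import math

beta = 1.0
eps = [1.0, 2.0, 3.0, 4.0, 5.0]   # toy quasiparticle energies
W = [1.2] * 5                      # toy Bogoliubov weights W_k
V2 = [0.1] * 5                     # toy V_k^2

def dlnXi(lam):
    # d/dlam of ln Xi_{N0}(e^lam, beta) from Eq. (10):
    # sum_k [ V_k^2 + W_k / (exp(beta*eps_k - W_k*lam) - 1) ]
    return sum(v2 + w / (math.exp(beta * e - w * lam) - 1.0)
               for e, w, v2 in zip(eps, W, V2))

def saddle_point(N_ex, tol=1e-10):
    # Solve <N_ex>_GC(lam0) = N_ex by bisection; lam must stay below
    # min_k beta*eps_k/W_k so that every factor of Xi converges.
    hi = min(beta * e / w for e, w in zip(eps, W)) - 1e-9
    lo = -50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dlnXi(mid) < N_ex:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam0 = saddle_point(3.0)
print(lam0, dlnXi(lam0))
```

Bisection is safe here because ∂_λ ln Ξ is monotonically increasing in λ and diverges at the upper end of the interval.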
The results obtained in the model with the rigorous spectrum and the HF contribution to the thermal-atom interaction differ substantially from the mean occupation and fluctuations obtained when the interaction of thermal atoms is totally neglected. On the other hand, we note that the two other approaches for an interacting gas lead to rather similar results, and some discrepancies can only be observed in the behavior of the fluctuations close to the critical temperature. The situation changes, however, for larger systems (see Fig. 3). For a sufficiently large system of N = 10000 atoms and an^{1/3} = 0.05, inclusion of the rigorous spectrum together with E_ex significantly affects the condensate statistics. This is more evident in the case of fluctuations, which in our model remain much smaller than the fluctuations calculated in the model assuming the average spectrum, even at temperatures much smaller than the critical one. Finally, we have verified how the mean condensate population and its fluctuations depend on the size of the system, while keeping the interaction parameter an^{1/3} fixed. Fig. 4 presents the results for the rigorous spectrum including the excited-atom interactions, for an^{1/3} = 0.05 and the number of particles varying from N = 100 to N = 10⁵. As the size of the system increases, the fluctuations tend to become proportional to √N, i.e., they become normal. On the other hand, for small systems the scaling remains anomalous. This result shows that the anomalous scaling (δN₀ ∼ N^{2/3}), predicted within the Bogoliubov method neglecting the interactions of thermal atoms [12, 14, 16, 17], holds only for a relatively small number of atoms. One observes that for large systems the fluctuations exhibit a high and narrow peak close to T_c. We are not sure of its physical significance, since the B-P spectrum is questionable so close to the critical temperature.
We point out that the position of this peak, which defines a characteristic temperature for our model, remains fixed in the thermodynamic limit (an^{1/3} = const, N → ∞). We have verified that the position of the peak varies with interactions as ΔT_ch/T_c = 1.56 an^{1/3}, which is similar to the shift of the critical temperature in an interacting gas: ΔT_c/T_c = 1.29 an^{1/3} [31]. In this Letter we have presented the most complete discussion to date of the statistical properties of a BEC confined to a box. In particular, we have pointed out the importance of the strict enforcement of particle-number conservation and of the interactions between thermal atoms. These two new elements, which have been neglected in previous approaches, turn out to be relevant not only close to the critical region, but also at moderate temperatures, affecting the scaling properties of fluctuations in sufficiently large systems.

FIG. 1. (Color online) Logarithm of the canonical partition function as a function of N₀ and N_ex (a), along with cross-sections yielding the probability distributions of the condensate (b) for the rigorous (blue) and average spectrum (red). Parameters are: N = 1000, an^{1/3} = 0.03 and T = 0.8 T_c.

FIG. 2. (Color online) Condensate population and fluctuations (inset) versus scaled temperature T/T_c for a system of N = 200 and an^{1/3} = 0.1, described by the Bogoliubov-Popov Hamiltonian H_B with (black dot-dash) and without (red dots) inclusion of the interaction E_ex between out-of-condensate atoms. The blue solid line represents the corresponding ideal gas, while the green dashed line shows the results for the model assuming the average spectrum ε_k(⟨N₀⟩) without the E_ex term.
FIG. 3. (Color online) Condensate population (upper panel) and fluctuations (lower panel) versus scaled temperature T/T_c for a system with N = 10000, an^{1/3} = 0.05, obtained for: (black solid) H_B + E_ex with the N₀-dependent spectrum, (red dash) H_B with the average-spectrum approximation, (blue thin solid) an ideal gas.

FIG. 4. (Color online) Normalized condensate fluctuations δN₀/N^{1/2} and condensate fraction (inset) versus scaled temperature T/T_c for fixed an^{1/3} = 0.05 but varying N: (a, red dots) N = 100, (b, blue dash) N = 1000, (c, green dot-dash) N = 10⁴, (d, black solid) N = 10⁵.

Z.I., M.G. and K.R. acknowledge support of the Polish Government Research Grant for 2006-2009; L.Z. acknowledges support of the Polish Government.

[1] E.H. Hauge, Physica Nor. 4, 19 (1969); R.M. Ziff, G.E. Uhlenbeck, and M. Kac, Phys. Rep. 32, 169 (1977); E. Schrödinger, Statistical Thermodynamics (Dover Publ., New York, 1989).
[2] H.D. Politzer, Phys. Rev. A 54, 5048 (1996).
[3] M. Gajda and K. Rzażewski, Phys. Rev. Lett. 78, 2686 (1997).
[4] P. Navez et al., Phys. Rev. Lett. 79, 1789 (1997).
[5] S. Grossmann and M. Holthaus, Phys. Rev. Lett. 79, 3557 (1997).
[6] C. Weiss and M. Wilkens, Opt. Ex. 1, 272 (1997).
[7] S. Grossmann and M. Holthaus, Opt. Ex. 1, 262 (1997).
[8] M. Holthaus and E. Kalinovski, Ann. Phys. 276, 321 (1999).
[9] H.-J. Schmidt and J. Schnack, Physica A 260, 479 (1998).
[10] M.O. Scully, Phys. Rev. Lett. 82, 3927 (1999).
[11] N. Bogoliubov, J. Phys. USSR 11, 23 (1947).
[12] S. Giorgini, L.P. Pitaevskii, and S. Stringari, Phys. Rev. Lett. 80, 5040 (1998).
[13] F. Meier and W. Zwerger, Phys. Rev. A 60, 5133 (1999).
[14] Kocharovsky et al., Phys. Rev. Lett. 84, 2306 (2000).
[15] Bhaduri et al., J. Phys. B 35, 2817 (2002).
[16] Xiong et al., Phys. Rev. A 65, 033609 (2002).
[17] Z. Idziaszek, Phys. Rev. A 71, 053604 (2005).
[18] I. Carusotto and Y. Castin, Phys. Rev. Lett. 90, 030401 (2003).
[19] V.I. Yukalov, Phys. Lett. A 340, 269 (2005).
[20] Z. Idziaszek et al., Phys. Rev. Lett. 82, 4376 (1999).
[21] F. Illuminati, P. Navez, and M. Wilkens, J. Phys. B 32, L461 (1999).
[22] C.S. Chuu et al., Phys. Rev. Lett. 95, 260403 (2005).
[23] Z. Idziaszek, K. Rzażewski, and M. Lewenstein, Phys. Rev. A 61, 053608 (2000).
[24] A.A. Svidzinsky and M.O. Scully, Phys. Rev. Lett. 97, 190402 (2006).
[25] V.N. Popov, Functional Integrals in Quantum Field Theory and Statistical Physics (Reidel Publishing Company, Dordrecht, 1983), chapter 6.
[26] C.J. Pethick and H. Smith, Bose-Einstein Condensation in Dilute Gases (Cambridge Univ. Press, 2000).
[27] S.T. Beliaev, Zh. Eksp. Teor. Fiz. 34, 433 (1958) [Sov. Phys.-JETP 34 (7), 299 (1958)].
[28] K. Huang, Statistical Mechanics (John Wiley and Sons, New York, 1987), chapter 9.
[29] For the IBG see e.g. [4, 8].
[30] C. Connaughton et al., Phys. Rev. Lett. 95, 263901 (2005).
[31] V.A. Kashurnikov, N.V. Prokof'ev, and B.V. Svistunov, Phys. Rev. Lett. 87, 120402 (2001).
[32] Atomic populations corresponding to quasiparticle excitations are not integers and we have to apply some binning scheme (see e.g. [17]).
Z. Idziaszek, L. Zawitkowski, M. Gajda, and K. Rzażewski, "Fluctuations of a weakly interacting Bose-Einstein condensate", arXiv:0708.0092.
Commentary on "Microdosimetric and radiobiological effects of gold nanoparticles at therapeutic radiation energies"

Hans Rabus* ([email protected]), Miriam Schwarze, Leo Thomas
Physikalisch-Technische Bundesanstalt (PTB), Berlin, Germany

IJRB 2023. Keywords: gold nanoparticles; Monte Carlo simulations; dose enhancement factor; radiation therapy; microdosimetry

In the recently published article by T.M. Gray et al. "Microdosimetric and radiobiological effects of gold nanoparticles at therapeutic radiation energies" (IJRB 2023, 99(2), 308-317), results of Monte Carlo simulations and radiobiological assays on the dosimetric effects of gold nanoparticles were presented. This commentary points out that the results of the two parts of the study are in contradiction and that the predicted magnitude of dose enhancement and its dependence on the shape of the nanoparticle appear implausible. Possible reasons for these observations are discussed.

Introduction

In their recently published work, Gray et al. (2023) performed radiobiological cell experiments to study the effects of the presence of gold nanoparticles (GNPs) during irradiation. In addition, Monte Carlo (MC) simulations were performed to determine the dose-enhancing effects of GNPs at the microscopic level. In these simulations, two different shapes (cubic, spherical) were considered for the same GNP volume. The spherical GNPs had a diameter of 30 nm, and the cubic GNPs had a side length of about 24 nm. Irradiations were simulated for 6 MV and 18 MV linac radiation in a three-step procedure. In the first step, an experimental setup was simulated to obtain the photon fluence in the region of interest.
This photon fluence was used in the second simulation step to irradiate the nanoparticles and obtain phase-space files of emitted electrons. In the third step, the energy imparted by these electrons to the water surrounding the nanoparticle was scored as a function of radial distance from the center of the nanoparticle. Based on these simulations, dose enhancement factors (DEFs) were determined for a 1 µm³ volume of water containing the nanoparticle. The DEF values reported were about 6 and 8 for the sphere and the cube, respectively, with the 6 MV spectrum. For the 18 MV spectrum, the values were about 2.7 and 3.3 for the sphere and cube, respectively. From the radiobiological assays, a radiosensitization enhancement factor (REF) was determined for given survival fractions (SFs) of cells. For the two SFs considered, 0.3 and 0.6, an REF of about 1.06 was determined for a mass fraction of gold nanoparticles of 0.10%. For a mass fraction of 0.15%, the REF was about 1.11.

Observations on the paper

It should be noted that the adjective "microdosimetric" is used in the work of Gray et al. (2023) to indicate that the absorbed dose was determined in a micrometric volume. This is not exactly what the term "microdosimetry", which is also a keyword of the paper, generally refers to, namely the study of the stochasticity of ionizing radiation interaction at the microscopic scale (Rossi and Zaider 1996; Lindborg and Waker 2017). Leaving this terminological issue aside, the nexus between the radiobiological part of the study and the Monte Carlo simulations is not immediately evident. The study of the difference between cubic and spherical nanoparticles was presumably motivated by the transmission electron microscopy images of the GNPs, which appear to exhibit a cross-sectional shape more like a square than a circle. However, it is not clear how the simulation results are related and relevant to the radiobiological assays.
In fact, as will be explained in the next subsections, the results of the two parts of the study (MC simulations and radiobiological assays) appear to be in contradiction. Furthermore, the magnitudes of the local dose around the GNP and of the dose enhancement, as well as the influence of the nanoparticle shape on them, are not plausible. Finally, the results of the radiobiological part of the study do not appear to be statistically significant.

Contradictory results between MC simulations and radiobiological assays

The results from the MC simulations and the radiobiological assays seem to be in contradiction for the following reason: A 30 nm diameter sphere has a volume of 1.4×10⁻⁵ µm³. Therefore, the mass fraction of gold in 1 µm³ of water containing a GNP is about 2.7×10⁻⁴ or 0.03%, which is much lower than the mass fractions used in the radiobiological experiments of Gray et al. (2023). This means that in a simulation relevant to the experiments, there should have been more than one GNP in a 1 µm³ volume, namely between four and six for 0.1% and 0.15% mass fraction, respectively. However, according to the simulations, already one GNP results in an enhancement of the dose in a 1 µm³ volume by a factor of about 3 for both shapes and the 18 MV irradiation. In the presence of four or six GNPs in the 1 µm³ volume instead of only one, the contribution from the GNPs should be increased even further, and the resulting dose enhancement should be more than 3. Thus, instead of a dose of 3 Gy, a dose of more than 9 Gy would result in a volume with either 0.1% or 0.15% mass fraction of GNPs. The functional shape of the survival curve without GNPs shown in Fig. 7(a) of Gray et al. (2023) indicates that such high doses result in surviving fractions well below 10⁻². Therefore, a much greater reduction in cell survival would be expected for the experiments with GNPs than is seen in Fig. 7(a) of Gray et al. (2023).
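The mass-fraction argument above is a few lines of arithmetic; the inputs are the values quoted in the text (gold density 19.3 g/cm³ and unit water density are the usual textbook figures):

```python
import math

r_nm = 15.0  # GNP radius in nm (30 nm diameter sphere)
v_gnp_um3 = (4.0 / 3.0) * math.pi * (r_nm * 1e-3) ** 3  # volume in µm³

rho_gold = 19.3   # g/cm³
rho_water = 1.0   # g/cm³

# Mass fraction of gold for ONE GNP in 1 µm³ of water (gold mass / water mass).
x_one = v_gnp_um3 * rho_gold / (1.0 * rho_water)

# Number of GNPs per µm³ needed to reach the mass fractions used in the assays.
n_for_010 = 0.0010 / x_one
n_for_015 = 0.0015 / x_one

print(v_gnp_um3, x_one, n_for_010, n_for_015)
```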
It should also be noted that the dependence of the dose enhancement factor on the primary radiation spectrum is contrary to the findings reported by Gray et al. (2021). There, measurements and simulations with an 18 MV linac spectrum were found to produce a larger DEF than a 6 MV spectrum. These findings were presumably the motivation for performing the cell survival studies at 18 MV in the work of Gray et al. (2023).

Implausibility of the local dose per photon values

A further implausibility is the magnitude of the local dose shown in Fig. 5 of Gray et al. (2023), which is between 1×10⁻²⁵ Gy and 3×10⁻²⁴ Gy in the range up to 500 nm from the center of the GNP. This suggests that the average dose in a 1 µm³ cube is in the order of a few 10⁻²⁵ Gy. The mass of 1 µm³ of water is 10⁻¹⁵ kg. Thus, the energy imparted is a few 10⁻⁴⁰ J, or a few 10⁻²¹ eV. If the quantity shown in Fig. 5 of Gray et al. (2023) is normalized to the number of photons used in the second simulation (4×10⁹), the total energy scored in the 1 µm³ water volume would be in the order of 10⁻¹¹ eV. This is obviously impossible, since for each data point in Fig. 5 there must be at least one interaction (in the entire simulation), and the energy imparted by a single ionization is in the order of 10 eV.

REF definition

According to the Materials and Methods section in Gray et al. (2023), the REF is defined as the ratio of the survival rate with GNPs to that without GNPs at the same dose. Since the presence of GNPs reduces the survival rate after irradiation, this definition implies that the REF values should be less than unity, whereas the reported values in their Table 1 are higher than unity. From the Results and Discussion sections, it is evident that Gray et al. (2023) used an REF definition analogous to those used by Chithrani et al. (2010), Kaur et al. (2013), and Cui et al. (2017). In these articles, the REF was defined as

REF(S, x_g) = D_0(S) / D_{x_g}(S),  (1)

where D_0(S) and D_{x_g}(S) are the dose values that produce a survival rate S at mass fractions of gold of 0 and x_g, respectively.
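The energy bookkeeping behind the implausibility argument can be reproduced directly; all inputs are the round numbers quoted from the text:

```python
# Back-of-envelope check of the local-dose magnitude quoted from Fig. 5
# of Gray et al. (2023).
mass_kg = 1e-15                       # mass of 1 µm³ of water
dose_low, dose_high = 1e-25, 3e-24    # Gy, range read off Fig. 5
eV = 1.602e-19                        # J per eV

# Energy imparted in the 1 µm³ volume corresponding to these doses:
e_low_eV = dose_low * mass_kg / eV
e_high_eV = dose_high * mass_kg / eV

# Total energy if Fig. 5 is normalized to the 4e9 photons of the simulation:
n_photons = 4e9
total_low_eV = e_low_eV * n_photons
total_high_eV = e_high_eV * n_photons

print(e_low_eV, e_high_eV, total_low_eV, total_high_eV)
```

Even the upper end of the range stays many orders of magnitude below the ~10 eV imparted by a single ionization, which is the contradiction the commentary points out.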
This quantity was defined by the International Commission on Radiation Units and Measurements (ICRU) as the Dose Modification Ratio (DMR) (ICRU 1979). (The REF definition used by Cui et al. (2017) is identical to the one given by Eq. (1).)

Synergistic effects

In Table 2 of Gray et al. (2023), results are presented for the reduction of survival by GNPs and radiation alone and for their simultaneous application. The latter gives larger effects than the combination of the first two. The data column for the case of "radiation only" contains values that vary with GNP concentration. This is implausible, since the "radiation only" data cannot be obtained from cells containing GNPs.

Methodological concerns

Statistical significance of the change in cell survival in the presence of GNPs

The p-values shown in Fig. 7(b) and (c) of Gray et al. (2023) appear implausible given the large (overlapping) error bars. These p-values and the uncertainties of the observed REFs are not discussed in the paper. In addition, it should be noted that the survival curves were determined by fitting the linear-quadratic (L-Q) two-parameter model to only two data points. Since the model value at dose zero is independent of the two model parameters, the inclusion of the data point at zero dose does not affect the fit results. Therefore, the best-fit curve is effectively an interpolation of the two data points.

Dependence of the dose enhancement on nanoparticle shape

The reported DEF for the cubic nanoparticle is about 30% higher than the DEF for the spherical nanoparticle. This may be an artefact of the simulation geometry used in the second simulation step. In their Materials and Methods section, Gray et al. (2023) describe this second simulation step as follows: In a subsequent microscopic simulation, this energy spectrum was used for a set of six-point sources evenly distributed around a single GNP as seen in Figure 1(b). Each point source was placed 1 nm from the surface of the GNP. Their Fig.
1(b) only shows the case of the spherical GNP. For the cubic nanoparticle, it seems reasonable to assume that the point sources were placed on the lines passing through the GNP center and the centers of the faces. If this was the case, then the simulation setup may have been biased in favor of the cubic nanoparticle. This is illustrated in Fig. 1, which shows that the solid angles covered by beams emitted from the source at point P and hitting the GNP are different for the sphere and the cube. Fig. 1(a) shows a cross section through the spherical GNP (circle) and the half-cone (dashed lines) of half-opening angle θ_c,sph, which delimits the solid angle within which all beams emitted from the point source at P intersect the GNP. This solid angle is given by Eq. (2):

Ω_sph = 2π (1 − cos θ_c,sph).  (2)

Unlike for the sphere, it is not possible to find an analytical expression for the solid angle subtended by the cube. However, it is possible to give upper and lower limits. This is illustrated by Fig. 1(b), which shows a view of the front face of the cube as seen from the point source. All emitted beams incident on the plane of the front face within the long-dashed circle intersect the cube. Fig. 1(c) shows a cross section through the cube in the plane defined by points A, A', and P. Also shown is the half-cone (dashed lines) of half-opening angle θ*_c,cube, which bounds the solid angle subtended by the long-dashed circle in Fig. 1(b). Beams intersecting the plane of the front face between the long-dashed circle and the short-dashed circle hit the cube only when they strike the white areas near the corners. Hence, the solid angle subtended by the short-dashed circle is an upper bound on the actual solid angle. Fig. 1(d) shows a cross section through the cube in the plane defined by points B, B', and P, and the cross section through the half-cone (short-dashed lines) with the half-opening angle θ_c,cube corresponding to the short-dashed circle in Fig. 1(b).
Therefore, the following relation holds for the solid angle for the cube:

2π (1 − cos θ*_c,cube) ≤ Ω_cube ≤ 2π (1 − cos θ_c,cube).  (3)

In Eqs. (2) and (3), θ_c,sph, θ*_c,cube, and θ_c,cube are the half-opening angles of the conical boundaries of the solid angles shown in Fig. 1(a), (c), and (d), respectively. They are given by (cf. Fig. 2)

θ_c,sph = sin⁻¹(r/(r + d)),  θ*_c,cube = tan⁻¹(s/(2d)),  θ_c,cube = tan⁻¹(s/(√2 d)),  (4)

where r = 15 nm is the radius of the sphere, s ≈ 24.2 nm is the side of the cube, and d = 1 nm is the distance of the source from the GNP. The resulting solid angle for the sphere is 4.10 sr. The solid angle for the cube is between 5.77 sr and 5.92 sr, that is, more than 40% larger than for the sphere. This means that photons emitted in the simulations from the point source have a 40% higher probability of intersecting the GNP when it is a cube than when it is a sphere. This may be the reason for the higher DEFs found for the cube. Since this bias is larger than the difference in DEF found between a spherical and a cube-shaped nanoparticle, it could even be that for unbiased results the DEF for the cube is smaller than for the sphere.

Purpose of this commentary

The issues stated in the subsection "Observations on the paper" are based on simple plausibility arguments. The other points mentioned in the "Methodological concerns" subsection required a somewhat more advanced treatment, such as geometric considerations. Beyond highlighting the issues and concerns, this commentary is meant to answer the following questions, which require more elaborate approaches than back-of-envelope calculations: (1) What is a realistic value for the DEF in a 1 µm³ volume containing a GNP of the size considered in the study of Gray et al. (2023)? (2) What is the order of magnitude of the local dose around such a GNP? (3) What is the magnitude of the bias introduced by the point-source geometry used by Gray et al. (2023) in their second simulation step?
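The solid-angle values quoted above follow directly from Eqs. (2)-(4) and can be checked in a few lines:

```python
import math

r = 15.0  # sphere radius, nm
d = 1.0   # source distance from the surface, nm
s = r * (4.0 * math.pi / 3.0) ** (1.0 / 3.0)  # cube of equal volume, ~24.2 nm

def cap_solid_angle(theta):
    # Solid angle of a cone of half-opening angle theta, as in Eqs. (2)/(3).
    return 2.0 * math.pi * (1.0 - math.cos(theta))

# Half-opening angles from Eq. (4):
theta_sph = math.asin(r / (r + d))
theta_cube_low = math.atan(s / (2.0 * d))              # inscribed face circle
theta_cube_high = math.atan(s / (math.sqrt(2.0) * d))  # circumscribed circle

omega_sph = cap_solid_angle(theta_sph)
omega_low = cap_solid_angle(theta_cube_low)    # lower bound for the cube
omega_high = cap_solid_angle(theta_cube_high)  # upper bound for the cube

print(omega_sph, omega_low, omega_high, omega_low / omega_sph)
```

The lower bound for the cube already exceeds the sphere's solid angle by more than 40%, which is the bias the commentary identifies.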
(4) What are the uncertainties associated with the results of the radiobiological assays reported by Gray et al. (2023)? The first two questions are addressed using information available in the literature to derive quantitative estimates. The third question is answered by evaluating the interaction probabilities of photons emitted from a point source for the two GNP shapes compared to the case of uniform isotropic irradiation. For the last question, the data presented by Gray et al. (2023) are reanalyzed including uncertainty propagation.

Materials and Methods

To investigate the issues mentioned in the introduction, an estimate of the maximum possible DEF for 6 MV linac irradiation was derived using a photon energy spectrum reported by McMahon et al. (2011). This photon spectrum was also used for estimating the expected magnitude of the local dose around the GNP. The chord length distribution and mean chord lengths for the irradiation geometry used in the second simulation step of Gray et al. (2023) were determined to assess a possible bias introduced by the simulated irradiation geometry. Finally, the data shown in Fig. 7 of Gray et al. (2023) were extracted and used to determine the REF values and their uncertainties.

Estimate for the upper bound of the DEF in the simulations

The dose to a uniform mixture of gold and water under secondary electron equilibrium was determined for the mass fraction of gold corresponding to a single GNP in a 1 µm³ volume of water. As was shown previously, this dose is an upper limit to the average dose to a volume of water around a nanoparticle under secondary electron equilibrium (Rabus et al. 2019). Therefore, the maximum possible DEF in a volume of water containing a nanoparticle is given by

DEF_max = 1 + x_g × f_DEF,  (5)

where

f_DEF = ⟨E × (µ_en/ρ)_g⟩_{Φ_E} / ⟨E × (µ_en/ρ)_w⟩_{Φ_E}.  (6)

In Eqs. (5) and (6), x_g is the mass fraction of gold, and (µ_en/ρ)_g and (µ_en/ρ)_w are the mass energy absorption coefficients of gold and water, respectively.
E denotes the photon energy, and the brackets ⟨ ⟩_{Φ_E} indicate a weighted average with respect to the spectral photon fluence Φ_E. Under secondary particle equilibrium, this weighted average is the dose-to-fluence ratio. The estimate given by Eqs. (5) and (6) is an upper limit, since some of the released energy is absorbed in the GNP. A procedure to correct the values for a homogeneous mixture of gold and water for this absorption in the GNPs was developed by Koger and Kirkby (2016). In essence, the line of reasoning leading to Eq. (5) is as follows: Assume secondary charged particle equilibrium and that the nanoparticles are arranged in a regular array of voxels as shown in Fig. 1(a) of Gray et al. (2023). Then the average energy imparted in a voxel 'A' by electrons produced by photons interacting in a voxel 'B' is the same as the average energy imparted in voxel 'B' by electrons produced in voxel 'A'. Therefore, the total energy imparted in a voxel can be estimated by the total energy transferred to electrons by photon interactions in that voxel. The energy transferred by a photon of given energy is proportional to the mass energy transfer coefficient, which is approximately equal to the mass energy absorption coefficient. For a mixture of materials, the mass energy absorption coefficients have to be weighted by the mass fractions of the different components. The photon energy spectra used in the microscopic simulation were not shown in the work of Gray et al. (2021, 2023). Therefore, Eq. (5) was evaluated using the spectrum of McMahon et al. (2011); the interpolated spectrum data were then fed into an Excel template developed in earlier work (Rabus et al. 2019). Visual Basic macro functions are implemented in the Excel workbook to calculate interpolated values of the mass energy absorption coefficients of gold and water for given photon energies. In the main worksheet, numerical integrals of the dose-to-fluence ratios are calculated for gold and water under secondary particle equilibrium.
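The structure of the estimate in Eqs. (5)-(6) can be sketched as follows. The two-bin photon "spectrum" and the mass energy absorption coefficients below are placeholder numbers, not tabulated data; only the form of the calculation is taken from the text.

```python
# Illustrative evaluation of Eqs. (5)-(6) with PLACEHOLDER inputs.
E = [0.5, 2.0]           # photon energies (MeV), placeholder two-bin spectrum
phi = [0.7, 0.3]         # relative spectral fluence in each bin (placeholder)
mu_en_g = [0.08, 0.04]   # (mu_en/rho) of gold, cm²/g  (placeholder values)
mu_en_w = [0.03, 0.025]  # (mu_en/rho) of water, cm²/g (placeholder values)

def fluence_weighted_dose(mu_en):
    # <E * (mu_en/rho)>_Phi_E : the dose-to-fluence ratio under equilibrium
    return sum(p * e * m for p, e, m in zip(phi, E, mu_en)) / sum(phi)

f_def = fluence_weighted_dose(mu_en_g) / fluence_weighted_dose(mu_en_w)

x_g = 2.7e-4  # mass fraction of one 30 nm GNP in 1 µm³ of water (from the text)
def_max = 1.0 + x_g * f_def  # Eq. (5)
print(f_def, def_max)
```

Even with these placeholder coefficients the point of Eq. (5) is visible: because x_g is of order 10⁻⁴, DEF_max stays within a fraction of a percent of unity unless f_DEF were enormous, far from the factors of 6-8 reported by Gray et al. (2023).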
It should be noted that in this procedure it was found that the photon fluence shown in Fig. 1 of McMahon et al. (2011) is the fluence per eV and Gy cm², and not per keV and Gy cm² as stated in the figure caption. The photon fluence spectrum corrected for this error was used in the further analysis and is shown in Fig. 3. That this photon fluence spectrum has the correct order of magnitude can be seen from the following consideration: The mean energy of the photon spectrum is about 1.3 MeV. The corresponding mass energy transfer coefficient is about 0.03 cm²/g (Hubbell and Seltzer 2004). This gives a dose-to-fluence ratio under secondary particle equilibrium of about 6×10⁻¹² Gy cm². Therefore, a photon fluence in the order of 2×10¹¹ cm⁻² is needed to produce a dose of 1 Gy in water. The integral under the photon fluence curve in Fig. 3 is of this magnitude.

Estimate for the magnitude of the local dose around a GNP

The local dose around a GNP was estimated based on the expected number of photon interactions in the GNP and on literature data for the energy deposition or local dose around a GNP. The first dataset was from a multi-center comparison of simulated dose enhancement around GNPs under X-ray irradiation (Rabus, Li, Nettelbeck, et al. 2021). The second dataset was taken from Fig. 2. For a uniform isotropic photon field of fluence Φ (particles per area), the expected number of photon interactions, ⟨N_int⟩, in a GNP is given by (Rabus, Li, Nettelbeck, et al. 2021)

⟨N_int⟩ = Φ × ⟨µ_Au⟩_{Φ_E} × V,  (7)

where V is the volume of the GNP and ⟨ ⟩_{Φ_E} indicates a weighted average with respect to the spectral distribution of the photon fluence, Φ_E. Therefore, for a uniform isotropic photon field, the probability of a photon interaction in the GNP does not depend on the shape of the GNP volume. For an isotropic point source of a given radiant intensity dN/dΩ (particles per solid angle), the fluence of emitted particles is inversely proportional to the square of the radial distance.
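The order-of-magnitude check quoted above can be reproduced explicitly (all inputs are the values from the text):

```python
# Check of the dose-to-fluence ratio for the 6 MV spectrum.
E_mean_MeV = 1.3     # mean energy of the photon spectrum (from the text)
mu_tr_rho = 0.03     # mass energy transfer coefficient, cm²/g
J_per_MeV = 1.602e-13

# D/Phi = E * (mu_tr/rho); the factor 1000 converts cm²/g to cm²/kg,
# so the result is in Gy cm².
dose_per_fluence = E_mean_MeV * J_per_MeV * mu_tr_rho * 1000.0

# Photon fluence needed for 1 Gy in water:
fluence_for_1Gy = 1.0 / dose_per_fluence  # cm^-2

print(dose_per_fluence, fluence_for_1Gy)
```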
For the two GNP shapes of sphere and cube with point sources located at 1 nm from the GNP surface, there is therefore a different photon fluence at the GNP center. This fluence value is in the order of 50 % larger for the cube than for the sphere. However, since the source is that close to the GNP, the value at the center is not representative for the whole GNP. Instead, the interaction probability for a point source depends on the mean chord length, that is, the expectation value of the chord length (CL) distribution. The CL is the length of the path of a beam inside a given volume. The CL distribution and the mean chord length depend on the GNP shape. This is also the case for the uniform isotropic irradiation geometry, where the mean CL can be obtained according to Cauchy's theorem (Kellerer 1971) as 4/3 × r for the sphere and 2/3 × s for the cube, that is, 20 nm and about 16.1 nm, respectively. However, owing to the uniform fluence distribution, the different chords cover the volume uniformly. In contrast, for a point source outside the volume, the expected number N_int of photon interactions in the two GNP shapes is given by Eq. (8):

N_int = dN/dΩ × ⟨μ⟩_Φ_E × ⟨ℓ⟩_Ω (8)

In Eq. (8), μ is the linear attenuation coefficient of gold, ℓ is the length of the chord inside the GNP of a beam starting at the point source, and ⟨·⟩_Φ_E and ⟨·⟩_Ω indicate the average over the photon energy spectrum and the full solid angle, respectively.

Chord length distributions

In the work of Gray et al. (2023), simulations of electrons emitted from the GNP were conducted with the photon fluence obtained in the first simulation and assuming six isotropic point sources slightly outside the GNP. To obtain the mean CL for a sphere of radius r = 15 nm and a cube of the same volume (that is, a side length s of about 24.2 nm), the CL distributions were determined by random-sampling 10⁸ radial beams from a point.
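Cauchy's theorem states that for any convex body under uniform isotropic irradiation the mean chord length equals 4V/S; the two values quoted above follow directly:

```python
import math

def cauchy_mean_chord(volume, surface):
    """Mean chord length of a convex body under uniform isotropic
    irradiation (Cauchy's theorem): <l> = 4 V / S."""
    return 4.0 * volume / surface

r = 15.0                                      # sphere radius / nm
s = r * (4.0 * math.pi / 3.0) ** (1.0 / 3.0)  # side of the equal-volume cube / nm

mcl_sphere = cauchy_mean_chord(4.0 / 3.0 * math.pi * r**3, 4.0 * math.pi * r**2)  # = 4r/3
mcl_cube = cauchy_mean_chord(s**3, 6.0 * s**2)                                    # = 2s/3
```

This gives 20 nm for the sphere and about 16.1 nm for the equal-volume cube of side about 24.2 nm, as in the text.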
The point was located outside the GNP at a distance d = 1 nm from the GNP surface along one of the symmetry axes of the geometrical shape. (Considering only one point is sufficient due to the symmetry of the geometry.) The cosine of the polar angle θ (with respect to the vector from the source point to the center of the GNP) was uniformly sampled in the interval between cos θ_max and 1, where θ_max is the maximum polar angle for which a beam intersects the GNP. (For the sphere and the cube, the respective maximum angles following from Eq. (4) were used.) Exploiting the symmetry, the azimuthal angle was uniformly sampled between 0 and π/4 for the cube and was ignored for the sphere. The resulting CL distributions were normalized by multiplying with (1 − cos θ_max)/(2 n ΔL), where n is the number of beams and ΔL is the bin size of the CL histogram. CL distributions were also determined for the case of uniform isotropic irradiation. For the cube, 10⁸ chord lengths were obtained by first random-sampling a beam direction with the cosine of the polar angle uniformly distributed between −1 and 1 and the azimuth uniformly distributed between 0 and 2π. Then a random point was sampled in a plane perpendicular to this direction within a circle around the center of the cube of radius equal to s × √3/2 (where s is the side of the cube). To correct for the fraction of beams not intersecting the cube, the chord length distribution was normalized by multiplying with 1/(n_c ΔL), where n_c is the number of beams intersecting the cube and ΔL is the bin size of the CL histogram. For the sphere, the CL histogram was directly constructed from the known analytical expression (Kellerer 1971) such that the frequency density for the k-th bin was calculated as (2k − 1) × ΔL/(2r)². From the CL distributions, the mean CLs were determined by multiplying the CL value by the frequency density and the CL bin width and summing over all bins.
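For the sphere, the point-source sampling described above can be sketched as follows (the cube case additionally requires a ray-box intersection). Here the direction is sampled over the full solid angle, with a zero chord outside the intersection cone, which is equivalent to the cone-restricted sampling with subsequent renormalization; r = 15 nm and d = 1 nm as in the text:

```python
import numpy as np

rng = np.random.default_rng(42)

r = 15.0          # sphere radius / nm
d = 1.0           # source distance from the sphere surface / nm
D = r + d         # source distance from the sphere center / nm

# A beam from the source intersects the sphere only inside a cone around the
# source-center axis; cos(theta_max) follows from the tangent condition.
cos_max = np.sqrt(1.0 - (r / D) ** 2)

n = 10**6
# cos(theta) uniform in [-1, 1] samples directions uniformly over 4*pi.
mu = rng.uniform(-1.0, 1.0, n)
b = D * np.sqrt(1.0 - mu**2)       # impact parameter of each beam
chord = np.where(mu >= cos_max,
                 2.0 * np.sqrt(np.maximum(r**2 - b**2, 0.0)), 0.0)

mean_cl = chord.mean()                   # solid-angle mean CL, ~5.84 nm (Table 1)
cond_mean_cl = chord[chord > 0].mean()   # conditional mean over intersecting beams
```

The solid-angle mean reproduces the 5.84 nm of Table 1; the conditional mean over intersecting beams is considerably larger (about 18 nm).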
The conditional mean CL for beams intersecting the target was also determined by dividing the mean CL by the sum of the frequencies. For the cube and the point source, the actual solid angle was determined from the sum of the frequencies for non-zero CL.

Uncertainty of the radiation enhancement factors

In the study of Gray et al. (2023), the parameters of the L-Q model of cell survival were obtained by fitting the model to the observed survival rates. Since measurements were only performed at two dose values, the parameters of the L-Q model can be directly calculated by solving the set of two linear equations given by Eq. (9):

−ln S_c,3Gy = α_c × 3 Gy + β_c × 9 Gy² , −ln S_c,6Gy = α_c × 6 Gy + β_c × 36 Gy² (9)

In Eq. (9), ln denotes the natural logarithm, and α_c and β_c are the parameters of the L-Q model curve of cell survival. S_c,3Gy and S_c,6Gy are the observed survival rates for doses of 3 Gy and 6 Gy, respectively, at a mass fraction c of gold. The dose that produces a given survival level S is then obtained as

D_S = −α_c/(2β_c) + √[ (α_c/(2β_c))² − ln S/β_c ] (10)

The values of S_c,3Gy and S_c,6Gy and their uncertainties were read from Fig. 7 of Gray et al. (2023) using the Inkscape tool (https://inkscape.org). Using Eqs. (1) and (10) with the solution of Eq. (9), the REFs and their uncertainties were calculated. In addition, a simultaneous non-linear regression of all data with the model function shown in Eq. (11) was also performed:

−ln S_c,D = α × (1 + γ c) × D + β × [(1 + γ c) × D]² (11)

This model function assumes that the different survival curves are only due to a dose enhancement, expressed by the additional parameter γ (Eq. (6)), while the other two model parameters are independent of the gold mass fraction c. For this approach the uncertainties of the model parameters were also determined.

Results

Upper bound for the DEF in the simulations

Using the spectrum shown in Fig. 3(a) for the photon fluence reported in McMahon et al. (2011) for 1 Gy, integration of the dose-to-fluence ratios over the spectrum yields dose values of 1.09 Gy and 2.23 Gy for water and gold, respectively.
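Eqs. (9) and (10) can be implemented directly; the survival values in the round-trip check below are synthetic, not the digitized data from Fig. 7 of Gray et al. (2023):

```python
import math

def lq_params_from_two_points(S3, S6):
    """Solve -ln S = alpha*D + beta*D^2 exactly from survival at 3 Gy and
    6 Gy (Eq. (9)): two linear equations in alpha and beta."""
    y3, y6 = -math.log(S3), -math.log(S6)
    # y3 = 3*a + 9*b and y6 = 6*a + 36*b  ->  b = (y6 - 2*y3)/18, a = (y3 - 9*b)/3
    beta = (y6 - 2.0 * y3) / 18.0
    alpha = (y3 - 9.0 * beta) / 3.0
    return alpha, beta

def dose_for_survival(alpha, beta, S):
    """Dose producing survival level S (Eq. (10), positive root)."""
    h = alpha / (2.0 * beta)
    return -h + math.sqrt(h * h - math.log(S) / beta)
```

The REF for a given survival level then follows from Eq. (1) as the ratio of the doses returned by `dose_for_survival` for the two L-Q parameter sets.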
The deviation of the first value from the expected value of 1 Gy is about 9% and indicates the overall uncertainty of the procedure used here. This uncertainty includes the limited accuracy of extracting the data by digitization from a printed figure as well as interpolation of the extracted data points and of the tabulated mass energy transfer coefficients in Hubbell and Seltzer (2004). Using the values given above in Eq. (6) gives a gold-to-water ratio of approximately 2, so that from Eq. (5), a maximum DEF of 1.00055 is expected for the 6 MV photon spectrum at a mass fraction of gold of 2.7×10⁻⁴, that is, for a single spherical GNP of 15 nm radius in a 1 µm³ volume of water. For the 0.1 % and 0.15 % mass fractions used in the experiments of Gray et al. (2023), the estimated maximum DEFs are about 1.002 and 1.003, respectively. The deviation of these DEF values from unity has a relative uncertainty in the order of 10%. Since the photon energy spectrum shown in Fig. 3 is dominated by high-energy photons, the correction to be applied to account for energy absorption in the GNP can be estimated from the results shown in Koger and Kirkby (2016) to be at most 5 %. Therefore, the aforementioned DEFs represent the expected order of magnitude for the 6 MV spectrum.

CL distributions and mean CLs

Fig. 4 shows a comparison of the CL distributions for uniform isotropic irradiation and irradiation of a GNP from the point source considered in the second simulation step of Gray et al. (2023). In both cases a steep increase can be seen at about 24 nm for the CL distribution of the cube. This is not an artefact but rather reflects the fact that, starting with this CL, lines leaving the cube through the back side (as seen from the source point) contribute to the distribution in addition to lines leaving the cube on the side. For both GNP shapes, significant changes can be seen between the two irradiation geometries.
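The quoted DEF upper limits can be reproduced if Eq. (5) is assumed to reduce to (DEF − 1) = c × R, with R the gold-to-water ratio of the two dose values above; this proportional form is an assumption inferred from the quoted numbers, not taken verbatim from the paper:

```python
# Assumed form of the upper-limit estimate: (DEF - 1) = c_gold * R, with R
# the gold-to-water ratio 2.23/1.09 obtained for the 6 MV spectrum.
def def_upper_bound(c_gold, ratio_au_w=2.23 / 1.09):
    return 1.0 + c_gold * ratio_au_w
```

For mass fractions of 2.7×10⁻⁴, 0.1 % and 0.15 % this reproduces the quoted upper limits of 1.00055, 1.002 and 1.003.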
For the sphere, the linear distribution for uniform isotropic irradiation turns into a curve with saturation behavior for the point source. For the cube, a peak appears at about half the side length, and the range of CL values is reduced for the point source. This is easily understandable, since some beam geometries, such as beams passing through opposing corners or edges of the cube, are not possible for the point source geometry. The frequency density of the CL distribution relating to the point source is lower than that of the uniform isotropic case. This reflects the fact that for the point source only a fraction of the full solid angle is covered by beams intersecting the GNP (Fig. 1). The mean CLs for the uniform isotropic irradiation of the sphere and the cube are obtained as 20 nm and about 16.1 nm, respectively, in accordance with Cauchy's theorem. For the point source, the mean CLs amount to about 7.08 nm for the cube and 5.84 nm for the sphere (Table 1). The ratio of these mean CLs for the cube and the sphere is approximately 1.21. This means that the mean CL and the interaction probability of a photon are about 21 % higher for the cube than for the sphere. Thus, a major part of the larger dose enhancement from the cube of about 30% (DEF of 8 vs. DEF of 6 for the sphere) appears to be due to a bias introduced by the way the photon fluence was sampled in the work of Gray et al. (2023).

Local dose per photon

For the case of uniform isotropic irradiation, the probability of a photon interaction in the GNP calculated with Eq. (7) is 6.36×10⁻⁵ for a fluence producing a dose of 1 Gy (about 1.9×10¹¹ cm⁻²). For the point source geometry used in the second simulation of Gray et al. (2023), the probability of a photon interaction occurring in the GNP as obtained from Eq. (8) is 1.50×10⁻⁵ per emitted photon for the sphere. For the cube, the corresponding probability is about 1.82×10⁻⁵ (Table 1).
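These probabilities can be reproduced to within their roughly 10 % uncertainty from Eqs. (7) and (8). The fluence-weighted attenuation coefficient of 24 cm⁻¹ used below is an assumed value back-solved from the quoted probabilities, not a number taken from the paper:

```python
import math

MU_MEAN = 24.0      # fluence-weighted linear attenuation coefficient of gold
                    # in cm^-1 (assumed, back-solved from the quoted values)
PHI_1GY = 1.9e11    # photon fluence producing 1 Gy in water / cm^-2

r_cm = 15e-7        # GNP radius (15 nm) in cm
V = 4.0 / 3.0 * math.pi * r_cm**3     # GNP volume / cm^3

# Eq. (7): uniform isotropic field
p_uniform = PHI_1GY * MU_MEAN * V

# Eq. (8) per emitted photon: product of mu and the solid-angle mean chord
# length of Table 1 (5.837 nm for the sphere, 7.076 nm for the cube)
p_point_sphere = MU_MEAN * 5.837e-7
p_point_cube = MU_MEAN * 7.076e-7
```

The cube-to-sphere ratio of the point-source probabilities equals the mean-CL ratio of about 1.21, independent of the assumed attenuation coefficient.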
In the aforementioned multi-center comparison, the mean energy imparted by electrons emitted from the GNP under X-ray irradiation was found to be more or less constant at distances from the GNP surface between about 150 nm and 1000 nm. Since the GNPs considered in that work had diameters of 50 nm and 100 nm, this translates into distances from the GNP center of 200 nm or more. The amount of energy imparted per 10 nm spherical shell was in the order of 40 eV per photon interaction in the GNP. A 10 nm thick spherical shell of 200 nm mean radius has a volume of about 5×10⁻³ µm³, which corresponds to a mass of about 5×10⁻¹⁸ kg. Therefore, the estimated dose per photon interaction in the GNP at 200 nm from the GNP is about 1.3 Gy. For the GNP sizes considered by Gray et al. (2023), the estimated dose at this distance amounts to about 2.5×10⁻⁵ Gy. McMahon et al. (2011) presented in their Fig. 2 the dose as a function of radial distance from the GNP center. At 200 nm distance, a dose of 0.18 Gy per ionization in the GNP was found for the 6 MV linac spectrum and a GNP of 2 nm diameter. Applying this value of dose per interaction to a spherical GNP of 30 nm diameter gives an estimated dose at 200 nm from the GNP center of about 3.5×10⁻⁶ Gy. That the two estimates of the local dose are different is easily understood. The data from the multi-center comparison were determined for irradiation of the GNP with 50 kVp and 100 kVp X-ray spectra. The photons of these spectra are in an energy range in which photoabsorption is the dominant process of photon interaction in gold (Rabus et al. 2019). The spectra have a large component of low-energy photons that produce photoelectrons of energies comparable to the energies of the gold L-shell Auger electrons. In contrast, the photon fluence spectrum used by McMahon et al. (2011) (shown in Fig. 3) has only a small contribution of photons with energies below 100 keV. This means that most photoelectrons have energies much higher than those of the gold L-shell Auger electrons and, thus, a much smaller energy loss in the vicinity of the GNP. Furthermore, about 50% of the photons have energies in the range above 500 keV, where incoherent Compton scattering is the dominant interaction process in gold (Rabus et al. 2019). Therefore, one may expect a significant reduction of the dose around the GNP when moving from an X-ray spectrum, as used in the multi-center comparison, to a 6 MV photon spectrum as used by McMahon et al. (2011). The value shown in Fig. 5 of Gray et al. (2023) for the 6 MV irradiation and the spherical GNP is about 1.8×10⁻²⁵ Gy. Assuming that the estimate derived from the data of McMahon et al. (2011) is the more relevant one, it appears that these values are too small by a factor in the order of 5×10⁻²⁰.

Table 1: Mean chord lengths for the cubic and spherical GNP in the point source geometry and the estimated resulting probabilities of photon interaction in the GNP for the photon fluence spectrum shown in Fig. 3. The uncertainties of the mean CLs and their ratio are the standard deviation from 100 batches of 10⁶ random samples. The uncertainty of the interaction probability is dominated by the uncertainty of the fluence-mean of the linear attenuation coefficient ⟨μ⟩_Φ_E.

GNP shape                          sphere                  cube
Mean CL / nm                       5.837 ± 0.003           7.076 ± 0.004
Interaction probability            (1.50 ± 0.15) × 10⁻⁵    (1.82 ± 0.18) × 10⁻⁵
Ratio of mean CL, cube to sphere   1.212 ± 0.004

L-Q model parameters and REFs

The results of the analysis of the cell survival data presented in Fig. 7 of Gray et al. (2023) are listed in Table 2. Similar to the values reported by Gray et al. (2023), the REF values are higher for the higher survival rate, and the deviation from unity doubles for 0.15 % mass fraction compared to 0.1 %. However, the uncertainties are so large for all parameters that the changes are not statistically significant.
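The shell-dose arithmetic above (40 eV imparted in a 10 nm thick shell at 200 nm radius) can be checked with a thin-shell approximation:

```python
import math

EV_TO_J = 1.602176634e-19

def shell_dose(energy_ev, r_shell_nm, t_shell_nm, density=1000.0):
    """Dose in a thin spherical shell of water: imparted energy divided by
    the shell mass (thin-shell approximation, volume = 4*pi*r^2*t;
    density in kg/m^3)."""
    volume = 4.0 * math.pi * (r_shell_nm * 1e-9) ** 2 * (t_shell_nm * 1e-9)
    return energy_ev * EV_TO_J / (volume * density)
```

`shell_dose(40, 200, 10)` gives about 1.3 Gy per photon interaction; multiplying by the per-photon interaction probabilities of Table 1 gives the order of 2×10⁻⁵ Gy per emitted photon, consistent with the estimate in the text.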
The last three rows of Table 2 show the parameters obtained by fitting Eq. (11) to all data. The values of α and β from the global fit are approximately equal to the means of the individual α_c and β_c, respectively. This is expected, as is the observation that the uncertainties are much smaller than for the individual α_c and β_c. The value of the parameter γ is comparatively large, given that for the 6 MV photon spectrum a factor of approximately 2 was found for the constant of proportionality between (DEF − 1) and the gold mass fraction c (Eq. (5)). For the 18 MV radiation, the photon fluence spectrum is expected to have more high-energy contributions, for which the mass energy absorption coefficient should be smaller than for the fluence spectrum deriving from 6 MV linac radiation.

Discussion

Magnitude of the simulated local dose and DEF

The contradiction between the radiobiological experiments and the simulations in Gray et al. (2023) seems to be due to incorrect results obtained from the simulations. The reported DEF values in the 1 µm³ water volume are much higher than what one expects from the energy transfer coefficients of photons. For the 6 MV photon fluence spectrum from the work of McMahon et al. (2011), a maximum DEF was estimated to be about 1.00055 for a mass fraction of gold of 2.7×10⁻⁴ (1 µm³ water volume containing a cubic GNP of 24 nm side). Similarly, DEFs of about 1.002 and 1.003 would apply for the cell experiments with 0.1 % and 0.15 % mass fraction of gold, respectively, if they were conducted with 6 MV irradiation instead of 18 MV. In the work of Gray et al. (2023), the DEF is stated to be determined from the microscopic simulations with GNP and the dose obtained from the first macroscopic simulation for a volume of water without GNP inside. The latter dose value was obtained under conditions of secondary electron equilibrium (Gray et al. 2021). However, for the dose with the GNP, only the energy imparted by electrons emitted from the GNP (as determined in the second simulation) was scored.
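A minimal, numpy-only sketch of the simultaneous fit of Eq. (11) is given below, assuming the model −ln S = α(1 + γc)D + β[(1 + γc)D]². Since the model is linear in α and β for fixed γ, a grid search over γ with a linear least-squares solve at each step suffices; the data values are synthetic, generated from hypothetical parameters, not the digitized survival data:

```python
import numpy as np

def fit_global_lq(c, D, lnS, gamma_grid):
    """Fit -ln S = alpha*(1+gamma*c)*D + beta*((1+gamma*c)*D)^2 to all
    survival data simultaneously: for each trial gamma, alpha and beta
    follow from a linear least-squares solve; keep the best gamma."""
    best = (np.inf, None, None, None)
    for g in gamma_grid:
        Deff = (1.0 + g * c) * D
        A = np.column_stack([Deff, Deff**2])
        coef, *_ = np.linalg.lstsq(A, -lnS, rcond=None)
        resid = -lnS - A @ coef
        sse = float(resid @ resid)
        if sse < best[0]:
            best = (sse, g, coef[0], coef[1])
    return best[1], best[2], best[3]

# Hypothetical noiseless data generated from known parameters
alpha0, beta0, gamma0 = 0.10, 0.02, 5.0
c = np.repeat([0.0, 0.001, 0.0015], 2)    # gold mass fractions
D = np.tile([3.0, 6.0], 3)                # doses / Gy
lnS = -(alpha0 * (1 + gamma0 * c) * D + beta0 * ((1 + gamma0 * c) * D) ** 2)

g_fit, a_fit, b_fit = fit_global_lq(c, D, lnS, np.linspace(0.0, 10.0, 1001))
```

On noiseless data the grid search recovers the generating parameters; with real, noisy survival data the parameter uncertainties would additionally have to be propagated, as done in the analysis described above.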
This simulation only considered the single 1 µm³ volume containing a GNP and, therefore, a situation of charged particle disequilibrium. This is expected to result in a large underestimation of the dose in the 1 µm³ volume when GNPs are present. Nevertheless, the reported DEFs were much larger than unity, which is an indication that the doses without and with GNP were determined under different conditions of secondary particle equilibrium. The DEFs of the average dose in the 1 µm³ water volume deviated from unity by more than a factor of 10,000 with respect to the deviation from unity of the above upper bound for the 6 MV spectrum. In contrast, the values for the local dose per photon emitted from the point source shown in Fig. 5 of Gray et al. (2023) are about 20 orders of magnitude smaller than the values estimated based on the two different approaches used. Since not much detail is given in the work of Gray et al. (2023) on how the different steps of the simulations were linked, the reason for these obviously wrong results remains obscure. One possible source of error is the conversion of the photon fluence (particles per area) from the first simulation into the radiant intensity (particles per solid angle) of the point sources used in the second simulation. The most probable explanation for the conflicting directions of the deviations from the expected values or orders of magnitude, for the dose in a 1 µm³ water volume containing a GNP and for the local dose around this GNP, is the occurrence of two errors related to normalization.

Table 2: Results of the analysis of the cell survival data presented in Gray et al. (2023) following the approach of determining the radiosensitivity enhancement factor (REF) according to Eq. (1): α and β are the parameters of the L-Q model, D_S is the dose corresponding to a given survival level S according to Eq. (10). The last three rows are the best-fit parameters of all data to Eq. (11).
Dependence of the dose enhancement on nanoparticle shape

The analysis of the chord length distributions and mean chord lengths for the point source considered by Gray et al. (2023) showed that this irradiation geometry produces a higher probability of a photon interaction in the cube-shaped GNP. The effect was an about 21% increase for the cubic GNP compared to the spherical one. This accounts for a large proportion of the 30 % higher DEF of the cubic compared to the spherical nanoparticle. The unresolved issues regarding the magnitude of the local absorbed dose and the average dose in the 1 µm³ water volume lead to the question whether the differences between the cube and the sphere still remain when these issues are resolved. The photon field can be expected to be uniform over the small dimensions of the nanoparticle. Therefore, the probability of photons interacting in the nanoparticles is expected to be proportional to the nanoparticle's volume (Rabus et al. 2019) and, hence, should be the same for the two shapes of nanoparticle (at least when either the photon fluence is isotropic or the non-spherical nanoparticles are randomly oriented). Thus, an increased dose for the cube as compared to the sphere should be solely due to the difference in the emitted electron spectra. The surface-to-volume ratio is about 25% higher for the cubic nanoparticle compared with the spherical one. This will lead to an enhanced emission of low-energy electrons (Au M- and N-shell Auger electrons and low-energy secondaries), which are stopped within the first 150 nm from the GNP surface. This contribution can be expected to show an increase proportional to the increased surface area. In contrast, no significant change with GNP shape is expected for the energy imparted by all higher-energy electrons in the 1 µm³ water volume around the GNP.
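The quoted surface-to-volume comparison for equal volumes follows from elementary geometry:

```python
import math

r = 15.0                                      # sphere radius / nm
s = r * (4.0 * math.pi / 3.0) ** (1.0 / 3.0)  # side of the equal-volume cube / nm

sv_sphere = (4.0 * math.pi * r**2) / (4.0 / 3.0 * math.pi * r**3)  # = 3/r
sv_cube = (6.0 * s**2) / s**3                                      # = 6/s
sv_ratio = sv_cube / sv_sphere                                     # = 2*r/s
```

For equal volumes the ratio is 2r/s = 2/(4π/3)^(1/3) ≈ 1.24, i.e. the cube's surface-to-volume ratio is about 25 % larger, independent of the absolute size.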
This is because the energy imparted per unit path length by electrons at larger distances up to 1 µm from the GNP (mainly from gold L-shell Auger electrons and electrons of similar energies) was found to be almost constant and to be approximately independent of the photon energy spectrum for X-ray spectra. For higher photon energy spectra, Compton electrons and photoelectrons produced in the GNP will generally have such high energies that their energy loss within the first 1 µm around the GNP is negligible, such that the energy imparted is only from gold L-shell Auger electrons in this case. The energy imparted in the 1 µm³ volume is thus composed of the contribution of low-energy electrons from the GNP, which is expected to increase with surface area, and the contribution of electrons of higher energies, which is independent of the GNP shape. Therefore, the dose in a 1 µm³ water volume around a cubic GNP is expected to be increased compared to the case of a spherical GNP. Consider a sphere of 1 µm³ volume with a radius of 625 nm. From the data shown in Fig. 6 of the multi-center comparison, approximately one third of the energy imparted within the first 625 nm around the GNP is deposited within the first 100 nm. Two thirds of this energy can be attributed to low-energy electrons that stop in this range, which makes up about 25 % of the total energy imparted in a sphere with 625 nm radius. When this contribution increases by 25 %, the overall increase of dose in the volume is in the order of 6 %. This is of similar magnitude as the difference between the 30 % increase in DEF reported by Gray et al. (2023) and the 21 % bias introduced by the irradiation geometry in their second simulation step. This means that there may remain a small benefit of using cubic instead of spherical GNPs.
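The order-of-magnitude estimate of the preceding paragraph is a two-line calculation (all input values as quoted in the text):

```python
# Extra dose from a cubic vs. a spherical GNP of equal volume, estimated
# from the surface-area-scaled low-energy electron contribution.
frac_within_100nm = 1.0 / 3.0   # share of energy imparted in the first 100 nm
frac_low_energy = 2.0 / 3.0     # share of that due to stopping low-energy electrons
low_e_share = frac_within_100nm * frac_low_energy   # ~0.22 ("about 25 %")
surface_increase = 0.25         # ~25 % larger surface area of the cube
dose_increase = low_e_share * surface_increase      # ~0.06, i.e. "about 6 %"
```

Only the low-energy share scales with surface area, so the overall dose increase stays in the order of 6 %.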
Significance of the REF values

The analysis presented in the subsection "L-Q model parameters and REFs" showed that the uncertainties associated with the radiobiological data are too high to make a statistically significant case. The alternative approach of simultaneously analyzing all data with the assumption that the observed changes can be described by dose enhancement alone gave a proportionality factor for the excess dose contribution from the GNPs that is an order of magnitude higher than the value estimated for the 6 MV irradiation. This is implausible, since for the higher photon energy spectrum, incoherent scattering is more important and produces higher-energy electrons than the Auger electrons emitted after photoabsorption. Therefore, if the reduced survival were only due to an increase of the average dose, the parameter γ would be expected to be smaller for the 18 MV irradiation. This expectation is also supported by the simulations of Gray et al. (2023), where a higher effect was found for the 6 MV spectrum. While the REF values and their changes are not statistically significant, the alternative analysis presented here shows that the reduced survival in the presence of GNPs is not an effect of the average dose enhancement and that other factors, such as the local dose enhancement (McMahon et al. 2011; Butterworth et al. 2012; Zygmanski and Sajo 2016; Kirkby et al. 2017; Hullo et al. 2021) or chemical effects, have to be considered as well.

Conclusions

The work of Gray et al. (2023) contains a contradiction between the results of their radiobiological assays and Monte Carlo simulations, as well as between different results obtained from these simulations. As discussed here, the reported dose enhancement factors for one GNP in a 1 µm³ volume are inconsistent with upper limits estimated from the principle of energy conservation.
The deviations of the DEFs from unity (no dose enhancement) are higher by a factor of about 10,000 than the difference between the upper bounds and unity. At the same time, the data presented by Gray et al. (2023) for the local dose around a GNP are about 20 orders of magnitude too low. This suggests that these results are compromised by two different normalization errors. The analysis presented here further indicates that the large variation of dose enhancement with nanoparticle shape (for the same volume) seems to be mostly due to a bias introduced by the simulation setup for the microscopic simulation of photons interacting with the GNPs. In addition, the reanalysis of the cell survival data revealed that the uncertainties of the radiosensitization enhancement factors obtained by fitting the L-Q model to just two data points are so large that the differences appear not to be statistically significant.

Fig. 1: Illustrations (to scale) of the irradiation geometry used in the simulations by Gray et al. (2023). (a) Cross-sectional view of the sphere and the solid angle (gray shaded area) covered by beams emitted from the source at point P that intersect the GNP. Point C is the GNP center. Points T and T' denote the tangential points of the dashed lines and the circle. (b) "Front view" of the irradiation of the cubic GNP. The long-dashed line indicates the boundary of the solid angle within which all beams emitted from the point source intersect the cube. The short-dashed line indicates the solid angle within which the beams intersect the cube for some azimuthal angles. Point D is the center of the cube face; points A and A' are the centers of two opposite edges of the cube face; B and B' are two opposite corners on that face. (c) and (d) Cross sections through the cube in the planes PAA' and PBB', respectively, and through the solid angles indicated by the long- and short-dashed lines in (b).

Fig. 2: Illustrations (to scale) of the triangles used for determining the solid angles in Fig. 1 (a), (c) and (d).

Fig. 3: (a) Photon fluence spectrum in 5 cm of water for a 6 MV linac source. The data were taken from Fig. 1 in the paper of McMahon et al. (2011), digitized using WebPlotDigitizer (https://apps.automeris.io/wpd/), interpolated using GDL, and corrected (see text). (b) The same photon fluence spectrum plotted in "microdosimetry style" with a logarithmic x-axis and a y-axis showing the fluence multiplied by the photon energy and the natural logarithm of 10. In this way, the area under the curve is representative of the contribution of the different energy regions to the total photon fluence.

Fig. 4: Chord length distribution within a sphere of 30 nm diameter and a cube of the same volume for (a) a uniform isotropic radiation field and (b) an isotropic point source located outside the respective volume, at a radial distance of 1 nm from the sphere surface and at a linear distance of 1 nm along the surface normal from the center of one of the squares forming the surface of the cube.

Description of the rows of Table 2: The first row shows the gold concentration (mass fraction). The second and third rows list the parameters of the L-Q model applied to the data of each mass fraction. The values of α_c show an increasing trend with increasing mass fraction, whereas the values of β_c decrease. The next two rows give the intermediate results for the doses needed to produce a cell survival rate of 0.3 and 0.6, respectively. These values show a decreasing trend with increasing mass fraction. The ensuing two rows are the corresponding REF values, which are similar to the values reported in the literature. (It should be noted that Cui et al. (2017) used a slightly more intricate definition of the REF. However, when the same survival rates are considered for the absence of GNPs and their presence, their definition agrees with the one used here: the radiosensitization enhancement factor (REF) is the ratio of the dose producing a given cell survival percentage in the presence of GNPs to the dose producing the same cell survival percentage in the absence of GNPs.)

Acknowledgements: L.T. acknowledges funding by (source to be inserted after peer review).

References

Butterworth KT, McMahon SJ, Currell FJ, Prise KM. 2012. Physical basis and biological mechanisms of gold nanoparticle radiosensitization. Nanoscale. 4(16):4830-4838. https://doi.org/10.1039/C2NR31227A

Chithrani DB, Jelveh S, Jalali F, van Prooijen M, Allen C, Bristow RG, Hill RP, Jaffray DA. 2010. Gold Nanoparticles as Radiation Sensitizers in Cancer Therapy. Radiation Research. 173(6):719-728. https://doi.org/10.1667/RR1984.1

Cui L, Her S, Borst GR, Bristow RG, Jaffray DA, Allen C. 2017. Radiosensitization by gold nanoparticles: Will they ever make it to the clinic? Radiotherapy and Oncology. 124(3):344-356. https://doi.org/10.1016/j.radonc.2017.07.007

Gray T, Bassiri N, David S, Patel DY, Stathakis S, Kirby N, Mayer KM. 2021. A detailed experimental and Monte Carlo analysis of gold nanoparticle dose enhancement using 6 MV and 18 MV external beam energies in a macroscopic scale. Applied Radiation and Isotopes. 171:109638. https://doi.org/10.1016/j.apradiso.2021.109638

Gray TM, David S, Bassiri N, Patel DY, Kirby N, Mayer KM. 2023. Microdosimetric and radiobiological effects of gold nanoparticles at therapeutic radiation energies. International Journal of Radiation Biology. 99(2):308-317. https://doi.org/10.1080/09553002.2022.2087931

Hubbell JH, Seltzer SM. 2004. Tables of X-Ray Mass Attenuation Coefficients and Mass Energy-Absorption Coefficients from 1 keV to 20 MeV for Elements Z = 1 to 92 and 48 Additional Substances of Dosimetric Interest (version 1.4). Gaithersburg, MD: National Institute of Standards and Technology. [Online] Available at: http://physics.nist.gov/xaamdi. https://doi.org/10.18434/T4D01F

Hullo M, Grall R, Perrot Y, Mathé C, Ménard V, Yang X, Lacombe S, Porcel E, Villagrasa C, Chevillard S, Bourneuf E. 2021. Radiation Enhancer Effect of Platinum Nanoparticles in Breast Cancer Cell Lines: In Vitro and In Silico Analyses. IJMS. 22(9):4436. https://doi.org/10.3390/ijms22094436

ICRU, editor. 1979. Quantitative concepts and dosimetry in radiobiology. Washington, D.C.: International Commission on Radiation Units and Measurements.

Kaur H, Pujari G, Semwal MK, Sarma A, Avasthi DK. 2013. In vitro studies on radiosensitization effect of glucose capped gold nanoparticles in photon and ion irradiation of HeLa cells. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms. 301:7-11. https://doi.org/10.1016/j.nimb.2013.02.015

Kellerer AM. 1971. Considerations on the Random Traversal of Convex Bodies and Solutions for General Cylinders. Radiation Research. 47:359-376.

Kirkby C, Koger B, Suchowerska N, McKenzie DR. 2017. Dosimetric consequences of gold nanoparticle clustering during photon irradiation. Medical Physics. 44(12):6560-6569. https://doi.org/10.1002/mp.12620

Koger B, Kirkby C. 2016. A method for converting dose-to-medium to dose-to-tissue in Monte Carlo studies of gold nanoparticle-enhanced radiotherapy. Physics in Medicine and Biology. 61(5):2014-2024. https://doi.org/10.1088/0031-9155/61/5/2014

Lindborg L, Waker A. 2017. Microdosimetry: Experimental Methods and Applications. Boca Raton: CRC Press.

McMahon SJ, Hyland WB, Muir MF, Coulter JA, Jain S, Butterworth KT, Schettino G, Dickson GR, Hounsell AR, O'Sullivan JM, et al. 2011. Nanodosimetric effects of gold nanoparticles in megavoltage radiation therapy. Radiotherapy and Oncology. 100(3):412-416. https://doi.org/10.1016/j.radonc.2011.08.026

Mirza JA, Choi K, Sung W, Jung S, Ye S-J. 2019. Measuring radioenhancement by gold nanofilms: Comparison with analytical calculations. Physica Medica. 68:1-9. https://doi.org/10.1016/j.ejmp.2019.10.040

Rabus H, Gargioni E, Li W, Nettelbeck H, Villagrasa C. 2019. Determining dose enhancement factors of high-Z nanoparticles from simulations where lateral secondary particle disequilibrium exists. Physics in Medicine and Biology. 64(15):155016. https://doi.org/10.1088/1361-6560/ab31d4

Rabus H, Li WB, Nettelbeck H, Schuemann J, Villagrasa C, Beuve M, Di Maria S, Heide B, Klapproth AP, Poignant F, et al. 2021. Consistency checks of results from a Monte Carlo code intercomparison for emitted electron spectra and energy deposition around a single gold nanoparticle irradiated by X-rays. Radiation Measurements. 147:106637. https://doi.org/10.1016/j.radmeas.2021.106637

Rabus H, Li WB, Villagrasa C, Schuemann J, Hepperle PA, de la Fuente Rosales L, Beuve M, Di Maria S, Klapproth AP, Li CY, et al. 2021. Intercomparison of Monte Carlo calculated dose enhancement ratios for gold nanoparticles irradiated by X-rays: Assessing the uncertainty and correct methodology for extended beams. Physica Medica. 84(1):241-253. https://doi.org/10.1016/j.ejmp.2021.03.005

Rossi HH, Zaider M. 1996. Microdosimetry and its Applications. Berlin, Heidelberg, New York: Springer. https://doi.org/10.1007/978-3-642-85184-1
https://doi.org/10.1007/978-3-642-85184-1 Nanoscale radiation transport and clinical beam modeling for gold nanoparticle dose enhanced radiotherapy (GNPT) using X-rays. P Zygmanski, E Sajo, 10.1259/bjr.20150200British Journal of Radiology. 89Zygmanski P, Sajo E. 2016. Nanoscale radiation transport and clinical beam modeling for gold nanoparticle dose enhanced radiotherapy (GNPT) using X-rays. British Journal of Radiology. 89(1059):20150200. https://doi.org/10.1259/bjr.20150200
{'fraction_non_alphanumeric': 0.0407400187840017, 'fraction_numerical': 0.033439067179387216, 'mean_word_length': 4.284884809889492, 'pattern_counts': {'":': 0, '<': 1, '<?xml version=': 0, '>': 3, 'https://': 19, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 1, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'In the recently published article by T.M. Gray et al. "Microdosimetric and radiobiological effects of gold nanoparticles at therapeutic radiation energies" (IJRB 2023, 99(2), 308-317) results of Monte Carlo simulations and radiobiological assays on the dosimetric effects of gold nanoparticles were presented. This commentary points out that the results of the two parts of the study are in contradiction and that the predicted magnitude of dose enhancement and its dependence on the shape of the nanoparticle appear implausible. Possible reasons for these observations are discussed.', 'arxivid': '2304.11392', 'author': ['Hans Rabus *[email protected] \nPhysikalisch-Technische Bundesanstalt (PTB)\nBerlinGermany\n', 'Miriam Schwarze \nPhysikalisch-Technische Bundesanstalt (PTB)\nBerlinGermany\n', 'Leo Thomas \nPhysikalisch-Technische Bundesanstalt (PTB)\nBerlinGermany\n'], 'authoraffiliation': ['Physikalisch-Technische Bundesanstalt (PTB)\nBerlinGermany', 'Physikalisch-Technische Bundesanstalt (PTB)\nBerlinGermany', 'Physikalisch-Technische Bundesanstalt (PTB)\nBerlinGermany'], 'corpusid': 258298485, 'doi': None, 'github_urls': [], 'n_tokens_mistral': 16346, 'n_tokens_neox': 13636, 'n_words': 9321, 'pdfsha': '3f8eef831d6edc96ca86f11b7ef142b4d3daf91b', 'pdfurls': ['https://export.arxiv.org/pdf/2304.11392v1.pdf'], 'title': ['Article Commentary on "Microdosimetric and radiobiological effects of gold nanoparticles at therapeutic radiation energies', 'Article Commentary on "Microdosimetric and radiobiological effects of gold nanoparticles at therapeutic radiation energies'], 'venue': ['IJRB']}
arxiv
Non-Hermiticity and Universality

Pragya Shukla
Department of Physics, Indian Institute of Technology, Kharagpur, India

1 May 2001

We study the statistical properties of the eigenvalues of non-Hermitian operators associated with dissipative complex systems. By considering Gaussian ensembles of such operators, a hierarchical relation between the correlators is obtained. Further, the eigenvalues are found to behave like particles moving on a complex plane under two-body (inverse-square) and three-body interactions, and there seems to be an underlying deep connection and universality in the spectral behaviour of different complex systems.

PACS numbers: 05.45+b, 03.65 sq, 05.40+j

Random non-Hermitian operators play a significant role in the dynamics of a variety of complex systems, e.g. dissipative quantum systems [1], chaotic quantum scattering [2], neural network dynamics [3], the statistical mechanics of flux lines in superconductors with columnar disorder [4], classical diffusion in random media [5], and biological growth problems [6]. The study of the statistical properties of their eigenvalues and eigenvectors is therefore relevant and has been of great interest in the recent past.

The random nature of a quantum operator reveals itself through a distribution of values for each of its matrix elements. The type of the distribution depends on the complexity of the system, resulting in a variety of non-Hermitian random matrices (NHRM) for various dissipative cases. In this letter, we develop a technique to study the spectral properties of Gaussian ensembles of NHRM. The technique is based on a mapping of the eigenvalue distribution of a general NHRM to a non-stationary state of a classically integrable Hamiltonian. The latter, a variant of the Calogero-Sutherland (CS) Hamiltonian in two dimensions, is the generator of the dynamics of $N$ particles interacting via long-ranged two-body and three-body interactions and confined by a harmonic oscillator potential [7,8].
A similar technique has already been applied to Hermitian operators, where it maps the eigenvalue distribution to a state of the CS Hamiltonian in one dimension [9]; the known particle correlators for the latter case are then used to determine the eigenvalue correlations. A detailed knowledge of the non-stationary states of the two-dimensional Calogero system can therefore be useful in dealing with a variety of NHRM.

As is well known, the CS Hamiltonian is a fully integrable system, with particles evolving in an ordered way with respect to time [10]; this implies a strong correlation between various particle states at different times. Our mapping thus reveals a very interesting feature of the eigenvalues of the operators associated with both conservative and dissipative dynamics: the eigenvalues evolve in a highly ordered, correlated way as the degree or the nature of the complexity changes. This implies a connection between the statistical nature of the eigenvalues of two different complex systems. As the interaction in the corresponding CS system is of $1/r^2$ type, one-dimensional for conservative and two-dimensional for dissipative systems, regardless of the nature of the complexity, a great deal of universality among the physical properties (those related to eigenvalue correlations) of complex systems seems to be present. Universality in the eigenvalue statistics of such operators in the regime of weak non-Hermiticity was indicated by another study too [11].

We consider an ensemble of $N \times N$ non-Hermitian matrices $H$ defined by a Gaussian measure $\tilde\rho(H)$, where
$$\tilde\rho(H,y) \propto \exp\Big[-\sum_{s=1}^{\beta}\sum_{k,l}\big(y_{kl;s}H_{kl;s}^2 + x_{kl;s}H_{kl;s}H_{lk;s}\big)\Big] = C\rho(H),$$
with $C$ the normalization constant and $y$ and $x$ related to the variances and covariances of the various matrix elements. Here the subscript $s$ on a variable refers to one of its components, real ($s=1$) or imaginary ($s=2$), with $\beta$ the total number of components.
The above choice of $\rho$ is made so as to include a large class of NHRM ensembles (for example, $y_{kl} = N/(1-\tau^2)$, $x_{kl;s} = x_{lk;s} = (-1)^{s-1}\tau N/(1-\tau^2)$ give the GUE ($\tau=1$), the Ginibre ensemble ($\tau=0$) and the ensemble of complex antisymmetric matrices, referred to as GASE later on ($\tau=-1$) [12]; see [13] for the eigenvalue statistics of the cases with $0 \le \tau \le 1$). A non-Hermitian matrix can be diagonalized by a transformation of the type $\Lambda = UHV$, with $\Lambda$ the matrix of eigenvalues $\lambda_j$ and $U$ and $V$ the left and right eigenvector matrices, respectively.

Let us first consider the case of an ensemble of non-Hermitian complex matrices ($\beta=2$). Here the eigenvalues $\lambda_j \equiv \sum_{r=1}^{2} (i)^{r-1}\lambda_{jr}$ are, in general, distributed over an area in the complex plane. Let $\tilde P(z,y)$ be the probability of finding eigenvalues $\lambda_i$ of $H$ between $z_i$ and $z_i + dz_i$ at a given $y$:
$$\tilde P(z,y) = C\int \prod_{i=1}^{N}\delta(z_i-\lambda_i)\,\delta(z_i^*-\lambda_i^*)\,\rho(H,y)\,dH = CP \qquad (1)$$
with $P$ the unnormalized distribution, $P(z,y) = \int f(z,z^*)\,\rho(H,y)\,dH$, where $f(z,z^*) = \prod_{i=1}^{N}\delta(z_i-\lambda_i)\,\delta(z_i^*-\lambda_i^*)$.

The degree of difficulty associated with solving the integral in eq. (1) motivates us to seek another route. We attempt to obtain an evolution equation for $P(z,y)$, describing eigenvalues moving on the complex plane as the distribution parameters change. As in the Hermitian case [9], we consider a combination of the rates of change of $P$ in the parametric space, namely the sum
$$S \equiv \sum_{s=1}^{\beta}\sum_{k,l}(\gamma + (-1)^s x_{kl;s})\Big[y_{kl;s}\frac{\partial P}{\partial y_{kl;s}} + x_{kl;s}\frac{\partial P}{\partial x_{kl;s}}\Big],$$
and attempt to express it in terms of the rates of change of $P$ in the eigenvalue space. (The reason behind this choice of $S$ is that it can be reduced to the Schrodinger equation for the CS Hamiltonian.)
This requires knowledge of the rates of change of the eigenvalues as well as the eigenvectors due to a small change in the matrix element $H_{kl}$, which are given as follows:
$$\frac{\partial\lambda_n}{\partial H_{kl;s}} = i^{s-1}U_{nk}V_{ln}, \qquad \frac{\partial^2\lambda_n}{\partial H_{kl;s}^2} = \sum_{m\ne n}\frac{1}{\lambda_n-\lambda_m}\,\frac{\partial\lambda_m}{\partial H_{kl;s}}\,\frac{\partial\lambda_n}{\partial H_{kl;s}}, \qquad (2)$$
$$\frac{\partial U_{nr}}{\partial H_{kl;s}} = i^{s-1}\sum_{m\ne n}\frac{1}{\lambda_n-\lambda_m}\,U_{mr}U_{nk}V_{lm}, \qquad (3)$$
$$\frac{\partial V_{rn}}{\partial H_{kl;s}} = i^{s-1}\sum_{m\ne n}\frac{1}{\lambda_n-\lambda_m}\,U_{mk}V_{ln}V_{rm}, \qquad (4)$$
$$\sum_{k,l}\sum_{s=1}^{\beta}\frac{\partial\lambda_n}{\partial H_{kl;s}}\,H_{kl;s} = \sum_{k,l}H_{kl}U_{nk}V_{ln} = \lambda_n, \qquad (5)$$
$$\sum_{s=1}^{\beta}\frac{\partial\lambda_n}{\partial H_{kl;s}}\,\frac{\partial\lambda_m}{\partial H_{lk;s}} = \beta\,\delta_{mn}, \qquad (6)$$
$$\sum_{s=1}^{\beta}\frac{\partial^2\lambda_n}{\partial H_{kl;s}\,\partial H_{lk;s}} = 2\beta\sum_{m\ne n}\frac{1}{\lambda_n-\lambda_m}. \qquad (7)$$
(Here the index $r$ refers to the components of the eigenvalues.) With the help of eq. (5), the first term on the right-hand side of eq. (8) can be further simplified: $\sum_{s=1}^{\beta}\sum_{k,l}I_{kl;s} = \sum_{n,r}\frac{\partial}{\partial z_{nr}}(z_{nr}P)$. By using eqs. (6,7), the second term can also be rewritten as
$$\sum_s (-1)^s \sum_{k,l} x_{kl;s}I_{kl;s} = \sum_{n,r}\Big[\frac{\partial^2 P}{\partial z_{nr}^2} - 2\,\frac{\partial}{\partial z_{nr}}\Big(\frac{\partial\ln|\Delta(z)|}{\partial z_{nr}}\,P\Big)\Big] - G \qquad (9)$$
where
$$G = \sum_s(-1)^s\sum_{k,l}\Big[y_{kl;s}\,y_{lk;s}\,\frac{\partial P}{\partial x_{kl;s}} + x_{kl;s}\,y_{lk;s}\,\frac{\partial P}{\partial y_{lk;s}}\Big]$$
and $\Delta_N(z) = \prod_{j<k}^{N}(z_j - z_k)$. A substitution of eq. (9) in eq. (8) now gives the sum $S_1 \equiv S + G + C_1 P$, a combination of various parametric derivatives, in terms of the eigenvalue derivatives of $P$:
$$S_1 = \sum_{r=1}^{2}\sum_{n=1}^{N}\frac{\partial}{\partial z_{nr}}\Big[\frac{\partial}{\partial z_{nr}} - \beta\,\frac{\partial\ln|\Delta_N(z)|}{\partial z_{nr}} + \gamma z_{nr}\Big]P \qquad (10)$$
However, the sum $S_1$ can also be expressed as a derivative of $P$ with respect to a single parameter $Y \equiv Y(y_{kl;s}, x_{kl;s})$, a function of all the $y_{kl;s}$ and $x_{kl;s}$, given by the condition that
$$S_1 \equiv \sum_{s=1}^{\beta}\sum_{k,l}\Big[A_{kl;s}\,\frac{\partial P}{\partial y_{lk;s}} + B_{kl;s}\,\frac{\partial P}{\partial x_{kl;s}}\Big] + C_1 P = \frac{\partial P}{\partial Y} + C_1 P = \frac{1}{C_2}\,\frac{\partial P_1}{\partial Y} \qquad (11)$$
where $A_{kl;s} = y_{kl;s}[\gamma + 2(-1)^s x_{lk;s}]$, $B_{kl;s} = [\gamma x_{kl;s} + (-1)^s x_{kl;s}x_{lk;s} + (-1)^s y_{kl;s}y_{lk;s}]$, $C_2 = e^{\int C_1\,dY}$ and $P_1 = C_2 P$.
The form of $Y$ fulfilling the desired condition can therefore be obtained by solving the following equations [9] (for all $k$, $l$ and $s$ values):
$$\frac{dy_{kl;s}}{A_{kl;s}} = \frac{dx_{kl;s}}{B_{kl;s}} = \frac{dY}{1} \qquad (12)$$
which gives $Y = (1/N^2)\sum_{k,l}\sum_{s=1}^{\beta}F(y_{kl;s}) + Y_0$, with $Y_0$ given by the initial conditions. Here
$$F(y_{kl;s}) = \pm\int \frac{dy_{kl;s}}{y_{kl;s}\sqrt{W}} = \ln\!\Big[\frac{y_{kl;s}}{2\big(\gamma^2 + 2(-1)^s\bar c_{kl;s}\,y_{kl;s} + \gamma\sqrt{W}\big)}\Big]$$
with $W = \gamma^2 + 4y_{kl;s}\big(c_{kl;s}y_{kl;s} + (-1)^s\bar c_{kl;s}\big)$, and the constants $c_{kl;s}$ and $\bar c_{kl;s}$ given by the relations $y_{lk;s} = c_{kl;s}y_{kl;s}$ and $x_{kl;s}^2 + (-1)^s\gamma x_{kl;s} - c_{kl;s}y_{kl;s}^2 - (-1)^s\bar c_{kl;s}y_{kl;s} = 0$. The various $y_{kl;s}$ being indicators of the complexity of the system, $Y$ can be termed a complexity parameter.

Now, by comparing the two forms of $S_1$, the evolution of the eigenvalues in terms of the parameter $Y$ is obtained:
$$\frac{\partial P_1}{\partial Y} = \sum_{r=1}^{2}\sum_{n=1}^{N}\frac{\partial}{\partial z_{nr}}\Big[\frac{\partial}{\partial z_{nr}} - \beta\,\frac{\partial\ln|\Delta_N(z)|}{\partial z_{nr}} + \gamma z_{nr}\Big]P_1 \qquad (13)$$
with $\beta = 2$; $P_1$ is related to the normalized distribution by $\tilde P = CP_1/C_2$. Note the analogy of the above equation to that of the Hermitian case [9], but the evolution now occurs on the complex plane.

The steady state of eq. (13), $P_s \equiv |Q_N|^2 = |\Delta_N(z)|^2\, e^{-\frac{\gamma}{2}\sum_k |z_k|^2}$, corresponds to $\partial P/\partial Y \to 0$, which can occur when almost all $y_{kl;s} \to N/(1-\tau^2)$ and almost all $x_{kl;s} \to (-1)^s N\tau/(1-\tau^2)$ with $\tau \to 0, \pm 1$. Each $\tau$ value leads to a different steady state, namely GBE ($\tau=0$), GUE ($\tau=1$) and GASE ($\tau=-1$), with the distribution $P_s$ representing all three cases. Note that $P_s$ in each case agrees well with the nature of the matrix $H$ in these limits: complex for $\tau=0$, complex Hermitian for $\tau=1$ (therefore real eigenvalues) and complex antisymmetric for $\tau=-1$ (thus eigenvalues in equal and opposite pairs). Eq. (13) thus describes a transition from a given initial ensemble (with $Y = Y_0$) to either GBE, GUE or GASE, with $Y - Y_0$ as the transition parameter.
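The two extreme steady states can be illustrated numerically (an added sketch, not part of the original letter). We use a convenient interpolating parametrization $H = (A + \tau A^\dagger)/\sqrt{1+\tau^2}$, with $A$ a complex Ginibre matrix, which reproduces the two limits: for $\tau = 1$ the matrix is Hermitian and the spectrum collapses onto the real line (GUE, semicircle law), while for $\tau = 0$ the eigenvalues fill the unit disk (Ginibre's circular law):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500

def sample(tau):
    """Interpolating ensemble: tau=0 -> complex Ginibre (GBE), tau=1 -> GUE."""
    # complex Gaussian matrix with entry variance 1/N
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
    return (A + tau * A.conj().T) / np.sqrt(1 + tau**2)

ev_gbe = np.linalg.eigvals(sample(0.0))   # fills the unit disk uniformly
ev_gue = np.linalg.eigvals(sample(1.0))   # real spectrum (semicircle law)

print(np.mean(np.abs(ev_gbe) ** 2))       # ~ 0.5, the mean of |z|^2 on a uniform unit disk
print(np.max(np.abs(ev_gue.imag)))        # ~ 0 : Hermitian limit
```

For a uniform density on the unit disk, $\langle|z|^2\rangle = 1/2$, which the $\tau = 0$ sample reproduces to a few percent already at $N = 500$.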
The nonequilibrium states of these transitions, given by non-zero finite values of $Y - Y_0$, are various ensembles of complex matrices corresponding to varying values of the $y_{kl}$'s and $x_{kl}$'s, thus modelling different complex systems. Note that eq. (13) for $P_1 \equiv P_1(\mu, Y \,|\, \mu_0, Y_0)$ has been obtained for arbitrary initial conditions, say $P_1(\mu_0, Y_0)$; the eigenvalue distribution $P_1(\mu, Y) = \int P_1(\mu, Y \,|\, \mu_0, Y_0)\,P_1(\mu_0, Y_0)$ of a given RNHE can therefore be found by solving eq. (13) with a convenient initial ensemble. Just as in the Hermitian case, the "convenience" depends on the mathematical tractability of the integrals as well as on the physics involved [9].

The case of non-Hermitian real matrices ($\beta=1$) can be treated similarly. Here the eigenvalues are either real or form complex conjugate pairs, and therefore if $U_n$ is an eigenvector corresponding to the complex eigenvalue $\lambda_n$, its complex conjugate will correspond to an eigenvector $U_n^*$. Consider the case with $L$ real eigenvalues and $M$ complex conjugate pairs, $N = L + 2M$. The rates of change of the eigenvalues and the eigenvectors are still given by eqs. (2-7), with $H_{kl;1} \equiv H_{kl}$. The distribution $P$ in this case can be described by $P = \int f(z,z^*)\,g(z,z^*)\,\rho(H)\,dH$ with
$$f(z,z^*) = \prod_{j=1}^{L}\delta(\mu_j - z_j)\,\delta(\mu_j - z_j^*),$$
$$g(z,z^*) = \prod_{j=L+1}^{L+M}\delta(\mu_j - z_j)\,\delta(\mu_j^* - z_{j+M})\,\delta(\mu_{j+M} - z_j^*)\,\delta(\mu_{j+M}^* - z_{j+M}^*).$$
(Obviously, the first $L$ eigenvalues are here chosen to be real and the rest complex conjugate.)
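This real/conjugate-pair structure is easy to see numerically (an added illustration, not part of the original letter): the spectrum of a real Gaussian matrix is closed under complex conjugation, and on average an $O(\sqrt N)$ number of eigenvalues lies exactly on the real axis. A minimal check:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200

# real Ginibre matrix: all entries real i.i.d. Gaussians
H = rng.standard_normal((N, N)) / np.sqrt(N)
ev = np.linalg.eigvals(H)

# spectrum is closed under complex conjugation ...
paired = np.allclose(np.sort_complex(ev), np.sort_complex(ev.conj()))
# ... and L eigenvalues are exactly real (LAPACK's real Schur form
# returns them with zero imaginary part)
L = int(np.sum(ev.imag == 0))

print(paired, L)   # True, with L of order sqrt(N)
```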
Proceeding similarly to the complex case, using eqs. (2-7) and the equalities
$$\frac{\partial (fg)}{\partial H_{kl}} = -\sum_{n=1}^{L+2M}\frac{\partial(z_n f g)}{\partial\mu_n}, \qquad \frac{\partial^2 (fg)}{\partial H_{kl}\,\partial H_{lk}} = -\sum_{n=1}^{L+2M}\frac{\partial}{\partial\mu_n}\Big[\frac{\partial (fg)}{\partial\mu_n} + \sum_{m\ne n}\frac{fg}{z_m - z_n}\Big],$$
one obtains
$$\frac{\partial P_1}{\partial Y} = \sum_{n=1}^{L+2M}\frac{\partial}{\partial z_n}\Big[\frac{\partial}{\partial z_n} - \beta\,\frac{\partial\ln|\Delta(z)|}{\partial z_n} + \gamma z_n\Big]P_1 \qquad (14)$$
where $Y$ is still given by eq. (11) with $\beta = 1$ ($y_{kl;1} \equiv y_{kl}$ and $x_{kl;1} \equiv x_{kl}$): $Y = \frac{1}{N^2}\sum_{k,l}F(y_{kl})$ with $F(y_{kl}) = \int\frac{dy_{kl}}{y_{kl}\sqrt{W}} = \ln\!\big[\frac{y_{kl}}{2(\gamma - 2\bar c_{kl}y_{kl} + \gamma\sqrt{W})}\big]$, $W = \gamma^2 + 4y_{kl}(c_{kl}y_{kl} - \bar c_{kl})$, and $c_{kl}$, $\bar c_{kl}$ given by the relations $y_{lk} = c_{kl}y_{kl}$ and $x_{kl}^2 - \gamma x_{kl} - c_{kl}y_{kl}^2 + \bar c_{kl}y_{kl} = 0$. The steady state again occurs for $\tau = 0, \pm 1$, the solution of eq. (14) being $P = P_s = |\Delta_N(z)|\prod_{i=1}^{N} e^{-\gamma z_i^2}\,\mathrm{erfc}\!\big(|z_i - z_i^*|/\sqrt 2\big)$. The distribution $P(z;\tau=0,\pm1)$ is in agreement with the results obtained in [14] by a different method.

Eq. (13) for $P(\mu, Y)$ can be used to obtain the $n$th-order density correlator $R_n(z_1,\dots,z_n;Y)$, defined by $R_n = \frac{N!}{(N-n)!}\int P(z,Y)\,dz_{n+1}\cdots dz_N$ with $dz_n \equiv dz_{n1}\,dz_{n2}$. The similar forms of the equations for $P$ in the Hermitian and non-Hermitian cases result in similar equations for the $R_n$ too [9]. Again, a direct integration of the Fokker-Planck equation (13) leads to the hierarchic relations among the $R_n$:
$$\frac{\partial R_n}{\partial Y} = \sum_{r=1}^{2}\sum_{j=1}^{n}\Big[\frac{\partial^2 R_n}{\partial z_{jr}^2} - \beta\,\frac{\partial}{\partial z_{jr}}\Big(R_n\,\frac{\partial\ln|Q_N|}{\partial z_{jr}}\Big) - \beta\,\frac{\partial}{\partial z_{jr}}\int_{-\infty}^{\infty} dz_{n+1}\,R_{n+1}\,\frac{\partial\ln|z_j - z_{n+1}|}{\partial z_{jr}}\Big] \qquad (15)$$
For real applications it is important to consider the limit $N \to \infty$ for fixed $n$. For $\rho(z) = N^{-1}R_1(z)$, which fixes the scale of the eigenvalue fluctuations, the large-$N$ limit of eq. (15) gives the following form (with $z = Ne$):
$$\frac{\partial\rho(e)}{\partial Y} = \sum_{r=1}^{2}\frac{\partial}{\partial e_r}\Big[\gamma e_r - 2\,\mathcal P\!\!\int de'\,\rho(e')\,\frac{e_r - e_r'}{|e - e'|^2}\Big]\rho(e) \qquad (16)$$
where $\mathcal P$ refers to the principal part of the integral.

Eq. (16) is valid under approximations similar to those of the Hermitian case, that is, neglecting the contributions of the terms containing second-order cluster functions (see page 142 of [15], also [16]) and of the diffusion term; both are of order $N^2$ or lower. For $n > 1$ the correlators should be unfolded (that is, the eigenvalues rescaled to unit mean spacing):
$$R_n(\zeta_1,\dots,\zeta_n;\Lambda) = \lim_{N\to\infty}\frac{R_n(z_1,\dots,z_n;Y)}{R_1(z_1;Y)\cdots R_1(z_n;Y)}, \qquad \zeta = \int^{z} R_1(z';Y)\,dz'.$$
The transition therefore takes place for finite values of $Y R_1^2$, and a smooth transition can only be seen in terms of the parameter $\Lambda = (Y - Y_0)/D^2$ ($D = R_1^{-1}$, the mean level spacing) [16]. Assuming that for $Y = Y_0$ the density $\rho$ is not singular (nor zero) and the $R_n$ are well defined, $\rho$, given by eq. (16), remains nearly unchanged for finite $\Lambda$. Keeping only the $O(R_1^{n+2})$ terms, eq. (15) can then be reduced to the following form (see page 145 of [15]):
$$\frac{\partial R_n}{\partial\Lambda} = \sum_{r=1}^{b}\sum_{j=1}^{n}\Big[\frac{\partial}{\partial\zeta_{jr}}\,|\Delta_n|^{\beta}\,\frac{\partial}{\partial\zeta_{jr}}\,\frac{R_n}{|\Delta_n|^{\beta}} - \beta\,\frac{\partial}{\partial\zeta_{jr}}\int_{-\infty}^{\infty} d\zeta_{n+1}\,R_{n+1}\,\frac{\partial\ln|\zeta_j - \zeta_{n+1}|}{\partial\zeta_{jr}}\Big] \qquad (17)$$
where $b = 2$ (for simplicity, $\gamma$ is chosen to be unity). The above equation is obtained from eq. (15) by neglecting the linear drift of the eigenvalues, which is dominated, by a factor of $N$, by their diffusion and mutual repulsion. In fact, the linear restoring force responsible for the global behaviour $\rho$ of the level density is entirely negligible on the scales at which local fluctuations occur, while the diffusion is ineffective on the global scale (see eq. (16)). As discussed above, the transition for $R_n$ occurs on scales $Y \approx D^2$, while for $R_1$ the corresponding scale is $Y \approx ND^2$. This indicates a clear separation of the scales of the global and local behaviour of the density. (It is worth noting the similarity of eq. (17) to its Hermitian counterpart; this does not, however, imply similar correlations, the former being defined on the complex plane.)
The hierarchical equation for the correlations in the real asymmetric case can be obtained by integrating eq. (14), which again leads to a relation similar to eq. (17), but now with $\beta = 1$, the index $r$ dropped, and $\zeta_{jr}$ replaced by $\zeta_j$.

For $n = 2$ and small values of $\zeta$, the integral term in eq. (17) makes a negligible contribution, leading to the following approximate closed-form equation for $R_2$ (with $r \equiv \zeta_2 - \zeta_1 = r_1 + i r_2$):
$$\frac{\partial R_2}{\partial\Lambda} = \frac{1}{2}\sum_{s=1}^{2}\Big[\frac{\partial^2 R_2}{\partial r_s^2} - \frac{\partial}{\partial r_s}\Big(R_2\,\frac{\partial\ln|r|^2}{\partial r_s}\Big)\Big] \qquad (18)$$
which gives $R_2(r) \approx |r|^2$ for small $r$; for the large-$r$ behaviour it may be easier to consider the Fourier transform of eq. (17). The hierarchic equation can then be used to obtain approximate forms of the higher-order correlations. For example, approximate information about $R_3$ may be extracted by substituting the large- and small-$r$ behaviour of $R_2$ into eq. (15) with $n = 2$.

An alternative route to the correlations is to exploit the connection of eq. (13) to the CS Hamiltonian. This can be shown by using the transformation $\Psi = P/|Q_N|^{\beta/2}$ in eq. (13), reducing it to the form $\frac{\partial\Psi}{\partial Y} = -\hat H\Psi$, where the "Hamiltonian" $\hat H$ turns out to be the CS Hamiltonian in two dimensions (for simplicity, take $\gamma = 1$):
$$\hat H = -\sum_i \frac{\partial^2}{\partial \mathbf r_i^2} + g\sum_{i<j}\frac{1}{r_{ij}^2} + G\sum_{\substack{i<j \\ k\ne i,j}}\frac{\mathbf r_{ki}\cdot\mathbf r_{kj}}{r_{ki}^2\, r_{kj}^2} + \sum_i r_i^2 \qquad (19)$$
with $\mathbf r_i \equiv z_i$, $\mathbf r_{ki} \equiv z_k - z_i$ and $r_{ki} \equiv |\mathbf r_{ki}|$. Here $G = g$ (with $g = 1$ for the NHRM case with all eigenvalues real, and $g = 2$ for complex NHRM) and, unlike the complex Hermitian case ($G = 0$, $g = 1$), the inverse-square term does not drop out for the complex non-Hermitian case. Further, as $Y \to \infty$, the particles reach their ground state $\psi_0$, with distribution $\psi_0^2$. The mapping between the eigenvalues and the particles requires $\psi_0^2 = P_s$, which is possible if the particles are considered as bosons.
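The small-$r$ repulsion $R_2(r)\sim|r|^2$ can be probed numerically for the simplest steady state, the complex Ginibre ensemble (an added Monte Carlo sketch, not part of the original letter; the threshold and sizes are chosen ad hoc): close eigenvalue pairs are strongly suppressed relative to uncorrelated points of the same density.

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials, s = 200, 10, 0.02   # matrix size, realizations, pair-distance threshold

def close_pairs(pts, s):
    """Number of distinct pairs of points closer than s."""
    d = np.abs(pts[:, None] - pts[None, :])
    return (np.count_nonzero(d < s) - len(pts)) // 2   # drop the diagonal

gin = poi = 0
for _ in range(trials):
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
    gin += close_pairs(np.linalg.eigvals(A), s)   # repelling eigenvalues
    # uncorrelated comparison: N i.i.d. points uniform on the unit disk
    u = np.sqrt(rng.uniform(0, 1, N)) * np.exp(2j * np.pi * rng.uniform(0, 1, N))
    poi += close_pairs(u, s)

print(gin, poi)   # eigenvalue repulsion: gin much smaller than poi
```

With these parameters the uncorrelated points produce on the order of $N^2 s^2/2 \approx 8$ close pairs per realization, while the quadratic suppression of $R_2$ reduces the eigenvalue count by roughly an order of magnitude.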
The bosonic radial eigenstates and eigenvalues of the Hamiltonian are well known: $\psi_n = \prod_{j<k}|\mathbf r_j - \mathbf r_k|^{\Lambda}\, e^{-\frac{1}{2}\sum_k |\mathbf r_k|^2}\, L_n$, with $L_n$ a Laguerre polynomial, energy $E_n = [4n + N(N-1)\Lambda + 2N]/2N$ and $\Lambda = G/2$ [8]. The "state" $\psi$, or $P(\mu, Y|\mu_0, Y_0)$, can then formally be expressed as a sum over the eigenvalues and eigenfunctions, which on integration over the initial state $P(\mu_0, Y_0)$ leads to the joint probability distribution $P(\mu, Y)$ and thereby to the static (single-parameter-value) density correlations $R_n$. The above correspondence can also be used to map multi-parametric correlations of the levels to multi-time correlations of the particle positions [9].

Although the explicit calculation of the correlations involves the technical handling of various integrals and is still an unfinished task, our study nonetheless reveals an important connection: the level correlations of different complex systems need not be studied separately; a thorough probing of the particle correlations of the CS system will give all the required information. The CS system being integrable, semiclassical techniques can also be very successful for this probing.

The reasons for the correspondence between Gaussian NHRM and the CS Hamiltonian are worth attention. The analogue of the harmonic-oscillator-type confining potential in the CS system results from the Gaussian nature of the ensemble. The correspondence with the $1/r^2$ term comes from the mutual repulsion between eigenvalues; its mathematical origin lies in the transformation from matrix space to eigenvalue-eigenvector space, which is the same for all non-Hermitian ensembles belonging to the same symmetry class, irrespective of the matrix-element distribution. It should therefore be possible to map non-Gaussian NHRM to a variant of the CS Hamiltonian as well, although with a different type of confining potential. For $\rho(H) \propto e^{-f(H)}$, the correspondence can be shown by following steps similar to those used for the Hermitian case [9].
In this paper, we have studied the statistical properties of the eigenvalues of non-Hermitian systems. We find that the evolution of the eigenvalues is governed by an equation in which the system-dependence enters only through the evolution parameter $Y - Y_0$, related to the complexity of the system. It is possible that widely different systems, with different values of the distribution parameters, share the same value of $Y - Y_0$. Such systems will thus have similar statistical features, which indicates an underlying universality in the distribution of the eigenvalues of non-Hermitian operators. Furthermore, as the eigenvalue distribution of each complex system appears as a general state of the CS system, any two such states, for example $\psi(Y_1)$ and $\psi(Y_2)$, are related by the "time" evolution operator $U(Y_2, Y_1)$; the eigenvalue distributions of the complex systems corresponding to $Y_1$ and $Y_2$ will therefore also be connected. This would also be reflected in their physical properties based on spectral fluctuations, e.g. conductance (assuming ergodicity, that is, ensemble averages equal to spectral averages). A detailed investigation of the CS Hamiltonian in arbitrary dimensions can therefore give a great deal of useful information about a variety of complex systems and is very much desirable.

Derivation of eq. (8): the parametric dependence of $P$ in the sum $S$ enters only through $\rho(H)$, and as $\frac{\partial\rho}{\partial y_{kl;s}} = -H_{kl;s}^2\,\rho$, $\frac{\partial\rho}{\partial x_{kl;s}} = -H_{kl;s}H_{lk;s}\,\rho$, $\frac{\partial\rho}{\partial H_{kl;s}} = -2(y_{kl;s}H_{kl;s} + x_{kl;s}H_{lk;s})\,\rho$, with $\frac{\partial\lambda_n}{\partial H_{kl;s}}\frac{\partial f}{\partial\lambda_n} + \frac{\partial\lambda_n^*}{\partial H_{kl;s}}\frac{\partial f}{\partial\lambda_n^*} = -2\sum_{r=1}^{2}\frac{\partial\lambda_{nr}}{\partial H_{kl;s}}\frac{\partial f}{\partial z_{nr}}$, the sum $S$ can be written as
$$S = \sum_{s=1}^{\beta}\Big[\gamma\sum_{k,l}I_{kl;s} + (-1)^s\sum_{k,l}x_{kl;s}I_{kl;s}\Big] - C_1 P \qquad (8)$$
where $I_{kl;s} = \int\sum_{r=1}^{2}\sum_{n=1}^{N}\frac{\partial}{\partial z_{nr}}\Big(f\,\frac{\partial\lambda_{nr}}{\partial H_{kl;s}}\,H_{kl;s}\,\rho\Big)\,dH$ and $C_1 = \frac{1}{2}\sum_{s=1}^{\beta}\sum_{k,l}(\gamma + (-1)^s x_{kl;s})$.

[1] F. Haake et al., Z. Phys. B 88, 359 (1992).
[2] Y. V. Fyodorov and H.-J. Sommers, J. Math. Phys. (N.Y.) 38, 1918 (1997).
[3] H. J. Sommers, A. Crisanti, H. Sompolinsky and Y. Stein, Phys. Rev. Lett. 60, 1895 (1988).
[4] N. Hatano and D. R. Nelson, Phys. Rev. Lett. 77, 570 (1996); K. B. Efetov, Phys. Rev. B 56, 9630 (1997); I. Y. Goldsheid and B. A. Khoruzhenko, Phys. Rev. Lett. 80, 2897 (1998); C. Mudry, B. D. Simons and A. Altland, Phys. Rev. Lett. 80, 4257 (1998).
[5] J. T. Chalker and Z. J. Wang, Phys. Rev. Lett. 79, 1797 (1997).
[6] D. R. Nelson and N. M. Shnerb, cond-mat/9708071.
[7] F. Calogero and C. Marchioro, J. Math. Phys. 14, 182 (1973).
[8] A. Khare, Phys. Lett. A 245, 14 (1998); A. Khare and K. Ray, Phys. Lett. A 230, 139 (1997).
[9] P. Shukla, Phys. Rev. E 62, 2098 (2000).
[10] F. Calogero, J. Math. Phys. 10, 2191, 2197 (1969); B. Sutherland, J. Math. Phys. 12, 246 (1971); 12, 252 (1971); Phys. Rev. A 4, 2019 (1971); 5, 1372 (1972).
[11] Y. V. Fyodorov and B. A. Khoruzhenko, Ann. Inst. Henri Poincare (Physique Theorique) 68, 449 (1998).
[12] J. T. Chalker and B. Mehlig, Phys. Rev. Lett. 81, 3367 (1998).
[13] Y. V. Fyodorov, B. A. Khoruzhenko and H.-J. Sommers, Phys. Rev. Lett. 79, 557 (1997); Y. V. Fyodorov and B. A. Khoruzhenko, Phys. Rev. Lett. 83, 65 (1999).
[14] N. Lehmann and H.-J. Sommers, Phys. Rev. Lett. 67, 941 (1991).
[15] F. Haake, Quantum Signatures of Chaos, Springer, Berlin (1991).
[16] A. Pandey and P. Shukla, J. Phys. A (1991).

* E-mail: [email protected]
{'fraction_non_alphanumeric': 0.08691998053700181, 'fraction_numerical': 0.032954394656522314, 'mean_word_length': 3.3975880178953513, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 1, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 2, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 6, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'We study the statistical properties of the eigenvalues of non-Hermitian operators assoicated with the dissipative complex systems. By considering the Gaussian ensembles of such operators, a hierarchical relation between the correlators is obtained. Further the eigenvalues are found to behave like particles moving on a complex plane under 2-body (inverse square) and 3-body interactions and there seems to underlie a deep connection and universality in the spectral behaviour of different complex systems. PACS numbers: 05.45+b, 03.65 sq, 05.40+j . The random non-Hermitian operators play a significant role in the dynamics of variety of complex systems e.g. dissipative quantum systems [1], chaotic quantum scattering [2], neural network dynamics [3], statistical mechanics of flux lines in superconductors with columnar disorder [4], classical diffusion in random media [5], biological growth problems[6]. The study of the statistical properties of their eigenvalues and eigenvectors therefore is relevant and has been of great interest in recent past.The random nature of a quantum operator reveals itself through a distribution of values for each of its matrix elements. The type of the distribution depends on the complexity of the system, resulting in a variety of non-Hermitian random matrices (NHRM) for various dissipative cases. In this letter, we develop a technique to study the spectral properties of the Gaussian ensembles of NHRM. The technique is based on a mapping of the eigenvalue distribution of a general NHRM to a non-stationary state of a classically integrable Hamiltonain. The latter, a variant of Calogero-Sutherland (CS) Hamiltonian in two dimension, is a generator of the dynamics of N particles interacting via long-ranged two body and three body interactions and confined by a harmonic oscillator potential[7,8]. 
A similar technique has already been applied to Hermitian operators which maps the eigenvalue distribution to a state of the CS Hamiltonian in one dimension [9]; the known particle correlators for the latter case are then used to determine the eigenvalue correlations. A detailed knowledge of the non-stationary states of the 2-dimensional Calogero system can therefore be useful in dealing with a variety of NHRM.As well-known, the CS Hamiltonian is a fully integrable system with particles evolving in an ordered way with respect to time [10]; this implies a strong correlation between various particle states at different times. Our mapping thus reveals a very interesting feature of the eigenvalues of the operators associated with both conservative and dissipative dynamics. The eigenvalues evolve in a highly ordered, correlated way as the degree or the nature of the complexity changes. This implies a connection between the statistical nature of the eigenvalues of two different complex systems. As the nature of the interaction in the corresponding CS system is 1/r 2 type, 1-dimensional for conservative and 2-dimensional for dissipative systems, regardless of the nature of the complexity, a great deal of universality among the physical properties (those related to eigenvalue correlations) of complex systems seems to be present. 
The universality in the eigenvalue statistics of the operators in the regime of weak non-Hermiticity was indicated by another study too [11]. We consider an ensemble of N × N non-Hermitian matrices H defined by a Gaussian measure ρ(H), where ρ(H, y) ∝ exp[−', 'arxivid': 'cond-mat/0105007', 'author': ['Pragya Shukla \nDepartment of Physics\nNon-Hermiticity and Universality\nIndian Institute of Technology\nKharagpurIndia\n'], 'authoraffiliation': ['Department of Physics\nNon-Hermiticity and Universality\nIndian Institute of Technology\nKharagpurIndia'], 'corpusid': 119084016, 'doi': '10.1103/physrevlett.87.194102', 'github_urls': [], 'n_tokens_mistral': 8142, 'n_tokens_neox': 7300, 'n_words': 4272, 'pdfsha': '860b9e791679c2419503212515505de33dbd3ca3', 'pdfurls': ['https://export.arxiv.org/pdf/cond-mat/0105007v1.pdf'], 'title': [], 'venue': []}
arxiv
Phonon interference in the array of carbon nanotubes
V. V. Smirnov ([email protected]), Federal Research Center for Chemical Physics RAS, 4 Kosygin street, 119991 Moscow, Russia
Keywords: Carbon nanotube; van der Waals interactions; Carbon nanotube array; Phonon interference; Fano resonance; Transfer matrix
The dynamics of a one-dimensional array of single-walled carbon nanotubes, which interact by van der Waals forces, is considered. Molecular dynamics simulation shows that both mutual displacements of the nanotubes and deformations of their walls occur in the low-frequency oscillation domain. A composite model taking both types of the nanotubes' motion into account was developed in the framework of thin elastic shell theory. Such an approach allows us to reduce the problem to the dynamics of a two-parametric linear lattice with contact interaction. The dispersion relations are obtained analytically, and multichannel propagation that results in phonon interference (the acoustic analogue of the Fano resonance) is observed in the presence of irregularities in the array. The latter can be formed by redundant nanotubes, which are placed over the array in the groove between neighbouring sites. The transmittance has been calculated by the transfer matrix method for several typical configurations.
corresponds to the dense hexagonal arrangement, the symmetry of which controls the mode of the nanotube deformation. If the nanotubes are long enough, the bundle can be considered as a fragment of a two-dimensional regular array, called the CNT crystal, which was first studied in [24]. The elastic properties of the CNT crystal were considered in [25, 26, 27]. It is important that the total energy of the CNT crystal consists of the energy of nanotube interaction as well as the energy of their deformation [28]. 
The nanotube deformation in the CNT crystal should be considered as an internal degree of freedom, the presence of which gives rise to an optical-type branch in the dispersion relation. The description of the CNT crystal dynamics has to include the mutual displacement of the nanotubes' centers of mass as well as their deformation oscillations. The latter may be studied in the framework of thin elastic shell theory [29], where a CNT is treated as a thin elastic shell characterized by its elastic moduli, Poisson ratio and the effective thickness of the "wall" [30, 31, 32, 33]. One should note that such an approach to the study of nanotube vibrations is often used in problems of bending vibrations with or without an elastic foundation [34, 35]. Moreover, it was shown that there is good agreement between molecular dynamics simulation data and the description of nanotubes in the framework of the nonlinear Sanders-Koiter theory of thin elastic shells [36, 37, 38]. The latter turned out to be successful in the analysis of low-frequency oscillation localization in single-walled CNTs [39, 40] as well as of the interaction of nonlinear normal modes belonging to different branches of the dispersion relation [41]. Generally speaking, the problem of thin shell deformation in the nonlinear formulation is one of the most difficult in contemporary mechanics, and it may be solved analytically only in isolated cases [29]. However, the deformation of the CNTs considered here is specific in that the change of the nanotube cross section normal to the tube's axis is not accompanied by a variation of its contour length. The latter leads to a relationship between the radial and circumferential displacements of the shell that allows us to reduce the complexity of the dynamical problem [42, 40]. 
At a small deformation of the nanotube the change of the cross-section contour is characterized by the circumferential wave number l ≥ 0:

R(θ) = R_0 (1 + Σ_l w_l cos lθ),  (1)

where R_0 is the radius of the non-deformed nanotube, θ is the azimuthal angle and w_l is the amplitude of the radial displacement of the l-th mode. For an isolated nanotube the circumferential wave numbers l = 0 and l = 1 correspond to the well-known radial breathing mode (RBM) and bending oscillations, respectively, while l = 2 gives rise to the so-called circumferential flexure mode (CFM), which is the lowest-frequency optical-type vibration of nanotubes [43, 44]. Further we consider the particular system of a one-dimensional array of single-walled nanotubes, which is interesting as a model system and may be useful in various problems of nanoelectronics and nanomechanics. The model Let us consider a one-dimensional array of single-walled CNTs placed at some distance d from each other. In equilibrium, the interaction between nanotubes causes the cross-section contour to undergo deformations [28] described by Equation (1) with some set of circumferential wave numbers l. Figure 1(a) shows a snapshot of the molecular dynamics simulation of the (12,0) CNT array on the surface of three-layered graphene at the temperature T = 300 K. The circumferential flexure deformations of the CNTs' walls result in the imperfection of the nanotube cross sections. Figure 1(b) shows a snapshot of the simulation of the interaction of two (20,0) nanotubes at T = 300 K. The quasi-elliptical deformation of the right-hand nanotube is clearly observed. From the viewpoint of the nanotube interaction, the energy of the system consists of the energy of elastic deformation of the CNTs and the energy of the van der Waals interaction between carbon atoms belonging to neighbouring nanotubes. 
The first is determined by the CNT's circumferential rigidity and may be described as an on-site term:

E_c = (Ω²/2) w²,  (2)

where w is the amplitude of the radial deformation and Ω is the frequency of the natural oscillations of the nanotube that are accompanied by a variation of its cross section (see Supporting Information). In the one-dimensional array the symmetry dictates the circumferential flexure mode with l = 2 as the preferential one. We assume that the energy of the van der Waals interaction between neighbouring nanotubes is determined by the distance between the nanotubes' walls, which depends on the displacements of the centers of mass as well as on the radial deformation amplitude. Figure 2 shows a sketch of the interaction of two deformed CNTs. It is essential that the effect of the circumferential deformation on the inter-wall gap differs for the left and right "edges" of the nanotubes. On the right-hand edge of the nanotube the radial deformation and the displacement of the center of mass add up, while on the left-hand edge they act in opposite directions. Under these assumptions we can represent the potential energy of the regular array of nanotubes in the linear approximation as

V = Σ_n [ (χ²/4) ( ((u_n − u_{n−1}) − (w_{n−1} + w_n))² + ((u_{n+1} − u_n) − (w_{n+1} + w_n))² ) + (Ω²/2) w_n² ],  (3)

where the constant χ characterizes the rigidity of the van der Waals interaction, and the frequency Ω is determined by the intrinsic rigidity of the nanotube's contour. Here we use dimensionless variables: the displacement u and the radial deformation w are measured in units of the nanotube's radius, while the frequency Ω and the coupling constant χ are normalized by the frequency of the radial breathing mode (RBM). (One should note that the frequency Ω can take into account the effect of substrate attraction if the array is placed on a solid surface [7].) 
Let us note that the displacement of the center of mass u_n and the amplitude of the radial displacement w_n of the n-th nanotube enter the first and second terms of Equation (3) in different ways, as mentioned above. Therefore we define new variables describing the displacements of the left and right "edges" of the nanotube:

ψ_n = (u_n − w_n)/√2,  ϕ_n = (u_n + w_n)/√2,  (4)

where the factor √2 is introduced for convenience. In terms of these variables the energy of the system can be written in the form

E = Σ_n [ (1/2)((dψ_n/dt)² + (dϕ_n/dt)²) + (χ²/2)((ψ_{n+1} − ϕ_n)² + (ψ_n − ϕ_{n−1})²) + (Ω²/4)(ϕ_n − ψ_n)² ].  (5)

The respective equations of motion read

d²ϕ_n/dt² + (Ω²/2)(ϕ_n − ψ_n) + 2χ²(ϕ_n − ψ_{n+1}) = 0,
d²ψ_n/dt² + (Ω²/2)(ψ_n − ϕ_n) + 2χ²(ψ_n − ϕ_{n−1}) = 0.  (6)

The dispersion relation consists of two branches:

ω² = (1/2)(4χ² + Ω² ± √(8χ²Ω² cos κ + 16χ⁴ + Ω⁴)).  (7)

Figure 3 shows the dispersion relations (7) for the CNT array with parameters χ = 1.0, Ω = 1.5. One should remark that the right edge of the optical branch of the dispersion relation (7) has to correspond to the frequency Ω of the natural oscillations of the isolated nanotube, while the acoustic branch has to converge to the value 2χ. However, this occurs only if the frequency Ω is smaller than 2χ. Otherwise we observe ω → 2χ for the optical branch and ω → Ω for the acoustic one (see Figure 3). In the CNT array phonon interference arises as a result of the Fano resonance [45, 46] if an additional nanotube is placed in the groove between two neighbouring nanotubes of the array. Such a "discrete state" can be formed artificially or can result from an instability of the array under the action of pressure in the direction normal to the nanotubes' axes. An example of such an instability is shown in Figure 4. The "excess" nanotube in Figure 4b arises as a result of the instability of the initially stressed array in Figure 4a. 
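The reconstructed relations (6) and (7) can be cross-checked numerically: a plane wave ψ_n, ϕ_n ∝ e^{i(κn − ωt)} solves Equations (6) only if the 2×2 coefficient matrix is singular, and the band edges at κ = π should land on Ω and 2χ. A minimal sketch of such a check (our own verification, not part of the paper; the parameter values are those quoted in the text):

```python
import numpy as np

chi, Omega = 1.0, 1.5  # dimensionless parameters used in the text

def omega2(kappa, branch):
    """Squared frequency of the two branches, Eq. (7); branch = -1 (acoustic), +1 (optical)."""
    s = np.sqrt(8*chi**2*Omega**2*np.cos(kappa) + 16*chi**4 + Omega**4)
    return 0.5*(4*chi**2 + Omega**2 + branch*s)

# At kappa = pi the two branches terminate at Omega and 2*chi.
edges = sorted(np.sqrt([omega2(np.pi, -1), omega2(np.pi, +1)]))
assert np.allclose(edges, sorted([Omega, 2*chi]))

# Plane waves psi_n, phi_n ~ e^{i(kappa n - omega t)} reduce Eqs. (6) to
# M @ (psi, phi) = 0; a nontrivial solution requires det M = 0.
a = 2*chi**2 + Omega**2/2
for kappa in (0.4, 1.3, 2.8):
    for branch in (-1, +1):
        w2 = omega2(kappa, branch)
        M = np.array([
            # Eq. (6) for phi_n: (a - w2) phi = (Omega^2/2 + 2 chi^2 e^{+i kappa}) psi
            [-(Omega**2/2 + 2*chi**2*np.exp(+1j*kappa)), a - w2],
            # Eq. (6) for psi_n: (a - w2) psi = (Omega^2/2 + 2 chi^2 e^{-i kappa}) phi
            [a - w2, -(Omega**2/2 + 2*chi**2*np.exp(-1j*kappa))],
        ])
        assert abs(np.linalg.det(M)) < 1e-9
```

The singularity condition (a − ω²)² = Ω⁴/4 + 4χ⁴ + 2χ²Ω² cos κ reproduces Eq. (7) exactly, which the loop confirms for both branches.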
One of the nanotubes is ejected from the array under the action of thermal fluctuations and settles into the groove between two neighbouring nanotubes. Such a configuration turns out to be stable, and the upper nanotube does not change its location. A sketch of the intertube bonds in the fragment of the CNT array with the "discrete state" is presented in Figure 5. The "discrete state" and the nanotubes of the array interact by van der Waals forces, and the energy of this interaction is controlled by the distances ∆_1 and ∆_2. One can show that these distances depend on the differences (Ψ − ϕ_{n+1}) and (Φ − ψ_n), so the energy of the "discrete state" can be written in the form

V_d = (χ²/4)((Ψ − ϕ_{n+1})² + (Φ − ψ_n)²) + (Ω_1²/4)(Φ − Ψ)².  (8)

The "discrete state" has two eigenfrequencies:

ω_1 = χ/√2,  ω_2 = √(Ω_1² + χ²/2).  (9)

In order to study the transmission of a wave through the "discrete state" placed between sites n and n + 1, we use the transfer matrix method [47]. Assuming that the variables ψ and ϕ depend on time as e^{iωt}, we can represent Equations (6) in the vector form

(ψ_{n+1}, ϕ_n)ᵀ = T_0 (ψ_n, ϕ_{n−1})ᵀ,  (10)

where the transfer matrix T_0 reads (see Supporting Information)

T_0 = [ (2χ² − ω²)(2χ² − ω² + Ω²)/(χ²Ω²),  −(4χ² − 2ω² + Ω²)/Ω² ;
        (4χ² − 2ω² + Ω²)/Ω²,  −4χ²/Ω² ].  (11)

The coupling between sites n + 1 and n − m is described by the relation

(ψ_{n+1}, ϕ_n)ᵀ = Z (ψ_{n−m+1}, ϕ_{n−m−1})ᵀ,  (12)

where Z is the transfer matrix of the m-step path; Z = T_0^m for a regular (defect-free) array. However, if the array fragment contains irregularities, the one-step matrix T_{i,j} differs from T_0. Thus we should calculate the transfer matrix taking into account the interactions between the nanotubes of the regular array and the redundant nanotube that forms the "discrete state". 
In particular, the transition from (ψ_{n+3}, ϕ_{n+2})ᵀ to (ψ_{n−1}, ϕ_{n−2})ᵀ is described by the matrix

Z = (I − T_0 T_{2,1} τ_{1,2} T_0⁻¹)⁻¹ T_0 (T_{2,1} T_{1,0} + τ_{2,0}) T_0,  (13)

where I is the identity matrix and the transfer matrices T_{i,j} and τ_{i,j} describe the transition from the bond {n + i, n + i − 1} to {n + j, n + j − 1} (see Supporting Information for details). They are calculated taking into account the oscillations of the "discrete state" nanotube. In order to analyse the transmission of a wave in the system under consideration, we assume that the left half-array contains both the incoming and the reflected wave, while only the transmitted wave occurs in the right half-array. Thus we write

(ψ_{n−j}, ϕ_{n−j}) = A_0 e^{iκ(n−j)} + A_r e^{−iκ(n−j)},  j > 1,
(ψ_{n+j}, ϕ_{n+j}) = A_t e^{iκ(n+j)},  j > 3,  (14)

where A_0 and A_r are the two-component amplitude vectors of the incoming and reflected waves, and A_t is the amplitude vector of the transmitted wave. The amplitudes A_0, A_r and A_t are vectors because the array under study is composite and the solution of Equations (6) contains two components. This must be taken into account when finding the transmission coefficient, defined as the squared modulus of the ratio of the amplitudes of the transmitted and incoming waves. Under these conditions the transmission coefficient t = |A_t/A_0|² can be written as

t = 4 (Ω⁴ + 8χ²Ω² cos κ + 16χ⁴) Ω⁴ sin²κ / | (Ω² + 4e^{iκ}χ²)(4χ² − 2ω² + Ω²)(Z_12 − Z_21) − e^{iκ}(4χ² − 2ω² + Ω²)² Z_22 + e^{−iκ}(Ω² + 4e^{iκ}χ²)² Z_11 |²,  (15)

where Z_ij are the components of the matrix Z. One should note that the transmission coefficient (15) is similar to that in [47] but is not identical to it because of the composite structure of the CNT array.

Phonon interference

The model discussed above has two parameters: the intertube interaction constant χ and the frequency of natural oscillations Ω. 
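As a sanity check on the reconstructed one-step matrix (11), one can verify numerically that det T_0 = 1 for any frequency and that, for frequencies taken on a band of the dispersion relation (7), the eigenvalues of T_0 are e^{±iκ} (unit modulus), so that tr T_0 = 2 cos κ. A short sketch of this check (ours, with the parameter values used in the text):

```python
import numpy as np

chi, Omega = 1.0, 1.5  # dimensionless parameters used in the text

def T0(w2):
    """One-step transfer matrix of Eq. (11) as a function of omega^2."""
    return np.array([
        [(2*chi**2 - w2)*(2*chi**2 - w2 + Omega**2)/(chi**2*Omega**2),
         -(4*chi**2 - 2*w2 + Omega**2)/Omega**2],
        [(4*chi**2 - 2*w2 + Omega**2)/Omega**2,
         -4*chi**2/Omega**2],
    ])

def omega2(kappa, branch):
    """Dispersion relation, Eq. (7); branch = -1 (acoustic) or +1 (optical)."""
    s = np.sqrt(8*chi**2*Omega**2*np.cos(kappa) + 16*chi**4 + Omega**4)
    return 0.5*(4*chi**2 + Omega**2 + branch*s)

for kappa in (0.3, 1.0, 2.5):
    for branch in (-1, +1):
        T = T0(omega2(kappa, branch))
        assert abs(np.linalg.det(T) - 1.0) < 1e-9               # area-preserving step
        assert np.allclose(np.abs(np.linalg.eigvals(T)), 1.0)   # propagating band
        assert abs(np.trace(T) - 2*np.cos(kappa)) < 1e-9        # tr T0 = 2 cos(kappa)
```

The identity tr T_0 = 2 cos κ is just the dispersion relation (7) rewritten through the transfer matrix, so this check ties Eqs. (7), (10) and (11) together.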
A theoretical estimate of the constant χ can be made by direct numerical evaluation of the van der Waals interaction energy [19, 20, 28]. The parameters of the Lennard-Jones potential are well known for carbon nanostructures [14]. The frequency of the circumferential flexure oscillations Ω can be evaluated in the framework of thin elastic shell theory [44, 48, 42, 40] using the effective elastic constants of the nanotubes, or it can be obtained from molecular dynamics simulation data. In particular, the dispersion curves for the nanotube array in [49] are well approximated with the ratio Ω/χ ≈ 1.55 and the dimensional frequency Ω_d ≈ 80 cm⁻¹. The measured frequency of the circumferential flexure mode is 27 cm⁻¹ for an isolated (10,0) nanotube [43] and ∼80 cm⁻¹ for a nanotube in a bundle [8]. Further we will use the dimensionless value χ = 1 and the ratio Ω/χ = 1.5 for the calculation of the transmission coefficient. As mentioned above, the simplest structure leading to the Fano resonance in the CNT array is a single "redundant" nanotube ejected from the regular array (see Figure 4b). However, a similar structure with a redundant nanotube whose parameters differ from those of the array's CNTs can also lead to phonon interference. A nanotube of smaller diameter has higher rigidity and a larger frequency Ω_1 > Ω; vice versa, a larger nanotube has a smaller frequency of the circumferential flexure oscillations. If these frequencies lie in the allowed band, we should observe the effect of phonon interference. Figure 6 shows the transmission coefficients for three structures with different redundant nanotubes as functions of the reduced frequency ω/ω_max, where ω_max = √(4χ² + Ω²) is the frequency of the optical branch at wave number κ = 0. 
Solid, dashed and dot-dashed curves correspond to redundant nanotubes with natural frequencies Ω_1 = Ω, 1.5Ω and 0.5Ω, respectively. One can see that all three structures show destructive interference that leads to full reflection of the incoming phonons at the relative frequency ω/ω_max ≈ 0.3. This frequency belongs to the acoustic branch of the spectrum and corresponds to oscillations of the redundant nanotube as a whole. The second resonant frequency, associated with the circumferential flexure oscillations of the excess nanotube, lies in the forbidden band if Ω_1 = Ω. If the redundant nanotube is more rigid than the CNTs of the array, its resonant frequencies fall in the acoustic as well as in the optical branch (see Figure 6, dashed curves). Thus such a nanotube reflects both acoustic and optical phonons. While the resonance in the acoustic part of the spectrum is similar to that for the nanotube with Ω_1 = Ω, the destructive resonance in the optical branch is substantially sharper. If the nanotube above the array is larger than the CNTs of the array, its natural frequency is less than Ω. The dot-dashed line in Figure 6 shows the transmittance for a nanotube with eigenfrequency Ω_1 = 0.5Ω. In this case both resonant frequencies are in the acoustic region, and we can observe a certain overlap of the destructive resonances. Another structure that can lead to phonon interference in the CNT array is a combination of several nanotubes above the array. For example, two additional nanotubes can be located in adjacent grooves (doubled CNTs) formed by three consecutive nanotubes of the array, or they can be placed at some distance from each other (separated CNTs). In the first case the bonds between the additional CNTs and the nanotubes of the array overlap, and the resulting transfer matrix Z has a more complex structure. (Some details of these configurations are presented in the Supporting Information.) 
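The mechanism behind the full reflection at the resonator frequency can be illustrated by a much simpler, generic toy model: a monatomic chain with a single side-attached oscillator, for which the Fano antiresonance t → 0 at the oscillator's natural frequency follows from a short transfer calculation. This toy is our illustration, not the paper's two-field array; the spring constant k, the coupling g and the effective on-site term below are illustrative assumptions:

```python
import numpy as np

# Toy: monatomic chain (spring k, unit masses) with one side-attached
# oscillator of natural frequency sqrt(g) coupled to site n = 0.
k, g = 1.0, 0.5

def transmission(omega):
    """Plane-wave transmission |t|^2 through the side-coupled defect."""
    # chain dispersion: omega^2 = 4 k sin^2(q/2)
    q = 2*np.arcsin(np.sqrt(omega**2/(4*k)))
    # eliminating the oscillator yields an effective on-site term
    # eps(omega) that diverges at the oscillator frequency omega^2 = g
    eps = g*omega**2/(g - omega**2)
    t = 2j*k*np.sin(q)/(2j*k*np.sin(q) + eps)
    return abs(t)**2

w_res = np.sqrt(g)  # oscillator frequency -> Fano antiresonance
assert transmission(w_res*1.001) < 1e-3  # near-total reflection at resonance
assert transmission(0.2) > 0.9           # nearly transparent far from resonance
```

The divergence of the effective on-site term at the resonator frequency is the same mechanism that produces the full-reflection dips in Figure 6, although the paper's array has two field components and a more involved transfer matrix.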
The second combination can be considered as two non-interacting resonant structures separated by a fragment of the regular lattice. In this case we can construct the resulting transfer matrix as a combination of the matrix Z of Equation (13) and a product of matrices T_0. Figure 7 shows examples of the transmittance for resonant structures with two nanotubes. The solid line corresponds to the doubled nanotubes above the array, while the dashed curve is associated with two nanotubes located three lattice constants apart. We can observe both destructive and constructive resonances in both the acoustic and optical regions. The main acoustic destructive resonance near the frequency ω ≈ 0.3 ω_max always occurs, but in the case of separated nanotubes an extremely narrow constructive resonance arises in its vicinity. The transmittances in Figure 7 have been calculated for redundant nanotubes identical to the CNTs of the array. Nevertheless, the resonances in the optical domain appear both for the doubled and for the separated nanotubes; therefore optical phonons of certain frequencies are reflected from the considered structures. Thus we can effectively control the phonon transmittance through the CNT array by various combinations of additional nanotubes placed over the array. Conclusion In this work we have constructed a model of a regular array of single-walled carbon nanotubes which is simple enough to allow us to evaluate the phonon interference resulting in the Fano resonance in the presence of local resonance structures. The latter can be formed by additional nanotubes placed over the CNT array in various locations. 
Varying the parameters of the nanotubes (diameter, chirality and number of walls), we can change the position and the width of the destructive resonance, which results in full reflection of phonons of a certain frequency in the acoustic as well as in the optical domain. Also, in order to change the frequency interval, we can modify the CNT surface, which changes the intertube constant χ. Thus the model considered here can be useful in investigations of the phononic as well as the electro-mechanical properties of regular CNT structures.

Figure 1: a) Snapshot of the MD simulation of the (12,0) CNT array on three-layered graphene. b) Snapshot of the MD simulation of two (20,0) CNTs.
Figure 2: Sketch of the interaction of two nanotubes. Dashed contours correspond to non-deformed CNTs of radii R at the equilibrium distance d in the regular array. Red contours show the deformed nanotubes and the displacements of their centers of mass (red crosses). Variables u and w correspond to the displacement of the center of mass and the amplitude of the radial deformation, respectively.
Figure 3: Dispersion curves for Equations (6); χ = 1, Ω = 1.5.
Figure 4: Snapshots of the MD simulation of (12,0) CNTs on three-layered graphene under external stress along the graphene surface and normal to the nanotubes' axes. Panels (a) and (b) show the configuration before and after the loss of stability.
Figure 5: Sketch of the CNT array with an additional nanotube as the "discrete state". Thick solid lines show the "contact" interaction and dashed lines show the bonds between the regular array and the "discrete state" nanotube.
Figure 6: Normalized amplitude of the transmitted wave vs normalized frequency for a single nanotube on the regular array. Solid, dashed and dot-dashed curves correspond to Ω_1/Ω = 1, 1.5 and 0.5, respectively.
Figure 7: Normalized amplitude of the transmitted wave vs normalized frequency for various configurations of nanotubes on the regular array. Solid black and dashed red curves correspond to doubled and two separated nanotubes on the regular array, respectively. Parameters: χ = 1.0, Ω = 1.5, Ω_1 = 1.5.

References
[1] S. Iijima, Nature 1991, 354, 56.
[2] R. Rao, C. L. Pint, A. E. Islam, R. S. Weatherup, S. Hofmann, E. R. Meshot, F. Wu, C. Zhou, N. Dee, P. B. Amama, J. Carpena-Nuñez, W. Shi, D. L. Plata, E. S. Penev, B. I. Yakobson, P. B. Balbuena, C. Bichara, D. N. Futaba, S. Noda, H. Shin, K. S. Kim, B. Simard, F. Mirri, M. Pasquali, F. Fornasiero, E. I. Kauppinen, M. Arnold, B. A. Cola, P. Nikolaev, S. Arepalli, H.-M. Cheng, D. N. Zakharov, E. A. Stach, J. Zhang, F. Wei, M. Terrones, D. B. Geohegan, B. Maruyama, S. Maruyama, Y. Li, W. W. Adams, A. J. Hart, ACS Nano 2018, 12, 11756.
[3] T. Ando, J. Phys. Soc. Japan 2005, 74, 777.
[4] R. Saito, G. Dresselhaus, M. Dresselhaus, Physical Properties of Carbon Nanotubes, Imperial College Press, London, 1998.
[5] S. Zhang, L. Kang, X. Wang, L. Tong, L. Yang, Z. Wang, K. Qi, S. Deng, Q. Li, X. Bai, F. Ding, J. Zhang, Nature 2017, 543, 234.
[6] J. Xiao, S. Dunham, P. Liu, Y. Zhang, C. Kocabas, L. Moh, Y. Huang, K.-C. Hwang, C. Lu, W. Huang, J. A. Rogers, Nano Letters 2009, 9, 4311.
[7] L. Henrard, E. Hernández, P. Bernier, A. Rubio, Phys. Rev. B 1999, 60, R8521.
[8] J.-L. Sauvajol, E. Anglaret, S. Rols, L. Alvarez, Carbon 2002, 40, 1697.
[9] J. Xiao, H. Jiang, D. Khang, J. Wu, Y. Huang, J. A. Rogers, J. Appl. Phys. 2008, 104, 033543.
[10] A. G. Van Der Geest, Z. Lu, M. T. Lusk, M. L. Dunn, J. Appl. Phys. 2011, 109, 084316.
[11] V. Perebeinos, S. Rotkin, A. Petrov, P. Avouris, Nano Lett. 2009, 9, 312.
[12] B. Flebus, A. H. MacDonald, Phys. Rev. Research 2020, 2, 022041(R).
[13] S. Park, M. Vosguerichian, Z. Bao, Nanoscale 2013, 1727-1752.
[14] L. Girifalco, M. Hodak, R. Lee, Phys. Rev. B 2000, 62.
[15] A. Šiber, R. F. Rajter, R. H. French, W. Y. Ching, V. A. Parsegian, R. Podgornik, Phys. Rev. B 2009, 80, 165414.
[16] V. Harik, Mechanics of Carbon Nanotubes: Fundamentals, Modelling and Safety, Academic Press, 2018.
[17] A. Savin, M. Mazo, Physica E 2020, 118, 113937.
[18] H. Rafii-Tabar, Computational Physics of Carbon Nanotubes, Cambridge University Press, Cambridge, UK, 2008.
[19] C.-H. Sun, L.-C. Yin, F. Li, G.-O. Lu, H.-M. Cheng, Chem. Phys. Lett. 2005, 403, 343.
[20] C.-H. Sun, G.-O. Lu, H.-M. Cheng, Phys. Rev. B 2006, 73, 195414.
[21] A. Popescu, L. M. Woods, I. V. Bondarev, Phys. Rev. B 2008, 77, 115443.
[22] J. Zhao, J.-W. Jiang, Y. Jia, W. Guo, T. Rabczuk, Carbon 2013, 57, 108.
[23] J. Tang, L.-C. Qin, T. Sasaki, M. Yudasaka, A. Matsushita, S. Iijima, Phys. Rev. Lett. 2000, 85.
[24] J. Tersoff, R. S. Ruoff, Phys. Rev. Lett. 1994, 73.
[25] V. Popov, V. Van Doren, M. Balkanski, Solid State Comm. 2000, 114, 395-399.
[26] E. Saether, S. Frankland, R. Pipes, Comp. Sci. Tech. 2003, 63, 1543.
[27] E. Saether, Comp. Sci. Tech. 2003, 63, 1551.
[28] V. Smirnov, L. Manevitch, Doklady Physics 2019, 486, 173.
[29] M. Amabili, Nonlinear Vibrations and Stability of Shells and Plates, Cambridge University Press, Cambridge, 2008.
[30] T. Vodenitcharova, L. C. Zhang, Phys. Rev. B 2003, 68, 165401.
[31] Y. Huang, J. Wu, K. C. Hwang, Phys. Rev. B 2006, 74, 245413.
[32] S. S. Gupta, F. G. Bosco, R. Batra, Comp. Mat. Sci. 2010, 47, 1049.
[33] T. Chang, J. Mech. Phys. Solids 2010, 58, 1422.
[34] C. Y. Wang, C. Q. Ru, A. Mioduchowski, J. Appl. Mech. 2004, 71, 622.
[35] K. M. Liew, Q. Wang, J. Eng. Sci. 2007, 45, 227.
[36] N. Silvestre, C. Wang, Y. Zhang, Y. Xiang, Composite Structures 2011, 93, 1683.
[37] N. Silvestre, Eur. J. Mech. A 2012, 32, 103.
[38] R. Rafiee, R. M. Moghadam, Composites: Part B 2014, 56, 435.
[39] V. V. Smirnov, D. S. Shepelev, L. I. Manevitch, Phys. Rev. Lett. 2014, 113, 135502.
[40] V. Smirnov, L. Manevitch, M. Strozzi, F. Pellicano, Physica D: Nonlinear Phenomena 2016, 325, 113.
[41] V. Smirnov, L. Manevitch, Nonlinear Dynamics 2018, 93, 205.
[42] J. Kaplunov, L. I. Manevitch, V. V. Smirnov, Proc. R. Soc. London A 2016, 472, 2189.
[43] M. S. Dresselhaus, P. C. Eklund, Adv. Phys. 2000, 49, 705.
[44] G. D. Mahan, Phys. Rev. B 2002, 65, 235402.
[45] Y. A. Kosevich, Phys. Usp. 2008, 51, 848.
[46] A. Miroshnichenko, S. Flach, Y. Kivshar, Rev. Mod. Phys. 2010, 82, 2258.
[47] P. Tong, B. Li, B. Hu, Phys. Rev. B 1999, 59, 8639.
[48] L. Chico, R. Perez-Alvarez, C. Cabrillo, Phys. Rev. B 2006, 73, 075425.
[49] A. Savin, O. Savina, Phys. Solid State 2021, 63, 145.
Fast generalized Bruhat decomposition
23 Feb 2017
Gennadi Malaschonok, Tambov State University, Internatsionalnaya 33, 392622 Tambov, Russia

Deterministic recursive pivot-free algorithms for the computation of the generalized Bruhat decomposition of a matrix over a field and for the computation of the inverse matrix are presented. The method has the same complexity as the algorithm for matrix multiplication and is suitable for parallel computer systems.

Introduction

An LU decomposition without pivoting is a decomposition of the form A = LU, a decomposition with partial pivoting has the form PA = LU, and a decomposition with full pivoting (Trefethen and Bau) has the form PAQ = LU, where L and U are lower and upper triangular matrices and P and Q are permutation matrices.

The French mathematician Francois Georges Rene Bruhat was the first to work with a matrix decomposition of the form A = VwU, where V and U are nonsingular upper triangular matrices and w is a permutation matrix. The Bruhat decomposition plays an important role in the theory of algebraic groups. The generalized Bruhat decomposition was introduced and developed by D. Grigoriev [1], [2]. He uses the Bruhat decomposition in the form A = VwU, where V and U are upper triangular matrices that may be singular when the matrix A is singular. In the papers [3] and [4] the sparsity pattern of the triangular factors of the Bruhat decomposition of a nonsingular matrix over a field was analyzed.

Fast matrix multiplication and fast block matrix inversion were discovered by Strassen [5]. The complexity of Strassen's recursive algorithm for block matrix inversion is the same as the complexity of an algorithm for matrix multiplication. However, this algorithm assumes that the principal minors are invertible and the leading elements are nonzero, as do most direct algorithms for matrix inversion.
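Strassen's observation [5] is that a 2 x 2 block product can be formed with seven block multiplications instead of eight. A minimal numpy sketch of one recursion level (illustrative only, not from the paper):

```python
import numpy as np

def strassen_2x2(A, B):
    """One level of Strassen's algorithm: multiply two square matrices
    of even size using 7 block multiplications instead of 8."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    C = np.empty_like(A)
    C[:n, :n] = M1 + M4 - M5 + M7   # C11
    C[:n, n:] = M3 + M5             # C12
    C[n:, :n] = M2 + M4             # C21
    C[n:, n:] = M1 - M2 + M3 + M6   # C22
    return C

rng = np.random.default_rng(0)
A, B = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
assert np.allclose(strassen_2x2(A, B), A @ B)
```

Applying this recursively to the seven products gives the exponent beta = log2(7) that reappears in the complexity analysis below.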
Other recursive methods for adjoint and inverse matrix computation with the complexity of matrix multiplication are known ([6]-[8]). In the general case it is necessary to find suitable nonzero elements and to perform permutations of matrix columns or rows. Bunch and Hopcroft suggested such an algorithm with full pivoting for matrix inversion [9].

Permutation is not a very difficult operation in the case of sequential computation by one processor, but it is a difficult operation in the case of parallel computation, when different blocks of a matrix reside on different processors. A matrix decomposition without permutations is needed for the construction of efficient and fast parallel computational schemes. The problem of obtaining a pivot-free algorithm was studied by S. Watt in [10], [11]. He presented an algorithm based on the following identity for a nonsingular matrix: A^{-1} = (A^T A)^{-1} A^T. Here A^T is the transpose of A, and all principal minors of the matrix A^T A are nonzero. This method is useful for making an efficient parallel program with the help of Strassen's fast decomposition of the inverse matrix for a dense nonsingular matrix over a field of characteristic zero whose elements are represented by floating-point numbers. Other parallel matrix algorithms are developed in [12]-[15].

This paper is devoted to the construction of a pivot-free matrix decomposition method in the general case of singular matrices over a field of arbitrary characteristic. The decomposition is constructed in the form LAU = E, where L and U are lower and upper triangular matrices and E is a truncated permutation matrix of the same rank as the matrix A. The generalized Bruhat decomposition may then easily be obtained from the matrices L, E and U. The algorithm has the same complexity as matrix multiplication and does not require pivoting.
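Watt's identity is easy to check numerically; a small numpy sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))  # dense random matrix, nonsingular almost surely

# Pivot-free inversion via Watt's identity: A^{-1} = (A^T A)^{-1} A^T.
# A^T A is symmetric positive definite, so its principal minors are nonzero
# and a Strassen-style recursive inversion of it needs no pivoting.
A_inv = np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(A_inv @ A, np.eye(5))
```

Numerically, forming A^T A squares the condition number, which is why this trick suits exact fields or well-conditioned floating-point inputs.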
For singular matrices it allows one to obtain a nonsingular block of the biggest size, the echelon form, and the kernel of the matrix. Preliminary variants of this algorithm were developed in [16] and [17].

Preliminaries

We introduce some notation that will be used in the following sections. Let F be a field, F^{n x n} the ring of n x n matrices over F, and S_n the permutation group of n elements. Let P_n be the multiplicative semigroup in F^{n x n} consisting of the matrices A having exactly rank(A) nonzero entries, all of them equal to 1. We call P_n the permutation semigroup, because it contains the permutation group S_n of n elements together with all truncated permutation matrices. The subsemigroup D_n of P_n is formed by the diagonal matrices; thus |D_n| = 2^n, and the identity matrix 𝟙 is the identity element of D_n, S_n and P_n.

Let W_{i,j} ∈ P_n be the matrix whose only nonzero element is in position (i, j). For an arbitrary matrix E ∈ P_n of rank n − s (s = 0, .., n) we denote by i_E = {i_1, .., i_s} the ordered set of its zero row numbers and by j_E = {j_1, .., j_s} the ordered set of its zero column numbers. It is easy to see that for every E ∈ P_n we have E + \bar{E} ∈ S_n, where \bar{E} is the complementary matrix of Definition 1, and for every I ∈ D_n the sum I + \bar{I} is the identity matrix, where \bar{I} = 𝟙 − I. The map I → \bar{I} is an involution and I\bar{I} = 0. We can define a partial order on D_n: I < J ⇔ J − I ∈ D_n.

For each matrix E ∈ P_n we denote by I_E = EE^T and J_E = E^T E the corresponding diagonal matrices, I_E, J_E ∈ D_n. The unit elements of I_E mark the nonzero rows of E and the unit elements of J_E mark the nonzero columns of E. Therefore we have the zero identities

E^T \bar{I}_E = \bar{I}_E E = E \bar{J}_E = \bar{J}_E E^T = 0.   (1)

For any pair I, J ∈ D_n let us denote the subset of matrices

F^{n x n}_{I,J} = {B : B ∈ F^{n x n}, IBJ = B}.

We call them (I, J)-zero matrices. It is evident that F^{n x n} = F^{n x n}_{𝟙,𝟙}, 0 ∈ ∪_{I,J} F^{n x n}_{I,J}, and if I_2 < I_1 and J_2 < J_1 then F^{n x n}_{I_2,J_2} ⊂ F^{n x n}_{I_1,J_1}.

Definition 2.
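These objects are concrete; a small numpy sketch of a truncated permutation matrix, its row/column markers and the zero identities (1) (illustrative only):

```python
import numpy as np

# A truncated permutation matrix E in P_4 of rank 2:
E = np.array([[0, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

I_E = E @ E.T                         # marks the nonzero rows of E
J_E = E.T @ E                         # marks the nonzero columns of E
I_E_bar = np.eye(4, dtype=int) - I_E  # complement of I_E in D_4
J_E_bar = np.eye(4, dtype=int) - J_E

# Zero identities (1):
assert not (E.T @ I_E_bar).any()
assert not (I_E_bar @ E).any()
assert not (E @ J_E_bar).any()
assert not (J_E_bar @ E.T).any()

# The complementary matrix E_bar puts units at the zero rows / zero columns
# of E, so E + E_bar is a full permutation matrix in S_4:
E_bar = np.array([[0, 0, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 1, 0]])
P = E + E_bar
assert (P.sum(axis=0) == 1).all() and (P.sum(axis=1) == 1).all()
```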
We shall call the factorization of the matrix A ∈ F^{n x n}_{I,J}

A = L^{-1} E U^{-1},   (2)

an LEU-decomposition if E ∈ P_n, L is a nonsingular lower triangular matrix, U is an upper unitriangular matrix, and

L − \bar{I}_E ∈ F^{n x n}_{I,I_E},  U − \bar{J}_E ∈ F^{n x n}_{J_E,J}.   (3)

If (2) is an LEU-decomposition we shall write (L, E, U) = LU(A).

Sentence 1. Let (L, E, U) = LU(A) be an LEU-decomposition of the matrix A ∈ F^{n x n}_{I,J}. Then

L = \bar{I}_E + I L I_E,  U = \bar{J}_E + J_E U J,  E ∈ F^{n x n}_{I,J},   (4)
L^{-1} = \bar{I}_E + L^{-1} I_E,  U^{-1} = \bar{J}_E + J_E U^{-1}.

Proof. The first and second equalities follow from (3). To prove the property of the matrix E we use the commutativity of the diagonal semigroup D_n:

E = LAU = (\bar{I}_E + I L I_E) IAJ (\bar{J}_E + J_E U J) = I(\bar{I}_E + L I_E) A (\bar{J}_E + J_E U) J.

To prove the property of the matrix L^{-1} let us consider the identity

𝟙 = L^{-1} L = L^{-1}(\bar{I}_E + L I_E) = L^{-1} \bar{I}_E + I_E.

Therefore L^{-1} \bar{I}_E = \bar{I}_E and L^{-1} = L^{-1}(\bar{I}_E + I_E) = \bar{I}_E + L^{-1} I_E. The proof of the property of U^{-1} is obtained similarly.

Sentence 1 states the property of the matrix E which may be written in the form I_E < I, J_E < J. We shall call it the property of immersion. On the other hand, each zero row of the matrix E corresponds to a unit column of the matrix L and each zero column of the matrix E corresponds to a unit row of the matrix U.

Let us denote by E_n the permutation matrix W_{1,n} + W_{2,n-1} + .. + W_{n,1} ∈ S_n. It is easy to see that if a matrix A ∈ F^{n x n} is lower (upper) triangular, then the matrix E_n A E_n is upper (lower) triangular.

Sentence 2. Let (L, E, U) = LU(A) be an LEU-decomposition of a matrix A ∈ F^{n x n}. Then the matrix E_n A has the generalized Bruhat decomposition V_1 w V_2 with

V_1 = E_n (L^{-1} − \bar{I}_E) E_n,  w = E_n (E + \bar{E}),  V_2 = U^{-1} − \bar{J}_E.

Proof. As far as L^{-1} is a lower triangular matrix and U^{-1} is an upper triangular matrix, we see that V_1 and V_2 are upper triangular matrices. The matrix w is a product of permutation matrices, so w is a permutation matrix. One easily checks that V_1 w V_2 = E_n L^{-1} E U^{-1} = E_n A.
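Definition 2 and Sentence 2 can be illustrated with a naive O(n^3) elimination over the reals that produces an LEU-style factorization LAU = E. This sketch ignores the block structure and the extra conditions (3); the paper's recursive block algorithm below is what achieves matrix-multiplication complexity:

```python
import numpy as np

def leu_naive(A):
    """Return (L, E, U) with L @ A @ U = E, where L is lower triangular and
    nonsingular, U is upper unitriangular, and E is a truncated permutation
    matrix of rank(A). Naive Gaussian-elimination sketch, not the paper's
    recursive block algorithm."""
    n = A.shape[0]
    M = A.astype(float).copy()
    L = np.eye(n)          # accumulates row operations so that  M = L @ A @ U
    U = np.eye(n)          # accumulates column operations
    E = np.zeros((n, n))
    for i in range(n):
        cols = np.flatnonzero(np.abs(M[i]) > 1e-12)
        if cols.size == 0:
            continue                      # zero row: no pivot in row i
        j = cols[0]                       # leftmost nonzero entry of row i
        p = M[i, j]
        M[i] /= p; L[i] /= p              # scale row i (diagonal operation)
        for r in range(i + 1, n):         # eliminate below the pivot
            f = M[r, j]
            M[r] -= f * M[i]; L[r] -= f * L[i]
        for c in range(j + 1, n):         # eliminate to the right of the pivot
            f = M[i, c]
            M[:, c] -= f * M[:, j]; U[:, c] -= f * U[:, j]
        E[i, j] = 1.0
    return L, E, U

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
L, E, U = leu_naive(A)
assert np.allclose(L @ A @ U, E)

# Sentence 2 in the nonsingular case: E_n A = V1 w V2 with V1, V2 upper triangular.
En = np.fliplr(np.eye(6))
V1, w, V2 = En @ np.linalg.inv(L) @ En, En @ E, np.linalg.inv(U)
assert np.allclose(V1 @ w @ V2, En @ A)
assert np.allclose(V1, np.triu(V1)) and np.allclose(V2, np.triu(V2))
```

For a generic nonsingular A the pivots march down the diagonal and E is a full permutation matrix; for singular A the skipped rows leave E truncated with exactly rank(A) units.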
Examples. For any matrix I ∈ D_n, E ∈ P_n and 0 ≠ a ∈ F, the triple (a^{-1}I + \bar{I}, I, 𝟙) is an LEU-decomposition of the matrix aI, and the triple (a^{-1}I_E + \bar{I}_E, E, 𝟙) is an LEU-decomposition of the matrix aE.

Let us assume that we can write an LEU-decomposition for any matrix of size n, and let a matrix A ∈ F^{2n x 2n}_{I,J} of size 2n be given. We shall construct an LEU-decomposition of the matrix A.

Algorithm of LEU-decomposition

First of all we divide the matrices A, I, J and the desired matrix E into four equal blocks:

A = [A_11 A_12; A_21 A_22],  I = diag(I_1, I_2),  J = diag(J_1, J_2),  E = [E_11 E_12; E_21 E_22],   (5)

and denote

I_ij = E_ij E_ij^T,  J_ij = E_ij^T E_ij,  for all i, j ∈ {1, 2}.   (6)

Let

(L_11, E_11, U_11) = LU(A_11),   (7)

and denote the matrices

Q = L_11 A_12,  B = A_21 U_11,   (8)
A^1_21 = B \bar{J}_11,  A^1_12 = \bar{I}_11 Q,  A^1_22 = A_22 − B E_11^T Q.   (9)

Let

(L_12, E_12, U_12) = LU(A^1_12) and (L_21, E_21, U_21) = LU(A^1_21),   (10)

and denote the matrices

G = L_21 A^1_22 U_12,  A^2_22 = \bar{I}_21 G \bar{J}_12.   (11)

Let us put

(L_22, E_22, U_22) = LU(A^2_22),   (12)

and denote

W = G E_12^T L_12 + L_21 B E_11^T,  V = U_21 E_21^T G \bar{J}_12 + E_11^T Q U_12,   (13)

L = [L_12 L_11, 0; −L_22 W L_11, L_22 L_21],  U = [U_11 U_21, −U_11 V U_22; 0, U_12 U_22].   (14)

We have to prove that (L, E, U) = LU(A). Since L_11, L_12, L_21, L_22 are lower triangular nonsingular matrices and U_11, U_12, U_21, U_22 are upper unitriangular matrices, we see from (14) that the matrix L is lower triangular and nonsingular and the matrix U is upper unitriangular.

Let us show that E ∈ P_{2n}. Since E_11, E_12, E_21, E_22 ∈ P_n and A^1_21 = B \bar{J}_11, A^1_12 = \bar{I}_11 Q, A^2_22 = \bar{I}_21 G \bar{J}_12, the property of immersion of Sentence 1 gives

E_11 = I_11 E_11 J_11,  E_21 = E_21 \bar{J}_11,  E_12 = \bar{I}_11 E_12,  E_22 = \bar{I}_21 E_22 \bar{J}_12.

Therefore the unit elements in each of the four blocks of the matrix E are disposed in different rows and different columns of the matrix E.
So E ∈ P_{2n}, and the following identities hold:

E_11 E_21^T = E_11 J_21 = J_11 E_21^T = J_11 J_21 = 0,   (16)
E_12^T E_11 = E_12^T I_11 = I_12 E_11 = I_12 I_11 = 0,
E_12 E_22^T = E_12 J_22 = J_12 E_22^T = J_12 J_22 = 0,   (17)
E_22^T E_21 = E_22^T I_21 = I_22 E_21 = I_22 I_21 = 0.   (18)

We have to prove that E = LAU. In block form this equation consists of four block equalities:

E_11 = L_12 L_11 A_11 U_11 U_21;
E_12 = L_12 L_11 (A_12 U_12 − A_11 U_11 V) U_22;
E_21 = L_22 (L_21 A_21 − W L_11 A_11) U_11 U_21;
E_22 = L_22 ((L_21 A_22 − W L_11 A_12) U_12 − (L_21 A_21 − W L_11 A_11) U_11 V) U_22.   (20)

Let us note that from the identity A_11 = I_1 A_11 J_1 and Sentence 1 we get

L_11 = \bar{I}_11 + I_1 L_11 I_11,  U_11 = \bar{J}_11 + J_11 U_11 J_1.   (21)

Sentence 1 together with the equations A^1_12 = \bar{I}_11 L_11 A_12, A^1_21 = A_21 U_11 \bar{J}_11, A^2_22 = \bar{I}_21 L_21 (A_22 − A_21 U_11 E_11^T L_11 A_12) U_12 \bar{J}_12 gives the following properties of the L- and U-blocks:

L_12 = \bar{I}_12 + \bar{I}_11 I_1 L_12 I_12,  U_12 = \bar{J}_12 + J_12 U_12 J_2,
L_21 = \bar{I}_21 + I_2 L_21 I_21,  U_21 = \bar{J}_21 + J_21 U_21 J_1 \bar{J}_11,
L_22 = \bar{I}_22 + \bar{I}_21 I_2 L_22 I_22,  U_22 = \bar{J}_22 + J_22 U_22 J_2 \bar{J}_12.   (22)

The following identities can now be easily checked:

L_12 E_11 = E_11,  L_12 I_11 = I_11,   (23)
E_11 U_21 = E_11,  J_11 U_21 = J_11,   (24)
E_12 U_22 = E_12,  J_12 U_22 = J_12,   (25)
L_22 E_21 = E_21,  L_22 I_21 = I_21.   (26)

We shall use the equalities

L_11 A_11 U_11 = E_11,  L_12 A^1_12 U_12 = E_12,  L_21 A^1_21 U_21 = E_21,  L_22 A^2_22 U_22 = E_22,   (27)

which follow from (7), (10) and (12); the equality

E_11 V = I_11 Q U_12,   (28)

which follows from the definition of the block V in (13), (24), (16) and (6); the equality

W E_11 = L_21 B J_11,   (29)

which follows from the definition of the block W in (13), (23), (17) and (6); and the equality E_12^T L_12 = E_12^T L_12 \bar{I}_11, which follows from (23) and (17).
We have to check that

(L_21 A_22 − W L_11 A_12) U_12 = L_21 (A_22 − B E_11^T Q) U_12 − G E_12^T L_12 Q U_12 = L_21 A^1_22 U_12 − G E_12^T L_12 \bar{I}_11 Q U_12 = G − G E_12^T L_12 A^1_12 U_12 = G − G E_12^T E_12 = G \bar{J}_12,

using the definitions of the block W in (13) and of the blocks A^1_22 and A^1_12 in (9), the identity (28), the second equality in (27) and the definition (6).

We have to check that

−(L_21 A_21 − W L_11 A_11) U_11 V = −(L_21 A_21 U_11 − W E_11) V = (−L_21 B + L_21 B J_11) V = −L_21 B \bar{J}_11 V = −L_21 B \bar{J}_11 (U_21 E_21^T G \bar{J}_12 + E_11^T Q U_12) = −L_21 A^1_21 U_21 E_21^T G \bar{J}_12 = −I_21 G \bar{J}_12,

using the first equality in (27), the identity (29), the definition of the block V in (13), the identities (1), then the third equality in (27) and the definition (6).

To prove the fourth equality we have to substitute the obtained expressions into its right part.

To prove the first block equality we multiply its left part by the unit matrix in the form 𝟙 = I_1 + \bar{I}_1 from the left side and by the unit matrix in the form 𝟙 = (I_11 + I_12) + \bar{I}_11 \bar{I}_12 from the right side. Then we use the following identities to obtain in the left part the same expression as in the right part: L_11 \bar{I}_11 = \bar{I}_11, L_12 \bar{I}_12 = \bar{I}_12, \bar{I}_1 L_12 L_11 = \bar{I}_1, \bar{I}_1 (I_11 + I_12) = 0. The same idea may be used for proving the last block equality, but with the other forms of the unit matrix: 𝟙 = I_2 + \bar{I}_2 and 𝟙 = (I_21 + I_22) + \bar{I}_21 \bar{I}_22. The second block equality is evident.

Let us prove the third block equality. We multiply its left part by the unit matrix in the form 𝟙 = I_2 + \bar{I}_2 from the left side and by the unit matrix in the form 𝟙 = (I_11 + I_12) + \bar{I}_11 \bar{I}_12 from the right side. By the definitions (13), (11) and (8) the block W is equal to the following expression:

W = L_21 (A_22 − A_21 U_11 E_11^T Q) U_12 E_12^T L_12 + L_21 A_21 U_11 E_11^T.
We have to use in the left part the equations \bar{I}_2 L_22 = \bar{I}_2, \bar{I}_2 L_21 = \bar{I}_2, \bar{I}_2 A_22 = 0, \bar{I}_2 A_21 = 0, and L_11 \bar{I}_11 = \bar{I}_11, L_12 \bar{I}_12 = \bar{I}_12, E_12^T \bar{I}_12 = 0, E_11^T \bar{I}_11 = 0.

The property of the matrix U, namely U − \bar{J}_E ∈ F_{J_E,J}, may be proved in the same way as the property of the matrix L.

Theorem 2. For any matrix A of size s (s ≥ 1) there exists an algorithm of LEU-decomposition which has the same complexity as matrix multiplication.

Proof. We have proved the existence of an LEU-decomposition for matrices of size 2^k, k > 0. Let A ∈ F^{s x s}_{I,J} be a matrix of size 2^{k−1} < s < 2^k, and let A' be the matrix of size 2^k which has the submatrix A in the upper left corner and all other elements equal to zero. We can construct an LEU-decomposition of the matrix A': (L', E', U') = LU(A'). According to Sentence 1 the product L' A' U' = E' has the form

[L 0; 0 𝟙] [A 0; 0 0] [U 0; 0 𝟙] = [E 0; 0 0].

Therefore LAU = E is an LEU-decomposition of the matrix A.

The total number of matrix multiplications in (7)-(15) is 17 and the total number of recursive calls is 4. We do not count multiplications by permutation matrices, since these can be performed by permuting the pointers to the blocks stored at the local processors. The decomposition of a matrix of order two can be computed by means of 5 multiplicative operations. Therefore we get the following recurrence for the complexity:

t(n) = 4 t(n/2) + 17 M(n/2),  t(2) = 5.

Let γ and β be constants, 3 ≥ β > 2, and let M(n) = γ n^β + o(n^β) be the number of multiplication operations in one n x n matrix multiplication. After summation from n = 2^k down to 2^1 we obtain

17γ (4^0 2^{β(k−1)} + . . . + 4^{k−2} 2^{β·1}) + 4^{k−2}·5 = 17γ (n^β − 2^{β−2} n^2)/(2^β − 4) + (5/16) n^2.

Therefore the complexity of the decomposition is ∼ 17γ n^β / (2^β − 4).

If A is an invertible matrix, then A^{−1} = U E^T L, and a recursive block algorithm of matrix inversion is given by the expressions (7)-(15).
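The dominant term of the recurrence can be checked numerically. With β = log2(7) (Strassen, so 2^β = 7) and γ = 1, the ratio t(n)/n^β approaches 17γ/(2^β − 4) = 17/3 (a quick sketch, not from the paper):

```python
import math

beta = math.log2(7)                 # Strassen exponent, 2^beta = 7
gamma = 1.0
M = lambda n: gamma * n ** beta     # multiplications in one n x n product

# Unroll t(n) = 4 t(n/2) + 17 M(n/2) with t(2) = 5, for n = 2^k.
t = 5.0
for k in range(2, 31):
    t = 4 * t + 17 * M(2 ** (k - 1))

n = 2 ** 30
ratio = t / n ** beta
limit = 17 * gamma / (2 ** beta - 4)  # = 17/3 for 2^beta = 7
assert abs(ratio - limit) < 1e-4
```

Because β > 2, the homogeneous 4^k part of the recurrence is a lower-order n^2 term and the geometric sum of the multiplication costs dominates.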
This algorithm has the complexity of matrix multiplication.

Conclusion

Algorithms for finding the generalized Bruhat decomposition and the inverse matrix are described. These algorithms have the same complexity as matrix multiplication and do not require pivoting. For singular matrices they allow one to obtain a nonsingular block of the biggest size. These algorithms may be used over any field, including the real and complex numbers, finite fields and their extensions. The proposed algorithms are pivot-free and do not change the matrix block structure, so they are suitable for parallel hardware implementation.

Definition 1. Let E ∈ P_n be a matrix of rank n − s, and let i_E = {i_1, .., i_s} and j_E = {j_1, .., j_s} be the ordered sets of zero row numbers and zero column numbers of the matrix E. Let us denote by \bar{E} the matrix \bar{E} = Σ_{k=1,..,s} W_{i_k, j_k} and call it the complementary matrix for E. For the case s = 0 we put \bar{E} = 0.

Theorem 1. For any matrix A ∈ F^{n x n} of size n = 2^k, k ≥ 0, an LEU-decomposition exists. For computing such a decomposition it is enough to compute 4 LEU-decompositions, 17 multiplications and several permutations for matrices of size 2^{k−1}.

Proof. For a matrix of size 1 x 1 (k = 0) we can write the LEU-decompositions LU(0) = (1, 0, 1) and LU(a) = (a^{−1}, 1, 1) if a ≠ 0.

Proof of the block equalities (20):
1. The first equality of (20) follows from (27), (23) and (24).
2. The right part of the second equality of (20) takes the form L_12 (𝟙 − I_11) Q U_12 U_22 due to (8), (27) and (28). To prove the second equality we use the definitions of the blocks B and A^1_12 in (8) and (9), then the second equality in (27) and the identity (25):

L_12 (𝟙 − I_11) Q U_12 U_22 = L_12 A^1_12 U_12 U_22 = E_12 U_22 = E_12.

3. The right part of the third equality of (20) takes the form L_22 L_21 B (𝟙 − J_11) U_21 due to the definition of the block B in (8), the first equality in (27) and (29).
To prove the third equality we use the definition of the block A^1_21 in (9), then the third equality in (27) and the identity (26):

L_22 L_21 B \bar{J}_11 U_21 = L_22 L_21 A^1_21 U_21 = L_22 E_21 = E_21.

4. The identity E_12^T L_12 = E_12^T L_12 (I_11 + \bar{I}_11) = E_12^T L_12 \bar{I}_11 follows from (23) and (17). Substituting the expressions obtained above into the right part of the fourth equality of (20), we get

L_22 (G \bar{J}_12 − I_21 G \bar{J}_12) U_22 = L_22 \bar{I}_21 G \bar{J}_12 U_22 = L_22 A^2_22 U_22 = E_22.

For the completion of the proof of this theorem we have to demonstrate the special form of the matrices L and U: L − \bar{I}_E ∈ F_{I,I_E} and U − \bar{J}_E ∈ F_{J_E,J}. The matrix L is invertible and I_E < I, therefore we have to prove that L = \bar{I}_E + I L I_E, where

I_E = diag(I_11 + I_12, I_21 + I_22),  \bar{I}_E = diag(\bar{I}_11 \bar{I}_12, \bar{I}_21 \bar{I}_22),  I = diag(I_1, I_2).

This matrix equality for the matrix L in (14) is equivalent to the four block equalities:

L_12 L_11 = I_1 L_12 L_11 (I_11 + I_12) + \bar{I}_11 \bar{I}_12,
0 = I_1 · 0 · (I_21 + I_22),
−L_22 W L_11 = −I_2 L_22 W L_11 (I_11 + I_12),
L_22 L_21 = I_2 L_22 L_21 (I_21 + I_22) + \bar{I}_21 \bar{I}_22.

References

1. Grigoriev D.: Analogy of Bruhat decomposition for the closure of a cone of a Chevalley group of a classical series. Soviet Math. Dokl., vol. 23, no. 2, 393-397 (1981)
2. Grigoriev D.: Additive complexity in directed computations. Theoretical Computer Science, vol. 19, 39-67 (1982)
3. Kolotilina L.Yu.: Sparsity of Bruhat decomposition factors of nonsingular matrices. Notes of Scientific Seminars of LOMI, v. 202, 5-17 (1992)
4. Kolotilina L.Yu. and Yemin A.Yu.: Bruhat decomposition and solution of linear algebraic systems with sparse matrices. Sov. J. Numer. Anal. and Math. Model., v. 2, 421-436 (1987)
5. Strassen V.: Gaussian elimination is not optimal. Numerische Mathematik, 13, 354-356 (1969)
6. Malaschonok G.I.: Effective matrix methods in commutative domains. In: Formal Power Series and Algebraic Combinatorics, Springer, Berlin, 506-517 (2000)
7. Malaschonok G.I.: Matrix computational methods in commutative rings. Tambov, Tambov State University (2002)
8. Akritas A. and Malaschonok G.: Computation of adjoint matrix. In: Fourth International Workshop on Computer Algebra Systems and Applications (CASA 2006), LNCS 3992, Springer, Berlin, 486-489 (2006)
9. Bunch J., Hopcroft J.: Triangular factorization and inversion by fast matrix multiplication. Math. Comp., v. 28, 231-236 (1974)
10. Watt S.M.: Pivot-free block matrix inversion. Maple Conference 2006, July 23-26, Waterloo, Canada (2006)
11. Watt S.M.: Pivot-free block matrix inversion. In: Proc. 8th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), IEEE Computer Society, 151-155 (2006)
12. Eberly W.: Efficient parallel independent subsets and matrix factorization. In: Proceedings, 3rd IEEE Symposium on Parallel and Distributed Processing (Dallas, USA, 1991), 204-211 (1991)
13. Kaltofen E., Pan V.: Processor-efficient parallel solution of linear systems over an abstract field. In: Proceedings, 3rd Annual ACM Symposium on Parallel Algorithms and Architectures, ACM Press, 180-191 (1991)
14. Kaltofen E., Pan V.: Processor-efficient parallel solution of linear systems II: The general case. In: Proceedings, 33rd IEEE Symposium on Foundations of Computer Science (Pittsburgh, USA, 1992), 714-723 (1992)
15. Kaltofen E., Pan V.: Parallel solution of Toeplitz and Toeplitz-like linear systems over fields of small positive characteristic. In: Proceedings, PASCO '94: First International Symposium on Parallel Symbolic Computation, World Scientific Publishing, 225-233 (1994)
16. Malaschonok G.I.: Parallel algorithms of computer algebra. In: Materials of the conference dedicated to the 75th anniversary of the Mathematical and Physical Department of Tambov State University (November 22-24, 2005), Tambov, TSU, 44-56 (2005)
17. Malaschonok G.I. and Zuyev M.S.: Generalized algorithm for computing of inverse matrix. 11th conference "Derzhavinskie Chtenia" (February 2-6, 2006), Tambov, TSU, 58-62 (2006)
arXiv:1702.07242, DOI: 10.1007/978-3-642-15274-0_16
A method for accelerating projection-based magnetic particle imaging
Kenya Murase
Department of Medical Physics and Engineering, Faculty of Health Science, and Department of Future Diagnostic Radiology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan

Magnetic particle imaging (MPI) is an imaging method that can visualize magnetic nanoparticles in positive contrast, without radiation exposure. Recently, we proposed an image reconstruction method for projection-based MPI (pMPI), in which the system function was incorporated into the simultaneous algebraic reconstruction technique and total variation minimization was used to suppress noise and artifacts. This study investigated the usefulness of our method for accelerating pMPI through simulation and phantom experiments with a varying number of projections. The present results suggest that our method is useful for accelerating pMPI without deteriorating the image quality.

Magnetic particle imaging (MPI) is an imaging method that utilizes the non-linear response of magnetic nanoparticles (MNPs) to an external alternating magnetic field, 1) and can visualize MNPs in positive contrast, without radiation exposure. 2,3) Recently, we proposed a method for the simultaneous correction of sensitivity and spatial resolution in projection-based MPI (pMPI). 4) In this method, the system function (SF) was incorporated into the simultaneous algebraic reconstruction technique (SART) 5) and total variation (TV) minimization was used as a regularizer to suppress noise amplification and artifacts. For the practical application of pMPI, it is necessary to shorten the data acquisition time as much as possible. There are two ways to accelerate pMPI: one is to shorten the data acquisition time for each projection, and the other is to reduce the number of projections (N_p), or a combination of both.
This study aimed to investigate the effect of N_p on the quality of pMPI images obtained by our method and to evaluate the usefulness of our method for accelerating pMPI through simulation and phantom experiments.

The details of our method are described in our previous paper. 4) Briefly, the image reconstruction was performed iteratively based on the following two procedures. First, the image at pixel j and iteration n (f_j^n) was updated as follows:

f_j^{n+1} = f_j^n + (λ_n / Σ_i a_ij) Σ_i [ a_ij (p_i − Σ_k a_ik f_k^n) / Σ_k a_ik ]  (j = 1, 2, ⋯, J and n = 1, 2, ⋯, N),   (1)

where λ_n denotes the relaxation parameter at iteration n, which was fixed at 1.0, p_i the projection data at bin i, N the number of iterations, and J the total number of pixels. The term a_ij denotes the element of the system matrix, which is expressed as 4)

a_ij = S(j) · SF(r_ij),   (2)

where S(j) is the sensitivity of the receiving coil at pixel j, and SF(r_ij) denotes the SF value at the distance r_ij between pixel j and the field-free line (FFL) at bin i. 4) In this study, the SF was measured using a point source as described afterwards, and S(j) was calculated using the Biot-Savart law based on the principle of reciprocity. 6)

After the SART update using Eq. (1), a regularization method with TV minimization was used. For the TV minimization, the isotropic TV, that is, the sum of the gradients of the image, 7) was used, and the gradient descent method was applied to f^{n+1}, with f^{n+1,1} set as f^{n+1} obtained in the first step:

f^{n+1,m+1} = f^{n+1,m} + α d^n div(∇f^{n+1,m} / ‖∇f^{n+1,m}‖_2)  (m = 1, 2, ⋯, M),   (3)

where α and M denote a regularization parameter and the number of iterations for the TV minimization, respectively, d^n the amount of change in the first reconstruction step, div and ∇ the divergence and gradient operators, respectively, and ‖•‖_2 the ℓ_2 norm. After m reached M, Eq. (1) was applied again, with n and f^n set as n+1 and f^{n+1,M+1}, respectively. The iterative procedure above was repeated until n reached N or

√(Σ_{j=1}^{J} (f_j^{n+1} − f_j^n)²) / √(Σ_{j=1}^{J} (f_j^n)²) < ε  (n ≥ 2)

was satisfied. In this study, N, M, α, and ε were set to 1000, 1, 0.05, and 10^{−4}, respectively.
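A minimal numpy sketch of one SART-plus-TV iteration in the spirit of Eqs. (1) and (3); the toy system matrix, image size, and step scaling are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
J_pix, I_bins = 64, 48                 # toy sizes: 8x8 image, 48 projection bins
A = rng.random((I_bins, J_pix))        # toy nonnegative system matrix a_ij
f_true = rng.random(J_pix)
p = A @ f_true                         # noiseless projection data

f = np.zeros(J_pix)                    # initial estimate: uniform zero image
lam = 1.0                              # relaxation parameter, fixed at 1.0
for n in range(50):
    # SART update, Eq. (1): backproject the row-normalized residual,
    # then normalize by the column sums of A.
    resid = (p - A @ f) / A.sum(axis=1)
    f = f + lam * (A.T @ resid) / A.sum(axis=0)

    # TV-style smoothing step in the spirit of Eq. (3), on the 8x8 image;
    # the step is scaled by the size of the SART residual.
    img = f.reshape(8, 8)
    gx, gy = np.gradient(img)
    norm = np.sqrt(gx**2 + gy**2) + 1e-8     # avoid division by zero
    div = np.gradient(gx / norm, axis=0) + np.gradient(gy / norm, axis=1)
    img = img + 0.05 * np.abs(resid).mean() * div
    f = np.clip(img.ravel(), 0, None)        # negative values set to zero

assert (f >= 0).all()
assert np.linalg.norm(A @ f - p) < np.linalg.norm(p)
```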
The initial estimate f_j^1 (j = 1, 2, ⋯, J) was set to a uniform image with zero pixel intensity. When the reconstructed image contained negative values, these values were set to zero.

In the simulation, a vortex-shaped numerical phantom with an image matrix size of 128×128 was used. First, projection data with various N_p were generated using the forward projection method and were convolved with the SF. In addition, Gaussian noise with a noise level of 5% was added to the projection data using normally distributed random numbers. Here, the noise level is defined as the standard deviation of the noise divided by the maximum value of the projection data. Image reconstruction from the generated projection data was performed using our method. For comparison, the filtered back-projection (FBP) method with a Shepp-Logan filter 8) was also used. As in our method, when the image reconstructed using the FBP method contained negative values, these values were set to zero. N_p over 180° was varied between 4 and 180; the angular increment between two successive projections is 180/N_p (°).

The reconstructed images were quantitatively evaluated using two measures. The first measure is the percent root mean square error (PRMSE), which was calculated as follows:

PRMSE = √( Σ_{j=1}^{J} (f_j − g_j)² / Σ_{j=1}^{J} (g_j)² ) × 100 (%),   (4)

where f_j and g_j denote the image intensities at pixel j of the reconstructed and ground truth (GT) images, respectively. The GT image is shown in the upper row of Fig. 2(a). The other measure is the structural similarity (SSIM) index proposed by Wang et al. 9) It was calculated as follows:

SSIM = (2 μ_x μ_y + C_1)(2 σ_xy + C_2) / ((μ_x² + μ_y² + C_1)(σ_x² + σ_y² + C_2)),   (5)

where μ_x and μ_y denote the averages within the windows set on the reconstructed and GT images, respectively; σ_x² and σ_y² denote the corresponding variances; σ_xy denotes the covariance between the two windows. The terms C_1 and C_2 denote two variables that stabilize the division with a small denominator. We used the default values given by Wang et al.
9) for the window size, C_1, and C_2. The resultant SSIM is a decimal value between −1 and 1, and a higher SSIM indicates higher image similarity. An SSIM of 1 implies that the two images are identical.

Phantom experiments were performed using an OU-shaped phantom and our MPI scanner. 10,11) The details of our MPI system have been reported in our previous papers. 10,11) Briefly, a selection magnetic field (SMF) was generated by two opposing neodymium magnets, and an FFL was generated at the center of the two neodymium magnets. The MPI images (matrix size: 64×64) were reconstructed from the projection data generated above using our method and the FBP method. To quantitatively evaluate the reconstructed images, the profiles along the horizontal line passing through the center of the reconstructed images were calculated.

The SF was measured using a point source (length: 1 mm and ID: 1 mm) filled with MNPs. The point source was placed at the center of the receiving coil and was translated in the direction perpendicular to the axis of the receiving coil from −10 mm to +10 mm in steps of 0.5 mm using the XYZ-axes rotary stage. The data acquisition time was 30 s per position. Figure 1 shows the measured SF (red closed circles) and the fit to the sum of two Gaussian functions (blue solid line). In this study, the fitted curve was normalized to unit sum and was used as the SF in Eq. (2).

As shown in Fig. 2(b), the PRMSE for our method was significantly lower than that for the FBP method. When using the FBP method, the PRMSE gradually decreased with increasing N_p. In contrast, when using our method, the PRMSE decreased with increasing N_p at N_p < 12, after which it became almost constant. As shown in Fig. 2(c), our method produced a much higher SSIM than the FBP method. When using the FBP method, the SSIM gradually increased with increasing N_p. In contrast, when using our method, it rapidly increased and had a peak at N_p = 12, after which it plateaued.
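Both measures are easy to state in code. A small numpy sketch (the SSIM here is evaluated on one global window rather than the sliding 11×11 window of Wang et al., so the windowing and constants are simplifying assumptions):

```python
import numpy as np

def prmse(f, g):
    """Percent root mean square error, Eq. (4)."""
    return 100.0 * np.sqrt(np.sum((f - g) ** 2) / np.sum(g ** 2))

def ssim_global(x, y, data_range=1.0):
    """Eq. (5) evaluated on a single global window (simplified)."""
    C1, C2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

g = np.linspace(0, 1, 64 * 64).reshape(64, 64)  # toy ground-truth image
assert prmse(g, g) == 0.0                        # identical images: zero error
assert abs(ssim_global(g, g) - 1.0) < 1e-12      # identical images: SSIM = 1
```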
Figure 3(b) shows the profiles along the horizontal line passing through the center of the reconstructed images at N_p = 6 for the FBP method (red solid line) and our method (blue dotted line). As shown in Fig. 3(a), the quality of the images obtained by our method was significantly better than that for the FBP method. When using our method, artifacts and blurring almost disappeared even at N_p = 6. This was also confirmed by the profiles shown in Fig. 3(b). In the simulation (Fig. 2), our method exhibited significant improvement in image quality and produced significantly lower PRMSE and higher SSIM than the FBP method even at extremely low N_p. The phantom experiments (Fig. 3) also demonstrated that the deterioration of image quality was significantly suppressed by our method even at extremely low N_p. Thus, these results suggest that our method is useful for accelerating pMPI. Theoretically, the N_p required for obtaining good reconstructed images can be estimated from the Nyquist-Shannon (sampling) theorem. 12) According to this theorem, a unique reconstruction of an object sampled in space is obtained if the object was sampled with a frequency greater than twice the highest frequency of the object details. Otherwise, aliasing artifacts occur. In the parallel scan mode for computed tomography, N_p and the number of scanned points per projection line (N_s) are required to satisfy the following relationship: 13)

N_p ≥ (π/2) × N_s.   (6)

N_s is usually equal to the square root of the image matrix size. When N_s is 64 or 128, N_p should be equal to or greater than 100 or 201, respectively, to satisfy the sampling theorem. 12) However, when using our method (Figs. 2 and 3), satisfactory images were obtained even if N_p was much lower than that satisfying Eq. (6), whereas the images obtained by the FBP method were remarkably deteriorated and artifacts were observed (Figs. 2 and 3). The TV minimization used in our method (Eq.
(3)) is effective for inducing sparsity in image processing. 7) According to the theory of compressed sensing, 14) the sparsity of an image can be exploited to reconstruct it from far fewer projections than required by the sampling theorem. 12) Equation (3) is equivalent to solving the diffusion equation used in the anisotropic diffusion method for image denoising, 15,16) in which the diffusion coefficient is proportional to the reciprocal of ‖∇f‖_2, the norm of the image gradient. Thus, when ‖∇f‖_2 decreases, the diffusion progresses, resulting in an increase in the piecewise smoothness of images. Conversely, when it increases, the diffusion decreases, resulting in the edge preservation of images. These features appear to be the main reason why the superiority of our method is maintained even at much lower N_p than that satisfying Eq. (6). In summary, the present simulation and phantom experiments demonstrated that our method is useful for improving the image quality of pMPI even at extremely low N_p. Thus, our method will be useful for accelerating pMPI. The gradient strengths of the SMF perpendicular and parallel to the FFL were 3.9 T/m and 0.1 T/m, respectively. A drive magnetic field (DMF) for exciting the magnetization of MNPs was generated using a solenoid excitation coil (length: 100 mm, inner diameter (ID): 80 mm, and outer diameter (OD): 110 mm). The frequency and peak-to-peak strength of the DMF were 400 Hz and 20 mT, respectively. A gradiometer coil (length: 50 mm, ID: 35 mm, and OD: 40 mm) was placed in the excitation coil to receive the signal generated by MNPs.
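The edge-preserving behavior described above can be illustrated with a toy sketch (ours, not the paper's implementation): one explicit descent step on a smoothed total variation of a 1-D signal, in which the diffusion coefficient is the reciprocal of the gradient magnitude.

```python
import numpy as np

def tv_diffusion_step(f, tau=0.1, eps=1e-6):
    """One explicit step of TV-like anisotropic diffusion on a 1-D signal f.
    The diffusion coefficient 1/sqrt(f'^2 + eps) is proportional to the
    reciprocal of the gradient magnitude: smoothing is strong where the
    signal is flat and weak across large jumps (edge preservation)."""
    g = np.gradient(f)
    coeff = 1.0 / np.sqrt(g * g + eps)
    # Divergence of (coeff * gradient) drives the update, as in Perona-Malik.
    return f + tau * np.gradient(coeff * g)
```

Constant signals and linear ramps have zero divergence of the flux and are therefore fixed points of this step, while oscillatory noise is damped.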
The third-harmonic signal was extracted using a lock-in amplifier and was converted to digital data by a multifunction data acquisition device. To acquire projection data, the OU-shaped phantom (1.5-mm-ID silicon tubes filled with MNPs) placed in the receiving coil was automatically rotated around the axis of the receiving coil through 180° in steps of 1° (N_p = 180) and translated in the direction perpendicular to the axis of the receiving coil from −16 mm to +16 mm in steps of 1 mm using an XYZ-axes rotary stage. The data acquisition time was 1 s per position. The projection data with different N_p were generated by thinning out the projection data acquired above. Each projection data set was then transformed into 64 bins by linear interpolation. In this study, Resovist® (Fuji Film RI Pharma Co., iron amount: 27.9 mg/mL) was used as the MNPs.

Fig. 1 System function measured using a point source (red closed circles) and that fitted to the sum of two Gaussian functions (blue solid line).

Figure 2(a) shows the reconstructed images of the vortex-shaped phantom obtained by the FBP method (middle row) and our method (lower row) for different N_p indicated at the bottom. As shown in Fig. 2(a), our method yielded images with much less noise and artifacts compared to those obtained by the FBP method, and similar to the GT image at N_p ≥ 12. Figure 2(b) shows the PRMSE values for the FBP method (red solid line) and our method (blue dotted line) as a function of N_p, whereas Fig. 2(c) shows the SSIM values for them.

Fig. 2 (a) Ground truth image of a vortex-shaped numerical phantom (upper row) and images reconstructed using the FBP method (middle row) and our method (lower row) for different projection numbers (N_p) indicated at the bottom. The display window is set the same in all images. Scale bar = 10 mm. (b) PRMSE and (c) SSIM as functions of N_p for the images reconstructed using the FBP method (red solid line) and our method (blue dotted line).
Figure 3(a) shows the images of the OU-shaped phantom reconstructed using the FBP method (upper row) and our method (lower row) for different N_p indicated at the top.

Fig. 3 (a) Reconstructed images of an OU-shaped phantom obtained by the FBP method (upper row) and our method (lower row) for different N_p indicated at the top. Scale bar = 10 mm. (b) Profiles along the horizontal line passing through the center of the images reconstructed using the FBP method (red solid line) and our method (blue dotted line) for N_p = 6. The maximum intensities of the reconstructed images are normalized to unity in both methods.

Acknowledgments The author thanks Mr. Shinichiro Morishita for his help in phantom experiments. This work was supported by Grants-in-Aid for Scientific Research (Grant Nos. 25282131 and 15K12508) from the Japan Society for the Promotion of Science (JSPS).

1) B. Gleich and J. Weizenecker, Nature 435, 1214 (2005).
2) B. Zheng, M. P. von See, E. Yu, B. Gunel, K. Lu, T. Vazin, D. V. Schaffer, P. W. Goodwill, and S. M. Conolly, Theranostics 6, 291 (2016).
3) P. Chandrasekharan, Z. W. Tay, X. Y. Zhou, et al., Br. J. Radiol. 91, 20180326 (2018).
4) K. Murase, Med. Phys. 47, 1845 (2020).
5) A. H. Andersen and A. C. Kak, Ultrason. Imaging 6, 81 (1984).
6) J. Rahmer, J. Weizenecker, B. Gleich, and J. Borgert, BMC Med. Imaging 4, 1 (2009).
7) C. R. Vogel and M. E. Oman, IEEE Trans. Image Process. 7, 813 (1998).
8) L. A. Shepp and B. F. Logan, IEEE Trans. Nucl. Sci. NS-21, 21 (1974).
9) Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, IEEE Trans. Image Process. 13, 600 (2004).
10) K. Murase, S. Hiratsuka, R. Song, and Y. Takeuchi, Jpn. J. Appl. Phys. 53, 067001 (2014).
11) K. Murase, R. Song, and S. Hiratsuka, Appl. Phys. Lett. 104, 252409 (2014).
12) H. Nyquist, Trans. AIEE 47, 617 (1928).
13) F. Kharfi, in Imaging and Radioanalytical Techniques in Interdisciplinary Research - Fundamentals and Cutting Edge Applications, ed. F. Kharfi (InTech, London, 2013) Chap. 4.
14) D. L. Donoho, IEEE Trans. Inf. Theory 52, 1289 (2006).
15) P. Perona and J. Malik, IEEE Trans. Pattern Anal. Mach. Intell. 12, 629 (1990).
16) K. Murase, Y. Yamazaki, M. Shinohara, K. Kawakami, K. Kikuchi, H. Miki, T. Mochizuki, and J. Ikezoe, Phys. Med. Biol. 46, 2713 (2001).
{'fraction_non_alphanumeric': 0.05657800997387794, 'fraction_numerical': 0.029031109000237473, 'mean_word_length': 4.087586831772878, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 0, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'Magnetic particle imaging (MPI) is an imaging method that can visualize magnetic nanoparticles in positive contrast, without radiation exposure. Recently, we proposed an image reconstruction method for projection-based MPI (pMPI), in which the system function was incorporated into the simultaneous algebraic reconstruction technique and the total variation minimization was used to suppress noise and artifacts. This study investigated the usefulness of our method for accelerating pMPI through simulation and phantom experiments with varying number of projections. The present results suggest that our method is useful for accelerating pMPI without deteriorating the image quality.', 'arxivid': '2304.14085', 'author': ['Kenya Murase \nDepartment of Medical Physics and Engineering\nFaculty of Health Science\nGraduate School of Medicine\nDepartment of Future Diagnostic Radiology\nGraduate School of Medicine\nOsaka University\nSuitaOsakaJapan\n\nOsaka University\nSuitaOsakaJapan\n'], 'authoraffiliation': ['Department of Medical Physics and Engineering\nFaculty of Health Science\nGraduate School of Medicine\nDepartment of Future Diagnostic Radiology\nGraduate School of Medicine\nOsaka University\nSuitaOsakaJapan', 'Osaka University\nSuitaOsakaJapan'], 'corpusid': 258352478, 'doi': None, 'github_urls': [], 'n_tokens_mistral': 5036, 'n_tokens_neox': 4297, 'n_words': 2816, 'pdfsha': 'bd69dcdae693c1cbc7e7c77b636870c814422c2f', 'pdfurls': ['https://export.arxiv.org/pdf/2304.14085v2.pdf'], 'title': ['A method for accelerating projection-based magnetic particle imaging', 'A method for accelerating projection-based magnetic particle imaging'], 'venue': []}
arxiv
A short comment on OPERA neutrino velocity measurement

P. L. Frabetti (Telcom MPK, Dubna, Russia; JINR, Dubna, Russia), [email protected], [email protected]
L. P. Chernenko (Telcom MPK, Dubna, Russia)

In this report a potential problem in the data analysis of the OPERA experiment is discussed: the main issue is that the quantity "δt" used in the maximum likelihood procedure is not a "true" parameter of the parent distribution (called PDF in the paper) but a shift of the x-axis (time scale). This means that the quantity δt has to be considered only as a systematic effect, whose error is not simply deducible from a Gaussian distribution, as stated. The OPERA collaboration [1] has recently reported on the early arrival time of CNGS muon neutrinos, estimating a neutrino propagation velocity higher than the velocity of light in vacuum. The latter conclusion is based on some very important and far from simple measurements: the distance between the CERN K and π production target and the OPERA detector at LNGS, and the clock synchronization, again between CERN and LNGS. Both these measurements are very impressive and complicated, mainly because general relativity has to be taken into account: see the discussion in [2]. The next step in the data analysis is a statistical comparison between the time distribution of the proton pulses 1 and 2 (called PDF in the paper), extracted from the SPS at 400 GeV/c (see ref. [3] for a better understanding) and a posteriori correlated to the detected neutrino events, and the time distribution of the neutrino events as measured in the OPERA detector.* In fact, this distribution is the time of the first hit in the scintillator strips related to a detected muon, occurring in a CC event in the apparatus or in the rock mountain in front of the apparatus, or related to a hadronic shower in the case of an NC event in the apparatus.
In total ~16000 events have been observed during three years of operation. In the OPERA paper and elsewhere [1] it is explained how these distributions are obtained. We pay attention here only to the statistical aspect of the maximum likelihood procedure chosen to estimate δt (the time difference between the neutrino time and the hypothetical light travel time in vacuum over the same distance). As far as we understood, from [1] and from private discussions with OPERA collaborators, the PDF is the so-called (in statistics language) "parent distribution", while the neutrino event times are considered as a sampling of the PDF distribution. In this case, as is well known from statistics books, the following function (now the variables are the parameters) has to be maximized to estimate the most probable values of the parameters themselves:

L = Π_i PDF(x_i, α, β, ...)   (1)

PDF is the parent function, the x_i are the measured samples, and α, β, etc. are parameters of the hypothesized parent distribution; the product runs over the whole sample. In the OPERA work x_i is the time T_i of the i-th detected neutrino and α is δt, δt being the only parameter used in the analysis, entering as a term added to the independent (controlled) variable of the parent distribution; see ref. [1]:

L = Π_i W(t_i + δt)   (2)

First we notice that δt is not a "usual" parameter, because changing its value does not change any of the mathematical properties (commonly called the "shape") of the function L. Then, since the function has zero value everywhere except for a small interval (~ from 0 to 11500 ns) (let us consider only proton extraction number 1, for simplicity), where the value is almost constant (let us make such an approximation for clarity), the function is not compact. Any sampling (set of experimental values) with a single value outside the mentioned interval annuls the value of L (for any δt): in other words, the likelihood function depends on the boundary [4].
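The boundary effect can be seen concretely with a toy likelihood (our sketch, with an idealized uniform W on the extraction window; only the 11500 ns width is taken from the text, the sample times are hypothetical):

```python
import numpy as np

def log_likelihood(times, delta_t, support=(0.0, 11500.0)):
    """Log of Eq. (2) for a uniform parent distribution W on `support`,
    evaluated at the shifted times t_i + delta_t. A single shifted time
    outside the support sends the likelihood to zero (log to -inf),
    illustrating the boundary dependence discussed above."""
    lo, hi = support
    shifted = np.asarray(times) + delta_t
    if np.any((shifted < lo) | (shifted > hi)):
        return -np.inf
    return -len(shifted) * np.log(hi - lo)
```

With events near both edges of the window, only a narrow range of δt values keeps the likelihood non-zero, regardless of where the true shift lies.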
This means there are no possible solutions (maxima) if the interval of measured neutrino times (over the three years of data collection) extends outside the proton time interval distribution. So the convergence of the maximization procedure can occur only when the experimental values are distributed in an interval smaller than the total width of the parent distribution. This is an evident bias of the algorithm and invalidates, in our opinion, at least the claimed error of ~7 ns. In any case, since the proposed parameter δt looks like a systematic error of the time scale (independent variable), any attempt to consider it normally distributed has to be demonstrated. As a simple solution to the issue, the parameter δt being an x-axis translation, we suggest considering the proton and neutrino time distributions as two "samplings" of the same parent distribution. In this case the two estimated averages (first moments) can be compared using the Student distribution, and consequently both the value of δt and its error can be estimated.

[1] T. Adam et al. [OPERA Collaboration], "Measurement of the neutrino velocity with the OPERA detector in the CNGS beam," arXiv:1109.4897.
[2] G. Brunetti, Neutrino velocity measurement with the OPERA experiment in the CNGS beam, PhD thesis, in joint supervision of the Université Claude Bernard Lyon-I and Università di Bologna, 2011, http://operaweb.lngs.infn.it:2080/Opera/phpmyedit/theses-pub.ph
[3] Carlo R. Contaldi, The OPERA neutrino velocity results and the synchronization of clocks, arXiv:1109.6160.
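The suggested comparison of first moments can be sketched as follows (our illustration with hypothetical arrays; in practice the difference and its standard error would be fed into a Student t test):

```python
import numpy as np

def compare_means(t_protons, t_neutrinos):
    """Estimate delta_t as the difference of the sample means of the two
    time distributions, with the standard errors of the two means combined
    in quadrature (the quantities entering a two-sample Student t test)."""
    m1, m2 = np.mean(t_protons), np.mean(t_neutrinos)
    se = np.sqrt(np.var(t_protons, ddof=1) / len(t_protons)
                 + np.var(t_neutrinos, ddof=1) / len(t_neutrinos))
    return m2 - m1, se
```

Unlike the likelihood shift, this estimator does not vanish when individual samples fall outside the support of the other distribution.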
[4] J. Knoblock, Is there a neutrino speed anomaly?, arXiv:1110.0595.
[5] Sigmund Brandt, Statistics for Data Analysis, North Holland Publishing Co.
[6] P. L. Frabetti, Appunti dalle lezioni di Statistica e teoria degli errori. Raccolta a cura degli studenti del corso di Laurea in Astronomia (UNIBO), 1985.
{'fraction_non_alphanumeric': 0.03683471203371313, 'fraction_numerical': 0.013734977368503199, 'mean_word_length': 4.711229946524064, 'pattern_counts': {'":': 0, '<': 0, '<?xml version=': 0, '>': 0, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 2, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'In this report a potential problem in the data analysis of the OPERA experiment is discussed: the main issue is that the quantity " дt" used in the maximum likelihood procedure is not a " true" parameter of the parentdistribution ( called PDF in the paper) but a shift in the xaxis (time scale) . This means that the quantity дt has to be considered only as systematic effect these error is not simply deducible from a gaussian distribution as stated.The OPERA collaboration [1] has recently reported on the early arrival time of CNGS muon neutrinos, estimating a neutrinos propagation velocity higher that the light velocity in vacuum. The later conclusion is based on some very important and not simple measurements : the distance between CERN K\'s and π\'s production target and the OPERA detector at LNGS and the clocks synchronization, again between CERN and LNGS. Both these measurements are very impressive and complicated, mainly because the general relativity has to be taken into account : see discussion in[2]. 
The next step in the data analysis is a statistical comparison between the time distribution of the proton pulses 1 and 2 (called PDF in the paper), extracted from the SPS at 400 Gev/c (see ref.[3] for a better understanding) and a posterori correlated to the detected neutrino events, and the time distribution of the neutrino events as measured in the OPERA detector .*', 'arxivid': '1111.3116', 'author': ['P L Frabetti [email protected]**[email protected] \nTelcom MPK\nDubna Russia\n', 'JINRDubna Russia \nTelcom MPK\nDubna Russia\n', 'L P Chernenko \nTelcom MPK\nDubna Russia\n'], 'authoraffiliation': ['Telcom MPK\nDubna Russia', 'Telcom MPK\nDubna Russia', 'Telcom MPK\nDubna Russia'], 'corpusid': 116060629, 'doi': None, 'github_urls': [], 'n_tokens_mistral': 1680, 'n_tokens_neox': 1521, 'n_words': 1009, 'pdfsha': '8dee586b305bdd081a596d24b1aef1bb88de266f', 'pdfurls': ['https://export.arxiv.org/pdf/1111.3116v1.pdf'], 'title': ['A short comment on OPERA neutrino velocity measurement', 'A short comment on OPERA neutrino velocity measurement'], 'venue': []}
arxiv
A generalization of the Hopf-Cole transformation for stationary Mean Field Games systems

22 May 2015

Marco Cirant, Dipartimento di Matematica "F. Enriques", Università di Milano, Via C. Saldini 50, 20133 Milano, Italy

Keywords: Stationary Mean Field Games, p-Laplacian, Hopf-Cole transformation. 1991 MSC: 35J47, 49N70, 35B45.

In this note we propose a transformation which decouples stationary Mean Field Games systems with superlinear Hamiltonians of the form |p|^{r′}, r′ > 1, and turns the Hamilton-Jacobi-Bellman equation into a quasi-linear equation involving the r-Laplace operator. Such a transformation requires an assumption on solutions of the system, which is satisfied for example in space dimension one or if solutions are radial.

Introduction

Mean Field Games (briefly, MFG) is a branch of Dynamic Games which has been proposed independently by Lasry, Lions [7], [8], [9], [11] and Caines, Huang, Malhamé [6], and aims at modeling and analyzing decision processes involving a very large number of indistinguishable rational agents. In MFG, every agent belonging to a population of infinitely many individuals has the goal of minimizing some cost which depends on his own state and on the average distribution of the other players. Suppose that the state of a typical player is driven by the stochastic differential equation

dX_s = −α_s ds + √(2ν) dB_s ∈ Ω,

where α_s is the control, B_s is a Brownian motion, ν > 0, and the domain Ω ⊆ R^d, d ≥ 1, is the so-called state space. Suppose also that the cost functional has the long-time-average form

J(X_0, α) = lim inf_{T→∞} (1/T) ∫_0^T E[L(α_s) + f(X_s, m̄_s)] ds,

where the (convex) Lagrangian function L(α) is associated to the cost paid by the player to change his own state, and the term involving f, that we assume to be a C^1(Ω × [0, ∞)) function, is the cost paid for being at state x ∈ Ω; it depends on the empirical density m̄_s of the other players.
Then, under the assumption that players are indistinguishable, it has been shown that equilibria of the game (in the sense of Nash) are captured by the following system of non-linear elliptic equations:

(HJB) −ν∆u(x) + H(Du(x)) + λ = f(x, m(x)) in Ω
(K) −ν∆m(x) − div(DH(Du(x)) m(x)) = 0 in Ω
∫_Ω m(x) dx = 1, m ≥ 0.   (1)

Here, H denotes the Legendre transform of L. The two unknowns u, λ in the Hamilton-Jacobi-Bellman equation (1)-(HJB) provide respectively the optimal control of a typical player, given in feedback form by α* : x ↦ −DH(Du(x)), and the average cost J(X_0, α*). On the other hand, the solution m of the Kolmogorov equation (1)-(K) is the stationary distribution of players implementing the optimal strategy, that is, the long time behavior of the whole population playing in an optimal way. The two equations are coupled via the cost function f. Note that a set of boundary conditions is usually associated to (1); for example, u, m can be assumed to be periodic if Ω = (0, 1)^d, or Neumann conditions are imposed if Ω is bounded and X_s is subject to reflection at ∂Ω. In some models Ω is the whole R^d. A relevant class of MFG models assumes that the Lagrangian function L has the form L(α) = (l_0/r)|α|^r, l_0 > 0, r > 1. Consequently, the Hamiltonian function H becomes

H(p) = (h_0/r′)|p|^{r′},  h_0 = l_0^{1−r′} > 0,  r′ = r/(r − 1) > 1.   (2)

In the particular situation where L and H are quadratic (namely r = r′ = 2), it has been pointed out (see [7], [11]) that the so-called Hopf-Cole transformation decouples (1), and reduces it to a single elliptic semilinear equation of generalized Hartree type. Precisely, let ϕ := c e^{−u/2ν}, c > 0; then (1)-(HJB) reads (setting for simplicity h_0 = 1)

−2ν^2 ∆ϕ + (f(x, m) − λ)ϕ = 0 in Ω   (3)

for all c > 0.
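For the reader's convenience, the computation behind (3) in the quadratic case (h_0 = 1, so H(p) = |p|^2/2) can be spelled out:

```latex
\begin{aligned}
\varphi = c\,e^{-u/2\nu} \;\Longrightarrow\;
D\varphi &= -\tfrac{1}{2\nu}\,\varphi\,Du,\qquad
\Delta\varphi = \varphi\Big(\tfrac{|Du|^2}{4\nu^2} - \tfrac{\Delta u}{2\nu}\Big),\\
-2\nu^2\Delta\varphi &= \varphi\big(\nu\Delta u - \tfrac12|Du|^2\big)
= \varphi\big(\lambda - f(x,m)\big),
\end{aligned}
```

where the last equality uses (1)-(HJB); rearranging gives exactly (3), for every value of c > 0.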
Moreover, if we set c^2 = (∫_Ω e^{−u/ν})^{−1}, an easy computation shows that ϕ^2 is also a solution of (1)-(K), so if uniqueness of solutions for such equation holds (that is true, for example, if suitable boundary conditions are imposed), then m = ϕ^2, and therefore ϕ becomes the only unknown in (3). This transformation can be exploited to study quadratic MFG systems, both from the theoretical and the numerical point of view. This strategy is adopted, for example, in the works [1], [3], [4], [5]. The aim of this note is to show that if r′ ≠ 2 there exists a similar change of variables, involving a suitable power of m, that in some cases decouples (1) and turns the Hamilton-Jacobi-Bellman equation (1)-(HJB) into a quasi-linear equation of the form

−µ∆_r ϕ + (f(x, ϕ^r) − λ)ϕ^{r−1} = 0 in Ω,  ∫_Ω ϕ^r dx = 1,  ϕ > 0,  µ = ν(νr/h_0)^{r−1},   (4)

where ∆_r ϕ = div(|Dϕ|^{r−2}Dϕ) is the standard r-Laplace operator (r is the conjugate exponent of r′). Such a transformation is in particular possible if the vector field νDm + DH(Du)m, which is divergence-free because of (1)-(K), is identically zero on Ω. Let us briefly recall in which sense (u, m, λ) solves (1).

Definition 1.1 We say that a triple (u, m, λ) ∈ C^2(Ω) × W^{1,2}_loc(Ω) × R is a (local) solution of (1) if u, λ solve pointwise (1)-(HJB) and m solves (1)-(K) in the weak sense, namely

ν∫_Ω Dm · Dξ + ∫_Ω m DH(Du) · Dξ = 0  ∀ξ ∈ C^∞_0(Ω).   (5)

We say that a couple (ϕ, λ) ∈ (W^{1,r}_loc(Ω) ∩ L^∞_loc(Ω)) × R is a solution of (4) if

µ∫_Ω |Dϕ|^{r−2}Dϕ · Dξ + ∫_Ω (f(x, ϕ^r) − λ)ϕ^{r−1}ξ = 0  ∀ξ ∈ C^∞_0(Ω).

Then, the transformation can be stated as follows.

Theorem 1.2 Suppose that H satisfies (2).
a) Let (u, m, λ) be a solution of (1). If the equality

νDm + h_0 m|Du|^{r′−2}Du = 0 a.e. in Ω   (6)

holds, then (ϕ, λ), where

ϕ := m^{1/r} in Ω,   (7)

is a solution of (4).
b) Let (ϕ, λ) be a solution of (4), and suppose that there exists u ∈ C^1(Ω) such that

h_0 ϕ|Du|^{r′−2}Du + νrDϕ = 0 in Ω.   (8)

Then, (u, m, λ) is a solution of (1), where m := ϕ^r in Ω.
The proposed transformation reveals a connection between some (stationary) MFG systems with non-quadratic Hamiltonians of the form (2) and r-Laplace equations, which have been widely studied in the literature and appear in many other areas of interest. Apart from existence and uniqueness issues, this link might shed some light on MFG problems in general, which can be translated into problems involving the r-Laplacian (e.g. qualitative properties of solutions, MFG in unbounded domains, vanishing viscosity limit ν → 0). In the quadratic case r′ = 2, we have mentioned that if solutions of (1)-(K) are unique, then m = e^{−h_0 u/ν} (∫_Ω e^{−h_0 u/ν})^{−1}. In this case condition (6) is easily verified, and the assertion of Theorem 1.2 is that ϕ = m^{1/2} solves (4) with r = 2, which is precisely (3). In this sense our transformation can be seen as a generalization of the standard Hopf-Cole. Note that the change of variables m = ϕ^r is local; in particular, it is independent of boundary conditions that might be added to (1). However, in order to verify (6), (8) and therefore to apply Theorem 1.2, it is necessary to specify additional information on the problem. Space dimension d = 1 with Neumann conditions at the boundary, or u, m, ϕ enjoying radial symmetry, are two possible scenarios where (6), (8) hold.

Corollary 1.3 Suppose that H satisfies (2) and Ω = {x ∈ R^d : |x| < R} for some R ∈ (0, ∞]. Then, (u, m, λ) is a radial solution of (1) if and only if (ϕ, λ) is a radial solution of (4), where ϕ = m^{1/r}.

Corollary 1.4 Suppose that H satisfies (2) and Ω = (a, b) for some −∞ < a < b < ∞. Then, (u, m, λ) ∈ C^2(Ω) × W^{1,2}(Ω) × R is a solution of (1) satisfying the Neumann boundary conditions^2 u′(a) = u′(b) = m′(a) = m′(b) = 0 if and only if (ϕ, λ) ∈ W^{1,r}(Ω) ∩ L^∞(Ω) is a solution of (4) satisfying the Neumann boundary conditions ϕ′(a) = ϕ′(b) = 0, where ϕ = m^{1/r}.
Remark 1 We point out that m ∈ W^{1,q}_loc(Ω) for all q ≥ 1 by standard regularity results on weak solutions of Kolmogorov equations. Moreover, the Harnack inequality guarantees that m > 0 on Ω. However, even if u ∈ C^2(Ω), we do not expect in general the same regularity for m, since DH(Du) might lack the desired smoothness if 1 < r′ < 2. If r′ ≥ 2, it is possible to conclude that m is twice differentiable on Ω and it solves (1)-(K) in the classical sense. On the other hand, it is known that a solution ϕ of (4) enjoys local C^{1,α} regularity (see, for example, [2], [10]).

Proof of Theorem 1.2, a). Note that m ∈ W^{1,q}_loc(Ω) for all q ≥ 1 (see Remark 1), and therefore ϕ^{r−1} has the same regularity. Moreover, m > 0 on Ω, hence ϕ > 0 as well. Equalities (6) and (7) imply that

νr Dϕ/ϕ = −h_0|Du|^{r′−2}Du a.e. in Ω.   (9)

We multiply the Hamilton-Jacobi-Bellman equation (1)-(HJB) by ξϕ^{r−1}, where ξ ∈ C^∞_0(Ω) is a generic test function. Integrating by parts,

ν∫_Ω Du · D(ξϕ^{r−1}) + ∫_Ω (h_0/r′)|Du(x)|^{r′} ξϕ^{r−1} + ∫_Ω (λ − f)ξϕ^{r−1} = 0.   (10)

We note that by (9)

ν∫_Ω ξDu · D(ϕ^{r−1}) = ν(r − 1)∫_Ω ξ Du · Dϕ ϕ^{r−2} = −h_0 ((r − 1)/r) ∫_Ω ξ|Du|^{r′}ϕ^{r−1},

and (r′)^{−1} = (r − 1)r^{−1}, so (10) becomes

ν∫_Ω Du · Dξ ϕ^{r−1} + ∫_Ω (λ − f)ξϕ^{r−1} = 0.   (11)

Again using (9), we obtain

∫_Ω |Dϕ|^{r−2}Dϕ · Dξ = −(h_0/(νr))^{r−1} ∫_Ω |Du|^{(r′−1)(r−2)+r′−2} ϕ^{r−1} Du · Dξ = (1/ν)(h_0/(νr))^{r−1} ∫_Ω (λ − f)ξϕ^{r−1},   (12)

since (r′ − 1)(r − 2) + r′ − 2 = 0, and by virtue of (11). Equality (12), which holds for all test functions ξ, is precisely the weak formulation of (4), hence we are done. We finally observe that ϕ enjoys local C^{1,α} regularity (see Remark 1), which is inherited by m.

b). It is easily verified, in view of (8), that m solves (1)-(K). Since ϕ = m^{1/r} is positive (see Remark 1),

νr Dϕ/ϕ = −h_0|Du|^{r′−2}Du in Ω.

By carrying out backwards the computations (10)-(12) of part a), it follows that u, λ is a weak solution of (1)-(HJB).
Standard regularity results for the Poisson equation guarantee that u ∈ C^2(Ω) and (1)-(HJB) is satisfied in the classical sense.

Proof of Corollary 1.3. In the following, ρ := |x| and e_ρ = e_ρ(x) = x/ρ will denote the radial coordinate and the standard unit vector in the radial direction. Let (u, m, λ) be a radial solution of (1). We write u(x) = u(ρ), m(x) = m(ρ), Du(x) = u′(ρ)e_ρ, Dm(x) = m′(ρ)e_ρ. The Kolmogorov equation (1)-(K) reads

∫_Ω (νDm + h_0|Du|^{r′−2}Du m) · Dξ = 0   (13)

for all test functions ξ ∈ C^∞_0(Ω), and νDm(x) + h_0|Du(x)|^{r′−2}Du(x) m(x) = F(ρ)e_ρ for all x ∈ Ω, where F(ρ)ρ^{(d−1)/q} ∈ L^q((0, R′)) for all q ≥ 1, R′ < R, so (13) becomes

∫_0^R F(ρ)ξ′(ρ)ρ^{d−1} dρ = 0

for all radial test functions ξ. It is then possible to conclude that F(ρ)ρ^{d−1} = 0 a.e. in (0, R), so (6) holds and (ϕ, λ) is a solution of (4) because of Theorem 1.2, a). If (ϕ, λ) is a radial solution of (4), say ϕ(x) = ϕ(ρ), Dϕ(x) = ϕ′(ρ)e_ρ, then ϕ′(0) = 0. Setting b(ρ) := −νrϕ′(ρ)/(h_0ϕ(ρ)) for all ρ ∈ [0, R), we have b ∈ C^{0,α}([0, R′]) for all 0 < R′ < R and b(0) = 0, so

u(x) := ∫_0^ρ |b(s)|^{(2−r′)/(r′−1)} b(s) ds

defines a radial function u(x) = u(ρ) that belongs to C^1(Ω). An easy computation shows that u satisfies (8), hence (u, m, λ) is a solution of (1) as a consequence of Theorem 1.2, b).

Proof of Corollary 1.4. We proceed as in the proof of Corollary 1.3. Let (u, m, λ) be a solution of (1) and set νm′(x) + h_0|u′(x)|^{r′−2}u′(x) m(x) =: F(x). Then F ∈ L^q((a, b)) and, by the Neumann conditions,

∫_a^b F(x)ξ′(x) dx = 0

for all test functions ξ ∈ C^∞([a, b]), so F(x) = 0 for a.e. x ∈ [a, b]. Hence (6) holds and one implication stated by the corollary follows by Theorem 1.2, a). Vice-versa, (8) holds if we choose u(x) := ∫_a^x |b(y)|^{(2−r′)/(r′−1)} b(y) dy, where b(y) = (−νrϕ′(y))/(h_0ϕ(y)) for all y ∈ [a, b], and Theorem 1.2, b) applies.

Acknowledgements. The author wishes to express his gratitude to the anonymous referee for his very careful reading of the manuscript and his valuable advice.

Email address: [email protected] (Marco Cirant). 1 The author is supported by a Post-Doc Fellowship from the Università degli Studi di Milano. 2 For equations (1)-(K) and (4), Neumann boundary conditions are intended in the weak sense, namely the space of test functions ξ is set to be C^∞(Ω).

Références

[1] P. Cardaliaguet, J.-M. Lasry, P.-L. Lions, A. Porretta, Long time average of mean field games, Netw. Heterog. Media 7 (2) (2012) 279-301.
[2] E. DiBenedetto, C^{1+α} local regularity of weak solutions of degenerate elliptic equations, Nonlinear Anal. 7 (8) (1983) 827-850.
[3] D. Gomes, H. Sánchez Morgado, A stochastic Evans-Aronsson problem, Trans. Amer. Math. Soc. 366 (2) (2014) 903-929.
[4] O. Guéant, A reference case for mean field games models, J. Math. Pures Appl. 92 (3) (2009) 276-294.
[5] O. Guéant, Mean field games equations with quadratic Hamiltonian: a specific approach, Math. Models Methods Appl. Sci. 22 (9) (2012) 1250022, 37.
[6] M. Huang, R. P. Malhamé, P. E.
Caines, Large population stochastic dynamic games : closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle, Commun. Inf. Syst. 6 (3) (2006) 221-251. Jeuxà champ moyen. I. Le cas stationnaire. J.-M Lasry, P.-L Lions, C. R. Math. Acad. Sci. Paris. 3439J.-M. Lasry, P.-L. Lions, Jeuxà champ moyen. I. Le cas stationnaire, C. R. Math. Acad. Sci. Paris 343 (9) (2006) 619-625. Jeuxà champ moyen. II. Horizon fini et contrôle optimal. J.-M Lasry, P.-L Lions, C. R. Math. Acad. Sci. Paris. 34310J.-M. Lasry, P.-L. Lions, Jeuxà champ moyen. II. Horizon fini et contrôle optimal, C. R. Math. Acad. Sci. Paris 343 (10) (2006) 679-684. Mean field games. J.-M Lasry, P.-L Lions, Jpn. J. Math. 21J.-M. Lasry, P.-L. Lions, Mean field games, Jpn. J. Math. 2 (1) (2007) 229-260. Boundary regularity for solutions of degenerate elliptic equations. G M Lieberman, Nonlinear Anal. 1211G. M. Lieberman, Boundary regularity for solutions of degenerate elliptic equations, Nonlinear Anal. 12 (11) (1988) 1203-1219. . P.-L Lions, Cours au collège de franceP.-L. Lions, Cours au collège de france, http ://www.college-de-france.fr.
Ink-Jet Printed Graphene Electronics

21 Nov 2011

F. Torrisi, T. Hasan, W. Wu, Z. Sun, A. Lombardo, T. Kulmala, G. W. Hshieh, S. J. Jung, F. Bonaccorso, P. J. Paul, D. P. Chu, A. C. Ferrari

Department of Engineering, University of Cambridge, Cambridge CB3 0FA, UK

Abstract: We demonstrate ink-jet printing as a viable method for large area fabrication of graphene devices. We produce a graphene-based ink by liquid phase exfoliation of graphite in N-Methylpyrrolidone. We use it to print thin-film transistors, with mobilities up to ∼95 cm 2 V −1 s −1 , as well as transparent and conductive patterns, with ∼80% transmittance and ∼30 kΩ/sq sheet resistance. This paves the way to all-printed, flexible and transparent graphene devices on arbitrary substrates.

I. INTRODUCTION

Flexible electronics is a rapidly expanding research area 1 . Applications include touch screens 2 , electronic paper (e-paper) 3,4 , sensors 5 , radio frequency tags 6 , photovoltaic cells 7,8 , and electronic textiles 9 .
To date, it mainly relies on two fabrication strategies: one in which substrates bearing thousands of Field-Effect Transistors (FETs) are bonded to plastic by transfer printing or pick-and-place methods 10 ; another in which FETs are prepared directly on the target substrate by several coating, curing and lithographic steps 1,11 . Rubber stamping 12 , embossing 13 and ink-jet printing 14,15 reduce the number of such fabrication steps. Ink-jet printing is one of the most promising techniques for large area fabrication of flexible plastic electronics 15 . A range of components can be printed, such as transistors 13,15-18 , photovoltaic devices 19 , organic light emitting diodes (OLEDs) 13,18,20 , and displays 13 . Ink-jet printing is versatile 18 , involves a limited number of process steps 21 , is amenable to mass production, and can deposit controlled amounts of material 21 . Drop-on-demand 21,22 ink-jet printing has progressed from printing text and graphics 21 to a tool for rapid manufacturing 23 , and is now an established technique to print Thin Film Transistors (TFTs) based on organic conducting and semiconducting inks 5,15,24 . However, their mobilities, µ<0.5 cm 2 V −1 s −1 , 5,18 are still much lower than those of standard silicon technology. Several approaches aim to improve this, such as the use of polysilicon 25 , zinc oxide nanoparticles 26 and carbon nanotubes (CNTs) 27-32 . Metal nanoparticle inks are not stable in ordinary solvents, such as deionized (DI) water, acetone, isopropyl alcohol, N-Methylpyrrolidone (NMP), tetrahydrofuran 18,33 . They need to be chemically modified in order to be dispersed 18 , using stabilizers, which usually degrade in a couple of years 18,33 . Metal nanoparticles also tend to oxidize after printing 18,33 . Ink-jet printed CNT-TFTs have been reported with µ up to 50 cm 2 V −1 s −1 and an ON/OFF ratio ∼10 3 .
32 Graphene is the two-dimensional (2d) building block for sp 2 carbon allotropes of every other dimensionality. It can be stacked into 3d graphite, rolled into 1d nanotubes, or wrapped into 0d fullerenes 34 . It is at the centre of an ever expanding research area 34-37 . Near-ballistic transport and high mobility make it an ideal material for nano-electronics, especially for high frequency applications 38 . Furthermore, its optical and mechanical properties are ideal for micro and nanomechanical systems, thin-film transistors, transparent and conductive composites and electrodes, and photonics 34,37,39 . Graphene was first isolated by micromechanical exfoliation of graphite 40 . This technique is still the best in terms of purity, defects, mobility and optoelectronic properties. However, large scale production approaches are needed for widespread application. These encompass growth by chemical vapor deposition (CVD) 41-46 , segregation by heat treatment of silicon carbide 47-50 and metal substrates 51-54 , and liquid phase exfoliation (LPE) 55-58 . Amongst these, LPE is ideally suited to produce printable inks. Graphite can be exfoliated by chemical wet dispersion followed by ultrasonication, both in aqueous 56,58 and non-aqueous solvents 55,58 . Dispersions can be achieved by mild sonication in water with sodium deoxycholate, followed by sedimentation-based ultracentrifugation 58,59 . Bile salt surfactants also allow the isolation of flakes with controlled thickness, when combined with density gradient ultracentrifugation (DGU) 60 . Exfoliation of graphite intercalated compounds 57 and of expandable graphite 61 has also been reported. LPE was first achieved through sonication of graphite oxide 62 , following the Hummers method 63 .
The oxidation of graphite in the presence of acids and oxidants 64,65 disrupts the sp 2 network and introduces hydroxyl or epoxide groups 66,67 , with carboxylic or carbonyl groups attached to the edge 66,67 . These make graphene oxide (GO) sheets readily dispersible in water 62,68 and several other solvents 69 . Although large GO flakes can be produced, these are intrinsically defective 62,70 and electrically insulating 62,66 . Despite several attempts 62,66 , reduced GO (RGO) does not fully regain the pristine graphene electrical conductivity 66,71 . It is thus important to distinguish between dispersion-processed graphene flakes 55-58 , retaining the electronic properties of graphene, and insulating GO dispersions 62,71 . Several groups reported GO-based inks 33,72,73 . Ref. 72 ink-jet printed RGO films for sensor applications, while Ref. 33 produced RGO-stabilized Cu nanoparticles as low temperature metal colloids, to replace standard metal nanoparticle inks, which require high temperature sintering post-processing 74 . Mobilities up to 90 cm 2 V −1 s −1 have been achieved for highly reduced GO films by ink-jet printing 73 , with an ON/OFF ratio up to 10 73 . Here we produce a graphene-based ink and demonstrate its viability for printed electronics.

II. RESULTS AND DISCUSSION

III. INK REQUIREMENTS

A key property of inks viable for printing is their ability to generate droplets 75,76 . This is commonly quantified by the figure of merit Z = √(γρa)/η (the inverse of the Ohnesorge number), where η, γ and ρ are the viscosity, surface tension and density of the ink, and a the nozzle diameter 75,76 . For Z<1 the high viscosity prevents drop ejection 75,76 , whereas at Z>14 the primary drop is accompanied by a number of satellite droplets 75,76 , so 1<Z<14 is the printable window. Moreover, when inks contain dispersed molecules or nano-particles, the latter should be smaller than the nozzle diameter, to prevent clogging 21,23 . Refs.
23,78 suggested that the size of such molecules or particles should be at most 1/50 of the nozzle diameter, in order to exclude any printing instability, such as clustering of the particles at the nozzle edge, which may deviate the drop trajectory, or result in agglomerates that eventually block the nozzle. The behavior of the ejected drop on the substrate can be described by fluid dynamics. When a small liquid droplet is in contact with a flat surface, partial wetting results in a finite angle between the liquid and the substrate 79 , known as the contact angle, θ C 79-81 . The lower limit on the size of a printed drop is given by 75,76 s[µm] = a·[(We+12)/(3(1−cosθ C )+4We/√Re)] 1/2 , where We and Re are the Weber and Reynolds numbers of the ejected drop. Thus, e.g., for a typical a=50µm, We=20, Re=58 and θ C ∼45 • , we get s∼85-90µm. The distance from the substrate must be optimized to guarantee both homogeneous printing and the highest resolution, barring any unusual jetting conditions, such as perturbations from the surrounding environment and diversion of the drop trajectory 18,75,82 . Furthermore, a substrate very close to the nozzle causes secondary drops to scatter off during the impact of the primary drop 18,83 , due to the initial drop jetting pressure, thus affecting the homogeneity of the final printed features 83 . The final assembly of printed nano-particle inks depends on the substrate Surface Energy (SE) 21,23 , as well as the ink viscosity and surface tension 21 . When a drop of an ink containing dispersed particles evaporates on a surface it commonly leaves a dense, ring-like deposit along its perimeter 21,23 . This is the so-called "coffee ring effect" 84 , i.e. a distortion of the drops during solvent drying due to the interplay of ink viscosity and solute transport via solvent motion (arising from surface tension interaction between solvent and substrate) 18,84 . This is one of the most important phenomena affecting the homogeneity of ink-jet printed drops 18,84 .
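The drop-size estimate above can be checked numerically. A minimal sketch of s = a·[(We+12)/(3(1−cosθ_C)+4We/√Re)]^{1/2} with the values quoted in the text:

```python
import math

def min_drop_size(a_um, we, re, theta_deg):
    """Lower bound on the printed drop diameter [um].
    a_um: nozzle diameter [um]; we, re: Weber and Reynolds numbers
    of the ejected drop; theta_deg: contact angle [degrees]."""
    theta = math.radians(theta_deg)
    return a_um * math.sqrt((we + 12.0) /
                            (3.0 * (1.0 - math.cos(theta)) + 4.0 * we / math.sqrt(re)))

# Values quoted in the text: a = 50 um, We = 20, Re = 58, theta_C ~ 45 deg
s = min_drop_size(50.0, 20.0, 58.0, 45.0)
print(round(s, 1))  # ~84 um, consistent with the quoted s ~ 85-90 um
```

The numerical result reproduces the order of the quoted s∼85-90µm; the exact figure depends on how We, Re and θ_C are rounded.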
In order to prevent this, it is necessary to "freeze" the drop geometry immediately after the drops form a homogeneous and continuous film on the substrate 18 . Here we use an ink-jet printer with a nozzle diameter ∼50µm, thus we need flakes less than 1µm across. By tuning the viscosity η, surface tension γ and density ρ we will target a Z within the optimal range. We print on Si/SiO 2 (to probe the electrical properties of the ink) and borosilicate (Pyrex 7740-Polished Prime Grade) glass substrates (to test the ink as a transparent conductor), both with a roughness R z <15nm. Our aim is to obtain ink-jet printed drops on the substrate, with homogeneous flakes and uniform morphology, i.e. with roughness comparable to the substrate. We achieve this by varying the contact angle and optimizing the substrate wettability. In order to reduce the coffee ring effect we need both a solvent with boiling point (T b [ • C]) and heat of vaporization (∆H vap [kJ/mol]) higher than water 18,82,84 , and a substrate that promotes adhesion 85 . Thus, we use NMP as solvent for two main reasons. First, it has a higher boiling point (∼202 • C) 86 and heat of vaporization (54.5kJ/mol) 86 than water (∼100 • C and ∼40kJ/mol). Second, NMP is the best solvent for high-yield, surfactant-free exfoliation of graphite 55,58 . We then test several surface treatments to optimize substrate adhesion. After printing, NMP is removed by thermal annealing at 170 • C for 5 minutes.

A. Graphene-based printable ink

We prepare the graphene-based printable ink as follows. Graphite flakes (NGS Naturgraphit) are sonicated (Decon bath, 100W) in NMP for 9 hours. The unexfoliated flakes are left to settle for 10 mins after sonication. The decanted dispersions are then ultracentrifuged using a TH-641 swinging bucket rotor in a Sorvall WX-100 Ultra-centrifuge at 10,000 rpm (∼15,000g) for an hour and filtered to remove flakes>1µm, which might clog the nozzle.
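The "target a Z within the optimal range" step above can be sketched numerically, assuming the figure of merit Z = √(γρa)/η used later in the text; the ink parameters below are illustrative, not measurements from this work:

```python
import math

def z_figure_of_merit(gamma, rho, a, eta):
    """Inverse Ohnesorge number Z = sqrt(gamma*rho*a)/eta.
    gamma: surface tension [N/m], rho: density [kg/m^3],
    a: nozzle diameter [m], eta: viscosity [Pa*s]."""
    return math.sqrt(gamma * rho * a) / eta

def is_printable(z):
    # Z < 1: too viscous for drop ejection; Z > 14: satellite droplets.
    return 1.0 < z < 14.0

# Illustrative ink: gamma = 40 mN/m, rho = 1100 kg/m^3, a = 50 um, eta = 10 mPa*s
z = z_figure_of_merit(0.040, 1100.0, 50e-6, 10e-3)
print(round(z, 2), is_printable(z))
```

A low-viscosity fluid like pure water (η∼1mPa s) with the same nozzle gives Z well above 14, which is why printable inks are usually tuned towards higher η or lower γ.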
The resulting ink is characterized by Optical Absorption Spectroscopy (OAS), High Resolution Transmission Electron Microscopy (HRTEM), electron diffraction and Raman spectroscopy. A Perkin-Elmer Lambda 950 spectrometer with 1nm resolution is used for OAS measurements. OAS can be used to estimate the concentration of graphene 55,56,59 via the Beer-Lambert law, A = αcl, where A is the absorbance, l [m] is the light path length, c [g/L] the concentration of dispersed graphitic material and α [L g −1 m −1 ] the absorption coefficient. Fig.1 plots an OAS spectrum of our ink diluted to 10%. The ink is diluted to avoid strong scattering losses at higher concentrations, which could cause deviations of the measured A from the Beer-Lambert law. The spectrum in Fig.1 is mostly featureless, as expected due to the linear dispersion of the Dirac electrons 37,39,87-90 , the peak in the UV region being a signature of the van Hove singularity in the graphene density of states 88 . From α∼1390 L g −1 m −1 at 660nm, as for Refs. 56,58, we estimate c∼0.11±0.02g/L. We disperse drops of our ink on holey carbon Transmission Electron Microscopy (TEM) grids for analysis using a Tecnai T20 high resolution TEM, with an acceleration voltage of 200kV operating in phase contrast mode. Fig.2a is an HRTEM image of a Single Layer Graphene (SLG) flake from the ink, while Fig.2b is a normal-incidence electron diffraction pattern of the same flake. It shows the expected sixfold symmetry 91-93 . The peaks are labeled with the corresponding Miller-Bravais (hkil) indexes. For Few Layer Graphene (FLG) flakes with Bernal (AB) stacking, the intensity ratio I 1100 /I 2110 is <1, while for SLG I 1010 /I 2110 >1 91,93 . We use this to distinguish SLG from FLGs 55,59 . Fig.2c plots the diffraction intensity measured along the line section through the (1210), (0110), (1010), (2110) axis reported in Fig.2b.
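The Beer-Lambert estimate above is a one-line inversion, c = A/(αl). A minimal sketch with the α used in the text; the absorbance value and the 1 cm path length are illustrative assumptions, not reported measurements:

```python
def concentration_g_per_L(absorbance, alpha=1390.0, path_m=0.01):
    """Beer-Lambert: A = alpha * c * l  =>  c = A / (alpha * l).
    alpha [L g^-1 m^-1] at 660 nm (value used in the text);
    path_m: cuvette path length [m] (1 cm assumed here)."""
    return absorbance / (alpha * path_m)

# Hypothetical absorbance of the 10x-diluted ink at 660 nm:
A_660 = 0.153
c_diluted = concentration_g_per_L(A_660)  # g/L in the diluted ink
c_stock = 10 * c_diluted                  # undo the 10x dilution
print(round(c_stock, 2))
```

With this (assumed) absorbance the estimate lands on the c∼0.11 g/L quoted in the text; in practice the ±0.02 g/L uncertainty comes from the spread in α and residual scattering.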
The inner peaks, (0110) and (1010), are ∼1.5 times more intense than the outer ones, (1210) and (2110), indicating that the flake is SLG 91 . The analysis of the edges also gives reliable information on the number of layers and can be used to investigate a large number of flakes 91 , from zoomed-in high resolution edge images 55,94 . If a SLG folds, or several SLGs stack one on the other, selected area diffraction is used to distinguish contentious cases. These combined analyses show that our ink mostly consists of SLGs, Bi-Layers (BLGs) and FLGs, with lateral size ∼300-1000nm. We find that ∼35% of SLGs are larger than 300nm (Fig.2d); ∼40% of BLGs are larger than 350nm (Fig.2e); ∼55% of FLGs are larger than 450nm (Fig.2f). In particular, we have ∼33% SLGs at c∼0.11g/L. Previous works on LPE of graphite in NMP reported up to ∼28% SLGs for c∼0.18g/L 58 and ∼21% for c∼1.8g/L 94 . Ref. 57 also reported exfoliation of intercalated graphite in NMP, with ∼20% SLGs for c∼0.01g/L. Thus, our ink has a higher SLG yield with respect to previous works, but lower c than Ref. 94. That higher c was achieved by long (up to 460h) ultrasonication 94 . However, Ref. 94 reported defects and size reduction as a result. Our combination of low-power sonication (<25W) and ultracentrifugation is ideal for a high yield of defect-free SLGs. Stable dispersions require the Gibbs free energy of mixing, ∆G mix , to be zero or negative 95 , where ∆G mix = ∆H mix − T∆S mix , T being the temperature, ∆H mix the enthalpy of mixing and ∆S mix the entropy change in the mixing process 55,95 . For graphene and nanotubes, ∆S mix is small 55,96 . Therefore, for dispersion and stabilization of graphene in solvents, ∆H mix needs to be very small. This can be achieved by choosing a solvent whose surface energy is very close to that of graphene 55 . The surface energy of NMP satisfies this requirement and allows efficient exfoliation of graphite. Graphite can also be efficiently exfoliated in water with the use of bile salt surfactants.
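The diffraction-based layer counting used above reduces to a simple intensity-ratio test on the inner {1100}-type and outer {2110}-type peaks; a minimal sketch (the intensities are illustrative):

```python
def classify_by_diffraction(i_inner, i_outer):
    """Electron-diffraction layer test: inner {1100}-type peaks brighter
    than outer {2110}-type peaks -> single layer (SLG); the reverse ->
    Bernal (AB) stacked few-layer."""
    return "SLG" if i_inner / i_outer > 1.0 else "AB-stacked FLG"

# Inner peaks ~1.5x the outer ones, as for the flake discussed above:
print(classify_by_diffraction(1.5, 1.0))  # -> SLG
```

Ratios close to 1 are ambiguous in practice (folded SLG, turbostratic stacking), which is why the text falls back on edge imaging and selected area diffraction for contentious flakes.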
Ref. 97 reported ∼20% SLGs at c∼0.3g/L, while Ref. 59 reported ∼60% SLGs for c∼0.012g/L. The yield can be increased up to ∼80% by density gradient ultracentrifugation 60 . The flake size of LPE graphene in water-surfactant dispersions is on average smaller (∼200nm 97 , ∼30nm 59 ) than thus far reported for NMP (∼1µm 55,58 ). The room temperature viscosity of NMP (1.7mPas 86 ) is higher than that of water (∼1mPas 86 ). Larger flakes in a higher viscosity medium (such as NMP) experience a higher frictional force 98,99 and sedimentation coefficient 99,100 , making it more difficult for them to sediment during ultracentrifugation. This reduces the SLG yield in NMP compared to water. The centrifuged dispersions are drop-cast onto a Si wafer with 300nm thermally grown SiO 2 (LDB Technologies ltd.) and annealed at 170 • C to remove NMP. These samples are then used for Raman measurements, collected with a Renishaw 1000 at 457, 514.5 and 633nm and a 100× objective, with an incident power ∼1mW. Fig.3a plots a typical Raman spectrum of the ink at 514.5nm. Besides the G and 2D peaks, it shows significant D and D' intensities and the combination mode D+D' ∼2950cm −1 . The G peak corresponds to the E 2g phonon at the Brillouin zone centre. The D peak is due to the breathing modes of sp 2 rings and requires a defect for its activation by double resonance (DR) 93,101,102 . The 2D peak is the second order of the D peak. This is a single band in SLG 93 , whereas it splits in four in BLG, reflecting the evolution of the band structure 93 . The 2D peak is always seen, even when no D peak is present, since no defects are required for the activation of two phonons with the same momentum, one backscattering from the other 93 . DR can also happen intra-valley, i.e. connecting two points on the same cone around K or K' 101-103 . This gives the D' peak. The 2D' is the second order of the D' peak.
We assign the D and D' peaks to the edges of the sub-micrometer flakes 104 , rather than to the presence of a large amount of disorder within the flakes. This is further supported by the plot of the G peak dispersion, Disp(G) (Fig.3b). This is defined as Disp(G) = ∆Pos(G)/∆λ L , where λ L is the laser excitation wavelength. Disp(G) is obtained from a linear fit of the G peak position, Pos(G), as a function of λ L . In disordered carbons Pos(G) increases as the excitation wavelength decreases, from IR to UV 101 , thus Disp(G) increases with disorder 101,105 . The full width at half maximum of the G peak, FWHM(G), always increases with disorder 106,107 . Thus, combining the intensity ratio of the D and G peaks, I(D)/I(G), with FWHM(G) and Disp(G) allows us to discriminate between disorder localized at the edges and disorder in the bulk of the samples. In the latter case, a higher I(D)/I(G) would correspond to higher FWHM(G) and Disp(G). Figs.4a,b show that Disp(G), I(D)/I(G) and FWHM(G) are not correlated, a clear indication that the major contribution to the D peak comes from the sample edges. Also, Disp(G) is nearly zero for all samples, compared to the values bigger than 0.1cm −1 /nm expected for disordered carbons 105,108 , another indication of the lack of large structural disorder within our flakes. The distribution of 2D peak positions, Pos(2D), shown in Fig.3d, has two maxima, ∼2692 and 2705cm −1 , similar to FWHM(2D) (Fig.3e). This is consistent with the samples being a distribution of SLG, BLG and FLGs, but with a significant fraction of SLGs. We note that for the flakes with the smallest Pos(2D) and FWHM(2D), the ratio of the 2D and G integrated areas, A(2D)/A(G), is at most 3.5, implying a doping of at least 10 13 cm −2 109-111 . We now estimate η, ρ and γ for our ink, in order to check its viability for ink-jet printing.
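The Disp(G) extraction described above is just the slope of a linear fit of Pos(G) against excitation wavelength; a minimal sketch with illustrative (synthetic) peak positions at the three laser lines used in the text:

```python
import numpy as np

def g_peak_dispersion(wavelengths_nm, pos_g_cm1):
    """Disp(G) = dPos(G)/d(lambda_L), the slope of a linear
    least-squares fit of Pos(G) vs excitation wavelength."""
    slope, _ = np.polyfit(wavelengths_nm, pos_g_cm1, 1)
    return slope  # cm^-1 / nm

# Illustrative, nearly flat Pos(G) values at 457, 514.5 and 633 nm,
# as expected for flakes without bulk disorder:
lam = [457.0, 514.5, 633.0]
pos_g = [1581.6, 1581.5, 1581.4]
d = g_peak_dispersion(lam, pos_g)
print(abs(d) < 0.1)  # |Disp(G)| << 0.1 cm^-1/nm -> ordered flakes
```

For disordered carbons the fitted slope would be negative with magnitude above ∼0.1 cm⁻¹/nm, since Pos(G) rises towards shorter wavelengths.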
η can be evaluated as η = η 0 (1+2.5φ) 82,112 , where η 0 is the viscosity of the pure solvent and φ the volume fraction of particles in the dispersion. We assume η 0 = η NMP ∼0.8mPas, the viscosity of pure NMP at ∼80 • C 86,113 (the temperature of the drops ejected from our printer, as specified in Ref. 114). We take φ = 1 − Vol ink /Vol NMP , where Vol NMP [∼0.972 mm 3 ] is the volume of 1mg of pure NMP and Vol ink [∼0.94 mm 3 ] is the volume of 1mg of our ink, both measured by a micropipette (±2nL precision) at room temperature and pressure. We thus get φ∼0.03, and η∼0.96mPas. From the same measurement we also obtain ρ∼1.06 g cm −3 , and we derive γ∼50mJ m −2 from tensiometer measurements. Given these parameters, and our nozzle diameter ∼50µm, we get Z = √(γρa)/η ∼1.7, which falls within the range suitable for printing 75,76 , but close to the lower boundary of allowed Z 75-77 , thus implying a lower probability of secondary drop ejection 75,82,112 . However, high viscosity may generate nanoparticle re-aggregation 112 .

B. Ink-jet printed features

The final layout of printed nano-particle inks depends on the substrate SE 21,23 , ink viscosity and surface tension 21 . To investigate the influence of surface treatments, we print our ink on pristine, hexamethyldisilazane (HMDS) coated and O 2 plasma treated Si/SiO 2 . A modified Epson Stylus 1500 ink-jet printer equipped with an Epson S020049 cartridge is used to print the dispersions under a constant nitrogen flow, followed by annealing at 170 • C for 5 minutes to remove the NMP. The nozzle is placed ∼1mm above the substrate. HMDS is deposited by spin coating for 40s at 1000rpm, followed by annealing at 80 • C for 2 min. Alternatively, the substrates are cleaned by an RF O 2 plasma at 200W and 4×10 −1 Torr for 2 min. We use optical micrographs to visualize the ink-jet printed drops, Figs.5a,b,c. The bright green/blue color of the printed features is due to the use of dark field imaging.
These reveal that HMDS confines the drops to ∼90µm diameter (Fig.5c), smaller than on the other substrates (∼100µm and ∼150µm for pristine, Fig.5b, and plasma treated SiO 2 , Fig.5a). As discussed above, we use NMP as solvent to reduce the coffee ring effect compared to low boiling point solvents (e.g. water, chloroform) 18,82,84 . However, we still observe coffee rings when printing on pristine SiO 2 (Fig.5b), while Fig.5c reveals a higher flake uniformity and no coffee rings on HMDS treated SiO 2 . Fig.5d shows a representative printed pattern, demonstrating the ability to fabricate complex layouts. Thus, HMDS appears to prevent coffee rings. To understand this, we measure the substrate SE and investigate the morphology of the printed stripes, before and after surface treatment. We perform contact angle measurements with a KSV CAM200 system. The contact angle is measured by dispensing 1µl of DI water on the substrates. The surface tension is measured by the Du Noüy-Padday technique 147 . This consists in immersing a rod, a few millimeters in diameter, into the dispersion, followed by pull-out. The rod is attached to a scale or balance via a thin metal hook that measures the maximum pull force. This is recorded as the probe is first immersed 1mm into the solution and then slowly withdrawn from the interface. The contact angle, θ C , depends on the liquid surface tension 79-81 and the substrate critical surface tension 79-81 , according to Young's relation 79,81,115 : γ SV − γ SL − γ LV cosθ C = 0, where γ SV [mJ m −2 ] is the solid-vapor surface tension, γ SL the solid-liquid surface tension and γ LV the liquid-vapor surface tension. Figs.6a,b show ink drops printed onto pristine and HMDS treated Si/SiO 2 , with θ C ∼6 • and ∼65 • respectively, indicating that the pristine substrate SE is modified by the HMDS treatment. γ LV was measured ∼73mJ m −2 in Ref. 116 for DI water, whereas γ SV ∼116.5mJ m −2 and ∼40mJ m −2 were reported for pristine 117 and HMDS treated 118 Si/SiO 2 .
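Young's relation can be rearranged to extract γ_SL from the measured contact angles; a quick numerical check using the γ_LV, γ_SV and θ_C values quoted above:

```python
import math

def gamma_sl(gamma_sv, gamma_lv, theta_deg):
    """Young's relation: gamma_SV - gamma_SL - gamma_LV*cos(theta_C) = 0
    =>  gamma_SL = gamma_SV - gamma_LV*cos(theta_C).  Units: mJ/m^2."""
    return gamma_sv - gamma_lv * math.cos(math.radians(theta_deg))

# Values from the text: gamma_LV ~ 73 mJ/m^2 (DI water);
# gamma_SV ~ 116.5 (pristine) and ~40 (HMDS treated) mJ/m^2;
# theta_C ~ 6 deg (pristine) and ~65 deg (HMDS treated).
pristine = gamma_sl(116.5, 73.0, 6.0)
hmds = gamma_sl(40.0, 73.0, 65.0)
print(round(pristine, 1), round(hmds, 1))  # -> 43.9 9.1
```

Both numbers reproduce the γ_SL values reported in the text for pristine and HMDS treated Si/SiO2.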
Consequently, γ SL ∼43.9mJ m −2 and ∼9.1mJ m −2 for pristine and HMDS treated Si/SiO 2 , respectively. A higher γ SL implies a higher SE 119 . Indeed, our γ SL values correspond to SEs ∼73.9 and ∼39.1mJ m −2 for pristine and HMDS treated Si/SiO 2 . A small θ C results in rapid spreading of the drop on the substrate 79 , as seen on pristine SiO 2 . On the other hand, HMDS gives a higher θ C , since it lowers γ SL (thus the substrate SE), therefore reducing the wettability 80,120 . When ink-jet printing stripes, the inter-drop (i.e. centre to centre) distance is an important parameter 121 . For a large distance, individual drops are deposited on the substrate 75,82,121 . As the inter-drop distance decreases, these merge into a line 121 . Thus, in order to obtain a continuous line we need an inter-drop distance smaller than the drop diameter 121 . On the other hand, Refs. 82,112 reported that a very small inter-drop distance can result in particle aggregation on the substrate, thus a non-uniform stripe (i.e. with irregular edges). We thus select an inter-drop distance suitable to obtain continuous lines, while avoiding non-uniformities and irregular edges. Figs.7a,b,c are optical images of printed stripes on pristine, O 2 plasma treated and HMDS treated Si/SiO 2 , whereas Figs.7d,e,f plot the respective Atomic Force Microscope (AFM) topographies. The stripe in Fig.7a is ∼100-110µm wide, has an average thickness ∼70nm and an irregular flake distribution, with aggregation of flakes. That in Fig.7b is wider (∼130-140µm), with aggregates at the edges, and an average thickness ∼55nm. The stripe in Fig.7c has a more uniform and regular distribution of flakes, is ∼85-90µm wide and ∼90nm thick. The width narrows going from the O 2 plasma treated to the HMDS treated Si/SiO 2 , due to the decrease in SE. Figs.7d,e show stripes with voids and irregular flake distribution, with R z ∼30-40nm. Fig.7f presents a more homogeneous network with R z ∼15nm.
Thus, R z is lower when θ C is higher, because the poor wettability of drops with higher θ C reduces the stripe diameter (as shown in Figs.7a,b,c), confining the flakes onto a smaller area. The uniformity of the stripes printed on the HMDS treated substrate corroborates the above considerations on the SE changes. In fact, the presence of silane groups in HMDS 85 promotes the adhesion of metallic particles to the substrate 85,122 . Analogously, HMDS may promote the adhesion of graphene to the substrate, thus resulting in a uniform network. The Raman spectra of the thickest stripes resemble that of a multi-layer sample, having lost any direct signature of SLG. Note however that the 2D peak shape, even for the 90nm stripe, remains distinctly different from that of graphite. A similar aggregation of flakes was previously observed for thick films derived from graphene solutions 55 . In all cases Disp(G) remains similar, and very low, again showing the lack of large amounts of defects within the flakes.

C. Transparent and conductive patterns

We now investigate the viability of our ink to print transparent and conductive patterns. We characterize the sheet resistance R s [Ω/sq] and transmittance T [%] of our stripes when placed on a transparent substrate. We thus use pristine, O 2 plasma and HMDS treated borosilicate glass, with R z <15nm, similar to SiO 2 on Si, but with T∼99% (Pyrex 7740-Polished Prime Grade). T is measured on samples ink-jet printed on borosilicate glass (followed by annealing at 170 • C for 5 mins to remove the NMP) by scanning a 514.5nm laser beam in 100µm steps. The transmitted beam is measured with a photodiode. A microscope equipped with a 100× long working distance objective focuses the laser to ∼2µm. The incident power is kept at ∼8mW. The transmitted power is measured by an Ophir Nova II power meter with 0.1µW resolution. Fig.10a shows that for our stripes the measured thickness (t) increases linearly with the number of printing repetitions, with a slope defined by the surface treatment.
Fig.10b plots the four-probe measured R s as a function of t. For large t, R s settles to ∼34, ∼500 and ∼10 5 kΩ/sq for HMDS treated, pristine and O 2 plasma treated glass, respectively. For t<20nm, R s increases for all substrates. For a thin film, R s = (σt) −1 , where σ [S/m] is its conductivity 123 . Thus, from Fig.10b and σ = (R s t) −1 , we get the data in Fig.10c. σ is constant for t>20nm, in the case of HMDS treated, pristine and plasma treated glass, with averages ∼10 2 , ∼30 and ∼10 −1 S/m, respectively. Thus, stripes on HMDS treated glass have a higher σ, combined with a more regular network of flakes, compared to the other two substrates. When t<20nm, σ decreases for all substrates. A similar trend was reported for CNT films on SiO 2 (produced by vacuum filtration) 124,125 , ink-jet printed CNT patterns on SiO 2 29,30 , graphene films on SiO 2 126,127 and polyethylene-terephthalate (PET) 126,127 , as well as Ag nanowire films produced by vacuum filtration on SiO 2 126 . Refs. 124-127 explained this decrease of σ at small t by percolation. Percolation theory 128 predicts that σ, for a network of conductive particles, scales as 128

σ ∝ (X − X c ) β    (1)

where X [µg/mm 2 ] is the concentration of conductive particles per unit area, X c [µg/mm 2 ] is the critical concentration of flakes corresponding to the percolation threshold and β is the percolation exponent. Eq.1 can be rewritten in terms of t, rather than X 124 , as

σ ∝ (t − t c ) ǫ    (2)

where t c is the critical thickness and ǫ is the percolation exponent. Fig.10c shows two regimes for σ as a function of t: a percolative linear behavior for t<20nm and a constant σ bulk for t>20nm. This can be explained considering that our films stop behaving like bulk materials below a critical thickness (t min ). The exponent ǫ can be estimated from a linear fit of the log 10 plot of σ vs t in the percolation region (t<20nm), Fig.11. We get ǫ∼4 for stripes on HMDS treated and pristine glass, and ǫ∼3 for O 2 plasma treated glass.
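The exponent extraction described above is a straight-line fit on log-log axes. A minimal sketch on synthetic data with a known exponent (t_c assumed zero, as in the text's log σ vs log t fit):

```python
import numpy as np

def percolation_exponent(t_nm, sigma, t_c=0.0):
    """Fit sigma ~ (t - t_c)^eps on log-log axes; returns the slope eps."""
    eps, _ = np.polyfit(np.log10(np.asarray(t_nm) - t_c),
                        np.log10(sigma), 1)
    return eps

# Synthetic percolative data generated with eps = 4:
t = np.array([2.0, 5.0, 10.0, 15.0, 20.0])
sigma = 1e-4 * t**4
print(round(percolation_exponent(t, sigma), 2))  # -> 4.0
```

On real data t_c is not known a priori; fitting it jointly with ǫ (or restricting the fit to t well above t_c) is what makes the extracted exponent meaningful.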
These values indicate percolation, as reported by Refs. 126,129-131 for networks with various geometries. ǫ is expected to increase with particle size 130,131 and decrease with X c 130,131 . Assuming a similar particle size, since the same ink is used in all cases, we deduce that ǫ∼4 points to a bigger X c than ǫ∼3. This indicates the formation of a more uniform network on HMDS treated and pristine glass compared to O 2 plasma treated glass. We also determine the minimum concentration necessary to achieve the bulk conductivity regime. To do so, we assume X≫X c , because the bulk regime needs a tight network of interconnected flakes 126,129,132 . Given our c∼0.11g/L, volume per printed drop ∼10nL 114 , and a dried drop size on the three substrates of ∼90, 100 and 130µm, we estimate X∼4×10 −2 , ∼10 −2 and ∼0.7×10 −2 µg/mm 2 for stripes printed on HMDS treated, pristine and plasma treated glass, respectively. Our experimental T deviates from the dashed lines for T>75%. We assign this to the percolative regime, where σ deviates from a bulk-like behavior. Also in this case, printing on HMDS treated glass gives the highest T for a given R s .

D. Ink-jet printed devices

Ink-jet printed TFTs based on organic semiconducting polymers have been widely investigated 15,134,135 . The current state of the art devices have µ ranging from 0.01 to ∼0.5cm 2 V −1 s −1 , with ON/OFF ratios up to 10 5 134-136 . Several ink-jet printed TFTs using various carbon nanomaterials have been reported. For example, fullerene-based TFTs were discussed in Refs. 137,138, with µ up to 0.01cm 2 V −1 s −1 and an ON/OFF ratio <10. TFTs printed from CNT-based inks have been presented by several groups 27-29,31,32 . The highest µ thus far is ∼50cm 2 V −1 s −1 , combined with an ON/OFF ratio of 10 3 , but measured at 10 −6 Torr and 100K 32 . Ink-jet printed TFTs from GO-based inks were discussed in Refs. 72,73, with µ up to ∼90cm 2 V −1 s −1 for an ON/OFF ratio of 10 (measured at room conditions), after GO reduction.
We print our TFTs as in Fig.12a, and contact them with chromium-gold source and drain pads (Fig.12b). The transfer characteristics are measured (at room conditions) at different drain voltages (Vd = -2, -4, -8V). µ is derived from µ = [L/(W Ci Vd)](dId/dVg), where L [µm] and W [µm] are the channel length and width, respectively, and Ci is the gate dielectric capacitance (∼10nF/cm^2) 139. We get µ ∼95 cm^2V^-1s^-1 for an ON/OFF ratio ∼10 at Vd = -2V, comparable to that reported in Ref. 73 for ink-jet printed RGO TFTs. µ in our devices is almost four orders of magnitude higher than in printed fullerene-based TFTs 137,138 (for the same ON/OFF ratio) and more than two orders of magnitude higher than in ink-jet printed CNT TFTs 27,29 (for an ON/OFF ratio of 10). However, the ON/OFF ratio in our TFTs is lower than the state of the art for CNTs at similar µ (though the latter was measured at 10^-6 Torr and 100K) 32. We note that ink-jet printed electronics requires high µ at room conditions 11,18. So far, CNT ink-jet printed devices measured at room conditions have µ no larger than ∼1 cm^2V^-1s^-1 (at ON/OFF ∼10) 29, two orders of magnitude smaller than our ink-jet printed TFTs. Organic semiconducting inks 134-136 suffer from low µ, limited by variable range hopping between the isolated polymer chains 140. The overall charge conduction in crystalline organic semiconducting thin films is determined by both intra-chain and inter-chain charge transport 141, the former being much faster than inter-chain hopping 140,141. Many groups have tried to improve inter-chain hopping 27,28,142,143. Ref. 142 proposed a chemical modification of the semiconducting organic ink by electron acceptors, while Ref. 143 proposed embedding Au nanoparticles in the semiconducting organic ink. Embedding CNTs in the semiconducting ink 27,28 achieved µ ∼0.07 cm^2V^-1s^-1 at room conditions.
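A minimal sketch of the µ extraction, µ = [L/(W Ci Vd)](dId/dVg), taking the maximum transconductance as the slope. Ci ∼10 nF/cm^2 is the value quoted in the text; the channel geometry (L = 100 µm, W = 1000 µm) and the linear transfer curve are hypothetical, chosen only to make the example self-checking.

```python
import numpy as np

def field_effect_mobility(Vg, Id, L_um, W_um, Ci_F_per_cm2, Vd):
    """Linear-regime mobility mu = (L / (W * Ci * |Vd|)) * |dId/dVg|.

    Vg [V], Id [A]: measured transfer curve; L, W: channel length/width
    (same units, only their ratio matters); Ci [F/cm^2]: gate capacitance
    per unit area; Vd [V]: drain bias. Uses the maximum transconductance
    as dId/dVg. Returns mu in cm^2 V^-1 s^-1.
    """
    gm = np.abs(np.gradient(Id, Vg))   # transconductance [A/V] at each Vg
    return (L_um / W_um) * gm.max() / (Ci_F_per_cm2 * abs(Vd))

# Hypothetical linear p-type transfer curve: Id = -1e-6 * Vg (gm = 1 µS)
Vg = np.linspace(0, -10, 11)
Id = -1e-6 * Vg
mu = field_effect_mobility(Vg, Id, L_um=100, W_um=1000, Ci_F_per_cm2=10e-9, Vd=-2)
print(round(mu, 1))  # → 5.0
```

On a real transfer curve the slope should be taken in the linear region rather than as a global maximum, but the unit bookkeeping (F·V = C, A/C = s^-1, hence cm^2 V^-1 s^-1) is the same.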
We combine our graphene-ink with one of the most commonly used organic polymers for ink-jet printing, Poly[5,5'-bis(3-dodecyl-2-thienyl)-2,2'-bithiophene] (PQT-12) 134-136, in order to investigate its viability as an inter-chain hopping enhancer (similarly to Au nanoparticles and CNTs). PQT-12 is widely used due to its higher environmental stability (up to 300 days at room conditions 144) with respect to other organic semiconducting inks 143,144. Graphene can bridge the polymer chains, allowing a more efficient charge transport. We fabricate a graphene/PQT-12 TFT following the steps shown in Figs.12a,b,c. Fig.13a plots its output characteristics at Vg = -2, -5, -20V. For each Vg, Vd is swept from 0 to -30V in steps of 2V. At Vd = -2V, we get µ ∼0.17 cm^2V^-1s^-1 and an ON/OFF ratio ∼4×10^5. This µ is about ten times that of ink-jet printed CNT/PQT-12 TFTs 27,28 at ON/OFF ∼10^5. When compared to pure organic semiconducting polymers, our µ is ∼20 times higher than that of ink-jet printed PQT-12 135,136, and twice the highest reported µ for ink-jet printed TFTs made of pure Poly(2,5-bis(3-tetradecylthiophen-2-yl)thieno[3,2-b]thiophene) 18,143,145,146. Thus, the combination of our graphene-ink with organic semiconducting inks is promising for high performance printed electronics.

IV. CONCLUSIONS

We demonstrated ink-jet printing of graphene. Liquid phase exfoliated graphene is an ideal and low cost material for the fabrication of transparent conductive inks. Our graphene-ink was used to print TFTs with µ up to ∼95 cm^2V^-1s^-1. It was also combined with PQT-12 to fabricate devices with µ ∼0.2 cm^2V^-1s^-1 and ON/OFF ratios ∼4×10^5. This demonstrates the viability of graphene-inks for flexible and transparent electronics.

V. ACKNOWLEDGEMENTS

We acknowledge funding from the Royal Society Brian Mercer Award for Innovation, the ERC grant NANOPOTS, EPSRC grants EP/GO30480/1 and EP/F00897X/1, EU grants RODIN and GENIUS, and King's College, Cambridge.
ACF is a Royal Society Wolfson Research Merit Award holder.

Fig.8a compares a typical Raman spectrum of a flake in the ink with a measurement on the first stripe and on a stripe 90nm thick, after 30 printing repetitions. Figs.8b,c,d,e,f,g,9 compare the Pos(2D), FWHM(2D) and Disp(G) distributions. The data show that the first stripe has very similar characteristics to the ink, as expected. However, the spectra after 90 repetitions show a Pos(2D) and FWHM(2D) distribution more typical of FLGs.

FIG. 1: Absorbance of graphene-ink. The inset is a picture of a vial of ink.
FIG. 2: a,b) HRTEM image and diffraction pattern of a dispersed SLG flake. c) Diffracted intensity along the dashed line in b). Lateral size distribution of d) SLGs, e) BLGs, f) FLGs.
FIG. 3: a) Raman spectrum of graphene-ink deposited on Si/SiO2. Distribution of b) Disp(G), c) I(D)/I(G), d) FWHM(G), e) Pos(2D), f) FWHM(2D), g) I(2D)/I(G).
FIG. 4: a) I(D)/I(G) as a function of Disp(G), b) I(D)/I(G) as a function of FWHM(G), measured on flakes of our ink deposited on Si/SiO2.
FIG. 5: Dark field optical micrograph of ink-jet printed drops on a) plasma cleaned, b) pristine and c) HMDS treated substrate. Scale: 20µm. d) SEM micrograph of printed pattern.
FIG. 6: Images of water drops dispensed on a) pristine and b) HMDS treated Si/SiO2.
FIG. 7: Optical micrograph of ink-jet printed stripes on a) pristine, b) O2 and c) HMDS treated substrates.
FIG. 8: a) Typical Raman spectrum of individual flakes in the ink, compared with spectra measured on the first stripe and on a stripe 90nm thick. Pos(2D) and FWHM(2D) for b,c) ink; d,e) first stripe; f,g) 90nm thick stripe.
FIG. 9: Distribution of Disp(G) for a) ink; b) first stripe; c) 90nm thick stripe.
FIG. 10: a) Thickness as a function of printing repetitions. b,c) Rs and σ as a function of stripe thickness.
FIG. 10 (cont.): d) T as a function of Rs for HMDS coated (red dots), O2 plasma treated (green triangles) and pristine (black squares) substrates.
FIG. 11: Conductivity as a function of thickness, in logarithmic scale, for stripes printed on HMDS treated (red dots), O2 treated (green triangles) and pristine (black squares) substrates. Lines are fits in the percolation regime.
FIG. 12: a) Ink on Si/SiO2. b) Cr-Au pads define the source and drain contacts. c) A layer of Poly[5,5'-bis(3-dodecyl-2-thienyl)-2,2'-bithiophene] (PQT-12) is printed on top.
FIG. 13: a) Output and b) transfer characteristics of an ink-jet printed graphene/PQT TFT.

Consequently, from Eq.1, σ for stripes on HMDS treated glass (∼10^2 S/m) is higher than on pristine (∼40 S/m) and plasma treated glass (∼0.1 S/m). Fig.10d shows T as a function of Rs. The dashed lines are a plot of the relation T = [1 + Z0 G0/(2 Rs σ_bulk)]^-2 expected for stripes with σ_bulk conductivity, where Z0 = 377Ω is the free-space impedance and G0 ∼6×10^-5 Ω^-1 is the universal optical conductance of graphene 133. The solid lines are fits in the percolative regime 126, where Π is the percolative Figure of Merit.

Ink viscosity, η [mPa s], surface tension, γ [mJ m^-2], density, ρ [g cm^-3], and nozzle diameter, a [µm], influence the spreading of the liquid drops 75-77. These parameters can be arranged into dimensionless figures of merit (FOM), such as the Reynolds (Re) 75-77, Weber (We) 75-77 and Ohnesorge (Oh) 75-77 numbers: Re = υρa/η; We = υ^2ρa/γ; Oh = √We/Re = η/√(γρa), where υ [m/s] is the drop velocity. Refs. 75-77 suggested the use of Z = 1/Oh as the appropriate FOM to characterize drop formation, with 1<Z<14 required for stable drop generation.

* Electronic address: [email protected]

Q. Cao, H. S. Kim, N. Pimparkar, J. P. Kulkarni, C. J. Wang, M. Shim, K. Roy, M. A. Alam, J. A. Rogers, Nature 454, 495 (2008).
L. Zhou, A. Wanga, S. C. Wu, J. Sun, S. Park, T. N. Jackson, Appl. Phys. Lett. 88, 083502 (2006).
I. Ota, J. Ohnishi, M. Yoshiyama, Proc. IEEE 61, 832 (1973).
G. H. Gelinck, H. E. A. Huitema, E. van Veenendaal, E. Cantatore, L. Schrijnemakers, J. B. P. H. van der Putten, T. C. T. Geuns, M. Beenhakkers, J. B. Giesbers, B.-H. Huisman, E. J. Meijer, E. M. Benito, F. J. Touwslager, A. W. Marsman, B. J. E. van Rens, D. M. de Leeuw, Nat. Mater. 3, 106 (2004).
T. Sekitani, T. Yokota, U. Zschieschang, H. Klauk, S. Bauer, K. Takeuchi, M. Takamiya, T. Sakurai, T. Someya, Science 326, 1516 (2009).
K. Myny, S. Steudel, P. Vicca, M. J. Beenhakkers, N. A. J. M. van Aerle, G. H. Gelinck, J. Genoe, W. Dehaene, P. Heremans, Solid State Electron. 53, 1220 (2009).
C. G. Granqvist, Sol. Energ. Mat. Sol. C. 91, 1529 (2007).
J. Yoon, A. J. Baca, S.-I. Park, P. Elvikis, J. B. Geddes, L. Li, R. H. Kim, J. Xiao, S. Wang, T.-H. Kim, M. J. Motala, B. Y. Ahn, E. B. Duoss, J. A. Lewis, R. G. Nuzzo, P. M. Ferreira, Y. Huang, A. Rockett, J. A. Rogers, Nat. Mater. 7, 907 (2008).
B. Schmied, J. Gunther, C. Klatt, H. Kober, E. Raemaekers, Smart Textiles 60, 67 (2009).
D. Kim, A. Jong-Hyun, K. Hoon-Sik, L. Keon Jae, K. Tae-Ho, Y. Chang-Jae, R. G. Nuzzo, J. A. Rogers, IEEE Electr. Device Lett. 29, 73 (2008).
T. B. Singh, N. S. Sariciftci, Annu. Rev. Mater. Res. 36, 199 (2006).
J. A. Rogers, Z. Bao, K. Baldwin, A. Dodabalapur, B. Crone, V. R. Raju, V. Kuck, H. Katz, K. Amundson, J. Ewing, P. Drzaic, P. Natl. Acad. Sci. U.S.A. 98, 4835 (2001).
S. R. Forrest, Nature 428, 911 (2004).
Z. Bao, J. A. Rogers, H. E. Katz, J. Mater. Chem. 9, 1895 (1999).
H. Sirringhaus, T. Kawase, R. H. Friend, T. Shimoda, M. Inbasekaran, W. Wu, E. P. Woo, Science 290, 2123 (2000).
Y. G. Sun, E. Menard, J. A. Rogers, H. S. Kim, S. Kim, G. Chen, I. Adesida, R. Dettmer, R. Cortez, A. Tewksbury, Appl. Phys. Lett. 88, 3 (2006).
M. C. McAlpine, R. S. Friedman, C. M. Lieber, Proc. IEEE 93, 1357 (2005).
M. Singh, H. M. Haverinen, P. Dhagat, G. E. Jabbour, Adv. Mater. 22, 673 (2010).
P. Peumans, S. Uchida, S. R. Forrest, Nature 425, 158 (2003).
P. Servati, A. Nathan, Proc. IEEE 93, 1257 (2005).
B. J. DeGans, P. Duineveld, U. Schubert, Adv. Mater. 16, 203 (2004).
H. M. Dong, W. W. Carr, J. F. Morris, Phys. Fluids 18, 16 (2006).
T. H. J. van Osch, J. Perelaer, A. W. M. de Laat, U. S. Schubert, Adv. Mater. 20, 343 (2008).
J. E. Yoo, K. S. Lee, A. Garcia, J. Tarver, E. D. Gomez, K. Baldwin, Y. Sun, H. Meng, T. Q. Nguyen, Y. L. Loo, Proc. Natl. Acad. Sci. U.S.A. 107, 5712 (2010).
T. Shimoda, Y. Matsuki, M. Furusawa, T. Aoki, I. Yudasaka, H. Tanaka, H. Iwasawa, D. Wang, M. Miyasaka, Y. Takeuchi, Nature 440, 783 (2006).
Y. Y. Noh, X. Cheng, H. Sirringhaus, J. I. Sohn, M. E. Welland, D. J. Kang, Appl. Phys. Lett. 91, 043109 (2007).
P. Beecher, P. Servati, A. Rozhin, A. Colli, V. Scardaci, S. Pisana, T. Hasan, A. J. Flewitt, J. Robertson, G. W. Hsieh, F. M. Li, A. Nathan, A. C. Ferrari, W. I. Milne, J. Appl. Phys. 102, 043710 (2007).
G. W. Hsieh, F. M. Li, P. Beecher, A. Nathan, Y. L. Wu, B. S. Ong, W. I. Milne, J. Appl. Phys. 106, 7 (2009).
T. Takenobu, N. Miura, S. Y. Lu, H. Okimoto, T. Asano, M. Shiraishi, Y. A. Iwasa, App. Phys. Expr. 2, 025005 (2009).
H. Okimoto, T. Takenobu, K. Yanagi, Y. Miyata, H. Shimotani, H. Kataura, Y. Iwasa, Adv. Mater. 22, 3981 (2010).
H. Okimoto, T. Takenobu, K. Yanagi, Y. Miyata, H. Kataura, T. Asano, Y. Iwasa, J. J. App. Phys. 48, 4 (2009).
M. Ha, Y. Xia, A. A. Green, W. Zhang, M. J. Renn, C. H. Kim, M. C. Hersam, C. D. Frisbie, ACS Nano 4, 4388 (2010).
N. A. Luechinger, E. K. Athanassiou, W. J. Stark, Nanotechnol. 19, 445201 (2008).
A. K. Geim, K. S. Novoselov, Nat. Mater. 6, 183 (2007).
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, A. A. Firsov, Science 306, 666 (2004).
J. C. Charlier, P. C. Eklund, J. Zhu, A. C. Ferrari, Topics Appl. Phys. 111, 673 (2008).
F. Bonaccorso, Z. Sun, T. Hasan, A. C. Ferrari, Nat. Photon. 4, 611 (2010).
Y. M. Lin, C. Dimitrakopoulos, K. A. Jenkins, D. B. Farmer, H. Y. Chiu, A. Grill, P. Avouris, Science 327, 662 (2010).
Z. Sun, T. Hasan, F. Torrisi, D. Popa, G. Privitera, F. Wang, F. Bonaccorso, D. M. Basko, A. C. Ferrari, ACS Nano 4, 803 (2010).
K. S. Novoselov, D. Jiang, F. Schedin, T. J. Booth, V. V. Khotkevich, S. V. Morozov, A. K. Geim, PNAS 102, 10451 (2005).
A. E. Karu, M. Beer, J. Appl. Phys. 37, 2179 (1966).
A. N. Obraztsov, E. A. Obraztsova, A. V. Tyurnina, A. A. Zolotukhin, Carbon 45, 2017 (2007).
K. S. Kim, Y. Zhao, H. Jang, S. Y. Lee, J. M. Kim, K. S. Kim, J.-H. Ahn, P. Kim, J. Y. Choi, B. H. Hong, Nature 457, 706 (2009).
A. Reina, X. Jia, J. Ho, D. Nezich, H. Son, V. Bulovic, M. S. Dresselhaus, J. Kong, Nano Lett. 9, 30 (2009).
X. S. Li, W. W. Cai, J. H. An, S. Kim, J. Nah, D. X. Yang, R. Piner, A. Velamakanni, I. Jung, E. Tutuc, S. K. Banerjee, L. Colombo, R. S. Ruoff, Science 324, 1312 (2009).
S. Bae, H. Kim, Y. Lee, X. Xu, J.-S. Park, Y. Zheng, J. Balakrishnan, T. Lei, H. Ri Kim, Y. I. Song, Y. J. Kim, K. S. Kim, B. Ozyilmaz, J.-H. Ahn, B. H. Hong, S. Iijima, Nat. Nano. 5, 574 (2010).
C. Berger, Z. M. Song, X. B. Li, X. S. Wu, N. Brown, C. Naud, D. Mayou, T. B. Li, J. Hass, A. N. Marchenkov, E. H. Conrad, P. N. First, W. A. de Heer, J. Phys. Chem. B 108, 19912 (2006).
E. G. Acheson, US patent 615 (1896).
D. V. Badami, Nature 193, 569 (1962).
K. V. Emtsev, A. Bostwick, K. Horn, J. Jobst, G. L. Kellogg, L. Ley, J. L. McChesney, T. Ohta, S. A. Reshanov, J. Rohrl, E. Rotenberg, A. K. Schmid, D. Waldmann, H. B. Weber, T. Seyller, Nat. Mater. 8, 203 (2009).
C. Oshima, A. Nagashima, J. Phys.: Condens. Matter 9, 1 (1997).
Y. Gamo, A. Nagashima, M. Wakabayashi, M. Terai, C. Oshima, Surf. Sci. 374, 61 (1997).
R. Rosei, M. De Crescenzi, F. Sette, C. Quaresima, A. Savoia, P. Perfetti, Phys. Rev. B 28, 1161 (1983).
P. W. Sutter, J. I. Flege, E. A. Sutter, Nat. Mater. 7, 406 (2008).
Y. Hernandez, V. Nicolosi, M. Lotya, F. M. Blighe, Z. Sun, S. De, I. T. McGovern, B. Holland, M. Byrne, Y. K. Gun'Ko, J. J. Boland, P. Niraj, G. Duesberg, S. Krishnamurthy, R. Goodhue, J. Hutchison, V. Scardaci, A. C. Ferrari, J. N. Coleman, Nat. Nanotech. 3, 563 (2008).
M. Lotya, Y. Hernandez, P. J. King, R. J. Smith, V. Nicolosi, L. S. Karlsson, F. M. Blighe, S. De, Z. Wang, I. T. McGovern, G. S. Duesberg, J. N. Coleman, J. Am. Chem. Soc. 131, 3611 (2009).
C. Valles, C. Drummond, H. Saadaoui, C. A. Furtado, M. He, O. Roubeau, L. Ortolani, M. Monthioux, A. Penicaud, J. Am. Chem. Soc. 130, 15802 (2008).
T. Hasan, F. Torrisi, Z. Sun, D. Popa, V. Nicolosi, G. Privitera, F. Bonaccorso, A. C. Ferrari, Phys. Stat. Sol. B 247, 2953 (2010).
O. M. Marago, P. H. Jones, F. Bonaccorso, V. Scardaci, P. G. Gucciardi, A. G. Rozhin, A. C. Ferrari, ACS Nano 4, 7515 (2010).
A. A. Green, M. C. Hersam, Nano Lett. 9, 4031 (2009).
X. L. Li, X. R. Wang, L. Zhang, S. W. Lee, H. J. Dai, Science 319, 1229 (2008).
S. Stankovich, R. D. Piner, S. T. Nguyen, R. S. Ruoff, Carbon 44, 3342 (2006).
W. S. Hummers, R. E. Offeman, J. Am. Chem. Soc. 80, 1339 (1958).
B. C. Brodie, Ann. Chim. Phys. 59, 466 (1860).
L. Staudenmaier, Ber. Deut. Chem. Ges. 31, 1481 (1898).
C. Mattevi, G. Eda, S. Agnoli, S. Miller, K. A. Mkhoyan, O. Celik, D. Mastrogiovanni, G. Granozzi, E. Garfunkel, M. Chhowalla, Adv. Funct. Mater. 29, 2577 (2009).
W. W. Cai, R. D. Piner, F. J. Stadermann, S. Park, M. A. Shaibat, Y. Ishii, D. X. Yang, A. Velamakanni, S. J. An, M. Stoller, J. H. An, D. M. Chen, R. S. Ruoff, Science 321, 1815 (2008).
G. Eda, M. Chhowalla, Adv. Mater. 22, 2392 (2010).
J. I. Paredes, S. Villar-Rodil, A. Martinez-Alonso, J. M. D. Tascon, Langmuir 24, 10560 (2008).
H. He, J. Klinowski, M. Forster, A. Lerf, Chem. Phys. Lett. 287, 53 (1998).
G. Eda, G. Fanchini, M. Chhowalla, Nat. Nanotech. 3, 270 (2008).
V. Dua, S. Surwade, S. Ammu, S. Agnihotra, S. Jain, K. Roberts, S. Park, R. Ruoff, S. Manohar, Angew. Chem. Int. Ed. 49, 2154 (2010).
S. Wang, P. K. Ang, Z. Wang, A. L. L. Tang, J. T. L. Thong, K. P. Loh, Nano Lett. 10, 92 (2009).
B. K. Park, D. Kim, S. Jeong, J. Moon, J. S. Kim, Thin Solid Films 515, 7706 (2007).
N. Reis, B. Derby, MRS Symp. Proc. 624, 65 (2000).
D. Jang, D. Kim, J. Moon, Langmuir 25, 2629 (2009).
J. E. Fromm, IBM J. Res. Dev. 28, 322 (1984).
P. G. De Gennes, Rev. Mod. Phys. 57, 827 (1985).
E. G. Shafrin, W. A. Zisman, J. Phys. Chem. 64, 519 (1960).
J. Israelachvili, Intermolecular and Surface Forces, Academic Press, New York (1991).
B. Derby, N. Reis, MRS Bull. 28, 815 (2003).
J. S. Park, J. P. Kim, C. Song, M. Lee, Displays 31, 164 (2010).
R. D. Deegan, O. Bakajin, T. F. Dupont, G. Huber, S. R. Nagel, T. A. Witten, Nature 389, 827 (1997).
R. C. Osthoff, S. W. Kantor, Organosilazane Compounds, John Wiley & Sons, Inc. (1997).
D. R. Lide, Handbook of Chemistry and Physics, 86th ed., CRC Press Inc., Boca Raton, FL (2005).
K. F. Mak, M. Y. Sfeir, Y. Wu, C. H. Lui, J. A. Misewich, T. F. Heinz, Phys. Rev. Lett. 101, 196405 (2008).
V. G. Kravets, A. N. Grigorenko, R. R. Nair, P. Blake, S. Anissimova, K. S. Novoselov, A. K. Geim, Phys. Rev. B 81, 155413 (2010).
R. R. Nair, P. Blake, A. N. Grigorenko, K. S. Novoselov, T. J. Booth, T. Stauber, N. M. R. Peres, A. K. Geim, Science 320, 1308 (2008).
C. Casiraghi, A. Hartschuh, E. Lidorikis, H. Qian, H. Harutyunyan, T. Gokus, K. S. Novoselov, A. C. Ferrari, Nano Lett. 7, 2711 (2007).
J. C. Meyer, A. K. Geim, M. I. Katsnelson, K. S. Novoselov, T. J. Booth, S. Roth, Nature 446, 60 (2007).
J. C. Meyer, A. K. Geim, M. I. Katsnelson, K. S. Novoselov, D. Obergfell, S. Roth, C. Girit, A. Zettl, Solid State Commun. 143, 101 (2007).
A. C. Ferrari, J. C. Meyer, V. Scardaci, C. Casiraghi, M. Lazzeri, F. Mauri, S. Piscanec, D. Jiang, K. S. Novoselov, S. Roth, A. K. Geim, Phys. Rev. Lett. 97, 4 (2006).
U. Khan, A. O'Neill, M. Lotya, S. De, J. N. Coleman, Small 6, 864 (2010).
C. M. Hansen, Hansen Solubility Parameters: A User's Handbook, CRC Press Inc., Boca Raton, FL (2007).
S. D. Bergin, V. Nicolosi, P. V. Streich, S. Giordani, Z. Sun, A. H. Windle, P. Ryan, N. P. P. Niraj, Z.-T. Wang, L. Carpenter, W. J. Blau, J. J. Boland, J. P. Hamilton, J. N. Coleman, Adv. Mater. 20, 1876 (2008).
M. Lotya, P. J. King, U. Khan, S. De, J. N. Coleman, ACS Nano 4, 3155 (2010).
J. W. Williams, K. E. Van Holde, R. L. Baldwin, H. Fujita, Chem. Rev. 58, 715 (1958).
P. Schuck, Biophys. J. 78, 1606 (2000).
T. Svedberg, K. O. Pedersen, The Ultracentrifuge, Oxford University Press, London (1940).
A. C. Ferrari, J. Robertson, Phys. Rev. B 61, 14095 (2000).
F. Tuinstra, J. L. Koenig, J. Chem. Phys. 53, 1126 (1970).
S. Piscanec, M. Lazzeri, F. Mauri, A. C. Ferrari, J. Robertson, Phys. Rev. Lett. 93, 4 (2004).
C. Casiraghi, A. Hartschuh, H. Qian, S. Piscanec, C. Georgi, A. Fasoli, K. S. Novoselov, D. M. Basko, A. C. Ferrari, Nano Lett. 9, 1433 (2009).
A. C. Ferrari, J. Robertson, Phys. Rev. B 64, 13 (2001).
L. G. Cancado, A. Jorio, E. H. Ferreira, F. Stavale, C. A. Achete, R. B. Capaz, M. V. O. Moutinho, A. Lombardo, T. S. Kulmala, A. C. Ferrari, Nano Lett. 11, 3190 (2011).
A. C. Ferrari, S. E. Rodil, J. Robertson, Phys. Rev. B 67, 155306 (2003).
A. C. Ferrari, Surf. Coat. Tech. 180-181, 190 (2004).
D. M. Basko, S. Piscanec, A. C. Ferrari, Phys. Rev. B 80, 165413 (2009).
A. Das, S. Pisana, B. Chakraborty, S. Piscanec, S. K. Saha, U. V. Waghmare, K. S. Novoselov, H. R. Krishnamurthy, A. K. Geim, A. C. Ferrari, A. K. Sood, Nat. Nano. 3, 210 (2008).
S. Pisana, M. Lazzeri, C. Casiraghi, K. S. Novoselov, A. K. Geim, A. C. Ferrari, F. Mauri, Nat. Mater. 6, 198 (2007).
B. H. Kaye, Powder Mixing, Chapman & Hall, London (1997).
G. W. Kauffman, P. C. Jurs, J. Chem. Inf. Comp. Sci. 41, 408 (2001).
T. Young, Philos. T. R. Soc. Lon. 95, 65 (1805).
E. G. Shafrin, W. A. Zisman, J. Phys. Chem. 71, 1309 (1967).
R. R. Thomas, F. B. Kaufman, J. T. Kirleis, R. A. Belsky, J. Electrochem. Soc. 143, 643 (1996).
W. B. Glendinning, J. N. Helbert, Handbook of VLSI Microlithography: Principles, Technology, and Applications, Noyes, New Jersey (1991).
M. H. Ghatee, L. Pakdel, Fluid Phase Equilibr. 234, 101 (2005).
A. Marmur, Langmuir 19, 8343 (2003).
P. C. Duineveld, J. Fluid Mech. 477, 175 (2003).
S. Gamerith, A. Klug, H. Scheiber, U. Scherf, E. Moderegger, E. J. W. List, Adv. Func. Mater. 17, 3111 (2007).
F. M. Smits, Bell Sys. Tech. Jour. 37, 711 (1958).
L. Hu, D. S. Hecht, G. Gruner, Nano Lett. 4, 2513 (2004).
H. Z. Geng, K. K. Kim, K. P. So, Y. S. Lee, Y. Chang, Y. H. Lee, J. Am. Chem. Soc. 129, 7758 (2007).
S. De, P. J. King, P. E. Lyons, U. Khan, J. N. Coleman, ACS Nano 4, 7064 (2010).
S. De, J. N. Coleman, Small 6, 458 (2009).
S. Kirkpatrick, Rev. Mod. Phys. 45, 574 (1973).
D. Stauffer, A. Aharony, Introduction to Percolation Theory, Taylor & Francis, London (1985).
P. M. Kogut, J. P. Straley, J. Phys. C 12, 2151 (1979).
N. Johner, C. Grimaldi, I. Balberg, P. Ryser, Phys. Rev. B 77, 174204 (2008).
E. M. Doherty, S. De, P. E. Lyons, A. Shmeliov, P. N. Nirmalraj, V. Scardaci, J. Joimel, W. J. Blau, J. J. Boland, J. N. Coleman, Carbon 47, 2466 (2009).
A. B. Kuzmenko, E. van Heumen, F. Carbone, D. van der Marel, Phys. Rev. Lett. 100, 117401 (2008).
B. S. Ong, Y. Wu, P. Liu, S. Gardner, J. Am. Chem. Soc. 126, 3378 (2004).
A. C. Arias, S. E. Ready, R. Lujan, W. S. Wong, K. E. Paul, A. Salleo, M. L. Chabinyc, R. Apte, R. A. Street, Y. Wu, P. Liu, B. Ong, Appl. Phys. Lett. 85, 3304 (2004).
Y. Wu, P. Liu, B. S. Ong, T. Srikumar, N. Zhao, G. Botton, S. Zhu, Appl. Phys. Lett. 86, 142102 (2005).
K. Kaneto, M. Yano, M. Shibao, T. Morita, W. Takashima, Jap. J. App. Phys. 46, 1736 (2007).
T. Morita, V. Singh, S. Oku, S. Nagamatsu, W. Takashima, S. Hayase, K. Kaneto, Jap. J. App. Phys. 49, 04161 (2010).
J. H. Oh, H. W. Lee, S. Mannsfeld, R. M. Stoltenberg, E. Jung, Y. W. Jin, J. M. Kim, J.-B. Yoo, Z. Bao, PNAS 106, 6065 (2009).
H. Sirringhaus, N. Tessler, R. H. Friend, Science 280, 1741 (1998).
Y. J. Song, J. U. Lee, W. H. Jo, Carbon 48, 389 (2010).
Jo, J. Song, J. U. Lee, W. H. Jo, Carbon 48, 389 (2010) . G L Whiting, A C Arias, Appl. Phys. Lett. 95253302G. L. Whiting, A.C. Arias, Appl. Phys. Lett. 95, 253302 (2009) . H Klauk, Organic Electronics. Wiley-VCH: WeinheimH. Klauk, Organic Electronics, Wiley-VCH: Weinheim, (2006); Chapter. Chapter 4. M Chason, P W Brazis, J Zhang, K Kalyanasundaram, D R Gamota, Proc. IEEE 93. IEEE 931348M. Chason, P.W. Brazis, J. Zhang, K. Kalyanasundaram, D. R. Gamota, Proc. IEEE 93, 1348 (2005) . T Kawase, S Moriya, C J Newsome, T Shimoda, Jap. J. App. Phys. 443649T. Kawase, S. Moriya, C. J. Newsome, T. Shimoda, Jap. J. App. Phys. 44, 3649 (2005) . J E Parmer, A C Mayer, B E Hardin, S R Scully, M D Mcgehee, M Heeney, I Mcculloch, Appl. Phys. Lett. 92113309J. E. Parmer, A. C. Mayer, B. E. Hardin, S. R. Scully, M. D. McGehee, M. Heeney, I. McCulloch, Appl. Phys. Lett. 92, 113309 (2008) . J F Padday, Phyl. Trans. R. Soc. Lond. A. 269265J. F. Padday, Phyl. Trans. R. Soc. Lond. A 269, 265 (1972)
{'fraction_non_alphanumeric': 0.09521181178418656, 'fraction_numerical': 0.059672201524002266, 'mean_word_length': 3.425266300291743, 'pattern_counts': {'":': 0, '<': 13, '<?xml version=': 0, '>': 6, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 2, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'We demonstrate ink-jet printing as a viable method for large area fabrication of graphene devices. We produce a graphene-based ink by liquid phase exfoliation of graphite in N-Methylpyrrolidone. We use it to print thin-film transistors, with mobilities up to∼95cm 2 V −1 s −1 , as well as transparent and conductive patterns, with∼80% transmittance and∼30kΩ/ sheet resistance. This paves the way to all-printed, flexible and transparent graphene devices on arbitrary substrates.', 'arxivid': '1111.4970', 'author': ['F Torrisi \nDepartment of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK\n', 'T Hasan \nDepartment of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK\n', 'W Wu \nDepartment of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK\n', 'Z Sun \nDepartment of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK\n', 'A Lombardo \nDepartment of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK\n', 'T Kulmala \nDepartment of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK\n', 'G W Hshieh \nDepartment of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK\n', 'S J Jung \nDepartment of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK\n', 'F Bonaccorso \nDepartment of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK\n', 'P J Paul \nDepartment of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK\n', 'D P Chu \nDepartment of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK\n', 'A C Ferrari \nDepartment of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK\n'], 'authoraffiliation': ['Department of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK', 'Department of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK', 'Department of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK', 'Department of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK', 'Department of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK', 'Department of 
Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK', 'Department of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK', 'Department of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK', 'Department of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK', 'Department of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK', 'Department of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK', 'Department of Engineering\nUniversity of Cambridge\nCB3 0FACambridgeUK'], 'corpusid': 8624837, 'doi': '10.1021/nn2044609', 'github_urls': [], 'n_tokens_mistral': 26562, 'n_tokens_neox': 21972, 'n_words': 11347, 'pdfsha': '0514dfd83f692117807d206a28984de45fd1c37a', 'pdfurls': ['https://arxiv.org/pdf/1111.4970v1.pdf'], 'title': ['Ink-Jet Printed Graphene Electronics', 'Ink-Jet Printed Graphene Electronics'], 'venue': []}
arxiv
Online Placement of Multi-Component Applications in Edge Computing Environments

Shiqiang Wang, Member, IEEE, Murtaza Zafer, Member, IEEE, Kin K. Leung, Fellow, IEEE

Index Terms—Cloud computing, graph mapping, mobile edge-cloud (MEC), online approximation algorithm, optimization theory

Abstract—Mobile edge computing is a new cloud computing paradigm which makes use of small-sized edge-clouds to provide real-time services to users. These mobile edge-clouds (MECs) are located in close proximity to users, thus enabling users to seamlessly access applications running on MECs. Due to the coexistence of the core (centralized) cloud, users, and one or multiple layers of MECs, an important problem is to decide where (on which computational entity) to place different components of an application. This problem, known as the application or workload placement problem, is notoriously hard; therefore, heuristic algorithms without performance guarantees are generally employed in common practice, which may unknowingly suffer from poor performance compared to the optimal solution. In this paper, we address the application placement problem and focus on developing algorithms with provable performance bounds. We model the user application as an application graph and the physical computing system as a physical graph, with resource demands/availabilities annotated on these graphs. We first consider the placement of a linear application graph and propose an algorithm for finding its optimal solution. Using this result, we then generalize the formulation and obtain online approximation algorithms with polynomial-logarithmic (poly-log) competitive ratio for tree application graph placement. We jointly consider node and link assignment, and incorporate multiple types of computational resources at nodes.

I. INTRODUCTION

Mobile applications relying on cloud computing have become increasingly popular in recent years [1], [2].
Different from traditional standalone applications that run solely on a mobile device, a cloud-based application has one or multiple components running in the cloud, which connect to a component running on the handheld device; together, they constitute an application accessible to the mobile user. Examples of cloud-based mobile applications include map, storage, and video streaming services [3], [4]. They all require high data processing/storage capability that cannot be satisfied on handheld devices alone; thus, it is necessary to run part of the application in the cloud. Traditionally, clouds are located in centralized data-centers. One problem with cloud-based applications is therefore the long-distance communication between the user device and the cloud, which may cause intermittent connectivity and long latency that cannot satisfy the requirements of emerging interactive applications such as real-time face recognition and online gaming [5]. To tackle this issue, the mobile edge-cloud (MEC) has been proposed recently [6], [7]. The idea is to have small cloud-like entities (i.e., MECs) deployed at the edge of communication networks, which can run part or all of the application components. These MECs are located close to user locations, enabling users to have seamless and low-latency access to cloud services. For example, they can co-locate with edge devices such as Wi-Fi access points or cellular base stations (BSs), as shown in Fig. 1, forming a hierarchy together with the centralized cloud and mobile users. The concept of MEC is also known as cloudlet [8], fog computing [9], follow me cloud [10], and small cell cloud [11]. Although MECs are promising, there are limitations. In particular, they have a significantly lower processing and storage capability compared to the core (centralized) cloud; thus, it is usually infeasible to completely abandon the core cloud and run everything on MECs.
An important problem is therefore to decide where (i.e., whether on the core cloud, MEC, or mobile device) to place different processing and storage components of an application. This is referred to as the application placement problem, which is a non-trivial problem, as illustrated by the example below.

A. Motivating Example

Consider an application which recognizes faces from a real-time video stream captured by the camera of a hand-held device. As shown in Fig. 1, we can decompose this application into one storage component (the database) and three different processing components, including face detection (FD), image processing and feature extraction (IPFE), and face recognition (FR). The FD component finds areas of an image (a frame of the video stream) that contain faces. This part of the image is sent to IPFE for further processing. The main job of IPFE is to filter out noise in the image and extract useful features for recognizing the person from its face. These features are sent to FR for matching with a large set of known features of different persons' faces stored in the database. Fig. 1 shows one possible placement of FD, IPFE, FR, and the database onto the hierarchical cloud architecture. This can be a good placement in some cases, but may not be a good placement in other cases. For example, the benefit of running FD on the mobile device instead of the MEC is that it reduces the amount of data that needs to be transferred between the mobile device and the MEC. However, in cases where the mobile device's processing capability is strictly limited but there is a reasonably high bandwidth connection between the mobile device and the MEC, it can be good to place FD on the MEC. Having the database in the core cloud can be beneficial because it can contain a large amount of data infeasible for the MEC to store. In this case, FR should also be in the core cloud because it needs to frequently query the database.
However, if the database is relatively small and has locally generated contents, we may want to place the database and FR onto the MEC instead of the core cloud, as this reduces the backhaul network load. We see that even with this simple application, it is not straightforward to conceptually find the best placement, while many realistic applications such as streaming, multicasting, and data aggregation [12]-[14] are much more complex. We also note that MECs can be attached to devices at different cellular network layers [7], yielding a hierarchical cloud structure with more than three layers. Meanwhile, there usually exist multiple applications that are instantiated at the cloud system over time. All these aspects motivate us to consider the application placement problem in a rigorous optimization framework where applications arrive in an online manner. We abstract the application placement problem as the problem of placing application graphs, which represent application components and their resource demands, onto a physical graph, which represents the computing devices and communication links in the physical system, as shown in Fig. 2. The detailed problem formulation will be presented in Section II.

B. Related Work

So far, research on application placement in MECs has only considered applications with two components, one running on an MEC and the other running on the user [10], [15]-[17]. Multi-component applications that can be deployed across multiple MECs and core cloud(s) have not been considered, whereas such applications widely exist in practice. The multi-component application placement problem has been studied mainly in data-center settings. Because this problem is NP-hard even for simple graphs (as we discuss later), a common practice is to employ heuristic algorithms without performance guarantees [18], [19], which may unknowingly suffer from poor performance compared to the optimal solution.
Only a very limited amount of existing work followed a rigorous theoretical framework from approximation algorithms [20] and competitive analysis [21], and proposed approximation algorithms (i.e., approximately optimal algorithms) with provable approximation/competitive ratios for the application placement problem, in particular when it involves both node and link placements. In [22], the authors proposed an algorithm for minimizing the sum cost while considering load balancing. The algorithm is based on linear program (LP) relaxation, and only allows one node in each application graph to be placed on a particular physical node, thus excluding server resource sharing among different nodes in one application graph. It is shown that the approximation ratio of this algorithm is O(N), where N is the number of nodes in the physical graph; this ratio is trivial, because one would achieve the same approximation ratio when placing the whole application graph onto a single physical node instead of distributing it across the whole physical graph. A theoretical work in [23] proposed an algorithm with N^{O(D)} time-complexity and an approximation ratio of δ = O(D^2 log(ND)) for placing a tree application graph with D levels of nodes onto a physical graph. It uses LP relaxation and its goal is to minimize the sum cost. Based on this algorithm, the authors presented an online algorithm for minimizing the maximum load on each node and link, which is O(δ log N)-competitive when the application lifetimes are equal. The LP formulation in [23] is complex and requires N^{O(D)} variables and constraints. This means that when D is not a constant, the space-complexity (specifying the required memory size of the algorithm) is exponential in D. Another related theoretical work, which proposed an LP-based method for offline placement of paths into trees in data-center networks, was reported in [24].
Here, the application nodes can only be placed onto the leaves of a tree physical graph, and the goal is to minimize link congestion. In our problem, the application nodes are distributed across users, MECs, and the core cloud, so they should not be restricted to the leaves of a tree; the problem formulation in [24] is therefore inapplicable to our scenario. Additionally, [24] only focuses on minimizing link congestion. The load balancing of nodes is not considered as part of the objective; only the capacity limits of nodes are considered. Some other related work focuses on graph partitioning, such as [25] and [26], where the physical graph is defined as a complete graph with edge costs associated with the distance or latency between physical servers. Such an abstraction combines multiple network links into one (abstract) physical edge, which may hide the actual status of individual links along a path. One important aspect to note is that most existing work, including [22], [24], [25], and [26], does not specifically consider the online operation of the algorithms. Although some of these works implicitly claim that one can apply the algorithm repeatedly for each newly arrived application, the competitive ratio of such a procedure is unclear. To the best of our knowledge, [23] is the only work that studied the competitive ratio of the online application placement problem considering both node and link placements.

C. Our Approach

In this paper, we focus on the MEC context and propose algorithms for solving the online application placement problem with provable competitive ratios. Different from [23], our approach is not based on LP relaxation. Instead, our algorithms are built upon a baseline algorithm that provides an optimal solution to the placement of a linear application graph (i.e., an application graph that is a line). This is an important novelty in contrast to [23], where no optimal solution was presented for any scenario.
Many applications expected to run in an MEC environment can be abstracted as hierarchical graphs, and the simplest case of such a hierarchical graph is a line, such as the face recognition example in Section I-A. Therefore, the placement of a linear application graph is an important problem in the context of MECs. Another novelty in our work, compared to [23] and most other approaches based on LP relaxation, is that our solution approach is decomposable into multiple small building blocks. This makes it easy to extend our proposed algorithms to a distributed solution in the future, which would be very beneficial for reducing the amount of necessary control-information exchange among different cloud entities in a distributed cloud environment containing MECs. This decomposable feature also makes it easier to use these algorithms as a sub-procedure for solving a larger problem. It is also worth noting that the analytical methodology we use in this paper is new compared to existing techniques such as LP relaxation; thus, we enhance the set of tools for online optimization. The theoretical analysis in this paper also provides insights into the features and difficulty of the problem, which can guide future practical implementations. In addition, the proposed algorithms themselves are relatively easy to implement in practice.

Figure 3. Mapping with and without cycles. In this example, the path in the application graph is between application node 1 and application node 5.

D. Main Results

We propose non-LP-based approximation algorithms for online application placement in this paper. The general problem of application placement is hard to approximate [23], [24], [27]. Therefore, similar to related work [22]-[26], we make a few simplifications to make the problem tractable. These simplifications and their motivations are described as follows. Throughout this paper, we focus on application and physical graphs that have tree topologies.
This is due to the consideration that a tree application graph models a wide range of MEC applications that involve a hierarchical set of processes (or virtual machines), including streaming, multicasting, and data aggregation applications [12]-[14], such as the exemplar face recognition application presented earlier. For the physical system, we consider tree physical graphs due to the hierarchical nature of the MEC environment (see Fig. 1). We note that the algorithms we propose in this paper also work with several classes of non-tree graphs; an example will be given in Section VI. For ease of presentation, we mainly focus on tree graphs in this paper. In the tree application graph, if we consider any path from the root to a leaf, we only allow those assignments where the application nodes along this path are assigned in their respective order on a sub-path of the physical topology (multiple application nodes may still be placed onto one physical node), thus creating a "cycle-free" placement. Figure 3 illustrates this placement. Let nodes 1 to 5 denote the application nodes along a path in the application-graph topology. The cycle-free placement of this path onto a sub-path of the physical network ensures that the order is preserved (as shown in Fig. 3(b)), whereas the order is not preserved in Fig. 3(c). A cycle-free placement has a clear motivation of avoiding cyclic communication among the application nodes. For example, for the placement in Fig. 3(c), application nodes 2 and 4 are placed on physical node B, while application node 3 is placed on physical node C. In this case, the physical link B-C carries the data of application links 2-3 and 3-4 in a circular fashion. Such traffic can be naturally avoided with a cycle-free mapping (Fig. 3(b)), thus relieving congestion on the communication links.
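The cycle-free condition can be checked mechanically: walk the application path, concatenate the (unique) tree paths between consecutively assigned physical nodes, and verify that no physical node is revisited. The following is a minimal sketch under assumed inputs (an adjacency dict for the tree physical graph and a list of assigned physical nodes, one per application node along the path); the function name `is_cycle_free` and the data layout are illustrative, not from the paper.

```python
from collections import deque

def tree_path(adj, src, dst):
    """Unique path between src and dst in a tree, via BFS back-pointers."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                queue.append(w)
    path, u = [], dst
    while u is not None:
        path.append(u)
        u = parent[u]
    return path[::-1]

def is_cycle_free(adj, assignment):
    """True iff the walk induced by the assignment never revisits a
    physical node (multiple application nodes may share one node)."""
    walk = []
    for n1, n2 in zip(assignment, assignment[1:]):
        for n in tree_path(adj, n1, n2):
            if not walk or walk[-1] != n:   # collapse consecutive repeats
                walk.append(n)
    return len(walk) == len(set(walk))

# Fig. 3 example: physical path A-B-C-D, application nodes 1..5.
adj = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}
print(is_cycle_free(adj, ['A', 'B', 'B', 'C', 'D']))  # Fig. 3(b): True
print(is_cycle_free(adj, ['A', 'B', 'C', 'B', 'D']))  # Fig. 3(c): False
```

In the mapping of Fig. 3(c), the induced walk visits B, then C, then B again, so the check fails; this is exactly the circular traffic on link B-C described above.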
As we will see in the simulations in Section V, the cycle-free constraint still allows the proposed scheme to outperform some other comparable schemes that allow cycles. Further discussion on the approximation ratio associated with the cycle-free restriction is given in Appendix A. In this paper, for the purpose of describing the algorithms, we classify an application node as a junction node in the tree application graph when it has two or more children. These junction nodes may represent data splitting or joining processes for multiple data streams. In some cases, they may have pre-specified placement, because they serve multiple data streams that may be associated with different end-users, and individual data streams may arrive dynamically in an online fashion. Our work first considers cases where the placements of these junction nodes are pre-specified, and then extends the results to the general case where some junction nodes are not placed beforehand. For the aforementioned scenario, we obtain the following main results for the problem of application placement with the goal of load balancing among physical nodes and edges:

1) An optimal offline algorithm for placing a single application graph which is a linear graph, with O(V^3 N^2) time-complexity and O(VN(V+N)) space-complexity, where the application graph has V nodes and the physical graph has N nodes.

2) An online approximation algorithm for placing single or multiple tree application graphs, in which the placements of all junction nodes are pre-specified, i.e., their placements are given. This algorithm has a time-complexity of O(V^3 N^2) and a space-complexity of O(VN(V+N)) for each application graph placement; its competitive ratio is O(log N).

3) An online approximation algorithm for placing single or multiple tree application graphs, in which the placements of some junction nodes are not pre-specified.
This algorithm has a time-complexity of O(V^3 N^{2+H}) and a space-complexity of O(V N^{1+H} (V+N)) for each application graph placement; its competitive ratio is O(log^{1+H} N), where H is the maximum number of junction nodes without a given placement on any single path from the root to a leaf in the application graph. Note that we always have H ≤ D, where D is the depth of the tree application graph. Our work considers multiple types of resources on each physical node, such as CPU, storage, and I/O resources. The proposed algorithms can work with domain constraints, which restrict the set of physical nodes that a particular application node can be assigned to. The exact algorithm for single line placement can also incorporate conflict constraints, where some assignments are not allowed for a pair of adjacent application nodes that are connected by an application edge; such constraints may arise in practice due to security policies, as discussed in [18].

II. PROBLEM FORMULATION

A. Definitions

We consider the placement of application graphs onto a physical graph, where the application graphs represent applications that may arrive in an online manner. In the following, we introduce some notations that will be used in this paper.

Application Graph: An application is abstracted as a graph, in which nodes represent processing/computational modules of the application and edges represent communication demands between the nodes. Each node v ∈ V in the application graph R = (V, E) is associated with parameters that represent the computational resource (of K different types) demands of node v. Similarly, each edge e ∈ E is associated with a communication bandwidth demand. The notation e = (v_1, v_2) denotes that application edge e connects application nodes v_1 and v_2. The application graph R can be either a directed or an undirected graph.
If it is a directed graph, the direction of edges specifies the direction of data communication; if it is an undirected graph, data communication can occur in either direction along application edges.

Physical Graph: The physical computing system is also abstracted as a graph, with nodes denoting computing devices and edges denoting communication links between nodes. Each node n ∈ N in the physical graph Y = (N, L) has K different types of computational resources, and each edge l ∈ L has communication resource. A physical node can also represent a network device, such as a router or switch, with zero computational resource. We use the notation l = (n_1, n_2) to denote that physical link l connects physical nodes n_1 and n_2. Similar to the application graph, the physical graph can be either directed or undirected, depending on whether the physical links are bidirectional (i.e., communication in both directions shares the same link) or single-directional (i.e., communication in each direction has a separate link). Because we consider multiple application graphs in this paper, we denote the tree application graph for the ith application arrival as R(i) = (V(i), E(i)). Throughout this paper, we define V = |V|, E = |E|, N = |N|, and L = |L|, where |·| denotes the number of elements in the corresponding set. We consider undirected application and physical graphs in the problem formulation, which means that data can flow in any direction on an edge, but the proposed algorithms can be easily extended to many types of directed graphs. For example, when the tree application graph is directed and the tree physical graph is undirected, we can merge the two application edges that share the same end nodes in different directions into one edge, and focus on the merged undirected application graph for the purpose of finding the optimal placement.
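As a concrete reading of the notation above, the two graphs can be held in small container types. This is only an illustrative sketch; the class and field names, and the choice of K = 2 resource types, are assumptions for the example, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class ApplicationGraph:
    # R = (V, E): node v -> list of K resource demands; edge (v1, v2) -> bandwidth demand
    demands: dict
    bandwidth: dict

@dataclass
class PhysicalGraph:
    # Y = (N, L): node n -> list of K resource capacities; link (n1, n2) -> capacity
    capacities: dict
    links: dict

# A toy instance with K = 2 resource types (e.g., CPU and storage):
app = ApplicationGraph(demands={1: [2.0, 1.0], 2: [1.0, 4.0]},
                       bandwidth={(1, 2): 3.0})
phys = PhysicalGraph(capacities={'mec': [4.0, 8.0], 'core': [64.0, 512.0]},
                     links={('mec', 'core'): 10.0})
print(len(app.demands), len(phys.capacities))  # V = 2 application nodes, N = 2 physical nodes
```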
This does not affect the optimality because, for any placement of application nodes, there is a unique path connecting two different application nodes due to the cycle-free constraint and the tree structure of physical graphs. Thus, application edges in both directions connecting the same pair of application nodes have to be placed along the same path on the physical graph.

Costs: For the ith application, the weighted cost (where the weighting factor can serve as a normalization to the total resource capacity) for type k ∈ {1, 2, ..., K} resource of placing v on n is denoted by d_{v→n,k}(i). Similarly, the weighted communication bandwidth cost of assigning e to l is denoted by b_{e→l}(i). The edge cost is also defined for a dummy link l = (n, n), namely a non-existing link that connects the same node, to take into account the additional cost when placing two application nodes on one physical node. It is also worth noting that an application edge may be placed onto multiple physical links that form a path. Remark: The cost of placing the same application node (or edge) onto different physical nodes (or edges) can be different. This is partly because different physical nodes and edges may have different resource capacities, and therefore different weighting factors for cost computation. It can also be due to the domain and conflict constraints mentioned earlier. If some mapping is not allowed, then we can set the corresponding mapping cost to infinity. Hence, our cost definitions allow us to model a wide range of access-control/security policies.

Mapping: A mapping is specified by π : V → N. Because we consider tree physical graphs with the cycle-free restriction, there exists only one path between two nodes in the physical graph, and we use (n_1, n_2) to denote either the link or the path between nodes n_1 and n_2. We use the notation l ∈ (n_1, n_2) to denote that link l is included in path (n_1, n_2).
The placement of nodes automatically determines the placement of edges. In a successive placement of the 1st up to the ith application, each physical node n ∈ N has an aggregated weighted cost of

p_{n,k}(i) = Σ_{j=1}^{i} Σ_{v: π(v)=n} d_{v→n,k}(j),   (1)

where the second sum is over all v that are mapped to n. Equation (1) gives the total cost of type k resource requested by all application nodes that are placed on node n, up to the ith application. Similarly, each physical edge l ∈ L has an aggregated weighted cost of

q_l(i) = Σ_{j=1}^{i} Σ_{e=(v_1,v_2): l ∈ (π(v_1),π(v_2))} b_{e→l}(j),   (2)

where the second sum is over all application edges e = (v_1, v_2) for which the path between the physical nodes π(v_1) and π(v_2) (onto which v_1 and v_2 are respectively mapped) includes the link l.

B. Objective Function

The optimization objective in this paper is load balancing, for which the objective function is defined as

min_π max { max_{k,n} p_{n,k}(M) ; max_l q_l(M) },   (3)

where M is the total number of applications (application graphs). Equation (3) aims to minimize the maximum weighted cost on each physical node and link, ensuring that no single element gets overloaded and becomes a point of failure, which is important especially in the presence of bursty traffic. Such an objective is widely used in the literature [28], [29]. Remark: While we choose the objective function (3) in this paper, we do realize that there can be other objectives as well, such as minimizing the total resource consumption. We note that the exact algorithm for the placement of a single linear application graph can be generalized to a wide class of other objective functions, as will be discussed in Section III-E. For simplicity, we restrict our attention to the objective function in (3) in most parts of our discussion.
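Equations (1)-(3) translate directly into a load-accumulation routine. The sketch below assumes a single application (i = M = 1), a tree physical graph given as an adjacency dict, and one simplification not in the paper: the edge cost b_{e→l} is taken to be the same for every link l on the path, so a single number per application edge suffices. All names are illustrative.

```python
from collections import defaultdict, deque

def tree_path(adj, src, dst):
    """Unique path between two nodes of a tree (BFS back-pointers)."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                queue.append(w)
    path, u = [], dst
    while u is not None:
        path.append(u)
        u = parent[u]
    return path[::-1]

def bottleneck_cost(adj, pi, d, b):
    """Objective (3) for one application: max aggregated weighted cost
    over physical nodes (Eq. (1)) and links (Eq. (2)).
    pi: v -> n;  d: (v, n, k) -> node cost;  b: (v1, v2) -> edge cost."""
    p = defaultdict(float)   # (n, k) -> node load
    q = defaultdict(float)   # frozenset({n1, n2}) -> link load
    for (v, n, k), cost in d.items():
        if pi[v] == n:
            p[(n, k)] += cost
    for (v1, v2), cost in b.items():
        nodes = tree_path(adj, pi[v1], pi[v2])
        for l in zip(nodes, nodes[1:]):          # links along the unique path
            q[frozenset(l)] += cost
    return max(list(p.values()) + list(q.values()), default=0.0)

adj = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}
pi = {1: 'A', 2: 'C'}
d = {(1, 'A', 0): 1.0, (2, 'C', 0): 2.0}
b = {(1, 2): 3.0}
print(bottleneck_cost(adj, pi, d, b))  # links A-B and B-C each carry 3.0 -> 3.0
```

In this toy instance the bottleneck is the bandwidth cost on the two-hop path, not either node load, which is exactly the kind of element that objective (3) guards against overloading.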
A Note on Capacity Limit: For simplicity, we do not impose capacity constraints on physical nodes and links in the optimization problem defined in (3), because even without the capacity constraint, the problem is very hard, as we will see later in this paper. However, because the resource demand of each application node and link is specified in every application graph, the total resource consumption at a particular physical node/link can be calculated by summing up the resource demands of the application nodes/links that are placed on it. Therefore, an algorithm can easily check within polynomial time whether the current placement violates the capacity limits. If such a violation occurs, it can simply reject the newly arrived application graph. In most practical cases, the costs of node and link placements should be defined as proportional to the resource occupation when performing such placement, with weights inversely proportional to the capacity of the particular type of resource. With such a definition, the objective function (3) essentially tries to place as many application graphs as possible without increasing the maximum resource occupation (normalized by the resource capacity) among all physical nodes and links. Thus, the placement result should utilize the available resources reasonably well. A more rigorous analysis of the impact of capacity limits is left as future work.

III. BASIC ASSIGNMENT UNIT: SINGLE LINEAR APPLICATION GRAPH PLACEMENT

We first consider the assignment of a single linear application graph (i.e., the application nodes are connected in a line), where the goal is to find the best placement of the application nodes onto a path in the tree physical graph under the cycle-free constraint (see Fig. 3). The solution to this problem forms the building block of other, more sophisticated algorithms presented later. As discussed next, we develop an algorithm that can find the optimal solution to this problem.
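The polynomial-time capacity check described in the note above reduces to comparing accumulated loads against capacities. A minimal sketch follows; the function name and data layout are assumptions for illustration.

```python
def violates_capacity(node_load, link_load, node_cap, link_cap):
    """Return True if any physical node or link exceeds its capacity,
    in which case the newly arrived application graph is rejected."""
    if any(node_load.get(n, 0.0) > cap for n, cap in node_cap.items()):
        return True
    return any(link_load.get(l, 0.0) > cap for l, cap in link_cap.items())

node_cap = {'mec': 4.0, 'core': 64.0}
link_cap = {('mec', 'core'): 10.0}
print(violates_capacity({'mec': 3.0}, {('mec', 'core'): 2.0}, node_cap, link_cap))  # False
print(violates_capacity({'mec': 5.0}, {}, node_cap, link_cap))                      # True: MEC overloaded
```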
We omit the application index i in this section because we focus on a single application, i.e., M = 1, here.

A. Sub-Problem Formulation

Without loss of generality, we assume that V and N are indexed sets, and we use v to interchangeably denote elements and indices of application nodes in V, and n to interchangeably denote elements and indices of physical nodes in N. This index (starting from 1 for the root node) is determined by the topology of the graph. In particular, it can be determined via a breadth-first or depth-first indexing on the tree graph (note that linear graphs are a special type of tree graphs). From this it follows that if n_1 is a parent of n_2, then we must have n_1 < n_2. The same holds for the application nodes V. With this setting, the edge cost can be combined together with the cycle-free constraint into a single definition of pairwise costs. The weighted pairwise cost of placing v−1 on n_1 and v on n_2 is denoted by c_{(v−1,v)→(n_1,n_2)}, and it takes the following values for v ≥ 2:

- If the path from n_1 to n_2 traverses some n < n_1, in which case the cycle-free assumption is violated, then c_{(v−1,v)→(n_1,n_2)} = ∞.

- Otherwise,

c_{(v−1,v)→(n_1,n_2)} = max_{l ∈ (n_1,n_2)} b_{(v−1,v)→l}.   (4)

The maximum operator in (4) follows from the fact that, in the single line placement, at most one application edge can be placed onto a physical link. Also recall that the edge cost definition incorporates dummy links such as l = (n, n); thus, there always exists l ∈ (n_1, n_2) even if n_1 = n_2. Then, the optimization problem (3) with M = 1 becomes

min_π max { max_{k,n} Σ_{v: π(v)=n} d_{v→n,k} ; max_{(v−1,v) ∈ E} c_{(v−1,v)→(π(v−1),π(v))} }.   (5)

The last maximum operator in (5) takes the maximum among all application edges (rather than physical links), because when combined with the maximum in (4), it essentially computes the maximum among all physical links that are used for data transmission under the mapping π. B.
Decomposing the Objective Function

In this subsection, we decompose the objective function in (5) to obtain an iterative solution. Note that the objective function (5) already incorporates all the constraints, as discussed earlier. Hence, we only need to focus on the objective function itself. When considering only a subset of application nodes 1, 2, ..., v_1 ≤ V, for a given mapping π, the value of the objective function for this subset of application nodes is

J_\pi(v_1) = \max\left\{ \max_{k,n} \sum_{v \le v_1:\pi(v)=n} d_{v\to n,k};\; \max_{(v-1,v)\in E,\, v \le v_1} c_{(v-1,v)\to(\pi(v-1),\pi(v))} \right\}.  (6)

Compared with (5), the only difference in (6) is that we consider the first v_1 application nodes and the mapping π is assumed to be given. The optimal cost for application nodes 1, 2, ..., v_1 ≤ V is then

J_{\pi^*}(v_1) = \min_\pi J_\pi(v_1),  (7)

where π^* denotes the optimal mapping.

Proposition 1. (Decomposition of the Objective Function): Let J_{\pi^*|\pi(v_1)}(v_1) denote the optimal cost under the condition that π(v_1) is given, i.e., J_{\pi^*|\pi(v_1)}(v_1) = \min_{\pi(1),...,\pi(v_1-1)} J_\pi(v_1) with given π(v_1). When π(v_1) = π(v_1 − 1) = ... = π(v_s) > π(v_s − 1) ≥ π(v_s − 2) ≥ ... ≥ π(1), where 1 ≤ v_s ≤ v_1, which means that v_s is mapped to a different physical node from v_s − 1 and nodes v_s, ..., v_1 are mapped onto the same physical node (note that when v_s = 1, the node v_s − 1 does not exist, which means that all nodes 1, ..., v_1 are placed onto the same physical node; for convenience, we define J_\pi(0) = 0), then we have

J_{\pi^*|\pi(v_1)}(v_1) = \min_{v_s=1,...,v_1} \min_{\pi(v_s-1)} \max\left\{ J_{\pi^*|\pi(v_s-1)}(v_s-1);\; \max_{k=1,...,K} \sum_{v=v_s}^{v_1} d_{v\to\pi(v_1),k};\; \max_{(v-1,v)\in E,\, v_s \le v \le v_1} c_{(v-1,v)\to(\pi(v-1),\pi(v))} \right\}.  (8)

The optimal mapping for v_1 can be found by

J_{\pi^*}(v_1) = \min_{\pi(v_1)} J_{\pi^*|\pi(v_1)}(v_1).  (9)

Proof. Because π(v_s) = π(v_s + 1) = ... = π(v_1), we have

J_\pi(v_1) = \max\left\{ J_\pi(v_s-1);\; \max_{k=1,...,K} \sum_{v=v_s}^{v_1} d_{v\to\pi(v_1),k};\; \max_{(v-1,v)\in E,\, v_s \le v \le v_1} c_{(v-1,v)\to(\pi(v-1),\pi(v))} \right\}.
(10)

The three terms in the maximum operation in (10) respectively correspond to: 1) the costs at the physical nodes and edges that application nodes 1, ..., v_s − 1 (and their connecting edges) are mapped to, 2) the cost at the physical node that v_s, ..., v_1 are mapped to, and 3) the pairwise costs for connecting v_s − 1 and v_s, as well as the interconnections of the nodes in v_s, ..., v_1. Taking the maximum of these three terms, we obtain the cost function in (6). In the following, we focus on finding the optimal mapping based on the cost decomposition in (10). We note that the pairwise cost between v_s − 1 and v_s depends on the placements of both v_s − 1 and v_s. Therefore, in order to find the optimal J_\pi(v_1) from J_\pi(v_s − 1), we need to find the minimum cost among all possible placements of v_s − 1 and v_s, provided that nodes v_s, ..., v_1 are mapped onto the same physical node and that v_s and v_s − 1 are mapped onto different physical nodes. For a given v_1, node v_s may be any node that satisfies 1 ≤ v_s ≤ v_1. Therefore, we also need to search through all possible values of v_s. This yields the proposition, where we first find J_{\pi^*|\pi(v_1)}(v_1) as an intermediate step.

Equation (8) is Bellman's equation [30] for problem (5). Using dynamic programming [30], we can solve (5) by iteratively solving (8). In each iteration, the algorithm computes the new costs J_{\pi^*|\pi(v_1)}(v_1) for all possible mappings π(v_1), based on the previously computed costs J_{\pi^*|\pi(v)}(v) with v < v_1. For the final application node v_1 = V, we use (9) to compute the final optimal cost J_{\pi^*}(V) and its corresponding mapping π^*.

C. Optimal Algorithm

The pseudocode of the exact optimal algorithm is shown in Algorithm 1. It computes V · N values J_{\pi^*|\pi(v)=n}(v), and we take the minimum among no more than V · N values in (8). The terms in (8) include the sum or maximum of no more than V values and the maximum of K values.
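As a minimal illustration of the recursion (8)-(9), the following sketch (hypothetical helper names, not from the paper) solves the special case K = 1 with a line-shaped physical graph; mapping recovery and general tree paths are omitted for brevity:

```python
def line_pair_cost(n_prev, n, link_cost):
    """Eq. (4) specialized to a line physical graph with nodes 0..N-1 indexed
    along the line; link_cost[m] is the cost of the link between physical
    nodes m and m+1. Moving to a smaller index would traverse nodes below
    n_prev and violate the cycle-free constraint."""
    if n < n_prev:
        return float('inf')
    if n == n_prev:
        return 0.0                       # dummy link l = (n, n)
    return max(link_cost[m] for m in range(n_prev, n))

def place_linear_minmax(d, link_cost):
    """Dynamic program (8)-(9) for K = 1: place a linear application graph
    with node costs d[v][n] onto a line physical graph and return the
    optimal min-max cost."""
    V, N, INF = len(d), len(d[0]), float('inf')
    J = [[INF] * N for _ in range(V)]    # J[v][n]: best cost with pi(v) = n
    for v1 in range(V):
        for n in range(N):
            best, seg = INF, 0.0
            for vs in range(v1, -1, -1):         # segment v_s..v1 placed on n
                seg += d[vs][n]
                if vs == 0:
                    cand = seg                   # no earlier segment exists
                else:                            # v_s - 1 on a different node
                    cand = min((max(J[vs - 1][np],
                                    line_pair_cost(np, n, link_cost),
                                    seg)
                                for np in range(N) if np != n),
                               default=INF)
                best = min(best, cand)
            J[v1][n] = best
    return min(J[V - 1])                 # Eq. (9): minimize over pi(V)
```

For instance, with d = [[1, 10], [10, 1]] and a single link of cost 3, the sketch splits the two application nodes across the two physical nodes, whereas a link cost of 100 makes co-locating them on one node cheaper.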
Algorithm 1 Placement of a linear application graph onto a tree physical graph
1: Given linear application graph R, tree physical graph Y
2: Given V × N × K matrix D, whose entries represent the weighted type-k node costs d_{v\to n,k}
3: Given (V − 1) × N × N matrix C, whose entries represent the weighted pairwise costs c_{(v-1,v)\to(n_1,n_2)}
4: Define V × N matrix J to keep the costs J_{\pi^*|\pi(v)=n}(v) for each node (v, n) in the auxiliary graph
5: Define V × N × V matrix Π to keep the mapping corresponding to each cost J_{\pi^*|\pi(v)=n}(v) for each node (v, n) in the auxiliary graph
6: for v = 1...V do
7:   for n = 1...N do
8:     Compute J_{\pi^*|\pi(v)=n}(v) from (8); put the result into J and the corresponding mapping into Π
9:   end for
10: end for
11: Compute J_{\pi^*}(V) ← min_n J_{\pi^*|\pi(V)=n}(V)
12: return the final mapping result π^* and the final optimal cost J_{\pi^*}(V)

Because K is a constant in practical systems, we conclude that the time-complexity of this algorithm is O(V^3 N^2). The space-complexity of Algorithm 1 is O(V N (V + N)), which is the memory required for storing the matrices D, C, J, and Π, where K is again regarded as a constant. Also note that the optimality of the result from Algorithm 1 is subject to the cycle-free constraint, and the sequence of nodes is always preserved in each iteration.

D. Example

To illustrate the procedure of the algorithm, we construct an auxiliary graph from the given application and physical graphs, as shown in Fig. 4. Each node (v_1, n_1) in the auxiliary graph represents a possible placement of a particular application node and is associated with the cost value J_{\pi^*|\pi(v_1)=n_1}(v_1), where v_1 is the application node index and n_1 is the physical node index in the auxiliary graph. When computing the cost at a particular node, e.g., the cost J_{\pi^*|\pi(4)=C}(4) at node (4,C) in Fig. 4, the algorithm starts from the "earlier" costs J_{\pi^*|\pi(v_s-1)}(v_s − 1), where the tuple (v_s − 1, π(v_s − 1)) is either (1,A), (1,B), (2,A), (2,B), (3,A), or (3,B). From each of these nodes, the subsequent application nodes (i.e., from v_s to node 4) are all mapped onto physical node C, and we compute the cost for each such "path" with the maximum operations in (8), assuming that the values of v_s − 1 and π(v_s − 1) are given by its originating node in the auxiliary graph. For example, one path can be (2,B)-(3,C)-(4,C), where v_s − 1 = 2 and π(v_s − 1) = B; another path can be (1,A)-(2,C)-(3,C)-(4,C), where v_s − 1 = 1 and π(v_s − 1) = A. Then, the algorithm takes the minimum of the costs over all paths, which corresponds to the minimum operations in (8) and gives J_{\pi^*|\pi(4)=C}(4). In the end, the algorithm searches through all possible mappings for the final application node (node 5 in Fig. 4) and chooses the mapping that results in the minimum cost, which corresponds to the procedure in (9).

E. Extensions

The placement algorithm for a single linear application graph can also be used when the objective function (in the form of (3) with M = 1) is modified to one of the following:

\min_\pi \max\left\{ \max_{k,n} f_{n,k}\Big(\sum_{v:\pi(v)=n} d_{v\to n,k}\Big);\; \max_l g_l\Big(\sum_{e=(v_1,v_2):\, l \in (\pi(v_1),\pi(v_2))} b_{e\to l}\Big) \right\},  (11)

\min_\pi \sum_{k,n} f_{n,k}\Big(\sum_{v:\pi(v)=n} d_{v\to n,k}\Big) + \sum_l g_l\Big(\sum_{e=(v_1,v_2):\, l \in (\pi(v_1),\pi(v_2))} b_{e\to l}\Big),  (12)

where f_{n,k}(·) and g_l(·) are increasing functions with f_{n,k}(0) = 0, g_l(0) = 0, f_{n,k}(∞) = ∞, and g_l(∞) = ∞. The algorithm and its derivation follow the same procedure as discussed above. These alternative objective functions can be useful in scenarios where the goal of optimization is other than min-max. The objective function in (12) will also be used later for solving the online placement problem.

IV. ONLINE PLACEMENT ALGORITHMS FOR TREE APPLICATION GRAPHS

Using the optimal algorithm for single linear application graph placement as a subroutine, we now present algorithms for the generalized case, namely, the placement of an arriving stream of application graphs with tree topology.
We first show that even the offline placement of a single tree is NP-hard. Then, we propose online algorithms that approximate the optimal placement with a provable competitive ratio, by first considering the case where the junction nodes in the application graph have pre-specified placements that are given beforehand, and later relaxing this assumption.

A. Hardness Result

Proposition 2. (NP-hardness): Placement of a tree application graph onto a tree physical graph for the objective function defined in (3), with or without pre-specified junction node placement, is NP-hard.

Proof. To show that the given problem is NP-hard, we show that the NP-hard problem of minimum makespan scheduling on unrelated parallel machines (MMSUPM) [20], which minimizes the maximum load (or job processing time) on each machine, reduces to it. Consider a special case of our problem where the application graph has a star topology with two levels (one root and multiple leaf nodes), and the physical graph is a line with multiple nodes. Assume that the number of resource types in the nodes is K = 1, the application edge resource demand is zero, and the application node resource demands are non-zero. Then, the problem is essentially the MMSUPM problem. It follows that the MMSUPM problem reduces to our problem. In other words, if we could solve our problem in polynomial time, then we could also solve the MMSUPM problem in polynomial time. Because MMSUPM is NP-hard, our problem is also NP-hard.

The above result holds no matter whether the root node (junction node) of the application graph has a pre-specified placement or not.

Figure 5. Example of application graph with given placement of junction nodes. Junction node 2 is placed on physical node B and junction node 5 is placed on physical node E. The algorithm needs to decide the placement of the remaining nodes, subject to the cycle-free constraint.

B.
When All Junction Node Placements Are Given

We first consider tree application graphs for which the placements of the junction nodes are given, and focus on placing the remaining non-junction nodes, which are connected to at most two edges. An example is shown in Fig. 5. Given the placed junction nodes, we call the set of application edges and nodes that form a chain between the placed nodes (excluding each placed node itself, but including each edge that is connected to a placed node) a simple branch, where the notion "simple" is in contrast to the general branch that will be defined in Section IV-C. A simple branch can also be a chain starting from an edge that connects a placed node and ending at a leaf node, such as the nodes and edges within the dashed boundary in the application graph in Fig. 5. Each node in a simple branch is connected to at most two edges.

1) Algorithm Design: We propose an online placement algorithm, where we borrow some ideas from [31]. Different from [31], which focused on routing and job scheduling problems, our work considers more general graph mappings. When an application (represented by a tree application graph) arrives, we split the whole application graph into simple branches and regard each simple branch as an independent application graph. All the nodes with given placement can also be regarded as an application that is placed before the individual simple branches. After placing those nodes, each individual simple branch is placed using the online algorithm that we describe below. In the remainder of this section, by application we refer to the application after splitting, i.e., each application either consists of a simple branch or a set of nodes with given placement.

How to Place Each Simple Branch: While our ultimate goal is to optimize (3), we use an alternative objective function to determine the placement of each newly arrived application i (after splitting).
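The splitting step just described can be sketched as follows, assuming the paper's indexing convention that a parent's index is smaller than its children's (function and variable names are illustrative):

```python
def split_into_branches(parent, placed):
    """Split a tree application graph into branches: the connected components
    of the unplaced nodes, where each component implicitly keeps the edges
    connecting it to placed nodes (the placed nodes themselves are excluded).
    parent[v] is the tree parent of v (parent[root] is None); placed is the
    set of nodes with pre-specified placement."""
    comp = {}                           # unplaced node -> component id
    for v in sorted(parent):            # parents are visited before children
        if v in placed:
            continue
        p = parent[v]
        if p is not None and p not in placed:
            comp[v] = comp[p]           # extend the parent's component
        else:
            comp[v] = v                 # start a new component at v
    branches = {}
    for v, c in comp.items():
        branches.setdefault(c, set()).add(v)
    return list(branches.values())
```

For the graph of Fig. 5, each returned component is a chain between placed junction nodes (or from a placed node down to a leaf), i.e., a simple branch.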
Such an indirect approach provides a performance guarantee with respect to (3) in the long run. We will first introduce the new objective function and then discuss its relationship with the original objective function (3). We define a variable Ĵ as a reference cost. The reference cost may be an estimate of the true optimal cost (defined as in (3)) of the optimal offline placement; we will discuss later how to determine this value. Then, for placing the ith application, we use an objective function of the same form as (12), with f_{n,k}(·) and g_l(·) defined as

f_{n,k}(x) ≜ \exp_\alpha\!\left( \frac{p_{n,k}(i-1) + x}{\hat{J}} \right) - \exp_\alpha\!\left( \frac{p_{n,k}(i-1)}{\hat{J}} \right),  (13a)

g_l(x) ≜ \exp_\alpha\!\left( \frac{q_l(i-1) + x}{\hat{J}} \right) - \exp_\alpha\!\left( \frac{q_l(i-1)}{\hat{J}} \right),  (13b)

subject to the cycle-free placement constraint, where we define \exp_\alpha(y) ≜ \alpha^y and \alpha ≜ 1 + 1/\gamma (γ > 1 is a design parameter).

Why We Use an Alternative Objective Function: The objective function (12) with (13a) and (13b) is the increment of the sum of exponential values of the original costs, given all the previous placements. With this objective function, the performance bound of the algorithm can be shown analytically (see Proposition 3 below). Intuitively, the new objective function (12) serves the following purposes:

- It "guides" the system into a state in which the maximum cost among physical links and nodes is not too high, thus approximating the original objective function (3). This is because when the existing cost at a physical link or node (for a particular resource type k) is high, the incremental cost (following (12)) of placing the new application i on this link or node (for the same resource type k) is also high, due to the fact that \exp_\alpha(y) is convex increasing and due to the cost definitions in (13a) and (13b).
- While (3) only considers the maximum cost, (12) is also related to the sum cost, because we sum up all the exponential cost values at the different physical nodes and links.
This "encourages" a placement of the new application i that consumes few resources (reflected by a low sum cost), thus leaving more available resources for future applications. In contrast, if we used (3) directly for each newly arrived application, the placement might greedily take up too much resource, so that future applications could no longer be placed at a low cost. In practice, we envision that objective functions with a shape similar to (12) can also serve our purpose.

How to Solve It: Because each application either obeys a pre-specified placement or consists of a simple branch, we can use Algorithm 1 with appropriately modified cost functions to find the optimal solution to (12) with (13a) and (13b). For the case of a simple branch having an open edge, such as edge (2,4) in Fig. 5, we connect an application node that has zero resource demand, extending the simple branch to a linear graph so that Algorithm 1 is applicable.

Algorithm 2 Online placement of an application that is either a simple branch or a set of nodes with given placement
1: Given the ith application that is either a set of nodes with given placement or a simple branch
2: Given tree physical graph Y
3: Given p_{n,k}(i − 1), q_l(i − 1), and placement costs
4: Given Ĵ and β
5: if the application is a set of nodes with given placement then
6:   Obtain π_i based on the given placement
7: else if the application is a simple branch then
8:   Extend the simple branch to a linear graph R(i) by connecting zero-resource-demand application nodes to open edges; the placements of these zero-resource-demand application nodes are given
9:   Run Algorithm 1 with objective function (12) with (13a) and (13b), for R(i), to obtain π_i
10: end if
11: if ∃n, k: p_{n,k}(i − 1) + \sum_{v:\pi_i(v)=n} d_{v\to n,k}(i) > βĴ or ∃l: q_l(i − 1) + \sum_{e=(v_1,v_2):\, l \in (\pi_i(v_1),\pi_i(v_2))} b_{e\to l}(i) > βĴ then
12:   return FAIL
13: else
14:   return π_i
15: end if

Algorithm 2 summarizes the above argument as a formal algorithm for each application placement, where π_i
denotes the mapping for the ith application. Define the parameter β = \log_\alpha\!\left( \frac{\gamma (NK + L)}{\gamma - 1} \right); then Algorithm 2 performs the placement as long as the cost on each node and link is not larger than βĴ, and otherwise it returns FAIL. The significance of the parameter β lies in calculating the competitive ratio, i.e., the maximum ratio of the cost resulting from Algorithm 2 to the optimal cost of an equivalent offline placement, as shown below.

Why We Need the Reference Cost Ĵ: The reference cost Ĵ is an input parameter of the objective function (12) and of Algorithm 2, which enables us to show a performance bound for Algorithm 2, as stated in Proposition 3.

Proposition 3. If there exists an offline mapping π^o that considers all M application graphs and brings cost J_{\pi^o}, such that J_{\pi^o} ≤ Ĵ, then Algorithm 2 never fails, i.e., p_{n,k}(M) and q_l(M) from Algorithm 2 never exceed βĴ. The cost J_{\pi^o} is defined as in (3).

Proof. See Appendix B.

Proposition 3 guarantees a bound on the cost resulting from Algorithm 2. We note that the optimal offline mapping π^{o*} produces a cost J_{\pi^{o*}} that is smaller than or equal to the cost of an arbitrary offline mapping. It follows that for any π^o, we have J_{\pi^{o*}} ≤ J_{\pi^o}. This means that if there exists a π^o such that J_{\pi^o} ≤ Ĵ, then we must have J_{\pi^{o*}} ≤ Ĵ. If we could set Ĵ = J_{\pi^{o*}}, then from Proposition 3 we would have max{max_{k,n} p_{n,k}(M); max_l q_l(M)} ≤ βJ_{\pi^{o*}}, which means that the competitive ratio is β.

How to Determine the Reference Cost Ĵ: Because the value of J_{\pi^{o*}} is unknown, we cannot always set Ĵ exactly to J_{\pi^{o*}}. Instead, we need to set Ĵ to an estimate that is not too far from J_{\pi^{o*}}. We achieve this by using the doubling technique, which is widely used in online approximation algorithms. The idea is to double the value of Ĵ every time Algorithm 2 fails.
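The exponential-difference cost (13a), the parameter β defined above, and the doubling rule can be sketched numerically as follows (illustrative names; γ = 2 is just an example value):

```python
import math

GAMMA = 2.0                      # design parameter gamma > 1 (example value)
ALPHA = 1.0 + 1.0 / GAMMA        # alpha = 1 + 1/gamma

def f_node_cost(x, p_prev, J_hat, alpha=ALPHA):
    """Exponential-difference node cost of Eq. (13a): the increase of
    alpha**(p / J_hat) when demand x is added on top of utilization p_prev."""
    return alpha ** ((p_prev + x) / J_hat) - alpha ** (p_prev / J_hat)

def beta(gamma, N, K, L):
    """beta = log_alpha(gamma * (N*K + L) / (gamma - 1)); with L = N - 1 on a
    tree physical graph, this grows as O(log N)."""
    alpha = 1.0 + 1.0 / gamma
    return math.log(gamma * (N * K + L) / (gamma - 1.0), alpha)

def doubling_search(try_place, J0):
    """The doubling technique: retry with a doubled reference cost J_hat until
    the placement succeeds. try_place is assumed to succeed whenever
    J_hat >= J_opt (in the spirit of Proposition 3), so when at least one
    doubling occurs the returned value stays below 2 * J_opt."""
    J_hat = J0
    while not try_place(J_hat):
        J_hat *= 2.0
    return J_hat
```

Note that the incremental cost of the same demand is larger on an already-loaded node, which is exactly the load-balancing pressure described above.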
After each doubling, we ignore all the previous placements when calculating the objective function (12) with (13a) and (13b), i.e., we assume that there is no existing application, and we place the subsequent applications (including the one that failed with the previous value of Ĵ) with the new value of Ĵ. At initialization, the value of Ĵ is set to a reasonably small number Ĵ_0. In Algorithm 3, we summarize the high-level procedure that includes the splitting of the application graph, the calling of Algorithm 2, and the doubling process, for multiple application graphs that arrive over time.

Algorithm 3 High-level procedure for multiple arriving tree application graphs
1: Initialize Ĵ ← Ĵ_0
2: Define index i as the application index, which automatically increases by 1 for each new application (after splitting)
3: Initialize i ← 1
4: Initialize i_0 ← 1
5: loop
6:   if a new application graph has arrived then
7:     Split the application graph into simple branches and a set of nodes with given placement; assume that each of them constitutes an application
8:     for all applications i do
9:       repeat
10:         Call Algorithm 2 for application i with p_{n,k}(i − 1) = max{0, p_{n,k}(i − 1) − p_{n,k}(i_0 − 1)} and q_l(i − 1) = max{0, q_l(i − 1) − q_l(i_0 − 1)}
11:         if Algorithm 2 returns FAIL then
12:           Set Ĵ ← 2Ĵ
13:           Set i_0 ← i
14:         end if
15:       until Algorithm 2 does not return FAIL
16:       Map application i according to π_i resulting from Algorithm 2
17:     end for
18:   end if
19: end loop

2) Complexity and Competitive Ratio: In the following, we discuss the complexity and competitive ratio of Algorithm 3. Because the value of J_{\pi^{o*}} is finite, the doubling procedure in Algorithm 3 contains only finitely many steps. The remaining part of the algorithm mainly consists of calling Algorithm 2, which in turn calls Algorithm 1 for each simple branch.
Because the nodes and links in the simple branches, together with the set of nodes with given placement, add up to the whole application graph, similarly to Algorithm 1, the time-complexity of Algorithm 3 is O(V^3 N^2) for each application graph arrival. Likewise, combining the procedures in Algorithms 1-3, we can see that the space-complexity of Algorithm 3 is O(V N (V + N)) for each application graph arrival, which is of the same order as Algorithm 1. For the competitive ratio, we have the following result.

Proposition 4. (Competitive Ratio): The cost resulting from Algorithm 3 never exceeds 4βJ_{\pi^{o*}}, i.e., the competitive ratio of Algorithm 3 is 4β.

Proof. If Algorithm 2 fails, we know that J_{\pi^{o*}} > Ĵ according to Proposition 3. Hence, by doubling the value of Ĵ each time Algorithm 2 fails, we have Ĵ_f < 2J_{\pi^{o*}}, where Ĵ_f is the final value of Ĵ after placing all M applications. Because we ignore all previous placements and only consider the applications placed after the most recent doubling, we have

\max\left\{ \max_{k,n} \left( p_{n,k}(i) - p_{n,k}(i_0 - 1) \right);\; \max_l \left( q_l(i) - q_l(i_0 - 1) \right) \right\} \le \beta \hat{J}  (14)

for each particular value of Ĵ. When we consider all the placements of the M applications, by summing up (14) over all values of Ĵ, we have

\max\left\{ \max_{k,n} p_{n,k}(M);\; \max_l q_l(M) \right\} \le \left(1 + \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots\right) \beta \hat{J}_f < 2 \left(1 + \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots\right) \beta J_{\pi^{o*}} = 4 \beta J_{\pi^{o*}}.

Hence, the proposition follows.

The variables α, γ, and K are constants, and L = N − 1 because the physical graph is a tree. Hence, the competitive ratio of Algorithm 3 can also be written as O(log N). It is also worth noting that, for each application graph, we can have different tree physical graphs that are extracted from a general physical graph, and the above conclusions still hold.

C. When at Least One Junction Node Placement Is Not Given

In this subsection, we focus on cases where the placements of some or all junction nodes are not given. For such scenarios, we first extend our concept of branches to incorporate unplaced junction nodes.
The basic idea is that each general branch is a largest subset of nodes and edges that are interconnected with each other without including any of the nodes with pre-specified placement; as with our previous definition of simple branches, the subset includes the edges connected to placed nodes. A simple branch (see the definition in Section IV-B) is always a general branch, but a general branch may or may not be a simple branch. Examples of general branches are shown in Fig. 6.

1) Algorithm Design: The main idea behind the algorithm is to combine Algorithm 2 with an enumeration of the possible placements of the unplaced junction nodes. When there is only a constant number of such nodes on any path from the root to a leaf, the algorithm remains polynomial in time-complexity while guaranteeing a poly-logarithmic (poly-log) competitive ratio. To illustrate the intuition, consider the example application graph shown in Fig. 6(a), where nodes 2 and 5 are both initially unplaced. We determine the placement of the unplaced nodes hierarchically, starting with the nodes at the deepest level. For the example in Fig. 6(a), we first determine the placement of node 5, given each possible placement of node 2, and then determine the placement of node 2. Recall that we use the cost function in (12) with (13a) and (13b) to determine the placement of each simple branch when all the junction nodes are placed. We use the same cost function (with slightly modified parameters) for the placement of nodes 2 and 5. However, when determining the placement of node 5, we regard the general branch that includes node 5 (which contains nodes 3, 5, 7, and 8 and the corresponding edges, as shown in Fig. 6(b)) as one single application, i.e., the values of p_{n,k}(i − 1) and q_l(i − 1) in (13a) and (13b) correspond to the resource utilization at nodes and links before placing this whole general branch, and the application i contains all the nodes and edges in this general branch.
Similarly, when determining the placement of node 2, we consider the whole application graph as a single application. It is worth noting that in many cases we may not need to enumerate all possible combinations of the placements of the unplaced junction nodes. For example, in Fig. 6(c), when the placement of node 2 is given, the placements of nodes 5 and 6 impose no additional restrictions upon each other (i.e., the placement of node 5 does not affect where node 6 can be placed, for instance). Hence, the general branches that respectively include node 5 and node 6 can be placed one after the other using the online algorithm. Based on the above examples, we summarize the procedure as Algorithm 4, where we solve the problem recursively and, in each instance of the function Unplaced(v, h), determine the placement of one junction node that has not been placed before. The parameter v is initially set to the top-most unplaced junction node (node 2 in Fig. 6(a)), and h is initially set to H (the maximum number of unplaced junction nodes on any path from the root to a leaf in the application graph). Algorithm 4 can be embedded into Algorithm 3 to handle multiple arriving application graphs and an unknown reference cost Ĵ. The only part of Algorithm 3 that needs to be modified is that it now splits the whole application graph into general branches (rather than simple branches without unplaced junction nodes), and it calls either Algorithm 2 or Algorithm 4 depending on whether there are unplaced junction nodes in the corresponding general branch. When there are such nodes, it calls Unplaced(v, h) with the aforementioned initialization parameters.

2) Complexity and Competitive Ratio: The time-complexity of Algorithm 4, together with its high-level procedure that is a modified version of Algorithm 3, is O(V^3 N^{2+H}) for each application graph arrival, as explained below. Note that H is generally not the total number of unplaced nodes.
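A simplified skeleton of this recursion is sketched below (hypothetical names; unlike the full Algorithm 4, a child's cost here does not depend on the parent's chosen host, which keeps the sketch short):

```python
def unplaced(v, h, N, branch_cost, children):
    """Skeleton of the recursion behind Algorithm 4. v is an unplaced junction
    node, children[v] lists the unplaced junctions one level below, and
    branch_cost(v, n0) stands in for placing the surrounding simple branches
    (via Algorithm 2) with v on physical node n0; h mirrors the paper's depth
    counter. Each level enumerates the N candidate hosts of one junction, so
    with recursion depth H the number of branch_cost evaluations grows as
    O(N^H). Returns (best cost, best host for v)."""
    best_cost, best_host = float('inf'), None
    for n0 in range(N):                  # every host that v can be mapped to
        cost = branch_cost(v, n0)
        for w in children[v]:            # parallel branches are handled
            c, _ = unplaced(w, h - 1, N, branch_cost, children)  # sequentially
            cost = max(cost, c)
        if cost < best_cost:
            best_cost, best_host = cost, n0
    return best_cost, best_host
```

Parallel child junctions are visited in a loop rather than nested enumeration, which is why they do not increase the recursion depth, matching the discussion below.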
Obviously, when H = 0, the time-complexity is the same as in the case where all junction nodes are placed beforehand. When there is only one unplaced junction node (in which case H = 1), Algorithm 4 considers all possible placements for this vertex, of which there are at most N. Hence, its time-complexity becomes N times the time-complexity with all junction nodes placed. When there are multiple unplaced junction nodes, we can see from Algorithm 4 that it only increases its recursion depth when lower-level unplaced junction nodes exist. In other words, parallel general branches (such as the two general branches that respectively include node 5 and node 6 in Fig. 6(c)) do not increase the recursion depth, because the function Unplaced(v, h) for these general branches is called in sequential order. Therefore, the time-complexity depends on the maximum recursion depth, which is H; thus, the overall time-complexity is O(V^3 N^{2+H}). The space-complexity of Algorithm 4 is O(V N^{1+H} (V + N)) for each application graph arrival, because in every recursion, the results for all possible placements of v are stored, and there are at most N such placements for each junction node.

Algorithm 4 Tree-to-tree placement when some junction nodes are not placed
1: function Unplaced(v, h)
2: Given the ith application that is a general branch, tree physical graph Y, Ĵ, and β
3: Given p_{n,k}(i − 1) and q_l(i − 1), the current resource utilization on nodes and links
4: Define Π to keep the currently obtained mappings; its entry π|_{π(v)=n_0}, for all n_0, represents the mapping given that v is mapped to n_0
5: Define p_{n,k}(i)|_{π(v)=n_0} and q_l(i)|_{π(v)=n_0}, for all n_0, as the resource utilization after placing the ith application, given that v is mapped to n_0
6: Initialize p_{n,k}(i)|_{π(v)=n_0} ← p_{n,k}(i − 1) and q_l(i)|_{π(v)=n_0} ← q_l(i − 1) for all n_0
7: for all n_0 that v can be mapped to do
8:   Assume v is placed at n_0
9:   for all general branches that are connected with v do
10:    if the general branch contains unplaced junction nodes then
11:      Find the top-most unplaced vertex v′ within this general branch
12:      Call Unplaced(v′, h − 1) while assuming v is placed at n_0, and with p_{n,k}(i − 1) = p_{n,k}(i)|_{π(v)=n_0} and q_l(i − 1) = q_l(i)|_{π(v)=n_0}
13:    else
14:      (in which case the general branch is a simple branch without unplaced junction nodes) Run Algorithm 2 for this branch
15:    end if
16:    Put the mappings resulting from Unplaced(v′, h − 1) or Algorithm 2 into π|_{π(v)=n_0}
17:    Update p_{n,k}(i)|_{π(v)=n_0} and q_l(i)|_{π(v)=n_0} to incorporate the new mappings
18:  end for
19: end for
20: Find \min_{n_0} \left\{ \sum_{k,n} \left[ \exp_\alpha\!\left( \frac{p_{n,k}(i)|_{\pi(v)=n_0}}{\beta^h \hat{J}} \right) - \exp_\alpha\!\left( \frac{p_{n,k}(i-1)}{\beta^h \hat{J}} \right) \right] + \sum_l \left[ \exp_\alpha\!\left( \frac{q_l(i)|_{\pi(v)=n_0}}{\beta^h \hat{J}} \right) - \exp_\alpha\!\left( \frac{q_l(i-1)}{\beta^h \hat{J}} \right) \right] \right\}, returning the optimal placement of v as n_0^*
21: if h = H and (∃n, k: p_{n,k}(i)|_{π(v)=n_0^*} > β^{1+H} Ĵ or ∃l: q_l(i)|_{π(v)=n_0^*} > β^{1+H} Ĵ) then
22:   return FAIL
23: else
24:   return π|_{π(v)=n_0^*}
25: end if

Regarding the competitive ratio, similarly to Proposition 3, we can obtain the following result.

Proposition 5. If there exists an offline mapping π^o that considers all M application graphs and brings cost J_{\pi^o}, such that J_{\pi^o} ≤ Ĵ, then Algorithm 4 never fails, i.e., p_{n,k}(M) and q_l(M) resulting from Algorithm 4 never exceed β^{1+H} Ĵ.

Proof. When H = 0, the claim is the same as Proposition 3. When H = 1, there is at most one unplaced junction node in each general branch. Because Algorithm 4 operates on each general branch, we can regard it as having only one unplaced junction node when running; in this case, there is no recursive calling of Unplaced(v, h). Recall that v is the top-most unplaced junction node. The function Unplaced(v, h) first fixes the placement of v to a particular physical node n_0 and finds the placement of the remaining nodes excluding v. It then finds the placement of v.
From Proposition 3, we know that when we fix the placement of v, the cost resulting from the algorithm never exceeds βĴ if there exists a mapping π^o|_{π(v)=n_0} (under the constraint that v is placed at n_0) that brings cost J_{\pi^o|_{\pi(v)=n_0}} ≤ Ĵ. To find the placement of v, Algorithm 4 selects the minimum-cost placement from the set of placements obtained for the given placements of v. Reapplying Proposition 3 for the placement of v, by substituting Ĵ with βĴ, we know that the cost from the algorithm never exceeds β^2 Ĵ, provided that there exists a mapping, within the set of mappings produced by the algorithm with given v placements, that has a cost not exceeding βĴ. Such a mapping exists and can be produced by the algorithm if there exists an offline mapping π^o (and thus a mapping π^o|_{π(v)=n_0} for a particular placement of v) that brings cost J_{\pi^o} with J_{\pi^o} ≤ Ĵ. Hence, the claim follows for H = 1. When H > 1, because we decrease the value of h by one every time we recursively call Unplaced(v, h), the same propagation principle of the bound applies as in the case H = 1. Hence, the claim follows.

Using the same reasoning as for Proposition 4, it follows that Algorithm 4, in combination with the extended version of Algorithm 3, is 4β^{1+H}-competitive; because β = O(log N), this competitive ratio is O(log^{1+H} N).

V. NUMERICAL EVALUATION

We compare the proposed algorithm against two heuristic approaches via simulation. The first approach greedily minimizes the maximum resource utilization (according to (3)) for the placement of every newly arrived application graph. The second approach is the Vineyard algorithm proposed in [22], where load balancing is also considered as a main goal in application placement. Both the greedy and Vineyard algorithms require an optimization problem to be solved as a subroutine for the placement of every newly arrived application. This optimization problem can be expressed as a mixed-integer linear program (MILP).
MILPs are generally not solvable in polynomial time, so an LP-relaxation and rounding procedure is used in [22]. In this paper, to retain generality and to eliminate inaccuracies caused by heuristic rounding mechanisms (there are multiple ways of rounding that one could use), we solve the MILP subroutines directly using CPLEX [32]. This gives an exact solution to the subroutine; thus, the greedy and Vineyard algorithms in the simulation may perform better than they would in reality, and we are conservative in showing the effectiveness of the proposed algorithm. Note that these MILP solutions do not represent the optimal offline solution, because an optimal offline solution needs to consider all application graphs at the same time, whereas the methods that we compare against only solve the MILP subroutine for each newly arrived application. Obtaining the optimal offline solution requires excessive computational time that makes the simulation infeasible; hence, we do not consider it here. We also do not compare against the theoretical approach in [23] via simulation, because that approach is not straightforward to implement. However, we have outlined the benefits of our approach over [23] in Section I-C, and some further discussion will be given in Section VI. To take into account possible negative impacts of the cycle-free restriction in the proposed algorithm, we do not impose the cycle-free constraint in the baseline greedy and Vineyard algorithms. However, for a fair comparison, we do require in the baseline approaches that, when the placement of a junction node is given, the children of this junction node can only be placed onto the physical node on which the junction node has been placed, or onto the children of this physical node. Because MEC is a very new concept that has not been practically deployed at a reasonably large scale, we currently do not have real topologies available to us for evaluation.
Therefore, similar to [22], we consider synthetic tree application and physical graphs. The number of application nodes for each application is randomly chosen from the interval [3, 10], and the number of physical nodes ranges from 2 to 50. This simulation setting is similar to that in [22]. We use a sequential approach to assign connections between nodes. Namely, we first label the nodes with indices. Then, we start from the lowest index and connect each node m to the nodes with indices 1, 2, ..., m − 1: node m connects to node m − 1 with probability 0.7, and connects to each of nodes 1, 2, ..., m − 2 with probability 0.3/(m − 2). We restrict the application root node to be placed onto the physical root node, considering that some portion of processing has to be performed on the core cloud, possibly due to the constraint of database location (see Fig. 1). We consider 100 application arrivals and simulate with 100 different random seeds to obtain the overall performance. The placement cost of a single node or link is uniformly distributed between 0 and a maximum cost. For the root application node, the cost is divided by a factor of 10. We set the design parameter γ = 2. Figures 7 and 8 show the maximum resource utilization, i.e., the value of (3), averaged over results from different random seeds^8, respectively with and without pre-specified placement of junction nodes. (^8 We only consider those random seeds that produce a maximum resource utilization smaller than one, because otherwise the physical network is considered overloaded after hosting 100 applications. We also observed in the simulations that the number of accepted applications is similar when using different methods; the relative degradation in the number of accepted applications of the proposed method compared with the other methods never exceeds 2%.) In Figs. 7(a) and 8(a), the number of physical nodes is randomly chosen from the interval [2, 50]; and in Figs.
7(b) and 8(b), the maximum cost per application node/link is set to 0.015. It is evident that the proposed method outperforms the methods in comparison. The resource utilization tends to converge when the number of physical nodes is large, because of the fixed root placement. As mentioned earlier, practical versions of the greedy and Vineyard algorithms that use LP-relaxation and rounding may perform worse than our current results show. We now explain why the proposed method outperforms the other methods. We first note that what is unique in the proposed algorithm is that it uses a non-linear objective function for placing each new application, whereas the baseline methods and most other existing approaches use linear objective functions. The exponential-difference cost (12) with (13a) and (13b), used in the proposed algorithm for the placement of each newly arrived application graph, aims at both load balancing and reducing the sum resource utilization. It leaves more space for applications that arrive in the future. Therefore, it outperforms the greedy approach, which does not take future arrivals into account. The Vineyard approach does not strongly enforce load balancing unless operating close to the resource saturation point, due to the characteristics of the objective function used in each subroutine of application arrival. When comparing Fig. 7 to Fig. 8, we find that the performance gaps between the proposed method and the other methods are larger when the junction nodes are not placed beforehand. This is mainly because the judgment of whether Algorithm 4 has failed is based on the factor β^{1+H}, while for Algorithm 2 it is based on β. It follows that Algorithm 4 is less likely to fail when H > 0. In this case, the value of Ĵ is generally set to a smaller value by the doubling procedure in Algorithm 3. A smaller value of Ĵ also results in a larger change in the exponential-difference cost when the amount of existing load changes^9.
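To make the sensitivity to Ĵ concrete, assume the exponential-difference cost takes the form Σ_r (α^{(z_r + w_r)/Ĵ} − α^{z_r/Ĵ}), a simplification of (12) with (13a)/(13b); α, the loads z_r, and the increments w_r below are illustrative placeholders:

```python
def exp_diff_cost(loads, increments, alpha, J_hat):
    """Sum over elements of alpha^((z+w)/J_hat) - alpha^(z/J_hat):
    the increase of an exponential function of normalized load."""
    return sum(alpha ** ((z + w) / J_hat) - alpha ** (z / J_hat)
               for z, w in zip(loads, increments))

# Placing one unit of work on a lightly loaded element is cheaper than
# placing it on a heavily loaded one, which drives load balancing.
light = exp_diff_cost([0.0, 2.0], [1.0, 0.0], alpha=2.0, J_hat=1.0)
heavy = exp_diff_cost([0.0, 2.0], [0.0, 1.0], alpha=2.0, J_hat=1.0)

# Shrinking J_hat amplifies the same comparison, making the objective
# more sensitive to the existing load.
light_s = exp_diff_cost([0.0, 2.0], [1.0, 0.0], alpha=2.0, J_hat=0.5)
heavy_s = exp_diff_cost([0.0, 2.0], [0.0, 1.0], alpha=2.0, J_hat=0.5)
```

In this toy instance the cost ratio between the heavily and lightly loaded choice grows as Ĵ is halved, so a smaller Ĵ penalizes loaded elements more strongly.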
This brings better load balancing on average (but not in the worst case; the worst-case result is still bounded by the bounds derived earlier in this paper).

VI. DISCUSSION

Is the Tree Assumption Needed? For ease of presentation, and considering the practical relevance to MEC applications, we have focused on tree-to-tree placements in this paper. However, the tree assumption is not absolutely necessary for our algorithms to be applicable. For example, consider the placement problem shown in Fig. 9, where the application graph contains two junction nodes^10 (nodes 1 and 2) and multiple simple branches (respectively including nodes 3, 4, 5, and 6) between these two junction nodes. Such an application graph is common in applications where processing can be parallelized at some stage. The physical graph shown in Fig. 9(b) still has a hierarchy, but we now have connections between all pairs of nodes at two adjacent levels. Obviously, neither the application nor the physical graph in this problem has a tree structure. Let us assume that junction node 1 has to be placed at the top level of the physical graph (containing nodes A, B, C, D, E), junction node 2 has to be placed at the bottom level of the physical graph (containing nodes K, L, M, N, O), and application nodes 3, 4, 5, 6 have to be placed at the middle level of the physical graph (containing nodes F, G, H, I, J). One possible junction node placement under this restriction is shown in Fig. 9(c). With this pre-specified junction node placement, the mapping of each application node in {3, 4, 5, 6} can be found by the simple branch placement algorithm (Algorithm 3, which embeds Algorithm 2) introduced earlier, because it only needs to map each application node in {3, 4, 5, 6} onto a physical node in {F, G, H, I, J}, and find the particular assignment that minimizes (12) with (13a) and (13b).
Therefore, in this example, when the junction node placements are pre-specified, the proposed algorithm can find the placement of the other application nodes with O(V^3 N^2) time-complexity, which is the complexity of Algorithm 3 as discussed in Section IV-B2. When the junction node placements are not pre-specified, the proposed algorithm can find the placement of the whole application graph with O(V^3 N^4) time-complexity, because here H = 2 (recall that the complexity result was derived in Section IV-C2). We envision that this example can be generalized to a class of application and physical graphs in which there exist a limited number of junction nodes that are not placed beforehand. The algorithms proposed in this paper should still be applicable to such cases, as long as we can find a limited number of cycle-free paths between two junction nodes when they are placed on the physical graph. We leave a detailed discussion of this aspect for future work. Practical Implications: Besides the proposed algorithms themselves, the results of this paper also reveal the following insights that may guide future implementations: 1) The placement is easier when the junction nodes are placed beforehand. This is obvious when comparing the time-complexities and competitive ratios for cases with and without unplaced junction nodes. 2) There is a trade-off between instantaneously satisfying the objective function and leaving more available resources for future applications. Leaving more available resources may cause the system to operate in a suboptimal state in the short term, but future applications may benefit from it. This trade-off can be controlled by defining an alternative objective function which is different from (but related to) the overall objective that the system tries to achieve (see Section IV-B1).
Comparison to [23]: As mentioned in Section I, [23] is the only work we know of that has studied the competitive ratio of online application placement considering both node and link optimization. Our approach has several benefits compared to [23], as discussed in Section I-C. Besides those benefits, we note that the proposed algorithm outperforms [23] in time-complexity, space-complexity, and competitive ratio when the junction node placements are pre-specified (the performance bounds of the two approaches can be found in Sections I-B and I-D, respectively). When some junction node placements are not pre-specified, our approach provides a performance bound comparable to that in [23], where we also note that H ≤ D. Moreover, [23] does not consider exact optimal solutions for the placement of a single linear application graph, and it does not include simulations showing the average performance of the algorithm. Tightness of Competitive Ratio: By comparing the competitive ratio result of our approach to that in [23], we see that both approaches provide poly-log competitive ratios for the general case. It is, however, unclear whether this is the best performance bound one can achieve for the application placement problem. This is an interesting but difficult aspect worth studying in the future.

VII. CONCLUSIONS

In this paper, the placement of an incoming stream of application graphs onto a physical graph has been studied in the MEC context. We have first proposed an exact optimal algorithm for placing one linear application graph onto a tree physical graph, which works for a variety of objective functions. Then, with the goal of minimizing the maximum resource utilization at physical nodes and links, we have proposed online approximation algorithms for placing tree application graphs onto tree physical graphs.
When the maximum number of unplaced junction nodes on any path from the root to a leaf (in the application graph) is a constant, the proposed algorithm has polynomial time and space complexity and provides a poly-log worst-case optimality bound (i.e., competitive ratio). Besides the theoretical evaluation of worst-case performance, we have also shown the average performance via simulation. A combination of these results implies that the proposed method performs reasonably well on average and is also robust in extreme cases. The results in this paper can be regarded as an initial step towards a more comprehensive study in this direction. Many constraints in the problem formulation are for ease of presentation and can be readily relaxed for a more general problem. For example, as discussed in Section VI, the tree-topology restriction is not absolutely essential for the applicability of our proposed algorithms. The algorithms also work for a class of more general graphs as long as the cycle-free constraint is satisfied. While we have not considered applications leaving at some time after their arrival, our algorithm can be extended to incorporate such cases, for example using the idea in [33]. The algorithm for cases with unplaced junction nodes essentially considers a scenario in which some low-level placement (for each of the branches) is followed by some high-level placement (for the junction nodes). Such ideas may also be useful in developing practical distributed algorithms with provable performance guarantees.

APPENDIX A
APPROXIMATION RATIO FOR CYCLE-FREE MAPPING

We focus on how well the cycle-free restriction approximates the more general case which allows cycles, for the placement of a single linear application graph.
We first show that, with the objective of load balancing (defined in (3) in Section II-B), the problem of placing a single linear application graph onto a linear physical graph when allowing cycles is NP-hard, and then discuss the approximation ratio of the cycle-free restriction.

Proposition 6. The line-to-line placement problem for the objective function defined in (3) while allowing cycles is NP-hard.

Proof. The proof is similar to the proof of Proposition 2 in Section IV-A; namely, the problem can be reduced from the minimum makespan scheduling on unrelated parallel machines (MMSUPM) problem. Consider the special case where the edge demand is zero; then the problem is the same as the MMSUPM problem, which deals with placing V jobs onto N machines without restriction on their ordering, with the goal of minimizing the maximum load on each machine.

To discuss the approximation ratio of the cycle-free assignment, we separately consider edge costs and node costs. The worst-case ratio is then the maximum of these two ratios, because we have max{r_1 x_1, r_2 x_2} ≤ max{r_1, r_2} · max{x_1, x_2} for arbitrary r_1, r_2, x_1, x_2 ≥ 0. The variables x_1 and x_2 can respectively denote the true optimal maximum costs at nodes and links, and the variables r_1 and r_2 their corresponding approximation ratios. Then, max{x_1, x_2} is the true optimal maximum cost when considering nodes and links together, and max{r_1, r_2} is their joint approximation ratio. The joint approximation ratio max{r_1, r_2} is tight (i.e., there exists a problem instance whose actual optimality gap is arbitrarily close to the approximation ratio; recall that the approximation ratio is defined in an upper-bound sense) when r_1 and r_2 are tight, because we can construct worst-case examples, one with zero node demand and another with zero link demand, and there must exist one worst-case example which has approximation ratio max{r_1, r_2}.
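The decomposition used above, max{r_1 x_1, r_2 x_2} ≤ max{r_1, r_2} · max{x_1, x_2} for nonnegative values, is easy to check numerically; the sampling below is only an illustration, not a proof:

```python
import random

# Randomized check of max(r1*x1, r2*x2) <= max(r1, r2) * max(x1, x2)
# for nonnegative values; a small tolerance absorbs float rounding.
rng = random.Random(0)
for _ in range(10_000):
    r1, r2, x1, x2 = (rng.uniform(0.0, 10.0) for _ in range(4))
    assert max(r1 * x1, r2 * x2) <= max(r1, r2) * max(x1, x2) + 1e-12
```

The inequality holds because each product r_i x_i is individually bounded by max{r_1, r_2} · max{x_1, x_2}.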
In the following discussion, we assume that the application and physical nodes are indexed in the way described in Section III-A. The following proposition shows that cycle-free placement is always optimal when only the edge cost is considered.

Proposition 7. Cycle-free placement on tree physical graphs always has lower or equal maximum edge cost compared with placement that allows cycles.

Proof. Suppose a placement that contains cycles produces a lower maximum edge cost than any cycle-free placement; then there exist v and v_1 (with v_1 > v + 1) both placed on a particular node n, while nodes v+1, ..., v_1−1 are placed on some nodes among n+1, ..., N. In this case, placing nodes v+1, ..., v_1−1 all onto node n never increases the maximum edge cost, which is a contradiction.

For the node cost, we first consider the case where the physical graph is a single line. We note that in this case the cycle-free placement essentially becomes an "ordered matching", which matches V items into N bins, where the first bin may contain items 1, ..., v_1, the second bin may contain items v_1 + 1, ..., v_2, and so on. We can also view the problem as partitioning the ordered set V into N subsets, where each subset contains consecutive elements of V.

Proposition 8. When each application node has the same cost regardless of which physical node it is placed on, the cycle-free line-to-line placement has a tight approximation ratio of 2.

Proof. Suppose we have V items that can be packed into N bins by a true optimal algorithm (which does not impose an ordering on the items), and the optimal cost at each bin is OPT. To show that the worst-case cost ratio resulting from the ordering cannot be larger than 2, we consider a bin packing where the size of each bin is OPT. (Note that the bin packing problem focuses on minimizing the number of bins with a given bin size, which is slightly different from our problem.)
Because an optimal solution can pack our V items into N bins with maximum cost OPT, when we are given that the size of each bin is OPT, we can also pack all the V items into N bins. Hence, the optimal solution to the related bin packing problem is N. When we have an ordering, we can do the bin packing with the first-fit algorithm, which preserves our ordering. The first-fit algorithm has been proven to use at most 2N bins [20]. Now we can combine two neighboring bins into one bin. Because we have at most 2N bins from first-fit, we will have at most N bins after combination. Also, because each bin has size OPT in the bin packing problem, the cost after combination is at most 2·OPT for each bin. This shows that the worst-case cost for ordered items is at most 2·OPT. To show that the approximation ratio of 2 is tight, we consider the following problem instance. Suppose V = 2N. Among the 2N items, N of them have cost (1 − ε)·OPT, where ε > 1/(1+N), and the remaining N have cost ε·OPT. Obviously, an optimal allocation puts one (1 − ε)·OPT item and one ε·OPT item into each bin, and the resulting maximum cost at each bin is OPT. A bad ordering could have all (1 − ε)·OPT items coming first and all ε·OPT items coming afterwards. In this case, if the ordered placement would like the maximum cost to be smaller than (2 − 2ε)·OPT, it would be impossible to fit all the items into N bins, because the (1 − ε)·OPT items alone already occupy N bins, as it is impossible to put more than one (1 − ε)·OPT item into each bin if the cost is to stay below (2 − 2ε)·OPT. Because Nε·OPT > (1 − 1/(1+N))·OPT > (1 − ε)·OPT, it is also impossible to put all the ε·OPT items into the last bin on top of the existing (1 − ε)·OPT item. This means an ordered placement of these V items into N bins has a cost of at least (2 − 2ε)·OPT. Considering arbitrarily large N, and thus arbitrarily small ε, we conclude that the approximation ratio of 2 is tight.
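The combining argument above can be exercised numerically. The sketch below substitutes the closely related next-fit rule for first-fit, since next-fit keeps each group consecutive and, when every item fits in a bin of size OPT, also opens at most 2N bins; the instance mirrors the tight example with N = 3 and ε = 0.3, and all names are illustrative:

```python
def next_fit_groups(costs, bin_size):
    """Pack ordered items into consecutive groups of total cost <= bin_size,
    opening a new group whenever the next item would overflow."""
    groups, cur, cur_sum = [], [], 0.0
    for c in costs:
        if cur and cur_sum + c > bin_size:
            groups.append(cur)
            cur, cur_sum = [], 0.0
        cur.append(c)
        cur_sum += c
    if cur:
        groups.append(cur)
    return groups

def ordered_two_approx(costs, opt):
    """Merge neighbouring groups pairwise: at most N groups, each <= 2*opt."""
    g = next_fit_groups(costs, opt)
    return [sum(g[i]) + (sum(g[i + 1]) if i + 1 < len(g) else 0.0)
            for i in range(0, len(g), 2)]

# Tight-example flavour: N = 3, OPT = 1, eps = 0.3 > 1/(1+N).
costs = [0.7] * 3 + [0.3] * 3
bins = ordered_two_approx(costs, opt=1.0)
```

On this instance the merged groups stay within 2·OPT and fit into N bins, matching the upper bound in the proof.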
Corollary 1. When the physical graph is a tree and the maximum-to-minimum cost ratio for placing application node v on any physical node is d_{%,v}, the cycle-free line-to-line placement has an approximation ratio of 2V · max_v d_{%,v} = O(V).

Proof. This follows from the fact that OPT may choose the minimum cost for each v while the ordered assignment may have to choose the maximum cost for some v, and also, in the worst case, the cycle-free placement may place all application nodes onto one physical node. The factor 2 follows from Proposition 8.

It is not straightforward to determine whether the bound in the above corollary is tight, so we do not discuss it here. We conclude that the cycle-free placement always brings optimal link cost, which is advantageous. The approximation ratio of node costs can be O(V) in some extreme cases. However, the cycle-free restriction is still reasonable in many practical scenarios; basically, in these scenarios, one cannot split the whole workload onto all the available servers without considering the total link resource consumption. The analysis here also aims to provide further insight into the practical scenarios in which the proposed work is applicable, while further study is worthwhile for some other scenarios.

APPENDIX B
PROOF OF PROPOSITION 3

The proof presented here borrows ideas from [31], but applies them to the generalized case of graph mappings and arbitrary reference offline costs J_{π^o}. For a given Ĵ, we define p̄_{n,k}(i) = p_{n,k}(i)/Ĵ, d̄_{v→n,k}(i) = d_{v→n,k}(i)/Ĵ, q̄_l(i) = q_l(i)/Ĵ, and b̄_{e→l}(i) = b_{e→l}(i)/Ĵ. To simplify the proof structure, we first introduce some notation so that the link and node costs can be considered in an identical framework, because it is not necessary to distinguish them in the proof of this proposition. We refer to each type of resource as an element; i.e., the type-k resource at node n is an element, and the resource at link l is also an element.
Then, we define the aggregated cost up to application i for element r as z̄_r(i). The value of z̄_r(i) can be either p̄_{n,k}(i) or q̄_l(i), depending on the resource under consideration. Similarly, we define w̄_{r|π}(i) as the incremental cost that application i brings to element r under the mapping π. The value of w̄_{r|π}(i) can be either Σ_{v: π(v)=n} d̄_{v→n,k}(i) or Σ_{e=(v_1,v_2): l ∈ (π(v_1),π(v_2))} b̄_{e→l}(i). Note that both z̄_r(i) and w̄_{r|π}(i) are normalized by the reference cost Ĵ. Using the above notation, the objective function in (12) with (13a) and (13b) becomes

min_{π_i} Σ_r ( α^{z̄_r(i−1) + w̄_{r|π_i}(i)} − α^{z̄_r(i−1)} ).    (15)

Note that, due to the notational equivalence, (15) is the same as (12) with (13a) and (13b). Recall that π^o denotes the reference offline mapping result; let π^o_i denote the offline mapping result for the nodes that correspond to the ith application, and let z̄^o_r(i) denote the corresponding aggregated cost up to application i. Define the following potential function, which helps us prove the proposition:

Φ(i) = Σ_r α^{z̄_r(i)} ( γ − z̄^o_r(i) ).    (16)

Note that variables without the superscript "o" correspond to the values resulting from Algorithm 2, which optimizes the objective function (15), and the notation π_i(·) = r or π^o_i(·) = r means that application i has occupied some resource from element r when using, respectively, the mapping from Algorithm 2 or the reference offline mapping. We explain the relationships in (17)-(20): the last equality in (17) follows from the fact that α^{z̄_r(i)} − α^{z̄_r(i−1)} = 0 for all elements r that receive no load from application i under π_i, and w̄_{r|π^o_i}(i) = 0 for all elements r that receive no load under π^o_i. Inequality (18) follows from z̄^o_r(i−1) ≥ 0 and z̄_r(i) = z̄_r(i−1) + w̄_{r|π_i}(i). Note that the first term in (18) is the same as the objective function (15). Because the mapping π_i results from Algorithm 2, which optimizes (15), the reference mapping π^o must produce a cost Σ_r ( α^{z̄_r(i−1) + w̄_{r|π^o_i}(i)} − α^{z̄_r(i−1)} ) that is greater than or equal to the optimum, which gives (19).
Equality (20) is obvious. We now prove that the potential function Φ(i) does not increase with i, by proving that (20) is not larger than zero. For the ith request, the reference offline mapping produces the mapping result π^o_i. Therefore, for all r such that ∃ π^o_i(·) = r, we have 0 ≤ w̄_{r|π^o_i}(i) ≤ J_{π^o}/Ĵ ≤ 1. Hence, we only need to show that γ ( α^{w̄_{r|π^o_i}(i)} − 1 ) − w̄_{r|π^o_i}(i) ≤ 0 for w̄_{r|π^o_i}(i) ∈ [0, 1], which is true for α ≤ 1 + 1/γ. From (17)-(20), it follows that Φ(i) ≤ Φ(i−1). (We take α = 1 + 1/γ because this gives the smallest value of β.) Because z̄_r(0) = z̄^o_r(0) = 0, we have Φ(0) = γ(NK + L). Because Φ(i) does not increase, α > 1, and z̄^o_r(i) ≤ 1 due to J_{π^o} ≤ Ĵ, we have

(γ − 1) α^{max_r z̄_r(i)} ≤ (γ − 1) Σ_r α^{z̄_r(i)} ≤ Φ(i) ≤ Φ(0) = γ(NK + L).    (21)

Taking the logarithm on both sides of (21), we have

max_r z̄_r(i) ≤ log_α ( γ(NK + L) / (γ − 1) ) = β,    (22)

which proves the result because z_r(i) = z̄_r(i) · Ĵ.

Figure 1. Application scenario with mobile edge-clouds (MECs): example scenario with a face recognition application, where the dashed lines stand for physical communication links and the red arrows stand for the data transmission path.
Figure 2. The application placement problem.
Figure 4. Auxiliary graph and algorithm procedure for the placement of a linear application graph onto a tree physical graph.
Figure 6. Example of application graphs with some unplaced junction nodes, where the nodes and edges within each dashed boundary form a general branch: (a) nodes 2 and 5 are both unplaced, (b) node 2 is placed, node 5 is unplaced, (c) node 2 is placed, nodes 5 and 6 are unplaced.
Figure 7. Maximum resource utilization when junction node placements are pre-specified.
Figure 8. Maximum resource utilization when junction node placements are not pre-specified.
Figure 9.
Example where the application and physical graphs are not trees: (a) application graph, (b) physical graph, (c) restricted physical graph with pre-specified placement of application nodes 1 and 2.

Footnotes:
- For a minimization problem, the competitive ratio is defined as an upper bound on the ratio of the online approximation algorithm's cost to the true optimal cost obtainable from an offline placement, where the offline placement considers all application graphs simultaneously instead of considering them as arriving over time. The definition of the approximation ratio is the same, but it applies to offline problems.
- We use the terms "placement", "assignment", and "mapping" interchangeably in this paper.
- Multiple individual servers can be seen as a single entity if they constitute a single cloud.
- Note that, although v_s, ..., v_1 are mapped onto the same physical node, their pairwise costs may be non-zero if there is additional cost when placing different application nodes onto the same physical node. In the extreme case where adjacent application nodes are not allowed to be placed onto the same physical node (i.e., a conflict constraint), their pairwise cost when placing them on the same physical node becomes infinity.
- The value of J_{π^o*} is finite unless the placement cost specification does not allow any placement with finite cost. We do not consider this case here because it means that the placement is not realizable under the said constraints. In practice, the algorithm can simply reject such application graphs when the mapping cost resulting from Algorithm 2 is infinity, regardless of what value of Ĵ has been chosen.
- Note that, as shown in Line 20 of Algorithm 4, to determine the placement of v, we only take the minimum cost (expressed as the difference of exponential functions) with respect to those mappings that were obtained with a given placement of v. It follows that the minimization is only taken among a subset of all the possible mappings.
This restricts the reference mapping to be within the set of mappings that the minimization operator operates on, because only in this way can inequality (19) in the proof of Proposition 3 be satisfied. By contrast, Algorithm 2 considers all possible mappings that a particular simple branch can be mapped to, by calling Algorithm 1 as its subroutine.
- This is except for the top-level instance of Unplaced(v, h), due to the division by β^h in Line 20 of Algorithm 4.
- For non-tree graphs, a junction node can be defined as a node that is not part of a simple branch.

REFERENCES

[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "A view of cloud computing," Commun. ACM, vol. 53, no. 4, pp. 50-58, Apr. 2010.
[2] P. Bahl, R. Y. Han, L. E. Li, and M. Satyanarayanan, "Advancing the state of mobile cloud computing," in Proceedings of the Third ACM Workshop on Mobile Cloud Computing and Services, 2012, pp. 21-28.
[3] Y. Abe, R. Geambasu, K. Joshi, H. A. Lagar-Cavilla, and M. Satyanarayanan, "vTube: efficient streaming of virtual appliances over last-mile networks," in Proceedings of the 4th Annual Symposium on Cloud Computing, 2013, p. 16.
[4] K. Ha, Z. Chen, W. Hu, W. Richter, P. Pillai, and M. Satyanarayanan, "Towards wearable cognitive assistance," in Proc. of ACM MobiSys, 2014.
[5] M. Satyanarayanan, R. Schuster, M. Ebling, G. Fettweis, H. Flinck, K. Joshi, and K. Sabnani, "An open ecosystem for mobile-cloud convergence," IEEE Communications Magazine, vol. 53, no. 3, pp. 63-70, Mar. 2015.
[6] "Smarter wireless networks," IBM Whitepaper No. WSW14201USEN, Feb. 2013. [Online]. Available: www.ibm.com/services/multimedia/Smarter_wireless_networks.pdf
[7] "Mobile-edge computing - introductory technical white paper," Sept. 2014. [Online]. Available: https://portal.etsi.org/tb.aspx?tbid=826&SubTB=826
[8] M. Satyanarayanan, G. Lewis, E. Morris, S. Simanta, J. Boleng, and K. Ha, "The role of cloudlets in hostile environments," IEEE Pervasive Computing, vol. 12, no. 4, pp. 40-49, Oct. 2013.
[9] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, "Fog computing and its role in the internet of things," in Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, 2012, pp. 13-16.
[10] T. Taleb and A. Ksentini, "An analytical model for follow me cloud," in Proc. of IEEE GLOBECOM 2013, 2013.
[11] Z. Becvar, J. Plachy, and P. Mach, "Path selection using handover in mobile networks with cloud-enabled small cells," in Proc. of IEEE PIMRC 2014, Sept. 2014.
[12] Y.-h. Chu, S. G. Rao, and H. Zhang, "A case for end system multicast (keynote address)," SIGMETRICS Perform. Eval. Rev., vol. 28, no. 1, pp. 1-12, June 2000.
[13] B. Krishnamachari, D. Estrin, and S. Wicker, "The impact of data aggregation in wireless sensor networks," in Proc. of the 22nd International Conference on Distributed Computing Systems Workshops, 2002, pp. 575-578.
[14] D. Westhoff, J. Girao, and M. Acharya, "Concealed data aggregation for reverse multicast traffic in sensor networks: Encryption, key distribution, and routing adaptation," IEEE Trans. on Mobile Computing, vol. 5, no. 10, pp. 1417-1431, Oct. 2006.
[15] A. Ksentini, T. Taleb, and M. Chen, "A Markov decision process-based service migration procedure for follow me cloud," in Proc. of IEEE ICC 2014, Jun. 2014.
[16] S. Wang, R. Urgaonkar, M. Zafer, T. He, K. Chan, and K. K. Leung, "Dynamic service migration in mobile edge-clouds," in Proc. of IFIP Networking 2015, May 2015.
[17] R. Urgaonkar, S. Wang, T. He, M. Zafer, K. Chan, and K. K. Leung, "Dynamic service migration and workload scheduling in edge-clouds," Performance Evaluation, vol. 91, pp. 205-228, Sept. 2015.
[18] A. Fischer, J. Botero, M. Beck, H. De Meer, and X. Hesselbach, "Virtual network embedding: A survey," vol. 15, no. 4, pp. 1888-1906, 2013.
[19] I. Giurgiu, C. Castillo, A. Tantawi, and M. Steinder, "Enabling efficient placement of virtual infrastructures in the cloud," in Proceedings of the 13th International Middleware Conference, ser. Middleware '12, 2012, pp. 332-353.
[20] V. V. Vazirani, Approximation Algorithms. Springer, 2001.
[21] A. Borodin and R. El-Yaniv, Online Computation and Competitive Analysis. Cambridge University Press, 1998.
[22] M. Chowdhury, M. Rahman, and R. Boutaba, "Vineyard: Virtual network embedding algorithms with coordinated node and link mapping," IEEE/ACM Transactions on Networking, vol. 20, no. 1, pp. 206-219, 2012.
N Bansal, K.-W Lee, V Nagarajan, M Zafer, Proceedings of the 30th annual ACM SIGACT-SIGOPS symposium on Principles of distributed computing, ser. PODC '11. the 30th annual ACM SIGACT-SIGOPS symposium on Principles of distributed computing, ser. PODC '11N. Bansal, K.-W. Lee, V. Nagarajan, and M. Zafer, "Minimum con- gestion mapping in a cloud," in Proceedings of the 30th annual ACM SIGACT-SIGOPS symposium on Principles of distributed computing, ser. PODC '11, 2011, pp. 267-276. Embedding paths into trees: VM placement to minimize congestion. D Dutta, M Kapralov, I Post, R Shinde, Algorithms -ESA 2012, ser. Lecture Notes in Computer Science. L. Epstein and P. FerraginaBerlin HeidelbergSpringer7501D. Dutta, M. Kapralov, I. Post, and R. Shinde, "Embedding paths into trees: VM placement to minimize congestion," in Algorithms - ESA 2012, ser. Lecture Notes in Computer Science, L. Epstein and P. Ferragina, Eds. Springer Berlin Heidelberg, 2012, vol. 7501, pp. 431-442. Network aware resource allocation in distributed clouds. M Alicherry, T V Lakshman, Proc. of IEEE INFOCOM 2012. of IEEE INFOCOM 2012M. Alicherry and T. V. Lakshman, "Network aware resource allocation in distributed clouds," in Proc. of IEEE INFOCOM 2012, Mar. 2012, pp. 963-971. Optimal approximation algorithm of virtual machine placement for data latency minimization in cloud systems. J.-J Kuo, H.-H Yang, M.-J Tsai, Proc. of IEEE INFOCOM 2014. of IEEE INFOCOM 2014J.-J. Kuo, H.-H. Yang, and M.-J. Tsai, "Optimal approximation algorithm of virtual machine placement for data latency minimization in cloud systems," in Proc. of IEEE INFOCOM 2014, 2014. Approximating the minimum quadratic assignment problems. R Hassin, A Levin, M Sviridenko, ACM Trans. Algorithms. 61R. Hassin, A. Levin, and M. Sviridenko, "Approximating the minimum quadratic assignment problems," ACM Trans. Algorithms, vol. 6, no. 1, pp. 18:1-18:10, Dec. 2009. Rejecting jobs to minimize load and maximum flow-time. 
A R Choudhury, S Das, N Garg, A Kumar, Proc. of ACM-SIAM Symposium on Discrete Algorithms (SODA). of ACM-SIAM Symposium on Discrete Algorithms (SODA)A. R. Choudhury, S. Das, N. Garg, and A. Kumar, "Rejecting jobs to minimize load and maximum flow-time," in Proc. of ACM-SIAM Symposium on Discrete Algorithms (SODA), Jan. 2015. On-line load balancing. Y Azar, Theoretical Computer Science. Y. Azar, "On-line load balancing," Theoretical Computer Science, pp. 218-225, 1992. Approximate Dynamic Programming: Solving the curses of dimensionality. W B Powell, John Wiley & SonsW. B. Powell, Approximate Dynamic Programming: Solving the curses of dimensionality. John Wiley & Sons, 2007. On-line routing of virtual circuits with applications to load balancing and machine scheduling. J Aspnes, Y Azar, A Fiat, S Plotkin, O Waarts, J. ACM. 443J. Aspnes, Y. Azar, A. Fiat, S. Plotkin, and O. Waarts, "On-line routing of virtual circuits with applications to load balancing and machine scheduling," J. ACM, vol. 44, no. 3, pp. 486-504, May 1997. . Ibm Cplex Optimizer, IBM CPLEX Optimizer. [Online]. Available: http://www-01.ibm.com/ software/commerce/optimization/cplex-optimizer/ On-line load balancing of temporary tasks. Y Azar, B Kalyanasundaram, S Plotkin, K R Pruhs, O Waarts, Journal of Algorithms. 221Y. Azar, B. Kalyanasundaram, S. Plotkin, K. R. Pruhs, and O. Waarts, "On-line load balancing of temporary tasks," Journal of Algorithms, vol. 22, no. 1, pp. 93-110, 1997.
{'fraction_non_alphanumeric': 0.05048145579174997, 'fraction_numerical': 0.01607572728385491, 'mean_word_length': 4.253013984889889, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 13, 'https://': 1, 'lorem ipsum': 0, 'www.': 2, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 8, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'Mobile edge computing is a new cloud computing paradigm which makes use of small-sized edge-clouds to provide real-time services to users. These mobile edge-clouds (MECs) are located in close proximity to users, thus enabling users to seamlessly access applications running on MECs. Due to the coexistence of the core (centralized) cloud, users, and one or multiple layers of MECs, an important problem is to decide where (on which computational entity) to place different components of an application. This problem, known as the application or workload placement problem, is notoriously hard, and therefore, heuristic algorithms without performance guarantees are generally employed in common practice, which may unknowingly suffer from poor performance as compared to the optimal solution. In this paper, we address the application placement problem and focus on developing algorithms with provable performance bounds. We model the user application as an application graph and the physical computing system as a physical graph, with resource demands/availabilities annotated on these graphs. We first consider the placement of a linear application graph and propose an algorithm for finding its optimal solution. Using this result, we then generalize the formulation and obtain online approximation algorithms with polynomial-logarithmic (poly-log) competitive ratio for tree application graph placement. 
We jointly consider node and link assignment, and incorporate multiple types of computational resources at nodes.', 'arxivid': '1605.08023', 'author': ['Member, IEEEShiqiang Wang ', 'Member, IEEEMurtaza Zafer ', 'Fellow, IEEEKin K Leung '], 'authoraffiliation': [], 'corpusid': 7501354, 'doi': '10.1109/access.2017.2665971', 'github_urls': [], 'n_tokens_mistral': 26078, 'n_tokens_neox': 23587, 'n_words': 16622, 'pdfsha': '2885b18e6e3e776a0f621c82a36a77ebd272ca70', 'pdfurls': ['https://arxiv.org/pdf/1605.08023v1.pdf'], 'title': ['Online Placement of Multi-Component Applications in Edge Computing Environments', 'Online Placement of Multi-Component Applications in Edge Computing Environments'], 'venue': []}
arxiv
Adversarial Networks and Machine Learning for File Classification

Ken St. Germain and Josh Angichiodo
Department of Cyber Science, United States Naval Academy, Annapolis, MD

Abstract—Correctly identifying the type of file under examination is a critical part of a forensic investigation. The file type alone suggests the embedded content, such as a picture, video, manuscript, or spreadsheet. In cases where a system owner might desire to keep their files inaccessible or file types concealed, we propose using an adversarially-trained machine learning neural network to determine a file's true type even if the extension or file header is obfuscated to complicate its discovery. Our semi-supervised generative adversarial network (SGAN) achieved 97.6% accuracy in classifying files across 11 different types. We also compared our network against a traditional standalone neural network and three other machine learning algorithms. The adversarially-trained network proved to be the most precise file classifier, especially in scenarios with few supervised samples available. Our implementation of a file classifier using an SGAN is available on GitHub (https://ksaintg.github.io/SGAN-File-Classier/).

I. INTRODUCTION

Machine learning can be used to determine file types based on a file's byte value distribution. In this work, we introduce an adversarial learning approach to accurately identify file types regardless of file extension, headers, or footers. By inspecting the histogram-based distribution of byte values in a file, we can greatly reduce the time and effort expended by subject matter experts during the course of a forensic investigation.
Machine learning algorithms are designed to extract relevant information from data [1], and the field of deep learning has been shown effective in solving classification problems [2]. In this paper we use a generative adversarial network (GAN) to determine the type of file under investigation. Specifically, we employ a GAN model with semi-supervised learning known as a semi-supervised GAN (SGAN) [3] where only a small portion of the training dataset is labeled. A. Hiding files Privacy advocates [4] urge users to protect their private information from criminal interception or unlawful government overreach, and protecting the digital data stored on users' computers, phones, and other devices can include denying physical access or employing encryption. While encryption has become more commonplace and accessible [5], users desiring more security against cryptographic weaknesses [6], [7] may apply additional measures to safeguard their information. By changing file extensions or removing them altogether, a user can obfuscate the true file type. While this rudimentary technique applied to a small number of files may not be a challenge to computer forensic investigators, it may be more effective if used across a large body of files composed of varying types. Many operating systems will select (or suggest) an application to open a file based on the file extension [8]. For example, Microsoft Windows will use the file extension, such as .docx to determine the application to open the file. A file named cat.docx suggests that the file is a document that can be opened by Microsoft Word. However, users can change the names and extensions of the file to any arbitrary string of characters. A file originally created as a bitmap file named cat.bmp and renamed to cat.docx will not open and render correctly using Microsoft Word. There are a variety of reasons to keep the nature of a file unknown to all but the user. 
By obfuscating file types, malware developers may hope to evade email filters or antivirus software [9], [10]. A user engaged in illicit activities may desire to hinder law enforcement by complicating evidence discovery [11]. Whatever the user's motivation, without the correct file extension and absent a brute-force approach, an investigator will require a tool to efficiently discover the appropriate program to open the file. B. Finding files Many file types can be determined by examining the file header and footer information, also known as a "magic number". The file header is the first few bytes in a file and the footer is the last few bytes in a file. Depending on file type, the file headers and footers will be of various lengths and have different values. Many file types will have unique headers and footers, yet some file types will share header and footer values, e.g., .xls, .doc, .ppt [12]. File headers and footers can be analyzed through command-line tools that perform a binary or hexadecimal dump, or by using binary or hexadecimal readers/editors to provide insight into the file type. Alternatively, tools like Scalpel [13] search a chunk of data that may contain multiple files, and based on user-configured options, will perform file carving that allows the investigator to see the chunk's number and file types within. Scalpel's configurable options use header and footer values as well as common signatures within a file. For example, although an html file is plaintext and will not have a header, it will likely include the text string <html>. Regardless of an investigator's methods, specialized knowledge is required to conclude the type of file under examination. If the hexadecimal string D0 CF 11 E0 A1 B1 1A E1 is found in the header, this could be one of five Microsoft Office file types [12]. When several thousand or more files require classification, the time demand on the most experienced investigator greatly increases. C.
Contributions This work uses machine learning algorithms trained on extracted file features to identify the type of file under investigation. We created histograms based on the frequency of byte values (ranging from zero to 255) to train and then test our machine learning algorithms. Specifically, our contribution provides:
• A classifier from a semi-supervised generative adversarial network designed to identify file types
• Comparison of classifier accuracy with the performance of a traditionally-trained multi-layer perceptron (MLP) network
• Comparison and analysis of the neural network method against the results from non-neural network machine learning algorithms, specifically Decision Tree, extreme gradient boosting (XGBoost), and k-Nearest Neighbor (kNN)
To the best of our knowledge, no other work has used a classifier of an adversarially-trained neural network to conduct file type classification. We show that improved accuracy over previously explored methods can be achieved with reduced expert analysis required to create samples for a training dataset. This paper provides background and discussion of related works in Section II. We then discuss our dataset and how we derive our samples for machine learning in Section III. We present our SGAN architecture in Section IV and discuss other machine learning algorithms in Section V. The results of our work are summarized in Section VI and we provide our conclusions and future work in Section VII. II. BACKGROUND This section examines previous work in file classification and introduces the SGAN. We summarize the use of byte values within files to determine file types and we discuss the use of machine learning in file classification. Finally, we discuss the nature of adversarial networks and examine the SGAN model. A. Classification using byte values As an alternative to header and footer inspection, McDaniel and Heydari used the binary content of files to identify the type in [14].
They used several algorithms based on a byte frequency distribution fingerprint to determine a file type, showing that file classification can be accomplished by comparing a candidate file's byte distribution to the distribution of 120 other files of known type. The accuracy of their proposed algorithms was just under 96% when they grouped together .acd, .doc, .xls, and .ppt file types into one class. When these files were separately classified, the accuracy rate dropped to 85%. Based on the binary frequency distribution in [14], several authors have extended the research on file classification. In [15], Li et al. were able to improve on McDaniel and Heydari's accuracy in [14] using a centroid-based approach and saw improved accuracy when truncating the files. Li used the Manhattan distance between files' byte distributions to compare files and determine the appropriate classification. Because of file header similarity, Li created centroid models that combined file types, similar to McDaniel's approach in [14]. Specifically, there was one model that combined .exe and .dll files into one class, and another model that combined .doc, .xls, and .ppt files together in another class. Moody and Erbacher introduced the Statistical Analysis Data Identification (SADI) algorithm in [16]. After calculating byte values for each file, a range of statistical information was gathered and subsequently used to determine file types. SADI had varying success with nine different file types, reaching 76% accuracy across all file types after initial analysis. A secondary assessment on file types that previously did not reach greater than 92% accuracy showed improvement when characteristic patterns were considered. Using fragments of .pdf, .rtf, and .doc files from a publicly-available dataset [17], Rahmat et al. leveraged longest common sub-sequences to identify file fragments in [18].
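The centroid idea described above can be sketched in a few lines. This is only an illustrative toy, not Li's implementation: the 4-bin distributions and the `nearest_centroid` helper are made up for the example, standing in for full 256-bin byte histograms.

```python
def manhattan(p, q):
    """Manhattan (L1) distance between two byte-value distributions."""
    return sum(abs(a - b) for a, b in zip(p, q))

def nearest_centroid(candidate, centroids):
    """Return the file type whose centroid distribution is closest
    to the candidate's distribution under the L1 metric."""
    return min(centroids, key=lambda t: manhattan(candidate, centroids[t]))

# Toy 4-bin distributions standing in for 256-bin byte histograms
centroids = {"txt": [0.7, 0.2, 0.1, 0.0], "exe": [0.1, 0.2, 0.3, 0.4]}
print(nearest_centroid([0.6, 0.3, 0.1, 0.0], centroids))  # -> txt
```

A candidate file is assigned to whichever known-type centroid its own distribution lies nearest to; grouping similar types into one centroid, as Li did, simply merges entries in the `centroids` map.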
The authors' algorithm successfully classified these file fragments with 92.91% overall accuracy. Our work extends the efforts discussed here, and we also made use of byte values and the frequency with which they arose in a file. The byte value distribution was provided to machine learning algorithms, and each file type was classified. While we also used file types that shared the same header strings and files that did not contain headers, we created models that differentiated the files uniquely instead of choosing to group them together. B. Machine learning for file classification In [19], Amirani et al. used principal component analysis (PCA) and neural networks to achieve file classification accuracy of 98.33% against a pool of six different file types. The authors used two neural networks: a five-layer MLP network that uses PCA features as the input, and a second three-layer MLP network to conduct file classification. Each of their six file types was equally represented in the dataset, with 120 files of each type. Konaray et al. conducted several experiments using a variety of machine learning algorithms in [20]. The dataset used by Konaray was composed of 13 text-based file types (e.g., .html, .py, .bat, etc.). The authors were able to achieve an accuracy of 97.83% using the XGBoost algorithm [21]. Comparing statistical classification algorithms such as support vector machine (SVM) and kNN with commercially available tools, Gopal et al. showed that machine learning algorithms could outperform commercial products in [22]. The authors collected byte values for their experiments using an n-gram approach. They showed that kNN with 1-gram byte values and SVM with 2-gram byte values greatly outperformed commercial tools in terms of accuracy. Inspired by the efforts in machine learning research, we also hope to improve file classification accuracy.
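The nearest-neighbor scheme mentioned above (and used again as a baseline in Section V) amounts to a majority vote among the k training histograms closest to the query. A minimal sketch, with toy two-bin distributions in place of real 256-bin samples; the `knn_predict` name is ours:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote among the k training histograms nearest to `query`
    by Euclidean distance. `train` is a list of (histogram, label) pairs."""
    neighbors = sorted(train, key=lambda hl: math.dist(hl[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy 2-bin "histograms" standing in for 256-bin samples
train = [([0.9, 0.1], "txt"), ([0.8, 0.2], "txt"), ([0.1, 0.9], "gif")]
print(knn_predict(train, [0.85, 0.15], k=3))  # -> txt
```

Varying `k` trades sensitivity to individual noisy samples against blurring of class boundaries, which is why Section V sweeps k from one to six.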
As we will discuss in Section III, the dataset we used provided access to more file types and a wider variety than those mentioned in the works here. The present work uses 11 types of files, including some that are solely composed of ASCII characters, such as .txt and .html. In order to further research in this domain, we investigated the SGAN-trained classifier. C. Semi-supervised GAN Semi-supervised learning requires that only a portion of the training data be labeled. Semi-supervised learning differs from supervised learning, where all training data is labeled, and also from unsupervised learning, where no labels exist and the networks must find their own way to organize the data. Semi-supervised learning is valuable for large training datasets when it would be laborious and time-intensive to manually label each file. When training an unsupervised GAN, the discriminator, D, is a two-class classifier that receives authentic samples from the training dataset or spoofed samples created by the generator, G. The generator uses random variable input and the parameters in G to create the fake samples. The discriminator assigns a probability from zero to one based on its assessment that the sample is fake (0.0) or authentic (1.0). The value function that describes this relationship, from the original work by Goodfellow [23], is given by

min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]   (1)

where D(x) is the probability that x came from the data distribution p_data(x) containing authentic training samples, and D(G(z)) is the estimate of the probability that the discriminator incorrectly identifies the fake instance as authentic. The generator network attempts to maximize D(G(z)), while the discriminator network tries to minimize it. The generator creates samples, G(z), based on the parameter values in G and the random values z provided to the generator, consistent with p_z(z).
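In practice the two expectations in the value function (1) are estimated from minibatches. As a toy numerical check, with made-up discriminator outputs on a batch of authentic and a batch of generated samples:

```python
import math

# Hypothetical discriminator outputs: close to 1.0 on authentic samples,
# close to 0.0 on generated ones (a discriminator that is doing well).
d_real = [0.90, 0.80, 0.95]
d_fake = [0.20, 0.10, 0.30]

# Minibatch estimate of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]
v = (sum(math.log(p) for p in d_real) / len(d_real)
     + sum(math.log(1.0 - p) for p in d_fake) / len(d_fake))

# Both terms are logs of probabilities, so V is always negative; the
# discriminator pushes it toward 0 while the generator pushes it down.
```

As the generator improves, D(G(z)) rises, the second term becomes more negative, and V decreases — exactly the tension the minimax formulation expresses.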
With semi-supervised learning, a small percentage of the training data is labeled and the discriminator becomes a multiclass classifier. For N classes, the model requires N + 1 outputs to account for all the authentic classes plus one additional class for the fake generated class. This can be implemented in a variety of ways. Following Salimans et al. [24], we can build an N-class classifier network, C, with output logits {l_1, l_2, ..., l_N} prior to a softmax activation for C. The logits vector is used as the input to a single perceptron followed by the sigmoid activation function for D. The sigmoid function is given as σ(z) = 1/(1 + e^(−z)), where z is the output value of the discriminator output layer perceptron. Because D and C share the same input and hidden layer weights, both networks act as a single network, D/C, that is updated during backpropagation based on their respective loss functions, J^(D) and J^(C). The generator loss function is given by J^(G). Figure 1 shows a functional depiction of an SGAN in training. The training dataset is partially labeled and provided to the D/C model for classification by C. The remainder of the training dataset, as well as the generated samples from G, are used as input to D/C for discrimination, where D will predict whether the sample came from the training dataset or was created by G. III. DATASET A. Histograms To capture byte value distributions, every file was converted to a histogram. Each histogram contained 256 bins in the range [0, 255], representing the decimal value of each byte in the file. For every bin, the frequency of that decimal value occurring in the file was recorded. Histogram examples are shown in Figure 2. In each plot, the bins are shown on the horizontal axis while the frequency value is represented on the vertical axis.
As Figure 2 shows, there are differences in the byte distribution both between files of the same type and between files of different types, but there are also similarities across different file types such as .txt and .html. Machine learning is an appropriate tool to capture the histogram distributions and not only differentiate among the different file types but also group together files of matching type despite varying byte values. B. Samples After creating histograms for each file, we then processed the histograms into samples. To ensure consistency across our samples regardless of file size, we normalized each histogram, scaling each to a cumulative distribution of 1.0. Figure 3 shows the same .pdf file, where Figure 3a is the original and Figure 3b is normalized. Since insufficient sample sizes for each class can precipitate classification error [25], we removed the file types that appeared fewer than 20 times, each representing less than 0.7% of the total. There were 14 different file types and 86 total files removed, leaving our dataset with 11 classes and 2860 samples. Our dataset's composition is shown in Table I. We then divided the samples into training and testing datasets. The training dataset used 80% of the total samples, while the remaining 20% were reserved for testing. IV. SGAN ARCHITECTURE The adversarial competition in the SGAN is a minimax game described by (1), where the discriminative model attempts to correctly distinguish authentic training samples, drawn from the distribution p_data produced by the scaled histograms representing the dataset files, from fake training samples created by the generator. While D and G adversarially train each other, they learn to improve their individual performances. Additionally, C is trained on labeled samples from the training dataset. Although C does not directly receive unlabeled authentic or fake samples, the weights of C are affected by unsupervised training since it shares weights with D in the D/C implementation.
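The sample construction from Section III — a 256-bin histogram of byte values, scaled so the bins sum to 1.0 — can be sketched as follows (the `byte_histogram` helper name is ours, not from the released code):

```python
def byte_histogram(data: bytes) -> list[float]:
    """256-bin histogram of byte values, normalized to a cumulative
    distribution of 1.0 so files of different sizes are comparable."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data) or 1  # guard against empty files
    return [c / total for c in counts]

sample = byte_histogram(b"abca")
# bin 97 ('a') holds 0.5; bins 98 ('b') and 99 ('c') hold 0.25 each
```

Normalizing by file length is what lets a 1 KB and a 10 MB file of the same type produce comparable feature vectors.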
The SGAN was implemented using the Python programming language, the Keras [26] front-end, and the TensorFlow [27] back-end. Additionally, the NumPy and Matplotlib Python libraries were used. The overall SGAN design is summarized in Table II, with a total of 417,271 parameters for the discriminator and the generator, and 304,779 parameters for the classifier. The file size of the classifier was 3,634 KB. The discriminator/classifier network, D/C, is a densely (fully) connected MLP deep neural network (DNN) with a single input for the file histograms. Four additional fully connected layers of size 512, 256, 128, and 64 are followed by rectified linear unit (ReLU) activation functions. The ReLU function, g, is given by g(z) = max(0, z). The four hidden layers use Dropout of 0.3 to prevent overfitting. Prior to the output layers, a fully connected layer of size 11 is used to capture the number of file types to be classified. The discriminator output layer of size 1 is fully connected and uses a sigmoid activation function to provide values in [0.0, 1.0], as discussed in Section II-C. The classifier output is a softmax activation connected to the 11-node output layer. The generator network, G, has a single input with 100 nodes fully connected to the first hidden layer of size 32. Two additional hidden layers of sizes 64 and 128 are again fully connected using ReLU activations. Finally, a layer of size 256 is connected to the output layer and sigmoid activation that ultimately creates the fake histogram samples. The learning rate for G was 0.0005 using the Adam optimizer. V. MACHINE LEARNING ALGORITHMS In order to illustrate the SGAN's performance when classifying files, we used additional machine learning algorithms. We assessed another neural network, the decision tree learning method, the XGBoost algorithm, and the nearest neighbors algorithm. The same training and testing datasets were used for each machine learning model.
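The D/C layer stack in Table II can be illustrated as a plain forward pass. This sketch uses untrained random NumPy weights, omits dropout (an inference-time simplification), and is not the paper's Keras code — it only shows how one set of logits feeds both the softmax classifier head and the single-perceptron sigmoid discriminator head:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
# D/C layer widths from Table II: 256 -> 512 -> 256 -> 128 -> 64 -> 11 logits
sizes = [256, 512, 256, 128, 64, 11]
weights = [rng.normal(scale=0.05, size=(m, n)) for m, n in zip(sizes, sizes[1:])]
w_d = rng.normal(size=11)  # the single perceptron feeding D's sigmoid output

def dc_forward(x):
    """Shared forward pass: softmax class probabilities (C) and
    a scalar authenticity probability (D) from the same logits."""
    h = x
    for i, W in enumerate(weights):
        h = h @ W
        if i < len(weights) - 1:
            h = relu(h)  # hidden layers use ReLU; the final layer is raw logits
    return softmax(h), sigmoid(h @ w_d)

class_probs, d_prob = dc_forward(rng.random(256))
```

Because both heads consume the same logits, gradients from the supervised classifier loss and the unsupervised discriminator loss update the same hidden weights, which is the mechanism behind the semi-supervised gains reported later.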
The SGAN was the most complex to train due to using multiple neural networks and the lack of convergence to a global minimum. In terms of structure, the closest model to the SGAN is a supervised learning-based neural network. We created an MLP network with an architecture identical to our SGAN classifier. The standalone MLP network was trained in a fully supervised manner to accurately select the correct file type based on input. Both the SGAN and standalone MLP models were trained with a batch size of 32 samples, and training was limited to no more than 300 epochs. Following training, the best classifiers were selected based on their accuracy against the training dataset. These classifiers were then evaluated on the test dataset, as reported in Section VI. Decision trees are a supervised learning approach that can be used to accomplish multi-class classification [29]. Using the features of the histograms, the decision tree algorithm examines the parametric values in each sample and attempts to accurately classify the file through a series of decisions based on learned thresholds. The XGBoost algorithm was implemented as a classifier. XGBoost is a supervised learning tool that can be used to help us predict the correct file type category. With multiple classes, the multi-class logistic loss function was used to train the model. Finally, the nearest neighbors classification algorithm compares measurements of the input data and training data [29] based on previously stored training information. The classification result is determined by the k selected samples with the smallest Euclidean distance among the sample attributes. We iterated k from one to six to determine the most appropriate number of neighbors to consider when deciding the classification. VI. RESULTS Classification accuracy for each model is shown in Table III, and confusion matrices for the SGAN and standalone MLP classifiers are shown in Fig. 4 and Fig. 5. VII. CONCLUSION AND FUTURE WORK The adversarial training of a neural network produced encouraging results in terms of classification accuracy.
While the neural networks were more complex to train than the other machine learning algorithms, the accuracy results were far superior. Though the SGAN was the most complex of all the models, its accuracy was the best at correctly classifying files based on their byte value distribution, especially with few supervised samples. Once trained, the time difference in classifying the dataset between any of the algorithms was indistinguishable. This work leads to future research using additional neural network architectures and using our spoofed histograms from the generator network to improve other machine learning algorithms.

Fig. 1: Training a semi-supervised generative adversarial network with N classes.
Fig. 2: Sample of histograms showing byte value distribution for various files.
Fig. 3: Example histogram samples showing (a) an unscaled .pdf file and (b) a normalized .pdf file.
Fig. 4: Confusion matrix for fully-supervised SGAN.
Fig. 5: Confusion matrices for (a) SGAN and (b) standalone MLP trained with 50 supervised samples.

TABLE I: Dataset file composition

TABLE II: SGAN architecture

Discriminator/Classifier:
  layer                  output size   activation
  Input: x ∼ p_data(x)   256
  Fully Connected        512           ReLU, Dropout = 0.3
  Fully Connected        256           ReLU, Dropout = 0.3
  Fully Connected        128           ReLU, Dropout = 0.3
  Fully Connected        64            ReLU, Dropout = 0.3
  Fully Connected        11            logits l_n = {l_1, l_2, ..., l_11}
  Discriminator Output   1             sigmoid
  Classifier Output      11            softmax

Generator:
  layer                  output size   activation
  Input: z ∼ p_z(z)      100
  Fully Connected        32            ReLU, Dropout = 0.3
  Fully Connected        64            ReLU, Dropout = 0.3
  Fully Connected        128           ReLU, Dropout = 0.3
  Fully Connected        256
  Output                 256           sigmoid

The softmax function indicates the most likely class to which the input belongs. The learning rate for D/C was 0.0005 using the Adam [28] optimizer, and training was done with batches of 32 samples.
TABLE III: Classification Accuracy

Supervised samples | SGAN    | Standalone MLP | Decision Tree | XGBoost | kNN k=1 | kNN k=2 | kNN k=3  | kNN k=4 | kNN k=5 | kNN k=6
2288               | 0.97552 | 0.96154        | 0.90734       | 0.90384 | 0.88986 | 0.82692 | 0.874126 | 0.83042 | 0.85490 | 0.81293
1144               | 0.93357 | 0.92132        | 0.86363       | 0.87413 | 0.86713 | 0.79720 | 0.84091  | 0.75350 | 0.81469 | 0.76049
500                | 0.91783 | 0.9021         | 0.82168       | 0.77972 | 0.84965 | 0.71504 | 0.76573  | 0.62063 | 0.74650 | 0.65734
100                | 0.87413 | 0.81469        | 0.48252       | 0.65559 | 0.71504 | 0.44406 | 0.61189  | 0.48427 | 0.52800 | 0.38990
50                 | 0.81993 | 0.62062        | 0.26573       | 0.56818 | 0.66084 | 0.38112 | 0.54895  | 0.43007 | 0.30944 | 0.08741

(Fig. 5 shows per-class confusion matrices for (a) the SGAN file classifier and (b) the standalone MLP file classifier, with predicted labels on the horizontal axis and true labels on the vertical axis for the 11 file types.)

REFERENCES

[1] M. P. Deisenroth, A. A. Faisal, and C. S. Ong, Mathematics for Machine Learning. Cambridge University Press, Apr. 2020.
[2] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016. [Online]. Available: http://www.deeplearningbook.org
[3] A. Odena, "Semi-Supervised Learning with Generative Adversarial Networks," arXiv:1606.01583 [cs, stat], Oct. 2016. [Online]. Available: http://arxiv.org/abs/1606.01583
[4] H. Fakhoury and D. Kayyali, "Know Your Rights," Electronic Frontier Foundation, Oct. 2014. [Online].
Available: https://www.eff.org/issues/know-your-rights
[5] A. Peterson, "Everything you need to know about encryption: Hint, you're already using it," Washington Post, Dec. 2015. [Online]. Available: https://www.washingtonpost.com/news/the-switch/wp/2015/12/08/you-already-use-encryption-heres-what-you-need-to-know-about-it/
[6] A. Barenghi, L. Breveglieri, I. Koren, and D. Naccache, "Fault Injection Attacks on Cryptographic Devices: Theory, Practice, and Countermeasures," Proceedings of the IEEE, vol. 100, no. 11, pp. 3056-3076, Nov. 2012.
[7] A. Mukherjee, S. A. A. Fakoorian, J. Huang, and A. L. Swindlehurst, "Principles of Physical Layer Security in Multiuser Wireless Networks: A Survey," IEEE Communications Surveys & Tutorials, vol. 16, no. 3, pp. 1550-1573, 2014.
[8] "Filename extension definition," The Linux Information Project (LINFO), Jul. 2006. [Online]. Available: http://www.linfo.org/filename extension.html
[9] Microsoft, "Blocked attachments in Outlook." [Online]. Available: https://support.microsoft.com/en-us/office/blocked-attachments-in-outlook-434752e1-02d3-4e90-9124-8b81e49a8519
[10] B. Collins, "Don't Click On These 5 Dangerous Email Attachments," Jan. 2021. [Online].
Available, Avail- able: https://www.forbes.com/sites/barrycollins/2021/01/16/dont-click- on-these-5-dangerous-email-attachments/ Statement Before the Permanent Select Committee on Intelligence United States House of Representatives. L Freeh, The Impact of Encryption on Public SafetyL. Freeh, "The Impact of Encryption on Public Safety," Statement Before the Permanent Select Committee on Intelligence United States House of Representatives, Sep. 1997. [Online]. Available: https://irp.fas.org/congress/1997 hr/h970909f.htm The Microsoft Compound Document File Format. D Rentz, D. Rentz, "The Microsoft Compound Document File Format," Aug. 2007. [Online]. Available: http://www.openoffice.org/sc/ compdocfileformat.pdf Scalpel: A Frugal, High Performance File Carver. G G Richard Iii, V Roussev, The Digital Forensics Research Conference. New Orleans, LA, USAG. G. Richard III and V. Roussev, "Scalpel: A Frugal, High Performance File Carver." in The Digital Forensics Research Conference, New Orleans, LA, USA, Aug. 2005. [Online]. Available: https://dfrws.org/ presentation/scalpel-a-frugal-high-performance-file-carver/ Content based file type detection algorithms. M Mcdaniel, M Heydari, 36th Annual Hawaii International Conference on System Sciences. Proceedings of theM. McDaniel and M. Heydari, "Content based file type detection algorithms," in 36th Annual Hawaii International Conference on System Sciences, 2003. Proceedings of the, Jan. 2003. Fileprints: identifying file types by n-gram analysis. W.-J Li, K Wang, S Stolfo, B Herzog, Proceedings from the Sixth Annual IEEE SMC Information Assurance Workshop. from the Sixth Annual IEEE SMC Information Assurance WorkshopW.-J. Li, K. Wang, S. Stolfo, and B. Herzog, "Fileprints: identifying file types by n-gram analysis," in Proceedings from the Sixth Annual IEEE SMC Information Assurance Workshop, Jun. 2005, pp. 64-71. SÁDI -Statistical Analysis for Data Type Identification. 
S J Moody, R F Erbacher, 2008 Third International Workshop on Systematic Approaches to Digital Forensic Engineering. S. J. Moody and R. F. Erbacher, "SÁDI -Statistical Analysis for Data Type Identification," in 2008 Third International Workshop on Systematic Approaches to Digital Forensic Engineering, May 2008, pp. 41-54. Bringing science to digital forensics with standardized forensic corpora. S Garfinkel, P Farrell, V Roussev, G Dinolt, Digital Investigation. 6S. Garfinkel, P. Farrell, V. Roussev, and G. Dinolt, "Bringing science to digital forensics with standardized forensic corpora," Digital Investigation, vol. 6, pp. S2-S11, Sep. 2009. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S1742287609000346 File Type Identification of File Fragments using Longest Common Subsequence (LCS). R F Rahmat, F Nicholas, S Purnamawati, O S Sitompul, 10.1088/1742-6596/801/1/012054Journal of Physics: Conference Series. 80112054IOP PublishingR. F. Rahmat, F. Nicholas, S. Purnamawati, and O. S. Sitompul, "File Type Identification of File Fragments using Longest Common Subsequence (LCS)," Journal of Physics: Conference Series, vol. 801, p. 012054, Jan. 2017, publisher: IOP Publishing. [Online]. Available: https://doi.org/10.1088/1742-6596/801/1/012054 A new approach to content-based file type detection. M C Amirani, M Toorani, A Beheshti, 2008 IEEE Symposium on Computers and Communications. M. C. Amirani, M. Toorani, and A. Beheshti, "A new approach to content-based file type detection," in 2008 IEEE Symposium on Comput- ers and Communications, Jul. 2008, pp. 1103-1108, iSSN: 1530-1346. Detecting File Types Using Machine Learning Algorithms. S K Konaray, A Toprak, G M Pek, H Akçekoce, D Kılınç, 2019 Innovations in Intelligent Systems and Applications Conference (ASYU). S. K. Konaray, A. Toprak, G. M. Pek, H. Akçekoce, and D. 
Kılınç, "Detecting File Types Using Machine Learning Algorithms," in 2019 Innovations in Intelligent Systems and Applications Conference (ASYU), Oct. 2019, pp. 1-4. XGBoost: A Scalable Tree Boosting System. T Chen, C Guestrin, 10.1145/2939672.2939785Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD '16. the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD '16New York, NY, USAAssociation for Computing MachineryT. Chen and C. Guestrin, "XGBoost: A Scalable Tree Boosting System," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD '16. New York, NY, USA: Association for Computing Machinery, Aug. 2016, pp. 785-794. [Online]. Available: https://doi.org/10.1145/2939672.2939785 Statistical Learning for File-Type Identification. S Gopal, Y Yang, K Salomatin, J Carbonell, 2011 10th International Conference on Machine Learning and Applications and Workshops. 1S. Gopal, Y. Yang, K. Salomatin, and J. Carbonell, "Statistical Learning for File-Type Identification," in 2011 10th International Conference on Machine Learning and Applications and Workshops, vol. 1, Dec. 2011, pp. 68-73. Generative Adversarial Nets. I J Goodfellow, J Pouget-Abadie, M Mirza, B Xu, D Warde-Farley, S Ozair, A Courville, Y Bengio, Proceedings of the 27th International Conference on Neural Information Processing Systems. the 27th International Conference on Neural Information Processing SystemsCambridge, MA, USAMIT Press2ser. NIPS'14I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative Adversarial Nets," in Proceedings of the 27th International Conference on Neural Information Processing Systems -Volume 2, ser. NIPS'14. Cambridge, MA, USA: MIT Press, 2014, pp. 2672-2680. Improved Techniques for Training GANs. 
T Salimans, I Goodfellow, W Zaremba, V Cheung, A Radford, X Chen, Advances in Neural Information Processing Systems. D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. GarnettCurran Associates, Inc29T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, "Improved Techniques for Training GANs," in Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, Eds. Curran Associates, Inc., 2016, pp. 2234-2242. [Online]. Available: http: //papers.nips.cc/paper/6125-improved-techniques-for-training-gans.pdf Small sample size effects in statistical pattern recognition: recommendations for practitioners. S Raudys, A Jain, conference Name: IEEE Transactions on Pattern Analysis and Machine Intelligence. 13S. Raudys and A. Jain, "Small sample size effects in statistical pattern recognition: recommendations for practitioners," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 3, pp. 252-264, Mar. 1991, conference Name: IEEE Transactions on Pattern Analysis and Machine Intelligence. . F Chollet, Keras. F. Chollet, et al., Keras, 2015, https://keras.io. [Online]. Available: https://keras.io TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. M Abadi, M. Abadi, et al., "TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems," 2015. [Online]. Available: https:// www.tensorflow.org/ Adam: A Method for Stochastic Optimization. D P Kingma, J Ba, Proceedings of the 3rd International Conference on Learning Representations (ICLR). the 3rd International Conference on Learning Representations (ICLR)D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," Proceedings of the 3rd International Conference on Learning Represen- tations (ICLR), Dec. 2014. Scikit-learn: Machine Learning in Python. 
F Pedregosa, G Varoquaux, A Gramfort, V Michel, B Thirion, O Grisel, M Blondel, P Prettenhofer, R Weiss, V Dubourg, J Vanderplas, A Passos, D Cournapeau, M Brucher, M Perrot, D Duchesnay, Journal of Machine Learning Research. 1285F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and d. Duchesnay, "Scikit-learn: Machine Learning in Python," Journal of Machine Learning Research, vol. 12, no. 85, pp. 2825-2830, 2011. [Online]. Available: http://jmlr.org/papers/v12/pedregosa11a.html Probabilistic Machine Learning: An introduction. K Murphy, MIT PressK. Murphy, Probabilistic Machine Learning: An introduction. MIT Press, 2022. [Online]. Available: https://probml.github.io/pml-book/ book1.html
Adversarial Networks and Machine Learning for File Classification

Ken St. Germain and Josh Angichiodo, Department of Cyber Science, United States Naval Academy, Annapolis, MD (arXiv:2301.11964)

Abstract: Correctly identifying the type of file under examination is a critical part of a forensic investigation. The file type alone suggests the embedded content, such as a picture, video, manuscript, spreadsheet, etc. In cases where a system owner might desire to keep their files inaccessible or file type concealed, we propose using an adversarially-trained machine learning neural network to determine a file's true type even if the extension or file header is obfuscated to complicate its discovery. Our semi-supervised generative adversarial network (SGAN) achieved 97.6% accuracy in classifying files across 11 different types. We also compared our network against a traditional standalone neural network and three other machine learning algorithms. The adversarially-trained network proved to be the most precise file classifier especially in scenarios with few supervised samples available. Our implementation of a file classifier using an SGAN is implemented on GitHub (https://ksaintg.github.io/SGAN-File-Classier/).
AN ELEMENTARY PROOF OF THE CHROMATIC SMITH FIXED POINT THEOREM

William Balderrama and Nicholas J. Kuhn

3 Mar 2023. arXiv:2303.02022v1 [math.AT]

Abstract. A recent theorem by T. Barthel, M. Hausmann, N. Naumann, T. Nikolaus, J. Noel, and N. Stapleton says that if A is a finite abelian p-group of rank r, then any finite A-space X which is acyclic in the nth Morava K-theory with n ≥ r will have its subspace X^A of fixed points acyclic in the (n − r)th Morava K-theory. This is a chromatic homotopy version of P. A. Smith's classical theorem that if X is acyclic in mod p homology, then so is X^A.

The main purpose of this paper is to give an elementary proof of this new theorem that uses minimal background, and follows, as much as possible, the reasoning in standard proofs of the classical theorem. We also give a new fixed point theorem for finite dimensional, but possibly infinite, A-CW complexes, which suggests some open problems.

1. Introduction

Fixing a prime p and a finite group G, say that a G-space X is a finite G-space if its p-localization is a retract of the p-localization of a finite G-CW complex in the G-equivariant homotopy category. We let X^G denote its subspace of fixed points.

Let K(n)_* denote the nth Morava K-theory at the prime p. In particular, K(1)_* is a summand of complex K-theory with mod p coefficients, and K(0)_* is rational homology. A key result in [6A19] can be stated as follows.

Theorem 1.1. Let A be a finite abelian p-group of rank r, and let X be a finite A-space. If K(n)_*(X) = 0 with n ≥ r, then K(n − r)_*(X^A) = 0.

This is a chromatic homotopy theory analogue of the following classical theorem of P. A. Smith [S41].

Theorem 1.2. Let P be a finite p-group, and let X be a finite dimensional P-space. If H_*(X; Z/p) = 0, then H_*(X^P; Z/p) = 0.
We note that Theorem 1.1 follows by iteration from the cases when A = C_{p^k}, the cyclic p-group of order p^k, and Theorem 1.2 follows from the case when P = C_p.

(Date: March 6, 2023. 2010 Mathematics Subject Classification: Primary 55M35; Secondary 55N20, 55P42, 55P91.)

The main purpose of this note is to give an elementary proof of Theorem 1.1 that follows, as much as possible, the reasoning in standard proofs of the classical theorem. All needed background material about K(n)_* and related theories is in papers published by 2000. We hope that our presentation will lend some clarity about the interesting remaining problems in this area.

The classical theorem includes the statement that X^P ≠ ∅, while the version of Theorem 1.1 proved in [6A19] implicitly assumes that X has a point fixed by A, an assumption we do not need to make. Indeed, the first steps in our proof hold when X is just assumed to be finite dimensional, and not necessarily finite, and lead to the following new fixed point theorem.

Theorem 1.3. Let A be a finite abelian p-group of rank r, and X a finite dimensional A-CW complex. If K(n)_*(X) = 0 with n ≥ r, then X^A ≠ ∅.

We note that, if X is any space, then K(n)_*(X) = 0 ⇒ K(r)_*(X) = 0 for n ≥ r ≥ 1 [Bou99]. (This generalized Ravenel's result [Rav84, Thm. 2.11] about finite X.) Thus this fixed point theorem for all n is implied by the special case when n = r.

In §2, we recall some needed background material, and deduce some simple consequences. The theorems are then quickly proved in §3. A final section has various remarks and speculations. In particular, we wonder if some weakening of the finiteness hypothesis of Theorem 1.1 might be possible, and we are curious about the existence of examples showing that Theorem 1.3 is as strong as possible.

1.1. Acknowledgements. The second author is a PI of RTG NSF grant DMS-1839968, which has supported the research of the first author.
Some of the writing of this paper was done while the second author was visiting the Utrecht Geometry Center, with support from Simons Foundation Collaboration Grant 709802. We thank Neil Strickland for helpful email about literature references, and Tom Goodwillie and Ian Leary for their helpful answers to a Mathoverflow query (see §4.3).

2. Background material and a localization result

2.1. Morava E-theory and E_n^*(BA). Recall the Brown-Peterson homology theory BP, with coefficient ring BP_* = Z_(p)[v_1, v_2, ...]. We work with complex oriented theories E that are p-local and with p-typical formal group laws. The coefficient ring E^* of such a theory is a BP_*-algebra, and is said to be Landweber exact and v_n-periodic if v_0, v_1, ... acts as a regular sequence on E^*, with v_n acting as a unit on E^*/(v_0, ..., v_{n−1}). (Throughout, we let v_0 = p, as is standard.)

Lemma 2.1. [H95a, Cor. 1.12] Let E^* be Landweber exact and v_n-periodic. A spectrum is E^*-acyclic if and only if it is K(m)_*-acyclic for 0 ≤ m ≤ n. Thus, if a finite spectrum is K(n)_*-acyclic, then it is E^*-acyclic.

Now let E_n be the nth Morava E-theory, as in [H95b], [HS99], or [HKR00]. There are variants of these, so for concreteness, we will say that

  E_n^* = Z_p[u^{±1}][[v_1, ..., v_{n−1}]],

where u ∈ E_n^{−2}, and with complex orientation y ∈ E_n^2(BU(1)) whose associated formal group law F has p-series of the form

  [p](y) = py +_F v_1 y^p +_F ... +_F v_{n−1} y^{p^{n−1}} +_F u^{p^n − 1} y^{p^n}.

In particular, v_n = u^{p^n − 1}. This is a Landweber exact and v_n-periodic theory, with the following additional property.

Lemma 2.2. [H95b, Prop. 3.6] If f : X → Y is a map of spectra, then f is an E_n^*-isomorphism if and only if it is a K(n)_*-isomorphism.

Proposition 2.3. Let G be a finite group and X a G-space. If K(n)_*(X) = 0, then the map EG ×_G X → BG induces an isomorphism E_n^*(BG) ≃ E_n^*(EG ×_G X).

Proof.
There are implications

  K(n)_*(X) = 0 ⇔ K(n)_*(X) ≃ K(n)_*(pt) ⇒ K(n)_*(EG ×_G X) ≃ K(n)_*(BG) ⇔ E_n^*(BG) ≃ E_n^*(EG ×_G X).

The previous lemma gives the third implication. The second implication is a special case of a general fact: if f : X → Y is a G-equivariant map between G-spaces that is an isomorphism in a generalized homology theory E_*, then it induces an isomorphism on the associated Borel theory E_*(EG ×_G X) ≃ E_*(EG ×_G Y).

We recall some basic calculations, as in [HKR00, §5]. The complex orientation y ∈ E_n^2(BU(1)) defines an isomorphism E_n^*(BU(1)) ≃ E_n^*[[y]], and the p^k-series satisfies

  [p^k](y) ≡ u^{p^{nk} − 1} y^{p^{nk}}  mod (v_0, ..., v_{n−1}, y^{p^{nk}+1}).

The standard inclusion i_k : C_{p^k} ↪ U(1) induces an isomorphism E_n^*(BC_{p^k}) ≃ E_n^*[[y_k]]/([p^k](y_k)), where y_k = i_k^*(y), and it follows from the Weierstrass preparation theorem that E_n^*(BC_{p^k}) is a free E_n^*-module with basis 1, y_k, y_k^2, ..., y_k^{p^{nk} − 1}.

Similarly, if l < k and q : C_{p^k} → C_{p^l} is the standard quotient map, then the map q^* : E_n^*(BC_{p^l}) → E_n^*(BC_{p^k}) satisfies q^*(y_l) = [p^{k−l}](y_k), and induces an isomorphism E_n^*(BC_{p^k}) ≃ E_n^*(BC_{p^l})[[y_k]]/([p^{k−l}](y_k) − y_l). This makes E_n^*(BC_{p^k}) into a free E_n^*(BC_{p^l})-module of rank p^{n(k−l)}.

From these calculations a couple of general facts follow.

Lemma 2.4. (a) If A is a finite abelian p-group, then E_n^*(BA) is a free E_n^*-module of rank |A|^n, and there is a natural isomorphism E_n^*(BA) ⊗_{E_n^*} E_n^*(X) ≃ E_n^*(BA × X) for any space X.
(b) If q : A → Ā is an epimorphism between two finite abelian p-groups, then q^* : E_n^*(BĀ) → E_n^*(BA) makes E_n^*(BA) into a free E_n^*(BĀ)-module of rank |Ker q|^n.

2.2. Localization. Let A be a finite abelian p-group. Observe that the group C_p^× of automorphisms of C_p acts freely on the set of nontrivial homomorphisms α : A → C_p via postcomposition.
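This free action and its orbit count can be made concrete with a quick illustrative enumeration (not part of the original argument): for A = (Z/p)^r, the nontrivial homomorphisms A → C_p are the nonzero vectors in F_p^r, and C_p^× = {1, ..., p − 1} acts by scalar multiplication, giving (p^r − 1)/(p − 1) orbits of size p − 1 each.

```python
from itertools import product

def orbits_of_nontrivial_homs(p, r):
    """Nontrivial homomorphisms (Z/p)^r -> C_p, viewed as nonzero
    vectors in F_p^r, grouped into orbits of the postcomposition
    action of C_p^x = {1, ..., p-1} (scalar multiplication)."""
    nonzero = [v for v in product(range(p), repeat=r) if any(v)]
    seen, orbits = set(), []
    for v in nonzero:
        if v in seen:
            continue
        orbit = {tuple((m * x) % p for x in v) for m in range(1, p)}
        assert len(orbit) == p - 1  # the action is free
        seen |= orbit
        orbits.append(orbit)
    return orbits

p, r = 5, 2
orbs = orbits_of_nontrivial_homs(p, r)
# p^r - 1 = 24 nontrivial homomorphisms fall into
# (p^r - 1)/(p - 1) = 6 free orbits; in the notation of the next
# paragraph, e(A) will have 24 Euler-class factors and the class
# there written ebar(A) will have 6.
print(len(orbs))  # 6
```

The same enumeration works for any finite abelian p-group by listing homomorphisms coordinatewise; only the elementary abelian case is shown here.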
If α : A → C_p is a homomorphism, we let e(α) = α^*(y_1) ∈ E_n^2(BA). This is the Euler class of the 1-dimensional complex representation defined by composing α : A → C_p with the inclusion C_p ↪ U(1). With this notation, we define two Euler classes e(A), ē(A) ∈ E_n^*(BA) as follows.

Definitions 2.5. Let e(A) = ∏ e(α), with the product over all nontrivial α : A → C_p. Let ē(A) = ∏ e(α), with the product over one representative of each C_p^×-orbit.

The definition of ē(A) involves choices, but, for concreteness, we let ē(C_p) = y_1, i.e. we choose the identity C_p → C_p.

Basic facts about these are summarized in the next lemma.

Lemma 2.6. (a) If i : A′ ↪ A is the inclusion of a proper subgroup, then i^*(e(A)) = i^*(ē(A)) = 0.
(b) e(A)^{−1} E_n^*(BA) = ē(A)^{−1} E_n^*(BA).

Proof. e(A) and ē(A) are products of the various e(α). If A′ is a proper subgroup of A, then at least one α restricts to the trivial representation of A′. As the Euler class of the trivial representation vanishes, statement (a) follows.

To prove statement (b), we first note that, since ē(A) divides e(A), inverting e(A) also inverts ē(A). To see the converse, we need to show that, given a nontrivial α : A → C_p, inverting e(α) also inverts e(β) for all β in the C_p^×-orbit of α. Such β are given by composing α with the automorphisms m : C_p → C_p, with m ∈ {1, ..., p − 1}. Since e(m ∘ α) = [m](e(α)), it suffices to show that inverting e(α) also inverts [m](e(α)) for any m ∈ {1, ..., p − 1}.

One way to see this goes as follows. Choose s such that sm ≡ 1 mod p. Then y_1 = [s]([m](y_1)) ∈ E_n^*(BC_p), so that e(α) = [s]([m](e(α))) ∈ E_n^*(BA). Since x always divides [s](x), we see that [m](e(α)) divides e(α), and so inverting e(α) also inverts [m](e(α)).

Proposition 2.7. Let A be a finite abelian p-group, and let e be either e(A) or ē(A). If X is a finite dimensional A-CW complex, then the inclusion X^A ↪ X induces an isomorphism

  e^{−1} E_n^*(EA ×_A X) ≃ e^{−1} E_n^*(BA × X^A).

Proof.
This is an application of classic localization theory, as in [tD87, Chapter III], and a simple proof goes as follows. For notational simplicity, let F = X^A. The A-equivariant cofibration sequence F → X → X/F induces a cofibration sequence

  BA × F → EA ×_A X → EA_+ ∧_A X/F,

so we need to show that e^{−1} E_n^*(EA_+ ∧_A X/F) = 0. Since X is finite dimensional, X/F has a finite filtration by its equivariant skeleta (X/F)^j, so it suffices to show e^{−1} E_n^*(EA_+ ∧_A (X/F)^j/(X/F)^{j−1}) = 0 for each j. One has an equivariant equivalence of the form

  (X/F)^j/(X/F)^{j−1} ≃ ∨_i Σ^j (A/A_i)_+,

where each A_i is a proper subgroup of A. Since EA_+ ∧_A (A/A_i)_+ ≃ BA_{i+}, we need to show that each e^{−1} E_n^*(BA_i) = 0, where e acts through its restriction to E_n^*(BA_i). But this follows immediately from Lemma 2.6(a).

3. Proofs of the theorems

Proposition 2.3, Proposition 2.7, and Lemma 2.4 combine to prove the following theorem, which gets us much of the way towards the proofs of Theorem 1.1 and Theorem 1.3.

Theorem 3.1. Let A be a finite abelian p-group, and let X be a finite dimensional A-CW complex. If K(n)_*(X) = 0, then, with e either e(A) or ē(A), the map X^A → pt induces an isomorphism

  e^{−1} E_n^*(BA) ≃ e^{−1} E_n^*(BA) ⊗_{E_n^*} E_n^*(X^A).

Now we need to know something about e^{−1} E_n^*(BA). The following algebraic result of Mark Hovey and Hal Sadofsky will suffice to prove Theorem 1.1.

Proposition 3.2. Let e = ē(C_p) = y ∈ E_n^*(BC_p) ≃ E_n^*[[y]]/([p](y)). The ring e^{−1} E_n^*(BC_p) ≃ E_n^*((y))/([p](y)) is Landweber exact and v_{n−1}-periodic.

Proof. This is given an elementary proof in [HS96, p. 3583]. For completeness, and to emphasize its elementary nature, we give a slightly different short proof. As E_n^*(BC_p) is a free E_n^*-module, and regular sequences are preserved by localization, the ring e^{−1} E_n^*(BC_p) is Landweber exact. We must only show that it is v_{n−1}-periodic. Let I = (v_0, v_1, ..., v_{n−2}).
Then we must show that (e^{−1} E_n^*(BC_p))/I is nonzero and that v_{n−1} is a unit in this ring. Recall that v_n = u^{p^n − 1}. We have

  [p](y) ≡ v_{n−1} y^{p^{n−1}} +_F v_n y^{p^n}  mod I,

so that

  v_{n−1} y^{p^{n−1}} ≡ [−1](v_n y^{p^n})  mod (I, [p](y)),

where [−1](y) is the (−1)-series associated to the formal group law F. When n ≥ 2, it follows that

  (e^{−1} E_n^*(BC_p))/I ≃ Z/p[u^{±1}][[v_{n−1}]]((y))/(v_{n−1} y^{p^{n−1}} +_F v_n y^{p^n})
                         ≃ Z/p[u^{±1}][[v_{n−1}]]((y))/(v_{n−1} − y^{−p^{n−1}} · [−1](v_n y^{p^n}))
                         ≃ Z/p[u^{±1}]((y)),

since y^{−p^{n−1}} · [−1](v_n y^{p^n}) = −v_n y^{p^n − p^{n−1}} ε(y), where ε(y) is a monic power series in y. A similar computation applies when n = 1, only one ends up with

  e^{−1} E_1^*(BC_p) ≃ Z_p[u^{±1}]((y))/(p − y^{−1} · [−1](v_1 y^p)),

which will be a free Q_p[u^{±1}]-module with basis 1, y, ..., y^{p−2}. In either case, this ring is visibly nonzero and contains v_{n−1} as a unit, as claimed.

A variant of Proposition 3.2 will suffice to prove Theorem 1.3.

Proposition 3.3. Let n ≥ r. The ring e(C_p^r)^{−1} E_n^*(BC_p^r) is not zero.

Proof. The special case when r = n was analyzed in [HKR00, §6.2]: the ring e(C_p^n)^{−1} E_n^*(BC_p^n) is the nonzero ring called L_1(E^*) there, and is in fact easily shown to be finite and faithfully flat over p^{−1} E_n^*. For the general case, note that the projection C_p^n → C_p^r onto the first r coordinates induces an algebra map from e(C_p^r)^{−1} E_n^*(BC_p^r) to the nonzero ring e(C_p^n)^{−1} E_n^*(BC_p^n).

Both of the propositions imply similar results for more general abelian groups.

Corollary 3.4. (a) The ring e(C_{p^k})^{−1} E_n^*(BC_{p^k}) is Landweber exact and v_{n−1}-periodic.
(b) Let A be a finite abelian p-group of rank r. Let n ≥ r and let e ∈ E_n^*(BA) be e(A) or ē(A). Then the ring e^{−1} E_n^*(BA) is nonzero.

Proof. If A has rank r, then any surjection q : A → C_p^r induces a bijection q^* : Hom(C_p^r, C_p) → Hom(A, C_p). It follows that q^*(e(C_p^r)) = e(A).
Lemma 2.4(b) tells us that, via q^*, E_n^*(BA) is a finitely generated free E_n^*(BC_p^r)-algebra, so e(A)^{−1} E_n^*(BA) will be a finitely generated free e(C_p^r)^{−1} E_n^*(BC_p^r)-algebra. Thus Proposition 3.2 proves statement (a) and Proposition 3.3 implies statement (b).

3.1. Proof of Theorem 1.3. Let A have rank r. Theorem 3.1 tells us that if K(n)_*(X) = 0, then

  e^{−1} E_n^*(BA) ≃ e^{−1} E_n^*(BA) ⊗_{E_n^*} E_n^*(X^A).

Corollary 3.4(b) tells us that, if n ≥ r, then e^{−1} E_n^*(BA) ≠ 0. (When r = 1, Corollary 3.4(a) already shows this.) Thus E_n^*(X^A) ≠ 0, and so X^A ≠ ∅.

3.2. Proof of Theorem 1.1. We first observe that if A is abelian and A′ is a subgroup of A, then X^A = (X^{A′})^{A/A′}. Thus it suffices to prove Theorem 1.1 when A is cyclic, as the general case will follow by iteration.

So let C = C_{p^k}, and let X be a finite C-space such that K(n)_*(X) = 0 for some n ≥ 1. As before, Theorem 3.1 tells us that X^C ≠ ∅, and then that e(C)^{−1} E_n^*(BC) ⊗_{E_n^*} Ẽ_n^*(X^C) = 0, where Ẽ_n^* denotes reduced cohomology.

Given a finite CW complex Y, let h^*(Y) = e(C)^{−1} E_n^*(BC) ⊗_{E_n^*} Ẽ_n^*(Y). By Corollary 3.4(a), this defines a reduced cohomology theory with coefficient ring that is Landweber exact and v_{n−1}-periodic. Since h^*(X^C) = 0, Lemma 2.1 tells us that K(n − 1)_*(X^C) = 0.

4. Further Remarks

4.1. Comparison with other proofs. The first thing to say is that every proof we know of Theorem 1.1 involves inverting Euler classes at some point in the argument. In particular, the proof of this theorem in the special case when A = C_p by Strickland [S10, Thm. 16.9] (unpublished) or Balmer and Sanders [BS17] uses results of [HS96], and Barthel et al. [6A19] invert Euler classes in the key section 3 of their paper.
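For orientation, the proof of Theorem 1.1 given in §3.2 can be compressed into a single implication chain (this is only a restatement of the steps above, with C = C_{p^k} and h^* the auxiliary theory defined there):

```latex
% Compressed restatement of the proof of Theorem 1.1 for C = C_{p^k}.
K(n)_*(X) = 0
  \;\Longrightarrow\; % Theorem 3.1
  e(C)^{-1}E_n^*(BC) \xrightarrow{\ \sim\ }
    e(C)^{-1}E_n^*(BC) \otimes_{E_n^*} E_n^*(X^C)
  \;\Longrightarrow\; % so the reduced theory kills X^C
  h^*(X^C) = 0
  \;\Longrightarrow\; % Corollary 3.4(a) and Lemma 2.1
  K(n-1)_*(X^C) = 0.
```

Every arrow except the last is available for finite dimensional X; only the final appeal to Lemma 2.1 uses finiteness, which is the point taken up in §4.3.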
(In all these papers, Theorem 1.1 is stated in terms of the geometric fixed point functor Φ^A applied to a compact object in A-spectra, but this is easily seen to be equivalent to Theorem 1.1 as stated above, only with the added assumption that there exists a fixed point.) Unlike those proofs, our proof here switches to E_n-cohomology, which leads to our Theorem 3.1, and also allows for our simple deduction of Theorem 1.1 for all A from the case when A = C_p using Corollary 3.4, which itself has an easy proof.

Stronger than we need for our purposes is the statement that, if A is a finite abelian p-group of rank r, and n ≥ r, then e(A)^{−1} E_n^*(BA) is Landweber exact and v_{n−r}-periodic. In [T02], T. Torii shows this when A is elementary abelian. In [MNN19, Prop. 5.28], A. Mathew, N. Naumann, and J. Noel show this for general A, as an application of Greenlees and Strickland's analysis of the rings E_n^*(BA)/(v_0, ..., v_t) in [GS99].

Regarding our fixed point theorem, Theorem 1.3, we note that application of the main theorem in [KL20] leads to the deduction of a generalization of Theorem 1.3 for all finite p-groups P [KL20, Thm. 2.20], but specialized to the finite P-space case. Our argument here proving Theorem 1.3 for abelian groups is simpler, and, of course, applies to all finite dimensional complexes with appropriate A-actions.

In a different direction, the second author has noted [K21] that Theorem 1.1 in the case when n = r can be immediately 'read off of' the generalized character theory of [HKR00], and that the theorem in the general case can be similarly deduced from Stapleton's more general transchromatic characters [S13]. (This argument yields the full unbased version of Theorem 1.1.) Constructing these characters also involves inverting appropriate classes in the Morava E-theory of abelian p-groups (and then assembling the localized rings into an appropriate universal ring).
For the purposes of proving Theorem 1.1, our proof here uses much less analysis of E_n^*(BA) than is used in [HKR00] and [S13].

4.2. Generalizations to non-abelian groups. The paper [BS17] reduced the problem of understanding the topology of the Balmer spectrum of a stable equivariant homotopy category to a problem that can be posed as follows: given n ≥ 0, and a subgroup Q of a finite p-group P, compute r_n(P, Q), where r_n(P, Q) is the minimal r such that if X is a finite P-space and X^Q is K(n + r)_*-acyclic, then X^P is K(n)_*-acyclic.

As discussed in [KL20], iterated use of Theorem 1.1 shows that r_n(P, Q) ≤ r(P, Q), where r(P, Q) is the minimal r such that there exists a sequence of subgroups

  Q = K_0 ⊳ K_1 ⊳ ··· ⊳ K_r = P

with each K_{i−1} normal in K_i and each K_i/K_{i−1} cyclic. (In particular, r(P, Q) is the rank of P/Q when P is abelian.)

One might ask if this upper bound is best possible. To show this for a particular n and pair Q < P, one needs to find a finite P-space X such that K(n + r)_*(X^Q) = 0 but K(n)_*(X^P) ≠ 0, with r = r(P, Q) − 1. The authors of [6A19] find such examples when P is abelian.

The main theorem of [KL20] says that the statement "for all finite P-spaces X, if X^Q is K(n + r)_*-acyclic then X^P is K(n)_*-acyclic" implies the apparently stronger statement "for all finite P-spaces X, there is an inequality dim_{K(n+r)_*} K(n + r)_*(X^Q) ≥ dim_{K(n)_*} K(n)_*(X^P)." (This sort of conclusion is analogous to a theorem of Floyd [F52] in the classical situation.) This leads both to interesting applications of Theorem 1.1 [KL21] and to more families of examples showing that r_n(P, Q) = r(P, Q) [KL20].

Some steps in our proof of Theorem 1.1 are quite formal, and can be generalized to nonabelian groups.
But the proof ultimately hinges on Corollary 3.4, which fails in the strongest possible way for nonabelian groups: one can define an Euler class e ∈ E_n^*(BG) for any finite group G, but e^{−1} E_n^*(BG) = 0 whenever G is nonabelian. If there are pairs Q < P for which r_n(P, Q) is strictly less than r(P, Q), then some clever new ideas will be needed to prove this.

[KL20] describes a number of pairs Q < P for which r_n(P, Q) is not yet known. Here we will just advertise one of these: let C be a noncentral subgroup of order 2 in the dihedral group D of order 16. Is r_n(D, C) equal to 2 or 3? (We note that r(D, C) = 3 and r_n(D, {e}) = r(D, {e}) = 2.)

4.3. Questions about finite dimensional complexes. An obvious question is whether Theorem 1.1 might hold for finite dimensional complexes that aren't necessarily finite. Finiteness appears in our proof to ensure that

  e^{−1} E_n^*(BC) ⊗_{E_n^*} Ẽ_n^*(X^C) = 0 ⟹ K(n − 1)_*(X^C) = 0,

with Ẽ_n^* denoting reduced cohomology. Still one can ask:

Question 4.1. Let A be a finite abelian p-group of rank r, and let X be a finite dimensional A-CW complex. If K(n)_*(X) = 0 with n ≥ r, must K(n − r)_*(X^A) = 0?

The first thing to say is that the answer is no, if n = r. If M(Q) is the mapping telescope of the sequence S^1 → S^1 → S^1 → ··· of maps of degrees 2, 3, 4, ..., then H̃_*(M(Q); Z) = Q, concentrated in dimension 1. Thus if we let an abelian p-group A act trivially on M(Q), then K(n)_*(M(Q)) = 0 for all n > 0 (and all p), but K(0)_*(M(Q)^A) ≠ 0.

But this is really a non-equivariant example, and Bousfield's result in [Bou99], that if X is a space then K(n)_*(X) = 0 implies K(n − r)_*(X) = 0 for n − r ≥ 1, makes it plausible that the question could have a positive answer for n − r ≥ 1. We note that [KL20] shows that such a chromatic Smith theorem would imply that the analogous chromatic Floyd theorem would hold, and, furthermore, the fixed point theorem [KL20, Thm. 2.20] would generalize to finite dimensional complexes with an action of a finite p-group.
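The homology computation behind the telescope example is worth displaying; it is just the colimit of the homologies of the stages:

```latex
% H_1 of the mapping telescope is the colimit over the degree maps:
\tilde H_1(M(\mathbf{Q}); \mathbf{Z})
  \;\cong\; \operatorname*{colim}\bigl(\mathbf{Z}
     \xrightarrow{\ 2\ } \mathbf{Z}
     \xrightarrow{\ 3\ } \mathbf{Z}
     \xrightarrow{\ 4\ } \cdots \bigr)
  \;\cong\; \mathbf{Q},
% while \tilde H_i(M(\mathbf{Q}); \mathbf{Z}) = 0 for i \neq 1.
```

Since every integer becomes invertible in the colimit, the reduced K(n)-homology of M(Q) vanishes for n > 0 (the degree maps invert p in a theory of characteristic p), while K(0) = H_*(−; Q) sees the copy of Q in dimension 1.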
One might also wonder if our fixed point theorem, Theorem 1.3, is best possible.

Question 4.2. For all r ≥ 1, does there exist a finite dimensional C_p^r-space X that is K(r - 1)^*-acyclic, and has no fixed points?

The answer is yes, when r = 1. The second author asked on Mathoverflow if C_p could act on a rationally trivial finite dimensional complex without fixed points. As answers, Tom Goodwillie [G22] described a free action of C_2 on a rationally acyclic 2-dimensional complex, and Ian Leary pointed to a paper of his that constructs fixed point free actions of any finite group on a 3-dimensional rationally acyclic complex [L05, Thm. 13.1].

References

[BS17] P. Balmer and B. Sanders, The spectrum of the equivariant stable homotopy theory of a finite group, Invent. Math. 208 (2017), 283-326.
[6A19] T. Barthel, M. Hausmann, N. Naumann, T. Nikolaus, J. Noel, and N. Stapleton, The Balmer spectrum of the equivariant homotopy category of a finite abelian group, Invent. Math. 216 (2019), 215-240.
[Bou99] A. K. Bousfield, On K(n)-equivalences of spaces, Homotopy invariant algebraic structures (Baltimore), A.M.S. Cont. Math. Series 239 (1999), 85-89.
[tD87] T. tom Dieck, Transformation groups, De Gruyter Studies in Mathematics 8, de Gruyter, 1987.
[F52] E. E. Floyd, On periodic maps and the Euler characteristics of associated spaces, Trans. Amer. Math. Soc. 72 (1952), 138-147.
[GS99] J. P. C. Greenlees and N. P. Strickland, Varieties and local cohomology for chromatic group rings, Topology 38 (1999), 1093-1139.
[G22] T. Goodwillie, answer to "Can a cyclic group of prime order act on a rationally acyclic finite dimensional complex and have no fixed points?", asked on Mathoverflow, https://mathoverflow.net/q/438014.
[HKR00] M. J. Hopkins, N. J. Kuhn, and D. C. Ravenel, Generalized group characters and complex oriented cohomology theories, J. Amer. Math. Soc. 13 (2000), 553-594.
[Hov95a] M. Hovey, Bousfield localization functors and Hopkins' chromatic splitting conjecture, A.M.S. Cont. Math. 181 (1995), 225-250.
[Hov95b] M. Hovey, Cohomological Bousfield classes, J. Pure Appl. Algebra 103 (1995), 45-59.
[HS96] M. Hovey and H. Sadofsky, Tate cohomology lowers chromatic Bousfield classes, Proc. Amer. Math. Soc. 124 (1996), 3579-3585.
[HS99] M. Hovey and N. P. Strickland, Morava K-theories and localizations, Mem. Amer. Math. Soc. 666, 1999.
[KL20] N. J. Kuhn and C. J. R. Lloyd, Chromatic fixed point theory and the Balmer spectrum for extraspecial 2-groups, Amer. J. Math., to appear. Preprint 2020.
[KL21] N. J. Kuhn and C. J. R. Lloyd, Computing the Morava K-theory of real Grassmannians using chromatic fixed point theory, Alg. Geom. Topol., to appear. Preprint 2021.
[K21] N. J. Kuhn, A short proof of the chromatic Smith fixed point theorem, preprint 2021. To be renamed "Generalized character theory implies the chromatic Smith fixed point theorem".
[L05] I. Leary, On finite subgroups of groups of type VF, Geom. Topol. 9 (2005), 1953-1976.
[MNN19] A. Mathew, N. Naumann, and J. Noel, Derived induction and restriction theory, Geom. Topol. 23 (2019), 541-636.
[R84] D. C. Ravenel, Localization with respect to certain periodic homology theories, Amer. J. Math. 106 (1984), 351-414.
[S41] P. A. Smith, Fixed point theorems for periodic transformations, Amer. J. Math. 63 (1941), 1-8.
[S13] N. Stapleton, Transchromatic generalized character maps, Algebr. Geom. Topol. 13 (2013), 171-203.
[Str10] N. P. Strickland, Thick ideals of finite G-spectra, unpublished notes from 2010.
[T02] T. Torii, The geometric fixed point spectrum of (Z/p)^r Borel cohomology for E_n and its completion, A.M.S. Cont. Math. 293 (2002), 343-369.
Email address: [email protected] Email address: [email protected] Department of Mathematics, University of Virginia, Charlottesville, VA
arXiv:2303.02022. William Balderrama and Nicholas J. Kuhn, "An Elementary Proof of the Chromatic Smith Fixed Point Theorem."

Abstract: A recent theorem by T. Barthel, M. Hausmann, N. Naumann, T. Nikolaus, J. Noel, and N. Stapleton says that if A is a finite abelian p-group of rank r, then any finite A-space X which is acyclic in the nth Morava K-theory with n ≥ r will have its subspace X^A of fixed points acyclic in the (n - r)th Morava K-theory. This is a chromatic homotopy version of P. A. Smith's classical theorem that if X is acyclic in mod p homology, then so is X^A. The main purpose of this paper is to give an elementary proof of this new theorem that uses minimal background, and follows, as much as possible, the reasoning in standard proofs of the classical theorem. We also give a new fixed point theorem for finite dimensional, but possibly infinite, A-CW complexes, which suggests some open problems.
Journal of LaTeX Class Files, Vol. 14, No. 8, August 2021

Rethinking Generative Methods for Image Restoration in Physics-based Vision: A Theoretical Analysis from the Perspective of Information

Abstract—End-to-end generative methods are considered a more promising solution for image restoration in physics-based vision than the traditional deconstructive methods based on handcrafted composition models. However, existing generative methods still have plenty of room for improvement in quantitative performance. More crucially, these methods are considered black boxes due to their weak interpretability, and there is rarely a theory that tries to explain their mechanism and learning process. In this study, we try to re-interpret these generative methods for image restoration tasks using information theory. Departing from the conventional understanding, we analyzed the information flow of these methods and identified that three sources of information (extracted high-level information, retained low-level information, and external information that is absent from the source inputs) are involved and optimized respectively in generating the restoration results. We further derived their learning behaviors, optimization objectives, and the corresponding information boundaries by extending the information bottleneck principle. Based on this theoretical framework, we found that many existing generative methods tend to be direct applications of general models designed for conventional generation tasks, which may suffer from problems including over-invested abstraction processes, inherent details loss, and vanishing gradients or imbalance in training. We analyzed these issues with both intuitive and theoretical explanations and proved them with empirical evidence respectively.
Ultimately, we proposed general solutions or ideas to address the above issues and validated these approaches with performance boosts on six datasets of three different image restoration tasks.

Index Terms—deep generative models, image restoration, information bottleneck principle.

I. INTRODUCTION

Images captured by cameras inevitably suffer from visual degradations caused by both internal (like noise, blur, aliasing, and compression artifacts inside the camera) and external (such as rain, fog, haze, and other weather distortions) factors, and can only reflect limited information about the observed scenes [1]. Image restoration in physics-based vision (such as image denoising [2], [3], dehazing [4], and deraining [5]) has long been studied as a set of foundational tasks in computer vision, which attempts to remove these visual degradations and recover the captured scenes with clean backgrounds or of a higher visual quality. With complex physics systems involved, image restoration requires not only the simulation of the visual degradations (like noise, haze, and rain) but also the handling of how these degradations integrate with the background scenes to form the captured images [5]-[7]. Recent advances in image restoration methods apply the representation learning idea of deep neural networks to simulate the complicated patterns of visual degradations without engineering features. However, regarding how these patterns integrate with the background, considerable methods still tend to rely on handcrafted composition models that are manually designed to describe their integrations [8]-[11]. These methods, noticeably, tend to be hypothetical models that are handcrafted based on human observation, statistical understanding, or prior knowledge under ideal conditions. They may not truly reflect real-world scenarios, or may even involve human bias, contributing to the performance gap between models evaluated on synthetic datasets and those used in actual practice.
Generative methods are considered more promising solutions for image restoration tasks, as they allow end-to-end simulation of the entire restoration process using Deep Generative Models (DGMs) [12]-[14] without handcrafted composition models. Compared with the deconstructive idea above, generative methods also have better support for completing damaged / lost information, lighter-weight models, higher generalization ability, and many other advantages. Therefore, a growing number of recent studies have started to apply generative methods to various image restoration tasks [15], [16]. Nevertheless, many existing generative methods still leave ample room for improvement in quantitative performance compared with deconstructive methods using handcrafted composition models, or may require more training data to achieve performance competitive with state-of-the-art results on many image restoration tasks. Another problem lies in the interpretability of these generative methods: unlike deconstructive methods, whose mechanisms are intuitively explainable, generative methods tend to be purely data-driven and can be a black box, where both the patterns of visual degradations and how they integrate with the backgrounds are learned inside the DGMs. There seems to be no solid theory specified for image restoration tasks that can explain the learning behavior inside these models or assess their reliability. In this study, we noticed that the conventional understanding tends to consider the generative methods in image restoration tasks only as an information extraction process, where the network models simply attempt to optimize the extracted representation for better restoration (Fig. 2a).
However, we consider that in actual scenarios, a considerable proportion of background information may be retained intact throughout the network without abstraction across layers, and there may exist a certain amount of fine-grained background detail in the target outputs that is absent / missing from the input images (Fig. 2b). Based on this hypothesis, we analyzed the information flow in the generative methods of image restoration and confirmed that all three sources of information above are involved in generating the restoration results. By extending the information bottleneck principle, we re-interpret the learning process of DGMs in these generative methods: we deduced that the three sources of information above are learned / optimized to approximate (i) the features / patterns of the visual degradations; (ii) background pixels / information to be retained in the restoration results; and (iii) fine-grained details or background information that is damaged / lost in the input images; respectively. Using this theoretical framework, we further found that existing generative methods in image restoration tasks tend to be direct applications of DGMs designed for conventional generation tasks, where we identified three major issues in these conventional DGMs that may result in the performance gaps above: (i) these DGMs often contain over-invested abstraction processes; (ii) their network structures may inherently discard details information; and (iii) the loss functions for training tend to optimize two different component objectives, which may contribute to gradient vanishing and imbalanced training in GAN-based models. We analyzed and formulated these issues with both intuitive and theoretical explanations. Then we provided empirical evidence and experimental results to prove their existence respectively, as well as to support and validate our theoretical framework.
Ultimately, we gave general solutions or ideas to address the above issues and to improve the performance of generative methods for image restoration, such as optimizing network structure, enhancing details extraction, accumulation, and retention, as well as using more sensitive measures of loss with pre-training. Then we validated these approaches with performance boosts on six datasets of different image restoration tasks, including image denoising, dehazing, deraining, and the hybrid of rain and haze removal. To sum up, this study contributes in the following aspects:

• by revealing the sources and flow of information in these models, we elaborated the theory of generative methods in image restoration tasks and proposed an information-theoretic framework to explain the learning behaviors, optimization objectives, and their corresponding optimal information boundaries, which can be helpful for the analysis and design of relevant models;
• we analyzed existing generative methods and identified three key issues in the direct application of conventional DGMs to image restoration tasks, where we provided intuitive analysis, theoretical explanations, and proofs with empirical evidence respectively;
• we proposed general solutions for the above issues, showed ways to improve generative methods for image restoration tasks, and validated them on six datasets of three different image restoration tasks.

II. RELATED WORK

1) Deconstructive Methods for Image Restoration: Early studies of many image restoration tasks assume the visual degradations are linearly added onto the background scenes, and the related methods mainly focus on modeling these degradations for better removal by engineering their features [2], [3], [17]-[21]. Deep neural networks were later introduced to image restoration tasks and have now become the mainstream models for simulating these complicated patterns of visual degradations [5]-[7], [10], [22], [23].
However, a growing number of recent studies have started to figure out that these visual degradations may not be simply superposed onto the background scenes, and they proposed different theories and designed various composition models to describe how these degradations blend in with the backgrounds to form the captured images. Examples include the famous atmospheric scattering model [8], [9] in image dehazing, as well as the heavy rain model [10] and the depth-aware rain model [11] in the image deraining task. However, all these methods still consider the visual degradations as independent layer(s) of pixels and try to manually deconstruct / interpret their integrations using human assumptions or statistical understandings based on limited data, which may involve human bias and fail to truly reflect real-world situations.

2) Generative Methods for Image Restoration: Recent studies try to use DGMs to directly learn / simulate the end-to-end mappings of image restoration tasks without the need to understand their compositions or detailed mechanisms, which shows considerable advantages (summarized in Appendix A) compared with the deconstructive methods above. As the simplest form of these generative methods, Autoencoders (AEs) [24]-[26] have been applied to image denoising [27], [28], deraining [29], [30], dehazing [31], [32], and other image restoration tasks [33]-[35]. However, it can be difficult for AE-based generative methods to learn high-level semantic knowledge for generating high-fidelity results, or they may require extra domain-specific knowledge [36]. Generative methods based on Generative Adversarial Networks (GANs) [37]-[39] can be regarded as an improved version of the above AE-based generative methods, which introduce an extra discriminator network with an adversarial training strategy to allow generating more eidetic results.
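For concreteness, the atmospheric scattering model cited in the dehazing discussion above is usually written as follows (a standard formulation from the dehazing literature, not reproduced verbatim from this paper):

```latex
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)},
```

where $I(x)$ is the observed hazy image, $J(x)$ the clean scene radiance to be recovered, $A$ the global atmospheric light, and $t(x)$ the transmission map determined by the scene depth $d(x)$ and the scattering coefficient $\beta$. Deconstructive dehazing methods estimate $t$ and $A$ and invert this handcrafted composition; this is exactly the kind of manual deconstruction that the generative methods discussed here avoid.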
In fact, most GAN models for image-to-image translation (like pix2pix [40] and CycleGAN [41]) can be directly adopted in many image restoration tasks and obtain plausible results, but their quantitative performance on the benchmarks tends to be less satisfactory. Many existing GAN-based image restoration methods attempt to reduce these performance gaps by modifying these basic architectures [42]-[49]. However, we consider that many of them only made minor changes, where key issues in these conventional GANs seem to be ignored or left unsolved. Some others only apply GANs as a supplementary component, where their network structures still tend to be based on the deconstructive idea or do not use end-to-end training [50], [51]. Despite their promise in image restoration, many existing generative methods still tend to be direct applications of general DGMs for conventional generation problems, whose performance may be less competitive than the deconstructive methods or may require extra training data to converge.

3) Conventional Interpretations of the Generative Methods: Unlike deconstructive methods, whose mechanisms can be easily explained, generative methods for image restoration have long been viewed as black boxes due to their lack of interpretability. So far as we know, there is no solid theory to explain the mechanism or learning behaviors inside the DGMs of these generative methods for image restoration tasks. However, some related studies try to intuitively interpret their learning processes. The most common understanding regards the generative methods of image restoration simply as a background extraction process, which believes the restoration performance is fully determined by the quality of the latent space representations / embeddings extracted by the encoders of DGMs [52].
[53] further extended this idea and disentangled the latent representations into task-relevant (background / contents to be restored) and task-irrelevant (visual degradations) factors, interpreting the learning process as isolating the task-irrelevant part so as to reduce the ambiguity of the learned representations. These conventional understandings deem that the image restoration performance is fully dependent on the "background-extract-ability" of the encoder networks, where the extracted representations only need to contain as much information about the target's contents / background scenes as possible, but they tend to ignore the training / optimization of the decoder networks. [54] also considers that the extracted representations consist of the above two kinds of features. But differently, they use two separate encoders to learn each kind of feature respectively, and, rather than suppressing the task-irrelevant information (visual degradations) before sending it to the decoder, they let the decoder trade off between the two sources of representations. This interpretation steps closer to our findings. However, it does not take into account the differences between the two types of features in their levels of abstraction and the amount of required information, and it still assumes that the source inputs contain all the information needed for restoring the target outputs, which we found to be less accurate. Generally speaking, all the ideas above seem to be simple interpretations carried over from the conventional understanding of DGMs for general domain-transfer generation problems, and they may not accurately reflect the actual mechanism of generative methods in image restoration tasks.

III. INFORMATION-THEORETIC FRAMEWORK

The Information Bottleneck (IB) principle [55], [56] theoretically interprets the learning behaviors of general deep neural network models by employing an information-theoretic method.
This theory explains the information flow and quantifies the optimization process with information. In the conventional understanding, generative methods for image restoration are interpreted as an information extraction process about background contents from the source inputs. This can be directly explained using the IB theory: given a visually-degraded image X to be fed into a DGM, its desired output Y is regarded as the image of its corresponding clean background, which, in reverse, determines the basic information of X. Suppose we consider the network layers in the DGM as a whole; hence we define the representation obtained from the latent layers as X̃, and the final output from the DGM, Ỹ, as the estimated restored image approximating Y. Their dependency relationship can form a Markov chain: Y → X → X̃ → Ỹ (same as Fig. 2a), where the optimization goal of the learning process is to maximize the mutual information between the extracted latent representation X̃ and the ideal output Y while minimizing the mutual information between X̃ and the input X:

min[I(X; X̃) − β I(Y; X̃)]    (1)

where β is a positive Lagrange multiplier that trades off between the two terms. According to the Data Processing Inequality (DPI) [57], we can obtain the optimal information boundaries of this conventional interpretation of the learning process:

I(Y; X) ≥ I(Y; X̃) ≥ I(Y; Ỹ)    (2)

where the first equality is satisfied if and only if X̃ is a sufficient statistic for Y based on X, which requires the encoder network to be powerful enough to fully extract the mutual information I(Y; X) into its high-level embedding / latent space representation X̃; similarly, the second equality is satisfied if the decoder can pass the entire information it receives to the output Ỹ. In this way, the mutual information in the restoration result, I(Y; Ỹ), can be maximized to reach I(Y; X), which, in these conventional understandings, is believed to contain all information about the target Y.
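As a toy illustration of objective (1) (a sketch with hypothetical joint distributions and an arbitrary β, not the paper's experiments), both mutual-information terms can be computed exactly for small discrete variables:

```python
import numpy as np

def mutual_information(joint):
    """I(A;B) in nats, from a 2-D joint probability table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal p(a), shape (|A|, 1)
    pb = joint.sum(axis=0, keepdims=True)   # marginal p(b), shape (1, |B|)
    nz = joint > 0                          # skip zero cells: 0 * log(0/...) = 0
    return float((joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])).sum())

# Hypothetical joint tables for binary variables: a noisy "channel" from the
# input X to the latent X~, and the dependence of X~ on the target Y.
p_x_xt = np.array([[0.4, 0.1],
                   [0.1, 0.4]])            # p(x, x~)
p_y_xt = np.array([[0.35, 0.15],
                   [0.15, 0.35]])          # p(y, x~)

beta = 2.0                                 # illustrative trade-off weight
ib_objective = mutual_information(p_x_xt) - beta * mutual_information(p_y_xt)
print(ib_objective)
```

Minimizing this quantity over the choice of representation compresses X̃ (small I(X; X̃)) while keeping it predictive of Y (large I(Y; X̃)), which is exactly the trade-off controlled by β in (1).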
However, we noticed that the information about contents / background scenes in the inputs is supposed to be simply retained, without the need for an abstraction process across network layers. Moreover, in actual practice, many commonly-used DGMs (such as U-Net [58]) even provide structures like skip connections to allow passing this low-level information directly and intact to the decoder / generation part of the model without going through the encoders. Thus in this study, we consider that besides the high-level features learned by the encoders, X̃, considerable low-level information from the inputs, such as the background pixels, may probably be retained throughout the networks of DGMs (I(X | X̃; Ỹ)). In addition, the conventional interpretation above assumes that the entirety of a restoration target Y can be retrieved from its corresponding source input X. But in the real scenarios of many image restoration tasks, X may not contain all the information required for restoring the target Y (I(Y; X) ≠ H(Y)): some observed background pixels may be seriously distorted, blurred, or damaged, or may even be completely covered by the visual degradations. Thus, relevant information may have already been lost and may not be recoverable using only the information from a single input. In fact, most data-driven generative models tend to more or less "imagine" the missing contents based on the predictions of network parameters or external knowledge learned from multiple inputs [37], [59].
Therefore, to sum up, we consider that three sources of information are involved in generating the restoration results Ỹ:

1) high-level information from the feature embeddings / latent space representations X̃ extracted by the encoder networks / feature extraction models: I(X̃; Ỹ);
2) low-level information in the source inputs X that passes directly through the skip connections or is retained intact in the results Ỹ: I(X | X̃; Ỹ);
3) external information introduced by the parameters of the networks into the restoration results without coming from the source inputs: H(Ỹ | X, X̃).

Based on the above insights, we re-interpreted the flow of information as in Figure 2b, where we deduced that the learning processes optimize the above three sources of information correspondingly (see Appendix B for more detailed analysis and explanations). By analyzing the possible ranges of each part of the information, we can derive the overall training objectives and the corresponding optimization boundaries for each of its components as follows (derivation and proof are attached in Appendix C):

min[I(X; X̃; Ỹ) − β1 I(X | X̃; Ỹ) − β2 I(Y | X; Ỹ)]    (3)

s.t.  I(X; X̃; Ỹ) ≥ −H(X | Y),  I(X | X̃; Ỹ) ≤ H(X),  I(Y | X; Ỹ) ≤ H(Y | X)    (4)

where β1 and β2 are positive coefficients.
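The three-source split above can be sanity-checked against the chain rule for mutual information. Reading the paper's $I(X \mid \tilde{X}; \tilde{Y})$ as the conditional mutual information $I(X; \tilde{Y} \mid \tilde{X})$ (our interpretation of the notation, an assumption), the entropy of the generated output decomposes as:

```latex
H(\tilde{Y})
= I(X,\tilde{X};\tilde{Y}) + H(\tilde{Y}\mid X,\tilde{X})
= \underbrace{I(\tilde{X};\tilde{Y})}_{\text{high-level, extracted}}
+ \underbrace{I(X;\tilde{Y}\mid \tilde{X})}_{\text{low-level, retained}}
+ \underbrace{H(\tilde{Y}\mid X,\tilde{X})}_{\text{external}},
```

so every bit of information in the restoration result $\tilde{Y}$ is accounted for by exactly one of the three sources listed above.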
In simple terms, we interpret the internal process and learning behaviors of the generative methods in image restoration as follows:

1) rather than doing only background extraction, the encoder networks process and deliver both the features / patterns of the visual degradations, H(X | Y), and the information of contents / background scenes in the source images, I(X | X̃; Ỹ), if sufficient amounts of information are allowed to pass, while the removal of H(X | Y) happens in the generation process of the decoder networks (rather than in the extraction process of the encoder networks);
2) in the encoder parts of the networks, the two kinds of information above can be disentangled according to their differences in levels of abstraction, and are therefore processed by different structures of the networks: the high-level information I(X̃; Ỹ) in the latent representation extracted by the encoders will be optimized to approximate the visual degradations (H(X | Y)), while the contents / background information to be restored is considered low-level information that can be retained throughout the network without going through the abstraction process across the encoder networks, and this part of the information, I(X | X̃; Ỹ), will be optimized when the intact amount of information of the source input, H(X), can be passed;
3) besides the information extracted or retained from the inputs, the decoder network may also involve external knowledge in its restoration outputs, H(Ỹ | X, X̃), which is optimized to approach / complete the information of the targets that is absent from the inputs, H(Y | X).

IV. EXISTING PROBLEMS & ANALYSIS

Many existing generative methods for image restoration tasks tend to be simple applications of general DGMs that were originally designed for conventional generation problems.
According to the above theory, we can identify three critical issues (corresponding to the optimization of the three information sources above) in conventional DGMs that probably contribute to the performance gaps.

A. Problem 1: Over-invested Abstraction Process

Description: Features / patterns of visual degradations in an image restoration task only require a specific level of abstraction for extraction / simulation and occupy only a certain amount of information. However, conventional DGMs tend to contain excessive abstraction processes, which may not help the performance of image restoration tasks, bring in unnecessary network parameters, and may even involve noise / irrelevant information.

Intuition / Observation: DGMs designed for conventional generation problems are supposed to learn higher-level semantic features that globally span large pixel areas, while visual degradations in image restoration tasks tend to be locally distributed and relatively lower-level features according to Marr's definition [60].

Analysis / Theoretical Explanation: See Appendix D.

B. Problem 2: Inherent Details Loss

Description: The network structures of conventional DGMs do not support retaining intact inputs in the generated results, where low-level information may be discarded inherently in both the extraction and generation processes. In image restoration tasks, this mainly corresponds to the loss of background information and fine-grained details, contributing to severe distortion and poor quantitative performance in the restoration results.

Intuition / Observation: Traditional generation problems pay more attention to the high-level consistency of the generated results and encourage variations in the low-level details, but this can be fatal to image restoration tasks.

Analysis / Theoretical Explanation: See Appendix E.

C. Problem 3: Vanishing Gradient & Imbalanced Training

Description: Loss functions used in conventional generation problems tend to optimize two uneven component objectives when applied to image restoration tasks. Thus, they may no longer provide smooth gradients for the continuous convergence of models and may drop abruptly during the training process, contributing to vanishing gradients or even leading to an imbalance in the updating between the generators and the discriminators in GAN-based methods.

Intuition / Observation: In conventional generation tasks, the inputs are often independent of the target outputs (like random noise) or do not contain much information about the targets. But for image restoration tasks, the source inputs and the targets tend to share considerable similarities (like the majority of the same background pixels). Therefore, the models may converge much more easily by utilizing this similar information but may find it difficult to learn knowledge about the targets that is not involved in the inputs.

Analysis / Theoretical Explanation: See Appendix F.

V. SOLUTIONS & METHODS

To improve the performance of generative methods for image restoration tasks, in this section we indicate general solutions / suggestions for the above problems as well as specific methods to validate them respectively. To prevent the over-invested abstraction process, we need to investigate the minimum requirements for extracting / simulating the corresponding visual degradations in the image restoration tasks, and therefore remove the unnecessary abstraction process and redundant network parameters. For DGMs based on Convolutional Neural Networks (CNNs), this process of abstraction is often realized by the down-and-up-sampling mechanism.
Thus, we posit that for each kind of visual degradation, there exists a specific number of down-and-up-sampling layers and a certain dimensionality of the latent representations that is sufficient to fully simulate / extract the patterns / features of this degradation, and layers or dimensions beyond these numbers may do no good to the restoration performance. To reduce the inherent details loss, we need to handle the discard of low-level information both before and inside the decoder networks. For the first part of the information loss, a global skip connection that passes intact inputs directly to the decoder networks may solve it. But in CNN-based DGMs, this may not be easily applicable without affecting the latent representations H(X̂). As a more general solution, we propose increasing the total amount of information in the inputs so as to guarantee that more information can be retained. According to H(X|X̂) = H(X) − I(X; X̂), where I(X; X̂) can be regarded as a constant (upper-bounded by the amount of information of the visual degradations H(X|Y)), increasing H(X) may help to increase H(X|X̂). More specifically, to achieve this goal, we put an information accumulation (InfoAccum) module before the DGMs, which enhances the extraction and accumulation of the inputs' information before sending it to the encoder; the number of layers in this module reflects the total amount of this accumulated information (see Appendix G for more details and relevant discussion). As for the second part of the details loss, which happens inside the decoder network, we need to search for a decoder network that is powerful enough to: (i) retain all information it receives in its outputs, (ii) parse the latent representation extracted by the encoder and remove the information of visual degradations, as well as (iii) learn external knowledge for completing the missing details.
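The entropy identity behind the InfoAccum argument, H(X|X̂) = H(X) − I(X; X̂), can be checked numerically on a toy joint distribution; a minimal sketch, where the joint pmf is invented purely for illustration and X̂ is a deterministic coarse "feature" of X:

```python
import math
from collections import defaultdict

# Toy joint pmf p(x, x_hat): x_hat is a coarse feature of x, chosen only
# to illustrate the identity H(X | X_hat) = H(X) - I(X; X_hat).
joint = {(0, 0): 0.3, (1, 0): 0.2, (2, 1): 0.4, (3, 1): 0.1}

def entropy(pmf):
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(joint, axis):
    m = defaultdict(float)
    for k, p in joint.items():
        m[k[axis]] += p
    return m

H_joint = entropy(joint)
H_x = entropy(marginal(joint, 0))
H_xhat = entropy(marginal(joint, 1))

I_x_xhat = H_x + H_xhat - H_joint   # mutual information I(X; X_hat)
H_x_given_xhat = H_joint - H_xhat   # conditional entropy H(X | X_hat)

# The identity used in the text holds exactly:
assert abs(H_x_given_xhat - (H_x - I_x_xhat)) < 1e-12
```

With I(X; X̂) fixed, the only way to raise the residual low-level term H(X|X̂) is to raise H(X), which is what the InfoAccum module aims at.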
For CNN-based DGMs, we consider that an enhancement of the upsampling methods of the decoder network may help. As for the vanishing gradient and imbalance when training GAN models, we suggest using more sensitive measures as loss functions in the later stages of training, and we consider that pre-training on image reconstruction (for autoencoders or generators in GANs) and on extra datasets (for discriminators in GANs) may help to accelerate convergence and balance the two models in a GAN architecture. VI. EXPERIMENTS Here we provide empirical evidence for the above three problems respectively and validate our proposed solutions as well as the theoretical framework with general experiments on six benchmarks of different image restoration tasks. 1) Empirical Evidence of Problem 1: To prove the existence of over-invested abstraction processes, we investigate the image restoration performance of DGMs with different levels of abstraction. Here we adopted two common types of backbone DGMs for image-to-image translation: convolutional encoder-decoder networks without skip connections and U-Net-like networks with skip connections. Even more interesting is that more complicated visual degradations seem to require higher levels of abstraction (Tab. I), which intuitively makes sense. 2) Empirical Evidence of Problem 2: Although the problem of details loss can be easily observed in the corresponding generated results (Fig. 5), we also prove it quantitatively by training these generator models directly as AEs (learning to reconstruct the input images) (see Table III). To verify that this problem relates to the loss of low-level information before the decoder, we apply the InfoAccum module with different numbers of layers to the baseline models and investigate their image restoration performance. The corresponding results (Fig. 6) indicate that the overall performance of the models does improve as the number of layers in the InfoAccum module increases.
In addition, we also tried putting the InfoAccum module at different positions in the DGMs (not only before the encoders) and replacing the InfoAccum module with other more complicated network modules (see Appendix G). The relevant results validate that the InfoAccum module works by accumulating the inputs' information, which does help with the restoration tasks. Fig.: (a) input; (b) pix2pix; (c) pix2pix + InfoAccum. To demonstrate that the details loss also happens in the decoder networks, we propose to enhance the baselines' decoders by adopting sub-pixel convolution [62] as the upsampling method (denoted as SubPixUpsamp) of their top layers. Apparent improvements can be observed in both the image reconstruction (Table III) and image restoration tasks (Table II). 3) Empirical Evidence of Problem 3: The problem can be observed in the loss curves during training (Fig. 7), where the loss functions tend to converge fast in the first stage but suddenly slow down in the second stage. This coincides with our earlier analysis, where the loss functions tend to optimize two component objectives with different gradients. We further noticed that pre-training DGMs on image reconstruction before training on specific restoration tasks can alleviate this kind of problem and allow easier convergence. For GAN-based methods, all trials of our experiments ended up with large values in the generator losses, while the discriminator losses all tended to approach zero. This is commonly regarded as a training failure in GANs, where the discriminators converge much earlier than the generators and thus cannot provide gradients for the generators to continue training. To further validate this imbalance problem, we applied the LSGAN loss [63] to replace the traditional GAN loss based on JS-divergence [37], which works as a more sensitive measure when the distributions of the targets and the generated results are fairly close to each other.
We find that it also allows further convergence of the GAN models and significantly improves the restoration performance over the baselines. More details are explained in Appendix F. 4) General Experiments on Image Restoration Tasks: We generally validate the above problems and the proposed solutions on the benchmarks of different image restoration tasks. Since most existing generative methods tend to be based on the U-Net structure, here we applied the pixel2pixel [40] model (pix2pix), which uses an 8-layer U-Net, as our baseline. We reduced its over-invested abstraction process by using a 5-layer U-Net (UNet-5) (which we found sufficient for most image restoration tasks), equipped it with a 15-layer InfoAccum module (InfoAccum-15), and modified its decoder with SubPixUpsamp to reduce the inherent details loss. Finally, we adopted the LSGAN loss in place of the original loss function to validate the vanishing gradient and imbalanced training problem. We trained and evaluated the above models on the benchmarking datasets of SIDD-Small [64] for image denoising, RESIDE-ITS [65] for image dehazing, Rain800 [45] and Rain1200 [66] for image deraining, as well as RainCityScapes [11] and OutdoorRain-8-2 [50] for the hybrid of deraining and dehazing. More details about the datasets, implementation, and further discussion can be found in Appendix H. The results (Table II) indicate that the proposed solutions achieve apparent improvements with InfoAccum-15, SubPixUpsamp, and LSGAN, with no performance drop for UNet-5. VII. CONCLUSION In this study, we identified three sources of information that are optimized in generative methods for image restoration and re-interpreted their learning mechanism using information theory. We further pointed out three key issues in the existing methods, gave general solutions, and validated them on the benchmarks of different image restoration tasks.
APPENDIX A GENERATIVE METHODS AND DECONSTRUCTIVE METHODS FOR IMAGE RESTORATION: DEFINITION & COMPARISONS In this study, we define generative methods for image restoration as methods that use (conditional) Deep Generative Models (DGMs) [12]-[14] (like Autoencoders (AEs) [24]-[26] and Generative Adversarial Networks (GANs) [37]-[41]) or similar deep neural network models that conduct end-to-end simulations of the entire processes of image restoration task(s) as high-dimensional probability distributions on a latent feature space, and that generate restoration results by sampling from the distributions conditioned on the visually-degraded inputs. In contrast, deconstructive methods are methods that try to simulate only the visual degradations / distortions in specific image restoration tasks as independent layer(s) of pixels (using either deep-learning-based models or other conventional models) and try to describe their integration with the background scene images using handcrafted hypothetical composition models (such as linear additive models). Therefore, whether a handcrafted composition model is involved can be one of the key criteria to distinguish between a generative method and a deconstructive method for image restoration. Compared with deconstructive methods, generative methods try to directly optimize the generated results to approximate the targets' distribution, where both the patterns of the visual degradations and their integration with the background scenes are learned as a whole inside the DGMs. Therefore, we do not need an explicit understanding of the properties of the compositions or the detailed mechanisms behind them; handcrafting composition models is thus no longer required, which elegantly avoids human bias. Moreover, generative methods also have better support for completing missing details / damaged information from the source inputs.
Traditional image restoration methods tend to assume that all information about the restored targets can be fully retrieved from their source inputs, and they tend to ignore the details and information that are completely damaged, seriously distorted, or fully covered by the visual degradations and are unrecoverable from the single input data. Therefore, deconstructive methods may not be able to recover / complete the missing information, unless specifically designed to do so by introducing extra networks [67], [68]. For generative methods, in contrast, this is functionally well-supported: by transferring general knowledge learned from big data, DGMs, especially GAN-based models, can easily fill in missing pixels / lost information and can generate semantically plausible restoration results. Generative methods also have better generalization ability. Unlike deconstructive methods, which tend to be task-specific and require the specialized design of composition models for different tasks, models in generative methods may be generally applicable to different image restoration tasks and even allow "all-in-one" models. In addition, generative methods also benefit from lighter-weight scales and more concise models compared with deconstructive methods that use sophisticated architectures based on handcrafted composition models.
To sum up, the advantages of generative methods compared with deconstructive methods, as well as their existing problems, are as follows: Pros: 1) more accurately simulate real-world scenarios given sufficient training data and can avoid human bias; 2) allow learning knowledge from big data to complete damaged details / missing information, and can generate more semantically plausible restoration results with high fidelity; 3) can be generally applicable to different image restoration tasks without task-specialized designs of composition models, and even allow all-in-one models; 4) have more concise end-to-end models and are lighter-weight in scale without complicated composition models / deconstruction processes, and are thus less likely to overfit; 5) allow end-to-end training without multi-path / multi-stage optimization, which provides direct gradients and efficient updating of parameters during training, and often have much faster inference speed; 6) GAN-based generative methods have better support for unsupervised training of models with unpaired or real-world data. Cons: 1) rely on a much larger amount of training data to converge or achieve competitive performance; 2) tend to be black boxes and less interpretable, and thus can be difficult to design network structures for or to improve. APPENDIX B INFORMATION FLOW & TRAINING OBJECTIVES The proposed information-theoretic framework (the information flow and its training objectives) for generative methods in image restoration tasks can be inferred by analogy from the information analyses of other conventional DGMs. The information bottleneck principle [55], [56], [69] originally focuses on the information extraction process in discriminative deep neural networks (for tasks such as classification, prediction, and dimensionality reduction), while previous works [70]-[72] try to generalize it to explain the training process of DGMs.
Here we re-analyze and indicate the information flows as well as their optimization objectives in different conventional models and thereby derive our proposed interpretation (Fig. 8 compares the relevant frameworks of these models). The original GAN model [37] (Vanilla GAN) can be regarded as a decoder network that attempts to reach a balance between its inputs Z and the targets Y in the generated results Ỹ (Fig. 8a). Z here is random noise, which is responsible for adding variations (mainly low-level details) to the generated results and is thus independent of Y. Conditional GAN (CGAN) [39] and InfoGAN [71] take extra inputs of conditions / class labels, which are related to the target Y (and thus denoted as X̂). Therefore, information about Y guides the generation of Ỹ along two paths: I(Y; X̂; Ỹ) and I(Y|X̂; Ỹ) (Fig. 8b). The models above only play the role of generation, where the inputs to the networks (decoders) are already highly condensed. For generation tasks like image-to-image translation [40], [41], in contrast, the inputs to the models (such as images) are high-dimensional and involve considerable irrelevant information. Hence, encoder networks are equipped for extracting features X̂ from these inputs X before passing them to the decoders for the generation process. The overall training objectives of the models therefore consist of components for both the encoder network (formula in blue: compressing information and fitting the targets' features) and the decoder network (formula in red: optimizing the generated results). Notably, the features X̂ to be extracted are supposed to be the information in the inputs X that can help with the generation and simulation of the targets Y (i.e., the information shared between X and Y: I(Y; X)).
In conventional image-to-image translation tasks, this maintains the consistency of high-level semantics before and after the translation, where the inputs X and the targets Y probably have no dependency or relation except for the high-level features X̂ they share (Fig. 8c). This can be a different story for image restoration tasks. The conventional understanding tends to consider that the inputs X (visually-degraded images) are determined by their corresponding targets Y (images of the clean background scenes), so that the Markov chain above holds (Fig. 8d). It assumes that the information of Y is fully contained in X; thus the process of extracting the relevant information of Y from X is at the same time the process of obtaining the restoration results: I(Y; X) = H(Y). Therefore, there is no need for a network like a decoder to do generation, nor to optimize its generated results based on the limited information it receives by approximating specific targets. Of course, in this study we point out that the conventional understanding above can also be less accurate in interpreting the learning process of generative methods in image restoration. As discussed in the body text, the observed input images X to be restored may not contain all the information about the targets Y (I(Y; X) < H(Y)); thus a generation process with a decoder network can be essential to provide extra information for fully restoring Y. Moreover, unlike in traditional image-to-image translation models, the information passed to the decoder network for generation not only comes from the features X̂ extracted by the encoder network but also flows directly from X without an abstraction process. Therefore, we consider that the generative methods for image restoration should be understood by putting together both the conventional interpretation and the DGMs for image-to-image translation.
Suppose there exists a middle state M between the feature extraction process (which tries to extract all the information from the inputs X and store it in H(M)) and the generation process (which tries to complete the information that is absent from X but required for restoring Y); the information flow can then be written as in Figure ??, and its optimization objectives can be written as optimizing the two processes respectively (which will be further derived to remove M in the next section). Note that M here is DIFFERENT from the middle state between the encoder and the decoder networks, because we found that: both the information about the background scenes to be retained and the information about the features / patterns of the visual degradations are passed from the encoders to the decoders, and both the removal of these visual degradations and the completion of missing information / details happen inside the decoder networks. APPENDIX C PROOF OF OPTIMIZATION BOUNDARIES In this section, we further derive the interpretation in the last section to obtain our proposed information-theoretic framework as well as its component optimization boundaries simultaneously. Given the information flow and the optimization objectives of the models as in Fig.
8e, the fourth objective can be divided into optimizing two components: 0 ≤ I(X|X̂; M) ≤ H(X) (6) Thus, as long as H(X) ≥ H(X|Y), we can easily solve the min-max problem by considering the last two objectives (max I(Y; M) and min I(X; M)) together: min I(Y; X; X̂; M) → −H(X|Y), max I(X|X̂; M) → H(X) (8) where the two paths of information passed to M are optimized to approach the information about the features / patterns of visual degradations (−H(X|Y)) and the intact information of the source inputs (H(X)), respectively, which add together to approach the total amount of information about the targets Y that can be retrieved from the inputs X (H(X) − H(X|Y) = I(Y; X)). For the first two objectives (max I(Y; Ỹ) and max I(M; Ỹ)), since both are maximizations, we can simply obtain the objectives of the two paths of information passed to Ỹ: max I(M; Ỹ) → I(Y; X), max I(Y|M; Ỹ) → H(Y|X) (9) Similarly, the two paths of information add together to approach the intact information of the targets Y (I(Y; X) + H(Y|X)). Since M here is just a hypothetical middle state and no such variable exists in the actual models of generative methods in image restoration, we can easily simplify the above information flow and optimization objectives, where the information boundaries of the optimization objectives are as follows: I(X; X̂; Ỹ) → −H(X|Y), I(X|X̂; Ỹ) → H(X), I(Y|X; Ỹ) → H(Y|X) (10) APPENDIX D EXPLANATION OF PROBLEM 1: OVER-INVESTED ABSTRACTION PROCESS Besides the intuitive explanation in the body text, the problem of the over-invested abstraction process can also be explained using the theoretical framework.
Suppose the total amount of information required to describe the features / patterns of visual degradations is limited to H(X|Y), which is considered to be extracted and passed by the encoder network through I(X; X̂), thus occupying a certain proportion of H(X). For CNN-based DGMs, this process of abstraction is often achieved by the down-and-up-sampling mechanism; denote the total number of down-and-up-sampling layers in a generator network as N and the corresponding amount of information that can pass through each of these layers as H(X)_{N_n} (n ∈ {1, 2, ..., N}). For a U-Net-like generator network (an encoder-decoder network with skip connections connecting the corresponding down-and-up-sampling layers on both sides), the total amount of information that can pass through the encoder network is H(X)_N = Σ_{n=1}^{N} H(X)_{N_n}, which increases with N. When H(X)_N ≥ H(X|Y), continuing to increase N may no longer help to extract features of visual degradations for further improving the performance of models on the restoration tasks, causing excessive network parameters, and may even involve extra noise information H(X̂|X). But for a generator of the encoder-decoder type without skip connections, the total amount of information that can pass is limited by the bottleneck layers: H(X)_N = min{H(X)_{N_n} | n ∈ {1, 2, ..., N}}, which decreases with N. When H(X)_N ≤ H(X|Y), continuing to increase N will contribute to drops in the model's performance because insufficient information can be passed. Therefore, we deem that for both kinds of generator network there exists a specific number of down-and-up-sampling layers N_saturated beyond which continuing to increase N may do no good to the overall performance of the model in the image restoration tasks.
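The monotonicity claimed above can be illustrated with a toy numerical model; this is only an illustration, with invented per-layer capacities standing in for H(X)_{N_n}, and sum vs. min standing in for the U-Net-like and bottleneck cases:

```python
# Toy model of the information that can pass a generator with N
# down-and-up-sampling layers. cap[n] stands in for H(X)_{N_n}, the
# amount of information layer n can pass (invented values).
def unet_capacity(cap):
    # U-Net-like: skip connections let every layer contribute,
    # so the total is the sum over layers -- it grows with N.
    return sum(cap)

def bottleneck_capacity(cap):
    # Plain encoder-decoder: everything must squeeze through the
    # narrowest layer -- the minimum, which shrinks as N grows.
    return min(cap)

caps = [8.0, 4.0, 2.0, 1.0]  # deeper layers pass less information
unet = [unet_capacity(caps[:n]) for n in range(1, 5)]
bottle = [bottleneck_capacity(caps[:n]) for n in range(1, 5)]

assert unet == sorted(unet)                   # increases with N
assert bottle == sorted(bottle, reverse=True) # decreases with N
```

In both cases the capacity eventually crosses the fixed threshold H(X|Y), which is where the saturation depth N_saturated sits.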
APPENDIX E EXPLANATION OF PROBLEM 2: INHERENT DETAILS LOSS For the problem of inherent details loss, since we regard the details as low-level information that is supposed to be retained through the network models without an abstraction process, this part of the information is optimized through the objective max I(X|X̂; Ỹ) → H(X). Specifically, it relates to both parts of the models: it is not only determined by the amount of information passed to the decoders but is also restricted by the decoder networks' capability to retain the relevant information in the generated results. We noticed that in real practice both of the steps above involve the loss of low-level information. The information loss inside the decoder is obvious: in a generative problem, extra information introduced by the network parameters is inevitable (it is also necessary for approximating the absent information H(Y|X)): H(Ỹ|X, X̂) ≠ 0 and H(Ỹ) ≠ H(X̂) ≠ H(X). Thus, only a certain proportion of the information that the decoder receives can be retained in the generated results H(Ỹ). More essentially, considerable low-level information has already been discarded before being passed to the decoder. We noticed that the network structure of existing generator models does not support passing intact low-level information to the decoder without occupying H(X̂), even with skip connections: I(X|X̂; M) < H(X|X̂). Altogether, these two sources of information loss (H(X|M) and H(M|Ỹ; X|X̂)) constitute the total loss of low-level information in the generated results Ỹ (H(X|X̂, Ỹ)). Noticeably, in image restoration tasks the observed inputs X share large proportions of pixels about the background and relevant details with the target outputs Y (I(X; Y) is much larger than in other generative problems), which, we consider, are mainly low-level information.
As a consequence, this inherent discard of low-level information in the generators tends to be more fatal in restoration tasks, contributing to a more serious loss of details and distortion of the background scenes in the generated results (lower I(Y; X|X̂; Ỹ) and thus larger I(Y; X|Ỹ)). APPENDIX F EXPLANATION OF PROBLEM 3: VANISHING GRADIENTS & IMBALANCED TRAINING Given a generator model that tries to generate data Ỹ based on the inputs to the generator X in a bid to approximate the ideal outputs Y, existing measures of loss, both pixel-wise similarities (like MAE or MSE loss) and high-level consistency (like perceptual loss and GAN loss), try to optimize the mutual information between Ỹ and Y (I(Y; Ỹ)). According to the information flow of Ỹ, this can be divided into two parts of optimization objectives: max I(Y; Ỹ) = max I(Y; X; Ỹ) [=: L_1] + max I(Y; Ỹ|X) [=: L_2] (11) For conventional generation tasks, L_1 is often zero or negligible, and optimizing the above measures amounts to only maximizing L_2. Nevertheless, for image-to-image translation in image restoration tasks, we consider that the input images X and the targets Y share more information I(Y; X) than in most other generation tasks, which makes it easy for Ỹ to approximate Y by utilizing this information from X, so objective L_1 converges much faster than L_2 (expected gradients E[∇L_1] > E[∇L_2]). As a consequence, the conventional measures above may yield small values or even fail to provide gradients for further improving the generated results (gradient vanishing). For GAN models, these measures of performance may lead to an imbalance between the generator and the discriminator models. APPENDIX G DETAILS OF METHODS & EXPERIMENTS A.
Datasets In this study, we conducted all training experiments and evaluated the relevant models mainly on six benchmarking datasets of image restoration, as follows: • one image denoising dataset -SIDD-sRGB [64] • one image dehazing dataset -RESIDE [65] • two image deraining datasets -Rain800 [45] & Rain1200 [66] • two datasets in which rain and haze appear simultaneously -RainCityScapes [11] & OutdoorRain [50] To reduce the computational cost of training, we use only the smallest subset of SIDD-sRGB (i.e., SIDD-Small-sRGB) for training our models, but we evaluate our models on the entire benchmark of the SIDD-sRGB dataset (i.e., SIDD-Validation-sRGB). Since the testing set of the OutdoorRain dataset is not yet publicly available, we randomly split its training set with ratio 8:2 (7200:1800) as our OutdoorRain-8-2 dataset in this paper. Noticeably, apart from the observed image inputs and their corresponding ground truths, three of the datasets above, RESIDE, RainCityScapes, and OutdoorRain, provide additional training data as extra supervision for their originally proposed methods. The RESIDE dataset provides the layers of haze for each input, the RainCityScapes dataset contains maps of scene depth for each training sample, and OutdoorRain provides the ground truths of rain streak layers, atmosphere light layers, as well as transmittance layers as supervision. None of this information is used by the generative models; we only use the hazy (rainy) inputs and their ground truths for training and evaluation. Detailed statistics for the datasets are summarized in Table IV. B. Evaluation Metrics We adopted the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) [73], [74] as the quantitative metrics to evaluate the performance of the models on both the image restoration tasks and the image reconstruction task. For both PSNR and SSIM, larger values indicate better performance. C.
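For reference, PSNR follows directly from the mean squared error; a minimal NumPy sketch, assuming 8-bit images with peak value 255:

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a constant error of 10 gray levels gives MSE = 100.
a = np.zeros((8, 8), dtype=np.uint8)
b = np.full((8, 8), 10, dtype=np.uint8)
print(round(psnr(a, b), 2))  # ~28.13 dB
```

SSIM is considerably more involved (local means, variances, and covariances over a sliding window), so in practice a library implementation is preferable to hand-rolling it.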
InfoAccum Module Rather than directly increasing the information in these skip connections, H(X|X̂), we propose to increase the total amount of information in the inputs, H(X), as an alternative solution. For the low-level information we intend to enhance, there is: H(X|X̂) = H(X) − I(X; X̂) (12) Since I(X; X̂) is supposed to be the features of the image degradations, we consider it to be constant. Therefore, we can simply increase the amount of information of the inputs H(X) before sending them to the generator networks to indirectly increase H(X|X̂), without modifying the skip connections or the network structure of the generator network. More specifically, we propose to introduce a network module that enhances the extraction and accumulation of information before sending it to the generator network. We refer to the network structure of the Densely Connected Network (DenseNet) [75]: by using concatenative skip connections, feature maps from the previous layers can be reused in the deeper layers of the network. Thus, the source information from the inputs can be fully retained and repeatedly emphasized for further extraction. For a given input x_0, the output of a Dense Block can be represented as a recursive concatenation of L layers: x_l = concat([x_{l−1}, F_l(x_{l−1})]) (13) where F_l(·) denotes the operations in dense layer l. Notably, by considering the outputs from all L layers as a whole, Ψ(x_0) = concat([F_1(x_0), F_2(x_1), ..., F_L(x_{L−1})]), a concatenation of the extracted feature maps from each layer, the entire output of this kind of structure can be regarded as a concatenation of the input x_0 and these extracted features: x_L = concat([x_0, Ψ(x_0)]). It indicates that the original input x_0 is preserved in its entirety through a direct connection from beginning to end, so the later processes still have intact information about the original source input.
The Residual Network (ResNet) [76] also has a similar network structure, using skip connections to pass information to deeper layers. However, it achieves this in an additive manner, applying an in-place addition of the learned residual features to the layer's input. Therefore, the output feature maps may hardly contain the intact input information for later processing. Many existing methods have also adopted the DenseNet structure in their models, but here we use it differently. For example, Zhang et al. [45] also adopted the DenseNet structure in their GAN-based deraining model. However, instead of placing the DenseNet module before the down-sampling processes to emphasize the input information, it applies dense blocks after the pooling layers of the network, where detail information might already have been lost in the foregoing down-sampling process. Figure 10 illustrates the difference between the previous model and our idea. Furthermore, we consider that some fine-grained details within the patches of the convolutional filters may be obfuscated and hardly recovered if all filters are of the same size. To help with the extraction of these features and to eliminate the interference caused by differences in receptive fields, we adopt the idea of multi-scaling, so as to aggregate contextual information from different receptive fields. More specifically, we refer to Dilated Convolution [77] to obtain a larger receptive field without increasing the number of layers or involving extra parameters, and we achieve the above idea by using a multi-path structure, concatenating convolutions with different dilation rates. Our proposed InfoAccum module is defined as follows: x_l = concat([x_{l−1}, F_l(x_{l−1}), G_l(x_{l−1}), H_l(x_{l−1})]) (14) with F_l(·), G_l(·), and H_l(·) representing the composite functions involving convolutions with dilation rates 1, 3, and 5, respectively.
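The concatenation pattern of Eq. (14) can be sketched as follows. To stay dependency-light, the dilated convolutions F_l, G_l, H_l are stood in by simple NumPy shift operations (placeholders, not real conv layers); what the sketch shows is the concatenative growth, where each layer appends 3g feature channels (g = growth rate) while the original input stays intact at the front:

```python
import numpy as np

def shifted(x, d):
    # Placeholder for a dilated convolution with dilation rate d:
    # here just a spatial shift standing in for F_l / G_l / H_l.
    return np.roll(x, shift=d, axis=0)

def info_accum_layer(x, growth):
    # x: (H, W, C). Each of the three dilation paths contributes
    # `growth` channels; the input is kept in full at the front.
    f = shifted(x, 1)[:, :, :growth]   # dilation rate 1
    g = shifted(x, 3)[:, :, :growth]   # dilation rate 3
    h = shifted(x, 5)[:, :, :growth]   # dilation rate 5
    return np.concatenate([x, f, g, h], axis=2)

x0 = np.random.rand(16, 16, 4)
x = x0
for _ in range(3):                     # 3 InfoAccum layers, growth rate 4
    x = info_accum_layer(x, 4)

# Channels grow as C_l = C_{l-1} + 3 * growth ...
assert x.shape == (16, 16, 4 + 3 * 3 * 4)
# ... and the original input is preserved verbatim at the front.
assert np.array_equal(x[:, :, :4], x0)
```

The two assertions capture the design points from the text: the module's extraction ability scales with the number of layers, while the source input survives intact for the later stages.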
Due to the reusability of features, each dense layer only needs to focus on extracting a small number of features, and the overall feature-extraction ability of the module can be determined solely by the number of dense layers inside. Theoretically, the number of feature maps in the output is related to both the growth rate and the number of layers. But in this case, a larger growth rate is equivalent to adding extra layers, because the inputs to all the dense layers include the complete source data, and thus the features extracted from each layer are independent. Therefore, we simply assign a relatively small value to the growth rate and determine the complexity of the features to be extracted by adjusting only the number of layers, so as to control the feature-extraction ability. The loss of details also exists in the up-sampling process of the decoder network. The earliest up-sampling methods, based on un-pooling (missing pixels are abandoned) or interpolation operations (missing pixels are filled based on their neighbors), involve irreversible information loss. Better solutions try to fill the missing pixels with spatially-adjacent textures or contextual information. For example, in the U-Net of the pixel2pixel model, up-sampling is achieved using deconvolution (transposed convolution), which is useful for involving more general information when filling the missing pixels. However, none of these up-sampling methods retain the input details; they all try to fill the missing pixels with calculated results, which is likely to introduce noise or information inconsistent with the source inputs, or to contribute to Checkerboard Artifacts [78] in the generated results. Sub-pixel convolution [62] is a better solution for up-sampling, which is commonly used in applications like super-resolution for generating higher-quality images.
A sub-pixel convolution module often consists of a convolution layer and a pixel-shuffle operation, in which an input tensor of shape H × W × Cr² is rearranged into an rH × rW × C tensor using a phase shift (r denotes the upscale factor):

PS(T)_{h,w,c} = T_{⌊h/r⌋, ⌊w/r⌋, C·r·mod(w,r) + C·mod(h,r) + c}   (15)

where h, w and c correspond to the height, width and channel index in the resulting image. To retain details in the generated images to the largest extent and prevent the Checkerboard Artifact, we propose to use sub-pixel convolution up-sampling (SubPixUpsamp) at the top layer of the decoder network.

E. General Implementation Details

Generally in this study, we conduct our experiments based on the pixel2pixel model [40]. Thus, after applying the LSGAN loss, the overall training objectives of the GAN model are as follows:

L_G(G, D) = E_{x,y}[(D(x, G(x)) − 1)^2] + λ L_{L1}(G)   (16)
L_D(G, D) = ½ E_{x,y}[(D(x, y) − 1)^2] + ½ E_{x,y}[D(x, G(x))^2]   (17)

Similarly, we also include an L1 loss as a complement to the discriminator for scoring low-frequency information, which reduces blurring and guides the generator in detail adjustment. In case the discriminator fails, the generator can still move in a gradient-appropriate direction:

L_{L1}(G) = E_{x,y}[‖y − G(x)‖_1]   (18)

The L1 loss L_{L1}(G) is joined with the LSGAN MSE loss to form the generator loss (Equation 16), with λ as a hyperparameter. As for the discriminator, we use a 5-layer fully convolutional network and follow the idea of PatchGAN in pixel2pixel. We have mentioned that an imbalance exists between the generator and the discriminator: the discriminator is always the first to converge and thus fails to provide gradients for the generator to continue training. A common understanding here is that the discriminator is more powerful than the generator. However, we also tried reducing the number of layers and using some "weaker" networks as the discriminator, and all experiments ended up the same.
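The phase-shift rearrangement in Equation 15 takes only a few lines of array code. A minimal NumPy sketch in channel-last layout, following Eq. (15)'s index order (PyTorch's `nn.PixelShuffle` implements the channel-first equivalent):

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange an (H, W, C*r*r) tensor into (r*H, r*W, C), as in Eq. (15)."""
    H, W, Cr2 = x.shape
    C = Cr2 // (r * r)
    x = x.reshape(H, W, r, r, C)     # split channels into the (r, r) sub-pixel grid
    x = x.transpose(0, 3, 1, 2, 4)   # interleave so channel k -> offset (h%r, w%r)
    return x.reshape(H * r, W * r, C)

# Four channels of a single pixel become one 2x2 spatial block:
out = pixel_shuffle(np.arange(4.0).reshape(1, 1, 4), r=2)
# per Eq. (15), channel k lands at (h, w) with k = r*(w%r) + (h%r): [[0, 2], [1, 3]]
print(out[..., 0])
```

Because the operation is a pure re-indexing, no pixel values are invented or discarded, which is exactly why it avoids the interpolation-induced detail loss discussed above.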
This may illustrate that the difference between the generated data and the real ground truth does not lie in high-level features, and that a shallow network can also tell them apart. Therefore, instead of elaborating the discriminator network, we try to reinforce the generator network so that it can compete with the discriminator. PatchGAN is found to be still useful in deraining tasks: it processes each image patch identically and independently, which guarantees that when the noise is not uniformly distributed over the input image, the discriminator can still make a general evaluation of the quality of the generated image. We also compared its performance with the multi-scale discriminator proposed in ID-CGAN, where the experimental results turned out to be the same; so, for faster training, we do not use the multi-scale model, which involves more convolution operations. For the training of our model, we use a batch size of 1. In each training iteration, images are randomly cropped to a smaller size as input to our model, in order to augment the training data and improve the model's generalization ability. The ideal crop sizes are chosen among 256, 512, and 1024, depending mainly on the dataset (the crop size should be large enough to contain as complete semantic information as possible; the minimum crop size for the RainCityScapes dataset, for instance, should be 512×512). We employ Adam as the optimizer with a learning rate of 0.0002, first and second momentum values of 0.5 and 0.999, and zero weight decay. All programs are implemented in PyTorch, and all experiments are conducted in a physical environment with an Intel Xeon(R) Silver 4108 CPU and a GeForce RTX 2080 Ti GPU. F.
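The objectives in Equations 16-18 reduce to a few mean-square and mean-absolute terms. A minimal NumPy sketch of how they combine (toy arrays stand in for discriminator scores and images; λ = 100 is pix2pix's usual default, assumed here rather than stated in this section):

```python
import numpy as np

def generator_loss(d_fake_scores, fake, real, lam=100.0):
    """Eq. (16): LSGAN term pushing D's scores on fakes toward 1, plus lam * L1 (Eq. 18)."""
    lsgan = np.mean((d_fake_scores - 1.0) ** 2)
    l1 = np.mean(np.abs(real - fake))
    return lsgan + lam * l1

def discriminator_loss(d_real_scores, d_fake_scores):
    """Eq. (17): real patches scored toward 1, generated patches toward 0."""
    return 0.5 * np.mean((d_real_scores - 1.0) ** 2) + 0.5 * np.mean(d_fake_scores ** 2)

# A perfectly fooled discriminator plus a perfect reconstruction gives zero generator loss:
print(generator_loss(np.ones(4), np.zeros((2, 2)), np.zeros((2, 2))))  # 0.0
```

Note how the L1 term keeps a useful gradient for the generator even when the LSGAN term saturates, which is the fallback role described above.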
Empirical Evidences of Problem 1: Implementation Details. In these experiments, we adjusted the number of down-and-up-sampling layers in the generator models and investigated the corresponding image restoration performances on the four datasets above: SIDD-sRGB, RESIDE, Rain800, and Rain12000. We adopt two common types of backbone generator networks for comparison: a convolutional encoder-decoder without skip connections (denoted En/Decoder, which conventionally uses 2×2 max-pooling for down-sampling and nearest-neighbor interpolation for up-sampling) and UNet [58] (first introduced as a generator in the pixel2pixel model [40] with 8 layers by default, using fully convolutional layers for both down-sampling and up-sampling, with skip connections concatenating the outputs of each level). The scale factors of both networks are set to 2 and, for consistency, we compared the performances of both models with 1 to 8 down-and-up-sampling layers respectively. The results indicate that, for the En/Decoder generators, restoration performance drops sharply after a short climb (at around 2-3 layers) as the number of down-and-up-sampling layers increases. This may be because the amount of information that can pass is limited by the bottleneck of the network: information compressed through more than 2 layers may not suffice for restoring the clean background images. This may also reflect that considerable low-level information is required for the restoration task. For the UNet generators, we observe that performance increases from 1 to 4 layers, meaning that as the number of layers grows, higher-level features can be extracted while information from the previous layers can still pass through the skip connections. However, at around the 5th layer, performance no longer improves even if we continue to increase the number of layers.
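The bottleneck argument can be made concrete by tracking the spatial size of the innermost feature map: with scale factor 2, each down-sampling halves the resolution. A small sketch (the function name is ours):

```python
def bottleneck_side(input_side: int, n_layers: int, scale: int = 2) -> int:
    """Side length of the innermost feature map after n_layers down-samplings
    with the given scale factor."""
    side = input_side
    for _ in range(n_layers):
        side //= scale
    return side

# A 256-pixel crop shrinks to 1x1 after 8 down-samplings, so without skip
# connections everything the decoder uses must squeeze through that one position:
print([bottleneck_side(256, n) for n in (2, 5, 8)])  # [64, 8, 1]
```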
This indicates that the level of features to be learned reaches saturation here, and a 5-layer UNet can already achieve the same performance as the 8-layer UNet used in the pixel2pixel models; extra abstraction may not be helpful for these tasks. The following Figure [] shows the results of the models on three different image restoration tasks. Three sets of experiments were conducted in this section. We first demonstrate the existence of inherent detail loss quantitatively by training the relevant generator networks as autoencoders to perform image reconstruction. We feed the clean background images (ground truth) from the Rain800 dataset into these generator networks and have them output images as similar to their inputs as possible by introducing an MSE loss between inputs and outputs. Higher similarity between inputs and outputs indicates that less information is lost in the generator network. We compare the reconstruction performances of the different generator networks. The results show that none of the generator networks can completely reconstruct the input images, meaning that all of them suffer, to varying degrees, from the problem of detail loss. Notably, the model with detail enhancement (the methods proposed above) achieves the best reconstruction performance, with an average PSNR of the restored images reaching 50.0 and an average SSIM reaching 0.9990. To prove that the detail-loss problem originates from the discarding of low-level information before the decoder network, as well as to verify that improving the extraction and accumulation of input information helps with the problem, we conducted the second experiment: we adjusted the number of dense layers in the module placed before the generator network and investigated the corresponding deraining performances on the Rain800 dataset.
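For reference, the PSNR figures reported here follow the standard definition PSNR = 10·log10(MAX²/MSE). A minimal NumPy sketch, assuming 8-bit images in [0, 255]:

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: reconstruction is lossless
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 1 gray level gives MSE = 1, i.e. about 48.13 dB, the same
# order as the ~50 dB reconstruction quality reported above.
a = np.zeros((8, 8))
b = np.ones((8, 8))
print(round(psnr(a, b), 2))  # 48.13
```

This also explains why the reconstruction experiment is sensitive: PSNR grows without bound as the residual error approaches zero, so any detail discarded inside the generator shows up as a finite ceiling.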
Here we use a 5-layer UNet as the backbone generator and adopt the InfoAccum module to enhance the input information before it is fed to the generator network. The results indicate that as the number of InfoAccum layers increases, the deraining performance of the models also improves significantly. Since the amount of information that actually passes inside the generator networks is constant across all these models, the InfoAccum modules only increase the information in the inputs, which, we consider, plays the role of emphasizing the low-level information in the inputs. To prove that detail loss also happens inside the decoder network, and that enhancing the decoder network helps, we introduced the SubPixUpsamp module into the decoder network and compared its performance on both image deraining and image reconstruction with models without SubPixUpsamp (under similar experimental settings as above). The results for decoders with the SubPixUpsamp module show improvements in both deraining and reconstruction performance, meaning that enhancing the decoder network does help to alleviate the problem of detail loss.

APPENDIX H DISCUSSION & SUPPLEMENTARY EXPERIMENTS

A. Positions of Adding the Detail-Enhancing Module

Originally, we intended to apply the InfoAccum module before the generator network to help the extraction and accumulation of low-level information. We also investigate the models' performances when the module is applied at different positions in the generator networks (Fig. 13). Here we use the 8-layer UNet in pixel2pixel [40] as the backbone generator and insert 15-layer InfoAccum modules before each of its encoder layers ("1st" denotes adding an InfoAccum module before the generator, while "1st-8th" means that 8 InfoAccum modules are added before all 8 encoder layers of the UNet generator). As before, we train the pixel2pixel model on the Rain800 dataset.
We observe that adding the InfoAccum module at the "1st" position brings the greatest improvement, while adding it to deeper layers does not make much difference to the restoration performance of the model. This again reveals the inherent discarding of low-level information in the network structure before the decoder network. Notably, adding an extra InfoAccum module at the "2nd" position also makes a minor improvement; this may indicate that some relatively higher-level information is also enhanced by the InfoAccum module. Related studies have also proposed a considerable number of network modules to enhance the image restoration performance of their models. Here, we compare InfoAccum with some other well-known modules proposed for the single-image deraining task on deraining datasets (Fig. 14), including the Residual deraining module (Residual) [79], Contextualized Dilated Block (ContextDilated) [10], SCAN module (SCAN) [80], Recursive deraining module (Recursive) [36], Attentive Recurrent module (Attention) [46], and an ordinary DenseNet module (Dense), as contrasts to the proposed InfoAccum module. We use a 5-layer UNet as the backbone generator with the SubPixUpsamp module and compare both their image restoration and image reconstruction performances. The results indicate that the proposed InfoAccum module brings a greater improvement over the baseline model than the other network modules.

Fig. 1: Comparison between deconstructive methods and generative methods for image restoration.

Fig. 2: Information flows and optimization objectives of generative methods for image restoration tasks in conventional understanding versus our proposed interpretation.

Fig. 3: Image restoration performances with different numbers of down-and-up-sampling layers.
Here we demonstrate the deraining performance on the Rain800 dataset; similar patterns are observed on other datasets for different image restoration tasks.

Fig. 6: Image restoration performances of baseline DGMs using InfoAccum modules with different numbers of layers.

3) Empirical Evidence of Problem 3: The problem of vanishing gradients and imbalance in GANs is apparent in the training of these conventional DGMs on image restoration tasks: we observed a two-stage convergence in most of our experiments.

Fig. 7: Two-stage variation of the loss function along the training process, with multinomial exponential regression results. [Plot residue: recorded average loss values over 200 epochs; fitting curve 3.9e^{-0.822x} + 4.2e^{-0.016x} + 8.8, with 1st component 3.9e^{-0.822x} + 8.8 and 2nd component 4.2e^{-0.016x} + 8.8.]

Fig. 8: Information flows and optimization objectives among different generative models and tasks.

min I(X; M) = min I(X; X̃; M) + min I(X | X̃; M)   (5)

Because in DGMs I(X; X̃; M) comprises the features extracted by the learnable parameters inside the encoder networks, while I(X | X̃; M) is only the information in X that passes directly through the encoders without a learning process, we can notice that the possible ranges of the two terms above are different (note that X̃ and Ỹ are the variables to be optimized):

−H(X) ≤ I(X; X̃; M) ≤ H(X)

Fig. 9: Information flow and the optimization objectives of the proposed interpretation (max I(X; X̃; Ỹ), max I(X | X̃; Ỹ), max I(Y | X; Ỹ)).

Fig. 10: Comparison between the generator network of ID-CGAN with DenseNet structure and our proposed detail-enhancing generator using InfoAccum module(s).

Fig. 11: Proposed InfoAccum module.

D. Sub-pixel Convolutional Upsampling

Fig. 12: Performances of GAN models on different image restoration tasks using U-Net generator networks with different numbers of down-and-up-sampling layers.

G. Empirical Evidences of Problem 2: Implementation Details

Fig. 13: Deraining performances of pixel2pixel models [40] with InfoAccum modules inserted at different positions of their backbone generator networks.

B. InfoAccum Modules Compared with other Network Modules

Fig. 14: Deraining performance of models with other deraining network modules added before the encoder of the baseline generator model.

TABLE II: Evaluation results on image restoration tasks.

We compare encoder-decoders without skip connection [61] (En/Decoder) and U-Net [58] (UNet), each with different numbers of down-and-up-sampling layers respectively. We train and test these methods on the benchmarks of three different image restoration tasks (see Appendix G for more implementation details). The experimental results (Fig. 3) verify that the number of down-and-up-sampling layers N tends to saturate at certain values N_saturated: increasing N beyond N_saturated does not improve the performance of the model (UNet), or may even cause a performance drop (En/Decoder).

TABLE I: Minimum numbers of down-sampling layers required for learning/simulating the visual degradations of different image restoration tasks. Here we demonstrate the saturated numbers of down-and-up-sampling layers using U-Nets; beyond this number the performances do not increase. See Appendix G for more details.

Fig. 5: Examples of image restoration results from conventional DGMs with and without detail enhancement on real-world data. Here we demonstrate examples of deraining results using the pixel2pixel [40] model with and without the InfoAccum module. We can observe that even in areas with no rain, the restored results from these conventional DGMs tend to display inaccurately compared to their corresponding ground truths.

Model                                   PSNR    SSIM
En/Decoder                              24.06   0.7194
pix2pix (UNet-8)                        34.20   0.9802
UNet-5                                  34.28   0.9811
UNet-5 + InfoAccum-15                   42.58   0.9967
UNet-5 + InfoAccum-15 + SubPixUpsamp    49.41   0.9987

TABLE III: Image reconstruction performances of conventional DGMs with and without detail enhancement (Table II).

TABLE IV: Information of the datasets used in our experiments.

[1] B. Gunturk and X. Li, Image Restoration. CRC Press, 2018.
[2] L. Fan, F. Zhang, H. Fan, and C. Zhang, "Brief review of image denoising techniques," Visual Computing for Industry, Biomedicine, and Art, vol. 2, no. 1, pp. 1-12, 2019.
[3] B. Goyal, A. Dogra, S. Agrawal, B. S. Sohi, and A. Sharma, "Image denoising review: From classical to state-of-the-art approaches," Information Fusion, vol. 55, pp. 220-244, 2020.
[4] B. Li, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, and Z. Wang, "Benchmarking single-image dehazing and beyond," IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 492-505, 2018.
[5] W. Yang, R. T. Tan, S. Wang, Y. Fang, and J. Liu, "Single image deraining: From model-based to data-driven and beyond," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 11, pp. 4059-4077, 2020.
[6] C. Tian, L. Fei, W. Zheng, Y. Xu, W. Zuo, and C.-W. Lin, "Deep learning on image denoising: An overview," Neural Networks, vol. 131, pp. 251-275, 2020.
[7] J. Gui, X. Cong, Y. Cao, W. Ren, J. Zhang, J. Zhang, and D. Tao, "A comprehensive survey on image dehazing based on deep learning," arXiv preprint arXiv:2106.03323, 2021.
[8] E. J. McCartney, "Optics of the atmosphere: scattering by molecules and particles," New York, 1976.
[9] S. G. Narasimhan and S. K. Nayar, "Chromatic framework for vision in bad weather," in Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), vol. 1, IEEE, 2000, pp. 598-605.
[10] W. Yang, R. T. Tan, J. Feng, J. Liu, Z. Guo, and S. Yan, "Deep joint rain detection and removal from a single image," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1357-1366.
[11] X. Hu, C.-W. Fu, L. Zhu, and P.-A. Heng, "Depth-attentional features for single-image rain removal," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 8022-8031.
[12] S. Bond-Taylor, A. Leach, Y. Long, and C. G. Willcocks, "Deep generative modelling: A comparative review of VAEs, GANs, normalizing flows, energy-based and autoregressive models," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-1, 2021.
[13] L. Ruthotto and E. Haber, "An introduction to deep generative modeling," GAMM-Mitteilungen, vol. 44, no. 2, p. e202100008, 2021.
[14] A. Oussidi and A. Elhassouny, "Deep generative models: Survey," in 2018 International Conference on Intelligent Systems and Computer Vision (ISCV), IEEE, 2018, pp. 1-8.
[15] R. A. Yeh, T. Y. Lim, C. Chen, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do, "Image restoration with deep generative models," in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2018, pp. 6772-6776.
[16] J. Pan, J. Dong, Y. Liu, J. Zhang, J. Ren, J. Tang, Y.-W. Tai, and M.-H. Yang, "Physics-based generative adversarial models for image restoration and beyond," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 7, pp. 2449-2462, 2020.
[17] L.-W. Kang, C.-W. Lin, and Y.-H. Fu, "Automatic single-image-based rain streaks removal via image decomposition," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1742-1755, 2011.
[18] Y. Luo, Y. Xu, and H. Ji, "Removing rain from a single image via discriminative sparse coding," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 3397-3405.
[19] L. Zhu, C.-W. Fu, D. Lischinski, and P.-A. Heng, "Joint bi-layer optimization for single-image rain streak removal," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2526-2534.
[20] L.-J. Deng, T.-Z. Huang, X.-L. Zhao, and T.-X. Jiang, "A directional global sparse model for single image rain removal," Applied Mathematical Modelling, vol. 59, pp. 662-679, 2018.
[21] Y. Li, R. T. Tan, X. Guo, J. Lu, and M. S. Brown, "Rain streak removal using layer priors," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2736-2744.
[22] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, "Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising," IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142-3155, 2017.
[23] B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, "DehazeNet: An end-to-end system for single image haze removal," IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5187-5198, 2016.
[24] H. Bourlard and Y. Kamp, "Auto-association by multilayer perceptrons and singular value decomposition," Biological Cybernetics, vol. 59, no. 4, pp. 291-294, 1988.
[25] M. Ranzato, C. Poultney, S. Chopra, Y. LeCun et al., "Efficient learning of sparse representations with an energy-based model," Advances in Neural Information Processing Systems, vol. 19, p. 1137, 2007.
[26] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv preprint arXiv:1312.6114, 2013.
[27] A. Majumdar, "Blind denoising autoencoder," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 1, pp. 312-317, 2018.
[28] Z. Zhao, "Image denoising by autoencoder: Learning core representations," The Australian National University, 2012.
[29] Y. Du, J. Xu, Q. Qiu, X. Zhen, and L. Zhang, "Variational image deraining," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020, pp. 2406-2415.
[30] Y. Du, J. Xu, X. Zhen, M.-M. Cheng, and L. Shao, "Conditional variational image deraining," IEEE Transactions on Image Processing, vol. 29, pp. 6288-6301, 2020.
[31] R. Chen and E. M.-K. Lai, "Convolutional autoencoder for single image dehazing," in ICIP, 2019, pp. 4464-4468.
[32] A. Bennur, M. Gaggar et al., "LCA-Net: Light convolutional autoencoder for image dehazing," arXiv preprint arXiv:2008.10325, 2020.
[33] X. Mao, C. Shen, and Y.-B. Yang, "Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections," Advances in Neural Information Processing Systems, vol. 29, 2016.
[34] M. Suganuma, X. Liu, and T. Okatani, "Attention-based adaptive selection of operations for image restoration in the presence of unknown combined distortions," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 9039-9048.
[35] W. Xing and K. Egiazarian, "End-to-end learning for joint image demosaicing, denoising and super-resolution," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 3507-3516.
[36] X. Fu, B. Liang, Y. Huang, X. Ding, and J. Paisley, "Lightweight pyramid networks for image deraining," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 6, pp. 1794-1807, 2019.
[37] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," Advances in Neural Information Processing Systems, vol. 27, 2014.
[38] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv preprint arXiv:1511.06434, 2015.
[39] M. Mirza and S. Osindero, "Conditional generative adversarial nets," arXiv preprint arXiv:1411.1784, 2014.
[40] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1125-1134.
[41] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2223-2232.
[42] A. Alsaiari, R. Rustagi, M. M. Thomas, A. G. Forbes et al., "Image denoising using a generative adversarial network," in 2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT), IEEE, 2019, pp. 126-132.
[43] J. Chen, J. Chen, H. Chao, and M. Yang, "Image blind denoising with generative adversarial network based noise modeling," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3155-3164.
[44] Z. Hong, X. Fan, T. Jiang, and J. Feng, "End-to-end unpaired image denoising with conditional adversarial networks," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 04, 2020, pp. 4140-4149.
[45] H. Zhang, V. Sindagi, and V. M. Patel, "Image de-raining using a conditional generative adversarial network," IEEE Transactions on Circuits and Systems for Video Technology, 2019.
[46] R. Qian, R. T. Tan, W. Yang, J. Su, and J. Liu, "Attentive generative adversarial network for raindrop removal from a single image," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2482-2491.
[47] R. Li, J. Pan, Z. Li, and J. Tang, "Single image dehazing via conditional generative adversarial network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8202-8211.
[48] R. Malav, A. Kim, S. R. Sahoo, and G. Pandey, "DHSGAN: An end to end dehazing network for fog and smoke," in Asian Conference on Computer Vision, Springer, 2018, pp. 593-608.
[49] D. Engin, A. Genç, and H. Kemal Ekenel, "Cycle-dehaze: Enhanced CycleGAN for single image dehazing," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 825-833.
[50] R. Li, L.-F. Cheong, and R. T. Tan, "Heavy rain image restoration: Integrating physics model and conditional adversarial learning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 1633-1642.
[51] B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, "AOD-Net: All-in-one dehazing network," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 4770-4778.
[52] G. Wang, C. Sun, and A. Sowmya, "ERL-Net: Entangled representation learning for single image de-raining," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 5644-5652.
[53] G. Wang, C. Sun, X. Xu, J. Li, Z. Wang, and Z. Ma, "Disentangled representation learning and enhancement network for single image de-raining," in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 3015-3023.
[54] B. Lu, J.-C. Chen, and R. Chellappa, "Unsupervised domain-specific deblurring via disentangled representations," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 10225-10234.
[55] N. Tishby and N. Zaslavsky, "Deep learning and the information bottleneck principle," in 2015 IEEE Information Theory Workshop (ITW), IEEE, 2015, pp. 1-5.
[56] R. Shwartz-Ziv and N. Tishby, "Opening the black box of deep neural networks via information," arXiv preprint arXiv:1703.00810, 2017.
[57] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley, vol. 3, pp. 37-38, 1991.
[58] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2015, pp. 234-241.
[59] Generalized denoising auto-encoders as generative models.
Y Bengio, L Yao, G Alain, P Vincent, arXiv:1305.6663arXiv preprintY. Bengio, L. Yao, G. Alain, and P. Vincent, "Generalized denoising auto-encoders as generative models," arXiv preprint arXiv:1305.6663, 2013. Vision: A computational investigation into the human representation and processing of visual information, henry holt and co. D Marr, Inc2New York, NYno. 4.2D. Marr, "Vision: A computational investigation into the human repre- sentation and processing of visual information, henry holt and co," Inc., New York, NY, vol. 2, no. 4.2, 1982. Learning deconvolution network for semantic segmentation. H Noh, S Hong, B Han, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionH. Noh, S. Hong, and B. Han, "Learning deconvolution network for semantic segmentation," in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1520-1528. Real-time single image and video superresolution using an efficient sub-pixel convolutional neural network. W Shi, J Caballero, F Huszár, J Totz, A P Aitken, R Bishop, D Rueckert, Z Wang, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionW. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, "Real-time single image and video super- resolution using an efficient sub-pixel convolutional neural network," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 1874-1883. Least squares generative adversarial networks. X Mao, Q Li, H Xie, R Y Lau, Z Wang, S Paul Smolley, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionX. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley, "Least squares generative adversarial networks," in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2794-2802. 
A high-quality denoising dataset for smartphone cameras. A Abdelhamed, S Lin, M S Brown, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). A. Abdelhamed, S. Lin, and M. S. Brown, "A high-quality denoising dataset for smartphone cameras," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. Benchmarking single-image dehazing and beyond. B Li, W Ren, D Fu, D Tao, D Feng, W Zeng, Z Wang, IEEE Transactions on Image Processing. 281B. Li, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, and Z. Wang, "Benchmarking single-image dehazing and beyond," IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 492-505, 2019. Density-aware single image de-raining using a multi-stream dense network. H Zhang, V M Patel, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionH. Zhang and V. M. Patel, "Density-aware single image de-raining using a multi-stream dense network," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 695-704. Drdnet: Detail-recovery image deraining via context aggregation networks. S Deng, M Wei, J Wang, L Liang, H Xie, M Wang, arXiv:1908.10267arXiv preprintS. Deng, M. Wei, J. Wang, L. Liang, H. Xie, and M. Wang, "Drd- net: Detail-recovery image deraining via context aggregation networks," arXiv preprint arXiv:1908.10267, 2019. Gradual network for single image de-raining. W Yu, Z Huang, W Zhang, L Feng, N Xiao, Proceedings of the 27th ACM international conference on multimedia. the 27th ACM international conference on multimediaW. Yu, Z. Huang, W. Zhang, L. Feng, and N. Xiao, "Gradual network for single image de-raining," in Proceedings of the 27th ACM international conference on multimedia, 2019, pp. 1795-1804. N Tishby, F C Pereira, W Bialek, physics/0004057The information bottleneck method. arXiv preprintN. Tishby, F. C. Pereira, and W. 
Bialek, "The information bottleneck method," arXiv preprint physics/0004057, 2000. A A Alemi, I Fischer, J V Dillon, K Murphy, arXiv:1612.00410Deep variational information bottleneck. arXiv preprintA. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy, "Deep variational information bottleneck," arXiv preprint arXiv:1612.00410, 2016. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. X Chen, Y Duan, R Houthooft, J Schulman, I Sutskever, P Abbeel, Advances in neural information processing systems. 29X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, "Infogan: Interpretable representation learning by information maximizing generative adversarial nets," Advances in neural information processing systems, vol. 29, 2016. Ib-gan: Disengangled representation learning with information bottleneck generative adversarial networks. I Jeon, W Lee, M Pyeon, G Kim, I. Jeon, W. Lee, M. Pyeon, and G. Kim, "Ib-gan: Disengangled rep- resentation learning with information bottleneck generative adversarial networks," 2021. Image quality metrics: Psnr vs. ssim. A Horé, D Ziou, 2010 20th International Conference on Pattern Recognition. A. Horé and D. Ziou, "Image quality metrics: Psnr vs. ssim," in 2010 20th International Conference on Pattern Recognition, 2010, pp. 2366- 2369. Structural similarity quality metrics in a coding context: exploring the space of realistic distortions. A C Brooks, X Zhao, T N Pappas, IEEE Transactions on image processing. 178A. C. Brooks, X. Zhao, and T. N. Pappas, "Structural similarity quality metrics in a coding context: exploring the space of realistic distortions," IEEE Transactions on image processing, vol. 17, no. 8, pp. 1261-1273, 2008. Densely connected convolutional networks. G Huang, Z Liu, L Van Der Maaten, K Q Weinberger, Proceedings of the IEEE confer. the IEEE conferG. Huang, Z. Liu, L. Van Der Maaten, and K. Q. 
Weinberger, "Densely connected convolutional networks," in Proceedings of the IEEE confer- ence on computer vision and pattern recognition, 2017, pp. 4700-4708. Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionK. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778. Multi-scale context aggregation by dilated convolutions. F Yu, V Koltun, arXiv:1511.07122arXiv preprintF. Yu and V. Koltun, "Multi-scale context aggregation by dilated convolutions," arXiv preprint arXiv:1511.07122, 2015. Checkerboard artifact free sub-pixel convolution: A note on sub-pixel convolution, resize convolution and convolution resize. A Aitken, C Ledig, L Theis, J Caballero, Z Wang, W Shi, arXiv:1707.02937arXiv preprintA. Aitken, C. Ledig, L. Theis, J. Caballero, Z. Wang, and W. Shi, "Checkerboard artifact free sub-pixel convolution: A note on sub-pixel convolution, resize convolution and convolution resize," arXiv preprint arXiv:1707.02937, 2017. Removing rain from single images via a deep detail network. X Fu, J Huang, D Zeng, Y Huang, X Ding, J Paisley, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionX. Fu, J. Huang, D. Zeng, Y. Huang, X. Ding, and J. Paisley, "Removing rain from single images via a deep detail network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 3855-3863. Recurrent squeezeand-excitation context aggregation net for single image deraining. X Li, J Wu, Z Lin, H Liu, H Zha, Proceedings of the European Conference on Computer Vision (ECCV). the European Conference on Computer Vision (ECCV)X. Li, J. Wu, Z. Lin, H. Liu, and H. 
Zha, "Recurrent squeeze- and-excitation context aggregation net for single image deraining," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 254-269.
Distributed Order Calculus and Equations of Ultraslow Diffusion

Anatoly N. Kochubei
Institute of Mathematics, National Academy of Sciences of Ukraine, Tereshchenkivska 3, 01601 Kiev, Ukraine

arXiv:math-ph/0703046v2, 14 Aug 2007

Running head: "Equations of Ultraslow Diffusion"

Partially supported by the Ukrainian Foundation for Fundamental Research, Grant 14.1/003.

Key words: distributed order derivative, distributed order integral, ultraslow diffusion, fundamental solution of the Cauchy problem.

AMS subject classifications: 26A33, 35K99, 82C31.

Abstract. We consider equations of the form

$$\bigl(D^{(\mu)}u\bigr)(t,x) - \Delta u(t,x) = f(t,x),$$

where $D^{(\mu)}$ is a distributed order derivative, that is

$$\bigl(D^{(\mu)}\varphi\bigr)(t) = \int_0^1 \bigl(D^{(\alpha)}\varphi\bigr)(t)\,\mu(\alpha)\,d\alpha,$$

where $D^{(\alpha)}$ is the Caputo-Dzhrbashyan fractional derivative of order $\alpha$ and $\mu$ is a positive weight function. The above equation is used in the physical literature for modeling diffusion with a logarithmic growth of the mean square displacement. In this work we develop a mathematical theory of such equations and study the derivatives and integrals of distributed order.

1 Introduction

Fractional diffusion equations with the Caputo-Dzhrbashyan fractional time derivative,

$$\bigl(D^{(\alpha)}_t u\bigr)(t,x) - Bu(t,x) = f(t,x),\qquad t>0,\ x\in\mathbb{R}^n, \tag{1.1}$$

where $0<\alpha<1$ and $B$ is an elliptic differential operator in the spatial variables, are widely used in physics to model anomalous diffusion in fractal media. Physically, the most important characteristic of diffusion is the mean square displacement

$$\langle(\Delta x)^2\rangle = \int_{\mathbb{R}^n} |x-\xi|^2\, Z(t,x-\xi)\,d\xi$$

of a diffusive particle, where $Z$ is a fundamental solution of the Cauchy problem for the diffusion equation. In normal diffusion (described by the heat equation or more general parabolic equations) the mean square displacement of a diffusive particle behaves like $\mathrm{const}\cdot t$ for $t\to\infty$.
A typical behavior for anomalous diffusion on some amorphous semiconductors, strongly porous materials, etc., is $\mathrm{const}\cdot t^\alpha$, and this was the reason to invoke the equation (1.1), usually with $B=\Delta$, where this anomalous behavior is an easy mathematical fact. There are hundreds of physical papers involving equations (1.1); see the surveys [23, 24]. The mathematical theory was initiated independently by Schneider and Wyss [37] and the author [18, 19]; for more recent developments see [9, 10, 15] and references therein.

A number of recent publications by physicists (see [3, 4, 5, 25, 38] and references there) is devoted to the case where the mean square displacement has a logarithmic growth. This ultraslow diffusion (also called "a strong anomaly") is encountered in polymer physics (a polyampholyte hooked around an obstacle), as well as in models of a particle's motion in a quenched random force field, iterated map models, etc. In order to describe ultraslow diffusion, it is proposed to use evolution equations

$$\bigl(D^{(\mu)}_t u\bigr)(t,x) - Bu(t,x) = f(t,x),\qquad t>0,\ x\in\mathbb{R}^n, \tag{1.2}$$

where $D^{(\mu)}$ is the distributed order derivative (introduced by Caputo [2]) of the form

$$\bigl(D^{(\mu)}\varphi\bigr)(t) = \int_0^1 \bigl(D^{(\alpha)}\varphi\bigr)(t)\,\mu(\alpha)\,d\alpha, \tag{1.3}$$

and $\mu$ is a positive weight function. The above physical papers contain some model calculations for such evolution equations.

There are only two mathematical papers on this subject. Meerschaert and Scheffler [22] developed a stochastic model based on random walks with a random waiting time between jumps. Scaling limits of these random walks are subordinated random processes whose density functions solve the ultraslow diffusion equation. The solutions in [22] are understood as solutions of "algebraic" equations obtained if the Laplace transform in $t$ and the Fourier transform in $x$ are applied. Umarov and Gorenflo [39] applied to equations (1.2) Dubinskij's theory [8] of analytic pseudo-differential operators.
This leads to solvability results for (1.2) in the spaces of analytic functions and dual spaces of analytic functionals. Such a theory is very different from the theory of parabolic equations; results obtained this way "do not feel" the difference between the equations (1.2) with $B=\Delta$ and $B=-\Delta$.

The aim of this paper is to develop a theory of the model equation (1.2) with $B=\Delta$ comparable with the classical theory of the Cauchy problem for the heat equation. In particular, we construct and investigate in detail a fundamental solution of the Cauchy problem for the homogeneous equation ($f=0$) and the corresponding kernel appearing in the volume potential solving the inhomogeneous equation, and we prove their positivity and subordination properties. This leads to a rigorous understanding of a solution of the Cauchy problem: it is important to know in which sense a solution satisfies the equation. In its turn, this requires a deeper understanding of the distributed order derivative (1.3) and the availability of its various forms resembling the classical fractional calculus [36]. We also introduce and study a kind of distributed order fractional integral corresponding to the derivative (1.3). A Marchaud-type representation of the distributed order derivative (based on a recent result by Samko and Cardoso [35]) is the main tool for obtaining, in the spirit of [19, 9], uniqueness theorems for the Cauchy problem for general equations (1.2) in the class of bounded functions and, for $n=1$ and $B=d^2/dx^2$, in the class of functions of sub-exponential growth.

Comparing with the theory of the fractional diffusion equation (1.1), we see that the distributed order equations (under reasonable assumptions regarding $\mu$) constitute the limiting case, as $\alpha\to 0$. That is readily observed from the estimates of fundamental solutions having, as $|x|\to\infty$, the form $\exp\bigl(-a|x|^{2/(2-\alpha)}\bigr)$, $a>0$, for the fractional diffusion equations, and $\exp(-a|x|)$ in the case of ultraslow diffusion.
In fact, we begin with the "ordinary" equation $D^{(\mu)}u = \lambda u$, $\lambda\in\mathbb{R}$. If $\lambda<0$, already this equation demonstrates a logarithmic decay of the solution at infinity; see Theorem 2.3 below. In general, the theory presented here is an interesting example of subtle analysis (with kernels from $L_1$ belonging to no $L_p$, $p>1$, etc.) appearing in problems of a direct physical significance.

2 Distributed Order Derivative

2.1. Definitions. Recall that the regularized fractional derivative of a function $\varphi\in C[0,T]$ (also called the Caputo or Caputo-Dzhrbashyan derivative) of an order $\alpha\in(0,1)$ is defined as

$$\bigl(D^{(\alpha)}\varphi\bigr)(t) = \frac{1}{\Gamma(1-\alpha)}\left[\frac{d}{dt}\int_0^t (t-\tau)^{-\alpha}\varphi(\tau)\,d\tau - t^{-\alpha}\varphi(0)\right],\qquad 0<t\le T. \tag{2.1}$$

If $\varphi$ is absolutely continuous, then

$$\bigl(D^{(\alpha)}\varphi\bigr)(t) = \frac{1}{\Gamma(1-\alpha)}\int_0^t (t-\tau)^{-\alpha}\varphi'(\tau)\,d\tau \tag{2.2}$$

(see [9]).

Let $\mu(\alpha)$, $0\le\alpha\le 1$, be a continuous non-negative function, different from zero on a set of positive measure. If a function $\varphi$ is absolutely continuous on $[0,T]$, then by (1.3) and (2.2)

$$\bigl(D^{(\mu)}\varphi\bigr)(t) = \int_0^t k(t-\tau)\varphi'(\tau)\,d\tau, \tag{2.3}$$

where

$$k(s) = \int_0^1 \frac{s^{-\alpha}}{\Gamma(1-\alpha)}\,\mu(\alpha)\,d\alpha,\qquad s>0. \tag{2.4}$$

It is obvious that $k$ is a positive decreasing function. Note that for an absolutely continuous $\varphi$,

$$\frac{d}{dt}\int_0^t k(t-\tau)\varphi(\tau)\,d\tau = \frac{d}{dt}\int_0^t k(s)\varphi(t-s)\,ds = \int_0^t k(s)\varphi'(t-s)\,ds + k(t)\varphi(0),$$

so that

$$\bigl(D^{(\mu)}\varphi\bigr)(t) = \frac{d}{dt}\int_0^t k(t-\tau)\varphi(\tau)\,d\tau - k(t)\varphi(0). \tag{2.5}$$

The right-hand side of (2.5) makes sense for any continuous function $\varphi$ for which the derivative $\frac{d}{dt}\int_0^t k(t-\tau)\varphi(\tau)\,d\tau$ exists. Below we use (2.5) as a general definition of the distributed order derivative $D^{(\mu)}\varphi$.

The necessity to use the regularized fractional derivatives, not the Riemann-Liouville ones (defined as in (2.1), but without subtracting $t^{-\alpha}\varphi(0)$), in relaxation and diffusion problems is caused by the fact that a solution of an equation with a Riemann-Liouville derivative typically has a singularity at the origin $t=0$ (see, for example, [9]), so that the initial state of a system to be described by the equation is not defined and requires a regularization.
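These definitions lend themselves to a quick numerical sanity check. The sketch below (Python; the weight $\mu\equiv 1$ and the test function $\varphi(t)=t$ are illustrative choices, not from the paper, and the quadrature parameters are ad hoc) evaluates the kernel (2.4) by a midpoint rule and verifies that, for $\varphi(t)=t$, the convolution form (2.3) (which reduces to $\int_0^t k(s)\,ds$, since $\varphi'\equiv 1$) agrees with averaging the closed-form Caputo derivatives $D^{(\alpha)}t = t^{1-\alpha}/\Gamma(2-\alpha)$ over $\alpha$, as (1.3) requires.

```python
import math

N = 200                 # midpoint nodes in alpha; the rule avoids the endpoint alpha = 1
H = 1.0 / N
ALPHAS = [(i + 0.5) * H for i in range(N)]
INV_GAMMA = [1.0 / math.gamma(1.0 - a) for a in ALPHAS]   # 1 / Gamma(1 - alpha)

def k(s):
    """Kernel (2.4) with the illustrative weight mu = 1."""
    return H * sum(s ** (-a) * g for a, g in zip(ALPHAS, INV_GAMMA))

def d_mu_of_identity(t):
    """(D^(mu) phi)(t) for phi(t) = t, directly from (1.3):
    the average of t^(1-alpha)/Gamma(2-alpha) over alpha,
    using Gamma(2-alpha) = (1-alpha)*Gamma(1-alpha)."""
    return H * sum(t ** (1.0 - a) * g / (1.0 - a) for a, g in zip(ALPHAS, INV_GAMMA))

def d_mu_conv(t, n_x=4000, x_max=400.0):
    """(D^(mu) phi)(t) for phi(t) = t via (2.3) with phi' = 1, i.e. the
    integral of k(s) over (0, t], computed with the substitution s = t*exp(-x)
    to tame the integrable singularity of k at s = 0."""
    h = x_max / n_x
    total = 0.0
    for j in range(n_x):
        s = t * math.exp(-(j + 0.5) * h)
        total += k(s) * s          # k(s) times ds/dx
    return h * total

print(d_mu_of_identity(1.0), d_mu_conv(1.0))   # the two values should nearly coincide
```

The slow $1/x^2$ decay of the transformed integrand mirrors the logarithmic nature of the kernel: truncating at `x_max` leaves an error of order `1/x_max`, which is why the cutoff is taken large.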
However, mathematically such problems are legitimate. A distributed order derivative with a constant weight, based on the Riemann-Liouville fractional derivative, was introduced by Nakhushev (see [26]). The diffusion equation with such a time derivative and a single spatial variable was investigated by Pskhu [30]. For other definitions of variable order and distributed order derivatives see also [17, 21] and references therein. In this paper we use only the derivatives (2.1) and (2.5).

2.2. Asymptotic properties. Since the kernel (2.4) is among the main objects of the distributed order calculus, it is important to investigate its properties.

Proposition 2.1. If $\mu\in C^3[0,1]$, $\mu(1)\ne 0$, then

$$k(s) \sim s^{-1}(\log s)^{-2}\,\mu(1),\qquad s\to 0, \tag{2.6}$$

$$k'(s) \sim -s^{-2}(\log s)^{-2}\,\mu(1),\qquad s\to 0. \tag{2.7}$$

Proof. Denote $r = -\log s$ ($\to\infty$), $\psi(\alpha) = \dfrac{\mu(\alpha)}{\Gamma(1-\alpha)}$. Then

$$k(s) = \int_0^1 \psi(\alpha)e^{r\alpha}\,d\alpha.$$

Integrating twice by parts, we get

$$k(s) = r^{-2}\int_0^1 \psi''(\alpha)e^{r\alpha}\,d\alpha - r^{-1}\mu(0) - r^{-2}\bigl[\psi'(1)e^r - \psi'(0)\bigr].$$

We have

$$\psi'(\alpha) = \frac{\mu'(\alpha)\Gamma(1-\alpha) + \mu(\alpha)\Gamma'(1-\alpha)}{[\Gamma(1-\alpha)]^2},$$

so that $\psi'(1) = -\mu(1)$, and another integration by parts yields the relation

$$k(s) = \mu(1)\,r^{-2}e^r + O\bigl(r^{-3}e^r\bigr),\qquad r\to\infty,$$

which implies (2.6). The proof of (2.7) is similar.

It follows from (2.6) that $k\in L_1(0,T)$; however $k\notin L_\beta$ for any $\beta>1$. Note also that one cannot integrate by parts in (2.3) because, by (2.7), $k'$ has a non-integrable singularity.

Throughout this paper we use the Laplace transform

$$K(p) = \int_0^\infty k(s)e^{-ps}\,ds = \int_0^1 p^{\alpha-1}\mu(\alpha)\,d\alpha. \tag{2.8}$$

It will often be useful to write (2.8) as

$$K(p) = \int_0^1 e^{(\alpha-1)\log p}\,\mu(\alpha)\,d\alpha. \tag{2.9}$$

Taking the principal value of the logarithm, we extend $K(p)$ to an analytic function on the whole complex plane cut along the half-axis $\mathbb{R}_- = \{\operatorname{Im} p = 0,\ \operatorname{Re} p \le 0\}$.

Proposition 2.2. (i) Let $\mu\in C^2[0,1]$. If $p\in\mathbb{C}\setminus\mathbb{R}_-$, $|p|\to\infty$, then

$$K(p) = \frac{\mu(1)}{\log p} + O\bigl((\log|p|)^{-2}\bigr). \tag{2.10}$$

More precisely, if $\mu\in C^3[0,1]$, then

$$K(p) = \frac{\mu(1)}{\log p} - \frac{\mu'(1)}{(\log p)^2} + O\bigl((\log|p|)^{-3}\bigr). \tag{2.10$'$}$$
(ii) Let $\mu\in C[0,1]$, $\mu(0)\ne 0$. If $p\in\mathbb{C}\setminus\mathbb{R}_-$, $p\to 0$, then

$$K(p) \sim p^{-1}\Bigl(\log\frac{1}{p}\Bigr)^{-1}\mu(0). \tag{2.11}$$

(iii) Let $\mu\in C[0,1]$, $\mu(\alpha)\sim a\alpha^\lambda$, $a>0$, $\lambda>0$. If $p\in\mathbb{C}\setminus\mathbb{R}_-$, $p\to 0$, then

$$K(p) \sim a\Gamma(1+\lambda)\,p^{-1}\Bigl(\log\frac{1}{p}\Bigr)^{-1-\lambda}. \tag{2.11$'$}$$

Proof. (i) Integrating by parts, as in the proof of Proposition 2.1, we find that

$$\int_0^1 e^{\alpha r}\mu(\alpha)\,d\alpha = \frac{\mu(1)e^r}{r} + O\bigl(r^{-2}e^r\bigr),$$

from which (2.10) and (2.10$'$) follow.

2.3. "Ordinary" equations. Let us consider the simplest equation with a distributed order derivative, that is

$$\bigl(D^{(\mu)}u_\lambda\bigr)(t) = \lambda u_\lambda(t),\qquad t>0, \tag{2.12}$$

where $\lambda\in\mathbb{R}$, and it is assumed that a solution satisfies the initial condition $u(0)=1$. A solution of (2.12) should be seen as an analog of the exponential function $t\mapsto e^{\lambda t}$ of the classical analysis and of the function $t\mapsto E_\alpha(\lambda t^\alpha)$, where $E_\alpha$ is the Mittag-Leffler function, appearing for the equation with the regularized fractional derivative of order $\alpha\in(0,1)$ (see [9]). The equation (2.12) with $\lambda<0$ is discussed in [16] as the one describing distributed order relaxation. The uniqueness of a solution will follow from the uniqueness theorem for the equation (1.2); see Theorem 6.1. Of course, the method of proof of the latter theorem can be used to prove separately the uniqueness for the much simpler equation (2.12). Below we assume that $\mu\in C^2[0,1]$, $\mu(1)\ne 0$, $\lambda\ne 0$; evidently, $u_0(t)\equiv 1$.

Applying formally the Laplace transform to the equation (2.12) and taking into account the initial condition $u(0)=1$, for the transformed solution $\widetilde{u}_\lambda(p)$ we get

$$\widetilde{u}_\lambda(p) = \frac{K(p)}{pK(p)-\lambda}. \tag{2.13}$$

The function (2.13) is analytic on the half-plane $\operatorname{Re} p > \gamma$, if $\gamma>0$ is large enough. We have $\widetilde{u}_\lambda(p) \sim p^{-1}$, $p = \sigma + i\tau$, $\sigma,\tau\in\mathbb{R}$, $|\tau|\to\infty$. Therefore [7] $\widetilde{u}_\lambda$ is indeed the Laplace transform of some function $u_\lambda(t)$, and for almost all values of $t$,

$$u_\lambda(t) = \frac{d}{dt}\,\frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty} \frac{e^{pt}}{p}\,\frac{K(p)}{pK(p)-\lambda}\,dp. \tag{2.14}$$

Let $\frac12 < \omega < 1$.
We will often use the contour $S_{\gamma,\omega}$ in $\mathbb{C}$ consisting of the arc

$$T_{\gamma,\omega} = \{p\in\mathbb{C} : |p|=\gamma,\ |\arg p|\le\omega\pi\}$$

and two rays

$$\Gamma^+_{\gamma,\omega} = \{p\in\mathbb{C} : \arg p = \omega\pi,\ |p|\ge\gamma\},\qquad \Gamma^-_{\gamma,\omega} = \{p\in\mathbb{C} : \arg p = -\omega\pi,\ |p|\ge\gamma\}.$$

The contour $S_{\gamma,\omega}$ is oriented in the direction of growth of $\arg p$. By Jordan's lemma,

$$u_\lambda(t) = \frac{d}{dt}\,\frac{1}{2\pi i}\int_{S_{\gamma,\omega}} \frac{e^{pt}}{p}\,\frac{K(p)}{pK(p)-\lambda}\,dp.$$

In contrast to (2.14), here we may differentiate under the integral, so that

$$u_\lambda(t) = \frac{1}{2\pi i}\int_{S_{\gamma,\omega}} e^{pt}\,\frac{K(p)}{pK(p)-\lambda}\,dp. \tag{2.15}$$

Note that $\gamma$ is chosen in such a way that $pK(p)\ne\lambda$ for all $p\in S_{\gamma,\omega}$ (this is possible, since $pK(p)\to\infty$, as $|p|\to\infty$).

The next result establishes qualitative properties of the function $u_\lambda$ resembling those of the exponential function and the Mittag-Leffler function. Recall that a function $u\in C^\infty(0,\infty)$ is called completely monotone [13] if $(-1)^n u^{(n)}(t)\ge 0$ for all $t>0$, $n=0,1,2,\dots$

Theorem 2.3. (i) $u_\lambda\in C^\infty(0,\infty)\cap C[0,\infty)$.

(ii) If $\lambda>0$, then $u_\lambda(t)$ is non-decreasing, and $u_\lambda(t)\ge 1$ for all $t\in(0,\infty)$.

(iii) If $\lambda<0$, then $u_\lambda(t)$ is completely monotone.

(iv) Let $\lambda<0$. If $\mu(0)\ne 0$, then

$$u_\lambda(t) \sim C(\log t)^{-1},\qquad t\to\infty. \tag{2.16}$$

If $\mu(\alpha)\sim a\alpha^\nu$, $\alpha\to 0$ ($a>0$, $\nu>0$), then

$$u_\lambda(t) \sim C(\log t)^{-1-\nu},\qquad t\to\infty. \tag{2.17}$$

Here and below $C$ denotes various positive constants.

Proof. The smoothness of $u_\lambda$ for $t>0$ is evident from (2.15). The integral in (2.15) is the sum of the integral over $T_{\gamma,\omega}$ (obviously continuous at $t=0$) and the functions

$$u^\pm_\lambda(t) = \frac{1}{2\pi i}\int_{\Gamma^\pm_{\gamma,\omega}} e^{pt}\,\frac{K(p)}{pK(p)-\lambda}\,dp.$$

We have

$$u^+_\lambda(t) + u^-_\lambda(t) = \frac{1}{\pi}\operatorname{Im}\left\{e^{i\omega\pi}\int_\gamma^\infty e^{tre^{i\omega\pi}}\,\frac{K(re^{i\omega\pi})}{re^{i\omega\pi}K(re^{i\omega\pi})-\lambda}\,dr\right\} = \frac{1}{\pi}\operatorname{Im}\int_\gamma^\infty r^{-1}e^{tre^{i\omega\pi}}\,dr + \frac{\lambda}{\pi}\operatorname{Im}\int_\gamma^\infty \frac{e^{tre^{i\omega\pi}}}{r\bigl(re^{i\omega\pi}K(re^{i\omega\pi})-\lambda\bigr)}\,dr.$$

The second summand is obviously continuous at $t=0$. The first summand equals

$$-\frac{1}{\pi}\int_{-\gamma t\cos\omega\pi}^\infty s^{-1}e^{-s}\sin(s\tan\omega\pi)\,ds$$

(recall that $\cos\omega\pi<0$), and this expression is continuous at $t=0$.

Let $\lambda>0$. The function $p\mapsto \dfrac{1}{p-\lambda}$ is the Laplace transform of the function $t\mapsto e^{\lambda t}$. Therefore [13] it is completely monotone.
The Laplace transform of the function $u'_\lambda(t)$ equals

$$p\widetilde{u}_\lambda(p) - u_\lambda(0) = \frac{pK(p)}{pK(p)-\lambda} - 1 = \frac{\lambda}{pK(p)-\lambda}$$

(strictly speaking, we have to use this formula, together with the asymptotics of $K(p)$, to prove the existence of the Laplace transform of $u'_\lambda$). On the other hand, the function

$$pK(p) = \int_0^1 p^\alpha\mu(\alpha)\,d\alpha$$

is positive, while its derivative is completely monotone. By Criterion 2 of complete monotonicity (see Chapter XIII of [13]), the Laplace transform of the function $u'_\lambda(t)$ is completely monotone. It follows from the Bernstein theorem about completely monotone functions and the uniqueness property of the Laplace transform that $u'_\lambda(t)\ge 0$ for all $t>0$, whence $u_\lambda$ is non-decreasing and $u_\lambda(t)\ge 1$.

Let $\lambda<0$. Up to now, $\gamma$ was chosen so big that $pK(p)\ne\lambda$ for all $p\in S_{\gamma,\omega}$. In fact,

$$\operatorname{Im} pK(p) = \int_0^1 |p|^\alpha\mu(\alpha)\sin(\alpha\arg p)\,d\alpha,$$

so that $\operatorname{Im} pK(p) = 0$ only for $\arg p = 0$. Meanwhile, if $\arg p = 0$ and $\lambda<0$, then

$$pK(p) - \lambda = \int_0^1 |p|^\alpha\mu(\alpha)\,d\alpha - \lambda > 0.$$

Therefore, the above integral representation of $u_\lambda$ holds for any $\gamma>0$. Since $|K(p)| \le C|p|^{-1}\bigl(\log\frac{1}{|p|}\bigr)^{-1}$ for small $|p|$, we find that

$$\left|\frac{K(p)}{pK(p)-\lambda}\right| \le C|p|^{-1}\Bigl(\log\frac{1}{|p|}\Bigr)^{-1},$$

whence

$$\left|\int_{T_{\gamma,\omega}} e^{pt}\,\frac{K(p)}{pK(p)-\lambda}\,dp\right| \le Ce^{\gamma t}\Bigl(\log\frac{1}{\gamma}\Bigr)^{-1} \to 0,\qquad \gamma\to 0.$$

Considering the other summands in the integral representation of $u_\lambda$, we see that

$$\frac{1}{\pi}\operatorname{Im}\int_\gamma^\infty r^{-1}e^{tre^{i\omega\pi}}\,dr = -\frac{1}{\pi}\int_{-\gamma t\cos\omega\pi}^\infty s^{-1}e^{-s}\sin(s\tan\omega\pi)\,ds \longrightarrow -\frac{1}{\pi}\int_0^\infty s^{-1}e^{-s}\sin(s\tan\omega\pi)\,ds,\qquad \gamma\to 0.$$

Next, we have to consider the expression

$$I_1 + I_2 \overset{\mathrm{def}}{=} \frac{\lambda}{\pi}\int_\gamma^\infty \operatorname{Im}\left(\frac{e^{tre^{i\omega\pi}}}{r}\right)\operatorname{Re}\frac{1}{re^{i\omega\pi}K(re^{i\omega\pi})-\lambda}\,dr + \frac{\lambda}{\pi}\int_\gamma^\infty \operatorname{Re}\left(\frac{e^{tre^{i\omega\pi}}}{r}\right)\operatorname{Im}\frac{1}{re^{i\omega\pi}K(re^{i\omega\pi})-\lambda}\,dr.$$

We have

$$\operatorname{Im}\frac{e^{tre^{i\omega\pi}}}{r} = r^{-1}e^{tr\cos\omega\pi}\sin(tr\sin\omega\pi),$$

and this expression has a finite limit, as $r\to 0$. Since also $pK(p)\to 0$, as $p\to 0$, we see that we may pass to the limit in $I_1$, as $\gamma\to 0$. In order to consider $I_2$, we have to study the function

$$\Phi(r,\omega) = \operatorname{Im}\frac{1}{re^{i\omega\pi}K(re^{i\omega\pi})-\lambda}.$$
We have

$$re^{i\omega\pi}K(re^{i\omega\pi}) = \int_0^1 (re^{i\omega\pi})^\alpha\mu(\alpha)\,d\alpha = \int_0^1 e^{-\alpha(s-i\omega\pi)}\mu(\alpha)\,d\alpha,\qquad s = -\log r \to\infty,\ \text{as } r\to 0,$$

so that

$$\Phi(r,\omega) = \frac{-\int_0^1 e^{-\alpha s}\sin(\alpha\omega\pi)\mu(\alpha)\,d\alpha}{\left[\int_0^1 e^{-\alpha s}\cos(\alpha\omega\pi)\mu(\alpha)\,d\alpha - \lambda\right]^2 + \left[\int_0^1 e^{-\alpha s}\sin(\alpha\omega\pi)\mu(\alpha)\,d\alpha\right]^2}.$$

As $s\to\infty$, the denominator tends to $\lambda^2$, while in the numerator

$$\left|\int_0^1 e^{-\alpha s}\sin(\alpha\omega\pi)\mu(\alpha)\,d\alpha\right| \le C\int_0^1 \alpha e^{-\alpha s}\,d\alpha \le \frac{C}{s^2} = \frac{C}{(\log r)^2}.$$

This makes it possible to pass to the limit in $I_2$, as $\gamma\to 0$, so that

$$u_\lambda(t) = -\frac{1}{\pi}\int_0^\infty s^{-1}e^{-s}\sin(s\tan\omega\pi)\,ds + \frac{\lambda}{\pi}\int_0^\infty r^{-1}e^{tr\cos\omega\pi}\sin(tr\sin\omega\pi)\Psi(r,\omega)\,dr + \frac{\lambda}{\pi}\int_0^\infty r^{-1}e^{tr\cos\omega\pi}\cos(tr\sin\omega\pi)\Phi(r,\omega)\,dr, \tag{2.18}$$

where

$$\Psi(r,\omega) = \frac{\int_0^1 e^{-\alpha s}\cos(\alpha\omega\pi)\mu(\alpha)\,d\alpha - \lambda}{\left[\int_0^1 e^{-\alpha s}\cos(\alpha\omega\pi)\mu(\alpha)\,d\alpha - \lambda\right]^2 + \left[\int_0^1 e^{-\alpha s}\sin(\alpha\omega\pi)\mu(\alpha)\,d\alpha\right]^2},\qquad s = -\log r.$$

In (2.18), we may pass to the limit, as $\omega\to 1$. It is easy to see that the first two terms in (2.18) tend to zero, so that

$$u_\lambda(t) = \frac{\lambda}{\pi}\int_0^\infty r^{-1}e^{-tr}\,\Phi(r,1)\,dr, \tag{2.19}$$

$$\Phi(r,1) = \frac{-\int_0^1 r^\alpha\sin(\alpha\pi)\mu(\alpha)\,d\alpha}{\left[\int_0^1 r^\alpha\cos(\alpha\pi)\mu(\alpha)\,d\alpha - \lambda\right]^2 + \left[\int_0^1 r^\alpha\sin(\alpha\pi)\mu(\alpha)\,d\alpha\right]^2}.$$

Since $\lambda<0$, it is seen from (2.19) that $u_\lambda$ is the Laplace transform of a positive function. Therefore $u_\lambda$ is completely monotone.

Let $\lambda<0$ and $\mu(0)\ne 0$. As we have proved, $u_\lambda$ is monotone decreasing. It follows from (2.13) and (2.11) that

$$\widetilde{u}_\lambda(p) \sim \frac{C}{p\log\frac{1}{p}},\qquad p\to +0.$$

Applying the Karamata-Feller Tauberian theorem (see Chapter XIII in [13]), we get (2.16). Similarly, if $\mu(\alpha)\sim a\alpha^\nu$, $\alpha\to 0$, we use the asymptotic relation (2.11$'$), and the same Tauberian theorem yields (2.17).

A non-rigorous "physicist-style" proof of the statement (iii) was given in [16], where the asymptotics (2.16) was also found for the case $\mu(\alpha)\equiv 1$.

3 Distributed Order Integral

3.1. Definition. The asymptotic properties of $K(p)$ established above show (see [7]) that the function $p\mapsto \dfrac{1}{pK(p)}$ is the Laplace transform of some function $\kappa(t)$, and

$$\kappa(t) = \frac{d}{dt}\,\frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty} \frac{e^{pt}}{p}\cdot\frac{1}{pK(p)}\,dp,\qquad \gamma>0. \tag{3.1}$$

It is natural to define the distributed order integral $I^{(\mu)}$ as the convolution operator

$$\bigl(I^{(\mu)}f\bigr)(t) = \int_0^t \kappa(t-s)f(s)\,ds.$$

Proposition 3.1. Suppose that $\mu\in C^3[0,1]$, $\mu(1)\ne 0$, and either $\mu(0)\ne 0$ or $\mu(\alpha)\sim a\alpha^\nu$, $a>0$, $\nu>0$.
Then: (i) κ ∈ C ∞ (0, ∞), and κ is completely monotone; (ii) for small values of t, κ(t) ≤ C log 1 t , (3.2) |κ ′ (t)| ≤ Ct −1 log 1 t , (3.3) Proof. As in Sect. 2, we deform the contour of integration in (3.1) and differentiate: κ(t) = 1 2πi Sγ,ω e pt pK(p) dp. (3.4) We will need information about the asymptotic behavior of 1 K(p) . By (2.10 ′ ), K(p) = µ(1) log p − µ ′ (1) (log p) 2 + c(p), c(p) = O 1 (log |p|) 3 , |p| → ∞. Then we can write 1 K(p) − log p µ(1) − µ ′ (1) [µ(1)] 2 = −µ(1)c(p)(log p) 3 + [µ ′ (1)] 2 − µ ′ (1)c(p)(log p) 2 [µ(1)] 2 [µ(1) log p − µ ′ (1) + c(p)(log p) 2 ] , whence 1 K(p) = log p µ(1) + µ ′ (1) [µ(1)] 2 + O 1 (log |p|) , p → ∞. (3.5) The integral in (3.4) consists of the integral over T γ,ω (a function from C ∞ [0, ∞)) and integrals over Γ ± γ,ω . Each of the latter ones is estimated, due to (3.5), by an expression C ∞ γ e −art r −1 log r dr ∼ C log 1 t , t → 0 (a, C > 0; see the asymptotic formula (13.49) in [32]). This implies (3.2). The proof of (3.3) is similarly based on the same asymptotic relation from [32]. In order to prove that κ is completely monotone, we proceed as in the proof of Theorem 2.3, to transform (3.4) into a representation by a Laplace integral. First we pass to the limit, as γ → 0. This is possible because, by Proposition 2.2, either 1 pK(p) ∼ µ(0) log 1 p , p → 0, (3.6) if µ(0) = 0, or 1 pK(p) ∼ C log 1 p 1+ν , p → 0, (3.7) if µ(α) ∼ aα ν , α → 0. Both the relations (3.6) and (3.7) are sufficient to prove that the integral over T γ,ω tends to 0, as γ → 0, while the γ → 0 limits of both the integrals over Γ ± γ,ω exist. We come to the representation κ(t) = 1 π Im    e iωπ ∞ 0 e tre iωπ dr re iωπ K(re iωπ )    . (3.8) We find, introducing the parameter s = − log r → ∞, as r → 0, that re iωπ K(re iωπ ) = 1 0 re iωπ α µ(α) dα = 1 0 e −α(s−iωπ) µ(α) dα = 1 0 e −αs (cos(αωπ) + i sin(αωπ))µ(α) dα. 
Taking into account the logarithmic behavior of the integrand of (3.8) near the origin, we may pass to the limit in (3.8), as ω → 1, and we get that κ(t) = 1 π ∞ 0 e −tr 1 0 r α sin(απ)µ(α) dα 1 0 r α cos(απ)µ(α) dα 2 + 1 0 r α sin(απ)µ(α) dα 2 dr, as desired. Note that, by (3.2), κ ∈ L loc 1 (0, ∞). 3.2. The Marchaud form of the distributed order derivative. If f ∈ L 1 (0, T ), u = I (µ) f , then u = κ * f , D (µ) u (t) = d dt (k * κ * f )(t) = d dt (1 * f ) = d dt t 0 f (τ ) dτ = f (t) almost everywhere. Thus D (µ) I (µ) = I on L 1 (0, T ). The identity (k * κ)(t) ≡ 1 (almost everywhere), which follows from the fact that the product of the Laplace transforms K(p) and 1 pK(p) equals 1 p , means that κ is a Sonine kernel (see [35]). Since both the functions k and κ are monotone decreasing (obviously, k is completely monotone), we are within the conditions of [35], under which the operator D (µ) , on functions u = I (µ) f , f ∈ L p (0, T ), 1 < p < ∞, can be represented in the form D (µ) u (t) = k(t)u(t) + t 0 k ′ (τ )[u(t − τ ) − u(t)] dτ, 0 < t ≤ T, (3.9) where the representation (3.9) is understood as follows. Let (Ψ ε u) (t) = t ε k ′ (τ )[u(t − τ ) − u(t)] dτ, if t ≥ ε, 0, if 0 < t < ε. Then lim ε→0 D (µ) u (t) − k(t)u(t) − (Ψ ε u) (t) Lp(0,T ) = 0. (3.10) The representation (3.9), similar to the Marchaud form of a fractional derivative [36], will be useful for our proofs of uniqueness theorems, because the integral operator in (3.9) has the form enabling the maximum principle approach. On the other hand, the precaution we made, understanding (3.9) in terms of (3.10), cannot be easily avoided, due to the strong singularity of k ′ described by the asymptotics (2.7). We now turn to the diffusion equation, that is the equation with B = ∆: D (µ) t u (t, x) = ∆u(t, x), x ∈ R n , t > 0. (4.1) In this section we construct the fundamental solution Z(t, x) of the Cauchy problem, a solution of (4.1) with Z(0, x) = δ(x), and obtain its estimates.
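Before turning to the fundamental solution, the identity D (µ) I (µ) = I and the Marchaud representation (3.9) lend themselves to a quick numerical sanity check. The sketch below is illustrative only: it assumes SciPy, takes µ ≡ 1, and recalls that the kernel k has Laplace transform K(p), so that k(t) = ∫₀¹ t^(−α)/Γ(1 − α) dα. Applying (3.9) to u(t) = t must reproduce the direct value D (µ) u(t) = ∫₀^t k(s) ds = ∫₀¹ t^(1−α)/Γ(2 − α) dα; the substitution τ = e^(−s) in the Marchaud tail keeps the integrand bounded.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

t = 0.5

def k(tt):
    # k(t) = int_0^1 t^{-a} / Gamma(1 - a) da  (the kernel of D^{(mu)} for mu == 1)
    return quad(lambda a: tt ** (-a) / gamma(1.0 - a), 0.0, 1.0)[0]

# Marchaud tail for u(s) = s:  int_0^t k'(tau) [u(t - tau) - u(t)] dtau
#                            = int_0^t (-tau) k'(tau) dtau.
# Substituting tau = e^{-s} gives a bounded integrand with ~ s^{-2} decay:
def tail_integrand(s):
    return quad(lambda a: a * np.exp(s * (a - 1.0)) / gamma(1.0 - a), 0.0, 1.0)[0]

tail = quad(tail_integrand, np.log(1.0 / t), np.inf, limit=300)[0]
marchaud = k(t) * t + tail

# Direct value: D^{(mu)} u = int_0^t k(s) ds = int_0^1 t^{1-a} / Gamma(2-a) da
direct = quad(lambda a: t ** (1.0 - a) / gamma(2.0 - a), 0.0, 1.0)[0]
print(marchaud, direct)
```

The agreement of the two numbers reflects the integration by parts behind (3.9): ∫₀^t (−τ)k′(τ) dτ = −t k(t) + ∫₀^t k(s) ds, the boundary term vanishing because τk(τ) → 0 as τ → 0.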
Below we use the following normalization of the Fourier transform: u(ξ) = R n e ix·ξ u(x) dx, so that u(x) = 1 (2π) n R n e −ix·ξ u(ξ) dξ. For a radial function u(r), r = |x|, u(r) = 2π n/2 r 2 1− n 2 ∞ 0 ρ n/2 u(ρ)J n 2 −1 (rρ) dρ,(4.2) where J ν is the Bessel function. Applying formally the Laplace transform in t and the Fourier transform in x, we find that Z(p, ξ) = K(p) pK(p) + |ξ| 2 . By (4.2), Z(p, x) = (2π) − n 2 |x| 1− n 2 K(p) ∞ 0 s n/2 pK(p) + s 2 J n 2 −1 (|x|s) ds. (4.3) It is known ([29], 2.12.4.28) that ∞ 0 y ν+1 y 2 + z 2 J ν (cy) dy = z ν K ν (cz), −1 < ν < 3 2 , (4.4) where K ν is the McDonald function. If n ≤ 4, then the above restriction upon ν = n 2 − 1 is satisfied, and (4.4) implies the representation Z(p, x) = (2π) − n 2 |x| 1− n 2 K(p)(pK(p)) 1 2 ( n 2 −1) K n 2 −1 (|x| pK(p)). (4.5) We have simpler formulas in the lowest dimensions -if n = 2, then Z(p, x) = 1 2π K(p)K 0 (|x| pK(p)); (4.6) if n = 1, then Z(p, x) = 1 2 K(p) pK(p) e −|x| √ pK(p) (4.7) because K −1/2 (z) = K 1/2 (z) = π 2z e −z (see [1]). The function K ν decays exponentially at infinity: K ν (z) ∼ π 2z e −z , z → ∞, while K ν (z) ∼ Cz −ν , as z → 0 (if ν > 0), and K 0 (z) ∼ − log z. We see that the function on the right in (4.5) belongs to L 1 (R n ) in x for any n, not only for n ≤ 4. Using the identity ∞ 0 rJ ν (br)K ν (cr) dr = b ν c −ν (b 2 + c 2 ) −1 ([29] , 2.16.21.1) we check that the inverse Fourier transform of the right-hand side of (4.5) coincides with K(p) (pK(p) + |ξ| 2 ) −1 . Therefore the formula (4.5) is valid for any n. Let us consider estimates of the function Z and its derivatives. Qualitatively, the behavior of Z is similar to that of the fundamental solution of the Cauchy problem for the fractional diffusion equation (1.1) (see [19,10,9]). In addition to the singularity at t = 0, Z(t, x) has, if n > 1, a singularity at x = 0 (a logarithmic singularity, if n = 2, and a power one, if n ≥ 3). As usual Z(t, x) → δ(x), as t → 0. 
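In the lowest dimension, (4.7) amounts, via K ±1/2 (z) = (π/2z)^(1/2) e^(−z) and the ν = −1/2 case of (4.4), to the classical integral ∫₀^∞ cos(cy)/(y² + z²) dy = (π/2z) e^(−cz). A small numerical sketch (assuming SciPy; purely illustrative) confirms both facts:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

# K_{1/2}(z) = K_{-1/2}(z) = sqrt(pi/(2 z)) e^{-z}
z = 2.0
err_half = abs(kv(0.5, z) - np.sqrt(np.pi / (2.0 * z)) * np.exp(-z))

# nu = -1/2 case of (4.4):  int_0^inf cos(c y) / (y^2 + z^2) dy = pi/(2 z) e^{-c z}
c, z = 2.0, 1.0
val, _ = quad(lambda y: 1.0 / (y ** 2 + z ** 2), 0.0, np.inf,
              weight='cos', wvar=c)
err_cos = abs(val - np.pi / (2.0 * z) * np.exp(-c * z))
print(err_half, err_cos)
```

The oscillatory integral is handled by QUADPACK's Fourier-weight routine (the weight='cos' option of quad), which is designed exactly for integrands of this type on a half-line.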
This means that the singularity at t = 0 becomes "visible" near the origin in x. In fact, we obtain separate estimates for a small |x|, showing the character of singularities in t and x, and for a large |x|. In addition, subsequent applications of the fundamental solutions require estimates of D (µ) Z, applicable simultaneously for all x = 0, and uniform in t. Of course, estimates for D (µ) Z at the origin and infinity can be obtained from the relation D (µ) Z = ∆Z. All the above estimates deal with a finite time interval, t ∈ (0, T ], and it is this kind of estimates, that is needed to study the Cauchy problem. Separately we will give some estimates of Z for large values of t, just to see the qualitative behavior of Z. Theorem 4.1. Suppose that µ ∈ C 2 [0, 1], µ(α) = α ν µ 1 (α), µ 1 (α) ≥ ρ > 0, 0 ≤ α ≤ 1, ν ≥ 0. Denote by ε a small positive number. The function Z is infinitely differentiable for t = 0 and x = 0. The following estimates hold for 0 < t ≤ T . If n = 1, then |D m x Z(t, x)| ≤ Ct − m+1 2 , |x| ≤ ε, 0 ≤ m ≤ 3. (4.8) If n = 2, then |Z(t, x)| ≤ Ct −1 log |x| −1 , |x| ≤ ε, (4.9) |D m x Z(t, x)| ≤ Ct −1 |x| −m , |x| ≤ ε, 1 ≤ m ≤ 3. (4.10) If n ≥ 3, then |D m x Z(t, x)| ≤ Ct −1 |x| −n+2−m , |x| ≤ ε, 0 ≤ m ≤ 3. (4.11) In all cases, |D m x Z(t, x)| ≤ Ce −a|x| (a > 0), |x| ≥ ε −1 . (4.12) The estimate of D (µ) Z, uniform in t, is as follows: D (µ) Z (t, x) ≤ C|x| −n−2 e −a|x| (a > 0), |x| = 0. (4.13) If |x| ≤ ε, then D (µ) Z (t, x) ≤ Ct −2 |x| −n+2 . (4.13 ′ ) Proof. As before, using Jordan's lemma we write Z(t, x) = (2π) − n 2 |x| 1− n 2 Sγ,ω e pt K(p)(pK(p)) 1 2 ( n 2 −1) K n 2 −1 (|x| pK(p)) dp, x = 0. (4.14) The integral in (4.14) consists of the ones on T γ,ω and Γ ± γ,ω . Let us begin with the first of them, denoted by Z 0 (t, x). Below we assume that γ > e. If p ∈ T γ,ω , then p = γe iϕ , |ϕ| ≤ ωπ, 1 2 ≤ ω < 1. Under our assumptions, pK(p) = 1 0 γ α e iαϕ α ν µ 1 (α) dα. Let us consider the location of values of pK(p), p ∈ T γ,ω . 
If |ϕ| ≤ π/2, then Re pK(p) ≥ 0. Suppose that π 2 < ϕ ≤ ωπ. Then Re pK(p) ≥ R cos(ωπ), R = 1 0 γ α α ν µ 1 (α) dα, Im pK(p) ≥ ρ 1 0 α ν sin(αϕ) dα = ρϕ −1−ν ϕ 0 β ν sin β dβ ≥ ρ(ωπ) −1−ν π/2 0 β ν sin β dβ > 0, so that 0 ≤ arg pK(p) < π, as p belongs to the part of T γ,ω lying in the upper half-plane. Similarly, −π < arg pK(p) ≤ 0 for the part from the lower half-plane. Thus, | arg pK(p)| < π, and since T γ,ω is compact, we have | arg pK(p)| ≤ ϕ 0 < π, p ∈ T γ,ω . This means that Re (pK(p)) 1/2 ≥ cos(ϕ 0 /2) · inf p∈Tγ,ω 1 0 p α µ(α) dα 1/2 def = r 0 > 0 because Im 1 0 p α µ(α) dα = 0 with p ∈ T γ,ω only if p = γ, and there Re 1 0 γ α µ(α) dα > 0. Therefore, using the above-mentioned asymptotics of the McDonald function, we find that Z 0 (t, x) ≤ Ce −a|x| (a > 0), |x| ≥ ε −1 . (4.15) As |x| ≤ ε, we get Z 0 (t, x) ≤ C, if n = 1; C log |x| −1 , if n = 2; C|x| −n+2 , if n ≥ 3. (4.16) Let Z ± (t, x) be the parts of Z(t, x) corresponding to the integration over Γ ± γ,ω . If, for example, n ≥ 3, then Z ± (t, x) is estimated by C|x| −n+2 times an integral over (γ, ∞) whose integrand contains the factors e tr cos(ωπ) and e −a|x|(r/ log r) 1/2 together with power and logarithmic factors of r (4.17). After the change of variable z = (r/ log r) 1/2 we have to express (asymptotically) r as a function of z. We denote s = log r, so that s −1 e s = z 2 where s → ∞ and z → ∞. Taking the logarithm of both parts of the last equality we get s − log s = 2 log z. It is known ( [12], page 50) that s = 2 log z + O(log log z), z → ∞. Therefore r = r(z) satisfies the inequalities z 2 (log z) −b ≤ r(z) ≤ z 2 (log z) b (4.18) for some b ≥ 0. For |x| ≥ ε −1 , the factor e tr cos(ωπ) in (4.17) can be estimated by 1, and after the use of (4.18) the power terms, as well as the logarithmic ones, are dominated by the exponential factor (the integral in z is taken over (γ 1 , ∞), γ 1 > 0), so that Z ± (t, x) ≤ Ce −a ′ |x| , a ′ > 0, and, together with (4.15), this implies (4.12) for n ≥ 3, m = 0. For |x| < ε, the factor e −a|x|(r/ log r) 1/2 is estimated by 1, and an elementary estimate gives that Z ± (t, x) ≤ Ct −1 |x| −n+2 , which implies the required estimate of Z.
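The location-of-values argument above is easy to visualize numerically. The sketch below (illustrative; assumes SciPy) takes µ ≡ 1 — so ν = 0 and µ 1 ≡ ρ ≡ 1 — with γ = 10 and ω = 0.95, and verifies that arg pK(p) stays strictly inside (−π, π) along the arc p = γ e^(iϕ), |ϕ| ≤ ωπ:

```python
import numpy as np
from scipy.integrate import quad

gamma_, omega = 10.0, 0.95

def pK_on_arc(phi):
    # pK(p) = int_0^1 gamma^a e^{i a phi} da on the arc p = gamma e^{i phi}
    re = quad(lambda a: gamma_ ** a * np.cos(a * phi), 0.0, 1.0)[0]
    im = quad(lambda a: gamma_ ** a * np.sin(a * phi), 0.0, 1.0)[0]
    return complex(re, im)

phis = np.linspace(-omega * np.pi, omega * np.pi, 201)
max_arg = max(abs(np.angle(pK_on_arc(phi))) for phi in phis)
print(max_arg)
```

Even at ϕ = ±ωπ, where Re p is strongly negative, the positive sine-moment keeps the argument of pK(p) bounded away from ±π, which is exactly what yields the lower bound r 0 > 0 above.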
The bounds for the derivatives, as well as the estimates (4.8)-(4.10) for n = 1 and n = 2 are obtained in a similar way. Some of the estimates can in fact be slightly refined (using the asymptotic formulas for the Laplace integrals with logarithmic factors [32]), involving t −1 log t −1 for small values of t, instead of t −1 . Let us prove (4.13). Let n ≥ 3; the cases n = 2 and n = 1 are similar. The estimates of the McDonald function for small and large arguments can be combined as follows: K n 2 −1 (z) ≤ C|z| − n 2 +1 e −a|z| , z = 0,(4.D (µ) u (t) = e pt 1 0 p α µ(α) Γ(1 − α) γ(1 − α, pt) dα. It is known that γ(1 − α, z) ∼ 1 1 − α z 1−α , z → 0, γ(1 − α, z) ∼ Γ(1 − α) − z −α e −z , z → ∞ (see Chapter 9 in [1]). This implies the inequality γ(1 − α, z) Γ(1 − α) ≤ C valid, in particular, for all z = pt, p ∈ S γ,ω , t ∈ (0, T ]. Recalling also that 1 0 p α µ(α) dα = pK(p) we see that the application of D (µ) to the integral representing Z leads to the appearance of the factor |pK(p)| in the estimates of D (µ) Z, compared to those of Z. Using also (4.19) and estimating by 1 the decaying exponential involving t (in the integrals over Γ ± γ,ω ) we come to the inequality (4.13). The proof of the inequality (4.13 ′ ) is similar to those for estimates of the derivatives in spatial variables. Subordination and positivity. Let us find a connection between Z and the fundamental solution of the heat equation. Our approach follows [4] where the case n = 1 was considered (without a full rigor). Let us consider the function g(u, p) = K(p)e −upK(p) , u > 0, Re p > 0. Let p = γ + iτ , γ > 0, τ ∈ R. As |τ | → ∞, K(p) ∼ C log γ 2 + τ 2 + i arg p , arg p → ± π 2 . It follows that Re(pK(p)) ∼ C γ log γ 2 + τ 2 + π 2 |τ | (log γ 2 + τ 2 ) 2 , |τ | → ∞, whence e −upK(p) ≤ C exp −au γ log(γ 2 + τ 2 ) + |τ | (log(γ 2 + τ 2 )) 2 , a > 0. 
(4.20) Writing log(γ 2 + τ 2 ) ≤ C(γ 2 + τ 2 ) ε , 0 < ε < 1/4, we find from (4.20) that ∞ −∞ |g(u, γ + iτ )| dτ ≤ C ∞ 0 exp −au γ (γ 2 + τ 2 ) ε + τ (γ 2 + τ 2 ) 2ε dτ ≤ C 1 0 e −au γ (γ 2 +τ 2 ) ε dτ + C ∞ 1 e −au τ (γ 2 +τ 2 ) 2ε dτ ≤ Ce −au γ (γ 2 +1) ε + Cγ ∞ γ −1 e −a ′ uγ 1−4ε y 1−4ε dy ≤ C + C ∞ 0 e −a ′ uz 1−4ε dz (a ′ > 0), whence sup γ≥1 ∞ −∞ |g(u, γ + iτ )| dτ < ∞. (4.21) It follows from (4.21) (see [7]) that g(u, p) is the Laplace transform of some locally integrable function G(u, t): g(u, p) = ∞ 0 e −pt G(u, t) dt, (4.22) and the integral in (4.22) is absolutely convergent if Re p ≥ 1. On the other hand, the function pK(p) is positive and has a completely monotone derivative, so that e −upK(p) is completely monotone. Since K(p) is completely monotone, we find that g is completely monotone in p (we have used Criteria 1 and 2 of the complete monotonicity; see [13]), so that G(u, t) ≥ 0 by Bernstein's theorem. Theorem 4.2. The fundamental solution admits the representation Z(t, x) = ∞ 0 G(u, t)(4πu) −n/2 e −|x| 2 /4u du, x ≠ 0, t > 0, (4.23) where G(u, t) ≥ 0 and ∞ 0 G(u, t) du = 1, (4.24) so that, in particular, R n Z(t, x) dx = 1. (4.25) Proof. In order to prove (4.24), we integrate (4.22) in p using Fubini's theorem. We get ∞ 0 e −pt dt ∞ 0 G(u, t) du = 1 p , which implies (4.24). Let us prove (4.23). The convergence of the integral at infinity follows from (4.24), while near the origin the function u → (4πu) −n/2 e −|x| 2 /4u decays exponentially. Let v(t, x) be the right-hand side of (4.23). Multiplying by e −pt and integrating in t we find that v(p, x) = ∞ 0 (4πu) −n/2 e −|x| 2 /4u g(u, p) du. By the formula 2.3.16.1 from [28], the right-hand side coincides with the one from (4.5), so that v(t, x) = Z(t, x). Now the non-negativity of Z is a consequence of (4.23), and the identity (4.25) follows from (4.23), (4.24) and Fubini's theorem. Long time behavior. Let us give a rigorous proof of the asymptotics of the mean square displacement, basic for applications of the distributed order calculus. We also give some long time estimates of the fundamental solution Z. Theorem 4.3. (i) Let m(t) = R n |x| 2 Z(t, x) dx.
If µ(0) = 0, then m(t) ∼ C log t, t → ∞. (4.26) If µ(α) ∼ aα ν , α → 0, a, ν > 0, (4.27) then m(t) ∼ C(log t) 1+ν , t → ∞. (4.28) (ii) Suppose that (4.27) holds with ν > 1, if n = 1, and with an arbitrary ν > 0, if n ≥ 2. Then for |x| ≤ ε, ε > 0, and t > ε −1 , Z(t, x) ≤      C(log t) − ν−1 2 , if n = 1; C| log |x||(log t) −ν log(log t), if n = 2; C|x| −n+2 (log t) −ν−1 , if n ≥ 3. (4.29) Proof. (i) It follows from the Plancherel identity for the Fourier transform that m(t) = −(2π) n ∆ ξ Z(t, ξ) ξ=0 . Applying the Laplace transform in t we find that m(p) = −(2π) n ∆ ξ 1 pK(p) + |ξ| 2 ξ=0 , and after an easy calculation we get m(p) = 2n · (2π) n p 2 K(p) , whence m(t) = 2n · (2π) n t 0 κ(τ ) dτ where κ was introduced in Sect. 3.1. Now the relations (4.26) and (4.28) are consequences of Karamata's Tauberian theorem [13]. (ii) As before, we proceed from the integral representation (4.14) where the contour S γ,ω consists of a finite part T γ,ω and the rays Γ ± γ,ω . Let n = 1. Then (4.14) takes the form Z(t, x) = 1 2 Sγ,ω e pt K(p) pK(p) e −|x| √ pK(p) dp. (4.30) As p → 0, pK(p) ∼ C(log p −1 ) − 1+ν 2 , K(p) pK(p) ∼ Cp −1 (log p −1 ) − 1+ν 2 , where 1+ν 2 > 1. These asymptotic relations make it possible to pass to the limit in (4.30), as γ → 0, substantiating simultaneously the convergence to 0 of the integral over T γ,ω and the existence of the integrals over the rays starting at the origin. Thus, Z(t, x) ≤ C ∞ 0 e rt cos(ωπ) r −1 | log r| − 1+ν 2 e −a|x|| log r| − 1+ν 2 dr. (4.31) Let us decompose the integral in (4.31) into the sum of the integrals over (0, 1/2) and (1/2, ∞). Estimating the latter we drop the factor containing |x| and obtain easily the exponential decay, as t → ∞. The integral over (0, 1/2) is estimated via the function M(t) = 1/2 0 e −art r −1 log 1 r − 1+ν 2 dr. Integrating by parts we see that M(t) ≤ C   e − at 2 + t 1/2 0 e −art log 1 r 1−ν 2 dr   . 
It is known (see (18.52) in [33] or (32.11) in [34]) that 1/2 0 e −art log 1 r 1−ν 2 dr ≤ Ct −1 (log t) 1−ν 2 for large values of t. This implies the first inequality of (4.29). Let n = 2. Then Z(p, x) = 1 2π K(p)K 0 (|x| pK(p)), so that we have, for |x| < ε and small |p|, that Z(p, x) ≤ C|p| −1 | log |p|| −1−ν (log |x| −1 + log log |p| −1 ). This estimate is sufficient (for ν > 0) to substantiate passing to the limit, as γ → 0. The above argument gives, as the main part of the upper estimate of Z(t, x) for a large t, the expression C log |x| −1 1/2 0 e −art r −1 | log r| −1−ν log log r −1 dr = C 1 log |x| −1 1/2 0 e −art log log r −1 d dr log r −1 −ν dr. Integrating by parts we reduce the investigation of the above integral in r to that of two integrals, 1/2 0 e −art r −1 log r −1 −1−ν dr (it has been estimated above), and 1/2 0 e −art log r −1 −ν log log r −1 dr ∼ Ct −1 (log t) −ν log(log t), t → ∞ ([34], 32.11). This results in the second estimate from (4.29). The third one is derived similarly. The relations (4.26) and (4.28) for the case where n = 1 and µ(α) ≡ const or µ(α) ≡ const·α ν were proved in [4]. Let us now consider the Cauchy problem for the equation (4.1) with the initial condition u(0, x) = ϕ(x), x ∈ R n , (5.1) where ϕ is a locally Hölder continuous function of sub-exponential growth: for any b > 0, |ϕ(x)| ≤ C b e b|x| . (5.2) We will assume that the weight function µ defining the distributed order derivative D (µ) satisfies the conditions of Theorem 4.1. Theorem 5.1. (i) The function u(t, x) = R n Z(t, x − ξ)ϕ(ξ) dξ (5.3) is a classical solution of the Cauchy problem (4.1)-(5.1), that is the function (5.3) is twice continuously differentiable in x for each t > 0, for each x ∈ R n it is continuous in t > 0, the function t → t 0 k(t − τ )u(τ, x) dτ, t > 0, is continuously differentiable, the equation (4.1) is satisfied, and u(t, x) −→ ϕ(x), as t → 0, (5.4) for all x ∈ R n . (ii) On each finite time interval (0, T ], the solution u(t, x) satisfies the inequality |u(t, x)| ≤ Ce d|x| , x ∈ R n , (5.5) with some constants C, d > 0.
If ϕ is bounded, then |u(t, x)| ≤ C, x ∈ R n , 0 < t ≤ T. (5.6) (iii) For each x ∈ R n , there exists such an ε > 0 that D (µ) u(t, x) ≤ C x t −1+ε , 0 < t ≤ T. (5.7) Proof. Using (4.25) we can write u(t, x) = R n Z(t, x − ξ)[ϕ(ξ) − ϕ(x)] dξ + ϕ(x). (5.8) Let us fix x and prove (5.4), that is prove that the integral in (5.8) (denoted by u 0 (t, x)) tends to 0. Let n = 1. Then u 0 (t, x) = 1 4πi γ+i∞ γ−i∞ e pt K(p) pK(p) H(p, x) dp (5.9) where γ > 0, H(p, x) = ∞ −∞ e −|x−ξ| √ pK(p) [ϕ(ξ) − ϕ(x)] dξ (the change of the order of integration leading to (5.9) will be justified when we prove the decay of H(p, x), as p → γ ± i∞; see below). By our assumption, |ϕ(x) − ϕ(ξ)| ≤ C x |x − ξ| λ , λ > 0, |x − ξ| ≤ 1. Let ρ = pK(p), p = γ + iτ . As |τ | → ∞, ρ ∼ C |τ | log |τ | 1/2 . We have |H(p, x)| ≤ C |x−ξ|≤1 e −ρ|x−ξ| |x − ξ| λ dξ + C |x−ξ|>1 e −ρ|x−ξ|+b|ξ| dξ + |ϕ(x)| |x−ξ|>1 e −ρ|x−ξ| dξ ≤ Cρ −1−λ + Ce b|x| |z|>1 e (b−ρ)|z| dz + 2|ϕ(x)| ∞ 1 e −ρz dz ≤ Cρ −1−λ , if b is taken such that ρ > b. Therefore the absolute value of the integrand in (5.9) does not exceed Ce γt |τ | −1− λ 2 (log |τ |) λ/2 , so that the integral in (5.9) exists and possesses a limit, as t → 0, equal to u 0 (0, x) = 1 4πi γ+i∞ γ−i∞ K(p) pK(p) H(p, x) dp (5.10) The integrand in (5.10) is analytic in p on the half-plane Re p ≥ γ. Let us consider (within that half-plane) a contour consisting of an interval {p : Re p = γ, |p| ≤ R} and the arc {p : Re p > γ, |p| = R}, R > γ. The absolute value of the integral over the arc (with the same integrand as in (5.10)) does not exceed CR −λ/2 (log R) λ/2 → 0, as R → ∞. This means that u 0 (0, x) = 0, and we have proved (5.4) for n = 1. The scheme of proof is completely similar for n > 1 too; one has only to use the asymptotics of the McDonald function. If we perform the above estimates, not ignoring the dependence on x but, on the contrary, taking it into account, then we obtain the estimates (5.5) and (5.6). 
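The growth rate of ρ = (pK(p))^(1/2) used above in estimating H(p, x) can be checked numerically: with µ ≡ 1 the quantity |pK(γ + iτ)| · log |τ| / |τ| should tend to 1 as |τ| → ∞, matching ρ ∼ C(|τ|/ log |τ|)^(1/2). A sketch (assuming SciPy; the limiting constant 1 is specific to the choice µ ≡ 1):

```python
import numpy as np
from scipy.integrate import quad

def pK(p):
    # pK(p) = int_0^1 p^a da for mu == 1 (complex p)
    re = quad(lambda a: (p ** a).real, 0.0, 1.0)[0]
    im = quad(lambda a: (p ** a).imag, 0.0, 1.0)[0]
    return complex(re, im)

gamma_ = 1.0
ratios = [abs(pK(complex(gamma_, tau))) * np.log(tau) / tau
          for tau in (1e3, 1e6, 1e9)]
print(ratios)
```

For µ ≡ 1 one has pK(p) = (p − 1)/log p in closed form, so the ratio equals log τ / |log p| up to lower-order terms and approaches 1 monotonically from below.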
Due to the estimates of Z given in Theorem 4.1, we may differentiate once in the spatial variables in (5.3) under the sign of integral. Using also the identity (4.25) we get, for a fixed x 0 , the formula ∂u(t, x 0 ) ∂x k = R n ∂Z(t, x 0 − ξ) ∂x k [ϕ(ξ) − ϕ(x 0 )] dξ. (5.11) Let us decompose the domain of integration in (5.11) into the union of Ω 1 = ξ ∈ R n : |x 0 − ξ| ≥ 1 and Ω 2 = R n \ Ω 1 . Correspondingly, the integral becomes a sum of two functions, w 1 (t, x) + w 2 (t, x). If x is in a small neighbourhood of x 0 , while ξ ∈ Ω 1 , then |x − ξ| is separated from zero. Therefore ∂w 1 (t, x 0 ) ∂x k = Ω 1 ∂ 2 Z(t, x 0 − ξ) ∂x 2 k [ϕ(ξ) − ϕ(x 0 )] dξ.1 d w 2 (t, x 0 +d) − w 2 (t, x 0 ) − Ω 2 ∂ 2 Z(t, x 0 − ξ) ∂x 2 k [ϕ(ξ) − ϕ(x 0 )] dξ = 1 d |x 0 −ξ|≤2d ∂Z(t, x 0 +d − ξ) ∂x k [ϕ(ξ) − ϕ(x 0 )] dξ − 1 d |x 0 −ξ|≤2d ∂Z(t, x 0 − ξ) ∂x k [ϕ(ξ) − ϕ(x 0 )] dξ − |x 0 −ξ|≤2d ∂ 2 Z(t, x 0 − ξ) ∂x 2 k [ϕ(ξ) − ϕ(x 0 )] dξ + 2d≤|x 0 −ξ|≤1 1 d ∂Z(t, x 0 +d − ξ) ∂x k − ∂Z(t, x 0 − ξ) ∂x k − ∂ 2 Z(t, x 0 − ξ) ∂x 2 k [ϕ(ξ) − ϕ(x 0 )] dξ. (5.13) The integrals converge due to the local Hölder continuity of ϕ. We have (if n ≥ 2) 1 d |x 0 −ξ|≤2d ∂Z(t, x 0 +d − ξ) ∂x k [ϕ(ξ) − ϕ(x 0 )] dξ ≤ Ct −1 d −1 |x 0 −ξ|≤2d |x 0 +d − ξ| −n+1 |ξ − x 0 | λ dξ = Ct −1 d −1 |η|≤2d |η +d| −n+1 |η| λ dη ≤ Ct −1 d λ → 0, d → 0 (the change of variables η = dζ was made in the last integral). In a similar way we obtain estimates of other integrals over the set {|x 0 − ξ| ≤ 2d}. In the integral over its complement, we use the Taylor formula: 1 d ∂Z(t, x 0 +d − ξ) ∂x k − ∂Z(t, x 0 − ξ) ∂x k − ∂ 2 Z(t, x 0 − ξ) ∂x 2 k = d 2 ∂ 3 Z(t, x 0 + θd − ξ) ∂x 3 k , 0 < θ < 1. If |x 0 − ξ| ≥ 2d, then |x 0 + θd − ξ| ≥ |ξ − x 0 | − d ≥ 1 2 |ξ − x 0 |. Using the inequality for the third derivative of Z from Theorem 4.1 we find that the last integral in (5.13) does not exceed Cdt −1 2d≤|x 0 −ξ|≤1 |ξ − x 0 | −n−1+λ dξ ≤ Ct −1 d λ → 0, as d → 0. 
It follows from (5.12), (5.13) and the above estimates that ∂ 2 u(t, x 0 ) ∂x 2 k = R n ∂ 2 Z(t, x 0 − ξ) ∂x k [ϕ(ξ) − ϕ(x 0 )] dξ.(5.14) If n = 1, then the formula (5.14) is obtained by a straightforward differentiation under the sign of integral. Let us consider the distributed order derivative D (µ) u. First of all we check the identity D (µ) Z(t, x) = ∆Z(t, x), t > 0, x = 0. (5.15) A direct calculation based on identities for the derivatives of the McDonald function [1] shows that ∆ Z(p, x) = pK(p) Z(p, x). On the other hand, if x = 0, then Z(t, x) → 0, as t → 0. This fact follows from the integral representation of Z in a manner similar to the above proof of (5.4). Therefore the Laplace transform of D (µ) Z(t, x), x = 0, equals pK(p) Z(p, x), which implies (5.15). Now, having the estimates of the derivatives of Z in spatial variables given in Theorem 4.1, from (5.15) we get estimates for D (µ) Z sufficient to justify the distributed differentiation in (5.8). Thus we come to the formula D (µ) u (t, x 0 ) = R n D (µ) Z (t, x 0 − ξ)[ϕ(ξ) − ϕ(x 0 )] dξ. (5.16) Together with (5.14) and (5.15), this proves that u(t, x) is a solution of the equation (4.1). In order to prove (5.7), we use the inequalities (4.13), (4.13 ′ ), and the assumption (5.2) with b < a. Substituting into (5.16) we get, for a fixed x 0 , that D (µ) u (t, x 0 ) ≤ Ct −2 |x 0 −ξ|<t 1/2 |x 0 − ξ| −n+2+λ dξ + C t 1/2 ≤|x 0 −ξ|≤1 |x 0 − ξ| −n−2+λ e −a|x 0 −ξ| dξ + C |x 0 −ξ|>1 |x 0 − ξ| −n−2 e −a|x 0 −ξ| e b|ξ| + e b|x 0 | dξ ≤ Ct −2 t 1/2 0 r 1+λ dr + C 1 t 1/2 r −3+λ e −ar dr + C ≤ Ct −1+ λ 2 for small values of t, as desired. The inhomogeneous equation. Let us consider the Cauchy problem D (µ) t u (t, x) − ∆u(t, x) = f (t, x); x ∈ R n , t > 0, (5.17) u(0, x) = 0. (5.18) We assume that the function f is continuous in t, bounded and locally Hölder continuous in x, uniformly with respect to t. 
Our task in this section is to obtain a solution of (5.17)-(5.18) in the form of a "heat potential" u(t, x) = t 0 dτ R n E(t − τ, x − y)f (τ, y) dy. (5.19) In contrast to the classical theory of parabolic equations [14], the kernel E in (5.19) does not coincide with the fundamental solution Z, just as this happens for fractional diffusion equations [10,9]. However the behavior of the function E is very similar to that of Z. Applying formally the Laplace transform in t and the Fourier transform in x we find that Ẽ (p, ξ) = 1 pK(p) + |ξ| 2 whence Ẽ (p, x) = (2π) − n 2 |x| 1− n 2 (pK(p)) 1 2 ( n 2 −1) K n 2 −1 (|x| pK(p)), (5.20) which differs from (4.5) only by the absence of the factor K(p) with a logarithmic behavior at infinity. Therefore the function E(t, x), obtained from (5.20) via contour integration, satisfies the same estimates (see Theorem 4.1) as the function Z, except the estimates for large values of t. The function E(t, x) is non-negative. Indeed, the function p → p ν/2 K ν (a √ p), a > 0, is the Laplace transform of the function t → a ν (2t) −ν−1 e −a 2 /4t (see [7]). This means that the above function in p is completely monotone. Since the function p → pK(p) is positive and has a completely monotone derivative, we find that Ẽ(p, x) is completely monotone in p, so that E(t, x) ≥ 0, x ≠ 0. The counterparts of the estimates (4.29) (proved just as in Theorem 4.3) are as follows. If (4.27) holds with ν ≥ 0, then for |x| ≤ ε, ε > 0, and t > ε −1 , E(t, x) ≤ Ct −1 (log t) 1+ν 2 , if n = 1; Ct −1 log log t log |x| −1 , if n = 2; Ct −1 |x| −n+2 , if n ≥ 3. (5.21) The function E has (in x) an exponential decay at infinity. In fact, for the analysis of the potential (5.19) we need estimates of E and its derivatives, uniform in t ∈ (0, T ]: D j x E(t, x) ≤ C|x| −j−n |1 + | log |x||| β e −a|x| , x ≠ 0, j ≥ 0, (5.22) D (µ) t E(t, x) ≤ C|x| −n−2 |1 + | log |x||| β e −a|x| , x ≠ 0, (5.23) where C, a, β are positive constants. Proof.
Let, for example, n ≥ 3 (other cases are considered in a similar way). As usual, we write the Laplace inversion formula and deform the contour of integration to S γ,ω . The integral over T γ,ω gives an exponentially decaying contribution without local singularities. In the integrals over the rays Γ ± γ,ω we use the upper bound K n 2 −1 (z) ≤ C|z| − n 2 +1 e −a|z| , z ≠ 0, (a > 0) obtained from the asymptotics of the McDonald function near the origin and infinity. As in the proof of Theorem 4.1, we perform the change of variable z = (r/ log r) 1/2 and use the inequality (4.18) for the dependence of r on z. As a result, for the integrals over Γ ± γ,ω we obtain the upper bound C|x| −n+2 ∞ γ 1 z(log z) β e −a|x|z dz ≤ C|x| −n (| log |x|| + 1) β e −a ′ |x| with some positive constants, and we come to the estimate (5.22), j = 0. The estimates of the derivatives in spatial variables are proved similarly. The proof of (5.23) is completely analogous to that of the inequality (4.13) for D (µ) Z. As we have noticed, Ẽ (p, x) = 1 K(p) Z (p, x), and since R n Z (p, x) dx = 1 p , we have R n Ẽ (p, x) dx = 1 pK(p) , so that we come to an interesting identity R n E(t, x) dx = κ(t). (5.24) The existence of the integral in (5.24) follows from the above estimates or from the fact that κ ∈ L loc 1 (see (3.2)) and Fubini's theorem. Theorem 5.3. The heat potential (5.19) is a classical solution of the Cauchy problem (5.17)-(5.18). Proof. The initial condition (5.18) is evidently satisfied. Just as for the kernel Z above, we prove that D (µ) E − ∆E = 0 for x ≠ 0. Next, we may differentiate once in (5.19) under the sign of the integral. Indeed (here and below we make estimates for n ≥ 3; other cases are similar), t 0 dτ R n ∂E(t − τ, x − y) ∂x j dy ≤ C t 0 dτ |x|> √ τ |x| −n−1 (1 + | log |x||) β e −a|x| dx + C t 0 τ −1 dτ |x|≤ √ τ |x| −n+1 dx ≤ C t 0 τ −1/2 dτ |y|>1 |y| −n−1 (1 + log |y| + 1 2 log τ −1 ) dy + C t 0 τ −1/2 dτ |y|≤1 |y| −n+1 dy < ∞.
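The identity (5.24) can be sanity-checked on the Laplace side. Combining (4.7) with Ẽ = Z̃/K(p) gives, for n = 1, Ẽ(p, x) = (1/2)(pK(p))^(−1/2) e^(−|x|(pK(p))^(1/2)), whose integral in x must equal 1/(pK(p)). A numerical sketch (assuming SciPy, with µ ≡ 1 and one sample real p; illustrative only):

```python
import numpy as np
from scipy.integrate import quad

def pK(p):
    # pK(p) = int_0^1 p^a da for mu == 1
    return quad(lambda a: p ** a, 0.0, 1.0)[0]

p = 3.0
s = np.sqrt(pK(p))
# n = 1 Laplace transform of the kernel E:  E~(p, x) = (1/2) s^{-1} e^{-|x| s}
integral = quad(lambda x: 0.5 / s * np.exp(-abs(x) * s), -np.inf, np.inf)[0]
err = abs(integral - 1.0 / pK(p))
print(err)
```

Since 1/(pK(p)) is the Laplace transform of κ, this is precisely the transform-side statement of ∫ E(t, x) dx = κ(t).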
In order to calculate the second order derivatives, note that the function u h (t, x) = t−h 0 dτ R n E(t − τ, x − y)f (τ, y) dy, t > h > 0, may be differentiated twice, and that R n ∂ 2 ∂x 2 i E(t − τ, x − y) dy = 0, whence ∂ 2 u h (t, x) ∂x 2 i = t−h 0 dτ R n ∂ 2 ∂x 2 i E(t − τ, x − y)[f (τ, y) − f (τ, x)] dy. (5.25) Using the local Hölder continuity and boundedness of f , we perform estimates as above and prove the possibility to pass to the limit in (5.25), as h → 0, so that ∆u(t, x) = t 0 dτ R n ∆E(t − τ, x − y)[f (τ, y) − f (τ, x)] dy. (5.26) To calculate D (µ) u, we use (5.24) and write u(t, x) = t 0 dτ R n E(t − τ, x − y)[f (τ, y) − f (τ, x)] dy + t 0 κ(t − τ )f (τ, x) dτ def = u 1 (t, x) + u 2 (t, x). Recall that u 2 (t, x) = I (µ) f (t, x), so that D (µ) u 2 = f (see Sect. 3). Let us consider u 1 . First we estimate ∂E ∂t . As before, we use the contour integral representation of E and note that the differentiation in t leads to an additional factor p in the integrals. This results in the estimates ∂E(t, x) ∂t ≤ Ct −2 |x| −n+2 , |x| ≤ ε; (5.27) ∂E(t, x) ∂t ≤ C|x| −n−2 (| log |x|| + 1) β e −a|x| , |x| = 0. (5.28) As the first step of computing D (µ) u 1 , we compute ∂u 1 ∂t . Note that R n E(t − τ, x − y)[f (τ, y) − f (τ, x)] dy = 1 (2π) n/2+1 i γ+i∞ γ−i∞ e p(t−τ ) (pK(p)) 1 2 ( n 2 −1) L n (p, x, τ ) dp (5.29) where L n (p, x, τ ) = R n |x − y| 1− n 2 [f (τ, y) − f (τ, x)]K n 2 −1 (|x − y| pK(p)) dy. The role of the function L n is quite similar to that of the function H introduced in the proof of Theorem 5.1 (where the case n = 1 was considered in detail). Using, as it was done there, the local Hölder continuity and boundedness of f we find that |L n (p, x, τ )| ≤ C| pK(p)| − n 2 −λ−1 . As in the proof of Theorem 5.1, we deform the contour of integration to the right of the line in (5.29) and show that lim τ →t R n E(t − τ, x − y)[f (τ, y) − f (τ, x)] dy = 0. 
(5.30) On the other hand, using (5.27) and (5.28) we get R n ∂E(t − τ, x − y) ∂t |f (τ, y) − f (τ, x)| dy ≤ C |y|≥ √ t−τ |y| −n−2+λ (| log |y|| + 1) β e −a|y| dy + C(t − τ ) −2 |y|< √ t−τ |y| −n+2+λ dy = 2C ∞ √ t−τ r −3+λ (| log r| + 1) β e −ar dr + 2C(t − τ ) −2 √ t−τ 0 r 1+λ dr ≤ C(t − τ ) −1+λ/2 (| log(t − τ )| + 1) β . Together with (5.30), this implies the equality ∂u 1 ∂t = t 0 dτ R n ∂E(t − τ, x − y) ∂t [f (τ, y) − f (τ, x)] dy. Now we compute D (µ) u 1 using the formula (2.3), the fact that k ∈ L loc 1 (following from (2.4)) and Fubini's theorem: D (µ) u 1 (t, x) = t 0 dτ R n D (µ) E (t − τ, x − y)[f (τ, y) − f (τ, x)] dy. Together with (5.26), this means that ∆u = D (µ) u 1 = D (µ) u − f , as desired. In Theorem 5.3 we constructed a solution u of the problem (5.17)-(5.18), such that u = u 1 + u 2 , u 1 (0, x) = u 2 (0, x) = 0, u 1 is absolutely continuous in t, and u 2 = I (µ) f . On this solution u, I (µ) D (µ) u = I (µ) (k * u ′ 1 ) + I (µ) f = κ * k * u ′ 1 + u 2 = u 1 + u 2 = u (u ′ means the derivative in t). Applying I (µ) to both sides of the equation (5.17) we find that u(t, x) − t 0 κ(t − s)∆u(s, x) ds = (κ * f )(t, x). (5.31) The equation (5.31) can be interpreted as an abstract Volterra equation u + κ * (Au) = ϕ,(5.32) if we assume that u belongs to some Banach space X (in the variable x), and A is the operator −∆ on X. The operator −A generates a contraction semigroup if, for example, X = L 2 (R n ) or X = C ∞ (R n ) (the space of continuous functions decaying at infinity; see Sect. X.8 in [31]). Now the existence of a solution in L 1 (0, T ; X) can be obtained from a general theory of equations (5.32) developed in [6]; it is essential that κ is completely monotone (conditions of some other papers devoted to equations (5.32) do not cover our situation). Of course, our "classical" approach gives a much more detailed information about solutions, while the abstract method is applicable to more general equations. 6 Uniqueness Theorems 6.1. 
Bounded solutions. In this section we consider a more general equation D (µ) u (t, x) = Lu(t, x), x ∈ R n , 0 < t ≤ T,(6.1) with the zero initial condition u(0, x) = 0. (6.2) Here L is an elliptic second order differential operator with bounded continuous real-valued coefficients: Lu = n i,j=1 a ij (t, x) ∂ 2 u ∂x i ∂x j + n j=1 b j (t, x) ∂u ∂x j + c(t, x)u, n i,j=1 a ij (t, x)ξ i ξ j > 0, 0 = ξ = (ξ 1 , . . . , ξ n ) ∈ R n . We assume that µ ∈ C 3 [0, 1], µ(1) = 0. We will consider classical solutions u(t, x), such that D (µ) u (t, x) belongs, for each fixed x, to L p (0, T ) with some p > 1. As we saw in Theorem 5.1 and Theorem 5.3, the solutions for the case L = ∆ obtained via the fundamental solution and heat potential possess the last property making it possible to represent D (µ) u in the Marchaud form (3.9). It is often convenient to transform the equation (6.1) setting u(t, x) = u λ (t)w(t, x) where λ > 0, and u λ is the solution of the equation D (µ) u λ = λu λ constructed in Sect. 2.3. It is easy to check that the function w satisfies the equation (A λ w) (t, x) = (L − λ)w(t, x) where (A λ w) (t, x) = 1 u λ (t)    k(t)w(t, x) + lim ε→0 t−ε 0 u λ (τ )k ′ (t − τ )[w(τ, x) − w(t, x)] dτ    . (6. 3) The operator (6.3) is very similar in its properties to the distributed order derivative D (µ) . Theorem 6.1. If u(t, x) is a bounded classical solution of the problem (6.1)-(6.2), such that for each x ∈ R n , D (µ) u ∈ L p (0, T ) for some p > 1, then u(t, x) ≡ 0. Proof. Let M = sup |u(t, x)|. Consider the function F R (t, x) = M R 2   |x| 2 + σ t 0 κ(s) ds + 1   , with R, σ > 0. It follows from (3.2) that t 0 κ(s) ds → 0, as t → 0. As we have seen (Sect. 3.2), D (µ) t t 0 κ(s) ds = 1, so that D (µ) F R (t, x) = σM R 2 . Let c 0 = sup |c(t, x)|, d > 0, λ = c 0 + d. Since u λ is non-decreasing (Theorem 2.3), and k ′ (s) ≤ 0, we have (A λ F R ) (t, x) ≥ D (µ) F R (t, x) u λ (T ) = σM R 2 u λ (T ) . 
On the other hand,

(L F_R)(t, x) ≤ (2M/R²) ( Σ_{i=1}^n a_{ii} + Σ_{j=1}^n b_j x_j ) ≤ (2M/R²) (C_1 + C_2 |x|),   C_1, C_2 > 0,

so that, since d > 0, we get

((A_λ − (L − λ)) F_R)(t, x) ≥ (M/R²) ( σ/u_λ(T) − 2C_1 − 2C_2 |x| + d|x|² + d ) ≥ 0

for all x ∈ R^n, t ∈ (0, T), if σ is taken sufficiently big. Denote v(t, x) = u(t, x) − F_R(t, x). By the above inequalities,

(A_λ v)(t, x) − (L − λ) v(t, x) ≤ 0. (6.4)

If |x| = R, then v(t, x) = u(t, x) − M − MR^{−2} ( σ ∫_0^t κ(s) ds + 1 ) < 0. Next, v(0, x) = −F_R(0, x) < 0 for all x. This means that v(t, x) ≤ 0 for |x| < R, t ∈ [0, T]. Indeed, otherwise the function v would possess a point of global maximum (t_0, x_0) on the set {(t, x) | 0 < t ≤ T, |x| < R}, such that v(t_0, x_0) > 0. Then (L − λ) v(t_0, x_0) ≤ 0 (see the proof of the maximum principle for a second order parabolic differential equation [14, 20]), so that (A_λ v)(t_0, x_0) ≤ 0, due to (6.4). However, it follows from (6.3) that (A_λ v)(t_0, x_0) > 0, and we have come to a contradiction. Thus, we have proved that

u(t, x) ≤ (M/R²) ( |x|² + σ ∫_0^t κ(s) ds + 1 ),   |x| ≤ R.

Since R is arbitrary, we find that u(t, x) ≤ 0 for all t ∈ [0, T], x ∈ R^n. Considering −u(t, x) instead of u(t, x), we prove that u(t, x) ≡ 0.

The above proof was based on standard "maximum principle" arguments. In fact, it is easy to prove, for the equation (6.1), an analog of the maximum principle itself. The proof is similar to the classical one [20].

6.2. Solutions of subexponential growth. In this section we will prove a more exact uniqueness theorem for the case where n = 1, L = ∂²/∂x².

Theorem 6.2. Suppose that u(t, x) is a classical solution of the problem (6.1)-(6.2) with n = 1, L = ∂²/∂x², such that for any a > 0,

|u(t, x)| ≤ C_a e^{a|x|},   0 < t ≤ T, x ∈ R^1,

and D^{(μ)} u ∈ L_p(0, T), p > 1, in t for any fixed x. Then u(t, x) ≡ 0.

Proof. This time we choose the comparison function

F^{(1)}_R(t, x) = M e^{aR} [Z(t, x − R) + Z(t, x + R)],   |x| ≤ R, (6.5)

where Z is the above fundamental solution of the Cauchy problem (Sect.
4), M and a are positive constants to be specified later. We will need the following auxiliary result.

Lemma 6.3. For any T > 0, there exists a constant ρ_0 > 0 such that Z(t, 0) ≥ ρ_0, 0 < t ≤ T.

Proof. Note that in Sect. 4 we used the Laplace inversion formula for Z(t, x) only for x ≠ 0. Here the task is just the opposite, and we use the inversion formula from [11] involving the derivative of the Laplace image. In our case

∂Ẑ(p, 0)/∂p = (1/2p) [ (d/dp)√K(p) · √p − √K(p)/(2√p) ],   (d/dp)√K(p) = K′(p)/(2√K(p)),

K′(p) = ∫_0^1 (α − 1) p^{α−2} μ(α) dα = o(p^{−1}),   p → ∞,

so that

|∂Ẑ(p, 0)/∂p| ≤ C_ε |p|^{−3/2+ε},   Re p ≥ 1, (6.6)

for any ε > 0. This is sufficient for the inversion formula

Z(t, 0) = (1/2πi) ∫_{γ−i∞}^{γ+i∞} Ẑ(p, 0) e^{pt} dp,   t ≠ 0, (6.7)

where γ ≥ 1. Using (6.6), (6.7) and an asymptotic theorem for the Laplace inversion (see (22.115) and (22.114) in [33]), we find the asymptotics

Z(t, 0) ∼ C t^{−1/2} log(1/t),   t → +0. (6.8)

For our purpose, it is sufficient to derive from (6.8) that Z(t, 0) → +∞ as t → +0. On the other hand, it follows from the subordination identity (4.23) and the fact that G(u, t) > 0, for each t, on a set of positive measure in u (see (4.24)), that Z(t, 0) > 0 for each t > 0. Together with (6.8), this implies the required inequality.

Proof of Theorem 6.2 (continued). If |x| = R, that is x = ±R, then by (6.5) and Lemma 6.3,

F^{(1)}_R(t, x) = M e^{aR} [Z(t, 0) + Z(t, ±2R)] ≥ M ρ_0 e^{aR}.

We have u(t, x) ≤ F^{(1)}_R(t, x) if C_a ≤ M ρ_0 (a has not yet been chosen), that is, if M is chosen in such a way that M ≥ C_a ρ_0^{−1}. The function w(t, x) = F^{(1)}_R(t, x) − u(t, x) satisfies, for |x| < R, t ∈ (0, T), the equation D^{(μ)} w = ∂²w/∂x². If |x| = R, then w(t, x) ≥ 0, and w(0, x) ≥ 0 for |x| < R. It follows that w(t, x) ≥ 0 for t ∈ (0, T], |x| ≤ R. Indeed, if w(t, x) < 0 for some t and x, then there exist t_0 ∈ (0, T], x_0 ∈ R^1, |x_0| < R, such that

w(t_0, x_0) = inf_{|x| ≤ R, t ∈ (0, T]} w(t, x) < 0.

If |x| < R, the function F^{(1)}_R is infinitely differentiable in t, with the derivative continuous on [0, T]. Therefore we may write D^{(μ)} w in the Marchaud form, so that

k(t) w(t, x) + lim_{ε→0} ∫_ε^t k′(τ) [w(t − τ, x) − w(t, x)] dτ = ∂²w(t, x)/∂x². (6.9)

For (t, x) = (t_0, x_0), we see that the left-hand side of (6.9) is negative, while the right-hand side is non-negative, and we get a contradiction. Thus, we have proved that

u(t, x) ≤ M e^{aR} [Z(t, x − R) + Z(t, x + R)],   0 < t ≤ T, |x| ≤ R. (6.10)

Now fix x and consider the limit R → ∞. For large values of R we have Z(t, x ± R) ≤ B e^{−bR}, t ∈ (0, T], where b > 0 depends only on T, and B > 0 depends on T and x, but not on R. Choose a in such a way that a < b, and (see above) fix M ≥ C_a ρ_0^{−1}. Obviously, M does not depend on R. By (6.10),

u(t, x) ≤ B M e^{(a−b)R}. (6.11)

Passing to the limit in (6.11), as R → ∞, we see that u(t, x) ≤ 0 for arbitrary t and x. Similarly, taking −u(t, x) instead of u(t, x), we find that u(t, x) ≥ 0, so that u(t, x) ≡ 0.

[Displaced in extraction: the following statements belong to earlier sections of the paper.]

Theorem 2.3. (i) The function u_λ(t) is continuous at the origin and belongs to C^∞(0, ∞).

For Re p > 0, using (2.4) and the relation ∫_0^∞ s^{−α} e^{−ps} ds = Γ(1 − α)/p^{1−α} (see 2.3.3.1 in [28]), one finds the behavior of K(p) as p → ∞; in some cases it is convenient to use a rough estimate, which implies (2.10). The relation (2.10′) is proved similarly. (ii), (iii) The relations (2.11) and (2.11′) follow from the complex version of Watson's lemma ([27], Chapter 4). The estimate

|K(p)| ≤ C |p|^{−1} (log(1/|p|))^{−1},   |p| ≤ p_0,

is valid for any μ ∈ C[0, 1]; it follows from general results about the behavior of the Laplace transform near the origin (see Chapter II, §1 of [7]).

3.1. Definition and properties (Distributed Order Integral). Suppose that D^{(μ)} u = f, u(0) = 0. Applying formally the Laplace transform u ↦ ũ, we find that ũ(p) = f̃(p)/(p K(p)).

A fundamental solution of the Cauchy problem: let us consider the equation (1.2); after a change of variables z = z(r), the bound holds with a possibly different choice of the constant a > 0. Next, let us write an integral representation of D^{(μ)} Z. If u(t) = e^{pt}, then

(D^{(μ)} u)(t) = p ∫_0^t k(t − τ) e^{pτ} dτ = p e^{pt} ∫_0^t k(s) e^{−ps} ds;

using the expression (2.4) for k(s) and the identity ∫_0^t s^{−α} e^{−ps} ds = p^{α−1} γ(1 − α, pt) ([28], 1.3.2.3), where γ is an incomplete gamma function, we find the desired representation.

Theorem 4.2. (i) The fundamental solution Z(t, x) satisfies the subordination identity. For all t > 0, x ≠ 0, Z(t, x) is non-negative, and

∫_0^∞ e^{−pt} v(t, x) dt = K(p) ∫_0^∞ e^{−upK(p)} (4πu)^{−n/2} e^{−|x|²/(4u)} du.

The homogeneous equation: let us consider the equation (4.1) with the initial condition

u(0, x) = ϕ(x),   x ∈ R^n. (5.1)

(5.12) Let d be a small positive number, and let d⃗ = (0, …, d, …, 0), with d at the k-th place.

Proposition 5.2. Let μ satisfy the conditions of Theorem 4.1. Then, uniformly in t ∈ (0, T], …

Theorem 5.3. Under the above assumptions regarding f, and the assumptions of Theorem 4.1 regarding μ, the function (5.19) is a classical solution of the problem (5.17)-(5.18), bounded near the origin in t for each x ∈ R^n.

For the equation (6.1), the analog of the maximum principle mentioned in Sect. 6.1 reads: let c(t, x) − λ ≤ 0 for t ∈ [0, T], x ∈ G, where G ⊂ R^n is a bounded domain, and suppose that (L − λ)u(t, x) − (A_λ u)(t, x) ≥ 0 for (t, x) ∈ [0, T] × G.

References

[1] H. Bateman and A. Erdélyi, Higher Transcendental Functions, Vol. 2, McGraw-Hill, New York, 1953.
[2] M. Caputo, Mean fractional-order derivatives, differential equations and filters, Ann. Univ. Ferrara, Sez. VII, Sc. Mat. 41 (1995), 73-84.
[3] A. V. Chechkin, R. Gorenflo, and I. M.
Sokolov, Retarding subdiffusion and accelerating superdiffusion governed by distributed order fractional diffusion equations, Phys. Rev. E 66, No. 046129 (2002), 1-7.
[4] A. V. Chechkin, R. Gorenflo, I. M. Sokolov, and V. Yu. Gonchar, Distributed order fractional diffusion equation, Fract. Calc. Appl. Anal. 6 (2003), 259-279.
[5] A. V. Chechkin, J. Klafter, and I. M. Sokolov, Fractional Fokker-Planck equation for ultraslow kinetics, Europhys. Lett. 63 (2003), 326-332.
[6] Ph. Clément and J. A. Nohel, Abstract linear and nonlinear Volterra equations preserving positivity, SIAM J. Math. Anal. 10 (1979), 365-388.
[7] V. A. Ditkin and A. P. Prudnikov, Integral Transforms and Operational Calculus, Pergamon Press, Oxford, 1965.
[8] Yu. A. Dubinskij, The algebra of pseudo-differential operators with analytic symbols and its applications to mathematical physics, Russ. Math. Surv. 37 (1982), 109-153.
[9] S. D. Eidelman, S. D. Ivasyshen, and A. N. Kochubei, Analytic Methods in the Theory of Differential and Pseudo-Differential Equations of Parabolic Type, Birkhäuser, Basel, 2004.
[10] S. D. Eidelman and A. N. Kochubei, Cauchy problem for fractional diffusion equations, J. Diff. Equat. 199 (2004), 211-255.
[11] M. A. Evgrafov, Analytic Functions, Saunders, Philadelphia, 1966.
[12] M. V. Fedoryuk, Asymptotics. Integrals and Series, Nauka, Moscow, 1987 (Russian).
[13] W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 2, Wiley, New York, 1971.
[14] A. Friedman, Partial Differential Equations of Parabolic Type, Prentice-Hall, Englewood Cliffs, NJ, 1964.
[15] R. Gorenflo and F. Mainardi, Simply and multiply scaled diffusion limits for continuous time random walks, J. Phys.: Conf. Ser. 7 (2005), 1-16.
[16] R. Gorenflo and F. Mainardi, Fractional relaxation of distributed order, in: M. M. Novak (ed.), Complex Mundi. Emergent Patterns in Nature, World Scientific, Singapore, 2006, pp. 33-42.
[17] L. Ya. Kobelev and Ya. L. Kobelev, The fractional derivatives with orders as functions depending on the variable of integration, in: Proc. FDA'04: Fractional Differentiation and Its Applications, Bordeaux, France, July 19-21, 2004, pp. 132-136.
[18] A. N. Kochubei, A Cauchy problem for evolution equations of fractional order, Differential Equations 25 (1989), 967-974.
[19] A. N. Kochubei, Fractional-order diffusion, Differential Equations 26 (1990), 485-492.
[20] E. M. Landis, Second Order Equations of Elliptic and Parabolic Type, AMS, Providence, 1998.
[21] C. F. Lorenzo and T. T. Hartley, Variable order and distributed order fractional operators, Nonlinear Dynamics 29 (2002), 57-98.
[22] M. M. Meerschaert and H.-P. Scheffler, Stochastic model for ultraslow diffusion, Stoch. Proc. Appl. 116 (2006), 1215-1235.
[23] R. Metzler and J. Klafter, The random walk's guide to anomalous diffusion: a fractional dynamics approach, Physics Reports 339 (2000), 1-77.
[24] R. Metzler and J. Klafter, The restaurant at the end of the random walk: recent developments in the description of anomalous transport by fractional dynamics, J. Phys. A 37 (2004), R161-R208.
[25] M. Naber, Distributed order fractional subdiffusion, Fractals 12 (2004), 23-32.
[26] A. M. Nakhushev, Fractional Calculus and its Applications, Fizmatlit, Moscow, 2003 (Russian).
[27] F. W. J. Olver, Asymptotics and Special Functions, Academic Press, New York, 1974.
[28] A. P. Prudnikov, Yu. A. Brychkov, and O. I. Marichev, Integrals and Series. Vol. 1: Elementary Functions, Gordon and Breach, New York, 1986.
[29] A. P. Prudnikov, Yu. A. Brychkov, and O. I. Marichev, Integrals and Series. Vol. 2: Special Functions, Gordon and Breach, New York, 1986.
[30] A. V. Pskhu, Partial Differential Equations of Fractional Order, Nauka, Moscow, 2005 (Russian).
[31] M. Reed and B. Simon, Methods of Modern Mathematical Physics. II: Fourier Analysis, Self-Adjointness, Academic Press, New York, 1975.
[32] E. Ya. Riekstynsh (Riekstiņš), Asymptotic Expansions of Integrals, Vol. 1, Zinatne, Riga, 1974 (Russian).
[33] E. Ya. Riekstynsh (Riekstiņš), Asymptotic Expansions of Integrals, Vol. 2, Zinatne, Riga, 1977 (Russian).
[34] E. Ya. Riekstynsh (Riekstiņš), Asymptotic Expansions of Integrals, Vol. 3, Zinatne, Riga, 1981 (Russian).
[35] S. G. Samko and R. P. Cardoso, Sonine integral equations of the first kind in L_p(0, b), Fract. Calc. Appl. Anal. 6 (2003), 235-258.
[36] S. G. Samko, A. A. Kilbas, and O. I. Marichev, Fractional Integrals and Derivatives: Theory and Applications, Gordon and Breach, New York, 1993.
[37] W. R. Schneider and W. Wyss, Fractional diffusion and wave equations, J. Math. Phys. 30 (1989), 134-144.
[38] I. M. Sokolov, A. V. Chechkin, and J. Klafter, Distributed-order fractional kinetics, Acta Phys. Polon. 35 (2004), 1323-1341.
[39] S. Umarov and R. Gorenflo, Cauchy and nonlocal multi-point problems for distributed order pseudo-differential equations, Z. Anal. Anwend. 24 (2005), 449-466.
Anatoly N. Kochubei (Institute of Mathematics, National Academy of Sciences of Ukraine, Kiev), "Distributed Order Calculus and Equations of Ultraslow Diffusion", arXiv:math-ph/0703046, doi:10.1016/j.jmaa.2007.08.024.
Linked cluster expansion on trees

Deepak Iyer, Department of Physics & Astronomy, Bucknell University, 1 Dent Dr, Lewisburg, PA 17837, USA
Yuyi Wan, Department of Physics and Astronomy, University of Notre Dame, 225 Nieuwland Science Hall, Notre Dame, IN 46556, USA

(Dated: March 15, 2023)

The linked cluster expansion has been shown to be highly efficient in calculating equilibrium and nonequilibrium properties of a variety of 1D and 2D classical and quantum lattice models. In this article, we extend the linked cluster method to the Cayley tree and its boundaryless cousin, the Bethe lattice. We aim (a) to develop the linked cluster expansion for these lattices, a novel application, and (b) to further understand the surprising convergence efficiency of the linked cluster method, as well as its limitations. We obtain several key results. First, we show that for nearest-neighbor Hamiltonians of a specific form, all finite tree-like clusters can be mapped to finite one-dimensional chains. We then show that the qualitative distinction between the Cayley tree and the Bethe lattice appears due to differing lattice constants, a result of the Bethe lattice being boundaryless. We use these results to obtain the explicit closed-form formula for the zero-field susceptibility over the entire disordered phase, up to the critical point, for Bethe lattices of arbitrary degree; remarkably, only 1D chain-like clusters contribute. We also obtain the exact zero-field partition function for the Ising model on both trees with only the two smallest clusters, similar to the 1D chain. Finally, these results achieve a direct comparison between an infinite lattice with a non-negligible boundary and one without any boundary, allowing us to show that the linked cluster expansion eliminates boundary terms at each order of the expansion, answering the question about its surprising convergence efficiency.
We conclude with some ramifications of these results, and possible generalizations and applications.

I. INTRODUCTION

Amongst the wide variety of lattice models used in statistical mechanics, a small handful are exactly solvable in the thermodynamic limit [1].
Examples include the 2D Ising model on a square lattice, various "ice" models, and some quantum models in one dimension such as the Heisenberg model, the nonlinear Schrödinger equation, and some relativistic models [2,3]. Whereas these models have demonstrated surprisingly wide applicability, in many situations we are forced to lift certain assumptions, rendering them no longer exactly solvable. Besides, even in cases where exact solutions are available, not all physical quantities can be calculated via closed form expressions, or easily translated into an experimentally relevant language [3]. We are then left with approximate solutions. The success of an approximation method often has to do with the underlying physics. For instance, it is notoriously hard to obtain good approximations for long range order, precisely because in order to capture long range correlations, one needs large system sizes, and any approximation that relies on truncating the system size can only give us hints of what long range order might lie beyond [4]. Perturbative methods that work directly on infinitely large systems can overcome this issue, but often do not allow easy access to the parameter regimes where long range order appears. Indeed, strongly correlated physics is perhaps the most elusive physics to effectively model. Similarly, strongly out of equilibrium physics like a quantum quench is often intractable because simple low-energy approximations fail [5]. The most basic approximation method for a lattice model relies on studying the properties of small systems as a function of the system size and carrying out an appropriate scaling, or attempting an extrapolation from a trend. Simple extrapolations will by definition fail to capture singularities, which indicate phase transitions, since these are sudden deviations from the behavior away from the singularity. Nevertheless, these methods are very effective away from a phase transition when we are deep in a particular phase. 
Other methods need to and can be employed in combination to recognize where these phase boundaries lie [1]. Within such finite size approximations, the particular statistical ensemble and boundary conditions used play a strong role in convergence, with open boundary conditions giving rise to O(1/N ) errors, where N is the system size, coming from the boundary of each finite size system [6]. Further, cluster methods such as the linked cluster expansion [7][8][9] seem to do even better [6]. However, reasons for the latter are not known. In this article, we resolve this outstanding issue by showing that the linked cluster expansion eliminates the boundary contribution at each order of the expansion. The linked cluster expansion has found tremendous application in classical and quantum systems, especially in recent studies pertaining to their dynamics [10][11][12][13], as well as in studies of disordered or inhomogeneous systems [11,12,14,15] (see articles cited in these references for earlier work) and periodically driven systems using Floquet Hamiltonians [16], and has proved to be a remarkably effective method for approximating lattice models. At an intuitive level, link cluster expansions of the kind used here operate by singling out the "new" contributions to an extensive quantity at any stage of the finite size approximation, by effectively canceling out arXiv:2303.07754v1 [cond-mat.stat-mech] 14 Mar 2023 contributions that are merely appearing from the smaller systems embedded in a larger system. For example, if we know the physics for a system size N 1 , and all of the physics in a system of size N 2 appears due to the multiple copies of the smaller system, then at the higher level, the linked cluster expansion gives us a zero contribution. It appears to be a more efficient method to "extract the physics" at smaller system sizes as has been observed in the works referenced above. 
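The O(1/N) boundary error can be made concrete in the simplest case: the open zero-field Ising chain has the textbook closed form Z_N = 2^N cosh^{N-1}(βJ), so the free-energy density misses its bulk limit by exactly log cosh(βJ)/N, a pure boundary term. A minimal numerical sketch (the coupling value is arbitrary, chosen only for illustration):

```python
import math

def log_Z_chain(N, K):
    # Open zero-field Ising chain: Z_N = 2^N cosh(K)^(N-1), with K = beta*J.
    return N * math.log(2.0) + (N - 1) * math.log(math.cosh(K))

K = 0.7  # illustrative value of beta*J
f_bulk = math.log(2.0) + math.log(math.cosh(K))  # per-site limit as N -> infinity

for N in (10, 20, 40, 80):
    err = f_bulk - log_Z_chain(N, K) / N
    # N * err is constant (= log cosh K): the error is a pure 1/N boundary term
    print(N, err * N)
```

Multiplying the error by N returns the same constant at every size, which is exactly the open-boundary effect the cluster methods discussed below manage to avoid.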
In this article, we continue exploring the linked cluster expansion (LCE) in the context of exponentially growing tree lattices, and study if the above improvements in convergence efficiency prevail. The systems so far studied using the linked cluster expansions are regular lattices, occasionally with disorder. From the physical standpoint, tree lattices are unusual given their exponentially growing structure and in the case of a Cayley tree, the presence of a boundary that has as many vertices as the entire bulk -this latter property upends the common wisdom that in the thermodynamic limit, the boundary does not significantly contribute to the bulk properties. The Bethe lattice, on the other hand, does not have a boundary at all, and looks the same from every vertex. Despite these unusual properties, they have proved to be exceptionally useful lattices to study several models on. The Ising model on the Bethe lattice bears similar thermodynamics to the mean field approximation, and has the same critical exponents [1]. More recently, the Bethe lattice has proved useful in studies of Anderson and many-body localization [17][18][19]. Cayley trees and Bethe lattices have been extensively studied and find application in a variety of problems [20]. Tree lattices are often studied using simple finite size approximations or self-similarity based methods. The latter can be used to provide implicit expressions for the magnetization of the Ising model in both ordered and disordered phase, as well as reveal information about the critical point via a set of exact self-consistent equations that can be solved numerically to very high accuracy [1]. Our goal in this article is twofold -to study microscopically how the linked cluster expansion works in the context of a simple nearest neighbor model on a Cayley tree and its infinite/boundaryless and rootless sibling the Bethe lattice, and also to understand its surprising convergence efficiency. 
In the following sections, we systematically develop the linked cluster expansion on trees, establish equivalences between finite trees and one dimensional chains, and calculate the exact zero-field partition function for the Ising model on both types of trees. We then go on to study the weak field approximation with a hope to extract the critical temperature and indeed show that this is possible within the linked cluster expansion framework, showing a first example of a model where the N = 2 system is capable of giving us the critical point and the exact formula for the susceptibility at zero field. We use these results that allow us to compare how the linked cluster expansion operates on the Cayley tree and the Bethe lattice to conclude that the convergence efficiency of the linked cluster expansion is because it eliminates boundary contributions (known to be the source of poor convergence in systems with open boundary conditions [6]) at each stage of the expansion. II. DEFINITIONS Cayley trees and Bethe lattices are tree graphs, i.e., graphs that are connected and do not have any loops. In other words, it is not possible to make a circuit and return to the starting point (vertex) without retracing one or more edges. The absence of loops is crucial from a physical standpoint. As an example, the reason the Ising model on a square lattice differs from the mean-field approximation on a lattice with the same vertex degree is because of the presence of loops; without the loops the model can be studied using a Bethe-Peierls approximation of the appropriate vertex degree, and gives different critical exponents. Note that a Hamiltonian with next nearest neighbor hopping or interaction fundamentally destroys the tree structure by creating loops; we do not consider such models. In other words, we assume a Hamiltonian that has only nearest neighbor interactions or hopping. An m-Cayley tree is constructed by starting with one vertex and drawing m ≥ 3 edges out from it. 
From each of these new vertices in the first "shell", m − 1 new edges emerge (for a total of m edges at each vertex). The Cayley tree therefore grows symmetrically and can be terminated at any shell. The outermost shell has vertices that are attached to only one edge each. It is finite, and one can meaningfully ask a question about the infinite or thermodynamic limit. An m-Bethe lattice, on the other hand, has no center (root vertex) and no boundary. It is a connected graph where every vertex is attached to m edges without creating any loops. The graph is therefore entirely self-similar and appears the same from every vertex. It is infinite, and there is no meaningful finite subset of it [20]. Nevertheless, as we show below, we can use the linked cluster expansion, which relies on computing properties on progressively growing finite clusters. It is critical to note that a simple finite size extrapolation based on finite clusters is unreliable and will generally fail on the Bethe lattice, since it does not appropriately account for the absence of a boundary.

III. PARTITION FUNCTIONS ON FINITE TREE GRAPHS

On a finite tree graph like the Cayley tree, with N vertices there are always N − 1 edges, since growing the lattice always involves adding one or more vertices and an equal number of edges. Consider a classical nearest neighbor spin Hamiltonian given by

H = Σ_{⟨ij⟩} H_{ij}(s_i, s_j). (1)
In what follows, we restrict ourselves to Hamiltonians of the form H = ij H ij (|s i − s j |).(3) The reason for the restriction will become clear in the theorem below-in short, it ensures that all possible values of H ij can be obtained by changing only one spin in the pair. For now, we note that the 1D spin-1/2 Ising model can indeed be cast into the above form as H ij = J[(s i − s j ) 2 /2 − 1]. Other examples of Hamiltonians that have this form include the standard q-state Potts model, given by H ij = −Jδ sisj , and its cyclic form, given by H ij = −J cos[2π(s i − s j )/q]. The theorem, however, does not apply to the spin-1 Ising model, for example, which cannot be cast into this form. With this constraint on H, we show that the partition function for a given H is identical on all tree graphs with the same number of vertices N . Proof. We first note that for the Hamiltonians we consider, the value of the Hamiltonian depends only on the edge set e ij = H ij (s i , s j ). We show the result by constructing the tree with the vertex set {v i } from a 1D chain by maintaining the same edge set while ensuring that there is a one-one mapping between the vertex set of the chain and the vertex set of the tree. Consider now a specific configuration of spins (vertices) s 1 , . . . , s N of a 1D chain. This corresponds to a specific edge configuration e 12 , . . . , e N −1,N given by the Hamiltonian H ij . We show that there is another vertex configuration v 1 , . . . , v N on the desired tree graph that leads to the same edge configuration (and therefore the same value of the Hamiltonian) that obtains from the rearrangement of bonds produced by transforming the 1D chain into the tree. The finite 1D chain Hamiltonian is given by H = N −1 j=1 H j,j+1(4) We define a "move" M i that takes the last available edge from the original 1D chain and connects it to the i-th vertex, retaining the labeling of the original chain. 
For example, M_2 would move the last edge and join it to the second vertex, producing one vertex of degree 3. The Hamiltonian after this move becomes

    H(M_2) = \sum_{j=1}^{N-2} H_{j,j+1} + H_{2,N}.    (5)

Any tree can thus be constructed by a sequence of moves M_i, and after every move we can restore the edge set by making one change to the vertex set, ensuring that the mapping is one-to-one. In this way, every vertex set of the 1D chain goes to an equivalent vertex set of any tree with the same edge set. We have therefore established, by construction, a one-to-one mapping between vertex sets of the 1D chain and those of any tree that leaves the edge set invariant. This implies that there is a mapping between a 1D chain Hamiltonian and a corresponding tree Hamiltonian that has the same numerical value, but possibly a different vertex set. Note that the equivalence above is broken by any term in the Hamiltonian that depends on the vertex set, such as an external magnetic field, or if the Hamiltonian cannot be put into the form of Eq. (3).

Proof. This follows from Theorem 1 and the fact that the grand partition functions sum over all vertex (spin) configurations. Note that the one-to-one mapping in the above construction ensures that each of the q^N configurations is counted only once, and the constraint that the Hamiltonian depends only on the edge set ensures that a different correspondence between energies (numerical values of the Hamiltonian) and spin configurations on the tree, relative to the chain, does not affect the result. We reiterate that this latter condition is violated by terms such as an external magnetic field, as considered in Section V. In the following, this equivalence will become central to some of the results we derive using the linked cluster expansion.
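Theorem 1 and its corollary are straightforward to verify by brute force on small graphs. The sketch below (an illustration we add; the specific graphs and function names are our own assumptions) evaluates Eq. (2) on a 5-vertex chain and on a 5-vertex branched tree, for the spin-1/2 Ising form H_{ij} = J[(s_i - s_j)^2/2 - 1] and for the 3-state Potts model:

```python
import itertools
import math

def Z(edges, n, spin_values, beta, pair_energy):
    """Grand partition function of Eq. (2): sum over all spin configurations."""
    return sum(
        math.exp(-beta * sum(pair_energy(s[i], s[j]) for i, j in edges))
        for s in itertools.product(spin_values, repeat=n)
    )

J, beta = 1.0, 0.6
ising = lambda a, b: J * ((a - b) ** 2 / 2 - 1)     # spin-1/2 Ising, s_i = +/-1
potts = lambda a, b: -J * (1.0 if a == b else 0.0)  # 3-state Potts

chain5 = [(0, 1), (1, 2), (2, 3), (3, 4)]  # 1D chain on 5 vertices
tree5 = [(0, 1), (0, 2), (0, 3), (3, 4)]   # branched tree on 5 vertices

print(Z(chain5, 5, (-1, 1), beta, ising), Z(tree5, 5, (-1, 1), beta, ising))
print(Z(chain5, 5, (0, 1, 2), beta, potts), Z(tree5, 5, (0, 1, 2), beta, potts))
```

Both pairs of numbers agree, as the corollary requires; adding a vertex-dependent term such as a magnetic field would break the equality, as noted above.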
Nevertheless, caution is warranted when it comes to tree graphs: our results above imply that the zero-field Ising partition function on the Cayley tree (which has a well-defined infinite-volume limit) must be identical to that of the 1D model. This is indeed true; nevertheless, the Cayley tree shows a finite-temperature critical point, as revealed by the zero-field susceptibility [21].

IV. COUNTING CLUSTERS

The linked-cluster expansion requires an enumeration of the number of embeddings of a subgraph H in a graph G, also known as a lattice constant [22]. Following Sykes et al. [7], the weight of a particular cluster (graph) c in the expansion is given by

    W_c(O) = O(c) - \sum_{s \subset c} M_s W_s(O),    (6)

where the sum runs over all subgraphs s that can be embedded in the cluster c. In this expression, O corresponds to any extensive observable, such as the logarithm of the partition function or quantities that can be derived from it, and M_s corresponds to the multiplicity of the subgraph s in the graph c. This quantity, also known as a lattice constant, enumerates the number of ways in which s can be embedded in c. Equation (6) then gives us an iterative procedure in which the weight of the smallest cluster equals the value of the observable on it; the other weights can be obtained sequentially. Once the weights are obtained, the value of the observable per unit volume follows from

    \lim_{N \to \infty} \frac{O_N}{N} = \sum_c M_c W_c,    (7)

where the M_c are multiplicities per unit volume of the infinite system. In other words, the M_c enumerate how many ways the cluster c can be embedded in a much larger system of N' >> N vertices, divided by the number of vertices (or a corresponding volume-like quantity) of that larger system. For the classical Ising model on a 1D chain (with no magnetic field), one can show that W_j = 0 for j >= 3 for O = log Z [6, 23].
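For the 1D chain, the iterative procedure of Eq. (6) is easy to implement directly. The following sketch (our own illustration, using the fact noted below that there are n + 1 - j embeddings of a j-chain in an n-chain) computes the weights of O = log Z for the zero-field spin-1/2 Ising chain by brute force, confirming W_1 = log 2, W_2 = log cosh K, and W_j = 0 for j >= 3:

```python
import itertools
import math

def log_Z_chain(n, K):
    """log of the zero-field Ising partition function on an open n-site chain (brute force)."""
    return math.log(sum(
        math.exp(K * sum(s[i] * s[i + 1] for i in range(n - 1)))
        for s in itertools.product((-1, 1), repeat=n)
    ))

def chain_weights(n_max, K):
    """Weights of Eq. (6) for chain clusters: W_n = log Z_n - sum_j (n + 1 - j) W_j."""
    W = []
    for n in range(1, n_max + 1):
        W.append(log_Z_chain(n, K) - sum((n + 1 - j) * W[j - 1] for j in range(1, n)))
    return W

K = 0.8
W = chain_weights(5, K)
print(W[0] - math.log(2))             # W_1 - log 2: vanishes up to roundoff
print(W[1] - math.log(math.cosh(K)))  # W_2 - log cosh K: vanishes up to roundoff
print(W[2], W[3], W[4])               # W_3, W_4, W_5: vanish up to roundoff
```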
Given Theorem 1, it follows that this statement is true on all trees, for all quantities that can be derived from the partition function, since the clusters used to calculate the weights are all finite. We are interested in calculating a generic linked cluster expansion for a Bethe lattice. Here we use the distinction made by Baxter between a Bethe lattice and a Cayley tree. On a Cayley tree, the number of boundary vertices (those attached to only one edge) scales with the number of vertices: for a tree of degree 3 with N vertices, the number of boundary points is N/2 + 1. The boundary therefore does not become irrelevant in the infinite-volume limit. The Bethe lattice, on the other hand, does not have a boundary and cannot be thought of as the "bulk" of a Cayley tree, since there is no consistent way to terminate this "bulk"; it is always infinitely large. Nevertheless, we will see that the linked cluster expansion lets us treat both the Bethe lattice and the Cayley tree in the infinite-volume limit.

The computation of the multiplicities of the various clusters needs to be carried out separately for the Cayley tree and the Bethe lattice. To motivate this, we first consider the 1D chain. In a 1D chain of N vertices, there are always N + 1 - j ways to embed a j-chain, so that M_j = N + 1 - j. The linked cluster expansion then becomes straightforward. From Eq. (6), we get

    W_N(O) = O_N - \sum_{j=1}^{N-1} (N + 1 - j) W_j(O).    (8)

The result for the observable per unit volume is then given by Eq. (7). Here we consider the "infinite" system to have size N' >> N. Since the M_c are the multiplicities of clusters embedded in the infinite system per unit volume, we get M_j = (N' + 1 - j)/N' -> 1, since j <= N << N'. Combining this with Eq. (7), we get a particularly simple result for 1D chains,

    \lim_{N \to \infty} \frac{O_N}{N} = \sum_{j=1}^{\infty} W_j.    (9)

It is critical to note that this result is applicable to all models on a 1D chain, quantum or classical, since it does not assume an underlying Hamiltonian or a specific observable O.

Consider now an infinite (N -> infinity vertices) 2D square lattice. Figure 3 shows our notation for some graphs. For c = g_1, there are N ways to embed the cluster, and we obtain M_{g_1} = 1. For c = g_2, there are ~2N ways to embed it, because of the vertical and horizontal edges, so we end up with M_{g_2} = 2 in the limit. In this fashion, the multiplicity has to be computed for every cluster c. Table I shows the multiplicities for some clusters embedded in infinite 2D square and triangular lattices. Again, we note that these values are independent of the Hamiltonian, and of whether it is quantum or classical; they depend only on the structure of the lattice.

We now proceed to obtain the multiplicities M_c for the Bethe lattice and the Cayley tree. Before we do so, it is useful to compare the average connectivity of the two lattices. For an m-Cayley tree, the bulk vertices have degree m and the boundary vertices have degree 1. For m = 3 and N vertices, the N/2 - 1 bulk vertices have degree 3 and the N/2 + 1 boundary vertices have degree 1, giving an average connectivity

    c_{CT} = \frac{1}{N}\left[3\left(\frac{N}{2} - 1\right) + \left(\frac{N}{2} + 1\right)\right] = 2 - \frac{2}{N};    (10)

a similar counting of bulk and boundary vertices recovers this result for arbitrary m. For a Bethe lattice, however, there is no boundary, and all vertices have the same degree m, giving

    c_{BL} = m.    (11)

In the thermodynamic limit of the Cayley tree, we approach c = 2 for any m, which is identical to the connectivity of a 1D chain. This is one way of understanding why the Cayley tree has the same partition function as the 1D chain; another is the equivalence established in Sec. III. Below we develop the linked cluster expansion for these m = 3 lattices and obtain the partition function for the Cayley tree and the Bethe lattice.

A. Cayley tree

First, we calculate multiplicities for the Cayley tree and show that we indeed reproduce the result of a 1D lattice. Consider a Cayley tree with N vertices. We have M_{g_1} = N/N -> 1.
For g_2, we count a total of N - 1 edges (each edge is a g_2), giving us M_{g_2} = (N - 1)/N -> 1 in the limit of large N. For g_3, each vertex offers three ways of embedding g_3, except the boundary vertices; we therefore have to subtract the N/2 + 1 boundary vertices, since one cannot embed a g_3 centered on a boundary vertex. This gives us M_{g_3} = 3(N/2 - 1)/N -> 3/2. The calculation becomes more tedious from here onwards due to the increasing complexity of the clusters. However, as before, the weights W_3 and beyond vanish for the classical spin-1/2 Ising model, so these multiplicities are irrelevant, and we get the partition function per unit volume

    -\beta f = \lim_{N \to \infty} \frac{\log Z}{N} = M_1 W_1 + M_2 W_2 = \log 2 + \log\cosh(\beta J) = \log[2\cosh(\beta J)],    (12)

where f is the free energy per unit volume (number of vertices). This result is identical to that of the 1D chain. Nevertheless, we note that the model has a known critical point, which only becomes manifest when we compute the zero-field susceptibility.

B. Bethe lattice

The Bethe lattice is, in a sense, already in the thermodynamic limit, since it does not have a boundary. As seen from the average connectivity, we cannot treat the Bethe lattice as the thermodynamic limit of the Cayley tree. There is no consistent way to define the partition function of a finite part of the Bethe lattice, since that notion is ill-defined. The LCE, however, gives us the partition function per unit volume directly, so there is some hope that we can effectively divide by the already infinite volume, since we do not take a limit in the process. An application of the LCE gives a different multiplicity, M_{g_2} = 3/2, on the Bethe lattice: each vertex is connected to three edges, and each edge is double counted. We cannot use the "edge counting" method we used for the Cayley tree, because one cannot terminate the Bethe lattice. More generally, we cannot calculate the total number of ways of embedding a given cluster in a "finite but large" graph and then divide by the volume and take the limit.
A "finite but large" graph simply does not exist for the Bethe lattice. In order to calculate multiplicities, we have to work "intensively," by counting the number of ways to embed a given cluster at a given vertex and then correcting for any multiple counting. For lattices whose boundary is negligible in the limit (the 1D chain, the square lattice, etc.), the two methods coincide. Further, since W_3 and higher weights are zero, we do not need to calculate higher multiplicities (see Table II for some of these; we use them in the susceptibility calculation in Section V), and we end up with a partition function per unit volume given by

    -\beta f = \log 2 + \frac{3}{2} \log\cosh(\beta J),    (13)

where f is the free energy per site. This, remarkably, is the correct free energy for the Ising model on the Bethe lattice [1]. The result is an analytic function of beta, and therefore one might naively assume that there is no phase transition for beta < infinity. However, this is a known oddity of the Ising model on the Bethe lattice: the model indeed has a finite-temperature phase transition that only becomes manifest when one computes the zero-field magnetic susceptibility. We note generally that in an Ising model with a phase transition, at T < T_c in the absence of an external magnetic field, there is nothing to break the symmetry and determine whether the majority of the spins point up or down. On a finite lattice, one could pin the boundary spins, but we do not have that luxury on the Bethe lattice. The only option left is to calculate the free energy in the presence of a magnetic field, find the susceptibility, and study it for non-analyticity.

V. ISING MODEL WITH A MAGNETIC FIELD

We now turn on a small magnetic field H << J and study the free energy in the presence of this small field:

    H = -\sum_{\langle ij \rangle} J s_i s_j - H \sum_j s_j.    (14)

For J > 0, the model is ferromagnetic. At zero temperature, all spins are aligned and point along the external field.
Note that in the presence of a magnetic field, the conditions of Theorem 1 no longer hold, and in general a tree is not equivalent to a 1D chain. In principle, then, the partition function on all branched clusters has to be calculated separately, and one does not generally expect their weights to vanish. Nevertheless, a simplification occurs at lowest order in the external field. Since the free energy has to be an even function of H, we keep terms to O(H^2) and discard the rest. We begin with the 1D chain.

A. 1D chain

We study the weights for the graphs g_1, g_2, and g_3. Denoting \log Z_g = -\beta F_g for a graph g, and switching to the variables K = \beta J and h = \beta H, we have

    -\beta F_{g_1} = \log 2 + \log\cosh(h) \approx \log 2 + \frac{h^2}{2},
    -\beta F_{g_2} \approx \log[4\cosh(K)] + h^2[1 + \tanh(K)],
    -\beta F_{g_3} \approx \log[8\cosh^2(K)] + h^2\left[\frac{3}{2} + 2\tanh(K) + \tanh^2(K)\right].    (15)

Calculating the weights, we get

    W_{g_1} = \log 2 + \frac{h^2}{2}, \quad W_{g_2} = \log\cosh(K) + h^2\tanh(K), \quad W_{g_3} = h^2\tanh^2(K).    (16)

In this case, W_3 does not vanish at O(h^2). This is to be expected. At the next order, to the same order in h, we get

    W_{g_4} = h^2\tanh^3(K).    (17)

Carrying on, at generic order n >= 3, we get

    W_{g_n} = h^2\tanh^{n-1}(K),    (18)

giving

    -\beta f_{1D} = \sum_{j=1}^{\infty} W_j = \log[2\cosh(K)] + h^2\left[\frac{1}{2} + \sum_{j=1}^{\infty}\tanh^j(K)\right] = \log[2\cosh(K)] + \frac{h^2}{2}\, e^{2K},    (19)

where we can sum the series for all 0 <= K < infinity, since tanh(K) < 1 in this range. We can obtain the low-field magnetization density from

    m_{1D} = -\beta \frac{\partial f_{1D}}{\partial h} = h\, e^{2K} = \frac{H}{k_B T}\, e^{2J/k_B T}.    (20)

There are clearly no singularities in T at h = 0^+, consistent with the absence of an ordered phase at T > 0 and zero field for the 1D chain.

B. Cayley tree

Considering only the chain-like clusters (a direct calculation of the branched clusters at this order shows that they do not contribute to the weights; see Appendix A), we use the weights derived above and calculate the free energy density from the multiplicities in Table II to get

    -\beta f_{CT} = \log[2\cosh(K)] + \frac{h^2}{2}\left[1 + 2\tanh(K) + \left(\frac{3}{2} + 2\tanh(K)\right)\sum_{j=1}^{\infty} 2^j \tanh^{2j}(K)\right].    (21)

The above sum converges for 2\tanh^2(K) < 1.
We get

    -\beta f_{CT} = \log[2\cosh(K)] + \frac{h^2}{2}\,\frac{[1 + \tanh(K)]^2}{1 - 2\tanh^2(K)}.    (22)

The above expression has a singularity at K_{c,CT} = \tanh^{-1}(1/\sqrt{2}), indicating a critical point. This is in fact a well-known result, and what we see here is only the first of a chain of critical points from K_{c,CT} to K_{c,BL}, obtained in the next section [24]. The other critical points appear at higher order in h and for K < K_{c,CT}. At higher order, however, the branched clusters cannot be neglected, and it is not straightforward to obtain the other singularities analytically using this method.

C. Bethe lattice

First, we note that the multiplicity of g_3 on the Bethe lattice is M_{g_3} = 3, since we can embed g_3 in 3 ways at every vertex. For g_4, we note that starting at any vertex of the Bethe lattice, we can choose a "path" for g_4 in 3 x 2 x 2 ways. Since the opposite path exists starting at a different vertex, each path is double counted, giving M_{g_4} = 6. This can be immediately generalized to all the chain graphs: M_{g_n} = 3 x 2^{n-3} for n >= 2. The graphs with branches are a little more complicated. The branched four-vertex graph (the three-edge star) embeds uniquely at every vertex, and therefore has multiplicity 1. Note that for the chain graphs the multiplicities rise exponentially, whereas for the branched graphs each newly introduced branch lowers the multiplicity, because the Bethe lattice has a very specific branching structure. For a fixed branch structure, the multiplicities grow as we make the chain longer; each branched structure then produces its own cascade of chains. We begin by considering only the chain graphs since, as for the Cayley tree, the branched clusters do not contribute at this order in h. This leads to a free energy given by

    -\beta f_{BL} = \log 2 + \frac{3}{2}\log\cosh(K) + h^2\left[\frac{1}{2} + \frac{3}{4}\sum_{j=1}^{\infty} (2\tanh K)^j\right].    (23)

The sum in the equation above can only be carried out for 0 <= K < \tanh^{-1}(1/2), signaling the possibility of a finite-temperature phase transition.
For K in this range, we get

    -\beta f_{BL} = \log 2 + \frac{3}{2}\log\cosh(K) + \frac{h^2}{2}\,\frac{1 + \tanh(K)}{1 - 2\tanh(K)}.    (24)

The corresponding zero-field susceptibility is given by

    \chi_{BL} = \beta\,\frac{1 + \tanh(K)}{1 - 2\tanh(K)}.    (25)

As indicated above, the zero-field susceptibility has a singularity as K_c = \beta_c J -> \tanh^{-1}(1/2) from below, precisely the temperature of the known phase transition of the Ising model on the Bethe lattice [1]. For T -> infinity, we retrieve the result of the 1D chain, as one can check from a high-temperature expansion: at infinite temperature the spins are all uncorrelated, and the magnetization per unit volume is proportional to the external field and inversely proportional to the temperature. In fact, we recover the correct low-field magnetization throughout the unordered phase T > T_c (see Fig. 5). Our calculation thus far cannot predict the free energy of the ordered phase or the correct magnetization discontinuity at the critical point; indeed, our formula gives the unphysical result that the magnetization density diverges at the critical point. Nevertheless, we are able to extract the critical temperature.

For an m-Bethe lattice, the chain graphs g_n for n >= 2 have multiplicities M_{g_n} = m(m-1)^{n-2}/2. Considering again only the chain-like graphs (the branched clusters do not contribute at this order for any m), we find the corresponding free energy

    -\beta f_{mBL} = \log 2 + \frac{m}{2}\log\cosh(K) + \frac{h^2}{2}\,\frac{1 + \tanh(K)}{1 - (m-1)\tanh(K)}.    (26)

We therefore obtain a critical temperature of

    \beta_c J = \tanh^{-1}\left(\frac{1}{m-1}\right) = \frac{1}{2}\log\left(\frac{m}{m-2}\right),    (27)

where the second equality is the more familiar form of this expression. The linked-cluster expansion therefore not only gives us the critical point, but also does much better than a high-temperature expansion, giving us the low-field result in the entire disordered phase for arbitrary m with minimal effort.
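The two forms of Eq. (27), and the divergence of the susceptibility of Eq. (25) as K approaches K_c from below, can be checked numerically with a few lines (our own illustrative sketch; the function names are assumptions, not from the paper):

```python
import math

def Kc_bethe(m):
    """Critical coupling K_c = beta_c * J of the Ising model on the m-Bethe lattice, Eq. (27)."""
    return math.atanh(1.0 / (m - 1))

def chi_bethe(K, beta=1.0):
    """Zero-field susceptibility of Eq. (25) for m = 3, valid in the disordered phase K < K_c."""
    t = math.tanh(K)
    return beta * (1 + t) / (1 - 2 * t)

for m in (3, 4, 6, 10):
    # tanh^{-1}[1/(m-1)] and (1/2) log[m/(m-2)] agree to machine precision
    print(m, Kc_bethe(m), 0.5 * math.log(m / (m - 2)))

Kc = Kc_bethe(3)
# the susceptibility grows without bound as K approaches K_c from below
print([round(chi_bethe(f * Kc), 2) for f in (0.5, 0.9, 0.99)])
```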
Methods using self-similarity (the general approach to the exact solution) can give the same results (see, e.g., Refs. [21, 24]). The LCE, however, shows us which clusters contribute to the critical point. We reiterate that we are able to reproduce the results of the mean-field Bethe-Peierls approximation for the disordered phase close to the phase transition at small fields, without engaging with the self-similar nature of the lattice (equivalent to the consistency conditions imposed in the Bethe approximation). In fact, we consider only all possible linear chains, and our result derives purely from the combinatorics of placing these linear chains in m-Bethe lattices. The various branched clusters do not appear to contribute to the critical temperature, since it is a zero-field property.

VI. CONCLUSIONS

In this work, we extend the linked cluster expansion to tree graphs and obtain the lattice constants and multiplicities of the various cluster embeddings. In particular, we show how the subtle difference between the Cayley tree and the Bethe lattice can be captured with this method, leading to different results for the corresponding partition functions. The derived lattice constants can be applied to any classical or quantum model on these trees, since they depend only on the lattice structure and not on the Hamiltonians or the observables calculated, thereby laying the groundwork for several future studies that could extend the precision of numerical approximations. The Ising model on the Cayley tree and the Bethe lattice has been extensively studied (despite some confusion in the literature about how these lattices are defined, especially the language surrounding thermodynamic limits), and its critical properties are well known. Nevertheless, our use of the linked cluster expansion in this work has revealed several interesting insights about these models. First, we have shown that for a classical spin Hamiltonian that can be cast in the form of Eq.
(3), any finite tree lattice can be mapped onto an equivalent 1D lattice for the purpose of computing the partition function and all properties derived from it. This result explains why branched clusters and clusters with three or more vertices do not contribute to the LCE for trees in the absence of an external magnetic field. We use this to show that the finite m-Cayley tree in the absence of an external magnetic field is equivalent to a 1D chain with the same number of vertices, and has the same free energy. Nevertheless, in the thermodynamic limit, even though the free energy stays identical to that of the 1D chain, the model develops a nontrivial singularity in the zero-field susceptibility, which can be obtained analytically using the LCE, showing a departure from the 1D chain. We then show that the same method applied to the Bethe lattice yields a different free energy, purely from the different combinatorics of embedding clusters. We see that the linked cluster expansion, despite being a "series"-type approximation method that progressively counts larger finite clusters, is capable of providing correct solutions for models where neither a finite lattice nor a thermodynamic limit is well-defined, indicating that it achieves an elimination of boundary contributions at every order of the expansion. This feature of the LCE is the reason why it overcomes the large [O(1/N)] errors that occur in calculations using a simple finite-size extrapolation based on the grand-canonical partition function in open systems, and why we obtain more rapid convergence to the thermodynamic limit. We reiterate that we have demonstrated a novel way in which the linked cluster expansion can be used, in settings where traditional finite-size extrapolations are fundamentally inapplicable.
We foresee straightforward applications of the developments in this work to quantum models, including disordered models and periodically driven models, thus providing an alternative route to studying complex phenomena that can be modeled using trees.

Fig. 1 shows some illustrations of the two trees. To simplify the visualization, we choose m = 3. In what follows, we also restrict ourselves to m = 3, noting that all results generalize to arbitrary integer m >= 3 (the m = 2 case is the one-dimensional chain).

FIG. 1. A finite Cayley tree with m = 3 and N = 10 vertices (left), and a Bethe lattice with m = 3 (right). The dashed lines signify the infinite continuation of the tree structure.

Theorem 1. For every finite m-tree with a given vertex (spin) configuration (or vertex set) {v_i} and corresponding edge configuration (edge set) {e_{ij}} obtained from a Hamiltonian H_{ij}(v_i, v_j) such as in Eq. (3), there exists an equivalent finite 1D chain with the same edge set, and therefore the same value of the Hamiltonian.

FIG. 2. Example of a series of moves M_j used in Theorem 1 on an N = 6 chain.

Corollary 1. The partition function (defined in Eq. (2)) on any finite tree graph on N vertices is identical to the partition function on a 1D chain with N vertices, for Hamiltonians of the form in Eq. (3).

FIG. 3. Some graphs and the corresponding notation.

FIG. 5. Magnetization as a function of inverse temperature K = \beta J in the disordered phase up to the critical point for m = 3. Note that the two methods indicated give exactly the same results. In this plot, h/K = 0.001. As h/K -> 0, we are able to access regions closer to the critical point.
The orange dotted line marks the critical point. The equality between the two methods holds for m > 3.

Any tree can be formed from a sequence of such moves. If the new edge satisfies e_{2,N} = e_{N-1,N}, then we retain the same edge set. If it does not, we change s_N -> v_N such that e_{2,N} = e_{N-1,N}; we thus obtain a new vertex set s_1, ..., s_{N-1}, v_N. For this to work, we require that all edge values can be achieved by changing one vertex of a pair. For the spin-1/2 Ising model, this requirement is clearly satisfied. For the q-state Potts model, which is a many-to-two mapping, if e_{N-1,N} = -J, we require v_N = s_2 so that e_{2,N} = -J; this can always be arranged. For e_{N-1,N} = 0, we require v_N != s_2. With s_1, ..., s_{N-1} fixed, there are q - 1 equivalent values that s_N can take that result in the same value of H on the chain; this is also true on a tree derived from, say, M_2. Therefore, each vertex configuration of the chain can be mapped onto a unique vertex configuration of the tree, ensuring that we have the same number of configurations with a given energy. The argument, however, fails for the spin-1 Ising model: there, if s_2 = 0, then no matter the value of v_N, we cannot achieve a value different from 0 for e_{2,N}. This failure of the spin-1 Ising model reflects the fact that it does not have a Hamiltonian of the form in Eq. (3), which ensures that all edge values can be achieved by changing one spin of a pair, since for i in {1, ..., q}, {|s_i - s_j|} is the same set for all values of j.

Table II shows the multiplicities for some higher-order clusters.

TABLE II. Lattice constants (multiplicities) for various clusters on the Cayley tree (M_{c,CT}) and the Bethe lattice (M_{c,BL}) with m = 3.

    Cluster           M_{c,CT}                          M_{c,BL}
    single vertex     1                                 1
    edge              1                                 3/2
    3-chain           3/2                               3
    4-chain           2                                 6
    branched star     1/2                               1
    n-chain           2^{n/2-1} (n even, n >= 2);       3 * 2^{n-3} (n >= 2)
                      3 * 2^{(n-5)/2} (n odd, n >= 3)

Appendix A: Branched clusters at low field for m = 3

In Section V B, we noted that at lowest order in h, the branched clusters do not contribute any weight. For completeness, we show a few examples of this.

One-branch chains

FIG. 6. 6-vertex graphs with one branch.

We first consider chains of all lengths that have one branched vertex somewhere along the chain. Fig. 6 shows the two possibilities at n = 6. Starting with the smallest branched cluster, the three-edge star g'_4, we get for the free energy, to O(h^2),

    -\beta F_{g'_4} = 4\log 2 + 3\log\cosh(K) + h^2\left[2 + 3\tanh(K) + 3\tanh^2(K)\right].

The corresponding weight is W_{g'_4} = 0. Similarly, for the five-vertex one-branch cluster g'_5, we get

    -\beta F_{g'_5} = 5\log 2 + 4\log(\cosh K) + h^2\left[\frac{5}{2} + 4\tanh(K) + 4\tanh^2(K) + 2\tanh^3(K)\right],

and the corresponding weight is again zero. This continues to remain true at higher orders, showing that chains extending out on one side do not contribute to the low-magnetic-field free energy. This leaves us with graphs like g'_6 which, one can show, also have zero weight at this order. We therefore conclude that at O(h^2), clusters with a single branch do not contribute to the free energy, and therefore not to the magnetization.

Two-branch chains

A direct calculation of the partition function and the corresponding weights on clusters with two branches reveals that their weights are also zero at O(h^2), leading us to conclude that this is true for all branched chains.

[1] R. J. Baxter, Exactly Solved Models in Statistical Mechanics (Dover Publications, 2007).
[2] H. B. Thacker, "Exact integrability in quantum field theory and statistical systems," Rev. Mod. Phys. 53, 253-285 (1981).
[3] V. E. Korepin, N. M. Bogoliubov, and A. G. Izergin, Quantum Inverse Scattering Method and Correlation Functions (Cambridge University Press, 1993).
[4] J. Oitmaa, C. Hamer, and W. Zheng, Series Expansion Methods for Strongly Interacting Lattice Models (Cambridge University Press, 2006).
[5] A. Mitra, "Quantum quench dynamics," Annu. Rev. Condens. Matter Phys. 9, 245-259 (2018).
[6] D. Iyer, M. Srednicki, and M. Rigol, "Optimization of finite-size errors in finite-temperature calculations of unordered phases," Phys. Rev. E 91, 062142 (2015); Erratum: Phys. Rev. E 96, 039903 (2017).
[7] M. F. Sykes, J. W. Essam, B. R. Heap, and B. J. Hiley, "Lattice constant systems and graph theory," J. Math. Phys. 7, 1557-1572 (1966).
[8] C. Domb, "On the theory of cooperative phenomena in crystals," Adv. Phys. 9, 149-244 (1960); 9, 245-361 (1960).
[9] B. Tang, E. Khatami, and M. Rigol, "A short introduction to numerical linked-cluster expansions," Comput. Phys. Commun. 184, 557 (2013).
[10] M. Rigol, "Quantum quenches in the thermodynamic limit," Phys. Rev. Lett. 112, 170601 (2014).
[11] B. Tang, D. Iyer, and M. Rigol, "Thermodynamics of two-dimensional spin models with bimodal random-bond disorder," Phys. Rev. B 91, 174413 (2015).
[12] B. Tang, D. Iyer, and M. Rigol, "Quantum quenches and many-body localization in the thermodynamic limit," Phys. Rev. B 91, 161109 (2015).
[13] K. Mallayya and M. Rigol, "Numerical linked cluster expansions for quantum quenches in one-dimensional lattices," Phys. Rev. E 95, 033302 (2017).
[14] K. Mallayya and M. Rigol, "Prethermalization, thermalization, and Fermi's golden rule in quantum many-body systems," Phys. Rev. B 104, 184302 (2021).
[15] J. Gan and K. R. A. Hazzard, "Numerical linked cluster expansions for inhomogeneous systems," Phys. Rev. A 102, 013318 (2020).
[16] L. Zhang, V. Khemani, and D. A. Huse, "A Floquet model for the many-body localization transition," Phys. Rev. B 94, 224202 (2016).
[17] R. Abou-Chacra, D. J. Thouless, and P. W. Anderson, "A self consistent theory of localization," J. Phys. C: Solid State Phys. 6, 1734-1752 (1973).
[18] D. M. Basko, I. L. Aleiner, and B. L. Altshuler, "Metal-insulator transition in a weakly interacting many-electron system with localized single-particle states," Ann. Phys. 321, 1126-1205 (2006).
[19] S. Savitz, C. Peng, and G. Refael, "Anderson localization on the Bethe lattice using cages and the Wegner flow," Phys. Rev. B 100, 094201 (2019).
[20] M. Ostilli, "Cayley trees and Bethe lattices: A concise analysis for mathematicians and physicists," Physica A 391, 3417-3423 (2012).
[21] H. Matsuda, "Infinite susceptibility without spontaneous magnetization: Exact properties of the Ising model on the Cayley tree," Prog. Theor. Phys. 51, 1053-1063 (1974).
[22] Sykes et al. [7] make a distinction between a strong embedding and a weak embedding: a strong embedding implies that all edges between vertices present in the graph are present in the cluster, whereas a weak embedding does not require that. The LCE we use relies on weak embeddings.
[23] This result generalizes to the q-state Potts model, as can be verified directly by computing log Z_N for N = 1, 2, 3, 4, ... and calculating the corresponding weights. By obtaining a formula for log Z_N, one can show that W_j = 0 for j >= 3, just as in the spin-1/2 classical Ising model.
[24] E. Müller-Hartmann and J. Zittartz, "New type of phase transition," Phys. Rev. Lett. 33, 893-897 (1974).
Microfluidic tools for assaying immune cell function

Joel Voldman, Massachusetts Institute of Technology

Biography

Joel Voldman is a Professor in the Electrical Engineering and Computer Science Department at MIT. He received the B.S. degree in electrical engineering from the University of Massachusetts, Amherst, in 1995, and the M.S. and Ph.D. degrees in electrical engineering from the Massachusetts Institute of Technology (MIT), Cambridge, in 1997 and 2001, developing bioMEMS for single-cell analysis. Following this, he was a postdoctoral associate in George Church's lab at Harvard Medical School, where he studied developmental biology. In 2002 he returned to MIT as an Assistant Professor in the Electrical Engineering and Computer Science department. In 2004 he was awarded the NBX Career Development Chair; in 2006 he was promoted to Associate Professor, and in 2013 to Professor. Among several awards, he has received an NSF CAREER award, an ACS Young Innovator Award, a Bose Fellow award, the Jamieson Teaching Award, the Smullin Teaching Award, the Quick Faculty Research Innovation Fellowship, and awards for posters and presentations at international conferences. Prof. Voldman's research focuses on developing microfluidic technology for biology and medicine, with an emphasis on cell sorting and stem cell biology. He has developed a host of technologies to arrange, culture, and sort diverse cell types including immune cells, endothelial cells, and stem cells. Current areas of research include recapitulating the induction of atherosclerosis on a microfluidic chip, and using microfluidic tools to study how immune cells decide to attack tumor cells.
He is also interested in translational medical work, such as developing point-of-care drop-of-blood assays for proteins and rapid microfluidic tests for immune cell activation for the treatment of sepsis.

Abstract

Microsystems have the potential to impact biology by providing new ways to manipulate cells and the microenvironment around them. Simply physically manipulating cells or their environment, using microfluidics, electric fields, or optical forces, provides new ways to separate cells and organize cell-cell interactions. Immune cells are of particular interest because of their central role in defending the body against foreign invaders.
As a consequence, many microfluidic devices have been used to study both the basic biology of immune cells as well as to assay them for clinical use. Our lab has developed technologies on both ends of the spectrum, from cell pairing devices able to study information flow in immune cells, to electrical sorting devices for assaying immune cell function in response to disease.

In terms of cell pairing, we have developed two complementary approaches to creating programmed pairs of cells: one uses capture "cups" and a three-step back-and-forth loading procedure to pair thousands of cells in parallel [1,2], and the other uses microfluidic "corrals" to contain cells [3,4] (Figure 1). With these devices we can pair immune cells with each other or with other cells (i.e., tumor cells) to study information flow from first contact to downstream effector functions, elucidating how decision-making occurs in these interactions.

Figure 2. Iso-dielectric separation. A heterogeneous cell population (blue tube) is introduced in a microfluidic device where the cells encounter a spatial gradient in liquid conductivity and a dielectrophoretic force that pushes them across this gradient, until they reach their iso-dielectric point (IDP), where the force goes to zero and the cells cross over. At right are histograms of cell IDPs for unactivated (blue) and activated (red) human neutrophils.

In terms of electrical sorting devices, we have developed microfluidic systems to sort cells based on their intrinsic electrical properties. Electrical properties have previously been correlated with important biological phenotypes (apoptosis, cancer, etc.), but a sensitive and specific separation method has been lacking. We have developed a method called isodielectric separation that uses electric fields to drive cells to the point in a conductivity gradient where they become electrically transparent, resulting in a continuous separation method specific to electrical properties [5-8].
With this method, we have screened the entire genome of an organism to understand the biological basis of electrical properties, finding that the relationship between genetics and intrinsic properties has both intuitive and nonintuitive features.

Figure 1. High-throughput cell pairing and fusion. (A) Device overview, (B) close-up of cell pairing, (C) pairing over the entire array.

References

1. Skelley, A.M., Kirak, O., Suh, H., Jaenisch, R. & Voldman, J. Microfluidic Control of Cell Pairing and Fusion. Nature Methods 6, 147-152 (2009). PMC3251011.
2. Dura, B., Dougan, S.K., Barisa, M., Hoehl, M.M., Lo, C.T., Ploegh, H.L. & Voldman, J. Profiling Lymphocyte Interactions at the Single-Cell Level by Microfluidic Cell Pairing. Nat Commun 6, 5940 (2015).
3. Dura, B., Servos, M.M., Barry, R.M., Ploegh, H.L., Dougan, S.K. & Voldman, J. Longitudinal Multiparameter Assay of Lymphocyte Interactions from Onset by Microfluidic Cell Pairing and Culture. Proc Natl Acad Sci U S A 113, E3599-3608 (2016).
4. Dura, B., Liu, Y. & Voldman, J. Deformability-Based Microfluidic Cell Pairing and Fusion. Lab on a Chip 14, 2783-2790 (2014).
5. Vahey, M.D. & Voldman, J. An Equilibrium Method for Continuous-Flow Cell Sorting Using Dielectrophoresis. Analytical Chemistry 80, 3135-3143 (2008).
6. Vahey, M.D. & Voldman, J. High-Throughput Cell and Particle Characterization Using Isodielectric Separation. Analytical Chemistry 81, 2446-2455 (2009). PMC2675787.
7. Vahey, M.D., Pesudo, L.Q., Svensson, J.P., Samson, L.D. & Voldman, J. Microfluidic Genome-Wide Profiling of Intrinsic Electrical Properties in Saccharomyces Cerevisiae. Lab on a Chip 13, 2754-2763 (2013). PMC3686985.
8. Prieto, J.L., Su, H.-W., Hou, H.W., Vera, M.P., Levy, B.D., Baron, R.M., Han, J. & Voldman, J. Monitoring Sepsis Using Electrical Cell Profiling. Lab on a Chip (2016).
Distribution and Generalized Centre in Planar Nearrings

Tim Boykett ([email protected])
Institute for Algebra, Johannes Kepler University, 4040 Linz, Austria,
and Time's Up Research, Industriezeile 33b, 4020 Linz, Austria

Society of Edinburgh: submitted paper. arXiv:1607.01204v2 [math.RA], 10 Nov 2016.

Abstract

Nearrings are the nonlinear generalization of rings. Planar nearrings play an important role in nearring theory, both from the structural side, being close to generalized nearfields, as well as from an applications perspective, in geometry and combinatorial designs related to difference families. In this paper we investigate the distributive elements of planar nearrings. If a planar nearring has nonzero distributive elements, then it is an extension of its zero multiplier part by an abelian group. In the case that there are distributive elements that are not zero multipliers, this extension splits, giving an explicit description of the nearring. This generalizes the structure of planar rings. We provide a family of examples where this does not occur, the distributive elements being precisely the zero multipliers. We apply this knowledge to the question of determining the generalized centre of planar nearrings, as well as finding new proofs of other older results.

1. Introduction

Nearrings are the nonlinear generalization of rings, having only one distributive law. Planar nearrings are a special class of nearrings that generalize nearfields, themselves a generalization of fields, which play an important role in the structural theory of nearrings [14,20], as well as having important geometric and combinatorial properties [6,10,15]. In this paper we look at the distributive elements of a planar nearring, in some sense the ring-like elements of the planar nearring. This extends Aichinger's work on planar rings [2].
We find that the general structure of planar nearrings with nontrivial distributive elements generalizes the structure of planar rings. We then use this information to investigate the generalized centre of a planar nearring, building on Farag, Cannon, Kabza and Aichinger's work [3,7]. This was the original motivation for this work and can be found in §6. As the results there indicate, it was necessary to obtain a good understanding of the distributive elements of a planar nearring, which we undertake in §4 and find some small applications of in §5. The structure implied by nontrivial distributive elements forces several special forms of planar nearrings, which we introduce in §3 and use repeatedly throughout. In the next section we introduce the necessary background about nearrings and planar nearrings.

2. Background

While it is readily seen that 0 * a = 0 for all a ∈ N, it is not necessary that a * 0 = 0 for all a. If this is the case, the nearring is called zero symmetric. A subset of N is a subnearring if it is closed as an additive group and a multiplicative semigroup. A subnearring I ⊆ N is a right ideal if I is a normal additive subgroup and for all i ∈ I, n ∈ N, i * n ∈ I. A subnearring I ⊆ N is a left ideal if I is a normal additive subgroup and for all i ∈ I, n, m ∈ N, n * m − n * (m + i) ∈ I. A subnearring that is both a left and a right ideal is an ideal. As we expect, ideals correspond to the kernels of nearring homomorphisms. We write N* for N \ {0}. If (N*, *) is a group, then (N, +, *) is a nearfield, generalizing fields by having a single distributive law. Nearfields play an important role in geometries; nearring and nearfield theory are well discussed in [10,11,17,19]. Nearfields were first described by Dickson, using a process whose results are now called Dickson nearfields. All but 7 finite nearfields are Dickson. The set D(N) = {n ∈ N | n(a + b) = na + nb for all a, b ∈ N} of distributive elements of N is the core object of investigation in this paper.
Elements a, b ∈ N are called equivalent multipliers if x * a = x * b for all x ∈ N. We write a ≡ b; this is an equivalence relation. A nearring (N, +, *) is called planar if:

• the equivalent multiplier relation has at least 3 classes;
• for every a, b, c ∈ N with a and b not equivalent multipliers, the equation x * a = x * b + c has a unique solution x.

Every field other than Z_2 is a planar nearring. A finite nearfield (except Z_2) is always planar; there are known to be nonplanar infinite nearfields [19, page 46]. Planar nearrings are zero symmetric.

Planar nearrings can be described by fixed point free automorphism groups of groups. Let (N, +) be a group. Then we say that some nonidentity φ ∈ Aut(N) is fixed point free if for all n ∈ N, nφ = n iff n = 0. A group of automorphisms is fixed point free if all its nonidentity elements are fixed point free. Now let Φ ≤ Aut(N) be a group of fixed point free automorphisms of N acting from the right such that −id + φ is bijective for all nonidentity φ ∈ Φ. We write aΦ for the orbit containing a. Let R ⊆ N be a set of orbit representatives and M ⊆ R a set. Every element a ∈ N* can be written uniquely as r_a φ_a for some r_a ∈ R, φ_a ∈ Φ. We define a multiplication by:

    a * b = 0                          if r_b ∈ M,
    a * b = r_a(φ_a φ_b) = a φ_b       if r_b ∉ M.        (2.1)

Then (N, +, *) is a planar (right) nearring, and all planar nearrings can be so derived [10].

Let aΦ* = aΦ ∪ {0}. We call M the set of zero multipliers. If a ∈ M then we call aΦ a zero multiplier orbit, and we call all elements of this orbit zero multipliers. Then n ∈ N* is a zero multiplier iff n ∈ MΦ* iff a * n = 0 for all a ∈ N. Note that a * b = 0 iff a = 0, b = 0 or r_b ∈ M. In particular, n * b = 0 for all n iff b = 0 or r_b ∈ M. The elements r ∈ R \ M are right identities: x * r = x for all x ∈ N. A planar nearring has a left identity iff it has an identity iff it has exactly one nontrivial Φ orbit iff it is a planar nearfield.
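The construction (2.1) is easy to check by machine for a small example. The following sketch is our own illustration, not taken from the paper: we take N = Z_7, Φ = {1, 2, 4} acting by multiplication, representatives R = {1, 3} and M empty, and verify the two planarity conditions by brute force.

```python
# Our illustrative Ferrero-pair example: N = Z_7, Phi = {1, 2, 4} acting by
# multiplication, representatives R = {1, 3}, no zero-multiplier orbits.
N = 7
PHI = [1, 2, 4]        # fixed point free: phi - 1 is invertible mod 7
REPS = [1, 3]          # one representative per nonzero orbit
M = set()              # M empty: no zero multipliers

# Decompose each nonzero a uniquely as a = r_a . phi_a.
decomp = {}
for r in REPS:
    for phi in PHI:
        decomp[(r * phi) % N] = (r, phi)

def mult(a, b):
    """a * b = 0 if r_b in M, else a . phi_b  (equation (2.1))."""
    if b == 0 or decomp[b][0] in M:
        return 0
    return a * decomp[b][1] % N

def eq_mult(b, b2):
    """Equivalent multipliers: x*b = x*b2 for all x."""
    return all(mult(x, b) == mult(x, b2) for x in range(N))

# Condition 1: at least 3 equivalence classes of multipliers.
classes = []
for b in range(N):
    if not any(eq_mult(b, c[0]) for c in classes):
        classes.append([b])
assert len(classes) >= 3

# Condition 2: for inequivalent a, b, the equation x*a = x*b + c
# has exactly one solution x, for every c.
for a in range(N):
    for b in range(N):
        if eq_mult(a, b):
            continue
        for c in range(N):
            sols = [x for x in range(N) if mult(x, a) == (mult(x, b) + c) % N]
            assert len(sols) == 1
print("planar")
```

Choosing M nonempty (say M = {3}) zeroes out all products against the orbit of 3; the same brute-force check can be rerun for that variant.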
We use Z(Φ) to denote the centre of the group Φ, and remark that we will use the British spelling throughout this paper. The distributive elements of a nearfield are called the kern of the nearfield; the kern contains the multiplicative centre.

There are related nearrings. The trivial nearring on any group (N, +) with multiplication a * b = 0 would correspond to ≡ having one equivalence class. The Malone trivial nearrings [8] correspond to Φ having order 1, so R = N \ {0}, with a * b = 0 if b ∈ M and a * b = a otherwise. It is interesting to note that the complemented Malone nearrings in [8] are a generalization of planar nearrings with Φ of order 2, allowing fixed points (i.e. elements of additive order 2) that are zero multipliers.

In the next section we will look at two special constructions for planar nearrings. Then in the following section we look at the distributive elements of a planar nearring. With that knowledge, we will determine the generalized centre of a planar nearring.

3. (Near) Vector Spaces

In this section we construct two families of example planar nearrings, which will prove to be useful in the rest of the paper. Let V be a vector space over a division ring D of order at least 3 and φ : V → D a vector space epimorphism which acts from the right. Define a multiplication * : V × V → V by a * b = a(bφ). By [2, theorem 4.1] and [21, theorem 5.2.1] this is a planar ring, and all planar rings have this form. Using the terminology above, Φ is isomorphic to the nonzero elements of D under multiplication, R \ M = φ^{-1}(1), and the elements of M can be chosen arbitrarily from the orbits that lie completely within ker φ. Let v_1 ∈ R be arbitrary. If V is a finite dimensional vector space, we can choose a new basis v_1, ..., v_n ∈ V such that v_2, ..., v_n ∈ ker φ and for all x = (x_1, ..., x_n) ∈ V, xφ = x_1.

We can generalize this construction to nearvector spaces. We begin with a brief overview and some definitions of nearvector spaces.
See [5] for further details.

Definition 3.1. A pair (V, A) is called a nearvector space if:
1. (V, +) is a group and A is a set of endomorphisms of V, which act from the right;
2. A contains the endomorphisms 0, id and −id;
3. A* = A \ {0} is a subgroup of the group Aut(V);
4. A acts fixed point freely on V;
5. the quasi-kernel {x ∈ V | ∀α, β ∈ A, ∃γ ∈ A : xα + xβ = xγ} generates V as a group.

We sometimes refer to V as a nearvector space over A. We write Q(V) for the quasi-kernel of V. The elements of V are called vectors and the members of A scalars; it turns out that A is a nearfield. The action of A on V is called scalar multiplication. Note that (V, +) is an abelian group. Also, the dimension of the nearvector space, dim(V), is uniquely determined by the cardinality of an independent generating set for Q(V). In [18, theorem 3.4] we find a characterization of finite dimensional nearvector spaces, see also [5, theorem 4.6]: there exist nearfields F_1, F_2, ..., F_n, semigroup isomorphisms ψ_i : A → F_i, and a group isomorphism Φ : V → F_1 ⊕ F_2 ⊕ ··· ⊕ F_n such that if Φ(v) = (x_1, x_2, ..., x_n), with x_i ∈ F_i, then Φ(vα) = (x_1(αψ_1), x_2(αψ_2), ..., x_n(αψ_n)) for all v ∈ V and α ∈ A.

In [5, 4.13 ff] we find the following. A nearvector space is regular if all the ψ_i are identical (up to nearfield automorphisms). Every nearvector space has a unique maximal decomposition V = V_1 ⊕ V_2 ⊕ ... into regular sub-nearvector spaces V_i. Then for all nonzero u ∈ Q(V), there is precisely one i such that u ∈ V_i. Note that a regular nearvector space over a field is a vector space.

We can construct a planar nearring from a nearvector space along the lines used above for vector spaces. Let V be a nearvector space over a nearfield F of order at least 3. Let φ : V → F be a nearvector space epimorphism and define a * b = a(bφ). The right identities R \ M are φ^{-1}(1) and the representatives in M can be chosen arbitrarily.

Example 3.3.
Let V be the two dimensional nearvector space over F = Z_5 with ψ_1 the identity and ψ_2 = (2 3), the automorphism of F* exchanging 2 and 3; equivalently xψ_2 = x^3. Note that Q(V) = F × {0} ∪ {0} × F. Taking φ(v_1, v_2) = v_1, we obtain a planar nearring (V, +, *). Then (v_1, v_2) ∈ D(V) iff v_1 x + v_1 y = v_1(x + y) and v_2(xψ_2) + v_2(yψ_2) = v_2((x + y)ψ_2) for all x, y ∈ F. The first equation always holds, but the second equation can be seen to fail for x = y = 1 unless v_2 = 0. Thus we see that D(V) = F × {0}.

Example 3.4. Let V be the two dimensional nearvector space over F, the proper nearfield of order 9 with kern K of order 3. Let ψ_1 = ψ_2 be the identity. Then Q(V) = K × K. Taking φ(v_1, v_2) = v_1, we obtain a planar nearring (V, +, *). Then (v_1, v_2) ∈ D(V) iff v_1 x + v_1 y = v_1(x + y) and v_2 x + v_2 y = v_2(x + y) for all x, y ∈ F. These equations hold iff v_1 and v_2 are both in the kern K of F, so D(V) = K × K.

Conjecture 3.5. Let F be a nearfield with kern K, and let V be a finite dimensional F-nearvector space derived planar nearring as above, with V = V_1 ⊕ ··· ⊕ V_n the regular decomposition and V_i = F^{n_i}. Then D(V) = D(V_1) ⊕ ··· ⊕ D(V_n) with D(V_i) = K^{n_i}.

It is worth noting in passing that nearvector spaces and the homogeneous mappings of them to themselves are closely related to questions about nearring matrices over the associated nearfield. Thus we hope that future work here could shed light on the question raised in the final section of [14] as to the inverses of units in matrix nearrings over planar nearfields.

4. The Distributive Elements

In this section we investigate the distributive elements of a planar nearring. Some examples are well known. A finite field is a planar nearfield and thus a planar nearring, with all elements being distributive.
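The computations in Examples 3.3 and 3.4 are easy to mechanize. Here is a brute-force check of Example 3.3, a minimal sketch with our own encoding of V as pairs over Z_5 (ψ_2 is realized as a ↦ a^3):

```python
# Example 3.3: V = Z_5 x Z_5, scalar action (v1, v2).a = (v1*a, v2*a^3),
# multiplication u * v = u.(v phi) with phi(v1, v2) = v1.
p = 5
V = [(x, y) for x in range(p) for y in range(p)]

def act(v, alpha):
    # psi_1 = id; psi_2 : a -> a^3 swaps 2 and 3 and fixes 1 and 4 mod 5
    return (v[0] * alpha % p, v[1] * pow(alpha, 3, p) % p)

def mult(u, v):
    return act(u, v[0])        # u * v = u . phi(v), phi(v) = v_1

def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

# Distributive elements: d with d*(a+b) = d*a + d*b for all a, b.
D = [d for d in V
     if all(mult(d, add(a, b)) == add(mult(d, a), mult(d, b))
            for a in V for b in V)]
print(sorted(D))   # exactly F x {0}
```

Replacing pow(alpha, 3, p) by alpha % p turns ψ_2 into the identity; the construction is then the planar ring of §3 and the same loop reports all 25 elements as distributive.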
In [2] (see §3 above) the structure of planar rings is completely determined, so we know what happens when D(N) = N. The distributive elements of a nearfield are called the kern of the nearfield. Using Sonata [1] we found all planar nearrings up to order 15 with nontrivial D(N):

1. The fields of order 3, 4, 5, 7, 8, 9, 11, 13.
2. The proper nearfield of order 9 with kern of order 3.
3. The planar ring of order 9.
4. A planar nearring on the cyclic group (Z_9, +) whose distributive elements all lie among the zero multipliers.
5. A planar nearring of order 15.

We see that 1-3 can be readily explained, but not 4 or 5. One of the goals of this paper is to understand all these examples in terms of general classes. We first determine some properties of the orbits that contain distributive elements.

Lemma 4.1. Let N be a planar nearring and let 0 ≠ d ∈ D(N). Then dΦ* is additively closed.

Proof. Let r ∈ R \ M; such a representative exists since the equivalent multiplier relation has at least three classes. Suppose dΦ* is not additively closed, so that there exist some φ_1, φ_2 ∈ Φ with dφ_1 + dφ_2 ∉ dΦ*. Write r_3 φ_3 = rφ_1 + rφ_2. Then d * (rφ_1 + rφ_2) = d * r_3 φ_3 ∈ dΦ*. However d * (rφ_1 + rφ_2) = d * rφ_1 + d * rφ_2 = dφ_1 + dφ_2 ∉ dΦ*, so d is not distributive, a contradiction.

Lemma 4.2. Let N be a planar nearring and let d ∈ D(N) be a distributive element that is not a zero multiplier. Then r_d ∈ D(N), the set {φ ∈ Φ | r_d φ ∈ D(N)} is a subgroup of Φ, and Z(Φ) ≤ {φ ∈ Φ | r_d φ ∈ D(N)}.

Proof. From lemma 4.1 above, we know that dΦ* is additively closed. Let a, b ∈ N, φ ∈ Φ, and write r = r_d. Then we can show the following:

rφ * (a + b) = rφ * rφ_d^{-1} * rφ_d * (a + b)                    (4.1)
             = rφ * rφ_d^{-1} * (rφ_d * a + rφ_d * b)             (4.2)
             = rφ * rφ_d^{-1} * (rφ_d * r * a + rφ_d * r * b)     (4.3)
             = rφ * rφ_d^{-1} * rφ_d * (r * a + r * b)            (4.4)
             = rφ * (r * a + r * b)                               (4.5)

We use this in the following calculation. We know that r_dΦ* is additively closed and that r = r_d is a left multiplicative identity on r_dΦ:

rφ_d^{-1} * a + rφ_d^{-1} * b = r * (rφ_d^{-1} * a + rφ_d^{-1} * b)                        (4.6)
                              = rφ_d^{-1} * rφ_d * (rφ_d^{-1} * a + rφ_d^{-1} * b)         (4.7)
                              = rφ_d^{-1} * (rφ_d * rφ_d^{-1} * a + rφ_d * rφ_d^{-1} * b)  (4.8)
                              = rφ_d^{-1} * (r * a + r * b)                                (4.9)
                              = rφ_d^{-1} * (a + b)                                        (4.10)

Thus rφ_d^{-1}, the multiplicative inverse of d in dΦ, is in D(N) ∩ dΦ. By standard arguments, D(N) ∩ dΦ is multiplicatively closed, thus a group, so {φ ∈ Φ | r_d φ ∈ D(N)} ≤ Φ is a subgroup of Φ; in particular r_d ∈ D(N). Now we know that r_d * (a + b) = r_d * a + r_d * b. Let φ ∈ Z(Φ).
Then for a, b ∈ N,

r_d φ * (a + b) = r_d φ φ_{a+b}           (4.11)
                = r_d φ_{a+b} φ           (4.12)
                = (r_d * (a + b))φ        (4.13)
                = r_d φ * a + r_d φ * b   (4.14)

where the last step uses that r_d is distributive, that φ is an additive automorphism, and the centrality of φ again. So Z(Φ) ≤ {φ ∈ Φ | r_d φ ∈ D(N)} and we are done.

Lemma 4.3. Let N be a planar nearring and let d ∈ D(N) be a distributive element that is not a zero multiplier. Then dΦ* is a planar nearfield.

Proof. From lemma 4.1 above, we know that dΦ* is additively closed. Thus dΦ* is additively and multiplicatively closed, forming a planar subnearring. We note that there is precisely one orbit on this nearring, so the planar nearring must be a nearfield with r_d the multiplicative identity. Let n, c ∈ dΦ* be arbitrary; then there exists a unique x ∈ N such that x − x * n = c by the planarity of N, but this might not be in dΦ*. However r_d * x − r_d * x * n = r_d * (x − x * n) = r_d * c = c, so r_d * x is also a solution to the equation. This solution is unique, so x = r_d * x ∈ dΦ* and dΦ* is a planar nearfield.

Similarly we know that, even in the case that there are distributive elements that are zero multipliers, additive closure of the orbit dΦ* allows us to define a multiplication dφ_1 • dφ_2 = d(φ_1 φ_2) that makes (dΦ*, +, •) a planar nearfield. Thus we know a lot more about the forms of Φ that can emerge, as not all fixed point free automorphism groups arise as the multiplicative group of a nearfield.

Lemma 4.4. Let N be a planar nearring with nontrivial D(N), and let m ∈ MΦ*, a ∈ N \ MΦ*. Then φ_{m+a} = φ_a.

Proof. We proceed by calculation. Let 0 ≠ d ∈ D(N). Then

r_d φ_d φ_{m+a} = d * (m + a) = d * m + d * a      (4.15)
                = 0 + d * a = d * a = r_d φ_d φ_a  (4.16)
⇒ φ_d φ_{m+a} = φ_d φ_a                            (4.17)
⇒ φ_{m+a} = φ_a                                    (4.18)

By symmetry, φ_{a+m} = φ_a as well.
If n is not a zero multiplier, then k * n = r k φ k φ n ∈ K, so K is a right ideal. If n + k ∈ K, then since K is a subgroup, n ∈ K so m * n − m * (n + k) = 0 − 0 ∈ K. If n + k ∈ K, then remember that by the lemma above, φ n+k = φ n . Then m * n − m * (n + k) = m * n − r m φ m φ n+k (4.19) = m * n − r m φ m φ n (4.20) = m * n − m * n = 0 ∈ K (4.21) so we have a left ideal and thus an ideal. Note that, in general, the mapping ρ is not a nearring homomorphism, unless φ d is the identity. This can only be guaranteed to be the case when D(N ) contains non zero multipliers, by lemma 4.2. This implies that (N, +) is an extension of (ρ(N ), +) by (K, +). By lemma 4.3 and the comments afterwards, we know that (ρ(N ), +) is the additive group of a nearfield, thus abelian and, in the finite case, elementary abelian. When all distributive elements are zero multipliers, we do not necessarily have that the extension splits. If we have a non zero multiplier distributive element, then we get a clear result. Proof. By the previous lemma, we know that (N, +) is an extension of (K, +) by (ρ(N ), +). By lemma 4.3 we know that ρ(N ) is a nearfield, let F = ρ(N ). Because (F, +) is a subgroup of N that is fixed by ρ, the extension splits, so (N, +) ∼ = (K, +) ⋊ (F, +). Let k ∈ K, f ∈ F . Writing k f = f + k − f we know that for all k 1 , k 2 ∈ K, f 1 , f 2 ∈ F , (k 1 , f 1 ) + (k 2 , f 2 ) = (k 1 + k f1 2 , f 1 + f 2 ). We can write each element of N as (k, f ) = (k, 0) + (0, f ) ∈ M Φ * + F so by lemma 4.4 we know that φ (k,f ) = φ (0,f ) . We write φ f for φ (0,f ) . Then we can write the multiplication on K ⋊ F as above. The representatives for K ⋊ F are {(k, 1)|k ∈ K} ∪ {(m, 0)|m ∈ M }. Since the additive groups are isomorphic as Φgroups and the representatives are matched by the isomorphism, we know that the resulting planar nearrings are isomorphic. 8 T. Boykett We see that the example on additive group (Z 9 , +) falls outside this theorem. 
The distributive elements lie within the zero multipliers and the additive group is not a semidirect product of the zero multipliers with anything. We can create a family of examples of planar nearrings with D(N) lying within the zero multipliers, based upon the example on page 49 of [10]. These examples do not split.

Example 4.2. Let p be an odd prime, N = Z_{p^2} the cyclic group of order p^2. There is a cyclic subgroup Φ of the multiplicative semigroup, of order p − 1. One of the orbits of this automorphism group is pZ_{p^2}. These are our zero multipliers; this orbit has representative p ∈ Z_{p^2}. We choose the rest of our representatives to be a coset of pZ_{p^2}. Then the resulting planar nearring has D(N) = pZ_{p^2}, all zero multipliers.

We know (e.g. [16]) that the additive group of a finite planar nearring is nilpotent and thus a direct sum of p-groups. Thus a finite planar nearring is a finite direct sum of planar nearrings of prime power order. Thus by lemma 4.3 at most one of these summands has a non zero multiplier distributive element. If one summand has such an element, then lemma 4.5 indicates that all summands other than the one with a non zero multiplier distributive element have trivial multiplication. We have shown the following.

Corollary 4.3. Let N be a finite planar nearring with nontrivial distributive elements. Let Φ be the multiplicative group associated to N, R the representatives and M the zero multipliers. Then (N, +) is the direct sum of finitely many p-subgroups N = N_1 ⊕ · · · ⊕ N_k, only one of which has a nontrivial multiplication, so R \ M ⊆ {0} ⊕ · · · ⊕ {0} ⊕ N_i ⊕ {0} ⊕ · · · ⊕ {0} for some i.

In the example of order 15, we have the Z_3 as the nearfield with automorphism group of order 2 and Z_5 having the same automorphism group, generated by (1 4)

Some applications

In this section we look at the way that these results can be contextualized in relation to other similar results.
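Constructions of this size can be checked by machine. The sketch below is our own illustration (independent of the paper and of the SONATA package): it builds the p = 3 instance of Example 4.2 on Z_9, with zero-multiplier orbit {3, 6}, remaining orbit representatives chosen as the coset {1, 4, 7}, and the Ferrero-style product x * n = x φ_n (zero when n lies in a zero-multiplier orbit), then brute-forces the distributive elements.

```python
# Brute-force check of Example 4.2 with p = 3: N = Z_9, Phi = {id, -id}.
# The concrete representative choice and product below are our own sketch
# of the Ferrero-type construction, not code from the paper.

def phi_of(b):
    """Sign s with b = r_b * s for the orbit representative r_b in {1, 4, 7}."""
    return 1 if b % 9 in (1, 4, 7) else -1

def mul(a, b):
    """x * n = 0 if n lies in the zero-multiplier orbit {0, 3, 6}, else x * n = x.phi_n."""
    if b % 9 in (0, 3, 6):
        return 0
    return (a * phi_of(b)) % 9

N = range(9)

# The nearring axiom: right distributivity holds for the constructed product.
assert all(mul((a + b) % 9, c) == (mul(a, c) + mul(b, c)) % 9
           for a in N for b in N for c in N)

# D(N): elements that also distribute from the left.
D = {d for d in N
     if all(mul(d, (a + b) % 9) == (mul(d, a) + mul(d, b)) % 9
            for a in N for b in N)}

# Zero multipliers: n with x * n = 0 for every x.
K = {n for n in N if all(mul(x, n) == 0 for x in N)}

print(D, K)  # D(N) = pZ_9 = {0, 3, 6}, all zero multipliers
```

The same brute force, applied to the order-15 example on Z_3 × Z_5, would likewise confirm the claimed D(N) there.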
One of the strong applications of planar nearrings is in the construction of BIBDs. The blocks of the BIBD are {aΦ* + b | 0 ≠ a ∈ N, b ∈ N} and the basic blocks are {aΦ* | 0 ≠ a ∈ N}. The following result shows that the construction used in [9] is the only way that all basic blocks can be subgroups.

Lemma 5.1. Let N be a finite abelian planar nearring with more than one orbit, in which all orbits aΦ* are additively closed. Then N is a vector space over a subfield of N.

Proof. Let F = aΦ* for some non zero multiplier a. F is additively and multiplicatively closed, thus a planar subnearring. F contains only one orbit of Φ, so it has an identity and is thus a nearfield. Thus F is of prime power order. If |F| is odd, then Φ is of even order, thus −1 ∈ Φ. If |F| is even, then −1 = 1 ∈ Φ. We see that Φ* acting on (N, +) satisfies the first four conditions for being a nearvector space. By the additive closedness of each nΦ*, for each α, β ∈ Φ*, nα + nβ ∈ nΦ*, so there exists some γ ∈ Φ* such that nα + nβ = nγ. Thus Q(N) = N and N is a near vector space over Φ*. We see that F must then be the nearfield from van der Walt's result above, (Φ*, *) ≅ (F, *). By [5, Satz 5.5] we know that Q(N) = N implies that N is a vector space and that F is actually a field.

We note in passing that if a planar nearring has non trivial distributive elements, then we know that the corresponding orbits give additively closed basic blocks. Thus by [10, Thm 7.17] we will obtain a statistical, but not a geometric, BIBD.

We also obtain Aichinger's result as a corollary.

Corollary 5.1. Let N be a planar ring. Then N is a vector space over a field F with Φ = F*.

Proof. All elements of N are distributive, so we know by lemma 4.1 that every orbit is additively closed. Select some non zero multiplier d ∈ N and we see that d * N is a distributive nearfield, that is, a field.
All orbits are additively closed, so we obtain a nearvector space with Q(N) = N, so by the same argument as above, we know that N is a vector space.

The Generalized Centre

Now that we know some things about the distributive elements of a planar nearring, we can say some things about the generalized centre. Let (N, +, *) be a 0-symmetric nearring. The generalized centre of N is GC(N) = {n ∈ N | nd = dn ∀d ∈ D(N)} [3, 7]. The generalized centre was introduced because the centre of a nearring is not always well behaved. For instance, it is not always a subnearring, while the generalized centre is. If N is a ring, then GC(N) is the usual centre of N. The generalized centre of a planar nearfield F is the set of elements that commute with the kern. In the finite case, the kern is the multiplicative centre and thus GC(F) = F. In the infinite case, when the kern is distinct from the multiplicative centre, we know only that GC(F) contains the multiplicative centre.

3. If D(N) intersects exactly one orbit of Φ, which is not a zero multiplier, let it be aΦ. Then aZ(Φ)* ≤ GC(N) ≤ aΦ*.

For all a, b, c ∈ N, (a + b) * c = a * c + b * c (right distributivity).

Theorem 3.2. Let V be a group and let A := D ∪ {0}, where D is a fixed point free group of automorphisms of V. Then (V, A) is a finite dimensional nearvector space if and only if there exists a finite number of nearfields,

4. An example of order 9. The additive group is Z_9, Φ = {1, −1}, R = {2, 3, 5, 8}, M = {3}. The distributive elements are the zero multipliers {0, 3, 6}.

5. An example of order 15, Φ of order 2 with generator g acting on Z_3 × Z_5 as (x, y) * g = (−x, −y), the orbit {0} × Z_5 zero multipliers. The distributive elements are Z_3 × {0}.

Lemma 4.1. Let d ∈ N. Then d ∈ D(N) ⇒ dΦ* is additively closed.

Lemma 4.2. Let N be a planar nearring. Let d ∈ D(N) be a non zero multiplier, d = r_d φ_d. Then {φ ∈ Φ | r_d φ ∈ D(N)} ≤ Φ is a subgroup, containing Z(Φ).

Lemma 4.3.
Let d ∈ N be a non zero multiplier. Then d ∈ D(N) ⇒ dΦ* is a planar nearfield.

Lemma 4.4. Let N be a planar nearring with nontrivial distributive elements. Then for every m ∈ MΦ* and a ∈ N \ MΦ*, φ_{m+a} = φ_a.

Theorem 4.1. Let N be a planar nearring with automorphism group Φ. Let d ∈ D(N) be a non zero multiplier. Then there is a subnearfield F ≤ N with F* ≅ Φ and an additive group K with Φ a group of fixed point free automorphisms, such that N ≅ K ⋊ F as an additive group and

(a, b) * (c, d) = 0 if d = 0, and (a φ_d, b φ_d) otherwise, (4.22)

such that (N, +, *) ≅ (K ⋊ F, +, *). The zero multipliers form an ideal K × {0}.

(2 3) acting on it. Then N = Z_3 × Z_5 with {0} × Z_5 forming the zero multipliers in the nearring. Thus we have shown all of our small examples of planar nearrings with nontrivial distributive elements fall into larger classes of examples.

Theorem 6.1. Let N be a planar nearring. Then GC(N) is one of four cases:

1. If D(N) intersects only zero multiplier orbits, then GC(N) is the zero multipliers, an ideal.

2. If D(N) intersects more than one orbit of Φ and at least one of them is not a zero multiplier, then GC(N) = {0}.

We note in passing the existence of another, similar, definition of a nearvector space used by Karzel and colleagues [12, 13] in which the right nearfield scalars operate from the left, giving significantly different properties.

4. If D(N) = {0}, then GC(N) = N.

Acknowledgements

Research supported by SFB Project F5004 of the Austrian Science Foundation, FWF. I would like to thank my colleagues Günter Pilz and Wen-Fong Ke for some insightful questions in the early development of this paper.

Proof. We proceed by cases.

Case 1: Suppose D(N) intersects only zero multiplier orbits. Then for all d ∈ D(N), for all n ∈ N, nd = 0. Thus if r_n ∈ M, dn = 0 and thus n ∈ GC(N).
So GC(N) = ∪{aΦ* | a ∈ M}, the zero multipliers, which we know from lemma 4.5 to be an ideal.

Case 2: Suppose D(N) intersects two orbits nontrivially, one of them a non zero multiplier orbit. Let a, b ∈ D(N) be in distinct orbits, a not a zero multiplier. Then a nonzero c ∈ GC(N) implies that ca = ac, so r_a = r_c and c is not a zero multiplier. Thus cb = bc implies that b is also not a zero multiplier, so r_c = r_b; but r_a ≠ r_b, so we have a contradiction, so GC(N) is trivial.

Case 3: Let c ∈ GC(N), so cd = dc for all d ∈ D(N). Then r_c = r_d, so c ∈ aΦ*, giving us the upper bound. We know that F := aΦ* is a nearfield, so D(N) = K is the kern of F. If K * F * as multiplicative groups, then K = Z(F) by [4], so GC(N) = F = aΦ*, showing that this bound can be achieved, for instance in the finite case. Otherwise φ_c ∈ Z(Φ) gives us the lower bound.

Case 4: If D(N) is trivial, then by zerosymmetry, GC(N) = N.

We can break this down depending upon the properties of the additive group.

Proof. Only the third case is different to the above. If N is additively nonabelian, then we know that the fixed point free automorphism group is of odd order, thus cyclic, see e.g. [16]. Thus Z(Φ) = Φ, so D(N) is all of one orbit with zero. Because the multiplication within this orbit is commutative, the generalized centre is all of the orbit and we are done.

If the additive group is abelian, then many interesting and strange things can happen with skew fields. However, in the finite case we know more.

3. If D(N) intersects exactly one orbit of Φ, which is not a zero multiplier, let it be aΦ. Then GC(N) = aΦ* ≥ D(N).

4. If D(N) intersects no nonzero orbit of Φ, then GC(N) = N.

Proof. Only the third case is different to the above. We note that abelian addition implies that the distributor D(N) is additively closed. Thus D(N) is a planar ring.
By [2] we know that a planar ring is derived from a vector space over a field, where the field multiplication is isomorphic to the fixed point free automorphisms of the planar ring. We know that D(N) lies within one orbit which is a nearfield, so (D(N)*, ·) is a cyclic subgroup of Φ and thus precisely the centre of Φ. Thus all elements of the orbit containing the distributor are in the generalized centre.

Conclusion

In this paper we have investigated the distributive elements in a planar nearring. We have been able to show that if there are nontrivial distributive elements then the additive group is an extension of an abelian subgroup by the zero multipliers. This additive group is the additive group of a nearfield, so elementary abelian in the finite case. If the distributive elements include non zero multipliers, then the extension splits and we obtain a clear structure.

As a result, we are able to re-prove Aichinger's theorem on planar rings as a corollary, as well as Clay's results on BIBDs with additively closed basic blocks. It is unclear whether lemma 5.1 can be extended to the infinite case. Applying these results to the question of the generalized centre, we are able to obtain a clear set of cases and to describe the generalized centre.

It would be valuable to know what sort of other examples can occur with the distributive elements all lying within the zero multipliers, in order to complete the classification of structures.

It would be of value to calculate D(N) explicitly in theorem 4.1. It is easy to see that D(N) is a direct product of some subset E ⊆ K and the kernel of F. The question is how to calculate which parts of K satisfy a φ_{b+c} = a φ_b + a φ_c, where the first addition is in F while the second is in K. Note that the orbits will be additively closed, giving us nearfields.
This might be another nearvector space construction.

As we have been able to determine the generalized centre of planar nearrings, we can now look forward to describing the generalized centre of more complex classes of nearrings.

References

[1] E. Aichinger, F. Binder, J. Ecker, P. Mayr, and C. Nöbauer. SONATA - system of near-rings and their applications, GAP package, Version 2.6, 2012. http://www.algebra.uni-linz.ac.at/Sonata/
[2] E. Aichinger. Planar rings. Results Math., 30(1-2):10-15, 1996.
[3] E. Aichinger and M. Farag. On when the multiplicative center of a near-ring is a subnear-ring. Aequationes Math., 68(1-2):46-59, 2004.
[4] J. André. Über eine Beziehung zwischen Zentrum und Kern endlicher Fastkörper. Archiv der Mathematik, 14(1):145-146, 1963.
[5] J. André. Lineare Algebra über Fastkörpern. Math. Z., 136:295-313, 1974.
[6] B. Bäck, H. Köppl, G. Pilz, and G. Wendt. Einfluß verschiedener Parameter auf den Mykotoxingehalt von Winterweizen: Versuchsdurchführung mit Hilfe eines neuen statistischen Modells (Influence of various parameters on the mycotoxin content of winter wheat: experimental process with the assistance of a new statistical model). In Proceedings, 63rd ALVA-Tagung, Raumberg, Austria, 2008.
[7] G. A. Cannon, M. Farag, and L. Kabza. Centers and generalized centers of near-rings. Comm. Algebra, 35(2):443-453, 2007.
[8] G. A. Cannon, M. Farag, L. Kabza, and K. M. Neuerburg. Centers and generalized centers of near-rings without identity defined via Malone-like multiplications. Mathematica Pannonica, to appear.
[9] J. R. Clay. Generating balanced incomplete block designs from planar near rings. J. Algebra, 22:319-331, 1972.
[10] J. R. Clay. Nearrings: Geneses and applications. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1992.
[11] C. Cotti Ferrero and G. Ferrero. Nearrings: Some developments linked to semigroups and groups, volume 4 of Advances in Mathematics. Kluwer Academic Publishers, Dordrecht, 2002.
[12] H. Karzel. Fastvektorräume, unvollständige Fastkörper und ihre abgeleiteten geometrischen Strukturen. Mitt. Math. Sem. Giessen, (166):127-139, 1984.
[13] H. Karzel and G. Kist. Determination of all near vector spaces with projective and affine fibrations. J. Geom., 23(2):124-127, 1984.
[14] W.-F. Ke, J. H. Meyer, and G. Wendt. Matrix maps over planar near-rings. Proc. Roy. Soc. Edinburgh Sect. A, 140(1):83-99, 2010.
[15] W.-F. Ke and G. Pilz. Abstract algebra in statistics. Journal of Algebraic Statistics, pages 6-12, 2010.
[16] P. Mayr. Fixed point free automorphism groups. Master's thesis, Johannes Kepler University Linz, 1998.
[17] G. Pilz. Near-rings: The theory and its applications, volume 23 of North-Holland Mathematics Studies. North-Holland Publishing Co., Amsterdam, second edition, 1983.
[18] A. P. J. van der Walt. Matrix near-rings contained in 2-primitive near-rings with minimal subgroups. J. Algebra, 148(2):296-304, 1992.
[19] H. Wähling. Theorie der Fastkörper. Thales-Verlag, Essen, 1987.
[20] G. Wendt. Minimal left ideals of near-rings. Acta Math. Hungar., 127(1-2):52-63, 2010.
[21] G. Wendt. Planarity in Near-Rings. PhD thesis, Johannes Kepler University Linz, 2004.
Distribution and Generalized Centre in Planar Nearrings

Tim Boykett (Institute for Algebra, Johannes Kepler University, 4040 Linz, Austria; Time's Up Research, Industriezeile 33b, 4020 Linz, Austria). arXiv:1607.01204, doi:10.14232/actasm-018-036-y.

Abstract. Nearrings are the nonlinear generalization of rings. Planar nearrings play an important role in nearring theory, both from the structural side, being close to generalized nearfields, as well as from an applications perspective, in geometry and combinatorial designs related to difference families. In this paper we investigate the distributive elements of planar nearrings. If a planar nearring has nonzero distributive elements, then it is an extension of its zero multiplier part by an abelian group. In the case that there are distributive elements that are not zero multipliers, then this extension splits, giving an explicit description of the nearring. This generalizes the structure of planar rings. We provide a family of examples where this does not occur, the distributive elements being precisely the zero multipliers. We apply this knowledge to the question of determining the generalized center of planar nearrings as well as finding new proofs of other older results.
A stochastic maximum principle for backward delayed system via advanced stochastic differential equation (ASDE)

Li Chen, Jianhui Huang

4 Dec 2011

Keywords: Advanced stochastic differential equation (ASDE); Backward stochastic differential equation (BSDE); Maximum principle; Pension fund with delayed surplus; Stochastic recursive delayed control

Abstract. The main contributions of this paper are threefold. First, our primary concern is to investigate a class of stochastic recursive delayed control problems which arise naturally with sound backgrounds but have not been well studied yet. For illustration purposes, some concrete examples are also provided here. We derive the stochastic maximum principle of sufficient condition to the optimal control in both cases with and without control delay. Second, it is interesting that a new class of time-advanced stochastic differential equations (ASDEs) is introduced as the adjoint process via a duality relation. To our best knowledge, such equations have never been discussed in the literature, although they possess academic value beyond the control study here. Some existence and uniqueness results for ASDEs are presented. Third, to illustrate our theoretical results, some dynamic optimization problems are discussed based on our stochastic maximum principles. In particular, the optimal controls are derived explicitly by solving the associated time-advanced ordinary differential equation (AODE), the counterpart of the ASDE in the deterministic setup.

Introduction

Our starting point is the following backward stochastic differential equation (BSDE) with time-delayed generator:

−dy(t) = f(t, y(t), y(t − δ), z(t), z(t − δ))dt − z(t)dW(t), t ∈ [0, T],
y(T) = ξ,
y(t) = ϕ(t), z(t) = ψ(t), t ∈ [−δ, 0).
(1)

Two remarkable features of Eq. (1): (i) the terminal condition, instead of an initial condition, is specified; (ii) the generator f depends not only on the instantaneous state (y(t), z(t)) but also on (y(t − δ), z(t − δ)) through the time-delay parameter δ > 0. Feature (i) makes Eq. (1) essentially different from the well-studied stochastic delay differential equation (SDDE) (see e.g. Mohammed [16], [17], etc.), in which the initial state condition is given beforehand. Eq. (1) also differs from the standard BSDE due to its time-delayed generator from (ii). In particular, it is distinguished from the anticipated backward stochastic differential equation (ABSDE) introduced by Peng and Yang [22], which is the dual of the SDDE. Eq. (1) was first introduced by Delong and Imkeller [7] and it has many real backgrounds in economics, finance, management, and other decision sciences. More details can be found in Delong [5], [6], Delong and Imkeller [7] and the references therein. Due to the interesting structure and wide-ranging applications, it is very natural and necessary to study the dynamic optimization of Eq. (1). However, to our best knowledge, very few works have been done along this direction, thus we aim to fill this research gap in some systematic way. To this end, we study the following more general controlled backward delayed system:

−dy(t) = f(t, y(t), ∫_{t−δ}^{t} φ(t, s)y(s)α(ds), z(t), ∫_{t−δ}^{t} φ(t, s)z(s)α(ds), v(t), ∫_{t−δ}^{t} φ(t, s)v(s)α(ds)) dt − z(t)dW(t), t ∈ [0, T],
y(T) = ξ,
y(t) = ϕ(t), z(t) = ψ(t), t ∈ [−δ, 0). (2)

Here, δ is the time delay parameter, α is some σ-finite measure and φ(·, ·) is some bounded process. The relevance and importance of our optimization problems can be illustrated by the following concrete examples.

Example 1.1 (Optimization of recursive utility with moving average). This example originates from Delong [5], in which the decision makers have recursive utility with moving average generators.
Such utility can be used to characterize habit formation, disappointment effects as well as volatility aversion in decision-making. Accordingly, the objective of the decision maker is to maximize his/her utility by selecting a suitable instantaneous consumption process c(t). This leads to the following dynamic optimization problem:

inf_{c(·)∈U_ad} y^c(0),

where the recursive utility y(t) satisfies the following BSDE with time-delayed generator:

−dy(t) = f(t, y(t), (1/t)∫_0^t y(s)ds, z(t), (1/t)∫_0^t z(s)ds, c(t)) dt − z(t)dW(t), t ∈ [0, T],
y(T) = ξ,
y(t) = ϕ(t), z(t) = ψ(t), t ∈ [−δ, 0). (3)

Eq. (3) can be viewed as a special case of Eq. (2) by noting that (1/t)∫_0^t y(s)ds = ∫_{t−T}^{t} y(s)(T/t)χ_{{s≥0}} α(ds), where α is the uniform measure on [t − T, t]. It can characterize non-monotonic utility with respect to volatility aversion.

Example 1.2 (Pension fund with delayed surplus). This example comes from Federico [8], where the pension fund manager can invest in two assets: the riskless asset P_0(t), which satisfies dP_0(t) = rP_0(t)dt with instantaneous return rate r ≥ 0, and the risky asset P_1(t), which satisfies

dP_1(t) = µP_1(t)dt + σP_1(t)dW(t),

with return rate µ ≥ r and volatility rate σ > 0. Here, W(·) is a standard Brownian motion. Denote by λ = (µ − r)/σ the risk premium, by θ(t) ∈ [0, 1] the proportion of the fund invested in the risky asset, and by S(t) the surplus premium to fund members. Suppose the wealth of the pension fund at time t is y(t); it is reasonable to assume that S(t) depends on the performance of the fund's growth during the past period. Thus, we assume S(t) = g(y(t) − κy(t − δ)) for some κ > 0 and g : R → [0, +∞) which is increasing, convex and Lipschitz continuous; δ > 0 is the time delay. On the other hand, there should be some running cost or consumption for fund management, which is represented by the instantaneous rate c(t).
Hence the wealth process y(t) evolves as:

dy(t) = ([θ(t)σλ + r]y(t) − g(y(t) − κy(t − δ)) − c(t)) dt + σθ(t)y(t)dW(t), t ∈ [0, T],
y(0) = y_0, y(t) = 0, t ∈ [−δ, 0). (4)

Note that in practice, the pension fund will be required to provide some minimum guarantee, i.e., to pay some part of the due benefits ξ (which is some random variable) at some given future time T. Keeping this in mind, the objective of the fund manager is to choose θ(t) and c(t) to reach the terminal condition y(T) = ξ, and also to maximize some given cost functional at the same time. By setting z(t) = σθ(t)y(t), Eq. (4) can be reformulated as the following controlled backward delayed system:

dy(t) = {ry(t) + λz(t) − g(y(t) − κy(t − δ)) − c(t)}dt + z(t)dW(t), t ∈ [0, T],
y(t) = 0, t ∈ [−δ, 0), y(T) = ξ. (5)

Eq. (5) is a special case of (2), obtained by setting α(ds) to be the Dirac measure at −δ, the pointwise delay with lag δ.

Example 1.3. It is remarkable that there exists a considerably rich literature discussing controlled stochastic delay differential equations (SDDEs) (see e.g. [2], [11], [19], etc.), which arise naturally due to the time lag between observation and regulation, or the possible aftereffect of control. SDDEs and their optimization have attracted extensive research attention in the last few decades, and have been applied in wide-ranging domains including physics, biology and engineering (see [16], [17] for more details). Note that these works are discussed in the forward setup because the initial condition is given a priori. On the other hand, as suggested by Kohlmann and Zhou [14], Ma and Yong [15], forward controlled systems can be reformulated into backward controlled systems under mild conditions. For example, in the case of some state constraints (e.g. no short selling), it is better to reformulate the controlled forward systems into backward systems, which are more convenient to analyze in some cases (see Ji and Zhou [10], El.
Karoui, Peng and Quenez [21]). Also, inspired by Lim and Zhou [12], we aim to investigate the following controlled linear backward delayed system:

dy(t) = (β_1 y(t) + β_2 y(t − δ) + γ_1 z(t) + γ_2 z(t − δ) + αv(t)) dt + z(t)dW(t), t ∈ [0, T],
y(T) = ξ, t ∈ [−δ, 0],

which can be viewed as a linear constrained forward controlled delay system by using the penalty approach, or as the limit of a family of linear unconstrained forward delayed systems.

The rest of this paper is organized as follows. In Section 2, we introduce the advanced stochastic differential equation (ASDE). Some preliminary results on ASDEs and the associated BSDE with delayed generator are also given. The stochastic recursive delayed control problems are formulated in Section 3, and two maximum principles are derived based on the duality between the ASDE and the BSDE with delayed generator. As applications of our theoretical results, in Sections 4-6 we revisit some motivating examples given in Section 1, and the optimal controls are derived explicitly by solving the associated time-advanced ordinary differential equation (AODE).

Notations and Preliminaries

Let T > 0 be some finite time horizon. For any Euclidean space H, we denote by ⟨·, ·⟩ (resp. | · |) the scalar product (resp. norm) of H. Let R^{n×m} be the Hilbert space of all n × m matrices with the inner product ⟨A, B⟩ := tr{AB^⊤}, ∀A, B ∈ R^{n×m}. Here the superscript ⊤ denotes the transpose of a vector or matrix. Let W(·) be a standard d-dimensional Brownian motion on a complete probability space (Ω, F, P). The information structure is given by the filtration F = {F_t}_{t≥0}, which is generated by W(·) and augmented by all P-null sets.
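Before fixing notation, here is a quick numerical illustration of Example 1.2: the forward dynamics (4) can be simulated by an Euler-Maruyama scheme, handling the pointwise delay y(t − δ) by indexing into the stored path. The scheme, the constant control choices θ, c and all parameter values are our own assumptions for illustration; they are not taken from the paper.

```python
import numpy as np

# Euler-Maruyama sketch for the pension-fund dynamics (4):
#   dy = ([theta*sigma*lambda + r] y - g(y - kappa*y(t - delta)) - c) dt
#        + sigma*theta*y dW,  with y(t) = 0 for t < 0.
# All parameter values below are illustrative assumptions.
rng = np.random.default_rng(0)

r, mu, sigma = 0.02, 0.05, 0.2
lam = (mu - r) / sigma              # risk premium lambda = (mu - r)/sigma
kappa, delta, T = 0.9, 0.25, 1.0
g = lambda x: np.maximum(x, 0.0)    # increasing, convex, Lipschitz surplus rule
theta = lambda t: 0.5               # constant investment proportion (a choice)
c = lambda t: 0.01                  # constant consumption rate (a choice)

n = 400                             # time steps on [0, T]
dt = T / n
lag = int(round(delta / dt))        # here delta is a multiple of dt
y = np.zeros(n + 1)
y[0] = 1.0                          # initial wealth y_0

for i in range(n):
    t = i * dt
    y_delay = y[i - lag] if i >= lag else 0.0   # y(t - delta), zero before time 0
    drift = (theta(t) * sigma * lam + r) * y[i] - g(y[i] - kappa * y_delay) - c(t)
    y[i + 1] = y[i] + drift * dt + sigma * theta(t) * y[i] * rng.normal(0.0, np.sqrt(dt))

print(y[-1])   # one sample of terminal wealth y(T)
```

Repeating the simulation over many sample paths would give a Monte Carlo picture of how far a given pair (θ, c) is from meeting the guarantee y(T) = ξ.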
For p ≥ 1, the following notations are used throughout this paper:

L^p(Ω, F_t, P; H) := {ξ : ξ is an H-valued F_t-measurable random variable satisfying E[|ξ|^p] < +∞};
L^p_F(t_1, t_2; H) := {ϕ(t), t_1 ≤ t ≤ t_2 : ϕ is an F-adapted process satisfying E∫_{t_1}^{t_2} |ϕ(t)|^p dt < +∞};
L^∞_F(t_1, t_2; H) := {ϕ(t), t_1 ≤ t ≤ t_2 : ϕ is an H-valued F-adapted bounded process}.

We set y_δ(t) = ∫_{t−δ}^{t} φ(t, s)y(s)α(ds), z_δ(t) = ∫_{t−δ}^{t} φ(t, s)z(s)α(ds). Then the backward delayed system of form (2) can be rewritten as

−dy(t) = f(t, y(t), y_δ(t), z(t), z_δ(t)) dt − z(t)dW(t), t ∈ [0, T],
y(T) = ξ,
y(t) = ϕ(t), z(t) = ψ(t), t ∈ [−δ, 0). (6)

We introduce the following assumptions:

(H2.1) The function f : Ω × [0, T] × R^n × R^n × R^{n×d} × R^{n×d} → R^n is F-adapted and satisfies

|f(t, y, y_δ, z, z_δ) − f(t, y′, y′_δ, z′, z′_δ)| ≤ C(|y − y′| + |y_δ − y′_δ| + |z − z′| + |z_δ − z′_δ|)

for any y, y_δ, y′, y′_δ ∈ R^n, z, z_δ, z′, z′_δ ∈ R^{n×d}, with constant C > 0.

(H2.2) The fixed time delay satisfies 0 ≤ δ ≤ T, ξ ∈ L^2(Ω, F_T, P; R^n), the initial paths ϕ(·), ψ(·) of (y, z) are given square-integrable functions, and φ(t, s) ≤ M is a given bounded F_s-adapted process with 0 ≤ s ≤ t ≤ T, where M is some positive constant.

(H2.3) E[∫_0^T |f(t, 0, 0, 0, 0)|^2 dt] < +∞.

Then we have the following existence and uniqueness result for the delayed BSDE (2):

Theorem 2.1. Suppose that (H2.1)-(H2.3) hold. Then for sufficiently small time delay δ, the BSDE with delay (2) has a unique adapted solution (y(·), z(·)) ∈ L^2_F(−δ, T; R^n) × L^2_F(−δ, T; R^{n×d}).

Proof. Let us introduce the following norm on the Banach space L^2_F(−δ, T; R^n), which is equivalent to the original norm of L^2_F(−δ, T; R^n):

‖ν(·)‖_β = (E[∫_{−δ}^{T} |ν(s)|^2 e^{βs} ds])^{1/2}.
Set

y(t) = ξ + ∫_t^T f(s, Y(s), Y_δ(s), Z(s), Z_δ(s))ds − ∫_t^T z(s)dW(s), t ∈ [0, T],
y(t) = ϕ(t), z(t) = ψ(t), t ∈ [−δ, 0). (7)

Define a mapping h : L^2_F(−δ, T; R^n × R^{n×d}) → L^2_F(−δ, T; R^n × R^{n×d}) such that h[(Y(·), Z(·))] = (y(·), z(·)). If we can prove that h is a contraction mapping under the norm ‖·‖_β, then the desired result is obtained by the fixed point theorem. For two arbitrary elements (Y(·), Z(·)) and (Y′(·), Z′(·)) in L^2_F(−δ, T; R^n × R^{n×d}), set (y(·), z(·)) = h[(Y(·), Z(·))] and (y′(·), z′(·)) = h[(Y′(·), Z′(·))]. Denote their differences by (Ŷ(·), Ẑ(·)) = (Y(·) − Y′(·), Z(·) − Z′(·)) and (ŷ(·), ẑ(·)) = (y(·) − y′(·), z(·) − z′(·)). In fact, Eq. (7) is a classical BSDE, and it follows that

E[∫_0^T ((β/2)|ŷ(s)|^2 + |ẑ(s)|^2) e^{βs} ds]
≤ (2/β) E[∫_0^T |f(s, Y(s), Y_δ(s), Z(s), Z_δ(s)) − f(s, Y′(s), Y′_δ(s), Z′(s), Z′_δ(s))|^2 e^{βs} ds]
≤ (2C^2/β) E[∫_0^T (|Ŷ(s)| + |Ŷ_δ(s)| + |Ẑ(s)| + |Ẑ_δ(s)|)^2 e^{βs} ds]
≤ (6C^2/β) E[∫_0^T (|Ŷ(s)|^2 + |Ẑ(s)|^2 + 2|Ŷ_δ(s)|^2 + 2|Ẑ_δ(s)|^2) e^{βs} ds]
≤ (6C^2/β)[1 + 2M^2 δ ∫_{−δ}^0 e^{−βr} α(dr)] E[∫_{−δ}^T (|Ŷ(s)|^2 + |Ẑ(s)|^2) e^{βs} ds]
= K(C, M, δ, α, β) E[∫_{−δ}^T (|Ŷ(s)|^2 + |Ẑ(s)|^2) e^{βs} ds].

Note that

E∫_0^T |Ŷ_δ(s)|^2 e^{βs} ds = E∫_0^T |∫_{−δ}^0 φ(s, s + r)(Y(s + r) − Y′(s + r))α(dr)|^2 e^{βs} ds
≤ M^2 δ E∫_0^T ∫_{−δ}^0 |Y(s + r) − Y′(s + r)|^2 α(dr) e^{βs} ds
= M^2 δ E∫_{−δ}^0 e^{−βr} ∫_0^T |Y(s + r) − Y′(s + r)|^2 e^{β(s+r)} ds α(dr)
= M^2 δ E∫_{−δ}^0 e^{−βr} ∫_r^{T+r} |Ŷ(u)|^2 e^{βu} du α(dr)
≤ M^2 δ ∫_{−δ}^0 e^{−βr} α(dr) E∫_{−δ}^T |Ŷ(s)|^2 e^{βs} ds.

If we choose β = 1/δ, then K(C, M, δ, α, β) ≤ 6C^2 δ[1 + 2M^2 δ e α([−δ, 0])]. Therefore, if δ is sufficiently small so that K(C, M, δ, α, β) < 1, then h is a contraction mapping under the norm ‖·‖_β. Our proof is completed.
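The contraction argument can be seen numerically in the deterministic special case (no noise, z ≡ 0), where (7) reduces to a backward ODE with delayed argument and the fixed-point map h becomes an ordinary Picard iteration. The discretization and coefficients below are our own illustrative choices; convergence requires a smallness condition of the kind K < 1 above, here roughly (a + b)T < 1.

```python
import numpy as np

# Picard iteration for the deterministic delayed backward equation
#   -y'(t) = a*y(t) + b*y(t - delta),  y(T) = xi,  y(t) = 0 for t < 0,
# mirroring the fixed-point construction used in the proof of Theorem 2.1.
# Coefficients are illustrative; here (a + b)*T = 0.5 < 1.
a, b, xi = 0.3, 0.2, 1.0
T, delta = 1.0, 0.1
n = 1000
dt = T / n
lag = int(round(delta / dt))

def picard_step(Y):
    """Given a guess Y on the grid, integrate backwards from y(T) = xi."""
    y = np.empty(n + 1)
    y[n] = xi
    for i in range(n, 0, -1):
        Y_delay = Y[i - lag] if i >= lag else 0.0   # Y(t - delta), zero before 0
        y[i - 1] = y[i] + (a * Y[i] + b * Y_delay) * dt
    return y

Y = np.zeros(n + 1)
gaps = []
for _ in range(30):
    Y_next = picard_step(Y)
    gaps.append(np.max(np.abs(Y_next - Y)))   # sup-norm gap between iterates
    Y = Y_next

print(gaps[0], gaps[-1])  # the successive-iterate gap shrinks geometrically
```

The geometric decay of the gaps is exactly the contraction estimate made visible; increasing a + b past 1/T makes the iteration diverge, matching the smallness restriction on the data.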
Now, let us introduce the following advanced SDE:

dx(t) = b(t, x(t), ∫_t^{t+δ} φ(t, s)x(s)α(ds)) dt + σ(t, x(t), ∫_t^{t+δ} φ(t, s)x(s)α(ds)) dW(t), t ∈ [0, T],
x(0) = x_0,
x(t) = λ(t), t ∈ (T, T + δ]. (8)

It is notable that there exist some results on time-advanced ordinary differential equations (AODEs) (e.g., see [1], [9], [13], [18], [20], [24], etc.), which have been applied in various areas including traveling waves in physics, cell growth in population dynamics, capital markets in economics, life-cycle models, electronics, etc. However, to our best knowledge, stochastic differential equations of advanced type (ASDEs) have never been discussed before. Nevertheless, these stochastic advanced equations should also have considerable real meaning beyond the control study alone (as implied by the broad range of applications of AODEs, their deterministic counterpart). Keeping this in mind, we will discuss these meanings in future study.

Now we aim to study the F_t-adapted solution x(·) ∈ L^2_F(0, T + δ; R^n) of the ASDE (8). Suppose that for all t ∈ [0, T],

b : Ω × R^n × L^2(Ω, F_r, P; R^n) → L^2(Ω, F_t, P; R^n),
σ : Ω × R^n × L^2(Ω, F_r, P; R^n) → L^2(Ω, F_t, P; R^{n×d}),

where r ∈ [t, T + δ]. We also assume that b and σ satisfy the following conditions:

(H2.4) There exists a constant C > 0 such that for all t ∈ [0, T], x, x′ ∈ R^n, ζ(·), ζ′(·) ∈ L^2_F(t, T + δ; R^n), r ∈ [t, T + δ], we have

|b(t, x, ζ(r)) − b(t, x′, ζ′(r))| + |σ(t, x, ζ(r)) − σ(t, x′, ζ′(r))| ≤ C(|x − x′| + E^{F_t}[|ζ(r) − ζ′(r)|]).

(H2.5) sup_{0≤t≤T} |b(t, 0, 0) + σ(t, 0, 0)| < +∞.

Under these conditions, b(t, ·, ·) and σ(t, ·, ·) are F_t-measurable, and this ensures that the solution of the advanced SDE will be F_t-adapted. We have the following result for the ASDE (8).

Theorem 2.2.
Assume b and σ satisfy (H2.4) and (H2.5), E|x 0 | 2 < +∞, E sup T ≤t≤T +δ |λ(t)| 2 < +∞, and the time delay δ is sufficiently small, then the ASDE (8) admits a unique F t -adapted solution. Proof Similar to Theorem 2.1, let us define the following norm in Banach space L 2 F (0, T +δ; R n ) which is more convenient for us to construct a contraction mapping: ν(·) β = E[ T +δ 0 |ν(s)| 2 e −βs ds] 1 2 . For simplicity, we denote t+δ t φ(t, s)x(s)α(ds) by x δ + (t), and set      x(t) = x 0 + t 0 b(s, X(s), X δ + (s))ds + t 0 σ(s, X(s), X δ + (s))dW (s), t ∈ [0, T ], x(t) = λ(t), t ∈ (T, T + δ]. Then we can define a mapping I : L 2 F (0, T + δ; R n ) → L 2 F (0, T + δ; R n ) such that I[X(·)] = x(·). For arbitrary X(·), X ′ (·) ∈ L 2 F (0, T + δ; R n ), we introduce the following notations: I[X(·)] = x(·), I[X ′ (·)] = x ′ (·), X(·) = X(·) − X ′ (·),x(·) = x(·) − x ′ (·). Consequently,x(·) satisfies                   x (t) = t 0 [b(s, X(s), X δ + (s)) − b(s, X ′ (s), X ′ δ + (s))]ds + t 0 [σ(s, X(s), X δ + (s)) − σ(s, X ′ (s), X ′ δ + (s))]dW (s), t ∈ [0, T ], x(0) = 0, x(t) = 0, t ∈ (T, T + δ]. Applying Itô's formula to e −βt |x(t)| 2 on [0, T ], we get E[e −βT |x(T )| 2 ] + βE[ T 0 e −βt |x(t)| 2 dt] = E[ T 0 (2e −βt b (t),x(t) + e −βt σ(t),σ(t) )dt], withb (t) = b(t, X(t), X δ + (t)) − b(t, X ′ (t), X ′ δ + (t)), σ(t) = σ(t, X(t), X δ + (t)) − σ(t, X ′ (t), X ′ δ + (t)) . Since b, σ satisfy (H2.4), we have βE[ T 0 e −βt |x(t)| 2 dt] ≤ E[ T 0 e −βt |x(t)| 2 dt] + E[ T 0 e −βt |b(t)| 2 dt] + E[ T 0 e −βt |σ(t)| 2 dt], ≤ E[ T 0 e −βt |x(t)| 2 dt] + 2C 2 E T 0 e −βt |X(t)| + E Ft [|X δ + (t)|] 2 dt . 
Moreover, it follows that (β − 1)E[∫_0^T e^{−βt}|x̂(t)|² dt] ≤ 4C² E[∫_0^T e^{−βt}|X̂(t)|² dt] + 4C² E[∫_0^T e^{−βt}|X̂_{δ+}(t)|² dt] ≤ 4C²[1 + M²δ ∫_0^δ e^{βs} α(ds)] E[∫_0^{T+δ} e^{−βt}|X̂(t)|² dt], due to the fact that E[∫_0^T e^{−βt}|X̂_{δ+}(t)|² dt] = E[∫_0^T e^{−βt}|∫_t^{t+δ} φ(t, s)X̂(s)α(ds)|² dt] ≤ M²δ ∫_0^δ e^{βs} α(ds) E[∫_0^{T+δ} e^{−βt}|X̂(t)|² dt]. Set K′(C, M, δ, α, β) = 4C²[1 + M²δ ∫_0^δ e^{βs} α(ds)] / (β − 1). If we choose β = 1/δ, then for sufficiently small δ we have K′(C, M, δ, α, β) ≤ 4C²δ[1 + M²δ e α([0, δ])] / (1 − δ) < 1. It follows that the mapping I is a contraction, hence the result.

Optimal control problem for backward stochastic system with delay

In this section we study a class of stochastic recursive delayed control problems of the following form: −dy(t) = f(t, y(t), ∫_{t−δ}^t φ(t, s)y(s)α(ds), z(t), ∫_{t−δ}^t φ(t, s)z(s)α(ds), v(t), ∫_{t−δ}^t φ(t, s)v(s)α(ds)) dt − z(t)dW(t), t ∈ [0, T], y(T) = ξ, y(t) = ϕ(t), z(t) = ψ(t), t ∈ [−δ, 0). (9) Here f : Ω × [0, T] × R^n × R^n × R^{n×d} × R^{n×d} × R^k × R^k → R^n is a given measurable function, ξ ∈ L²(Ω, F_T, P; R^n), and ϕ(·) is a deterministic function. v(·) is the control process with initial path η. The stochastic recursive control problem is to find a control that achieves the pre-given goal ξ at the terminal time T and, at the same time, maximizes a given cost functional. Let U be a nonempty convex subset of R^k. We denote by U the set of all admissible control processes v(·) of the form v(t) = η(t), t ∈ [−δ, 0), v(·) ∈ L²_F(0, T; R^k), v(t) ∈ U, a.s., t ∈ [0, T]. The objective is to maximize the following functional over U: J(v(·)) = E[∫_0^T l(t, y(t), ∫_{t−δ}^t φ(t, s)y(s)α(ds), z(t), ∫_{t−δ}^t φ(t, s)z(s)α(ds), v(t), ∫_{t−δ}^t φ(t, s)v(s)α(ds)) dt + γ(y(0))]. For simplicity, denote (∫_{t−δ}^t φ(t, s)y(s)α(ds), ∫_{t−δ}^t φ(t, s)z(s)α(ds), ∫_{t−δ}^t φ(t, s)v(s)α(ds)) by (y_δ(t), z_δ(t), v_δ(t)) if no confusion occurs. (H3.1) f is continuously differentiable in (y, y_δ, z, z_δ, v, v_δ). Moreover, the partial derivatives f_y, f_{y_δ}, f_z, f_{z_δ}, f_v and f_{v_δ} of f with respect to (y, y_δ, z, z_δ, v, v_δ) are uniformly bounded.
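Throughout this section the delayed terms y_δ(t), z_δ(t), v_δ(t) are weighted averages of the path over the window [t − δ, t]. As a sanity check of the notation, here is a small Python sketch that approximates such a term by a midpoint Riemann sum, assuming (purely for illustration) that α is the uniform probability measure on [−δ, 0] and that φ ≡ 1:

```python
def delayed_average(y, t, delta, phi, n=1000):
    """Approximate y_delta(t) = integral of phi(t, s) y(s) alpha(ds) over
    [t - delta, t] by a midpoint Riemann sum, assuming (for illustration)
    that alpha(ds) is the uniform probability measure ds/delta."""
    h = delta / n
    total = 0.0
    for k in range(n):
        s = t - delta + (k + 0.5) * h          # midpoint of the k-th cell
        total += phi(t, s) * y(s) * h / delta  # alpha(ds) = ds / delta
    return total

# Sanity check: with phi = 1 and a constant path, the moving average
# equals the constant itself.
avg = delayed_average(lambda s: 3.0, t=1.0, delta=0.2, phi=lambda t, s: 1.0)
```

With a linear path y(s) = s the same routine returns the window midpoint t − δ/2, as expected of a uniform moving average.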
Then if v(·) is admissible control and assumption (H3.1) holds, then the delayed BSDE (9) has a unique solution (y v (·), z v (·)) ∈ L 2 F (0, T +δ; R n )×L 2 F (0, T +δ; R n×d ) on [0, T +δ] for sufficiently small 0 ≤ δ ≤ T . (H3.2) For each v(·) ∈ U , l(·, y v (·), y v δ (·), z v (·), z v δ (·), v(·), v δ (·)) ∈ L 1 F (0, T ; R), l is differentiable to (y, y δ , z, z δ , v, v δ ), γ is differentiable with respect to y, and all the derivatives are bounded. Define the Hamiltonian function H : [0, T ] × R n × R n × R n×d × R k × R k × R n → R by H(t, y, y δ , z, z δ , v, v δ ) = l(t, y, y δ , z, z δ , v, v δ ) − f (t, y, y δ , z, z δ , v, v δ ), p . For each v(·) ∈ U , the associated adjoint equation satisfies the following ASDE:                            dp v (t) = − H y (t, Θ v (t), v(t), v δ (t), p v (t)) − E Ft [ t+δ t H y δ (s, Θ v (s), v(s), v δ (s), p v (s))φ(s, t)χ [0,T ] (s)ds] α(dt) dt dt + − H z (t, Θ v (t), v(t), v δ (t), p v (t)) − E Ft [ t+δ t H z δ (s, Θ v (s), v(s), v δ (s), p v (s))φ(s, t)χ [0,T ] (s)ds] α(dt) dt dW (t), t ∈ [0, T ], p v (0) = − γ y (y(0)),(10)with Θ v (t) = (y v (t), y v δ (t), z v (t), z v δ (t)) and α(dt) dt is the Radon-Nikodym derivative. 1) and (H3.2) hold. Suppose for u(·) ∈ U , (y(·), z(·)) is the corresponding trajectory and p(·) the corresponding solution of adjoint equation (10). If the following condition holds true: H v (t, Θ(t), u(t), u δ (t), p(t)) + E Ft [ t+δ t H v δ (s, Θ(s), u(t), u δ (s), p(s))φ(s, t)χ [0,T ] (s)ds] α(dt) dt , u(t) = max v∈U H v (t, Θ(t), u(t), u δ (t), p(t)) + E Ft [ t+δ t H v δ (s, Θ(s), u(t), u δ (s), p(s))φ(s, t)χ [0,T ] (s)ds] α(dt) dt , v ,(11) moreover, if H(t, y, y δ , z, z δ , v, v δ , p(t)) is a concave function of (y, y δ , z, z δ , v, v δ ), and γ is concave in y, then u(·) is an optimal control for our problem. Proof Choose a v(·) ∈ U and let (y v (·), z v (·)) be the corresponding solution of (9). 
To simplify the notation, we also use Θ v (t) = (y v (t), y v δ (t), z v (t), z v δ (t) ) and Θ(t) = (y(t), y δ (t), z(t), z δ (t)). Let I = E T 0 {l(t, y(t), y δ (t), z(t), z δ (t), u(t), u δ (t)) − l(t, y v (t), y v δ (t), z v (t), z v δ (t), v(t), v δ (t))} dt , II = γ(y(0)) − γ(y v (0)) . We want to prove that J(u(·)) − J(v(·)) = I + II ≥ 0. Since γ is concave on y, II ≥ γ y (y(0)) ⊤ (y(0) − y v (0)) = −p(0) ⊤ (y(0) − y v (0)). Applying Itô's formula to p(·), y(·) − y v (·) , we have p(0) ⊤ (y(0) − y v (0)) =E T 0 p(t), f (t, Θ(t), u(t), u δ (t)) − f (t, Θ v (t), v(t), v δ (t)) dt + E T 0 H y (t, Θ(t), u(t), u δ (t), p(t)) + E Ft [ t+δ t H y δ (s, Θ(s), u(s), u δ (s), p(s))φ(s, t)χ [0,T ] (s)ds] α(dt) dt , y(t) − y v (t) dt + E T 0 H z (t, Θ(t), u(t), u δ (t), p(t)) + E Ft [ t+δ t H z δ (s, Θ(s), u(s), u δ (s), p(s))φ(s, t)χ [0,T ] (s)ds] α(dt) dt , z(t) − z v (t) dt.(13) On the other hand, I = E T 0 [H(t, Θ(t), u(t), u δ (t), p(t)) − H(t, Θ v (t), v(t), v δ (t), p(t))]dt + E T 0 p(t), f (t, Θ(t), u(t), u δ (t)) − f (t, Θ v (t), v(t), v δ (t)) dt.(14) Since (Θ, v, v δ ) → H(t, Θ, v, v δ , p(t)) is concave, we have I ≥ − E T 0 H y (t, Θ(t), u(t), u δ (t), p(t)), y v (t) − y(t) dt − E T 0 H y δ (t, Θ(t), u(t), u δ (t), p(t)), y v δ (t) − y δ (t) dt − E T 0 H z (t, Θ(t), u(t), u δ (t), p(t)), z v (t) − z(t) dt − E T 0 H z δ (t, Θ(t), u(t), u δ (t), p(t)), z v δ (t) − z δ (t) dt − E T 0 H v (t, Θ(t), u(t), u δ (t), p(t)), v(t) − u(t) dt − E T 0 H v δ (t, Θ(t), u(t), u δ (t), p(t)), v δ (t) − u δ (t) dt + E T 0 p(t), f (t, Θ(t), u(t), u δ (t)) − f (t, Θ v (t), v(t), v δ (t)) dt.(15) Moreover, we have E T 0 H v δ (t, Θ(t), u(t), u δ (t), p(t)), v δ (t) − u δ (t) dt = E T 0 H v δ (s, Θ(s), u(s), u δ (s), p(s)), s s−δ φ(s, r)(v(r) − u(r))α(dr) ds = E T 0 E Fr r+δ r H v δ (s, Θ(s), u(s), u δ (s), p(s))φ(s, r)χ [0,T ] (s)ds, v(r) − u(r) α(dr) = E T 0 E Ft [ t+δ t H v δ (s, Θ(s), u(s), u δ (s), p(s))φ(s, t)χ [0,T ] (s)ds] α(dt) dt , v(t) − u(t) dt.(16) By the maximum 
condition (11), we can obtain E T 0 H v (t, Θ(t), u(t), u δ (t), p(t)), v(t) − u(t) dt + E T 0 H v δ (t, Θ(t), u(t), u δ (t), p(t)), v δ (t) − u δ (t) dt = 0.(17) From (12)- (17), it is easy to get J(u(·) − J(v(·)) ≥ − E T 0 H y (t, Θ(t), u(t), u δ (t), p(t)), y v (t) − y(t) dt − E T 0 H y δ (t, Θ(t), u(t), u δ (t), p(t)), y v δ (t) − y δ (t) dt − E T 0 H z (t, Θ(t), u(t), u δ (t), p(t)), z v (t) − z(t) dt − E T 0 H z δ (t, Θ(t), u(t), u δ (t), p(t)), z v δ (t) − z δ (t) dt + E T 0 H y (t, Θ(t), u(t), u δ (t), p(t)) + E Ft [ t+δ t H y δ (s, Θ(s), u(s), u δ (s), p(s))φ(s, t)χ [0,T ] (s)ds] α(dt) dt , y v (t) − y(t) dt + E T 0 H z (t, Θ(t), u(t), u δ (t), p(t)) + E Ft [ t+δ t H z δ (s, Θ(s), u(s), u δ (s), p(s))φ(s, t)χ [0,T ] (s)ds] α(dt) dt , z v (t) − z(t) dt = 0. So, we verify that J(u(·)) − J(v(·)) ≥ 0 for any v(·) ∈ U , and it follows that u(·) is the optimal control. Corollary 3.3. If the α(dt) is the Dirac measure at −δ, then the system involves pointwise delay, i.e. y δ (t) = y(t − δ), z δ (t) = z(t − δ), v δ (t) = v(t − δ). In this case, the sufficient condition of optimality is H v (t, Θ(t), u(t), u(t − δ), p(t)) + E Ft [H v δ (t + δ, Θ(t + δ), u(t), u(t + δ), p(t + δ))] = 0, with adjoint equation                          dp v (t) = − H y (t, Θ v (t), v(t), v(t − δ), p v (t)) − E Ft [H y δ (t + δ, Θ v (t + δ), v(t + δ), v(t), p v (t + δ))] dt − H z (t, Θ v (t), v(t), v(t − δ), p v (t)) − E Ft [H z δ (t + δ, Θ v (t + δ), v(t + δ), v(t), p v (t + δ))] dW (t), t ∈ [0, T ], p v (0) = − γ y (y(0)), p v (t) = 0, t ∈ (T, T + δ],(18)where Θ v (t) = (y v (t), y v (t − δ), z v (t), z v (t − δ)). Now we consider the special case wherein the control variable involves no delay, to derive the corresponding maximum condition, we first introduce the following condition. 
(H3.3) For each v(·) ∈ U, l(·, y^v(·), y^v_δ(·), z^v(·), z^v_δ(·), v(·)) ∈ L¹_F(0, T; R), l is differentiable in (y, y_δ, z, z_δ), γ is differentiable with respect to y, and all the derivatives are bounded. We have the following result: Theorem 3.4. (The case without control delay) Suppose there is no control delay, that is, f = f(·, y^v(·), y^v_δ(·), z^v(·), z^v_δ(·), v(·)), l = l(·, y^v(·), y^v_δ(·), z^v(·), z^v_δ(·), v(·)). Suppose u(·) ∈ U, (y(·), z(·)) is its corresponding trajectory and p(·) the corresponding solution of the adjoint equation (10). Let (H3.1), (H3.3) and the following condition hold: H(t, Θ(t), u(t), p(t)) = max_{v∈U} H(t, Θ(t), v, p(t)) for all t ∈ [0, T], (19) with Θ(t) = (y(t), y_δ(t), z(t), z_δ(t)). Moreover, suppose that for each (t, y, y_δ, z, z_δ) ∈ [0, T] × R^n × R^n × R^{n×d} × R^{n×d}, Ĥ(t, y, y_δ, z, z_δ) = max_{v∈U} H(t, y, y_δ, z, z_δ, v, p(t)) is a concave function of (y, y_δ, z, z_δ) and γ is concave in y. Then u(·) is an optimal control. Obviously, Γ(t, y, y_δ, z, z_δ) ≤ 0 for all (y, y_δ, z, z_δ) and Γ(t, Θ(t)) = 0. It follows that Γ attains its maximum value at (y(t), y_δ(t), z(t), z_δ(t)). Consequently, we have Γ_y(t, Θ(t)) = 0, Γ_{y_δ}(t, Θ(t)) = 0, Γ_z(t, Θ(t)) = 0, Γ_{z_δ}(t, Θ(t)) = 0. These lead to H_y(t, Θ(t), u(t), p(t)) = a_1(t), H_{y_δ}(t, Θ(t), u(t), p(t)) = a_2(t), H_z(t, Θ(t), u(t), p(t)) = b_1(t), H_{z_δ}(t, Θ(t), u(t), p(t)) = b_2(t). Combining (22) and noting the arbitrariness of (y, y_δ, z, z_δ), we have Ĥ(t, Θ^v(t)) − Ĥ(t, Θ(t)) ≤ ⟨H_y(t, Θ(t), u(t), p(t)), y^v(t) − y(t)⟩ + ⟨H_{y_δ}(t, Θ(t), u(t), p(t)), y^v_δ(t) − y_δ(t)⟩ + ⟨H_z(t, Θ(t), u(t), p(t)), z^v(t) − z(t)⟩ + ⟨H_{z_δ}(t, Θ(t), u(t), p(t)), z^v_δ(t) − z_δ(t)⟩. Substituting the above result into (20), we obtain J(u(·)) − J(v(·)) ≥ 0.
Application I: Dynamic optimization of recursive utility with moving average

In this section, we investigate Example 1.1, the dynamic optimization of recursive utility with moving average, which was already given in Section 1. The state equation satisfies the following dynamics: y(t) = ξ − ∫_t^T [αc(s) + β (1/s) ∫_0^s y(u) du] ds − ∫_t^T z(s) dW(s), (23) where α, β > 0 are some constants, and the control variable is the consumption process c(·). The class of admissible controls is denoted by C = {c(·) ∈ L²_F(0, T; R), t ∈ [0, T]}. Given some standard utility function U, for example U(x) = x^R/R for 0 < R < 1, we consider the following dynamic optimization problem: inf_{c(·)∈C} J(c(·)), where the objective functional is given by J(c(·)) = −E[∫_0^T U(c(t)) dt] + y^c(0), following Delong [5]. The state equation (23) can be reformulated as y(t) = ξ − ∫_t^T [αc(s) + β (1/s) ∫_{s−T}^s T y(u) χ_{u≥0} α(du)] ds − ∫_t^T z(s) dW(s), where α is the uniform measure. Introduce the Hamiltonian function H(t, y(t), y_δ(t), z(t), c(t), p(t)) = −U(c(t)) + [αc(t) + β (1/t) ∫_{t−T}^t T y(u) χ_{u≥0} α(du)] p(t). The associated adjoint equation satisfies dp(t) = [∫_t^T β p(s) (1/s) ds] dt, t ∈ [0, T], p(0) = 1. (24) Setting P(t) = ∫_t^T β p(s) (1/s) ds, Eq. (24) reduces to the ordinary differential equation system ṗ(t) = P(t), Ṗ(t) = −(β/t) p(t), which is solvable, and by Theorem 3.2 we have the following result.

Application II: Dynamic optimization of pension fund with delayed surplus

In this section we turn to Example 1.2 of Section 1 and use the results obtained in Section 3 to derive the optimal control. For simplicity, suppose g(·) is a linear function, g(y(t) − κy(t − δ)) = αy(t) − ακy(t − δ), where α, κ > 0. Then our model can be rewritten as dy(t) = {(r − α)y(t) + λz(t) + ακy(t − δ) − c(t)}dt + z(t)dW(t), t ∈ [0, T], y(t) = 0, t ∈ [−δ, 0), y(T) = ξ. (25) Denote the admissible control set by C = {c(·) ∈ L²_F(0, T; R), t ∈ [0, T]}. It follows that if δ is sufficiently small, then Eq. (25) admits a unique solution pair (y(·), z(·)).
Introduce the objective functional of the fund manager as follows: J(c(·)) = E[∫_0^T L e^{−ρt} c(t)^{1−R}/(1 − R) dt] − Kx(0), (26) where L and K are positive constants, ρ is a discount factor, and R ∈ (0, 1) is the index of risk aversion. The manager aims to maximize the expected objective functional, taking into account both the cumulative consumption and the initial reserve requirement. The optimal control problem is to maximize J(c(·)) over C. The Hamiltonian function is given by H(t, y(t), y(t − δ), c(t), p(t)) = L e^{−ρt} c(t)^{1−R}/(1 − R) + {(r − α)y(t) + λz(t) + ακy(t − δ) − c(t)} p(t). The adjoint equation is dp(t) = {(α − r)p(t) − ακ E^{F_t}[p(t + δ)]}dt − λp(t)dW(t), t ∈ [0, T], p(0) = K, p(t) = 0, t ∈ (T, T + δ]. (27) Then from Corollary 3.3 we have the following result. Proposition 5.1. If p(t) is the solution of the ASDE (27), then the optimal consumption is given by c(t) = (p(t)e^{ρt}/L)^{−1/R} and the optimal fund proportion in the risky asset is θ(t) = z(t)/(σy(t)), where (y(t), z(t)) satisfies (25). In the following, we aim to obtain the explicit solution of the ASDE (27). To this end, we first set M(t) = exp(−∫_0^t λ dW(s) − (1/2)∫_0^t λ² ds), t ∈ [0, T + δ]. It follows that M(t) is an exponential martingale and satisfies dM(t) = −λM(t)dW(t). Let p(t) = q(t)M(t), where q(t) is a deterministic function defined on [0, T + δ]; then, applying Itô's formula to p(t), we have dp(t) = q′(t)M(t)dt − λq(t)M(t)dW(t), t ∈ [0, T]. (28) On the other hand, substituting p(t) = q(t)M(t) into Eq. (27), we have dp(t) = {(α − r)q(t)M(t) − ακq(t + δ)E^{F_t}[M(t + δ)]}dt − λq(t)M(t)dW(t) = {(α − r)q(t)M(t) − ακq(t + δ)M(t)}dt − λq(t)M(t)dW(t), t ∈ [0, T]. (29) Comparing (28) and (29), we obtain the characteristic equation h + ακe^{hδ} = α − r. Note that α, r, κ are the parameters of the state equation, so the above characteristic equation has a solution h if the delay parameter δ is small enough. In fact, denote F(h) = h + ακe^{hδ}; then it follows that lim_{h→+∞} F(h) = +∞.
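Numerically, the root h of F(h) = α − r can be located by bisection once it is bracketed. A short Python sketch; the parameter values are illustrative assumptions only, not taken from the paper:

```python
import math

def solve_characteristic(alpha, r, kappa, delta, lo=-50.0, hi=50.0, tol=1e-12):
    """Solve F(h) = h + alpha*kappa*exp(h*delta) = alpha - r by bisection.
    F is continuous and strictly increasing, so the root is unique once
    it is bracketed in [lo, hi]."""
    target = alpha - r
    F = lambda h: h + alpha * kappa * math.exp(h * delta)
    assert F(lo) < target < F(hi), "root not bracketed in [lo, hi]"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameter values (assumptions, not from the paper).
h = solve_characteristic(alpha=0.3, r=0.05, kappa=0.5, delta=0.1)
```

With these values the root is h ≈ 0.0985; q(t) = Ke^{ht} then pins down p(t) = q(t)M(t) and hence the optimal consumption of Proposition 5.1.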
In addition, F′(h) > 0, so F(h) is an increasing function of h; hence there exists a unique h such that F(h) = α − r, and thus q(t), as well as p(t), is uniquely determined. One remark on the parameter range: set L = max{|α − r|, ακ, λ}; then the well-posedness of the BSDE (25) and the ASDE (27) requires 6L²δ(1 + 2δ²e) < 1 and 4L²δ(1 + δ²e) + δ < 1.

Application III: The dynamic optimization of linear delayed system

Here, we revisit Example 1.3 of the backward system with time-delayed generator. The state equation is given by y(t) = ξ − ∫_t^T [β_1 y(s) + β_2 y(s − δ) + γ_1 z(s) + γ_2 z(s − δ) + αv(s)] ds − ∫_t^T z(s) dW(s), where α, β_1, β_2, γ_1, γ_2 are some constants, v(·) is the control process, and the class of admissible controls is denoted by U_ad = {v(·) ∈ L²_F(0, T; R), t ∈ [0, T]}. The dynamic optimization problem is as follows: inf_{v∈U_ad} J(v(·)), where the objective functional is given by J(v(·)) = (1/2) E[∫_0^T R(t)v²(t) dt] + K y^v(0) for some constant K and nonnegative function R(t) defined on [0, T]. By Corollary 3.3, the Hamiltonian function of our optimization problem becomes H(t, y(t), y(t − δ), z(t), z(t − δ), v(t)) = −(1/2)R(t)v²(t) + (αv(t) + β_1 y(t) + β_2 y(t − δ) + γ_1 z(t) + γ_2 z(t − δ)) p(t). Similar to Application II, we can introduce the exponential martingale satisfying dM(t) = γM(t)dW(t), where γ is some coefficient to be determined, and set p(t) = q(t)M(t). Then we get the following AODE system: q′(t) = −β_1 q(t) − β_2 q(t + δ), γ q(t) = γ_1 q(t) + γ_2 q(t + δ). The first equation for q(t) can be solved using the same method as for Eq. (30). Based on it, we can plug q(t) into the second equation to get the value of γ, so that the exponential martingale M(t) is uniquely determined. Consequently, the optimal control is given by u(t) = αp(t)/R(t), where p(t) = q(t)M(t) is the solution of the ASDE (32).
1For a given admissible control v(·), Eq.(10) is an ASDE. By the virtue of the indicative function χ [0,T ] (s), it is not necessary to give the value of p v (t) on (T, T + δ]. Moreover, the ASDE (10) admits a unique solution under condition (H3.1) and (H3.2) due to Theorem 2.2. Now we can give the first main result of this paper in the following: Theorem 3.2. (Sufficient condition of optimality) Let (H3. (u)χ {u≥0} α(du) ds − T t z(s)dW swhere α is the uniform measure. Introduce the Hamiltonian function Proposition 4. 1 . 1The optimal consumption is given by c(t) = (αp(t)) 1 R−1 , where p(t) satisfies Eq.(24). q ′ (t) = (α − r)q(t) − ακq(t + δ), t ∈ [0, T ], q(0) = K, q(t) = 0, t ∈ (T, T + δ],(30)then p(t) = q(t)M (t) is a solution of ASDE (27). The solution of AODE (30) can be obtained via the characteristic function as follows: q(t) = Ke ht for t ∈ [0, T ], and q(t) = 0 for t ∈ (T, T + δ]. Here, h satisfies the following characteristic equation: t) = −β 1 p(t) − β 2 E Ft [p(t + δ)] dt + −γ 1 p(t) − γ 2 E Ft [p(t + δ)] dW (t), t ∈ [0, T ], p(0) = K, p(t) = 0, t ∈ (T, T + δ]. Proof Similar to the proof of Theorem 3.2, we also choose arbitrary v(·) ∈ U , and aim to prove J(u(·)) − J(v(·)) ≥ 0. From the procedure of Theorem 3.2, we can see thatBy the condition(19)and the definition ofĤ,≥Ĥ(t, Θ(t)) −Ĥ(t, y, y δ , z, z δ ).Since (y, y δ , z, z δ ) →Ĥ(t, y, y δ , z, z δ ) is concave for any given t ∈ [0, T ], it follows that there exists a supergradient a 1 (t), a 2 (t) ∈ R n and b 1 (t), b 2 (t) ∈ R n×d forĤ(t, y, y δ , z, z δ ) at (y, y δ , z, z δ ) (refer Chapter 5, Section 23 in[23]), that is, for all (y, y δ , z, z δ ),Define Γ(t, y, y δ , z, z δ ) = H(t, y, y δ , z, z δ , u(t), p(t)) − H(t, Θ(t), u(t), p(t)) − a 1 (t), y − y(t) − a 2 (t), y δ − y δ (t) On some nonlinear ordinary differential equations with advanced arguments. A Augustynowicz, H Leszczynski, W Walter, Nonlinear Analysis. 53A. Augustynowicz, H. Leszczynski and W. Walter (2003). 
[1] A. Augustynowicz, H. Leszczynski and W. Walter (2003). On some nonlinear ordinary differential equations with advanced arguments. Nonlinear Analysis, 53, 495-505.
[2] L. Chen and Z. Wu (2010). Maximum principle for the stochastic optimal control problem with delay and application. Automatica, 46, 1074-1080.
[3] K. L. Cooke and J. Wiener (1987). An equation alternately of retarded and advanced type. Proceedings of the American Mathematical Society, 99, 726-732.
[4] N. Dokuchaev and X. Y. Zhou (1999). Stochastic control with terminal contingent conditions. J. Math. Anal. Appl., 238, 143-165.
[5] Ł. Delong (2011). BSDEs with time-delayed generators of a moving average type with applications to non-monotone preferences. To appear in Stochastic Models.
[6] Ł. Delong (2011). Applications of time-delayed backward stochastic differential equations to pricing, hedging and portfolio management. Working paper.
[7] Ł. Delong and P. Imkeller (2010). Backward stochastic differential equations with time delayed generators - results and counterexamples. Annals of Applied Probability, 20, 1512-1536.
[8] S. Federico (2011). A stochastic control problem with delay arising in a pension fund model. To appear in Finance and Stochastics.
[9] A. J. Hall, G. C. Wake and P. W. Gandar (1991). Steady size distributions for cells in one dimensional plant tissues. J. Math. Bio., 30, 101-123.
[10] S. Ji and X. Y. Zhou (2006). A maximum principle for stochastic optimal control with terminal state constraints, and its applications. Communications in Information and Systems, 6, 321-338.
[11] B. Larssen (2002). Dynamic programming in stochastic control of systems with delay. Stochastics and Stochastics Reports, 74, 651-673.
[12] A. Lim and X. Y. Zhou (2001). Linear-quadratic control of backward stochastic differential equations. SIAM J. Control Optim., 40, 450-474.
[13] T. Kato and J. B. McLeod (1971). The functional-differential equation. Bull. Amer. Math. Soc., 77, 891-937.
[14] M. Kohlmann and X. Y. Zhou (2000). Relationship between backward stochastic differential equations and stochastic controls: a linear-quadratic approach. SIAM J. Control Optim., 38, 1392-1407.
[15] J. Ma and J. Yong (1999). Forward-Backward Stochastic Differential Equations and their Applications. Lecture Notes in Math. 1702, Springer-Verlag.
[16] S. E. A. Mohammed (1984). Stochastic Functional Differential Equations. Pitman Advanced Publishing Program.
[17] S. E. A. Mohammed (1996). Stochastic differential equations with memory: theory, examples and applications. In: Stochastic Analysis and Related Topics 6, The Geilo Workshop, Progress in Probability, Birkhäuser.
[18] R. J. Oberg (1969). On the local existence of solutions of certain functional differential equations. Proc. Amer. Math. Soc., 20, 285-302.
[19] B. Øksendal and A. Sulem (2001). A maximum principle for optimal control of stochastic systems with delay, with applications to finance. In: Optimal Control and Partial Differential Equations (J. L. Menaldi, E. Rofman and A. Sulem, eds), IOS Press, Amsterdam, 64-79.
[20] G. P. Papavassilopoulos and G. J. Olsder (1984). On a linear differential equation of the advanced type. J. Math. Anal. Appl., 103, 74-82.
[21] N. El Karoui, S. Peng and M. C. Quenez (2001). A dynamic maximum principle for the optimization of recursive utilities under constraints. Annals of Applied Probability, 11, 664-693.
[22] S. Peng and Z. Yang (2009). Anticipated backward stochastic differential equations. Annals of Probability, 37, 877-902.
[23] R. T. Rockafellar (1970). Convex Analysis. Princeton Univ. Press, Princeton, NJ.
[24] T. Yoneda (2006). On the functional-differential equation of advanced type. J. Math. Anal. Appl., 332, 487-496.
[25] J. Yong and X. Y. Zhou (1999). Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer-Verlag, New York.
Characterizing the nonlocal correlations of particles that never interacted

C. Branciard, N. Gisin, and S. Pironio
Group of Applied Physics, University of Geneva, 20 rue de l'Ecole-de-Médecine, CH-1211 Geneva 4, Switzerland
(Dated: November 6, 2009)

Quantum systems that have never interacted can become nonlocally correlated through a process called entanglement swapping. To characterize nonlocality in this context, we introduce local models where quantum systems that are initially uncorrelated are described by uncorrelated local variables. While a pair of maximally entangled qubits prepared in the usual way (i.e., emitted from a common source) requires a visibility close to 70% to violate a Bell inequality, we show that an entangled pair generated through entanglement swapping will already violate a Bell inequality for visibilities as low as 50% under our assumption.

It is natural to expect that correlations between distant particles are the result of causal influences originating in their common past; this is the idea behind Bell's concept of local causality [1]. Yet, quantum theory predicts that measurements on entangled particles will produce outcome correlations that cannot be reproduced by any theory where each separate outcome is locally determined by variables correlated at the source. This nonlocal nature of entangled states can be revealed by the violation of Bell inequalities. However remarkable it is that quantum interactions can establish such nonlocal correlations, it is even more remarkable that particles that never directly interacted can also become nonlocally correlated. This is possible through a process called entanglement swapping [2].
Starting from two independent pairs of entangled particles, one can measure jointly one particle from each pair, so that the two other particles become entangled, even though they have no common past history. The resulting pair is a genuine entangled pair in every aspect, and can in particular violate Bell inequalities. Intuitively, it seems that such entanglement swapping experiments exhibit nonlocal effects even stronger than those of usual Bell tests. To make this intuition concrete and to fully grasp the extent of nonlocality in entanglement swapping experiments, it seems appropriate to contrast them with the predictions of local models where systems that are initially uncorrelated are described by uncorrelated local variables. This is the idea that we pursue here. To make it precise, consider the following general scenario. A source S_1 sends particles to Alice and Bob, and a separate source S_2 sends particles to Charles and Bob. All parties can perform measurements on their systems, labeled x, y and z for Alice, Bob and Charles, respectively, and they obtain outcomes denoted a, b, and c, respectively. Bob's measurement y might correspond to a joint measurement on the two systems that he receives from each source. The correlations between the measurement outcomes of the three parties are described by the joint probability distribution P(a, b, c|x, y, z). An entanglement swapping experiment is clearly a particular case of this scenario, where Bob's measurement corresponds to a Bell measurement entangling Alice's and Charles's particles.
Under the usual assumption, the tripartite distribution P(a, b, c|x, y, z) would be said to be local if it can be written in the factorized form P(a, b, c|x, y, z) = ∫ dλ ρ(λ) P(a|x, λ)P(b|y, λ)P(c|z, λ), (1) where the variable λ with distribution ρ(λ) describes the joint state of the three systems according to the local model, and P(a|x, λ), P(b|y, λ), P(c|z, λ) are the local probabilities for each separate outcome given λ. In our scenario, however, there are two separate sources S_1 and S_2. It is thus natural to assume that the local model assigns two different states λ_1 and λ_2, one to each source, and to consider instead of (1) the decomposition P(a, b, c|x, y, z) = ∫ dλ_1 dλ_2 ρ(λ_1, λ_2) P(a|x, λ_1)P(b|y, λ_1, λ_2)P(c|z, λ_2). (2) The local response function of Alice now depends only on λ_1, that of Charles only on λ_2, while that of Bob depends on both λ_1 and λ_2. So far, the decompositions (1) and (2) are equivalent: one recovers (1) from (2) by letting ρ(λ_1, λ_2) be different from zero only when λ_1 = λ_2 = λ. We now introduce our basic assumption: since the two sources S_1 and S_2 are supposed to be independent and uncorrelated, it is natural to assume that this property carries over to the local model. The variables λ_1 and λ_2 should therefore be independent and their joint distribution ρ(λ_1, λ_2) should factorize: ρ(λ_1, λ_2) = ρ_1(λ_1)ρ_2(λ_2). (3) We refer to models satisfying this independence assumption as "bilocal" models, since they aim at explaining the correlations P(a, b, c|x, y, z) with two independent sources of local variables. Even though the local variables λ_1 and λ_2 are initially independent, once conditioned on the result of Bob's joint measurement they will bear enough correlations to reproduce non-trivial correlations between Alice's and Charles's systems. These correlations, however, are much weaker than those that can be established through joint measurements in quantum theory.
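For finite hidden variables, the bilocal decomposition (2) with the factorization (3) becomes a finite sum. A toy Python sketch showing how a bilocal distribution P(a, b, c|x, y, z) is assembled from two independent uniform bits λ_1, λ_2; the response functions below are arbitrary illustrative choices, not a model taken from the paper:

```python
# Toy bilocal model with binary hidden variables lam1, lam2, each uniform
# and independent, so that rho(lam1, lam2) = rho1(lam1) * rho2(lam2) as in (3).
# The deterministic response functions below are arbitrary illustrative choices.

def P_A(a, x, lam1):            # Alice's outcome depends only on lam1
    return 1.0 if a == (x * lam1) % 2 else 0.0

def P_C(c, z, lam2):            # Charles's outcome depends only on lam2
    return 1.0 if c == (z + lam2) % 2 else 0.0

def P_B(b, y, lam1, lam2):      # Bob's outcome may depend on both sources
    return 1.0 if b == (y + lam1 * lam2) % 2 else 0.0

def P(a, b, c, x, y, z):
    """Bilocal correlation (2)-(3): average over independent uniform lam1, lam2."""
    return sum(0.25 * P_A(a, x, l1) * P_B(b, y, l1, l2) * P_C(c, z, l2)
               for l1 in (0, 1) for l2 in (0, 1))

# Normalization: for every setting (x, y, z) the outcome probabilities sum to 1.
normalized = all(
    abs(sum(P(a, b, c, x, y, z)
            for a in (0, 1) for b in (0, 1) for c in (0, 1)) - 1.0) < 1e-12
    for x in (0, 1) for y in (0, 1) for z in (0, 1))
```

Any correlation built this way satisfies the bilocality constraint by construction, whatever response functions are plugged in.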
We introduce below a (quadratic) Bell inequality that is satisfied by all bilocal correlations but is violated by quantum correlations. As we will see, our inequality simplifies the requirements for the demonstration of quantumness in entanglement swapping experiments. Restricted classes of local models with independent sources were considered in [3,4] within the context of the detection loophole. But apart from these exploratory works, little was known about how nonlocality is induced through measurements on independent quantum systems. Beyond its fundamental interest, nonlocality is also known to play a key role in several quantum information protocols [5,6], and measurement-induced correlations are at the basis of quantum repeaters [7] and measurement-based quantum computation [8]. One of our contributions is to introduce a theoretical framework to address broadly the role of nonlocality in such contexts. Before entering into the details of our results, it might be worth justifying our independence assumption further. It is, strictly speaking, an assumption, rather than something which follows logically from locality. Indeed, some events in the common past of the sources S1 and S2 could in principle have influenced, in a way that is perfectly in accord with locality, both λ1 and λ2 such that they wind up correlated, in violation of (3). However, an assumption similar to (3) is actually hidden in any standard Bell-type experiment. In order to derive a Bell-type inequality, one needs (in addition to the premise of local causality) an assumption to the effect that the measurement settings are "freely chosen". What this means in practice is that the measurement settings are determined by a random mechanism that is considered independent of the variable λ describing the particle source.
Seen from this perspective, the assumption that the laser sources in the quantum random number generators used to choose the measurement settings in a standard Bell experiment [9] are independent of the laser source generating the entangled photons is not much different from the assumption that the two laser sources (which may be of different brands, assembled in different parts of the world, and powered by different electrical supplies) used in an entanglement swapping experiment are independent. Of course we cannot exclude in principle that such apparently independent sources are significantly correlated. But, quoting Bell, "this way of arranging quantum mechanical correlations would be even more mind-boggling than one in which causal chains go faster than light. Apparently separate parts of the world would be deeply and conspiratorially entangled" [1]. Characterization of the bilocal set. We start by giving a characterization of the set of bilocal correlations that is handier for analytical and numerical purposes than the definition (2)-(3). First note that, without loss of generality, the local response function P(a|x,λ1) of Alice can be taken to be deterministic, i.e., such that it assigns a unique measurement output a to every input x (any randomness used locally by Alice can always be thought of as being included in the shared variable λ1). For a finite number of possible measurement inputs and outputs, there is a finite number of such deterministic strategies, corresponding to an assignment of an output αx to each of Alice's N possible inputs x. We thus label each of these strategies with the string α = α1...αN and denote the corresponding response function Pα(a|x). Similarly, the response functions P(b|y,λ1,λ2) and P(c|z,λ2) can also be taken to be deterministic. We label the associated strategies β and γ, and the corresponding response functions Pβ(b|y) and Pγ(c|z).
Let Λ12_αβγ denote the set of pairs (λ1, λ2) specifying the strategies α, β, and γ for Alice, Bob, and Charles. Defining q_αβγ = ∫_{Λ12_αβγ} dλ1 dλ2 ρ(λ1, λ2), Eq. (2) can then be rewritten as

P(a,b,c|x,y,z) = Σ_{α,β,γ} q_αβγ P_α(a|x) P_β(b|y) P_γ(c|z)   (4)

with q_αβγ ≥ 0 and Σ_{α,β,γ} q_αβγ = 1. So far we have not used the independence condition (3), and (4) corresponds to the well-known decomposition of local correlations as a convex sum of deterministic strategies, where the weights q_αβγ can be understood as the probabilities assigned by the source to the strategies α, β, and γ. Let us now define q_αγ = Σ_β q_αβγ, q_α = Σ_{β,γ} q_αβγ, and q_γ = Σ_{α,β} q_αβγ. Using the fact that Λ12_αβγ = (Λ1_α × Λ2_γ) ∩ Λ12_β, as follows from (2), the independence condition (3) implies that

q_αγ = q_α q_γ.   (5)

Conversely, any correlation P(a,b,c|x,y,z) satisfying (4) and (5) can be written in the form (2). Indeed, since q_αγ = q_α q_γ, we can write q_αβγ = q_α q_γ q_{β|αγ}. Inserting this expression in (4) and defining P_{α,γ}(b|y) = Σ_β q_{β|αγ} P_β(b|y), we then find that P(a,b,c|x,y,z) = Σ_{α,γ} q_α q_γ P_α(a|x) P_{α,γ}(b|y) P_γ(c|z), which is clearly of the form (2). We thus conclude that a tripartite correlation is bilocal if and only if it admits the decomposition (4) with the restriction (5). The bilocal set that we have just characterized is clearly contained in the local set. The extremal points of the local set, corresponding to deterministic strategies, are also bilocal, but a mixture of deterministic strategies is not necessarily bilocal due to the non-convex constraint (5). Therefore, one cannot use standard Bell inequalities to distinguish one set from the other. As we will see below, however, correlations can be shown to be non-bilocal using non-linear inequalities (or joint sets of linear inequalities).
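The "if" direction of this characterization is easy to illustrate numerically: weights built in the product form q_α q_γ q_{β|αγ} automatically satisfy the constraint (5). A minimal sketch (the strategy counts are illustrative only, not fixed by the text):

```python
import random

# Numerical illustration of the characterization (4)-(5): weights of the
# bilocal form q_{alpha,beta,gamma} = q_alpha * q_gamma * q_{beta|alpha,gamma}
# always yield marginals satisfying the constraint (5),
# q_{alpha,gamma} = q_alpha * q_gamma.

random.seed(0)

def normalized(n):
    """A random probability vector of length n."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

nA, nB, nC = 4, 16, 4                                  # illustrative strategy counts
qA = normalized(nA)                                    # q_alpha
qC = normalized(nC)                                    # q_gamma
qB_given = {(a, c): normalized(nB)                     # q_{beta|alpha,gamma}
            for a in range(nA) for c in range(nC)}

q = {(a, b, c): qA[a] * qC[c] * qB_given[(a, c)][b]
     for a in range(nA) for b in range(nB) for c in range(nC)}

# marginalize over Bob's strategies and test the factorization (5)
q_ac = {(a, c): sum(q[(a, b, c)] for b in range(nB))
        for a in range(nA) for c in range(nC)}
ok = all(abs(q_ac[(a, c)] - qA[a] * qC[c]) < 1e-12
         for a in range(nA) for c in range(nC))
print(ok)
```

The converse direction, of course, is the non-trivial part: a generic set of weights satisfying (4) alone need not factorize.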
Note that while deciding whether a given set of correlations is local can be conveniently solved using linear programming, deciding whether a correlation is bilocal is a quadratically constrained problem, which is much more difficult to handle numerically. However, standard linear and semidefinite relaxations can be used to deal with the nonlinear constraint (5). We describe in [10] a linear relaxation of (5) which works well on many instances. Application to entanglement swapping. We now illustrate how the bilocality constraint restricts the set of possible correlations on a simple example inspired by the standard entanglement swapping protocol. The sources S1 and S2 send pairs of particles in the singlet state |Ψ−⟩ = (|01⟩ − |10⟩)/√2. Bob performs a Bell state measurement on the two particles he receives, with four possible outputs b = b0b1 = 00, 01, 10, 11, corresponding to the four Bell states |Φ+⟩, |Ψ+⟩, |Ψ−⟩, and |Φ−⟩, respectively. Depending on Bob's result, Alice's and Charles's particles end up in the corresponding Bell state. To check whether the entanglement swapping succeeded, we assume that Alice and Charles can each perform one out of two measurements x, z ∈ {0, 1}, with binary outputs a, c ∈ {0, 1}, on their systems. This is sufficient, e.g., to test the CHSH inequality [11] (or, more precisely, for each state prepared by Bob, a different version of the CHSH inequality corresponding to a relabeling of the inputs and outputs). This scenario is characterized by the probabilities PQ(abc|xz) = PQ(b) PQ|b(ac|xz), where PQ|b(ac|xz) denotes the correlations seen by Alice and Charles conditioned on Bob's output b, and where for convenience we omit Bob's input y, since he is assumed to make a single, fixed measurement. We are interested in the robustness of PQ to the admixture of white noise, quantified by the maximal v ∈ [0, 1] such that PQ(v) = v PQ + (1 − v) PR is bilocal, where PR denotes the distribution with completely random outcomes.
The quantity v can also be interpreted as the experimental visibility of the final entangled pair shared by Alice and Charles. If Alice and Charles use the measurement settings optimal for the CHSH inequality, given by x0 = σx, x1 = σz, z0 = (σx + σz)/√2, and z1 = (σx − σz)/√2, no improvement is obtained over the usual locality condition; i.e., we found that the quantum correlations become bilocal at the visibility v = 1/√2, the same point at which they also become local. Using the characterization defined by (4) and (5), we searched numerically over other choices of measurement settings for Alice and Charles; the best noise resistance that we found is v = 1/2, obtained for x0 = z0 = (σx + σz)/√2 and x1 = z1 = (σx − σz)/√2. The corresponding correlations observed by the three parties are given by PQ(b) = 1/4 for all b, and

PQ|b(ac|xz) = 1/2 if a ⊕ c = b0 and x ⊕ z = b1;
              0   if a ⊕ c ≠ b0 and x ⊕ z = b1;
              1/4 otherwise.   (6)

For instance, if Alice and Charles end up in the |Φ+⟩ state, corresponding to b = 00, they obtain perfectly correlated results if they performed the same measurements, and completely uncorrelated results otherwise. The above correlations are local, as they can be decomposed as PQ = (PC + P̄C)/2, where PC and P̄C are defined in terms of deterministic strategies as PC = (1/8) Σ_{α,β0,β1} P_{αα} P_{β0β1} P_{(α⊕β0)(α⊕β0)} and P̄C = (1/8) Σ_{α,β0,β1} P_{(α⊕β1)(α⊕β1⊕1)} P_{β0β1} P_{(α⊕β0)(α⊕β0⊕1)}. They are not bilocal, however, as we now show.

A quadratic Bell inequality for bilocality.
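Before proceeding, the local decomposition PQ = (PC + P̄C)/2 claimed above can be checked numerically against Eq. (6). A minimal sketch (the dictionary conventions and function names are this sketch's own):

```python
import itertools

# Check that the mixture (P_C + Pbar_C)/2 of deterministic strategies
# reproduces Eq. (6).  Distributions are stored as
# P[(a, b0, b1, c, x, z)] = P(a, b, c | x, z).

def add_strategy(P, weight, a_of_x, b0, b1, c_of_z):
    """Accumulate one deterministic strategy into distribution P."""
    for x, z in itertools.product((0, 1), repeat=2):
        key = (a_of_x[x], b0, b1, c_of_z[z], x, z)
        P[key] = P.get(key, 0.0) + weight

PC, PbarC = {}, {}
for alpha, b0, b1 in itertools.product((0, 1), repeat=3):
    # P_C: Alice outputs alpha for both inputs, Charles outputs alpha^b0
    add_strategy(PC, 1 / 8, (alpha, alpha), b0, b1,
                 (alpha ^ b0, alpha ^ b0))
    # Pbar_C: Alice outputs (alpha^b1, alpha^b1^1), Charles (alpha^b0, alpha^b0^1)
    add_strategy(PbarC, 1 / 8, (alpha ^ b1, alpha ^ b1 ^ 1), b0, b1,
                 (alpha ^ b0, alpha ^ b0 ^ 1))

def PQ(a, b0, b1, c, x, z):
    """Eq. (6), multiplied by P(b) = 1/4."""
    if (x ^ z) == b1:
        return 0.25 * (0.5 if (a ^ c) == b0 else 0.0)
    return 0.25 * 0.25

ok = all(abs((PC.get(k, 0.0) + PbarC.get(k, 0.0)) / 2 - PQ(*k)) < 1e-12
         for k in itertools.product((0, 1), repeat=6))
print(ok)
```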
Let us first define, for a given probability distribution P(abc|xz), the following correlation term between Alice's and Charles's outputs, conditioned on Bob's output:

E_b(xz) = Σ_{a⊕c=b0} P_b(ac|xz) − Σ_{a⊕c≠b0} P_b(ac|xz).   (7)

Inspired by the properties (6) of PQ, we introduce the following combination, which quantifies the high degree of correlation expected between Alice's and Charles's outcomes when x ⊕ z = b1:

I = Σ_b P(b) Σ_{x⊕z=b1} E_b(xz).   (8)

We quantify the deviation from the expected uncorrelated results when x ⊕ z ≠ b1 through

E = max_b max_{x⊕z≠b1} 4 |P(b) E_b(xz)|.   (9)

For PQ, one gets I = 2 and E = 0. Note that there exist bilocal correlations which also reach the value I = 2, for instance the deterministic point defined by α = 00, β = 00, γ = 00, or the correlations PC introduced above. For these correlations, E = 4 and E = 1, respectively. The linear expression I can therefore not be used as a standard Bell inequality to test bilocality. However, as we prove below, the following quadratic inequality

I ≤ 1 + E²   (10)

is satisfied by all bilocal correlations and is violated by the quantum point PQ, since for it we find 2 > 1 + 0. Intuitively, I corresponds to a Bell inequality whose bound is not fixed for the entire bilocal set, but depends on how uncorrelated the outputs are when x ⊕ z ≠ b1, as quantified by E. The noisy point PQ(v) yields I(v) = 2v and E(v) = 0, and thus violates (10) whenever v > 1/2 (see Fig. 1). On the other hand, PQ(v) is bilocal when v ≤ 1/2 [14]. Our inequality thus detects optimally the resistance to noise of the point PQ. Proof of (10). Let P be a bilocal probability distribution with decomposition (4), where α = α0α1, β = β0β1, γ = γ0γ1.
Let us define the following weights (where ᾱ0 = α0 ⊕ 1, and similarly for the other indices; the bar pattern below is reconstructed from the symmetries that the averaging must impose):

q′_{α0α1,β0β1,γ0γ1} = (q_{α0α1,β0β1,γ0γ1} + q_{α0α1,β̄0β1,γ̄0γ̄1} + q_{ᾱ0ᾱ1,β̄0β1,γ0γ1} + q_{ᾱ0ᾱ1,β0β1,γ̄0γ̄1})/4,

q′′_{α0α1,β0β1,γ0γ1} = (q′_{α0α1,β0β1,γ0γ1} + q′_{α0α1,β0β̄1,γ1γ0} + q′_{α1α0,β0β̄1,γ0γ1} + q′_{α1α0,β0β1,γ1γ0})/4.

The "depolarized" correlation P′′ = Σ_{αβγ} q′′_αβγ P_α P_β P_γ is then also bilocal, i.e., q′′_αγ = q′′_α q′′_γ. Moreover, P′′ is such that I′′ = I and E′′ ≤ E. Due to the symmetries imposed through the above equations, the weights q′′_αβγ depend on only 4 parameters, which we choose to be r = q′′_{α0=α1}, s = q′′_{γ0=γ1}, t = q′′_{β0=0|α0α1=00,γ0γ1=00}, and u = q′′_{β0=β1|α0α1=01,γ0γ1=01} (with obvious notations, the weights q′′_αβγ being understood as probabilities [15]). Defining X = rs(2t − 1) and Y = (1 − r)(1 − s)(2u − 1), we find I′′ = 2(X + Y) and E′′ = |X − Y|. From their definitions, X and Y are restricted to satisfy √|X| + √|Y| ≤ √(rs) + √((1 − r)(1 − s)) ≤ 1. One can easily check that under this constraint I′′ ≤ 1 + E′′², which implies (10). Note that for any value of E ≤ 1 the bound is tight, i.e., there exists a bilocal correlation such that I = 1 + E².

FIG. 1: Two-dimensional slice of the correlation space, containing the points PQ, PC, P̄C, and PR defined in the text [this is precisely the slice that contains all the depolarized correlations P′′ introduced in the proof of (10); they can indeed be written as P′′ = X PC + Y P̄C + (1 − X − Y) PR]. The square delimits the local polytope in this slice. All these local correlations can be reproduced in quantum theory with two independent sources. They cannot, however, all be reproduced locally with two independent sources. The four portions of parabola delimit the bilocal set [the upper parabola is obtained from (10); similar constrained Bell-type inequalities can be derived to obtain the lower, left, and right parabolas]. The quantum point PQ enters the bilocal region for a visibility v ≤ 1/2.
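Both the values I = 2, E = 0 quoted for PQ and the "one can easily check" step of the proof lend themselves to direct numerical verification. A sketch (the grid resolution is an arbitrary choice):

```python
import itertools

# (i) I and E of Eqs. (7)-(9) for the noisy point P_Q(v), built from Eq. (6)
def I_and_E(v):
    I, E = 0.0, 0.0
    for b0, b1 in itertools.product((0, 1), repeat=2):
        pb = 0.25                                  # P(b) = 1/4 for all b
        for x, z in itertools.product((0, 1), repeat=2):
            Eb = 0.0
            for a, c in itertools.product((0, 1), repeat=2):
                if (x ^ z) == b1:                  # Eq. (6), correlated branch
                    q = 0.5 if (a ^ c) == b0 else 0.0
                else:                              # Eq. (6), uncorrelated branch
                    q = 0.25
                q = v * q + (1 - v) * 0.25         # white-noise admixture
                Eb += q if (a ^ c) == b0 else -q   # Eq. (7)
            if (x ^ z) == b1:
                I += pb * Eb                       # Eq. (8)
            else:
                E = max(E, 4 * abs(pb * Eb))       # Eq. (9)
    return I, E

I1, E1 = I_and_E(1.0)     # P_Q itself: I = 2, E = 0, so (10) is violated
I0, E0 = I_and_E(0.4)     # below v = 1/2: no violation
print(I1 > 1 + E1 ** 2, I0 > 1 + E0 ** 2)

# (ii) brute-force scan of the last step of the proof:
# 2(X + Y) <= 1 + (X - Y)**2 over the whole parameter range
n = 21
grid = [i / (n - 1) for i in range(n)]
slack = max(2 * (X + Y) - 1 - (X - Y) ** 2
            for r in grid for s in grid for t in grid for u in grid
            for X in [r * s * (2 * t - 1)]
            for Y in [(1 - r) * (1 - s) * (2 * u - 1)])
print(slack <= 1e-12)     # the bound is saturated but never exceeded
```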
Take for instance P′′ with r = s = (1 + E)/2 and t = u = 1. Discussion and open questions. We have shown that if one makes the reasonable assumption, underlying all of modern empirical science, that the world is composed of different parts that are independent for the purposes at hand, then nonlocality is a phenomenon even more common than usually thought. While the standard analysis leads to the conclusion that the final singlet pair in an entanglement swapping experiment needs a visibility higher than v = 1/√2 ≃ 71% to violate the CHSH inequality, and will not violate any Bell inequality (with von Neumann measurements) for visibilities smaller than v ≃ 66% [12], we have shown here that under our assumption it already exhibits nonlocal correlations for visibilities as low as 50%. This simplifies the requirements for the demonstration of quantumness in entanglement swapping experiments [13]. Is this v = 50% limit a fundamental limit? It is easy to show that there exists a bilocal model for visibilities lower than 25% [10]. But what happens in between? Can we lower the visibility threshold by considering experiments with more inputs? Do we gain by letting Bob choose between two or more measurements, e.g., between two Bell state measurements in different bases? Can bilocality be violated for visibilities lower than 33%, corresponding to a final noisy singlet pair that is separable? This last question is not completely trivial at first sight: a setting where the source S1 produces a singlet state and S2 the separable state ρ = (|+z,+z⟩⟨+z,+z| + |+x,−z⟩⟨+x,−z|)/2, where Bob performs a standard Bell measurement, Alice measures in the (σx ± σz)/√2 bases, and Charles always measures in the z basis, generates correlations that are non-bilocal [16]. This shows that a Bell measurement can correlate independent systems in ways that are even more astonishing than one would "quantum naively" think.
From this perspective, it would be interesting to characterize the class of states (including separable states) that are non-bilocal when correlated through Bell measurements. The present work raises many other questions. In particular, the condition (3) can be straightforwardly extended to models with n independent sources. How do such n-local models differ from bilocal ones? How does their tolerance to noise scale with n? Finally, it would be interesting to explore the implications of our approach and findings in the context of quantum information protocols based on nonlocality, in particular protocols that use at their heart measurements on independent systems, such as quantum repeaters and measurement-based computation. We acknowledge support by the Swiss NCCR Quantum Photonics and the European ERC-AG QORE.

Notes
[15] Here r, s, t, u ∈ [0, 1]; the weights q′′_αβγ are then fully defined by these four parameters.
[16] This experiment can be interpreted as a standard test of the CHSH inequality performed on the singlet from S1, where Charles's outcomes specify in which basis (x or z) Bob's particle has been measured. Such an interpretation makes the similarity between our independence assumption and the "free choice" assumption of standard Bell experiments more explicit [10].

References
[1] J. Bell, Speakable and Unspeakable in Quantum Mechanics, 2nd ed. (Cambridge University Press, 2004).
[2] M. Zukowski, A. Zeilinger, M. A. Horne, and A. K. Ekert, Phys. Rev. Lett. 71, 4287 (1993).
[3] N. Gisin and B. Gisin, Phys. Lett. A 297, 279 (2002).
[4] D. M. Greenberger, M. Horne, and A. Zeilinger, Phys. Rev. A 78, 022110 (2008); D. M. Greenberger, M. Horne, A. Zeilinger, and M. Zukowski, Phys. Rev. A 78, 022111 (2008).
[5] R. Cleve and H. Buhrman, Phys. Rev. A 56, 1201 (1997).
[6] A. Acin, N. Brunner, N. Gisin, S. Massar, S. Pironio, and V. Scarani, Phys. Rev. Lett. 98, 230501 (2007).
[7] H.-J. Briegel, W. Dür, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. 81, 5932 (1998).
[8] R. Raussendorf and H. J. Briegel, Phys. Rev. Lett. 86, 5188 (2001).
[9] G. Weihs, T. Jennewein, C. Simon, H. Weinfurter, and A. Zeilinger, Phys. Rev. Lett. 81, 5039 (1998).
[10] C. Branciard, N. Gisin, and S. Pironio, in preparation.
[11] J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. 23, 880 (1969).
[12] A. Acín, N. Gisin, and B. Toner, Phys. Rev. A 73, 062105 (2006).
[13] M. Halder, A. Beveratos, N. Gisin, V. Scarani, C. Simon, and H. Zbinden, Nat. Phys. 3, 692 (2007).
[14] PQ(v) can be decomposed, using the notation of the proof of (10), as P′′ with r = s = 1/2 and t = u = 1/2 + v.
Characterizing the nonlocal correlations of particles that never interacted
C. Branciard, N. Gisin, and S. Pironio
Group of Applied Physics, University of Geneva, 20 rue de l'Ecole-de-Médecine, CH-1211 Geneva 4, Switzerland
arXiv:0911.1314; DOI: 10.1103/physrevlett.104.170401

Abstract: Quantum systems that have never interacted can become nonlocally correlated through a process called entanglement swapping. To characterize nonlocality in this context, we introduce local models where quantum systems that are initially uncorrelated are described by uncorrelated local variables. While a pair of maximally entangled qubits prepared in the usual way (i.e., emitted from a common source) requires a visibility close to 70% to violate a Bell inequality, we show that an entangled pair generated through entanglement swapping will already violate a Bell inequality for visibilities as low as 50% under our assumption.
Semi-analytical technique for the design of disordered coatings with tailored optical properties

Rishi Bhrigu Mishra, Nithin Jo Varghese, and Karthik Sasihithlu*
Department of Energy Science and Engineering, Indian Institute of Technology Bombay
*[email protected]

Disordered media coatings are finding increasing use in applications such as day-time radiative cooling paints and solar thermal absorber plate coatings, which require tailored optical properties over a broad spectrum ranging from visible to far-IR wavelengths. Both monodisperse and polydisperse configurations with coating thicknesses up to 500 μm are currently being explored for use in these applications. In such cases it becomes increasingly important to explore the utility of analytical and semi-analytical methods for the design of such coatings, to help reduce the computational cost and time for design. While well-known analytical methods such as Kubelka-Munk and four-flux theory have previously been used for the analysis of disordered coatings, their utility has so far been assessed in the literature either for the solar spectrum or for the IR, but not simultaneously over the combined spectrum as required for the above applications. In this work we analyse the applicability of these two analytical methods for such coatings over the entire wavelength range from visible to IR and, based on the observed deviation from exact numerical simulations, we propose a semi-analytical technique to aid in the design of these coatings with significant computational cost savings.
Introduction

Disordered coatings, which consist of dielectric/metal nanoparticles dispersed randomly in a matrix, find application in several fields such as solar thermal absorber coatings [1], solar reflecting coatings [2], color paints [3], translucent paints [4], tissue optics [5], daytime passive radiative cooling coatings [6-8], and many more. The main advantages that such disordered media offer, making them an attractive proposition for these applications, are their cost-effective means of fabrication and the tunability of the desired optical properties of the coating, since the spectral position of the Mie (plasmon) resonance of the embedded dielectric (metal) particles depends strongly on the size of the particles. The main challenging task in the design of such disordered media is the modelling of their optical properties. Techniques based on homogenization of the composite structure that predict an effective permittivity and permeability of the disordered medium, such as Maxwell-Garnett theory [9] and Bruggeman's model [10], are valid only when the particle sizes are much smaller than the incident wavelength [11]. Doyle et al. [12] showed that the use of Mie coefficients in this effective medium theory provides good accuracy in the calculation of the effective optical properties of metal spheres suspended in a polymer. However, the theory predicts absorption for a non-absorbing particle [11] and thus cannot be used to predict the effective refractive index of disordered media for solar reflecting paints/coatings, where non-absorbing particles are utilized. Other analytical techniques developed for this objective include those which consider the diffusion of photons [13], and those which solve the radiative transfer equation under N-flux (2 ≤ N ≤ 4) approximations [14,15]. Of these methods, Kubelka-Munk (KM) theory [16] (for which N = 2) and the four-flux (FF) method [17] are commonly used.
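For reference, the Maxwell-Garnett mixing rule mentioned above has a simple closed form for spherical inclusions. The sketch below (the permittivity values are illustrative, not taken from this paper) shows it interpolating between the pure-matrix and pure-particle limits:

```python
# Maxwell-Garnett mixing rule for the effective permittivity of spherical
# inclusions (permittivity eps_p) at volume fraction f in a matrix
# (permittivity eps_m).  Valid only for particles much smaller than the
# wavelength, as noted in the text.

def maxwell_garnett(eps_p, eps_m, f):
    """Effective permittivity from the Maxwell-Garnett formula."""
    num = eps_p + 2 * eps_m + 2 * f * (eps_p - eps_m)
    den = eps_p + 2 * eps_m - f * (eps_p - eps_m)
    return eps_m * num / den

# sanity checks: the rule interpolates between the two pure phases
eps_m, eps_p = 1.5 ** 2, 2.4 ** 2   # illustrative: polymer-like matrix, high-index particle
print(maxwell_garnett(eps_p, eps_m, 0.0))   # reduces to eps_m
print(maxwell_garnett(eps_p, eps_m, 1.0))   # reduces to eps_p
```

Note that, as the text points out, this homogenization carries no information about particle size, which is precisely why it cannot capture the size-tuned Mie scattering exploited in these coatings.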
In addition, simulation techniques such as the Monte Carlo method [18] and exact electromagnetic solvers are employed to model the optical/radiative properties of disordered coatings. However, these simulation techniques do not present a clear picture linking the microscopic properties of the particles, such as the scattering and absorption coefficients, to the macroscopic optical properties of the coating. Moreover, exact electromagnetic solvers, which solve Maxwell's equations numerically to obtain the radiative properties of the coating, put a premium on computational resources and design time when the thickness of the random medium is of the order of tens or hundreds of microns, as is currently being deployed in these applications. Particularly when several parameters of the configuration are in play, as encountered in disordered media, analytical techniques such as the KM and FF theories provide an important means to arrive at an optimum combination of the parameters with minimal computational resources, while also explicitly linking the properties of the micro-constituents to the observed optical properties of the coating. KM and FF theories have so far been used in the literature in applications where the spectrum of interest is limited to either the visible spectrum or the IR separately. For example, KM theory has been used extensively in paints and coatings [1,3], the paper industry [19], and tissue optics [5], among others. Similarly, the FF method has been used extensively by researchers to model, predict, and optimize the optical properties of light-scattering coatings [2,20-22]. However, the applicability of these theories over a broad spectrum covering both the visible and IR spectra simultaneously has not been a subject of attention.
This becomes important when designing coatings for applications such as day-time passive radiative cooling and solar thermal absorber plate coatings, where tailored optical properties over a broad spectrum covering both the solar spectrum and the far-infrared are crucial. For example, coatings for day-time passive radiative cooling [23] require high reflectivity in the solar spectrum (0.3-3 μm wavelength range) and high emissivity in the infrared spectrum (5-15 μm wavelength range). It is not obvious that the analytical techniques retain their accuracy over such a broad spectrum, since with increasing wavelength there is a possibility that the nature of scattering transitions from the independent scattering regime (where the scattering cross-sections of the particles can be superposed) to the dependent scattering regime (where near-field effects and interference between far-field scattered radiation become important). The previously reported relation [24] demarcating the two regimes was obtained from experimental observations carried out in the visible spectrum only. There is thus a need to explore the applicability of these analytical techniques over a broad spectrum in greater depth. In regimes where the predictions of these analytical techniques are not satisfactory, other methods of design which combine the accuracy of exact electromagnetic solvers with the minimal computational requirements of analytical methods are expected to be of pressing need to researchers interested in designing such coatings. With this in mind, the manuscript is arranged as follows.
In Section 2 we compare the reflectivity and emissivity predictions of the KM and FF techniques with results from exact numerical solvers for different degrees of absorption in the particles (imaginary index of the particle κp = 0.0, 10^-2, and 10^-1) and in the matrix (imaginary index of the matrix κh = 0.0, 10^-4, and 10^-2), and for different thicknesses of the coating (10 μm and 50 μm), in the wavelength range 0.3-15 μm. We show that these techniques are accurate over the entire spectrum when the particles are in the limit of independent scattering and under low-absorption conditions, but fail when the volume fraction of particles is high enough that interaction among particles is no longer negligible, or when absorption in the matrix/particles is high. For conditions where the analytical techniques fail to predict the optical properties accurately, we propose an alternative technique which combines the use of an exact numerical solver and KM theory, and which we show can predict the optical properties accurately and with minimal computational requirements. This 'semi-analytical' technique is detailed in Section 3. In the end, as an example to showcase the applicability of this semi-analytical technique, we predict the properties of a disordered coating suitable for passive radiative cooling and compare these with experimental measurements previously reported in the literature.

Analytical techniques - Kubelka-Munk (KM) and the four-flux (FF) methods

We start with the expressions for the reflectivity and transmissivity of the coating as obtained from KM and FF theories, which we use in this work to analyze the optical properties of the disordered coating. Detailed derivations of these expressions can be found in several references [17,25,26]. The optical coating considered in this work is a plane-parallel slab of particulate composite on a substrate, as shown in Fig. 1. The composite is considered to be of finite thickness and infinite extent in the lateral direction.
The randomly distributed spherical particles embedded within the host medium (also called the matrix) act as inhomogeneities for the propagating EM wave, causing its scattering (and absorption, in case the particle is lossy). The objective is to predict the optical properties of this coating, including the total reflectance, transmittance, and absorption. The expressions for the reflectivity (R_KM) and transmissivity (T_KM) from KM theory are given by [3,25]:

R_KM = (1 − β)(1 + β)[exp(αd) − exp(−αd)] / [(1 + β)² exp(αd) − (1 − β)² exp(−αd)]   (1)

T_KM = 4β / [(1 + β)² exp(αd) − (1 − β)² exp(−αd)]   (2)

where d is the thickness of the layer, and the coefficients β and α are given by [3,27]:

β = √[K/(K + 2S)];   α = √[K(K + 2S)]   (3)

with

K = 2k;   S = (3/4) s (1 − g) − (1/4) k   (4)

and the factors s and k obtained using Mie theory [1]:

s = 3 f Q_sca/(4r);   k = 3 f Q_abs/(4r)   (5)

where f is the volume fraction, r is the radius of the sphere, Q_sca (Q_abs) is the Mie scattering (absorption) efficiency of a single particle embedded in a host medium of index n_h, and g is the asymmetry parameter. Expressions for Q_sca, Q_abs, and g in terms of standard Mie coefficients can be found in Ref. [28]. It should be pointed out that the relations between the coating properties K and S and the particle properties s and k given in Eqs. (3)-(4) are not unique; several other relations [4,5,29-33] have been proposed over the years. The expressions in Eqs. (3) and (4), taken from Refs. [3,27], are representative and have been chosen for demonstrative reasons. As we will see in Sec. 3, the semi-analytical method being proposed in this work does not depend on such relations, and hence they do not affect the central results of this work. In the limit of low absorption, K → 0, the reflectivity in Eq. (1) can be shown to reduce to [25]:

R_KM = Sd/(Sd + 1)   (6)

It must be noted that Eqs. (1) and (2) do not take into account the surface reflection of incident radiation at interface (1).
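As an internal-consistency check of these closed forms as reconstructed here, the sketch below (symbol and function names are this sketch's own, and the inputs are illustrative) evaluates Eqs. (1)-(5) and verifies the low-absorption limit (6):

```python
import math

# Sketch of the KM closed-form expressions (1)-(5), with a check of the
# low-absorption limit (6): R_KM -> S d / (S d + 1) as K -> 0.
# Units are arbitrary but consistent (e.g. coefficients in 1/um, d in um).

def km_coeffs(Q_sca, Q_abs, g, f, r):
    """Transport factors s, k (Eq. 5) and KM coefficients S, K (Eq. 4)."""
    s = 3 * f * Q_sca / (4 * r)
    k = 3 * f * Q_abs / (4 * r)
    S = 0.75 * s * (1 - g) - 0.25 * k
    K = 2 * k
    return S, K

def km_RT(S, K, d):
    """Layer reflectivity and transmissivity, Eqs. (1)-(3)."""
    beta = math.sqrt(K / (K + 2 * S))
    alpha = math.sqrt(K * (K + 2 * S))
    ep, em = math.exp(alpha * d), math.exp(-alpha * d)
    den = (1 + beta) ** 2 * ep - (1 - beta) ** 2 * em
    R = (1 - beta) * (1 + beta) * (ep - em) / den
    T = 4 * beta / den
    return R, T

# low-absorption limit: K << S  =>  R ~ S d / (S d + 1), Eq. (6)
S, d = 2.0, 3.0
R, T = km_RT(S, 1e-8, d)
print(R, S * d / (S * d + 1))    # the two should nearly coincide

# with finite absorption, energy is absorbed: R + T < 1
R2, T2 = km_RT(S, 0.5, d)
print(R2 + T2 < 1.0)

# end-to-end use with illustrative Mie inputs (Q_sca, Q_abs, g, f, r)
S0, K0 = km_coeffs(1.2, 0.01, 0.6, 0.05, 0.25)
R_mie, T_mie = km_RT(S0, K0, 50.0)
```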
Modified reflectance R0 and transmittance T0, which take into account the surface-reflection correction, are calculated using [34]:

R0 = Rc + (1 − Rc)(1 − Ri) R_KM / (1 − Ri R_KM);   T0 = (1 − Rc) T_KM / (1 − Ri R_KM)   (7)

where Rc is the specular reflectance of incident light obtained from Fresnel reflection, which for normal incidence from a medium of index n_surr reads:

Rc = [(n − 1)/(n + 1)]²   (8)

with n = n_h/n_surr, and Ri is the diffuse reflectance of internal radiation at interface (1), marked in Fig. 1, which is calculated using:

Ri = 2 ∫₀^{π/2} R(θ) sin θ cos θ dθ   (9)

where, from Fresnel's coefficients:

R(θ) = (1/2) { [(√(n² − sin²θ) − cos θ)/(√(n² − sin²θ) + cos θ)]² + [(n² cos θ − √(n² − sin²θ))/(n² cos θ + √(n² − sin²θ))]² }   (10)

The expression for Ri from Eq. (9) can be used even in the limit of low diffuse scattering, since the contribution from the product Ri R_KM will be negligible in this regime. Many configurations developed for radiative cooling applications [7,35,36] and solar absorber plates [37,38] involve the use of a substrate. In the presence of a substrate, the net reflectance and transmittance from Eq. (7) have to be further modified as [39]:

R = R0 + T0² Rg / (1 − R0 Rg);   T = (1 − Rg) T0 / (1 − R0 Rg)   (11)

Here Rg is the diffuse reflectance at interface (2), obtained from Eq. (9) with n = n_h/n_g. The substrate index is taken to be 1.5 in this work. The derivation of the reflection and transmission coefficients in KM theory assumes that the incident light is diffuse. When the incident radiation is collimated, alternate methods such as the four-flux theory, which take into account the propagation of both collimated and diffuse radiation across the interfaces in two directions, are expected to be more accurate. This careful consideration of both collimated and diffuse components leads to expressions for the optical properties that are far more complicated than in KM theory.
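The interface corrections can be sketched in the same way. The snippet below implements Eqs. (7)-(11) (the function names and the midpoint quadrature for Eq. (9) are this sketch's own choices) and checks the index-matched limits, where all corrections must vanish:

```python
import math

# Sketch of the interface corrections, Eqs. (7)-(11): Fresnel factors Rc and
# R(theta), the diffuse integral (9) via a midpoint rule, and the surface (7)
# and substrate (11) combinations.

def R_theta(n, th):
    """Unpolarized Fresnel reflectance, Eq. (10), for relative index n >= 1."""
    root = math.sqrt(n * n - math.sin(th) ** 2)
    rs = ((root - math.cos(th)) / (root + math.cos(th))) ** 2
    rp = ((n * n * math.cos(th) - root) / (n * n * math.cos(th) + root)) ** 2
    return 0.5 * (rs + rp)

def R_diffuse(n, m=2000):
    """Hemispherically averaged reflectance, Eq. (9), by midpoint quadrature."""
    h = (math.pi / 2) / m
    return 2 * sum(R_theta(n, (i + 0.5) * h) * math.sin((i + 0.5) * h)
                   * math.cos((i + 0.5) * h) * h for i in range(m))

def corrected_RT(R_km, T_km, n):
    """Surface-corrected R0, T0 of Eq. (7), with Rc from Eq. (8)."""
    Rc = ((n - 1) / (n + 1)) ** 2
    Ri = R_diffuse(n)
    R0 = Rc + (1 - Rc) * (1 - Ri) * R_km / (1 - Ri * R_km)
    T0 = (1 - Rc) * T_km / (1 - Ri * R_km)
    return R0, T0

def with_substrate(R0, T0, Rg):
    """Net R, T above a substrate of diffuse reflectance Rg, Eq. (11)."""
    R = R0 + T0 ** 2 * Rg / (1 - R0 * Rg)
    T = (1 - Rg) * T0 / (1 - R0 * Rg)
    return R, T

# index-matched interface (n = 1): the corrections disappear
R0, T0 = corrected_RT(0.6, 0.3, 1.0)   # 0.6, 0.3 are illustrative layer values
print(R0, T0)                          # stays close to 0.6, 0.3

# index-matched substrate (Rg = 0, as when n_h = n_g): Eq. (11) reduces to R0, T0
print(with_substrate(R0, T0, 0.0))
```

Eq. (11) is just the closed form of the geometric series of multiple reflections between the coating and the substrate, which is why Rg = 0 collapses it back to Eq. (7).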
The net reflection and transmission coefficients for fully collimated incident radiation can be expressed as a sum over the collimated-collimated reflectivity (R_cc), collimated-diffuse reflectivity (R_cd), diffuse-diffuse reflectivity (R_dd), and the corresponding transmissivities:

R = R_cc + R_cd + R_dd;   T = T_cc + T_cd + T_dd   (12)

Expressions for R_cc, R_cd, T_cc, and T_cd are quite elaborate and have been included in the supplementary document (Section S1) for reference. In Sections 2.1, 2.2, and 2.3 we use the expressions for R and T given in Eq. 11 for KM theory and Eq. 12 for FF theory to predict the optical properties of disordered coatings and compare these with the results obtained from the Lumerical FDTD solver [40]. We analyze situations where the particles and the host medium are absorbing as well as non-absorbing, and also consider the effect of different coating thicknesses. The degree of absorption in the particles considered in this work is relevant for dielectric inclusions typically included in coatings for radiative cooling and solar thermal applications. In addition, to facilitate the parametric study, we assume a non-dispersive refractive index for both the particles and the host matrix. We first confine our analysis to the independent scattering regime in Sections 2.1 and 2.2, and extend the analysis to the dependent scattering regime in Sec. 2.3. The FDTD simulations were set up in ANSYS Lumerical. Periodic boundary conditions were applied in the lateral x and y directions, and the coating is illuminated with a plane wave source from the z direction. A mesh size of 30 nm was used, which we find is sufficient for convergence (the mesh convergence study is shown in supplementary Fig. S3).

2.1.
Comparison of predictions from KM, FF theories and FDTD solver in the independent scattering regime for monodisperse inclusions

We compare predictions with and without absorption in the particles and in the host medium, keeping the other parameters r = 0.25 μm, f = 0.05, n_h = 1.5 fixed. It is observed that, particularly for smaller coating thicknesses and in the absence of absorption, the predictions from the FF method deviate significantly from the FDTD simulations compared to the KM method, in both the visible and the IR spectrum. However, for larger coating thicknesses and in the presence of absorption in the particles, FF is relatively more accurate than the KM method across the spectrum, more so at the higher wavelengths. In the presence of an absorbing host medium, the expressions for β and α in Eqs. 1 and 2 need to be modified to account for absorption in the matrix [20,22] as: β = √(K/(K + 2S)) and α = √(K(K + 2S)), where K = β_abs + (1 - f)α_h. Here α_h = 4π k_h/λ, with k_h the imaginary part of the refractive index of the matrix and λ the wavelength in vacuum. In addition, the expressions for Q_sca and Q_abs in Eq. 5 need to be modified as shown by Mishchenko et al. [41]. Figures 3a and 3b show the comparison between KM, FF, and FDTD results for the case when the host medium is weakly absorbing with k_h = 10⁻⁴, and Figs. 3c and 3d show the corresponding comparison when it is more strongly absorbing with k_h = 10⁻². In the presence of a weakly absorbing matrix and for smaller coating thicknesses, FF is again observed to deviate significantly from the FDTD simulations. As absorption increases, we observe significant deviation from the FDTD results in both the FF and KM theories, particularly at the higher wavelengths.
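The absorbing-host modification changes only K; a minimal sketch (Python), again assuming that the particle contribution to K is β_abs:

```python
import numpy as np

def km_K_absorbing_host(beta_abs, f, k_h, lam):
    """Modified K for an absorbing matrix: K = beta_abs + (1 - f) * alpha_h,
    where alpha_h = 4*pi*k_h/lam is the matrix absorption coefficient
    (k_h: imaginary index of the matrix, lam: vacuum wavelength)."""
    alpha_h = 4.0 * np.pi * k_h / lam
    return beta_abs + (1.0 - f) * alpha_h
```

For k_h = 0 this reduces to the non-absorbing-host value, so the same KM machinery can be reused with the modified K.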
2.2. Comparison of predictions from KM, FF theories and FDTD solver in the independent scattering regime for polydisperse inclusions with and without absorption in particles

In this section we explore the predictive capability of the KM and FF theories for a polydisperse medium, which consists of randomly positioned particles of different radii. The study is motivated by the observation that synthesis of nanoparticles via various methods, such as sol-gel [42], microemulsion [43], and hydrothermal [44] routes, results in a polydisperse size distribution of particles. Moreover, some recent studies [45,46] have deliberately adopted coatings with different particle size distributions, exploiting the size-dependent scattering of particles to obtain wavelength-selective coatings. Such a particulate medium can be analyzed by considering the particles to be distributed about a mean radius r̄ with standard deviation σ, with the expressions for β_sca and β_abs to be used in Eq. 5 obtained by summing over the respective coefficients for the individual particle volume fractions [22] as:

β_sca = Σᵢ₌₁ᴺ β_sca,i;   β_abs = Σᵢ₌₁ᴺ β_abs,i   (13)

where β_sca,i and β_abs,i are the Mie scattering and absorption coefficients, respectively, of the particle with fill fraction fᵢ. Equation (13) can also be used to calculate β_sca and β_abs when two or more types of particles (with different refractive indices) are present in the matrix. For demonstration we consider a Gaussian distribution of spherical particles about mean radius r̄ = 0.25 μm with standard deviation σ = 0.016 μm, with and without absorption in the particles. The particle size distribution curve is shown in Fig. S2. Figures 4a and 4b show the comparison between KM, FF, and FDTD results for the case when the particles are non-absorbing, and Figs. 4c and 4d show the corresponding comparison when the particles are absorbing with k_p = 10⁻². Other parameter values are retained from the case of the monodisperse particulate coating. The observations follow the trend seen for the monodisperse coating, with significant deviations observed in the predictions of the FF method for smaller coating thicknesses and when the particles are non-absorbing.
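Eq. 13 amounts to a bin-by-bin sum of the monodisperse coefficients of Eq. 5; a short sketch (Python; the discretized Gaussian weighting and the efficiency values are illustrative placeholders, not actual Mie results):

```python
import numpy as np

def polydisperse_coefficients(radii, f_bins, Q_sca, Q_abs):
    """Eq. 13: total scattering/absorption coefficients as sums of the
    per-bin Eq. 5 coefficients, bin i carrying fill fraction f_i."""
    radii, f_bins = np.asarray(radii), np.asarray(f_bins)
    beta_sca = np.sum(3.0 * f_bins * np.asarray(Q_sca) / (4.0 * radii))
    beta_abs = np.sum(3.0 * f_bins * np.asarray(Q_abs) / (4.0 * radii))
    return beta_sca, beta_abs

# Gaussian size distribution about r_mean = 0.25 um, sigma = 0.016 um,
# discretized into bins whose fill fractions sum to the total f = 0.05.
r = np.linspace(0.2e-6, 0.3e-6, 21)
w = np.exp(-0.5 * ((r - 0.25e-6) / 0.016e-6) ** 2)
f_i = 0.05 * w / w.sum()
```

With a single bin the sum collapses to the monodisperse expressions of Eq. 5, which provides a direct consistency check.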
For larger thicknesses of the coating and in the presence of absorption, both FF and KM are observed to predict the optical properties with reasonable accuracy across the spectrum.

2.3. Comparison of predictions from KM, FF and FDTD solvers in the dependent scattering regime

So far we have analyzed situations where the fill fraction of particles in the composite is small enough that the particles can be assumed to scatter independently of one another. However, as the fill fraction of particles increases, there is a transition to the dependent scattering regime, where both the near-field interactions between the particles and the far-field interference between the fields scattered by individual particles have a significant impact on the overall properties of the coating. Hottel [24] empirically determined this transition to occur when f > 0.27 and λ/c > 0.3, where c is the mean inter-particle spacing and λ is the wavelength. Several coatings reported in the literature [7,46-49] have fill fractions in the range 0.1-0.6, where such effects cannot be neglected. We thus explore here the predictive capability of the FF and KM theories for such coatings by considering a monodisperse distribution of particles with an increased fill fraction f = 0.3, while retaining the other parameter values used for Fig. 2b. This comparison is shown in Fig. 5, where we observe that the predictions from both FF and KM theories deviate significantly from the FDTD simulations across the spectrum, and thus cannot be relied on for predicting the optical properties of such coatings. A comparison between the weighted averages of the optical properties across the spectrum as predicted by the KM and FF theories for the different cases considered so far is tabulated in Table 1. The weighted averages are calculated as R_solar = ∫ I_AM1.5(λ) R(λ) dλ / ∫ I_AM1.5(λ) dλ, where I_AM1.5(λ) is the spectral solar irradiance [50], and ε_IR = ∫ I_BB(λ) ε(λ) dλ / ∫ I_BB(λ) dλ, where I_BB(λ) is the blackbody irradiance. For the relevant applications in consideration for this study, i.e.
coatings suitable for radiative cooling and for use in solar thermal absorber plates, the reflectance over the solar spectrum (wavelength range 0.3-3 μm) and the emissivity over the infrared spectrum (wavelength range 5-15 μm) are of primary importance, and the weighted averages over these spectral ranges are reported in Table 1, along with the deviation from the FDTD simulations expressed as a % error in brackets.

3. Semi-analytical method

The comparisons with FDTD simulations shown in Section 2 demonstrate the failure of the KM and FF analytical methods in configurations where dependent scattering is not negligible and where the matrix or particles are absorbing. This failure can be attributed to the actual scattering and absorption coefficients of these coatings diverging from the values calculated using the Mie scattering coefficients of the individual particles. At present, no single analytical technique exists that can correctly predict the optical properties of particulate media in the presence of dependent scattering effects while also correctly accounting for absorption in the matrix or particles. One can then resort to exact numerical solvers to accurately estimate the optical properties of the coating in such cases. However, as Fig. 6 shows, the computational time required to simulate such structures increases exponentially with the thickness of the coating. For coatings with thicknesses in the range 100-500 μm, which are currently being adopted in the literature for radiative cooling applications [6,8,36,46,49], the design time is clearly prohibitive. In such cases it becomes imperative to develop alternative techniques that combine the accuracy of exact FDTD solvers with the simplicity and minimal computational requirements of the analytical techniques.
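The spectral weighted averages R_solar and ε_IR defined at the end of Sec. 2.3 can be computed directly; the sketch below (Python) uses a Planck blackbody weight for ε_IR (the 300 K temperature and the toy emissivity spectrum are illustrative assumptions; a tabulated AM1.5 spectrum would be handled the same way):

```python
import numpy as np

def planck_irradiance(lam, T=300.0):
    """Blackbody spectral irradiance vs. wavelength (overall scale is
    irrelevant, since it cancels in the weighted average)."""
    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
    return 1.0 / (lam**5 * (np.exp(h * c / (lam * kB * T)) - 1.0))

def weighted_average(lam, x, weight):
    """Spectral weighted average int(w x dlam) / int(w dlam),
    evaluated with the trapezoid rule."""
    num = np.sum(0.5 * (weight[1:] * x[1:] + weight[:-1] * x[:-1]) * np.diff(lam))
    den = np.sum(0.5 * (weight[1:] + weight[:-1]) * np.diff(lam))
    return num / den

lam = np.linspace(5e-6, 15e-6, 200)        # 5-15 um IR window
eps = np.where(lam < 10e-6, 0.9, 0.3)      # toy emissivity spectrum
eps_IR = weighted_average(lam, eps, planck_irradiance(lam))
```

A constant spectrum averages to itself, and any weighted average necessarily lies between the spectral extremes, which bounds the result here.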
Particularly when multiple parameters are involved in the design, as is the case for disordered media, such a method will prove useful in reducing the design time needed to find the optimum combination of parameters for the required optical properties of the coating. To obtain a better estimate of the absorption and scattering coefficients of media in which dependent scattering effects are non-negligible, researchers have previously [27,51-53] relied on experimental measurements of the optical properties of a fabricated coating and then used the KM theory results from Section 2 to extract the required coefficients. Instead of relying on experimental measurements, which are not always feasible, especially at the initial stage of design, we modify this technique and propose the following two-step semi-analytical method to estimate the optical properties of a random medium of thickness L when the use of exact numerical solvers to simulate the properties of such a thick coating is prohibitive.

• Step 1: Use a numerical solver to obtain the optical properties R and T of a similar coating but with a much smaller thickness L₀, and extract the S and K parameters by inverting Eqs. 1 and 2. Care must be taken at this step to ensure that the configuration set up in the solver considers the incident light to be in the same medium as the matrix, i.e., n_surr = n_h, so that reflections from surfaces and substrates are not included in this step. In case the host matrix is absorbing, only the real part is considered, i.e., n_surr = Re(n_h). Care must also be taken to ensure that, when the scattering efficiency of the particles is high, the value of L₀ is chosen such that L₀ ≳ l_sca, where l_sca ≈ 1/(ρσ_sca) is the scattering mean free path, with ρ the particle number density and σ_sca the scattering cross section.
At the other limit, when the scattering efficiency is low, the optical properties of the coating are primarily determined by surface reflection and transmission, which are accounted for in Step 2. Thus the choice of L₀ is determined from the scattering mean free path calculated in the high-scattering regime.

• Step 2: From the S and K parameters extracted in Step 1, use the analytical expressions from KM theory, i.e. Eqs. 1, 2, 7 and 11, to predict the optical properties of the coating of the required thickness L. Specular reflection at the surfaces as well as at the substrate is accounted for here.

A more elaborate procedure, along with details of a supporting convergence test which may need to be incorporated in some cases to arrive at the value of the thickness L₀, is included in Section S4 of the supplementary material. We now apply this technique to the cases considered in Section 2 where the predictions from the analytical methods deviated significantly from those of the FDTD solver, such as in the dependent scattering regime, as well as when the absorption in the particles or host matrix is significant. Figure 7 (Fig. 8) shows the comparison between the predictions from the semi-analytical technique and from FDTD simulations when the absorption in the particles (host matrix) is varied. In both cases the semi-analytical technique uses the results of exact FDTD simulations of a 10 μm thick coating to predict the optical properties of a larger 50 μm thick coating. A volume fill fraction f = 0.3, for which dependent scattering effects are known to be dominant, is maintained in both cases, while the other parameter values are the same as those analyzed for the monodisperse case of Sec. 2.1. For these cases we observe a close match between the predictions of the semi-analytical method and the FDTD results over the entire spectrum, with only a slight deviation observed at the higher wavelengths when absorption is high.
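The two steps above can be sketched as follows (Python). Step 1 uses the closed-form inversion of the KM equations due to Kubelka [25], with a = (1 + R² - T²)/(2R) and b = √(a² - 1); Step 2 re-evaluates Eqs. 1-2 at the target thickness. In practice R and T at L₀ come from the FDTD run; here they are fabricated from known S and K to check the round trip:

```python
import numpy as np

def km_RT(S, K, L):
    """Eqs. 1-3: KM reflectivity/transmissivity of a layer of thickness L."""
    alpha = np.sqrt(K * (K + 2.0 * S))
    beta = np.sqrt(K / (K + 2.0 * S))
    ep, em = np.exp(alpha * L), np.exp(-alpha * L)
    denom = (1.0 + beta) ** 2 * ep - (1.0 - beta) ** 2 * em
    return (1.0 - beta**2) * (ep - em) / denom, 4.0 * beta / denom

def invert_km(R, T, L0):
    """Step 1: recover S and K from (R, T) of a thin slab of thickness L0
    via Kubelka's hyperbolic solution [25]."""
    a = (1.0 + R**2 - T**2) / (2.0 * R)
    b = np.sqrt(a**2 - 1.0)
    S = np.arctanh(b * R / (1.0 - a * R)) / (b * L0)
    return S, S * (a - 1.0)  # K = S (a - 1)

def predict_thick(R0, T0, L0, L):
    """Step 2: predict R, T of the thick coating from the thin-slab data."""
    S, K = invert_km(R0, T0, L0)
    return km_RT(S, K, L)
```

The inversion is exact for KM-consistent data, so generating (R, T) at L₀ from known S and K and inverting recovers them to machine precision.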
The weighted-average reflectivity of the coating over the solar spectrum and the emissivity over the infrared spectrum for the cases considered in Figs. 7 and 8 are listed in Table 2, along with the deviation from the FDTD simulations expressed as a % error in brackets.

4. Comparison with experimental data

We now apply the semi-analytical technique described in Section 3 to predict the optical properties of fabricated coatings reported in the literature that have been designed for radiative cooling. We choose two such disordered coatings for which dependent scattering is expected to be dominant, so that the analytical techniques are not applicable, and for which the thickness of the coating prohibits the use of exact electromagnetic solvers to predict the optical properties to good accuracy. In Ref. [48], a hierarchically porous polymer (P(VdF-HFP)) coating of thickness 300 μm, containing air voids with sizes ranging from 0.05-5 μm in a P(VdF-HFP) matrix, was fabricated and experimentally characterized to have a solar reflectivity of 0.96 and an emissivity of 0.97 in the 8-13 μm wavelength range. To apply the semi-analytical technique to this coating, we set up a simulation in the FDTD solver with a smaller coating thickness L₀ = 50 μm (determined using the convergence test explained in Section S4 of the supplementary material). This thickness is chosen to ensure a sufficient number of the larger air voids (r ≈ 2.5 μm) in the P(VdF-HFP) matrix. The size distribution of the nano- and micro-sized air voids used in the simulation is given in the supplementary material (Fig. S4). Refractive index data for P(VdF-HFP) are extracted from Ref. [48]. The reflectivity in the wavelength range 0.3-16 μm, predicted using the semi-analytical method for a thickness L = 300 μm, is compared with that reported in Ref. [48] in Fig. 9a.
While an appreciable match is observed in the predicted values across the spectrum, the small deviations in the reflectivity values can be attributed to our inability to incorporate in ANSYS Lumerical the exact size distribution of both the micro and nano voids present in the fabricated structure. In Ref. [46], an ultrawhite BaSO4 film of thickness 400 μm was developed with a 60% volume fraction of BaSO4 nanoparticles, and was characterized to have a reflectivity of 0.976 in the solar spectrum and an emissivity of 0.96 in the 8-13 μm wavelength range. To apply the semi-analytical technique to this coating, we set up a simulation in the FDTD solver with a structure thickness L₀ = 15 μm and BaSO4 spherical particles randomly distributed with a volume fraction of 60%. The particles are taken to have a uniform size distribution with diameters spread over the range 398 ± 130 nm, to match that reported in Ref. [46]. The matrix is taken to be air for the BaSO4 film. Refractive index data for BaSO4 are extracted from Ref. [54]. The emissivity in the wavelength range 0.3-16 μm, predicted using the semi-analytical method for a thickness L = 400 μm, is compared with that reported in Ref. [46] in Fig. 9b. While we again observe an appreciable match across the spectrum, the deviation observed particularly around a wavelength of 2 μm is suspected to be due to the difference between the refractive index of the fabricated film and that calculated from first principles in Ref. [54].

5. Conclusion

In this study we have analyzed the applicability of the well-known analytical techniques of KM and FF theories to predict the optical properties of a disordered metamaterial coating over a broad spectrum ranging from 300 nm to 15 μm. Recent advances in the use of disordered coatings in applications such as radiative cooling and solar thermal absorber plates, which require tailored optical properties over this wavelength range, necessitate such a study.
Based on the deviations observed between the predictions of these analytical techniques and the exact FDTD solver in the dependent scattering regime, a two-step semi-analytical technique has been proposed which can be used to predict the optical properties of such coatings with good accuracy and minimal computational resources. Such a method is expected to be valuable for designing coatings with specific optical properties, where several parameter combinations need to be investigated to arrive at an optimal combination. The small deviations observed when absorption in the host matrix is high warrant further research to improve this technique.

Fig. 1. Schematic of the coating considered in this work with incident plane wave source.

Fig. 2. Reflectivity and transmissivity spectra for n_h = 1.5, r = 0.25 μm, f = 0.05, and (a) n_p = 2.5, L = 10 μm; (b) n_p = 2.5, L = 50 μm. Reflectivity and absorptivity spectra for n_h = 1.5, r = 0.25 μm, f = 0.05, and (c) n_p = 2.5 + 0.1i, L = 10 μm; (d) n_p = 2.5 + 0.1i, L = 50 μm. Here, FF stands for four-flux, KM for Kubelka-Munk, and LM for Lumerical.

Figures 2a and 2b show the comparison between KM, FF, and FDTD results for the case when the particles are non-absorbing, and Figs. 2c and 2d show the corresponding comparison when the particles are absorbing with imaginary index k_p = 10⁻¹. We compare the predictions for two coating thicknesses, 10 and 50 μm.

Fig. 3. Reflectivity and absorptivity spectra for n_p = 2.5, r = 0.25 μm, f = 0.05, and (a) n_h = 1.5 + 10⁻⁴i, L = 10 μm; (b) n_h = 1.5 + 10⁻⁴i, L = 50 μm; (c) n_h = 1.5 + 10⁻²i, L = 10 μm; (d) n_h = 1.5 + 10⁻²i, L = 50 μm. Here, FF stands for four-flux, KM for Kubelka-Munk, and LM for Lumerical.

Fig. 4. Reflectivity and transmissivity for n_h = 1.5, r̄ = 0.25 μm, σ = 0.016 μm, f = 0.05, and (a) n_p = 2.5, L = 10 μm; (b) n_p = 2.5, L = 50 μm.
Reflectivity and absorptivity for n_h = 1.5, r̄ = 0.25 μm, σ = 0.016 μm, f = 0.05, and (c) n_p = 2.5 + 0.01i, L = 10 μm; (d) n_p = 2.5 + 0.01i, L = 50 μm. Here, FF stands for four-flux, KM for Kubelka-Munk, and LM for Lumerical.

Fig. 5. Reflectivity and transmissivity for n_p = 2.5, n_h = 1.5, r = 0.25 μm, f = 0.3, and L = 50 μm. Here, FF stands for four-flux, KM for Kubelka-Munk, and LM for Lumerical.

Fig. 6. Comparison of computational time as a function of the thickness of the disordered media coating. Simulations are carried out in ANSYS Lumerical using an eight-core Intel Xeon workstation for the configuration n_p = 2.5, n_h = 1.5, r = 0.25 μm, f = 0.05, with mesh size 30 nm. The auto shutoff level (simulation termination criterion) is set at 10⁻³.

Fig. 7. (a) Reflectivity and transmissivity for n_p = 2.5, n_h = 1.5, r = 0.25 μm, f = 0.3, and L = 50 μm; (b) Reflectivity and absorptivity for n_p = 2.5 + 0.1i, n_h = 1.5, r = 0.25 μm, f = 0.3, and L = 50 μm. Here, SM stands for semi-analytical method and LM for Lumerical.

Fig. 8. Reflectivity and absorptivity for n_p = 2.5, r = 0.25 μm, f = 0.3, L = 50 μm, and (a) n_h = 1.5 + 10⁻²i; (b) n_h = 1.5 + 10⁻¹i. Here, SM stands for semi-analytical method and LM for Lumerical.

Fig. 9. (a) Reflectivity of the hierarchically porous P(VDF-HFP) coating calculated using the semi-analytical technique, compared with the experimental result of Mandal et al. [48]; (b) Absorptivity/emissivity of the BaSO4 film calculated using the semi-analytical technique, compared with the experimental result of Li et al. [46].

Table 1. Weighted-average reflectivity in the solar spectrum, R_solar,KM (R_solar,FF), and emissivity in the IR spectrum, ε_IR,KM (ε_IR,FF), calculated using KM (FF) theory. Values in brackets denote the deviation of the prediction from the FDTD results.
Sr. no. | Fig. | R_solar,KM | ε_IR,KM | R_solar,FF | ε_IR,FF
1 | 2a | 0.391 (21.4%) | - | 0.509 (58.1%) | -
2 | 2b | 0.745 (0.67%) | - | 0.747 (0.4%) | -
3 | 2c | 0.076 (13.4%) | 0.08 (95%) | 0.081 (20.9%) | 0.043 (4.87%)
4 | 2d | 0.078 (18.2%) | 0.331 (70.6%) | 0.079 (19.7%) | 0.197 (1.55%)
5 | 3a | 0.376 (18.6%) | - | 0.483 (52.4%) | -
6 | 3b | 0.619 (8.22%) | 0.012 (100%) | 0.612 (6.99%) | 0.005 (16.67%)
7 | 3c | 0.112 (30.2%) | 0.205 (83.0%) | 0.110 (27.9%) | 0.086 (23.2%)
8 | 3d | 0.112 (36.6%) | 0.661 (51.3%) | 0.108 (31.7%) | 0.357 (18.3%)
9 | 4a | 0.392 (19.1%) | - | 0.510 (55.0%) | -
10 | 4b | 0.746 (4.63%) | - | 0.755 (5.89%) | -
11 | 4c | 0.260 (25.0%) | 0.008 (60%) | 0.279 (34.1%) | 0.004 (20%)
12 | 4d | 0.304 (16.5%) | 0.041 (78.3%) | 0.268 (2.68%) | 0.023 (0.01%)
13 | 5 | 0.944 (8.13%) | - | 0.935 (6.98%) | -

Particularly illustrative of the effectiveness of the semi-analytical technique is the reduction in error (1.03% in Sr. no. 1 of Table 2) compared with that obtained from the analytical techniques and reported in Table 1 (8.13% using KM theory and 6.98% using FF theory in Sr. no. 13) for the configuration n_p = 2.5, n_h = 1.5, r = 0.25 μm, f = 0.3, and L = 50 μm, where dependent scattering is expected to be dominant.

Table 2. Weighted-average reflectivity in the solar spectrum, R_solar,SM, and emissivity in the IR spectrum, ε_IR,SM, calculated using the semi-analytical technique. Values in brackets denote the deviation of the prediction from the FDTD results.

Sr. no. | Fig. | R_solar,SM | ε_IR,SM
1 | 7a | 0.864 (1.03%) | -
2 | 7b | 0.059 (0.01%) | 0.710 (8.73%)
3 | 8a | 0.178 (2.73%) | 0.391 (6.25%)
4 | 8b | 0.062 (19.2%) | 0.935 (5.17%)

Disclosures
The authors declare no conflicts of interest.

Supplementary information
See Supplement 1 for supporting content.

References
[1] M. Gunde and Z. Orel, "Absorption and scattering of light by pigment particles in solar-absorbing paints," Appl. Opt. 39, 622-628 (2000).
[2] T. Nilsson and G. Niklasson, "Radiative cooling during the day: simulations and experiments on pigmented polyethylene cover foils," Sol. Energy Mater. Sol. Cells 37, 93-118 (1995).
[3] M. Quinten, "The color of finely dispersed nanoparticles," Appl. Phys. B 73, 317-326 (2001).
[4] M. Bandpay, F. Ameri, K. Ansari, and S. Moradian, "Mathematical and empirical evaluation of accuracy of the Kubelka-Munk model for color match prediction of opaque and translucent surface coatings," J. Coat. Technol. Res. 15, 1117-1131 (2018).
[5] A. Roy, R. Ramasubramaniam, and H. Gaonkar, "Empirical relationship between Kubelka-Munk and radiative transfer coefficients for extracting optical parameters of tissues in diffusive and nondiffusive regimes," J. Biomed. Opt. 17, 7 (2012).
[6] Z. Huang and X. Ruan, "Nanoparticle embedded double-layer coating for daytime radiative cooling," Int. J. Heat Mass Transf. 104, 890-896 (2017).
[7] H. Bao, C. Yan, B. Wang, X. Fang, C. Zhao, and X. Ruan, "Double-layer nanoparticle-based coatings for efficient terrestrial radiative cooling," Sol. Energy Mater. Sol. Cells 168, 78-84 (2017).
[8] B. Mishra, S. Sundaram, N. Varghese, and K. Sasihithlu, "Disordered metamaterial coating for daytime passive radiative cooling," AIP Adv. 11, 105218 (2021).
[9] J. Garnett, "Colours in metal glasses and in metallic films," Phil. Trans. R. Soc. Lond. A 203, 385-420 (1904).
[10] D. Bruggeman, "Berechnung verschiedener physikalischer Konstanten von heterogenen Substanzen. I. Dielektrizitätskonstanten und Leitfähigkeiten der Mischkörper aus isotropen Substanzen," Ann. Phys. 24, 636 (1935).
[11] C. Bohren, "Applicability of effective-medium theories to problems of scattering and absorption by nonhomogeneous atmospheric particles," J. Atmos. Sci. 43, 468-475 (1986).
[12] W. Doyle, "Optical properties of a suspension of metal spheres," Phys. Rev. B 39, 9852-9858 (1989).
[13] L. Gate, "The determination of light absorption in diffusing materials by a photon diffusion model," J. Phys. D: Appl. Phys. 4, 1049-1056 (1971).
[14] A. Ishimaru, Wave Propagation and Scattering in Random Media, vol. 2 (Academic Press, New York, 1978).
[15] J. Caron, C. Andraud, and J. Lafait, "Radiative transfer calculations in multilayer systems with smooth or rough interfaces," J. Mod. Opt. 51, 575-595 (2004).
[16] P. Kubelka and F. Munk, "Ein Beitrag zur Optik der Farbanstriche," Z. Tech. Phys. (Leipzig) 12, 593-601 (1931).
[17] B. Maheu, J. Letoulouzan, and G. Gouesbet, "Four-flux models to solve the scattering transfer equation in terms of Lorenz-Mie parameters," Appl. Opt. 23, 3353-3362 (1984).
[18] L. Wang, S. Jacques, and L. Zheng, "MCML - Monte Carlo modelling of light transport in multi-layered tissues," Comput. Methods Programs Biomed. 47, 131-146 (1995).
[19] V. Dzimbeg-Malcic, Z. Barbaric-Mikocevic, and K. Itric, "Kubelka-Munk theory in describing optical properties of paper (I)," Tech. Gaz. 18, 117-124 (2011).
[20] N. Etherden, T. Tesfamichael, G. Niklasson, and E. Wäckelgård, "A theoretical feasibility study of pigments for thickness-sensitive spectrally selective paints," J. Phys. D: Appl. Phys. 37, 1115-1122 (2004).
[21] A. Genty-Vincent, T. Song, C. Andraud, and M. Menu, "Four-flux model of the light scattering in porous varnish and paint layers: towards understanding the visual appearance of altered blanched easel oil paintings," Appl. Phys. A 123, 473 (2017).
[22] M. Gali, A. Gentle, M. Arnold, and G. Smith, "Extending the applicability of four-flux radiative transfer method," Appl. Opt. 56, 8699-8709 (2017).
[23] A. Raman, M. Anoma, L. Zhu, E. Rephaeli, and S. Fan, "Passive radiative cooling below ambient air temperature under direct sunlight," Nature 515, 540-544 (2014).
[24] H. Hottel and A. Sarofim, "Optical properties of coatings. Effect of pigment concentration," AIAA J. 9, 1895-1898 (1971).
[25] P. Kubelka, "New contributions to the optics of intensely light scattering materials. Part I," J. Opt. Soc. Am. 38, 448-457 (1948).
[26] B. Maheu and G. Gouesbet, "Four-flux models to solve the scattering transfer equation: special cases," Appl. Opt. 25, 1122-1128 (1986).
[27] R. Molenaar, J. ten Bosch, and J. Zijp, "Determination of Kubelka-Munk scattering and absorption coefficients by diffuse illumination," Appl. Opt. 38, 2068-2077 (1999).
[28] C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (John Wiley & Sons, 2008).
[29] W. Vargas and G. Niklasson, "Forward-scattering ratios and average pathlength parameter in radiative transfer models," Appl. Opt. 36, 3735-3738 (1997).
[30] W. Vargas, "Inversion methods from Kubelka-Munk analysis," J. Opt. A: Pure Appl. Opt. 4, 452-456 (2002).
[31] L. Yang and B. Kruse, "Revised Kubelka-Munk theory. I. Theory and application," J. Opt. Soc. Am. A 21, 1933-1941 (2004).
[32] A. Murphy, "Modified Kubelka-Munk model for calculation of the reflectance of coatings with optically-rough surfaces," J. Phys. D: Appl. Phys. 39, 3571-3581 (2006).
[33] S. Thennadil, "Relationship between the Kubelka-Munk scattering and radiative transfer coefficients," J. Opt. Soc. Am. A 25, 1480-1485 (2008).
[34] J. Saunderson, "Calculation of the color of pigmented plastics," J. Opt. Soc. Am. 32, 727-736 (1942).
[35] S. Atiganyanun, "Use of hollow silica and titanium dioxide microparticles in solar reflective paints for daytime radiative cooling applications in a tropical region," J. Photonics Energy 11, 022103 (2021).
229111129Y. Zhang, X. Tan, G. Qi, X. Yang, D. Hu, P. Fyffe, and X. Chen, "Effective radiative cooling with ZrO 2 /PDMS reflective coating." Sol. Energy Mater. & Sol. Cells 229, 111129 (2021). A review of cermet-based spectrally selective solar absorbers. F Cao, K Mcenaney, G Chen, Z Ren, Energy & Environ. Sci. 7F. Cao, K. McEnaney, G. Chen, and Z. Ren, "A review of cermet-based spectrally selective solar absorbers," Energy & Environ. Sci. 7, 1615-1627 (2014). Design and optimization of nanoparticle-pigmented solar selective absorber coatings for high-temperature concentrating solar thermal systems. X Wang, X Yu, S Fu, E Lee, K Kekalo, J Liu, J. Appl. Phys. 12333104X. Wang, X. Yu, S. Fu, E. Lee, K. Kekalo, and J. Liu, "Design and optimization of nanoparticle-pigmented solar selective absorber coatings for high-temperature concentrating solar thermal systems," J. Appl. Phys. 123, 033104 (2018). G Kortüm, Reflectance Spectroscopy: Principles, Methods, Applications. SpringerG. Kortüm, Reflectance Spectroscopy: Principles, Methods, Applications (Springer, 1969). Lumerical Inc, FDTD: 3D Electromagnetic Simulator. Lumerical Inc., FDTD: 3D Electromagnetic Simulator (2021). Far-field lorenz-mie scattering in an absorbing host medium: Theoretical formalism and fortran program. M Mishchenko, P Yang, J. Quant. Spectrosc. & Radiat. Transf. 205M. Mishchenko and P. Yang, "Far-field lorenz-mie scattering in an absorbing host medium: Theoretical formalism and fortran program." J. Quant. Spectrosc. & Radiat. Transf. 205, 241-252 (2018). Nanomaterial by sol-gel method: synthesis and application. D Bokov, A Jalil, S Chupradit, W Suksatan, M Javed Ansari, I H Shewael, G H Valiev, E Kianfar, Adv. Mater. Sci. Eng. 2021D. Bokov, A. Turki Jalil, S. Chupradit, W. Suksatan, M. Javed Ansari, I. H. Shewael, G. H. Valiev, and E. Kianfar, "Nanomaterial by sol-gel method: synthesis and application," Adv. Mater. Sci. Eng. 2021 (2021). 
Microemulsion method: A novel route to synthesize organic and inorganic nanomaterials: 1st nano update. M A Malik, M Y Wani, M A Hashim, Arab. journal Chem. 5M. A. Malik, M. Y. Wani, and M. A. Hashim, "Microemulsion method: A novel route to synthesize organic and inorganic nanomaterials: 1st nano update," Arab. journal Chem. 5, 397-417 (2012). Hydrothermal synthesis of nanomaterials. Y X Gan, A H Jayatissa, Z Yu, X Chen, M Li, J. Nanomater. 2020Y. X. Gan, A. H. Jayatissa, Z. Yu, X. Chen, and M. Li, "Hydrothermal synthesis of nanomaterials," J. Nanomater. 2020 (2020). A strategy of hierarchical particle sizes in nanoparticle composite for enhancing solar reflection. J Peoples, X Li, Y Lv, J Qiu, Z Huang, X Ruan, Int. J. Heat Mass Transf. 131J. Peoples, X. Li, Y. Lv, J. Qiu, Z. Huang, and X. Ruan, "A strategy of hierarchical particle sizes in nanoparticle composite for enhancing solar reflection," Int. J. Heat Mass Transf. 131, 487-494 (2019). Ultrawhite baso 4 paints and films for remarkable daytime subambient radiative cooling. X Li, J Peoples, P Yao, X Ruan, ACS Appl. Mater. Interfaces. 13X. Li, J. Peoples, P. Yao, and X. Ruan, "Ultrawhite baso 4 paints and films for remarkable daytime subambient radiative cooling," ACS Appl. Mater. Interfaces 13, 21733-21739 (2021). Effective radiative cooling by paint-format microsphere-based photonic random media. S Atiganyanun, J Plumley, S Han, K Hsu, J Cytrynbaum, T Peng, S Han, S Han, ACS Photonics. 5S. Atiganyanun, J. Plumley, S. Han, K. Hsu, J. Cytrynbaum, T. Peng, S. Han, and S. Han, "Effective radiative cooling by paint-format microsphere-based photonic random media," ACS Photonics 5, 1181-1187 (2018). Hierarchically porous polymer coatings for highly efficient passive daytime radiative cooling. J Mandal, Y Fu, A Overvig, M Jia, K Sun, N Shi, H Zhou, X Xiao, N Yu, Y Yang, Science. 362J. Mandal, Y. Fu, A. Overvig, M. Jia, K. Sun, N. Shi, H. Zhou, X. Xiao, N. Yu, and Y. 
Yang, "Hierarchically porous polymer coatings for highly efficient passive daytime radiative cooling," Science 362, 315-319 (2018). Full daytime sub-ambient radiative cooling in commercial-like paints with high figure of merit. X Li, J Peoples, Z Huang, Z Zhao, J Qiu, X Ruan, Cell Reports Phys. Sci. 1100221X. Li, J. Peoples, Z. Huang, Z. Zhao, J. Qiu, and X. Ruan, "Full daytime sub-ambient radiative cooling in commercial-like paints with high figure of merit," Cell Reports Phys. Sci. 1, 100221 (2020). ASTM International. ASTM G173-03, Standard Tables for Reference Solar Spectral Irradiances: Direct Normal and Hemispherical on 37°Tilted Surface. ASTM International. ASTM G173-03, Standard Tables for Reference Solar Spectral Irradiances: Direct Normal and Hemispherical on 37°Tilted Surface (2012). Solar spectral optical properties of pigments -part i: model for deriving scattering and absorption coefficients from transmittance and reflectance measurements. R Levinson, P Berdahl, H Akbari, Sol. Energy Mater. & Sol. Cells. 89R. Levinson, P. Berdahl, and H. Akbari, "Solar spectral optical properties of pigments -part i: model for deriving scattering and absorption coefficients from transmittance and reflectance measurements." Sol. Energy Mater. & Sol. Cells 89, 319-349 (2005). Toward a quantitative model for suspended particle devices: Optical scattering and absorption coefficients. D Barrios, R Vergaz, J Sánchez-Pena, C Granqvist, G Niklasson, Sol. Enegy Mater. & Sol. Cells. 111D. Barrios, R. Vergaz, J. Sánchez-Pena, C. Granqvist, and G. Niklasson, "Toward a quantitative model for suspended particle devices: Optical scattering and absorption coefficients." Sol. Enegy Mater. & Sol. Cells 111, 115-122 (2013). General method for determining light scattering and absorption of nanoparticle composites. J Wang, C Xu, A Nilsson, D Fernandes, M Strömberg, J Wang, G Niklasson, Adv. Opt. Mater. 1801315J. Wang, C. Xu, A. Nilsson, D. Fernandes, M. Strömberg, J. Wang, and G. 
Niklasson, "General method for determining light scattering and absorption of nanoparticle composites." Adv. Opt. Mater. p. 1801315 (2018). Atomistic metrics of BaSO 4 as an ultra-efficient radiative cooling material: a first-principles prediction. Z Tong, J Peoples, X Li, X Yang, H Bao, X Ruan, arXiv:2101.05053arXiv preprintZ. Tong, J. Peoples, X. Li, X. Yang, H. Bao, and X. Ruan, "Atomistic metrics of BaSO 4 as an ultra-efficient radiative cooling material: a first-principles prediction," arXiv preprint arXiv:2101.05053 (2021).
Refining the Oort Constants: the case for a smaller Milky Way

Rob P. Olling and Michael R. Merrifield
Department of Physics and Astronomy, University of Southampton, Southampton SO17 1BJ, United Kingdom

ASP Conference Series, Santa Cruz, August 1997

Abstract. The local stellar kinematics of the Milky Way, parameterized by the Oort constants A and B, depend on the local gradient of the rotation curve, its absolute value (Θ_0), and the distance to the Galactic center (R_0). The surface density of interstellar gas in the Milky Way varies non-monotonically with radius, and so contributes significantly to the local gradient of the rotation curve and to the Oort constants. Because of this, the Oort functions A(R) and B(R) differ significantly from the dominant ∼Θ_0/R dependence, in the Solar neighborhood and at other locations in the Galaxy. These models may explain the ∼40% difference between the values for 2AR_0 derived from radial velocity data originating in the inner and outer Galaxy (Merrifield 1992). Incorporating these local non-linearities explains the significant differences between the Oort constants derived from nearby stars (d ≤ 1 kpc; Hanson 1987 = H87) and distant Cepheids (d = 0.5−6 kpc; Feast & Whitelock 1997 = FW97). However, a consistent picture only emerges if one adopts small values for the Galactic constants: R_0 = 7.1 ± 0.4 kpc and Θ_0 = 184 ± 8 km s^−1. These values are consistent with most kinematical methods of determining R_0, including the proper motion of Sgr A* (Backer 1996), the direct determination of R_0 using water masers (7.2 ± 0.7 kpc, Reid 1993), and constraints set by the shape of the Milky Way's dark halo (Olling & Merrifield 1997b = OM97b).
Introduction

Due to our location within the Milky Way and the modest uncertainties in R_0 (7.7 ± 0.7 kpc; Reid 1993) and Θ_0 (200 ± 20 km s^−1; Sackett 1997), the rotation curve of the Milky Way, Θ(R), is difficult to establish (Fich & Tremaine 1991; Olling & Merrifield 1997a = OM97a). Stellar kinematical data in the form of proper motions and radial velocities can be used to constrain the Galactic constants via the Oort functions:

A(R) = (1/2) [Θ(R)/R − dΘ(R)/dR]   and   B(R) = −(1/2) [Θ(R)/R + dΘ(R)/dR].

Unfortunately, the available observations of the Milky Way's rotation curve are not good enough to calculate the derivatives of Θ(R) directly. Instead, we fit mass models to the observations and calculate the derivatives from the model rotation curves. The dominant contributors to the total mass are the stellar disk and the dark matter (DM) halo, which are believed to be fairly smoothly distributed with radius. However, the distribution of interstellar hydrogen (ISM) shows density enhancements such as rings and arms, which produce a contribution to Θ(R) that varies non-monotonically with radius. This effect gives rise to local features superimposed on the dominant Θ/R behavior of the Oort functions (Fig. 1); on larger scales, the Oort functions follow the no-ISM relations (dotted line). Note the ∼40% difference between the values of 2AR_0 inferred from extrapolating the inner and outer Galaxy data, similar to Merrifield's (1992) observational findings. From Fig. 1 it is clear that if the radial extent of the stellar kinematical surveys is more than a few hundred parsec, it is imperative to take the slope of the Oort functions (a few km s^−1 kpc^−2) into account. Note, however, that A(R) and B(R) are almost flat in the first kpc beyond the Solar circle.
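As a quick numerical illustration of the definitions above, the Oort functions can be evaluated for any model rotation curve by finite differencing; for a flat curve, A = −B = Θ_0/(2R). The sketch below is illustrative only — the flat curve and the constants plugged in are assumptions for this example, not the paper's full mass model:

```python
import numpy as np

def oort_functions(theta, R, dR=1e-4):
    """Oort functions A(R), B(R) [km/s/kpc] for a rotation curve
    theta(R) [km/s], with R in kpc; dTheta/dR by central difference."""
    dtheta = (theta(R + dR) - theta(R - dR)) / (2.0 * dR)
    A = 0.5 * (theta(R) / R - dtheta)
    B = -0.5 * (theta(R) / R + dtheta)
    return A, B

# Flat rotation curve with the best-fit constants quoted in the text
theta0, R0 = 184.0, 7.1                      # km/s, kpc
A, B = oort_functions(lambda R: theta0 + 0.0 * R, R0)
# For a flat curve A = -B = theta0/(2 R0), and A - B = theta0/R0.
```

For a rising or falling model curve, only the `theta` callable changes; the derivative term then makes A and B deviate from the symmetric flat-curve values.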
This is the region sampled by the Lick Northern Proper Motion stars used in Hanson's (1987) determination of the Oort constants. Thus, we compared his values (A = 11.3 ± 1.1 and B = −13.9 ± 0.9 km s^−1 kpc^−1) with our model predictions. We also use the combinations (A − B) and −B/(A − B) as constraints (for details, see OM97a). Inspection of Figure 1 reveals that models with small values for the Galactic constants fit the observations better than the values currently considered best (middle panels) and the IAU values (rightmost panels).

Results

We can formalize the constraints placed on the values of the Galactic constants by calculating a χ^2 statistic comparing the five observed combinations of the Oort constants in Fig. 1 to the values predicted by the models. Because of the radial dependence of the Oort functions, we compared the model and observed values over the radial extent of the observations (horizontal error bars). Since these regions are approximately equal to the size of the epicycles of the stellar populations studied, we expect that the Oort functions can show structure on these scales. The χ^2 statistics were calculated for a range of values of R_0 and Θ_0. The best-fit (minimum-χ^2) values are: R_0 = 7.1 ± 0.4 kpc and Θ_0 = 184 ± 8 km s^−1. In Figure 2 we plot the probability that any given values of R_0 and Θ_0 are consistent with the observed Oort constraints. For example, the official IAU-sanctioned values of R_0 = 8.5 kpc and Θ_0 = 220 km s^−1 are ruled out at the 99% confidence level. Comparing our best-fit values with R_0 determinations based on kinematical constraints (see Reid 1993 for a compilation), we find that all are consistent with the leaner Galaxy we propose here. In particular, R_0 = 7.1 kpc is entirely consistent with its only direct determination employing H_2O maser proper motions (R_0 = 7.2 ± 0.7 kpc, Reid 1993). Furthermore, these Galactic constants are consistent with the proper motion of Sgr A* (Backer 1996).
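The grid search behind the minimum-χ^2 fit can be sketched as follows. Everything specific here is an assumption for illustration: only two of the five constraints (the H87 values quoted above) are used, and the model is the simplest flat-curve relation A = −B = Θ_0/(2R_0) rather than the paper's full mass model:

```python
import numpy as np

# Illustrative observed constraints (the H87 values quoted in the text)
A_obs, sA = 11.3, 1.1       # km/s/kpc
B_obs, sB = -13.9, 0.9      # km/s/kpc

def model_AB(R0, theta0):
    # Simplified no-ISM model: flat rotation curve, A = -B = theta0/(2 R0)
    A = theta0 / (2.0 * R0)
    return A, -A

R0_grid = np.linspace(6.0, 9.0, 61)          # kpc
th_grid = np.linspace(160.0, 240.0, 81)      # km/s
chi2 = np.empty((R0_grid.size, th_grid.size))
for i, R0 in enumerate(R0_grid):
    for j, th in enumerate(th_grid):
        A, B = model_AB(R0, th)
        chi2[i, j] = ((A - A_obs) / sA) ** 2 + ((B - B_obs) / sB) ** 2

i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
R0_best, th_best = R0_grid[i], th_grid[j]
```

Confidence contours like those in Fig. 2 follow by thresholding the same `chi2` array at the Δχ^2 levels appropriate for two fitted parameters.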
From a new and completely independent analysis based on the shape of the Galaxy's dark matter halo, we find similar constraints on the Galactic constants (OM97b). The Oort constants derived from nearby stars (d ≤ 1 kpc, H87) differ significantly (∼3σ) from the values derived at large distances (d = 0.5−6 kpc, FW97). However, extrapolating our best-fit model from the distant Galaxy towards the Solar position, i.e., following the no-ISM line in Fig. 1, yields values of A and B very close to those of FW97. Furthermore, our models and the FW97 models predict almost identical Cepheid proper motions. We conclude that the discrepancy between the H87 and FW97 Oort constants is caused by the non-linear behavior in the Solar neighborhood, and that a consistent picture only emerges for a leaner Milky Way with R_0 = 7.1 ± 0.4 kpc and Θ_0 = 184 ± 8 km s^−1.

Fig. 1. The Oort functions A(R), B(R), and (A − B) [km s^−1 kpc^−1] as derived for three model rotation curves. The solid lines are derived from the full mass model; the dashed lines have no gas component. The various observational estimates of these quantities are also shown, with the horizontal error bars indicating the radial range over which the observations effectively averaged. H87's and the IAU standard values (KLB86) are plotted (squares and circles), as well as the velocity dispersion ratios of local stars (triangles and hexagons, bottom panel).

Fig. 2. The contours of equal likelihood as a function of R_0 and Θ_0, calculated by comparing the model values with the observed A, B, A − B, and −B/(A − B) constraints presented in Fig. 1. The best-fit minimum-χ^2 values for R_0 and Θ_0, and their 1-σ errors, are also indicated. The IAU standard values are R_0 = 8.5 kpc and Θ_0 = 220 km s^−1.

References
Backer D.C., 1996, in Blitz L., Teuben P., eds, Unsolved Problems of the Milky Way
Feast M., Whitelock P., 1997, MNRAS, in press, astro-ph/9706293 (FW97)
Fich M., Tremaine S., 1991, ARA&A, 29, 409
Hanson R.B., 1987, AJ, 94, 409 (H87)
Kerr F.J., Lynden-Bell D., 1986, MNRAS, 221, 1023 (KLB86)
Merrifield M.R., 1992, AJ, 103, 1552
Olling R.P., Merrifield M.R., 1997a, MNRAS, submitted (OM97a)
Olling R.P., Merrifield M.R., 1997b, in preparation; summary in this volume and at astro-ph/9710224 (OM97b)
Reid M.J., 1993, ARA&A, 31, 345
Sackett P.D., 1997, ApJ, 483, 103
arXiv:physics/0304059v1 [physics.atom-ph] 17 Apr 2003

Evaluation of the two-photon exchange diagrams for the (1s)^2 2p_3/2 electron configuration in Li-like ions

A. N. Artemyev^{1,2,3}, V. M. Shabaev^{1,2,3}, M. M. Sysak^{1,3}, V. A. Yerokhin^{1}, T. Beier^{3}, G. Plunien^{2}, and G. Soff^{2}

^1 Department of Physics, St. Petersburg State University, Oulianovskaya 1, 198504 Petrodvorets, St. Petersburg, Russia
^2 Institut für Theoretische Physik, TU Dresden, Mommsenstraße 13, D-01062 Dresden, Germany
^3 Gesellschaft für Schwerionenforschung, Planckstrasse 1, D-64291 Darmstadt, Germany

(Dated: November 21, 2018)
PACS numbers: 12.20.Ds, 31.30.Jv, 31.10.+z

Abstract. We present ab initio calculations of the complete gauge-invariant set of two-photon exchange graphs for the (1s)^2 2p_3/2 electron configuration in Li-like ions. These calculations are an important step towards the precise theoretical determination of the 2p_3/2–2s transition energy in the framework of QED.

I. INTRODUCTION

At present the lowest-lying states in heavy Li-like ions can be investigated very precisely, both theoretically and experimentally.
One of the most precise experimental results in these systems has been obtained by Beiersdorfer and co-workers [1] for the 2p_3/2–2s transition energy in Li-like bismuth, which was determined with an accuracy of 0.04 eV. Accurate experimental data are available at present for a number of other elements as well. For the latest high-precision measurements we refer to Refs. [2,3,4]; an outline of earlier investigations can be found in Ref. [2]. The accuracy reached in experimental investigations provides a promising tool for probing QED corrections in the strong Coulomb field of the nucleus up to second order in the fine-structure constant α. For the 2p_1/2–2s transition, this project has been carried out in a series of our previous investigations [5,6,7,8]. In Ref. [8] we completed the evaluation of all two-electron QED corrections of second order in α and obtained the most accurate theoretical predictions for the 2p_1/2–2s splitting within a wide range of nuclear charge numbers Z. Based on a careful estimate of the uncertainty of the theoretical values, we concluded that already now the comparison of theory and experiment for Li-like uranium provides a test of QED effects of second order in α at the level of accuracy of about 17%. For the 2p_3/2–2s and 2p_1/2–2s transitions in Li-like bismuth, analogous calculations have been performed recently by Sapirstein and Cheng [9]. However, in order to match the experimental accuracy for the 2p_3/2–2s splitting, rigorous evaluations of second-order QED corrections are required for ions other than bismuth as well. The first step in this direction was performed in our earlier investigation [5], where we evaluated the vacuum-polarization screening correction for several energy levels of Li-like ions, including the (1s)^2 2p_3/2 state.
The aim of the present work is to calculate the two-photon exchange correction for this state (for extensive calculations of these corrections for the lower states of Li-like ions and for nonmixed low-lying states of He-like ions we refer the reader to Refs. [7,8,10,11,12,13,14]). After that, the self-energy screening correction remains the last uncalculated two-electron second-order QED contribution for this state. This paper is organized as follows. In the next section we present the basic formulas for the two-photon exchange correction for the (1s)^2 2p_3/2 state. A description of our numerical procedure is given in Sec. III, and the results obtained are discussed in Sec. IV.

II. BASIC FORMULAS

The detailed derivation of the two-photon exchange corrections to the (1s)^2 2s and (1s)^2 2p_1/2 states of Li-like ions can be found in our previous paper [8]. For the (1s)^2 2p_3/2 state the derivation proceeds along the same lines; thus, we present mainly the final formulas here. Our derivation is based on the two-time Green function (TTGF) method [15,16]. For a detailed description of the method we refer to the recent review [17]. The two-photon exchange corrections to the (1s)^2 2p_3/2 state of Li-like ions can be conveniently separated into three parts: the two-photon exchange contribution due to the interaction between the two 1s electrons, the two-photon exchange contribution due to the interaction between the valence electron and one of the 1s electrons, and the three-electron contribution. The first part coincides with the two-photon exchange correction to the ground-state energy of He-like ions. Its calculation was carried out in [10,11]. This correction does not contribute to the 2p–2s splitting in Li-like ions and is not considered here. The remaining two-electron and three-electron corrections are diagrammatically depicted in Fig. 1.
We start from the expression for the second-order correction to the energy shift of the level k [17],

\Delta E^{(2)}_k = \frac{1}{2\pi i} \oint_\Gamma dE \, \Delta E \, \Delta g^{(2)}_{kk}(E) - \frac{1}{2\pi i} \oint_\Gamma dE \, \Delta E \, \Delta g^{(1)}_{kk}(E) \, \frac{1}{2\pi i} \oint_\Gamma dE' \, \Delta g^{(1)}_{kk}(E') ,   (1)

where \Delta g_{kk}(E) = g_{kk}(E) - g^{(0)}_{kk}(E), g_{kk}(E) = \langle u_k | g(E) | u_k \rangle, u_k is the unperturbed wave function, \Delta E = E - E^{(0)}_k, E^{(0)}_k is the unperturbed energy of the state k, and g^{(0)}_{kk}(E) = (\Delta E)^{-1} is the function g_{kk}(E) in the zeroth-order approximation. The function g(E) \equiv g(E, x'_1, ..., x'_N; x_1, ..., x_N) is the temporal Fourier transform of the N-electron two-time Green function. Its definition and the corresponding Feynman rules can be found in [17]. The superscripts in Eq. (1) indicate the order of the contribution in \alpha. For the two-photon exchange correction, the Feynman diagrams contributing to \Delta g^{(2)}(E) are presented in Fig. 1. We refer to the corresponding contributions as the ladder (a), the crossed (b), and the three-electron (c) terms. The second term in Eq. (1) is known as the disconnected contribution. It vanishes completely when considered together with the reducible contribution (for details, see [8]). In our case, the unperturbed wave function is

u_k = \frac{1}{\sqrt{3!}} \sum_P (-1)^P \psi_{Pa}(x_1) \psi_{Pb}(x_2) \psi_{Pv}(x_3) ,   (2)

where v denotes the valence electron, a and b are the electrons of the (1s)^2 shell, and P is the permutation operator (in the factor (-1)^P, the parity of the permutation is implied by P). For brevity we will also use the following notations:

I(\omega) = e^2 \alpha^\mu \alpha^\nu D_{\mu\nu}(\omega) ,   (3)

I_{abcd}(\omega) = \langle ab | I(\omega) | cd \rangle ,   (4)

I_{ab;cd} = I_{abcd}(\Delta_{bd}) - I_{bacd}(\Delta_{ad}) ,   (5)

I'(\omega) = \frac{dI(\omega)}{d\omega} ,   (6)

where \Delta_{ab} = \varepsilon_a - \varepsilon_b, \alpha^\mu = (1, \boldsymbol{\alpha}) are the Dirac matrices, and D_{\mu\nu}(\omega) is the photon propagator. We separate the contributions of the diagrams under consideration into two parts: the reducible part, with the energy of the intermediate state coinciding with the energy of the initial (final) state, and the irreducible part, for the remainder.
Omitting the derivation, which is similar to that of Ref. [8], we present here only the final expressions for the energy shift. The reducible ("red") and irreducible ("ir") three-electron contributions read

\Delta E^{3el}_{ir} = \sum_{PQ} (-1)^{P+Q} \sum_n{}' \frac{I_{P2P3nQ3}(\Delta_{P3Q3}) \, I_{P1nQ1Q2}(\Delta_{Q1P1})}{\varepsilon_{Q1} + \varepsilon_{Q2} - \varepsilon_{P1} - \varepsilon_n} ,   (7)

\Delta E^{3el}_{red} = \sum_{\mu_a} \Big[ I'_{vaav}(\Delta_{va}) (I_{ab;ab} - I_{bv;bv}) + \frac{1}{2} I'_{av\tilde{v}b}(\Delta_{va}) I_{b\tilde{v};av} + \frac{1}{2} I'_{b\tilde{v}va}(\Delta_{va}) I_{va;\tilde{v}b} \Big] ,   (8)

where P and Q are permutation operators, and the prime on the sum in Eq. (7) indicates that terms with a vanishing denominator should be omitted from the summation. In Eq. (8), a and b denote 1s electrons with opposite angular-momentum projections \mu_a = -\mu_b, v stands for the valence state with angular-momentum projection \mu_v, and \tilde{v} is the valence state with \mu_{\tilde{v}} = 2\mu_a + \mu_v (the corresponding contribution is assumed to be zero when \mu_{\tilde{v}} is out of the range -j_v, ..., j_v). The irreducible two-electron contribution is

\Delta E^{2el}_{"ir"} = \Delta E^{lad}_{dir} + \Delta E^{lad}_{exch} + \Delta E^{cr}_{dir} + \Delta E^{cr}_{exch} ,   (9)

\Delta E^{lad}_{dir} = \sum_{n_1 n_2}{}' \frac{i}{2\pi} \int_{-\infty}^{\infty} d\omega \, \frac{F^{lad}_{dir}(\omega, n_1 n_2)}{(\varepsilon_c - \omega - \varepsilon_{n_1} u)(\varepsilon_v + \omega - \varepsilon_{n_2} u)} ,   (10)

\Delta E^{lad}_{exch} = - \sum_{n_1 n_2}{}' \frac{i}{2\pi} \int_{-\infty}^{\infty} d\omega \, \frac{F^{lad}_{exch}(\omega, n_1 n_2)}{(\varepsilon_v - \omega - \varepsilon_{n_1} u)(\varepsilon_c + \omega - \varepsilon_{n_2} u)} ,   (11)

\Delta E^{cr}_{dir} = \sum_{n_1 n_2}{}' \frac{i}{2\pi} \int_{-\infty}^{\infty} d\omega \, \frac{F^{cr}_{dir}(\omega, n_1 n_2)}{(\varepsilon_c - \omega - \varepsilon_{n_1} u)(\varepsilon_v - \omega - \varepsilon_{n_2} u)} ,   (12)

\Delta E^{cr}_{exch} = - \sum_{n_1 n_2}{}' \frac{i}{2\pi} \int_{-\infty}^{\infty} d\omega \, \frac{F^{cr}_{exch}(\omega, n_1 n_2)}{(\varepsilon_v - \omega - \varepsilon_{n_1} u)(\varepsilon_v - \omega - \varepsilon_{n_2} u)} .   (13)

Here we introduced the labels "lad" and "cr" for the ladder and the crossed diagrams, and "dir" and "exch" for the direct and the exchange parts.
The other notations are:

F^{lad}_{dir}(\omega, n_1 n_2) = \sum_{\mu_c \mu_{n_1} \mu_{n_2}} I_{cvn_1n_2}(\omega) \, I_{n_1n_2cv}(\omega) ,   (14)

F^{lad}_{exch}(\omega, n_1 n_2) = \sum_{\mu_c \mu_{n_1} \mu_{n_2}} I_{vcn_1n_2}(\omega) \, I_{n_1n_2cv}(\omega - \Delta_{vc}) ,   (15)

F^{cr}_{dir}(\omega, n_1 n_2) = \sum_{\mu_c \mu_{n_1} \mu_{n_2}} I_{cn_2n_1v}(\omega) \, I_{n_1vcn_2}(\omega) ,   (16)

F^{cr}_{exch}(\omega, n_1 n_2) = \sum_{\mu_c \mu_{n_1} \mu_{n_2}} I_{vn_2n_1v}(\omega) \, I_{n_1ccn_2}(\omega - \Delta_{vc}) ,   (17)

and u = (1 - i0). The prime on the sums indicates that some terms are excluded from the summation. First of all, we omit the reducible contribution, i.e. the terms for which the intermediate two-electron energy \varepsilon_{n_1} + \varepsilon_{n_2} equals the energy of the initial two-electron state \varepsilon_v + \varepsilon_c; those are (\varepsilon_{n_1} \varepsilon_{n_2}) = (\varepsilon_c \varepsilon_v) and (\varepsilon_v \varepsilon_c). In addition, we exclude the infrared-divergent terms (see [8,18] for details), namely those with (\varepsilon_{n_1} \varepsilon_{n_2}) = (\varepsilon_c \varepsilon_v) in the direct crossed part and with (\varepsilon_{n_1} \varepsilon_{n_2}) = (\varepsilon_c \varepsilon_c) and (\varepsilon_v \varepsilon_v) in the exchange crossed part. These terms should be considered together with the reducible contribution; their sum can be shown to be infrared finite. We employ the notations \Delta E^{2el}_{"ir"} and \Delta E^{2el}_{"red"} in order to emphasize that the corresponding terms are not "pure" irreducible and reducible contributions. We mention that the case under consideration differs from the cases of the 2s and 2p_1/2 valence electrons considered previously in [8] by the fact that for the 2p_3/2 Dirac state there is no adjoining state separated only by the finite-nuclear-size effect. Consequently, there is no need to exclude any further terms from the crossed contribution, as we had to do in Ref. [8] for the 2s and 2p_1/2 valence electrons. Finally, we note the "reducible" contribution

\Delta E^{2el}_{"red"} = \frac{i}{4\pi} \int_{-\infty}^{\infty} d\omega \, \frac{1}{(\omega + i0)^2} \Big[ 2 F^{cr}_{exch}(-\omega + \Delta_{vc}, cc) + 2 F^{cr}_{exch}(-\omega, vv) - F^{lad}_{exch}(\omega + \Delta_{vc}, cv) - F^{lad}_{exch}(-\omega + \Delta_{vc}, cv) - F^{lad}_{dir}(\omega - \Delta_{vc}, vc) - F^{lad}_{dir}(-\omega - \Delta_{vc}, vc) - F^{lad}_{exch}(\omega, vc) - F^{lad}_{exch}(-\omega, vc) \Big] .   (18)

III. NUMERICAL EVALUATION

The three-electron contribution to the energy of the (1s)^2 2s, (1s)^2 2p_1/2, and (1s)^2 2p_3/2 levels of Li-like ions has been calculated in our recent investigation [19]. This evaluation is relatively simple, since the corresponding expressions (7) and (8) contain at most one summation over the Dirac spectrum and no integrations over the virtual-photon energy. Thus we focus here on the calculation of the two-electron contribution. The reducible direct part can be written as

\Delta E^{2el}_{"red",dir} = \frac{1}{2} F^{lad}_{dir}(\Delta_{vc}, vc)' - \frac{1}{\pi} \int_0^{\infty} d\omega \, \frac{\omega}{\Delta^2_{vc} + \omega^2} \frac{d}{d\omega} F^{lad}_{dir}(i\omega, vc) ,   (20)

where F'(\Delta) = (dF/d\omega)_{\omega=\Delta}. Let us now turn to the exchange contribution, which is split into a regular and an irregular part. One of the integration contours C_reg used for the evaluation of the regular part is depicted in Fig. 5. The evaluation of the irregular part is less time consuming, but its structure is more involved. In this case we need to take care of single and double poles of the integrand that are located close to the integration contour. The potential occurrences of one or two single poles and of one double pole within the interval [0, \Delta_{vc}] were treated by means of the following identities:

\int_{\omega_1}^{\omega_2} d\omega \, \frac{f(\omega)}{x_0 - \omega \pm i0} = P \int_{\omega_1}^{\omega_2} d\omega \, \frac{f(\omega)}{x_0 - \omega} \mp i\pi f(x_0) ,   (21)

\int_{\omega_1}^{\omega_2} d\omega \, \frac{f(\omega)}{(x_0 - \omega \pm i0)^2} = \pm i\pi f'(x_0) + \frac{f(\omega_2)}{x_0 - \omega_2} - \frac{f(\omega_1)}{x_0 - \omega_1} - P \int_{\omega_1}^{\omega_2} d\omega \, \frac{f'(\omega)}{x_0 - \omega} ,   (22)

\int_{\omega_1}^{\omega_2} d\omega \, \frac{f(\omega)}{(x_0 - \omega \pm i0)(x_1 - \omega \pm i0)} = \frac{1}{x_1 - x_0} \Big[ P \int_{\omega_1}^{\omega_2} d\omega \, \frac{f(\omega)}{x_0 - \omega} - P \int_{\omega_1}^{\omega_2} d\omega \, \frac{f(\omega)}{x_1 - \omega} \mp i\pi f(x_0) \pm i\pi f(x_1) \Big] .   (23)

After integration by parts, the exchange contribution of the reducible part can be written as

\Delta E^{2el}_{"red",exch} = - \frac{1}{2} \big[ F^{cr}_{exch}(\Delta_{vc}, cc) + F^{cr}_{exch}(0, vv) \big]' + \frac{1}{2\pi i} P \int_{-\infty}^{\infty} \frac{d\omega}{\omega} \frac{d}{d\omega} \big[ F^{cr}_{exch}(\Delta_{vc} + \omega, cc) + F^{cr}_{exch}(\omega, vv) - 2 F^{lad}_{exch}(\omega, vc) \big] .   (24)

It is worth mentioning that the integral in Eq. (24) exists only if the sum of all three terms in the brackets is considered; for each single term, the integral is infrared divergent.
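The principal-value integrals appearing in the identities (21)–(23) are easy to check numerically: pairing points symmetrically about the pole cancels the 1/(x_0 − ω) singularity exactly. The sketch below (plain trapezoidal quadrature; not the paper's numerical scheme) evaluates the real part of Eq. (21) for f(ω) = ω on [0, 2] with x_0 = 1, where ω/(1 − ω) = −1 + 1/(1 − ω) gives a principal value of exactly −2:

```python
import numpy as np

def _trapz(y, x):
    # trapezoidal rule, kept explicit for portability across NumPy versions
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def principal_value(f, a, b, x0, n=20001):
    """P int_a^b f(w)/(x0 - w) dw with a simple pole at x0 inside (a, b).
    Over the symmetric window [x0-d, x0+d], substituting w = x0 -/+ t
    pairs the integrand into (f(x0-t) - f(x0+t))/t, which is regular."""
    d = min(x0 - a, b - x0)
    total = 0.0
    for lo, hi in ((a, x0 - d), (x0 + d, b)):   # regular outer pieces
        if hi > lo:
            w = np.linspace(lo, hi, n)
            total += _trapz(f(w) / (x0 - w), w)
    t = np.linspace(0.0, d, n)[1:]              # skip t = 0 (limit is finite)
    total += _trapz((f(x0 - t) - f(x0 + t)) / t, t)
    return total

pv = principal_value(lambda w: w, 0.0, 2.0, 1.0)   # should approach -2
```

The ∓iπf(x_0) terms of Eq. (21) are the analytic residue contributions and are simply added after the real principal-value part has been computed.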
NUMERICAL RESULTS AND DISCUSSION The results of our calculations are presented in Table I, where the direct, the exchange, and the three-electron contribution to the two-photon exchange correction of the valence 2p 3/2 electron with the (1s) 2 shell are listed separately. The evaluation was performed within the Feynman gauge. We estimate the numerical uncertainty of our results to be less than 5 × 10 −5 a.u. For bismuth, our results can be compared with the calculation by Sapirstein and Cheng [9]. They report −6.529 and −6.670 eV for the two-electron and the three-electron contribution, respectively. This agrees well with our corresponding results of ∆E 3el MBPT = P Q (−1) P +Q εn>0 ′ I P 2P 3nQ3 (0)I P 1nQ1Q2 (0) ε Q1 + ε Q2 − ε P 1 − ε n ,(25)∆E 2el MBPT = µc εn 1 εn 2 >0 ′ [I cvn 1 n 2 (0) − I vcn 1 n 2 (0)]I n 1 n 2 cv (0) ε c + ε v − ε n 1 − ε n 2 ,(26) where the photon propagators should be taken in the Coulomb gauge and the prime indicates that terms with vanishing denominator should be omitted. We mention that Eqs. (25) and (26) include the contribution due to the exchange by two Breit photons (the B × B term). Strictly speaking, this term is of higher order than the level of validity of the Breit approximation, and, therefore, it appears to be inconsistent to include it within the MBPT scheme. In Table II and in Fig. 6 we compare the results of the rigorous QED treatment of the where the B × B term turns out to be of the same order of magnitude, but of different sign than the nontrivial QED contribution. To summarize this investigation we presented a rigorous QED evaluation of the twophoton exchange correction for the (1s) 2 2p 3/2 state of Li-like ions. Combining these results with the data for the (1s) 2 2s state from our previous study [8], we obtained the two-photon exchange correction for the 2p 3/2 -2s splitting. 
This is an important step towards the final goal consisting in the evaluation of all two-electron second-order QED corrections to the 2p 3/2 -2s transition energy for the Li isoelectronic sequence. contain at most one summation over the Dirac spectrum and no integrations over the virtual-photon energy. Thus we focus here on the calculation of the two-electron contribution.The summation over magnetic substates in Eqs. (10)-(13),(18) was performed by means of standard techniques. The resulting expressions can be found in[8]. As an independent check we employed also the direct numerical summation of Clebsch-Gordan coefficients.To calculate infinite summations over the spectrum of the Dirac equation in Eqs. (10)-(13), we employed the method of the B-spline basis set for the Dirac equation [20]. Typical basis sets contained 50 positive and 50 negative-energy eigenstates for each value of the angular-momentum quantum number κ. The finite size of the nucleus has been taken into account employing the homogeneously-charged sphere model for the nuclear-charge distribution. The values of the rms radii used in this work are the same as in [8]. Infinite summations over κ were truncated typically at |κ| = 10. Partial sums of the expansion over |κ| were fitted to the form S |κ| = a least squares method. The coefficient a 0 yields the extrapolated value for the sum of the expansion. We found that different fits with N = 4-6 yield the same result with an accuracy of at least 5 digits. The integration over the energy of the virtual photon ω in Eqs. (10)-(13) represents the most difficult part of the calculation. To avoid strong oscillations for large values of ω, we performed the Wick rotation of the integration contour. Deforming the contour, one should take care about the poles and the branch cuts of the integrand. The analytic structure of the integrand for Eqs. (11)-(13) is shown in Figs. 2-5. 
These graphs are very similar tothose for the 2s-and 2p 1/2 -valence electrons in Ref.[8]. The only difference is that now three Dirac energy levels occur which are more deeply bound than the valence state: 1s, 2s, and 2p 1/2 . The terms in Eqs.(11) and(13)containing these states and the valence state as intermediate were treated in a different way than the remainder, as is discussed below.For the evaluation of the direct parts of the ladder and crossed contributions, we perform the Wick rotation of the integration contours separating the corresponding pole contributions, as shown in Figs. 2 and 3. In the direct part of the reducible contribution, we also perform a Wick rotation and then integrate by parts. This yields the following expression which can be evaluated directly, . 4 or 5 , 5in this case the integration contour is squeezed between two branch cuts of the photon propagators on the interval [0, ∆ vc ]. Therefore, the standard Wick rotation of the contour is not possible.It is convenient to divide the contributions of Eqs.(11) and(13)into two parts. The first one accounts for the poles of the integrand on the interval [0, ∆ vc ] and is referred to as the irregular part. The remainder is denoted as the regular part. This contribution does not possess any poles close to the squeezed part of the contour, which simplifies its numerical evaluation. However, it turns out as is the most time-consuming part of the calculation. where P indicates the principal value of the integral. In Eq. (23) the choice of the sign before iπf (x 0 ) and iπf (x 1 ) is determined by the sign of the infinitesimal addition ±i0 in the first and the second denominator, respectively. For the numerical evaluation of the irregular contribution we employed the integration contour C irr shown in Fig. 4. It consists of 3 parts: [−i∞ − ǫ, −ǫ], [−ǫ, ∆ vc + ǫ], and [∆ vc + ǫ, ∆ vc + ǫ + i∞]. 
A small positive constant ǫ was introduced in order to facilitate the numerical evaluation of the principal-value integrals. − 6 . 65330 and −6.6698 eV, respectively. It is interesting to compare the results of the rigorous QED treatment with approximations evaluations based on relativistic many-body perturbation theory (MBPT). The difference between the QED and MBPT results can be conventionally regarded as a "nontrivial" QED contribution. In order to deduce the two-photon exchange correction within the framework of MBPT, we should introduce the following changes in our basic formulas: all summations over intermediate states should be restricted to positive-energy states only, the calculation should be performed within Coulomb gauge, and the virtual-photon energy in the photon propagator should be set equal to zero. Within this approximation, all reducible parts vanish, and the integration over the energy of the virtual photon can be carried out employing Cauchy's theorem. This yields zero for the crossed contribution, and finally we are left with the following expression for the total two-photon exchange correction within the MBPT approximation: work was supported by the Russian Foundation for Basic Research (Grant No. 01-02-17248), by the Russian Ministry of Education (Grant No. E02-3.1-49), and by the program "Russian Universities" (Grant No. UR.01.01.072). The work of A.N.A. and V.M.S. was supported by the joint grant of the Russian Ministry of Education and the Administration of Saint Petersburg (Grant No. PD02-1.2-79). V.A.Y. acknowledges the support of the foundation "Dynasty" and the International Center for Fundamental Physics. The work of V.M.S. was supported by the Alexander von Humboldt Stiftung. We also acknowledge FIG. 1 : 1Feynman diagrams for the two-photon exchange corrections. C LD FIG. 2: The poles and the branch cuts of the integrand for the direct part of the ladder contribution, and the integration contour C LD . C CD FIG. 
FIG. 3: The poles and the branch cuts of the integrand for the direct part of the crossed contribution, and the integration contour C_CD.

FIG. 4: The poles and the branch cuts of the integrand for the exchange part of the ladder contribution, and the integration contour C_irr.

FIG. 6: The difference of the QED result for the two-photon exchange correction to the 2p_{3/2}-2s transition and the corresponding MBPT results, with the B × B term included (solid line) and without this term (dashed line). The upper graph presents this difference in atomic units, and the lower one in units of per cent of the total QED contribution.

TABLE I: Various contributions to the two-photon exchange correction for the (1s)^2 2p_{3/2} state of Li-like ions, in atomic units. The subscripts "dir" and "exch" label the direct and the exchange parts, respectively; the superscripts "2el" and "3el" refer to the two-electron and the three-electron contributions, respectively.

  Z   ΔE^{2el}_dir   ΔE^{2el}_exch   ΔE^{3el}     Total
 20      0.03876        0.03902     −0.45509   −0.37731
 28     −0.10511        0.03760     −0.31608   −0.38359
 30     −0.12453        0.03715     −0.29807   −0.38545
 32     −0.14058        0.03673     −0.28363   −0.38748

TABLE II: Comparison of the rigorous QED treatment of the two-photon exchange correction to the 2p_{3/2}-2s splitting in Li-like ions with the approximate MBPT treatment, in atomic units. B × B denotes the term corresponding to the exchange by two Breit photons.

  Z      QED       MBPT    MBPT−(B × B)
 20   −0.11912   −0.11917   −0.11920
 28   −0.11778   −0.11784   −0.11794
 30   −0.11731   −0.11741   −0.11754
 32   −0.11681   −0.11693   −0.11709
 40   −0.11414   −0.11449   −0.11483
 47   −0.11078   −0.11147   −0.11205
 50   −0.10897   −0.10985   −0.11056
 54   −0.10606   −0.10732   −0.10825
 60   −0.10064   −0.10258   −0.10391
 66   −0.09355   −0.09640   −0.09824
 70   −0.08756   −0.09125   −0.09352
 74   −0.08044   −0.08508   −0.08786
 79   −0.06960   −0.07562   −0.07915
 80   −0.06711   −0.07345   −0.07715
 82   −0.06183   −0.06879   −0.07286
 83   −0.05898   −0.06630   −0.07056
 90   −0.03514   −0.04509   −0.05096
 92   −0.02676   −0.03759   −0.04401
100     0.01597    0.00138   −0.00781

[1] P. Beiersdorfer, A. L. Osterheld, J. H. Scofield, J. R. Crespo López-Urrutia, and K. Widmann, Phys. Rev. Lett. 80, 3022 (1998).
[2] P. Bosselmann, U. Staude, D. Horn, K.-H. Schartner, F. Folkmann, A. E. Livingston, and P. H. Mokler, Phys. Rev. A 59, 1874 (1999).
[3] D. Feili, P. Bosselmann, K.-H. Schartner, F. Folkmann, A. E. Livingston, and P. H. Mokler, Phys. Rev. A 62, 022501 (2000).
[4] C. Brandau, PhD thesis, University of Gießen, 2000.
[5] A. N. Artemyev, T. Beier, G. Plunien, V. M. Shabaev, G. Soff, and V. A. Yerokhin, Phys. Rev. A 60, 45 (1999).
[6] V. A. Yerokhin, A. N. Artemyev, T. Beier, G. Plunien, V. M. Shabaev, and G. Soff, Phys. Rev. A 60, 3522 (1999).
[7] V. A. Yerokhin, A. N. Artemyev, V. M. Shabaev, M. M. Sysak, O. M. Zherebtsov, and G. Soff, Phys. Rev. Lett. 85, 4699 (2000).
[8] V. A. Yerokhin, A. N. Artemyev, V. M. Shabaev, M. M. Sysak, O. M. Zherebtsov, and G. Soff, Phys. Rev. A 64, 032109 (2001).
[9] J. Sapirstein and K. T. Cheng, Phys. Rev. A 64, 022502 (2001).
[10] S. A. Blundell, P. J. Mohr, W. R. Johnson, and J. Sapirstein, Phys. Rev. A 48, 2615 (1993).
[11] I. Lindgren, H. Persson, S. Salomonson, and L. Labzowsky, Phys. Rev. A 51, 1167 (1995).
[12] P. J. Mohr and J. Sapirstein, Phys. Rev. A 62, 052501 (2000).
[13] O. Yu. Andreev, L. N. Labzowsky, G. Plunien, and G. Soff, Phys. Rev. A 64, 042513 (2001).
[14] O. Yu. Andreev, L. N. Labzowsky, G. Plunien, and G. Soff, Phys. Rev. A 67, 012503 (2003).
[15] V. M. Shabaev, Izv. Vyssh. Uchebn. Zaved., Fiz. 33, 43 (1990) [Sov. Phys. J. 33, 660 (1990)].
[16] V. M. Shabaev, Phys. Rev. A 50, 4521 (1994).
[17] V. M. Shabaev, Physics Reports 356, 119 (2002).
[18] V. M. Shabaev and I. G. Fokeeva, Phys. Rev. A 49, 4489 (1994).
[19] M. M. Sysak, V. A. Erokhin, and V. M. Shabaev, Opt. Spektrosk. 92, 332 (2002) [Opt. Spectrosc. 92, 375 (2002)].
[20] W. R. Johnson, S. A. Blundell, and J. Sapirstein, Phys. Rev. A 37, 307 (1988).
Paper record: arXiv:physics/0304059 [physics.atom-ph]; Phys. Rev. A 67, 062506 (doi: 10.1103/physreva.67.062506).
Title: Evaluation of the two-photon exchange diagrams for the (1s)^2 2p_{3/2} electron configuration in Li-like ions.
Authors: A. N. Artemyev, V. M. Shabaev, M. M. Sysak, V. A. Yerokhin, T. Beier, G. Plunien, and G. Soff.
Abstract: We present ab initio calculations of the complete gauge-invariant set of two-photon exchange graphs for the (1s)^2 2p_{3/2} electron configuration in Li-like ions. These calculations are an important step towards the precise theoretical determination of the 2p_{3/2}-2s transition energy in the framework of QED.
A bounded-confidence model of opinion dynamics with heterogeneous node-activity levels

Grace J. Li and Mason A. Porter
Department of Mathematics, University of California, Los Angeles, California 90095, USA;
and Santa Fe Institute, Santa Fe, New Mexico 87501, USA

(Dated: March 22, 2023)

Agent-based models of opinion dynamics allow one to examine the spread of opinions between entities and to study phenomena such as consensus, polarization, and fragmentation. By studying a model of opinion dynamics on a social network, one can explore the effects of network structure on these phenomena. In social networks, some individuals share their ideas and opinions more frequently than others. These disparities can arise from heterogeneous sociabilities, heterogeneous activity levels, different prevalences to share opinions when engaging in a social-media platform, or something else. To examine the impact of such heterogeneities on opinion dynamics, we generalize the Deffuant-Weisbuch (DW) bounded-confidence model (BCM) of opinion dynamics by incorporating node weights. The node weights allow us to model agents with different probabilities of interacting. Using numerical simulations, we systematically investigate (using a variety of network structures and node-weight distributions) the effects of node weights, which we assign uniformly at random to the nodes. We demonstrate that introducing heterogeneous node weights results in longer convergence times and more opinion fragmentation than in a baseline DW model. The node weights in our BCM allow one to consider a variety of sociological scenarios in which agents have heterogeneous probabilities of interacting with other agents.

I. INTRODUCTION

Humans are connected in numerous ways, and our many types of interactions with each other influence what we believe and how we act.
To model how opinions spread between people or other agents, researchers across many disciplines have developed a variety of models of opinion dynamics [1-7]. However, in part because of the difficulty of gathering empirical data on opinions, much of the research on opinion dynamics has focused on theory and model development, with little empirical validation [1,6-8]. Some researchers have examined how human opinions change in controlled experimental settings with questionnaires [9-11], and others have examined empirical opinion dynamics using data from social-media platforms [12-14]. One of the many difficulties in empirically validating models of opinion dynamics is the potential sensitivity of model outcomes to measurement errors of real-life opinion values [15]. See Mäs [16] for a discussion of some of the challenges of validating models in the social sciences. Even with the difficulty of validating models of opinion dynamics, it is valuable to formulate and study such models. Developing mechanistic models forces researchers to clearly define assumptions, variables, and the relationships between variables; such models provide frameworks to explore and generate testable hypotheses about complex social phenomena [8,17]. In an agent-based model (ABM) of opinion dynamics, each agent is endowed with an opinion, and an underlying network structure governs which agents can interact with each other. We assume that all interactions are dyadic (i.e., between exactly two agents), and we suppose that the agent opinions take continuous values in a closed interval on the real line [18]. This interval represents a continuous spectrum of views about something, such as an ideology or the strength of support for a political candidate. At each discrete time step of an ABM of opinion dynamics, one selects which agents interact and then uses an update rule to determine if and how their opinions change.
Bounded-confidence models (BCMs) are a popular class of models with continuous-valued opinions [4]. In a BCM, interacting agents influence each other only when their opinions are sufficiently similar. This mechanism is reminiscent of the psychological idea of selective exposure, which asserts that people tend to seek information or conversations that support their existing views and avoid those that challenge their views [19]. Under this assumption, an agent's views are influenced directly only by agents with sufficiently similar views. For example, social-media platforms include polarizing posts, but individuals can choose whether or not to engage with such content; they do not adopt the views of everything in their social-media feeds. The two most popular BCMs are the Hegselmann-Krause (HK) model [20] and the Deffuant-Weisbuch (DW) model [21]. At each time step, the HK model has synchronous updates of node opinions, whereas the DW model has asynchronous opinion updates, with a single pair of agents (i.e., a dyad) interacting and potentially updating their opinions at each time. An asynchronous mechanism is consistent with empirical studies, which suggest that individuals in social networks have different activity times and frequencies [22]. In the present paper, we generalize the DW model to incorporate heterogeneous node-activity levels. Although the DW model has been generalized in many ways [5], few studies have modified the procedure to select which agents interact in a time step. The ones that have modified this procedure (see, e.g., Refs. [22][23][24][25]) have focused on specific scenarios, rather than on investigating the effects of introducing heterogeneities into agent-selection probabilities. Before we describe previous extensions of the DW model that incorporate heterogeneities in agent selection, we first discuss other generalizations of the model. The DW model was first studied on complete graphs [21]. 
To explore the effects of network structure on DW dynamics, many researchers subsequently simulated DW models on time-independent graphs [26]. Researchers have also examined DW models on hypergraphs [27] and coevolving networks [28]. Additionally, many studies have extended the DW model to consider different initial conditions and/or BCM parameters. Some studies have considered initial node opinions that arise from nonuniform distributions [27,[29][30][31], yielding initial conditions that are different from those in the standard DW model. Other investigations have incorporated heterogeneous confidence bounds or heterogeneous opinion compromises [31][32][33][34][35][36][37][38]. Such generalizations affect the opinion updates of interacting agents. In the standard DW model, one selects pairs of agents to interact uniformly at random, but social interactions are not uniform in real life. Few studies of the DW model have modified the selection procedure that determines which agents interact with each other; see, e.g., [22][23][24][25]. When selecting agents in a way that is not uniformly at random, one can think of the agents as having different activity levels that encode their interaction frequencies. (In a given time interval, we expect these agents to have different numbers of interactions.) The idea of heterogeneous node-activity levels plays an important role in activity-driven models of temporal networks [39]. There have also been studies of activity-driven models of opinion dynamics. Li et al. [40] developed an activity-driven model of opinion dynamics using networks with fixed nodes with assigned activity rates (i.e., assigned activation probabilities). At each time step of their model, one removes all existing edges and the active agents randomly form a fixed number of connections. All agents then evaluate the mean opinions of their neighbors to determine if and how to update their own opinions [40]. Baronchelli et al. 
[41] studied a voter model with heterogeneous edge weights, which one can interpret as encoding heterogeneous edge activities. Some researchers have generalized the DW model to incorporate heterogeneous agent selection. Alizadeh and Cioffi-Revilla [22] studied a modified DW model that incorporates a repulsion mechanism (which was proposed initially by Huet et al. [42]) in which interacting agents with opinions that differ by more than a cognitive-dissonance threshold move farther away from each other in the space of opinions. They used two-dimensional (2D) vector-valued opinions and placed their nodes on complete graphs. To model agents with different activity levels, Alizadeh and Cioffi-Revilla [22] implemented a Poisson node-selection probability, which one can interpret as independent internal "clocks" that determine agent activation. In comparison to selecting agent pairs uniformly at random (as in the standard DW model) the Poisson node-selection probability can either lessen or promote the spread of extremist opinions, depending on which opinions are more prevalent in more-active agents. Zhang et al. [23] examined a modified DW model with asymmetric updates on activity-driven networks. In their model, each node has a fixed activity potential, which one assigns uniformly at random from a distribution of activity potentials. The activity potential of an agent is its probability to activate. At each discrete time step, each active agent i randomly either (1) creates a message (e.g., a social-media post) or (2) boosts a message that was created by a neighboring agent j. If agent i boosts a message from agent j, then i updates its opinion using the standard DW update mechanism. Zhang et al. [23] simulated their model on a social network from Tencent Weibo (腾 讯微博) and found that the distribution of activity potentials influences the location of the transition between opinion consensus and fragmentation. 
The node weights in our BCM are similar in spirit to the activity potentials of Zhang et al. [23]; they can encode the social activity levels of individuals, such as their frequencies of posting or commenting on social media. However, the way that we incorporate node weights in our BCM differs fundamentally from Ref. [23]. We consider a time-independent network G, and we select a single pair of neighboring agents to interact at each time step. We first randomly select one agent with a probability that is proportional to its node weight, and then we randomly select a second neighboring agent with a probability that depends on its node weight. The two selected agents then update their opinions using the DW update mechanism. Heterogeneities in which interactions occur in a social network arise not only because some individuals are more likely to have interactions, but also because some pairs of individuals are more likely to interact than other pairs [41]. The curation of content in social-media feeds is affected by homophily, which is the idea that individuals have a tendency to connect with others that are similar to themselves (e.g., perhaps they have similar ideas or beliefs) [43]. Social-media feeds tend to show content to users that closely matches their profiles and past activities [44]. To examine the effect of such algorithmic bias on opinion dynamics, Sîrbu et al. [24] studied a modified DW model that includes a homophilypromoting activation mechanism. At each time step, one agent is selected uniformly at random, and then one of its neighbors is selected with a probability that depends on the magnitude of the opinion difference between that neighbor and the first agent. The simulations by Sîrbu et al. of their model on complete graphs suggest that more algorithmic bias yields slower convergence times and more opinion fragmentation [24]. Pansanella et al. 
[25] applied the same algorithmic-bias model to a variety of network topologies (specifically, Erdős-Rényi, Barabási-Albert, and Lancichinetti-Fortunato-Radicchi (LFR) graphs), and they found similar trends as Sîrbu et al. did on complete graphs. From the investigations in Refs. [22][23][24][25], we know that incorporating heterogeneous node-selection probabilities into a DW model can influence opinion dynamics. Each of these papers examined a specific implementation of heterogeneous agent selection; we are not aware of any systematic investigations of the effects of heterogeneous agent selection on opinion dynamics in asynchronous BCMs. In the present paper, we propose a novel BCM with heterogeneous agent-selection probabilities, which we implement using node weights. In general terms, we are studying a dynamical process on node-weighted networks. We use node weights to model agents with different probabilities of interacting. These probabilities can encode heterogeneities in individual behavior, such as in sociability or activity levels. We conduct a methodical investigation of the effects of incorporating heterogeneous node weights, which we draw from various distributions, into our generalization of the DW model. We examine these effects on a variety of types of networks. In our study, we consider fixed node weights that we assign in a way that disregards network structure and node opinions. However, one can readily adapt the node weights in our BCM to consider a variety of sociological scenarios in which nodes have heterogeneous selection probabilities. We find that introducing heterogeneous node weights into our node-weighted BCM results in longer convergence times and more opinion fragmentation than selecting nodes uniformly at random. 
Our results illustrate that it is important to consider the influence of assigning node-selection probabilities uniformly at random in models with heterogeneous node selection before drawing conclusions about more specific mechanisms such as algorithmic bias [24]. More generally, our model illustrates the importance and utility of incorporating node weights into network analysis and dynamics.

Our paper proceeds as follows. In Sec. II, we describe the standard DW model and present our generalized DW model with node weights to incorporate heterogeneous agent-selection probabilities. In Sec. III, we discuss the setup of our simulations, the networks and node-weight distributions that we examine, and the quantities that we compute to characterize the behavior of our model. In Sec. IV, we discuss the results of our numerical simulations of our BCM. In Sec. V, we summarize our results and discuss their implications, present some ideas for future work, and highlight the importance of studying networks with node weights. Our code is available at https://gitlab.com/graceli1/NodeWeightDW.

II. OUR MODEL

In this section, we first discuss the Deffuant-Weisbuch (DW) [21] bounded-confidence model (BCM) of opinion dynamics, and we then introduce our BCM with heterogeneous node-selection probabilities.

A. The standard Deffuant-Weisbuch (DW) BCM

The DW model was introduced over two decades ago [21], and this model and its extensions have been studied extensively since then [4,5]. The DW model was examined originally on complete graphs and encoded agent opinions as scalar values in a closed interval on the real line. Deffuant et al. [21] let each agent have an opinion in [0, 1], and we follow this convention. The standard DW model has two parameters. The "confidence bound" c ∈ [0, 1] is a thresholding parameter; when two agents interact, they compromise their opinions by some amount if and only if their opinions differ by less than c.
The "compromise parameter" m ∈ (0, 0.5] (which is also sometimes called a convergence parameter [21] or a cautiousness parameter [26]) parametrizes the amount that an agent changes its opinion to compromise with the opinion of an agent with whom it interacts. In the standard DW model, the opinions of the agents update asynchronously. We endow each agent with an initial opinion. At each discrete time, one uniformly randomly selects a pair of agents to interact. At time t, suppose that we pick agents i and j, whose associated opinions are x_i and x_j, respectively. Agents i and j update their opinions through the following equations:

$$
x_i(t+1) = \begin{cases} x_i(t) + m\,\Delta_{ji}(t)\,, & \text{if } |\Delta_{ij}(t)| < c \\ x_i(t)\,, & \text{otherwise}\,, \end{cases}
\qquad
x_j(t+1) = \begin{cases} x_j(t) + m\,\Delta_{ij}(t)\,, & \text{if } |\Delta_{ij}(t)| < c \\ x_j(t)\,, & \text{otherwise}\,, \end{cases}
\tag{1}
$$

where ∆_ij(t) = x_i(t) − x_j(t). When |∆_ij(t)| < c, we say that agents i and j are "receptive" to each other at time t. When |∆_ij(t)| ≥ c, we say that agents i and j are "unreceptive" to each other. When one extends the DW model to consider an underlying network of agents [45], only adjacent agents are allowed to interact. Consider an undirected network G = (V, E), where V is the set of nodes and E is the set of edges between them. Let N = |V| denote the size of the network (i.e., the number of nodes of the network). Each node of a network represents an agent, and each edge between two agents encodes a social or communication tie between them. At each discrete time, one selects an edge of a given network uniformly at random, and the two agents that are attached to that edge interact with each other; they update their opinions following Eq. (1). For the DW model, an alternative to an edge-based approach of randomly selecting an interacting edge is to take a node-based approach to determine the agents that interact. (See Ref. [46] for a discussion of node-based updates versus edge-based updates in the context of voter models.)
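As a concrete illustration, the opinion update of Eq. (1) can be sketched in Python as follows (a minimal sketch of our own; the array representation and example values are not from the paper):

```python
import numpy as np

def dw_update(x, i, j, c=0.3, m=0.5):
    """Apply the DW opinion update of Eq. (1) to agents i and j in place.

    x : array of agent opinions in [0, 1]
    c : confidence bound; m : compromise parameter.
    """
    delta_ij = x[i] - x[j]
    if abs(delta_ij) < c:            # agents are "receptive" to each other
        x[i] += m * (-delta_ij)      # m * Delta_ji(t) = m * (x_j - x_i)
        x[j] += m * delta_ij         # m * Delta_ij(t) = m * (x_i - x_j)
    return x

# Example: a receptive pair fully compromises when m = 0.5.
x = np.array([0.2, 0.4, 0.9])
dw_update(x, 0, 1, c=0.3, m=0.5)    # |0.2 - 0.4| < 0.3, so both opinions become 0.3
dw_update(x, 0, 2, c=0.3, m=0.5)    # |0.3 - 0.9| >= 0.3, so nothing changes
```

Note that both opinion changes use the pre-interaction difference ∆_ij(t), so the update is symmetric and simultaneous.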
In a node-based approach, one first randomly selects one node and then randomly selects a second node from its neighbors. To capture the fact that some agents have more frequent interactions (such as from greater sociability or a stronger desire to share their opinions) than others, we implement a node-based agent-selection procedure in our study. The choice between edge-based and node-based agent selection can have substantial effects on the dynamics of voter models of opinion dynamics [46], and we expect that this is also true for other types of opinion-dynamics models. We are not aware of a comparison of edge-based and node-based agent selection in asynchronous BCMs (and, in particular, in DW models), and it seems both interesting and relevant to explore this issue. Most past research on the DW model has considered edge-based selection [5]. However, Refs. [22,24,25] used a node-based selection procedure to model heterogeneous activities of agents. B. A BCM with heterogeneous node-selection probabilities We now introduce our BCM with heterogeneous nodeselection probabilities. Consider an undirected network G = (V, E). As in the standard DW model, suppose that each agent i has a time-dependent opinion x i (t). In our BCM, each agent also has a fixed node weight w i that encodes sociability, how frequently it engages in conversations, or simply the desire to share its opinions. One can think of a node's weight as a quantification of how frequently it talks to its friends or posts on social media. By incorporating network structure, the standard DW model can include agents with different numbers of friends (or other social connections). However, selecting interacting node pairs uniformly at random is unable to capture the heterogeneous interaction frequencies of individuals. By introducing node weights, we encode such heterogeneity and then examine how it affects opinion dynamics in a BCM. 
Although we employ fixed node weights, one can adapt our model to include time-dependent node weights, such as through purposeful strategies (e.g., posting on social media more frequently as one's opinions become more extreme). In our node-weighted BCM, at each discrete time, we first select an agent i with a probability that is proportional to its weight. Agent i then interacts with a neighbor j, which we select with a probability that is equal to its weight divided by the sum of the weights of i's neighbors. That is, the probabilities of first selecting agent i and then selecting agent j are

$$
P_1(i) = \frac{w_i}{\sum_{k=1}^{N} w_k}\,, \qquad P_2(j \mid i) = \frac{w_j}{\sum_{k \in \mathcal{N}(i)} w_k}\,, \tag{2}
$$

where N(i) denotes the neighborhood (i.e., the set of neighbors) of node i. Once we select the pair of interacting agents, we update their opinions following the DW opinion update rule in Eq. (1). Our BCM incorporates heterogeneous node-selection probabilities with node weights that model phenomena such as the heterogeneous sociability of individuals. One can also study heterogeneous selection probabilities of pairwise (i.e., dyadic) interactions, instead of focusing on the probabilities of selecting individuals. For instance, an individual may discuss their ideological views with a close friend more frequently than with a work colleague. One can use edge weights to determine the probabilities of selecting the dyadic interactions in a BCM. At each discrete time, one can select an edge with a probability that is proportional to its weight. We do not examine edge-based heterogeneous selection probabilities in the present paper, but it is worth exploring in BCMs.

III. METHODS AND SIMULATION DETAILS

In this section, we discuss the network structures and node-weight distributions that we consider, the setup of our numerical simulations, and the quantities that we compute to characterize the results of our simulations. A.
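The two-step node-based selection of Eq. (2) can be sketched as follows (our own illustration; the adjacency representation is an assumption, not from the paper):

```python
import random

def select_interacting_pair(neighbors, w):
    """Two-step node-based agent selection of Eq. (2).

    neighbors : dict mapping each node to a list of its neighbors
    w         : dict mapping each node to a positive node weight
    Returns a pair (i, j): i is chosen with probability proportional to
    w[i] over all nodes, and j is then chosen among i's neighbors with
    probability proportional to w[j] over that neighborhood.
    """
    nodes = list(neighbors)
    i = random.choices(nodes, weights=[w[v] for v in nodes], k=1)[0]
    nbrs = neighbors[i]
    j = random.choices(nbrs, weights=[w[v] for v in nbrs], k=1)[0]
    return i, j
```

With all weights equal, this reduces to the uniformly random node-based selection of the baseline model described in Sec. III.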
Network structures

We now describe the details of the networks on which we simulate our node-weighted BCM. We summarize these networks in Table I. We first simulate our BCM on complete graphs as a baseline scenario that will allow us to examine how incorporating heterogeneous node-selection probabilities affects opinion dynamics. Although DW models were introduced more than 20 years ago, complete graphs are still the most common type of network on which to study them [4]. To examine finite-size effects from our networks, we consider complete graphs with sizes N ∈ {10, 20, 30, 45, 65, 100, 150, 200, 300, . . . , 1000}. For all other synthetic networks, we consider networks with N = 500 nodes. We consider synthetic networks that we generate using the G(N, p) Erdős-Rényi (ER) random-graph model, where p is the homogeneous, independent probability of an edge between each pair of nodes [49]. When p = 1, this yields a complete graph. We examine G(500, p) graphs with p ∈ {0.1, 0.3, 0.5, 0.7}. To determine how a network with an underlying block structure affects the dynamics of our node-weighted BCM [46], we consider two types of stochastic-block-model (SBM) networks. The first has a two-community structure, in which there is a larger probability of edges within a community than between communities. The second SBM has a core-periphery structure, in which there is a set of core nodes with a large probability of edges within the set, a set of peripheral nodes with a small probability of edges within the set, and edges exist between core nodes and peripheral nodes with an intermediate probability. To construct our 2×2 SBMs, we partition a network into two sets of nodes; set A has 375 nodes (i.e., 75% of the network) and set B has 125 nodes (i.e., 25% of the network).
We define a symmetric edge-probability matrix

$$
P = \begin{pmatrix} P_{AA} & P_{AB} \\ P_{AB} & P_{BB} \end{pmatrix}\,, \tag{3}
$$

where P_AA and P_BB are the probabilities that an edge exists between two nodes in set A and set B, respectively, and P_AB is the probability that an edge exists between a node in set A and a node in set B. In a two-community SBM, the probabilities P_AA and P_BB are larger than P_AB, so edges between nodes in the same community exist with a larger probability than edges between nodes in different communities. For our two-community SBM, we choose P_AA and P_BB so that the expected mean degree matches that of the G(500, 0.1) ER model if we only consider edges within set A or edges within set B. A network from the G(N, p) model has an expected mean degree of p(N − 1) [49], so we want the two communities of these SBM networks to have an expected mean degree of 49.9 = 0.1 × 499. We thus use the edge probabilities P_AA = 49.9/374 and P_BB = 49.9/124. To ensure that there are few edges between the sets A and B, we choose P_AB = 1/500. We want our core-periphery SBM with core set A and periphery set B to satisfy P_AA > P_AB > P_BB. We choose P_AA so that the expected mean degree matches that of the G(500, 0.3) model (i.e., it is 147.9) if we only consider edges within the set A. We thus choose the edge probability P_AA = 147.9/374. To satisfy P_AA > P_AB > P_BB, we choose P_AB = 1/25 and P_BB = 1/174. Finally, we investigate our node-weighted BCM on a real social network from Facebook friendship data. We use the Caltech network from the Facebook100 data set; its nodes encode individuals at Caltech, and its edges encode Facebook "friendships" between them on one day in fall 2005 [47,48]. We only consider the network's largest connected component, which has 762 nodes and 16,651 edges.

B. Node-weight distributions

In Table II, we give the parameters and probability density functions of the node-weight distributions that we examine in our BCM.
In this subsection, we discuss our choices of distributions. To study the effects of incorporating node weights in our BCM, we compare our model to a baseline DW model. To ensure a fair comparison, we implement a baseline DW model that selects interacting agents uniformly at random using a node-based selection process. As we discussed in Sec. I, it is much more common to employ an edge-based selection process. We refer to the case in which all node weights are equal to 1 (that is, w_i = 1 for all nodes i) as the "constant weight distribution". The constant weight distribution (and any other situation in which all node weights equal the same positive number) results in a uniformly random selection of nodes for interaction. This is what we call the "baseline DW model"; we compare our DW models with heterogeneous node weights to this baseline model. We reserve the term "standard DW model" for the DW model with uniformly random edge-based selection of agents.

TABLE II. The node-weight distributions that we examine, with their probability density functions, parameter values, supports, and means.

Distribution | Probability density function | Parameter | Support | Mean | Mean value
Constant | δ(x − 1) | N/A | {1} | 1 | 1
Pareto-80-10 | α/x^(α+1) | α = log_4.5(10) | [1, ∞) | α/(α − 1) | 2.8836
Pareto-80-20 | α/x^(α+1) | α = log_4(5) | [1, ∞) | α/(α − 1) | 7.2126
Pareto-90-10 | α/x^(α+1) | α = log_9(10) | [1, ∞) | α/(α − 1) | 21.8543
Exp-80-10 | (1/β) exp(−(x − 1)/β) | β = 1.8836 | [1, ∞) | β + 1 | 2.8836
Exp-80-20 | (1/β) exp(−(x − 1)/β) | β = 6.2125 | [1, ∞) | β + 1 | 7.2125
Exp-90-10 | (1/β) exp(−(x − 1)/β) | β = 20.8543 | [1, ∞) | β + 1 | 21.8543
Unif-80-10 | 1/(b − 1) | b = 4.7672 | [1, b] | (1 + b)/2 | 2.8836
Unif-80-20 | 1/(b − 1) | b = 13.425 | [1, b] | (1 + b)/2 | 7.2125
Unif-90-10 | 1/(b − 1) | b = 42.7086 | [1, b] | (1 + b)/2 | 21.8543

The node weights in our BCM encode heterogeneities in interaction frequencies, such as when posting content online. The majority of online content arises from a minority of user accounts [50]. A "90-9-1 rule" has been proposed for such participation inequality. In this rule of thumb, about 1% of the individuals in online discussions (e.g., on social-media platforms) account for most contributions, about 9% of the individuals contribute on occasion, and the remaining 90% of the individuals are present online (e.g., they consume content) but do not contribute to it [51].
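The mean values in Table II follow from the standard formulas for these distributions; the quick check below (our own script, not from the paper) confirms them against the listed parameters:

```python
import math

# Mean of a Pareto type-I distribution on [1, inf): alpha / (alpha - 1).
for alpha, mean in [(math.log(10, 4.5), 2.8836),
                    (math.log(5, 4), 7.2126),
                    (math.log(10, 9), 21.8543)]:
    assert abs(alpha / (alpha - 1) - mean) < 5e-4

# Mean of the exponential distribution shifted to [1, inf): beta + 1.
for beta, mean in [(1.8836, 2.8836), (6.2125, 7.2125), (20.8543, 21.8543)]:
    assert abs((beta + 1) - mean) < 1e-9

# Mean of the uniform distribution on [1, b]: (1 + b) / 2.
for b, mean in [(4.7672, 2.8836), (13.425, 7.2125), (42.7086, 21.8543)]:
    assert abs((1 + b) / 2 - mean) < 5e-4
```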
Participation inequality has been documented in a variety of situations, including the numbers of posts on digital-health social networks [52], posts on internet support groups [53], and contributions to open-source software-development platforms [54]. Inequality in user activity has also been examined on Twitter [55], and Xiong and Liu [56] used a power-law distribution to model the number of tweets about different topics. A few years ago, a survey by the Pew Research Center found that about 10% of the accounts of adult Twitter users in the United States generate about 80% of the tweets of such accounts [57]. One can interpret the node weights in our BCM as encoding the participation of individuals in the form of contributing content to a social-media platform. We model online participation inequality by using a Pareto distribution for the node weights. This choice of distribution is convenient because of its simple power-law form. It has also been used to model inequality in a variety of other contexts, including distributions of wealth, word frequencies, website visits, and numbers of paper citations [58]. When representing social-media interactions, we only care about accounts that make posts or comments; we ignore inactive accounts. Therefore, we impose a minimum node weight in our model. We use the Pareto type-I distribution, which is defined on [1, ∞), so each node has a minimum weight of 1. This positive minimum weight yields a reasonable convergence time for the simulations of our BCM. Nodes with weights close to 0 would have very small probabilities of interacting, and allowing such weights would prolong simulations. Let Pareto-X-Y denote the continuous Pareto distribution in which (in theory) X% of the total node weight is distributed among Y% of the nodes. In practice, once we determine the N node weights for our simulations from a Pareto node-weight distribution, it is not true that precisely X% of the total weight is held by Y% of the N nodes.
Inspired by the results of the aforementioned Pew Research Center survey of Twitter users [57], we first consider a Pareto-80-10 distribution, in which we expect 80% of the total weight to be distributed among 10% of nodes. The Pareto principle (which is also known as the "80-20 rule") is a popular rule of thumb that suggests that 20% of individuals have 80% of the available wealth [58]. Accordingly, we also consider a Pareto-80-20 distribution. Finally, as an example of a node-weight distribution with a more extreme inequality, we also consider a Pareto-90-10 distribution. We also examine uniform and exponential distributions of node weights. To match the domain of our Pareto distributions, we shift the uniform and exponential distributions so that their minimum node weight is also 1. We also choose their parameters to approximate the means of our Pareto distributions. We use Exp-X-Y and Unif-X-Y as shorthand notation to denote exponential and uniform distributions, respectively, with means that match that of the Pareto-X-Y distribution to four decimal places (see Table II). When we examine the results of our numerical simulations, we want to compare distributions with similar means. We use the phrase "80-20 distributions" to refer to the Pareto-80-20, Exp-80-20, and Unif-80-20 distributions. We analogously use the phrases "80-10 distributions" and "90-10 distributions." In total, we examine three different families of distributions (Pareto, exponential, and uniform) with tails of different heaviness. In Table II, we show the details of the probability density functions and the parameters of our node-weight distributions. C. Simulation specifications In our node-weighted BCM, agents have opinions in the one-dimensional (1D) opinion space [0, 1]. Accordingly, we examine values of the confidence bound c ∈ (0, 1) [59]. We examine values of the compromise parameter m ∈ (0, 0.5], which is the typically studied range for the DW model [4,26]. 
When m = 0.5, two interacting agents that influence each other fully compromise and average their opinions. When m < 0.5, the two agents move towards each other's opinions, but they do not change their opinions to the mean (i.e., they do not fully compromise). In our node-weighted BCM, the generation of graphs in a random-graph ensemble, the sets of node weights, the sets of initial opinions, and the selection of pairs of agents to interact at each time step are all stochastic. We use Monte Carlo simulations to reduce these sources of noise in our simulation results. For each of our random-graph models (i.e., the ER and SBM graphs), we generate 5 graphs. For each graph and each node-weight distribution, we randomly generate 10 sets of node weights. For each set of node weights, we generate 10 sets of initial opinions that are distributed uniformly at random. In total, we consider 100 distinct sets of initial opinions and node weights for the Monte Carlo simulations of each individual graph. When we compare simulations from different distributions of node weights in the same individual graph, we reuse the same 100 sets of initial opinions. In theory, the standard DW model and our node-weighted DW model can take infinitely long to approach a steady state. We define an "opinion cluster" S_r to be a maximal connected set of agents in which the pairwise differences in opinions are all strictly less than the confidence bound c; adding any other agent to S_r will yield at least one pair of adjacent agents with an opinion difference of at least c. Equivalently, for each graph G, we define the "effective-receptivity network" G_eff(t) = (V, E_eff(t)) as the time-dependent subgraph of it with edges only between pairs of nodes that are receptive to each other's opinions. That is,

$$
E_{\text{eff}}(t) = \{(i, j) \in E : |x_i(t) - x_j(t)| < c\}\,. \tag{4}
$$

The opinion clusters are the connected components of the effective-receptivity network G_eff(t).
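The opinion clusters can be computed as the connected components of the effective-receptivity network of Eq. (4); a minimal sketch (our own implementation, with an edge-list representation that is an assumption on our part):

```python
from collections import defaultdict

def opinion_clusters(edges, x, c):
    """Connected components of the effective-receptivity network (Eq. (4)).

    edges : iterable of (i, j) pairs of the underlying network
    x     : dict mapping node -> opinion
    c     : confidence bound
    Returns a list of opinion clusters (sets of nodes).
    """
    adj = defaultdict(set)
    for i, j in edges:
        if abs(x[i] - x[j]) < c:    # keep only "receptive" edges
            adj[i].add(j)
            adj[j].add(i)
    seen, clusters = set(), []
    for v in x:                     # nodes with no receptive edges form singletons
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:                # iterative depth-first search
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters
```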
If two opinion clusters S_1 and S_2 are separated by a distance of at least c (i.e., |x_i − x_j| ≥ c for all i ∈ S_1 and j ∈ S_2) at some time T̃, then (because c is fixed) no agents from S_1 can influence the opinion of an agent in S_2 (and vice versa) for all t ≥ T̃. Therefore, in finite time, we observe the formation of steady-state clusters of distinct opinions. Inspired by Meng et al. [26], we specify that one of our simulations has "converged" if all opinion clusters are separated from each other by a distance of at least c and each opinion cluster has an opinion spread that is less than a tolerance of 0.02. That is, for each cluster S_r, we have that max_{i,j∈S_r} |x_i − x_j| < 0.02. We use T to denote the convergence time in our simulations; the connected components of G_eff(T) are the steady-state opinion clusters. It is computationally expensive to numerically simulate a DW model. Additionally, as we will show in Sec. IV, our node-weighted DW model with heterogeneous node weights often converges to a steady state even more slowly than the baseline DW model. To reduce the computational burden of checking for convergence, we do not check for it at each time step and we compute the convergence time to three significant figures. To guarantee that each simulation stops in a reasonable amount of time, we set a bailout time of 10^9 time steps. In our simulations, the convergence time is always shorter than the bailout time. We thus report the results of our simulations as steady-state results.

D. Quantifying opinion consensus and fragmentation

In our numerical simulations, we investigate which situations yield consensus (specifically, they result in one "major" opinion cluster, which we will discuss shortly) at steady state and which situations yield opinion fragmentation (when there are at least two distinct major clusters) at steady state.
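The convergence criterion described above (clusters separated by at least c, each with opinion spread below the tolerance) can be sketched as follows (our own sketch; the cluster representation is an assumption):

```python
def has_converged(clusters, x, c, tol=0.02):
    """Check the paper's convergence criterion: every pair of opinion
    clusters is separated by at least c, and each cluster's opinion
    spread (max - min) is less than the tolerance.

    clusters : list of sets of nodes; x : dict mapping node -> opinion.
    """
    ops = [[x[v] for v in cl] for cl in clusters]
    # Each cluster must have spread max - min < tol.
    if any(max(o) - min(o) >= tol for o in ops):
        return False
    # Distinct clusters must be separated by a distance of at least c.
    for a in range(len(ops)):
        for b in range(a + 1, len(ops)):
            if min(abs(u - v) for u in ops[a] for v in ops[b]) < c:
                return False
    return True
```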
[60] We are also interested in how long it takes to determine the steady-state behavior of a simulation and in quantifying opinion fragmentation when it occurs. To investigate these model behaviors, we compute the convergence time and the number of steady-state opinion clusters. It is common to study these quantities in investigations of BCMs [4,6,26]. In some situations, an opinion cluster has very few agents. Consider a 500-node network in which 499 agents eventually have the same opinion, but the remaining agent (say, Agent 86, despite repeated attempts by Agent 99 and other agents to convince them) retains a distinct opinion at steady state. In applications, it is not appropriate to think of this situation as opinion fragmentation. To handle such situations, we use a notion of "major clusters" and "minor clusters" [34,61]. We characterize major and minor clusters in an ad hoc way. We define a "minor" opinion cluster in a network as an opinion cluster with at most 2% of the agents. Any opinion cluster that is not a minor cluster is a "major" cluster. In our simulations, we calculate the numbers of major and minor opinion clusters at steady state. We only account for the number of major clusters when determining if a simulation reaches a consensus state (i.e., exactly one major cluster) or a fragmented state (i.e., more than one major cluster). We still track the number of minor clusters and use the minor clusters when quantifying opinion fragmentation. Quantifying opinion fragmentation is much less straightforward than determining whether or not there is fragmentation. Researchers have proposed a variety of notions of fragmentation and polarization [62], and they have also proposed several ways to quantify such notions [62][63][64]. In principle, a larger number of opinion clusters is one indication of more opinion fragmentation. However, as we show in Fig. 1, there can be considerable variation in the sizes (i.e., the number of nodes) of the opinion clusters.
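The major/minor classification with the 2% threshold can be expressed compactly (a sketch of our own; the function name and interface are not from the paper):

```python
def classify_clusters(clusters, N, threshold=0.02):
    """Split opinion clusters into "major" and "minor" clusters.

    A minor cluster contains at most threshold * N agents (2% by
    default); every other cluster is major. A simulation reaches
    consensus when there is exactly one major cluster.
    """
    minor = [cl for cl in clusters if len(cl) <= threshold * N]
    major = [cl for cl in clusters if len(cl) > threshold * N]
    return major, minor
```

For a 500-node network, the threshold is 10 nodes, so the lone holdout in the example above forms a minor cluster and the system still counts as consensus.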
For example, suppose that there are two opinion clusters. If the two opinion clusters have the same size, then one can view the opinions in the system as more polarized than if one opinion cluster has a large majority of the nodes and the other opinion cluster has a small minority. Additionally, although we use only major clusters to determine if a system reaches a consensus or fragmented state, we seek to distinguish quantitatively scenarios with opinion clusters (major or minor) of similar sizes from scenarios with opinion clusters with a large range of sizes. Following Han et al. [65], we do this by calculating Shannon entropy. Suppose that there are K opinion clusters, which we denote by S_r for r ∈ {1, . . . , K}. We refer to the set {S_r}_{r=1}^{K} as an "opinion-cluster profile"; such a profile is a partition of a network. The fraction of agents in opinion cluster S_r is |S_r|/N. The Shannon entropy H of the opinion-cluster profile is

$$
H = -\sum_{r=1}^{K} \frac{|S_r|}{N} \ln \frac{|S_r|}{N}\,. \tag{5}
$$

The Shannon entropy H gives us a scalar value to quantify the distribution of opinion-cluster sizes. For a given opinion-cluster profile, H indicates the increase in information of knowing the opinion-cluster membership of a single agent instead of not knowing the cluster membership of any agents. For a fixed K, the entropy H is larger if the cluster sizes are closer in magnitude than if there is more heterogeneity in the cluster sizes. For opinion-cluster profiles with similar cluster sizes, H is larger if there are more clusters. We use H to quantify opinion fragmentation, with larger H corresponding to more opinion fragmentation. We calculate the steady-state entropy H(T) using all steady-state opinion clusters (i.e., both major and minor clusters). Another way to quantify opinion fragmentation is to look at a local level and consider individual agents of a network. As Musco et al.
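Equation (5) is a one-liner in practice; the example below (our own) also illustrates the point about equal-size clusters being more polarized, since a 250/250 split has a larger entropy than a 490/10 split:

```python
import math

def cluster_entropy(clusters, N):
    """Shannon entropy of an opinion-cluster profile (Eq. (5))."""
    return -sum((len(S) / N) * math.log(len(S) / N) for S in clusters)

# Two equal-size clusters (maximal polarization for K = 2) ...
even = cluster_entropy([set(range(250)), set(range(250, 500))], 500)
# ... versus a lopsided split with one dominant cluster.
skew = cluster_entropy([set(range(490)), set(range(490, 500))], 500)
# even = ln 2, and even > skew.
```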
[63] pointed out, if an individual agent has many neighbors with similar opinions to it, then it may be "unaware" of other opinions in the network. For example, an agent can observe that a majority of its neighbors hold an opinion that is uncommon in the global network. This phenomenon is sometimes called a "majority illusion" [66]. If a set of adjacent agents tend to have neighbors with similar opinions as theirs, they may be in an "echo chamber" [67], as it seems that they are largely exposed only to conforming opinions. To quantify the local observations of agents, Musco et al. [63] calculated a notion of local agreement that measures the fraction of an agent's neighbors with opinions that are on the same side of the mean opinion in a network. In our simulations, we often observe opinion fragmentation with three or more opinion clusters. Therefore, we need to look beyond the mean opinion of an entire network. To do this, we introduce the "local receptiveness" of an agent. At time t, a node i with neighborhood N(i) has a local receptiveness of

$$
L_i(t) = \frac{|\{j \in \mathcal{N}(i) : |x_i(t) - x_j(t)| < c\}|}{|\mathcal{N}(i)|}\,. \tag{6}
$$

That is, L_i(t) is the fraction of the neighbors of agent i at time t to which it is receptive (i.e., with which it will compromise its opinion if they interact). In the present paper, we only consider connected networks, so each agent i has |N(i)| ≥ 1 neighbors. If one wants to consider isolated nodes, one can assign them a local receptiveness of 0 or 1. In our numerical simulations, we calculate the local receptiveness of each agent of a network at the convergence time T. We then calculate the mean of L_i(T) over all agents in the network. This is the steady-state mean local receptiveness, as it is based on edges in the steady-state effective-receptivity network G_eff(T). When consensus is not reached, a smaller mean local receptiveness is an indication of greater opinion fragmentation. As we will discuss in Sec.
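The local receptiveness of Eq. (6) and its network mean can be sketched as follows (our own illustration; the dict-based interface is an assumption):

```python
def local_receptiveness(i, neighbors, x, c):
    """Fraction of i's neighbors to which it is receptive (Eq. (6))."""
    nbrs = neighbors[i]
    return sum(abs(x[i] - x[j]) < c for j in nbrs) / len(nbrs)

def mean_local_receptiveness(neighbors, x, c):
    """Mean of L_i over all agents (assumes a connected network,
    so every node has at least one neighbor)."""
    vals = [local_receptiveness(i, neighbors, x, c) for i in neighbors]
    return sum(vals) / len(vals)
```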
IV, the Shannon entropy and the mean local receptiveness can provide insight into the extent of opinion fragmentation when one considers them in concert with the number of opinion clusters.

FIG. 1. Sample trajectories of agent opinions versus time in a single simulation of our node-weighted BCM on a 500-node complete graph with a constant weight distribution. Therefore, this situation corresponds to our baseline DW model. We color the trajectory of each node by its final opinion cluster. Observe that the final opinion clusters have different sizes. There is a minor cluster (in black); it consists of a single node whose final opinion is about 0.4. The opinion cluster that converges to the largest opinion value has about twice as many nodes as the other major clusters.

IV. NUMERICAL SIMULATIONS AND RESULTS

In this section, we present results of our numerical simulations of our node-weighted BCM. In our numerical experiments, the compromise parameter takes the values m ∈ {0.1, 0.3, 0.5}. For the confidence bound, we first consider the values c ∈ {0.1, 0.3, 0.5, 0.7, 0.9}, and we then examine additional values of c near regions with interesting results. As we discussed in Sec. III C, for each individual graph, we simulate a total of 100 distinct sets of initial opinions and node weights in Monte Carlo simulations of our BCM. For each of the random-graph models (i.e., ER and SBM graphs), we generate 5 graphs. For the 500-node complete graphs, we simulate the 10 weight distributions in Table II. Because of computation time, we consider the 90-10 distributions only on 500-node complete graphs. For the other networks in Table I, we consider 7 distributions in total: the constant weight distribution, the 80-10 distributions, and the 80-20 distributions. In Table III, we summarize the trends that we observe in the examined networks. In the following subsections, we discuss details of our results for each type of network.
The numbers of major and minor clusters, Shannon entropies, and values of mean local receptiveness are all steady-state values. We include our code and figures in our repository at https://gitlab.com/graceli1/NodeWeightDW. In the present paper, we visualize our results using heat maps; in our code repository, we also show visualizations with line plots.

A. Complete graphs

The simplest underlying network structure on which we run our node-weighted BCM is a complete graph. Complete graphs provide a baseline setting to examine how incorporating heterogeneous node-selection probabilities affects opinion dynamics. In our numerical simulations on complete graphs, we consider all three means (which we denote by 80-10, 80-20, and 90-10 in Table II) for each of the uniform, exponential, and Pareto node-weight distribution families. The standard DW model on a complete graph with agents with opinions in the interval [0, 1] eventually reaches consensus if the confidence bound c ≥ 0.5. As one decreases c from 0.5, there are progressively more steady-state opinion clusters (both minor and major) [34,68]. Lorenz [34] showed using numerical simulations that the number of major clusters is approximately ⌊1/(2c)⌋.

TABLE III. A summary of the trends that we observe in the examined networks.

Opinion fragmentation^a
• For fixed values of c and m and a fixed distribution mean, there is more opinion fragmentation as the tail of a distribution becomes heavier.
• For fixed values of c and m and a given family of distributions, there is more opinion fragmentation when a distribution has a larger mean.

Number of major clusters
• A larger minimum value of c is required to always reach consensus for a heterogeneous weight distribution than for the constant weight distribution.
• For fixed values of c and m and a fixed distribution mean, there are more major clusters as the tail of a distribution becomes heavier.
• For fixed values of c and m and a given family of distributions, there are more major clusters when a distribution has a larger mean.
Number of minor clusters
• For the constant weight distribution and for fixed c, there are typically more minor clusters when the compromise parameter m ∈ {0.3, 0.5} than when m = 0.1. The heterogeneous weight distributions do not follow this trend.^b

^a We quantify opinion fragmentation using Shannon entropy and mean local receptiveness. We observe clearer trends for Shannon entropy than for the mean local receptiveness.
^b For the Caltech network, we usually observe more minor clusters when m ∈ {0.3, 0.5} than when m = 0.1 for each of our heterogeneous weight distributions.

We now discuss the convergence times of our BCM simulations for various node-weight distributions. For fixed values of c and m, all of the heterogeneous weight distributions yield longer convergence times than the constant weight distribution. Additionally, for fixed c and m and a fixed family of distributions (uniform, exponential, or Pareto), the convergence time increases as we increase the mean of the distribution. For fixed c and for each heterogeneous weight distribution, the convergence time also increases as we decrease the compromise parameter m. When calculating convergence time, we include time steps in which two nodes interact but do not change their opinions. To see if the heterogeneous weight distributions have inflated convergence times as a result of having more of these futile interactions, we also calculate the number of time steps to converge when we exclude such time steps. That is, we count the total number of opinion changes that it takes to converge. On a logarithmic scale, there is little difference between the total number of opinion changes and the total number of time steps to converge. We include a plot of the numbers of opinion changes in our code repository. In Fig. 3, we show the numbers of major opinion clusters at steady state in our BCM simulations for various node-weight distributions. For all weight distributions, consensus occurs in all of our simulations when the confidence bound c ≥ 0.5.
For fixed values of c ∈ [0.1, 0.4] and m, the heterogeneous weight distributions yield more steady-state major clusters than the constant weight distribution. When we introduce heterogeneous node weights into our BCM, we need a larger confidence bound c than for the constant weight distribution to always reach consensus in our simulations. It appears that our BCM with heterogeneous node weights tends to have more opinion fragmentation than the baseline DW model. For fixed c and m, we observe for each distribution family (uniform, exponential, and Pareto) that there are more steady-state major clusters when the distribution mean is larger. To see this, proceed from left to right in Fig. 3 from the 80-10 distributions to the 80-20 distributions and then to the 90-10 distributions. Additionally, for fixed values of c and m and a fixed distribution mean, there are more steady-state major clusters as we proceed from a uniform distribution to an exponential distribution and then to a Pareto distribution. To investigate how the node-weight distribution and the BCM parameters (i.e., c and m) affect the amount of opinion fragmentation, we calculate the Shannon entropy and mean local receptiveness (see Sec. III D) at steady state. In Fig. 4, we show the steady-state entropy values of our BCM simulations for various node-weight distributions. For all node-weight distributions, when there is opinion fragmentation instead of consensus, the steady-state entropy increases as we decrease the confidence bound c for fixed m. In line with our observations in Fig. 3, when c ∈ [0.1, 0.4], simulations of heterogeneous weight distributions usually yield larger entropies than the constant weight distribution. For fixed values of c and m and a fixed distribution mean, we also observe a slightly larger entropy as we proceed from a uniform distribution to an exponential distribution and then to a Pareto distribution. 
For fixed c and m, for the Pareto distributions, the entropy increases as we increase the mean of the distribution. (Proceed from left to right in Fig. 4.) The exponential and uniform distributions show the same trend, although it is less pronounced (i.e., the entropies do not increase as much) than for the Pareto distributions. For the exponential and uniform distributions, a larger mean weight results in more major opinion clusters. For these two families of distributions, increasing the mean weight also tends to lead to smaller major opinion clusters. Therefore, given either a uniform or an exponential distribution, we obtain similar Shannon entropies for different distribution means. Consequently, if we quantify fragmentation using Shannon entropy, we conclude that, in comparison to the Pareto distributions, increasing the mean weight has less effect on the amount of opinion fragmentation for the uniform and exponential distributions.
FIG. 3. The numbers of major opinion clusters at steady state in simulations of our node-weighted BCM on a 500-node complete graph with various node-weight distributions. We consider a cluster to be a major cluster if it has more than 2% of the nodes of a network. (In this case, a major cluster must have at least 11 nodes.)
Because Shannon entropy depends on the sizes of the opinion clusters, it provides more information about opinion fragmentation than tracking only the number of major opinion clusters. Our plot of the steady-state mean local receptiveness illustrates the same trends as the entropy. (See our code repository for the relevant figure.) This suggests that both Shannon entropy and mean local receptiveness are useful for quantifying opinion fragmentation.
FIG. 4. Shannon entropies of the steady-state opinion-cluster profiles in simulations of our node-weighted BCM on a 500-node complete graph with various node-weight distributions.
We now discuss the numbers of steady-state minor opinion clusters in our BCM simulations on complete graphs. (See our code repository for a plot.) For each node-weight distribution and each value of c and m, when we take the mean of our 100 simulations, we obtain at most 2 steady-state minor clusters. We observe the most minor clusters when c ∈ {0.1, 0.2}, which are the smallest confidence bounds that we examine. For the constant weight distribution, we typically observe more minor clusters when m ∈ {0.3, 0.5} than when m = 0.1. However, we do not observe this trend for the heterogeneous weight distributions. For example, for the Pareto-80-10 distribution, when c ∈ [0.34, 0.4], decreasing m results in more minor opinion clusters. For the Pareto distributions, as we decrease m, we also observe that minor clusters tend to appear at smaller confidence bounds. Smaller values of m entail smaller opinion compromises for interacting agents; this may give agents more time to interact before they settle into their final opinion clusters. For the constant weight distribution, this may reduce the number of minor clusters by giving agents more opportunities to assimilate into a major cluster. However, for our heterogeneous weight distributions, nodes with larger weights have a larger probability of interacting with other nodes, and we no longer observe fewer minor clusters as we decrease m.
FIG. 5. Sample trajectories of agent opinions versus time in a single simulation of our node-weighted BCM on a complete graph with N = 500 nodes and node weights that we draw from a Pareto-80-10 distribution. We color the trajectory of each agent by its node weight, which we normalize so that the sum of all node weights is 1. The nodes in the two minor opinion clusters are all small-weight nodes; their weights are close to 0 (and are hence in purple).
We now propose a possible mechanism by which our node-weighted BCM may promote the trends in Table III. In Fig.
5, we show the trajectories of opinions versus time for a single simulation with node weights that we draw from a Pareto-80-10 distribution. To qualitatively describe our observations, we examine the large-weight and small-weight nodes (i.e., the nodes that are near and at the extremes of a set of node weights in a given simulation). Because our node-selection probabilities are proportional to node weights, to compare the weights in a simulation, we normalize them to sum to 1. In Fig. 5, the large-weight nodes appear to quickly stabilize into their respective steady-state major opinion clusters, and some small-weight nodes are left behind to form the two minor clusters. In our numerical simulations on complete graphs, we observe that heterogeneity in the node weights results in large-weight nodes interacting more frequently than other nodes and quickly settling into steady-state major opinion clusters. Small-weight nodes that are not selected for opinion updates early in a simulation are left behind to form the smallest clusters in a steady-state opinion-cluster profile; this increases the amount of opinion fragmentation. In comparison to the constant weight distribution, when we increase the mean node weight or increase the relative proportion of large-weight nodes (by increasing the heaviness of the tail of the distribution) or decrease the value of the compromise parameter m, small-weight nodes take longer to settle into opinion clusters; this may promote both opinion fragmentation and the formation of minor opinion clusters. B. Erdős-Rényi (ER) graphs We now examine random graphs that we generate using G(N, p) ER random-graph models, where p is the homogeneous, independent probability of an edge between any pair of nodes [49]. For p = 1, these ER graphs are complete graphs. In this subsection, we consider the edge probabilities p ∈ {0.1, 0.3, 0.5, 0.7}. For each value of p, we observe the trends in Table III. 
We include the plots of our simulation results at steady state for the convergence times, the numbers of major and minor opinion clusters, and the values of mean local receptiveness in our code repository. In Fig. 6, we show the steady-state Shannon entropies of our simulations for various node-weight distributions and values of p. The entropies are comparable to those that we obtained in our simulations on 500-node complete graphs (see Sec. IV A).
FIG. 6. Shannon entropies of the steady-state opinion-cluster profiles in simulations of our node-weighted BCM on G(500, p) ER random graphs with various node-weight distributions.
When c ∈ [0.1, 0.4], for each of our three node-weight distribution families and for fixed values of p, c, and m, the 80-20 distribution tends to yield a larger Shannon entropy than the 80-10 distribution (which has a smaller mean). For larger p, we expect the results of our simulations on G(500, p) networks to be similar to those of our simulations on a 500-node complete graph. For p ∈ {0.3, 0.5, 0.7} and N = 500, the number of major opinion clusters and the mean local receptiveness are comparable to the corresponding results for a 500-node complete graph. When p = 0.1 and there is opinion fragmentation, for a fixed node-weight distribution and fixed values of c and m, we observe fewer major opinion clusters than for larger values of p. For p = 0.1, when c ∈ [0.1, 0.4], for a fixed node-weight distribution and fixed c and m, we also observe that the mean local receptiveness tends to be larger than it is for larger p. One possible contributing factor for this observation may be that smaller values of p yield G(N, p) graphs with more small-degree nodes; these small-degree nodes have fewer available values of local receptiveness than larger-degree nodes. For example, a node with degree 2 can only have a local receptiveness of 0, 0.5, or 1.
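The degree-2 example can be verified with a short sketch of this computation (our own implementation, assuming that local receptiveness is the fraction of a node's neighbors whose opinions lie within the confidence bound c):

```python
def local_receptiveness(adj, x, i, c):
    """Fraction of node i's neighbors j (from adjacency list adj) whose
    opinions lie within the confidence bound c of x[i].  Assumes that
    node i has at least one neighbor."""
    nbrs = adj[i]
    return sum(abs(x[i] - x[j]) < c for j in nbrs) / len(nbrs)

# A degree-2 node can only take the values 0, 0.5, or 1:
adj = {0: [1], 1: [0, 2], 2: [1]}    # a 3-node path graph; node 1 has degree 2
x = [0.0, 0.05, 0.9]                 # opinions
```

With c = 0.1, node 1 is receptive to node 0 (difference 0.05) but not to node 2 (difference 0.85), so its local receptiveness is 0.5.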
Unless a small-degree node is an isolated node in the steady-state effective-receptivity network G_eff(T), its presence may help inflate the value of the steady-state mean local receptiveness. For progressively smaller values of p, we observe progressively more minor opinion clusters at steady state. For p ∈ {0.5, 0.7}, the steady-state numbers of minor clusters are comparable to the numbers that we obtained for a 500-node complete graph. When p ∈ {0.5, 0.7}, for each distribution and each value of c and m, when we take the mean of our 500 simulations, we obtain at most 3 steady-state minor clusters. For these simulations, we observe the most minor clusters when c ∈ {0.1, 0.2}. For p = 0.1, the mean number of minor clusters at steady state can be as large as 9; this occurs when c ∈ {0.35, 0.4}. It seems sensible that smaller values of p yield more minor opinion clusters. For small p, there are more small-degree nodes than for larger values of p. It is easier for small-degree nodes than for large-degree nodes to end up in a minor opinion cluster, because a small-degree node needs to become unreceptive to only a few neighbors. That is, if i is a small-degree node, it needs to satisfy |x_i − x_j| ≥ c for only a few neighbors j to end up in a minor cluster at steady state. C. Stochastic-block-model (SBM) graphs We now examine SBM random graphs that we generate using the parameters in Table I. For both the two-community and core-periphery SBM graphs, we observe the trends in Table III. We include the plots of our simulation results at steady state for the convergence times, the numbers of major and minor opinion clusters, the Shannon entropies, and the values of mean local receptiveness in our code repository. For the two-community SBM graphs, the steady-state Shannon entropies and numbers of major opinion clusters are comparable to those in our simulations on a complete graph.
When there is opinion fragmentation, for a fixed node-weight distribution and fixed values of c and m, the steady-state values of mean local receptiveness tend to be similar to the values for G(500, 0.1) graphs and larger than the values for a complete graph. The steady-state numbers of minor opinion clusters are similar to those for the G(500, 0.1) random graphs. For the two-community SBM graphs, for each node-weight distribution and each value of c and m, when we take the mean of our 500 simulations, we obtain at most 9 steady-state minor clusters. We observe the most steady-state minor clusters when c ∈ {0.35, 0.4}. Recall that we select the edge probabilities of the two-community SBM so that each of the two communities has an expected mean degree that matches that of G(500, 0.1) graphs. Therefore, it is reasonable that we obtain similar results for the two-community SBM and the G(500, 0.1) random graphs. In our numerical simulations, we assign the node weights randomly without considering the positions (which, in this case, are the community assignments) of the nodes of a network. With node weights assigned in this way, it seems that graph sparsity may be more important than community structure for determining if the system reaches a consensus or a fragmented state. For a fixed node-weight distribution and fixed values of c and m, the core-periphery SBM graphs tend to have fewer major clusters than complete graphs. Additionally, both the steady-state Shannon entropy and the mean local receptiveness tend to be larger for the core-periphery SBM graphs than for complete graphs. Larger entropy and smaller local receptiveness are both indications of more opinion fragmentation. If we consider only the number of major opinion clusters, it seems that the core-periphery SBM graphs yield less opinion fragmentation than complete graphs.
However, when we examine the entire opinion-cluster profile of a network and account for the cluster sizes and the minor clusters, the Shannon entropy reveals that there is more opinion fragmentation in core-periphery SBM graphs than in complete graphs. The steady-state mean local receptiveness indicates that the nodes of a core-periphery SBM graph tend to be receptive to a larger fraction of their neighbors than the nodes of a complete graph. We believe that Shannon entropy provides a more useful quantification of opinion fragmentation in a network than mean local receptiveness. For networks with a large range of degrees, small-degree nodes can inflate the mean value of local receptiveness. Analogously, for clustering coefficients, a network's mean local clustering coefficient places more importance on small-degree nodes than its global clustering coefficient [49]. In the context of our node-weighted BCM, consider a node with degree 2 and a node with degree 100, and suppose that both of them have a local receptiveness of 0.5. The larger-degree node having a local receptiveness of 0.5 gives a better indication that there may be opinion fragmentation in a network than the smaller-degree node having the same local receptiveness. However, we treat both nodes equally when we calculate the mean local receptiveness. We believe that local receptiveness is a useful quantity to calculate for individual nodes to determine how they perceive the opinions of their neighbors. However, it appears to be less useful than Shannon entropy for quantifying opinion fragmentation in a network. For a fixed node-weight distribution and fixed values of c and m, the steady-state numbers of major opinion clusters that we obtain in the core-periphery SBM graphs are comparable to the numbers for a complete graph.
The steady-state numbers of minor opinion clusters tend to be larger for core-periphery SBM graphs than for two-community SBM graphs (which have more minor clusters than a complete graph). For each node-weight distribution and each value of c and m, when we take the mean of our 500 simulations, we observe at most 11 steady-state minor clusters; this occurs when c = 0.1. One possibility is that the core-periphery structure makes it easier to disconnect peripheral nodes of an effective-receptivity network, causing these nodes to form minor clusters. For the core-periphery SBM graphs, it seems interesting to investigate the effect of using network structure to assign which nodes have large weights. For example, if we assign all of the large weights to nodes in the core, will that pull more of the peripheral nodes into opinion clusters with core nodes? If we place a large-weight node in the periphery, will it be able to pull core nodes into its opinion cluster? D. Caltech network We now discuss the Caltech Facebook network, which is an empirical data set in which the nodes are individuals with Caltech affiliations and the edges represent "friendships" on Facebook on one day in fall 2005 [47,48]. We consider the network's largest connected component, which has 762 nodes and 16,651 edges. The Caltech network exhibits all but one of the trends that we reported in Table III; the only exception is the trend in the number of minor opinion clusters. When there is opinion fragmentation, the Caltech network has more steady-state minor clusters and larger steady-state Shannon entropies than the synthetic networks. In Fig. 7, we show the steady-state numbers of minor opinion clusters in simulations of our BCM on the Caltech network. We obtain the most minor clusters when c = 0.1, which is the smallest value of c that we examine.
For each node-weight distribution and each value of c and m, when we take the mean of our 100 simulations on the Caltech network, we obtain as many as 78 minor clusters, which is far more than the single-digit numbers that we usually observe for our synthetic networks. Additionally, unlike in our synthetic networks, for all distributions (not just the constant weight distribution), the Caltech network tends to have more minor clusters when m ∈ {0.3, 0.5} than when m = 0.1. We include our plot of the steady-state number of major opinion clusters in our code repository. The Caltech network tends to have fewer major opinion clusters than the examined synthetic networks. In Fig. 8, we show the steady-state Shannon entropies for the Caltech network. For a fixed node-weight distribution and fixed values of c and m, when there is opinion fragmentation, the Caltech network has a larger entropy than our synthetic networks. This aligns with our observation that the Caltech network has many more minor opinion clusters than our synthetic networks. We show a plot of the steady-state values of mean local receptiveness for the Caltech network in our code repository. The values of the mean local receptiveness tend to be larger for the Caltech network than for the 500-node complete graph. We suspect that this arises from the presence of many small-degree nodes in the Caltech network. In Sec. IV C, we discussed the impact of small-degree nodes on the mean local receptiveness. The histogram of the node degrees of the Caltech network (see Fig. 9) differs dramatically from those of our synthetic networks. Unlike in our synthetic networks, the most common degrees in the Caltech network are among the smallest degrees. In Fig. 9, the tallest bar in the histogram is for nodes of degrees 0-9. These abundant small-degree nodes are likely to disconnect from the largest connected component(s) of the effective-receptivity network and form minor opinion clusters.
Because we select the initial opinions uniformly at random from [0, 1], when c = 0.1, it is possible that small-degree nodes are initially isolated nodes of the effective-receptivity network. The abundance of small-degree nodes in the Caltech network helps explain its larger steady-state numbers of minor opinion clusters and the correspondingly larger entropies than for our synthetic networks. Despite the fact that the Caltech network is structurally very different from our synthetic networks, it follows all of the trends in Table III aside from the one for the number of minor opinion clusters. Therefore, it seems that the trends that we observe in our node-weighted BCM when we assign node weights uniformly at random (and hence in a way that is independent of network structure) are fairly robust to the underlying network structure. E. Finite-size effects We now investigate finite-size effects in our BCM results for our simulations on a complete graph. To ensure reasonable computation times, we examined synthetic networks with 500 nodes. However, it is useful to get a sense of whether or not the trends in Table III hold for networks of different sizes. To start to investigate this, we simulate our BCM on complete graphs of sizes N ∈ {10, 20, 30, 45, 65, 100, 150, 200, 300, . . . , 1000}. We examine m ∈ {0.3, 0.5} and c ∈ {0.1, 0.3, 0.5}, which give regimes of opinion fragmentation, a transition between fragmentation and consensus for the constant weight distribution, and opinion consensus. We consider the constant weight distribution and the 80-10 distributions (i.e., the uniform, exponential, and Pareto distributions with a mean node weight of 2.8836). We do not examine any larger-mean distributions because they require longer computation times. In Fig. 10, we show the convergence times of our BCM simulations on complete graphs of various sizes. To visualize our results, we plot the graph sizes on a logarithmic scale.
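One way to draw mean-matched node weights from the three distribution families is sketched below. The parameterizations are illustrative assumptions that only match a target mean (for example, we choose a Pareto shape of 3); the 80-10, 80-20, and 90-10 parameterizations in the paper fix additional properties of the distributions that we do not reproduce here.

```python
import numpy as np

def sample_weights(family, mean, n, rng):
    """Draw n positive node weights with a given mean from one of three
    illustrative distribution families."""
    if family == "uniform":
        return rng.uniform(0.0, 2.0 * mean, n)   # mean of U(0, 2*mean) is mean
    if family == "exponential":
        return rng.exponential(mean, n)          # scale parameter equals the mean
    if family == "pareto":
        a = 3.0                                  # shape (an assumption); mean = a*x_m/(a-1)
        x_m = mean * (a - 1.0) / a               # scale that yields the target mean
        return x_m * (1.0 + rng.pareto(a, n))    # classical Pareto with scale x_m
    raise ValueError(f"unknown family: {family}")

rng = np.random.default_rng(0)
weights = {f: sample_weights(f, 2.8836, 200_000, rng)
           for f in ("uniform", "exponential", "pareto")}
```

For a common mean, the tail becomes heavier as we move from the uniform to the exponential to the Pareto family, which is the direction along which the paper reports increasing opinion fragmentation.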
For all distributions, the convergence times become longer as we increase the graph size. For each graph size, the convergence times for the heterogeneous weight distributions are similar to each other and are longer than those for the constant weight distribution. In Fig. 11, we show the steady-state Shannon entropies from our simulations of our BCM on complete graphs of various sizes. For a fixed value of c, we observe similar results when m = 0.3 and m = 0.5. When c = 0.5, for each distribution, the simulations always reach a consensus (i.e., there is exactly one major steady-state opinion cluster) for N ≥ 200. Correspondingly, the steady-state entropies are close to 0. (They are not exactly 0 because the calculation of Shannon entropies includes information from minor clusters.) As we increase the network size, the error bars (which indicate one standard deviation from the mean) become progressively smaller. When c ∈ {0.1, 0.3}, for sufficiently large graph sizes (specifically, when N ≥ 100), we observe that the entropy increases as we increase the heaviness of the tail of a distribution. For c = 0.3, the mean steady-state entropies appear to no longer change meaningfully with N when N ≥ 400. For c = 0.1, this is the case when N ≥ 100. When there is opinion fragmentation, the heterogeneous node-weight distributions yield larger steady-state Shannon entropies (and hence more opinion fragmentation, if one is measuring it using entropy) than the constant weight distribution for each graph size. Additionally, for a given distribution mean, we obtain larger entropies (and thus more opinion fragmentation) as we increase the heaviness of the tail of a distribution. We have not explored the effect of graph size on the trends that we observe (see Table III) when we increase the distribution mean for a fixed family of distributions. In our code repository, we include a plot of the steady-state mean local receptiveness for complete graphs of various sizes.
In that plot, we also observe the trend of more opinion fragmentation (in the sense of a smaller mean local receptiveness) for heterogeneous node-weight distributions with increasingly heavy tails. We also examine the steady-state numbers of major and minor opinion clusters in simulations of our BCM on complete graphs of various sizes; we include plots of them in our code repository. For a fixed value of c, we observe similar results when m = 0.3 and m = 0.5. When N ≤ 49, there are no minor opinion clusters, by definition, because minor clusters can include at most 2% of the nodes of a network (and even a single node constitutes more than 2% of all nodes for such small networks). When N ≥ 65 and c ∈ {0.1, 0.3}, for each distribution, the number of minor clusters tends to increase as we increase N . We do not observe a clear trend in which distributions yield more minor clusters. When c = 0.5, the mean number of minor clusters is always near 0. When c = 0.5 and N ≥ 200, all simulations yield 1 major opinion cluster (i.e., they all reach consensus). When c = 0.3, for all graph sizes, there are more major opinion clusters as we increase the heaviness of the tail of a distribution. Additionally, when c = 0.3, for the Pareto-80-10 distribution, the number of major clusters tends to increase as we increase the graph size. For the other distributions, the number of major clusters tends to decrease as we increase the graph size. When c = 0.1 and N ≥ 200, there again tend to be more major clusters as we increase the heaviness of the tail of a distribution, although the trend is not as clear as it was for c = 0.3. For graphs with N = 500 or more nodes, the mean steady-state Shannon entropies for each node-weight distribution appear to no longer change meaningfully with respect to N; the mean entropies are more consistent for N ≥ 500 than for smaller values of N.
For each graph size, the heterogeneous 80-10 distributions have longer convergence times than the constant weight distribution. In all of these cases, we also observe more opinion fragmentation as we increase the heaviness of the tail of a distribution. Because of computation time, we have not examined finite-size effects for different values of the distribution means. However, because the mean Shannon entropies no longer change meaningfully with respect to N for graphs with N ≥ 500 nodes, we hypothesize that the trends in opinion fragmentation and convergence time in Table III continue to hold for our synthetic networks when there are more than 500 nodes. V. CONCLUSIONS AND DISCUSSION We developed a novel bounded-confidence model (BCM) with heterogeneous node-selection probabilities, which we modeled by using node weights. One can interpret these node weights as encoding phenomena such as heterogeneous agent sociabilities or activity levels. We studied our node-weighted BCM with fixed node weights that we assign in a way that disregards network structure and node opinions. We demonstrated that our node-weighted BCM has longer convergence times and more opinion fragmentation than a baseline Deffuant-Weisbuch (DW) BCM in which we uniformly randomly select nodes for interaction. It is straightforward to adapt our BCM to assign node weights in a way that depends on network structure and/or node opinions. See Sec. V B and Sec. V C for discussions. A. Summary of our main results We simulated our node-weighted BCM with a variety of node-weight distributions (see Table II) on several random and deterministic networks (see Table I). For each of these distributions and networks, we systematically investigated the convergence time and opinion fragmentation for different values of the confidence bound c and the compromise parameter m. 
To determine if the nodes of a network reach consensus or if there is opinion fragmentation, we calculated the steady-state number of major clusters in our simulations. To quantify the amount of opinion fragmentation, we calculated the steady-state Shannon entropy and mean local receptiveness. For a given network, we found that entropy and mean local receptiveness agree about which node-weight distributions yield more opinion fragmentation (see Table III). Based on our results, we believe that Shannon entropy is more useful than mean local receptiveness for quantifying opinion fragmentation in a network. However, calculating local receptiveness is insightful for explorations of the opinion dynamics of individual nodes. In our simulations of our node-weighted BCM, we observed a variety of typical trends (see Table III). In particular, we found that heterogeneous node-weight distributions yield longer convergence times and more opinion fragmentation than the baseline DW model (which we obtain by using a constant weight distribution) in simulations of our BCM. Opinion fragmentation also increases if either (1) for a fixed distribution mean, we make the tail of the distribution heavier or (2) for a given distribution family, we increase the mean of the distribution. Given a set of heterogeneous node weights, we hypothesize that large-weight nodes are selected early in a simulation with large probabilities and quickly settle into their associated steady-state major opinion clusters. Small-weight nodes that are not selected early in a simulation are left behind to form small opinion clusters, resulting in more opinion fragmentation than in the baseline DW model. B. Relating node weights to network structure We examined deterministic and random graphs with various structures, and we observed the trends in Table III. For each of our BCM simulations, we selected node weights from a specified distribution and then assigned these weights to nodes uniformly at random.
Therefore, our investigation conveys what trends to expect with fixed, heterogeneous node weights that are assigned to nodes without regard for network structure. However, our model provides a flexible framework to study the effects of node weights when they are correlated with network structure. For example, one can assign weights to nodes in a way that depends on some centrality measure (such as degree). In our BCM, we expect large-degree and large-weight nodes to have more interactions than small-degree or small-weight nodes. Nodes with larger degrees have more neighbors that can select them for an interaction, and nodes with larger weights have associated larger probabilities of being selected for an interaction. One possible area of future work is to investigate the combined effects of node weight and node degree on the frequency of interactions and the distribution of steady-state opinions in our BCM. Mean-field approaches, such as the one in [69], may offer insights into these effects. For a given set of node weights, larger-weight nodes have larger probabilities of interacting with other nodes; their position in a network likely affects the dynamics of BCMs and other models of opinion dynamics. One can also investigate the effects of homophily in the assignment of node weights. For example, on social-media platforms, very active accounts may engage with each other more frequently by sharing or commenting on each other's posts. We can incorporate such features into our BCM through a positive node-weight assortativity, such that large-weight nodes are more likely to be adjacent to each other than to other nodes. As in the standard DW model, we assign the initial opinions uniformly at random in our BCM. However, in a real social network with community structure, this choice may not be realistic.
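As a hypothetical illustration of coupling weights to a centrality measure (this is not an assignment that we study in this paper, and the function name and eps regularization are our own choices), one could make selection probabilities proportional to degree:

```python
def degree_proportional_weights(adj, eps=1e-9):
    """Assign each node a weight proportional to its degree, so that
    structurally central (large-degree) nodes are also selected more
    often for interactions.  Returns normalized selection probabilities;
    eps keeps isolated nodes selectable with a tiny probability."""
    w = {v: len(nbrs) + eps for v, nbrs in adj.items()}
    total = sum(w.values())
    return {v: wv / total for v, wv in w.items()}

# A star graph: node 0 is adjacent to nodes 1, 2, and 3.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
probs = degree_proportional_weights(star)
```

In this example, the hub accounts for half of the total degree, so it receives half of the selection probability; such an assignment compounds the interaction advantage of large-degree nodes.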
One can investigate a social network whose communities have different mean opinion values and examine the effect of placing large-weight nodes into different communities. For example, how does placing all large-weight nodes into the same community affect opinion dynamics and steady-state opinion-cluster profiles? How does the presence of a small community of "outspoken" (i.e., large-weight) nodes influence the final opinions of nodes in other communities of a network? Will the small community quickly engender an echo chamber [67], will it pull other nodes into its final opinion cluster, or will something else occur? C. Relating node weights to node opinions In the present paper, we considered fixed node weights that are independent of node opinions. One can readily adapt our BCM to incorporate time-dependent node weights, such as ones that depend on node opinions. One can allow the probability of selecting a node for interaction to depend on how extreme its opinion is [22] or on the similarity of its opinion to that of another node [24]. Sîrbu et al. [24] studied a modified DW model with heterogeneous node-selection probabilities that model algorithmic bias on social media. In their model, one first selects an agent uniformly at random. One then calculates the magnitude of the opinion difference between that agent and each of its neighbors and then selects a neighbor with a probability that is proportional to this difference. In the context of our BCM, one can represent their agent-selection mechanism using time-dependent node weights. To do this, at each time t, one first assigns the same constant weight to all nodes when selecting a first node i. When selecting a second node j to interact with i, one then assigns weights to neighbors of i that are a function of the opinion difference |x_i(t) − x_j(t)|. One assigns a weight of 0 to nodes that are not adjacent to i. The simulations by Sîrbu et al.
on complete graphs suggest that greater algorithmic bias results in longer convergence times and more opinion clusters [24]. Very recently, Pansanella et al. [25] observed similar trends in a study of the algorithmic-bias model of Sîrbu et al. for various random-graph models. In our simulations of our BCM with heterogeneous node-selection probabilities, we observed similar trends: longer convergence times and more opinion clusters (and more opinion fragmentation) than in our baseline DW model. Our results illustrate that it is important to consider the baseline effect of assigning node weights uniformly at random in studies of BCMs with heterogeneous node-selection probabilities before attributing trends such as longer convergence times and more opinion fragmentation to specific mechanisms such as algorithmic bias. Different mechanisms can yield very similar empirical observations. D. Edge-based heterogeneous activities In the standard DW model, at each time, one selects an edge of a network uniformly at random and the two agents that are attached to that edge interact with each other [45]. Most past work on the DW model and its extensions has focused on this edge-based selection mechanism [5]. In our BCM, to incorporate node weights (e.g., to encode heterogeneous sociabilities or activity levels of individuals), we instead used a node-based selection mechanism. For voter models of opinion dynamics, it is known that the choice between edge-based and node-based agent selection can substantially affect a model's qualitative behavior [46]. We are not aware of a comparison of edge-based and node-based agent selection in asynchronous BCMs (and, in particular, in DW models), and it seems interesting to investigate this issue. We developed our BCM to incorporate node weights that encode heterogeneous activity levels of individuals.
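Returning to the opinion-dependent selection of Sec. V C, its time-dependent-weight representation can be sketched on a complete graph as follows (our own simplification and naming; it treats the partner-selection probability as directly proportional to the opinion difference, as described above):

```python
import numpy as np

def select_pair_opinion_biased(x, rng):
    """Select a first agent uniformly at random and then a second agent
    with probability proportional to the magnitude of their opinion
    difference, on a complete graph.  The partner weights are
    time-dependent because they are recomputed from current opinions."""
    n = len(x)
    i = int(rng.integers(n))
    w = np.abs(x - x[i])             # time-dependent weight of each partner
    w[i] = 0.0                       # an agent cannot interact with itself
    if w.sum() == 0.0:               # all opinions identical: choose uniformly
        j = int((i + 1) % n)
    else:
        j = int(rng.choice(n, p=w / w.sum()))
    return i, j
```

After selecting the pair, one would apply the usual confidence-bound test and compromise update; only the selection step differs from the node-weighted mechanism that we study in this paper.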
One can also examine heterogeneous dyad-activity levels to account for the fact that individuals do not interact with each of their social contacts with the same probability. To encode such heterogeneity, one can construct a variant of our BCM that incorporates edge weights. At each time step, one can select a pair of agents to interact with a probability that is proportional to the weight of the edge between them. We have not yet examined edge-based heterogeneous activity levels in a BCM, and we expect that it will be interesting to investigate them.

E. Importance of node weights

The key novelty of our BCM is our incorporation of node weights into opinion dynamics. Node weights have been used in activity-driven models of temporal networks [39], and activity-driven frameworks have been used to model which agents can interact with each other in models of opinion dynamics [23, 40]. In our BCM, the node weights determine the probabilities of selecting agents for interaction in a time-independent network. Alizadeh and Cioffi-Revilla [22], Sîrbu et al. [24], and Pansanella et al. [25] examined specific scenarios of heterogeneous node-selection probabilities in DW models. Our node-weighted BCM provides a general framework for incorporating node weights into an asynchronous BCM. Using our framework, one can consider node weights that are fixed and assigned uniformly at random to nodes (i.e., as we investigated in this paper), fixed and assigned according to some other probability distribution (see the discussion in Sec. V B), or assigned in a time-dependent way (see the discussion in Sec. V C). In network science, node weights have been studied far less than edge weights, and even the term "weighted network" usually refers specifically to edge-weighted networks by default. For example, it is very common to study centralities in edge-weighted networks [70], but studies of centralities in node-weighted networks (e.g., see Refs. [71, 72]) are much less common. Heitzig et al.
[71] generalized common network statistics to node-weighted networks and used node weights to represent the "sizes" of the nodes of a network. They used their framework to study brain networks with node weights that encode the areas of regions of interest, international trade networks with node weights that encode the gross domestic products (GDPs) of countries, and climate networks with node weights that encode areas in a regular grid on the Earth's surface. Singh et al. [72] developed centrality measures that incorporate both edge weights and node weights and used them to study service-coverage problems and the spread of contagions. These studies demonstrate the usefulness of node weights for incorporating salient information into network analysis in a variety of applications. In our node-weighted BCM, we are interested in determining which nodes of a network are (in some sense) more influential than others and thereby exert larger effects on steady-state opinion-cluster profiles. Recently, Brooks and Porter [73] quantified the influence of media nodes in a BCM by examining how their ideologies influence other nodes of a network. An interesting direction for future work is to develop ways to quantify the influence of specific nodes in models of opinion dynamics with node weights. For example, can one determine which weighted nodes to seed with extreme opinions to best spread such opinions? Are there nodes that make it particularly easy for communities to reach consensus and remain connected in a steady-state effective-receptivity network G_eff(T)? One can adapt the node weights in our BCM to examine a variety of sociological scenarios in which nodes have heterogeneous activity levels or interaction frequencies. More generally, our model illustrates the importance of incorporating node weights into network analysis, and we encourage researchers to spend more time studying the effects of node weights on network structure and dynamics.
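The edge-weighted dyad selection proposed in Sec. V D (select a pair of agents with probability proportional to the weight of the edge between them) can likewise be sketched briefly. The function name `select_dyad` and the dict-of-edges representation are our own illustrative assumptions, not code from the paper.

```python
import random

def select_dyad(edge_weights, rng=random):
    """Select an edge (i.e., a pair of agents) for interaction with
    probability proportional to its edge weight."""
    edges = list(edge_weights)
    return rng.choices(edges, weights=[edge_weights[e] for e in edges], k=1)[0]

# Uniform edge weights recover the standard DW edge-selection mechanism,
# in which one selects an edge uniformly at random at each time step.
```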
Therefore, a transition between consensus and opinion fragmentation occurs for c ∈ [0.25, 0.3]. In our simulations, we observe that this transition occurs for c ∈ [0.25, 0.4] in our node-weighted BCM. To examine this transition, we thus zoom in on these values of c. For the uniform and exponential distributions, we focus on c ∈ [0.25, 0.3]. For the Pareto distributions, the transition occurs for larger values of c than for the other distributions; we consider additional values of c ∈ [0.3, 0.4]. Because the constant weight distribution is our baseline DW model, we simulate our BCM with the constant weight distribution for all values of c that we consider for any other distribution. In Fig. 2, we show the convergence times (which we measure in terms of the numbers of time steps) of our node-weighted BCM.

FIG. 2. Convergence times (in terms of the number of time steps) in simulations of our node-weighted BCM on a 500-node complete graph. If we only consider the time steps in which interacting nodes actually change their opinions, the convergence times are smaller; however, the trends are the same. For this heat map and all subsequent heat maps, the depicted values are the means of our simulations of our BCM with each node-weight distribution and each value of the BCM parameter pair (c, m).

FIG. 7. The steady-state numbers of minor opinion clusters in simulations of our node-weighted BCM on the Caltech Facebook network with various distributions of node weights. We consider a cluster to be a minor cluster if it has at most 2% of the nodes (i.e., 15 or fewer nodes) of a network.

FIG. 8. Shannon entropies of the steady-state opinion-cluster profile in simulations of our node-weighted BCM on the Caltech Facebook network with various node-weight distributions.

FIG. 9. Histogram of the node degrees of the Caltech Facebook network. The bins have width 10 and originate at the left end point (i.e., the bins indicate degrees of 0-9, 10-19, and so on).

FIG. 10. Convergence times (in terms of the number of time steps) in simulations of our node-weighted BCM on complete graphs of various sizes. We show results for various choices of c and m; the marker shape and color indicate the node-weight distribution. For this figure and subsequent figures of this type, the points are means of 100 simulations and the error bars indicate one standard deviation from the mean. The horizontal axis gives the graph size on a logarithmic scale. For clarity, the vertical axes of the plots have different scales.

FIG. 11. Shannon entropies of the steady-state opinion-cluster profiles in simulations of our node-weighted BCM on complete graphs of various sizes. We show results for various choices of c and m; the marker shape and color indicate the node-weight distribution.

TABLE I. The networks on which we simulate our node-weighted BCM.
- C(N): Complete graph with N nodes. Parameters: N ∈ {10, 20, 30, 45, 65, 100, 150, 200, 300, . . . , 1000}.
- G(N, p): Erdős-Rényi (ER) random-graph model with N nodes and homogeneous, independent edge probability p. Parameters: p ∈ {0.1, 0.3, 0.5, 0.7}.
- Two-Community SBM (a): Stochastic block model with 2 × 2 blocks. Edges between nodes in the same set (A or B) exist with a larger probability than edges between nodes in different sets; the block probabilities satisfy P_BB > P_AA > P_AB. Parameters: P_AA = 49.9/374, P_BB = 49.9/124, P_AB = 1/500.
- Core-Periphery SBM (a): Stochastic block model with 2 × 2 blocks. Set A is a set of core nodes and set B is a set of peripheral nodes. The block probabilities satisfy P_AA > P_AB > P_BB. Parameters: P_AA = 147.9/374, P_BB = 1/174, P_AB = 1/25.
- Caltech Network: The largest connected component of the Facebook friendship network at Caltech on one day in fall 2005. This network, which is part of the Facebook100 data set [47, 48], has 762 nodes and 16,651 edges.
(a) Our SBM networks have N = 500 nodes. We partition an SBM network into two sets of nodes; set A has 75% of the nodes, and set B has 25% of the nodes.
BCM, we consider stochastic-block-model (SBM) networks [49] with 2 × 2 blocks, where each block consists of an ER graph. Inspired by the choices of Kureh and Porter

TABLE II. Names and specifications of our distributions of node weights. We show both the general mathematical expressions for the means and the specific values of the means for our parameter values. For the Pareto distributions, the distribution means in the table are approximate. For all other distributions, the means are exact.
Distribution | Probability density function | Parameter values | Domain | Mean

TABLE III. Summary of the trends in our simulations of our node-weighted BCM. Unless we note otherwise, we observe these trends for each of the networks that we examine (complete graphs, ER and SBM random graphs, and the Caltech Facebook network).
- Convergence time: For fixed values of c and m, the heterogeneous weight distributions have longer convergence times than the constant weight distribution.
- Opinion fragmentation: For fixed values of c ∈ [0.1, 0.4] and m, the heterogeneous weight distributions usually have more opinion fragmentation than the constant weight distribution.

ACKNOWLEDGMENTS

We thank Andrea Bertozzi, Jacob Foster, Jerry Luo, Deanna Needell, and the participants of UCLA's Networks Journal Club for helpful comments and discussions. We also thank the two anonymous referees for helpful comments. We acknowledge financial support from the National Science Foundation (grant number 1922952) through the Algorithms for Threat Detection (ATD) program. GJL was also supported by NSF grant number 1829071.

C. Castellano, S. Fortunato, and V. Loreto, Statistical physics of social dynamics, Reviews of Modern Physics 81, 591 (2009).
A. Sîrbu, V. Loreto, V. D. P. Servedio, and F. Tria, Opinion dynamics: Models, extensions and external effects, in Participatory Sensing, Opinions and Collective Awareness, edited by V. Loreto, M. Haklay, A. Hotho, V. D. P. Servedio, G. Stumme, J. Theunis, and F. Tria (Springer International Publishing, Cham, Switzerland, 2017) pp. 363-401.
S. Lehmann and Y.-Y. Ahn, Complex Spreading Phenomena in Social Systems: Influence and Contagion in Real-World Social Networks (Springer International Publishing, Cham, Switzerland, 2018).
H. Noorazar, K. R. Vixie, A. Talebanpour, and Y. Hu, From classical to modern opinion dynamics, International Journal of Modern Physics C 31, 2050101 (2020).
H. Noorazar, Recent advances in opinion propagation dynamics: A 2020 survey, The European Physical Journal Plus 135, 521 (2020).
A. F. Peralta, J. Kertész, and G. Iñiguez, Opinion dynamics in social networks: From models to data, e-print arXiv:2201.01322 (2022); to appear as a chapter in Handbook of Computational Social Science (T. Yasseri (Ed.), 2023).
M. Galesic, H. Olsson, J. Dalege, T. van der Does, and D. L. Stein, Integrating social and cognitive aspects of belief dynamics: Towards a unifying framework, Journal of The Royal Society Interface 18, 20200857 (2021).
F. Vazquez, Modeling and analysis of social phenomena: Challenges and possible research directions, Entropy 24, 491 (2022).
A. Chacoma and D. H. Zanette, Opinion formation by social influence: From experiments to modeling, PLOS ONE 10, e0140406 (2015).
C. Vande Kerckhove, S. Martin, P. Gend, P. J. Rentfrow, J. M. Hendrickx, and V. D. Blondel, Modelling influence and opinion evolution in online collective behaviour, PLOS ONE 11, e0157685 (2016).
K. Takács, A. Flache, and M. Mäs, Discrepancy and disliking do not induce negative opinion shifts, PLOS ONE 11, e0157948 (2016).
C. Monti, G. De Francisci Morales, and F. Bonchi, Learning opinion dynamics from social traces, in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '20 (Association for Computing Machinery, New York, NY, USA, 2020) pp. 764-773.
I. V. Kozitsin, Formal models of opinion formation and their application to real data: evidence from online social networks, The Journal of Mathematical Sociology 46, 120 (2022).
I. V. Kozitsin, Opinion dynamics of online social network users: A micro-level analysis, The Journal of Mathematical Sociology 47, 1 (2023).
D. Carpentras and M. Quayle, Propagation of measurement error in opinion dynamics models: The case of the Deffuant model, Physica A: Statistical Mechanics and its Applications 606, 127993 (2022).
M. Mäs, Challenges to simulation validation in the social sciences. A critical rationalist perspective, in Computer Simulation Validation: Fundamental Concepts, Methodological Frameworks, and Philosophical Perspectives, edited by C. Beisbart and N. J. Saam (Springer International Publishing, Cham, Switzerland, 2019) pp. 857-879.
P. Holme and F. Liljeros, Mechanistic models in computational social science, Frontiers in Physics 3, 78 (2015).
There are also many models of opinion dynamics with discrete-valued opinions and/or polyadic interactions between agents [2, 4, 74].
D. Chandler and R. Munday, A Dictionary of Media and Communication (Oxford University Press, Oxford, United Kingdom, 2011).
R. Hegselmann and U. Krause, Opinion dynamics and bounded confidence: Models, analysis and simulation, Journal of Artificial Societies and Social Simulation 5, 2 (2002).
G. Deffuant, D. Neau, F. Amblard, and G. Weisbuch, Mixing beliefs among interacting agents, Advances in Complex Systems 3, 87 (2000).
M. Alizadeh and C. Cioffi-Revilla, Activation regimes in opinion dynamics: Comparing asynchronous updating schemes, Journal of Artificial Societies and Social Simulation 18, 8 (2015).
J. Zhang, H. Xia, and P. Li, Dynamics of Deffuant model in activity-driven online social network, in Knowledge and Systems Sciences, edited by J. Chen, Y. Yamada, M. Ryoke, and X. Tang (Springer Singapore, Singapore, 2018) pp. 215-224.
A. Sîrbu, D. Pedreschi, F. Giannotti, and J. Kertész, Algorithmic bias amplifies opinion fragmentation and polarization: A bounded confidence model, PLOS ONE 14, e0213246 (2019).
V. Pansanella, G. Rossetti, and L. Milli, From mean-field to complex topologies: Network effects on the algorithmic bias model, in Complex Networks & Their Applications X, edited by R. M. Benito, C. Cherifi, H. Cherifi, E. Moro, L. M. Rocha, and M. Sales-Pardo (Springer International Publishing, Cham, Switzerland, 2022) pp. 329-340.
X. F. Meng, R. A. Van Gorder, and M. A. Porter, Opinion formation and distribution in a bounded-confidence model on various networks, Physical Review E 97, 022312 (2018).
A. Hickok, Y. Kureh, H. Z. Brooks, M. Feng, and M. A. Porter, A bounded-confidence model of opinion dynamics on hypergraphs, SIAM Journal on Applied Dynamical Systems 21, 1 (2022).
U. Kan, M. Feng, and M. A. Porter, An adaptive bounded-confidence model of opinion dynamics on networks, Journal of Complex Networks 11, cnac055 (2023).
D. Jacobmeier, Focusing of opinions in the Deffaunt model: First impression counts, International Journal of Modern Physics C 17, 1801 (2006).
A. Carro, R. Toral, and M. San Miguel, The role of noise and initial conditions in the asymptotic solution of a bounded confidence, continuous-opinion model, Journal of Statistical Physics 151, 131 (2013).
P. Sobkowicz, Extremism without extremists: Deffuant model with emotions, Frontiers in Physics 3, 17 (2015).
G. Weisbuch, G. Deffuant, F. Amblard, and J.-P. Nadal, Meet, discuss, and segregate!, Complexity 7, 55 (2002).
G. Deffuant, F. Amblard, and G. Weisbuch, How can extremism prevail? A study based on the relative agreement interaction model, Journal of Artificial Societies and Social Simulation 5, 1 (2002).
J. Lorenz, Continuous opinion dynamics under bounded confidence: A survey, International Journal of Modern Physics C 18, 1819 (2007).
G. Kou, Y. Zhao, Y. Peng, and Y. Shi, Multi-level opinion dynamics under bounded confidence, PLOS ONE 7, e43507 (2012).
J. Zhang, Convergence analysis for asymmetric Deffuant-Weisbuch model, Kybernetika 50, 32 (2014).
C. Huang, Q. Dai, W. Han, Y. Feng, H. Cheng, and H. Li, Effects of heterogeneous convergence rate on consensus in opinion dynamics, Physica A: Statistical Mechanics and its Applications 499, 428 (2018).
G. Chen, W. Su, W. Mei, and F. Bullo, Convergence properties of the heterogeneous Deffuant-Weisbuch model, Automatica 114, 108825 (2020).
N. Perra, B. Gonçalves, R. Pastor-Satorras, and A. Vespignani, Activity driven modeling of time varying networks, Scientific Reports 2, 469 (2012).
D. Li, D. Han, J. Ma, M. Sun, L. Tian, T. Khouw, and H. E. Stanley, Opinion dynamics in activity-driven networks, Europhysics Letters 120, 28002 (2017).
A. Baronchelli, C. Castellano, and R. Pastor-Satorras, Voter models on weighted networks, Physical Review E 83, 066117 (2011).
S. Huet, G. Deffuant, and W. Jager, A rejection mechanism in 2D bounded confidence provides more conformity, Advances in Complex Systems 11, 529 (2008).
M. McPherson, L. Smith-Lovin, and J. M. Cook, Birds of a feather: Homophily in social networks, Annual Reviews of Sociology 27, 415 (2001).
D. Spohr, Fake news and ideological polarization: Filter bubbles and selective exposure on social media, Business Information Review 34, 150 (2017).
G. Weisbuch, G. Deffuant, F. Amblard, and J. P. Nadal, Interacting agents and continuous opinions dynamics, in Heterogenous Agents, Interactions and Economic Performance, edited by R. Cowan and N. Jonard (Springer-Verlag, Heidelberg, Germany, 2003) pp. 225-242.
Y. H. Kureh and M. A. Porter, Fitting in and breaking up: A nonlinear version of coevolving voter models, Physical Review E 101, 062303 (2020).
V. Red, E. D. Kelsic, P. J. Mucha, and M. A. Porter, Comparing community structure to characteristics in online collegiate social networks, SIAM Review 53, 526 (2011).
A. L. Traud, P. J. Mucha, and M. A. Porter, Social structure of Facebook networks, Physica A: Statistical Mechanics and its Applications 391, 4165 (2012).
M. E. J. Newman, Networks, 2nd ed. (Oxford University Press, Oxford, United Kingdom, 2018).
L. Guo, E. Tan, S. Chen, X. Zhang, and Y. E. Zhao, Analyzing patterns of user content generation in online social networks, in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '09 (Association for Computing Machinery, New York, NY, USA, 2009) pp. 369-378.
J. Nielsen, The 90-9-1 rule for participation inequality in social media and online communities, https://www.nngroup.com/articles/participation-inequality/ (2006), Nielsen Norman Group. Last Accessed: 7 Jan 2022.
T. van Mierlo, The 1% rule in four digital health social networks: An observational study, Journal of Medical Internet Research 16, e33 (2014).
B. Carron-Arthur, J. A. Cunningham, and K. M. Griffiths, Describing the distribution of engagement in an internet support group by post frequency: A comparison of the 90-9-1 principle and Zipf's law, Internet Interventions 1, 165 (2014).
M. Gasparini, R. Clarisó, M. Brambilla, and J. Cabot, Participation inequality and the 90-9-1 principle in open source, in OpenSym 2020: Proceedings of the 16th International Symposium on Open Collaboration (Association for Computing Machinery, New York, NY, USA, 2020).
A. Antelmi, D. Malandrino, and V. Scarano, Characterizing the behavioral evolution of Twitter users and the truth behind the 90-9-1 rule, in Companion Proceedings of The 2019 World Wide Web Conference, WWW '19 (Association for Computing Machinery, New York, NY, USA, 2019) pp. 1035-1038.
F. Xiong and Y. Liu, Opinion formation on social media: An empirical approach, Chaos: An Interdisciplinary Journal of Nonlinear Science 24, 013130 (2014).
S. Wojcik and A. Hughes, Sizing up Twitter users, https://www.pewresearch.org/internet/2019/04/24/sizing-up-twitter-users/ (2019), Pew Research Center. Last Accessed: 31 May 2021.
M. E. J. Newman, Power laws, Pareto distributions and Zipf's law, Contemporary Physics 46, 323 (2005).
The extreme case c = 0 is degenerate (because no agents update their opinions), and the case c = 1 allows all adjacent agents to interact with each other. We are not interested in examining these cases.
Some researchers use the term "polarization" to refer to the presence of exactly two opinion clusters (or to exactly two major opinion clusters) and "fragmentation" to refer to the presence of three or more opinion clusters (or to three or more major opinion clusters) [20, 62]. However, because we are interested in distinguishing between consensus states and any state that is not a consensus, we use the term "fragmentation" for any state with at least two major opinion clusters. We then quantify the extent of opinion fragmentation.
M. F. Laguna, G. Abramson, and D. H. Zanette, Minorities in a model for opinion formation, Complexity 9, 31 (2004).
A. Bramson, P. Grim, D. J. Singer, S. Fisher, W. Berger, G. Sack, and C. Flocken, Disambiguation of social polarization concepts and measures, The Journal of Mathematical Sociology 40, 80 (2016).
C. Musco, I. Ramesh, J. Ugander, and R. T. Witter, How to quantify polarization in models of opinion dynamics, e-print arXiv:2110.11981 (2021).
J. A. Adams, G. White, and R. P. Araujo, Mathematical measures of societal polarisation, PLoS ONE 17, e0275283 (2022).
W Han, Y Feng, X Qian, Q Yang, C Huang, 10.1016/j.physa.2020.125033Clusters and the entropy in opinion dynamics on complex networks. 559125033W. Han, Y. Feng, X. Qian, Q. Yang, and C. Huang, Clusters and the entropy in opinion dynamics on com- plex networks, Physica A: Statistical Mechanics and its Applications 559, 125033 (2020). The "majority illusion" in social networks. K Lerman, X Yan, X.-Z Wu, 10.1371/journal.pone.0147617PLOS ONE. 11147617K. Lerman, X. Yan, and X.-Z. Wu, The "majority il- lusion" in social networks, PLOS ONE 11, e0147617 (2016). Filter bubbles, echo chambers, and online news consumption. S Flaxman, S Goel, J M Rao, 10.1093/poq/nfw006Public Opinion Quarterly. 80298S. Flaxman, S. Goel, and J. M. Rao, Filter bubbles, echo chambers, and online news consumption, Public Opinion Quarterly 80, 298 (2016). Bifurcations and patterns in compromise processes. E Ben-Naim, P Krapivsky, S Redner, 10.1016/S0167-2789(03)00171-4Physica D: Nonlinear Phenomena. 183190E. Ben-Naim, P. Krapivsky, and S. Redner, Bifurcations and patterns in compromise processes, Physica D: Non- linear Phenomena 183, 190 (2003). Generalized mean-field approximation for the deffuant opinion dynamics model on networks. S C Fennell, K Burke, M Quayle, J P Gleeson, 10.1103/PhysRevE.103.012314Phys. Rev. E. 10312314S. C. Fennell, K. Burke, M. Quayle, and J. P. Gleeson, Generalized mean-field approximation for the deffuant opinion dynamics model on networks, Phys. Rev. E 103, 012314 (2021). Node centrality in weighted networks: Generalizing degree and shortest paths. T Opsahl, F Agneessens, J Skvoretz, 10.1016/j.socnet.2010.03.006Social Networks. 32245T. Opsahl, F. Agneessens, and J. Skvoretz, Node cen- trality in weighted networks: Generalizing degree and shortest paths, Social Networks 32, 245 (2010). Node-weighted measures for complex networks with spatially embedded, sampled, or differently sized nodes. 
J Heitzig, J F Donges, Y Zou, N Marwan, J Kurths, 10.1140/epjb/e2011-20678-7The European Physical Journal B. 8538J. Heitzig, J. F. Donges, Y. Zou, N. Marwan, and J. Kurths, Node-weighted measures for complex networks with spatially embedded, sampled, or differently sized nodes, The European Physical Journal B 85, 38 (2012). Nodeweighted centrality: A new way of centrality hybridization. A Singh, R R Singh, S R S Iyengar, 10.1186/s40649-020-00081-wComputational Social Networks. 76A. Singh, R. R. Singh, and S. R. S. Iyengar, Node- weighted centrality: A new way of centrality hybridiza- tion, Computational Social Networks 7, 6 (2020). A model for the influence of media on the ideology of content in online social networks. H Z Brooks, M A Porter, 10.1103/PhysRevResearch.2.023041Physical Review Research. 223041H. Z. Brooks and M. A. Porter, A model for the influ- ence of media on the ideology of content in online social networks, Physical Review Research 2, 023041 (2020). F Battiston, G Cencetti, I Iacopini, V Latora, M Lucas, A Patania, J.-G Young, G Petri, Networks beyond pairwise interactions: Structure and dynamics. 8741F. Battiston, G. Cencetti, I. Iacopini, V. Latora, M. Lu- cas, A. Patania, J.-G. Young, and G. Petri, Networks beyond pairwise interactions: Structure and dynamics, Physics Reports 874, 1 (2020).
{'fraction_non_alphanumeric': 0.0450083301294374, 'fraction_numerical': 0.029210987227134864, 'mean_word_length': 4.534351505981371, 'pattern_counts': {'":': 0, '<': 8, '<?xml version=': 0, '>': 8, 'https://': 4, 'lorem ipsum': 0, 'www.': 2, 'xml': 0}, 'pii_count': 1, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 18, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'Agent-based models of opinion dynamics allow one to examine the spread of opinions between entities and to study phenomena such as consensus, polarization, and fragmentation. By studying a model of opinion dynamics on a social network, one can explore the effects of network structure on these phenomena. In social networks, some individuals share their ideas and opinions more frequently than others. These disparities can arise from heterogeneous sociabilities, heterogeneous activity levels, different prevalences to share opinions when engaging in a social-media platform, or something else. To examine the impact of such heterogeneities on opinion dynamics, we generalize the Deffuant-Weisbuch (DW) bounded-confidence model (BCM) of opinion dynamics by incorporating node weights. The node weights allow us to model agents with different probabilities of interacting. Using numerical simulations, we systematically investigate (using a variety of network structures and node-weight distributions) the effects of node weights, which we assign uniformly at random to the nodes. We demonstrate that introducing heterogeneous node weights results in longer convergence times and more opinion fragmentation than in a baseline DW model. 
The node weights in our BCM allow one to consider a variety of sociological scenarios in which agents have heterogeneous probabilities of interacting with other agents.', 'arxivid': '2206.09490', 'author': ['Grace J Li ', 'Mason A Porter ', '\nDepartment of Mathematics\nDepartment of Mathematics\nUniversity of California\n90095Los AngelesCaliforniaUSA\n', '\nUniversity of California\n90095Los AngelesCaliforniaUSA\n', '\nand Santa Fe Institute\n87501Santa FeNew MexicoUSA\n'], 'authoraffiliation': ['Department of Mathematics\nDepartment of Mathematics\nUniversity of California\n90095Los AngelesCaliforniaUSA', 'University of California\n90095Los AngelesCaliforniaUSA', 'and Santa Fe Institute\n87501Santa FeNew MexicoUSA'], 'corpusid': 249889972, 'doi': '10.48550/arxiv.2206.09490', 'github_urls': [], 'n_tokens_mistral': 31081, 'n_tokens_neox': 27227, 'n_words': 18325, 'pdfsha': 'c011785d62b03772b7d58d393ae474db3f06b2b9', 'pdfurls': ['https://export.arxiv.org/pdf/2206.09490v2.pdf'], 'title': ['A bounded-confidence model of opinion dynamics with heterogeneous node-activity levels', 'A bounded-confidence model of opinion dynamics with heterogeneous node-activity levels'], 'venue': []}
arxiv
Singular control and optimal stopping of memory mean-field processes

Nacira Agram (1),(2), Achref Bachouch (1), Bernt Øksendal (1) and Frank Proske (1)

(1) Department of Mathematics, University of Oslo, P.O. Box 1053 Blindern, N-0316 Oslo, Norway. Emails: [email protected].
(2) University Mohamed Khider, Biskra, Algeria.

arXiv:1802.05527v2 [math.OC]. MSC 2010: 60.

Keywords: Memory mean-field stochastic differential equation, reflected advanced mean-field backward stochastic differential equation, singular control, optimal stopping.

Abstract. The purpose of this paper is to study the following topics and the relation between them: (i) optimal singular control of mean-field stochastic differential equations with memory, (ii) reflected advanced mean-field backward stochastic differential equations, and (iii) optimal stopping of mean-field stochastic differential equations. More specifically, we do the following:

• We prove the existence and uniqueness of the solutions of some reflected advanced memory backward stochastic differential equations (AMBSDEs),
• we give sufficient and necessary conditions for an optimal singular control of a memory mean-field stochastic differential equation (MMSDE) with partial information, and
• we deduce a relation between the optimal singular control of a MMSDE and the optimal stopping of such processes.

Introduction

Let (Ω, F, P) be a given probability space with filtration F = (F_t)_{t≥0} generated by a 1-dimensional Brownian motion B = B(t, ω); (t, ω) ∈ [0, T] × Ω. Let G = {G_t}_{t≥0} be a given subfiltration of F = (F_t)_{t≥0}, in the sense that G_t ⊂ F_t for all t.
The purpose of this paper is to study the following concepts and problems, and the relation between them. For simplicity of notation we deal only with the 1-dimensional case.

• Topic 1: Optimal singular control of memory mean-field stochastic differential equations.

Consider the following mean-field memory singular controlled system, with a state process X(t) = X^ξ(t) and a singular control process ξ(t), of the form

\[
\begin{cases}
dX(t) = b(t, X(t), X_t, M(t), M_t, \xi(t), \omega)\,dt + \sigma(t, X(t), X_t, M(t), M_t, \xi(t), \omega)\,dB(t) + \lambda(t, \omega)\,d\xi(t); & t \in [0, T], \\
X(t) = \alpha(t); & t \in [-\delta, 0],
\end{cases}
\tag{1.1}
\]

where
• X_t = {X(t − s)}_{0≤s≤δ} (the memory segment of X(t)),
• M(t) = L(X(t)) (the law of X(t)),
• M_t = {M(t − s)}_{0≤s≤δ} (the memory segment of M(t)).

We assume that our control process ξ(t) is an R-valued, right-continuous, G-adapted process, that t → ξ(t) is increasing (non-decreasing) with ξ(0⁻) = 0, and that the corresponding state equation has a unique solution X with ω → X(t, ω) ∈ L²(P) for all t. The set of such processes ξ is denoted by Ξ. For simplicity we will in the following suppress the ω in the notation. The performance functional is assumed to be of the form

\[
J(\xi) = E\Big[\int_0^T f(t, X(t), X_t, M(t), M_t, \xi(t))\,dt + g(X(T), M(T)) + \int_0^T h(t, X(t))\,d\xi(t)\Big]; \quad \xi \in \Xi.
\tag{1.2}
\]

We may interpret these terms as follows: The state X(t) may be regarded as the value at time t of, e.g., a fish population. The control process ξ(t) models the amount harvested up to time t, the coefficient λ(t) is the unit price of the amount harvested, f is a profit rate, g is a bequest or salvage value function, and h is a cost rate for the use of the singular control ξ. The σ-algebra G_t represents the amount of information available to the controller at time t.

The problem we consider is to find an optimal control ξ̂ ∈ Ξ such that J(ξ̂) = sup_{ξ∈Ξ} J(ξ). This problem turns out to be closely related to the following topic:

• Topic 2: Reflected advanced mean-field backward stochastic differential equations.

We study reflected AMBSDEs where at any time t the driver F may depend on future information of the solution processes.
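The controlled dynamics (1.1) can be made concrete with a particle (interacting-system) approximation, in which the law M(t) is replaced by the empirical law of many simulated paths and the memory segment X_t by a lag buffer. The sketch below is only illustrative: the coefficients b, σ, λ and the control ξ are hypothetical choices, not taken from the paper, and the scheme is a plain Euler–Maruyama discretization.

```python
import numpy as np

def simulate_mmsde(n_paths=5000, T=1.0, dt=0.01, delta=0.1, seed=0):
    """Euler-Maruyama particle approximation of a memory mean-field SDE
    of the form (1.1).  The law M(t) is replaced by the empirical law of
    n_paths simulated particles; here the coefficients only use its mean.
    All coefficient choices below (b, sigma, lam, xi) are illustrative."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    lag = int(round(delta / dt))          # memory length in steps
    # initial segment alpha(t) = 1 on [-delta, 0]
    X = np.ones((n_steps + lag + 1, n_paths))
    xi = lambda t: 0.5 * t                # a deterministic, nondecreasing singular control
    lam = lambda t: -1.0                  # unit "price" multiplying d(xi)
    for k in range(n_steps):
        i = k + lag                       # index of time t_k in the padded array
        x_now, x_lag = X[i], X[i - lag]   # X(t) and the delayed state X(t - delta)
        m_now = x_now.mean()              # mean of the empirical law M(t)
        # illustrative coefficients: mean reversion towards E[X(t)] plus delay feedback
        b = -0.5 * (x_now - m_now) + 0.2 * x_lag
        sigma = 0.3
        dB = rng.standard_normal(n_paths) * np.sqrt(dt)
        dxi = xi((k + 1) * dt) - xi(k * dt)
        X[i + 1] = x_now + b * dt + sigma * dB + lam(k * dt) * dxi
    return X[lag:]                        # the path on [0, T]

paths = simulate_mmsde()
print(paths[-1].mean())                   # Monte Carlo estimate of E[X(T)]
```

Replacing M(t) by the empirical measure of the particles is the standard propagation-of-chaos approximation; any functional of the law (not just its mean) could be estimated from the particle cloud in the same way.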
More precisely, for a given driver F, a given threshold (barrier) process S(t) and a given terminal value R, we consider the following type of reflected AMBSDE in the unknown processes Y, Z, K:

\[
\begin{cases}
\text{(i)} & Y(t) = R + \int_t^T F(s, Y(s), Z(s), E[Y_s|\mathcal{F}_s], E[Z_s|\mathcal{F}_s], L(Y_s, Z_s))\,ds + K(T) - K(t) - \int_t^T Z(s)\,dB(s); \quad 0 \le t \le T, \\
\text{(ii)} & Y(t) \ge S(t); \quad 0 \le t \le T, \\
\text{(iii)} & \int_0^T (Y(t) - S(t))\,dK^c(t) = 0 \text{ a.s. and } \Delta K^d(t) = -\Delta Y(t)\,1_{\{Y(t^-)=S(t^-)\}} \text{ a.s.}, \\
\text{(iv)} & Y(t) = R; \quad t \ge T, \\
\text{(v)} & Z(t) = 0; \quad t > T.
\end{cases}
\tag{1.3}
\]

Here L(Y_s, Z_s) is the joint law of the paths (Y_s, Z_s), and for a given positive constant δ we have put Y_s = (Y(s + r))_{r∈[0,δ]} and Z_s = (Z(s + r))_{r∈[0,δ]}. This problem is connected to the following:

• Topic 3: Optimal stopping and its relation to the problems above.

Suppose (Y, Z, K) solves the reflected AMBSDE above.

(i) Then, for t ∈ [0, T], the process Y(t) is the solution of the optimal stopping problem

\[
Y(t) = \operatorname*{ess\,sup}_{\tau \in \mathcal{T}_{[t,T]}} E\Big[\int_t^\tau F(s, Y(s), Z(s), E[Y_s|\mathcal{F}_s], E[Z_s|\mathcal{F}_s], L(Y_s, Z_s))\,ds + S(\tau)1_{\tau<T} + R\,1_{\tau=T} \,\Big|\, \mathcal{F}_t\Big].
\tag{1.4}
\]

(ii) Moreover, for t ∈ [0, T] the solution process K(t) is given by

\[
K(T) - K(T-t) = \max_{s \le t}\Big(R + \int_{T-s}^T F(r, Y(r), Z(r), E[Y_r|\mathcal{F}_r], E[Z_r|\mathcal{F}_r], L(Y_r, Z_r))\,dr - \int_{T-s}^T Z(r)\,dB(r) - S(T-s)\Big)^-,
\tag{1.5}
\]

where x⁻ = max(−x, 0), and an optimal stopping time τ̂_t is given by

τ̂_t := inf{s ∈ [t, T]; Y(s) ≤ S(s)} ∧ T = inf{s ∈ [t, T]; K(s) > K(t)} ∧ T.

(iii) In particular, if we choose t = 0, we get that

τ̂_0 := inf{s ∈ [0, T]; Y(s) ≤ S(s)} ∧ T = inf{s ∈ [0, T]; K(s) > 0} ∧ T

solves the optimal stopping problem

\[
Y(0) = \sup_{\tau \in \mathcal{T}_{[0,T]}} E\Big[\int_0^\tau F(s, Y(s), Z(s), E[Y_s|\mathcal{F}_s], E[Z_s|\mathcal{F}_s], L(Y_s, Z_s))\,ds + S(\tau)1_{\tau<T} + R\,1_{\tau=T}\Big].
\tag{1.6}
\]

More specifically, the content of the paper is the following: In Section 2, we define the spaces of measures and spaces of path segments with their associated norms, and we give the necessary background results for our methods.
In Section 3, we prove existence and uniqueness of the solution for a class of reflected advanced mean-field backward stochastic differential equations. In Section 4, we recall a fundamental connection between a class of reflected AMBSDEs and optimal stopping under partial information. Then in Section 5, we study the problem of optimal singular control of memory mean-field stochastic differential equations. We give sufficient and necessary conditions for optimality in terms of variational inequalities. Finally, in Section 6, we deduce a relation between the following quantities:
(i) The solution of a singular control problem for a mean-field SDE with memory.
(ii) The solution of a coupled system of forward memory & backward advanced mean-field SDEs.
(iii) The solution of an optimal stopping problem involving these quantities.

A Hilbert space of random measures

In this section, we proceed as in Agram and Øksendal [2], [3] and construct a Hilbert space M of random measures on R. It is simpler to work with than the Wasserstein metric space that has been used by many authors previously. See e.g. Carmona et al. [7], [8], Buckdahn et al. [5] and the references therein. Following Agram and Øksendal [2], [3], we now introduce the following Hilbert spaces:

Definition 2.1 Let n be a given natural number. Then we define M̃ = M̃_n to be the pre-Hilbert space of random measures μ on Rⁿ equipped with the norm

‖μ‖²_{M̃_n} := E[∫_{Rⁿ} |μ̂(y)|²(1 + |y|)⁻² dy],

with y = (y₁, y₂, ..., y_n) ∈ Rⁿ, where μ̂ is the Fourier transform of the measure μ, i.e.

μ̂(y) := ∫_{Rⁿ} e^{−ixy} dμ(x); y ∈ Rⁿ,

and xy = x · y = x₁y₁ + x₂y₂ + ... + x_ny_n is the scalar product in Rⁿ.

There are several advantages of working with this Hilbert space M, compared to the Wasserstein metric space:
• A Hilbert space has a useful, stronger structure than a metric space.
• Our space M is easier to work with.
• The Wasserstein metric space P₂ deals only with probability measures with finite second moment, while our Hilbert space deals with any (possibly random) measure μ ∈ M.

Let us give some examples for n = 1:

Example 2.1 (Measures)
1. Suppose that μ = δ_{x₀}, the unit point mass at x₀ ∈ R. Then δ_{x₀} ∈ M₀ and ∫_R e^{−ixy} dμ(x) = e^{−ix₀y}, and hence ‖μ‖²_{M₀} = ∫_R |e^{−ix₀y}|²(1 + |y|)⁻² dy < ∞.
2. Suppose dμ(x) = f(x)dx, where f ∈ L¹(R). Then μ ∈ M₀ and, by the Riemann–Lebesgue lemma, μ̂ ∈ C₀(R), i.e. μ̂ is continuous and μ̂(y) → 0 when |y| → ∞. In particular, |μ̂| is bounded on R, and hence ‖μ‖²_{M₀} = ∫_R |μ̂(y)|²(1 + |y|)⁻² dy < ∞.
3. Suppose that μ is any finite positive measure on R. Then μ ∈ M₀ and |μ̂(y)| ≤ ∫_R dμ(x) = μ(R) < ∞ for all y, and hence ‖μ‖²_{M₀} < ∞.
4. Next, suppose x₀ = x₀(ω) is random. Then δ_{x₀(ω)} is a random measure in M. Similarly, if f(x) = f(x, ω) is random, then dμ(x, ω) = f(x, ω)dx is a random measure in M.

Definition 2.2 (Law process) From now on we use the notation M_t := M(t) := L(X(t)); 0 ≤ t ≤ T, for the law process L(X(t)) of X(t) with respect to the probability P.

We recall the following result (Lemma 2.3) from Agram & Øksendal [2]:

M(t) = M(0) + ∫₀ᵗ M′(s)ds; t ≥ 0.

The following result, based on Agram & Øksendal [3], is essential for our approach:

Lemma 2.5
(i) Let X⁽¹⁾ and X⁽²⁾ be two 2-dimensional random variables in L²(P). Then there exists a constant C₀, not depending on X⁽¹⁾ and X⁽²⁾, such that

‖L(X⁽¹⁾) − L(X⁽²⁾)‖²_{M²₀} ≤ C₀ E[(X⁽¹⁾ − X⁽²⁾)²].

(ii) Let {X⁽¹⁾(t)}_{t∈[0,T]}, {X⁽²⁾(t)}_{t∈[0,T]} be two paths such that E[∫₀ᵀ X⁽ⁱ⁾²(s)ds] < ∞ for i = 1, 2. Then, for all t,

‖L(X⁽¹⁾_t) − L(X⁽²⁾_t)‖²_{M²_{0,δ}} ≤ C₀ E[∫_{−δ}⁰ (X⁽¹⁾(t − s) − X⁽²⁾(t − s))² ds].

Proof.
By definition of the norms and standard properties of the complex exponential function, we have

\[
\begin{aligned}
&\|L(X^{(1)}, X^{(2)}) - L(\tilde X^{(1)}, \tilde X^{(2)})\|^2_{M^2_0} \\
&:= \int_{\mathbb{R}^2} \big|\widehat{L(X^{(1)}, X^{(2)})}(y_1, y_2) - \widehat{L(\tilde X^{(1)}, \tilde X^{(2)})}(y_1, y_2)\big|^2 e^{-y_1^2 - y_2^2}\,dy_1\,dy_2 \\
&= \int_{\mathbb{R}^2} \big|E[e^{-i(X^{(1)}y_1 + X^{(2)}y_2)} - e^{-i(\tilde X^{(1)}y_1 + \tilde X^{(2)}y_2)}]\big|^2 e^{-y_1^2 - y_2^2}\,dy_1\,dy_2 \\
&\le \int_{\mathbb{R}^2} E\big[|e^{-i(X^{(1)}y_1 + X^{(2)}y_2)} - e^{-i(\tilde X^{(1)}y_1 + \tilde X^{(2)}y_2)}|^2\big] e^{-y_1^2 - y_2^2}\,dy_1\,dy_2 \\
&= \int_{\mathbb{R}^2} E\big[(\cos(X^{(1)}y_1 + X^{(2)}y_2) - \cos(\tilde X^{(1)}y_1 + \tilde X^{(2)}y_2))^2 + (\sin(X^{(1)}y_1 + X^{(2)}y_2) - \sin(\tilde X^{(1)}y_1 + \tilde X^{(2)}y_2))^2\big] e^{-y_1^2 - y_2^2}\,dy_1\,dy_2 \\
&\le 2\int_{\mathbb{R}^2} E\big[|(X^{(1)} - \tilde X^{(1)})y_1 + (X^{(2)} - \tilde X^{(2)})y_2|^2\big] e^{-y_1^2 - y_2^2}\,dy_1\,dy_2 \\
&\le 4\int_{\mathbb{R}^2} \big(E[(X^{(1)} - \tilde X^{(1)})^2]\,y_1^2 + E[(X^{(2)} - \tilde X^{(2)})^2]\,y_2^2\big) e^{-y_1^2 - y_2^2}\,dy_1\,dy_2 \\
&\le C_0\,E[(X^{(1)} - \tilde X^{(1)})^2 + (X^{(2)} - \tilde X^{(2)})^2],
\end{aligned}
\]

where we have used that cos and sin are Lipschitz continuous with Lipschitz constant 1, together with the inequality (a + b)² ≤ 2a² + 2b². Similarly, we get that

‖L(X⁽¹⁾_t) − L(X⁽²⁾_t)‖²_{M²_{0,δ}} ≤ ∫_{−δ}⁰ ‖L(X⁽¹⁾(t − s)) − L(X⁽²⁾(t − s))‖²_{M²₀} ds ≤ C₀ E[∫_{−δ}⁰ (X⁽¹⁾(t − s) − X⁽²⁾(t − s))² ds].

Spaces

Throughout this work, we will use the following spaces:
• L² is the space of measurable functions σ: [0, δ] → R such that ‖σ‖²_{L²} := ∫₀^δ |σ(r)|² dr < ∞.
• S² is the set of R-valued F-adapted càdlàg processes (X(t))_{t∈[0,T]} such that ‖X‖²_{S²} := E[sup_{t∈[0,T]} |X(t)|²] < ∞.
• L² is the set of R-valued F-predictable processes (Q(t))_{t∈[0,T]} such that ‖Q‖²_{L²} := E[∫₀ᵀ |Q(t)|² dt] < ∞.
• Ξ is the set of G-adapted, nondecreasing, right-continuous processes ξ with ξ(0⁻) = 0 (the set of admissible singular controls).
• L²(Ω, F_t) is the set of R-valued square-integrable F_t-measurable random variables.
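The norm of Definition 2.1 and the computations of Example 2.1 are easy to check numerically for n = 1. The sketch below evaluates ‖δ_{x₀}‖²_{M₀} = ∫_R (1 + |y|)⁻² dy = 2 by a Riemann sum, and shows that ‖δ_{x₁} − δ_{x₂}‖²_{M₀} shrinks as the two point masses merge; the truncation of the integration range to |y| ≤ 10⁴ is our own choice.

```python
import numpy as np

# Numerical illustration of the norm on the measure space M_0 (n = 1):
# ||mu||^2 = int |mu_hat(y)|^2 (1+|y|)^{-2} dy, with mu_hat the Fourier
# transform of mu.  For a unit point mass delta_{x0}, |mu_hat(y)| = 1, so
# ||delta_{x0}||^2 = int (1+|y|)^{-2} dy = 2 exactly.
dy = 0.01
y = np.arange(-1e4, 1e4, dy)           # truncated integration grid
w = (1.0 + np.abs(y)) ** -2            # the weight in the M_0 norm

norm_delta = w.sum() * dy              # Riemann sum; tail beyond the cutoff ~ 2e-4

def dist2_point_masses(x1, x2):
    """||delta_{x1} - delta_{x2}||^2_{M_0}; the integrand is
    |e^{-i x1 y} - e^{-i x2 y}|^2 = 2(1 - cos((x1 - x2) y))."""
    return (2.0 * (1.0 - np.cos((x1 - x2) * y)) * w).sum() * dy

print(norm_delta)                      # ~ 2.0
print(dist2_point_masses(0.0, 0.1))    # small: x -> delta_x is continuous into M_0
print(dist2_point_masses(0.0, 0.01))   # even smaller, as the point masses merge
```

The same Riemann-sum recipe applies to any measure whose Fourier transform is available in closed form, e.g. a Gaussian law.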
• R is the set of functions r: R₀ → R.

Existence and Uniqueness of Solutions of Reflected AMBSDEs

In this section, we prove existence and uniqueness of solutions of reflected mean-field BSDEs with a generator which is (time-)advanced, in the sense that at any time t the generator may depend on future values up to a positive constant δ, as follows: For a given driver F, terminal value R and barrier (or obstacle) process S, we say that an F-adapted process (Y, Z, K) ∈ S² × L² × Ξ is a solution of the corresponding reflected AMBSDE if the following holds:

\[
\begin{cases}
\text{(i)} & Y(t) = R + \int_t^T F(s, Y(s), Z(s), E[Y_s|\mathcal{F}_s], E[Z_s|\mathcal{F}_s], L(Y_s, Z_s))\,ds + K(T) - K(t) - \int_t^T Z(s)\,dB(s); \quad 0 \le t \le T, \\
\text{(ii)} & Y(t) \ge S(t); \quad 0 \le t \le T, \\
\text{(iii)} & \int_0^T (Y(t) - S(t))\,dK^c(t) = 0 \text{ a.s. and } \Delta K^d(t) = -\Delta Y(t)\,1_{\{Y(t^-)=S(t^-)\}} \text{ a.s.}, \\
\text{(iv)} & Y(t) = R; \quad t \ge T, \\
\text{(v)} & Z(t) = 0; \quad t > T,
\end{cases}
\tag{3.1}
\]

where Y_s = (Y(s + r))_{r∈[0,δ]} and Z_s = (Z(s + r))_{r∈[0,δ]}, the terminal condition R ∈ L²(Ω, F_T), the driver F: [0, T] × Ω × R² × L² × L² × M_{0,δ} → R is F_t-progressively measurable, and K^c and K^d denote the continuous and discontinuous parts of K, respectively. We remark here that, in order to guarantee adaptedness, the time-advanced terms are given under conditional expectation with respect to F_s. Our result can be regarded as an extension of the existing results on advanced BSDEs of Peng & Yang [17], Øksendal et al. [15] and Jeanblanc et al. [11], and we refer to the paper by Quenez and Sulem [18] on reflected BSDEs with a càdlàg obstacle.

To obtain the existence and the uniqueness of a solution, we make the following set of assumptions:

• For the driver F, we assume:
(i) There exists a constant c ∈ R such that |F(·, 0, 0, 0, 0, L(0, 0))| ≤ c, where L(0, 0) is the Dirac measure with mass at zero.
(ii) There exists a constant C^F_{Lip} ∈ R such that, for t ∈ [0, T],

|F(t, y₁, z₁, y₂, z₂, L(y₂, z₂)) − F(t, y₁′, z₁′, y₂′, z₂′, L(y₂′, z₂′))|² ≤ C^F_{Lip}{|y₁ − y₁′|² + |z₁ − z₁′|² + ‖y₂ − y₂′‖²_{L²} + ‖z₂ − z₂′‖²_{L²} + ‖L(y₂, z₂) − L(y₂′, z₂′)‖²_{M_{0,δ}}},

for all y₁, z₁, y₁′, z₁′ ∈ R, y₂, z₂, y₂′, z₂′ ∈ L², and L(y₂, z₂), L(y₂′, z₂′) ∈ M_{0,δ}.

• For the barrier S, we assume:
(iii) The barrier S is a nondecreasing, F-adapted, càdlàg process satisfying E[sup_{t∈[0,T]} |S(t)|²] < ∞, and
(iv) Y(t) ≥ S(t); 0 ≤ t ≤ T.

• For the local time K, we assume:
(v) K is a nondecreasing F-adapted càdlàg process with K(0⁻) = 0, such that ∫₀ᵀ (Y(t) − S(t))dK^c(t) = 0 a.s. and ΔK^d(t) = −ΔY(t)1_{{Y(t⁻)=S(t⁻)}} a.s.

Under these assumptions we have the following existence and uniqueness result:

Theorem. The reflected AMBSDE (3.1) admits a unique solution (Y, Z, K) ∈ S² × L² × Ξ.

Proof. For t ∈ [0, T] and for all β > 0, we define the Hilbert space H²_β to be the set of all (Y, Z) ∈ S² × L², equipped with the norm

‖(Y, Z)‖²_{H²_β} := E[∫₀^{T+δ} e^{βt}(Y²(t) + Z²(t))dt].

Define the mapping Φ: H²_β → H²_β by Φ(y, z) = (Y, Z), where (Y, Z) ∈ S² × L² (⊂ L² × L²) is defined by

\[
\begin{cases}
Y(t) = R + \int_t^T F(s, y(s), z(s), E[y_s|\mathcal{F}_s], E[z_s|\mathcal{F}_s], L(y_s, z_s))\,ds + K(T) - K(t) - \int_t^T Z(s)\,dB(s); & 0 \le t \le T, \\
Y(t) = R; & t \ge T, \\
Z(t) = 0; & t > T.
\end{cases}
\]

To prove the theorem, it suffices to prove that Φ is a contraction mapping in H²_β under the norm ‖·‖_{H²_β} for large enough β. For two arbitrary elements (y₁, z₁, k₁) and (y₂, z₂, k₂), we denote their difference by (ỹ, z̃, k̃) = (y₁ − y₂, z₁ − z₂, k₁ − k₂), and similarly (Ỹ, Z̃, K̃) for the difference of their images under Φ. Applying the Itô formula for semimartingales, we get

E[∫₀ᵀ e^{βt}(βỸ²(t) + Z̃²(t))dt] = 2E[∫₀ᵀ e^{βt} Ỹ(t){F(t, y₁(t), z₁(t), E[y¹_t|F_t], E[z¹_t|F_t], L(y¹_t, z¹_t)) − F(t, y₂(t), z₂(t), E[y²_t|F_t], E[z²_t|F_t], L(y²_t, z²_t))}dt] + 2E[∫₀ᵀ e^{βt} Ỹ(t)dK₁(t)] − 2E[∫₀ᵀ e^{βt} Ỹ(t)dK₂(t)].
We have that

Ỹ(t)dK^{1,c}(t) = (Y₁(t) − S(t))dK^{1,c}(t) − (Y₂(t) − S(t))dK^{1,c}(t) = −(Y₂(t) − S(t))dK^{1,c}(t) ≤ 0 a.s.,

and by symmetry we also have Ỹ(t)dK^{2,c}(t) ≥ 0 a.s. For the discontinuous parts we have, in the same way,

Ỹ(t)dK^{1,d}(t) = (Y₁(t) − S(t))dK^{1,d}(t) − (Y₂(t) − S(t))dK^{1,d}(t) = −(Y₂(t) − S(t))dK^{1,d}(t) ≤ 0 a.s.,

and by symmetry Ỹ(t)dK^{2,d}(t) ≥ 0 a.s. By the Lipschitz assumption (ii) and standard estimates, it follows that

E[∫₀ᵀ e^{βt}(βỸ²(t) + Z̃²(t))dt] ≤ 8ρC²E[∫₀ᵀ e^{βt}Ỹ²(t)dt] + (1/2ρ)E[∫₀ᵀ e^{βt}(ỹ²(t) + z̃²(t) + ∫₀^δ (ỹ²(t + r) + z̃²(t + r))dr)dt],

where C = C^F_{Lip}. By the change of variable s = t + r, we get

E[∫₀ᵀ e^{βt}∫₀^δ (ỹ²(t + r) + z̃²(t + r))dr dt] ≤ E[∫₀ᵀ e^{βt}∫_t^{t+δ} (ỹ²(s) + z̃²(s))ds dt].

Fubini's theorem gives that

E[∫₀ᵀ e^{βt}∫₀^δ (ỹ²(t + r) + z̃²(t + r))dr dt] ≤ E[∫₀^{T+δ} (∫_{s−δ}^s e^{βt}dt)(ỹ²(s) + z̃²(s))ds] ≤ E[∫₀^{T+δ} e^{βs}(ỹ²(s) + z̃²(s))ds].

Consequently, by choosing β = 1 + 8ρC², we have

E[∫₀ᵀ e^{βt}(Ỹ²(t) + Z̃²(t))dt] ≤ (1/ρ)E[∫₀^{T+δ} e^{βt}(ỹ²(t) + z̃²(t))dt].

Since Ỹ(t) = Z̃(t) = 0 for t > T, we get

‖(Ỹ, Z̃)‖²_{H²_β} ≤ (1/ρ)‖(ỹ, z̃)‖²_{H²_β}.

For ρ > 1, we get that Φ is a contraction on H²_β. □

Reflected AMBSDEs and optimal stopping under partial information

In this section we recall a connection between reflected AMBSDEs and optimal stopping problems under partial information.

Definition 4.1 Let F: Ω × [0, T] × R² × L² × L² × M_{0,δ} → R be a given function. Assume that:
• F is G-adapted and |F(t, 0, 0, 0, 0, L(0, 0))| < c for all t, for some constant c;
• S(t) is a given F-adapted, càdlàg, nondecreasing process such that E[sup_{t∈[0,T]} (S(t))²] < ∞;
• the terminal value R ∈ L²(Ω, F_T) is such that R ≥ S(T) a.s.

We say that a G-adapted triplet (Y, Z, K) is a solution of the reflected AMBSDE with driver F, terminal value R and reflecting barrier S(t) under the filtration G, if the following hold:

1.
E[∫₀ᵀ |F(s, Y(s), Z(s), E[Y_s|F_s], E[Z_s|F_s], L(Y_s, Z_s))|² ds] < ∞;
2. Z(t) is a G-martingale;
3. Y(t) = R + ∫_t^T F(s, Y(s), Z(s), E[Y_s|F_s], E[Z_s|F_s], L(Y_s, Z_s))ds − ∫_t^T dK(s) − ∫_t^T dZ(s); t ∈ [0, T], or, equivalently,

Y(t) = E[R + ∫_t^T F(s, Y(s), Z(s), E[Y_s|F_s], E[Z_s|F_s], L(Y_s, Z_s))ds − ∫_t^T dK(s) | G_t]; t ∈ [0, T];

4. K(t) is nondecreasing, G-adapted, càdlàg and K(0⁻) = 0;
5. Y(t) ≥ S(t) a.s.; t ∈ [0, T];
6. ∫₀ᵀ (Y(t) − S(t))dK(t) = 0 a.s.

The following result is essentially due to El Karoui et al. [10]. See also Øksendal & Sulem [14] and Øksendal & Zhang [16]. Suppose (Y, Z, K) is a solution of the reflected AMBSDE of Definition 4.1.

(i) Then the process Y(t) solves the optimal stopping problem

Y(t) = ess sup_{τ∈T_{[t,T]}} {E[∫_t^τ F(s, Y(s), Z(s), Y_s, Z_s, L(Y_s, Z_s))ds + S(τ)1_{τ<T} + R1_{τ=T} | G_t]}; t ∈ [0, T].

(ii) Moreover, the solution process K(t) is given by

K(T) − K(T − t) = max_{s≤t}(R + ∫_{T−s}^T F(r, Y(r), Z(r), E[Y_r|F_r], E[Z_r|F_r], L(Y_r, Z_r))dr − ∫_{T−s}^T dZ(r) − S(T − s))⁻; t ∈ [0, T], (4.1)

where x⁻ = max(−x, 0), and an optimal stopping time τ̂_t is given by

τ̂_t := inf{s ∈ [t, T]; Y(s) ≤ S(s)} ∧ T = inf{s ∈ [t, T]; K(s) > K(t)} ∧ T.

(iii) In particular, if we choose t = 0, we get that

τ̂_0 := inf{s ∈ [0, T]; Y(s) ≤ S(s)} ∧ T = inf{s ∈ [0, T]; K(s) > 0} ∧ T

solves the optimal stopping problem

Y(0) = sup_{τ∈T_{[0,T]}} E[∫₀^τ F(s, Y(s), Z(s), E[Y_s|F_s], E[Z_s|F_s], L(Y_s, Z_s))ds + S(τ)1_{τ<T} + R1_{τ=T}].
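The characterization in (i)–(iii) — Y dominates the obstacle, and the first time Y touches S is an optimal stopping time — has a classical discrete-time analogue: the Snell envelope computed by backward induction. The toy sketch below (driver F ≡ 0, a symmetric binomial walk, and a "put"-type obstacle, all our own choices) illustrates this recursion; it is not the paper's mean-field setting.

```python
import numpy as np

# Discrete-time analogue of the optimal stopping characterization: backward
# induction for the Snell envelope Y of an obstacle S on a symmetric binomial
# walk.  (Toy setting: driver F = 0, terminal value R = S(T).)
T_steps, strike = 50, 0.0
# node (t, j) has walk value x = -t + 2j, with j = 0..t
S = lambda t, j: max(strike - (-t + 2 * j), 0.0)   # obstacle: a "put"-type payoff

# backward induction: Y(T) = S(T), Y(t) = max(S(t), E[Y(t+1) | F_t])
Y = [np.array([S(T_steps, j) for j in range(T_steps + 1)])]
for t in range(T_steps - 1, -1, -1):
    nxt = Y[-1]
    cont = 0.5 * (nxt[:-1] + nxt[1:])              # E[Y(t+1)|F_t]: down/up with prob 1/2
    Y.append(np.maximum([S(t, j) for j in range(t + 1)], cont))
Y.reverse()                                        # Y[t][j] = Snell envelope at node (t, j)

print(Y[0][0])   # value sup_tau E[S(tau)] of the optimal stopping problem
```

In this discrete picture, the increments of K correspond to the amount by which the conditional expectation of Y(t+1) falls below the obstacle, i.e. the "push" needed to keep Y above S; the first node where Y[t][j] equals S(t, j) is the discrete analogue of τ̂.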
Optimal singular control of memory mean-field SDEs

We now return to the singular control problem stated in the Introduction.

Problem statement

Consider the following mean-field memory singular controlled system, with a state process X(t) = X^ξ(t) and a singular control process ξ(t), of the form

\[
\begin{cases}
dX(t) = b(t, X(t), X_t, M(t), M_t, \xi(t))\,dt + \sigma(t, X(t), X_t, M(t), M_t, \xi(t))\,dB(t) + \lambda(t)\,d\xi(t); & t \in [0, T], \\
X(t) = \alpha(t); & t \in [-\delta, 0],
\end{cases}
\tag{5.1}
\]

where X_t = {X(t − s)}_{0≤s≤δ}, M(t) = L(X(t)), M_t = {M(t − s)}_{0≤s≤δ}, and

b, σ: Ω × [0, T] × R × L² × M₀ × M_{0,δ} × R × Ξ → R, λ: [0, T] → R.

We assume that our control process ξ(t) is an R-valued, right-continuous, G-adapted process, that t → ξ(t) is increasing (nondecreasing) with ξ(0⁻) = 0, and that the corresponding state equation has a unique solution X with ω → X(t, ω) ∈ L²(P) for all t. The set of such processes ξ is denoted by Ξ. The performance functional is assumed to be of the form

\[
J(\xi) = E\Big[\int_0^T f(t, X(t), X_t, M(t), M_t, \xi(t))\,dt + g(X(T), M(T)) + \int_0^T h(t, X(t))\,d\xi(t)\Big]; \quad \xi \in \Xi,
\tag{5.2}
\]

where f: Ω × [0, T] × R × L² × M₀ × M_{0,δ} × R × Ξ → R, h: Ω × [0, T] × R → R and g: Ω × R × M₀ → R. The problem we consider is the following:

Problem 5.1 Find an optimal control ξ̂ ∈ Ξ such that

J(ξ̂) = sup_{ξ∈Ξ} J(ξ). (5.3)

First we explain some notation and introduce some useful dual operators. Let L²₀ denote the set of measurable stochastic processes Y(t) on R such that Y(t) = 0 for t < 0 and for t > T, and E[∫₀ᵀ Y²(t)dt] < ∞.

• Let G(t, x̄) = G_{x̄}(t, ·): [0, T] × L² → R be a bounded linear functional on L² for each t, uniformly bounded in t. Then the map

Y ↦ E[∫₀ᵀ ⟨G_{x̄}(t), Y_t⟩dt]; Y ∈ L²₀

is a bounded linear functional on L²₀. Therefore, by the Riesz representation theorem, there exists a unique process denoted by G*_{x̄}(t) ∈ L²₀ such that

E[∫₀ᵀ ⟨G_{x̄}(t), Y_t⟩dt] = E[∫₀ᵀ G*_{x̄}(t)Y(t)dt] (5.4)

for all Y ∈ L²₀.
We illustrate these operators by some auxiliary results.

Lemma 5.2 Consider the case when G_{x̄}(t, ·) = ⟨F, ·⟩p(t), with p ∈ L²₀. Then

G*_{x̄}(t) := ⟨F, p_t⟩ (5.5)

satisfies (5.4), where p_t := {p(t + r)}_{r∈[0,δ]}.

Proof. We must verify that if we define G*_{x̄}(t) by (5.5), then (5.4) holds. To this end, choose Y ∈ L²₀ and consider

∫₀ᵀ ⟨F, p_t⟩Y(t)dt = ∫₀ᵀ ⟨F, {p(t + r)}_{r∈[0,δ]}⟩Y(t)dt = ∫₀ᵀ ⟨F, {Y(t)p(t + r)}_{r∈[0,δ]}⟩dt = ⟨F, {∫_r^{T+r} Y(u − r)p(u)du}_{r∈[0,δ]}⟩ = ⟨F, {∫₀ᵀ Y(u − r)p(u)du}_{r∈[0,δ]}⟩ = ∫₀ᵀ ⟨F, Y_u⟩p(u)du = ∫₀ᵀ ⟨G_{x̄}(u), Y_u⟩du. □

Example 5.1 (i) For example, if a ∈ R^{[0,δ]} is a bounded function and F(x̄) is the averaging operator defined by

F(x̄) = ⟨F, x̄⟩ = ∫_{−δ}⁰ a(s)x̄(s)ds when x̄ = {x̄(s)}_{s∈[0,δ]},

then ⟨F, p_t⟩ = ∫₀^δ a(r)p(t + r)dr.

(ii) Similarly, if t₀ ∈ [0, δ] and G is evaluation at t₀, i.e. G(x̄) = x̄(t₀) when x̄ = {x̄(s)}_{s∈[0,δ]}, then ⟨G, p_t⟩ = p(t + t₀).

We now have the machinery to start working on Problem 5.1. Let M be the set of all random measures on [0, T]. Define the (singular) Hamiltonian

H: [0, T] × R × L² × M₀ × M_{0,δ} × Ξ × R × R × C_a([0, T], M₀) → M

as the following random measure:

dH(t) = dH(t, x, x̄, m, m̄, ξ, p₀, q₀, p₁) = H₀(t, x, x̄, m, m̄, ξ, p₀, q₀, p₁)dt + {λ(t)p₀ + h(t, x)}dξ(t), (5.6)

where

H₀(t, x, x̄, m, m̄, ξ, p₀, q₀, p₁) := f(t, x, x̄, m, m̄, ξ) + b(t, x, x̄, m, m̄, ξ)p₀ + σ(t, x, x̄, m, m̄, ξ)q₀ + ⟨p₁, β(m)⟩, (5.7)

and β(m) is defined below. Here m denotes a generic value of the measure M(t). We assume that f, b, σ, h and g are Fréchet differentiable (C¹) in the variables x, x̄, m, m̄, ξ. Then the same holds for H₀ and H.
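The duality ∫₀ᵀ⟨F, p_t⟩Y(t)dt = ∫₀ᵀ⟨G_{x̄}(u), Y_u⟩du proved in Lemma 5.2 can be verified numerically for the averaging operator of Example 5.1(i). In the sketch below the functions a, p and Y are arbitrary illustrative choices, with p and Y vanishing outside [0, T] as required for membership in L²₀; the two sides are computed as Riemann double sums.

```python
import numpy as np

# Numerical check of the duality in Lemma 5.2 for the averaging operator
# of Example 5.1(i):  int_0^T <F, p_t> Y(t) dt  =  int_0^T <F, Y_u> p(u) du,
# where <F, q> = int_0^delta a(r) q(r) dr.  The functions a, p, Y are
# arbitrary illustrative choices; p and Y vanish outside [0, T].
T, delta, dt = 1.0, 0.2, 1e-3
t = np.arange(0.0, T, dt)              # grid on [0, T)
r = np.arange(0.0, delta, dt)          # grid on [0, delta)
a = np.cos(r)
p = lambda s: np.where((s >= 0) & (s < T), np.exp(-np.clip(s, 0, T)), 0.0)
Y = lambda s: np.where((s >= 0) & (s < T), np.sin(np.pi * np.clip(s, 0, T)), 0.0)

tt, rr = np.meshgrid(t, r, indexing="ij")      # (len(t), len(r)) grids
lhs = np.sum(np.sum(a * p(tt + rr), axis=1) * dt * Y(t)) * dt   # <F, p_t> Y(t)
rhs = np.sum(np.sum(a * Y(tt - rr), axis=1) * dt * p(t)) * dt   # <F, Y_u> p(u)
print(lhs, rhs)                        # agree up to discretization error
```

The agreement reflects exactly the change of variable u = t + r and Fubini argument of the proof; since p vanishes beyond T, no boundary terms appear.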
We define the adjoint processes (p₀, q₀) and (p₁, q₁) as the solutions of the following BSDEs, respectively:

\[
\begin{cases}
dp_0(t) = -\big(\frac{\partial H_0}{\partial x}(t) + E[\nabla^*_{\bar x} H_0(t)|\mathcal{F}_t]\big)dt - \frac{\partial h}{\partial x}(t)\,d\xi(t) + q_0(t)\,dB(t); & t \in [0, T], \\
p_0(t) = \frac{\partial g}{\partial x}(T); & t \ge T, \\
q_0(t) = 0; & t > T,
\end{cases}
\tag{5.8}
\]

and

\[
\begin{cases}
dp_1(t) = -\{\nabla_m H_0(t) + E[\nabla^*_{\bar m} H_0(t)|\mathcal{F}_t]\}dt + q_1(t)\,dB(t); & t \in [0, T], \\
p_1(t) = \nabla_m g(T); & t \ge T, \\
q_1(t) = 0; & t > T,
\end{cases}
\tag{5.9}
\]

where g(T) = g(X(T), M(T)) and

H₀(t) = H₀(t, x, x̄, m, m̄, ξ, p₀, q₀, p₁)|_{x=X(t), x̄=X_t, m=M(t), m̄=M_t, ξ=ξ(t), p₀=p₀(t), q₀=q₀(t), p₁=p₁(t)}.

Here ∇_m H₀ is the Fréchet derivative of H₀ with respect to m, and ∇*_{m̄} H₀ is defined similarly to ∇*_{x̄} H₀.

A sufficient maximum principle for singular mean-field control with partial information

We proceed to state a sufficient maximum principle (a verification theorem) for the singular mean-field control problem described by (5.1)–(5.3). Because of the mean-field terms, it is natural to consider the two-dimensional system (X(t), M(t)), where the dynamics of M(t) are

dM(t) = β(M(t))dt, M(0) ∈ M₀,

where we have put β(M(t)) = M′(t). See Lemma 2.3.

Theorem 5.3 (Sufficient maximum principle for mean-field singular control) Let ξ̂ ∈ Ξ be such that the system of (5.1) and (5.8)–(5.9) has a solution X̂(t), p̂₀(t), q̂₀(t), p̂₁(t), q̂₁(t), and set M̂(t) = L(X̂(t)). Suppose the following conditions hold:

• (The concavity assumptions) The functions

R × L² × M₀ × M_{0,δ} × Ξ ∋ (x, x̄, m, m̄, ξ) ↦ dH(t, x, x̄, m, m̄, ξ, p̂₀(t), q̂₀(t), p̂₁(t), q̂₁(t)) and R × M₀ ∋ (x, m) ↦ g(x, m)

are concave for all t ∈ [0, T] and almost all ω ∈ Ω. (5.10)

• (Conditional variational inequality) For all ξ ∈ Ξ we have E[dH(t)|G_t] ≤ E[dĤ(t)|G_t], i.e.
E[H₀(t)|G_t]dt + E[λ(t)p̂₀(t) + ĥ(t)|G_t]dξ(t) ≤ E[Ĥ₀(t)|G_t]dt + E[λ(t)p̂₀(t) + ĥ(t)|G_t]dξ̂(t), (5.11)

where the inequality is interpreted in the sense of inequality between random measures in M. Then ξ̂(t) is an optimal control for J(ξ).

Proof. Choose ξ ∈ Ξ and consider

J(ξ) − J(ξ̂) = I₁ + I₂ + I₃, (5.12)

where

I₁ = E[∫₀ᵀ {f(t) − f̂(t)}dt], I₂ = E[g(T) − ĝ(T)], I₃ = E[∫₀ᵀ h(t)dξ(t) − ĥ(t)dξ̂(t)].

By the definition of the Hamiltonian (5.7) we have

I₁ = E[∫₀ᵀ {H₀(t) − Ĥ₀(t) − p̂₀(t)b̃(t) − q̂₀(t)σ̃(t) − ⟨p̂₁(t), M̃′(t)⟩}dt], (5.13)

where X̃(t) = X(t) − X̂(t), b̃(t) = b(t) − b̂(t), σ̃(t) = σ(t) − σ̂(t) and M̃(t) = M(t) − M̂(t). Applying the Itô formula to p̂₀(t)X̃(t) and ⟨p̂₁(t), M̃(t)⟩, we get

I₂ ≤ E[p̂₀(T)X̃(T) + ⟨p̂₁(T), M̃(T)⟩]
= E[∫₀ᵀ p̂₀(t)dX̃(t) + ∫₀ᵀ X̃(t)dp̂₀(t) + ∫₀ᵀ q̂₀(t)σ̃(t)dt] + E[∫₀ᵀ ⟨p̂₁(t), dM̃(t)⟩ + ∫₀ᵀ M̃(t)dp̂₁(t)]
= E[∫₀ᵀ p̂₀(t)b̃(t)dt − ∫₀ᵀ (∂Ĥ₀/∂x)(t)X̃(t)dt − ∫₀ᵀ E[∇*_{x̄}Ĥ₀(t)|F_t]X̃(t)dt − ∫₀ᵀ (∂ĥ/∂x)(t)X̃(t)dξ̂(t) + ∫₀ᵀ q̂₀(t)σ̃(t)dt + ∫₀ᵀ ⟨p̂₁(t), M̃′(t)⟩dt − ∫₀ᵀ ⟨∇_mĤ₀(t), M̃(t)⟩dt − ∫₀ᵀ E[∇*_{m̄}Ĥ₀(t)|F_t]M̃(t)dt], (5.14)

where we have used that the dB(t) integrals, having the necessary integrability properties, are martingales and hence have mean zero. Substituting (5.13) and (5.14) in (5.12) yields

J(ξ) − J(ξ̂) ≤ E[∫₀ᵀ {H₀(t) − Ĥ₀(t) − (∂Ĥ₀/∂x)(t)X̃(t) − ⟨∇_{x̄}Ĥ₀(t), X̃_t⟩ − ⟨∇_mĤ₀(t), M̃(t)⟩ − ⟨∇_{m̄}Ĥ₀(t), M̃_t⟩}dt + ∫₀ᵀ h(t)dξ(t) − ∫₀ᵀ ĥ(t)dξ̂(t) − ∫₀ᵀ (∂ĥ/∂x)(t)X̃(t)dξ̂(t) + ∫₀ᵀ (λ(t)p̂₀(t) + h(t))dξ(t) − ∫₀ᵀ (λ(t)p̂₀(t) + ĥ(t))dξ̂(t) − ∫₀ᵀ (λ(t)p̂₀(t) + h(t))dξ(t) + ∫₀ᵀ (λ(t)p̂₀(t) + ĥ(t))dξ̂(t)].

By the concavity of dH and the fact that the process ξ is G-adapted, we obtain

J(ξ) − J(ξ̂) ≤ E[∫₀ᵀ (∂Ĥ₀/∂ξ)(t)(ξ(t) − ξ̂(t))dt + ∫₀ᵀ (λ(t)p̂₀(t) + ĥ(t))(dξ(t) − dξ̂(t))]
= E[∫₀ᵀ E[(∂Ĥ₀/∂ξ)(t)|G_t](ξ(t) − ξ̂(t))dt + ∫₀ᵀ E[λ(t)p̂₀(t) + ĥ(t)|G_t](dξ(t) − dξ̂(t))] ≤ 0,

where ∂Ĥ₀/∂ξ = ∇_ξĤ₀. The last inequality holds because ξ = ξ̂ maximizes the random measure dH(t, X̂(t), X̂_t, M̂(t), M̂_t, ξ, p̂₀(t), q̂₀(t), p̂₁(t)) at ξ = ξ̂.
From the above result, we can deduce the following sufficient variational inequalities.

Theorem 5.4 (Sufficient variational inequalities) Suppose that $H_0$ does not depend on $\xi$, i.e. that $\frac{\partial H_0}{\partial \xi} = 0$, and that the following variational inequalities hold:
$$
\text{(i)}\quad E[\lambda(t)\hat{p}_0(t) + h(t, \hat{X}(t))\,|\,\mathcal{G}_t] \leq 0,
\tag{5.15}
$$
$$
\text{(ii)}\quad E[\lambda(t)\hat{p}_0(t) + h(t, \hat{X}(t))\,|\,\mathcal{G}_t]\,d\hat{\xi}(t) = 0.
\tag{5.16}
$$
Then $\hat{\xi}$ is an optimal singular control.

Proof. Suppose (5.15)-(5.16) hold. Then for $\xi \in \Xi$ we have
$$
E[\lambda(t)\hat{p}_0(t) + h(t, \hat{X}(t))\,|\,\mathcal{G}_t]\,d\xi(t) \leq 0 = E[\lambda(t)\hat{p}_0(t) + h(t, \hat{X}(t))\,|\,\mathcal{G}_t]\,d\hat{\xi}(t).
$$
Since $H_0$ does not depend on $\xi$, it follows that (5.11) holds.

A necessary maximum principle for singular mean-field control

In the previous section we gave a verification theorem, stating that if a given control $\hat{\xi}$ satisfies (5.10)-(5.11), then it is indeed optimal for the singular mean-field control problem. We now establish a partial converse, implying that if a control $\hat{\xi}$ is optimal for the singular mean-field control problem, then it is a conditional critical point for the Hamiltonian.

For $\xi \in \Xi$, let $\mathcal{V}(\xi)$ denote the set of $\mathbb{G}$-adapted processes $\eta$ of finite variation such that there exists $\varepsilon = \varepsilon(\xi) > 0$ satisfying $\xi + a\eta \in \Xi$ for all $a \in [0, \varepsilon]$. Note that the following processes $\eta_i(s)$, $i = 1, 2, 3$, belong to $\mathcal{V}(\xi)$:
$$
\eta_1(s) := \alpha(\omega)\chi_{[t,T]}(s), \text{ where } t \in [0,T] \text{ and } \alpha > 0 \text{ is } \mathcal{G}_t\text{-measurable}, \qquad
\eta_2(s) := \xi(s), \qquad
\eta_3(s) := -\xi(s), \qquad s \in [0,T].
$$
Then for $\xi \in \Xi$ and $\eta \in \mathcal{V}(\xi)$ we have, by our smoothness assumptions on the coefficients,
$$
\begin{aligned}
\lim_{a \to 0^+} \frac{1}{a}\big(J(\xi + a\eta) - J(\xi)\big)
&= E\Big[\int_0^T \Big\{\frac{\partial f}{\partial x}(t)Z(t) + \langle \nabla_{\bar{x}} f(t), Z_t\rangle + \langle \nabla_m f(t), DM(t)\rangle + \langle \nabla_{\bar{m}} f(t), DM_t\rangle\Big\}\,dt + \int_0^T \frac{\partial f}{\partial \xi}(t)\eta(t)\,dt\\
&\qquad + \frac{\partial g}{\partial x}(T)Z(T) + \langle \nabla_m g(T), DM(T)\rangle + \int_0^T \frac{\partial h}{\partial x}(t)Z(t)\,d\xi(t) + \int_0^T h(t)\,d\eta(t)\Big],
\end{aligned}
\tag{5.18}
$$
where
$$
Z(t) := Z^\eta(t) := \lim_{a \to 0^+} \frac{1}{a}\big(X^{(\xi + a\eta)}(t) - X^{(\xi)}(t)\big), \qquad
Z_t := Z_t^\eta := \lim_{a \to 0^+} \frac{1}{a}\big(X_t^{(\xi + a\eta)} - X_t^{(\xi)}\big),
\tag{5.19}
$$
and
$$
DM(t) := D^\eta M(t) := \lim_{a \to 0^+} \frac{1}{a}\big(M^{(\xi + a\eta)}(t) - M^{(\xi)}(t)\big), \qquad
DM_t := D^\eta M_t := \lim_{a \to 0^+} \frac{1}{a}\big(M_t^{(\xi + a\eta)} - M_t^{(\xi)}\big).
$$
Then
$$
\begin{cases}
dZ(t) = \Big[\dfrac{\partial b}{\partial x}(t)Z(t) + \langle \nabla_{\bar{x}} b(t), Z_t\rangle + \langle \nabla_m b(t), DM(t)\rangle + \langle \nabla_{\bar{m}} b(t), DM_t\rangle + \dfrac{\partial b}{\partial \xi}(t)\eta(t)\Big]\,dt\\[4pt]
\qquad\quad + \Big[\dfrac{\partial \sigma}{\partial x}(t)Z(t) + \langle \nabla_{\bar{x}} \sigma(t), Z_t\rangle + \langle \nabla_m \sigma(t), DM(t)\rangle + \langle \nabla_{\bar{m}} \sigma(t), DM_t\rangle + \dfrac{\partial \sigma}{\partial \xi}(t)\eta(t)\Big]\,dB(t) + \lambda(t)\,d\eta(t);\\[4pt]
Z(0) = 0,
\end{cases}
$$
and similarly for $dZ_t$, $dDM(t)$ and $dDM_t$.

We first state and prove a basic step towards a necessary maximum principle.

Proposition 5.5 Let $\xi \in \Xi$ and choose $\eta \in \mathcal{V}(\xi)$. Then
$$
\frac{d}{da} J(\xi + a\eta)\Big|_{a=0} = E\Big[\int_0^T \frac{\partial H_0}{\partial \xi}(t)\eta(t)\,dt + \int_0^T \{\lambda(t)p_0(t) + h(t)\}\,d\eta(t)\Big].
\tag{5.21}
$$

Proof. Let $\xi \in \Xi$ and $\eta \in \mathcal{V}(\xi)$. Then we can write
$$
\frac{d}{da} J(\xi + a\eta)\Big|_{a=0} = A_1 + A_2 + A_3 + A_4,
\tag{5.22}
$$
where
$$
\begin{aligned}
A_1 &= E\Big[\int_0^T \Big\{\frac{\partial f}{\partial x}(t)Z(t) + \langle \nabla_{\bar{x}} f(t), Z_t\rangle + \langle \nabla_m f(t), DM(t)\rangle + \langle \nabla_{\bar{m}} f(t), DM_t\rangle\Big\}\,dt\Big],\\
A_2 &= E\Big[\int_0^T \frac{\partial f}{\partial \xi}(t)\eta(t)\,dt\Big],\\
A_3 &= E\Big[\frac{\partial g}{\partial x}(T)Z(T) + \langle \nabla_m g(T), DM(T)\rangle\Big],\\
A_4 &= E\Big[\int_0^T \frac{\partial h}{\partial x}(t)Z(t)\,d\xi(t) + \int_0^T h(t)\,d\eta(t)\Big].
\end{aligned}
$$
By the definition of $H_0$ we have
$$
\begin{aligned}
A_1 &= E\Big[\int_0^T Z(t)\Big\{\frac{\partial H_0}{\partial x}(t) - \frac{\partial b}{\partial x}(t)p_0(t) - \frac{\partial \sigma}{\partial x}(t)q_0(t)\Big\}\,dt
+ \int_0^T \big\langle \nabla_{\bar{x}} H_0(t) - \nabla_{\bar{x}} b(t)p_0(t) - \nabla_{\bar{x}} \sigma(t)q_0(t),\, Z_t\big\rangle\,dt\\
&\qquad + \int_0^T \big\langle \nabla_m H_0(t) - \nabla_m b(t)p_0(t) - \nabla_m \sigma(t)q_0(t),\, DM(t)\big\rangle\,dt
+ \int_0^T \big\langle \nabla_{\bar{m}} H_0(t) - \nabla_{\bar{m}} b(t)p_0(t) - \nabla_{\bar{m}} \sigma(t)q_0(t),\, DM_t\big\rangle\,dt\Big],
\end{aligned}
\tag{5.23}
$$
and
$$
A_2 = E\Big[\int_0^T \Big\{\frac{\partial H_0}{\partial \xi}(t) - \frac{\partial b}{\partial \xi}(t)p_0(t) - \frac{\partial \sigma}{\partial \xi}(t)q_0(t)\Big\}\eta(t)\,dt\Big].
$$
By the terminal conditions of $p_0(T)$, $p_1(T)$ (see (5.8)-(5.9)) and the Itô formula, we have
$$
\begin{aligned}
A_3 &= E[p_0(T)Z(T) + \langle p_1(T), DM(T)\rangle]\\
&= E\Big[\int_0^T p_0(t)\,dZ(t) + \int_0^T Z(t)\,dp_0(t)\\
&\qquad + \int_0^T q_0(t)\Big\{\frac{\partial \sigma}{\partial x}(t)Z(t) + \langle \nabla_{\bar{x}} \sigma(t), Z_t\rangle + \langle \nabla_m \sigma(t), DM(t)\rangle + \langle \nabla_{\bar{m}} \sigma(t), DM_t\rangle + \frac{\partial \sigma}{\partial \xi}(t)\eta(t)\Big\}\,dt\\
&\qquad + \int_0^T \langle p_1(t), dDM(t)\rangle + \int_0^T \langle DM(t), dp_1(t)\rangle\Big]\\
&= E\Big[\int_0^T p_0(t)\Big\{\frac{\partial b}{\partial x}(t)Z(t) + \langle \nabla_{\bar{x}} b(t), Z_t\rangle + \langle \nabla_m b(t), DM(t)\rangle + \langle \nabla_{\bar{m}} b(t), DM_t\rangle + \frac{\partial b}{\partial \xi}(t)\eta(t)\Big\}\,dt\\
&\qquad + \int_0^T q_0(t)\Big\{\frac{\partial \sigma}{\partial x}(t)Z(t) + \langle \nabla_{\bar{x}} \sigma(t), Z_t\rangle + \langle \nabla_m \sigma(t), DM(t)\rangle + \langle \nabla_{\bar{m}} \sigma(t), DM_t\rangle + \frac{\partial \sigma}{\partial \xi}(t)\eta(t)\Big\}\,dt\\
&\qquad + \int_0^T p_0(t)\lambda(t)\,d\eta(t) - \int_0^T Z(t)\Big\{\frac{\partial H_0}{\partial x}(t) + E(\nabla_x^* H_0(t)\,|\,\mathcal{F}_t)\Big\}\,dt\\
&\qquad - \int_0^T \big\langle \nabla_m H_0(t) + E[\nabla_m^* H_0(t)\,|\,\mathcal{F}_t],\, DM(t)\big\rangle\,dt - \int_0^T \frac{\partial h}{\partial x}(t)Z(t)\,d\xi(t)\Big].
\end{aligned}
\tag{5.24}
$$
Combining (5.22)-(5.24) and using (5.4), we get (5.21).

Theorem 5.6 (Necessary maximum principle for mean-field singular control) Suppose $\hat{\xi} \in \Xi$ is optimal, i.e. satisfies (5.3). Suppose that $\frac{\partial H_0}{\partial \xi} = 0$. Then the following variational inequalities hold:
(i) $E[\lambda(t)\hat{p}_0(t) + \hat{h}(t)\,|\,\mathcal{G}_t] \leq 0$ for all $t \in [0,T]$ a.s., and
(ii) $E[\lambda(t)\hat{p}_0(t) + \hat{h}(t)\,|\,\mathcal{G}_t]\,d\hat{\xi}(t) = 0$ for all $t \in [0,T]$ a.s.

Proof. From Proposition 5.5 we have, since $\hat{\xi}$ is optimal,
$$
0 \geq \frac{d}{da} J(\hat{\xi} + a\eta)\Big|_{a=0} = E\Big[\int_0^T \{\lambda(t)\hat{p}_0(t) + \hat{h}(t)\}\,d\eta(t)\Big]
\tag{5.27}
$$
for all $\eta \in \mathcal{V}(\hat{\xi})$. If we choose $\eta$ to be a pure jump process of the form
$$
\eta(s) = \sum_{0 < t_i \leq s} \alpha(t_i),
$$
where $\alpha(s) > 0$ is $\mathcal{G}_s$-measurable for all $s$, then $\eta \in \mathcal{V}(\hat{\xi})$ and (5.27) applies. With (5.28) and (5.29) the proof is complete.

Application to optimal stopping

From now on, let us assume, in addition to ... We summarize what we have proved as follows:

Theorem 6.1 Suppose $\hat{\xi}$ is an optimal control for the singular control problem (5.1)-(5.3), with corresponding optimal processes $\hat{X}(t), \hat{X}_t, \hat{M}(t), \hat{M}_t$. Define $S, Y, Z, K$ as in (6.5), (6.6), (6.8).
Then $\hat{X}$ together with $(Y, Z, K)$ solve the following forward-backward memory-advanced mean-field singular reflected system:

• (i) Forward mean-field memory singular SDE in $\hat{X}$:
$$
\begin{cases}
d\hat{X}(t) = b(t, \hat{X}(t), \hat{X}_t, \hat{M}(t), \hat{M}_t)\,dt + \sigma(t, \hat{X}(t), \hat{X}_t, \hat{M}(t), \hat{M}_t)\,dB(t) - \lambda_0\,d\hat{\xi}(t); & t \in [0,T],\\
\hat{X}(t) = \alpha(t); & t \in [-\delta, 0].
\end{cases}
\tag{6.9}
$$

• (ii) Advanced reflected BSDE in $(Y, Z, K)$ (for given $\hat{X}(t)$):
$$
dY(t) = -\Big\{\frac{\partial \hat{H}_0}{\partial x}(t) + E[\nabla_x^* \hat{H}_0(t)\,|\,\mathcal{F}_t]\Big\}\,dt - dK(t) + Z(t)\,dB(t); \quad t \in [0,T], \ \dots
$$

Connection to optimal stopping of memory mean-field SDE

If we combine the results above, we get

Theorem 6.2 Suppose $\hat{\xi}$ is an optimal control for the singular control problem (5.1)-(5.3), with corresponding optimal processes $\hat{X}(t), \hat{X}_t, \hat{M}(t), \hat{M}_t$ and adjoint processes $\hat{p}_0(t), \hat{q}_0(t)$. Put $R = \frac{\partial g}{\partial x}(T)$.

(Displaced performance-functional fragment:)
$$
J(\xi) = E\Big[\int_0^T f(t, X(t), X_t, M(t), M_t, \xi(t), \omega)\,dt + g(X(T), M(T), \omega) + \int_0^T h(t, X(t), \omega)\,d\xi(t)\Big]; \quad \xi \in \Xi.
$$

$Y_t := \{Y(t+s)\}_{s \in [0,\delta]}$ and $Z_t := \{Z(t+s)\}_{s \in [0,\delta]}$ (the (time-)advanced segments). For $t \in [0,T]$, let $\mathcal{T}_{[t,T]}$ denote the set of all $\mathbb{F}$-stopping times $\tau$ with values in $[t,T]$. Suppose $(Y, Z, K)$ is a solution of the reflected AMBSDE in Topic 2 above.

• $\tilde{M}_\delta$ is the pre-Hilbert space of all path segments $\mu = \{\mu(s)\}_{s \in [0,\delta]}$ of processes $\mu(\cdot)$ with $\mu(s) \in \tilde{M}$ for each $s \in [0,\delta]$, equipped with the norm ... We let $M$ and $M_\delta$ denote the completions of $\tilde{M}$ and $\tilde{M}_\delta$, and we let $M_0$ and $M_{0,\delta}$ denote the sets of deterministic elements of $M$ and $M_\delta$, respectively.

Lemma 2.3 The map $t \mapsto M(t) : [0,T] \to M_0$ is absolutely continuous, and the derivative $M'(t) := \frac{d}{dt} M(t)$ exists for all $t$.

Lemma 2.4 If $X(t)$ is an Itô-Lévy process as in (1.1), then the derivative $M'(s) := \frac{d}{ds} M(s)$ exists in $M_0$ for a.a. $s$, and we have ...

• $C_a([0,T], M_0)$ denotes the set of absolutely continuous functions $m : [0,T] \to M_0$.

Theorem 3.1 (Existence and Uniqueness) Under the above assumptions (i)-(v), the reflected AMBSDE (3.1) has a unique solution $(Y, Z, K) \in S^2 \times L^2 \times \Xi$.

Theorem 4.
2 For $t \in [0,T]$, let $\mathcal{T}_{[t,T]}$ denote the set of all $\mathbb{G}$-stopping times $\tau : \Omega \to [t,T]$. Suppose $(Y, Z, K)$ is a solution of the reflected AMBSDE above. (i) Then $Y(t)$ is the solution of the optimal stopping problem ...

Theorem 5.4 (Sufficient variational inequalities) Suppose that $H_0$ does not depend on $\xi$, i.e. that $\frac{\partial H_0}{\partial \xi} = 0$. Dividing by $\lambda_0$ in (5.25)-(5.26), we get
$$
\text{(i)}\quad \hat{p}_0(t) \geq \frac{1}{\lambda_0}\hat{h}(t) \text{ for all } t \in [0,T] \text{ a.s., and}
\tag{6.3}
$$
$$
\text{(ii)}\quad \Big(\hat{p}_0(t) - \frac{1}{\lambda_0}\hat{h}(t)\Big)\,d\hat{\xi}(t) = 0 \text{ for all } t \in [0,T] \text{ a.s.}
\tag{6.4}
$$
Comparing with (3.1), we see that (6.3)-(6.4), together with the singular BSDE (5.8) for $p_0 = \hat{p}_0$, $q_0 = \hat{q}_0$, $\xi = \hat{\xi}$, constitute an AMBSDE related to the type discussed in ...
$$
Y(t) \geq S(t);\ t \in [0,T], \qquad [Y(t) - S(t)]\,dK(t) = 0;\ t \in [0,T], \qquad Y(T) = \frac{\partial g}{\partial x}(T).
$$
Define
$$
F(t) := F(t, \hat{X}(t), \hat{M}(t), \hat{X}_t, \hat{M}_t, Y(t), Z(t), Y_t, Z_t) := \frac{\partial \hat{H}_0}{\partial x}(t) + E[\nabla_x^* \hat{H}_0(t)\,|\,\mathcal{F}_t].
\tag{6.11}
$$
(i) Then, for each $t \in [0,T]$, $Y(t)$ is the solution of the optimal stopping problem
$$
Y(t) = \operatorname*{ess\,sup}_{\tau \in \mathcal{T}_{[t,T]}} E\Big[\int_t^\tau F(s)\,ds + S(\tau)\mathbf{1}_{\tau < T} + R\,\mathbf{1}_{\tau = T}\,\Big|\,\mathcal{F}_t\Big].
\tag{6.12}
$$
(ii) Moreover, for $t \in [0,T]$ the solution process $K(t)$ is given by $K(T) - K(T-t) = \dots$, where $x^- = \max(-x, 0)$, and an optimal stopping time $\hat{\tau}_t$ is given by
$$
\hat{\tau}_t := \inf\{s \in [t,T],\ Y(s) \leq S(s)\} \wedge T = \inf\{s \in [t,T],\ K(s) > K(t)\} \wedge T.
$$
(iii) In particular, if we choose $t = 0$ we get that
$$
\hat{\tau}_0 := \inf\{s \in [0,T],\ Y(s) \leq S(s)\} \wedge T = \inf\Big\{s \in [0,T],\ \dots \int F(s)\,ds + S(\tau)\mathbf{1}_{\tau < T} + R\,\mathbf{1}_{\tau = T}\Big].
$$

... a.s., and (5.25); (ii) $E[\lambda(t)\hat{p}_0(t) + \hat{h}(t)\,|\,\mathcal{G}_t]\,d\hat{\xi}(t) = 0$ for all $t \in [0,T]$ a.s. (5.26). Then (5.27) gives
$$
E[\{\lambda(t)\hat{p}_0(t) + \hat{h}(t)\}\alpha(t_i)] \leq 0 \text{ for all } t_i \text{ a.s.}
$$
Since this holds for all such $\eta$ with arbitrary $t_i$, we conclude that $E[\lambda(t)\hat{p}_0(t) + \hat{h}(t)\,|\,\mathcal{G}_t] \leq 0$ for all $t \in [0,T]$ a.s.
(5.28)

Finally, applying (5.27) to $\eta := \hat{\xi} \in \mathcal{V}(\hat{\xi})$ and then to $\eta := -\hat{\xi} \in \mathcal{V}(\hat{\xi})$, we get, for all $t \in [0,T]$,
$$
E[\lambda(t)\hat{p}_0(t) + \hat{h}(t)\,|\,\mathcal{G}_t]\,d\hat{\xi}(t) = 0 \text{ for all } t \in [0,T] \text{ a.s.}
\tag{5.29}
$$

References

Agram, N., Hu, Y., & Øksendal, B. (2018). Mean-field backward stochastic differential equations and applications. arXiv preprint arXiv:1801.03349.

Agram, N., & Øksendal, B. (2016). Model uncertainty stochastic mean-field control. To appear in Stochastic Analysis and Applications. arXiv preprint arXiv:1611.01385.

Agram, N., & Øksendal, B. (2017). Stochastic Control of Memory Mean-Field Processes. Applied Mathematics & Optimization, 1-24.

Agram, N. (2016). Stochastic optimal control of McKean-Vlasov equations with anticipating law. arXiv preprint arXiv:1604.03582.

Buckdahn, R., Li, J., & Peng, S. (2009). Mean-field backward stochastic differential equations and related partial differential equations. Stochastic Processes and their Applications, 119(10), 3133-3154.

Cardaliaguet, P. (2010). Notes on mean field games (p. 120). Technical report.

Carmona, R., Delarue, F., & Lachapelle, A. (2013).
Control of McKean-Vlasov dynamics versus mean field games. Mathematics and Financial Economics, 7(2), 131-166.

Carmona, R., & Delarue, F. (2015). Forward-backward stochastic differential equations and controlled McKean-Vlasov dynamics. The Annals of Probability, 43(5), 2647-2700.

Kobylanski, M., & Quenez, M. C. (2012). Optimal stopping time problem in a general framework. Electronic Journal of Probability, 17.

El Karoui, N., Kapoudjian, C., Pardoux, É., Peng, S., & Quenez, M. C. (1997). Reflected solutions of backward SDE's, and related obstacle problems for PDE's. The Annals of Probability, 702-737.

Jeanblanc, M., Lim, T., & Agram, N. (2017). Some existence results for advanced backward stochastic differential equations with a jump time. ESAIM: Proceedings and Surveys, 56, 88-110.

Hu, Y., Øksendal, B., & Sulem, A. (2017). Singular mean-field control games. Stochastic Analysis and Applications, 35(5), 823-851.

Lions, P. L. (2014). Cours au Collège de France: Théorie des jeux à champs moyens.

Øksendal, B., & Sulem, A. (2012).
Singular stochastic control and optimal stopping with partial information of Itô-Lévy processes. SIAM Journal on Control and Optimization, 50(4), 2254-2287.

Øksendal, B., Sulem, A., & Zhang, T. (2011). Optimal control of stochastic delay equations and time-advanced backward stochastic differential equations. Advances in Applied Probability, 43(2), 572-596.

Øksendal, B., & Zhang, T. (2012). Backward stochastic differential equations with respect to general filtrations and applications to insider finance. Communications on Stochastic Analysis (COSA), 6(4).

Peng, S., & Yang, Z. (2009). Anticipated backward stochastic differential equations. The Annals of Probability, 37(3), 877-902.

Quenez, M. C., & Sulem, A. (2014). Reflected BSDEs and robust optimal stopping for dynamic risk measures with jumps. Stochastic Processes and their Applications, 124(9), 3031-3054.
Singular control and optimal stopping of memory mean-field processes. Nacira Agram (Department of Mathematics, University of Oslo, Norway, and University Mohamed Khider, Biskra, Algeria), Achref Bachouch, Bernt Øksendal, and Frank Proske (Department of Mathematics, University of Oslo, P.O. Box 1053 Blindern, N-0316 Oslo, Norway). arXiv:1802.05527; doi:10.1137/18m1174787.

Abstract: The purpose of this paper is to study the following topics and the relation between them: (i) optimal singular control of mean-field stochastic differential equations with memory, (ii) reflected advanced mean-field backward stochastic differential equations, and (iii) optimal stopping of mean-field stochastic differential equations. More specifically, we do the following: we prove the existence and uniqueness of the solutions of some reflected advanced memory backward stochastic differential equations (AMBSDEs); we give sufficient and necessary conditions for an optimal singular control of a memory mean-field stochastic differential equation (MMSDE) with partial information; and we deduce a relation between the optimal singular control of a MMSDE and the optimal stopping of such processes.
Extending the Service Composition Formalism with Relational Parameters

10 Sep 2019. Paul Diac, Liana Țucăr, Radu Mereuță (Alexandru Ioan Cuza University of Iași, România). Submitted to: FROM 2019. © P. Diac, L. Țucăr & R. Mereuță. This work is licensed under the Creative Commons Attribution License.

Web Service Composition deals with the (re)use of Web Services to provide complex functionality, inexistent in any single service. Over the state of the art, we introduce a new type of modeling, based on ontologies and relations between objects, which allows us to extend the expressiveness of the problems that can be solved automatically.

Introduction

Web Service Composition is a complex research area, involving other domains such as web standards, service-oriented architectures, semantics, knowledge representation, algorithms, optimizations, and more [4]. We propose an extended model that allows the specification of relationships between parameters, as a generalization of previous models such as [1]. Moreover, it allows working with different instances of the same type of concept within the automatic composition, a feature that is fundamental in manual composition, and it also defines inference rules. The formalism defined in Section 3 is a complete specification of the model presented in our previous work [3]. We motivate the proposed model by an intuitive example and verify its effectiveness by implementing and testing a composition algorithm.

We present a simple query, where a user wants to travel to a university located in a different city. Each rectangle represents a web service, with its input at the top and its output at the bottom. We also represent the query twice, as services with no input and, respectively, no output.
The dashed rectangle is an inference rule, handled by the algorithm as a virtual web service. Because we cannot directly obtain the answer to the query, we must use the information provided by different services and rules found in the ontology to compose an answer. We see that in order to buy a ticket we must know the source city and the destination city, the latter found indirectly: we must first call a different web service which finds the city where the university is located, and by using the inference rule we can then match the precondition of the web service which returns the plane ticket. Arrows show parameter and relation matching. The algorithm in Section 4 finds the correct order of calls; in this case: (1) getUniversityLocation, (2) apply getDestinationCityRule, and then (3) getAriplaneTicket.

Formal Model

We define the model in three steps: the original composition formalism, which matches parameters by name; the semantic level, defining concepts over a taxonomy; and finally the new relational level, enhancing the taxonomy to a full ontology. Across these three levels, expressiveness increases, allowing more and more natural composition examples to be resolvable by composition algorithms if appropriately modeled. The first two are well known and were used in the Composition Challenges in [2] and [1]. The last level is our contribution, introducing two important concepts: parameter relations and, as a consequence of the first, type instances as separate matching objects.

Name-based Parameter Matching

The initial and simplest model for Web Service Composition uses parameter names to match services. Each name represents a concept. Expecting that parameters are chosen from a set of predefined concepts, the output of a previously called service can be used as input to another service. The user specifies a composition request by a list of known concepts and a list of required concepts. A request has the structure of a service but expresses the need for such a service.
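The name-based model can be made concrete with a small sketch (not part of the paper; the tiny repository and service names are invented for illustration, echoing the travel example above): a request with known and required concept sets is answered by greedily calling any service whose inputs are already covered.

```python
def compose(repository, known, required):
    """Greedy forward chaining over a name-based repository.

    repository: list of (name, inputs, outputs) with inputs/outputs
    as sets of concept names. Returns an ordered list of service
    names generating `required` from `known`, or None if impossible.
    """
    known = set(known)
    plan = []
    progress = True
    while not required <= known and progress:
        progress = False
        for name, inputs, outputs in repository:
            # A service is callable iff its inputs are already known;
            # skip it if it would add nothing new (avoids loops).
            if inputs <= known and not outputs <= known:
                known |= outputs
                plan.append(name)
                progress = True
    return plan if required <= known else None

repository = [
    ("getUniversityLocation", {"university"}, {"city"}),
    ("getAirplaneTicket", {"sourceCity", "city"}, {"ticket"}),
]
print(compose(repository, {"university", "sourceCity"}, {"ticket"}))
# → ['getUniversityLocation', 'getAirplaneTicket']
```

The greedy loop is sound here because calling a service never invalidates known concepts; it only grows the known set, so the order of discovered calls is itself a valid chain.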
A satisfying composition is a (partially ordered) list of services generating the requested concepts starting from the known concepts.

Definition 3. The Repository is the set of all services, also written as S.

Definition 4. Parameter Matching. If C is a set of (known) concepts and ws = ⟨ws.name, ws.I, ws.O⟩ is a web service, then C matches ws iff ws.I ⊆ C. The result of matching, or the union of C and ws.O, is C ⊕ ws = C ∪ ws.O.

Definition 5. Chained Matching. If C is a set of concepts and ws₁, ws₂, ..., wsₙ a list of services, then C ⊕ ws₁ ⊕ ws₂ ⊕ ⋯ ⊕ wsₙ is a chain of matching services, generating $C \cup \bigcup_{i=1}^{n} ws_i.O$; the chain is valid iff
$$
ws_i.I \subseteq C \cup \bigcup_{j=1}^{i-1} ws_j.O, \quad \forall i = 1..n.
$$

Definition 6. Web Service Composition Problem. Given a repository of services S and a user request r = ⟨r.name, r.I, r.O⟩, all with parameters defined over the set of concepts C, find a valid chain of matching services ws₁, ws₂, ..., wsₙ such that r.O ⊆ r.I ⊕ ws₁ ⊕ ws₂ ⊕ ⋯ ⊕ wsₙ.

Taxonomy-based Parameter Matching

Subsequent models extend the definitions of concepts and parameter matching (1 and 4), and the rest of the definitions adapt to these changes.

Definition 7. Concepts (in model 3.2) are elements of the set of concepts C, over which the binary relation subtypeOf is defined: subtypeOf ⊆ C², and subtypeOf is transitive.

Definition 8. Parameter Matching (in 3.2). If C ⊆ C is a set of concepts with the subtypeOf relation, and ws = ⟨ws.name, ws.I, ws.O⟩ a service, then C matches ws iff ∀ c ∈ ws.I, ∃ spec ∈ C such that (spec, c) ∈ subtypeOf. The result of matching is C ⊕ ws = C ∪ { gen ∈ C | ∃ c ∈ ws.O such that (c, gen) ∈ subtypeOf }.

Ontological Level: Relational and Contextual Model

The main contribution of the paper is the introduction of two elements: relations and objects. Relations are a generalization of the subtypeOf relation of the previous level. Multiple relations are allowed between concepts, defined in the semantic ontology.
Service providers do not define new relations; they can only use existing relations defined in the ontology to describe their parameters. Relations can be transitive and/or symmetric. Concepts can now be described with more semantic context, and it is useful to allow updates on it. Therefore, we also introduce objects, which are similar to instances of concepts. Instances are not concrete values of concept types, but distinct elements that are passed through the service workflow, distinguished by their provenance and described by a set of semantic relations. Inference rules are also introduced, as a generalization of relation properties. Inference rules generate new relations on objects if some preconditions are met. Similarly, web service calls that exclusively generate objects can also generate relations. Service input can define preconditions that include relations on objects matching input parameters.

Definition 9. An Object is an element of the set of objects O = { o = ⟨id, type⟩ }. id is a unique identifier generated at object creation. The type is a concept: type ∈ C.

Definition 10. Relation. A relation r is a triple consisting of: the name, as a unique identifier; the relation properties; and the set of pairs of objects that are in that relation. The latter is dynamic, i.e., it can be extended through the composition process.

The set of all inference rules is written as I. Preconditions must hold before applying a rule for the objects matching the rule parameters, and the relations in the effects are generated accordingly. Rules are structurally similar to services, but they apply automatically and, conceptually, at no cost. For example, transitivity and symmetry are particular rules; the following expresses that equals is symmetric:

equals_symmetric = ⟨{X, Y}, {⟨equals, {(X, Y)}⟩}, {⟨equals, {(Y, X)}⟩}⟩

Definition 14. The Ontology G consists of: concepts organized hierarchically, relations, and inference rules. G = ⟨C, subtypeOf, R, I⟩.
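To make the relational level concrete, here is a small executable check (an illustration, not the paper's implementation; the encoding is an assumption): objects are (id, type) pairs, subtypeOf is given as a set of (sub, super) pairs assumed reflexively and transitively closed, and a knowledge state stores relation instances as sets of id pairs. The function tests whether a candidate assignment of objects to a service's inputs satisfies both the typing condition and the relation preconditions.

```python
def matches(ws_inputs, ws_preconditions, f, subtype_of, relations):
    """Check one candidate assignment f: input name -> (id, type).

    ws_inputs: {param_name: concept}
    ws_preconditions: [(relation_name, (param_a, param_b))]
    subtype_of: set of (sub, super) concept pairs (closed)
    relations: {relation_name: set of (id, id) pairs}
    """
    for name, concept in ws_inputs.items():
        obj = f.get(name)
        # The object's type must be the parameter's concept or a
        # subtype of it, mirroring the taxonomy-level matching.
        if obj is None or (obj[1], concept) not in subtype_of:
            return False
    for rel_name, (a, b) in ws_preconditions:
        pairs = relations.get(rel_name, set())
        # The required relation must already hold between the
        # matched objects in the knowledge state.
        if (f[a][0], f[b][0]) not in pairs:
            return False
    return True

subtype_of = {("City", "City"), ("Capital", "City"), ("Capital", "Capital")}
relations = {"locatedIn": {("o1", "o2")}}
ws_inputs = {"src": "City", "dst": "City"}
pre = [("locatedIn", ("src", "dst"))]
f = {"src": ("o1", "Capital"), "dst": ("o2", "City")}
print(matches(ws_inputs, pre, f, subtype_of, relations))  # → True
```

Note how the two objects "o1" and "o2" share no concrete value, only a type and a relation: this is exactly the instance separation the model introduces.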
At the ontological level, relations are static and defined only by names and properties. At the knowledge level, relations are dynamic in what objects they materialize to. We refer to both using R.

Definition 15. Parameter Matching (in 3.3). In ontology G = ⟨C, subtypeOf, R, I⟩, a web service ws matches (is "callable" in) a knowledge state K = ⟨O, R⟩ iff there exists a function f : ws.I → O such that:
$$
\forall\, i \in ws.I,\ (f(i), i) \in \text{subtypeOf}, \text{ and} \tag{1}
$$
$$
\forall\, i, j \in ws.I \text{ and } r_{ws} \in ws.\text{relations with } (i, j) \in r_{ws}.\text{parameters}, \ \exists\, r_{obj} \in R \text{ with } r_{obj}.\text{name} = r_{ws}.\text{name and } (f(i), f(j)) \in r_{obj}.\text{objects}.
$$
We skip other definitions in model 3.3 that are intuitively similar, such as K ⊕ ws, chained matching, the user request, and the composition problem.

Composition Algorithm

Overview

The algorithm takes as input a query from the user, the repository, and the ontology, and returns a composition of services that answers the query. We start by populating a set (called the "knowledge") with objects and relations based on the information provided by the user. We then repeat the process of adding new objects and relations until no more service calls can be made or until the query can be answered.

init: data structures; create virtual services for inference rules;
while ¬canAnswerQuery(query) and compositionUpdated = True do
    compositionUpdated ← False;
    foreach service ∈ repository do
        possibleCalls ← searchForPossibleCalls(service);
        foreach servCall ∈ possibleCalls do
            if providesUsefulInformation(servCall) then
                makeCall(servCall); compositionUpdated ← True;
if canAnswerQuery(query) then return composition;
else return Not Solved;

Construction Phase

At each step, we iterate over all the services and search for all possible calls. We then add to the composition the service calls that provide new information: i.e., a service call is excluded if all the new objects it adds are semantically similar to others already present in the knowledge.
To obtain the similarity between objects, we represent the knowledge as a labeled directed graph where vertices are objects (labeled with the type of the object) and edges are relations, and we consider two objects similar if their associated connected components are isomorphic. When a service call is made, new objects and relations corresponding to the service output and postconditions are created and added to the knowledge.

Search for Service Calls

Finding all possible service calls of a given service serv means finding all combinations of objects that can be used as input parameters for the service: i.e., finding for each input parameter a corresponding object in the knowledge whose type matches the parameter type (the parameter type being equal or more general). Besides this condition regarding the types, all relations from the preconditions need to hold between the corresponding objects used to call the service. This problem reduces to finding all subgraph isomorphisms in the following problem instance:

• q = (V, E, L), where V = serv.inputParams, E = serv.preConditions, L(u) = u.type ∀u ∈ V, and L(e) = e.name ∀e ∈ E;

• g = (V′, E′, L′), where V′ = knowledge.objects, E′ = knowledge.relationsBo, L′(u′) = { type | (u′.type, type) ∈ ontology.subType } ∀u′ ∈ V′, and L′(e′) = e′.name ∀e′ ∈ E′.

The associated decision problem is known to be NP-complete, and an optimized backtracking procedure was implemented to solve it. In real-world use cases, we expect that computationally hard instances are rare. This is because service and rule preconditions are checked at each step of the backtracking, pruning many execution paths. Moreover, the inference rules that are more generic (i.e., without typed parameters) are defined in the ontology, which cannot be updated by service developers or other users, so expensive rules can be safely avoided.
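The searchForPossibleCalls step can be sketched as backtracking subgraph matching. The encoding below, the injectivity requirement, and all names are assumptions made for this illustration, not the authors' implementation: parameters map to typed objects, and every precondition whose endpoints are both assigned is checked immediately, pruning the search as described above.

```python
def search_calls(params, pre, objects, rel_pairs, subtype_of):
    """Enumerate assignments {param: obj_id} realizing one service.

    params: {param_name: concept}; pre: [(rel_name, (a, b))];
    objects: {obj_id: type}; rel_pairs: {rel_name: set of id pairs};
    subtype_of: set of (sub, super) concept pairs (closed).
    """
    names = list(params)

    def ok(assign):
        # Prune: any precondition with both endpoints assigned
        # must already hold in the knowledge state.
        for rel, (a, b) in pre:
            if a in assign and b in assign:
                if (assign[a], assign[b]) not in rel_pairs.get(rel, set()):
                    return False
        return True

    def rec(i, assign):
        if i == len(names):
            yield dict(assign)
            return
        name = names[i]
        for obj_id, obj_type in objects.items():
            if obj_id in assign.values():
                continue  # injective matching (subgraph isomorphism)
            if (obj_type, params[name]) not in subtype_of:
                continue  # type must specialize the parameter type
            assign[name] = obj_id
            if ok(assign):
                yield from rec(i + 1, assign)
            del assign[name]

    yield from rec(0, {})

subtype_of = {("City", "City")}
objects = {"a": "City", "b": "City"}
calls = list(search_calls({"x": "City", "y": "City"},
                          [("near", ("x", "y"))],
                          objects, {"near": {("a", "b")}}, subtype_of))
print(calls)  # → [{'x': 'a', 'y': 'b'}]
```

Stopping at the first yielded assignment gives the one-solution variant used by canCallService; the connected-component split mentioned below would run this search once per component of q.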
canCallService

To check whether service calls provide useful information and whether the query is solved, virtual services with corresponding parameters and conditions are constructed and checked for callability. This problem is similar to the one described above, except that only one solution is needed, not all possible service calls. An optimization implemented in the backtracking procedure is to split the query graph into connected components and to search for each of them independently in the data graph. If the query graph is formed by multiple connected components, this optimization helps by reducing a cartesian product of the tested solutions of each component to a union of them. This has proven to have a significant benefit on the runtime in tests (depending on how the tests are generated).

Conclusion

Current Web Service Composition models include limited semantics in expressing how service parameters are matched. In particular, there is no way to express relationships between parameters, and parameter typing models do not allow distinguishing between instances of the same concept. In this paper, we propose a formalism that solves both of these limitations. We also implemented an efficient automatic composition algorithm that produced valid compositions on generated tests, using all elements of the proposed model.

Definition 1. Concepts are elements from the predefined set of all concepts C.

Definition 2. Web Services are triplets ⟨name, I, O⟩ consisting of the service name and two disjoint sets of concepts, also referred to as parameters: input and output. The User Request has the same structure and specifies a required functionality, possibly solvable by a list of services. I ∩ O = ∅ and I, O ⊆ C.

R = ⟨name, properties, objects⟩, with properties ⊆ {transitivity, symmetry} and objects ⊆ O².

Definition 11. The knowledge K is a dynamic structure consisting of objects and relations between the objects.
Knowledge describes what is known at a stage of the composition workflow, i.e., at a time when a set of services has been added to the composition. K = ⟨O, R⟩.

Definition 12. Web Services (in model 3.3) are tuples ⟨name, I, O, relations⟩, with I, O defined as in Def. 2 and relations specifying preconditions and postconditions (effects) over objects matched to service inputs or generated at output. relations within service definitions are pairs consisting of: the name, used to refer to an existing relation (the relation from R with the same name), and a binary relation over all service parameters. Relations between inputs are preconditions, and relations between outputs are effects, i.e., they are generated after the call. Relations between input and output parameters are also effects. ws.relations = ⟨name, parameters⟩, with names from R and parameters ⊆ (ws.I ∪ ws.O)².

Definition 13. Inference Rules (in 3.3) are tuples rule = ⟨name, parameters, preconditions, effects⟩, where parameters is a set of parameter names with local visibility (within the rule), and preconditions and effects are relations defined over the parameters. More precisely: rule.preconditions, rule.effects ⊆ { ⟨rel.name, P⟩ | rel ∈ R, P ⊆ rule.parameters² }.

References

Bansal, A., Blake, M. B., Kona, S., Bleul, S., Weise, T., & Jaeger, M. C. (2008). WSC-08: continuing the web services challenge. In: IEEE Conference on E-Commerce Technology and the Fifth IEEE Conference on Enterprise Computing, E-Commerce and E-Services, IEEE.
Paul Diac, Liana Ţucȃr & Radu Mereuţȃ (Alexandru Ioan Cuza University of Iaşi, România): Extending the Service Composition Formalism with Relational Parameters. arXiv:1909.04393. Abstract: Web Service Composition deals with the (re)use of Web Services to provide complex functionality, inexistent in any single service. Over the state-of-the-art, we introduce a new type of modeling, based on ontologies and relations between objects, which allows us to extend the expressiveness of problems that can be solved automatically.
BERT WEAVER: USING WEIGHT AVERAGING TO ENABLE LIFELONG LEARNING FOR TRANSFORMER-BASED MODELS IN THE BIOMEDICAL DOMAIN

Lisa Kühnel, Alexander Schulz, Barbara Hammer, Juliane Fluck

Graduate School DILS, Bielefeld Institute for Bioinformatics Infrastructure (BIBI), Faculty of Technology, Bielefeld University, Germany; ZB MED - Information Centre for Life Sciences, Cologne, Germany; CITEC, Bielefeld University, Germany; University of Bonn, Germany

Keywords: Text Mining · BioNLP · BERT · Continual Learning · Federated Learning

Abstract: Recent developments in transfer learning have boosted the advancements in natural language processing tasks. The performance is, however, dependent on high-quality, manually annotated training data. Especially in the biomedical domain, it has been shown that one training corpus is not enough to learn generic models that are able to predict efficiently on new data. Therefore, state-of-the-art models need the ability of lifelong learning in order to improve performance as soon as new data are available - without the need of re-training the whole model from scratch. We present WEAVER, a simple, yet efficient post-processing method that infuses old knowledge into the new model, thereby reducing catastrophic forgetting. We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once, while being computationally more efficient. Because there is no need of data sharing, the presented method is also easily applicable to federated learning settings and can, for example, be beneficial for the mining of electronic health records from different clinics.
Introduction

The amount of literature in the medical domain is increasing enormously and emphasises the need for text mining-based solutions to automatically extract relevant information. Named entity recognition (NER) is an important natural language processing (NLP) task whose aim is to find entity classes, such as specific diseases, in unstructured text. As the amount of data for a specific setup is usually limited, transfer learning-based models have recently been shown to achieve state-of-the-art results in many NLP tasks including NER [1]. Especially transformer-based models, such as BERT [2], show promising results on benchmark tasks [3]. In the biomedical domain, BioBERT [4] shows state-of-the-art performance for several NER tasks, such as disease recognition; promising F1-scores (above 84%) are achieved on the available data sets. Based on the use case of disease NER, we recently showed that models trained on one available data set are not able to predict efficiently on another data set that follows the same annotation guidelines [5]. This is not only true for transformer-based models such as BioBERT but also for other machine learning-based models such as convolutional neural networks or conditional random fields. In our previous study, based on five different manually labelled data sets, we showed that the performance of a model trained on one of these corpora is reduced by up to 20% in terms of F1-score when predicting on another corpus. This significant drop in performance indicates that the training data is either too small or not representative compared to a random PubMed corpus. One reason is that specific corpora are often comparably small, such that small differences between those data sets are mapped to differences in the embeddings and the corresponding NER downstream tasks.
Therefore, in order to use these models in real-world applications such as semantic search engines, it is advisable to improve them as soon as new annotated data are available, to obtain optimum performance. This process is known as lifelong learning or, equivalently, continual learning (CL), meaning that a model is sequentially re-trained in a so-called online fashion [6]. In such settings, however, a phenomenon called catastrophic forgetting easily occurs [7]: the model becomes biased towards the last data set and forgets previously learned structures. A lot of research has been done in the area of continual learning to prevent a model from forgetting. One of the most prominent approaches is Elastic Weight Consolidation (EWC), proposed by Kirkpatrick et al. [8]. It is a regularization-based technique that quantifies the importance of weights and thereby prevents important weights from changing drastically. It has been successfully applied, for example, for the online personalization of speech recognition systems [9]. Based on EWC, Liu et al. proposed an extension that makes use of a network reparameterization that rotates the parameter space [10]. More recently, Aljundi et al. proposed Memory Aware Synapses (MAS), a method that, given a new sample, measures the importance of each network parameter by taking into account the sensitivity of the predicted output to a change in that parameter [11]. Next to regularization-based techniques, (pseudo-)rehearsal-based approaches have been proposed, e.g. [12,13]. Rehearsal means that a subset of previously seen data is combined with the new data. Since the old data are not always available, these methods often include a generator network that generates new data based on the previously seen data set; this is also often called a silver standard or replay buffer. These data are then mixed with new data to re-train the model.
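To make the regularisation idea behind EWC concrete, the quadratic penalty it adds to the loss can be sketched as follows. This is a minimal illustration with plain floats instead of tensors; the function name and data layout are ours, not from [8]:

```python
def ewc_penalty(params, star_params, fisher, lam):
    """Quadratic EWC-style penalty added to the loss of the new task.

    params      -- current weights (name -> value)
    star_params -- weights after the previous task
    fisher      -- per-weight importance estimates (Fisher values)
    lam         -- regularisation strength
    """
    return 0.5 * lam * sum(
        fisher[name] * (params[name] - star_params[name]) ** 2
        for name in params
    )
```

Weights with a high Fisher value are strongly pulled back towards their old values, while unimportant weights remain free to adapt to the new task.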
For rehearsal-based methods, research has been done on how best to select the replay buffer for efficient training; e.g., gradient-based selection has been proposed, known under the abbreviation GEM (Gradient Episodic Memory), for which several algorithms and extensions have been proposed recently, e.g. [14,15,16]. Experience replay is another rehearsal-based approach, investigated for example by [17]. In addition, promising methods exist where new parameters are added to the model for each new task that is learned, such as proposed by Fayek et al. [18]. Moreover, dual-memory-based methods are applied where two different networks are used - one for memorising already learned information and one for learning new tasks - as shown by Hattori [19] or Park [20]. The architecture proposed by Hattori is strongly inspired by biological processes in the brain, making use of so-called hippocampal and neocortical networks. In contrast, Park implemented a dual network architecture based on state-of-the-art transformers. Further transformer-based lifelong learning algorithms include Memory-based parameter adaptation (MbPA++) [21], Language Modeling for Lifelong Language Learning (LAMOL) [22] and its extension Lifelong Learning Knowledge Distillation (L2KD) [23]. They all belong to the category of rehearsal-based techniques. Whereas the latter two simultaneously train a learner and a generator network, MbPA++ uses sparse experience replay and local adaptation. Moreover, Houlsby et al. investigated Adapters, transformer-based modules exploiting a different learning procedure than the usual fine-tuning [24]. Apart from being more parameter-efficient, these Adapters can be used in sequential learning settings, as they can be trained individually and then be "fused" together [25]. Zhang et al.
build upon Adapters, but instead of training a new module for each task, they developed an algorithm that decides whether a completely new module needs to be trained or whether an already trained one can be re-used [26]. In addition, the authors apply pseudo experience replay according to [22]. Several overview articles structure and compare online learning methods and their suitability in various domains [27,28]. Still, the suitability of such methods for NER tasks in the biomedical domain, where the available annotated data sets are comparably small and the language is very complex, and their suitability for federated learning schemes [29], where data sharing should be avoided, have not yet been investigated. In addition, all mentioned methods learn a series of different tasks. Even though the tasks can be similar, as investigated for example by [22,26], none of the studies aim to sequentially improve on the same task as soon as new data are available - without forgetting what has been learned previously. This, however, can be of great importance for real-world applications where integrated text mining methods need to be improved as soon as new data are available. Our recently developed semantic search engine preVIEW COVID-19 represents one example of this [30,31]. In this service, several different text mining-based components are integrated, for example for the recognition of disease names or virus proteins. Moreover, we integrated a feedback button through which users (mainly information specialists) can give us direct feedback on the annotations [32]. With this feedback, we can generate new data sets and improve the included models. To do this efficiently, a lifelong learning capacity is essential. In this work, we present a new continual learning method - called WEAVER - that can be applied to transformer-based models and that exploits parts of the federated averaging algorithm (FedAvg) known from federated learning approaches [33].
Thereby, previously used data are not required for the new task, the model structure does not need to be changed, and the process is computationally efficient. For the real-world applications described above, the model can therefore be efficiently re-trained as soon as new data are available. Moreover, as previous data do not need to be available in one place for incremental learning, the method can also be used for federated learning approaches - which is particularly important in the medical domain. In each clinic or institution, a model can be trained using the data available on-site; the weights of the trained model can then be passed to another site, where the model is re-trained and WEAVER is applied. This results in a model that is trained on larger data sets without the need for data sharing. There is also no need for a central server where a model is built - instead, the model is passed sequentially through all the institutions.

Model

For our proposed continual learning procedure, we exploit a mechanism originally used in federated learning settings, where models are trained at different places using different data sets, mostly due to data privacy concerns [29]. After training these models individually, their weights are passed to a central server and averaged in relation to the amount of training data they were trained on - hence, the more data were available, the more influence on the final model [33]. The objective of the target model is

f(w) = \sum_{k=1}^{K} (n_k / n) F_k(w)    (1)

where K is the number of clients, i.e. the number of models that were trained, n is the total amount of training data, n_k is the amount of the k-th client's data set, and F_k(w) is the k-th client's loss function. As shown in [33], this objective results in weight averaging for convex costs.
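The data-size-weighted averaging that WEAVER borrows from FedAvg can be sketched as follows. This is a simplified illustration on dictionaries of floats; real model weights would be tensors (e.g. entries of a PyTorch state dict), and the helper name is ours:

```python
def weighted_average(old_state, new_state, n_old, n_new):
    """Average two models' parameters, weighted by the amount of
    training data each model was trained on (cf. Eq. (1))."""
    total = n_old + n_new
    return {
        name: (n_old * old_state[name] + n_new * new_state[name]) / total
        for name in old_state
    }
```

With n_old = 1000 and n_new = 3000, the new model contributes three quarters of each averaged weight; applied repeatedly with cumulative counts, the weighting matches the rule that the total amount of training data is the sum of all data sets seen so far.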
Based on this, we developed the following procedure: For the first model that is trained on a given task, we initialise a pre-trained BioBERT model and fine-tune it in the usual manner. As soon as new data are available, we fine-tune the already trained model again using the new data set. In a post-processing step, the weights of the old and the new model are then averaged, taking the amount of training data into account. Thereby, if a second model is trained on top of the first one, the total amount of training data is the sum of the two data sets. Therefore, either a new pre-trained model can be initialised or the already fine-tuned model is fine-tuned again and afterwards combined. A simplified overview of the continual learning procedure is shown in Fig. 1.

Experiments

In this section, we first describe the used data sets. Afterwards, the conducted experiments and their implementation details are given. Finally, we describe our evaluation and visualisation strategies in detail.

Used Data Sets

We perform the experiments on three different NER tasks from the biomedical domain. For disease NER, we use five different data sets - four of them have been described in detail in our previous publication [5]. Additionally, we use the plant-disease corpus [34]. For both proteins/genes and chemicals, we rely on six and five different data sets, respectively, described in detail by Crichton et al. [35] and provided under https://github.com/cambridgeltl/MTL-Bioinformatics-2016. An overview of the used data sets is given in Table 1. We simulate a continual learning setting in which different data sets are learned sequentially. As in real-world settings, the data sets can differ in size and slightly differ in the annotation guidelines used. Following [21], we randomly chose four different orders for each setup (see Table A1). Figure 1: For training the first model, a transformer-based model, such as BERT, is initialised and fine-tuned on the available data.
To continue training in a sequential manner, the already fine-tuned model is fine-tuned again on a second corpus. To prevent catastrophic forgetting, knowledge of the previous model is infused into the new model by applying weight averaging. Thereby, the size of the data set the individual model was trained on determines the averaging coefficient, i.e. the bigger the data set, the higher the influence on the new model. This procedure is repeated for every new data set.

Table 1: Overview of the used data sets (*size refers to the number of entities in the training set):

Diseases: BC5CDR [4] (3230), miRNA-disease [37] (3043), plant-disease [34] (2944), BioNLP13-CG [38] (1885)
Proteins/Genes: BioNLP11-ID [39] (3197), BioNLP13-CG [38] (4027), BioNLP13-GE [40] (3566), BioNLP13-PC [41] (5468), Ex-PTM [42] (1787), JNLPBA [43] (46750)
Chemicals: BioNLP13-CG [38] (1097), BioNLP13-PC [41] (1178), BC4CHEMD [44] (29478), BioNLP11-ID [39] (594), BC5CDR [4] (5203)

Conducted Experiments

As baseline experiments, we train one model individually on each of the available data sets. Each model is then evaluated on all available test data sets for its entity class. For example, a model trained on the NCBI training corpus is evaluated on the corresponding test set but also on the four other test sets (BC5CDR, BioNLP13, miRNA-disease and plant-disease). We perform the following CL-based experiments to evaluate and compare our developed algorithm:

• FineTune: a standard BERT model that is fine-tuned sequentially on each new data set
• EWC [8]: our own implementation of EWC for NER with transformer-based models
• AdapterFusion [24]: one adapter is trained individually per training data set, and the adapters are sequentially fused together
• WEAVER: our model described in Chapter 2
• Replay: the sparse experience replay mechanism as performed by [21]; while fine-tuning BERT models sequentially, we replay 10% of all previously seen data
• MTL: a multi-task upper-bound model trained on all available training data sets simultaneously

Note that Replay and MTL require the data sets to be in one place and therefore only serve as upper-bound methods for comparison in our study. For the same reason, we omit other state-of-the-art transformer-based methods, such as LAMOL or L2KD [22,23]. In addition to the need for data sharing, prediction on new data may also be less efficient due to local adaptation strategies such as in MbPA++ [21], which can be a hindrance for the integration into running services.

Implementation Details

For all conducted experiments, we build our code upon the Transformers library [45] and use dmis-lab/biobert-base-cased-v1.1 as the pre-trained model. Due to the lack of data, we did not perform hyperparameter optimisation but trained the models for three epochs with a batch size of 16 and a learning rate of 3e-5, except for the Adapter-based experiments, where we used a learning rate of 5e-4. We provide our code under https://github.com/llangnickel/WEAVER.

Evaluation

To evaluate our methods, we determine precision, recall and F1-score using the following formulas (TP stands for true positive, FP for false positive and FN for false negative):

precision = TP / (TP + FP),    recall = TP / (TP + FN)    (2)

F1-score = (2 × precision × recall) / (precision + recall)    (3)

We determine the averaged precision, recall and F1-score over all test sets after the complete training procedure has been finished, for all different training data set orders. To prove statistical significance, we use the paired t-test and compare WEAVER against the other CL-based methods. The significance level is set to 0.01.
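Equations 2 and 3 translate directly into code; the following sketch (the helper name is ours) computes the scores from the raw entity-level counts:

```python
def ner_scores(tp, fp, fn):
    """Precision, recall and F1-score from true-positive, false-positive
    and false-negative counts (Eqs. (2) and (3))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For example, 8 correctly recognised entities with 2 spurious and 2 missed ones yield precision, recall and F1 of 0.8 each.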
Because the averaged F1-score determined at the end of training is not a sufficient measure to judge the training performance, we additionally examine the extent of forgetting when re-training a model in a continual manner by determining the Backward Transfer (BWT) according to [15]. This metric measures the influence that learning a new task has on previously learned tasks. Accordingly, we determine the Forward Transfer (FWT), which measures the influence of a learned task on future tasks [15]. The corresponding formulas are given in Equations 4 and 5, respectively. Note that, in contrast to [15], we use the F1-score instead of the accuracy. Hence, R ∈ R^(T×T) consists of the F1-scores on task t_j after finishing training on task t_i. Moreover, we plot the extent of forgetting by evaluating, after each re-training, the performance on the test set corresponding to the training data set that was used for the very first model.

BWT = (1 / (T-1)) \sum_{i=1}^{T-1} (R_{T,i} - R_{i,i})    (4)

FWT = (1 / (T-1)) \sum_{i=2}^{T} (R_{i-1,i} - b_i)    (5)

Visualisation Techniques

To visualise the word (i.e. token) embeddings, we apply the dimensionality reduction technique Uniform Manifold Approximation and Projection (UMAP); more specifically, we make use of the Python library umap-learn [46]. This allows us to judge whether different data sets are embedded in different regions of the network or whether they share the representation space. Thereby, we compare the visualisations obtained in the following settings: First, we train different models on different data sets individually, for example on the NCBI and the BC5CDR training sets in the case of disease NER. Then, we make predictions on these training data sets using the corresponding models and use the word embeddings, which are vectors of length 768. Because this high dimensionality cannot be visualised, we apply UMAP to scale it down to two dimensions.
We then colour the embeddings of the different data sets (predicted by the two different models) differently. In addition, we use the baseline model that has been trained on both data sets simultaneously to also make predictions on both of these data sets. Finally, we visualise the word embeddings predicted by a model that has been trained sequentially on the mentioned data sets according to our developed method. Since UMAP preserves cluster structures, this enables us to judge differences or overlaps between the embeddings of different sets.

Ablation Study

Several studies show that end-to-end fine-tuning is not necessary because only the final top layers are decisive for the downstream task. Accordingly, we freeze the first eight layers, as suggested by [47] and [48], to investigate whether weight averaging can be reduced to the top four layers.

Results

In the following section, we first describe the results of the models trained on a single corpus. Afterwards, the results for the simulated continual learning setting are given. Finally, we depict and discuss the UMAP visualisations of the word embeddings as a proof of concept. The results of the ablation study can be found in the Appendix in Table A2.

Single Model Training and Cross-evaluation (Baseline)

For every entity class, several manually annotated data sets are available. As a first step, we train a BioBERT model on each of the data sets individually and evaluate it on all available test sets for the given entity class. We visualise the results in a heatmap in Figure 2. For each trained model, we see the highest score on the test set that belongs to the used training set and significant drops in performance when evaluating the model on the other test sets. For example, in the case of disease entity recognition (see Fig.
2a), the model trained on the NCBI disease corpus achieves an F1-score of 83% on the corresponding test set, but drops to 66% and 65% for the BC5CDR and miRNA-disease corpus, respectively. The same phenomenon can be seen for all data sets across all three tasks (diseases, genes/proteins and chemicals). These results underline the fact that, even for the same task, trained models need to be improved as soon as new data sets exist, because one available corpus may be too small or too specific. Hence, particularly for running services, continual learning methods are of great importance. (See Table A1 for the data set orders.)

Continual Learning Experiments

We simulate a continual learning scenario for three different named entity recognition use cases from the biomedical domain, namely diseases (5 data sets), genes/proteins (6 data sets) and chemicals (5 data sets), whose corpora are presented sequentially to the models without storing already seen data. We compare our developed method WEAVER with other continual learning-based methods (for transformers), namely FineTune, EWC and AdapterFusion. Additionally, we apply sparse memory replay (Replay) and multi-task learning (MTL) as upper-bound methods, because they require the data to be in the same place, which is not always possible in medical applications. We apply four evaluation methods. First, the averaged F1-score on all available test data is determined after finishing training on the last training data set. The results are summarised in Table 2. For each entity class, four different orders of training data sets have been chosen randomly (see Table A1). For diseases, WEAVER outperforms all other CL methods: whereas it achieves an average F1-score of 77.33% over the different orders, EWC and FineTune achieve 76.15% and 76.59%, respectively. For AdapterFusion, the difference is much larger; it achieves an F1-score of 63.51%. Compared to the upper bound MTL (79.50%), WEAVER performs only around 2% worse.
For protein named entity recognition, WEAVER shows excellent results that are on average less than 1% worse than the MTL model, and it outperforms the other CL-based methods. Interestingly, the Replay model does not work well here and achieves an average F1-score of 68%, which could be caused by the fact that the replayed data are only learned for one epoch, which is probably not sufficient for these data sets. For chemical named entity recognition, a similar trend can be seen: WEAVER outperforms all other methods and is only around 1% worse than MTL. For all three experiments, WEAVER shows on average the lowest standard deviation; thus the model's performance is less influenced by the training parameters/initialisation. As the averaged F1-score after finishing training does not indicate how training a new task influences previous and future tasks, we additionally determine forward and backward transfer, summarised in Table 3. Note that only Replay can be used as an upper bound here, because in the multi-task setting all training data are combined and no sequential training is performed. Comparing the CL-based methods, WEAVER performs best for disease NER. Except for the first order of training data sets, we obtain a positive backward transfer, meaning that learning a new task increases the performance on a preceding task. In two of the four cases, the BWT of WEAVER is also better than that of Replay. For the FWT, WEAVER achieves the best scores (approximately 0.5). The forward transfer is positive in all scenarios, which is expected because we train the models on the same task sequentially; hence, learning on a specific data set will always positively influence a future task (compared to random initialisation of the model).
In the case of protein/gene NER, WEAVER achieves the best BWT scores among the CL-based methods, even though they are all slightly negative, meaning that learning a new task results in moderate forgetting of the previously learned data. For the FWT, we also achieve scores around 0.5, better than Replay but slightly worse than FineTune. For example, for the first order, the FWT scores for FineTune and WEAVER amount to 0.4699 and 0.4655, respectively, indicating only a very small difference. For chemicals, we see a similar phenomenon: WEAVER achieves on average the highest FWT score; however, it is very similar for all methods, amounting to approximately 0.5. Larger differences can be seen for the BWT, where WEAVER achieves, for example, a score of −0.04 for the first order, while the value for FineTune is more than twice as bad (−0.11). As a last evaluation metric, we plot the extent of forgetting in Fig. 3. Thereby, we determine the F1-score on the test set that corresponds to the very first training data set, after random initialisation and after each re-training of the model, in order to see how much the model "forgets" when being exposed to new data. For disease NER (see Fig. 3a), FineTune, EWC and AdapterFusion drop to an F1-score of around 30%, while WEAVER only drops to around 60% after seeing the fourth training data set. Also after finishing the last training, WEAVER performs only slightly worse than Replay. For protein NER, the CL-based methods perform very similarly until the fourth training data set, where the highest score is achieved by WEAVER. It also outperforms Replay, which is in line with the average performance seen in Table 2. For chemicals, we also achieve the highest F1-score with WEAVER; however, the difference to all other methods is rather small in this case.
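The transfer metrics reported above follow directly from the score matrix of Equations 4 and 5. A sketch with 0-based indexing (names ours): R[i][j] holds the F1-score on task j after finishing training on task i, and b[i] is the baseline score of task i at random initialisation.

```python
def backward_transfer(R):
    """BWT (Eq. 4): average change on earlier tasks after the last task.
    Negative values indicate forgetting."""
    T = len(R)
    return sum(R[T - 1][i] - R[i][i] for i in range(T - 1)) / (T - 1)

def forward_transfer(R, b):
    """FWT (Eq. 5): average gain on task i after training on task i-1,
    relative to the baseline scores b."""
    T = len(R)
    return sum(R[i - 1][i] - b[i] for i in range(1, T)) / (T - 1)
```

For T = 2 with R = [[0.8, 0.4], [0.7, 0.9]], the F1 on the first task drops from 0.8 to 0.7 after the second task, giving BWT = -0.1.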
Visualisation of Word Embeddings

In order to understand what happens to the word embeddings when averaging the weights of two BERT models, we performed a UMAP visualisation of the different scenarios, shown in Figure 4. As an example, we use the disease NER use case and compare the arrangement of the embeddings for three different training sets (NCBI, BC5CDR and miRNA-disease) to simulate continual training. First, two models were trained independently on the NCBI and BC5CDR data sets, respectively, and their predicted embeddings are visualised in Fig. 4a. The embeddings of the different data sets, predicted by the two different models, are clearly separated. In contrast, in Fig. 4b, where a model trained on both data sets simultaneously is used for prediction, the points strongly overlap and separate clusters cannot be recognised. Figure 4c shows word embeddings predicted by a model trained according to our method WEAVER (first on the NCBI training set, then on the BC5CDR training set). Interestingly, the distribution looks very similar to that of the combined training. Thereby, we can infer that weight averaging after training two models sequentially has a similar effect to a combined training (simultaneously on all training data). To investigate this effect when training on more than two data sets, we use a third one (the miRNA-disease data set). Figures 4e-4f visualise the same settings as described before, but now the first two data sets (NCBI and BC5CDR) are combined into one colour so that the new data set can be clearly distinguished. Here, we see the same phenomenon, i.e. training sequentially with WEAVER results in very similar distributions of the embeddings as training one model jointly on all three data sets. Figure 4: The different sub-figures show the distribution of the word embeddings predicted for three different disease NER data sets (NCBI, BC5CDR and miRNA-disease) using different models.
In sub-figure (a), two different models were used that had been trained independently on the two data sets. In contrast, word embeddings predicted by a model trained on the combined training data are depicted in (b). In sub-figure (c), the embeddings resulting from the model trained continually using WEAVER are shown. With the red and yellow squares depicted in (a), we show where the corresponding word embeddings moved to in settings (b) and (c). In sub-figures (d)-(e), the same setting is shown, but now for a third data set; the previously used data sets NCBI and BC5CDR are therefore represented in one colour.

Discussion

Transformer-based models have boosted the advancements of natural language processing tasks. Especially in the biomedical domain, BERT-based models are adapted to specific applications, such as electronic health record (EHR) mining or the identification of rare disease patients from administrative claims [49,50,51]. However, these models rely on the underlying assumption that the data are independent and identically distributed. In real-world scenarios in the biomedical domain, this is unfortunately not the case, in particular since current models do not yet represent all facets present in such corpora, owing to much smaller training sets compared to general domains. Hence, different corpora easily display novel and differing aspects, which corresponds to a shift of the distribution. In previous studies, we showed that there are significant differences between data sets and that a model trained on one corpus does not perform well on another corpus; i.e., one such annotated corpus is not representative of biomedical literature databases such as PubMed. Therefore, to be used in real-world applications, trained models need the ability of lifelong learning (also on the same task) - meaning that they can be improved continuously without suffering from catastrophic forgetting.
Whereas a lot of research has been done in this direction, most approaches either also require the previous data when training on the new data (i.e. (pseudo-)rehearsal), consist of a more complex structure containing two or more different networks (for example, a knowledge base and an active column), or, in the case of regularisation-based methods, are computationally less efficient. Therefore, we propose a lifelong learning algorithm that (1) is based on transformers as current state-of-the-art methods, (2) can be used for federated learning if the data sets are not available in one place (e.g. in clinical use cases due to data privacy), (3) does not involve a second or different neural network structure and hence requires limited resources, and (4) is computationally efficient. We evaluated our method on three different use cases from the biomedical domain, namely diseases, genes/proteins and chemicals. For these entity classes, five or six different data sets are available, respectively (see Table 1). As a baseline, we first determined the evaluation results of single models, i.e. models that have been trained on a single training data set. These are evaluated on the corresponding test set but also on all other test sets from this domain (called cross-evaluation). Here, we see significant differences (compare Figure 2). For example, a model trained on the NCBI disease corpus performs best on its own test set (F1-score of 86%) but drops to 25% on the BioNLP13-CG data set, which focuses on cancer-related diseases. The same is true for genes and chemicals, where the F1-score can differ by about 50%. Hence, continual improvement of the models is needed. We compared our method WEAVER to several different transformer-based CL methods that, except for Replay, do not require the data to be in the same place.
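The core operation of WEAVER, averaging the weights of the previously trained model and the newly trained model in proportion to their training-set sizes, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' released code: parameters are represented as plain dictionaries of per-layer weight lists, and `n_old`/`n_new` stand for the numbers of training examples seen by each model.

```python
def weaver_average(old_params, new_params, n_old, n_new):
    """Merge two models' parameters by a size-weighted average.

    old_params, new_params: dicts mapping layer names to lists of floats.
    n_old, n_new: training-set sizes used to weight the average.
    """
    total = n_old + n_new
    merged = {}
    for name in old_params:
        merged[name] = [
            (n_old * w_old + n_new * w_new) / total
            for w_old, w_new in zip(old_params[name], new_params[name])
        ]
    return merged

# Toy example: one "layer" with two weights; the old model has seen
# three times as much data, so its weights dominate the average.
old = {"encoder.w": [1.0, 0.0]}
new = {"encoder.w": [0.0, 1.0]}
merged = weaver_average(old, new, n_old=3000, n_new=1000)
print(merged["encoder.w"])  # [0.75, 0.25]
```

The same loop applies unchanged to real BERT checkpoints if the dictionaries hold the flattened tensors of a state dict.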
We show that WEAVER outperforms the other methods in terms of average F1-score after finishing training on the last data set (see Table 2), backward and forward transfer (Table 3), as well as performance on the test set corresponding to the very first data set (Fig. 3), for all three use cases. However, the evaluation turns out differently for the different use cases. In terms of averaged F1-score for disease NER, for example, WEAVER is about 1% better than FineTune and EWC, and around 2% worse than the upper bound (MTL). In the case of protein NER, WEAVER is only less than 1% worse than MTL and around 2% better than FineTune and EWC. In all scenarios, AdapterFusion performs worst. For disease NER, WEAVER achieves mainly positive backward transfer and outperforms all other CL-based methods. Generally, for backward transfer, we see differences between the different orderings, indicating that the order can influence the success of training. For forward transfer, this is less noticeable: for all use cases and orderings, the values range from 0.4 to 0.5. As we plotted the extent of forgetting in Fig. 3, it can be seen that WEAVER is more robust to the training of new data sets, i.e. its performance on the test set corresponding to the very first training data set varies less when a new data set is seen. As a proof of concept of WEAVER, we visualised token embeddings of the variously trained models. Figure 4 indicates that applying WEAVER to a series of new data sets results in word embedding distributions similar to those of the combined training, with the advantage of efficiently improving a model as soon as new data sets arise. Summarising, WEAVER consists of only one small post-processing step in which weights are averaged.
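Backward and forward transfer, as reported in Table 3, can be computed from the matrix of test scores collected during sequential training. The sketch below uses the standard GEM-style definitions, which may differ in detail from the exact variant used in the paper: `R[i][j]` is the score on test set `j` after finishing training on data set `i`, and `b[j]` is the baseline score of an untrained model on task `j`.

```python
def backward_transfer(R):
    """BWT: mean, over all earlier tasks, of (final score minus the
    score measured right after that task was trained)."""
    T = len(R)
    return sum(R[T - 1][j] - R[j][j] for j in range(T - 1)) / (T - 1)

def forward_transfer(R, b):
    """FWT: mean, over all later tasks, of (score on task j before
    training on it minus the untrained baseline b[j])."""
    T = len(R)
    return sum(R[j - 1][j] - b[j] for j in range(1, T)) / (T - 1)

# Toy 3-task score matrix (rows: after training task i; columns: test set j).
R = [[0.80, 0.40, 0.30],
     [0.78, 0.85, 0.45],
     [0.79, 0.83, 0.88]]
b = [0.10, 0.10, 0.10]
print(round(backward_transfer(R), 3))   # -0.015 (slight forgetting)
print(round(forward_transfer(R, b), 3))  # 0.325
```

Positive BWT means later training improved earlier tasks; negative BWT quantifies forgetting.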
In comparison to the other presented methods, there is no need to change the training procedure; in addition, this method can in principle be applied not only to transformer-based methods but to all neural-network-based methods whose weights are determined by training. However, possible limitations of our proposed method need to be investigated further: since the averaging is weighted based on the size of the training data sets, this can be problematic if the sizes differ too much. For example, if a model is re-trained on a big data set that only represents a small sub-domain (e.g. cancer-related diseases), the model can still be biased towards this data set/topic. Therefore, further experiments are needed to investigate the influence and importance of weighting based on corpus size. Here, the prior recognition of a distribution shift could also be useful and needs to be incorporated into future experiments [52]. Still, WEAVER shows very good results and outperforms other CL-based methods that do not require the data to be in the same place. It thereby combines practicability and quality, and can hence also be used for the continuous improvement of running services.

Conclusion

Based on transformer models as state-of-the-art methods for document processing, we propose a new lifelong learning method called WEAVER. This method continuously trains a model on top of a previously trained model and infuses knowledge from the previously trained model into the new one by weight averaging. Thereby, we demonstrate a simple, yet efficient method that can also be used in settings where the data sets are not available in one place. This is especially important in clinical use cases where data sets are subject to data protection laws. In addition, in contrast to conventional federated learning settings, no central server is needed; the weights can simply be passed from one institution to the next.
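Passing the weights from one institution to the next can be simulated as a chain that keeps a running, size-weighted average of all models seen so far. This is a minimal sketch under our own naming (the function `pass_along` and the stubbed local-training callables are hypothetical, not part of the paper); real local training would fine-tune BERT on the institution's data instead.

```python
def pass_along(institutions, initial_params):
    """Chain weight averaging across institutions without a central server.

    institutions: list of (local_train_fn, n_examples) pairs, where
    local_train_fn maps the incoming params to locally fine-tuned params.
    Maintains a running size-weighted average over all models so far.
    """
    params, n_seen = dict(initial_params), 0
    for local_train, n_local in institutions:
        local_params = local_train(params)  # local fine-tuning (stubbed here)
        total = n_seen + n_local
        if n_seen == 0:
            params = local_params
        else:
            params = {
                k: (n_seen * params[k] + n_local * local_params[k]) / total
                for k in params
            }
        n_seen = total
    return params

# Toy chain with one scalar weight: two equally sized sites pull the
# weight to 1.0 and 0.0 respectively, so the running average is 0.5.
insts = [
    (lambda p: {"w": 1.0}, 100),
    (lambda p: {"w": 0.0}, 100),
]
final = pass_along(insts, {"w": 0.5})
print(final["w"])  # 0.5
```

Tracking the cumulative count `n_seen` is what lets each hand-off use the same size-weighted rule as a single pairwise merge.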
Moreover, our method is a simple post-processing step, which means that the training workflow itself does not need to be changed; therefore, it can easily be integrated into running services, such as semantic search engines. In future work, the method will be tested on other NLP tasks from the biomedical domain, such as document classification, and will be integrated into our semantic search engine [31].

Figure 1: Overview of our transformer-based continual learning procedure WEAVER using weight averaging.

Figure 2: Evaluation results of single models. Each model has been trained on one of the available datasets and evaluated on all other datasets for the specific entity class.

Figure 3: F1-scores on the first test data set over time. After each re-training of the model, it is evaluated on the test set corresponding to the first training data set in order to see how much the model forgets. The legend depicted in subfigure (a) equally applies to the two other subfigures.

Figure 4: BERT embeddings visualised using UMAP. (a) NCBI and BC5CDR models trained independently; (b) NCBI and BC5CDR models trained jointly; (c) model trained continually on NCBI and BC5CDR using WEAVER; (d) model trained independently on NCBI+BC5CDR and miRNA-disease; (e) model trained jointly on the NCBI, BC5CDR and miRNA-disease data sets; (f) model trained continually on NCBI, BC5CDR and miRNA-disease using WEAVER.

Table 1: Overview of used datasets from the biomedical domain. (Only the first rows are recoverable from the extraction.)

NER task   Dataset     Size*
Disease    NCBI [36]   4725
           BC5CDR

Table 2: Averaged F1-scores after finishing training on all data sets. We averaged the F1-score over ten independent runs and determined the standard deviation (shown in brackets). The highest score is shown in bold; statistical significance has been tested with the paired t-test.
Upper-bound methods are shown on the right.

Entity Class  Order*  FineTune       EWC            AdapterFusion   WEAVER         Replay         MTL
Diseases      (i)     76.20 (0.44)   74.24 (0.38)   60.69 (2.34)    76.77 (0.29)   78.09 (0.44)   79.68 (0.29)
              (ii)    76.84 (0.23)   77.13 (0.47)   62.32 (2.87)    77.36 (0.28)   77.44 (0.89)   79.49 (0.28)
              (iii)   76.63 (0.33)   76.93 (0.49)   62.09 (2.73)    77.70 (0.18)   77.33 (0.47)   79.34 (0.25)
              (iv)    76.68 (0.48)   76.29 (0.29)   53.42 (11.93)   77.47 (0.29)   77.52 (0.64)   79.48 (0.33)
              Avg.    76.59          76.15          63.51           77.33          77.60          79.50
Proteins      (i)     72.01 (0.32)   72.28 (0.46)   61.95 (1.30)    75.60 (0.11)   68.66 (1.09)   76.05 (0.22)
              (ii)    75.84 (0.17)   76.26 (0.41)   63.66 (2.99)    75.53 (0.17)   68.44 (0.58)   76.03 (0.22)
              (iii)   72.85 (0.38)   71.91 (0.44)   60.40 (2.39)    74.43 (0.18)   67.40 (0.55)   75.98 (0.12)
              (iv)    73.34 (0.35)   72.39 (0.39)   60.29 (2.46)    75.47 (0.16)   67.14 (0.86)   75.85 (0.28)
              Avg.    73.51          73.21          61.58           75.26          67.91          75.98
Chemicals     (i)     74.33 (0.72)   73.70 (0.39)   62.57 (2.05)    76.81 (0.43)   76.51 (0.51)   78.26 (0.71)
              (ii)    74.27 (0.47)   74.56 (0.51)   62.08 (3.15)    76.63 (0.13)   76.22 (0.22)   78.18 (0.12)
              (iii)   77.36 (0.42)   77.54 (0.21)   63.78 (4.69)    77.76 (0.16)   76.39 (0.69)   78.27 (0.27)
              (iv)    75.05 (0.33)   75.39 (0.32)   64.32 (4.52)    75.57 (0.36)   75.90 (0.78)   78.14 (0.43)
              Avg.    75.25          75.30          63.19           76.69          76.26          78.21

*See Table A1 for the orderings.

Table 3: Backward/forward transfer for all entity classes. The upper-bound method Replay is shown on the right. The highest score is shown in bold; if the highest score outperforms the upper bound, it is also shown in italics. (Column headers: Entity Class, Order*, FineTune, EWC, AdapterFusion, WEAVER, Replay; the table body was lost in extraction.)

Appendix

Orderings: Table A1: Overview of randomly chosen orderings of the data sets. (Table body lost in extraction.)

For the ablation study, we freeze the first eight layers and only fine-tune the last four. As can be seen in Table A2, this results in lower F1-scores than fine-tuning the whole model for WEAVER.

References

[1] Vikas Yadav and Steven Bethard. A survey on recent advances in named entity recognition from deep learning models. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2145-2158, Santa Fe, New Mexico, USA, August 2018. Association for Computational Linguistics.
[2] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding.
[3] Akshay Kulkarni, Adarsha Shivananda, and Anoosh Kulkarni. Named-Entity Recognition Using CRF and BERT, pages 211-238. Apress, Berkeley, CA, 2022.
[4] Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. 36(4):1234-1240.
[5] Lisa Kühnel and Juliane Fluck. We are not ready yet: limitations of state-of-the-art disease named entity recognizers. 13(1):26.
[6] Lambert Schomaker. Lifelong learning for text retrieval and recognition in historical handwritten document collections. CoRR, abs/1912.05156, 2019.
[7] Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In Gordon H. Bower, editor, Psychology of Learning and Motivation, volume 24, pages 109-165. Academic Press.
[8] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. 114(13):3521-3526. National Academy of Sciences.
[9] Khe Chai Sim, Françoise Beaufays, Arnaud Benard, Dhruv Guliani, Andreas Kabel, Nikhil Khare, Tamar Lucassen, Petr Zadrazil, Harry Zhang, Leif Johnson, Giovanni Motta, and Lillian Zhou. Personalization of end-to-end speech recognition on mobile devices for named entities, 2019.
[10] Xialei Liu, Marc Masana, Luis Herranz, Joost van de Weijer, Antonio M. Lopez, and Andrew D. Bagdanov. Rotate your networks: Better weight consolidation and less catastrophic forgetting.
[11] Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget.
[12] A. Robins. Catastrophic forgetting, rehearsal and pseudorehearsal.
[13] Matthew Honnibal. Pseudo-rehearsal: A simple solution to catastrophic forgetting for NLP. Explosion.
[14] Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning.
[15] David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning.
[16] Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with A-GEM.
[17] Mohammad Rostami, Soheil Kolouri, and Praveen K. Pilly. Complementary learning for overcoming catastrophic forgetting using experience replay.
[18] Haytham M. Fayek, Lawrence Cavedon, and Hong Ren Wu. Progressive learning: A deep learning framework for continual learning. 128:345-357.
[19] Motonobu Hattori. A biologically inspired dual-network memory model for reduction of catastrophic forgetting. 134:262-268.
[20] Jong Won Park. Continual BERT: Continual learning for adaptive extractive summarization of COVID-19 literature.
[21] Cyprien de Masson d'Autume, Sebastian Ruder, Lingpeng Kong, and Dani Yogatama. Episodic memory in lifelong language learning.
[22] Fan-Keng Sun, Cheng-Hao Ho, and Hung-Yi Lee. LAMOL: LAnguage MOdeling for lifelong language learning.
[23] Yung-Sung Chuang, Shang-Yu Su, and Yun-Nung Chen. Lifelong language knowledge distillation.
[24] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP.
[25] Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. AdapterFusion: Non-destructive task composition for transfer learning.
[26] Yanzhe Zhang, Xuezhi Wang, and Diyi Yang. Continual sequence generation with adaptive compositional modules. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3653-3667. Association for Computational Linguistics.
[27] Steven C. H. Hoi, Doyen Sahoo, Jing Lu, and Peilin Zhao. Online learning: A comprehensive survey. Neurocomputing, 459:249-289, 2021.
[28] Viktor Losing, Barbara Hammer, and Heiko Wersing. Incremental on-line learning: A review and comparison of state of the art algorithms. Neurocomputing, 275:1261-1274, 2018.
[29] Qinbin Li, Zeyi Wen, Zhaomin Wu, Sixu Hu, Naibo Wang, Xu Liu, and Bingsheng He. A survey on federated learning systems: Vision, hype and reality for data privacy and protection. CoRR, abs/1907.09693, 2019.
[30] Lisa Langnickel, Roman Baum, Johannes Darms, Sumit Madan, and Juliane Fluck. COVID-19 preVIEW: Semantic search to explore COVID-19 research preprints. Pages 78-82. IOS Press.
[31] Lisa Langnickel, Johannes Darms, Roman Baum, and Juliane Fluck. preVIEW: from a fast prototype towards a sustainable semantic search system for central access to COVID-19 preprints. Journal of EAHIL, pages 8-14.
[32] Lisa Langnickel, Johannes Darms, Katharina Heldt, Denise Ducks, and Juliane Fluck. Continuous development of the semantic search engine preVIEW: from COVID-19 to long COVID. 2022:baac048.
[33] H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. Communication-efficient learning of deep networks from decentralized data, 2017.
[34] Baeksoo Kim, Wonjun Choi, and Hyunju Lee. A corpus of plant-disease relations in the biomedical domain. 14(8):e0221582. Public Library of Science.
[35] Gamal Crichton, Sampo Pyysalo, Billy Chiu, and Anna Korhonen. A neural network multi-task learning approach to biomedical named entity recognition. 18(1):368.
[36] Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. NCBI disease corpus: A resource for disease name recognition and concept normalization. 47:1-10.
[37] Shweta Bagewadi, Tamara Bobić, Martin Hofmann-Apitius, Juliane Fluck, and Roman Klinger. Detecting miRNA mentions and relations in biomedical literature. 3:205.
[38] Sampo Pyysalo, Tomoko Ohta, and Sophia Ananiadou. Overview of the cancer genetics (CG) task of BioNLP shared task 2013. In Proceedings of the BioNLP Shared Task 2013 Workshop, pages 58-66. Association for Computational Linguistics.
[39] Sampo Pyysalo, Tomoko Ohta, Rafal Rak, Dan Sullivan, Chunhong Mao, Chunxia Wang, Bruno Sobral, Jun'ichi Tsujii, and Sophia Ananiadou. Overview of the ID, EPI and REL tasks of BioNLP shared task 2011. 13(11):S2.
[40] Jin-Dong Kim, Yue Wang, and Yamamoto Yasunori. The GENIA event extraction shared task, 2013 edition - overview.
[41] Tomoko Ohta, Sampo Pyysalo, Rafal Rak, Andrew Rowley, Hong-Woo Chun, Sung-Jae Jung, Sung-Pil Choi, Sophia Ananiadou, and Jun'ichi Tsujii. Overview of the pathway curation (PC) task of BioNLP shared task 2013. In Proceedings of the BioNLP Shared Task 2013 Workshop, pages 67-75. Association for Computational Linguistics.
[42] Sampo Pyysalo, Tomoko Ohta, Makoto Miwa, and Jun'ichi Tsujii. Towards exhaustive protein modification event extraction. In Proceedings of BioNLP 2011 Workshop, BioNLP '11, pages 114-123. Association for Computational Linguistics.
[43] Jin-Dong Kim, Tomoko Ohta, Yoshimasa Tsuruoka, Yuka Tateisi, and Nigel Collier. Introduction to the bio-entity recognition task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications, JNLPBA '04, pages 70-75. Association for Computational Linguistics.
[44] Martin Krallinger, Obdulia Rabal, Florian Leitner, Miguel Vazquez, David Salgado, Zhiyong Lu, Robert Leaman, et al. The CHEMDNER corpus of chemicals and drugs and its annotation principles. 7(1):S2.
[45] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online, October 2020. Association for Computational Linguistics.
[46] Leland McInnes, John Healy, Nathaniel Saul, and Lukas Grossberger. UMAP: Uniform manifold approximation and projection. The Journal of Open Source Software, 3(29):861, 2018.
[47] Jaejun Lee, Raphael Tang, and Jimmy Lin. What would Elsa do? Freezing layers during transformer fine-tuning.
[48] Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. What happens to BERT embeddings during fine-tuning?
[49] Laila Rasmy, Yang Xiang, Ziqian Xie, Cui Tao, and Degui Zhi. Med-BERT: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction. 4(1):1-13. Nature Publishing Group.
[50] Yikuan Li, Shishir Rao, José Roberto Ayala Solares, Abdelaali Hassaine, Rema Ramakrishnan, Dexter Canoy, Yajie Zhu, Kazem Rahimi, and Gholamreza Salimi-Khorshidi. BEHRT: Transformer for electronic health records. 10(1):7155. Nature Publishing Group.
[51] P. K. S. Prakash, Srinivas Chilukuri, Nikhil Ranade, and Shankar Viswanathan. RareBERT: Transformer architecture for rare disease patient identification using administrative claims. 35(1):453-460.
[52] Robert Feldhans, Adrian Wilke, Stefan Heindorf, Mohammad Hossein Shaker, Barbara Hammer, Axel-Cyrille Ngonga Ngomo, and Eyke Hüllermeier. Drift detection in text data with document embeddings. In Hujun Yin, David Camacho, Peter Tiño, Richard Allmendinger, Antonio J. Tallón-Ballesteros, Ke Tang, Sung-Bae Cho, Paulo Novais, and Susana Nascimento, editors, Intelligent Data Engineering and Automated Learning - IDEAL 2021, Manchester, UK, November 25-27, 2021, Proceedings, volume 13113 of Lecture Notes in Computer Science, pages 107-118. Springer, 2021.
{'fraction_non_alphanumeric': 0.045832389947278195, 'fraction_numerical': 0.024695151534754343, 'mean_word_length': 4.909872885405715, 'pattern_counts': {'":': 0, '<': 0, '<?xml version=': 0, '>': 0, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 4, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 3, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
DERIVATION OF A MACROSCOPIC MODEL FOR BROWNIAN HARD NEEDLES

M. Bruna, S. J. Chapman, M. Schmidtchen

8 Feb 2023

2020 Mathematics Subject Classification: 35C20, 35K55, 35Q84, 60J70, 82C22, 70K20, 35B36.
Keywords and phrases: many-particle systems, anisotropic particles, excluded-volume interactions, phase transitions, coarse-graining.

Abstract. We study the role of anisotropic steric interactions in a system of hard Brownian needles. Despite having no volume, non-overlapping needles exclude a volume in configuration space that influences the macroscopic evolution of the system. Starting from the stochastic particle system, we use the method of matched asymptotic expansions and conformal mapping to systematically derive a nonlinear nonlocal partial differential equation for the evolution of the population density in position and orientation. We consider the regime of high rotational diffusion, resulting in an equation for the spatial density that allows us to compare the effective excluded volume of a hard-needles system with that of a hard-spheres system. We further consider spatially homogeneous solutions and find an isotropic to nematic transition as density increases, consistent with Onsager's theory.

Introduction

Systems of interacting particles are ubiquitous in nature. Examples include biomolecules (e.g. proteins), polymers (e.g. DNA), cells (e.g. bacteria), all the way to multi-cellular organisms (animals). Interactions between organisms may be attractive (keeping a herd cohesive), aligning (keeping animals moving in the same direction), or repulsive (keeping particles a safe distance apart) [19]. Short-ranged repulsive interactions with singular or hard-core potentials are used to model steric or excluded-volume interactions [5]. Anisotropy plays a crucial role in self-organisation. For example, the helical form of the DNA strand is due to highly anisotropic interactions between DNA bases [36].
The molecular shape of liquid crystals leads to their remarkable properties [1]. Self-propulsion in active matter systems can lead to motility-induced phase separation [9], where the uniform suspension becomes unstable and dense clusters of almost stationary particles emerge [9]. Alignment interactions have been shown to explain the emergence of flocking and milling [8]. Tools to study the rich collective properties of such systems range from simulations at the microscopic level (e.g. molecular dynamics or Monte Carlo simulations) to the study of macroscopic models for statistical quantities, often involving partial differential equations (PDEs). While microscopic models provide a detailed system description, simulating them can become computationally prohibitive. This is due to the large number of particles and the complexity of interactions often involved, mainly if one is after statistical properties (which require averaging over multiple simulations). Macroscopic models operate at the statistical level and can often provide the insight lacking in their microscopic counterparts. Anisotropy in particle systems comes in many forms. Models can be classified into either first- or second-order models and either soft- or hard-core interactions. In second-order models (which track particles' positions and velocities), particles may interact differently depending on their relative velocities. Examples with weak interactions include the Cucker-Smale model [11] and the Vicsek model [38], which include alignment interactions in velocities. One may also add a cone of vision such that an individual only aligns velocity with neighbours within the cone [8]. The Cucker-Smale and Vicsek models, and their many variants, have been the starting point in multiple works concerned with deriving kinetic PDE models starting from such microscopic dynamics.
It is common to consider a weak or mean-field scaling 1/N of the interactions (where N is the number of particles), leading to nonlocal and nonlinear kinetic PDE in the limit [13,27,37,25,7]. The focus in most kinetic models is on how the interaction rule depends on relative positions and velocities, not particles' shapes. An exception is the recent works [24,28], where they consider a system of kinetic hard needles that align upon collision. Instead of a mean-field scaling, they consider the Boltzmann-Grad limit of infrequent but strong interactions and invoke propagation of chaos to derive a closed kinetic equation. First-order models for anisotropic particles often consider the particle position and orientation and assume diffusive behaviour in both. For isotropic particles, microscopic models are well-established (the hard-sphere, the Lennard-Jones, the Coulomb potential, etc.), and current efforts primarily focus on deriving macroscopic models from them. In contrast, anisotropic particles are much harder to model, even at the microscopic scale. There is a trade-off between the complexity of the particles' shape, on the one hand, and the model's analytic tractability, on the other. Interactions may be soft or hard; depending on the application, either may be seen as the 'true dynamics'. For example, while soft interactions may be more appropriate for molecules, hard steric interactions may be more fitting for cells, bacteria and animals. The most well-known soft anisotropic potential is the Gay-Berne potential [23]. It builds on the work of Berne and Pechukas [2], which proposed to represent particles as a union of Gaussian potentials and their interaction as the overlap integral of their Gaussians. The Gay-Berne potential combines this anisotropic overlap model with the Lennard-Jones potential (a 12-6 attractive-repulsive potential). The multi-phase-field approach [33,39] is at the other end of the complexity-tractability trade-off. 
Here, each particle is not characterised by its centre of mass and orientation but by a phase field variable, φ_i(x, t) ∈ [0, 1], such that φ_i(x, t) ≈ 1 if location x is occupied by the particle i at time t and, conversely, φ_i(x, t) ≈ 0 if the i-th particle does not occupy location x at time t. Due to the diffusive interface between the two states (occupied and unoccupied), repulsive interactions are incorporated in a fashion similar to that of Berne and Pechukas: the overlap integral between two particles (now represented by phase-field variables rather than Gaussians) is computed, and the evolution is such that it minimises the area of overlap. The models above have in common that the space taken up by particles is not precisely localised, in contrast to hard-core models. Hard-core ellipsoids and rods are the natural generalisations of hard spheres to model anisotropy. A hard-core particle induces an excluded region (where no other particle can enter). In his seminal paper [35], Onsager finds expressions for the excluded volume of various particle shapes such as ellipses, discs, and rods. The most striking example in his treatise is a hard needle of length ǫ in two dimensions, which has zero volume but excludes a volume in configuration space of ǫ²|sin(θ)| to another needle with relative orientation θ (see Figure 1). The problem of interacting needles in three dimensions is fundamentally different, since needles exclude zero volume in configuration space in addition to having no volume. Rods with a core-shell structure have also been used in microscopic models of self-assembly [20] and morphogenesis in bacterial colonies [15]. Interesting mathematical problems arise from considering even just one anisotropic hard-core particle. For example, in [26], they study the mean turnaround time of a Brownian needle in a narrow planar strip as a simplified model for mRNA or stiff DNA fragments under extreme confinement.
In [10], they consider an anisotropic Brownian microswimmer in a channel and show that no-flux boundary conditions with the flat channel walls lead to nontrivial boundaries in configuration space.

(Figure 1. Left panel: a needle with centre at the origin; the centre of a second needle (red) with orientation θ cannot be placed inside the excluded region (shaded grey area), as it would lead to an overlap. Right panel: excluded volume in phase space; the vertical axis denotes the relative angle between the two needles.)

Since one hard-core anisotropic particle already poses mathematical challenges and, ordinarily, natural systems comprise large ensembles of anisotropic particles, it is easy to see that their study is substantially more challenging. This explains the dearth of macroscopic PDE models systematically derived from underlying dynamics and the popularity of computational and phenomenological approaches to incorporate anisotropic interactions in PDE models. Phenomenological models have been widely used in the context of polymer and liquid crystal theory. These include the so-called tube theory [14], which assumes polymers as rigid filaments that, under crowding, move along a tube formed by the surrounding polymers, as well as the Landau-de Gennes Q-tensor theory for nematic liquid crystals [12], which represents polar molecules via a continuum order parameter. A lot of work has been dedicated to validating these theories by comparing their predictions with microscopic models, with different levels of success [22]. In [32], they consider a system of self-propelled needles, with collisions such that energy and momentum are preserved, and validate the tube theory (they find that the self-diffusion coefficient of a needle increases with concentration, in contrast to that of hard spheres [4]). In this paper, we focus on, possibly, the simplest hard-core anisotropic system, namely that of N Brownian needles of length ǫ with non-overlapping constraints in two dimensions.
Using matched asymptotic expansions, we systematically derive a macroscopic PDE model in the asymptotic regime ǫ²N ≪ 1. To our knowledge, this is the first systematic derivation for such a system. We take an approach similar to [5], in which the authors consider a system of N Brownian hard disks of diameter ǫ under a drift f(x) in two spatial dimensions. Under the assumption that the volume fraction of the particles is small, the one-particle probability density ρ(x, t) satisfies the nonlinear diffusion equation (Eq. (11) of Ref. [5])

∂_t ρ(x, t) = ∇_x · ( [1 + π(N − 1)ǫ²ρ]∇_x ρ − f(x)ρ ),   (1)

in R². The goal of this paper is to derive a PDE analogous to (1) for the one-particle density p(x, θ, t) describing the probability of a needle with centre at x and orientation θ at time t. The structure of the paper is as follows. In Section 2, we introduce the particle-based model, that is, a system of N Brownian needles with drifts and its associated Fokker-Planck equation describing the whole ensemble probabilistically. Section 3 is devoted to the systematic derivation of the effective model using the method of matched asymptotic expansions. Section 4 is dedicated to systems with high rotational diffusion coefficients, which are shown to inherit striking similarities with the hard-disk model proposed by [5]. In Section 5, we conclude the paper with the space-homogeneous model and, upon performing a linear stability analysis, we find that the system exhibits an isotropic-to-nematic phase transition consistent with Onsager's theory [29].

The microscopic model and its associated Fokker-Planck equation

We start by describing the individual-based (microscopic) model. We suppose there are N ∈ N identical hard needles of length ǫ distributed in a bounded domain Ω ⊂ R². For 1 ≤ i ≤ N, we denote by X_i(t) ∈ Ω the centre of the i-th needle and by Θ_i(t) ∈ [0, π] its orientation. We choose Ω to be the two-dimensional torus T², where T = R/(πZ), imposing periodic boundary conditions.
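Before moving on, the structure of the hard-disk equation (1) can be illustrated numerically. The sketch below is a minimal explicit finite-difference scheme for a 1D periodic analogue of (1) with zero drift; the grid size, coefficient a (standing in for π(N − 1)ǫ²) and initial data are illustrative choices, not taken from the paper. The conservative form preserves mass exactly, and the density relaxes towards the uniform state.

```python
import numpy as np

# Explicit finite-difference sketch of a 1D periodic analogue of Eq. (1),
#   d(rho)/dt = d/dx( [1 + a*rho] * d(rho)/dx ),   a ~ pi*(N-1)*eps^2,
# with zero drift f = 0. Grid, coefficient and initial data are illustrative.

def step(rho, dx, dt, a):
    """One conservative explicit Euler step."""
    rho_e = np.roll(rho, -1)                  # rho_{i+1} (periodic wrap)
    mob = 1.0 + a * 0.5 * (rho + rho_e)       # mobility 1 + a*rho at i+1/2
    flux = mob * (rho_e - rho) / dx           # diffusive flux at i+1/2
    return rho + dt * (flux - np.roll(flux, 1)) / dx

n, a = 128, 0.5
x = np.arange(n) / n
dx = 1.0 / n
rho = 1.0 + 0.5 * np.cos(2.0 * np.pi * x)     # positive initial density
dt = 0.2 * dx**2 / (1.0 + a * rho.max())      # explicit stability bound

mass0 = rho.sum() * dx
for _ in range(4000):
    rho = step(rho, dx, dt, a)
mass1 = rho.sum() * dx
amp = rho.max() - rho.min()                   # decays towards the uniform state
```

Note that the nonlinear term only enhances the mobility where the density is large, which is the same qualitative effect the excluded-volume correction has in (1).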
The π-period is chosen for mathematical convenience, such that Υ = Ω × [0, π) = T³. As pointed out in the introduction, the spatial extension of the needles restricts their ability to move freely in the domain due to non-overlapping constraints, in contrast to a system of point particles. Each needle evolves according to a translational (resp. rotational) Brownian motion with diffusion constant D_T (resp. D_R) in an external force field f = (f_T, f_R) that may depend both on the position and orientation of the needle, but not on the other needles. This leads to the system of stochastic differential equations (SDEs)

dX_i(t) = √(2D_T) dW_{T,i}(t) + f_T(X_i, Θ_i) dt,   (2a)
dΘ_i(t) = √(2D_R) dW_{R,i}(t) + f_R(X_i, Θ_i) dt,   (2b)

for 1 ≤ i ≤ N and (X_i, Θ_i) ∈ Υ. Here W_{T,i} and W_{R,i} are standard independent Brownian motions for 1 ≤ i ≤ N. In addition, we impose reflective boundary conditions whenever two needles come into contact, thereby introducing a coupling to an otherwise uncoupled system of N SDEs. It is convenient to consider the joint probability density P_N(ξ⃗, t) associated to system (2), where ξ⃗ = (ξ_1, . . . , ξ_N) and ξ_i = (x_i, θ_i), for 1 ≤ i ≤ N. The density P_N describes the probability of the entire system of N needles being in state ξ⃗ at time t. It is well known that P_N satisfies the Fokker-Planck equation

∂_t P_N = ∇_x⃗ · [ D_T∇_x⃗ P_N − F_T(ξ⃗)P_N ] + ∇_θ⃗ · [ D_R∇_θ⃗ P_N − F_R(ξ⃗)P_N ],   (3a)

where x⃗ = (x_1, . . . , x_N), θ⃗ = (θ_1, . . . , θ_N), F_T(ξ⃗) = (f_T(ξ_1), . . . , f_T(ξ_N)) and F_R(ξ⃗) = (f_R(ξ_1), . . . , f_R(ξ_N)). Due to the hard-core interactions between needles, note that (3a) is defined not on ξ⃗ ∈ Υ^N but on its perforated form Υ^N_ǫ := Υ^N \ B^N_ǫ. Here, B^N_ǫ denotes the set of illegal configurations where at least two needles overlap, i.e.,

B^N_ǫ := { ξ⃗ ∈ Υ^N | ∃ i ≠ j s.t. N(ξ_i) ∩ N(ξ_j) ≠ ∅ },

where

N(x, θ) := { x + λ(cos(θ), sin(θ)) : |λ| ≤ ǫ/2 }

denotes the set of all points belonging to a needle at (x, θ).
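The membership test for B^N_ǫ can be made concrete: represent each needle N(x, θ) by its two endpoints and apply a standard orientation-based segment intersection test. The sketch below is illustrative (all numerical values are arbitrary, and collinear edge cases are ignored for brevity); it is not the scheme used in the paper, which works with the Fokker-Planck equation rather than simulations.

```python
import numpy as np

# A needle at (x, theta) is the segment N(x, theta) with endpoints
# x -/+ (eps/2)*(cos theta, sin theta); two configurations lie in B_eps
# iff the segments intersect (orientation test; collinear cases ignored).

def needle(x, theta, eps):
    d = 0.5 * eps * np.array([np.cos(theta), np.sin(theta)])
    return np.asarray(x, float) - d, np.asarray(x, float) + d

def _orient(a, b, c):
    """Sign of the z-component of (b - a) x (c - a)."""
    return np.sign((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def overlap(x1, th1, x2, th2, eps):
    a, b = needle(x1, th1, eps)
    c, d = needle(x2, th2, eps)
    return bool(_orient(a, b, c) != _orient(a, b, d)
                and _orient(c, d, a) != _orient(c, d, b))

eps = 1.0
parallel = overlap([0.0, 0.0], 0.0, [0.0, 0.1], 0.0, eps)        # side by side
crossing = overlap([0.0, 0.0], 0.0, [0.0, 0.0], np.pi / 2, eps)  # perpendicular
far_away = overlap([0.0, 0.0], 0.3, [1.5, 0.0], 2.0, eps)        # centres > eps apart
```

As expected, parallel side-by-side needles and needles whose centres are more than ǫ apart do not overlap, while perpendicular needles through the same centre do.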
On ∂Υ^N_ǫ (corresponding to configurations with at least two needles in contact), we prescribe reflective boundary conditions

( D_T∇_x⃗ P_N − F_T(ξ⃗)P_N , D_R∇_θ⃗ P_N − F_R(ξ⃗)P_N ) · n = 0 on ∂Υ^N_ǫ,   (3b)

where n ∈ S^{3N−1} denotes the unit outward normal on the boundary. Finally, we assume that the initial positions of the particles are identically distributed, so that the initial condition P(ξ⃗, 0) = P_0(ξ⃗) is invariant to permutations of the particles' labels.

Derivation of the macroscopic model

In the previous section, we have established a connection between the particle-based dynamical system (2) and the associated Fokker-Planck equation (3). We highlight that the dimensionality of both descriptions increases as more needles are added to the system, rendering their analytical or numerical study intractable. This section is dedicated to deriving an effective model in the form of a nonlinear evolution equation for the one-particle probability density

p(ξ, t) := ∫_{Υ^N_ǫ} P_N(ξ⃗, t) δ(ξ_1 − ξ) dξ⃗.   (4)

In the case of ǫ = 0, the needles become point particles and, as a consequence, their evolutions (2) decouple and, for suitable iid initial conditions, we have that P_N(ξ⃗, t) = ∏_{i=1}^N p(ξ_i, t). In this setting, the first marginal is shown to satisfy the following equation:

∂_t p(ξ, t) = ∇_x · [ D_T∇_x p − f_T(ξ)p ] + ∂_θ[ D_R∂_θ p − f_R(ξ)p ],   (5)

with t ≥ 0 and ξ ∈ Υ. Unlike point particles, needles of length ǫ > 0 exclude a certain volume in phase space.

Remark 3.1 (Excluded region of a needle). The region in phase space excluded by a needle at ξ_1 is denoted by B_ǫ(ξ_1) (Fig. 1). Depending on the relative orientation θ := θ_2 − θ_1 between the two needles, the cross-section of B_ǫ for fixed θ ranges from a line of length 2ǫ (θ = 0) to a square of side ǫ (θ = π/2).
For general θ, the slice is a rhombus of area ǫ² sin θ with nodes at

x_A = x_1 + (ǫ/2)R_{θ_1}(−1 + cos θ, sin θ),  x_B = x_1 + (ǫ/2)R_{θ_1}(1 + cos θ, sin θ),
x_C = x_1 + (ǫ/2)R_{θ_1}(1 − cos θ, −sin θ),  x_D = x_1 + (ǫ/2)R_{θ_1}(−1 − cos θ, −sin θ),   (6)

where R_{θ_1} is the rotation matrix

R_{θ_1} = ( cos θ_1  −sin θ_1 ; sin θ_1  cos θ_1 ).   (7)

We denote by n̂_2 the outward unit normal on B_ǫ(ξ_1) (outward of Υ(ξ_1), so it points into the shaded area in Fig. 1). If the boundary of B_ǫ(ξ_1) is given by the relation χ(ξ_2) = 0, we have that n̂_2 ∝ ∇_{ξ_2}χ. For example, the top edge x_A x_B is given by χ(ξ_2) = y_A + tan θ_1 (x_2 − x_A) − y_2 = 0, and the normal vector is

n̂_2 ∝ ∇_{ξ_2}χ = ( tan θ_1 , −1 , (ǫ/2)(cos θ_2 + tan θ_1 sin θ_2) ).   (8)

For ǫ > 0, the equation for the one-particle density p(ξ_1, t) is obtained by integrating (3) with respect to ξ_2, . . . , ξ_N for ξ_1 fixed. The perforations in Υ^N_ǫ lead to boundary integrals for ξ_i ∈ B_ǫ(ξ_1), on which the two-particle probability density P_2(ξ_1, ξ_i, t) needs to be evaluated. One can go back to (3) and obtain an equation for P_2, which in turn depends on the three-particle probability density P_3. This is known as the BBGKY hierarchy. In this work, we assume that φ = ǫ²N ≪ 1, such that this hierarchy can be truncated "asymptotically". We note from Remark 3.1 that the volume of B_ǫ(ξ_1) is ǫ² ∫_0^π sin(θ) dθ = 2ǫ². If φ ≪ 1, the volume in Υ^N_ǫ occupied by configurations where two needles are close by is O(φ), whereas the volume of configurations where three or more needles are nearby is much smaller (O(φ²)). Hence, at the leading order, the equation for p coincides with the point-particles equation (5), and the first correction appears at O(φ) and is due to two-needle interactions. Three- and more-needle interactions are higher-order corrections.
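The geometry of Remark 3.1 is easy to verify numerically: the shoelace area of the rhombus with nodes (6) equals ǫ² sin θ for any θ_1 and θ, and integrating over θ ∈ [0, π] recovers the total excluded volume 2ǫ². The parameter values in the sketch below are arbitrary.

```python
import numpy as np

# Numerical check of Remark 3.1: the theta-slice of B_eps(xi_1) is a rhombus
# of area eps^2*sin(theta) with the nodes in (6); integrating over theta in
# [0, pi] gives the total excluded volume 2*eps^2. Parameters are arbitrary.

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def rhombus_nodes(x1, th1, th, eps):
    """Nodes x_A, x_B, x_C, x_D of Eq. (6), in boundary order."""
    R = rot(th1)
    offsets = [(-1 + np.cos(th), np.sin(th)), (1 + np.cos(th), np.sin(th)),
               (1 - np.cos(th), -np.sin(th)), (-1 - np.cos(th), -np.sin(th))]
    return [x1 + 0.5 * eps * R @ np.array(o) for o in offsets]

def shoelace(pts):
    area = 0.0
    for p, q in zip(pts, pts[1:] + pts[:1]):
        area += p[0] * q[1] - q[0] * p[1]
    return 0.5 * abs(area)

eps, th1, th = 0.7, 0.4, 1.1
area = shoelace(rhombus_nodes(np.array([0.2, -0.3]), th1, th, eps))
exact = eps**2 * np.sin(th)

# midpoint rule for eps^2 * int_0^pi sin(theta) dtheta = 2*eps^2
m = 20000
mid = (np.arange(m) + 0.5) * np.pi / m
total = float(np.sum(eps**2 * np.sin(mid)) * np.pi / m)
```

Since rotations preserve area, neither the centre x_1 nor the orientation θ_1 affects the slice area, only the relative angle θ does.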
Therefore, we may neglect three-particle interactions in the equation for P_2(ξ_1, ξ_2, t) and consider

∂_t P_2 = ∇_{ξ_1} · [ D∇_{ξ_1} P_2 − f(ξ_1)P_2 ] + ∇_{ξ_2} · [ D∇_{ξ_2} P_2 − f(ξ_2)P_2 ],   (9a)

in Υ²_ǫ, where D = diag(D_T, D_T, D_R) and f(ξ) = (f_T(ξ), f_R(ξ)), together with reflecting boundary conditions

[ D∇_{ξ_1} P_2 − f(ξ_1)P_2 ] · n_1 + [ D∇_{ξ_2} P_2 − f(ξ_2)P_2 ] · n_2 = 0 on ∂Υ²_ǫ.   (9b)

Here n_1 (resp. n_2) are the components of the unit normal n corresponding to the coordinates of the first (resp. second) needle. It turns out that n_1 = −n_2, such that n = (√2/2)(−n̂_2, n̂_2), where n̂_2 is defined in Remark 3.1.

3.1 Evolution of the first marginal

Let Υ(ξ_1) = Υ \ B_ǫ(ξ_1) denote the second particle's phase space given that the first particle is in state ξ_1. Integrating (9a) over Υ(ξ_1) yields

∂_t p(ξ_1, t) = ∫_{Υ(ξ_1)} ∂_t P_2(ξ_1, ξ_2, t) dξ_2 = ∫_{Υ(ξ_1)} ∇_{ξ_1} · [ D∇_{ξ_1} P_2 − f(ξ_1)P_2 ] dξ_2 + ∫_{Υ(ξ_1)} ∇_{ξ_2} · [ D∇_{ξ_2} P_2 − f(ξ_2)P_2 ] dξ_2.   (10)

Using the Reynolds transport theorem, the first integral becomes

∫_{Υ(ξ_1)} ∇_{ξ_1} · [ D∇_{ξ_1} P_2 − f(ξ_1)P_2 ] dξ_2 = ∇_{ξ_1} · [ D∇_{ξ_1} p − f(ξ_1)p ] + ∫_{∂B_ǫ(ξ_1)} [ f(ξ_1)P_2 − 2D∇_{ξ_1} P_2 − D∇_{ξ_2} P_2 ] · n̂_2 dS_{ξ_2}.   (11)

The second integral in (10) is

∫_{Υ(ξ_1)} ∇_{ξ_2} · [ D∇_{ξ_2} P_2 − f(ξ_2)P_2 ] dξ_2 = ∫_{∂B_ǫ(ξ_1)} [ D∇_{ξ_2} P_2 − f(ξ_2)P_2 ] · n̂_2 dS_{ξ_2}.   (12)

Substituting (11) and (12) into (10), we obtain

∂_t p(ξ_1, t) = ∇_{ξ_1} · [ D∇_{ξ_1} p − f(ξ_1)p ] + I,   (13)

where the collision integral I is

I = −∫_{∂B_ǫ(ξ_1)} D( ∇_{ξ_1} P_2 + ∇_{ξ_2} P_2 ) · n̂_2 dS_{ξ_2}.   (14)

The evolution equation (13) for the first marginal p still depends on the joint probability density function P_2. A common approach to overcome this is to use a closure assumption, for instance, the mean-field approximation, P_2(ξ_1, ξ_2, t) = p(ξ_1, t)p(ξ_2, t).
However, such an approach ignores correlations between both particles, and it is not suitable for systems of strongly interacting particles with short-range repulsive interactions such as hard needles. Instead, we employ the method of matched asymptotics to compute the collision integral I systematically.

Matched asymptotic expansions

We introduce a partition of the domain Υ(ξ_1) consisting of an inner region, where the two needles are close to each other, |x_1 − x_2| ∼ ǫ, and an outer region, where the two needles are far apart, |x_1 − x_2| ≫ ǫ. In the outer region, we suppose that particles are independent at leading order, whereas we consider their correlation in the inner region. In the outer region we define P_out(ξ_1, ξ_2, t) = P_2(ξ_1, ξ_2, t). Then, by independence, the two-particle density function is

P_out(ξ_1, ξ_2, t) = p(ξ_1, t)p(ξ_2, t) + ǫP_out^(1)(ξ_1, ξ_2, t) + · · · .   (15)

In the inner region, we introduce the inner variables ξ̃_1 = (x̃_1, θ̃_1) and ξ̃ = (x̃, θ̃), defined as

x_1 = x̃_1,  x_2 = x̃_1 + ǫR_{θ̃_1}x̃,  θ_1 = θ̃_1,  θ_2 = θ̃_1 + θ̃,   (16)

and the inner function P̃(ξ̃_1, ξ̃, t) = P_2(ξ_1, ξ_2, t). The coordinates (x̃, θ̃) define the configuration of the second needle relative to the first. The excluded volume B_ǫ(ξ_1) becomes B_1(0) in inner variables. In the ξ̃-space, this is now a volume centred at the origin with two horizontal sides (see Figure 1 and Remark 3.1). Using that x̃ = ǫ^{−1}R^T_{θ̃_1}(x_2 − x_1), the derivatives transform according to

∇_{x_1} → ∇_{x̃_1} − ǫ^{−1}R_{θ̃_1}∇_x̃,  ∇_{x_2} → ǫ^{−1}R_{θ̃_1}∇_x̃,  ∂_{θ_1} → ∂_{θ̃_1} − ∂_θ̃ + ỹ∂_x̃ − x̃∂_ỹ,  ∂_{θ_2} → ∂_θ̃.
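The spatial part of this change of variables can be sanity-checked with central differences: for any smooth g, the rule ∇_{x_2} g = ǫ^{−1}R_{θ̃_1}∇_x̃ G must hold, where G(x̃) = g(x̃_1 + ǫR_{θ̃_1}x̃). The test function and all parameter values in the sketch below are arbitrary choices.

```python
import numpy as np

# Finite-difference check of the transformation (16): with
# xt = eps^{-1} R^T (x2 - x1), the gradient rule
#   grad_{x2} g = (1/eps) * R @ grad_{xt} G,   G(xt) = g(x1 + eps * R @ xt),
# holds for any smooth g. Test function and parameters are arbitrary.

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def g(x):  # arbitrary smooth scalar field on R^2
    return np.sin(x[0]) * np.exp(0.3 * x[1])

def grad(f, x, h=1e-6):  # central differences
    e = np.eye(2)
    return np.array([(f(x + h * e[i]) - f(x - h * e[i])) / (2.0 * h) for i in range(2)])

eps, th1 = 0.05, 0.7
x1 = np.array([0.4, -0.2])
xt = np.array([1.3, -0.8])           # inner coordinate of the second needle
R = rot(th1)
x2 = x1 + eps * (R @ xt)             # corresponding outer coordinate

lhs = grad(g, x2)                                               # grad_{x2} g
rhs = (1.0 / eps) * (R @ grad(lambda y: g(x1 + eps * (R @ y)), xt))
err = float(np.abs(lhs - rhs).max())
```

The factor ǫ^{−1} is what makes spatial derivatives dominant in the inner region and produces the leading-order Laplace problem below.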
In terms of the inner variables, (9a) reads

ǫ²∂_t P̃ = 2D_T Δ_x̃ P̃ − ǫ[ 2D_T ∇_{x̃_1} · ( R_{θ̃_1}∇_x̃ P̃ ) + ∇_x̃ · ( R_{θ̃_1}[ f_T(x̃_1 + ǫx̃, θ̃_1 + θ̃) − f_T(ξ̃_1) ]P̃ ) ] + ǫ²[ ∇_{x̃_1} · ( D_T∇_{x̃_1}P̃ − f_T(ξ̃_1)P̃ ) + D_R( (∂_{θ̃_1} − ∂_θ̃ + ỹ∂_x̃ − x̃∂_ỹ)² + ∂²_θ̃ )P̃ − (∂_{θ̃_1} − ∂_θ̃ + ỹ∂_x̃ − x̃∂_ỹ)( f_R(ξ̃_1)P̃ ) − ∂_θ̃( f_R(x̃_1 + ǫx̃, θ̃_1 + θ̃)P̃ ) ].   (17)

In order to write the boundary condition (9b) in terms of the inner variables, we need to determine how the normal n̂_2 changes under the transformation. Following the procedure in Remark 3.1, we have ∇_{ξ_2}χ → (ǫ^{−1}R_{θ̃_1}∇_x̃ χ̃, ∂_θ̃ χ̃), where χ̃(ξ̃) = 0 describes the boundary in inner variables. Therefore

n̂_2 → (R_{θ̃_1}ñ, ǫñ_θ).   (18)

For example, the top edge x_A x_B becomes χ̃ = sin(θ̃) − ỹ = 0, and the normal vector in the inner variables is (ñ, ñ_θ) ∝ ∇_ξ̃ χ̃ = (0, −1, cos θ̃). Using (18) and n_1 = −n_2, as pointed out earlier, the no-flux boundary condition (9b) becomes

0 = [ 2D_T R_{θ̃_1}∇_x̃ P̃ − ǫD_T∇_{x̃_1}P̃ − ǫ( f_T(x̃_1 + ǫx̃, θ̃_1 + θ̃) − f_T(ξ̃_1) )P̃ ] · R_{θ̃_1}ñ + ǫ²[ D_R( 2∂_θ̃ P̃ − ∂_{θ̃_1}P̃ + x̃∂_ỹ P̃ − ỹ∂_x̃ P̃ ) − ( f_R(x̃_1 + ǫx̃, θ̃_1 + θ̃) − f_R(ξ̃_1) )P̃ ]ñ_θ,   (19)

for ξ̃ ∈ ∂B_1(0). Finally, we impose the matching boundary condition to ensure that, as the two needles become further apart and enter the outer region, the inner solution P̃ will match with the outer solution P_out. Expanding (15) in the inner variables,

P̃ ∼ p(x̃_1, θ̃_1, t) p(x̃_1 + ǫR_{θ̃_1}x̃, θ̃_1 + θ̃, t) + ǫP_out^(1)(x̃_1, θ̃_1, x̃_1 + ǫR_{θ̃_1}x̃, θ̃_1 + θ̃, t)
  ∼ pp⁺ + ǫ[ pR_{θ̃_1}x̃ · ∇_{x̃_1}p⁺ + P_out^(1)(x̃_1, θ̃_1, x̃_1, θ̃_1 + θ̃, t) ],  as |x̃| → ∞,   (20)

where p := p(x̃_1, θ̃_1, t) and p⁺ := p(x̃_1, θ̃_1 + θ̃, t). We look for a solution of (17), (19), and (20) of the form P̃ = P̃^(0) + ǫP̃^(1) + · · · . The leading-order problem is

Δ_x̃ P̃^(0) = 0,
R_{θ̃_1}∇_x̃ P̃^(0) · R_{θ̃_1}ñ = 0,  ξ̃ ∈ ∂B_1(0),   (21)
P̃^(0) ∼ pp⁺,  |x̃| → ∞.

This is a problem in the inner spatial variables x̃, in which x̃_1, θ̃_1, and θ̃ can be regarded as parameters. In particular, (21) is defined for x̃ ∈ R² \ R_θ̃, where R_θ̃ denotes the rhombus corresponding to slicing the excluded volume B_1(0) at θ̃ (see Figure 1).
The solution of (21) is

P̃^(0) = pp⁺.   (22)

Using (22) and expanding f_T, the O(ǫ) problem reads

Δ_x̃ P̃^(1) = 0,  x̃ ∈ R² \ R_θ̃,
R_{θ̃_1}∇_x̃ P̃^(1) · R_{θ̃_1}ñ = (1/2)[ ∇_{x̃_1}(pp⁺) + (pp⁺/D_T)(f_T⁺ − f_T) ] · R_{θ̃_1}ñ,  x̃ ∈ ∂R_θ̃,   (23)
P̃^(1) ∼ p∇_{x̃_1}p⁺ · R_{θ̃_1}x̃ + P_out^(1)(x̃_1, θ̃_1, x̃_1, θ̃_1 + θ̃, t),  |x̃| → ∞,

where f_T := f_T(x̃_1, θ̃_1) and f_T⁺ := f_T(x̃_1, θ̃_1 + θ̃). We can rewrite problem (23) as

Δ_x̃ P̃^(1) = 0,  x̃ ∈ R² \ R_θ̃,
∇_x̃ P̃^(1) · ñ = R^T_{θ̃_1}A · ñ,  x̃ ∈ ∂R_θ̃,   (24)
P̃^(1) ∼ R^T_{θ̃_1}B_∞ · x̃ + C_∞,  |x̃| → ∞,

where A, B_∞ and C_∞ are functions of x̃_1, θ̃_1, θ̃ and t only, given by

A = (1/2)[ ∇_{x̃_1}(pp⁺) + (pp⁺/D_T)(f_T⁺ − f_T) ],  B_∞ = p∇_{x̃_1}p⁺,  C_∞ = P_out^(1)(x̃_1, θ̃_1, x̃_1, θ̃_1 + θ̃, t).   (25)

The solution to (24) is given by

P̃^(1) = R^T_{θ̃_1}A · x̃ + R^T_{θ̃_1}B · u + C_∞,   (26)

where B := B_∞ − A and u = (u_1, u_2) satisfy the following problems:

Δ_x̃ u_1 = 0,  x̃ ∈ R² \ R_θ̃;  ∇_x̃ u_1 · ñ = 0,  x̃ ∈ ∂R_θ̃;  u_1 ∼ x̃,  |x̃| → ∞,   (27)

and

Δ_x̃ u_2 = 0,  x̃ ∈ R² \ R_θ̃;  ∇_x̃ u_2 · ñ = 0,  x̃ ∈ ∂R_θ̃;  u_2 ∼ ỹ,  |x̃| → ∞.   (28)

Thus we have reduced the inner problem (23) to two problems for u_1(x̃) and u_2(x̃) that only depend on θ̃ through their domain of definition, namely the exterior of a rhombus whose tilting depends on θ̃ (see Figure 1). Problems (27) and (28) are solved via conformal mapping in Appendix A.

Collision integral

In this subsection, we go back to the integrated equation (13) and use the inner solution P̃ to evaluate the collision integral I in (14). Transforming (14) to inner variables, we obtain

I = −ǫD_T ∫_{∂B_1(0)} ∇_{x̃_1}P̃ · R_{θ̃_1}ñ dS_ξ̃ − ǫ²D_R ∫_{∂B_1(0)} ( ∂_{θ̃_1}P̃ + ỹ∂_x̃ P̃ − x̃∂_ỹ P̃ ) ñ_θ dS_ξ̃,   (29)

using (16) and (18). We evaluate (29) by breaking I in powers of ǫ, I = I^(0) + ǫI^(1) + · · · . Clearly I^(0) = 0. The first-order integral is

I^(1) = −D_T ∫_{∂B_1(0)} R^T_{θ̃_1}∇_{x̃_1}P̃^(0) · ñ dS_ξ̃ = 0,

using that P̃^(0) is independent of x̃, see (22), and that we are integrating the normal of a closed curve (for θ̃ fixed).
At the next order, we have

I^(2) = −D_T ∫_{∂B_1(0)} R^T_{θ̃_1}∇_{x̃_1}P̃^(1) · ñ dS_ξ̃ − D_R ∫_{∂B_1(0)} ∂_{θ̃_1}P̃^(0) ñ_θ dS_ξ̃ =: −D_T I_x̃ − D_R I_θ̃,   (30)

using again that P̃^(0) is independent of x̃, making the terms ỹ∂_x̃ P̃^(0) − x̃∂_ỹ P̃^(0) vanish in the second integral. The latter can be further simplified to

I_θ̃ = −∫_{B_1(0)} ∂_θ̃∂_{θ̃_1}(pp⁺) dξ̃ = −∂_{θ̃_1} ∫_0^π ( ∫_{R_θ̃} dx̃ ) ∂_θ̃(pp⁺) dθ̃ = −∂_{θ̃_1} ∫_0^π sin θ̃ ∂_θ̃(pp⁺) dθ̃.   (31)

In the first equality, we have applied the divergence theorem to (0, 0, ∂_{θ̃_1}P̃^(0)). In the last equality, we have used that R_θ̃ is the rhombus tilted by angle θ̃ in inner variables, which has area sin θ̃ (see Remark 3.1). The integral I_x̃ in (30) can be rewritten as

I_x̃ = ∫_0^π ∫_{∂R_θ̃} R^T_{θ̃_1}∇_{x̃_1}P̃^(1) · ñ dS_x̃ dθ̃ = ∫_0^π J(x̃_1, θ̃_1, θ̃) dθ̃,  with  J = ∫_{∂R_θ̃} R^T_{θ̃_1}∇_{x̃_1}P̃^(1) · ñ dS_x̃.   (32)

Using the expression for P̃^(1) in (26), we find that (see Appendix B)

J = −∇_{x̃_1} · [ sin θ̃ A + M(θ̃_1, θ̃)B ],   (33)

where M(θ̃_1, θ̃) = R_{θ̃_1} T(θ̃) R^T_{θ̃_1}, with T(θ̃) the symmetric 2 × 2 matrix (67) whose entries are plotted in Fig. 2. The matrix T(θ̃) is positive definite and contains information on the effect of the excluded volume due to a horizontal needle on a second needle with orientation θ̃. We observe that: for θ̃ = π/2, the diagonal terms are equal while the cross-terms are zero, as expected, since the excluded region is symmetric (a square); for θ̃ = 0, π, the needle is "invisible" to the horizontal flow (T_11 = 0) and the effect on the vertical flow is maximal (T_22 largest).
Finally, combining (31), (32) and (33), we find that the leading-order contribution to the collision integral is

I = −ǫ²[ D_T ∫_0^π J dθ̃ + D_R I_θ̃ ] = ǫ² ∇_{ξ̃_1} · ∫_0^π D ( sin θ̃ A + M(θ̃_1, θ̃)B , sin θ̃ ∂_θ̃(pp⁺) ) dθ̃.   (34)

A nonlinear nonlocal diffusion equation

Inserting the collision integral (34) into (13), we find that the integrated Fokker-Planck equation for N = 2 is

∂_t p = ∇_{ξ_1} · [ D∇_{ξ_1} p − f(ξ_1)p + ǫ² ∫_0^π D ( sin θ A + M(θ_1, θ)B , sin θ ∂_θ(pp⁺) ) dθ ].   (35)

The extension from two to N needles is straightforward up to O(ǫ²), since only pairwise interactions need to be considered at this order. Noting that the first needle has (N − 1) inner regions, one with each of the remaining needles, the marginal density for N needles satisfies

∂_t p = ∇_{ξ_1} · [ D∇_{ξ_1} p − f(ξ_1)p + ǫ²(N − 1)D ∫_0^π Q(θ, p) dθ ],   (36a)

where D = diag(D_T, D_T, D_R), f(ξ_1) = (f_T(ξ_1), f_R(ξ_1)), and Q = (Q_T, Q_R) is given by

Q_T(θ, p, p⁺) = sin θ A + M(θ_1, θ)B,  Q_R(θ, p, p⁺) = sin θ p∂_θ p⁺.   (36b)

In (36), p = p(x_1, θ_1, t), p⁺ = p(x_1, θ_1 + θ, t), and M(θ_1, θ) = R_{θ_1} T(θ) R^T_{θ_1}, where R_{θ_1} is the rotation matrix by θ_1 (see (7)) and T(θ) is the matrix defined in (67) (see also Fig. 2), and

A = (1/2)[ ∇_{x_1}(pp⁺) + (pp⁺/D_T)(f_T⁺ − f_T) ],  B = (1/2)[ p∇_{x_1}p⁺ − p⁺∇_{x_1}p + (pp⁺/D_T)(f_T − f_T⁺) ].   (36c)

The nonlinearities in (36) encompass the effect that the non-overlap constraint between needles has on the macroscopic dynamics. In particular, we note that the interactions are local in space but nonlocal in angle. The integrands Q_T and Q_R vanish for θ = 0 (as two parallel needles exclude no volume in phase space), while for θ ∈ (0, π) they include a series of quadratic terms involving p, p⁺ and their derivatives.
The interaction in orientation is of mean-field type (see Q_R), where only the "cross-diffusion" term p∂_{θ_1}p⁺ appears, whereas in space we obtain full cross-diffusion terms p∇_{x_1}p⁺ and p⁺∇_{x_1}p as well as a drift-difference term (see Q_T), as in the case of mixtures of hard spheres [4]. To give some intuition on their role, consider the kernel Q_T for θ = π/2 (perpendicular needles). This is the only value for which T is a multiple of the identity (see (67)), T(π/2) = µI_2 with µ ≈ 2.18. Thus M(θ_1, θ)B ≡ µB and the integrand simplifies to

Q_T(π/2, p, p⁺) = A + µB = (1/2)[ (µ + 1)p∇_{x_1}p⁺ − (µ − 1)( p⁺∇_{x_1}p − ((f_T − f_T⁺)/D_T) pp⁺ ) ].

In this form, one may readily compare it with the nonlinear terms arising from the interactions between two types of hard-sphere particles of diameter ǫ (cf. Eq. (22) in [4]),

Q_T(p, p⁺) = (π/2)[ 3p∇_{x_1}p⁺ − p⁺∇_{x_1}p + ((f_T − f_T⁺)/D_T) pp⁺ ].

Thus we observe the same structure, with an "effective drift" p∇_{x_1}p⁺ due to gradients of the other species, a reduced diffusion p⁺∇_{x_1}p due to concentrations of the other species, and a quadratic drift adjustment with the same relative strength and sign in both the needles and hard-spheres cases. The size of the coefficients is larger for hard spheres (3π/2 and π/2) than for needles ((µ ± 1)/2), as expected given their excluded volume in this specific needles configuration (π vs 1).

Remark 3.2 (Active Brownian needles). We note that our model (36) may be used to describe a system of N active needles similar to that considered in [32] (except that they use the θ-dependent diffusion tensor D̂ in (40)). In particular, consider f_R = 0 and f_T(x, θ) = v_0 e(θ) with e(θ) = (cos θ, sin θ) in (2), such that needles drift along their orientation θ at constant velocity v_0. This implies that we must now distinguish between a needle's head and tail, as its orientation θ determines the direction of the drift in position; that is, we must extend the range of θ to [0, 2π).
Since the excluded volume between two needles is invariant under switching heads and tails, the terms in (36) that describe the excluded volume, namely sin θ and M(θ_1, θ) in (36b), must be extended to [0, 2π) as |sin θ| and M̃(θ_1, θ) respectively, where M̃(θ_1, θ) = M(θ_1, θ) for θ ∈ [0, π) and M̃(θ_1, θ) = M(θ_1, θ − π) for θ ∈ [π, 2π). Then (36) becomes

∂_t p = ∇_{x_1} · [ D_T∇_{x_1}p − v_0 e(θ_1)p + φ ∫_0^{2π} Q̃_T(θ, p) dθ ] + D_R ∂_{θ_1}[ ∂_{θ_1}p + φp ∫_0^{2π} Q̃_R(θ, p) dθ ],   (37)

where φ = (N − 1)ǫ², with

Q̃_T(θ, p) = (D_T/2)[ (M̃ + |sin θ|)p∇p⁺ − (M̃ − |sin θ|)p⁺∇p ] + (v_0/2)(M̃ − |sin θ|)pp⁺e = D_T[ µ⁺p∇p⁺ − µ⁻p⁺∇p ] + v_0 µ⁻pp⁺e,
Q̃_R(θ, p) = |sin θ| ∂_θ p⁺,

where e = e(θ_1) − e(θ_1 + θ) and µ^±(θ_1, θ) = (1/2)( M̃(θ_1, θ) ± |sin θ| ). Rearranging, (37) may be cast in a more familiar form in the active matter community (compare with Eq. (2.29) in [3], corresponding to active Brownian hard disks),

∂_t p + v_0 ∇ · [ p(1 − φρ⁻)e(θ_1) + φm⁻p ] = D_T ∇ · [ (1 − φρ⁻)∇p + φp∇ρ⁺ ] + D_R ∂_{θ_1}( ∂_{θ_1}p + φpρ̃ ),   (38)

with "effective" spatial densities ρ^±, ρ̃ and magnetisation (also known as polarisation)

ρ^± = ∫_0^{2π} µ^± p⁺ dθ,  ρ̃ = ∫_0^{2π} ∂_θ|sin θ| p⁺ dθ,  m⁻ = ∫_0^{2π} µ⁻ p⁺ e⁺ dθ.

If the excluded volume between two needles was a constant, then ρ^± ≡ ρ = ∫_0^{2π} p dθ (the spatial density), m⁻ ≡ m = ∫_0^{2π} pe dθ, and the nonlinear flux in orientation (pρ̃) would drop. This is because this term represents changes in orientation brought by the change in excluded volume with relative orientation.

High rotational diffusion limit

In the context of colloidal suspensions, the diffusion coefficients corresponding to the rotational and translational motions (parallel or perpendicular to the needle's axis) are not independent. In particular, using Stokes' law, we have that [14,31]

D_R = 12D_⊥/ǫ²,  D_∥ = 2D_⊥,   (39)

where D_⊥ and D_∥ are the translational diffusion coefficients for perpendicular and parallel motion.
This means that, instead of the constant diffusion matrix D = diag(D_T, D_T, D_R) used in our derivation, we would have

D̂(θ₁) = [ R_{θ₁}  0 ;  0  1 ] D = [ D_∥ cos θ₁   −D_⊥ sin θ₁   0 ;  D_∥ sin θ₁   D_⊥ cos θ₁   0 ;  0   0   D_R ].   (40)

Our derivation can be adapted to allow for a diffusion tensor of this form, resulting in a modified equation for p (in particular, the Q_T in (36b) would change). We omit this generalisation here but comment on the asymptotic regime of (39), namely when the rotational diffusion is much larger than the translational diffusion,

D_R = D_T / ǫ²,   (41)

and set D_T ≡ 1 in this section. Inserting (41) into (36a), we have

ǫ² ∂_t p = ǫ² ∇_{x₁} · [∇_{x₁}p − f_T(ξ₁)p] + ∂_{θ₁} [∂_{θ₁}p − ǫ² f_R(ξ₁)p] + ǫ⁴(N − 1) ∇_{x₁} · ∫₀^π Q_T(θ, p, p⁺) dθ + ǫ²(N − 1) ∂_{θ₁} ∫₀^π Q_R(θ, p, p⁺) dθ.   (42)

We look for a solution of (42) of the form p ∼ p₀ + ǫ² p₁ + ···. The leading-order problem gives that p₀ = p₀(x₁, t); that is, the leading-order solution is independent of angle. Collecting the O(ǫ²) terms in (42) yields

∂_t p₀ = ∇_{x₁} · [∇_{x₁}p₀ − f_T(ξ₁)p₀] + ∂_{θ₁} [∂_{θ₁}p₁ − f_R(ξ₁)p₀],   (43)

where we have used that Q_R(θ, p₀, p₀⁺) ≡ 0. The O(ǫ⁴) terms of (42) give

∂_t p₁ = ∇_{x₁} · [∇_{x₁}p₁ − f_T(ξ₁)p₁] + ∂_{θ₁} [∂_{θ₁}p₂ − f_R(ξ₁)p₁] + (N − 1) ∇_{x₁} · ∫₀^π Q_T(θ, p₀, p₀⁺) dθ + (N − 1) ∂_{θ₁} ∫₀^π Q_R(θ, p₀, p₁⁺) dθ,   (44)

noting that Q_R(θ, p₁, p₀⁺) ≡ 0. We now write an equation for the spatial density ρ(x₁, t) := ∫₀^π (p₀ + ǫ² p₁) dθ₁. Combining (43) and (44), and using periodicity in θ₁, we find

∂_t ρ = ∇_{x₁} · [ ∇_{x₁}ρ − ∫₀^π f_T(ξ₁) p(ξ₁, t) dθ₁ + ǫ²(N − 1) ∫₀^π ∫₀^π Q_T(θ; p₀, p₀⁺) dθ dθ₁ ].   (45)

In particular, if we assume that f_T is independent of angle, then

∂_t ρ = ∇_{x₁} · [ ∇_{x₁}ρ − f_T(x₁)ρ + ǫ²(N − 1) ∫₀^π ∫₀^π Q_T(θ; p₀, p₀⁺) dθ dθ₁ ].   (46)

Using that p₀ = p₀⁺ and f_T = f_T⁺, from (36c) we have that A = p∇_{x₁}p and B = 0, and hence Q_T = sin θ ∇_{x₁}(p₀²)/2.
The double integral of Q_T is then π∇_{x₁}(p₀²) ∼ (1/π)∇_{x₁}(ρ²), using that ρ = πp₀ + O(ǫ²). We find that (46) reduces to

∂_t ρ = ∇_{x₁} · [ ( 1 + (2/π)(N − 1)ǫ²ρ ) ∇_{x₁}ρ − f_T(x₁)ρ ].   (47)

Therefore, the equation satisfied by N needles of length ǫ in the limit of large rotational diffusion is a nonlinear diffusion equation of the same form as the equation (1) satisfied by N disks of diameter ǫ. Comparing the two equations, we find that the effective diameter of a needle with very fast rotational diffusion is √2/π (≈ 0.45) times its length ǫ. That is, the needle excludes roughly 45% less volume than a disk of diameter ǫ.

Space homogeneous solutions

In this section, we consider spatially homogeneous solutions to (36), that is, solutions of the form p(ξ₁, t) = p(θ₁, t) satisfying

D_R^{−1} ∂_t p = ∂²_{θ₁}p + ǫ²(N − 1) ∂_{θ₁} [ p ∫₀^π sin θ ∂_θ p⁺ dθ ].   (48)

The integral is

∫₀^π sin θ ∂_θ p⁺ dθ = −∫₀^π cos θ p(θ₁ + θ) dθ = ∫₀^π cos θ p(θ₁ − θ) dθ = W′ * p,

where W(θ) = sin(θ). Therefore, the space-homogeneous system of interacting needles of length ǫ is described by a periodic McKean-Vlasov equation with an attractive potential W (see, e.g., [6, 30]),

D_R^{−1} ∂_t p = ∂²_{θ₁}p + ǫ²(N − 1) ∂_{θ₁} (p W′ * p).   (49)

We study the linear stability of the homogeneous solution p* = 1/π of (49) by considering a perturbation of the form

p = p* + δ e^{λt} Σ_{n≥0} [ aₙ cos(2nθ₁) + bₙ sin(2nθ₁) ],

with δ ≪ 1. Inserting this into (49), linearising, and keeping terms of O(δ), we arrive at

λ = −4n² D_R [ 1 − 2φn / ((4n² − 1)π) ],

where φ = ǫ²(N − 1). We look for growing modes by imposing λ > 0, leading to 2φn > (4n² − 1)π. The most unstable mode (n = 1) leads to linear instability if

φ > φ_c = 3π/2.   (50)
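The dispersion relation above is simple enough to check numerically. The sketch below (plain Python; the mode index n and effective volume fraction φ are as in the text) evaluates the growth rate λ and confirms that the most unstable mode is n = 1, with threshold φ_c = 3π/2.

```python
import math

def growth_rate(n, phi, D_R=1.0):
    """Growth rate lambda of the cos(2n theta_1) perturbation of p* = 1/pi:
    lambda = -4 n^2 D_R (1 - 2 phi n / ((4 n^2 - 1) pi))."""
    return -4 * n**2 * D_R * (1 - 2 * phi * n / ((4 * n**2 - 1) * math.pi))

def threshold(n):
    """Value of phi at which mode n becomes unstable (lambda = 0)."""
    return (4 * n**2 - 1) * math.pi / (2 * n)

# The smallest threshold over all modes is attained at n = 1 and equals 3*pi/2.
phi_c = min(threshold(n) for n in range(1, 50))
assert abs(phi_c - 3 * math.pi / 2) < 1e-12
assert growth_rate(1, 1.01 * phi_c) > 0 > growth_rate(1, 0.99 * phi_c)
```

Since threshold(n) = (2n − 1/(2n))π is increasing in n, higher modes destabilise only at larger packing fractions, so the n = 1 (nematic) mode sets φ_c.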
Note that, while φ represents an effective volume fraction (which would be bounded for isotropic bodies by their close-packing densities, e.g., φ < 0.91 for closely packed hard disks in two dimensions), the hard-core needle system admits any φ ∈ [0, ∞), with φ → ∞ corresponding to a system of perfectly aligned needles. It is also worth pointing out that, while our derivation relied on a diluteness assumption φ ≪ 1, the critical volume fraction is φ_c = O(1). Therefore, the aggregation behaviour occurs outside the region of validity of our PDE model (36) and, as a by-product, of the space-homogeneous model (49). In fact, the value φ_c agrees with the bifurcation point of the isotropic-nematic transition obtained in [29] using Onsager's theory of orientational order [35]. In particular, Onsager considers the virial expansion of the orientational probability density up to the second virial coefficient, which depends only on two-particle interactions and which Onsager obtains for a variety of hard anisotropic particles by evaluating the excluded volume for a pair of such particles. While the third- and higher-order virial coefficients are negligible for hard needles in R³, this is not the case in two dimensions [29]. Therefore, the value we obtain for φ_c should be taken with caution; indeed, Monte Carlo simulations place the critical density at the transition at φ_c ≈ 7 [21]. The stationary solutions of (49) satisfy

∂_{θ₁}p_s + φ p_s W′ * p_s = −J,

where J is a constant corresponding to the flux of the stationary solution. Without any external forcing, we expect solutions with J = 0. Imposing J = 0 and integrating, we arrive at

p_s(θ₁) = C exp( −φ ∫₀^{θ₁} (W′ * p_s)(θ) dθ ),

where C is a normalisation constant such that ∫₀^π p_s dθ₁ = 1. We consider a fixed-point iteration method to compute p_s(θ₁) above for various values of φ.
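A rough uniform-grid stand-in for this computation can be sketched as follows (the paper uses Chebfun, which we do not reproduce). The convolution W′ * p is the circular convolution with cos θ from (48), evaluated with the trapezoid rule, and the iteration uses the J = 0 stationary condition ∂_{θ₁}p_s + φ p_s W′ * p_s = 0, i.e. p ↦ C exp(−φ ∫₀^{θ₁} W′ * p); the grid size and iteration count are arbitrary choices.

```python
import numpy as np

def fixed_point_ps(phi, n=256, iters=300, delta=0.01):
    """Fixed-point iteration for stationary states of (49) on a uniform grid of [0, pi)."""
    h = np.pi / n
    theta = h * np.arange(n)
    w = np.cos(theta)
    w[0] = 0.0  # trapezoid rule: the endpoint terms cos(0) p_i and cos(pi) p_i cancel
    p = 1 / np.pi + delta * np.cos(2 * theta)     # most unstable mode as initialisation
    p /= h * p.sum()
    for _ in range(iters):
        # (W'*p)(theta_i) = sum_j cos(theta_j) p[(i-j) mod n] h, via circular convolution
        conv = h * np.fft.ifft(np.fft.fft(w) * np.fft.fft(p)).real
        # cumulative trapezoid of conv from 0 to theta_i
        Phi = np.concatenate(([0.0], np.cumsum(h * (conv[:-1] + conv[1:]) / 2)))
        p = np.exp(-phi * Phi)
        p /= h * p.sum()                           # normalise int_0^pi p = 1
    return theta, p

theta, p = fixed_point_ps(phi=1.0)                 # phi well below phi_c = 3*pi/2
assert abs((np.pi / p.size) * p.sum() - 1) < 1e-8  # normalisation preserved
assert np.max(np.abs(p - 1 / np.pi)) < 1e-6        # uniform state recovered below threshold
```

Below the instability threshold the scheme simply returns the uniform state p* = 1/π, as asserted above; for φ above 3π/2 the same scheme produces concentrated profiles of the kind shown in Figure 3.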
Specifically, given an initialisation p⁰ (normalised to one), we compute

p^{k+1} = C exp( −φ ∫₀^{θ₁} (W′ * p^k)(θ) dθ ),   (51)

for k = 0, 1, 2, …. We initialise the scheme with the most unstable mode from the linear stability analysis, p⁰(θ₁) = 1/π + δ cos(2θ₁), and solve (51) with Chebfun [17] until it reaches a stationary profile. We consider several values of φ ≥ 3π/2, for which we expect nontrivial stationary states. Figure 3 shows the results for ten values of φ. We observe that the stationary solution becomes more concentrated as φ increases. This means that needles are forced to align more to avoid overlapping as their number increases.

Discussion

We have systematically derived an effective PDE model for a system of non-overlapping Brownian needles in two dimensions (36). The nonlinearities of the PDE describe the effect of pairwise interactions at the macroscopic level: interactions are nonlocal in angle (the nonlinearity is of mean-field type, involving only the p∂_{θ₁}p⁺ term) and local in position (full cross-diffusion terms p∇_{x₁}p⁺ and p⁺∇_{x₁}p as well as a drift-difference term appear, consistent with other local-in-space models [4, 34]). To gain insight into the behaviour of the PDE model, we consider two simplifications. First, we obtain a reduced PDE for the spatial density in the high-rotational-diffusion limit. By comparing the resulting PDE with the effective PDE for hard-core disks in two dimensions, we find that the needles' effective diameter is about 45 per cent of their length. Second, we consider space-homogeneous solutions of the nonlocal PDE and show they satisfy a well-known McKean-Vlasov equation with an attractive potential in orientation.

Figure 4. Mapping of the interior of the unit circle, with boundary Γ′, onto the exterior D of the rhombus, with boundary Γ.
Notably, we identify an instability of the uniform (in angle) distribution for effective packing densities above a critical threshold; see (50). Intuitively, we expect this phase transition to arise from the finite-size interactions between needles. Indeed, the instability corresponds to the emergence of a preferred direction of the needles, which excludes less volume in configuration space in crowded settings. Let us point out that the nonlocal interaction term in Eq. (49) includes the size of the excluded volume. In this work, we find that the strength of the nonlinearity in the macroscopic PDE is proportional to the total excluded volume (N − 1)ǫ² sin θ. The form of this nonlinearity is nontrivial in the full PDE (36) (due to the spatial interactions). Still, it may have been inferred in the space-homogeneous case (48) (in fact, this was the approach taken in [29] using Onsager's free-energy functional based on the geometry of the excluded region). A natural question is whether this can be generalised to similar systems. A particularly interesting case is that of Brownian needles in three dimensions, which have zero excluded volume in configuration space (for fixed relative angles, the excluded region is a two-dimensional surface in R³). If the result from two dimensions were to extend to three dimensions, it would imply that the effective PDE for needles in three dimensions would not "see" the non-overlapping constraints, at least not to O(Nǫ³).

To proceed with the solution of (52), we seek a transformation that simplifies the domain of definition. In particular, we look for an analytic function z = g(ζ) that maps a domain D′ of the ζ plane, namely the interior of the unit disk, to D in the z plane (see Fig. 4). Then the unit circle, denoted by Γ′, is mapped into the boundary of the rhombus, Γ. This is a Schwarz-Christoffel transformation, given by [18, eq.
(4.6)],

z = g(ζ) = a₀ + a(θ̄) ∫^ζ (1 − t²)^{θ̄/π} (1 + t²)^{1−θ̄/π} t^{−2} dt,   (53)

where a₀ and a(θ̄) are chosen so that g(ζ_k) = z_k for k = A, B, C, D, where ζ_k = ±1, ±ı (see (4)). Note that, as we move through the points A → B → C → D → A, we travel the circle counterclockwise but the rhombus clockwise (so that both curves are positively oriented, i.e., we have the domain to our left as we travel along its boundary). We note that g(ζ) goes to infinity like −a(θ̄)/ζ as ζ → 0. The constant a(θ̄) is given exactly as

a(θ̄) = α / (β − ıγ),   (54)

where α, β and γ are the following real functions of θ̄:

α(θ̄) = 2^{1+2θ̄/π} sec θ̄,
β(θ̄) = Γ(1/2 − θ̄/π) Γ(1 + 2θ̄/π) [ 2F̃1(1/2, θ̄/π; 3/2 + θ̄/π; −1) − 2 · 2F̃1(−1/2, θ̄/π; 1/2 + θ̄/π; −1) ],
γ(θ̄) = 16^{θ̄/π} Γ(1/2 + θ̄/π) Γ(1 − 2θ̄/π) [ 2F̃1(1/2, −θ̄/π; 3/2 − θ̄/π; −1) + 2 · 2F̃1(−1/2, −θ̄/π; 1/2 − θ̄/π; −1) ],

where 2F̃1(a, b; c; z) = 2F1(a, b; c; z)/Γ(c) is the regularised hypergeometric function. The map g corresponding to θ̄ = π/4 is illustrated in Figure 5(a), and the complex constant a(θ̄) = a₁(θ̄) + ıa₂(θ̄), where a₁ = αβ/(β² + γ²) and a₂ = αγ/(β² + γ²), is shown in Figure 5(b). Note that although α, β, γ are singular at θ̄ = π/2, a₁ and a₂ are not.

We now write the problem in the ζ plane. If w₁ satisfies (52) in D, then W₁(ζ) := w₁(g(ζ)) satisfies the following problem in D′:

∆_ζ W₁ = 0 for |ζ| < 1,   Im(W₁) = 0 on |ζ| = 1,   W₁ ∼ −a(θ̄)ζ^{−1} at 0,   (55)

where ∆_ζ denotes the Laplacian operator in the ζ-plane. The solution of (55) is

W₁(ζ) = −( a̅(θ̄)ζ + a(θ̄)/ζ ).   (56)

Repeating the same procedure to solve (28), we find that u₂ = Re(w₂), where w₂ satisfies (52) but with the condition at infinity replaced by w₂ ∼ −ız. The solution in the ζ plane, W₂(ζ) := w₂(g(ζ)), then needs to go like ıa(θ̄)/ζ at the origin and is therefore given by

W₂(ζ) = −ı( a̅(θ̄)ζ − a(θ̄)/ζ ).   (57)

Figure 5. (a) Schwarz-Christoffel map g in (53) from the interior of the unit circle to the exterior of the rhombus, for θ̄ = π/4.
The black curves are the images of ten evenly spaced circles centred at the origin and ten evenly spaced radii in the unit disk. Plot generated using the Schwarz-Christoffel MATLAB Toolbox [16]. (b) Real and imaginary parts of the multiplicative constant a(θ̄) in (54).

B Collision integral

In this appendix, we evaluate the integral J in (32),

J = ∮_{∂R_θ̄} R^T_{θ₁} ∇_{x̃₁}P^{(1)} · ñ dS_x̃,   (58)

where R_θ̄ is the excluded rhombus in the inner region, with |R_θ̄| = sin θ̄ (see Remark 3.1). Using the first-order inner solution P^{(1)} (26), we write J = J_A + J_B + J_C with

J_A = ∮_{∂R_θ̄} R^T_{θ₁} ∇_{x̃₁}(R^T_{θ₁}A · x̃) · ñ dS_x̃ = ∇_{x₁} · [ R_{θ₁} ( ∮_{∂R_θ̄} x̃ ⊗ ñ dS_x̃ ) R^T_{θ₁} A ],
J_B = ∮_{∂R_θ̄} R^T_{θ₁} ∇_{x̃₁}(R^T_{θ₁}B · u) · ñ dS_x̃ = ∇_{x₁} · [ R_{θ₁} ( ∮_{∂R_θ̄} u ⊗ ñ dS_x̃ ) R^T_{θ₁} B ],
J_C = R^T_{θ₁} ∇_{x̃₁}C_∞ · ∮_{∂R_θ̄} ñ dS_x̃.   (59)

We have J_C = 0, since we integrate the normal along the closed curve ∂R_θ̄. To evaluate J_A and J_B, we are left to compute the matrices inside the round brackets, which we denote by −Q and −T respectively, applying the divergence theorem (on x̃c with c constant). For the first row of Q this gives

Q₁· = −∮_{∂R_θ̄} x̃ ñ dS_x̃ ∼ (1, 0) sin θ̄.

The ∼ equivalence is due to the fact that ñ is the projection of the unit normal n̂ onto the x̃ plane, and so it is not normalised (see (18) and the discussion thereafter). However, since the component of n̂ in the θ̄ direction is O(ǫ), and we only require the leading order of J, we can treat ñ as if it were the unit normal on ∂R_θ̄. For example, on the top edge of the rhombus, we have ñ ∼ (0, −1) (Fig. 1). Note also the change of sign in the first equivalence, since ñ is the inward unit normal to R_θ̄. Similarly, we find that the second row of Q is

Q₂· = −∮_{∂R_θ̄} ỹ ñ dS_x̃ ∼ (0, 1) sin θ̄.   (61)

Therefore, we find that Q(θ̄) = sin θ̄ I₂. The matrix T has rows

T_i· = −∮_{∂R_θ̄} u_i(x̃) ñ dS_x̃,   (62)

where u_i for i = 1, 2 solve (27) and (28), respectively. Rather than transforming the solutions W₁ and W₂ obtained in Appendix A back to the x̃ plane, we express the integrals as complex integrals in the ζ plane (see Figure 4).

Figure 6. Contour to compute the integral (63).
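The identity Q(θ̄) = sin θ̄ I₂ is just the divergence theorem, ∮_{∂R} x̃ ⊗ n̂_out dS = |R| I₂, applied to the excluded rhombus (the sign flip accounts for ñ being the inward normal). A quick numerical check, taking the rhombus as the parallelogram spanned by two unit vectors at relative angle θ̄ (a convenient stand-in for the configuration in Fig. 1):

```python
import numpy as np

def boundary_moment(theta_bar):
    """Contour integral of x (outer product) n_out dS over the rhombus spanned by
    e1 = (1, 0) and e2 = (cos theta_bar, sin theta_bar); exact, since the integrand
    is linear on each edge."""
    e1 = np.array([1.0, 0.0])
    e2 = np.array([np.cos(theta_bar), np.sin(theta_bar)])
    v = [(e1 + e2) / 2, (e2 - e1) / 2, -(e1 + e2) / 2, (e1 - e2) / 2]  # CCW vertices
    M = np.zeros((2, 2))
    for k in range(4):
        a, b = v[k], v[(k + 1) % 4]
        t = b - a
        n_out = np.array([t[1], -t[0]])       # outward normal times edge length (CCW)
        M += np.outer((a + b) / 2, n_out)     # midpoint rule is exact for linear integrands
    return M

theta_bar = 0.7
M = boundary_moment(theta_bar)
assert np.allclose(M, np.sin(theta_bar) * np.eye(2))  # |R| I_2 with |R| = sin(theta_bar)
```

The same computation with any closed polygon returns its area times the identity, which is why Q depends on the rhombus only through |R_θ̄| = sin θ̄.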
To transform (62) into a complex integral, first recall that z = x̃ + ıỹ. Given a parameterisation (x̃(s), ỹ(s)) of ∂R_θ̄ ≡ Γ, the vector line element along the curve is

t̂ dS_x̃ = (x̃′(s), ỹ′(s)) ds ≅ (x̃′(s) + ıỹ′(s)) ds = z′(s) ds = dz,

identifying two-dimensional vectors with complex numbers. Since the curves Γ and Γ′ are positively oriented (see Figure 4), the corresponding outward normals (to the interior of the rhombus and the exterior of the circle, respectively) are given by a −π/2 rotation, that is, multiplication by −ı, of the tangent vector. Therefore, T_i· as a complex integral is

T_i· = ı ∮_Γ u_i(z) dz = ı ∮_Γ w_i(z) dz = ı ∮_{Γ′} W_i(ζ) g′(ζ) dζ.   (63)

In the second equality we have used that Im(w_i) = 0 on Γ (see (52)), and in the third that W_i(ζ) = w_i(g(ζ)). The integrand in (63) has a singularity at the origin and branch points at ±1 and ±ı. We choose branch cuts going to infinity, so that the contour of integration follows Γ′ with four small semicircular indentations at the branch points, as shown in Figure 6. In this way, T_i· can be computed using Cauchy's residue theorem, with 2πı times the residue at the origin and −πı times the residues at the four branch points.² In fact, the four branch points do not contribute to the integral for θ̄ ∈ (0, π), as their residues are zero (no singularities there). Because of the form of W₁(ζ) and W₂(ζ) (see (56) and (57)), it is sufficient to compute the following residues:

Res_{ζ=0}[ζ g′(ζ)] = a(θ̄),   Res_{ζ=0}[ζ^{−1} g′(ζ)] = (1 − 2θ̄/π) a(θ̄).   (64)

Substituting the expressions (56) and (57) for W_i into (63) and using (64), we find

T₁· = −ı ∮_{Γ′} ( a̅(θ̄)ζ + a(θ̄)/ζ ) g′(ζ) dζ = 2π [ a̅ Res_{ζ=0}[ζ g′(ζ)] + a Res_{ζ=0}[ζ^{−1} g′(ζ)] ] = 2π a̅a − (4θ̄ − 2π) a²,   (65)

and

T₂· = ∮_{Γ′} ( a̅(θ̄)ζ − a(θ̄)/ζ ) g′(ζ) dζ = 2πı [ a̅ Res_{ζ=0}[ζ g′(ζ)] − a Res_{ζ=0}[ζ^{−1} g′(ζ)] ] = 2πı a̅a + ı(4θ̄ − 2π) a².   (66)

Writing (65) and (66) as two-dimensional vectors, we obtain the symmetric matrix

T(θ̄) := −∮_{∂R_θ̄} u ⊗ ñ dS_x̃ = 4 [ a₁²(π − θ̄) + a₂²θ̄ ,  a₁a₂(π − 2θ̄) ;  a₁a₂(π − 2θ̄) ,  a₂²(π − θ̄) + a₁²θ̄ ],   (67)

where a₁ and a₂ are shown in Figure 5(b).
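The residues in (64) can be confirmed symbolically: with g′(ζ) = a (1 − ζ²)^{θ̄/π}(1 + ζ²)^{1−θ̄/π} ζ^{−2} from (53), both residues reduce to Taylor coefficients at ζ = 0 of the regular factor (below, p stands for θ̄/π):

```python
import sympy as sp

zeta, p = sp.symbols('zeta p')                  # p = theta_bar / pi
f = (1 - zeta**2)**p * (1 + zeta**2)**(1 - p)   # regular factor: g'(zeta) = a * f / zeta**2

# Res_{zeta=0}[zeta g'] = a * f(0);  Res_{zeta=0}[g'/zeta] = a * f''(0)/2
c0 = f.subs(zeta, 0)
c2 = sp.simplify(sp.diff(f, zeta, 2).subs(zeta, 0) / 2)

assert sp.simplify(c0 - 1) == 0           # Res[zeta g'] = a
assert sp.simplify(c2 - (1 - 2*p)) == 0   # Res[g'/zeta] = (1 - 2*theta_bar/pi) a
```

The second coefficient is 1 − 2p because the quadratic terms of the two binomial factors contribute −p and 1 − p, respectively.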
Figure 1. Left panel: excluded volume of a horizontal hard needle (blue).

Figure 2. Values T₁₁, T₁₂, and T₂₂ in (67) as a function of θ̄.

Figure 3. (a) Stationary solutions p_s(θ₁) of the space-homogeneous problem (49) for different values of φ = 3π/2 + k/2, k = 0, …, 10. Solutions are obtained via a fixed-point iterative scheme using Chebfun [17]. (b) Time evolution of p(θ₁, t) for φ = 1.1 × 3π/2 from a small initial perturbation p⁰ = π^{−1} − 0.01 cos(2θ₁). Times shown are t = 0, 4, 6, 8, 10, 12, 20. At t = 20, the solution has already reached the stable equilibrium.

¹ Independence only tells us that P_out(ξ₁, ξ₂, t) ∼ q(ξ₁, t)q(ξ₂, t) for some function q, but the normalisation condition on P₂ implies p = q + O(ǫ).

² Note that the contribution of the four points on the unit circle is −πı, since each indentation is only half a circle and we traverse the small semicircles clockwise.

A Solution of the first-order inner problem via conformal mapping

We solve problems (27) and (28) by mapping them to problems in the interior of a circle. We consider the problem (27) for u₁; the problem for u₂ follows similarly. Let D denote the exterior of the rhombus in the z-plane, D = C \ R_θ̄, where z = x̃ + ıỹ, and let Γ = ∂R_θ̄ be its boundary. Let ∆_z denote the Laplacian operator ∂²/∂x̃² + ∂²/∂ỹ². We look for a complex function w₁ : D → C such that the solution we need is given by u₁ = Re(w₁). By the Cauchy-Riemann relations, the boundary condition ∇_x̃u₁ · ñ = 0 on Γ is equivalent to imposing that the conjugate harmonic function Im(w₁) is constant on Γ, for example equal to zero. Then w₁ must satisfy

∆_z w₁ = 0 in D,   Im(w₁) = 0 on Γ,   w₁ ∼ z at ∞.   (52)

References

[1] B. Bahadur, Liquid crystals: applications and uses, vol. 1, World Scientific, 1990.
[2] B. J. Berne and P. Pechukas, Gaussian model potentials for molecular interactions, J. Chem. Phys., 56 (1972), pp. 4213-4216.
[3] M. Bruna, M. Burger, A. Esposito, and S. Schulz, Phase separation in systems of interacting active Brownian particles, SIAM J. Appl. Math., 82 (2022), pp. 1635-1660.
[4] M. Bruna and S. J. Chapman, Diffusion of multiple species with excluded-volume effects, J. Chem. Phys., 137 (2012), p. 204116.
[5] M. Bruna and S. J. Chapman, Excluded-volume effects in the diffusion of hard spheres, Phys. Rev. E, 85 (2012), p. 011103.
[6] J. Carrillo, R. Gvalani, G. Pavliotis, and A. Schlichting, Long-time behaviour and phase transitions for the McKean-Vlasov equation on the torus, Arch. Ration. Mech. Anal., 235 (2020), pp. 635-690.
[7] J. A. Carrillo, M. R. D'Orsogna, and V. Panferov, Double milling in self-propelled swarms from kinetic theory, Kinet. Relat. Models, 2 (2009), p. 363.
[8] J. A. Carrillo, M. Fornasier, G. Toscani, and F. Vecil, Particle, kinetic, and hydrodynamic models of swarming, in Mathematical modeling of collective behavior in socio-economic and life sciences, Springer, 2010, pp. 297-336.
[9] M. E. Cates and J. Tailleur, Motility-induced phase separation, Annu. Rev. Condens. Matter Phys., 6 (2015), pp. 219-244.
[10] H. Chen and J.-L. Thiffeault, Shape matters: a Brownian microswimmer in a channel, J. Fluid Mech., 916 (2021).
[11] F. Cucker and S. Smale, Emergent behavior in flocks, IEEE T. Automat. Contr., 52 (2007), pp. 852-862.
[12] P.-G. De Gennes and J. Prost, The physics of liquid crystals, no. 83 in Oxford Science Publications, Clarendon Press, Oxford, 2nd ed., 1993.
[13] P. Degond, A. Manhart, and H. Yu, A continuum model for nematic alignment of self-propelled particles, Discrete Contin. Dyn. Syst.-B, 22 (2017), pp. 1295-1327.
[14] M. Doi and S. F. Edwards, The theory of polymer dynamics, no. 73 in International Series of Monographs on Physics, Clarendon Press, Oxford, 1986.
[15] M. Doumic, S. Hecht, and D. Peurichard, A purely mechanical model with asymmetric features for early morphogenesis of rod-shaped bacteria micro-colony, Math. Biosci. Eng., 17 (2020), pp. 6873-6908.
[16] T. A. Driscoll, Schwarz-Christoffel Toolbox for MATLAB, 2003. Version 2.3.
[17] T. A. Driscoll, N. Hale, and L. N. Trefethen, Chebfun Guide, Pafnuty Publications, 2014.
[18] T. A. Driscoll and L. N. Trefethen, Schwarz-Christoffel mapping, no. 8 in Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, 2002.
[19] M. R. D'Orsogna, Y. Chuang, A. L. Bertozzi, and L. S. Chayes, Self-propelled particles with soft-core interactions: Patterns, stability, and collapse, Phys. Rev. Lett., 96 (2006), p. 104302.
[20] F. D. C. Farrell, M. C. Marchetti, D. Marenduzzo, and J. Tailleur, Pattern formation in self-propelled particles with density-dependent motility, Phys. Rev. Lett., 108 (2012), p. 248101.
[21] D. Frenkel and R. Eppenga, Evidence for algebraic orientational order in a two-dimensional hard-core nematic, Phys. Rev. A, 31 (1985), p. 1776.
[22] D. Frenkel and J. F. Maguire, Molecular dynamics study of infinitely thin hard rods: scaling behavior of transport properties, Phys. Rev. Lett., 47 (1981), pp. 1025-1028.
[23] J. Gay and B. Berne, Modification of the overlap potential to mimic a linear site-site potential, J. Chem. Phys., 74 (1981), pp. 3316-3319.
[24] S. Hittmeir, L. Kanzler, A. Manhart, and C. Schmeiser, Kinetic modelling of colonies of myxobacteria, Kinet. Relat. Models, 14 (2021), pp. 1-24.
[25] F. K. O. Hoffmann, Keller-Segel-type models and kinetic equations for interacting particles: Long-time asymptotic analysis, PhD thesis, University of Cambridge, 2017.
[26] D. Holcman and Z. Schuss, Brownian needle in dire straits: Stochastic motion of a rod in very confined narrow domains, Phys. Rev. E, 85 (2012), p. 010103.
[27] P.-E. Jabin and Z. Wang, Mean field limit for stochastic particle systems, in Active Particles, vol. 1, Springer, 2017, pp. 379-402.
[28] L. Kanzler and C. Schmeiser, Kinetic model for myxobacteria with directional diffusion, Commun. Math. Sci., 21 (2023), pp. 107-126.
[29] R. F. Kayser and H. J. Raveché, Bifurcation in Onsager's model of the isotropic-nematic transition, Phys. Rev. A, 17 (1978), pp. 2067-2072.
[30] Y. Kuramoto, Rhythms and turbulence in populations of chemical oscillators, Phys. A: Stat. Mech. Appl., 106 (1981), pp. 128-143.
[31] S. Leitmann, F. Höfling, and T. Franosch, Dynamically crowded solutions of infinitely thin Brownian needles, Phys. Rev. E, 96 (2017), p. 012118.
[32] S. Mandal, C. Kurzthaler, T. Franosch, and H. Löwen, Crowding-enhanced diffusion: An exact theory for highly entangled self-propelled stiff filaments, Phys. Rev. Lett., 125 (2020), p. 138002.
[33] W. Marth, S. Praetorius, and A. Voigt, A mechanism for cell motility by active polar gels, J. R. Soc. Interface, 12 (2015), p. 20150161.
[34] J. Mason, R. L. Jack, and M. Bruna, Macroscopic behaviour in a two-species exclusion process via the method of matched asymptotics, J. Stat. Phys., 190 (2023), p. 47.
[35] L. Onsager, The effects of shape on the interaction of colloidal particles, Ann. N.Y. Acad. Sci., 51 (1949), pp. 627-659.
[36] B. E. Snodin, F. Randisi, M. Mosayebi, P. Šulc, J. S. Schreck, F. Romano, T. E. Ouldridge, R. Tsukanov, E. Nir, A. A. Louis, et al., Introducing improved structural properties and salt dependence into a coarse-grained model of DNA, J. Chem. Phys., 142 (2015), p. 06B613.
[37] C. M. Topaz, A. L. Bertozzi, and M. A. Lewis, A nonlocal continuum model for biological aggregation, Bull. Math. Biol., 68 (2006), pp. 1601-1623.
[38] T. Vicsek, A. Czirók, E. Ben-Jacob, I. Cohen, and O. Shochet, Novel type of phase transition in a system of self-driven particles, Phys. Rev. Lett., 75 (1995), pp. 1226-1229.
[39] D. Wenzel and A. Voigt, Multiphase field models for collective cell migration, Phys. Rev. E, 104 (2021), p. 054410.
Computational models of sound-quality metrics using method for calculating loudness with gammatone/gammachirp auditory filterbank

19 May 2023

Takuto Isoyama, Shunsuke Kidani, Masashi Unoki
School of Information Science, Japan Advanced Institute of Science and Technology, 1-1 Asahidai, Nomi, Ishikawa 923-1292, Japan

Keywords: Loudness, Sound-quality metrics, Sharpness, Roughness, Fluctuation strength, Gammatone auditory filterbank, Gammachirp auditory filterbank

Highlights

• A method for calculating loudness using the time-domain gammatone/gammachirp auditory filterbanks is proposed.
• Three computational SQM models of sharpness, roughness, and fluctuation strength are also proposed using the proposed loudness method.
• The results of an evaluation indicate improvement in the estimation accuracy of metrics such as sharpness, roughness, and fluctuation strength with the proposed method.

Abstract

Sound-quality metrics (SQMs), such as sharpness, roughness, and fluctuation strength, are calculated using a standard method for calculating loudness (Zwicker method, ISO 532B:1975). Since ISO 532 was revised in 2017 to contain the Zwicker method (ISO 532-1) and the Moore-Glasberg method (ISO 532-2), the classical computational SQM model should also be revised in accordance with these revisions. The roex auditory filterbank used with the Moore-Glasberg method is defined separately in the frequency domain and so does not have impulse responses. It is therefore difficult to construct a computational SQM model, e.g., the classical computational SQM model, on the basis of ISO 532-2. We propose a method for calculating loudness using the time-domain gammatone or gammachirp auditory filterbank instead of the roex auditory filterbank to solve this problem. We also propose three computational SQM models based on ISO 532-2 to use with the proposed loudness method. We evaluated the root-mean-squared errors (RMSEs) of the loudness calculated with the proposed and Moore-Glasberg methods. We then evaluated the RMSEs between the SQMs calculated with the proposed method and human data on SQMs. We found that the proposed method can be considered a time-domain method for calculating loudness on the basis of ISO 532-2 because the RMSEs are very small. We also found that the proposed computational SQM models can effectively account for the human data on SQMs compared with the classical computational SQM model in terms of RMSEs.

Introduction

There has been considerable interest in sound-quality metrics (SQMs) [1], such as sharpness, roughness, and fluctuation strength, which objectively examine the texture of sound as perceived by humans. These metrics have been applied in a variety of studies, including on sensory pleasantness [2], annoyance [1], product sound design [3, 4], and soundscape analysis [5, 6]. SQMs play an important role in creating more desirable products and ambient sound. These metrics can be obtained by computational modeling. Metrics based on the time variability of loudness, such as roughness and fluctuation strength, are commonly calculated using the standard method for calculating loudness with a time-domain auditory filter that was proposed by Zwicker (ISO 532B:1975). ISO 532 was revised in 2017 as ISO 532-1:2017 [7] and ISO 532-2:2017 [8].
The methods for calculating loudness described in the former and latter standards are called the Zwicker method and Moore-Glasberg method, respectively. The Zwicker method calculates loudness for stationary and time-varying sounds. The Moore-Glasberg method calculates loudness only for stationary sounds. There are differences between these methods for calculating loudness, such as the frequency scale (Bark scale or equivalent rectangular bandwidth (ERB) scale [9]) and the auditory filter shape (symmetric or asymmetric) depending on the signal level. A computational SQM model needs to be modified to match the revision of ISO 532. It is believed that minor modifications to this model would enable the computation of SQMs using the Zwicker method. The study of the computational SQM model using the Moore-Glasberg method is limited to sharpness [10]. The roex auditory filterbank used with the Moore-Glasberg method is defined only in the frequency domain and therefore has no impulse responses. It is therefore difficult to construct a computational SQM model, e.g., the classical computational SQM model, obtained from the time variability of loudness, such as roughness and fluctuation strength, on the basis of ISO 532-2:2017. If it is possible to compute SQMs based on ISO 532-2:2017, the ERB scale and the asymmetry of the auditory filter shape can be used to better estimate SQMs. As a preliminary study, we constructed a model for calculating loudness using the time-domain gammatone auditory filterbank (GTFB) [11] or analytical gammachirp auditory filterbank (GCFB) [12] to determine whether we could calculate sharpness and fluctuation strength using that model [13,14]. As a result, the calculated sharpness and fluctuation strength showed trends similar to the human data.
Email addresses: [email protected] (Takuto Isoyama), [email protected] (Shunsuke Kidani), [email protected] (Masashi Unoki)
In this paper, we propose a method for calculating loudness using the time-domain GTFB or GCFB instead of the roex auditory filterbank to solve this problem. We also propose three computational SQM models that are based on ISO 532-2 to use with the proposed method. This paper is organized as follows. Section 2 reviews related literature, Section 3 describes the proposed method for calculating loudness, Section 4 describes the proposed computational SQM models, Section 5 discusses the evaluation of the proposed models through comparison with other related computational SQM models, and Section 6 concludes the paper.

Literature review
Method for calculating loudness
ISO 532A:1975 is Stevens' method for calculating the loudness level of broadband sounds. This method calculates the loudness level for sounds analyzed in three octave bands (1 octave, 1/2 octave, and 1/3 octave) in accordance with Stevens' law. However, this calculation method cannot account for loudness at low sound-pressure levels. ISO 532B:1975 is a method, proposed by Zwicker, for calculating the loudness of stationary sounds using the time-domain 1/3-octave filterbank as an approximate critical-band filterbank. Loudness is obtained by (1) correcting for the sound field and the transfer characteristics of the outer and middle ear, (2) calculating the excitation pattern for each critical band, (3) calculating the specific loudness, and (4) summing the specific loudness. The excitation pattern for each critical band is obtained by bandwidth division using a 1/3-octave filterbank constructed in accordance with the Bark scale. This calculation method incorporates the idea of partial loudness to correctly account for loudness even at low sound-pressure levels. The Zwicker method [7] is an improved version of ISO 532B:1975 that can also handle time-varying sounds. The basic flow of the calculation is the same as in ISO 532B:1975.
There are five updates: (i) refinement of the auditory filterbank using a 1/3-octave filter, (ii) improved calculation of specific loudness from the excitation-to-internal-noise ratio, (iii) refinement of the relationship between loudness and loudness level, (iv) addition of nonlinear time-decay processing for the auditory system, and (v) addition of a second-order leaky-integration process to the summation of the specific loudness. The Moore-Glasberg method [8] calculates loudness using the frequency-domain auditory filterbank for monaural and binaural sounds. This section describes this method for monaural sound. The basic flow of the calculation is the same as in ISO 532B:1975. There are differences between the Zwicker and Moore-Glasberg methods, such as the frequency scale (Bark scale or ERB scale [9]) and auditory filter shape (symmetric or asymmetric) depending on the signal level.

Computational model of SQMs
Bismarck established the correlation between specific loudness and sharpness and subsequently presented a computational model of sharpness using ISO 532B:1975 [15,1]. Fastl and Zwicker generalized Bismarck's model [1]. Aures modified Bismarck's computational model of sharpness into a loudness-dependent model [2]. Swift and Gee proposed a computational model of sharpness from the specific loudness calculated using the Moore-Glasberg method [10]. Aures proposed a computational model of roughness on the basis of the relationship between modulation depth and roughness for different critical bands [16]. This model was optimized by Daniel and Weber to match human perception [17]. A computational model of roughness based on the ERB scale was also proposed by Duisters [18]. This model is more consistent with the human data of roughness than Daniel and Weber's model.
Widmann and Fastl found that the difference between the peaks and valleys of the temporal masking pattern is related to the perception of roughness [19] and proposed a computational model of roughness using the temporal variation of specific loudness, which is the output of the Zwicker method for calculating loudness [20]. Vecchi proposed a computational model of fluctuation strength on the basis of the relationship between fluctuation strength and modulation frequency [21]. Fastl clarified the relationship between the temporal masking pattern and fluctuation strength as well as roughness and proposed a computational model of fluctuation strength [22]. Fastl's work has made it possible to consistently consider SQMs using a method for calculating loudness [1,19,22]. This indicates that loudness is important in the computation of SQMs. The use of the ERB scale improves SQM computation results, as suggested by Duisters [18].

Research issue
With the revision of ISO 532, a computational SQM model also needs to be updated. Since the Zwicker method calculates loudness using a time-domain auditory filter similar to ISO 532B:1975, no major updates are expected. The study of computational SQM models using the Moore-Glasberg method is limited to sharpness, and it is difficult to construct a computational SQM model for roughness and fluctuation strength, which are obtained from the time variation of loudness. A frame-by-frame loudness-calculation method is also possible, but because such a method has difficulty capturing minute changes in loudness, an auditory filter with an impulse response is required for the loudness calculation. If we can construct a computational SQM model on the basis of ISO 532-2:2017, the following two improvements are possible. (a) Compared with the filterbank based on the Bark scale, the filterbank based on the ERB scale has a higher resolution in the low-frequency band.
Therefore, the accuracy of SQM estimation in the low-frequency range may be improved in the calculation of SQMs. (b) Since the asymmetry of the auditory filter shape depends on the sound-pressure level, the accuracy of SQM estimation with respect to changes in sound-pressure level may be improved.

Proposed method for calculating loudness
Instead of the roex auditory filterbank, we used the GTFB or GCFB as the time-domain auditory filterbank to develop our method for calculating loudness, and the method was used to construct our computational SQM models. The proposed method using the GTFB is called the GT loudness method, and the one using the GCFB is called the GC loudness method. The input signal s(t) is first filtered to mimic the characteristics of the outer and middle ear then divided into K frequency channels x_k(t) using the GTFB or GCFB. Next, excitation E_k(t) is calculated from the divided signal x_k(t). Finally, loudness N(t) is obtained by summing the specific loudness N′_k(t) calculated from E_k(t). The HWR, (·)², and LPF(·) in the figure denote half-wave rectification, squaring, and low-pass filtering, respectively.

Auditory filterbank
3.1.1. Gammatone filterbank
The GTFB is constructed using the impulse response of the gammatone auditory filter function [11] defined as

gt_k(t) = a t^(M−1) exp(−2πb ERB_N(f_k) t) cos(2π f_k t + φ), (1)

where a, t, M = 4, b = 1.019, f_k, and φ are the amplitude, time, order of the gammatone function, bandwidth constant, center frequency of the k-th filter, and phase, respectively. The equivalent rectangular bandwidth (ERB_N) and f_k are defined as

ERB_N(f_k) = 24.7 (4.37 f_k / 1,000 + 1), (2)
f_k = (10^(ERB_N-number/21.4) − 1) × 1,000 / 4.37, (3)

where the subscript N indicates that it was derived from an experiment on normal-hearing listeners. The GTFB uses the 4th-order cascade one-zero two-pole gammatone filter proposed by Slaney [23].
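Eqs. (1)-(3) can be sketched numerically as follows; this is a minimal illustration, in which the amplitude a = 1, the 50-ms impulse-response length, and the 44,100-Hz sampling frequency are assumptions of the sketch rather than values fixed by the text:

```python
import numpy as np

def erb_n(f):
    """Equivalent rectangular bandwidth in Hz (Eq. (2))."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def erb_number_to_freq(cam):
    """Center frequency f_k in Hz from the ERB_N-number in Cam (Eq. (3))."""
    return (10.0 ** (cam / 21.4) - 1.0) * 1000.0 / 4.37

def gammatone_ir(f_k, fs=44100, dur=0.05, a=1.0, M=4, b=1.019, phi=0.0):
    """Impulse response of the gammatone auditory filter (Eq. (1))."""
    t = np.arange(1, int(dur * fs) + 1) / fs            # t > 0
    env = a * t ** (M - 1) * np.exp(-2.0 * np.pi * b * erb_n(f_k) * t)
    return env * np.cos(2.0 * np.pi * f_k * t + phi)

# 372 channels spaced 0.1 Cam apart, from 1.8 to 38.9 Cam, as in the GT loudness method
cams = np.arange(18, 390) / 10.0
center_freqs = erb_number_to_freq(cams)
```

With this channel layout, the lowest channel sits near 49 Hz and the highest near 15 kHz, so K = 372 channels cover the audible range at 0.1-Cam resolution.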
The GTFB is arranged such that the ERB_N-numbers line up from 1.8 Cam (k = 1) to 38.9 Cam (k = K = 372) in 0.1-Cam increments, similar to the Moore-Glasberg method [8].

Gammachirp filterbank
The analytical gammachirp filter, proposed by Irino and Patterson [24], accounts for the asymmetry of the auditory filter shape that is absent from the GTFB. The compressive gammachirp auditory filter [25,26] and dynamic-compressive gammachirp auditory filter [27] have also been proposed to account for compression properties. The Moore-Glasberg method accounts for the asymmetry of the auditory filter shape but not for the compression property. To carry out the same process as ISO 532-2, the proposed method uses the analytical gammachirp filter. The GCFB is constructed using the impulse response of the analytical gammachirp auditory filter with real coefficients defined as

gc_k(t) = a t^(M−1) exp(−2πb ERB_N(f_k) t) cos(2π f_k t + c ln t + φ), (4)

where c denotes the coefficient of frequency change (chirp) [24] and ln denotes the natural logarithm. The only difference from the impulse response of the gammatone auditory filter (Eq. (1)) is the chirp term c ln t. When c = 0, the chirp term vanishes and the filter has the same frequency characteristics as the gammatone auditory filter. By transforming Eq. (4) into a complex impulse response and applying the Fourier transform, the frequency characteristics of the gammachirp auditory filter can be represented as

|G_C(f)| = |Γ(M − jc)| exp(cθ(f)) / [(2π)² ((b ERB_N(f_k))² + (f − f_k)²)]^(M/2), (5)
         = a_Γ |G_T(f)| exp(cθ(f)), (6)
θ(f) = arctan((f − f_k) / (b ERB_N(f_k))), (7)

where f denotes frequency, a_Γ denotes the amplitude, and the amplitude spectrum of the gammatone auditory filter |G_T(f)| is symmetric about f_k on a linear frequency axis. Since θ(f) is asymmetric around f_k, exp(cθ(f)) is an asymmetric function.
When c is negative, exp(cθ(f)) becomes a low-pass filter (LPF), and when c is positive, exp(cθ(f)) becomes a high-pass filter. This enables control of the asymmetry of the gammachirp auditory filter on the low- and high-frequency sides. By making c a function of the sound-pressure level [12], as shown in the following equation, the level dependence and asymmetry of the auditory filter can be added:

c = 3.38 − 0.107 Ps_k, (8)

where Ps_k denotes the sound-pressure level resulting from the output of the different filters in the GTFB. The c is smoothed in the ERB_N-number direction using a weighted moving average. The exp(cθ(f)) part of the current gammachirp filter is designed by cascading minimum-phase infinite-impulse-response (IIR) filters [12]. The GCFB is arranged such that the ERB_N-numbers line up from 2.6 Cam (k = 1) to 36.9 Cam (k = K = 344) in 0.1-Cam increments.

Calculation of excitation
To simulate the response of the inner hair cells and auditory nerve, E_k(t) is calculated by HWR, (·)², and leakage-integration processing through an LPF applied to the output of the GTFB or GCFB. The transfer characteristic of the leakage integrator H_LPF(ω) is defined as a second-order LPF in which the two LPFs shown in the following equations are cascaded:

H_LPF(ω) = a_LPF / (1 − exp(−2π f_c / f_s) exp(jω)), (9)
a_LPF = 1 / (1 − exp(−2π f_c / f_s)), (10)

where f_s denotes the sampling frequency (44,100 Hz), ω denotes the angular frequency (2π f), and f_c is the cut-off frequency of the leakage integrator (1,200 Hz). The a_LPF is the amplitude of the leakage integrator.
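The excitation chain (HWR, squaring, then two cascaded leaky integrators, cf. Eqs. (9)-(10)) can be sketched as a one-pole recursion. This is a minimal sketch in which the integrator is normalized to unity gain at DC (a_LPF = 1 − exp(−2πf_c/f_s)), an assumption made here for readability rather than the scaling of Eq. (10):

```python
import numpy as np

def leaky_lpf(x, fc=1200.0, fs=44100.0):
    """One-pole leakage integrator (cf. Eqs. (9)-(10)), unity-gain-at-DC variant."""
    lam = np.exp(-2.0 * np.pi * fc / fs)       # pole location from the cut-off frequency
    y = np.empty(len(x), dtype=float)
    acc = 0.0
    for n, v in enumerate(x):
        acc = (1.0 - lam) * v + lam * acc      # y[n] = (1 - lam) x[n] + lam y[n-1]
        y[n] = acc
    return y

def excitation(x, fc=1200.0, fs=44100.0):
    """Excitation of one filterbank channel: HWR, squaring, second-order leaky LPF."""
    hwr = np.maximum(x, 0.0)                   # half-wave rectification
    sq = hwr ** 2                              # squaring
    return leaky_lpf(leaky_lpf(sq, fc, fs), fc, fs)   # two cascaded one-pole LPFs
```

For a constant positive input, the cascaded integrator settles to the input's squared value, which is the intended smoothed-power behavior of the leakage integration.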
Calculation of loudness from excitation
The N′_k(t) is calculated from E_k(t) in accordance with three conditions:

• For E_k(t)/E_0 < E_THRQ,k:
N′_k(t) = Q_N (2 E_k(t)/E_0 / (E_k(t)/E_0 + E_THRQ,k))^1.5 × ((G E_k(t)/E_0 + A)^α − A^α), (11)

• For E_THRQ,k ≤ E_k(t)/E_0 < 10^10:
N′_k(t) = Q_N ((G E_k(t)/E_0 + A)^α − A^α), (12)

• For E_k(t)/E_0 > 10^10:
N′_k(t) = Q_N (E_k(t)/E_0 / (0.99 · 10^−3))^0.2, (13)

where Q_N denotes the loudness coefficient and is 54.6 × 10^−3 when the GTFB is used and 54.8 × 10^−3 when the GCFB is used, E_THRQ,k denotes the excitation corresponding to the minimum audible value in silence, G denotes the low-level gain of cochlear amplification, α denotes the power exponent of Stevens's power law, and A denotes the input/output characteristic [8]. Since the shapes of the roex auditory filterbank specified with the Moore-Glasberg method are different from those of the GTFB or GCFB, the areas of the specific loudness calculated with these filterbanks are also different. Therefore, we added 0.049 for the GT loudness method and 0.047 for the GC loudness method to α (= 0.2) to align the range of specific loudness of the Moore-Glasberg method with that of the proposed method. The N(t) is obtained from the total of the specific loudness, as defined in Eq. (14). Figure 2 shows the relationship between loudness and loudness level, where the horizontal axis represents the loudness level in phon and the vertical axis represents the loudness in sone. When the loudness level is greater than 80 phon, the loudness of the GT loudness method is calculated to be lower than that of the Moore-Glasberg method. However, the loudness of the GT loudness method is consistent with that of the Moore-Glasberg method at 80 phon or less. The loudness of the GC loudness method is almost the same as that of the Moore-Glasberg method. Figure 3 shows the block diagram of the proposed SQM models.
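The piecewise specific-loudness mapping of Eqs. (11)-(13) and the summation of Eq. (14) can be sketched as follows; the per-channel constants E_THRQ,k, G, and A used in the test values are illustrative placeholders, since the actual values come from the ISO 532-2 tables:

```python
import numpy as np

Q_N = 54.6e-3   # loudness coefficient for the GT loudness method

def specific_loudness(E_ratio, E_thrq, G, A, alpha=0.2):
    """Specific loudness N'_k from the excitation ratio E_k(t)/E_0 (Eqs. (11)-(13))."""
    if E_ratio < E_thrq:                                   # below the threshold in quiet
        low = (2.0 * E_ratio / (E_ratio + E_thrq)) ** 1.5
        return Q_N * low * ((G * E_ratio + A) ** alpha - A ** alpha)
    if E_ratio < 1e10:                                     # mid-level region
        return Q_N * ((G * E_ratio + A) ** alpha - A ** alpha)
    return Q_N * (E_ratio / 0.99e-3) ** 0.2                # very high levels (Eq. (13))

def total_loudness(N_prime):
    """Total loudness N(t) as the sum over the K channels (Eq. (14))."""
    return float(np.sum(N_prime))
```

At E_k(t)/E_0 = E_THRQ,k the low-level factor in Eq. (11) equals 1, so the first two branches join continuously.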
N(t) = Σ_{k=1}^{K} N′_k(t). (14)

The N(t) and N′_k(t) are calculated from the observed s(t) by using the proposed method. Sharpness is then computed by calculating the weighted center of the obtained N′_k(t). Roughness and fluctuation strength are determined by analyzing the time variation of N′_k(t).

Proposed computational model of SQMs
Proposed sharpness model
The proposed computational SQM model for sharpness (hereafter, proposed sharpness model) is designed on the basis of Aures's computational model of sharpness [2], which takes loudness dependence into account. Figure 4 shows a block diagram of the proposed sharpness model using the GTFB or GCFB. This model using the GTFB is called the GT sharpness model, and the one using the GCFB is called the GC sharpness model. Sharpness S(t) is obtained by

S(t) = Q_s (Σ_{k=1}^{K} q_s,k(t) N′_k(t) ERB_N-number) / (Σ_{k=1}^{K} N′_k(t)), (15)
q_s,k(t) = (w_s,k / Σ_{k=1}^{K} N′_k(t) ERB_N-number) × ln((N(t) + 20)/20), (16)

where Q_s denotes the sharpness coefficient and is 2.29 × 10^−3 when the GTFB is used and 2.23 × 10^−3 when the GCFB is used, q_s,k(t) denotes a weight that varies depending on loudness, and w_s,k denotes the weighting function. The w_s,k is fitted to minimize the root-mean-square error (RMSE) between the human data of sharpness [1] and the output of this model. The w_s,k is defined as

w_s,k = 1.19 × 10^−3 (ERB_N-number)³ − 4.90 × 10^−2 (ERB_N-number)² + 7.17 × 10^−1 (ERB_N-number) − 2.01. (17)

Proposed roughness model
Due to differences in the loudness model, Fastl's model for computing roughness cannot be used directly. This model was developed on the basis of Duisters' roughness model [18] but with modifications in the computations. Figure 5 shows a block diagram of the proposed computational SQM model for roughness (hereafter, proposed roughness model). The proposed roughness model using the GTFB is called the GT roughness model, and the one using the GCFB is called the GC roughness model.
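The sharpness computation of Eqs. (15)-(17) can be sketched as follows. This is a minimal sketch with synthetic specific-loudness data: the exact grouping of the loudness-dependent term ln((N(t)+20)/20) and the moment normalization in Eq. (16) is an interpretation assumed here, and the channel layout of the GT loudness method is assumed:

```python
import numpy as np

cams = np.arange(18, 390) / 10.0        # ERB_N-number axis, 1.8 ... 38.9 Cam (K = 372)
Q_S = 2.29e-3                           # sharpness coefficient for the GT sharpness model

def w_s(cam):
    """Weighting function of Eq. (17)."""
    return 1.19e-3 * cam ** 3 - 4.90e-2 * cam ** 2 + 7.17e-1 * cam - 2.01

def sharpness(N_prime):
    """Loudness-weighted centroid of specific loudness (cf. Eqs. (15)-(16))."""
    N = np.sum(N_prime)                                  # total loudness N(t)
    moment = np.sum(N_prime * cams)                      # sum of N'_k * ERB_N-number
    q = w_s(cams) / moment * np.log((N + 20.0) / 20.0)   # loudness-dependent weight
    return Q_S * np.sum(q * N_prime * cams) / N          # weighted centroid (Eq. (15))
```

A spectrum concentrated in high-CAM channels yields a larger sharpness value than one concentrated in low-CAM channels, which is the qualitative behavior the weighting in Eq. (17) is fitted to produce.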
The proposed roughness model starts by obtaining N′_k(t) from the observed s(t) using the proposed method. The bandwidth is then limited using a band-pass filter, and the upper and lower envelopes of the band-limited specific loudness are obtained using a Hilbert transform and LPF. A logarithmic transformation is applied to these envelopes to obtain the difference between the peak and dip, ∆L_R,k(t). The normalized cross-correlations i_k and i_{k−10} are calculated between the k-th band-limited loudness level and the (k ± 10)-th (±1 Cam) band-limited loudness levels. The specific roughness R′_k(t) is then calculated using ∆L_R,k(t), i_k, and i_{k−10}, and roughness R(t) is obtained by summing the specific roughness over all channels. The following equations are used for the proposed roughness model:

R = Q_R Σ_{k=1}^{K} R′_k, (18)
R′_k = w_R,k ∆L_R,k(t) i_k² for k ∈ [1, 10];
       w_R,k ∆L_R,k(t) (i_{k−10} i_k)² for k ∈ [11, K − 11];
       w_R,k ∆L_R,k(t) i_{k−10}² for k ∈ [K − 10, K], (19)

where Q_R denotes the roughness coefficient and is 3.15 × 10^−3 when the GTFB is used and 3.20 × 10^−3 when the GCFB is used, and w_R,k denotes the weighting function optimized to match the human data of roughness [1], as shown in Fig. 6.

Difference between peak and dip in specific loudness
The roughness model proposed by Duisters uses modulation depth as a parameter. Fastl discovered that the difference between the peaks and dips ∆L_R,k(t) of the log-transformed temporal masking pattern is related to roughness [19]. This enabled him to calculate roughness. The proposed roughness model incorporates this insight; instead of using the modulation level, it log-transforms the specific loudness and uses ∆L_R,k(t), which is defined as

∆L_R,k(t) = (L′_R,Upper,k(t) − L′_R,Lower,k(t)) W_Calib,k(t), (20)
W_Calib,k(t) = LT(N′_k(t)) / max(LT(N′_k(t))), (21)

where LT denotes a logarithmic transformation function created from the relationship between the loudness and loudness levels in Fig. 2.
Missing data points are interpolated using a linear interpolation method to complete the function. The notation W_Calib,k(t) is the weighting function that ensures that the maximum loudness level at each time is 1, and L′_R,Upper,k(t) and L′_R,Lower,k(t) represent the upper and lower envelopes of the band-limited specific loudness N′_BP,k(t), respectively, as follows:

L′_R,Upper,k(t) = LT(LPF(|N′_BP,k(t) + jHilbert(N′_BP,k(t))|)), (22)
L′_R,Lower,k(t) = −LT(LPF(|−N′_BP,k(t) + jHilbert(−N′_BP,k(t))|)), (23)

Band-pass filter for proposed roughness model
The N′_BP,k(t) is defined as

N′_BP,k(t) = BPF_R,k(N′_k(t) − H_0,k), (24)

where BPF_R,k is the band-pass filter optimized using a sigmoid function in accordance with the human data of roughness [1]. The center frequency C_F,k and bandwidth W_B,k of the band-pass filter are defined as

C_F,k = 69.2 / (1 + exp(−(ERB_N-number − α)/β)), (25)
W_B,k = 1.58 C_F,k, (26)

where α = 4.58 and β = 1.48 are constants. The band-pass filter used the gammatone filter defined as

BPF_R,k(t) = a t^(M−1) exp(−2π W_B,k t) cos(2π C_F,k t + φ), (27)

where the filter order M is 3.

Calculation of normalized cross-correlation
Although the sense of variability is barely perceptible in sounds such as pink noise and white noise, the calculated roughness is high when these types of noise are the input. To sense this small variability, we calculate the normalized cross-correlation i between distant auditory filters and incorporate it into the proposed roughness model. The i is calculated from the time variability of N′_BP,k(t) between distant auditory filters, defined as

i = max_{τ = −0.01}^{0.01} (1/T) Σ_t x(t) y(t + τ) / √(V_x V_y), (28)

where τ, T, and V denote the lag, signal length, and variance, respectively. The time variation of N′_BP,k(t) calculated from the GTFB or GCFB has a time delay among the filters. Thus, we shift τ in 1-sample increments up to 10 ms and take the maximum as i in channel k.
Here, i_{k−10} is the result of i when x is N′_BP,k−10(t) and y is N′_BP,k(t), while i_k is the result of i when x is N′_BP,k(t) and y is N′_BP,k+10(t).

Proposed fluctuation-strength model
The process of calculating fluctuation strength is almost the same as that of computing roughness. The difference is the modulation frequency at which the sensation of fluctuation is felt. The proposed computational SQM model for fluctuation strength (hereafter, proposed fluctuation-strength model) was constructed using the proposed roughness model. Figure 7 shows a block diagram of the proposed fluctuation-strength model. The model using the GTFB is called the GT fluctuation-strength model, and the one using the GCFB is called the GC fluctuation-strength model. Fluctuation strength is obtained by

F = Q_F Σ_{k=1}^{K} F′_k, (29)
F′_k = ∆L_F,k(t)^0.6 i_k² for k ∈ [1, 10];
       ∆L_F,k(t)^0.6 (i_{k−10} i_k)² for k ∈ [11, K − 11];
       ∆L_F,k(t)^0.6 i_{k−10}² for k ∈ [K − 10, K], (30)

where Q_F denotes the fluctuation-strength coefficient and is 30.2 × 10^−3 when the GTFB is used and 30.0 × 10^−3 when the GCFB is used. The i_k and i_{k−10} are obtained from Eq. (28), as in the proposed roughness model.

Difference between peak and dip in specific loudness
The difference between the peaks and dips ∆L_F,k(t) is obtained as

∆L_F,k(t) = (L′_F,Upper,k(t) − L′_F,Lower,k(t)) W_Calib,k(t), (31)
W_Calib,k(t) = LT(N′_k(t)) / max(LT(N′_k(t))), (32)

where LT is the same function described in Section 4.2.1. L′_F,Upper,k(t) and L′_F,Lower,k(t) are the upper and lower envelopes of the band-limited specific loudness, as follows:

L′_F,Upper,k(t) = LT(LPF(|N′_BP,k(t) + jHilbert(N′_BP,k(t))|)), (33)
L′_F,Lower,k(t) = −LT(LPF(|−N′_BP,k(t) + jHilbert(−N′_BP,k(t))|)), (34)

where the LPF is a ninth-order IIR Butterworth low-pass filter with a cut-off frequency of 0.4 Hz.
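The two signal-processing ingredients shared by the roughness and fluctuation-strength models, envelope extraction (Eqs. (22)-(23) and (33)-(34)) and the lag-searched normalized cross-correlation (Eq. (28)), can be sketched as follows. This is a minimal sketch in which an FFT-based analytic signal stands in for the Hilbert-transform step, and the LT log-transformation and smoothing LPF are omitted for brevity:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal x + j*Hilbert(x) built from the one-sided spectrum."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def envelope(x):
    """Upper envelope |x + j*Hilbert(x)| (cf. Eqs. (22) and (33))."""
    return np.abs(analytic_signal(x))

def norm_xcorr_max(x, y, fs, max_lag_s=0.01):
    """Normalized cross-correlation maximized over lags up to +/-10 ms (cf. Eq. (28))."""
    x = x - np.mean(x)
    y = y - np.mean(y)
    denom = np.sqrt(np.var(x) * np.var(y))
    best = -np.inf
    for lag in range(-int(max_lag_s * fs), int(max_lag_s * fs) + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        best = max(best, np.dot(a, b) / (len(a) * denom))
    return best
```

For an amplitude-modulated tone, the envelope recovers the modulator, and the cross-correlation of a channel with itself is 1, while weakly related channels give values near 0, which is what lets the i terms suppress spurious roughness for noise inputs.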
Band-pass filter for proposed fluctuation-strength model
The N′_BP,k(t) is defined as

N′_BP,k(t) = BPF_F(N′_k(t) − H_0,k), (35)

The band-pass filter consists of a second-order IIR Butterworth LPF and a second-order IIR Butterworth high-pass filter connected in cascade. The cut-off frequencies are 5 and 2 Hz, respectively.

Evaluation
We evaluated the RMSE of loudness calculated using the proposed method and the Moore-Glasberg method to see whether the proposed method can calculate loudness correctly. We then evaluated whether the output of the proposed computational SQM models can account for the human data of the SQMs [1] by calculating the RMSE between the model output and the results obtained from the human data of the SQMs. To determine the effectiveness of the proposed SQM models, we conducted a similar evaluation of conventional SQM models and compared these results with those of the proposed SQM models. Since the inputs of the proposed SQM models are assumed to be stationary signals, the average of the outputs of the proposed SQM models was used in the evaluation.

Evaluation of loudness
We compared the loudness calculated with the proposed method with the loudness specified in ISO 532-2. We evaluated the loudness for different frequencies. Table 1 shows the input signals (sinusoidal signals) used for the loudness evaluation. The loudness with the proposed method was calculated as the time-averaged loudness. Figure 8 shows the results of calculating loudness for different frequencies. The horizontal axis is the frequency, and the vertical axis is the estimation error of the calculated results of the proposed loudness method relative to the calculated results of the Moore-Glasberg method. The results of the proposed method were in almost perfect agreement with those of the Moore-Glasberg method. The estimation error at a frequency of 3,000 Hz was the largest, but since the loudness at this frequency was 27 sones, the estimation error is considered sufficiently small.
This indicates that the calculation results of the proposed loudness method are consistent with those of the Moore-Glasberg method.

Evaluation of sharpness
Both versions of the proposed sharpness model (GT and GC sharpness models) were compared with previous sharpness models, i.e., Fastl & Zwicker's model (FZ model) and the loudness-dependent Aures model (Aures model). Sharpness was evaluated for three different types of noise [1] and for different loudness levels [2]. The first set of stimuli comprised narrow-band (NB) noise, high-pass (HP) noise, and low-pass (LP) noise, as listed in Table 2 and specified in DIN 45692 [28]. The amplitudes of these sound stimuli were adjusted so that the loudness calculated with the proposed loudness method was 4 sones. The second set of stimuli was a 0.5-sec sinusoidal signal at 500, 1,000, 2,000, 4,000, and 8,000 Hz, with loudness of 2, 7, 14, and 28 sones. We used the RMSE to quantify the difference between the sharpness values obtained from the four sharpness models and the human data of sharpness [1]. The sharpness values were calculated by taking the time-averaged sharpness from the proposed sharpness models. Figure 9 shows the results of sharpness for the different types of noise, where the vertical axis represents the RMSE between the human data of sharpness and the values obtained from the four sharpness models. The RMSEs of the proposed sharpness models were found to be lower than those of the FZ and Aures models. Furthermore, the sharpness values obtained from the proposed sharpness models were found to be in close agreement with the human data of sharpness. Figure 10 shows the results of sharpness for different loudness levels, where the vertical axis represents the RMSE between the human data of sharpness and the values obtained from the four sharpness models. The RMSEs of the proposed sharpness models were found to be lower than those of the FZ and Aures models.
Additionally, the sharpness values obtained from the proposed sharpness models were found to be in close agreement with the human data of sharpness.

Evaluation of roughness
Both versions of the proposed roughness model (GT and GC roughness models) were compared with Widmann & Fastl's computational model of roughness (WF model) [20] and Daniel & Weber's computational model of roughness (DW model) [17] in terms of modulation perception. The perceived level of roughness varies with modulation frequency, sound-pressure level, center frequency, and modulation depth. Signals that incorporate variations in these parameters were used in this evaluation. We first assessed roughness when the modulation frequency was varied using amplitude-modulated (AM) and frequency-modulated (FM) signals. We then assessed roughness when the sound-pressure level was varied using AM and FM signals. The AM signal was a 1,000-Hz sinusoidal signal with 0.2-sec duration with 100% amplitude modulation at a modulation frequency of 70 Hz. The FM signals were frequency modulated at 70 Hz with a frequency deviation of 700 Hz using a sinusoidal signal of 0.2 sec at 1,500 Hz. The sound-pressure level of these signals was set at 40, 50, 60, 70, and 80 dB. Next, we assessed the roughness of AM signals with varying modulation frequencies and carrier frequencies. The stimuli were sinusoidal signals of 1,000, 2,000, 4,000, and 8,000 Hz, with 0.2-sec duration with 100% amplitude modulation at modulation frequencies of 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, and 200 Hz. The sound-pressure level was set at 60 dB. Finally, we assessed the roughness of AM signals with varying degrees of modulation. The stimuli consisted of sinusoidal signals of 1,000 Hz with 0.2-sec duration with amplitude modulation at 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100% modulation and a modulation frequency of 70 Hz. The sound-pressure level was set at 60 dB.
The roughness values were calculated by taking the time-averaged roughness from the proposed roughness models. The RMSE was used to evaluate the discrepancy between the human data of roughness [1] and the predictions of the four roughness models. Figure 11 shows the roughness results obtained from the four models at various modulation frequencies for both AM and FM signals. The modulation frequency is represented on the horizontal axis and the normalized roughness on the vertical axis. The solid black line signifies the human data of roughness [1]. Figure 11(a) shows that the WF model estimated the roughness at modulation frequencies below 70 Hz lower than the human data of roughness, while the other models estimated values that are close to the human data of roughness. The GT roughness model estimated lower results for modulation frequencies above 70 Hz compared with the human data of roughness. The computational model with the smallest estimation error was the DW model, followed by the GC roughness, GT roughness, and WF models. Figure 11(b) shows the corresponding estimation results for the FM signals. Figure 12 shows the results of roughness estimated from the four models at different sound-pressure levels for AM and FM signals. The horizontal axis is the sound-pressure level, and the vertical axis is the normalized roughness. The solid black line signifies the human data of roughness. Figure 12(a) shows that the results of the WF model were estimated lower than the human data of roughness. The model with the smallest estimation error was the GC roughness model, followed by the GT roughness, DW, and WF models. Figure 12(b) shows that the estimation results of the WF, GC roughness, and GT roughness models were lower than the human data of roughness, while the estimation results of the DW model were higher. The model with the smallest estimation error was the WF model, followed by the GT roughness, GC roughness, and DW models.
Figure 13 shows the estimation errors of the four models for center frequency versus the human data of roughness. The horizontal axis is the center frequency, and the vertical axis is the estimation error. The model with the smallest average estimation error over all center frequencies was the GC roughness model, followed by the DW, WF, and GT roughness models. Figure 14 shows the estimation results of roughness as a function of modulation depth. The horizontal axis is the modulation depth, and the vertical axis is the roughness. The solid black line signifies the human data of roughness. The four models estimated roughness higher than the human data of roughness. The model with the smallest estimation error was the GC roughness model, followed by the DW, WF, and GT roughness models.

Evaluation of fluctuation strength
Both versions of the proposed fluctuation-strength model (GT and GC fluctuation-strength models) were compared with the previous fluctuation-strength model (Fastl model) [22]. The perceived level of fluctuation strength varies with modulation frequency, sound-pressure level, and modulation depth. Hence, signals that incorporate variations in these parameters were used in the evaluation. We first assessed the fluctuation strength when the modulation frequency was varied using AM and FM signals. The AM signal was a 1,000-Hz sinusoidal signal with 4-sec duration and 100% amplitude modulation at modulation frequencies of 0.25, 0.5, 1, 2, 4, 8, 16, and 32 Hz. The FM signals were frequency modulated at 0.25, 0.5, 1, 2, 4, 8, 16, and 32 Hz with a frequency deviation of 700 Hz using a sinusoidal signal of 4 sec at 1,500 Hz. The sound-pressure level of these signals was set at 70 dB. We then assessed the fluctuation strength of AM signals with varying sound-pressure levels. The stimuli were a 1,000-Hz sinusoidal signal with 4-sec duration with 100% amplitude modulation at a modulation frequency of 4 Hz. The sound-pressure level was set at 50, 60, 70, and 80 dB.
Figure 14: Roughness calculated using four roughness models (WF, DW, GT roughness, and GC roughness models) with respect to modulation depth of AM signal

The fluctuation-strength values were calculated by taking the time-averaged fluctuation strength from the proposed fluctuation-strength models. The RMSE was used to evaluate the discrepancy between the human data of fluctuation strength [1] and the predictions of the three models. Figure 15 shows the estimated fluctuation strength from the three models at different modulation frequencies for AM and FM signals. The horizontal axis is the modulation frequency, and the vertical axis is the fluctuation strength. The solid black line signifies the human data of fluctuation strength. Figure 15(a) shows that the proposed fluctuation-strength models estimated the fluctuation strength at modulation frequencies below 4 Hz higher than the human data of fluctuation strength, whereas they estimated lower values at modulation frequencies above 4 Hz. The model with the smallest estimation error was the Fastl model, followed by the GC fluctuation-strength and GT fluctuation-strength models.

Conclusion

We found that the proposed loudness method can be regarded as a time-domain method for calculating loudness based on ISO 532-2 because the RMSEs are very small. The evaluation of the proposed method showed that the calculated loudness was comparable to the loudness specified in ISO 532-2; therefore, the use of the GTFB or GCFB did not significantly affect the loudness calculation. In particular, the proposed GC loudness method was found to calculate loudness closer to that of the Moore-Glasberg method than the proposed GT loudness method. This is because the filter shape of the GCFB is similar to that of the roex auditory filter.
The outputs of the three proposed computational SQM models based on ISO 532-2 using the proposed loudness method were in agreement with the human data of the three SQMs [1]. Comparing the estimation errors of the conventional and proposed computational SQM models, the proposed models had lower estimation errors, suggesting that the difference between the Bark and ERB measures contributed to the estimation of the SQMs. The proposed computational SQM models using the proposed GC loudness method helped explain the change in SQMs in response to changes in sound-pressure level. Future work should involve developing a model that can process time-varying sounds using the compressive gammachirp auditory filter [25,26] or the dynamic compressive gammachirp auditory filter [27] as the auditory filter.

Figure 1: Block diagram of proposed method for calculating loudness (proposed loudness method) using gammatone auditory filterbank (GTFB) or gammachirp auditory filterbank (GCFB). Figure 1 shows a block diagram of the proposed method.
Figure 2: Relationship between loudness level and loudness
Figure 3: Block diagram of proposed computational sound-quality metric (SQM) models
Figure 4: Block diagram of proposed sharpness model, calculating the weighted center from N'_k(t)
Figure 5: Block diagram of proposed roughness model
Figure 6: Weighting function of proposed roughness model, where H_{0,k} denotes the direct-current (DC) component of N'_k(t). The LPF is a ninth-order IIR Butterworth LPF with a cut-off frequency of 7 Hz.
Figure 7: Block diagram of proposed fluctuation-strength model (GC fluctuation-strength model)
Figure 8: Estimation error of both versions of proposed loudness method (GT loudness method and GC loudness method) relative to the Moore-Glasberg method
Figure 9: Estimation error of four sharpness models (FZ, Aures, proposed GT sharpness, and proposed GC sharpness models) for different types of noise
Figure 10: Estimation error of four sharpness models (FZ, Aures, proposed GT sharpness, and proposed GC sharpness models) for different loudness
Figure 11: Relative roughness calculated using four roughness models (WF, DW, proposed GT roughness, and proposed GC roughness models) with respect to modulation frequency: (a) AM signal and (b) FM signal
Figure 12: Relative roughness calculated using four roughness models (WF, DW, proposed GT roughness, and proposed GC roughness models) with respect to sound-pressure level: (a) AM signal and (b) FM signal

Figure 15(b) shows that the estimation results of the Fastl model were higher than the human data of fluctuation strength for modulation frequencies above 4 Hz. The model with the smallest estimation error was the GC fluctuation-strength model, followed by the GT fluctuation-strength and Fastl models. Figure 16 shows the estimated fluctuation strength from the three models at different sound-pressure levels for AM and FM signals. The horizontal axis is the sound-pressure level, and the vertical axis is the fluctuation strength. The solid black line signifies the human data of fluctuation strength. Figure 16(a) shows that the estimated values of all the models were approximately the same as the human data of fluctuation strength. The model with the smallest estimation error was the GC fluctuation-strength model, followed by the GT fluctuation-strength and Fastl models. Figure 16(b) shows that the estimation results of the WF model were higher than those of the human data of fluctuation strength.
The model with the smallest estimation error was the GC fluctuation-strength model, followed by the GT fluctuation-strength and Fastl models. Figure 17 shows the estimation results of fluctuation strength versus modulation depth. The horizontal axis is the modulation depth, and the vertical axis is fluctuation strength. The solid black line signifies the human data of fluctuation strength. The model with the smallest estimation error was the Fastl model, followed by the GC and GT fluctuation-strength models.

Figure 15: Fluctuation strength calculated using three fluctuation-strength models (Fastl, GT fluctuation-strength, and GC fluctuation-strength models) with respect to modulation frequency: (a) AM signal and (b) FM signal
Figure 16: Fluctuation strength calculated using three fluctuation-strength models (Fastl, GT fluctuation-strength, and GC fluctuation-strength models) with respect to sound-pressure level: (a) AM signal and (b) FM signal

Funding: This work was supported by the SCOPE Program of the Ministry of Internal Affairs and Communications (Grant No. 201605002) and JSPS-NSFC Bilateral Programs (Grant No. JSJSBP120197416). This research was also supported by a Fund for the Promotion of Joint International Research (Fostering Joint International Research (B)) (20KK0233) from MEXT.
Figure 17: Fluctuation strength calculated using three fluctuation-strength models (Fastl, GT fluctuation-strength, and GC fluctuation-strength models) with respect to modulation depth of AM signal

Table 1: Sound signal used for evaluating proposed loudness method
Frequency [Hz] | Sound pressure level [dB] | Step size [dB]
100            | 50                        | -
1,000          | 20 ∼ 80                   | 10
3,000          | 20 ∼ 80                   | 20

Table 2: Sound signal used for evaluating proposed sharpness model
Type of noise | Center frequency [Hz] | Bandwidth [Hz] | Low frequency [Hz] | High frequency [Hz]
NB noise      | 200 ∼ 10,000          | 104 ∼ 2,463    | -                  | -
HP noise      | -                     | 1,500 ∼ 9,750  | 250 ∼ 8,500        | 10,000 (constant)
LP noise      | -                     | 150 ∼ 10,300   | 200 (constant)     | 350 ∼ 10,500

The stimuli were amplitude-modulated (AM) and frequency-modulated (FM) signals. The AM signal was a 1,000-Hz sinusoidal signal with 0.2-sec duration and 100% amplitude modulation at modulation frequencies of 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, and 200 Hz. The FM signals were frequency modulated at 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, and 200 Hz with a frequency deviation of 700 Hz using a 0.2-sec sinusoidal signal at 1,500 Hz. The sound-pressure level of these signals was set at 70 dB. Next, we assessed the roughness of AM signals with varying modulation depths.
The stimuli consisted of 1,000-Hz sinusoidal signals with 0.2-sec duration and amplitude modulation at modulation depths of 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100%, with a modulation frequency of 4 Hz. The sound-pressure level was set at 70 dB.

Figure 13: Relative roughness calculated by four roughness models (WF, DW, proposed GT roughness, and proposed GC roughness models) with respect to center frequency. Estimation errors versus the human data of auditory perception [1]: WF model, RMSE: 0.133; DW model, RMSE: 0.128; GT roughness model, RMSE: 0.15; GC roughness model, RMSE: 0.118.

Software
Public URL: https://jstorage-2018.jaist.ac.jp/s/8Wk6tgt4LS6YTDo
Password: Qx4AjwZwEi

References
[1] Fastl H and Zwicker E. Psycho-acoustics: Facts and models. Springer, La Vergne, TN, USA, 2010.
[2] Aures W. Berechnungsverfahren für den sensorischen Wohlklang beliebiger Schallsignale. Acta Acustica united with Acustica 1985:130-141. In German.
[3] Nykänen A and Sirkka A. Specification of component sound quality applied to automobile power windows. Applied Acoustics 2009;70:813-820. https://doi.org/10.1016/j.apacoust.2008.09.015
[4] Kwon G, Jo H, and Kang J Y. Model of psychoacoustic sportiness for vehicle interior sound: Excluding loudness. Applied Acoustics 2018;136:16-25. https://doi.org/10.1016/j.apacoust.2018.01.027
[5] Lionello M, Aletta F, and Kang J. A systematic review of prediction models for the experience of urban soundscapes. Applied Acoustics 2020;170. https://doi.org/10.1016/j.apacoust.2020.107479
[6] Mian T, Choudhary A, and Fatima S. An efficient diagnosis approach for bearing faults using sound quality metrics. Applied Acoustics 2022;195. https://doi.org/10.1016/j.apacoust.2022.108839
[7] ISO 532-1:2017, Acoustics - Methods for calculating loudness - Part 1: Zwicker method.
[8] ISO 532-2:2017, Acoustics - Methods for calculating loudness - Part 2: Moore-Glasberg method.
[9] Moore B C J. An Introduction to the Psychology of Hearing. Brill, Leiden, Boston, 2013.
[10] Swift S H and Gee K L. Extending sharpness calculation for an alternative loudness metric input. Journal of the Acoustical Society of America 2017;142. https://doi.org/10.1121/1.5016193
[11] Patterson R D, Nimmo-Smith I, Holdsworth J, and Rice P. An efficient auditory filterbank based on the gammatone function. A meeting of the IOC Speech Group on Auditory Modelling at RSRE, 1988.
[12] Irino T and Unoki M. An analysis/synthesis auditory filterbank based on an IIR implementation of the gammachirp. Journal of the Acoustical Society of Japan (E) 1999;20(6):397-406. https://doi.org/10.1250/ast.20.397
[13] Isoyama T, Kidani S, and Unoki M. Modeling of sound quality metrics using gammatone and gammachirp filterbanks. Proc. Forum Acusticum 2020;2731-2735. https://doi.org/10.48465/fa.2020.0701
[14] Isoyama T, Kidani S, and Unoki M. Computational models of sharpness and fluctuation strength using loudness models composed of gammatone and gammachirp auditory filterbanks. Journal of Signal Processing 2021;25(4):141-144. https://doi.org/10.2299/jsp.25.141
[15] von Bismarck G. Sharpness as an attribute of the timbre of steady sounds. Acta Acustica united with Acustica 1974;159-172.
[16] Aures W. Ein Berechnungsverfahren der Rauhigkeit. Acta Acustica united with Acustica 1985;58:268-281. In German.
[17] Daniel P and Weber R. Psychoacoustical roughness: Implementation of an optimized model. Acta Acustica united with Acustica 1997;113-123.
[18] Duisters R. The modeling of auditory roughness for signals with temporally asymmetric envelopes. Technische Universiteit Eindhoven, 2005.
[19] Fastl H. The hearing sensation roughness and neuronal responses to AM-tones. Hearing Research 1990;46:293-296.
[20] Widmann U and Fastl H. Calculating roughness using time-varying specific loudness spectra. Proc. Sound Quality Symposium 98, 1998;55-60.
[21] Osses Vecchi A, García León R, and Kohlrausch A. Modelling the sensation of fluctuation strength. Proc. Meetings on Acoustics 2016;28:050005. https://doi.org/10.1121/2.0000410
[22] Fastl H. Fluctuation strength and temporal masking patterns of amplitude-modulated broadband noise. Hearing Research 1982;8:59-69. https://doi.org/10.1016/0378-5955(82)90034-X
[23] Slaney M. An efficient implementation of the Patterson-Holdsworth auditory filter bank. Apple Computer Tech. Rep. #35, 1993.
[24] Irino T and Patterson R D. A time-domain, level-dependent auditory filter: The gammachirp. Journal of the Acoustical Society of America 1997;101(1):412-419. https://doi.org/10.1121/1.417975
[25] Unoki M, Irino T, Glasberg B, Moore B C J, and Patterson R D. Comparison of the roex and gammachirp filters as representations of the auditory filter. Journal of the Acoustical Society of America 2006;120(3):1474-1492. https://doi.org/10.1121/1.2228539
[26] Irino T and Patterson R D. A compressive gammachirp auditory filter for both physiological and psychophysical data. Journal of the Acoustical Society of America 2001;109(5). https://doi.org/10.1121/1.1367253
[27] Irino T and Patterson R D. A dynamic compressive gammachirp auditory filterbank. IEEE Transactions on Audio, Speech, and Language Processing 2006;14(6):2222-2232. https://doi.org/10.1109/TASL.2006.874669
[28] DIN 45692:2009. Measurement technique for the simulation of the auditory sensation of sharpness. In German.
Title: Computational models of sound-quality metrics using method for calculating loudness with gammatone/gammachirp auditory filterbank
Authors: Takuto Isoyama, Shunsuke Kidani, Masashi Unoki (School of Information Science, Japan Advanced Institute of Science and Technology, 1-1 Asahidai, Nomi, Ishikawa 923-1292, Japan)
arXiv: 2305.13213, https://doi.org/10.48550/arxiv.2305.13213

Highlights:
- A method for calculating loudness using the time-domain gammatone/gammachirp auditory filterbanks is proposed.
- Three computational SQM models of sharpness, roughness, and fluctuation strength are also proposed using the proposed loudness method.

Abstract: Sound-quality metrics (SQMs), such as sharpness, roughness, and fluctuation strength, are calculated using a standard method for calculating loudness (Zwicker method, ISO 532B, 1975). Since ISO 532 was revised in 2017 to contain the Zwicker method (ISO 532-1) and Moore-Glasberg method (ISO 532-2), the classical computational SQM model should also be revised in accordance with these revisions. The roex auditory filterbank used with the Moore-Glasberg method is defined separately in the frequency domain and does not have impulse responses. It is therefore difficult to construct a computational SQM model, e.g., the classical computational SQM model, on the basis of ISO 532-2. We propose a method for calculating loudness using the time-domain gammatone or gammachirp auditory filterbank instead of the roex auditory filterbank to solve this problem. We also propose three computational SQM models based on ISO 532-2 to use with the proposed loudness method. We evaluated the root-mean-squared errors (RMSEs) of the calculated loudness with the proposed and Moore-Glasberg methods. We then evaluated the RMSEs of the calculated SQMs with the proposed method and human data of SQMs. We found that the proposed method can be considered a time-domain method for calculating loudness on the basis of ISO 532-2 because the RMSEs are very small. We also found that the proposed computational SQM models can effectively account for the human data of SQMs compared with the classical computational SQM model in terms of RMSEs.
L∞-algebra of braided electrodynamics

M. Dimitrijević Ćirić, N. Konjik, V. Radovanović, M. Toman (Faculty of Physics, University of Belgrade, Studentski trg 12, Belgrade, Serbia) and R. J. Szabo (Department of Mathematics, Heriot-Watt University; Maxwell Institute for Mathematical Sciences; Higgs Centre for Theoretical Physics, Edinburgh, United Kingdom)

Corfu Summer Institute 2021 "School and Workshops on Elementary Particle Physics and Gravity", 29 August - 9 October 2021, Corfu, Greece. Preprint: EMPG-22-07

Using the recently developed formalism of braided noncommutative field theory, we construct an explicit example of braided electrodynamics, that is, a noncommutative U(1) gauge theory coupled to a Dirac fermion. We construct the braided L∞-algebra of this field theory and apply the formalism to obtain the braided equations of motion, action functional and conserved matter current. The braided deformation leads to a modification of charge conservation. Finally, the Feynman integral appearing in the one-loop contribution to the vacuum polarization diagram is calculated. There are no non-planar diagrams, but UV/IR mixing appears nevertheless. We comment on this unexpected result.

Introduction

L∞-algebras are generalizations of differential graded Lie algebras with infinitely many graded antisymmetric brackets, related to each other by higher homotopy versions of the Jacobi identity. In [1] it was suggested that the complete data of any classical field theory with generalized gauge symmetries fit into a cyclic L∞-algebra with finitely many non-vanishing brackets, encoding both the gauge transformations and the dynamics. It was then shown that the existence of such an L∞-algebra formulation is a consequence of the duality with the BV-BRST formalism for perturbative field theories [2].
L∞-algebras naturally appear in noncommutative gauge theory [3-6] and noncommutative gravity [7] (see also the contribution [8] to these proceedings for a brief review). In [3] it was shown that the semi-classical limit of a noncommutative and/or nonassociative gauge theory can be encoded in an infinite-dimensional L∞-algebra which is constructed order by order in the deformation parameter. Furthermore, it was shown in [5] that the Seiberg-Witten map relating a noncommutative gauge theory to the corresponding commutative gauge theory is an L∞ quasi-isomorphism. Using the Drinfel'd twist deformation method, in our recent work [7,9] we constructed a deformation of the L∞-algebra, the braided L∞-algebra. The corresponding field theory is then a braided gauge theory. It is different from the usual noncommutative gauge theory, the ★-gauge theory: the braided gauge transformations close in the Lie algebra of the undeformed gauge symmetry and obey a braided Leibniz rule. In this paper we illustrate our construction of braided L∞-algebras and braided gauge theories with the example of braided electrodynamics: braided U(1) gauge theory coupled to a charged Dirac spinor. We start by reviewing some basic facts about L∞-algebras and their relation with classical field theories. Then we introduce the braided L∞-algebra and the corresponding braided gauge theory. To illustrate our construction, we discuss in detail braided electrodynamics and its properties. In particular, the theory remains abelian and there are no three- and four-photon vertices. The quantization of field theories with braided symmetries is currently under development. To gain some preliminary insight, here we calculate the standard Feynman integrals which appear in the one-loop contribution to the vacuum polarization. We find UV/IR mixing, but no non-planar diagrams. This unexpected result should be understood once the full quantum field theory of braided (gauge) field theories is constructed.
Some preliminary results on these problems can be found in [10,11].

L∞-algebras and classical field theory

In this section we briefly review the connection between classical field theories and L∞-algebras established in [1,2]. An L∞-algebra is a Z-graded vector space $V = \bigoplus_{k \in \mathbb{Z}} V_k$ with graded antisymmetric multilinear maps called $n$-brackets,
$$\ell_n : V^{\otimes n} \longrightarrow V \,, \qquad a_1 \otimes \cdots \otimes a_n \longmapsto \ell_n(a_1, \dots, a_n) \,,$$
$$\ell_n(\dots, a, b, \dots) = -(-1)^{|a|\,|b|}\, \ell_n(\dots, b, a, \dots) \,,$$
where $|a|$ is the degree of a homogeneous element $a \in V$. The $n$-brackets must also fulfil homotopy relations. The first three relations are given by
$$n=1: \quad \ell_1\big(\ell_1(a)\big) = 0 \,,$$
$$n=2: \quad \ell_1\big(\ell_2(a_1, a_2)\big) = \ell_2\big(\ell_1(a_1), a_2\big) + (-1)^{|a_1|}\, \ell_2\big(a_1, \ell_1(a_2)\big) \,, \qquad (1)$$
$$n=3: \quad \ell_1\big(\ell_3(a_1, a_2, a_3)\big) = -\ell_3\big(\ell_1(a_1), a_2, a_3\big) - (-1)^{|a_1|}\, \ell_3\big(a_1, \ell_1(a_2), a_3\big) - (-1)^{|a_1| + |a_2|}\, \ell_3\big(a_1, a_2, \ell_1(a_3)\big) - \ell_2\big(\ell_2(a_1, a_2), a_3\big) - (-1)^{(|a_1| + |a_2|)\,|a_3|}\, \ell_2\big(\ell_2(a_3, a_1), a_2\big) - (-1)^{(|a_2| + |a_3|)\,|a_1|}\, \ell_2\big(\ell_2(a_2, a_3), a_1\big) \,.$$
Cyclic L∞-algebras contain an additional structure called a cyclic pairing, that is, a graded symmetric and non-degenerate bilinear map $\langle -, - \rangle : V \otimes V \to \mathbb{R}$ satisfying
$$\big\langle a_0, \ell_n(a_1, a_2, \dots, a_n) \big\rangle = (-1)^{\,n + (|a_0| + |a_n|)\,n + |a_n| \sum_{i=0}^{n-1} |a_i|}\, \big\langle a_n, \ell_n(a_0, a_1, \dots, a_{n-1}) \big\rangle \,, \qquad n \geq 1 \,. \qquad (2)$$
In order to relate this formalism to a (gauge) field theory, we first define the graded vector space $V = V_0 \oplus V_1 \oplus V_2 \oplus V_3$. This space contains gauge parameters $\lambda \in V_0$, gauge fields $A \in V_1$, equations of motion $F_A \in V_2$, and second Noether identities $\mathrm{d}_A F_A \in V_3$. The gauge transformations $\delta_\lambda A$, the equations of motion $F_A = 0$, the gauge invariant action functional $S(A)$ and the second Noether identity $\mathrm{d}_A F_A \equiv 0$ are then formulated as follows:
$$\delta_\lambda A = \ell_1(\lambda) + \ell_2(\lambda, A) - \tfrac{1}{2}\, \ell_3(\lambda, A, A) + \cdots \,, \qquad (3)$$
$$F_A = \ell_1(A) - \tfrac{1}{2}\, \ell_2(A, A) - \tfrac{1}{3!}\, \ell_3(A, A, A) + \cdots \,, \qquad (4)$$
$$S(A) = \tfrac{1}{2}\, \langle A, \ell_1(A) \rangle - \tfrac{1}{3!}\, \langle A, \ell_2(A, A) \rangle + \cdots \,, \qquad (5)$$
$$\mathrm{d}_A F_A = \ell_1(F_A) + \ell_2(F_A, A) + \cdots \,. \qquad (6)$$
In field theories a cyclic pairing of degree $-3$ is needed; therefore, the only non-vanishing pairings are $\langle -, - \rangle : V_k \otimes V_{3-k} \to \mathbb{R}$ for $k \leq 3$. The variational principle is then written as $\delta S(A) = \langle \delta A, F_A \rangle$.
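As an illustrative consistency check of this dictionary (not part of the paper), one can verify in a zero-dimensional reduction of the φ⁴ example treated below, where the d'Alembertian is dropped so that $\ell_1(\phi) = -m^2\phi$, that the variational principle $\delta S = \langle \delta\phi, F_\phi \rangle$ holds:

```python
import sympy as sp

phi, m, lam = sp.symbols('phi m lam')

# Brackets of the phi^4 L-infinity algebra, reduced to zero dimensions
# (the d'Alembertian is dropped, so l1 is just multiplication by -m^2).
l1 = lambda f: -m**2 * f
l3 = lambda f, g, h: lam * f * g * h

# Action S = 1/2 <phi, l1(phi)> - 1/4! <phi, l3(phi, phi, phi)>
S = sp.Rational(1, 2) * phi * l1(phi) - sp.Rational(1, 24) * phi * l3(phi, phi, phi)

# Equation of motion F_phi = l1(phi) - 1/3! l3(phi, phi, phi)
F = l1(phi) - sp.Rational(1, 6) * l3(phi, phi, phi)

# dS/dphi must reproduce F_phi
print(sp.simplify(sp.diff(S, phi) - F))  # 0
```

The relative factors 1/3! in the equation of motion and 1/4! in the action are exactly what makes the ordinary derivative of $S$ reproduce $F_\phi$.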
We will now illustrate this L∞-algebra encoding using two important examples.

L∞-algebra of 3D nonabelian Chern-Simons theory

Consider a Chern-Simons theory in three dimensions for a gauge group $G$ with corresponding (hermitian) Lie algebra generators $T^a$, $a = 1, \dots, \dim G$, whose Lie algebra $\mathfrak{g}$ has an invariant quadratic form $\mathrm{Tr}_{\mathfrak{g}}$ and commutation relations $[T^a, T^b] = f^{ab}{}_c\, T^c$. Let $A = A_a\, T^a$ be a Lie algebra valued one-form; its curvature tensor is $F_A = \mathrm{d}A - A \wedge A = \mathrm{d}A - \tfrac{1}{2}[A, A]$. The action of this theory can be written as
$$S = \frac{1}{2} \int \mathrm{Tr}_{\mathfrak{g}} \Big( A \wedge \mathrm{d}A - \frac{1}{3}\, A \wedge [A, A] \Big) \,,$$
which yields the equation of motion $F_A = 0$. This theory can be encoded into the L∞-algebra formalism by introducing $V = V_0 \oplus V_1 \oplus V_2 \oplus V_3$. The corresponding non-vanishing $\ell$-brackets are given by
$$\ell_1(\lambda) = \mathrm{d}\lambda \,, \quad \ell_2(\lambda_1, \lambda_2) = [\lambda_1, \lambda_2] \,, \quad \ell_2(\lambda, A) = [\lambda, A] \,, \quad \ell_1(A) = \mathrm{d}A \,, \quad \ell_2(A_1, A_2) = [A_1, A_2] \,, \quad \ell_1(E) = \mathrm{d}E \,, \quad \ell_2(A, E) = [A, E] \,,$$
for $\lambda \in V_0$, $A \in V_1$ and $E \in V_2$. In addition, a cyclic pairing of degree $-3$ is defined as $\langle \alpha, \beta \rangle = \int \mathrm{Tr}_{\mathfrak{g}}(\alpha \wedge \beta)$, where $\alpha$ and $\beta$ are Lie algebra valued differential forms. Since the cyclic pairing is a map of degree $-3$, it is non-vanishing only on the homogeneous subspaces $V_0 \otimes V_3$ and $V_1 \otimes V_2$. The theory is reproduced as
$$\delta_\lambda A = \ell_1(\lambda) + \ell_2(\lambda, A) = \mathrm{d}\lambda + [\lambda, A] \,,$$
$$F_A = \ell_1(A) - \tfrac{1}{2}\, \ell_2(A, A) = \mathrm{d}A - A \wedge A \,,$$
$$\mathrm{d}_A F_A = \ell_1(F_A) - \ell_2(A, F_A) = \mathrm{d}F_A - [A, F_A] \,.$$
The action can be written with the help of the cyclic pairing as
$$S(A) = \tfrac{1}{2}\, \langle A, \ell_1(A) \rangle - \tfrac{1}{3!}\, \langle A, \ell_2(A, A) \rangle = \frac{1}{2} \int \mathrm{Tr}_{\mathfrak{g}} \Big( A \wedge \mathrm{d}A - \frac{1}{3}\, A \wedge [A, A] \Big) \,.$$

L∞-algebra of 4D φ⁴ theory

This theory is not a gauge theory and therefore $V = V_1 \oplus V_2$. The corresponding non-vanishing brackets are given by
$$\ell_1(\phi) = -(\Box + m^2)\, \phi \,, \qquad \ell_3(\phi_1, \phi_2, \phi_3) = \lambda\, \phi_1 \phi_2 \phi_3 \,.$$
The cyclic pairing can be taken to be $\langle \phi_1, \phi_2 \rangle = \int \mathrm{d}^4 x\; \phi_1(x)\, \phi_2(x)$, where $\phi_1 \in V_1$ and $\phi_2 \in V_2$. This leads to the action functional
$$S(\phi) = \tfrac{1}{2}\, \langle \phi, \ell_1(\phi) \rangle - \tfrac{1}{4!}\, \langle \phi, \ell_3(\phi, \phi, \phi) \rangle = \int \mathrm{d}^4 x\, \Big( \tfrac{1}{2}\, \partial_\mu \phi\, \partial^\mu \phi - \tfrac{m^2}{2}\, \phi^2 - \tfrac{\lambda}{4!}\, \phi^4 \Big) \,.$$
The equation of motion is
$$F_\phi = \ell_1(\phi) - \tfrac{1}{2}\, \ell_2(\phi, \phi) - \tfrac{1}{3!}\, \ell_3(\phi, \phi, \phi) = -(\Box + m^2)\, \phi - \tfrac{\lambda}{3!}\, \phi^3 = 0 \,.$$

Braided gauge theory and its braided L∞-algebra

In this section we first briefly review the recently proposed description of noncommutative gauge theory as a braided gauge theory.
Then we relate this theory to the notion of a braided L∞-algebra. More details can be found in [7,9].

Braided gauge theory

Noncommutative deformations of gauge theories can be defined in different ways. One of the most studied examples is that of ★-gauge theories [12,13]. Let us again use the gauge group $G$ with corresponding hermitian Lie algebra generators $T^a$, $a = 1, \dots, \dim G$, the invariant quadratic form $\mathrm{Tr}_{\mathfrak{g}}$ and commutation relations $[T^a, T^b] = f^{ab}{}_c\, T^c$. Let the gauge field $A = A_a\, T^a$ be a Lie algebra valued one-form. An infinitesimal ★-gauge transformation is defined as
$$\tilde{\delta}^\star_\lambda A = \mathrm{d}\lambda + [\lambda \stackrel{\star}{,} A] = \mathrm{d}\lambda + \lambda \star A - A \star \lambda \,,$$
with the undeformed Leibniz rule $\Delta(\tilde{\delta}^\star_\lambda) = \tilde{\delta}^\star_\lambda \otimes \mathrm{id} + \mathrm{id} \otimes \tilde{\delta}^\star_\lambda$. However, it is easily checked that
$$[\lambda \stackrel{\star}{,} A] = \tfrac{1}{2}\, \{\lambda_a \stackrel{\star}{,} A_b\}\, [T^a, T^b] + \tfrac{1}{2}\, [\lambda_a \stackrel{\star}{,} A_b]\, \{T^a, T^b\} \,.$$
Therefore, in general these transformations do not close in the corresponding Lie algebra. To circumvent this problem, one either works with the $\mathfrak{u}(N)$ algebra in its fundamental representation, or enlarges the algebra to the universal enveloping algebra. Working with the universal enveloping algebra results in infinitely many new degrees of freedom, and one may then use the Seiberg-Witten map to express all of the new degrees of freedom in terms of the corresponding classical (commutative) degrees of freedom [14].

Let us now define a noncommutative gauge theory using the notion of a braided Lie algebra [15]. A short review of the twist formalism, the $\mathcal{R}$-matrix and the ★-products is presented in Appendix A; the legs of the $\mathcal{R}$-matrix acting on fields are denoted $\mathsf{R}_k$ and $\mathsf{R}^k$, with summation over $k$ understood. For simplicity, we consider the example of braided Chern-Simons nonabelian gauge theory in three dimensions. The gauge field $A = A_a\, T^a$ transforms as
$$\delta^\star_\lambda A = \mathrm{d}\lambda + [\lambda, A]_\star = \mathrm{d}\lambda + \lambda \star A - \mathsf{R}_k(A) \star \mathsf{R}^k(\lambda) \,. \qquad (7)$$
It is easily verified that the braided commutator closes in the Lie algebra: $[\lambda_1, \lambda_2]_\star = (\lambda_1)_a \star (\lambda_2)_b\, f^{ab}{}_c\, T^c$. The transformations (7) obey the braided Leibniz rule
$$\Delta_{\mathcal{F}}(\delta^\star_\lambda) = \delta^\star_\lambda \otimes \mathrm{id} + \mathsf{R}_k \otimes \delta^\star_{\mathsf{R}^k(\lambda)} \,, \qquad (8)$$
(note that we work with the matrix Lie algebra $\mathfrak{g}$) and close the braided algebra
$$\big[\delta^\star_{\lambda_1}, \delta^\star_{\lambda_2}\big]^\star_\circ = \delta^\star_{\lambda_1} \circ \delta^\star_{\lambda_2} - \delta^\star_{\mathsf{R}_k(\lambda_2)} \circ \delta^\star_{\mathsf{R}^k(\lambda_1)} = \delta^\star_{-[\lambda_1, \lambda_2]_\star} \,.$$
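To make the ⋆-product and $\mathcal{R}$-matrix concrete, here is a minimal sketch for the Moyal twist on $\mathbb{R}^2$, one standard choice (the paper's Appendix A is not reproduced here). For polynomials the ⋆-product series terminates, so a finite truncation order is exact.

```python
import sympy as sp

x, y, theta = sp.symbols('x y theta')

def d(expr, nx, ny):
    """Apply d^nx/dx^nx d^ny/dy^ny."""
    for _ in range(nx):
        expr = sp.diff(expr, x)
    for _ in range(ny):
        expr = sp.diff(expr, y)
    return expr

def bidiff(f, g, c, max_order=6):
    """mu . exp(c*P) acting on f (x) g, with P = d_x (x) d_y - d_y (x) d_x.
    Truncated power series; exact for polynomials of low enough degree."""
    out = sp.Integer(0)
    for n in range(max_order + 1):
        term = sum(sp.binomial(n, k) * (-1) ** k * d(f, n - k, k) * d(g, k, n - k)
                   for k in range(n + 1))
        out += c ** n / sp.factorial(n) * term
    return sp.expand(out)

def star(f, g):
    """Moyal star product f * g."""
    return bidiff(f, g, sp.I * theta / 2)

# The star commutator of the coordinates reproduces [x, y]_* = i*theta:
comm = sp.expand(star(x, y) - star(y, x))
print(comm)  # I*theta

# Braided commutativity f * g = *(Rbar(g) (x) Rbar(f)): for the Moyal twist
# R = F^{-2}, so the flipped, R-twisted product amounts to bidiff(g, f, -i*theta/2).
f, g = x**2 * y, x + y**2
print(sp.expand(star(f, g) - bidiff(g, f, -sp.I * theta / 2)))  # 0
```

The last check illustrates the braiding at work: the ⋆-product of functions is not commutative, but it is braided commutative once the $\mathcal{R}$-matrix is inserted into the flip.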
Note that in this setting we can define left and right gauge transformations, $\delta^{\star\mathrm{L}}_\lambda A = \mathrm{d}\lambda + [\lambda, A]_\star$ and $\delta^{\star\mathrm{R}}_\lambda A = \mathrm{d}\lambda - [A, \lambda]_\star$, and they are different. We will work only with left gauge transformations; analogous conclusions also hold for the right gauge transformations, see [7] for more details. The braided curvature of the gauge field is given by
$$F^\star_A = \mathrm{d}A - \tfrac{1}{2}\, [A, A]_\star \,,$$
and it transforms covariantly, $\delta^\star_\lambda F^\star_A = [\lambda, F^\star_A]_\star$. The gauge invariant action is defined as
$$S^\star(A) = \frac{1}{2} \int \mathrm{Tr}_{\mathfrak{g}} \Big( A \wedge_\star \mathrm{d}A - \frac{1}{3}\, A \wedge_\star [A, A]_\star \Big) \,.$$
Unlike in the commutative case, the braided second Noether identity is not linear in the field equations:
$$\mathrm{d}^\star_A F^\star_A = \mathrm{d}F^\star_A - \tfrac{1}{2}\, [A, F^\star_A]_\star - \tfrac{1}{2}\, [F^\star_A, A]_\star + \tfrac{1}{4}\, \big[\mathsf{R}_k(A), [\mathsf{R}^k(A), A]_\star\big]_\star = 0 \,.$$
There is an additional term on the right-hand side that depends only on the gauge field.

Braided L∞-algebra

Starting with a suitable classical L∞-algebra $\mathcal{L}$, a braided L∞-algebra $\mathcal{L}^\star$ can be constructed using Drinfel'd twist deformation techniques as in [7]. More details on the Drinfel'd twist deformation formalism can be found in [15] and [16]. Following the prescription outlined in Appendix A, we set the first bracket $\ell^\star_1 := \ell_1$ and
$$\ell^\star_n(a_1, \dots, a_n) := \ell_n(a_1 \otimes_\star \cdots \otimes_\star a_n) \qquad (9)$$
for $n \geq 2$, where $a \otimes_\star b := \mathcal{F}^{-1}(a \otimes b) = \bar{\mathsf{f}}^k(a) \otimes \bar{\mathsf{f}}_k(b)$ for $a, b \in V$. These define multilinear maps $\ell^\star_n : V^{\otimes n} \to V$ which are braided graded antisymmetric:
$$\ell^\star_n(\dots, a, b, \dots) = -(-1)^{|a|\,|b|}\, \ell^\star_n\big(\dots, \mathsf{R}_k(b), \mathsf{R}^k(a), \dots\big) \,.$$
The first and second homotopy relations are unchanged with respect to the corresponding classical homotopy relations; that is, the braided L∞-algebra $\mathcal{L}^\star$ still has underlying cochain complex $(V, \ell_1)$ and $\ell^\star_2$ is again a cochain map.
The third homotopy relation is given by

ℓ★₂(ℓ★₂(a₁, a₂), a₃) − (−1)^{|a₂||a₃|} ℓ★₂(ℓ★₂(a₁, R_k(a₃)), R^k(a₂)) + (−1)^{(|a₂|+|a₃|)|a₁|} ℓ★₂(ℓ★₂(R_k(a₂), R_l(a₃)), R^l R^k(a₁))
= −ℓ★₃(ℓ₁(a₁), a₂, a₃) − (−1)^{|a₁|} ℓ★₃(a₁, ℓ₁(a₂), a₃) − (−1)^{|a₁|+|a₂|} ℓ★₃(a₁, a₂, ℓ₁(a₃)) − ℓ₁(ℓ★₃(a₁, a₂, a₃)). (10)

We see that the non-trivial braiding now appears in this relation, which indicates that the braided graded Jacobi identity for ℓ★₂ is violated by the cochain homotopy ℓ★₃. Non-trivial braiding also appears in the higher homotopy relations [7]. The graded symmetry of the cyclic pairing ⟨−, −⟩ implies that the twisted pairing ⟨−, −⟩★ is naturally braided graded symmetric,

⟨a₁, a₂⟩★ = ⟨f̄^k(a₁), f̄_k(a₂)⟩ = (−1)^{|a₁||a₂|} ⟨R_k(a₂), R^k(a₁)⟩★. (11)

However, for applications to field theory, we have to restrict to compatible Drinfel'd twists [7] that result in a strictly graded symmetric pairing, ⟨a₁, a₂⟩★ = (−1)^{|a₁||a₂|} ⟨a₂, a₁⟩★ for all homogeneous a₁, a₂ ∈ V. In this case, ℒ★ becomes a strictly cyclic braided L∞-algebra.

Let ℒ★ = (V, {ℓ★ₙ}) be a 4-term braided L∞-algebra, obtained by twist deformation of an L∞-algebra ℒ = (V, {ℓₙ}), which organizes the symmetries and dynamics of a classical field theory. For a gauge parameter λ ∈ V₀, we define the braided gauge variation of a field A ∈ V₁ by

δ★_λ A = ℓ₁(λ) + ℓ★₂(λ, A) − ½ ℓ★₃(λ, A, A) + ··· . (12)

Braided covariant dynamics are described by the equations of motion F★_A = 0, where

F★_A = ℓ₁(A) − ½ ℓ★₂(A, A) − (1/6) ℓ★₃(A, A, A) + ··· , (13)

which transform covariantly as

δ★_λ F★_A = ℓ★₂(λ, F★_A) + ½ ( ℓ★₃(λ, F★_A, A) − ℓ★₃(λ, A, F★_A) ) + ··· , (14)

for all gauge parameters λ ∈ V₀.
For the field theories considered in this paper, the braided gauge transformations obey the off-shell closure relation in terms of the braided commutator:

[δ★_{λ₁}, δ★_{λ₂}]★∘ := δ★_{λ₁} ∘ δ★_{λ₂} − δ★_{R_k(λ₂)} ∘ δ★_{R^k(λ₁)} = δ★_{−ℓ★₂(λ₁, λ₂)}. (15)

Corresponding to the braided gauge symmetry, a suitable combination of the braided homotopy relations leads to an identity

d★_A F★_A := ℓ₁(F★_A) + ½ ( ℓ★₂(F★_A, A) − ℓ★₂(A, F★_A) ) + (1/6) ℓ₁(ℓ★₃(A, A, A)) + ··· + (1/8) ( ℓ★₂(ℓ★₂(A, A), A) − ℓ★₂(A, ℓ★₂(A, A)) ) + (1/12) ( ℓ★₂(ℓ★₃(A, A, A), A) − ℓ★₂(A, ℓ★₃(A, A, A)) ) + ··· = 0. (16)

We have already seen in the example of the braided Chern-Simons theory that, unlike the classical Noether identity (6), the braided Noether identity (16) is no longer linear in the equations of motion F★_A and contains inhomogeneous terms involving brackets of the fields themselves. This is related to the violation of the Bianchi identities in braided gauge theories [7]. In the classical limit, where R = 1 ⊗ 1, the braided Noether identity (16) reduces to the classical formula (6).

For a Lagrangian field theory, using the (strictly) cyclic inner product one can define an analogue of the action functional for the braided field theory as

S★(A) := ½ ⟨A, ℓ₁(A)⟩★ − (1/6) ⟨A, ℓ★₂(A, A)⟩★ − (1/24) ⟨A, ℓ★₃(A, A, A)⟩★ + ··· , (17)

whose variational principle yields the braided equations of motion F★_A = 0. This action functional is invariant under braided gauge transformations:

δ★_λ S★(A) = 0, (18)

for all λ ∈ V₀ and A ∈ V₁. Note that the free fields of braided field theory are unchanged from the classical field theory. Only the interaction vertices, corresponding to the higher brackets ℓ★ₙ for n ≥ 2, are modified by the braided noncommutative deformation.

Braided electrodynamics

In this section we first rewrite classical electrodynamics as an L∞-algebra. More examples of L∞-algebra descriptions of classical gauge theories coupled to matter are discussed in [17].
Following the steps from Section 3 we formulate a noncommutative generalization and obtain a braided L∞-algebra of noncommutative electrodynamics. For simplicity we will work with the Moyal-Weyl deformation and do all calculations in the coordinate basis. For a more general result we refer to [18].

L∞-algebra of classical electrodynamics

Classical electrodynamics on 4D Minkowski space-time is a U(1) gauge theory with a massive spinor field ψ and the U(1) gauge field A_μ. The infinitesimal U(1) gauge transformations are

δ_λ ψ = iλψ, δ_λ ψ̄ = −iλψ̄, δ_λ A_μ = ∂_μ λ,

with the infinitesimal gauge parameter λ(x). To write the L∞-algebra of classical electrodynamics in a more compact way we define a master field 𝒜 ∈ V₁ collecting the gauge and matter fields, and the corresponding equations-of-motion element ℱ_𝒜 ∈ V₂,

𝒜 = (A_μ, ψ, ψ̄), ℱ_𝒜 = (F_μ, F_ψ, F_ψ̄). (19)

The brackets ℓ₁ encode the gauge transformations and the linearized field equations, while the brackets ℓ₂ encode the minimal coupling of the photon to the matter current. (20) The cyclic pairing is given by integration over space-time,

⟨λ, d_𝒜 ℱ_𝒜⟩ = ∫ d⁴x λ · d_𝒜 ℱ_𝒜, ⟨𝒜, 𝒜′⟩ = ∫ d⁴x ( A_μ A′^μ + ψ̄′ ψ + ψ̄ ψ′ ). (21)

It is easy to verify that the brackets (20) and the pairing (21) reproduce the classical theory:

Gauge transformations: δ_λ 𝒜 = ℓ₁(λ) + ℓ₂(λ, 𝒜) = ( ∂_μ λ, iλψ, −iλψ̄ );

Equations of motion: ℱ_𝒜 = ℓ₁(𝒜) − ½ ℓ₂(𝒜, 𝒜) = ( ∂^ν(∂_ν A_μ − ∂_μ A_ν) − ψ̄ γ_μ ψ, (iγ^μ(∂_μ − iA_μ) − m)ψ, ... );

Action: S = ½ ⟨𝒜, ℓ₁(𝒜)⟩ − (1/3!) ⟨𝒜, ℓ₂(𝒜, 𝒜)⟩ = ∫ d⁴x ( −¼ F_{μν} F^{μν} + ψ̄ (iγ^μ(∂_μ − iA_μ) − m) ψ );

II Noether identity: d_𝒜 ℱ_𝒜 = ℓ₁(ℱ_𝒜) − ℓ₂(𝒜, ℱ_𝒜) = ∂^μ F_μ + i ψ̄ F_ψ − i F_ψ̄ ψ = 0.

L∞-algebra of braided electrodynamics

Following the steps described in the previous section, we now deform the classical L∞-algebra of electrodynamics to the braided L∞-algebra. The corresponding field theory is braided electrodynamics and it represents a noncommutative deformation of the classical theory. The deformation is introduced by the Moyal-Weyl twist (A.1) and the corresponding ★-product between functions is given by

f ★ g = μ ∘ e^{(i/2) θ^{αβ} ∂_α ⊗ ∂_β} (f ⊗ g) = f · g + (i/2) θ^{αβ} ∂_α f · ∂_β g + ... , (22)

where μ(f ⊗ g) = f · g represents the usual point-wise multiplication.
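As a quick illustration of the expansion (22), the first-order truncation of the Moyal-Weyl ★-product can be checked symbolically. The snippet below is a minimal sketch (the helper name `star` is ours, not part of any noncommutative-geometry package), verifying that the ★-commutator of two coordinate functions equals iθ at this order:

```python
import sympy as sp

x1, x2, theta = sp.symbols('x1 x2 theta', real=True)

def star(f, g):
    """Moyal-Weyl star product truncated at first order in theta,
    for functions of (x1, x2) with theta^{12} = -theta^{21} = theta."""
    correction = sp.I * theta / 2 * (sp.diff(f, x1) * sp.diff(g, x2)
                                     - sp.diff(f, x2) * sp.diff(g, x1))
    return sp.expand(f * g + correction)

# Star commutator of the coordinate functions: [x1, x2]_star = i*theta
commutator = sp.simplify(star(x1, x2) - star(x2, x1))
print(commutator)  # I*theta
```

Since the first-order correction is antisymmetric in its arguments, the ★-product of a function with itself receives no correction at this order, consistent with the vanishing of θ^{αβ} ∂_α f ∂_β f.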
The braided brackets are obtained from (9): ℓ★₁ coincides with ℓ₁, while the interaction brackets acquire ★-products and R-matrix insertions, for instance

ℓ★₂(λ, 𝒜) = ( 0, iλ ★ ψ, −i R_k(ψ̄) ★ R^k(λ) ), ℓ★₂(𝒜, ℱ★_𝒜) = ( ψ̄ ★ F_ψ + R_k(F_ψ̄) ★ R^k(ψ), ... ). (23)

Braided electrodynamics is then defined by:

Gauge transformations:

δ★_λ 𝒜 = ℓ★₁(λ) + ℓ★₂(λ, 𝒜) = ( ∂_μ λ, iλ ★ ψ, −i R_k(ψ̄) ★ R^k(λ) ); (24)

Equations of motion:

ℱ★_𝒜 = ℓ★₁(𝒜) − ½ ℓ★₂(𝒜, 𝒜) = ( ∂^ν(∂_ν A_μ − ∂_μ A_ν) − ½ ( ψ̄ ★ γ_μ ψ + R_k(ψ̄) ★ γ_μ R^k(ψ) ), (iγ^μ ∂_μ − m)ψ + ½ γ^μ ( A_μ ★ ψ + R_k(ψ) ★ R^k(A_μ) ), ... ); (25)

Action:

S★ = ½ ⟨𝒜, ℓ★₁(𝒜)⟩ − (1/3!) ⟨𝒜, ℓ★₂(𝒜, 𝒜)⟩ = ∫ d⁴x ( −¼ F_{μν} ★ F^{μν} + ψ̄ ★ (iγ^μ ∂_μ − m) ψ + ½ ( ψ̄ ★ γ^μ A_μ ★ ψ + ψ̄ ★ γ^μ R_k(ψ) ★ R^k(A_μ) ) ); (26)

II Noether identity:

d_𝒜 ℱ★_𝒜 = ℓ★₁(ℱ★_𝒜) − ℓ★₂(𝒜, ℱ★_𝒜) = ∂^μ ( ∂^ν F_{νμ} ) + ½ ∂^μ ( ψ̄ ★ γ_μ ψ + R_k(ψ̄) ★ γ_μ R^k(ψ) ) = 0. (27)

The second Noether identity, combined with the equations of motion, can be used to derive the conserved charge of the braided U(1) gauge theory. Setting ℱ★_𝒜 = 0 in (27) leads to ½ ∂^μ ( ψ̄ ★ γ_μ ψ + R_k(ψ̄) ★ γ_μ R^k(ψ) ) = 0, that is, the matter current is conserved. The corresponding conserved charge is then given by

Q★ = ½ ∫_Σ d³x ( ψ† ★ ψ + R_k(ψ†) ★ R^k(ψ) ). (28)

Although the Moyal-Weyl ★-product is compatible with the cyclic inner product (11), ∫ d⁴x R_k(f) ★ R^k(g) = ∫ d⁴x f ★ g, this cannot be applied to the integration in (28), which is only taken over the three-dimensional spatial volume Σ. Hence the second term in (28) has a non-trivial contribution to the conserved charge, and Q★ generally differs from the electromagnetic charge not only in the classical theory but also in the ★-deformed electrodynamics. Only when the time components of the twist vanish, that is θ^{0i} = 0, do we recover the usual electromagnetic charge.

A first look at quantization

The usual starting point for quantization of a field theory is the classical action.
In the case of braided electrodynamics, we can start from (26) and split it into three pieces,

S★ = ∫ d⁴x ( −¼ F_{μν} ★ F^{μν} + ψ̄ ★ (iγ^μ ∂_μ − m) ψ + ½ ( ψ̄ ★ γ^μ A_μ ★ ψ + ψ̄ ★ γ^μ R_k(ψ) ★ R^k(A_μ) ) ) = ∫ d⁴x ( L_A + L_ψ + L_int ). (29)

Since the action (29) is invariant under the braided U(1) gauge symmetry, we have to perform gauge fixing before the quantization. The ghost field that appears in this process completely decouples from the gauge field due to the abelian nature of the braided U(1) symmetry. The full action is then given by

S★ = ∫ d⁴x ( L_A + L_ψ + L_int + L_ghost ), (30)

with L_ghost = −∂^μ c̄ ★ ∂_μ c. The corresponding vertex is given in Figure 1, where the notation k × p = k_μ θ^{μν} p_ν is used. As expected, the propagators do not change compared with the undeformed case, while the vertex has a non-trivial contribution due to the noncommutative deformation. In the ★-deformed electrodynamics the noncommutative correction to the photon-fermion vertex consists only of a phase factor; in the braided electrodynamics, due to the presence of the R-matrix in the action (29), the phases combine into a cosine factor. Note that, unlike in the ★-deformed electrodynamics, here there are no three- and four-photon vertices and no photon-ghost vertex.

As an illustration, let us calculate the photon self-energy at one-loop order. The only contribution comes from the fermion bubble in Figure 2. The corresponding amplitude is

Π_{μν}(p) = (−1)(−ie)² ∫ d⁴k/(2π)⁴ Tr( γ_μ (1/(k̸ − m)) γ_ν (1/(k̸ − p̸ − m)) ) cos²( (k × p)/2 ).

After a straightforward calculation following Appendix B, we obtain

Π_{μν}(p) = −(e²/(2π²)) ( p² η_{μν} − p_μ p_ν ) ∫₀¹ dx x(1 − x) (4πμ²/Δ)^{ε/2} Γ(ε/2) + Π^{(θ)}_{μν}(p), (31)

where the noncommutative part Π^{(θ)}_{μν}(p) is built from the modified Bessel functions K_ν(|p̄| √Δ) and consists of a transversal piece proportional to ( p² η_{μν} − p_μ p_ν ) x(1 − x) and a piece proportional to Δ p̄_μ p̄_ν. Here we introduced p̄^μ = θ^{μν} p_ν, |p̄| = √(−p̄_μ p̄^μ) and Δ = m² − x(1 − x) p². The parameter μ has mass dimension 1 and is introduced for dimensional reasons. The first term of (31) is the usual commutative result.
Expanding Γ(ε/2) ≈ 2/ε − γ + O(ε) and (4πμ²/Δ)^{ε/2} ≈ 1 + (ε/2) ln(4πμ²/Δ), where we use dimensional regularization and set D = 4 − ε, we find the usual divergence in the commutative contribution,

Π_{μν}(p)|_{θ=0} ≈ −(e²/(2π²)) ( p² η_{μν} − p_μ p_ν ) [ 1/(3ε) − γ/6 + ∫₀¹ dx x(1 − x) ln( 4πμ² / (m² − x(1 − x)p²) ) ] + O(ε). (32)

The noncommutative part of the result (31) is convergent [19]. However, in the limit |p̄| → 0 (θ = 0 or p → 0) it becomes UV divergent [19]. We conclude that in this way we recover the UV/IR mixing, despite the fact that there are no non-planar diagrams. This unexpected result simply reflects our current lack of understanding of the proper quantization of field theories with braided symmetry, which is currently under development [18]. Some examples [10,11] suggest that a proper quantization of braided (gauge) field theories results in the absence of non-planar diagrams and at the same time the absence of UV/IR mixing.

Analyzing further the noncommutative contribution in (31), we find that it does not spoil the transversality of the photon. The piece proportional to ( p² η_{μν} − p_μ p_ν ) is obviously transversal. To see that the piece proportional to p̄_μ p̄_ν is also transversal, we multiply it by p^μ and find that it vanishes, since p^μ p̄_μ = p_μ θ^{μν} p_ν = 0 due to the antisymmetry of θ. As a comparison, we mention that in the ★-deformed electrodynamics there are three more diagrams contributing to Π_{μν}(p): the photon bubble, the photon tadpole and the ghost bubble (ghosts do not decouple from the photon due to the non-abelian nature of the ★-deformed electrodynamics). These three diagrams give non-trivial corrections to the commutative result, while the fermion bubble gives no noncommutative corrections. The UV/IR mixing is present [19,20].

Outlook

In this paper we applied the recently developed formalism of braided L∞-algebras to construct a noncommutative deformation of classical electrodynamics. The obtained theory, braided electrodynamics, is invariant under the braided U(1) gauge symmetry, which is still abelian.
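The divergent coefficient in (32) follows from a short expansion, using the standard small-ε behaviour of the Γ-function together with the elementary Feynman-parameter integral:

```latex
\Gamma\!\Big(\frac{\varepsilon}{2}\Big)\Big(\frac{4\pi\mu^{2}}{\Delta}\Big)^{\varepsilon/2}
  = \Big(\frac{2}{\varepsilon}-\gamma+O(\varepsilon)\Big)
    \Big(1+\frac{\varepsilon}{2}\ln\frac{4\pi\mu^{2}}{\Delta}+O(\varepsilon^{2})\Big)
  = \frac{2}{\varepsilon}-\gamma+\ln\frac{4\pi\mu^{2}}{\Delta}+O(\varepsilon),
\qquad
\int_{0}^{1}\mathrm{d}x\, x(1-x)=\frac{1}{6}.
```

The pole therefore carries the coefficient (2/ε)·(1/6) = 1/(3ε), and the Euler constant contributes −γ/6, matching the structure of (32).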
As a consequence, there are no three- and four-photon interaction vertices in the action S★ (26); the only interaction vertex is the photon-fermion vertex. This is different from the ★-gauge deformation of classical electrodynamics, which is non-abelian and, as a consequence, has three- and four-photon interaction vertices. We calculated the Feynman integrals which appear in the one-loop contribution to the vacuum polarization. Although there are no non-planar diagram contributions, we find the UV/IR mixing. This unexpected result is consistent with claims about the equivalence of various combinations of (non)commutative field theories and (un)braided statistics made in [11]. The correct quantization of theories with braided symmetries should be implemented with braided homotopy algebraic techniques, and will be addressed in [18]. The results of [10,11] suggest that there is no UV/IR mixing, at least in scalar field theories which do not possess gauge symmetries. The results reported in this preliminary investigation suggest that there are still many new physical features to be uncovered in braided quantum field theory.

A. Drinfel'd twist deformation

In this appendix we briefly introduce the basics of Drinfel'd twist deformation and the notation we use, on the example of the Moyal-Weyl deformation. More details can be found in [15,16]. In the Drinfel'd twist formalism, a deformation is introduced by a twist F ∈ Uv ⊗ Uv,

F = e^{−(i/2) θ^{αβ} ∂_α ⊗ ∂_β}, (A.1)

where θ^{αβ} is a constant antisymmetric matrix whose entries are considered to be small deformation parameters. Note that the ∂_α belong to the Lie algebra of vector fields v := Γ(TM) on a manifold M; the corresponding universal enveloping algebra is Uv. We use the following notation: F = f^k ⊗ f_k, F⁻¹ = f̄^k ⊗ f̄_k. The invertible R-matrix R ∈ Uv ⊗ Uv encodes the braiding (deformation, noncommutativity) and is induced by the twist as

R = F₂₁ F⁻¹ =: R^k ⊗ R_k, (A.3)

where F₂₁ = τ(F) = f_k ⊗ f^k is the twist with its legs swapped.
It is easy to see that the R-matrix is triangular, that is,

R₂₁ = R⁻¹ = R_k ⊗ R^k. (A.4)

To be more precise, the twist (A.1) should be written with a small deformation parameter multiplying arbitrary constant antisymmetric matrix elements; in the usual notation this parameter is absorbed into the matrix elements θ^{αβ}, which are then called the deformation parameters.

Figure 1: Photon-fermion vertex in braided electrodynamics.

Figure 2: Photon self-energy.

References

[1] O. Hohm and B. Zwiebach, L∞-algebras and field theory, Fortsch. Phys. 65 (2017) 1700014 [arXiv:1701.08824].
[2] B. Jurčo, L. Raspollini, C. Sämann and M. Wolf, L∞-algebras of classical field theories and the Batalin-Vilkovisky formalism, Fortsch. Phys. 67 (2019) 1900025 [arXiv:1809.09899].
[3] R. Blumenhagen, I. Brunner, V. G. Kupriyanov and D. Lüst, Bootstrapping noncommutative gauge theories from L∞-algebras, JHEP 05 (2018) 097 [arXiv:1803.00732].
[4] V. G. Kupriyanov, Noncommutative deformation of Chern-Simons theory, Eur. Phys. J. C 80 (2020) 42 [arXiv:1905.08753].
[5] R. Blumenhagen, M. Brinkmann, V. Kupriyanov and M. Traube, On the uniqueness of L∞ bootstrap: Quasi-isomorphisms are Seiberg-Witten maps, J. Math. Phys. 59 (2018) 123505 [arXiv:1806.10314].
[6] V. G. Kupriyanov and R. J. Szabo, Symplectic embeddings, homotopy algebras and almost Poisson gauge symmetry, J. Phys. A 55 (2022) 035201 [arXiv:2101.12618].
[7] M. Dimitrijević Ćirić, G. Giotopoulos, V. Radovanović and R. J. Szabo, Braided L∞-algebras, braided field theory and noncommutative gravity, Lett. Math. Phys. 111 (2021) 148 [arXiv:2103.08939].
[8] R. J. Szabo, The L∞-structure of noncommutative gravity, arXiv:2203.15744.
[9] G. Giotopoulos and R. J. Szabo, Braided symmetries in noncommutative field theory, arXiv:2112.00541.
[10] H. Nguyen, A. Schenkel and R. J. Szabo, Batalin-Vilkovisky quantization of fuzzy field theories, Lett. Math. Phys. 111 (2021) 149 [arXiv:2107.02532].
[11] R. Oeckl, Untwisting noncommutative R^d and the equivalence of quantum field theories, Nucl. Phys. B 581 (2000) 559-574 [arXiv:hep-th/0003018].
[12] M. R. Douglas and N. A. Nekrasov, Noncommutative field theory, Rev. Mod. Phys. 73 (2001) 977-1029 [arXiv:hep-th/0106048].
[13] R. J. Szabo, Quantum field theory on noncommutative spaces, Phys. Rept. 378 (2003) 207-299 [arXiv:hep-th/0109162].
[14] N. Seiberg and E. Witten, String theory and noncommutative geometry, JHEP 09 (1999) 032 [arXiv:hep-th/9908142].
[15] P. Aschieri, M. Dimitrijević, P. Kulish, F. Lizzi and J. Wess, Noncommutative spacetimes: Symmetries in noncommutative geometry and field theory, Lect. Notes Phys. 774 (2009) 1-199.
[16] S. Majid, Foundations of Quantum Group Theory, Cambridge University Press (1995).
[17] H. Gomez, R. Lipinski Jusinskas, C. Lopez-Arcos and A. Quintero Vélez, The L∞-structure of gauge theories with matter, JHEP 02 (2021) 093 [arXiv:2011.09528].
[18] M. Dimitrijević Ćirić, N. Konjik, V. Radovanović, R. J. Szabo and M. Toman, in preparation.
[19] F. T. Brandt, A. K. Das and J. Frenkel, General structure of the photon self-energy in noncommutative QED, Phys. Rev. D 65 (2002) 085017 [arXiv:hep-th/0112127].
[20] M. Hayakawa, Perturbative analysis on infrared aspects of noncommutative QED on R^4, Phys. Lett. B 478 (2000) 394-400 [arXiv:hep-th/9912167].
[21] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, Academic Press (2007).
arXiv:2204.06448 — M. Dimitrijević Ćirić, N. Konjik, V. Radovanović, R. J. Szabo and M. Toman, "L∞-algebra of braided electrodynamics".
Rank-1 Tensor Approximation Methods and Application to Deflation

Alex P. da Silva, Pierre Comon, Fellow, IEEE, and André L. F. de Almeida, Senior Member, IEEE

21 Aug 2015

Abstract-Because of the attractiveness of the canonical polyadic (CP) tensor decomposition in various applications, several algorithms have been designed to compute it, but efficient ones are still lacking. Iterative deflation algorithms based on successive rank-1 approximations can be used to perform this task, since the latter are rather easy to compute. We first present an algebraic rank-1 approximation method that performs better than the standard higher-order singular value decomposition (HOSVD) for three-way tensors. Second, we propose a new iterative rank-1 approximation algorithm that improves any other rank-1 approximation method. Third, we describe a probabilistic framework allowing to study the convergence of deflation CP decomposition (DCPD) algorithms based on successive rank-1 approximations. A set of computer experiments then validates theoretical results and demonstrates the efficiency of DCPD algorithms compared to other ones.

Index Terms-rank-1 approximation, canonical polyadic, tensor decomposition, iterative deflation, blind source separation.

I. INTRODUCTION

In the last years, tensors have been playing an important role in many applications such as blind source separation [1], [2], telecommunications [3], chemometrics [4], neuroscience [5], sensor array processing [6] and data mining [7]. The attractiveness behind tensors lies in the uniqueness of their canonical polyadic (CP) decomposition under mild conditions [8], which is a powerful property not shared by standard matrix-based tools. There are several methods to compute the CP tensor decomposition. We will point out here some of the most used methods among many others.
For the exact CP decomposition, [9] proposes a direct computation to decompose 2 × n × n tensors. In [10], a generalization of Sylvester's algorithm is described for decomposing symmetric tensors. In [11], one can use simultaneous matrix diagonalization by congruence, provided that the rank of the tensor is smaller than its greatest dimension. An approach based on eigenvectors of tensors is proposed in [12]. In practice, tensors are corrupted by noise, so that one needs to compute an approximate decomposition of given rank. Computing the exact CP decomposition is difficult [13], but finding a lower-rank approximation is even harder. In fact, this is an ill-posed problem in general [14]. Nevertheless, some useful algorithms have been conceived to solve locally the low-rank approximation problem. Algorithms of this kind can be found in [15], [16], [17], [18], [19]. One of the most widely used is the alternating least squares (ALS) algorithm [4], which is an iterative method that consists in conditionally updating, in an alternate way, the matrix factors composing the CP decomposition. Other gradient and Newton-based methods estimate the factor matrices all-at-once. However, all these algorithms have disappointing convergence properties [15], [20]. Another kind of algorithm is based on rank-1 deflation. It is known that the conventional deflation works for matrices but not for tensors [21]. In [22], the authors propose deflation methods that work only if the rank of the tensor is not larger than its dimensions. In [23], ALS is used to update the columns of matrix factors in a deflation procedure, though only for non-negative tensors. However, these deflation methods strongly depend on initialization, and no convergence study has been conducted. In the same vein of iterative deflation, the authors proposed in [24] a deflation-based CP decomposition (DCPD), based on successive rank-1 approximations computed by low-complexity methods.
Other rank-1 approximation methods can be used in DCPD, for instance, ALS. However, the latter exhibits an unbounded complexity, and no satisfactory convergence study is available for rank-1 approximation apart from [26], which shows results on global convergence for generic tensors, in the sense that, for any initialization, ALS converges to the same point in general. Quasi-Newton methods defined on Grassmannians are developed in [27]; however, they also exhibit an unbounded complexity to compute rank-1 approximations, since these methods need to be iterated. In [28], the best rank-1 approximation problem can be computed by means of an algebraic geometry moment method, but it is only applicable to very small tensor dimensions, since the number of variables grows exponentially when building convex relaxations. Moreover, even for small dimensions, its convergence is very slow. In [29], the authors propose an improvement of [28] based on border basis relaxation, but again the method is limited to small tensor dimensions. Semidefinite relaxations are proposed in [30] to compute rank-1 approximations; however, the convergence becomes very slow when dimensions are large. In this paper, we report mainly three contributions. First, we propose an algebraic rank-1 approximation method, namely the sequential rank-1 approximation and projection (SeROAP), which can perform better than the standard truncated higher-order singular value decomposition (THOSVD) [25]. Indeed, we prove that the rank-1 approximation performed by SeROAP is always better than the one obtained from THOSVD for three-way tensors. Moreover, for large dimensions and small orders, we show that the computational complexity of SeROAP is dramatically lower than that of THOSVD. Second, we propose an alternating eigenvalue rank-1 iterative algorithm for three-way tensors, namely CE (coupled-eigenvalue), that improves other rank-1 approximation algorithms.
We prove that if the solution obtained from some rank-1 approximation algorithm (e.g., SeROAP, THOSVD) is the input of the CE algorithm, the performed rank-1 approximation remains the same in the worst case. We also prove that convergence to a stationary point is always guaranteed. Actually, results have shown that when the initialization of the CE algorithm is close enough to the global solution, it recovers the best rank-1 approximation. Furthermore, when one dimension is much larger than the other two dimensions, the computational complexity of the CE algorithm can be lower than that of the standard ALS algorithm. Third, we perform a theoretical study on deflation in order to analyze the convergence of the DCPD algorithm. In a first stage, we show that the norm of residuals is monotonically reduced within the iterative deflation process. We also prove that the DCPD algorithm recovers the exact CP decomposition of a given tensor when residuals do not fall within a cone with an arbitrarily small volume. In a second stage, we prove that the iterative deflation method can reduce the norm of the initial residual by a factor smaller than (sin(β))^{L−1} (β being the angle of a suitable cone into which the residuals can fall) after L iterations with high probability, when tensors are distributed according to an absolutely continuous probability measure, and the probability function of residuals is continuous on some suitable angular interval. We also present a conjecture stating the existence of probability measures ensuring the convergence of the DCPD algorithm to an exact decomposition with high probability. The paper is organized as follows. In Section III, some standard iterative methods, the DCPD algorithm, and the SeROAP and THOSVD methods are described. The computational complexity per iteration is provided for each algorithm. The next two sections form the core of the paper.
In Section IV, we first prove that SeROAP performs better than THOSVD as far as rank-1 approximations of three-way tensors are concerned. Then, in a second part, the CE algorithm is presented, as well as the proof that it can refine any other rank-1 approximation method and the proof of its convergence. In Section V, among other theoretical results, we study conditions ensuring the convergence of the DCPD algorithm. Finally, in Section VI, computer results show the satisfactory performance of the proposed DCPD and rank-1 approximation methods, compared to other related algorithms, even under noisy scenarios.

II. NOTATION

The notation employed in this paper is the following. Scalar numbers are denoted by lowercase letters and vectors by boldface lowercase ones. For matrices and tensors, boldface capital and calligraphic letters are used, respectively. Plain capitals are used to denote array dimensions. Depending on the context, greek letters can denote either scalars, vectors, matrices or tensors. The symbols ⊡, ⊙, ⊠ and ⊗ denote the Hadamard, Khatri-Rao, Kronecker and tensor products, respectively, and (·)⁺ denotes matrix pseudo-inversion. The Euclidean scalar product between tensors is denoted by ⟨T, U⟩ = Σ_{i₁···i_N} T_{i₁···i_N} U*_{i₁···i_N}. The angle between two tensors U and V will refer to arccos( |⟨U, V⟩| / (‖U‖_F ‖V‖_F) ) ∈ [0, π/2]; ‖·‖_F then denotes the Frobenius norm induced by the previous scalar product. We shall also use the ℓ₂ operator norm ‖·‖₂ for matrices, which corresponds to the largest singular value. The mode-n unfolding of a tensor T is denoted T_(n), as proposed in [16]. T(:, j, k), T(i, :, k) and T(i, j, :) denote vector slices of tensor T. The operator vec stacks the columns of a matrix into a long column vector, and Unvec is its reverse operator. C^k is the set of functions having continuous kth derivatives. Finally, K is either the real or the complex field.

III.
DESCRIPTION OF ALGORITHMS AND COMPLEXITY ANALYSIS

This section presents the description of some CP decomposition algorithms. In order to support further results, the complexity per iteration is calculated here for each algorithm using Landau's notation, denoted by O, counting only multiplications, as recommended in [31]. In Sections III-A and III-B, we describe two classical algorithms known in the literature: ALS and conjugate gradient [15]. In Section III-C, the DCPD algorithm is presented. For the CP decomposition algorithms described in the following, the input parameter R denotes the rank of the output tensor. Assuming R₀ is the rank of the input tensor T, if R₀ ≤ R, then the algorithms perform an exact decomposition. On the other hand, if R₀ > R, a lower rank-R approximation is computed.

A. Alternating least squares (ALS)

The most commonly used algorithm for solving the CP decomposition is ALS [4]. The goal is to update alternately each factor matrix in each iteration by solving a least squares problem conditioned on previous updates of the other factor matrices. There is no guarantee of convergence to the global solution, nor even to a critical point. The implementation is quite simple and is detailed in Alg.1.

input : T ∈ K^{I₁×I₂×···×I_N}: input data, R: rank parameter.
output: A^(n) ∈ K^{I_n×R}, for n = 1, ..., N: factor matrices.
Initialize A^(1), ..., A^(N)
repeat
    for n = 1 to N do
        V ← A^(1)ᵀ A^(1) ⊡ ··· ⊡ A^(n−1)ᵀ A^(n−1) ⊡ A^(n+1)ᵀ A^(n+1) ⊡ ··· ⊡ A^(N)ᵀ A^(N)
        A^(n) ← T_(n) ( A^(N) ⊙ ··· ⊙ A^(n+1) ⊙ A^(n−1) ⊙ ··· ⊙ A^(1) ) V⁺
    end
until some stopping criterion is satisfied;
Algorithm 1: ALS algorithm

The complexity per iteration (repeat loop) of ALS may be calculated as follows. The computation of matrix V needs (I₁ + ··· + I_{n−1} + I_{n+1} + ··· + I_N)R² + (N − 2)R² operations (multiplications). The pseudo-inverse of V is calculated by resorting to an SVD.
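For concreteness, the updates in Alg.1 can be sketched in NumPy for a third-order tensor. This is a minimal illustration with our own helper names (not the authors' code); the mode-n unfoldings use the column ordering that matches the Khatri-Rao products below:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding T_(n): mode `mode` indexes the rows; among the
    remaining modes, the earlier one varies fastest along the columns."""
    return np.reshape(np.moveaxis(T, mode, 0), (T.shape[mode], -1), order='F')

def khatri_rao(A, B):
    """Column-wise Kronecker product: row (k*J + j) holds A[k, :] * B[j, :]."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_to_tensor(A, B, C):
    """Assemble a rank-R tensor from its three factor matrices."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def als_cp(T, R, n_iter=500, seed=0):
    """Rank-R CP approximation of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((dim, R)) for dim in T.shape)
    for _ in range(n_iter):
        # Each update solves min ||T_(n) - A^(n) (Khatri-Rao)^T|| in A^(n),
        # using V = (C^T C) * (B^T B) (Hadamard product) and its pseudo-inverse.
        A = unfold(T, 0) @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
        B = unfold(T, 1) @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
        C = unfold(T, 2) @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
    return A, B, C
```

On an exact low-rank tensor this iteration generically drives the residual toward zero, but, as discussed above, no convergence to the global solution is guaranteed in general.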
For an m×n rank-r matrix, with m ≥ n ≥ r, the explicit calculation of the diagonal, left singular, and right singular matrices requires 2mn² − 2n³/3, 5mr² − r³/3, and 5nr² − r³/3 multiplications, respectively [31]. Thus, assuming for simplicity that V is a non-singular matrix, the number of operations to calculate its pseudo-inverse is 35R³/3 + R². For updating A^(n), we need R ∏_{i=1, i≠n}^N I_i + R ∏_{j=1}^N I_j + I_n R² multiplications. These calculations must be performed for each n ∈ {1, ..., N}. Thus, the number of operations per iteration of ALS is dominated by the term composed of the product of all dimensions. Hence,

#op = O{ N R ∏_{j=1}^N I_j }.   (1)

B. Conjugate gradient (CG)

The conjugate gradient (CG) algorithm converges faster than the well-known gradient descent [15]. Here, we use the optimal step size and the Polak-Ribière rule [32] for updating the parameter β in the algorithm presented in Alg. 2. The number of operations for computing each gradient vector g_n is

R² Σ_{j=1}^N I_j + (N − 2) R² + I_n R² + R ∏_{i=1, i≠n}^N I_i + R ∏_{j=1}^N I_j.

The computation of the step size µ⋆ is dominated by the number of multiplications needed to determine all the coefficients but one of a 2N-degree polynomial generated by the enhanced line search (ELS) method, which is given by [17]:

(2^N R + O{N²}) ∏_{j=1}^N I_j.

The parameter β requires only 1 + 2R ∏_{j=1}^N I_j multiplications. Hence, the CG algorithm with ELS has a total complexity given by

#op = O{ ((2^N + 1) R + N²) ∏_{j=1}^N I_j }.   (2)

C. Deflation-based CP decomposition (DCPD)

The computation of a rank-1 approximation is the key of the DCPD algorithm. We present here two methods for computing a rank-1 approximation, referred to as THOSVD and SeROAP [24].

input: T ∈ K^{I1×I2×···×IN}: input data, R: rank parameter.
output: A^(n) ∈ K^{In×R}, for n = 1, ..., N: factor matrices
Initialize A^(1), ...
, A^(N);
p ← [vec^T(A^(1)) ··· vec^T(A^(N))]^T;
repeat
  for n = 1 to N do
    Compute the gradient g_(n) with respect to vec(A^(n));
  end
  g ← [g^T_(1) ··· g^T_(N)]^T;
  if first iteration then d ← −g; end
  Compute the optimum step size µ⋆;
  Update β according to Polak-Ribière;
  ∆p ← µ⋆ d;  p ← p + ∆p;  d ← −g + β d;
until some stopping criterion is satisfied;
Extract A^(n) from p, for n = 1, ..., N.
Algorithm 2: CG algorithm

1) Truncation of the higher order singular value decomposition (THOSVD): The algorithm is described in Alg. 3. For computing the first singular vectors, we do not need to compute the complete SVD. According to [33], the best rank-1 approximation of an m × n matrix can be computed in k steps of the Lanczos algorithm, with a complexity O{2kmn}. Hence, the accumulated complexity for all the u_n equals O{2Nk ∏_{j=1}^N I_j}. The computation of U requires ∏_{j=1}^N I_j flops. The contraction to obtain λ also needs ∏_{j=1}^N I_j operations. To sum up, the total number of operations of THOSVD is of order

O{ (2Nk + 2) ∏_{j=1}^N I_j }.

input: T ∈ K^{I1×I2×···×IN}: input data
output: X ∈ K^{I1×I2×···×IN}: rank-1 approximation
for n = 1 to N do
  u_n ← first left singular vector of T_(n);
end
U ← ⊗_{n=1}^N u_n;
λ ← ⟨T, U⟩;
X ← λ · U.
Algorithm 3: THOSVD algorithm

2) Sequential rank-1 approximation and projection (SeROAP): Without loss of generality, consider I_1 ≥ I_2 ≥ ... ≥ I_N. The SeROAP algorithm [24] goes along the lines depicted in Alg. 4. In the first for loop, we compute N − 2 right singular vectors of matrices whose size is successively reduced. For this step, the complexity is O{2k Σ_{i=1}^{N−2} (∏_{j=i}^N I_j)}. The computation of the vectors u, v and w has complexity O{(2k + 1) I_{N−1} I_N}. Next, the second for loop performs N − 2 successive projections of the rows of the matrices V_n onto the vectors w. We need here 2 Σ_{i=1}^{N−2} (∏_{j=i}^N I_j) operations.
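The two rank-1 routines just described can be sketched in NumPy for real three-way tensors. The helper names and the unfolding/vec conventions below are our own choices; for complex data the transposes would become conjugate transposes:

```python
import numpy as np

def unfold1(T):
    # mode-1 unfolding with column-major (vec-style) grouping of the other modes
    return T.reshape((T.shape[0], -1), order='F')

def thosvd_rank1(T):
    """Sketch of Alg. 3 (THOSVD) for a real 3-way tensor."""
    us = []
    for n in range(3):
        M = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
        us.append(np.linalg.svd(M)[0][:, 0])     # first left singular vector of T_(n)
    U = np.einsum('i,j,k->ijk', *us)             # u1 (x) u2 (x) u3
    lam = np.tensordot(T, U, axes=3)             # lambda = <T, U>
    return lam * U

def seroap_rank1(T):
    """Sketch of Alg. 4 (SeROAP) for a real 3-way tensor (N = 3: one reduction)."""
    I, J, K = T.shape
    V0 = unfold1(T)                              # I x JK
    v1 = np.linalg.svd(V0)[2][0]                 # first right singular vector
    V1 = v1.reshape((J, K), order='F')           # Unvec(v1)
    Uv, _, Vt = np.linalg.svd(V1)
    w = np.kron(Vt[0], Uv[:, 0])                 # w = v (x) u, unit norm
    X1 = np.outer(V0 @ w, w)                     # project the rows of V0 onto w
    return X1.reshape((I, J, K), order='F')
```

On random tensors one can also check numerically that the SeROAP residual never exceeds the THOSVD one, consistent with Proposition 4.1 in Section IV.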
input: T ∈ K^{I1×I2×···×IN}: input data
output: X ∈ K^{I1×I2×···×IN}: rank-1 approximation
V_0 ← T_(1); V ← V_0;
for n = 1 to N − 2 do
  v_n ← first right singular vector of V;
  V_n ← Unvec(v_n) ∈ K^{I_{n+1} × I_{n+2} I_{n+3} ··· I_N};
  V ← V_n;
end
(u, v) ← first left and right singular vectors of V;
w ← v* ⊠ u;
for n = N − 2 to 1 do
  X_(n) ← (V_{n−1} w) w^H;
  w ← vec(X_(n));
end
X_(1) is the mode-1 unfolding of X.
Algorithm 4: SeROAP algorithm

For large dimensions and small N, the complexity of SeROAP is dominated by

O{ (2k + 2) ∏_{j=1}^N I_j },

which can be significantly smaller than that of THOSVD. An example of a typical execution¹ is given in the Appendix.

3) Description and complexity of DCPD: DCPD is an iterative deflation algorithm [24] that computes the CP decomposition of real or complex tensors. As summarized in Alg. 5, it proceeds as follows. In the first for loop, we compute the rank-1 tensors X[1, 1], ..., X[R, 1] by successive rank-1 approximations and subtractions. Since the rank of a tensor does not decrease with subtractions in general [21], a residual E[R, 1] is then produced. In the iterative process (repeat loop), a new rank-1 component is generated from the sum of the previous residual and X[1, 1], and a new residual E[1, 2] is produced by the subtraction Y[1, 2] − X[1, 1]. The tensor Y[1, 2] is updated within the if-else condition. By applying the same procedure to the other components, we update all R rank-1 tensors, so that another residual E[R, 2] is generated at the end of the second for loop. The second loop keeps executing until some stopping criterion is satisfied, and all rank-1 components of T can be recovered. The complexity per iteration is dominated by the rank-1 approximation function φ, which is computed R times. Therefore, the complexity of DCPD is

#op = O{ (2Nk + 2) R ∏_{j=1}^N I_j },   (3)

input: T ∈ K^{I1×I2×···×IN}: input data, R: rank parameter.
φ: an algorithm computing a rank-1 approximation
output: X_r ∈ K^{I1×I2×···×IN}, for r = 1, ..., R: rank-1 components
Y[1, 1] ← T;
for r = 1 to R do
  X[r, 1] ← φ(Y[r, 1]);
  if r < R then Y[r + 1, 1] ← Y[r, 1] − X[r, 1];
  else E[R, 1] ← Y[R, 1] − X[R, 1];
  end
end
l ← 2;
repeat
  for r = 1 to R do
    if r > 1 then Y[r, l] ← X[r, l − 1] + E[r − 1, l];
    else Y[1, l] ← X[1, l − 1] + E[R, l − 1];
    end
    X[r, l] ← φ(Y[r, l]);
    E[r, l] ← Y[r, l] − X[r, l];
  end
  l ← l + 1;
until some stopping criterion is satisfied;
foreach r ∈ [1, ..., R] do X_r ← X[r, l]; end
Algorithm 5: DCPD algorithm

for the THOSVD algorithm, and

#op = O{ (2k + 2) R ∏_{j=1}^N I_j },   (4)

for the SeROAP algorithm.

IV. RANK-1 APPROXIMATION

This section is divided into two parts. The first one presents a more detailed study of the THOSVD and SeROAP rank-1 approximation methods. For three-way tensors, we show that SeROAP is a better choice than THOSVD, because the former delivers a better rank-1 approximation, which ensures a more probable monotonic decrease of the residual E[R, l] within the DCPD algorithm [24]. A new rank-1 approximation algorithm is described in the second part, and it is proved to refine the result of any other rank-1 approximation method.

A. THOSVD vs SeROAP

Since we do not have at our disposal an efficient method to compute the best rank-1 approximation of a tensor quickly, we should compute a suboptimal rank-1 approximation in a tractable way. The THOSVD and SeROAP algorithms can perform this task, as presented in Section III. The question that arises is: which algorithm performs better? Proposition 4.1 below shows that SeROAP performs better than THOSVD for three-way tensors. For simplicity, the notation of the unfolding matrices carries no indices in this section.

Proposition 4.1: Let T ∈ K^{I1×I2×I3} be a 3-order tensor. Let also φ_TH(T) and φ_Se(T) be the rank-1 approximations delivered by the THOSVD and SeROAP algorithms, respectively.
Then the inequality ‖T − φ_Se(T)‖ ≤ ‖T − φ_TH(T)‖ holds.

Proof: Let T, T_φ^TH and T_φ^Se be some mode unfoldings of the tensors T, φ_TH(T) and φ_Se(T), respectively. Assuming the mode-1 unfolding for the THOSVD algorithm, we have ‖T − T_φ^TH‖² = ‖T − λ u_1 (u_3 ⊠ u_2)^T‖², where λ, u_1, u_2, and u_3 are obtained from Alg. 3. Since λ is the contraction of T on U, we plug it into the previous equation and obtain, after simplifications, ‖T − T_φ^TH‖² = ‖T‖² − |λ|², with λ = u_1^H T (u_3* ⊠ u_2*). Yet, u_1^H T = ‖T‖_2 v_1^H, since (u_1, v_1, ‖T‖_2) is the dominant singular triplet of the matrix T. Hence |λ|² = ‖T‖_2² |v_1^H (u_3* ⊠ u_2*)|². On the other hand, for SeROAP we have ‖T − T_φ^Se‖² = ‖T − T w w^H‖² = ‖T‖_F² + ‖T w‖_2² ‖w‖_2² − 2 ‖T w‖_2² = ‖T‖² − w^H T^H T w, where w is the unit-norm vector computed just before the second for loop of Alg. 4 for 3-order tensors. The eigenvalue decomposition of T^H T can be expressed as T^H T = ‖T‖_2² v_1 v_1^H + S, where S is a positive semidefinite matrix. Hence, we have w^H T^H T w = ‖T‖_2² |v_1^H w|² + c, with c ≥ 0. To complete the proof of the proposition, we just need to show that |v_1^H w|² ≥ |v_1^H (u_3* ⊠ u_2*)|², or equivalently that |⟨w, v_1⟩| ≥ |⟨u_3* ⊠ u_2*, v_1⟩|. This is true because w is, by construction (cf. Alg. 4), the closest vector to v_1 among all vectors of the form a ⊠ b where a and b have unit norm.

B. Coupled-eigenvalue rank-1 approximation

This section presents an alternating eigenvalue method for three-way tensors that can improve local solutions obtained from any other rank-1 approximation method (e.g. the SeROAP and THOSVD algorithms). Actually, simulations have shown that the global solution is always attained if the initial approximation is close enough. Let t_{i3} be the vectorization of slice i_3, 1 ≤ i_3 ≤ I_3 (we have chosen the third mode), of the tensor T. The rank-1 approximation problem can be stated as

[α_opt, x_opt, y_opt] = arg min_{α,x,y} Υ(x, y, α)  s.t.
‖x‖_2 = 1, ‖y‖_2 = 1,   (5)

with Υ(x, y, α) = Σ_{i3=1}^{I3} ‖t_{i3} − α_{i3} (y* ⊠ x)‖_2² and α = [α_1 ··· α_{I3}]. Plugging the optimal value of α_{i3}, namely (y* ⊠ x)^H t_{i3}, into Problem (5), we can rewrite it as the following equivalent maximization problem:

[x_opt, y_opt] = arg max_{x,y} z^H M z  s.t. z = y* ⊠ x, ‖y‖_2 = 1, ‖x‖_2 = 1,   (6)

where M = Σ_{i3=1}^{I3} t_{i3} t_{i3}^H. Now, we decompose M as a sum of Kronecker products. This can be done by reshaping M and applying an SVD [34]. Thus, M can be written as M = Σ_{r=1}^{R'} Q^(r) ⊠ P^(r), with Hermitian matrices P^(r) ∈ K^{I1×I1} and Q^(r) ∈ K^{I2×I2}. R' is the Kronecker rank of M, satisfying R' ≤ min(I_1², I_2²). Substituting M into Problem (6), we have:

[x_opt, y_opt] = arg max_{x,y} Σ_{r=1}^{R'} (y^H Q^(r)* y)(x^H P^(r) x)  s.t. ‖y‖_2 = 1, ‖x‖_2 = 1.   (7)

Let L be the Lagrangian function given by

L = − Σ_{r=1}^{R'} (y^H Q^(r)* y)(x^H P^(r) x) + η_1 (‖y‖_2² − 1) + η_2 (‖x‖_2² − 1),

where η_1 and η_2 are the Lagrange multipliers. By computing the critical points, we obtain a pair of coupled eigenvalue problems

[ y^H A^(1,1) y  ···  y^H A^(1,I1) y ;  ⋮  ⋱  ⋮ ;  y^H A^(I1,1) y  ···  y^H A^(I1,I1) y ] x = λ x   (8)

and

[ x^H B^(1,1) x  ···  x^H B^(1,I2) x ;  ⋮  ⋱  ⋮ ;  x^H B^(I2,1) x  ···  x^H B^(I2,I2) x ] y = λ y,   (9)

where λ = η_1 = η_2, A^(m,n) = Σ_{r=1}^{R'} P^(r)_{mn} Q^(r)*, and B^(k,l) = Σ_{r=1}^{R'} Q^(r)*_{kl} P^(r), with 1 ≤ m, n ≤ I_1 and 1 ≤ k, l ≤ I_2. The coupled-eigenvalue algorithm is presented in Alg. 6. We can initialize the algorithm by computing x_0 and y_0 from the rank-1 approximation obtained with SeROAP, THOSVD or any other rank-1 approximation method. The complexity per iteration of the CE algorithm is dominated by the construction of the matrices on the left-hand sides of (8) and (9), which is of order O{min(I_1² I_2², I_1² I_3², I_2² I_3²)}. Suppose I_3 is the largest dimension.
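The decomposition M = Σ_r Q^(r) ⊠ P^(r) is obtained by reshaping M and applying an SVD [34]. A small sketch of that reshaping step follows; the index conventions are our own, and the Hermitian structure of the factors is not enforced in this generic version:

```python
import numpy as np

def kron_decompose(M, p, q):
    """Write the (p*q) x (p*q) matrix M as sum_r kron(Q_r, P_r),
    with Q_r of size p x p and P_r of size q x q, via reshaping + SVD."""
    # group indices so that rows enumerate the entries of Q and columns those of P
    Rm = M.reshape(p, q, p, q).transpose(0, 2, 1, 3).reshape(p * p, q * q)
    U, s, Vt = np.linalg.svd(Rm, full_matrices=False)
    Qs = [(s[r] * U[:, r]).reshape(p, p) for r in range(len(s))]
    Ps = [Vt[r].reshape(q, q) for r in range(len(s))]
    return Qs, Ps

# sanity check: M built from 2 Kronecker terms is reconstructed exactly
rng = np.random.default_rng(0)
p, q = 3, 2
M = sum(np.kron(rng.standard_normal((p, p)), rng.standard_normal((q, q)))
        for _ in range(2))
Qs, Ps = kron_decompose(M, p, q)
M_rec = sum(np.kron(Q, P) for Q, P in zip(Qs, Ps))
```

The Kronecker rank R' is then the number of nonzero singular values of the rearranged matrix, which explains the bound R' ≤ min(I_1², I_2²).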
If I_3 ≫ I_1 I_2, then the CE algorithm has an advantage in complexity over the ALS algorithm. Indeed, the complexity per iteration of ALS for a rank-1 approximation is of order O(3 I_1 I_2 I_3), which is higher than that of the CE algorithm in this case. Notice, however, that a proper comparison makes sense only if the same initialization is employed in both algorithms.

input: φ(T): rank-1 approximation
output: φ⋆(T): improved rank-1 approximation
Compute x_0 from φ(T) as x_0 ← φ(T)(:, i_2, i_3) / ‖φ(T)(:, i_2, i_3)‖_2, for some i_2, i_3;
t ← 0;
repeat
  Set x = x_t in eigenvalue problem (9) and take y_{t+1} as the eigenvector whose eigenvalue is maximum;
  Set y = y_{t+1} in eigenvalue problem (8) and take x_{t+1} as the eigenvector whose eigenvalue is maximum;
  t ← t + 1;
until some stopping criterion is satisfied;
for i_3 = 1 to I_3 do α⋆_{i3} ← ⟨t_{i3}, y_t* ⊠ x_t⟩; end
φ⋆(T) ← x_t ⊗ y_t ⊗ α⋆.
Algorithm 6: CE rank-1 approximation. Above, we chose to start with x, but we could equivalently have started with y.

The following proposition shows that the CE algorithm improves (in the worst case, the solution remains the same) any rank-1 approximation algorithm.

Proposition 4.2: Let φ(T) be a rank-1 approximation of a three-way tensor T. If φ(T) is the input of the CE algorithm and φ⋆(T) its output, then the inequality ‖T − φ⋆(T)‖ ≤ ‖T − φ(T)‖ holds.

Proof: Plugging the expression of A^(m,n) into equation (8), we obtain, after simplifications, λ = Σ_{r=1}^{R'} (y^H Q^(r)* y)(x^H P^(r) x), which is the objective function of Problem (7). The same result is obtained when the matrix B^(k,l) is plugged into equation (9). Now, ∀t ≥ 1, let λ_t^(x) and λ_t^(y) be the maximal eigenvalues whose eigenvectors are x_t and y_t, respectively. The eigenpair (λ_{t+1}^(y), y_{t+1}), obtained by solving equation (9) with x = x_t, is a solution of the maximization problem λ_{t+1}^(y) = max_{‖y‖_2=1} Σ_{r=1}^{R'} (y^H Q^(r)* y)(x_t^H P^(r) x_t).
Also, the eigenpair (λ_{t+1}^(x), x_{t+1}), obtained by setting y = y_{t+1} in equation (8), is a solution of the problem λ_{t+1}^(x) = max_{‖x‖_2=1} Σ_{r=1}^{R'} (y_{t+1}^H Q^(r)* y_{t+1})(x^H P^(r) x). Since max_{‖x‖_2=1} Σ_{r=1}^{R'} (y_{t+1}^H Q^(r)* y_{t+1})(x^H P^(r) x) = Σ_{r=1}^{R'} (y_{t+1}^H Q^(r)* y_{t+1})(x_{t+1}^H P^(r) x_{t+1}), it follows in particular that Σ_{r=1}^{R'} (y_{t+1}^H Q^(r)* y_{t+1})(x_t^H P^(r) x_t) ≤ Σ_{r=1}^{R'} (y_{t+1}^H Q^(r)* y_{t+1})(x_{t+1}^H P^(r) x_{t+1}), which implies that λ_{t+1}^(y) ≤ λ_{t+1}^(x). Similarly, plugging x_{t+1} into equation (9), we can conclude that λ_{t+1}^(x) ≤ λ_{t+2}^(y), for the reason that Σ_{r=1}^{R'} (y_{t+1}^H Q^(r)* y_{t+1})(x_{t+1}^H P^(r) x_{t+1}) ≤ Σ_{r=1}^{R'} (y_{t+2}^H Q^(r)* y_{t+2})(x_{t+1}^H P^(r) x_{t+1}). Hence, the sequence {Υ_t}_{t∈N} = {..., λ_t^(y), λ_{t+1}^(x), λ_{t+1}^(y), λ_{t+2}^(x), ...} is monotonically non-decreasing. The same conclusion would be reached if we began by plugging x_t into equation (9). Now, let φ(T) = x_0 ⊗ y_0 ⊗ α_0 be a rank-1 approximation obtained with any other method. Assume x_0 and y_0 are unit vectors, and define λ_0 = Σ_{r=1}^{R'} (y_0^H Q^(r)* y_0)(x_0^H P^(r) x_0). By setting x = x_0 in equation (9) in the first iteration (a similar operation would be possible for y_0 in equation (8)), we clearly have λ_0 ≤ λ_1^(y) ≤ λ_{tmax}^(y), where t_max is the iteration at which the stopping criterion is satisfied. Since the optimization problems (5) and (7) are equivalent, the α⋆_{i3}, 1 ≤ i_3 ≤ I_3, can be obtained by performing the scalar product between the vectors t_{i3} and y*_{tmax} ⊠ x_{tmax} (which is equivalent to contracting the tensor T on x_{tmax} and y*_{tmax}). Hence, the tensor φ⋆(T) defined below is a better rank-1 approximation than φ(T).

Proof of Proposition 4.3 (stated among the displaced passages further below): In the proof of Proposition 4.2, we have shown that {Υ_t} is monotonically non-decreasing for any input φ(T). Let p⋆ be the maximum of the objective of (7). Since the best rank-1 approximation problem always has a solution, p⋆ < ∞. But max_x Υ(x, y_{t+1}, α) ≤ max_{x,y} Υ(x, y, α), which implies that {Υ_t}_{t∈N} is bounded above by p⋆.
Since {Υ_t} is a real non-decreasing sequence bounded above, it converges to a limit Υ⋆, with Υ⋆ ≤ p⋆. The tensor φ⋆(T) = x_{tmax} ⊗ y_{tmax} ⊗ α⋆ is a better rank-1 approximation of T than φ(T), implying ‖T − φ⋆(T)‖ ≤ ‖T − φ(T)‖.

V. DEFLATION

In [24], we proved that for a rank-R tensor, the normalized residual (‖E[R, l]‖)_{l∈N>0} is a monotonically decreasing sequence when the best rank-1 approximation is assumed within DCPD. In this section, a more thorough theoretical analysis and new results are presented. Based on a geometric approach, we sketch an analysis of the convergence of the DCPD algorithm, including a conjecture that it converges to an exact decomposition with high probability when tensors within T(R) = {T ∈ T : rank{T} ≤ R} are distributed according to absolutely continuous probability measures. First, let us take a closer look at the 2D geometric interpretation of the DCPD algorithm, depicted in Figure 1. Before stating some theoretical results on the DCPD algorithm, we present a fundamental lemma related to the error in rank-1 approximations of tensors of the form X + E, where X is a rank-1 tensor and E any other tensor, both with entries in some field K.

Lemma 5.1: Let X be a rank-1 tensor and φ the best rank-1 approximation operator. For any tensor E, ‖X + E − φ(X + E)‖ ≤ sin(γ) ‖E‖, where γ denotes the angle between E and X.

Proof: Let P_X(X + E) be the orthogonal projection of X + E onto span(X). Because φ(X + E) is a best rank-1 approximation of X + E, P_X(X + E) cannot be a strictly better rank-1 approximation than φ(X + E). Thus, ‖X + E − φ(X + E)‖ ≤ ‖X + E − P_X(X + E)‖. On the other hand, X + E − P_X(X + E) ⊥ X. Hence, we have ‖X + E − P_X(X + E)‖ = sin(γ) ‖E‖ by basic trigonometry. This concludes the proof.

The following results for the DCPD algorithm stem from the previous lemma. From Lemma 5.1, we know that ‖E[1, l]‖ ≤ sin(γ[1, l]) ‖E[R, l − 1]‖. Thus, it follows that ‖E[R, l]‖ ≤ c_l ‖E[R, l − 1]‖.
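Lemma 5.1 can be checked numerically in the order-2 case, where the best rank-1 approximation φ is given by the truncated SVD; the setup below is our own illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.outer(rng.standard_normal(6), rng.standard_normal(5))  # rank-1 tensor "X"
E = rng.standard_normal((6, 5))                               # arbitrary "E"

# best rank-1 approximation of X + E via the dominant singular triplet
U, s, Vt = np.linalg.svd(X + E)
phi = s[0] * np.outer(U[:, 0], Vt[0])

# angle gamma between E and X, with <A, B> the Euclidean scalar product
cosg = abs(np.sum(E * X)) / (np.linalg.norm(E) * np.linalg.norm(X))
sing = np.sqrt(max(0.0, 1.0 - cosg**2))

resid = np.linalg.norm(X + E - phi)
bound = sing * np.linalg.norm(E)   # Lemma 5.1: resid <= sin(gamma) * ||E||
```

The same inequality holds for higher-order tensors, but there the best rank-1 approximation is no longer available in closed form.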
Notice that the same result as Proposition 4.4 in [24] can be deduced from Corollary 5.3, since 0 ≤ c_l ≤ 1 for every iteration l, which implies the monotonic decrease of the sequence {‖E[R, l]‖}_{l∈N>0}. Lemma 5.4 shows that the DCPD algorithm might not improve the estimation of the rank-1 components anymore for l ≥ l_0 > 1, and this may occur not only in the presence of noise. Actually, even in an almost orthogonal case, c_l ≈ 1, ‖E[R, l]‖ may tend to a stationary non-zero value as l increases. However, the DCPD algorithm converges to an exact decomposition if c_l ≤ C for all l > 1, for some constant C < 1. This will subsequently be detailed by means of a geometric approach. Figure 1 can also be seen as the representation of an n-sphere of dimension n = I_1 I_2 ··· I_N − 1 in the space K^{n+1}. β is half the white cone angle, defined in [0, π/2]. The direction of the rank-1 tensor X[r, l − 1] defines the axis of the white cone and varies with r and l. Under a condition on β, we can state an important proposition ensuring the convergence of the DCPD algorithm.

Proposition 5.5: Let T be a tensor such that rank{T} ≤ R. An exact decomposition is recovered by the DCPD algorithm if and only if there exists for every (r, l) a half cone of angle β (in white in Fig. 1), 0 ≤ β < π/2, such that β ≥ max_{l>1} min_{1≤r≤R} γ[r, l].

Proof of the (⇒) part (the (⇐) part appears among the displaced passages below): Let l_0 be some iteration such that l_0 > 1. Without loss of generality, assume ‖E[R, l]‖ = 0 for l ≥ l_0 (l_0 can be arbitrarily large). Then (‖E[R, l]‖)_{l∈N>0} is a strictly monotonically decreasing sequence for 1 < l < l_0, otherwise the algorithm would converge to a nonzero constant at some iteration smaller than l_0. Hence, for every (r, l) we can choose β, 0 ≤ β < π/2, such that β ≥ max_{l>1} min_{1≤r≤R} γ[r, l], and the proof is complete. As a conclusion, if for a given iteration l all the tensors E[r, l], 1 ≤ r ≤ R, fall within the gray volume depicted in Fig. 1 (the complement of the white cone), then the sequence ‖E[r, l]‖ does not tend to zero.
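The deflation loop whose residuals (‖E[R, l]‖) are analyzed above can be sketched as follows, with a simple truncated-HOSVD rank-1 routine standing in for φ; all names and sizes are our own illustrative choices:

```python
import numpy as np

def rank1(T):
    # simple truncated-HOSVD rank-1 approximation, used as the map phi
    us = [np.linalg.svd(np.moveaxis(T, n, 0).reshape(T.shape[n], -1))[0][:, 0]
          for n in range(3)]
    U = np.einsum('i,j,k->ijk', *us)
    return np.tensordot(T, U, axes=3) * U

def dcpd(T, R, n_iter=50):
    """Sketch of the deflation scheme of Alg. 5 for a 3-way tensor."""
    X, Y = [], T.copy()
    for r in range(R):                  # first pass: rank-1 approx + subtraction
        X.append(rank1(Y))
        Y = Y - X[-1]
    E = Y                               # residual E[R, 1]
    res = [np.linalg.norm(E)]
    for _ in range(n_iter):             # repeat loop: recycle the residual
        for r in range(R):
            Yr = X[r] + E               # Y[r, l] = X[r, l-1] + previous residual
            X[r] = rank1(Yr)
            E = Yr - X[r]               # E[r, l]
        res.append(np.linalg.norm(E))   # ||E[R, l]||
    return X, res

# for an exactly rank-1 tensor and R = 1, the residual vanishes immediately
rng = np.random.default_rng(0)
T = np.einsum('i,j,k->ijk', *[rng.standard_normal(4) for _ in range(3)])
X, res = dcpd(T, R=1, n_iter=10)
```

With a suboptimal φ such as this one, the monotonic decrease of `res` is not guaranteed, which is precisely the situation discussed in Lemma 5.4 and Proposition 5.5.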
Even if this gray volume can be made arbitrarily small, it is not of zero measure. So the best we can do is to prove almost sure convergence of the DCPD algorithm to the exact decomposition under some probabilistic conditions.

Lemma 5.6: If tensors T are distributed within T(R) according to an absolutely continuous probability measure, then the ‖E[r, l]‖ are absolutely continuous random variables.

Proof: Let D = [I_1 ··· I_N] be a specific size of N-order tensors, and let T(R)_D = {T ∈ K^{I1×···×IN} : T ⊂ T(R)}. Because T(R) ⊃ T(R)_D, any tensor T within T(R)_D is also distributed according to an absolutely continuous probability measure. Via the DCPD algorithm, each rank-1 component obtained in the successive deflations is also in T(R)_D. Hence, since sums (and subtractions) of continuous random variables remain continuous, the residuals E[r, l] are also absolutely continuous random variables. Since the norm is a C^0 function in finite dimension, ‖E[r, l]‖ is also absolutely continuous.

For the next developments, let Z_l = ‖E[R, l]‖ and define the following probability for some iteration L > 1:

F_L[β] = P( Z_L ≤ sin(β) Z_{L−1} ≤ ... ≤ (sin(β))^{L−1} Z_1 ).

F_L[β] can be viewed as the probability that the residuals fall within at least one of the R white cones at every iteration l ≤ L. The following proposition ensures a reduction of Z_1 by a factor smaller than (sin(β))^{L−1} after L iterations with high probability, if a continuity condition on F_L[β] is assumed.

Proposition 5.7: Let L be fixed. If ∃β_0 ∈ [0, π/2) such that F_L[β] is continuous on [β_0, π/2], then ∀ε ∈ (0, 1], ∃β ∈ [β_0, π/2) such that F_L[β] > 1 − ε.

Proof: Since F_L[π/2] = 1 and F_L[β] is continuous on [β_0, π/2], the proof follows directly from the intermediate value theorem.
Although the Z_l, 1 ≤ l ≤ L, are absolutely continuous random variables and the g_m(β) = sin^m(β) are continuous functions for all m ≥ 0, the continuity of F_L[β] in β is not guaranteed, due to the dependence among the random variables Z_1, ..., Z_L (Z_l depends on Z_{l−1}). For example, for L = 2 and Z_1 = 2 Z_2 with probability 1, it is easy to check that F_2[β] is not continuous at β = π/2. Indeed, lim_{β→π/2−} F_2[β] = 0, whereas F_2[π/2] = 1. The following conjecture claims that there exist absolutely continuous distributions of tensors in T(R) such that the probability F_l[β(l)] tends to 1 for some function β(l) as l → ∞, while at the same time the norm of the residuals tends to 0, which is suitable for the convergence of the DCPD algorithm to an exact CP decomposition.

Conjecture 5.8: There exists at least one absolutely continuous probability measure µ for tensors T within T(R) for which the following holds: (i) ∀ε ∈ (0, 1] and ∀l > 1, ∃β ∈ [0, π/2) such that F_l[β] > 1 − ε; (ii) ∀l > 1, ∃β ∈ [0, π/2) such that (sin(β))^{l−1} is a strictly monotonically decreasing sequence converging to 0.

Subsequent computer simulations support the existence of a uniform probability measure µ for the entries of tensors within T(R) such that F_L[β] ≈ 1 for large values of L, and ‖E[R, L]‖ ≈ 0. This reinforces our conjecture.

VI. COMPUTER RESULTS

A. Comparison between THOSVD and SeROAP

In this section, we compare the performance of the rank-1 approximation methods SeROAP and THOSVD for different three-way tensor scenarios. For each case, 300 complex tensors whose real and imaginary parts are uniformly distributed in [−1, 1] were generated. Figure 2 presents the difference between the Frobenius norms of the residuals, computed as ∆φ = ‖T − φ_TH(T)‖ − ‖T − φ_Se(T)‖. We note that ∆φ > 0 in all scenarios, as predicted by Proposition 4.1.

B.
Performance of rank-1 approximations

The tables below compare different rank-1 approximation methods with respect to the best rank-1 approximation, which was obtained from the algebraic-geometric moment method described in [28]. Because the latter is infeasible to compute for high dimensions, we have focused on 2×2×2 and 3×3×3 real tensors. The two iterative methods, namely ALS and CE, are initialized with the result obtained from SeROAP. A sample of 200 real tensors uniformly distributed in [−1, 1] was generated for each of the two scenarios. For comparison, we consider the MSE metric given by

MSE = (1/200) Σ_{n=1}^{200} (∆φ_m^(n))²,

where ∆φ_m^(n) = ‖T^(n) − φ_m(T^(n))‖ − ‖T^(n) − φ⋆(T^(n))‖, and φ_m(T^(n)) and φ⋆(T^(n)) are the rank-1 approximation delivered by algorithm m and the best rank-1 approximation of T^(n), respectively. The results show that SeROAP yields a better rank-1 approximation than THOSVD, as expected. For 2 × 2 × 2 tensors, CE attains the best rank-1 approximation. In both scenarios, ALS and CE converge in approximately the same number of iterations.

Figure 3 presents the percentage of successful decompositions of rank-3 tensors for the algorithms ALS, CG with ELS, and DCPD. The ALS and CG algorithms were randomly initialized. We have simulated DCPD with the algebraic methods THOSVD and SeROAP. Noise is not considered in this case, so that the performance is evaluated for the computation of an exact decomposition of 300 tensors. We consider that a decomposition succeeds if the residual satisfies ‖E‖ ≤ 10^{−6}. We note that the DCPD algorithm combined with SeROAP always presents a better performance than the standard ALS algorithm. Moreover, for higher dimensions, the percentage of successful decompositions is almost 100% for DCPD-SeROAP, which is a remarkable result, bearing in mind that the objective is multimodal. Again, the ALS and CG algorithms were randomly initialized. Additive Gaussian noise is considered in our simulations.

C.
Percentage of successful decompositions

D. Convergence rate

We note in Figure 4 that DCPD-SeROAP converges more quickly than the other algorithms.

E. Residual vs rank

Now, we compare the algorithms for two SNRs by varying the rank of 8 × 8 × 8 tensors. Again, we note the better performance of DCPD-SeROAP over the competing algorithms. Figure 5 also shows that the combination of DCPD and THOSVD yields the worst results. This is expected, because the rank-1 approximation obtained by THOSVD is not good enough for DCPD to reach a small residual.

VII. CONCLUSION

In this paper, we presented some CP tensor decomposition algorithms and provided an analysis of their computational complexities. Our contributions included: (i) a new algebraic rank-1 method, namely SeROAP, performing better than THOSVD for three-way tensors; (ii) an iterative rank-1 approximation algorithm, namely CE, that refines any rank-1 approximation method, such as SeROAP and THOSVD, and converges in very few iterations; and (iii) an analysis of the convergence of the DCPD algorithm from a geometric point of view. Several computer experiments have confirmed the theoretical results.

APPENDIX A: COMPUTATION OF A RANK-1 APPROXIMATION USING SEROAP

We present an example of how the SeROAP algorithm computes a rank-1 approximation of a given tensor. Let T be a 2 × 2 × 2 × 2 complex tensor whose mode-1 unfolding is given by

T_(1) = [ 1  −1  0  1  3  i  0  1 ;
          0  1  −i  1  1  0  2  −2i ].

In the first for loop of the SeROAP algorithm, v_1 is the dominant right singular vector of V = V_0 = T_(1). Notice that w can be viewed as the vectorization of a rank-1 matrix. At the end of the first iteration of the second for loop, w is updated so that it becomes the vectorization of a rank-1 three-way tensor. This is achieved by projecting each row of the matrix V_1 onto w, by means of V_1 w, followed by the multiplication with w^H.
In the next iteration, we perform X_(1) = (V_0 w) w^H, which is an unfolding of a 4-order rank-1 tensor. Actually, at every iteration of the second for loop, X_(n) is updated to an unfolding of a rank-1 tensor of order N − n + 1. Indeed, X_(1) is the mode-1 unfolding of the rank-1 approximation of T computed by the SeROAP algorithm.

ACKNOWLEDGMENT

This work has been funded by the European Research Council under the 7th Framework Program FP7/2007-2013 Grant Agreement no. 320594, and by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) under the program Ciências sem Fronteiras.

Displaced statements, proofs and figure captions:

Proposition 4.3: For any input φ(T), the CE algorithm converges to a stationary point.

Fig. 1. Visualization of the residual in an n-sphere for some iteration l of the DCPD algorithm. The figure depicts the [r, l] iteration for r > 1, so that γ[r, l] is the angle between the tensors E[r − 1, l] and X[1, l − 1]. For r = 1, the residual E[r − 1, l] can simply be replaced with E[R, l − 1] in the figure, and γ[1, l] is then defined from E[R, l − 1] and X[1, l − 1].

Corollary 5.2: The inequality ‖E[r, l]‖ ≤ sin(γ[r, l]) ‖E[r − 1, l]‖ holds for any 1 < r ≤ R.
Proof: By replacing X, E and γ in Lemma 5.1 with X[r, l − 1], E[r − 1, l] and γ[r, l], respectively, the result follows directly.

Corollary 5.3: The inequality ‖E[R, l]‖ ≤ c_l ‖E[R, l − 1]‖ holds.
Proof: By applying R − 1 times the result of Corollary 5.2.

Lemma 5.4: If ‖E[R, l]‖ = ‖E[R, l − 1]‖, then c_l = 1.
Proof: Because c_l ≤ 1 and ‖E[R, l]‖ = ‖E[R, l − 1]‖, one concludes directly from Corollary 5.3 that c_l = 1.

Proof of Proposition 5.5, (⇐) part: For any iteration l > 1, take γ[r_0, l] = min_{1≤r≤R} γ[r, l]. Notice that c_l ≤ sin(γ[r_0, l]) from Corollary 5.3. By hypothesis, sin(γ[r_0, l]) ≤ sin(β), which implies that ‖E[R, l]‖ ≤ c_l ‖E[R, l − 1]‖ ≤ sin(β) ‖E[R, l − 1]‖. Because β is an upper bound for γ[r, l], l > 1, we have sin(β) ‖E[R, 1]‖ ≥ ‖E[R, 2]‖, and hence (sin(β))^{l−1} ‖E[R, 1]‖ ≥ ‖E[R, l]‖. Hence, when l → ∞, ‖E[R, l]‖ → 0.

Fig. 2. THOSVD and SeROAP comparison.

Fig. 3.
Percentage of successful decompositions for rank-3 tensors.

Figure 4 presents the performance of the algorithms in terms of the average of ‖E[R, l]‖ per iteration for different values of the signal-to-noise ratio (SNR) for 5 × 5 × 5 rank-3 tensors.

Fig. 4. Mean of ‖E[R, l]‖ per iteration for different values of SNR. For an SNR of 40 dB, DCPD-SeROAP attains ‖E[R, l]‖ ≈ 0.01 in approximately 100 iterations while, for the other algorithms, ‖E[R, l]‖ > 0.02 for the same number of iterations. Similar results are observed for other SNRs. The figure also shows that performances become similar when the SNR is decreased.

Fig. 5. Mean of ‖E‖ under rank variation.

Displaced appendix passages: By reshaping v_1 into a 2 × 4 matrix V_1, we have

V_1 = [ −0.1717 − 0.0914i   −0.0146 − 0.1472i   0.0245 + 0.1060i   −0.3189 − 0.0768i ;
        −0.6624 − 0.2596i   −0.2944 + 0.0292i   −0.0914 + 0.1717i   −0.2010 − 0.3858i ].

In the next iteration (n = 2), we compute the dominant right singular vector of V = V_1. The next step is to compute the vector w = v* ⊠ u, where u and v are the first left and right singular vectors of V, respectively. Thus,

u = [ 0.6106 − 0.7024i ; 0.1758 − 0.3208i ],   v = [ −0.3027 − 0.0545i ; −0.9481 + 0.0809i ].

¹ Matlab codes are available at http://www.gipsa-lab.grenoble-inp.fr/~pierre.comon/TensorPackage/tensorPackage.html.

REFERENCES

[1] P. Comon and C. Jutten, Eds., Handbook of Blind Source Separation, Independent Component Analysis and Applications, Academic Press, Oxford UK, Burlington USA, 2010, ISBN: 978-0-12-374726-6, hal-00460653.
[2] M. Castella and P. Comon, "Blind separation of instantaneous mixtures of dependent sources," in Independent Component Analysis and Signal Separation, pp. 9-16, Springer, 2007.
[3] A. L. F. de Almeida, G. Favier, and J. C. M. Mota, "Parafac-based unified tensor modeling for wireless communication systems with application to blind multiuser equalization," Signal Processing, vol. 87, no. 2, pp. 337-351, Feb. 2007.
[4] A. Smilde, R. Bro, and P. Geladi, Multi-way Analysis: Applications in the Chemical Sciences, John Wiley & Sons, 2005.
[5] H. Becker, L. Albera, P. Comon, M. Haardt, G. Birot, F. Wendling, M. Gavaret, C.-G. Bénar, and I. Merlet, "EEG extended source localization: tensor-based vs. conventional methods," NeuroImage, vol. 96, pp. 143-157, 2014.
[6] S. Sahnoun and P. Comon, "Joint source estimation and localization," IEEE Trans. on Signal Processing, vol. 63, no. 10, 2015.
[7] B. Savas, Algorithms in Data Mining using Matrix and Tensor Methods, Ph.D. thesis, Linköping Univ. Tech., 2008.
[8] H. A. L. Kiers, "Towards a standardized notation and terminology in multiway analysis," J. Chemometrics, pp. 105-122, 2000.
[9] J. M. ten Berge, "Kruskal's polynomial for 2×2×2 arrays and a generalization to 2×n×n arrays," Psychometrika, vol. 56, no. 4, pp. 631-636, 1991.
J Brachat, P Comon, B Mourrain, E Tsigaridas, Linear Algebra and its Applications. 43311J. Brachat, P. Comon, B. Mourrain, and E. Tsigaridas, "Symmetric tensor decomposition," Linear Algebra and its Applications, vol. 433, no. 11, pp. 1851-1872, 2010. A link between the canonical decomposition in multilinear algebra and simultaneous matrix diagonalization. L De Lathauwer, SIAM Journal on Matrix Analysis and Applications. 283L. De Lathauwer, "A link between the canonical decomposition in multilinear algebra and simultaneous matrix diagonalization," SIAM Journal on Matrix Analysis and Applications, vol. 28, no. 3, pp. 642- 666, 2006. Eigenvectors of tensors and algorithms for waring decomposition. L Oeding, G Ottaviani, Journal of Symbolic Computation. 54L. Oeding and G. Ottaviani, "Eigenvectors of tensors and algorithms for waring decomposition," Journal of Symbolic Computation, vol. 54, pp. 9-35, 2013. Most tensor problems are np-hard. C J Hillar, L.-H Lim, Journal of the ACM (JACM). 60645C. J. Hillar and L.-H. Lim, "Most tensor problems are np-hard," Journal of the ACM (JACM), vol. 60, no. 6, pp. 45, 2013. Tensor rank and the ill-posedness of the best low-rank approximation problem. V , De Silva, L.-H Lim, SIAM Journal on Matrix Analysis and Applications. 303V. De Silva and L.-H. Lim, "Tensor rank and the ill-posedness of the best low-rank approximation problem," SIAM Journal on Matrix Analysis and Applications, vol. 30, no. 3, pp. 1084-1127, 2008. Tensor decompositions, alternating least squares and other tales. P Comon, X Luciani, A L F De Almeida, Journal of Chemometrics. 237-8P. Comon, X. Luciani, and A. L. F. de Almeida, "Tensor decompositions, alternating least squares and other tales," Journal of Chemometrics, vol. 23, no. 7-8, pp. 393-405, 2009. Tensor decompositions and applications. T G Kolda, B W Bader, SIAM review. 513T. G. Kolda and B. W. Bader, "Tensor decompositions and applications," SIAM review, vol. 51, no. 3, pp. 455-500, 2009. 
Enhanced line search: A novel method to accelerate parafac. M Rajih, P Comon, R A Harshman, SIAM Journal on Matrix Analysis and Applications. 303M. Rajih, P. Comon, and R. A. Harshman, "Enhanced line search: A novel method to accelerate parafac," SIAM Journal on Matrix Analysis and Applications, vol. 30, no. 3, pp. 1128-1147, 2008. Optimization-based algorithms for tensor decompositions: Canonical polyadic decomposition, decomposition in rank-(l r,l r,1) terms, and a new generalization. L Sorber, M Van Barel, L De Lathauwer, SIAM Journal on Optimization. 232L. Sorber, M. Van Barel, and L. De Lathauwer, "Optimization-based algorithms for tensor decompositions: Canonical polyadic decomposi- tion, decomposition in rank-(l r,l r,1) terms, and a new generalization," SIAM Journal on Optimization, vol. 23, no. 2, pp. 695-720, 2013. A comparison of algorithms for fitting the parafac model. G Tomasi, R Bro, Computational Statistics & Data Analysis. 507G. Tomasi and R. Bro, "A comparison of algorithms for fitting the parafac model," Computational Statistics & Data Analysis, vol. 50, no. 7, pp. 1700-1734, 2006. A weighted non-negative least squares algorithm for threeway Parafac factor analysis. P Paatero, Chemometrics Intell. Lab. Syst. 38P. Paatero, "A weighted non-negative least squares algorithm for three- way Parafac factor analysis," Chemometrics Intell. Lab. Syst., vol. 38, pp. 223-242, 1997. Subtracting a best rank-1 approximation does not necessarily decrease tensor rank. A Stegeman, P Comon, Linear Algebra Appl. 4337512275A. Stegeman and P. Comon, "Subtracting a best rank-1 approximation does not necessarily decrease tensor rank," Linear Algebra Appl., vol. 433, no. 7, pp. 1276-1300, Dec. 2010, hal-00512275. Tensor deflation for candecomp/parafac. part 1: Alternating subspace update algorithm. A.-H Phan, P Tichavskỳ, A Cichocki, IEEE Transaction on Signal Processing. A.-H. Phan, P. Tichavskỳ, and A. Cichocki, "Tensor deflation for candecomp/parafac. 
part 1: Alternating subspace update algorithm," IEEE Transaction on Signal Processing, 2015. Fast local algorithms for large scale nonnegative matrix and tensor factorizations. A Cichocki, A.-H Phan, IEICE Transactions on Fundamentals of Electronics. 3Communications and Computer SciencesA. Cichocki and A.-H. Phan, "Fast local algorithms for large scale nonnegative matrix and tensor factorizations," IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E92-A, no. 3, pp. 708-721, 2009. An iterative deflation algorithm for exact cp tensor decomposition. A P Silva, P Comon, A L F De Almeida, IEEE conference on Acoustics, Speech and Signal Processing. A. P. da Silva, P. Comon, and A. L. F. de Almeida, "An iterative deflation algorithm for exact cp tensor decomposition," IEEE conference on Acoustics, Speech and Signal Processing, 2015. A multilinear singular value decomposition. L De Lathauwer, B De Moor, J Vandewalle, SIAM journal on Matrix Analysis and Applications. 214L. De Lathauwer, B. De Moor, and J. Vandewalle, "A multilinear singular value decomposition," SIAM journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1253-1278, 2000. On the global convergence of the alternating least squares method for rank-one approximation to generic tensors. L Wang, T Chu, SIAM J. Matrix Anal. Appl. 35L. Wang and T. chu, "On the global convergence of the alternating least squares method for rank-one approximation to generic tensors," SIAM J. Matrix Anal. Appl., vol. 35, pp. 1058-1072, 2014. Quasi-newton methods on grassmannians and multilinear approximations of tensors. B Savas, L.-H Lim, SIAM Journal on Scientific Computing. 326B. Savas and L.-H. Lim, "Quasi-newton methods on grassmannians and multilinear approximations of tensors," SIAM Journal on Scientific Computing, vol. 32, no. 6, pp. 3352-3393, 2010. Global optimization with polynomials and the problem of moments. J B Lasserre, SIAM Journal on Optimization. 113J. B. 
Lasserre, "Global optimization with polynomials and the problem of moments," SIAM Journal on Optimization, vol. 11, no. 3, pp. 796- 817, 2001. Border basis relaxation for polynomial optimization. M A Bucero, B Mourrain, arXiv:1404.5489arXiv preprintM. A. Bucero and B. Mourrain, "Border basis relaxation for polynomial optimization," arXiv preprint arXiv:1404.5489, 2014. Semidefinite relaxations for best rank-1 tensor approximations. J Nie, L Wang, SIAM Journal on Matrix Analysis and Applications. 353J. Nie and L. Wang, "Semidefinite relaxations for best rank-1 tensor approximations," SIAM Journal on Matrix Analysis and Applications, vol. 35, no. 3, pp. 1155-1179, 2014. . G H Golub, C F Van Loan, Matrix computations. 3JHU PressG. H. Golub and C. F. Van Loan, Matrix computations, vol. 3, JHU Press, 2012. Optimization: algorithms and consistent approximations. E Polak, Springer-Verlag New York, IncE. Polak, Optimization: algorithms and consistent approximations, Springer-Verlag New York, Inc., 1997. Tracking a few extreme singular values and vectors in signal processing. P Comon, G H Golub, Proceedings of the IEEE. 788P. Comon and G. H. Golub, "Tracking a few extreme singular values and vectors in signal processing," Proceedings of the IEEE, vol. 78, no. 8, pp. 1327-1343, 1990. Approximation with Kronecker products. C F Van Loan, N Pitsianis, SpringerC. F. Van Loan and N. Pitsianis, Approximation with Kronecker products, Springer, 1993.
{'fraction_non_alphanumeric': 0.0771122768586307, 'fraction_numerical': 0.03118694875194558, 'mean_word_length': 3.7143763022012863, 'pattern_counts': {'":': 0, '<': 8, '<?xml version=': 0, '>': 19, 'https://': 0, 'lorem ipsum': 0, 'www.': 1, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 35, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'Because of the attractiveness of the canonical polyadic (CP) tensor decomposition in various applications, several algorithms have been designed to compute it, but efficient ones are still lacking. Iterative deflation algorithms based on successive rank-1 approximations can be used to perform this task, since the latter are rather easy to compute. We first present an algebraic rank-1 approximation method that performs better than the standard higher-order singular value decomposition (HOSVD) for three-way tensors. Second, we propose a new iterative rank-1 approximation algorithm that improves any other rank-1 approximation method. Third, we describe a probabilistic framework allowing to study the convergence of deflation CP decomposition (DCPD) algorithms based on successive rank-1 approximations. A set of computer experiments then validates theoretical results and demonstrates the efficiency of DCPD algorithms compared to other ones.', 'arxivid': '1508.05273', 'author': ['Alex P Da Silva ', 'Fellow, IEEEPierre Comon ', 'Senior Member, IEEEAndré L F De Almeida '], 'authoraffiliation': [], 'corpusid': 16320545, 'doi': None, 'github_urls': [], 'n_tokens_mistral': 17043, 'n_tokens_neox': 15168, 'n_words': 9268, 'pdfsha': '2671293d48cffc00666676ce743bed52e7cd911c', 'pdfurls': ['https://arxiv.org/pdf/1508.05273v1.pdf'], 'title': ['Rank-1 Tensor Approximation Methods and Application to Deflation', 'Rank-1 Tensor Approximation Methods and Application to Deflation'], 'venue': []}
arxiv
Deconvolution of linear systems with quantized input: an information theoretic viewpoint

Fabio Fagnani, Sophie M. Fosson

October 18, 2010

Keywords: hybrid deconvolution systems, input estimation, Bit-MAP decoding

Abstract. In spite of the huge literature on deconvolution problems, very little has been done for hybrid contexts where signals are quantized. In this paper we undertake an information theoretic approach to the deconvolution problem of a simple integrator with quantized binary input and sampled noisy output. We recast it into a decoding problem, and we propose and analyze (theoretically and numerically) some low complexity on-line algorithms that achieve deconvolution.

Introduction

The deconvolution problem is ubiquitous in many scientific and technological areas, such as seismology, astrophysics, image processing and medical applications (see e.g. [2,3,4,10,18,19]). Its most general formulation is as follows. We consider a time horizon T (possibly infinite), a convolution kernel K(t) and the input/output system

x(t) = ∫_0^t K(t − s) u(s) ds    (1)

(we implicitly assume that K and u are such that the above integral makes sense). The problem is to estimate the input u from some noisy version y of the output x. This is an instance of an inverse problem; to see why it is difficult, we focus on the special case K = 1, which is the case considered throughout this paper. In this context, (1) can be written as

ẋ(t) = u(t),   x(0) = 0.    (2)

Since differentiation is not robust with respect to noise perturbations, u cannot be reconstructed from y simply by differentiating. The goal is then to estimate u using the available information on x and any a priori information on u. Several procedures can accomplish this task, and the choice is in general motivated by a suitable trade-off between precision of the solution and complexity of the algorithm.
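To see this non-robustness concretely, here is a small numerical sketch (our own illustration, not part of the paper's development): differentiating the sampled noisy output by finite differences divides the measurement noise by the sampling step, so for small τ the noise swamps the signal.

```python
import random

# Sketch (our own illustration): naive deconvolution of the integrator by
# finite differences, u_hat[k] = (y[k+1] - y[k]) / tau, amplifies the
# measurement noise by a factor of order 1/tau.
def finite_difference(y, tau):
    return [(y[k + 1] - y[k]) / tau for k in range(len(y) - 1)]

tau, sigma, K = 0.01, 0.05, 1000
u_true = 1.0                                   # constant input u(t) = 1
x = [u_true * tau * k for k in range(K)]       # exact output x(t) = t
rng = random.Random(0)
y = [xk + rng.gauss(0.0, sigma) for xk in x]   # noisy samples

u_hat = finite_difference(y, tau)
mean_abs_err = sum(abs(v - u_true) for v in u_hat) / len(u_hat)
# The error on each u_hat[k] has standard deviation sqrt(2)*sigma/tau,
# here about 7, several times larger than the signal itself.
print(mean_abs_err)
```

With these (illustrative) values the average estimation error is several times the input amplitude, even though the noise on each raw sample is only 0.05.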
Classical algorithms due to Tikhonov [21,22] are based on a penalization technique and work off-line: the estimate of u at any time depends on the whole signal y(t), t ∈ [0, T]. This is a significant drawback in on-line or interactive data-flow applications, where the estimation delay is required to remain bounded. Causal algorithms have been studied in [7,8], where error bounds have been obtained under bounded noise and regularity assumptions on the input signals u. An outstanding problem is how to exploit possible side information on the input signal u(t) in the above algorithms: while functional, and more generally convex, constraints can be incorporated, things are far less clear for more general constraints. In this paper we focus on the case when u is known to be a piecewise constant signal with values restricted to a fixed, known, finite discrete alphabet. This is a significant setting in the context of hybrid systems, where continuous-time systems are driven by discrete digital signals. Such constraints are clearly non-convex, and it is not obvious how to include them in classical deconvolution algorithms. In this work we undertake an information-theoretic approach to causal deconvolution problems with sampled quantized inputs, introducing algorithms that reconstruct u through a decoding procedure. A key feature of these algorithms is their very low complexity, while their performance is quite close to the information theoretic limit. The main mathematical results consist in a rigorous analysis of the asymptotic performance of the proposed algorithms, employing tools from the ergodic theory of Markov processes. In Section 2 we give the mathematical details of the deconvolution problem with quantized input signals.
In particular, we link it to classical decoding problems and study the possibility of using classical decoding techniques for our purpose. In Section 3 we develop a couple of low complexity deconvolution algorithms and compare their performance. Section 4 is the core of the paper: it is devoted to a detailed analysis of the proposed algorithms. Using ergodic theorems for Markov processes, we give theoretical results on their behavior in the asymptotic regime (time range going to ∞). We conclude the introduction with notation and terminology used throughout the paper.

Notation

The deconvolution problem

In the following we stick to the system (1), under the assumptions made throughout this section.

Assumption 1 The available output signal is a noisy, sampled version of x(t): y_k = x_k + n_k, where x_k = x(τk), τ > 0 is the constant sampling time, and the n_k's are realizations of independent, identically distributed Gaussian variables N_k with zero mean and variance σ². We denote by y = (y_1, ..., y_K) ∈ R^K the vector of all available measures (K = T/τ is assumed to be an integer), and by y_a^b = (y_a, y_{a+1}, ..., y_b) the measures from time a to time b, with a, b ∈ {1, ..., K}, a < b.

A deconvolution algorithm is a function Γ : R^K → R^{[0,T]}. û = Γ(y) is the estimated input; in general it will not coincide with the true input u. What we request is a bound on the error û − u and a consistency property: when the noise variance and the sampling time go to 0, the error should converge (in a suitable sense) to 0. We say that a deconvolution algorithm Γ is causal (with delay k_0 τ, k_0 ∈ N) if there exists a sequence of functions Γ_k : R^{k+k_0} → R^{[(k−1)τ, kτ[}, k = 1, 2, ..., such that

Γ(y)|_{t ∈ [(k−1)τ, kτ[} = Γ_k(y_1^{k+k_0}).
Such an algorithm estimates the unknown signal in the current time interval [(k−1)τ, kτ[ by exploiting the past and present information y_1, ..., y_k, along with a possible bounded amount of future information y_{k+1}, ..., y_{k+k_0}. We now come to the assumptions on the input signals.

Assumption 2 There is a finite alphabet U ⊂ R and we consider signals of the type

u(t) = Σ_{k=0}^{K−1} u_k 1_{[kτ, (k+1)τ[}(t),   u_k ∈ U.    (3)

u(t), with t ∈ [0, T[, is then completely determined by the sequence of samples u_0, u_1, ..., u_{K−1}. For simplicity, we assume the sampling time τ to be the same as in the output and the sampling instants to be exactly synchronized. The output signals are now identified by samples x_1, x_2, ..., x_K ∈ X, where X ⊂ R is a suitable alphabet (recall that we have fixed x_0 = 0).

Of course, in principle one could still use the deconvolution algorithms in [7,8] or [21,22]; however, there would be no way to use the a priori information on the quantization of u inside the algorithm. Instead, we now show that in this case our deconvolution problem can be completely recast into a discrete decoding problem. Notice indeed that the input/output system is simply described by

x_0 = 0,   x_{k+1} = x_k + τ u_k,   k = 0, ..., K − 1.    (4)

The vector x = (x_1, ..., x_K) can thus be seen as a coded version of u = (u_0, ..., u_{K−1}): we write x = E(u), where E denotes the encoder given by (4). Afterwards, x is transformed as if it were transmitted through a classical Additive White Gaussian Noise (AWGN) channel, the received output being y_k = x_k + n_k. On the basis of these measures we have to estimate the 'information signal' u. Notice that the real time t is completely out of the problem at this point, and everything can be considered at the discrete sampling clock time. In the language of coding theory, a decoder is exactly a function D : R^K → U^K that constructs an estimate of the input signal: û = D(y).
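As a minimal sketch of this discrete model (our own Python rendering, with function names of our choosing), the encoder (4) and the AWGN channel can be simulated as follows:

```python
import random

def encode(u, tau=1.0):
    """Encoder E of (4): x_k = x_{k-1} + tau * u_{k-1}, with x_0 = 0."""
    x, xk = [], 0.0
    for uk in u:
        xk += tau * uk
        x.append(xk)
    return x

def awgn(x, sigma, rng):
    """AWGN channel: y_k = x_k + n_k, with n_k ~ N(0, sigma^2) i.i.d."""
    return [xk + rng.gauss(0.0, sigma) for xk in x]

rng = random.Random(1)
u = [rng.randrange(2) for _ in range(10)]   # i.i.d. Bernoulli(1/2) input bits
x = encode(u)                               # running sum of the input bits
y = awgn(x, sigma=0.1, rng=rng)             # received measures
```

With τ = 1 and binary u, the codeword x is a nondecreasing integer staircase; this is the structure the decoders below exploit.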
Even in this context we can speak of a causal algorithm if there exists a sequence of functions D_k : R^{k+k_0} → U such that D(y)_{k−1} = D_k(y_1^{k+k_0}), k = 1, ..., K. Finally,

Assumption 3 The unknown input is generated by a stochastic source with a known distribution, independent of the noise source.

The particular source distribution considered in this work is introduced in Section 2.4. According to the notation given in Section 1.1, in the sequel U_k denotes the input random variable at time k, X_k the corresponding system output given by (4), and Y_k = X_k + N_k the measured output, N_k being the Gaussian noise. Furthermore, Û_k = D(Y)_k and X̂_k = X̂_{k−1} + τ Û_{k−1} (with X̂_0 = 0) are, respectively, the estimated input and the estimated state. Finally, U = (U_0, ..., U_{K−1}), Û = (Û_0, ..., Û_{K−1}), Y = (Y_1, ..., Y_K), and Y_a^b = (Y_a, ..., Y_b), a, b ∈ {1, ..., K}, a < b.

Error Evaluation: The Mean Square Cost

A fundamental issue in the deconvolution problem is the choice of the norm with respect to which errors are evaluated. In this context we consider the mean square cost:

d(D) = τ E||U − Û||² = τ Σ_{k=0}^{K−1} E|U_k − Û_k|².

We now define D* as the decoder minimizing d(D) among all possible decoders. It can be constructed as follows: given the density f_Y(y) of Y, notice that

d(D) = τ Σ_{k=0}^{K−1} ∫_{R^K} E[ |U_k − D(y)_k|² | Y = y ] f_Y(y) dy.

Hence, for any y ∈ R^K,

D*(y)_k = argmin_{v∈U} E[ |U_k − v|² | Y = y ] = argmin_{v∈U} Σ_{u∈U} |u − v|² P(U_k = u | Y = y).

This turns out to be a finite optimization problem, which can be solved by means of a marginalization procedure and a Bayesian inversion:

P(U_k = u | Y = y) = [ Σ_{u∈U^K : u_k = u} f_{Y|X}(y | E(u)) P(U = u) ] / f_Y(y).
Analogously, we define D*_{k_0} as the decoder minimizing d(D) among all possible causal decoders with delay k_0:

D*_{k_0}(y)_{k−1} = D*_{k_0,k}(y_1^{k+k_0}) = argmin_{v∈U} Σ_{u∈U} |u − v|² P(U_{k−1} = u | Y_1^{k+k_0} = y_1^{k+k_0}).    (5)

The BCJR algorithm

In practice, the decoder D* can be implemented with the well-known BCJR algorithm [1]. This algorithm computes the probabilities of the states and transitions of a Markov source given the observed channel outputs; in other words, it provides the so-called a posteriori probabilities (APP) on states and transitions, and therefore on coded and information symbols. Let us briefly recall the BCJR procedure. For i, j ∈ X, we define the following probability density functions:

α_k(i) = f_{(X_k, Y_1^k)}(i, y_1^k),   k = 1, ..., K,
β_k(i) = f_{(Y_{k+1}^K | X_k)}(y_{k+1}^K | i),   k = 0, ..., K − 1,
Γ_k(i, j) = f_{(X_k, Y_k | X_{k−1})}(j, y_k | i),   k = 1, ..., K.    (6)

For any k = 1, ..., K, the APP on states and on transitions are, respectively,

λ_k(i) = f_{(X_k, Y)}(i, y),   σ_k(i, j) = f_{(X_k, X_{k−1}, Y)}(j, i, y).

Given the initial and final conditions

α_0(i) = P(X_0 = i) = 1 if i = 0, 0 otherwise;   β_K(i) = 1 for any i ∈ X,

for k = 1, ..., K we have

λ_k(i) = α_k(i) β_k(i),   σ_k(i, j) = α_{k−1}(i) Γ_k(i, j) β_k(j),    (7)

where α_k(i) and β_k(i), i ∈ X, can be computed with a forward and a backward recursion, respectively:

α_k(i) = Σ_{h∈X} α_{k−1}(h) Γ_k(h, i),   β_k(i) = Σ_{h∈X} Γ_{k+1}(i, h) β_{k+1}(h).    (8)

The APP are then recursively computed and finally used to decide on the transmitted input sequence. Analogous causal versions of the BCJR algorithm can be used to implement the decoder (5) with delay k_0. For k = 1, ..., K − k_0, the APP on the transitions becomes

σ̃_k(i, j) = f_{(X_k, X_{k−1}, Y_1^{k+k_0})}(j, i, y_1^{k+k_0}) = α_{k−1}(i) Γ_k(i, j) β̃_k(j),    (9)

where α_k and Γ_k are defined as above, while β̃_k(j) = f_{(Y_{k+1}^{k+k_0} | X_k)}(y_{k+1}^{k+k_0} | j). For k > K − k_0, we recast into the classical formulation (7).
For brevity, we will refer to the causal BCJR as CBCJR.

Further Assumptions

In the sequel of this work, we make two further assumptions on the input:

Assumption 4 The input alphabet is binary: U = {0, 1}.

Assumption 5 For k = 0, ..., K − 1, the U_k's are independent and uniformly distributed: P(U_k = 0) = P(U_k = 1) = 1/2. In particular, the U_k's are independent of the Gaussian noises N_k.

Now the probabilistic setting introduced at the end of Section 2.1 is complete, and we can summarize the system as follows: given X_0 = X̂_0 = 0, for k = 1, ..., K,

U_{k−1} ~ Bernoulli(1/2);  X_k = X_{k−1} + τ U_{k−1};  N_k ~ N(0, σ²);  Y_k = X_k + N_k;  Û_{k−1} = D(Y)_{k−1};  X̂_k = X̂_{k−1} + τ Û_{k−1}.    (10)

Notice that the X_k's are also independent of the N_k's. Under Assumption 4,

d(D) = τ Σ_{k=0}^{K−1} E|U_k − Û_k| = τ K P_b(e),  where  P_b(e) = (1/K) Σ_{k=0}^{K−1} P(Û_k ≠ U_k) = (1/K) E(|U − Û|)    (11)

is the so-called Bit Error Rate (BER), a very common performance measure in digital transmissions that expresses the average number of bits in error. In our context, minimizing d(D) is equivalent to minimizing the BER; therefore the optimal decoder D* that performs this minimization coincides with the well-known Bit-MAP (Maximum A Posteriori) decoder (see [15,1]):

D*(y)_k = argmax_{u∈{0,1}} P(U_k = u | Y = y).    (12)

Its causal version is given by

D*_{k_0}(y)_k = argmax_{u∈{0,1}} P(U_k = u | Y_1^{k+1+k_0} = y_1^{k+1+k_0}).    (13)

Table 1: Complexity at step k.
           Computations   Storage Locations   Decoding Delay
BCJR       O(k)           O(k)                K − k
CBCJR      O(k)           O(k)                k_0 = 0

We also introduce the Conditional Bit Error Rate (CBER):

P_b(e|U) = (1/K) Σ_{k=0}^{K−1} P(Û_k ≠ U_k | U) = (1/K) E(|U − Û| | U).    (14)

While the BER evaluates the mean performance of the transmission model, the CBER describes its behavior for each possible sent sequence. The CBER is thus a relevant parameter for our system, whose decoding performance changes as a function of the transmitted input.
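The Bit-MAP rule can be sketched compactly for this particular code with τ = 1. The following Python rendering is our own (unoptimized) version of the forward-backward recursions (8) and the decision rule (17); the per-step normalizations are ours, added only for numerical stability:

```python
import math

def bcjr_decode(y, sigma):
    """Forward-backward (BCJR) Bit-MAP decoding of the integrator code,
    tau = 1: states 0..K, each step stays (u = 0) or moves up by 1 (u = 1)."""
    K = len(y)
    def g(yk, j):                       # channel likelihood f(y_k | X_k = j)
        return math.exp(-(yk - j) ** 2 / (2 * sigma ** 2))
    # forward recursion (8): alpha[k][i] proportional to f(X_k = i, y_1..y_k)
    alpha = [[0.0] * (K + 1) for _ in range(K + 1)]
    alpha[0][0] = 1.0
    for k in range(1, K + 1):
        for j in range(0, k + 1):
            prev = alpha[k - 1][j] + (alpha[k - 1][j - 1] if j > 0 else 0.0)
            alpha[k][j] = 0.5 * prev * g(y[k - 1], j)
        s = sum(alpha[k]) or 1.0
        alpha[k] = [a / s for a in alpha[k]]        # normalize (stability)
    # backward recursion (8): beta[k][i] proportional to f(y_{k+1}..y_K | X_k = i)
    beta = [[0.0] * (K + 1) for _ in range(K + 1)]
    beta[K] = [1.0] * (K + 1)
    for k in range(K - 1, -1, -1):
        for i in range(K + 1):
            b = 0.5 * g(y[k], i) * beta[k + 1][i]
            if i + 1 <= K:
                b += 0.5 * g(y[k], i + 1) * beta[k + 1][i + 1]
            beta[k][i] = b
        s = sum(beta[k]) or 1.0
        beta[k] = [b / s for b in beta[k]]
    # decision rule (17): total mass of "stay" vs "step up" transitions
    u_hat = []
    for k in range(1, K + 1):
        stay = sum(alpha[k - 1][i] * g(y[k - 1], i) * beta[k][i]
                   for i in range(K + 1))
        up = sum(alpha[k - 1][i] * g(y[k - 1], i + 1) * beta[k][i + 1]
                 for i in range(K))
        u_hat.append(0 if up <= stay else 1)
    return u_hat

u = [1, 0, 1, 1, 0]
x = [1, 1, 2, 3, 3]                                     # x_k = x_{k-1} + u_{k-1}
u_hat = bcjr_decode([float(v) for v in x], sigma=0.3)   # noiseless observation
```

On a noiseless observation the decoder retraces the transmitted staircase exactly; its cost per step grows with k, matching the O(k) entries of Table 1.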
For computational simplicity, from now on let

τ = 1,    (15)

so that X = {0, ..., K} and, in particular, if X_0 = 0, X_k ∈ {0, ..., k}. In the BCJR implementation of the decoders (12) and (13), α_k(i), i = 0, 1, ..., K, is null for any i > k, while the matrices Γ_k and σ_k are non-null only on the diagonal and the superdiagonal. By Assumption 5, P(X_k = j | X_{k−1} = i) = 1/2 if j = i, i + 1, and 0 otherwise. Recalling that the transition between X_k and Y_k is modeled by an AWGN channel, f_{(Y_k|X_k)}(y_k | j) = (1/(σ√(2π))) exp(−(y_k − j)²/(2σ²)), we obtain

Γ_k(i, j) = f_{(Y_k|X_k)}(y_k | j) P(X_k = j | X_{k−1} = i) = (1/(2σ√(2π))) exp(−(y_k − j)²/(2σ²))  for j = i, i + 1.    (16)

Given Γ_k, σ_k or its causal version σ̃_k can be recursively computed, and the corresponding decoding rules are:

BCJR:  D*(y)_{k−1} = 0 if Σ_{i=0}^{k−1} σ_k(i, i+1) ≤ Σ_{i=0}^{k−1} σ_k(i, i), and 1 otherwise.    (17)

CBCJR:  D*_{k_0}(y)_{k−1} = 0 if Σ_{i=0}^{k−1} σ̃_k(i, i+1) ≤ Σ_{i=0}^{k−1} σ̃_k(i, i), and 1 otherwise.    (18)

Suboptimal Causal Decoding Algorithms

Causality has a price, and the CBCJR algorithm clearly performs worse than BCJR. By simulating our system, we quantify the performance gap between BCJR and CBCJR (k_0 = 0), as can be appreciated in Figure 1: the two curves represent the corresponding BERs as functions of the Signal-to-Noise Ratio (SNR), here defined as τ²/σ² = 1/σ². These outcomes are averages over 5000 transmissions, each a 100-bit message (a length chosen to avoid unacceptable delays and complexity problems in the BCJR and CBCJR implementations). We remark that CBCJR has the best performance among causal deconvolution algorithms. Moreover, by comparing the efficiency of the two procedures (the results are reported in Table 1), we gather that for both BCJR and CBCJR the required computations and storage locations increase linearly with the number of transmitted bits, which is a drawback in case of long transmissions.
This fact motivates the development of new suboptimal causal algorithms that improve the efficiency without a substantial loss of reliability. To achieve this, we implement the CBCJR with a fixed number of states: at each step we save the n states with the largest probability (where n is arbitrarily chosen) and discard the others. We now introduce the algorithms for the cases n = 1 and n = 2, which are of great interest for their low complexity, and show some simulation outcomes.

One State Algorithm

A suboptimal causal decoder D^(1) : R^K → {0, 1}^K can be derived from the CBCJR by assuming the most probable state to be the correct one. At any step k = 0, 1, ..., D^(1) decides on the current bit by a single MAP procedure and updates the estimated state, which is the only value that needs to be stored. Consider (6), (9) and (18). Given the estimated state x̂_{k−1}, the decoding rule of D^(1) at time step k is given by (18) with no backward recursion β̃_k(j) and with α_{k−1}(x̂_{k−1}) = 1, α_{k−1}(j) = 0 for any j ≠ x̂_{k−1}. This reduces the decoding task to the comparison of two distances; in fact, the One State algorithm that implements D^(1) is as follows:

1. Set x̂_0 = 0.
2. For k = 1, ..., K, given the received symbol y_k ∈ R,

û_{k−1} = D^(1)(y)_{k−1} = argmax_{u∈{0,1}} P(U_{k−1} = u | Y_k = y_k, X_{k−1} = x̂_{k−1}) = 0 if Γ_k(x̂_{k−1}, x̂_{k−1}) ≥ Γ_k(x̂_{k−1}, x̂_{k−1} + 1), 1 otherwise;
x̂_k = x̂_{k−1} + û_{k−1},    (19)

and, given the equality (16), in the AWGN case

Γ_k(x̂_{k−1}, x̂_{k−1}) ≥ Γ_k(x̂_{k−1}, x̂_{k−1} + 1)  ⇔  |y_k − x̂_{k−1}| ≤ |y_k − (x̂_{k−1} + 1)|.    (20)

Two States Algorithm

By fixing n = 2, we derive a decoder D^(2) : R^K → {0, 1}^K that, at each step, estimates the current input bit and computes and stores the two most likely states along with the corresponding probabilities α_k(i) (defined by (6)). As in the One State algorithm, the estimation of the input bit is performed by the MAP decoding rule (18) with no backward recursion, summing over the two "surviving" states.
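Returning to the One State rule (19)-(20): in the AWGN case it is just a nearest-neighbor test, which can be sketched in a few lines (our own Python rendering):

```python
def one_state_decode(y):
    """One State algorithm (19)-(20), tau = 1: keep a single state estimate
    x_hat and choose the bit whose predicted output is nearest to y_k."""
    x_hat, u_hat = 0.0, []
    for yk in y:
        bit = 0 if abs(yk - x_hat) <= abs(yk - (x_hat + 1.0)) else 1
        u_hat.append(bit)
        x_hat += bit                  # x_hat_k = x_hat_{k-1} + u_hat_{k-1}
    return u_hat

# Noiseless check: the decoder retraces the staircase exactly.
u = [0, 1, 1, 0, 1]
x = [0.0, 1.0, 2.0, 2.0, 3.0]         # x_k = x_{k-1} + u_{k-1}
u_hat = one_state_decode(x)
```

Equivalently, the rule decides û = 1 exactly when y_k > x̂_{k−1} + 1/2. Note that a single wrong decision offsets x̂ by one and can propagate; this drift is precisely the difference process studied in the theoretical analysis below.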
In detail, the recursive Two States algorithm that implements D^(2) is the following:

1. For k = 1, given the unique starting state x̂_0 = 0, we estimate the first bit by a One State procedure:

û_0 = D^(2)(y)_0 = argmax_{u∈{0,1}} P(U_0 = u | Y_1 = y_1, X_0 = 0) = 0 if |y_1| ≤ |y_1 − 1|, 1 otherwise.    (21)

Afterwards, the possible states are two, x̂_1(0) = 0 and x̂_1(1) = 1, and the corresponding probabilities α_1(0) and α_1(1) are in our framework given by

α_1(j) = f_{(X_1,Y_1)}(j, y_1) = f_{(Y_1|X_1)}(y_1 | j) P(X_1 = j) = f_{(Y_1|X_1)}(y_1 | j) P(U_0 = j) = (1/2) f_{(Y_1|X_1)}(y_1 | j),   j ∈ {0, 1}.

We then normalize these probabilities so that α_1(0) + α_1(1) = 1, and we store just the pair (α_1(0), x̂_1(0)), since this suffices to retrieve (α_1(1), x̂_1(1)) = (1 − α_1(0), x̂_1(0) + 1). For notational simplicity we rename the stored pair (α_1(0), x̂_1(0)) as (α_1, x̂_1).

2. For k = 2, 3, ..., K, given (α_{k−1}, x̂_{k−1}) and F_k = f_{(X_k, Y_1^k)}(x̂_k, y_1^k),

û_{k−1} = D^(2)(y)_{k−1} = argmax_{u∈{0,1}} P(U_{k−1} = u | Y_k = y_k, X_{k−1} = x̂_{k−1}, F_{k−1} = α_{k−1})
= 0 if α_{k−1} Γ_k(x̂_{k−1}, x̂_{k−1}) + (1 − α_{k−1}) Γ_k(x̂_{k−1} + 1, x̂_{k−1} + 1) ≥ α_{k−1} Γ_k(x̂_{k−1}, x̂_{k−1} + 1) + (1 − α_{k−1}) Γ_k(x̂_{k−1} + 1, x̂_{k−1} + 2), 1 otherwise.

From step k − 1, three possible states arise: x̂_{k−1}, x̂_{k−1} + 1 and x̂_{k−1} + 2, whose probabilities are given by the forward recursion in (8):

α_k(x̂_{k−1}) = α_{k−1} Γ_k(x̂_{k−1}, x̂_{k−1}),
α_k(x̂_{k−1} + 1) = α_{k−1} Γ_k(x̂_{k−1}, x̂_{k−1} + 1) + (1 − α_{k−1}) Γ_k(x̂_{k−1} + 1, x̂_{k−1} + 1),
α_k(x̂_{k−1} + 2) = (1 − α_{k−1}) Γ_k(x̂_{k−1} + 1, x̂_{k−1} + 2),    (22)

which, in the case (16), reduce to

α_k(x̂_{k−1}) = α_{k−1} (1/(2σ√(2π))) exp(−(y_k − x̂_{k−1})²/(2σ²)),
α_k(x̂_{k−1} + 1) = (1/(2σ√(2π))) exp(−(y_k − (x̂_{k−1} + 1))²/(2σ²)),
α_k(x̂_{k−1} + 2) = (1 − α_{k−1}) (1/(2σ√(2π))) exp(−(y_k − (x̂_{k−1} + 2))²/(2σ²)).

Since |y_k − (x̂_{k−1} + 1)| never exceeds max{|y_k − x̂_{k−1}|, |y_k − (x̂_{k−1} + 2)|}, in the AWGN case α_k(x̂_{k−1} + 1) ≥ min{α_k(x̂_{k−1}), α_k(x̂_{k−1} + 2)}.
Hence, the state x̂_{k−1} + 1 is never discarded, and the two "surviving" states are always adjacent. Therefore:

• We calculate α_min = min{α_k(x̂_{k−1}), α_k(x̂_{k−1} + 2)}.
• If α_min = α_k(x̂_{k−1}), the surviving states are (x̂_{k−1} + 1, x̂_{k−1} + 2), with probabilities (α_k(x̂_{k−1} + 1), α_k(x̂_{k−1} + 2)). We then store the lowest state along with the corresponding normalized probability: (α_k, x̂_k) = (α_k(x̂_{k−1} + 1) / [α_k(x̂_{k−1} + 1) + α_k(x̂_{k−1} + 2)], x̂_{k−1} + 1).
• Similarly, if α_min = α_k(x̂_{k−1} + 2), then (α_k, x̂_k) = (α_k(x̂_{k−1}) / [α_k(x̂_{k−1}) + α_k(x̂_{k−1} + 1)], x̂_{k−1}).

Remark 1 In the extreme case α_k = 1, the state x̂_k + 1 has null probability, so x̂_{k+1} = x̂_k; analogously, when α_k = 0, x̂_{k+1} = x̂_k + 1. In these cases the Two States algorithm actually behaves as the One State algorithm.

Remark 2 As a consequence of Remark 1, the unique initial state x̂_0 = 0 can be interpreted as a double state with all the probability on x̂_0 = 0, that is, (α_0, x̂_0) = (1, 0).

Simulations and comparisons

We now report the simulation outcomes concerning the decoders D*_0, D^(1) and D^(2), implemented respectively with the CBCJR, One State and Two States algorithms. The simulations consider 5000 different transmissions, each a 100-bit message; the reported results are the averages over all transmissions.

Table 2: Complexity at step k.
             Computations   Storage Locations   Decoding Delay
BCJR         O(k)           O(k)                K − k
CBCJR        O(k)           O(k)                0
ONE STATE    O(1)           1                   0
TWO STATES   O(1)           2                   0

In Figure 3 we compare the efficiency of the three decoding schemes in terms of BER: two states are sufficient to achieve performance very close to the causal optimum. We observe that the gap between D^(2) and D*_0 never exceeds 0.15 dB, while it reaches 0.8 dB between D^(1) and D*_0 for BER values between 0.2 and 0.3.
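The per-step bookkeeping described above can be rendered compactly. The following Python sketch is our own; the constant factor 1/(2σ√(2π)) is dropped since it cancels in both the decision and the normalization:

```python
import math

def two_states_decode(y, sigma):
    """Two States algorithm, tau = 1: track the two most probable adjacent
    states (x_hat, x_hat + 1), with alpha = normalized probability of the
    lower one; start from (alpha_0, x_hat_0) = (1, 0) as in Remark 2."""
    def g(yk, j):                      # Gaussian likelihood, constants dropped
        return math.exp(-(yk - j) ** 2 / (2 * sigma ** 2))
    alpha, x_hat, u_hat = 1.0, 0, []
    for yk in y:
        # MAP bit decision (18), summed over the two surviving states
        stay = alpha * g(yk, x_hat) + (1 - alpha) * g(yk, x_hat + 1)
        up = alpha * g(yk, x_hat + 1) + (1 - alpha) * g(yk, x_hat + 2)
        u_hat.append(0 if up <= stay else 1)
        # forward update (22); the middle state is never the least probable
        a0 = alpha * g(yk, x_hat)
        a1 = g(yk, x_hat + 1)
        a2 = (1 - alpha) * g(yk, x_hat + 2)
        if a0 <= a2:                   # discard the lower extreme state
            alpha, x_hat = a1 / ((a1 + a2) or 1.0), x_hat + 1
        else:                          # discard the upper extreme state
            alpha, x_hat = a0 / ((a0 + a1) or 1.0), x_hat
    return u_hat

u = [1, 1, 0, 1, 0, 0]
x = [1.0, 2.0, 2.0, 3.0, 3.0, 3.0]     # noiseless staircase
u_hat = two_states_decode(x, sigma=0.3)
```

Note that when α reaches 0 or 1 the update degenerates into the One State step, exactly as Remark 1 predicts.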
Moreover, as reported in Table 2, the complexity of the One State and Two States algorithms is constant at each step and no delay is produced in the decoding: this makes them efficient even for long-time transmissions, i.e., for a large number of states.

In the following we compute both the BER and the CBER, which describe the decoding performance for the "mean input" and for each possible input, respectively. The natural setting of this analysis is the theory of Markov processes, in countably infinite or uncountable spaces (we speak of Markov Chains when the space is countably infinite).

Theoretic Analysis of the One State Algorithm

Suppose we transmit K (possibly infinitely many) bits and decode by the One State method. The starting point of our analysis is the definition, at any step k = 1, 2, 3, ..., of the random variable

D_k = X_k − X̂_k ∈ Z,    (23)

X_k and X̂_k being defined in (10). D_k represents the difference between the actual and the estimated state values. Since D_0 = 0, the following recursive relationship holds:

D_{k+1} = D_k + U_k − Û_k,    (24)

where Û_k = D^(1)(y)_k (see the algorithm (19)). While the U_k's are independent, Û_k is a function of U_k and D_k. Then the stochastic process (D_k)_{k∈N} is a Markov Chain (whose definition is formally given in the next section), which can be exploited to carry on our analysis; to do so, let us first review some basic elements of Markov theory.

Markov Chains

The definitions and results introduced in this section can be found in Chapter 3 of [20] or Chapter 3 of [11]. By Markov Chain we mean any sequence of random variables (X_n)_{n=0,1,...} taking values in a countable set X and satisfying the Markov property:

P(X_{n+1} = y | X_n = x, X_{n−1}, ..., X_0) = P(X_{n+1} = y | X_n = x).

If the chain is time-homogeneous, that is, P(X_{n+1} = y | X_n = x) = P(X_{n+m+1} = y | X_{n+m} = x), the transition probabilities P_{x,y} = P(X_{n+1} = y | X_n = x) are the entries of the stochastic transition probability matrix P ∈ [0, 1]^{X×X}.
We review some important properties of a Markov Chain (X_n)_{n=0,1,…} on X = Z.

Definition 1 [20, Section 3.1] Two states x, y ∈ Z communicate if there exist n, m ∈ N s.t. (P^n)_{x,y} > 0 and (P^m)_{y,x} > 0. If all the states communicate, the Markov Chain is said to be irreducible.

Definition 2 [20, Section 3.2.3] Let τ_j = min{n > 0 : X_n = j}: a state j is said to be positive recurrent if E(τ_j | X_0 = j) < ∞. The Markov Chain itself is said to be positive recurrent if all its states are so.

Definition 4 [20, Section 3.2.3] An invariant (or stationary) probability vector is a probability vector Φ (that is, Φ ∈ [0, 1]^X and ∑_{x∈X} Φ_x = 1) such that Φ^T P = Φ^T.

The existence of an invariant probability vector, assured under suitable conditions, gives an important convergence result, as stated in the following

Proposition 5 [20, Sections 3.2.3-3.2.4] An irreducible, positive recurrent Markov Chain admits a unique invariant probability vector Φ. Moreover, Φ is the limit of the so-called Cesàro sum, that is,

lim_{K→∞} (1/K) ∑_{k=0}^{K−1} (P^k)_{x,d} = Φ_d    ∀x ∈ Z.

The mean BER

Let us go back to the One State algorithm. According to (24), (D_k)_{k∈N} is a countable homogeneous Markov Chain on Z, with transition probabilities

P_{x,y} = P(D_{k+1} = y | D_k = x) = (1/2)[P_{x,y}(0) + P_{x,y}(1)]

where P_{x,y}(u) = P(D_{k+1} = y | D_k = x, U_k = u), u ∈ {0, 1}. Notice that the only non-null entries of P(u) are the following:

P_{d,d+1}(0) = (1/2) erfc( (d + 1/2) / (√2 σ) ),    P_{d,d}(0) = 1 − P_{d,d+1}(0),
P_{d,d}(1) = (1/2) erfc( (d − 1/2) / (√2 σ) ),    P_{d,d−1}(1) = 1 − P_{d,d}(1).

P is tridiagonal and, for any x, y ∈ Z, P_{x,y} = P_{−x,−y} and P_{x,y} > 0 if and only if |x − y| ≤ 1; by iteration, for any n ∈ N, (P^n)_{x,y} > 0 if and only if |x − y| ≤ n. Hence, given any couple of states x, y ∈ Z with distance |x − y| = m, (P^m)_{x,y} > 0 and (P^m)_{y,x} > 0; that is, (D_k)_{k∈N} is irreducible. Moreover,

Lemma 6 (D_k)_{k∈N} is positive recurrent.
Proof It suffices to apply the following criterion proposed in [20]: if there exists a function g : Z → R₊ such that g_x ≥ (Pg)_x + ε for every x ∈ Z \ {y} and for some ε > 0, then y is a positive recurrent state. In our case, it is easy to prove that y = 0 is a positive recurrent state by taking g_x = |x|. Moreover, given that the chain is irreducible, if one state is positive recurrent, all states are so.

Proposition 7 The following statements hold:
1. (D_k)_{k∈N} admits a unique invariant probability vector Φ;
2. Φ is given by

Φ_d = Φ_0 ∏_{i=1}^{|d|} P_{i−1,i} / P_{i,i−1}    (25)

where Φ_0 = ( 1 + 2 ∑_{d=1}^{∞} ∏_{i=1}^{d} P_{i−1,i} / P_{i,i−1} )^{−1}.

Proof (1) It follows from Proposition 5. (2) From (Φ^T P)_d = Φ_d for any d ∈ Z, it follows that

Φ_{d−1} P_{d−1,d} − Φ_d P_{d,d−1} = c    (c constant).    (26)

In particular, as Φ_d = Φ_{−d} for any d ∈ Z (this is due to the uniqueness of the invariant measure and to the symmetry of P), it suffices to substitute the values d = 0 and d = 1 in (26) to conclude that c = 0; hence, relation (25) holds. Notice that c = 0 corresponds to the property of time-reversibility of a Markov Chain (see Section 4.8 of [16]); hence one could even prove it by Theorem 4.2 in [16], after having introduced the concepts of aperiodicity and ergodicity of a Markov Chain.

From Proposition 7 we deduce in particular that Φ_d > 0 for any d ∈ Z. Moreover, since P_{i−1,i}/P_{i,i−1} < 1 for i ≥ 1, Φ_d has a maximum at d = 0 and is monotone decreasing for d > 0. As a consequence of Proposition 5,

Corollary 8 Let q_d = P[Û_k ≠ U_k | D_k = d] = P_{d,d+1} + P_{d,d−1}; then

lim_{K→∞} P_b(e) = ∑_{d∈Z} q_d Φ_d.

Proof Since

P_b(e) = (1/K) ∑_{k=0}^{K−1} ∑_{d∈Z} q_d P(D_k = d) = (1/K) ∑_{k=0}^{K−1} ∑_{d∈Z} q_d (P^k)_{0,d},

the result follows from Proposition 5 and Lebesgue's Dominated Convergence Theorem. Indeed,

(1/K) ∑_{k=0}^{K−1} ∑_{d∈Z} q_d (P^k)_{0,d} = ∑_{d∈Z} q_d (1/K) ∑_{k=0}^{K−1} (P^k)_{0,d}

where (1/K) ∑_{k=0}^{K−1} (P^k)_{0,d} ≤ 1. This concludes the computation of the BER in case of long-time transmission, given the distribution of the input source.
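As a numerical illustration of Corollary 8, the limit BER can be evaluated by truncating the chain to |d| ≤ dmax, building Φ from formula (25) and summing q_d Φ_d. This is a sketch based on the transition probabilities reported above, not the authors' code; `dmax` is an arbitrary truncation parameter.

```python
from math import erfc, sqrt

def one_state_ber(sigma, dmax=200):
    """Asymptotic BER of the One State algorithm: build the invariant
    vector Phi from (25) and sum q_d * Phi_d (Corollary 8), truncating
    the chain to |d| <= dmax.  A numerical sketch, not the authors' code."""
    s = sqrt(2.0) * sigma
    # averaged transition probabilities of the tridiagonal chain D_k:
    # P_{d,d+1} = (1/2) P_{d,d+1}(0),  P_{d,d-1} = (1/2) P_{d,d-1}(1)
    p_up = lambda d: 0.25 * erfc((d + 0.5) / s)
    p_dn = lambda d: 0.5 - 0.25 * erfc((d - 0.5) / s)
    # Phi_d / Phi_0 = prod_{i=1}^{|d|} P_{i-1,i} / P_{i,i-1}   (formula (25))
    ratio = [1.0]
    for i in range(1, dmax + 1):
        ratio.append(ratio[-1] * p_up(i - 1) / p_dn(i))
    phi0 = 1.0 / (1.0 + 2.0 * sum(ratio[1:]))
    # q_d = P_{d,d+1} + P_{d,d-1}; both q_d and Phi_d are symmetric in d
    q = lambda d: p_up(d) + p_dn(d)
    return phi0 * (q(0) + 2.0 * sum(r * q(k) for k, r in enumerate(ratio) if k > 0))
```

Since the ratios P_{i−1,i}/P_{i,i−1} are smaller than 1, the truncated tail is negligible for moderate `dmax`; the BER grows with the noise level σ, as expected.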
In the next paragraph we study how the performance depends on the transmitted input sequence.

The Conditional BER

In the asymptotic case, the CBER converges to the same limit as the BER for almost all possible inputs:

Theorem 9 Let π be the uniform Bernoulli probability measure over {0, 1}^N. Then, for the One State algorithm, lim_{K→∞} P_b(e|U) = lim_{K→∞} P_b(e) for π-a.e. U.

Theorem 9 gives a stronger result than Corollary 8: the mean behavior of the One State algorithm coincides with its behavior for each possible input occurrence, except on a π-negligible set. To prove Theorem 9, we will refer to the theory of Markov Chains in Random Environments (see Sections 5.1 and 5.2 in the Appendix).

Theoretic Analysis of the Two States Algorithm

As with the One State algorithm, the Two States procedure can be studied through Markov theory, which provides the instruments to compute both the BER and the CBER. As shown in Section 3.2, the Two States procedure stores, at each step, a state and its normalized probability, this information being sufficient to identify also the second state and its probability. Let X̂_k be the r.v. representing the stored state, X_k the current correct state, D_k = X_k − X̂_k and A_k the r.v. corresponding to the probability of X̂_k: the stochastic process (A_k, D_k)_{k∈N} in [0, 1] × Z is then a Markov Process, whose definition (which extends the definition of a Markov Chain from a denumerable to a continuous state set) is now given.

Markov Processes

The definitions and results introduced in this Section can be retrieved in [12] or in Chapter 2 of [11]. Consider a set X endowed with a countably generated σ-field F. A transition probability kernel (or Markov probability kernel, see, e.g., [12, Section 3.4.1]) on (X, F) is a map P : X × F → [0, 1] such that (i) for each F ∈ F, P(·, F) is a non-negative measurable function; (ii) for each x ∈ X, P(x, ·) is a probability measure (p.m. for short) on (X, F).
Given a bounded measurable function v on (X, F), we denote by Pv the bounded measurable function on (X, F) defined as

(Pv)(x) = ∫_X v(y) P(x, dy).    (27)

Further, let µ be a measure on (X, F): we define the measure µP by

(µP)(F) = ∫_X P(x, F) µ(dx),    F ∈ F.    (28)

We define the n-th power of the transition kernel P simply putting P¹(x, F) = P(x, F) and P^n(x, F) = ∫_X P(x, dy) P^{n−1}(y, F). It is easy to see that the P^n(x, F) are transition kernels, too. The corresponding actions on bounded functions and on measures will be respectively denoted by P^n v and µP^n.

Definition 10 [12, (10.1)] A measure ψ on (X, F) is said to be invariant for the transition kernel P if ψP = ψ.

We define a homogeneous Markov Process on the space (X, F) with transition kernel P as a sequence of X-valued random variables (X_n)_{n∈N} such that, for any x ∈ X and F ∈ F,

Prob(X_{n+1} ∈ F | X_n = x, X_{n−1}, …, X_0) = Prob(X_{n+1} ∈ F | X_n = x) = P(x, F)

for any n ∈ N. The evolution of (X_n)_{n∈N} is completely described once we fix a probability law µ of X_0 on (X, F); if µ is invariant, then the Markov Process is said to be stationary: all the r.v.'s X_n are distributed according to µ. Notice also that for any x ∈ X and F ∈ F, Prob(X_{m+n} ∈ F | X_m = x) = P^n(x, F) for any m, n ∈ N.

From now onwards, we will assume that X is a locally compact separable metric space: under this topological condition we can easily prove the existence of an invariant measure (see [12, Section 12.3]). Let B(X) be the Borel σ-algebra of X.

Definition 11 [12, Sections 6.1.1, 11.3.1] Let P be a transition kernel on (X, B(X)). If P(·, O) is a lower semicontinuous function for any open set O ∈ B(X), then P is said to be weak Feller. Moreover, we say that P verifies the Drift Condition if there exist a compact set C ⊂ X, a constant b < ∞ and a function V : X → [0, ∞], not identically infinite, such that

∆V(x) := ∫_X P(x, dy) V(y) − V(x) ≤ −1 + b·1_C(x)    (29)

for every x ∈ X.
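The Drift Condition (29) can be checked numerically on a hypothetical toy chain on Z: a random walk that steps toward the origin with probability p and away with probability 1 − p, with Lyapunov function V(x) = scale·|x| and small set C = {0}. All names and parameter values below are illustrative, not taken from the paper.

```python
def drift_check(p=0.7, scale=3.0, b=4.5, xmax=50):
    """Check the Drift Condition (29) on a toy biased walk on Z:
    from x != 0 step toward 0 w.p. p and away w.p. 1-p;
    from 0 step to +1 or -1 w.p. 1/2 each.
    Lyapunov function V(x) = scale*|x|, small set C = {0}.
    Returns True iff Delta V(x) <= -1 + b*1_C(x) for all |x| <= xmax."""
    V = lambda x: scale * abs(x)
    for x in range(-xmax, xmax + 1):
        if x == 0:
            drift = 0.5 * V(1) + 0.5 * V(-1) - V(0)   # = scale
        else:
            toward = x - 1 if x > 0 else x + 1
            away = x + 1 if x > 0 else x - 1
            drift = p * V(toward) + (1 - p) * V(away) - V(x)
        # small tolerance guards against floating-point roundoff
        if drift > -1.0 + (b if x == 0 else 0.0) + 1e-9:
            return False
    return True
```

With p = 0.7 the drift off the origin is scale·(1 − 2p) = −1.2 ≤ −1, so the condition holds with C = {0} and b = 4.5; with p = 0.5 (no bias) the drift is 0 and the check fails, as it should.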
A fundamental issue for our analysis is the Ergodic Theorem of Markov Processes, which is the transposition into stochastic terms of the Birkhoff's Individual Ergodic Theorem ([24, Theorem 1.14]). Here we report its version under the ergodicity condition for an invariant p.m.; for a more general treatise, see [9,11]. Theorem 17 (Ergodic Theorem) [11, Theorem 2.3.4 -Proposition 2.4.2] Assume that a kernel P on (X, B(X)) admits an ergodic invariant p.m. µ. Then, for any non-negative function v ∈ L 1 (X, B(X), µ), lim K→∞ 1 K K−1 k=0 (P k v)(x) = X v dµ for µ-a.e. x ∈ X. Finally, we report a result of direct convergence for the iterates of the kernel, in the case of no periodic behavior. In order to completely define the process, we provide also an initial distribution L × κ, L and κ respectively being the usual Lebesgue measure on [0, 1] and the counting measure on Z. The transition probability kernels will be explicitly computed in the Appendix 5.3. Proposition 20 The kernel of (A k , D k ) k∈N admits an invariant p.m. φ. Proof We prove that the kernel of (A k , D k ) satisfies both the Weak Feller Property and the Drift Condition; the result will then follow from Proposition 12. First, we check the Drift Condition. By equations (49)-(51) in the Appendix, P (α, d), [0, 1] × {d + 1} = 1 4 erfc   σ 2 log α 1−α + d + 1 σ √ 2   P (α, d), [0, 1] × {d − 1} = 1 2 − 1 4 erfc   σ 2 log α 1−α + d σ √ 2   .(30) In particular, P (α, d), and V (α, d) = d 2 if d ≥ 0, α ≥ δ d or if d < 0, α ≤ 1 − δ d ; d 2 + 2|d| otherwise.(32) We are going to prove that V fulfills the Drift inequality for some compact C: ∆V (α, d) = [0,1]×Z P (α, d), d(α , d ) V (α , d ) − V (α, d) ≤ −1 + b1 C (α, d) (33) for every (α, d) ∈ [0, 1] × Z. In order to individuate C, let us find out the values of (α, d) such that (33) holds with 1 C (α, d) = 0. Recall that P (α, d), A × {d } > 0 ⇒ d ∈ {d − 1, d, d + 1} for any α ∈ [0, 1], A ∈ B([0, 1]). 
In the next, let us use the notation ω = (α, d), ω = (α , d ). If d ≥ 0, ∆V (ω) = 1 0 d+1 d =d−1 P (ω, (dα , d ))V (ω ) − V (ω) = d+1 d =d−1 δ d 0 P (ω, (dα , d ))(2d + d 2 ) + 1 δ d P (ω, (dα , d ))d 2 − V (ω) = d+1 d =d−1 1 0 P (ω, (dα , d ))d 2 + δ d 0 P (ω, (dα , d ))2d − V (ω) = d+1 d =d−1 P (ω, [0, 1] × {d })d 2 + P (ω, [0, δ d ] × {d })2d − V (ω) = d 2 + 2d[P (ω, [0, 1] × {d + 1}) − P (ω, ([0, 1] × {d − 1})] + P (ω, [0, 1] × {d + 1}) + P (ω, [0, 1] × {d − 1}) + 2dP (ω, [0, δ d ] × Z) + 2[P (ω, [0, δ d ] × {d + 1}) − P (ω, [0, δ d ] × {d − 1})] − V (ω). As P (ω, [0, 1] × {d + 1}) + P (ω, [0, 1] × {d − 1}) ≤ 1 2 (see equations (30)) and P (ω, [β 1 , β 2 ] × Z) ≤ G(β 2 − β 1 ) (see Lemma 28 in the Appendix 5.6). ∆V (ω) ≤ d 2 + 2d[P (ω, [0, 1] × {d + 1}) − P (ω, [0, 1] × {d − 1})] + 1 2 + 2(d + 1)Gδ d − V (ω) ≤ d 2 + 2d[P (ω, [0, 1] × {d + 1}) − P (ω, [0, 1] × {d − 1})] + 1 2 + G − V (ω)(34) where we exploited that 2(d + 1)Gδ d < G by the definition (31) of δ d . If d < 0, by analogous computation we obtain again the inequality (34). Let us study the behavior of this bound for every ω ∈ [0, 1] × Z, according to the partition of [0, 1] × Z into four subsets given by the definition of V . Subset 1: If d ≥ 0 and α ≥ δ d , V (ω) = d 2 and P (ω, [0, 1] × {d + 1}) ≤ 1 4 erfc   σ 2 log δ d 1−δ d + d σ √ 2   P (ω, [0, 1] × {d − 1}) ≥ 1 2 − 1 4 erfc   σ 2 log δ d 1−δ d + d σ √ 2   hence inequality (34) becomes ∆V (ω) ≤ G + d   erfc   σ 2 log δ d 1−δ d + d σ √ 2   − 1   + 1 2 = G + d erfc − σ 2 2 log(2d + 19) + d σ √ 2 − 1 + 1 2 . As erfc(x) ∈ (1, 2) when the argument x is negative, then for d is sufficiently large the quantity in the square bracket is negative. Moreover, this quantity is multiplied by d; hence, there necessarily exists an integer d + 0 > 0, depending on the noise σ, such that for any d > d + 0 , ∆V (ω) ≤ −1. 
Subset 2: If d < 0 and α ≤ 1 − δ d , P (ω, [0, 1] × {d + 1}) ≥ 1 4 erfc   −σ 2 log δ d 1−δ d + d + 1 σ √ 2   P (ω, [0, 1] × {d − 1}) ≤ 1 2 − 1 4 erfc   −σ 2 log δ d 1−δ d + d + 1 σ √ 2   hence inequality (34) becomes ∆V (ω) ≤ G + d   erfc   −σ 2 log δ d 1−δ d + d + 1 σ √ 2   − 1   + 1 2 = G + d erfc σ 2 2 log(−2d + 19) + d + 1 σ √ 2 − 1 + 1 2 . The computation is now analogous to the previous case and we conclude that there necessarily exists an integer d − 0 < 0, depending on the noise, such that for any d < d − 0 , ∆V (ω) ≤ −1. ∆V (ω) ≤ d 2 + G + 1 2 + d − d 2 − 2d = G + 1 2 − d hence ∆V (ω) ≤ −1 if d > d 1 = G + 3 2 .P b (e) = 1 K K−1 k=0 P ( U k = U k ) = 1 K K−1 k=0 1 0 d∈Z P ( U k = U k , A k = α, D k = d)dα = 1 K K−1 k=0 1 0 d∈Z P ( U k = U k |A k = α, D k = d)P k (1, 0); (dα, d) . the initial state (1, 0) being discussed in the Remark 2. Let q(α, d) = P ( U k = U k |α k = α, D k = d) (notice that q(α, d) actually does not depend on k) so that P b (e) = 1 Proof (A k , D k ) k∈N is (L × κ)-irreducible (the proof of this fact requires some technical computation and is postponed in the Appendix 5.5), then φ is unique and ergodic by Propositions 14 and 16. Therefore, by the Ergodic Theorem 17, lim K→∞ 1 K K−1 k=0 (P k q)(α, d) = [0,1]×Z q d φ φ-a.e. (α, d). This result cannot be immediately applied to evaluate the BER since the convergence is not assured for all the initial states. In particular, let call N ⊂ [0, 1]×Z the negligible set for which there is no convergence and let N 0 = {α ∈ [0, 1] : (α, 0) ∈ N }. Now, recalling the Remark 1, 0), (dα 1 , 0))(P k−1 q)(α 1 , 0). P b (e) = 1 K q(1, 0) + 1 K K−1 k=1 α1∈[0,1] d1∈Z P ((1, 0), (dα 1 , d 1 ))(P k−1 q)(α 1 , d 1 ) = 1 K q(1, 0) + 1 K K−1 k=1 α1∈[0,1] P ((1, By the Lebesgue's Dominated Convergence Theorem, lim K→∞ P b (e) = α1∈[0,1] P ((1, 0), (dα 1 , 0)) lim K→∞ 1 K K−1 k=1 (P k−1 q)(α 1 , 0). 
Notice that L(N_0) = 0: otherwise, by Proposition 27,

φ(N_0 × {0}) = ∫_{[0,1]×Z} P(ω, N_0 × {0}) φ(dω) > C_{ε,0} L(N_0) > 0,

which is impossible since N is φ-negligible. By Proposition 28, this implies that P((1, 0), N_0 × {0}) = 0. Finally,

lim_{K→∞} P_b(e) = ∫_{α_1 ∈ [0,1]\N_0} P((1, 0), (dα_1, 0)) lim_{K→∞} (1/K) ∑_{k=1}^{K−1} (P^{k−1} q)(α_1, 0)
= ∫_{α_1 ∈ [0,1]\N_0} P((1, 0), (dα_1, 0)) ∫_{[0,1]×Z} q dφ = ∫_{[0,1]×Z} q dφ

as (α_1, 0) ∉ N. The function q(α, d) is explicitly computed in the Appendix 5.4.

The Conditional BER

The CBER for the Two States algorithm can be derived just as we computed it for the One State case; in fact, an analogous convergence result holds (Theorem 22). We refer the reader to the Appendix 5.7 for the proof.

Direct Convergence to φ

The explicit construction of an invariant p.m. is an intricate issue in the uncountable framework. When ergodic results are available, one can approximate it by several procedures (see, e.g., [11, Chapter 12]). In our framework, we can obtain an approximation by Proposition 19, which states the direct convergence of the iterates P^n(·, ·) to the invariant p.m. Before illustrating that, let us prove that the hypotheses of Proposition 19 hold.

Analytic vs Simulations' outcomes

To conclude our analysis of the One State and Two States algorithms, we compare the simulations' outcomes with the theoretic results: we expect the BERs obtained by simulating sufficiently long transmissions to be consistent with the analytic computations. By Corollaries 8 and 21, the BERs can be computed once we know the corresponding invariant distributions. While for the One State algorithm the invariant measure is explicitly given by (25), for the Two States algorithm we have approximated it using Corollary 24. In particular, we have discretized the kernel P into a matrix and then computed the iterates P^n for a sufficiently large n, so as to reach an equilibrium condition, that is, a matrix whose rows are all equal up to numerical roundoff.
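The discretize-and-iterate procedure just described can be sketched as follows. The 3-state matrix below is only a stand-in for the actual (much larger) discretization of the kernel; repeated squaring computes P^(2^n), and once all rows agree up to roundoff, any row approximates the invariant p.m.

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def invariant_by_iteration(P, squarings=40):
    """Approximate the invariant p.m. of a discretized kernel (a
    row-stochastic matrix) by repeated squaring: P -> P^2 -> P^4 -> ...
    After enough iterations the rows agree up to roundoff and any row
    approximates the invariant p.m.  A generic sketch; P stands for any
    row-stochastic discretization of the kernel."""
    Pn = P
    for _ in range(squarings):
        Pn = mat_mul(Pn, Pn)
    return Pn[0]

# toy discretized kernel: a 3-state birth-death chain
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
pi = invariant_by_iteration(P)
```

For this toy chain the invariant vector is (1/4, 1/2, 1/4), which the iteration recovers; direct convergence of the iterates requires aperiodicity, as in Proposition 19.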
At this point, any row of the matrix is a discretized, approximated version of the invariant p.m. In Figures 4 and 5, we compare analytic and simulations' outcomes: as expected, they do not present substantial differences.

Appendix

Markov Chains in Random Environments

Consider a countable set Θ and a family of transition probability kernels {P_θ, θ ∈ Θ} on a space (X, F). Given a σ-field B of Θ, let (θ_n)_{n∈N} and (X_k)_{k∈N} respectively be sequences of Θ-valued and X-valued r.v.'s. P_{θ_k}(X_k, F) can now be interpreted as the transition probability of X_k to the set F depending on the r.v. θ_k, which represents the so-called random environment. We say that (X_k)_{k∈N} with (θ_n)_{n∈N} is a Markov Chain in Random Environment (or MCRE) if

P(X_{k+1} ∈ F | X_k, …, X_0, (θ_n)_{n∈N}) = P_{θ_k}(X_k, F)    a.s.

for all F ∈ F and k = 0, 1, …. An important feature of a MCRE is that we can always associate to it a classical Markov Process. In fact, given any x ∈ X and θ = (θ_0, θ_1, …) ∈ Θ^N, and denoting by T the left sequence shift on Θ^N (that is, Tθ = θ′ with θ′_n = θ_{n+1} for any n ∈ N), we can introduce the following transition probability kernel on (X × Θ^N, F × B^N):

P((x, θ), F × B) = P_{θ_0}(x, F) 1_B(Tθ)    (36)

which determines a Markov Process (X_k, T^k(θ_n)_{n∈N})_{k∈N} on (X × Θ^N, F × B^N). From now onwards, we will refer to it as the Extended Markov Process, EMP for short.

Remark 3 As noted in Section 1 of [5], if the random environments θ_n are independent, then (X_k)_{k∈N} is a Markov Process with transition probability kernel P(x, F) = E[P_{θ_0}(x, F)]. In other terms, (X_k)_{k∈N} is the Markov Process moving in the average environment.

In this framework, we prove the following

Proposition 25 Let (X_k)_{k∈N} with (θ_n)_{n∈N} be a MCRE on X × Θ^N.
Suppose that the random environments θ n 's are independent, identically distributed with distribution π 0 on (Θ, B) and that the kernel of the Markov Process (X k ) k∈N admits an invariant p.m. φ; given the distribution π = × ∞ n=0 π 0 over (Θ N , B N ), ψ = φ × π(37) is an invariant p.m. for the EMP X k , T k (θ n ) n∈N k∈N over (X × Θ N , F × B N ). Proof Let ω = (x, θ) ∈ X × Θ N . ψ is an invariant for (X k , θ k ) k∈N if X×Θ N P (ω, F × B)ψ(dω) = ψ(F × B) for any F × B such that F ∈ F, B ∈ B N . Now, X×Θ N P (ω, F × B)ψ(dω) = X Θ N P θ0 (x, F )1 B (θ 1 , θ 2 , . . . )π(dθ)φ(dx) = π(B) X θ0∈Θ P θ0 (x, F )π 0 (θ 0 )φ(dx) = π(B) X P (x, F )φ(dx) = π(B)φ(F ) = ψ(F × B) where we have exploited the fact that φ is invariant. This Proposition is a partial extension of the Theorem 5 in [13], which states the same result in the case of denumerable state space X and attests also the inverse implication (that is, all the invariant p.m.'s are product measures of kind (37) still in the denumerable framework. For a more detailed treatise on MCRE's, we refer the reader to [5,6,13,14]. Proof of Theorem 9 From equation 24, (D k ) k∈N with (U k ) k∈N turns out to be a countable MCRE. This is the right way to look at (D k ) k∈N if we want to understand its behavior with respect to typical instances of the input U = (U 0 , U 1 , . . . ). For any x, y ∈ Z, we have 1})) endowed with the initial distribution κ × π, where κ is the counting measure on Z and π is the usual uniform Bernoulli measure on {0, 1} N . Given x, y ∈ Z, u = (u 0 , u 1 , . . . ) ∈ {0, 1} N and B ∈ ∞ 0 P({0, 1}), the EMP is defined by the transition probability kernel P (x, u); {y} × B = P x,y (u 0 )1 B (T u). P (D k+1 = y|D k = x, D k−1 , . . . , D 0 ; U) = P x,y (U k ). Consider the space (Z × {0, 1} N , P(Z) × ∞ 0 P({0, By Proposition 25, an invariant probability measure exists for our EMP and we explicitly compute it: in fact, let φ be a p.m. 
on (Z, P(Z)) given by φ({d}) = Φ d , Φ d being the invariant probability vector defined in the Proposition 7, for any integer d. Then, ψ = φ × π is an invariant p.m. for the EMP. We can verify that ψ is ergodic by the following criterion (see Chapter 3 of [6]). Let P(U 0 , . . . U n−1 ) the transition matrix whose entries are P x,y (U 0 , . . . U n−1 ) = P(D n = y|D 0 = x, U 0 , . . . U n−1 ).(39) If for each x, y ∈ Z and π-a.e. U there exist n = n(x, y, U) ∈ N and z = z(x, y, U, n) ∈ Z such that P x,z (U 0 , . . . , U n−1 )P y,z (U 0 , . . . U n−1 ) > 0, then ψ is ergodic. In our context it is easy to check that given any couple of starting states x and y, after n > |x − y| steps we have a non-null probability of having joined a common state z. Define q d (U k ) = P [ U k = U k |D k = d, U k ] = P d,d+1 (U k ) + P d,d−1 (U k ) (q d is actually the mean of q d ). For any K ∈ N and given D 0 = 0, the CBER can be expressed as follows: P b (e|U) = 1 K K−1 k=0 d∈Z q d (U k )P 0,d (U 0 , U 1 , . . . U k−1 )(40) Notice that, since the U k 's k ∈ N are independent, P(U 0 , U 1 , . . . , U k−1 ) = P(U 0 )P(U 1 ) · · · P(U k−1 ). Consider ω = (x, U) and the function g(ω) = q x (U 0 ): we have that d∈Z q d (U k )P x,d (U 0 , . . . U k−1 ) = P k g(x, U) and notice that P b (e|U) = 1 K K−1 k=0 P k g(0, U).(41) Now, by the Ergodic Theorem 17: lim K→∞ 1 K K−1 k=0 P k g(ω) = Z×{0,1} N g(ω)ψ(dω) for ψ-a.e. ω.(42) Notice that, as pointed out after Proposition 7, φ({d}) > 0 for any d ∈ Z; then, a set {d} × B, d ∈ Z, B ⊂ {0, 1} N , is ψ-negligible if and only if π(B) = 0. Hence, in (42), "ψ-a.e. ω" is equivalent to "for any d ∈ Z and π-a.e. U". This, along with the equality (41), implies that lim K→∞ P b (e|U) = Z×{0,1} N g(ω)ψ(dω) for π-a.e. U.(43) Finally, recalling that ψ = φ × π, Z×{0,1} N g(ω)ψ(dω) = d∈Z U0=0,1 q d (U 0 )π(U 0 )Φ d = d∈Z q d Φ d . 
Two States Algorithm: Computation of the Transition Probabilities In the next pages, we compute the probability of moving from a state (α, d) ∈ c α = exp(1/σ 2 ) α(1 − α) h x,y (z) = σ 2 log x 1−z z + y + 1 2 σ √ 2 H x,y (z) = 1 2 erfc (h x,y (z)) .(44) Notice that these quantities depend on the noise variance σ 2 , even if the notation does not emphasize that. Remind also Definition (22). Case 1: d = d, u = 0. P 0 (α, d), (0, β) × {d} = Prob(ζ 3 ≤ ζ 1 ≤ β(ζ 1 + ζ 2 )|A k = α, D k = d, U k = 0) =      0 if α = 0 or if α ∈ (0, 1) and β ≤ 1 1+cα H α,d (β) − H α,d 1 1+cα if α ∈ (0, 1) and β > 1 1+cα H 1,d (β) if α = 1.(45) Case 2: d = d, u = 1. P 1 (α, d), (0, β) × {d} = = Prob (ζ 3 ≥ ζ 1 ) ∩ (βζ 3 ≥ (1 − β)ζ 2 )|A k = α, D k = d, U k = 1 =      H 1 1−α ,d (β) if α = 0 or if α ∈ (0, 1) and β ≤ cα 1+cα H 1 1−α ,d cα 1+cα if α ∈ (0, 1) and β > cα 1+cα 0 if α = 1.(46)Case 3: d = d + 1, u = 0. P 0 (α, d), (0, β) × {d + 1} = = Prob (ζ 3 ≥ ζ 1 ) ∩ (βζ 3 ≥ (1 − β)ζ 2 )|A k = α, D k = d, U k = 0 =      H 1 1−α ,d+1 (β) if α = 0 or if α ∈ (0, 1) and β ≤ cα 1+cα H 1 1−α ,d+1 cα 1+cα if α ∈ (0, 1) and β > cα 1+cα 0 if α = 1.(47) Case 4: d = d − 1, u = 1. P 1 (α, d), (0, β) × {d − 1} = = Prob(ζ 3 ≤ ζ 1 ≤ β(ζ 1 + ζ 2 )|A k = α, D k = d, U k = 1) =      0 if α = 0 or if α ∈ (0, 1) and β ≤ 1 1+cα H α,d−1 (β) − H α,d−1 1 1+cα if α ∈ (0, 1) and β > 1 1+cα H 1,d−1 (β) if α = 1.(48) Remark 4 : As c α > 2, 1 1+cα < 1 3 < 2 3 < cα 1+cα . 
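Remark 4 can be double-checked numerically; the small helper below just evaluates c_α from (44) on a grid of (α, σ) values, and the grid itself is an arbitrary illustrative choice.

```python
from math import exp

def c_alpha(alpha, sigma):
    """c_alpha = exp(1/sigma^2) / (alpha * (1 - alpha)) from (44), alpha in (0,1)."""
    return exp(1.0 / sigma ** 2) / (alpha * (1.0 - alpha))

# Since alpha*(1-alpha) <= 1/4, we have c_alpha >= 4*exp(1/sigma^2) > 4 > 2,
# hence 1/(1+c_alpha) < 1/3 < 2/3 < c_alpha/(1+c_alpha) for every alpha, sigma.
remark4_holds = all(
    (c := c_alpha(a, s)) > 2 and 1 / (1 + c) < 1 / 3 < 2 / 3 < c / (1 + c)
    for s in (0.3, 1.0, 5.0)
    for a in (0.01, 0.25, 0.5, 0.75, 0.99)
)
```

The worst case is α = 1/2 with large σ, where c_α approaches 4 from above, still comfortably inside the bound claimed in Remark 4.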
Summing up: P (α, d), (0, β) × {d} = 1 2                    H 1,d (β) if α = 0 or if α = 1 H 1 1−α ,d (β) if α ∈ (0, 1) and β ≤ 1 1+cα H α,d (β) − H α,d 1 1+cα + H 1 1−α ,d (β) if α ∈ (0, 1) and 1 1+cα < β ≤ cα 1+cα H α,d (β) − H α,d 1 1+cα + H 1 1−α ,d cα 1+cα if α ∈ (0, 1) and β > cα 1+cα (49) P (α, d), (0, β) × {d + 1} = 1 2      H 1 1−α ,d+1 (β) if α = 0 or if α ∈ (0, 1) and β ≤ cα 1+cα H 1 1−α ,d+1 cα 1+cα if α ∈ (0, 1) and β > cα 1+cα 0 if α = 1 (50) P (α, d), (0, β) × {d − 1} = 1 2      0 if α = 0 or if α ∈ (0, 1) and β ≤ 1 1+cα H α,d−1 (β) − H α,d−1 1 1+cα if α ∈ (0, 1) and β > 1 1+cα H 1,d−1 (β) if α = 1. (51) Two States Algorithm: Computation of q(α, d) The function q on [0, 1] × Z defined in the Corollary 21 is given by q(α, d) = 1 2 P( U k = 1|U k = 0, A k = α, D k = d) + 1 2 P( U k = 0|U k = 1, A k = α, D k = d). Note that P( U k = 1|U k = 0, A k = α, D k = d) = = Prob αf (Y k+1 |X k+1 ) (y k+1 | x k + 1) + (1 − α)f (Y k+1 |X k+1 ) (y k+1 | x k + 2) > αf (Y k+1 |X k+1 ) (y k+1 | x k ) + (1 − α)f (Y k+1 |X k+1 ) (y k+1 | x k + 1) = 1 2 erfc σ 2 log z 1 + d + 1 2 √ 2σ where z 1 is the positive solution of the equation (1−α)e − 1 σ 2 z 2 +(2α−1)z−α = 0. Similarly, P( U k = 0|U k = 1, A k = α, D k = d) = 1 − 1 2 erfc σ 2 log z 1 + d − 1 2 √ 2σ hence q(α, d) = 1 2 1 2 erfc σ 2 log z 1 + d + 1 2 √ 2σ + 1 − 1 2 erfc σ 2 log z 1 + d − 1 2 √ 2σ . Naturally, if α = 1, then q(α, d) = 1 2 1 2 erfc d+ 1 2 √ 2σ + 1 − 1 2 erfc d− 1 2 √ 2σ = q d and we recast into the One State case. Lemma 26 For any ε > 0, d ∈ Z, there exists a constant C ε,d > 0 such that the following inequalities hold for every (α, d) ∈ [0, 1] × Z and M ∈ B([ε, 1 − ε]),: Two States P (α, d), M × {d} ≥ C ε,d L(M ) P 2 (α, d), M × {d + 1} ≥ C ε,d L(M ) P 2 (α, d), M × {d − 1} ≥ C ε,d L(M ) where L is the Lebesgue measure. Proof First, we prove the lemma on the open intervals (β 1 , β 2 ) ⊂ [ε, 1 − ε]. For shortness of notation, letᾱ = 1 1−α . 
Consider the first inequality. On the basis of the equations (49) and Remark 4, the following cases may occur: 1 3 ]: 1. If α = 0, (β 1 , β 2 ) ∈ [ε, 1 − ε] or if α ∈ (0, 1), (β 1 , β 2 ) ⊂ [ε, 1 1+cα 1 3 ] ⊆ [ε, 3. Otherwise: it is straightforward to verify that P (α, d), (β 1 , β 2 ) × {d} ≥ σ 2/π (m α,d + mᾱ ,d ) (β 2 − β 1 ). Finally, if we consider m(α, d, β 1 , β 2 ) =    mᾱ ,d if α = 0 or if α ∈ (0, 1) and ε < β 1 < β 2 ≤ 1 1+cα ; m α,d if α = 1 or if α ∈ (0, 1) and cα 1+cα < β 1 < β 2 ≤ 1 − ε; m α,d + mᾱ ,d otherwise. (52) and C (1) ε,d = σ 2 π min α∈[0,1] (β 1 ,β 2 )⊂[ε,1−ε]m (α, d, β 1 , β 2 )(53) we conclude that for any ε > 0, d ∈ Z, P (α, d), (β 1 , β 2 ) × {d} ≥ C (1) ε,d (β 2 − β 1 ) C (1) ε,d > 0.(54) Let us prove the second inequality, on the basis of equations (50). In this case, the component d of the state moves to d + 1, which is not always possible in one step. In particular, there are two situations in which the transition probability is null: α = 1 and when β 1 = cα 1+cα (and given the continuity of (50, problems occur whenever α → 1 or β 1 → cα 1+cα ). Both issues can be solved considering two-step transition: roughly speaking, if α is close to 1, a first step is used to move α away from 1 (and d remains constant); at this point, the probability to move d to d + 1 is positive. On the other hand, when β 1 is close to cα 1+cα a first step is used to move d to d + 1 and a second one to move the component α to the desired interval (and now this is possible since we recast in the case in which d remains constant, previously studied). Let us assess this qualitative argumentation. 1. If α = 0, (β 1 , β 2 ) ∈ [ε, 1 − ε] or if α ∈ (0, 1 − δ 1 ] for some small δ 1 > 0, (β 1 , β 2 ) ⊂ [ε, cα 1+cα ]: P (α, d), (β 1 , β 2 )×{d+1} ≥ σ 2/π min α∈[0,1−δ1] mᾱ ,d+1 (β 2 −β 1 ) > 0 (55) where the positiveness of min α∈[0,1−δ1] mᾱ ,d+1 > 0 as been discussed above. 2. 
If α ∈ (0, 1 − δ 1 ], β 1 ∈ [ε, cα 1+cα − δ 2 ] for some small δ 1 , δ 2 > 0 and β 2 ∈ [ cα 1+cα , 1 − ε]: the transition probability depends on β 1 , not on β 2 , and P (α, d), (β 1 , β 2 ) × {d + 1} ≥ σ 2/π min α∈(0,1−δ1] mᾱ ,d+1 c α 1 + c α − β 1 where cα 1+cα − β 1 ≥ δ 2 ≥ δ 2 (β 2 − β 1 ). Let us now consider the cases that require two steps to move with non-null probability into the desired set. For this purpose, notice that P 2 (α, d), (β 1 , β 2 ) × {d + 1} = = 1 0 d =d,d+1 P (α, d), (dα , d ) P (α , d ), (β 1 , β 2 ) × {d + 1}(56) 3. If α ∈ (0, 1 − δ 1 ], β 1 ∈ ( cα 1+cα − δ 2 , β 2 ) and β 2 ∈ [ cα 1+cα , 1 − ε], we exploit that P 2 (α, d), (β 1 , β 2 ) × {d + 1} ≥ ≥ 1 0 P (α, d), (dα , d + 1) P (α , d + 1), (β 1 , β 2 ) × {d + 1}(57) As (54), P (α , d + 1), (β 1 , β 2 ) × {d + 1} ≥ C (1) ε,d+1 (β 2 − β 1 ) byP 2 (α, d), (β 1 , β 2 ) × {d + 1} ≥ C (1) ε,d+1 (β 2 − β 1 )P (α, d), ([0, 1], d + 1) ≥ C (1) ε,d+1 (β 2 − β 1 )P (α, d), ([ε, 1 − ε], d + 1) ≥ ≥ C (1) ε,d+1 (β 2 − β 1 )σ 2/π(1 − 2ε) min α∈(0,1−δ1] mᾱ ,d+1 .(58) 4. If α ∈ (1 − δ 1 , 1], we exploit that P 2 (α, d), (β 1 , β 2 ) × {d + 1} ≥ ≥ 1 0 P (α, d), (dα , d) P (α , d), (β 1 , β 2 ) × {d + 1} .(59) A sufficient condition to have P (α , d), Reducing the domain of integration to [0, α], we obtain (β 1 , β 2 )×{d+1} > 0 is β 2 ≤ c αP 2 (α, d), (β 1 , β 2 ) × {d + 1} ≥ ≥ α 0 P (α, d), (dα , d) P (α , d), (β 1 , β 2 ) × {d + 1} ≥ α 0 P (α, d), (dα , d) σ 2/π mᾱ ,d+1 (β 2 − β 1 ) ≥ σ 2/π min α ∈[0, α] mᾱ ,d+1 (β 2 − β 1 )P (α, d), ([0, α], d) ≥ σ 2/π min α ∈[0, α] mᾱ ,d+1 (β 2 − β 1 )C (1) ε,d α.(60) Finally, gathering the bounds obtained in the previous four cases, we obtain P 2 (α, d), (β 1 , β 2 ) × {d + 1} ≥ C (2) ε,d (β 2 − β 1 ).(61) where C (2) ε,d = δ 2 (1 − 2ε) ασ 2/π min α∈[0,1−δ1] mᾱ ,d+1 min{C (1) ε,d , C(1) ε,d+1 } > 0. We omit the proof of the third inequality as it is analogous to the second one: by the same argumentation, we obtain a suitable constant C (3) ε,d . 
Finally, for any small ε > 0 and d ∈ Z, C ε,d = min{C (1) ε,d , C (2) ε,d , C ε,d }. P r (α, d), ∩ N n=1 O n × {d } ≥ C ε L(∩ N n=1 O n ) ≥ C ε L(∩ ∞ n=1 O n ) = C ε L(M ) for any d ∈ {d − 1, d, d + 1} and r = 1, 2 according to the value of d . This inequality holds for any N ∈ N, hence lim N →∞ P r (α, d), ∩ N n=1 O n × {d } = P r ((α, d), ∩ ∞ n=1 O n × {d }) ≥ C ε L(M ). By this lemma, it follows in particular that for any M ∈ B([ε, 1 − ε]), P 2|d−d | (α, d), M × {d } ≥ C |d−d | ε,d L(M ) if d = d ; P (α, d), M × {d} ≥ C ε,d L(M ). Moreover, Proposition 27 For any M ∈ B([0, 1]) with L(M ) > 0, P 2|d−d | (α, d), M × {d } > 1 2 C |d−d | ε,d L(M ) if d = d ; P (α, d), M × {d} > 1 2 C ε,d L(M ). In particular, (A k , D k ) k∈N is (L × κ)-irreducible, κ being the counting measure. − ε] c ) ≥ λ − 2ε and we can always choose ε = ε(λ) such that λ > 2ε. For instance, let us choose ε = λ 4 , so that λ − 2ε = λ 2 . Therefore, x,y (z) (−h x,y (z))dz with x = α, 1/(1 − α) and y = d − 1, d, d + 1 according to the instance. As we have shown in the Proof of Lemma 2, h x,y (z) = σ z(z−1) √ 2 , hence g(z) = −e −h 2 x,y (z) h x,y (z) > 0 for every z ∈ (0, 1). Furthermore, g (z) is monotone decreasing over (0, 1) and null in one point z 0 ∈ (0, 1) corresponding to the unique solution of the equation h x,y (z) = √ 2 σ ( 1 2 − z); hence g(z) is increasing in (0, z 0 ), decreasing in (z 0 , 1) and admits a maximum in z 0 ∈ (0, 1). In conclusion, P 2|d−d | (α, d), M × {d } ≥ P 2|d−d | (α, d), (M ∩ [ε, 1 − ε]) × {d } ≥ C ε,d L(M ∩ [ε, 1 − ε]) > λ 2 C |d−d | ε,d when d = d , Proof of Theorem 22 The process (A k , D k ) k∈N with (U k ) k∈N is an instance of MCRE. The corresponding EMP in Ω = [0, 1] × Z × {0, 1} N is defined by the following transition probability kernel: Proof Let F ⊂ Ω be an invariant set: by Definition 15, to prove the ergodicity of ψ is sufficient to show that ψ(F ) > 0 implies ψ(F ) = 1. Then, let us suppose ψ(F ) > 0. 
We name U F = u ∈ {0, 1} N : (α, d, u) ∈ F for some (α, d) ∈ [0, 1] × Z ; U 0 = u ∈ {0, 1} N : u contains infinitely many 0's and 1's ; U n 0 = u ∈ U 0 : u contains at least a 0 and a 1 in its first n bits , n ≥ 2. Given the transition probability kernel (62), if u ∈ U F then also T u ∈ U F and since π is an ergodic measure with respect to the shift operator T (see [24,Section 1.5]) and π(U F ) > 0 (otherwise ψ(F ) = 0), we have that π(U F ) = 1 by the Birkhoff's Individual Ergodic Theorem ([24, Theorem 1.14]). By analogous reasoning, π(U 0 ) = 1. Furthermore, U n 0 ⊂ U n+1 0 , then U n 0 ↑ U 0 . This implies the existence of an n 0 ≥ 2 such that π(U n0 0 ) > 0. At this point, let us consider the equations (45)-(48): by applying the procedure used to prove Lemma 26 and Proposition 27, it is easy to verify that for any (α, d) ∈ (0, 1) × Z, Notice also that we are not considering the negligible cases α = 0 and α = 1, which may prevent the one-step transition (see (45)-(48)). Maintaining this hypothesis, consider (α, d, u) ∈ F such that u ∈ U n0 0 (this is always possible since U n0 0 ⊂ U F ψ-a.e.). By the invariance of F and (64), we obtain that [0, 1] × {d} × {T n0 u} ⊂ F(65) since u contains at least a 0 and a 1 in its first n 0 bits. Moreover, the fact that U n0 0 is not negligible implies that we can always choose u ∈ U n0 0 such that V u = {T n u, n ∈ N} has measure π(V u ) = 1, as a consequence of [24, Theorem 1.14]. Hence, [0, 1] × {d} × V u ⊂ F(66) Birkhoff. Furthermore, consider the evolution of the component d ∈ Z: from equations (63) we deduce that any d has non-null probability to achieve, in n steps, any integer belonging to D n = {d − m 1 , d − m 1 + 2, . . . , d + n − m 1 } m 1 being the number of 1's in the corresponding n-bit input sequence. Hence, [0, 1] × D n × T n V u ⊂ F(67) where T n V u = V u π-a.e.. Given that for any n, D n ⊂ D n+1 , in particular, D n+1 has one more element than D n , then D n ↑ Z. 
This finally proves that [0, 1] × Z × V_u ⊂ F π-a.e.

For a subset A of a set Ω, 1_A : Ω → {0, 1} is the indicator function, defined by 1_A(x) = 1 if x ∈ A and 1_A(x) = 0 otherwise. Erfc indicates the complementary error function, defined by erfc(x) = (2/√π) ∫_x^∞ e^{−t²} dt.

Figure 1: BCJR vs CBCJR. Figure 2: Trellis representation of the Two States Algorithm. Figure 3: Performance comparison of different causal decoders.

A Markov Chain itself is said to be positive recurrent if all its states are so.

Proposition 3 [20, last part of Section 3.2.3] If a Markov Chain is irreducible and has one positive recurrent state, then all the states are so, that is, the chain is positive recurrent.

Proposition 12 [12, Theorem 12.3.4] If a transition kernel P is weak Feller and verifies the Drift Condition, then it admits an invariant p.m. Under some further conditions, also the uniqueness of the invariant measure can be proved.

Definition 13 [12, Section 4.2.1] For any B ∈ B(X), let τ_B = min{n > 0 : X_n ∈ B}. (X_n)_{n∈N} is said to be µ-irreducible if there exists a measure µ on B(X) such that for every x ∈ X, µ(B) > 0 implies P(τ_B < +∞ | X_0 = x) > 0. A µ-irreducible Markov Process whose kernel admits an invariant p.m. is said to be positive recurrent [12], and

Proposition 14 [12, Theorem 10.0.1, Proposition 10.1.1] The kernel of a positive recurrent Markov Process admits a unique invariant p.m.

Definition 15 [11, Definitions 2.2.2, 2.4.1] A set B ∈ B(X) is said to be invariant if P(x, B) ≥ 1_B(x) for every x ∈ X. A p.m. µ on B(X) is said to be ergodic if µ(B) = 0 or µ(B) = 1 for every invariant set B ∈ B(X).

Proposition 16 [11, Proposition 2.4.3] If a Markov Process admits a unique invariant p.m. µ, then µ is ergodic.
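Propositions 12, 14 and 16 can be seen at work in the simplest possible setting. The sketch below (a made-up 3-state chain, not taken from the paper) computes the unique invariant p.m. of a finite irreducible, aperiodic stochastic matrix with numpy, and checks invariance together with the total-variation convergence of the n-step kernel to that measure.

```python
import numpy as np

# Toy illustration: a finite, irreducible, aperiodic chain on 3 states.
# In this setting the abstract statements specialize to the classical fact
# that a unique invariant probability measure exists and P^n(x, .) -> mu.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

# Invariant p.m.: left eigenvector of P for eigenvalue 1, normalized to sum 1.
w, V = np.linalg.eig(P.T)
mu = np.real(V[:, np.argmin(np.abs(w - 1.0))])
mu = mu / mu.sum()

assert np.allclose(mu @ P, mu)          # invariance: mu P = mu

# n-step kernel converges to mu, uniformly in the starting state.
Pn = np.linalg.matrix_power(P, 50)
assert np.abs(Pn - mu).max() < 1e-10    # every row of P^50 is ~ mu
```

The eigenvector route is one of several equivalent ways to obtain mu; for larger chains one would typically solve the linear system (P^T − I)mu = 0 with the normalization constraint instead.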
Definition 18 [23, Section 3.6] A Markov Process is said to be strongly aperiodic if there exist a set A ⊆ X, a measure ν and a constant c such that P(x, B) ≥ cν(B) for any x ∈ A, B ∈ B(X).

Now, let ||P^n(x, ·) − µ|| = 2 sup_{B ∈ B(X)} |P^n(x, B) − µ(B)| be the total variation norm between the measures P^n(x, ·) and µ.

Proposition 19 [23, Proposition 3.8] For a positive recurrent, aperiodic Markov Process with invariant p.m. µ, ||P^n(x, ·) − µ|| → 0 as n → ∞ for µ-a.e. x ∈ X.

4.2.2 The Mean BER

Let A_k be the r.v. representing the normalized probability of the stored state in the Two States algorithm. We observe that (A_k, D_k)_{k∈N} is a Markov Process in ([0, 1] × Z, B([0, 1]) × P(Z)), where B([0, 1]) is the Borel σ-field on [0, 1] and P(Z) is the discrete σ-field of Z.

Subset 3: If d ≥ 0 and α < δ_d, V(ω) = d^2 + 2d; moreover, we have no tight bounds for P(ω, [0, 1] × {d + 1}) and P(ω, [0, 1] × {d − 1}): we can just notice that their difference is smaller than 1/2. Substituting it in (34) we obtain

Subset 4: If d < 0 and α > 1 − δ_d, V(ω) = d^2 − 2d; as P(ω, [0, 1] × {d + 1}) − P(ω, [0, 1] × {d − 1}) ≤ −1 if d < −d_1.

Now, it is easy to verify that the subsets of [0, 1] × Z not yet considered form the compact set [0, δ_d] × {0, . . . , d_1} ∪ [δ_d, 1] × {0, . . . , d_0^+} ∪ [0, 1 − δ_d] × {d_0^−, . . . , −1, 0} ∪ [1 − δ_d, 1] × {−d_1, . . . , −1, 0}. For simplicity, we can consider the bigger compact set C = [0, 1] × {−d_C, . . . , d_C}, where d_C = max{d_0^+, −d_0^−, d_1}: now, it is easy to check that for any ω ∈ C the Drift Condition is satisfied whenever b ≥ G + d_C + 3/2. We now check the Weak Feller Property.
Given any open interval I ⊂ [0, 1] and d ∈ Z, the continuity of P (·, I ×{d }) can be easily verified by the equations (49)-(51) (Section 5.3): P ((α, d), I × {d }) is piecewise defined as combination of H, which is a continuous function; moreover, it is straightforward to check that the continuity holds also at the connection points. Furthermore, (a) any open set on the real line (hence on [0, 1]) is a countable union of disjoint intervals; (b) if f N is a monotone increasing sequence of lower semicontinuous functions such that f N ↑ f pointwise, then f is lower semicontinuous. By (a), any open set O in [0, 1] can be expressed as O = ∪ ∞ n=1 I n , with I n mutually disjoint open intervals in [0, 1]. Moreover, f N (ω) = P (ω, (∪ N n=1 I n ) × {d }) ≤ 1 fulfills the hypotheses of statement (b), hence its pointwise limit f (ω) = P (ω, (∪ ∞ i=1 I i ) × {d }) = P (ω, O × {d }) is lower semicontinuous. As any open set of the product topology can be expressed as ∪ n∈Z (O n × {n}), O n open in [0, 1], the lower semicontinuity is extended to all the open sets. Given the existence of an invariant p.m., we now evaluate the BER by means of the Ergodic Theorem 17. The BER is given by Theorem 22 22Let π be the uniform Bernoulli probability measure over {0, 1} N . Then, for the Two States algorithm, lim K→∞ P b (e|U) = lim K→∞ P b (e) for π-a.e. U. Proposition 23 Figure 4 : 234The Markov Process (A k , D k ) k∈N is strongly aperiodic. Proof Let us consider the probability measure L × δd on ([0, 1] × Z, B([0, 1]) × P(Z)), where L is the Lebesgue measure and δd(d) = 1 if d =d, 0 otherwise. By Proposition 27, P ((α, d), M × {d}) > 1 2 C ε,d L(M ), C ε,d > 0. Then, considering the Definition 18 with ν = L×δd, c = 1 2 C ε,d and A = [0, 1]×{d}, the proposition is proved. This result along with Proposition 19 yields: Corollary 24 (Direct Convergence) ||P n ((α, d), ·) − φ|| → 0 as n → ∞ for φ-a.e. (α, d) ∈ [0, 1] × Z. One State: analytic computation vs simulation. 
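The BER evaluation just described rests on the Ergodic Theorem: the time average of a per-step error indicator along a single trajectory converges a.s. to its expectation under the invariant measure. The toy sketch below illustrates that mechanism on an invented two-state chain with made-up per-state error probabilities; it is not the actual (A_k, D_k)_{k∈N} kernel of the Two States algorithm.

```python
import random

random.seed(0)

# Hypothetical toy model: an ergodic two-state chain; while in state s a bit
# error occurs with probability err[s].  By ergodicity, the long-run error
# rate converges to sum_s mu(s) * err[s], with mu the invariant p.m.
p, q = 0.3, 0.2          # transition probabilities 0 -> 1 and 1 -> 0
err = {0: 0.1, 1: 0.4}   # per-state error probabilities (invented)
mu = {0: q / (p + q), 1: p / (p + q)}   # invariant p.m. of the chain
ber_exact = mu[0] * err[0] + mu[1] * err[1]

s, errors, N = 0, 0, 500_000
for _ in range(N):
    errors += random.random() < err[s]               # error event in state s
    if random.random() < (p if s == 0 else q):       # chain transition
        s = 1 - s
ber_mc = errors / N

assert abs(ber_mc - ber_exact) < 5e-3   # ergodic average ~ expectation
```

The same principle underlies the analytic-vs-simulation comparison in the figures: the analytic curve is the expectation under the invariant measure, the simulated one a long-run time average.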
Figure 5 : 5Two States: analytic computation vs simulation. [ 0 , 1 ] 01× Z to a set of type (0, β) × {d }, β ∈ (0, 1], d ∈ Z, for the Markov Process (A k , D k ) k∈N defined in Section 3.2. Let P u (α, d), (0, β) × {d } be the transition probability given the transmitted bit u: P (α, d), (0, β) × {d } =1 2 P 0 (α, d), (0, β)×{d } + 1 2 P 1 (α, d), (0, β)×{d } are null if d / ∈ {d−1, d, d+1}, if d = d + 1 and u = 1 or if d = d − 1 and u = 0; we now compute the non-null instances. Given (α, d) ∈ (0, 1)×Z and x ∈ {α, (1−α) −1 , 1}, y ∈ {d−1, d, d+1}, z ∈ (0, 1), we define: Algorithm: Proof of the (L×κ)-irreducibility of (A k , D k ) k∈N In this paragraph, we complete the proof of the Corollary 21 showing the (L×κ)irreducibility of (A k , D k ) k∈N in the space ([0, 1] × Z, B([0, 1]) × P(Z)). For this purpose, we first prove that any non-negligible Borel subset of kind M × {d } ⊂ [0, 1] × Z is achievable with positive probability from any (α, d), in one or two steps, if d ∈ {d − 1, d, d + 1} and M is sufficiently far from the extreme points of [0, 1]: otherwise for α ∈ [0, α] ∪ [1 − α, The thesis is now proved for any open interval in [ε, 1−ε]. The generalization to all the open sets in [ε, 1 − ε] is straightforward since any open set on the real line is countable union of disjoint open intervals. Finally, we can extend the result to all the Borelians in [ε, 1−ε]. Remind that for any Lebesgue measurable set M (in particular, for any Borelian) in R there exists a sequence of open sets O n such that M ⊂ ∩ ∞ n=1 O n and L(M ) = L(∩ ∞ n=1 O n ), see [17]. As any finite intersection of open sets is open, we have Proof By the previous lemma, this result holds when M ∈ B([ε, 1 − ε]) given any ε > 0. Now, if we consider any M ∈ B([0, 1]) with L(M ) = λ > 0, we have L(M ∩ [ε, 1 − ε]) = L(M ) − L(M ∩ [ε, 1 and similarly when d = d . 
5.6 Two States Algorithm: an upper bound for the transition probability kernel Lemma 28 There exists a real positive constant G such that P (α, d), M × Z ≤ GL(M ) for any (α, d) ∈ [0, 1] × Z and M ∈ B([0, 1]). Proof First, we prove the lemma when M is an open interval. Consider the equations (49) -(51): given (α, d), P (α, d), (β 1 , β 2 ) × Z is equal to a sum of integrals of type β2 β1 e −h 2 dz ≤ G(β 2 − β 1 ), G = g(z 0 ). The extension to all the open sets is trivial as any open set is countable union of disjoint intervals. Finally, as for any M ∈ B([0, 1]) there exists a sequence of open sets O n such that M ⊂ ∩ ∞ n=1 O n and L(M ) = L(∩ ∞ n=1 O n ) (see[17]), for any n ∈ N we can writeP (α, d), ∩ ∞ n=1 O n × Z ≤ P (α, d), ∩ N n=1 O n × Z ≤ GL(∩ N n=1 O n )as any finite intersection of open sets is open. The result follows from the arbitrariness of N . P (α, d, u), A × {d } × B = P u0 (α, d), A × {d } 1 B (T u) (62) where u = (u 0 , u 1 , . . . ) ∈ {0, 1} N , A ∈ B([0, 1]), d ∈ Z, B ∈ P({0, 1} N ). P u0 (α, d), A × {d } can be assessed by equations (45)-(48). Moreover, we denote by P u0,...u k−1 (α, d), A × {d } the probability of moving from (α, d) ∈ [0, 1] × Z to the set A × {d }, A ∈ B([0, 1]), in k-steps, given the input sequence (u 0 , . . . , u k−1 ) ∈ {0, 1} k . By Proposition 25, ψ = φ × π ( φ being defined in Proposition 20), is an invariant p.m. for the EMP. Moreover, Lemma 29 ψ is ergodic. P 0 0(α, d), M × {d} > 0 for any M ∈ B ( (1/3, 1] ) , L(M ) > 0; P 1 (α, d), M × {d} > 0 for any M ∈ B ( [0, 2/3) ) , L(M ) > 0; P 0 (α, d), M × {d + 1} > 0 for any M ∈ B ( [0, 2/3) ) , L(M ) > 0; P 1 (α, d), M × {d − 1} > 0 for any M ∈ B ( (1/3, 1] ) , L(M ) sufficient, not necessary bounds derived from Remark (4). These inequalities yield to P 01 (α, d), M × {d} > 0 for any M ∈ B ( [0, 1] ) , L(M ) > 0; P 10 (α, d), M × {d} > 0 for any M ∈ B ( [0, 1] ) , L(M ) > 0. = 1 .P 1But now also [0, 1] × Z × {Tw} ⊆ F . 
which impliesψ(F ) = φ([0, 1] × Z)π(V u ) = 1.(69)Given q(α, d,U k ) = P ( U k = U k |U k , A k = α, D k = d, U k )P (U0,...U k−1 ) (1, 0), (dα, d) .(70)Now, let g(α, d, U) = q(α, d, U 0 ): it is easy to verify thatP k g(α, d, d , U k )P (U0,...U k−1 ) (α, d), (dα ,ψ for ψ-a.e. ω ∈ Ω Let N ⊂ Ω be the negligible set for which there is no convergence and let N 0,U = {α ∈ [0, 1] : (α, 0, U) ∈ N }. By the same argumentation used in Corollary 21, P U0 ((1, 0), N u,U × {0}) U0 ((1, 0), (dα 1 , 0))(P k−1 g)(α 1 , 0, T U) Table 1 : 1 Table 2 : 2 Suboptimal Causal Decoding Algorithms: Theoretic AnalysisIn this section, we propose an exhaustive theoretic analysis of One State and Two States algorithms and we provide a formal setting for the analytical computation of their performance. According to Definitions 11 and 14 in Section 1, By definition (44), for any x, y,Notice now that for any d ∈ Z, mᾱ ,d → 0 if and only if α → 1; nevertheless, if α → 1, also (1 + c α ) −1 → 0 and in particular there will be some α such that (1 + c α ) −1 < ε, which contradicts the hypothesis β 1 ≥ ε. Hence, can we conclude thatwhere the minimum has to be computed for α satisfying the initial hypotheses.: by analogous procedure, we obtain> 0 and its minimum is computed for α satisfying the above hypotheses. The positiveness holds since for any d ∈ Z, mᾱ ,d → 0 if and only if α → 0, which implies cα 1+cα → 1 and contradicts β 2 ≤ 1 − ε. Optimal decoding of linear codes for minimizing symbol error rate. L Bahl, J Cocke, F Jelinek, J Raviv, IEEE Trans. on Information Theory. 202L. Bahl, J. Cocke, F. Jelinek and J. Raviv. Optimal decoding of linear codes for minimizing symbol error rate. IEEE Trans. on Information Theory, volume IT-20(2), pages 284-287, 1974. Digital image restoration. M R Banham, A K Katsaggelos, IEEE Signal Proc. Magazine. 142M.R. Banham and A.K. Katsaggelos. Digital image restoration. IEEE Signal Proc. Magazine, volume 14(2), pages 24-41, 1997. 
[3] M. Bertero and P. Boccacci. Introduction to Inverse Problems in Imaging. Institute of Physics, Bristol and Philadelphia, 1998.
[4] C.L. Byrne and R.M. Fitzgerald. Reconstruction from partial information with applications to tomography. SIAM J. Appl. Math., volume 42(4), pages 933-940, 1982.
[5] R. Cogburn. The ergodic theory of Markov chains in random environments. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, volume 66, pages 109-128, 1984.
[6] R. Cogburn. On products of random stochastic matrices. Contemporary Mathematics, volume 50, pages 199-213, 1986.
[7] F. Fagnani and L. Pandolfi. A singular perturbation approach to a recursive deconvolution problem. SIAM J. Control Optim., volume 40, pages 1384-1405, 2002.
[8] F. Fagnani and L. Pandolfi. A recursive algorithm for the approximate solution of Volterra integral equations of the first kind of convolution type. Inverse Problems, volume 19, pages 23-47, 2003.
[9] S.R. Foguel. The Ergodic Theory of Markov Processes. Van Nostrand, Princeton, 1969.
[10] A.K. Jain. Fundamentals of Digital Image Processing. Prentice-Hall, Englewood Cliffs, 1989.
[11] O. Hernández-Lerma and J.B. Lasserre. Markov Chains and Invariant Probabilities. Birkhäuser-Verlag, Basel, 2003.
[12] S.P. Meyn and R.L. Tweedie. Markov Chains and Stochastic Stability. Springer-Verlag, London, 1993.
[13] K. Nawrotzki. Discrete open systems or Markov chains in a random environment I. EIK, volume 17(11/12), pages 569-599, 1981.
[14] K. Nawrotzki. Discrete open systems or Markov chains in a random environment II. EIK, volume 18(1/2), pages 83-98, 1982.
[15] T. Richardson and R. Urbanke. Modern Coding Theory. Cambridge University Press, 2008.
[16] S.M. Ross. Introduction to Probability Models. Ninth edition, Academic Press, 2007.
[17] W. Rudin. Real and Complex Analysis. McGraw-Hill, New York, 1966.
[18] G. Sparacino and C. Cobelli. A stochastic deconvolution method to reconstruct insulin secretion rate after a glucose stimulus. IEEE Trans. on Biomedical Engineering, volume 43(5), pages 512-529, 1996.
[19] J.L. Starck, E. Pantin and F. Murtagh. Deconvolution in astronomy: a review. Publications of the Astronomical Society of the Pacific, pages 1051-1069, 2002.
[20] D.W. Stroock. An Introduction to Markov Processes. Springer-Verlag, Berlin, 2005.
[21] A.N. Tikhonov. On the solution of ill-posed problems and the method of regularization. Soviet Math. Dokl., volume 4, pages 1035-1038, 1963.
[22] A.N. Tikhonov and V.Y. Arsenin. Solution of Ill-Posed Problems. Winston and Wiley, Washington, 1977.
[23] R.L. Tweedie. Markov chains: structure and applications. In Stochastic Processes: Theory and Methods, Handbook of Statistics, volume 19, Elsevier, Amsterdam, pages 817-851, 2001.
[24] P. Walters. An Introduction to Ergodic Theory. Springer-Verlag, New York, 2000.
Deconvolution of linear systems with quantized input: an information theoretic viewpoint
Fabio Fagnani and Sophie M. Fosson
arXiv:1001.3550

Abstract. In spite of the huge literature on deconvolution problems, very little is done for hybrid contexts where signals are quantized. In this paper we undertake an information theoretic approach to the deconvolution problem of a simple integrator with quantized binary input and sampled noisy output. We recast it into a decoding problem and we propose and analyze (theoretically and numerically) some low complexity on-line algorithms to achieve deconvolution.
On BPS bounds in D=4 N=2 gauged supergravity II: general matter couplings and black hole masses 4 Apr 2012 Kiril Hristov Institute for Theoretical Physics and Spinoza Institute Faculty of Physics Utrecht University 3508 TDUtrechtThe Netherlands K P Hristov@uu Nl Sofia University 1164SofiaBulgaria On BPS bounds in D=4 N=2 gauged supergravity II: general matter couplings and black hole masses 4 Apr 2012arXiv:1112.4289v2 [hep-th] We continue the analysis of BPS bounds started in [1], extending it to the full class of N = 2 gauged supergravity theories with arbitrary vector and hypermultiplets. We derive the general form of the asymptotic charges for asymptotically flat (M 4 ), anti-de Sitter (AdS 4 ), and magnetic anti-de Sitter (mAdS 4 ) spacetimes. Some particular examples from black hole physics are given to explicitly demonstrate how AdS and mAdS masses differ when solutions with non-trivial scalar profiles are considered. Introduction and general results This paper is a continuation of the work of [1] and aims at a derivation of the BPS bounds for solutions of gauged D = 4 N = 2 supergravity with vector and hypermultiplets. We briefly recall that in [1] a method was developed for explicit evaluation of BPS bounds for solutions in supergravity, based on their asymptotic Killing spinors. The main results were the derivation of the asymptotic charges in minimal gauged supergravity for asymptotically AdS and magnetic AdS solutions, which differ by their magnetic charge. For stationary solutions, the BPS bound in AdS with vanishing magnetic charge Q m = 0 is found to be M ≥ |Q e | + g| J| , (1.1) with M the mass, Q e the electric charge, J the angular momentum of the given solution, and g the gauge coupling that is related to the cosmological constant. For asymptotically mAdS solutions on the other hand, the BPS bound is M ≥ 0 ,(1.2) with magnetic charge Q m = ±1/(2g). 
As we show in the present work, the superalgebra structure does not change when considering more general matter couplings in the theory. Thus, (1.1) and (1.2) continue to hold. However, the explicit definition of the asymptotic charges (M, Q e , etc.) of a given solution depends directly on the field content. We first derive the form of the supersymmetry anticommutator for all possible solutions of gauged supergravity with vectors and hypers. Then we focus on the special cases of Minkowski, AdS, and mAdS asymptotics where we evaluate the anticommutator explicitly. These calculations show that the hypermultiplets do not produce additional central charges in the superalgebra. We are also able to formulate renormalized expressions for the mass in AdS and mAdS. Our results in AdS are in exact agreement with the techniques of holographic renormalization [2]. On the other hand, the mAdS mass takes a different form and in some examples leads to qualitatively different results that have no analog in previous literature. We consider the most general (two-derivative) electrically 1 gauged D = 4 N = 2 supergravity, following strictly the conventions of [4] (that are mostly the same as in [5]). For further background material on N = 2 supergravity, see e.g. [6,7,8]. The standard N = 2 graviton multiplet (graviton g µν , graviphoton A g µ and two gravitinos) is coupled with n V vector multiplets (n V complex scalars z i , n V vectors A i µ and 2n V gauginos) 2 and n H hypermultiplets (4n H real scalars q u and 2n H hyperinos). The bosonic part of the lagrangian is 1 Although explicitly concentrating on electric gaugings here, the results will hold for more general theories with electromagnetic gauging such as the ones described in [3]. This is due to the fact that electromagnetic duality rotates the symplectic frame of the general lagrangian of [3] and one can always find a purely electric frame, where our results hold exactly. 
Since the spectrum of the theories remains invariant under symplectic transformations, our results generalize trivially. 2 In the lagrangian the graviphoton A g µ and vector fields A i µ mix between each other and appear as vector fields A Λ µ , Λ = 0, ..., n V , with corresponding field strengths F Λ µν . L = 1 2 R(g) + g i (z,z)∇ µ z i ∇ µz + h uv (q)∇ µ q u ∇ µ q v + I ΛΣ (z,z)F Λ µν F Σ µν (1.3) + 1 2 R ΛΣ (z,z)ǫ µνρσ F Λ µν F Σ ρσ − 4 3 g c Λ,ΣΠ ǫ µνρσ A Λ µ A Σ ν ∂ ρ A Π σ − 3 8 f ΩΓ Π A Ω ρ A Γ σ − V (z,z, q) , with scalar potential V = g 2 (g i k i Λ k Σ + 4h uv k u Λ k v Σ )L Λ L Σ + (g i f Λ if Σ  − 3L Λ L Σ )P x Λ P x Σ . (1.4) Most of the above quantities and the supersymmetry transformations will not be important for our purposes here so we leave the more technical introduction to the full lagrangian to appendix A. The quantities of relevance for the derivation of the BPS bound will be introduced shortly when needed. As described in detail in [1], one in principle needs to consider the full lagrangian (or just upto second order terms in fermions when eventually setting fermions to zero) in order to derive the expression for the supercharges. Alternatively, one can fix the right form of the supercharges from the supersymmetry variations. From our knowledge of the minimal case [1] and with the help of the susy variations we can derive explicitly the supercharge, as done in appendix B. The original expression for the supercharge is somewhat lengthy and non-suggestive. However, using the equations of motion for the gravitinos we can cast the supercharge into a much simpler form as a surface integral (see the appendix for the technical details). The important quantity for our purposes here is the Dirac bracket of two supercharges. 
It can be derived from the supercharge (B.4) and takes the remarkably simple form {Q, Q} = ∂V dΣ µν (ǫ µνρσ ε A γ ρ D σ ε A − ǫ µνρσ ε A γ ρ D σ ε A ) , (1.5) where D µ ε A = (∂ µ − 1 4 ω ab µ γ ab )ε A + i 2 A µ ε A + ω µ A B ε B + T − µν γ ν ǫ AB ε B + igS AB γ µ ε B . (1.6) Here ω ab µ is the spin connection, A µ is the gauged U(1) Kähler connection, A µ ≡ − i 2 ∂ i K∇ µ z i − ∂ῑK∇ µzῑ ,(1.7) and ω µ A B is the gauged Sp(1) connection of the quaternion-Kähler manifold, ω µ A B ≡ ∂ µ q u ω u A B + gA Λ µ P x Λ (σ x ) A B . (1.8) The quantity T − µν is the anti-selfdual part of the graviphoton field strength, T − µν ≡ 2iF Λ − µν (I ΛΣ )L Σ ,(1.9) and S AB ≡ i 2 (σ x ) AB P x Λ L Λ (1.10) is the gravitino mass matrix (see App. A and [5] for more details about special and quaternion Kähler geometry). Eq. (1.5) is the main general result of this paper. It can be explicitly evaluated on every spacetime that has an asymptotic Killing spinor 3 . Compared with the corresponding expression in the minimal case [1], (1.5) is just a straightforward generalization. A priori, one could expect some more radical changes due to the presence of vector and hypermultiplets, but this is not the case. We already see that the main conclusions of [1] remain the same, with the difference that the definition of the asymptotic charges will generalize to accommodate for the possibility of non-constant scalars 4 . In order to give more precise statements, we need to plug in the explicit Killing spinors of interest in the general Dirac bracket (1.5) as described in section 3 of [1]. In the following sections we consider more carefully the cases of Minkowski, AdS 4 , and mAdS 4 asymptotics, paying special attention to the asymptotic charges in stationary solutions. In each of the cases we give an explicit example from the study of black holes as an application of our results. 
Somewhat surprisingly, we are able to find a very simple unified formula for the mass of supersymmetric black hole spacetimes in all three cases. This also leads to a better conceptual understanding of the difference in the mass in AdS and mAdS spacetimes. We conclude with some remarks on the connection of our results to alternative approaches in literature and mention other potential uses of our method. 2 Asymptotically flat solutions General analysis Here we will be interested in the superalgebra and asymptotic charges of Minkowski spacetime. In the context of electrically gauged supergravity with vector and hypermultiplets the necessary conditions for a Minkowski vacuum were derived in [4], k i ΛL Λ = 0 ,k u Λ L Λ = 0 , P x Λ = 0 ,(2.1) together with constant scalars, vanishing field strengths and flat R 1,3 metric. These are now the conditions that asymptotically flat solutions will have to satisfy as r → ∞ (we 3 Everywhere in this paper the solutions of D µ ε A = 0 are referred to as Killing spinors. Each independent Killing spinor signifies the existence of a preserved fermionic isometry, i.e. supersymmetry. 4 Note that for a solution with constant scalars (both in the vector and in the hypermultiplet sector) (1.5) is equivalent with the result for the minimal case. Thus, the only difference between the asymptotic charges in minimal and non-minimal supergravity lies in the possibility for non-constant scalar profiles. always work in spherical coordinates as in [1]). The Majorana Killing spinors of Minkowski in spherical coordinates arẽ ǫ 1,2 M = e − 1 2 θγ 12 e − 1 2 ϕγ 23ǫ 1,2 0 , (2.2) whereǫ 1,2 0 are two arbitrary and linearly independent constant Majorana spinors. We will use the notationǫ A for Majorana spinors and ε A , ε A for the positive/negative chirality Weyl spinors that are used in our notation. The chiral spinors are related to the Majorana ones through ε A ≡ 1 + γ 5 2ǫ A , ε A ≡ 1 − γ 5 2ǫ A , (ε A ) * = ε A . 
(2.3) Having the Killing spinors we can now in principle plug (2.2) in (1.5) and derive the supercharge anticommutator directly. Of course, we already know the general answer from the Poincaré superalgebra, {Q Aα , Q Bβ } = δ AB (iγ M C −1 ) αβ P M − ǫ AB ((ReZ + iγ 5 ImZ)(C −1 )) αβ , (2.4) where C is the charge conjugation matrix, P M is the momentum operator, and Z is the complex central extension of the superalgebra. The explicit eigenvalues of the operators P M and Z for any asymptotically flat solution can be computed now from (1.5). The additional U(1) and Sp(1) connections in (1.5) from the matter multiplets can potentially lead to contributions to the supersymmetry anticommutator that are not of the type (2.4). Since we know that Minkowski asymptotics will necessarily lead to the Poincaré superalgebra it follows that these additional connections must fall off fast enough so that they do not contribute. (2.4) can in fact be taken as a definition for asymptotically flat spacetimes. In practice, the condition for the fall off of the connections will be equivalent with imposing the metric to approach Minkowski space. This will be illustrated more clearly with an explicit example. In the next subsection we give the explicit expressions for P 0 , Z in (2.4) for the stationary case, but one can straightforwardly derive the asymptotic charges in full generality if needed. Stationary solutions For stationary solutions we find that the supersymmetry anticommutator takes the following form 5 : {Q Aα , Q Bβ } = δ AB 8πM(iγ 0 C −1 ) αβ − ǫ AB 8π((ReZ + iγ 5 ImZ)(C −1 )) αβ , (2.5) where the complex central charge is given by Z = 1 4π lim r→∞ S 2 T − = lim r→∞ L Λ q Λ − M Λ p Λ , (2.6) as derived in detail in [9] 6 . The derivation of the central charge from (1.5) is a bit subtle and uses the fact that D µ ε A contains a T − µν term, while D µ ε A contains T + µν . This eventually leads to (T − (1 + γ 5 ) + T + (1 − γ 5 )) ∼ ReZ + iγ 5 ImZ. 
This calculation picks out the electric and magnetic charge carried by the graviphoton, which explicitly depend on the asymptotic values of the vector multiplet scalars. The mass, on the other hand, remains unaffected by scalars, M = 1 8π lim r→∞ dΣ tr e t [0 e r 1 e θ 2] + sin θ e t [0 e r 1 e ϕ 3] − (ω ab θ e t [0 e r a e θ b] + ω ab ϕ e t [0 e r a e ϕ b] ) , (2.7) just as in the minimal case. The vielbein and spin connection in the above formula can belong to any stationary asymptotically Minkowski solution of interest, explicit examples of such configurations can be found in the next subsection. The BPS bound, as always for stationary asymptotically flat solutions, is M ≥ |Z| . (2.8) Note that the hypermultiplet sector seems to be completely decoupled from the above calculations since the hypers do not influence the asymptotic charges. This suggests that the stabilization of the hypers at a particular point in moduli space as described in [11] might be the generic situation in this case. Black hole example Example of asymptotically flat stationary solutions to apply the above formulas are hardly needed since these have been very well understood. As a standard example we can just 5 We rescale the central charges for convenience. 6 Note that the charges q Λ and p Λ in (2.6) are the standard electric and magnetic charges as commonly defined in literature. The electric charges come from the dual field strengths G Λµν ≡ iǫ µνρσ δL δF Λ ρσ . See e.g. [10,11] for more details. briefly glance through the single-centered supersymmetric black holes of [10]. First we take the most standard case of a static black hole as a warm up for the static examples in AdS and mAdS. We then also explain the case of a rotating BPS saturated Kerr-Newman metric, which provides a non-trivial test of the BPS bound (2.8). 
The solutions of [10] in ungauged supergravity allow for an arbitrary number of vector multiplets (and arbitrary hypermultiplets, which decouple and will not be considered in what follows) with arbitrary charges q_Λ, p^Λ. The charges only need to satisfy a certain condition in order to make the metric static (see [10] for more details). The metric and symplectic sections in spherical coordinates are

ds² = e^K (dt + ω dϕ)² − e^{−K} dr² − e^{−K} r² dΩ²_2 ,
2 Im(X^Λ) = H^Λ = h^Λ + p^Λ/r , 2 Im(F_Λ) = H_Λ = h_Λ + q_Λ/r , (2.9)

where h^Λ, h_Λ are arbitrary constants that decide the asymptotic values of the scalars, usually chosen such that e^{−K} asymptotes exactly to 1. The rotation ω is present only when the Kähler connection (1.7) is non-vanishing.

Let us consider as a first simple example the prepotential F = −(X¹)³/X⁰ with non-vanishing magnetic charge p⁰ and electric charge q₁ (also non-vanishing h⁰, h₁). This implies that X⁰ = (i/2) H⁰, X¹ = (1/2) √(H⁰H₁/3), and e^{−K} = (2/(3√3)) √(H⁰(H₁)³). The U(1) connection vanishes and therefore the metric is static, ω = 0. To normalize the Kähler potential we choose h⁰(h₁)³ = 27/4 and find for the central charge

Z = (1/4) (p⁰/h⁰ + 3 q₁/h₁) . (2.10)

The mass can be calculated from (2.7) with the metric (2.9) and spin connection ω^{12}_θ = ω^{13}_ϕ/sin θ = e^{K/2} ∂_r (r e^{−K/2}) and becomes

M = lim_{r→∞} (−r² ∂_r e^{−K/2}) = (1/4) (p⁰/h⁰ + 3 q₁/h₁) . (2.11)

This illustrates that the above spacetime is supersymmetric since M = |Z|. A slightly more challenging example is provided if we take the supersymmetric Kerr-Newman spacetime from section 4.2 of [10]. We will literally consider the same solution, taken in minimal supergravity with a prepotential F = −(i/4)(X⁰)², such that e^{−K} = X⁰X̄⁰. In oblate spheroidal coordinates (c.f. (59) of [10]), the harmonic functions that give the solution are

H_0 = 1 + mr/(r² + α² cos²θ) , H⁰ = 2α cos θ/(r² + α² cos²θ) .
Solving for the vector field strengths from this, we find that q_0 = m, p^0 = 0. This means that

Z = e^{K/2} X^0 m \;\Rightarrow\; |Z| = m .   (2.12)

The Kähler connection (c.f. (1.7)) in this example is in fact non-vanishing,

A_\theta = \frac{1}{2} e^{K/2} \left( H^0 \partial_\theta H_0 - H_0 \partial_\theta H^0 \right) .

However, it falls off as r^{-2} as r \to \infty, and therefore does not contribute to the supercharge anticommutator and keeps the Minkowski asymptotics. If we further perform a redefinition r \to r - m, we obtain a stationary supersymmetric metric in the familiar form after converting back to spherical coordinates,(8)

ds^2 = \frac{(r-m)^2 + \alpha^2\cos^2\theta}{r^2 + \alpha^2\cos^2\theta} \left( dt + \frac{(2mr - m^2)\,\alpha\cos^2\theta}{(r-m)^2 + \alpha^2\cos^2\theta}\, d\varphi \right)^2 - \frac{r^2 + \alpha^2\cos^2\theta}{(r-m)^2 + \alpha^2}\, dr^2 - (r^2 + \alpha^2\cos^2\theta)\, d\theta^2 - \frac{(r^2+\alpha^2\cos^2\theta)\big((r-m)^2+\alpha^2\big)}{(r-m)^2 + \alpha^2\cos^2\theta}\, \sin^2\theta\, d\varphi^2 ,   (2.13)

which is the Kerr-Newman metric with equal mass and charge, leading to a nakedly singular rotating asymptotically flat spacetime. The mass can again be found from

M = \lim_{r\to\infty} \left( -r^2 \partial_r e^{-K/2} \right) = m = |Z| .   (2.14)

This confirms that the Kerr-Newman metric (2.13) is supersymmetric, and that the angular momentum, J = \alpha m, indeed does not enter the BPS bound (2.8) and remains unconstrained by supersymmetry.

3 AdS_4 asymptotics

General analysis

The necessary conditions for an AdS_4 vacuum, derived in [4], are:

k^i_\Lambda L^\Lambda = 0 , \quad \tilde k^u_\Lambda L^\Lambda = 0 , \quad P^x_\Lambda f^\Lambda_i = 0 , \quad \epsilon^{xyz} P^y_\Lambda P^z_\Sigma L^\Lambda \bar L^\Sigma = 0 ,   (3.1)

with constant scalars, vanishing field strengths F^\Lambda_{\mu\nu} = 0, and AdS_4 metric with cosmological constant(9) \Lambda \equiv -3g'^2 = -3g^2 P^x_\Lambda P^x_\Sigma L^\Lambda \bar L^\Sigma. (3.1) will have to hold at r \to \infty for all asymptotically AdS spacetimes, together with the usual conditions on the metric [1]. Note that we do not allow for asymptotic magnetic charge for the graviphoton, i.e. P^x_\Lambda A^\Lambda_\varphi = 0. Unlike in the minimal case, this does not rule out the existence of magnetic charges, but only restricts them. The last condition in (3.1) tells us that the P^x_\Lambda L^\Lambda's are restricted in a certain way. We will assume that they are aligned in one particular direction asymptotically(10) (direction a), i.e. only P^a \equiv P^a_\Lambda L^\Lambda \neq 0.
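The same first-subleading-term expansion can be run for the Kerr-Newman data above. Here we use X^0 = -H_0 + (i/2)H^0, which is our reconstruction from 2 Im X^0 = H^0 and 2 Im F_0 = H_0 with F = -(i/4)(X^0)^2; the snippet is an illustration, not text from the paper.

```python
import sympy as sp

r, th, m, al = sp.symbols("r theta m alpha", positive=True)

Sigma = r**2 + al**2 * sp.cos(th)**2
H_0 = 1 + m * r / Sigma             # 2 Im F_0  (electric part)
H0 = 2 * al * sp.cos(th) / Sigma    # 2 Im X^0  (falls off as r^-2)

# e^{-K} = X^0 conj(X^0) with X^0 = -H_0 + (i/2) H^0
eK_inv = H_0**2 + (H0 / 2)**2

M = sp.simplify(sp.limit(-r**2 * sp.diff(sp.sqrt(eK_inv), r), r, sp.oo))
print(M)  # should give m, reproducing M = |Z| = m of (2.12) and (2.14)
```

The magnetic harmonic function H^0 enters only at order r^{-4} in e^{-K}, which is why the rotation parameter alpha drops out of the mass, in line with J remaining unconstrained by the BPS bound.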
The Majorana Killing spinors for AdS were derived in [12,1],

\epsilon^{1,2}_{AdS} = e^{\frac{i}{2}\mathrm{arcsinh}(g' r)\gamma_1}\, e^{\frac{i}{2} g' t \gamma_0}\, e^{-\frac{1}{2}\theta\gamma_{12}}\, e^{-\frac{1}{2}\varphi\gamma_{23}}\, \epsilon^{1,2}_0 ,   (3.2)

where it was implicitly assumed that a = 2 for the gauging in the minimal case. The end result for the supercharge anticommutator will of course not depend on which direction for the moment maps is chosen, but when a = 2 the Killing spinors (the chiral ones can again be found using (2.3)) take the simplest form. In the explicit formulas for the asymptotic charges it is clear how to leave the choice of the direction a completely arbitrary.

The basic anticommutator for asymptotically AdS solutions can again be derived directly, using the chiral version of (3.2) in (1.5). The result takes the form expected from the OSp(2|4) superalgebra,

\{Q_{A\alpha}, Q_{B\beta}\} = \delta_{AB} (\gamma^{MN} C^{-1})_{\alpha\beta} M_{MN} - \epsilon_{AB}\, T\, (C^{-1})_{\alpha\beta} ,   (3.3)

as discussed in detail in sections 3.1 and 4.1 of [1]. Here we also require that the U(1) and Sp(1) gauged connections in (1.5) fall off fast enough as r \to \infty in order to recover precisely the above expression. (3.3) can be taken as a definition of asymptotically AdS spacetimes: any spacetime whose Dirac bracket (1.5) does not simplify to (3.3) is therefore not asymptotically AdS. In the explicit example that follows the fall-off will already be of the correct type, but in principle one always needs to make sure that the spacetime in question really is asymptotically AdS in the sense of (3.1) and (3.3). Each of the asymptotic charges M_{MN} and T can be explicitly derived, but we will again concentrate on the mass and charge in the stationary case.

(9) \Lambda is the cosmological constant of pure AdS_4 with constant scalars. The curvature of all asymptotically AdS solutions will approach this value as r \to \infty. The reason for defining g' is that the AdS Killing spinors explicitly contain this constant instead of the gauge coupling constant g.
(10) P^x \equiv P^x_\Lambda L^\Lambda rotates under Sp(1) \simeq SU(2) and can always be put in a particular direction. This, however, does not mean that existing solutions in the literature will automatically be written in such a way.

Stationary solutions

Now we consider any stationary asymptotically AdS_4 solution (see the next subsection for an explicit example). For asymptotically AdS solutions with vanishing magnetic charge, \lim_{r\to\infty} P^x_\Lambda p^\Lambda = 0, the supersymmetry anticommutator is(11)

\{Q_{A\alpha}, Q_{B\beta}\} = \delta_{AB}\, 8\pi \big( (M\gamma^0 + g' J_{ij}\gamma^{ij}) C^{-1} \big)_{\alpha\beta} - \epsilon_{AB}\, 8\pi T\, (C^{-1})_{\alpha\beta} ,   (3.4)

with the mass given by(12)

M = \frac{1}{8\pi} \lim_{r\to\infty} \oint \mathrm{d}\Sigma_{tr} \Big[ \cdots + 2 g g' r\, |P^a_\Lambda L^\Lambda|\, e_t^{\,[0} e_r^{\,1]} - \sqrt{g'^2 r^2 + 1}\, \big( \omega^{ab}_\theta e_t^{\,[0} e_r^{\,a} e_\theta^{\,b]} + \omega^{ab}_\varphi e_t^{\,[0} e_r^{\,a} e_\varphi^{\,b]} \big) \Big] ,   (3.5)

and

T = \frac{1}{4\pi} \lim_{r\to\infty} \oint_{S^2} \mathrm{Re}\, T^- = \lim_{r\to\infty} \mathrm{Re}\left( L^\Lambda q_\Lambda - M_\Lambda p^\Lambda \right) .   (3.6)

The angular momenta J_{ij} remain exactly as given in App. C of [1], unaffected directly by the scalars. The BPS bound is given by

M \geq |T| + g' |\vec J| .   (3.7)

Note that the scalars enter explicitly in the definition of the mass (3.5), unlike for the asymptotically flat solutions.

Static example

Here we will explicitly consider the static supersymmetric spacetimes with non-constant scalars constructed by Sabra in [13].(13) Unlike in the asymptotically flat case, one cannot easily find the mass just by looking at the metric. Briefly summarized, the solution of [13] is in an FI gauged supergravity with constant parameters P^a_\Lambda = \xi_\Lambda and an arbitrary number of vector multiplets. The solutions are purely electric with arbitrary charges q_\Lambda.

(11) Again, the supercharges are rescaled for convenience.
(12) Note that the following expression includes both the gauge coupling constant g and the asymptotic cosmological constant g'.
(13) These are the most general static BPS configurations that have been constructed so far in AdS. Strictly speaking, they do not correspond to black holes but rather to naked singularities, due to the absence of an event horizon.
The metric and symplectic sections are

ds^2 = e^{K}\left(1 + g^2 r^2 e^{-2K}\right) dt^2 - \frac{e^{-K}\, dr^2}{1 + g^2 r^2 e^{-2K}} - e^{-K} r^2 d\Omega_2^2 ,
\mathrm{Im}\, X^\Lambda = 0 , \qquad 2\,\mathrm{Im}\, F_\Lambda = H_\Lambda = \xi_\Lambda + \frac{q_\Lambda}{r} .   (3.8)

It is immediately clear that the charge T of this configuration will be

T = \lim_{r\to\infty} \mathrm{Re}\left( L^\Lambda q_\Lambda - M_\Lambda p^\Lambda \right) = \lim_{r\to\infty} L^\Lambda q_\Lambda = e^{K(\xi)/2} X^\Lambda(\xi)\, q_\Lambda ,   (3.9)

where K(\xi), X^\Lambda(\xi) denote the corresponding asymptotic values, which depend only on the gauge parameters via the second row of (3.8). Since the solutions are supersymmetric and static (J_{ij} = 0), it follows that the mass takes the exact same value as the charge T. We can show this explicitly for any given solution.

Let us for simplicity take the prepotential F = -2i\sqrt{X^0 (X^1)^3} with electric charges q_0, q_1 and FI parameters \xi_0, \xi_1. The sections are therefore

X^0 = \frac{1}{6\sqrt{3}}\sqrt{\frac{(H_1)^3}{H_0}} , \qquad X^1 = \frac{1}{2\sqrt{3}}\sqrt{H_0 H_1} ,

with e^{-K} = \frac{2}{3\sqrt{3}}\sqrt{H_0 (H_1)^3} and g' = \frac{2^{1/2}}{3^{3/4}}\, g\, (\xi_0 (\xi_1)^3)^{1/4}. The asymptotic charge T from (3.9) becomes

T = \frac{(\xi_0 (\xi_1)^3)^{1/4}}{2^{3/2}\, 3^{3/4}} \left( \frac{q_0}{\xi_0} + \frac{3 q_1}{\xi_1} \right) .   (3.10)

In order to find the mass of this configuration we first need to perform a simple coordinate rescaling to make sure that the metric asymptotes to AdS in spherical coordinates (equivalently, we could insist that e^{-K} asymptotes to 1). Transforming r \to ar, t \to t/a, with a = \lim_{r\to\infty} e^{-K/2} = \frac{2^{1/2}}{3^{3/4}} (\xi_0 (\xi_1)^3)^{1/4}, we achieve

ds^2 = \left( a^2 e^{K} + g^2 r^2 e^{-K} \right) dt^2 - \frac{dr^2}{a^2 e^{K} + g^2 r^2 e^{-K}} - \frac{e^{-K}}{a^2}\, r^2 d\Omega_2^2 ,   (3.11)

which exactly asymptotes to AdS with cosmological constant -3g'^2 in spherical coordinates. The functions that further define the metric now take the form H_0 = \xi_0 + a q_0 / r, H_1 = \xi_1 + a q_1 / r. The relevant spin connection components in this case are \omega^{12}_\theta = \omega^{13}_\varphi / \sin\theta = \sqrt{a^2 e^{K} + g^2 r^2 e^{-K}}\; \partial_r\!\left( \frac{r e^{-K/2}}{a} \right). Now we can use (3.5) to find the mass of this configuration:
M = \lim_{r\to\infty} e^{-K/2} a^2 r^2 \left[ \frac{a}{r} + g g' r \left( \xi_0 X^0 + \xi_1 X^1 \right) - \frac{1}{r}\sqrt{g'^2 r^2 + 1}\, \sqrt{a^2 e^{K} + g^2 r^2 e^{-K}}\; \partial_r\!\left( r e^{-K/2} \right) \right] = \cdots = \frac{(\xi_0 (\xi_1)^3)^{1/4}}{2^{3/2}\, 3^{3/4}} \left( \frac{q_0}{\xi_0} + \frac{3 q_1}{\xi_1} \right) = T ,   (3.12)

as expected. This is a rather non-trivial check that (3.5) gives the correct expression for the AdS mass, and therefore correctly reproduces results from holographic renormalization [2]. Interestingly, we note that in the process of simplifying the above formula, in "\cdots" one finds the mass to be

M = \lim_{r\to\infty} \left( -\frac{r^2}{a}\, \partial_r e^{-K/2} \right) = \frac{(\xi_0 (\xi_1)^3)^{1/4}}{2^{3/2}\, 3^{3/4}} \left( \frac{q_0}{\xi_0} + \frac{3 q_1}{\xi_1} \right) ,   (3.13)

i.e. one picks the first subleading term of the Kähler potential after normalizing it to asymptote to 1. This simple formula turns out to give the mass for the static solutions both in Minkowski (c.f. (2.11) and (2.14)) and in AdS. We now turn to magnetic AdS asymptotics and show that the same formula effectively gives the mass also for supersymmetric solutions in mAdS.

4 mAdS asymptotics

General analysis

Magnetic AdS (or mAdS) was recently introduced as a concept in [1]. Many of its features are similar to the purely AdS case, but due to the presence of magnetic charges mAdS preserves less supersymmetry. The asymptotic conditions on the spacetime remain as in (3.1) with constant scalars, only now the magnetic field strengths are 2F^\Lambda_{\theta\varphi} = p^\Lambda \sin\theta, under the restriction 2g P^a_\Lambda p^\Lambda = \mp 1 coming from Dirac quantization.(14) As before, we redefine the cosmological constant as \Lambda \equiv -3g'^2 and assume the moment map in direction P^a to be non-zero. For a = 2, the Killing spinors of mAdS_4 were given in [14,1]. Here we can give the projections obeyed by the chiral Killing spinors as a straightforward generalization of the analysis in [15]:

\varepsilon_{mAdS,A} = e^{i\alpha}\, \epsilon_{AB}\, \gamma^0\, \varepsilon^B_{mAdS} , \qquad \varepsilon_{mAdS,A} = \pm e^{i\alpha}\, \sigma^a_{AB}\, \gamma^1\, \varepsilon^B_{mAdS} ,   (4.1)

where \alpha is an arbitrary constant phase, and the choice of sign in the second projection corresponds to the choice of sign in the charge quantization condition.
The functional dependence of the Killing spinors can also be found in [15]; it is only radial, through the function g' r + \frac{g'}{2 g^2 r}. This can be seen explicitly by analyzing the Killing spinor equation D_\mu \varepsilon_A = 0. Solving it also forces all asymptotically mAdS spacetimes to satisfy P^a_\Lambda X^\Lambda = \pm 1 and 4 g e^{K} F_\Lambda p^\Lambda = \pm 1 as r \to \infty. For asymptotically mAdS solutions with non-vanishing magnetic charge, the supersymmetry anticommutator is just

\{Q^I, Q^J\} = \delta^{IJ}\, 8\pi M ,   (4.2)

with only two supercharge singlets, as discussed in detail in section 4.2 of [1]. The mass is given by explicitly plugging (4.1) into (1.5) for any asymptotically mAdS solution. Just as in [1], it turns out that the expression takes a more convenient form if we choose an upper triangular vielbein:

M = \frac{1}{8\pi} \lim_{r\to\infty} \oint \mathrm{d}\Sigma_{tr} \Big[ \left( g' r + \frac{g'}{2 g^2 r} \right) 2\,\mathrm{Im}\left( L^\Lambda q_\Lambda - M_\Lambda p^\Lambda \right) \sin\theta\; e_t^{\,0} e_r^{\,1} e_\theta^{\,2} e_\varphi^{\,3} + 2g\, |P^a_\Lambda L^\Lambda|\, e_t^{\,0} e_r^{\,1} - \big( \omega^{12}_\theta e_t^{\,0} e_r^{\,1} e_\theta^{\,2} + \omega^{13}_\varphi e_t^{\,0} e_r^{\,1} e_\varphi^{\,3} \big) \Big] .   (4.3)

The BPS bound in this case is simply

M \geq 0 .   (4.4)

Note that there is a crucial difference between the AdS and mAdS masses, since the scalars enter differently in the expressions, e.g. in the first term on the r.h.s. of (4.3). We will see in the next subsection that this ultimately leads to a different notion of the mass in the two cases, and that the standard holographic renormalization technique is equivalent to the mass definition (3.5), but does not correctly reproduce (4.3).

Black hole example

Here we concentrate on the static supersymmetric black holes with magnetic charges, found recently by [16] and generalized by [17,15]. The theory is again FI gauged supergravity with an arbitrary number of vector multiplets and gaugings \xi_\Lambda. The magnetic charges are restricted by the equation 2 g \xi_\Lambda p^\Lambda = 1,(15) and the metric and scalars are given by

ds^2 = e^{K} \left( gr + \frac{c}{2gr} \right)^2 dt^2 - \frac{e^{-K}\, dr^2}{\left( gr + \frac{c}{2gr} \right)^2} - e^{-K} r^2 d\Omega_2^2 ,
\mathrm{Re}\, X^\Lambda = H^\Lambda = \alpha^\Lambda + \frac{\beta^\Lambda}{r} , \qquad \mathrm{Re}\, F_\Lambda = 0 ,
\xi_\Lambda \alpha^\Lambda = -1 , \qquad \xi_\Lambda \beta^\Lambda = 0 , \qquad F_\Lambda \left( -2 g^2 r \beta^\Lambda + c\, \alpha^\Lambda + 2 g p^\Lambda \right) = 0 .   (4.5)

If we evaluate the mass of these solutions from (4.3), we get the supersymmetric value M = 0. To see this in some detail, let us again consider the simplest case of the prepotential F = -2i\sqrt{X^0 (X^1)^3}, which was also discussed carefully in section 7.1 of [15]. We have

X^0 = H^0 = \alpha^0 + \frac{\beta^0}{r} , \qquad X^1 = H^1 = \alpha^1 + \frac{\beta^1}{r} , \qquad e^{-K} = 8\sqrt{H^0 (H^1)^3} ,

with

\beta^0 = -\frac{\xi_1 \beta^1}{\xi_0} , \quad \alpha^0 = -\frac{1}{4\xi_0} , \quad \alpha^1 = -\frac{3}{4\xi_1} , \quad c = 1 - \frac{32}{3}\left( g \xi_1 \beta^1 \right)^2 ,   (4.6)

and magnetic charges

p^0 = \frac{1}{g\xi_0}\left( \frac{1}{8} + \frac{8 (g\xi_1\beta^1)^2}{3} \right) , \qquad p^1 = \frac{1}{g\xi_1}\left( \frac{3}{8} - \frac{8 (g\xi_1\beta^1)^2}{3} \right) .   (4.7)

We again need to rescale t and r in order to have the metric asymptote to mAdS in spherical coordinates, just as above: r \to ar, t \to t/a, with a = \lim_{r\to\infty} e^{-K/2} = \frac{3^{3/4}}{2^{1/2}} (\xi_0 (\xi_1)^3)^{-1/4} and the cosmological constant coming from g' = \frac{3^{3/4}}{2^{1/2}}\, g\, (\xi_0 (\xi_1)^3)^{-1/4}. The metric is then

ds^2 = e^{K} \left( gr + \frac{a^2 c}{2gr} \right)^2 dt^2 - \frac{e^{-K}\, dr^2}{\left( gr + \frac{a^2 c}{2gr} \right)^2} - \frac{e^{-K}}{a^2}\, r^2 d\Omega_2^2 ,   (4.8)

with H^0 = \alpha^0 + a\beta^0/r, H^1 = \alpha^1 + a\beta^1/r. Evaluating (4.3) now gives

M = \lim_{r\to\infty} e^{-K/2} a^2 r^2 \left[ \left( g' r + \frac{g'}{2 g^2 r} \right) g - \frac{2 a^2 e^{K}}{r^2} \left( F_0 p^0 + F_1 p^1 \right) - \frac{e^{K/2}}{r} \left( gr + \frac{a^2 c}{2gr} \right) \partial_r \left( r e^{-K/2} \right) \right] = 0 .   (4.9)

We are now in a position to compare this result with the one obtained via the holographic renormalization techniques of [2,18]. As found in section 9 of [15], the mass of the above black holes is non-vanishing if one uses the explicit formulas provided in [18], based on the procedure of holographic renormalization [2]. In fact, these formulas give the same result as if (3.5) were used, i.e. the holographic renormalization procedure does not treat the case of magnetic AdS asymptotics separately. It is then fair to conclude that holographic renormalization is well defined for the asymptotically AdS spacetimes of section 3, but one should not use this technique for asymptotically mAdS cases.

(15) We just choose the positive sign here without any loss of generality.
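Both masses quoted in this and the previous section follow from the first-subleading-term formula, and this can be verified symbolically. The helper `mass` below and all variable names are ours; the only input data are the harmonic functions, FI parameters and constraints quoted above.

```python
import sympy as sp

r = sp.symbols("r", positive=True)
q0, q1, b1, xi0, xi1 = sp.symbols("q0 q1 beta1 xi0 xi1", positive=True)

def mass(eK_half_inv, a):
    """M = lim_{r->oo} ( -(r^2/a) d_r e^{-K/2} ): first subleading term of e^{-K/2}."""
    return sp.simplify(sp.limit(-(r**2 / a) * sp.diff(eK_half_inv, r), r, sp.oo))

# --- AdS example of section 3: Sabra's electric solution, F = -2i sqrt(X^0 (X^1)^3)
a_ads = sp.sqrt(sp.Rational(2) / (3 * sp.sqrt(3)) * sp.sqrt(xi0 * xi1**3))
H0 = xi0 + a_ads * q0 / r          # rescaled harmonic functions
H1 = xi1 + a_ads * q1 / r
M_ads = mass(sp.sqrt(sp.Rational(2) / (3 * sp.sqrt(3)) * sp.sqrt(H0 * H1**3)), a_ads)
T = (xi0 * xi1**3)**sp.Rational(1, 4) / (2**sp.Rational(3, 2) * 3**sp.Rational(3, 4)) \
    * (q0 / xi0 + 3 * q1 / xi1)
print(sp.simplify(M_ads / T))  # should reduce to 1, i.e. M = T: (3.13) matches (3.10)

# --- mAdS example of section 4: magnetic black hole with the constraints (4.6)
a0, a1 = -1 / (4 * xi0), -3 / (4 * xi1)
b0 = -xi1 * b1 / xi0
G0 = a0 + b0 / r
G1 = a1 + b1 / r
# beta^0/alpha^0 + 3 beta^1/alpha^1 = 4 xi1 b1 - 4 xi1 b1 = 0: the 1/r term cancels
M_mads = mass(sp.sqrt(8 * sp.sqrt(G0 * G1**3)), 1)
print(M_mads)  # should be 0, the BPS value of (4.9)-(4.10)
```

The cancellation in the mAdS case happens identically in the charges, which is why the vanishing of the mass is insensitive to the overall rescaling by a.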
Remarkably, the effective formula that worked in the static cases in Minkowski and AdS (see (2.11) and (3.13)) turns out to give the correct result once again,

M = \lim_{r\to\infty} \left( -\frac{r^2}{a}\, \partial_r e^{-K/2} \right) = 0 .   (4.10)

Although the fundamental mass formulas (2.7), (3.5) and (4.3) are a priori considerably different, it turns out that the corresponding supersymmetric solutions have such properties that in each case the mass reduces to exactly the same simple formula.

5 Final remarks

To summarize, the main results of our work are the general mass formulas (2.7), (3.5), and (4.3) for asymptotically flat, AdS, and mAdS spacetimes, respectively. We confirmed the well-known result [9] for the central charge in Minkowski, showing that the hypermultiplets do not alter it. We also showed that supergravity does make a clear distinction between the masses in AdS and mAdS. Our analysis in AdS generalizes some previous works that did not allow for non-trivial scalars, e.g. [19]. The results for asymptotically AdS solutions are in fact equivalent to performing the procedure of holographic renormalization [2,18], i.e. (3.5) can be directly used in AdS/CFT applications. In the asymptotically mAdS case, to the best of our knowledge, (4.3) provides the only correct mass formula in the literature. Physically, this mass formula might seem a bit counter-intuitive, as it allows for black hole solutions with vanishing mass. However, from the point of view of the superalgebra, a vanishing mass is the only possibility for BPS objects in mAdS. Therefore M = 0 should not come as a surprise for the static magnetic black holes of [16,15].

It is important to observe that the scalar profiles as functions of the radial coordinate enter explicitly in the mass formulas (3.5) and (4.3). Thus, the AdS and mAdS masses not only depend on the asymptotic values of the scalars, but also on how the scalars approach these values. This feature provides a new point of view towards the attractor mechanism in AdS/mAdS.
It shows that the scalars are much more restricted in how they can behave, in comparison with the Minkowski case. Nevertheless, for the supersymmetric solutions it turned out that the mass can be described by the same formula in all three asymptotic vacua,

M = \lim_{r\to\infty} \left( -\frac{r^2}{a}\, \partial_r e^{-K/2} \right) ,   (5.1)

where a \equiv \lim_{r\to\infty} e^{-K/2} is usually chosen to be 1. This essentially means that the mass is the first subleading term in the expansion of the Kähler potential, no matter what the details of the solution and its asymptotics are. It will be interesting to understand the physical reasons behind this.

Finally, we note that the supercharge anticommutator (1.5) can also be used to describe other asymptotic vacua in gauged supergravity. Examples of potential use are asymptotically Lifshitz spacetimes (a supersymmetric Lifshitz vacuum was found in [20,21]) or solutions with AdS_2 x S^2 asymptotics.

Acknowledgements

I would like to especially thank Chiara Toldo for initial collaboration and careful reading of the manuscript, and Stefan Vandoren for helpful discussions. I acknowledge support by the Netherlands Organization for Scientific Research (NWO) under the VICI grant 680-47-603.

A Details on D = 4 N = 2 gauged supergravity

Here we give more details on the theory under consideration. Alternatively, see [5] for a very detailed description. The bosonic part of the supergravity lagrangian was given in (1.3)-(1.4). The supersymmetry variations under which the full action is invariant (up to higher order terms in fermions) are as follows. The gravitino variation is (A.1), with a supercovariant derivative D as defined in (1.6). The corresponding vielbein variation is (A.2). Note that \bar\psi_{\mu A} \equiv i \psi_\mu^{A\,\dagger} \gamma^0, in order to keep the correct chirality(16) (this holds similarly for all the conjugate (anti-)chiral spinors).

In the vector multiplet sector (we also consider the graviphoton here) we have the gaugino variation (A.3), in which \nabla_\mu z^i denotes the gauge covariant derivative of the complex scalars (when isometries k^i_\Lambda of the Kähler manifold are being gauged), G^{i}_{\mu\nu} are the field strengths of the vectors from the vector multiplets, and W^{iAB} is the gaugino mass matrix (A.4). The mass matrix also includes the quaternionic moment maps P^x_\Lambda from the hypermultiplet gauging,(17) together with L^\Lambda = e^{K/2} X^\Lambda (in analogy, M_\Lambda \equiv e^{K/2} F_\Lambda) and their derivatives f^\Lambda_i \equiv e^{K/2} D_i X^\Lambda. They are defined in terms of the holomorphic sections X^\Lambda, F_\Lambda of special geometry and the Kähler potential (A.5).

Another important special Kähler quantity is the period matrix,

N_{\Lambda\Sigma} \equiv \begin{pmatrix} D_i F_\Lambda \\ F_\Lambda \end{pmatrix} \cdot \begin{pmatrix} D_i X^\Sigma \\ X^\Sigma \end{pmatrix}^{-1} ,   (A.6)

with R_{\Lambda\Sigma} \equiv \mathrm{Re}\, N_{\Lambda\Sigma}, I_{\Lambda\Sigma} \equiv \mathrm{Im}\, N_{\Lambda\Sigma}. All these quantities are explained in more detail in [4], where the analysis of fully supersymmetric vacua was accomplished. The bosonic susy variations in the vector multiplet sector are

\delta_\varepsilon z^i = \bar\lambda^{iA} \varepsilon_A ,   (A.7)

and

\delta_\varepsilon A^\Lambda_\mu = 2 L^\Lambda \bar\psi_{\mu A} \varepsilon_B \epsilon^{AB} + i f^\Lambda_i \bar\lambda^{iA} \gamma_\mu \varepsilon^B \epsilon_{AB} + \mathrm{h.c.} .   (A.8)

Finally, in the hypermultiplet sector, the hyperino variation is

\delta_\varepsilon \zeta_\alpha = i\, U^{B\beta}_u\, \nabla_\mu q^u\, \gamma^\mu \varepsilon^A \epsilon_{AB} C_{\alpha\beta} + g N^A_\alpha \varepsilon_A ,   (A.9)

with the vielbein U^{A\alpha}_u of the quaternionic metric h_{uv}, the gauge covariant derivative \nabla_\mu q^u of the hypers (when gauging isometries \tilde k^u_\Lambda of the quaternion Kähler manifold), and the hyperino mass matrix

N^A_\alpha \equiv 2\, U^A_{\alpha u}\, \tilde k^u_\Lambda \bar L^\Lambda .   (A.10)

The susy variation of the hypermultiplet scalars (hypers) is

\delta_\varepsilon q^u = U^{A\alpha}_u \left( \bar\zeta_\alpha \varepsilon_A + C_{\alpha\beta}\, \epsilon_{AB}\, \bar\zeta^\beta \varepsilon^B \right) .   (A.11)

In order to derive the supercharge of the theory from the procedure described in section 2 of [1], we additionally need the Poisson/Dirac brackets of the fundamental fields. It suffices to list the non-vanishing fermionic Dirac brackets that follow from the full lagrangian(18) (see e.g. [5]):

\{ \psi_{\mu A}(x),\; \epsilon^{0\nu\rho\sigma} \bar\psi^B_\rho(x') \gamma_\sigma \}_{t=t'} = \delta^\nu_\mu\, \delta^B_A\, \delta^3(\vec x - \vec x') ,
\{ \lambda^i_A(x),\; -\tfrac{i}{2}\, g_{i\bar k}\, \bar\lambda^{\bar k B}(x') \gamma^0 \}_{t=t'} = \delta^B_A\, \delta^i_{\bar k}\, \delta^3(\vec x - \vec x') ,
\{ \zeta_\alpha(x),\; -i\, \bar\zeta^\beta(x') \gamma^0 \}_{t=t'} = \delta^\beta_\alpha\, \delta^3(\vec x - \vec x') .   (A.12)

The conventions for metric signature, gamma matrices and (anti-)selfdual tensors that we use in this paper can be found in previous papers [4,11,15]. Note in particular that we follow the conventions for \epsilon_{\mu\nu\rho\sigma} of [4]. Consequently, we define as a measure for the volume/surface integrals

d\Sigma_\mu = \frac{1}{6}\, \epsilon_{\mu\nu\rho\sigma}\, dx^\nu \wedge dx^\rho \wedge dx^\sigma , \qquad d\Sigma_{\mu\nu} = \frac{1}{2}\, \epsilon_{\mu\nu\rho\sigma}\, dx^\rho \wedge dx^\sigma ,   (A.13)

which are defined differently in [1].

B Supersymmetry charge

From the susy variations one can fix uniquely the supersymmetry charge Q by the requirement that

\delta_\epsilon \phi = \{Q, \phi\} ,   (B.1)

for all fundamental fields (here denoted by \phi) in the theory. From (A.1)-(A.11), together with the Dirac brackets (A.12), one finds

Q = \int_V d\Sigma_\mu \Big[ \epsilon^{\mu\nu\rho\sigma} \bar\psi_{A\nu} \gamma_\rho \tilde D_\sigma \epsilon^A + \mathrm{h.c.} - i g_{i\bar\jmath}\, \bar\lambda^{\bar\jmath}_A \gamma^\mu \big( i \nabla_\nu z^i \gamma^\nu \varepsilon^A + G^{-i}_{\nu\rho} \gamma^{\nu\rho} \epsilon^{AB} \varepsilon_B + g W^{iAB} \varepsilon_B \big) + \mathrm{h.c.} - i \bar\zeta^\alpha \gamma^\mu \big( i U^{B\beta}_u \nabla_\nu q^u \gamma^\nu \varepsilon^A \epsilon_{AB} C_{\alpha\beta} + g N^A_\alpha \varepsilon_A \big) + \mathrm{h.c.} \Big] ,   (B.2)

up to higher order terms in fermions. The expression for the supercharge simplifies considerably when evaluated on shell, due to the very suggestive form of the equations of motion of the gravitinos:

\epsilon^{\mu\nu\rho\sigma} \gamma_\nu \tilde D_\rho \psi_{\sigma A} = g_{i\bar\jmath} \big( \nabla^\mu \bar z^{\bar\jmath} \lambda^i_A - \nabla_\nu z^i \gamma^{\mu\nu} \lambda^{\bar\jmath}_A \big) - i g_{i\bar\jmath} \big( G^{+\mu\nu} \gamma_\nu \epsilon_{AB} \lambda^{iB} + g W^i_{AB} \gamma^\mu \lambda^B \big) - \big( U^{B\beta}_u \nabla^\mu q^u \epsilon_{AB} C_{\alpha\beta} - U^{B\beta}_u \nabla_\nu q^u \gamma^{\mu\nu} \epsilon_{AB} C_{\alpha\beta} + i g N_{\alpha A} \gamma^\mu \big) \zeta^\alpha .   (B.3)

After performing a partial integration of the first term on the r.h.s. of (B.2) and using (B.3), the supercharge becomes a surface integral,

Q = \oint_{\partial V} d\Sigma_{\mu\nu}\, \epsilon^{\mu\nu\rho\sigma} \big( \cdots \gamma_\rho \varepsilon^A - \bar\psi_{\sigma A} \gamma_\rho \varepsilon^A \big) ,   (B.4)

similarly to (2.26) in [1] in the minimal case.

Footnotes:
(7) One does not really need to stick to a particular choice for h^\Lambda, h_\Lambda: we can always perform a coordinate transformation to make sure that we have the correct asymptotics at r \to \infty. This has exactly the same effect.
(8) Eq. (2.14) holds also in the given set of Boyer-Lindquist coordinates, but in order to use (2.7) one needs to first convert the relevant asymptotic quantities to spherical coordinates.
(14) Note that there is a mismatch of a factor of 2 between the charges here and in the previous sections. It can be traced back to the different conventions used in [14] and [15], and is compensated for in all formulas of this section.
(16) We use the notation \chi_A, \chi^A for positive/negative chirality spinors, which are related to each other by complex conjugation.
(17) Note that in the absence of hypermultiplets, the quaternionic moment maps P^x_\Lambda can be non-vanishing constants, called FI parameters and usually denoted by \xi_\Lambda.
(18) The brackets for the bosonic fields can be derived directly from (1.3) if needed.

References

[1] K. Hristov, C. Toldo and S. Vandoren, On BPS bounds in D=4 N=2 gauged supergravity, arXiv:1110.2688 [hep-th].
[2] K. Skenderis and P. Townsend, Gravitational Stability and Renormalization-Group Flow, Phys. Lett. B 468 (1999) 46, arXiv:hep-th/9909070; S. de Haro, K. Skenderis and S. Solodukhin, Holographic Reconstruction of Spacetime and Renormalization in the AdS/CFT Correspondence, Commun. Math. Phys. 217 (2001) 595, arXiv:hep-th/0002230; M. Bianchi, D. Freedman and K. Skenderis, How to go with an RG Flow, JHEP 0108 (2001) 041, arXiv:hep-th/0105276; I. Papadimitriou and K. Skenderis, Correlation Functions in Holographic RG Flows, JHEP 0410 (2004) 075, arXiv:hep-th/0407071.
[3] M. de Vroome and B. de Wit, Lagrangians with electric and magnetic charges of N=2 supersymmetric gauge theories, JHEP 0708 (2007) 064, arXiv:0707.2717 [hep-th]; B. de Wit and M. van Zalk, Electric and magnetic charges in N=2 conformal supergravity theories, arXiv:1107.3305 [hep-th].
[4] K. Hristov, H. Looyestijn and S. Vandoren, Maximally supersymmetric solutions of D=4 N=2 gauged supergravity, JHEP 0911 (2009) 115, arXiv:0909.1743 [hep-th].
[5] L. Andrianopoli, M. Bertolini, A. Ceresole, R. D'Auria, S. Ferrara, P. Fre and T. Magri, N=2 Supergravity and N=2 Super Yang-Mills Theory on General Scalar Manifolds, arXiv:hep-th/9605032.
[6] B. de Wit and A. Van Proeyen, Potentials and Symmetries of General Gauged N=2 Supergravity: Yang-Mills Models, Nucl. Phys. B 245 (1984) 89; B. de Wit, P. G. Lauwers, R. Philippe, S. Q. Su and A. Van Proeyen, Gauge And Matter Fields Coupled To N=2 Supergravity, Phys. Lett. B 134 (1984) 37; J. P. Derendinger, S. Ferrara, A. Masiero and A. Van Proeyen, Yang-Mills Theories Coupled To N=2 Supergravity: Higgs And Superhiggs Effects In Anti-De Sitter Space, Phys. Lett. B 136 (1984) 354.
[7] B. de Wit, P. G. Lauwers and A. Van Proeyen, Lagrangians Of N=2 Supergravity-Matter Systems, Nucl. Phys. B 255 (1985) 569.
[8] R. D'Auria, S. Ferrara and P. Fré, Special and Quaternionic Isometries: General Couplings in N=2 Supergravity and the Scalar Potential, Nucl. Phys. B 359 (1991) 705.
[9] A. Ceresole, R. D'Auria and S. Ferrara, The Symplectic structure of N=2 supergravity and its central extension, Nucl. Phys. Proc. Suppl. 46 (1996) 67, arXiv:hep-th/9509160.
[10] K. Behrndt, D. Lust and W. A. Sabra, Stationary solutions of N=2 supergravity, Nucl. Phys. B 510 (1998) 264, arXiv:hep-th/9705169.
[11] K. Hristov, H. Looyestijn and S. Vandoren, BPS black holes in N=2 D=4 gauged supergravities, JHEP 1008 (2010) 103, arXiv:1005.3650 [hep-th].
[12] J. M. Izquierdo, P. Meessen and T. Ortín, Bogomol'nyi Bounds in AdS, 1999, unpublished.
[13] W. A. Sabra, Anti-de Sitter BPS black holes in N=2 gauged supergravity, Phys. Lett. B 458 (1999) 36, arXiv:hep-th/9903143.
[14] L. J. Romans, Supersymmetric, cold and lukewarm black holes in cosmological Einstein-Maxwell theory, Nucl. Phys. B 383 (1992) 395, arXiv:hep-th/9203018.
[15] K. Hristov and S. Vandoren, Static supersymmetric black holes in AdS_4 with spherical symmetry, JHEP 1104 (2011) 047, arXiv:1012.4314 [hep-th].
[16] S. Cacciatori and D. Klemm, Supersymmetric AdS_4 black holes and attractors, JHEP 1001 (2010) 085, arXiv:0911.4926 [hep-th].
[17] G. Dall'Agata and A. Gnecchi, Flow equations and attractors for black holes in N=2 U(1) gauged supergravity, JHEP 1103 (2011) 037, arXiv:1012.3756 [hep-th].
[18] A. Batrachenko, J. Liu, R. McNees, W. Sabra and W. Wen, Black hole mass and Hamilton-Jacobi counterterms, JHEP 0505 (2005) 034, arXiv:hep-th/0408205.
[19] L. Abbott and S. Deser, Stability of Gravity with a Cosmological Constant, Nucl. Phys. B 195 (1982) 76.
[20] D. Cassani and A. F. Faedo, Constructing Lifshitz solutions from AdS, JHEP 1105 (2011) 013, arXiv:1102.5344 [hep-th].
[21] N. Halmagyi, M. Petrini and A. Zaffaroni, Non-Relativistic Solutions of N=2 Gauged Supergravity, JHEP 1108 (2011) 041, arXiv:1102.5740 [hep-th].
Gravity-Matter Entanglement in Regge Quantum Gravity

22 Jan 2016

Nikola Paunković ([email protected]), Departamento de Matemática and SQIG - Security and Quantum Information Group, Instituto de Telecomunicações, Instituto Superior Técnico, Universidade de Lisboa, Avenida Rovisco Pais, 1049-001 Lisboa, Portugal
Marko Vojinović, Grupo de Física Matemática, Faculdade de Ciências da Universidade de Lisboa, Edifício C6, 1749-016 Campo Grande, Lisboa, Portugal

We argue that Hartle-Hawking states in the Regge quantum gravity model generically contain non-trivial entanglement between gravity and matter fields. The generic impossibility to talk about "matter in a point of space" is in line with the idea of an emergent spacetime, and as such could be taken as a possible candidate for a criterion for a plausible theory of quantum gravity. Finally, this new entanglement could be seen as an additional "effective interaction", which could possibly bring corrections to the weak equivalence principle.

Introduction. The unsolved problems of interpreting quantum mechanics (QM) and formulating a quantum theory of gravity (QG) are arguably the two most prominent ones of twentieth-century theoretical physics. To date, most of the efforts to solve the two were taken independently. Indeed, the majority of the interpretations of QM do not involve explicit dynamical effects (with the notable exceptions of the spontaneous collapse and the de Broglie-Bohm theories), while researchers from the QG community often adopt the many-world interpretation of QM. Nevertheless, the two problems share a number of similar unsolved questions and counter-intuitive features, such as nonlocality: entanglement-based quantum nonlocality, as well as the anticipated explicit dynamical nonlocality in QG (a consequence of quantum superpositions of different gravitational fields, i.e., different spacetimes and their respective causal orders).
We analyse the generic entanglement between gravitational and matter fields in the Regge model of quantum gravity, and its possible impact on the fundamental questions regarding QM and QG.

Regge quantum gravity model. A simple toy model of quantum gravity with matter fields is Regge quantum gravity with one real scalar field, whose construction can be motivated by the Loop Quantum Gravity research program [1,2]. The path integral of the model is

$$Z_T = \int \mathcal{D}L\, \mathcal{D}\Phi\, \exp\left[\, i S_{\mathrm{Regge}}(L) + i S_{\mathrm{matter}}(L,\Phi) \,\right], \quad (1)$$

where L are the lengths of the edges of the triangulation T of a 4-manifold M_4, and Φ are the values of the scalar field in the 4-simplices of T. The measure terms $\mathcal{D}L$ and $\mathcal{D}\Phi$ are defined via the discretization induced by T. The actions S_Regge and S_matter represent lattice discretizations of the Einstein-Hilbert action for gravity and of an action for the scalar field coupled to gravity, respectively. See [3,4] for details. A generic kinematical state of the gravity-matter system can be expanded as

$$|\Psi\rangle = \int \mathcal{D}l\, \mathcal{D}\varphi\, \Psi(l,\varphi)\, |l\rangle\, |\varphi\rangle. \quad (2)$$

However, since gravity is a theory with constraints, not every kinematical state is allowed, so we must choose the coefficients Ψ(l, φ) such that |Ψ⟩ is an element of the physical Hilbert space $\mathcal{H}_{\mathrm{phys}} \subset \mathcal{H}_G \otimes \mathcal{H}_M$. One such class of states are the Hartle-Hawking (HH) states [5], defined by the following choice of the coefficients, for a given triangulation T:

$$\Psi(l,\varphi) = \Psi_{HH}(l,\varphi) \equiv \int \mathcal{D}L\, \mathcal{D}\Phi\, \exp\left[\, i S_{\mathrm{Regge}}(L,l) + i S_{\mathrm{matter}}(L,\Phi,l,\varphi) \,\right]. \quad (3)$$

This expression differs from (1) in that the triangulation T is now assumed to have a nontrivial 3-dimensional boundary ∂T, and in that the variables l, φ living on the boundary are not integrated over, in contrast to the bulk variables L and Φ. Using (2) and (3), one can calculate the reduced density matrix of the Hartle-Hawking state,

$$\rho_M = \mathrm{Tr}_G\, |\Psi\rangle\langle\Psi| = \int \mathcal{D}\varphi\, \mathcal{D}\varphi'\, \left[ \int \mathcal{D}l\, \Psi_{HH}(l,\varphi)\, \Psi^{*}_{HH}(l,\varphi') \right] |\varphi\rangle\langle\varphi'|,$$

where the integral in the brackets can be denoted $Z_{T \cup T}(\varphi,\varphi')$. The resulting density matrix can then be tested for entanglement by checking whether the trace of its square equals one [6].
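The purity test just described can be mimicked on a toy truncation. Below is a minimal NumPy sketch, where the two-valued boundary labels and the particular coefficients Ψ(l, φ) are illustrative assumptions (not the actual Regge amplitudes): a correlated Ψ gives Tr ρ² < 1, while a product-form Ψ gives purity 1.

```python
import numpy as np

# Toy stand-in for boundary amplitudes Psi(l, phi): two gravitational
# labels l and two matter labels phi (illustrative assumption only).
Psi = np.array([[1.0, 0.0],
                [0.0, 1.0]]) / np.sqrt(2.0)  # correlated amplitudes

# Trace over gravity: rho_M[phi, phi'] = sum_l Psi(l, phi)* Psi(l, phi')
rho_M = Psi.conj().T @ Psi

purity = np.trace(rho_M @ rho_M).real
print(purity)  # ~0.5 < 1: gravity and matter labels are entangled

# By contrast, a product state Psi(l, phi) = a(l) b(phi) has purity 1.
a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho_prod = np.outer(a, b).conj().T @ np.outer(a, b)
print(np.trace(rho_prod @ rho_prod).real)  # ~1.0: no entanglement
```

The same bookkeeping carries over to the continuum expression: only the trace over the gravitational labels and the purity of the resulting matter-sector density matrix are involved.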
In the Regge quantum gravity model the path integrals reduce to a finite number of ordinary integrals, which can then in principle be evaluated. For a generic triangulation, we obtain

$$\mathrm{Tr}_M\, \rho_M^2 = \int \mathcal{D}\varphi\, \mathcal{D}\varphi'\, \left| Z_{T \cup T}(\varphi,\varphi') \right|^2 \neq 1,$$

i.e., the gravitational and scalar degrees of freedom in the generic HH state are entangled.

Discussion. In [7] Penrose argues that gravity-matter entanglement is at odds with (classical) spacetime, seen as a (four-dimensional) differentiable manifold. In light of this, our result could be seen as a quantitative indicator that in quantum gravity one cannot talk of "matter in a point of space"; that is, it could be taken as a confirmation of spacetime as an emergent phenomenon. Thus, generic gravity-matter entanglement could be seen as a possible candidate for a criterion for a plausible theory of quantum gravity. In standard quantum mechanics, entanglement is a generic consequence of interaction. The entanglement found here can accordingly be regarded as a consequence of an effective interaction (much like the "exchange interactions" that are a consequence of quantum statistics). This additional "effective interaction" can potentially lead to corrections to the weak equivalence principle.

[1] Rovelli C 2004 Quantum Gravity (Cambridge: Cambridge University Press)
[2] Rovelli C and Vidotto F 2014 Covariant Loop Quantum Gravity (Cambridge: Cambridge University Press)
[3] Miković A and Vojinović M 2012 Class. Quant. Grav. 29 165003
[4] Miković A 2013 Rev. Math. Phys. 25 1343008
[5] Hartle J B and Hawking S W 1983 Phys. Rev. D 28 2960
[6] Nielsen M A and Chuang I L 2000 Quantum Computation and Quantum Information (Cambridge: Cambridge University Press)
[7] Penrose R 1996 Gen. Relativ. Gravit. 28 581
{'fraction_non_alphanumeric': 0.043868232224396606, 'fraction_numerical': 0.02136333985649054, 'mean_word_length': 4.540198735320686, 'pattern_counts': {'":': 0, '<': 0, '<?xml version=': 0, '>': 0, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 1, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'We argue that Hartle-Hawking states in the Regge quantum gravity model generically contain non-trivial entanglement between gravity and matter fields. Generic impossibility to talk about "matter in a point of space" is in line with the idea of an emergent spacetime, and as such could be taken as a possible candidate for a criterion for a plausible theory of quantum gravity. Finally, this new entanglement could be seen as an additional "effective interaction", which could possibly bring corrections to the weak equivalence principle.', 'arxivid': '1601.06831', 'author': ['Nikola Paunković [email protected] \nDepartamento de Matemática\nSQIG -Security and Quantum Information Group\nInstituto de Telecomunicacoes\nInstituto Superior Técnico\nUniversidade de Lisboa\nAvenida Rovisco Pais\n1049-001LisboaPortugal\n', 'Marko Vojinović \nGrupo de Física Matemática\nFaculdade de Ciências da Universidade de Lisboa\nEdifício C61749-016Campo Grande, LisboaPortugal\n'], 'authoraffiliation': ['Departamento de Matemática\nSQIG -Security and Quantum Information Group\nInstituto de Telecomunicacoes\nInstituto Superior Técnico\nUniversidade de Lisboa\nAvenida Rovisco Pais\n1049-001LisboaPortugal', 'Grupo de Física Matemática\nFaculdade de Ciências da Universidade de Lisboa\nEdifício C61749-016Campo Grande, LisboaPortugal'], 'corpusid': 118747898, 'doi': '10.1088/1742-6596/701/1/012035', 'github_urls': [], 'n_tokens_mistral': 1781, 'n_tokens_neox': 1559, 'n_words': 974, 'pdfsha': '7c056c74ec31a9487f8f5ca4d293015340c39653', 'pdfurls': ['https://arxiv.org/pdf/1601.06831v1.pdf'], 'title': ['Gravity-Matter Entanglement in Regge Quantum Gravity', 'Gravity-Matter Entanglement in Regge Quantum Gravity'], 'venue': []}
arxiv
No-Go Theorem for the Composition of Quantum Systems 22 Feb 2014 Maximilian Schlosshauer Department of Physics University of Portland 5000 North Willamette Boulevard97203PortlandOregonUSA Arthur Fine Department of Philosophy University of Washington Box 35335098195SeattleWashingtonUSA No-Go Theorem for the Composition of Quantum Systems 22 Feb 2014 Building on the Pusey-Barrett-Rudolph theorem, we derive a no-go theorem for a vast class of deterministic hidden-variables theories, including those consistent on their targeted domain. The strength of this result throws doubt on seemingly natural assumptions (like the "preparation independence" of the Pusey-Barrett-Rudolph theorem) about how "real states" of subsystems compose for joint systems in nonentangled states. This points to constraints in modeling tensor-product states, similar to constraints demonstrated for more complex states by the Bell and Bell-Kochen-Specker theorems. PACS numbers: 03.65.Ta, 03.65.Ud, 03.67.-a Studies by Pusey, Barrett, and Rudolph (PBR) [1] and others [2, 3] demonstrate a no-go theorem for properties of ontological [4] hidden-variables models. We show that if the strategy of the demonstration is viable, it leads to a theorem like that claimed by von Neumann generations ago[5]; that is, it leads to a broad no-go theorem for deterministic hidden-variables models, including successful models known to reproduce the quantum statistics for the systems in question[6][7][8]. This startling consequence calls for an examination of the elements essential to the strategies underlying these no-go theorems. One critical element is an assumption of how hidden variables of component systems relate to hidden variables of the composite in product states. We show that the physical rationale for composition principles of this kind overreaches. 
Our results, and those of PBR, highlight that the tensor-product structure required for composite systems, even for those prepared in nonentangled states, can open up new possibilities that cannot be accommodated by ontological hidden-variables models that embody classical intuitions about how hidden variables ("real states") of quantum systems ought to compose.

Hidden variables.-In the models under consideration, each quantum state |ψ⟩ in the state space of a given system is associated with a nonempty set Λ_ψ that supports a probability density function p_ψ(λ) > 0 for λ ∈ Λ_ψ, where

$$\int_{\Lambda_\psi} p_\psi(\lambda)\, d\lambda = 1.$$

We will refer to a complete state λ as associated with |ψ⟩ if λ ∈ Λ_ψ. With probability p_ψ(λ), preparing |ψ⟩ results in a λ associated with |ψ⟩. In general, different systems, each prepared in |ψ⟩, may have different Λ_ψ and p_ψ(λ). Adopting Einstein's language for quantum incompleteness [9], PBR call the hidden variables "physical states" or "real physical states" [1]. That terminology signals that the λs are regarded as representing real aspects of a quantum system. But since the structure of a hidden-variables model cannot actually fix the nature (or reference) of the λs, we will use the neutral language often used by Bell [10] of "complete states," except where realist intuitions come into play. The states are complete in the sense that they suffice to determine the probable responses to measurements of any observable M defined on the state space of the system. Here and below, we assume that M is discrete. Then we have a response function pr(M = k | λ) that, given a system in complete state λ, yields the probability that a measurement of M results in eigenvalue k. Two elements characterize the ontological framework [4] employed by PBR. (i) Response functions do not depend on the quantum state (unless that dependence is written into the λs).
(ii) Hidden variables do not play any role in accounting for measurement inefficiencies, so that

$$\sum_{k \in S(M)} \mathrm{pr}(M = k \,|\, \lambda) = 1, \quad (1)$$

where S(M) is the spectrum of M. We have shown elsewhere [11] that the PBR theorem requires both (i) and (ii). The Born probability Pr(M = k | |ψ⟩) that a measurement of M in state |ψ⟩ results in eigenvalue k is obtained from

$$\Pr(M = k \,|\, |\psi\rangle) = \int_{\Lambda_\psi} \mathrm{pr}(M = k \,|\, \lambda)\, p_\psi(\lambda)\, d\lambda. \quad (2)$$

In the general case, this setup leaves open the possibility that the "reality" represented by λ is the quantum state itself. In the deterministic case, where all response functions yield probability 0 or 1, this cannot happen.

The PBR example.-We say that states |ψ⟩ and |φ⟩ overlap just in case Λ_ψ ∩ Λ_φ is nonempty. In the example PBR use to illustrate their theorem [1], they consider two quantum systems, each independently prepared in either |1⟩ or |2⟩, where |⟨1|2⟩| = 2^{-1/2}. Suppose the states overlap. Then there is some nonzero probability that the preparations result in λs each associated with both |1⟩ and |2⟩. States |1⟩ and |2⟩ span a two-dimensional space H_0. Consider H = H_0 ⊗ H_0, which contains the product states |x, y⟩ ≡ |x⟩ ⊗ |y⟩, x, y = 1, 2. Using Bell states, PBR display an orthonormal basis {|ξ_xy⟩} of H such that ⟨ξ_xy|x, y⟩ = 0. Then for any maximal measurement M with eigenstates |ξ_xy⟩ = |k_xy⟩ (where k_xy is the corresponding eigenvalue),

$$\Pr(M = k_{xy} \,|\, |x, y\rangle) = |\langle \xi_{xy} | x, y \rangle|^2 = 0. \quad (3)$$

If the pair of λs associated with both |1⟩ and |2⟩ constituted a hidden variable λ_c associated with all four product states |x, y⟩, then the response function for λ_c would contribute to the Born probabilities in Eq. (3) for all four states |x, y⟩ simultaneously. For such a λ_c, Eqs. (2) and (3) imply that

$$\mathrm{pr}(M = k_{xy} \,|\, \lambda_c) = 0 \quad \text{for } x, y = 1, 2. \quad (4)$$

Thus, the M measurement would have no outcome, contradicting Eq. (1). The demonstration of a violation of Eq. (1) does not use the full probability rule of Eq. (2).
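The orthogonality relation in Eq. (3) can be checked numerically. A small NumPy sketch follows, using the entangled basis from the PBR construction with the concrete choice |1⟩ = |0⟩ and |2⟩ = |+⟩ (so that |⟨1|2⟩| = 2^{-1/2}); the basis vectors are transcribed by hand, so treat the transcription as an assumption rather than a quotation of the paper.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2.0)
minus = (ket0 - ket1) / np.sqrt(2.0)

# Concrete choice |1> = |0>, |2> = |+>, so |<1|2>| = 2**-0.5.
s = {1: ket0, 2: plus}
kron = np.kron

# Hand-transcribed entangled measurement basis of the PBR construction.
xi = {
    (1, 1): (kron(ket0, ket1) + kron(ket1, ket0)) / np.sqrt(2.0),
    (1, 2): (kron(ket0, minus) + kron(ket1, plus)) / np.sqrt(2.0),
    (2, 1): (kron(plus, ket1) + kron(minus, ket0)) / np.sqrt(2.0),
    (2, 2): (kron(plus, minus) + kron(minus, plus)) / np.sqrt(2.0),
}

# Eq. (3): each outcome |xi_xy> is orthogonal to the product state |x, y>.
for (x, y), v in xi.items():
    overlap = abs(v @ kron(s[x], s[y]))
    print((x, y), round(overlap, 12))  # all zero up to float error
```

Each outcome state thus assigns Born probability zero to exactly one of the four product preparations, which is the structural fact the no-go argument exploits.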
All one uses is that where the Born probabilities say "no" to a measurement outcome, as in Eq. (3), the appropriate response function also says "no," as in Eq. (4). This motivates the following definition.

Definition. (Tracking) Consider a system S with state space H. A hidden variable λ tracks |ψ⟩ ∈ H on S if and only if, for all observables M on S, whenever Pr(M = k | |ψ⟩) = 0, then pr(M = k | λ) = 0 [12].

Thus λ tracks |ψ⟩ if and only if whenever the outcome probabilities assigned by λ are nonzero, the Born outcome probabilities for a system prepared in |ψ⟩ are nonzero. Equation (2) implies that if λ is associated with |ψ⟩ on S, then λ tracks |ψ⟩ on S. Thus, tracking |ψ⟩ is a necessary condition for association with |ψ⟩. The converse is not true.

Composition.-Because complete states are complete only for measurements on a given state space, deriving Eq. (4) from Eq. (3) requires assumptions about complete states for composites beyond what is built into the hidden-variables structure so far. Thus PBR introduce an assumption they call preparation independence: "systems . . . prepared independently have independent physical states" (see p. 475 of Ref. [1]). The "independence" referred to here is twofold. One aspect encompasses stochastic independence (P I st) of the λs that result from preparing the quantum states. More importantly, to derive a violation of Eq. (1), preparation independence must encompass a composition principle, P I c, that allows those λs to function independently of the quantum states actually prepared. We could capture this compositional aspect of independence by assuming that if λ_1 is associated with |1⟩ of system S_1 and λ_2 is associated with |2⟩ of S_2, then (λ_1, λ_2) constitutes a complete state associated with |1⟩ ⊗ |2⟩ for the composite system formed from S_1 and S_2. In fact, the following weaker assumption suffices.

Definition. (P I c) If λ_1 is associated with |1⟩ of system S_1 and λ_2 is associated with |2⟩ of system S_2, then the pair (λ_1, λ_2) tracks |1⟩ ⊗ |2⟩ on the composite system formed from S_1 and S_2 [13].
According to P I c , although each λ is associated with some pure state, these need not be pure states actually prepared on a given occasion. Thus, suppose two systems are prepared independently-say, one in |1 and the other in |2 -resulting (respectively) in complete states λ 1 and λ 2 . Then P I c implies that (λ 1 , λ 2 ) tracks |1 ⊗ |2 on the composite system. But P I c also implies (counterfactually) that had different states |α 1 and |α 2 been prepared with which (respectively) complete states λ 1 and λ 2 are also associated, then the same pair (λ 1 , λ 2 ) would simultaneously track |α 1 ⊗ |α 2 and |1 ⊗ |2 on the composite (even had the hypothetical preparations turned out different λs). When applied to the PBR example where | 1|2 | = 2 −1/2 , the counterfactuals supported by P I c yield the composition rule PBR employ to move from Eq. (3) to Eq. (4). This is so because P I c implies that if independent preparations of two systems, each in either |1 or |2 , result in complete states associated with both |1 and |2 , then there is a λ c that simultaneously tracks all four states |x, y ≡ |x ⊗ |y , x, y = 1, 2, on the composite formed from S 1 and S 2 . To cover the general case 0 < | 1|2 | 2 < 1 developed by PBR, we can extend P I c to apply to arbitrary tensor products |x 1 ⊗ |x 2 ⊗ |x 3 ⊗ · · · , x i ∈ {1, 2}. This results in the compactness principle, tailored to tracking, that we formulated elsewhere [11] as a composition rule sufficient for the PBR argument. While we may add P I st , it is not needed to derive a violation of Eq. (1). PBR do not discuss the physical rationale for assuming P I c , which may seem natural from a realist point of view, where the λs associated with |1 and |2 represent all the hard facts relevant to probable measurement outcomes on the respective systems. 
In the state |1 ⊗ |2 , the subsystems are not entangled and not interacting (at least not in an entangling manner), and, hence, one might regard their composite as not generating any new facts. So facts about probable outcomes on the two subsystems, taken conjointly, should constitute all the facts about likely outcomes (i.e., about tracking) on the composite system in the product state. Below we develop a challenge to this rationale. Tracking.-We now show that because the PBR strategy requires only tracking rather than association, it is not specific to models with overlap ("epistemic" models [4]) but also targets nonoverlapping ("ontic") models. First, we modify the antecedent ("if" clause) of P I c to produce a version purely phrased in terms of tracking. Definition. (P I c,tr ) If λ 1 tracks |1 on system S 1 and λ 2 tracks |2 on system S 2 , then the pair (λ 1 , λ 2 ) tracks the product state |1 ⊗|2 on the composite system formed from S 1 and S 2 . Since association implies tracking, and since the conclusions of P I c and P I c,tr are identical, it follows that if P I c,tr and the antecedent of P I c hold, then P I c also holds. Thus P I c,tr implies P I c , and P I c,tr is sufficient to generate the PBR contradiction. What is the physical rationale for assuming P I c,tr ? As in P I c , we can think that the λs associated with |1 and |2 represent all the hard facts relevant to measurement outcomes with nonzero probability (tracking) on the respective systems. Forming the composite described by |1 ⊗ |2 should not generate new facts about outcomes, for the same reasons as in the case of P I c . Hence, all the facts about outcomes that have nonzero probability (tracking) on the components, taken together, should be sufficient to account for outcomes with nonzero probability on the composite. Thus P I c and P I c,tr have the same rationale. Since the antecedent of P I c,tr is weaker than the antecedent of P I c , it is more easily satisfied. 
Thus, as we will now see, P I c,tr opens up the possibility of no-go results broader than those of PBR.

Deterministic models.-In deterministic hidden-variables models, all probabilities given by the response functions are 0 or 1. Thus, we write M(λ) to denote the eigenvalue k that obtains if M is measured. The Bell-Kochen-Specker (BKS) theorem [6,7] targets such models where the state space has dimension ≥ 3. Essential to that theorem is the rule that an eigenvalue k is assigned to an observable M if and only if the spectral projector P_k belonging to k takes the value 1. (This is equivalent to the function rule assumed in Ref. [6], or the additivity of values for commuting operators assumed by Bell [14,15].) The rule mirrors the connection Pr(M = k | |ψ⟩) = Pr(P_k = 1 | |ψ⟩) built into the Born probabilities. In two dimensions it is harmless, although in certain higher dimensions we have shown that it falls to compactness [11]. Here, we weaken the rule and consider deterministic hidden-variables theories that are only required to follow it in one direction:

Assumption. (A) For any state |ψ⟩: if λ ∈ Λ_ψ and P_k(λ) = 0, then M(λ) ≠ k, where P_k is the spectral projector onto the k eigenspace of M.

We now show that for every deterministic hidden-variables theory on a two-dimensional space H_0 that satisfies assumption A, any two distinct, nonorthogonal quantum states are simultaneously tracked; this means that in such models, the antecedent of P I c,tr is always satisfied.

Lemma. Suppose 0 < |⟨ψ|φ⟩|² < 1 and an ontological deterministic hidden-variables theory governs λs on the two-dimensional Hilbert space H_0 spanned by |ψ⟩ and |φ⟩. If assumption A holds on H_0, then associated with each of these kets |ψ⟩ and |φ⟩ is a set of measure |⟨ψ|φ⟩|² consisting of complete states each of which also tracks the other ket on H_0.

Proof. Since every λ ∈ Λ_ψ tracks |ψ⟩ on H_0, we show that a subset S ⊂ Λ_ψ of these λs also tracks |φ⟩ on H_0.
Let P_⊥ = |φ_⊥⟩⟨φ_⊥| be the projector along the state vector |φ_⊥⟩ in H_0 that is orthogonal to |φ⟩. Then Pr(P_⊥ = 1 | |ψ⟩) = 1 − |⟨ψ|φ⟩|² ≠ 1. Hence, there exist λ ∈ Λ_ψ such that P_⊥(λ) = 0; otherwise the overall probability of having P_⊥ = 1 would be 1. With respect to the density p_ψ(λ), the set S of such λs has measure equal to Pr(P_⊥ = 0 | |ψ⟩) = |⟨ψ|φ⟩|². Consider any M for which Pr(M = k | |φ⟩) = Pr(P_k = 1 | |φ⟩) = 0. Then, since the space is two-dimensional, the projector P_k on the k eigenspace of M just projects onto |φ_⊥⟩; so P_k = P_⊥. For any λ ∈ S, P_⊥(λ) = 0. Assumption A then implies that M(λ) ≠ k; that is, pr(M = k | λ) = 0. Thus, every λ ∈ S tracks |φ⟩ on H_0. The same argument applies if we interchange |ψ⟩ and |φ⟩.

Theorem. No ontological deterministic hidden-variables theory satisfying assumption A and the composition principle P I c,tr can reproduce the predictions of quantum mechanics.

Proof. Consider systems S_1 and S_2, each independently prepared in either |1⟩ or |2⟩, with |⟨1|2⟩| = 2^{-1/2}. Suppose an ontological deterministic hidden-variables theory satisfying assumption A governs λs on the two-dimensional space H_0 spanned by |1⟩ and |2⟩. The above lemma establishes that there exists a nonempty set S of λs that track both |1⟩ and |2⟩ on both S_1 and S_2. Since P I c,tr supports the same counterfactuals as P I c, P I c,tr implies that for any λ ∈ S, λ_c = (λ, λ) tracks the four states |x, y⟩, x, y = 1, 2, on the composite system represented by H_0 ⊗ H_0. Thus, for this λ_c and the PBR measurement M, we obtain the PBR contradiction, Eq. (4). For arbitrary distinct nonorthogonal states |1⟩ and |2⟩, we can extend P I c,tr, analogous to P I c, to cover arbitrary tensor products |x_1⟩ ⊗ |x_2⟩ ⊗ |x_3⟩ ⊗ · · ·, x_i ∈ {1, 2}, and then apply PBR's quantum circuit to arrive at a contradiction.

Discussion.-The no-go theorem derived here is very strong, stronger than the BKS theorem in two respects.
First, the assumption A it requires is weaker than the BKS condition. Second, unlike the BKS theorem, it applies to systems with two-dimensional state spaces. It shows that no deterministic qubit model (or submodel) satisfying assumption A, even if it is consistent with the quantum predictions on its domain, can be extended via the composition principle P I c,tr to tensor-product states. This applies to almost all the hidden-variables models reviewed in Ref. [16], and includes the models for qubit systems of Kochen and Specker [6] and the ontic finitedimensional model of Bell [7,8], both of which satisfy assumption A and are known to be quantum consistent. The strength of this result calls attention to the operative composition principle P I c,tr and the possibility of flaws in the physical rationale sketched above. Two possibilities stand out. One is the very idea that pairs (λ 1 , λ 2 ) resulting from independent preparations suffice alone to determine probable measurement outcomes on the tensor product-as would be the case, for instance, if we could identify the λs with the prepared quantum states. That identification, however, is not an option, being incompatible both with determinism and with overlap. Moreover, at least four distinct tensor products are required for a contradiction, whereas only two quantum states are actually prepared. Thus, we need to recognize the possibility that, in addition to (λ 1 , λ 2 ), facts about the context of the actual preparations or subsequent measurements may be needed in order to track the product states. (Ignoring such contextual factors is at the root of the BKS theorem.) But if contextual factors need to be taken into account, a composition principle guaranteeing a complete state λ c that simultaneously tracks all the product states, context free, need not hold. A second possible flaw arises from the circumstance that measurements like M (and PBR's quantum circuit) are entangling. 
They engage the tensor-product structure to generate facts pertaining to the composite as a whole. Like correlations, such facts are not accessible from the isolated subsystems. Contrary to the stated rationale, forming composites, even ones described by tensor-product states, makes available new, relational facts about measurement outcomes. These reservations about P I c,tr could be taken to undermine the no-go theorem developed here. But both apply in exactly the same way to the compositional assumption P I c of preparation independence required for the PBR theorem. Thus the composition principles P I c,tr and P I c stand or fall together, depending on whether we credit the physical rationale or the reservations. There is an important constructive message here. Recall that other significant no-go theorems, such as the BKS theorem and the Bell theorem, were based on natural assumptions (noncontextuality, locality) supporting counterfactuals in a classical setting. The positive lesson from those no-go theorems was to throw such assumptions into doubt when imported to the quantum world. We suggest the same lesson here. While entanglement and "quantum nonseparability" indicate that simple rules of composition for "real states" are unlikely, one might have assumed that when modeling a tensor-product state, the compositional aspect of preparation independence, P I c, should be viable. Our results challenge this assumption. They caution against classical, realist intuitions about how "real states" ought to compose, even in the absence of entanglement. It would be interesting to investigate the status of composition rules in other classes of hidden-variables models. We thank J. Malley for useful correspondence.

[1] M. F. Pusey, J. Barrett, and T. Rudolph, Nat. Phys. 8, 475 (2012).
[2] R. Colbeck and R. Renner, Phys. Rev. Lett. 108, 150402 (2012).
[3] L. Hardy, arXiv:1205.1439v3.
[4] N. Harrigan and R. W. Spekkens, Found. Phys. 40, 125 (2010).
[5] J. von Neumann, Mathematische Grundlagen der Quantenmechanik (Springer, Berlin, 1932).
[6] S. Kochen and E. Specker, J. Math. Mech. 17, 59 (1967).
[7] J. S. Bell, Rev. Mod. Phys. 38, 447 (1966).
[8] P. G. Lewis, D. Jennings, J. Barrett, and T. Rudolph, Phys. Rev. Lett. 109, 150404 (2012).
[9] A. Einstein, J. Franklin Inst. 221, 313 (1936).
[10] J. S. Bell, Speakable and Unspeakable in Quantum Mechanics (Cambridge University Press, Cambridge, England, 1987).
[11] M. Schlosshauer and A. Fine, Phys. Rev. Lett. 108, 260404 (2012).
[12] Tracking is similar to "possibilistic completeness," assumed in a related no-go theorem [3].
[13] Our argument could allow more general functions defined on (λ_1, λ_2), provided those functions do not depend on specific preparation procedures, measurements, or quantum states.
[14] A. Fine and P. Teller, Found. Phys. 8, 629 (1978).
[15] N. D. Mermin, Rev. Mod. Phys. 65, 803 (1993).
[16] F. J. Belinfante, A Survey of Hidden-Variables Theories (Pergamon, New York, 1973).
{'fraction_non_alphanumeric': 0.05752025856011991, 'fraction_numerical': 0.02238980748512811, 'mean_word_length': 4.010560901196902, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 1, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 3, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'Building on the Pusey-Barrett-Rudolph theorem, we derive a no-go theorem for a vast class of deterministic hidden-variables theories, including those consistent on their targeted domain. The strength of this result throws doubt on seemingly natural assumptions (like the "preparation independence" of the Pusey-Barrett-Rudolph theorem) about how "real states" of subsystems compose for joint systems in nonentangled states. This points to constraints in modeling tensor-product states, similar to constraints demonstrated for more complex states by the Bell and Bell-Kochen-Specker theorems. PACS numbers: 03.65.Ta, 03.65.Ud, 03.67.-a Studies by Pusey, Barrett, and Rudolph (PBR) [1] and others [2, 3] demonstrate a no-go theorem for properties of ontological [4] hidden-variables models. We show that if the strategy of the demonstration is viable, it leads to a theorem like that claimed by von Neumann generations ago[5]; that is, it leads to a broad no-go theorem for deterministic hidden-variables models, including successful models known to reproduce the quantum statistics for the systems in question[6][7][8]. This startling consequence calls for an examination of the elements essential to the strategies underlying these no-go theorems. One critical element is an assumption of how hidden variables of component systems relate to hidden variables of the composite in product states. We show that the physical rationale for composition principles of this kind overreaches. 
Our results, and those of PBR, highlight that the tensorproduct structure required for composite systems, even for those prepared in nonentangled states, can open up new possibilities that cannot be accommodated by ontological hidden-variables models that embody classical intuitions about how hidden variables ("real states") of quantum systems ought to compose.Hidden variables.-In the models under consideration, each quantum state |ψ in the state space of a given system is associated with a nonempty set Λ ψ that supports a probability density function p ψ (λ) > 0 for λ ∈ Λ ψ , where', 'arxivid': '1306.5805', 'author': ['Maximilian Schlosshauer \nDepartment of Physics\nUniversity of Portland\n5000 North Willamette Boulevard97203PortlandOregonUSA\n', 'Arthur Fine \nDepartment of Philosophy\nUniversity of Washington\nBox 35335098195SeattleWashingtonUSA\n'], 'authoraffiliation': ['Department of Physics\nUniversity of Portland\n5000 North Willamette Boulevard97203PortlandOregonUSA', 'Department of Philosophy\nUniversity of Washington\nBox 35335098195SeattleWashingtonUSA'], 'corpusid': 14979353, 'doi': '10.1103/physrevlett.112.070407', 'github_urls': [], 'n_tokens_mistral': 6103, 'n_tokens_neox': 5453, 'n_words': 3654, 'pdfsha': 'cde92fb333a72ea539be426c21c7e33c08ac54c7', 'pdfurls': ['https://arxiv.org/pdf/1306.5805v3.pdf'], 'title': ['No-Go Theorem for the Composition of Quantum Systems', 'No-Go Theorem for the Composition of Quantum Systems'], 'venue': []}
arxiv
Strangeness, Cosmological Cold Dark Matter and Dark Energy arXiv:astro-ph/0501378v1 18 Jan 2005

Sibaji Raha, Physics Department, Bose Institute, 93/1, A. P. C. Road, 700009, Kolkata, INDIA
Shibaji Banerjee, Physics Department, St. Xavier's College, 30, Park Street, 700016, Kolkata, INDIA
Abhijit Bhattacharyya, Physics Department, Scottish Church College, 1 & 3 Urquhart Square, 700006, Kolkata, INDIA
Sanjay K Ghosh, Physics Department, Bose Institute, 93/1, A. P. C. Road, 700009, Kolkata, INDIA
Ernst-Michael Ilgenfritz, Research Center for Nuclear Physics, Osaka University, 567-0047, Ibaraki, Osaka, JAPAN
Bikash Sinha, Saha Institute of Nuclear Physics, 1/AF, Bidhannagar, Kolkata - 700 064, INDIA
Eiichi Takasugi, Graduate School of Physics, Osaka University, 560-0043, Toyonaka, Osaka, JAPAN
Hiroshi Toki, Research Center for Nuclear Physics, Osaka University, 567-0047, Ibaraki, Osaka, JAPAN

PACS: 12.38.Mh, 12.90.+b, 14.80.Dq, 96.40.-z
* Electronic Mail : sibaji@bosemainboseinstacin

It is now believed that the universe is composed of a small amount of the normal luminous matter, a substantial amount of matter (Cold Dark Matter: CDM) which is non-luminous, and a large amount of smooth energy (Dark Energy: DE). Both CDM and DE seem to require ideas beyond the standard model of particle interactions. In this work, we argue that CDM and DE can arise entirely from the standard principles of strong interaction physics out of the same mechanism.

The current consensus [1,2] in cosmology is that the standard model comprising an initial Big Bang and a flat universe accommodates only about 4% atoms and ∼ 23% cold dark matter (CDM). The remaining ∼ 73% is a smooth energy, called the dark energy (DE). Matter contributing to CDM should have a dust-like equation of state, pressure p ≈ 0 and energy density ρ > 0, and be responsible for clustering on galactic or supergalactic scales. Dark energy (DE), on the other hand, shows no clustering features at any scale.
It is required to have an equation of state p = wρ with w < 0 (ideally w = −1), so that for a positive amount of dark energy, the resulting negative pressure would facilitate an accelerated expansion of the universe, evidence for which has recently become available from the redshift studies of type Ia supernovae [3]. The present-day critical density is ∼ 10^−47 GeV^4, so that ρ_DE today is ∼ 10^−48 GeV^4. There is no agreement within the community about the origin or the nature of CDM or DE. It is argued that given the limit of Ω_B ∼ 0.04, CDM cannot be baryonic; as a result, various exotic possibilities, all beyond the standard model SU(3)_c × SU(2) × U(1) of particle interactions, have been suggested. The situation is even more complicated for DE. The most natural explanation for DE would be a vacuum energy density, which a priori would have the correct equation of state (w = −1). This possibility however is beset with the insurmountable difficulty that for any known (or conjectured) type of particle interaction, the vacuum energy density scale turns out to be many orders of magnitude larger than the present critical density. A trivial, but aesthetically displeasing, way out could be to a priori postulate a small cosmological constant. There exists in the recent literature a large number, too many to cite here, of speculative suggestions, all substantially beyond the standard model. Apart from being non-standard, all these pictures require large amounts of fine-tuning. In this work, we argue that there is no essential need to go beyond the standard model SU(3)_c × SU(2) × U(1) to understand the nature of CDM and DE; they can both arise from the same process of the cosmic quark-hadron phase transition occurring during the microsecond epoch after the Big Bang. The role of phase transitions [4] in the early universe has been recognized to be of paramount importance. 
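As a quick consistency check (not part of the paper), the quoted present-day critical density ρ_c ∼ 10^−47 GeV^4 follows from ρ_c = 3H_0²/(8πG). The sketch below assumes H_0 ≈ 70 km/s/Mpc and the ∼73% dark-energy fraction quoted above; the constants are standard values, not taken from the text:

```python
import math

# Assumed constants (standard values, not from the paper)
hbar_GeV_s = 6.582e-25      # hbar in GeV*s
Mpc_cm = 3.086e24           # 1 Mpc in cm
M_Planck_GeV = 1.221e19     # Planck mass; G = 1/M_Planck^2 in natural units

# Hubble constant H0 ~ 70 km/s/Mpc, converted to GeV
H0_per_s = 70.0e5 / Mpc_cm          # s^-1
H0_GeV = H0_per_s * hbar_GeV_s      # ~1.5e-42 GeV

# Critical density rho_c = 3 H0^2 M_Pl^2 / (8 pi), in GeV^4
rho_c = 3.0 * H0_GeV**2 * M_Planck_GeV**2 / (8.0 * math.pi)
print(f"rho_c ~ {rho_c:.1e} GeV^4")     # of order 10^-47 GeV^4

# Dark-energy density ~0.73 of critical, same order of magnitude
rho_DE = 0.73 * rho_c
print(f"rho_DE ~ {rho_DE:.1e} GeV^4")
```

This reproduces the order of magnitude ρ_c ∼ 10^−47 GeV^4 against which the residual vacuum energy derived later in the paper is compared.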
In the strong interaction (Quantum Chromodynamics) sector, there is expected to be a phase transition separating the confined (hadronic) phase from the deconfined (quark-gluon plasma) phase. In the early universe, this phase transition is predicted to occur during the microsecond epoch after the Big Bang. The order of this phase transition is at present an open issue. While it may be of second order (or even a cross-over transition) in the laboratory, the cosmic phase transition could most likely be of first order, as has been argued earlier [5,6,7]. In what follows, we shall tacitly assume it to be of first order; the conclusions may however be valid even if the transition is not strictly of first order. We shall return to this issue at the end of the discourse. Another crucial ansatz in our scenario is that the universe is overall colour neutral at all times. We further assume, in keeping with the standard cosmological model, that the baryon number of the universe has been generated much before the universe reaches the microsecond era. The net baryon number till this epoch is carried in the form of (net) quarks. A first order phase transition can be described through a bubble nucleation scenario. At temperatures higher than the critical temperature T c , the coloured quarks and gluons are in a thermally equilibrated state in the perturbative vacuum (the quark-gluon plasma or QGP). The total colour of the universe is neutral so that the total colour wave function of the universe is a singlet. Then, as T c is reached and the phase transition starts, bubbles of the hadronic phase begin to appear in the quark-gluon plasma, grow in size and form an infinite chain of connected bubbles (the percolation process). At this stage, the ambient universe turns over to the hadronic phase. Within this hadronic phase, the remaining high temperature quark phase gets trapped in large bubbles. 
As is well known, this process is associated with a fluctuation in the temperature around T_c; the bubbles of the hadronic phase can nucleate only when the temperature falls slightly below T_c. The released latent heat raises the temperature again, and so on. It is thus fair to assume that the temperature of the universe remains around T_c at least up to percolation. Witten [6] argued some time ago that the net baryon number contained in these Trapped False Vacuum Domains (TFVD) could be many orders of magnitude larger than that in the normal hadronic phase, and that they could constitute the absolute ground state of strongly interacting matter. It has been shown [8] that if these TFVDs possess baryon number in excess of 10^42–10^44, they would be stable on cosmological time scales and would form the so-called strange quark nuggets (SQN). (It should be mentioned here that the role of strangeness is extremely important in this context; it is the population of the strange quark sector through weak interaction which ensures the stability of the SQNs against decay into normal hadrons. Hence the occurrence of the term "Strangeness" in the title and the justification of the topic in the present conference.) In such situations, they would have spatial radii ∼ 1 m, while their spatial separation would be ∼ 300 m [9]. TFVDs with baryon number less than this would evaporate into normal baryons quite rapidly, much before Big Bang Nucleosynthesis (BBN) [10] starts. (To distinguish the baryon number contained in SQNs from that participating in BBN, we denote the SQN matter as quasibaryonic.) The SQNs could evolve [9] into primordial structures of approximately solar mass, which could manifest themselves as the Massive Compact Halo Objects (MACHO), discovered [11,12] through gravitational microlensing in the Milky Way Halo in the direction of the Large Magellanic Cloud (LMC). 
In all these considerations, the explicit role of colour, the fundamental charge of strong interaction physics, has been glossed over. It has been tacitly assumed that in a many-body system of quarks and gluons, colour is averaged over, leaving only a statistical degeneracy factor for thermodynamic quantities. We argue that such a simplification has led us to overlook a fundamentally important aspect of strong interaction physics in cosmology. Let us now understand the QGP characteristics in terms of the Debye screening length (DSL). All colour charges are neutralised within the DSL (∼ 1/(g_s(T)T), g_s being the strong coupling constant). Formation of hadrons becomes possible only when the DSL becomes larger than the typical hadronic radius. The existence of QGP would mean that there is a sufficient number of colour charges present within the Debye volume. It can be shown that up to T_c, the Debye length is less than a fermi and more than 10 colour charges are present within the Debye volume (quarks, antiquarks and gluons). On the other hand, the net number of quarks (obtained using n_b/n_γ ∼ 10^−10) is much less than one within the same volume. Thus, to ensure both colour neutrality as well as integer baryon number, one would need a long-range correlation beyond the Debye length in QGP, the quantum entanglement property [13]. We will now consider the process of the cosmic quark-hadron phase transition from the quantum mechanical standpoint of colour confinement. As already mentioned, the colour wave function of the entire universe prior to the phase transition must be a singlet (this assumption is at the same level as that of the total electric charge of the universe being zero), which means it cannot be factorized into constituent product states; the wave functions of all coloured objects are completely entangled [13] in a quantum mechanical sense. 
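The Debye-volume counting above can be made concrete with rough numbers. The sketch below uses illustrative inputs (g_s ∼ 2, T ∼ 200 MeV, three light flavours — assumptions of this sketch, not values fixed by the paper): the screening length comes out below a fermi, the number of colour charges per Debye volume is of order one to ten (the precise count depends on the choice of g_s and flavour content), while the net quark number implied by n_b/n_γ ∼ 10^−10 is vanishingly small on that volume:

```python
import math

# Illustrative inputs near T_c (assumptions, not fixed by the paper)
T = 0.2          # temperature in GeV
g_s = 2.0        # strong coupling
hbarc = 0.1973   # conversion constant, GeV*fm

# Debye screening length ~ 1/(g_s T), converted to fm
dsl_fm = hbarc / (g_s * T)            # ~0.5 fm, below one fermi

# Equilibrium densities ~ (zeta(3)/pi^2) * g_eff * T^3
zeta3 = 1.20206
g_gluons = 16                # 8 colours x 2 polarizations
g_quarks = 0.75 * 36         # 3 flavours x 2 spins x 3 colours x (q + qbar), x 3/4 for fermions
n_colour = (zeta3 / math.pi**2) * (g_gluons + g_quarks) * T**3   # GeV^3
n_colour_fm3 = n_colour / hbarc**3                               # per fm^3

# Photon density, and net quark density from n_b/n_gamma ~ 1e-10
n_gamma_fm3 = 2 * (zeta3 / math.pi**2) * T**3 / hbarc**3
n_netq_fm3 = 3 * 1e-10 * n_gamma_fm3    # 3 net quarks per net baryon

V_debye = (4.0 / 3.0) * math.pi * dsl_fm**3
print(n_colour_fm3 * V_debye)   # O(1-10) colour charges per Debye volume
print(n_netq_fm3 * V_debye)     # << 1 net quark per Debye volume
```

The mismatch between these two counts is what motivates the long-range colour correlation (entanglement) argument in the text.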
In such a situation, the so-called quark-gluon plasma phase, the universe is characterized by a vacuum energy corresponding to the perturbative vacuum of Quantum Chromodynamics (QCD). As the phase transition proceeds, locally colour neutral configurations (hadrons) arise, resulting in gradual decoherence of the entangled colour wave function of the entire universe. Note that the coloured objects within the hadrons are entangled among themselves but not with those in the rest of the universe. This amounts to a proportionate reduction in the perturbative vacuum energy density, which goes into providing the latent heat of the transition, as well as the mass and the kinetic energy of the particles in the non-perturbative (hadronic) phase. (It should be mentioned here that the vacuum energy of the non-perturbative phase of QCD is taken to be zero; more on this later.) In the quantum mechanical sense of entangled wave functions, the end of the quark-hadron transition would correspond to complete decoherence of the colour wave function of the universe; the entire vacuum energy would disappear as the perturbative vacuum would be replaced by the non-perturbative vacuum. Combining these observations with the formation of TFVDs as discussed above, it is obvious that in order for the TFVDs to be stable physical objects, they must be colour neutral. This is synonymous with the requirement that they all have integer baryon numbers, i.e., at the moment of formation each TFVD has net quark numbers in exact multiples of 3. For a statistical process, this is, obviously, most unlikely and consequently, most of the TFVDs would have some residual colour at the percolation time. Then, on the way to becoming colour singlet, they would each have to shed one or two coloured quarks. This scenario is certainly not inconsistent with the screening of colour within DSL, even if the size of the TFVD is much larger than the DSL. 
It is well known from electromagnetic plasmas that local charge neutrality is violated over a small length, of the order of the DSL, at the boundary of the plasma (the plasma sheath effect). Thus, the end of the cosmic QCD phase transition corresponds to a situation where there would be a few quarks, separated by spacelike distances. It has to be noted that such a large separation, apparently against the dictates of QCD, is by no means unphysical. The separation of coloured TFVDs occurs at the temperature T_c, when the effective string tension is zero, so that there does not exist any long-range force. Even more importantly, these orphan quarks are not deconfined at all; they do not form asymptotic states. In terms of the quantum entanglement and decoherence of the colour wave function, their colour wave functions must still remain entangled, and a corresponding amount of the perturbative vacuum energy would persist in the universe. In a physical picture, the orphan quarks, being unable to form strings and recombine into baryons, belong to a very dilute many-body system of quarks confined in a very large bag which spans the entire universe. Note that the above scenario is unique to the early universe. For the QGP putatively formed in the laboratory in energetic heavy ion collisions, the process is limited to strong interaction time scales, so that the size of the system is of the order of a few fermis, comparable to the DSL. Thus separation of quarks over large spatial distances is not at all likely. Furthermore, one has to also take into account the two possible situations in these collisions. If there is complete or substantial stopping, as seems to be the case up to present energies, the baryon number density is very high, so that there would be a sufficient number of net quarks within the Debye volume. On the other hand, if there is total transparency, the baryon chemical potential in the central region would be zero. 
In either case, there is no a priori need to invoke quantum entanglement. There does not exist any way to calculate the perturbative vacuum energy from first principles in QCD. For this quantity, one may adopt the phenomenological Bag model [14] of confinement, where the Bag parameter B (∼ (145 MeV)^4) is the measure of the difference between the perturbative and the non-perturbative vacua. Thus we can assume that at the beginning of the phase transition, the universe starts out with a vacuum energy density B, which gradually decreases with increasing decoherence of the entangled colour wave function. A natural thermodynamic measure of the amount of entanglement during the phase transition could be the volume fraction (f_q ≡ V_colour/V_total) of the coloured degrees of freedom; at the beginning, f_q is unity, indicating complete entanglement, while at the end, very small but finite entanglement corresponds to a tiny but non-zero f_q due to the coloured quarks. Accordingly, the amount of perturbative vacuum energy density in the universe at any time is the energy density B times the instantaneous value of f_q; within the scenario discussed above, the remnant perturbative vacuum energy at the end of the QCD transition would just be B × f_q,O, where f_q,O is due solely to the orphan quarks. An order-of-magnitude estimate for f_q,O can be carried out in the following straightforward manner. On the average, each TFVD is associated with one orphan quark, so that the number N_q,O of orphan quarks within the horizon volume at any time is about the same as the number N_TFVD of TFVDs therein. It is well known from the study of percolating systems [15] that percolation is characterized by a critical volume fraction f_c ∼ 0.3 of the high temperature phase. In the present case, this would require f_q in the form of TFVDs to be ∼ 0.3. 
Following the ansatz of Witten [6] that the most likely length scale for a TFVD is a few cm, one can estimate N_TFVD (and hence N_q,O) within the horizon at the percolation time of about 100 µsec [5] to be about 10^18–10^20. The inter-TFVD separation comes out to be ∼ 0.01 cm at that time. (It is obvious that the orphan quarks, separated by distances of 0.01 cm, cannot develop colour strings between them, even if there is some non-zero string tension generated at temperatures slightly lower than T_c.) Then, if we naively associate an effective radius of ∼ 10^−14 cm (estimated from σ_qq = (1/9)σ_pp; σ_pp ∼ 20 mb) with each orphan quark, we obtain f_q,O ∼ N_q,O × (v_q,O/V_total) ∼ 10^−42–10^−44 (where v_q,O is the effective volume of an orphan quark), so that the residual pQCD vacuum energy comes out to be in the range 10^−46 to 10^−48 GeV^4, just the amount of DE. Even though the DE component appears during the microsecond era in the history of the universe, it remains negligible in comparison to the matter density for most of the history. Since the matter density decreases as R^−3 (R being the scale size), while the DE density remains constant, the latter can become dominant only at very late times (z ∼ 0.17) and thus would not affect the galaxy formation scenarios to any extent. The density of the orphan quarks in the present universe is exceedingly small; compared to ∼ 10^77 baryons which took part in BBN, there would be 10^44–10^45 orphan quarks. Their flux in the cosmic rays would be negligibly small; non-observance of fractionally charged objects is thus not a detractor. The scenario presented here may still remain valid even if the QCD phase transition is not strictly of first order, provided there are finite-size fluctuations associated with the quark-hadron transition. A definitive answer to this question would require a detailed simulation, which is a cherished goal. 
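The order-of-magnitude chain above is easy to reproduce. The sketch below plugs in the rough values quoted in the text (percolation time ∼100 µs, N_q,O ∼ 10^18–10^20 orphan quarks, effective quark radius ∼10^−14 cm, B ∼ (145 MeV)^4) and recovers f_q,O ∼ 10^−42–10^−44 and a residual vacuum energy at the edge of the quoted dark-energy range:

```python
import math

# Inputs as quoted in the text (order-of-magnitude values)
t = 1e-4                 # percolation time ~100 microseconds, in s
c = 3e10                 # speed of light, cm/s
R_horizon = c * t        # horizon scale ~3e6 cm
V_total = (4.0 / 3.0) * math.pi * R_horizon**3   # horizon volume, cm^3

N_orphan = 1e19          # ~one orphan quark per TFVD; 10^18-10^20 quoted
r_q = 1e-14              # effective orphan-quark radius, cm
v_q = (4.0 / 3.0) * math.pi * r_q**3

# Residual coloured volume fraction due to orphan quarks
f_qO = N_orphan * v_q / V_total
print(f"f_q,O ~ {f_qO:.1e}")                    # ~1e-43, within the quoted 1e-44..1e-42

# Residual perturbative vacuum energy: B * f_q,O, with B ~ (145 MeV)^4
B = 0.145**4             # GeV^4
rho_residual = B * f_qO
print(f"rho_residual ~ {rho_residual:.1e} GeV^4")   # of order 1e-46 GeV^4
```

With the central value N_q,O = 10^19 this lands at the upper end of the quoted 10^−46 to 10^−48 GeV^4 window; varying N_q,O over the quoted decade span sweeps the rest of it.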
A posteriori, it is tempting to mention that it should perhaps have been anticipated that if the cosmological constant does arise from a vacuum energy, then the QCD vacuum is the most natural candidate. It is a known (and accepted) lore of (renormalisable) quantum field theories that the divergent vacuum energy density is renormalised to the physical parameters of the theory. Within all the field theories now in vogue, it is only QCD which has distinct perturbative and non-perturbative vacua, separated by a finite energy density, irrespective of the renormalisation prescription. (That we are unable to estimate this as yet from first principles in QCD is a technical shortcoming, not a conceptual one.) It is thus most plausible that even if suitable renormalisation prescriptions remove all vacuum energy densities, the finite part of the perturbative vacuum energy density of QCD should survive and play the role of the cosmological constant. We therefore conclude by reiterating that the emergence of both CDM and DE from the same mechanism entirely within the standard model of particle interactions is a very interesting possibility which deserves detailed attention. A natural corollary of the present scenario would be the existence of strange quark matter (strangelets) in the cosmic ray flux. Despite many searches, no conclusive evidence for the existence of strangelets has yet emerged. The reader is referred to the article of Jes Madsen in this volume for a review of the status of such searches. We are in the process of setting up a dedicated large area array of passive detectors at mountain altitudes in the Eastern Himalayas for such detection, funding for which has very recently been approved. SR would like to thank the Research Center for Nuclear Physics, Osaka University for their warm hospitality during his sojourn there, where this work was initiated.
References
[1] T. Tegmark, Science 296, 1427-1433 (2002).
[2] B. Leibundgut, Ann. Rev. Astron. Astrophys. 39, 67-98 (2001).
[3] R. P. Kirshner, Science 300, 1914-1918 (2003).
[4] H. J. de Vega, I. M. Khalatnikov and N. G. Sánchez, Phase Transitions in the Early Universe: Theory and Observations (Kluwer Academic Publishers, 2003).
[5] J. Alam, S. Raha and B. Sinha, Astrophys. J. 513, 572-575 (1999).
[6] E. Witten, Phys. Rev. D30, 272-285 (1984).
[7] J. Alam, S. Raha, and B. Sinha, Phys. Rep. 273, 243-362 (1996).
[8] P. Bhattacharjee, J. Alam, B. Sinha, and S. Raha, Phys. Rev. D48, 4630-4638 (1993).
[9] S. Banerjee et al., Mon. Not. R. Astron. Soc. 340, 284-288 (2003).
[10] C. J. Copi, D. N. Schramm, and M. S. Turner, Science 267, 192-199 (1995).
[11] C. Alcock et al., Nature 365, 621-623 (1993).
[12] E. Aubourg et al., Nature 365, 623-625 (1993).
[13] W. K. Wootters, Phil. Trans. R. Soc. Lond. A356, 1717-1731 (1998).
[14] A. Chodos, R. L. Jaffe, K. Johnson, C. B. Thorn and V. F. Weisskopf, Phys. Rev. D9, 3471-3495 (1974).
[15] D. Stauffer, Phys. Rep. 54, 1 (1979).
{'fraction_non_alphanumeric': 0.03762537793348371, 'fraction_numerical': 0.022220089264289485, 'mean_word_length': 4.395649922320041, 'pattern_counts': {'":': 0, '<': 1, '<?xml version=': 0, '>': 1, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'It is now believed that the universe is composed of a small amount of the normal luminous matter, a substantial amount of matter (Cold Dark Matter: CDM) which is non-luminous and a large amount of smooth energy (Dark Energy: DE). Both CDM and DE seem to require ideas beyond the standard model of particle interactions. In this work, we argue that CDM and DE can arise entirely from the standard principles of strong interaction physics out of the same mechanism.', 'arxivid': 'astro-ph/0501378', 'author': ['Sibaji Raha \nPhysics Department\nBose Institute\n93/1, A. P. C. Road700009KolkataINDIA\n', "Shibaji Banerjee \nPhysics Department\nSt. Xavier's College\n30, Park Street700016KolkataINDIA\n", 'Abhijit Bhattacharyya \nPhysics Department\nScottish Church College\n1 & 3\n\nUrquhart Square\n700006KolkataINDIA\n', 'Sanjay K Ghosh \nPhysics Department\nBose Institute\n93/1, A. P. C. Road700009KolkataINDIA\n', 'Ernst-Michael Ilgenfritz \nResearch Center for Nuclear Physics\nOsaka University\n567-0047IbarakiOsakaJAPAN\n', 'Bikash Sinha \nSaha Institute of Nuclear Physics\n1/AF, Kolkata -700 064BidhannagarINDIA\n', 'Eiichi Takasugi \nGraduate School of Physics\nOsaka University\n560-0043ToyonakaOsakaJAPAN\n', 'Hiroshi Toki \nResearch Center for Nuclear Physics\nOsaka University\n567-0047IbarakiOsakaJAPAN\n'], 'authoraffiliation': ['Physics Department\nBose Institute\n93/1, A. P. C. Road700009KolkataINDIA', "Physics Department\nSt. Xavier's College\n30, Park Street700016KolkataINDIA", 'Physics Department\nScottish Church College\n1 & 3', 'Urquhart Square\n700006KolkataINDIA', 'Physics Department\nBose Institute\n93/1, A. P. C. 
Road700009KolkataINDIA', 'Research Center for Nuclear Physics\nOsaka University\n567-0047IbarakiOsakaJAPAN', 'Saha Institute of Nuclear Physics\n1/AF, Kolkata -700 064BidhannagarINDIA', 'Graduate School of Physics\nOsaka University\n560-0043ToyonakaOsakaJAPAN', 'Research Center for Nuclear Physics\nOsaka University\n567-0047IbarakiOsakaJAPAN'], 'corpusid': 119496708, 'doi': '10.1088/0954-3899/31/6/028', 'github_urls': [], 'n_tokens_mistral': 5603, 'n_tokens_neox': 4875, 'n_words': 3452, 'pdfsha': '2f59244713a9eec84c06770f439fab790d08a816', 'pdfurls': ['https://arxiv.org/pdf/astro-ph/0501378v1.pdf'], 'title': ['Strangeness, Cosmological Cold Dark Matter and Dark Energy', 'Strangeness, Cosmological Cold Dark Matter and Dark Energy'], 'venue': []}
arxiv
A new proof of Rédei's theorem on the number of directions 24 Dec 2022 Gábor Somlai [email protected] Department of Algebra and Number Theory Eötvös Loránd University
Rédei and Megyesi proved that the number of directions determined by a p-element subset of F_p^2 is either 1 or at least (p+3)/2. The same result was independently obtained by Dress, Klin and Muzychuk. We give a new and short proof of this result using a lemma proved by Kiss and the author. The new proof further relies on a result on polynomials over finite fields.
Introduction
Let p be a prime. The points of the projective line PG(1, F_p) can be considered as equivalence classes of the non-zero vectors of the affine plane AG(2, F_p), where F_p denotes the field of p elements. Two elements are equivalent if one of them is a non-zero multiple of the other. For a subset H, the set of directions D(H) ⊂ PG(1, F_p) is the set of equivalence classes corresponding to the elements of (H − H) \ {0}. The number of directions determined by H is the cardinality of D(H). Rédei investigated the number of directions determined by a p-element subset of the two-dimensional space F_p^2 over the finite field F_p of p elements. Using his results on lacunary polynomials, Rédei proved that such a subset is a line or determines at least (p+1)/2 directions. Later, Megyesi excluded the case of (p+1)/2 directions. These results can be summarized as follows, see [7].
Theorem 1.1. If a set of p points in F_p^2 is not a line, then it determines at least (p+3)/2 directions.
Sets determining exactly (p+3)/2 directions were described by Lovász and Schrijver [6]. As a generalization of Rédei's result, Szőnyi [8] proved that if k ≤ p, then a k-element subset not lying in a line determines at least (k+3)/2 directions. Gács [4] showed that there is another gap in the possible number of directions between (p+3)/2 and ⌊2(p−1)/3⌋ + 1. 
Note that the result of Megyesi and Rédei was independently obtained by Dress, Klin and Muzychuk [2]. They used this result to give a new proof of Burnside's theorem on permutation groups of prime degree. Another application of the results on the number of directions in group theory is due to Dona [3], who used his result to add to the theory of growth in groups. Further, the connection of the set of directions in the affine plane to blocking sets on finite projective planes is also discussed in [4]. One of the main purposes of this paper is to give a new proof of Theorem 1.1. The other one is to prove Theorem 1.2, which will immediately imply Theorem 1.1. (* The author is a János Bolyai research fellow. The work of the author on the project leading to this application has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 741420) and by (NKFIH) Grant No. K138596 and SNN 132625.)
Let g : F_p → F_p be a polynomial. Considering the polynomial function corresponding to g, we may assume that the degree of g is at most p − 1. Further, the elements of F_p can be considered as elements of {0, 1, . . . , p − 1} ⊂ Z; thus we may consider the sum of the values of g in Z. If it is small enough, then we obtain restrictions on the degree of g. In order to motivate the following theorem, it is useful to consider the polynomial q(x) = x^{(p−1)/2} + 1. The sum of the values of q is equal to p since
q(x) = 2 if x is a non-zero quadratic residue, 1 if x = 0, and 0 otherwise. (1)
This simple example shows that the following theorem is sharp.
Theorem 1.2. Let p be an odd prime. If Σ_{x∈F_p} g(x) = p, then either the degree of g is at least (p−1)/2 or g is a constant function.
Technique
The proof of Theorem 1.2 relies on the following result proved in [5]. Note that the proof of this lemma uses Rédei polynomials.
Lemma 2.1. Let A be a subset of F_p^2 of cardinality kp. 
Assume that it determines d special directions. Let r be a projection function defined as follows: r(i) = |{j ∈ F_p | (i, j) ∈ A}|. Then d ≥ deg(r) + 2.
Using a suitable affine transformation, we may prove the previous lemma for any projection function obtained in this way instead of the vertical projection. As a corollary of this lemma we obtain that in order to prove Rédei's result it is sufficient to prove Theorem 1.2, which is of independent interest. Using a simple argument we get a weaker result than Theorem 1.2.
Proposition 2.2. Let p be an odd prime. If Σ_{x∈F_p} g(x) = p, then either the degree of g is at least (p−1)/3 or g is a constant function.
Proof. We simply prove that one of the values of g is taken at least (p−1)/3 times; more precisely, |{x ∈ F_p | g(x) = 0}| ≥ (p−1)/3 or |{x ∈ F_p | g(x) = 1}| ≥ (p−1)/3. Assume indirectly that this is not the case. Then
Σ_{x∈F_p} g(x) ≥ Σ_{x∈F_p : g(x)≥1} 1 + Σ_{x∈F_p : g(x)≥2} 1 ≥ (p − (p−1)/3) + (p − 2(p−1)/3) = p + 1,
a contradiction.
In order to emphasize the usefulness of Lemma 2.1 we prove the following simple result (Theorem 2.3, stated below). The proof uses again the observation that the multiplicity of any element in the range of a non-constant polynomial is a lower bound for the degree of a non-zero polynomial.
Proof. Let r be the function from F_p to F_p defined as in Lemma 2.1. Then the multiplicity of 0 as a root of r is at least p − a, and the projection of H to the second coordinate similarly gives a polynomial of degree at least p − b. Now Lemma 2.1 gives the result.
The importance of this trivial corollary of Lemma 2.1 lies in the similarity of this result to the one of Di Benedetto, Solymosi and White [1], who proved that the number of directions determined by a subset of F_p^2 which is the Cartesian product of subsets A, B ⊂ F_p is at least |A| · |B| − min{|A|, |B|} + 2.
Proof of the main result
Proof. 
As we have mentioned, every polynomial function from F_p to F_p coincides with a unique polynomial of degree at most p − 1, so we automatically reduce the degree below p. The proof of Theorem 1.2 relies on the following simple observation: the degree of a polynomial h (of degree at most p − 1) is smaller than p − 1 if and only if Σ_{y∈F_p} h(y) ≡ 0 (mod p). Let us consider the polynomial f(x) = g(x²) (reduced to degree at most p − 1). Clearly, if Σ_{y∈F_p} f(y) ≢ 0 (mod p), then deg(g) ≥ (p−1)/2. We argue that if there were a non-constant polynomial of degree less than (p−1)/2 such that the sum of its values is p, then there is one which takes 0 at 0. It is clear that if the polynomial is non-constant, then the sum can only be p if 0 is in the range of the polynomial. Now, applying a linear substitution x → x + i to the variable of the polynomial, we obtain a polynomial of the same degree satisfying g(0) = 0. Let us first estimate the sum of the values of f from above:
Σ_{y∈F_p} f(y) = Σ_{x∈F_p} g(x²) = g(0) + 2 Σ_{x∈(F_p^*)²} g(x) = 2 Σ_{x∈(F_p^*)²} g(x) ≤ 2 Σ_{x∈F_p} g(x) = 2p. (2)
It is clear that Σ_{y∈F_p} f(y) cannot be equal to p since it is an even number by equation (2). On the other hand, equality in (2) can only hold if g vanishes on every non-quadratic residue, in which case (together with g(0) = 0) the degree of g is at least (p−1)/2 + 1 = (p+1)/2. It could be that the previous sum is 0, but then g vanishes on the quadratic residues, again having many roots. Therefore we obtain that the sum in equation (2) is not divisible by p, finishing the proof of Theorem 1.2.
Theorem 1.1 now follows by applying Theorem 1.2 to the projection function defined in Lemma 2.1. We may assume that the set is not a vertical line. It follows from Theorem 1.2 that the degree of r is at least (p−1)/2. Then by Lemma 2.1, the number of directions determined by A is at least (p+3)/2 = (p−1)/2 + 2.
There are natural problems arising here: can we find similar results proving the ones listed at the beginning of this paper? 
• Is it true that, up to affine transformations, x^{(p−1)/2} + 1 is the unique polynomial of degree (p−1)/2 such that the sum of its values is p?
• Is it possible to prove Gács's result on the number of directions?
Theorem 2.3. Let H be a subset of F_p^2 of cardinality p. Let a and b be the sizes of the projections of H to the x and y axes, respectively. Then the number of directions determined by H is at least p − min{a, b} + 2.
Acknowledgement. The author is grateful to Gergely Kiss and Zoltán Nagy for short but fruitful conversations.
References
[1] D. Di Benedetto, J. Solymosi, E. P. White, On the directions determined by a Cartesian product in an affine Galois plane, Combinatorica 41(6), 755-763.
[2] A. W. M. Dress, M. H. Klin, M. Muzychuk, On p-configurations with few slopes in the affine plane over F_p and a theorem of W. Burnside's, Bayreuther Math. Schriften 40 (1992), 7-19.
[3] D. Dona, Number of directions determined by a set in F_q^2 and growth in Aff(F_q), Discrete & Computational Geometry 66(4) (2021), 1415-1428.
[4] A. Gács, On a generalization of Rédei's theorem, Combinatorica 23 (2003), 585-598.
[5] G. Kiss, G. Somlai, Special directions on the finite affine plane, https://arxiv.org/abs/2109.13992.
[6] L. Lovász, A. Schrijver, Remarks on a theorem of Rédei, Studia Scient. Math. Hungar. 16 (1981), 449-454.
[7] L. Rédei, Lückenhafte Polynome über endlichen Körpern, Birkhäuser Verlag, Basel (1970). English translation: Lacunary polynomials over finite fields, North Holland, Amsterdam (1973).
[8] T. Szőnyi, On the number of directions determined by a set of points in an affine Galois plane, Journal of Combinatorial Theory Series A 74 (1996), 141-146.
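The statements above can be checked by brute force for small primes. The following sanity script (not part of the paper; the helper `directions` is our own) counts the directions determined by p-point sets in F_p^2 — a line gives 1, a parabola at least (p+3)/2 — and verifies the quadratic-residue example q(x) = x^{(p−1)/2} + 1 that makes Theorem 1.2 sharp:

```python
from itertools import combinations

def directions(points, p):
    """Directions (normalized points of PG(1, F_p)) determined by a point set."""
    dirs = set()
    for (x1, y1), (x2, y2) in combinations(points, 2):
        dx, dy = (x2 - x1) % p, (y2 - y1) % p
        if dx == 0:
            dirs.add((0, 1))                           # the vertical direction
        else:
            dirs.add((1, dy * pow(dx, p - 2, p) % p))  # slope dy/dx mod p
    return dirs

for p in (5, 7, 11, 13):
    line = [(i, (2 * i + 1) % p) for i in range(p)]
    parabola = [(i, i * i % p) for i in range(p)]
    assert len(directions(line, p)) == 1                  # a line: one direction
    assert len(directions(parabola, p)) >= (p + 3) // 2   # consistent with Theorem 1.1

    # Sharpness example for Theorem 1.2: the values of q(x) = x^((p-1)/2) + 1,
    # lifted to {0, ..., p-1}, sum to exactly p (2 per nonzero QR, plus 1 at x = 0)
    q_values = [(pow(x, (p - 1) // 2, p) + 1) % p for x in range(p)]
    assert sum(q_values) == p
print("all checks passed")
```

Modular inverses are taken via Fermat's little theorem (`pow(dx, p - 2, p)`), which is valid since p is prime and dx ≠ 0.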
Coherent forward scattering as a robust probe of multifractality in critical disordered media
(Dated: October 10, 2022)

Maxime Martinez, Laboratoire de Physique Théorique, Université de Toulouse, CNRS, UPS, France
Gabriel Lemarié, Laboratoire de Physique Théorique, Université de Toulouse, CNRS, UPS, France; MajuLab, CNRS-UCA-SU-NUS-NTU International Joint Research Unit, Singapore; Centre for Quantum Technologies, National University of Singapore, Singapore
Bertrand Georgeot, Laboratoire de Physique Théorique, Université de Toulouse, CNRS, UPS, France
Christian Miniatura, MajuLab, CNRS-UCA-SU-NUS-NTU International Joint Research Unit, Singapore; Centre for Quantum Technologies, National University of Singapore, Singapore; Université Côte d'Azur, CNRS, INPHYNI, Nice, France; Department of Physics, National University of Singapore, Singapore; School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore
Olivier Giraud, Université Paris-Saclay, CNRS, LPTMS, 91405 Orsay, France

We study coherent forward scattering (CFS) in critical disordered systems, whose eigenstates are multifractals. We give general and simple arguments that make it possible to fully characterize the dynamics of the shape and height of the CFS peak. We show that the dynamics is governed by the multifractal dimensions D_1 and D_2, which suggests that CFS could be used as an experimental probe of quantum multifractality. Our predictions are universal and numerically verified in three paradigmatic models of quantum multifractality: power-law random banded matrices (PRBM), the Ruijsenaars-Schneider ensembles (RS), and the three-dimensional kicked rotor (3DKR). In the strong-multifractality regime, we show analytically that these universal predictions exactly coincide with results from standard perturbation theory applied to the PRBM and RS models.
PACS numbers: 05.45.Df, 05.45.Mt, 71.30.+h, 05.40.-a
arXiv:2210.04796v1 [cond-mat.dis-nn]

I. INTRODUCTION

Wave transport in disordered systems is a long-standing topic of interest in mesoscopic physics. In particular, wave interference can have dramatic consequences on quantum transport properties. The most celebrated example is probably Anderson localization (AL) [1], that is, the suppression of quantum diffusion and the exponential localization of quantum states. AL is ubiquitous in wave physics and has been observed in many experimental situations: with acoustic waves [2,3], light [4-8] and matter waves [9-15]. The appearance of AL depends on several characteristics, in particular dimensionality, disorder strength and correlations. For instance, it is well established that 3d disordered lattices undergo a genuine disorder-driven metal-insulator transition (MIT), associated with a mobility edge in the spectrum, separating the insulating phase with localized eigenstates from the conducting phase with extended eigenstates. Near the critical point of such disorder-driven transitions, eigenstates φ_α (with energy ω_α) can display multifractal behavior, for instance at the MIT in the Anderson model [16-18] and on graphs [19-21], but also at the Weyl-semimetal-diffusive transition [22]. They are extended but non-ergodic, and characterized by the anomalous scaling of their moments I_q(E):

    I_q(E) = \frac{\overline{\sum_{n,\alpha} |\phi_\alpha(n)|^{2q}\,\delta(E-\omega_\alpha)}}{\overline{\sum_\alpha \delta(E-\omega_\alpha)}} \sim N^{-D_q(q-1)},    (1)

where the D_q are the multifractal dimensions, forming a continuous set as q runs over the reals (the overline denotes an average over disorder configurations). The extreme cases D_q = 0 and D_q = d (the dimension of the system) for all q correspond respectively to localized and to extended ergodic eigenstates. While the Anderson MIT has been observed directly with atomic matter waves [13], the experimental observation of multifractality remains challenging [23-26].
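The scaling in Eq. (1) is what a numerical study actually fits. As a minimal illustration (not the authors' code), the sketch below estimates D_q from the generalized moments of normalized states on lattices of increasing linear size; the helper `estimate_Dq` and its test states are our own hypothetical examples, with d = 1 for simplicity.

```python
import numpy as np

def moment_Iq(psi, q):
    """Generalized moment sum_n |psi(n)|^{2q} of one normalized state."""
    p = np.abs(psi) ** 2
    return np.sum(p ** q)

def estimate_Dq(sizes, states, q):
    """Fit I_q ~ N^{-D_q (q-1)} (Eq. (1) with d = 1) over several linear sizes N."""
    logN = np.log(np.asarray(sizes, dtype=float))
    logI = np.log([moment_Iq(s, q) for s in states])
    slope = np.polyfit(logN, logI, 1)[0]
    return -slope / (q - 1)

# Two limiting cases of Eq. (1):
sizes = [2 ** k for k in range(6, 12)]
ergodic = [np.full(N, 1 / np.sqrt(N)) for N in sizes]   # flat state: D_q = d = 1
localized = [np.eye(N)[0] for N in sizes]               # single-site state: D_q = 0
```

For the flat state I_2 = 1/N exactly, so the fit returns D_2 = 1; for the single-site state I_q = 1 for all N, giving D_q = 0. Multifractal states interpolate between these limits.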
In particular, there exists to our knowledge no direct experimental observation of dynamical multifractality, i.e. of manifestations of multifractality in transport properties (e.g. the power-law decay of the return probability [27,28]).

Another celebrated wave-interference effect is coherent backscattering (CBS): the doubling of the scattering probability (with respect to the incoherent classical contribution) of an incident plane wave with wave vector k_0 in the backward direction -k_0. Coherent backscattering has been observed in many experimental situations: with light [29-33], acoustic waves [34,35], seismic waves [36] and cold atoms [37,38]. Recently, it was demonstrated that in the presence of AL a new robust scattering effect emerges [39-46], namely the doubling of the scattering probability in the forward direction +k_0. This phenomenon, which appears at long times, was dubbed coherent forward scattering (CFS). CBS and CFS have distinct origins: CBS comes from pair interference of time-reversed paths (and thus requires time-reversal symmetry), while CFS is present even in the absence of time-reversal symmetry [39,40]. From an experimental point of view, CFS has recently been observed with cold atoms [38].

In this work, we discuss the fate of CFS at the critical point of a disorder-driven transition with multifractal eigenstates. This problem was first addressed for a bulk 3d Anderson lattice [44], for which it was shown that CFS survives at the transition, with however a scattering probability smaller than in the localized phase.

[Figure 1 caption:] CFS contrast Λ_N(k, t; E), defined by Eq. (40), in critical disordered systems. k_0 is the wave vector of the incident plane wave and the D_q are the multifractal dimensions of the eigenstates. (a) In systems of infinite size N → ∞, the emergence of the CFS peak as a function of time is governed by the nonergodic properties of the multifractal eigenstates. The CFS wings decay asymptotically like (|k - k_0| t^{1/d})^{-D_2}, see Eqs. (62) and (63), while the CFS peak height approaches the compressibility value χ = 1 - D_1/d algebraically, with a t^{-D_2/d} correction, in the long-time limit t → ∞, see Eq. (58). (b) For systems of finite size N, the long-time dynamics of the CFS peak is governed by the box boundaries. The CFS peak height reaches 1 - αN^{-D_2} for t → ∞, with α some numerical factor, see Eq. (56). The wings of the CFS peak are then described by Eq. (54).

More precisely, it was conjectured from numerical evidence that, instead of a doubling of the classical incoherent contribution, the forward scattering probability corresponds to a multiplication by a factor (2 - D_1/d), with d the dimension of the system and D_1 the information dimension. In our previous study [46], we gave scaling arguments that corroborate this conjecture, backed by numerical simulations on the Ruijsenaars-Schneider ensemble, a Floquet system with critical disorder and tunable multifractal dimensions. We also studied CFS at the transition in finite-size systems, unveiling a new regime in which the CFS properties show a finite-size scaling related to the multifractal dimension D_2 [46].

This article builds on the approach developed in our previous work [46], somewhat in the spirit of the random-matrix-theory point of view discussed in [41]. In particular, we give a complete description of the dynamics of the CFS peak in critical disordered systems, including the height and shape of the scattering probability, in two distinct dynamical regimes. Our findings are summarized in the sketch of Fig. 1. In particular, we present new links between the CFS dynamics and the multifractal dimension D_2 that are relevant for most experimental situations.
Our analytical predictions are verified on three different critical disordered models with multifractal eigenstates: power-law random banded matrices (PRBM), the Ruijsenaars-Schneider ensemble (RS) and the unitary three-dimensional random kicked rotor (3DKR). Our predictions are also corroborated by perturbative expansions for the RS and PRBM models in the strong-multifractality regime. These results pave the way to a direct observation of a dynamical manifestation of multifractality in a critical disordered system.

II. CRITICAL DISORDERED MODELS

As explained above, our predictions will be compared to numerical simulations of three different models. All of them can be mapped onto a generalized d-dimensional Anderson model, defined by the tight-binding Hamiltonian

    H = \sum_n \varepsilon_n |n\rangle\langle n| + \sum_{n \neq m} t_{nm} |n\rangle\langle m|,    (2)

where the |n⟩ are the lattice-site states, ε_n the on-site energies and t_{nm} the hopping between two sites at distance |n - m|. Both ε_n and t_{nm} can be arbitrary random variables, whose exact properties depend on the system considered (see Table I). We will be interested in finite-size effects, and will consider a system of linear size N, i.e. with a total number of sites equal to N^d.

The MIT in the generalized Anderson model (2) has been intensively studied (see [18,47] and references therein). The three relevant parameters are the spatial dimension d of the lattice, the range of the hopping t_{nm}, and the existence of correlations in the random entries of the Hamiltonian. We recall here some well-established facts: (i) in the absence of disorder correlations, and if |t_{nm}| decays faster than 1/|n - m|^d, the Anderson transition only occurs for d > 2; (ii) in the absence of disorder correlations, critical eigenstates can appear if |t_{nm}| decays as 1/|n - m|^d; (iii) correlations in the diagonal disorder ε_n weaken localization, while correlations in the off-diagonal disorder t_{nm} can favour localization.
We now discuss the characteristics and properties of the different models we use, as well as their link with the Anderson model (2). A summary is given in Table I.

A. Power-Law Random Banded Matrices (PRBM)

Power-law random banded matrices were first introduced in [48]. They were inspired by earlier random banded matrix ensembles with exponential decay, describing the transition from integrability to chaos [49]. The PRBM model is defined by symmetric or Hermitian matrices whose elements are independent identically distributed (i.i.d.) Gaussian random variables with zero mean and a variance decreasing as a power law of the distance from the diagonal. The critical PRBM model corresponds to an Anderson model (2) with random long-range hopping whose variance decays as the inverse of the distance between sites. More precisely, let N(μ, σ) be a Gaussian distribution of mean μ and standard deviation σ. In the following we use the version of the PRBM considered in [17,50], with periodic boundary conditions, where for N × N matrices the diagonal entries ε_n are i.i.d. with distribution N(0, 1), and the real and imaginary parts of the off-diagonal entries t_{nm} are i.i.d. with distribution N(0, σ_{nm}/\sqrt{2}), with

    \sigma_{nm}^2 = \left[ 1 + \left( \frac{\sin(\pi |n-m|/N)}{b\pi/N} \right)^2 \right]^{-1}.    (3)

In particular we have \overline{|t_{nm}|^2} = \sigma_{nm}^2, which scales as ∼ 1/|n - m| for b ≪ |n - m| ≪ N. The density of states is defined as

    \rho(E) = \frac{1}{N^d} \overline{\sum_\alpha \delta(E - \omega_\alpha)},    (4)

which for this model gives

    \rho_{\mathrm{PRBM}}(E) = \begin{cases} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{E^2}{2}\right), & b \ll 1, \\[4pt] \frac{1}{2b\pi^2} \sqrt{4b\pi - E^2}, & b \gg 1. \end{cases}    (5)

The eigenvectors are multifractal, and their multifractal dimensions D_q, which depend on both E and the parameter b, can be computed analytically [18,50]. The parameter b makes it possible to explore the whole range of multifractality regimes: the weak-multifractality regime D_q → 1 is reached for b → ∞, and the strong-multifractality regime D_q → 0 for b → 0. All numerical data presented in this work are taken at the center of the band, E = 0.

B.
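As a concrete illustration of the ensemble defined around Eq. (3), here is a minimal sketch (our own helper, `sample_prbm`, not the authors' code; it assumes the variance profile above) that draws one Hermitian critical PRBM realization:

```python
import numpy as np

def sample_prbm(N, b, seed=None):
    """Draw one N x N Hermitian critical PRBM matrix.

    Diagonal entries are N(0, 1); the real and imaginary parts of the
    off-diagonal entries are N(0, sigma_nm / sqrt(2)), so that the mean
    squared modulus of t_nm follows the profile of Eq. (3).
    """
    rng = np.random.default_rng(seed)
    n = np.arange(N)
    dist = np.abs(n[:, None] - n[None, :])
    sigma = 1.0 / np.sqrt(1.0 + (np.sin(np.pi * dist / N) / (b * np.pi / N)) ** 2)
    re = rng.normal(size=(N, N))
    im = rng.normal(size=(N, N))
    off = sigma * (re + 1j * im) / np.sqrt(2)
    H = np.triu(off, k=1)                    # keep strictly upper triangle
    H = H + H.conj().T                       # enforce Hermiticity
    np.fill_diagonal(H, rng.normal(size=N))  # independent real diagonal
    return H
```

Diagonalizing many such samples at E = 0 and feeding the eigenvectors to a moment fit like the one sketched after Eq. (1) is the standard route to the D_q of this ensemble.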
Ruijsenaars-Schneider model

Let us consider the following deterministic kicked-rotor model [51,52]:

    \hat{H} = \frac{\tau p^2}{2} + V(x) \sum_n \delta(t - n),    (6)

with a 2π-periodic sawtooth potential V(x) = ax for -π < x < π, and where τ is a constant parameter. As a direct consequence of the spatial periodicity of V(x), momenta take only the quantized values p_n = 0, ±1, ±2, ... (here ħ = 1). Additionally, we consider a truncated basis in p space, with periodic boundary conditions, so that the total number of accessible momentum states |p_n⟩ is N. This implies that the position basis is also discretized (the x_k are separated by intervals 2π/N, with k an integer). It is well known that kicked Hamiltonians such as (6) can be mapped onto Anderson models (2) [53,54]. The N quantized plane waves |p_n⟩ then play the role of the lattice-site states |n⟩. The mapping is given (for an eigenvector of the Floquet operator with eigenphase e^{iω}) by

    \varepsilon_n = \tan\!\left(\omega/2 - \tau n^2/4\right),    (7)
    t_{nm} = -\int_{-\pi}^{\pi} \frac{dx}{2\pi} \tan[V(x)/2]\, e^{-ix(m-n)},    (8)

where the on-site energies ε_n take evenly distributed pseudo-random values, provided τ is sufficiently irrational. As a consequence of the Fourier-transform relation in Eq. (8), the discontinuity of the sawtooth potential V(x) creates a long-range decay of the couplings, t_{nm} ∼ 1/|n - m|, and actually induces multifractal eigenstates.

The Ruijsenaars-Schneider (RS) model was introduced in the context of classical mechanics [55-57]. Its quantum properties were studied in [58-60]. It is defined (for an arbitrary real parameter a) by the Floquet operator of the Hamiltonian (6) (with truncated basis in p space)

    U = e^{-i\varphi_p}\, e^{-iax},    (9)

where the deterministic kinetic phase has been replaced by random phases φ_p (consequently the on-site energies ε_n in Eq. (7) are truly uncorrelated), and x is taken modulo 2π [61]. Importantly, unlike for the PRBM, the eigenstate properties of the RS matrix ensemble do not depend on their quasienergy.
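Numerically, one realization of the RS Floquet matrix (9) can be built by applying the kick e^{-iax} in the discretized position basis and rotating it to the momentum basis with a unitary DFT. The sketch below is only one possible discretization convention, not the authors' code; the helper name `rs_floquet` is ours.

```python
import numpy as np

def rs_floquet(N, a, seed=None):
    """One N x N realization of the RS Floquet operator U = e^{-i phi_p} e^{-i a x}."""
    rng = np.random.default_rng(seed)
    kinetic = np.exp(-1j * rng.uniform(0.0, 2.0 * np.pi, N))  # random phases phi_p
    x = 2.0 * np.pi * np.arange(N) / N                        # discretized positions mod 2*pi
    F = np.fft.fft(np.eye(N), axis=0) / np.sqrt(N)            # unitary DFT matrix
    kick = F @ np.diag(np.exp(-1j * a * x)) @ F.conj().T      # e^{-iax} in the momentum basis
    return np.diag(kinetic) @ kick
```

Since both factors are unitary, so is U, and its eigenphases play the role of the quasienergies ω_α.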
In particular it has a flat density of states,

    \rho_{\mathrm{RS}}(E) = \frac{1}{2\pi}.    (10)

The eigenvectors are multifractal; their multifractal dimensions can be derived in certain perturbative regimes, and depend only on the parameter a [62-65]. This parameter allows us to explore the whole range of multifractality regimes: the weak-multifractality regime D_q → 1 is reached for a → 1, and the strong-multifractality regime D_q → 0 for a → 0.

C. Three-dimensional random kicked rotor (3DKR)

The third model is based on the three-dimensional kicked Hamiltonian

    H = \frac{\tau_x p_x^2}{2} + \frac{\tau_y p_y^2}{2} + \frac{\tau_z p_z^2}{2} + V(\mathbf{q}) \sum_n \delta(t - n),    (11)

where the τ_i are constant parameters and the spatial potential reads V(q) = K V(x)V(y)V(z), with K the kick strength and

    \mathcal{V}(x) = \frac{\sqrt{2}}{2}\left[\cos x + \frac{1}{2}\sin 2x\right],    (12)

so that the system breaks time-reversal symmetry [45]. As previously stated, the Hamiltonian (11) can be mapped onto the 3d Anderson model (2). For a given eigenstate of the system with eigenphase e^{iω}, this mapping reads

    \varepsilon_n = \tan\!\left(\omega/2 - \tau_x n_x^2/4 - \tau_y n_y^2/4 - \tau_z n_z^2/4\right),    (13)
    t_{nm} = -\int_{-\pi}^{\pi} \frac{d^3q}{(2\pi)^3} \tan[V(\mathbf{q})/2]\, e^{-i\mathbf{q}\cdot(m-n)},    (14)

where the energies ε_n take pseudo-random values (provided that (τ_x, τ_y, τ_z) are incommensurate), and where the hopping terms t_{nm} decay exponentially fast with the distance |n - m| between sites [66]. The 3d random kicked rotor (3DKR) that we consider in the following corresponds to the Floquet operator of the Hamiltonian (11),

    \hat{U} = e^{-i\phi_p}\, e^{-iV(\mathbf{q})},    (15)

where the deterministic kinetic phases are replaced by uniformly distributed random phases φ_p (this implies in particular that the energies ε_n in Eq. (13) are uncorrelated). The 3DKR can be seen as the Floquet counterpart of the usual 3d unitary Anderson model. In particular, it undergoes an Anderson transition monitored by the parameter K (which is related to the hopping intensity). Using techniques inspired by [66,67], we found that the critical value is K_c ≈ 1.58 (see Appendix A). However, unlike the 3d Anderson model, this unitary counterpart has a flat density of states,

    \rho_{\mathrm{3DKR}}(E) = \frac{1}{2\pi},    (16)

and no mobility edge.
Furthermore, we assume that the 3DKR has the same multifractal dimensions as the corresponding unitary 3d Anderson model, because it belongs to the same universality class. The values determined in [68] (using the same techniques as in [69,70]) are D_1 = 1.912 ± 0.007 and D_2 = 1.165 ± 0.015.

III. GENERAL FRAMEWORK FOR THE STUDY OF CFS IN CRITICAL DISORDERED SYSTEMS

A. Eigenstates and time propagator

In the following, we address CFS in critical disordered systems, analytically and numerically, within a very general framework that includes both Floquet and Hamiltonian cases. The numerical methods are presented in Appendix B. For the sake of clarity, we use a common notation: |φ_α⟩ refers to eigenstates (or Floquet modes) with energy (or quasienergy) ω_α. The time propagator of the system then writes

    \hat{U}(t) = \sum_\alpha e^{-i\omega_\alpha t}\, |\phi_\alpha\rangle\langle\phi_\alpha|,    (17)

where time is considered a continuous variable. In particular, we use the following convention and notation for the temporal Fourier transform:

    f(\omega) = \int_{-\infty}^{\infty} dt\, f(t)\, e^{i\omega t}, \qquad f(t) = \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\, f(\omega)\, e^{-i\omega t}.    (18)

B. Direct and reciprocal spaces

As illustrated by the models introduced above, in generic critical disordered systems the disorder can be present either in position space (e.g. PRBM, Anderson model) or in momentum space (e.g. 3DKR, RS). From now on, we refer to the basis where the disorder is present (labeled by kets |n⟩) as the direct space, and to its Fourier-conjugate basis (labeled by kets |k⟩) as the reciprocal space. This distinction is particularly important because the multifractality of eigenstates is a basis-dependent property that only appears in direct space, where the disorder is present, while CFS is an interference effect taking place in reciprocal space.

Importantly, we choose to use the standard notations of spatially disordered lattice systems, as in Eq. (2). For a d-dimensional system, direct space is spanned by the discrete lattice-site states |n⟩ = |n_1, ..., n_d⟩ with n_i = -N/2 + 1, ..., N/2 (N will be considered even). The dimension of the associated Hilbert space is N^d. Consequently, the reciprocal space is spanned by a basis |k⟩ = |k_1, ..., k_d⟩, where k_i = ±π/N, ±3π/N, ..., ±(N-1)π/N. We also choose the following convention for the change of basis (see Appendix C for details):

    \phi_\alpha(k) = \sum_{n \in\, ]-N/2,\, N/2]^d} \phi_\alpha(n)\, e^{-ik\cdot n},    (19)
    \phi_\alpha(n) = \frac{1}{N^d} \sum_{k \in\, ]-\pi,\, \pi]^d} \phi_\alpha(k)\, e^{ik\cdot n},    (20)

so that in the limit N → ∞ the system tends to an infinite-size discrete lattice, that is,

    \phi_\alpha(k) \xrightarrow[N\to\infty]{} \sum_{n_1=-\infty}^{\infty} \cdots \sum_{n_d=-\infty}^{\infty} \phi_\alpha(n)\, e^{-ik\cdot n},    (21)
    \phi_\alpha(n) \xrightarrow[N\to\infty]{} \int_{-\pi}^{\pi} \frac{d^d k}{(2\pi)^d}\, \phi_\alpha(k)\, e^{ik\cdot n}.    (22)

We stress that for the 3DKR and RS models, direct space is momentum space. For instance, for the RS model the basis |n⟩ corresponds to plane waves with discrete momenta p = n (with ħ = 1), because of the spatial 2π-periodicity of kicked Hamiltonians. Consequently, reciprocal space corresponds to position space, so that |k⟩ corresponds to the discrete positions x_k = ±π/N, ±3π/N, ..., ±(N-1)π/N. The spatial discretization comes from the periodic boundary conditions imposed on the truncated momentum basis, so that the linear system size in direct space is N.

C. Form factor and level compressibility

Previous studies [39-46] found that the CFS dynamics can be related to the form factor. We will show that the same holds in critical disordered systems. We first recall some definitions that will be useful in the forthcoming calculations.

1. Form factor

The form factor is the Fourier transform of the two-point energy correlator; it is usually defined as

    K_N(t) = \frac{1}{N^d} \overline{\sum_{\alpha,\beta} e^{-i\omega_{\alpha\beta} t}},    (23)

with ω_{αβ} = ω_β - ω_α. It can be rewritten as

    K_N(t) = \int dE\, \rho(E)\, K_N(t; E),    (24)

with

    K_N(t; E) = \frac{1}{N^d \rho(E)} \overline{\sum_{\alpha,\beta} e^{-i\omega_{\alpha\beta} t}\, \delta\!\left(E - \frac{\omega_\alpha + \omega_\beta}{2}\right)}.    (25)

The component K_N(t; E) of the form factor can be interpreted as the contribution of all interfering pairs of states whose average energy is E.
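Given the spectrum of one disorder realization, Eq. (23) is straightforward to evaluate; the sketch below (our own helper, not from the paper) computes K_N(t) from a list of eigenphases using the identity Σ_{α,β} e^{-iω_{αβ}t} = |Σ_α e^{-iω_α t}|².

```python
import numpy as np

def form_factor(omegas, times):
    """K_N(t) = (1/N^d) |sum_alpha e^{-i omega_alpha t}|^2  (Eq. (23)),
    for one realization with N^d = len(omegas) levels."""
    omegas = np.asarray(omegas, dtype=float)
    times = np.asarray(times, dtype=float)
    amp = np.exp(-1j * np.outer(times, omegas)).sum(axis=1)  # sum over levels
    return np.abs(amp) ** 2 / omegas.size
```

In practice one still has to average over many disorder realizations (and, for the energy-resolved K_N(t; E) of Eq. (25), restrict to pairs near E) before comparing with the compressibility χ.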
In order to lighten the forthcoming calculations, we introduce the following implicit notation:

    \overline{\sum_{\alpha,\beta} \ldots}_E \equiv \frac{1}{\rho(E)} \overline{\sum_{\alpha,\beta} \delta\!\left(E - \frac{\omega_\alpha + \omega_\beta}{2}\right) \ldots},    (26)
    \overline{f(\omega_\alpha)}_E \equiv \frac{1}{\rho(E)} \overline{\sum_\alpha \delta(E - \omega_\alpha)\, f(\omega_\alpha)},    (27)

so that K_N(t; E) writes

    K_N(t; E) = \frac{1}{N^d} \overline{\sum_{\alpha,\beta} e^{-i\omega_{\alpha\beta} t}}_E.    (28)

2. Compressibility and link to multifractal dimensions

The level compressibility χ is defined as

    \chi = \lim_{t/N^d \to 0} K_N(t; E).    (29)

It is a measure of long-range correlations in the spectrum: it estimates how the variance of the number of states in a given energy window scales with the size of the window. For the usual random-matrix ensembles (GOE, GUE, ...) χ = 0, while for Poisson statistics χ = 1. For critical systems, which have intermediate statistics, the level compressibility lies in between, 0 < χ < 1 [27]. It was proposed that χ could be related to the multifractal dimension D_2 via χ = 1 - D_2/2d [27,71], but it was later observed that this relation fails in the weak-multifractality regime. Another relation was then conjectured [60], relating χ to the information dimension D_1,

    \chi = 1 - \frac{D_1}{d},    (30)

and has since been verified in many different systems [63,72-74] (see also Appendix B). The information dimension D_1 appearing in Eq. (30) is defined through the asymptotic expansion of Eq. (1) in the limit q → 1,

    -\frac{\overline{\sum_{n,\alpha} \delta(E-\omega_\alpha)\, |\phi_\alpha(n)|^2 \ln |\phi_\alpha(n)|^2}}{\overline{\sum_\alpha \delta(E-\omega_\alpha)}} \sim D_1 \ln N,    (31)

and can be seen as the Shannon entropy of the eigenstate distribution |φ_α(n)|².

D. Energy decomposition and contrast definition

CFS is an interference effect that appears when the system is initially prepared in a state localized in reciprocal space, |ψ(t = 0)⟩ = N^{-d/2} |k_0⟩ (our Fourier-transform and normalization conventions are listed in Appendix C). The observable of interest is the disorder-averaged scattering probability in direction k, defined as

    n(k, t) = \frac{1}{N^d} \overline{|\langle k|\hat{U}(t)|k_0\rangle|^2}.
Using (17), it can be expanded over eigenstates as

    n(k, t) = \frac{1}{N^d} \overline{\sum_{\alpha,\beta} e^{-i\omega_{\alpha\beta} t}\, \phi_\alpha^*(k)\phi_\alpha(k_0)\phi_\beta^*(k_0)\phi_\beta(k)}.    (32)

1. Energy decomposition

As previously stated, the multifractal properties of eigenstates may depend on their energy. Following the lines of [44], we rewrite n(k, t) in the following way:

    n(k, t) = \int dE\, \rho(E)\, n(k, t; E),    (33)

where n(k, t; E) is the contribution of all interfering pairs of states whose average energy is E, and is given by (see Eqs. (26)-(27))

    n(k, t; E) = \frac{1}{N^d} \overline{\sum_{\alpha,\beta} e^{-i\omega_{\alpha\beta} t}\, \phi_\alpha^*(k)\phi_\alpha(k_0)\phi_\beta^*(k_0)\phi_\beta(k)}_E.    (34)

2. Classical incoherent background

Coherent scattering effects (such as CFS and CBS) build on top of a classical incoherent diffusive background. This classical incoherent contribution can be described by introducing the disorder-averaged spectral function

    A(k; E) = \frac{1}{N^d} \overline{\sum_\alpha |\phi_\alpha(k)|^2\, \delta(E - \omega_\alpha)}.    (35)

Using the normalization condition Eq. (C14), A(k_0; E) can be interpreted as the probability that the system has energy E when initialized in the plane-wave state |k_0⟩. By the same token, Eq. (C13) shows that A(k; E)/ρ(E) can be interpreted as the distribution in reciprocal space associated with the system residing on the energy shell E (ergodicity). Taking the product of these two probabilities and using Eq. (33), we find that the classical incoherent contribution reads

    n_{\mathrm{class}}(k; E) = \frac{A(k, E)}{\rho(E)}\, \frac{A(k_0, E)}{\rho(E)}.    (36)

This result was derived and numerically checked in [41,42] in the case of random potentials in 1 or 2 dimensions (note that in these works one of the factors ρ(E) in the denominator was absorbed in the definition of the spectral function at energy E). For usual disordered systems such as the Anderson model, the spectral function A(k; E) depends on k, with a width related to the inverse scattering mean free path 1/ℓ_s [42]. However, for kicked systems such as the models (9) and (15), one can show that A(k; E) = ρ(E) [45].
The essence of the argument is that the Fourier transform A(n; t) of (35), in direct space n and time, is given by the disorder-averaged matrix elements of Û^t:

    \overline{\langle m|\hat{U}^t|n\rangle} = \delta_{m,n}\, \delta_{t,0}.    (37)

This result is a consequence of the uniform distribution of the random phases over [0, 2π]. The equality A(k; E) = ρ(E) can be seen as the limit ℓ_s → 0, that is, when ℓ_s becomes smaller than the lattice spacing [41]. Notably, we find that the relation A(k; E) = ρ(E) also holds in the case of the PRBM, where the inverse scattering mean free path is less clearly defined; this is illustrated in Fig. 8 of Appendix B. In fact, for the PRBM the relation is a consequence of the independence of the matrix elements, as we demonstrate analytically in Appendix D. This property, that the spectral function reduces to the density of states, can be understood as a consequence of a "diagonal approximation" central to our work. Starting from (35) and expanding A(k; E) in direct space, we have

    A(k; E) = \frac{1}{N^d} \sum_{n,m} \overline{\sum_\alpha \phi_\alpha(n)\phi_\alpha^*(m)\, \delta(E-\omega_\alpha)}\; e^{ik\cdot(n-m)}.    (38)

The case where the disorder average washes out the off-diagonal terms n ≠ m is usually referred to as the "diagonal approximation". Under that approximation we have

    A(k; E) \approx \frac{1}{N^d} \sum_n \overline{\sum_\alpha |\phi_\alpha(n)|^2\, \delta(E-\omega_\alpha)} = \rho(E).    (39)

The identity A(k; E) = ρ(E) can thus be seen as resulting from the absence of correlations between the norm and the phase of the eigenstates in direct space, so that only the terms where the phase factors cancel (i.e. the diagonal elements) survive the disorder average. This is corroborated by a direct numerical computation of these correlations for the RS model (see Appendix E), as well as by the analytical derivation of Appendix D in the PRBM case. We thus expect the diagonal approximation used in this article to hold in many critical systems, as long as there are no correlations in the disorder that might induce correlations between norm and phase in direct space. The classical contribution Eq. (36) then simply reduces to a k-independent and E-independent flat background, n_class(k; E) = 1.

3. Contrast

The CFS and CBS peaks emerge from this classical background. Following the lines of [44], we introduce the CFS contrast Λ_N(k, t; E) as the interference pattern relative to the classical background at a given energy. In the diagonal approximation discussed above, it simply reads

    \Lambda_N(k, t; E) = n(k, t; E) - 1.    (40)

IV. UNIVERSAL PREDICTIONS FOR CFS DYNAMICS

In this Section we explain the main hypotheses of our approach and derive a simple expression for the CFS contrast. We then discuss the existence of two distinct dynamical regimes, one corresponding to the long-time limit of finite-size systems, the other to infinite-size systems, and describe the CFS contrast in both.

A. General predictions

1. Extended diagonal approximation

First, we take the temporal Fourier transform (18) of the CFS contrast given by (34) and (40), and expand it in direct space. This gives

    \Lambda_N(k, \omega; E) = \frac{2\pi}{N^d} \sum_{n_1,n_2,n_3,n_4} C(\omega; E)\, e^{ik\cdot(n_1-n_4) - ik_0\cdot(n_2-n_3)} - 2\pi\delta(\omega),    (41)

with

    C(\omega; E) = \overline{\sum_{\alpha,\beta} \delta(\omega - \omega_{\alpha\beta})\, \phi_\alpha^*(n_1)\phi_\alpha(n_2)\phi_\beta^*(n_3)\phi_\beta(n_4)}_E.    (42)

Following the idea of the "diagonal approximation" used to derive Eq.
(39), we claim that the correlation functions C(ω; E) should generically vanish (or become negligible) upon disorder averaging unless they are of one of the two following kinds: (i) tuples with n_1 = n_2 and n_3 = n_4, which give a real positive contribution, and (ii) tuples with n_1 = n_4 ≡ n and n_2 = n_3 ≡ m, whose temporal Fourier transform is the average transfer probability (at a given energy E) between |n⟩ and |m⟩ in direct space, namely

    \overline{|\langle n|\hat{U}(t)|m\rangle|^2}_E = \overline{\sum_{\alpha,\beta} e^{-i\omega_{\alpha\beta} t}\, \phi_\alpha^*(n)\phi_\alpha(m)\phi_\beta^*(m)\phi_\beta(n)}_E.    (43)

2. Compact approximate expression for the contrast

Keeping only these non-vanishing contributions (and taking care of the double counting of the tuples n_1 = n_2 = n_3 = n_4), the CFS contrast can be approximated by

    \Lambda_N(k, \omega; E) = \Lambda^{(1)} + \Lambda^{(2)} - 2\pi\delta(\omega),    (44)

where the first term corresponds to the contribution n_1 = n_2 and n_3 = n_4,

    \Lambda^{(1)} = \frac{2\pi}{N^d} \sum_{n \neq m} \overline{\sum_{\alpha,\beta} \delta(\omega - \omega_{\alpha\beta})\, |\phi_\alpha(n)|^2 |\phi_\beta(m)|^2}_E\; e^{i(k-k_0)\cdot(n-m)},    (45)

and the second term comes from the contribution n_1 = n_4 and n_2 = n_3,

    \Lambda^{(2)} = \frac{2\pi}{N^d} \overline{\sum_{\alpha,\beta} \delta(\omega - \omega_{\alpha\beta})\, \delta_{\alpha\beta}}_E = 2\pi\delta(\omega).    (46)

In (46), the Kronecker delta δ_{αβ} appears because of eigenstate orthonormalization, and the simplification arises from Eq. (26), using the definition (4) of the density of states. The second term Λ^{(2)} thus exactly compensates the Dirac delta in (44). The CFS contrast reduces to Λ^{(1)}, and is finally given by the compact expression

    \Lambda_N(k, \omega; E) = 2\pi \sum_{n \neq 0} \overline{\sum_{\alpha,\beta} \delta(\omega - \omega_{\alpha\beta})\, |\phi_\alpha(n_0)|^2 |\phi_\beta(n_0+n)|^2}_{E,n_0}\; e^{-in\cdot(k-k_0)},    (47)

or equivalently

    \Lambda_N(k, t; E) = \sum_{n \neq 0} \overline{\sum_{\alpha,\beta} e^{-i\omega_{\alpha\beta} t}\, |\phi_\alpha(n_0)|^2 |\phi_\beta(n_0+n)|^2}_{E,n_0}\; e^{-in\cdot(k-k_0)},    (48)

where the disorder average (...)_{n_0} additionally runs over the different sites n_0. At the peak k = k_0, the expression for the CFS contrast further simplifies.
Adding and subtracting the n = 0 contribution to the sum in (48) and using the normalization of the wavefunctions, we get

    \Lambda_N(k_0, t; E) = K_N(t; E) - \overline{|\langle n_0|\hat{U}(t)|n_0\rangle|^2}_{E,n_0},    (49)

where the first term is the form factor, given by Eq. (28), and the second term is the return probability in direct space at energy E, see Eq. (43).

3. Relevant time scale

It has been shown (see e.g. [42]) that the relevant time scale for the CFS dynamics is the Heisenberg time τ_H = 2π/Δ, where Δ is the mean level spacing. More precisely, the mean level spacing corresponds to the spacing in the confining volume, which is the localization volume in the presence of localization, or the system volume if the system is delocalized. In critical disordered media, wavefunctions are delocalized (but nonergodic); the mean level spacing is Δ = 1/(N^d ρ(E)), which depends on the system size, and thus

    \tau_H = 2\pi N^d \rho(E).    (50)

This defines two distinct regimes for the CFS, with specific properties, which we explore in turn in the next two subsections: (i) when t ≪ τ_H, CFS originates from the nonergodicity of the eigenstates; (ii) when t ≫ τ_H, CFS is caused by the boundaries of the system. Regime (i) is relevant in the limit of infinite size, which corresponds to the regime numerically explored in [44] in the 3d Anderson model. There it was found that at the AT the height of the CFS peak reaches a stationary value, conjectured to be the compressibility χ = 1 - D_1/d. Regime (ii) corresponds to the long-time limit of a finite-size system. In the finite-size case, waves travel many times across the entire system until they resolve the discreteness of the energy levels. The shape and height of the CFS peak then explicitly depend on the system size N (see Section IV B). When N goes to infinity, the CFS still manifests itself at small times, due to the nonergodicity of the eigenstates (see Section IV C).
This is to be contrasted with the localized regime of the Anderson transition, where the behavior differs depending on whether the localization length is smaller or larger than the system size.

B. Long-time limit

1. CFS peak shape

We now discuss the long-time limit in finite-size systems, i.e. the regime t ≫ τ_H, t → ∞ at fixed system size N. The contrast defined by (34) and (40) is then determined only by the diagonal terms ω_{αβ} = 0 (the only ones that survive the long-time limit), so that it is given by

    \Lambda_N(k, t \to \infty; E) = \frac{1}{N^d}\, \overline{|\phi_\alpha(k)|^2 |\phi_\alpha(k_0)|^2}_E - 1.    (51)

On the other hand, using the same argument, the approximate expression (48) can be rewritten as

    \Lambda_N(k, t \to \infty; E) = \sum_{n \neq 0} \overline{|\phi_\alpha(n_0)|^2 |\phi_\alpha(n_0+n)|^2}_{E,n_0}\; e^{-in\cdot(k-k_0)}.    (52)

This expression can be seen as the spatial Fourier transform of the two-point correlator in direct space. For a function which is multifractal in direct space, the correlator has the asymptotic behavior [18]

    N^d\, \overline{|\phi_\alpha(n_0)|^2 |\phi_\alpha(n_0+n)|^2}_{E,n_0} \sim \left(\frac{N}{n}\right)^{d-D_2}.    (53)

It implies that the shape of the CFS contrast in the long-time limit can be approximated (up to a prefactor) by

    \frac{\Lambda_N(k, t \to \infty; E)}{N^{-D_2}} \sim \sum_{|n| \geq 1} \frac{\cos[n\cdot(k-k_0)]}{|n|^{d-D_2}}.    (54)

The right-hand side depends only on k and D_2, and becomes N-independent for N sufficiently large. The behavior (54) is confirmed by the numerical simulations displayed in Fig. 2, which show that all the curves N^{D_2} Λ(k, t) collapse onto the predicted expression. We note however a strong discrepancy when k → k_0 in the insets of Fig. 2. It comes from the existence of a large-distance cutoff of the scaling law (53), roughly given by the system size N. As a consequence, (54) fails to describe the CFS distribution on scales smaller than |δk| ∼ 2π/N. In the specific case of the RS model with a → 0, we also note the appearance of an anti-CBS peak (see Figs.
2b and 5b) that comes from a nontrivial asymptotic symmetry of the system and is not relevant in the general case (it is not present in PRBM and 3DKR). We give a more detailed account of this specificity in Sec. V D.

CFS height

Although (54) fails to describe the CFS distribution at k = k_0, it is possible to circumvent this limitation by starting again from (52) and rewriting it for k = k_0 as

Λ_N(k_0, t → ∞; E) = Σ_n ⟨|φ_α(n_0)|² |φ_α(n_0+n)|²⟩_{E,n_0} − ⟨|φ_α(n_0)|⁴⟩_{E,n_0}.  (55)

The first term is equal to 1 by eigenstate normalization. The second term is nothing but the inverse participation ratio (up to a factor N). This gives the scaling law

1 − Λ_N(k_0, t → ∞; E) ∼ N^{−D_2}.  (56)

Note that this result could alternatively be obtained from Eq. (49) in the limit t → ∞. Indeed, at large t the form factor goes to 1, while the return probability behaves as N^{−D_2} [27]. The scaling (56) is illustrated in Fig. 3 for the three models investigated here. This shows that the long-time behavior of the CFS peak allows us to extract the multifractal dimension D_2.

C. Limit of infinite system size

We now discuss the CFS contrast dynamics in the limit N → ∞, at fixed time t ≪ τ_H. In this regime, as we will see below, CFS arises from the nonergodicity of the eigenstates, and it no longer depends on N. At the peak, the contrast is given by Eq. (49). In the limit t ≪ τ_H, the spectral form factor goes to the compressibility χ, while the return probability follows a temporal power-law decay related to the multifractal dimension D_2 [27,75],

⟨|⟨n_0|Û(t)|n_0⟩|²⟩_{E,n_0} ∼ t^{−D_2/d}.  (57)

The height of the CFS peak is then finally given by

Λ_{N→∞}(k_0, t; E) = χ − α t^{−D_2/d},  (58)

where α is a constant that may depend on E (but not on N and t). If we assume that the relation (30) between compressibility and information dimension holds, then measuring the time dependence of the peak height at small times gives access to D_1.
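The relation (55) between the long-time peak height and the inverse participation ratio can be checked on toy states. The sketch below uses two hand-crafted states of our own (a uniform, ergodic-like state and a fully localized one; neither is an eigenstate of the models studied here) to verify the two limiting values of the peak height.

```python
import numpy as np

def long_time_peak_height(phi):
    """1 - sum_n |phi(n)|^4 for a single normalized state phi, cf. Eq. (55):
    the long-time CFS peak height is one minus the inverse participation
    ratio (the paper additionally averages over eigenstates at energy E)."""
    p = np.abs(phi) ** 2
    return 1.0 - np.sum(p ** 2)

N = 1000
ergodic = np.ones(N) / np.sqrt(N)             # uniform state: IPR = 1/N
localized = np.zeros(N); localized[0] = 1.0   # single-site state: IPR = 1
```

For a multifractal state the IPR scales as N^{-D_2}, so this quantity directly reproduces the finite-size scaling (56); the uniform state gives a height 1 − 1/N, the localized state a vanishing height.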
This is illustrated in Fig. 4 (left panels), where the contrast is plotted as a function of time for the three models discussed here. A proper rescaling of the curves allows one to extract D_1 as the constant small-time behavior of the CFS contrast.

Dynamics of the CFS contrast shape

We now discuss more generally the dynamics of the CFS contrast shape. To do so, we use the fact that the two following correlation functions behave in the same way,

⟨Σ_{α,β} δ(ω − ω_αβ) φ_α(n) φ*_α(m) φ_β(m) φ*_β(n)⟩_E ∼ γ ⟨Σ_{α,β} δ(ω − ω_αβ) |φ_α(n)|² |φ_β(m)|²⟩_E,  (59)

with γ some constant (see e.g. Eq. 2.32 of [18]). As a consequence, the CFS contrast (48) can be rewritten as

Λ_N(k, t; E) = γ Σ_n ⟨|⟨n_0|Û(t)|n_0+n⟩|²⟩_{E,n_0} e^{−in·(k−k_0)} − ⟨|⟨n_0|Û(t)|n_0⟩|²⟩_{E,n_0}.  (60)

In the case k = k_0, it is easy to check that (60) reduces to

Λ_N(k_0, t; E) = γ − ⟨|⟨n_0|Û(t)|n_0⟩|²⟩_{E,n_0}.  (61)

This expression coincides with (49) at small t provided γ = χ, since the form factor goes to χ for t → 0. Again, the second term in the above expression is the return probability. The first term in (60) is the spatial Fourier transform of the propagator between two sites in direct space. This quantity is well known and has been studied in the past, as it plays an important role in the study of anomalous diffusion in direct space at the transition [16,28,76,77]. Provided k < 1/l_s (with l_s the mean free path, l_s ∼ 1 in our models), it is a function f(q) of q = |k−k_0| t^{1/d} only, which goes to a constant at small argument. In our case, in view of (61), that constant is equal to χ, and thus

f(q) ≈ χ for q ≪ 1,  f(q) ∼ q^{−D_2} for q ≫ 1.  (62)

The CFS contrast (60) finally reads

Λ_{N→∞}(k, t; E) = f(|k − k_0| t^{1/d}) − α t^{−D_2/d},  (63)

where α is the same constant as in Eq. (58). In Fig. 4, we test these theoretical predictions by comparing them to the numerical data of the three models considered. The left panels represent the temporal dynamics of the CFS contrast at k_0.
We clearly observe the convergence towards the compressibility χ = 1 − D_1/d as time increases, the finite-time effects being controlled by D_2, whatever the model and the more or less strong multifractality considered. This confirms Eqs. (62) and (63) for k = k_0. In the right panels, we represent the spatial dependence of the CFS peak at different times. The curves at different times clearly collapse onto each other when they are represented as a function of q, which confirms the scaling law Eq. (63). Also, the shape of the scaling function f is in perfect agreement with Eq. (62).

V. PERTURBATION THEORY IN THE LONG-TIME LIMIT AND IN THE STRONG MULTIFRACTAL REGIME

A. Perturbation theory

In this Section we use perturbation theory to derive analytic expressions for the contrast at infinite time in the strong multifractality regime (D_q → 0) of the PRBM and RS models (respectively b → 0 and a → 0). First, we recall that in the long-time limit t ≫ τ_H the CFS contrast Eq. (51) reads

Λ_N(k, t → ∞; E) = I(E)/ρ(E) − 1,  (64)

with

I(E) = (1/N) ⟨Σ_α |φ_α(k)|² |φ_α(k_0)|² δ(E − ω_α)⟩.  (65)

In the following we will find a perturbative expansion of this quantity I(E) as

I(E) = I^(0)(E) + I^(1)(E) + …  (66)

To do so, we use a perturbative approach based on the Levitov renormalization-group technique [78]. The idea is that in the strong multifractality regime, the Hamiltonian or Floquet operator M̂ is almost diagonal in direct space, and the off-diagonal entries M_nm = ⟨n|M̂|m⟩ can be treated as a perturbation. At order zero, the operator is diagonal in direct space, with eigenvectors given by the canonical basis vectors |n⟩ with energy E_n = M_nn. This gives

I^(0)(E) = (1/N) ⟨Σ_n |⟨k|n⟩|² |⟨k_0|n⟩|² δ(E − E_n)⟩,  (67)

where the average runs over different disorder realizations of the diagonal entries M_nn. Using |⟨k|n⟩|² = 1 (see Appendix C), we directly get I^(0)(E) = ρ(E): at order 0 the CFS contrast vanishes.
At next order, the main contribution originates from resonant interactions between pairs of unperturbed states (|m⟩, |n⟩). They occur when |H_mm − H_nn| is of the order of |H_mn|. The corresponding 2×2 submatrices have two eigenvectors |φ^μ_mn⟩ labelled by μ = ±1, with energies E^μ_mn. The corresponding contribution reads

I^(1) = (1/N) ⟨Σ_{m<n} Σ_{μ=±} |⟨k|φ^μ_mn⟩|² |⟨k_0|φ^μ_mn⟩|² δ(E − E^μ_mn)⟩,  (68)

where different realizations of the random entries M_nm lead to different pairs (|m⟩, |n⟩) effectively contributing, so that one needs to sum over all of them. The first-order contribution depends on the model we consider. We give a full account of the PRBM case. We only give the main results for the RS model, since it essentially follows the same lines and was already partially discussed in [46]. For the PRBM model, the operator M̂ of interest is the tight-binding Hamiltonian Ĥ defined in Sec. II A, and the 2×2 submatrices of H_nm contributing at first order to Eq. (68) can be parametrized as

( H_mm   H_mn )   ( ε + Δ      r e^{iξ} )
( H*_mn  H_nn ) = ( r e^{−iξ}  ε − Δ  ).  (69)

The average in Eq. (68) now runs over disorder realizations of the parameters ε, Δ, r and ξ. As explained in Sec. II A, the entries H_mm and H_nn of the PRBM model are independent random real numbers with Gaussian distribution of variance 1. The off-diagonal entries H_mn are complex random numbers, whose real and imaginary parts are independent with Gaussian distribution of variance σ²_nm/2, with σ_nm given by (3). This means that ε = (M_mm + M_nn)/2 and Δ = (M_mm − M_nn)/2 in (69) both have Gaussian distributions with variance 1/2, while ξ is uniformly distributed in [0, 2π] and r = |M_nm| ∈ [0, ∞) is distributed with PDF

f_T(r) = (2r/σ²_mn) exp(−r²/σ²_mn).  (70)

The eigenvectors |φ^μ_mn⟩ with energy E^μ_mn of the submatrices (69) can be expressed as

|φ^+_mn⟩ = cos θ |m⟩ + e^{−iξ} sin θ |n⟩,  (71)
|φ^−_mn⟩ = −e^{iξ} sin θ |m⟩ + cos θ |n⟩,  (72)

where the angle θ is defined by

tan θ = −Δ/r + √(1 + Δ²/r²).  (73)

The corresponding energies are

E^μ_mn = ε + μ √(r² + Δ²).  (74)

The quantity of interest |⟨k|φ^μ_mn⟩|² then reads

|⟨k|φ^μ_mn⟩|² = 1 + μ cos φ_k sin 2θ,  (75)

with φ_k = (m − n)k − ξ.
Performing the full calculation shows that the 1 in this expression reproduces the zeroth-order contribution (as can be anticipated by comparing it with the order-0 expression). The order-1 contribution (68) then reads

I^(1) = Σ_{m<n} (1/N) ⟨cos φ_k cos φ_k0 sin² 2θ Σ_{μ=±1} δ(E − E^μ_mn)⟩.  (76)

Only φ_k and φ_k0 depend on ξ; averaging over it leads to

I^(1)(E) = Σ_{m<n} (1/N) cos([m − n][k − k_0]) A^PRBM_mn(E),  (77)

with

A^PRBM_mn(E) = (1/2) ⟨sin² 2θ Σ_{μ=±1} δ(E − E^μ_mn)⟩.  (78)

The above expression depends on m and n only via the parameter r, distributed according to Eq. (70). In particular, (78) only depends on the difference |m − n|. Moreover, in the periodic PRBM model we are considering, the pair (m, N − n) gives the same contribution as the pair (m, n) in Eq. (77) (the average (78) is taken over the same random realizations of the parameters r, ε and Δ for both pairs). As a consequence, the contrast up to order 1 reads

Λ_N(k, t → ∞; E) = Σ_{n=1}^{N/2} [A^PRBM_{n_0,n_0+n}(E)/ρ(E)] cos(n[k − k_0]).  (79)

We now find an explicit expression for A^PRBM_nm(E). To do so, we use the fact that sin² 2θ = r²/(r² + Δ²) and perform the remaining averages over ε, Δ and r in Eq. (78). This gives

A^PRBM_nm(E) = ∫_{−∞}^{∞} (dΔ/√π) e^{−Δ²} ∫_{−∞}^{∞} (dε/√π) e^{−ε²} ∫_0^{∞} dr (2r/σ²_mn) e^{−r²/σ²_mn} [r²/(2(r² + Δ²))] Σ_{μ=±} δ(E − ε − μ√(r² + Δ²)).  (80)

For E = 0, the integral (80) can be calculated explicitly, and for b → 0 (where σ_mn ≈ b(π/N)/sin(π|n−m|/N) ≪ 1 for m ≠ n) it gives at lowest order

A^PRBM_mn(E = 0)/ρ(E = 0) = (π/√2) σ_mn + …  (81)

(we used the fact that ρ(E) is given by Eq. (5) for b ≪ 1). Finally, we find

Λ_N(k, t → ∞, E = 0) = (bπ/√2) Σ_{n=1}^{N/2} (π/N) cos(n[k − k_0]) / sin(πn/N).  (82)

This result is checked against numerics in Fig. 5 (top); the agreement is remarkable.

Asymptotic behavior of the peak height

At k = k_0, the contrast behaves following Eq. (56). In the regime of small parameter b, an expansion of the multifractal dimension D_2 was obtained in [50], using the same perturbative approach as above.
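The first-order result (82) and its logarithmic growth at k = k_0 are easy to check numerically. The sketch below is a direct transcription of Eq. (82) (the value b = 0.001 matches Fig. 5; the comparison of two system sizes tests the ln N asymptotics that follows from Eq. (84)):

```python
import numpy as np

def prbm_contrast(k, k0=0.0, b=0.001, N=16384):
    """First-order CFS contrast of the PRBM model, Eq. (82), valid for b << 1."""
    n = np.arange(1, N // 2 + 1)
    return (b * np.pi / np.sqrt(2)) * np.sum(
        (np.pi / N) * np.cos(n * (k - k0)) / np.sin(np.pi * n / N))

# Peak value at k = k0 for two system sizes: the peak grows
# logarithmically with N, with slope b*pi/sqrt(2) per ln N
peak_1 = prbm_contrast(0.0, N=8192)
peak_2 = prbm_contrast(0.0, N=16384)
slope = (peak_2 - peak_1) / np.log(2)
```

The extracted slope converges to bπ/√2, i.e. to the perturbative value of D_2 quoted just below, which is the announced consistency between Eqs. (82) and (83).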
At first order it reads D_2 = bπ/√2. From Eq. (56) we get for b ≪ 1

Λ_N(k_0, t → ∞, E = 0) ≈ 1 − N^{−bπ/√2} ∼ (bπ/√2) ln N.  (83)

This expression coincides with the leading term of Eq. (82). Indeed, in the sum

Σ_{n=1}^{N/2} (π/N)/sin(πn/N) = (π/N) Σ_{n=1}^{N/2} [1/sin(πn/N) − 1/(πn/N)] + Σ_{n=1}^{N/2} 1/n,  (84)

the first term is a Riemann sum that converges to the finite value ln(4/π), while the second term behaves asymptotically as ∼ ln N. Thus Eq. (82) at k = k_0 entails the asymptotic behavior Eq. (83) with the correct prefactor. This provides a check of Eq. (56) in the perturbative regime.

Expansion of the two-point correlator in direct space

The comparison of Eq. (79) with the universal analytical expression Eq. (52) suggests that A^PRBM_nm(E) is equal, up to order 1, to the two-point correlation function in direct space, that is,

B_nm(E) = ⟨Σ_α |φ_α(n)|² |φ_α(m)|² δ(E − E_α)⟩.  (85)

This can be shown directly as follows. As previously, we expand B_nm(E) as

B_nm(E) = B^(0)_nm(E) + B^(1)_nm(E) + …  (86)

Expression (85) at order 0 gives

B^(0)_nm(E) = ⟨Σ_l |⟨n|l⟩|² |⟨m|l⟩|² δ(E − E_l)⟩,  (87)

which vanishes for n ≠ m. At order 1, using the eigenstates (71)-(72), we find

B^(1)_nm(E) = ⟨Σ_{l<p} 2 sin²θ cos²θ δ_nl δ_mp Σ_{μ=±} δ(E − E^μ_lp)⟩ = (1/2) ⟨sin² 2θ Σ_{μ=±} δ(E − E^μ_mn)⟩.  (88)

This proves that A^PRBM_nm(E) = B_nm(E) up to order 1. In particular, Eq. (79) becomes

Λ_N(k, t → ∞; E) = Σ_{n=1}^{N/2} ⟨|φ_α(n_0)|² |φ_α(n_0+n)|²⟩_E cos(n[k − k_0]),  (89)

which is exactly the universal analytical expression Eq. (52).

C. RS model

We now apply the same method to determine the first-order contribution I^(1)(E) for the RS model, which is unitary. We give the key points and main results; the interested reader may refer to the supplementary material of [46], where more details are given. The operator M̂ of interest for the RS model is defined as M_nm = U_nm e^{−iπa(1−1/N)}, where Û is the Floquet operator (9).
This transformation only shifts the eigenvalues of Û and has no physical consequences (in particular, the multifractal dimensions remain unchanged). In the strong multifractal regime a ≪ 1, the operator M̂ in direct space reads

M_nm ≈ e^{iφ_n} δ_nm − (2iπa/N) e^{iφ_n} (1 − δ_nm)/(1 − e^{2πi(n−m)/N}).  (90)

The term of order 0 is diagonal. At order 1, the 2×2 submatrices contributing to Eq. (68) read

( M_mm  M_mn )   ( e^{iφ_m}        h e^{i(φ_m+ξ)} )
( M_nm  M_nn ) = ( h e^{i(φ_n−ξ)}  e^{iφ_n}       ),  (91)

with

h = (aπ/N)/sin((m−n)π/N)  and  ξ = π(m − n)/N.  (92)

These submatrices only depend on two independent random parameters φ_m and φ_n, while the off-diagonal amplitudes h are deterministic, unlike in the PRBM case. As previously, it is more convenient to introduce the random variables Δ = (φ_m − φ_n)/2 and ε = (φ_m + φ_n)/2. Following the same lines, we find that the first-order contribution can be written as

I^(1)(E) = Σ_{n=1}^{N/2} A^RS_{n_0,n_0+n}(E) cos(n[k − k_0]) − Σ_{n=1}^{N/2} A^RS_{n_0,n_0+n}(E) cos(n[k + k_0 + 2π/N]),  (93)

where

A^RS_{m,n}(E)/ρ(E) = ∫_{−π/2}^{π/2} (dΔ/π) h²/(h² + sin²Δ)  (94)

does not depend on E. In the limit a → 0 it gives

A^RS_{m,n}(E)/ρ(E) ≈ h − h³/2 + …,  (95)

so that finally

Λ(k, t → ∞, E) = a Σ_{n=1}^{N/2} [(π/N) cos(n[k − k_0])/sin(πn/N) − (π/N) cos(n[k + k_0 + 2π/N])/sin(πn/N)],  (96)

which is independent of E. The first term describes the CFS peak and is similar to the PRBM result (82). The second term describes an anti-CBS peak, which we discuss in Section V D 3 below. As before, one can show that A^RS_{m,n}(E) is nothing but the two-point correlation function in direct space (for n ≠ m) up to order 1 of perturbation theory, that is,

A^RS_{m,n}(E) ≈ ⟨Σ_α |φ_α(n)|² |φ_α(m)|² δ(E − E_α)⟩.  (97)

D. Comparison with numerics and universal predictions

Comparison with numerics

In Fig. 5 we display the results of our perturbation-theory calculations for PRBM at E = 0 and for RS. Both reproduce the numerics very accurately in the strong multifractality limit.
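The pair-average integral (94) can be checked by direct quadrature. The sketch below evaluates it numerically and compares it with the closed form h/√(1+h²), which is our own evaluation of the integral and is consistent with the small-h expansion (95):

```python
import numpy as np

def rs_pair_average(h, num=200001):
    """Right-hand side of Eq. (94): (1/pi) times the integral over
    Delta in [-pi/2, pi/2] of h^2 / (h^2 + sin^2 Delta),
    computed with a trapezoidal rule."""
    delta = np.linspace(-np.pi / 2, np.pi / 2, num)
    f = h ** 2 / (h ** 2 + np.sin(delta) ** 2)
    dx = delta[1] - delta[0]
    return dx * (np.sum(f) - 0.5 * (f[0] + f[-1])) / np.pi
```

For small h the result reduces to h − h³/2, which is exactly the expansion (95) feeding the final contrast (96).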
Comparison with universal predictions

Leaving aside the anti-CBS peak contribution in the RS model for now, we see from Fig. 5 that for both PRBM and RS the CFS contrast in the long-time limit fully corroborates the universal analytical expression Eq. (52) (after pairing the contributions n and −n), that is,

Λ(k, t → ∞, E) = Σ_{n=1}^{N/2} ⟨|φ_α(n_0)|² |φ_α(n_0+n)|²⟩_{E,n_0} cos(n[k − k_0]).  (98)

Actually, at first order of perturbation theory these two models even have the same expression around the CFS peak,

Λ(k, t → ∞, E) ∼ Σ_{n=1}^{N/2} (π/N) cos(n[k − k_0])/sin(πn/N),  (99)

and only the prefactor differs. This is to be expected, since the off-diagonal terms of PRBM (r in (69)) and RS (h in (91)) behave in the same way, namely ∼ (π/N)/sin(π|n − m|/N).

Anti-CBS peak in the RS model at small a

Let us now come back to the anti-CBS peak in the RS model. We see in Fig. 5 that this anti-peak is well captured by the perturbative expansion (96), while it is not present in the universal analytical prediction Eq. (54). However, we can adopt a phenomenological point of view and adapt the universal prediction: in order to take into account the anti-CBS peak, we propose

Λ(k, t → ∞, E) = A Σ_{n=1}^{N/2} cos(n[k − k_0])/n + B Σ_{n=1}^{N/2} cos(n[k + k_0 + 2π/N])/n,  (100)

where A and B are two fitting parameters. We then recover a very good agreement with the numerical data (see Fig. 5b). This suggests that our approach missed some nonvanishing contributions, probably due to a hidden symmetry inducing phase correlations of the eigenstates in direct space. This idea is corroborated by the observation (both from numerical data, not shown, and from perturbation theory) of an asymptotic symmetry satisfied by every single eigenstate in the perturbative regime, |φ_α(k)|² + |φ_α(−k)|² ≈ 2. We will not dwell further on this peculiarity in the present work.

VI. SUMMARY AND CONCLUSION

We have studied CFS in critical disordered systems with multifractal eigenstates.
We demonstrated that there exist two distinct dynamical regimes:

(i) When t ≪ τ_H, the CFS arises from the nonergodicity of the eigenstates. This regime corresponds to infinite system size and is relevant for most experimental situations. We recovered and demonstrated the numerical conjecture of [44] in the same limit: the CFS peak height asymptotically goes to χ = 1 − D_1/d. We found that the CFS peak height actually reaches χ following a temporal power law related to the multifractal dimension D_2 (see Eq. (58)), and we gave a full description of the shape of the CFS peak: it narrows with time, and the tail of the distribution decays with a power law related to D_2 (see Eq. (63)).

(ii) When t ≫ τ_H, the CFS is caused by the system boundaries. The height of the CFS peak goes to 1 with a finite-size correction related to the multifractal dimension D_2, the CFS contrast decays as N^{−D_2} elsewhere, and the shape of the distribution is given by a system-size-independent function (see Eq. (54)).

All our universal analytical predictions are verified very accurately on three critical disordered systems (PRBM, RS, 3DKR), in both the strong and weak multifractal regimes. Moreover, for the PRBM and RS models in the strong multifractality regime, we find that our universal predictions in regime (ii) are exact at first order of perturbation theory. These results, in particular (i), should be within reach of experiments such as [38]. This opens the way to the first direct observation of a dynamical manifestation of multifractality in a critical disordered system.

Appendix B

1. PRBM

a. Energy filtering procedure

In order to evaluate n(k, t; E) defined in Eq. (34), we use a filtering technique introduced in [44]. Let E_0 be the targeted energy; the idea is to replace the initial state |ψ_0⟩ = |k_0⟩/√N by a Gaussian-filtered plane wave around E_0,

|ψ_0⟩ = (σ²π)^{−1/4} exp[−(E_0 − Ĥ)²/(2σ²)] |k_0⟩/√N,  (B1)

where σ is the width of the energy filter.
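The filter (B1) is straightforward to apply through an eigendecomposition. The sketch below does so on a GOE-like surrogate Hamiltonian (an assumption made for illustration; it is not the PRBM Hamiltonian) and omits the normalization prefactor, which, as noted below for Eq. (B8), drops out of the contrast:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 128
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)     # surrogate Hamiltonian (assumption)
w, V = np.linalg.eigh(H)           # eigenvalues w[a], eigenvectors V[:, a]

# Plane wave with unit-modulus components (arbitrary momentum index 3)
k0_state = np.exp(2j * np.pi * 3 * np.arange(N) / N) / np.sqrt(N)

def filtered_state(E0, sigma):
    """Apply exp(-(E0 - H)^2 / (2 sigma^2)) to the plane wave, cf. Eq. (B1),
    by rescaling its components in the eigenbasis of H."""
    c = V.conj().T @ k0_state                       # expansion on eigenstates
    c = c * np.exp(-(E0 - w) ** 2 / (2 * sigma ** 2))
    return V @ c

psi = filtered_state(0.0, 0.1)
weights = np.abs(V.conj().T @ psi) ** 2             # spectral weights after filtering
```

A wide filter leaves the plane wave essentially unchanged, while a narrow one concentrates the spectral weight in a window of width ∼ σ around E_0, which is the behavior exploited in Eqs. (B2)-(B4).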
The filtered scattering probability can be written as

n_fil(k, t; E_0) = (1/N) ⟨Σ_{α,β} e^{−iω_αβ t} (1/(σ√π)) exp[−(E_0 − ω_α)²/(2σ²)] exp[−(E_0 − ω_β)²/(2σ²)] φ_α(k) φ_α(k_0) φ_β(k_0) φ_β(k)⟩  (B2)
= (1/N) ∫dE ∫dω e^{−iωt} (1/(σ√π)) exp[−(E_0 − E)²/σ²] ⟨Σ_{α,β} δ(ω − ω_αβ) exp[−ω²/(4σ²)] φ_α(k) φ_α(k_0) φ_β(k_0) φ_β(k)⟩.  (B3)

We see that n_fil(k, t; E_0)/ρ(E_0) is not much different from n(k, t; E_0) in Eq. (34), provided σ is sufficiently small (compared with the DOS variation), because

lim_{σ→0} (1/(σ√π)) exp[−(E_0 − E)²/σ²] = δ(E − E_0).  (B4)

One noticeable difference, however, is the term exp[−ω²/(4σ²)], which acts as a high-energy cut-off in the filtered dynamics. Consequently, n_fil(k, t; E_0) is coarse-grained over a time scale ∼ 1/σ. In particular, simulating times shorter than 1/σ is not relevant. In practice, eigenstate properties can be considered roughly constant in an energy window where the DOS (5) does not vary much. We choose

σ = (1/8) max(√(πb), 1).  (B5)

For the values presented in the article (b = 0.05, 0.1, 0.3), the corresponding time scale 1/σ is of the order of 10. Note that the data presented in Fig. 4 are additionally averaged over a timescale Δt ≥ 1/(bσ) for the sake of clarity. The classical contribution, with filtered initial state, reads

n_class,fil(k; E_0) = ∫dE ρ(E) (1/(σ√π)) exp[−(E_0 − E)²/σ²] [A(k, E)/ρ(E)] [A(k_0, E)/ρ(E)].  (B6)

Again, we see that n_class,fil(k; E_0)/ρ(E_0) is not much different from the classical contribution in Eq. (36), provided that σ is sufficiently small (compared with the DOS variation). Under the diagonal approximation (A(k, E) = ρ(E)), it becomes

n_class,fil(k; E_0) = ∫dE ρ(E) (1/(σ√π)) exp[−(E_0 − E)²/σ²] = (1/N) Σ_α (1/(σ√π)) exp[−(E_0 − ω_α)²/σ²],  (B7)

where we used the definition (4) of ρ(E).
The numerical contrast is thus finally defined as

Λ(k, t; E_0) = n_fil(k, t; E_0) / [(1/N) Σ_α (1/(σ√π)) exp(−(E_0 − ω_α)²/σ²)] − 1,  (B8)

and is actually independent of the choice of normalization of the energy filter, because both n_fil(k, t; E_0) and n_class,fil(k; E_0) are proportional to 1/(σ√π).

b. Infinite system size limit (t ≪ τ_H)

To evaluate the filtered contrast Eq. (B8) in the regime t ≪ τ_H, we diagonalize PRBM matrices of size N in an energy window [−3σ, 3σ] (this roughly corresponds to 1/4 of the eigenstates of the system) and expand the filtered time propagator over the eigenstates in reciprocal space. Combining the condition to reach the regime t ≪ τ_H with the one coming from the filter (see below (B4)), we get that the relevant times must satisfy (for small b)

1/σ ≤ t ≪ N.  (B9)

We checked that the upper bound of this inequality was satisfied by verifying that the CFS contrast was independent of the system size N, and that the filtered form factor Eq. (28) (with the same substitution as for the filtered contrast, computed directly from the knowledge of the eigenvalues) was stationary over the different times considered. Note that this condition is a bit stronger than for the RS model, because here we use exact diagonalization (of a non-sparse matrix) to compute the dynamics, which limits us to system sizes about 10 times smaller than those simulated with the RS model using the split-step scheme. The numbers of disorder realizations are given in Table II.

c. Long-time limit (t ≫ τ_H)

To compute the long-time dynamics, we use the identity (51). We express the eigenstates in reciprocal space. We use the same numbers of disorder realizations as in Table II; the corresponding values of D_1 and D_2 are given in Table III.

d. Filtered multifractal properties

Multifractal dimensions are determined by filtering the finite-size scaling laws (1) and (31) of the moments I_q(E). We express the eigenstates in the direct basis, then compute Eqs.
(1) and (31) for different system sizes N and average the results over n_d different disorder realizations (see Table II). Finally, we fit the averaged moments vs system size N to obtain D_1 and D_2 (see Fig. 7). The results are given in Table III.

Table III. Numerically determined multifractal dimensions for the PRBM model (E = 0). Errors are always smaller than 10^{−6}. 1 − χ_num is given to test the validity of Eq. (30), with χ_num numerically determined by computing the form factor from the eigenvalues, see Eqs. (28) and (29), in the same temporal interval as the CFS contrast, where it is constant and equal to the compressibility.

e. Spectral function

In the main text, we show that under the diagonal approximation the spectral function A(k, E) does not depend on k and is equal to ρ(E), see Eq. (39). Here we verify numerically the validity of this approximation for the PRBM. The numerical spectral function is defined via the above filtering technique as

A(k, E) = (1/N) Σ_α (1/(σ√π)) exp[−(E − ω_α)²/σ²] |φ_α(k)|².  (B10)

The density of states at energy E is directly computed by counting the number of states in an interval of width 2σ around E. As shown in Fig. 8, we find very good agreement of Eq. (39) with the numerics, for different values of E and different parameters b. This supports the validity of the diagonal approximation for the calculation of the classical background of the CFS peak.

2. RS model

As discussed in the main text, in the RS model the CFS contrast is independent of the mean energy E. In practice, we therefore compute the integrated probability n(k, t) defined in Eq. (34). The corresponding contrast is given by

Λ_N(k, t) = n(k, t) − 1.  (B11)

It can be seen as the average of the energy-dependent contrast Λ_N(k, t; E) over all (equally contributing) energies, since

Λ_N(k, t) = ∫_0^{2π} ρ(E) n(k, t; E) dE − 1 = (1/2π) ∫_0^{2π} Λ_N(k, t; E) dE.  (B12)

a.
Infinite system size limit (t ≪ τ_H)

We recall that the Floquet operator of the RS model is the product of two operators,

Û = e^{−iφ_p} e^{−iax},  (B13)

where the phases φ_p are randomly generated in the interval [0, 2π[. The first operator represents the kinetic energy during the free propagation and is diagonal in p space. The second one represents the kick and is diagonal in x space. We use a grid of size N (even) with positions evenly spaced in the interval [0, 2π[, x_k = 2πk/N, with k integer. The corresponding grid in momentum space is p = −N/2 + 1, …, N/2. A wavefunction ψ is initially prepared in a single position state around x_0 = π/2. The propagation over one period is then achieved with the split-step scheme, Eqs. (B14)-(B15). This method is particularly efficient and makes it possible to simulate very large system sizes, up to N = 131072, as in Fig. 4. To ensure that the condition t ≪ τ_H = N/(2π) is met, we checked that the CFS contrast is size-independent.

b. Long-time limit

To compute the long-time dynamics, we use the identity (51). We compute and diagonalize the Floquet operator and express the eigenstates in the reciprocal basis (here the x basis). The number of disorder realizations for each system size is given in Table IV.

N:   512,   1024,  2048, 4096, 8192, 16384
n_d: 28800, 14400, 7200, 3600, 1800, 900

Diagonalizing the matrices is more computationally demanding than naive time propagation at long times t ≫ τ_H (which scales as ∼ N per time step). However, the results are more reliable because of the oscillatory nature of the large-time behavior in the RS model. Indeed, the form factor of the RS model is given by [59]

K(t) = (1 − a)² (κt)² / [a²(1 − cos κt)² + (a sin κt + (1 − a)κt)²],  (B16)

with κ = 2πa/N, and has the following asymptotic expansion for t ≫ N/a:

K(t) ≈ 1 − 2a sin(κt)/[(1 − a)κt].  (B17)

Because of Eq. (49), this slow algebraic and oscillatory convergence to its limiting value also manifests itself in the CFS contrast, which significantly complicates the numerical determination of the asymptotic contrast.
c. Multifractal dimensions

Multifractal dimensions are determined using the finite-size scaling laws (1) and (31) of the moments I_q(E). However, as I_q(E) (and D_q) do not depend on E for the RS model, we compute moments I_q averaged over all quasi-energies E,

I_q = (1/2π) ∫_0^{2π} dE I_q(E) = (1/N) Σ_{α,n} |φ_α(n)|^{2q}.  (B18)

We compute and diagonalize the Floquet operator and express the eigenstates in the direct basis (momentum basis). Then we compute Eqs. (1) and (31) for different system sizes N and average the results over n_d different disorder realizations (see Table IV). Finally, we fit the averaged moments vs system size N to obtain D_1 and D_2 (see Fig. 9). The results are given in Table V.

Table V. Numerically determined multifractal dimensions for the RS model. Errors are negligible (always smaller than 10^{−6}). 1 − χ_th is given to test the validity of Eq. (30), with χ_th = (1 − a)² (see [59] or Eq. (B16) for t → 0).

3. 3DKR

Similarly to the RS model, the 3DKR is a Floquet system, whose eigenstate properties do not depend on the quasi-energy. We thus compute the integrated contrast (B11).

a. Infinite system size limit (t ≪ τ_H)

We use the exact same method as for the RS model, based on the propagation of wavefunctions with the split-step scheme Eqs. (B14)-(B15), except that we now use a 3d grid. To ensure that the condition t ≪ τ_H = N³/(2π) is met, we checked that the CFS contrast is size-independent.

b. Long-time limit (t ≫ τ_H)

Unlike for the RS model, to access the long-time dynamics we used temporal propagation of the wavefunction up to times t ∼ τ_H. We observed that beyond t > 2.5N³ the contrast reaches a stationary value; we thus averaged the CFS contrast in the temporal window 2.5 < t/N³ < 3 for different system sizes.
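The split-step propagation of Eqs. (B14)-(B15) can be sketched in a few lines. The example below implements one Floquet period of an RS-like operator (the value a = 0.3 and the number of periods are illustrative choices of ours, not parameters from the article) and checks that the evolution is unitary:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1024
a = 0.3                                   # kick amplitude (illustrative value)
x = 2 * np.pi * np.arange(N) / N          # position grid on [0, 2*pi[
phi_p = rng.uniform(0.0, 2 * np.pi, N)    # random free-propagation phases

def one_period(psi_x):
    """One Floquet period, Eqs. (B14)-(B15): the kick, diagonal in x,
    then the free propagation, diagonal in p, with two FFTs per period."""
    psi_p = np.fft.fft(np.exp(-1j * a * x) * psi_x, norm="ortho")
    return np.fft.ifft(np.exp(-1j * phi_p) * psi_p, norm="ortho")

psi = np.zeros(N, dtype=complex)
psi[N // 4] = 1.0                          # initial state localized at x0 = pi/2
for _ in range(100):                       # cost per period is O(N log N)
    psi = one_period(psi)
```

Each period costs O(N log N), which is what makes system sizes up to N = 131072 tractable, in contrast with the O(N³) cost of full diagonalization used for the long-time limit.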
Note that the computational time to reach this regime scales as ∼ N³ × N with the system size N, which is why we limited ourselves to N = 128 (N = 256 would for instance require reaching 50 × 10⁶ kicks with a system of 256³ points).

That is, any given index in (D8) must appear an even number of times. But all the indices i_1, i_2, …, i_n already appear in pairs. Therefore the two remaining indices i and j must be equal, otherwise at least one index would appear an odd number of times. As a consequence, all terms with i ≠ j vanish upon averaging in Eq. (D8). Therefore, upon average, (D7) yields for any fixed k

⟨k|Ĥⁿ|k⟩ = Σ_i |⟨k|i⟩|² ⟨i|Ĥⁿ|i⟩ = Σ_i ⟨i|Ĥⁿ|i⟩,  (D9)

using the normalization |⟨k|i⟩|² = 1 (see Appendix C). Since each ⟨k|Ĥⁿ|k⟩ is independent of k, so is A(k; t) in Eq. (D6). The identity A(k; E) = ρ(E) then ensues from the normalization condition (C13) of the spectral function.

Appendix E: Decorrelation between norms and phases

In the main text we perform our calculations under the approximation that the norms and phases of random wavefunctions are uncorrelated, an assumption which is quite usual in random matrix theory. In order to assess this assumption, we illustrate it below in the case of the RS model and for different values of D_q. As shown in Fig. 10, norms and phases are indeed uncorrelated in the RS model.

Figure 2. Rescaled CFS contrast for PRBM at E = 0 (a,c,e) and for RS averaged over E, see Eq. (B12) (b,d,f), in the limit t ≫ τ_H for different system sizes N (see Appendix B for the numerical procedure). Insets are a zoom around k = k_0. The dashed lines correspond to the analytical prediction Eq. (54), with a height fitted far from k = k_0 (in panel b, the dashed line corresponds to the symmetrized prediction Eq. (100), where the two parameters A and B have been independently adjusted, which accounts for the anti-peak; see Sec. V D). The value of D_2 used in Eq. (54) and in the y axis is obtained from the scaling of the moments (1) in direct space.
Figure 3. CFS contrast peak in the long-time limit (t ≫ τ_H) and its scaling (56) with system size N. (a) PRBM model with different b and at E = 0. (b) RS model averaged over E with different a. (c) 3DKR model with K = 1.58. Symbols are numerical data for different system sizes. Dashed black lines are Eq. (56), i.e. a single-parameter fit y = αN^{−D_2}, with α the fit parameter and D_2 independently determined from the scaling of the moments (1) in direct space (for PRBM and RS) or taken from [68] (for 3DKR). See Appendix B for the numerical procedure.

1. Dynamics of the CFS at k = k_0

Figure 4. Dynamics of the CFS contrast in the infinite system size limit (t ≪ τ_H). (a,b) PRBM model with different b, system size N = 16384, number of disorder realizations n_d = 1125. (c,d) RS model with different a, system size N = 131072, number of disorder realizations n_d = 3600. (e,f) 3DKR model with K = 1.58. See Appendix B for various numerical details. (a,c,e) Dynamics of the CFS peak height at k = k_0. Solid lines are numerical data, smoothed over a range Δt for clarity (Δt = 11 for RS, Δt ∼ 10/b for PRBM and Δt = 74 for 3DKR). Dashed black lines are the theoretical prediction Eq. (58), i.e. a single-parameter fit y = 1 − D_1/d − αt^{−D_2/d}, with α the fit parameter and D_1 and D_2 either independently determined from the scaling of the moments in direct space (PRBM and RS) or taken from [68] (3DKR). (b,d,f) Dynamics of the CFS peak shape. Symbols are numerical data at different times (t ∈ [91/b, 819/b] for PRBM, t ∈ [196/a, 1243/a] for RS, t ∈ [1, 22500] for the 3DKR model). For the PRBM and RS models, data are averaged in boxes of q with logarithmically increasing size. For 3DKR, data are averaged over each spherical shell of radius |k − k_0|. The values of α used to plot the y axis are extracted from the fits presented in (a,c,e). Dashed black lines are a single-parameter fit y = cq^{−D_2} (see Eqs. (62) and (63)), with D_2 independently determined or taken from the literature.
Dotted black lines are y = 1 − D_1, with D_1 independently determined or taken from the literature.

Figure 5. CFS contrast in the long-time limit and strong multifractal regime. (a) PRBM model for b = 0.001 (N = 16384, n_d = 1125 disorder realizations). (b) RS model for a = 0.001 (N = 16384, n_d = 900 disorder realizations). In both plots, thick solid lines are results from perturbation theory, Eqs. (82) and (96), and thin solid lines are numerical data (see Appendix B for details). For the PRBM model, the dashed black line is the universal prediction Eq. (54) (for |k − k_0| ≪ 1) with D_2 = 0 and height adjusted to best fit the numerical data. For the RS model, the dashed black line is the symmetrized universal prediction Eq. (100) (see text); numerical data are averaged over E.

Figure 6. Determination of the critical kicking strength K in the 3DKR model. The system size is N = 128 and the number of disorder realizations n_d = 179. ⟨p²⟩ is the momentum variance of an initially fully localized wavefunction. Symbols are numerical data. For the value K = 1.58, the curve ⟨p²⟩ × t^{−2/3} is flat, indicating the critical point of the Anderson transition (see text).

Figure 7. Determination of D_1 and D_2 in the PRBM model (E = 0) by finite-size scaling of the moments I_q(E), Eq. (1). (a) Determination of D_2. Symbols are numerical data, with error bars smaller than the symbol size. Dashed lines are two-parameter fits y = AN^{−D_2} (see Eq. (1)). (b) Determination of D_1. Symbols are numerical data, with error bars smaller than the symbols. Dashed lines are two-parameter fits y = B + D_1 ln N (see Eq. (31)). The numbers of disorder realizations are given in Table II.

Figure 8. Spectral function A(k, E) in PRBM for various values of E. Parameters are N = 1024, n_d = 10 disorder realizations.
Solid lines are numerical data for the spectral function, dashed lines are numerical data for the density of state. over one period is then achieved by applying twice a Fast Fourier Transform (FFT) algorithm, in the spirit of the split-step method ψ(p, t = 0 + ) = FFT[e −iaxn ψ(x n , t = 0)], (B14) ψ(x n , t = 1) = FFT −1 [e −iφp ψ(p, t = 0 + )]. Figure 9 . 9Determination of D1 and D2 in the RS model by finite-size scaling of moments Iq given by Eq. (B18). (a) Determination of D2. Symbols are numerical data, with error bars smaller than symbol sizes. Dashed lines are two-parameter fits y = AN −D 2 . (b) Determination of D1. Symbols are numerical data, with error bars smaller than symbol sizes. Dashed lines are two-parameter fits y = B +N ln D1. Numbers of disorder realizations are given in α (k)| 2 δ(E − ε α ) α φ α | δ(E − ω α ) |k . have used the eigenvalue-eigenvector decompo-sitionÛ = exp(−iĤ) = α |φ α φ α | e −iωα .(D5)Expanding the exponential exp(−iĤt) the direct basis, one has, using the closure relation (C1),k|Ĥ n |k = i,j k|i i|Ĥ n |j j|k .(D7)The N × N Hamiltonian matrix in direct space has independent (up to Hermiticity) Gaussian entries H ij = i|Ĥ |j . Calculating (D6) requires to determine the averages of quantitiesi|Ĥ n |j = i1,...,in H ii1 H i1i2 . . . H in−1in H inj .(D8) The vector (H 11 , Re(H 12 ), Im(H 12 ), . . . , H N N ) is a multivariate centered Gaussian. Each moment in (D8) can be calculated using Wick's theorem: moments of odd order vanish, and moments x a1 ...x a2p are given by the sum over all possible pairings of the set {1, ..., 2p}. Because of independence of matrix elements, only entries H ab and H ba are non-independent; thus the only nonvanishing twopoint correlators are either of the form H ab H ba or of the form H 2 ab (possibly with a = b). Figure 10 . 
10Correlations ρ(X, Y ) = ( XY − X Y )/(σX σY ) between norm |φα(m)| and phase θα(n) of a same eigenvector φα = |φα| exp(iθα) of the RS model, evaluated at different momenta (m, n): (a,d) norm-phase correlation, (b,e) norm-norm correlation, (c,f) phase-phase correlation. Panels (a-c) correspond to the strong multifractal regime a = 0.1, panels (d-f) to the weak multifractal regime a = 0.9 (d-f). Matrix size is N = 128 and average is taken over disorder (100 realizations) and eigenvectors. Table I. Summary of some of the main properties of three models considered in this article (see text pour more details).Model PRBM RS 3DKR Tunable multifractal dimensions Dq Yes with b ∈ [0, ∞[ Yes with a ∈ [0, ∞[ No Type Hamiltonian Floquet Floquet Energy dependent properties Yes No No Hopping range tn Long-range ∼ 1/n Long-range ∼ 1/n Short-range (exponential decay) Dimension d = 1 d = 1 d = 3 Direct (disorder) space Position Momentum Momentum Table II . IINumber of numerical disorder realizations n d used to average statistical properties of the PRBM model, for different system sizes N .c. Long-time limit (t τH ) Table IV . IVNumber of numerical disorder realizations n d used to average statistical properties of the RS model, for different system sizes N . ACKNOWLEDGMENTS.OG wishes to thank MajuLab and CQT for their kind hospitality. This study has been supported through the EUR grant NanoX nr ANR-17-EURE-0009 in the framework of the "Programme des Investissements d'Avenir", and research funding Grants No. ANR-17-CE30-0024, ANR-18-CE30-0017 and ANR-19-CE30-0013. We thank Calcul en Midi-Pyrénées (CALMIP) for computational resources and assistance.To determine the critical parameter K c , at which the Anderson transition occurs in the 3DKR, we follow the lines of[66,79], that we briefly recall here.The one-parameter scaling theory predicts that at the Anderson transition diffusion is anomalous. Namely, starting from an initially fully localized wavefunction in direct space, i.e. 
⟨p|ψ(t = 0)⟩ = δ(p), it predicts ⟨p²⟩ ∝ t^{2/3}. From a numerical point of view, we simulate the dynamics of an initially localized wavepacket using the split-step scheme of Eqs. (B14)-(B15). We compute the momentum variance and plot ⟨p²⟩ × t^{−2/3} as a function of time; at the critical point there should be no finite-size effect. The critical value of K corresponds to the flat curve in Fig. 6, yielding the estimate K_c ≈ 1.58.

Appendix B: Numerical methods

Here we give a detailed discussion of the different numerical procedures used in the article.

References

[1] P. W. Anderson, Phys. Rev. 109, 1492 (1958).
[2] R. Weaver, Wave Motion 12, 129 (1990).
[3] H. Hu, A. Strybulevych, J. H. Page, S. E. Skipetrov, and B. A. van Tiggelen, Nat. Phys. 4, 945 (2008).
[4] D. S. Wiersma, P. Bartolini, A. Lagendijk, and R. Righini, Nature 390, 671 (1997).
[5] A. Chabanov, M. Stoytchev, and A. Genack, Nature 404, 850 (2000).
[6] T. Schwartz, G. Bartal, S. Fishman, and M. Segev, Nature 446, 52 (2007).
[7] J. Topolancik, B. Ilic, and F. Vollmer, Phys. Rev. Lett. 99, 253901 (2007).
[8] F. Riboli, P. Barthelemy, S. Vignolini, F. Intonti, A. De Rossi, S. Combrie, and D. Wiersma, Opt. Lett. 36, 127 (2011).
[9] R. Graham, M. Schlautmann, and D. L. Shepelyansky, Phys. Rev. Lett. 67, 255 (1991).
[10] F. L. Moore, J. C. Robinson, C. Bharucha, P. E. Williams, and M. G. Raizen, Phys. Rev. Lett. 73, 2974 (1994).
[11] J. Billy, V. Josse, Z. Zuo, A. Bernard, B. Hambrecht, P. Lugan, D. Clément, L. Sanchez-Palencia, P. Bouyer, and A. Aspect, Nature 453, 891 (2008).
[12] G. Roati, C. D'Errico, L. Fallani, M. Fattori, C. Fort, M. Zaccanti, G. Modugno, M. Modugno, and M. Inguscio, Nature 453, 895 (2008).
[13] J. Chabé, G. Lemarié, B. Grémaud, D. Delande, P. Szriftgiser, and J. C. Garreau, Phys. Rev. Lett. 101, 255702 (2008).
[14] F. Jendrzejewski, A. Bernard, K. Mueller, P. Cheinet, V. Josse, M. Piraud, L. Pezzé, L. Sanchez-Palencia, A. Aspect, and P. Bouyer, Nat. Phys. 8, 398 (2012).
[15] I. Manai, J.-F. Clément, R. Chicireanu, C. Hainaut, J. C. Garreau, P. Szriftgiser, and D. Delande, Phys. Rev. Lett. 115, 240603 (2015).
[16] J. T. Chalker and G. J. Daniell, Phys. Rev. Lett. 61, 593 (1988).
[17] F. Evers and A. D. Mirlin, Phys. Rev. Lett. 84, 3690 (2000).
[18] F. Evers and A. D. Mirlin, Rev. Mod. Phys. 80, 1355 (2008).
[19] A. De Luca, B. L. Altshuler, V. E. Kravtsov, and A. Scardicchio, Phys. Rev. Lett. 113, 046806 (2014).
[20] K. S. Tikhonov and A. D. Mirlin, Phys. Rev. B 94, 184203 (2016).
[21] I. García-Mata, J. Martin, R. Dubertrand, O. Giraud, B. Georgeot, and G. Lemarié, Phys. Rev. Research 2, 012020 (2020).
[22] E. Brillaux, D. Carpentier, and A. A. Fedorenko, Phys. Rev. B 100, 134204 (2019).
[23] M. Morgenstern, J. Klijn, C. Meyer, and R. Wiesendanger, Phys. Rev. Lett. 90, 056804 (2003).
[24] S. Faez, A. Strybulevych, J. H. Page, A. Lagendijk, and B. A. van Tiggelen, Phys. Rev. Lett. 103, 155703 (2009).
[25] A. Richardella, P. Roushan, S. Mack, B. Zhou, D. A. Huse, D. D. Awschalom, and A. Yazdani, Science 327, 665 (2010).
[26] T. Shimasaki, M. Prichard, H. E. Kondakci, J. Pagett, Y. Bai, P. Dotti, A. Cao, T.-C. Lu, T. Grover, and D. M. Weld, arXiv:2203.09442 (2022).
[27] J. T. Chalker, V. E. Kravtsov, and I. V. Lerner, J. Exp. Theor. Phys. 64, 386 (1996).
[28] P. Akridas-Morel, N. Cherroret, and D. Delande, Phys. Rev. A 100, 043612 (2019).
[29] Y. Kuga and A. Ishimaru, JOSA A 1, 831 (1984).
[30] M. P. Van Albada and A. Lagendijk, Phys. Rev. Lett. 55, 2692 (1985).
[31] P.-E. Wolf and G. Maret, Phys. Rev. Lett. 55, 2696 (1985).
[32] D. S. Wiersma, M. P. van Albada, B. A. van Tiggelen, and A. Lagendijk, Phys. Rev. Lett. 74, 4193 (1995).
[33] G. Labeyrie, F. de Tomasi, J.-C. Bernard, C. A. Müller, C. Miniatura, and R. Kaiser, Phys. Rev. Lett. 83, 5266 (1999).
[34] G. Bayer and T. Niederdränk, Phys. Rev. Lett. 70, 3884 (1993).
[35] A. Tourin, A. Derode, P. Roux, B. A. Van Tiggelen, and M. Fink, Phys. Rev. Lett. 79, 3637 (1997).
[36] E. Larose, L. Margerin, B. Van Tiggelen, and M. Campillo, Phys. Rev. Lett. 93, 048501 (2004).
[37] F. Jendrzejewski, K. Müller, J. Richard, A. Date, T. Plisson, P. Bouyer, A. Aspect, and V. Josse, Phys. Rev. Lett. 109, 195302 (2012).
[38] C. Hainaut, I. Manai, J.-F. Clément, J. C. Garreau, P. Szriftgiser, G. Lemarié, N. Cherroret, D. Delande, and R. Chicireanu, Nat. Commun. 9, 1382 (2018).
[39] T. Karpiuk, N. Cherroret, K. L. Lee, B. Grémaud, C. A. Müller, and C. Miniatura, Phys. Rev. Lett. 109, 190601 (2012).
[40] T. Micklitz, C. A. Müller, and A. Altland, Phys. Rev. Lett. 112, 110602 (2014).
[41] K. L. Lee, B. Grémaud, and C. Miniatura, Phys. Rev. A 90, 043605 (2014).
[42] S. Ghosh, N. Cherroret, B. Grémaud, C. Miniatura, and D. Delande, Phys. Rev. A 90, 063602 (2014).
[43] S. Ghosh, D. Delande, C. Miniatura, and N. Cherroret, Phys. Rev. Lett. 115, 200602 (2015).
[44] S. Ghosh, C. Miniatura, N. Cherroret, and D. Delande, Phys. Rev. A 95, 041602 (2017).
[45] G. Lemarié, C. A. Müller, D. Guéry-Odelin, and C. Miniatura, Phys. Rev. A 95, 043626 (2017).
[46] M. Martinez, G. Lemarié, B. Georgeot, C. Miniatura, and O. Giraud, Phys. Rev. Research 3, L032044 (2021).
[47] E. Abrahams, 50 Years of Anderson Localization, Vol. 24 (World Scientific, 2010).
[48] A. D. Mirlin, Y. V. Fyodorov, F.-M. Dittes, J. Quezada, and T. H. Seligman, Phys. Rev. E 54, 3221 (1996).
[49] T. Seligman, J. Verbaarschot, and M. Zirnbauer, Phys. Rev. Lett. 53, 215 (1984).
[50] A. D. Mirlin and F. Evers, Phys. Rev. B 62, 7920 (2000).
[51] B. V. Chirikov, Phys. Rep. 52, 263 (1979).
[52] F. M. Izrailev, Phys. Rep. 196, 299 (1990).
[53] S. Fishman, D. R. Grempel, and R. E. Prange, Phys. Rev. Lett. 49, 509 (1982).
[54] D. L. Shepelyansky, Phys. Rev. Lett. 56, 677 (1986).
[55] S. Ruijsenaars and H. Schneider, Ann. Phys. (N. Y.) 170, 370 (1986).
[56] S. N. Ruijsenaars, Publications of the Research Institute for Mathematical Sciences 31, 247 (1995).
[57] H. W. Braden and R. Sasaki, Prog. Theor. Phys. 97, 1003 (1997).
[58] E. Bogomolny, O. Giraud, and C. Schmit, Phys. Rev. Lett. 103, 054103 (2009).
[59] E. Bogomolny, O. Giraud, and C. Schmit, Nonlinearity 24, 3179 (2011).
[60] E. Bogomolny and O. Giraud, Phys. Rev. Lett. 106, 044101 (2011).
[61] O. Giraud, J. Marklof, and S. O'Keefe, J. Phys. A 37, L303 (2004).
[62] E. Bogomolny and O. Giraud, Phys. Rev. E 84, 036212 (2011).
[63] E. Bogomolny and O. Giraud, Phys. Rev. E 85, 046208 (2012).
[64] I. García-Mata, J. Martin, O. Giraud, and B. Georgeot, Phys. Rev. E 86, 056215 (2012).
[65] Y. V. Fyodorov and O. Giraud, Chaos Solitons Fractals 74, 15 (2015).
[66] J. Wang and A. M. García-García, Phys. Rev. E 79, 036206 (2009).
[67] G. Lemarié, J. Chabé, P. Szriftgiser, J.-C. Garreau, B. Grémaud, and D. Delande, Phys. Rev. A 80, 043626 (2009).
[68] J. Lindinger and A. Rodríguez, Phys. Rev. B 96, 134202 (2017).
[69] A. Rodriguez, L. J. Vasquez, K. Slevin, and R. A. Römer, Phys. Rev. Lett. 105, 046403 (2010).
[70] A. Rodriguez, L. J. Vasquez, K. Slevin, and R. A. Römer, Phys. Rev. B 84, 134209 (2011).
[71] R. Klesse and M. Metzler, Phys. Rev. Lett. 79, 721 (1997).
[72] J. A. Méndez-Bermúdez, A. Alcázar-López, and I. Varga, EPL 98, 37006 (2012).
[73] J. A. Méndez-Bermúdez, A. Alcazar-López, and I. Varga, J. Stat. Mech. Theory Exp. 2014, P11012 (2014).
[74] M. Carrera-Núñez, A. Martínez-Argüello, and J. Méndez-Bermúdez, Phys. A: Stat. Mech. Appl. 573, 125965 (2021).
[75] B. Huckestein and L. Schweitzer, Phys. Rev. Lett. 72, 713 (1994).
[76] J. Chalker, Phys. A: Stat. Mech. Appl. 167, 253 (1990).
[77] T. Brandes, B. Huckestein, and L. Schweitzer, Ann. Phys. (Berl.) 508, 633 (1996).
[78] L. Levitov, Phys. Rev. Lett. 64, 547 (1990).
[79] G. Lemarié, Transition d'Anderson avec des ondes de matière atomiques, Ph.D. thesis, Université Pierre et Marie Curie-Paris VI (2009).
[80] G. Lemarié, B. Grémaud, and D. Delande, EPL 87, 37007 (2009).
[81] M. Gonçalves, P. Ribeiro, E. V. Castro, and M. A. N. Araújo, Phys. Rev. Lett. 124, 136405 (2020).
[82] E. Cuevas and V. E. Kravtsov, Phys. Rev. B 76, 235119 (2007).
[83] E. Akkermans and G. Montambaux, Mesoscopic Physics of Electrons and Photons (Cambridge University Press, 2007).
[84] N. Cherroret, T. Karpiuk, C. A. Müller, B. Grémaud, and C. Miniatura, Phys. Rev. A 85, 011604 (2012).
[85] A. M. García-García and J. Wang, Phys. Rev. Lett. 94, 244102 (2005).
[86] E. Bogomolny, R. Dubertrand, and C. Schmit, Nonlinearity 22, 2101 (2009).
[87] E. Bogomolny and C. Schmit, Phys. Rev. Lett. 93, 254102 (2004).
[88] J. Martin, O. Giraud, and B. Georgeot, Phys. Rev. E 77, 035201 (2008).
[89] R. Dubertrand, I. García-Mata, B. Georgeot, O. Giraud, G. Lemarié, and J. Martin, Phys. Rev. Lett. 112, 234101 (2014).
[90] G. Labeyrie, D. Delande, C. A. Müller, C. Miniatura, and R. Kaiser, EPL 61, 327 (2003).
[91] V. E. Kravtsov, A. Ossipov, O. M. Yevtushenko, and E. Cuevas, Phys. Rev. B 82, 161102 (2010).
[92] S. S. Kondov, W. R. McGehee, J. J. Zirbel, and B. DeMarco, Science 334, 66 (2011).
[93] M. Lopez, J.-F. Clément, P. Szriftgiser, J. C. Garreau, and D. Delande, Phys. Rev. Lett. 108, 095701 (2012).
[94] G. Lemarié, H. Lignier, D. Delande, P. Szriftgiser, and J. C. Garreau, Phys. Rev. Lett. 105, 090601 (2010).
On Dart-Zobel Algorithm for Testing Regular Type Inclusion

1 Oct 1998

Lunjin Lu ([email protected]) and John G. Cleary ([email protected])
Department of Computer Science, University of Waikato, Hamilton, New Zealand

Keywords: type, regular term language, regular term grammar, tuple distributivity

Abstract. This paper answers open questions about the correctness and the completeness of the Dart-Zobel algorithm for testing the inclusion relation between two regular types. We show that the algorithm is incorrect for regular types. We also prove that the algorithm is complete for regular types as well as correct for tuple distributive regular types. Also presented is a simplified version of the Dart-Zobel algorithm for tuple distributive regular types.

¹ A start symbol is not needed in our setting.

1 Introduction

Types are ubiquitous in programming languages [4]. They make programs easier to understand and help detect errors, since a large number of errors are type errors. Types have been introduced into logic programming in the forms of type checking and inference [3,7,11,23,28], type analysis [22,29,15,17,12,20,5,21], and typed languages [14,19,25,27]. Recent logic programming systems allow the programmer to declare types for predicates, and type errors are then detected either at compile time or at run time. Even in early logic programming systems, built-in predicates were usually typed, and type checking for these predicates was performed at run time. The reader is referred to [24] for more details on types in logic programming.

A type is a possibly infinite set of ground terms with a finite representation. An integral part of any type system is its type language, which specifies which sets of ground terms are types. To be useful, types should be closed under intersection, union and complement operations.
The decision problems, such as the emptiness of a type, the inclusion of one type in another, and the equivalence of two types, should be decidable. Regular term languages [13,6], called regular types, satisfy these constraints and have been widely used as types [26,22,29,7,15,19,25,27,11,28,17,12,20,5,21]. Most type systems use tuple distributive regular types, which are strictly less powerful than regular types [26,22,29,15,19,25,27,11,28,17,12,20,5,21]. Tuple distributive regular types are regular types closed under the tuple distributive closure. Intuitively, the tuple distributive closure of a set of terms is the set of all terms constructed recursively by permuting each argument position among all terms that have the same function symbol [28]. Tuple distributive regular types are discussed in section 5.

To our knowledge, Dart and Zobel's work [8] is the only one to present, among others, an inclusion algorithm for regular types with respect to a given set of type definitions without the tuple distributive restriction. Set-based analysis can also be used to derive types based on set constraint solving [2,1,18,16,10]. However, set constraint solving methods are intended to infer descriptive types [25] rather than to test the inclusion of one prescriptive type [25] in another. Therefore, they are useful in different settings from the Dart-Zobel algorithm. The Dart-Zobel algorithm has been used in type or type-related analyses [7,9]. However, the completeness and the correctness of the algorithm were left open. This paper provides answers to these open questions. We show that the algorithm is incorrect for regular types. We also prove that the algorithm is complete for regular types in general as well as correct for tuple distributive regular types. These results lead to a simplified version of the Dart-Zobel algorithm that is complete and correct for tuple distributive regular types.

The remainder of this paper is organised as follows.
Section 2 defines regular types by regular term grammars. Section 3 recalls the Dart-Zobel algorithm for testing if a regular type is a subset of another regular type. Section 4 addresses the completeness and the correctness of their algorithm, which have been left open. In section 5, we show that their algorithm is both complete and correct for tuple distributive regular types and provide a simplified version of their algorithm for tuple distributive regular types.

2 Regular Types

Several equivalent formalisms, such as tree automata [13,6], regular term grammars [13,6] and regular unary logic programs [28], have been used to describe regular types. In [8], Dart and Zobel use regular term grammars to describe regular types, which are sets of ground terms over a ranked alphabet Σ. A regular term grammar is a tuple G = ⟨Π, Σ, ∆⟩ where¹

- Σ is a fixed ranked alphabet. Each symbol in Σ is called a function symbol and has a fixed arity. It is assumed that Σ contains at least one constant, that is, a function symbol of arity 0.
- Π is a set of symbols called nonterminals. These nonterminals will be called type symbols, as they represent types. Type symbols are of arity 0. It is assumed that Π ∩ Σ = ∅.
- ∆ is a set of production rules of the form α → τ with α ∈ Π and τ ∈ T(Σ ∪ Π), where T(Σ ∪ Π) is the set of all terms over Σ ∪ Π. Terms in T(Σ ∪ Π) will be called pure type terms.

Example 1. Let Π = {Nat, Natlist}, Σ = {0, s, nil, cons} and let ∆ consist of the production rules

    Nat → 0,  Nat → s(Nat),  Natlist → nil,  Natlist → cons(Nat, Natlist)

² The above presentation is slightly different from [8], where production rules with the same type symbol on their lefthand sides are grouped together and called a type rule. For instance, the production rules in the above example are grouped into two type rules Nat → {0, s(Nat)} and Natlist → {nil, cons(Nat, Natlist)}.
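To make the definitions concrete, here is one possible machine representation of the grammar of Example 1. The encoding (Python strings for type symbols, tuples for function applications, a constant c as the 1-tuple (c,)) is our own illustration and is not part of the paper:

```python
# Hypothetical encoding (ours, not the paper's): a pure type term is
# either a type symbol in PI (a string) or a tuple whose first component
# is a function symbol and whose remaining components are pure type terms.
PI = {"Nat", "Natlist"}
SIGMA = {"0": 0, "s": 1, "nil": 0, "cons": 2}   # function symbol -> arity

# Delta: the production rules, grouped by lefthand-side type symbol.
DELTA = {
    "Nat": [("0",), ("s", "Nat")],
    "Natlist": [("nil",), ("cons", "Nat", "Natlist")],
}

# Sanity check: every rule maps a type symbol to a term whose principal
# function symbol is applied to the right number of arguments.
for alpha, bodies in DELTA.items():
    assert alpha in PI
    for tau in bodies:
        assert SIGMA[tau[0]] == len(tau) - 1
```

Grouping the rules by their lefthand side mirrors the type-rule presentation of [8] mentioned in the footnote, while still keeping the individual productions available.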
Let ⇒ * be the reflexive and transitive closure of ⇒ G . The type denoted by a pure type term τ is defined as follows. [ [τ ] ] G def = {t ∈ T (Σ) | τ ⇒ * G t} [ [τ ] ] G is the set of terms over Σ that can be derived from τ by repeatedly replacing the lefthand side of a rule in ∆ with its righthand side. Example 2. Let G be the regular term grammar in example 1. We have The type represented by a sequence ψ of pure type terms and a set Ψ of sequences of pure type terms are defined as follows. [ [ǫ] ] G def = {ǫ} [ [ τ + ψ ′ ] ] G def = [ [τ ] ] G × [ [ψ ′ ] ] G [ [Ψ ] ] G def = ψ∈Ψ [ [ψ] ] G where ǫ is the empty sequence, + is the infix sequence concatenation operator, τ is the sequence consisting of the pure type term τ and × is the Cartesian product operator. The set Π of nonterminals in Dart and Zobel's type language also contains constant type symbols. Constant type symbols are not defined by production rules and they denote constant types. In particular, Π contains µ denoting the set of all terms over Σ and φ denoting the empty set of terms. We will leave out constant type symbols in this paper in order to simplify presentation. Re-introducing constant type symbols will not affect the results of the paper. Dart-Zobel algorithm works with simplified regular term grammars. A regular term grammar G = Π, Σ, ∆ is simplified if [ [α] ] G = ∅ for each α ∈ Π and τ ∈ Π for each (α → τ ) ∈ ∆. Every regular grammar can be simplified. This section recalls Dart and Zobel's inclusion algorithm for regular types. As indicated in section 2, we shall disregard constant type symbols and simplify their algorithm accordingly. We note that without constant type symbols, many functions in their algorithm can be greatly simplified. In place of a type rule, we use the corresponding set of production rules. These superficial changes don't change the essence of the algorithm but facilitate the presentation. 
We shall assume that G is a simplified regular term grammar and omit references to G where there is no confusion. We first describe the ancillary functions used in their algorithm. Let ψ = τ_1 τ_2 ··· τ_n be a non-empty sequence of pure type terms and Ψ be a set of non-empty sequences of pure type terms. head(ψ) def= τ_1 and tail(ψ) def= τ_2 ··· τ_n. heads and tails are defined as heads(Ψ) def= {head(ψ) | ψ ∈ Ψ} and tails(Ψ) def= {tail(ψ) | ψ ∈ Ψ}. The function expand rewrites a non-empty sequence into a set of sequences when necessary:

expand(ψ) def= {ψ} if head(ψ) ∉ Π, and expand(ψ) def= {⟨τ⟩ + tail(ψ) | (head(ψ) → τ) ∈ ∆} if head(ψ) ∈ Π;
expands(Ψ) def= ∪_{ψ∈Ψ} expand(ψ).

The function selects(τ, Ψ) defined below applies when τ is a pure type term with τ ∉ Π and Ψ is a set of non-empty sequences with heads(Ψ) ∩ Π = ∅. The output of selects(τ, Ψ) is the set of the sequences in Ψ that have the same principal function symbol as τ:

selects(f(τ_1, ···, τ_n), Ψ) def= {ψ ∈ Ψ | head(ψ) = f(ω_1, ···, ω_n)}

Note that f(τ_1, ···, τ_n) is a constant when n = 0. The function open(ψ′) defined below applies when ψ′ is a non-empty sequence with head(ψ′) ∉ Π. open(ψ′) replaces the head of ψ′ with its arguments:

open(⟨f(τ_1, ···, τ_n)⟩ + ψ) def= τ_1 τ_2 ··· τ_n + ψ

When n = 0, open(⟨f(τ_1, ···, τ_n)⟩ + ψ) = ψ. Without constant type symbols, open doesn't need an extra argument as in [8], where it is used to test membership of a term in a constant type and to indicate the required number of arguments when the constant type symbol is µ. opens(Ψ) def= {open(ψ) | ψ ∈ Ψ}.

The inclusion algorithm subset(τ_1, τ_2) takes two pure type terms τ_1 and τ_2 and is intended to decide whether [[τ_1]]_G ⊆ [[τ_2]]_G is true or false. The core part subsetv of the inclusion algorithm takes a sequence ψ of pure type terms and a set Ψ of sequences of pure type terms that are of the same length as ψ and is intended to decide whether [[ψ]]_G ⊆ [[Ψ]]_G.
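The ancillary functions have direct set-based transcriptions. The sketch below uses the same hypothetical encoding as before (type symbols as strings keyed in RULES, a term f(t1,...,tn) as the tuple (f, t1, ..., tn), a sequence as a Python tuple of pure type terms); the names mirror the functions above.

```python
# Assumed fragment of a simplified grammar, for illustration only.
RULES = {"Nat": [("0",), ("s", "Nat")]}

def head(seq): return seq[0]
def tail(seq): return seq[1:]
def heads(S): return {head(s) for s in S}
def tails(S): return {tail(s) for s in S}

def expand(seq):
    """Rewrite the head once when it is a nonterminal; otherwise no change."""
    if head(seq) in RULES:
        return {(body,) + tail(seq) for body in RULES[head(seq)]}
    return {seq}

def selects(term, S):
    """Sequences in S whose head shares term's principal symbol and arity."""
    f, n = term[0], len(term) - 1
    return {s for s in S if head(s)[0] == f and len(head(s)) - 1 == n}

def opens(S):
    """Replace each head f(t1,...,tn) by the sequence of its arguments."""
    return {tuple(head(s)[1:]) + tail(s) for s in S}
```

For instance, expanding the one-element sequence ⟨Nat⟩ yields {⟨0⟩, ⟨s(Nat)⟩}, and selecting then opening with respect to s(Nat) gives back {⟨Nat⟩}.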
subsetv takes a third argument C to ensure termination. C is a set of pairs ⟨β, Υ⟩ where β ∈ Π is a type symbol and Υ ⊆ T(Σ ∪ Π) is a set of pure type terms. A pair ⟨β, Υ⟩ in C can be read as [[β]]_G ⊆ [[Υ]]_G. The functions subset and subsetv are defined in the following; where several alternative definitions of subsetv apply, the first is used.

subset(τ_1, τ_2) def= subsetv(⟨τ_1⟩, {⟨τ_2⟩}, ∅)

subsetv(ψ, Ψ, C) def=
- false, if Ψ = ∅;
- true, if ψ = ǫ;
- subsetv(tail(ψ), tails(Ψ), C), if ⟨head(ψ), Υ⟩ ∈ C and heads(Ψ) ⊇ Υ;
- ∀ψ′ ∈ expand(ψ). subsetv(ψ′, Ψ, C ∪ {⟨head(ψ), heads(Ψ)⟩}), if head(ψ) ∈ Π;
- subsetv(open(ψ), opens(selects(head(ψ), expands(Ψ))), C), if head(ψ) = f(τ_1, ···, τ_n).

The second condition heads(Ψ) ⊇ Υ for the third alternative is obviously mistakenly written as heads(Ψ) ⊆ Υ in [8]. The first two alternatives deal with two trivial cases. The third alternative uses pairs in C to force termination. As we shall see later, this is fine for tuple distributive regular types but is problematic for regular types in general. The fourth alternative expands ψ into a set of sequences ψ′ and compares each of them with Ψ. The fifth alternative applies when ψ = ⟨f(τ_1, ···, τ_n)⟩ + ψ′. Sequences in Ψ are expanded and the expanded sequences of the form ⟨f(σ_1, ···, σ_n)⟩ + ω′ are selected. ψ and the set of the selected sequences are then compared after replacing f(τ_1, ···, τ_n) with τ_1 ··· τ_n in ψ and replacing f(σ_1, ···, σ_n) with σ_1 ··· σ_n in each ⟨f(σ_1, ···, σ_n)⟩ + ω′.

We now address the correctness and the completeness of the Dart-Zobel algorithm, which were left open. We first show that the algorithm is incorrect for regular types by means of a counterexample. We then prove that the algorithm is complete for regular types.
Thus, the algorithm provides an approximate solution to the inclusion problem of regular types, in that it returns true whenever the inclusion relation holds between its two arguments, while the converse is not necessarily true.

Correctness - a counterexample

The following example shows that the Dart-Zobel algorithm is incorrect for regular types.

Example 3. Let G = ⟨Π, Σ, ∆⟩ with Π = {α, β, θ, σ, ω}, Σ = {a, b, g(), h(,)} and

∆ = { α → g(ω), β → g(θ) | g(σ), θ → a | h(θ, a), σ → b | h(σ, b), ω → a | b | h(ω, a) | h(ω, b) }

where, for instance, θ → a | h(θ, a) is an abbreviation of the two rules θ → a and θ → h(θ, a). Let Σ_h = Σ \ {g}. We have

[[θ]]_G = {t ∈ T(Σ_h) | t is left-skewed and the leaves of t are a's}
[[σ]]_G = {t ∈ T(Σ_h) | t is left-skewed and the leaves of t are b's}
[[ω]]_G = {t ∈ T(Σ_h) | t is left-skewed}
[[α]]_G = {g(t) | t ∈ [[ω]]_G}
[[β]]_G = {g(t) | t ∈ [[θ]]_G ∪ [[σ]]_G}

Let t = g(h(h(a, b), a)). t ∈ [[α]]_G and t ∉ [[β]]_G. Therefore, [[α]]_G ⊈ [[β]]_G. The incorrectness of the Dart-Zobel algorithm is illustrated by showing subset(α, β) = true, as follows. Let C_0 = {⟨α, {β}⟩}. We have

subset(α, β) = subsetv(⟨α⟩, {⟨β⟩}, ∅) by def. of subset
= subsetv(⟨g(ω)⟩, {⟨β⟩}, C_0) by 4th def. of subsetv
= subsetv(⟨ω⟩, {⟨θ⟩, ⟨σ⟩}, C_0) by 5th def. of subsetv

Let C_1 = C_0 ∪ {⟨ω, {θ, σ}⟩}. By the fourth definition of subsetv and the above equation,

subset(α, β) = subsetv(⟨a⟩, {⟨θ⟩, ⟨σ⟩}, C_1) ∧ subsetv(⟨b⟩, {⟨θ⟩, ⟨σ⟩}, C_1) ∧ subsetv(⟨h(ω, a)⟩, {⟨θ⟩, ⟨σ⟩}, C_1) ∧ subsetv(⟨h(ω, b)⟩, {⟨θ⟩, ⟨σ⟩}, C_1)   (1)

By applying the fifth and then the second definitions of subsetv, subsetv(⟨a⟩, {⟨θ⟩, ⟨σ⟩}, C_1) = subsetv(ǫ, {ǫ}, C_1) = true. In the same way, we obtain subsetv(⟨b⟩, {⟨θ⟩, ⟨σ⟩}, C_1) = true.

subsetv(⟨h(ω, a)⟩, {⟨θ⟩, ⟨σ⟩}, C_1)
= subsetv(⟨ω, a⟩, {⟨θ, a⟩, ⟨σ, b⟩}, C_1) by 5th def. of subsetv
= subsetv(⟨a⟩, {⟨a⟩, ⟨b⟩}, C_1) by 3rd def.
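The witness term of Example 3 can be checked mechanically. The sketch below uses the same illustrative tuple encoding as before (ASCII names alpha, beta, theta, sigma, omega for the type symbols; a rule argument that is a key of RULES is a type symbol, anything else is a constant) and confirms t ∈ [[α]]_G and t ∉ [[β]]_G.

```python
# Grammar of Example 3, in the hypothetical tuple encoding.
RULES = {
    "alpha": [("g", "omega")],
    "beta": [("g", "theta"), ("g", "sigma")],
    "theta": [("a",), ("h", "theta", "a")],
    "sigma": [("b",), ("h", "sigma", "b")],
    "omega": [("a",), ("b",), ("h", "omega", "a"), ("h", "omega", "b")],
}

def derives(sym, term):
    """True iff the ground term is in [[sym]]_G."""
    for f, *args in RULES[sym]:
        if term[0] == f and len(term) - 1 == len(args):
            if all(derives(a, t) if a in RULES else t == (a,)
                   for a, t in zip(args, term[1:])):
                return True
    return False

t = ("g", ("h", ("h", ("a",), ("b",)), ("a",)))  # g(h(h(a, b), a))
print(derives("alpha", t), derives("beta", t))  # True False
```

The term's leaves mix a's and b's, so it is in [[ω]]_G (left-skewed) but in neither [[θ]]_G nor [[σ]]_G, which is exactly the dependency the algorithm's memoisation loses.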
of subsetv
= subsetv(ǫ, {ǫ}, C_1) by 5th def. of subsetv
= true by 2nd def. of subsetv

We can show subsetv(⟨h(ω, b)⟩, {⟨θ⟩, ⟨σ⟩}, C_1) = true in the same way as above. Therefore, by equation 1, subset(α, β) = true, and subset is incorrect for regular types.

The problem with the algorithm stems from the way the set C is used in the third definition of subsetv. As the above example indicates, the third definition of subsetv severs the dependency between the terms in a tuple, i.e., the subterms of a term. In [8], Dart and Zobel show by an example that their algorithm works for some regular types which are not tuple distributive. We don't know what is the largest subclass of the class of regular types for which the algorithm is correct.

Completeness

We now prove that the Dart-Zobel algorithm is complete for regular types in the sense that subset(τ_1, τ_2) = true whenever [[τ_1]]_G ⊆ [[τ_2]]_G. Let C be a set of pairs ⟨β, Υ⟩ with β ∈ Π and Υ ⊆ T(Σ ∪ Π). A pair ⟨β, Υ⟩ in C states that the denotation of β is included in that of Υ, i.e., [[β]]_G ⊆ [[Υ]]_G for regular types. Define

Γ_C,G def= ∧_{⟨β,Υ⟩∈C} [[β]]_G ⊆ [[Υ]]_G

The completeness of subset follows from the following theorem, which asserts the completeness of subsetv.

Theorem 1. Let ψ be a sequence of pure type terms, Ψ a set of sequences of pure type terms of the same length as ψ, and C a set of pairs ⟨β, Υ⟩ with β ∈ Π and Υ ⊆ T(Σ ∪ Π). If Γ_C,G |= [[ψ]]_G ⊆ [[Ψ]]_G then subsetv(ψ, Ψ, C) = true.

Proof. Assume subsetv(ψ, Ψ, C) = false. The proof is done by showing Γ_C,G ⊭ [[ψ]]_G ⊆ [[Ψ]]_G. This is accomplished by induction on ⟨dp(ψ, Ψ, C), lg(ψ)⟩, where lg(ψ) is the length of ψ and dp(ψ, Ψ, C) is the depth of the computation tree for subsetv(ψ, Ψ, C). Define ⟨k, l⟩ < ⟨k′, l′⟩ def= (k < k′) ∨ ((k = k′) ∧ (l < l′)).

Basis. dp(ψ, Ψ, C) = 0 and lg(ψ) = 0. ψ = ǫ and Ψ = ∅ since subsetv(ψ, Ψ, C) = false. Let t = ǫ. t ∈ [[ψ]]_G and t ∉ [[Ψ]]_G. So, Γ_C,G ⊭ [[ψ]]_G ⊆ [[Ψ]]_G.
Induction. dp(ψ, Ψ, C) ≠ 0 or lg(ψ) ≠ 0. By the definition of subsetv, (a) Ψ = ∅; or (b) subsetv(tail(ψ), tails(Ψ), C) = false and there is Υ ⊆ T(Σ ∪ Π) such that (⟨head(ψ), Υ⟩ ∈ C) ∧ (heads(Ψ) ⊇ Υ); or (c) head(ψ) ∈ Π and ∃ψ′ ∈ expand(ψ). subsetv(ψ′, Ψ, C′) = false, where C′ = C ∪ {⟨head(ψ), heads(Ψ)⟩}; or (d) head(ψ) = f(τ_1, ···, τ_n) and subsetv(ψ′, Ψ′, C) = false, where ψ′ = open(ψ) and Ψ′ = opens(selects(head(ψ), expands(Ψ))). It remains to prove that Γ_C,G ⊭ [[ψ]]_G ⊆ [[Ψ]]_G in each of the cases (a)-(d).

The case (a) is trivial, as G is simplified and hence [[ψ]]_G ≠ ∅.

In the case (b), we have dp(tail(ψ), tails(Ψ), C) ≤ dp(ψ, Ψ, C) and lg(tail(ψ)) < lg(ψ). By the induction hypothesis, Γ_C,G ⊭ [[tail(ψ)]]_G ⊆ [[tails(Ψ)]]_G. Thus, if Γ_C,G holds, there is t′ such that t′ ∈ [[tail(ψ)]]_G and t′ ∉ [[tails(Ψ)]]_G. Let s ∈ [[head(ψ)]]_G and t = ⟨s⟩ + t′. Note that s exists, as G is simplified. We have t ∈ [[ψ]]_G and t ∉ [[Ψ]]_G. So, Γ_C,G ⊭ [[ψ]]_G ⊆ [[Ψ]]_G.

In the case (c), dp(ψ′, Ψ, C′) < dp(ψ, Ψ, C). By the induction hypothesis, Γ_C′,G ⊭ [[ψ′]]_G ⊆ [[Ψ]]_G. Note that Γ_C′,G = Γ_C,G ∧ ([[head(ψ)]]_G ⊆ [[heads(Ψ)]]_G). Assume Γ_C,G holds. Then either (i) [[ψ′]]_G ⊈ [[Ψ]]_G or (ii) [[head(ψ)]]_G ⊈ [[heads(Ψ)]]_G. In the case (i), there is t′ with (t′ ∈ [[ψ′]]_G) ∧ (t′ ∉ [[Ψ]]_G). By proposition 5.26 in [8], we have Γ_C,G ⊭ [[ψ]]_G ⊆ [[Ψ]]_G. In the case (ii), there is t with (t ∈ [[head(ψ)]]_G) ∧ (t ∉ [[heads(Ψ)]]_G). Let t′ ∈ [[tail(ψ)]]_G and u = ⟨t⟩ + t′. Note that t′ exists, as G is simplified. We have u ∈ [[ψ]]_G ∧ u ∉ [[Ψ]]_G. So, Γ_C,G ⊭ [[ψ]]_G ⊆ [[Ψ]]_G in the case (c).

In the case (d), we have ψ′ = τ_1 ··· τ_n + tail(ψ) and dp(ψ′, Ψ′, C) < dp(ψ, Ψ, C). By the induction hypothesis, Γ_C,G ⊭ [[ψ′]]_G ⊆ [[Ψ′]]_G.
Thus, if Γ_C,G holds, there are t_1 and t_2 such that (lg(t_1) = n) ∧ ((t_1 + t_2) ∈ [[ψ′]]_G) ∧ ((t_1 + t_2) ∉ [[Ψ′]]_G), which implies ((⟨f(t_1)⟩ + t_2) ∈ [[ψ]]_G) ∧ ((⟨f(t_1)⟩ + t_2) ∉ [[Ψ]]_G). So, Γ_C,G ⊭ [[ψ]]_G ⊆ [[Ψ]]_G. □

The completeness of subset is a corollary of the above theorem.

Corollary 2. Let τ_1 and τ_2 be pure type terms. If [[τ_1]]_G ⊆ [[τ_2]]_G then subset(τ_1, τ_2) = true.

Proof. subset(τ_1, τ_2) = subsetv(⟨τ_1⟩, {⟨τ_2⟩}, ∅) by the definition of subset. We have Γ_∅,G |= [[⟨τ_1⟩]]_G ⊆ [[{⟨τ_2⟩}]]_G since [[τ_1]]_G ⊆ [[τ_2]]_G. The corollary now follows from the above theorem, as Γ_∅,G = true. □

Tuple Distributive Regular Types

Most type languages in logic programming use tuple distributive closures of regular term languages as types [26,22,29,15,19,25,27,11,28,17,12,20,5,21]. The notion of tuple distributivity is due to Mishra [22]. The following definition of tuple distributivity is due to Heintze and Jaffar [15]. Each function symbol f of arity n is associated with n projection operators f⁻¹_(1), f⁻¹_(2), ···, f⁻¹_(n). Let S be a set of ground terms in T(Σ). f⁻¹_(i) is defined as follows:

f⁻¹_(i)(S) def= {t_i | f(t_1, ···, t_i, ···, t_n) ∈ S}

The tuple distributive closure of S is

S⋆ def= {c | c ∈ S ∧ c ∈ Σ_0} ∪ {f(t_1, ···, t_n) | t_i ∈ (f⁻¹_(i)(S))⋆}

where Σ_0 is the set of constants in Σ. The following proposition results from the fact that (·)⋆ is a closure operator and preserves set inclusion, i.e., S_1 ⊆ S_2 implies S⋆_1 ⊆ S⋆_2.

Proposition 3. Let S_1, S_2 ⊆ T(Σ). (S_1 ∪ S_2)⋆ = (S⋆_1 ∪ S⋆_2)⋆. □

The tuple distributive regular type ⟨⟨τ⟩⟩_G associated with a pure type term τ is the tuple distributive closure of the regular type [[τ]]_G associated with τ [22]:

⟨⟨τ⟩⟩_G def= [[τ]]⋆_G

Let ψ be a sequence of pure type terms and Ψ be a set of sequences of pure type terms of the same length.
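The closure S⋆ can be computed directly for a finite set of ground terms. The sketch below (hypothetical, in the same tuple encoding as the earlier snippets) shows how the closure severs the dependency between argument positions: from {f(a, b), f(b, a)} it also generates f(a, a) and f(b, b).

```python
from itertools import product

def closure(S, sigma0):
    """Tuple distributive closure of a finite set of ground terms."""
    out = {t for t in S if len(t) == 1 and t[0] in sigma0}  # constants of S
    for f, n in {(t[0], len(t) - 1) for t in S if len(t) > 1}:
        # project each argument position, close it, then recombine freely
        projs = [closure({t[i + 1] for t in S if (t[0], len(t) - 1) == (f, n)},
                         sigma0)
                 for i in range(n)]
        out |= {(f, *args) for args in product(*projs)}
    return out

S = {("f", ("a",), ("b",)), ("f", ("b",), ("a",))}
print(sorted(closure(S, {"a", "b"})))  # f(a,a), f(a,b), f(b,a), f(b,b)
```

The recursion terminates for finite S because each projection strictly decreases term depth.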
⟨⟨ǫ⟩⟩_G def= {ǫ}
⟨⟨ψ⟩⟩_G def= {⟨t⟩ + t′ | t ∈ ⟨⟨head(ψ)⟩⟩_G ∧ t′ ∈ ⟨⟨tail(ψ)⟩⟩_G}
⟨⟨Ψ⟩⟩_G def= (∪_{ψ∈Ψ} ⟨⟨head(ψ)⟩⟩_G)⋆ × ⟨⟨tails(Ψ)⟩⟩_G

The definition of ⟨⟨Ψ⟩⟩_G makes use of tuple distributivity and hence severs the inter-dependency between the components of a sequence of terms.

Correctness

We now prove that the Dart-Zobel algorithm is correct for tuple distributive regular types, in the sense that if subset(τ_1, τ_2) = true then ⟨⟨τ_1⟩⟩_G ⊆ ⟨⟨τ_2⟩⟩_G. Let C be a set of pairs ⟨β, Υ⟩ with β ∈ Π and Υ ⊆ T(Σ ∪ Π). A pair ⟨β, Υ⟩ in C represents ⟨⟨β⟩⟩_G ⊆ ⟨⟨Υ⟩⟩_G for tuple distributive regular types. Define

Φ_C,G def= ∧_{⟨β,Υ⟩∈C} ⟨⟨β⟩⟩_G ⊆ ⟨⟨Υ⟩⟩_G

The correctness of subset follows from the following theorem, which asserts the correctness of subsetv for tuple distributive regular types.

Theorem 4. Let ψ be a sequence of pure type terms, Ψ a set of sequences of pure type terms of the same length as ψ, and C a set of pairs ⟨β, Υ⟩ with β ∈ Π and Υ ⊆ T(Σ ∪ Π). If subsetv(ψ, Ψ, C) = true then Φ_C,G |= ⟨⟨ψ⟩⟩_G ⊆ ⟨⟨Ψ⟩⟩_G.

Proof. Assume subsetv(ψ, Ψ, C) = true. The proof is done by induction on ⟨dp(ψ, Ψ, C), lg(ψ)⟩.

Basis. dp(ψ, Ψ, C) = 0 and lg(ψ) = 0. ψ = ǫ and Ψ ≠ ∅ by the second definition of subsetv. So, Ψ = {ǫ} and Φ_C,G |= ⟨⟨ψ⟩⟩_G ⊆ ⟨⟨Ψ⟩⟩_G.

Induction. dp(ψ, Ψ, C) ≠ 0 or lg(ψ) ≠ 0. By the definition of subsetv, (a) subsetv(tail(ψ), tails(Ψ), C) = true and there is Υ ⊆ T(Σ ∪ Π) such that (⟨head(ψ), Υ⟩ ∈ C) ∧ (heads(Ψ) ⊇ Υ); or (b) head(ψ) ∈ Π and ∀ψ′ ∈ expand(ψ). subsetv(ψ′, Ψ, C′) = true, where C′ = C ∪ {⟨head(ψ), heads(Ψ)⟩}; or (c) head(ψ) = f(τ_1, ···, τ_n) and subsetv(ψ′, Ψ′, C) = true, where ψ′ = open(ψ) and Ψ′ = opens(selects(head(ψ), expands(Ψ))). It remains to prove that Φ_C,G |= ⟨⟨ψ⟩⟩_G ⊆ ⟨⟨Ψ⟩⟩_G in each of the cases (a)-(c).

In the case (a), we have dp(tail(ψ), tails(Ψ), C) ≤ dp(ψ, Ψ, C) and lg(tail(ψ)) < lg(ψ). By the induction hypothesis, Φ_C,G |= ⟨⟨tail(ψ)⟩⟩_G ⊆ ⟨⟨tails(Ψ)⟩⟩_G. (⟨head(ψ), Υ⟩ ∈ C) and (heads(Ψ) ⊇ Υ) imply Φ_C,G |= ⟨⟨head(ψ)⟩⟩_G ⊆ ⟨⟨heads(Ψ)⟩⟩_G.
Thus, Φ_C,G |= ⟨⟨ψ⟩⟩_G ⊆ ⟨⟨Ψ⟩⟩_G by the definitions of ⟨⟨ψ⟩⟩_G and ⟨⟨Ψ⟩⟩_G. Note that tuple distributivity is used in the definition of ⟨⟨Ψ⟩⟩_G.

In the case (b), dp(ψ′, Ψ, C′) < dp(ψ, Ψ, C). By the induction hypothesis, Φ_C′,G |= ⟨⟨ψ′⟩⟩_G ⊆ ⟨⟨Ψ⟩⟩_G. Note that Φ_C′,G = Φ_C,G ∧ (⟨⟨head(ψ)⟩⟩_G ⊆ ⟨⟨heads(Ψ)⟩⟩_G). So, Φ_C,G |= ⟨⟨ψ′⟩⟩_G ⊆ ⟨⟨Ψ⟩⟩_G ∨ ¬(⟨⟨head(ψ)⟩⟩_G ⊆ ⟨⟨heads(Ψ)⟩⟩_G). Moreover, subsetv(⟨head(ψ)⟩, heads(Ψ), C) = true since subsetv(ψ, Ψ, C) = true. We have Φ_C,G |= ⟨⟨head(ψ)⟩⟩_G ⊆ ⟨⟨heads(Ψ)⟩⟩_G by the induction hypothesis, since dp(head(ψ), heads(Ψ), C) < dp(ψ, Ψ, C). So, Φ_C,G |= ⟨⟨ψ′⟩⟩_G ⊆ ⟨⟨Ψ⟩⟩_G for every ψ′ ∈ expand(ψ). Since (·)⋆ is a closure operator, Φ_C,G |= ⟨⟨{ψ′ | ψ′ ∈ expand(ψ)}⟩⟩_G ⊆ ⟨⟨Ψ⟩⟩_G, and hence Φ_C,G |= ⟨⟨ψ⟩⟩_G ⊆ ⟨⟨Ψ⟩⟩_G.

In the case (c), we have ψ′ = τ_1 ··· τ_n + tail(ψ) and dp(ψ′, Ψ′, C) < dp(ψ, Ψ, C). By the induction hypothesis, Φ_C,G |= ⟨⟨ψ′⟩⟩_G ⊆ ⟨⟨Ψ′⟩⟩_G. By proposition 5.29 in [8], Φ_C,G |= ⟨⟨ψ⟩⟩_G ⊆ ⟨⟨Ψ⟩⟩_G. This completes the proof of the theorem. □

The correctness of subset is a corollary of the above theorem.

Corollary 5. Let τ_1 and τ_2 be pure type terms. If subset(τ_1, τ_2) = true then ⟨⟨τ_1⟩⟩_G ⊆ ⟨⟨τ_2⟩⟩_G.

Proof. Let subset(τ_1, τ_2) = true. subsetv(⟨τ_1⟩, {⟨τ_2⟩}, ∅) = true by the definition of subset. Thus, Φ_∅,G |= ⟨⟨⟨τ_1⟩⟩⟩_G ⊆ ⟨⟨{⟨τ_2⟩}⟩⟩_G according to the above theorem. So, ⟨⟨τ_1⟩⟩_G ⊆ ⟨⟨τ_2⟩⟩_G, as Φ_∅,G = true. □

Completeness

This section presents the completeness of the Dart-Zobel algorithm for tuple distributive regular types. The following theorem is the counterpart of theorem 1.

Theorem 6. Let ψ be a sequence of pure type terms, Ψ a set of sequences of pure type terms of the same length as ψ, and C a set of pairs ⟨β, Υ⟩ with β ∈ Π and Υ ⊆ T(Σ ∪ Π). If Φ_C,G |= ⟨⟨ψ⟩⟩_G ⊆ ⟨⟨Ψ⟩⟩_G then subsetv(ψ, Ψ, C) = true.

Proof. The proof can be obtained from that for theorem 1 by simply replacing Γ_·,· with Φ_·,· and [[·]]_G with ⟨⟨·⟩⟩_G. □

The following completeness result of the Dart-Zobel algorithm for tuple distributive regular types follows from the above theorem.

Corollary 7. Let τ_1 and τ_2 be pure type terms.
If ⟨⟨τ_1⟩⟩_G ⊆ ⟨⟨τ_2⟩⟩_G then subset(τ_1, τ_2) = true.

Proof. The proof can be obtained from that for corollary 2 by simply replacing Γ_·,· with Φ_·,·, [[·]]_G with ⟨⟨·⟩⟩_G, and theorem 1 with theorem 6. □

A Simplified Algorithm

We have now shown that the Dart-Zobel algorithm is complete and correct for tuple distributive regular types but not correct for general regular types. It is therefore desirable to specialise the algorithm, which was originally proposed for general regular types, to tuple distributive regular types. The following is a simplified version of the algorithm for tuple distributive regular types.

subset′(τ_1, τ_2) def= subset′(τ_1, {τ_2}, ∅)

subset′(τ, Υ, C) def=
- false, if Υ = ∅;
- true, if (⟨τ, Υ′⟩ ∈ C) ∧ (Υ ⊇ Υ′);
- ∀τ′ ∈ expand′(τ). subset′(τ′, Υ, C ∪ {⟨τ, Υ⟩}), if τ ∈ Π;
- subsetv′(τ_1 ··· τ_n, {σ_1 ··· σ_n | f(σ_1, ···, σ_n) ∈ expands′(Υ)}, C), if τ = f(τ_1, ···, τ_n).

subsetv′(ǫ, {ǫ}, C) def= true
subsetv′(ψ, Ψ, C) def= subset′(head(ψ), heads(Ψ), C) ∧ subsetv′(tail(ψ), tails(Ψ), C)

expand′(τ) def= {τ} if τ ∉ Π, and expand′(τ) def= {σ | (τ → σ) ∈ ∆} if τ ∈ Π
expands′(Υ) def= ∪_{τ∈Υ} expand′(τ)

While the Dart-Zobel algorithm mainly deals with sequences of pure type terms, the simplified algorithm primarily deals with pure type terms, breaking a sequence of pure type terms into its component pure type terms. This is allowed because tuple distributive regular types abstract away the inter-dependency between the component terms in a sequence of ground terms. We forgo presenting the correctness and the completeness of the simplified algorithm because they can be proved by emulating the proofs for theorems 1 and 4.

Conclusion

We have provided answers to open questions about the correctness and the completeness of the Dart-Zobel algorithm for testing the inclusion of one regular type in another. The algorithm is complete but incorrect for general regular types. It is both complete and correct for tuple distributive regular types.
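The simplified algorithm transcribes almost directly into Python. The sketch below is a restructured, hypothetical rendition (same tuple encoding as the earlier snippets; the memoisation pairs ⟨τ, Υ⟩ are kept in a frozenset C) and reports, for example, that a type Pos of positive naturals is included in Nat but not conversely.

```python
# Assumed simplified grammar: Nat as in Example 1, plus Pos -> s(Nat).
RULES = {"Nat": [("0",), ("s", "Nat")], "Pos": [("s", "Nat")]}

def expand1(tau):
    # expand': a nonterminal is replaced by its rule bodies, a term is kept
    return RULES[tau] if tau in RULES else [tau]

def subset_td(tau, Upsilon, C=frozenset()):
    """subset'(tau, Upsilon, C) for tuple distributive regular types."""
    if not Upsilon:
        return False
    if any(sym == tau and Upsilon >= U for sym, U in C):
        return True  # an inclusion already assumed on the stack
    if tau in RULES:  # nonterminal: compare each rule body against Upsilon
        C2 = C | {(tau, frozenset(Upsilon))}
        return all(subset_td(body, Upsilon, C2) for body in RULES[tau])
    f, *args = tau  # term f(t1,...,tn): select matching expansions of Upsilon
    cands = [s for u in Upsilon for s in expand1(u)
             if s[0] == f and len(s) - 1 == len(args)]
    if not cands:
        return False
    # componentwise comparison; tuple distributivity makes this sound
    return all(subset_td(a, {s[i + 1] for s in cands}, C)
               for i, a in enumerate(args))

print(subset_td("Pos", {"Nat"}), subset_td("Nat", {"Pos"}))  # True False
```

Note how the argument positions are compared independently, which is exactly the abstraction tuple distributive types make.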
It is our hope that the results presented in this paper will help identify the applicability of the Dart-Zobel algorithm. We have also provided a simplified version of the Dart-Zobel algorithm for tuple distributive regular types.

References

1. A. Aiken and T.K. Lakshman. Directional type checking of logic programs. In B. Le Charlier, editor, Proceedings of the First International Static Analysis Symposium, pages 43-60. Springer-Verlag, 1994.
2. A. Aiken and E. Wimmers. Solving systems of set constraints. In Proceedings of the Seventh IEEE Symposium on Logic in Computer Science, pages 329-340. The IEEE Computer Society Press, 1992.
3. C. Beierle. Type inferencing for polymorphic order-sorted logic programs. In L. Sterling, editor, Proceedings of the Twelfth International Conference on Logic Programming, pages 765-779. The MIT Press, 1995.
4. L. Cardelli and P. Wegner. On understanding types, data abstraction, and polymorphism. ACM Computing Surveys, 17(4):471-522, 1985.
5. M. Codish and V. Lagoon. Type dependencies for logic programs using ACI-unification. In Proceedings of the 1996 Israeli Symposium on Theory of Computing and Systems, pages 136-145. IEEE Press, June 1996.
6. H. Comon, M. Dauchet, R. Gilleron, D. Lugiez, S. Tison, and M. Tommasi. Tree Automata Techniques and Applications. Draft, 1998.
7. P.W. Dart and J. Zobel. Efficient run-time type checking of typed logic programs. Journal of Logic Programming, 14(1-2):31-69, 1992.
8. P.W. Dart and J. Zobel. A regular type language for logic programs. In Frank Pfenning, editor, Types in Logic Programming, pages 157-189. The MIT Press, 1992.
9. S.K. Debray, P. López-Garcia, and M. Hermenegildo. Non-failure analysis for logic programs. In Lee Naish, editor, Logic Programming: Proceedings of the Fourteenth International Conference on Logic Programming, pages 48-62. The MIT Press, July 1997.
10. P. Devienne, J-M. Talbot, and S. Tison. Co-definite set constraints with membership expressions. In J. Jaffar, editor, Proceedings of the 1998 Joint Conference and Symposium on Logic Programming, pages 25-39. The MIT Press, 1998.
11. T. Fruhwirth, E. Shapiro, M.Y. Vardi, and E. Yardeni. Logic programs as types for logic programs. In Proceedings of the Sixth Annual IEEE Symposium on Logic in Computer Science, pages 300-309. The IEEE Computer Society Press, 1991.
12. J.P. Gallagher and D.A. de Waal. Fast and precise regular approximations of logic programs. In M. Bruynooghe, editor, Proceedings of the Eleventh International Conference on Logic Programming, pages 599-613. The MIT Press, 1994.
13. F. Gécseg and M. Steinby. Tree Automata. Akadémiai Kiadó, 1984.
14. M. Hanus. Horn clause programs with polymorphic types: semantics and resolution. Theoretical Computer Science, 89(1):63-106, 1991.
15. N. Heintze and J. Jaffar. A finite presentation theorem for approximating logic programs. In Proceedings of the Seventeenth Annual ACM Symposium on Principles of Programming Languages, pages 197-209. The ACM Press, 1990.
16. N. Heintze and J. Jaffar. A decision procedure for a class of set constraints. Technical Report CMU-CS-91-110, Carnegie Mellon University, February 1991. (Later version of a paper in Proceedings of the Fifth IEEE Symposium on Logic in Computer Science.)
17. N. Heintze and J. Jaffar. Semantic types for logic programs. In Frank Pfenning, editor, Types in Logic Programming, pages 141-155. The MIT Press, 1992.
18. N. Heintze and J. Jaffar. Set constraints and set-based analysis. In Alan Borning, editor, Principles and Practice of Constraint Programming, volume 874 of Lecture Notes in Computer Science. Springer, May 1994. (PPCP'94: Second International Workshop, Orcas Island, Seattle, USA.)
19. D. Jacobs. Type declarations as subtype constraints in logic programming. SIGPLAN Notices, 25(6):165-173, 1990.
20. L. Lu. Type analysis of logic programs in the presence of type definitions. In Proceedings of the 1995 ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation, pages 241-252. The ACM Press, 1995.
21. L. Lu. A polymorphic type analysis in logic programs by abstract interpretation. Journal of Logic Programming, 36(1):1-54, 1998.
22. P. Mishra. Towards a theory of types in Prolog. In Proceedings of the IEEE International Symposium on Logic Programming, pages 289-298. The IEEE Computer Society Press, 1984.
23. A. Mycroft and R.A. O'Keefe. A polymorphic type system for Prolog. Artificial Intelligence, 23:295-307, 1984.
24. Frank Pfenning, editor. Types in Logic Programming. The MIT Press, Cambridge, Massachusetts, 1992.
25. U.S. Reddy. Types for logic programs. In S. Debray and M. Hermenegildo, editors, Logic Programming: Proceedings of the 1990 North American Conference, pages 836-840. The MIT Press, 1990.
26. M. Soloman. Type definitions with parameters. In Conference Record of the Fifth ACM Symposium on Principles of Programming Languages, pages 31-38, 1978.
27. E. Yardeni, T. Fruehwirth, and E. Shapiro. Polymorphically typed logic programs. In K. Furukawa, editor, Logic Programming: Proceedings of the Eighth International Conference, pages 379-393. The MIT Press, 1991.
28. E. Yardeni and E. Shapiro. A type system for logic programs. Journal of Logic Programming, 10(2):125-153, 1991.
29. J. Zobel. Derivation of polymorphic types for Prolog programs. In J.-L. Lassez, editor, Logic Programming: Proceedings of the Fourth International Conference, pages 817-838. The MIT Press, 1987.
On Dart-Zobel Algorithm for Testing Regular Type Inclusion. Lunjin Lu and John G. Cleary, Department of Computer Science, University of Waikato, Hamilton, New Zealand. arXiv:cs/9810001.
Tree-body loss of of trapped ultracold 87 Rb atoms due to a Feshbach resonance 6 Feb 2003 V A Yurovsky School of Chemistry Tel Aviv University 69978Tel AvivIsrael A Ben-Reuven School of Chemistry Tel Aviv University 69978Tel AvivIsrael Tree-body loss of of trapped ultracold 87 Rb atoms due to a Feshbach resonance 6 Feb 2003(Dated: November 4, 2018)arXiv:physics/0211002v2 [physics.atom-ph] The loss of ultracold trapped atoms in the vicinity of a Feshbach resonance is treated as a twostage reaction, using the Breit-Wigner theory. The first stage is the formation of a resonant diatomic molecule, and the second one is its deactivation by inelastic collisions with other atoms. This model is applied to the analysis of recent experiments on 87 Rb, leading to an estimated value of 7 × 10 −11 cm 3 /s for the deactivation rate coefficient. 32.80.Pj, 03.75.Fi The phenomenon of Feshbach resonance has received recently an increased attention due to its application to Bose-Einstein condensation (BEC) (see Ref.[1] and references therein). Its most outstanding effect is a drastic change of the elastic scattering length as the collision energy of an atomic pair approaches the energy of a bound level belonging to another electronic or hyperfine state. The resonance can be tuned by applying an external magnetic field, as has been proposed in Ref.[2]in order to control the BEC properties. Applications include a controlled BEC collapse [3] and bright solitons in BEC[4,5], as well as a formation of molecular BEC[1,6,7,8,9], an atom-molecule coherent superposition[10,11], and an entangled atomic gas[9].Another effect of the resonance is the abrupt increase in atom loss due to inelastic collisions of the resonant molecules[1,6,8,12], and to the formation of noncondensed atoms[7,8,13]. The determination of the loss parameters is important for an appreciation of the outcome of applications of Feshbach resonances. 
We present here an estimate of the rate coefficient for the deactivation of vibrationally excited resonant 87Rb2 molecules by collisions with other Rb atoms, based on the results of recent experiments [14]. The theory presented in Refs. [1,6,8,12], based on coupled Gross-Pitaevskii equations for atomic and molecular condensates, cannot be applied to the analysis of these experiments, which involve a non-condensed thermal gas. The approach used here is based on the Breit-Wigner theory of resonant multichannel collisions (see, e.g., Ref. [15]), as proposed for the system under consideration in Ref. [7]. The reaction involving the excited resonant molecule Rb2(m) includes a reversible input channel of formation from (and dissociation to) a pair of colliding atoms,

Rb + Rb ⇄ Rb2(m),   (1)

and irreversible output channels of exoergic collisions with a third atom,

Rb2(m) + Rb → Rb2(d) + Rb,   (2)

bringing the molecule down to one of the lower-lying rovibrational levels of the same spin state, or to levels belonging to other spin states. (An alternative approach, presented in Refs. [16,17], treats the whole process as a one-stage recombination by a three-body collision.) Let us consider all atoms, for the time being, as distinguishable particles. According to the standard theory (see Ref.
[15]), the natural resonance width Γ_e associated with channel (1) is two times smaller than the corresponding width for the case of indistinguishable atoms presented in Ref. [7] (see also Refs. [1,2]). It exhibits a Wigner threshold dependence of the form

Γ_e = 2|a_a µ ∆| p / ℏ²,   (3)

where a_a is the non-resonant (background) elastic scattering length, µ is the difference of the magnetic momenta of the atomic pair and the Rb2(m) molecule, ∆ is the phenomenological resonance strength (see Refs. [1,2]), and p is the relative momentum of the colliding atoms. These parameters also describe the variation of the elastic scattering length a_res as a function of the external magnetic field B in the vicinity of the resonance at B = B_0 (see Refs. [1,2]),

a_res = a_a [1 − ∆/(B − B_0)].   (4)

The total width Γ_d associated with the deactivation channel (2) can be expressed in terms of a two-body rate coefficient k_d as

Γ_d = k_d n,   (5)

proportional to the atomic density n. The rate coefficient k_d includes the contributions of all the output deactivation channels (d) of Eq. (2). The Breit-Wigner theory leads to the following expression for the cross section of resonance-enhanced three-body recombination (see Ref. [7]):

σ = (πℏ²/p²) Γ_e Γ_d / [µ²(B − B_0)²/ℏ² + (Γ_e + Γ_d)²/4].   (6)

This expression does not take into account the indistinguishability of the three participating atoms, in which case the cross section should be σ_ind = 3!σ (see Ref. [15]). The resonant molecular state Rb2(m) can be formed whenever the detuning from the resonance is comparable to or less than Γ_e. This state decays, producing atoms with a kinetic energy spectrum of width Γ_e. Under the conditions of the experiments [14] (a_a ≈ 98.96 atomic units, µ ≈ 2.8 Bohr magnetons, ∆ ≈ 0.17 G for the strongest resonance at 1007.34 G in 87Rb, and a collision energy of p²/m ≈ k_B × 2 µK), the width calculated with Eq. (3) is given by ℏΓ_e/k_B ≈ 7 µK, where k_B is the Boltzmann constant.
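As a quick numerical cross-check (ours, not the paper's), Eq. (3), read as Γ_e = 2|a_a µ ∆| p/ℏ² with ℏ restored, can be evaluated for the quoted parameters. The CODATA constant values below are assumptions added for the illustration, not taken from the paper:

```python
import math

# Physical constants (SI, CODATA values; assumed, not from the paper)
hbar = 1.054571817e-34    # J s
k_B  = 1.380649e-23       # J / K
a_0  = 5.29177210903e-11  # Bohr radius, m
mu_B = 9.2740100783e-24   # Bohr magneton, J / T
u    = 1.66053906660e-27  # atomic mass unit, kg

# Parameters of the 1007.34 G resonance in 87Rb quoted in the text
m     = 87 * u        # 87Rb mass
a_a   = 98.96 * a_0   # background scattering length
mu_co = 2.8 * mu_B    # magnetic-moment difference
Delta = 0.17e-4       # resonance strength, 0.17 G in tesla

def gamma_e(p):
    """Natural resonance width (rad/s), Eq. (3): Gamma_e = 2|a_a mu Delta| p / hbar^2."""
    return 2.0 * abs(a_a * mu_co * Delta) * p / hbar**2

# Relative momentum for a collision energy p^2/m = k_B * 2 microK
p = math.sqrt(m * k_B * 2e-6)
T_e = hbar * gamma_e(p) / k_B  # width expressed as a temperature
print(f"hbar*Gamma_e/k_B = {T_e * 1e6:.1f} microK")
```

With these inputs the width comes out in the few-µK range, of the same order as the quoted estimate of ≈ 7 µK.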
Therefore this energy is less than the trap depth of ≈ 20 µK, and a spontaneous dissociation of the resonant molecule (1) cannot lead to a significant loss of trapped atoms (as opposed to the case of a BEC; see Refs. [7,8]). Each deactivation event (2) leads to the simultaneous loss of three atoms. Therefore, the loss rate for the atomic density n(r, t) can be written in the form

ṅ(r, t) = −3 (2p/m) σ_ind n²(r, t) = −K_3 n³(r, t),   (7)

where

K_3 = 36πℏ² k_d |a_a µ| ∆ / { m [µ²(B − B_0)² + ℏ²Γ_e²/4] }   (8)

is the three-body loss rate coefficient. Here the partial inelastic width Γ_d is neglected in the denominator in comparison to Γ_e. Even very close to the resonance, as long as |B − B_0| > 0.1 G, the width Γ_e may as well be neglected, leading to an expression similar to Eq. (9) of Ref. [12] for the loss in a BEC. However, the rate coefficient given by Eq. (8) is six times larger than the corresponding rate for a BEC. This difference, due to the effects of quantum statistics, was predicted for non-resonant three-body recombination in Ref. [18] and observed in experiments [19]. In the case of a BEC the atomic density profile is determined by the repulsive interaction between atoms. This interaction can be neglected whenever its characteristic energy, proportional to the elastic scattering length, is small compared to the kinetic energy of the atoms,

(4πℏ²/m) a_res n ≪ k_B T.   (9)

For the temperature T = 2 µK used in the experiments [14] this condition is obeyed whenever |B − B_0| > 0.01 G. Therefore we can consider the gas as an ideal one, with the equilibrium density profile described by the Boltzmann distribution in the trap potential. The loss rate given by Eq. (7) is density dependent. In the case of an inhomogeneous trapped gas the loss processes modify the equilibrium density profile, leading to an atomic drift which tends to compensate for this deformation. The characteristic time for this compensation can be estimated as the trap period.
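A rough numerical sketch (ours, not the paper's) of the loss model so far: assuming the reconstructed Eq. (8) with ℏ restored, the coefficient K_3(B) and the decay of a homogeneous gas, n(t) = n_0/√(1 + 2K_3 n_0² t) (the closed-form solution of Eq. (7) at fixed K_3), can be evaluated. The peak density n_0 below is an assumed illustrative value:

```python
import math

# CODATA constants (SI; assumed, not from the paper)
hbar = 1.054571817e-34; k_B = 1.380649e-23
a_0 = 5.29177210903e-11; mu_B = 9.2740100783e-24; u = 1.66053906660e-27

m     = 87 * u        # 87Rb mass
a_a   = 98.96 * a_0   # background scattering length
mu_co = 2.8 * mu_B    # magnetic-moment difference
Delta = 0.17e-4       # resonance strength, tesla
k_d   = 0.7e-16       # fitted deactivation coefficient, 0.7e-10 cm^3/s in m^3/s

p = math.sqrt(m * k_B * 2e-6)                           # relative momentum at 2 microK
Gamma_e = 2.0 * abs(a_a * mu_co * Delta) * p / hbar**2  # Eq. (3)

def K3(dB):
    """Three-body loss coefficient of Eq. (8); dB = B - B0 in tesla, result in m^6/s."""
    denom = (mu_co * dB)**2 + (hbar * Gamma_e)**2 / 4.0
    return 36.0 * math.pi * hbar**2 * k_d * abs(a_a * mu_co) * Delta / (m * denom)

def n_homogeneous(t, n0, dB):
    """Closed-form solution of dn/dt = -K3 n^3 for a homogeneous gas."""
    return n0 / math.sqrt(1.0 + 2.0 * K3(dB) * n0**2 * t)

n0 = 1.5e20  # illustrative peak density, m^-3 (assumed, not from the paper)
for dB_G in (0.1, 0.2, 0.5, 1.0):
    frac = n_homogeneous(0.05, n0, dB_G * 1e-4) / n0
    print(f"B-B0 = {dB_G:4.1f} G: K3 = {K3(dB_G * 1e-4) * 1e12:.2e} cm^6/s, "
          f"n(50 ms)/n0 = {frac:.2f}")
```

The loss deepens as |B − B_0| shrinks, reproducing the qualitative shape of the measured loss feature; the quantitative trap-averaged prediction requires the axial-profile treatment that follows.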
In the experiments [14] the magnetic field that brings the system close to resonance has been applied during a time interval of t = 50 ms. This time substantially exceeds the radial trap period (the radial trap frequency is ω_r/2π = 930 Hz), but it is less than the axial trap period (the axial trap frequency is ω_a/2π = 11 Hz). Therefore we can consider the radial density profile as an equilibrium one, described by a Boltzmann distribution, and write the atomic density profile as

n(r, t) = [ν(z, t)/(π b_r²)] exp[−(x² + y²)/b_r²],   (10)

where

ν(z, t) = ∫ dx dy n(r, t)   (11)

is a non-equilibrium axial profile and

b_r = (1/ω_r) √(2k_B T/m)   (12)

is the characteristic radius of the atomic cloud. Neglecting effects of axial atom transport, a kinetic equation for the axial profile can be written in the form

ν̇(z, t) = −K_1D ν³(z, t),   K_1D = K_3/(3π² b_r⁴).   (13)

The solution of Eq. (13) relates the axial profile at time t to the initial one at t = 0 as

ν(z, t) = ν(z, 0) / √(1 + 2K_1D ν²(z, 0) t).   (14)

Let us suppose that at t = 0 the atoms have a Boltzmann distribution with the temperature T, so that

ν(z, 0) = ν_0 exp(−z²/b_a²),   ν_0 = N_0/(√π b_a),   (15)

where

b_a = (1/ω_a) √(2k_B T/m)   (16)

is the characteristic half-length of the atomic cloud and N_0 is the initial number of atoms. In this case, the number of atoms remaining in the trap can be expressed as

N(t) = (2N_0/√π) ∫_0^∞ dζ exp(−ζ²) / √(1 + 2K_1D ν_0² t exp(−2ζ²)),   (17)

where ζ = z/b_a. Equation (17), in combination with Eqs. (3), (8), (13), and (15), allows us to estimate the value of k_d by a fit to the number of remaining atoms measured in Ref. [14] for N_0 = 2.8 × 10^6. The fit produces the optimal value k_d = 0.7 × 10^-10 cm³/s. This value is comparable to corresponding estimates for Na resonances (1.6 × 10^-10 cm³/s in Ref. [8]; 4 × 10^-10 cm³/s and 10^-11 cm³/s in Ref. [20], following the theory of Ref. [1]). The results of calculations for several values of k_d are presented in Fig.
1 in comparison with the experimental results of Ref. [14].

The authors are most grateful to Dr. Stephan Dürr for providing a preprint of Ref. [14] and clarifying details of the experiment.

PACS numbers: 34.50.-s, 32.80.Pj, 03.75.Fi

FIG. 1: Number of remaining atoms as a function of the magnetic field in the vicinity of the 1007 G resonance in 87Rb, calculated with Eq. (17) for three values of the deactivation rate coefficient: k_d = 7 × 10^-11 cm³/s (solid line), 10^-10 cm³/s (long-dashed line), and 5 × 10^-11 cm³/s (short-dashed line). The circles represent the experimental results of A. Marte et al. [14].

[1] E. Timmermans, P. Tommasini, M. Hussein, and A. Kerman, Phys. Rep. 315, 199 (1999).
[2] E. Tiesinga, A. J. Moerdijk, B. J. Verhaar, and H. T. C. Stoof, Phys. Rev. A 46, R1167 (1992); E. Tiesinga, B. J. Verhaar, and H. T. C. Stoof, Phys. Rev. A 47, 4114 (1993); A. J. Moerdijk, B. J. Verhaar, and A. Axelsson, Phys. Rev. A 51, 4852 (1995).
[3] E. A. Donley, N. R. Claussen, S. L. Cornish, J. L. Roberts, E. A. Cornell, and C. E. Wieman, Nature (London) 412, 295 (2001).
[4] K. Strecker, G. Partridge, A. Truscott, and R. Hulet, Nature (London) 417, 150 (2002).
[5] L. Khaykovich, F. Schreck, T. Bourdel, J. Cubizolles, G. Ferrari, L. Carr, Y. Castin, and C. Salomon, Science 296, 1290 (2002).
[6] E. Timmermans, P. Tommasini, R. Côté, M. Hussein, and A. Kerman, Phys. Rev. Lett. 83, 2691 (1999).
[7] F. H. Mies, E. Tiesinga, and P. S. Julienne, Phys. Rev. A 61, 022721 (2000).
[8] V. A. Yurovsky, A. Ben-Reuven, P. S. Julienne, and C. J. Williams, Phys. Rev. A 62, 043605 (2000).
[9] V. A. Yurovsky and A. Ben-Reuven, cond-mat/0205267.
[10] E. A. Donley, N. R. Claussen, S. T. Thompson, and C. E. Wieman, Nature 417, 529 (2002).
[11] S. J. J. M. F. Kokkelmans and M. J. Holland, Phys. Rev. Lett. 89, 180401 (2002).
[12] V. A. Yurovsky, A. Ben-Reuven, P. S. Julienne, and C. J. Williams, Phys. Rev. A 60, R765 (1999).
[13] M. Holland, J. Park, and R. Walser, Phys. Rev. Lett. 86, 1915 (2001).
[14] A. Marte, T. Volz, J. Schuster, S. Dürr, G. Rempe, E. G. M. van Kempen, and B. J. Verhaar, Phys. Rev. Lett. 89, 283202 (2002).
[15] N. F. Mott and H. S. W. Massey, The Theory of Atomic Collisions (Oxford University Press, London, 1965).
[16] B. D. Esry, C. H. Greene, and J. P. Burke, Phys. Rev. Lett. 83, 1751 (1999).
[17] O. I. Kartavtsev and J. H. Macek, Few-Body Systems 31, 249 (2002).
[18] Yu. Kagan, B. V. Svistunov, and G. V. Shlyapnikov, JETP Lett. 42, 209 (1985).
[19] E. A. Burt, R. W. Ghrist, C. J. Myatt, M. J. Holland, E. A. Cornell, and C. E.
Wieman, Phys. Rev. Lett. 79, 337 (1997).
[20] F. A. van Abeelen and B. J. Verhaar, Phys. Rev. Lett. 83, 1550 (1999).
PROJECTIVE GEOMETRIES ARISING FROM ELEKES-SZABÓ PROBLEMS

Martin Bays and Emmanuel Breuillard

We generalise the Elekes-Szabó theorem to arbitrary arity and dimension and characterise the complex algebraic varieties without power saving. The characterisation involves certain algebraic subgroups of commutative algebraic groups endowed with an extra structure arising from a skew field of endomorphisms. We also extend the Erdős-Szemerédi sum-product phenomenon to elliptic curves. Our approach is based on Hrushovski's framework of pseudo-finite dimensions and the abelian group configuration theorem.

arXiv:1806.03422v3 [math.CO] 1 Jun 2021

Introduction

Let V ⊂ C^n be an irreducible algebraic set over C, let N ∈ N, and let X_i ⊂ C with |X_i| ≤ N, i = 1, ..., n. Then it is easy to see that |V ∩ ∏_{i=1}^n X_i| ≤ O_V(N^{dim V}). Indeed, this follows inductively from the observation that there exists an algebraic subset W ⊂ V of lesser dimension and a co-ordinate projection of the complement V \ W → C^{dim V} with fibres of finite size bounded by a constant. Say V admits no power-saving if the exponent dim V is optimal, i.e. if for no ε > 0 do we have a bound |V ∩ ∏_{i=1}^n X_i| ≤ O_{V,ε}(N^{dim V − ε}) as the X_i vary among finite subsets of C of size ≤ N. In an influential paper Elekes and Szabó [ES12] classified the varieties which admit no power-saving in the case n = 3. In order to state their main theorem, we first need the following definition:

Definition 1.1. A generically finite algebraic correspondence between irreducible algebraic varieties V and V′ is a closed irreducible subvariety of the product Γ ⊂ V × V′ such that the projections π_V(Γ) ⊂ V and π_{V′}(Γ) ⊂ V′ are Zariski dense, and dim(Γ) = dim(V) = dim(V′). Suppose W_1, ..., W_n and W′_1, ..., W′_n are irreducible algebraic varieties, and V ⊂ ∏_{i=1}^n W_i and V′ ⊂ ∏_{i=1}^n W′_i are irreducible subvarieties.
Then we say V and V′ are in co-ordinatewise correspondence if there is a generically finite algebraic correspondence Γ ⊂ V × V′ and a permutation σ ∈ Sym(n) such that for each i, the closure of the projection (π_i × π_{σi})(Γ) ⊂ W_i × W′_{σi} is a generically finite algebraic correspondence (between the closure of π_i(V) and the closure of π_{σi}(V′)).

Theorem 1.2 (Elekes-Szabó [ES12]). An irreducible surface V ⊂ C³ admits no power-saving if and only if either
(i) V ⊂ C³ is in co-ordinatewise correspondence with the graph Γ = {(g, h, g + h) : g, h ∈ G} ⊂ G³ of the group operation of a 1-dimensional connected complex algebraic group G,
(ii) or V projects to a curve, i.e. dim(π_{ij}(V)) = 1 for some i ≠ j ∈ {1, 2, 3}.

Date: January 20, 2022.

Here we generalise these results to arbitrary n and V ⊂ C^n.

Definition 1.3. An irreducible algebraic set V ⊂ C^n is special if it is in co-ordinatewise correspondence with a product ∏_i H_i ≤ ∏_i G_i^{n_i} of connected subgroups H_i of powers G_i^{n_i} of 1-dimensional complex algebraic groups, where ∑_i n_i = n.

We prove:

Theorem 1.4. An irreducible algebraic set V ⊂ C^n admits no power-saving if and only if it is special.

The case of Theorem 1.4 with V ⊂ C³ and dim(V) = 2 is precisely Theorem 1.2. Indeed it is easy to verify that V is special if and only if it is either of the form (i) or of the form (ii). The latter occurs exactly when the special subgroup H ≤ G³ can be taken to be a diagonal subgroup {x_i = x_j}, while the curve π_{ij}(V) gives the correspondence. The case V ⊂ C⁴ with dim(V) = 3 is a consequence of the results of [RSdZ18]. A slightly stronger version of the case V ⊂ C^n with dim(V) = n − 1, asking also for some uniformity in the power-saving (cf. Remark 1.14), was conjectured by de Zeeuw in [dZ18, Conjecture 4.3]. The case V ⊂ C⁴ with dim(V) = 2 solves [dZ18, Problem 4.4].

Example 1.5.
V := {(x, y, z, w) ∈ C⁴ : xzw = 1 = yz²w²} is special because it is a subgroup of (C*)⁴, and geometric progressions witness that it admits no power-saving: setting X = {2^k : −M ≤ k ≤ M}, we find |V ∩ X⁴| ≥ Ω(M²) ≥ Ω(|X|²).

Example 1.6. Let E ⊂ P²(C) be an elliptic curve, say defined by {y² = x(x − 1)(x − λ)}. Then taking x co-ordinates yields a surface V ⊂ C³ in co-ordinatewise correspondence with the graph Γ_+ ⊂ E³ of the elliptic curve group law, and arithmetic progressions in E witness that V admits no power-saving. This demonstrates the necessity of taking correspondences in the definition of special. To demonstrate the necessity of taking products, suppose E′ ⊂ P²(C) is another elliptic curve. Then taking x co-ordinates yields a 4-dimensional subvariety W ⊂ C⁶ in co-ordinatewise correspondence with the product Γ_+ × Γ′_+ ⊂ E³ × E′³ of the graphs of the two group laws, and again arithmetic progressions witness that W admits no power-saving. But if E′ is not isogenous to E, then W is not in co-ordinatewise correspondence with a subgroup of a power of a single elliptic curve (see Fact 2.13).

In fact we obtain a more general result, with arbitrary varieties in place of the complex co-ordinates. Again, this generalises the corresponding result of [ES12], who considered the case of a subvariety V of C^d × C^d × C^d of dimension 2d and with dominant projections to pairs of co-ordinates, and showed that V must be in correspondence with the graph of multiplication of some algebraic group G. In [BW16] it was noted that this group must be commutative. Theorem 1.11 below gives a complete classification of the subvarieties without power saving, showing in particular that the groups involved must be commutative. To state the result, we first introduce the following definition.

Definition 1.7. Let W be a complex variety. Let C, τ ∈ N with C ≥ τ.
A finite subset X ⊂ W is in coarse (C, τ)-general position in W if for any proper irreducible complex closed subvariety W′ ⊊ W of complexity at most C, we have |W′ ∩ X| ≤ |X|^{1/τ}. When C = τ we will simply say that X is τ-cgp in W.

The notion of the complexity of a subvariety of a fixed variety is defined in full generality in 2.1.10 below. In the case that W is affine, W′ ⊂ W has complexity at most C if it can be defined as the zero set of polynomials of degree at most C.

Let W_i, i = 1, ..., n, be irreducible complex varieties each of dimension d, and let V ⊂ ∏_{i=1}^n W_i be an irreducible subvariety. Now let C, τ ∈ N and consider finite subsets X_i ⊂ W_i with |X_i| ≤ N^d, N ∈ N, and with each X_i in coarse (C, τ)-general position in W_i. As a straightforward consequence of coarse general position, if τ > d and C is sufficiently large depending on V only, we will see in Lemma 7.1 that we have a trivial bound |V ∩ ∏_{i=1}^n X_i| ≤ O_V(N^{dim(V)}). We say that V ⊂ ∏_i W_i admits a power-saving by ε > 0 if for some C, τ ∈ N depending on V only, this bound can be improved to |V ∩ ∏_{i=1}^n X_i| ≤ O_{V,ε}(N^{dim(V) − ε}). We say V admits no power-saving if it does not admit a power-saving by ε for any ε > 0. It is easy to see that if V admits no power-saving, then dim(V) must be an integral multiple of d (see Lemma 7.1). In Theorem 1.11 below we give a complete classification of the varieties with no power-saving. To this end we introduce as earlier a notion of special varieties, which generalises the previous definition and is slightly more involved.

Let G be a connected commutative complex algebraic group, and let End(G) be the ring of algebraic endomorphisms of G. We will denote by End⁰(G) the Q-algebra End⁰(G) := Q ⊗_Z End(G). For example, if G is a torus G = G_m^r, then End(G) = Mat_r(Z) and End⁰(G) = Mat_r(Q), and if G = G_a^r is a vector group, then End(G) = End⁰(G) = Mat_r(C). In any case End(G) is a subring of End⁰(G).

Definition 1.8.
An algebraic subgroup of G^n is called a special subgroup if it has an "F-structure" for some division subring F of End⁰(G), by which we mean that it is the connected component of the kernel ker A ≤ G^n of a matrix A ∈ Mat_n(F ∩ End(G)). For example F could be trivial and equal to Q, in which case the corresponding special subgroups will be the connected components of subgroups defined by arbitrary linear equations with integer coefficients in the n co-ordinates of G^n.

Remark 1.9. It will be convenient for us to express this condition in terms of the Lie algebra Lie(H) of the subgroup H ≤ G^n, which is defined as the tangent space at the identity as a C-vector space. An algebraic endomorphism η ∈ End(G) induces by differentiation a linear map dη : Lie(G) → Lie(G), making Lie(G) into an End⁰(G)-module. Then a subgroup H ≤ G^n is a special subgroup if and only if Lie(H) = Lie(G) ⊗_F J ≤ Lie(G)^n for some division subring F ⊂ End⁰(G) and some F-subspace J ≤ F^n (where we make the obvious identifications between Lie(G)^n, Lie(G^n) and Lie(G) ⊗_F F^n).

Definition 1.10. An irreducible closed subvariety V ⊂ ∏_{i=1}^n W_i of a product of irreducible varieties is special if it is in co-ordinatewise correspondence with a product ∏_i H_i ≤ ∏_i G_i^{n_i} of special subgroups H_i of powers G_i^{n_i} of commutative complex algebraic groups, where ∑_i n_i = n.

Theorem 1.11. An irreducible closed subvariety V ⊂ ∏_{i=1}^n W_i admits no power-saving if and only if it is special.

Example 1.12. Let G := (C^×)⁴. Then End⁰(G) = Q ⊗_Z End(G) ≅ Q ⊗_Z Mat₄(Z) ≅ Mat₄(Q), the ring of 4 × 4 rational matrices. This is certainly not a division ring, but for example the quaternion algebra H_Q = (Q[i, j, k] : i² = j² = k² = −1; ij = k; jk = i; ki = j) embeds in Mat₄(Q) via the left multiplication representation.
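The embedding via left multiplication is elementary to verify by hand; as an illustrative sketch (ours, not the paper's), one can write down the left-multiplication matrices of i and j in the basis (1, i, j, k) and check the quaternion relations directly:

```python
def matmul(A, B):
    """4x4 integer matrix product."""
    return [[sum(A[r][t] * B[t][c] for t in range(4)) for c in range(4)]
            for r in range(4)]

# Left-multiplication matrices of i and j on H_Z = Z[i,j,k] in the basis (1, i, j, k):
# column m holds the coordinates of q * e_m (e.g. i*1 = i, i*i = -1, i*j = k, i*k = -j).
I = [[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]]
J = [[0, 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]]
K = matmul(I, J)  # left multiplication is a ring homomorphism, so L_i L_j = L_k

E = [[1 if r == c else 0 for c in range(4)] for r in range(4)]
negE = [[-x for x in row] for row in E]

assert matmul(I, I) == negE and matmul(J, J) == negE and matmul(K, K) == negE
assert matmul(J, K) == I and matmul(K, I) == J
print("quaternion relations hold in Mat_4(Z)")
```

Since the matrices have integer entries, they act on G = (C^×)⁴ by monomial maps, realising the action α : H_Z → End(G) used in the rest of the example.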
This defines in particular an action of H_Z = Z[i, j, k] ⊂ H_Q on G by endomorphisms, given by a ring homomorphism α : H_Z → End(G), x ↦ α_x, induced by the left multiplication representation. Then for instance V := {(x, y, z₁, z₂, z₃) ∈ G⁵ : z₁ = x · y, z₂ = x · α_i(y), z₃ = x · α_j(y)} is a special subgroup of G⁵. To see that V admits no power-saving, we can consider "approximate H_Z-submodules": let H_N := {n + mi + pj + qk : n, m, p, q ∈ {−N, ..., N}}, let g ∈ G be generic (i.e. Q(g) has transcendence degree 4), and let X_N := α_{H_N}(g) = {α_h(g) : h ∈ H_N} ⊂ G. Since α_{H_Z}(g) is a finitely generated subgroup of G, one can show (it is a consequence of Laurent's Mordell-Lang theorem for tori, see Remark 7.14) that for W ⊊ G a proper Zariski-closed subvariety, |W ∩ α_{H_Z}(g)| is finite and bounded by a function of the complexity of W. Hence for all τ, for all sufficiently large N, we have that X_N is τ-cgp in G. But α_i(X_N) = X_N = α_j(X_N), and so |X_N⁵ ∩ V| ≥ Ω(|X_N|²).

We show in Subsection 7.1 that any special subgroup admits no power-saving. The argument of this example goes through for many groups G (see Remark 7.14), but extra complications arise with other groups, in particular in the case of a power of the additive group G = G_a(C)^d = (C^d, +), where many more division rings can arise and no Mordell-Lang type result holds.

Remark 1.13. In the situation of [ES12, Theorem 27], where V ⊂ C^d × C^d × C^d projects dominantly with generically finite fibres to each pair of co-ordinates, if V is special then it is in co-ordinatewise correspondence with the special subgroup H₀ := {x₁ + x₂ + x₃ = 0} of a d-dimensional commutative algebraic group G. Indeed, we first obtain from Theorem 1.11 that it is in co-ordinatewise correspondence with the connected component of {α₁y₁ + α₂y₂ + α₃y₃ = 0} for some self-isogenies α_i ∈ End(G); then, setting x_i := α_i y_i, we see that this is in co-ordinatewise correspondence with H₀.
Similarly for general n, if dim(V) = (n − 1)d (as in [RSdZ18] for instance), then {∑_i x_i = 0} is the only kind of special subgroup which needs to be considered. But if V has higher codimension, endomorphisms are indispensable.

Remark 1.14 (Explicit power-saving). We can consider strengthening Theorem 1.11 by replacing the condition that V admits no power-saving with the condition that V does not admit a power-saving by η, where the "gap" η is a constant η = η(d, n) > 0. The existence of such a gap in the case n = 3 is part of [ES12, Main Theorem], and for n = 3 and d = 1 an explicit value of η = 1/6 for this gap was found independently by Wang [Wan14] and by Raz, Sharir, and de Zeeuw [RSdZ16]; furthermore, [RSdZ18] finds a gap of η = 1/3 for the case of n = 4 and d = 1 (under a non-degeneracy assumption). For n = 3 and arbitrary d some explicit gaps were obtained by Wang and the second author (see [BW16]). None of the gaps are known to be optimal. Our techniques for the general situation go via the abstraction of combinatorial geometries and are not adapted even to proving the existence of a gap, still less to calculating one. However, in Section 3 we work out the case of n = 3, which does not require the full power of this abstraction, and we obtain there an explicit gap η = 1/16 for all d (see Theorem 3.8 below) and also recover the above-mentioned 1/6 gap when n = 3 and d = 1.

We draw as a corollary of Theorem 1.4 the following generalised sum-product phenomenon.

Corollary 1.15 (Generalised sum-product phenomenon). Let (G₁, +₁) and (G₂, +₂) be one-dimensional non-isogenous connected complex algebraic groups, and for i = 1, 2 let f_i : G_i(C) → C be a rational map. Then there are ε, c > 0 such that if A ⊂ C is a finite set lying in the range of each f_i, then setting A_i = f_i^{−1}(A) ⊂ G_i(C) we have max(|A₁ +₁ A₁|, |A₂ +₂ A₂|) ≥ c|A|^{1+ε}.

Remark 1.16.
The usual sum-product phenomenon is the case (G_1, +_1) = (C, +) and (G_2, +_2) = (C \ {0}, ·), with f_1 and f_2 being the identity maps. If instead G_2 = E ⊂ P^2(C) is an elliptic curve defined by {y^2 = x(x − 1)(x − λ)}, then we may take f_2 to be the rational map [x : y : 1] ↦ x. This case of the additive group and an elliptic curve was previously considered for finite fields in [Shp08]. The constant c (which must depend on the f_i's) is necessary as both A_i's could be finite subgroups of bounded order. We believe however that the power-saving ε > 0 above is uniform over all group laws (and also independent of the f_i's); proving this would require establishing an explicit gap in Theorem 1.4 for d = 1 and n = 6. We do not tackle this issue here. We also obtain the following result on intersections of subvarieties with powers of an approximate subgroup, or just of a set with small doubling. Theorem 1.17. Let G be a commutative complex algebraic group. Suppose V is a subvariety of G^n which is not a coset of a subgroup. Then there are N, τ, ε, η > 0 depending only on G and the complexity of V such that if A ⊂ G is a finite subset such that A − A is τ-cgp, |A + A| ≤ |A|^{1+ε} and |A| ≥ N, then |A^n ∩ V| < |A|^{dim(V)/dim(G) − η}. Note that Theorem 1.11 yields right away that if no such η > 0 exists, then V must be special. So the point here is to show that under the small doubling assumption the special V's are in fact cosets of algebraic subgroups. The result is reminiscent of the Larsen-Pink type estimates for approximate groups (see [HW08], [Hru12, Prop. 5.5], [BGT11, Thm 4.1]), with a stronger conclusion (the power-saving η > 0) and a stronger hypothesis (coarse general position). This conclusion is also reminiscent of results in Diophantine geometry of Manin-Mumford or Mordell-Lang type, although our methods are completely unrelated; see Example 8.3 for further comments in this direction. 1.1. Method of proof.
The proof of our main results, Theorems 1.4 and 1.11, relies on an initial ultraproduct construction starting from a sequence of finite subsets witnessing the absence of power-saving. This yields pseudo-finite cartesian products. The field-theoretic algebraic closure relation then induces an abstract projective geometry at the level of the ultraproduct and we show, as a consequence of known incidence bounds generalising the Szemerédi-Trotter theorem, that this geometry is modular, i.e. satisfies the Veblen axiom of abstract projective geometries and can therefore be co-ordinatised. The division rings appearing in Theorem 1.11 arise that way. In the one-dimensional (d = 1) case, the projective geometries which embed in the geometry of algebraic closure in an algebraically closed field were characterised in [EH91], and in an appendix we use similar techniques (primarily the abelian group configuration theorem) to characterise them in the higher dimensional case. The main combinatorial results above then follow. Much of the strategy is an implementation of ideas due to Hrushovski appearing in [Hru13], where he introduced the formalism of coarse pseudo-finite dimension and outlined a proof of the original Elekes-Szabó theorem in those terms. More generally, our results are a consequence of specialising ideas of model theory to this combinatorial setting. We use the conventions and language of model theory throughout. Nonetheless, our treatment requires very little model-theoretic background and everything we need is described and recalled in Section 2. It is also mostly self-contained, except for the use of the group configuration theorem, recalled in Section 3, and the Szemerédi-Trotter type incidence bounds recalled in §2.2. 1.2. Related work. We remark here on how this paper relates to other recent works on applications of model theory to similar problems. 
In an unreleased work in progress, Hrushovski, Bukh, and Tsimmerman consider expansion phenomena in pseudo-finite subsets of pseudo-finite fields of size comparable to that of the non-standard prime field. This context is quite different from that we consider, in particular because of the failure of Szemerédi-Trotter in this regime, but there may be some overlap in techniques; in particular, their analysis also proceeds via modularity and the abelian group configuration theorem. Meanwhile, Chernikov and Starchenko [CS] recently proved a version of Theorem 1.2 in strongly minimal structures which are reducts of distal structures. This direction of generalisation is orthogonal to the one we consider here, where we restrict to the case of ACF 0 (this restriction is used in Lemma 2.15 and in Proposition 7.10). 1.3. Organisation of the paper. In Section 2 we set up our notation for the rest of the paper and present Hrushovski's notion of pseudo-finite dimension of internal sets and its basic properties. This section is entirely self-contained. We also recall the Szemerédi-Trotter-type bounds for arbitrary varieties and recast them in this language. In Section 3 we reprove the original Elekes-Szabó theorem using the group configuration theorem and the formalism of pseudo-finite dimensions. In higher dimensions we also recover the commutativity of the ambient group and obtain an explicit power saving of 1 16 . This section is not used in the proof of our main theorem, but can be read as an example of the method, worked out in a special case. In Section 4 we give a counter-example to the original Elekes-Szabó theorem when the assumption of general position is removed. Section 5 contains the proof of the key point: the modularity of the projective geometry associated to a variety without power-saving. In Sections 6 and 7 we complete the proof of Theorems 1.4 and 1.11 modulo the result proven in the Appendix. 
In particular we prove the converse (the "if" direction of the theorems), which requires some information regarding division subrings of matrices. We also derive Corollary 1.15. In Section 8 we prove Theorem 1.17 and draw some connections with Diophantine geometry. Finally the appendix is devoted to the higher-dimensional version of [EH91]. We would also like to thank the Institut Henri Poincaré and the organisers of the trimester "Model theory, combinatorics and valued fields", where some of the work was done. The second author acknowledges support from ERC grant GeTeMo no. 617129. In this preliminary section, we set up our notation and introduce the key concepts, which will be used in the proof of the main results. We assume some familiarity with the notions of first order languages, formulas and ultraproducts as expounded for example in the first two chapters of [Mar02]. No other more sophisticated model-theoretical concepts will be assumed. 2.1. Coarse pseudo-finite dimension. We begin with a self-contained presentation of Hrushovski's formalism of coarse pseudo-finite dimensions from [Hru13], slightly adapted to our purposes. 2.1.1. Ultraproducts and internal sets. We will fix a non-principal ultrafilter U on the set of natural numbers. We say that a property of natural numbers holds for U-almost every s if the set of natural numbers s for which the property holds is an element of U. We form the ultraproduct K = ∏_{s→U} K_s of countably many algebraically closed fields K_s, s ≥ 0, which by definition is the cartesian product ∏_{s≥0} K_s quotiented by the equivalence relation (x_s)_s ∼ (y_s)_s if and only if x_s = y_s for U-almost every s ≥ 0. The field K is also algebraically closed. We will assume throughout internal characteristic zero; namely, we assume char(K_s) = 0 for all s. This is required for the incidence bounds used in Lemma 2.15 below. (See [Hru13, Corollary 5.6] for discussion on how it ought to be possible to weaken this assumption).
In fact for our purposes it makes no difference to simply make the following Assumption 2.1. We assume that K_s = C for all s. We denote by *R := R^U the corresponding ultrapower of R, and call its elements non-standard reals. The real field R embeds diagonally in *R and its elements are called standard reals. The order on R extends to an order on *R by saying that x < y if and only if x_s < y_s for U-almost every s ≥ 0. We let st : *R → R ∪ {−∞, ∞} be the standard part map, namely st(ξ) is ∞ (resp. −∞) if ξ = (ξ_s)_{s≥0} ∈ *R is larger (resp. smaller) than any standard real, and otherwise it is the ultralimit along U of the sequence (ξ_s)_s, namely the unique z ∈ R such that for each ε > 0, |z − ξ_s| < ε holds for U-almost every s. Let n be a positive integer. We say that a subset X ⊂ K^n is internal if X = ∏_{s→U} X_{K_s} for some subsets X_{K_s} ⊂ K_s^n. 2.1.2. Saturation and compactness. A standard property of ultraproducts over a countable index set is their ℵ_1-compactness. Namely countable families of internal sets have the finite intersection property. This means that for each positive integer n, if X_0 ⊃ X_1 ⊃ . . . is a countable chain of internal subsets of K^n such that ∩_{i≥0} X_i = ∅, then X_i = ∅ for some i ≥ 0. Equivalently if an internal set X ⊂ K^n lies in the union of countably many internal sets, then it already lies in the union of finitely many of them. 2.1.3. Coarse pseudo-finite dimension. Throughout we will fix once and for all some infinite non-standard real ξ ∈ *R with ξ > R, which we call the scaling constant. This choice corresponds to a choice of calibration for the large finite sets involved in our main results. Given an internal set X = ∏_{s→U} X_{K_s} ⊂ K^n, we define the non-standard cardinality of X by |X| := lim_{s→U} |X_{K_s}| ∈ *R ∪ {∞} and its coarse pseudo-finite dimension δ(X) by δ(X) := st(log |X| / log ξ) ∈ R_{≥0} ∪ {−∞, ∞} (for the empty set we adopt the convention δ(∅) = log(0) = −∞). Example 2.2.
Let X_{K_s} := {(p, q) ∈ N^2 : p + q < s}, Y_{K_s} := {1, . . . , s^s} and ξ_s := s for all s ≥ 1. Then δ(X) = 2 and δ(Y) = ∞. We note here the following immediate properties of the coarse dimension, for internal sets A, B ⊂ K^n: (1) (non-negativity) δ(A) ≥ 0 if A is non-empty, (2) (monotonicity) if A ⊂ B, then δ(A) ≤ δ(B), (3) (ultrametricity) δ(A ∪ B) = max{δ(A), δ(B)}. 2.1.4. Definable sets. In order to talk about definable subsets of K^n we fix a language L, which extends the language of rings L_ring = (+, −, ·, 0, 1) by only countably many symbols. We assume each K_s is an L-structure, and equip the ultraproduct K with the corresponding L-structure. If C ⊂ K is a countable set, we write L_C for the language with new constant symbols for the elements of C. To every first order formula φ = φ(x) in the language L_C with free variables x := (x_1, . . . , x_n) there corresponds a definable set φ(K) := {k ∈ K^n : φ(k) holds in K}. We say that the set φ(K) is C-definable or definable over C; in other words, φ(K) is definable by a formula with parameters from C. When C = ∅, we say that φ(K) is definable over ∅, or definable without parameters. We set K^{<∞} := ∪_{n≥1} K^n, the set of all finite tuples of elements of K. The family D_{n,C} of C-definable subsets of K^n forms a boolean algebra, which contains all algebraic sets defined over C (i.e. solutions of polynomial equations whose coefficients are elements of C) as well as a countable number of prescribed subsets (the graphs of functions from L and sets of tuples satisfying the relations whose symbols belong to L), and ∪_{n≥1} D_{n,C} is stable under co-ordinate projections (image and pre-image).
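Example 2.2 above can be checked numerically: with X_{K_s} = {(p, q) ∈ N^2 : p + q < s} and scaling constant ξ_s = s, we have |X_{K_s}| = s(s+1)/2, so log|X_{K_s}| / log ξ_s → 2. The following sketch is our own illustration (the paper's δ of course concerns the ultralimit, not any finite stage):

```python
import math

# Finite stages of Example 2.2: X_s = {(p,q) in N^2 : p+q < s}, xi_s = s.
# The ratio log|X_s| / log(xi_s) approaches the coarse dimension delta(X) = 2.

def size_X(s):
    # Count pairs of non-negative integers with p + q < s (brute force).
    return sum(1 for p in range(s) for q in range(s) if p + q < s)

assert size_X(10) == 10 * 11 // 2   # closed form |X_s| = s(s+1)/2

ratios = [math.log(size_X(s)) / math.log(s) for s in (10, 100, 1000)]
# The ratio increases towards 2 but never reaches it at a finite stage.
assert ratios[0] < ratios[1] < ratios[2] < 2
assert ratios[2] > 1.89
```

The second set of the example behaves differently: |Y_{K_s}| = s^s gives log|Y_{K_s}| / log ξ_s = s → ∞, whence δ(Y) = ∞.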
Equivalently, instead of starting with the language L and considering the associated definable sets, we may begin by giving ourselves for each n a countable number of prescribed internal subsets of K^n and consider the smallest family of subsets of K^{<∞} which contains them as well as all algebraic sets defined over C and is stable under union, complement and co-ordinate projections (image and pre-image). Clearly every definable set is internal. The converse is not true; however, any single internal set in K^n (or a countable family of such) can be made definable by expanding the language by adding an n-ary relation symbol for that internal set. This will be done below in order to make δ continuous, and further down in the paper when, in our combinatorial applications, we will always add the ultraproduct of the finite sets X_i to the class of definable sets. Remark 2.3 (Notation for tuples). We will often write a couple (a, b) ∈ K^2 as ab, or given two tuples a ∈ K^n and b ∈ K^m, we will denote the tuple (a, b) ∈ K^{n+m} simply by ab, concatenating the two tuples. 2.1.5. Types, ⋀-definable sets and coarse dimension of a tuple. The type tp(a) of a tuple a ∈ K^n is the family of all formulae in n variables in the language L (that is, without parameters) satisfied by a. The intersection of all ∅-definable subsets containing a will be denoted by tp(a)(K). Similarly if C is a countable subset of K, we denote by tp(a/C) the (countable) family of all formulae in L_C with n variables satisfied by a and we set: tp(a/C)(K) := ∩_{φ∈tp(a/C)} φ(K) = ∩_{a∈Y, Y C-definable} Y. By a ⋀-definable set over C (say "type-definable", or "wedge-definable"), we mean a subset X of K^n, for some n, which is the intersection of countably many C-definable sets. Such sets need not be internal. We say a set is ⋀-internal if it is the intersection of a countable collection of internal sets; so ⋀-definable sets are ⋀-internal. For a ⋀-internal set X ⊂ K^n, we define (1) δ(X) := inf{δ(Y) : Y ⊃ X, Y internal}.
It is an immediate consequence of ℵ_1-compactness that if X ⊂ K^n is the intersection of a countable decreasing chain X_0 ⊃ X_1 ⊃ . . . of internal subsets of K^n then δ(X) = inf_i δ(X_i). In particular if X ⊂ K^n is a ⋀-definable set over a countable set C ⊂ K, then (2) δ(X) = inf{δ(Y) : Y ⊃ X, Y definable over C}. The set tp(a/C)(K) is ⋀-definable, so this allows us to define δ(a/C), the coarse dimension of the tuple a over C, as δ(tp(a/C)(K)). Namely for a ∈ K^n: (3) δ(a/C) = inf{δ(Y) : a ∈ Y ⊂ K^n, Y definable over C}. Abusing notation we will write δ(a) for δ(a/∅), and similarly if C ⊂ K^{<∞} (as opposed to just K) we will denote by tp(a/C) and δ(a/C) the type and coarse dimension of a over the subset of K of all co-ordinates of tuples from C. Further note that if C_1 ⊂ C_2 ⊂ K^{<∞}, then (4) δ(a/C_2) ≤ δ(a/C_1). Given a ⋀-definable set X ⊂ K^n over a countable set C ⊂ K, we clearly have δ(a/C) ≤ δ(X) for every a ∈ X. An important consequence of ℵ_1-compactness, which will be used several times in the proofs, is the existence of some tuple realising the dimension: Fact 2.4 ("existence of an independent realisation"). If X ⊂ K^n is a ⋀-definable set over a countable set C ⊂ K, then X contains some a ∈ X with δ(a/C) = δ(X). It is for this that we require countability of the language. Proof. Note that for any a ∈ X, we have δ(a/C) < δ(X) if and only if there is a C-definable subset Z ⊂ K^n such that a ∈ Z and δ(Z) < δ(X). Consider the family of all C-definable subsets Z with δ(Z) < δ(X). It is enough to show that their union does not contain X. But if this were the case, by ℵ_1-compactness (see 2.1.2), X would be contained in the union of finitely many of them, say Z_1, . . . , Z_m. So δ(∪_{i=1}^m Z_i) ≥ δ(X) by monotonicity. However δ(∪_{i=1}^m Z_i) = max_i δ(Z_i) by the ultrametricity property (see 2.1.3 above), and hence is < δ(X), a contradiction. Finally we record the following straightforward observation: Fact 2.5. For a tuple a = (a_1, . . .
, a_n) ∈ K^n and a countable set C ⊂ K, the coarse dimension δ(a/C) depends only on the set of co-ordinates {a_1, . . . , a_n} ⊂ K. Proof. Indeed it is invariant under any permutation of the co-ordinates, because these induce bijections of K^n and thus preserve cardinality. And furthermore if X is an internal set in K^n such that the last two co-ordinates x_{n−1} and x_n coincide for all x ∈ X, then δ(X) = δ(π(X)), where π(X) is the projection to the first n − 1 co-ordinates. 2.1.6. Continuity, additivity and invariance of coarse dimension. We now come to two crucial properties of δ: its additivity and its invariance. This will turn δ into a dimension-like quantity with properties very similar to those, say, of the transcendence degree of the field extension generated by a tuple. To get these properties it is enough to prove that δ has the continuity property we will now define. This property essentially amounts to requiring that for each definable set the subset of fibers of a given size under a co-ordinate projection is itself a definable set, or is at least well approximated by one. However continuity is not automatic and to get it we will need to enrich our language L somewhat artificially, by adding a (still countable) family of definable subsets. Definition 2.6. We say that δ has the continuity property (or is continuous) if given n, m ≥ 1, α ∈ R, ε > 0 and a ∅-definable set Y ⊂ K^n × K^m there is a ∅-definable set W ⊂ K^m such that {b ∈ K^m : δ(Y_b) ≥ α + ε} ⊂ W ⊂ {b ∈ K^m : δ(Y_b) ≥ α}, where Y_b is the fiber {x ∈ K^n : (x, b) ∈ Y}. It is always possible to force the continuity of δ by enlarging the language L to a new language L′, which is still countable and for which δ becomes continuous. Indeed for each q ∈ Q we may add a predicate to simulate the quantifier ∃^{≥ξ^q} of having "at least ξ^q solutions".
Explicitly, if ξ = lim_{s→U} ξ_s is as in the definition of δ, let L_0 := L and define L_{i+1} by adding to L_i a new predicate ψ_{φ(x,y),q}(y) for each formula φ(x, y) ∈ L_i and each q ∈ Q, interpreted in K_s by ψ_{φ(x,y),q}(K_s) := {y : |φ(K_s, y)| ≥ ξ_s^q}, where we have written φ(K_s, y) for {x : φ(x, y) holds in K_s}. So in the ultraproduct K we have ψ_{φ(x,y),q}(K) = {y : |φ(K, y)| ≥ ξ^q}. Then we set L′ := ∪_{i<∞} L_i. It is then clear that δ is continuous once we replace L with L′. Indeed if α ∈ R and ε > 0 we may pick a rational q ∈ (α, α + ε). Then if b ∈ ψ_{φ(x,y),q}(K) then |φ(K, b)| ≥ ξ^q so δ(φ(K, b)) ≥ q > α, while if δ(φ(K, b)) ≥ α + ε then δ(φ(K, b)) ≥ q and b ∈ ψ_{φ(x,y),q}(K). Remark 2.7. Note that the continuity property automatically extends to definable sets with parameters. Namely if Y is assumed C-definable for some C ⊂ K, and δ is continuous, we may find a C-definable W as in Definition 2.6. Indeed there is a finite tuple c_0 ∈ K^ℓ for some ℓ ≥ 1 with co-ordinates in C such that Y = Y^0_{c_0} = {(x, y) : (x, y, c_0) ∈ Y^0} for some ∅-definable set Y^0 ⊂ K^{n+m+ℓ}, so by continuity there is W^0 a ∅-definable subset of K^{m+ℓ} such that {(b, c) ∈ K^{m+ℓ} : δ(Y_{b,c}) ≤ α} ⊂ W^0 ⊂ {(b, c) ∈ K^{m+ℓ} : δ(Y_{b,c}) ≤ α + ε}. But now W := W^0_{c_0} is the desired C-definable set. Continuity yields the following crucial properties, which are characteristic of a dimension function; in particular, they are shared by transcendence degree. Fact 2.8. Let a, b ∈ K^{<∞}, let C ⊂ K be countable and let φ(x, y) be a formula in the language L. If δ is continuous (for L) then it is (i) invariant: if tp(a) = tp(b), then δ(φ(K, a)) = δ(φ(K, b)), (ii) additive: δ(ab/C) = δ(b/C) + δ(a/bC). Here as above φ(K, a) denotes the definable set {x : φ(x, a) holds}. We have used the convention α + ∞ = ∞ + α = ∞, and ab is a shorthand for (a, b), the concatenation of the tuples a and b. Also we wrote bC for the union of C and the co-ordinates of b. Proof.
When a and b have the same type they belong to the same definable sets, so (i) is immediate from the continuity of δ. The proof of (ii) is given in [Hru13, Lemma 2.10]. We give it again here for the reader's convenience. The idea is the following: if Y is a C-definable set in K^n × K^m containing (a, b) and such that all fibers Y_{b′} = Y ∩ π_2^{-1}(b′) above the points b′ ∈ π_2(Y) (where π_2 is the co-ordinate projection to K^m) have the same size, then clearly δ(Y) = δ(π_2(Y)) + δ(Y_b). Now the continuity property of δ ensures that we can find a Y with δ(Y) close to δ(ab/C) and with all fibers of almost the same size. This shows additivity. We now give more details: by definition of the coarse dimension as an infimum (see (3)), given ε > 0 we may find C-definable sets Y, Y′ ⊂ K^n × K^m such that ab ∈ Y, Y′ and δ(ab/C) ≤ δ(Y) ≤ δ(ab/C) + ε, δ(a/bC) ≤ δ(Y′_b) ≤ δ(a/bC) + ε, and a C-definable set Z ⊂ K^m with b ∈ Z and δ(b/C) ≤ δ(Z) ≤ δ(b/C) + ε. Replacing Y, Y′ by Y ∩ Y′ ∩ π_2^{-1}(Z), we may assume that Y = Y′ and Z = π_2(Y). Now by continuity of δ there is a C-definable set W ∋ b such that |δ(Y_{b′}) − δ(Y_b)| < ε for all b′ ∈ W. We may then further replace Y by Y ∩ π_2^{-1}(W) and get to a situation where δ(ab/C) ≤ δ(Y) ≤ δ(ab/C) + ε, δ(b/C) ≤ δ(π_2(Y)) ≤ δ(b/C) + ε and all fibers Y_{b′} for b′ ∈ π_2(Y) have δ(a/bC) − ε ≤ δ(Y_{b′}) ≤ δ(a/bC) + ε. We thus conclude that |δ(ab/C) − δ(b/C) − δ(a/bC)| ≤ 3ε, as desired. Remark 2.9. We briefly remark in passing for the model-theoretically inclined reader that a more sophisticated setup is also available, which in some ways is more satisfactory than that described above. Working directly in a countable ultrapower with only ℵ_1-compactness, as we have in this section, has the consequence that we must pick a countable language to work with. In our applications we will have no real control over the definable sets and can expect no tameness, so having to make this choice is something of a distraction.
An alternative would be to define K as above but in a language L_int which includes all internal sets as predicates, and then to take a κ-saturated κ-strongly homogeneous elementary extension K′, for a cardinal κ which is larger than any parameter set we wish to consider. There is then a unique way to define δ(φ(x, a)) for φ ∈ L_int and a ∈ (K′)^{<ω} such that δ is continuous and extends the original definition in the case a ∈ K^{<ω}. Namely, δ(φ(x, a)) := sup{q ∈ Q : K′ ⊨ ∃^{≥ξ^q}x. φ(x, a)}, where ∃^{≥ξ^q}x. φ(x, y) denotes an L_int formula with free variables y such that K ⊨ ∃^{≥ξ^q}x. φ(x, b) if and only if |φ(K, b)| ≥ ξ^q, for b ∈ K^{<ω}. (This is parallel to the way one defines dimension on an elementary extension of a Zariski structure.) Here, continuity is meant in the sense of Definition 2.6; or equivalently, that the map from the type space to the 2-point compactification S_y(∅) → R ∪ {−∞, ∞} : tp(b) ↦ δ(φ(x, b)) is well-defined and continuous. We can then work with elements of K′ in order to analyse the internal subsets of K. We will not use this alternative presentation, but some readers may prefer to pretend that we do so throughout. 2.1.7. Algebraic independence and transcendence degree. At the heart of the combinatorial results of this paper lies the interplay between combinatorics (via the coarse pseudo-finite dimension δ) and algebraic geometry (via the notion of algebraic dimension, or transcendence degree). To this effect we will fix a base field C_0 and assume it is countable and algebraically closed and contained in K. We will then have to consider the subclass of definable sets that are C_0-definable using only the language of rings L_ring. In the applications C_0 will be the algebraic closure of the field of definition of the variety.
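Before moving on, we note that the identity behind the additivity property of Fact 2.8(ii) is elementary: when every fiber of Y above π_2(Y) has the same size, |Y| = |π_2(Y)| · |Y_b|, so log-sizes (and hence coarse dimensions) add. A toy finite check (our own illustration, not part of the paper's formalism):

```python
import math

# If every fiber Y_b over b in the projection has the same size, then
# |Y| = |projection| * |fiber|, so logarithms add exactly, mirroring
# delta(Y) = delta(pi_2(Y)) + delta(Y_b) in the constant-fiber case.

Y = {(x, b) for b in range(50) for x in range(20)}   # 50 fibers of size 20

proj = {b for (x, b) in Y}
fiber_sizes = {sum(1 for (x2, b2) in Y if b2 == b) for b in proj}

assert fiber_sizes == {20}           # all fibers have the same size
assert len(Y) == len(proj) * 20      # |Y| = |proj| * |fiber|

xi = 10.0                            # any scaling constant works here
assert math.isclose(math.log(len(Y), xi),
                    math.log(len(proj), xi) + math.log(20, xi))
```

The actual proof of additivity handles the general case by using continuity to pass to a definable set whose fibers have almost the same coarse dimension, as detailed above.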
As is well-known, in an algebraically closed field, the sets that are C 0 -definable in L ring coincide with the so-called constructible sets of algebraic geometry defined over C 0 , namely solutions of finitely many polynomial equations and inequations with coefficients in C 0 . After enlarging L if necessary we can make the following Assumption 2.10. We assume that L contains a constant symbol for each element of C 0 . Notation (0 superscript). We will use a superscript 0, e.g. tp 0 , to indicate that we work in the structure (K; +, ·, (c) c∈C0 ) of K as an algebraically closed field extension of the base field C 0 , rather than in the full language L. For example for a, b ∈ K <∞ and C ⊂ K <∞ , saying that tp 0 (a/C) = tp 0 (b/C) means that they satisfy the same polynomial equations over the field C 0 (C) generated by C 0 and the co-ordinates of all tuples belonging to C, i.e. for f ∈ C 0 (C)[X], f (a) = 0 if and only if f (b) = 0. Notation (algebraic closure acl 0 ). Similarly for a subset A ⊂ K <∞ we denote by acl 0 (A) the field-theoretic algebraic closure in K of the subfield C 0 (A) generated by C 0 and the co-ordinates of the elements of A. When there is no superscript, we work in the full language L. Notation (transcendence degree d 0 ). We write d 0 for the dimension with respect to acl 0 , i.e. for A, B ⊂ K <∞ we set: d 0 (A/B) := trd(C 0 (AB)/C 0 (B)), where trd denotes the transcendence degree, and C 0 (B) the field extension of C 0 generated by B and AB is short for A ∪ B. Note that, just like δ, d 0 is additive: if a, b ∈ K <∞ and C ⊂ K, then (5) d 0 (ab/C) = d 0 (a/bC) + d 0 (b/C), where, as earlier, bC is short for the union of C and the co-ordinates of b. Note finally that clearly d 0 (A/B) = d 0 (A/ acl 0 (B)). Notation (independence | 0 C ). If A, B, C are subsets of tuples of K, we will say that A is algebraically independent of B over C and write A | 0 C B if d 0 (A/BC) = d 0 (A/C), i.e. 
if C 0 (A) is algebraically independent from C 0 (B) over C 0 (C). This is clearly a symmetric relation, namely A | 0 C B if and only if B | 0 C A. Notation. For A ⊂ K <∞ , we write (6) acl 0 (A) <∞ := n≥1 (acl 0 (A)) n ⊂ K <∞ , for the set of tuples algebraic over A. Note that this is also the set of tuples with finite orbit under the group of field automorphisms Aut(K/C 0 (A)) fixing C 0 (A) pointwise. 2.1.8. Coarse dimension of an algebraic tuple. Let C ⊂ K <∞ be a countable subset. If a tuple a belongs to acl 0 (C) <∞ , then it is contained in a finite C-definable set, namely the Galois orbit of a over C 0 (C). In particular, since ξ > R we have δ(a/C) = 0. So a ∈ acl 0 (C) <∞ ⇒ δ(a/C) = 0. We also record here the following generalisation of this observation, which will be used in the proof of Proposition 5.14. For any a ∈ K <∞ and countable C ⊂ K: (7) δ(a/C) = δ(a/ acl 0 (C)). Indeed, first we have δ(a/C) ≥ δ(a/ acl 0 (C)) by (4). For the opposite inequality it is enough to show that if b ∈ acl 0 (C), then δ(a/C) ≤ δ(a/b). To see this, note that δ(b/C) = 0 by the above remark, and thus by additivity (2.8.ii) δ(a/C) ≤ δ(ab/C) = δ(a/bC) + δ(b/C) = δ(a/bC) ≤ δ(a/b). 2.1.9. Locus of a tuple. If a ∈ K n and C ⊂ K, we define the locus of a over C 0 (C), denoted by loc 0 (a/C), to be the smallest Zariski-closed subset V ⊂ K n such that a ∈ V and V is defined by the vanishing of polynomials with coefficients in C 0 (C). We also write loc 0 (a) for loc 0 (a/∅). Note that by definition loc 0 (a/C) is irreducible over C 0 (C), i.e. it cannot be written as a finite union of more than one proper Zariski-closed subset of K n defined over C 0 (C), but it may not be absolutely irreducible (i.e. irreducible over K). However each absolutely irreducible component is defined over some finite algebraic extension. In particular loc 0 (a/ acl 0 (C)) is an absolutely irreducible component of loc 0 (a/C), and (8) d 0 (a/C) = dim(loc 0 (a/C)) = dim(loc 0 (a/ acl 0 (C))). 2.1.10. 
Abstract varieties. Our setup is adapted to working with tuples of elements of K, but in our applications we will want to work with points of algebraic groups and of general abstract algebraic varieties. We explain here how we bridge this gap using standard notions from the model theory of algebraically closed fields, as described in [Pil98] or [Mar02,7.4]. We adopt the convention that varieties are always separated, but not necessarily irreducible. If V is an algebraic variety over an algebraically closed subfield C ≤ K, then V admits a cover by finitely many affine open subvarieties over C; that is, there are open subvarieties V i ⊂ V and (closed) affine subvarieties U i ⊂ A ni and isomorphisms f i : V i → U i over C, such that V = i V i . Then V (K) can be identified with the quotient of the disjoint union of the V i (K) by the equivalence relation of representing the same point of V (K). Now ACF 0 , the theory of algebraically closed fields of characteristic zero, admits elimination of imaginaries, which exactly means that such a quotient is in definable bijection over C with a definable (i.e. constructible) subset of K n for some n. We refer to [Pil98, Remark 3.10(iii), Lemma 1.7] for details of this construction. In this way, we embed V (K) as a subset of K n . Note that this embedding is not continuous. The precise embedding depends on our choice of cover. However, if W ⊂ V is another subvariety and f : W → U ⊂ K m is an isomorphism over C with an affine variety, and if a ∈ W (K) ⊂ V (K) ⊂ K n , then the subfield of K generated over C by the co-ordinates of a according to our embedding of V (K) in K n and those according to f are equal, C(a) = C(f (a)). In particular, for a ∈ V (K) the subfield C(a) ≤ K does not depend on our choice of cover. 
For τ ∈ N, we say that the complexity of a closed subvariety W ⊂ V is at most τ if for each i the affine variety f i (W ∩ V i ) ⊂ U i ⊂ A ni can be defined as the set of common zeros of a collection of polynomials each of degree at most τ . Note that the family of subvarieties of V of complexity at most τ forms a definable family; that is, there is m ∈ N and a constructible set X ⊂ V × K m over C such that every subvariety of V over K of complexity at most τ is of the form X(b) = {v : (v, b) ∈ X} for some b ∈ K m . In fact this is the only property we require of the notion of complexity. 2.1.11. Generic elements. Let V as before be an algebraic variety over an algebraically closed C ≤ K. For a ∈ V (K) and C ≤ K an algebraically closed subfield containing C, we define the locus of a over C within V , locus V (a/C ), to be the smallest Zariski-closed subvariety of V defined over C and containing a. If V (K) ⊂ K m is affine and defined over C, then locus V (a/C) = loc 0 (a/C). If V is irreducible, a point a ∈ V (K) of V is generic if it is contained in no proper closed subvariety over C, i.e. locus V (a/C) = V ; equivalently, trd(a/C) = dim(V ). Remark 2.11. If V ⊂ i W i and V ⊂ i W i are closed subvarieties where V, V , W i , W i are irreducible varieties over C 0 , then V and V are in co-ordinatewise correspondence if and only if they have generics a ∈ V (K) and a ∈ V (K) such that a i ∈ W i (K) and a i ∈ W i (K) are generic and for some permutation σ ∈ Sym(n), we have acl 0 (a i ) = acl 0 (a σi ). Indeed, loc 0 (a i , a σi ) ⊂ W i × W σi is then a generically finite algebraic correspondence between loc 0 (a i ) and loc 0 (a σi ), as required. 2.1.12. Canonical base. In the proof of our main theorems, it will be crucial to understand the dimensions of certain families of varieties. The right concept for this (which serves a similar purpose as the concept of Hilbert scheme in classical algebraic geometry) is the notion of canonical base. 
Recall that the field of definition of a Zariski-closed subset V ⊂ K^n is the smallest field k over which V is defined; equivalently (since char(K) = 0), k is such that a field automorphism σ ∈ Aut(K) fixes V setwise if and only if it fixes k pointwise. Given a ∈ K^n and C ⊂ K^{<∞}, let k ≤ K be the field of definition of the absolutely irreducible Zariski-closed subset loc^0(a/acl^0(C)) of K^n. A tuple d ∈ K^{<∞} is said to be a canonical base of a over C if its co-ordinates together with C_0 generate the subfield of K generated by C_0 and k. Clearly if d ∈ K^{<∞} is a canonical base of a over C then it is a canonical base of a over acl^0(C), and conversely. Furthermore d ∈ acl^0(C) and, since loc^0(a/acl^0(C)) is defined over C_0(d), we have loc^0(a/acl^0(C)) = loc^0(a/d); so loc^0(a/d) is (absolutely) irreducible and d^0(a/d) = d^0(a/C) = d^0(a/acl^0(C)) = d^0(a/Cd), in other words: a |^0_d C. In the proof of Proposition 5.14 below we shall require the following fact. Lemma 2.12. Let a ∈ K^{<∞} and C ⊂ K^{<∞}. Let d ∈ K^{<∞} be a canonical base of a over C and V := loc^0(ad). Let d_1, d_2 ∈ tp^0(d)(K) and a′ ∈ K^{<∞} be such that a′d_i ∈ V for i = 1, 2. Then either d^0(a′/d_1d_2) < d^0(a/d), or d_1 = d_2. Proof. Note that d^0(a′/d_1d_2) ≤ d^0(a′/d_1) = d^0(a′d_1) − d^0(d_1) ≤ dim V − d^0(d) = d^0(a/d). So if d^0(a′/d_1d_2) ≥ d^0(a/d), then the above inequalities are equalities and in particular d^0(a′d_i) = d^0(ad) for each i. Since V is irreducible we obtain V = loc^0(a′d_i). Hence there exist σ_i ∈ Aut(K/C_0) with σ_i(a) = a′ and σ_i(d) = d_i. Since d is a canonical base for a over C, C_0(d) is the field of definition of loc^0(a/d). Hence C_0(d_i) is the field of definition of loc^0(a′/d_i), and thus d_i is a canonical base of a′ over d_i. In particular loc^0(a′/d_i) = σ_i(loc^0(a/d)) is irreducible. Since loc^0(a′/d_1d_2) ⊂ loc^0(a′/d_i) have the same dimension, we conclude that loc^0(a′/d_1d_2) = loc^0(a′/d_i).
In particular loc_0(a'/d_1) = loc_0(a'/d_2). Setting σ = σ_2σ_1^{−1}, we get σ(a') = a', σ(d_1) = d_2 and σ(loc_0(a'/d_1)) = loc_0(a'/d_2) = loc_0(a'/d_1). Hence σ fixes loc_0(a'/d_1) setwise, so it must fix its field of definition C_0(d_1) pointwise. Hence σ(d_1) = d_1, and d_2 = d_1 as claimed.
2.1.13. Isogenies. We say that commutative algebraic groups G_1, G_2 are isogenous if there exists an isogeny θ : G_1 → G_2; that is, a surjective algebraic group homomorphism with finite kernel. The relation of being isogenous is an equivalence relation. We will apply in multiple places the following useful criterion for the existence of an isogeny.
Fact 2.13. Let (G; ×) and (G'; +) be connected algebraic groups over an algebraically closed field C_0 of characteristic zero. Suppose the graphs Γ_× and Γ_+ of the group operations are in co-ordinatewise correspondence, and G' is commutative. Then G is also commutative, and is isogenous to G'. Moreover, if (g, h) ∈ G²(K) and (g', h') ∈ (G')²(K) are each generic, and if acl_0(g) = acl_0(g') and acl_0(h) = acl_0(h') and acl_0(g × h) = acl_0(g' + h') (where acl_0(x) = C_0(x)^alg), then there are n ∈ N_{>0}, an isogeny α : G → G' and a point c ∈ G'(C_0) such that αg = ng' + c.
Proof. This is a consequence of [BMP14, Lemme 2.4]. Indeed, that lemma yields, via Remark 2.11, that there is an algebraic subgroup S ≤ G × G' such that the projections to G and G' are surjective and have finite kernels. It follows that G is abelian. Indeed, if g ∈ G then the conjugacy class g^G is finite, so the centraliser C_g is a finite index subgroup of G and hence is equal to G, since the latter is connected. Alternatively, one may assume by the Lefschetz principle that G, G', and S are complex algebraic groups, and observe that S induces an isomorphism of the Lie algebras. Now let n be the exponent of the subgroup coker(S) := {y ∈ G' : (0, y) ∈ S} ≤ G'. Then by setting α(x) := ny whenever (x, y) ∈ S, we obtain a well-defined isogeny α : G → G'.
So G is isogenous to G'. For the "moreover" clause, we use that V := locus_{G×G'}((g, g')/C_0) is a coset of the subgroup S in the above cited lemma. Knowing that G is abelian, we can see this fairly directly as follows. Let S be the stabiliser of tp_0(g, g'), namely S := {γ ∈ G × G' : dim(V ∩ (γ + V)) = dim(V)}. Then S projects surjectively with finite kernel to G and to G', and it follows from our assumptions that V is a coset of S; indeed, this can be seen by applying [Zie06, Theorem 1] to (g, g') + (h, h') = (g × h, g' + h'). Since C_0 is algebraically closed and both V and S are over C_0, there is (c_1, c_2) ∈ (G × G')(C_0) such that V = (c_1, c_2) + S. Then since the projection π_1 : S → G is surjective, there exists c' ∈ G'(C_0) such that (g, g' + c') ∈ S, namely any c' such that (c_1, c_2 − c') ∈ S. Then with α, n as above, we have α(g) = n(g' + c') = ng' + nc', so c := nc' is as required.
2.2. Incidence bounds and Szemerédi-Trotter. As in [ES12] we will require some incidence bounds à la Szemerédi-Trotter in higher dimension. As is well known, if G is a bi-partite graph between vertex sets X_1 and X_2 with the property that no two distinct points in X_2 have more than B common neighbours, then a simple argument via the Cauchy-Schwarz inequality (e.g. see [ES12, Prop. 12]) implies that the number of edges of G is at most O(|X_1|^{1/2}|X_2| + B|X_1|). The theorem of Szemerédi-Trotter and its generalisations (such as [PS98], [ES12, Theorem 9] or more recently [FPS+17, Theorem 1.2]) aim at improving this inequality by some power saving in the situation when the vertex sets X_1 and X_2 are points in Euclidean space and the graph G is given by some algebraic relation. For example Elekes-Szabó prove the following Szemerédi-Trotter-type result:
Theorem 2.14 ([ES12, Theorem 9]). If V ⊂ C^{n_1} × C^{n_2} is a complex algebraic subvariety, there is ε_0 = ε_0(n_2) > 0 such that the following holds. Let B ∈ N. Let X_1 ⊂ C^{n_1} and X_2 ⊂ C^{n_2} be finite subsets.
Write V(y) := {x ∈ C^{n_1} : (x, y) ∈ V} for the fiber above y ∈ C^{n_2}. Assume that for any two distinct y, y' ∈ X_2 the intersection V(y) ∩ V(y') contains at most B points from X_1. Then the number I of incidences (x, y) ∈ X_1 × X_2 with x ∈ V(y) satisfies: I ≤ O_{B,V,n_1}(|X_1|^{(1/2)(1+ε_0)} |X_2|^{1−ε_0} + |X_1| + |X_2| log |X_1|).
We note that this bound has been slightly improved, with a better ε_0 (namely any ε_0 < 1/(4n_2 − 1)) and no log factor, in [FPS+17, Theorem 1.2]. Looking carefully at the proofs of the above theorem, we find that the dependence on B of the big-O is sublinear, that is, O_{B,V,n_1} ≤ B · O_{V,n_1} (see [She19, Problem 11.4]). This aspect will be important for us (we can afford a polynomial dependence). In what follows we spell out how the above incidence bound reads in the formalism of coarse pseudo-finite dimension. With the notation and terminology of Section 2.1 (in particular K is an ultraproduct of fields of characteristic zero and δ is the coarse dimension 2.1.3), we have:
Lemma 2.15 (Szemerédi-Trotter-type bound). Let X_1 ⊂ K^{n_1} and X_2 ⊂ K^{n_2}, suppose each X_i is ∅-internal, and let X = (X_1 × X_2) ∩ V where V ⊂ K^{n_1+n_2} is a K-Zariski closed subset. Assume that δ(X_1), δ(X_2) are both finite. Set β := sup_{a,b ∈ X_2, a ≠ b} δ(X(a) ∩ X(b)), where X(y) := {x ∈ X_1 : (x, y) ∈ X}. Then for some ε_0 > 0 depending only on n_2, writing y^+ := max{0, y}: δ(X) ≤ β + max( (1/2)δ(X_1) + δ(X_2) − ε_0(δ(X_2) − (1/2)δ(X_1))^+, δ(X_1), δ(X_2) ).
Remark 2.16. In the same way, the trivial bound mentioned earlier (via Cauchy-Schwarz) yields the same estimate on δ(X) as above, but with ε_0 = 0. The original Szemerédi-Trotter theorem [ST83] corresponds to the case when X_1 is the ultraproduct of finite sets of points p in R², X_2 is the ultraproduct of finite sets of lines ℓ in R², and V is the incidence relation p ∈ ℓ. In this case ε_0 = 1/3, which is optimal.
Proof of Lemma 2.15. Suppose first that X_1 and X_2 are internal sets, i.e.
X_i = ∏_{s→U} X_i^{K_s} for i = 1, 2, for some X_i^{K_s} ⊂ K_s^{n_i}, and X^{K_s} = (X_1^{K_s} × X_2^{K_s}) ∩ V(K_s). Since δ(X_i) is finite, X_i^{K_s} is finite for U-almost every s. The assumption δ(X(a) ∩ X(b)) ≤ β for each a, b ∈ X_2 implies that for each ε > 0, for U-almost every s we have |X^{K_s}(a) ∩ X^{K_s}(b)| ≤ B_s, where B_s := ξ_s^{β+ε}, and ξ = lim_{s→U} ξ_s is the scaling constant as in §2.1.3. Now Theorem 2.14 implies: |X^{K_s}| ≤ B_s · O_{V,n_1}(|X_1^{K_s}|^{(1/2)(1+ε_0)} |X_2^{K_s}|^{1−ε_0} + |X_1^{K_s}| + |X_2^{K_s}| log |X_1^{K_s}|). Taking logarithms and passing to the ultralimit yields the desired bound. Finally the following claim allows us to reduce to the case when the X_i are internal sets:
Claim 2.17. For any β' > β, there are internal subsets X'_i ⊃ X_i, for i = 1, 2, such that for all a, b ∈ X'_2 with a ≠ b, we have δ(X'(a) ∩ X'(b)) < β', where X' := (X'_1 × X'_2) ∩ V.
Proof. The variety V is defined over a countable (finitely generated) subfield of K, which we denote by k. Since each X_i is ∅-internal, we may work in a language L in which each X_i is ∅-definable and δ is continuous. Note that X(y) = X_1 ∩ V(y) for each y ∈ X_2. Since X_1 is ∅-definable, in view of (2), for any a, b ∈ X_2 with a ≠ b there is a ∅-definable subset X_1^{a,b} ⊃ X_1 such that δ(X_1^{a,b} ∩ V(a) ∩ V(b)) < β'. By continuity of δ (see 2.1.6 and Remark 2.7), there is a k-definable subset Z_{a,b} of K^{n_2} × K^{n_2} containing (a, b) such that δ(X_1^{a,b} ∩ V(a') ∩ V(b')) < β' for all (a', b') ∈ Z_{a,b}. Hence (X_2)² \ ∆ (where ∆ denotes the diagonal) is covered by the family of k-definable sets Z_{a,b}. This is a countable family, because there are only countably many k-definable sets. Combined with the fact that X_2 is ∅-definable, ℵ_1-compactness (see 2.1.2) now implies that there must be a ∅-definable set X'_2 containing X_2 such that (X'_2)² \ ∆ is contained in finitely many Z_{a,b}'s, say Z_{a_1,b_1}, . . . , Z_{a_m,b_m}. Let X'_1 be the intersection of the corresponding X_1^{a_i,b_i}, i = 1, . . . , m.
Then by monotonicity of the coarse dimension, δ(X'_1 ∩ V(a') ∩ V(b')) < β' for all (a', b') ∈ (X'_2)² \ ∆. So X'_1 and X'_2 are as desired.
Warm-up: the Elekes-Szabó theorem
In this section, we show how the proof of the original Elekes-Szabó theorem translates into the non-standard setup expounded in the previous section. This will help us motivate the notions introduced in the following section, where we will pass to the general case of arbitrary dimension and arity and work towards Theorems 1.4 and 1.11. We begin with the one-dimensional case, i.e. we prove Theorem 1.2. A similar result was proven with similar techniques by Hrushovski as [Hru13, Proposition 5.21]. We then proceed to recover Elekes-Szabó's second theorem, which corresponds to the case of a 2d-dimensional variety in (C^d)³, and at the same time add two things: we establish that the associated algebraic group is in fact commutative (this was noted already in [BW16]), and we also give an explicit gap in the power-saving, 1/16 in fact. Although this is indeed new, we include this section mostly for the reader's convenience as a way to introduce some of the ideas in a special case. But a reader only interested in the proof of Theorems 1.4 and 1.11 may safely skip ahead to Section 5.
3.0.1. Abelian group configuration theorem. While Elekes-Szabó used their 'composition lemma' to establish the existence of the associated algebraic group, we will rely directly on the Group Configuration Theorem. This is a by now classical theorem of model theory due to Zilber and Hrushovski. We first recall its statement in the form we need and then describe a variant, due to Hrushovski, which ensures that the associated group is commutative. In this paragraph C_0 ≤ K are arbitrary algebraically closed fields, and we use the notation of §2.1.7; in particular K^{<∞} = ∪_{n>0} K^n and acl_0(A) is the algebraic closure of C_0(A) in K.
Theorem 3.1 (Group configuration theorem). Suppose the six points of the group configuration diagram satisfy the following, for any three distinct points a_1, a_2, a_3 among them:
• if a_1, a_2, a_3 lie on a common line then a_i ∈ acl_0(a_j, a_k) whenever {i, j, k} = {1, 2, 3};
• if a_1, a_2, a_3 do not lie on a common line then a_i is independent from (a_j, a_k) whenever {i, j, k} = {1, 2, 3}.
Then there is a connected algebraic group (G, ·) defined over C_0, and generic elements a', b', c' ∈ G(K), such that each primed element is acl_0-interalgebraic with the corresponding unprimed element, namely acl_0(x) = acl_0(x') for each x ∈ {a, b, c}, and c' = b' · a'.
Remark 3.2. Here, acl_0(x') is to be understood via a coding of elements of G(K) as tuples from K, as discussed in 2.1.10. But since x' is generic, we may equivalently fix a single arbitrary affine patch over C_0 and take co-ordinates there. For a proof of this theorem, we refer the reader to [Pil96, Theorem 5.4.5, Remark 5.4.10]. Strictly speaking, only a ∗-definable (in ACF) group G satisfying the desired conclusions is obtained there, but by [Pil96, Remark 1.6.21], G is in fact definable (in ACF). By the Van den Dries-Hrushovski-Weil theorem [Pil98, Theorem 4.12] any such group is definably isomorphic over C_0 to an algebraic group G as required.
Theorem 3.3 (Abelian group configuration theorem). Suppose the points of the group configuration diagram, among them w, c, y, satisfy the following, for any three distinct points a_1, a_2, a_3:
• if a_1, a_2, a_3 lie on a common line then a_i ∈ acl_0(a_j, a_k) whenever {i, j, k} = {1, 2, 3};
• if a_1, a_2, a_3 do not lie on a common line and {a_1, a_2, a_3} ≠ {w, c, y} then a_i is independent from (a_j, a_k) whenever {i, j, k} = {1, 2, 3}.
Then there is a connected commutative algebraic group G defined over C_0, and generics a', b', c' ∈ G(K), such that each primed element is acl_0-interalgebraic with the corresponding unprimed element, and c' = b' + a'.
Note that the hypotheses of Theorem 3.1 are satisfied, so we need only show that our additional assumptions yield that the algebraic group G obtained from that theorem is commutative. We refer to [BHM17, Theorem C.1] for a proof of this.
3.0.2. Elekes-Szabó: one-dimensional case.
In this paragraph we reprove the original Elekes-Szabó theorem, namely Theorem 1.2. We start by reformulating it in the non-standard setup of the last section; in particular we keep the notation of Section 2.1. So K is an ultrapower of the complex field, δ is the coarse dimension 2.1.3, which is continuous in a countable language L containing L_ring and constant symbols for each element of the countable algebraically closed field C_0 over which V is defined, and d_0 denotes transcendence degree over C_0.
Theorem 3.4 (Reformulation of Theorem 1.2). Let a_1, a_2, a_3 ∈ K and assume that for all i ≠ j, d_0(a_i, a_j) = d_0(a_1, a_2, a_3) = 2, δ(a_i) ≤ 1 and δ(a_1, a_2, a_3) = 2. Then there exists a connected one-dimensional algebraic group G over C_0 and a'_1, a'_2, a'_3 ∈ G(K) with acl_0(a_i) = acl_0(a'_i) for i = 1, 2, 3 and a'_3 = a'_1 + a'_2.
Proof of Theorem 1.2 from Theorem 3.4. Assume V ⊂ C³ does not project to a curve on two co-ordinates and has no power-saving. Then we may find a sequence of positive integers (N_s)_{s≥0} with lim_{s→∞} N_s = +∞ and finite subsets X_1^s, X_2^s and X_3^s in C with |X_i^s| ≤ N_s for each i, s, such that |X_1^s × X_2^s × X_3^s ∩ V| ≥ N_s^{2−ε_s} for some ε_s > 0 with lim_{s→∞} ε_s = 0. Passing to an ultraproduct X_i = ∏_{s→U} X_i^s for some non-principal ultrafilter U over the integers, we obtain three internal sets X_i ⊂ K, where K is the ultrapower of C, and we define the coarse dimension δ as in 2.1.3 with scaling constant ξ = lim_{s→U} N_s. Hence δ(X_i) ≤ 1 for each i and δ(X_1 × X_2 × X_3 ∩ V) = 2. Since V is irreducible and does not project to a curve on two co-ordinates, the fibers of co-ordinate projections of V on pairs of co-ordinates have uniformly bounded size. Consequently |X_1^s × X_2^s × X_3^s ∩ V| = O(|X_i^s × X_j^s|) for all s and all i ≠ j. It follows that 2 = δ(X_1 × X_2 × X_3 ∩ V) ≤ δ(X_i) + δ(X_j), and hence that δ(X_i) = 1 for each i.
The variety V is defined over some finitely generated subfield of C. Let C_0 be its algebraic closure in C; it is a countable subfield. To be able to talk about definable sets we specify a language L as follows: we start with the language of rings L_ring = (+, ·, 0, 1) and enlarge it by adding a constant symbol for each element of C_0 as well as a predicate for each X_i, i = 1, 2, 3, thus in effect forcing the X_i to be definable. Finally we enlarge L as in §2.1.6 so as to make δ continuous and hence additive. Now by Fact 2.4 we may find a triple (a_1, a_2, a_3) ∈ X_1 × X_2 × X_3 ∩ V such that δ(a_1, a_2, a_3) = δ(X_1 × X_2 × X_3 ∩ V) = 2. Note that (a_1, a_2, a_3) is generic in V, i.e. it is not contained in any proper algebraic subvariety over the base field C_0, because |X_1^s × X_2^s × X_3^s ∩ W| = O_W(N_s) for every one-dimensional subvariety W ⊊ V over C_0, and so δ(X_1 × X_2 × X_3 ∩ W) ≤ 1. Consequently d_0(a_1, a_2, a_3) = d_0(a_i, a_j) = 2 for all i ≠ j. So we are in the situation of Theorem 3.4. Then loc_0(a'_1, a'_2, a'_3) is the graph Γ_G(C) of the group operation of G, and we conclude that V has the required description via the correspondence loc_0((a_1, a_2, a_3), (a'_1, a'_2, a'_3)) ⊂ V × Γ_G(C), which is defined over C_0 and projects to the correspondences given by the (irreducible) curves loc_0(a_i, a'_i) ⊂ C × G(C).
We now pass to the proof of Theorem 3.4. We need to verify that the hypotheses of the group configuration are met. For this we crucially need the following lemma, which can be interpreted as saying that a 2-parameter family of plane curves with no power-saving must in fact be one-dimensional. This is where the Szemerédi-Trotter bound comes into play.
Lemma 3.5. Let x_1, . . . , x_4 ∈ K be such that δ(x_i) = 1 and δ(x_1, . . . , x_4) = d_0(x_1, . . . , x_4). Assume that d_0(x_1, x_2/x_3, x_4) = 1.
Then there is x_5 ∈ acl_0(x_3, x_4) with δ(x_5) = d_0(x_5) = 1 such that d_0(x_1, x_2/x_5) = 1.
Proof. We postpone the proof of this lemma to the next subsection, where a stronger quantitative version of it will be proven as Lemma 3.11. It is also a special case of Proposition 5.14.
Proof of Theorem 3.4. First note that the assumptions imply that δ(a_i) = 1 for each i. Indeed, for any three distinct i, j, k we have a_i ∈ acl_0(a_j, a_k). Hence by (7) we have δ(a_i/a_j, a_k) = 0, and by additivity of δ (see Fact 2.8) we get (9) δ(a_i, a_j, a_k) = δ(a_i/a_j, a_k) + δ(a_j, a_k) = δ(a_j, a_k) ≤ δ(a_j) + δ(a_k). This forces δ(a_j) and δ(a_k) to be equal to 1, since both are ≤ 1. Let X = tp(a_2, a_3/a_1)(K) be the set of realisations of the type of the pair (a_2, a_3) over a_1, namely the intersection of all definable sets over C := {a_1} containing (a_2, a_3). By additivity of δ we have δ(X) = δ(a_1, a_2, a_3) − δ(a_1) = 2 − 1 = 1, by assumption. According to Fact 2.4 we can find (a_4, a_5) ∈ X such that (10) δ(a_4, a_5/a_1, a_2, a_3) = δ(X) = 1. We will show that there is a_6 ∈ K such that a_1, . . . , a_6 satisfy the hypotheses of the group configuration theorem, arranged as in the group configuration diagram. Since (a_4, a_5) and (a_2, a_3) have the same type over a_1, they have the same type over the empty set, and in particular they belong to the same algebraic subsets of K² defined over C_0. So d_0(a_4, a_5, a_1) = 2, and d_0(a_4, a_5) = d_0(a_4, a_1) = d_0(a_5, a_1) = 2. Moreover the Zariski dimension of the whole system is 3, i.e. d_0(a_1, . . . , a_5) = 3. Indeed it is at most 3 given that a_5 ∈ acl_0(a_1, a_4) and a_3 ∈ acl_0(a_1, a_2), but it cannot be less, for otherwise d_0(a_4, a_5/a_1, a_2, a_3) = 0, forcing δ(a_4, a_5/a_1, a_2, a_3) = 0 by (7), a contradiction to (10).
Claim: d_0(a_3, a_4) = 2, d_0(a_2, a_5) = 2 and d_0(a_2, a_5/a_3, a_4) = 1.
Indeed, if d_0(a_3, a_4) < 2, then a_4 ∈ acl_0(a_3), and thus d_0(a_1, . . . , a_5) = d_0(a_1, a_2, a_3, a_4) = d_0(a_1, a_2, a_3) = 2, where we have used that a_5 ∈ acl_0(a_1, a_4); this contradicts d_0(a_1, . . . , a_5) = 3. In a similar way d_0(a_2, a_5) = 2. Now by additivity of d_0 we finally get d_0(a_2, a_5/a_3, a_4) = 1, proving the claim. Further note that by additivity and (10) we have δ(a_2, a_3, a_4, a_5) = δ(a_4, a_5/a_2, a_3) + δ(a_2, a_3) = 1 + 2 = 3 = d_0(a_2, a_3, a_4, a_5). So Lemma 3.5 applies and gives a_6 ∈ acl_0(a_3, a_4) such that d_0(a_2, a_5/a_6) = 1 and d_0(a_6) = 1. It then follows easily by additivity of d_0 that d_0(a_6, a_2) = d_0(a_5, a_6) = 2 and a_6 ∈ acl_0(a_2, a_5). This shows that a_1, . . . , a_6 satisfy the hypotheses of the group configuration theorem. We are done.
3.0.3. Coarse general position. A significant new difficulty arises when dealing with the higher dimensional situation, i.e. when m = dim W_i > 1, say, in Theorem 1.11. We will have to assume that the finite sets X_i ⊂ W_i do not have too large an intersection with proper subvarieties. There are various ways to quantify this assumption; for instance, Elekes-Szabó's notion of general position requires that the intersections have bounded size, with a bound depending only on the complexity of the subvariety. We will adopt here the weaker assumption of coarse general position. As explained in Section 4 below, some assumption of this kind is necessary for the result to hold. Recall from Definition 1.7 that for τ ∈ N, a finite subset X of a complex algebraic variety W is said to be in coarse (C, τ)-general position (or (C, τ)-cgp for short) with respect to W if |W' ∩ X| ≤ |X|^{1/τ} for any proper irreducible complex subvariety W' ⊊ W of complexity at most C ∈ N. In the non-standard setup of Section 2, where we have specified a language L and defined the coarse dimension δ, it will be convenient to define a notion of coarse general position for tuples a ∈ K^{<∞}.
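To make the finite-set definition concrete, here is a hedged toy check in Python (the grid, the family of lines, and the helper names are ours, not from the paper): an N × N integer grid in C² meets a vertical line in N = |X|^{1/2} points, so it is in coarse (C, 2)-general position but not in coarse (C, 3)-general position with respect to a family of lines.

```python
def is_cgp(X, subvarieties, tau):
    """Check |X ∩ W'| <= |X|**(1/tau) for every proper subvariety W' in the family.

    X is a finite set of points; each subvariety is given as a membership predicate.
    """
    bound = len(X) ** (1.0 / tau)
    return all(sum(1 for p in X if W(p)) <= bound for W in subvarieties)

N = 10
X = {(a, b) for a in range(N) for b in range(N)}  # |X| = N**2 = 100

# A small family of lines of bounded complexity: verticals x = c,
# horizontals y = c, and diagonals x - y = c.
lines = (
    [lambda p, c=c: p[0] == c for c in range(N)]
    + [lambda p, c=c: p[1] == c for c in range(N)]
    + [lambda p, c=c: p[0] - p[1] == c for c in range(-N + 1, N)]
)

# Each line meets the grid in at most N = |X|**(1/2) points:
print(is_cgp(X, lines, 2))  # True:  N <= 100 ** (1/2)
print(is_cgp(X, lines, 3))  # False: N >  100 ** (1/3)
```

This is exactly the failure mode exploited by the counterexample in Section 4: a set can be in general position in a weak sense while a low-complexity subvariety still captures a power |X|^{1/τ'} of its points for small τ'.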
We will say that a ∈ K^{<∞} is in coarse general position, or is cgp for short, if for every b ∈ K^{<∞} such that a is not independent from b, that is, such that d_0(a/b) < d_0(a), we have δ(a/b) = 0. The two notions are closely related, as follows. Suppose W ⊂ C^n is a variety and X = ∏_{s→U} X^s ⊂ W(K) is an internal set. Assume that X is definable without parameters in the countable language L of Section 2 for which the coarse dimension δ is continuous (see 2.1.6).
Lemma 3.6. Suppose that 0 < δ(X) < ∞ and that for any τ ∈ N, there is C ≥ τ such that for U-many s, X^s is (C, τ)-cgp in W(C). Then any tuple a ∈ K^n lying in X is cgp.
Proof. Let b ∈ K^{<∞} be such that d_0(a/b) < d_0(a). Then setting W' = loc_0(a/acl_0(b)) we get an absolutely irreducible subvariety of W which is proper, since by (8) one has dim(W') = d_0(a/b) < dim W, and which contains a. Let c be the complexity of W'. Then for every τ > c, U-many X^s are (c, τ)-cgp in W(C), and this implies that δ(a/acl_0(b)) ≤ δ(X ∩ W'(K)) = 0. Hence δ(a/b) = 0 by (7).
Remark 3.7. The property of being cgp for a tuple a ∈ K^m depends only on the type tp(a) of a. Indeed, suppose a is cgp, a' ∈ K^m has the same type as a, and b ∈ K^n is such that a' is not independent from b. Then there is b' ∈ K^n such that tp(a, b') = tp(a', b), by ℵ_1-compactness of K. So a is not independent from b', and so δ(a/b') = 0. But then by invariance of δ (see 2.8ii.), we have δ(a'/b) = δ(a/b') = 0.
3.0.4. Higher dimensional case: Elekes-Szabó with a gap and commutativity. We now move on to the second theorem proved by Elekes-Szabó in [ES12], which is the extension of Theorem 1.2 to higher dimensional varieties. We give a proof following the strategy used above in the one-dimensional case. As a payoff we will also get an explicit bound, 1/16, on the power-saving, and we will establish that the group involved must be commutative. This feature (rather, the nilpotency) had been hinted at already by Elekes-Szabó (see their Example 28 in [ES12]), but was first established by H.
Wang and the second named author [BW16], via a different argument using the classification of approximate groups from [BGT11]. As before we consider three irreducible complex varieties W_1, W_2, W_3 of dimension d. We say that a subvariety V ⊂ ∏_i W_i admits a power-saving η > 0 if there exists τ ∈ N such that |V ∩ X_1 × X_2 × X_3| ≤ O_{V,τ}(N^{dim(V)−η}) for every N ∈ N and all finite subsets X_i ⊂ W_i with |X_i| ≤ N^d and each X_i in coarse τ-general position in W_i.
Theorem 3.8. Suppose V ⊂ W_1 × W_2 × W_3 are irreducible complex varieties, with dim(W_i) = d and dim(π_{ij}(V)) = 2d = dim(V) for all i ≠ j ∈ {1, 2, 3}. Then either V admits a power-saving 1/16, or V is in co-ordinatewise correspondence with the graph Γ_+ ⊂ G³ of the group operation of a commutative algebraic group G.
Remark 3.9. Note that we obtain a power saving which is independent of d. In fact the method gives a power-saving η for any η < d/(16d − 1), which is slightly better. For d = 1, a power-saving of 1/6 was obtained by Wang [Wan14] and independently by Raz, Sharir, and de Zeeuw [RSDZ16]. The method given below also gives 1/6 when d = 1; see Remark 3.14.
The remainder of this section is devoted to the proof of Theorem 3.8. As in the one-dimensional case, we first reformulate the result in the framework of coarse dimension, using the notation of Section 2.
Theorem 3.10 (Reformulation of Theorem 3.8). Suppose a_1, a_2, a_3 ∈ K^d with d_0(a_i, a_j) = d_0(a_1, a_2, a_3) = 2d, δ(a_i) ≤ d and δ(a_1, a_2, a_3) ∈ [2d − 1/16, 2d]. Assume that each a_i is cgp in the sense of 3.0.3. Then there is a d-dimensional commutative algebraic group G over C_0, and a'_1, a'_2, a'_3 ∈ G(K), such that d_0(a'_i) = d, acl_0(a_i) = acl_0(a'_i) and a'_1 + a'_2 = a'_3.
Reduction of Theorem 3.8 to Theorem 3.10. This is essentially the same argument as in the one-dimensional case, so we shall be brief. Let η = 1/16.
Arguing by contradiction and carefully negating quantifiers, we obtain an increasing sequence of integers (N_s)_{s≥0} and a sequence of finite sets X_i^s ⊂ W_i(C) in coarse s-general position with |X_i^s| ≤ N_s^d but |X_1^s × X_2^s × X_3^s ∩ V| ≥ N_s^{2d−η}. Passing to an ultralimit we obtain three internal sets X_i ⊂ W_i(K), which we add as predicates to our language (thus turning them into definable sets). Clearly δ(X_i) ≤ d and δ(X_1 × X_2 × X_3 ∩ V) ≥ 2d − η. By Fact 2.4 we find three tuples a_i ∈ X_i such that δ(a_1, a_2, a_3) = δ(X_1 × X_2 × X_3 ∩ V). By Lemma 3.6 each a_i is cgp. Moreover (a_1, a_2, a_3) is generic in V, for otherwise it would be contained in a subvariety W ⊊ V over C_0, forcing δ(a_1, a_2, a_3) ≤ δ(X_1 × X_2 × X_3 ∩ W) ≤ dim W ≤ 2d − 1 (see Lemma 7.1). Therefore d_0(a_i, a_j) = d_0(a_1, a_2, a_3) = 2d, and the assumptions of Theorem 3.10 are met.
Analogously to the one-dimensional case, the Szemerédi-Trotter bounds of Lemma 2.15 will be used to prove the following crucial step, which shows that the 2-parameter family of varieties loc_0(a/b) is in fact a 1-parameter family.
Lemma 3.11. Let η ∈ [0, d/(8d − 1)). Let x_1, . . . , x_4 ∈ K^d, and set a = (x_1, x_2) and b = (x_3, x_4). Assume each x_i is cgp and δ(x_i) ≤ d_0(x_i) = d. Assume further δ(a, b) ≥ d_0(a, b) − η and d_0(a) = d_0(b) = 2d and d_0(a/b) = d. Then there is x_5 ∈ acl_0(b)^{<∞} ∩ acl_0(a)^{<∞} with d_0(x_5) = d_0(a/x_5) = d.
Proof. First note that δ(b) ≥ d_0(b) − η. To see this, observe that by cgp either δ(x_2/bx_1) = 0 or δ(x_1/b) = 0. Indeed, otherwise d_0(x_2/bx_1) = d_0(x_2) = d and d_0(x_1/b) = d_0(x_1) = d, contradicting d_0(x_1, x_2/b) = d. So we have δ(a/b) = δ(x_2/bx_1) + δ(x_1/b) ≤ d = d_0(a/b), and so δ(b) = δ(a, b) − δ(a/b) ≥ 3d − η − d = d_0(b) − η. Let d ∈ K^{<∞} be a canonical base for a over b (see §2.1.12). By definition d ∈ acl_0(b)^{<∞}. We will show that x_5 := d satisfies the desired conclusion.
By definition we have d_0(a/d) = d_0(a/b) = d. Hence d_0(d) = d_0(a, d) − d_0(a/d) ≥ d_0(a) − d = d. It only remains to show the upper bound d_0(d) ≤ d. Indeed, this will also imply that d ∈ acl_0(a)^{<∞}, because then d_0(d) = d and d_0(ad) = d_0(a/d) + d_0(d) = 2d, so d_0(ad) = d_0(a) and thus d_0(d/a) = 0. So suppose that d_0(d) > d. Then d_0(b/d) = d_0(b) − d_0(d) < d. Writing d_0(b/d) = d_0(x_3/d) + d_0(x_4/x_3, d), each term is < d. Since each x_i is cgp, this forces δ(b/d) = 0. Consequently δ(d) = δ(bd). But δ(d/b) = 0 by (7). So δ(d) = δ(b). However δ(b) ≥ d_0(b) − η = 2d − η. Using the Szemerédi-Trotter bound of Lemma 2.15 we will obtain an upper bound on δ(d) contradicting this lower bound. By the primitive element theorem, the field extension C_0(d) is generated over C_0 by a tuple of length d_0(d) + 1. So we may assume d ∈ K^{d_0(d)+1}. Assume first that d_0(d) < 2d, so max(2d, d_0(d) + 1) = 2d; we handle the case d_0(d) = 2d separately below. Now let V = loc_0(ad), X := (X_1 × X_2) ∩ V, with X_1 = tp(a)(K) and X_2 = tp(d)(K). Note that X_1 ⊂ K^{2d} and X_2 ⊂ K^{d_0(d)+1}. First we check that δ(X(d_1) ∩ X(d_2)) = 0 when d_1 ≠ d_2 belong to X_2, so that β = 0 in Lemma 2.15. Recall that X(d_1) := {a' ∈ X_1 : a'd_1 ∈ V}. By Fact 2.4 we may find a' = a'_1a'_2 ∈ X(d_1) ∩ X(d_2) such that δ(a'/d_1d_2) = δ(X(d_1) ∩ X(d_2)). Then a'd_i ∈ V for i = 1, 2. We are thus in the setting of Lemma 2.12. Since d_1 ≠ d_2, we conclude that d_0(a'/d_1d_2) < d. Hence d_0(a'_i/d_1d_2) < d, and since both a'_i are cgp (because cgp is type invariant, see Remark 3.7), we conclude that δ(a'_i/d_1d_2) = 0 for each i. Thus by additivity δ(X(d_1) ∩ X(d_2)) = δ(a'/d_1d_2) = 0, as claimed.
The Szemerédi-Trotter bound of Lemma 2.15 then gives δ(X) ≤ max( (1/2)δ(X_1) + δ(X_2) − ε_0(δ(X_2) − (1/2)δ(X_1))^+, δ(X_1), δ(X_2) ); using the incidence bound of [FPS+17, Theorem 1.2] mentioned after Theorem 2.14, we obtain this bound for all ε_0 ∈ [0, 1/(8d − 1)). If the maximum on the right hand side is δ(X_1), then δ(X_2) ≤ (1/2)δ(X_1) ≤ d, contradicting our lower bound δ(X_2) = δ(d) ≥ 2d − η above. If the maximum is δ(X_2), then δ(ad) ≤ δ(X) ≤ δ(d), so δ(a/d) = 0 and hence δ(a/b) = 0 by (7). But δ(a/b) = δ(ab) − δ(b) ≥ d − η > 0. So we must conclude that δ(a/b) + δ(d) ≤ δ(X) ≤ (1/2)δ(a) + δ(d) − ε_0(δ(d) − (1/2)δ(a)). In other words, since δ(d) = δ(b): (1/ε_0)δ(ab) ≤ (1/2)δ(a)(1/ε_0 + 1) + δ(b)(1/ε_0 − 1). Finally, since δ(a), δ(b) ≤ 2d and δ(ab) ≥ 3d − η by assumption, we conclude that η ≥ d/(8d − 1), a contradiction to our assumption. We assumed above that d_0(d) < 2d, and so we conclude from this contradiction that d_0(d) = d or d_0(d) = 2d. It remains to rule out the latter case. So suppose d_0(d) = 2d; then d_0(b/d) = 0, so b ∈ acl_0(d). Let V_d := loc_0(a/d), let d_1, . . . , d_r be the conjugates of d over C_0(b), and let W_b := loc_0(a/b); then W_b = ∪_i V_{d_i}. Indeed, any V_{d_i} is an automorphic image over C_0(b) of V_d and so is a component, and conversely ∪_i V_{d_i} is automorphism-invariant over C_0(b) and hence is defined over C_0(b), so W_b ⊂ ∪_i V_{d_i}.
Claim 3.12. There are only finitely many b' ∈ tp_0(b)(K) with dim(W_b ∩ W_{b'}) = d.
Proof. Fix field automorphisms τ_{ij} ∈ Aut(K/C_0) such that τ_{ij}(d_i) = d_j. Suppose b' ∈ tp_0(b)(K) and dim(W_b ∩ W_{b'}) = d. Then W_b and W_{b'} share a component, so say V_{d_j} ⊂ W_{b'}. Now let σ ∈ Aut(K/C_0) be such that b' = σ(b). Since the σ(V_{d_i}) are the irreducible components of W_{b'}, for some i we have σ(V_{d_i}) = V_{d_j}. Let σ' := τ_{ij}^{−1}σ.
Then σ'(V_{d_i}) = τ_{ij}^{−1}(V_{d_j}) = V_{τ_{ij}^{−1}(d_j)} = V_{d_i}, so σ' fixes V_{d_i} setwise and hence fixes its field of definition C_0(d_i) pointwise; in particular σ'(d_i) = d_i, and so σ(d_i) = d_j. Since b ∈ acl_0(d_i), the image b' = σ(b) then lies in a finite set determined by d_j; as there are finitely many pairs (i, j), there are only finitely many possibilities for b'. Consider now the graph on tp_0(b)(K) in which distinct b', b'' are adjacent if and only if dim(W_{b'} ∩ W_{b''}) = d; by Claim 3.12 and invariance, it has finite maximal degree, say k.
Claim 3.13. Let (A, E) be a graph of maximal degree at most k whose vertex set A ⊂ K^n is ∅-internal. Then there is an internal anticlique A' ⊂ A with δ(A') = δ(A).
Proof. If A is internal, then G is the ultraproduct of finite graphs G_i = (A_i, E_i) of maximal degree k. Then G_i has chromatic number at most k + 1, and so has an anticlique of size at least |A_i|/(k + 1). The ultraproduct of such anticliques is then an internal anticlique A' ⊂ A as required. In general, our ∅-internal A is, by ℵ_1-compactness of the ultraproduct, contained in an internal A_0 such that (A_0, E) has maximal degree at most k, because the property of having maximal degree at most k can be expressed as the inconsistency of a partial (k + 1)-type. So then the same holds for all internal A_1 with A ⊂ A_1 ⊂ A_0, and hence the claim follows from the internal case.
Now let X_2 ⊂ tp(b)(K) be an anticlique as in Claim 3.13 for the graph defined above, and X_1 := tp(a)(K) and X := (X_1 × X_2) ∩ W. If a' ∈ X(b_1) ∩ X(b_2) then d_0(a'/b_1b_2) ≤ dim(W_{b_1} ∩ W_{b_2}) < d since X_2 is an anticlique, and so δ(a'/b_1b_2) = 0 by cgp. So we contradict the Szemerédi-Trotter bound exactly as in the case d < d_0(d) < 2d above. This contradiction shows that d_0(d) = d and ends the proof.
Proof of Theorem 3.10. Here again the strategy is the same as in the 1-dimensional case, so we shall be brief. Let η = 1/16. As before set X = tp(a_2, a_3/a_1)(K) and note that δ(a_1, a_2, a_3) = δ(X) + δ(a_1) by additivity of δ. It follows that δ(X) ≥ d − η. By Fact 2.4 we may find (a_4, a_5) ∈ X with δ(a_4, a_5/a_1, a_2, a_3) = δ(X). Note that a_4, a_5 are both cgp (see Remark 3.7). We will show that there are a_6 and a_7 in K^d such that (a_1, . . . , a_7) satisfy the hypotheses of the abelian group configuration theorem, arranged as in the abelian group configuration diagram. As earlier, we have d_0(a_4, a_5, a_1) = 2d, and d_0(a_4, a_5) = d_0(a_4, a_1) = d_0(a_5, a_1) = 2d. And we also have d_0(a_1, . . . , a_5) = 3d. Indeed, otherwise d_0(a_4, a_5/a_1, a_2, a_3) < d, which implies that d_0(a_i/a_1, a_2, a_3) < d for i = 4, 5 and thus δ(a_i/a_1, a_2, a_3) = 0, since a_4, a_5 are cgp. But this contradicts δ(a_4, a_5/a_1, a_2, a_3) = δ(X) > 0.
Again as in the 1-dimensional case, using only the additivity of d_0, we conclude that d_0(a_3, a_4) = d_0(a_2, a_5) = 2d and d_0(a_2, a_5/a_3, a_4) = d, and also that d_0(a_2, a_4) = d_0(a_3, a_5) = 2d and d_0(a_3, a_5/a_2, a_4) = d. Moreover δ(a_2, a_3) = δ(a_1, a_2, a_3) by additivity, since a_1 is cgp. Similarly, by additivity δ(a_2, a_3, a_4, a_5) = δ(a_1, . . . , a_5) = δ(X) + δ(a_1, a_2, a_3), hence this is ≥ 3d − 2η. Since 2η ≤ 1/8 < d/(8d − 1), we are thus in a position to apply Lemma 3.11 both to a_2, a_5, a_3, a_4 and to a_2, a_4, a_3, a_5. This yields a_6 and a_7 as desired and concludes the proof of the theorem.
Remark 3.14 (Quality of the power-saving). The quality of the power-saving depends crucially on the quality of ε_0 in the Szemerédi-Trotter type bound of Lemma 2.15. We immediately lose a factor 2 because this bound is usually proven for real algebraic varieties, while we consider complex varieties and somewhat carelessly view them as real varieties of twice the dimension. It is plausible that the bound ε_0 = 1/(2n_2 − 1) holds in Theorem 2.14. In fact this is known when n = 1 (see [SSZ18] or [RSDZ16, Thm 4.3]), and consequently Lemma 3.11 holds for all η ∈ [0, 1/3) when d = 1 and thus yields a power-saving η for any η < 1/6 in the 1-dimensional Elekes-Szabó theorem. This recovers the bound obtained in [Wan14] and [RSDZ16]. The latter work however gave more precise information on the multiplicative constant and on the dependence on the degree of the variety V, which is an aspect we do not investigate in our paper (it would require working with the Hrushovski-Wagner fine pseudo-finite dimension, while we restrict attention to the coarse dimension δ).
The following corollary indicates the robustness of the commutativity of the group in Theorem 3.8.
Corollary 3.15. Suppose (G; ·) is a connected complex algebraic group. Suppose the graph Γ of multiplication admits no power-saving. Then G is commutative.
Proof.
By Theorem 3.8, Γ is in co-ordinatewise correspondence with the graph Γ_+ ⊂ G'^3 of the group operation of a commutative connected algebraic group G'. So this is an immediate consequence of Fact 2.13. Remark 3.16. Another proof of this corollary was noted in [BW16]. It can be derived as a consequence of the Balog-Szemerédi-Gowers-Tao theorem combined with [BGT11, Theorem 2.5], one of the main results of [BGT11], which was proven there for linear algebraic groups but can be extended to all algebraic groups. In the following sections we will handle the general case of a cartesian product of an arbitrary number, say n, of subvarieties. As in the reformulations of Elekes-Szabó's statements expounded above, it is easy to see that a subvariety without power-saving leads to a tuple (a_1, . . . , a_n) such that each a_i is cgp, belongs to K^d and has δ(a_i) ≤ d, and such that δ(a_1, . . . , a_n) = d_0(a_1, . . . , a_n). In Sections 5 and 6 we will forget for a moment the original combinatorial problem and focus entirely on the study of these tuples. Then in Section 7 we will return to combinatorics and give a proof of Theorem 1.11. Necessity of general position. We give an example showing that Theorem 3.8 fails dramatically if we weaken too far the coarse general position assumption in the definition of power-saving. Indeed, varieties which are not even in correspondence with a group operation can then admit no power-saving, even when the finite sets are assumed to be, say, in weak general position, namely assuming that δ always drops when there is an algebraic dependence. Define an operation * : C^2 × C^2 → C^2 by (a_1, b_1) * (a_2, b_2) = (a_1 + a_2 + b_1^2 b_2^2, b_1 + b_2), and let V := Γ_* be its graph {(x, y, x * y) | x, y ∈ C^2}. Set X_N := {0, . . . , N^4 − 1} × {0, . . . , N − 1} ⊂ C^2. Then |X_N^3 ∩ Γ_*| ≥ Ω(|X_N|^2); indeed, if a_i < N^4/3 and b_i < N/2 for i = 1, 2, then (a_1, b_1) * (a_2, b_2) ∈ X_N.
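The closure claim above is easy to verify by brute force for small N. The following standalone Python sketch (ours, not part of the paper; all names are our own) counts the pairs in the restricted box and confirms that each one lands back in X_N, giving the quadratic lower bound.

```python
# Sanity check (ours): pairs with a_i < N^4/3 and b_i < N/2 stay inside X_N
# under the operation *, so |X_N^3 ∩ Γ_*| ≥ Ω(|X_N|^2).

def star(p, q):
    (a1, b1), (a2, b2) = p, q
    # (a1, b1) * (a2, b2) = (a1 + a2 + b1^2 b2^2, b1 + b2)
    return (a1 + a2 + b1 ** 2 * b2 ** 2, b1 + b2)

def in_X(p, N):
    # membership in X_N = {0, ..., N^4 - 1} x {0, ..., N - 1}
    a, b = p
    return 0 <= a < N ** 4 and 0 <= b < N

def count_closed_pairs(N):
    # count pairs (x, y) in the restricted box with x * y in X_N
    hits = 0
    for a1 in range(N ** 4 // 3):
        for b1 in range(N // 2):
            for a2 in range(N ** 4 // 3):
                for b2 in range(N // 2):
                    if in_X(star((a1, b1), (a2, b2)), N):
                        hits += 1
    return hits
```

For N = 4, all (N^4//3 · N//2)^2 = 28900 pairs land in X_N, which is a constant fraction of |X_N|^2 = N^10.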
Now |X_N ∩ ({0} × C)| = N = |X_N|^{1/5}, so X_N is not 6-cgp. Nevertheless X_N is still in weak general position, namely |X_N ∩ W| = O_W(N^4) for every algebraic curve W ≤ C^2, and N^4 ≤ |X_N|^{1−1/5}. But if we were to remove the cgp assumption in the definition of power-saving, then (X_N)_N would witness that Γ_* admits no power-saving. However, one can show that Γ_* is not in co-ordinatewise correspondence with the graph of the group operation of any complex algebraic group (G; ·). We sketch a proof of this. Suppose such a group (G; ·) and a correspondence exist, defined over a finitely generated field A ≤ C. Then if we take independent generics x_1, x_2, y_1, y_2 ∈ C^2 over A and set z_ij := x_i * y_j, then z_22 lies in the algebraic closure acl(z_11, z_12, z_21) of A(z_11, z_12, z_21). This follows from the equation in the algebraic group G, x_2 · y_2 = (x_2 · y_1)(x_1 · y_1)^{−1}(x_1 · y_2); cf. [HZ96, 6.2] and [Tao15, Theorem 41], where a converse to this is proven in the 1-dimensional case. But if one takes z_11, z_12, z_21, x_2 independent generics and calculates in order y_1, x_1, y_2, z_22 using the definition of *, then x_1, x_2, y_1, y_2 are also independent generics, and writing z_11 = (z'_11, z''_11) and so on, one obtains z'_22 = z'_21 + z'_12 − z'_11 − (x''_2)^2(z''_21 − x''_2)^2 + (z''_11 − z''_21 + x''_2)^2(z''_21 − x''_2)^2 − (z''_11 − z''_21 + x''_2)^2(z''_12 − z''_11 + z''_21 − x''_2)^2 + (x''_2)^2(z''_12 − z''_11 + z''_21 − x''_2)^2 and z''_22 = z''_21 + z''_12 − z''_11, in which e.g. the monomial x''_2 z''_11 z''_21 z''_12 has a non-zero coefficient, and so z_22 is not independent from x_2; but z_11, z_12, z_21 is independent from x_2 by assumption, so z_22 ∉ acl(z_11, z_12, z_21). Projective geometries arising from varieties without power-saving. As hinted in Section 3, the proof of our main results will rely on the study of cgp tuples from K^m whose algebraic dimension d_0 coincides with their pseudo-finite coarse dimension δ.
In this section we study the geometry underlying these tuples, and prove Theorem 5.9 below, which establishes that the associated geometry is modular and hence satisfies the Veblen axiom, making it, via the Veblen-Young co-ordinatisation theorem, a sum of projective geometries over division rings. 5.1. Geometries and modularity. We begin by recalling some classical terminology and basic results regarding abstract projective geometry (see [Art57, Cam92]) and the general notions of pregeometries, geometries and modularity. Definition. A closure structure is a set P with a map cl : P(P) → P(P) such that for A, B ⊂ P we have A ⊂ cl(A), A ⊂ B ⇒ cl(A) ⊂ cl(B), and cl(cl(A)) = cl(A). A subset of P is closed if it is in the image of cl. A closure structure (P, cl) is a pregeometry if the following two properties also hold: • Exchange: b ∈ cl(A ∪ {c}) \ cl(A) ⇒ c ∈ cl(A ∪ {b}); • Finite character: cl(A) = ⋃_{A_0 ⊂ A, A_0 finite} cl(A_0). Finite pregeometries are also known as matroids. Let (P, cl) be a pregeometry. For A, B ⊂ P, a basis for A over B is a subset A' ⊂ A of minimal size such that cl(A' ∪ B) = cl(A ∪ B). Any two bases have the same cardinality, which is denoted by dim(A/B) and called the dimension of A over B. When B is empty, this is the dimension of A, which we denote as usual by dim(A). Subsets A, B ⊂ P are independent over a subset C ⊂ P, written A ⫝_C B, if dim(A'/B ∪ C) = dim(A'/C) for any finite A' ⊂ A. A pregeometry (P, cl) is a geometry if cl(∅) = ∅ and cl({a}) = {a} for a ∈ P. Every pregeometry gives rise to a geometry by projectivisation: the projectivisation of a closure structure (P, cl) is the closure structure P(P, cl) with points {cl({a}) : a ∉ cl(∅)} and the induced closure. If (P, cl) is a pregeometry, then P(P, cl) is the associated geometry. A geometry (P, cl) is said to be modular if for all distinct a_1, a_2 ∈ P and B ⊂ P with a_2 ∈ cl({a_1} ∪ B) \ cl(B), there exists d ∈ cl(B) such that d ∈ cl({a_1, a_2}).
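The modularity condition just defined can be made concrete in the pregeometry of linear span over a finite field, where it is equivalent to the standard dimension identity dim(A ∪ B) = dim(A) + dim(B) − dim(A ∩ B) for closed sets. The Python sketch below (ours, not from the paper) enumerates all subspaces of F_2^3 and checks this identity for every pair.

```python
# Modularity check (ours) for the pregeometry of F_2-linear span on F_2^3:
# dim(span(A ∪ B)) = dim(A) + dim(B) − dim(A ∩ B) for all subspaces A, B.
from itertools import combinations, product

def xor(u, v):
    # addition in F_2^3
    return tuple(a ^ b for a, b in zip(u, v))

def span(vecs):
    # smallest subspace containing vecs (closure under addition)
    S = {(0, 0, 0)}
    changed = True
    while changed:
        changed = False
        for u in list(S):
            for v in vecs:
                w = xor(u, v)
                if w not in S:
                    S.add(w)
                    changed = True
    return frozenset(S)

def dim(S):
    # |S| = 2^dim for a subspace S of F_2^3
    return len(S).bit_length() - 1

all_vecs = list(product([0, 1], repeat=3))
subspaces = {span(sub) for r in range(4) for sub in combinations(all_vecs, r)}

def check_modular():
    for A, B in product(subspaces, repeat=2):
        if dim(span(A | B)) != dim(A) + dim(B) - dim(A & B):
            return False
    return True
```

F_2^3 has 16 subspaces (1 + 7 + 7 + 1 by dimension), and the identity holds for all 256 pairs.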
A pregeometry is modular if its associated geometry is modular. Equivalently ([Mar02, Lemma 8.1.13]), for any closed sets A, B, dim(A ∪ B) = dim(A) + dim(B) − dim(A ∩ B). Points a_1, a_2 of a geometry (P, cl) are non-orthogonal if there exists B ⊂ P such that a_2 ∈ cl({a_1} ∪ B) \ cl(B). A subgeometry of a geometry (P, cl) is the restriction (Y, cl_Y) to a subset Y ⊂ P, where cl_Y(A) := cl(A) ∩ Y. The sum of geometries (P_i, cl_i) is the non-interacting geometry on the disjoint union, namely the geometry (⊔_i P_i, cl) where cl(⊔_i A_i) := ⊔_i cl_i(A_i). The proofs of all claims made in the above definitions are straightforward and classical. We refer the reader to [TZ12, Appendix C.1] for them and for further details. Example 5.1 (Projective spaces over division rings). If V is a vector space over a division ring F, then V equipped with F-linear span forms a pregeometry (V, ⟨·⟩_F) of dimension dim(V), and the associated geometry P(V) := P(V, ⟨·⟩_F) is the projective space of V; it also has dimension dim(V), and it is a modular geometry. Example 5.2 (Algebraic closure). An algebraically closed field K equipped with field-theoretic algebraic closure over the prime field forms a pregeometry (K, acl). The dimension is the transcendence degree over the prime field. If dim(K) ≥ 3 then the associated geometry is not modular, as can be seen by considering a generic solution to b = c_1 a + c_2; see [TZ12, Appendix C.1]. Example 5.3 (Algebraic closure on tuples). If C_0 ≤ K are algebraically closed fields, the set of all tuples K^{<∞} equipped with the algebraic closure acl_0 over C_0 forms a closure structure, where the closure of a subset A ⊂ K^{<∞} is acl_0(A)^{<∞} as defined in (6). But it is in general not a pregeometry. In the sequel we will only consider closure operators of the types described in the above examples. Let (P, cl) be a modular geometry. Then a, b ∈ P are non-orthogonal if and only if there exists c ∈ P \ {a} such that a ∈ cl({b, c}).
In other words, a = b or | cl({a, b})| > 2. It is easy to see from modularity that this is an equivalence relation. Example 5.4 (Abstract projective space). An abstract projective space is a pair (P, L) of sets, where P is the set of points and L the set of lines, a unique line passes through every two distinct points, every line has at least three points and the Veblen axiom holds: given four distinct points a, b, c, d, if the lines ab and dc intersect, then so do ad and bc. Any such abstract projective space gives rise to a modular geometry on P in which the closure of a subset is the union of all lines passing through two points in the subset. Conversely any modular geometry in which every pair of points is non-orthogonal gives rise to an abstract projective space with the same set of points and with the set of lines being the set of closures of pairs of distinct points. We now recall the classical Veblen-Young co-ordinatisation theorem of projective geometry, which characterises modular geometries. Fact 5.5. If (P, cl) is a modular geometry, and every two points a, b ∈ P are nonorthogonal, and dim(P ) ≥ 4, then P is isomorphic to a projective space P(V ), where V is a vector space over a division ring. More generally, if (P, cl) is a modular geometry, then non-orthogonality is an equivalence relation, and (P, cl) is the sum of the subgeometries on its non-orthogonality classes, each of which either has dimension ≤ 2, or is a projective space over a division ring, or is a non-Desarguesian projective plane. Proof. This is a consequence of the classical Veblen-Young co-ordinatisation theorem for projective geometries. Veblen's axiom is a direct consequence of modularity. We refer to [Cam92, Theorem 3.6] for a statement which directly implies the stated result and for an overview of the proof, and to [Art57, Chapter II] for a detailed proof of the co-ordinatisation theorem for Desarguesian projective planes. 
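For a concrete instance of Example 5.4, the Fano plane (the projective plane over F_2, with 7 points and 7 lines) satisfies all three axioms, including Veblen's. The short Python check below (ours, not from the paper) verifies them by brute force.

```python
# Brute-force check (ours) of the abstract projective space axioms
# on the Fano plane: unique line through two points, lines of size >= 3,
# and the Veblen axiom.
from itertools import combinations, permutations

LINES = [frozenset(l) for l in
         [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
          {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]]
POINTS = set(range(1, 8))

def line_through(p, q):
    matches = [l for l in LINES if p in l and q in l]
    assert len(matches) == 1  # unique line through two distinct points
    return matches[0]

def check_axioms():
    for p, q in combinations(POINTS, 2):
        line_through(p, q)
    assert all(len(l) >= 3 for l in LINES)  # every line has >= 3 points
    # Veblen: if lines ab and dc meet, then lines ad and bc meet
    for a, b, c, d in permutations(POINTS, 4):
        if line_through(a, b) & line_through(d, c):
            assert line_through(a, d) & line_through(b, c)
    return True
```

In a projective plane any two lines meet, so the Veblen condition holds here for trivial reasons; the check still exercises the axiom exactly as stated.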
In our applications the geometries will be modular and infinite dimensional. So by the above they will be sums of projective geometries over division rings. 5.2. Coarse general position, coherence and modularity. We recall the notion of coarse general position for tuples introduced in §3.0.3. We keep the notation and setup of Section 2. Definition 5.6. A tuple c ∈ K^{<∞} is said to be in coarse general position (or is cgp) if δ(c/B) = 0 for any B ⊂ K^{<∞} from which c is not acl_0-independent. Recall that K^{<∞} is the set of all tuples of elements from K, and that c fails to be acl_0-independent from B exactly when d_0(c/B) < d_0(c), where d_0(c/B) denotes as earlier the transcendence degree of the tuple c over the field C_0(B) generated by all co-ordinates of elements from B, and C_0 ≤ K is the base field defined in 2.1.7. The coarse dimension δ was defined in (3). Definition 5.7. A subset P ⊂ K^{<∞} is said to be cgp-coherent if every a ∈ P is cgp and δ(a_1, . . . , a_n) = d_0(a_1, . . . , a_n) for all choices of a_1, . . . , a_n ∈ P. In this paper, we abbreviate 'cgp-coherent' to just 'coherent'. We will also say that a tuple of tuples from K^{<∞} is coherent when the set of its elements is coherent. Remark 5.8. The term "coherent" is borrowed from [Hru13, Section 5], where it is used in a parallel context to refer to the same idea, that a pseudo-finite dimension notion is in accord with transcendence degree. We are now ready to state the main result of this section. Theorem 5.9. Suppose P ⊂ K^{<∞} is coherent. Then (P; acl_0↾P) is a pregeometry. Moreover P extends to a coherent P' ⊂ K^{<∞} such that the geometry G_{P'} := P(P'; acl_0↾P') is a sum of 1-dimensional geometries and infinite dimensional projective geometries over division rings.
Here the closure operator is simply the restriction to P of the algebraic closure acl_0 as in Example 5.3: namely, if A ⊂ P, then acl_0↾P(A) is the set of tuples in P whose co-ordinates are algebraic over the subfield of K generated by C_0 and the set of all co-ordinates of all tuples from A. The proof of Theorem 5.9 will proceed in two steps. First we will show that if P ⊂ K^{<∞} is coherent, then its coherent algebraic closure ccl(P) := {x ∈ acl_0(P)^{<∞} : x is cgp and δ(x) = d_0(x)} is also coherent. And second we will prove that if P = ccl(P) is coherent, then (P; acl_0↾P) is a modular pregeometry. The latter step will use the incidence bounds à la Szemerédi-Trotter recalled in Section 2.2. Theorem 5.9 will then follow by applying the Veblen-Young theorem recalled in Fact 5.5 above to the projectivisation of (P; acl_0↾P). The rest of this section is devoted to the proof of Theorem 5.9. 5.2.1. Properties of coherent sets. The next lemma will be used to form coherent sets. Lemma 5.12. Let a_1, . . . , a_n ∈ K^{<∞}. Assume that each a_i is cgp and δ(a_i) ≤ d_0(a_i). Then for every C ⊂ K^{<∞} we have: δ(a_1, . . . , a_n/C) ≤ d_0(a_1, . . . , a_n/C). (11) Moreover δ(a_1, . . . , a_n) = d_0(a_1, . . . , a_n) if and only if {a_1, . . . , a_n} is coherent. Proof. The proof is by induction on n. Suppose first n = 1. We have a cgp a_1 ∈ K^{<∞} such that δ(a_1) ≤ d_0(a_1) and we need to show that δ(a_1/C) ≤ d_0(a_1/C). If a_1 is acl_0-independent from C, then d_0(a_1/C) = d_0(a_1), so the desired inequality follows immediately. On the other hand, if a_1 is not acl_0-independent from C, then by cgp δ(a_1/C) = 0, so the desired inequality is then obvious. Suppose now that (11) holds for n − 1 tuples and any C, and let x = a_1 . . . a_{n−1}. Then δ(xa_n/C) = δ(x/C ∪ {a_n}) + δ(a_n/C) ≤ d_0(x/C ∪ {a_n}) + d_0(a_n/C) = d_0(xa_n/C), where we applied the induction hypothesis and the case n = 1. Finally we turn to the last claim of the lemma. Suppose δ(a_1 . . . a_n) = d_0(a_1 . . . a_n).
We need to show that δ(x) = d_0(x) for every concatenated tuple x made of subtuples from {a_1, . . . , a_n}. Note that for every tuple of a_i's the quantities δ and d_0 depend only on the subset of a_i's appearing in the tuple (see Fact 2.5), so up to relabelling co-ordinates we may assume that x = a_1 . . . a_i for i ∈ [1, n]. Let y := a_{i+1} . . . a_n. Then by assumption δ(xy) = d_0(xy). By (11) we have δ(y/x) ≤ d_0(y/x) and δ(x) ≤ d_0(x). Hence by additivity we conclude that the last two inequalities are equalities. This ends the proof. Finally we record one last observation, which will be useful in the next paragraph. Lemma 5.13. If P ⊂ K^{<∞} is coherent and x ∈ acl_0(P)^{<∞}, then δ(x) ≥ d_0(x). Proof. Pick a_1, . . . , a_n ∈ P such that x ∈ acl_0({a_1, . . . , a_n})^{<∞} and concatenate the a_i's in a := a_1 . . . a_n. Then d_0(a) = d_0(ax). By additivity d_0(ax) = d_0(x) + d_0(a/x) and δ(ax) = δ(x) + δ(a/x). By coherence of P we have δ(a) = d_0(a). But δ(a/x) ≤ d_0(a/x) by Lemma 5.12. So δ(x) ≥ δ(ax) − d_0(a/x) ≥ δ(a) − d_0(a/x) = d_0(a) − d_0(a/x) = d_0(x). 5.2.2. The Veblen axiom and incidence bounds. In this paragraph we exploit the Szemerédi-Trotter-type bounds described in Subsection 2.2 in order to show that the pregeometry (P; acl_0↾P) satisfies the Veblen axiom of projective geometry. Proposition 5.14. Assume P ⊂ K^{<∞} is coherent, let a_1, a_2 ∈ P and B ⊂ P. Assume that a_1, a_2 ∉ acl_0(B)^{<∞}, but a_2 ∈ acl_0({a_1} ∪ B)^{<∞} \ acl_0(a_1)^{<∞}. Then there is d ∈ acl_0(B)^{<∞} such that d ∈ acl_0(a_1, a_2)^{<∞} and δ(d) = d_0(d) = d_0(a_1) = d_0(a_2). Proof. For brevity set a := a_1a_2. Without loss of generality we may assume B is finite and B = {a_3, . . . , a_n}. We set b = (a_3, . . . , a_n). First we check that a_1 ∈ acl_0(a_2, b)^{<∞}, that each a_i is acl_0-independent from b, that a_1 is acl_0-independent from a_2, and that if k := d_0(a_1), then d_0(a_2) = k and d_0(a) = 2k.
The first property follows from the exchange property of pregeometries and from Proposition 5.10. Lemma 5.11 tells us that a_i is acl_0-independent from b, since a_i ∉ acl_0(b)^{<∞}. For the same reason a_1 is acl_0-independent from a_2. Then d_0(ab) = d_0(a_1b) = d_0(a_2b) is equal to both d_0(a_1) + d_0(b) and d_0(a_2) + d_0(b). Hence d_0(a_2) = k and d_0(a) = 2k. This also shows that d_0(a/b) = k. Let d ∈ K^{<∞} be a canonical base for a over b (see §2.1.12). By definition d ∈ acl_0(b)^{<∞}. We will show that this d satisfies the desired conclusion. By definition we have d_0(a/d) = d_0(a/b) = k. Hence d_0(d) = d_0(ad) − d_0(a/d) ≥ d_0(a) − k = k. By Lemma 5.13, δ(d) ≥ d_0(d). Hence we are left to show the upper bound δ(d) ≤ k. To this end let V be the locus of the tuple ad, i.e. V = loc_0(a, d), let X_1 ⊂ K^{<∞} be the type of a, i.e. X_1 = tp(a)(K), let X_2 = tp(d)(K) and finally let X = (X_1 × X_2) ∩ V. We wish to apply the Szemerédi-Trotter bound of Lemma 2.15 to this data. For this we first show that δ(X(d_1) ∩ X(d_2)) = 0 for all d_1, d_2 ∈ X_2 with d_1 ≠ d_2, so that β = 0 in this lemma. Recall that X(d_1) := {a' ∈ X_1 : a'd_1 ∈ V}. By Fact 2.4 we may find a' = a'_1a'_2 ∈ X(d_1) ∩ X(d_2) such that δ(a'/d_1d_2) = δ(X(d_1) ∩ X(d_2)). Then a'd_i ∈ V for i = 1, 2. We are thus in the setting of Lemma 2.12. Since d_1 ≠ d_2 we conclude that d_0(a'/d_1d_2) < k. Hence d_0(a'_i/d_1d_2) < k, and since both a'_i are cgp (because cgp is type invariant, see Remark 3.7), we conclude that δ(a'_i/d_1d_2) = 0 for each i. Thus by additivity δ(X(d_1) ∩ X(d_2)) = δ(a'/d_1d_2) = 0 as claimed. Next note that δ(X_2) = δ(d), δ(X_1) = δ(a) = d_0(a) = 2k by coherence, and δ(X) ≥ δ(ad) = δ(a/d) + δ(d). But δ(a/d) ≥ δ(a/acl_0(b)^{<∞}) = δ(a/b) by (7). By coherence δ(a/b) = d_0(a/b) = k. We conclude δ(X) ≥ k + δ(d) = (1/2)δ(X_1) + δ(X_2). Now comparing this to the Szemerédi-Trotter bound of Lemma 2.15 we obtain δ(X_2) ≤ (1/2)δ(X_1). In other words δ(d) ≤ k. This ends the proof.
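To give a feel for the Szemerédi-Trotter input used above, the toy Python computation below (ours; the constant 3 is a loose illustrative choice, not the sharp constant of the theorem) counts point-line incidences for the classical grid construction and checks them against a bound of the shape I = O(|P|^{2/3}|L|^{2/3} + |P| + |L|).

```python
# Toy incidence count (ours): k x 2k^2 grid of points against lines
# y = m x + c with small coefficients, compared with a
# Szemerédi-Trotter-shaped bound (loose constant, for illustration only).

def incidences(k):
    pts = {(x, y) for x in range(k) for y in range(2 * k * k)}
    lines = [(m, c) for m in range(k) for c in range(k * k)]
    # each line meets the grid in exactly k points here
    I = sum((x, m * x + c) in pts for (m, c) in lines for x in range(k))
    return len(pts), len(lines), I

def st_bound(P, L, C=3):
    # bound of the shape C * (P^{2/3} L^{2/3} + P + L)
    return C * ((P * L) ** (2 / 3) + P + L)
```

For k = 4 this gives 128 points, 64 lines and 256 incidences, comfortably within the stated shape; the grid construction is the standard near-extremal example for such bounds.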
5.2.3. Proof of Theorem 5.9. Here we show Theorem 5.9. Proposition 5.14 will help us find a modular geometry explaining algebraic dependence on a coherent set. This is the engine behind our main results. The idea comes from [Hru13, Subsection 5.17], and the context is essentially that of [Hru13, Remark 5.26]. Definition 5.15. For P ⊂ K^{<∞}, define the coherent algebraic closure by ccl(P) := {a ∈ acl_0(P)^{<∞} : a is cgp and δ(a) = d_0(a)}. Lemma 5.16. If P is coherent, then so is ccl(P). Proof. We need to show that δ(a_1 . . . a_n) = d_0(a_1 . . . a_n) for any a_i's from ccl(P). We proceed by induction on n. This holds when n = 1 by the definition of ccl(P). Set x := a_1 . . . a_{n−1}. By the induction hypothesis and Lemma 5.12, {a_1, . . . , a_{n−1}} is coherent and δ(x/a_n) ≤ d_0(x/a_n). So by additivity δ(xa_n) = δ(x/a_n) + δ(a_n) ≤ d_0(x/a_n) + d_0(a_n) = d_0(xa_n). But Lemma 5.13 implies that δ(xa_n) ≥ d_0(xa_n). This ends the proof. Clearly ccl(ccl(P)) = ccl(P). We say P ⊂ K^{<∞} is coherently algebraically closed if P is coherent and P = ccl(P). Proposition 5.17. Suppose P ⊂ K^{<∞} is coherently algebraically closed. Then the pregeometry (P; acl_0↾P) is modular. Proof. We must show that if B ⊂ P and a_1, a_2 ∈ P \ acl_0(B)^{<∞} are such that a_1 ∈ acl_0(B ∪ {a_2})^{<∞}, then a_1 ∈ acl_0({d, a_2})^{<∞} for some d ∈ P ∩ acl_0(B)^{<∞}. We may assume without loss of generality that B is finite, say B = {a_3, . . . , a_n}. This is the situation of Proposition 5.14, from which we conclude that there is an integer k such that k = d_0(a_1a_2/d) = d_0(a_2) = d_0(a_1) = d_0(d) for some d ∈ acl_0({a_1, a_2})^{<∞}. We are left to show that d ∈ P, and since we already know that δ(d) = d_0(d) and d ∈ acl_0(P), we are only left to check that d is cgp. To this end assume that d is not acl_0-independent from E, for some E ⊂ K^{<∞}. We need to show that δ(d/E) = 0.
By Fact 2.4 we may pick a'_1, a'_2 ∈ K^{<∞} such that tp(a'_1, a'_2/d) = tp(a_1, a_2/d) and δ(a'_1, a'_2/Ed) = δ(a'_1, a'_2/d). For brevity write a := a_1a_2 and a' := a'_1a'_2. By additivity of δ we may write: δ(d/E) = δ(d/a'E) + δ(a'/E) − δ(a'/dE). Let us examine the three terms on the right hand side. The first term δ(d/a'E) is zero since d ∈ acl_0(a')^{<∞}, because d ∈ acl_0(a)^{<∞}. The second term is equal to δ(a'_1/a'_2E) + δ(a'_2/E). We claim that this is at most k. Note that a'_1, a'_2 are cgp because a_1, a_2 are cgp (see Remark 3.7). By (11) it is enough to show that one of these terms is zero. Hence by cgp we only need to check that either a'_1 is not acl_0-independent from a'_2E or a'_2 is not acl_0-independent from E; in other words, that a'_1a'_2 is not acl_0-independent from E. This is indeed the case, for otherwise d would be independent from E, because d ∈ acl_0(a')^{<∞}. Finally let us turn to the third term. Since d ∈ acl_0(B)^{<∞}, using (7) we have δ(a'/d) = δ(a/d) ≥ δ(a/acl_0(B)^{<∞}) = δ(a/B). On the other hand, since a_1, a_2, B lie in P and P is coherent, we have δ(a/B) = d_0(a/B). By Proposition 5.14 this is k. Hence δ(a'/dE) = δ(a'/d) ≥ k. This concludes the proof. Proof of Theorem 5.9. By Proposition 5.10, (P; acl_0↾P) is a pregeometry. By Lemma 5.16, enlarging P to ccl(P) if necessary, we may assume that P is coherently algebraically closed. By Proposition 5.17 the associated geometry G_P is modular. Hence the non-orthogonality relation is an equivalence relation. Up to enlarging P further if necessary, we can assume that each non-orthogonality class in G_P of dimension > 1 has infinite dimension. This follows from Lemma 5.18 below, applied iteratively countably many times: in each finite dimensional non-orthogonality class we pick a point a and increase its dimension without altering the other classes, until all classes are infinite dimensional. Now we conclude by the Veblen-Young Theorem as recalled in Fact 5.5. Lemma 5.18 (dimension increase).
If P is coherently algebraically closed and a, b, c ∈ P are distinct in G_P and collinear in the sense that c ∈ acl_0(a, b), then there is a' ∈ K^{<∞} non-orthogonal to a such that P' := ccl(P ∪ {a'}) is coherent, a' is acl_0-independent from P, and every x ∈ P' is either in P or non-orthogonal to a. Proof. Note first that if a, b ∈ P are non-orthogonal, then d_0(a) = d_0(b). This is part of the conclusion of Proposition 5.14, and also follows easily from Lemma 5.11. Now by Fact 2.4 we can pick a', c' ∈ K^{<∞} with tp(a'c'/b) = tp(ac/b) and δ(a'c'/P) = δ(ac/b). Since P is coherent, δ(a/b) = d_0(a/b) = d_0(a) = δ(a), while since a, a' have the same type, δ(a) = δ(a') coincides with d_0(a') = d_0(a). Also a' and c' are cgp (see Remark 3.7). Since tp(a'c'/b) = tp(ac/b) we have c' ∈ acl_0(P ∪ {a'}) and hence δ(c'/Pa') = 0. Similarly, δ(c/ab) = 0. By additivity we conclude δ(a'/P) = δ(a'c'/P) = δ(ac/b) = δ(a/b) = d_0(a). Hence also a' is acl_0-independent from P, since a' is cgp. We conclude d_0(a'/P) = δ(a'/P). It follows from Lemma 5.12 and additivity that P ∪ {a'} is also coherent. By Lemma 5.16, P' = ccl(P ∪ {a'}) is also coherent. Moreover b, a', c' are collinear, so a' is non-orthogonal to a, b and c. Finally, if x ∈ P', then x ∈ acl_0(P ∪ {a'}), so if x ∉ P ∪ acl_0(a'), by modularity there is y ∈ P such that x, a', y are collinear, hence a' and x are non-orthogonal. Remark 5.19. The results of this section go through with the same proofs when ACF_0 is replaced by an arbitrary finite U-rank theory in which Lemma 2.15 holds. Varieties with coherent generics. We now show in Proposition 6.1 below that the locus of a coherent tuple is a special variety. This will follow from Theorem 5.9 and a characterisation of the projective geometries which can arise from acl_0. We shall give such a characterisation in Appendix A, generalising a result of Evans-Hrushovski. Proposition 6.1. Suppose a_1, . . . , a_n ∈ K^{<∞} are such that a = (a_1, . . . , a_n) is coherent. Then loc_0(a) is a special subvariety of ∏_i loc_0(a_i).
Remark 6.2. Technically, we defined "special" only for complex varieties, but loc_0(a) is a variety over C_0 and C_0 need not come with an embedding into C. In our main applications in Section 7 below, C_0 will come with such an embedding; more generally, we may take an arbitrary such embedding, or just define "special" for varieties over an algebraically closed field C_0 by exact analogy to the definition for varieties over C in the introduction. Proof. In this proof we make use of some of the definitions from Appendix A, applied to the pair of algebraically closed fields C_0 ≤ K. In particular, given x ∈ K^{<∞} \ acl_0(∅) we set x̄ := acl_0(x), and we let G_K be the projectivisation of the closure structure (K^{<∞}, acl_0) defined in Example 5.3 above, namely G_K := P(K^{<∞}; acl_0) = {x̄ : x ∈ K^{<∞} \ acl_0(∅)}. We may assume no a_i lies in acl_0(∅). Indeed, if say a_1 ∈ acl_0(∅), then loc_0(a) = {a_1} × loc_0(a_2, . . . , a_n), and {a_1} is special (with the trivial group, which is a special subgroup of itself), and so it suffices to show that loc_0(a_2, . . . , a_n) is special. By Theorem 5.9, {a_1, . . . , a_n} extends to a coherent set P such that G_P = {p̄ : p ∈ P} ⊂ G_K splits as a sum of 1-dimensional and infinite dimensional projective geometries over division rings. This induces a corresponding splitting of a into subtuples of a_i's, and the locus of a is the product of their loci. So it suffices to show that each such locus is special, and we may assume a_1, . . . , a_n are all contained in a single summand. We conclude by showing that loc_0(a) is in co-ordinatewise correspondence with a special subgroup, by finding a commutative algebraic group G over C_0 and generics h_i ∈ G(K) with h̄_i = ā_i, such that loc_0(h) is a special subgroup of G^n. By Remark 2.11, this will suffice. If ā_i = ā_j for some i ≠ j, we can take h_j := h_i. So assume there are no such interalgebraicities. Let G_a := {ā_1, . . . , ā_n} ⊂ G_P ⊂ G_K.
If dim(G_a) = 1 (the "trivial" case), then a = a_1, and we may take G := 𝔾_a^{d_0(a_1)}, a power of the additive group, and a point h_1 ∈ G(K) = K^{d_0(a_1)} with h̄_1 = ā_1. Else, G_a embeds in a projective geometry over a division ring, where moreover by Lemma 5.11 the latter geometry is fully embedded in G_K in the sense of Definition A.2. So by Proposition A.4, there is an abelian algebraic group G over C_0 with dim(G) = d_0(a_i), a division subring F of End^0_{C_0}(G), and h = (h_1, . . . , h_n) ∈ G(K)^n with h̄_i = ā_i, such that (in particular) the F-linear span of the image of h in G(K)/G(C_0) has F-dimension dim(G_a). Hence A · (h/G(C_0)) = 0 for some A ∈ Mat_n(F) of rank n − dim(G_a). By clearing denominators, we may assume that A has entries from End_{C_0}(G) ∩ F. Let c := A · h ∈ G(C_0)^n. Since C_0 is algebraically closed, A · x = c has a solution h_0 ∈ G(C_0)^n. Replacing h by h − h_0, which does not affect h̄_i, we may assume h ∈ ker(A). Write ker(A)^o for the connected component of ker(A). By further replacing h by e · h, where e ∈ N is the exponent of the finite group ker(A)/ker(A)^o, we may assume h ∈ ker(A)^o. Now it is not hard to see, e.g. by considering Gaussian elimination, that dim(ker(A)) = dim(G)(n − rank(A)). So dim(ker(A)^o) = dim(G)(n − rank(A)) = d_0(a) = d_0(h). Therefore loc_0(h) = ker(A)^o is a special subgroup of G^n as required. Asymptotic consequences. In this section we first unpeel the ultraproduct construction to show how coherent tuples correspond to varieties without power-saving. Then, combining this with Proposition 6.1 above and some further arguments, we prove the combinatorial theorems stated in the introduction. Let W_i, i = 1, . . . , n, be irreducible complex algebraic varieties, each of dimension d, and let V ⊂ ∏_i W_i be an irreducible complex closed subvariety. We first recall the following simple observation, already mentioned in the introduction. Lemma 7.1 (the trivial bound). Let τ > d.
There is C ∈ N depending only on τ and V such that if X_i ⊂ W_i is in (C, τ)-coarse general position in W_i (see Def. 1.7) and if |X_i| ≤ N^d, then |V ∩ ∏_{i≤n} X_i| ≤ O_V(N^{dim(V)}). Furthermore, if V admits no power-saving then dim(V) is an integral multiple of d. Proof. We prove this by induction on n and dim(V), with C and the multiplicative constant in O_V depending only on the complexity of V and the W_i's. For n = 1 it is clear. For n > 1, consider the projection π : V → ∏_{i<n} W_i. Let Y be the Zariski closure of the image π(V). Then Y is irreducible since V is. By [BGT11, Lemma 3.7], there is a proper closed subvariety Z ⊊ Y such that for y ∈ π(V) \ Z the fibre π^{−1}(y) has dimension D := dim(V) − dim(Y), and both Y and Z as well as the fibres π^{−1}(y) have complexity bounded by a constant depending only on the complexity of V and the W_i's. We may assume that τ is larger than this constant. Now V' := π^{−1}(Z) is a proper closed subvariety of V, so by the inductive hypothesis applied to its irreducible components, |V' ∩ ∏_{i≤n} X_i| ≤ O_V(N^{dim(V)−1}). If D = 0, the fibres over π(V) \ Z have size uniformly bounded by some c ∈ N, and so |(V \ V') ∩ ∏_{i≤n} X_i| ≤ c|Y ∩ ∏_{i<n} X_i|, and we conclude by the inductive hypothesis and dim(V) = dim(Y). If D = d, we conclude by the trivial estimate |(V \ V') ∩ ∏_{i≤n} X_i| ≤ N^d |Y ∩ ∏_{i<n} X_i| and dim(V) = d + dim(Y). If 0 < D < d, by τ-coarse general position of X_n and the inductive hypothesis, |(V \ V') ∩ ∏_{i≤n} X_i| ≤ O(N^{d/τ} |Y ∩ ∏_{i<n} X_i|) ≤ O(N^{d/τ + dim(V) − D}); so we see that for τ > d the desired bound holds, and moreover that in this case V admits a power-saving. Finally, if the projection of V to W_i is not dominant for some i, then V admits a power-saving, by coarse general position of X_i. So if V admits no power-saving, then the case 0 < D < d cannot occur, Y admits no power-saving either, and V has dominant projections on all the W_i's; by induction dim(Y) is an integral multiple of d, and hence so is dim(V) = dim(Y) + D with D ∈ {0, d}. Let C_0 ⊂ C be a countable algebraically closed field over which V and the W_i's are defined.
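As an aside on the linear-algebra step used at the end of the proof of Proposition 6.1 above: the identity dim(ker(A)) = dim(G)(n − rank(A)) is an instance of rank-nullity, applied diagonally. The Python sketch below (ours; it works over Q with exact fractions rather than over End_{C_0}(G) ∩ F, an assumption made purely for illustration) computes the rank by Gaussian elimination and the resulting kernel dimension.

```python
# Rank-nullity sketch (ours): Gaussian elimination over Q with exact
# fractions; a matrix A acting diagonally on G^n with dim G = g has
# kernel of dimension g * (n - rank(A)).
from fractions import Fraction

def rank(mat):
    M = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(M[0])):
        # find a pivot at or below row r in this column
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def kernel_dim(A, g):
    # dim ker(A acting on G^n) = g * (n - rank(A)), n = number of columns
    return g * (len(A[0]) - rank(A))
```

For example, a 3 x 3 matrix with one dependent row has rank 2, so acting on G^3 with dim G = 5 its kernel has dimension 5.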
Consider as in §2.1 a sequence K_s of L-structures on the complex field C and a scaling constant ξ ∈ *R, so as to form the coarse pseudo-finite dimension δ defined on subsets of tuples with co-ordinates in K := ∏_{s→U} K_s for some nonprincipal ultrafilter U. Here as before, L is a countable language expanding the language of rings on (K, +, ·), containing a constant symbol for each c ∈ C_0, and closed under cardinality quantifiers so as to make δ invariant and continuous (cf. §2.1.6). For an irreducible algebraic variety W defined over C_0, we will say that an internal set X ⊂ W(K) is cgp in W if 0 < δ(X) < ∞ and for any proper closed subvariety W' ⊊ W over K, we have δ(X ∩ W'(K)) = 0. Lemma 7.2. Suppose X ⊂ W(K) is an internal set which is cgp in W. Then any a ∈ X is cgp. Proof. Suppose B ⊂ K^{<∞} and a is not acl_0-independent from B. Then W' := locus_W(a/B) is a proper subvariety of W, and so δ(a/B) ≤ δ(X ∩ W'(K)) = 0. We also introduce one last piece of terminology: Definition 7.3. We will say that an element a ∈ W(K) is dcgp in W (for definably in coarse general position) if it is contained in a subset X ⊂ W(K) which is definable without parameters in L and is cgp in W. It is immediate from Lemma 7.2 that every dcgp tuple a is cgp. Recall that W_1, . . . , W_n are irreducible complex algebraic varieties of dimension d. Lemma 7.4. Let V ⊂ ∏_i W_i be an irreducible complex closed subvariety. The following are equivalent: (1) The subvariety V admits no power-saving. (2) (existence of a coherent generic) For some language L as above there are a_i ∈ W_i(K) such that a = (a_1, . . . , a_n) ∈ V(K) is coherent and generic in V, with a_i dcgp in W_i for each i. Proof. If V admits no power-saving, then for any s ∈ N, setting τ := 1 + s and ε := 1/τ, there exist N_s ≥ s and sets X_{i,s} ⊂ W_i with |X_{i,s}| ≤ N_s^d such that |V ∩ ∏_i X_{i,s}| > N_s^{dim V − ε} and |X_{i,s} ∩ W'_i| ≤ |X_{i,s}|^{1/τ} for any proper closed subvariety W'_i ⊊ W_i of complexity ≤ τ.
After enlarging L if necessary, we may assume that ∏_{s→U} X_{i,s} =: X_i are definable without parameters in L. Set the scaling constant ξ := lim_{s→U} N_s. Then by the above estimates and Lemma 7.1, δ(V ∩ ∏_i X_i) = dim V. So by Fact 2.4, say a = (a_1, . . . , a_n) ∈ V ∩ ∏_{i=1}^n X_i with δ(a) = dim V. By construction each X_i is cgp in W_i, so a_i is dcgp and hence cgp (Lemma 7.2). In particular δ(a_i) ≤ d_0(a_i): indeed δ(a_i) ≤ d since a_i ∈ X_i, and either d_0(a_i) = d or a_i is contained in a proper subvariety of W_i defined over C_0, which forces δ(a_i) = 0, since X_i is cgp in W_i. Also a is generic in V, i.e. d_0(a) = dim V, for otherwise a ∈ V' for some proper irreducible subvariety V' of V defined over C_0, and hence by the trivial bound of Lemma 7.1 we would have δ(a) ≤ δ(V' ∩ ∏_{i=1}^n X_i) ≤ dim V − 1. It then follows from Lemma 5.12 that a is coherent. Suppose conversely that, for some K_s and ξ ∈ *R, we have a tuple a ∈ V(K) which is coherent and generic, and that for each i we have a_i ∈ X_i, an L-definable without parameters and cgp subset of W_i(K). To say that a is coherent means that {a_1, . . . , a_n} is a coherent set. In particular a_i is cgp and δ(a_i) = d_0(a_i). Since a is generic in V, its projection a_i is generic in the co-ordinate projection π_i(V) ⊂ W_i. We may assume that this projection is dominant, for otherwise by cgp we would have δ(a_i) = 0 and hence d_0(a_i) = 0, which would mean that the projection π_i(V) is a point, and we may replace V with the fibre of this projection and omit W_i. So we have δ(a_i) = d_0(a_i) = d. Now let ε > 0 and τ ∈ N. Pick Y_i ⊂ W_i(K) definable over ∅ with a_i ∈ Y_i and d ≤ δ(Y_i) < d + ε. Replacing Y_i by Y_i ∩ X_i we may assume Y_i is cgp in W_i. Then δ(V ∩ ∏_i Y_i) ≥ δ(a) = dim V. Let Y_{i,s} := Y_i^{K_s} be the interpretation in K_s. Then for U-many s, for all i ∈ {1, . . . , n}, Y_{i,s} is τ-cgp in W_i, 1/ε < |Y_{i,s}| < ∞, and |V ∩ ∏_i Y_{i,s}| ≥ |Y_{i,s}|^{dim V/(d+2ε)}.
Hence V admits no power-saving.

7.1. Sharpness. In this subsection, we show the converse to Proposition 6.1 and prove that every special subvariety has no power-saving. For this we will need to construct certain well-chosen cartesian products of finite sets, adapted to the special subvariety. The construction we are about to describe consists in building certain long multidimensional arithmetic progressions on few algebraically independent elements. The difficulty is that, in order to belong to a given special subgroup, these progressions need to satisfy some almost-invariance under the division subring F of End^0(G) used to define the special subgroup. For this purpose it will be convenient to introduce the notion of a constrainedly filtered ring, as follows.

Definition 7.5. A constrained filtration of a ring O is a chain O_n ⊂ O of finite subsets, n ∈ N, such that (CF0) ⋃_{n∈N} O_n = O, and O_n ⊂ O_{n+1} for all n ∈ N; (CF1) ∃k ∈ N. ∀n ∈ N. O_n + O_n ⊂ O_{n+k}; (CF2) ∀a ∈ O. ∃k ∈ N. ∀n ∈ N. aO_n ⊂ O_{n+k}; (CF3) ∀ε > 0. |O_{n+1}|/|O_n| ≤ O_ε(|O_n|^ε). If a constrained filtration exists, we say O is constrainedly filtered.

Example 7.6. Z is constrainedly filtered, since ([−2^n, 2^n])_n is a constrained filtration. Constrained filtrations are somewhat similar to Bourgain systems.

Lemma 7.7. Suppose O is a constrainedly filtered ring. (i) The polynomial ring O[X] is constrainedly filtered. (ii) If O is an integral domain and a ∈ O, then the subring O[a^{-1}] of the fraction field of O is constrainedly filtered. (iii) If O′ ⊃ O is an extension ring in which O is central and which is free and finitely generated as an O-module, then O′ is constrainedly filtered.

Proof. Say (O_n)_n is a constrained filtration of O.
(i) Let O′_n := Σ_{i<n} O_n X^i. Then (CF0)–(CF2) are easily verified. For (CF3), note that |O′_n| = |O_n|^n, and so for ε > 0, for n >> 0, |O′_{n+1}|/|O′_n| = |O_n| (|O_{n+1}|/|O_n|)^{n+1} ≤ O_ε(|O_n|^{1+(1+n)ε}) = O_ε(|O′_n|^{1/n + (1+n)ε/n}) ≤ O_ε(|O′_n|^{2ε}).
(ii) Say k ∈ N is such that for all n ∈ N we have aO_n ⊂ O_{n+k} and O_n + O_n ⊂ O_{n+k}. Let O′_n := a^{-n} O_{2kn}. Then O′_n + O′_n = a^{-n}(O_{2kn} + O_{2kn}) ⊂ a^{-n} O_{2kn+k} = a^{-(n+1)}(a O_{2kn+k}) ⊂ a^{-(n+1)} O_{2kn+2k} = O′_{n+1}, so (CF1) holds. (CF0) and (CF2) are immediate, and (CF3) holds since |O′_n| = |O_{2kn}|.
(iii) Say O′ = Σ_{i=1}^d a_i O. Then let O′_n := Σ_{i=1}^d a_i O_n. Then (CF0), (CF1), and (CF3) are clear. For (CF2), let c^t_{ij} ∈ O be such that a_i a_j = Σ_t c^t_{ij} a_t; then given β = Σ_i a_i b_i ∈ O′ with b_i ∈ O, let k := max_{i,j,t}(k_{b_i} + k_{c^t_{ij}}), where k_α denotes a constant such that αO_n ⊂ O_{n+k_α} (∀n), and say O_n + O_n ⊂ O_{n+l} (∀n). Then βO′_n = Σ_j βa_j O_n = Σ_{i,j} a_i b_i a_j O_n = Σ_{i,j} b_i a_i a_j O_n = Σ_{i,j,t} b_i c^t_{ij} a_t O_n ⊂ Σ_{i,j,t} a_t O_{n+k} ⊂ O′_{n+k+d²l}.

Lemma 7.8. Suppose D is a finite-dimensional algebra over a characteristic 0 field L, and O ⊂ D is a finitely generated subring. Then there exists a constrainedly filtered subring O′ ⊂ D extending O. Proof. Let (e_k)_{1≤k≤d} be an L-basis of D and (f_j)_j generators of O.
Without loss of generality we may replace L by the subfield generated by the co-ordinates f^k_j of the f_j's and the co-ordinates c^k_{ij} of the products e_i e_j. So we may assume that L is finitely generated. Let z = (z_1, ..., z_n) be a transcendence basis for L over Q. Then [L : Q(z)] is finite, so D is again finite-dimensional over Q(z), and without loss of generality we may assume that L = Q(z). There is a polynomial g ∈ Z[z] such that all f^k_j and c^k_{ij} belong to Z[z, 1/g]. By Example 7.6, Z is constrainedly filtered. Then by Lemma 7.7(i), so is Z[z], and by item (ii) so is R := Z[z, 1/g], and by item (iii) so is O′ := Σ_{k=1}^d R e_k ⊃ O. Fact 7.9. A division subring of a matrix algebra over a division ring has finite dimension over its centre. Proof. This is a special case of the Kaplansky-Amitsur theorem [Jac75, p17], which shows that any primitive algebra satisfying a proper polynomial identity is finite-dimensional over its centre. Indeed, any division ring is a primitive algebra, and any matrix algebra M_n(∆) over a division ring ∆ satisfies a polynomial identity (e.g. by the Amitsur-Levitzki theorem [Jac75, p21]). In particular, combining this fact with the previous lemma, we see that if F is a division subring of a matrix algebra, then every finite subset of F is contained in a constrainedly filtered subring of F. We will use this observation in the next proposition. Although this is sufficient for our purposes, we do not know it to be the optimal result along these lines; in fact, for all we know, it could be that every finitely generated subring of M_n(C) is constrainedly filtered. Proposition 7.10. Suppose V ⊂ ∏_i W_i is special. Then, for appropriate choices of C_0 ≤ C and structures K_s with universe C and scaling constant ξ ∈ *R, the variety V has a coherent generic a ∈ V(K) such that each a_i is dcgp in W_i. Proof.
The conclusion is preserved by taking products and under correspondences, so we may assume W_i = G, where G is a d-dimensional commutative connected algebraic group defined over a countable algebraically closed subfield C_0 ⊂ C, and V = H ≤ G^n is a special subgroup. By Remark 1.9 and permuting co-ordinates, we may assume that the Lie subalgebra Lie(H) ≤ Lie(G)^n is the graph of an F-linear map θ = (θ_1, ..., θ_{n−m}) : Lie(G)^m → Lie(G)^{n−m} with θ_i(x) = Σ_{j=1}^m α_{ij} x_j, where α_{ij} ∈ F and F is a division subring of End^0(G) := End(G) ⊗_Z Q. We may assume that F is generated by the α_{ij}. Extending C_0 if necessary, we may assume F ⊂ End^0_{C_0}(G), i.e. that every element of F is a scalar multiple of an algebraic endomorphism of G which is defined over C_0. Now F acts faithfully by C-linear maps on Lie(G) ≅ C^d, so by Fact 7.9, F has finite dimension over its centre. So by Lemma 7.8, there is a constrainedly filtered subring O ⊂ F such that α_{ij} ∈ O (∀i, j). Say (O_n)_n is a constrained filtration for O. Let exp_G : Lie(G) → G(C) be the Lie exponential map, which is a surjective End(G)-homomorphism. For every positive integer s, let γ_s = (γ_{s,i})_{i=1,...,s} ∈ G(C)^s be algebraically generic over C_0, i.e. dim_0(γ_s) = sd, and let γ′_s ∈ Lie(G)^s be arbitrary such that exp_G(γ′_{s,i}) = γ_{s,i}. Note then that γ′_s is F-linearly independent modulo ker(exp_G). For k ∈ N_{≥0} we set X_{k,s} := Σ_{i=1}^s O_{s−k} γ′_{s,i} ⊂ Lie(G) if s ≥ k, and X_{k,s} := ∅ if s < k. Let X_k := ∏_{s→U} X_{k,s} ⊂ Lie(G)(K) := (Lie(G))^U. Set the scaling constant ξ := |X_0|^{1/d}, so that δ(X_0) = d. Claim 7.11. Let Z := ⋂_{k≥0} X_k ⊂ Lie(G)(K). Then Z is an O-submodule with δ(Z) = d. Proof. By (CF1), there is k_0 such that for all k and s we have X_{k,s} + X_{k,s} ⊂ X_{k−k_0,s}. It follows that Z + Z ⊂ Z. Similarly, (CF2) implies aZ ⊂ Z for all a ∈ O. Finally, by (CF3), for any k, s and ε ∈ R_{>0}, we have |X_{k,s}|/|X_{k+1,s}| = |O_{s−k}|^s / |O_{s−k−1}|^s ≤ O_ε(|O_{s−k−1}|^{sε}) = O_ε(|X_{k+1,s}|^ε).
Hence δ(X_k) ≤ (1 + ε)δ(X_{k+1}) for any ε > 0, so δ(X_k) ≤ δ(X_{k+1}). Clearly δ(X_{k+1}) ≤ δ(X_k), so δ(X_{k+1}) = δ(X_k). So by induction, δ(X_k) = d for all k, so δ(Z) = inf_k δ(X_k) = d. Now since Z is an O-submodule, the co-ordinate projection to Lie(G)^m induces a bijection from Lie(H) ∩ Z^n to Z^m, so δ(Lie(H) ∩ Z^n) = md. Moreover exp_G is injective on each X_{k,s}, and hence on each X_k and on Z. Claim 7.12. exp_G(X_0) is cgp in G. Proof. Suppose W ⊊ G is a proper closed subvariety over K. Say C_0(b) ⊂ K is a finitely generated extension of C_0 such that W is over C_0(b). Then W = J_b for some constructible family J_x, defined over C_0, of proper closed subvarieties of G. If b = lim_{s→U} b_s and W_s := J_{b_s}, then W(K) = ∏_{s→U} W_s(C). It holds for U-many s that dim_0(b_s) ≤ dim_0(b) =: k. We claim that for such s, we have rk_F(exp_G^{-1}(W_s(C)) ∩ X_{0,s}) ≤ k. Indeed, suppose g = (g_i)_{i=0}^k is F-linearly independent with g_i ∈ exp_G^{-1}(W_s(C)) ∩ X_{0,s}. By F-linear algebra, some (k+1)-subtuple γ′ of the generators γ′_s of X_{0,s} is in the F-linear span of g. Let h_i := exp_G(g_i) and h := (h_0, ..., h_k). Then dim_0(h) ≥ dim_0(exp_G(γ′)) = (k + 1)d, so dim_0(h) = (k + 1)d. Then dim_0(h/b_s) ≤ dim(W_s^{k+1}) ≤ (k + 1)(d − 1) = dim_0(h) − (k + 1) < dim_0(h) − dim_0(b_s), so dim_0(b_s) < dim_0(h) − dim_0(h/b_s) = dim_0(b_s) − dim_0(b_s/h) ≤ dim_0(b_s), which is a contradiction. So |W_s(C) ∩ exp_G(X_{0,s})| = |exp_G^{-1}(W_s(C)) ∩ X_{0,s}| ≤ |O_s|^k = (|O_s|^s)^{k/s} = |X_{0,s}|^{k/s}. So for any ε ∈ R_{>0}, considering sufficiently large s, we deduce δ(W(K) ∩ exp_G(X_0)) ≤ ε δ(X_0). Hence δ(W(K) ∩ exp_G(X_0)) = 0, as required. By Fact 2.4 we can pick a ∈ H ∩ exp_G(Z)^n with δ(a) = δ(H ∩ exp_G(Z)^n). By injectivity of exp_G on Z this is ≥ δ(Lie(H) ∩ Z^n) = md. Note that δ(a_i) ≤ δ(Z) = d and that, by the above Claim, a_i ∈ exp_G(X_0) is dcgp in G (see Def. 7.3) and hence cgp (see Lemma 7.2). So by Lemma 5.12, a is coherent. Remark 7.13.
The only essential role played by Lie theory in the above proof is to establish that F embeds in a matrix algebra; for the rest of the proof, exp_G : Lie(G) → G is used only to pick out choices of systems of division points of elements of G, and this can instead be done directly by replacing exp_G with ρ : G̃ → G, where G̃ is the "profinite cover" G̃ := lim← ([n] : G → G), consisting of "division systems" (x_n)_n satisfying [n] x_{nm} = x_m, and ρ is the first co-ordinate map of the inverse limit, ρ((x_n)_n) := x_1. Then End^0(G) acts on G̃ by (η/m)(x_n)_n := (η x_{nm})_n. Remark 7.14. In the case that G is a semiabelian variety, we can do slightly better and obtain approximate O-modules which are in general position in the sense of Elekes-Szabó, rather than merely in coarse general position. More precisely, say an internal subset X ⊂ G(K) is in general position if it has finite intersection with any proper closed subvariety W ⊊ G over K. Then proceed as in the proof of Proposition 7.10, but taking γ_s := γ ∈ G(C) to be a singleton which is in no proper algebraic subgroup of G. Let Γ := Oγ ≤ G(C) be the O-submodule generated by γ, which is a finitely generated subgroup of G(C). As shown in [Sca04, Theorem 4.7], as a consequence of the truth of the Mordell-Lang conjecture, if V_b is a constructible family of subvarieties of G then there is a uniform bound on the sizes of finite intersections V_b ∩ Γ. Hence exp_G(X_k) is in general position in G. However, this approach clearly fails for G = G_a^2, by considering intersections with linear subvarieties. Pach [Pac03, Theorem 2] gives an example of an internal subset X ⊂ K^2 with δ(X) = 1 = δ(X + X) and where the intersection with any linear subvariety has size at most 2; however, quadratic subvarieties witness that this X is not in general position. We do not know whether it is possible to find such an X which is in general position. This prompts the following question. Question 7.15.
Is there a sequence of finite sets A_n ⊂ C^2 such that |A_n| → +∞, |A_n + A_n| ≤ |A_n|^{1+1/n}, and with the property that, for each degree d, sup_{n, C, deg C ≤ d} |A_n ∩ C| is finite, where C runs through algebraic curves C ⊂ C^2?

7.2. Proofs of the main results. We first observe that Theorem 1.4 is a special case of Theorem 1.11, as follows immediately from the following lemma. Lemma 7.16. Let G be a 1-dimensional connected complex algebraic group, let n ≥ 1, and let H ≤ G^n be a connected algebraic subgroup. Then H is a special subgroup of G^n. Proof. First, suppose G is the additive group (C; +). Then F := End^0(G) = C is a division ring, and H is a vector subgroup, i.e. H = ker(A) for some A ∈ Mat_n(C), as required. Otherwise, G is the multiplicative group or an elliptic curve. In either case, F := End^0(G) is again a division ring, being either Q or a quadratic imaginary field extension of Q. We refer to [BHP, Lemma 4.1(i)] for the fact that H ≤ G^n is the connected component of ker(A) for some A ∈ Mat_n(End(G)). Proof of Theorem 1.11. Combine Lemma 7.4 with Proposition 6.1 and Proposition 7.10. We end this section with a proof of Corollary 1.15 from the Introduction. It is a special case of the following more precise result: Corollary 7.17. Suppose (G_1; ·_1), (G_2; ·_2) are non-isogenous connected complex algebraic groups of the same dimension, and Γ ⊂ G_1 × G_2 is a generically finite algebraic correspondence. Then there are τ, ε, c > 0 such that if A_i ⊂ G_i are finite sets such that Γ ∩ (A_1 × A_2) is the graph of a bijection between A_1 and A_2, and if A_i ⊂ G_i and A_i ·_i A_i ⊂ G_i are τ-cgp for i = 1, 2, then max(|A_1 ·_1 A_1|, |A_2 ·_2 A_2|) ≥ c |A_1|^{1+ε}. Remark 7.18. The cgp condition holds trivially for any A when dim(G_i) = 1. Proof. Let V = {(x_1, y_1, x_1 ·_1 y_1, x_2, y_2, x_2 ·_2 y_2) : (x_1, x_2), (y_1, y_2) ∈ Γ}. Suppose V is special. Then V is in co-ordinatewise correspondence with a special subgroup H ≤ G^6, say.
As in Remark 1.13, the projection of H to the first three co-ordinates is in co-ordinatewise correspondence with the graph {(x, y, x + y)} of the group operation of G. Hence the graph of the group operation of G_1 is in co-ordinatewise correspondence with that of G. By Fact 2.13, G_1 is commutative and isogenous to G. Similarly, considering the projection to the last three co-ordinates, G_2 is commutative and isogenous to G. Since isogeny is an equivalence relation, this contradicts the assumption that G_1 and G_2 are not isogenous. So by Theorem 1.11, V admits a power-saving, say by ε. So for sufficiently large τ, if A_i are as in the statement, then setting X := {(a_1, b_1, a_1 ·_1 b_1, a_2, b_2, a_2 ·_2 b_2) : a_i, b_i ∈ A_i; (a_1, a_2), (b_1, b_2) ∈ Γ} ⊂ V, we have |A_1|^2 = |X| ≤ O(max(|A_1 ·_1 A_1|, |A_2 ·_2 A_2|)^{2−ε}). So ε′ := ε/(2−ε) is as required.

8. Coherence in subgroups

In this section we observe a strengthening of our results in the special case of a ∧-definable pseudo-finite subgroup, and derive Theorem 1.17 from the introduction. We then briefly discuss connections to Diophantine problems and Manin-Mumford. Theorem 8.1. We keep the notational setup of Section 2. Let G be a commutative algebraic group over C_0 and Γ ≤ G(K) be a ∧-definable (over ∅) subgroup of G(K) contained in a cgp definable (over ∅) subset of G (see Definition 7.3). Assume δ(Γ) = dim(G). Then the locus locus_{G^n}(γ/C_0) of any coherent tuple γ ∈ Γ^n is a coset of an algebraic subgroup. Remark 8.2.
The commutative case is the only relevant case: by Corollary 3.15, if G is a connected algebraic group with such a subgroup Γ, then G is commutative. Proof. By Lemma 7.2, any α ∈ Γ is cgp. In particular if δ(α) > 0, then α is generic in G and δ(α) ≤ δ(Γ) = dim(G) = d_0(α). So for all α ∈ Γ we have δ(α) ≤ d_0(α). By Fact 2.4, we may find α ∈ Γ^n with δ(α/γ) = δ(Γ^n) = n dim(G). Then δ(γ, α, γ + α) = δ(γ, α) = δ(γ) + n dim(G) = d_0(γ) + n dim(G) ≥ d_0(γ, α) = d_0(γ, α, γ + α), and γ_i, α_i, γ_i + α_i ∈ Γ, so (γ, α, γ + α) is coherent by Lemma 5.12. Since the product of cosets is a coset, we may assume that locus_{G^n}(γ/C_0) is not a product of loci of subtuples. Then, as in Proposition 6.1, there is a commutative algebraic group G′ over C_0 and a tuple (γ′, α′, ψ′) which is generic in a connected subgroup H ≤ G′^{3n} and which is co-ordinatewise acl_0-interalgebraic with (γ, α, γ + α). Furthermore, for each i we have θ^1_i γ′_i + θ^2_i α′_i + θ^3_i ψ′_i ∈ G′(C_0) for some θ^j_i ∈ End_{C_0}(G′) invertible in End^0_{C_0}(G′), and so we may assume without loss that ψ′_i = γ′_i + α′_i. Then by Fact 2.13, there are m ∈ N and isogenies η_i : G′ → G and k_i ∈ G(C_0) such that η_i(γ′_i) = mγ_i + k_i for i = 1, ..., n. Hence locus_{G^n}(mγ/C_0) = (∏_i η_i)(locus_{G′^n}(γ′)) − k = (∏_i η_i)(π_1(H)) − k, where k = (k_1, ..., k_n) and π_1 : (x, y, z) ↦ x. So locus_{G^n}(mγ/C_0) is a coset of an algebraic subgroup of G^n, and hence so is locus_{G^n}(γ/C_0), as required. Proof of Theorem 1.17. First, note that there exists a function f : N → N such that any translate of a subvariety of G^n of complexity ≤ τ has complexity ≤ f(τ). This follows from [BGT11, Lemma 3.4], and can also be seen as a consequence of the fact that the family of translates of subvarieties in a constructible family is constructible. Increasing f if necessary, we may assume also that f is strictly increasing and 2^{−τ} + 1/f(τ) ≤ 1/τ for any τ ∈ N.
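As a quick sanity check that the last condition can indeed be arranged, one can verify numerically that a concrete strictly increasing choice of the complexity bound satisfies it; the particular choice f(τ) = 2τ² below is our own illustration, not a choice made in the paper, and we check only the arithmetic inequality, not the translate-complexity property:

```python
# Numeric check (illustration only): the hypothetical choice f(t) = 2*t**2
# is strictly increasing and satisfies 2**(-t) + 1/f(t) <= 1/t for t >= 1.
from fractions import Fraction

def f(t: int) -> int:
    # Hypothetical complexity bound; any larger increasing function also works.
    return 2 * t * t

for t in range(1, 60):
    lhs = Fraction(1, 2 ** t) + Fraction(1, f(t))
    assert lhs <= Fraction(1, t), t       # the inequality 2^-t + 1/f(t) <= 1/t
    assert f(t + 1) > f(t)                # strict monotonicity
print("2^-t + 1/f(t) <= 1/t holds for t = 1..59 with f(t) = 2*t**2")
```

For t = 1 the inequality holds with equality (1/2 + 1/2 = 1), and for larger t the left side decays much faster than 1/t.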
By [TV06, Proposition 2.26], there are C_1, C_2 > 0 such that if |A + A| ≤ |A|^{1+ε} then H := A − A is an |A|^{C_1 ε}-approximate subgroup and |H| ≤ |A|^{1+C_2 ε}. It therefore suffices to prove the following revised statement: there are N, τ, ε_0, η_0 > 0, depending only on G and the complexity of V, such that if H ⊂ G is a τ-cgp |H|^{ε_0}-approximate subgroup and |H| ≥ N, then |H^n ∩ V| < |H|^{dim(V)/dim(G) − η_0}. Indeed, given τ ∈ N, if (N, τ, ε_0, η_0) are as required in the revised statement for V of complexity at most f(τ), then (N, τ, ε := ε_0/C_1, η := dim(V)/dim(G) − (dim(V)/dim(G) − η_0)(1 + C_2 ε)) are as required in the original statement for V of complexity at most τ, after increasing C_1 if necessary to ensure η > 0. For given V of complexity at most τ, suppose there is A ⊂ G with |A| ≥ N and |A^n ∩ V| ≥ |A|^{dim(V)/dim(G) − η} and |A + A| ≤ |A|^{1+ε}, and with H := A − A being τ-cgp. Then H is an |A|^{C_1 ε} = |A|^{ε_0}-approximate subgroup, and |H| ≥ |A| ≥ N. Let x ∈ A, and let V′ := V − (x, x, ..., x), which has complexity at most f(τ). Then |H^n ∩ V′| ≥ |A^n ∩ V| ≥ |A|^{dim(V)/dim(G) − η} ≥ |H|^{(dim(V)/dim(G) − η)(1+C_2 ε)^{-1}} ≥ |H|^{dim(V)/dim(G) − η_0}. This contradicts the revised statement. Now suppose the revised statement fails. Then there is a family (V_b)_b of bounded complexity subvarieties of G^n such that for each s ∈ N there is an f(s)-cgp subset H_s ⊂ G such that H_s is an |H_s|^{4^{−s}}-approximate subgroup, and a parameter b_s, such that |H_s| ≥ s and |H_s^n ∩ V_{b_s}| ≥ |H_s|^{dim(V_{b_s})/dim(G) − 1/s} and V_{b_s} is not a coset. Let U be a non-principal ultrafilter on N, let Γ_i := ∏_{s→U} Σ_{j=1}^{2^{s−i}} H_s (the 2^{s−i}-fold sumset of H_s) for i ∈ N, and let Γ := ⋂_{i≥0} Γ_i. Set the language L to be such that each Γ_i is definable. Then Γ is a ∧-definable subgroup, since Γ_{i+1} + Γ_{i+1} ⊂ Γ_i. Now δ(Γ_i) = δ(∏_{s→U} H_s) for any i, because |Σ_{j=1}^{2^{s−i}} H_s| ≤ |H_s|^{1+2^{s−i} 4^{−s}} ≤ |H_s|^{1+2^{−s}}. So δ(Γ) = δ(∏_{s→U} H_s), so setting our scaling parameter ξ appropriately, we may ensure δ(Γ) = dim(G). We claim that Γ_0 is cgp in G. Indeed, Σ_{j=1}^{2^s} H_s is contained in the union of |H_s|^{2^s 4^{−s}} = |H_s|^{2^{−s}} translates of H_s, so if W ⊊ G has complexity ≤ s then, using the lower bound we assumed on f, we have |W ∩ Σ_{j=1}^{2^s} H_s| ≤ |H_s|^{2^{−s}} |H_s|^{1/f(s)} ≤ |H_s|^{1/s}. So Σ_{j=1}^{2^s} H_s is s-cgp in G. Let b := lim_{s→U} b_s. Then δ(Γ^n ∩ V_b) = dim(V_b). Set C_0 such that G and V_b are defined over C_0. By Fact 2.4 we can pick γ = (γ_1, ..., γ_n) ∈ Γ^n ∩ V_b with δ(γ) = δ(Γ^n ∩ V_b). Since Γ_0 is cgp in G, Lemma 7.2 implies that all γ_i's are cgp and δ(γ_i) ≤ d_0(γ_i). Then γ is coherent by Lemma 5.12.
So by Theorem 8.1 we have that V_b = locus_{G^n}(γ/C_0) is a coset. But then so is V_{b_m} for U-many m, contradicting our assumption.

Example 8.3 (Connections with Manin-Mumford and Mordell-Lang). Let G be a complex elliptic curve. Write G[∞] := ⋃_{r∈N} G[r] for the torsion subgroup. Suppose V ⊂ G^n is an irreducible closed complex subvariety such that V(C) ∩ G[∞]^n is Zariski dense in V. We know, by the Manin-Mumford conjecture proven by Raynaud, that V is a coset of an algebraic subgroup. Some co-ordinate projection to G^{dim(V)} yields an isogeny, so V = H + α where H = η(G^{dim(V)}) is a subgroup, η is an isogeny, and α ∈ G[∞]^n. Setting c := |ker(η)|^{-1}, it follows that for r ≥ N := ord(α) we have |V(C) ∩ G[r!]^n| ≥ c · |G^{dim(V)}[r!]|, and so for r ∈ N we have the lower bound |V(C) ∩ G[r!]^n| = Ω(|G[r!]|^{dim(V)}). Suppose conversely that we only know this consequence of Manin-Mumford on the asymptotics of the number of torsion points in V, or even just that for every ε > 0, for arbitrarily large r ∈ N, we have |V(C) ∩ G[r!]^n| ≥ |G[r!]|^{dim(V)−ε}. Then it follows that V is a coset of an algebraic subgroup. Indeed, G[r!] is a subgroup and is trivially τ-cgp for any τ since dim(G) = 1, so this is an immediate consequence of Theorem 1.17. We can generalise this argument by replacing G[∞] with a finite rank subgroup, as in the Mordell-Lang conjecture. Indeed, let Γ ≤ G(C) be a finite rank End(G)-submodule. Say Γ is contained in the divisible hull of the subgroup generated by γ_1, ..., γ_k. Let Γ_r := {x ∈ Γ : (r!)x ∈ Σ_i [−r, ..., r]γ_i}. Then Γ_r is finite and |Γ_r + Γ_r| ≤ 2^k |Γ_r|, so as above we obtain that if V ⊂ G^n is an irreducible closed subvariety, then V is a coset of a subgroup if and only if for all ε > 0, for arbitrarily large r ∈ N, we have |V(C) ∩ Γ_r^n| ≥ |Γ_r|^{dim(V)−ε}.

Appendix A. Projective geometries fully embedded in algebraic geometry

[EH91] characterises the projective subgeometries of the geometry of algebraic closure in an algebraically closed field K over an algebraically closed subfield C_0. The points of such a geometry are C_0-interalgebraicity classes of elements of K.
In this essentially self-contained appendix, we consider the more general situation of a projective geometry induced from field-theoretic algebraic dependence, whose points are C_0-interalgebraicity classes of finite tuples from K (or, equivalently, of K-rational points of arbitrary varieties over K). The arguments are generalisations of those used in [EH91]. We use Hrushovski's abelian group configuration theorem to find an abelian algebraic group, then apply a version of the fundamental theorem of projective geometry to identify the co-ordinatising skew field of the geometry as a skew field of quasi-isogenies of the group. Identifying the isogenies involved requires a little more care in the higher-dimensional case, as there may be non-trivial endomorphisms which are not isogenies, and these cannot appear in the co-ordinatising skew field. We allow ourselves to simplify some of the algebra by restricting ourselves to the characteristic 0 case, whereas [EH91] works in arbitrary characteristic. Let K be an algebraically closed field of characteristic 0. Let C_0 ≤ K be an algebraically closed subfield, and let cl : P(K^{<∞}) → P(K^{<∞}) be field-theoretic algebraic closure over C_0, as defined in Example 5.3. In other words, for a subset B ⊂ K^{<∞} := ⋃_{n≥1} K^n we let cl(B) be the set of tuples from the field-theoretic algebraic closure C_0(B)^{alg} of the subfield C_0(B) ≤ K generated by C_0 and the co-ordinates of all tuples from B. This closure operator (in model theory it is usually denoted acl^{eq}, defined on subsets of K^{eq}, which we identify here with K^{<ω} via elimination of imaginaries) was denoted acl_0(B)^{<∞} in Section 5. So a ∈ cl(B) if and only if a has finite orbit under Aut(K/C_0(B)), if and only if a ∈ (C_0(B)^{alg})^{<ω}. If V is an algebraic variety over C_0 and a ∈ V(K) is a K-rational point, we may consider a as a tuple in K^{<∞} as explained in §2.1.10. Let G_K := P(K^{<∞}; cl) be the projectivisation of the closure structure (K^{<∞}; cl), as defined in §5.1; i.e. G_K = {cl({a}) : a ∈ K^{<∞} \ cl(∅)}, with the closure induced from (and still denoted by) cl.
For x ∈ K^{<∞} and C ⊂ K^{<∞}, define x̄ := cl({x}) and C̄ := {c̄ : c ∈ C}. As already noted, G_K is not a geometry in general (it does not satisfy the exchange property), but here we are interested in geometries that embed in G_K. We say that a geometry P is connected if any two points a, b are non-orthogonal, i.e. if there exists C ⊂ P such that a ∈ cl(b, C) \ cl(C). Lemma A.1. Let P ⊂ G_K and suppose that the restriction (P, cl) of cl to P forms a connected geometry (embedded in G_K). Then the following are equivalent: (i) for any x̄ ∈ P and C̄ ⊂ P, x̄ ∈ cl(C̄) ⇔ d_0(x/C) < d_0(x); (ii) there exists k ∈ N such that for any finite subset Ā ⊂ P, k · dim_cl(Ā) = d_0(A), where recall d_0(A) := trd(C_0(A)/C_0). Proof. Note that (i) is equivalent to saying that d_0(x/C) = 0 if and only if d_0(x/C) < d_0(x). That (i) implies (ii) follows from additivity of dimensions, setting k := d_0(a) for any a ∈ K^{<∞} such that ā ∈ P, once we see that this does not depend on the choice of a. For another such b with b̄ ∈ P, we show that d_0(a) = d_0(b). If b̄ = ā then b and a are interalgebraic over C_0, so this holds. Else, by non-orthogonality there is C̄ such that ā ∈ cl(C̄, b̄) and b̄ ∈ cl(C̄, ā) but ā, b̄ ∉ cl(C̄). Then by (i), d_0(a) = d_0(a/C) = d_0(b/C) = d_0(b). The converse is easy, and since it is not needed in the sequel, we leave it to the reader. Definition A.2. We say a connected geometry (P, cl) ⊂ G_K is (k-dimensionally) fully embedded in G_K if the equivalent conditions of the above lemma hold. If G is a connected abelian algebraic group over C_0, let E_G := End_{C_0}(G) be the ring of algebraic endomorphisms defined over C_0, and let E^0_G := End^0_{C_0}(G) := Q ⊗_Z E_G. Any η ∈ E^0_G can be written as qη′ for some q ∈ Q and η′ ∈ E_G. Since char(K) = 0 and G is connected, G(K) is divisible, and the n-torsion is finite for all n and hence contained in G(C_0). So V := G(K)/G(C_0) is naturally a left E^0_G-module.
If F ≤ E^0_G is a division subring, we view V as an F-vector space, let P_F(G) := P(V) be its projectivisation, and let η^G_F : P_F(G) → G_K be the map induced by g ↦ ḡ for g ∈ G(K). Note that η^G_F is not injective in general. Example A.3. Let G and F be as above. Let g_i ∈ G(K) be independent generics over C_0, for i in a (possibly infinite) index set I. Let V′ := ⟨(g_i/G(C_0))_{i∈I}⟩_F ≤ G(K)/G(C_0). Then η^G_F maps the |I|-dimensional projective geometry P_F(V′) ⊂ P_F(G) injectively into G_K, and the image η^G_F(P_F(V′)) is dim(G)-dimensionally fully embedded in G_K. For example, in the case G = G_m, if a_0, ..., a_n ∈ K with trd(a/C_0) = n + 1, then they generate in K*/C_0* the Q-subspace {a^q/C_0* : q ∈ Q^{n+1}} = {∏_i a_i^{q_i}/C_0* : q_0, ..., q_n ∈ Q}; the algebraic dependencies over C_0 within this set are precisely those arising from Q-linear dependencies on the exponents, and so this yields an embedding of P^n(Q) in G_K. The following proposition, which is the main result of this appendix, says that any fully embedded projective geometry (of sufficiently large dimension) is of this form. Proposition A.4. Let (P, cl) ⊂ G_K be a k-dimensionally fully embedded geometry, and suppose P is isomorphic to the geometry of a projective space over a division ring F, with dim(P) ≥ 3. Then there is an abelian algebraic group G over C_0 of dimension k, an embedding of F as a subring F ≤ E^0_G, and a closed subgeometry P′ of P_F(G) on which η^G_F is injective, such that P = η^G_F(P′). Furthermore, G is unique up to isogeny. The remainder of this appendix constitutes a proof of Proposition A.4. The strategy of the proof is to find the commutative algebraic group G via the abelian group configuration theorem, and then to exhibit a natural injective collineation from P to P_F(G). The fundamental theorem of projective geometry, in its version over division rings, then allows us to conclude that this collineation is a projective embedding.
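The G_m case of Example A.3 can be simulated by pure linear algebra over Q. The sketch below is our own illustration (none of its names come from the paper): the class of ∏_i a_i^{q_i} modulo C_0* is modelled by its exponent vector q ∈ Q^{n+1}, and, for algebraically independent a_i, the cl-dimension of a finite set of such classes is the Q-rank of the corresponding exponent vectors:

```python
# Illustration of Example A.3 for G = G_m: model classes of prod_i a_i^{q_i}
# modulo C_0^* by exponent vectors in Q^{n+1}; cl-dimension becomes Q-rank.
from fractions import Fraction

def rank_over_Q(vectors):
    """Rank of a list of exponent vectors, by Gaussian elimination over Q."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    rank, col = 0, 0
    ncols = len(rows[0]) if rows else 0
    while rank < len(rows) and col < ncols:
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col] != 0:
                factor = rows[r][col] / rows[rank][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
        col += 1
    return rank

# With a_0, a_1, a_2 algebraically independent, the classes of a_0*a_1,
# a_1*a_2 and a_0/a_2 have the exponent vectors below; the third is a
# Q-linear combination of the first two, so the rank stays 2: the three
# points are collinear in P^2(Q).
assert rank_over_Q([[1, 1, 0], [0, 1, 1]]) == 2
assert rank_over_Q([[1, 1, 0], [0, 1, 1], [1, 0, -1]]) == 2
print("a_0/a_2 lies in the closure of {a_0*a_1, a_1*a_2}")
```

This is only the free, "generic" situation of Example A.3; for a general semiabelian G the co-ordinatising ring is E^0_G rather than Q.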
However, since we must also identify F within E^0_G, we will in fact use a more general form of the fundamental theorem. Remark A.5. In fact the proof applies directly to C_0 ≺ K models of an arbitrary theory of finite Morley rank, with definable groups and endomorphisms in place of algebraic groups and endomorphisms, as long as any connected definable abelian group is divisible (equivalently, has finite n-torsion for all n). Remark A.6. Unlike in [EH91], our techniques do not directly apply in the case of P a projective plane (i.e. a 3-dimensional connected modular geometry), and so do not rule out non-Desarguesian projective planes. However, David Evans has pointed out to us that the arguments of [Lin85] go through to show that any projective plane appearing as a subgeometry of cl is Desarguesian (and hence is a projective plane over a division ring). Lemma A.7. Suppose G is an abelian algebraic group over C_0, and let g, h ∈ G(K) and b ∈ P. Suppose ḡ, h̄, b ∈ P are independent, and (g + h)¯ ∈ P, and d ∈ cl(ḡ, b) \ {ḡ, b}. Then there is h′ ∈ G(K) such that h̄′ = b and (g + h′)¯ = d. Proof. By modularity, say c ∈ cl(h̄, b) ∩ cl((g + h)¯, d). Then we proceed as in [EH95, Lemma 1.2]. Say b = b̄′ and d = d̄′. Then by the coheir property of independence in the stable theory ACF, any formula in tp(c/C_0 b′ d′ g) is satisfiable in C_0. So since b′ resp. d′ is interalgebraic over c with points x resp. y of G satisfying x + g = y, they are already interalgebraic over C_0 with such points, as required. (Alternatively, this argument can be phrased purely algebraically, by taking a specialisation of c to C_0 fixing b′ d′ g; see [EH91, Lemma 2.1.1].) Lemma A.8. Suppose a, b ∈ P, a ≠ b. Then there is an abelian algebraic group G over C_0 with dim(G) = k, and there exist g, h ∈ G(K) such that a = ḡ and b = h̄, and cl(a, b) \ {a, b} = P ∩ {(g′ + h′)¯ : g′, h′ ∈ G(K), a = ḡ′, b = h̄′}. Proof. This proof closely follows the argument of [EH91, Theorem 3.3.1].
As there, this proof actually only needs dim(P) ≥ 3, but we will make the argument slightly more concrete by using that P is a projective geometry over F. Now say V is an F-vector space such that P ≅ P(V). Then by Fact A.14, for some embedding σ : F → E^0_G and σ-semilinear embedding f : V → G(K)/G(C_0), we have ι = P(f). The main statement of Proposition A.4 follows. For the uniqueness up to isogeny of G, suppose Proposition A.4 also holds for a group G′. Then if g, h ∈ G(K) with ḡ, h̄ ∈ P and ḡ ≠ h̄, then, as in the proof of Lemma A.12, there are g′, h′ ∈ G′(K) with ḡ′ = ḡ, h̄′ = h̄ and (g′ + h′)¯ = (g + h)¯. So by Fact 2.13, G′ is isogenous to G.

α_i(a, b, c, d) = (b^{-1}, a, d^{-1}, c); α_j(a, b, c, d) = (c^{-1}, d, a, b^{-1}); α_k(a, b, c, d) = (d^{-1}, c^{-1}, b, a).

Theorem 3.1 (Group Configuration Theorem). Suppose a, b, c, x, y, z ∈ K^{<∞} are such that in the following diagram [diagram lost in extraction].

Theorem 3.3 (Abelian Group Configuration Theorem). Suppose a, b, c, w, x, y, z ∈ K^{<∞} are such that in the following diagram [diagram lost in extraction].

... the argument above would suffice. To obtain the sharper bound, we use an additional argument inspired by [RSDZ16, Theorem 4.3]. So assume d_0(d) = 2d, i.e. acl_0(d) = acl_0(b). Let W := loc_0(ab) and V := loc_0(ad). Write V_y for {x : (x, y) ∈ V}, and similarly for W_y. So V_d = loc_0(a/d) is the irreducible component of W_b = loc_0(a/b) containing a. Now tp_0(d/b)(K) is finite, so say tp_0(d/b)(K) = {d_1, ..., d_k}. Then the V_{d_i} are precisely the irreducible components of W_b, and ... so by the definition of canonical base, we have σ′(d_i) = d_i. Then since b ∈ acl_0(d_i), there are only finitely many possibilities for σ′(b), and so there are only finitely many possibilities for b′ := σ(b) = τ_{ij}(σ′(b)). Now let G be the graph with vertex set tp(b)(K) and with an edge between b and b′ if and only if dim(W_b ∩ W_{b′}) = d. By Claim 3.12, G has constant finite degree. Claim 3.13.
If G = (A, E) is a graph where the vertex set A is ∧-internal and the edge relation E is internal, and if G has finite maximal degree k, then there is a ∧-internal anticlique A′ ⊂ A with δ(A′) = δ(A).

Proposition 5.10. If P ⊂ K^{<∞} is coherent then (P; acl_0 ↾ P) is a pregeometry. Proof. We verify exchange, the other properties being immediate. Suppose b ∈ acl_0(A ∪ {c}) \ acl_0(A) for A ⊂ P and b, c ∈ P. So b forks with c over A; by symmetry of forking independence (see §2.1.7), c forks with b over A. Now the next lemma forces c ∈ acl_0(A ∪ {b}).

Lemma 5.11. Suppose P ⊂ K^{<∞} is coherent. Let c ∈ P and A, B ⊂ P. Then either c is independent from B over A (in the sense of d_0), or c ∈ acl_0(A ∪ B). Proof. Suppose c is not independent from B over A. Then in particular c is not independent from B ∪ A, and so c forks with ba for some tuples b ∈ B^{<∞} and a ∈ A^{<∞}. Since c is cgp this implies δ(c/ba) = 0. Since P is coherent, d_0(cba) = δ(cba) and d_0(ba) = δ(ba), so by additivity of d_0 and δ we get d_0(c/ba) = 0. Hence c ∈ acl_0(A ∪ B).

Question 7.19.
Consider the following weakening of coarse general position. Say a ∈ K^{<∞} with δ(a) = d_0(a) is in Larsen-Pink general position if for any B ⊂ K^{<∞} we have δ(a/B) ≤ d_0(a/B). This hypothesis suffices to give the trivial upper bound of Lemma 7.1, and it is not satisfied by the counterexample of Section 4, nor by similar constructions based on nilpotent groups. Does Theorem 1.11 go through unchanged if coarse general position is relaxed to Larsen-Pink general position? The techniques of this paper are insufficient to prove this generalisation, as the corresponding weakened notion of coherence does not directly yield a pregeometry. Since type-definable subgroups of simple algebraic groups do satisfy this Larsen-Pink condition (see [Hru13, 2.15]), a positive answer should suffice to recover the characteristic 0 case of the main theorem of [BGT11, Theorem 5.5].

1.4. Acknowledgements. Thanks to Mohammad Bardestani, Elisabeth Bouscaren, Ben Green, Martin Hils, Udi Hrushovski, Jonathan Kirby, Oriol Serra, Pierre Simon and Hong Wang for helpful conversations.

Proof. By Lemma A.10, we may take a P-alignment witness g′ for g with h̄ ∉ cl(ḡ, ḡ′). Let d ∈ cl(ḡ, h̄) \ {ḡ, h̄}. Then by Lemma A.7, d = (g + h′)¯ for some h′ ∈ G(K) with h̄′ = h̄. Then h′ is P-aligned, so by Lemma A.11, h′/G(C_0) ∈ E^0_G(h/G(C_0)).
Our aim is to recognise F as a subring of E⁰_G, and P as embedded in the corresponding F-projectivisation of a subspace of G(K)/G(C). This is a matter of the fundamental theorem of projective geometry. However, since E⁰_G is not necessarily a field, this is not the classical case of the fundamental theorem. We use instead a version for projective spaces over rings obtained by Faure [Fau04]. The following definitions are adapted from [Fau04].

Definition A.13. The projectivisation P(M) of a module M over a ring R is the set of non-zero 1-generated submodules Rx, equipped with the closure operator cl_{P(M)} induced from R-linear span. If N is a module over a ring S, a map g :

A ring R is directly finite if (∀λ, µ ∈ R)(λµ = 1 ⇒ µλ = 1).

Fact A.14. Suppose M and N are modules over rings R and S respectively, and S is directly finite. Suppose g : P(M) → P(N) is a projective morphism, and im(g) contains free points B₁, B₂, B₃ such that for any C₁, C₂ ∈ im(g), for some i ∈ {1, 2, 3}, we have

Then there exists an embedding σ : R → S and a σ-semilinear embedding f : M → N such that g = P(f).

Proof. This is the statement of [Fau04, Theorem 3.2] in the case E = ∅, except that there C₁ and C₂ are not restricted to im(g); however, in the proof the condition is used only when C₁ and C₂ are in im(g).

Then Lemma A.9 and Lemma A.11 establish a map ι : P → P_{E⁰_G}(G), which by Lemma A.12 is a projective morphism. We proceed to verify the assumptions of Fact A.14. E⁰_G is directly finite since if µλ = 1 with µ, λ ∈ E_G then µ is an isogeny so has a quasi-inverse µ′ ∈ E_G with n := µ′µ ∈ N_{>0}; then λµn = nλµ = µ′µλµ = µ′µ = n, so λµ = 1 since G is n-divisible. Now dim(P) ≥ 3, so say g̃₁, g̃₂, g̃₃ ∈ P are independent with g_i being P-aligned. Let B_i := ι(g̃_i) = E⁰_G·g_i. Since each g_i is generic in G(K), each B_i is free. To check (C3), suppose C_i = ι(h̃_i) for i = 1, 2 with h_i being P-aligned.
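For intuition, here is a toy instance of the projectivisation in Definition A.13 (our illustration, not from the paper): taking R to be the field F_p and M = F_p³, the non-zero 1-generated submodules Rx are exactly the points of the projective plane P²(F_p), and there are p² + p + 1 of them:

```python
# Illustration: the projectivisation P(M) for M = F_p^3 over the field R = F_p.
# Points are the non-zero 1-generated submodules R*x, i.e. lines through 0.
from itertools import product

def projectivisation(p):
    points = set()
    for x in product(range(p), repeat=3):
        if x == (0, 0, 0):
            continue
        # the submodule R*x, stored as a frozenset so equal lines coincide
        line = frozenset(tuple((c * xi) % p for xi in x) for c in range(p))
        points.add(line)
    return points

for p in (2, 3, 5, 7):
    assert len(projectivisation(p)) == p * p + p + 1
print("P^2(F_p) has p^2 + p + 1 points, as expected")
```

Over a field every non-zero scalar is invertible, so this is the classical case; the point of Fact A.14 is precisely that E⁰_G need not be a field, which is why direct finiteness of the scalar ring enters.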
Then some g̃_i ∉ cl(h̃₁, h̃₂), and so since P is fully embedded in G_K, we have g_i ⫝⁰ h₁h₂, from which (C3) follows.

Take c ∈ P \ cl(a, b), and identify cl(a, b, c) with P(F³), placing a, b, c at [0 : 1 :

Now let d ∈ cl(a, b) \ {a, b}. Then by Lemma A.7 applied to g, k, b, we have d = (g + h)̃ for some h ∈ G(K) with h̃ = b. The converse inclusion is clear. Now let a₀, b₀ ∈ P, a₀ ≠ b₀, and fix G and g₀, h₀ ∈ G(K) as provided by this

Say g ∈ G(K) is P-aligned if there exists g′ ∈ G(K) with g̃′ ≠ g̃ such that g̃, g̃′, (g + g′)̃ ∈ P. Such a g′ is a P-alignment witness for g.

Lemma A.9. Every point of P is of the form g̃ for some P-aligned g ∈ G(K).

Lemma A.10. If g ∈ G(K) is P-aligned and b ∈ P \ {g̃}, then there exists a P-alignment witness g′ for g with g̃′ = b.

Proof. By Lemma A.7, if g′ ∈ G(K) is a P-alignment witness for g and b ∉ cl(g̃, g̃′), then there exists a P-alignment witness g′′ for g with g̃′′ = b. To handle the case that b ∈ cl(g̃, g̃′), apply this first to some b′ ∈ P \ cl(g̃, g̃′) to obtain a witness g′′, and then again to b ∉ cl(g̃, g̃′′).

Lemma A.11. If g, h ∈ G(K) are P-aligned and g̃ = h̃, then E⁰_G(g/G(C₀)) = E⁰_G(h/G(C₀)).

Proof.
Say g′, h′ ∈ G(K) are P-alignment witnesses for g, h respectively. By Lemma A.10, we may assume g̃′ ∉ cl(h̃, h̃′). By Lemma A.7, there is h′′ ∈ G(K) such that h̃′′ = h̃′ and g + h′′ = h + h′. Then by Fact 2.13, there is n ∈ N and an isogeny α ∈ E_G and k ∈ G(C₀) such that αg = nh + k, as required.

Lemma A.12. Suppose g, h ∈ G(K) are P-aligned and g̃ ≠ h̃. Then cl(g̃, h̃) = P ∩ {k̃ : k/G(C₀) ∈ E⁰_G(g/G(C₀)) + E⁰_G(h/G(C₀))}.

References

Emil Artin. Geometric algebra. Interscience Publishers, Inc., New York-London, 1957.
Emmanuel Breuillard, Ben Green, and Terence Tao. Approximate subgroups of linear groups. Geom. Funct. Anal., 21(4):774-819, 2011.
Martin Bays, Martin Hils, and Rahim Moosa. Model theory of compact complex manifolds with an automorphism. Trans. Amer. Math. Soc., 369(6):4485-4516, 2017.
Martin Bays, Bradd Hart, and Anand Pillay. Universal covers of commutative finite Morley rank groups. J. Inst. Math. Jussieu, 19:767-799, 2020.
Thomas Blossier and Amador Martin-Pizarro. De beaux groupes. Confluentes Math., 6(1):29-39, 2014.
Emmanuel Breuillard and Hong Wang.
Erdős geometry and the group configuration. 2016. Oberwolfach report, Model Theory Workshop.
Peter J. Cameron. Projective and polar spaces, volume 13 of QMW Maths Notes. Queen Mary and Westfield College, School of Mathematical Sciences, London, 1992.
Artem Chernikov and Sergei Starchenko. A model-theoretic generalization of the Elekes-Szabó theorem. arXiv:1801.09301 [math.LO].
Frank de Zeeuw. A survey of Elekes-Rónyai-type problems. In New trends in intuitive geometry, volume 27 of Bolyai Soc. Math. Stud., pages 95-124. János Bolyai Math. Soc., Budapest, 2018.
David M. Evans and Ehud Hrushovski. Projective planes in algebraically closed fields. Proc. London Math. Soc. (3), 62(1):1-24, 1991.
David M. Evans and Ehud Hrushovski. The automorphism group of the combinatorial geometry of an algebraically closed field. J. London Math. Soc. (2), 52(2):209-225, 1995.
György Elekes and Endre Szabó. How to find groups? (and how to use them in Erdős geometry?). Combinatorica, 32(5):537-571, 2012.
Claude-Alain Faure. Morphisms of projective spaces over rings. Adv.
Geom., 4(1):19-31, 2004.
Jacob Fox, János Pach, Adam Sheffer, Andrew Suk, and Joshua Zahl. A semi-algebraic version of Zarankiewicz's problem. J. Eur. Math. Soc. (JEMS), 19(6):1785-1810, 2017.
Ehud Hrushovski. Stable group theory and approximate subgroups. J. Amer. Math. Soc., 25(1):189-243, 2012.
Ehud Hrushovski. On pseudo-finite dimensions. Notre Dame J. Form. Log., 54(3-4):463-495, 2013.
Ehud Hrushovski and Frank Wagner. Counting and dimensions. In Model theory with applications to algebra and analysis. Vol. 2, volume 350 of London Math. Soc. Lecture Note Ser., pages 161-176. Cambridge Univ. Press, Cambridge, 2008.
Ehud Hrushovski and Boris Zilber. Zariski geometries. J. Amer. Math. Soc., 9(1):1-56, 1996.
Nathan Jacobson. PI-algebras. An introduction. Lecture Notes in Mathematics, Vol. 441. Springer-Verlag, Berlin-New York, 1975.
Bernt Lindström. A Desarguesian theorem for algebraic combinatorial geometries. Combinatorica, 5(3):237-239, 1985.
David Marker. Model theory, volume 217 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2002.
János Pach. Midpoints of segments induced by a point set. Geombinatorics, 13(2):98-105, 2003.
Anand Pillay. Geometric stability theory, volume 32 of Oxford Logic Guides. The Clarendon Press, Oxford University Press, New York, 1996.
Anand Pillay. Model theory of algebraically closed fields. In Model theory and algebraic geometry, volume 1696 of Lecture Notes in Math., pages 61-84. Springer, Berlin, 1998.
János Pach and Micha Sharir. On the number of incidences between points and curves. Combin. Probab. Comput., 7(1):121-127, 1998.
Orit Raz, Micha Sharir, and Frank de Zeeuw. Polynomials vanishing on Cartesian products: the Elekes-Szabó theorem revisited. Duke Math. J., 165(18):3517-3566, 2016.
Orit Raz, Micha Sharir, and Frank de Zeeuw. The Elekes-Szabó theorem in four dimensions. Israel J. Math., 227(2):663-690, 2018.
Thomas Scanlon. Automatic uniformity. Int. Math. Res. Not., (62):3317-3326, 2004.
Adam Sheffer. Polynomial Methods and Incidence Theory. 2019. Unfinished draft, 22.08.2019.
Adam Sheffer, Endre Szabó, and Joshua Zahl. Point-Curve Incidences in the Complex Plane. Combinatorica, 38(2):487-499, 2018.
Igor Shparlinski.
On the elliptic curve analogue of the sum-product problem. Finite Fields Appl., 14(3):721-726, 2008.
Endre Szemerédi and William T. Trotter, Jr. Extremal problems in discrete geometry. Combinatorica, 3(3-4):381-392, 1983.
Terence Tao. Expanding polynomials over finite fields of large characteristic, and a regularity lemma for definable sets. Contrib. Discrete Math., 10(1):22-98, 2015.
Terence Tao and Van Vu. Additive combinatorics, volume 105 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2006.
Katrin Tent and Martin Ziegler. A course in model theory, volume 40 of Lecture Notes in Logic. Association for Symbolic Logic, La Jolla, CA; Cambridge University Press, Cambridge, 2012.
Hong Wang. An application of algebraic geometry to combinatorics, a refinement of the Elekes-Szabó theorem. 2014. Masters thesis, Université Paris-Sud, Orsay.
Martin Ziegler. A note on generic types. 2006. arXiv:0608433v1 [math.LO].
{'abstract': "We generalise the Elekes-Szabó theorem to arbitrary arity and dimension and characterise the complex algebraic varieties without power saving. The characterisation involves certain algebraic subgroups of commutative algebraic groups endowed with an extra structure arising from a skew field of endomorphisms. We also extend the Erdős-Szemerédi sum-product phenomenon to elliptic curves. Our approach is based on Hrushovski's framework of pseudo-finite dimensions and the abelian group configuration theorem. arXiv:1806.03422v3 [math.CO] 1 Jun 2021", 'arxivid': '1806.03422', 'author': ['Martin ', 'Emmanuel Breuillard '], 'authoraffiliation': [], 'corpusid': 119642959, 'doi': '10.24033/asens.2467', 'github_urls': [], 'n_tokens_mistral': 58080, 'n_tokens_neox': 51217, 'n_words': 32097, 'pdfsha': 'd9bddba8036a93e01190146fce036563ec925bbe', 'pdfurls': ['https://export.arxiv.org/pdf/1806.03422v3.pdf'], 'title': ['PROJECTIVE GEOMETRIES ARISING FROM ELEKES-SZABÓ PROBLEMS', 'PROJECTIVE GEOMETRIES ARISING FROM ELEKES-SZABÓ PROBLEMS'], 'venue': []}
arxiv
Subsampling for Knowledge Graph Embedding Explained

Hidetaka Kamigaito, Nara Institute of Science and Technology (NAIST), Nara
Katsuhiko Hayashi, Hokkaido University, Hokkaido, Japan

Sep 2022

In this article, we explain the recent advance of subsampling methods in knowledge graph embedding (KGE), starting from the original one used in word2vec.

Negative Sampling Loss

Knowledge graph completion (KGC) is a research topic for automatically inferring new links in a KG that are likely but not yet known to be true. We denote a triplet representing entities e_i, e_j and their relation r_k as (e_i, r_k, e_j). In a typical KGC task, the model receives a query (e_i, r_k, ?) or (?, r_k, e_j) and predicts the entity corresponding to ?.

Knowledge graph embedding (KGE) is a well-known scalable approach for KGC. In KGE, a KGE model scores a triplet (e_i, r_k, e_j) by using a scoring function s_θ(x, y). Due to the computational cost, training of s_θ(x, y) commonly relies on the following negative sampling (NS) loss function [Sun et al., 2019, Ahrabian et al., 2020]:

ℓ_base = − (1/|D|) Σ_{(x,y)∈D} [ log σ(s_θ(x,y) + γ) + (1/ν) Σ^ν_{y_i ∼ p_n(y_i|x)} log σ(−s_θ(x,y_i) − γ) ],   (1)

where D = {(x₁, y₁), …, (x_n, y_n)} represents observables that follow p_d(x, y), p_n(y|x) is the noise distribution, σ is the sigmoid function, ν is the number of negative samples per positive sample (x, y), and γ is a margin term.

Subsampling in Negative Sampling Loss

Eq. (1) rests on the assumption that the NS loss function fits the model to the distribution p_d(y|x) defined from the observed data. However, what the NS loss actually does is fit the model to the true distribution p′_d(y|x) that exists behind the observed data. To fill in the gap between p_d(y|x) and p′_d(y|x), Kamigaito and Hayashi [2022a,b] theoretically add A(x, y) and B(x) to Eq.
(1) as follows¹:

ℓ_sub = − (1/|D|) Σ_{(x,y)∈D} [ A(x,y) log σ(s_θ(x,y) + γ) + (1/ν) Σ^ν_{y_i ∼ p_n(y_i|x)} B(x) log σ(−s_θ(x,y_i) − γ) ].   (2)

In this formulation, we can consider several assumptions for deciding A(x, y) and B(x). We introduce the assumptions in the following subsections.

Subsampling in word2vec (Base)

As a basic subsampling approach, Sun et al. [2019] used the original word2vec-based method for KGE learning, defined as follows:

A(x,y) = B(x,y) = ( #(x,y)^{−1/2} · |D| ) / Σ_{(x′,y′)∈D} #(x′,y′)^{−1/2},   (3)

where # is the symbol for frequency and #(x, y) represents the frequency of (x, y)². Note that the actual (x, y) occurs at most once in the KG, so when (x, y) = (e_i, r_k, e_j), they approximate the frequency of (x, y) as follows:

#(x, y) ≈ #(e_i, r_k) + #(r_k, e_j).   (4)

Different from the form in Eq. (2), Eq. (3) uses A(x, y) and B(x, y) instead of A(x, y) and B(x). Thus, their approach does not follow the theoretically induced loss function in Eq. (2).

Frequency-based Subsampling (Freq)

Frequency-based subsampling [Kamigaito and Hayashi, 2022b] is based on the assumption that in p′_d(y|x), (x, y) originally has a frequency, but the observed one is at most 1. Since A(x, y) needs to discount the frequency of (x, y), and B(x) needs to discount that of x, we can derive the following subsampling method based on word2vec [Mikolov et al., 2013] as implemented by the previous work [Sun et al., 2019]³:

A(x,y) = ( #(x,y)^{−1/2} · |D| ) / Σ_{(x′,y′)∈D} #(x′,y′)^{−1/2},   B(x) = ( #x^{−1/2} · |D| ) / Σ_{x′∈D} #x′^{−1/2}.   (5)

Unique-based Subsampling (Uniq)

In the true distribution p′_d(y|x), however, if we assume that (x, y) has frequency at most 1, as in the observation, then p′_d(y|x) = p′_d(x, y)/p′_d(x) ∝ 1/p′_d(x), so p′_d(y|x) is the same for an x independent from y.

* Nara Institute of Science and Technology (NAIST), Nara, Japan. [email protected]
† Hokkaido University, Hokkaido, Japan.
¹ We include the detailed derivation of this function in Appendix A.
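The three weightings can be sketched in a few lines (an illustration, not the authors' code; it assumes the weights in Eqs. (3), (5), (6) are normalized to have mean 1 over D, and for simplicity it treats the head-side query (e_i, r_k) as x when computing #x):

```python
# Sketch (not the authors' code): computing the subsampling weights of
# Eqs. (3) and (5) from a toy set of triples (e_i, r_k, e_j).
# #(x, y) is approximated by #(e_i, r_k) + #(r_k, e_j), as in Eq. (4).
from collections import Counter

triples = [("a", "r", "b"), ("a", "r", "c"), ("d", "r", "b"), ("d", "s", "c")]

hr = Counter((h, r) for h, r, _ in triples)   # #(e_i, r_k)
rt = Counter((r, t) for _, r, t in triples)   # #(r_k, e_j)

def count_xy(h, r, t):                        # #(x, y) via Eq. (4)
    return hr[(h, r)] + rt[(r, t)]

def count_x(h, r, t):                         # #x; here the head-side query
    return hr[(h, r)]                         # (e_i, r_k) plays the role of x

def normalize(raw):                           # scale weights to mean 1 over D
    mean = sum(raw.values()) / len(raw)
    return {k: v / mean for k, v in raw.items()}

freq_A = normalize({t: count_xy(*t) ** -0.5 for t in triples})  # A in Eq. (5)
freq_B = normalize({t: count_x(*t) ** -0.5 for t in triples})   # B in Eq. (5)
base_AB = freq_A    # Eq. (3): A = B, both based on #(x, y)
uniq_AB = freq_B    # unique-based variant: A = B, both based on #x

print({t: round(w, 3) for t, w in freq_A.items()})
```

Rarer queries and triples receive weights above 1 and frequent ones below 1, which is exactly the discounting role that A(x, y) and B(x) play in Eq. (2).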
Therefore, under this assumption, we only need to consider a discount for p_d(x), and can derive the unique-based subsampling [Kamigaito and Hayashi, 2022b] as follows:

A(x, y) = B(x) = ( #x^{−1/2} · |D| ) / Σ_{x′∈D} #x′^{−1/2}.   (6)

Effectiveness of Subsampling in KGE

We conducted experiments to evaluate our subsampling methods. We used FB15k-237 [Toutanova and Chen, 2015], WN18RR, and YAGO3-10 [Dettmers et al., 2018] for the evaluation. As comparison methods, we used ComplEx [Trouillon et al., 2016], RESCAL [Bordes et al., 2011], DistMult [Yang et al., 2015], TransE [Bordes et al., 2013], RotatE [Sun et al., 2019], and HAKE [Zhang et al., 2020]. We followed the original settings of Sun et al. [2019] for ComplEx, DistMult, TransE, and RotatE with their implementation⁴ and the original settings of Zhang et al. [2020] for HAKE with their implementation⁵. In RESCAL, we inherited the original setting of DistMult and set the dimension size to 500 to save computational time. Since Kamigaito and Hayashi [2021] refer to the smoothing effect of self-adversarial negative sampling (SANS) [Sun et al., 2019], which is a role of subsampling, we applied subsampling on SANS to investigate the performance in practical settings.

Table 1 shows the result. We can see that subsampling improved KG completion performance over the methods without subsampling. Furthermore, frequency-based and unique-based subsampling basically outperformed the baseline subsampling.

Table 1: Evaluation results of Kamigaito and Hayashi [2022b] for each subsampling method on the FB15k-237, WN18RR, and YAGO3-10 datasets (in that column order; each dataset block lists MRR and Hits@1/3/10). Sub. denotes subsampling, None denotes a model that did not use subsampling, Base denotes Eq. (3), Freq denotes Eq. (5), and Uniq denotes Eq. (6).

| Model    | Sub. | MRR  | @1   | @3   | @10  | MRR  | @1   | @3   | @10  | MRR  | @1   | @3   | @10  |
|----------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| RESCAL   | None | 17.2 | 9.9  | 18.1 | 31.8 | 41.5 | 39.0 | 42.3 | 45.9 | -    | -    | -    | -    |
| RESCAL   | Base | 22.3 | 13.9 | 24.2 | 39.8 | 43.3 | 40.7 | 44.5 | 48.2 | -    | -    | -    | -    |
| RESCAL   | Freq | 26.6 | 17.4 | 29.4 | 45.1 | 44.1 | 41.1 | 45.6 | 49.5 | -    | -    | -    | -    |
| RESCAL   | Uniq | 26.6 | 17.6 | 29.3 | 44.9 | 44.1 | 41.4 | 45.5 | 49.5 | -    | -    | -    | -    |
| ComplEx  | None | 22.4 | 14.0 | 24.2 | 39.5 | 45.0 | 40.9 | 46.6 | 53.4 | -    | -    | -    | -    |
| ComplEx  | Base | 32.2 | 23.0 | 35.1 | 51.0 | 47.1 | 42.8 | 48.9 | 55.7 | -    | -    | -    | -    |
| ComplEx  | Freq | 32.8 | 23.6 | 36.1 | 51.2 | 47.6 | 43.3 | 49.3 | 56.3 | -    | -    | -    | -    |
| ComplEx  | Uniq | 32.7 | 23.5 | 35.8 | 51.3 | 47.6 | 43.2 | 49.5 | 56.3 | -    | -    | -    | -    |
| DistMult | None | 22.2 | 14.0 | 24.0 | 39.4 | 42.4 | 38.3 | 43.6 | 51.0 | -    | -    | -    | -    |
| DistMult | Base | 30.8 | 22.1 | 33.6 | 48.4 | 43.9 | 39.4 | 45.2 | 53.3 | -    | -    | -    | -    |
| DistMult | Freq | 29.9 | 21.2 | 32.7 | 47.5 | 44.6 | 40.0 | 45.9 | 54.4 | -    | -    | -    | -    |
| DistMult | Uniq | 29.1 | 20.3 | 31.8 | 46.6 | 44.6 | 39.9 | 46.2 | 54.3 | -    | -    | -    | -    |
| TransE   | None | 33.0 | 22.8 | 37.2 | 53.0 | 22.6 | 1.8  | 40.1 | 52.3 | 50.6 | 40.9 | 56.6 | 67.7 |
| TransE   | Base | 32.9 | 23.0 | 36.8 | 52.7 | 22.4 | 1.3  | 40.1 | 53.0 | 51.2 | 41.5 | 57.6 | 68.3 |
| TransE   | Freq | 33.6 | 24.0 | 37.3 | 52.9 | 23.0 | 1.9  | 40.7 | 53.7 | 51.3 | 41.9 | 57.2 | 68.1 |
| TransE   | Uniq | 33.5 | 23.9 | 37.3 | 52.8 | 23.2 | 2.2  | 41.0 | 53.4 | 51.4 | 42.0 | 57.6 | 67.9 |
| RotatE   | None | 33.1 | 23.1 | 37.1 | 53.1 | 47.3 | 42.6 | 49.1 | 56.7 | 50.6 | 41.1 | 56.5 | 67.8 |
| RotatE   | Base | 33.6 | 23.9 | 37.4 | 53.2 | 47.6 | 43.1 | 49.5 | 56.6 | 50.8 | 41.8 | 56.5 | 67.6 |
| RotatE   | Freq | 34.0 | 24.5 | 37.6 | 53.2 | 47.8 | 42.9 | 49.8 | 57.4 | 51.0 | 41.9 | 56.5 | 67.8 |
| RotatE   | Uniq | 34.0 | 24.5 | 37.6 | 53.0 | 47.9 | 43.5 | 49.6 | 56.7 | 51.5 | 42.5 | 56.8 | 68.3 |
| HAKE     | None | 32.3 | 21.6 | 36.9 | 53.2 | 49.1 | 44.5 | 51.1 | 57.8 | 53.4 | 44.9 | 58.7 | 68.4 |
| HAKE     | Base | 34.5 | 24.7 | 38.2 | 54.3 | 49.8 | 45.3 | 51.6 | 58.2 | 54.3 | 46.1 | 59.5 | 69.2 |
| HAKE     | Freq | 34.9 | 25.2 | 38.6 | 54.2 | 49.7 | 45.2 | 51.4 | 58.5 | 54.0 | 45.5 | 59.4 | 69.1 |
| HAKE     | Uniq | 35.4 | 25.8 | 38.9 | 54.7 | 49.8 | 45.4 | 51.5 | 58.3 | 55.0 | 46.6 | 60.1 | 69.8 |

² In the original word2vec, they randomly discard a word with probability 1 − √(t/f), where t is a constant value and f is the frequency of a word. This is similar to randomly keeping a word with probability √(t/f).
³ https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding

A  The Detailed Derivation of Eq. (2)

We can reformulate the NS loss in Eq.
(1) as follows:

ℓ_base = − (1/|D|) Σ_{(x,y)∈D} [ log σ(s_θ(x,y) + γ) + (1/ν) Σ^ν_{y_i ∼ p_n(y_i|x)} log σ(−s_θ(x,y_i) − γ) ].   (7)

Here, we can consider the following approximation based on the Monte Carlo method:

(1/|D|) Σ_{(x,y)∈D} f(x,y) ≈ Σ_{(x,y)} p_d(x,y) f(x,y).   (8)

Using Eq. (8), we can reformulate Eq. (7) as follows:

(7) ≈ − Σ_{(x,y)} p_d(x,y) [ log σ(s_θ(x,y) + γ) + (1/ν) Σ^ν_{y_i ∼ p_n(y_i|x)} log σ(−s_θ(x,y_i) − γ) ].   (9)

Similar to Eq. (8), we can consider the following approximation by the Monte Carlo method:

(1/ν) Σ^ν_{y_i ∼ p_n(y_i|x)} f(x, y_i) ≈ Σ_{y_i} p_n(y_i|x) f(x, y_i).   (10)

Using Eq. (10), we can reformulate Eq. (9) as follows:

(9) ≈ − Σ_{(x,y)} p_d(x,y) log σ(s_θ(x,y) + γ) − Σ_x p_d(x) Σ_{y_i} p_n(y_i|x) log σ(−s_θ(x,y_i) − γ).   (11)

Next, we consider replacements of p_d(x, y) with p′_d(x, y) and p_d(x) with p′_d(x). By assuming two functions, A(x, y) and B(x), that convert p_d(x, y) into p′_d(x, y) and p_d(x) into p′_d(x), we further reformulate Eq. (11) as follows:

(11) ≈ − Σ_{(x,y)} A(x,y) p_d(x,y) log σ(s_θ(x,y) + γ) − Σ_x B(x) p_d(x) Σ_{y_i} p_n(y_i|x) log σ(−s_θ(x,y_i) − γ).   (12)

Based on the similar derivation from Eq. (1) to Eq. (11), we can reformulate Eq. (12) as follows:

(12) ≈ − (1/|D|) Σ_{(x,y)∈D} [ A(x,y) log σ(s_θ(x,y) + γ) + (1/ν) Σ^ν_{y_i ∼ p_n(y_i|x)} B(x) log σ(−s_θ(x,y_i) − γ) ].

References

Kian Ahrabian, Aarash Feizi, Yasmin Salehi, William L. Hamilton, and Avishek Joey Bose. Structure aware negative sampling in knowledge graphs. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6093-6101, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.492. URL https://aclanthology.org/2020.emnlp-main.492.
Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio.
Learning structured embeddings of knowledge bases. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, AAAI'11, pages 301-306. AAAI Press, 2011.
Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013, pages 2787-2795, 2013. URL https://proceedings.neurips.cc/paper/2013/hash/1cecc7a77928ca8133fa24680a88d2f9-Abstract.html.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2D knowledge graph embeddings. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pages 1811-1818, 2018. URL https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17366.
Hidetaka Kamigaito and Katsuhiko Hayashi.
Unified interpretation of softmax cross-entropy and negative sampling: With case study for knowledge graph embedding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5517-5531, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.429. URL https://aclanthology.org/2021.acl-long.429.
Hidetaka Kamigaito and Katsuhiko Hayashi. Erratum to: Comprehensive analysis of negative sampling in knowledge graph representation learning. ResearchGate, 08 2022a. doi: 10.13140/RG.2.2.34839.44966/1.
Hidetaka Kamigaito and Katsuhiko Hayashi. Comprehensive analysis of negative sampling in knowledge graph representation learning. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 10661-10675. PMLR, 17-23 Jul 2022b. URL https://arxiv.org/abs/2206.10140.
Tomás Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. CoRR, abs/1310.4546, 2013. URL http://arxiv.org/abs/1310.4546.

⁴ https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding
⁵ https://github.com/MIRALab-USTC/KGE-HAKE
Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. RotatE: Knowledge graph embedding by relational rotation in complex space. In Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, 2019. URL https://openreview.net/forum?id=HkgEQnRqYQ.
Kristina Toutanova and Danqi Chen. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57-66, Beijing, China, July 2015. Association for Computational Linguistics. doi: 10.18653/v1/W15-4007. URL https://www.aclweb.org/anthology/W15-4007.
Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, volume 48 of JMLR Workshop and Conference Proceedings, pages 2071-2080. JMLR.org, 2016. URL http://proceedings.mlr.press/v48/trouillon16.html.
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, 2015. URL http://arxiv.org/abs/1412.6575.
Zhanqiu Zhang, Jianyu Cai, Yongdong Zhang, and Jie Wang. Learning hierarchy-aware knowledge graph embeddings for link prediction. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), pages 3065-3072, 2020.
{'abstract': 'In this article, we explain the recent advance of subsampling methods in knowledge graph embedding (KGE) starting from the original one used in word2vec.Negative Sampling LossKnowledge graph completion (KGC) is a research topic for automatically inferring new links in a KG that are likely but not yet known to be true.We denote a triplet representing entities e i , e j and their relation r k as (e i , r k , e j ). In a typical KGC task, the model receives a query (e i , r k , ?) or (?, r k , e j ) and predicts the entity corresponding to ?.Knowledge graph embedding (KGE) is a well-known scalable approach for KGC. In KGE, a KGE model scores a triplet (e i , r k , e j ) by using a scoring function s θ (x, y). Due to the computational cost, training of s θ (x, y) commonly relies on the following negative sampling loss function[Sun et al., 2019, Ahrabian et al., 2020:', 'arxivid': '2209.12801', 'author': ['Hidetaka Kamigaito \nNara Institute of Science and Technology (NAIST)\nNara\n', 'Katsuhiko Hayashi \nHokkaido University\nHokkaidoJapan\n'], 'authoraffiliation': ['Nara Institute of Science and Technology (NAIST)\nNara', 'Hokkaido University\nHokkaidoJapan'], 'corpusid': 252531467, 'doi': '10.48550/arxiv.2209.12801', 'github_urls': ['https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding', 'https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding', 'https://github.com/MIRALab-USTC/KGE-HAKE'], 'n_tokens_mistral': 6791, 'n_tokens_neox': 5743, 'n_words': 2451, 'pdfsha': '5925f186996d0135d73aa57305b096c8ef29a817', 'pdfurls': ['https://export.arxiv.org/pdf/2209.12801v1.pdf'], 'title': ['Subsampling for Knowledge Graph Embedding Explained', 'Subsampling for Knowledge Graph Embedding Explained'], 'venue': []}
arxiv
COEFFICIENT GROWTH IN SQUARE CHAINS
Shawn Walker
27 Feb 2019

Motivation: factoring integers with square chains

Call a polynomial of the form P(x) = ((· · ·((x^2 − c_1)^2 − c_2)^2 · · ·)^2 − c_{k−1})^2 − c_k a square chain of length k. Some square chains of lengths k = 3, 4 are presented in Crandall and Pomerance [2005, research problem 6.18] which have the property that they have 2^k distinct integer roots. Crandall and Pomerance then ask about the existence of longer square chains, suggesting that sufficiently long chains might be useful for factoring large integers. Indeed, a simple scheme shows promise: Suppose n = pq is an odd semiprime, and that p and q are approximately the same size. If P(x) ∈ Z[x] has about p/2 ≈ q/2 distinct roots, one can reasonably hope that P(m) ≡ 0 (mod p) for about half of m ∈ {0, 1, . . . , n − 1}, and similarly hope P(m) ≡ 0 (mod q) for about half of m ∈ {0, 1, . . . , n − 1}. If we assume heuristically that these are independent events, then about a quarter of the choices of m yield gcd(P(m), n) = p, and about a quarter yield gcd(P(m), n) = q. For a more rigorous analysis, see Lipton [1994]. To make this a tractable factoring algorithm, we need a polynomial P which can be efficiently evaluated and has sufficiently many distinct roots. Square chains are nearly ideal from the standpoint of efficient evaluation. As a polynomial of degree 2^k, a square chain of length k may have up to 2^k roots and may be evaluated using only k multiplications. No polynomial of degree 2^k can be evaluated with fewer multiplications (Borchert et al. [2013]). An obstacle is finding square chains with many distinct roots. Let us set aside the question of existence. Suppose that there exists a square chain of length k which has exactly 2^k roots, counting multiplicity, even if those roots are not all distinct.
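The evaluation scheme and the gcd heuristic described above can be sketched in a few lines of Python. The function names and the toy length-2 chain ((x^2 − 17)^2 − 64) = (x − 5)(x + 5)(x − 3)(x + 3) below are ours, for illustration only; they are not from the paper.

```python
from math import gcd

def eval_square_chain(cs, x, n=None):
    """Evaluate the square chain ((...(x^2 - c_1)^2 ...)^2 - c_k) at x,
    using one squaring per coefficient; reduce mod n if n is given."""
    v = x
    for c in cs:
        v = v * v - c
        if n is not None:
            v %= n
    return v

def try_factor(n, cs, candidates):
    """Heuristic scheme from the text: hope gcd(P(m) mod n, n) is a
    nontrivial factor of n for some candidate m."""
    for m in candidates:
        g = gcd(eval_square_chain(cs, m, n), n)
        if 1 < g < n:
            return g
    return None
```

For instance, with n = 77 = 7 · 11, the toy chain satisfies P(10) ≡ 0 (mod 7) but P(10) ≢ 0 (mod 11), so m = 10 exposes the factor 7.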
What properties might such a square chain have? 1 We say a polynomial P crumbles over the unique factorization domain D if P may be written as the product of (not necessarily distinct) linear polynomials in D[x]. Unless otherwise indicated by context, we will assume D = Z. Clearly x 2 k = (· · · (x 2 − 0) 2 · · · − 0) 2 − 0 crumbles over any UFD. Unfortunately, it has only 1 distinct root. More generally, if we have any crumbling square chain P , then P (x) 2 − 0 is a crumbling square chain longer than P . But extending a chain in this manner doesn't create any new roots. Conversely, any square chain whose final coefficient is 0 has exactly the same root set as a square chain of shorter length, and we can instead consider the shorter square chain. To this end, call a square chain whose final coefficient is nonzero a fundamental square chain. While it may be a big ask, suppose we are able to find some fundamental crumbling square chain P . Then it is guaranteed to have plenty of distinct roots. Proposition 1.1. Let D be a unique factorization domain with 1 D + 1 D = 0 D . Suppose P (x) = (· · · (x 2 − c 1 ) 2 − · · · ) 2 − c k ∈ D[x] crumbles over D. If c k = 0 D , then P has at least 2 k−1 + 1 distinct roots. Proof. Let T k = {c k }, and call this the k-th tail square set of P . Note that c k must be a perfect square; indeed, for any root r of P , we have c k = (· · · (r 2 − c 1 ) 2 · · · − c k−1 ) 2 . So we may use the identity a 2 − b 2 = (a + b)(a − b), to split P (x) into two square chains: P (x) = ((· · · (x 2 − c 1 ) 2 − · · · ) 2 − c k−1 − √ c k )((· · · (x 2 − c 1 ) 2 − · · · ) 2 − c k−1 + √ c k ) Since P crumbles and D[x] is a unique factorization domain, each of these factors must crumble. Let T k−1 = {c k−1 ± t : t 2 ∈ T k } be the tail squares of the two factors. Exactly as was the case with T k , elements of T k−1 must be perfect squares. 
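The repeated splitting step just introduced can be made concrete. The sketch below is our own illustration (not code from the paper): it carries the splitting all the way down, with the convention c_0 = 0 at the last step, and runs it on a hand-built length-2 chain ((x^2 − 17)^2 − 64) = (x − 5)(x + 5)(x − 3)(x + 3) that crumbles over Z.

```python
from math import isqrt

def tail_square_sets(cs):
    """Tail square sets T_k, ..., T_0 for a square chain with coefficients cs
    that crumbles over Z.  Every element of T_{j+1} must be a perfect square;
    T_0 is then the root set of the chain.  Returns [T_0, T_1, ..., T_k]."""
    k = len(cs)
    T = {k: {cs[-1]}}
    coeffs = [0] + list(cs)              # convention c_0 = 0 for the last split
    for j in range(k - 1, -1, -1):
        Tj = set()
        for sq in T[j + 1]:
            r = isqrt(sq)
            assert r * r == sq, "element of a tail square set is not a square"
            Tj.update({coeffs[j] + r, coeffs[j] - r})
        T[j] = Tj
    return [T[j] for j in range(k + 1)]

# For cs = [17, 64]: T_2 = {64}, T_1 = {25, 9}, T_0 = {5, -5, 3, -3},
# i.e. four distinct roots, consistent with the bound 2^(k-1) + 1 = 3
# of Proposition 1.1.
```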
We then repeat this splitting with each factor, splitting P into 4 factors with tail square set T k−2 = {c k−2 ± t : t 2 ∈ T k−1 }, then 8 factors with tail square set T k−3 = {c k−3 ± t : t 2 ∈ T k−2 }, and so forth onto 2 k−1 factors of the form (x 2 − a) with a ∈ T 1 = {c 1 ± t : t 2 ∈ T 2 }. If we make the convention that c 0 = 0, we may split the previous set of factors one more time to get 2 k factors of the form (x − r) with r in the set T 0 = {0 ± t : t 2 ∈ T 1 }. That is, T 0 is the set of roots of P . Suppose j ∈ {0, 1, . . . , k − 1}. What can we say about the size of T j ? Distinct elements of T j+1 yield distinct elements of T j : if {c j ± t} ∩ {c j ± s} = ∅ then t = ±s, and so t 2 = s 2 . Each nonzero element of T j+1 yields two distinct elements of T j : if c j + t = c j − t, then 2t = 0, and so t = 0 since D is an integral domain and 2 = 0. As such, |T j | = 2(|T j+1 | − 1) + 1 if 0 ∈ T j+1 2|T j+1 | if 0 ∈ T j+1 ≥ 2(|T j+1 | − 1) + 1 Consequently, |T 0 | ≥ 2(|T 1 | − 1) + 1 ≥ 2 2 (|T 2 | − 1) + 1 ≥ · · · ≥ 2 k−1 (|T k−1 | − 1) + 1 To complete the argument, we note that |T k−1 | = 2|T k | = 2 since 0 ∈ T k by hypothesis. As an aside, the bound of proposition 1.1 is met exactly when D = Z q for a prime q satisfying q ≡ 1 (mod 2 k−1 ), and when the square chain P of length k is given by P (x) = (· · · ((x 2 − 0) 2 · · · − 0) 2 − 2 −1 ) 2 − 2 −2 = (x 2 k−1 − 2 −1 ) 2 − 2 −2 = (x 2 k−1 − 1)(x 2 k−1 − 0) In this case, 0 is a root of P with multiplicity 2 k−1 , and each 2 k−1 -th root of unity (mod q) is a root of P with multiplicity 1. Coefficient growth Perhaps more interesting than proposition 1.1 is a specialization of its contrapositive: for an odd characteristic finite field, fundamental crumbling square chains must be of strictly bounded length. Corollary 2.1. Suppose P (x) = (· · · (x 2 − c 1 ) 2 − · · · ) 2 − c k ∈ F [x] crumbles over the finite field F with char(F ) = 2. If |F | ≤ 2 j−1 then c j = c j+1 = · · · = c k = 0 F Proof. 
Suppose c i is the last non-zero coefficient in P , so that Q(x) = (· · · (x 2 − c 1 ) 2 − · · · ) 2 − c i is a fundamental crumbling square chain over F . By proposition 1.1, Q must have at least 2 i−1 + 1 distinct roots in F . Of course, 2 i−1 + 1 ≤ |F |, as Q can have, at most, all of F as roots. So 2 i−1 + 1 ≤ |F | ≤ 2 j−1 . Consequently, i < j, and so all coefficients of index j or larger must be zero. Our primary interest in corollary 2.1 will be in the case F = Z p for p any prime. But corollary 2.1 does not cover the case of Z 2 . Instead, we derive a similar, if weaker, result to cover Z 2 . Lemma 2.2. Suppose P (x) = (· · · (x 2 − c 1 ) 2 − · · · ) 2 − c k ∈ Z[x] crumbles over Z. Then c 2 ≡ c 3 ≡ · · · ≡ c k ≡ 0 (mod 2) Proof. Consider the tail square sets T j from proposition 1.1. Let us adopt the convention that T k+1 = {0}. This is consistent with our previous definition, as it makes T k = {c k ± t : t 2 ∈ T k+1 } = {c k }. For an arbitrary j > 1, choose an arbitrary t 2 ∈ T j . Then c j−1 ± t ∈ T j−1 by definition, and so c j−1 ± t are both squares. Thus, c j−1 ± t ≡ 0, 1, or 4 (mod 8) The limited set of congruence classes that these elements fall into lets us somewhat limit what congruence classes their difference falls into. That is, (c j−1 + t) − (c j−1 − t) ≡ 2t ≡ 0, 1, 3, 4, 5, or 7 (mod 8), This limits the congruence classes t may fall into as well: t ≡ 0, 2, 4, or 6 (mod 8). So t must be even. Since t was chosen arbitrarily, each element t 2 ∈ T j must be the square of an even number. For a given j with 1 < j < k + 1, choose any r 2 ∈ T j+1 . By the previous argument, r is even. We have (c j + r) ∈ T j and so (c j + r) is even. Consequently, c j must be even. As a consequence of corollary 2.1, coefficients in long crumbling square chains over Z must have many prime factors. This implies a lower bound on the size of those coefficients. To quantify this, define the primorial of m as: m# = p prime p ≤ m p Proposition 2.3. 
Suppose P (x) = (· · · (x 2 − c 1 ) 2 − · · · ) 2 − c k ∈ Z[x] crumbles over Z. Then for each j, we have 2 j−1 # divides c j . If c k = 0 and j ≥ 5, then ln c j > 2 j−2 . Proof. c j is even for j ≥ 2 by lemma 2.2. For each odd prime p ≤ 2 j−1 , corollary 2.1 tells us that p divides c j . Thus 2 j−1 # divides c j . Suppose c k > 0. We claim that c j > 0 for each j. Clearly c j ≥ 0. To see that c j = 0, let i be an arbitrary index 1 ≤ i < k. We may write the equation P (x) = 0 as (· · · (x 2 − c 1 ) 2 − · · · ) 2 = c i ± · · · ± √ c k Since the right-hand side of this equation must be non-negative for every choice of signs, we have c i ≥ √ c i+1 + · · · ≥ √ c i+1 . As the choice of i was arbitrary, we may apply this inequality recursively, yielding c j ≥ 2 k−j √ c k . But c k = 0; it follows that c j = 0 as well. Together with the fact that 2 j−1 # divides c j , the positivity of c j implies that 2 j−1 # ≤ c j . Rosser and Schoenfeld [1962, equation 3.14] establish: x(1 − 1 2 ln x ) ≤ ln(x#) for x ≥ 563 An exhaustive calculation (omitted) demonstrates the looser bound 1 2 x < ln(x#) for 11 ≤ x ≤ 563 Thus 1 2 2 j−1 < ln(2 j−1 #) ≤ ln c j if 11 ≤ 2 j−1 An asymptotic refinement It seems unlikely the lower bounds given in proposition 2.3 are the best possible. ln(x#) ∼ x, and so it seems likely, at the very least, that ln c j ≥ 2 j−1 for j sufficiently large. At the same time, it is plausible that for some primes p, not only must p divide c j , but possibly p i divides c j for some appropriate condition on p, i and j. Indeed, some reflection shows that primes of the form 4n + 3 must be much more prevalent than we've heretofore indicated. Let ν p (n) = max{e ∈ Z : p e n} be the exponent of p in the prime factorization of n. As noted by Dilcher [2000], at least some of the coefficients in a crumbling square chain must be expressable as half the sum of two squares. We show that every coefficient in a crumbling square chain must be so expressable. 
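As an aside, both numerical ingredients of Proposition 2.3 are easy to check by machine. The code below is our sketch written against the statement of the proposition: it reproduces the "exhaustive calculation (omitted)" that (1/2)x < ln(x#) for 11 ≤ x ≤ 563, and spells out how the bound ln c_j > 2^(j−2) translates into bit lengths.

```python
from math import isqrt, log, log10

def primes_upto(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, flag in enumerate(sieve) if flag]

PRIMES = primes_upto(563)

def log_primorial(x):
    """ln(x#), the log of the product of all primes <= x."""
    return sum(log(p) for p in PRIMES if p <= x)

# The "exhaustive calculation (omitted)": (1/2)x < ln(x#) for 11 <= x <= 563.
assert all(0.5 * x < log_primorial(x) for x in range(11, 564))

def bit_length_lower_bound_log10(j):
    """log10 of the bit-length bound: ln c_j > 2^(j-2) means c_j needs
    more than 2^(j-2)/ln 2 bits."""
    return (j - 2) * log10(2) - log10(log(2))

# For j = 400 this gives roughly 9.3e119 bits.
```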
By the sum of two squares theorem, if p ≡ 3 (mod 4), then ν p (a 2 + b 2 ) must be even. And so ν p (c j ) must be even for every coefficient c j in a crumbling square chain. What's more, by similar considerations as go into the sum of two squares theorem, we can propagate powers forward to following coefficients, so that ν p (c j+1 ) ≥ 2ν p (c j ). We collect these ideas into the following lemma. Lemma 3.1. Suppose P (x) = (· · · (x 2 − c 1 ) 2 − · · · ) 2 − c k ∈ Z[x] crumbles over Z. If p is prime with p ≡ 3 (mod 4), and p ≤ 2 j−1 then ν p (c j ) ≥ 2 j−⌈lg p⌉ . Proof. Suppose ⌈lg p⌉ = h − 1. Then p divides c h by corollary 2.1. Choose an arbitrary t 2 ∈ T h+1 . By definition, c h ± t ∈ T h . All members of T h are squares, so there exist r, s so that c h + t = r 2 and c h − t = s 2 . Then 2c h = r 2 + s 2 , that is 2c h is the sum of two squares. By the sum of two squares theorem, since p ≡ 3 (mod 4) and p divides r 2 + s 2 , then p 2 divides both r 2 and s 2 . It follows that ν p (c h ) ≥ 2 1 = 2 h−⌈lg p⌉ . To handle the more general case, we proceed inductively. Suppose ν p (c j−1 ) ≥ 2 j−1−⌈lg p⌉ . We will show that ν p (c j ) ≥ 2 j−⌈lg p⌉ . As before, choose an arbitrary t 2 ∈ T j , making c j−1 ± t ∈ T j−1 . Write c j−1 + t = r 2 and c j−1 − t = s 2 . Then 2c j−1 = r 2 + s 2 . Since p = 2, ν p (r 2 + s 2 ) = ν p (2c j−1 ) = ν p (c j−1 ) ≥ 2 j−1−⌈lg p⌉ Also, we may write r 2 + s 2 = (r + is)(r − is) as the product of two Gaussian integers. We recall two well known results: the Gaussian integers form a unique factorization domain, and p is a prime Gaussian integer since p ≡ 3 (mod 4). As such, it makes sense to extend our definition of ν p to Gaussian integers. Now, ν p (r + is) = ν p (r − is) since p is its own complex conjugate. 
Thus 2ν p (r + is) = ν p (r + is) + ν p (r − is) = ν p ((r + is)(r − is)) = ν p (r 2 + s 2 ) ≥ 2 j−1−⌈lg p⌉ So ν p (r + is) ≥ 1 2 2 j−1−⌈lg p⌉ Any power of p that divides r + is must divide both r and s, so ν p (r), ν p (s) ≥ 1 2 2 j−1−⌈lg p⌉ implying that ν p (r 2 ), ν p (s 2 ) ≥ 2 j−1−⌈lg p⌉ Since r 2 and s 2 share a common power of p, their difference r 2 − s 2 = 2t must share the same common power. That is, ν p (2t) = ν p (r 2 − s 2 ) ≥ 2 j−1−⌈lg p⌉ Since p = 2, this implies that ν p (t) ≥ 2 j−1−⌈lg p⌉ and so ν p (t 2 ) ≥ 2 j−⌈lg p⌉ As t 2 was chosen arbitrarily, this inequality holds for any t 2 ∈ T j . Choose any u 2 ∈ T j+1 . We have c j ± u ∈ T j by definition. So ν p (c j + u), ν p (c j − u) ≥ 2 j−⌈lg p⌉ Since c j +u and c j −u share a common power of p, their sum, (c j +u)+(c j −u) = 2c j must share the same common power. That is, ν p (2c j ) = ν p ((c j + u) + (c j − u)). ≥ 2 j−⌈lg p⌉ And since p = 2, ν p (c j ) ≥ 2 j−⌈lg p⌉ By proposition 2.3, later coefficients "pick up" many primes as divisors. By lemma 3.1, once a coefficient acquires a divisor p ≡ 3 (mod 4), each later coefficient is divisible many times by the same prime. Define x# 3:4 = p prime p ≡ 3 (mod 4) p ≤ x p Together, proposition 2.3 and lemma 3.1 imply: Proposition 3.2. Suppose P (x) = (· · · (x 2 − c 1 ) 2 − · · · ) 2 − c k ∈ Z[x] crumbles over Z. Let D j = 2 j−1 # · j−1 i=0 (2 j−1−i # 3:4 ) 2 i For each j, D j divides c j . If c k = 0, then for some absolute constant λ > 0 and each j ≥ 3, ln c j > λj2 j Proof. For primes p ≤ 2 j−1 : If p ≡ 3 (mod 4) then ν p (D j ) = 1. If p ≡ 3 (mod 4) then ν p (D j ) = 1 + j−1−⌈lg p⌉ i=0 2 i = 2 j−⌈lg p⌉ Thus by corollary 2.1 and lemma 3.1, D j divides c j . Corollaries of the Siegel-Walfisz theorem give ln(x# 3:4 ) ∼ 1 2 x, c.f. Montgomery and Vaughan [2006, corollaries 11.15, 11.20]. So there must be some constant λ 3:4 > 0 satisfying λ 3:4 x < ln(x# 3:4 ) for all x ≥ 3. 
Similarly, by the prime number theorem, ln(x#) ∼ x, and thus there is a constant λ 1 so that λ 1 x < ln(x#) for all x ≥ 2. Let 4λ = min(λ 1 , λ 3:4 ). Then for j ≥ 3 ln D j = ln(2 j−1 #) + j−1 i=0 2 i · ln(2 j−1−i # 3:4 ) = ln(2 j−1 #) + j−3 i=0 2 i · ln(2 j−1−i # 3:4 ) > 4λ2 j−1 + 4λ j−3 i=0 2 i · 2 j−1−i = 4λ2 j−1 (1 + j − 2) = 2λ(j − 2)2 j ≥ λj2 j If c k = 0, then c j > 0 as argued in proposition 2.3. Since D j divides c j , positivity of c j forces D j ≤ c j . And so λj2 j < ln D j ≤ ln c j . We can sharpen the closed form estimate slightly: Corollary 3.3. Suppose P (x) = (· · · (x 2 − c 1 ) 2 − · · · ) 2 − c k ∈ Z[x] crumbles over Z, k ≥ 3, and c k > 0. Then there exists an absolute constant λ > 0 so that for all j, ln c j > λk2 j Proof. We observed in the proof of proposition 2.3 that c j ≥ 2 k−j √ c k so, ln c j ≥ ln c k 2 k−j > λk2 k 2 k−j = λk2 j It is interesting to note that this statement is enough to show the length of the chain influences the size of c 1 , as it shows that ln c 1 ≥ 2λk. Discussion and related work To factor a product of two 500-bit primes using the algorithm described in section 1, we would need to start by constructing a fundamental crumbling square chain of length not much smaller than 500. According to proposition 2.3, the coefficient c 400 of such a chain would be at least lg e 2 398 ≈ 9.3 × 10 119 bits in length. By way of comparison, estimates place the total digital storage capacity of the world at approximately 10 22 bits as of the year 2019. Even if we knew how to construct such a chain, precalculating the coefficients of such a chain would clearly be infeasible. Finding fundamental crumbling square chains has proven difficult. While Dilcher [2000] provides a characterization of length 3 fundamental crumbling square chains, and Bremner [2008] describes two infinite families of length 4 fundamental crumbling square chains, no fundamental crumbling square chains of length 5 are known. Indeed, Borchert et al. 
[2013] point out that a crumbling square chain of length 5 with distinct roots would advance understanding of a historied question known as the Prouhet-Tarry-Escott problem. Borchert et al. [2013] discuss a more general family of polynomials, which they term gems. By construction, their gems are polynomials which are efficiently computable, crumble over Z, and have distinct roots. While the highest known degree of a square chain that crumbles over Z is 16, the authors of that article describe gems of degrees up to 55.

References

Bernd Borchert, Pierre McKenzie, and Klaus Reinhardt. Few product gates but many zeroes. Chicago J. Theor. Comput. Sci., 2013.
Andrew Bremner. When can (((x^2 − p)^2 − q)^2 − r)^2 − s^2 split into linear factors? Experimental Mathematics, 17(4):385-390, 2008. ISSN 1058-6458.
Richard Crandall and Carl Pomerance. Prime numbers. A computational perspective. Springer-Verlag, New York, second edition, 2005. ISBN 0-387-25282-7.
Karl Dilcher. Nested squares and evaluations of integer products. Experimental Mathematics, 9(3):369-372, 2000.
Richard J. Lipton. Straight-line complexity and integer factorization. In ANTS, volume 877 of Lecture Notes in Computer Science, pages 71-79. Springer, 1994.
Hugh L. Montgomery and Robert C. Vaughan. Multiplicative Number Theory I: Classical Theory. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2006. doi: 10.1017/CBO9780511618314.
Barkley Rosser and Lowell Schoenfeld. Approximate formulas for some functions of prime numbers. Illinois J. Math., 6(1):64-94, 1962. URL https://projecteuclid.org:443/euclid.ijm/1255631807.
The QCD pomeron in e+e− collisions
J. Kwieciński, Department of Theoretical Physics, H. Niewodniczański Institute of Nuclear Physics, Cracow, Poland
L. Motyka, Institute of Physics, Jagellonian University, Cracow, Poland
27 Apr 1999. arXiv:hep-ph/9904464v1. Presented by L. Motyka at the Cracow Epiphany Conference on Electron-Positron Colliders, Cracow 5-10 Jan. 1999.

The contribution of the QCD pomeron to the processes e+e− → e+e− J/ψJ/ψ and e+e− → e+e− hadrons (with tagged electrons) is discussed. We focus on reactions which occur via photon-photon collisions, with virtual photons coming from the Weizsäcker-Williams spectrum of the electrons. We stress the importance of the non-leading corrections to the BFKL equation and take into account dominant non-leading effects which come from the requirement that the virtuality of the exchanged gluons along the gluon ladder is controlled by their transverse momentum squared. The γ*γ* cross-sections are found to increase with increasing γ*γ* CM energy W as (W^2)^{λ_P}, while the cross-section for γγ → J/ψJ/ψ is found to increase as (W^2)^{2λ_P}. The parameter λ_P varies slowly with the energy W and takes values λ_P ∼ 0.23-0.35 depending on the process. We also analyze the contribution of the soft pomeron to the total γ*γ* cross-section. We compare the results of our calculations to the recent data from LEP.

Introduction

Two-photon reactions are an important part of the physics being studied in current e+e− experiments at LEP1 and LEP2, and they will also be intensively analyzed at future e+e− colliders. The available photon-photon energy and photon virtualities continuously increase with the increasing energy of the e+e− pair. Therefore the data from LEP1 and LEP2, and the expected results from TESLA and the NLC, provide us with an excellent opportunity to study virtual photon scattering in the diffractive regime.
Moreover, with proper experimental cuts, it is possible to study observables dominated by perturbative QCD contributions. The theoretical description of such processes is based on expectations concerning the high energy limit of perturbative QCD, which is at present theoretically fairly well understood [1,2]. The leading high energy behaviour is controlled by the pomeron singularity, which corresponds to the sum of ladder diagrams with reggeized gluons along the chain. This sum is described by the Balitzkij, Fadin, Kuraev, Lipatov (BFKL) equation [3]. The perturbative QCD pomeron exchange effects can be observed only under specific conditions, and even then not in an unambiguous form. In order to minimize the contribution of the other mechanisms competing with the QCD pomeron, and to guarantee the validity of calculations based on perturbative QCD, one has to choose carefully the processes to analyze. The virtualities of the gluons along the ladder should be large enough to assure the applicability of the perturbative expansion. The necessary hard scale may be provided either by the coupling of the ladder to scattering particles that contain a hard scale themselves, or by a large momentum transfer carried by the gluons. Moreover, to distinguish genuine BFKL from DGLAP evolution effects, it is convenient to focus on processes in which the scales on both ends of the ladder are of comparable size. Finally, one requires that the non-perturbative effects should factor out in order to minimize the theoretical uncertainties. The two classical processes which can probe the QCD pomeron in ep and in γ*p collisions are deep inelastic events accompanied by an energetic (forward) jet [4,5] and the production of large p_T jets separated by a rapidity gap [6]. The former process probes the QCD pomeron in the forward direction, while the latter reflects the elastic scattering of partons via QCD pomeron exchange with non-zero (and large) momentum transfer.
Another possible probe of the QCD pomeron at (large) momentum transfers can be provided by the diffractive vector meson photoproduction accompanied by proton dissociation in order to avoid nucleon form-factor effects [7,8]. In this talk we shall analyze two measurements in e + e − collisions, complementary to those listed above. Namely we focus on double diffractive J/ψ production in γγ collisions and on the total γ * γ * cross section. The former process is unique since in principle it allows to test the QCD pomeron for arbitrary momentum transfers [9]. The hard scale is given by the relatively large mass of the c-quark. The total γ * γ * cross-section has been studied by several authors [10,11], however our approach has the novel feature of taking into account dominant non-leading corrections to the BFKL equation. This re-analysis has become necessary when the next-to-leading corrections to the BFKL kernel were obtained [12], which alter substantially the results obtained at the leading order. It turns out that the magnitude of the next-to-leading (NLO), i.e. O(α 2 s ), contribution to the QCD pomeron intercept is very large for the values of the QCD coupling within the range which is relevant for most experiments. This means that the NLO approximation alone is not reliable and one has to perform resummation to all orders. Unfortunately the exact result of this resummation is unknown. It may however be possible to pin down certain dominant contributions of well defined physical origin and perform their exact resummation [13,14]. In our approach we shall use the so called consistency constraint which limits the available phase space for the real gluon emission by imposing the requirement that the virtuality of the exchanged gluons along the chain is dominated by their transverse momentum squared. Let us remind that the form of the LO BFKL kernel where the gluon propagators contain only the gluon transverse momentum squared etc. 
is only valid within the region of phase space restricted by this constraint. Formally however, the consistency constraint generates subleading corrections. It can be shown that at the NLO accuracy it generates about 70 % of the exact result for the QCD pomeron intercept. The very important merit of this constraint is also the fact that it automatically generates resummation of higher order contributions which stabilizes the solution [14]. 2 The total γ * γ * cross-section The collisions of virtual photons may be studied experimentally only as subprocesses of reactions between charged particles. In principle, one is able to unfold the photonic cross-section from the leptonic data, however this procedure requires additional assumptions which increase the systematic uncertainty of the result. It seems to be more sensible to formulate the predictions for the e + e − cross-sections with the properly chosen cuts and compare them directly with the e + e − data. Therefore we use the equivalent photon approximation which allows us to express the leptonic cross-section through a convolution of the photonic cross-section and the standard flux factors. Thus a) Figure 1: The QCD pomeron exchange mechanism of the processes a) γ * 2 (Q 2 2 ) 1 (Q 2 1 ) 1 2 W 2 b) J= J= t W 21 (Q 2 1 )γ * 2 (Q 2 2 ) → X and b) γγ → J/ψJ/ψ. 
the cross-section for the process e + e − → e + e − + X (averaged over the angle φ between the lepton scattering planes in the frame in which the virtual photons are aligned along the z axis) is given by the following formula [11]: Q 2 1 Q 2 2 dσ dy 1 dy 2 dQ 2 1 dQ 2 2 = α 2π 2 [P (T ) γ/e + (y 1 )P (T ) γ/e − (y 2 )σ T T γ * γ * (Q 2 1 , Q 2 2 , W 2 )+ P (T ) γ/e + (y 1 )P (L) γ/e − (y 2 )σ T L γ * γ * (Q 2 1 , Q 2 2 , W 2 ) + P (L) γ/e + (y 1 )P (T ) γ/e − (y 2 )σ LT γ * γ * (Q 2 1 , Q 2 2 , W 2 )+ P (L) γ/e + (y 1 )P (L) γ/e − (y 2 )σ LL γ * γ * (Q 2 1 , Q 2 2 , W 2 )](1) where P (T ) γ/e (y) = 1 + (1 − y) 2 y (2) P (L) γ/e (y) = 2 1 − y y where y 1 and y 2 are the longitudinal momentum fractions of the parent leptons carried by virtual photons, Q 2 i = −q 2 i (i = 1, 2) where q 1,2 denote the four momenta of the virtual photons and W 2 is the total CM energy squared of the two (virtual) photon system, i.e. W 2 = (q 1 + q 2 ) 2 . The cross-sections σ ij γ * γ * (Q 2 1 , Q 2 2 , W 2 ) are the total crosssections for the process γ * γ * → X and the indices i, j = T, L denote the polarization of the virtual photons. The functions P The ladder diagram corresponding to the perturbative contribution to the diffractive subprocess γ * 1 (Q 2 1 )γ * (Q 2 2 ) → X is shown in Fig. 1a. The cross-sections σ ij γ * γ * (Q 2 1 , Q 2 2 , W 2 ) are given by the following formulae: σ ij γ * γ * (Q 2 1 , Q 2 2 , W 2 ) = P S (Q 2 1 , Q 2 2 , W 2 )δ iT δ jT + 1 2π q k 2 max (Q 2 2 ,x) k 2 0 d 2 k πk 4 1/x ξ min (k 2 ,Q 2 2 ) dξG 0j q (k 2 , Q 2 2 , ξ)Φ i (k 2 , Q 2 1 , xξ)(4) where k 2 max (Q 2 2 , x) = −4m 2 q + Q 2 2 1 x − 1 (5) ξ min (k 2 , Q 2 ) = 1 + k 2 + 4m 2 q Q 2(6) and x = Q 2 2 2q 1 q 2(7) In Eq. (4) we sum over four quark flavours with m q → 0 for light quarks and m c = 1.5 GeV. The lower limit of integration over k 2 appearing in Eq. 
(4) is taken to be k 2 0 = 1 GeV 2 in order to subtract the contribution from the nonperturbative region from the perturbative part of the amplitude. The functions G 0i q (k 2 , Q 2 , ξ) are defined as below: [11,15] G 0T q (k 2 , Q 2 , ξ) = 2α em α s (k 2 + m 2 q )e 2 q λmax 0 dλ d 2 p ′ π δ ξ − 1 + p ′2 + m 2 q z(1 − z)Q 2 + k 2 Q 2 ×      (z 2 + (1 − z) 2 ) p D 1 − p + k D 2 2   + m 2 q 1 D 1 − 1 D 2 2    (8) G 0L q (k 2 , Q 2 , ξ) = 8α em α s (k 2 + m 2 q )e 2 q λmax 0 dλ d 2 p ′ π δ ξ − 1 + p ′2 + m 2 q z(1 − z)Q 2 + k 2 Q 2 × z 2 (1 − z) 2 1 D 1 − 1 D 2 2 (9) where z = 1 + λ 2 (10) p = p ′ + (z − 1)k(11)D 1 = p 2 + z(1 − z)Q 2 + m 2 q D 2 = (p + k) 2 + z(1 − z)Q 2 + m 2 q(12) In the formulae given above as well as throughout the rest of the text we are using the one loop approximation for the QCD coupling α s with the number of flavours N f = 4 and set Λ QCD = 0.23 GeV. The function P S (Q 2 1 , Q 2 2 , W 2 ) corresponds to the contribution from the region k 2 ≤ k 2 0 in the corresponding integrals over the gluon transverse momenta. It is assumed to be dominated by the soft pomeron contribution which is estimated from the factorisation of its couplings, i.e. (13) We assume that this term is only contributing to the transverse part. In equation (13) the cross-sections σ SP P S (Q 2 1 , Q 2 2 , W 2 ) = σ SP γ * (Q 2 1 )p (Q 2 1 , W 2 )σ SP γ * (Q 2 2 )p (Q 2 2 , W 2 ) σ SP ppγ * (Q 2 i )p (Q 2 i , W 2 ) and σ SP pp are the soft pomeron contributions to the γ * p and pp total cross sections and their parametrisation is taken from Refs. [16,17]. Their W 2 dependence is, of course, universal i.e. σ SP pp = β 2 p W 2 W 2 0 α SP (0)−1 σ SP γ * (Q 2 i )p (Q 2 i , W 2 ) = β γ * (Q 2 )β p W 2 W 2 0 α SP (0)−1(14) with W 0 = 1 GeV and α SP (0) ≈ 1.08. 
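The transverse and longitudinal flux factors P^(T) and P^(L) of Eqs. (2)-(3) above are simple enough to tabulate directly. A minimal sketch (the function names are our choice):

```python
def flux_T(y):
    """Transverse Weizsacker-Williams flux factor, Eq. (2):
    P^T(y) = (1 + (1-y)^2) / y, with y the photon's longitudinal
    momentum fraction of the parent lepton."""
    return (1.0 + (1.0 - y) ** 2) / y

def flux_L(y):
    """Longitudinal flux factor, Eq. (3): P^L(y) = 2(1-y)/y."""
    return 2.0 * (1.0 - y) / y
```

Both factors grow like 1/y at small y, reflecting the dominance of soft photons in the spectrum, and the longitudinal factor vanishes at y = 1.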
The function Φ T (k 2 , Q 2 , x g ) satisfies the Balitzkij, Fadin, Kuraev, Lipatov (BFKL) equation which, in the leading ln(1/x) approximation has the following form: Φ i (k 2 , Q 2 , x g ) = Φ 0 i (k 2 , Q 2 , x g ) + Φ S (k 2 , Q 2 , x g )δ iT + 3α s (k 2 ) π k 2 1 xg dx ′ x ′ ∞ k 2 0 dk ′2 k ′2 Φ i (k ′2 , Q 2 , x ′ ) − Φ i (k 2 , Q 2 , x ′ ) |k ′2 − k 2 | + Φ i (k 2 , Q 2 , x ′ ) √ 4k ′4 + k 4(15) In what follows we shall consider the modified BFKL equation in which we restrict the available phase-space in the real gluon emission by the consistency constraint: k ′2 ≤ k 2 x ′ x g(16) This constraint follows from the requirement that the virtuality of the exchanged gluons is dominated by their transverse momentum squared. The consistency constraint (16) introduces the non-leading ln(1/x) effects and in the next-to-leading approximation exhausts about 70% of the entire next-to-leading corrections to the QCD pomeron intercept. The modiffied BFKL equation takes the following form: Φ i (k 2 , Q 2 , x g ) = Φ 0 i (k 2 , Q 2 , x g ) + Φ S (k 2 , Q 2 , x g )δ iT + 3α s (k 2 ) π k 2 1 xg dx ′ x ′ ∞ k 2 0 dk ′2 k ′2   Φ i (k ′2 , Q 2 , x ′ )Θ k 2 x ′ xg − k ′2 − Φ i (k 2 , Q 2 , x ′ ) |k ′2 − k 2 | + Φ i (k 2 , Q 2 , x ′ ) √ 4k ′4 + k 4  (17) The inhomogeneous terms in equations (15,17) are the sum of two contributions Φ 0 i (k 2 , Q 2 , x g ) and Φ S (k 2 , Q 2 , x g )δ iT . 
The first term Φ 0 i (k 2 , Q 2 , x g ) corresponds to the diagram in which the two gluon system couples to a virtual photon through a quark box and are given by following equations: Φ 0 i (k 2 , Q 2 , x g ) = q 1 xg dzG 0 iq (k 2 , Q 2 , z)(18) wherẽ G 0 T q (k 2 , Q 2 , z) = 2α em e 2 q α s (k 2 + m 2 q ) 1 0 dλ [λ 2 + (1 − λ) 2 ][z 2 + (1 − z) 2 ]k 2 λ(1 − λ)k 2 + z(1 − z)Q 2 + m 2 q + 2m 2 q 1 z(1 − z)Q 2 + m 2 q − 1 λ(1 − λ)k 2 + z(1 − z)Q 2 + m 2 q (19) G 0 Lq (k 2 , Q 2 , z) = 16α em Q 2 k 2 e 2 q α s (k 2 + m 2 q )× 1 0 dλ [λ(1 − λ)][z 2 (1 − z) 2 ] [λ(1 − λ)k 2 + z(1 − z)Q 2 + m 2 q ][z(1 − z)Q 2 + m 2 q ](20) The second term Φ S (k 2 , Q 2 , x g )δ iT , which is assumed to contribute only to the transverse component, corresponds to the contribution to the BFKL equation from the nonperturbative soft region k ′2 < k 2 0 . Adopting the strong ordering approximation k ′2 ≪ k 2 it is given by the following formula: Φ S (k 2 , Q 2 , x g ) = 3α s (k 2 ) π 1 xg dx ′ x ′ k 2 0 0 dk ′2 k ′2 Φ T (k ′2 , Q 2 , x ′ )(21) The last integral in equation (21) can be interpreted as a gluon distribution in a virtual photon of virtuality Q 2 evaluated at the scale k 2 0 . At low values of x ′ it is assumed to be dominated by a soft pomeron contribution and can be estimated using the factorisation of the soft pomeron couplings: k 2 0 0 dk ′2 k ′2 Φ T (k ′2 , Q 2 , x ′ ) = π 2 x ′ g p (x ′ , k 2 0 ) β γ * (Q 2 ) β p(22) where g p (x ′ , k 2 0 ) is the gluon distribution in a proton at the scale k 2 0 and the couplings β γ * (Q 2 ) and β p are defined by equation (14). We adopt the parametrization of the gluon structure function taken from Ref. [15] i.e. xg(x, k 2 0 ) = 1.57(1 − x) 2.5 which is consinstent with the DIS data. In Fig. 2 we show our results for σ T T γ * γ * (Q 2 1 , Q 2 2 , W 2 ) plotted as the function of the CM energy W for three different values of Q 2 where Q 2 1 = Q 2 2 = Q 2 . We plot in this figure: (4). 
1. the pure QCD (i.e. "hard") contribution, obtained from solving the BFKL equation with the consistency constraint included (see Eq. (17)) and with the inhomogeneous term containing only the QCD impact factor defined by equations (18)-(20);

2. the "mixed" contribution, generated by the BFKL equation (17) with the soft pomeron contribution defined by equations (21) and (22) included in the inhomogeneous term;

3. the "full" contribution, which also contains the soft pomeron term (13).

(Fig. 2 caption: σ^{γ*γ*}_{TT}(Q₁², Q₂², W²) for the process γ*(Q₁²)γ*(Q₂²) → X for various choices of virtualities Q² = Q₁² = Q₂², corresponding to Eq. (4).)

For each choice of the virtuality, four curves are shown, taking into account hard effects only ("hard part"), the hard amplitude with soft pomeron contributions added in the source term of the BFKL equation ("mixed"), and the full cross-section including both soft and hard pomeron contributions ("full result"). We also show the "full result" with the low scale of α_s in the impact factors, µ² = (k² + m_q²)/4. We also show results obtained by changing the scale of the strong coupling α_s in the impact factors from k² + m_q² to (k² + m_q²)/4; the scale of α_s in the BFKL equation is the same in both cases. The components of the cross-section for which at least one of the photons is longitudinally polarized have a very similar energy dependence to σ^{γ*γ*}_{TT}(Q₁², Q₂², W²) and together amount to about 60% of the transverse-transverse contribution. We see from this figure that the effects of the soft pomeron contribution are non-negligible at low and moderately large values of Q² < 10 GeV² and for moderately large values of W < 100 GeV. The QCD pomeron, however, dominates already at Q² = 40 GeV². We also see from this figure that for low energies W < 40 GeV the phase-space effects are very important. For W > 40 GeV or so one observes that the cross-section exhibits the effective power-law behaviour σ^{γ*γ*}(W) ∼ (W²)^{λ_P}.
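An effective exponent of this kind can be extracted from the cross-section at two energies; a minimal sketch with synthetic numbers (not the computed cross-sections):

```python
import math

def effective_lambda(w1, sigma1, w2, sigma2):
    """Effective exponent lambda_P, assuming sigma(W) ~ (W^2)^lambda_P
    between the two sampled energies."""
    return math.log(sigma2 / sigma1) / math.log((w2 / w1) ** 2)

# synthetic check: a pure power law sigma = (W^2)^0.3 is recovered exactly
lam = effective_lambda(50.0, 50.0**0.6, 200.0, 200.0**0.6)
```

Applied to two points of a computed cross-section curve, this gives the local slope quoted below for each Q².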
The (effective) exponent increases weakly with increasing Q² and varies from λ_P = 0.28 for Q² = 2.5 GeV² to λ_P = 0.33 for Q² = 40 GeV². This (weak) dependence of the effective exponent λ_P on Q² is the result of the interplay between soft and hard pomeron contributions, the former becoming less important at large Q². Using formula (1) integrated over the virtualities in the range allowed by the relevant experimental cuts, we have calculated the total cross-section for the process e⁺e⁻ → e⁺e⁻ + X at LEP1 and LEP2 energies and confronted the results of our calculation with the recent experimental data obtained by the L3 collaboration at LEP [18]. The comparison of our results with the experimental data is summarised in Table 1. We show the comparison for dσ/dY, where Y = ln(W²/Q₁Q₂), with the Quark Parton Model (QPM) contribution subtracted. We see that the contamination of the cross-section by the soft pomeron is substantial. The data also favour the smaller value of the scale of α_s. In general, the results of our calculation lie below the data; however, the error bars are still quite large, so the discrepancy is not very pronounced. Let us also mention that the cuts applied to obtain the data shown in Table 1 admit rather low γγ energies, i.e. below 10 GeV [18], which is probably not sufficient to justify the validity of the high-energy limit in QCD.

Exclusive J/ψ production

The experimental aspects of the measurement of double exclusive J/ψ production are different from those for virtual photon scattering. Namely, since the c quark provides the energy scale, we may perturbatively describe the cross-section for the process of exclusive J/ψ production in which almost real photons take part. This is an important feature because the photon flux in the electron is dominated by low virtualities. On the other hand, one may measure the produced J/ψ's through their decay products, with no need of tagging the electrons.
Thus, it is prefered to focus on events with anti-tagged leptons. The cross-section for the process e + e − → e + e − +Y for anti-tagged e ± corresponds to the production of the hadronic state Y in γγ collision and is given by the following convolution integral: [19] σ e + e − →e + e − +Y = 1 0 dy 1 1 0 dy 2 Θ(W 2 − W 2 Y 0 )σ γγ→Y (W 2 )f γ/e (y 1 )f γ/e (y 2 ).(23) where the γγ system invariant mass squared W 2 is related to the lepton CM energy squared s by the simple formula: W 2 = y 1 y 2 s. The flux factor takes the form: f γ/e (y) = α em 2π 1 + (1 − y) 2 y ln Q 2 max Q 2 min − 2m 2 e y 1 Q 2 min − 1 Q 2 max .(24) and Q 2 min = m 2 e y 2 (1 − y)(25)Q 2 max = (1 − y)E 2 beam θ 2 max .(26) The lower limit follows from the kinematics of photon emission from a lepton whereas the upper one arises from the upper limit θ max for the lepton scattering angle. The minimal invariant mass squared of the hadronic system W 2 Y 0 , the angle θ max and the beam energy E beam depend on the process and experimental conditions. For diffractive J/ψ production we shall choose θ max = 30 mrad in accordance with LEP conditions and W Y 0 = 15 GeV. The formalism that we shall employ to evaluate the cross-section of the sub-process γγ → J/ψJ/ψ is very similar to this used in the previous section. However some modification are neccessary in order to adopt to specific features of the process. First of all we have to go beyond the forward configuration of the pomeron by the use of the BFKL equation with non-zero momentum transver. Besides that, we introduce a parameter s 0 in the propagators of exchanged gluons instead of the infra-red cut-off k 2 0 applied in the previous case. This parameter can be viewed upon as the effective representation of the inverse of the colour confinement radius squared. Sensitivity of the cross-section to its magnitude can serve as an estimate of the sensitivity of the results to the contribution coming from the infrared region. 
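The equivalent-photon flux of Eqs. (24)-(26) is straightforward to evaluate; a sketch (the values of E_beam and θ_max below are illustrative LEP-like choices, not a prescription from the text):

```python
import math

M_E = 0.000511  # electron mass [GeV]

def photon_flux(y, e_beam=87.5, theta_max=0.03, alpha_em=1.0 / 137.036):
    """Weizsaecker-Williams flux f_{gamma/e}(y) of Eq. (24),
    with the Q^2 limits of Eqs. (25)-(26)."""
    q2_min = M_E**2 * y**2 / (1.0 - y)
    q2_max = (1.0 - y) * e_beam**2 * theta_max**2
    return (alpha_em / (2.0 * math.pi)) * (
        (1.0 + (1.0 - y) ** 2) / y * math.log(q2_max / q2_min)
        - 2.0 * M_E**2 * y * (1.0 / q2_min - 1.0 / q2_max)
    )
```

As expected, the flux is strongly peaked at small photon energy fractions y, which is why almost real photons dominate the convolution (23).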
It should be noted that formula (27) gives a finite result in the limit s₀ = 0. While analyzing this process we use the asymptotic (high-energy) form of the amplitude, neglecting the phase-space effects. The imaginary part Im A(W², t = −Q_P²) of the amplitude for the considered process, which corresponds to the diagram in Fig. 1b, can be written in the following form:

\[
\mathrm{Im}\,A(W^2,t=-Q_P^2) = \int \frac{d^2k}{\pi}\,
\frac{\Phi_0(k^2,Q_P^2)\,\Phi(x,k,Q_P)}{[(k+Q_P/2)^2+s_0]\,[(k-Q_P/2)^2+s_0]} \quad (27)
\]

In this equation x = m²_{J/ψ}/W², where W denotes the total CM energy of the γγ system, m_{J/ψ} is the mass of the J/ψ meson, Q_P/2 ± k denote the transverse momenta of the exchanged gluons and Q_P is the transverse part of the momentum transfer. The impact factor Φ₀(k², Q_P²) describes the γ → J/ψ transition induced by two gluons, and the diagrams defining this factor are illustrated in Fig. 3. In the nonrelativistic approximation they give the following formula for Φ₀(k², Q_P²) [7, 20]:

\[
\Phi_0(k^2,Q_P^2) = \frac{C}{2}\,\sqrt{\alpha_{em}}\;\alpha_s(\mu^2)
\left[ \frac{1}{q^2} - \frac{1}{m^2_{J/\psi}/4 + k^2} \right] \quad (28)
\]

where

\[
C = q_c\, \frac{8}{3}\, \pi\, m_{J/\psi}\, f_{J/\psi} \quad (29)
\]

with q_c = 2/3 denoting the charge of the charm quark, and

\[
q^2 = \frac{m^2_{J/\psi} + Q_P^2}{4} \quad (30)
\]

\[
f^2_{J/\psi} = \frac{3\, m_{J/\psi}\, \Gamma_{J/\psi\to l^+l^-}}{2\pi\, \alpha^2_{em}} \quad (31)
\]

where Γ_{J/ψ→l⁺l⁻} is the leptonic width of the J/ψ meson. In our calculations we set f_{J/ψ} = 0.38 GeV. The function Φ(x, k, Q_P) satisfies the non-forward BFKL equation, which in the leading ln(1/x) approximation has the following form:

\[
\Phi(x,k,Q_P) = \Phi_0(k^2,Q_P^2) + \frac{3\alpha_s(\mu^2)}{2\pi^2}
\int_x^1 \frac{dx'}{x'} \int \frac{d^2k'}{(k'-k)^2+s_0} \times
\left\{ \left[ \frac{k_1^2}{k_1'^2+s_0} + \frac{k_2^2}{k_2'^2+s_0}
- Q_P^2\, \frac{(k'-k)^2+s_0}{(k_1'^2+s_0)(k_2'^2+s_0)} \right] \Phi(x',k',Q_P)
- \left[ \frac{k_1^2}{k_1'^2+(k'-k)^2+2s_0} + \frac{k_2^2}{k_2'^2+(k'-k)^2+2s_0} \right] \Phi(x',k,Q_P) \right\} \quad (32)
\]

where

\[
k_{1,2} = \frac{Q_P}{2} \pm k, \qquad k'_{1,2} = \frac{Q_P}{2} \pm k' \quad (33)
\]

denote the transverse momenta of the gluons.
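As a numerical cross-check of Eq. (31), the value f_{J/ψ} = 0.38 GeV used in the calculations follows directly from the measured leptonic width. A sketch, where the mass and width inputs are PDG-like round numbers assumed here rather than values quoted in the text:

```python
import math

ALPHA_EM = 1.0 / 137.036
M_JPSI = 3.097       # GeV, J/psi mass (assumed input)
GAMMA_LL = 5.5e-6    # GeV, leptonic width Gamma(J/psi -> l+ l-) (assumed input)

# Eq. (31): f_{J/psi}^2 = 3 m_{J/psi} Gamma_{ll} / (2 pi alpha_em^2)
f_jpsi = math.sqrt(3.0 * M_JPSI * GAMMA_LL / (2.0 * math.pi * ALPHA_EM**2))
```

With these inputs f_jpsi comes out close to 0.39 GeV, reproducing the quoted 0.38 GeV to within a few per cent.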
The scale of the QCD coupling α_s which appears in equations (28) and (32) will be set to µ² = k² + Q_P²/4 + m_c², where m_c denotes the mass of the charm quark. The differential cross-section is related to the amplitude A in the following way:

\[
\frac{d\sigma}{dt} = \frac{1}{16\pi}\, |A(W^2,t)|^2 \quad (34)
\]

The generalization of the consistency constraint (16) to the non-forward configuration with Q_P² ≥ 0 takes the following form:

\[
k'^2 \le \left(k^2 + Q_P^2/4\right) \frac{x'}{x} \quad (35)
\]

Besides the BFKL equation (32) in the leading logarithmic approximation, we shall also consider the equation which embodies the constraint (35), in order to estimate the effect of the non-leading contributions. The corresponding equation, which contains the constraint (35) in the real-emission term, reads:

\[
\Phi(x,k,Q_P) = \Phi_0(k^2,Q_P^2) + \frac{3\alpha_s(\mu^2)}{2\pi^2}
\int_x^1 \frac{dx'}{x'} \int \frac{d^2k'}{(k'-k)^2+s_0} \times
\left\{ \left[ \frac{k_1^2}{k_1'^2+s_0} + \frac{k_2^2}{k_2'^2+s_0}
- Q_P^2\, \frac{(k'-k)^2+s_0}{(k_1'^2+s_0)(k_2'^2+s_0)} \right]
\Theta\!\left( (k^2+Q_P^2/4)\,x'/x - k'^2 \right) \Phi(x',k',Q_P)
- \left[ \frac{k_1^2}{k_1'^2+(k'-k)^2+2s_0} + \frac{k_2^2}{k_2'^2+(k'-k)^2+2s_0} \right] \Phi(x',k,Q_P) \right\} \quad (36)
\]

We solved equations (32) and (36) numerically, setting m_c = m_{J/ψ}/2. A brief summary of the numerical method and of the approximations adopted in solving equations (32) and (36) has been given in Ref. [9]. Let us recall that we used a running coupling with the scale µ² = k² + Q_P²/4 + m_c². The parameter s₀ was varied within the range 0.04 GeV² < s₀ < 0.16 GeV². It should be noted that the solutions of equations (32) and (36) and the amplitude (27) are finite in the limit s₀ = 0. This follows from the fact that both the impact factor Φ₀(k², Q_P²) and the function Φ(x, k, Q_P) vanish for k = ±Q_P/2 (see equations (28), (32) and (36)). The results with finite s₀ are, however, more realistic. In Fig. 4 we show the cross-section for the process γγ → J/ψJ/ψ plotted as a function of the total CM energy W.
We show results based on the BFKL equation in the leading logarithmic approximation, as well as those which include the dominant non-leading effects. The calculations were performed for two values of the parameter s₀, i.e. s₀ = 0.04 GeV² and s₀ = 0.16 GeV². In Fig. 5 we show the t-dependence of the cross-section calculated for s₀ = 0.10 GeV². We show in this figure results for two values of the CM energy W (W = 50 GeV and W = 125 GeV) obtained from the solution of the BFKL equation with the non-leading effects taken into account (see Eq. (36)), and confront them with the Born term, which corresponds to elementary two-gluon exchange. The latter is of course independent of the energy W. The values of the energy W were chosen to be in the region which may be accessible at LEP2. Let us discuss the crucial features of the obtained results:

1. Non-leading corrections. We see from Fig. 4 that the effect of the non-leading contributions is very important: they significantly reduce the magnitude of the cross-section and slow down its increase with increasing CM energy W.

2. Energy dependence. The cross-section exhibits an approximate (W²)^{2λ_P} dependence. The parameter λ_P, which varies slowly with the energy W, takes the values λ_P ∼ 0.23 − 0.28 within the energy range 20 GeV < W < 500 GeV relevant for LEP2 and for possible TESLA measurements. These results correspond to the solution of the BFKL equation (36), which contains the non-leading effects generated by the constraint (35). The (predicted) energy dependence of the cross-section ((W²)^{2λ_P}, λ_P ∼ 0.23 − 0.28) is marginally steeper than that observed in J/ψ photo-production [21]. It should, however, be remembered that the non-leading effects which we have taken into account, although the dominant ones, still do not exhaust all next-to-leading QCD corrections to the BFKL kernel [12].
The remaining contributions are expected to reduce the parameter λ_P, but their effect may be expected to be less important than that generated by the constraint (35). The cross-section calculated from the BFKL equation in the leading logarithmic approximation gives a much stronger energy dependence (see Fig. 4).

3. The value of the cross-section. The enhancement of the cross-section is still appreciable after including the dominant non-leading contribution which follows from the constraint (35). Thus, while in the Born approximation (i.e. for elementary two-gluon exchange, which gives an energy-independent cross-section) we get σ_tot ∼ 1.9 − 2.6 pb, the cross-section calculated from the solution of the BFKL equation with the non-leading effects taken into account can reach 4 pb at W = 20 GeV and 26 pb at W = 100 GeV, i.e. for energies accessible at LEP2.

4. Infrared sensitivity. The magnitude of the cross-section decreases with increasing magnitude of the parameter s₀, which controls the contribution coming from the infrared region. This effect is, however, much weaker than that generated by the constraint (35), which gives the dominant non-leading contribution. The energy dependence of the cross-section is practically unaffected by the parameter s₀.

5. The t-dependence. The plots shown in Fig. 5 demonstrate that the BFKL effects significantly affect the t-dependence of the differential cross-section, leading to a steeper t-dependence than that generated by the Born term. A possible energy dependence of the diffractive slope is found to be very weak (see Fig. 5). A similar result was also found for the BFKL equation in the leading logarithmic approximation [8].

In our calculations we have assumed dominance of the imaginary part of the production amplitude.
The effect of the real part can be taken into account by multiplying the cross-section by the correction factor 1 + tan²(πλ_P/2), which for λ_P ∼ 0.25 introduces an additional enhancement of about 20%. The photonic cross-sections that we obtained in this section are rather low in terms of the expected number of events, at least for the LEP2 luminosity. Therefore we consider the most inclusive observable relevant for double J/ψ production in e⁺e⁻ collisions, which is the total cross-section σ_tot(e⁺e⁻ → e⁺e⁻J/ψJ/ψ). In fact, it is convenient to additionally impose the anti-tagging condition. Taking θ_max = 30 mrad, we get for σ_tot(e⁺e⁻ → e⁺e⁻J/ψJ/ψ) values of about 0.14 pb at √s = 175 GeV and 0.74 pb at √s = 500 GeV (i.e. for typical energies at LEP2 and TESLA, respectively). Therefore, assuming the LEP2 luminosity to be about 500 pb⁻¹, we predict about 70 events, which is far below the previous expectations [19]. Besides, if one measures both J/ψ's through the leptonic decay channels, the rate should be divided by a factor of about 20, which cuts down the statistics to only a few events.

Discussion and summary

From the theoretical point of view, there exist excellent opportunities to study the exchange of the QCD pomeron at e⁺e⁻ colliders. The two gold-plated measurements for this purpose are exclusive J/ψ production and the total γ*γ* cross-section. Both these processes allow one to reduce substantially the contribution of unknown, nonperturbative elements. However, the leptonic cross-sections in both cases are well below 1 pb under LEP2 conditions, which makes the measurement there rather difficult. Nevertheless, this problem does not appear at future e⁺e⁻ linear colliders, for which the luminosity is expected to be much larger than at LEP and, moreover, the cross-section for diffractive processes is enhanced due to the photon flux and the pomeron effects.
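The real-part correction factor and the event-rate estimates quoted above amount to simple arithmetic; a sketch using the round numbers from the text (luminosity and the 1/20 leptonic-branching suppression are the stated assumptions):

```python
import math

# real-part correction factor 1 + tan^2(pi * lambda_P / 2) for lambda_P ~ 0.25
lambda_p = 0.25
real_part_factor = 1.0 + math.tan(math.pi * lambda_p / 2.0) ** 2

# expected double-J/psi events at LEP2: sigma ~ 0.14 pb, L ~ 500 pb^-1,
# with a ~1/20 suppression if both J/psi are seen in leptonic decays
n_events = 0.14 * 500.0       # ~70 events
n_leptonic = n_events / 20.0  # only a few events
```

The factor evaluates to about 1.17, i.e. the roughly 20% enhancement mentioned above, and the rate estimate reproduces the quoted ~70 events.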
The large expected statistics enables one to reach the region of large photon virtualities (for double-tagged events), where the perturbative calculations are more reliable. The important point that should be stressed once more is the existence of large non-leading corrections to the BFKL equation, which dramatically influence the theoretical estimate of the pomeron intercept, i.e. the behaviour of the cross-sections as functions of the energy. The recently calculated magnitude of the next-to-leading contribution to the intercept (for any relevant value of the strong coupling constant) is comparable to, or even greater than, the leading term. This implies a very poor convergence of the perturbative series. Thus one is forced to rely on a resummation scheme. We adopt the so-called consistency constraint, which is based on the requirement that the virtualities of the gluons exchanged along the ladder are dominated by their transverse momenta squared. This constraint introduces at next-to-leading order a correction to the pomeron intercept which exhausts about 70% of the exact QCD result. The main advantage of this approach is that there is a good physical motivation behind it. Moreover, it also offers an approximate resummation scheme for the perturbative expansion of the intercept. Employing this scheme, we found a significant reduction of the predicted value of the intercept in comparison to the leading value. We find that the calculated γ*γ* total cross-section exhibits an approximate power-law dependence (W²)^{λ_P} with 0.28 < λ_P < 0.35. It is also found that the cross-section for γγ → J/ψJ/ψ increases with increasing energy W as (W²)^{2λ_P}, with λ_P varying from 0.23 to 0.28. This has important consequences for the phenomenology, since the enhancement of the cross-section, although still quite appreciable, is much smaller than that which follows from estimates based on the leading logarithmic approximation [19].
The results of our calculation are in fair agreement with the existing data for the γ*γ* cross-section from LEP, although the theoretical calculations have a tendency to underestimate the experimental results. They are also much more realistic than the predictions following from the leading-order BFKL equation, which are an order of magnitude larger. The encouraging element is that even these very first data, with rather low statistics, are enough to show clearly the importance of the non-leading corrections. We may therefore expect that when the excellent data from linear colliders become available, we will have a very good opportunity to test our models and to understand more deeply the physics of the QCD pomeron.

(Figure caption fragment: "…(y) are the transverse and longitudinal photon flux factors.")

Figure 2: Energy dependence of the cross-section.

Figure 3: The diagrams describing the coupling of two gluons to the γ → J/ψ transition vertex.

Figure 4: Energy dependence of the cross-section for the process γγ → J/ψJ/ψ. The two lower curves correspond to the calculations based on equation (36), which contains the non-leading effects coming from the constraint (35): the continuous line corresponds to s₀ = 0.04 GeV² and the dashed line to s₀ = 0.16 GeV². The two upper curves correspond to equation (32), i.e. to the BFKL equation in the leading logarithmic approximation: the dashed-dotted line corresponds to s₀ = 0.04 GeV² and the short-dashed line to s₀ = 0.16 GeV².

Figure 5: The differential cross-section of the process γγ → J/ψJ/ψ corresponding to the solution of equation (36), which contains the non-leading effects coming from the consistency (kinematical) constraint (35), shown for two values of the CM energy W: W = 50 GeV (continuous line) and W = 125 GeV (dashed line). The short-dashed line corresponds to the Born term, i.e. to the elementary two-gluon exchange mechanism, which gives an energy-independent cross-section. The parameter s₀ was set equal to 0.10 GeV².

Table 1: Comparison of the theoretical results to the L3 data for e⁺e⁻ → e⁺e⁻X with E_tag > 30 GeV, 30 mrad < θ_tag < 66 mrad. The table lists dσ/dY, binned in Y, obtained from experiment (with the QPM contribution subtracted) and the results of our calculation taking into account the perturbative pomeron only (Hard) and both perturbative and soft pomerons (Hard + DL), for two different choices of the scale of α_s in the impact factors and for e⁺e⁻ CM energies of 91 GeV and 183 GeV.

  dσ/dY [fb]                       Theory (BFKL + DL)
                               α_s[(k²+m_q²)/4]        α_s(k²+m_q²)
  ∆Y     Data − QPM            Hard    Hard + DL       Hard    Hard + DL
  91 GeV
  2-3    480 ± 140 ± 110       76      206             34      163
  3-4    240 ± 60 ± 50         114     237             53      173
  4-6    110 ± 30 ± 10         60      109             29      74
  183 GeV
  2-3    180 ± 120 ± 50        51      68              25      42
  3-4    160 ± 50 ± 30         70      86              34      49
  4-6    120 ± 40 ± 20         70      85              35      47

Acknowledgments

We are grateful to the Organizers for the interesting and stimulating Conference. We thank Albert De Roeck for his interest in this work and useful discussions. This research was partially supported by the Polish State Committee for Scientific Research (KBN) grants 2 P03B 184 10, 2 P03B 89 13, 2 P03B 084 14 and by the EU Fourth Framework Programme 'Training and Mobility of Researchers', Network 'Quantum Chromodynamics and the Deep Structure of Elementary Particles', contract FMRX-CT98-0194.

References

- L.N. Gribov, E.M. Levin and M.G. Ryskin, Phys. Rep. 100 (1983) 1.
- L.N. Lipatov, Phys. Rep. 286 (1997) 131.
- E.A. Kuraev, L.N. Lipatov and V.S. Fadin, Zh. Eksp. Teor. Fiz. 72 (1977) 373 (Sov. Phys. JETP 45 (1977) 199); Ya.Ya. Balitzkij and L.N. Lipatov, Yad. Fiz. 28 (1978) 1597 (Sov. J. Nucl. Phys. 28 (1978) 822); J.B. Bronzan and R.L. Sugar, Phys. Rev. D17 (1978) 585;
  T. Jaroszewicz, Acta Phys. Polon. B11 (1980) 965; L.N. Lipatov, in "Perturbative QCD", edited by A.H. Mueller (World Scientific, Singapore, 1989), p. 441.
- A.H. Mueller, J. Phys. G17 (1991) 1443.
- J. Bartels, M. Loewe and A. De Roeck, Z. Phys. C54 (1992) 635; J. Kwieciński, A.D. Martin and P.J. Sutton, Phys. Rev. D46 (1992) 921; Phys. Lett. B287 (1992) 254; J. Bartels et al., Phys. Lett. B384 (1996) 300; J. Bartels, V. Del Duca and M. Wüsthoff, Z. Phys. C76 (1997) 75; E. Mroczko, in Proceedings of the 28th International Conference on High Energy Physics, Warsaw, Poland, 25-31 July 1996, edited by Z. Ajduk and A.K. Wróblewski (World Scientific).
- A.H. Mueller and W.K. Tang, Phys. Lett. B284 (1992) 123; V. Del Duca and W.K. Tang, Phys. Lett. B312 (1993) 225; V. Del Duca and C.R. Schmidt, Phys. Rev. D49 (1994) 4510.
- J.R. Forshaw and M.G. Ryskin, Z. Phys. C68 (1995) 137.
- J. Bartels, J.R. Forshaw, H. Lotter and M. Wüsthoff, Phys. Lett. B375 (1996) 301.
- J. Kwieciński and L. Motyka, Phys. Lett. B438 (1998) 203.
- J. Bartels, A. De Roeck and H. Lotter, Phys. Lett. B389 (1996) 742; J. Bartels, A. De Roeck, C. Ewerz and H. Lotter, hep-ph/9710500; A. Białas, W. Czyż and W. Florkowski, Eur. Phys. J. C2 (1998) 683; W. Florkowski, Acta Phys. Polon. B28 (1997) 2673; A. Donnachie, H.G. Dosch and M. Rueter, Phys. Rev. D59 (1999) 074011; M. Boonekamp et al., hep-ph/9812523.
- S.J. Brodsky, F. Hautmann and D.A. Soper, Phys. Rev. D56 (1997) 6957; Phys. Rev. Lett. 78 (1997) 803 (Erratum: ibid. 79 (1997) 3544).
- M. Ciafaloni and G. Camici, Phys. Lett. B386 (1996) 341; ibid. B412 (1997) 396 (Erratum: ibid. B417 (1998) 390); hep-ph/9803389; M. Ciafaloni, hep-ph/9709390; V.S. Fadin, M.I. Kotskii and R. Fiore, Phys. Lett. B359 (1995) 181; V.S. Fadin, M.I. Kotskii and L.N. Lipatov, hep-ph/9704267; V.S. Fadin, R. Fiore, A. Flachi and M. Kotskii, Phys. Lett. B422 (1998) 287; V.S. Fadin and L.N. Lipatov, hep-ph/9802290.
- D.A. Ross, Phys. Lett. B431 (1998) 161; G.P. Salam, JHEP 9807 (1998) 019, hep-ph/9806482; M. Ciafaloni and D. Colferai, hep-ph/9812366; S.J. Brodsky et al., hep-ph/9901229.
- B. Andersson, G. Gustafson, H. Kharraziha and J. Samuelsson, Z. Phys. C71 (1996) 613; J. Kwieciński, A.D. Martin and P.J. Sutton, Z. Phys. C71 (1996) 585.
- J. Kwieciński, A.D. Martin and A. Staśto, Phys. Rev. D56 (1997) 3991.
- A. Donnachie and P.V. Landshoff, Phys. Lett. B296 (1992) 227.
- A. Donnachie and P.V. Landshoff, Z. Phys. C61 (1994) 139.
- Report of the Working Group on "γγ physics", P. Aurenche and G.A. Schuler (conveners), in Proceedings of the Workshop "Physics at LEP2", edited by G. Altarelli, T. Sjöstrand and F. Zwirner, CERN yellow report 96-01.
- I.F. Ginzburg, S.L. Panfil and V.G. Serbo, Nucl. Phys. B296 (1988) 569.
- S. Aid et al. (H1 Collaboration), Nucl. Phys. B468 (1996) 3; ibid. B472 (1996); M. Derrick et al. (ZEUS Collaboration), Phys. Lett. B350 (1996) 120; J. Breitweg et al. (ZEUS Collaboration), Z. Phys. C75 (1997) 215.
Supplementary Information: Optimal strategy to certify quantum nonlocality

S. Gómez, I. Machuca, E.S. Gómez, S.P. Walborn, G. Lima — Instituto Milenio de Investigación en Óptica, and Departamento de Física, Facultad de Ciencias Físicas y Matemáticas, Universidad de Concepción, Concepción, Chile

D. Uzcátegui, D. Goyeneche — Departamento de Física, Facultad de Ciencias Básicas, Universidad de Antofagasta, Casilla 170, Antofagasta, Chile

A Error propagation

This section provides a general expression for the experimental error obtained by measuring the Bell inequality value. In particular, we show how errors in the photon counting number due to finite statistics propagate to ∆Q. Consider the following general expression for a Bell inequality:

\[
Q = \sum_{x,y=0}^{m-1} \sum_{a,b=0}^{d-1} s^{ab}_{xy}\, p(ab|xy)
+ \sum_{x=0}^{m-1} \sum_{a=0}^{d-1} s^{a}_{x}\, p_A(a|x)
+ \sum_{y=0}^{m-1} \sum_{b=0}^{d-1} s^{b}_{y}\, p_B(b|y). \quad (S1)
\]

Here we include both joint and marginal probability distributions. As is typical, the marginal probabilities are calculated from the joint probabilities, and we average over all possible x (or y), i.e.
$$p_A(a|x) = \frac{1}{m}\sum_{y=0}^{m-1}\sum_{b=0}^{d-1} p(ab|xy), \qquad p_B(b|y) = \frac{1}{m}\sum_{x=0}^{m-1}\sum_{a=0}^{d-1} p(ab|xy).$$

Replacing these quantities in Eq. (S1) and rewriting Q in terms of the coincidence counts c(ab|xy), with $p(ab|xy) = c(ab|xy)/\sum_{a'b'} c(a'b'|xy)$, we get

$$Q = \sum_{x,y=0}^{m-1}\sum_{a,b=0}^{d-1}\left[s^{ab}_{xy} + \frac{1}{m}\left(s^{a}_{x} + s^{b}_{y}\right)\right]\frac{c(ab|xy)}{\sum_{a'b'} c(a'b'|xy)}. \qquad (S2)$$

Finally, Gaussian error propagation and the Poisson statistics of the recorded coincidence counts are considered to calculate ∆Q. The Poissonian nature of the coincidence counts gives the squared error $(\Delta c(ab|xy))^2 = c(ab|xy)$. The general expression for the experimental error is then

$$\Delta Q = \sqrt{\sum_{a,b,x,y}\left(\frac{\partial Q}{\partial c(ab|xy)}\right)^{2} c(ab|xy)}, \qquad (S3)$$

and a straightforward calculation leads to

$$\frac{\partial Q}{\partial c(a'b'|x'y')} = \frac{1}{\left(\sum_{ab} c(ab|x'y')\right)^{2}}\left\{\left[s^{a'b'}_{x'y'} + \frac{1}{m}\left(s^{a'}_{x'} + s^{b'}_{y'}\right)\right]\sum_{ab} c(ab|x'y') - \sum_{ab}\left[s^{ab}_{x'y'} + \frac{1}{m}\left(s^{a}_{x'} + s^{b}_{y'}\right)\right] c(ab|x'y')\right\}. \qquad (S4)$$

B Experimental Details

For testing our method, we use the statistics recorded in Ref. 1, where the authors study randomness certification and self-testing in a practical Bell scenario, considering five different partially entangled states (PES). The experiment (depicted in Fig. S1) was performed using a high-purity, tunable polarization entanglement source of photons generated in the spontaneous parametric down-conversion (SPDC) process. A Sagnac interferometer, composed of two laser mirrors (M1 and M2), a half-wave plate (HWP), and a polarizing beam-splitter (PBS_p) cube, combined with a type-II periodically poled potassium titanyl phosphate (PPKTP) nonlinear crystal, was used. The PPKTP crystal was pumped by a continuous-wave laser operating at 405 nm to create degenerate down-converted photons at 810 nm. The two propagation modes inside the interferometer (clockwise and counter-clockwise) for the generated down-converted photons overlap inside the PBS_p cube, resulting in the polarization-entangled state |ψ(ϑ)⟩ = cos(ϑ)|HV⟩ + sin(ϑ)|VH⟩, where the angle ϑ defines the linear polarization mode of the pump beam, cos(ϑ)|H⟩ + sin(ϑ)|V⟩.
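Before continuing with the experimental details, the error propagation of Sec. A (Eqs. (S2)-(S4)) can be implemented directly. The sketch below assumes illustrative conventions of ours: the coefficients s^{ab}_{xy}, s^a_x, s^b_y and the counts c(ab|xy) are stored in dictionaries keyed by the integer tuples shown; none of these names come from the source.

```python
import math

def _coeff(s_ab, s_a, s_b, m, a, b, x, y):
    # combined coefficient s^{ab}_{xy} + (s^a_x + s^b_y)/m that multiplies
    # p(ab|xy) after the averaged marginals are expressed through the joint
    return s_ab[(a, b, x, y)] + (s_a[(a, x)] + s_b[(b, y)]) / m

def Q_value(counts, s_ab, s_a, s_b, m, d):
    """Bell value of Eq. (S2), with p(ab|xy) = c(ab|xy) / sum_ab c(ab|xy)."""
    Q = 0.0
    for x in range(m):
        for y in range(m):
            N = sum(counts[(a, b, x, y)] for a in range(d) for b in range(d))
            Q += sum(_coeff(s_ab, s_a, s_b, m, a, b, x, y) * counts[(a, b, x, y)]
                     for a in range(d) for b in range(d)) / N
    return Q

def dQ_dc(counts, s_ab, s_a, s_b, m, d, key):
    """Partial derivative of Eq. (S4) with respect to one count c(a'b'|x'y')."""
    ap, bp, xp, yp = key
    N = sum(counts[(a, b, xp, yp)] for a in range(d) for b in range(d))
    S = sum(_coeff(s_ab, s_a, s_b, m, a, b, xp, yp) * counts[(a, b, xp, yp)]
            for a in range(d) for b in range(d))
    return (_coeff(s_ab, s_a, s_b, m, ap, bp, xp, yp) * N - S) / N ** 2

def delta_Q(counts, s_ab, s_a, s_b, m, d):
    """Gaussian propagation of Poissonian count errors, Eq. (S3)."""
    return math.sqrt(sum(dQ_dc(counts, s_ab, s_a, s_b, m, d, k) ** 2 * c
                         for k, c in counts.items()))
```

A useful self-check is that `dQ_dc` agrees with a finite-difference derivative of `Q_value` with respect to a single count.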
Therefore, the amount of entanglement, given by the concurrence C = sin(2ϑ), can be adjusted using the half-wave plate (HWP_p) and the quarter-wave plate (QWP_p) located in the pump beam propagation path. To ensure degenerate generation of the down-converted photons, Semrock high-quality narrow bandpass filters centered at 810 nm were used, with 0.5 nm bandwidth and a peak transmission > 90%. Furthermore, to prevent distinguishability between the spatial and polarization modes, the authors couple the generated down-converted photons into single-mode optical fibers. To maximize the coincidence counts, they follow a numerical model proposed in Ref. 4. The optimal coupling condition is reached when ω_SPDC = √2 ω_p, where ω_p and ω_SPDC are the waists of the pump beam and of the down-converted photons at the center of the PPKTP crystal, respectively. These conditions were satisfied using a 20 cm focal-length lens L_p and 10X objective lenses to couple the down-converted photons into the optical fibers. The local projective measurements involved in the tilted Bell inequality (Eq. 7, main text) were implemented using the typical polarization analyzer, which consists of the HWP_A (HWP_B), the QWP_A (QWP_B), and the PBS_A (PBS_B) for Alice (Bob). To reach the high overall visibility required for randomness certification and self-testing, an electronic circuit capable of implementing a coincidence window as short as 500 ps was used, reducing the accidental coincidence probability 2,3). PerkinElmer single-photon avalanche detectors were placed at the output modes of each PBS to record the photon statistics and estimate the set of probabilities p(a, b|x, y) used for our analysis. The overall two-photon visibility obtained was (99.7 ± 0.3)% when the logical and diagonal polarization bases were measured.

C Canonical form of Bell inequalities

To transform any bipartite Bell inequality with m settings and two outcomes to its canonical form, i.e.
depending on the outputs a = b = 0 only, the following identities have to be considered for the local probabilities,

$$p_A(1|x) = 1 - p_A(0|x), \qquad p_B(1|y) = 1 - p_B(0|y),$$

and for the joint probabilities,

$$p(0,1|x,y) = p_A(0|x) - p(0,0|x,y),$$
$$p(1,0|x,y) = p_B(0|y) - p(0,0|x,y),$$
$$p(1,1|x,y) = 1 - p_A(0|x) - p_B(0|y) + p(0,0|x,y),$$

for every x, y = 0, …, m − 1.

Figure S1. Experimental setup used in Ref. 1 to implement randomness certification and self-testing using a tunable, high-quality source of polarization-entangled down-converted photons. Figure created in Inkscape 1.1, https://inkscape.org/.

1. S. Gómez, A. Mattar, I. Machuca, E. S. Gómez, D. Cavalcanti, O. Jiménez Farías, A. Acín, and G. Lima, Experimental investigation of partially entangled states for device-independent randomness generation and self-testing protocols, Phys. Rev. A 99, 032108 (2019).
2. S. Gómez, A. Mattar, E. S. Gómez, D. Cavalcanti, O. Jiménez Farías, A. Acín, and G. Lima, Experimental nonlocality-based randomness generation with nonprojective measurements, Phys. Rev. A 97, 040102 (2018).
3. E. S. Gómez, S. Gómez, P. González, G. Cañas, J. F. Barra, A. Delgado, G. B. Xavier, A. Cabello, M. Kleinmann, T. Vértesi et al., Phys. Rev. Lett. 117, 260401 (2016).
4. D. Ljunggren and M. Tengner, Optimal focusing for maximal collection of entangled narrow-band photon pairs into single-mode fibers, Phys. Rev. A 72, 062301 (2005).
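Returning to Sec. C, the reduction to canonical form can be carried out mechanically for any coefficient set. The sketch below uses our own illustrative conventions (dictionaries keyed by integer tuples, two outcomes a, b ∈ {0, 1}); it returns the coefficients of p(0,0|x,y), p_A(0|x), p_B(0|y) and the additive constant.

```python
def canonical_form(s_ab, s_a, s_b, m):
    """Reduce a two-outcome bipartite Bell expression to coefficients of
    p(0,0|x,y), p_A(0|x), p_B(0|y) plus a constant, using the identities
    of Sec. C."""
    t = {}                                                   # of p(0,0|x,y)
    u = {x: s_a[(0, x)] - s_a[(1, x)] for x in range(m)}     # of p_A(0|x)
    v = {y: s_b[(0, y)] - s_b[(1, y)] for y in range(m)}     # of p_B(0|y)
    const = sum(s_a[(1, x)] for x in range(m)) + sum(s_b[(1, y)] for y in range(m))
    for x in range(m):
        for y in range(m):
            s00, s01 = s_ab[(0, 0, x, y)], s_ab[(0, 1, x, y)]
            s10, s11 = s_ab[(1, 0, x, y)], s_ab[(1, 1, x, y)]
            t[(x, y)] = s00 - s01 - s10 + s11
            u[x] += s01 - s11     # from p(0,1|xy) = p_A(0|x) - p(0,0|xy)
            v[y] += s10 - s11     # from p(1,0|xy) = p_B(0|y) - p(0,0|xy)
            const += s11          # from the p(1,1|xy) identity
    return t, u, v, const
```

On any no-signaling distribution the canonical expression evaluates to the same Bell value as the original one, which makes the reduction easy to verify numerically.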
Critical temperature of site-diluted spin-1/2 systems with long-range ferromagnetic interactions

Karol Szałowski, Tadeusz Balcerzak
Department of Solid State Physics, Faculty of Physics and Applied Informatics, University of Łódź, ulica Pomorska 149/153, 90-236 Łódź, Poland

25 Mar 2014. Author's version of the manuscript published in J. Phys. Soc. Jpn. 83 (2014) 044002, DOI: 10.7566/JPSJ.83.044002
Keywords: Ising model, Heisenberg model, critical temperature, long-range interactions

In the paper the Pair Approximation (PA) method is adopted for studies of site-diluted spin-1/2 systems of arbitrary dimensionality with long-range ferromagnetic interactions. The method makes it possible to take into account arbitrary anisotropy of the interactions in spin space, so it is not limited to purely Ising couplings. Within this approach, the Gibbs free energy is obtained, from which all the further interesting thermodynamic properties can be derived. In particular, we obtain an equation for the critical temperature of the second-order phase transitions for the model in question. In the study we focus our attention on systems with ferromagnetic interactions decaying with the distance according to the power law J(r) ∝ r^{−n}. We discuss the dependence of the critical temperature on the concentration of the magnetic component and on the index n for selected one-, two- and three-dimensional lattices. We confirm the absence of a critical concentration for a diluted magnet with infinite interaction range. In the regime of low concentrations of the magnetic component, we find a non-linear increase of the critical temperature with the concentration in the form of T_c ∝ p^{n/d}, depending on the system dimensionality d and the index n, where n > d.
Introduction

The studies of systems with long-range interactions constitute a challenging contemporary problem in statistical physics. 1) An important part of the studies concerns systems with the so-called 'strong long-range interactions', 1) a term which denotes couplings decaying with the distance slowly enough to cause a failure of extensivity, which is the basis for the formulation of thermodynamics. However, there is a wide class of systems in which such behaviour does not emerge, and the usually formulated thermodynamics is an appropriate and valuable tool for their characterization. Within this field, a range of magnetic systems attracted considerable attention, focusing mainly on low dimensions. This selection is generally restricted to magnets in which the interactions are of constant sign, thus not leading to magnetic frustration with its plethora of intriguing consequences. Let us mention that studies of magnetic systems with site dilution and long-range couplings seem to be rather rare, and this subject is principally mentioned only in the context of spin glasses and scaling relations. 39,40)

Let us present a brief motivation for studies of diluted magnetic systems with long-range interactions provided by some recent experimental works. One can instance the progress in growth and characterization of the highly promising dilute magnetic semiconductor (Ga,Mn)N, which encourages the interest in three-dimensional ferromagnets with long-range interactions, for this substance attracts rising interest in the context of potential room-temperature ferromagnetism. 41-45) In this compound, a non-linear dependence of the critical temperature on the magnetic Mn dopant concentration has been found experimentally for low Mn content, and such behaviour has been attributed to a ferromagnetic long-range superexchange mechanism. 42,44,45) What is more, the unique properties of the indirect Ruderman-Kittel-Kasuya-Yosida interaction in graphene (see e.g. 46,47)) also promote the theoretical understanding of two-dimensional magnets with long-range coupling (e.g. 48,49)).

Despite the development and use of simulational Monte Carlo methods for systems with long-range interactions, 45,48,50-52) there is still room for analytic studies. However, the problem turns out to be complex and, up to now, no complete thermodynamic method which goes beyond the Molecular Field Approximation (MFA) has been proposed. In order to fill the gap, the present work describes the thermodynamics of site-diluted systems with spins 1/2 interacting ferromagnetically by means of a long-range coupling, using an analytical method based on the Pair Approximation (PA). The PA method is superior to MFA from the point of view of the systematic hierarchy of Cluster Variational Methods (CVM). 53-55) These methods have been originally developed for nearest-neighbour (NN) interactions. However, the application of CVM for larger clusters, for instance in the triangle or square approximation, in the presence of long-range interactions does not seem to be possible in practice. Nevertheless, it turns out that in the frame of CVM reduced to the PA the problem of long-range interactions is still tractable. The usefulness of the PA method follows from the fact that, in contrast to MFA, it takes into account the spin-pair correlations and can be applied to low-dimensional and disordered magnets. 56) Moreover, this method yields the Gibbs free energy, from which all thermodynamic quantities can be calculated.

In this paper, within the PA method, the equation for the critical (Curie) temperature for the system in question has been obtained. Attention is focused on a specific form of long-range interactions, namely decaying with the distance according to a power law.
For such a coupling the dependence of the critical temperature on the concentration of magnetic atoms for various anisotropy parameters characterizing the coupling has been illustrated and discussed.

Theoretical model

The Hamiltonian of a spin-1/2 site-diluted ferromagnet with long-range interactions can be written in the following form:

$$\mathcal{H} = -\sum_{\langle i,j\rangle} J\left(r_{ij}\right)\left[\Delta\left(S^{x}_{i}S^{x}_{j} + S^{y}_{i}S^{y}_{j}\right) + S^{z}_{i}S^{z}_{j}\right]\xi_i\,\xi_j - h\sum_{i}\xi_i S^{z}_{i}, \qquad (1)$$

where J_k = J(r_ij) > 0 is the ferromagnetic exchange integral between two spins i and j, the distance between which amounts to r_ij = r_k. It is assumed that one of the spins is the k-th nearest neighbour of the other, i.e. this spin belongs to the k-th coordination zone around the central one, and the set of radii r_k for k = 1, 2, … fully characterizes a given crystalline lattice. The parameter 0 ≤ ∆ ≤ 1 is the anisotropy of the interaction in spin space and is assumed to be independent of the distance between the interacting spins. ∆ = 0 corresponds to the Ising interaction, while ∆ = 1 is the isotropic Heisenberg coupling. The site dilution is introduced by means of the occupation number operators ξ_i, for which the configurational average ⟨ξ_i⟩ = p yields the concentration of the magnetic atoms. The external magnetic field is denoted by h. Since the interaction is long-ranged, the summation in the Hamiltonian extends over all site pairs of the considered crystalline lattice.

In order to describe the thermodynamics of the model in question, the Pair Approximation method is extended to be capable of treating systems with long-range interactions. The method is based on the cumulant expansion technique for the free energy, 55) which constitutes a systematic approach used in the frame of CVM. In the PA only the first- and second-order cumulants are taken into account, and the higher-order cumulants are neglected. This corresponds to the assumption that only the single-site and pair cluster energies contribute to the total energy.
The spin-spin interactions within each cluster pair are taken exactly. The molecular fields in which the clusters are embedded play the role of variational parameters. These parameters can be self-consistently determined from the condition that the Gibbs energy in equilibrium must achieve a minimum. Moreover, the magnetizations calculated on the basis of single sites and of pairs must be equal, which is imposed by a consistency condition. The PA method has been previously applied to extensive studies of various magnetic systems with interactions limited to nearest neighbours 56-60) and has been exhaustively described there; therefore, only a brief scheme is presented here.

The quantum state of a spin is described by means of the following density matrices:

$$\rho_{i} = e^{\beta G^{(1)}} \exp\left[\beta\left(\Lambda + h\right)S^{z}_{i}\right] \qquad (2)$$

for a single spin at site i, and

$$\rho_{ij} = e^{\beta G^{(2)}_{k}} \exp\left\{\beta\left[J_k\left(\Delta\left(S^{x}_{i}S^{x}_{j} + S^{y}_{i}S^{y}_{j}\right) + S^{z}_{i}S^{z}_{j}\right) + \left(\Lambda'_k + h\right)\left(S^{z}_{i} + S^{z}_{j}\right)\right]\right\} \qquad (3)$$

for a pair of spins at sites i and j. Here, J_k is the interaction for the k-th coordination zone of the given crystalline lattice and β = 1/(k_B T). In the present formulation the total Gibbs energy per site, averaged over the configurations of the magnetic component, G_r = ⟨H⟩_r − S_r T, can be expressed in the following form:

$$\frac{G_r}{N} = \frac{1}{2}\, p \left[\sum_{k=1}^{\infty} z_k\, p\, G^{(2)}_{k} - 2\left(p \sum_{k=1}^{\infty} z_k - 1\right) G^{(1)}\right], \qquad (4)$$

where z_k is the number of lattice sites belonging to the k-th coordination zone. The single-site and pair Gibbs energy terms are:

$$G^{(1)} = -k_B T \ln\left[2\cosh\left(\beta\,\frac{\Lambda + h}{2}\right)\right] \qquad (5)$$

and

$$G^{(2)}_{k} = -k_B T \ln\left[2\, e^{\beta J_k/4} \cosh\left(\beta\left(\Lambda'_k + h\right)\right) + 2\, e^{-\beta J_k/4} \cosh\left(\frac{\beta J_k \Delta}{2}\right)\right],$$

respectively. The parameter Λ has the interpretation of a molecular field acting on a single spin and originating from all the spins in its environment. The analogous parameter Λ'_k denotes a molecular field acting on a selected pair of spins, one of them being a k-th nearest neighbour of the other.
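The closed form of the pair Gibbs energy G^{(2)}_k follows from the four eigenvalues of the two-site cluster Hamiltonian that generates the density matrix (3). A short numerical cross-check, assuming numpy is available (the operator names are ours):

```python
import numpy as np

# spin-1/2 operators (hbar = 1)
sx = np.array([[0.0, 0.5], [0.5, 0.0]])
sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
I2 = np.eye(2)

def G2_numeric(J, Delta, Lam_p, h, beta):
    """G^(2)_k from exact diagonalization of the two-site cluster
    Hamiltonian appearing in the exponent of the density matrix (3)."""
    H = (-J * (Delta * (np.kron(sx, sx) + np.kron(sy, sy)) + np.kron(sz, sz))
         - (Lam_p + h) * (np.kron(sz, I2) + np.kron(I2, sz)))
    E = np.linalg.eigvalsh(H)                # Hermitian eigenvalues
    return -np.log(np.exp(-beta * E).sum()) / beta

def G2_closed(J, Delta, Lam_p, h, beta):
    """Closed form: Z = 2 e^{bJ/4} cosh[b(L'+h)] + 2 e^{-bJ/4} cosh(bJD/2)."""
    Z = (2.0 * np.exp(beta * J / 4) * np.cosh(beta * (Lam_p + h))
         + 2.0 * np.exp(-beta * J / 4) * np.cosh(beta * J * Delta / 2))
    return -np.log(Z) / beta
```

The two functions agree to machine precision for any parameter choice, which confirms the partition function entering G^{(2)}_k.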
Both parameters can be further expressed using the variational parameters λ_j, which constitute the molecular fields acting on a given spin and resulting from its interaction with a j-th nearest-neighbour spin. Therefore we can write:

$$\Lambda = p \sum_{l=1}^{\infty} z_l \lambda_l \qquad (6)$$

and

$$\Lambda'_k = \Lambda - \lambda_k = \sum_{l=1}^{\infty} \left(p z_l - \delta_{kl}\right) \lambda_l. \qquad (7)$$

The variational minimization of the Gibbs energy with respect to λ_j is performed with a set of constraints in the form of Tr_i (ρ_i S_i) = (1/2) Tr_ij [ρ_ij (S_i + S_j)], which impose the condition that the magnetization for a given lattice site is the same when calculated using either a single-site or a pair density matrix. Such a procedure leads to the self-consistent set of equations in the form of:

$$\tanh\left[\frac{1}{2}\beta\left(\Lambda + h\right)\right] = \frac{e^{\beta J_k/4}\, \sinh\left[\beta\left(\Lambda'_k + h\right)\right]}{e^{\beta J_k/4}\, \cosh\left[\beta\left(\Lambda'_k + h\right)\right] + e^{-\beta J_k/4}\, \cosh\left(\frac{1}{2}\beta J_k \Delta\right)}, \qquad (8)$$

where k = 1, 2, … numbers the subsequent coordination zones. After plugging the formulas (6) and (7) into Eq. (8), the set of equations for the variables λ_j is finally obtained. The solution of this infinite set of self-consistent equations (8) allows the Gibbs energy to be determined, and hence the thermodynamic behaviour of the system can be completely characterized. Further thermodynamic quantities of interest can be obtained as appropriate derivatives of the Gibbs energy with respect to its natural variables. It should be emphasized here that the Gibbs energy, which has been constructed from the enthalpy ⟨H⟩_r (i.e., the mean value of the Hamiltonian containing the interaction with the external field h) and the entropic part S_r T, is in general a function of three parameters: h, T and N. Since we are using the canonical ensemble with N = const., only the temperature T and the external field h are (intensive) thermodynamic parameters, for which the Gibbs energy can be treated as a thermodynamic potential.
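In the simplest case of NN-only interactions there is a single variational parameter λ, with Λ = p z λ and Λ'_1 = Λ − λ, and Eq. (8) can be solved numerically below the transition. A minimal sketch (the bisection bracket [1e-6, 10] is an illustrative choice adequate for moderate β; all function names are ours):

```python
import math

def rhs8(lam, J, Delta, p, z, beta, h=0.0):
    """Right-hand side of Eq. (8) for NN-only coupling (Lambda' = p z lam - lam)."""
    Lp = p * z * lam - lam
    num = math.exp(beta * J / 4) * math.sinh(beta * (Lp + h))
    den = (math.exp(beta * J / 4) * math.cosh(beta * (Lp + h))
           + math.exp(-beta * J / 4) * math.cosh(beta * J * Delta / 2))
    return num / den

def solve_lambda(J, Delta, p, z, beta, h=0.0):
    """Nontrivial root of Eq. (8) by bisection; 0.0 in the paramagnetic phase."""
    f = lambda lam: (math.tanh(0.5 * beta * (p * z * lam + h))
                     - rhs8(lam, J, Delta, p, z, beta, h))
    a, b = 1e-6, 10.0
    if f(a) >= 0.0:          # only the trivial solution lambda = 0 exists
        return 0.0
    for _ in range(200):
        mid = 0.5 * (a + b)
        if f(mid) < 0.0:
            a = mid
        else:
            b = mid
    return 0.5 * (a + b)

def magnetization(J, Delta, p, z, beta):
    """m = <S^z> = (1/2) tanh(beta Lambda / 2) at h = 0."""
    lam = solve_lambda(J, Delta, p, z, beta)
    return 0.5 * math.tanh(0.5 * beta * p * z * lam)
```

For the undiluted sc-lattice Ising case (z = 6, Δ = 0, p = 1) this yields a finite magnetization below the PA Curie point and exactly zero above it.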
As these two variables can easily be controlled in experiment, they appear to be very convenient in magnetism.

The present paper focuses on the critical temperature of the second-order phase transition for a ferromagnetic system. It is worth mentioning here that in a system with spin S = 1/2 and solely ferromagnetic NN interactions we do not expect to obtain first-order phase transitions. Such discontinuous phase transitions may occur when competitive interactions (often introduced for higher spins) or magnetic frustration take place, which is not our case. For the continuous phase transitions the derivation is presented in detail in Appendix A. Within this approach, we obtain the following equation for the critical (Curie) temperature T_c:

$$p \sum_{k=1}^{\infty} z_k \left[1 - \exp\left(-\frac{1}{2}\beta_c J_k\right) \cosh\left(\frac{1}{2}\beta_c J_k \Delta\right)\right] = 2, \qquad (9)$$

where β_c = 1/(k_B T_c). This equation will serve as a basis for the numerical calculations, the results of which are discussed in the following section. Let us mention here that the usual Molecular Field Approximation leads to the following formula for the Curie temperature:

$$k_B T^{\mathrm{MFA}}_{C} = \frac{1}{4}\, p \sum_{k=1}^{\infty} z_k J_k, \qquad (10)$$

which is insensitive to the interaction anisotropy in spin space.

Let us remark that for the specific case of Ising couplings limited only to nearest-neighbour spins, i.e., when J_1 > 0, J_2 = J_3 = ⋯ = 0 and ∆ = 0, we can solve Eq. (9) to obtain the expression for the critical temperature in the form k_B T_c/J_1 = 1/{2 ln[pz/(pz − 2)]}, which agrees with the results previously reported in Refs. 56-58,61) On the other hand, for Heisenberg couplings with ∆ = 1 we obtain k_B T_c/J_1 = 1/ln[pz/(pz − 4)]. 56) We should also mention that the validity of our approach, the outcome of which is Eq. (9), is limited to such interactions J(r) for which the sum in Eq. (9) is convergent and all thermodynamic quantities resulting from the formulas used (like the Gibbs energy per site) are finite.
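Since the left-hand side of Eq. (9) grows monotonically with β_c, the Curie temperature can be found by bisection once the coordination numbers z_k and couplings J_k of the included zones are given. A minimal sketch (our own function names; the bracketed term is evaluated in the overflow-safe form 1 − ½[e^{−βJ_k(1−Δ)/2} + e^{−βJ_k(1+Δ)/2}], which is identical to 1 − e^{−βJ_k/2} cosh(βJ_kΔ/2)):

```python
import math

def curie_temperature(z, J, p, Delta, beta_max=1e4, iters=200):
    """Solve Eq. (9) for beta_c by bisection; returns k_B T_c = 1/beta_c,
    or 0.0 when p lies below the critical concentration (no ordering).
    z, J: lists of coordination numbers and couplings per zone."""
    def g(beta):
        s = sum(zk * (1.0 - 0.5 * (math.exp(-beta * Jk * (1.0 - Delta) / 2)
                                   + math.exp(-beta * Jk * (1.0 + Delta) / 2)))
                for zk, Jk in zip(z, J))
        return p * s - 2.0
    if g(beta_max) <= 0.0:       # sum saturates below 2: T_c = 0
        return 0.0
    lo, hi = 1e-12, beta_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 2.0 / (lo + hi)
```

For NN-only couplings the solver reproduces the closed forms quoted above, k_B T_c/J_1 = 1/{2 ln[pz/(pz − 2)]} for Δ = 0 and 1/ln[pz/(pz − 4)] for Δ = 1.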
This implies that the interaction should decay fast enough with the distance between magnetic moments.

In order to illustrate the critical temperatures resulting from Eq. (9), let us assume for further calculations a specific form of the distance dependence of the couplings between magnetic moments. For this purpose we select a power-law decay of the interaction, in the form of:

$$J_k = J_1 \left(r_k/r_1\right)^{-n}, \qquad (11)$$

(k = 1, 2, …), where J_1 and r_1 are the coupling energy and the distance between nearest neighbours for a given lattice. The exponent n > 0 characterizes the power decay. This form of distance dependence of the coupling is known as the 'magnetic Grüneisen law' and has been postulated in Ref. 62) Moreover, such a dependence is also used for the interpretation of experimental data. 63) Let us mention that this formula is an empirical one and is applied to both ferro- and antiferromagnetic interactions. All the results presented below will be normalized to the parameter J_1 (i.e., the NN interaction), which sets the energy scale.

One of the interesting issues is the character of the dependence of the Curie temperature on the concentration of the magnetic component p. In particular, the range of small p is of special interest. In this regime the detailed structure of the underlying crystalline lattice is not expected to be important, leaving only the dependence on the dimensionality d of the lattice. As a consequence, a continuous approximation can be applied to Eq. (9), the result of which is:

$$k_B T_c = J_1 \left[\left(1 + \Delta\right)^{d/n} + \left(1 - \Delta\right)^{d/n}\right]^{n/d} \left(\frac{r_1}{\Omega_0^{1/d}}\right)^{n} \frac{1}{2} \left[\frac{\omega_d\, \Gamma\!\left(1 - \frac{d}{n}\right)}{4d}\right]^{n/d} p^{n/d} \qquad (12)$$

for d = 1, 2, 3; the coefficient ω_d = 2 for d = 1, ω_d = 2π for d = 2 and ω_d = 4π for d = 3. The presented result is valid only for the exponent n > d, for the distance dependence of the interaction given by Eq. (11). Ω_0 denotes the volume/area/length (depending on the dimensionality) of the system per site. Γ(x) is the Euler gamma function.
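The asymptote (12) can be checked against a direct numerical solution of Eq. (9). A sketch for the 1D chain, where z_k = 2 and r_k = k r_1 (the shell cutoff of 4000 zones and the bisection bracket are illustrative truncations of ours; at low p the two results should agree to within a few percent):

```python
import math

def tc_formula(p, n, d, Delta, J1=1.0, r1=1.0, Omega0=1.0):
    """Low-concentration asymptote, Eq. (12); valid for n > d."""
    omega = {1: 2.0, 2: 2 * math.pi, 3: 4 * math.pi}[d]
    S = (1 + Delta) ** (d / n) + (1 - Delta) ** (d / n)
    return (J1 * S ** (n / d) * (r1 / Omega0 ** (1 / d)) ** n
            * 0.5 * (omega * math.gamma(1 - d / n) / (4 * d)) ** (n / d)
            * p ** (n / d))

def tc_chain(p, n, Delta, shells=4000, J1=1.0):
    """k_B T_c from Eq. (9) for a 1D chain (z_k = 2, J_k = J1 k^{-n})."""
    J = [J1 * k ** (-n) for k in range(1, shells + 1)]
    def g(beta):
        return p * sum(2 * (1 - 0.5 * (math.exp(-beta * Jk * (1 - Delta) / 2)
                                       + math.exp(-beta * Jk * (1 + Delta) / 2)))
                       for Jk in J) - 2
    lo, hi = 1e-12, 1e8
    for _ in range(120):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 2.0 / (lo + hi)
```

By construction the ratio tc_formula(2p)/tc_formula(p) equals 2^{n/d}, displaying the predicted T_c ∝ p^{n/d} scaling.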
The condition n > d is used to guarantee the convergence of the total energy and of the associated quantities, including the convergence of the sum in Eq. (9). The most important finding from Eq. (12) is that the critical temperature is no longer proportional to p, as in MFA (Eq. (10)). Instead, it varies in a non-linear way, proportionally to p^{n/d}. Let us mention that such a dependence, T_C ∝ p^{n/d}, can be inferred from the scaling analysis presented briefly in Refs. 39,40) This kind of non-linear dependence remains unmodified by the various values of the interaction anisotropy ∆ in spin space. Let us observe, along the lines of the discussion in Ref. 48), that for a diluted system with the concentration of the magnetic component equal to p, the average distance between the impurities amounts to r_av = (Ω_0/p)^{1/d} ∝ p^{−1/d}. The interaction energy between impurities at this distance is J_av ∝ p^{n/d}. Therefore, for very low concentration p, the critical temperature is governed by the coupling between magnetic impurities at the average distance. On the other hand, for high concentration p → 1, the critical temperature tends to vary linearly with p.

Another interesting problem concerns the existence of a finite critical concentration p_c below which the critical temperature vanishes. Such a critical concentration has been found within the PA method for diluted systems with the interaction limited to nearest neighbours only. 56) In the presented case, from Eq. (9) (or from its alternative form (B·1)), in the limit T_c → 0 and for J(r) > 0 for all r < ∞, we can obtain p_c = 2/∑_{k=1}^∞ z_k for the Ising model (∆ = 0) and p_c = 4/∑_{k=1}^∞ z_k for the pure Heisenberg system (∆ = 1). For NN interactions only we have ∑_{k=1}^∞ z_k = z_1 and the critical concentrations reduce to those reported in Ref. 56) From the above formulas it is clear that p_c → 0 if the interaction does not vanish totally for any finite distance between impurities, as then ∑_{k=1}^∞ z_k → ∞.
Therefore, the critical temperature for the interaction given by Eq. (11) remains non-zero for an arbitrarily small concentration p > 0.

Numerical results and discussion

In Fig. 1(a) the dependence of the critical temperature on the concentration of the magnetic component p is presented on the linear scale. It is instructive to present the same dependence on the double logarithmic scale, as plotted in Fig. 1(b). From such a presentation it is visible that for the lowest concentrations p the dependencies of T_c vs. p become linear, which is a sign of power-law dependence. This observation is in accordance with Eq. (12), where T_c ∝ p^{n/d} is predicted. Moreover, it is evident that the slope of the curves on the double logarithmic scale increases with increasing n in the low-concentration range. The comparison between the analytical and numerical results is further illustrated in Fig. 1(c).

For large values of n, corresponding to a considerably fast decrease of the coupling with the distance, a series of subsequent kinks is visible in Fig. 1(b). The first one corresponds to p = 2/z_1, while the positions of the others correspond to p = 2/(z_1 + z_2), p = 2/(z_1 + z_2 + z_3), etc. The positions of the above-mentioned kinks are indicated in the plot with dashed vertical lines. According to the discussion of the critical concentration, and the formulas presented at the end of the previous section, these values correspond to the critical concentrations which would appear if the interactions were cut off at the first, second, third, etc., coordination zone, respectively. Since such a cut-off does not take place when n < ∞, the critical temperature does not fall to zero at those values; instead, only a rapid decrease of the critical temperature takes place and a noticeable kink is formed on the curve. When n → ∞, the behaviour of T_c converges to the behaviour of the Ising model with interaction only between nearest neighbours.

Let us also present the analogous dependencies calculated for a one-dimensional lattice (chain), which are shown in Fig. 2(a) and (b), on the linear and double logarithmic scale, respectively.
Contrary to the 3D sc lattice, for the 1D system a nonlinear regime of T_c occurs for large values of p. One can see that the first kink in Fig. 2(b) emerges at p = 1, and when the index n increases, the critical temperature at this kink quickly drops to zero. In the limiting case, when n → ∞, only the NN interaction J_1 remains (J_k = 0 for k ≥ 2, on the basis of Eq. (11)); then the critical temperature tends to zero for all concentrations, as can be seen in Fig. 2(b).

For the selected lattices compared in Fig. 3, in the limit of n → ∞ the results fall onto the same curve, which is predicted from the application of the Pair Approximation to a diluted magnet with nearest-neighbour interactions only. In this case the first kink appears at p_1^sc = p_1^tr = 2/z_1 = 1/3. The further kinks for these curves are connected with the next coordination zones. The next two coordination numbers for the sc lattice are z_2 = 12 and z_3 = 8. This leads to the concentrations corresponding to the second and third kinks: p_2^sc = 2/(z_1 + z_2) = 1/9 and p_3^sc = 2/(z_1 + z_2 + z_3) = 1/13. For the triangular lattice the first three coordination numbers are equal: z_1 = z_2 = z_3 = 6. Thus, the second and the third kinks appear at the values p_2^tr = 1/6 and p_3^tr = 1/9, respectively. It is worth noticing that the second kink for the sc lattice and the third kink for the tr lattice appear at the same concentration, p_2^sc = p_3^tr = 1/9. This coincidence is visible, for example, on the curves with index n = 20. It means that the critical concentration for the 3D sc lattice with the first- and second-neighbour interactions is the same as the critical concentration for the 2D tr lattice, where interactions up to the third coordination zone are taken into account. A remarkable feature of the critical temperature dependencies on p is the difference in slope, in the low concentration range, between the curves plotted for the two systems of unequal dimensionality, d = 2 and d = 3. This behaviour is in concert with Eq. (12). Fig.
4 illustrates the results for two lattices of the same dimensionality: the sc 3D lattice and the fcc 3D lattice. In this case it is visible that the critical temperature is higher for the fcc lattice, where the density of sites is greater. However, the slope of the dependence of T_c vs. p on the double logarithmic scale is the same in the low-p range for both lattices, since their dimensionality is the same. The positions of the kinks observable on both curves are different, since the numbers z_k are mostly unequal for these crystalline lattices. In particular, we have z_1 = 6 for the sc lattice, while z_1 = 12 for the fcc lattice. This difference causes a different limiting critical temperature behaviour for the two lattices when n → ∞.

The effects of the interaction anisotropy are studied in Fig. 5, where the results of the critical temperature calculation are compared for the sc 3D lattice with either Ising or isotropic Heisenberg couplings. It is evident that the critical temperatures are lowered by switching from the anisotropic to the isotropic coupling. This effect is least remarkable for low n and becomes gradually more pronounced as n increases. The slope on the double logarithmic scale is the same for low p (see Eq. (12)) and does not depend on the interaction anisotropy ∆. However, the limiting high-n behaviour differs, for the Pair Approximation predicts T_c = 0 below p_c = 2/z_1 for ∆ < 1 and below p_c = 4/z_1 for ∆ = 1 (in agreement with Ref. 56)). It can also be of interest to study the dependence of the critical temperature on the index n.

Final remarks and conclusion

In the paper the Pair Approximation method for spin-1/2 systems with long-range couplings of ferromagnetic character and random site dilution has been applied. In particular, we found the equation for the critical (Curie) temperature with the interaction anisotropy ∆ taken into account.
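For reference, the nearest-neighbour-limit critical concentrations quoted in the Fig. 4 and Fig. 5 discussions above can be tabulated directly (a sketch; the dictionary layout and variable names are ours):

```python
from fractions import Fraction

# n -> infinity (nearest-neighbour) limit of the PA critical concentration:
# p_c = 2/z1 for Delta < 1 (Ising-like), p_c = 4/z1 for Delta = 1 (isotropic Heisenberg)
p_c = {
    lattice: {"Ising": Fraction(2, z1), "Heisenberg": Fraction(4, z1)}
    for lattice, z1 in [("sc", 6), ("fcc", 12)]
}
print(p_c["sc"], p_c["fcc"])
```

The larger coordination number of the fcc lattice halves both critical concentrations relative to sc, consistent with the different limiting behaviour seen in Fig. 4.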
For the interesting case of interactions varying with the distance between spins like J(r) ∝ r^{−n}, a limiting formula for the critical temperature (valid in the limit of low concentration p) has been derived. This formula shows that the critical temperature varies non-linearly with the concentration of magnetic atoms, namely T_c ∝ p^{n/d}, where d is the dimensionality of the considered system. This finding differs qualitatively from the Mean Field Approximation prediction, where T_c ∝ p for any interaction and dimensionality. The prediction of our method is in agreement with scaling arguments 39,40) where the same proportionality has been found. The result is also in accord with some Quantum Monte Carlo calculations for the honeycomb lattice 48) (d = 2) with spin S = 1/2 and interaction of the type J(r) ∝ r^{−3} (n = 3). Namely, the result found in Ref. 48) is T_c ∝ p^{3/2} for p ≤ 0.2. There is also strong experimental evidence that a power law with a similar exponent (p^{1.9}) is obeyed by the spin-glass freezing temperature in very diluted Co-based II-VI DMS with long-range interaction. [64][65][66] Also, in a wide class of Mn-based DMS a power-law dependence of the freezing temperature on the magnetic ion concentration is confirmed. 67)

In our plots the dependence of the critical temperature on the concentration of magnetic atoms has been illustrated for various lattices of different dimensionality. In our work we focused our attention on the calculation of the phase transition temperature. The critical behaviour in the vicinity of the phase transition has not been studied; however, it is known that the critical exponents in the PA method are the same as in the Landau theory, i.e., given by MFA. For regular lattices such classical critical exponents constitute an approximation. It has also been shown that the PA method gives exact results when it is applied to Bethe lattices with NN interactions.
68) The differences between the Ising and Heisenberg models in the PA method can be noticed through different phase transition temperatures and different critical concentrations. In particular, for NN interactions only (when n → ∞) the critical concentration obtained here for the Ising model is p_c = 2/z_1, whereas for the Heisenberg model p_c = 4/z_1. This means that the 1D Ising chain with z_1 = 2 is nonmagnetic at non-zero temperatures, and the 2D Heisenberg system with z_1 = 4 is also nonmagnetic (in accordance with the Mermin-Wagner theorem 4)). Unfortunately, for the NN interaction the PA method is not able to distinguish between the 2D triangular lattice with z_1 = 6 and the 3D simple cubic lattice. However, such lattices are distinguishable for the long-range interaction (Fig. 3). As far as the NN interactions are concerned within the PA method, a difference between the Ising and Heisenberg models can also be found in the low-temperature behaviour of the magnetic susceptibility. For instance, it has been found in Ref. 57) that the susceptibility in the isotropic Heisenberg bilayer in the vicinity of T = 0 diverges like ∝ 1/T. One can suppose that such behaviour may also occur for the long-range interactions; however, this needs more extended studies of all the thermodynamic properties, which is beyond the scope of the present paper.

As far as low-dimensional magnetism is concerned, we found that a non-zero critical temperature occurs in all the systems where the interactions extend to infinity, provided n > d. This result is in accordance with the theoretical predictions of several papers, for example: the Quantum Monte Carlo method for the 2D Heisenberg model, 33) the spherical model in a 1D Ising system, 2) the one- and two-dimensional quantum Heisenberg model studied by spin wave theory, 34) the Green Function technique 35) and the Spectral Density method.
36) Another interesting limit of the interaction considered in the literature is n = 0, i.e., when the interactions extend to infinity and all of them have the same strength. Then, assuming J_k = J_1/N (for the energy convergence), we obtain the Kac model. 69) That model has been solved exactly for the crystalline case, giving the phase transition temperature and molecular-field-like behaviour. However, in the case of dilution, we do not expect to obtain a non-zero critical concentration for the Kac model, similarly to MFA.

As for the context of the validity of our approach, let us once more put emphasis on the fact that our description is valid when the interaction decays appropriately fast with the distance (i.e., n > d for J(r) ∝ r^{−n}). Therefore, such a kind of 'long-range interactions' does not involve the systems for which the standard formulation of thermodynamics does not work properly, 1) for example, due to the failure of extensivity of some thermodynamic variables caused by a slow decay of interactions. As a consequence, the interactions we consider fall into the category of the 'weak long-range interactions' according to the classification in Ref. 1) However, we are convinced that such a class of interactions is interesting, for example, from the point of view of modern magnetic systems.

Acknowledgments

The computational support on the Hugo cluster at the Department of Theoretical Physics and Astrophysics, P. J. Šafárik University in Košice is gratefully acknowledged. This work has been supported by the Polish Ministry of Science and Higher Education on a special purpose grant to fund the research and development activities and tasks associated with them, serving the development of young scientists and doctoral students.
Appendix A: Determination of the critical temperature

The set of equations for the variational parameters takes the form:

tanh[(1/2)β(Λ + h)] = e^{(1/4)βJ_k} sinh[β(Λ'_k + h)] / { e^{(1/4)βJ_k} cosh[β(Λ'_k + h)] + e^{−(1/4)βJ_k} cosh[(1/2)βJ_k∆] },   (A·1)

where the values of the index k = 1, 2, ... number the subsequent coordination zones for the considered crystalline lattice. First, let us assume that the set of equations is truncated after the k_max-th coordination zone, i.e. k = 1, ..., k_max. The variational parameters Λ and Λ'_k can be written as follows:

Λ = p Σ_{l=1}^{k_max} z_l λ_l,   (A·2)

together with a corresponding expression (A·3) for Λ'_k. The critical temperature follows from Eq. (9), the details of which are presented in Appendix B. The resulting formula for the critical temperature for small p is always nonzero for any finite p. It is worth mentioning that the physical meaning of the critical concentration is connected not only with the vanishing of the Curie temperature but also, from the structural point of view, indicates the percolation threshold in dilute systems.

In Fig. 1(a) we present the dependence of the normalized critical temperature on the concentration of the magnetic component, plotted for the three-dimensional simple cubic (sc) lattice with z_1 = 6 nearest neighbours, on a linear scale. The presence of long-range Ising interactions is assumed. The dependencies for various values of the index n are shown, starting from n = 4. It is evident that for values of p larger than 2/z_1 the dependencies for all values of n are linear in character, and their slope decreases with increasing n. For the lowest value of n the curve remains almost linear in the whole range of concentrations. However, for larger n values, a kink emerges close to p = 2/z_1. For n → ∞ we reproduce the results for the Ising model with nearest-neighbour interactions only, i.e. the critical concentration p_c = 2/z_1 = 1/3 is present, below which T_c = 0. In Fig. 1(c) we plot two selected solutions of the general Eq. (9) (solid curves) together with their analytical approximations presented by Eq.
(12) (dashed lines) for two different values of the exponent n. One can see that for sufficiently low concentration p the analytical approximation given by Eq. (12) is fully consistent with the numerical solution of the full equation for the critical temperature (9).

Fig. 1. (Color online) Dependence of the critical temperature on the magnetic component concentration for various indexes n on a linear scale (a) and a double logarithmic scale (b). Comparison of the numerical solution of Eq. (9) (solid lines) with the low-concentration analytical approximation Eq. (12) (dashed lines) for two different indexes n on a double logarithmic scale (c). The exchange integral between nearest neighbours J_1 is fixed. The 3D simple cubic (sc) lattice is considered, with z_1 = 6, z_2 = 12, z_3 = 8. Ising couplings (with ∆ = 0) are assumed.

Fig. 2. (Color online) Dependence of the critical temperature on the magnetic component concentration for various indexes n on a linear scale (a) and a double logarithmic scale (b). The exchange integral between nearest neighbours J_1 is fixed. The 1D lattice (linear chain) is considered, with z_1 = 2. Ising couplings (with ∆ = 0) are assumed.

When n → ∞, the behaviour converges to that of the Ising model with interaction between nearest neighbours only (and the value of p_c = 1/3 in this case simultaneously corresponds to the first kink for all other curves). One can notice that a similar plot has been presented in Ref. 48), where two-dimensional graphene has been considered with antiferromagnetic couplings decaying according to the law J(r) ∝ r^{−3}. Quantum Monte Carlo results for the isotropic Heisenberg model show the proportionality of the critical temperature to p^{3/2}, which is also in accord with our results (Eq. (12)). Moreover, for larger concentrations p a linear dependence of the critical temperature on the magnetic impurity concentration is found, as in the presented results. Let us also observe that the lack of a kink in the dependence of the critical temperature vs. p in the results of Ref.
48) is also in qualitative agreement with what we obtain for low values of the index n (for the case considered in Ref. 48), n = 3 and d = 2).

We found that the critical temperature tends to zero for all concentrations, including p = 1. This result is in accordance with the exact solution for the linear Ising chain with NN interactions, where no phase transition occurs at non-zero temperatures. In the range of small concentrations of magnetic atoms and finite n, the features are rather similar to the ones present in the previous case (i.e. the presence of further kinks and the power-law dependence of T_c on p when p → 0).

Fig. 3 presents a comparison of the results obtained for two crystalline lattices with the same number of nearest neighbours (z_1 = 6), but of different dimensionality, namely for the 3D sc lattice and the 2D triangular (tr) lattice. The critical temperatures were calculated for Ising couplings and plotted as a function of the concentration p on a double logarithmic scale, for selected values of n. The calculated T_c is always lower for the 2D system than for the 3D system.

Fig. 3. (Color online) Dependence of the critical temperature on the magnetic component concentration for various indexes n on a double logarithmic scale. The exchange integral between nearest neighbours J_1 is fixed. The two lattices with z_1 = 6 are compared: the 3D sc lattice (solid lines) and the 2D triangular lattice (dashed lines). Ising couplings (with ∆ = 0) are assumed.

Fig. 4. (Color online) Dependence of the critical temperature on the magnetic component concentration for various indexes n on a double logarithmic scale. The exchange integral between nearest neighbours J_1 is fixed. The two 3D lattices are compared: the sc lattice with z_1 = 6 (solid lines) and the fcc lattice with z_1 = 12 (dashed lines). Ising couplings (with ∆ = 0) are assumed.

Fig. 5. (Color online) Dependence of the critical temperature on the magnetic component concentration for various indexes n on a double logarithmic scale. The exchange integral between nearest neighbours J_1 is fixed.
The two models, Ising (solid lines) and isotropic Heisenberg (dashed lines), are compared for the 3D sc lattice.

Fig. 6. (Color online) Dependence of the critical temperature on the index n for various concentrations of the magnetic component p, on a double logarithmic scale. The exchange integral between nearest neighbours J_1 is fixed. Ising couplings (with ∆ = 0) are assumed, for (a) the 1D lattice (linear chain) with z_1 = 2; (b) the 3D sc lattice with z_1 = 6.

The difference in the critical temperatures tends to vanish with increasing index n in the range of large concentrations p. This reflects the fact that for large n the most important role is played by the interaction with nearest neighbours, the number of which is equal to z_1. The dependence of the critical temperature on the index n for some fixed values of the concentration p is presented in Fig. 6, for Ising couplings on the 1D lattice (a) and on the 3D sc lattice (b). A double logarithmic scale is used. For the 1D lattice, the critical temperature drops with increasing n, and the tendency is stronger for higher n values. In this case the drop in T_c is not limited by a non-zero value. When the concentration p increases, a range of slower drop of T_c emerges for lower n, and for p close to 1 this range is significant. For p = 1 the dependence is different, because only a slow, linear-like drop of the critical temperature with increasing n is visible. Let us observe that p = 1 is a limiting case for the 1D lattice, for which z_1 = 2 and thus p_c = 2/z_1 = 1. It means that if n → ∞ then T_c → 0, in agreement with the exact result for the Ising chain with NN interactions. Somewhat similar behaviour can be seen in Fig. 6(b) for the 3D sc lattice. When p < p_c = 2/z_1 = 1/3, the behaviour of T_c (an unlimited, fast drop) is analogous to the one observed in Fig. 6(a). However, in the range of concentrations p_c = 1/3 < p ≤ 1 a qualitatively different dependence of T_c vs. n is seen.
Namely, after some initial decrease, the critical temperature tends to the limiting value predicted by the Pair Approximation for a diluted magnet with the nearest-neighbour coupling only. The separating line for p = p_c = 1/3 corresponds to the slow, linear-like decrease in the critical temperature.

Experimentally, the dependence of the critical temperature on the concentration of magnetic atoms is often non-linear. For instance, the experiments performed on the 3D dilute magnetic semiconductor (DMS) Ga_{1−p}Mn_pN [42][43][44][45] gave the result T_c ∝ p^{2.2} for p ≤ 0.1.

The equations (A·1) can be linearized in the vicinity of the continuous phase transition, which yields the linear set (A·4). After substituting (A·2) and (A·3) into (A·4) we obtain a homogeneous system of equations for the variational parameters; the equation for the critical (Curie) temperature of the continuous phase transition follows from the condition that the determinant of this system vanishes (A·7). After some algebra, and assuming the limit k_max → ∞, the final result takes the form of Eq. (A·14).

Appendix B: Critical temperature dependence on magnetic component concentration for small concentrations

The equation for the critical temperature (A·14) can be re-written in the form (B·1). Let us introduce the notation C_± ≡ J_1 (1 ± ∆) r_1^n / 2. Then, for the interactions J_k ∝ r_k^{−n} and for p → 0, we can replace the summation over the coordination zones with integration over the volume/surface/length, where ω_d = 2 for d = 1, ω_d = 2π for d = 2 and ω_d = 4π for d = 3. It can be shown that for n > d the resulting integral can be expressed in terms of Γ(x), the Euler gamma function. 70) Finally, the critical temperature can be expressed as follows:

k_B T_c = J_1 [ (1 + ∆)^{d/n} + (1 − ∆)^{d/n} ]^{n/d} ( r_1 / Ω_0^{1/d} )^n (1/2) [ ω_d Γ(1 − d/n) / (4d) ]^{n/d} p^{n/d},   (B·6)

for d = 1, 2, 3.
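The role of the condition n > d in the sum-to-integral replacement can be seen numerically: with shell weights growing like k^{d−1} (a toy stand-in for the true coordination numbers z_k — our simplification, not the exact values of any lattice) the lattice sum Σ_k z_k r_k^{−n} settles down only for n > d.

```python
import math

def shell_sum(n, d, K):
    """Truncated sum over coordination zones with z_k ~ omega_d * k^(d-1), r_k ~ k."""
    omega = {1: 2.0, 2: 2 * math.pi, 3: 4 * math.pi}[d]
    return sum(omega * k ** (d - 1) * k ** (-n) for k in range(1, K + 1))

# n > d: successive truncations agree (convergent sum)
s1, s2 = shell_sum(4, 3, 10_000), shell_sum(4, 3, 20_000)

# n = d: the sum grows logarithmically without bound
g1, g2 = shell_sum(3, 3, 10_000), shell_sum(3, 3, 20_000)
```

Doubling the cutoff changes the n = 4, d = 3 sum only in the fourth decimal place, while the n = d = 3 sum still grows by about 4π ln 2 per doubling.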
The condition n > d is necessary to guarantee the convergence of the integrals and thus the finite value of the total energy of the system in question. Using the above results we obtain from (B·2) the final result (B·6).

Author's version of the manuscript published in J. Phys. Soc. Jpn. 83 (2014) 044002, DOI:10.7566/JPSJ.83.044002.

References

1) F. Bouchet, S. Gupta, and D. Mukamel: Physica A 389 (2010) 4389.
2) G. S. Joyce: Phys. Rev. 146 (1966) 349.
3) M. E. Fisher, S.-k. Ma, and B. G. Nickel: Phys. Rev. Lett. 29 (1972) 917.
4) N. D. Mermin and H. Wagner: Phys. Rev. Lett. 17 (1966) 1133.
5) A. Gelfert and W. Nolting: J. Phys.: Condens. Matter 13 (2001) R505.
6) M. Barati and A. Ramazani: Phys. Rev. B 64 (2001) 024407.
7) M. Barati and A. Ramazani: Phys. Rev. B 65 (2001) 012406.
8) M. Barati and A. Ramazani: Phys. Rev. B 62 (2000) 12130.
9) J. L. Monroe: Phys. Rev. E 68 (2003) 027103.
10) J. L. Monroe: J. Phys. A: Math. Gen. 31 (1998) 9809.
11) J. L. Monroe: J. Phys. A: Math. Gen. 32 (1999) 7083.
12) E. Bayong and H. T. Diep: Phys. Rev. B 59 (1999) 11919.
13) E. Bayong, H. T. Diep, and T. T. Truong: J. Appl. Phys. 85 (1999) 6088.
14) S. Curilef, L. A. del Pino, and P. Orellana: Phys. Rev. B 72 (2005) 224410.
15) H. Nakano and M. Takahashi: J. Phys. Soc. Jpn. 63 (1994) 4256.
16) H. Nakano and M. Takahashi: J. Phys. Soc. Jpn. 66 (1997) 228.
17) Y. Tomita: J. Phys. Soc. Jpn. 78 (2009) 014002.
18) A. S. T. Pires: Phys. Rev. B 53 (1996) 5123.
19) A. S. T. Pires: J. Magn. Magn. Mater. 322 (2010) 2015.
20) K. H. Khoo and H. K. Sy: J. Phys.: Condens. Matter 13 (2001) 101.
21) M. Hamedoun, Y. Cherriet, A. Hourmatallah, and N. Benzakour: Phys. Rev. B 63 (2001) 172402.
22) N. Laflorencie, I. Affleck, and M. Berciu: J. Stat. Mech. 2005 (2005) P12001.
23) E. Luijten and H. W. J. Blöte: Phys. Rev. B 56 (1997) 8945.
24) A. W. Sandvik: Phys. Rev. Lett. 104 (2010) 137204.
25) J. T. M. Pacobahyba, W. Nunes, and J. R. de Sousa: Phys. Rev. B 69 (2004) 092410.
26) E. Yusuf, A. Joshi, and K. Yang: Phys. Rev. B 69 (2004) 144412.
27) R.-G. Zhu and A.-M. Wang: Phys. Rev. B 74 (2006) 012406.
28) L. A. del Pino, P. Troncoso, and S. Curilef: J. Phys.: Conf. Ser. 134 (2008) 012030.
29) A. Cavallo, F. Cosenza, and L. De Cesare: Phys. Rev. B 66 (2002) 174439.
30) F. Cosenza, A. Cavallo, and L. De Cesare: Phys. Lett. A 310 (2003) 223.
31) O. Vassiliev, I. Rojdestvenski, and M. Cottam: Physica A 294 (2001) 139.
32) O. Vassiliev, M. Cottam, and I. Rojdestvenski: J. Magn. Magn. Mater. 226-230, Part 1 (2001) 588.
33) O. N. Vassiliev, M. G. Cottam, and I. V. Rojdestvenski: J. Appl. Phys. 89 (2001) 7329.
34) H. Nakano and M. Takahashi: Phys. Rev. B 50 (1994) 10331.
35) H. Nakano and M. Takahashi: Phys. Rev. B 52 (1995) 6606.
36) A. Cavallo, F. Cosenza, and L. De Cesare: Physica A 332 (2004) 301.
37) A. Cavallo, F. Cosenza, and L. De Cesare: Eur. Phys. J. B 50 (2006) 73.
38) L. S. Campana, L. De Cesare, U. Esposito, M. T. Mercaldo, and I. Rabuffo: Phys. Rev. B 82 (2010) 024409.
39) D. Chowdhury: Spin Glasses and Other Frustrated Systems (World Scientific Publishing, Singapore, 1986).
40) R. Rammel and J. Souletie: Spin Glasses. In M. Cyrot (ed), Magnetism of Metals and Alloys, Chap. 4 (North-Holland Publishing Company, 1982).
41) G. Kunert, S. Dobkowska, T. Li, H. Reuther, C. Kruse, S. Figge, R. Jakiela, A. Bonanni, J. Grenzer, W. Stefanowicz, J. von Borany, M. Sawicki, T. Dietl, and D. Hommel: Appl. Phys. Lett. 101 (2012) 022413.
42) M. Sawicki, T. Devillers, S. Gałȩski, C. Simserides, S. Dobkowska, B. Faina, A. Grois, A. Navarro-Quezada, K. N. Trohidou, J. A. Majewski, T. Dietl, and A. Bonanni: Phys. Rev. B 85 (2012) 205204.
43) S. Stefanowicz, G. Kunert, C. Simserides, J. A. Majewski, W. Stefanowicz, C. Kruse, S. Figge, T. Li, R. Jakieła, K. N. Trohidou, A. Bonanni, D. Hommel, M. Sawicki, and T. Dietl: Phys. Rev. B 88 (2013) 081201.
44) T. Dietl and H. Ohno: arXiv:1307.3429v2 (2013), to be published in Rev. Mod. Phys.
45) C. Simserides, J. Majewski, K. Trohidou, and T. Dietl: arXiv:1308.4517v1 (2013), to be published in Eur. Phys. J. Web of Conferences.
46) M. Sherafati and S. Satpathy: Phys. Rev. B 83 (2011) 165425.
47) J. M. Duffy, P. D. Gorman, S. R. Power, and M. S. Ferreira: J. Phys.: Condens. Matter 26 (2014) 055007.
48) T. Fabritius, N. Laflorencie, and S. Wessel: Phys. Rev. B 82 (2010) 035402.
49) S. Qi, H. Chen, X. Xu, and Z. Zhang: Carbon 61 (2013) 609.
50) K. Fukui and S. Todo: J. Comp. Phys. 228 (2009) 2629.
51) K. Watanabe and M. Sasaki: J. Phys. Soc. Jpn. 80 (2011) 093001.
52) M. Sasaki and F. Matsubara: J. Phys. Soc. Jpn. 77 (2008) 024004.
53) R. Kikuchi: Phys. Rev. 81 (1951) 988.
54) T. Morita and T. Tanaka: Phys. Rev. 145 (1966) 288.
55) S. Katsura: In J. L. Morán López and J. M. Sanchez (ed), Theory and Applications of the Cluster Variation and Path Probability Methods (Plenum Press, New York, 1996).
56) T. Balcerzak and K. Szałowski: Phys. Rev. B 80 (2009) 144404.
57) T. Balcerzak and I. Łużniak: Physica A 388 (2009) 357.
58) K. Szałowski, T. Balcerzak, and A. Bobák: J. Magn. Magn. Mater. 323 (2011) 2095.
59) K. Szałowski and T. Balcerzak: Physica A 391 (2012) 2197.
60) K. Szałowski and T. Balcerzak: Thin Solid Films 534 (2013) 546.
61) T. Balcerzak: Physica A 317 (2003) 213.
62) D. Bloch: J. Phys. Chem. Solids 27 (1966) 881.
63) R. E. Coffman and G. R. Buettner: J. Phys. Chem. 83 (1979) 2387.
64) A. Twardowski, H. J. M. Swagten, W. J. M. de Jonge, and M. Demianiuk: Phys. Rev. B 36 (1987) 7013.
65) H. J. M. Swagten, A. Twardowski, P. J. T. Eggenkamp, and W. J. M. de Jonge: Phys. Rev. B 46 (1992) 188.
66) P. M. Shand, A. Lewicki, I. Miotkowski, B. C. Crooker, and J. K. Furdyna: Phys. Rev. B 44 (1991) 6152.
67) R. R. Gałazka: J. Magn. Magn. Mater. 140-144 (1995) 13.
68) J. W. Tucker, T. Balcerzak, M. Gzik, and A. Sukiennicki: J. Magn. Magn. Mater. 187 (1998) 381.
69) H. E. Stanley: Introduction to Phase Transitions and Critical Phenomena (Oxford University Press, Inc., Oxford, 1971).
70) Wolfram Research, Inc.: Mathematica Edition: Version 8.0 (Wolfram Research, Inc., Champaign, IL, 2010).
Critical temperature of site-diluted spin-1/2 systems with long-range ferromagnetic interactions
Karol Szałowski and Tadeusz Balcerzak
Department of Solid State Physics, Faculty of Physics and Applied Informatics, University of Łódź, ulica Pomorska 149/153, 90-236 Łódź, Poland
arXiv:1403.6283; J. Phys. Soc. Jpn. 83 (2014) 044002, DOI:10.7566/jpsj.83.044002

Abstract: In the paper the Pair Approximation (PA) method for studies of the site-diluted spin-1/2 systems of arbitrary dimensionality with the long-range ferromagnetic interactions is adopted. The method allows to take into account arbitrary anisotropy of the interactions in the spin space, so it is not limited to purely Ising couplings. Within this approach, the Gibbs free energy is obtained, which allows to derive all the further interesting thermodynamic properties. In particular, we obtain an equation for the critical temperature of the second-order phase transitions for the model in question. In the study we focus our attention on the systems with ferromagnetic interactions decaying with the distance according to the power law J(r) ∝ r^{−n}. We discuss the dependence of the critical temperature on the concentration of the magnetic component and the index n for selected one-, two- and three-dimensional lattices. We confirm the absence of the critical concentration for a diluted magnet with infinite interaction range. In the regime of low concentrations of the magnetic component, we find a non-linear increase of the critical temperature with the concentration in the form of T_c ∝ p^{n/d}, depending on the system dimensionality d and the index n, where n > d.

and valuable tool for their characterization. Within this field, a range of magnetic systems attracted considerable attention, focusing mainly on low dimensions. This selection is generally restricted to magnets in which interactions are of constant sign, thus not leading to magnetic frustration with its plethora of intriguing consequences. Let us mention that studies of magnetic systems with site dilution and long-range couplings seem to be rather rare, and this subject is principally mentioned only in the context of spin glasses and scaling relations. 39,40)

Let us present a brief motivation for the studies of diluted magnetic systems with long-range interactions, provided by some recent experimental works. One can instance the progress in the growth and characterization of a highly promising dilute magnetic semiconductor, (Ga,Mn)N, which encourages the interest in three-dimensional ferromagnets with long-range interactions, for this substance attracts rising interest in the context of potential room-temperature ferromagnetism. [41][42][43][44][45] In this compound, a non-linear dependence of the critical temperature on the magnetic Mn dopant concentration has been found experimentally for low Mn content, and such behaviour has been attributed to a ferromagnetic long-range superexchange mechanism. 42,44,45) What is more, the unique properties of the indirect Ruderman-Kittel-Kasuya-Yosida interaction in graphene (see e.g. 46,47)) also promote the theoretical understanding of two-dimensional magnets with long-range coupling (e.g. 48,49)).

Despite the development and use of simulational Monte Carlo methods for systems with long-range interactions, 45,48,[50][51][52] there is still room for analytic studies. However, the problem turns out to be complex and, up to now, no complete thermodynamic method which goes beyond the Molecular Field Approximation (MFA) has been proposed. In order to fill the gap, the present work describes the thermodynamics of site-diluted systems with spins 1/2 interacting ferromagnetically by means of the long-range coupling, using an analytical method based on the Pair Approximation (PA). The PA method is superior to MFA from the point of view of the systematic hierarchy of Cluster Variational Methods (CVM). [53][54][55] These methods have been originally developed for nearest-neighbour (NN) interactions. However, the application of CVM for larger clusters, for instance in the triangle or square approximation, in the presence of the long-range interaction does not seem to be possible in practice. Nevertheless, it turns out that within CVM reduced to the PA the problem of long-range interactions is still tractable. The usefulness of the PA method follows from the fact that, in contrast to MFA, it takes into account the spin-pair correlations and can be applied to low-dimensional and disordered magnets. 56) Moreover, this method yields the Gibbs free energy, from which all thermodynamic quantities can be calculated.
arxiv
22 May 2016

GLOBAL EXISTENCE OF WEAK SOLUTIONS TO THE 3D INCOMPRESSIBLE AXISYMMETRIC EULER EQUATIONS WITHOUT SWIRL

Quansen Jiu, Jitao Liu, Dongjuan Niu

Key words and phrases: Euler equations, global weak solutions, 3D axisymmetric. 2010 Mathematics Subject Classification: 35Q35, 76B03, 76B47.

Abstract. In this paper we discuss the three-dimensional incompressible axisymmetric Euler equations without swirl in R^3. Specifically, we prove the global existence of weak solutions provided the swirl component of the initial vorticity w^θ_0 satisfies w^θ_0/r ∈ L^1 ∩ L^p(R^3) for some p > 1. Our work improves previous results by removing the assumption that the initial velocity field satisfies u_0 ∈ L^2(R^3).

1. Introduction and main results

In this paper, we are concerned with the three-dimensional incompressible Euler equations

∂_t u + u · ∇u = −∇p, ∇ · u = 0, (1.1)

in the whole space R^3 with initial data u(0, x) = u_0(x), where u = (u_1, u_2, u_3) and p = p(x, t) denote the velocity field and the pressure, respectively. The mathematical study of the incompressible Euler equations has a long history and a large associated literature. In the two-dimensional case, Wolibner [24] obtained the global well-posedness of smooth solutions in 1933. This work was extended by Yudovich [25], who proved existence and uniqueness for a certain class of weak solutions when the initial vorticity w_0 lies in L^1 ∩ L^∞(R^2). Later, under the assumption that w_0 ∈ L^1 ∩ L^p(R^2) for some p > 1, DiPerna and Majda showed in [8] that weak solutions exist globally. Furthermore, if w_0 is a finite Radon measure with one sign, there are also many works on the global existence of weak solutions; see [6, 9, 19, 21] for details. However, the global existence of smooth solutions of the 3D incompressible Euler equations with smooth initial data remains an important open problem.
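The weak-solution classes above are distinguished by the integrability of the vorticity, and the same kind of exponent counting recurs in Remark 1.1 below. As a quick sanity check (a sketch of my own, not part of the paper; the helper `in_Lp_near_origin` is a hypothetical name), the criterion is that |x|^{-a} lies in L^p near the origin of R^d exactly when ap < d:

```python
from fractions import Fraction

# Near the origin in R^d, |x|^{-ap} integrated against the radial measure
# r^{d-1} dr is finite exactly when a*p < d.  (Hypothetical helper, used only
# for exponent bookkeeping; exact rationals avoid float round-off.)
def in_Lp_near_origin(a, p, d=3):
    return a * p < d

# Exponents from Remark 1.1 of the paper: u0 ~ |x|^{-8/5} near 0 in R^3,
# hence w0 ~ |x|^{-13/5} there.
a_u, a_w = Fraction(8, 5), Fraction(13, 5)

print(in_Lp_near_origin(a_u, 2))   # False: u0 fails to be in L^2(R^3)
print(in_Lp_near_origin(a_w, 1))   # True:  w0 is in L^1 near the origin

p_max = Fraction(3, 1) / a_w       # threshold exponent for w0 in L^p
print(p_max)                       # 15/13, matching the range 1 < p < 15/13
```

The computed threshold 15/13 agrees with the range of p stated in Remark 1.1.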
From a mathematical point of view, in the two-dimensional case the corresponding vorticity w = ∂_2 u_1 − ∂_1 u_2 is a scalar field and satisfies the transport equation

∂_t w + u · ∇w = 0,

which implies that its L^p norm is conserved for all time. In the three-dimensional case, however, w becomes a vector field and the vortex stretching term w · ∇u appears in the equation for the vorticity w = ∇ × u, i.e.,

∂_t w + u · ∇w = w · ∇u.

The presence of the vortex stretching term makes global regularity much harder to prove, and is the main reason the problem remains open. Mathematicians have therefore identified certain geometric assumptions under which the global well-posedness of three-dimensional inviscid flow can be explored. One typical situation is 3D axisymmetric flow. Even under this particular structure, it is still an open problem to exclude singularities, which can occur (if at all) only on the symmetry axis (see [2]), even for the Cauchy problem. But if the swirl component of the velocity field, u_θ, is trivial, Ukhovskii and Yudovich [26] and Saint Raymond [22] proved that weak solutions in the whole space are regular for all time. It should be noted that under this additional assumption the vorticity quantity w^θ/r is transported by a divergence-free vector field, which makes the problem much closer to the 2D case. Subsequently, many authors have looked for the minimal assumptions on the initial data that guarantee the existence of global weak solutions. In 1997, D. Chae and N. Kim proved the global existence of weak solutions in [4] under the assumption that w^θ_0/r ∈ L^p(R^3) for some p > 6/5. Later, D. Chae and O. Y. Imanuvilov [3] obtained a similar result by assuming u_0 ∈ L^2(R^3) and |w^θ_0/r|[1 + (log^+ |w^θ_0/r|)^α] ∈ L^1(R^3) with α > 1/2.
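The L^p conservation under transport by a divergence-free field, which underlies both the 2D theory above and the quantity w^θ/r, can be illustrated numerically. The rotation field and the bump profile below are my own toy choices (not from the paper): for the rigid rotation u(x, y) = (−y, x) the exact solution of ∂_t w + u · ∇w = 0 is w(x, y, t) = w_0(R_{−t}(x, y)), and a midpoint-rule quadrature shows the integral of |w|^p is unchanged in time:

```python
import math

def w0(x, y):
    # smooth bump centered at (0.5, 0); any rapidly decaying profile works
    return math.exp(-4.0 * ((x - 0.5) ** 2 + y ** 2))

def lp_power_integral(t, p, n=240, L=3.0):
    # midpoint-rule approximation of ∫ |w(x, y, t)|^p dx dy over [-L, L]^2,
    # with w obtained by tracing characteristics back: rotate (x, y) by -t
    h = 2 * L / n
    c, s = math.cos(-t), math.sin(-t)
    total = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * h
        for j in range(n):
            y = -L + (j + 0.5) * h
            total += w0(c * x - s * y, s * x + c * y) ** p
    return total * h * h

n0 = lp_power_integral(0.0, 3)
n1 = lp_power_integral(1.0, 3)
print(abs(n0 - n1) / n0)   # tiny: the L^3 integral is conserved by transport
```

The relative difference is at the level of quadrature and domain-truncation error only; the same invariance is what (2.6) expresses for w^θ/r in the axisymmetric setting.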
Recently, Jiu, Wu and Yang [13] also obtained an existence result, under the assumptions u_0 ∈ L^2(R^3) and w^θ_0/r ∈ L^1 ∩ L^p(R^3) for some p > 1, by the method of viscous approximations. For other related works, one can refer to [1, 5, 10-12, 14, 15, 17, 23]. A natural question then arises: can DiPerna and Majda's 2D result [8] be generalized to 3D axisymmetric flow without swirl? In the present paper, we give a positive answer to this question. That is, for initial data with w^θ_0/r ∈ L^1 ∩ L^p(R^3) for some p > 1, weak solutions exist globally. In fact, our work can be seen as an improvement of previous results in that it removes the assumption u_0 ∈ L^2(R^3). The new ingredient of our proof is a rigorous L^p_loc(R^3) estimate of the velocity field. In our setting the basic energy estimate is unavailable, which makes it difficult to employ a compactness argument. To overcome this, we establish L^p_loc(R^3) estimates (p > 1) for the velocity field. More precisely, we first derive the explicit form of the stream function in terms of the vorticity, and on this basis we establish the estimate of ‖u‖_{L^p_loc(R^3)} for any p > 1. This estimate is not only a new observation but also the cornerstone for obtaining the W^{1,p}_loc(R^3) estimates of the velocity field. Based on these estimates, it suffices to establish the necessary strong convergence of the approximate solutions. In general, the compact embedding W^{1,p}_loc(R^n) ↪ L^2_loc(R^n) holds for any p > 1 when n = 2, but only for p > 6/5 when n = 3. For this reason, the authors of [4] took the critical index p to be 6/5. However, in cylindrical coordinates the 3D Lebesgue measure is r dr dθ dz, and the radial, swirl and z-components of an axisymmetric vector field are functions of the variables r and z only.
This means that in any compact region away from the axis of symmetry, r is bounded away from zero and from above, and therefore the 3D measure r dr dθ dz is equivalent to the 2D measure dr dz. Thus, in any compact region away from the axis of symmetry, the strong convergence of the approximate solutions is guaranteed by the W^{1,p}_loc(R^3) estimates of the velocity field and the compact embedding W^{1,p}_loc(dr dz) ↪ L^2_loc(dr dz) for p > 1. Subsequently, by applying Theorem 3.2 of Jiu-Xin [15], the approximate solutions converge strongly in L^2(0, T; L^2_loc(R^3)), which is sufficient to complete all the proofs.

Before stating our main result, we introduce the definition of weak solutions to (1.1).

Definition 1.1 (Global weak solutions). For any T > 0, a velocity field u(x, t) ∈ L^∞(0, T; L^2_loc(R^3)) with initial data u_0 is a weak solution of (1.1) if:

(i) for any vector field ϕ ∈ C^∞_0((0, T]; R^3) with ∇ · ϕ = 0,

∫_0^T ∫_{R^3} u · ϕ_t + u · ∇ϕ · u dx dt = ∫_{R^3} u_0 · ϕ_0 dx;

(ii) for any function φ ∈ C^∞_0((0, T]; R^3),

∫_{R^3} u · ∇φ dx = 0.

Under this definition, our main result can be stated as follows.

Theorem 1.1. Suppose that w^θ_0 = w^θ_0(r, z) is a scalar axisymmetric function such that w_0 = w(x, 0) = w^θ_0 e_θ and w^θ_0/r ∈ L^1 ∩ L^p(R^3) for some p > 1. Then, for any T > 0, there exists at least one axisymmetric weak solution u ∈ W^{1,p}_loc(R^3) without swirl in the sense of Definition 1.1.

Remark 1.1. It should be noted that, in general, w_0 ∈ L^1 ∩ L^p(R^3) for some p > 1 does not imply u_0 ∈ L^2(R^3). For example, take a radial function u_0(x) such that

u_0(x) = |x|^{−8/5} if |x| ≤ 1; smooth if 1 < |x| ≤ 2; 0 if |x| > 2.

For this function it is clear that |w_0(x)| ∼ |x|^{−13/5} for |x| ≤ 1, and therefore w_0 ∈ L^1 ∩ L^p(R^3) for any 1 < p < 15/13. Nevertheless, u_0 does not belong to L^2(R^3).

This paper is organized as follows. In Section 2, we introduce some notation.
In Section 3, we concentrate on the a priori estimates of the velocity field. Section 4 is devoted to proving the global existence of weak solutions.

2. Preliminaries

In this section, we fix notation and set down some basic definitions. We first introduce the definition of 3D axisymmetric flow.

Definition 2.1 (Axisymmetric flow). A vector field u(x, t) is called axisymmetric if it can be written in cylindrical coordinates in the form

u(x, t) = u_r(r, z, t) e_r + u_θ(r, z, t) e_θ + u_z(r, z, t) e_z, (2.2)

where e_r = (cos θ, sin θ, 0), e_θ = (−sin θ, cos θ, 0), e_z = (0, 0, 1). We call the velocity components u_r(r, z, t), u_θ(r, z, t), u_z(r, z, t) the radial, swirl and z-components, respectively.

Remark 2.1. In what follows we write u_r, u_θ, u_z instead of u_r(r, z, t), u_θ(r, z, t), u_z(r, z, t) for simplicity.

With the above definition, we now set up the equations satisfied by u_r, u_θ, u_z. In cylindrical coordinates the gradient operator takes the form ∇ = e_r ∂_r + (1/r) e_θ ∂_θ + e_z ∂_z. Then, by some basic calculations, one can rewrite (1.1) as

∂_t u_r + ũ · ∇̃ u_r + ∂_r p = (u_θ)^2 / r,
∂_t u_θ + ũ · ∇̃ u_θ = − u_θ u_r / r,
∂_t u_z + ũ · ∇̃ u_z + ∂_z p = 0,
∂_r(r u_r) + ∂_z(r u_z) = 0, (2.3)

where ũ = u_r e_r + u_z e_z and ∇̃ = e_r ∂_r + e_z ∂_z. In addition, by (2.3)_2, the quantity r u_θ satisfies the transport equation

∂_t(r u_θ) + ũ · ∇̃(r u_θ) = 0. (2.4)

Thanks to (2.4), the following well-known proposition holds.

Proposition 2.1. If u is a smooth axisymmetric solution of the 3D incompressible Euler equations, then the swirl component u_θ of the velocity vanishes for all time whenever its initial value u_θ(0) is zero.

Proof. Since ũ is divergence-free and (2.4) holds, multiplying (2.4) by r u_θ and integrating over (0, t) yields

‖u_θ(t)‖_{L^2(R^3)} ≤ ‖u_θ(0)‖_{L^2(R^3)} = 0.
Then, since u_θ is smooth, we conclude that u_θ ≡ 0 for all t > 0.

Therefore, if u_θ(0) = 0, the velocity field reduces to ũ and its vorticity can be written as w = w^θ e_θ, where w^θ = ∂_z u_r − ∂_r u_z. Moreover, the scalar quantity w^θ/r is transported by the divergence-free vector field ũ, that is,

∂_t(w^θ/r) + ũ · ∇̃(w^θ/r) = 0. (2.5)

This means that w^θ/r is conserved, which, together with ∇ · ũ = 0, implies the following property.

Conservation laws for w^θ/r. Suppose u is a smooth axisymmetric solution of the 3D incompressible Euler equations with vanishing initial swirl component u_θ(0). Then the estimate

‖w^θ/r‖_{L^p(R^3)} ≤ ‖w^θ_0/r‖_{L^p(R^3)} (2.6)

holds for any p ∈ [1, ∞], where w^θ_0 = w^θ(x, 0).

3. A priori estimates of the velocity field

In this section, we focus on the W^{1,p}_loc(R^3) estimates of the velocity field. The first step is to show the existence of a stream function, first given in Lemma 1 of [18].

Lemma 3.1. Let u be a smooth axisymmetric velocity field without swirl with ∇ · u = 0. Then there exists a unique scalar function ψ = ψ(r, z) such that u = ∇ × (ψ e_θ) and ψ = 0 on the axis of symmetry r = 0.

This lemma, together with ∇ · u = 0 and w = ∇ × u = w^θ e_θ, tells us that −∆(ψ e_θ) = w^θ e_θ. Then, by elliptic theory, we have

ψ(r_x, z_x) e_{θ_x} = ∫_{R^3} G(X, Y) w^θ(r_y, z_y) e_{θ_y} dY, (3.7)

where X = (r_x, θ_x, z_x) and G(X, Y) = |X − Y|^{−1} is the three-dimensional Green's function in the whole space. It is well known that the Green's function G(X, Y) satisfies the two properties

(i): |D^k_X G(X, Y)| ≤ C_k |X − Y|^{−1−k}, (3.8)

(ii): G(X, Y) = G(X̄, Ȳ), ∂_r G(X, Y) = ∂_r G(X̄, Ȳ), ∂_z G(X, Y) = ∂_z G(X̄, Ȳ), (3.9)

for all X, Y ∈ R^3 and k = 0, 1, 2, where for a point X = (x, y, z) we write X̄ = (−x, −y, z). So far we have established the formula (3.7). However, in order to find the explicit form of ψ(r_x, z_x), we need to fix the value of θ_x.
Therefore, making use of rotational invariance and putting θ_x = 0 in (3.7), we derive the explicit form of ψ in terms of w^θ:

ψ(r_x, z_x) = ∫_{−∞}^{∞} ∫_0^{∞} ∫_{−π}^{π} G(X, Y) w^θ cos θ_y r_y dθ_y dr_y dz_y, (3.10)

where X = (r_x, 0, z_x). On this basis, we use the stream function to establish L^p_loc(R^3) estimates of the velocity field. We first introduce the following lemma, which is our new observation and the cornerstone of this paper.

Lemma 3.2. Let u and ψ be as in Lemma 3.1 and w = ∇ × u = w^θ e_θ. Then

|ψ(r_x, z_x)| ≤ C ∫_{|X−Y|≤1} |w^θ| / |X − Y| dY + C r_x ∫_{|X−Y|>1} |w^θ| / |X − Y|^2 dY, (3.11)

and

|∂_r ψ(r_x, z_x)| + |∂_z ψ(r_x, z_x)| ≤ C ∫_{|X−Y|≤1} |w^θ| / |X − Y|^2 dY + C r_x ∫_{|X−Y|>1} |w^θ| / |X − Y|^3 dY, (3.12)

where C is an absolute constant and X = (r_x, 0, z_x).

Proof. We first estimate |∂_r ψ|. From (3.10) we have

∂_r ψ = ∫_{−∞}^{∞} ∫_0^{∞} ∫_{−π}^{π} ∂_r G(X, Y) w^θ cos θ_y r_y dθ_y dr_y dz_y,

which together with (3.9) yields

∂_r ψ = ∫_{−∞}^{∞} ∫_0^{∞} ∫_{−π/2}^{π/2} (∂_r G(X, Y) − ∂_r G(X, Ȳ)) w^θ cos θ_y r_y dθ_y dr_y dz_y.

Thus, to prove (3.12), it suffices to verify that

H := ∫_{−π/2}^{π/2} (∂_r G(X, Y) − ∂_r G(X, Ȳ)) w^θ cos θ_y r_y dθ_y
   ≤ C ∫_{−π/2}^{π/2} |w^θ| / |X − Y|^2 dθ_y + C r_x ∫_{−π/2}^{π/2} |w^θ| / |X − Y|^3 dθ_y.

Without loss of generality, let θ* be the unique number θ_y ∈ [0, π/2] such that |X − Y| = 1, and split the integral H into H = I + II + III, with

I = ∫_{−π/2}^{−θ*} dθ_y, II = ∫_{−θ*}^{θ*} dθ_y, III = ∫_{θ*}^{π/2} dθ_y,

where |X − Y| > 1 on I and III, and |X − Y| ≤ 1 on II. (Otherwise |X − Y| > 1, or |X − Y| ≤ 1, holds for all θ_y ∈ [−π/2, π/2]; these two cases can be handled along the same lines as I or II, respectively.) Since |X − Y| ≤ |X − Ȳ| for all |θ_y| ≤ π/2, and the interval [−θ*, θ*] corresponds to those θ_y for which |X − Y| ≤ 1, one concludes easily that II satisfies the desired estimate.
Regarding the first and third terms, we start by fixing an angle θ′ ∈ [θ*, π/2] and denoting X_t = (r cos t, r sin t, z) for t ∈ [−π, 0]. Moreover, for a function f(x, y, z) = f(r cos θ, r sin θ, z), it is clear that ∂_θ f = r ∂_h f · e_θ, where ∂_h = (∂_x, ∂_y, 0). Hence, by the mean value theorem,

∂_r G(X, Y) − ∂_r G(X, Ȳ) = r_x ∫_{−π}^{0} ∂_h ∂_r G(X_t, Y) · e_{θ_t} dt.

Then, employing the fact that |X − Y| ≤ |X_t − Y| for all t ∈ [−π, 0] together with (3.8), we obtain

|∂_r G(X, Y) − ∂_r G(X, Ȳ)| ≤ C r_x |X − Y|^{−3}.

Thus we obtain the estimate of III, namely

III ≤ C r_x ∫_{θ*}^{π/2} |X − Y|^{−3} |w^θ| dθ_y.

The term I can be treated by the same argument as III. Adding up all the estimates then yields the bound for |∂_r ψ|. The quantities |ψ| and |∂_z ψ| can be estimated in a similar way, and we omit the details.

Remark 3.1. In [5, 23], the authors used the estimate of |∂_z ψ| to establish an L^∞(R^3) estimate of u_r/r. Here we observe that the estimates of |ψ|, |∂_r ψ| and |∂_z ψ| can also be used to establish L^p_loc(R^3) estimates of the velocity field.

Thanks to Lemma 3.2, we can derive the following L^p_loc(R^3) estimates of the velocity field, which are the key contribution of our work; they allow us to remove the assumption u_0 ∈ L^2(R^3) on the initial velocity.

Proposition 3.1 (L^p_loc(R^3) estimates). Let u be a smooth axisymmetric velocity field without swirl with ∇ · u = 0. Then

‖u‖_{L^p(C_R × [−R,R])} ≤ C(R) ‖w^θ/r‖_{L^1 ∩ L^p(R^3)}

for any p ∈ (1, ∞), where C_R = {(x, y) ∈ R^2 | 1/R ≤ x^2 + y^2 ≤ R} ⊂ R^2 is a 2D annulus and the constant C(R) depends only on R.

Proof. According to Lemma 3.1, for a smooth axisymmetric velocity field u with zero swirl component there exists a unique stream function ψ such that u = u_r e_r + u_z e_z = ∇ × (ψ e_θ). This implies u_r = ∂_z ψ, u_z = ∂_r ψ + ψ/r, and hence |u| ≤ |∂_z ψ| + |∂_r ψ| + |ψ/r|.
Then, by Lemma 3.2, we have

|u| ≤ C ∫_{|X−Y|≤1} |w^θ| / |X−Y|^2 dY + C r_x ∫_{|X−Y|>1} |w^θ| / |X−Y|^3 dY
   + (C/r_x) ∫_{|X−Y|≤1} |w^θ| / |X−Y| dY + C ∫_{|X−Y|>1} |w^θ| / |X−Y|^2 dY

   ≤ C r_x ∫_{|X−Y|≤1} |w^θ| / (r_y |X−Y|^2) dY + C ∫_{|X−Y|≤1} |w^θ| |r_x − r_y| / (r_y |X−Y|^2) dY
   + C r_x^2 ∫_{|X−Y|>1} |w^θ| / (r_y |X−Y|^3) dY + C r_x ∫_{|X−Y|>1} |w^θ| |r_x − r_y| / (r_y |X−Y|^3) dY
   + C ∫_{|X−Y|≤1} |w^θ| / (r_y |X−Y|) dY + (C/r_x) ∫_{|X−Y|≤1} |w^θ| |r_x − r_y| / (r_y |X−Y|) dY (3.13)
   + C r_x ∫_{|X−Y|>1} |w^θ| / (r_y |X−Y|^2) dY + C ∫_{|X−Y|>1} |w^θ| |r_x − r_y| / (r_y |X−Y|^2) dY

   ≤ C r_x ∫_{|X−Y|≤1} |w^θ| / (r_y |X−Y|^2) dY + 2C ∫_{|X−Y|≤1} |w^θ| / (r_y |X−Y|) dY
   + C r_x^2 ∫_{|X−Y|>1} |w^θ| / (r_y |X−Y|^3) dY + 2C r_x ∫_{|X−Y|>1} |w^θ| / (r_y |X−Y|^2) dY
   + (C/r_x) ∫_{|X−Y|≤1} |w^θ| / r_y dY + C ∫_{|X−Y|>1} |w^θ| / (r_y |X−Y|) dY

   = Σ_{i=1}^{6} I_i,

where we used the inequality |r_x − r_y| ≤ |X − Y|. Therefore, for any p ∈ (1, ∞) and any cut-off function χ_A with compact support A, Young's inequality for convolutions gives

‖I_1‖_{L^p(C_R×[−R,R])} + ‖I_2‖_{L^p(C_R×[−R,R])} + ‖I_3‖_{L^p(C_R×[−R,R])} + ‖I_5‖_{L^p(C_R×[−R,R])}
   ≤ C R ‖χ_{|x|≤1}/|x|^2‖_{L^1(R^3)} ‖w^θ/r‖_{L^p(R^3)} + C ‖χ_{|x|≤1}/|x|‖_{L^1(R^3)} ‖w^θ/r‖_{L^p(R^3)}
   + C R^2 ‖χ_{|x|>1}/|x|^3‖_{L^p(R^3)} ‖w^θ/r‖_{L^1(R^3)} + C R ‖χ_{|x|≤1}‖_{L^1(R^3)} ‖w^θ/r‖_{L^p(R^3)} (3.14)
   ≤ C(R) ‖w^θ/r‖_{L^1 ∩ L^p(R^3)}.

For the remaining terms, applying Hölder's inequality and Young's inequality for convolutions, it follows that

‖I_4‖_{L^p(C_R×[−R,R])} + ‖I_6‖_{L^p(C_R×[−R,R])}
   ≤ C R ‖I_4‖_{L^{3p/2}(C_R×[−R,R])} + C ‖I_6‖_{L^{3p}(C_R×[−R,R])}
   ≤ C R ‖χ_{|x|>1}/|x|^2‖_{L^{3p/2}(R^3)} ‖w^θ/r‖_{L^1(R^3)} + C ‖χ_{|x|>1}/|x|‖_{L^{3p}(R^3)} ‖w^θ/r‖_{L^1(R^3)} (3.15)
   ≤ C(R) ‖w^θ/r‖_{L^1(R^3)}.

Thus, one finishes the proof by summing up (3.13)-(3.15).

Finally, it remains to deal with the L^p_loc(R^3) estimates of ∇u in terms of w^θ. According to Proposition 2.20 in [20], the gradient of the velocity field can be expressed in terms of its vorticity by

[∇u]h = [Pw]h + (1/3) w × h.
(3.16)

Here P is a singular integral operator of Calderon-Zygmund type, generated by a homogeneous kernel of degree −3 (see [16]), and h is a vector field. Moreover, the explicit form of [Pw]h is

[Pw]h = −P.V. ∫_{R^3} [ (1/(4π)) (w(y) × h)/|x − y|^3 + (3/(4π)) ({[(x − y) × w(y)] ⊗ (x − y)}h)/|x − y|^5 ] dy. (3.17)

Therefore, with the help of (3.16) and (3.17), we are in a position to establish the following estimate.

Proposition 3.2 (‖∇u‖_{L^p_loc(R^3)} estimates). Assume that u is a smooth axisymmetric velocity field, divergence-free and with zero swirl component. Then, for any p ∈ (1, ∞),

‖∇u‖_{L^p(B_R × [−R,R])} ≤ C(R) ‖w^θ/r‖_{L^1 ∩ L^p(R^3)},

where B_R = B_R(0) ⊂ R^2 is a 2D ball and the constant C(R) depends only on R.

Proof. Thanks to (3.16), ‖∇u‖_{L^p} ≃ Σ_i ‖[∇u]e_i‖_{L^p} for any p ∈ (1, ∞), where e_i (i = r, θ, z) is the orthogonal basis in (2.2). Let χ(r, z) be a smooth cut-off function such that χ(r, z) = 1 in B_{2R} × [−2R, 2R] and supp χ ⊂ B_{3R} × [−3R, 3R], and split [∇u]e_i into three parts:

[∇u]e_i = [P(χw)]e_i + [P{(1 − χ)w}]e_i + (1/3) w × e_i = I + II + III.

Since P is a singular operator of Calderon-Zygmund type, the Calderon-Zygmund inequality for p ∈ (1, ∞) gives

‖I‖_{L^p(B_R×[−R,R])} + ‖III‖_{L^p(B_R×[−R,R])} ≤ C ‖P(χw)‖_{L^p(R^3)} + C ‖w‖_{L^p(B_R×[−R,R])}
   ≤ C ‖w^θ‖_{L^p(B_{2R}×[−2R,2R])} (3.18)
   ≤ C R ‖w^θ/r‖_{L^p(R^3)}.

Regarding the second term, by (3.17) we have

II = −P.V. ∫_{R^3} [ (1/(4π)) (g(y) × e_i)/|x − y|^3 + (3/(4π)) ({[(x − y) × g(y)] ⊗ (x − y)}e_i)/|x − y|^5 ] dy,

where g(y) = (1 − χ(y))w(y). In addition, since supp(1 − χ) ⊂ R^3 \ B_{2R} × [−2R, 2R], it is clear that |x − y| ≥ |y| − |x| ≥ R for x ∈ B_R × [−R, R] and y ∈ R^3 \ B_{2R} × [−2R, 2R]. Therefore, for x ∈ B_R × [−R, R],

|II| ≤ C ∫_{|x−y|≥R} |w^θ(y)| / |x − y|^3 dy
   ≤ C r_x ∫_{|x−y|≥R} |w^θ(y)| / (r_y |x − y|^3) dy + C ∫_{|x−y|≥R} |w^θ(y)| |r_x − r_y| / (r_y |x − y|^3) dy
   ≤ C r_x ∫_{|x−y|≥R} |w^θ(y)| / (r_y |x − y|^3) dy + C ∫_{|x−y|≥R} |w^θ(y)| / (r_y |x − y|^2) dy
   ≤ (C/R^2) ‖w^θ/r‖_{L^1(R^3)},

which implies, after some basic calculations, that

‖II‖_{L^p(B_R × [−R,R])} ≤ C(R) ‖w^θ/r‖_{L^1(R^3)}. (3.19)

Thus, the proof is finished by adding up (3.18) and (3.19).

4. Global existence of weak solutions

This section is devoted to proving the global existence of weak solutions. The first step is to construct a family of approximate solutions.
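Before turning to the approximation scheme, the structure used throughout Section 3, reconstructing u from a stream function ψ via u = ∇ × (ψ e_θ), can be sanity-checked numerically. The profile ψ below and the sign convention u_r = −∂_z ψ, u_z = ∂_r ψ + ψ/r are my own choices for this sketch (sign conventions for the curl in cylindrical coordinates vary in the literature); the check verifies the divergence identity (2.3)_4, ∂_r(r u_r) + ∂_z(r u_z) = 0, by nested central differences:

```python
import math

def psi(r, z):
    # any smooth stream function with psi = 0 on the axis r = 0
    return r * r * math.exp(-r * r - z * z)

H = 1e-4  # central-difference step

def d_dr(f, r, z):
    return (f(r + H, z) - f(r - H, z)) / (2 * H)

def d_dz(f, r, z):
    return (f(r, z + H) - f(r, z - H)) / (2 * H)

# components of u = curl(psi e_theta), with the sign convention stated above
def u_r(r, z):
    return -d_dz(psi, r, z)

def u_z(r, z):
    return d_dr(psi, r, z) + psi(r, z) / r

def div_times_r(r, z):
    # the quantity ∂r(r u_r) + ∂z(r u_z) from (2.3)_4; should vanish
    return (d_dr(lambda a, b: a * u_r(a, b), r, z)
            + d_dz(lambda a, b: a * u_z(a, b), r, z))

print(abs(div_times_r(0.7, -0.3)))  # ≈ 0 up to finite-difference error
```

Analytically, ∂_r(−r ∂_z ψ) + ∂_z(r ∂_r ψ + ψ) = 0 for any smooth ψ, which is exactly what the finite-difference residual reflects.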
To begin with, we introduce the standard mollifier ρ_ǫ, defined by ρ_ǫ(x) = ǫ^{−3} ρ(|x|/ǫ), where ρ ∈ C^∞_0(R^3), ρ ≥ 0, supp ρ ⊂ {|x| ≤ 1} and ∫_{R^3} ρ dx = 1. With the help of this mollifier, we have the following theorem.

Theorem 4.1. Given initial data w^θ_0 = w^θ(0, r, z) such that w_0 = w^θ_0 e_θ and w^θ_0/r ∈ L^1 ∩ L^p(R^3) for some p > 1, there exists, for any T > 0, a family of smooth axisymmetric solutions u^ǫ with zero swirl component and initial data u^ǫ_0. Here w^ǫ_0(x) = ρ_ǫ * w_0(x) and u^ǫ_0 is the velocity field associated with w^ǫ_0, determined in the proof below. In addition, it holds that

‖u^ǫ‖_{W^{1,p}(C_R × [−R,R])} ≤ C(R). (4.20)

Proof. Without loss of generality, we may assume that w_0 has compact support; otherwise we redefine w_0 = ρ_ǫ * (χ w_0) for a cut-off function χ compactly supported in R^3. By construction, the initial vorticity w^ǫ_0 is axisymmetric. Then, by solving the elliptic system

∇ × u^ǫ_0 = w^ǫ_0, ∇ · u^ǫ_0 = 0,

we obtain a unique axisymmetric velocity field u^ǫ_0. By assumption, ∇ × u^ǫ_0 = w^ǫ_0 has only a swirl component w^ǫ_θ(0, x), i.e. w^ǫ_0 = w^ǫ_θ(0, x) e_θ. Therefore, u^ǫ_0 has zero swirl component, i.e. u^ǫ_θ(0, x) = 0. Moreover, u^ǫ_0 ∈ C^∞(R^3) and belongs to the space V = {u | u ∈ H^3(R^3), ∇ · u = 0}. On this basis, by Theorem 2.4 of [7], there exists a unique global axisymmetric smooth solution u^ǫ. Furthermore, since the Euler equations are invariant under rotations and translations, the vector field u^ǫ remains axisymmetric; in addition, its swirl component u^ǫ_θ vanishes, because its initial value u^ǫ_{0,θ} is zero. Finally, by the properties of the standard mollifier, it follows that

‖w^ǫ_0/r‖_{L^p(R^3)} ≤ ‖ρ_ǫ * (w^θ_0/r)‖_{L^p(R^3)} ≤ C ‖w^θ_0/r‖_{L^p(R^3)} for all p ∈ [1, ∞].

Furthermore, since w^ǫ/r satisfies the transport equation (2.5), applying (2.6) gives ‖w^ǫ/r‖_{L^1 ∩ L^p(R^3)} ≤ C.
This, together with Propositions 3.1 and 3.2, leads to (4.20).

As discussed in the introduction, in order to improve the index p from 6/5 to 1, it is necessary to employ compactness arguments on compact sets away from the axis of symmetry; strong convergence locally in the whole space is also essential. We therefore recall the following proposition, introduced by Jiu and Xin [15], which is essential in the proof of our main theorem.

Proposition 4.1 ([15]). For the approximate solutions {u^ǫ} constructed in Theorem 4.1, if there exists a subsequence {u^{ǫ_j}} ⊂ {u^ǫ} such that, for any Q ⊂⊂ R^3 \ {x ∈ R^3 | r = 0} and ǫ_j → 0,

u^{ǫ_j} → u strongly in L^2(0, T; L^2(Q)), (4.21)

then there exists a further subsequence of {u^{ǫ_j}}, still denoted by itself, such that, as ǫ_j → 0,

u^{ǫ_j} → u strongly in L^2(0, T; L^2_loc(R^3)). (4.22)

Thanks to Proposition 4.1, strong convergence on compact regions excluding the axis of symmetry suffices for passing to the limit. Therefore, with the help of the a priori estimates of Propositions 3.1 and 3.2, we can prove our main theorem as follows.

Proof of Theorem 1.1. With the help of Theorem 4.1, it is easy to obtain

‖u^ǫ‖_{L^∞(0,T; W^{1,p}(C_R × [−R,R]))} ≤ C(R).

Then, by making use of equation (1.1)_1, it is evident that

‖∂_t u^ǫ‖_{L^∞(0,T; W^{−1,p*}(C_R × [−R,R]))} ≤ C(R),

where p* = p/(p − 1). In addition, since |u| is a function of the variables r and z only, one can conclude that

‖u^ǫ‖_{L^∞(0,T; W^{1,p}([1/R, R] × [−R,R]; dr dz))} ≤ C(R).

Hence, by applying the Aubin-Lions lemma and the compact embedding W^{1,p}([1/R, R] × [−R, R]; dr dz) ↪ L^2([1/R, R] × [−R, R]; dr dz) for any p > 1, there exists a subsequence u^{ǫ_j} (depending on R) such that

u^{ǫ_j} → u in L^2(0, T; ([1/R, R] × [−R, R]; dr dz)).

Then, by a diagonal selection process, we can extract a subsequence of u^{ǫ_j} independent of R (still denoted by u^{ǫ_j}) such that

‖u^{ǫ_j} − u‖_{L^2(0,T; ([1/R, R] × [−R,R]; dr dz))} → 0 as ǫ_j → 0,

which also implies that

‖u^{ǫ_j} − u‖_{L^2(0,T; C_R × [−R,R])} → 0 as ǫ_j → 0.

So far we have proved that there exists an axisymmetric vector field u without swirl such that u^{ǫ_j} → u in L^2(0, T; Q) for any Q ⊂⊂ R^3 \ {x ∈ R^3 | r = 0}. Therefore, by Proposition 4.1, u^{ǫ_j} → u strongly in L^2(0, T; L^2_loc(R^3)).

The last step is to pass to the limit in the equations satisfied by u^ǫ. In fact, it suffices to show the convergence of the nonlinear term. Since u^{ǫ_j} → u strongly in L^2(0, T; L^2_loc(R^3)), it is clear that

∫_0^T ∫_{R^3} u^{ǫ_j} · ∇ϕ · u^{ǫ_j} dx dt → ∫_0^T ∫_{R^3} u · ∇ϕ · u dx dt

for any ϕ ∈ C^∞_0((0, T]; R^3). This shows that u is a weak solution of the 3D incompressible axisymmetric Euler equations without swirl in the sense of Definition 1.1.

References

A. Bronzi, M. Lopes Filho and H. Nussenzveig Lopes, Global existence of a weak solution of the incompressible Euler equations with helical symmetry and L^p vorticity, Indiana Univ. Math. J. 64 (2015), no. 1, 309-341.

L. Caffarelli, R. Kohn and L. Nirenberg, Partial regularity of suitable weak solutions of the Navier-Stokes equations, Comm. Pure Appl. Math. 35 (1982), no. 6, 771-831.
D. Chae and O. Y. Imanuvilov, Existence of axisymmetric weak solutions of the 3-D Euler equations for near-vortex-sheet initial data, Electron. J. Differential Equations 1998, No. 26, 17 pp.

D. Chae and N. Kim, Axisymmetric weak solutions of the 3-D Euler equations for incompressible fluid flows, Nonlinear Anal. 29 (1997), no. 12, 1393-1404.

R. Danchin, Axisymmetric incompressible flows with bounded vorticity, Russian Math. Surveys 62 (2007), no. 3, 475-496.

J. Delort, Existence of vortex sheets in dimension two, J. Amer. Math. Soc. 4 (1991), no. 3, 553-586.

R. DiPerna and A. Majda, Concentrations in regularizations for 2-D incompressible flow, Comm. Pure Appl. Math. 40 (1987), no. 3, 301-345.

R. DiPerna and A. Majda, Oscillations and concentrations in weak solutions of the incompressible fluid equations, Comm. Math. Phys. 108 (1987), no. 4, 667-689.

L. Evans and S. Müller, Hardy spaces and the two-dimensional Euler equations with nonnegative vorticity, J. Amer. Math. Soc. 7 (1994), no. 1, 199-219.

B. Ettinger and E. S. Titi, Global existence and uniqueness of weak solutions of three-dimensional Euler equations with helical symmetry in the absence of vorticity stretching, SIAM J. Math. Anal. 41 (2009), no. 1, 269-296.

S. Gang and X. Zhu, Axisymmetric solutions to the 3D Euler equations, Nonlinear Anal. 66 (2007), no. 9, 1938-1948.

Q. Jiu, J. Li and D. Niu, Existence of weak solutions to three-dimensional Euler equations under helical symmetry, preprint.

Q. Jiu, J. Wu and W. Yang, Viscous approximation and weak solutions of the 3D axisymmetric Euler equations, Math. Methods Appl. Sci. 38 (2015), no. 3, 548-558.

Q. Jiu and Z. Xin, Viscous approximations and decay rate of maximal vorticity function for 3-D axisymmetric Euler equations, Acta Math. Sin. (Engl. Ser.) 20 (2004), no. 3, 385-404.

Q. Jiu and Z. Xin, On strong convergence to 3-D axisymmetric vortex sheets, J. Differential Equations 223 (2006), no. 1, 33-50.

T. Kato, Nonstationary flows of viscous and ideal fluids in R^3, J. Functional Analysis 9 (1972), 296-305.

S. Leonardi, J. Malek, J. Necas and M. Pokorny, On axially symmetric flows in R^3, Z. Anal. Anwendungen 18 (1999), no. 3, 639-649.
J. Liu and W. Wang, Convergence analysis of the energy and helicity preserving scheme for axisymmetric flows, SIAM J. Numer. Anal. 44 (2006), no. 6, 2456-2480 (electronic).

J. Liu and Z. Xin, Convergence of vortex methods for weak solutions to the 2-D Euler equations with vortex sheet data, Comm. Pure Appl. Math. 48 (1995), no. 6, 611-628.

A. Majda and A. Bertozzi, Vorticity and Incompressible Flow, Cambridge Texts in Applied Mathematics, vol. 27, Cambridge University Press, Cambridge, UK, 2002.

A. Majda, Remarks on weak solutions for vortex sheets with a distinguished sign, Indiana Univ. Math. J. 42 (1993), no. 3, 921-939.

X. Saint Raymond, Remarks on axisymmetric solutions of the incompressible Euler system, Comm. Partial Differential Equations 19 (1994), no. 1-2, 321-334.

T. Shirota and T. Yanagisawa, Note on global existence for axially symmetric solutions of the Euler system, Proc. Japan Acad. Ser. A Math. Sci. 70 (1994), no. 10, 299-304.

W. Wolibner, Un théorème sur l'existence du mouvement plan d'un fluide parfait, homogène, incompressible, pendant un temps infiniment long, Math. Z. 37 (1933), 698-726.

V. Yudovich, Non-stationary flow of an ideal incompressible liquid, USSR Comput. Math. Math. Phys. 3 (1963), 1407-1456.

M. Ukhovskii and V. Yudovich, Axially symmetric flows of ideal and viscous fluids filling the whole space, J. Appl. Math. Mech. 32 (1968), 52-61.

(Quansen Jiu) School of Mathematical Sciences, Capital Normal University, Beijing, 100048, P. R. China. E-mail address: [email protected]

(Jitao Liu) College of Applied Sciences, Beijing University of Technology, Beijing, 100124, P. R. China. E-mail address: [email protected], [email protected]

(Dongjuan Niu) School of Mathematical Sciences, Capital Normal University, Beijing, 100048, P. R. China. E-mail address: [email protected]
{'fraction_non_alphanumeric': 0.10216249613839976, 'fraction_numerical': 0.03839975285758418, 'mean_word_length': 3.152257567983581, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 36, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 5, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 45, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'In this paper, we mainly discuss the three-dimensional incompressible axisymmetric Euler equations without swirl in R 3 . Specifically, we prove the global existence of weak solutions if the swirl component of initial vorticity w θ 0 satisfying that w θ 0 /r ∈ L 1 ∩ L p (R 3 ) for some p > 1. Our work improves the previous results by removing the assumption that the initial velocity fields u 0 ∈ L 2 (R 3 ).', 'arxivid': '1605.06740', 'author': ['Quansen Jiu ', 'Jitao Liu ', 'Dongjuan Niu '], 'authoraffiliation': [], 'corpusid': 119699303, 'doi': '10.1007/s00332-021-09687-4', 'github_urls': [], 'n_tokens_mistral': 12644, 'n_tokens_neox': 11030, 'n_words': 6586, 'pdfsha': '05d4bdbae166d8c86e9e3414ea944dcf1fa14f19', 'pdfurls': ['https://arxiv.org/pdf/1605.06740v2.pdf'], 'title': [], 'venue': []}
arxiv
The gluon contents of the η and η′ mesons

28 Oct 2003

P. Kroll ([email protected])
Fachbereich Physik, Universität Wuppertal, D-42097 Wuppertal, Germany

It is reported on a leading-twist analysis of the η−γ and η′−γ transition form factors. The analysis allows for an estimate of the lowest Gegenbauer coefficients of the quark and gluon distribution amplitudes.

One of the simplest exclusive observables is the form factor $F_{P\gamma^{(*)}}$ for the transition from a real or virtual photon to a pseudoscalar meson P. Its behaviour at large momentum transfer is determined by the expansion of a product of two electromagnetic currents about light-like distances. The form factor then factorizes [1] into a hard scattering amplitude and a soft matrix element, parameterized by a process-independent meson distribution amplitude $\Phi_P$. For space-like momentum transfer the form factor can be accessed in $e^+e^- \to e^+e^- P$. Such measurements have been carried through for quasi-real photons by CLEO [2] and L3 [3]. From the data on the form factors one may extract information about the meson distribution amplitudes by fitting the theoretical results to the experimental data. Here, in this talk, recent attempts [4,5] to perform such analyses to leading-twist NLO accuracy for the η and η′ mesons are reported. As the valence Fock components of the η and η′ mesons, $SU(3)_F$ singlet and octet combinations of quark-antiquark parton states are chosen:
$$|\bar qq\rangle_1 = \left(|\bar uu\rangle + |\bar dd\rangle + |\bar ss\rangle\right)/\sqrt{3}\,, \qquad |\bar qq\rangle_8 = \left(|\bar uu\rangle + |\bar dd\rangle - 2|\bar ss\rangle\right)/\sqrt{6}\,. \tag{1}$$
In addition, the two-gluon Fock state, $|gg\rangle$, is to be taken into account, which also possesses flavour-singlet quantum numbers and contributes to leading-twist order. Associated with each valence Fock component of the meson P is a distribution amplitude, denoted by $\Phi_{Pi}$ ($i = 1, 8$) and $\Phi_{Pg}$.
The distribution amplitudes possess Gegenbauer expansions [1]
$$\Phi_{Pi}(\xi,\mu_F) = \tfrac{3}{2}\,(1-\xi^2)\Big[1 + \sum_{n=2,4,\dots} B^{(i)}_{Pn}(\mu_F)\, C_n^{3/2}(\xi)\Big]\,, \qquad \Phi_{Pg}(\xi,\mu_F) = \tfrac{1}{16}\,(1-\xi^2)^2 \sum_{n=2,4,\dots} B^{(g)}_{Pn}(\mu_F)\, C_{n-1}^{5/2}(\xi)\,, \tag{2}$$
where $\xi = 2x-1$ and $x$ is the usual momentum fraction carried by the quark inside the meson. The Gegenbauer coefficients, $B_{Pn}$, which encode the soft physics, evolve with the factorization scale $\mu_F$ according to the relevant anomalous dimensions. The essential point is that the singlet and gluon coefficients mix under evolution,
$$B^{(1)}_{Pn}(\mu_F) \leftrightarrow B^{(g)}_{Pn}(\mu_F)\,, \tag{3}$$
and that all coefficients evolve to zero for asymptotically large factorization scales. Hence
$$\Phi_{Pi} \to \Phi_{AS} = \tfrac{3}{2}\,(1-\xi^2)\,, \qquad \Phi_{Pg} \to 0\,, \qquad \text{for } \mu_F \to \infty\,. \tag{4}$$
It is important to note that the gluon distribution amplitude goes along with the following projector of a state of two incoming collinear gluons (colours $a,b$, Lorentz indices $\mu,\nu$ and momentum fractions $x$, $1-x$) onto a pseudoscalar meson state:
$$P_g^{\mu\nu,ab} = \frac{i}{2}\,\sqrt{\frac{C_F}{n_f}}\,\frac{\delta^{ab}}{\sqrt{N_c^2-1}}\,\frac{\varepsilon_\perp^{\mu\nu}}{x(1-x)}\,. \tag{5}$$
The anomalous dimensions have to be normalized accordingly [5]. The components of the transverse polarization tensor are $\varepsilon_\perp^{12} = -\varepsilon_\perp^{21} = 1$ and zero for all others. The $\gamma^*(q,\mu)\,\gamma^{(*)}(q',\nu) \to P(p)$ vertex is parameterized as
$$\Gamma^{\mu\nu} = i e_0^2\, F_{P\gamma}(\bar Q,\omega)\, \varepsilon^{\mu\nu\alpha\beta} q_\alpha q'_\beta\,, \tag{6}$$
where $Q^2 = -q^2 \ge 0$, $Q'^2 = -q'^2 \ge 0$ and
$$\bar Q^2 = \tfrac{1}{2}\,(Q^2 + Q'^2)\,, \qquad \omega = \frac{Q^2 - Q'^2}{Q^2 + Q'^2}\,. \tag{7}$$
Due to Bose symmetry the transition form factor is symmetric in $\omega$. To leading-twist NLO accuracy the transition form factor reads ($P = \eta, \eta'$)
$$F_{P\gamma^*} = \frac{2}{3\sqrt{3}\,\bar Q^2} \int_{-1}^{1} \frac{d\xi}{1-\xi^2\omega^2} \left\{ \left[ \frac{f^{(8)}_P}{2\sqrt{2}}\, \Phi_{P8}(\xi,\mu_F) + f^{(1)}_P\, \Phi_{P1}(\xi,\mu_F) \right] \left[ 1 + \frac{\alpha_s(\mu_R)}{4\pi}\, K_q(\omega,\xi,\bar Q^2) \right] + f^{(1)}_P\, \Phi_{Pg}(\xi,\mu_F)\, \frac{\alpha_s(\mu_R)}{4\pi}\, K_g(\omega,\xi,\bar Q^2) \right\}. \tag{8}$$
The NLO hard scattering kernels, $K$, are calculated from the Feynman graphs shown in Fig. 1. The results, in the $\overline{\rm MS}$ scheme, can be found in the literature, see for instance [4,5,6].
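As a quick numerical cross-check, not part of the original analysis: at leading order and in the limit ω → 0, inserting the asymptotic distribution amplitudes Φ_AS of Eq. (4) into Eq. (8) should reproduce the closed form quoted below in Eq. (10), without the α_s correction. A minimal sketch with hypothetical helper names, using the η decay constants of Eq. (11) in units of f_π:

```python
import math

# Consistency check: leading order, omega -> 0 limit of Eq. (8) with the
# asymptotic DAs Phi_AS(xi) = 3/2 (1 - xi^2) and Phi_Pg = 0.

def phi_as(xi):
    # asymptotic quark distribution amplitude, Eq. (4)
    return 1.5 * (1.0 - xi ** 2)

def form_factor_lo(f8, f1, qbar2, n=4000):
    # trapezoidal integration of Eq. (8) at omega = 0, alpha_s -> 0
    h = 2.0 / n
    total = 0.0
    for k in range(n + 1):
        xi = -1.0 + k * h
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * (f8 / (2.0 * math.sqrt(2.0)) + f1) * phi_as(xi)
    return 2.0 / (3.0 * math.sqrt(3.0) * qbar2) * total * h

# eta decay constants from Eq. (11), in units of f_pi; Qbar^2 (GeV^2) is an
# arbitrary test point
f8, f1, qbar2 = 1.17, 0.19, 4.0
numeric = form_factor_lo(f8, f1, qbar2)
closed = (math.sqrt(2.0) / (3.0 * math.sqrt(3.0))
          * (f8 + 2.0 * math.sqrt(2.0) * f1) / qbar2)
assert abs(numeric - closed) < 1e-6
```

The agreement simply reflects that ∫ Φ_AS dξ = 2, so the octet and singlet prefactors of Eq. (8) combine into the parameter-free coefficient of Eq. (10).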
The decay constants, $f^{(i)}_P$, are defined by matrix elements of $SU(3)_F$ singlet and octet axial-vector currents:
$$\langle 0|J^{(i)}_{5\mu}|P(p)\rangle = i f^{(i)}_P\, p_\mu\,. \tag{9}$$
The singlet decay constant $f^{(1)}_P$ depends on the scale [7], but the anomalous dimension controlling it is of order $\alpha_s^2$. In a NLO calculation this effect is to be neglected for consistency. Note that the octet part of (8) also holds for the $\pi-\gamma$ form factor with the obvious replacements $\Phi_{P8} \to \Phi_\pi$, $f_P \to \sqrt{3} f_\pi$.

FIGURE 1. Sample Feynman graphs contributing to the transition form factors.

Of particular interest is the limit $\omega \to 0$. Inserting the Gegenbauer expansion (2) into (8), one finds that the Gegenbauer coefficients of the quark and gluon distribution amplitudes first appear at order $\omega^n$ [4]. Hence, one obtains the prediction
$$F_{P\gamma^*}(\bar Q^2,\omega) = \frac{\sqrt{2}}{3\sqrt{3}}\, \frac{f^{(8)}_P + 2\sqrt{2} f^{(1)}_P}{\bar Q^2} \left(1 - \frac{\alpha_s}{\pi}\right) + O(\omega^2, \alpha_s^2)\,. \tag{10}$$
Since the decay constants are known to amount to
$$f^{(8)}_\eta = 1.17 f_\pi\,, \quad f^{(1)}_\eta = 0.19 f_\pi\,, \quad f^{(8)}_{\eta'} = -0.46 f_\pi\,, \quad f^{(1)}_{\eta'} = 1.15 f_\pi\,, \tag{11}$$
with an accuracy of about 5% [8], (10) is a parameter-free prediction of QCD to leading-twist accuracy. Its theoretical status is comparable to that of the Bjorken sum rule [9],
$$\int_0^1 dx\, \big[g_1^p(x) - g_1^n(x)\big] = \frac{1}{6}\left|\frac{G_A}{G_V}\right| \left[1 - \frac{\alpha_s}{\pi} - 3.583\Big(\frac{\alpha_s}{\pi}\Big)^2 - 20.215\Big(\frac{\alpha_s}{\pi}\Big)^3 + \cdots \right], \tag{12}$$
and a few other observables, among which is the famous result for the cross-section ratio of $e^+e^-$ annihilation into hadrons and into a pair of muons. It is known [10] that the perturbative series of the transition form factors is identical to that of the Bjorken sum rule. The prediction (10) well deserves experimental verification, but there is no data as yet. The real-photon case, $\omega = 1$, is another interesting limit. Here data is available [2,3], from which information about the distribution amplitudes can be extracted. For the case of the pion such analyses have been carried through immediately after the advent of the CLEO data in Ref.
[11,12] and, recently, in much greater detail in [4]. The η and η′ data have been analyzed within the modified perturbative approach in [13] and to leading-twist NLO accuracy in [5]. Since the present quality of the data does not suffice to determine all six distribution amplitudes, one has to simplify matters and employ an η−η′ mixing scheme in order to reduce the number of free parameters. Since in hard processes only small spatial quark-antiquark separations are of relevance, it is sufficiently suggestive to embed the particle dependence and the mixing behaviour of the valence Fock components solely into the decay constants, which play the role of wave functions at the origin. Following [8,13], one may therefore take
$$\Phi_{Pi} = \Phi_i\,, \qquad \Phi_{Pg} = \Phi_g\,. \tag{13}$$
This assumption is further supported by the observation made in [13] that, as is the case for the pion [4,11,12], the quark distribution amplitudes are close to the asymptotic form, $\Phi_{AS}$, for which the particle independence (13) holds trivially. The analysis is further simplified by truncating the Gegenbauer series in (2) at $n = 2$. The coefficients $B_2^{(i)}$, one for each Fock component, parameterize the deviations from the asymptotic forms of the distribution amplitudes. Clearly, this is a serious assumption (note that to LO accuracy the transition form factors only fix the sum $1 + \sum B_n^{(i)}$), but in view of the large experimental errors as well as the limited range of momentum transfer in which data is available, one is forced to do so. Truncation at $n = 4$ does not lead to reliable results; all contributing Gegenbauer coefficients are highly correlated. A fit to the CLEO and L3 data provides
$$B^{(8)}_2(\mu_0) = -0.04 \pm 0.04\,, \qquad B^{(1)}_2(\mu_0) = -0.08 \pm 0.04\,, \qquad B^{(g)}_2(\mu_0) = 9 \pm 12\,, \tag{14}$$
where the following scales have been chosen: $\mu_0 = 1$ GeV, $\mu_F = \bar Q$, $\mu_R = \bar Q/\sqrt{2}$. The use of $\mu_F = \bar Q/\sqrt{2}$ instead leads to values of the Gegenbauer coefficients which agree with those quoted in (14) almost perfectly.
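The truncated expansion behind this fit can be illustrated numerically. The sketch below (hypothetical helper names, assuming the standard normalization ∫₀¹ Φ dx = 1) evaluates the quark distribution amplitude of Eq. (2) truncated at n = 2 with the fitted coefficients of Eq. (14), and checks that the C₂^{3/2} term leaves the normalization untouched, as orthogonality against the weight (1 − ξ²) requires:

```python
# Sketch of the n = 2 truncated Gegenbauer expansion of Eq. (2).
# C_2^{3/2}(xi) = (15 xi^2 - 3) / 2 is the relevant Gegenbauer polynomial.

def gegenbauer_c2_32(xi):
    return (15.0 * xi ** 2 - 3.0) / 2.0

def phi_quark(xi, b2):
    """Phi(xi) = 3/2 (1 - xi^2) [1 + B2 C_2^{3/2}(xi)], truncated at n = 2."""
    return 1.5 * (1.0 - xi ** 2) * (1.0 + b2 * gegenbauer_c2_32(xi))

def norm(b2, n=2000):
    # (1/2) * integral_{-1}^{1} Phi(xi) dxi via the trapezoidal rule;
    # the factor 1/2 converts dxi to dx with xi = 2x - 1.
    h = 2.0 / n
    s = 0.5 * (phi_quark(-1.0, b2) + phi_quark(1.0, b2))
    for k in range(1, n):
        s += phi_quark(-1.0 + k * h, b2)
    return 0.5 * s * h

# The C_2 term integrates to zero against the weight (1 - xi^2), so the
# normalization is unity for any B2 -- checked for the fitted values (14).
for b2 in (0.0, -0.08, -0.04):
    print(b2, round(norm(b2), 6))
```

This makes explicit why the fit constrains only the shape, not the normalization, of the distribution amplitudes.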
For comparison, $B_2^\pi$ takes a value of $-0.06 \pm 0.03$, as determined in [4]. The fit is compared to the data in Fig. 2. The insensitivity of the η−γ transition form factor to the gluonic distribution amplitude is clearly seen, which comes about as a consequence of the smallness of $f^{(1)}_\eta$, see (11). Although the present data are compatible with a leading-twist analysis, as Fig. 2 shows, the existence of power and/or higher-twist corrections cannot be excluded. This is a source of theoretical uncertainty in the results (14). Thus, for instance, the use of the modified perturbative approach, in which quark transverse degrees of freedom and Sudakov suppressions are taken into account, leads to good agreement with experiment for the asymptotic distribution amplitudes [13]. Within errors, the quark Gegenbauer coefficients for the octet and singlet cases agree with each other and with the pion one. This implies not only approximate flavour symmetry but also the approximate validity of the OZI rule, which is a prerequisite of the quark-flavour mixing scheme advocated in [8]. Although the face value of $B_2^{(g)}$ is huge compared to that of $B_2^{(1)}$, the gluonic distribution amplitude itself is not large, as can be seen from Fig. 2; its $x \leftrightarrow 1-x$ asymmetry and the numerical factors in (2) keep it small. Moreover, since it only contributes at NLO, its impact on the transition form factors is small, resulting in large errors. In order to obtain more precise information on the gluonic distribution amplitude, additional constraints from other reactions are required. The inclusive decay $\Upsilon(1S) \to \eta' X$, discussed in [14], is one such possibility. Others are, e.g., $B \to \pi\eta'$ or $\chi_{cJ} \to \eta'\eta'$. Finally, it is to be emphasized that the approach presented in this article applies to all flavour-neutral pseudoscalar mesons, e.g. the η(1400). The properties of the valence distribution amplitudes (2) make it unlikely that a pseudoscalar meson possesses pure glueball properties.
A substantial $\bar qq$ Fock component is always there. For flavour-neutral scalar mesons, on the other hand, the situation is different. The properties of the quark and gluon distribution amplitudes are reversed [15]. A strong $gg$ Fock component is therefore not necessarily accompanied by a strong $\bar qq$ one.

FIGURE 2. Left: the scaled Pγ transition form factor vs. Q². Data taken from [2,3]; rhombs represent the η′ data, squares the η ones. Right: the flavour-singlet and gluon distribution amplitudes at the scale μ₀ = 1 GeV.

[1] G. P. Lepage and S. J. Brodsky, Phys. Rev. D 22, 2157 (1980).
[2] J. Gronberg et al. [CLEO Collaboration], Phys. Rev. D 57, 33 (1998) [hep-ex/9707031].
[3] M. Acciarri et al. [L3 Collaboration], Phys. Lett. B 418, 399 (1998).
[4] M. Diehl, P. Kroll and C. Vogt, Eur. Phys. J. C 22, 439 (2001) [hep-ph/0108220].
[5] P. Kroll and K. Passek-Kumericki, Phys. Rev. D 67, 054017 (2003) [hep-ph/0210045].
[6] F. del Aguila and M. K. Chase, Nucl. Phys. B 193, 517 (1981); E. Braaten, Phys. Rev. D 28, 524 (1983); E. P. Kadantseva, S. V. Mikhailov and A. V. Radyushkin, Yad. Fiz. 44, 507 (1986) [Sov. J. Nucl. Phys. 44, 326 (1986)].
[7] R. Kaiser and H. Leutwyler, Eur. Phys. J. C 17, 623 (2000) [hep-ph/0007101].
[8] T. Feldmann, P. Kroll and B. Stech, Phys. Rev. D 58, 114006 (1998) [hep-ph/9802409]; Phys. Lett. B 449, 339 (1999) [hep-ph/9812269].
[9] D. J. Broadhurst and A. L. Kataev, Phys. Lett. B 544, 154 (2002) [hep-ph/0207261].
[10] B. Melic, D. Müller and K. Passek-Kumericki, Phys. Rev. D 68, 014013 (2003) [hep-ph/0212346].
[11] P. Kroll and M. Raulfs, Phys. Lett. B 387, 848 (1996) [hep-ph/9605264].
[12] I. V. Musatov and A. V. Radyushkin, Phys. Rev. D 56, 2713 (1997) [hep-ph/9702443].
[13] T. Feldmann and P. Kroll, Eur. Phys. J. C 5, 327 (1998) [hep-ph/9711231].
[14] A. Ali and A. Ya. Parkhomenko, Eur. Phys. J. C 30, 367 (2003) [hep-ph/0307092].
[15] M. K. Chase, Nucl. Phys. B 174, 109 (1980); V. N. Baier and A. G. Grozin, Nucl. Phys. B 192, 476 (1981).
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015

Temporal Action Segmentation: An Analysis of Modern Techniques

Index Terms: Temporal Action Segmentation, Video Representation, Temporal & Sequential Modeling, Literature Survey

Temporal action segmentation (TAS) from videos aims at densely identifying video frames in minutes-long videos with multiple action classes. As a long-range video understanding task, researchers have developed an extended collection of methods and examined their performance using various benchmarks. Despite the rapid growth of TAS techniques in recent years, no systematic survey has been conducted in these sectors. In this survey, we analyze and summarize the most significant contributions and trends to this endeavor. In particular, we first examine the task definition, common benchmarks, types of supervision, and prevalent evaluation measures. In addition, we systematically investigate two essential techniques of this topic, i.e., frame representation and temporal modeling, which have been studied extensively in the literature. We then conduct a thorough review of existing TAS works categorized by their levels of supervision and conclude our survey by identifying and emphasizing several research gaps. In addition, we have curated a list of TAS resources, which is available at https://github.com/atlas-eccv22/awesome-temporal-action-segmentation.

Related Tasks
There are several tasks in video understanding that are closely related to TAS. They can be distinguished from TAS based on their data domain, the identification of segment semantics, and the reasoning about temporal dynamics between segments. The related tasks are described below and compared in Tab. 1.
Temporal Action Detection / Localization (TAD/L) [15], [16] detects the start and end of action instances and predicts semantic labels simultaneously.
TAD/L works more with general videos that allow overlap between actions, while TAS works with procedural videos to find the change points between actions.
Sequence Segmentation (SS) is popular in other domains, including motion capture data [17], [18], [19] and audio signals [20]. Most approaches are developed to segment individual sequences [17], [18], [19], while some [21] focus on multiple motion capture recordings simultaneously. However, such data is lower-dimensional and exhibits much less variance than video.
Key-Frame Detection (KFD) identifies single characteristic frames or key-steps [22], [23], [24], [25] for actions. Like TAS, KFD requires modeling the temporal relations between actions; however, finding the boundaries where actions transition is out of its task scope.
Complex Activity Classification (CAC) [37], [38] targets classifying the complex activity of procedural videos. Such a task is similar to TAS in the way it models the temporal relations of actions. Still, it is not concerned with individual frames, as CAC aims to determine the complex activity class of the full action sequence.
Generic Event Boundary Detection (GEBD) [39] localizes the moments that humans perceive as event boundaries. The boundaries signify changes in action, subject, and environment. In comparison, GEBD does not work with segment semantics.

INTRODUCTION
Temporal action segmentation (TAS) is a video understanding task that segments in time a temporally untrimmed video sequence. Each segment is labeled with one of a finite set of pre-defined action labels (see Fig. 1 for a visual illustration). This task is a 1D temporal analogue to the more established semantic segmentation [1], replacing pixel-wise semantic labels with frame-wise action labels.
Automatically segmenting untrimmed video sequences helps to understand what actions are being performed, when they started, how far they have progressed, what transformations they bring to the environment, and what people will do next. It also enables diverse downstream applications, such as video security or surveillance systems, assistive technologies, and human-robot interactions. This survey first introduces the techniques required to understand the task, followed by a comprehensive overview of recent TAS methods.
In computer vision, action recognition is the hallmark task for video understanding. In action recognition, pre-trimmed video clips of a few seconds are classified with single semantic labels. State-of-the-art methods [2], [3], [4] can distinguish hundreds of classes. However, classifying pre-trimmed clips is a highly limiting case, as the video feeds of surveillance systems, autonomous vehicles, and other real-world systems occur in streams. The individual actions or events are related and may span well beyond a few seconds. As a result, standard action recognition approaches are not directly applicable.
TAS methods target untrimmed video sequences, as opposed to action recognition on pre-trimmed video clips. The videos portray a series of multiple actions, which typically span several minutes. A common "making coffee" procedural video may include the following steps: 'take cup', 'pour coffee', 'pour sugar', 'pour milk', and 'stir coffee'. In the domain of procedural videos, the prevalent word for the overall procedure is (complex) activity, whereas the composing steps are actions. Importantly, the steps often adhere to only a loose temporal ordering: permuting some actions in time ('pour coffee' and 'pour milk') or omitting certain actions ('pour sugar') still accomplishes the goal. An effective TAS model must be able to utilize sequential information to appropriately determine action boundaries.
This leads to two considerations: discriminative frame-level representations and long-range temporal modeling. Frame-level representations should capture both static and dynamic visual information for discrimination. Furthermore, the sequential dynamics of actions should be well captured. The ordering characteristics of actions raise a fundamental question: how should temporal or sequential relationships be modeled to account for action repetition, duration, and order variations? Therefore, this survey identifies the aforementioned two aspects as the essential techniques for the TAS task and provides respective in-depth analyses.
Several surveys exist on activity understanding in videos, though their focus is primarily on action recognition [5], [6], temporal action localization [7], [8], action anticipation [9], [10], etc. To the best of our knowledge, this is the first survey to provide a thorough overview of TAS works.
Contributions
In addition to categorizing existing works, we propose a taxonomy that emphasizes their contributions. Additionally, we compare the existing datasets by analyzing their characteristics. In doing so, we present two metrics, i.e., repetition and order variation scores, which characterize the temporal dynamics of actions and demonstrate that the majority of these datasets are limited in action repetition and order variation. We further distinguish several performance evaluation and comparison settings. We provide a standardized evaluation setup for unsupervised segmentation methods and a class-based evaluation metric emphasizing the long-tail distribution. Lastly, we present a handful of intriguing future areas and problems for the community to investigate.
Survey Structure
Fig. 2 outlines a taxonomy of the TAS task and the structure of this survey. Section 2 provides a formal task description and compares it with other related tasks. Sections 3 and 4 compare the benchmarks, forms of supervision, evaluation metrics, and settings, respectively.
Section 5 delves into how frames are embedded and embellished, summarizing the widespread usage of handcrafted models or deep learning backbones for feature extraction. Section 6 outlines the temporal and sequential modeling techniques employed in TAS. Sections 7 to 10 provide a comprehensively curated list of TAS approaches grouped according to the type of supervision. Finally, Section 11 concludes the survey by discussing challenges and future research directions.

TEMPORAL ACTION SEGMENTATION
Task Description
Temporal action segmentation aims to segment a temporally untrimmed video by time and label each segmented part with a pre-defined action label [11]. Formally, given a video x = (x_1, x_2, ..., x_T) of length T with N actions, TAS methods produce the following output:

s_{1:N} = (s_1, s_2, ..., s_N),   (1)

where s_n = (c_n, l_n) represents a continuous video segment of length l_n that has the semantic label c_n out of C pre-defined categories. The task can also be regarded as a 1D version of semantic (image) segmentation, and can analogously be formulated as a frame-wise action classification, i.e.,

y_{1:T} = (y_1, y_2, ..., y_T),   (2)

where y_t is the action label of frame t. The segment formulation is commonly used in works that predict the most probable sequence of actions [12], [13], while the latter, frame-wise formulation is popular with deep learning-based methods [14]. The two formulations, however, are equivalent and one can easily reconstruct one from the other.
TAS has its own unique position in the task landscape. Tab. 1 differentiates the tasks based on whether they involve Temporal Relation between action instances, Boundary Localization of actions, Segment Semantic understanding, and the Data Domain. (In the extraction only the Data Domain column is recoverable: TAS - video; TAD/L - video; SS - audio, motion; KFD - video, text; CAC - video; GEBD - video.)
TABLE 2: Comparison of procedural activity datasets in chronological order.
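The equivalence of the segment-level and frame-wise formulations can be sketched in a few lines; the helper names below are illustrative, not from the survey:

```python
# Illustrative helpers: converting between the segment-level output
# (c_n, l_n) of Eq. (1) and the frame-wise labels y_1..y_T of Eq. (2).

def segments_to_frames(segments):
    """Expand [(label, length), ...] into a per-frame label list."""
    return [c for (c, l) in segments for _ in range(l)]

def frames_to_segments(frames):
    """Collapse per-frame labels back into (label, length) segments."""
    segments = []
    for y in frames:
        if segments and segments[-1][0] == y:
            segments[-1][1] += 1
        else:
            segments.append([y, 1])
    return [(c, l) for c, l in segments]

# round trip on a toy "making coffee" sequence
segs = [("take cup", 3), ("pour coffee", 5), ("stir coffee", 2)]
frames = segments_to_frames(segs)
assert len(frames) == 10
assert frames_to_segments(frames) == segs
```

Note that the round trip is only lossless because segments are, by definition, maximal runs of a single label.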
The first group of datasets is recorded, while the second group is from online media platforms, e.g., YouTube. We report the duration, the number of videos (# Videos), segments (# Segments), procedural activities (# Activity), actions (# Action), and the domain for each dataset.

DATASETS
Core Datasets
Datasets used for TAS usually feature procedural activities. Actors execute a sequence of actions, in some order, to arrive at a goal, such as making a dish or assembling some furniture. Such datasets are annotated with action segments' start and end boundaries and action labels. The datasets commonly used in TAS works are described as follows.
GTEA [26] contains 28 videos of seven procedural activities recorded in a single kitchen. The videos are recorded with a camera mounted on a cap worn by four participants.
50Salads [27] is composed of 50 recorded videos of 25 participants making two different mixed salads. The videos are captured by a camera with a top-down view of the work surface. The participants are provided with recipe steps, which are randomly sampled from a statistical recipe model.
Breakfast Actions [28] targets recording videos "in the wild", in 18 different kitchens, as opposed to the controlled lab environments of the previous datasets [26], [27]. The dataset features 52 participants performing ten breakfast-related activities and is recorded with 3 to 5 cameras, all from the third-person point of view.
YouTube Instructional [34] is a collected dataset and includes five instructional activities. There are 30 videos for each activity. This dataset is mainly used for unsupervised segmentation.
Assembly101 [32] is a collected dataset where 53 participants are asked to disassemble and assemble take-apart toys without being given any instructions. The dataset is annotated with fine-grained hand-object interactions and coarse action labels. The authors evaluate their dataset for TAS using the coarse labels.
Related Datasets
There are several other long-range procedural activity datasets. In this section, we present these datasets and explain why it is challenging or not preferable to explore temporal segmentation on them.
Epic-Kitchens [40] is a large-scale egocentric dataset with 100 hours of recording. Videos last 1-55 minutes. Although it comprises long-range videos, its overlapping segments and fine-grained action labels may make it unsuitable for segmentation.
Ikea ASM [30] includes videos of the assembly of four types of IKEA furniture and annotates fine-grained actions.
Meccano [31] is a recorded dataset of 20 people assembling a toy motorbike, featuring only fine-grained actions.
HA4M [33] documents 41 subjects constructing an epicyclic gear train with 12 actions, captured by multi-modal sensors. Despite the absence of TAS benchmarks on this dataset, it remains a viable option for TAS.
YouCookII [35] is collected from YouTube. Each video, all from cooking recipes, is annotated with the temporal boundaries of the recipe steps and their textual descriptions, but no action labels that could be used in TAS works.
CrossTask [22] is a YouTube collection of 18 primary tasks with temporal location annotations and 65 related tasks without any temporal annotations. CrossTask is used to assess unsupervised segmentation algorithms, but not supervised or weakly supervised ones.
COIN [36] is a dataset from YouTube with 180 diverse tasks from twelve domains. The typical video has four segments, making the sequence dynamics less interesting.
Dataset Comparison & Discussion
Tab. 2 makes a detailed comparison of the procedural video datasets. The datasets are divided according to their source, either recorded or collected from online platforms, as well as their scale, number of actions, and viewpoint. Recorded datasets have either third-person or egocentric views. Datasets with a third-person view only contain constrained backgrounds and focus more on the foreground action.
However, they may suffer from occlusions of actions due to the fixed view of the cameras. While the egocentric view is better at capturing the objects and tools needed to recognize hand-object interactions, the camera motion poses extra challenges to action recognition. The Epic-Kitchens dataset [29] is the largest egocentric vision dataset capturing untrimmed daily activities. There are also several procedural activity datasets with an egocentric view, e.g., [26], [41], but on a much smaller scale. Breakfast [28] contains recordings with multiple third-person views. Only Assembly101 [32] provides both egocentric and third-person views.
Sourcing videos from online platforms is a convenient way to build up large-scale and varied datasets [22], [35], [36], [42], [43]. Such datasets are useful for training offline retrieval systems, but may not be applicable to real-time scenarios, as the videos are edited, e.g., with fast-forwarding, annotated frames, or changing viewpoints.
Given the increasing number of "how to" videos, procedural activity understanding is a particularly interesting topic for research. Moreover, it can have a significant impact on real-time intelligent systems that assist with various tasks. Yet, the diversity of existing activity datasets is rather limited. With a few recent exceptions [30], [31], almost all the recorded datasets cover only cooking activities, and these datasets are small in scale. At the moment, Assembly101 [32] is the only dataset that offers large-scale data beyond the kitchen domain.
Background Frames
Some videos feature task-irrelevant segments. For example, the actor may introduce tools or perform alternative ways to complete an action. Such 'background frames' occur at arbitrary locations with varying lengths, and are common in datasets collected from YouTube, such as YouTube Instructional [34], YouCookII [35], and CrossTask [22].
In most existing TAS works, the background class is treated equivalently to the other action classes and used for training and evaluation.
Temporal Dynamics
A defining characteristic of the TAS task is the sequential relationship between the actions. To better understand and characterize the sequence dynamics, we propose a repetition score and an order variation score and compare the core datasets according to these scores in Tab. 3.
The repetition score r reflects the extent of repeated actions and is formulated as

r = 1 - u/g,   (3)

where u is the number of unique actions in one video instance and g is the total number of actions; r falls in the range [0, 1). A score of 0 indicates no repetition, and the closer the score is to 1, the more repetition occurs in the sequence.
The order variation score v is defined via the average edit distance, e(R, G), between every pair of sequences (R, G), normalized with respect to the maximum sequence length of the two:

v = 1 - e(R, G)/max(|R|, |G|).   (4)

This score has a range of [0, 1]; 1 corresponds to no deviations in ordering between pairs. Scores close to 1 indicate that actions follow a strict ordering, making it less necessary
[Fig. 3: action frequency on Breakfast [28], sorted in descending order. 'SIL' indicates the 'background', where no action of interest occurs.]
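A minimal implementation of the two dataset statistics (helper names are illustrative, not from the survey): the repetition score of Eq. (3), and the order variation score of Eq. (4), with e(R, G) taken here as the standard Levenshtein distance over action sequences and averaged over all sequence pairs for a dataset-level value:

```python
from itertools import combinations

# Sketch of the repetition score, Eq. (3), and order variation score, Eq. (4).

def repetition_score(actions):
    """r = 1 - u/g: u unique actions, g total actions in one video."""
    return 1.0 - len(set(actions)) / len(actions)

def edit_distance(a, b):
    # single-row Levenshtein distance over two action sequences
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[-1]

def order_variation_score(R, G):
    """v = 1 - e(R, G) / max(|R|, |G|) for one pair of sequences."""
    return 1.0 - edit_distance(R, G) / max(len(R), len(G))

def dataset_order_variation(sequences):
    # average v over every pair of action sequences in a dataset
    pairs = list(combinations(sequences, 2))
    return sum(order_variation_score(R, G) for R, G in pairs) / len(pairs)
```

A strictly ordered dataset yields pairwise scores of 1.0, while datasets such as 50Salads, where items can be added in any order, yield lower values.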
The head class 'fry pancake' is 639× more frequent than the tail class 'take butter'. On the other hand, a lower score, like in 50Salads, indicates the highest amount of ordering variation, as salad items can be added in any order, making modeling the temporal relations between actions less beneficial. Assembly101 positions itself as a challenging and interesting benchmark for modeling the sequence relations between actions. As indicated in Tab. 3, Assembly101's order variation score sits between Breakfast and 50Salads, and it includes 1.6× and 2.3× more repeated steps than the two datasets, respectively. Action Frequency The long-tailed action frequency is an overlooked aspect of the per-frame classification formulation (Eq. (2)) of TAS. In procedural videos, it is natural that some actions require a longer time to perform than others, e.g., 'fry egg' is considerably more time-consuming than 'crack egg'. We calculate the action frequency as the proportion each action label takes in the whole dataset, i.e., n_c / Σ_{i=1}^{C} n_i, where n_c is the number of frames with label c. Fig. 3 illustrates the action frequency on Breakfast [28], which depicts the imbalanced frequencies across actions. A commonly adopted value to indicate the skewness of the action distribution is the imbalance ratio (IR) [44], [45]. IR is defined as the ratio between the number of frames in the head and the tail classes (n_1/n_K), with classes sorted in decreasing order of cardinality (i.e., if i_1 > i_2, then n_{i_1} ≥ n_{i_2}, and n_1 ≫ n_K). Tab. 4 shows that 50Salads [27] has the smallest IR value of 6, marking a less imbalanced scenario between actions, while Assembly101 [32] is highly skewed in action frequencies with an IR of 2604. Such a long-tailed nature of the datasets poses extra challenges to the TAS task. SUPERVISION AND EVALUATION Task Supervision Like many other computer vision tasks, TAS has been investigated under different forms of supervision. Tab.
5 lists the forms of supervision in descending order of annotation effort, i.e., from fully supervised to unsupervised. Tab. 5: Comparison of supervisory signals and evaluation prerequisites in TAS. Full and semi-supervision, as well as single-frame under weak supervision, provide labels for video frames, and their performance is directly evaluated on model outputs without any pre-steps. Action list or set supervision does not provide exemplars, and its evaluation is based on the best-matched action list filtered with the maximum sequence posterior per test video. The activity label setting uses video-level labels all at once, while the unsupervised setting uses them one at a time; Hungarian matching is required for both before evaluation. The columns of Tab. 5 are: Full (Section 7) | Semi (Section 10) | Weak (Section 8): Single-frame, Action List/Set, Activity Label | Unsupervised (Section 9). A fully-supervised setting provides dense action labels for every frame in training video sequences [14], [46]. Dense labels are the most time-consuming to collect per video sequence, as they require the annotator to view the entire video. A semi-supervised [47], [48] setting reduces the annotation effort proportionally by densely annotating a subset of the videos while treating the remaining videos as unlabeled samples. Weak labels require less annotation effort than dense video labels. Weak labels from the literature include single-frames [49], [50], action lists or action sets [12], [13], and activity labels [51]. Single-frames, or timestamp annotations, are sparsely labeled frames, and can be viewed as an ordered list of actions associated with exemplar frames. Removing the exemplar frames forms an action list, which is also referred to as the action transcript. Further removing the ordering of actions and the repetitive entries from the action list leads to the even weaker action set. The above-listed weak labels are all based on action-level annotations.
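As a small illustration of this hierarchy (with hypothetical action names), the progressively weaker action-level annotations can be derived mechanically from dense frame labels:

```python
from itertools import groupby

# Dense (fully-supervised) labels: one action label per frame (hypothetical).
dense = ["pour", "pour", "stir", "stir", "stir", "pour", "crack"]

# Action list (transcript): ordered actions, consecutive duplicates collapsed.
transcript = [label for label, _ in groupby(dense)]
# -> ["pour", "stir", "pour", "crack"]

# Action set: ordering and repetitions removed (the weakest action-level label).
action_set = sorted(set(dense))
# -> ["crack", "pour", "stir"]

print(transcript, action_set)
```

Timestamp supervision would correspond to keeping the transcript together with one exemplar frame index per entry.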
Recently, video-level complex activity labels [51] have been used to supervise the TAS task; this is the weakest supervision, because no action-level information is provided. The unsupervised setting in TAS in previous works [52], [53], [54] considers collections of videos that perform the same activity. In this regard, it is not label-free, as it requires the activity label to form the video collections. The unsupervised setting is thus comparable with weak activity label supervision in terms of label information. However, the two settings differ in how the collections of videos are processed during training. Formally, unsupervised methods work with one group of same-activity videos at a time, while activity label supervision works with videos from all activities simultaneously. Evaluation Measures Three commonly adopted evaluation metrics in TAS are Mean over Frames (MoF), Edit Score, and F1-score. The first is a frame-based measure (see Section 4.2.1), while the latter two are segment-based measures (see Section 4.2.2). All three metrics are used in full, weak, and semi-supervised settings. For unsupervised settings, only F1 and MoF are reported in the literature. By definition, the evaluation of unsupervised works is conditioned on the association between clusters and semantic labels. The Hungarian matching algorithm has been adopted to enable the evaluation by mapping learned frame clusters to semantic labels, which we elaborate on in Section 4.4. Frame-Based Measures Mean over Frames (MoF), also referred to as frame-wise Accuracy (Acc), is defined as the fraction of the model's correctly predicted frames: MoF = (# of correct frames) / (# of all frames). (5) The Acc metric is problematic when the action frame distribution is imbalanced, which is true for most datasets, as dominating (long) action classes can have a strong impact on the value. The imbalance discussed in Section 3.6 makes this more likely to happen.
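A minimal sketch of Eq. (5) and its sensitivity to imbalance (hypothetical labels; not from any dataset):

```python
def mof(pred, gt):
    """Mean over Frames: fraction of correctly predicted frames (Eq. (5))."""
    assert len(pred) == len(gt)
    return sum(p == g for p, g in zip(pred, gt)) / len(gt)

# A dominating 'long' action: 9 of 10 frames belong to the head class 'a'.
gt   = ["a"] * 9 + ["b"]
pred = ["a"] * 10          # trivially predicting the head class everywhere

print(mof(pred, gt))       # 0.9 despite the tail class being missed entirely
```

The example shows how a degenerate predictor can still score high MoF on a skewed label distribution.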
This also implies that models achieving similar accuracy may have large qualitative differences. Therefore, a class-averaged accuracy metric would help better interpret model performance, e.g., mMoF = (1/|C|) Σ_c MoF(c), (6) where MoF(c) is the frame accuracy for class c. Another drawback of MoF is that its per-frame calculation cannot reflect the segmental quality. The MoF score can be high even when the segmentation results are fragmented. Such division of a durative action into many discontinuous sub-segments is referred to as over-segmentation. Over-segmentation can, however, be evaluated by segment-based measures, as introduced next. Segment-Based Measures The segment-based F1-score [46] and Edit Score [55] are evaluation metrics that focus more on segment-level errors. The F1-score, or F1@τ [46], compares the Intersection over Union (IoU) of each segment with respect to the corresponding ground truth against a threshold τ/100. A segment is considered a true positive if its score with respect to the ground truth exceeds the threshold. If there is more than one correct segment within the span of a single ground truth action, then only one is considered a true positive and the others are marked as false positives. Based on the true and false positives as well as the false negatives (missed segments), one can compute the precision and recall and blend the two into the harmonic mean to get F1 = 2 · (precision · recall) / (precision + recall). (7) Normally, τ values are set to {10, 25, 50}. The Edit Score [55] quantifies the similarity of two sequences. It is based on the Levenshtein distance and tallies the minimum number of insertion, deletion, and replacement operations required to convert one segment sequence into another. Denoting by X and Y the ordered lists of predicted and ground-truth action segments, the accumulated distance e is defined as: e[i, j] = max(i, j) if min(i, j) = 0, and e[i, j] = min( e[i−1, j] + 1, e[i, j−1] + 1, e[i−1, j−1] + 1(X_i ≠ Y_j) ) otherwise,
(8) where i ∈ {1, . . . , |X|} and j ∈ {1, . . . , |Y|} are indices into X and Y, respectively, and 1(·) is the indicator function. The recurrence can be solved efficiently by dynamic programming. The Edit Score is then normalized by the maximum length of the two sequences and is computed as: Edit = (1 − e(X, Y)/max(|X|, |Y|)) · 100. (9) This metric measures how well a model predicts the action segment ordering without requiring exact frame-wise correspondence to the ground truth. Given a TAS model that outputs frame-wise action probability scores, its performance is directly evaluated by taking the frame-wise predictions and comparing them with the corresponding ground truth labels to compute the three scores defined above. Weakly-Supervised Evaluation For model inference on the test set, where no reference action list or set is available, Richard et al. [13] presume that the set of actions appearing in a test video overlaps with that of the training set, i.e., there is at least one training video sharing the same ground truth action set as the test video. Similarly, Li and Todorovic [56] follow [13] and use Monte Carlo sampling of potential action sequences but discard candidate sequences that do not include all actions in that video. Out of all K sampled candidates, the action sequence that gives the maximum posterior modeled by a Hidden Markov Model (HMM, Eq. (15)) is selected as the final solution. A detailed description of the posterior estimation is provided in Section 6.2.1. Unsupervised Evaluation Without some correlation between the estimated agnostic segments and the ground truth actions, the evaluation metrics in Section 4.2 are not directly applicable to the unsupervised scenario. The Hungarian matching algorithm [57] is a combinatorial algorithm for finding a maximum-weight matching in a bipartite graph, and it has been widely utilized for evaluating unsupervised clustering tasks [58], [59].
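The Edit Score of Eqs. (8) and (9) can be transcribed almost directly (a sketch with hypothetical segment labels, not the reference implementation):

```python
def levenshtein(X, Y):
    """Accumulated distance e of Eq. (8), computed by dynamic programming."""
    e = [[0] * (len(Y) + 1) for _ in range(len(X) + 1)]
    for i in range(len(X) + 1):
        for j in range(len(Y) + 1):
            if min(i, j) == 0:
                e[i][j] = max(i, j)
            else:
                e[i][j] = min(e[i - 1][j] + 1,                         # deletion
                              e[i][j - 1] + 1,                         # insertion
                              e[i - 1][j - 1] + (X[i - 1] != Y[j - 1]))  # substitution
    return e[len(X)][len(Y)]

def edit_score(pred_segments, gt_segments):
    """Edit Score of Eq. (9), in [0, 100]."""
    e = levenshtein(pred_segments, gt_segments)
    return (1 - e / max(len(pred_segments), len(gt_segments))) * 100

# One spurious 'stir' segment costs a single deletion.
print(edit_score(["pour", "stir", "crack"], ["pour", "crack"]))  # 66.66...
```

Note that the inputs are ordered segment label sequences, not frame sequences, which is what makes the metric insensitive to segment durations.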
In unsupervised TAS, Hungarian matching links the given frames X of N clusters to the action label corpus Y of M classes by finding the best matching A ∈ {0, 1}^{N×M}: Â = arg max_A Σ_{n,m} A_{n,m} · I(X_n, Y_m), s.t. |A| = min(N, M), (10) where X_n denotes the frames belonging to cluster n, Y_m denotes the frames with action label m, and A_{n,m} indicates whether cluster n is matched to label m. Video-level matching [53], [60] matches the clusters with respect to the ground truth actions of a single video. This matching evaluates the ability of a model to segment a video sequence into distinct actions, and it produces the best performance given that it is done per video. As shown in Fig. 4(a), within each matching scope, the Hungarian matching is agnostic of the possible association of actions across videos (Fig. 4: (a) video-level, (b) activity-level, (c) global-level). Activity-level matching associates clusters to labels within each complex activity. Most unsupervised works [52], [61], [62] follow this level of matching, i.e., they process videos from the same activity. As shown in Fig. 4(b), the activity level of grouping leads to the assignment changes denoted by colored arrows. Lastly, global-level matching is performed on the entire dataset. This is the most challenging setting, as both intra- and inter-activity matching must be considered. It is noteworthy that [61] report different 'global' matching results across complex activities, as their setting does not consider actions shared across complex activities. The various scopes of Hungarian matching correspond to distinct learning objectives of a TAS model; the greater the scope, the more difficult the task. Video-level matching simply sets the requirement of differentiating actions within a video, i.e., intra-video action discrimination. For video-level matching, Intra-Video Discrimination of actions is sufficient. To enable an activity-level matching, the model needs to take into consideration Intra-Activity Association.
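For a handful of clusters and classes, the maximum-weight assignment of Eq. (10) can be found by brute force, as sketched below; in practice the Hungarian algorithm is used instead, and the overlap matrix here is purely hypothetical:

```python
from itertools import permutations

# I[n][m]: frame overlap between cluster n and ground-truth class m (hypothetical).
I = [[5, 1, 0],
     [0, 4, 2],
     [1, 0, 3]]

def best_matching(I):
    """One-to-one cluster-to-label assignment maximizing total overlap (Eq. (10))."""
    n_clusters, n_classes = len(I), len(I[0])
    best, best_score = None, -1
    # Enumerate every one-to-one assignment of clusters to classes (here N = M).
    for perm in permutations(range(n_classes), n_clusters):
        score = sum(I[n][m] for n, m in enumerate(perm))
        if score > best_score:
            best, best_score = list(enumerate(perm)), score
    return best, best_score

print(best_matching(I))   # ([(0, 0), (1, 1), (2, 2)], 12)
```

The factorial enumeration is only for illustration; a polynomial-time routine such as the Hungarian/Kuhn-Munkres algorithm gives the same optimum on realistic numbers of clusters.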
Meanwhile, the global-level matching sets the highest learning requirement with additional Inter-Activity Association between actions. For activity-level matching, a model must discriminate between actions within a video and also acquire extra intra-activity action association knowledge. In the global-level matching, a model must additionally include inter-activity associations to construct feasible action correspondences across complex activities. The differences between these learning requirements are summarized in Tab. 6: Matching Level | Intra-Video Discrimination | Intra-Activity Association | Inter-Activity Association; Video: ✓ | – | –; Activity: ✓ | ✓ | –; Global: ✓ | ✓ | ✓. Note that a model learned at a broader scope is downward compatible and can be adjusted to be evaluated at a finer scope, e.g., from global to activity level, but not vice-versa. Despite the practical feasibility of doing so, the results cannot be directly compared due to the models' disparate learning requirements. FRAME-WISE REPRESENTATION The standard practice in TAS is to use pre-computed frame-wise features (Section 5.1) as inputs due to the heavy computational demands of learning video features. Using pre-computed features has a key advantage in that it allows a dedicated comparison of the proposed architectures without the confounding influences of improved frame-wise feature representations. More recently, some works have aimed to make the pre-computed features more discriminative (Section 5.2) by embedding more task-specific knowledge. Pre-Computed Features Fisher Vector Encoded IDT The original and Improved Dense Trajectories (IDT) [63], [64] were commonly used hand-crafted features for action recognition and video understanding before the rise of deep learning. The original dense trajectory features [63] are spatiotemporal features computed along tracks of interest points formed via optical flow. IDT [64] adds "improvements" by correcting the trajectories for camera motion.
To apply IDT to action recognition, [64] further encode the raw trajectories using Fisher Vectors (FV) [65] to capture the trajectories' first- and second-order statistics. Inflated 3D ConvNet (I3D) Inflated 3D ConvNet (I3D) [66] is a state-of-the-art architecture for extracting generic features for video understanding. It uses as a backbone the pre-trained Inception-V1 [67] with 2D ConvNet inflation. In practice, it inflates all N×N spatial kernels to N×N×N by replicating the original kernels N times and rescaling them with a temporal factor of 1/N. The model is pre-trained on the Kinetics dataset [68] for action recognition. Architecture-wise, the I3D model has two data streams, i.e., RGB and optical flow. The optical flow of the input video is computed with the TV-L1 algorithm [69]. Then, 21 × 224 × 224 spatiotemporal volumes of RGB and flow frames are each fed into their respective branch to extract 1024D features. The two are then concatenated to compose the final 2048D representation [70]. Additional Feature Learning Discriminative Clustering To cluster the video features discriminatively, Sener et al. [52] first learn a linear mapping Φ of the input features X ∈ R^V into a latent embedding space, i.e., Φ(X) ∈ R^E. In the latent space, they define a set of K anchors W_a ∈ R^{K×E} to represent the potential action classes. The latent feature descriptor is then defined as a similarity with respect to the anchors, denoted as F = W_a^T W_Φ X, where W_Φ ∈ R^{E×V} are the embedding weights of Φ. The learning objective for the latent space is defined as a pair-wise ranking loss: L = Σ_t Σ_{k=1, k≠k*}^{K} max[0, f_t^{k*} − f_t^k + ∆] + γ ||W_a, W_Φ||_2^2, (11) where f_t^k denotes the distance of f_t to anchor k and k* is the action label for that video frame. The term ∆ > 0 is a predefined margin that ensures that f_t is closer in the latent space to its anchor k* than to other anchors.
L2 regularization is imposed on W_a and W_Φ, and γ is the weighting parameter. In an unsupervised setting, as the true action label is unknown, W_a and W_Φ are learned iteratively, and k* is assigned based on the segmentation results from a previous step. Contrastive Learning Contrastive learning builds robust feature representations by contrasting samples against each other to learn attributes that are common within data classes and attributes that set a data class apart from the others [71], [72], [73]. Inspired by these works, Singhania et al. [47] apply contrastive learning to learn stronger feature representations with a temporal encoder-decoder. The positive and negative sets for contrastive learning are selected based on clustering and temporal continuity. The contrastive probability for a positive frame pair (i, j) from videos (n, m) is defined as: p^{nm}_{i,j} = e_τ(f^n_i, f^m_j) / ( e_τ(f^n_i, f^m_j) + Σ_{(r,q)∈N(n,i)} e_τ(f^n_i, f^q_r) ), (12) where e_τ is the exponential cosine similarity with temperature τ. The complex activity label can provide further cues for contrastive learning. In [47], the video-level feature h^n for video n is formed by max-pooling the frame features along the temporal dimension, i.e., h^n = max_{1≤t≤T_n} f^{(n,t)}. For video n with activity c^n, video-wise positive and negative sets can be constructed from other videos of the same or different activities, i.e., P_n = {m : c^m = c^n} and N_n = {m : c^m ≠ c^n}, respectively. A contrastive probability for video n and its positive pair m is defined analogously as p^{nm} = e_τ(h^n, h^m) / ( e_τ(h^n, h^m) + Σ_{r∈N_n} e_τ(h^n, h^r) ). (13) Based on the frame- and video-level contrastive probabilities in Eq. (12) and Eq. (13), feature representations are learned with the following contrastive loss: L = − (1/N_1) Σ_{n,i} Σ_{(m,j)∈P_{n,i}} log p^{nm}_{i,j} − (1/N_2) Σ_n Σ_{m∈P_n} log p^{nm}, (14) where N_1 = Σ_{n,i} |P_{n,i}| and N_2 = Σ_n |P_n|.
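The contrastive probabilities of Eqs. (12) and (13) can be sketched for a single anchor with toy 2D features; assuming here that the exponential cosine similarity is e_τ(a, b) = exp(cos(a, b)/τ), which is one common convention:

```python
import math

def cos_sim(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_prob(anchor, positive, negatives, tau=0.1):
    """p = e(a, p) / (e(a, p) + sum over negatives of e(a, n)),
    with e the exponential cosine similarity (cf. Eqs. (12)-(13))."""
    e = lambda a, b: math.exp(cos_sim(a, b) / tau)
    pos = e(anchor, positive)
    return pos / (pos + sum(e(anchor, n) for n in negatives))

anchor   = [1.0, 0.0]
positive = [0.9, 0.1]      # nearly aligned with the anchor
negative = [0.0, 1.0]      # orthogonal to the anchor
p = contrastive_prob(anchor, positive, [negative])
print(p)                   # close to 1: the positive pair dominates
```

Maximizing log p (Eq. (14)) pulls positives together and pushes negatives apart in the embedding space.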
Temporal and Visual Embedding For highly regular activities like those found in Breakfast Actions, the same action tends to occur in a similar temporal range of the video sequence. Kukleva et al. [61] leverage this fact and use a pretext task of timestamp prediction to learn a temporal embedding. These temporal features are later used to find the potential temporal action clusters and their orders for subsequent action segmentation. VidalMata et al. [62] later pointed out that a stand-alone temporal embedding lacks sufficient visual cues and proposed a two-stage pipeline to capture both visual and temporal representations of each frame. The first stage trains the visual and temporal embedding models separately; the second stage trains both jointly. The visual embedding is learned with a frame prediction task that predicts the feature at a future time t + s based on the feature input at the current time t, while the temporal embedding model follows [61]. The second stage unites the two models by imposing a frame reconstruction loss on the temporal model to predict frame representations that give the best timestamp prediction. Also based on the temporal embedding [61], Li et al. [49] exploit extra temporal relations at the action level by training a binary classifier on whether the input sequence of actions has been shuffled. Concretely, negative sequences are randomly sampled and shuffled, while positive sequences remain in their original ordering. The final feature embedding is used for subsequent learning. TEMPORAL AND SEQUENTIAL MODELING Segmenting actions from the frame-wise features outlined in Section 5 typically requires some additional handling of dynamics or change over time. One approach incorporates this modeling directly into the network architecture, i.e., as part of a temporal convolutional network (TCN), a recurrent neural network (RNN), or a transformer. Others explicitly apply external models such as HMMs or Generalized Mallows Models.
In accordance with the hierarchical structure of these videos, the reasoning about temporal dynamics can be categorized into the frame level and the segment level. We denote the frame-level model as temporal modeling and the segment-level model as sequential modeling. Temporal Modeling Temporal modeling on a frame-wise basis expands the temporal receptive field of the network and aggregates the dynamics into the feature representations. This level of modeling is necessary, as the commonly adopted pre-computed frame representations for TAS were originally designed for action recognition on few-second-long video clips. Several works [74], [75], [76], [77] have shown that these features exhibit a bias towards static cues, such as objects, scenes, and people. Efforts dedicated to this problem include Recurrent Neural Networks, Temporal Convolutional Networks, and Transformers. Recurrent Neural Networks (RNNs) RNNs attempt to capture the temporal relations by encoding the complete sequence with the same set of shared parameters over time. Among them, Gated Recurrent Units (GRUs) [78] have been adopted in [79], [80]. Specifically, the GRU takes in frame inputs recurrently following their temporal order and predicts action labels. A similar GRU structure is used as the backbone in [80], but with a bidirectional flow of frames. The memory of a stand-alone frame-wise RNN does not span long enough to capture the sequential relationships between actions. The above-discussed methods [79], [80] are therefore usually combined with the sequential modeling techniques that we introduce in Section 6.2. Another weakness of RNNs is their limited ability to process sequential inputs in parallel, due to the recurrent dependencies between frames. Temporal Convolutional Networks (TCNs) Temporal Convolutional Networks (TCNs) [46] use 1D convolutional kernels in time. Two standard paradigms of TCNs, shown in Fig. 5, are encoder-decoders and multi-stage TCNs.
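The building block shared by both paradigms is a 1D convolution over time; a toy sketch (not an actual TAS network) illustrates how dilation widens the temporal receptive field:

```python
def dilated_conv1d(x, w, dilation=1):
    """1D convolution over time with zero padding ('same'-length output)."""
    k = len(w)
    pad = (k - 1) * dilation // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(w[j] * xp[t + j * dilation] for j in range(k))
            for t in range(len(x))]

x = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]   # a single 'event' in time
w = [1.0, 1.0, 1.0]                        # averaging-style kernel

print(dilated_conv1d(x, w, dilation=1))    # the event influences 3 frames
print(dilated_conv1d(x, w, dilation=2))    # its influence now spans 5 frames
```

Stacking layers with progressively larger dilations, as in MS-TCN, grows the receptive field exponentially without temporal pooling.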
Encoder-decoder TCNs [46], [81], [82], [83] shrink and then expand the temporal resolution with layer-wise pooling and upsampling in a U-Net fashion [84]. Alternatively, the multi-stage architecture (MS-TCN) keeps a constant temporal resolution and expands the receptive field with progressively larger dilated convolutions [14], [85]. Comparatively, the encoder-decoder architecture greatly reduces the computation time for long input sequences through temporal pooling. However, pooling may harm the prediction accuracy at action boundaries. In contrast, the MS-TCN architecture preserves the full temporal resolution, and especially the boundary information, at the cost of higher computation. Transformer Transformers have seen quick adoption for video tasks, including TAS. The core technique of a transformer is its attention mechanism; Sener et al. [86] proposed one of the first attention-based architectures, called Temporal Aggregates. The Temporal Aggregates model used the non-local operation [87] to estimate the mutual attention between frames over multiple time spans and fused these together with further attention operations. ASFormer [88] was one of the first true transformer architectures for TAS. It adopted an encoder-decoder like the ED-TCN [46] and replaced the convolutional operations with transformer blocks. In the encoder, the transformer only attends to frames within the inputs, which is often referred to as self-attention (SA), while the blocks in the decoder adopt cross-attention (CA) between the features and the encoder outputs (see Fig. 6). Building on top of ASFormer, Behrmann et al. [89] adjust the decoder to output only the action sequence instead of frame-wise action labels, i.e., mapping frame inputs to action sequence outputs. Transformers are being embraced gradually for TAS, but their use is still limited. First, transformers lack inductive biases, requiring a large corpus of videos for effective training.
Yet existing datasets for TAS are relatively small, making it difficult for large transformers to develop effective representations. Another issue, identified by [90], is that the self-attention mechanism might not acquire meaningful weights over a large span of inputs. In summary, the temporal relationships for action recognition are modeled implicitly through the network architecture. Given the temporal convolution and attention operations discussed above, it is necessary for them to have access to fine-grained frame-level annotations. Moreover, accurate action boundaries are of particular importance for the model to learn the ambiguities during action transitions. Sequential Modeling The actions in procedural videos typically follow some sequential order to serve a purpose or achieve a specific goal. Such sequential information is more easily captured at the segment level, and various models such as Hidden Markov Models and Mallows Models have been investigated in existing works [12], [52], [61]. Hidden Markov Model Hidden Markov Models (HMMs) are classic probabilistic models for sequential data inputs, and they have also been applied to model the progression or sequential relations of action segments in video sequences. Recall that for a given video x, TAS is to find the optimal segmentation (ĉ, ℓ̂), where ĉ = [ĉ_1, . . . , ĉ_n, . . . , ĉ_N̂] denotes the predicted ordering of action labels of length N̂, with ĉ_n ∈ C, and ℓ̂ = [ℓ̂_1, . . . , ℓ̂_N̂] denotes the corresponding segment lengths: (ĉ, ℓ̂) = arg max_{N,c,ℓ} p(c) · p(ℓ|c) · p(x|c) = arg max_{N,c,ℓ} [Π_{n=1}^{N−1} p(c_{n+1}|c_n)] (context model) · [Π_{n=1}^{N} p(ℓ_n|c_n)] (length model) · [Π_{t=1}^{T} p(x_t|c_n)] (visual model). (15) The last term, p(x|c), in the second line is simplified from p(x|c, ℓ), since it is a frame-wise likelihood and does not depend on the action lengths ℓ. The HMM formulation induces a three-component model. The first, p(c), is a context model, providing probabilities for the sequence of actions in the video.
As discussed in Section 5.2.3, CTE [61] assumes that similar actions happen in close temporal vicinity, so that the average timestamp t(k) of a feature cluster in the temporal embedding space is a good indication of the action order in the sequence: t(k) = (1/|F(k)|) Σ_{f∈F(k)} t(f), (16) where F(k) is the set of features in cluster k and t(·) indicates the normalized temporal location. The clusters are then ordered as π = [k_1, . . . , k_K] with respect to their temporal location, such that 0 ≤ t(k_1) ≤ · · · ≤ t(k_K) ≤ 1. With this ordering, the transition probability is defined as: p(c_{n+1}|c_n) = 1 if c_{n+1} = c_n or k_{c_{n+1}} − k_{c_n} = 1, and 0 otherwise. (17) Eq. (17) imposes a hard transition; new frames must either keep the same action label as the previous frame or transition to the next action label observed in the ordering π. Comparatively, Li et al. [49] define a relaxed version, taking into consideration the action lengths λ_c: p(c_{n+1}|c_n) ∝ (λ_{c_n} + λ_{c_{n+1}}) / Σ_{j=c_n}^{c_{n+1}} λ_{c_j} if k_{c_{n+1}} > k_{c_n}, and 0 otherwise. (18) This formulation allows for the skipping of actions in the ordering π and penalizes multiple action skips with a large denominator (the sum of the skipped action lengths) in Eq. (18). The second component, p(ℓ|c), the length model, determines the temporal length of each action class. Common practice [12], [13], [49], [56] is to model the length of each action with a Poisson distribution: p(ℓ|c) = (λ_c^ℓ / ℓ!) e^{−λ_c}. (19) The lengths λ_c for the actions are estimated over all video sequences by the following: λ̂ = arg min_λ Σ_{x∈X} ( Σ_{c∈A_x} λ_c − T_x )², s.t. λ_c > λ_min, (20) where A_x is the set of occurring actions in video x with T_x frames, and λ_min denotes a pre-set minimum length over all actions. This minimizes the difference between the summed estimated lengths λ_c and the actual video length T_x over the video set, and can be solved with constrained optimization by linear approximation (COBYLA) [91]. Such explicit modeling of lengths is necessary to avoid producing unreasonably long action segments. The third component, the visual model, provides the probability of a feature sequence x being generated by the given action labels c. There are multiple ways to model the frame likelihood. Following Bayes' theorem, [12] proposes estimating p(x_t|c_n) by considering: p(x_t|c_n) ∝ p(c_n|x_t) / p(c_n), (21) where the prior p(c_n) can be estimated either as the fraction of frames with label c_n [13] or as a uniform distribution for simplicity [49], while the posterior p(c_n|x_t) is approximated by the output of an action classification network supervised by the action annotations. Generative models like GMMs can also be used to model the frame likelihood. In the GMM, the likelihood of a video frame x_t given an action class c_n is written as: p(x_t|c_n) = N(x_t; µ_n, Σ_n), (22) where µ_n and Σ_n are the action class mean and covariance. In practice, GMMs are preferred for cases where no action annotations are available [61], [62]. Viterbi. The MAP problem for the HMM described in Eq. (15) can be solved efficiently with the Viterbi algorithm [92]. Viterbi relies on dynamic programming to find the most likely sequence of states following the temporal direction. Consider the case where a uniform length model applied to Eq. (15) yields the following: (N̂, ĉ) = arg max_{N,c} Π_{n=1}^{N−1} p(c_{n+1}|c_n) · Π_{t=1}^{T} p(x_t|c_n), (23) which can be rewritten by denoting the labeling sequence over the T frames as π: π̂ = arg max_π Π_{t=1}^{T} p(x_t|π_t) · p(π_t|π_{t−1}). (24) Given the recurrence relations, we can define the probability value Q_{t,π_t} of the most probable state sequence, also called the Viterbi path, as: Q_{1,π_1} = p(x_1|π_1) · p(π_1), (25) and Q_{t,π_t} = max_{π_{t−1}} p(x_t|π_t) · p(π_t|π_{t−1}) · Q_{t−1,π_{t−1}}. (26) The Viterbi path π̂ can then be retrieved by backtracking through the saved best π_t from each timestamp in Eq. (26). The overall complexity of this implementation is O(T × N²), where N is the number of states. Re-estimation.
[49], [61] have noted that the aforementioned HMM model can be updated iteratively. As a first step, one initializes the three HMM components above with naive observations. Second, Viterbi decoding is applied to infer the MAP label sequence. The decoded labels can in turn be applied to refine the feature inputs to the HMM components. These steps can be repeated until convergence. Inference. To reduce the computational complexity of Viterbi, [93] proposed FIFA. Instead of dynamic programming, [93] defines a differentiable energy function to approximate the probabilities of possible segment alignments. Their inference process reformulates the maximization of the sequence posterior as the minimization of the proposed energy function. Given the transcript c_{1:N}, the aim is to find the corresponding lengths ℓ_{1:N}, i.e., ℓ̂_{1:N} = arg min_{ℓ_{1:N}} E(ℓ_{1:N}). (27) The objective energy function E(ℓ_{1:N}) can be further decomposed as E(ℓ_{1:N}) = − log [ Π_{t=1}^{T} p(α(t)|x_t) · Π_{n=1}^{N} p(ℓ_n|c_n) ] = Σ_{t=1}^{T} − log p(α(t)|x_t) (the term E_o) + Σ_{n=1}^{N} − log p(ℓ_n|c_n) (the term E_ℓ), (28) where p(α(t)) = p(y_t|t; c_{1:N}, ℓ_{1:N}) is the mapping of time t to an action label given the segment-wise labeling, and c_{1:N} is sampled from the training set as mentioned in Section 4.3. Two further approximations are used for the two terms in Eq. (28). The first is a differentiable mask M ∈ R^{N×T} with a parametric plateau function f [94]: M[n, t] = f(t|λ^c_n, λ^w_n, λ^s) = 1 / ( (e^{λ^s (t − λ^c_n − λ^w_n)} + 1)(e^{λ^s (−t + λ^c_n − λ^w_n)} + 1) ), (29) where λ^c and λ^w are the centers and widths of the plateaus computed from ℓ_{1:N}, and λ^s is a fixed sharpness parameter. Hence, the first term E_o is approximated as: E*_o = Σ_{t=1}^{T} Σ_{n=1}^{N} M[n, t] · P[n, t], (30) where P[n, t] = − log p(c_n|x_t) is the negative log probability. Secondly, for E_ℓ, the length ℓ_n is compared against the expected length value λ^ℓ_{c_n} under a Laplace distribution assumption: E*_ℓ = (1/Z) Σ_{n=1}^{N} |ℓ_n − λ^ℓ_{c_n}|, (31) where Z is a constant normalization factor.
The original energy function is finally expressed as a weighted aggregation of the two approximation terms: E*(ℓ_{1:N}) = E*_o(ℓ_{1:N}) + β E*_ℓ(ℓ_{1:N}), (32) where β is a weighting coefficient. FIFA can boost the inference speed by up to 5× while maintaining a comparable performance score. Generalized Mallows Model A generalized Mallows Model (gMM) models distributions over orderings or permutations. Given a set of videos belonging to the same activity, Sener et al. [52] propose using a gMM to model the sequential structure of actions for action segmentation. Their assumption is that a canonical sequence ordering σ is shared by these videos, and they consider each possible action ordering π as a permutation of σ. Such modeling offers flexibility for missing steps and deviations. A gMM represents permutations as a vector of inversion counts v = [v_1, · · · , v_{K−1}], where K is the number of elements, i.e., actions, in the ordering, and v_k denotes the total number of elements from (k + 1, · · · , K) that rank before k in the ordering π. With the distance between two orderings defined as d(π, σ) = Σ_k ρ_k v_k, the probability of observing v is as follows: P_gMM(v|ρ) = e^{−Σ_k ρ_k v_k} / ψ(ρ) = Π_{k=1}^{K−1} e^{−ρ_k v_k} / ψ_k(ρ_k), (33) where ρ = [ρ_1, · · · , ρ_{K−1}] is a set of dispersion parameters and ψ_k(ρ_k) is the normalization function. The prior for each ρ_k is the conjugate: P(ρ_k|v_{k,0}, v_0) ∝ e^{−ρ_k v_{k,0} − log(ψ_k(ρ_k)) v_0}. (34) A common prior ρ_0 is used for each k, such that v_{k,0} = 1/(e^{ρ_0} − 1) − (K − k + 1)/(e^{(K−k+1)ρ_0} − 1). (35) Given an action ordering π, generating the frame-wise label assignment z further requires an action appearance model a. Here, a is the bag of action labels providing the occurrence count for each action, and it is modeled as a multinomial parameterized by θ with a Dirichlet prior with parameter θ_0.
Recall that the assumption in the work of [52] is that the canonical ordering is given; thus, their objective is to infer the following posterior over the entire video corpus:

$$P(z, \rho \mid F, \theta_0, \rho_0, v_0) \propto P(F\mid z)\, P(a\mid\theta)\, P(\theta\mid\theta_0)\, P(\rho\mid\rho_0, v_0) \qquad (36)$$

where $F$ denotes the frame features. The feature likelihood term $P(F\mid z)$ is evaluated with a GMM (Eq. (22)), and the remaining terms are approximated via MCMC sampling. Specifically, they use slice sampling for $\rho$ and collapsed Gibbs sampling for $z$. Similar to the HMM, the above model can also be trained in two stages, where discriminative feature clustering (as described in Section 5.2.1) and sequential modeling are performed in an alternating fashion.

Dynamic Time Warping

Other than Viterbi, dynamic time warping (DTW) has also been used in the literature to implement sequential modeling. SemiTAS [48] first sub-samples in time an ordered action sequence from the frame-wise network prediction, and then computes a cost matrix between the sequence and the frame predictions to enforce the sampled sequential order of actions. The proposed continuity loss is computed along the optimal assignment path found via DTW.

Over-Segmentation

Local continuity is an inherent attribute of procedural actions, meaning an action should be locally continuous and only change at its actual boundary. This has motivated researchers to refine the results of existing segmentation algorithms at the boundaries to increase their performance. Boundary Refinement. Wang et al. [95] raise concerns over the boundary ambiguity and over-segmentation issues in existing works and propose a module for multi-stage segmentation algorithms [14]. Their module enables the later stages to smooth noisy boundary predictions with confident ones via a novel Local Barrier Pooling operation. Separately, [96] proposes to supplement the segmentation branch with a boundary regression branch and to use boundary detection on the segmentation outputs for post-processing.
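As a toy illustration of the local-continuity intuition behind such refinement, a naive post-processing pass can absorb too-short segments into a neighbor. The learned modules above [95], [96] are far more sophisticated; this sketch, with an arbitrary minimum length, is only meant to show the effect:

```python
def merge_short_segments(frame_labels, min_len=3):
    """Absorb segments shorter than min_len frames into a neighboring
    segment (the previous one when it exists, otherwise the next)."""
    # run-length encode the frame-wise labeling into [label, length] pairs
    segs = []
    for l in frame_labels:
        if segs and segs[-1][0] == l:
            segs[-1][1] += 1
        else:
            segs.append([l, 1])
    changed = True
    while changed and len(segs) > 1:
        changed = False
        for i in range(len(segs)):
            if segs[i][1] < min_len:
                j = i - 1 if i > 0 else i + 1   # neighbor to absorb into
                segs[j][1] += segs[i][1]
                del segs[i]
                # re-merge neighbors that now share a label
                k = 1
                while k < len(segs):
                    if segs[k][0] == segs[k - 1][0]:
                        segs[k - 1][1] += segs[k][1]
                        del segs[k]
                    else:
                        k += 1
                changed = True
                break
    return [l for l, n in segs for _ in range(n)]
```

Note that the total number of frames is preserved; only spuriously short runs are relabeled, which is exactly the kind of error the segmental metrics penalize.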
Gaussian Smoothing. Smoothing with a Gaussian kernel [51], [54], [96] promotes the continuity of actions in a narrow local temporal window and is highly effective in enhancing segmentation performance, especially on the segmental metrics. While [51], [96] directly apply smoothing to the frame-wise action probabilities, Du et al. [54] apply it along the temporal dimension of sequential similarity scores between consecutive frames to mitigate noisy frames and obtain more robust boundaries. Boundary Smoothing. From a different perspective, some research suggests that coarser transitional action boundaries may improve TAS performance compared to the conventional rigid ones [48], [81]. A comparison of existing boundary smoothing techniques proposed for TAS is illustrated in Fig. 7. Correspondingly, [81] mixes the action probabilities with a fixed-slope linear decay (Fig. 7(b)), whereas [48] elastically expands the smoothing range to be proportionate to the estimated action duration and employs a sigmoid shape for mixing (Fig. 7(c)).

FULLY-SUPERVISED APPROACHES

Supervised TAS approaches require frame-wise action labels during training. Following the trend of other developments in action recognition, supervised TAS research has moved towards adopting deep learning-based solutions. Early approaches to TAS, before deep learning, classified actions in a temporal sliding window [106], [107], [108]. Among these, Cheng et al. [108] reason on the dependencies between the actions with a Bayesian nonparametric language model. In contrast, [109], [110] model actions as a change in the state of objects and approach the segmentation problem as finding change points. Another line of methods before deep learning predicts the most probable sequence of actions, using stochastic context-free grammars to represent the temporal structure of actions [111], [112] or combining a set of hidden Markov models with a context-free grammar [28], [100].
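The temporal sliding-window classification used by these early approaches can be sketched as below, assuming per-frame action scores are already given (a toy version; the original methods learned window classifiers over hand-crafted features):

```python
def sliding_window_labels(frame_scores, win=5):
    """Label each frame with the action whose scores, summed over a
    centered temporal window, are highest. frame_scores[t][a] is the
    score of action a at frame t; windows are truncated at the ends."""
    T = len(frame_scores)
    A = len(frame_scores[0])
    half = win // 2
    labels = []
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        sums = [sum(frame_scores[u][a] for u in range(lo, hi)) for a in range(A)]
        labels.append(max(range(A), key=lambda a: sums[a]))
    return labels
```

Even this crude aggregation already enforces a weak form of local continuity, which is why windowed classification predates explicit temporal models.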
[113] combines a visual model that maps visual cues to action probabilities with a language model applied to the action sequence, and captures the segment durations with a length model. Finding the final segmentation output that optimizes the likelihood of all three components is performed via dynamic programming. This section discusses deep models utilized for TAS. Taxonomically, we identify four main categories in the following text. The performance of these approaches is compared in Tab. 7 on the Breakfast and GTEA datasets. Overall, we find that the majority of the approaches are TCN-based. Although several types of features were used in the early approaches, I3D features are mostly employed to achieve the current state of the art.

Representation Learning

The early attempts combine features derived from deep learning with temporal models. Lea et al. [55] utilize a CNN to capture spatiotemporal feature relations (ST-CNN) and a semi-Markov model to segment videos. To compute visual representations, Bi-LSTM [97] divides videos into snippets and passes them through a multi-stream network composed of appearance and motion streams. These features are then fed into a bi-directional LSTM to predict action labels. The focus of follow-up works is on acquiring better representations for fine-grained actions. Instead of optical flow, [98] models fine-grained motion with locally consistent deformable convolutions (LCDC). Coupled GAN [99] models the evolution of actions with two generative adversarial networks, one for RGB images and one for auxiliary information (depth or optical flow). TempAgg [86] is a recent framework for multi-granular temporal aggregation that relates recent observations to long-range ones with attention. This network can be used for TAS by naively classifying long-range information-aggregated snippets. Although performance is reported only based on snippet scores, adding a sequence model is expected to yield further improvements.
Temporal Convolutional Networks (TCNs)

Temporal patterns are captured by TCNs via a hierarchy of convolutions. Lea et al. [46] were the pioneers in applying TCNs to TAS. They present an encoder-decoder architecture (ED-TCN) with 1D temporal convolutional and deconvolutional kernels that capture long-range temporal patterns. In addition to capturing action durations, pairwise transitions, and long-term dependencies, TCN-based solutions are fast. TricorNet [11] substitutes a bi-directional LSTM for the decoder in ED-TCN, offering a hybrid temporal convolutional and recurrent network. Due to the recurrences, this network incurs large computation costs. Moreover, TDRN [82] builds upon ED-TCN by substituting the temporal convolutions with deformable temporal convolutions and by adding a residual stream to the encoder-decoder model. The residual stream processes videos at the full temporal resolution, while the other stream collects the temporal context at various scales. Although the above TCN-based approaches nominally work on the entire video, referred to as full resolution, in reality these methods temporally downsample the videos to a few frames per second. This type of pre-processing may cause the loss of fine-grained details. In contrast, Farha and Gall [14] propose MS-TCN, a multi-stage hierarchical temporal convolutional network that operates on the exact full-resolution video. Each stage of MS-TCN comprises multiple temporal convolutional layers with 1D dilated convolutions and outputs an initial prediction that is iteratively refined by subsequent stages. This work enhances segmentation performance over earlier works [46], [82] by a large margin and lowers over-segmentation mistakes. MS-TCN++ [85] introduces a dual dilated layer and shares parameters across refining stages to improve upon MS-TCN.
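The core operation of an MS-TCN layer is a dilated 1D convolution; below is a single-channel sketch (real models use multi-channel learned filters, ReLU activations, 1×1 convolutions, and residual connections):

```python
def dilated_conv1d(x, w, dilation):
    """'Same'-padded 1D dilated convolution over a sequence x (list of
    floats) with a length-3 kernel w; taps sit at t - d, t, t + d."""
    T = len(x)
    out = []
    for t in range(T):
        acc = 0.0
        for k, wk in enumerate(w):          # k in {0, 1, 2}
            idx = t + (k - 1) * dilation    # centered taps at +/- dilation
            if 0 <= idx < T:                # zero padding outside the sequence
                acc += wk * x[idx]
        out.append(acc)
    return out
```

Stacking such layers with dilations 1, 2, 4, ... doubles the receptive field per layer, which is how each MS-TCN stage covers long temporal ranges with few parameters.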
RPGaussian [101] presents a bilinear pooling module that is integrated into TCNs to serve as an efficient feature fusion operation, e.g., by replacing the last 1 × 1 convolution layer in the first stage of MS-TCN. GatedR [102] employs a gated forward refinement network to correct errors from the previous stages in an adaptive manner. It also incorporates a multi-stage sequence-level refinement loss to correct the errors in the previous predictions. An analysis of fragmentation concerns in TCNs by Singhania et al. [83] leads to the development of the C2F-TCN encoder-decoder model with a coarse-to-fine ensemble of decoding layers. Its decoder output ensemble is less fragmented and more precise. This work also introduces a multi-resolution feature-level augmentation strategy and an action loss that penalizes misclassifications at the video level, hence improving segmentation performance.

Improving Existing Architectures or Outputs

Several efforts concentrate on enhancing existing TAS algorithms by incorporating new modules into existing backbones or by post-processing the outputs. Chen et al. [104] argue that spatiotemporal variations of human actions, referred to as different domains, lead to poor performance in supervised TAS, as training a model in one domain and testing it in another fails due to the variation gap. They propose employing two self-supervised auxiliary tasks to reduce discrepancies between the source and target domains' feature spaces. One task predicts which domain a single frame's feature vector originates from, while the other predicts domain labels for a shuffled sequence of source and target segments. Combining their self-supervised model, SSTDA, with MS-TCN significantly improves performance without the use of extra labeled data. Such superior performance could potentially benefit from having access to the test data as the unlabeled inputs.
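The input construction for the second auxiliary task can be sketched as follows; this is a simplified, hypothetical data-preparation step (segment extraction and the domain classifier itself are omitted):

```python
import random

def make_domain_sequence(source_segs, target_segs, seed=0):
    """Build the sequential domain-prediction input: shuffle a mix of
    source and target segments and emit a binary domain label per
    segment (0 = source, 1 = target)."""
    rng = random.Random(seed)
    pairs = [(s, 0) for s in source_segs] + [(t, 1) for t in target_segs]
    rng.shuffle(pairs)
    segs = [p[0] for p in pairs]
    labels = [p[1] for p in pairs]
    return segs, labels
```

Predicting the per-segment domain labels of such shuffled sequences forces the backbone's features to become domain-invariant at the segment level, complementing the frame-level task.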
GTRM [80] refines segmentation outputs generated by conventional TAS methods using a graph convolutional network (GCN). The refinement depends on the quality and degree of fragmentation of the initial segmentation. In segmentation models, temporal receptive fields are crucial, with large fields facilitating long-term relations and small receptive fields capturing local changes. In lieu of manually created receptive fields, Global2Local [105] introduces a search scheme for effective receptive field combinations that can be inserted into any existing segmentation model. A second direction for improvement is correcting segmentation results at action boundaries. Wang et al. [95] raise concerns over the boundary ambiguity and over-segmentation issues in previous works and propose a module that can be used with MS-TCN. Their module, BCN, enables the later stages to focus on ambiguous frames. A newly designed pooling operator smooths noisy boundary predictions with confident ones. Similarly, ASRF [96] also performs boundary refinement, but it is model-independent and may be applied to any TAS output.

Transformers

Transformers have recently been utilized for TAS. ASFormer [88] is a transformer-based segmentation model with an encoder and multiple decoders that perform iterative refinement. A self-attention block with instance normalization accompanies each dilated temporal convolutional layer from MS-TCN. The initial stage is the encoder, which takes the video sequences and outputs predictions, while the decoders receive as input the predictions from the preceding stages. On GTEA, ASFormer performs on par with MS-TCN, while on Breakfast, it outperforms MS-TCN. Another recent work, UVAST [89], uses a similar encoder but a different decoder than ASFormer. UVAST's decoder predicts the action segments in an auto-regressive way, as opposed to ASFormer and MS-TCN, which make frame-level predictions.
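The difference between frame-level and segment-level prediction targets can be seen by run-length encoding a frame-wise labeling into (action, length) segments, the representation an auto-regressive decoder such as UVAST's emits directly (a sketch, not the paper's code):

```python
def frames_to_segments(frame_labels):
    """Run-length encode frame-wise predictions into (action, length)
    segment tuples, preserving the temporal order of actions."""
    segments = []
    for label in frame_labels:
        if segments and segments[-1][0] == label:
            segments[-1] = (label, segments[-1][1] + 1)
        else:
            segments.append((label, 1))
    return segments
```

A sequence of a few such tuples is a far more compact target than thousands of per-frame labels, which is what makes auto-regressive segment decoding attractive for reducing over-segmentation.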
On F1 score and edit score, UVAST surpasses previous methods, suggesting reduced over-segmentation.

WEAKLY-SUPERVISED APPROACHES

The objective of weakly-supervised techniques is to avoid intensive frame-level supervision. We classify these techniques into five categories. One group receives supervision in the form of an ordered list of actions known as transcripts. The second one uses an unordered list of actions called action sets. The third group labels each video with action timestamps. The fourth uses even weaker supervision, as it only accepts complex activity labels. The fifth and final category consists of segmentation methods that leverage complementary textual data, such as narrations, to provide temporal constraints. We compare the performance of all approaches on Breakfast and 50Salads in Tab. 8.

Transcripts: Ordered List of Actions

Kuehne et al. [114] were the first to use the term transcripts to refer to an ordered list of the actions occurring in a video. Even earlier work already features transcripts [115]; however, it focuses on aligning frames with the transcripts and assumes that the test transcript sequences are also provided. Transcript-based supervision provides only the actions within a video and their chronological order. This type of supervision significantly reduces the cost of annotating videos because it does not require any frame-by-frame labels, and it could accelerate the annotation of Breakfast by an order of magnitude. We categorize these methods into iterative two-stage and single-stage solutions.

Iterative Two-Stage Solutions

The two-stage solutions begin with an initial estimate of frame-wise labels, which is subsequently improved iteratively using a segmentation model. These methods are based on an incremental refinement of previous predictions. HTK [114] extends an approach from the supervised setting [100] to the weakly-supervised one. A set of HMMs represents the actions in this framework, whereas a GMM models the observations.
The algorithm uniformly initializes and then iteratively refines the video segments depending on the transcripts. On the basis of this concept, Richard et al. [79] replace the GMMs with recurrent neural networks. Additionally, they subdivide the actions into fragments to capture their finer properties. ISBA [81] expands the TCN from [46] by adding lateral connections between encoder and decoder layers. They employ a soft labeling method at the segment boundaries and refine the segmentation iteratively. TASL [116] proposes modeling the action subspaces with an ensemble of auto-encoders, with a similar two-stage strategy of iterating between aligning videos with respect to their transcripts and learning the subspaces from the alignment. In particular, a constrained Viterbi decoding algorithm [12], [56] is utilized to efficiently tackle the alignment problem.

Single-Stage Solutions

The two-stage approaches are initialization-sensitive and may not converge when the models are trained incrementally. Single-stage techniques instead learn the segmentation directly. ECTC [117] is an extended variant of connectionist temporal classification [118] for aligning transcripts with video frames under consistency restrictions. It exploits frame-wise similarities to keep the action alignments consistent. This lowers the space of viable paths and prevents degenerate segmentations, which could result from the large number of frames in long videos. NN-Viterbi [12] employs Viterbi decoding as part of the loss function to train a segmentation network. The Viterbi algorithm generates pseudo-ground truths for the network's output probabilities, which are subsequently used to calculate the loss. This method presents a substantial advance over the previous methods; yet, training is expensive due to Viterbi decoding. Chang et al. [119] propose D³TW, a framework with a differentiable alignment loss to model positive and negative transcripts discriminatively.
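The transcript-to-frame alignment underlying these methods can be sketched as a monotone minimal-cost assignment of frames to the ordered actions; the dynamic program below is illustrative only (each method differs in how the costs are defined and whether the recursion is relaxed to be differentiable):

```python
import math

def align_cost(cost):
    """D[i][j]: minimal cost of assigning frames 0..j, in order, to the
    first i+1 transcript actions, with every action covering at least
    one frame. cost[i][j] is the mismatch cost of frame j under action i."""
    n, T = len(cost), len(cost[0])
    D = [[math.inf] * T for _ in range(n)]
    D[0][0] = cost[0][0]
    for j in range(1, T):
        D[0][j] = D[0][j - 1] + cost[0][j]          # stay in the first action
    for i in range(1, n):
        for j in range(i, T):
            # either stay in action i, or advance from action i-1
            D[i][j] = cost[i][j] + min(D[i][j - 1], D[i - 1][j - 1])
    return D[n - 1][T - 1]
```

Backtracking through the same table yields the frame-wise pseudo-labels used as training targets, which is the expensive step that differentiable relaxations like D³TW avoid.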
Similar discriminative training is proposed by Li et al. [120], whose framework, CDFL, is constructed on NN-Viterbi with ordering restrictions. CDFL, unlike D³TW, generates valid and invalid segmentation candidates using a segmentation graph, where invalid candidates violate the transcripts. It formulates a new loss based on the energy differences between valid and invalid candidates, using a recursive estimation of each candidate's segmentation energy. CDFL performs far better than its predecessors, although its training is more costly. Souri et al. [121] call attention to the lengthy training time of the state of the art and propose a sequence-to-sequence network that performs comparably to previous research but is significantly faster during training and inference. Their framework, MuCon, consists of two branches, one of which produces frame-wise predictions while the other predicts transcripts with durations. Using the predictions from the two branches, a mutual consistency loss is computed to ensure comparable predictions. A recent work, DP-DTW [122], trains class-specific discriminative action prototypes for weakly-supervised segmentation and suggests that videos can be represented by concatenating prototypes based on transcripts. The model seeks to increase the inter-class distinction between prototypes by means of discriminative losses.

Action Set

Action-set supervision assumes that a collection of action labels is provided for training without knowledge of their temporal location, order, or frequency. This type of labeling may appear as meta-tags on video-sharing platforms, for example. Richard et al. [13] are the first to propose a weak segmentation model based on action sets. Similar to [113], their structure consists of an action, length, and sequence component and employs Viterbi to determine the most probable segmentation.
They produce several transcripts utilizing a context-free grammar to limit the search space and transform the problem into a weakly-supervised setting with multiple transcripts. However, this work cannot generate all possible sequences of an action set, which might compromise the quality of segmentation. SCT [123] learns a segmentation network that uses the annotations directly for learning. They begin by segmenting videos into regions, then estimate action probabilities and temporal lengths with one branch. A second branch is utilized to provide frame-wise predictions. They measure the consistency of the frame-wise predictions with respect to the region predictions, which considerably increases the accuracy of the model. In addition, they define several losses and regularizers to encourage temporally consistent predictions between adjacent regions or to regularize region lengths. SCV [56] uses a set-constrained Viterbi algorithm to generate accurate pseudo-ground truths and an n-pair loss to minimize the distance between training video pairs that share action classes in their respective action sets. It employs a greedy post-processing step to ensure that all actions are included in the frame-wise pseudo-ground truth, replacing low-score frames with action labels included in the action set but absent from the initial segmentation. ACV [124] improves on this work by eliminating the necessity for post-processing through a differentiable approximation that allows end-to-end training. It builds an anchor-constrained graph and estimates anchor segments to confine the candidate set of valid sequences. Recent observations by Lu et al. [125] indicate that it is typical for action pairs to have a fixed temporal order in multiple procedural videos, which could help improve segmentation performance. In light of this, they incorporate these constraints into learning using a pairwise order consistency loss.
The loss penalizes the ordering disagreement between the extracted templates and the outputs of the segmentation model.

Single-Frame Supervision

Instead of annotating every frame with an action label, another type of supervision obtains labels from a single timestamp for each action, significantly lowering the effort required for annotation. The annotation could correspond to any arbitrary frame of each segment. Li et al. [126] present a method for generating frame-wise labels by detecting action transitions. They implement a confidence loss that mandates the class probabilities to decrease monotonically as the distance to the timestamp grows. Their method is applicable to the training of any TAS model. Compared to transcript- or action-set-based weak supervision, this type of supervision greatly improves segmentation performance. Its performance is comparable to that of fully-supervised methods, making it an intriguing direction to explore. Recently, Rahaman et al. [50] proposed incorporating Expectation-Maximization (EM) for timestamp supervision, with the notion that the missing frame labels might be deduced from the labeled timestamps. The E-step is designed to train the network for the label estimation, whereas the M-step maximizes the timestamp segment likelihood and calculates the boundary based on this maximization. They illustrate the generalizability of the proposed EM approach in managing missing actions between annotated timestamps. Their model indicates that labeling the initial frame of each segment impairs performance in comparison to a random- or middle-frame initialization, demonstrating the ambiguity of labels at the boundaries. The previously stated approaches utilize a segmentation model to help infer the action boundaries between timestamps, whereas GCN [134] proposed adopting a Graph Neural Network (GNN) to achieve the same goal in an alternative manner.
Specifically, the frame features are considered as nodes, and the edges between consecutive frames are weighted by their feature affinity, computed as the cosine similarity. The GNN is trained to propagate labels from the few labeled nodes to the rest of the unlabeled nodes.

Narrations & Subtitles

Frequently, videos are accompanied by publicly available text data in the form of scripts, subtitles, or narrations. Such data is commonly employed for video and text alignment [42], [135] and step localization [22], [34]. Many works make use of text to weakly supervise TAS. The primary drawback of textual data is the assumption that all the videos are accompanied by temporally aligned text. Unfortunately, text data may not always be properly aligned and could be absent. Sener et al. [136] combine visual and language cues in a hybrid generative model to segment videos. They construct object proposal segments and compute visual vocabularies from a collection of videos of the same activity. Combining these with textual vocabularies computed over the narrative text, they represent each frame with a binary histogram of visual and textual words. Using the binary data, they employ the generative beta process mixture model from [21] in order to detect the actions shared by multiple videos. They evaluate the proposed method on a newly collected dataset of 17 activities with 5 test videos per activity. On a separate track, Fried et al. [137] describe a method for segmentation that uses canonical step ordering and transcribed narrations. Canonical step ordering refers to the typical order in which the steps of an activity are carried out. They model the segment duration, location, order, and features using a semi-Markov model. Although they do not use the narrations during testing, they use the canonical ordering during inference because these constraints affect the parameters of their model.
This work systematically assesses how much models can improve by increasing the amount of supervision, e.g., by switching from canonical ordering to narrated transcripts or to full supervision. They exclusively provide results for the CrossTask dataset [22], since it includes narrations and canonical orderings for activities, allowing for their methodical examination.

Activity Supervision

Proposed as an even weaker supervision signal for the TAS task is the use of solely complex activity labels [51]. This type of supervision does not give information at the action level. Ding and Yao [51] propose a Constituent Action Discovery (CAD) framework that learns frame representations based on their similarity to latent action prototypes. They assume that the complex activity label can be inferred by aggregating action prototype affinities across the whole video sequence. Despite being deemed weakly supervised, this method uses the same amount of information as the majority of unsupervised works.

UNSUPERVISED APPROACHES

For supervision, unsupervised TAS approaches require neither action labels, temporal boundaries, nor textual data. Yet, because of the application scope of their proposed learning strategies, they implicitly require activity information [52], [61], [136]. Tab. 9 categorizes existing unsupervised works and evaluates their performance on Breakfast.

TABLE 9: Performance of unsupervised methods evaluated on the Breakfast Actions dataset. "Two Stg." indicates the two-stage group, and "SS" the self-supervision type; "Single Stg." lists methods with only a single stage. "A" corresponds to activity-level evaluation and "V" to video-level evaluation. We also present how the Temporal Model is defined and its flexibility for allowing Deviations, Missing steps, and Repetitions in orderings.

The first set of works [52], [61], [62] features two iterative steps, alternating between frame cluster estimation and frame representation learning.
The second group employs self-supervised learning-based representations [49], [129]. The last group disregards sequence dynamics and segments solely depending on boundary changes, e.g., LSTM+AL [60], TW-FINCH [53], and ABD [54]. Interestingly, these methods outperform previous works on unsupervised temporal segmentation. This is likely owing to the limits of the existing datasets, which are either too small to see the effect of modeling the sequence structure [26] or contain activities that predominantly adhere to a rigid ordering [28].

Two-Stage Learning

Sener and Yao [52] are the first to offer an unsupervised segmentation approach that operates entirely on unlabeled visual data. They propose an iterative discriminative-generative strategy for TAS. Their method alternates between discriminatively learning the action appearance and generatively modeling the temporal structure using a gMM [127]. Although the Mallows framework permits deviations from the ordering, such as missing steps, it cannot model repeated actions, since the ordering of actions is considered a permutable sequence of steps. Follow-up studies on unsupervised learning are evaluated based on their flexibility regarding deviations, missing steps, and ordering repetitions. For example, Prism [128] is a hierarchical generative Bayesian model that permits repeated actions. This model, however, assumes that all the videos adhere to the same underlying ordering. CTE [61] and JVT [62] outperform the Mallows framework [52] (see Tab. 9). CTE [61] first learns continuous temporal embeddings of frame-wise features. Afterwards, these features are clustered, and the video ordering is decoded using Viterbi. A variant of CTE groups videos into activity clusters during the pre-processing phase, rather than receiving activity labels as input. JVT [62] is a joint visual-temporal learning model that predicts future frame features using a temporal embedding similar to CTE's and an encoder-decoder network.
In turn, these two embedding networks are trained in a joint framework to learn useful representations of visual and temporal attributes. The embedding space is then employed for clustering to generate the action segments. The preceding approaches presuppose a fixed sequential ordering determined by the average timestamp of each cluster. The use of a predetermined order for all videos allows missing steps but cannot accommodate deviations or repetitions. CAP [129] offers a method for computing the video order by representing the multi-occurrence of actions using co-occurrence relations. Recent work by Bansal et al. [138] proposes to infer the temporal orderings of the discovered actions per video, based on the assumption that there may be multiple ways to complete a given task. In [138], however, the ordering is still based on the average cluster timestamps.

Self-Supervised Learning

To extract frame-level feature representations in unsupervised learning, Wang et al. [129] propose using self-supervised learning methods. Similar to existing unsupervised works [61], [62], they first cluster these features. Their method, CAP, decodes the frames into actions using the temporal order of action co-occurrence relations. This enables improved modeling of the activities and, by extension, of repetitions. In addition, they compare several self-supervised designs to demonstrate that such feature-learning techniques can increase performance. ASAL [49] provides an efficient approach for the self-supervised learning of feature embeddings through the temporal shuffling of the predicted action segments and the classification of the resulting action sequences as valid or invalid. It also alternates between HMM training and identifying these latent actions.

TABLE 10: Performance of semi-supervised methods evaluated on GTEA, Breakfast Actions, and 50Salads with varying ratios of labeled data (D%). Abbreviated names are feature learning (FL), which learns a new set of inputs in a self-supervised manner; test data (TD), which is used for feature learning; and feature ensembling (FE), a technique to boost performance. Complex activity (CA) indicates the video-level labels used for training, which is only applicable to the Breakfast dataset.

Single-Stage Learning

Previous works consist of an embedding step, in which a joint space is learned using visual and/or temporal information, and a clustering stage applied to the embedded features. Another line of work performs segmentation in a single stage. Aakur et al. [60] present a self-supervised approach for detecting action boundaries using a single pass over the data. Their model, LSTM+AL, predicts the next frame's feature and computes the difference between the predicted and observed features to define action boundaries. UDE [132] is proposed to jointly learn embedding and clustering. Combining visual and positional encoding, they employ contrastive learning for clustering the latent space. Kumar et al. [133] also combine representation learning and clustering into a single framework called TOT. They employ a mix of temporal optimal transport to preserve the temporal order of actions and a temporal coherence loss to retain the affinity across neighboring frames. A recent work, TW-FINCH [53], captures the spatiotemporal similarities between frames and applies a temporally weighted hierarchical clustering algorithm to group semantically coherent video frames. This method does not require training because it is performed directly on the precomputed features to determine action boundaries. Similarly, ABD [54] identifies as action boundaries the abrupt change points along the similarity chain computed between consecutive features.

SEMI-SUPERVISED APPROACHES

Compared to weak supervision, which requires annotation for every training video, semi-supervised approaches only require dense labeling for a small subset of them.
Ding and Yao [48] demonstrate that a small subset of dense annotations provides more information than single-frame supervision on the entire dataset. They claim that such supervision gives not only action information but also valuable action-level priors to guide the learning of the unlabeled videos. Their approach, SemiTAS [48], presents two novel loss functions for semi-supervised TAS, i.e., an action affinity loss and an action continuity loss. Specifically, the affinity loss imposes the action composition and distribution prior by minimizing the KL divergence between the closest labeled-unlabeled video pairs. Likewise, ICC [47] proposes a semi-supervised method for TAS. ICC first learns a new set of feature representations with contrastive learning in an unsupervised way. These features are later used to train a classifier for the semi-supervised setting. The network predictions are used as pseudo-labels to supervise unlabeled videos. With 40% labeled data, ICC performs comparably to its fully-supervised counterparts. We provide detailed performance comparisons in Tab. 10.

CONCLUSIONS AND OUTLOOK

This survey provided an overview of the techniques utilized in TAS, followed by a full literature evaluation as of the time of writing. The enormous quantity of literature demonstrates the growing attention to the subject. Despite the rapid growth of the area, there are still a number of unexplored directions that we invite the community to examine.

Input Features. The mainstream works on TAS take visual feature vectors, either hand-crafted (IDT) [64] or extracted from an off-the-shelf CNN backbone (I3D) [66], as input for each frame. Using pre-computed features as inputs serves as a conventional practice for several other tasks as well, including temporal action localization and action anticipation, as it greatly reduces the computational demands and enables a dedicated comparison of architectures, removing the impact of enhanced feature representations.
Nonetheless, as pointed out by [74], [75], pre-computed features may favor static cues, e.g., scene components, in frames. To our knowledge, no empirical work has compared pre-computed features against end-to-end training from raw images, as the training efficiency and GPU memory requirements are quite demanding.
Segment-Level Modeling. As mentioned in Section 6.2, the vast majority of current methods for the sequential modeling of actions are iterative and independent of feature learning. In addition, sequential modeling approaches are heavily utilized to post-process and refine the per-frame outputs. Exploring how to incorporate sequence-related losses, such as edit scores that penalize segment-wise mistakes, into the learning process is an interesting but under-explored direction. A segment-level loss readily coincides with the first interpretation of the TAS task (Eq. (1)), while the majority of existing techniques take a frame-wise prediction stance (Eq. (2)). We recommend a greater emphasis on solving the task at the segment level.
Forms of Supervision. Procedural video sequences feature enormous temporal redundancy in the supervisory signals, owing to the significant similarity between successive video frames of the same motion. Such redundancy is evidenced by the comparable performance of single-frame supervision and the fully-supervised setting [50], [126]. Yet, even single-frame supervision necessitates that a vigilant annotator skims over every video to verify that no activities are missed. Briefly explored in [50], how to handle missing actions in annotations could be a direction for TAS. An additional aspect of supervision to consider is the inherent uncertainty of action boundaries, as actions occurring in time are frequently not as distinct as objects in space. According to [48], these uncertainties in action boundaries can have a significant impact on model performance.
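The segment-level edit score mentioned above is commonly computed by run-length encoding the frame-wise predictions and taking a normalized Levenshtein distance over the resulting segment label sequences; a minimal version:

```python
# Minimal segmental edit score: run-length encode frame labels, then apply a
# normalized Levenshtein distance over the segment sequences. Standard metric
# shape for TAS; this is a sketch, not any specific codebase's implementation.

def to_segments(frame_labels):
    """Run-length encode frame-wise labels into a segment label sequence."""
    segs = []
    for lab in frame_labels:
        if not segs or segs[-1] != lab:
            segs.append(lab)
    return segs

def edit_score(pred_frames, gt_frames):
    """Segmental edit score in [0, 100]: 100 * (1 - normalized edit distance)."""
    p, g = to_segments(pred_frames), to_segments(gt_frames)
    m, n = len(p), len(g)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if p[i - 1] == g[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return 100.0 * (1.0 - d[m][n] / max(m, n))

gt   = ["pour", "pour", "stir", "stir", "serve"]
pred = ["pour", "pour", "pour", "stir", "serve"]  # over-long "pour", same segment order
print(edit_score(pred, gt))  # → 100.0: the segment sequences match exactly
```

Note that the metric ignores segment durations entirely, which is exactly why it complements frame-wise MoF: an over-segmented prediction with correct frame accuracy is still heavily penalized.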
It is therefore worthwhile to investigate how to define and label action boundaries.
Downstream Tasks. Outputs from temporal segmentation can be utilized in downstream tasks. For instance, [139] segments video streams in order to send alerts regarding missed actions, while [140] uses segmentation as a preliminary step for estimating the remaining time in lengthy surgery videos. Similarly, many approaches in action anticipation use segmentation techniques to represent prior observations with action labels [141], [142], [143], because such labels contain high-level semantic information, which is preferred over visual features for anticipation tasks [86]. An intelligent system that has acquired high-level semantic results via a TAS approach can also summarize the contents of a video, i.e., video summarization [144]. Moreover, transferring TAS to an online setting could make these strategies more useful for real-world applications. The initial attempts towards this objective were made in [54], [145]; yet, both approaches rely on frame-wise precomputed features. The online segmentation of videos with end-to-end models could be a future trend.
In conclusion, TAS is a promising and rapidly evolving scientific topic with numerous potential real-world applications. In this survey, we presented a detailed taxonomy of the problem, a systematic analysis of the fundamental methodologies, and a curated collection of current works classified by levels of supervision. We also highlighted the opportunities and obstacles that lie ahead. We believe that this survey will provide an exposition of the topic and promote the growth of the community.
• Guodong Ding and Angela Yao are with the School of Computing, National University of Singapore, Singapore (emails: [email protected], [email protected]).
• Fadime Sener is a research scientist at Meta Reality Labs (email: [email protected]).
• * indicates equal contribution.
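Downstream consumers such as anticipation models typically operate on labeled segments rather than individual frames. Converting frame-wise TAS output into a segment representation is a simple run-length encoding; a minimal sketch with hypothetical action names:

```python
# Convert frame-wise predictions into (label, start, end) segments suitable
# for downstream tasks (anticipation, summarization). Action names are
# hypothetical examples; end indices are exclusive.

def frames_to_segments(frame_labels):
    """Run-length encode frame labels into (label, start, end) triples."""
    segments = []
    start = 0
    for t in range(1, len(frame_labels) + 1):
        if t == len(frame_labels) or frame_labels[t] != frame_labels[start]:
            segments.append((frame_labels[start], start, t))
            start = t
    return segments

print(frames_to_segments(["crack_egg"] * 3 + ["fry_egg"] * 2))
# → [('crack_egg', 0, 3), ('fry_egg', 3, 5)]
```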
Fig. 1: A TAS model segments an untrimmed video sequence in the temporal dimension into successive actions.
Fig. 2: The taxonomy of existing temporal action segmentation research.
s_{1:N} = (s_1, s_2, ..., s_N)
Fig. 4: Four videos from two complex activities with varying Hungarian matching levels. Colored rectangles denote ground-truth actions, while rounded rectangles denote video segments. Hungarian matching scopes are red dashed rectangles. Black dashed arrows indicate matched segments, while the colored arrows highlight changed matching across levels. With the scope changing from the video (a) to the activity (b), the change in ground truth Y across different videos results in a change of label association for video2 (green) and video3 (orange). A similar change of assignments happens when the matching is done at the global level (c). Unmatched segments (X2 in video2 at video-level matching (a)) are considered as background.
In the matching objective for an assigned pair (n, m), I(X_n, Y_m) is the number of frames with ground-truth class label m that appear in cluster n. When the two parties have an equal number of classes (N = M), the Hungarian matching constructs a bijection. Otherwise, it produces a one-sided perfect matching of size min(N, M). The remaining unmatched clusters are automatically treated as background, and the evaluations are then based on the corresponding results. Depending on the bipartite set's scope, Hungarian matching can be applied at three different levels, as illustrated in Fig. 4.
Fig. 5: Two exemplary types of Temporal Convolutional Networks (TCN) for temporal action segmentation. (a) Encoder-decoder TCNs progressively enlarge the temporal receptive field via pooling. (b) Multi-stage TCNs maintain a fixed temporal resolution with progressively larger dilated convolutions.
Fig. 6: Transformer architecture proposed for temporal action segmentation. The transformer blocks (TF) take as input frames with increasing temporal dilation ratios d. The transformers in the encoder use self-attention (SA) across the frames. The decoder takes the encoder outputs as input and uses cross-attention (CA) with them at each layer.
ℓ̂_{1:N} = (ℓ̂_1, ..., ℓ̂_N) are their corresponding temporal extents; the HMM estimate of the MAP solution (ĉ, ℓ̂) can then be written accordingly.
Fig. 7: Action probability assignment approaches around the action boundary as a function of time. Let t_b denote the estimated boundary between the left action in [t_s^l, t_b) and the right action in [t_b, t_e^r). The color-shaded segments denote the boundary vicinities V_l and V_r. (a) One-hot labels adopt a step function with hard action label assignments. (b) Linear [81] linearly mixes the action probabilities. (c) ABS [48] uses a sigmoid function with a decay proportional to the action duration.
Table 2: Overview of TAS datasets. The view of a dataset may be Egocentric, 3rd Person, Top-view, or Mixed.
Dataset | Year | Duration | #Videos | #Segments | #Activity | #Action | Domain | View
Recorded:
[26] GTEA | 2011 | 0.4h | 28 | 0.5K | 7 | 71 | Cooking | Egocentric
[27] 50Salads | 2013 | 5.5h | 50 | 0.9K | 1 | 17 | Cooking | Top-view
[28] Breakfast | 2014 | 77h | 1712 | 11K | 10 | 48 | Cooking | 3rd Person
[29] Epic-Kitchens | 2020 | 200h | 700 | 90K | - | 4053 | Daily | Egocentric
[30] Ikea ASM | 2021 | 35h | 371 | 16K | 4 | 33 | Furniture | 3rd Person
[31] Meccano | 2021 | 0.3h | 20 | 8.9K | 1 | 61 | Assembly | Egocentric
[32] Assembly101 | 2022 | 513h | 4321 | 1M | 15 | 202 | Assembly | Egocentric + 3rd Person
[33] HA4M | 2022 | 6h | 217 | 4.1K | 1 | 12 | Manufacture | 3rd Person
Online:
[34] YouTube Instructional | 2016 | 7h | 150 | - | 5 | 47 | Mixed | Mixed
[35] YouCookII | 2018 | 176h | 2K | 15K | 89 | - | Cooking | Mixed
[22] CrossTask | 2019 | 376h | 4.7K | 34K | 83 | 107 | Mixed | Mixed
[36] COIN | 2019 | 476h | 11.8K | 46K | 180 | 778 | Mixed | Mixed
semantic labels nor assume any temporal relations between detected boundaries.
Table 3: Temporal dynamics of coarse action segments.
A higher value of the Repetition score indicates more action repetition, while a lower value of the Order Variation score indicates looser action ordering constraints.
Dataset | Repetition r ↑ | Order Variation v ↓
[27] 50Salads | 0.08 | 0.02
[28] Breakfast | 0.11 | 0.15
[32] Assembly101 | 0.18 | 0.05
Table 4: Imbalance Ratio (IR) on four TAS datasets.
Dataset | GTEA | 50Salads | Breakfast | Assembly101
IR | 24 | 6 | 639 | 2604
Table 6: Comparison between model learning requirements in different levels of Hungarian matching for action segmentation.
Table 7: Performance of supervised TAS methods on Breakfast and GTEA. Rep. corresponds to approaches targeting learning better feature representations. TCN lists methods built on Temporal Convolutional Networks. The Impro. group aims to improve the performance of existing algorithms. Lastly, TF uses a transformer as the backbone. FT means fine-tuning. * Motion images are computed by taking the difference between frames across a 2-second window. ** The improvements are computed based on the authors' implementation of MS-TCN. † The test set without labels is used for training.
Method | Year | Input/Feature | GTEA: F1@{10, 25, 50}, Edit, MoF | Breakfast: F1@{10, 25, 50}, Edit, MoF
Rep.:
[97] Bi-LSTM 2016 RGB + flow 66.5 59.0 43.6 - 55.5 - - - - -
[55] ST-CNN 2016 RGB + motion* 58.7 54.4 41.9 60.6 - - - - -
[98] LCDC 2019 RGB 52.4 45.4 55.3 - - - - -
[99] Coupled GAN 2019 RGB + flow 80.1 77.9 69.1 72.8 78.5 - - - - -
[86] TempAgg 2020 I3D - - - - - 59.2 53.9 39.5 54.5 64.5
TCN:
[46] ED-TCN 2017 LCDC [98] 75.4 - - 72.8 65.3 - - - - -
[46] ED-TCN 2017 IDT + FV [100] - - - - - - - - - 43.3
[46] ED-TCN 2017 spatial-CNN [55] 72.2 69.3 56.0 - 64.0 - - - - -
[11] TricorNet 2017 spatial-CNN [55] 76.0 71.1 59.2 - 64.8 - - - - -
[82] TDRN 2018 spatial-CNN [55] 79.2 74.4 62.7 74.1 70.1 - - - - -
[14] MS-TCN 2019 IDT - - - - - 58.2 52.9 40.8 61.4 65.1
[14] MS-TCN 2019 I3D (FT) 87.5 85.4 74.6 81.4 79.2 - - - - -
[14] MS-TCN 2019 I3D 85.8 83.4 69.8 79.0 76.3 52.6 48.1 37.9 61.7 66.3
[85] MS-TCN++ 2020 I3D 87.8 86.2 74.4 82.6 78.9 64.1 58.6 45.9 65.6 67.6
[101] RPGaussian 2019 I3D 88.5 86.8 74.6 84.0 78.5 62.0 56.0 43.7 63.5 64.2
[102] GatedR 2020 I3D 89.1 87.5 72.8 83.5 76.7 71.1 65.7 53.6 70.6 67.7
[83] C2F-TCN 2021 I3D 90.3 88.8 77.7 86.4 80.8 72.2 68.7 57.6 69.6 76.0
Impro.:
[103] MTDA + MS-TCN † 2020 I3D 90.5 88.4 76.2 85.8 80.0 74.2 68.6 56.5 73.6 71.0
[104] SSTDA + MS-TCN † 2020 I3D 90.0 89.1 78.0 86.2 79.8 75.0 69.1 55.2 73.7 70.2
[80] GTRM + MS-TCN** 2020 I3D - - - - - 57.5 54.0 43.3 58.7 65.0
[95] BCN + MS-TCN 2020 I3D 88.5 87.1 77.3 84.4 79.8 68.7 65.5 55.0 66.2 70.4
[96] ASRF + MS-TCN 2020 I3D 89.4 87.8 79.8 83.7 77.3 74.3 68.9 56.1 72.4 67.6
[105] Global2Local + MS-TCN 2021 I3D 89.9 87.3 75.8 84.6 78.5 74.9 69.0 55.2 73.3 70.7
[93] FIFA + MS-TCN 2021 I3D - - - - - 75.5 70.2 54.8 78.5 68.6
[93] FIFA + ASFormer [88] 2021 I3D 90.4 88.6 78.1 86.2 78.9 76.8 71.4 58.9 75.6 73.7
[93] FIFA + UVAST [89] 2022 I3D 82.9 79.4 64.7 90.5 69.8 76.9 71.5 58.0 77.1 69.7
TF:
[88] ASFormer 2021 I3D 90.1 88.8 79.2 84.6 79.7 76.0 70.6 57.4 75.0 73.5
[89] UVAST 2022 I3D 77.1 69.7 54.2 90.5 62.2 76.7 70.0 56.6 77.2 68.2
Table 8: Performance of weakly supervised methods on Breakfast and 50Salads. Tr. indicates transcript supervision. T groups iterative two-stage solutions, while S groups single-stage solutions. The action-set setting is denoted by Set. TS provides timestamp annotations for frames. CA stands for complex activity.
Method | Year | Feature | Breakfast: MoF, IoU, IoD | 50Salads: MoF
Tr. + T:
[114] HTK 2017 IDT + FV 25.9 - - 24.7
[79] HMM/RNN 2017 IDT + FV 33.3 - - 45.5
[81] ISBA 2018 IDT + FV 38.4 24.2 40.6 -
[116] TASL 2021 IDT + FV 49.9 36.6 34.3 -
Tr.
+ S:
[117] ECTC 2016 IDT + FV 27.7 - - -
[12] NN-Viterbi 2018 IDT + FV 42.9 32.2 29.1 49.4
[119] D3TW 2019 IDT + FV 45.7 - - -
[120] CDFL 2019 IDT + FV 50.2 33.7 45.4 54.7
[121] MuCon 2019 IDT + FV 48.5 - - -
[122] DP-DTW 2021 IDT + FV 50.8 35.6 45.1 -
Set:
[13] ActionSet 2018 IDT + FV 23.3 - - -
[123] SCT 2020 IDT + FV 26.6 - - -
[123] SCT 2020 I3D 30.4 - - -
[56] SCV 2020 IDT + FV 30.2 - - -
[124] ACV 2021 IDT + FV 33.4 - - -
TS:
[126] Timestamps 2021 I3D 64.1 - - 75.6
[50] EM-TSS 2022 I3D 63.7 - - 75.9
CA:
[51] CAD 2022 IDT + FV 49.5 - - -
[51] CAD 2022 I3D 53.1 - - -
Table 9. * CAD is included here as it essentially uses the same amount of supervision information as the unsupervised approaches.
Method | Year | Input/Feature | F1(A) | MoF(A) | MoF(V) | Temporal Model | Deviations | Missing | Repetitions
Two Stg.:
[52] Mallows 2018 IDT + FV - 34.6 - Mallows model [127] -
[128] Prism 2019 IDT + FV - 33.5 - hierarchical Bayesian model - -
[61] CTE 2019 IDT + FV 26.4 41.8 - temporal cluster order - -
[62] JVT 2021 IDT + FV 29.9 48.1 52.2 temporal cluster order - -
[51] CAD* 2021 IDT + FV - 49.5 - temporal cluster order - -
[51] CAD* 2021 I3D - 53.1 - temporal cluster order - -
SS:
[49] ASAL 2021 IDT + FV 37.9 52.5 - HMM - -
[129] CAP 2021 SpeedNet [130] 39.2 51.1 - temporal cluster order -
Single Stg.:
[60] LSTM+AL 2019 CNN [131] - - 42.9 - - - -
[132] UDE 2021 I3D 31.9 47.4 74.6 temporal cluster order - -
[133] TOT 2021 IDT + FV 31.0 47.5 - temporal optimal transport - -
[53] TW-FINCH 2021 IDT + FV - - 62.7 - - - -
[54] ABD 2022 IDT + FV - - 64.0 - - - -
REFERENCES
S. Minaee, Y. Y. Boykov, F. Porikli, A. J. Plaza, N. Kehtarnavaz, and D. Terzopoulos, "Image segmentation using deep learning: A survey," TPAMI, 2022.
C. Feichtenhofer, H. Fan, J. Malik, and K. He, "SlowFast networks for video recognition," in ICCV, 2019.
J. Lin, C. Gan, and S. Han, "TSM: Temporal shift module for efficient video understanding," in ICCV, 2019.
M. Patrick, D. Campbell, Y. Asano, I. Misra, F. Metze, C. Feichtenhofer, A. Vedaldi, and J. F. Henriques, "Keeping your eye on the ball: Trajectory attention in video transformers," NeurIPS, 2021.
H.-B. Zhang, Y.-X. Zhang, B. Zhong, Q. Lei, L. Yang, J.-X. Du, and D.-S. Chen, "A comprehensive survey of vision-based human action recognition methods," Sensors, vol. 19, no. 5, p. 1005, 2019.
Y. Kong and Y. Fu, "Human action recognition and prediction: A survey," IJCV, vol. 130, no. 5, pp. 1366-1401, 2022.
H. Xia and Y. Zhan, "A survey on temporal action localization," IEEE Access, vol. 8, 2020.
A. Baraka and M. H. Mohd Noor, "Weakly-supervised temporal action localization: A survey," Neural Computing and Applications, 2022.
N. P. Trong, H. Nguyen, K. Kazunori, and B. Le Hoai, "A comprehensive survey on human activity prediction," in International Conference on Computational Science and Its Applications. Springer, 2017, pp. 411-425.
A. Rasouli, "Deep learning for vision-based prediction: A survey," arXiv preprint arXiv:2007.00095, 2020.
L. Ding and C. Xu, "TricorNet: A hybrid temporal convolutional and recurrent network for video action segmentation," arXiv preprint arXiv:1705.07818, 2017.
A. Richard, H. Kuehne, A. Iqbal, and J. Gall, "NeuralNetwork-Viterbi: A framework for weakly supervised video learning," in CVPR, 2018, pp. 7386-7395.
A. Richard, H. Kuehne, and J. Gall, "Action sets: Weakly supervised action segmentation without ordering constraints," in CVPR, 2018, pp. 5987-5996.
Y. A. Farha and J. Gall, "MS-TCN: Multi-stage temporal convolutional network for action segmentation," in CVPR, 2019.
Z. Shou, D. Wang, and S.-F. Chang, "Temporal action localization in untrimmed videos via multi-stage CNNs," in CVPR, 2016.
T. Lin, X. Liu, X. Li, E. Ding, and S. Wen, "BMN: Boundary-matching network for temporal action proposal generation," in ICCV, 2019.
J. Barbič, A. Safonova, J.-Y. Pan, C. Faloutsos, J. K. Hodgins, and N. S. Pollard, "Segmenting motion capture data into distinct behaviors," in Proceedings of Graphics Interface, 2004.
F. Zhou, F. De la Torre, and J. K. Hodgins, "Aligned cluster analysis for temporal segmentation of human motion," in 8th IEEE International Conference on Automatic Face & Gesture Recognition. IEEE, 2008, pp. 1-7.
——, "Hierarchical aligned cluster analysis for temporal clustering of human motion," TPAMI, vol. 35, no. 3, pp. 582-596, 2012.
S. Venkatesh, D. Moffat, and E. R. Miranda, "Investigating the effects of training set synthesis for audio segmentation of radio broadcast," Electronics, vol. 10, no. 7, p. 827, 2021.
E. B. Fox, M. C. Hughes, E. B. Sudderth, M. I. Jordan et al., "Joint modeling of multiple time series via the beta process with application to motion capture segmentation," The Annals of Applied Statistics, vol. 8, no. 3, pp. 1281-1313, 2014.
D. Zhukov, J.-B. Alayrac, R. G. Cinbis, D. Fouhey, I. Laptev, and J. Sivic, "Cross-task weakly supervised learning from instructional videos," in CVPR, 2019.
K. Zhang, W.-L. Chao, F. Sha, and K. Grauman, "Video summarization with long short-term memory," in ECCV, 2016.
E. Elhamifar and D. Huynh, "Self-supervised multi-task procedure learning from instructional videos," in ECCV, 2020.
Z. Naing and E. Elhamifar, "Procedure completion by learning from partial summaries," in BMVC, 2020.
A. Fathi, X. Ren, and J. M. Rehg, "Learning to recognize objects in egocentric activities," in CVPR, 2011.
S. Stein and S. J. McKenna, "Combining embedded accelerometers with computer vision for recognizing food preparation activities," in UbiComp. ACM, 2013.
H. Kuehne, A. Arslan, and T. Serre, "The language of actions: Recovering the syntax and semantics of goal-directed human activities," in CVPR, 2014.
D. Damen, H. Doughty, G. M. Farinella, A. Furnari, E. Kazakos, J. Ma, D. Moltisanti, J. Munro, T. Perrett, W. Price et al., "Rescaling egocentric vision: Collection, pipeline and challenges for EPIC-KITCHENS-100," IJCV, vol. 130, no. 1, pp. 33-55, 2022.
Y. Ben-Shabat, X. Yu, F. Saleh, D. Campbell, C. Rodriguez-Opazo, H. Li, and S. Gould, "The IKEA ASM dataset: Understanding people assembling furniture through actions, objects and pose," in WACV, 2021.
F. Ragusa, A. Furnari, S. Livatino, and G. M. Farinella, "The MECCANO dataset: Understanding human-object interactions from egocentric videos in an industrial-like domain," in WACV, 2021.
F. Sener, D. Chatterjee, D. Shelepov, K. He, D. Singhania, R. Wang, and A. Yao, "Assembly101: A large-scale multi-view video dataset for understanding procedural activities," in CVPR, 2022.
G. Cicirelli, R. Marani, L. Romeo, M. G. Domínguez, J. Heras, A. G. Perri, and T. D'Orazio, "The HA4M dataset: Multi-modal monitoring of an assembly task for human action recognition in manufacturing," Scientific Data, vol. 9, no. 1, p. 745, 2022.
J.-B. Alayrac, P. Bojanowski, N. Agrawal, J. Sivic, I. Laptev, and S. Lacoste-Julien, "Unsupervised learning from narrated instruction videos," in CVPR, 2016.
L. Zhou, C. Xu, and J. J. Corso, "Towards automatic learning of procedures from web instructional videos," in AAAI, 2018.
Y. Tang, D. Ding, Y. Rao, Y. Zheng, D. Zhang, L. Zhao, J. Lu, and J. Zhou, "COIN: A large-scale dataset for comprehensive instructional video analysis," in CVPR, 2019, pp. 1207-1216.
N. Hussein, E. Gavves, and A. W. Smeulders, "Timeception for complex action recognition," in CVPR, 2019, pp. 254-263.
——, "PIC: Permutation invariant convolution for recognizing long-range activities," arXiv preprint arXiv:2003.08275, 2020.
M. Z. Shou, S. W. Lei, W. Wang, D. Ghadiyaram, and M. Feiszli, "Generic event boundary detection: A benchmark for event segmentation," in ICCV, 2021.
D. Damen, H. Doughty, G. M. Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti, J. Munro, T. Perrett, W. Price, and M. Wray, "Scaling egocentric vision: The EPIC-KITCHENS dataset," in ECCV, 2018.
Y. Li, M. Liu, and J. M. Rehg, "In the eye of beholder: Joint learning of gaze and actions in first person video," in ECCV, 2018, pp. 619-635.
J. Malmaud, J. Huang, V. Rathod, N. Johnston, A. Rabinovich, and K. Murphy, "What's cookin'? Interpreting cooking videos using text, speech and vision," in NAACL, 2015.
F. Sener and A. Yao, "Zero-shot anticipation for instructional activities," in ICCV, 2019, pp. 862-871.
Z. Liu, Z. Miao, X. Zhan, J. Wang, B. Gong, and S. X. Yu, "Large-scale long-tailed recognition in an open world," in CVPR, 2019.
B. Kang, S. Xie, M. Rohrbach, Z. Yan, A. Gordo, J. Feng, and Y. Kalantidis, "Decoupling representation and classifier for long-tailed recognition," arXiv preprint arXiv:1910.09217, 2019.
C. Lea, M. D. Flynn, R. Vidal, A. Reiter, and G. D. Hager, "Temporal convolutional networks for action segmentation and detection," in CVPR, 2017, pp. 156-165.
D. Singhania, R. Rahaman, and A. Yao, "Iterative contrast-classify for semi-supervised temporal action segmentation," in AAAI, vol. 36, no. 2, 2022.
G. Ding and A. Yao, "Leveraging action affinity and continuity for semi-supervised temporal action segmentation," in ECCV, 2022.
J. Li and S. Todorovic, "Action shuffle alternating learning for unsupervised action segmentation," in CVPR, 2021.
R. Rahaman, D. Singhania, A. Thiery, and A. Yao, "A generalized & robust framework for timestamp supervision in temporal action segmentation," in ECCV, 2022.
G. Ding and A. Yao, "Temporal action segmentation with high-level complex activity labels," TMM, 2022.
F. Sener and A. Yao, "Unsupervised learning and segmentation of complex activities from video," in CVPR, 2018, pp. 8368-8376.
S. Sarfraz, N. Murray, V. Sharma, A. Diba, L. Van Gool, and R. Stiefelhagen, "Temporally-weighted hierarchical clustering for unsupervised action segmentation," in CVPR, 2021.
Z. Du, X. Wang, G. Zhou, and Q. Wang, "Fast and unsupervised action boundary detection for action segmentation," in CVPR, 2022.
C. Lea, A. Reiter, R. Vidal, and G. D. Hager, "Segmental spatiotemporal CNNs for fine-grained action segmentation," in ECCV. Springer, 2016, pp. 36-52.
J. Li and S. Todorovic, "Set-constrained Viterbi for set-supervised action segmentation," in CVPR, 2020, pp. 10820-10829.
H. W. Kuhn, "The Hungarian method for the assignment problem," Naval Research Logistics Quarterly, vol. 2, no. 1-2, 1955.
T. Li and C. Ding, "The relationships among various nonnegative matrix factorization methods for clustering," in ICDM, 2006.
J. Chang, Y. Guo, L. Wang, G. Meng, S. Xiang, and C. Pan, "Deep discriminative clustering analysis," arXiv preprint arXiv:1905.01681, 2019.
S. N. Aakur and S. Sarkar, "A perceptual prediction framework for self supervised event segmentation," in CVPR, 2019.
A. Kukleva, H. Kuehne, F. Sener, and J. Gall, "Unsupervised learning of action classes with continuous temporal embedding," in CVPR, 2019.
R. G. VidalMata, W. J. Scheirer, A. Kukleva, D. Cox, and H. Kuehne, "Joint visual-temporal embedding for unsupervised learning of actions in untrimmed sequences," in WACV, 2021.
H. Wang, A. Kläser, C. Schmid, and C.-L. Liu, "Action recognition by dense trajectories," in CVPR. IEEE, 2011.
H. Wang and C. Schmid, "Action recognition with improved trajectories," in ICCV, 2013.
F. Perronnin, J. Sánchez, and T. Mensink, "Improving the Fisher kernel for large-scale image classification," in ECCV, 2010.
J. Carreira and A. Zisserman, "Quo vadis, action recognition? A new model and the Kinetics dataset," in CVPR, 2017.
S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in ICML. PMLR, 2015, pp. 448-456.
W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev et al., "The Kinetics human action video dataset," arXiv preprint arXiv:1705.06950, 2017.
C. Zach, T. Pock, and H. Bischof, "A duality based approach for realtime TV-L1 optical flow," in Joint Pattern Recognition Symposium. Springer, 2007.
A. Richard, "Temporal segmentation of human actions in videos," Ph.D. dissertation, Universitäts- und Landesbibliothek Bonn, 2019.
T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, "A simple framework for contrastive learning of visual representations," in ICML. PMLR, 2020.
R. Qian, T. Meng, B. Gong, M.-H. Yang, H. Wang, S. Belongie, and Y. Cui, "Spatiotemporal contrastive video representation learning," in CVPR, 2021.
G. Lorre, J. Rabarisoa, A. Orcesi, S. Ainouz, and S. Canu, "Temporal contrastive pretraining for video action recognition," in WACV, 2020.
J. Choi, C. Gao, J. C. Messou, and J.-B. Huang, "Why can't I dance in the mall? Learning to mitigate scene bias in action recognition," NeurIPS, 2019.
D.-A. Huang, V. Ramanathan, D. Mahajan, L. Torresani, M. Paluri, L. Fei-Fei, and J. Carlos Niebles, "What makes a video a video: Analyzing temporal information in video understanding models and datasets," in CVPR, 2018.
Y. Li and N. Vasconcelos, "REPAIR: Removing representation bias by dataset resampling," in CVPR, 2019, pp. 9572-9581.
Y. Li, Y. Li, and N. Vasconcelos, "RESOUND: Towards action recognition without representation bias," in ECCV, 2018.
K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, "Learning phrase representations using RNN encoder-decoder for statistical machine translation," in EMNLP, 2014.
A. Richard, H. Kuehne, and J. Gall, "Weakly supervised action learning with RNN based fine-to-coarse modeling," in CVPR, 2017.
Y. Huang, Y. Sugano, and Y. Sato, "Improving action segmentation via graph-based temporal reasoning," in CVPR, 2020.
L. Ding and C. Xu, "Weakly-supervised action segmentation with iterative soft boundary assignment," in CVPR, 2018.
P. Lei and S. Todorovic, "Temporal deformable residual networks for action segmentation in videos," in CVPR, 2018, pp. 6742-6751.
D. Singhania, R. Rahaman, and A. Yao, "Coarse to fine multi-resolution temporal convolutional network," arXiv preprint arXiv:2105.10859, 2021.
O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015.
S.-J. Li, Y. AbuFarha, Y. Liu, M.-M. Cheng, and J. Gall, "MS-TCN++: Multi-stage temporal convolutional network for action segmentation," TPAMI, 2020.
F. Sener, D. Singhania, and A. Yao, "Temporal aggregate representations for long-range video understanding," in ECCV, 2020.
Y. Tang, X. Zhang, L. Ma, J. Wang, S. Chen, and Y.-G. Jiang, "Non-local NetVLAD encoding for video classification," in ECCV, 2018.
F. Yi, H. Wen, and T. Jiang, "ASFormer: Transformer for action segmentation," in BMVC, 2021.
"Unified fully and timestamp supervised temporal action segmentation via sequence to sequence translation."
N Behrmann, S A Golestaneh, Z Kolter, J Gall, M Noroozi, ECCV. N. Behrmann, S. A. Golestaneh, Z. Kolter, J. Gall, and M. Noroozi, "Unified fully and timestamp supervised temporal action seg- mentation via sequence to sequence translation," in ECCV, 2022. Deformable detr: Deformable transformers for end-to-end object detection. X Zhu, W Su, L Lu, B Li, X Wang, J Dai, arXiv:2010.04159arXiv preprintX. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai, "Deformable detr: Deformable transformers for end-to-end object detection," arXiv preprint arXiv:2010.04159, 2020. A direct search optimization method that models the objective and constraint functions by linear interpolation. M J Powell, Advances in optimization and numerical analysis. SpringerM. J. Powell, "A direct search optimization method that models the objective and constraint functions by linear interpolation," in Advances in optimization and numerical analysis. Springer, 1994. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. A Viterbi, IEEE Transactions on Information Theory. 132A. Viterbi, "Error bounds for convolutional codes and an asymp- totically optimum decoding algorithm," IEEE Transactions on Information Theory, vol. 13, no. 2, pp. 260-269, 1967. Fifa: Fast inference approximation for action segmentation. Y Souri, Y A Farha, F Despinoy, G Francesca, J Gall, GCPR. Y. Souri, Y. A. Farha, F. Despinoy, G. Francesca, and J. Gall, "Fifa: Fast inference approximation for action segmentation," in GCPR, 2021. Action recognition from single timestamp supervision in untrimmed videos. D Moltisanti, S Fidler, D Damen, CVPR. D. Moltisanti, S. Fidler, and D. Damen, "Action recognition from single timestamp supervision in untrimmed videos," in CVPR, 2019. Boundary-aware cascade networks for temporal action segmentation. Z Wang, Z Gao, L Wang, Z Li, G Wu, ECCV. Z. Wang, Z. Gao, L. Wang, Z. Li, and G. Wu, "Boundary-aware cascade networks for temporal action segmentation," in ECCV, 2020. 
Alleviating oversegmentation errors by detecting action boundaries. Y Ishikawa, S Kasai, Y Aoki, H Kataoka, WACVY. Ishikawa, S. Kasai, Y. Aoki, and H. Kataoka, "Alleviating over- segmentation errors by detecting action boundaries," in WACV, 2021. A multistream bi-directional recurrent neural network for fine-grained action detection. B Singh, T K Marks, M Jones, O Tuzel, M Shao, CVPR. B. Singh, T. K. Marks, M. Jones, O. Tuzel, and M. Shao, "A multi- stream bi-directional recurrent neural network for fine-grained action detection," in CVPR, 2016, pp. 1961-1970. Learning motion in feature space: Locally-consistent deformable convolution networks for fine-grained action detection. K.-N C Mac, D Joshi, R A Yeh, J Xiong, R S Feris, M N Do, ICCV. K.-N. C. Mac, D. Joshi, R. A. Yeh, J. Xiong, R. S. Feris, and M. N. Do, "Learning motion in feature space: Locally-consistent deformable convolution networks for fine-grained action detec- tion," in ICCV, 2019, pp. 6282-6291. Coupled generative adversarial network for continuous fine-grained action segmentation. H Gammulle, T Fernando, S Denman, S Sridharan, C Fookes, WACVH. Gammulle, T. Fernando, S. Denman, S. Sridharan, and C. Fookes, "Coupled generative adversarial network for contin- uous fine-grained action segmentation," in WACV, 2019. An end-to-end generative framework for video segmentation and recognition," in WACV. H Kuehne, J Gall, T Serre, IEEEH. Kuehne, J. Gall, and T. Serre, "An end-to-end generative framework for video segmentation and recognition," in WACV. IEEE, 2016, pp. 1-8. Frontal low-rank random tensors for fine-grained action segmentation. Y Zhang, K Muandet, Q Ma, H Neumann, S Tang, arXiv:1906.01004arXiv preprintY. Zhang, K. Muandet, Q. Ma, H. Neumann, and S. Tang, "Frontal low-rank random tensors for fine-grained action segmentation," arXiv preprint arXiv:1906.01004, 2019. Gated forward refinement network for action segmentation. D Wang, Y Yuan, Q Wang, Neurocomputing. 407D. Wang, Y. Yuan, and Q. 
Wang, "Gated forward refinement network for action segmentation," Neurocomputing, vol. 407, 2020. Action segmentation with mixed temporal domain adaptation. M.-H Chen, B Li, Y Bao, G Alregib, WACVM.-H. Chen, B. Li, Y. Bao, and G. AlRegib, "Action segmentation with mixed temporal domain adaptation," in WACV, 2020. Action segmentation with joint self-supervised temporal domain adaptation. M.-H Chen, B Li, Y Bao, G Alregib, Z Kira, CVPR. M.-H. Chen, B. Li, Y. Bao, G. AlRegib, and Z. Kira, "Action segmentation with joint self-supervised temporal domain adap- tation," in CVPR, 2020, pp. 9454-9463. Global2local: Efficient structure search for video action segmentation. S.-H Gao, Q Han, Z.-Y Li, P Peng, L Wang, M.-M Cheng, CVPR. S.-H. Gao, Q. Han, Z.-Y. Li, P. Peng, L. Wang, and M.-M. Cheng, "Global2local: Efficient structure search for video action segmen- tation," in CVPR, 2021. Fast saliency based pooling of fisher encoded dense trajectories. S Karaman, L Seidenari, A Del Bimbo, ECCV THUMOS Workshop. 15S. Karaman, L. Seidenari, and A. Del Bimbo, "Fast saliency based pooling of fisher encoded dense trajectories," in ECCV THUMOS Workshop, vol. 1, no. 2, 2014, p. 5. A database for fine grained activity detection of cooking activities. M Rohrbach, S Amin, M Andriluka, B Schiele, CVPR. M. Rohrbach, S. Amin, M. Andriluka, and B. Schiele, "A database for fine grained activity detection of cooking activities," in CVPR, 2012. Temporal sequence modeling for video event detection. Y Cheng, Q Fan, S Pankanti, A Choudhary, CVPR. Y. Cheng, Q. Fan, S. Pankanti, and A. Choudhary, "Temporal sequence modeling for video event detection," in CVPR, 2014. Understanding egocentric activities. A Fathi, A Farhadi, J M Rehg, ICCV. A. Fathi, A. Farhadi, and J. M. Rehg, "Understanding egocentric activities," in ICCV, 2011, pp. 407-414. Modeling actions through state changes. A Fathi, J M Rehg, CVPR. A. Fathi and J. M. Rehg, "Modeling actions through state changes," in CVPR, 2013, pp. 2579-2586. 
From stochastic grammar to bayes network: Probabilistic parsing of complex activity. N N Vo, A F Bobick, CVPR. N. N. Vo and A. F. Bobick, "From stochastic grammar to bayes network: Probabilistic parsing of complex activity," in CVPR, 2014, pp. 2641-2648. Parsing videos of actions with segmental grammars. H Pirsiavash, D Ramanan, CVPR. H. Pirsiavash and D. Ramanan, "Parsing videos of actions with segmental grammars," in CVPR, 2014. Temporal action detection using a statistical language model. A Richard, J Gall, CVPR. A. Richard and J. Gall, "Temporal action detection using a statis- tical language model," in CVPR, 2016. Weakly supervised learning of actions from transcripts. H Kuehne, A Richard, J Gall, CVIU. 163H. Kuehne, A. Richard, and J. Gall, "Weakly supervised learning of actions from transcripts," CVIU, vol. 163, pp. 78-89, 2017. Weakly supervised action labeling in videos under ordering constraints. P Bojanowski, R Lajugie, F Bach, I Laptev, J Ponce, C Schmid, J Sivic, ECCV. P. Bojanowski, R. Lajugie, F. Bach, I. Laptev, J. Ponce, C. Schmid, and J. Sivic, "Weakly supervised action labeling in videos under ordering constraints," in ECCV, 2014. Weakly-supervised action segmentation and alignment via transcript-aware union-of-subspaces learning. Z Lu, E Elhamifar, ICCV. Z. Lu and E. Elhamifar, "Weakly-supervised action segmentation and alignment via transcript-aware union-of-subspaces learn- ing," in ICCV, 2021. Connectionist temporal modeling for weakly supervised action labeling. D Huang, F Li, J C Niebles, ECCV. D. Huang, F. Li, and J. C. Niebles, "Connectionist temporal modeling for weakly supervised action labeling," in ECCV, 2016. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. A Graves, S Fernández, F Gomez, J Schmidhuber, ICML. A. Graves, S. Fernández, F. Gomez, and J. 
Schmidhuber, "Connec- tionist temporal classification: labelling unsegmented sequence data with recurrent neural networks," in ICML, 2006. D3tw: Discriminative differentiable dynamic time warping for weakly supervised action alignment and segmentation. C.-Y Chang, D.-A Huang, Y Sui, L Fei-Fei, J C Niebles, CVPR. C.-Y. Chang, D.-A. Huang, Y. Sui, L. Fei-Fei, and J. C. Niebles, "D3tw: Discriminative differentiable dynamic time warping for weakly supervised action alignment and segmentation," in CVPR, 2019, pp. 3546-3555. Weakly supervised energy-based learning for action segmentation. J Li, P Lei, S Todorovic, ICCV. J. Li, P. Lei, and S. Todorovic, "Weakly supervised energy-based learning for action segmentation," in ICCV, 2019. Fast weakly supervised action segmentation using mutual consistency. Y Souri, M Fayyaz, L Minciullo, G Francesca, J Gall, TPAMIY. Souri, M. Fayyaz, L. Minciullo, G. Francesca, and J. Gall, "Fast weakly supervised action segmentation using mutual con- sistency," TPAMI, 2021. Learning discriminative prototypes with dynamic time warping. X Chang, F Tung, G Mori, CVPR. X. Chang, F. Tung, and G. Mori, "Learning discriminative proto- types with dynamic time warping," in CVPR, 2021. Sct: Set constrained temporal transformer for set supervised action segmentation. M Fayyaz, J Gall, CVPR. M. Fayyaz and J. Gall, "Sct: Set constrained temporal transformer for set supervised action segmentation," in CVPR, 2020. Anchor-constrained viterbi for setsupervised action segmentation. J Li, S Todorovic, CVPR. J. Li and S. Todorovic, "Anchor-constrained viterbi for set- supervised action segmentation," in CVPR, 2021. Set-supervised action learning in procedural task videos via pairwise order consistency. Z Lu, E Elhamifar, CVPR. Z. Lu and E. Elhamifar, "Set-supervised action learning in proce- dural task videos via pairwise order consistency," in CVPR, 2022. Temporal action segmentation from timestamp supervision. Z Li, Y Farha, J Gall, CVPR. Z. Li, Y. 
Abu Farha, and J. Gall, "Temporal action segmentation from timestamp supervision," in CVPR, 2021. Distance based ranking models. M A Fligner, J S Verducci, Journal of the Royal Statistical Society. Series B (Methodological). M. A. Fligner and J. S. Verducci, "Distance based ranking mod- els," Journal of the Royal Statistical Society. Series B (Methodological), pp. 359-369, 1986. Learning procedural abstractions and evaluating discrete latent temporal structure. K Goel, E Brunskill, ICLR. K. Goel and E. Brunskill, "Learning procedural abstractions and evaluating discrete latent temporal structure," in ICLR, 2019. Sscap: Self-supervised co-occurrence action parsing for unsupervised temporal action segmentation. Z Wang, H Chen, X Li, C Liu, Y Xiong, J Tighe, C Fowlkes, WACVZ. Wang, H. Chen, X. Li, C. Liu, Y. Xiong, J. Tighe, and C. Fowlkes, "Sscap: Self-supervised co-occurrence action parsing for unsu- pervised temporal action segmentation," in WACV, 2022. Speednet: Learning the speediness in videos. S Benaim, A Ephrat, O Lang, I Mosseri, W T Freeman, M Rubinstein, M Irani, T Dekel, CVPR. S. Benaim, A. Ephrat, O. Lang, I. Mosseri, W. T. Freeman, M. Rubinstein, M. Irani, and T. Dekel, "Speednet: Learning the speediness in videos," in CVPR, 2020. Very deep convolutional networks for large-scale image recognition. K Simonyan, A Zisserman, arXiv:1409.1556arXiv preprintK. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014. Unsupervised discriminative embedding for sub-action learning in complex activities. S Swetha, H Kuehne, Y S Rawat, M Shah, ICIP. IEEES. Swetha, H. Kuehne, Y. S. Rawat, and M. Shah, "Unsupervised discriminative embedding for sub-action learning in complex activities," in ICIP. IEEE, 2021. Unsupervised action segmentation by joint representation learning and online clustering. S Kumar, S Haresh, A Ahmed, A Konin, M Z Zia, Q.-H Tran, CVPR. S. Kumar, S. Haresh, A. Ahmed, A. 
Konin, M. Z. Zia, and Q.-H. Tran, "Unsupervised action segmentation by joint representation learning and online clustering," in CVPR, 2022. Timestamp-supervised action segmentation with graph convolutional networks. H Khan, S Haresh, A Ahmed, S Siddiqui, A Konin, M Z Zia, Q.-H Tran, IROS. H. Khan, S. Haresh, A. Ahmed, S. Siddiqui, A. Konin, M. Z. Zia, and Q.-H. Tran, "Timestamp-supervised action segmentation with graph convolutional networks," in IROS, 2022. Weakly-supervised alignment of video with text. P Bojanowski, R Lajugie, E Grave, F Bach, I Laptev, J Ponce, C Schmid, ICCV. P. Bojanowski, R. Lajugie, E. Grave, F. Bach, I. Laptev, J. Ponce, and C. Schmid, "Weakly-supervised alignment of video with text," in ICCV, 2015, pp. 4462-4470. Unsupervised semantic parsing of video collections. O Sener, A R Zamir, S Savarese, A Saxena, ICCV. O. Sener, A. R. Zamir, S. Savarese, and A. Saxena, "Unsupervised semantic parsing of video collections," in ICCV, 2015. Learning to segment actions from observation and narration. D Fried, J.-B Alayrac, P Blunsom, C Dyer, S Clark, A Nematzadeh, ACL. D. Fried, J.-B. Alayrac, P. Blunsom, C. Dyer, S. Clark, and A. Ne- matzadeh, "Learning to segment actions from observation and narration," in ACL, 2020. My view is the best view: Procedure learning from egocentric videos. S Bansal, C Arora, C Jawahar, ECCV. S. Bansal, C. Arora, and C. Jawahar, "My view is the best view: Procedure learning from egocentric videos," in ECCV, 2022. Generating notifications for missing actions: Don't forget to turn the lights off!" in ICCV. B Soran, A Farhadi, L Shapiro, B. Soran, A. Farhadi, and L. Shapiro, "Generating notifications for missing actions: Don't forget to turn the lights off!" in ICCV, 2015, pp. 4669-4677. Unsupervised temporal video segmentation as an auxiliary task for predicting the remaining surgery duration," in OR 2.0 Context-Aware Operating Theaters and Machine Learning in Clinical Neuroimaging. 
D Rivoir, S Bodenstedt, F Bechtolsheim, M Distler, J Weitz, S Speidel, SpringerD. Rivoir, S. Bodenstedt, F. von Bechtolsheim, M. Distler, J. Weitz, and S. Speidel, "Unsupervised temporal video segmentation as an auxiliary task for predicting the remaining surgery duration," in OR 2.0 Context-Aware Operating Theaters and Machine Learning in Clinical Neuroimaging. Springer, 2019, pp. 29-37. When will you do what? anticipating temporal occurrences of activities. Y A Farha, A Richard, J Gall, CVPR. Y. A. Farha, A. Richard, and J. Gall, "When will you do what? anticipating temporal occurrences of activities," in CVPR, 2018. Time-conditioned action anticipation in one shot. Q Ke, M Fritz, B Schiele, CVPR. Q. Ke, M. Fritz, and B. Schiele, "Time-conditioned action antici- pation in one shot," in CVPR, 2019. Forecasting future action sequences with neural memory networks. H Gammulle, S Denman, S Sridharan, C Fookes, BMVC. H. Gammulle, S. Denman, S. Sridharan, and C. Fookes, "Fore- casting future action sequences with neural memory networks," in BMVC, 2019. Video summarization using deep neural networks: A survey. E Apostolidis, E Adamantidou, A I Metsai, V Mezaris, I Patras, Proceedings of the IEEE. 10911E. Apostolidis, E. Adamantidou, A. I. Metsai, V. Mezaris, and I. Patras, "Video summarization using deep neural networks: A survey," Proceedings of the IEEE, vol. 109, no. 11, 2021. Weakly-supervised online action segmentation in multiview instructional videos. R Ghoddoosian, I Dwivedi, N Agarwal, C Choi, B Dariush, CVPR. R. Ghoddoosian, I. Dwivedi, N. Agarwal, C. Choi, and B. Dar- iush, "Weakly-supervised online action segmentation in multi- view instructional videos," in CVPR, 2022.
{'fraction_non_alphanumeric': 0.05722218163220574, 'fraction_numerical': 0.02723752465843501, 'mean_word_length': 4.320130602091188, 'pattern_counts': {'":': 0, '<': 0, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 3, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 20, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'Temporal action segmentation (TAS) from videos aims at densely identifying video frames in minutes-long videos with multiple action classes. As a long-range video understanding task, researchers have developed an extended collection of methods and examined their performance using various benchmarks. Despite the rapid growth of TAS techniques in recent years, no systematic survey has been conducted in these sectors. In this survey, we analyze and summarize the most significant contributions and trends to this endeavor. In particular, we first examine the task definition, common benchmarks, types of supervision, and prevalent evaluation measures. In addition, we systematically investigate two essential techniques of this topic, i.e., frame representation, and temporal modeling, which have been studied extensively in the literature. We then conduct a thorough review of existing TAS works categorized by their levels of supervision and conclude our survey by identifying and emphasizing several research gaps. In addition, we have curated a list of TAS resources, which is available at https://github.com/atlas-eccv22/awesome-temporal-action-segmentation.!Related TasksThere are several tasks in video understanding which are closely related to TAS. They can be distinguished with TAS based on their data domain, identification of segment semantics as well as the reasoning of temporal dynamics between segments. The related tasks are described below and compared in Tab. 1.Temporal Action Detection / Localization (TAD/L) [15],[16] detects the start and end of action instances and predicts semantic labels simultaneously. TAD/L works more with general videos that allow overlap in the actions while TAS works with procedural videos to find a change point between actions.Sequence Segmentation (SS) is popular in other domains, including motion capture data [17], [18], [19] and audio signals [20]. 
Most approaches are developed to segment individual sequences [17], [18], [19] while some [21]focuses on multiple motion capture recordings simultaneously. However, such data is lower-dimensional and exhibits much less variance than video.Key-Frame Detection (KFD) identifies single characteristic frames or key-steps [22], [23], [24], [25] for actions. LikeTAS, KFD requires modeling the temporal relations between actions; however, it is out of the task scope to find the boundary where the actions transition.Complex Activity Classification (CAC) [37], [38] targetsclassifying the complex activity of procedural videos. Such a task is similar to TAS in the way it models the temporal relations of actions. Still, it is not concerned with the individual frames as CAC is aimed to determine the complex activity class of the full action sequence. (GEBD) [39] localizes the moments where human perceive as event boundaries. The boundaries signify changes in action, subject, and environment. In comparison, GEBD does not work withGeneric Event Boundary Detection', 'arxivid': '2210.10352', 'author': ['Journal Of L A T E X Class ', 'Files '], 'authoraffiliation': [], 'corpusid': 252992530, 'doi': '10.48550/arxiv.2210.10352', 'github_urls': ['https://github.com/atlas-eccv22/awesome-temporal-action-segmentation.!Related'], 'n_tokens_mistral': 40527, 'n_tokens_neox': 35511, 'n_words': 20939, 'pdfsha': 'd119ef44c0489b699e3524a8e58fd6b192c06305', 'pdfurls': ['https://export.arxiv.org/pdf/2210.10352v3.pdf'], 'title': ['Temporal Action Segmentation: An Analysis of Modern Techniques', 'Temporal Action Segmentation: An Analysis of Modern Techniques'], 'venue': []}
arxiv
Q-balls in K-field theory

30 May 2023

Aníbal Faúndez and Radouane Gannouji
Instituto de Física, Pontificia Universidad Católica de Valparaíso, Av. Brasil 2950, Valparaíso, Chile

(Dated: May 31, 2023)

We study the existence and stability of Q-balls in noncanonical scalar field theories $K(|\Phi|^2, X)$, where $\Phi$ is a complex scalar field and $X$ is the kinetic term. We extend the Vakhitov-Kolokolov stability criterion to K-field theories and derive the condition for the perturbations to have a well-posed Cauchy problem. We find that $K_{,X} > 0$ and $K_{,X} + X K_{,XX} > 0$ are necessary but not sufficient conditions: the perturbations define a strongly hyperbolic system if $(K_{,X} - 2\phi'^2 K_{,XX})(K_{,X} + 2\omega^2 \phi^2 K_{,XX}) > 0$. For all modifications studied, we find that perturbations propagate at a speed different from that of light. Generically, the noncanonical scalar field can lower the charge and energy of the Q-ball and therefore improve its stability.

I. INTRODUCTION

Q-balls are particle-like objects that can be defined as lumps of a singularity-free scalar field with finite energy. They were originally discovered in [1] and independently rediscovered in [2]. Unlike solitons, they do not carry a topological charge but a Noether charge, based originally on a global U(1) symmetry, and therefore belong to the class of nontopological solitons. The scalar field is trapped in some region of space by its nonlinear self-interaction, forming a particle-like object carrying charge and energy. Q-balls can be produced via many mechanisms, which makes them particularly interesting in cosmology.
Indeed, they could be produced in inflationary models such as natural inflation [3,4], where, if the global symmetry of a complex scalar field is spontaneously broken, the inflaton emerges as the Goldstone boson with a naturally flat potential protected by the shift symmetry. Also, in supersymmetric extensions of the standard model (see e.g. [5]), Q-balls emerge naturally, with the global charge carried by baryon or lepton number. For example, the Affleck-Dine mechanism [6,7] uses the supersymmetric flat directions to generate baryogenesis. In this context, some of these flat directions can be parametrized as a complex scalar field, which is in general a condensate of squarks, sleptons and Higgs fields. This condensate can be unstable and form Q-balls [8]. Of course, the most interesting property of Q-balls is their stability, because they could then be considered dark matter candidates [9,10]. For that reason, stability will be our main focus in this paper, along with some properties related to the existence of Q-balls. The classical stability was analyzed in [11,12], where it was found that, for a Q-ball of frequency ω and charge Q, stability reduces to the condition dQ/dω < 0. It was shown in [13] that the stability of gauged Q-balls is not related to this condition. (* [email protected]; † [email protected].) It would be interesting to see the extension of this criterion to global-charge Q-balls in modified gravity theories. We will study three types of stability conditions that appear in the literature [15], namely classical stability, as mentioned above, absolute stability, and stability against fission [12]. In most papers, a canonical scalar field is assumed, as it appears naturally at low energies of various theories. But studying Q-balls in the early universe might modify this simple picture. Indeed, e.g.
higher dimensions naturally produce scalar fields with nonlinear kinetic terms, such as the D3-brane [16] or braneworld gravity [17]. Also, in string theory, a rolling tachyon has a Dirac-Born-Infeld (DBI) type of action [18]. It is therefore natural to consider noncanonical scalar fields. Q-balls with a DBI-type kinetic term were studied in [19], along with their stability using catastrophe theory [20]. In this context, we will study Q-balls for a complex K-field, also known as K-inflation [21] or K-essence [22]. The plan of the paper is as follows. We introduce the model before discussing the stability conditions encountered in the literature. In the next section, we analyze the range of existence of the Q-balls and define the energy conditions for these solutions. Finally, we study numerically the properties of the Q-balls before deriving the equations for the perturbations. We analyze the strong hyperbolicity of these equations along with the stability of the Q-ball, before concluding.

II. Q-BALLS

Let us consider the Lagrangian density

$\mathcal{L} = K(|\Phi|^2, X)$   (2.1)

where $K$ is a generic function of a complex scalar field $\Phi$ and the kinetic term $X = -\partial_\mu \Phi \, \partial^\mu \Phi^*$. The equation of motion is

$\nabla_\mu (K_{,X} \partial^\mu \Phi) + \Phi K_{,|\Phi|^2} = 0$   (2.2)

where we have used the notation $K_{,A} \equiv \partial K/\partial A$. The model admits a global U(1) symmetry, with associated Noether current

$j^\mu = i K_{,X} \left( \Phi^* \partial^\mu \Phi - \Phi \, \partial^\mu \Phi^* \right)$   (2.3)

This current is conserved on-shell, $\partial_\mu j^\mu = 0$. The corresponding conserved scalar charge (or total particle number) is

$Q = \int d^3x \, j^0 = i \int d^3x \, K_{,X} \left( \Phi \dot\Phi^* - \dot\Phi \Phi^* \right)$   (2.4)

To obtain the energy, we define the momenta canonically conjugate to the variables $\Phi$ and $\Phi^*$,

$\pi_\Phi = \partial\mathcal{L}/\partial\dot\Phi = K_{,X} \dot\Phi^*$   (2.5)

$\pi_{\Phi^*} = \partial\mathcal{L}/\partial\dot\Phi^* = K_{,X} \dot\Phi$   (2.6)

so the Hamiltonian density is

$\mathcal{H} = \pi_\Phi \dot\Phi + \pi_{\Phi^*} \dot\Phi^* - \mathcal{L} = 2|\dot\Phi|^2 K_{,X} - K$   (2.7)

The energy of the system is then

$E = \int d^3x \left( 2|\dot\Phi|^2 K_{,X} - K \right)$   (2.8)

We are looking for solutions that minimize the energy for a given charge Q.
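As a sanity check on these definitions, one can verify symbolically that, for the stationary ansatz $\Phi = \phi\, e^{i\omega t}$ used below, the charge density $j^0$ of Eqs. (2.3)-(2.4) reduces to $2\omega\phi^2 K_{,X}$, so that $Q = 2\omega \int d^3x\, \phi^2 K_{,X}$. A minimal sympy sketch (keeping $K_{,X}$ as an undetermined symbol; the variable names are ours):

```python
import sympy as sp

t, omega, phi = sp.symbols('t omega phi', real=True, positive=True)
KX = sp.Symbol('K_X', real=True)          # K_{,X}, kept symbolic

Phi = phi * sp.exp(sp.I * omega * t)      # stationary ansatz Phi = phi e^{i omega t}
Phid = sp.diff(Phi, t)

# j^0 = i K_{,X} (Phi conj(Phi-dot) - Phi-dot conj(Phi)), cf. Eqs. (2.3)-(2.4)
j0 = sp.I * KX * (Phi * sp.conjugate(Phid) - Phid * sp.conjugate(Phi))

assert sp.simplify(j0 - 2 * omega * phi**2 * KX) == 0
print("j0 reduces to 2*omega*phi^2*K_X for the stationary ansatz")
```

This is the form of the charge used implicitly when differentiating Q with respect to ω later on.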
For that, we define the functional

$E_\omega = E + \omega \left[ Q - i \int d^3x \, K_{,X} \left( \Phi \dot\Phi^* - \dot\Phi \Phi^* \right) \right]$   (2.9)

where $\omega$ is a Lagrange multiplier which enforces the given charge Q. We have

$E_\omega = \omega Q + \int d^3x \left[ K_{,X} \left( 2|\dot\Phi|^2 - i\omega (\Phi \dot\Phi^* - \dot\Phi \Phi^*) \right) - K \right] = \omega Q + \int d^3x \left[ K_{,X} |\dot\Phi - i\omega\Phi|^2 + K_{,X} \left( |\dot\Phi|^2 - \omega^2 |\Phi|^2 \right) - K \right]$   (2.10)

In the case of a canonical scalar field, $K = X - V(|\Phi|^2)$, we have

$E_\omega = \omega Q + \int d^3x \left[ |\dot\Phi - i\omega\Phi|^2 - \omega^2 |\Phi|^2 + |\vec\nabla\Phi|^2 + V(|\Phi|^2) \right]$   (2.11)

where we used $X = -\partial_\mu \Phi \, \partial^\mu \Phi^* = |\dot\Phi|^2 - |\vec\nabla\Phi|^2$. We can therefore conclude that, for a given charge Q, the energy is minimized when $\dot\Phi - i\omega\Phi = 0$, i.e. for $\Phi(t, \vec{x}) = \phi(\vec{x}) e^{i\omega t}$ [12]. This simple argument for the canonical scalar field cannot be easily generalized to the K-field. But we observe that, in the general case, if $\Phi(t, \vec{x}) = \phi(\vec{x}) e^{i\omega t}$,

$E_\omega = \omega Q - \int d^3x \, K$   (2.12)

which implies that the extrema of the energy (for fixed charge) coincide with the extrema of the action. Therefore solutions of the type $\Phi(t, \vec{x}) = \phi(\vec{x}) e^{i\omega t}$ extremize the energy. Even though we do not know whether other solutions could also extremize the energy functional, we will assume this time-dependent phase throughout the paper. For a given model, the only parameter that characterizes the energy E and the charge Q is $\omega$; we can therefore regard the energy and the charge as functions of $\omega$. Differentiating the energy, we get

$\dfrac{dE}{d\omega} = \int d^3x \left( 2\omega \phi^2 K_{,X} + 4\omega^3 \phi^4 K_{,XX} \right)$   (2.13)

Performing the same differentiation of the charge Q, we find

$\dfrac{dE}{d\omega} = \omega \dfrac{dQ}{d\omega}$   (2.14)

which extends to the K-field the result of [11]. When $dQ/d\omega = 0$, also $dE/d\omega = 0$, which corresponds to a simultaneous extremum of the charge and the energy; such points appear as cusps in the $E(Q)$ diagram. When $dQ/d\omega \neq 0$, we obtain

$\dfrac{dE}{dQ} = \omega$   (2.15)

which is the generic relation for a U(1) Q-ball.

III. STABILITY

Usually, three different stability criteria are discussed in the literature.
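The completion-of-the-square step in Eq. (2.10) rests on the algebraic identity $2|\dot\Phi|^2 - i\omega(\Phi\dot\Phi^* - \dot\Phi\Phi^*) = |\dot\Phi - i\omega\Phi|^2 + |\dot\Phi|^2 - \omega^2|\Phi|^2$, valid for any complex $\Phi$ and $\dot\Phi$. A quick symbolic verification (a sketch; the real components a, b, c, d are our parametrization):

```python
import sympy as sp

omega, a, b, c, d = sp.symbols('omega a b c d', real=True)
Phi  = a + sp.I * b       # generic complex field value
Phid = c + sp.I * d       # generic complex time derivative

absq = lambda z: sp.expand(z * sp.conjugate(z))   # |z|^2 as a polynomial

lhs = 2 * absq(Phid) - sp.I * omega * (Phi * sp.conjugate(Phid) - Phid * sp.conjugate(Phi))
rhs = absq(Phid - sp.I * omega * Phi) + absq(Phid) - omega**2 * absq(Phi)

assert sp.simplify(lhs - rhs) == 0
print("completion-of-the-square identity verified")
```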
The first condition requires that a given Q-ball should not decay into smaller Q-balls, sometimes referred to as stability against fission [12]. In that case, stability translates into

$E(Q_1 + Q_2) < E(Q_1) + E(Q_2)$   (3.1)

and, taking derivatives with respect to both charges $(Q_1, Q_2)$, we obtain the equivalent condition $d^2E/dQ^2 < 0$; using Eq. (2.15), it reduces to $dQ/d\omega < 0$. Notice the similarity with the more generic Vakhitov-Kolokolov stability criterion [23] (or spectral stability). Of course, because of Eq. (2.14), we could equivalently consider $dE/d\omega < 0$. The second stability criterion considers decay into free particles of mass $M = \sqrt{V''(0)/2}$. To avoid the decay of a Q-ball into Q free particles of rest mass M, we need $E(Q) < MQ$. Finally, the last criterion considers the time evolution of small perturbations, the so-called classical stability, which we analyze later. Notice that a simple stability criterion has been proved from catastrophe theory [24]: in the diagram $E(Q)$, the lowest branch corresponds to the stable soliton while the upper branch is unstable. This condition will be found to be equivalent to linear stability.

IV. EXISTENCE

In this section, we briefly summarize the conditions for the existence of Q-balls. These conditions are obtained by constraining the shape of the potential. Considering a flat, spherically symmetric spacetime and $\Phi = \phi(r) e^{i\omega t}$, Eq. (2.2) becomes

$K_{,X} \left( \phi''(r) + \dfrac{2}{r} \phi'(r) + \omega^2 \phi(r) \right) + \phi'(r) X'(r) K_{,XX} + \phi'(r)^2 K_{,\phi X} + \dfrac{1}{2} K_{,\phi} = 0$   (4.1)

with $X = \omega^2 \phi(r)^2 - \phi'(r)^2$.
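Condition (3.1) is simply subadditivity of $E(Q)$, which follows from concavity, $d^2E/dQ^2 < 0$. A toy numerical illustration ($E(Q) = Q^{3/4}$ is an arbitrary concave profile chosen for the check, not a Q-ball solution):

```python
# Toy check of the fission condition (3.1): a concave E(Q) is subadditive,
# so splitting a Q-ball of charge Q1 + Q2 into two smaller ones costs energy.
# E(Q) = Q**0.75 is an arbitrary concave profile, not the paper's E(Q).
E = lambda Q: Q**0.75

for Q1, Q2 in [(1.0, 1.0), (0.3, 2.7), (5.0, 0.01)]:
    assert E(Q1 + Q2) < E(Q1) + E(Q2), (Q1, Q2)   # Eq. (3.1) holds
print("E(Q1+Q2) < E(Q1) + E(Q2) for all tested splittings")
```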
We see that the ω^2 term acts as a tachyonic contribution to the mass of the field, which produces solitonic solutions that are otherwise absent for ω = 0. Considering only solutions with finite energy, the energy functional (2.8),

E = ∫d^3x (φ'(r)^2 + ω^2φ^2 + V(φ)),

implies that (φ, φ') → 0 for r → ∞ and V(0) = 0 (we assumed V(φ) > 0). It is easier to use the analogy with a particle in Newtonian mechanics, namely replacing φ → x and r → t, which gives ẍ + (2/t)ẋ + W'_eff(x) = 0, where W_eff(x) = −V_eff(x). Looking for a trajectory φ(r), or equivalently x(t), we need to impose x(∞) = 0 to obtain a finite-energy solution. Therefore, the problem reduces to classifying the different trajectories of the equivalent particle giving finite energy. It is easy to show [2] that we need to impose W''_eff(0) < 0 and W_eff(φ) > 0 around φ(r = 0). These conditions translate into V''(0) > 2ω^2 as well as min[V(φ)/φ^2] ≤ ω^2. Thus, nonrenormalizable potentials have to be considered, the simplest being V(φ) = m^2φ^2 − bφ^4 + λφ^6. The previous constraints reduce to

0 < m^2 − b^2/(4λ) < ω^2 ≤ m^2   (4.4)

The positivity of m^2 − b^2/(4λ) is imposed by demanding that V(0) be a global minimum. In this paper, we normalize [25] the parameters such that λ = 1 and b = 2, which implies m > 1. Therefore we will consider m^2 = 1.1, which implies 0.32 < ω ≤ 1.05. The Q-ball exists only in this range of frequencies. It is important to mention that this range changes for K-fields. For example, in a model where K = X + αX^2 − V(φ), we have around r = 0, using the condition φ'(r = 0) = 0,

φ''(r) + W'_eff(φ) ≃ 0  with  W'_eff = ω^2φ − (m^2 − 2bφ^2 + 3λφ^4)φ/(1 + 2αω^2φ^2)   (4.5)

Therefore the condition W_eff > 0 for some range of the scalar field implies a different value for the minimum of ω. For our parameters, we found with good accuracy that ω_min ≃ (1 + α/30)/√10, while ω_max remains unchanged.
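The window (4.4) can be evaluated directly. A minimal sketch reproducing the quoted numbers (the text rounds them to 0.32 < ω ≤ 1.05); the function name is ours:

```python
import math

# Existence window for Q-balls with V(phi) = m^2 phi^2 - b phi^4 + lam phi^6:
#   omega_min^2 = min over phi of V(phi)/phi^2 = m^2 - b^2/(4 lam)
#   omega_max^2 = V''(0)/2 = m^2
def omega_window(m2, b, lam):
    """Return (omega_min, omega_max) for the sextic potential."""
    w2min = m2 - b**2 / (4.0 * lam)
    if not (0.0 < w2min < m2):
        raise ValueError("V(0) is not the global minimum for these parameters")
    return math.sqrt(w2min), math.sqrt(m2)

# Paper's normalization: lam = 1, b = 2, m^2 = 1.1
wmin, wmax = omega_window(m2=1.1, b=2.0, lam=1.0)
print(f"{wmin:.3f} < omega <= {wmax:.3f}")  # 0.316 < omega <= 1.049
```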
Another important condition for the existence of the Q-ball is the nature of the differential equation. We have an equation of the form

(K_{,X} − 2φ'^2 K_{,XX})φ'' + F(φ, φ') = 0   (4.6)

To avoid singular points, we need to impose K_{,X} − 2φ'^2 K_{,XX} ≠ 0. Therefore, for any model smoothly connected to the canonical case, where K_{,X} − 2φ'^2 K_{,XX} = 1, we should impose K_{,X} − 2φ'^2 K_{,XX} > 0. Considering the model K = X + αX^2 − V(φ^2), we have 1 + 2αω^2φ^2 − 6αφ'^2 > 0. Around the origin, we have φ' = 0, which implies the condition 1 + 2αω^2φ_0^2 > 0; therefore large negative values of α are not allowed.

V. ENERGY CONDITIONS

For this type of model, the fluid interpretation is not always suitable, because the kinetic term does not have a definite sign: it is mostly positive in the interior of the Q-ball and becomes negative near its surface. Deep inside the Q-ball, however, we can use the hydrodynamical interpretation of the scalar field, by defining the energy-momentum tensor

T_{μν} = Kg_{μν} + K_{,X}(∂_μΦ∂_νΦ* + ∂_μΦ*∂_νΦ)   (5.1)

from which we define the energy density ρ = 2|Φ̇|^2 K_{,X} − K = 2ω^2φ(r)^2 K_{,X} − K, the radial pressure P_r = 2φ'(r)^2 K_{,X} + K, and finally the tangential pressure P_t = K. These quantities can be converted into the pressure P = (P_r + 2P_t)/3 and the shear force S = P_r − P_t. Notice that the energy defined from E = ∫d^3x T_{00} corresponds to Eq. (2.8). The hydrodynamical approach makes it easy to obtain the energy conditions, such as the strong energy condition (SEC)

K_{,X} ≥ 0 ,  K + (ω^2φ^2 + φ'^2)K_{,X} ≥ 0   (5.2)

the dominant energy condition (DEC)

K_{,X} ≥ 0 ,  (ω^2φ^2 − φ'^2)K_{,X} − K ≥ 0   (5.3)

the weak energy condition (WEC)

K_{,X} ≥ 0 ,  2ω^2φ^2 K_{,X} − K ≥ 0   (5.4)

and the null energy condition (NEC)

K_{,X} ≥ 0   (5.5)

We notice that K_{,X} ≥ 0 is common to all energy conditions.
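The pointwise conditions (5.2)-(5.5) are easy to tabulate for the model K = X + αX^2 − V studied below. A minimal checker; the sample field values are arbitrary illustrations, not an actual Q-ball profile:

```python
def energy_conditions(phi, dphi, omega, alpha, V):
    """Evaluate NEC/WEC/DEC/SEC pointwise for K = X + alpha X^2 - V(phi),
    with X = omega^2 phi^2 - phi'^2 for the profile Phi = phi(r) e^{i omega t}."""
    X = omega**2 * phi**2 - dphi**2
    K = X + alpha * X**2 - V(phi)
    KX = 1.0 + 2.0 * alpha * X          # K_{,X}
    return {
        "NEC": KX >= 0.0,
        "WEC": KX >= 0.0 and 2.0 * omega**2 * phi**2 * KX - K >= 0.0,
        "DEC": KX >= 0.0 and (omega**2 * phi**2 - dphi**2) * KX - K >= 0.0,
        "SEC": KX >= 0.0 and K + (omega**2 * phi**2 + dphi**2) * KX >= 0.0,
    }

# Paper's potential with lam = 1, b = 2, m^2 = 1.1
V = lambda p: 1.1 * p**2 - 2.0 * p**4 + p**6
print(energy_conditions(phi=0.9, dphi=-0.3, omega=0.7, alpha=0.0, V=V))
```

As expected from (5.5), a sufficiently negative α makes K_{,X} < 0 in the interior and violates the NEC.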
VI. NUMERICAL ANALYSIS

As we have mentioned, Q-balls are finite-energy objects and therefore have a finite spatial extension, which imposes the asymptotic condition φ(∞) = 0. We have therefore used a shooting method for each value of the frequency ω, with mixed boundary conditions φ'(0) = 0 and φ(∞) = 0. In practice, we have integrated the system from r = 10^{-30} to some value r_max, and demanded that the solution remain unchanged if we increase r_max. In Fig. 1, we have considered the standard model K(X) = X − V(|Φ|^2) with the potential defined in Sec. IV. For lower frequencies, or the thin-wall limit, the scalar field is constant and at some radius (often considered as the Q-ball radius) drops rapidly to zero, while for larger values of ω, also known as the thick-wall limit, the profile is shallower. The latter solutions will turn out to be unstable. In the same figure, we have represented the energy and the charge. The energy and charge seem to diverge at the frequencies ω_min and ω_max. Also, E(ω) and Q(ω) reach their minimum at the same frequency, therefore defining a cusp in the energy-versus-charge diagram. We also show the stability conditions of the Q-balls. The stability criterion against decay is stronger than the fission stability condition. In the (Q, E) plot, it is easy to identify the stable Q-ball: for every given charge Q, two Q-balls exist, and the one with the smallest energy corresponds to the solution stable under fission. We will see later that it also corresponds to the stable solution under linear perturbations. Q-balls also have excited states, which correspond to solutions with nodes but with the same limit at infinity, namely φ(∞) = 0. In Fig. 2, we show the first and second excited modes for a given frequency ω. To fulfill the boundary conditions for excited states, the initial conditions must be extremely fine-tuned. The excited states have, as expected, larger energy but also larger charge.
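The shooting procedure described above can be sketched in a few lines for the canonical case. This is an illustrative reimplementation under our own choices, not the paper's code: we start from r_0 = 0.01 rather than 10^{-30}, use a fixed-step RK4 integrator, classify a trial φ(0) as an overshoot (φ crosses zero) or an undershoot (φ turns back up), and bisect between the two; in the mechanical analogy of Sec. IV, φ(0) must lie below the maximum of W_eff, and ω = 0.7 is an arbitrary frequency inside the existence window.

```python
import math

# Canonical Q-ball profile (K = X - V):
#   phi'' + (2/r) phi' + omega^2 phi - V'(phi)/2 = 0,
# with V(phi) = m^2 phi^2 - 2 phi^4 + phi^6 and m^2 = 1.1 (paper's normalization).
M2, OMEGA = 1.1, 0.7

def phidd(r, phi, dphi):
    # phi'' = -(2/r) phi' + (m^2 - omega^2) phi - 4 phi^3 + 3 phi^5
    return -2.0 / r * dphi + (M2 - OMEGA**2) * phi - 4.0 * phi**3 + 3.0 * phi**5

def classify(phi0, r0=0.01, dr=0.005, r_max=100.0):
    """'over' if phi crosses zero (phi(0) too large), 'under' if it turns back up."""
    r, phi, dphi = r0, phi0, 0.0
    while r < r_max:
        # one RK4 step for the system (phi, phi')
        k1p, k1d = dphi, phidd(r, phi, dphi)
        k2p, k2d = dphi + 0.5*dr*k1d, phidd(r + 0.5*dr, phi + 0.5*dr*k1p, dphi + 0.5*dr*k1d)
        k3p, k3d = dphi + 0.5*dr*k2d, phidd(r + 0.5*dr, phi + 0.5*dr*k2p, dphi + 0.5*dr*k2d)
        k4p, k4d = dphi + dr*k3d,     phidd(r + dr,     phi + dr*k3p,     dphi + dr*k3d)
        phi  += dr*(k1p + 2.0*k2p + 2.0*k3p + k4p)/6.0
        dphi += dr*(k1d + 2.0*k2d + 2.0*k3d + k4d)/6.0
        r += dr
        if phi < 0.0:
            return "over"
        if dphi > 0.0:
            return "under"
    return "under"

# phi(0) must lie below the maximum of W_eff, at phi_max^2 = [4 + sqrt(16 - 12(m^2 - omega^2))]/6.
phi_max = math.sqrt((4.0 + math.sqrt(16.0 - 12.0*(M2 - OMEGA**2))) / 6.0)
lo, hi = 0.63, phi_max - 1e-3
for _ in range(10):                     # push hi toward the maximum until it overshoots
    if classify(hi) == "over":
        break
    hi = phi_max - (phi_max - hi) / 10.0
for _ in range(40):                     # bisection on phi(0)
    mid = 0.5*(lo + hi)
    if classify(mid) == "over":
        hi = mid
    else:
        lo = mid
print(f"phi(0) = {0.5*(lo + hi):.6f} for omega = {OMEGA}")
```

The excited states mentioned above correspond to brackets around trial values with the appropriate number of sign changes, which is why their initial conditions require much finer tuning.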
We found that the frequency corresponding to dE/dω = 0 becomes larger with the number of nodes. For example, for the fundamental mode we have a minimal energy at ω = 0.972, while ω = 1.015 for the first excited mode and ω = 1.025 for the second excited mode. All these solutions are easily generalized to K-field theories. We will consider the simplest model, where the action is modified by a single parameter, K = X + αX^2 − V(|Φ|), where α is the new parameter of the model.¹ Generically, we found that the structure of the solutions does not change: Q-balls exist for a certain range of frequency, which depends on the parameter α. We see that the charge and the energy are lowered for large positive values of the parameter α, because the radius decreases. Notice that the critical value (E'(ω) = 0) of the energy and charge is also lowered for larger values of α. Therefore, for a given frequency, the modified model with α > 0 produces Q-balls with lower charge and energy. The modification by the K-field allows one to build Q-balls with small charge and energy or, on the contrary, with larger energy and charge. Finally, we found that for all values of the parameter α, in the limit ω → ω_max, or the thick-wall limit, we have the scaling solution E = ωQ^γ with γ = 1 ± 10^{-4}. This expression generalizes results found in [15]. In Fig. 4, we show the energy versus the frequency for different values of α, together with the information on the violation of the energy conditions. We see that the NEC is never violated. This condition corresponds to 1 + 2α(ω^2φ^2 − φ'^2) > 0. It could be violated for very negative values of α, but the construction of Q-balls for α < −0.5 becomes very challenging and often impossible. In general, the larger (and positive) α is, the lower the probability of violating an energy condition, except for the SEC, which is violated for any α.
VII. PERTURBATIONS

To study the mechanical stability, we decompose the field as Φ(t, r, θ, ϕ) = φ(r)e^{iωt} + Σ_{ℓ,m} δΦ_{ℓm}(t, r)e^{iωt} Y_ℓ^m(θ, ϕ), where φ(r) is the background scalar field studied in the previous sections, δΦ_{ℓm} is the scalar field perturbation, the factor e^{iωt} in the second term is included for convenience, and the Y_ℓ^m are spherical harmonics. Because of the symmetries of the Q-balls, the perturbations are independent of the azimuthal number m, and therefore the spherical harmonics reduce to Legendre polynomials; we will fix m = 0. Notice that the different modes ℓ do not couple, and we will therefore omit this index. At second order in perturbations, and after integrating over the angular variables, the action reduces to

S = ∫dt dr [ r^2 K_{,X}Ψ̇_1^2 − r^2(K_{,X} − 2φ'^2 K_{,XX})Ψ'^2_1 + r^2(K_{,X} + 2ω^2φ^2 K_{,XX})Ψ̇_2^2 − r^2 K_{,X}Ψ'^2_2 − 2ωr^2φφ' K_{,XX}(Ψ̇_1Ψ'_2 + Ψ'_1Ψ̇_2) + A(Ψ̇_1Ψ_2 − Ψ_1Ψ̇_2) − M_1^2Ψ_1^2 − M_2^2Ψ_2^2 ]   (7.1)

where we have decomposed the perturbation into its real and imaginary parts, δΦ = Ψ_1 + iΨ_2, and

A = −2ωr^2 d(φ^2 K_{,X})/d(φ^2) − ω d(r^2φφ' K_{,XX})/dr
M_1^2 = λK_{,X} − (r^2/2)K_{,φφ} − d(r^2φ' K_{,Xφ})/dr
M_2^2 = λK_{,X} − r^2(K_{,φ^2} + ω^2 K_{,X})
λ = ℓ(ℓ + 1)   (7.2)

From this action, we obtain the two coupled equations for the linear perturbations:

−K_{,X}Ψ̈_1 + (K_{,X} − 2φ'^2 K_{,XX})Ψ''_1 + 2ωφφ' K_{,XX}Ψ̇'_2 + F_1(r, Ψ_1, Ψ_2, Ψ'_1, Ψ̇_2) = 0   (7.3)
−(K_{,X} + 2ω^2φ^2 K_{,XX})Ψ̈_2 + K_{,X}Ψ''_2 + 2ωφφ' K_{,XX}Ψ̇'_1 + F_2(r, Ψ_1, Ψ_2, Ψ'_2, Ψ̇_1) = 0   (7.4)

with F_1 and F_2 functions of the perturbations and their first derivatives. These equations form a set of two coupled differential equations describing the evolution of the perturbations in an effective metric. Indeed, if we consider, e.g., Eq.
(7.3), in the absence of coupling between Ψ_1 and Ψ_2, i.e., for ω = 0, the equation reduces to the generic form h^{μν}∇_μ∇_νΨ_1 + ⋯ = 0, with h^{00} = −K_{,X} and h^{11} = K_{,X} − 2φ'^2 K_{,XX}, from which we obtain the stability conditions for a Lorentzian effective metric, h^{00} < 0 and h^{11} > 0. These conditions are equivalent to the Hamiltonian of the field perturbations being positive definite; indeed, as seen from Eq. (7.1), the Lagrangian (of Ψ_1 in the case ω = 0) reduces to L = r^2(−h^{00}Ψ̇^2 − h^{11}Ψ'^2), and therefore to the Hamiltonian H = r^2(−h^{00}Ψ̇^2 + h^{11}Ψ'^2). The Hamiltonian is bounded from below [26,27] if we satisfy the conditions for an effective Lorentzian metric:

K_{,X} > 0 ,  K_{,X} − 2φ'^2 K_{,XX} > 0  ⇔  K_{,X} + 2XK_{,XX} > 0  (ω = 0)

But as nicely stated in [28], one should be careful, because Hamiltonian unboundedness is not always equivalent to instability: a Hamiltonian can be unbounded merely because of the set of variables chosen. Therefore, stability should be imposed only from the existence of a future causal cone defined by the effective metric. In conclusion, to study stability we need to ensure that the problem is well-posed. For that, we will derive the conditions of weak and strong hyperbolicity. Broadly speaking, the weak hyperbolicity condition forbids solutions that grow exponentially in time, while the strong hyperbolicity condition imposes a stronger bound than exponential growth and is therefore equivalent to local well-posedness of the Cauchy problem. In the case of a strongly hyperbolic system, F_1 and F_2 are not relevant, while they could change the behavior of the system if it is only weakly hyperbolic.
We define the vector u = (Ψ_1, Ψ_2)^T, and the system (7.3)-(7.4) becomes

u_{,tt} = Au'' + Bu'_{,t} + ⋯   (7.5)

where ⋯ indicates lower-derivative terms, and

A_{11} = (K_{,X} − 2φ'^2 K_{,XX})/K_{,X}   (7.6)
A_{22} = K_{,X}/(K_{,X} + 2ω^2φ^2 K_{,XX})   (7.7)
B_{12} = 2ωφφ' K_{,XX}/K_{,X}   (7.8)
B_{21} = 2ωφφ' K_{,XX}/(K_{,X} + 2ω^2φ^2 K_{,XX})   (7.9)

while the other elements of the matrices vanish. We consider wave solutions u(t, r) = e^{ikr}û(t, k) and obtain

û_{,tt} = −k^2 Aû + ikBû_{,t} + ⋯   (7.10)

This system can be reduced to first order by defining the variable v̂ = û_{,t}/(i|k|):

d/dt (v̂, û)^T = i|k| P̂ (v̂, û)^T   (7.11)

with

P̂ = [ [0, (k/|k|)B_{12}, A_{11}, 0], [(k/|k|)B_{21}, 0, 0, A_{22}], [1, 0, 0, 0], [0, 1, 0, 0] ]   (7.12)

The well-posedness of this system reduces to the analysis of the matrix P̂ (see e.g. [29]). If, for all k, the eigenvalues of P̂ are real, the system is weakly hyperbolic. The eigenvalues are

±1 ,  ±sqrt[(K_{,X} − 2φ'^2 K_{,XX})/(K_{,X} + 2ω^2φ^2 K_{,XX})]   (7.13)

Therefore, we conclude that if (K_{,X} − 2φ'^2 K_{,XX})/(K_{,X} + 2ω^2φ^2 K_{,XX}) ≥ 0, the system is weakly hyperbolic. Additionally, when

(K_{,X} − 2φ'^2 K_{,XX})/(K_{,X} + 2ω^2φ^2 K_{,XX}) > 0   (7.14)

the system is strongly hyperbolic, because the eigenvectors form a complete set. In that case, the two perturbations propagate at the speeds

c_1 = 1 ,  c_2 = sqrt[(K_{,X} − 2φ'^2 K_{,XX})/(K_{,X} + 2ω^2φ^2 K_{,XX})]   (7.15)

As we have shown in Sec. IV, we impose the condition K_{,X} − 2φ'^2 K_{,XX} > 0, which then implies K_{,X} + 2ω^2φ^2 K_{,XX} > 0. Summing these two conditions, we find the weaker conditions K_{,X} > 0 and K_{,X} + XK_{,XX} > 0. Notice that for a real scalar field (ω = 0), the condition K_{,X} + 2ω^2φ^2 K_{,XX} > 0 reduces to K_{,X} > 0 which, along with the condition K_{,X} − 2φ'^2 K_{,XX} > 0 (K_{,X} + 2XK_{,XX} > 0), corresponds to the stability conditions obtained in [27]. Notice that the conditions of well-posedness of the system are independent of the energy conditions derived previously, (5.2), (5.3), (5.4) and (5.5).
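The quoted eigenvalues (7.13) can be checked without any linear-algebra package: from the block structure of P̂, one finds that its eigenvalues λ satisfy (A_{11} − λ^2)(A_{22} − λ^2) − λ^2 B_{12}B_{21} = 0 (using (k/|k|)^2 = 1). A numerical check at an arbitrary sample point — the field values below are illustrative, not an actual solution:

```python
import math

# Check the eigenvalues (7.13) for K = X + alpha X^2 at a sample point.
alpha, omega, phi, dphi = 0.3, 0.8, 0.9, -0.4

X   = omega**2 * phi**2 - dphi**2
KX  = 1.0 + 2.0 * alpha * X          # K_{,X}
KXX = 2.0 * alpha                    # K_{,XX}

A11 = (KX - 2.0*dphi**2*KXX) / KX
A22 = KX / (KX + 2.0*omega**2*phi**2*KXX)
B12 = 2.0*omega*phi*dphi*KXX / KX
B21 = 2.0*omega*phi*dphi*KXX / (KX + 2.0*omega**2*phi**2*KXX)

# Eigenvalues L of P solve (A11 - L^2)(A22 - L^2) - L^2 B12 B21 = 0:
p = lambda L: (A11 - L**2)*(A22 - L**2) - L**2 * B12 * B21

c2 = math.sqrt((KX - 2.0*dphi**2*KXX) / (KX + 2.0*omega**2*phi**2*KXX))
print(p(1.0), p(c2))   # both vanish (to machine precision): eigenvalues are +-1, +-c2
```

For this α > 0 point, c_2 < 1, i.e. the second mode is subluminal, in line with the discussion of Fig. 5 below.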
In Fig. 5, for the model K = X + αX^2 − V(φ), we show that for a certain range of the parameters (ω, α) the Cauchy problem is not well-posed; this never happens for α > 0. We also found that for any α < 0 the perturbations are superluminal in some region of space. Even if the classical theory is well-posed, the superluminal propagation of the perturbations could be an obstacle to a quantum version of the theory. For example, requiring UV completion for K-essence (the real-scalar-field analogue of the case studied in this paper) imposes subluminal propagation [30]. A similar situation should be expected in our case [14]. Even if the two notions are not equivalent, we found numerically, for all parameters (ω, α) of Fig. 5, that a system which violates the weak energy condition does not have a well-posed Cauchy problem. The converse is not true. Restricting our analysis to the cases where the Cauchy problem is well-posed, we can study the mechanical stability of our solutions. For that, we assume the following form for the perturbation:

δΦ(t, r) = [η(r)/r^n] e^{iρt} + [χ*(r)/r^n] e^{−iρ*t}   (7.16)

The system (7.3)-(7.4) then reduces to two coupled ordinary differential equations for η(r) and χ(r). We have included the factor r^n for numerical stability; in general, n = ℓ provides faster numerical results. In the canonical case, where K_{,X} = 1, the stability analysis shows that any instability corresponds to ρ = −ρ* [13], which implies the condition dQ/dω < 0. We could not extend this analysis to K-field theories, and we will therefore study the perturbations by numerical means. For that, our system can be written as four first-order differential equations for the variable Ψ ≡ (η, χ, η', χ')^T, Ψ' = BΨ, where the matrix B is given in the Appendix.
Considering the conditions on the scalar field at r = 0, φ' = 0, it is easy to show that the perturbations behave as

η(r ≃ 0) = c_0 r^{ℓ+n}   (7.17)
χ(r ≃ 0) = c_1 r^{ℓ+n}   (7.18)

which implies

Ψ(0) = c_0 r^{ℓ+n−1}(r, 0, ℓ+n, 0)^T + c_1 r^{ℓ+n−1}(0, r, 0, ℓ+n)^T   (7.19)

Therefore, we can perform two numerical integrations from r = 0, with initial conditions η = r^{ℓ+n}, χ = 0 and η = 0, χ = r^{ℓ+n}, respectively. The general solution is a linear combination of these two solutions with coefficients (c_0, c_1). Similarly, we perform an integration from infinity to r = 0; there we also have a system with two free parameters, (c_3, c_4). We can integrate it from a large radius with the initial conditions (7.20) or

χ = e^{−r sqrt(−K_{,φ^2}(0,0)/K_{,X}(0,0) − (ρ−ω)^2)} r^{1−n} , η = 0   (7.21)

Having the solutions integrated from both boundaries, with four free parameters (c_1, c_2, c_3, c_4), we can match them at a given radius, using the four continuity conditions on (η, χ, η', χ'). Notice that, because our system is linear, we can always fix one of the parameters, e.g. c_1 = 1. Therefore, we end up with a system of four conditions and three parameters; the fourth parameter determines the value of ρ. In conclusion, only a certain number of discrete values of ρ can solve our problem. In Fig. 6, we show |φ + δΦ|^2 for ω = (0.5, 1) and α = 0. For each case, we have found the parameter ρ and, using Eq. (7.16), obtained the time and space dependence of the solution. In the case ω = 0.5, the radius of the Q-ball oscillates and ρ is real; the energy of this solution is constant in time. For ω = 1, the energy grows exponentially, as does the radius of the Q-ball; the solution is unstable and ρ is purely imaginary. Therefore, the strategy is simple: for each Q-ball, we search the complex plane for values of ρ solving our matching problem. For the excited states, all frequencies ω were unstable.
But for various frequencies, the unstable modes were not always purely imaginary; they could also have a nonzero real part. For the fundamental solution, Fig. 6 shows the two cases α = 0 and ω = (0.5, 1). The first frequency corresponds to a stable solution, for which we see an oscillation of the radius of the Q-ball while the energy remains perfectly constant in time. The second case corresponds to an unstable solution, for which the radius increases and the energy grows exponentially. Generically, we found that the stability region corresponds to dQ/dω < 0 for all ω, generalizing results that were known in the canonical case. In the unstable region, the timescale of the instability is of order 1/Im(ρ). We found that Im(ρ), and therefore the timescale of the instability, depends on the mode ℓ. For example, for α = 0, Im(ρ) is of order 10^{-1} for ℓ = 0 and of order 10^{-4} for ℓ = 1. Therefore, we focus mainly on the spherical mode of perturbations, ℓ = 0. In Fig. 7, we show the unstable modes for three values of α. For each α, the instability starts when dQ/dω = 0. We also notice that even if, for a given frequency such as ω = 1.03, the Q-ball is unstable for all values of the parameter α, the instability is slower to develop (lower value of Im(ρ)) for larger positive values of α, which is consistent with the previous section, where we found that the energy is lowered. In Fig. 8, we summarize the various stability conditions. The quantum stability condition, namely the stability against fission, is, as expected, stronger than the classical stability condition. We have also represented the regions where the energy conditions are violated. The NEC is never violated in the analyzed region of the model, while the WEC is violated only in the region where the Cauchy problem is not well-posed. The violations of the SEC and the DEC are totally independent of the stability conditions.

VIII. CONCLUSION

In this work we studied Q-balls in noncanonical scalar field theory.
We derived the general equations of existence and stability for these theories. We found that the stability against fission and the linear mechanical stability are equivalent and reduce to Q'(ω) < 0 (see Table I). On the other hand, the condition for stability against decay into free particles is stronger.

TABLE I. Summary of the three stability conditions studied in this paper and extended to K-field theories.

We found that the perturbations have a well-posed Cauchy problem if (K_{,X} − 2φ'^2 K_{,XX})/(K_{,X} + 2ω^2φ^2 K_{,XX}) > 0. When the perturbations are strongly hyperbolic, we found that they propagate either superluminally or subluminally. In the particular case K = X + αX^2 − V(|Φ|^2), the perturbations are subluminal and luminal for α > 0, while they are superluminal and luminal for α < 0. We found that a Q-ball lowers its energy for larger positive values of α. Even in the unstable region, the timescale of the instability becomes larger for larger α, so the solution is effectively longer-lived. The frequency at which Q-balls become unstable also increases with α. It would be interesting to find models for which all Q-balls are stable irrespective of their frequency. Finally, we have studied the different energy conditions (SEC, DEC, WEC, NEC). We found that the NEC is never violated and that none of these conditions can be related to mechanical stability.
B^{(1)}_{31} = A(K_{,X} − φ'^2 K_{,XX})/(r^2 K_{,X}(K_{,X} − 2φ'^2 K_{,XX})) − ω(r^4 φφ'^3 K^2_{,XX})'/(2r^4 K_{,X}(K_{,X} − 2φ'^2 K_{,XX})) − ωφ'^2(φ'^2 − φφ'')K^2_{,XX}/(2K_{,X}(K_{,X} − 2φ'^2 K_{,XX}))   (A.8)

B^{(1)}_{32} = ω(φφ' K_{,XX})'/(K_{,X} − 2φ'^2 K_{,XX}) + 2ωφφ' K_{,X}K_{,XX}/(rK_{,X}(K_{,X} − 2φ'^2 K_{,XX})) + 2ωrφ'^2 K_{,XX}[K_{,X} + ω^2φ^2 K_{,XX} + φ^2 K_{,Xφ^2}]/(rK_{,X}(K_{,X} − 2φ'^2 K_{,XX}))   (A.9)

B^{(1)}_{33} = −2ωφφ'^3 K^2_{,XX}/(K_{,X}(K_{,X} − 2φ'^2 K_{,XX}))   (A.10)

B^{(1)}_{34} = 2ωφφ' K_{,XX}(K_{,X} − φ'^2 K_{,XX})/(K_{,X}(K_{,X} − 2φ'^2 K_{,XX}))   (A.11)

B^{(2)}_{31} = (2ω^2φ^2φ'^2 K^2_{,XX} − K^2_{,X} − XK_{,X}K_{,XX})/(K_{,X}(K_{,X} − 2φ'^2 K_{,XX}))   (A.12)

B^{(2)}_{32} = K_{,XX}[ω^2φ^2 K_{,X} − φ'^2]/[K_{,X} − 2φ'^2 K_{,XX}]   (A.13)

and (A, M^2_1, M^2_2) are defined by Eq. (7.2). These equations are given for the case n = 0.

FIG. 1. Left: the field φ(r) as a function of the radial coordinate for different values of ω; for each value of ω, φ(0) is adjusted such that φ(∞) = 0. Center: the energy E and the charge Q as functions of the frequency ω, with the critical frequency (change of colors) defined by the condition dQ/dω = 0. Right: the energy as a function of the charge. In all graphics, green denotes configurations that are stable according to the fission stability criterion, while red denotes unstable solutions. In the first panel, the solution at the critical frequency is shown in blue; in the third panel, we have added the decay stability criterion, shown by a solid red line (dashed red for the unstable solutions), while fission-unstable configurations are represented only by a solid red line.

FIG. 2. The field φ(r) as a function of the radial coordinate for the fundamental mode (green curves) and the first (purple) and second (blue) radial excited modes, for ω = 0.7. We also show the evolution of the energy as a function of the frequency; the dashed region corresponds to solutions that are unstable according to the fission stability criterion.

FIG. 3. The energy as a function of the charge for different values of the parameter α, which runs from α = −0.5 (red) to α = 0.5 (purple) in increments of 0.1.

FIG. 4. Energy versus frequency for the K-field model, with α running from −0.5 (red) to +0.5 (purple) in steps of 0.1. In each panel, dotted lines mark the regime where some energy condition is violated; from top left to bottom right, we show the violation of the SEC, DEC, WEC and NEC.

FIG. 5. In gray, the region of parameter space (ω, α) where the Cauchy problem is not well-posed, and in cyan the region of superluminal propagation.

FIG. 6. Spacetime diagram of |Φ|^2. The upper panel shows the stability of the background solution with ω = 0.5, and the lower panel shows an unstable solution for ω = 1. For both solutions we have taken α = 0.

FIG. 7. Existence of Im(ρ) as a function of ω for α = (−0.5, 0, +0.5). The existence of such a mode implies an instability of the background solution. The dotted line corresponds to unstable modes in a region where the Cauchy problem is not well-posed, which should therefore be excluded from the analysis.

FIG. 8. Space of parameters (ω, α) within the region where the Cauchy problem is well-posed. We have represented the regions of quantum stability (against fission) and classical stability, as well as the regions where energy conditions such as the SEC and DEC are violated. We have kept the cyan and white colors for superluminal and subluminal propagation, respectively.

... found numerically to be the same condition for K = X + αX^2 − V(|Φ|^2).

The matrix of the system Ψ' = BΨ can be decomposed as B = B^{(0)} + ρB^{(1)} + ρ^2 B^{(2)}, with elements involving

log[K_{,X}(K_{,X} − 2φ'^2 K_{,XX})]   (A.6)
K_{,X}/(K_{,X} − 2φ'^2 K_{,XX})   (A.7)

¹ We assume our model corresponds to the low-energy effective field theory, where a small-X expansion is possible and therefore terms X^n with n ≥ 3 are negligible.
This is the complex analogue of [14], where α^{−1/4} is a cutoff scale.

ACKNOWLEDGEMENTS

The work of A.F. is supported by ANID/CONICYT No. 21171262, while R.G. is supported by ANID FONDECYT Regular No. 1220965.

Appendix: Perturbation equations

[1] G. Rosen, J. Math. Phys. 9 (1968), 996.
[2] S. R. Coleman, Nucl. Phys. B 262 (1985) no.2, 263.
[3] K. Freese, J. A. Frieman and A. V. Olinto, Phys. Rev. Lett. 65 (1990), 3233-3236.
[4] F. C. Adams, J. R. Bond, K. Freese, J. A. Frieman and A. V. Olinto, Phys. Rev. D 47 (1993), 426-455 [arXiv:hep-ph/9207245].
[5] A. Kusenko, Phys. Lett. B 405 (1997), 108 [arXiv:hep-ph/9704273].
[6] I. Affleck and M. Dine, Nucl. Phys. B 249 (1985), 361-380.
[7] M. Dine, L. Randall and S. D. Thomas, Nucl. Phys. B 458 (1996), 291-326 [arXiv:hep-ph/9507453].
[8] K. Enqvist and A. Mazumdar, Phys. Rept. 380 (2003), 99-234 [arXiv:hep-ph/0209244].
[9] A. Kusenko and M. E. Shaposhnikov, Phys. Lett. B 418 (1998), 46-54 [arXiv:hep-ph/9709492].
[10] A. Kusenko and P. J. Steinhardt, Phys. Rev. Lett. 87 (2001), 141301 [arXiv:astro-ph/0106008].
[11] R. Friedberg, T. D. Lee and A. Sirlin, Phys. Rev. D 13 (1976), 2739-2761.
[12] T. D. Lee and Y. Pang, Phys. Rept. 221 (1992), 251-350.
[13] A. G. Panin and M. N. Smolyakov, Phys. Rev. D 95 (2017) no.6, 065006 [arXiv:1612.00737].
[14] A. Adams, N. Arkani-Hamed, S. Dubovsky, A. Nicolis and R. Rattazzi, JHEP 10 (2006), 014 [arXiv:hep-th/0602178].
[15] M. I. Tsumagari, E. J. Copeland and P. M. Saffin, Phys. Rev. D 78 (2008), 065021 [arXiv:0805.3233].
[16] E. Silverstein and D. Tong, Phys. Rev. D 70 (2004), 103505 [arXiv:hep-th/0310221].
[17] G. Goon, K. Hinterbichler and M. Trodden, JCAP 07 (2011), 017 [arXiv:1103.5745].
[18] A. Sen, Mod. Phys. Lett. A 17 (2002), 1797-1804 [arXiv:hep-th/0204143].
[19] M. Kuniyasu, N. Sakai and K. Shiraishi, Phys. Rev. D 94 (2016) no.11, 116001.
[20] N. Sakai and M. Sasaki, Prog. Theor. Phys. 119 (2008), 929-937 [arXiv:0712.1450].
[21] C. Armendariz-Picon, T. Damour and V. F. Mukhanov, Phys. Lett. B 458 (1999), 209-218 [arXiv:hep-th/9904075].
[22] C. Armendariz-Picon, V. F. Mukhanov and P. J. Steinhardt, Phys. Rev. Lett. 85 (2000), 4438-4441 [arXiv:astro-ph/0004134].
[23] N. G. Vakhitov and A. A. Kolokolov, Radiophys. Quantum Electron. 16 (1973), 783-789.
[24] F. E. Schunck, F. V. Kusmartsev and E. W. Mielke, in: R. A. d'Inverno (ed.), Approaches to Numerical Relativity, Cambridge Univ. Press, Cambridge, 1992, pp. 130-140.
[25] M. S. Volkov and E. Wohnert, Phys. Rev. D 66 (2002), 085003.
[26] N. Arkani-Hamed, H. C. Cheng, M. A. Luty and S. Mukohyama, JHEP 05 (2004), 074 [arXiv:hep-th/0312099].
[27] C. Armendariz-Picon and E. A. Lim, JCAP 08 (2005), 007 [arXiv:astro-ph/0505207].
[28] E. Babichev, C. Charmousis, G. Esposito-Farèse and A. Lehébel, Phys. Rev. D 98 (2018) no.10, 104050 [arXiv:1803.11444].
[29] H. O. Kreiss and O. E. Ortiz, Lect. Notes Phys. 604 (2002), 359 [arXiv:gr-qc/0106085].
[30] S. Melville and J. Noller, Phys. Rev. D 101 (2020) no.2, 021502 [erratum: Phys. Rev. D 102 (2020) no.4, 049902] [arXiv:1904.05874].
Title: Q-balls in K-field theory
Authors: Aníbal Faúndez and Radouane Gannouji (Instituto de Física, Pontificia Universidad Católica de Valparaíso, Av. Brasil 2950, Valparaíso, Chile)
arXiv: 2301.05890; DOI: 10.1103/PhysRevD.107.104058
Abstract: We study the existence and stability of Q-balls in noncanonical scalar field theories, K(|Φ|^2, X), where Φ is the complex scalar field and X is the kinetic term. We extend the Vakhitov-Kolokolov stability criterion to K-field theories. We derive the condition for the perturbations to have a well-posed Cauchy problem. We find that K_{,X} > 0 and K_{,X} + XK_{,XX} > 0 are necessary but not sufficient conditions. The perturbations define a strongly hyperbolic system if (K_{,X} − 2φ'^2 K_{,XX})(K_{,X} + 2ω^2φ^2 K_{,XX}) > 0. For all modifications studied, we found that perturbations propagate at a speed different from light. Generically, the noncanonical scalar field can lower the charge and energy of the Q-ball and therefore improve its stability.
Rare radiative decays of the B_c meson

25 Jan 2016

Wan-Li Ju, Tianhong Wang, Yue Jiang, Han Yuan, Guo-Li Wang† ([email protected])
Department of Physics, Harbin Institute of Technology, 150001 Harbin, China

In this paper, we study the rare radiative processes B_c → D^(*)_{sJ}γ within the Standard Model, where D^(*)_{sJ} stands for the meson D*_s, D_{s1}(2460, 2536) or D*_{s2}(2573). During the investigations, we consider the contributions from the penguin, annihilation, color-suppressed and color-favored cascade diagrams. Our results show that: 1) the penguin and annihilation contributions are dominant in the branching fractions; 2) for the processes B_c → D*_s γ and B_c → D_{s1}(2460, 2536)γ, the effects from the color-suppressed and color-favored cascade diagrams are non-negligible.

Introduction

The processes B_c → D^(*)_{sJ}γ in the Standard Model (SM) have received much attention in recent decades, due to their sensitivity to new physics (NP). In the existing studies [1-5], attention has been paid to the annihilation (Ann) and penguin (Peng) diagrams, as shown in Fig. 1. To illustrate their importance to the B_c → D^(*)_{sJ}γ decays, we compare with the B → K*γ process. For the B → K*γ transition, the short-distance (SD) contribution is dominated by the penguin diagrams, while the color-suppressed (CS) diagrams are the dominant long-distance (LD) influences¹. According to the estimation in Ref. [6], the CS diagrams influence the Peng ones by 12% in the branching ratio of the B → K*γ transition. Thus, the CS diagrams are non-negligible in the B → K*γ case.
Since the typical Peng and CS diagrams of the B → K*γ process are topologically similar to the B_c → D^(*)_{sJ} γ ones, the CS effects may likewise influence the Peng amplitudes non-negligibly in the B_c → D^(*)_{sJ} γ cases, so it is interesting to consider the CS contributions in these channels. In addition to the CS diagrams, color-favored (CF) diagrams also participate in the B_c → D^(*)_{sJ} γ processes. Roughly speaking, the CF amplitudes are three times larger than the CS ones because of their color factors, which makes the CF amplitudes even more important. Therefore, when the B_c → D^(*)_{sJ} γ transitions are studied, it is also interesting to include the CF influences. Consequently, we are motivated to investigate the B_c → D^(*)_{sJ} γ decays including the Peng, Ann, CS and CF diagrams.

These investigations involve hadronic matrix elements. In Refs. [1,2], the hadronic matrix element corresponding to the penguin diagram is estimated by means of perturbative QCD (pQCD), while the annihilation one is analyzed using the effective formalism [7]. In Ref. [3], the penguin hadronic current is obtained in the relativistic independent quark model (RIQM), while the annihilation one is evaluated by investigating the B_c → M*γ → D*_s γ processes, where M* stands for the virtual intermediate state. In Refs. [4,5], both the penguin and annihilation hadronic currents are computed in QCD sum rules (QCDSR). In this paper, however, we use the hadronic currents of Refs. [8,9], which are obtained with the Bethe-Salpeter (BS) method [10-15]. The BS method has several particular features. First, the wave functions are obtained by solving the BS equations and have a complete relativistic structure. Second, the Mandelstam formalism [16] is employed for calculating the hadronic matrix elements, which keeps the relativistic effects of both the kinematics and the dynamics.
Third, the BS Ann hadronic currents are valid over the entire physical region, without unphysical singularities. Fourth, as proved in Ref. [9], the BS annihilation currents satisfy the gauge-invariance condition regardless of the J^P of the initial and final mesons. More importantly, in our previous works [17-19] the B decays and other B_c transitions were calculated within the BS method, and most of the results are in good agreement with the experimental data. Therefore, in this paper we choose the BS hadronic currents to calculate the B_c → D^(*)_{sJ} γ processes.

This paper is organized as follows. In Section 2, we elucidate the theoretical details of the effective Hamiltonian and the hadronic transition matrix elements. Section 3 presents the numerical results and discussions. In Section 4, we draw our conclusion.

Theoretical Details

In this section we introduce the theoretical details of the calculation of the B_c → D^(*)_{sJ} γ decays, namely their transition amplitudes and the hadronic currents involved.

Transition Amplitudes

From the low-energy effective theory [20], the transition amplitude for the b → s(d)γ process (corresponding to Fig. 1(a)) is

\[ \mathcal{M}_{\rm Peng} = \frac{i\,e\,G_F}{4\sqrt{2}\,\pi^2}\, m_b\, V^{*}_{ts(d)} V_{tb}\, C^{\rm eff}_{7\gamma}\, W^{\mu}_{\rm Peng}\, \epsilon^{*}_{\gamma\mu}\,, \tag{1} \]

where e stands for the magnitude of the electron charge and G_F denotes the Fermi coupling constant; m_b is the mass of the b quark, V_{q_1 q_2} represents a CKM matrix element, and \epsilon_\gamma is the polarization vector of the photon. C^{\rm eff}_{7\gamma} is the effective Wilson coefficient, obtained from the sum of the Wilson coefficients multiplying the same hadronic matrix element; in this paper we take C^{\rm eff}_{7\gamma} = -0.313 [21]. In Eq. (1) we also define the penguin hadronic matrix element

\[ W^{\mu}_{\rm Peng} \equiv \langle f|\,\bar s(\bar d)\, i\sigma^{\mu\nu}(1+\gamma_5)\, b\,|i\rangle\, Q_\nu\,, \]

where \sigma^{\mu\nu} \equiv i[\gamma^\mu,\gamma^\nu]/2 and Q \equiv P_i - P_f, with P_i (P_f) the momentum of the initial (final) meson.
For the Ann transition amplitude, the factorization hypothesis [22] gives

\[ \mathcal{M}_{\rm Ann} = V_{cb} V^{*}_{cs(d)}\, \frac{i\,e\,G_F}{\sqrt{2}}\, a^{\rm eff}_1\, W^{\mu}_{\rm Ann}\, \epsilon^{*}_{\gamma\mu}\,, \tag{2} \]

where W^{\mu}_{\rm Ann} is the annihilation hadronic current,

\[ W^{\mu}_{\rm Ann} = \int d^4x\; e^{-iq\cdot x}\, \langle f|\,T[\,O_w(0),\, J^{\mu}_{\rm em}(x)\,]\,|i\rangle\,, \]

with O_w \equiv \{\bar c\gamma^{\nu}(1-\gamma_5)b\}\{\bar s\gamma_{\nu}(1-\gamma_5)c\} and J^{\mu}_{\rm em} = \sum_q Q_q\,\bar q\gamma^{\mu}q, where Q_q stands for the charge of the quark q. In Eq. (2) the effective coefficient a^{\rm eff}_1 is introduced. In this paper we follow the QCDSR estimates [23] and take the parameter set

\[ a^{\rm eff}_1 = 1.14\,, \qquad a^{\rm eff}_2 = -0.20\,. \tag{3} \]

(We also quote the numerical value of a^{\rm eff}_2 here, since it will be used in the \mathcal{M}_{\rm CS} calculation.) In recent years this parameter set has been widely used in calculations of B_c non-leptonic decays [24-29].

As for the CS transition amplitude of the B_c → D^(*)_{sJ} γ processes, in analogy with the B → K*γ case it reads [6]

\[ \mathcal{M}_{\rm CS} = i\,\frac{G_F}{\sqrt{2}}\,\frac{2e}{3}\, V_{cb} V^{*}_{cs(d)}\, a^{\rm eff}_2 \sum_{V=J/\psi,\,\psi(2S),\dots} \kappa^2 f^2_V\, W^{\mu}_{\rm CS}\, \epsilon^{*}_{\gamma\mu}\,, \tag{4} \]

where the CS hadronic matrix element is defined as W^{\mu}_{\rm CS} \equiv \langle f|\,\bar s(\bar d)\gamma^{\mu}(1-\gamma_5)b\,|i\rangle. In Eq. (4), V denotes the intermediate vector meson and f_V the corresponding decay constant; conventionally, \langle 0|\,\bar c\gamma^{\mu}c\,|V\rangle = M_V f_V \epsilon^{\mu}_V. In this paper we only consider the contributions with V = J/ψ and ψ(2S): the effects of higher charmonia are suppressed by their small decay constants, while the contributions of ρ, ω and φ are suppressed either by their CKM matrix elements (e.g. V_{ub}V^{*}_{us} ∼ Aλ⁴) [32] or by the small Wilson coefficients C_3-C_6 [20]. In Eq. (4) the suppression factor κ is also introduced, in order to describe the off-shell behavior of the J/ψ and ψ(2S) mesons; following the discussions in Refs. [6,30], we take κ = 0.63.

Based on the derivations in Refs. [8,31], the CF amplitude is

\[ \mathcal{M}_{\rm CF} = i\,\frac{G_F}{\sqrt{2}}\,\frac{2e}{3}\, V_{cb} V^{*}_{cs(d)}\, a^{\rm eff}_1 \sum_{V=J/\psi,\,\psi(2S)} \epsilon^{*}_{\gamma\mu}\,\kappa^2\, \frac{f_f f_V}{M_f M_V}\, W^{\mu}_{\rm CF}\,, \tag{5} \]

where again only the V = J/ψ, ψ(2S) contributions are kept.
The V = ρ, ω, φ case is not relevant to the CF amplitudes, while the influence of the higher charmonia is suppressed by their smaller decay constants. In Eq. (5), f_f and W_{\rm CF} are also introduced: conventionally, \langle f|\,\bar s(\bar d)\gamma^{\mu}(1-\gamma_5)c\,|0\rangle = M_f f_f \epsilon^{*\mu}_f, and the CF hadronic current is defined as W^{\mu}_{\rm CF} \equiv \langle V|\,\bar c\gamma^{\nu}(1-\gamma_5)b\,|i\rangle\, \epsilon^{*}_{f\nu}\, \epsilon^{\mu}_V. Hereafter \epsilon_{f(V)} denotes the polarization vector of the final (intermediate vector) meson.

Form Factors

In the previous subsection we defined the hadronic matrix elements W_{\rm Peng}, W_{\rm Ann}, W_{\rm CS} and W_{\rm CF}. By Lorentz invariance, these hadronic currents can be expressed in terms of form factors (with the shorthands \epsilon^{\mu a b c} \equiv \varepsilon^{\mu\nu\alpha\beta}a_{\nu}b_{\alpha}c_{\beta} and \epsilon^{\mu\alpha a b} \equiv \varepsilon^{\mu\alpha\nu\beta}a_{\nu}b_{\beta} for contractions with the Levi-Civita tensor):

\[ W^{\mu}_{\rm Peng}(P \to V_\perp, A_\perp) = -i\,T^{V,A}_1\, \epsilon^{\mu\epsilon^{*}_f Q P_+} + T^{V,A}_2\, (P_+\!\cdot Q)\, \epsilon^{*\mu}_f\,, \]
\[ W^{\mu}_{\rm Ann}(P \to V_\perp, A_\perp) = (M_i - M_f)\,\frac{T^{V,A}_{1ann}}{M_i^2}\, \epsilon^{*\mu}_f + \frac{i}{2}\,V^{V,A}_{ann}\, \epsilon^{\mu\epsilon^{*}_f Q P_+}\,, \]
\[ W^{\mu}_{\rm CS}(P \to V_\perp, A_\perp) = \frac{i\,V^{V,A}}{M_i + M_f}\, \epsilon^{\mu\epsilon^{*}_f Q P_+} - (M_i + M_f)\,A^{V,A}_1\, \epsilon^{*\mu}_f\,, \]
\[ W^{\mu}_{\rm CF}(P \to V_\perp, A_\perp) = (M_i - M_f)\,\frac{T^{V,A}_{1CF}}{M_i^2}\, \epsilon^{*\mu}_f + \frac{i}{2}\,V^{V,A}_{CF}\, \epsilon^{\mu\epsilon^{*}_f Q P_+}\,, \]
\[ W^{\mu}_{\rm Peng}(P \to T_\perp) = -i\,\frac{T^{T}_1}{M_f}\, \epsilon^{*}_{T\alpha\beta}Q^{\beta}\, \epsilon^{\mu\alpha Q P_+} + \frac{T^{T}_2}{M_f}\, (P_+\!\cdot Q)\, \epsilon^{*\mu\beta}_{T}Q_{\beta}\,, \]
\[ W^{\mu}_{\rm Ann}(P \to T_\perp) = (M_i - M_f)\,\frac{T^{T}_{1ann}}{M_i^2 M_f}\, \epsilon^{*\mu\alpha}_{T}Q_{\alpha} + \frac{i}{2}\,\frac{V^{T}_{ann}}{M_f}\, \epsilon^{*}_{T\alpha\beta}Q^{\beta}\, \epsilon^{\mu\alpha Q P_+}\,, \]
\[ W^{\mu}_{\rm CS}(P \to T_\perp) = \frac{i\,V^{T}}{(M_i + M_f)M_f}\, \epsilon^{*}_{T\alpha\beta}Q^{\beta}\, \epsilon^{\mu\alpha Q P_+} - \frac{M_i + M_f}{M_f}\,A^{T}_1\, \epsilon^{*\mu\alpha}_{T}Q_{\alpha}\,, \tag{6} \]

where V_⊥, A_⊥ and T_⊥ denote the transversely polarized final vector, axial-vector and tensor mesons, respectively; M_i is the mass of the initial meson, and P_+ \equiv P_i + P_f. The quantities V^{V,A,T}_{(ann,CF)}, A^{V,A,T}_1, T^{V,A,T}_{1(ann,CF)} and T^{V,A,T}_2 are form factors. In our previous works [8,9] these form factors were calculated in the BS method; in this paper we use those results directly.

Numerical Results and Discussions

In order to calculate the processes B_c → D^(*)_{sJ} γ, we need to specify the inputs. In this paper, the masses and lifetimes of B_c, J/ψ, ψ(2S) and D^(*)_{sJ} are taken from the Particle Data Group (PDG) [32], as are the values of α_em, G_F and V_CKM.
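One of these inputs deserves a quick cross-check: the vector-meson decay constants used below are extracted from measured dielectron widths. The following is a minimal numerical sketch of that inversion; it assumes the standard Van Royen-Weisskopf relation Γ(V → e⁺e⁻) = 4πα²Q_c²f_V²/(3M_V) in the ⟨0|c̄γ^μc|V⟩ = M_V f_V ε^μ convention (the paper does not quote the formula it inverts, so this is an assumption).

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant
Q_C = 2.0 / 3.0         # charm-quark electric charge

def f_v(mass_gev, gamma_ee_gev):
    """Decay constant from Gamma(V -> e+ e-) = 4*pi*alpha^2*Q_c^2*f_V^2/(3*M_V),
    assuming the <0|cbar gamma^mu c|V> = M_V f_V eps^mu normalization."""
    return math.sqrt(3.0 * mass_gev * gamma_ee_gev
                     / (4.0 * math.pi * ALPHA**2 * Q_C**2))

f_jpsi = f_v(3.0969, 5.55e-6)    # PDG mass (GeV), Gamma_ee = 5.55 keV
f_psi2s = f_v(3.6861, 2.35e-6)   # PDG mass (GeV), Gamma_ee = 2.35 keV
print(f"f_J/psi   ~ {f_jpsi:.3f} GeV")   # ~0.416 GeV
print(f"f_psi(2S) ~ {f_psi2s:.3f} GeV")  # ~0.296 GeV
```

The resulting ballpark values (≈0.42 GeV and ≈0.30 GeV) are the usual ones for these charmonia; the precise numbers entering the paper's fits come from its stated inputs and conventions, not from this sketch.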
The decay constants f_{J/ψ} and f_{ψ(2S)} can be extracted from the dielectron widths Γ(J/ψ → e⁺e⁻) = 5.55 keV and Γ(ψ(2S) → e⁺e⁻) = 2.35 keV [32], respectively, and the decay constants f_{D^(*)_{sJ}} can be found in our previous works [13,14]. Using these inputs and Eqs. (1)-(2) and (4)-(5), we can obtain the branching fractions of the B_c → D^(*)_{sJ} γ decays. In the following paragraphs we present the numerical results and discuss them.

The results for the B_c → D*_s γ decay are listed in Table 1. First, our results satisfy the relationship Br_Peng + Br_Ann < Br_Peng+Ann. This relationship indicates constructive interference between M_Peng and M_Ann; a similar situation can be found in the results of Refs. [2-4]. Second, one may note that Br_CS is much smaller than Br_CF; we return to this point below. Besides, as listed in Table 1, there are other theoretical predictions for the branching fractions Br_Peng and Br_Ann, and there is a large discrepancy between the results of the various theoretical approaches. Here we try to analyze the reasons.

• Case of Br_Peng. As seen from Table 1, five groups have calculated Br_Peng.

- In Refs. [1,2] the same framework, pQCD [33], is employed. The reason for their different numerical results is that they use different values of C^eff_{7γ}: in Ref. [2] the Wilson coefficient C^eff_{7γ} is obtained neglecting the mixing of O_{7γ} with other operators, while in Ref. [1] this approximation is not employed.

- In Ref. [3], Br_Peng is calculated in the RIQM. This method has two particular features which make the Br_Peng of Ref. [3] different from those of Refs. [1,2]. First, the Peng transition amplitude can be expressed as Φ_f ⊗ O_{7γ} ⊗ Φ_i, while in Refs. [1,2] a single gluon must be exchanged within the hard kernel. Second, in Ref. [3] Gaussian wave functions are employed, while in Refs. [1,2] the non-relativistic limit is used, namely Φ_i(x) ∼ δ(x − m_c/M_{B_c}) and Φ_f(x) ∼ δ(x − (M_f − m_c)/M_f).
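The constructive-interference pattern noted above (Br_Peng + Br_Ann < Br_Peng+Ann) follows from the positive cross term in |M_Peng + M_Ann|². A two-line numerical toy makes this explicit; the complex amplitudes below are arbitrary illustrative numbers, not the paper's actual M_Peng and M_Ann.

```python
import math

# Arbitrary illustrative amplitudes with roughly aligned phases,
# so that Re(M_P * conj(M_A)) > 0 (constructive interference).
m_peng = 1.0 + 0.3j
m_ann = 1.6 + 0.5j

br = lambda amp: abs(amp) ** 2            # rate ~ |amplitude|^2 (common factors dropped)
separate = br(m_peng) + br(m_ann)         # analogue of Br_Peng + Br_Ann
combined = br(m_peng + m_ann)             # analogue of Br_Peng+Ann
cross = 2.0 * (m_peng * m_ann.conjugate()).real

# |a + b|^2 = |a|^2 + |b|^2 + 2 Re(a b*): the cross term is what makes
# the combined rate exceed the naive sum when the phases are aligned.
print(combined, separate + cross)
print(combined > separate)
```

An opposite relative phase would flip the sign of the cross term and give destructive interference instead, which is why the observed inequality is an interference diagnostic.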
- In this paper, Br_Peng is obtained from the BS method. In this method, the Peng amplitude is calculated in the Mandelstam form, while the initial and final wave functions Φ_{i,f} are treated with the relativistic influences included. To be specific, in the BS method the traditional Gaussian wave functions are abandoned; instead, the wave functions are solved from the BS equations [12-15]. Besides, for the mesons with definite parity and charge, our wave functions have the complete relativistic structure: the components caused by the relative momenta are not neglected.

- In Ref. [4], Br_Peng is evaluated in QCDSR, a framework quite different from the ones of this paper and Refs. [1-3]. In QCDSR, the Peng amplitude is related to correlation functions evaluated with the operator product expansion (as discussed below), while only leading-power contributions are discussed in Ref. [4]. So if more accurate hadronic matrix elements are wanted, more work is still needed in the future.

• Case of Br_Ann. Here we attempt to analyze the reasons for the different values of Br_Ann.

- In Refs. [1,2], Br_Ann is in both cases computed within the effective formalism [7]; the difference between them is caused by their different inputs, namely a^eff_1.

- As shown in Table 1, the result of Ref. [2] is in agreement with ours. This is because 1) the parameter a^eff_1 used in Ref. [2] is close to ours, and 2) if the expansion in Λ_QCD/M_{B_c} is performed in our calculation and only the leading-power contributions are kept, our framework is equivalent to the effective formalism [7].

- Table 1 also shows that Br_Ann in Ref. [3] is almost an order of magnitude smaller than ours. In Ref. [3] the Ann amplitudes are obtained by calculating the B_c → B*_c γ → D*_s γ and B_c → D_s → D*_s γ transitions, whereas in this paper we deal with this problem at the parton level.

- In Table 1 we also list the QCDSR results [4]. The differences and relations between QCDSR and the BS method were mentioned above, so we do not discuss them here.
In the paragraphs above we have discussed the discrepancies between the results of the different approaches. It is hard to say at this time which method is the most accurate, because each is based on particular hypotheses or expansions and has advantages in different respects. Therefore, more work on the hadronic currents is still needed in the future.

In Table 2 we show the branching fractions of the decay B_c → D_s1(2460)γ. One may note that the B_c → D_s1(2460)γ transition is in a rather similar situation to the B_c → D*_s γ case; hence we only emphasize the following two points. First, if only the Ann and Peng contributions are considered, our result Br_Peng+Ann is almost a fifth of the one in Ref. [5]. Second, when the LD influences are added, the total branching fraction Br_Total(B_c → D_s1(2460)γ) is reduced appreciably.

In Table 3 we show the results of the B_c → D_s1(2536)γ and B_c → D*_s2 γ decays. The B_c → D_s1(2536)γ process, apart from its smaller branching ratios, shows behavior similar to the B_c → D*_s γ decay. The B_c → D*_s2 γ case, however, is quite different. First, its Br_Peng is almost twice as large as Br_Ann, because for this transition the Ann hadronic form factors are much smaller than the Peng ones, as shown in Ref. [9]. Second, there is no CF contribution in the B_c → D*_s2 γ decay. This can be understood from Eq. (5), where the factor f_f appears: for the B_c → D*_s2 γ transition, conservation of angular momentum makes f_{D*_s2} vanish, so M^μ_CF(B_c → D*_s2 γ) = 0. Third, this channel is only imperceptibly influenced by the LD contributions. This implies that if only the SD contributions are of interest, the B_c → D*_s2 γ decay provides a cleaner laboratory than the B_c → D*_s γ and B_c → D_s1(2460, 2536)γ processes.
Conclusion

In this paper, considering the penguin, annihilation, color-suppressed and color-favored cascade diagrams, we calculate the processes B_c → D^(*)_{sJ} γ in the Standard Model. Our conclusions are: 1. The processes B_c → D*_s γ and B_c → D_s1(2460, 2536)γ receive non-negligible contributions from the CS and CF diagrams; when these decays are investigated, including the LD effects is necessary. 2. The transition B_c → D*_s2 γ is affected only slightly by the LD diagrams; hence, if one is interested only in the short-distance interactions, this channel offers a much cleaner laboratory than the B_c → D*_s γ and B_c → D_s1(2460, 2536)γ processes. 3. The results for Br_Peng+Ann differ considerably between methods, so more discussion and more precise calculations are still needed in the future.

[Figure 1: Diagrams of B_c → D^(*)_{s(d)J} γ. In the annihilation diagram (b) the photon can be emitted from the quarks as well as the anti-quarks.]

Besides the Ann and Peng effects, the transitions B_c → D^(*)_{sJ} γ are also influenced by long-distance (LD) cascade contributions, whose typical diagrams are illustrated in Fig. 2.

[Figure 2: Resonance cascade diagrams of B_c → D^(*)_{s(d)J} γ. M_V and M_f are the masses of the intermediate vector and final mesons, respectively; f_f is the decay constant of the final meson. Here, too, only the V = J/ψ, ψ(2S) contributions are considered.]

ε_{f(V)} denotes the polarization vector of the final (intermediate vector) meson. Finally, based on the expressions in Eqs. (1), (2), (4) and (5), the total transition amplitude reads M_Total = M_Peng + M_Ann + M_CS + M_CF.

Table notation: Br_Peng(Ann,CS,CF) stands for the branching fraction to which only M_Peng(Ann,CS,CF) contributes. Br_Peng+Ann(CS) is obtained from M_Peng + M_Ann(CS), while Br_LD represents the branching ratio including only the M_CS and M_CF influences. Br_Peng+Ann+CF includes the M_Peng, M_Ann and M_CF influences, and Br_Total contains all of the M_Peng, M_Ann, M_CS and M_CF contributions.
In QCDSR, the Peng amplitude is related to the correlation functions, and these correlation functions are calculated with the help of the operator product expansion (OPE). Unlike the pQCD, RIQM and BS methods, where the LD fluctuations are contained in the wave functions, the LD interactions in QCDSR are described by the photon distribution amplitudes and the quark (gluon) condensate inputs. It is believed that our result for Br_Peng should be very close to the one in Ref. [4] if the following conditions are satisfied: 1) the exact photon distribution amplitudes are employed; 2) the higher-order effects in the OPE are small enough; 3) our BS wave functions are obtained rigorously; 4) all contributions beyond our factorization formula are negligible. At this moment, however, these conditions are hard to realize in practice; for instance, our wave functions are solved under the instantaneous approximation [34].

Table 1: Branching fractions of the decay B_c → D*_s γ.

                   This paper   pQCD [1]    pQCD [2]    RIQM [3]    QCDSR [4]
  Br_Peng          1.5 × 10⁻⁶   2.2 × 10⁻⁷  3.3 × 10⁻⁶  2.4 × 10⁻⁵  3.5 × 10⁻⁶
  Br_Ann           4.3 × 10⁻⁶   7.4 × 10⁻⁷  4.4 × 10⁻⁶  4.5 × 10⁻⁵  1.6 × 10⁻⁵
  Br_CS            1.1 × 10⁻⁸   -           -           -           -
  Br_CF            6.8 × 10⁻⁷   -           -           -           -
  Br_Peng+Ann      9.6 × 10⁻⁶   7.0 × 10⁻⁷  1.0 × 10⁻⁵  1.4 × 10⁻⁴  2.5 × 10⁻⁵
  Br_Peng+CS       1.7 × 10⁻⁶   -           -           -           -
  Br_LD            5.2 × 10⁻⁷   -           -           -           -
  Br_Peng+Ann+CF   5.8 × 10⁻⁶   -           -           -           -
  Br_Total         6.3 × 10⁻⁶   -           -           -           -

From Table 1 we see the tiny Br_CS. Third, if we compare Br_Peng with Br_Peng+CS, it is observed that the CS amplitude can influence the Peng one by ∼10% in the branching fraction; this is similar to the B → K*γ case and in agreement with our estimate in the Introduction. Fourth, when the CS and CF effects are both included, our Br_Total is nearly two thirds of Br_Peng+Ann. This implies that in the B_c → D*_s γ process the LD contributions are non-negligible.

Moreover, Br_CS is much smaller than Br_CF. This can be understood from the following facts: 1) the CS hadronic matrix element is smaller than the CF one; 2) according to Eqs. (4)-(5), the CS amplitude is proportional to a^eff_2, while the CF one involves a^eff_1.
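The size of the coupling-factor part of this suppression can be checked in one line; the numbers below are taken directly from Eq. (3) and Table 1 of the text.

```python
# Coupling-factor suppression alone: the CS amplitude carries a2_eff and the
# CF amplitude carries a1_eff (Eqs. (4)-(5)), and rates scale with the square.
a1_eff, a2_eff = 1.14, -0.20                  # Eq. (3)
coupling_suppression = (a2_eff / a1_eff) ** 2
print(round(coupling_suppression, 3))         # ~0.031

# The full Br_CS/Br_CF ratio from Table 1 is smaller still, reflecting the
# additional suppression of the CS hadronic matrix element:
br_cs, br_cf = 1.1e-8, 6.8e-7
print(round(br_cs / br_cf, 3))                # ~0.016
```

So the (a^eff_2/a^eff_1)² factor alone accounts for a suppression of roughly thirty, with the hadronic matrix elements supplying the remaining factor of about two.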
From Eq. (3), we have the relationship a^eff_2 ≪ a^eff_1; hence the tiny Br_CS seen in Table 1.

Table 2: Branching fractions of the decay B_c → D_s1(2460)γ.

                   This paper   QCDSR [5]
  Br_Peng          1.8 × 10⁻⁶   1.8 × 10⁻⁸
  Br_Ann           1.1 × 10⁻⁶   2.2 × 10⁻⁵
  Br_CS            1.6 × 10⁻⁸   -
  Br_CF            5.8 × 10⁻⁷   -
  Br_Peng+Ann      5.6 × 10⁻⁶   2.4 × 10⁻⁵
  Br_Peng+CS       2.1 × 10⁻⁶   -
  Br_LD            4.1 × 10⁻⁷   -
  Br_Peng+Ann+CF   2.8 × 10⁻⁶   -
  Br_Total         3.2 × 10⁻⁶   -

Table 3: Branching fractions of the B_c → D_s1(2536)γ and B_c → D*_s2 γ decays.

                   B_c → D_s1(2536)γ   B_c → D*_s2 γ
  Br_Peng          1.8 × 10⁻⁷          1.3 × 10⁻⁶
  Br_Ann           5.3 × 10⁻⁷          5.6 × 10⁻⁷
  Br_CS            6.1 × 10⁻¹⁰         3.1 × 10⁻⁹
  Br_CF            3.8 × 10⁻⁸          -
  Br_Peng+Ann      1.1 × 10⁻⁶          2.4 × 10⁻⁶
  Br_Peng+CS       2.1 × 10⁻⁷          1.2 × 10⁻⁶
  Br_LD            3.0 × 10⁻⁸          3.1 × 10⁻⁹
  Br_Peng+Ann+CF   8.2 × 10⁻⁷          2.4 × 10⁻⁶
  Br_Total         8.6 × 10⁻⁷          2.2 × 10⁻⁶

¹ Their typical diagrams are identical to Fig. 1(a) and Fig. 2(a), respectively, if the spectator c̄ quark is replaced by the ū or d̄ quark.

References

[1] G.-R. Lu, C. Yue, Y.-G. Cao, Z.-H. Xiong and Z.-J. Xiao, Phys. Rev. D 54, 5647 (1996) [hep-ph/9609271].
[2] D.-S. Du, X.-L. Li and Y.-D. Yang, Phys. Lett. B 380, 193 (1996) [hep-ph/9603291].
[3] N. Barik, S. Naimuddin, S. Kar and P. C. Dash, Phys. Rev. D 63, 014024 (2001).
[4] K. Azizi and V. Bashiry, Phys. Rev. D 76, 114007 (2007) [arXiv:0708.2068 [hep-ph]].
[5] K. Azizi, N. Ghahramani and A. R. Olamaei, Phys. Rev. D 87, 016013 (2013) [arXiv:1207.1676 [hep-ph]].
[6] Y. Y. Keum, M. Matsumori and A. I. Sanda, Phys. Rev. D 72, 014013 (2005) [hep-ph/0406055].
[7] H.-Y. Cheng, C.-Y. Cheung, G.-L. Lin, Y. C. Lin, T.-M. Yan and H.-L. Yu, Phys. Rev. D 51, 1199 (1995) [hep-ph/9407303].
[8] W.-L. Ju, G.-L. Wang, H.-F. Fu, T.-H. Wang and Y. Jiang, JHEP 1404, 065 (2014) [arXiv:1307.5492 [hep-ph]].
[9] W.-L. Ju, G.-L. Wang, H.-F. Fu, Z.-H. Wang and Y. Li, JHEP 1509, 171 (2015) [arXiv:1407.7968 [hep-ph]].
[10] C.-H. Chang, J.-K. Chen, X.-Q. Li and G.-L. Wang, Commun. Theor. Phys. 43, 113 (2005) [hep-ph/0406050].
[11] C.-H. Chang, J.-K. Chen and G.-L. Wang, Commun. Theor. Phys. 46, 467 (2006).
[12] G. Cvetic, C. S. Kim, G.-L. Wang and W. Namgung, Phys. Lett. B 596, 84 (2004) [hep-ph/0405112].
[13] G.-L. Wang, Phys. Lett. B 633, 492 (2006) [math-ph/0512009].
[14] G.-L. Wang, Phys. Lett. B 650, 15 (2007) [arXiv:0705.2621 [hep-ph]].
[15] G.-L. Wang, Phys. Lett. B 674, 172 (2009) [arXiv:0904.1604 [hep-ph]].
[16] S. Mandelstam, Proc. Roy. Soc. Lond. A 233, 248 (1955).
[17] C.-H. Chang, H.-F. Fu, G.-L. Wang and J.-M. Zhang, Sci. China Phys. Mech. Astron. 58, 071001 (2015) [arXiv:1411.3428 [hep-ph]].
[18] H.-F. Fu, G.-L. Wang, Z.-H. Wang and X.-J. Chen, Chin. Phys. Lett. 28, 121301 (2011) [arXiv:1202.1221 [hep-ph]].
[19] Y. Jiang, G.-L. Wang, T. Wang, W.-L. Ju and H.-F. Fu, Int. J. Mod. Phys. A 28, 1350110 (2013).
[20] G. Buchalla, A. J. Buras and M. E. Lautenbacher, Rev. Mod. Phys. 68, 1125 (1996) [hep-ph/9512380].
[21] A. Faessler, T. Gutsche, M. A. Ivanov, J. G. Korner and V. E. Lyubovitskij, Eur. Phys. J. direct C 4, 18 (2002) [hep-ph/0205287].
[22] M. Bauer, B. Stech and M. Wirbel, Z. Phys. C 34, 103 (1987).
[23] V. V. Kiselev, A. E. Kovalsky and A. K. Likhoded, Nucl. Phys. B 585, 353 (2000) [hep-ph/0002127]; V. V. Kiselev, hep-ph/0211021.
[24] H.-F. Fu, Y. Jiang, C. S. Kim and G.-L. Wang, JHEP 1106, 015 (2011) [arXiv:1102.5399 [hep-ph]].
[25] D. Ebert, R. N. Faustov and V. O. Galkin, Phys. Rev. D 82, 034019 (2010) [arXiv:1007.1369 [hep-ph]].
[26] H. M. Choi and C. R. Ji, Phys. Rev. D 80, 114003 (2009) [arXiv:0909.5028 [hep-ph]].
[27] C. Albertus, Phys. Rev. D 89, 065042 (2014) [arXiv:1401.1791 [hep-ph]].
[28] M. A. Ivanov, J. G. Korner and P. Santorelli, Phys. Rev. D 73, 054024 (2006) [hep-ph/0602050].
[29] D. Ebert, R. N. Faustov and V. O. Galkin, Phys. Rev. D 68, 094020 (2003) [hep-ph/0306306].
[30] E. Golowich and S. Pakvasa, Phys. Rev. D 51, 1215 (1995) [hep-ph/9408370].
[31] C.-H. Chang, C.-D. Lu, G.-L. Wang and H.-S. Zong, Phys. Rev. D 60, 114013 (1999) [hep-ph/9904471].
[32] K. A. Olive et al. [Particle Data Group Collaboration], Chin. Phys. C 38, 090001 (2014).
[33] G. P. Lepage and S. J. Brodsky, Phys. Rev. D 22, 2157 (1980).
[34] C.-H. Chang and Y.-Q. Chen, Phys. Rev. D 49, 3399 (1994).
{'abstract': 'In this paper, we study the rare radiative processes B c → D ( * ) sJ γ within the Standard Model, where D ( * ) sJ stands for the meson D * s , D s1 (2460, 2536) or D * s2 (2573). During the investigations, we consider the contributions from the penguin, annihilation, color-suppressed and color-favored cascade diagrams. Our results show that: 1) the penguin and annihilation contributions are dominant in the branching fractions; 2) for the processes B c → D * s γ and B c → D s1 (2460, 2536)γ, the effects from the color-suppressed and color-favored cascade diagrams are un-negligible.', 'arxivid': '1511.03805', 'author': ['Wan-Li Ju \nDepartment of Physics\nHarbin Institute of Technology\n150001HarbinChina\n', 'Tianhong Wang \nDepartment of Physics\nHarbin Institute of Technology\n150001HarbinChina\n', 'Yue Jiang \nDepartment of Physics\nHarbin Institute of Technology\n150001HarbinChina\n', 'Han Yuan \nDepartment of Physics\nHarbin Institute of Technology\n150001HarbinChina\n', 'Guo-Li Wang †[email protected] \nDepartment of Physics\nHarbin Institute of Technology\n150001HarbinChina\n'], 'authoraffiliation': ['Department of Physics\nHarbin Institute of Technology\n150001HarbinChina', 'Department of Physics\nHarbin Institute of Technology\n150001HarbinChina', 'Department of Physics\nHarbin Institute of Technology\n150001HarbinChina', 'Department of Physics\nHarbin Institute of Technology\n150001HarbinChina', 'Department of Physics\nHarbin Institute of Technology\n150001HarbinChina'], 'corpusid': 55887580, 'doi': '10.1088/0954-3899/43/4/045004', 'github_urls': [], 'n_tokens_mistral': 10925, 'n_tokens_neox': 9030, 'n_words': 4950, 'pdfsha': '0679481305a5a73cc16e7f8827759e6c4105858c', 'pdfurls': ['https://arxiv.org/pdf/1511.03805v2.pdf'], 'title': ['Rare radiative decays of the B c meson', 'Rare radiative decays of the B c meson'], 'venue': []}
arxiv
Multiple (inverse) binomial sums of arbitrary weight and depth and the all-order ε-expansion of generalized hypergeometric functions with one half-integer value of parameter 2 Oct 2007 M. Yu. Kalmykov, B. F. L. Ward, S. A. Yost — Department of Physics, Baylor University, One Bear Place, Box 97316, Waco, TX 76798-7316; Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna (Moscow Region), Russia; Institut für Theoretische Physik, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany; Princeton University, Princeton, NJ 08540

Abstract: We continue the study of the construction of analytical coefficients of the ε-expansion of hypergeometric functions and their connection with Feynman diagrams. In this paper, we show the following results. Theorem A: The multiple (inverse) binomial sums

\[ \sum_{j=1}^{\infty} \frac{1}{\binom{2j}{j}^{k}}\, \frac{z^{j}}{j^{c}}\; S_{a_1}(j-1)\cdots S_{a_p}(j-1)\,, \]

where k = ±1, S_a(j) is a harmonic sum, S_a(j) = \sum_{k=1}^{j} 1/k^a, and c is any integer, are expressible in terms of the Remiddi-Vermaseren functions. Theorem B: The hypergeometric functions {}_pF_{p-1}(A+aε; B+bε, 1/2+I₁; z) and {}_pF_{p-1}(A+aε, 1/2+I₂; B+bε; z), with A, B lists of integers and I₁, I₂ integers, are expressible in terms of the harmonic polylogarithms of Remiddi and Vermaseren with coefficients that are ratios of polynomials.

Introduction

Feynman diagrams [1] are a primary tool for calculating radiative corrections to any process within the Standard Model or its extensions. As the accuracy of measurements increases, more and more complicated diagrams (with more loops and legs, and more variables associated with different particle masses) must be evaluated.
The essential progress in such calculations is often associated with the invention of new (mainly mathematical) algorithms (e.g. Refs. [2,3]) and their realization as computer programs (e.g. Refs. [4,5]). One fruitful approach to the calculation of Feynman diagrams is based on their representation in terms of hypergeometric functions [6] or multiple series [7,8]; we will refer to such representations as hypergeometric representations of Feynman diagrams. Unfortunately, there does not exist a universal hypergeometric representation for all types of diagrams, and constructing these representations is still a matter of the personal experience of the researcher [9,10,11,12,13]. Nevertheless, existing experience with Feynman diagrams leads us to expect that all Feynman diagrams should be associated with hypergeometric functions. For practical applications, finding a hypergeometric representation is not enough: it is also necessary to construct the so-called ε-expansion, by which we mean the construction of the analytical coefficients of the Laurent expansion of hypergeometric functions around rational values of their parameters. In this direction, only very limited results are available.¹ The pioneering systematic activity in studying the Laurent series expansion of hypergeometric functions at particular values of the argument (z = 1) was started by David Broadhurst [14] in the context of Euler-Zagier sums (or multidimensional zeta values) [15]. This activity has received further consideration for another, physically interesting, point, z = 1/4 (see the relevant Appendix in Refs. [8,10]), and also for the "primitive sixth roots of unity" (see Ref. [16]). Over time, other types of sums² have been analysed in several publications: harmonic sums [7,17], generalized harmonic sums [18,12], binomial sums [11,12] and inverse binomial sums [12,19].
The introduction of new functions such as multiple polylogarithms (see Appendix A), independently in mathematics and physics [16,20,21,22,23],³ allows us to derive a set of universal algorithms for the simplification and construction of the analytical coefficients of the Laurent expansion of a large class of hypergeometric functions (for details, see Refs. [10,13,17,18,26,27,28,29]). Recently, similar problems have also drawn the attention of mathematicians [28,30]. However, the general solution of this problem remains unknown. The multiple series representation has further applications in the framework of Feynman diagram calculations. In particular, the Smirnov-Tausk approach [2,3] (see also Ref. [31]) has been very productive for constructing the analytical coefficients of the ε-expansion (mainly the finite part) of Feynman diagrams depending on one or two dimensionless kinematic variables (massless invariants or ratios of masses). Presently, there are several computer realizations of this approach [5]. In the framework of this technique, the Feynman parameter representation [1] of a diagram is rewritten in terms of multiple Mellin-Barnes (contour integral) representations, resulting in expressions for which a Laurent expansion about ε = 0 may be constructed explicitly, using gamma functions and their derivatives. The results may be summed analytically or numerically, typically leading to the same sums as in the construction of the ε-expansion of hypergeometric functions: (generalized) harmonic sums and (inverse) binomial sums. Inverse binomial sums typically arise from massive loops; see Refs. [6,32]. Another source of multiple sums in Feynman diagrams is the Frobenius series solution of a differential equation [33]. Other classes of sums have been considered as well.⁴ Analytical results are possible when these sums can be evaluated explicitly.
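To make the "(inverse) binomial sums" concrete, the following short numerical sketch evaluates the simplest (depth-zero) inverse binomial sums and checks them against two classical closed forms; the closed forms quoted are standard results used here only for illustration, not equations of this paper.

```python
from math import comb, pi, sqrt

def inverse_binomial_sum(c, z=1.0, terms=60):
    """sum_{j>=1} z^j / (binom(2j,j) * j^c): the simplest inverse binomial
    sums, with no harmonic-sum factors. Terms fall off roughly like (z/4)^j,
    so a few dozen terms suffice well inside the radius of convergence."""
    return sum(z**j / (comb(2 * j, j) * j**c) for j in range(1, terms + 1))

# Two classical closed forms at z = 1:
print(inverse_binomial_sum(2), pi**2 / 18)        # weight 2: pi^2/18
print(inverse_binomial_sum(1), pi * sqrt(3) / 9)  # weight 1: pi*sqrt(3)/9
```

Sums of exactly this shape, dressed with harmonic-sum factors S_a(j-1), are what the nested-sums and generating-function machinery discussed above is designed to reduce.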
1 One of the classical tasks in mathematics is to find the full set of parameters and arguments for which hypergeometric functions are expressible in terms of algebraic functions. Quantum field theory makes a quantum generalisation of this classical task: to find the full set of parameters and arguments for which the all-order ε-expansion is expressible in terms of known special functions, or to identify the full set of functions which must be invented in order to construct the all-order ε-expansion of generalized hypergeometric functions.
2 See Eq. (2.1) for a clarification of the terminology.
3 Hyperlogarithms have been considered by Kummer, Poincaré, and Lappo-Danilevsky; see [24]. The interrelation between hyperlogarithms and multiple polylogarithms has been discussed in [25].
4 Finite harmonic sums are another class, on which more details may be found in Ref. [34]. However, there presently does not exist an appropriate generalization of multiple (inverse) binomial sums to finite harmonic sums. Some recent attempts in this direction have been discussed in Refs. [18,35].

For the analysis of (generalized) harmonic sums, the nested sums approach [17,18] permits the reduction of any type of (generalized) harmonic sum to a set of basis sums. The analytical evaluation of these basis sums is an independent problem. (See, for example, Ref. [12].) The generating function approach [36] is a universal method for analytically evaluating arbitrary sums, which was successfully applied (see section 2.3 in Ref. [12]) to the analysis of multiple (inverse) binomial sums [11,12]. The generating function approach allows us to convert arbitrary sums to a system of differential equations; the question of whether the solution of this system is expressible in terms of known (special) functions is not addressed by the approach itself. In particular, the partial results of Refs.
[7,11,12,13] were restricted by attempts to express the results of the calculation in terms of only classical or Nielsen polylogarithms [37,38]. It is presently unknown what types of sums (beyond generalized harmonic sums) are expressible in terms of known special functions. 5

The aim of this paper is to prove the following theorems:

Theorem A. The multiple (inverse) binomial sums
$$\sum_{j=1}^{\infty} \frac{1}{\binom{2j}{j}^{k}}\,\frac{z^j}{j^c}\, S_{a_1}(j-1)\cdots S_{a_p}(j-1)\,, \qquad (1.1)$$
where $k = \pm 1$, $S_a(j) = \sum_{k=1}^{j} 1/k^a$ is a harmonic series, and $c$ is any integer, are expressible in terms of Remiddi-Vermaseren functions with
(1) for $k = 1$: (i) rational coefficients for $c \geq 2$; (ii) coefficients that are ratios of polynomials for $c \leq 1$;
(2) for $k = -1$: (i) rational coefficients for $c \geq 1$; (ii) coefficients that are ratios of polynomials for $c \leq 0$.

Theorem B. The all-order ε-expansions of the generalized hypergeometric functions [41]
$${}_pF_{p-1}\bigl(\vec{A} + \vec{a}\varepsilon;\; \vec{B} + \vec{b}\varepsilon,\, \tfrac{1}{2} + I_1;\; z\bigr)\,, \qquad (1.2a)$$
$${}_pF_{p-1}\bigl(\vec{A} + \vec{a}\varepsilon,\, \tfrac{1}{2} + I_2;\; \vec{B} + \vec{b}\varepsilon;\; z\bigr)\,, \qquad (1.2b)$$
where $\vec{A}$, $\vec{B}$ are lists of integers and $I_1$, $I_2$ are integers, are expressible in terms of harmonic polylogarithms with coefficients that are ratios of polynomials.

The paper is organised as follows. In section 2, we will prove Theorem A. In section 3, the results of Theorem A will be applied to hypergeometric functions to prove Theorem B. Section 4 is devoted to a discussion of an algorithm for the reduction and analytical evaluation of generalized multiple (inverse) binomial sums. Appendix A contains some basic information about relevant special functions.

5 There is not universal agreement on what it means to express a solution in terms of known special functions. One reasonable answer has been presented by Kitaev in the Introduction to Ref. [39], where he quotes R. Askey's Foreword to the book Symmetries and Separation of Variables by W. Miller, Jr. [40], which says "One term which has not been defined so far is 'special function'. My definition is simple, but not time invariant.
A function is a special function if it occurs often enough so that it gets a name". Kitaev adds, "... most of the people who apply them ... understand, under the notion of special functions, a set of functions which can be found in one of the well-known reference books ...". To this, we may add "functions which can be found in one of the well-known computer algebra systems."

Analytical evaluation of a basis of multiple (inverse) binomial sums of arbitrary weight and depth

The main purpose of this section is to prove Theorem A. In the first subsection, we will consider differential equations satisfied by multiple (inverse) binomial sums, and use the analytical properties of such sums to derive two useful lemmas. In the second subsection, we prove auxiliary propositions for the separate cases of multiple binomial sums and inverse binomial sums, and use them to complete the proof of Theorem A.

Some analytical properties of multiple (inverse) binomial sums of arbitrary weight and depth

Let us define the multiple sums
$$\Sigma^{(k)}_{a_1,\cdots,a_p;\, b_1,\cdots,b_q;\, c}(u) \equiv \sum_{j=1}^{\infty} \frac{1}{\binom{2j}{j}^{k}}\,\frac{u^j}{j^c}\, S_{a_1}(j-1)\cdots S_{a_p}(j-1)\, S_{b_1}(2j-1)\cdots S_{b_q}(2j-1)\,, \qquad (2.1)$$
where $S_a(j) = \sum_{k=1}^{j} 1/k^a$ is a harmonic series and $c$ is any integer. For particular values of $k$, the sums (2.1) are called
$$k = \begin{cases} 0\,, & \text{generalized harmonic} \\ 1\,, & \text{inverse binomial} \\ -1\,, & \text{binomial} \end{cases} \quad \text{sums}.$$
The case $\Sigma^{(0)}_{a_1,\cdots,a_p;\, 0,\cdots,0;\, c}(u)$ is called a harmonic sum. The number $w = c + a_1 + \cdots + a_p + b_1 + \cdots + b_q$ is called the weight and $d = p + q$ is called the depth. The general properties of multiple sums can be derived from their generating functions. Let us rewrite the multiple sum (2.1) in the form $\Sigma^{(k)}_{\vec a;\,\vec b;\, c}(u) = \sum_{j=1}^{\infty} \eta^{(k)}_{\vec a;\,\vec b;\, c}(j)\, u^j$, where $\eta^{(k)}_{\vec a;\,\vec b;\, c}(j)$ is the coefficient of $u^j$.
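Definition (2.1) can be evaluated numerically by brute force, which is useful for checking the closed forms derived below. The following sketch is ours, not part of the paper (the helper names `S` and `multi_sum` are our choices); the reference values are the classical results $\sum_{j\ge1} 1/(j\binom{2j}{j}) = \pi/(3\sqrt{3})$ and $\mathrm{Li}_2(1/2) = \pi^2/12 - \ln^2 2/2$:

```python
import math

def S(a, j):
    """Harmonic sum S_a(j) = sum_{k=1}^{j} 1/k^a (so S(a, 0) = 0)."""
    return sum(1.0 / k ** a for k in range(1, j + 1))

def multi_sum(k, a_list, b_list, c, u, terms=150):
    """Brute-force partial sum of Eq. (2.1):
    sum_{j>=1} binom(2j,j)^(-k) * u^j / j^c
              * prod_i S_{a_i}(j-1) * prod_i S_{b_i}(2j-1)."""
    total = 0.0
    for j in range(1, terms + 1):
        term = u ** j / (math.comb(2 * j, j) ** k * j ** c)
        for a in a_list:
            term *= S(a, j - 1)
        for b in b_list:
            term *= S(b, 2 * j - 1)
        total += term
    return total

# k = 1 (inverse binomial), depth 0, c = 1, u = 1: classical value pi/(3*sqrt(3))
print(multi_sum(1, [], [], 1, 1.0))
# k = 0 (generalized harmonic), depth 0, c = 2, u = 1/2: Li_2(1/2) = pi^2/12 - ln(2)^2/2
print(multi_sum(0, [], [], 2, 0.5))
```

The truncation at 150 terms is more than enough here, since the inverse binomial terms fall off like $4^{-j}$.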
In order to find the differential equation for generating functions of multiple sums it is necessary to find a recurrence relation for the coefficients η 2(2j + 1) k (j + 1) c−k η (k) a; b;c (j + 1) = j c η (k) a; b;c (j) + r (k) a; b (j) , (2.2) where the "remainder" r (k) a; b (j) is given by 2j j k r (k) a; b (j) = p r=1 S ar (j − 1)+j −ar × q l=1 S b l (2j − 1)+(2j) −b l +(2j +1) −b l − p r=1 q l=1 S ar (j − 1)S b 1 (2j − 1) . (2.3) Multiplying both sides of Eq. (2.2) by u j , summing from j = 1 to ∞, and using the fact that any extra power of j corresponds to the derivative u(d/du) leads to the following differential equations for the generating functions Σ (k) a; b;c (u) (see Ref. [19]): 4 u −1 u d du − 2 u u d du c−1 Σ (1) a; b;c (u) = δ p,0 + R (1) a; b (u) , (2.4a) 1 u −1 u d du c Σ (0) a; b;c (u) = δ p,0 +R (0) a; b (u) , (2.4b) 1 u −4 u d du −2 u d du c Σ (−1) a; b;c (u) = 2δ p,0 + 2 2u d du + 1 R (−1) a; b (u) , (2.4c) where R (k) a; b (u) ≡ ∞ j=1 u j r (k) a; b (j) and δ a,b is the Kronecker δ-function. The boundary conditions for any of these sums and their derivatives are u d du j Σ a; b;c (0) = 0 , j = 0, 1, 2, · · · (2.5) From the analysis in Refs. [11,12,13,19], we have deduced that the set of equations for the generating functions has a simpler form in terms of a new variable. For multiple inverse binomial sums, this variable is defined by y = √ u − 4 − √ u √ u − 4 + √ u , u = − (1 − y) 2 y , (2.6) and for multiple binomial sums, it is defined by χ = 1 − √ 1 − 4u 1 + √ 1 − 4u , u = χ (1 + χ) 2 . (2.7) Let us consider the differential equation for multiple inverse binomial sums in terms of these new variables. The notation Σ Σ (1) a; b;c (y) ≡ Σ (1) a; b;c (u(y)) ≡ Σ (1) a; b;c (u) u=u(y) , Σ (−1) a; b;c (χ) ≡ Σ (−1) a; b;c (u(χ)) ≡ Σ (−1) a; b;c (u) u=u(χ) . 
(2.8) In terms of the variable y, equation (2.4a) may be split into sum of two equations − 1−y 1+y y d dy c−1 Σ (1) a; b;c (y) = 1−y 1+y σ (1) a; b (y) , (2.9a) y d dy σ (1) a; b (y) = δ p,0 +R (1) a; b (y) (2.9b) with boundary condition Σ (1) a; b;c (1) = 0 . Equation (2.9a) could be rewritten as 10) or in equivalent form − 1−y 1+y y d dy c−j Σ (1) a; b;c (y) = Σ (1) a; b;j (y) ,(2.− 1 − y 1 + y y d dy c−j−1 Σ (1) a; b;c (y) = y 1 dy 2 1 − y − 1 y Σ (1) a; b;j (y) . (2.11) From this representation we immediately obtain the following lemma: Lemma A (see Ref. [19]) If for some integer j, the series Σ In a similar manner, let us rewrite the differential equations for the generating function of the multiple binomial sums as 1+χ 1−χ χ d dχ c Σ (−1) a; b;c (χ) = 1+χ 1−χ σ (−1) a; b (χ) , (2.12a) 1 2 (1+χ) 2 d dχ σ (−1) a; b (χ) = δ p,0 + 2 1+χ 1−χ χ d dχ +1 R (−1) a; b (χ) . (2. 12b) The first equation may be rewritten as 13) or in an equivalent form 1+χ 1−χ χ d dχ c−j Σ (−1) a; b;c (χ) = Σ (−1) a; b;j (χ) ,(2.1+χ 1−χ χ d dχ c−j−1 Σ (−1) a; b;c (χ) = χ 0 dχ 1 χ − 2 1 + χ Σ (−1) a; b;j (χ) . (2.14) In this case, the boundary condition (2.5) is unchanged, and we can make a statement similar to the previous one: Lemma B (see Ref. [19]) If for some integer j, the series Σ (u) for positive integers i can also be expressed in terms of harmonic polylogarithms with rational coefficients. Analytical evaluation of multiple (inverse) binomial sums of arbitrary weight and depth Let us now consider the special case of sums (2.1) including only products of harmonic sums, and show that they are expressible in terms of Remiddi-Vermaseren functions 7 with 7 These sums are related to the multiple sums ∞ X n 1 >n 2 >···np=1 1 2n 1 n 1´u n 1 n c 1 n b 1 2 · · · n bp p . argument (j − 1); see Eq. (1.1). In agreement with Ref. [12], we will denote such a sum as Σ (k) a 1 ,··· ,ap; −;m (u). 
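The changes of variable (2.6) and (2.7) and the simplest depth-0 closed forms are easy to check numerically. In the sketch below (function names are ours, not the paper's), the reference values are Eq. (2.19) below, $\Sigma^{(1)}_{-;-;1} = \frac{1-y}{1+y}\ln y$, evaluated at the real point $u=-1$, and the standard central-binomial generating function $(1-4u)^{-1/2} - 1 = (1+\chi)/(1-\chi) - 1$:

```python
import cmath
import math

def y_of_u(u):
    """Eq. (2.6): y = (sqrt(u-4) - sqrt(u)) / (sqrt(u-4) + sqrt(u))."""
    return (cmath.sqrt(u - 4) - cmath.sqrt(u)) / (cmath.sqrt(u - 4) + cmath.sqrt(u))

def chi_of_u(u):
    """Eq. (2.7): chi = (1 - sqrt(1-4u)) / (1 + sqrt(1-4u))."""
    return (1 - cmath.sqrt(1 - 4 * u)) / (1 + cmath.sqrt(1 - 4 * u))

# round trips: u = -(1-y)^2/y and u = chi/(1+chi)^2 invert the maps exactly
for u0 in (-1.0, 0.3, 2.5):
    y = y_of_u(u0)
    assert abs(-(1 - y) ** 2 / y - u0) < 1e-12
    chi = chi_of_u(u0)
    assert abs(chi / (1 + chi) ** 2 - u0) < 1e-12

# special points quoted later in the paper (Eqs. (5.5)-(5.9))
assert abs(y_of_u(4) + 1) < 1e-12
assert abs(y_of_u(-1) - (3 - math.sqrt(5)) / 2) < 1e-12
assert abs(y_of_u(1) - cmath.exp(1j * math.pi / 3)) < 1e-12
assert abs(y_of_u(2) - 1j) < 1e-12

# Eq. (2.19): sum_{j>=1} u^j/(binom(2j,j)*j) = (1-y)/(1+y)*ln(y) at u = -1
u, y = -1.0, (3 - math.sqrt(5)) / 2
lhs = sum(u ** j / (math.comb(2 * j, j) * j) for j in range(1, 100))
assert abs(lhs - (1 - y) / (1 + y) * math.log(y)) < 1e-12

# depth-0 binomial sum (c = 0): sum_{j>=1} binom(2j,j)*u^j = (1+chi)/(1-chi) - 1
chi = 0.2
u = chi / (1 + chi) ** 2
lhs = sum(math.comb(2 * j, j) * u ** j for j in range(1, 200))
assert abs(lhs - 2 * chi / (1 - chi)) < 1e-12
```

The round trips hold for any branch of the square roots, since $-(1-y)^2/y$ and $\chi/(1+\chi)^2$ are exact algebraic inverses of (2.6) and (2.7).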
In this case, the non-homogeneous term r (k) a;− (j) of differential equation (2.4a) is again expressible in terms of sums of the same type, Σ (k) b 1 ,··· ,bp; −;m (u), but with smaller depth: 2j j k r (k) a;− (j) = p r=1 S ar (j − 1)+j −ar − p r=1 S ar (j − 1) , k = ±1 . (2.15) We shall start with the case of inverse binomial sums, k = 1: ∞ j=1 1 2j j u j j c S a 1 (j − 1) · · · S ap (j − 1) . In order to prove Theorem A for inverse binomial sums, we will prove an auxiliary proposition: Proposition I For c = 1, the inverse binomial sums are expressible in terms of harmonic polylogarithms with rational coefficients c r, s times a factor (1 − y)/(1 + y): Σ (1) a 1 ,··· ,ap; −;1 (u) u=u(y) = 1 − y 1 + y r, s c r, s ln r y Li ( σ s ) (y) ,(2. 16) where r + s 1 + · · · + s k = 1 + a 1 + · · · + a p (weight of l.h.s. = weight of r.h.s.). Substituting expression (2.16) in the r.h.s. of Eq. (2.11), setting j = 1, and making trivial splitting of the denominator, we get the following result: Corollary A: For c ≥ 2, the inverse binomial sums are expressible in terms of harmonic polylogarithms with rational coefficients d r, s : Σ (1) a 1 ,··· ,ap; −;c (u) u=u(y) = r, s d r, s ln r y Li ( σ s ) (y) , c ≥ 2 (2.17) where r + s 1 + · · · + s k = c + a 1 + · · · + a p (weight of l.h.s. = weight of r.h.s.). Proof: Let us consider inverse binomial sums of depth 0: Σ (1) −;−;c (u) ≡ ∞ j=1 1 2j j u j j c . It was shown in Ref. [8] that for any c ≥ 2 this sum is expressible in terms of generalized logsine functions [37] which could be rewritten [10,42] in terms of Nielsen polylogarithms. [38] Here we will present an iterated solution for the case of interest. The system (2.9) has the form − 1−y 1+y y d dy c−1 Σ (1) −;−;c (y) = 1−y 1+y σ (1) −;− (y) , (2.18a) y d dy σ (1) −;− (y) = 1 . (2.18b) For c = 1, we immediately get the relation Σ (1) −;,−;1 (y) = 1−y 1+y ln y ,(2.19) which coincides with Proposition I and can be readily transformed into the form of Eq. 
(2.10): − 1−y 1+y y d dy c−2 Σ (1) −;−;c (y) = − 1 2 ln 2 y . (2.20) The iterated solution of this differential equation for an arbitrary integer c ≥ 2 is expressible in terms of Remiddi-Vermaseren functions with rational coefficients 8 (in accordance with Corollary A). For sums of depth 1, i.e. Σ (1) a 1 ;−;c (u) ≡ ∞ j=1 1 2j j u j j c S a 1 (j − 1) ≡ ∞ j=1 1 2j j u j j c j−1 i=1 1 i a 1 , the coefficients of the non-homogeneous part are equal to inverse binomial sums of the zero depth, 2j j r (1) a 1 ;− (y) will also be expressible in terms of harmonic polylogarithms with rational coefficients. Substituting these results in the first equation (2.22a), we obtain results in accordance with Proposition I. For c ≥ 2, the desired result follows from Lemma A. We may complete the proof by mathematical induction. Let us assume that Proposition I is valid for multiple inverse binomial sums of depth k : Σ (1) a 1 ,··· ,a k ;−;1 (u) ≡ ∞ j=1 1 2j j u j j S a 1 (j − 1) · · · S a k (j − 1) u=u(y) = 1 − y 1 + y r, s c r, s ln r y Li ( σ s ) (y) ,(2.23) where Li ( σ s ) (z) is a coloured polylogarithm of a square root of unity, s = s 1 , · · · , s k , and r + s 1 + · · · s p = c + a 1 + · · · + a k . Then for c ≥ 2, Corollary A also holds for multiple inverse binomial sums of depth k : Σ (1) a 1 ,··· ,a k ;−;c (u) ≡ ∞ j=1 1 2j j u j j c S a 1 (j − 1) · · · S a k (j − 1) u=u(y) = r, sc r, s ln r y Li ( σ s ) (y) ,(2.24) For the sum of depth k +1, the coefficients of the non-homogeneous part may be expressed as linear combinations of sums of depth j , j = 0, · · · , k, with integer coefficients and all possible symmetric distributions of the original indices between terms of the new sums: 1 p!(k+1−p)! S i 1 (j −1) · · · S ip (j −1) j i p+1 +···i k+1 ,(2.25b) where the sum over indices (i 1 , · · · i k+1 ) is to be taken over all permutations of the list (a 1 , · · · , a k+1 ). If i p+1 + · · · i k+1 ≥ 2, the r.h.s. of Eq. 
(2.25b) is expressible in terms of harmonic polylogarithms of weight k with rational coefficients; see Eq. (2.24). As the result of integrating this equation, σ a 1 ,··· ,a k+1 ;− (y) also will be expressible in terms of harmonic polylogarithms of weight k +1 with rational coefficients. If i p+1 + · · · i k+1 = 1, the r.h.s. of Eq. (2.25b) is expressible in terms of harmonic polylogarithms of weight k with a common factor (1−y)/(1+y); see Eq. (2.23). The result of integrating this equation again will be expressible in terms of harmonic polylogarithms of weight k +1 with rational coefficients: For c = 1, direct substitution of the previous results into (2.25a) will show that Proposition I is valid at weight k +1. In this way, the Proposition I is proven for all weights. Then for c ≥ 2, Corollary A is also true for multiple inverse binomial sums of depth k +1. Applying the differential operator u d du ≡ − 1−y 1+y y d dy repeatedly l times to the sum Σ (1) a 1 ,··· ,ap; −;c (u), we can derive results for a similar sum with c ≤ 1. 9 Thus, Theorem A is proven for multiple inverse binomial sums. 10 Let us now consider the multiple binomial sums 11 , (k = −1), Σ (−1) a 1 ,··· ,ap; −;c (u) ∞ j=1 2j j u j j c S a 1 (j − 1) · · · S ap (j − 1) . In order to prove Theorem A for binomial sums, we will first prove the following auxiliary proposition: Proposition II For c = 0, the binomial sums are expressible in terms of harmonic polylogarithms and have the following structure: Σ (−1) a 1 ,··· ,ap; −;0 (u) u=u(χ) = r, s 1 1 − χ c r, s + d r, s ln r χ Li ( σ s ) (χ) ,(2. 26) where r + s 1 + · · · + s k = 1 + a 1 + · · · + a p (weight of l.h.s. = weight of r.h.s.) and c r, s and d r, s are rational numbers. Substituting the expression (2.26) in the r.h.s. of Eq. 
(2.14) and setting j = 0, we get Corollary B For c ≥ 1, the binomial sums are expressible in terms of harmonic polylogarithms with rational coefficientsd r, s : Σ (−1) a 1 ,··· ,ap; −;c (u) u=u(χ) = r, sd r, s ln r χ Li ( σ s ) (χ) , c ≥ 1 ,(2. 27) where r + s 1 + · · · + s k = c + a 1 + · · · + a p (weight of l.h.s. is equal to weight of r.h.s.). 9 Some particular cases of sums of this type were considered also in Ref. [43]. 10 All multiple inverse binomial sums up to weight 4 were calculated in ref. [12]; see Table I in Appendix C. 11 These sums are related to the multiple sums ∞ X n 1 >n 2 >···np=1 2n1 n1 ! u n 1 n c 1 n b 1 2 · · · n bp p . We start again from the multiple binomial sums of depth 0, Σ (−1) −;−;c (u) ≡ ∞ j=1 2j j u j j c . In this case, Eqs. (2.12) have the form 1+χ 1−χ χ d dχ c Σ (−1) −;−;c (χ) = 1+χ 1−χ σ (−1) −;− (χ) , 1 2 (1+χ) 2 d dχ σ (−1) −;− (χ) = 1 , (2.28a) where the factor 1+χ 1−χ may be written as For c ≥ 1, the desired result follows from Lemma B: 1+χ 1−χ = 2 1 − χ − 1 .a 1 ;−;c (u) ≡ ∞ j=1 2j j u j j c S a 1 (j − 1) ≡ ∞ j=1 2j j u j j c j−1 i=1 1 i a 1 , we have 1+χ 1−χ χ d dχ c Σ (−1) a 1 ;−;c (χ) = 1+χ 1−χ σ (−1) a 1 ;− (χ) , (2.32a) 1 2 (1+χ) 2 d dχ σ (−1)1+χ 1−χ χ d dχ c−1 Σ (−1) a 1 ;−;c (χ) = −Σ (−1) −;−;a 1 +1 (χ) + χ 0 dt 1 t 1 t 1 0 dt 2 t 2 Σ (−1) −;−;a 1 −1 (t 2 ) . (2.35) In particular, for a 1 = 1 we have 1+χ 1−χ χ d dχ c−1 Σ (−1) 1;−;c (χ) = 2Li 2 (−χ) + 2 ln 2 (1 + χ) + 2Li 2 (χ) . (2.36) Let us assume Proposition II is valid for multiple binomial sums of depth k , and prove the proposition for depth k +1. Thus, we assume that Σ (−1) a 1 ,··· ,a k ;−;0 (u) ≡ ∞ j=1 2j j u j S a 1 (j − 1) · · · S a k (j − 1) u=u(χ) = p, s 1 1 − χ c p, s + d p, s ln p χLi ( σ s ) (χ) , (2.37) where Li ( σ s ) (χ) is a coloured polylogarithm of a square root of unity, s = (s 1 , · · · , s k ), and p + s 1 + · · · s p = a 1 + · · · + a k . 
Then for c ≥ 1, Corollary B also holds for multiple binomial sums of depth k : Σ (−1) a 1 ,··· ,a k ;−;c (u) ≡ ∞ j=1 2j j u j j c S a 1 (j − 1) · · · S a k (j − 1) u=u(χ) = p, sc p, s ln p χLi ( σ s ) (χ) ,(2. 38) For a sum of depth k +1, the coefficients of the non-homogeneous part are expressed as linear combinations of sums of depth j , j = 0, · · · , k, with an integer coefficients and all possible distributions of the original indices between terms of new sums, multiplied by a factor (2j + 1): 1+χ 1−χ χ d dχ c Σ (−1) a 1 ,··· ,a k+1 ;−;c (χ) = 1+χ 1−χ σ (−1) a 1 ,··· ,a k+1 ;− (χ) , (2.39a) 1 2 (1+χ) 2 d dχ σ (−1) a 1 ,··· ,a k+1 ;− (χ) = ∞ j=1 (2j + 1) 2j j u j × k p=0 (i 1 ,··· ,i k+1 ) 1 p!(k + 1 − p)! S i 1 (j −1) · · · S ip (j −1) j i p+1 +···i k+1 , (2.39b) where the sum over indices (i 1 , · · · , i k+1 ) is to be taken over all permutations of the list (a 1 , · · · , a k+1 ). Let us denote the sub-list of length p as I = (i 1 , · · · , i p ) and define the sum of the remaining indices as J = i p+1 + · · · + i k+1 , so that the second equation (2.39b) can be written as 40)). In this way, the Proposition II is found to be valid at the weight k +1. Consequently, Proposition II is proven for all weights. Therefore, for c ≥ 1, Corollary B is also valid for the multiple binomial sums of weight k +1. 1 2 (1+χ) 2 d dχ σ (−1) Applying the differential operator u d du = 1+χ 1−χ χ d dχ repeatedly l times to the sum Σ (−1) a 1 ,··· ,ap; −;c (χ), we can derive results for similar sums with c ≤ 0. Thus, Theorem A is proven for multiple binomial sums. 13 All-order ε-expansion of hypergeometric functions with one half-integer value of the parameters via multiple (inverse) binomial sums In this section, we turn our attention to the proof of Theorem B. 
It is well known that any function p F p−1 ( a + m; b + k; z) is expressible in terms of p other functions of the same type: R p+1 ( a, b, z) p F p−1 ( a + m; b + k; z) = p k=1 R k ( a, b, z) p F p−1 ( a + e k ; b + E k ; z) , (3.1) where m, k, e k , and E k are lists of integers and R k are polynomials in parameters a, b, and z. Systematic methods for solving this problem were elaborated in Refs. [44,45]. For generalized hypergeometric functions of Theorem B, let us choose as basis functions arbitrary p-functions from the following set: • for Eq. (1.2a) there are p 2 functions of the proper type: p F p−1 3 2 , {1 + a i ε} p−L−1 , {2 + d i ε} L {1 + e i ε} p−Q−1 , {2 + c i ε} Q z , • for Eq. (1.2b) there are p 2 − 1 functions of the proper type: p F p−1 {1 + a i ε} p−L , {2 + d i ε} L 3 2 , {1 + e i ε} p−Q−2 , {2 + c i ε} Q z . In the framework of the approach developed in Refs. [8,10,11,12,19], the study of the ε-expansion of basis hypergeometric functions has been reduced to the study of multiple (inverse) binomial sums. It is easy to get the following representations: p F p−1 {1 + a i ε} K , {2 + d i ε} L 3 2 , {1 + e i ε} R , {2 + c i ε} Q z = 1 2z Π Q s=1 (1 + c s ε) Π L i=1 (1 + d i ε) ∞ j=1 1 2j j (4z) j j K−R−1 ∆ , (3.2a) p F p−1 3 2 , {1+a i ε} K , {2+d i ε} L {1+e i ε} R , {2+c i ε} Q z = 2 z Π Q s=1 (1+c s ε) Π L i=1 (1+d i ε) ∞ j=1 2j j z 4 j j K−R−1 ∆ , (3.2b) where the superscripts K, L, R, Q show the lengths of the parameter lists, ∆ = exp ∞ k=1 (−ε) k k w k j −k + S k (n − 1)t k = 1 − ε w 1 j + t 1 S 1 (n − 1) + O(ε 2 ) , (3.3) S a (n) = n j=1 1/j a is a harmonic sum, and the constants are defined as A k ≡ a k i , C k ≡ c k i , D k ≡ d k i , E k ≡ e k i , t k ≡ C k + E k − A k − D k , w k ≡ C k − D k , where the summations extend over all possible values of the parameters in Eqs. (3.2). In this way, the ε-expansions of the basis functions (3.2) are expressible in terms of multiple (inverse) binomial sums studied in Sect. 2. 
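As a concrete illustration (our example, not one worked in the paper), the simplest function with a single half-integer lower parameter, ${}_2F_1(1,1;\tfrac32;z)$, already has the inverse-binomial series structure of Eq. (3.2a), and its classical closed form can be used as a check:

```python
import math

def f_series(z, terms=200):
    """2F1(1,1;3/2;z) written as an inverse binomial sum, the simplest
    instance of the series structure in Eq. (3.2a):
    sum_{j>=0} (4z)^j / ((2j+1) * binom(2j,j))."""
    return sum((4 * z) ** j / ((2 * j + 1) * math.comb(2 * j, j))
               for j in range(terms))

# classical closed form: 2F1(1,1;3/2; sin^2 t) = t / (sin t * cos t)
t = math.pi / 6
z = math.sin(t) ** 2          # z = 1/4, i.e. the point u = 4z = 1
assert abs(f_series(z) - t / (math.sin(t) * math.cos(t))) < 1e-12
```

Note that $u = 4z = 1$ is exactly the "sixth root of unity" point $y = \exp(i\pi/3)$ discussed in the conclusions.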
But all these are are expressible in terms of harmonic polylogarithms. Thus, Theorem B is proven. Generalized multiple (inverse) binomial sums via derivatives of generalized hypergeometric functions In physical applications, in particular, within Smirnov-Tausk approach, more general sums, in addition to the ones defined in Eq. (1.1), may be generated: ∞ j=1 (j + c 1 )!(j + c 2 )! (2j + c 3 )! k u j (nj + c 4 ) c S a 1 (m 1 j + b 1 ) · · · S a k (m k j + b k ) , where {a i }, {b j }, {c k }, {m k }, n are integers and k = ±1. The procedure of finding the proper differential equation (see Refs. [12,36] for a detailed discussion) can be applied to analytically evaluate any of these new sums. Another approach is based on extension of the algorithm of nested sums [17,18] for the study of the algebraic relations between these sums. However, there is a third approach arising from the possibility of reducing an arbitrary generalized hypergeometric function to a set of basis functions with the help of the Zeilberger-Takayama algorithm described by Eq. (3.1). To be more specific, let us divide both sides of Eq. (3.1) by R p+1 (a i , b j , z) and construct the ε-expansion for the hypergeometric functions described in Theorem B. The r.h.s. of this relation is expressible analytically in terms of harmonic polylogarithms with polynomial coefficients. The l.h.s. can be used as a generating function for generalized multiple (inverse) binomial sums. Using a standard form for the Taylor expansion of the Gamma function, 14 (m + aε) j (m) j = exp − ∞ k=1 (−aε) k k [S k (m+j −1)−S k (m−1)] , where (α) j ≡ Γ(α + j)/Γ(α) is the Pochhammer symbol, we obtain P+1 F P {m l +a l ε} L , {p i + 1 2 } P+1−L {n k +b k ε} K , {q j + 1 2 } P−K z = ∞ j=0 z j j! Π L l=1 (m l + a l ε) j Π K l=1 (n k + b k ε) j Π P +1−L i=1 p i + 1 2 j Π P −K s=1 q s + 1 2 j = ∞ j=0 z j j! 
1 4 j(K−L+1) Π L l=1 (m l ) j Π K k=1 (n k ) j P+1−L i=1 (2p i +1) 2j (p i +1) j P−K s=1 (l s +1) j (2l s +1) 2j ∆ ,(4.1) where the m l , n k , p i , q j are integers and ∆ = exp ∞ k=1 (−ε) k k K ω=1 b k ω [S k (n ω +j −1)−S k (n ω −1)] − L i=1 a k i S k (m i +j −1)−a k i S k (m i −1) . Setting K = L = P in Eq. (4.1), we get generating functions for generalized multiple binomial sums: the derivatives l,k ∂ ∂a l r l ∂ ∂b k s k P+1 F P {a l } P , p+ 1 2 {b k } P z a l =m l ;b k =n k (4.2a) lead to terms in the epsilon expansion of the form ∞ j=0 (2p+1) 2j (p+1) j 1 j! z j 4 j Π P l=1 (m l ) j Π P k=1 (n k ) j M =1 S a M (I M +j) , (4.2b) where the I M are integers from the lists {m l } L and {n k } K . For L = P + 1 and K = P − 1 we get generating functions for generalized multiple inverse binomial sums: l,k ∂ ∂a l r l ∂ ∂b k s k P+1 F P {a l } P +1 {b k } P −1 , q+ 1 2 z a l =m l ;b k =n k ⇒ ∞ j=0 (q+1) j (2q+1) 2j (4z) j j! Π P +1 l=1 (m l ) j Π P −1 k=1 (n k ) j M =1 S a M (I M +j) . (4.3) For K = P and L = P + 1 we get generating functions for generalized multiple harmonic sums: l,k ∂ ∂a l r l ∂ ∂b k s k P+1 F P {a l } P +1 {b k } P z a l =m l ;b k =n k ⇒ ∞ j=0 1 j! Π P +1 l=1 (m l ) j Π P k=1 (n k ) j M =1 S a M (I M +j) . (4.4) Instead of one hypergeometric function, we could consider a linear combination of the functions of the same type. Such a combination is also reducible and expressible in terms of our basis functions. Combining the proper set of hypergeometric functions, we could expect that any individual sums, 15 of the type described by r.h.s. of Eqs. (4.2) -(4.4) are expressible in terms of generalized (harmonic) polylogarithms with polynomial coefficients. 16 15 Using the results of the all-order ε-expansion for Gauss hypergeometric functions [13,29] we could consider a series of type (2.1). 16 In particular, all sums presented in Ref. [46] are reducible in terms of our basis sums or sums studied in Ref. [12]. 
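The Pochhammer-ratio expansion quoted above (before Eq. (4.1)), $(m+a\varepsilon)_j/(m)_j = \exp\bigl(-\sum_{k\ge1} \frac{(-a\varepsilon)^k}{k}[S_k(m+j-1)-S_k(m-1)]\bigr)$, can be verified numerically; the following is our sketch (function names are ours):

```python
import math

def S(k, n):
    """Harmonic sum S_k(n) = sum_{i=1}^{n} 1/i^k."""
    return sum(1.0 / i ** k for i in range(1, n + 1))

def poch_ratio(m, x, j):
    """(m + x)_j / (m)_j from the product definition of the Pochhammer symbol."""
    r = 1.0
    for i in range(j):
        r *= (m + i + x) / (m + i)
    return r

def poch_ratio_exp(m, x, j, kmax=60):
    """The same ratio via exp(-sum_{k>=1} (-x)^k/k * [S_k(m+j-1) - S_k(m-1)])."""
    s = sum((-x) ** k / k * (S(k, m + j - 1) - S(k, m - 1))
            for k in range(1, kmax + 1))
    return math.exp(-s)

assert abs(poch_ratio(2, 0.1, 5) - poch_ratio_exp(2, 0.1, 5)) < 1e-12
assert abs(poch_ratio(3, 0.25, 7) - poch_ratio_exp(3, 0.25, 7)) < 1e-12
```

The identity follows from $\ln\prod_{i=0}^{j-1}(1 + x/(m+i))$ expanded term by term, with $\sum_{i=0}^{j-1}(m+i)^{-k} = S_k(m+j-1)-S_k(m-1)$; the exponential series converges for $|x| < m$.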
Indeed, taking into account that
$$(2n+1)\binom{2n}{n} = \frac{n+1}{2}\binom{2n+2}{n+1}$$
and shifting the index of summation, we have
$$\sum_{n=1}^{\infty} \frac{1}{\binom{2n}{n}}\,\frac{z^n}{2n+1}\, X_{\vec a}(n)\, Y_{\vec b}(2n+1) = \frac{2}{z}\sum_{j=1}^{\infty} \frac{1}{\binom{2j}{j}}\,\frac{z^j}{j}\, X_{\vec a}(j-1)\, Y_{\vec b}(2j-1) - X_{\vec a}(0)\, Y_{\vec b}(1)\,. \qquad (4.5)$$
These arguments suggest a criterion for which types of generalized multiple (inverse) binomial sums are expressible in terms of harmonic polylogarithms with coefficients that are ratios of polynomials. This is just the beginning of a general analysis, but the corresponding analysis for harmonic sums is already known to be valid [17]. Unfortunately, existing computer algebra algorithms [47] do not allow us to identify multiple series with derivatives of hypergeometric functions or their combinations. It is still a matter of personal experience, but this approach looks very promising and is worthy of further analysis.

Discussion and Conclusions

We have constructed an iterative solution for the multiple (inverse) binomial sums defined by Eq. (1.1). It was shown that, by the appropriate change of variables defined by Eqs. (2.6) and (2.7), the multiple (inverse) binomial sums are converted into harmonic polylogarithms (see Theorem A). Symbolically, this may be expressed as
$$\sum_{j=1}^{\infty} \frac{1}{\binom{2j}{j}}\,\frac{u^j}{j}\, S_{a_1}(j-1)\cdots S_{a_k}(j-1)\Big|_{u=u(y)} = \frac{1-y}{1+y}\sum_{p,\vec s} c_{p,\vec s}\, \ln^p y\; \mathrm{Li}^{(\vec\sigma)}_{\vec s}(y)\,, \qquad (5.1a)$$
$$\sum_{j=1}^{\infty} \frac{1}{\binom{2j}{j}}\,\frac{u^j}{j^c}\, S_{a_1}(j-1)\cdots S_{a_k}(j-1)\Big|_{u=u(y)} = \sum_{p,\vec s} \tilde{c}_{p,\vec s}\, \ln^p y\; \mathrm{Li}^{(\vec\sigma)}_{\vec s}(y)\,, \quad c \geq 2\,, \qquad (5.1b)$$
and
$$\sum_{j=1}^{\infty} \binom{2j}{j}\, u^j\, S_{a_1}(j-1)\cdots S_{a_k}(j-1)\Big|_{u=u(\chi)} = \sum_{p,\vec s} \left(\frac{c_{p,\vec s}}{1-\chi} + d_{p,\vec s}\right) \ln^p \chi\; \mathrm{Li}^{(\vec\sigma)}_{\vec s}(\chi)\,, \qquad (5.2a)$$
$$\sum_{j=1}^{\infty} \binom{2j}{j}\,\frac{u^j}{j^c}\, S_{a_1}(j-1)\cdots S_{a_k}(j-1)\Big|_{u=u(\chi)} = \sum_{p,\vec s} \tilde{c}_{p,\vec s}\, \ln^p \chi\; \mathrm{Li}^{(\vec\sigma)}_{\vec s}(\chi)\,, \quad c \geq 1\,, \qquad (5.2b)$$
where $c$ is a positive integer, $c_{p,\vec s}$, $\tilde{c}_{p,\vec s}$ and $d_{p,\vec s}$ are rational coefficients, the weight of l.h.s.
= weight of r.h.s., and $\mathrm{Li}^{(\vec\sigma)}_{\vec s}(\chi)$ is the coloured multiple polylogarithm of a square root of unity; $S_a(j-1) = \sum_{i=1}^{j-1} 1/i^a$ is a harmonic series. In Eq. (4.5), $X_{\vec a}(n) = \prod_{k=1}^{r} S_{a_k}(n)$ and $Y_{\vec a}(2n+1) = \prod_{k=1}^{r} S_{a_k}(2n+1)$ are products of harmonic sums, with the vector $\vec a$ having $r$ components. As a consequence, $X_{\vec a}(0) = 0$ and $Y_{\vec b}(1) = 1$. In this way, any sum described by Eq. (4.5) may be reduced to sums of type (2.1), and for $Y_{\vec b}(j) = 1$ they are reduced to the sums studied in the present paper. Another possible generalization of the sums considered here is
$$\sum_{n=1}^{\infty} \frac{1}{\binom{2n}{n}}\,\frac{z^n}{2n+1}\, X_{\vec a}(n+1)\, Y_{\vec b}(2n+1) = \frac{2}{z}\sum_{j=1}^{\infty} \frac{1}{\binom{2j}{j}}\,\frac{z^j}{j}\, X_{\vec a}(j)\, Y_{\vec b}(2j-1) - X_{\vec a}(1)\, Y_{\vec b}(1)\,. \qquad (4.6)$$
Due to the depth-reduction relation
$$X_{\vec a}(j) = X_{\vec a}(j-1) + \sum_{p=0}^{r} \sum_{(i_1,\cdots,i_r)} \frac{1}{p!\,(r-p)!}\, \frac{S_{i_1}(j-1)\cdots S_{i_p}(j-1)}{j^{\, i_{p+1}+\cdots+i_r}}\,,$$
sums of type (4.6) are also expressible in terms of sums of type (4.5).

The mappings (5.1), (5.2) are defined within the radius of convergence of the l.h.s.:
$$|u| \leq 4\,, \ \text{inverse binomial}\,; \qquad |u| \leq \tfrac{1}{4}\,, \ \text{binomial}\,. \qquad (5.3)$$
Unfortunately, one of the unsolved problems is the completeness of the representations (5.1), (5.2). In other words, is it possible to express all harmonic polylogarithms in terms of multiple (inverse) binomial sums? If not, what kind of sums must be added to obtain a complete basis? Another problem beyond our present considerations is to find the algebraic relations among the sums. From the representations (5.1), (5.2), it is evident that some (or all, if the basis is complete) of the alternating or non-alternating 17 multiple Euler-Zagier sums (or multiple zeta values) [15] can be written in terms of multiple (inverse) binomial sums at special values of the argument. Two arguments where such a representation is possible are trivially obtained by setting the arguments $y$, $\chi$ of the harmonic polylogarithms to $\pm 1$:
$$u = 4\,, \quad y = -1\,, \qquad (5.5)$$
$$u = \tfrac{1}{4}\,, \quad \chi = 1\,. \qquad (5.6)$$
Another such point, 18
$$u = -1\,, \quad y = \frac{3-\sqrt{5}}{2}\,, \qquad (5.7)$$
has been discussed intensively in the context of Apéry-like expressions for Riemann zeta functions (see [48] and references therein). For two other points,
$$u = 1\,, \quad y = \exp\Bigl(i\frac{\pi}{3}\Bigr)\,, \qquad (5.8)$$
$$u = 2\,, \quad y = i\,, \qquad (5.9)$$
the relation between multiple inverse binomial sums and multiple zeta values was analysed mainly by the methods of experimental mathematics [49]. Some of the relations are presented in Ref. [50] and in the appendix of Ref. [10].

17 Let us recall that multiple Euler-Zagier sums are defined as
$$\zeta(s_1, \ldots, s_k; \sigma_1, \ldots, \sigma_k) = \sum_{n_1 > n_2 > \ldots > n_k > 0}\; \prod_{j=1}^{k} \frac{(\sigma_j)^{n_j}}{n_j^{s_j}}\,, \qquad (5.4)$$
where $\sigma_j = \pm 1$ and $s_j > 0$; $\sigma = 1$ corresponds to non-alternating and $\sigma = -1$ to alternating sums.
18 We are thankful to Andrei Davydychev for information about the relation between this point and the "golden ratio" [37].

Let us make a few comments about harmonic polylogarithms of a complex argument. For the case $0 \leq u \leq 4$, the variable $y$ defined in (2.6) belongs to the complex unit circle, $y = \exp(i\theta)$. In this case, the coloured polylogarithms of a square root of unity can be split into real and imaginary parts, as in the case of classical polylogarithms [37]. At present, there is no commonly accepted notation for the new functions generated by such a splitting. In Ref. [50], the multiple Glaisher and multiple Clausen functions were introduced as the real and imaginary parts of generalized polylogarithms of complex unit argument. In Refs. [10,26,27,42], the splitting of Nielsen polylogarithms was analysed in detail; in this case, the real and imaginary parts are reduced to classical Clausen functions $\mathrm{Cl}_j(\theta)$ and generalized log-sine functions $\mathrm{Ls}^{(k)}_j(\theta)$. Ref. [51] attempts to classify the new functions on the basis of new $\mathrm{LsLsc}_{i,j,k}(\theta)$-functions. In Appendix A of Ref. [12], the iterated representation for Remiddi-Vermaseren functions of complex unit argument was constructed.
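The index-shift relation (4.5), which underlies the reduction of such sums to type (2.1), can be checked numerically. In the sketch below (our code, not the paper's), we take $X_{\vec a}(n) = S_1(n)$ and a trivial $Y$, so the boundary term $X_{\vec a}(0)\,Y_{\vec b}(1)$ vanishes:

```python
import math

def S1(n):
    """Harmonic sum S_1(n) = sum_{i=1}^{n} 1/i."""
    return sum(1.0 / i for i in range(1, n + 1))

# the index-shift identity (2n+1)*binom(2n,n) = ((n+1)/2)*binom(2n+2,n+1)
for n in range(1, 20):
    assert 2 * (2 * n + 1) * math.comb(2 * n, n) == (n + 1) * math.comb(2 * n + 2, n + 1)

# Eq. (4.5) with X_a(n) = S_1(n) and trivial Y, at z = 1/2
z = 0.5
lhs = sum(z ** n * S1(n) / (math.comb(2 * n, n) * (2 * n + 1))
          for n in range(1, 120))
rhs = 2 / z * sum(z ** j * S1(j - 1) / (math.comb(2 * j, j) * j)
                  for j in range(1, 120))
assert abs(lhs - rhs) < 1e-12
```

The agreement is term-by-term after the shift $n = j - 1$, so the truncation error cancels up to a single exponentially small tail term.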
It was observed [8,10,12,52] that the physically interesting case, representing single-scale diagrams with two massive-particle cuts, corresponds to Remiddi-Vermaseren functions (A.9) with argument equal to a primitive "sixth root of unity", $y = \exp(i\pi/3)$. This gives an explanation of the proper "basis of transcendental constants" constructed in Refs. [52] and [10], and of its difference from the proper basis of Broadhurst [16]. Of course, for the numerical evaluation of harmonic polylogarithms of complex argument, only a series representation is necessary [53]. Using the results of Theorem A, we have proved Theorem B about the all-order ε-expansion of a special class of hypergeometric functions. The proof includes two steps: (i) the algebraic reduction of generalized hypergeometric functions of the type specified in Theorem B to basis functions, and (ii) the algorithms for calculating the analytical coefficients of the ε-expansion of the basis hypergeometric functions. The implementation of step (i), the reduction algorithm, is based on the general considerations of Refs. [44,45]. In step (ii), the algorithm is based on the series representation of the basis hypergeometric functions defined by Eq. (4.1). The coefficients of the ε-expansion are expressible in terms of the multiple (inverse) binomial sums analyzed in Theorem A. Exploring the opportunity to reduce an arbitrary generalized hypergeometric function to a set of basis functions with the help of the Zeilberger-Takayama algorithm, we have presented in section 4 some arguments about one possible generalization of (inverse) binomial sums (see Eq. (4.4)) which would be expressible in terms of harmonic polylogarithms with coefficients that are ratios of polynomials. The integral (A.5) is an iterated Chen integral [55] w.r.t. the differential forms $\omega_0 = dz/z$ and $\omega_1 = \frac{dz}{1-z}$, so that
$$\mathrm{Li}_{k_1,\cdots,k_n}(z) = \int_0^z \underbrace{\omega_0 \cdots \omega_0}_{k_1-1}\, \omega_1 \cdots \underbrace{\omega_0 \cdots \omega_0}_{k_n-1}\, \omega_1\,.$$
The coloured polylogarithms (Eq. (A.3)) also have an iterated integral representation w.r.t.
three differential forms. Here a ≡ (a_1, ..., a_p) and b ≡ (b_1, ..., b_q) denote the collective lists of indices, and η^(k)_{a;b;c}(j) depends on the summation index j; using the explicit form of η^(k)_{a;b;c}(j), the recurrence relation for the coefficients can be written in closed form. The notation Σ^(k)_{a;b;c}(y) [Σ^(k)_{a;b;c}(χ)] is used for a sum defined by Eq. (2.1) with the variable u rewritten in terms of the variable y [χ] defined by Eq. (2.6) [(2.7)]. If Σ^(1)_{a;b;j}(u) is expressible in terms of Remiddi-Vermaseren functions (A.4b) with rational coefficients, then the sums Σ^(1)_{a;b;j+i}(u) for positive integers i can also be expressed in terms of functions of this type with rational coefficients.

For c = 1, consider first the case a_1 = 1. Using Eq. (2.19), we derive a result containing ln² y − 2 ln y ln(1 + y) − 2 Li_2(−y), i.e., a result expressible in terms of harmonic polylogarithms. For a_1 ≥ 2, the r.h.s. of the second equation (2.22b) is expressible in terms of harmonic polylogarithms with rational coefficients (in accordance with the previous considerations), so that the sums σ^(i_1,...,i_{k+1})_{1,a_2,...,a_{k+1};−}(y) are expressible as combinations of the form Σ_{r,s} ln^r t Li_{σ_s}(t), in accordance with Proposition II. Substituting this result into the r.h.s. and iterating the integration, the sums for c ≥ 1, with the boundary condition defined by Eq. (2.5), are expressible in terms of generalized polylogarithms (A.2) with rational coefficients (see Corollary B). For the sums of depth 1 (this follows from the differential relation u (d/du) Σ^(k)_{a;b;c}(u) = Σ^(k)_{a;b;c−1}(u)), set c = 0. It is necessary to consider two cases: (i) a_1 = 1 and (ii) a_1 ≥ 2. For a_1 = 1, we can use the explicit results (2.30), in accordance with Proposition II. For a_1 ≥ 2, the r.h.s. of Eq. (2.32b) is expressible in terms of harmonic polylogarithms with rational coefficients, so that Eq.
(2.34) is also expressible in terms of harmonic polylogarithms with rational coefficients, in accordance with Proposition II. Substituting this result into the r.h.s., we arrive at Eq. (2.40). Let us set c = 0 and consider two cases: (i) J = 1 and (ii) J ≥ 2. For J = 1, the first term of the r.h.s. of Eq. (2.40) is expressible in terms of harmonic polylogarithms with rational coefficients, and the last term of the r.h.s. of Eq. (2.40) has the structure of Eq. (2.37), so that after integration it will again be expressible in terms of harmonic polylogarithms of weight k + 1. For J ≥ 2, both terms of the r.h.s. of Eq. (2.40) are expressible in terms of harmonic polylogarithms of weight k + 1.

We would like to point out that Eq. (2.2) is valid for an arbitrary integer k and c − k ≥ 0; in the case c − k < 0, the proper term will be generated in the r.h.s. of the equation (compare with the results of Refs. [8,10,42]). All multiple binomial sums up to weight 3 were calculated in Refs. [11,12]; see the corresponding appendices. The relation between the harmonic sums S_a(j) and the derivatives of the function ψ(z) = (d/dz) ln Γ(z) is

ψ^(k−1)(j) = (−1)^k (k − 1)! [ζ_k − S_k(j − 1)] , k > 1.

Acknowledgments

We are indebted to A. Davydychev, A. Kotikov and H. Gangl for interesting discussions. We would like to thank A. Kotikov, T. Riemann and O. Tarasov for carefully reading the manuscript and for pointing out some typos in the first version of the paper, and A. Davydychev for checking some of the formulae. This research was supported by NATO Grant PST.CLG.980342 and DOE grant DE-FG02-05ER41399. M.Yu.K. is supported in part by BMBF 05 HT6GUA, and is thankful to Baylor University for support of this research and very grateful to his wife, Laura Dolchini, for moral support while working on the paper.

Appendix A:
Zoo of special functions

For completeness, we present the definitions of a set of new functions, such as the multiple polylogarithms^19

Li_{k_1,k_2,...,k_n}(z_1, z_2, ..., z_n) = Σ_{m_1 > m_2 > ... > m_n > 0} Π_{j=1}^{n} z_j^{m_j} / m_j^{k_j} . (A.1)

Special cases of the multiple polylogarithms^20 include the generalized polylogarithms, defined by

Li_{k_1,...,k_n}(z) = Σ_{m_1 > m_2 > ... > m_n > 0} z^{m_1} / (m_1^{k_1} m_2^{k_2} ... m_n^{k_n}) , (A.2)

and the coloured polylogarithms of a square root of unity, defined by^21

Li(s_1,...,s_n; σ_1,...,σ_n; z) = Σ_{m_1 > m_2 > ... > m_n > 0} z^{m_1} Π_{j=1}^{n} (σ_j)^{m_j} / m_j^{s_j} , (A.3)

where s = (s_1,...,s_n) and σ = (σ_1,...,σ_n) are multi-indices and σ_k is a square root of unity, σ_k = ±1. The extension of the coloured polylogarithms of a square root of unity (A.3) by the inclusion of powers of logarithms, ln^k z, leads to the harmonic polylogarithms, or Remiddi-Vermaseren polylogarithms (or functions) [22]; these can be written in the explicit form given in Eq. (A.4).

19) For a review, we recommend Ref. [54].
20) Our notation corresponds to Waldschmidt's paper in Ref. [54].
21) We call n the depth, and k = k_1 + k_2 + ... + k_n (s = s_1 + s_2 + ... + s_n) the weight.

References

N.N. Bogoliubov, D.V. Shirkov, Introduction to the Theory of Quantized Fields, A Wiley-Interscience Publication, John Wiley & Sons, New York-Chichester-Brisbane, 1980; C. Itzykson, J.B. Zuber, Quantum Field Theory, McGraw-Hill, New York, 1980.
V.A. Smirnov, "Analytical result for dimensionally regularised massless on-shell double box," Phys. Lett. B460 (1999) 397 [arXiv:hep-ph/9905323].
J.B. Tausk, "Non-planar massless two-loop Feynman diagrams with four on-shell legs," Phys. Lett. B469 (1999) 225 [arXiv:hep-ph/9909506].
S. Laporta, "High-precision calculation of multi-loop Feynman integrals by difference equations," Int. J. Mod. Phys. A15 (2000) 5087 [arXiv:hep-ph/0102033].
C. Anastasiou, A. Daleo, "Numerical evaluation of loop integrals," JHEP 0610 (2006) 031 [arXiv:hep-ph/0511176]; M. Czakon, "Automatized analytic continuation of Mellin-Barnes integrals," Comput. Phys. Commun. 175 (2006) 559 [arXiv:hep-ph/0511200]; J. Gluza, K. Kajda, T. Riemann, "AMBRE - a Mathematica package for the construction of Mellin-Barnes representations for Feynman integrals," arXiv:0704.2423 [hep-ph].
E.E. Boos, A.I. Davydychev, "A Method Of Evaluating Massive Feynman Integrals," Theor. Math. Phys. 89 (1991) 1052.
D.J. Broadhurst, J. Fleischer, O.V. Tarasov, "Two loop two point functions with masses: Asymptotic expansions and Taylor series, in any dimension," Z. Phys. C60 (1993) 287 [arXiv:hep-ph/9304303]; J. Fleischer, A.V. Kotikov, O.L. Veretin, "Analytic two-loop results for self energy- and vertex-type diagrams with one non-zero mass," Nucl. Phys.
B547 (1999) 343 [arXiv:hep-ph/9808242].
M.Yu. Kalmykov, O. Veretin, "Single-scale diagrams and multiple binomial sums," Phys. Lett. B483 (2000) 315 [arXiv:hep-th/0004010].
A.I. Davydychev, "Some Exact Results For N Point Massive Feynman Integrals," J. Math. Phys. 32 (1991) 1052; "General Results For Massive N Point Feynman Diagrams With Different Masses," J. Math. Phys. 33 (1992) 358; A.I. Davydychev, J.B. Tausk, "Two loop selfenergy diagrams with different masses and the momentum expansion," Nucl. Phys. B397 (1993) 123; Phys. Rev. D53 (1996) 7381 [arXiv:hep-ph/9504431]; F.A. Berends, M. Buza, M. Böhm, R. Scharf, "Closed expressions for specific massive multiloop selfenergy integrals," Z. Phys. C63 (1994) 227; A.V. Kotikov, "The Gegenbauer Polynomial Technique: the evaluation of a class of Feynman diagrams," Phys. Lett. B375 (1996) 240 [arXiv:hep-ph/9512270]; J. Fleischer, F. Jegerlehner, O.V. Tarasov, O.L. Veretin, "Two-loop QCD corrections of the massive fermion propagator," Nucl. Phys. B539 (1999) 671 [Erratum-ibid.
B571 (2000) 511] [arXiv:hep-ph/9803493]; A.I. Davydychev, A.G. Grozin, "Effect Of M(C) On B Quark Chromomagnetic Interaction And On-Shell Two-Loop Integrals With Two Masses," Phys. Rev. D59 (1999) 054023 [arXiv:hep-ph/9809589]; C. Anastasiou, E.W.N. Glover, C. Oleari, "The two-loop scalar and tensor pentabox graph with light-like legs," Nucl. Phys. B575 (2000) 416 [Erratum-ibid. B585 (2000) 763] [arXiv:hep-ph/9912251]; A.T. Suzuki, E.S. Santos, A.G.M. Schmidt, "General massive one-loop off-shell three-point functions," J. Phys. A36 (2003) 4465 [arXiv:hep-ph/0210148]; J. Fleischer, F. Jegerlehner, O.V. Tarasov, "A new hypergeometric representation of one-loop scalar integrals in d dimensions," Nucl. Phys. B672 (2003) 303 [arXiv:hep-ph/0307113]; F. Jegerlehner, M.Yu. Kalmykov, "The O(alpha alpha(s)) correction to the pole mass of the t-quark within the standard model," Nucl. Phys. B676 (2004) 365 [arXiv:hep-ph/0308216]; T. Gehrmann, T. Huber, D. Maître, "Two-loop quark and gluon form factors in dimensional regularisation," Phys. Lett.
B622 (2005) 295 [arXiv:hep-ph/0507061]; E. Bejdakic, Y. Schröder, "Hypergeometric representation of a four-loop vacuum bubble," Nucl. Phys. B (Proc. Suppl.) 160 (2006) 155 [arXiv:hep-ph/0607006]; O.V. Tarasov, "Hypergeometric representation of the two-loop equal mass sunrise diagram," Phys. Lett. B638 (2006) 195 [arXiv:hep-ph/0603227]; A.G. Grozin, T. Huber, D. Maître, "On one master integral for three-loop on-shell HQET propagator diagrams with mass," arXiv:0705.2609 [hep-ph]; M. Argeri, P. Mastrolia, "Feynman Diagrams and Differential Equations," arXiv:0707.4037 [hep-ph].
A.I. Davydychev, M.Yu. Kalmykov, "New Results For The Epsilon-Expansion Of Certain One-, Two- And Three-Loop Feynman Diagrams," Nucl. Phys. B605 (2001) 266 [arXiv:hep-th/0012189].
F. Jegerlehner, M.Yu. Kalmykov, O. Veretin, "MS vs pole masses of gauge bosons. II: Two-loop electroweak fermion corrections," Nucl. Phys. B658 (2003) 49 [arXiv:hep-ph/0212319].
A.I. Davydychev, M.Yu. Kalmykov, "Massive Feynman diagrams and inverse binomial sums," Nucl. Phys.
B699 (2004) 3 [arXiv:hep-th/0303162].
M.Yu. Kalmykov, "Gauss hypergeometric function: Reduction, epsilon-expansion for integer/half-integer parameters and Feynman diagrams," J. High Energy Phys. 04 (2006) 056 [arXiv:hep-th/0602028].
D.J. Broadhurst, "Three Loop On-Shell Charge Renormalization Without Integration: Lambda-MS (QED) To Four Loops," Z. Phys. C54 (1992) 599; "On the enumeration of irreducible k-fold Euler sums and their roles in knot theory and field theory," arXiv:hep-th/9604128; J.M. Borwein, D.M. Bradley, D.J. Broadhurst, "Evaluations of k-fold Euler/Zagier sums: a compendium of results for arbitrary k," Electron. J. Combin. 4 (1997) #R5 [arXiv:hep-th/9611004].
L. Euler, Novi Comm. Acad. Sci. Petropol. 20 (1775) 140; D. Zagier, "Values of zeta functions and their generalizations," in: A. Joseph et al. (eds.), Proceedings of the First European Congress of Mathematics, Paris, vol. II (Progress in Mathematics, vol. 120), Birkhäuser, 1994, p. 497-512.
D.J.
Broadhurst, "Massive 3-loop Feynman diagrams reducible to SC* primitives of algebras of the sixth root of unity," Eur. Phys. J. C8 (1999) 311 [arXiv:hep-th/9803091].
S. Moch, P. Uwer, S. Weinzierl, "Nested sums, expansion of transcendental functions and multi-scale multi-loop integrals," J. Math. Phys. 43 (2002) 3363 [arXiv:hep-ph/0110083].
S. Weinzierl, "Expansion around half-integer values, binomial sums and inverse binomial sums," J. Math. Phys. 45 (2004) 2656 [arXiv:hep-ph/0402131].
M.Yu. Kalmykov, "Series and epsilon-expansion of the hypergeometric functions," Nucl. Phys. B (Proc. Suppl.) 135 (2004) 280 [arXiv:hep-th/0406269].
A.B. Goncharov, "The double logarithm and Manin's complex for modular curves," Math. Res. Lett. 4 (1997) 617; "Multiple polylogarithms, cyclotomy and modular complexes," Math. Res. Lett. 5 (1998) 497.
J.M. Borwein, D.M. Bradley, D.J. Broadhurst, P. Lisonek, "Special Values Of Multiple Polylogarithms," Trans. Am. Math. Soc. 353 (2001) 907 [arXiv:math.ca/9910045].
E. Remiddi, J.A.M. Vermaseren, "Harmonic polylogarithms," Int. J. Mod. Phys. A15 (2000) 725 [arXiv:hep-ph/9905237].
T. Gehrmann, E. Remiddi, "Two-loop master integrals for γ* → 3 jets: The planar topologies," Nucl. Phys. B601 (2001) 248 [arXiv:hep-ph/0008287].
E.E. Kummer, "Über die Transcendenten, welche aus wiederholten Integrationen rationaler Formeln entstehen," J. reine angew. Math. 21 (1840) 74-90, 193-225, 328-371; H. Poincaré, Acta Math. 4 (1884) 201; I.A. Lappo-Danilevsky, "Résolution algorithmique des problèmes réguliers de Poincaré et de Riemann," J. Soc. Physico-Mathématique de St. Pétersbourg, vol. 3, 1911; I.A. Lappo-Danilevsky, Mémoires sur la théorie des systèmes d'équations différentielles linéaires, Chelsea, New York, 1953; A.B. Goncharov, "Polylogarithms in arithmetic and geometry," in Proceedings of the International Congress of Mathematicians, Vol. 1,2 (Zürich, 1994), 374-387, Birkhäuser, Basel, 1995; "Multiple polylogarithms and mixed Tate motives," arXiv:math.AG/0103059.
A.I. Davydychev, "Explicit results for all orders of the epsilon-expansion of certain massive and massless diagrams," Phys. Rev. D61 (2000) 087701 [arXiv:hep-ph/9910224].
A.I. Davydychev, M.Yu. Kalmykov, "Some Remarks On The Epsilon-Expansion Of Dimensionally Regulated Feynman Diagrams," Nucl. Phys. Proc. Suppl. 89 (2000) 283 [arXiv:hep-th/0005287].
Shu Oi, "Representation of the Gauss hypergeometric function by multiple polylogarithms and relations of multiple zeta values," arXiv:math.NT/0405162.
M.Yu. Kalmykov, B.F.L. Ward, S. Yost, "All order epsilon-expansion of Gauss hypergeometric functions with integer and half/integer values of parameters," J. High Energy Phys. 02 (2007) 040 [arXiv:hep-th/0612240].
V.V. Zudilin, "Very well-poised hypergeometric series and multiple integrals," Russian Math. Surveys 57 (2002) 824; "Well-poised hypergeometric transformations of Euler-type multiple integrals," J. London Math. Soc. 70 (2004).
S.A. Zlobin, "Integrals expressible as linear forms in generalized polylogarithms," Math. Notes 71 (2002) 711; "Decomposition of multiple integrals in linear forms," Dokl. Akad. Nauk 398 (2004) 595; "Expansion of multiple integrals in linear forms," Mat. Zametki 77 (2005) 683; "Properties of the coefficients of some linear forms of generalized polylogarithms," Fundam. Prikl. Mat. 11 (2005) 41; Yu.V. Nesterenko, "On an identity of Mahler," Mat.
Zametki 79 (2006) 107; E.A. Ulanskii, "On an identity for a generalization of a hypergeometric integral," Mat. Zametki 79 (2006) 796; C. Krattenthaler, T. Rivoal, "An identity of Andrews, multiple integrals, and very-well-poised hypergeometric series," Ramanujan J. 13 (2007) 203 [arXiv:math.CA/0312148]; J. Cresson, S. Fischler, T. Rivoal, "Séries hypergéométriques multiples et polyzêtas," arXiv:math.NT/0609743.
P. Flajolet, B. Salvy, "Euler sums and contour integral representations," Experiment. Math. 7 (1998) 15-35.
B. Jantzen, V.A. Smirnov, "The two-loop vector form factor in the Sudakov limit," Eur. Phys. J. C47 (2006) 671 [arXiv:hep-ph/0603133]; S. Actis, M. Czakon, J. Gluza, T. Riemann, "Planar two-loop master integrals for massive Bhabha scattering: N(f) = 1 and N(f) = 2," Nucl. Phys. Proc. Suppl. 160 (2006) 91 [arXiv:hep-ph/0609051].
A.V. Kotikov, "Differential equations method: New technique for massive Feynman diagrams calculation," Phys. Lett. B254 (1991) 158; "Differential equations method: The calculation of vertex type Feynman diagrams," Phys. Lett.
B259 (1991) 314; "Differential equation method: The calculation of N point Feynman diagrams," Phys. Lett. B267 (1991) 123; "New method of massive Feynman diagrams calculation," Mod. Phys. Lett. A6 (1991) 677; J. Fleischer, A.V. Kotikov, O.L. Veretin, "The differential equation method: Calculation of vertex-type diagrams with one non-zero mass," Phys. Lett. B417 (1998) 163 [arXiv:hep-ph/9707492].
D.I. Kazakov, A.V. Kotikov, "Total α_s correction to deep inelastic scattering cross-section ratio, R = σ_L/σ_T in QCD. Calculation of longitudinal structure function," Nucl. Phys. B307 (1988) 721 [Erratum-ibid. B345 (1990) 299].
J.A.M. Vermaseren, "Harmonic sums, Mellin transforms and integrals," Int. J. Mod. Phys. A14 (1999) 2037 [arXiv:hep-ph/9806280]; J. Blumlein, S. Kurth, "Harmonic sums and Mellin transforms up to two-loop order," Phys. Rev. D60 (1999) 014018 [arXiv:hep-ph/9810241]; J. Blumlein, "Algebraic relations between harmonic sums and associated quantities," Comput. Phys. Commun. 159 (2004) 19 [arXiv:hep-ph/0311046].
I. Bierenbaum, J. Blumlein, S. Klein, "Two-loop massive operator matrix elements and unpolarized heavy flavor production at asymptotic values Q² >> m²," Nucl. Phys. B780 (2007) 40 [arXiv:hep-ph/0703285].
H.S. Wilf, Generatingfunctionology, Academic Press, London, 1994, http://www.math.upenn.edu/~wilf/DownldGF.html.
L. Lewin, Polylogarithms and associated functions, North-Holland, Amsterdam, 1981.
K.S. Kölbig, J.A. Mignaco, E. Remiddi, "On Nielsen's generalized polylogarithms and their numerical calculation," B.I.T. 10 (1970) 38; R. Barbieri, J.A. Mignaco, E. Remiddi, "Electron Form-Factors Up To Fourth Order. 1," Nuovo Cim. A11 (1972) 824; K.S. Kölbig, "Nielsen's Generalized Polylogarithms," SIAM J. Math. Anal. 17 (1986) 1232.
A.V. Kitaev, "Special Functions of the Isomonodromy Type," Acta Appl. Math. 64 (2000) 1-32.
W. Miller, Symmetries and Separation of Variables, Addison-Wesley, Reading, Massachusetts, 1977.
W.N. Bailey, Generalized Hypergeometric Series, Cambridge University Press, 1935; A. Erdelyi (ed.), Higher Transcendental Functions, vol. 1, McGraw-Hill, New York, 1953; L.J.
Slater, Generalized Hypergeometric Functions, Cambridge University Press, Cambridge, 1966.
M.Yu. Kalmykov, A. Sheplyakov, "lsjk: A C++ library for arbitrary-precision numeric evaluation of the generalized log-sine functions," Comput. Phys. Commun. 172 (2005) 45 [arXiv:hep-ph/0411100].
J.M. Borwein, R. Girgensohn, "Evaluations of binomial series," Aequationes Math. 70 (2005) 25-36.
D. Zeilberger, "The algebra of linear partial difference operators and its applications," SIAM J. Math. Anal. 11 (1980) 919.
N. Takayama, "Gröbner basis and the problem of contiguous relations," Japan J. Appl. Math. 6 (1989) 147.
J. Gluza, F. Haas, K. Kajda, T. Riemann, "Automatizing the application of Mellin-Barnes representations for Feynman integrals," arXiv:0707.3567 [hep-ph]; see also http://www-zeuthen.desy.de/riemann/Talks/riemann-zif-07.pdf
M. Petkovšek, H.S. Wilf, D. Zeilberger, A = B, A K Peters, Ltd., Wellesley, MA, 1996.
D.M. Bradley, "A class of series acceleration formulae for Catalan's constant," Ramanujan J. 3 (1999) 159 [arXiv:math.CA/0706.0356]; G. Almkvist, A.
Granville, "Borwein and Bradley's Apéry-like formulae for ζ(4n + 3)," Experiment. Math. 8 (1999) 197-203; T. Rivoal, "Simultaneous generation of Koecher and Almkvist-Granville's Apéry-like formulae," Experiment. Math. 13 (2004) 503-508; J.M. Borwein, D.M. Bradley, "Searching symbolically for Apéry-like formulae for values of the Riemann zeta function," arXiv:math.CA/0505093; D. Bailey, J.M. Borwein, D.M. Bradley, "Experimental determination of Apéry-like identities for ζ(2n + 2)," Experiment. Math. 15 (2006) 281-289 [arXiv:math/0505270 [math.NT]].
H.R.P. Ferguson, D.H. Bailey, S. Arno, "Analysis of PSLQ, an integer relation finding algorithm," Math. Comp. 68 (1999) 351-369; D.H. Bailey, D. Broadhurst, "Parallel integer relation detection: techniques and applications," Math. Comp. 70 (2001) 1719-1736.
J.M. Borwein, D.J. Broadhurst, J. Kamnitzer, "Central Binomial Sums, Multiple Clausen Values and Zeta Values," Exper. Math. 10 (2001) 25 [arXiv:hep-th/0004153].
A.I. Davydychev, M.Yu.
Kalmykov, "Geometrical approach to loop calculations and the epsilon-expansion of Feynman diagrams," arXiv:hep-th/0203212; M.Yu. Kalmykov, "About higher order epsilon-expansion of some massive two- and three-loop master-integrals," Nucl. Phys. B718 (2005) 276 [arXiv:hep-ph/0503070].
J. Fleischer, M.Yu. Kalmykov, A.V. Kotikov, "Two-loop self-energy master integrals on shell," Phys. Lett. B462 (1999) 169; B467 (1999) 310(E) [arXiv:hep-ph/9905249]; J. Fleischer, M.Yu. Kalmykov, "ON-SHELL2: FORM based package for the calculation of two-loop self-energy single scale Feynman diagrams occurring in the standard model," Comput. Phys. Commun. 128 (2000) 531 [arXiv:hep-ph/9907431]; J. Fleischer, M.Yu. Kalmykov, "Single mass scale diagrams: Construction of a basis for the epsilon-expansion," Phys. Lett. B470 (1999) 168 [arXiv:hep-ph/9910223].
J. Vollinga, S. Weinzierl, "Numerical evaluation of multiple polylogarithms," Comput. Phys. Commun. 167 (2005) 177 [arXiv:hep-ph/0410259]; D. Maitre, "Extension of HPL to complex arguments," arXiv:hep-ph/0703052.
M. Waldschmidt, "Multiple polylogarithms: an introduction," in Number Theory and Discrete Mathematics
Chandigarh, 2000; Birkhäuser, BaselM. Waldschmidt, "Multiple polylogarithms: an introduction", Number theory and discrete mathematics (Chandigarh, 2000), 1-12, (Trends Math., Birkhäuser, Basel, 2002); Algebraic relations for multiple zeta values. V V Zudilin, Russian Math. Surveys. 58V.V. Zudilin, "Algebraic relations for multiple zeta values," Russian Math. Surveys 58 (2003) 1. Algebras of iterated path integrals and fundamental groups. K T Chen, Trans. A.M.S. 156359K.T. Chen, "Algebras of iterated path integrals and fundamental groups", Trans. A.M.S. 156 (1971) 359; Quantum Groups, Graduate Texts in Math. C Kassel, Springer-Verlag155C. Kassel, Quantum Groups, Graduate Texts in Math. 155, Springer-Verlag, 1995.
{'fraction_non_alphanumeric': 0.097105912188053, 'fraction_numerical': 0.05459333677038193, 'mean_word_length': 3.804391484803856, 'pattern_counts': {'":': 0, '<': 1, '<?xml version=': 0, '>': 14, 'https://': 0, 'lorem ipsum': 0, 'www.': 2, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 71, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'are expressible in terms of the harmonic polylogarithms of Remiddi and Vermaseren with coefficients that are ratios of polynomials.', 'arxivid': '0707.3654', 'author': ['M Yu Kalmykov ', 'B F L Ward ', 'S A Yost ', '\nDepartment of Physics\nLaboratory of Theoretical Physics\nBaylor University\nOne Bear Place97316, 76798-7316Box, Waco, BogoliubovTX\n', '\nDepartment of Physics\nJoint Institute for Nuclear Research\n141980Dubna (Moscow Region)Russia\n', '\nInstitut für Theoretische Physik\nBaylor University\nOne Bear Place97316, 76798-7316Box, Waco, -1TX\n', '\nDepartment of Physics\nUniversität Hamburg\nLuruper Chaussee 14922761HamburgGermany\n', '\nPrinceton University\n08540PrincetonNJ\n'], 'authoraffiliation': ['Department of Physics\nLaboratory of Theoretical Physics\nBaylor University\nOne Bear Place97316, 76798-7316Box, Waco, BogoliubovTX', 'Department of Physics\nJoint Institute for Nuclear Research\n141980Dubna (Moscow Region)Russia', 'Institut für Theoretische Physik\nBaylor University\nOne Bear Place97316, 76798-7316Box, Waco, -1TX', 'Department of Physics\nUniversität Hamburg\nLuruper Chaussee 14922761HamburgGermany', 'Princeton University\n08540PrincetonNJ'], 'corpusid': 5401703, 'doi': '10.1088/1126-6708/2007/10/048', 'github_urls': [], 'n_tokens_mistral': 27587, 'n_tokens_neox': 23131, 'n_words': 12076, 'pdfsha': '43526cd66a17302f986d31043ccbcd4025de8e21', 'pdfurls': ['https://arxiv.org/pdf/0707.3654v3.pdf'], 'title': ['Multiple (inverse) binomial sums of arbitrary weight and depth and the all-order ε-expansion of generalized hypergeometric functions with one half-integer value of parameter', 'Multiple (inverse) binomial sums of arbitrary weight and depth and the all-order ε-expansion of generalized hypergeometric functions with one half-integer value of parameter'], 'venue': []}
arxiv
BEVPlace: Learning LiDAR-based Place Recognition using Bird's Eye View Images

Lun Luo, Shuhang Zheng, Yixuan Li, Zhiyong Fan, Beinan Yu, Siyuan Cao, Hui-Liang Shen

Place recognition is a key module for long-term SLAM systems. Current LiDAR-based place recognition methods are usually based on representations of point clouds such as unordered points or range images. These methods achieve high recall rates of retrieval, but their performance may degrade in the case of view variation or scene changes. In this work, we explore the potential of a different representation in place recognition, i.e., bird's eye view (BEV) images. We validate that, in scenes of slight viewpoint changes, a simple NetVLAD network trained on BEV images achieves performance comparable to the state-of-the-art place recognition methods. For robustness to view variations, we propose a rotation-invariant network called BEVPlace. We use group convolution to extract rotation-equivariant local features from the images and NetVLAD for global feature aggregation. In addition, we observe that the distance between BEV features is correlated with the geometry distance of point clouds. Based on this observation, we develop a method to estimate the position of the query cloud, extending the usage of place recognition. The experiments conducted on large-scale public datasets show that our method 1) achieves state-of-the-art performance in terms of recall rates, 2) is robust to view changes, 3) shows strong generalization ability, and 4) can estimate the positions of query point clouds. Source code is publicly available at https://github.com/zjuluolun/BEVPlace.

Introduction

Place recognition plays an important role in both the map construction and localization phases of long-term Simultaneous Localization and Mapping (SLAM) systems [3].
In the map construction phase, it can provide loop closure constraints to eliminate the accumulated drift of the odometry. In the localization phase, it can re-localize the system when pose tracking is lost and improve the robustness of the system. In recent years, many image-based place recognition methods [8,2,22] have been developed and achieved satisfactory performance. However, these methods are vulnerable to illumination changes and view variation due to the imaging mechanism of camera sensors. On the contrary, point clouds from LiDAR sensors are robust to illumination changes due to active sensing. In addition, the availability of precise depth information enables more accurate place recognition [1,19].

LiDAR-based place recognition can be regarded as a retrieval problem, that is, finding the most similar frame to a query from a pre-built database. The key to solving this problem is to generate a global feature that can model the similarity between point clouds. PointNetVLAD [1] gives the first deep-learning solution to the problem of large-scale LiDAR-based place recognition. It uses PointNet [26] to extract local features from unordered points and NetVLAD [2] to generate global features. Many subsequent methods follow PointNetVLAD and introduce auxiliary modules such as attention [35,29], handcrafted features [19], and sparse convolution [15]. Recently, some methods [4,21] based on range images have been developed. The range image is the spherical projection of a point cloud. Due to the projection mechanism, translations of the range image are equivariant to rotations of the point cloud. Based on this, OverlapTransformer [21] uses a convolution network and a transformer to extract rotation-invariant features from the images. Some methods [13,14,4] use similar projections and also achieve place recognition robust to view changes.
Although the aforementioned methods have made great progress, they still have limitations in terms of generalization ability. This is because both the unordered points and the range images used for place recognition are sensitive to the motion of the LiDAR sensor. Specifically, for unordered points, the point coordinates and the relative positions between points change severely with motions of the LiDAR sensor. For range images, the image contents suffer various distortions under translations of the point clouds, although they are robust to rotations. Current methods [1,35,19] force the network to learn these variations of the data with data augmentation. However, as pointed out in [17], data augmentation needs the network to be as flexible as possible to capture all the variations, which may result in a large risk of overfitting and poor generalization ability.

In this work, we explore the potential of place recognition using bird's eye view (BEV) images. The BEV image is generated by projecting a point cloud onto the ground plane. In road scenes, the transformations of point clouds are approximately equivariant to the rotations and translations of BEV images [20]. Thus, the contents of BEV images are more robust to sensor motions. As shown in Fig. 1, a translation of a point cloud causes little appearance change in the BEV image but introduces geometry distortions to the range image. The results shown in Fig. 1 (c) validate that a simple NetVLAD network based on the BEV representation achieves performance comparable to the state-of-the-art methods. To achieve robustness to viewpoint changes, we design a group convolution [30] network to extract local features from BEV images. Then, we use NetVLAD [2] to extract rotation-invariant global features. Benefiting from the design of rotation invariance for BEV images, our method has a strong ability for place retrieval under both viewpoint variations and scene changes.
In addition, we observe that the distances between BEV features correlate well with the geometry distances of point clouds. Based on this correlation, we map the feature distance to the geometry distance and then estimate the position of the query cloud, which extends the usage of LiDAR-based place recognition. We summarize the contributions of this paper as follows:

• We propose a novel LiDAR-based place recognition method called BEVPlace. In the method, we extract rotation-equivariant local features from BEV images based on group convolution, which facilitates the design of rotation-invariant global features.

• We explore the statistical correlation between the feature distance and the geometry distance of point cloud pairs. Based on this correlation prior, we compute the geometry distance between the query point cloud and the matched point cloud and use it for position estimation.

• We evaluate our method on three large-scale public datasets, showing that our method is robust to view changes, has strong generalization ability, and achieves state-of-the-art performance in terms of recall rates.

Related Work

In this section, we briefly review recent developments in the field of LiDAR-based place recognition. For a more comprehensive overview, readers may refer to [24]. According to the representations used for feature extraction, we classify current LiDAR-based place recognition methods into two categories, i.e., methods that utilize 3D points and methods that use projection images as intermediate representations.

Place recognition based on 3D points. PointNetVLAD [1] leverages PointNet [26] to project each point into a higher-dimensional feature and then uses NetVLAD [2] to generate global features. To take advantage of more contextual information, PCAN [35] introduces the point contextual attention network that learns attention to task-relevant features.
Both PointNetVLAD and PCAN cannot capture local geometric structures due to their independent treatment of each point. Thus, the following methods focus on extracting more discriminative local features by considering neighborhood information. LPD-Net [19] adopts an adaptive local feature module to extract handcrafted features and uses a graph-based neighborhood aggregation module to discover the spatial distribution of local features. EPC-Net [10] improves LPD-Net by using a proxy point convolutional neural network. DH3D [6] designs a 3D local feature encoder to learn more distinct local descriptors, and SOE-Net [33] introduces a point orientation encoding (PointOE) module. MinkLoc3D [15,16] uses sparse 3D convolutions in local areas and achieves state-of-the-art performance on the benchmark dataset. Recently, some works including SVT-Net [7], TransLoc3D [34], NDT-Transformer [36], and PPT-Net [11] leverage the transformer-based attention mechanism [31] to boost place recognition performance. However, it was shown that MinkLoc3D outperforms these transformer-based methods with fewer parameters.

Place recognition based on projection images. Steder et al. [28] extract handcrafted local features from range images of point clouds and perform place recognition by local feature matching. Kim et al. [13] project the point cloud into a bearing-angle image and propose the scan context descriptor. They further introduce the concept of the scan context image (SCI) [14] and achieve place recognition by classifying SCIs using a convolutional network. OverlapNet [4] uses the overlap of range images to determine whether two point clouds are at the same place and uses a siamese network to estimate the overlap. OverlapTransformer [21] further uses a transformer architecture to learn rotation-invariant global features.
Different from the aforementioned methods based on image representations built under polar or polar-like projections, BVMatch [20] projects point clouds into BEV images and extracts handcrafted BVFT features from the images. It then uses the bag-of-words model [8] to generate global features. However, it has been shown that BVMatch cannot generalize well to unseen environments [20]. Different from BVMatch, we extract rotation-equivariant local features using group convolution [30] and generate global features with NetVLAD [2]. Thanks to the network design, our method can generalize to different scenes while keeping high recall rates.

Preliminaries

Let $m_i$ be the point cloud collected by a sensor at the pose $T_i = (R_i, t_i)$, where $R_i$ is the rotation matrix and $t_i$ is the position. The database formed by $n$ point clouds and their associated poses is denoted as $\mathcal{M} = \{(m_i, T_i)\}_{i=1,2,\dots,n}$. Given a query point cloud $m_q$, place recognition aims at finding its most structurally similar point cloud in the pre-built database $\mathcal{M}$. In the problem of LiDAR-based place recognition, two point clouds are usually regarded as structurally similar if they are collected at geometrically close places. Towards this goal, we design a network $f(\cdot)$ that maps a point cloud to a distinct compact global feature vector such that $\|f(m_q) - f(m_i)\|_2 < \|f(m_q) - f(m_j)\|_2$ if $m_q$ is structurally similar to $m_i$ but dissimilar to $m_j$. Based on the network $f$, we perform place retrieval by finding the point cloud with the minimum feature distance to the query point cloud. In this work, we train our network on BEV images of point clouds. In addition to place retrieval, we develop an extended usage that estimates the positions of the query point clouds.

Method

Our method is formed by two modules, as illustrated in Fig. 2. In the BEVPlace network, we project the query point cloud into the BEV image.
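The BEV projection just mentioned can be sketched as a simple density rasterization. The grid size (0.4 m) and the [−20 m, 20 m] crop follow values quoted elsewhere in the paper; normalizing by the maximum cell count is our assumption, since the text only specifies a "normalized point density":

```python
import numpy as np

def bev_density_image(points, half_extent=20.0, grid=0.4):
    """Rasterize a point cloud (N x 3 array) into a BEV point-density image.

    half_extent and grid follow the values quoted in the paper
    ([-20 m, 20 m] window, 0.4 m cells); the max-normalization is an
    assumption, as the text only says 'normalized point density'.
    """
    n_cells = int(2 * half_extent / grid)            # 100 x 100 cells
    xy = points[:, :2]
    mask = np.all(np.abs(xy) < half_extent, axis=1)  # crop to the window
    idx = ((xy[mask] + half_extent) / grid).astype(int)
    img = np.zeros((n_cells, n_cells), dtype=np.float32)
    np.add.at(img, (idx[:, 1], idx[:, 0]), 1.0)      # count points per cell
    if img.max() > 0:
        img /= img.max()                             # normalize to [0, 1]
    return img
```

`np.add.at` is used instead of fancy-indexed `+=` so that repeated cell indices accumulate correctly.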
Then, we extract a rotation-invariant global feature through a group convolution network and NetVLAD [2]. In the position estimator, we retrieve the closest feature to the global feature from a pre-built database. We recover the geometry distance between the query and the matched point clouds based on a mapping model. The position of the query is estimated based on the recovered distances.

BEVPlace Network

In road scenes, a LiDAR sensor on a car or a robot can only move on the ground plane. Since we generate BEV images by projecting point clouds onto the ground plane, a view change of the sensor results in a rotation transformation of the image. To achieve robust place recognition, we aim at designing a network $f$ that extracts rotation-invariant features from BEV images. Denoting the rotation transformation $R \in SO(2)$ applied to the BEV image $I$ as $R \circ I$, the rotation invariance of $f$ can be represented as

$f(R \circ I) = f(I). \quad (1)$

A straightforward approach to achieve such invariance is to train a network with data augmentation [17]. However, data augmentation usually requires the network to have a larger group of parameters to learn the rotations, and it may not generalize to combinations of rotations and scenes not occurring in the training set. In this work, we use the cascade of a group convolution network and NetVLAD to achieve rotation invariance. Our BEVPlace has strong generalization ability since the network is designed to be inherently invariant to rotations.

Bird's Eye View Image Generation. We follow BVMatch [20] and use the point density to construct images. We discretize the ground space into uniform grids with a grid size of 0.4 m. For a point cloud $m$, we compute the number of points in each grid and use the normalized point density as the pixel intensity of the BEV image $I$.

Group Convolution Network. Group convolution treats the feature map as a function on the corresponding symmetry group [30].
Considering the 2D rotation group $SO(2)$, applying a group convolution $f_{gc}$ to a BEV image $I$ results in rotation-equivariant features, which can be written as

$f_{gc}(R \circ I) = R' \circ f_{gc}(I). \quad (2)$

That is, transforming the input $I$ by a rotation transformation $R$ and then passing it through the mapping $f_{gc}$ gives the same result as first mapping $I$ through $f_{gc}$ and then transforming the feature with $R' \in SO(2)$. Usually, $f_{gc}$ is designed such that $R' = R$.

Group convolution has been well developed for a few years, and there are some mature group convolution designs [30,18,32]. We implemented our network based on GIFT [18]. GIFT is originally designed for image matching and can produce distinct local features. Our main modification to GIFT is to remove the scale features, since there is no scale difference between BEV images. More details of our network implementation are given in the supplementary materials.

Rotation-invariant global features. According to Eq. 2, the contents of the feature map of the group convolution stay the same for rotated images and are only transformed by a rotation $R'$. Thus, we can use a global pooling operation to extract rotation-invariant global features. To capture more information about the statistics of local features, we use NetVLAD [2] for feature aggregation. We achieve rotation invariance by cascading the group convolution network and NetVLAD, that is,

$\mathrm{NetVLAD}(f_{gc}(R \circ I)) = \mathrm{NetVLAD}(R' \circ f_{gc}(I)) = \mathrm{NetVLAD}(f_{gc}(I)). \quad (3)$

Loss function. There are several loss functions [1,33] for the LiDAR-based place recognition problem. In this work, we train our network with the simple and commonly used lazy triplet loss [1], formulated as

$L = \max_j \left( [m + \delta_{pos} - \delta_{neg_j}]_+ \right), \quad (4)$

where $[\cdot]_+$ denotes the hinge loss, $m$ is the margin, $\delta_{pos}$ is the feature distance between an anchor point cloud $m_a$ and its structurally similar ("positive") point cloud, and $\delta_{neg_j}$ is the feature distance between $m_a$ and a structurally dissimilar ("negative") point cloud. We follow the training strategy in [1,19,33] and regard two point clouds as structurally similar if their geometry distance is less than $\varepsilon$ meters.

Position Estimator

The lazy triplet loss forces the network to learn a mapping that preserves the adjacency of point clouds in the geometry space. Although there is no explicit mapping function that reveals the relationship between the feature space and the geometry space, we observe that the distance between global features and the geometry distance between point clouds are inherently correlated. Based on this property, we recover the geometry distance between the query and the match and then use it for position estimation.

Statistical correlation between the feature and geometry distances. To reveal the relationship between the feature space and the geometry space, we train our method on sequence "00" of the KITTI dataset [9]. We then plot the feature distances and the geometry distances of all point cloud pairs in different sequences of the dataset. As shown in Fig. 3, for all the sequences, the feature distance approximately monotonically increases with the geometry distance and saturates when the point clouds are far away from each other. This phenomenon is intuitive, since two point clouds are more similar if they are geometrically closer, and consequently their feature distance is smaller. It can be seen that the mean curve and the standard deviation differ across sequences, since the sequences are collected in diverse scenes.
Despite this, the mean curves have similar shapes and can be depicted using a function based on the generalized Gaussian kernel [23], which is

$\|f(m_i) - f(m_j)\|_2 = \alpha \left( 1 - \exp\left( -\frac{\|t_i - t_j\|_2^{\gamma}}{\beta} \right) \right), \quad (5)$

where $\alpha$ is the maximum feature distance, and $\gamma$ and $\beta$ control the curve shape.

Mapping Model. The mapping function above inspires us to recover the geometry distance from the feature distance, and further to estimate positions of the query point clouds. However, this mapping relationship may differ slightly in local areas due to appearance changes of the point clouds. For more accurate geometry distance recovery, we build a mapping function for each point cloud $m_i$ in the database $\mathcal{M}$. Specifically, we compute its feature and geometry distances to all the other point clouds in $\mathcal{M}$. We then fit the curve with Eq. 5 and compute the parameters $\alpha_i$, $\beta_i$, and $\gamma_i$. After this, we can recover the geometry distance of a query point cloud $m_q$ to $m_i$ by inverting Eq. 5, that is,

$\|t_q - t_i\|_2 = \left( -\beta_i \log\left( 1 - \frac{\|f(m_q) - f(m_i)\|_2}{\alpha_i} \right) \right)^{1/\gamma_i}. \quad (6)$

Position Recovery. Since the positions of the point clouds in the database are given, we can compute the position of the query point cloud $m_q$ if we know its geometry distances to at least three reference point clouds. To this end, we first follow the place recognition procedure and find the most similar point cloud $m_r$ to $m_q$. We choose as reference point clouds $m_r$ and the point clouds that are less than $\varepsilon$ meters away from $m_r$. Denoting $\Omega = \{k \mid \|t_r - t_k\|_2 < \varepsilon\}$ and $d_k$ as the recovered geometry distance between $m_q$ and $m_k$, the position of $m_q$ can be computed by solving the following minimization problem,

$t_q = \arg\min_t \sum_{k \in \Omega} \left( \|t - t_k\|_2 - d_k \right)^2. \quad (7)$

Discussion. In fact, the monotonicity of the mapping from feature distance to geometry distance also holds for other methods. Fig.
4 plots the relationship between the feature and geometry spaces of two state-of-the-art methods, MinkLoc3D-V2 [15] and OverlapTransformer [21], on sequences "00" and "06" of KITTI. Although the mappings of the methods have quite different shapes, they can all be approximately depicted by Eq. 5 with specific parameters, and thus positions can also be estimated based on the mapping model. In the experiments, we will compare their position estimation accuracy with our method.

Experiments

We compare our method with the state-of-the-art place recognition methods including PointNetVLAD [1], LPD-Net [19], SOE-Net [33], MinkLoc3D-V2 [16], and OverlapTransformer [21]. All these methods are deep learning methods, and their open-source code is used for evaluation. For our method, we set the triplet margin $m = 0.3$ and the number of NetVLAD clusters to 64. In the training stage, we choose 1 positive point cloud and 10 negative point clouds when calculating the loss function. We test the methods in terms of place retrieval with the metrics of recall at Top-1 and recall at Top-1%. For a more comprehensive evaluation, we also compare the loop closure detection performance with the metric of the precision-recall (PR) curve. In addition, we test the position estimation accuracy using the absolute translation error (ATE).

Datasets

We conduct experiments on three large-scale public datasets, i.e., the KITTI dataset [9], the ALITA dataset [25], and the benchmark dataset [1].

KITTI dataset contains a large amount of point cloud data collected by a Velodyne 64-beam LiDAR under low viewpoint variation. We select the sequences "00", "02", "05", and "06" of the Odometry subset for evaluation, since these sequences contain large revisited areas. We split the point clouds of each sequence into database frames and query frames for place retrieval. The partition of each sequence is summarized in Table 1.

ALITA dataset is a dataset for long-term place recognition in large-scale environments.
It contains point cloud data of campus and city scenes under different illuminations and viewpoints. In this work, we use its subset released in the General Place Recognition Competition 1. We evaluate the generalization ability of the methods on its validation set and its test set. Note that the evaluation result on the test set is automatically calculated by the server once we upload the global features to the website. The point clouds of the dataset have been cropped to a [−20 m, 20 m] cubic window and downsampled to 4096 points. Similar to the processing of the KITTI dataset, we generate BEV images from the downsampled point clouds for our method. We normalize the points to fit the input of PointNetVLAD, LPD-Net, and MinkLoc3D-V2. We do not evaluate OverlapTransformer on this dataset, as it cannot adapt to such sparse point clouds.

Benchmark dataset is broadly used by recent place recognition methods based on unordered points. It consists of four scenarios: an outdoor dataset, Oxford RobotCar, and three in-house datasets of a university sector (U.S.), a residential area (R.A.), and a business district (B.D.). It provides normalized point clouds of 4096 points, which can be directly used by PointNetVLAD, LPD-Net, and MinkLoc3D-V2. For our method, we multiply the point values by 20 and then project the point cloud into the BEV image for feature extraction. Note that the recovered point cloud is not of the actual scale, since we do not know the exact coordinate range of the point clouds. In spite of this, our method can adapt to such scale variation thanks to the convolution network design.

For the KITTI dataset and the ALITA dataset, we regard a retrieval as a true positive if the geometry distance between the query and the match is less than $\varepsilon = 5$ meters. For the benchmark dataset, we set $\varepsilon = 25$ meters, following the configurations in [1,19,33,16].
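The retrieval metrics used in the following experiments, recall at Top-1 and recall at Top-1%, can be computed as in this sketch. The array-based interface is hypothetical (it is not the authors' evaluation code); a match counts as correct if it lies within ε meters of the query position, as described above:

```python
import numpy as np

def recall_at_top(db_feats, db_pos, q_feats, q_pos, eps=5.0):
    """Recall@Top-1 and Recall@Top-1% for place retrieval.

    db_feats/q_feats: global feature arrays; db_pos/q_pos: 2D positions.
    eps = 5 m for KITTI/ALITA, 25 m for the benchmark dataset (per the text).
    The function signature is an illustrative assumption.
    """
    one_percent = max(1, int(round(0.01 * len(db_feats))))
    hit1 = hit1p = 0
    for f, p in zip(q_feats, q_pos):
        d_feat = np.linalg.norm(db_feats - f, axis=1)  # feature distances
        order = np.argsort(d_feat)                     # ranked retrieval
        d_geo = np.linalg.norm(db_pos[order] - p, axis=1)
        hit1 += d_geo[0] < eps                         # Top-1 correct?
        hit1p += (d_geo[:one_percent] < eps).any()     # any hit in top 1%?
    n = len(q_feats)
    return hit1 / n, hit1p / n
```

With an ideal embedding (feature distance proportional to geometry distance), both recalls reach 1.0 for queries near database frames.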
Place Recognition

We train the methods with the point clouds in the database of sequence "00" of the KITTI dataset. In the training stage, we perform data augmentation by applying random rotations to the point clouds.

Ablation on the BEV representation and the group convolution network. To validate the claim that a NetVLAD network based on BEV images can achieve performance comparable to the SOTA methods, we implemented two NetVLAD networks with ResNet34 and ResNet18 as backbones, respectively. To explore the influence of rotation-invariance designs on the methods based on unordered points, we additionally implemented a network, VN-PointNetVLAD, by replacing the backbone of PointNetVLAD with Vector Neuron [5]. From the results shown in Table 2 we find that: 1) without any delicate design, both NetVLAD networks achieve recalls comparable to OverlapTransformer and outperform the methods based on unordered points, which indicates the effectiveness of using BEV images; 2) our BEVPlace achieves better recalls, validating the significance of the rotation invariance design; 3) among the rotation-invariant methods based on different representations, i.e., VN-PointNetVLAD based on unordered points, OverlapTransformer based on range images, and BEVPlace based on BEV images, our BEVPlace achieves higher recall and better generalization ability than the non-BEV networks, further indicating the importance of the BEV representation in place recognition.

Robustness to view changes. In the testing stage, we randomly rotate the point clouds of KITTI to simulate view changes. As shown in Table 3, our method shows much higher recall rates than ResNet18+NetVLAD and ResNet34+NetVLAD. It is also noted that VN-PointNetVLAD performs better than PointNetVLAD. These results validate the significance of the rotation invariance design under view variations. However, VN-PointNetVLAD cannot generalize well to sequences "00", "02", "05", and "06".
On the other hand, our BEVPlace shows much higher recall and better generalization ability than all the other methods.

Loop closure detection. Loop closure detection is an important application of place recognition. For a query point cloud, we accept its Top-1 match as positive if the feature distance is less than a threshold. By setting different thresholds, we compute the precision-recall curves and plot them in Fig. 5. It can be seen that our method outperforms the compared methods. It is worth noting that although our method is trained on only a part of the point clouds of sequence "00", it generalizes much better to the other sequences than the other methods. We believe our method can be deployed in LiDAR SLAM systems [12] and help build globally consistent maps.

Generalization performance on ALITA. We test the place recognition performance of the methods on the ALITA dataset using the model trained on the KITTI dataset. Table 4 shows the recall rates on the validation set and the test set. It can be seen that our method generalizes well to ALITA. On the other hand, the recall rates of the compared methods degrade significantly.

Performance on the benchmark dataset. Following previous works, we train our method using only the Oxford RobotCar training dataset and test it on the test set. The details of the dataset partition can be found in [1]. For a more comprehensive comparison, we also compare our method with the state-of-the-art transformer-based methods, including NDT-Transformer [36], PPT-Net [11], SVT-Net [7], and TransLoc3D [34]. For all the compared methods, we directly use the results from their papers. Table 5 shows that the ResNet34+NetVLAD network achieves recall comparable to the state-of-the-art method MinkLoc3D-V2 and shows better generalization ability. Our BEVPlace outperforms the other methods, including the transformer-based ones, by large margins.
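The position estimation evaluated next builds on the mapping model of Eqs. 5-7. The following numpy sketch implements the forward model, its inversion, and the trilateration step; the linearized least-squares solver for Eq. 7 is a standard trick of our choosing, as the paper does not specify a solver:

```python
import numpy as np

# Eq. 5: feature distance as a function of geometry distance
# (alpha: max feature distance; beta, gamma: shape parameters)
def feat_dist(d_geo, alpha, beta, gamma):
    return alpha * (1.0 - np.exp(-(d_geo ** gamma) / beta))

# Eq. 6: invert the fitted model to recover geometry distance
def geo_dist(d_feat, alpha, beta, gamma):
    d_feat = np.clip(d_feat, 0.0, 0.999 * alpha)  # guard the logarithm
    return (-beta * np.log(1.0 - d_feat / alpha)) ** (1.0 / gamma)

# Eq. 7: estimate the query position from recovered distances d to
# reference positions ref_pos, by linearizing the circle equations
# ||t - t_k||^2 = d_k^2 against the first reference
def trilaterate(ref_pos, d):
    t0, d0 = ref_pos[0], d[0]
    A = 2.0 * (ref_pos[1:] - t0)
    b = (np.sum(ref_pos[1:] ** 2, axis=1) - np.sum(t0 ** 2)
         - d[1:] ** 2 + d0 ** 2)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With exact distances and at least three non-collinear references, the linear system recovers the position exactly; with noisy recovered distances it returns the least-squares estimate.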
Position Estimation

We first recover the geometry distances between the query and the matches and then estimate the global positions of the query point clouds. In the following, we evaluate the performance of these two stages.

Accuracy of the recovered distances. We compute the errors of the recovered distances on sequence "00" of the KITTI dataset. Fig. 6 shows the fitted distribution of the distance errors of the methods. It can be seen that our method recovers the geometry distance more accurately. This leads to more accurate position estimation results, since the estimation is based on the recovered distances.

Position estimation. Fig. 7 (a), (b), (c), and (d) show the cumulative distribution of the translation error on different sequences of the KITTI dataset. Our method and OverlapTransformer, both of which are based on projection images, achieve more accurate position estimation than the compared methods. To validate the performance under view changes, we randomly rotate the point clouds in the testing stage. Fig. 7 (e), (f), (g), and (h) show that our method and OverlapTransformer perform well, since they are designed to be rotation invariant. On the other hand, the other methods show poor robustness, and their performance degrades significantly.

Conclusions

In this work, we explore the potential of LiDAR-based place recognition using BEV images. We designed a rotation-invariant network called BEVPlace based on group convolution. Thanks to the use of BEV images and the rotation invariance design, our method achieves high recall rates, strong generalization ability, and robustness to viewpoint changes, as shown in the experiments. In addition, we observe that the geometry and feature distances are correlated, and we model this correlation for position estimation. The model can be adapted to other place recognition methods, but our BEVPlace gives more accurate estimation results.
In our future work, we will try to encode the rotation information into global features and estimate the 6-DoF poses of point clouds.

Figure 1. (a) Two range images from the KITTI dataset. The images are projected from two point clouds that are about 5 meters away from each other. A small translation of the point clouds introduces structural distortions, such as scale variations and occlusions of objects in the scene. (b) The corresponding BEV images. The scale and position distribution of objects on the road remain almost unchanged. (c) Performance on various datasets. A simple NetVLAD network based on the BEV representation achieves recall at Top-1 comparable to the state-of-the-art methods. Our BEVPlace further lifts the baseline to a higher level.

Figure 2. The two modules of our method. In the BEVPlace network, we project point clouds into BEV images and extract rotation-invariant global features. In the position estimator module, we recover geometry distances from the feature space and estimate the positions of query point clouds.

Figure 3. Geometry distance and feature distance relationship of the point clouds in different sequences of the KITTI dataset.

Figure 4. Geometry distance and feature distance relationship of the point clouds in sequences "00" and "06" of KITTI for two methods.

Figure 5. Precision-recall curves for the sequences of the KITTI dataset.

Figure 6. Distance estimation error distribution.

Figure 7. Cumulative translation error distribution on the KITTI dataset with and without rotations.

For our method, we crop each point cloud with a [−20 m, 20 m] cubic window and downsample it to 4096 points. We then generate BEV images from the downsampled point clouds. For PointNetVLAD, LPD-Net, and MinkLoc3D-V2, we normalize the point values to fit their input. For OverlapTransformer, we use full point clouds, since its performance is sensitive to the point density.

Table 1.
Dataset partition of the KITTI dataset.

  Sequence   00         02         05         06
  Database   0-3000     0-3400     0-1000     0-600
  Query      3200-4650  3600-4661  1200-2751  800-1100

Table 2. Recall at Top-1 on the KITTI dataset. * denotes that the method is designed to be rotation-invariant.

  Sequence                   00    02    05    06     Mean
  PointNetVLAD [1]           91.6  62.3  76.9  77.8   77.2
  LPD-Net [19]               95.7  72.3  83.6  82.2   83.5
  SOE-Net [33]               95.0  65.5  84.8  69.6   78.7
  MinkLoc3D-V2 [15]          95.9  72.3  86.4  80.4   83.4
  *VN-PointNetVLAD [1, 5]    94.3  66.5  87.5  84.3   82.9
  *OverlapTransformer [21]   96.7  80.1  91.9  95.6   91.1
  ResNet18+NetVLAD           95.9  83.2  90.3  98.5   92.0
  ResNet34+NetVLAD           96.3  84.1  92.2  98.5   92.8
  *BEVPlace (ours)           99.7  98.1  99.3  100.0  99.3

Table 3. Recall at Top-1 on the rotated KITTI dataset. * denotes that the method is designed to be rotation-invariant.

  Sequence                   00    02    05    06     Mean
  PointNetVLAD [1]           86.1  41.0  69.7  51.5   62.1
  LPD-Net [19]               89.6  61.9  72.2  48.9   68.2
  SOE-Net [33]               93.1  63.5  82.8  65.5   76.2
  MinkLoc3D-V2 [15]          89.4  48.7  83.0  48.1   67.3
  *VN-PointNetVLAD           93.2  62.3  85.2  82.9   80.9
  *OverlapTransformer [21]   96.7  80.1  91.9  95.6   91.1
  ResNet18+NetVLAD [27]      92.3  64.1  89.8  89.9   84.0
  ResNet34+NetVLAD [2]       93.1  64.2  90.7  90.4   84.6
  *BEVPlace (ours)           99.6  93.5  98.9  100.0  98.0

Table 4. Recall rates on the ALITA dataset.

                       Val set        Test set
                       @1     @1%     @1     @1%
  PointNetVLAD [1]     42.3   55.4    39.8   -
  LPD-Net [19]         51.2   72.7    49.6   -
  SOE-Net [33]         66.6   92.8    59.5   -
  MinkLoc3D-V2 [15]    55.6   82.8    55.3   -
  BEVPlace (ours)      96.7   99.2    91.7   -

Table 5. Recall rates on the benchmark dataset (AR@1 / AR@1%).

                           Oxford        U.S.          R.A.          B.D.          Mean
  PointNetVLAD [1]         62.8 / 80.3   63.2 / 72.6   56.1 / 60.3   57.2 / 65.3   59.8 / 69.6
  LPD-Net [19]             86.3 / 94.9   87.0 / 96.0   83.1 / 90.5   82.5 / 89.1   84.7 / 92.6
  NDT-Transformer [36]     93.8 / 97.7   -             -             -             -
  PPT-Net [11]             93.5 / 98.1   90.1 / 97.5   84.1 / 93.3   84.6 / 90.0   88.1 / 94.7
  SVT-Net [7]              93.7 / 97.8   90.1 / 96.5   84.3 / 92.7   85.5 / 90.7   88.4 / 94.4
  TransLoc3D [34]          95.0 / 98.5   -             -             -             -
  MinkLoc3Dv2 [16]         96.3 / 98.9   90.9 / 96.7   86.5 / 93.8   86.3 / 91.2   90.0 / 95.1
  ResNet34+NetVLAD [16]    95.8 / 99.2   96.2 / 99.0   90.1 / 99.4   94.8 / 100.0  94.2 / 99.4
  BEVPlace (ours)          96.5 / 99.0   96.9 / 99.7   92.3 / 98.7   95.3 / 99.5   96.9 / 99.6

(Panels of Fig. 7: (a) KITTI 00, (b) KITTI 02, (c) KITTI 05, (d) KITTI 06, (e) KITTI 00 rot, (f) KITTI 02 rot, (g) KITTI 05 rot, (h) KITTI 06 rot.)

Footnote: https://www.aicrowd.com/challenges/icra2022-general-placerecognition-city-scale-ugv-localization

References

[1] Mikaela Angelina Uy and Gim Hee Lee. PointNetVLAD: Deep point cloud based retrieval for large-scale place recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4470-4479, 2018.
[2] Relja Arandjelovic, Petr Gronat, Akihiko Torii, Tomas Pajdla, and Josef Sivic. NetVLAD: CNN architecture for weakly supervised place recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pages 5297-5307, 2016.
[3] Cesar Cadena, Luca Carlone, Henry Carrillo, Yasir Latif, Davide Scaramuzza, José Neira, Ian Reid, and John J. Leonard. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Transactions on Robotics, 32(6):1309-1332, 2016.
[4] Xieyuanli Chen, Thomas Läbe, Andres Milioto, Timo Röhling, Olga Vysotska, Alexandre Haag, Jens Behley, and Cyrill Stachniss. OverlapNet: Loop closing for LiDAR-based SLAM. In Proc. of Robotics: Science and Systems, 2020.
[5] Congyue Deng, Or Litany, Yueqi Duan, Adrien Poulenard, Andrea Tagliasacchi, and Leonidas J. Guibas. Vector neurons: A general framework for SO(3)-equivariant networks. In IEEE International Conference on Computer Vision, pages 12200-12209, 2021.
[6] Juan Du, Rui Wang, and Daniel Cremers. DH3D: Deep hierarchical 3D descriptors for robust large-scale 6DoF relocalization. In European Conference on Computer Vision, 2020.
[7] Z. Fan, Z. Song, H. Liu, Z. Lu, J. He, and X. Du. SVT-Net: A super light-weight network for large scale place recognition using sparse voxel transformers. In AAAI Conference on Artificial Intelligence, 2022.
[8] D. Gálvez-López and J. D. Tardós. Bags of binary words for fast place recognition in image sequences. IEEE Transactions on Robotics, 28(5):1188-1197, 2012.
[9] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3354-3361, 2012.
[10] Le Hui, Mingmei Cheng, Jin Xie, Jian Yang, and Ming-Ming Cheng. Efficient 3D point cloud feature learning for large-scale place recognition. IEEE Transactions on Image Processing, 31:1258-1270, 2022.
[11] Le Hui, Hang Yang, Mingmei Cheng, Jin Xie, and Jian Yang. Pyramid point cloud transformer for large-scale place recognition. In IEEE International Conference on Computer Vision, pages 6078-6087, 2021.
[12] Ji Zhang and Sanjiv Singh. LOAM: LiDAR odometry and mapping in real-time. In Proceedings of Robotics: Science and Systems, 2014.
[13] Giseop Kim and Ayoung Kim. Scan Context: Egocentric spatial descriptor for place recognition within 3D point cloud map. In IEEE International Conference on Intelligent Robots and Systems, pages 4802-4809, 2018.
[14] Giseop Kim, Byungjae Park, and Ayoung Kim. 1-day learning, 1-year localization: Long-term LiDAR localization using scan context image. IEEE Robotics and Automation Letters, 4(2):1948-1955, 2019.
[15] Jacek Komorowski. MinkLoc3D: Point cloud based large-scale place recognition. In IEEE Winter Conference on Applications of Computer Vision, pages 1789-1798, 2021.
[16] Jacek Komorowski. Improving point cloud based place recognition with ranking-based loss and large batch training. In IEEE International Conference on Pattern Recognition, 2022.
[17] Dmitry Laptev, Nikolay Savinov, Joachim M. Buhmann, and Marc Pollefeys. TI-POOLING: Transformation-invariant pooling for feature learning in convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, pages 289-297, 2016.
[18] Yuan Liu, Zehong Shen, Zhixuan Lin, Sida Peng, Hujun Bao, and Xiaowei Zhou. GIFT: Learning transformation-invariant dense visual descriptors via group CNNs. In Conference and Workshop on Neural Information Processing Systems, 2019.
[19] Zhe Liu, Shunbo Zhou, Chuanzhe Suo, Peng Yin, Wen Chen, Hesheng Wang, Haoang Li, and Yun-Hui Liu. LPD-Net: 3D point cloud learning for large-scale place recognition and environment analysis. In IEEE International Conference on Computer Vision, pages 2831-2840, 2019.
[20] Lun Luo, Si-Yuan Cao, Bin Han, Hui-Liang Shen, and Junwei Li. BVMatch: LiDAR-based place recognition using bird's-eye view images. IEEE Robotics and Automation Letters, 6(3):6076-6083, 2021.
[21] Junyi Ma, Jun Zhang, Jintao Xu, Rui Ai, Weihao Gu, and Xieyuanli Chen. OverlapTransformer: An efficient and yaw-angle-invariant transformer network for LiDAR-based place recognition. IEEE Robotics and Automation Letters, 7(3):6958-6965, 2022.
[22] Raul Mur-Artal, J. M. M. Montiel, and Juan D. Tardos. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Transactions on Robotics, 31(5):1147-1163, 2015.
[23] Saralees Nadarajah. A generalized normal distribution. Journal of Applied Statistics, 32(7):685-694, 2005.
[24] Peng Yin, Shiqi Zhao, Ivan Cisneros, Abulikemu Abuduweili, Guoquan Huang, Michael Milford, Changliu Liu, Howie Choset, and Sebastian Scherer. General place recognition survey: Towards the real-world autonomy age. arXiv preprint arXiv:2209.04497, 2022.
[25] Peng Yin, Shiqi Zhao, Ruohai Ge, Ivan Cisneros, Ruijie Fu, Ji Zhang, Howie Choset, and Sebastian Scherer. ALITA: A large-scale incremental dataset for long-term autonomy. arXiv preprint arXiv:2105.11605, 2022.
[26] Charles R. Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 652-660, 2017.
[27] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
[28] Bastian Steder, Michael Ruhnke, Slawomir Grzonka, and Wolfram Burgard. Place recognition in 3D scans using a combination of bag of words and point feature based relative pose estimation. In IEEE International Conference on Intelligent Robots and Systems, 2011.
[29] Qi Sun, Hongyan Liu, Jun He, Zhaoxin Fan, and Xiaoyong Du. DAGC: Employing dual attention and graph convolution for point cloud based place recognition. In Proceedings of the 2020 International Conference on Multimedia Retrieval, pages 224-232, 2020.
[30] Taco Cohen and Max Welling. Group equivariant convolutional networks. In International Conference on Machine Learning, pages 2990-2999, 2016.
[31] Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Conference and Workshop on Neural Information Processing Systems, pages 5998-6008, 2017.
[32] Maurice Weiler and Gabriele Cesa. General E(2)-equivariant steerable CNNs. In Conference and Workshop on Neural Information Processing Systems, 2019.
[33] Yan Xia, Yusheng Xu, Shuang Li, Rui Wang, Juan Du, Daniel Cremers, and Uwe Stilla. SOE-Net: A self-attention and orientation encoding network for point cloud based place recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pages 11343-11352, 2021.
[34] T. X. Xu, Y. C. Guo, Y. K. Lai, and S. H. Zhang. TransLoc3D: Point cloud based large-scale place recognition using adaptive receptive fields. arXiv preprint arXiv:2105.11605, 2021.
[35] Wenxiao Zhang and Chunxia Xiao. PCAN: 3D attention map learning using contextual information for point cloud based retrieval. In IEEE Conference on Computer Vision and Pattern Recognition, 2019.
[36] Zhicheng Zhou, Cheng Zhao, Daniel Adolfsson, Songzhi Su, Yang Gao, Tom Duckett, and Li Sun. NDT-Transformer: Large-scale 3D point cloud localisation using the normal distribution transform representation. In IEEE International Conference on Robotics and Automation, pages 5654-5660, 2021.
Paper metadata (BEVPlace):

Title: BEVPlace: Learning LiDAR-based Place Recognition using Bird's Eye View Images
Authors: Lun Luo, Shuhang Zheng, Yixuan Li, Zhiyong Fan, Beinan Yu, Siyuan Cao, Hui-Liang Shen
arXiv: 2302.14325 (doi: 10.48550/arxiv.2302.14325)
Source code: https://github.com/zjuluolun/BEVPlace

Abstract: Place recognition is a key module for long-term SLAM systems. Current LiDAR-based place recognition methods are usually based on representations of point clouds such as unordered points or range images. These methods achieve high recall rates of retrieval, but their performance may degrade in the case of view variation or scene changes. In this work, we explore the potential of a different representation in place recognition, i.e., bird's eye view (BEV) images. We validate that, in scenes with slight viewpoint changes, a simple NetVLAD network trained on BEV images achieves performance comparable to the state-of-the-art place recognition methods. For robustness to view variations, we propose a rotation-invariant network called BEVPlace. We use group convolution to extract rotation-equivariant local features from the images and NetVLAD for global feature aggregation. In addition, we observe that the distance between BEV features is correlated with the geometric distance between point clouds. Based on this observation, we develop a method to estimate the position of the query cloud, extending the usage of place recognition. Experiments conducted on large-scale public datasets show that our method 1) achieves state-of-the-art performance in terms of recall rates, 2) is robust to view changes, 3) shows strong generalization ability, and 4) can estimate the positions of query point clouds.
Example-Based Sampling with Diffusion Models

Bastien Doignies, David Coeurjolly, Nicolas Bonneel, Loïs Paulin, Julie Digne, Jean-Claude Iehl, Victor Ostromoukhov
Univ. Lyon and CNRS, France

CCS Concepts: • Computing methodologies → Rendering; Neural networks; • Mathematics of computing → Quadrature.

Additional Key Words and Phrases: Path tracing, quasi-Monte Carlo integration, low discrepancy sequences, generator matrices, integer linear programming

Abstract. Much effort has been put into developing samplers with specific properties, such as producing blue noise, low-discrepancy, lattice or Poisson disk samples. These samplers can be slow if they rely on optimization processes, may draw on a wide range of numerical methods, and are not always differentiable. The success of recent diffusion models for image generation suggests that these models could be appropriate for learning how to generate point sets from examples. However, their convolutional nature makes these methods impractical for dealing with scattered data such as point sets. We propose a generic way to produce 2-d point sets imitating existing samplers from observed point sets using a diffusion model. We address the problem of convolutional layers by leveraging neighborhood information from an optimal transport matching to a uniform grid, which allows us to benefit from fast convolutions on grids and to support the example-based learning of non-uniform sampling patterns. We demonstrate how the differentiability of our approach can be used to optimize point sets to enforce properties.

INTRODUCTION

A wide range of samplers have been designed in the past, for quasi-Monte Carlo integration, rendering, image stippling, positioning objects or, generally, to uniformly or non-uniformly cover some space.
The generated samples can have various properties, such as being low discrepancy or stratified, having a blue noise spectrum, producing low integration error, having high packing density, satisfying a Poisson disk criterion, or maximizing inter-point distances [Pharr et al. 2016]. Generating these samples can come at significant cost, especially when points are obtained from complex optimization schemes [Ahmed et al. 2022; De Goes et al. 2012; Fattal 2011; Öztireli and Gross 2012; Paulin et al. 2020; Roveri et al. 2017]. In addition, satisfying multiple properties at the same time is difficult, and is the focus of entire methods, e.g., generating low discrepancy sequences with blue noise properties. Differentiability can also be desirable in contexts involving further optimization, but may be problematic for specific samplers, for instance when considered within a differentiable renderer [Jakob et al. 2022b]. The large set of available samplers makes sample generation anything but generic, with methods involving smooth non-convex optimization, integer linear programming, number theory, brute-force approaches with clever data structures, etc.

This work is shared under a Creative Commons Attribution-Share Alike 3.0 License.

Recently, diffusion models have become extremely popular in the context of image generation [Ho et al. 2020; Rombach et al. 2022; Sohl-Dickstein et al. 2015]. By learning how to denoise an image that initially only contains random values, these models have been able to produce impressive results, i.e., to learn the very fine structure of the manifold of realistic images. It hence seems judicious to take advantage of these models to learn the very fine structure of sample points produced by existing samplers. However, these models heavily rely on convolutions, which makes it impractical to efficiently handle point sets. In this paper, we propose to learn the distribution of 2-d samples produced by a wide range of samplers using a diffusion model.
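To make the "low discrepancy" property mentioned above concrete, here is a hedged sketch of ours (not from the paper): it builds a 2-d Hammersley point set from the base-2 radical inverse, and evaluates an approximate star discrepancy by testing boxes anchored at the sample coordinates (a lower bound on the true supremum, used only for illustration):

```python
import numpy as np

def radical_inverse_base2(i: int) -> float:
    # Reflect the bits of i around the binary point (van der Corput sequence).
    result, f = 0.0, 0.5
    while i > 0:
        result += f * (i & 1)
        i >>= 1
        f *= 0.5
    return result

def hammersley_2d(n: int) -> np.ndarray:
    # Classic 2-d low-discrepancy point set: (i/n, radical inverse of i).
    return np.array([[i / n, radical_inverse_base2(i)] for i in range(n)])

def star_discrepancy_approx(pts: np.ndarray) -> float:
    # Approximate star discrepancy: for boxes [0,x) x [0,y) whose corners lie
    # on the grid of sample coordinates (plus 1.0), compare the fraction of
    # points inside with the box area. Only open boxes are tested, so this is
    # a lower bound on the true supremum.
    n = len(pts)
    xs = np.unique(np.append(pts[:, 0], 1.0))
    ys = np.unique(np.append(pts[:, 1], 1.0))
    worst = 0.0
    for x in xs:
        for y in ys:
            inside = np.sum((pts[:, 0] < x) & (pts[:, 1] < y)) / n
            worst = max(worst, abs(inside - x * y))
    return worst
```

On 64 points, the Hammersley set typically exhibits a much smaller approximate discrepancy than a uniform random set of the same size, which is the sense in which such constructions guarantee lower quasi-Monte Carlo integration error.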
When point sets are not stratified, we resort to an optimal transport matching to a uniform grid that mostly preserves neighborhood information, so as to benefit from efficient convolutional layers. We demonstrate that a single architecture is able to learn sample points produced by different methods, and even allows reproducing non-uniform point sets. The differentiability of our network allows us to add properties to a given sampler, e.g., adding low discrepancy properties to a given optimal transport-based sampler. While our network is currently limited to generating 2-d samples, it produces samples beyond the range of sample counts it has been trained on. We provide trained networks alongside the paper and believe this exciting step will open the door to further conditioning. Code is provided in supplementary material.

RELATED WORKS

Existing samplers have a wide range of properties. We enumerate important classes of samplers below.

Blue Noise. Blue noise samples have a characteristic "ring-like" Fourier power spectrum, with low frequencies converging to zero. They are interesting for Monte Carlo integration purposes [Pilleboue et al. 2015; Subr and Kautz 2013], digital halftoning [Ulichney 1987] or stippling [Deussen et al. 2000], and well describe arrangements of natural phenomena that have been optimized through evolution, such as the retinal distribution of cones [Yellott 1982]. They are often costly to obtain through optimization, for instance using kernel approaches [Ahmed et al. 2022; Fattal 2011], pair-correlation functions [Öztireli and Gross 2012] or optimal transport [De Goes et al. 2012; Paulin et al. 2020; Qin et al. 2017], though fast approximations exist [Nader and Guennebaud 2018]. Tile-based approaches precompute tiles for fast synthesis, but are memory demanding [Kopf et al. 2006; Ostromoukhov et al. 2004; Wachtel et al. 2014].

Poisson Disk.
Poisson disk samples have the property that no point falls within a distance smaller than a threshold from another point [Bridson 2007; Dunbar and Humphreys 2006; Gamito and Maddock 2009; Wei 2008; Yuksel 2015]. Their spectra resemble those of blue noise distributions, except that they do not decrease towards zero as the frequency decreases [Pilleboue et al. 2015]. They naturally occur in other natural processes, such as the placement of trees in a forest. In low dimensions, they are relatively fast to compute.

Low Discrepancy Sequences. Discrepancy is a uniformity measure directly related to Monte Carlo integration error. Low discrepancy sequences (LDS) thus have several advantages. First, they are sequences, so samples can be progressively added. Second, they are low discrepancy, hence guaranteeing good numerical integration error [Lemieux 2009; Niederreiter 1992]. Samplers achieving low discrepancy usually rely on arithmetic and number theory constructions leading to extremely fast generators (e.g., in base 2, the i-th sample using [Sobol' 1967] is given by a matrix/vector multiplication in GF(2) on the bitwise representation of i). Alternatively, lattices produce low discrepancy sequences. A rank-1 lattice repeatedly translates an initial point by a given amount in a given direction in a toric domain [Keller 2004]. Rank-n lattices similarly use multiple independent vectors. Good lattices can be similarly hard to optimize for [L'Ecuyer and Munger 2016].

Designing Complex Point Processes. Aside from global point set properties such as blue noise, Poisson disk or low discrepancy, the problem of designing a point process matching some exemplars or satisfying additional constraints has been addressed in several ways. One can design samplers mixing global properties such as low discrepancy and blue noise [Ahmed et al. 2016; Ahmed and Wonka 2021; Perrier et al. 2018], or use a profile-based approach to generate LDS samplers with adjustable or scriptable properties (e.g.
blue-noise properties, stratification on some projections, ...) [L'Ecuyer and Munger 2016; Paulin et al. 2022]. Mixing point process properties can also be achieved by interpolating their high-order statistics, such as their pair-correlation functions [Öztireli and Gross 2012]. Focusing on spectral properties, [Leimkühler et al. 2019] have proposed a neural network approach to target specific profiles defined as combinations of radial power spectra.

Point sets through deep learning. Perhaps the closest to our work is that of [Leimkühler et al. 2019]. They learn arbitrary-dimensional point sets by matching power spectra. There are a number of important differences with respect to our work. First, they require a power spectrum as input, while we require examples from a given sampler. This allows us to capture all characteristics of samplers and not just spectra. Second, our network is able to produce point sets of significantly different sizes without re-training. Third, we propose a way to benefit from efficient convolutions on grids. While this restricts us to low-dimensional settings (we demonstrate our approach in two dimensions), it allows us to use thousands of convolution layers at different scales and to benefit from recent advances in diffusion models. These differences allow us to finely capture the structure of point sets (see Sec. 4.1). In the context of Monte Carlo integration, deep learning has been used to learn a control variate [Müller et al. 2020], though this does not directly address the location of point samples. Deep learning has also been used for importance sampling [Müller et al. 2019].

Probabilistic Denoising Diffusion. Our method is based on probabilistic denoising diffusion, a concept introduced by [Sohl-Dickstein et al. 2015] in the context of unsupervised learning. The core idea of denoising diffusion is to gradually remove any structure in the image by progressively adding noise, and to train a neural network to invert the degradation process. This allows capturing the data distribution and sampling from it.
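The forward noising process just described can be sketched in a few lines. This is a generic DDPM-style sketch under a linear variance schedule; the schedule choice and function names are our assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def make_schedule(T: int, beta_min: float = 1e-4, beta_max: float = 0.02):
    # Linear variance schedule; alpha_bar[t] = prod_{s <= t} (1 - beta_s).
    betas = np.linspace(beta_min, beta_max, T)
    alpha_bars = np.cumprod(1.0 - betas)
    return betas, alpha_bars

def q_sample(x0: np.ndarray, t: int, alpha_bars: np.ndarray, rng):
    # Noisy sample x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps.
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps  # a denoising network is trained to predict eps from (xt, t)
```

At the final timestep, alpha_bar is close to zero, so the data is essentially pure Gaussian noise; generation then reverses the chain step by step, removing the predicted noise.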
This idea has been extensively used for image synthesis [Ho et al. 2020] with impressive results, either by working directly in pixel space or in a latent space [Rombach et al. 2022]. In this paper, we propose to exploit the capacity of these networks to learn structure from a set of examples, in order to learn point distributions.

DENOISING DIFFUSION MODEL

3.1 Architecture

The denoising process involves a sequence of denoising operations which operate at given timesteps. Each denoising is achieved by a forward pass in a single denoising network, which takes as input both the noisy image and the embedded timestep t. Our network architecture is very similar to that of [Ho et al. 2020]. It corresponds to a U-Net [Ronneberger et al. 2015], where each level is composed of two convolutional residual blocks (ResNet) and the feature maps are downsampled by a factor 2 between levels. While the original architecture only included attention blocks between the two convolutional blocks of the 16 × 16 level, we add attention to all levels, which we found to work better in practice. Unless specified otherwise, we used 1000 diffusion time steps. The overall architecture design is detailed in supplementary material.

The network learns a time-dependent noise model ε_θ(x̃, t), given a noise ε added to the input data, x̃ = x + ε, at each time step t. In our setting, x_0 is the offset between strata centers and the input point set, as obtained in Sec. 3.2. The network thus predicts noise that can then be progressively removed from a white noise point set to denoise it according to the learned data distribution.

3.2 Convolutions on grids

While computing the required convolutions used in the diffusion model is possible on unstructured point sets [Groh et al. 2019; Hua et al.
2018; Simonovsky and Komodakis 2017], this comes at a prohibitive cost in our context, due to the large number of convolutions involved. Fortunately, our point sets are not arbitrary but may uniformly cover the unit square. In certain cases, they are stratified, i.e., each stratum of size 1/√N × 1/√N contains a single sample. This is notably the case for the large class of (0, m, 2)-net samplers [Niederreiter 1992]. In that case, we use a pixel grid of √N × √N pixels, and store in each pixel the 2-d offset between the stratum center and its corresponding sample location. When this is not the case, we compute a linear assignment using optimal transport between the strata centers and the set of samples (Fig. 1) [Bonneel et al. 2011], and similarly store in each pixel the 2-d offset between the stratum center and its corresponding sample location. Doing so allows us to work on 2-d grids and to benefit from optimized convolutions. In our setting, the grid acts as an approximate nearest neighbor acceleration data structure, such that, when a convolution is performed, neighboring samples approximately correspond to neighboring pixels, and are thus appropriately weighted. We evaluate this property with non-uniform sampling in Sec. 4.2. This remapping further allows us to remain invariant under re-ordering of samples.

Fig. 1. When input point sets are not stratified, we compute a linear assignment problem between strata centers (red) and sample points (blue) using optimal transport. Each stratum stores its assigned point offset (green arrows). The grid thus serves as an approximate nearest neighbor acceleration data structure and benefits from efficient convolutions.

3.3 Training

The benefit of a convolutional approach is that the same convolution weights can be used for different grid sizes. It thus becomes possible to train the same network with point sets of different sizes, and hope that it generalizes. We explore in Sec.
4.1 how it succeeds in generalizing. However, within a single batch, the sample count should remain the same, due to the way batches are processed. For a given batch of size B, we thus build a loss that sums contributions for different input grid sizes S stored in different batches:

L(θ) = Σ_{s ∈ S} (1/B) Σ_{i=1}^{B} ‖ε_θ(x̃_i, t_i) − ε_i‖²,

for randomly chosen {t_i}. We typically use S = {8×8, 16×16, 32×32}, hence learning from sample sizes {64, 256, 1024}. We obtain one trained network, of the same architecture but with different trained weights, per type of sampler, each able to produce point sets of different sample sizes.

We train networks to reproduce Sobol' samples with Owen's scrambling [Owen 1998; Sobol' 1967] as a representative matrix-based LDS sampler, LatNetBuilder samples as a representative lattice-based LDS sampler, a Poisson disk sampler (classical dart-throwing approach), SOT [Paulin et al. 2020] as a representative blue-noise sampler using optimal transport, GBN [Ahmed et al. 2022] as a representative kernel-based blue-noise sampler, LDBN [Ahmed et al. 2016] as a sampler that combines low-discrepancy properties and a blue-noise spectrum, and Rank-1 [Keller 2004] as a representative lattice-based sampler. We train all our models using 64k point sets, except for the SOT sampler, trained with only 32 (not 32k) point sets to assess robustness to small training datasets. We train for a constant time of 3 hours, and synthesis time is typically 35 minutes for 1000 point sets of 1024 samples each, using 1000 diffusion steps.

VALIDATION AND APPLICATIONS

4.1 Properties of generated samples

We study power spectra, optimal transport energy, discrepancy, integration errors and minimum distance statistics of generated point sets, and verify that they match the properties they were trained for. We also verify how our network generalizes as we increase the number of samples outside the range it was trained for. For these comparisons, we compare to the approach of [Leimkühler et al. 2019].
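The multi-size loss above can be sketched as follows. This is a minimal numpy illustration, with a stand-in predictor in place of the trained U-Net and a simplified noising step; `multi_size_loss`, `eps_theta` and the other names are ours, not from the paper:

```python
import numpy as np

def multi_size_loss(eps_theta, batches, rng):
    """Sum denoising losses over several grid sizes S = {8x8, 16x16, 32x32}.
    `batches` maps a grid size s to a batch of B clean offset grids (B, s, s, 2)."""
    total = 0.0
    for s, x0 in batches.items():
        B = x0.shape[0]
        t = rng.integers(1, 1000, size=B)      # one random timestep per batch item
        eps = rng.standard_normal(x0.shape)    # true noise
        x_noisy = x0 + eps                     # simplified noising step
        pred = eps_theta(x_noisy, t)           # network prediction of the noise
        # (1/B) * sum_i ||pred_i - eps_i||^2, accumulated over grid sizes
        total += np.mean(np.sum((pred - eps) ** 2, axis=(1, 2, 3)))
    return total

rng = np.random.default_rng(0)
batches = {s: rng.uniform(-0.5, 0.5, size=(4, s, s, 2)) for s in (8, 16, 32)}
toy_eps_theta = lambda x, t: np.zeros_like(x)  # stand-in for the trained network
loss = multi_size_loss(toy_eps_theta, batches, rng)
```

Summing per-size batch losses this way keeps each batch at a single resolution, matching the constraint that sample count must be constant within a batch.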
For stationary and isotropic point processes, or samplers targeting such properties, we used their publicly available implementation with a 1-d radial mean power spectrum loss (same learning parameters as those provided by the authors for similar experiments). For non-stationary or anisotropic samplers (e.g., Sobol'+Owen and Rank-1), we had to design our own learning experiment following their examples in 1-d, with losses defined as the L1 norm between 2-d power spectra (cropped to the central frequency part). We observe that such training turns out to be very difficult in 2-d and leads to non-competitive results. In Fig. 2, we only show results for Sobol'+Owen in 2-d and leave the discussion of Rank-1 to the supplementary material. While we trained our network on a small set of sample sizes ({64, 256, 1024}), we also assess the performance of these metrics for other sample sizes ({576, 4096}). For most of these properties, we illustrate them with violin plots (Fig. 3, 4, 5, 6), which show the distribution of values in the form of vertical histograms (similar to a population pyramid). We compute them using 128 point sets.

Power spectra. In Fig. 2, we first show the performance of [Leimkühler et al. 2019] and of our approach in recovering the spectral properties of the training sets (either through 1-d radial mean power spectra for stationary and isotropic point sets, or 2-d spectra for the other ones). As discussed above, capturing anisotropic spectra with [Leimkühler et al. 2019] is very challenging using a 2-d spectrum loss function. Our approach fully captures such characteristics.

Optimal transport energy. Optimal transport (OT) provides a way to characterize the uniformity of a point set by computing the (squared) semi-discrete optimal transport distance between the point set and a uniform distribution [Mérigot 2011]. Fig. 3 illustrates how we match the OT energy.

Discrepancy and integration error. Fig. 4 and 5 show how our network matches integration errors and discrepancy of point sets.
For discrepancy, we use the L2 discrepancy [Heinrich 1996; Niederreiter 1992]. For integration error, we compute the average MSE on the integration of wide anisotropic Gaussians (anisotropy ratio between 1:1 and 1:9, and Gaussian sizes ranging from 0.1 to 0.333 for the largest axis) or Heaviside distributions randomly linearly dividing the unit square. We randomly chose 64k integrands among 1 million, whose integrals have been estimated with maximum precision as reference. These statistics also often match for sample sizes not seen during training ({576, 4096}).

Fig. 2. For various input samplers and their spectral content (Fourier power spectrum and radial mean power spectrum), we compare our approach (last three rows) with that of [Leimkühler et al. 2019] (1-d radial mean power spectrum loss for Poisson disk, GBN, SOT and LDBN; for Sobol'+Owen and Rank-1, we used the 2-d power spectrum cropped to the central part, framed in orange, for the learning to converge).

Fig. 3. We verify that the point sets predicted by our network match the semi-discrete optimal transport distance to a uniform distribution of the original point sets. These plots show the distributions of these statistics for 128 point sets from the training set and produced by our network, for sample counts of 64, 256, 576, 1024 and 4096 (top to bottom). The network has only been trained with point sets of 64, 256 and 1024 samples, but successfully predicts point sets of 576 and 4096 samples (results highlighted in an orange frame). Labels prefixed by DC refer to Deep Point Correlation results [Leimkühler et al. 2019] (on 1-d radial power spectra, unless 2-d is specified), while NN refers to results produced by our Neural Network.

Minimum distance. For distributions such as Poisson disk, the minimum distance between any pair of samples can be important. We assess this statistic in Fig. 6. This property is highly sensitive as it only depends on the location of 2 points within the entire point set.
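The L2 discrepancy admits a closed form that avoids integrating over all axis-aligned boxes. A minimal sketch of one common variant, Warnock's formula for the squared L2 star discrepancy (the paper does not specify which L2 variant it uses; the function name is ours):

```python
import numpy as np

def l2_star_discrepancy_sq(points):
    """Warnock's closed form for the squared L2 star discrepancy of points in [0,1)^d."""
    N, d = points.shape
    term1 = (1.0 / 3.0) ** d
    term2 = (2.0 / N) * np.sum(np.prod((1.0 - points ** 2) / 2.0, axis=1))
    # pairwise products over dimensions of (1 - max(x_ik, x_jk))
    maxima = np.maximum(points[:, None, :], points[None, :, :])
    term3 = (1.0 / N ** 2) * np.sum(np.prod(1.0 - maxima, axis=2))
    return term1 - term2 + term3

# Sanity check in 1-d: for a single point x, integrating the local discrepancy
# squared over [0,1] gives x^3/3 + (1-x)^3/3 = 1/3 - x + x^2.
x = 0.3
val = l2_star_discrepancy_sq(np.array([[x]]))
```

The pairwise `maxima` tensor makes the evaluation O(N²d); for the sample counts used here (up to 4096) this is entirely practical.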
For the minimum distance, the approach of [Leimkühler et al. 2019] performs remarkably well, due to the repulsion of points introduced during learning. In our approach, we tend to produce points with lower minimum-distance values.

4.2 Non-uniform distributions

The goal of our optimal transport matching to a uniform grid is to infer neighborhood information on the point sets from neighborhood information on the grid; that is, neighboring pixels on the grid are expected to correspond to neighboring samples. In Fig. 7, using a non-uniform linear-ramp sliced optimal transport sampling, we show that, even for non-uniform sampling, our network successfully learns from examples and preserves the spectral noise characteristics of the sampler. As a stress test, we also learn to sample a blobby function, shown in Fig. 8: our network mostly preserves important characteristics of the GBN sampler despite inaccuracies in neighborhood information due to the grid embedding. Non-uniform sampling is not possible with the approach of [Leimkühler et al. 2019].

4.3 Applications

Aside from the fast generation of point sets, we also benefit from the differentiability of our network to further optimize point sets within their class. We illustrate how the differentiability of our network can be used to add properties to generated point sets. Here, we wish to add low-discrepancy properties to a sliced optimal transport sampler [Paulin et al. 2020], to benefit from both low discrepancy and low optimal transport energy. We train the network on SOT and then fix the trained weights of the network. We then optimize the initial white-noise samples with an objective function aimed at minimizing the L2 discrepancy measure. As backpropagation requires significant memory overhead, we reduce the number of diffusion steps to 100 (instead of 1000) in the diffusion model. In Fig. 9, we illustrate the result of our optimization in terms of discrepancy and optimal transport energy, and show an example generated point set.
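The grid embedding that supports all of the above can be sketched for the stratified case (the optimal transport assignment for non-stratified sets is omitted here; function names are ours, not from the paper):

```python
import numpy as np

def points_to_offset_grid(points):
    """Map a stratified 2-d point set (one sample per stratum) to a sqrt(N) x sqrt(N)
    grid storing, per pixel, the offset from the stratum center to its sample."""
    n = int(round(np.sqrt(len(points))))
    assert n * n == len(points), "sample count must be a perfect square"
    grid = np.zeros((n, n, 2))
    for p in points:
        i, j = np.minimum((p * n).astype(int), n - 1)  # stratum indices
        center = (np.array([i, j]) + 0.5) / n          # stratum center
        grid[i, j] = p - center                        # stored offset
    return grid

def offset_grid_to_points(grid):
    """Inverse mapping: recover sample positions from the offset grid."""
    n = grid.shape[0]
    ij = np.stack(np.meshgrid(np.arange(n), np.arange(n), indexing="ij"), axis=-1)
    centers = (ij + 0.5) / n
    return (centers + grid).reshape(-1, 2)

# A jittered (stratified) point set round-trips exactly through the grid.
rng = np.random.default_rng(0)
n = 8
ij = np.stack(np.meshgrid(np.arange(n), np.arange(n), indexing="ij"), axis=-1).reshape(-1, 2)
pts = (ij + rng.uniform(0, 1, ij.shape)) / n
grid = points_to_offset_grid(pts)
recovered = offset_grid_to_points(grid)
```

Because offsets are bounded by half a stratum width, the grid values stay small and zero-centered, a convenient range for the denoising network to predict.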
DISCUSSIONS & PERSPECTIVES

We showed that diffusion models provide a powerful tool for learning how to generate point sets directly from examples across a wide range of samplers, and that they generalize well with sample size. Generalization hints at the fact that the network is correctly learning the general principles that make each point set so particular. An interesting future work would involve conditioning the network with respect to the particular sampler, sampler type or more general desired properties. This would allow a single trained network to produce point sets of several types. Preliminary experiments showed subpar results, but more complex architectures could alleviate this issue.

The capacity of our network to produce possibly non-uniform example-based point sets may open the door to syntheses where sampling data are only available through a small number of measurements (e.g., distributions of trees, cells, etc.) and optimizing only for summarized statistics (power spectrum or PCF) is not desired. This is a promising direction, as we have successfully trained our network with 32 examples of the SOT sampler.

While in principle our method would work in arbitrary dimension, the efficiency gained through our convolutions on grids would be lost, as storing higher-dimensional grids becomes impractical, both in terms of storage (which grows exponentially with dimension) and supported sample size (of the form n^d for some n, similarly to stratified samplers). To date, higher-dimensional data would be better supported by the approach of [Leimkühler et al. 2019], which does not rely on grids. To remove this grid dependency in the Monte Carlo sampling, one could adapt recent diffusion models for 3D point cloud shape synthesis [Luo and Hu 2021; Zeng et al. 2022]. While our network is reasonably efficient, other recent architectures have been proposed to accelerate diffusion models and could be explored as well [Song et al. 2020].
Unwarping both example and synthesized point sets to recover a uniform distribution shows that their spectra match. The uniformity of the unwarped samples can also be measured: the semi-discrete optimal transport energy, averaged over 128 realizations of 256 samples, is 7.24 × 10⁻⁴ for the neural network output, compared with 7.16 × 10⁻⁴ for the original sliced OT uniform samples.

Fig. 8. As a stress test, we sample from the density 0.2 e^(−20(x² + y²)) + 0.2 sin²(πx) sin²(πy) by importance sampling, using GBN as a training set (first row). Our sampler reproduces the density well and mostly preserves important characteristics of the sampler (second row).

However, in the settings we focus on, in most cases our samples preserve the characteristics of major samplers well, including their power spectrum, Monte Carlo integration quality, distance statistics, optimal transport energy and discrepancy. Our diffusion-based sampler allows to generate point sets much faster than some optimization-based samplers by learning from their output. Aside from the fast generation of diverse point sets, we have shown a use for our network's differentiability by adding a low-discrepancy property to an optimal transport-based sampler. Rendering applications could benefit from our samplers, e.g., through differentiable rendering pipelines [Jakob et al. 2022a] or for generating point sets that nicely distribute Monte Carlo error in a blue-noise fashion in screen space [Salaün et al. 2022].

Fig. 9. We used a trained SOT sampling network to optimize the discrepancy of the generated point sets among the class of SOT point sets. For 128 SOT (blue) and Sobol'+Owen (red) point sets, as representatives of blue-noise and LDS samplers, we show their distributions of OT and discrepancy statistics.
In orange, we illustrate the OT and discrepancy values for 10 optimized point sets, as well as a representative trajectory during the optimization process. We also show a representative point set before (right) and after (left) optimization.

Fig. 7. We sample from a learned sliced OT linear ramp. Top row, left: one example point set used for training (among 66,035). Top row, right: one synthesized point set. In this example (Fig. 8), we learn from importance-sampled GBN point sets obtained by rejection sampling.

Fig. 4. Our network matches integration errors on Gaussian integrands (top plots) and Heaviside integrands (bottom plots), even beyond the sample sizes it was trained for ({64, 256, 1024}). Sample counts are 64, 256, 576, 1024 and 4096 (top to bottom for each integrand).
Fig. 5. Our network matches the L2 discrepancy of the original point sets. Sample counts are 64, 256, 576, 1024 and 4096 (top to bottom).

Fig. 6. We evaluate the minimum pairwise distance between samples. This property is highly sensitive as it only depends on the location of 2 samples. Our network tends to produce smaller values, while the sample repulsion of [Leimkühler et al. 2019] better preserves minimum distances.
Sample counts are 64, 256, 576, 1024 and 4096 (top to bottom).

Bastien Doignies, Nicolas Bonneel, David Coeurjolly, Julie Digne, Loïs Paulin, Jean-Claude Iehl, and Victor Ostromoukhov

ACKNOWLEDGMENTS

This work was funded in part by the French Agence Nationale de la Recherche, grant ANR-20-CE45-0025.

Supplementary document

DIFFUSION MODEL

Diffusion models date back to the work of Sohl-Dickstein et al. [2015] but were popularized by Ho et al. [2020] for image synthesis. This section recalls the details for completeness.

Probabilistic Denoising Diffusion models involve a forward process, where noise is gradually added to the signal (here an image), and a reverse process, where noise is removed through a learnable network. The forward diffusion process is a Markov chain, where each transition adds Gaussian noise to the image, following:

q(x_t | x_{t−1}) = N(x_t; √(1 − β_t) x_{t−1}, β_t I),

where (β_t)_{t=0}^{T} are the noise variances for each time t. The variance schedule is chosen such that nothing distinguishes x_T from white noise. In our model, we set the variances to be constant, β_t = β. One has:

q(x_{1:T} | x_0) = ∏_{t=1}^{T} q(x_t | x_{t−1}).

The reverse (denoising) process is also a Markov chain, with transitions:

p_θ(x_{t−1} | x_t) = N(x_{t−1}; μ_θ(x_t, t), Σ_θ(x_t, t)),

where μ_θ and Σ_θ are learned from examples. To simplify, following the work of Ho et al. [2020], we consider that Σ_θ = σ_t² I, with σ_t² = β_t = β. The forward process allows to sample x_t for arbitrary t directly from x_0, following:

x_t = √(ᾱ_t) x_0 + √(1 − ᾱ_t) ε, with ε ∼ N(0, I), α_t = 1 − β_t and ᾱ_t = ∏_{s=1}^{t} α_s.
During training, an image x_0 is drawn from the set of examples, along with a random time t ∈ {1, …, T}; a random noise image ε is drawn following N(0, I), and the algorithm tries to minimize

‖ε − ε_θ(√(ᾱ_t) x_0 + √(1 − ᾱ_t) ε, t)‖²

by gradient descent. During sampling, a random noise image x_T ∼ N(0, I) is drawn and iteratively denoised by applying:

x_{t−1} = (1/√(α_t)) (x_t − (β_t / √(1 − ᾱ_t)) ε_θ(x_t, t)) + σ_t z,

where z is a random noise and, in our case, we take σ_t = √(β_t). The key ingredient of diffusion models is the approximator ε_θ, which is modeled by a neural network.

NETWORK

Our network is a slightly modified version of the denoising network of Ho et al. [2020] and is summarized in Figure 10.

LEARNING RANK-1 REALIZATIONS WITH [Leimkühler et al. 2019]

Leimkühler et al. [2019] proposed a neural-network-based point process design using losses defined from spectral or pair-correlation information. In most examples provided by the authors, 1-d losses (or combinations of 1-d losses) are considered, using 1-d radial mean power spectra or 1-d pair correlation functions (allowing complex designs such as a high-dimensional point process with specific spectral properties for given 1-d or 2-d projections). When targeting isotropic samplers, the authors provided their experimental settings at https://github.com/sinbag/deepsampling. We use the same parameters for Poisson disk, GBN, SOT and LDBN, targeting their respective radial power spectra. For Sobol'+Owen, we keep the same settings but update the loss function to target a 2-d power spectrum. Cropping the spectra to the central part of the domain allowed us to obtain convergence of the learning step (in our experiments, increasing the cropping domain does not help convergence). For Rank-1, no satisfactory results have been obtained; for the sake of completeness, we illustrate in Fig. 11 the point sets and spectra we have obtained.

Fig. 11. Learning Rank-1 realizations using [Leimkühler et al. 2019] with a 2-d power spectrum loss and a 1-d power spectrum loss. We recall the original properties and our results for completeness.
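The training objective and sampling update recalled above can be sketched as follows. This is a minimal numpy illustration with a toy zero predictor standing in for the trained U-Net; all variable and function names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.full(T, 1e-3)          # constant variance schedule beta_t = beta
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)   # cumulative products alpha_bar_t

def eps_theta(x, t):
    """Stand-in for the trained denoising network."""
    return np.zeros_like(x)

# -- one training step: noise x0 to a random timestep, predict the noise --
x0 = rng.uniform(-0.5, 0.5, size=(8, 8, 2))   # clean offset grid
t = int(rng.integers(1, T))
eps = rng.standard_normal(x0.shape)
x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
loss = np.sum((eps - eps_theta(x_t, t)) ** 2)  # minimized by gradient descent

# -- sampling: start from white noise and iteratively denoise --
x = rng.standard_normal(x0.shape)
for t in range(T - 1, 0, -1):
    z = rng.standard_normal(x.shape) if t > 1 else 0.0
    x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_theta(x, t)) / np.sqrt(alphas[t])
    x += np.sqrt(betas[t]) * z                 # sigma_t = sqrt(beta_t)
```

With a real trained predictor, the sampling loop converges to an offset grid distributed like the training data; here the zero predictor simply leaves the chain as (rescaled) noise, but the control flow is the same.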
Note that Rank-1 is a very specific anisotropic sampler, far from the context of [Leimkühler et al. 2019], though additional investigation would be worthwhile.

REFERENCES

Abdalla G. M. Ahmed, Hélène Perrier, David Coeurjolly, Victor Ostromoukhov, Jianwei Guo, Dong-Ming Yan, Hui Huang, and Oliver Deussen. 2016. Low-Discrepancy Blue Noise Sampling. ACM Trans. on Graphics (SIGGRAPH Asia) 35, 6 (2016), 247:1-247:13. https://doi.org/f9cpt2
Abdalla G. M. Ahmed, Jing Ren, and Peter Wonka. 2022. Gaussian Blue Noise. ACM Trans. Graph. 41, 6, Article 260 (2022), 15 pages. https://doi.org/jtp8
Abdalla G. M. Ahmed and Peter Wonka. 2021. Optimizing Dyadic Nets. ACM Trans. on Graphics (SIGGRAPH) 40, 4 (2021), 141:1-141:17. https://doi.org/hn22
Nicolas Bonneel, Michiel Van De Panne, Sylvain Paris, and Wolfgang Heidrich. 2011. Displacement interpolation using Lagrangian mass transport. In Proceedings of the 2011 SIGGRAPH Asia conference. 1-12. https://doi.org/gkcqgt
Robert Bridson. 2007. Fast Poisson disk sampling in arbitrary dimensions. SIGGRAPH sketches 10, 1 (2007), 1. https://doi.org/gf8tsr
Fernando De Goes, Katherine Breeden, Victor Ostromoukhov, and Mathieu Desbrun. 2012. Blue noise through optimal transport. ACM Trans. Graph. 31, 6 (2012), 171:1-171:10. https://doi.org/gbb6n9
Oliver Deussen, Stefan Hiller, Cornelius Overveld, and Thomas Strothotte. 2000. Floating Points: A Method for Computing Stipple Drawings. Computer Graphics Forum (EG'00) 19, 3 (2000), 40-51. https://doi.org/fg9w98
Daniel Dunbar and Greg Humphreys. 2006. A spatial data structure for fast Poisson-disk sample generation. ACM Transactions on Graphics (TOG) 25, 3 (2006), 503-508.
Raanan Fattal. 2011. Blue-Noise Point Sampling Using Kernel Density Model. ACM Trans. Graph. 30 (2011), 48:1-48:12. https://doi.org/cv7pbv
Manuel N. Gamito and Steve C. Maddock. 2009. Accurate multidimensional Poisson-disk sampling. ACM Trans. Graph. 29, 1 (2009), 8:1-8:19. https://doi.org/dr8646
Fabian Groh, Patrick Wieschollek, and Hendrik P. A. Lensch. 2019. Flex-Convolution: Million-scale point-cloud learning beyond grid-worlds. In Computer Vision - ACCV 2018: 14th Asian Conference on Computer Vision, Revised Selected Papers, Part I. Springer, 105-122.
Stefan Heinrich. 1996. Efficient algorithms for computing the L2-discrepancy. Math. Comp. 65, 216 (1996), 1621-1633.
Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33 (2020), 6840-6851.
Binh-Son Hua, Minh-Khoi Tran, and Sai-Kit Yeung. 2018. Pointwise convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 984-993.
Wenzel Jakob, Sébastien Speierer, Nicolas Roussel, Merlin Nimier-David, Delio Vicini, Tizian Zeltner, Baptiste Nicolet, Miguel Crespo, Vincent Leroy, and Ziyi Zhang. 2022b. Mitsuba 3 renderer. https://mitsuba-renderer.org
Wenzel Jakob, Sébastien Speierer, Nicolas Roussel, and Delio Vicini. 2022a. Dr.Jit: A Just-In-Time Compiler for Differentiable Rendering. ACM Trans. on Graphics (SIGGRAPH) 41, 4 (2022), 124:1-124:19. https://doi.org/gqjn7p
Alexander Keller. 2004. Stratification by rank-1 lattices. In Monte Carlo and Quasi-Monte Carlo Methods 2002, Harald Niederreiter (Ed.). Springer, 299-313.
Johannes Kopf, Daniel Cohen-Or, Oliver Deussen, and Dani Lischinski. 2006. Recursive Wang Tiles for Real-Time Blue Noise. ACM Trans. Graph. 25, 3 (2006), 509-518. https://doi.org/dgvw52
Pierre L'Ecuyer and David Munger. 2016. LatticeBuilder: A General Software Tool for Constructing Rank-1 Lattice Rules. ACM Transactions on Mathematical Software 42 (2016), 1-30. https://doi.org/10.1145/2754929
Thomas Leimkühler, Gurprit Singh, Karol Myszkowski, Hans-Peter Seidel, and Tobias Ritschel. 2019. Deep point correlation design. ACM Trans. on Graphics (SIGGRAPH Asia) 38, 6 (2019), 1-17. https://doi.org/ggfg2x
Christiane Lemieux. 2009. Monte Carlo and Quasi-Monte Carlo Sampling. Springer. https://doi.org/b8r4z5
Shitong Luo and Wei Hu. 2021. Diffusion probabilistic models for 3d point cloud generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2837-2845.
Quentin Mérigot. 2011. A multiscale approach to optimal transport. In Computer Graphics Forum, Vol. 30. Wiley Online Library, 1583-1592. https://doi.org/cjh4q8
Thomas Müller, Brian McWilliams, Fabrice Rousselle, Markus Gross, and Jan Novák. 2019. Neural importance sampling. ACM Trans. on Graphics 38, 5 (2019), 1-19. https://doi.org/jtrf
Thomas Müller, Fabrice Rousselle, Alexander Keller, and Jan Novák. 2020. Neural control variates. ACM Transactions on Graphics (TOG) 39, 6 (2020), 1-19. https://doi.org/jtrj
Georges Nader and Gael Guennebaud. 2018. Instant transport maps on 2D grids. ACM Trans. Graph. 37, 6 (2018), 249:1-249:13. https://doi.org/jtrg
Harald Niederreiter. 1992. Random Number Generation and Quasi-Monte Carlo Methods. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, USA. https://doi.org/fd5fjw
Victor Ostromoukhov, Charles Donohue, and Pierre-Marc Jodoin. 2004. Fast Hierarchical Importance Sampling with Blue Noise Properties. ACM Trans. on Graphics (SIGGRAPH) 23, 3 (2004), 488-495.
Art B. Owen. 1998. Scrambling Sobol' and Niederreiter-Xing Points. Journal of Complexity 14, 4 (1998), 466-489.
Cengiz Öztireli and Markus Gross. 2012. Analysis and synthesis of point distributions based on pair correlation. ACM Transactions on Graphics (TOG) 31, 6 (2012), 1-10.
Loïs Paulin, Nicolas Bonneel, David Coeurjolly, Jean-Claude Iehl, Alexander Keller, and Victor Ostromoukhov. 2022. MatBuilder: Mastering Sampling Uniformity over Projections. ACM Trans. on Graphics (SIGGRAPH) 41, 4 (2022), 84:1-84:13. https://github.com/loispaulin/matbuilder
Loïs Paulin, Nicolas Bonneel, David Coeurjolly, Jean-Claude Iehl, Antoine Webanck, Mathieu Desbrun, and Victor Ostromoukhov. 2020. Sliced optimal transport sampling. ACM Trans. on Graphics (SIGGRAPH) 39, 4 (2020), 99:1-99:17. https://doi.org/gg8xfj
Hélène Perrier, David Coeurjolly, Feng Xie, Matt Pharr, Pat Hanrahan, and Victor Ostromoukhov. 2018. Sequences with Low-Discrepancy Blue-Noise 2-D Projections. 37, 2 (2018), 339-353. https://doi.org/gd2j2d
Matt Pharr, Wenzel Jakob, and Greg Humphreys. 2016. Physically Based Rendering: From Theory to Implementation (3rd ed.). Morgan-Kaufmann.
Adrien Pilleboue, Gurprit Singh, David Coeurjolly, Michael Kazhdan, and Victor Ostromoukhov. 2015. Variance Analysis for Monte Carlo Integration. ACM Trans. Graph. 34, 4 (2015), 124:1-124:14. https://doi.org/f7m28c
Hongxing Qin, Yi Chen, Jinlong He, and Baoquan Chen. 2017. Wasserstein Blue Noise Sampling. ACM Trans. Graph. 36, 4, Article 137a (Oct. 2017). https://doi.org/gcj3d3
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-Resolution Image Synthesis with Latent Diffusion Models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 10684-10695.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 234-241. https://doi.org/gcgk7j
Riccardo Roveri, A. Cengiz Öztireli, and Markus Gross. 2017. General point sampling with adaptive density and correlations. In Computer Graphics Forum, Vol. 36. Wiley Online Library, 107-117. https://doi.org/gbm2jp
Corentin Salaün, Iliyan Georgiev, Hans-Peter Seidel, and Gurprit Singh. 2022. Scalable Multi-Class Sampling via Filtered Sliced Optimal Transport. ACM Trans. Graph.
41, 6, Article 261 (nov 2022), 14 pages. https://doi.org/10.1145/3550454.3555484 Dynamic edge-conditioned filters in convolutional neural networks on graphs. Martin Simonovsky, Nikos Komodakis, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionMartin Simonovsky and Nikos Komodakis. 2017. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In Proceedings of the IEEE conference on computer vision and pattern recognition. 3693-3702. Analysis of sample correlations for Monte Carlo rendering. Gurprit Singh, Cengiz Öztireli, G M Abdalla, David Ahmed, Kartic Coeurjolly, Oliver Subr, Victor Deussen, Ravi Ostromoukhov, Wojciech Ramamoorthi, Jarosz, Computer Graphics Forum. Wiley Online Library38Gurprit Singh, Cengiz Öztireli, Abdalla G. M. Ahmed, David Coeurjolly, Kartic Subr, Oliver Deussen, Victor Ostromoukhov, Ravi Ramamoorthi, and Wojciech Jarosz. 2019. Analysis of sample correlations for Monte Carlo rendering. In Computer Graphics Forum, Vol. 38. Wiley Online Library, 473-491. On the distribution of points in a cube and the approximate evaluation of integrals. M Ilya, &apos; Sobol, Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki. 74Ilya M. Sobol'. 1967. On the distribution of points in a cube and the approximate evaluation of integrals. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki 7, 4 (1967), 784-802. https://doi.org/crdj6j Deep unsupervised learning using nonequilibrium thermodynamics. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, Surya Ganguli, PMLRInternational Conference on Machine Learning. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning. PMLR, 2256-2265. Jiaming Song, arXiv:2010.02502Chenlin Meng, and Stefano Ermon. 2020. Denoising diffusion implicit models. 
arXiv preprintJiaming Song, Chenlin Meng, and Stefano Ermon. 2020. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502 (2020). Fourier analysis of stochastic sampling strategies for assessing bias and variance in integration. Kartic Subr, Jan Kautz, ACM Trans. Graph. 32ArticleKartic Subr and Jan Kautz. 2013. Fourier analysis of stochastic sampling strategies for assessing bias and variance in integration. ACM Trans. Graph. 32, 4, Article 128 (2013), 12 pages. https://doi.org/gbdg7c Fast tile-based adaptive sampling with user-specified Fourier spectra. Robert Ulichney, M A Cambridge, Usa Florent Wachtel, Adrien Pilleboue, David Coeurjolly, Katherine Breeden, Gurprit Singh, Gaël Cathelin, Fernando De Goes, Mathieu Desbrun, Victor Ostromoukhov, ACM Trans. on Graphics (SIGGRAPH). 33MIT PressDigital HalftoningRobert Ulichney. 1987. Digital Halftoning. MIT Press, Cambridge, MA, USA. Florent Wachtel, Adrien Pilleboue, David Coeurjolly, Katherine Breeden, Gurprit Singh, Gaël Cathelin, Fernando De Goes, Mathieu Desbrun, and Victor Ostromoukhov. 2014. Fast tile-based adaptive sampling with user-specified Fourier spectra. ACM Trans. on Graphics (SIGGRAPH) 33, 4 (2014), 1-11. https://doi.org/f6cz6k Parallel Poisson disk sampling. Li-Yi Wei, In ACM Trans. Graph. 27ACMLi-Yi Wei. 2008. Parallel Poisson disk sampling. In ACM Trans. Graph., Vol. 27. ACM, 20. https://doi.org/cs3jjv Spectral analysis of spatial sampling by photoreceptors: Topological disorder prevents aliasing. I John, Yellott, Vision Research. 22John I. Yellott. 1982. Spectral analysis of spatial sampling by photoreceptors: Topological disorder prevents aliasing. Vision Research 22, 9 (1982), 1205 -1210. https://doi. org/fsgtr4 Sample elimination for generating Poisson disk sample sets. Cem Yuksel, Computer Graphics Forum. Wiley Online Library34Cem Yuksel. 2015. Sample elimination for generating Poisson disk sample sets. In Computer Graphics Forum, Vol. 34. Wiley Online Library, 25-32. 
https://doi.org/ f7k7c7 Or Litany, Sanja Fidler, and Karsten Kreis. 2022. LION: Latent Point Diffusion Models for 3D Shape Generation. Xiaohui Zeng, Arash Vahdat, Francis Williams, Zan Gojcic, Advances in Neural Information Processing Systems (NeurIPS). Xiaohui Zeng, Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, and Karsten Kreis. 2022. LION: Latent Point Diffusion Models for 3D Shape Generation. In Advances in Neural Information Processing Systems (NeurIPS).
{'fraction_non_alphanumeric': 0.05397783299806729, 'fraction_numerical': 0.04768666430008283, 'mean_word_length': 4.665586592178771, 'pattern_counts': {'":': 0, '<': 0, '<?xml version=': 0, '>': 0, 'https://': 32, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 6, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'Much effort has been put into developing samplers with specific properties, such as producing blue noise, low-discrepancy, lattice or Poisson disk samples. These samplers can be slow if they rely on optimization processes, may rely on a wide range of numerical methods, are not always differentiable. The success of recent diffusion models for image generation suggests that these models could be appropriate for learning how to generate point sets from examples. However, their convolutional nature makes these methods impractical for dealing with scattered data such as point sets. We propose a generic way to produce 2-d point sets imitating existing samplers from observed point sets using a diffusion model. We address the problem of convolutional layers by leveraging neighborhood information from an optimal transport matching to a uniform grid, that allows us to benefit from fast convolutions on grids, and to support the example-based learning of non-uniform sampling patterns. We demonstrate how the differentiability of our approach can be used to optimize point sets to enforce properties.', 'arxivid': '2302.05116', 'author': ['Bastien Doignies ', 'Univ Lyon ', 'France ', '\nDAVID COEURJOLLY\nNICOLAS BONNEEL\nCNRS\nUniv. Lyon\nFrance\n', '\nCNRS\nUniv. Lyon\nFrance\n', '\nLOÏS PAULIN\nJULIE DIGNE\nCNRS\nUniv. Lyon\nFrance\n', '\nJEAN-CLAUDE IEHL\nUniv. Lyon\nFrance\n', '\nUniv. Lyon\nFrance\n', '\nVICTOR OSTROMOUKHOV\nUniv. Lyon\nFrance\n'], 'authoraffiliation': ['DAVID COEURJOLLY\nNICOLAS BONNEEL\nCNRS\nUniv. Lyon\nFrance', 'CNRS\nUniv. Lyon\nFrance', 'LOÏS PAULIN\nJULIE DIGNE\nCNRS\nUniv. Lyon\nFrance', 'JEAN-CLAUDE IEHL\nUniv. Lyon\nFrance', 'Univ. Lyon\nFrance', 'VICTOR OSTROMOUKHOV\nUniv. 
Lyon\nFrance'], 'corpusid': 256808648, 'doi': '10.48550/arxiv.2302.05116', 'github_urls': [], 'n_tokens_mistral': 16304, 'n_tokens_neox': 13493, 'n_words': 7312, 'pdfsha': '6d43b877830444f645becda4090e23b98c3e0b02', 'pdfurls': ['https://export.arxiv.org/pdf/2302.05116v1.pdf'], 'title': ['Example-Based Sampling with Diffusion Models', 'Example-Based Sampling with Diffusion Models'], 'venue': []}
arxiv
Injecting Relational Structural Representation in Neural Networks for Question Similarity

Antonio Uva, Daniele Bonadiman, Alessandro Moschitti
DISI, University of Trento, 38123 Povo (TN), Italy; Amazon, Manhattan Beach, CA 90266, USA

Abstract

Effectively using full syntactic parsing information in Neural Networks (NNs) to solve relational tasks, e.g., question similarity, is still an open problem. In this paper, we propose to inject structural representations into NNs by (i) learning an SVM model using Tree Kernels (TKs) on relatively few pairs of questions (a few thousand), as gold standard (GS) training data is typically scarce, (ii) predicting labels on a very large corpus of question pairs, and (iii) pre-training NNs on this large corpus. The results on the Quora and SemEval question similarity datasets show that NNs trained with our approach can learn more accurate models, especially after fine-tuning on the GS data.

Introduction

Recent years have seen an exponential growth in the use of web forums, where users can exchange and find information simply by asking questions in natural language. Clearly, the possibility of reusing previously asked questions makes forums much more useful. Thus, many tasks have been proposed for building automatic systems that detect duplicate questions. These were organized both in academia, e.g., at SemEval (Nakov et al., 2016, 2017), and by companies, e.g., Quora [1]. An interesting outcome of the SemEval challenge was that syntactic information is essential for achieving high accuracy in question reranking tasks.
Indeed, the top systems were built using Support Vector Machines (SVMs) trained with Tree Kernels (TKs), which were applied to a syntactic representation of the question text (Filice et al., 2016; Barrón-Cedeño et al., 2016). In contrast, NN-based models struggled to reach good accuracy because (i) large training sets are typically not available [2], and (ii) effectively exploiting full syntactic parse information in NNs is still an open issue. Although Das et al. (2016) showed that NNs are very effective at managing lexical variability, no neural model encoding syntactic information has shown a clear improvement. Even NNs that directly exploit syntactic information, such as the Recursive Neural Networks by Socher et al. (2013) or the Tree-LSTM by Tai et al. (2015), have been shown to be outperformed by well-trained sequential models (Li et al., 2015). Finally, such tree-based approaches depend on sentence structure and are thus difficult to optimize and parallelize. This is unfortunate, as NNs are very flexible in general and enable easy system deployment in real applications, while TK models require syntactic parsing and longer testing time.

In this paper, we propose an approach that injects syntactic information into NNs while keeping them simple. It consists of the following steps: (i) train a TK-based model on a few thousand training examples; (ii) apply this classifier to a much larger set of unlabeled examples to generate automatic annotation; (iii) pre-train NNs on the automatic data; and (iv) fine-tune the NNs on the smaller GS data. Our experiments on two different datasets, Quora and Qatar Living (QL) from SemEval, show that (i) when NNs are pre-trained on the predicted data, they achieve higher accuracy than the TK models, and (ii) NNs can be further boosted by fine-tuning them on the available GS data.
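The four steps above can be sketched as a generic pipeline. The function below is a minimal illustration only: the model-specific callables (`train_tk`, `label`, `pretrain`, `finetune`) are caller-supplied stand-ins, not the actual SVM/TK and NN implementations from the paper.

```python
def inject_structure(train_tk, label, pretrain, finetune, gs, unlabeled):
    """Weak-supervision pipeline:
    (i)   train a TK-based teacher on the small gold-standard (GS) set,
    (ii)  auto-label a much larger unlabeled corpus with it,
    (iii) pre-train the neural network on the automatic data,
    (iv)  fine-tune the pre-trained network on the GS data.
    """
    teacher = train_tk(gs)                              # (i)
    auto = [(x, label(teacher, x)) for x in unlabeled]  # (ii)
    student = pretrain(auto)                            # (iii)
    return finetune(student, gs)                        # (iv)
```

The key design choice is that the teacher and the student are deliberately different model families: the teacher encodes syntax (TKs), while the student is a plain sequential NN that absorbs those decisions from the auto-labeled data.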
This suggests that the TK properties are captured by the NNs, which can exploit syntactic information even more effectively thanks to their well-known generalization ability. In contrast to other semi-supervised approaches, e.g., self-training, we show that the improvement of our approach is obtained only when a very different classifier, i.e., a TK-based one, is used to label a large portion of the data. Indeed, using the same NNs in a self-training fashion (or another NN in a co-training approach) to label the semi-supervised data does not provide any improvement. Similarly, when SVMs using standard lexical similarity features are applied to label data, no improvement is observed in the NNs. One evident consideration is that TK-based models mainly exploit syntactic information to classify data. Although assessing whether NNs specifically learn such syntax would require further investigation, our results show that only the transfer from TKs produces an improvement: this is significant evidence that makes the main claim of our paper worth investigating further. In any case, our approach increases the accuracy of NNs when only small datasets are available for learning a high-level semantic task such as question similarity. It consists of (i) using heavier syntactic/semantic models, e.g., based on TKs, to produce training data; and (ii) exploiting the latter to learn a neural model, which can then be fine-tuned on the small available GS data.

Tasks and Baseline Models

We introduce our question similarity tasks along with two of the most competitive models for their solution.

Question Matching and Ranking

Question similarity in forums can be set up in different ways, e.g., detecting whether two questions are semantically similar, or ranking a set of retrieved questions by their similarity to the original question. We describe the two settings below. The Quora task concerns detecting whether two questions are duplicates, i.e., whether they have the same intent.
The associated dataset (Wang et al., 2017) contains over 404,348 pairs of questions posted by users on the Quora website, labelled as duplicate pairs or not. For example, "How do you start a bakery?" and "How can one start a bakery business?" are duplicates, while "What are natural numbers?" and "What is a least natural number?" are not. The ground-truth labels contain some amount of noise. In the QL task at SemEval-2016 (Nakov et al., 2016), users were provided with a new (original) question q_o and a set of related questions (q_1, q_2, ..., q_n) from the QL forum [3] retrieved by a search engine, i.e., Google. The goal is to rank the candidate questions q_i by their similarity with respect to q_o. The q_i were manually annotated as PerfectMatch, Relevant, or Irrelevant, depending on their similarity with q_o; PerfectMatch and Relevant are both considered relevant. A question is composed of a subject, a body, and a unique identifier.

Support Vector Machines

A top-performing model in the SemEval challenge is built with SVMs, which learn a classification function, f : Q × Q → {0, 1}, on the relevant vs. irrelevant questions belonging to the question set Q. The classifier score is used to rerank a set of candidate questions q_i provided in the dataset with respect to an original question q_o. Three main representations were proposed: (i) vectors of similarity features derived between two questions; (ii) a TK function applied to the syntactic structure of question pairs; or (iii) a combination of both. Feature Vectors (FV) are built for question pairs (q_1, q_2) using a set of text similarity features that capture the relations between the two questions. More specifically, we compute 20 similarities sim(q_1, q_2) using word n-grams (n = 1, ..., 4) after stopword removal, greedy string tiling (Wise, 1996), longest common subsequences (Allison and Dix, 1986), the Jaccard coefficient (Jaccard, 1901), word containment (Lyon et al., 2001), and cosine similarity.
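A few of these surface similarities can be computed directly on token lists, as in the sketch below. This is a simplified stand-in for the actual feature extractor (it omits, e.g., stopword removal and greedy string tiling):

```python
def ngrams(tokens, n):
    """Set of word n-grams of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    """Jaccard coefficient between two n-gram sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def containment(a, b):
    """Fraction of a's n-grams also found in b."""
    return len(a & b) / len(a) if a else 0.0

def lcs_len(x, y):
    """Length of the longest common subsequence of two token lists (DP)."""
    prev = [0] * (len(y) + 1)
    for xi in x:
        cur = [0]
        for j, yj in enumerate(y, 1):
            cur.append(prev[j - 1] + 1 if xi == yj else max(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

def similarity_features(q1, q2, max_n=4):
    """A small vector of lexical similarities between two tokenized questions."""
    feats = []
    for n in range(1, max_n + 1):
        g1, g2 = ngrams(q1, n), ngrams(q2, n)
        feats += [jaccard(g1, g2), containment(g1, g2)]
    feats.append(lcs_len(q1, q2) / max(len(q1), len(q2)))
    return feats
```

For the running example, `similarity_features("how do you start a bakery".split(), "how can one start a bakery business".split())` yields a high unigram overlap but weaker higher-order n-gram scores, which is exactly the kind of signal the FV model feeds to the SVM.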
Tree Kernels (TKs) measure the similarity between the syntactic structures of two questions. Following Filice et al. (2016), we build two macro-trees, one for each question in the pair, containing the syntactic trees of the sentences composing the question. In addition, we link the two macro-trees by connecting their phrases, e.g., NP, VP, PP, etc., when there is a lexical match between the phrases of the two questions. We apply the following kernel to two pairs of question trees:

K(<q_1, q_2>, <q'_1, q'_2>) = TK(t(q_1, q_2), t(q'_1, q'_2)) + TK(t(q_2, q_1), t(q'_2, q'_1)),

where t(x, y) extracts the syntactic tree from the text x, enriching it with relational tags (REL) derived by matching the lexicals between x and y.

Injecting Structures in NNs

We inject TK knowledge into two well-known, state-of-the-art networks for question similarity, enriching them with relational information.

NNs for Question Similarity

We implemented the Convolutional NN (CNN) model proposed by Severyn and Moschitti (2016). This learns f using two separate sentence encoders, f_{q_1} : Q → R^n and f_{q_2} : Q → R^n, which map each question into a fixed-size dense vector of dimension n. The resulting vectors are concatenated and passed to a Multi-Layer Perceptron that performs the final classification. Each question is encoded into a fixed-size vector using an embedding layer, a convolution operation, and a global max-pooling function. The embedding layer transforms the input question, i.e., a sequence of tokens X_q = [x_{q_1}, ..., x_{q_i}, ..., x_{q_n}], into a sentence matrix S_q ∈ R^{m×n} by concatenating the word embeddings w_i corresponding to the tokens x_{q_i} in the input sentence. Additionally, we implemented a Bidirectional LSTM (BiLSTM), using the standard LSTM of Hochreiter and Schmidhuber (1997).
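The pair kernel K defined earlier in this section can be sketched as follows. Note the heavy simplifications: the real model uses a subset-tree kernel over REL-enriched parse trees, whereas here `tk` is a toy kernel that only counts identical rooted subtrees, and the macro-trees produced by t(x, y) are assumed to be built beforehand.

```python
from collections import Counter

def subtrees(tree, acc=None):
    """Collect every rooted subtree of a (label, child, ...) tuple tree,
    serialized so identical subtrees compare equal."""
    if acc is None:
        acc = []
    acc.append(repr(tree))
    for child in tree[1:]:
        if isinstance(child, tuple):
            subtrees(child, acc)
    return acc

def tk(t1, t2):
    """Toy tree kernel: number of identical rooted-subtree pairs
    (a stand-in for the real subset-tree kernel)."""
    c1, c2 = Counter(subtrees(t1)), Counter(subtrees(t2))
    return sum(c1[k] * c2[k] for k in c1)

def pair_kernel(pair_a, pair_b):
    """K(<q1,q2>, <q1',q2'>) = TK(t(q1,q2), t(q1',q2')) + TK(t(q2,q1), t(q2',q1')).
    Each argument is a pair (t12, t21) of precomputed macro-trees,
    i.e., the two orderings t(q1, q2) and t(q2, q1)."""
    (a12, a21), (b12, b21) = pair_a, pair_b
    return tk(a12, b12) + tk(a21, b21)
```

The symmetric sum over the two orderings is what makes the kernel treat the question pair as a relational object rather than as two independent sentences.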
An LSTM iterates over the sentence one word at a time, creating a new word representation h_i by composing the representation of the previous word with the current word vector: h_i = LSTM(w_i, h_{i-1}). A BiLSTM iterates over the sentence in both directions, and the final representation is the concatenation of the hidden representations h_N obtained after processing the whole sentence. We apply two sentence models (with different weights), one for each question, then concatenate the two fixed-size representations and feed them to a Multi-Layer Perceptron.

Relational Information

Severyn and Moschitti (2016) showed that relational information, encoded in terms of overlapping words between the two texts of a pair, can greatly improve accuracy. Thus, for both networks above, we mark each word with a binary feature indicating whether it appears in the other question of the pair. This feature is encoded with a fixed-size vector (in the same way as for words).

Learning NNs with Structure

To inject structured information into the network, we use a weak supervision technique: (i) an SVM with TKs is trained on the GS data; (ii) this model classifies an additional unlabelled dataset, creating automatic data; and (iii) a neural network is trained on the latter data. The pre-trained network can then be fine-tuned on the GS data using a smaller learning rate γ. This prevents catastrophic forgetting (Goodfellow et al., 2013), which may occur with a larger learning rate.

Experiments

We experiment with two datasets, comparing models trained on gold data, automatic data, and their combination, before and after fine-tuning.

Data

The Quora dataset contains 384,358 pairs in the training set and 10,000 pairs in each of the dev. and test sets. The latter two contain the same number of positive and negative examples. The QL dataset contains 3,869 question pairs, divided into 2,669, 500, and 700 pairs in the training, dev., and test sets.
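The relational word-overlap feature described above can be illustrated with a minimal sketch. In the actual networks the 0/1 flag indexes a learned 5-dimensional overlap embedding that is concatenated to the word embedding; here we only compute the flags themselves:

```python
def overlap_flags(q1_tokens, q2_tokens):
    """Mark each word of a question with 1 if it also occurs in the
    other question of the pair, 0 otherwise."""
    s1, s2 = set(q1_tokens), set(q2_tokens)
    flags1 = [1 if w in s2 else 0 for w in q1_tokens]
    flags2 = [1 if w in s1 else 0 for w in q2_tokens]
    return flags1, flags2
```

For the pair "How do you start a bakery?" / "How can one start a bakery business?", the words "how", "start", "a", and "bakery" are flagged in both questions, giving the network an explicit alignment signal.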
We created 93k [4] unlabelled pairs from the QL dump, retrieving 10 candidates with Lucene for 9,300 query questions.

NN Setup

We pre-initialize our word embeddings with skip-gram embeddings of dimensionality 50, jointly trained on the English Wikipedia dump (Mikolov et al., 2013) and the jacana corpus [5]. The input sentences are encoded with fixed-size vectors using a CNN with the following parameters: a window of size 5 and an output of 100 dimensions, followed by global max pooling. We use a single non-linear hidden layer whose size is equal to the size of the sentence embeddings, i.e., 100. The word-overlap embeddings have 5 dimensions. The activation function for both the convolution and hidden layers is ReLU. During training, the model optimizes the binary cross-entropy loss. We used SGD with the Adam update rule, setting the learning rate γ to 10^-4 and 10^-5 for the pre-training and fine-tuning phases, respectively.

Results on Quora

In Table 1, the name in parentheses indicates the model used for generating the automatic data; e.g., CNN(TK-10k) means that a CNN has been pre-trained with data labelled by a TK model trained on 10k GS examples. The amount of automatic data for pre-training is in the second column, while the amount of GS data for training or fine-tuning (indicated by *) is in the third column. Finally, the results on the dev. and test sets are in the fourth and fifth columns. We note the following. First, NNs trained on 10k GS examples obtain higher accuracy than FV and TK on both the dev. and test sets (see the first four lines). Second, CNNs pre-trained with the data generated by FV, or in a self-training setting, i.e., CNN(CNN-10k), do not improve [6] on the baseline model, CNN-10k, even after fine-tuning (see the second part of the table). Third, when CNNs and LSTMs are trained on the data labelled by the TK model, they match the TK model's accuracy (third part of the table).
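The sentence encoder used in the setup above (window-5 convolution, ReLU, global max pooling) can be sketched in plain Python. This is a didactic stand-in for the trained network: the filters are arbitrary caller-supplied weights, and real embeddings are 50-dimensional rather than the toy 1-dimensional vectors used in the example.

```python
def relu(x):
    return max(0.0, x)

def conv_encode(token_vecs, filters, window=5):
    """1-D convolution over a sentence followed by global max pooling.
    token_vecs: list of m-dimensional lists (one per token);
    filters: list of (window * m)-dimensional weight lists.
    Returns one number per filter: the sentence embedding."""
    m = len(token_vecs[0])
    pad = [[0.0] * m] * (window // 2)          # zero-pad so every token is covered
    seq = pad + token_vecs + pad
    out = []
    for f in filters:
        best = 0.0                             # ReLU outputs are >= 0
        for i in range(len(seq) - window + 1):
            patch = [v for tok in seq[i:i + window] for v in tok]
            best = max(best, relu(sum(a * b for a, b in zip(patch, f))))
        out.append(best)
    return out
```

Global max pooling makes the output size depend only on the number of filters, not on the sentence length, which is what lets two questions of different lengths be compared as fixed-size vectors.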
Most importantly, when they are fine-tuned on GS data, they obtain better results than the original models trained on the same amount of data, e.g., 1% higher accuracy than CNN-10k. Next, the fourth part of the table shows that the improvement given by our method is still present when training the TK model (and fine-tuning the NNs) on less GS data, i.e., only 5k examples. Additionally, the fifth section of the table shows a large improvement from training NNs on all the available Quora data annotated by TK-10k (trained on just 10k examples). This suggests that NNs require more data to learn the complex relational syntactic patterns expressed by TKs. However, the plot in Figure 1 shows that the improvement reaches a plateau around 100k examples. Finally, in the last row of the table, we report the result of a voting approach using a combination of the normalized scores of TK-10k and CNN-10k. The accuracy is almost the same as that of CNN(TK-10k)*. This shows that NNs completely learn the combination of a TK model, which mainly exploits syntax, and a CNN, which only uses lexical information. Note that the voting model is heavy to deploy, as it uses syntactic parsing and the kernel algorithm, which has a time complexity quadratic in the number of support vectors.

[6] The improvement of 0.5 is not statistically significant.

Results on Qatar Living

Table 2 reports the results of applying our technique to a smaller and different dataset, QL. Here, CNNs have lower performance than TK models, as 2,669 pairs are not enough to train their parameters, and the text is also noisy, i.e., it contains many spelling errors. Despite this problem, the results show that CNNs can approximate the TK models well when using a large set of automatic data. For example, the CNN trained on 93k automatically annotated examples and then fine-tuned exhibits a 0.4% accuracy improvement on the dev. set and almost 3% on the test set over the TK models. On the other hand, using too much automatically labeled data may hurt the performance on the test set.
This may be due to the fact that the quality of the information contained in the gold labeled data deteriorates. In other words, using the right amount of weak supervision is an important hyper-parameter that needs to be carefully chosen.

Table 2: Accuracy on QL using all available GS data.

Related Work

Determining question similarity is one of the main challenges in building systems that answer real user questions (Agichtein et al., 2015, 2016) in community QA; thus, different approaches have been proposed. Jeon et al. (2005) used a language model based on a word translation table to compute the probability of generating a query question given a target/related question. Zhou et al. (2011) showed the effectiveness of phrase-based translation models on Yahoo! Answers. Cao et al. (2009) and Duan et al. (2008) proposed a similarity between two questions based on a language model that exploits the category structure of Yahoo! Answers. Wang et al. (2009) proposed a model that finds semantically related questions by computing the similarity between the syntactic trees representing the questions. Ji et al. (2012) and Zhang et al. (2014) used latent semantic topics that generate question/answer pairs. Regarding the use of automatically labelled data, Blum and Mitchell (1998) applied semi-supervised approaches, such as self-training and co-training, to non-neural models. The main point of our paper is the use of standard weakly-supervised methods to inject syntactic information into NNs. Hu et al. (2016) tried to combine symbolic representations with NNs by transferring structured information from logic rules into the weights of NNs. Our work is rather different, as we inject syntactic, not logic, information into NNs. The work most similar to ours is that of Croce et al. (2017), who use Nyström methods to compact the TK representation into embedding vectors and use the latter to train feed-forward NNs.
In contrast, we present a simpler approach, in which NNs learn syntactic properties directly from data. To our knowledge, ours is the first work that uses NNs to learn structural information from data labelled by TK-based models. Finally, no system in the SemEval challenges used NNs trained on syntactic information.

Conclusion

In this work, we have trained TK-based models, which make use of structural information, on relatively small data and applied them to new data to produce a much larger automatically labeled dataset. Our experiments show that NNs trained on the automatic data improve their accuracy. We may speculate that NNs learn relational structural information, since (i) TK models mainly use syntactic structures to label data, and (ii) other advanced models based on similarity feature vectors do not produce any improvement. Indeed, the latter only exploit lexical similarity measures, which are typically also generated by NNs. However, even if our conjecture were wrong, the bottom line would be that, thanks to our approach, we can obtain NN models comparable to TK-based approaches, while avoiding syntactic parsing and expensive TK processing at deployment time.

Figure 1: Impact of the pre-training data.
Table 1 reports our different models, FV, TK, CNN, and LSTM, described in the previous section, where the suffix -10k or -5k indicates the amount of GS data used to train them.

Table 1: Accuracy on the Quora dataset.

Model            | Automatic data | GS data | DEV    | TEST
FV-10k           | -              | 10k     | 0.7046 | 0.7023
TK-10k           | -              | 10k     | 0.7405 | 0.7337
CNN-10k          | -              | 10k     | 0.7646 | 0.7569
LSTM-10k         | -              | 10k     | 0.7521 | 0.7450
CNN(CNN-10k)     | 50k            | -       | 0.7666 | 0.7619
CNN(CNN-10k)*    | 50k            | 10k     | 0.7601 | 0.7598
CNN(FV-10k)      | 50k            | -       | 0.6960 | 0.6931
CNN(FV-10k)*     | 50k            | 10k     | 0.7681 | 0.7565
CNN(TK-10k)      | 50k            | -       | 0.7446 | 0.7370
CNN(TK-10k)*     | 50k            | 10k     | 0.7748 | 0.7652
LSTM(TK-10k)     | 50k            | -       | 0.7478 | 0.7371
LSTM(TK-10k)*    | 50k            | 10k     | 0.7706 | 0.7505
TK-5k            | -              | 5k      | 0.6859 | 0.6774
CNN-5k           | -              | 5k      | 0.7532 | 0.7450
CNN(TK-5k)       | 50k            | -       | 0.7239 | 0.7208
CNN(TK-5k)*      | 50k            | 5k      | 0.7574 | 0.7493
CNN(TK-10k)      | 375k           | -       | 0.7524 | 0.7471
CNN(TK-10k)*     | 375k           | 10k     | 0.7796 | 0.7728
Voting(TK+CNN)   | -              | 10k     | 0.7838 | 0.7792

Footnotes:
[1] https://www.kaggle.com/c/quora-question-pairs
[2] SQuAD by Rajpurkar et al. (2016) is an exception, also possible because it deals with a simpler factoid QA task.
[3] http://www.qatarliving.com/forum
[4] Note that we will release the 400k automatically labelled pairs from Quora, as well as the new 93k pairs of QL, along with their automatic labels, for research purposes.
[5] Embeddings are available in the repository: https://github.com/aseveryn/deep-qa

References

Eugene Agichtein, David Carmel, Dan Pelleg, Yuval Pinter, and Donna Harman. 2015. Overview of the TREC 2015 LiveQA Track. In TREC.

Eugene Agichtein, David Carmel, Dan Pelleg, Yuval Pinter, and Donna K. Harman. 2016. Overview of the TREC 2016 LiveQA Track. In TREC.
Lloyd Allison and Trevor Dix. 1986. A bit-string longest-common-subsequence algorithm. Information Processing Letters, 23(6):305-310.

Alberto Barrón-Cedeño, Daniele Bonadiman, Giovanni Da San Martino, Shafiq Joty, Alessandro Moschitti, Fahad A. Al Obaidli, Salvatore Romeo, Kateryna Tymoshenko, and Antonio Uva. 2016. ConvKN at SemEval-2016 Task 3: Answer and question selection for question answering on Arabic and English fora. In Proceedings of SemEval, pages 896-903.

Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 92-100. ACM.

Xin Cao, Gao Cong, Bin Cui, Christian Søndergaard Jensen, and Ce Zhang. 2009. The use of categorization information in language models for question retrieval. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, pages 265-274. ACM.
Danilo Croce, Simone Filice, Giuseppe Castellucci, and Roberto Basili. 2017. Deep learning in semantic kernel spaces. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 345-354.

Arpita Das, Harish Yenala, Manoj Chinnakotla, and Manish Shrivastava. 2016. Together we stand: Siamese networks for similar question retrieval. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 378-387, Berlin, Germany. Association for Computational Linguistics.

Huizhong Duan, Yunbo Cao, Chin-Yew Lin, and Yong Yu. 2008. Searching questions by identifying question topic and question focus. In Proceedings of ACL-08: HLT, pages 156-164, Columbus, Ohio. Association for Computational Linguistics.
Simone Filice, Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2016. KeLP at SemEval-2016 Task 3: Learning Semantic Relations between Questions and Answers. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1116-1123. Association for Computational Linguistics.

Simone Filice, Giovanni Da San Martino, and Alessandro Moschitti. 2017. KeLP at SemEval-2017 Task 3: Learning Pairwise Patterns in Community Question Answering. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 326-333. Association for Computational Linguistics.

Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. 2013. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735-1780.
Harnessing deep neu- ral networks with logic rules. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 2410-2420. Association for Computational Linguis- tics. Étude comparative de la distribution florale dans une portion des Alpes et des Jura. Paul Jaccard, Bulletin del la Société Vaudoise des Sciences Naturelles. Paul Jaccard. 1901.Étude comparative de la distribu- tion florale dans une portion des Alpes et des Jura. Bulletin del la Société Vaudoise des Sciences Na- turelles. Finding similar questions in large question and answer archives. Jiwoon Jeon, Bruce Croft, Joon Ho Lee, Proceedings of the 14th ACM international conference on Information and knowledge management. the 14th ACM international conference on Information and knowledge managementACMJiwoon Jeon, W Bruce Croft, and Joon Ho Lee. 2005. Finding similar questions in large question and an- swer archives. In Proceedings of the 14th ACM in- ternational conference on Information and knowl- edge management, pages 84-90. ACM. Question-answer topic model for question retrieval in community question answering. Zongcheng Ji, Fei Xu, Bin Wang, Ben He, Proceedings of the 21st ACM international conference on Information and knowledge management. the 21st ACM international conference on Information and knowledge managementACMZongcheng Ji, Fei Xu, Bin Wang, and Ben He. 2012. Question-answer topic model for question retrieval in community question answering. In Proceedings of the 21st ACM international conference on Infor- mation and knowledge management, pages 2471- 2474. ACM. Jiwei Li, Minh-Thang Luong, Dan Jurafsky, Eudard Hovy, arXiv:1503.00185When are tree structures necessary for deep learning of representations? arXiv preprint. Jiwei Li, Minh-Thang Luong, Dan Jurafsky, and Eu- dard Hovy. 2015. When are tree structures necessary for deep learning of representations? arXiv preprint arXiv:1503.00185. 
Detecting short passages of similar text in large document collections. Caroline Lyon, James Malcolm, Bob Dickerson, Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing. the 2001 Conference on Empirical Methods in Natural Language ProcessingPittsburgh, PA, USACaroline Lyon, James Malcolm, and Bob Dickerson. 2001. Detecting short passages of similar text in large document collections. In Proceedings of the 2001 Conference on Empirical Methods in Natu- ral Language Processing, EMNLP, pages 118-125, Pittsburgh, PA, USA. Distributed representations of words and phrases and their compositionality. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, Jeff Dean, Advances in Neural Information Processing Systems 26. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems 26. Semeval-2017 task 3: Community question answering. Preslav Nakov, Doris Hoogeveen, Lluís Màrquez, Alessandro Moschitti, Hamdy Mubarak, Timothy Baldwin, Karin Verspoor, 10.18653/v1/S17-2003Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). the 11th International Workshop on Semantic Evaluation (SemEval-2017)Association for Computational LinguisticsPreslav Nakov, Doris Hoogeveen, Lluís Màrquez, Alessandro Moschitti, Hamdy Mubarak, Timothy Baldwin, and Karin Verspoor. 2017. Semeval-2017 task 3: Community question answering. In Proceed- ings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 27-48. Associa- tion for Computational Linguistics. Hamdy Mubarak, abed Alhakim Freihat, Jim Glass, and Bilal Randeree. Preslav Nakov, Lluís Màrquez, Alessandro Moschitti, Walid Magdy, 10.18653/v1/S16-1083Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). 
the 10th International Workshop on Semantic Evaluation (SemEval-2016)Association for Computational LinguisticsSemeval-2016 task 3: Community question answeringPreslav Nakov, Lluís Màrquez, Alessandro Moschitti, Walid Magdy, Hamdy Mubarak, abed Alhakim Frei- hat, Jim Glass, and Bilal Randeree. 2016. Semeval- 2016 task 3: Community question answering. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 525- 545. Association for Computational Linguistics. Squad: 100,000+ questions for machine comprehension of text. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang, 10.18653/v1/D16-1264Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsPranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, pages 2383-2392. Asso- ciation for Computational Linguistics. Modeling relational information in question-answer pairs with convolutional neural networks. Aliaksei Severyn, Alessandro Moschitti, arXiv:1604.01178arXiv preprintAliaksei Severyn and Alessandro Moschitti. 2016. Modeling relational information in question-answer pairs with convolutional neural networks. arXiv preprint arXiv:1604.01178. Recursive deep models for semantic compositionality over a sentiment treebank. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, D Christopher, Andrew Manning, Christopher Ng, Potts, Proceedings of the 2013 conference on empirical methods in natural language processing. the 2013 conference on empirical methods in natural language processingRichard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. 
Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642. Improved semantic representations from tree-structured long short-term memory networks. Kai Sheng Tai, Richard Socher, Christopher D Manning, arXiv:1503.00075arXiv preprintKai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. arXiv preprint arXiv:1503.00075. A syntactic tree matching approach to finding similar questions in community-based qa services. Kai Wang, Zhaoyan Ming, Tat-Seng Chua, Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval. the 32nd international ACM SIGIR conference on Research and development in information retrievalACMKai Wang, Zhaoyan Ming, and Tat-Seng Chua. 2009. A syntactic tree matching approach to finding sim- ilar questions in community-based qa services. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in infor- mation retrieval, pages 187-194. ACM. Bilateral multi-perspective matching for natural language sentences. Zhiguo Wang, Wael Hamza, Radu Florian, arXiv:1702.03814arXiv preprintZhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural lan- guage sentences. arXiv preprint arXiv:1702.03814. Yap3: Improved detection of similarities in computer program and other texts. J Michael, Wise, ACM SIGCSE Bulletin. ACM28Michael J Wise. 1996. Yap3: Improved detection of similarities in computer program and other texts. In ACM SIGCSE Bulletin, volume 28, pages 130-134. ACM. Question retrieval with high quality answers in community question answering. Kai Zhang, Wei Wu, Haocheng Wu, Zhoujun Li, Ming Zhou, Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management. 
the 23rd ACM International Conference on Conference on Information and Knowledge ManagementACMKai Zhang, Wei Wu, Haocheng Wu, Zhoujun Li, and Ming Zhou. 2014. Question retrieval with high qual- ity answers in community question answering. In Proceedings of the 23rd ACM International Confer- ence on Conference on Information and Knowledge Management, pages 371-380. ACM. Phrase-based translation model for question retrieval in community question answer archives. Guangyou Zhou, Li Cai, Jun Zhao, Kang Liu, Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies1Association for Computational LinguisticsGuangyou Zhou, Li Cai, Jun Zhao, and Kang Liu. 2011. Phrase-based translation model for question retrieval in community question answer archives. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies-Volume 1, pages 653-662. As- sociation for Computational Linguistics.
The Devil is in the Upsampling: Architectural Decisions Made Simpler for Denoising with Deep Image Prior

Yilin Liu, Jiang Li ([email protected]), Yunkui Pang, Dong Nie ([email protected]), Pew-Thian Yap ([email protected])
UNC Chapel Hill

Abstract

Deep Image Prior (DIP) shows that some network architectures naturally bias towards smooth images and resist noise, a phenomenon known as spectral bias. Image denoising is an immediate application of this property. Although DIP has removed the requirement of large training sets, it still presents two practical challenges for denoising: architectural design and noise-fitting, which are often intertwined. Existing methods mostly handcraft or search for the architecture from a large design space, owing to a lack of understanding of how the architectural choice corresponds to the image. In this study, we analyze from a frequency perspective to demonstrate that the unlearnt upsampling is the main driving force behind the denoising phenomenon in DIP. This finding then leads to strategies for estimating a suitable architecture for every image without a laborious search. Extensive experiments show that the estimated architectures denoise and preserve the textural details better than current methods with up to 95% fewer parameters. The under-parameterized nature also makes them especially robust to a higher level of noise.

Introduction

Image denoising aims to obtain a clean counterpart from a noisy image. It is not only useful on its own but also serves as a plug-in module for many other image restoration tasks [32,33,3]. Deep neural networks have become the tool of choice for image denoising due to their ability to learn natural image priors from large-scale datasets.
Yet, Deep Image Prior (DIP) [29] shows that a randomly initialized convolutional neural network (CNN) can regularize image restoration through its architecture and early-stopping optimization. Unlike conventional deep learning approaches that rely on large databases, DIP requires only a single degraded image. This is inspired by the phenomenon that some network architectures inherently favor generating smooth, natural images and resisting noises or degradations, acting as implicit image priors. (* These authors contribute equally.)

[Figure 1: Ours is more robust under the higher noise level. Bottom: Denoising results on an extremely fine-grained (1st row) and a coarse-grained image (2nd row). Most methods do not perform well on these two cases at the same time, including the recent image-specific method (ISNAS-DIP [1]). Our strategy versatilely adapts the architecture to the image without searching. Besides, the results of the lightweight ConvDecoder [8] and Deep Decoder [13] suggest that without proper model setups, under-parameterization itself can neither ensure good denoising performance nor remove the need for early stopping.]

While being a direct application of this property, image denoising by DIP still faces practical challenges from architectural design, which has been shown to be influential [2,1,29,13,15], and noise-fitting (i.e., overfitting), which has also been associated with the architecture [13,14].

Architectural design for DIP remains an open problem. One prevailing view is that making the model under-parameterized limits its capability of fitting the noise and thus avoids the need for early stopping [13]. However, our experiments demonstrate that, under a similar parameter budget, there exist multiple possibilities where an inappropriate model setup can still lead to noise-fitting (Fig. 1) or over-smoothing. Another line of work automates the architecture search using Neural Architecture Search (NAS) techniques [5,15,1].
Without prior knowledge of suitable architectures, the search is extensive and incurs substantial computational costs, prohibiting image-wise NAS and yielding sub-optimal restoration on certain images [1]. Arican et al. [1] narrow the search space using training-free metrics, but the comparison of candidates dramatically prolongs the restoration time (around 7 hours per image). Moreover, NAS-based models, along with many other DIP models, are typically heavily parameterized and easily suffer from over-fitting, as shown in our experiments. Thus, their successes are conditioned on good timing for early stopping, which typically varies across images and is hard to track without access to ground truth.

To improve the practicality of DIP, we ask: can we directly identify and train an effective under-parameterized architecture for each image? This is particularly challenging, as there are a vast number of candidate architectures to consider and no ground truth available for explicit supervision. Towards this aim, we study and rethink the architectural influences on the performance of DIP in the context of image denoising. We start by noting that only a few components in the CNN are responsible for the denoising effect, among which the unlearnt upsampling operations play a primary role. Our analysis from a frequency perspective reveals that the fixed upsampling operations tend to bias the architecture towards low-frequency content more strongly than linear or convolutional layers, critically influencing both the peak PSNR and the timing of noise-fitting.
Importantly, this finding leads us to discover the roles of typical architectural properties in DIP empirically: assuming a standard hourglass network, simply scaling the depth and width can balance smoothing and preservation of details, owing to the low-pass filtering effects of the upsampling operations inserted between the layers; skip connections make a deep network perform similarly to a shallower one, likely by reducing the "effective down-/upsampling rate". The latter implies the possibility of dispensing with the complicated skip connections, which is key to simplifying DIP architectural design. We show that these findings also hold for decoder-only architectures as long as the fixed upsampling operations are present. Moreover, we observed correlations between the architectural properties and image texture, e.g., a fine-grained image tends to require a wider and shallower network. This also suggests that fixing the architecture for all images is inherently not optimal.

Based on these insights, we find it sufficient to restrict the design choices to only the depth and width, which reduces the space to only dozens of sub-networks, and to estimate them for each image according to the complexity of its texture. This can be done simply as a pre-processing step without any costly searching or evaluation. We show that this simple strategy works with both the hourglass and decoder structures, and that with proper setups, the estimated networks can denoise while preserving details better than the larger counterparts and other current methods with 60%-95% fewer parameters. They are also more robust under higher-level noise.

Our contributions are as follows:

• Building on previous findings that DIP is a result of spectral bias [25,22], we pinpoint that unlearnt upsampling is the main driving force behind this bias.
• Leveraging this finding, we empirically identify the influences of depth, width and skip connections, along with their correlations with image texture, allowing for quick, effective and more interpretable architectural design for every image without a laborious search.

• We are the first to associate DIP architectural design with image texture for more effective denoising. To encourage future research, we build a Texture-DIP Dataset consisting of images from three popular datasets, re-classified into several predefined width choices (validated through experiments) according to the complexity of image texture.

• We show that with proper setups, a highly under-parameterized subnetwork can match and even outperform its larger counterpart, especially under a higher noise level.

We conducted extensive synthetic and real-world noise removal experiments to validate our findings and approach.

Related Work

DIP Variants. Deep Decoder [13] is an under-parameterized network proposed to avoid early stopping. However, our investigations show that under-parameterization alone is not sufficient for denoising. For instance, ConvDecoder [8], a convolutional variant of Deep Decoder, contains more parameters yet demonstrates less tendency to over-fit (Fig. 1). In contrast to NAS-based methods such as [5,15,1], our strategy leverages the observed relationship between the architecture and the image to avoid an exhaustive search. Furthermore, DIP-RED [20] and DIP-TV [17] augment DIP with additional priors.

Early-stopping criterion. To alleviate the performance decay caused by noise-fitting, Shi et al. [25] regularize the norms of the network weights. However, tuning the regularization granularity can be challenging, as it likely depends on the image and noise level (see the appendix for comparisons). Jo et al. [16] combine DIP with Stein's unbiased risk estimator (SURE) [28] for training without clean images, but SURE is limited to only a few known noise types [27,19]. Wang et al.
[30] propose to track the running variance of the outputs, but this also introduces new hyper-parameters that require non-trivial tuning.

Investigation and Method

Preliminaries

Deep Image Prior. Let us model a noisy image y ∈ R^N as y = x + n, where x ∈ R^N is the clean counterpart to be recovered and n is assumed to be i.i.d. Gaussian noise drawn from N(0, σ²I), with I the identity matrix. DIP parameterizes the clean image x via a network G_θ and optimizes it to fit the noisy image y, formulated as:

θ* = arg min_θ L(y; G_θ(z)),  x* = G_{θ*}(z).  (1)

This parameterization allows lower frequencies to be fitted prior to the higher frequencies [2,25], exhibiting high impedance to image noises or degradations, but over-parameterization would eventually allow the noise to be fitted, and hence early stopping is often required.

Image complexity. The ideal DIP architecture should denoise an image while preserving its textural details. We define image complexity by its texture, which can be characterized by the power spectral density (PSD). The spectral power of natural images typically follows an exponential decay from low to high frequencies [26]. High-frequency components correspond to small-scale features such as details, while low-frequency components correspond to large spatial structures. Hence, fine-grained images contain more high frequencies than coarse-grained images, manifesting as a flatter curve in Fig. 2. We score each image based on its texture features, as further described in Sec. 3.5.

The importance of upsampling

We first investigate the core architectural components that affect the denoising performance of DIP. To this end, we analyze a decoder-only architecture obtained by removing the encoder from the typical encoder-decoder model [29], since a decoder is the minimum requirement for reconstructing the final image.

[Figure 3: Influences of the architecture components on image denoising. (a) When spatial kernels are absent, upsampling still enables denoising, but transposed convolutions lead to noise-fitting, and thus a performance drop, sooner than bilinear upsampling. Removing the upsampling layers results in loss of the denoising capability, which cannot be compensated by simply reducing the number of layers (i.e., parameters). (b) Convolutional layers with spatial kernels alone exhibit certain denoising effects but necessitate early stopping. Increasing the number of convolutional layers achieves a higher peak PSNR at the expense of earlier noise-fitting (see red arrows). Better results are achieved when combined with upsampling layers.]

Our base model for analysis is the 6-layered convolutional decoder (ConvDecoder [8]) with 128 3×3 filters per layer except for the last regression layer, each followed by batch normalization, a ReLU activation function and upsampling. We further simplify the setup by replacing the spatial filters with pixel-wise 1×1 filters, constructing a non-convolutional variant (MLP-Decoder). The results of the MLP-Decoder shown in Fig. 3(a) suggest that, under this modest setting, upsampling plays a vital role: removing it results in loss of the denoising effects, which cannot be compensated by simply reducing the size of the network (i.e., under-parameterization). This holds true for other tasks such as super-resolution, as shown in the appendix. Fig. 3(b) shows that spatial filters alone also enable denoising, in contrast to pixel-wise filters, re-affirming the frequency bias of convolutional layers observed in [2]; but the effect vanishes as the network size increases, and noise-fitting occurs easily. Spatial filters together with upsampling achieve better results, as manifested in the higher peak PSNR and longer-lasting denoising effects.

Discussion.
These results suggest that an appropriate upsampling operation is crucial for enabling effective network image priors, and that under-parameterization alone is not a sufficient condition. It is worth noting that different upsampling operations induce varying extents of denoising effects: transposed convolutions [21] tend to fit noise faster than bilinear upsampling, necessitating early stopping. In the next section, we investigate the behaviors of these upsampling operations from a signal-processing perspective, to gain insight into their influences on denoising performance and the timing of noise-fitting.

Spectral effects of upsampling

Compared to the diverse network structures, the choice of the upsampling operation is typically consistent: bilinear or nearest-neighbor (NN) interpolation, or transposed convolutions. These upsampling operations can all be decomposed into two steps: (i) zero-insertion and (ii) filtering. Given a target upsampling factor R, the low-resolution feature map is first interleaved with (R−1) rows/columns of zeros to increase its sampling rate, and then convolved with a low-pass filter (LPF) to remove the alias high frequencies introduced by zero-insertion. The key difference between the upsampling operations lies in the nature of the filters used: in transposed convolution the filters are learnable, while in bilinear and nearest-neighbor upsampling they are fixed.

To see this, consider a 1D signal x(n) with discrete Fourier representation X(k) = Σ_{n=0}^{N−1} x(n) e^{−i2πkn/N}, k = 0, ..., N−1. For an upsampling factor of 2, we have:

X_up(k̃) = Σ_{n=0}^{2N−1} x_up(n) e^{−i2πk̃n/(2N)}    (2)
         = Σ_{n=0}^{N−1} x_up(2n) e^{−i2πk̃(2n)/(2N)} + Σ_{n=0}^{N−1} x_up(2n+1) e^{−i2πk̃(2n+1)/(2N)},    (3)

for k̃ = 0, ..., 2N−1, where x_up(2n) = x(n) and x_up(2n+1) = 0 due to the interleaved zero-insertion. Hence, for 0 ≤ k̃ < N, X_up(k̃) = X(k̃).
For k̃ ≥ N, let k′ = k̃ − N, k′ = 0, ..., N−1:

X_up(k̃) = Σ_{n=0}^{N−1} x(n) e^{−i2π(k′+N)(2n)/(2N)}    (5)
         = Σ_{n=0}^{N−1} x(n) e^{−i2πnk′/N} e^{−i2πn} = X(k′).    (6)

Thus, zero-insertion preserves the original spectrum on [0, N) (the passband) and additionally replicates a (mirrored) copy of the original spectrum on [N, 2N−1] (the stopband), i.e., a high-frequency replica, which should be suppressed by the subsequent LPF. Convolving with the filter of NN or bilinear interpolation is equivalent to multiplying X_up(k) by a sinc or sinc² function, i.e., low-pass filtering, albeit not ideal; the learnable filter in a transposed convolution, by contrast, need not be a low-pass filter, as it depends on the optimization objective.

We experimentally demonstrate that different upsampling operations bias the architecture towards different spectral properties. Specifically, we constructed four upsamplers by first interleaving the input with zeros and then convolving it with handcrafted LPFs: L14, L15, L−60 and L−100, with the subscript denoting the attenuation in dB. By construction, L14 and L15 are very close to NN in the passband (< 0.03 dB) and differ only in the stopband. The frequency responses of the compared LPFs are detailed in Fig. 4. We applied the customized upsamplers to ConvDecoder and tested on the fine- and coarse-textured images from Set9, respectively. The same findings also hold for the encoder-decoder architecture (see the appendix).

From Fig. 4, upsampling critically influences both the peak PSNR value and the timing of early stopping with respect to images of different texture complexities. Upsamplers with less attenuation (NN, L14, L15) are beneficial for generating high-frequency-abundant (i.e., fine-grained) images, but they easily cause noise-fitting, especially on the coarser-grained ones. They are also generally the fastest to reach the peak PSNR (Table 1).
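The two-step decomposition above is easy to check numerically. The following NumPy sketch (illustrative, not code from the paper) verifies the spectral replication caused by zero-insertion for R = 2, then compares the stopband attenuation of the fixed kernels commonly attributed to NN interpolation ([1, 1], a sinc in frequency) and bilinear interpolation ([0.5, 1, 0.5], a sinc²):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)

# Step (i): zero-insertion -- interleave one zero after every sample (R = 2).
x_up = np.zeros(2 * N)
x_up[::2] = x

# The 2N-point spectrum of the zero-inserted signal consists of two copies of
# the original N-point spectrum: X_up(k) = X(k mod N).
X = np.fft.fft(x)
X_up = np.fft.fft(x_up)
assert np.allclose(X_up[:N], X)   # passband copy
assert np.allclose(X_up[N:], X)   # high-frequency replica in the stopband

# Step (ii): filtering. Compare the magnitude responses of the two fixed
# kernels at a frequency inside the replica band.
def magnitude_response(h, w):
    n = np.arange(len(h))
    return np.abs(np.sum(h * np.exp(-1j * w * n)))

w_stop = 0.75 * np.pi
nn = magnitude_response(np.array([1.0, 1.0]), w_stop)        # 2|cos(w/2)|
bilinear = magnitude_response(np.array([0.5, 1.0, 0.5]), w_stop)  # 1 + cos(w)
assert bilinear < nn   # bilinear attenuates the replica more strongly
```

Consistent with the analysis, the bilinear kernel suppresses the replicated high frequencies more than the NN kernel, matching its stronger bias towards smooth outputs.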
This explains why transposed convolution requires early stopping: its filters may not learn to attenuate the introduced high frequencies effectively. Similar frequency issues with learned upsampling are also prevalent in generative models [24,4,9,10], e.g., checkerboard artifacts. On the other hand, the bilinear and L−60 upsamplers exhibit a stronger bias towards lower frequencies through greater attenuation, leading to longer-lasting denoising effects. They turn out to work sufficiently well for both kinds of images, especially the coarser-grained ones, which are typically the majority in a dataset. L−100 over-smooths both types of images and performs the worst, as it attenuates the signal substantially, though it does not require early stopping.

Discussion. These results lead us to conclude that upsamplers with fixed LPFs are key to the denoising effects of DIP: their tendency towards smooth images (i.e., fewer high-frequency contents) aligns well with the spectral statistics of natural images (see Fig. 2). Probably owing to a good balance between denoising performance and prolonged denoising effects, bilinear upsampling has been widely adopted in DIP models for various applications [29,13,11,14].

Interactions with other architecture elements

After establishing the significance of upsampling, we now consider how it interacts with other common architectural elements to shape the final output.

Convolutions + non-linearity. Ideal upsampling does not modify the signal representation but only expands the spectrum for the subsequent layers to add new content. Convolution followed by a non-linearity, e.g., ReLU, is the only operation capable of introducing arbitrarily high frequencies. Increasing the number of layers (depth) or channels (width) enhances the capability of generating new high frequencies, as theoretically and empirically shown in [22]. Intuitively, using an excessive number of layers or channels w.r.t.
the input image can accelerate the learning of both details and noise. However, these effects are attenuated by the fixed upsampling operations placed between the layers. As shown in Fig. 5, when fewer upsampling operations are used (i.e., a shallower network), increasing the width can easily cause over-fitting on simpler images while benefiting more complex ones. Increasing the number of upsampling operations (i.e., a deeper network) alleviates the over-fitting issue (stronger attenuation) but yields blurry outputs for fine-grained images. Increasing only the upsampling factors without adding more layers makes the output even blurrier (Fig. 6). In other words, the final output is determined by the balance between the generation of high frequencies by the layers and the signal attenuation caused by the upsampling operations.

Skip connections between the encoder and the decoder often complicate the search space [5]. While they are not directly responsible for denoising, we have found that they may reduce the "effective downsampling/upsampling rate", making deeper networks perform similarly to shallower ones in terms of the PSNR/SSIM score. This finding is based on a large-scale experiment comprising 7329 architectures in total, as detailed in the appendix. The result is particularly surprising, as skip connections significantly improve the same deep network that otherwise over-smooths details, as shown in Fig. 5(b). Our qualitative results in the experimental section also support this claim. Overall, skip connections influence deeper networks more than shallower ones, manifested as the smaller deviation when depth ≤ 3 in Fig. 5(b).

Practical application to architectural design

Based on the findings and analysis above, we argue that it is possible to estimate an effective architecture for each image without an extensive search.
Assuming every decoder layer is followed by a 2× bilinear upsampling layer, we present our strategies as follows.

Image scoring. To classify image texture more robustly, we extracted both spatial and frequency features and trained a decision tree [18] for feature selection. The most useful features, derived from the Gray Level Co-occurrence Matrix (GLCM) [12] of each image, turn out to be the following: correlation measured at 0°, homogeneity at 45°, and contrast at 0°, giving a total of 3 spatial features per image. The frequency feature is the 1D PSD, obtained by first converting the 2D PSD from Cartesian to polar coordinates and then azimuthally averaging over θ.

Depth estimation. We have seen that increasing the depth tends to over-smooth the output; while this does not much affect coarse-grained images (nor benefit them), it hurts fine-grained ones. Hence, for fine-grained images, we have three options: (a) add more skip connections to a deep network; (b) simply use a shallower one; (c) keep all the layers but reduce the number of down-/up-sampling layers. We recommend (c) for decoder-only architectures, since they are already under-parameterized; for an hourglass network it can easily lead to over-fitting (Fig. 6). We find that (b) generally achieves a better trade-off than (a) between good performance and the need for early stopping, especially under higher-level noise, as shown in our results section. This also holds for coarse-grained images, as they are not sensitive to depth. More specifically, we find a 2-level hourglass network sufficient for both types of images (details in the appendix).

Width estimation. Width is crucial for learning sufficient detail while avoiding over-fitting, especially in a shallow network (less attenuation). We find that width correlates with the complexity of image texture: a finer-grained image requires more channels per layer, and vice versa.
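The azimuthally averaged 1D PSD used for image scoring above can be computed with a short NumPy routine. The sketch below is illustrative (the integer-radius binning and normalization are our assumptions, not the paper's exact recipe), and demonstrates on synthetic images why fine-grained textures yield a flatter radial PSD:

```python
import numpy as np

def radial_psd(img):
    """Azimuthally averaged 1D power spectral density of a 2D image."""
    h, w = img.shape
    psd2d = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)  # integer radius bin
    # Average the 2D PSD over all frequency samples at the same radius.
    sums = np.bincount(r.ravel(), weights=psd2d.ravel())
    counts = np.maximum(np.bincount(r.ravel()), 1)
    return sums / counts

rng = np.random.default_rng(0)
size = 64
yy, xx = np.indices((size, size))
coarse = (xx + yy) / (2.0 * size)          # smooth gradient: mostly low frequencies
fine = rng.standard_normal((size, size))   # white noise: near-flat spectrum

psd_coarse = radial_psd(coarse)
psd_fine = radial_psd(fine)

# A fine-grained image keeps relatively more high-frequency power, so its
# radial PSD decays less from low to mid frequencies (a flatter curve).
decay_coarse = psd_coarse[1] / psd_coarse[size // 4]
decay_fine = psd_fine[1] / psd_fine[size // 4]
assert decay_fine < decay_coarse
```

The decay of this 1D curve is exactly the "flatness" cue the paper associates with texture complexity.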
To further validate this, we treat width estimation as a classification problem and train three SVMs [6] with texture features as inputs to classify the images into three width choices {32, 64, 128}. Note that these widths were empirically chosen for the datasets used in this study and are by no means optimal for all cases, but tuning for other images should be straightforward. The classification results and analysis are in Sec. 4.4.

Figure 7: Denoising results on a fine-grained image ("kodim01" from Set9) with Gaussian noise (σ = 25). Our estimated architecture for this image is a two-level hourglass network with one skip connection and 128 channels, which is much smaller than the others.

Implementation Details. We conducted the experiments on three popular datasets and a real-world noisy dataset: Set9 [7], consisting of 9 colored images; Set12 [32], consisting of 12 grey-scaled images; CBSD68 [23], consisting of 68 colored images; and PolyU [31], consisting of 100 real noisy and clean image pairs. We first report our results with a 2-level hourglass architecture with the same components as in DIP [29] when comparing with the existing methods, and then extend the strategy to ConvDecoder [8], a decoder-only architecture. All models were trained for 3000 iterations.

Comparisons with DIP variants. Gaussian Noise. Table 2 summarizes the numerical results. Since the base network we estimate the architectures from is DIP, the direct comparison with it demonstrates that our properly designed under-parameterized networks perform on par with the larger counterpart at a mild noise level while outperforming it at a higher noise level. This cannot be explained by under-parameterization alone, since Deep Decoder and ConvDecoder contain similar or even fewer parameters yet are unable to achieve similar results. Fig. 7 shows that a shallow and wide network can preserve the details better than many deeper ones.
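The texture-to-width mapping described above can be illustrated with a toy classifier. The paper trains SVMs [6] on GLCM and PSD features; here, to keep the sketch dependency-free, a nearest-centroid rule over synthetic features stands in for the SVMs, and the 4-D feature vector, the `fineness` knob, and all numeric values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
WIDTHS = np.array([32, 64, 128])   # the paper's three width choices

def synth_features(fineness, n):
    """Hypothetical 4-D texture feature: [correlation, homogeneity, contrast, HF power].

    Finer-grained textures get lower correlation/homogeneity, higher contrast
    and high-frequency power; small jitter simulates per-image variation."""
    base = np.array([0.9 - 0.3 * fineness, 0.8 - 0.2 * fineness,
                     0.1 + 0.5 * fineness, fineness])
    return base + 0.02 * rng.standard_normal((n, 4))

# Coarse / medium / fine training images mapped to widths 32 / 64 / 128.
X_train = np.vstack([synth_features(f, 30) for f in (0.1, 0.5, 0.9)])
y_train = np.repeat(WIDTHS, 30)
centroids = np.vstack([X_train[y_train == w].mean(axis=0) for w in WIDTHS])

def predict_width(x):
    """Assign the width whose class centroid is nearest in feature space."""
    return int(WIDTHS[np.argmin(np.linalg.norm(centroids - x, axis=1))])
```

Under this toy setup, a very fine-grained feature vector lands on the widest class and a coarse one on the narrowest, mirroring the "finer texture needs more channels" observation.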
Note that DIP is a 5-level hourglass network with full skip connections, which by our standard can also preserve the details well. In fact, it serves as a very strong baseline under mild noise. NAS-DIP and ISNAS-DIP suffer from varying extents of over-fitting on different datasets. This also suggests that optimal stopping points vary across images. Besides, they are time-intensive in either searching or testing (Table 3). Similar conclusions hold for Poisson noise, which was tested on Set9 [7] as shown in Table 4.

Real-World Noise. We additionally evaluate all methods on the PolyU dataset [31]. We again apply the two-level hourglass architectures with variable width estimated for each image. Table 5 summarizes the numerical results. Fig. 9 shows that our method tends to preserve more details, though this may not be reflected in the metrics. Simply removing the skip connections from DIP makes the result blurrier, similar to the decoder-only architectures.

Extend to Decoder-only Architecture. We applied our strategy to ConvDecoder and tested it on all three datasets. Here we keep all 5 layers but remove a certain number of upsampling layers and scale the width accordingly for each image. These simple changes effectively alleviate the over-smoothing issue that often comes with ConvDecoder, as shown in Fig. 8, and improve the quantitative results, as shown in Table 6.

More Analysis on Depth and Width. As a practical tip, we find it more efficient to first determine the depth, and then the width. This is because when the network is deep enough (strong attenuation due to upsampling), width becomes less influential, as evident in Fig. 10 (b) and the small deviation in Fig. 5 (b). However, to relax the need for early stopping under a higher noise level, one may prefer a shallower network, where width matters (Fig. 10 (a)). In this regard, we use the texture features of the images to predict the desired width.
This makes intuitive sense, as noise-fitting is associated with the amount of high frequencies in the image, which is manifested in its texture. The 0.86 AUC score shown in Fig. 11 (a) also suggests such a correlation. The "optimal" width labels we used to train the classifiers were obtained by experimenting with all three width choices on a 2-level hourglass architecture. These width labels are also applicable to decoder architectures, as demonstrated by our experiments. Although the choices seem limited, we found the images robust to the choice of width to some extent, and simple averaging also works well for ambiguous cases (please refer to the appendix). In fact, some images have multiple width labels. We will release two versions of our Texture-DIP datasets.

Conclusion and Future Work

In this work, we present a surprisingly efficient solution to the two open challenges of DIP regarding architectural choice and early stopping in the context of image denoising. By analyzing the spectral effects of upsampling and its interactions with other architectural components, we show that simple architectural changes enable highly effective under-parameterized networks that can surpass their larger counterparts and do not critically rely on early stopping. The understanding of the influences of upsampling may reveal the architectural characteristics of other restoration tasks such as inpainting. We leave this for future work. We hope our study can encourage efficient architectural design for DIP and image synthesis in general.

Figure 1: Top: SSIM (↑) values versus training iterations under different levels of Gaussian noise.

Figure 2: Examples of the spectral density of natural images.

Figure 4: (Left) Frequency responses of the tested LPFs. Different LPFs result in upsamplers with different extents of smoothing.
NN interpolation preserves most signals in the passband but also the high-frequency replica in the stopband; L−100 attenuates the signals most significantly (∼100 dB) and suppresses the high-frequency replica most. (Right) Denoising results on coarse- and fine-textured images from Set9. Top rows: peak PSNR values. Bottom rows: PSNR values at the last training iteration.

Figure 5: Influences of width, depth, and skip connections, assuming upsampling is inserted in between the layers. (a) Increased width more easily over-fits the low-frequency-dominated (coarse-grained) images. SSIM (↑) scores averaged across the depths. (b) Increased depth tends to over-smooth especially the high-frequency-abundant (fine-grained) images, but skip connections alleviate this issue. Scores are averaged across the widths.

Figure 6: Varying position of the upsampling vs. SSIM. Placing the upsampling close to the input (or encoder) can easily cause over-fitting, regardless of the scaling factor. When close to the output (end of the decoder), upsampling with large scaling factors (e.g., 32×) causes over-smoothing (i.e., lower SSIM scores).

Figure 8: Denoising results on a real-world noisy image.

Figure 9: Qualitative improvement on ConvDecoder, obtained simply by removing three upsampling layers in this case.

Figure 10: (a) Width critically influences a shallower network, while (b) it rarely has any impact on a sufficiently deep network.

Figure 11: (a) ROC curves with AUC scores for width classification on images from Set9, Set12, and CBSD68. 5-fold cross-validation is performed and repeated 10 times. (b) Overview of our Texture-DIP dataset.
Figure 4 (left panel) axis labels: Frequency response; Gain (dB); Frequency; with the Passband and Stopband regions marked.

Figure 4 (right panel) data — PSNR on coarse- and fine-textured images from Set9 for each LPF-based upsampler. Top block: peak PSNR; bottom block: PSNR at the last training iteration.

LPF      | Coarse | Fine
NN       | 31.1   | 24.5
Bilinear | 31.3   | 22.9
L14      | 31.4   | 24.7
L15      | 31.2   | 24.7
L−60     | 31.2   | 22.5
L−100    | 27.5   | 20.0

NN       | 27.8↓  | 24.2
Bilinear | 31.1   | 22.9
L14      | 25.2↓  | 24.0
L15      | 27.1↓  | 24.5
L−60     | 31.2   | 22.5
L−100    | 27.5   | 20.0

Table 1: The iteration number [iter./5000] at which the peak PSNR is reached with different upsamplers. The upsamplers are designed to differ mainly in the stopband. As the strength of attenuation increases (from left to right), the peak PSNR is reached more slowly. L16 and L17 were created for this sub-experiment only.

               | NN   | L14  | L15  | L16  | L17
Coarse-grained | 1783 | 1589 | 1681 | 2205 | 2214
Fine-grained   | 2424 | 2942 | 3898 | 4361 | 4957

Table 2: Quantitative results on Gaussian noise. σ denotes the noise level. All methods were trained with a fixed iteration number (3000) throughout the experiments. The highest score is in bold, and the second highest is underlined.

σ = 25            | DIP [29] | Deep Decoder [13] | ConvDecoder [8] | NAS-DIP [5] | ISNAS-DIP [1] | Ours
Set9 [7]    PSNR  | 30.10 | 28.45 | 28.51 | 26.37 | 29.11  | 30.26
            SSIM  | 0.893 | 0.848 | 0.854 | 0.753 | 0.862  | 0.900
Set12 [32]  PSNR  | 26.97 | 25.98 | 25.78 | 20.86 | 24.10  | 28.14
            SSIM  | 0.812 | 0.789 | 0.786 | 0.534 | 0.745  | 0.884
CBSD68 [23] PSNR  | 28.93 | 25.50 | 25.19 | 23.80 | 24.51  | 28.57
            SSIM  | 0.892 | 0.809 | 0.793 | 0.693 | 0.745  | 0.888
σ = 50
Set9 [7]    PSNR  | 25.04 | 25.22 | 25.01 | 21.07 | 23.91  | 26.13
            SSIM  | 0.761 | 0.764 | 0.769 | 0.593 | 0.698  | 0.833
Set12 [32]  PSNR  | 22.15 | 20.44 | 22.72 | 18.92 | 19.20  | 24.59
            SSIM  | 0.623 | 0.687 | 0.706 | 0.476 | 0.537  | 0.805
CBSD68 [23] PSNR  | 23.74 | 23.52 | 24.06 | 17.92 | 19.93  | 24.17
            SSIM  | 0.746 | 0.725 | 0.767 | 0.323 | 0.573  | 0.774
# Params (M)      | 2.3M  | 0.1M  | 0.89M | 4.4M  | Varied | 0.05M∼0.92M

Table 3: Comparisons of desired properties. Restoration time is computed on an image of size 512 × 512 with 3000 iterations.

                                   | NAS [5]  | ISNAS [1] | Ours
Image-Specific Architecture Search | 3 days   | 5 mins    | -
Per-image Restoration              | ∼23 mins | ∼7 hrs    | ∼6 mins
Early Stopping Required?           | Yes      | Yes       | No

Table 4: Quantitative evaluation on Poisson noise.
Deep Decoder is shortened as DD and ConvDecoder as CD.

Noise scale     | DIP   | DD    | CD    | NAS   | ISNAS | Ours
ζ = 0.01  PSNR  | 31.58 | 29.81 | 29.51 | 29.70 | 30.12 | 30.95
          SSIM  | 0.915 | 0.880 | 0.875 | 0.864 | 0.875 | 0.910
ζ = 0.1   PSNR  | 22.66 | 23.91 | 24.94 | 15.72 | 17.68 | 24.99
          SSIM  | 0.640 | 0.718 | 0.776 | 0.417 | 0.496 | 0.789
ζ = 0.2   PSNR  | 20.36 | 21.13 | 21.87 | 12.99 | 14.64 | 22.55
          SSIM  | 0.563 | 0.609 | 0.672 | 0.311 | 0.383 | 0.774

Table 5: Quantitative evaluation on PolyU, a real-world noisy dataset. Deep Decoder and ConvDecoder are shortened as DD and CD.

     | DIP   | DD    | CD    | NAS   | ISNAS | Ours
PSNR | 38.15 | 37.22 | 37.00 | 37.83 | 37.78 | 38.05
SSIM | 0.982 | 0.978 | 0.976 | 0.982 | 0.977 | 0.984

Table 6: Applying our strategy to ConvDecoder.

              | σ = 25                | σ = 50
              | Before | After        | Before | After
Set9    PSNR  | 28.51  | 28.74        | 25.01  | 25.11
        SSIM  | 0.854  | 0.873        | 0.769  | 0.784
Set12   PSNR  | 25.79  | 26.98        | 22.72  | 23.01
        SSIM  | 0.786  | 0.854        | 0.706  | 0.742
CBSD68  PSNR  | 25.19  | 28.29        | 24.36  | 24.12
        SSIM  | 0.793  | 0.877        | 0.767  | 0.768
# Params (M)  | 0.89M  | 0.06M∼0.89M  | 0.89M  | 0.06M∼0.89M

References

[1] Metin Ersin Arican, Ozgur Kara, Gustav Bredell, and Ender Konukoglu. ISNAS-DIP: Image-specific neural architecture search for deep image prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1960-1968, 2022.
[2] Prithvijit Chakrabarty and Subhransu Maji. The spectral bias of the deep image prior. arXiv preprint arXiv:1912.08905, 2019.
[3] Stanley H. Chan, Xiran Wang, and Omar A. Elgendy. Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Transactions on Computational Imaging, 3(1):84-98, 2016.
[4] Keshigeyan Chandrasegaran, Ngoc-Trung Tran, and Ngai-Man Cheung. A closer look at Fourier spectrum discrepancies for CNN-generated images detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7200-7209, 2021.
[5] Yun-Chun Chen, Chen Gao, Esther Robb, and Jia-Bin Huang. NAS-DIP: Learning deep image prior with neural architecture search. In European Conference on Computer Vision, pages 442-459. Springer, 2020.
[6] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20:273-297, 1995.
[7] Kostadin Dabov, Alessandro Foi, and Karen Egiazarian. Video denoising by sparse 3D transform-domain collaborative filtering. In 2007 15th European Signal Processing Conference, pages 145-149. IEEE, 2007.
[8] Mohammad Zalbagi Darestani and Reinhard Heckel. Accelerated MRI with un-trained neural networks. IEEE Transactions on Computational Imaging, 7:724-733, 2021.
[9] Ricard Durall, Margret Keuper, and Janis Keuper. Watch your up-convolution: CNN based generative deep neural networks are failing to reproduce spectral distributions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7890-7899, 2020.
[10] Joel Frank, Thorsten Eisenhofer, Lea Schönherr, Asja Fischer, Dorothea Kolossa, and Thorsten Holz. Leveraging frequency analysis for deep fake image recognition. In International Conference on Machine Learning, pages 3247-3258. PMLR, 2020.
[11] Yosef Gandelsman, Assaf Shocher, and Michal Irani. "Double-DIP": Unsupervised image decomposition via coupled deep-image-priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11026-11035, 2019.
[12] Robert M. Haralick, Karthikeyan Shanmugam, and Its'Hak Dinstein. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, (6):610-621, 1973.
[13] Reinhard Heckel and Paul Hand. Deep decoder: Concise image representations from untrained non-convolutional networks. arXiv preprint arXiv:1810.03982, 2018.
[14] Reinhard Heckel and Mahdi Soltanolkotabi. Denoising and regularization via exploiting the structural bias of convolutional generators. arXiv preprint arXiv:1910.14634, 2019.
[15] Kary Ho, Andrew Gilbert, Hailin Jin, and John Collomosse. Neural architecture search for deep image prior. Computers & Graphics, 98:188-196, 2021.
[16] Yeonsik Jo, Se Young Chun, and Jonghyun Choi. Rethinking deep image prior for denoising. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5087-5096, 2021.
[17] Jiaming Liu, Yu Sun, Xiaojian Xu, and Ulugbek S. Kamilov. Image restoration using total variation regularized deep image prior. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7715-7719. IEEE, 2019.
[18] Wei-Yin Loh. Classification and regression trees. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1):14-23, 2011.
[19] Florian Luisier, Thierry Blu, and Michael Unser. Image denoising in mixed Poisson-Gaussian noise. IEEE Transactions on Image Processing, 20(3):696-708, 2010.
[20] Gary Mataev, Peyman Milanfar, and Michael Elad. DeepRED: Deep image prior powered by RED. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2019.
[21] Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. Distill, 1(10):e3, 2016.
[22] Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of neural networks. In International Conference on Machine Learning, pages 5301-5310. PMLR, 2019.
[23] Stefan Roth and Michael J. Black. Fields of experts. International Journal of Computer Vision, 82(2):205-229, 2009.
[24] Katja Schwarz, Yiyi Liao, and Andreas Geiger. On the frequency bias of generative models. Advances in Neural Information Processing Systems, 34:18126-18136, 2021.
[25] Zenglin Shi, Pascal Mettes, Subhransu Maji, and Cees G. M. Snoek. On measuring and controlling the spectral bias of the deep image prior. International Journal of Computer Vision, 130(4):885-908, 2022.
[26] Eero P. Simoncelli and Bruno A. Olshausen. Natural image statistics and neural representation. Annual Review of Neuroscience, 24(1):1193-1216, 2001.
[27] Shakarim Soltanayev and Se Young Chun. Training deep learning based denoisers without ground truth data. Advances in Neural Information Processing Systems, 31, 2018.
[28] Charles M. Stein. Estimation of the mean of a multivariate normal distribution. The Annals of Statistics, pages 1135-1151, 1981.
[29] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9446-9454, 2018.
[30] Hengkang Wang, Taihui Li, Zhong Zhuang, Tiancong Chen, Hengyue Liang, and Ju Sun. Early stopping for deep image prior. arXiv preprint arXiv:2112.06074, 2021.
[31] Jun Xu, Hui Li, Zhetong Liang, David Zhang, and Lei Zhang. Real-world noisy image denoising: A new benchmark. arXiv preprint arXiv:1804.02603, 2018.
[32] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing, 26(7):3142-3155, 2017.
[33] Kai Zhang, Wangmeng Zuo, Shuhang Gu, and Lei Zhang. Learning deep CNN denoiser prior for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3929-3938, 2017.
Title: The Devil is in the Upsampling: Architectural Decisions Made Simpler for Denoising with Deep Image Prior
Authors: Yilin Liu, Jiang Li, Yunkui Pang, Dong Nie, Pew-Thian Yap (UNC Chapel Hill)
arXiv:2304.11409

Abstract: Deep Image Prior (DIP) shows that some network architectures naturally bias towards smooth images and resist noise, a phenomenon known as spectral bias. Image denoising is an immediate application of this property. Although DIP has removed the requirement of large training sets, it still presents two practical challenges for denoising: architectural design and noise-fitting, which are often intertwined. Existing methods mostly handcraft or search for the architecture from a large design space, due to the lack of understanding of how the architectural choice corresponds to the image. In this study, we analyze from a frequency perspective to demonstrate that the unlearnt upsampling is the main driving force behind the denoising phenomenon in DIP. This finding then leads to strategies for estimating a suitable architecture for every image without a laborious search. Extensive experiments show that the estimated architectures denoise and preserve the textural details better than current methods with up to 95% fewer parameters. The under-parameterized nature also makes them especially robust to a higher level of noise.
Modeling and Estimation for Systems with Randomly Delayed Measurements and Packet Dropouts

Ranjeet Kumar Tiwari, Shovan Bhaumik

4 Apr 2023

Index Terms—Poisson distribution, random delays, sequential Monte Carlo method, Gaussian-approximation method

Abstract—A networked system often uses a shared communication network to transmit measurements to a remotely located estimation center. Due to the limited bandwidth of the channel, a delay may appear while receiving the measurements. This delay can be random with an arbitrary number of steps, and packets are dropped during transmission when the delay exceeds a certain permissible number. In this paper, such measurements are modeled with the Poisson distribution, which allows the user to determine the maximum delay the system might suffer. When the measurement delay exceeds the permissible number, a packet dropout happens. Based on the proposed model, we first solve the problem by assuming that the prior and posterior densities of the states are Gaussian and derive the expressions for the estimated state and the error covariance. Later, relaxing the Gaussian assumption on the densities, we propose a solution with the help of the sequential Monte Carlo (SMC) approach. The proposed SMC method divides the set of particles into several groups, where each group supports the possibility that the received measurement is delayed by a certain number of steps. The strength of an individual group is determined by the probability of a measurement being delayed by the same number of steps that the group represents. This approach estimates the states and also assesses the amount of delay from the received measurements. Finally, the developed estimators are implemented on two nonlinear estimation problems, and the simulation results are compared.
The proposed SMC approach shows better results compared to the designed Gaussian delay filters and the existing particle filters with delay.

I. INTRODUCTION

Networked control systems are widely used in various areas such as unmanned aerial vehicles [1], terrestrial and space exploration [2], and accessing hazardous environments [3], to name a few. In such systems, the information is sent over a common communication channel with limited capacity, which inevitably causes some undesirable events such as random delays in measurements, missing measurements/packet dropouts, sensor saturation, and signal quantization, among other network-induced phenomena. Consequently, these unwanted events warrant some modification of the conventional state estimation algorithms for such systems. In this paper, we consider the events of random delays in measurements and packet dropouts while developing Bayesian estimators.

In the literature, several works have addressed the random delay in measurements while designing Gaussian state estimators [4]-[7]. An unscented Kalman filter using state augmentation is proposed in [8] for measurements that are randomly delayed by up to two steps. Assuming bounded delayed measurements and packet dropouts, the author in [9] designed an optimal estimator for networked control systems. The same author proposed an optimal linear filter for delayed measurements with and without time-stamps [10]. A generalized filtering methodology for a Gaussian system with a maximum one-step random delay in measurements is presented in [11], whereas [12] dealt with multiple-step delayed measurements along with packet dropouts while developing a generic Gaussian filter. The nonlinear estimators mentioned above assume that the system remains Gaussian even when subjected to nonlinearity in the dynamics and random delays in the measurements.
Moreover, they used measurement models in which a measurement can be received more than once; consequently, the received measurement does not remain independent of past measurements, and the measurement noise sequence becomes correlated over time. To address non-Gaussian systems, a particle filtering solution for one-step randomly delayed measurements, obtained by modifying the importance weight, was proposed in [13]. Later, the same authors extended their work to multiple-step delayed measurements [14]. With a different measurement model that incorporates packet dropouts as well as randomly delayed measurements, [15] also presented a particle filtering method for estimating the states. These works used the sum of the products of likelihood densities, employing every particle repeatedly at successive time steps over the entire length of the maximum delay. They also used measurement models that generate dependent measurements and a correlated noise sequence.

In this technical note, we propose a new measurement model that represents random delays and packet dropouts in measurements by employing Poisson random variables. This generalized model can cover a range of delay and packet-drop scenarios by varying a single parameter. It does not allow any measurement to be received more than once; hence, the measurement noise sequence is not correlated. Further, we first derive a generalized Gaussian-approximated estimator for the presented delay model by computing the terms that are affected by the random delay in measurements. Then, we propose a sequential Monte Carlo (SMC) method that does not require a Gaussian system for state estimation in the delayed environment. The proposed SMC algorithm divides the set of particles sampled from the proposal density into various groups, where each group supports the possibility that the received measurement is delayed by a certain number of steps.
Subsequently, the importance weight of particles is computed based on the delay group to which they are assigned. Finally, the resampling is carried out to select the particles that effectively support the received measurement. Moreover, in this approach, the delay value assigned to the resampled particles conveys the information on the delay steps of the received measurement, which can be estimated at each step. Unlike the sum of products of the likelihood densities method used in existing particle filtering solutions, the proposed method avoids the repeated use of the same particle in computing the importance weight. Hence, the relevant particles get a higher chance of representing the posterior density of the state. Two nonlinear state estimation problems have been simulated to validate the proposed Bayesian estimators by comparing their performances with that of the existing estimators. The simulation results demonstrate the effectiveness of the proposed estimators. II. SYSTEM REPRESENTATION WITH RANDOMLY DELAYED MEASUREMENTS Consider a nonlinear dynamic system that can be described by the following equations: State equation: x k = f k−1 (x k−1 ) + η k−1 ,(1) Measurement equation: z k = h k (x k ) + v k ,(2) where x k ∈ ℜ nx denotes the state vector of the system and z k ∈ ℜ nz is the measurement at any discrete time k ∈ (0, 1, · · · ), while η k−1 ∈ ℜ nx and v k ∈ ℜ nz are mutually independent white noises with arbitrary but known probability density function (pdf). Consider a case where the received measurement might be a randomly delayed measurement from a previous time step owing to limitations, such as small bandwidth and communication failures, of the communication network inserted between the sensor and the estimator. Assume that the measurements are not time-stamped, and at a given time step, a maximum of one measurement can be received at the estimator end. In literature, many models depict similar scenarios with different properties. A. 
Existing Models for Randomly Delayed Measurements

In the literature, Bernoulli random variables are mostly used to represent random delays in measurements. The majority of the existing Bernoulli-based multiple-step delay models can be explained with the help of the following two models:

(i) The delayed measurement, y_k, is expressed as [14]

    y_k = Σ_{j=0}^{N} β^j_k z_{k-j};  k ≥ 2,   (3)

where β^j_k = (Π_{i=0}^{j} α^i_k)(1 − α^{j+1}_k) for 0 ≤ j < N, and β^N_k = Π_{i=1}^{N} α^i_k, with α^0_k ≡ 1. Here, the α^i_k are Bernoulli random variables with E[α^i_k] = Θ, and at any instant only one of the β^j_k can be 1 while the rest are zero. A set of received measurements simulated with this model is shown in Table I, where the second row gives the value of the index j for which β^j_k = 1.

TABLE I: Received measurements
k   | 1   2   3   4   5   6   7   8   9   10
j   | 0   1   0   1   0   2   0   1   0   0
y_k | z_1 z_1 z_3 z_3 z_5 z_4 z_7 z_7 z_9 z_10

The authors in [11], [13], [16], [17] have used the Bernoulli distribution to model a one-step random delay, and [8] modeled a two-step delay with it. The one-step and two-step delay models are special cases of the above multi-step delay model with N set to 1 and 2, respectively. In these models, a measurement is mandatorily received at each time step and packet drops cannot occur.

(ii) The received measurement, y_k, is represented as [12]

    y_k = Σ_{j=0}^{N} β^j_k z_{k-j} + (1 − Σ_{j=0}^{N} β^j_k) y_{k-1};  k ≥ 2,   (4)

where β^j_k = (Π_{i=0}^{j} α^i_k)(1 − α^{j+1}_k). Further, the α^i_k are Bernoulli random variables with E[α^i_k] = Θ, and the β^j_k are binary variables of which, at a given time instant k, at most one can be 1 (0 ≤ j ≤ N). Table II shows a representative sequence of measurements received when this model is used for simulation.

TABLE II: Received measurements with Θ = 0.5 and N = 2
k   | 1   2   3   4          5   6   7   8   9   10
j   | 0   1   0   β^j_k = 0  0   1   0   2   0   0
y_k | z_1 z_1 z_2 lost(y_3)  z_5 z_5 z_7 z_6 z_9 z_10

The main difference lies in the fact that this kind of model allows measurements to be dropped when α^j_k is not 1 for any value of j.
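One plausible generative reading of the Bernoulli model (i) — the delay equals the index of the first successful Bernoulli(Θ) trial, truncated at N — can be sketched as follows. The function name and signature are illustrative, not from the cited papers:

```python
import numpy as np

def simulate_bernoulli_delay(z, theta, n_max, rng):
    """Hedged sketch of delay model (i): at each step k the delay j is the
    index of the first successful Bernoulli(theta) trial, capped at
    min(n_max, k). A measurement is always delivered (no drops), so the
    same z_{k-j} may be received at several successive steps."""
    y = []
    for k in range(len(z)):
        j = 0
        while j < min(n_max, k) and rng.random() >= theta:
            j += 1  # trial failed: look one more step into the past
        y.append(z[k - j])
    return y
```

Running this with Θ = 0.5 and N = 2 produces sequences of the same shape as Table I: every step delivers some z_{k−j} with 0 ≤ j ≤ N, repeats included.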
For example, the models in [9], [15], [18] incorporate the scope for packet drops along with random delays. On the other hand, the authors in [19], [20] used an additional Bernoulli variable in the structure of [14] to model missing instances in the received measurements. However, in all these models, the same measurement can be received more than once, which is redundant and may be uninformative for reconstructing the states. The following remarks can be made about the existing delay models:

• When the same measurements are received repeatedly, they do not remain conditionally independent of each other, and the measurement noise sequence becomes correlated over time. These phenomena prevent the estimator from using the standard Bayesian estimation structure.

• The author in [10] proposed a model in which measurements are not received more than once; however, it uses a set of Bernoulli random variables for the different delay steps, which results in a visibly complicated and bulky expression.

• All the models in the literature assume that a measurement is more likely to arrive with no delay than with a nonzero number of delay steps, and that the probability decreases as the number of delay steps increases. This may be close to most realistic scenarios, but it remains a special case of a general model in which any number of delay steps can dominate in a set of received measurements.

B. Proposed Measurement Model

To mathematically represent a measurement model that overcomes the difficulties mentioned above, we propose to employ the Poisson distribution as follows.
If y_k is the measurement received at time step k, then

    y_k = Σ_{j_k=0}^{N} β^{j_k}_k z_{k−j_k},   (5)

where β^{j_k}_k is defined as

    β^{j_k}_k = α^0_k,                                         j_k = 0,
    β^{j_k}_k = α^{j_k}_k Π_{i=1}^{j_k} (1 − α^{j_k−i}_{k−i}),   j_k > 0.   (6)

Here, α^{j_k}_k (j_k = 0, 1, ..., N) is a set of binary variables with N as the maximum value of the time-varying index j_k; j_k indicates the number of delay steps, and if a measurement is received with a delay greater than N steps, it is assumed to be uninformative for the estimation of the states and is treated as a packet drop at that step. Moreover, at most one of the α^{j_k}_k takes the value 1 at time step k, and the rest are 0. The Poisson-distributed index j_k for which α^{j_k}_k holds the value 1 has the probability mass function (pmf)

    P(α^{j_k}_k = 1) = e^{−λ_k} λ_k^{j_k} / j_k!,   (7)

where λ_k is the mean of the Poisson random variable j_k, i.e. E[j_k] = λ_k, and represents the average delay at each time step. Consequently, the expectation of α^{j_k}_k is E[α^{j_k}_k] = e^{−λ_k} λ_k^{j_k} / j_k!.

Remark 1. To depict a situation where the same measurement can be received repeatedly, a special case of (5) with β^{j_k}_k = 1 for all j_k can be given by

    y_k = z_{k−j_k};  j_k ≤ k.   (8)

At any time step k, a measurement is delayed by j_k steps with probability P(β^{j_k}_k = 1), and the probability of a measurement being dropped is 1 − P(Σ_{j_k=0}^{N} β^{j_k}_k = 1). These probabilities are calculated in Lemmas 1 and 2.

Lemma 1. The probability of a received measurement, y_k, being delayed by j_k steps is

    γ^{j_k}_k = e^{−λ_k},   j_k = 0,
    γ^{j_k}_k = (e^{−λ_k} λ_k^{j_k} / j_k!) Π_{i=1}^{j_k} [1 − e^{−λ_{k−i}} λ_{k−i}^{j_k−i} / (j_k − i)!],   j_k > 0.   (9)

Proof. Case I (j_k = 0): Using Eqs. (7) and (6), we have γ^0_k = P(β^0_k = 1) = E[β^0_k] = E[α^0_k] = e^{−λ_k}. Case II (j_k > 0): Similarly, γ^{j_k}_k = P(β^{j_k}_k = 1) = E[α^{j_k}_k Π_{i=1}^{j_k} (1 − α^{j_k−i}_{k−i})].
Since the binary variables α^{j_k}_k (j_k = 0, 1, ..., N) are independent over the time steps, we can write

    γ^{j_k}_k = E[α^{j_k}_k] Π_{i=1}^{j_k} E[1 − α^{j_k−i}_{k−i}] = E[α^{j_k}_k] Π_{i=1}^{j_k} (1 − E[α^{j_k−i}_{k−i}]).   (10)

Using (7) in the above equation establishes (9).

Lemma 2. The probability that a measurement, y_k, is never received at the estimator side is 1 − Σ_{j_k=0}^{N} γ^{j_k}_k.

Proof. The probability that β^{j_k}_k is zero for all permissible j_k is

    P(Σ_{j_k=0}^{N} β^{j_k}_k = 0) = 1 − Σ_{j_k=0}^{N} E[β^{j_k}_k] = 1 − Σ_{j_k=0}^{N} γ^{j_k}_k.

The proposed delay model has the following properties in comparison with the existing models:

(i) Once a value for N is defined, the model naturally allows random packet drops with nonzero probability, which closely resembles a real scenario. A measurement is treated as lost when it is delayed by more than N steps. The delay models used in [8], [11], [14] do not offer the scope for a measurement packet to be dropped, and the estimator mandatorily receives a measurement at each time step. The work in [20] has to use an additional random variable to represent random measurement dropouts.

(ii) In the proposed model, each measurement is transmitted only once and no measurement is received more than once. Using the same measurement multiple times for estimation is redundant and may not help in reconstructing the states at that time step. The delay models in [8], [11], [14] prefer the same measurement to be received more than once over a packet drop, whereas in [12], [15], the authors reuse the previously received measurement for estimating the state if a packet drop occurs at the current time step.

(iii) The loss of whiteness of the noise sequence prevents the designer from using standard Bayesian estimation algorithms.
However, in the proposed model, no measurement can be received more than once; as a result, the model maintains the conditional independence of the current measurement y_k with respect to the previously received measurements y_{1:k−1}. Further, as shown in Appendix A, it keeps the whiteness of the measurement noise sequence intact, unlike the models used in [8], [11], [12], [14], [15].

(iv) The parameter λ_k is an important part of the proposed model, and with a proper selection of its value it captures a wide variety of delay scenarios for networked systems. The property that a measurement is more likely to arrive with no delay than with a nonzero number of delay steps is captured by λ_k ∈ (0, 1]. If instead we select, say, λ_k = 3, measurements delayed by 3 steps become the most likely, and the probability decreases on either side of 3. Fig. 1 shows the probability versus the number of delay steps for the Poisson-distributed j_k. Table III shows a batch of received measurements when λ_k = 0.7 for all k and N = 2.

TABLE III: Received measurements with λ_k = 0.7 and N = 2
k   | 1   2            3   4   5   6            7   8   9   10
j_k | 0   β^{j_2}_2=0  1   1   0   β^{j_6}_6=0  0   2   0   0
y_k | z_1 lost(ŷ_2)   z_2 z_3 z_5 lost(ŷ_6)    z_7 z_6 z_9 z_10

C. Problem Statement

We seek to design Bayesian estimation algorithms employing the Gaussian approximation and sequential Monte Carlo (SMC) methods to recursively reconstruct the posterior density, p(x_k|y_{1:k}), and the expectation of a posterior-density-integrable function of the unobserved state, {x_k; k ∈ N}, using the received measurements, {y_k; k ∈ N}. The received measurements are specified by the expression in (2) and the proposed delay model in (5), and the unobserved states follow the dynamics given in (1).

III. GAUSSIAN FILTERS FOR RANDOMLY DELAYED MEASUREMENTS

In this section, we derive the nonlinear filtering algorithm under the Gaussian assumption for the proposed model (5) by using the Bayesian framework.
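Before deriving the filter, the delay probabilities of Lemmas 1 and 2 can be computed directly as a numerical sanity check. This is a minimal sketch; the helper names are hypothetical:

```python
import math

def poisson_pmf(j, lam):
    """P(alpha^j_k = 1) from Eq. (7)."""
    return math.exp(-lam) * lam ** j / math.factorial(j)

def gamma_jk(j, lam_seq, k):
    """Lemma 1, Eq. (9): probability that y_k is delayed by j steps, for a
    sequence lam_seq[t] of Poisson means (assumes k - j >= 0)."""
    if j == 0:
        return math.exp(-lam_seq[k])
    p = poisson_pmf(j, lam_seq[k])
    for i in range(1, j + 1):
        p *= 1.0 - poisson_pmf(j - i, lam_seq[k - i])
    return p

def drop_prob(lam_seq, k, n_max):
    """Lemma 2: probability that no measurement arrives at step k."""
    return 1.0 - sum(gamma_jk(j, lam_seq, k) for j in range(n_max + 1))
```

With a stationary λ = 0.7 and N = 2 (the setting of Table III), the drop probability comes out strictly positive, consistent with property (i).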
The joint density of the states conditioned on the received measurements can be expressed as

    p(x_{0:k}|y_{1:k}) = p(x_{0:k}, y_{1:k}) / p(y_{1:k})
                       = p(y_k|x_{0:k}, y_{1:k−1}) p(x_{0:k}, y_{1:k−1}) / [p(y_k|y_{1:k−1}) p(y_{1:k−1})]
                       = p(y_k|x_{0:k}, y_{1:k−1}) p(x_{0:k}|y_{1:k−1}) / p(y_k|y_{1:k−1}).   (11)

From the measurement models (2) and (5), we can see that the received measurement y_k is correlated with the states x_k, ..., x_{k−N}. Hence, relaxing the standard assumption of independent measurements, we can consider that y_k, conditioned on x_{k−N:k}, is independent of the previous measurements and states, i.e. p(y_k|x_{0:k}, y_{1:k−1}) = p(y_k|x_{k−N:k}). Also, for a recursive estimation at time step k, it is assumed that the estimate of the states up to time step k−1 is already known. Thus, using (11), the filtering density of the state can be given as

    p(x_k|y_{1:k}) = p(y_k|x_{k−N:k}) p(x_k|y_{1:k−1}) / p(y_k|y_{1:k−1}),   (12)

where the predictive density, p(x_k|y_{1:k−1}), is given by the Chapman-Kolmogorov integral

    p(x_k|y_{1:k−1}) = ∫ p(x_k|x_{k−1}) p(x_{k−1}|y_{1:k−1}) dx_{k−1}.   (13)

Let us consider that the process noise, η_{k−1}, and the measurement noise, v_k, are zero-mean white Gaussian sequences with covariances Q_{k−1} and R_k, respectively. The initial state, x_0, also follows a Gaussian distribution, and x_0, η_{k−1} and v_k are uncorrelated. Now, assuming that the predictive density, p(x_k|y_{1:k−1}), in (13) is Gaussian if p(x_{k−1}|y_{1:k−1}) is Gaussian, the first and second moments of p(x_k|y_{1:k−1}) are

    x̂_{k|k−1} = E[x_k|y_{1:k−1}],
    P_{k|k−1} = E[(x_k − x̂_{k|k−1})(x_k − x̂_{k|k−1})^⊤ | y_{1:k−1}].   (14)

Since the prediction density in (13) depends on the state dynamics and the previous estimate, and not on the current measurement, the expectation over it can be computed with any of the Gaussian approximation methods available in the literature [21]-[23].
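The prediction moments (14) can be sketched with a plain Monte Carlo approximation of the Chapman-Kolmogorov integral (13); any sigma-point or cubature rule from [21]-[23] could replace the Monte Carlo draw, and the helper name is illustrative:

```python
import numpy as np

def predict_moments(x_post, P_post, f, Q, rng, n_mc=20000):
    """Monte Carlo sketch of the prediction step (13)-(14): propagate
    samples of N(x_post, P_post) through the dynamics f and add the
    process-noise covariance Q."""
    xs = rng.multivariate_normal(x_post, P_post, size=n_mc)
    fx = np.array([f(x) for x in xs])
    x_pred = fx.mean(axis=0)
    d = fx - x_pred
    P_pred = d.T @ d / (n_mc - 1) + Q
    return x_pred, P_pred
```

For a linear f the result matches the Kalman prediction up to Monte Carlo error, which is a convenient way to cross-check a sigma-point implementation.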
Similarly, assume that the predictive density of the current measurement is also Gaussian, i.e. p(y_k|y_{1:k−1}) = N(y_k; ŷ_{k|k−1}, P^{yy}_{k|k−1}). Its moments are given in Lemma 3, where ẑ_{k−j_k|k−1} and P^{zz}_{k−j_k|k−1} can be computed as illustrated in the Gaussian approximation methods.

Lemma 3. The predicted estimate of the measurement at time step k is

    ŷ_{k|k−1} = Σ_{j_k=0}^{N} γ̄^{j_k}_k ẑ_{k−j_k|k−1},   (15)

and the measurement covariance is

    P^{yy}_{k|k−1} = Σ_{j_k=0}^{N} γ̄^{j_k}_k P^{zz}_{k−j_k|k−1} + Σ_{j_k=0}^{N} γ̄^{j_k}_k (1 − γ̄^{j_k}_k) ẑ_{k−j_k|k−1} ẑ^⊤_{k−j_k|k−1}.   (16)

Proof. The predicted estimate of the received measurement at the k-th time step is

    ŷ_{k|k−1} = E[y_k|y_{1:k−1}] = E[Σ_{j_k=0}^{N} β^{j_k}_k z_{k−j_k} | y_{1:k−1}].

Since the variables β^{j_k}_k and z_{k−j_k} are independent, we can write the above expectation as

    ŷ_{k|k−1} = Σ_{j_k=0}^{N} E[β^{j_k}_k | y_{1:k−1}] E[z_{k−j_k} | y_{1:k−1}].

Using the fact that β^{j_k}_k and the past received measurements, y_{1:k−1}, are uncorrelated, and applying Lemma 1, the above equation leads to (15). Note that this estimate of the measurement is computed excluding the time instants when no measurement is received; hence, the normalized delay probability, γ̄^{j_k}_k = γ^{j_k}_k / Σ_{j_k=0}^{N} γ^{j_k}_k, is used as the expectation of β^{j_k}_k.
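Eqs. (15)-(16) transcribe directly into code, assuming the per-delay predicted measurements ẑ_{k−j|k−1} and covariances P^{zz}_{k−j|k−1} have already been obtained by a Gaussian approximation method (the function name is illustrative):

```python
import numpy as np

def predicted_measurement_moments(gamma_bar, z_hat, P_zz):
    """Lemma 3: predicted measurement mean (15) and covariance (16).
    gamma_bar[j] -- normalized delay probability for delay j
    z_hat[j]     -- predicted measurement z_{k-j|k-1}, shape (nz,)
    P_zz[j]      -- its covariance, shape (nz, nz)"""
    y_hat = sum(g * z for g, z in zip(gamma_bar, z_hat))
    P_yy = sum(g * P + g * (1.0 - g) * np.outer(z, z)
               for g, z, P in zip(gamma_bar, z_hat, P_zz))
    return y_hat, P_yy
```

For the degenerate case γ̄^0 = 1 (no delay), the expressions collapse to the standard predicted measurement moments, which is a quick consistency check.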
Further, from (15), the error in the received measurement at the k-th time step can be written as

    y_k − ŷ_{k|k−1} = Σ_{j_k=0}^{N} β^{j_k}_k z_{k−j_k} − Σ_{j_k=0}^{N} γ̄^{j_k}_k ẑ_{k−j_k|k−1} = M_1 − M_2,   (17)

and the measurement covariance, using (17), is defined as

    P^{yy}_{k|k−1} = E[(y_k − ŷ_{k|k−1})(y_k − ŷ_{k|k−1})^⊤ | y_{1:k−1}]
                   = E[M_1 M_1^⊤ | y_{1:k−1}] − E[M_1 M_2^⊤ | y_{1:k−1}] − E[M_2 M_1^⊤ | y_{1:k−1}] + E[M_2 M_2^⊤ | y_{1:k−1}].   (18)

We now compute the expectations in (18). The first term expands as

    E[M_1 M_1^⊤ | y_{1:k−1}] = E[Σ_{s=0}^{N} β^s_k z_{k−s} Σ_{l=0}^{N} β^l_k z^⊤_{k−l} | y_{1:k−1}]
    = Σ_{s=0}^{N} Σ_{l=0}^{N} E[α^s_k (1 − α^{s−1}_{k−1}) ··· (1 − α^0_{k−s}) α^l_k (1 − α^{l−1}_{k−1}) ··· (1 − α^0_{k−l})] E[z_{k−s} z^⊤_{k−l}],

and we distinguish two cases. Case I (s = l): Since α^s_k is a binary variable, E[(α^s_k)^2] = E[α^s_k]; writing P^{zz}_{k−s|k−1} = E[z_{k−s} z^⊤_{k−s}] − ẑ_{k−s|k−1} ẑ^⊤_{k−s|k−1}, we have

    E[M_1 M_1^⊤ | y_{1:k−1}] = Σ_{s=0}^{N} E[(α^s_k)^2 (1 − α^{s−1}_{k−1})^2 ··· (1 − α^0_{k−s})^2] E[z_{k−s} z^⊤_{k−s}]
    = Σ_{s=0}^{N} E[β^s_k] (P^{zz}_{k−s|k−1} + ẑ_{k−s|k−1} ẑ^⊤_{k−s|k−1})
    = Σ_{s=0}^{N} γ̄^s_k (P^{zz}_{k−s|k−1} + ẑ_{k−s|k−1} ẑ^⊤_{k−s|k−1}).   (19)

Case II (s ≠ l): Using the fact that the non-delayed measurements, z_k (k ∈ N), and the α^j_k (k ∈ N, 0 ≤ j ≤ N) are independent, we can write

    E[M_1 M_1^⊤ | y_{1:k−1}] = Σ_{s=0}^{N} Σ_{l≠s} E[α^s_k (1 − α^{s−1}_{k−1}) ··· (1 − α^0_{k−s})] E[α^l_k (1 − α^{l−1}_{k−1}) ··· (1 − α^0_{k−l})] E[z_{k−s}] E[z^⊤_{k−l}]
    = Σ_{s=0}^{N} Σ_{l≠s} γ̄^s_k γ̄^l_k ẑ_{k−s|k−1} ẑ^⊤_{k−l|k−1}.   (20)

Next, the second term of (18) is

    E[M_1 M_2^⊤ | y_{1:k−1}] = E[Σ_{s=0}^{N} β^s_k z_{k−s} | y_{1:k−1}] Σ_{l=0}^{N} γ̄^l_k ẑ^⊤_{k−l|k−1}
    = Σ_{s=0}^{N} Σ_{l=0}^{N} γ̄^s_k γ̄^l_k ẑ_{k−s|k−1} ẑ^⊤_{k−l|k−1}.   (21)

Similarly,

    E[M_2 M_1^⊤ | y_{1:k−1}] = Σ_{s=0}^{N} Σ_{l=0}^{N} γ̄^s_k γ̄^l_k ẑ_{k−s|k−1} ẑ^⊤_{k−l|k−1},   (22)

and lastly,

    E[M_2 M_2^⊤ | y_{1:k−1}] = Σ_{s=0}^{N} Σ_{l=0}^{N} γ̄^s_k γ̄^l_k ẑ_{k−s|k−1} ẑ^⊤_{k−l|k−1}.   (23)

Substituting (19), (20), (21), (22), and (23) into (18) establishes (16). Proceeding further to obtain the posterior estimate, the cross-covariance, P^{xy}_{k|k−1}, is derived in the following lemma.

Lemma 4.
The cross-covariance at time step k is

    P^{xy}_{k|k−1} = Σ_{s=0}^{N} γ̄^s_k P^{xz}_{k,k−s|k−1}.   (24)

Proof. Using (5) and (15), the conditional cross-covariance is defined as

    P^{xy}_{k|k−1} = E[(x_k − x̂_{k|k−1})(y_k − ŷ_{k|k−1})^⊤ | y_{1:k−1}]
    = E[(x_k − x̂_{k|k−1})(Σ_{s=0}^{N} β^s_k z_{k−s} − Σ_{s=0}^{N} γ̄^s_k ẑ_{k−s|k−1})^⊤]
    = E[(x_k − x̂_{k|k−1})(Σ_{s=0}^{N} β^s_k (z_{k−s} − ẑ_{k−s|k−1}) + Σ_{s=0}^{N} (β^s_k − γ̄^s_k) ẑ_{k−s|k−1})^⊤]
    = Σ_{s=0}^{N} E[β^s_k] E[(x_k − x̂_{k|k−1})(z_{k−s} − ẑ_{k−s|k−1})^⊤] + Σ_{s=0}^{N} E[β^s_k − γ̄^s_k] E[(x_k − x̂_{k|k−1}) ẑ^⊤_{k−s|k−1}].

Since E[β^s_k − γ̄^s_k] = 0 and E[(x_k − x̂_{k|k−1})(z_{k−s} − ẑ_{k−s|k−1})^⊤] = P^{xz}_{k,k−s|k−1}, we have P^{xy}_{k|k−1} = Σ_{s=0}^{N} γ̄^s_k P^{xz}_{k,k−s|k−1}.

Remark 4. Under the Gaussian assumption, P^{xz}_{k,k−s|k−1} in (24) is given by

    P^{xz}_{k,k−s|k−1} = ∫ x_k h_{k−s}(x_{k−s})^⊤ N(x_k; x̂_{k|k−1}, P_{k|k−1}) dx_k − x̂_{k|k−1} ẑ^⊤_{k−s|k−1},

where the integration can be approximated by any Gaussian approximation method available in the literature.

Theorem 1. The posterior estimate and covariance for the system (1), (2), (5) are

    x̂_{k|k} = x̂_{k|k−1} + K_k (y_k − ŷ_{k|k−1}),   (25)
    P_{k|k} = P_{k|k−1} − K_k P^{yy}_{k|k−1} K^⊤_k,   (26)

where K_k = P^{xy}_{k|k−1} (P^{yy}_{k|k−1})^{−1}.

Proof. Proceeding to compute the posterior filtering density, Eq. (12) can be rewritten as

    p(x_k|y_{1:k}) = p(y_k, x_k|y_{1:k−1}) / p(y_k|y_{1:k−1}),   (27)

where the joint density, p(y_k, x_k|y_{1:k−1}), is Gaussian under our earlier assumption about the predictive densities:

    p(y_k, x_k|y_{1:k−1}) = N( [x_k; y_k]; [x̂_{k|k−1}; ŷ_{k|k−1}], [P_{k|k−1}, P^{xy}_{k|k−1}; (P^{xy}_{k|k−1})^⊤, P^{yy}_{k|k−1}] ),   (28)

where the covariances P_{k|k−1}, P^{yy}_{k|k−1}, and P^{xy}_{k|k−1} are defined in (14), (16), and (24), respectively. Now, substituting (28) into (27) and completing the square for a Gaussian density (see Appendix A of [11]), we have

    p(x_k|y_{1:k}) = N(x_k; x̂_{k|k}, P_{k|k}),   (29)

where x̂_{k|k} and P_{k|k} are given in (25) and (26), respectively. Thus, (14) and (25) present the predicted and posterior estimates, respectively, for a stochastic system under the Gaussian assumption.
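Theorem 1 is the standard Gaussian measurement update with the delay-aware moments substituted in; as code, assuming the predicted moments are given:

```python
import numpy as np

def gaussian_update(x_pred, P_pred, y, y_hat, P_yy, P_xy):
    """Theorem 1 in code: gain K = P_xy P_yy^{-1}, posterior mean (25)
    and covariance (26). A sketch; the predicted moments come from
    (14)-(16) and (24)."""
    K = P_xy @ np.linalg.inv(P_yy)
    x_post = x_pred + K @ (y - y_hat)
    P_post = P_pred - K @ P_yy @ K.T
    return x_post, P_post
```

A one-dimensional numerical example: with P_pred = 1, P_xy = 1, P_yy = 2 and innovation 1, the gain is 0.5, the posterior mean 0.5, and the posterior variance 0.5.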
Further, if no measurement is received at a time step k, we use the predicted measurement, ŷ_{k|k−1}, for the state estimation.

IV. SMC METHOD FOR RANDOMLY DELAYED MEASUREMENTS

In this section, we develop an estimation algorithm that does not assume a particular distribution for the system noises and the prior information. Let {x_k; k ∈ N} be an unobserved Markov process with initial distribution p(x_0) and transition density specified by (1). The received measurements, {y_k; k ∈ N}, are conditionally independent given the process {x_k; k ∈ N}, with the likelihood density defined by (2) and (5). The posterior distribution p(x_{0:k}|y_{1:k}) can be approximated with the help of a set of i.i.d. samples drawn from that distribution as [24]

    p̂(x_{0:k}|y_{1:k}) = (1/N_s) Σ_{i=1}^{N_s} δ_{x^i_{0:k}}(x_{0:k}),   (30)

where N_s is the total number of samples and the particles {x^i_{0:k}}_{i=1}^{N_s} are drawn from the posterior distribution. Unfortunately, the posterior is usually non-standard, multivariate and known only up to a proportionality constant, so sampling particles from it directly is generally impossible. Alternatively, we adopt a Bayesian importance sampling method, in which we select a known and easy-to-sample proposal distribution, q(x_{0:k}|y_{1:k}), from which the particles can easily be drawn. If g_k(x_{0:k}) is a p(x_{0:k}|y_{1:k})-integrable function, the expectation E_{p(·|y_{1:k})}[g_k(x_{0:k})] can be written as

    E_{p(·|y_{1:k})}[g_k(x_{0:k})] = ∫ g_k(x_{0:k}) [p(x_{0:k}|y_{1:k}) / q(x_{0:k}|y_{1:k})] q(x_{0:k}|y_{1:k}) dx_{0:k}
    = ∫ g_k(x_{0:k}) [p(y_{1:k}|x_{0:k}) p(x_{0:k}) / (p(y_{1:k}) q(x_{0:k}|y_{1:k}))] q(x_{0:k}|y_{1:k}) dx_{0:k}
    = (1/p(y_{1:k})) ∫ g_k(x_{0:k}) w_k(x_{0:k}) q(x_{0:k}|y_{1:k}) dx_{0:k},   (31)

where w_k(x_{0:k}) is the unnormalized importance weight defined as

    w_k = p(y_{1:k}|x_{0:k}) p(x_{0:k}) / q(x_{0:k}|y_{1:k}).   (32)
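The identity in (31)-(32) can be exercised numerically with generic self-normalized importance sampling; the densities below are illustrative Gaussians chosen only to check that samples from q recover an expectation under p, not the delayed-measurement weight itself:

```python
import numpy as np

def snis_expectation(g, p_pdf, q_pdf, q_sample, n, rng):
    """Self-normalized importance sampling per Eqs. (31)-(32): draw from
    the proposal q, weight by w = p/q, and normalize by the weight sum."""
    xs = q_sample(rng, n)
    w = p_pdf(xs) / q_pdf(xs)   # unnormalized importance weights
    return np.sum(w * g(xs)) / np.sum(w)
```

For example, with target p = N(1, 1) and proposal q = N(0, 4), the estimate of E_p[x] converges to 1 as the sample size grows.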
Now, to compute E_{p(·|y_{1:k})}[g_k(x_{0:k})] in terms of expectations taken over the proposal distribution, q(x_{0:k}|y_{1:k}), we write the normalizing constant as p(y_{1:k}) = ∫ p(y_{1:k}|x_{0:k}) p(x_{0:k}) dx_{0:k} and substitute it into (31). After some rearrangement, Eq. (31) becomes

    E_{p(·|y_{1:k})}[g_k(x_{0:k})] = E_{q(·|y_{1:k})}[w_k(x_{0:k}) g_k(x_{0:k})] / E_{q(·|y_{1:k})}[w_k(x_{0:k})].

The above expectations can be estimated with a set of i.i.d. samples, {x^i_{0:k}}_{i=1}^{N_s}, drawn from the proposal distribution, q(x_{0:k}|y_{1:k}), together with Eq. (32). Further, our aim is to sequentially estimate the posterior distribution and the associated expectations at each time step k. To achieve sequential estimation, the proposal distribution is assumed to factorize as

    q(x_{0:k}|y_{1:k}) = q(x_{0:k−1}|y_{1:k−1}) q(x_k|x_{0:k−1}, y_{1:k}).   (33)

Here, we have used the chain rule and assumed that the states x_{0:k−1} are independent of the future measurement y_k. Also, from the measurement models (2) and (5), it is evident that the current measurement, y_k, is correlated with the states x_k, x_{k−1}, ..., x_{k−N}. Hence, by the chain rule and under our assumptions that the states form a Markov process and the measurements, conditioned on the states, are independent, we have

    p(x_{0:k}) = p(x_0) Π_{l=1}^{k} p(x_l|x_{l−1}),
    p(y_{1:k}|x_{0:k}) = Π_{l=1}^{k} p(y_l|x_{l−N̄:l});  k > 0,   (34)

where N̄ = min(N, l − 1). Now, substituting Eqs. (33) and (34) into Eq. (32), a recursive expression for the unnormalized importance weight can be derived as

    w_k = p(y_{1:k}|x_{0:k}) p(x_{0:k}) / [q(x_{0:k−1}|y_{1:k−1}) q(x_k|x_{0:k−1}, y_{1:k})]
        = w_{k−1} p(y_{1:k}|x_{0:k}) p(x_{0:k}) / [p(y_{1:k−1}|x_{0:k−1}) p(x_{0:k−1}) q(x_k|x_{0:k−1}, y_{1:k})]
        = w_{k−1} p(y_k|x_{k−N̄:k}) p(x_k|x_{k−1}) / q(x_k|x_{0:k−1}, y_{1:k}).   (35)

A. Estimation of the State Posterior Density

Now, the whole set of particles is divided into as many groups as there are possible delay steps for the received measurement.
Each group represents a probable version of the measurement with a certain number of delay steps. Thus, instead of one set of particles, we can use those groups of particles to approximate the state posterior pdf.

Theorem 2. The filtering density, p(x_k|y_{1:k}), for the system (1), (2), (5) can be computed with the help of a set of i.i.d. samples drawn from the proposal density, q(x_k|x^i_{0:k−1}, y_{1:k}), as

    p̂(x_k|y_{1:k}) = Σ_{j_k=0}^{N̄} Σ_{i=1}^{N^{j_k}_s} w̃^{j_k,i}_k δ_{x^{j_k,i}_k}(x_k),   (36)

where

    w^{j_k,i}_k = w^{j_{k−1},i}_{k−1} p(y_k|x^i_{k−j_k}) p(x^i_k|x^i_{k−1}) / q(x^i_k|x^i_{0:k−1}, y_{1:k}),
    w̃^{j_k,i}_k = w^{j_k,i}_k / Σ_{j=0}^{N̄} Σ_{i=1}^{N^j_s} w^{j,i}_k,

and N^{j_k}_s = γ̄^{j_k}_k N_s, such that Σ_{j_k=0}^{N̄} N^{j_k}_s = N_s.

Proof. In Eq. (35), the likelihood density, p(y_k|x_{k−N̄:k}), can be written as the joint density, p(y_k, β^{j_k}_k|x_{k−N̄:k}), marginalized over all possible values of β^{j_k}_k (j_k = 0, ..., N̄):

    p(y_k|x_{k−N̄:k}) = Σ_{j_k=0}^{N̄} p(y_k, β^{j_k}_k|x_{k−N̄:k}).   (37)

At most one of the β^{j_k}_k (j_k = 0, ..., N̄) is 1 at any given time step k, and the others are zero, with the probability given in (9). Note that the likelihood is computed only for the time steps at which a measurement is received; hence, the combination in which all β^{j_k}_k are zero, which corresponds to a measurement loss, is not considered. From (5) and (9), Eq. (37) can be expanded as

    p(y_k|x_{k−N̄:k}) = Σ_{j_k=0}^{N̄} p(y_k|β^{j_k}_k = 1, x_{k−N̄:k}) P(β^{j_k}_k = 1)
    = p(y_k|β^0_k = 1, x_{k−N̄:k}) P(β^0_k = 1) + ··· + p(y_k|β^{N̄}_k = 1, x_{k−N̄:k}) P(β^{N̄}_k = 1)
    = Σ_{j_k=0}^{N̄} p(y_k|x_{k−j_k}) γ̄^{j_k}_k.   (38)

Substituting (38) into (35), we have

    w_k = w_{k−1} Σ_{j_k=0}^{N̄} p(y_k|x_{k−j_k}) γ̄^{j_k}_k p(x_k|x_{k−1}) / q(x_k|x_{0:k−1}, y_{1:k}) = Σ_{j_k=0}^{N̄} w^{j_k}_k γ̄^{j_k}_k,   (39)

where the recursive unnormalized importance weight, when the measurement is supposed to be delayed by j_k steps, is

    w^{j_k}_k = w^{j_{k−1}}_{k−1} p(y_k|x_{k−j_k}) p(x_k|x_{k−1}) / q(x_k|x_{0:k−1}, y_{1:k}).

Now, using the i.i.d.
samples to approximate the posterior distribution, similar to (30), when the particles are sampled from the proposal distribution, q(x_k|x^i_{0:k−1}, y_{1:k}), we have

    p̂(x_k|y_{1:k}) = Σ_{j_k=0}^{N̄} γ̄^{j_k} Σ_{i=1}^{N_s} w̃^{j_k,i}_k δ_{x^i_k}(x_k) = Σ_{j_k=0}^{N̄} Σ_{i=1}^{N^{j_k}_s} w̃^{j_k,i}_k δ_{x^{j_k,i}_k}(x_k),   (40)

where w̃^{j_k,i}_k = w^{j_k,i}_k / Σ_{j_k=0}^{N̄} Σ_{i=1}^{N^{j_k}_s} w^{j_k,i}_k and N^{j_k}_s = γ̄^{j_k}_k N_s. Also, since Σ_{j_k=0}^{N̄} γ̄^{j_k} = 1, it follows that Σ_{j_k=0}^{N̄} N^{j_k}_s = N_s.

B. Delay Transition Rule for Particles

It is clear from Theorem 2 that the particles are divided into N̄ + 1 groups at any time step k, where each group supports the hypothesis that the received measurement, y_k, is delayed by j_k (0 ≤ j_k ≤ N̄) steps and has a strength of N^{j_k}_s particles. This necessitates framing a set of rules for the delay assignment to each particle after it has been drawn from the proposal distribution, q(x_k|x^i_{0:k−1}, y_{1:k}). Now, assume that the delay assignment of each particle up to time step k − 1 is known. Then, at step k, two things must be considered to assign the delay step: (i) the delay history of the particle for the last N̄ steps, and (ii) the delay probability, γ̄^{j_k}_k, for 0 ≤ j_k ≤ N̄. Since a measurement cannot be received more than once, the particle x^{j_{k−τ},i}_{k−τ}, which was assigned a delay of j_{k−τ} steps to support the measurement z_{k−τ−j_{k−τ}} at time step k − τ, cannot support a measurement z_{k−j_k} bearing a delay of j_k at time step k if j_k = j_{k−τ} + τ, 0 < τ ≤ N̄. This implies that the probability of the i-th particle at time step k being assigned a delay of j_k steps is

    P(β^{j_k}_k = 1|x^{j_k,i}_k) = γ^{j_k,i}_k = 0, if j_k = j_{k−τ} + τ for some 0 < τ ≤ N̄;  γ̄^{j_k}_k, otherwise.   (41)

Further, the delay assignment for each particle can be executed as follows. The i-th particle at time step k is assigned a delay of τ steps by drawing from the probabilities γ^{j_k,i}_k in (41) with a uniformly sampled number u ∈ [0, 1].

Remark 6.
The sum-of-products-of-likelihoods method adopted in [15] and [14] includes every particle repeatedly for N̄ steps when computing the importance weight, irrespective of the fact that it has already been used to approximate the posterior state density at previous steps. In contrast, the present method excludes a particle from the computation of the importance weight once it has been used at an earlier step, as given in (41). Hence, we present a method in which the relevant particles get a higher chance of representing the posterior density.

C. Resampling

Once the delay is assigned to every particle for the current step, the associated importance weight is computed as stated in Theorem 2. On the basis of the computed importance weights, the particles are discretely resampled to select only those particles that support the current measurement with significant weights. The delay-step values of the resampled particles actually carry the delay information of the received measurement. Heuristically, the probability of the measurement being delayed by j_k steps is approximately N'^{j_k}_s / N_s, where N'^{j_k}_s is the number of particles assigned a delay of j_k steps after resampling. Note that Lemma 1 gives the prior probability of a measurement being delayed by a certain number of steps; here, it is the posterior probability of the delay that we calculate with the help of the resampled state particles. A more systematic way of computing this posterior probability is illustrated in the following subsection.

D. Estimation of Delay Steps

The delay variable j_k is a Poisson i.i.d. random number, which is correlated with the states and the received measurement through β^{j_k}_k, as given in (5) and (2).

Theorem 3. The filtering estimate of the random delay, ĵ_k, for the system (1), (2), (5) is

    ĵ_k = arg max_{0 ≤ j_k ≤ N̄} p̂(j_k|y_{1:k}),   (42)

where p̂(j_k|y_{1:k}) = Σ_{i=1}^{N^{j_k}_s} w̃^{j_k,i}_k γ^{j_k,i}_k.

Proof.
Since β^{j_k}_k and y_{1:k−1} are uncorrelated, the predictive density of the delay step, p(j_k|y_{1:k−1}), is P(β^{j_k}_k = 1). The filtering density can then be given as

    p(j_k|y_{1:k}) = ∫ p(j_k, x_k|y_{1:k}) dx_k = ∫ p(j_k|x_k) p(x_k|y_{1:k}) dx_k = ∫ P(β^{j_k}_k = 1|x_k) p(x_k|y_{1:k}) dx_k.

Now, if we use the particle approximation from (36) with β^{j_k}_k = 1, i.e. y_k = z_{k−j_k}, then only the particles that have been assigned a delay of j_k steps are used to approximate the above integral. Using the probability of the i-th particle being assigned a delay of j_k steps, we can further write the above equation as

    p̂(j_k|y_{1:k}) = Σ_{i=1}^{N^{j_k}_s} w̃^{j_k,i}_k P(β^{j_k}_k = 1|x^{j_k,i}_k) = Σ_{i=1}^{N^{j_k}_s} w̃^{j_k,i}_k γ^{j_k,i}_k.

Maximizing p̂(j_k|y_{1:k}) over 0 ≤ j_k ≤ N̄ yields the estimate of the delay step.

Corollary 1. If d denotes the delay assigned to the i-th particle at time step k, the mean of the delay steps is d̄_k = Σ_{d=1}^{N̄} Σ_{i=1}^{N^d_s} w̃^{d,i}_k d.

The steps to approximate the posterior densities of the state, p̂(x_k|y_{1:k}), and of the delay step, p̂(j_k|y_{1:k}), are outlined in Appendix B. Note that when no measurement is received at the estimator, the posterior is approximated with w^{j_k,i}_k = w^{j_{k−1},i}_{k−1}, and there is no estimate for the delay step.

V. SIMULATION RESULTS

To validate the proposed Gaussian-approximated filter (GAF) and the SMC method for randomly delayed measurements, we have simulated two nonlinear state estimation problems: (i) a non-stationary growth model and (ii) a maneuvering target with an unknown, coordinated turn rate. To demonstrate the superiority of the proposed Bayesian estimators, their performance is compared with that of the existing filters for the above two problems. The cubature quadrature sampling points [25] are used to implement the proposed GAF.
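The delay bookkeeping of Sections IV-B and IV-D can be sketched together: rule (41) zeroes out the delays a particle has already supported, and Theorem 3's MAP estimate follows from the group-summed weights. The function names and data layout are illustrative:

```python
import numpy as np

def delay_probs(history, gamma_bar):
    """Rule (41): a particle assigned delay history[tau-1] at step k - tau
    cannot be assigned delay j = history[tau-1] + tau now; the remaining
    probabilities gamma_bar[j] are renormalized."""
    p = np.array(gamma_bar, dtype=float)
    for tau, j_past in enumerate(history, start=1):
        if j_past + tau < len(p):
            p[j_past + tau] = 0.0
    return p / p.sum()

def estimate_delay(w_tilde, gamma_part):
    """Theorem 3, Eq. (42): p_hat(j|y_1:k) summed per delay group, then the
    arg max gives the MAP delay estimate. w_tilde[j] and gamma_part[j] hold
    the normalized weights and per-particle delay probabilities of group j."""
    p = np.array([float(np.dot(w, g)) for w, g in zip(w_tilde, gamma_part)])
    return int(np.argmax(p)), p
```

For instance, a particle that supported the undelayed measurement one step ago (history = [0]) is barred from claiming a one-step delay now, since that would reuse the same measurement.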
The particle filter for randomly delayed measurements (PF-RD) developed in [14], [15] is reformulated for the proposed measurement model and, along with the standard PF [26], is used as the existing filter for performance comparison. The root mean square error (RMSE) [21] is selected as the performance index for all the implemented filters. Further, since every particle is assigned a delay at each step, the SMC method is also used to estimate the delay steps, which is not possible with the other implemented filters.

A. Problem 1

The time-varying growth model is widely used in the literature, owing to its non-stationary property, to validate newly developed filtering algorithms [8], [13], [26]. The system model is given as

    x_k = 0.5 x_{k−1} + 25 x_{k−1} / (1 + x^2_{k−1}) + 8 cos(1.2k) + q_{k−1},
    z_k = x^2_k / 20 + v_k,   (43)

where q_{k−1} and v_k are independent zero-mean Gaussian processes with E[q^2_k] = 10 and E[v^2_k] = 1, respectively. The initial estimate is given by p(x_0) ∼ N(0, 1), and the number of particles used for the simulation is N_s = 500. The delayed measurements are generated using a stationary λ_k = λ = 0.80 and a maximum permissible delay of N = 3 steps. To compare the performance, the RMSEs of the estimated state calculated over 100 Monte Carlo (MC) runs are plotted over 50 time steps for each filter in Fig. 2a. The time-averaged RMSEs for the proposed SMC, PF-RD, standard PF, and the proposed GAF are 5.14, 5.48, 7.05, and 9.60, respectively. It can be seen that the delay-aware SMC method and PF-RD perform more accurately than the other filters, at the cost of the additional computational burden shown in Table IV. Fig. 2b shows the RMSE of the estimated delay over 100 MC runs for the proposed SMC method.

B. Problem 2

An aircraft that executes a maneuvering turn in a two-dimensional plane with a fixed but unknown turn rate, Ω, is considered, using the coordinated turn model for aerospace target tracking.
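For reference, one step of the growth model (43) from Problem 1 can be written as follows; q_std and r_std parameterize the noise standard deviations (setting them to zero isolates the deterministic part), and the function names are illustrative:

```python
import numpy as np

def growth_step(x, k, rng, q_std=np.sqrt(10.0)):
    """State transition of (43); E[q^2] = 10 in the paper."""
    return (0.5 * x + 25.0 * x / (1.0 + x ** 2)
            + 8.0 * np.cos(1.2 * k) + q_std * rng.standard_normal())

def growth_measure(x, rng, r_std=1.0):
    """Measurement of (43); E[v^2] = 1 in the paper."""
    return x ** 2 / 20.0 + r_std * rng.standard_normal()
```

Iterating `growth_step` and feeding `growth_measure` through a delay simulator reproduces the setting used for Fig. 2.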
This model receives the bearing and range measurements observed by a radar to estimate the unobserved kinematics of the aircraft. The states representing the kinematics of the aircraft are x_k = [ζ ζ̇ η η̇ Ω]^⊤, where ζ and η represent the positions, and ζ̇ and η̇ the velocities, along the X and Y axes, respectively. The dynamics of the target aircraft in discrete time is given by [15], [21]:

    x_k = [ 1   sin(ΩT)/Ω         0   −(1 − cos(ΩT))/Ω   0
            0   cos(ΩT)           0   −sin(ΩT)           0
            0   (1 − cos(ΩT))/Ω   1   sin(ΩT)/Ω          0
            0   sin(ΩT)           0   cos(ΩT)            0
            0   0                 0   0                  1 ] x_{k−1} + q_{k−1},   (44)

where T is the time interval between two successively received measurements, and q_{k−1} is a zero-mean Gaussian sequence whose covariance follows the standard coordinated-turn construction from the noise intensities q_1 and q_2 [15], [21]. The range, r, and bearing, θ, are the observations available for tracking, obtained from a radar placed at the origin. The measurement model is

    z_k = [r_k θ_k]^⊤ = [ √(ζ^2_k + η^2_k)   tan^{−1}(η_k/ζ_k) ]^⊤ + v_k,   (45)

where v_k is an independently distributed zero-mean Gaussian sequence with covariance R = diag[σ^2_r σ^2_θ]. The parameters used in this simulation are given in Table V. The initial state estimate is drawn from the normal distribution with mean x̂_0 = [1000 m, 300 m/s, 1000 m, 0 m/s, −3°/s]^⊤ and covariance P_{0|0} = diag[100 m^2, 10 m^2 s^{−2}, 100 m^2, 10 m^2 s^{−2}, 100 mrad^2 s^{−2}]. The number of particles used for the simulation is N_s = 5000. The delayed measurements are generated using a stationary λ_k = λ = 0.90 and N = 3.

TABLE V: Simulation parameters
q_1 = 0.1 m^2 s^{−3},  q_2 = 1.75 × 10^{−4} s^{−3},  σ_r = 10 m,  σ_θ = √10 mrad

The RMSEs calculated over 100 MC runs are plotted in Figs. 3 and 4 for the different filters. It can be observed from the plots that the standard PF, which does not account for the random delays, diverges, whereas the algorithms developed for random delays, such as the proposed SMC, PF-RD, and the proposed GAF, perform with better accuracy.
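The transition matrix in (44) can be coded directly for a nonzero turn rate; this sketch does not handle the small-|Ω| limit, where sin(ΩT)/Ω → T and (1 − cos(ΩT))/Ω → 0 would need a series expansion:

```python
import numpy as np

def ct_transition(omega, T):
    """Coordinated-turn transition matrix of (44) for the state
    [zeta, zeta_dot, eta, eta_dot, Omega]; omega in rad/s, omega != 0."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([
        [1.0, s / omega,         0.0, -(1.0 - c) / omega, 0.0],
        [0.0, c,                 0.0, -s,                 0.0],
        [0.0, (1.0 - c) / omega, 1.0, s / omega,          0.0],
        [0.0, s,                 0.0, c,                  0.0],
        [0.0, 0.0,               0.0, 0.0,                1.0],
    ])
```

The last row leaves Ω unchanged, consistent with the fixed-but-unknown turn rate assumption; the process noise q_{k−1} is added separately.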
Also, the PF-based filters outperform the GAF at the cost of the extra computational effort shown in Table VI. Note that, since the particles are not repeatedly used to compute the likelihood at each step in the proposed SMC method, it tracks the kinematics of the target with slightly better accuracy than the PF-RD at a relatively low computational cost.

VI. CONCLUSIONS

This paper presents a measurement model using Poisson random variables to represent random delays and packet dropouts in the measurements received in networked systems. The proposed model generates independent measurements and an uncorrelated noise sequence over time. Subsequently, we present a generalized Gaussian-approximated filter for the developed delay model by deriving the terms that are modified by the random delay in the measurements. Further, we propose an SMC algorithm for randomly delayed measurements and packet dropouts. This method divides the whole set of samples into several groups based on the possible delay steps. Each sample is assigned a delay value that represents the number of steps by which the measurement is delayed; the delay assignments are updated at every time step. The proposed algorithm also gives a method to estimate the delay steps of the received measurement at each time step. The simulation results show that filtering algorithms that account for random delays perform more accurately than the conventional filters. Also, the RMSE plots reflect the superiority of the SMC method, which is obtained at the cost of an additional computational burden.
APPENDIX A
COMPUTATION OF AUTOCORRELATION OF MODIFIED MEASUREMENT NOISE

The proposed measurement model in (5), by using (2), can be rewritten as

y_k = Σ_{j_k=0}^{N} β_k^{j_k} h_{k−j_k}(x_{k−j_k}) + Σ_{j_k=0}^{N} β_k^{j_k} v_{k−j_k},

where the modified noise is defined as

ν_k = Σ_{j_k=0}^{N} β_k^{j_k} v_{k−j_k}.   (46)

Consider that the non-delayed measurement noise has the following properties: E[v_k] = 0 and E[v_k v_k^⊤] = R_k for all k. Using the definition of the modified measurement noise ν_k in (46), we can establish that E[ν_k] = 0. Its autocorrelation is E[ν_k ν_m^⊤] with k − m = a, where a is any integer. This can be computed as

E[ν_k ν_m^⊤] = E[ (Σ_{s=0}^{N} α_k^s (1 − α_{k−1}^{s−1}) ··· (1 − α_{k−s}^0) v_{k−s}) × (Σ_{l=0}^{N} α_m^l (1 − α_{m−1}^{l−1}) ··· (1 − α_{m−l}^0) v_{m−l}^⊤) ]
             = Σ_{s=0}^{N} Σ_{l=0}^{N} E[ α_k^s (1 − α_{k−1}^{s−1}) ··· (1 − α_{k−s}^0) α_{k−a}^l (1 − α_{k−a−1}^{l−1}) ··· (1 − α_{k−a−l}^0) ] E[v_{k−s} v_{k−a−l}^⊤].   (47)

Note that in the above expansion, if s = 0 there is only one term, α_k^0; if s = 1 the expansion includes α_k^1 (1 − α_{k−1}^0); and so on. The same holds for l. Now, considering the different integer values of a, (47) can be evaluated as follows.

Case I (0 < a ≤ N): It is clear that

E[v_{k−s} v_{k−a−l}^⊤] = R_{k−s} if s − l = a, and 0 otherwise,

which effectively means that the first expectation in (47) has to be computed only for the values of s and l such that s − l = a.

Case II (a > N): Given that 0 ≤ s, l ≤ N and a > N, the second expectation in (47) satisfies E[v_{k−s} v_{k−a−l}^⊤] = 0 for all s, l. Therefore, E[ν_k ν_{k−a}^⊤] = 0 for a > N.

Case III (a = 0): Eq. (47) can be written as

E[ν_k ν_m^⊤] = Σ_{s=0}^{N} Σ_{l=0}^{N} E[ α_k^s (1 − α_{k−1}^{s−1}) ··· (1 − α_{k−s}^0) α_k^l (1 − α_{k−1}^{l−1}) ··· (1 − α_{k−l}^0) ] E[v_{k−s} v_{k−l}^⊤],   (48)

where clearly E[v_{k−s} v_{k−l}^⊤] = R_{k−t} if s = l = t, and 0 otherwise.
Therefore, evaluating (48) for the values of s and l when both are equal, s = l = t, we have

E[ν_k ν_m^⊤] = Σ_{t=0}^{N} E[ (α_k^t)² (1 − α_{k−1}^{t−1})² ··· (1 − α_{k−t}^0)² ] E[v_{k−t} v_{k−t}^⊤].

Since the α_k^t (k = 0, 1, ··· and 0 ≤ t ≤ N) are independent binary variables with E[(1 − α_k^t)²] = E[1 − α_k^t], we can simplify the above expression as

E[ν_k ν_m^⊤] = Σ_{t=0}^{N} E[α_k^t] E[1 − α_{k−1}^{t−1}] ··· E[1 − α_{k−t}^0] E[v_{k−t} v_{k−t}^⊤] = Σ_{t=0}^{N} γ_k^t R_{k−t}.

Case IV (a < 0): Proceeding as in Cases I and II (the cases with a > 0), we get E[ν_k ν_m^⊤] = 0 for a < 0.

Hence, the modified measurement noise ν_k in (46) has the following properties: E[ν_k] = 0, and E[ν_k ν_m^⊤] = Σ_{t=0}^{N} γ_k^t R_{k−t} if k = m, and 0 if |k − m| > 0.

• end for
• end for
• Normalize the importance weights of the particles as w̃_k^{j_k,i} = w_k^{j_k,i} / Σ_{j_k=0}^{N} Σ_{i=1}^{N_s^{j_k}} w_k^{j_k,i}.
• Construct the posterior p̂(x_k | y_{1:k}) as given in (36).
• Estimate the delay step as ĵ_k = Σ_{j_k=1}^{N} Σ_{i=1}^{N_s^{j_k}} w̃_k^{j_k,i} j_{k,i}.
• Resample the particles: [{{x_k^{j_k,i}, w_k^{j_k,i}}_{i=1}^{N_s^{j_k}}}_{j_k=0}^{N}] := RESAMPLE[{{x_k^{j_k,i}, w_k^{j_k,i}}_{i=1}^{N_s^{j_k}}}_{j_k=0}^{N}].

Ranjeet Kumar Tiwari and Shovan Bhaumik are with the Department of Electrical Engineering, Indian Institute of Technology Patna, India (e-mail: [email protected] and [email protected]).

Fig. 1: Delay steps versus probability for the Poisson distribution.

Remark 5. The computation of the importance weight in Theorem 2 is valid for time instants at which the system (1), (2), (5) receives a measurement. However, when β_k^{j_k} = 0 for all values of j_k, the measurement is lost; there are only the particles sampled from the proposal density, and the weight is assigned from u, a number sampled uniformly in [0, 1].

Fig. 2: (a) RMSE of estimated state. (b) RMSE of estimated delay steps with N = 3.

Fig. 3: (a)
RMSE of position. (b) RMSE of velocity.

Fig. 4: (a) RMSE of turn rate. (b) RMSE of delay steps with N = 3.

For every combination of s and l that gives s − l = a, the first expectation in (47) contains the product E[(1 − α_{k−a}^{s−a}) α_{k−a}^l] = E[(1 − α_{k−a}^l) α_{k−a}^l]. Further, given that α_{k−a}^l is a binary variable, E[(α_{k−a}^l)²] = E[α_{k−a}^l], so E[(1 − α_{k−a}^l) α_{k−a}^l] = 0. Therefore, E[ν_k ν_{k−a}^⊤] = 0 for 0 < a ≤ N.

• Set N̄ = min(N, k − 1), and γ̃_k^{j_k} = … if j_k = j_{k−τ} + τ for all 0 < τ ≤ N̄, and γ_k^{j_k} otherwise.
• Denote the j_k delay steps assigned to the ith particle as j_{k,i}.
• Evaluate the importance weight with j_k delay as w_k^{j_k,i} = w_{k−1}^{j_{k−1},i} p(y_k | x_{k−j_k}^{j_k,i}) p(x_k^{j_k,i} | x_{k−1}^{j_{k−1},i}) / q(x_k^{j_k,i} | x_{0:k−1}^{j_{0:k−1},i}, y_{1:k}).

Table I: Received measurements with Θ = 0.5 and N = 2, for k = 1, …, 10.

Table III: Received measurements with λ = 0.7 and N = 2, for k = 1, …, 10.

Table IV: Relative computational time. Proposed GAF: 0.09; Standard PF: 1; PF-RD: 2; Proposed SMC: 1.75.

Table V: Tracking parameters. Sampling time T = 0.125 s; turn rate Ω = −3° s⁻¹.

Table VI: Relative computational time. Proposed GAF: 0.03; Standard PF: 1; PF-RD: 2.16; Proposed SMC: 1.26.

REFERENCES

[1] H. B. Khamseh, S. Ghorbani, and F. Janabi-Sharifi, "Unscented Kalman filter state estimation for manipulating unmanned aerial vehicles," Aerospace Science and Technology, vol. 92, pp. 446-463, 2019.
[2] C. Abbondanza, T. M. Chin, R. S. Gross, M. B. Heflin, J. W. Parker, B. S. Soja, T. van Dam, and X. Wu, "JTRF2014, the JPL Kalman filter and smoother realization of the International Terrestrial Reference System," Journal of Geophysical Research: Solid Earth, vol. 122, no. 10, pp. 8474-8510, 2017.
[3] J. Hu, Z. Wang, D. Chen, and F. E. Alsaadi, "Estimation, filtering and fusion for networked systems with network-induced phenomena: New progress and prospects," Information Fusion, vol. 31, pp. 65-75, 2016.
[4] L. He, D. Han, X. Wang, and L. Shi, "Optimal linear state estimation over a packet-dropping network using linear temporal coding," Automatica, vol. 49, no. 4, pp. 1075-1082, 2013.
[5] J. Ma and S. Sun, "Distributed fusion filter for networked stochastic uncertain systems with transmission delays and packet dropouts," Signal Processing, vol. 130, pp. 268-278, 2017.
[6] X. Wang, Y. Liang, Q. Pan, and Y. Wang, "Measurement random latency probability identification," IEEE Transactions on Automatic Control, vol. 61, no. 12, pp. 4210-4216, 2016.
[7] B. Yan, H. Lev-Ari, and A. M. Stanković, "Networked state estimation with delayed and irregularly spaced time-stamped observations," IEEE Transactions on Control of Network Systems, vol. 5, no. 3, pp. 888-900, 2017.
[8] A. Hermoso-Carazo and J. Linares-Pérez, "Unscented filtering algorithm using two-step randomly delayed observations in nonlinear systems," Applied Mathematical Modelling, vol. 33, no. 9, pp. 3705-3717, 2009.
[9] S. Sun, "Linear minimum variance estimators for systems with bounded random measurement delays and packet dropouts," Signal Processing, vol. 89, no. 7, pp. 1457-1466, 2009.
[10] S. Sun, "Optimal linear filters for discrete-time systems with randomly delayed and lost measurements with/without time stamps," IEEE Transactions on Automatic Control, vol. 58, no. 6, pp. 1551-1556, 2012.
[11] X. Wang, Y. Liang, Q. Pan, and C. Zhao, "Gaussian filter for nonlinear systems with one-step randomly delayed measurements," Automatica, vol. 49, no. 4, pp. 976-986, 2013.
[12] A. K. Singh, P. Date, and S. Bhaumik, "A modified Bayesian filter for randomly delayed measurements," IEEE Transactions on Automatic Control, vol. 62, no. 1, pp. 419-424, 2016.
[13] Y. Zhang, Y. Huang, N. Li, and L. Zhao, "Particle filter with one-step randomly delayed measurements and unknown latency probability," International Journal of Systems Science, vol. 47, no. 1, pp. 209-221, 2016.
[14] Y. Huang, Y. Zhang, N. Li, and L. Zhao, "Particle filter for nonlinear systems with multiple step randomly delayed measurements," Electronics Letters, vol. 51, no. 23, pp. 1859-1861, 2015.
[15] R. K. Tiwari, S. Bhaumik, and T. Kirubarajan, "Particle filter for randomly delayed measurements with unknown latency probability," Sensors, vol. 20, no. 19, p. 5689, 2020.
[16] S. Zhou and G. Feng, "H-infinity filtering for discrete-time systems with randomly varying sensor delays," Automatica, vol. 44, no. 7, pp. 1918-1922, 2008.
[17] X. Wang, Y. Liang, Q. Pan, C. Zhao, and F. Yang, "Design and implementation of Gaussian filter for nonlinear system with randomly delayed measurements and correlated noises," Applied Mathematics and Computation, vol. 232, pp. 1011-1024, 2014.
[18] S. Sun, L. Xie, and W. Xiao, "Optimal full-order filtering for discrete-time systems with random measurement delays and multiple packet dropouts," Journal of Control Theory and Applications, vol. 8, no. 1, pp. 105-110, 2010.
[19] X. Song, Z. Duan, and J. H. Park, "Linear optimal estimation for discrete-time systems with measurement-delay and packet dropping," Applied Mathematics and Computation, vol. 284, pp. 115-124, 2016.
[20] J. Ma and S. Sun, "Optimal linear estimators for systems with random sensor delays, multiple packet dropouts and uncertain observations," IEEE Transactions on Signal Processing, vol. 59, no. 11, pp. 5181-5192, 2011.
[21] I. Arasaratnam and S. Haykin, "Cubature Kalman filters," IEEE Transactions on Automatic Control, vol. 54, no. 6, pp. 1254-1269, 2009.
[22] S. J. Julier and J. K. Uhlmann, "New extension of the Kalman filter to nonlinear systems," in Signal Processing, Sensor Fusion, and Target Recognition VI, vol. 3068, International Society for Optics and Photonics, 1997, pp. 182-193.
[23] K. Ito and K. Xiong, "Gaussian filters for nonlinear filtering problems," IEEE Transactions on Automatic Control, vol. 45, no. 5, pp. 910-927, 2000.
[24] A. Doucet and A. M. Johansen, "A tutorial on particle filtering and smoothing: Fifteen years later," Handbook of Nonlinear Filtering, vol. 12, pp. 656-704, 2009.
[25] S. Bhaumik et al., "Cubature quadrature Kalman filter," IET Signal Processing, vol. 7, no. 7, pp. 533-541, 2013.
[26] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174-188, 2002.
Kernel-based quantum regressor models learn non-Markovianity

Diego Tancara (Centro de Óptica e Información Cuántica, Universidad Mayor, Santiago, Chile), Hossein T. Dinani (Escuela Data Science, Facultad de Ciencias, Ingeniería y Tecnología, Universidad Mayor, Santiago, Chile), Ariel Norambuena (Centro de Óptica e Información Cuántica, Universidad Mayor, Santiago, Chile), Felipe F. Fanchini (Faculdade de Ciências, UNESP - Universidade Estadual Paulista, 17033-360, Bauru, SP, Brazil), Raúl Coto (Department of Physics, Florida International University, Miami, Florida 33199, USA)

(Dated: September 26, 2022)

Quantum machine learning is a growing research field that aims to perform machine learning tasks assisted by a quantum computer. Kernel-based quantum machine learning models are paradigmatic examples where the kernel involves quantum states, and the Gram matrix is calculated from the overlap between these states. With the kernel at hand, a regular machine learning model is used for the learning process. In this paper we investigate the quantum support vector machine and quantum kernel ridge models to predict the degree of non-Markovianity of a quantum system. We perform digital quantum simulation of amplitude damping and phase damping channels to create our quantum dataset. We elaborate on different kernel functions to map the data and kernel circuits to compute the overlap between quantum states. We show that our models deliver accurate predictions that are comparable with the fully classical models.
I. INTRODUCTION

During the last decades we have witnessed the rapid growth of the fields of Artificial Intelligence (AI) and Quantum Computing (QC). The foundations of AI and QC were developed in the past century; however, it is only now that this knowledge is widely available for research, business, and health, among other areas. AI aims to provide machines with human-like intelligence. From the very beginning, AI has been conceived in different ways, leading to the development of different branches, known as Machine Learning (ML) [1-3], Deep Learning [4] and Reinforcement Learning [5]. ML is based on statistical learning, where the machine learns from data that has already been labelled (supervised learning) or from unlabelled data (unsupervised learning). In recent years, supervised learning has undoubtedly impacted physics [2,3,6]. In particular, it is known for unravelling patterns in datasets that reveal quantum phase transitions [7,8]. Quantum computing is also at the forefront of current technologies. Nowadays, research groups have delivered highly functional and fault-tolerant quantum algorithms encompassing a wide variety of systems, including superconducting qubits [9,10], trapped ions [11], cold atoms [12], photonics [13,14] and color centers in diamond [15].
In the last years, quantum computers have pushed further the boundaries of physics, chemistry, biology, and computing itself, with groundbreaking achievements in the simulation of novel materials [16] and molecules [9,13,17,18], and in designing algorithms towards quantum supremacy [10,19] and quantum machine learning. Among the main obstacles to be overcome in the development of quantum technologies is the interaction of the quantum system with the environment. This interaction disturbs the quantum state and, in general, can be divided into two types of processes: Markovian and non-Markovian [41]. Non-Markovian processes are those in which memory effects are taken into account, and their importance can be noted in several processes and protocols such as state teleportation [42], quantum metrology [43] and even in current quantum computers [44].

* Electronic address: [email protected]

In this paper we use quantum machine learning to determine the degree of non-Markovianity of a quantum process. We focus on kernel-based machine learning models to learn from quantum states. Our results show that the quantum computer can not only create the dataset, but also treat and learn from it, providing feedback on the very process in which it is involved. The paper is organized as follows. In Sec. II we introduce two quantum machine learning models based on kernels, namely the Quantum Support Vector Machine and Quantum Kernel Ridge models. The goal of these models is to estimate the degree of non-Markovianity from a dataset made of quantum states. Furthermore, we elaborate on the performance of the models based on three different kernel functions and four different kernel circuits used to measure the overlap between two quantum states. All these possible combinations yield different Gram matrices. In Sec. III, we introduce the Digital Quantum Simulation approach that we followed to describe the evolution of the system in Amplitude Damping and Phase Damping channels. In Sec.
IV, we show our main results regarding the prediction of the degree of non-Markovianity. In Sec. V we deliver the final remarks of this work.

Quantum machine learning aims to perform machine learning tasks assisted by a quantum computer. In recent years, different implementations have been addressed, including Variational Quantum Circuits [45-47], quantum Nearest-Neighbor methods [21] and quantum Kernel Methods [20,23,35]. The latter naturally appear in models that support a kernel function to represent the data in a feature space. Two well-understood examples are the Support Vector Machine (SVM) and the Kernel Ridge Regressor (KRR) models. Their extension to the quantum domain via a precomputed kernel is straightforward. Next, we describe the SVM and KRR models and their connection with the kernel.

A. Support Vector Machine

One of the most broadly used models in ML is the Support Vector Machine (SVM) [48]. This model can be used for classification [49,50] and regression [48,51,52] tasks. The former gives rise to an intuitive representation that relies on a hyperplane that splits the dataset into different classes. Predicting the label of unknown data then only depends on where the data samples fall relative to the hyperplane. In general, other models also use a hyperplane. However, the SVM sets the maximum margin, i.e., it maximizes the distance between the hyperplane and some of the boundary training data, which are the data samples close to the edge of the class. These particular samples are known as support vectors (SVs). Since the SVs are a subset of the training dataset, this model is suitable for situations where the number of training samples is small compared to the feature vector's dimension. Once the model has fitted the training dataset, it can be used as a decision function that predicts new samples without holding the training dataset in memory (an eager learning algorithm).
In this work we will focus on a regression task, which predicts a real number rather than a class. In what follows, we briefly describe the mathematical formulation of the optimization problem; more details can be found in Ref. [53]. SVM delivers the tools for finding a function f(x) that fits the training dataset {x_i, y_i}, where x_i ∈ R^d are the feature vectors with dimension d and y_i ∈ R are the corresponding labels. Note that i runs over the number of training samples (i = 1, 2, …, l). We begin with the linear function f(x) = w·x + b, with w ∈ R^d and b ∈ R being fitting parameters; we shall discuss the case of non-linearly separable data later on. For ε-SVM [48], deviations of f(x) from the labeled data y_i must be smaller than ε, i.e. |f(x) − y_i| ≤ ε. Moreover, we must address the model complexity, as given by the ℓ₂-norm ‖w‖², and the tolerance for deviations larger than ε through the slack variables ξ_i, ξ_i*, which are weighted by C > 0. Therefore, the optimization problem can be stated as [1,48,52]

minimize (1/2)‖w‖² + C Σ_i (ξ_i + ξ_i*)
subject to
  y_i − w·x_i − b ≤ ε + ξ_i,
  w·x_i + b − y_i ≤ ε + ξ_i*,
  ξ_i, ξ_i* ≥ 0.   (1)

One can solve this problem by introducing the Lagrange multipliers α_i, α_i*, η_i, η_i* ≥ 0, with the Lagrangian defined as [48,51,52]

L = (1/2)‖w‖² + C Σ_i (ξ_i + ξ_i*) − Σ_i (η_i ξ_i + η_i* ξ_i*) − Σ_i α_i (ε + ξ_i − y_i + w·x_i + b) − Σ_i α_i* (ε + ξ_i* + y_i − w·x_i − b).   (2)

From the vanishing partial derivatives ∂_b L, ∂_w L, ∂_ξ L and ∂_{ξ*} L, the optimization problem can be recast as

maximize −(1/2) Σ_{i,j} (α_i − α_i*)(α_j − α_j*) ⟨x_i, x_j⟩ − ε Σ_i (α_i + α_i*) + Σ_i y_i (α_i − α_i*)
subject to Σ_i (α_i − α_i*) = 0, α_i, α_i* ∈ [0, C].   (3)

For convenience, we have written the dot product as an inner product, ⟨x_i, x_j⟩ = x_i · x_j.
From ∂_w L = 0 we find w = Σ_i (α_i − α_i*) x_i, which leads to the decision function

f(x) = Σ_i (α_i − α_i*) ⟨x_i, x⟩ + b,   (4)

which depends on the inner product between the unlabeled data x and the training data x_i. We can recover b from the Karush-Kuhn-Tucker (KKT) condition, which states that at the solution point of the Lagrangian the product between the Lagrange multipliers and the constraints vanishes. We remark that this calculation is performed internally by the scikit-learn library [1]. We would like to stress that the decision function in Eq. (4) has a sparse representation in terms of α_i, α_i*: only a small subset of the training dataset (the support vectors) contributes to the decision function. In Appendix A, we show the arguments for the sparsity and the calculation of b. We have introduced so far a linear decision function that can handle linearly separated data. For nonlinearly separated data, it is possible to define a clever kernel function k(x_i, x) that generalizes ⟨x_i, x⟩ by taking the samples to a higher-dimensional space where they are linearly separable. We elaborate further on this idea later on.

B. Kernel Ridge Regressor

Kernel Ridge Regression (KRR) is another important nonlinear machine learning model. It has been successfully used to predict the evolution of quantum systems [54]. It combines Ridge Regression with the kernel trick [1,55]. The former provides a linear solution based on least squares with ℓ₂ regularization that penalizes large coefficients. As in SVM, the ℓ₂-norm prevents model complexity, while the kernel allows the model to learn a nonlinear function in the original space. This model offers a straightforward optimization problem stated by [1]

minimize Σ_{i=1}^{N} ‖w·x_i − y_i‖² + α‖w‖².   (5)
The above problem can be written in an equivalent way as [55]

minimize Σ_{i=1}^{N} (y_i − w·x_i − b)², subject to ‖w‖² ≤ α_d,   (6)

where there is a one-to-one correspondence between the hyperparameters α and α_d. Introducing the Lagrange multipliers as in the previous subsection, the decision function can be found as

f(x) = Σ_i β_i k(x_i, x) + b.   (7)

It is worth noting that SVM and KRR are similar in terms of the ℓ₂ regularization, and that both use the kernel trick, but the loss function is different. While SVM relies on a linear ε-insensitive loss, KRR uses a squared error loss. The former implies that all the training points whose errors fall inside the ε-tube do not contribute to the solution, which originates sparseness. In contrast, KRR considers all the training points. This yields differences in the performance of these models. Machine learning algorithms have greatly profited from kernel functions [6,28,35]. Therefore, we now introduce a generalization of the decision function to learn from nonlinear data. The kernel can be understood as a measure of similarity between two vectors, and it supports representations ranging from polynomial to exponential functions [1]. Throughout this paper we consider three different functions for the kernel k(x_i, x_j), namely: linear, ⟨x_i, x_j⟩ + c; polynomial, (⟨x_i, x_j⟩ + c)^d; and exponential, exp(−σ(1 − ⟨x_i, x_j⟩)). We have so far addressed the classical part (the optimization problem) of this hybrid quantum machine learning approach. In the next subsection we will focus on implementing the kernel through a quantum circuit.

C. Quantum Kernels

We have noted that the kernel provides efficient separability in nonlinear regions. The main idea behind the kernel is that it allows the data to be mapped to a higher-dimensional space, termed the "feature space" [53]. In general lines, let us consider a feature map φ : x ∈ χ → φ(x) ∈ H that encodes information from a certain domain χ (commonly χ ⊆ R^n) into a feature space H. The advantage of using the map relies on the "kernel trick" [6], which allows us to set the decision function without the explicit calculation of φ(x).
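The kernel-trick workflow just described can be exercised end to end with a precomputed Gram matrix, which is exactly the interface a quantum kernel plugs into. A minimal sketch with scikit-learn follows; the data and kernel choice here are illustrative placeholders, not the paper's dataset.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(40, 2))               # feature vectors x_i
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=40)  # labels y_i

# Exponential kernel exp(-sigma * (1 - <x_i, x_j>)) as a precomputed Gram matrix.
sigma = 1.0
G = np.exp(-sigma * (1.0 - X @ X.T))

svr = SVR(kernel="precomputed", C=10.0, epsilon=0.05).fit(G, y)
krr = KernelRidge(kernel="precomputed", alpha=0.1).fit(G, y)

# Predicting requires the kernel between new samples and the *training* samples.
G_new = np.exp(-sigma * (1.0 - X[:5] @ X.T))
pred_svr = svr.predict(G_new)
pred_krr = krr.predict(G_new)
```

As discussed above, only the support vectors contribute to the SVR decision function, while KRR uses all the training points; with `kernel="precomputed"` the Gram matrix could equally come from a quantum circuit.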
This idea has encouraged researchers to bridge classical and quantum machine learning [25,26,35]. Let us consider a Hilbert space H that contains the states of a quantum system. Now, instead of encoding the information of χ in a feature space given by functions φ(x), with x ∈ χ, the information is encoded in quantum states |φ(x)⟩ ∈ H [35,56,57], which is known as quantum embedding. Quantum embedding is a crucial step in the process and, in some cases, may lead to a disadvantage against classical models. To overcome this, we resorted to performing digital quantum simulation of the quantum dynamics rather than classical simulation [53], which allows us to handle quantum states to build up the kernel. Thus, we train our model with a symmetric and positive semi-definite matrix (the Gram matrix) rather than with the data samples (quantum states). The next step is to calculate the kernel from the training samples ρ_i. A natural choice is the pairwise overlap between the quantum states, Tr[ρ_i ρ_j], which is commonly computed with the Swap test [58,59]. In what follows we describe the circuit implementations. First, we encode the information into two different qubits. Each of these qubits undergoes a non-Markovian evolution (induced by independent ancilla qubits). Then the overlap between states ρ_i and ρ_j yields the matrix element k(θ_i, θ_j) = Tr[ρ_i ρ_j], where θ_i is the parameter that controls the non-Markovian evolution. We note that for the case of pure states, ρ_i = |ψ_i⟩⟨ψ_i| and ρ_j = |ψ_j⟩⟨ψ_j|, the kernel simply reduces to |⟨ψ_i|ψ_j⟩|². We describe next different implementations of the overlap.

Swap test

The Swap test is a high-level sequence of quantum operations that involves two data qubits, an ancilla qubit, two-qubit (CNOT) gates, one-qubit gates, and a final measurement on the ancilla [58]; see Fig. 1. By measuring the probability of finding the ancilla in the state |0⟩ (P_0), one obtains the state overlap by computing Tr[ρ_i ρ_j] = 2P_0 − 1.
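As a classical stand-in for the Swap-test estimate, the Gram matrix of overlaps Tr[ρ_i ρ_j] can be computed directly from density matrices. The single-qubit parametrization below is an illustrative toy, not the paper's non-Markovian channel.

```python
import numpy as np

def pure_state(theta):
    """Density matrix of cos(theta/2)|0> + sin(theta/2)|1>."""
    psi = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])
    return np.outer(psi, psi.conj())

thetas = np.linspace(0.0, np.pi, 8)
rhos = [pure_state(t) for t in thetas]

# Gram-matrix element k(theta_i, theta_j) = Tr[rho_i rho_j]; on hardware this
# entry is read off a Swap test as 2*P0 - 1.
K = np.array([[np.trace(ri @ rj).real for rj in rhos] for ri in rhos])
```

For pure states K reduces to |⟨ψ_i|ψ_j⟩|², and the resulting matrix is symmetric and positive semi-definite, as required of a Gram matrix fed to the regressors above.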
Inversion test

Our second kernel considers the quantum state of a closed system (unitary evolution) that encompasses the system qubit and the environment ancilla qubit [60]. It begins with two different quantum states driven by a unitary evolution U(θ), such that |Ψ_θ⟩ = U(θ)|00⟩, with |00⟩ = |0⟩_s ⊗ |0⟩_a. The kernel is defined as the squared absolute value of the projection between these two states, which is equivalent to two subsequent evolutions, assuming that the inverse evolution U†(θ_i) can be implemented. The matrix element reads

k(θ_i, θ_j) = |⟨Ψ_{θ_i}|Ψ_{θ_j}⟩|² = |⟨00| U†(θ_i) U(θ_j) |00⟩|² = |⟨00|Θ⟩|²,  (8)

where |Θ⟩ = U†(θ_i) U(θ_j) |00⟩. In contrast to the Swap-test kernel, this one requires two measurements, which allows us to decrease the number of quantum registers (Fig. 1). We remark that this kernel is not experimentally feasible for the particular goal of detecting non-Markovianity. In general, one has no access to perform measurements upon the environment. In addition, it requires reversing the unitary system-environment interaction. Nevertheless, we consider it because it may be applied to other machine learning tasks [60] and it delivers the best accuracy we found in this paper.

Ancilla-based algorithm

The ancilla-based algorithm (ABA) is a variation of the Swap test that conveniently reduces the number of gates. It was first discovered in the context of quantum optics [61], and later rediscovered with the assistance of a neural network and introduced for quantum circuits [59]. The circuit is depicted in Fig. 1.

Bell-basis algorithm

The Bell-basis algorithm (BBA) requires fewer resources than the previous one (ABA), but demands Bell-basis measurements on all the system qubits [59]. The circuit is depicted in Fig. 1.
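Equation (8) is straightforward to sanity-check numerically. The sketch below uses a toy two-qubit unitary U(θ) = CNOT · (R_y(θ) ⊗ I) as a hypothetical stand-in for the system-environment evolution (it is not the paper's damping channel); for this choice U(θ)|00⟩ = cos(θ/2)|00⟩ + sin(θ/2)|11⟩, so the kernel reduces analytically to cos²((θ_i − θ_j)/2):

```python
import numpy as np

def Ry(t):
    """Single-qubit y-rotation."""
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def U(theta):
    """Toy system+ancilla evolution: entangles the two qubits."""
    return CNOT @ np.kron(Ry(theta), np.eye(2))

def inversion_kernel(ti, tj):
    zero = np.zeros(4)
    zero[0] = 1.0                          # |00>
    amp = U(ti).conj().T @ U(tj) @ zero    # |Theta> = U†(θi) U(θj) |00>
    return float(abs(amp[0]) ** 2)         # |<00|Theta>|²
```

As required, the kernel is 1 when θ_i = θ_j and decays as the two evolutions become more distinguishable.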
In this paper we do not intend to explicitly compare the accuracy of all these approaches for estimating the overlap (for a comparison between the Swap test, ABA, and BBA see [59]). We will compare them in terms of the accuracy of the decision function. In the next section we describe the quantum circuits that account for the interaction between the system qubit and the environment ancilla that ultimately yields non-Markovianity.

III. DIGITAL QUANTUM SIMULATION OF NON-MARKOVIAN CHANNELS

The main purpose of this paper is to determine the degree of non-Markovianity of a quantum process using a quantum machine learning algorithm. We begin by simulating two non-Markovian channels, amplitude damping and phase damping, whose degree of non-Markovianity can be controlled. For this purpose we simulate the processes using standard circuit routines, taking auxiliary qubits to represent the environment. In this section, we show how the degree of non-Markovianity is calculated and present how the non-Markovian amplitude damping and phase damping processes can be simulated using a quantum circuit.

A. Calculating the degree of non-Markovianity

There are different ways to measure the degree of non-Markovianity. The most popular measures are based on the trace distance dynamics [62], the dynamics of entanglement [63,64], and mutual information [65], among others [66]. In this paper we consider the measure based on the entanglement dynamics of a bipartite quantum state that encompasses the system, which interacts with the environment, and an ancilla qubit that is isolated from it [64]. It is worth noticing that this ancilla only serves the purpose of quantifying non-Markovianity and is not implemented in the quantum circuits, in contrast to the ancilla used to simulate the effect of the environment for the amplitude damping and phase damping processes. A monotonic decrease in the entanglement of the bipartite system implies that the dynamics is Markovian.
An increase in the entanglement during the evolution is a result of memory effects and thus of non-Markovianity. The measure can be calculated as

N = max ∫_{dE(t)/dt > 0} (dE(t)/dt) dt,  (9)

where the maximization is done over all initial states and E is the measure of entanglement. It has been found that the maximization is achieved for Bell states [67]. Therefore, we consider a bipartite system in a Bell state and use the concurrence as the measure of entanglement [68].

B. Amplitude Damping

For the amplitude damping (AD) channel, we consider a qubit interacting with a bath of harmonic oscillators, given by the Hamiltonian (ħ = 1) [69,70]

H = ω_0 σ_+ σ_− + Σ_k ω_k a_k† a_k + Σ_k (g_k* σ_+ a_k + g_k σ_− a_k†).  (10)

Here, σ_+ = σ_−† = |1⟩⟨0|, with |1⟩ (|0⟩) corresponding to the excited (ground) state of the qubit with transition frequency ω_0; a_k (a_k†) is the annihilation (creation) operator of the k-th mode of the bath with frequency ω_k, and g_k is the coupling between the qubit and the k-th mode. We assume that the bath has a Lorentzian spectral density

J(ω) = (1/2π) γ_0 λ² / [(ω_0 − ω)² + λ²],  (11)

where λ ≈ 1/τ_r, with τ_r being the environment correlation time, and γ_0 ≈ 1/τ_s, where τ_s is the typical time scale of the system. The dynamics of the qubit that is coupled resonantly with the environment can be expressed as

ρ(t) = Σ_{i=0}^{1} M_i(t) ρ(0) M_i†(t),  (12)

where the Kraus operators are given by [71,72]

M_0(t) = |0⟩⟨0| + √(p(t)) |1⟩⟨1|,  (13)
M_1(t) = √(1 − p(t)) |0⟩⟨1|,  (14)

in which

p(t) = e^{−λt} [ (λ/d) sinh(dt/2) + cosh(dt/2) ]²,  (15)

with d = √(λ² − 2γ_0 λ). The dynamics is known to be non-Markovian in the strong-coupling regime λ < 2γ_0 (τ_s < 2τ_r) [73]. The AD process can be simulated for a general scenario with a quantum circuit via an ancilla qubit [71,72]. After tracing out the ancilla qubit we obtain the desired mixed state. Figure 2 shows the quantum circuit.
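For a Bell state sent through the one-sided AD channel above, the concurrence reduces analytically to C(t) = √(p(t)), so the measure of Eq. (9) can be evaluated from p(t) alone. The following sketch (our own parameter choices and discretization; the sum of positive increments of C is a discrete stand-in for the integral) illustrates both regimes:

```python
import numpy as np

def p_amp(t, lam=0.1, gamma0=1.0):
    """p(t) of Eq. (15); lam < 2*gamma0 is the non-Markovian regime.
    Complex sqrt handles d becoming imaginary (oscillatory revivals)."""
    d = np.sqrt(complex(lam ** 2 - 2 * gamma0 * lam))
    g = np.exp(-lam * t / 2) * (np.cosh(d * t / 2)
                                + (lam / d) * np.sinh(d * t / 2))
    return np.clip(np.abs(g) ** 2, 0.0, 1.0)

def concurrence_bell_ad(t, **kw):
    """Concurrence of (|00>+|11>)/sqrt(2) after AD on one qubit.
    With M0 = |0><0| + sqrt(p)|1><1| and M1 = sqrt(1-p)|0><1|,
    the resulting X-state has concurrence C = sqrt(p)."""
    return np.sqrt(p_amp(t, **kw))

def nm_degree(lam=0.1, gamma0=1.0, T=60.0, n=4000):
    """Sum the positive increments of C(t), cf. Eq. (9)."""
    ts = np.linspace(0.0, T, n)
    C = concurrence_bell_ad(ts, lam=lam, gamma0=gamma0)
    dC = np.diff(C)
    return float(np.sum(dC[dC > 0]))
```

In the weak-coupling regime (λ > 2γ_0) the decay is monotonic and the measure vanishes, while for λ < 2γ_0 the entanglement revives and the measure is strictly positive.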
The Hadamard gate prepares the qubit in the superposition state (|0⟩ + |1⟩)/√2, while the controlled rotation and CNOT gates simulate the interaction of the qubit with the environment. In this circuit, the angle θ_a is given by [71,72]

θ_a = 2 arccos(√(p(t))),  (16)

where p(t) is given in Eq. (15).

FIG. 2. Quantum circuits for simulating the amplitude damping (AD) and phase damping (PD) channels.

C. Phase Damping

For the phase damping (PD) channel, following Ref. [74], we consider a qubit undergoing decoherence induced by a colored noise, given by the time-dependent Hamiltonian (ħ = 1)

H(t) = Γ(t) σ_z.  (17)

Here, Γ(t) is a random variable that obeys the statistics of a random telegraph signal, defined as Γ(t) = α(−1)^{n(t)}, where α is the coupling between the qubit and the external influences, n(t) is a random variable with a Poisson distribution with mean t/(2τ), and σ_z is the Pauli z operator. In this case, the dynamics of the qubit is given by the following Kraus operators [74]

M_0(t) = √[(1 + Λ(t))/2] I,   M_1(t) = √[(1 − Λ(t))/2] σ_z,  (18)

where

Λ(t) = e^{−t/(2τ)} [ cos(μt/(2τ)) + (1/μ) sin(μt/(2τ)) ],  (20)

with μ = √((4ατ)² − 1), and I being the identity matrix. For ατ > 1/4 the dynamics is non-Markovian, while for ατ < 1/4 it is Markovian. The PD channel can be simulated using a quantum circuit, shown in Fig. 2 [71]. In this circuit, the Hadamard gate prepares the qubit in the superposition state and the controlled rotation simulates the interaction with the environment. The angle θ_p is given by

θ_p = 2 arccos(Λ(t)),  (21)

where Λ(t) is given in Eq. (20).

IV. RESULTS

We perform our simulations with the statevector simulator and the qasm simulator, integrated in the Aer package of IBM Qiskit [75]. For comparison, we also run simulations using the PennyLane library [76], obtaining similar outcomes. The statevector simulator is an ideal simulator that considers the evolution of the wavefunction. In contrast, the qasm simulator mimics the open dynamics of the IBM quantum computer.
This means that it considers losses and shot noise. However, it allows us to set all qubits equal and fully connected (without relying on a specific quantum hardware). It is well known that the quantum state of a qubit can be represented as a point in a sphere of radius one (the Bloch sphere). A generic state can be written in terms of the expectation values as

ρ = (1/2) ( I + Σ_{i=x,y,z} ⟨σ_i⟩ σ_i ),  (22)

where I is the 2 × 2 identity matrix.

For illustration we first focus on the amplitude damping channel. In Fig. 3 we show the expectation values calculated using the statevector simulator and the qasm simulator. The former provides outcomes with no dispersion (top), as expected from an ideal simulation. On the other hand, the qasm simulator delivers more realistic results that include dispersion (bottom). This dispersion will be pivotal for selecting the best algorithm to compute the overlap, since the statevector simulator brings no significant difference in the prediction. In other words, simulations on the statevector simulator may be misleading when selecting a machine learning model.

In Fig. 4 we show the degree of NM for the amplitude damping channel as a function of the parameter θ (the rotation angle that controls NM, introduced in subsection III B). For the calculations, we used the qasm simulator with the exponential kernel function, which yields the best accuracy, as shown in Appendix B. For the exploration of the algorithms we focus only on QSVM. We manually seek optimal hyperparameters and report the prediction on the training dataset. A more robust analysis will be given later on. We can observe that the inversion test leads to a feature space that allows a better prediction of the degree of NM.

We now compare the performance of QSVM and QKRR. Hereafter, we focus on simulations on the qasm simulator for the inversion test with the exponential function. To prevent overfitting, we use two steps for cross-validation.
First, we use the train_test_split function in scikit-learn [1] to randomly split the training set from the test set. Then, we use the GridSearchCV function to explore the best-fitting hyperparameters for each model, with five-fold cross-validation. Thus, GridSearchCV provides the best estimator for the range of given parameters, averaged over five different samplings of the training set. Finally, we use these estimators to predict the test set, which contains the data that the model has not seen.

In Fig. 5 we show our predictions for amplitude damping and phase damping. One can observe that both models succeed in predicting the degree of non-Markovianity, apart from small differences in the score (mean squared error). However, there are important aspects that should be taken into account before selecting one over the other. First, we remark that QSVM requires less training data to deliver good fits. This is well known, and it results from the sparseness in the training samples (only the support vectors contribute). Therefore, QSVM provides a major advantage, given that the most time-consuming operation is the calculation of the Gram matrix; fewer training samples reduce the overall computation time. In contrast, we observe that QKRR improves as the number of data samples increases.

For comparison, we estimate the degree of non-Markovianity using a classical kernel, i.e., the radial basis function (RBF). We follow the procedure reported in Ref. [53], where the training is carried out with the expectation values (⟨σ_x⟩, ⟨σ_y⟩, ⟨σ_z⟩). Thus, instead of using quantum states to build up a kernel, we resort to classical data, i.e., measurement outcomes. However, the process to obtain the states to be measured is the same as outlined in Section III; in Ref. [53] the authors used a master-equation approach instead of digital quantum simulation. In Table I we show the mean squared errors of each model for the amplitude damping (AD) and phase damping (PD) channels.
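The cross-validation pipeline with a precomputed (quantum) kernel can be sketched with scikit-learn as follows. The Gram matrix here is a synthetic stand-in for the one obtained from the circuits, and the target is a toy function of θ; scikit-learn slices a precomputed kernel on both axes during cross-validation, so passing the square training Gram matrix to GridSearchCV works directly:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR

# synthetic "quantum" Gram matrix: exponential kernel over toy overlaps
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, np.pi, 40)
y = np.sin(theta) ** 2                                  # stand-in target
overlap = np.cos((theta[:, None] - theta[None, :]) / 2) ** 2
K = np.exp(-3.0 * (1.0 - overlap))

idx_tr, idx_te = train_test_split(np.arange(len(y)),
                                  test_size=0.25, random_state=1)
K_tr = K[np.ix_(idx_tr, idx_tr)]   # Gram among training samples
K_te = K[np.ix_(idx_te, idx_tr)]   # test rows vs training columns

grid = GridSearchCV(SVR(kernel="precomputed"),
                    {"C": [0.1, 1.0, 10.0], "epsilon": [1e-3, 1e-2]},
                    cv=5)
grid.fit(K_tr, y[idx_tr])
pred = grid.predict(K_te)          # prediction on unseen samples
```

The same precomputed Gram matrix can be fed to `sklearn.kernel_ridge.KernelRidge(kernel="precomputed")` to reproduce the QKRR side of the comparison.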
We remark that the quantum versions, those where the kernel is calculated from the overlap between quantum states, deliver accurate predictions that are comparable with the classical models, although we found that SVM with an RBF kernel provides the best accuracy, as evidenced by the mean squared error and the coefficient of determination R² (not shown here). This particular problem illustrates that extending the kernel to be quantum provides interesting insights and contributes to concatenating quantum blocks of operations. It does not necessarily outperform a fully classical training process, but it delivers useful outcomes.

V. CONCLUSIONS

In this paper we have thoroughly studied kernel-based quantum machine learning models to predict the degree of non-Markovianity using quantum data (quantum states). Each state is obtained through digital quantum simulation, where an ancilla qubit originates the non-Markovian behavior. We focus on two different decoherence channels, amplitude damping and phase damping. These quantum states are mapped to a Gram matrix by calculating their pairwise overlaps. We investigate different kernel functions, namely linear, polynomial, and exponential, and different kernel circuits to compute the overlap, namely the inversion test, the Bell-basis algorithm, the ancilla-based algorithm, and the Swap test. We found that the inversion test with the exponential function delivers the best results. We draw our attention to two well-known kernel-based machine learning models, the Support Vector Machine (SVM) and Kernel Ridge Regression (KRR). Because they are used with a precomputed quantum kernel, we dub them quantum SVM (QSVM) and quantum KRR (QKRR), respectively. By optimizing the learning process through cross-validation steps and grid search we achieved good accuracy with our models. We found QSVM to be slightly better than QKRR, not only in prediction accuracy, but also in requiring fewer training samples.
Finally, we compare our results with their classical counterpart, i.e., when using classical data (expectation values) to train the models. While there are no significant differences, we observe that SVM with an RBF kernel delivers the best performance. This means that, in this particular case, it is better to measure the system and then process the measurement outcomes with machine learning techniques.

Appendix A: Lagrangian calculations with SVM

We begin with the Lagrangian in Eq. (2),

L = (1/2)||w||² + C Σ_i (ξ_i + ξ_i*) − Σ_i (η_i ξ_i + η_i* ξ_i*) − Σ_i α_i (ǫ + ξ_i − y_i + ⟨w, x_i⟩ + b) − Σ_i α_i* (ǫ + ξ_i* + y_i − ⟨w, x_i⟩ − b).  (A1)

Taking the partial derivatives with respect to the primal variables (b, w, ξ_i, ξ_i*) yields

∂_b L = Σ_i (α_i* − α_i) = 0,  (A2)
∂_w L = w − Σ_i (α_i − α_i*) x_i = 0,  (A3)
∂_{ξ_i} L = C − α_i − η_i = 0,  (A4)
∂_{ξ_i*} L = C − α_i* − η_i* = 0.  (A5)

First, from the KKT conditions we obtain η_i ξ_i = 0. Multiplying Eq. (A4) by ξ_i, we deduce the relation

(C − α_i) ξ_i = 0.  (A6)

This means that only samples with α_i = C lie outside the ǫ-tube (ξ_i > 0). We now consider the second constraint,

α_i (ǫ + ξ_i − y_i + ⟨w, x_i⟩ + b) = 0.  (A7)

Note that all samples inside the ǫ-tube (|f(x_i) − y_i| < ǫ) have a vanishing Lagrange multiplier α_i, which leads to the sparse representation of f(x) in Eq. (4). A similar procedure can be followed for ξ_i*, η_i*, and α_i*, which allows one to obtain the value of b [52].

Appendix B: Kernel functions performance

We now compare three different functions for the kernel k(x_i, x_j), namely: linear ⟨x_i, x_j⟩, polynomial (⟨x_i, x_j⟩ + 0.1)³, and exponential exp(−3(1 − ⟨x_i, x_j⟩)). Figure 6 shows that the exponential kernel function provides the best fitting. The polynomial function is only considered for completeness, since a more thorough exploration of its parameters may lead to a better fitting.

FIG. 1. Quantum circuits compute the overlap between two quantum states in the kernel function to calculate the Gram matrix.
For the inversion test, U represents either the amplitude damping or the phase damping channel depicted in Fig. 2. For the ancilla-based algorithm (ABA), U = T†H [59].

FIG. 3. Expectation values delivered by the noisy qasm simulator exhibit small dispersion given by the shot noise, in contrast to the ideal statevector simulator. We only observe correlations in the plane defined by σ_x and σ_z.

FIG. 4. QSVM prediction of non-Markovianity as a function of the rotation angle θ for different kernel circuits. The inversion test outperforms the others. We set the hyperparameters {C = 0.5, ǫ = 0.01}.

FIG. 5. Both QSVM and QKRR deliver accurate predictions of the degree of non-Markovianity, based on the mean squared error score. For a small training dataset QSVM performs better (not shown here). For a sufficiently large number of points QKRR provides a smaller mean squared error.

TABLE I. The table shows the accuracy of the quantum and classical versions of the studied machine learning models. The hyperparameters for AD (PD) are, QSVM: C = 4 × 10⁻¹ (2 × 10⁻¹), ǫ = 10⁻²; QKRR: α = 10⁻¹ (2 × 10⁻¹); SVM: C = 10², ǫ = 10⁻³; KRR: α = 10⁻⁴ (10⁻⁵).

        QSVM         QKRR         SVM          KRR
AD   6.0 × 10⁻⁵   2.7 × 10⁻⁵   2.6 × 10⁻⁶   1.4 × 10⁻⁵
PD   3.3 × 10⁻⁴   1.6 × 10⁻⁴   5.9 × 10⁻⁵   1.8 × 10⁻⁴

.T. acknowledges support from Universidad Mayor through the Doctoral fellowship. A.N. acknowledges financial support from Fondecyt Iniciación No. 11220266.

FIG. 6.
Exponential kernel function delivers the best prediction of non-Markovianity.

[1] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(85):2825-2830, 2011.
[2] Giuseppe Carleo, Ignacio Cirac, Kyle Cranmer, Laurent Daudet, Maria Schuld, Naftali Tishby, Leslie Vogt-Maranto, and Lenka Zdeborová. Machine learning and the physical sciences. Rev. Mod. Phys., 91:045002, Dec 2019.
[3] Pankaj Mehta, Marin Bukov, Ching-Hao Wang, Alexandre G. R. Day, Clint Richardson, Charles K. Fisher, and David J. Schwab. A high-bias, low-variance introduction to machine learning for physicists. Physics Reports, 810:1-124, 2019.
[4] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. NIPS-W, 2017.
[5] R. S. Sutton and A. G.
Barto. Introduction to Reinforcement Learning. MIT Press, 1st edn., 1998.
[6] Vedran Dunjko and Hans J. Briegel. Machine learning & artificial intelligence in the quantum domain: a review of recent progress. Reports on Progress in Physics, 81(7):074001, Jun 2018.
[7] Juan Carrasquilla and Roger G. Melko. Machine learning phases of matter. Nature Physics, 13(5):431-434, 2017.
[8] Askery Canabarro, Felipe Fernandes Fanchini, André Luiz Malvezzi, Rodrigo Pereira, and Rafael Chaves. Unveiling phase transitions with machine learning. Phys. Rev. B, 100:045129, Jul 2019.
[9] Abhinav Kandala, Antonio Mezzacapo, Kristan Temme, Maika Takita, Markus Brink, Jerry M. Chow, and Jay M. Gambetta. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. Nature, 549(7671):242-246, 2017.
[10] Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, et al. Quantum supremacy using a programmable superconducting processor. Nature, 574(7779):505-510, 2019.
K Wright, K M Beck, S Debnath, J M Amini, Y Nam, N Grzesiak, J S Chen, N C Pisenti, M Chmielewski, C Collins, K M Hudek, J Mizrahi, J D Wong-Campos, S Allen, J Apisdorf, P Solomon, M Williams, A M Ducore, A Blinov, S M Kreikemeier, V Chaplin, M Keesan, C Monroe, J Kim, Nature Communications. 1015464K. Wright, K. M. Beck, S. Debnath, J. M. Amini, Y. Nam, N. Grzesiak, J. S. Chen, N. C. Pisenti, M. Chmielewski, C. Collins, K. M. Hudek, J. Mizrahi, J. D. Wong-Campos, S. Allen, J. Apisdorf, P. Solomon, M. Williams, A. M. Ducore, A. Blinov, S. M. Kreike- meier, V. Chaplin, M. Keesan, C. Monroe, and J. Kim. Benchmarking an 11-qubit quantum computer. Nature Communications, 10(1):5464, 2019. Multi-qubit entanglement and algorithms on a neutral-atom quantum computer. T M Graham, Y Song, J Scott, C Poole, L Phuttitarn, K Jooya, P Eichler, X Jiang, A Marra, B Grinkemeyer, M Kwon, M Ebert, J Cherek, M T Lichtman, M Gillette, J Gilbert, D Bowman, T Ballance, C Campbell, E D Dahl, O Crawford, N S Blunt, B Rogers, T Noel, M Saffman, Nature. 6047906T. M. Graham, Y. Song, J. Scott, C. Poole, L. Phutti- tarn, K. Jooya, P. Eichler, X. Jiang, A. Marra, B. Grinke- meyer, M. Kwon, M. Ebert, J. Cherek, M. T. Licht- man, M. Gillette, J. Gilbert, D. Bowman, T. Ballance, C. Campbell, E. D. Dahl, O. Crawford, N. S. Blunt, B. Rogers, T. Noel, and M. Saffman. Multi-qubit en- tanglement and algorithms on a neutral-atom quantum computer. Nature, 604(7906):457-462, 2022. A variational eigenvalue solver on a photonic quantum processor. Alberto Peruzzo, Jarrod Mcclean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J Love, Alán Aspuru-Guzik, Jeremy L O&apos;brien, Nature Communications. 514213Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man- Hong Yung, Xiao-Qi Zhou, Peter J. Love, Alán Aspuru- Guzik, and Jeremy L. O'Brien. A variational eigen- value solver on a photonic quantum processor. Nature Communications, 5(1):4213, 2014. 
[14] Juan Miguel Arrazola, Thomas R. Bromley, Josh Izaac, Casey R. Myers, Kamil Brádler, and Nathan Killoran. Machine learning method for state preparation and gate synthesis on photonic quantum computers. Quantum Science and Technology, 4(2):024004, Jan 2019.
[15] M. H. Abobeih, Y. Wang, J. Randall, S. J. H. Loenen, C. E. Bradley, M. Markham, D. J. Twitchen, B. M. Terhal, and T. H. Taminiau. Fault-tolerant operation of a logical qubit in a diamond quantum processor. Nature, 2022.
[16] R. Babbush, N. Wiebe, J. McClean, J. McClain, H. Neven, and G. K.-L. Chan. Low-depth quantum simulation of materials. Phys. Rev. X, 8:011044, 2018.
[17] P. J. J. O'Malley, R. Babbush, I. D. Kivlichan, J. Romero, J. R. McClean, R. Barends, J. Kelly, P. Roushan, A. Tranter, N. Ding, B. Campbell, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, A. G. Fowler, E. Jeffrey, E. Lucero, A. Megrant, J. Y. Mutus, M. Neeley, C. Neill, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. C. White, P. V. Coveney, P. J. Love, H. Neven, A.
Aspuru-Guzik, and J. M. Martinis. Scalable quantum simulation of molecular energies. Phys. Rev. X, 6:031007, Jul 2016.
[18] K. M. Nakanishi, K. Mitarai, and K. Fujii. Subspace-search variational quantum eigensolver for excited states. Phys. Rev. Research, 1:033062, 2019.
[19] X. Peng, Z. Liao, N. Xu, G. Qin, X. Zhou, D. Suter, and J. Du. Quantum adiabatic algorithm for factorization and its experimental implementation. Phys. Rev. Lett., 101:220405, 2008.
[20] P. Rebentrost, M. Mohseni, and S. Lloyd. Quantum support vector machine for big data classification. Phys. Rev. Lett., 113:130503, 2014.
[21] Nathan Wiebe, Ashish Kapoor, and Krysta M. Svore. Quantum algorithms for nearest-neighbor methods for supervised and unsupervised learning. Quantum Information & Computation, 15(3-4):0318-0358, 2015.
[22] X.-D. Cai, D. Wu, Z.-E. Su, M.-C. Chen, X.-L. Wang, L. Li, N.-L. Liu, C.-Y. Lu, and J.-W. Pan. Entanglement-based machine learning on a quantum computer. Phys. Rev. Lett., 114:110504, 2015.
[23] Z. Li, X. Liu, N. Xu, and J. Du. Experimental realization of a quantum support vector machine. Phys. Rev. Lett., 114:140504, 2015.
[24] Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd. Quantum machine learning. Nature, 549(7671):195-202, 2017.
[25] Vojtěch Havlíček, Antonio D. Córcoles, Kristan Temme, Aram W. Harrow, Abhinav Kandala, Jerry M. Chow, and Jay M. Gambetta. Supervised learning with quantum-enhanced feature spaces. Nature, 567(7747):209-212, 2019.
[26] Maria Schuld and Nathan Killoran. Quantum machine learning in feature Hilbert spaces. Phys. Rev. Lett., 122:040504, Feb 2019.
[27] Z. He, L. Li, S. Zheng, X. Zou, and H. Situ. Quantum speedup for pool-based active learning. Quantum Inf. Process., 18:345, 2019.
[28] R. Mengoni and A. Di Pierro. Kernel methods in quantum machine learning. Quantum Mach. Intell., 1:65, 2019.
[29] Karol Bartkiewicz, Clemens Gneiting, Antonín Černoch, Kateřina Jiráková, Karel Lemr, and Franco Nori. Experimental kernel-based quantum machine learning in finite feature space. Scientific Reports, 10(1):12356, 2020.
[30] Sonika Johri, Shantanu Debnath, Avinash Mocherla, Alexandros Singk, Anupam Prakash, Jungsang Kim, and Iordanis Kerenidis. Nearest centroid classification on a trapped ion quantum computer. npj Quantum Information, 7(1):122, 2021.
[31] D. Willsch, M. Willsch, H. De Raedt, and K. Michielsen. Support vector machines on the D-Wave quantum annealer. Computer Physics Communications, 248:107006, 2020.
[32] Yao Zhang and Qiang Ni. Recent advances in quantum machine learning. Quantum Engineering, 2(1):e34, 2020.
[33] D. K. Park, C. Blank, and F. Petruccione. The theory of the quantum kernel-based binary classifier. Physics Letters A, 384:126422, 2020.
[34] Tariq M. Khan and Antonio Robles-Kelly. Machine learning: Quantum vs classical. IEEE Access, 8:219275-219294, 2020.
[35] Maria Schuld. Quantum machine learning models are kernel methods. arXiv:2101.11020v2, 2021.
[36] Takahiro Goto, Quoc Hoan Tran, and Kohei Nakajima. Universal approximation property of quantum machine learning models in quantum-enhanced feature spaces. Physical Review Letters, 127(9):090506, August 2021.
[37] Xinbiao Wang, Yuxuan Du, Yong Luo, and Dacheng Tao.
Towards understanding the power of quantum kernels in the NISQ era. Quantum, 5:531, August 2021.
[38] Casper Gyurik, Dyon van Vreumingen, and Vedran Dunjko. Structural risk minimization for quantum linear classifiers, 2021.
[39] Seyran Saeedi, Aliakbar Panahi, and Tom Arodz. Quantum semi-supervised kernel learning. Quantum Machine Intelligence, 3(2):24, 2021.
[40] Chen Ding, Tian-Yi Bao, and He-Liang Huang. Quantum-inspired support vector machine. IEEE Transactions on Neural Networks and Learning Systems, pages 1-13, 2021.
[41] H.-P. Breuer and F. Petruccione. The Theory of Open Quantum Systems, 2007.
[42] Elsi-Mari Laine, Heinz-Peter Breuer, and Jyrki Piilo. Nonlocal memory effects allow perfect teleportation with mixed states. Scientific Reports, 4(1):4620, 2014.
[43] Alex W. Chin, Susana F. Huelga, and Martin B. Plenio. Quantum metrology in non-Markovian environments. Phys. Rev. Lett., 109:233601, Dec 2012.
[44] G. A. L. White, C. D. Hill, F. A. Pollock, L. C. L. Hollenberg, and K. Modi. Demonstration of non-Markovian process characterisation and control on a quantum processor.
Nature Communications, 11(1):6301, 2020. Stefan Sack, and Mattia Fiorentini. Parameterized quantum circuits as machine learning models. Marcello Benedetti, Erika Lloyd, Quantum Science and Technology. 4443001Marcello Benedetti, Erika Lloyd, Stefan Sack, and Mat- tia Fiorentini. Parameterized quantum circuits as ma- chine learning models. Quantum Science and Technology, 4(4):043001, nov 2019. . M Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R , M. Cerezo, Andrew Arrasmith, Ryan Babbush, Si- mon C. Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R. Variational quantum algorithms. Kosuke Mcclean, Xiao Mitarai, Lukasz Yuan, Patrick J Cincio, Coles, Nature Reviews Physics. 39McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, and Patrick J. Coles. Variational quantum algorithms. Nature Reviews Physics, 3(9):625-644, 2021. Estimating the degree of nonmarkovianity using quantum machine learning. T Hossein, Diego Dinani, Felipe F Tancara, Raul Fanchini, Coto, Hossein T. Dinani, Diego Tancara, Felipe F. Fan- chini, and Raul Coto. Estimating the degree of non- markovianity using quantum machine learning, 2022. The nature of statistical learning theory. V Vapnik, V. Vapnik. The nature of statistical learning theory., 1995. A tutorial on support vector machines for pattern recognition. J C Christopher, Burges, Data Mining and Knowledge Discovery. 22Christopher J. C. Burges. A tutorial on support vec- tor machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121-167, 1998. Universal learning curves of support vector machines. M Opper, R Urbanczik, Phys. Rev. Lett. 86M. Opper and R. Urbanczik. Universal learning curves of support vector machines. Phys. Rev. Lett., 86:4410- 4413, May 2001. New support vector algorithms. B Schölkopf, A J Smola, R C Williamson, P L Bartlett, Neural Computation. 121207B. Schölkopf, A. J. Smola, R. C. Williamson, and P. L. Bartlett. New support vector algorithms. 
Neural Computation, 12:1207, 2000. A tutorial on support vector regression. Alex J Smola, Bernhard Schölkopf, Statistics and Computing. 143Alex J. Smola and Bernhard Schölkopf. A tutorial on support vector regression. Statistics and Computing, 14(3):199-222, 2004. Estimating the degree of non-markovianity using machine learning. Felipe F Fanchini, Göktug Karpat, Daniel Z Rossatto, Ariel Norambuena, Raúl Coto, Phys. Rev. A. 10322425Felipe F. Fanchini, Göktug Karpat, Daniel Z. Rossatto, Ariel Norambuena, and Raúl Coto. Estimating the de- gree of non-markovianity using machine learning. Phys. Rev. A, 103:022425, Feb 2021. A comparative study of different machine learning methods for dissipative quantum dynamics. Luis E Herrera Rodriguez, Arif Ullah, J Rueda Kennet, Pavlo O Espinosa, Alexei A Dral, Kananenka, Luis E. Herrera Rodriguez, Arif Ullah, Kennet J. Rueda Espinosa, Pavlo O. Dral, and Alexei A. Kananenka. A comparative study of different machine learning methods for dissipative quantum dynamics, 2022. The elements of statistical learning: Data mining, inference, and prediction. T Hastie, R Tibshirani, J H Friedman, T. Hastie, R. Tibshirani, and J.H. Friedman. The el- ements of statistical learning: Data mining, inference, and prediction, 2009. Robust data encodings for quantum classifiers. Ryan Larose, Brian Coyle, Phys. Rev. A. 10232420Ryan LaRose and Brian Coyle. Robust data encodings for quantum classifiers. Phys. Rev. A, 102:032420, Sep 2020. Encoding patterns for quantum algorithms. Manuela Weigold, Johanna Barzen, Frank Leymann, Marie Salm, IET Quantum Communication. 24Manuela Weigold, Johanna Barzen, Frank Leymann, and Marie Salm. Encoding patterns for quantum algorithms. IET Quantum Communication, 2(4):141-152, 2021. Five two-bit quantum gates are sufficient to implement the quantum fredkin gate. John A Smolin, David P Divincenzo, Phys. Rev. A. 53John A. Smolin and David P. DiVincenzo. 
Five two-bit quantum gates are sufficient to implement the quantum fredkin gate. Phys. Rev. A, 53:2855-2856, Apr 1996. Learning the quantum algorithm for state overlap. Lukasz Cincio, Yigit Subaşı, T Andrew, Patrick J Sornborger, Coles, New Journal of Physics. 2011113022Lukasz Cincio, Yigit Subaşı, Andrew T Sornborger, and Patrick J Coles. Learning the quantum algorithm for state overlap. New Journal of Physics, 20(11):113022, nov 2018. Quantum embeddings for machine learning. Seth Lloyd, Maria Schuld, Aroosa Ijaz, Josh Izaac, Nathan Killoran, Seth Lloyd, Maria Schuld, Aroosa Ijaz, Josh Izaac, and Nathan Killoran. Quantum embeddings for machine learning, 2020. swap test and hong-ou-mandel effect are equivalent. Juan Carlos Garcia-Escartin, Pedro Chamorro-Posada, Phys. Rev. A. 8752330Juan Carlos Garcia-Escartin and Pedro Chamorro- Posada. swap test and hong-ou-mandel effect are equiv- alent. Phys. Rev. A, 87:052330, May 2013. Measure for the degree of non-markovian behavior of quantum processes in open systems. Heinz-Peter Breuer, Elsi-Mari Laine, Jyrki Piilo, Phys. Rev. Lett. 103210401Heinz-Peter Breuer, Elsi-Mari Laine, and Jyrki Piilo. Measure for the degree of non-markovian behavior of quantum processes in open systems. Phys. Rev. Lett., 103:210401, Nov 2009. Measures of non-markovianity: Divisibility versus backflow of information. Dariusz Chruściński, Andrzej Kossakowski, Rivas, Phys. Rev. A. 8352128Dariusz Chruściński, Andrzej Kossakowski, andÁngel Rivas. Measures of non-markovianity: Divisibility versus backflow of information. Phys. Rev. A, 83:052128, May 2011. Entanglement and non-markovianity of quantum evolutions. Ángel Rivas, Susana F Huelga, Martin B Plenio, Phys. Rev. Lett. 10550403Ángel Rivas, Susana F. Huelga, and Martin B. Plenio. Entanglement and non-markovianity of quantum evolu- tions. Phys. Rev. Lett., 105:050403, Jul 2010. Quantifying non-markovianity via correlations. Shunlong Luo, Shuangshuang Fu, Hongting Song, Phys. Rev. A. 
8644101Shunlong Luo, Shuangshuang Fu, and Hongting Song. Quantifying non-markovianity via correlations. Phys. Rev. A, 86:044101, Oct 2012. Operational markov condition for quantum processes. Felix A Pollock, César Rodríguez-Rosario, Thomas Frauenheim, Mauro Paternostro, Kavan Modi, Phys. Rev. Lett. 12040405Felix A. Pollock, César Rodríguez-Rosario, Thomas Frauenheim, Mauro Paternostro, and Kavan Modi. Op- erational markov condition for quantum processes. Phys. Rev. Lett., 120:040405, Jan 2018. Inequivalence of correlation-based measures of non-markovianity. Alaor Cervati Neto, Göktug Karpat, Felipe Fernandes Fanchini, Phys. Rev. A. 9432105Alaor Cervati Neto, Göktug Karpat, and Felipe Fernan- des Fanchini. Inequivalence of correlation-based mea- sures of non-markovianity. Phys. Rev. A, 94:032105, Sep 2016. Entanglement of a pair of quantum bits. Scott Hill, William K Wootters, Phys. Rev. Lett. 78Scott Hill and William K. Wootters. Entanglement of a pair of quantum bits. Phys. Rev. Lett., 78:5022-5025, Jun 1997. Non-markovian dynamics of a damped driven two-state system. P Haikka, S Maniscalco, Phys. Rev. A. 8152103P. Haikka and S. Maniscalco. Non-markovian dynamics of a damped driven two-state system. Phys. Rev. A, 81:052103, May 2010. Time-local heisenberg-langevin equations and the driven qubit. S J Whalen, H J Carmichael, Phys. Rev. A. 9363820S. J. Whalen and H. J. Carmichael. Time-local heisenberg-langevin equations and the driven qubit. Phys. Rev. A, 93:063820, Jun 2016. Quantum computation and quantum information. M A Nielsen, I Chuang, M. A. Nielsen and I. Chuang. Quantum computation and quantum information, 2000. Ibm q experience as a versatile experimental testbed for simulating open quantum systems. Guillermo García-Pérez, Matteo A C Rossi, S Maniscalco, npj Quantum Information. Guillermo García-Pérez, Matteo A. C. Rossi, and S. Man- iscalco. Ibm q experience as a versatile experimen- tal testbed for simulating open quantum systems. 
npj Quantum Information, page 1, Jun 2020. Nonmarkovian effects on the dynamics of entanglement. B Bellomo, R Lo Franco, G Compagno, Phys. Rev. Lett. 99160502B. Bellomo, R. Lo Franco, and G. Compagno. Non- markovian effects on the dynamics of entanglement. Phys. Rev. Lett., 99:160502, Oct 2007. Depolarizing channel as a completely positive map with memory. Sonja Daffer, Krzysztof Wódkiewicz, James D Cresser, John K Mciver, Phys. Rev. A. 7010304Sonja Daffer, Krzysztof Wódkiewicz, James D. Cresser, and John K. McIver. Depolarizing channel as a com- pletely positive map with memory. Phys. Rev. A, 70:010304, Jul 2004. C David, Thomas Mckay, Luciano Alexander, Michael J Bello, Lev Biercuk, Jiayin Bishop, Jerry M Chen, Antonio D Chow, Daniel Córcoles, Stefan Egger, Juan Filipp, Michael Gomez, Ali Hush, Diego Javadi-Abhari, Moreda, James Wootton, and Jay M. Gambetta. Qiskit backend specifications for openqasm and openpulse experiments. Brent Paulovicks, Erick Winston, Christopher J. WoodPaul NationDavid C. McKay, Thomas Alexander, Luciano Bello, Michael J. Biercuk, Lev Bishop, Jiayin Chen, Jerry M. Chow, Antonio D. Córcoles, Daniel Egger, Stefan Filipp, Juan Gomez, Michael Hush, Ali Javadi-Abhari, Diego Moreda, Paul Nation, Brent Paulovicks, Erick Winston, Christopher J. Wood, James Wootton, and Jay M. Gam- betta. Qiskit backend specifications for openqasm and openpulse experiments, 2018. Automatic differentiation of hybrid quantum-classical computations. Ville Bergholm, Josh Izaac, Maria Schuld, Christian Gogolin, Shahnawaz Ahmed, Ville Bergholm, Josh Izaac, Maria Schuld, Christian Gogolin, Shahnawaz Ahmed et al. Pennylane: Automatic differentiation of hybrid quantum-classical computations, 2018.
{'abstract': 'Quantum machine learning is a growing research field that aims to perform machine learning tasks assisted by a quantum computer. Kernel-based quantum machine learning models are paradigmatic examples where the kernel involves quantum states, and the Gram matrix is calculated from the overlap between these states. With the kernel at hand, a regular machine learning model is used for the learning process. In this paper we investigate the quantum support vector machine and quantum kernel ridge models to predict the degree of non-Markovianity of a quantum system. We perform digital quantum simulation of amplitude damping and phase damping channels to create our quantum dataset. We elaborate on different kernel functions to map the data and kernel circuits to compute the overlap between quantum states. We show that our models deliver accurate predictions that are comparable with the fully classical models.', 'arxivid': '2209.11655', 'author': ['Diego Tancara \nCentro deÓptica e Información Cuántica\nUniversidad Mayor\nSantiagoChile\n', 'Hossein T Dinani \nEscuela Data Science\nFacultad de Ciencias\nIngenería y Tecnología\nUniversidad Mayor\nSantiagoChile\n', 'Ariel Norambuena \nCentro deÓptica e Información Cuántica\nUniversidad Mayor\nSantiagoChile\n', 'Felipe F Fanchini \nFaculdade de Ciências\nUNESP -Universidade Estadual Paulista\n17033-360BauruSPBrazil\n', 'Raúl Coto \nDepartment of Physics\nFlorida International University\n33199MiamiFloridaUSA\n'], 'authoraffiliation': ['Centro deÓptica e Información Cuántica\nUniversidad Mayor\nSantiagoChile', 'Escuela Data Science\nFacultad de Ciencias\nIngenería y Tecnología\nUniversidad Mayor\nSantiagoChile', 'Centro deÓptica e Información Cuántica\nUniversidad Mayor\nSantiagoChile', 'Faculdade de Ciências\nUNESP -Universidade Estadual Paulista\n17033-360BauruSPBrazil', 'Department of Physics\nFlorida International University\n33199MiamiFloridaUSA'], 'corpusid': 252519630, 'doi': '10.1103/physreva.107.022402', 
'github_urls': [], 'n_tokens_mistral': 18379, 'n_tokens_neox': 16103, 'n_words': 9173, 'pdfsha': '83f74f823a958e837edd4df896cee4b8b45851ae', 'pdfurls': ['https://export.arxiv.org/pdf/2209.11655v1.pdf'], 'title': ['Kernel-based quantum regressor models learn non-Markovianity', 'Kernel-based quantum regressor models learn non-Markovianity'], 'venue': []}
arxiv
Moving Planes and Singular Points of Rational Parametric Surfaces
14 Jan 2010, arXiv:0909.2810v2 [math.NA]
Falai Chen, Department of Mathematics, University of Science and Technology of China, Hefei 230026, Anhui, China
Xuhui Wang, Department of Mathematics, University of Science and Technology of China, Hefei 230026, Anhui, China; School of Mathematics, Hefei University of Technology, Hefei 230009, Anhui, China
Keywords: Rational parametric surface; Moving plane; µ-basis; Singular point

In this paper we discuss the relationship between the moving planes of a rational parametric surface and the singular points on it. Firstly, the intersection multiplicity of several planar curves is introduced. Then we derive an equivalent definition for the order of a singular point on a rational parametric surface. Based on the new definition of singularity orders, we derive the relationship between the moving planes of a rational surface and the order of singular points. Especially, the relationship between the µ-basis and the order of a singular point is also discussed.

Introduction

Given an algebraic surface f(x, y, z, w) = 0 in homogeneous form, the singular points of the surface are the points at which all the partial derivatives of f simultaneously vanish. Geometrically, a singular point on the surface is a point where the tangent plane is not uniquely defined, and it embodies geometric shape and topology information of the surface. Detecting and computing singular points has wide applications in Solid Modeling and Computer Aided Geometric Design (CAGD). To find the singular points of a parametric surface P(s, t), one can solve P_s(s, t) × P_t(s, t) = 0. However, to find the orders of the singularities, one has to resort to the implicit form. Let f(x, y, z, w) be the implicit equation of P(s, t).
A singular point Q = (x_0, y_0, z_0, w_0) of P(s, t) has order r if all the partial derivatives of f(x, y, z, w) of order up to r − 1 vanish at Q, and at least one of the r-th derivatives of f at Q is nonzero. Unfortunately, converting the parametric form of a surface into implicit form is a difficult task, which is still a hot topic of research [1,2,6,10,13,15]. Other methods for finding singular points, such as generalized resultants [3,17,18], do not need prior implicitization. However, they are not designed for detecting and computing the singular points of higher order (order ≥ 3). In this paper, we develop methods to treat singularities of rational parametric surfaces directly. Specifically, we use the moving surface technique to study singularities of rational parametric surfaces and the relationship between the order of the singularities and moving surfaces.

The remainder of this paper is organized as follows. In Section 2, we recall some preliminary results about the µ-basis of a rational surface. The notion of intersection multiplicity of several planar curves is also introduced. In Section 3, a new definition for the order of a singular point directly from the parametric equation of a surface is presented, and the equivalence of the new definition with the classic definition is proved. In Section 4, we discuss the relationship between the moving planes and µ-basis of a rational surface and the singular points of the rational surface. We conclude this paper with future research problems.

Preliminaries

Let R[s, t] be the ring of bivariate polynomials in s, t over the set of real numbers R. A rational parametric surface in homogeneous form is defined by

P(s, t) = (a(s, t), b(s, t), c(s, t), d(s, t)),   (1)

where a, b, c, d ∈ R[s, t] are polynomials with gcd(a, b, c, d) = 1. In order to apply the theory of algebraic geometry, sometimes we need to work with the homogeneous polynomials

P(s, t, u) = (a(s, t, u), b(s, t, u), c(s, t, u), d(s, t, u)).   (2)
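The first-order test mentioned in the introduction — solving P_s(s, t) × P_t(s, t) = 0 — is straightforward to carry out symbolically. A minimal sketch with sympy, using a toy affine parametrization of the cone xy = z² (the surface and the code are illustrative assumptions, not taken from the paper):

```python
import sympy as sp

s, t = sp.symbols('s t')

# Toy affine parametrization of the cone x*y = z^2.
P = sp.Matrix([s**2, t**2, s*t])

# Candidate singular points occur where the cross product of the
# partial derivatives P_s and P_t vanishes.
normal = P.diff(s).cross(P.diff(t))

candidates = sp.solve(list(normal), [s, t], dict=True)
print(candidates)               # [{s: 0, t: 0}]
print(P.subs(candidates[0]).T)  # the apex of the cone: (0, 0, 0)
```

As the text notes, this test only locates singular parameter values; it says nothing about the *order* of the singularity, which is the subject of the rest of the paper.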
A base point of a rational surface P(s, t) is a parameter pair (s_0, t_0) such that P(s_0, t_0) = 0. Note that even if the rational surface P(s, t) is real, the base points could be complex numbers and possibly at infinity. A moving plane is a family of planes with parameter pairs (s, t) [15]

A(s, t)x + B(s, t)y + C(s, t)z + D(s, t)w = 0,   (3)

where A(s, t), B(s, t), C(s, t), D(s, t) ∈ R[s, t]. A moving plane is said to follow the rational surface (1) if

A(s, t)a(s, t) + B(s, t)b(s, t) + C(s, t)c(s, t) + D(s, t)d(s, t) ≡ 0.   (4)

The moving plane (3) can be written in the vector form L(s, t) = (A(s, t), B(s, t), C(s, t), D(s, t)). Let L_st be the set of the moving planes which follow the rational surface P(s, t); then L_st is exactly the syzygy module syz(a, b, c, d) and is a free module of rank 3 [6]. A µ-basis of the rational surface (1) consists of three moving planes p, q, r following (1) such that [p, q, r] = κP(s, t), where κ is a nonzero constant and [p, q, r] is the outer product of p = (p_1, p_2, p_3, p_4), q = (q_1, q_2, q_3, q_4), and r = (r_1, r_2, r_3, r_4), defined by

[p, q, r] = ( det(p_2 p_3 p_4; q_2 q_3 q_4; r_2 r_3 r_4), −det(p_1 p_3 p_4; q_1 q_3 q_4; r_1 r_3 r_4), det(p_1 p_2 p_4; q_1 q_2 q_4; r_1 r_2 r_4), −det(p_1 p_2 p_3; q_1 q_2 q_3; r_1 r_2 r_3) ),

where det(·; ·; ·) denotes the 3 × 3 determinant whose rows are listed. A µ-basis forms a basis for the syzygy module L_st [6].

Definition 1. Let f(x, y, z, w) = 0 be the implicit equation of the parametric surface P(s, t). Then X_0 = (x_0, y_0, z_0, w_0) is a singular point of order r, or an r-fold point, if all the derivatives of f of order up to r − 1 are zero at X_0 and at least one r-th derivative of f does not vanish at X_0. Specifically, X_0 is a double point if and only if f_x(X_0) = f_y(X_0) = f_z(X_0) = f_w(X_0) = 0, and at least one of the second order derivatives is non-zero.

To discuss the order of singular points on a rational parametric surface, we need to recall some preliminary knowledge about the intersection multiplicity of several curves in P^2(C), the projective plane over the complex numbers. According to the above definition, we can define the intersection multiplicity of several planar curves in P^2(C). (If I^l J = I^{l+1}, then e(J, R) = e(I, R); J is a reduction ideal of I.) For planar curves C_1, C_2, . . .
, C_v which are defined by homogeneous equations f_1(s, t, u) = 0, . . . , f_v(s, t, u) = 0 respectively, let f_1, . . . , f_v be the local equations of C_1, C_2, . . . , C_v near a point p.

• If I is generated by a regular sequence, then e(I, R) = dim_k R/I; I is a complete intersection.

Proposition 3. [12] 1. I has a reduction ideal which is generated by a regular sequence. 2. The regular sequence can be chosen to be generic linear combinations of the generators of I.

The order of singular points on rational parametric surfaces

Given a rational parametric surface, we first give a definition of the order of singular points directly from the parametric equation.

Definition 4. For a rational surface (2), a point X_0 = (x_0, y_0, z_0, w_0) (wlog, assume w_0 ≠ 0) is an r-fold singular point if

w_0 a(s, t, u) − x_0 d(s, t, u) = w_0 b(s, t, u) − y_0 d(s, t, u) = w_0 c(s, t, u) − z_0 d(s, t, u) = 0   (5)

has r + λ intersection points (counting multiplicity) in the (s, t, u) plane, where the multiplicity is defined as in Definition 2, and λ is the number of base points of the surface.

We will show that the above definition is equivalent to the classic definition of the order of singularities (Definition 1).

Theorem 5. Definition 1 and Definition 4 are equivalent.

Proof. X_0 is an r-fold singular point on a surface if and only if, for a generic line passing through X_0, the line intersects the surface at n − r distinct points besides X_0, where n is the implicit degree of the surface. Without loss of generality, assume the singular point is at the origin, X_0 = (x_0, y_0, z_0, w_0) = (0, 0, 0, 1). Let the generic line be defined by l = L_1 ∩ L_2, where

L_1 : α_1 x + α_2 y + α_3 z = 0,  L_2 : β_1 x + β_2 y + β_3 z = 0.

Consider the two planar curves

C : g_1 = α_1 a(s, t, u) + . . . + α_3 c(s, t, u) = 0,  D : g_2 = β_1 a(s, t, u) + . . . + β_3 c(s, t, u) = 0.
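Definition 4 can be exercised on a concrete surface. The sketch below (an illustrative example, not from the paper) uses the affine Whitney umbrella x² = y²z with parametrization (st, s, t², 1), and counts only the affine preimages of a point on its self-intersection line; base points and intersections at infinity, which the definition also counts, are ignored here:

```python
import sympy as sp

s, t = sp.symbols('s t')

# Affine Whitney umbrella x^2 = y^2*z: (x, y, z) = (a/d, b/d, c/d).
a, b, c, d = s*t, s, t**2, sp.Integer(1)

# A point on the self-intersection line x = y = 0, z > 0.
x0, y0, z0, w0 = 0, 0, 1, 1

# System (5): w0*a - x0*d = w0*b - y0*d = w0*c - z0*d = 0.
eqs = [w0*a - x0*d, w0*b - y0*d, w0*c - z0*d]
preimages = sp.solve(eqs, [s, t], dict=True)
print(preimages)  # two preimages (s, t) = (0, ±1) -> a double point
```

The point has two distinct parameter preimages, matching its status as a double point of the surface.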
Let Z be the common zeros of a, b, c, let S be the surface, let l⁻ be the line l with the origin removed, and denote ϕ : (s, t, u) → P(s, t, u); then C ∩ D = ϕ⁻¹(S ∩ l⁻) ∪ Z. By Bezout's theorem, one has

n² = #(S ∩ l⁻) + Σ_{p∈Z} dim O_p/⟨g_1, g_2⟩_p.   (6)

From Proposition 3, we can get that ⟨g_1, g_2⟩_p is a reduction ideal of ⟨a(s, t), b(s, t), c(s, t)⟩_p, where p is an intersection point of g_1 = 0, g_2 = 0. Thus,

e(⟨a, b, c⟩_p, O_p) = e(⟨g_1, g_2⟩_p, O_p) = dim_k O_p/⟨g_1, g_2⟩_p.

Therefore, (6) is equivalent to

n² = #(S ∩ l⁻) + Σ_{p∈Z} e(⟨a, b, c⟩_p, O_p).   (7)

Let Z_1 be the point set which satisfies a = b = c = 0 and d ≠ 0, and Z_2 the point set which satisfies a = b = c = d = 0; then Z = Z_1 ∪ Z_2 and Z_1 ∩ Z_2 = ∅. Therefore, (7) is equivalent to

n² = #(S ∩ l⁻) + r + λ.   (8)

Since the implicit degree of the surface is n² − λ, we immediately get that Definition 1 and Definition 4 are equivalent.

Relationship between moving planes and singular points

In this section, we study the relationship between the moving planes and the order of singular points on a rational parametric surface.

Theorem 6. Let P(s, t, u) be a parametric surface with no base points, and let L(s, t, u) be a moving plane following P(s, t, u). If X_0 = (x_0, y_0, z_0, w_0) (assume w_0 ≠ 0) is an r-fold singular point on the surface, then

w_0 a(s, t, u) − x_0 d(s, t, u) = 0, w_0 b(s, t, u) − y_0 d(s, t, u) = 0, w_0 c(s, t, u) − z_0 d(s, t, u) = 0, L(s, t, u) · X_0 = 0

have r intersection points (counting multiplicity).

Proof. The rational parametric surface P(s, t) has the following three special moving planes

L_1 := (−d(s, t), 0, 0, a(s, t)), L_2 := (0, −d(s, t), 0, b(s, t)), L_3 := (0, 0, −d(s, t), c(s, t)),

and they belong to L_st. Given a moving plane L(s, t) = (A(s, t), B(s, t), C(s, t), D(s, t)) following the rational surface P(s, t), assume A(s, t), B(s, t), C(s, t), D(s, t) are relatively prime (if they are not relatively prime, we can deal with it similarly).
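The three special moving planes L_1, L_2, L_3 can be checked symbolically against the follow condition (4) for any concrete parametrization. A sketch with sympy on the standard rational parametrization of the unit sphere (the example surface and the extra plane at the end are assumptions for illustration, not from the paper):

```python
import sympy as sp

s, t = sp.symbols('s t')

# Homogeneous parametrization of the unit sphere x^2 + y^2 + z^2 = w^2.
P = sp.Matrix([2*s, 2*t, s**2 + t**2 - 1, s**2 + t**2 + 1])
a, b, c, d = P

# The special moving planes from the proof of Theorem 6.
L1 = sp.Matrix([-d, 0, 0, a])
L2 = sp.Matrix([0, -d, 0, b])
L3 = sp.Matrix([0, 0, -d, c])

# Each follows the surface: its dot product with P is identically zero,
# e.g. L1 . P = -d*a + 0 + 0 + a*d = 0.
for L in (L1, L2, L3):
    assert sp.expand(L.dot(P)) == 0

# A less trivial moving plane s*x + t*y - z - w = 0 also follows P.
L = sp.Matrix([s, t, -1, -1])
print(sp.expand(L.dot(P)))  # 0
```

For these L_i the check is trivial by construction; the last plane shows condition (4) holding for a genuinely non-obvious syzygy of (a, b, c, d).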
As four dimensional vectors, L_1, L_2, L_3 are all perpendicular to P(s, t), and L is also perpendicular to P(s, t). Thus, there exist h, h_1, h_2, h_3 ∈ R[s, t] with gcd(h_1, h_2, h_3) = 1 such that

hL(s, t) = h_1 L_1(s, t) + h_2 L_2(s, t) + h_3 L_3(s, t) = (−h_1 d, −h_2 d, −h_3 d, h_1 a + h_2 b + h_3 c).   (9)

Since gcd(A, B, C, D) = 1 and gcd(h_1, h_2, h_3) = 1,

h = gcd(−h_1 d, −h_2 d, −h_3 d, h_1 a + h_2 b + h_3 c) = gcd(d, h_1 a + h_2 b + h_3 c).

Thus h | d. For an r-fold singular point X_0 on the surface, (5) has r intersection points (counting multiplicity):

(p_0, . . . , p_v) = ((s_0, t_0, u_0), · · · , (s_v, t_v, u_v)),   (10)

and the multiplicity at p_i is m_i, i = 0, · · · , v; thus r = m_0 + . . . + m_v. From (9), we have

h(s, t)l(s, t) = h_1(s, t)l_1(s, t) + h_2(s, t)l_2(s, t) + h_3(s, t)l_3(s, t),   (11)

where l(s, t) = L · X_0 and l_i(s, t) = L_i · X_0, i = 1, 2, 3. Now we discuss the two possible cases.

• If the intersection point p_i is not at infinity, assume p_i = (s_i, t_i, 1); from (11), we have

h(s − s_i, t − t_i)l(s − s_i, t − t_i) = h_1(s − s_i, t − t_i)l_1(s − s_i, t − t_i) + h_2(s − s_i, t − t_i)l_2(s − s_i, t − t_i) + h_3(s − s_i, t − t_i)l_3(s − s_i, t − t_i).

Thus, the ideals generated by the local equations of h(s, t, u)l(s, t, u), l_1(s, t, u), l_2(s, t, u), l_3(s, t, u) near p_i and by the local equations of l_1(s, t, u), l_2(s, t, u), l_3(s, t, u) near p_i are the same.

• If the intersection point p_i is at infinity, assume p_i = (1, t_i, 0). Since L(s, t) · P(s, t) ≡ 0, its homogeneous form also satisfies L(s, t, u) · P(s, t, u) ≡ 0, and the dehomogenized form also satisfies L(1, t, u) · P(1, t, u) ≡ 0.
Similar to the above analysis, there exist h′, h′_1, h′_2, h′_3 ∈ R[t, u] with gcd(h′_1, h′_2, h′_3) = 1 such that

h′ l(1, t, u) = h′_1 l_1(1, t, u) + h′_2 l_2(1, t, u) + h′_3 l_3(1, t, u),

where l(s, t, u), l_i(s, t, u) are the homogeneous forms of l(s, t), l_i(s, t), i = 1, 2, 3, and h′ | d(1, t, u). Therefore, the ideals generated by the local equations of h′ l(1, t, u), l_1(1, t, u), l_2(1, t, u), l_3(1, t, u) near p_i and by the local equations of l_1(1, t, u), l_2(1, t, u), l_3(1, t, u) near p_i are also the same.

Since h(p_i) ≠ 0 and h′(p_i) ≠ 0 (otherwise, p_i would be a base point of P(s, t)), and based on the definition of the multiplicity, p_i, i = 1, . . . , v, are also the intersection points of

w_0 a(s, t, u) − x_0 d(s, t, u) = 0, w_0 b(s, t, u) − y_0 d(s, t, u) = 0, w_0 c(s, t, u) − z_0 d(s, t, u) = 0, L(s, t, u) · X_0 = 0,

and the intersection multiplicities at the p_i are also the same.

Remark 1. The µ-basis p, q, r of P(s, t) gives three more special moving planes of the surface, and from Theorem 6,

L_1(s, t, u) · X_0 = 0, L_2(s, t, u) · X_0 = 0, L_3(s, t, u) · X_0 = 0, p(s, t, u) · X_0 = 0, q(s, t, u) · X_0 = 0, r(s, t, u) · X_0 = 0   (12)

also have the r intersection points.

Next we discuss the relationship between the µ-basis and the order of singular points on a rational parametric surface. For any moving plane l(s, t) ∈ L_st, there exist polynomials h_i(s, t), i = 1, 2, 3, such that l(s, t) = h_1 p + h_2 q + h_3 r (Theorem 3.2 in [6]); thus

⟨L_1 · X_0, L_2 · X_0, L_3 · X_0, p · X_0, q · X_0, r · X_0⟩ = ⟨p · X_0, q · X_0, r · X_0⟩.

Similar to the proof of Theorem 6, we can get that

p(s, t, u) · X_0 = q(s, t, u) · X_0 = r(s, t, u) · X_0 = 0   (13)

also has the same multiplicity at each intersection point p_i, i = 0, . . . , v, as Equation (5). An immediate consequence of the above analysis is:

Corollary 7.
For a rational parametric surface P(s, t) with no base points and its µ-basis p, q, r, the point X_0 = (x_0, y_0, z_0, w_0) is an r-fold singular point on the surface if and only if the number of intersection points of p(s, t, u) · X_0 = q(s, t, u) · X_0 = r(s, t, u) · X_0 = 0 is r (counting multiplicity).

Now we consider the moving surface ideal

I′ = ⟨dx − a, dy − b, dz − c, dw − 1⟩ ∩ R[x, y, z, s, t].

Theorems 3.4 and 3.5 of [6] show that I′ is a prime ideal and that f(x, y, z, s, t) ∈ I′ if and only if f(x, y, z, s, t) = 0 is a moving surface following the rational surface P(s, t). Moreover, if P(s, t) contains no base point, then

I′ = ⟨p, q, r⟩,   (14)

where p = p · (x, y, z, 1), q = q · (x, y, z, 1), r = r · (x, y, z, 1). Therefore, for a rational surface P(s, t) with no base points and any moving surface f = 0 following it, f ∈ ⟨p, q, r⟩. Based on the above analysis, we can improve Theorem 6 as the following theorem (Theorem 8).

Conclusion

To make the µ-basis more applicable in computing singular points, we discuss the relations between moving planes (specifically, the µ-basis) and the singular points of a rational surface. In the future, we will discuss how to detect and compute singular points on a rational parametric surface based on moving planes (or the µ-basis) in an efficient way.

Acknowledgements

This work was partially supported by the NSF of China grant 10671192.

Definition 2. [12] Let R be a local ring with maximal ideal m and M be a finitely generated R-module. Assume R contains k = R/m. For l ≫ 0, the Hilbert polynomial implies that dim_k(M/m^{l+1}M) = (e/d!) l^d + . . . , where d = dim(R) and e = e(M) is the multiplicity of M. The refined case is as follows.
Let I be an ideal with m^s M ⊂ IM for some s; then l ≫ 0 implies that dim_k(M/I^{l+1}M) = (e/d!) l^d + . . . , where e = e(I, M) is the multiplicity of I in M. Then the intersection multiplicity of these curves at the point p is m(p) = e(I_p, R_p) for I_p = ⟨f_1, f_2, . . . , f_v⟩ and R_p = O_{P^2,p}, which is the ring of rational functions defined at p. Assume m^s ⊂ I ⊂ R; then:

• If m^s ⊂ J ⊂ I ⊂ R, then e(J, R) ≥ e(I, R).

Theorem 8. For a parametric surface P(s, t, u) with no base points and any moving surface f(x, y, z, w, s, t, u) = 0 following it: if X_0 = (x_0, y_0, z_0, w_0) (assume w_0 ≠ 0) is an r-fold singular point on the surface, then

w_0 a(s, t, u) − x_0 d(s, t, u) = w_0 b(s, t, u) − y_0 d(s, t, u) = w_0 c(s, t, u) − z_0 d(s, t, u) = f(x_0, y_0, z_0, w_0, s, t, u) = 0

has exactly r intersection points (counting multiplicity), where f(x, y, z, w, s, t, u) is the homogeneous form of f(x, y, z, w, s, t).

References

[1] B. Buchberger, Applications of Groebner bases in nonlinear computational geometry, in: Kapur, D. and Mundy, J. (eds), Geometric Reasoning, Elsevier Science Publisher, MIT Press, 413-446, 1989.

[2] L. Busé, D. A. Cox, and C. D'Andrea, Implicitization of surfaces in P^3 in the presence of base points, Journal of Algebra and Its Applications, Vol. 2, 189-214, 2003.

[3] L. Busé, M. Elkadi, and A. Galligo, Intersection and self-intersection of surfaces by means of Bezoutian matrices, Computer Aided Geometric Design, Vol. 25, 53-68, 2008.

[4] F. Chen, J. Zheng, and T. W.
Sederberg, The µ-basis of a rational ruled surface, Computer Aided Geometric Design, Vol. 18, 61-72, 2001.

[5] F. Chen and W. Wang, Revisiting the µ-basis of a rational ruled surface, J. Symbolic Computation, Vol. 36, No. 5, 699-716, 2003.

[6] F. Chen, D. A. Cox, and Y. Liu, The µ-basis and implicitization of a rational parametric surface, Journal of Symbolic Computation, Vol. 39, 689-706, 2005.

[7] F. Chen, L. Shen, and J. Deng, Implicitization and parametrization of quadratic and cubic surfaces by µ-bases, Computing, Vol. 79, No. 2-4, 131-142, 2007.

[8] F. Chen, W. Wang, and Y. Liu, Computing singular points of plane rational curves, Journal of Symbolic Computation, Vol. 43, 92-117, 2008.

[9] E. Chionh and R. Goldman, Using multivariate resultants to find the implicit equation of a rational surface, The Visual Computer: International Journal of Computer Graphics, Vol. 8, 171-180, 1992.

[10] E. Chionh and T. W. Sederberg, On the minors of the implicitization Bézout matrix for a rational plane curve, Computer Aided Geometric Design, Vol. 18, 21-36, 2001.

[11] D. A. Cox, T. W. Sederberg, and F. Chen, The moving line ideal basis of planar rational curves, Computer Aided Geometric Design, Vol. 15, 803-827, 1998.

[12] D. A. Cox, What is the multiplicity of a base point? An expository talk given at the XIV Coloquio Latinoamericano de Algebra, 2001.

[13] D. A. Cox, R. N. Goldman, and M. Zhang, On the validity of implicitization by moving quadrics for rational surfaces with no base points, J. Symbolic Computation, Vol. 29, 419-440, 2000.

[14] X. Jia, F. Chen, and J. Deng, Computing self-intersection curves of rational ruled surfaces, Computer Aided Geometric Design, Vol. 26, 287-299, 2009.

[15] T. W. Sederberg and F. Chen, Implicitization using moving curves and surfaces, Proceedings of SIGGRAPH 1995, 301-308, 1995.

[16] H. Wang, X. Jia, and R. Goldman, Axial moving planes and singularities of rational space curves, Computer Aided Geometric Design, Vol. 26, 300-316, 2009.

[17] A. Khetan, The resultant of an unmixed bivariate system, J. Symbolic Computation, Vol. 36, 425-442, 2003.

[18] A. Khetan, N. Song, and R. Goldman, Sylvester A-resultants for bivariate polynomials with planar Newton polygons (extended abstract), ISSAC 2004, ACM, New York, 205-212, 2004.

[19] X. Wang, F. Chen, and J. Deng, Implicitization and parametrization of quadratic surfaces with one simple base point, Proceedings of ISSAC 2008, ACM Press, 31-38, 2008.
{'fraction_non_alphanumeric': 0.10176180408738549, 'fraction_numerical': 0.0365045806906272, 'mean_word_length': 3.1794620066758297, 'pattern_counts': {'":': 0, '<': 0, '<?xml version=': 0, '>': 0, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 80, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'In this paper we discuss the relationship between the moving planes of a rational parametric surface and the singular points on it. Firstly, the intersection multiplicity of several planar curves is introduced. Then we derive an equivalent definition for the order of a singular point on a rational parametric surface. Based on the new definition of singularity orders, we derive the relationship between the moving planes of a rational surface and the order of singular points. Especially, the relationship between the µ-basis and the order of a singular point is also discussed.', 'arxivid': '0909.2810', 'author': ['Falai Chen \nDepartment of Mathematics\nUniversity of Science and Technology of China Hefei\n230026AnhuiChina\n', 'Xuhui Wang \nDepartment of Mathematics\nUniversity of Science and Technology of China Hefei\n230026AnhuiChina\n\nSchool of Mathematics\nHefei University of Technology Hefei\n230009AnhuiChina\n'], 'authoraffiliation': ['Department of Mathematics\nUniversity of Science and Technology of China Hefei\n230026AnhuiChina', 'Department of Mathematics\nUniversity of Science and Technology of China Hefei\n230026AnhuiChina', 'School of Mathematics\nHefei University of Technology Hefei\n230009AnhuiChina'], 'corpusid': 14014066, 'doi': None, 'github_urls': [], 'n_tokens_mistral': 8069, 'n_tokens_neox': 7076, 'n_words': 4265, 'pdfsha': '9704bb1207887019576f4a3ee467267b25dd3c06', 'pdfurls': ['https://arxiv.org/pdf/0909.2810v2.pdf'], 'title': ['Moving Planes and Singular Points of Rational Parametric Surfaces', 'Moving Planes and Singular Points of Rational Parametric Surfaces'], 'venue': []}
arxiv
SU(3)-breaking corrections to the hyperon vector coupling f1(0) in covariant baryon chiral perturbation theory

L. S. Geng, J. Martin Camalich, and M. J. Vicente Vacas
Departamento de Física Teórica and IFIC, Universidad de Valencia-CSIC, E-46071 Valencia, Spain

(Dated: March 27, 2009)

We calculate the SU(3)-breaking corrections to the hyperon vector coupling f1(0) up to O(p⁴) in covariant baryon chiral perturbation theory with dynamical octet and decuplet contributions. We find that the decuplet contributions are of similar or even larger size than the octet ones. Combining both, we predict positive SU(3)-breaking corrections to all four independent f1(0)'s (assuming isospin symmetry), which are consistent, within uncertainties, with the latest results from large-Nc fits, chiral quark models, and quenched lattice QCD calculations.

PACS numbers: 13.30.Ce, 12.15.Hh, 12.39.Fe

I. INTRODUCTION

Hyperon semileptonic decays, parameterized by three vector transition form factors (f1, f2, and f3) and three axial form factors (g1, g2, and g3), have received renewed interest in recent years for various reasons. In particular, they provide an alternative source [1,2,3,4] from which to extract the Cabibbo-Kobayashi-Maskawa (CKM) matrix element Vus [5,6], in addition to kaon semileptonic decays (see, e.g., Ref. [7] for a recent review), hadronic decays of the τ lepton [8], and the ratio Γ(K⁺ → µ⁺νµ)/Γ(π⁺ → µ⁺νµ) [9]. The hyperon vector coupling f1(0) plays an essential role in extracting Vus accurately.
Due to the conservation of the vector current (CVC), f1(0) is known up to SU(3)-breaking effects, which are of subleading order according to the Ademollo-Gatto theorem [10]. Theoretical estimates of SU(3)-breaking corrections to f1(0) have been performed in various frameworks, including quark models [11,12,13], large-Nc fits [3], and chiral perturbation theory (ChPT) [14,15,16,17,18]. These SU(3)-breaking corrections have also been studied recently in quenched lattice QCD (LQCD) calculations for two of the four independent channels (assuming isospin symmetry): Σ⁻ → n [19] and Ξ⁰ → Σ⁺ [20]. In principle, ChPT provides a model-independent way to estimate the SU(3)-breaking corrections to f1(0). However, it is known that ChPT calculations converge slowly in SU(3) flavor space. This problem becomes even more pronounced in the one-baryon sector, where the physics at a certain order can be blurred by the power-counting restoration procedures, as can be clearly seen in the case of the baryon octet magnetic moments [21]. Fortunately, in the case of f1(0), the Ademollo-Gatto theorem dictates that up to O(p⁴) no unknown LECs contribute and, therefore, no power-counting-breaking terms appear. Consequently, up to this order there is no need to apply any power-counting restoration procedure, and a ChPT calculation is fully predictive. In a recent O(p⁴) calculation performed in heavy-baryon (HB) ChPT, it was shown that the chiral series with only the octet contributions converges slowly, while the convergence is completely spoiled by the inclusion of the decuplet contributions [17]. In a later work [18], the infrared version of baryon chiral perturbation theory (IRChPT) [22] was employed and calculations were performed up to O(p⁴) with only the octet contributions. The slow convergence of the chiral series was confirmed, but the importance of relativistic corrections was stressed.
In the present work, we perform the first covariant baryon ChPT calculation of f1(0) up to O(p⁴), including both octet and decuplet contributions. This article is organized as follows. In Sec. II, we fix our notation and write down the relevant chiral Lagrangians up to O(p⁴). To study the contributions of the decuplet baryons, we adopt the "consistent" coupling scheme for the Rarita-Schwinger description of the spin-3/2 decuplet fields [23]. In Sec. III, we present our numerical results order by order, contrast them with the corresponding HBChPT and IRChPT results, and study the convergence of the chiral series. We also compare our full results with those obtained in other approaches, including large-Nc fits, quark models, and lattice QCD calculations. Finally, we use our results for f1(0) to extract Vus from the experimental values of the decay rates and g1(0)/f1(0). Summary and conclusions follow in Sec. IV.

II. FORMALISM

The baryon vector form factors as probed by the charged ∆S = 1 weak current V^µ = Vus ū γ^µ s are defined by

⟨B′|V^µ|B⟩ = Vus ū(p′) [ γ^µ f1(q²) + (2iσ^{µν} q_ν / (M_{B′} + M_B)) f2(q²) + (2q^µ / (M_{B′} + M_B)) f3(q²) ] u(p),   (1)

where q = p′ − p. In the SU(3)-symmetric limit, f1(0) is fixed by the conservation of the SU(3)_V charge g_V. Furthermore, the Ademollo-Gatto theorem states that SU(3)-breaking corrections start at second order in the expansion parameter m_s − m:

f1(0) = g_V + O((m_s − m)²),   (2)

where m_s is the strange quark mass and m is the mass of the light quarks. The values of g_V are −√(3/2), −1/√2, −1, √(3/2), 1/√2, and 1 for Λ → p, Σ⁰ → p, Σ⁻ → n, Ξ⁻ → Λ, Ξ⁻ → Σ⁰, and Ξ⁰ → Σ⁺, respectively.
In the isospin-symmetric limit only four of these channels, which we take as Λ → N, Σ → N, Ξ → Λ, and Ξ → Σ, provide independent information. We will parameterize the SU(3)-breaking corrections order by order in the relativistic chiral expansion as follows:

f1(0) = g_V (1 + δ^(2) + δ^(3) + ···),   (3)

where δ^(2) and δ^(3) are the leading and next-to-leading order SU(3)-breaking corrections induced by loops, corresponding to O(p³) and O(p⁴) chiral calculations.

A. Chiral Lagrangians involving only octet baryons and pseudoscalars

The lowest-order SU(3) chiral Lagrangian describing the pseudo-Goldstone bosons in the presence of an external vector current is

L^(2)_φ = (F₀²/4) ⟨∇_µU (∇^µU)† + Uχ† + χU†⟩,   (4)

where the parameter F₀ is the chiral-limit decay constant, U is the SU(3) representation of the meson fields, and ∇_µU = ∂_µU − i[v_µ, U] is its covariant derivative, with v_µ the vector source. The explicit breaking of chiral symmetry comes from χ = 2B₀M, where B₀ measures the strength of the breaking and M = diag(m, m, m_s) is the quark mass matrix in the isospin-symmetric limit [24]. In the above and forthcoming Lagrangians, the symbol ⟨· · ·⟩ denotes the trace in SU(3) flavor space.

The lowest-order chiral Lagrangian describing octet baryons interacting with pseudoscalars and an external vector source reads

L^(1)_φB = ⟨B̄ (iD̸ − M₀) B⟩ + (D/2) ⟨B̄ γ^µ γ₅ {u_µ, B}⟩ + (F/2) ⟨B̄ γ^µ γ₅ [u_µ, B]⟩,   (5)

where B denotes the traceless flavor matrix accounting for the octet-baryon fields, M₀ is the chiral-limit octet-baryon mass, D and F are the axial and vector meson-baryon couplings, and D_µB = ∂_µB + [Γ_µ, B] is the covariant derivative. Furthermore, with u² ≡ U, u_µ and Γ_µ are the so-called vielbein and connection:

u_µ = i(u†∂_µu − u∂_µu†) + (u†v_µu − uv_µu†),
Γ_µ = (1/2)(u†∂_µu + u∂_µu†) − (i/2)(u†v_µu + uv_µu†).   (6)

The only higher-order chiral Lagrangian that also contributes is through the SU(3) breaking of the masses of the octet baryons,¹

L^(2)_φB = b_{D/F} ⟨B̄ [χ₊, B]_±⟩, with χ₊ = 2χ = 4B₀M = 2 diag(m_π², m_π², 2m_K² − m_π²),   (7)

which leads to the following octet baryon masses up to this order:

M_N = M₀ + 4m_K² b_D − 4(m_K² − m_π²) b_F,
M_Σ = M₀ + 4m_π² b_D,
M_Ξ = M₀ + 4m_K² b_D + 4(m_K² − m_π²) b_F,
M_Λ = M₀ + (4/3)(4m_K² − m_π²) b_D.   (8)

A fit to the octet baryon masses yields M₀ = 1.197 GeV, b_D = −0.0661 GeV⁻¹, and b_F = 0.2087 GeV⁻¹, which correspond to M_N = 0.942 (0.939) GeV, M_Σ = 1.192 (1.193) GeV, M_Ξ = 1.321 (1.318) GeV, and M_Λ = 1.112 (1.116) GeV, with the physical values given in parentheses. It is clear that the differences between the second-order fits and the physical values are quite small; using either of them is numerically equivalent. In the O(p⁴) calculation, we will use the second-order fits, Eq. (8), to keep track of the SU(3)-breaking pattern, while at O(p³) we use the average mass of the octet baryons, M_B = 1.151 GeV, without introducing mass splittings.

B. Chiral Lagrangians involving decuplet baryons

In this work, we adopt the so-called "consistent" couplings [23] to describe the interactions between the decuplet and the octet baryons. Compared to conventional couplings (see, e.g., Refs. [23,25]), the consistent couplings are more stringent due to the requirement that all interactions have the same type of gauge invariance as the kinetic term of the spin-3/2 fields [23]. To calculate f1(0) up to O(p³), one only needs the following lowest-order chiral Lagrangians [26]:

L^(1)_DD = T̄^µ (γ_{µνα} iD^α − M_{D0} γ_{µν}) T^ν,   (9)
L^(1)_DB = (iC/M_D) (D†_µT̄_ν γ^{µνλ} u_λ B + B̄ u_λ γ^{µνλ} D_µT_ν),   (10)

where M_{D0} is the chiral-limit decuplet-baryon mass, D_α T^ν_{abc} = ∂_α T^ν_{abc} + (Γ_α)^d_a T^ν_{dbc} + (Γ_α)^d_b T^ν_{adc} + (Γ_α)^d_c T^ν_{abd}, and T^ν = T_{ade} ψ^ν, T̄^µ = T̄_{ade} ψ̄^µ, with the following associations: T_{111} = ∆⁺⁺, T_{112} = ∆⁺/√3, T_{122} = ∆⁰/√3, T_{222} = ∆⁻, T_{113} = Σ*⁺/√3, T_{123} = Σ*⁰/√6, T_{223} = Σ*⁻/√3, T_{133} = Ξ*⁰/√3, T_{233} = Ξ*⁻/√3, and T_{333} = Ω⁻. The value of the pseudoscalar-baryon-decuplet coupling C is determined to be C ≈ 1.0 from the ∆ → πN decay width.² In SU(3) flavor space, the value of C can be different for different channels. In the present work, as in Ref. [17], we use the same C for all the channels, assuming that the SU(3)-breaking corrections to f1(0) induced by using channel-specific C's are of higher order. The spin-3/2 propagator in d dimensions is

S^{µν}(p) = −(p̸ + M_D)/(p² − M_D² + iε) [ g^{µν} − γ^µγ^ν/(d−1) − (γ^µ p^ν − γ^ν p^µ)/((d−1)M_D) − (d−2) p^µ p^ν/((d−1)M_D²) ],   (11)

with M_D the decuplet baryon mass. To calculate f1(0) at O(p⁴), the following second-order chiral Lagrangian is needed to break the mass degeneracy of the decuplet baryons:³

L^(2)_DD = (γ_M/2) T̄^µ χ₊ T_µ,   (12)

which leads to

M_∆ = M_{D0} + 3m_π² γ_M,
M_{Σ*} = M_{D0} + (2m_K² + m_π²) γ_M,
M_{Ξ*} = M_{D0} + (4m_K² − m_π²) γ_M,
M_Ω = M_{D0} + 3(2m_K² − m_π²) γ_M.   (13)

A fit to the decuplet baryon masses, with the meson masses given above, yields γ_M = 0.3236 GeV⁻¹ and M_{D0} = 1.216 GeV, which correspond to M_∆ = 1.235 (1.232) GeV, M_{Σ*} = 1.382 (1.384) GeV, M_{Ξ*} = 1.529 (1.533) GeV, and M_Ω = 1.676 (1.672) GeV. As in the octet case, we use the second-order fits in our calculation of the O(p⁴) results, while in the O(p³) calculation we use the average of the decuplet baryon masses, M_D = 1.382 GeV.

All the diagrams contributing to f1(0) up to O(p⁴) are shown in Fig. 1, where the leading and next-to-leading order SU(3)-breaking corrections are given by the diagrams in the first and second row, respectively.

III. RESULTS AND DISCUSSIONS

A. SU(3)-breaking corrections to f1(0) due to octet contributions up to O(p⁴)

The O(p³) results are quite compact and have the following structure for the transition i → j:

δ^(2)_B(i → j) = Σ_{M=π,η,K} β^{BP}_M H^{BP}(m_M) + Σ_{M=π,η} β^{MP}_M H^{MP}(m_M, m_K) + Σ_{M=π,η,K} β^{KR}_M H^{KR}(m_M) − (3/8) Σ_{M=π,η} H^{TD1}(m_M, m_K) + (3/8) Σ_{M=π,η} H^{TD2}(m_M) + (3/4) H^{TD2}(m_K) + (1/2) Σ_{M=π,η,K} (β^{WF}_M(i) + β^{WF}_M(j)) H^{WF}(m_M),   (14)

where the coefficients β^{BP}, β^{MP}, β^{KR}, and β^{WF} are given in Tables VI, VII, VIII, and IX in the Appendix, and the loop functions H^{BP}, H^{MP}, H^{KR}, H^{TD1}, H^{TD2}, and H^{WF} are also given there. It is interesting to note that although separately these loop functions are divergent (scale-dependent) and some of them contain power-counting-breaking pieces (H^{KR} and H^{MP}), the overall contributions are finite and do not break power counting. This is an explicit manifestation of the Ademollo-Gatto theorem.

In the O(p⁴) calculation, we have implemented the mass-splitting corrections in a similar way as Ref. [18], except that we have used the masses obtained from the second-order ChPT fit, as described above, instead of the physical masses. Similar to the IRChPT study of Ref. [18], the O(p⁴) results contain higher-order divergences. We have removed the infinities using the modified minimal-subtraction (MS) scheme. The analytical results are quite lengthy and will not be shown here. In Fig. 2, we show the scale dependence of the octet contributions, which is rather mild for most cases except for the Σ → N transition. The scale dependence can be used to estimate higher-order contributions by varying µ in a reasonable range. In the following, we present the results by varying µ from 0.7 to 1.3 GeV. It should be mentioned that if we had adopted the same method as Ref. [17] to calculate the O(p⁴) contributions, i.e., by expanding the results and keeping only those linear in the baryon mass splittings, our O(p⁴) results would have been convergent. We have checked that our results up to O(p³) are the same as those obtained in Ref. [14], while in the M_B ∼ Λ_χSB limit our results recover the HBChPT ones [17] at both O(p³) and O(p⁴), including the 1/M recoil corrections. All these checks explicitly verify the Ademollo-Gatto theorem in the sense of Eq. (2). Table II shows the SU(3)-breaking corrections in the notation of Eq. (3).
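As a quick numerical cross-check of the second-order mass formulas in Eqs. (8) and (13), the sketch below evaluates them with the fit parameters quoted in the text (M₀ = 1.197 GeV, b_D = −0.0661 GeV⁻¹, b_F = 0.2087 GeV⁻¹, M_{D0} = 1.216 GeV, γ_M = 0.3236 GeV⁻¹). The meson masses are not listed in this excerpt, so isospin-averaged values m_π = 0.138 GeV and m_K = 0.496 GeV are assumed here:

```python
# Cross-check of the second-order octet (Eq. (8)) and decuplet (Eq. (13))
# baryon mass formulas. Fit parameters are taken from the text; the meson
# masses below are assumed (isospin-averaged) values, not quoted in the paper.
m_pi, m_K = 0.138, 0.496                 # assumed meson masses (GeV)
M0, b_D, b_F = 1.197, -0.0661, 0.2087    # octet fit: GeV, GeV^-1, GeV^-1
MD0, gamma_M = 1.216, 0.3236             # decuplet fit: GeV, GeV^-1

mpi2, mK2 = m_pi**2, m_K**2

octet = {
    "N":      M0 + 4 * mK2 * b_D - 4 * (mK2 - mpi2) * b_F,
    "Sigma":  M0 + 4 * mpi2 * b_D,
    "Xi":     M0 + 4 * mK2 * b_D + 4 * (mK2 - mpi2) * b_F,
    "Lambda": M0 + (4.0 / 3.0) * (4 * mK2 - mpi2) * b_D,
}
decuplet = {
    "Delta":  MD0 + 3 * mpi2 * gamma_M,
    "Sigma*": MD0 + (2 * mK2 + mpi2) * gamma_M,
    "Xi*":    MD0 + (4 * mK2 - mpi2) * gamma_M,
    "Omega":  MD0 + 3 * (2 * mK2 - mpi2) * gamma_M,
}

for name, mass in list(octet.items()) + list(decuplet.items()):
    print(f"M_{name} = {mass:.3f} GeV")
```

With these inputs the octet masses reproduce the values quoted after Eq. (8) (0.942, 1.192, 1.321, 1.112 GeV), and the decuplet masses those quoted after Eq. (13), to within about 1 MeV, confirming the internal consistency of the quoted fits.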
For comparison, we also list the numbers obtained in HBChPT [17] and IRChPT [18]. The numerical values are obtained with the parameters given in Table I. As in Ref. [21], we have used an average F₀ = 1.17 fπ with fπ = 92.4 MeV; it should be pointed out that the HBChPT and the IRChPT results are obtained using fπ. First, we note that in three of the four cases the δ^(3) numbers are smaller than the δ^(2) ones. The situation is similar in IRChPT but quite different in HBChPT: in the HBChPT calculation [17], the δ^(3) contribution is larger than the δ^(2) one for all four cases.⁴ This indicates that recoil corrections (in the HBChPT language) or relativistic effects are important. On the other hand, the results of the present work and those of IRChPT [18], including the contributions of the different chiral orders, are qualitatively similar. They are both very different from the HBChPT predictions, even for the signs in three of the four cases. Obviously, as stressed in Ref. [18], one should trust the relativistic more than the non-relativistic results, which have to be treated with caution whenever 1/M recoil corrections become large. It is clear from Table II that the convergence is slow even in the case of the relativistic calculations, a well-known feature of SU(3) baryon ChPT. It is then necessary to have a way to estimate "higher-order" contributions. Going to O(p⁵), one needs to introduce unknown LECs, such that the predictive power of ChPT is lost. An alternative approach is to consider the contributions of dynamical heavier resonances. A basic assumption of ChPT is that these heavier degrees of freedom can be integrated out, with their effects incorporated in the LECs. However, that may not be entirely true in the one-baryon sector, since the gap between the lowest baryon octet and the lowest baryon decuplet is only ∼ 0.3 GeV, not very different from the pion mass and even smaller than the kaon (eta) mass.
Therefore, it is necessary to investigate their contributions. In the HBChPT scheme, this task has recently been undertaken in Ref. [17], where it is concluded that the decuplet contributions completely spoil the convergence of the chiral series. We study the contributions of the decuplet baryons in the covariant framework in the following section.

B. SU(3)-breaking corrections to f1(0) induced by dynamical decuplet baryons up to O(p⁴)

Fig. 3 shows the diagrams that contribute to the SU(3)-breaking corrections to f1(0) with dynamical decuplet baryons up to O(p⁴). It should be noted that, unlike in the HBChPT case [17], Kroll-Ruderman (KR) diagrams also contribute. In fact, using the consistent coupling scheme of Ref. [23], there are four KR diagrams: two from minimal substitution in the derivative of the pseudoscalar fields, and the other two from minimal substitution in the derivative of the decuplet fields (see Eq. (10) and also Ref. [26]).

The O(p³) results are relatively simple and have the following general structure for the transition i → j:

δ^(2)_D(i → j) = Σ_{M=π,η,K} γ^{BP}_M D^{BP}(m_M) + Σ_{M=π,η} γ^{MP}_M D^{MP}(m_M, m_K) + Σ_{M=π,η,K} γ^{KR}_M D^{KR}(m_M) + (1/2) Σ_{M=π,η,K} (γ^{WF}_M(i) + γ^{WF}_M(j)) D^{WF}(m_M),   (15)

where the coefficients γ^{BP}, γ^{MP}, γ^{KR}, and γ^{WF} are listed in Tables X, XI, XII, and XIII of the Appendix. The loop functions D^{BP}, D^{MP}, D^{KR}, and D^{WF} can be calculated analytically, but they are quite lengthy; in the Appendix they are given in terms of Feynman-parameter integrals, which can be easily integrated.

To calculate the O(p⁴) chiral contributions, we implement the decuplet-baryon mass splittings in the same way as in the octet case. The O(p⁴) results again contain higher-order divergences; the infinities have been removed by the MS procedure, and the remaining scale dependence, shown in Fig. 4, is found to be rather mild except for the Ξ → Σ transition. In this case, unlike in the octet case, the divergences cannot be removed by expanding and keeping only the terms linear in the baryon and decuplet mass splittings. The full O(p⁴) analytical results are quite involved and, therefore, will not be shown here.

The numerical results obtained with the parameter values given in Table I are summarized in Table III. It can be seen that at O(p³) the decuplet contributions are relatively small compared to the octet ones at the same order. On the other hand, the O(p⁴) contributions are sizable, and all of them have positive signs. Using the conventional Lagrangians of Ref. [25], one obtains different numbers and a different µ dependence. In the heavy-baryon limit, however, the results obtained with both coupling schemes are found to be the same and convergent, confirming the fact that the differences induced by the "consistency" procedure are of higher chiral order [23] (see also Ref. [26]). In Table III, the numbers denoted by HBChPT differ from those of Ref. [17]. The δ^(2) column would have coincided if we had used the same values for the couplings, C = 0.8 and F₀ = 0.0933 GeV. On the other hand, our δ^(3) contributions due to the octet baryon mass splittings are much smaller than those of Ref. [17]. It is interesting to note that, unlike in the octet case, the HBChPT results are similar to the relativistic ones.⁵ As the decuplet-octet mass splitting increases, one expects the decuplet contributions to decrease and eventually vanish as the splitting goes to infinity. This is indeed the case, as can be clearly seen from Fig. 5, where the O(p³) decuplet contributions are plotted as a function of the decuplet-octet mass splitting.

C. Full results and comparison with other approaches

Summing the octet and the decuplet contributions, we obtain the numbers shown in Table IV.

TABLE IV: SU(3)-breaking corrections to f1(0) up to O(p⁴) (in percentage), including both the octet and the decuplet contributions.

  Channel   δ^(2)   δ^(3)              δ^(2) + δ^(3)
  Λ → N     −3.1    3.2 (+1.3/−1.0)    0.1 (+1.3/−1.0)
  Σ → N     −2.2    10.9 (+4.2/−3.1)   8.7 (+4.2/−3.1)
  Ξ → Λ     −2.9    6.9 (+2.8/−2.1)    4.0 (+2.8/−2.1)
  Ξ → Σ     −3.0    4.7 (+2.2/−1.6)    1.7 (+2.2/−1.6)

Two things are noteworthy. First, the convergence is slow, even taking into account the scale dependence of the δ^(3) corrections. Second, for three of the four transitions, the δ^(3) corrections have a different sign than the δ^(2) ones.

In Table V, we compare our results with those obtained from other approaches, including large-Nc fits [3], quark models [11,12,13], and two quenched LQCD calculations [19,20]. The large-Nc results in general favor positive corrections, which are consistent with our central values. Two of the quark models predict negative corrections, while that of Ref. [13] favors positive corrections. It is interesting to note that in Ref. [13] the valence-quark effects give negative contributions, as in the other two quark models, while the chiral effects provide positive contributions, resulting in net positive corrections. Our numbers also agree, within uncertainties, with the quenched LQCD ones. In principle, LQCD calculations provide another model-independent way to obtain the SU(3)-breaking corrections to f1(0). At present, however, the quenched LQCD calculations are not yet accurate enough to determine these numbers, due to the large quark masses used in the simulations and other systematic uncertainties.

Finally, we briefly discuss the implications of our results for the estimation of Vus. There have been several previous attempts to extract this parameter using hyperon semileptonic decays [1,2,3,4]. As discussed in Ref. [4], a rather clean determination of f1 Vus can be done by using g1/f1 and the decay rates from experiment and taking for g2 and f2 their SU(3) values. This latter approximation is supported by the fact that their contributions to the decay rate are reduced by kinematic factors (see, for instance, Eq. (10) of Ref. [3]). Using the values of f1 Vus compiled in Table 3 of Ref. [4] and our results for f1, we get

Vus = 0.2177 ± 0.0030.   (16)

This value is consistent with the large-Nc fits of Ref. [3] and with the result obtained from τ decays [8], and lower than the results of kaon decays [7] and the fits to hyperon decays from Refs. [1,2]. Although the quoted error seems competitive with calculations using other processes, we must remark that the error estimation for Eq. (16) includes only the experimental errors and the uncertainties related to the scale dependence. Other sources of systematic uncertainty, such as the effect of higher-order SU(3)-breaking corrections, are hard to estimate and have not been included. Even with these limitations, our results clearly point to positive values for the SU(3)-breaking corrections to f1 and therefore towards relatively small values of Vus.

IV. SUMMARY AND CONCLUSIONS

We have performed a study of the SU(3)-breaking corrections to the hyperon vector coupling f1(0) in covariant baryon chiral perturbation theory, including both the octet and the decuplet contributions. We confirm earlier findings in HBChPT and IRChPT that the convergence of the chiral series is slow in the case with only dynamical octet baryons. Our study of the decuplet contributions shows that at O(p³) they are in general smaller than those of their octet counterparts, while at O(p⁴) they are sizable. Combining both octet and decuplet contributions, we find positive SU(3)-breaking corrections to all four independent f1(0)'s, which compare favorably with the large-Nc fits and with the quark model that takes chiral effects into account. The fact that the O(p⁴) chiral contributions are comparable to the O(p³) ones suggests that the O(p⁵) chiral effects may not be negligible. We have estimated their size by varying µ from 0.7 to 1.3 GeV. Taking into account these higher-order uncertainties, our results still favor positive SU(3)-breaking corrections to the four f1(0)'s.

An accurate determination of Vus from hyperon semileptonic decays depends largely on our knowledge of the value of f1(0). While the SU(3)-symmetric values have been used in some fits to extract Vus, most studies have taken into account SU(3)-breaking corrections to f1(0). We have provided the first covariant baryon ChPT predictions for f1(0) up to O(p⁴) including both the octet and the decuplet contributions, and we encourage their use in new attempts to extract Vus from hyperon decay data.
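To illustrate Eq. (3) with the full results, the sketch below combines the SU(3)-symmetric values g_V quoted in Sec. II with the central values of δ^(2) + δ^(3) from Table IV; the quoted scale uncertainties are omitted here, so these are central values only:

```python
from math import sqrt

# SU(3)-symmetric values g_V for the four independent transitions (Sec. II):
# Lambda -> p, Sigma- -> n, Xi- -> Lambda, Xi0 -> Sigma+.
g_V = {
    "Lambda->N": -sqrt(3.0 / 2.0),
    "Sigma->N":  -1.0,
    "Xi->Lambda": sqrt(3.0 / 2.0),
    "Xi->Sigma":  1.0,
}

# Central values of the total correction delta(2) + delta(3), in percent (Table IV).
delta_percent = {
    "Lambda->N": 0.1,
    "Sigma->N":  8.7,
    "Xi->Lambda": 4.0,
    "Xi->Sigma":  1.7,
}

# Eq. (3): f1(0) = g_V * (1 + delta(2) + delta(3))
f1 = {ch: g_V[ch] * (1.0 + delta_percent[ch] / 100.0) for ch in g_V}

for ch, val in f1.items():
    print(f"f1(0)[{ch}] = {val:+.4f}")
```

For instance, the Σ → N channel gives f1(0) = −1.087 instead of the SU(3)-symmetric value −1, which is the size of effect that feeds directly into the Vus extraction of Eq. (16).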
APPENDIX

1. Octet contributions

In this subsection, we present the coefficients and loop functions appearing in the calculation of the O(p³) octet contributions, i.e., Eq. (14):

H^{BP} = 1/(4πF₀)² [ −3m² − 4 cos⁻¹(m/(2M_B)) m³ √(4M_B² − m²) (m²/M_B² − 3) + 2 log(m²/M_B²) m⁴/M_B² + 2 log(M_B²/µ) m³/m² ],   (17)

H^{KR} = 1/(4πF₀)² [ log(M_B/m) m⁴/M_B² − √(4M_B² − m²) cos⁻¹(m/(2M_B)) m³/M_B² + m² (log(µ²/M_B²) + 2) + M_B² (1 + log(µ²/M_B²)) ].   (18)

Channel-indexed coefficients appearing in Eq. (14) (π, η, and K loop columns):
  Λ → N: −4, 0, −2
  Σ → N: −4/3, 0, −2/3
  Ξ → Λ: −2, 0, −2
  Ξ → Σ: −4/3, −2, −14/3

Coefficients β^{WF} appearing in Eq. (14) (π, η, and K loop columns):
  Λ: −3, 0, −2
  Σ: −2/3, −1, −10/3
  N: −4, 0, −1
  Ξ: −1, −1, −3

2. Decuplet contributions

In this subsection, we provide the coefficients and loop functions appearing in the calculation of the O(p³) decuplet contributions, i.e., Eq. (15):

D^{BP} = −C²/((4πF₀)² M_D²) ∫₀¹ dx M_B²(1 − x) [(2x − 1)M_D² + 2M_B x M_D − (x − 2)M_B² + 2m²x] × log[ µ² / (((x − 1)M_B² + m²)x − M_D²(x − 1)) ] − M_B² + 2M_D M_B + M_D² − m²x,   (23)

D^{MP} = −C²/((4πF₀)² M_D²) ∫₀¹ dx ∫₀^{1−x} dy M_B² { 2M_B x(xM_B − M_B − M_D) + [−3(x − 1)x M_B² + 2M_D x M_B − M_D² x − m₁² y + m₂²(x + y − 1)] log[ µ² / ((x − 1)x M_B² + M_D² x + m₁² y − m₂²(x + y − 1)) ] },   (24)

D^{KR} = −C²/((4πF₀)² M_D²) ∫₀¹ dx M_B (M_D + M_B x) [(x − 1)M_B² + m²x − M_D²(x − 1)] × log[ −µ² / (M_D²(x − 1) − ((x − 1)M_B² + m²)x) ],   (25)

D^{WF} = −C²/((4πF₀)² M_D²) ∫₀¹ dx M_B { 2M_B²(x − 1)x(M_B x − M_B − M_D) + [−5M_B³(x − 1)²x + 4M_B² M_D(x − 1)x + 3M_B(x − 1)(m²(x − 1) − M_D² x) + 2M_D(M_D² x − m²(x − 1))] log[ −µ² / (m²(x − 1) − x(M_B²(x − 1) + M_D²)) ] }.   (26)

TABLE X: Coefficients γ^{BP} appearing in Eq. (15) (π, η, and K loop columns).

TABLE XI: Coefficients γ^{MP} appearing in Eq. (15).
  Channel: πK loop, ηK loop
  Λ → N: 1, 0
  Σ → N: −2, −1
  Ξ → Λ: 0, −1
  Ξ → Σ: 1, 2

TABLE XII: Coefficients γ^{KR} appearing in Eq. (15).
  Channel: π loop, η loop, K loop
  Λ → N: 7, 0, 3
  Σ → N: 14/3, 1, 13/3
  Ξ → Λ: 4, 1, 5
  Ξ → Σ: 5/3, 2, 19/3

TABLE XIII: Coefficients γ^{WF} appearing in Eq. (15).

ACKNOWLEDGMENTS

This work was partially supported by the MEC grant FIS2006-03438 and the European Community-Research Infrastructure Integrating Activity "Study of Strongly Interacting Matter" (HadronPhysics2, Grant Agreement 227431) under the Seventh Framework Programme of EU. L.S.G. acknowledges support from the MICINN in the Program "Juan de la Cierva." J.M.C. acknowledges the same institution for a FPU grant.

TABLE I: Values for the masses and couplings appearing in the calculation of the SU(3)-breaking corrections to f1(0): D = 0.8, F = 0.46, fπ = 0.0924 GeV, M₀ = 1.197 GeV, M_B = 1.151 GeV, M_D = 1.382 GeV.

TABLE V: SU(3)-breaking corrections (in percentage) to f1(0) obtained in different approaches (present work; large Nc [3]; quark models [11,12,13]; quenched LQCD [19,20]).

Notes to Tables III and IV: The δ^(3) values are calculated with µ = 1 GeV, and the uncertainties are obtained by varying µ from 0.7 GeV to 1.3 GeV. To obtain the O(p³) HBChPT numbers, we have used M_B = 1.151 GeV and M_D = 1.382 GeV and have performed an expansion in terms of the decuplet-octet mass splitting, M_D − M_B. To obtain the O(p⁴) ones, we have used physical masses for both the octet and the decuplet baryons and have performed an additional expansion keeping only the terms linear in the octet and the decuplet baryon mass splittings. Although this procedure is the same as that of Ref. [17], we get different results; we find that the discrepancy comes from the octet mass-splitting corrections to the meson-pole diagram of Fig. 3. If we had mistakenly exchanged the masses of the mesons in the loop, we would have obtained the same results as those of Ref. [17].

FIG. 1: Feynman diagrams contributing to the SU(3)-breaking corrections to the hyperon vector coupling f1(0) up to O(p⁴). The solid lines correspond to baryons and the dashed lines to mesons; crosses indicate the coupling of the external current; black dots denote mass-splitting insertions. Diagrams corresponding to wave-function renormalization are not shown explicitly but have been taken into account in the calculation.

FIG. 2: Scale dependence of the octet contributions to the SU(3)-breaking corrections to the hyperon vector coupling f1(0).

FIG. 3: Feynman diagrams contributing to the leading and next-to-leading order SU(3)-breaking corrections to the hyperon vector coupling f1(0) through dynamical decuplet baryons. The notations are the same as in Fig. 1, except that double lines indicate decuplet baryons. Diagrams corresponding to wave-function renormalization are not shown explicitly but have been included in the calculation.

FIG. 4: Scale dependence of the decuplet contributions to the SU(3)-breaking corrections to the hyperon vector coupling f1(0).

FIG. 5: Decuplet O(p³) contributions to the SU(3)-breaking corrections to the hyperon vector coupling f1(0) as a function of the decuplet-octet mass splitting M_D − M_B.

Footnotes:
¹ We have omitted a singlet term indistinguishable from M₀ for our purposes.
² Note that the definition of u_µ in Eq. (6) is a factor of 2 different from that of HBChPT in Refs. [17,27].
³ As in the octet case, we have omitted a singlet term indistinguishable from M_{D0} for our purposes.
⁴ What we denote by δ^(3) is the sum of those labeled by α^(3) and α^(1/M) in Ref. [17].
⁵ The HB results are obtained in a slightly different way than the relativistic ones.

[1] N. Cabibbo, E. C. Swallow and R. Winston, Ann. Rev. Nucl. Part. Sci. 53, 39 (2003).
[2] N. Cabibbo, E. C. Swallow and R. Winston, Phys. Rev. Lett. 92, 251803 (2004).
[3] R. Flores-Mendieta, Phys. Rev. D 70, 114036 (2004).
[4] V. Mateu and A. Pich, JHEP 0510, 041 (2005).
[5] N. Cabibbo, Phys. Rev. Lett. 10, 531 (1963).
[6] M. Kobayashi and T. Maskawa, Prog. Theor. Phys. 49, 652 (1973).
[7] E. Blucher et al., arXiv:hep-ph/0512039.
[8] E. Gamiz, M. Jamin, A. Pich, J. Prades and F. Schwab, PoS KAON, 008 (2008) [arXiv:0709.0282 [hep-ph]].
[9] W. J. Marciano, Phys. Rev. Lett. 93, 231803 (2004).
[10] M. Ademollo and R. Gatto, Phys. Rev. Lett. 13, 264 (1964).
[11] J. F. Donoghue, B. R. Holstein and S. W. Klimt, Phys. Rev. D 35, 934 (1987).
[12] F. Schlumpf, Phys. Rev. D 51, 2262 (1995).
[13] A. Faessler, T. Gutsche, B. R. Holstein, M. A. Ivanov, J. G. Korner and V. E. Lyubovitskij, Phys. Rev. D 78, 094005 (2008).
[14] A. Krause, Helv. Phys. Acta 63, 3 (1990).
[15] J. Anderson and M. A. Luty, Phys. Rev. D 47, 4975 (1993).
[16] N. Kaiser, Phys. Rev. C 64, 028201 (2001).
[17] G. Villadoro, Phys. Rev. D 74, 014018 (2006).
[18] A. Lacour, B. Kubis and U. G. Meissner, JHEP 0710, 083 (2007).
[19] D. Guadagnoli, V. Lubicz, M. Papinutto and S. Simula, Nucl. Phys. B 761, 63 (2007).
[20] S. Sasaki and T. Yamazaki, arXiv:0811.1406 [hep-ph].
[21] L. S. Geng, J. M. Camalich, L. Alvarez-Ruso and M. J. V. Vacas, Phys. Rev. Lett. 101, 222002 (2008).
[22] T. Becher and H. Leutwyler, Eur. Phys. J. C 9, 643 (1999).
[23] V. Pascalutsa, Phys. Lett. B 503, 85 (2001).
[24] S. Scherer, Adv. Nucl. Phys. 27, 277 (2003).
[25] C. Hacker, N. Wies, J. Gegelia and S. Scherer, Phys. Rev. C 72, 055203 (2005).
[26] L. S. Geng, J. Martin Camalich and M. J. Vicente Vacas, arXiv:0903.0779 [hep-ph].
[27] M. N. Butler, M. J. Savage and R. P. Springer, Nucl. Phys. B 399, 69 (1993).
The Influence of Thin Film Confinement on Surface Plasticity in Polystyrene and Poly(2-vinylpyridine) Homopolymer and Block Copolymer Films

22 Apr 2016

Bekele Gurmessa, Department of Physics, North Dakota State University
Andrew B. Croll,* Department of Physics, North Dakota State University ([email protected])

* To whom correspondence should be addressed.

arXiv:1604.06545v1 [cond-mat.soft]

Thin block copolymer films have attracted considerable academic attention because of their ability to self-assemble into various microstructures, many of which have potential technological applications. Despite the ongoing interest, little effort has focused on the onset of plasticity and failure, which are important factors for the eventual adoption of these materials. Here we use delamination to impart a quantifiable local strain on thin films of homopolymer polystyrene and poly(2-vinylpyridine), as well as block copolymers made of styrene and 2-vinylpyridine. Direct observation of the damage caused by bending with atomic force microscopy and laser scanning confocal microscopy leads to the identification of a critical strain for the onset of plasticity. Moving beyond our initial scaling analysis, the more quantitative analysis presented here shows strain levels for thick films to be comparable to bulk measurements. Monitoring the critical strain leads to several observations: 1.) as-cast PS-P2VP has a low critical strain, 2.) annealing slowly increases the critical strain as microstructural ordering takes place, and 3.) similar to the homopolymers, both as-cast and ordered films show increasing critical strain under confinement.

Introduction

Block copolymers are fascinating materials with a richness of mechanical behavior emerging from the organized nanostructures which can form within.
1-15 Over the past several decades, many block copolymer systems have been industrialized and much has been learned about the details of their bulk mechanical behavior. Studies have largely focused on hard-soft material combinations, driven by attempts to toughen materials while maintaining other desirable properties (optical transparency, for example).3-10 There is also some precedent for the study of hard-hard systems in attempts to compatibilize homopolymer mixtures or to add crystallinity.11-14 As one might imagine, material properties are found to depend on almost all of the features governing the nanostructure of the materials (volume fraction,6,7,9 chain-chain interactions,4,16 chain orientation and alignment,7,8,10,12 thermal history12).

Currently, the renewed industrial drive towards miniaturization has led to increased attention on block copolymers in thin film geometries.17-19 Despite the interest, there remains little direct mechanical characterization of block copolymer thin films, largely due to the lack of established methods for the mechanical testing of thin polymer films in general. The result is a gap in knowledge of properties that should be expected to vary from their bulk values, or at the least be much more severely constrained by geometric and interfacial phenomena. In an attempt to fill this gap, we have used a recently developed localized-bending methodology to examine the early stages of plastic failure in a model block copolymer, polystyrene-b-poly(2-vinylpyridine) or PS-P2VP, and the two homopolymers used to make it up (polystyrene, PS, and poly(2-vinylpyridine), P2VP).20 An increase in the critical strain for plastic deformation was observed in all samples as film thickness is reduced.
The critical strain recorded for the symmetric block copolymer fell halfway between the values recorded for the two homopolymers, but only after sufficient annealing for significant microphase separation to take place.

The glass transition temperature, Tg, is a prime example of a material property that has often been observed to deviate significantly from its bulk value as a polymer thin film's thickness drops below ∼ 100 nm.21-34 The changes are known to be related to the environment experienced by the film's surfaces. Surface affinity leads to slower than usual dynamics and an increased Tg, while low affinity or free surfaces generally lead to increased dynamics.21-23 For example, PS on a silicon support is found to have a glass transition that decreases upon confinement, whereas P2VP is found to have a significantly increased glass transition temperature due to increased hydrogen bonding with the substrate.21,26 On the other hand, free surfaces generally lead to universal decreases in Tg.22,23 It is important to note that recent work with stacked films26,27 and soft substrates31,34 has shown that surface chemistry alone is not enough to completely explain measurements.

Given the relationship between internal dynamics and mechanical behavior in bulk materials, it is not surprising that many researchers have attempted to find thin film effects in the mechanical properties of polymer films.25,28,35-47 The results have been mixed. For example, stiffening has been observed,40 and moduli have been observed to increase,41 decrease42-45 or remain unchanged46,47 in experiments. While there is no clear explanation for the discrepancy, different substrates and annealing histories may play a role, as well as differences in the lengthscales and timescales probed in the different measurements.
Large deformations may be influenced by changes in the inter-chain entanglement density that arise due to confinement,48,49 whereas small deformations are not.20 Slow, low-amplitude and direct mechanical experiments may be the only way to link with Tg phenomena, but even this is unclear.31,37,50

Recently, we have used delamination as a method to create simple, localized bending in thin free-standing polystyrene films.20 In these experiments a thin polymer film is mounted to a soft substrate and the composite is slowly compressed. Elastic instability leads to repetitive buckling (wrinkling) or delamination of the film from the substrate. The bending in a delaminated section of film is easily quantified by Laser Scanning Confocal Microscopy (LSCM) or Atomic Force Microscopy (AFM), and local curvature (thus strain) can be directly evaluated. More importantly, the film can be returned to its initially flat state by the removal of compression in the composite. The flat film can then be interrogated with AFM or LSCM in order to identify signs of yield. Damage can then be correlated with the surface strain created by the bending in the preceding delamination (see Fig. 1). The result is important because it evaluates a well-defined physical endpoint of a mechanical test (failure) and relies only on tracking the film's position in space. Hence, regardless of the details of what a modulus might mean for a heterogeneous thin film, a clear outcome of the deformation can be mapped. The goal of this work is threefold. First, we wish to examine the onset of plasticity in additional materials in order to prove that we are accurately measuring a material-dependent property. Secondly, we wish to improve our initial scaling analysis to the point where we can tentatively claim quantitative results. Finally, and most importantly, we wish to explore how microstructure might affect the failure process in thin block copolymer films.
The paper is organized largely along these lines. After the experimental procedures are described in detail, results for PS and P2VP homopolymer are discussed. Lastly, results observed with PS-P2VP block copolymer are described and brief conclusions are made.

Substrate and Film Preparation: The substrate was prepared by mixing PDMS prepolymer and cross-linker (Dow Corning, Sylgard 184) in a 20:1 weight ratio. The mixture was then degassed, poured into a petri dish, cured at 85 °C for two hours and then left in a vacuum oven for the next 12-15 hours. Finally, after the PDMS cooled down, it was cut into rectangular sections of dimension 3 mm × 12 mm × 70 mm. Matching pairs of samples were made by spin casting PS-b-P2VP/toluene solution onto a freshly cleaved mica substrate. The concentration of the polymer (by weight) considered here ranges from 0.5% to 3%. The solvent was toluene (Anhydrous 99.8%, Sigma-Aldrich Inc.) and the solutions were filtered (pore size 0.482 µm, Cadence Science Inc.) before spin coating. The first sample is used as-cast and the second sample undergoes a step of annealing to enable self-assembly of the block copolymer microdomains. To initiate the self-assembly process, samples were annealed at temperatures ranging from 165 °C to 195 °C, all well above the glass transition temperature of the bulk PS-b-P2VP (∼ 100 °C), either in an air environment or in a glove box filled with dry nitrogen. Finally, samples are transferred to a clean deionized water surface (Milli-Q) and subsequently transferred to a PDMS substrate loaded on a home-built strain stage in order to impart a compressive stress. In all cases the films are imaged with Laser Scanning Confocal Microscopy (LSCM - Olympus Fluoview 1000) or with Atomic Force Microscopy (AFM - DI Dimension 2100).

Thickness Measurement: Film thickness is one of the dominant geometric properties related to bending and must be measured carefully.
The homopolymer and as-cast block copolymer present little challenge in analysis; films are scratched near the feature of interest and Atomic Force Microscopy (AFM - DI Dimension 2100) is used to locally measure a film thickness. However, when the thin film is an ordered lamella-forming block copolymer, it will be decorated by terraces (islands or holes) whenever the as-cast thickness is not commensurate with the lamellar thickness. Under such conditions, using AFM cross-section analysis alone is insufficient to completely describe a thickness; an average thickness must be contrived from the density of surface features, f, the lamellar thickness L0, the number of complete layers below the surface, n, and volume conservation. Specifically, we calculate the average thickness as,

    h_avg = f × L0 + n × L0.    (1)

Alternatively, the largest thickness, (n + 1)L0, or the thickness of only the complete layers, nL0, was considered in the analysis. Neither resulted in significant changes in the overall trends observed. With the lamellar-forming polymer used here, we find L0 = 42 ± 0.7 nm, in agreement with other measurements.13,51

Mechanics: Once prepared, the composite sample is compressed with a custom-built strain stage. When under compression, the film buckles out of plane forming a sinusoidal pattern in a process known as wrinkling.52-57 The sinusoidal pattern itself is only stable in the small strain limit. Applying a large strain58-61 results in heterogeneous stress fields,62,63 which lead to non-linear responses such as localized bending, shear deformation zones, crazes and delaminations.38,49,64-66 In this work we focus only on delaminations that create free-standing film bends; however, other sharp features will also cause local plasticity. Once delaminated, the film surface is imaged in three dimensions using LSCM, allowing an unambiguous determination of local curvatures.
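The terrace averaging of Eq. (1) above can be sketched as follows. This is a minimal illustration, not the authors' analysis code; the function name and the coverage numbers are ours.

```python
# Minimal sketch of Eq. (1): average thickness of a terraced lamellar film
# from the fractional island/hole coverage f, the number of complete layers
# n, and the lamellar period L0 (42 nm for the polymer used in the text).

def average_thickness(f, n, L0=42.0):
    """h_avg = f*L0 + n*L0, in the same units as L0 (nm here)."""
    return (f + n) * L0

# Hypothetical film: two complete layers with 40% island coverage on top.
h_avg = average_thickness(f=0.40, n=2)   # 100.8 nm

# Bounding cases mentioned in the text:
h_max = average_thickness(f=1.0, n=2)    # (n + 1)*L0 = 126 nm
h_min = average_thickness(f=0.0, n=2)    # n*L0 = 84 nm
```

As the text notes, using either bounding thickness instead of the average does not change the trends reported below.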
When coupled with Euler buckling theory, the curvature yields a value for the local bending-induced strain. A typical experiment is shown in Fig. 2. The mechanical compression is removed, and once again the film is imaged with LSCM or AFM (see Fig. 2c and 2d).

Results and discussion

In a previous publication,20 Gurmessa example, if a second polymer has a much smaller modulus than polystyrene, an extreme strain on the substrate may be necessary to cause delamination.61 P2VP is an ideal molecule for such a comparison as its statistical segment size is almost identical to polystyrene, its bulk Tg is similar and its bulk mechanical properties are also close to those of polystyrene.67-69 A typical experiment is illustrated in Fig. 2. In this case we show an as-cast film of block copolymer (which has more relevance to the discussion below). The smooth film morphologically resembles what is observed with the P2VP homopolymer. Specifically, the figure shows LSCM images of an as-cast PS-b-P2VP film of thickness h = 71 nm under: zero strain (2a), compression (2b), removal of compression (2c) and after transfer to a highly reflective silicon substrate (2d). A smooth and flat surface is observed before strain is applied, but the film eventually evolves wrinkles and delaminations (brightest wrinkle peaks) as compressive stress is increased.61 Confocal imaging of the sample while it is under compression allows the delamination width to be accurately measured. More importantly, the height of each pixel can be determined through LSCM's optical sectioning and the amplitude of the delamination can be determined. The curvature scales as κ ∼ A/w², where A is amplitude and w is width, as shown in Fig. 1. Strictly speaking, this scaling is only quantitatively correct if the delamination forms a perfect ellipse (which is clearly not the case). In order to calculate a quantitatively accurate curvature, the precise shape of the delamination must be taken into account.
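Anticipating the cosine profile discussed next, the two curvature estimates used in the text (the closed form κ = 2π²A/w² and a local parabolic fit near the peak), together with the surface strain relation ε = hκ/2, can be sketched as follows. The numbers are hypothetical and the helper names are ours.

```python
import numpy as np

def curvature_cosine(A, w):
    # Peak curvature of the profile g(x) = (A/2)(1 + cos(2*pi*x/w)):
    # |g''(0)| = 2*pi^2*A/w^2 (numerical prefactor 2*pi^2 ~ 19.74).
    return 2 * np.pi**2 * A / w**2

def curvature_parabola(x, g):
    # Alternative: fit h(x) = a*x^2 + b*x + c near the peak; kappa ~ |2a|.
    a, _, _ = np.polyfit(x, g, 2)
    return abs(2 * a)

def surface_strain(h, kappa):
    # Bending strain at the film surface, eps = h*kappa/2.
    return h * kappa / 2

# Hypothetical delamination: amplitude A = 0.5 um, width w = 10 um.
A, w = 0.5, 10.0
x = np.linspace(-1.0, 1.0, 201)                 # sample near the peak only
g = (A / 2) * (1 + np.cos(2 * np.pi * x / w))   # assumed cosine profile

kappa = curvature_cosine(A, w)        # exact: pi^2/100 ~ 0.0987 um^-1
kappa_fit = curvature_parabola(x, g)  # parabolic estimate, close to exact
eps = surface_strain(h=0.15, kappa=kappa)       # for a 150 nm (0.15 um) film
```

The two estimates agree to within a few percent here, consistent with the observation below that both fits give similar slopes in the scaling plot.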
Recent theoretical predictions based on the Föppl-von Kármán equation suggest the delamination takes the following form: g(x) = (A/2)(1 + cos(2πx/w)), where g(x) is the plate surface, A is an amplitude and w is the width of the delamination.70 The curvature is given by the second derivative of g(x) evaluated at the peak of the delamination, x = 0. In this case κ = 2π²A/w² and the scaling prefactor is numerically 19.74. A typical fit of g(x) is shown in Fig. 3. The fit is reasonable, but imperfect, due to imprecision in matching the minima. In essence, the fit of such a function is dominated by the lateral location of the maxima and minima, not their respective vertical values. A conceptually more accurate alternative is to fit the peak with a parabola and calculate the curvature from the parabolic fit (h(x) = ax² + bx + c and κ ≈ 2a). As shown in the inset of Fig. 3a, the local fit is quite good. Figure 3b shows a plot of the curvature of many different experimental delaminations, determined by fits with g(x) and h(x), as a function of the scaling A/w². Data points are taken from PS, P2VP and as-cast diblock films, and thicknesses range from 40 nm to 315 nm. We note no deviation from either curve for any of the data; this should be expected for a purely geometric feature such as the curvature. Finally, both curves are linear (as one would expect) and have similar slopes. The hypothesis that the curvature is better represented by the parabolic fits is not borne out in practice. The measured slope is 20.7 ± 0.5. All data discussed below are adjusted to a quantitative scale using this measured prefactor.

There is a strong correlation between high delamination curvature and the amount of damage observed once the film is transferred to a flat silicon substrate. Fig. 4 shows the critical strain for plasticity as a function of film thickness for both PS (the same data as in reference 20, adjusted with the scaling prefactor discussed above) and homopolymer P2VP. The behavior of P2VP largely mirrors that of PS films. Both polymers have low critical strains in the traditional 'thick' film region, but show strains that dramatically increase as confinement increases. Notably, P2VP has critical strains ∼ 2% greater than PS over the entire range of measurement, as illustrated in Fig. 4. The result shows that the critical strain measured here is indeed a material-dependent property and not a geometric artifact. Also, the slightly higher values found for P2VP mirror the differences in their traditionally measured bulk values.69 Finally, the increased threshold in PS and P2VP is consistent with the changes observed in the glass transitions of freestanding films of the two materials, suggesting that at least a qualitative connection may exist between Tg and the plasticity we observe.29

To aid in interpreting the data, fits are made to a simple layer model, similar to that proposed by Keddie21 to explain the thickness dependence of the glass transition temperature of polymer thin films. The model assumes an enhanced molecular mobility (a liquid-like layer) near the free surface, compared to the bulk polymer. Several authors21,24,28,30,40,44 have shown that the glass transition temperature of thin polystyrene films is reduced significantly compared to the bulk value as confinement increases. The depression in Tg is attributed to the enhanced mobility of chain segments residing at the free surface, which is caused by the reduction of barriers to segmental cooperative motion.26 Here the layering will give rise to different yield strains at the surface and in the bulk.
Specifically, the yield strain as a function of thickness for a two-layered system is,

    ε_p(h) = (ℓ/h)(ε_p − ε_p^0) + ε_p^0,    (2)

where ℓ is the size of the soft layer and ε_p and ε_p^0 refer to the surface and bulk strain, respectively. A larger strain is needed in the liquid-like layer before stress can be stored; the stress can more easily relax away in the liquid layer than in the bulk layer. Fitting the data in Fig. 4 gives ε_p = 12 ± 1%, ε_p^0 = 0.3 ± 0.2% and ε_p = 31 ± 1%, ε_p^0 = 2.4 ± 0.1%, respectively, for PS and P2VP, assuming a typical lengthscale ℓ = 10 nm.29 The plateau values are smaller than yield strains measured in bulk, but this is not unexpected. The technique used here focuses on local, microscopic signs of damage. This kind of sensitivity is not possible with a bulk sample, because only force and displacement are measured in a traditional bulk experiment. A sample must move enough material plastically to be measured as hysteresis in a force-displacement curve. Furthermore, it would be extremely difficult to image the beginnings of plastic rearrangement after a bulk experiment because the entire bulk sample would need to be examined.

The block copolymer poses new challenges to interpretation because of its ability to microphase separate. In the disordered state the two blocks are well mixed and (upon vitrification) the solid is easily modeled as a linear, isotropic, continuum material. As the material begins to phase separate (or fluctuations become large) it is no longer clear that the material can be thought of in a continuum sense. However, with the small size of the domains and the long range of mechanical perturbation, continuum ideas are often applied successfully to microphase-separated solids.3-14 We follow similar assumptions, and make no changes to our modeling of the basic mechanical deformation (e.g. calculation of a surface strain) after microphase separation has taken place.
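The two-layer picture of Eq. (2) above can be sketched with the fitted PS and P2VP parameters quoted in the text. The helper name is ours, not the authors', and the thickness values are arbitrary illustrations.

```python
import numpy as np

def layer_model(h, eps_surface, eps_bulk, ell=10.0):
    # Eq. (2): eps_p(h) = (ell/h)*(eps_surface - eps_bulk) + eps_bulk,
    # with ell the assumed 10 nm liquid-like surface layer (all in nm,
    # strains in percent).
    return (ell / h) * (eps_surface - eps_bulk) + eps_bulk

h = np.array([40.0, 100.0, 300.0])  # illustrative film thicknesses in nm

# Fitted values quoted in the text (strains in %):
eps_ps   = layer_model(h, eps_surface=12.0, eps_bulk=0.3)
eps_p2vp = layer_model(h, eps_surface=31.0, eps_bulk=2.4)
# The critical strain rises sharply below ~100 nm and approaches the
# bulk plateau (eps_bulk) for thick films.
```

Note that as h grows, the ℓ/h term vanishes and the model recovers the thick-film plateau, which is the behavior seen in Fig. 4.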
In order to follow the microstructure's influence on the onset of plasticity in PS-b-P2VP films, the self-assembly process of the diblock copolymer microstructure must be carefully controlled. Self-assembly is typically carried out through thermal annealing12,71,72 or solvent annealing.73 Solvent annealing may leave residual stresses when the sample is quenched, but these stresses can be minimized with additional thermal annealing. In a typical thermal experiment, the copolymer is heated past its glass transition temperature (Tg ∼ 100 °C for PS and P2VP) for a predetermined time, followed by rapid quenching of the sample to room temperature (below Tg). When the sample is quenched below its glass transition temperature, the polymer structure is kinetically frozen due to the extremely low mobility of the chains. Given the composition of the block copolymer considered (PS block of Mn = 40 kg/mol and a P2VP block of Mn = 40.5 kg/mol), lamellae parallel to the mica substrate are expected at equilibrium.51 In this system, the PS block has a lower surface energy (compared to P2VP) and will reside at the free surface, whereas P2VP favors the substrate, yielding lamellae parallel to the substrate. A series of samples of similar thicknesses were annealed at temperatures ranging from 165 °C to 195 °C. When annealed above the glass transition for an appropriate time, the lamellar structure will force the surface of the block copolymer thin film to be decorated by terraces (holes, islands, bi-continuous patterns). Fig. 5 illustrates the surface morphological evolution observed as samples of similar thicknesses are annealed at a temperature of 175 °C for different annealing times. As time progresses, the domain size coarsens as expected. Fig. 5 makes it apparent that self-assembly of the polymer chains into periodic microdomains occurs at relatively short annealing times.
It is important to note that the appearance of a terraced morphology does not necessarily mean that microphase separation has completed, or that material properties have stopped changing.74,75 Tests of the critical strain were then conducted with microphase-separated PS-b-P2VP samples using the same procedures outlined above. The pre- and post-buckling states of the sample are illustrated in Fig. 6. The location of the delaminations is easy to follow through each stage of the experiment due to the unique pattern formed by the terraces (note that the sample in Fig. 6 is of non-uniform thickness and is only used to highlight the comparison). Regardless, the overall trend displayed in Figure 7 does allow two distinctly different regimes to be identified: as-cast and well annealed.

As-cast PS-b-P2VP films of varying thickness were constructed and the critical strain for plasticity was measured. Figure 8a shows the result, along with the layer model fits to the PS and P2VP homopolymer. The as-cast films have a 'bulk' value similar to PS, but as the films become thinner they fall off the PS trend-line and show a greater change in critical strain than PS. Fitting with the layer model, maintaining the same 10 nm surface layer, results in unphysical strain values (ε_p = 29 ± 3%, ε_p^0 = −0.4 ± 0.3%). The reason for the complex behavior is rooted in the non-equilibrium nature of the as-cast state. As a film is spin cast, polymer interacts with the two surfaces (here air and mica) until solvent evaporates and the film vitrifies. This means that there is often some degree of order present after spin casting a film simply because of the surfaces.74 Similar to other surface effects, the ordering will disproportionately affect very thin films. In this case, the thinnest films are largely ordered at the end of the spin coating process, a fact easily verified by repeating the measurement with annealed samples of similar thickness.
Figure 8b shows the results after long annealing times (> 4 h). The trend now falls roughly halfway between the PS homopolymer data (red curve) and the P2VP data (black curve). The thinnest films show critical strains that are identical to the values of the as-cast films, verifying our earlier hypothesis. A fit to Eq. (2) yields ε_p = 15 ± 1%, ε_p^0 = 1.8 ± 0.2%. We tentatively interpret the result as an indication that a simple 'mixing rule' can be applied to the annealed samples; they display properties proportional to the volume fraction of each material used to make them up. While this may be true for the relatively isotropic lamellar-forming system used above, measurements conducted with a cylinder-forming PS-P2VP molecule (of volume fraction f = 0.37 and similar total molecular weight and film thickness) show very little change from the as-cast to the ordered state (the ratio of critical strains is measured to be ∼ 0.95). Taken together, the conclusion can only be that long-range connectivity must play a major role in the mechanics of these thin films, because the lamellar film has long-range order whereas the cylinder-forming and as-cast material does not.

In summary, the critical strain for plasticity was measured in PS, P2VP and block copolymers of PS and P2VP. Contrary to earlier work, the analysis presented here moves beyond scaling estimates and is put into a quantitative framework. For all polymers studied, the critical strain was found to reach a constant but material-dependent value in thick films. Specifically, thick P2VP homopolymer becomes irreversibly deformed at a strain of ε = 2.4%, PS homopolymer at a strain of ε = 0.3%. As-cast symmetric diblock copolymer has a polystyrene-like value, whereas well-ordered block copolymer has a critical strain of ε = 1.8%. PS-rich cylinder-forming block copolymer was found to have a polystyrene-like critical strain, indicating that long-range order plays a role in the deformation process.
More interestingly, all polymers show an increase in critical strain as thickness falls below ∼ 100 nm, consistent with an assumed relation between the glass transition and plasticity in thin polymer films.

Figure 1: Schematic of the experimental geometry. (a) A thin polymer film is laminated to a soft substrate. (b) The composite is compressed, causing the film to wrinkle and delaminate. (c) Compression is removed and the film returns to a flat state, although damage may have occurred at locations of high curvature.

The polymers were purchased from Polymer Source Inc. and used as received. The symmetric PS-P2VP has a PS block of molecular weight Mn = 40 kg/mol and a P2VP block of Mn = 40.5 kg/mol. The polydispersity index (PDI) of the diblock is 1.02. The second molecule considered was a cylinder-forming PS-P2VP with a PS block of Mn = 56 kg/mol and a P2VP block of Mn = 21 kg/mol, and has a PDI of 1.06. The two molecules were chosen to have differing microphases but similar total molecular weight. In addition, polystyrene homopolymer of molecular weights Mn = 1.3 Mg/mol and Mn = 1.2 kg/mol with PDI of 1.15 and 1.04, respectively, and poly(2-vinylpyridine) (P2VP) homopolymer of molecular weight Mn = 135 kg/mol and PDI of 1.06, also purchased from Polymer Source Inc., were used in the experiment.

and Croll reported the measurement of the onset strain for plasticity at the surface of a thin polystyrene film. Several observations were made: 1.) the onset of plasticity occurs at low strain; scaling estimates put it at the order of ∼ 10^-3; 2.) the critical strain for plastic failure remains constant for thicker films but rises as the film confinement increases; 3.) the confinement-induced increases of the yield strain are independent of molecular weight and begin as the film thickness drops below about ∼ 100 nm.
The observation of a low strain motivates the need to advance the modeling such that the strain estimate can be made quantitative, which need only involve a detailed exploration of model assumptions and the precise shape of the delaminations at their peaks. Determining the generality of the second two observations is the main focus of the present work, and is accomplished through the use of additional polymer chemistry and architecture. A second polymer must be chosen to have similar mechanical properties to polystyrene, ensuring that no large-scale changes in experimental techniques are necessary. For

Figure 2: Typical surface morphologies of as-cast PS-b-P2VP films at each stage of the mechanical loading. A confocal microscope image of (a) the prestrain state, (b) the same film under compression, showing several wrinkles and delaminations, (c) the relaxed film corresponding to the location of the previous delaminations and (d) the same film after it has been transferred to a silicon substrate.

Figure 3 shows a typical delamination cross-section recorded with both AFM and confocal microscopy. The two measurements coincide within the error of locating a precise position along a delamination with two different instruments (essentially an error in length from a sample edge or other reference point to the location of interest). Curvature of such a feature will scale as κ ∼ A/w². For example, much of the film which only displayed low-curvature features (such as the wrinkles) while deformed shows no signs of damage at the end of the experiment, whereas regions of high curvature (tall delaminations) show clear signs of damage. The critical point for plastic deformation, the point of lowest curvature which still results in damage, can easily be extracted from the data, and the curvature at this point can be calculated.
The surface strain experienced at this location is given by ε = hκ/2, where h is the film thickness and κ is the local curvature.

Figure 3: (a) A typical delamination of a PS film measured by LSCM (red squares) and AFM (blue squares). The film is 150 nm thick. The full delamination is fit with a cosine (solid black) while the peaks are fit with a parabola (inset). (b) Scaling plot of the peak curvature calculated from the cosine fit (black squares) and the parabola fit (open circles). Data are from PS, P2VP and as-cast PS-b-P2VP films of various thicknesses (ranging from 40 nm to 315 nm). The solid line is a linear fit with a slope of 20.7. The dashed blue line shows a slope of 1 for reference.

Figure 4: Comparison of the evolution of the yield strain of PS and P2VP homopolymer as a function of confinement. Both polymers show increases in the critical strain for plasticity as thickness falls below 100 nm. Solid curves are fits to the layer model discussed in the text.

Figure 5: Surface morphologies of samples annealed at 175 °C for the annealing times indicated. All of the samples are decorated by holes of lamella spacing ∼42 µm. The scale bar indicates 20 µm and the sample at t = 0 has a thickness of 125 ± 5 nm.

Figure 6: Surface morphologies of ordered PS-b-P2VP films at various stages of the experiment. A confocal microscope image of a.) the prestrain state, b.) the same film under compression, with several wrinkles and delaminations, c.) the same location of the film after removal of compression, and d.) the same film after it has been transferred to silicon.

A flat surface is observed when no strain is applied, and the film eventually evolves to wrinkles and delaminations (brightest wrinkle peaks) as global compressive stress is increased. The sample once again flattens as the compression is removed. The experiment was repeated for as-cast films and samples of identical thickness treated to annealing at various temperatures and times.
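As a numerical illustration of the strain analysis above: if a delamination peak is described by a cosine profile y(x) = (A/2)[1 + cos(2πx/w)] of amplitude A and width w (an assumed parametrization, consistent with the cosine fits of Figure 3, not a form stated explicitly in the text), the peak curvature is |y''(0)| = 2π²A/w² ≈ 19.7 A/w², close to the measured slope of 20.7, and the surface strain follows as ε = hκ/2. A minimal sketch:

```python
import math

def peak_curvature(A, w):
    """Peak curvature of an assumed cosine delamination profile
    y(x) = (A/2) * (1 + cos(2*pi*x/w)), i.e. |y''(0)| = 2*pi^2 * A / w^2."""
    return 2.0 * math.pi ** 2 * A / w ** 2

def surface_strain(h, kappa):
    """Bending strain at the surface of a film of thickness h: eps = h*kappa/2."""
    return 0.5 * h * kappa

# Illustrative numbers (not measured values): a 150 nm film (0.150 um),
# delamination amplitude 1 um and width 10 um.
h = 0.150
kappa = peak_curvature(1.0, 10.0)   # ~0.197 um^-1
eps = surface_strain(h, kappa)      # ~1.5e-2, comparable to the low onset strains quoted
```

Note that the prefactor 2π² ≈ 19.7 from this assumed profile is within a few percent of the empirical slope 20.7 of the scaling plot.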
The results are summarized in Figure 7. For all temperatures qualitatively similar behavior is observed. The critical strain for plasticity increases as annealing takes place until it reaches a plateau value which does not change significantly upon further annealing. The sample annealed at 175 °C reaches the plateau after four hours of annealing, notably longer than it takes for the surface to break up into well defined islands (compare with Fig. 5). Unfortunately, the sample-to-sample variation obscures the fine details of the annealing process. For example, under the assumption of a single-time-constant exponential process (e.g. ε(t) ∼ A − B e^{−t/τ}) no clear trend is visible (see the inset of Fig. 7). The variation is likely due to small thickness differences between sets of samples (we measure of order 10 nm in thickness variation by AFM) coupled with the sensitivity of the island formation process to film thickness.

Figure 7: Evolution of critical strain in block copolymer films as the annealing time increases. The figure shows that the yield strain, regardless of annealing temperature, equilibrates to a finite value after a relatively short annealing time. The inset shows a plot of the time constant, τ, as a function of annealing temperature.

Figure 8: Critical strain as a function of film thickness of as-cast and PS films, fitted to the layer model described in the text.

1. Bates, F. S.; Fredrickson, G. H. Annu. Rev. Phys. Chem. 1990, 41, 525-557.
2. Lewis, P. R.; Price, C. Nature 1969, 223, 494-495.
3. Schwier, C. E.; Argon, A. S.; Cohen, R. E. Phil. Mag. A 1985, 52, 581-603.
4. Honeker, C. C.; Thomas, E. L. Chem. of Mater. 1996, 8, 1702-1714.
5. Weidisch, R.; Ensslen, M.; Michler, G. H.; Fischer, H. Macromolecules 1999, 32, 5375-5382.
6. Weidisch, R.; Michler, G. H.; Fischer, H.; Arnold, M.; Hofmann, S.; Stamm, M. Polymer 1999, 40, 1191-1199.
7. Weidisch, R.; Schreyeck, G.; Ensslen, M.; Michler, G. H.; Stamm, M.; Schubert, D. W.; Budde, H.; Höring, S.; Arnold, M.; Jerome, R. Macromolecules 2000, 33, 5495-5504.
8. Cohen, Y.; Albalak, R. J.; Dair, B. J.; Capel, M. S.; Thomas, E. Macromolecules 2000, 33, 6502-6516.
9. Lach, R.; Weidisch, R.; Janke, A.; Knoll, K. Macromol. Rapid Commun. 2004, 25, 2019-2024.
10. Ye, C.; Singh, G.; Wadley, M. L.; Karim, A.; A., C. K.; Vogt, B. D. Macromolecules 2013, 46, 8608-8615.
11. Ryu, C. Y.; Ruokolainen, J.; Fredrickson, G. H.; Kramer, E. J.; Hahn, S. F. Macromolecules 2002, 35, 2157-2166.
12. Ruokolainen, J.; Fredrickson, G. H.; Kramer, E. J.; Ryu, C. Y.; Hahn, S. F.; Magonov, S. N. Macromolecules 2002, 35, 9391-9402.
13. Lee, J.-Y.; Crosby, A. J. Macromolecules 2005, 38, 9711-9717.
14. Kim, W.; Han, J.; Ryu, C. Y.; Yang, H. J. Poly. Sci. B. Poly. Phys. 2006, 44, 3612-3620.
15. Makke, A.; Perez, M.; Lame, O.; Barrat, J.-L. Proc. Nat. Acad. Sci. 2012, 109, 680-685.
16. Weidisch, R.; Ensslen, M.; Michler, G. H.; Arnold, M.; Budde, H.; S., H.; Fisher, H. Macromolecules 2001, 34, 2528-2535.
17. Matsen, M. W. J. Chem. Phys. 1997, 106, 7781-7791.
18. Fasolka, M. J.; Mayes, A. M. Annu. Rev. Mater. Res. 2001, 31, 323-355.
19. Segalman, R. A. Mat. Sci. Eng. R Rep. 2005, 48, 191-226.
20. Gurmessa, B. J.; Croll, A. B. Phys. Rev. Lett. 2013, 110, 078303.
21. Keddie, J. L.; Jones, R. A. L.; Cory, R. A. Europhys. Lett. 1994, 27, 59-64.
22. Sharp, J. S.; Forrest, J. A. Phys. Rev. Lett. 2003, 91, 235701.
23. Bäumchen, O.; McGraw, J. D.; Forrest, J. A.; Dalnoki-Veress, K. Phys. Rev. Lett. 2012, 109, 055701.
24. Ellison, C. J.; Torkelson, J. M. Nature Mater. 2003, 2, 695-700.
25. Cheng, W.; Sainidou, R.; Burgardt, P.; Stefanou, N.; Kiyanova, A.; Efremov, M.; Fytas, G.; Nealey, P. F. Macromolecules 2007, 40, 7283-7290.
26. Roth, C. B.; McNerny, K. L.; Jager, W. F.; Torkelson, J. M. Macromolecules 2007, 40, 2568-2574.
27. Roth, C. B.; Torkelson, J. M. Macromolecules 2007, 40, 3328-3336.
28. Fakhraai, Z.; Forrest, J. A. Science 2008, 319, 600-604.
29. Paeng, K.; Ediger, M. D. Macromolecules 2011, 44, 7034-7042.
30. Pye, J. E.; Roth, C. B. Phys. Rev. Lett. 2011, 107, 235701.
31. Lang, R. J.; Merling, W. L.; Simmons, D. S. ACS Macro Letters 2014, 3, 758-762.
32. Torres, J. A.; Nealey, P. F.; de Pablo, J. J. Phys. Rev. Lett. 2000, 85, 3221-3225.
33. Varnik, F.; Baschnagel, J.; Binder, K. Phys. Rev. E 2002, 65, 021507.
34. Zhang, C.; Guo, Y.; Priestley, R. D. Macromolecules 2011, 44, 4001-4006.
35. Lee, Y.-C.; Bretz, K. C.; Wise, F. W.; Sachse, W. Appl. Phys. Lett. 1996, 69, 1692.
36. Yoshimoto, K.; Jain, T. S.; Nealey, P. F.; de Pablo, J. J. J. Chem. Phys. 2005, 122, 144712.
37. Torres, J. M.; Stafford, C. M.; Vogt, B. D. ACS Nano 2009, 3, 2677-2685.
38. Lee, J.-H.; Chung, J. Y.; Stafford, C. M. ACS Macro. Lett. 2012, 1, 122-126.
39. Chung, P. C.; Glynos, E.; Green, P. F. Langmuir 2014, 30, 15200-15205.
40. O'Connell, P. A.; McKenna, G. B. Science 2005, 307, 1760-1763.
41. Tweedie, C. A.; Constantinides, G.; Lehman, K. E.; Brill, D. J.; Blackman, G. S.; Van Vliet, K. J. Adv. Mater. 2007, 19, 2540-2546.
42. Stafford, C. M.; Vogt, B. D.; Harrison, C.; Julthongpiput, D.; Huang, R. Macromolecules 2006, 39, 5095-5099.
43. Zhao, J.-H.; Kiene, M.; Hu, C.; Ho, P. S. Appl. Phys. Lett. 2000, 77, 2843-2845.
44. Böhme, T. R.; de Pablo, J. J. J. Chem. Phys. 2002, 116, 9939-9951.
45. Miyake, K.; Satomi, N.; Sasaki, S. Appl. Phys. Lett. 2006, 89, 031925.
46. Forrest, J. A.; Dalnoki-Veress, K.; Dutcher, J. R. Phys. Rev. E 1998, 58, 6109-6114.
47. Crosby, A. Private Communication.
48. Brown, H. R.; Russell, T. P. Macromolecules 1996, 29, 798-800.
49. Si, L.; Massa, M. V.; Dalnoki-Veress, K.; Brown, H. R.; Jones, R. A. L. Phys. Rev. Lett. 2005, 94, 127801.
50. Schnell, B.; Meyer, H.; Fond, C.; P., W. J.; Baschnagel, J. Eur. Phys. J. E 2011, 34, 97-110.
51. Ji, S.; Liu, C.-C.; Son, J. G.; Gotrik, K.; Craig, G. S.; Gopalan, P.; Himpsel, F.; Char, K.; Nealey, P. F. Macromolecules 2008, 41, 9098-9103.
52. Bowden, N.; Brittain, S.; Evans, A. G.; Hutchinson, J. W.; Whitesides, G. M. Nature 1998, 393, 146-149.
53. Cerda, E.; Mahadevan, L. Phys. Rev. Lett. 2003, 90, 074302.
54. Groenewold, J. Physica A 2001, 298, 32-45.
55. Huang, J.; Juszkiewicz, M.; de Jeu, W. H.; Cerda, E.; Emrick, T.; Menon, N.; Russell, T. P. Science 2007, 317, 650-653.
56. Stafford, C. M.; Harrison, C.; Beers, K. L.; Karim, A.; Amis, E. J.; Vanlandingham, M. R.; Kim, H.-C.; Volksen, W.; Miller, R. D.; Simonyi, E. E. Nature Mater. 2004, 3, 545-550.
57. Chung, J. Y.; Lee, J. H.; Beers, K. L.; Stafford, C. M. Nano. Lett. 2011, 11, 3361-3365.
58. Pocivavsek, L.; Dellsy, R.; Kern, A.; Johnson, S.; Lin, B.; Lee, K.-Y. C.; Cerda, E. Science 2008, 320, 912-916.
59. Davidovitch, B.; Schroll, R. D.; Vella, D.; Adda-Bedia, M.; Cerda, E. A. Proc. Nat. Acad. Sci. 2011.
60. Holmes, D.-P.; Crosby, A. J. Phys. Rev. Lett. 2010, 105, 038303.
61. Ebata, Y.; Croll, A. B.; Crosby, A. J. Soft Matter 2012, 8, 9086-9091.
62. Witten, T. A. Rev. Mod. Phys. 2007, 79, 643-675.
63. Tallinen, T.; Astrom, J. A.; Timonen, J. Nat. Mater. 2009, 8, 25-28.
64. Mei, H.; Landis, C. M.; Huang, R. Mech. Mater. 2011, 43, 627-642.
65. Audoly, B.; Pomeau, Y. Elasticity and Geometry; Oxford University Press, 2010.
66. Chung, J. Y.; Nolte, A. J.; Stafford, C. M. Adv. Mater. 2011, 23, 349-368.
67. Argon, A. S.; Andrews, R. D.; Godrick, J. A.; Whitney, W. J. Appl. Phys. 1968, 39, 1899-1906.
68. Kramer, E. J. J. Macrom. Sci. B 1974, 10, 191-205.
69. Takahashi, Y.; Ochiai, N.; Matsushita, Y.; Noda, I. Poly. J. 1996, 28, 1065-1070.
70. Vella, D.; Bico, J.; Boudaoud, A.; Roman, B.; Reis, P. M. Proc. Nat. Acad. Sci. 2009, 106, 10901-10906.
71. Schulz, M. F.; Khandpur, A. K.; Bates, F. S. Macromolecules 1996, 29, 2857-2867.
72. Mai, Y.; Eisenberg, A. Chem. Soc. Rev. 2012, 41, 5969-5985.
73. Grozea, C. M.; Li, I. T.; Grozea, D.; Walker, G. C. Macromolecules 2011, 44, 3901-3909.
74. Mayes, A. M.; Russell, T. P.; Bassereau, P.; Baker, S. M.; Smith, G. S. Macromolecules 1994, 27, 749-755.
75. Croll, A. B.; Shi, A.-C.; Dalnoki-Veress, K. Phys. Rev. E 2009, 80, 051803.
76. Collin, B.; Chatenay, D.; Coulon, G.; Ausserre, D.; Gallot, Y. Macromolecules 1992, 25, 1621-1622.
Title: The Influence of Thin Film Confinement on Surface Plasticity in Polystyrene and Poly(2-vinylpyridine) Homopolymer and Block Copolymer Films
Authors: Bekele Gurmessa and Andrew B. Croll, Department of Physics, North Dakota State University
arXiv:1604.06545; DOI: 10.1021/acs.macromol.5b01240

Abstract: Thin block copolymer films have attracted considerable academic attention because of their ability to self-assemble into various microstructures, many of which have potential technological applications. Despite the ongoing interest, little effort has focused on the onset of plasticity and failure, which are important factors for the eventual adoption of these materials. Here we use delamination to impart a quantifiable local strain on thin films of homopolymer polystyrene and poly(2-vinylpyridine), as well as block copolymers made of styrene and 2-vinylpyridine. Direct observation of the damage caused by bending with atomic force microscopy and laser scanning confocal microscopy leads to the identification of a critical strain for the onset of plasticity. Moving beyond our initial scaling analysis, the more quantitative analysis presented here shows strain levels for thick films to be comparable to bulk measurements. Monitoring the critical strain leads to several observations: 1.) as-cast PS-P2VP has a low critical strain, 2.) annealing slowly increases the critical strain as microstructural ordering takes place, and 3.) similar to the homopolymers, both as-cast and ordered films show increasing critical strain under confinement.

Block copolymers are fascinating materials with a richness of mechanical behavior emerging from the organized nanostructures which can form within. 1-15 Over the past several decades, many block copolymer systems have been industrialized and much has been learned about the details of their bulk mechanical behavior. Studies have largely focused on hard-soft material combinations, driven by attempts to toughen materials while maintaining other desirable properties (optical transparency, for example). 3-10 There is also some precedent for the study of hard-hard systems in attempts to compatibilize homopolymer mixtures or attempts to add crystallinity. 11-14 As one might imagine, material properties are found to depend on almost all of the features governing the nanostructure of the materials (volume fraction, 6,7,9 chain-chain interactions, 4,16 chain orientation and alignment, 7,8,10,12 thermal history 12 ).

Currently, the renewed industrial drive towards miniaturization has led to increased attention on block copolymers in thin film geometries. 17-19 Despite the interest, there remains little direct mechanical characterization of block copolymer thin films, largely due to the lack of established methods for the mechanical testing of thin polymer films in general. The result is a gap in knowledge of properties that should be expected to vary from their bulk values, or at the least be much more severely constrained by geometric and interfacial phenomena. In an attempt to fill this gap, we have used a recently developed localized-bending methodology to examine the early stages of plastic failure in a model block copolymer, polystyrene-b-poly(2-vinylpyridine) or PS-P2VP, and the two homopolymers used to make it up (polystyrene, PS, and poly(2-vinylpyridine), P2VP). 20 An increase in the critical strain for plastic deformation was observed in all samples as film thickness is reduced. The critical strain recorded for the symmetric block copolymer fell halfway between the values recorded for the two homopolymers, but only after sufficient annealing for significant microphase separation to take place.

The glass transition temperature, T_g, is a prime example of a material property that has often been observed to deviate significantly from its bulk value as a polymer thin film's thickness drops below ∼100 nm. 21-34 The changes are known to be related to the environment experienced by the film's surfaces. Surface affinity leads to slower than usual dynamics and an increased T_g, while low affinity or free surfaces generally lead to increased dynamics. 21-23 For example, PS on a silicon support is found to have a glass transition that decreases upon confinement, whereas P2VP is found to have a significantly increased glass transition temperature due to increased hydrogen bonding with the substrate. 21,26 On the other hand, free surfaces generally lead to universal decreases in T_g. 22,23 It is important to note that recent work with stacked films 26,27 and soft substrates 31,34 has shown that surface chemistry alone is not enough to completely explain measurements.

Given the relationship between internal dynamics and mechanical behavior in bulk materials, it is not surprising that many researchers have attempted to find thin film effects in the mechanical properties of polymer films. 25,28,35-47 The results have been mixed. For example, stiffening has been observed, 40 and moduli have been observed to increase, 41 decrease 42-45 or remain unchanged 46,47 in experiments. While there is no clear explanation for the discrepancy, different substrates and annealing histories may play a role, as well as differences in the lengthscales and timescales probed in the different measurements. Large deformations may be influenced by changes in the inter-chain entanglement density that arise due to confinement, 48,49 whereas small deformations are not. 20 Slow, low-amplitude and direct mechanical experiments may be the only way to link with T_g phenomena, but even this is unclear. 31,37,50

Recently, we have used delamination as a method to create simple, localized bending in thin free-standing polystyrene films. 20 In these experiments a thin polymer film is mounted to a soft substrate and the composite is slowly compressed. Elastic instability leads to repetitive buckling (wrinkling) or delamination of the film from the substrate. The bending in a delaminated section of film is easily quantified by Laser Scanning Confocal Microscopy (LSCM) or Atomic Force Microscopy (AFM), and the local curvature (thus strain) can be directly evaluated. More importantly, the film can be returned to its initially flat state by the removal of compression in the composite. The flat film can then be interrogated with AFM or LSCM in order to identify signs of yield. Damage can then be correlated with the surface strain created by the bending in the preceding delamination (see Fig. 1). The result is important because it evaluates a well defined physical
Orbital Migration of Protoplanets in a Marginally Gravitationally Unstable Disk

14 Jan 2013

Alan P. Boss ([email protected])
Department of Terrestrial Magnetism, Carnegie Institution, 5241 Broad Branch Road NW, Washington, DC 20015-1305

Subject headings: Hydrodynamics - Protoplanetary disks - Planet-disk interactions - Planets and satellites: dynamical evolution and stability - Planets and satellites: formation

Core accretion and disk instability require giant protoplanets to form in the presence of disk gas. Protoplanet migration models generally assume disk masses low enough that the disk's self-gravity can be neglected. However, disk instability requires a disk massive enough to be marginally gravitationally unstable (MGU). Even for core accretion, a FU Orionis outburst may require a brief MGU disk phase. We present a new set of three dimensional, gravitational radiation hydrodynamics models of MGU disks with multiple protoplanets, which interact gravitationally with the disk and with each other, including disk gas mass accretion. Initial protoplanet masses are 0.01 to 10 M⊕ for core accretion models, and 0.1 to 3 M_Jup for Nice scenario models, starting on circular orbits with radii of 6, 8, 10, or 12 AU, inside a 0.091 M⊙ disk extending from 4 to 20 AU around a 1 M⊙ protostar. Evolutions are followed for up to ∼4000 yr and involve phases of relative stability (e ∼ 0.1) interspersed with chaotic phases (e ∼ 0.4) of orbital interchanges. The 0.01 to 10 M⊕ cores can orbit stably for ∼1000 yr: monotonic inward or outward orbital migration of the type seen in low mass disks does not occur. A system with giant planet masses similar to our Solar System's (1.0, 0.33, 0.1, 0.1 M_Jup) was stable for over 1000 yr, and a Jupiter-Saturn-like system was stable for over 3800 yr, implying that our giant planets might well survive an MGU disk phase.
Introduction

The discovery of short-period giant planets forced theorists to study the inward orbital migration of giant planets formed at much greater distances. Attention has focused on Type I and Type II migration (Kley & Nelson 2012), the former dealing with ∼10 M⊕ cores that migrate rapidly due to tidal torques from the gaseous disk, and the latter dealing with ∼M_Jup protoplanets that are massive enough to open a gap in the disk and thereafter evolve along with it. Unchecked inward Type I migration presumably can lead to the loss of the protoplanet. However, in the core accretion scenario for giant planet formation, inward Type I migration can speed the growth of a core by the sweeping up of the smaller bodies it encounters (e.g., Alibert et al. 2005), albeit at the price of a free parameter that reduces the Type I migration rate to a favorable value. Most Type I migration models consider disks with masses low enough that the disk's self-gravity can be ignored. However, analytical (Pierens & Huré 2005) and two-dimensional hydrodynamical (Baruteau & Masset 2008) studies of Type I migration in self-gravitating disks found that the inclusion of disk self-gravity could lead to a significant reduction in the Type I migration rate. Similarly, Nelson & Benz (2003) found that even massive planets undergoing Type I migration in gravitationally stable disks had their migration rates reduced when the disk's self-gravity was included. Disk instability scenarios for giant planet formation in self-gravitating disks may sidestep the danger of Type I migration, as the clumps initially formed have masses of order 1 M_Jup (e.g., Boss 2005), large enough to open disk gaps and undergo Type II migration in a low mass disk. Marginally gravitationally unstable (MGU) disks are of interest from several points of view.
Solar-type young stars are observed to undergo FU Orionis outbursts (e.g., Hartmann & Kenyon 1996), where mass accretion rates onto the central protostar increase dramatically and remain high for periods of order 100 yr. Such outbursts may well occur every ∼10^4 yr in T Tauri stars. A leading explanation for FU Orionis outbursts is a MGU disk (e.g., Zhu et al. 2010; Vorobyov & Basu 2010). MGU disks also offer an attractive mechanism for achieving the large-scale transport, both inward and outward, of small particles in the solar nebula that appears to be required to explain the presence of refractory grains in comets (e.g., Brownlee et al. 2006; Boss et al. 2012), and for mixing initially spatially heterogeneous distributions of isotopes (Boss 2012a). Finally, MGU disks are required for the disk instability mechanism of giant planet formation to operate (e.g., Boss 1997). Recent extremely high spatial resolution models have shown that disk instability is able to produce fragments (even inside ∼10 AU) in MGU disks with much longer cooling times than had previously been thought to be needed (Paardekooper 2012; Meru & Bate 2012), in strong support of the disk instability mechanism.

Core accretion and disk instability both require giant protoplanets to form in the presence of the disk gas. Considerable efforts have gone into theoretical studies of protoplanetary orbital migration (reviewed by Kley & Nelson 2012), yet nearly all models of the interactions of protoplanets with disk gas assume a disk mass low enough that the disk's self-gravity can be neglected, as previously noted. The exceptions also include works by Boss (2005), Baruteau et al. (2011), and Michael et al. (2011), who all studied quite different initial conditions for MGU disks, and as a result found a wide range of outcomes, from large-scale inward orbital migration to relatively little orbital migration.
The Nice model has become a leading explanation for the orbital evolution of the giant planets in our solar system (Gomes et al. 2005; Levison et al. 2008). In the Nice model, Saturn forms with an orbital period less than twice that of Jupiter, but as both planets interact with a massive residual disk of planetesimals (following dissipation of the gaseous disk), their orbits cross a 2:1 mean motion resonance, where Saturn's orbital period equals twice that of Jupiter. At that point, their orbits are destabilized, and Saturn undergoes a phase of close encounters with Uranus and Neptune, which are assumed to have formed outside Saturn's orbit. The two ice giants are kicked further outward to their present orbits, while the two gas giants are left behind on slightly eccentric orbits, with e ∼ 0.05 to 0.1. While the Nice model was derived in the context of the core accretion model for giant planet formation, the question arises as to the orbital stability of multiple giant planet systems during a MGU phase, such as during a FU Orionis outburst prior to gaseous disk dissipation in the core accretion scenario, or as a result of formation by the disk instability mechanism.

Boss (2005) studied the orbital evolution of single "virtual protoplanets" (VPs) with initial masses of 1 M_Jup embedded in a MGU disk. Here we present two new sets of models, each with up to four VPs initially present in the MGU disk. In the first set, the VP masses are chosen to investigate the orbital evolution of ∼Earth-mass cores trying to accrete gas during an MGU disk phase, and in the second set, to investigate the evolution of already formed giant planets embedded in MGU disks, a situation analogous to the Nice model of giant planet evolution in gas-free, massive planetesimal disks.
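The 2:1 mean motion resonance invoked above corresponds, via Kepler's third law (P² ∝ a³), to a semi-major-axis ratio of 2^(2/3) ≈ 1.59. A short sketch (the function name is mine, for illustration only):

```python
def axis_ratio_for_period_ratio(p):
    """Kepler's third law (P^2 proportional to a^3): the semi-major-axis
    ratio a2/a1 corresponding to an orbital period ratio p is p**(2/3)."""
    return p ** (2.0 / 3.0)

# At the 2:1 resonance, Saturn's period is exactly twice Jupiter's,
# placing it at ~1.59 Jupiter semi-major axes.
ratio_21 = axis_ratio_for_period_ratio(2.0)
```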
Numerical Methods for the Disk

The three dimensional (3D) numerical hydrodynamics code used is the same as that used in previous studies of disk instability (e.g., Boss 2005, 2010, 2011, 2012b), any one of which may be consulted for a brief summary of the numerical techniques. A full description of the code and various test cases is given by Boss & Myhill (1992). The code has also been tested specifically for accuracy in disk instability calculations (e.g., Boss 2012b, and references therein). Compared to the models presented by Boss (2005), the only differences are that the equations were solved on a spherical coordinate grid with N_φ = 256 and the number of terms in the spherical harmonic expansion for the gravitational potential of the disk was N_Ylm = 32. As in Boss (2005), the equations were solved with N_r = 101 and N_θ = 23 in π/2 ≥ θ ≥ 0. The radial grid was uniformly spaced with Δr = 0.16 AU between 4 AU and 20 AU. The θ grid was compressed toward the midplane to ensure adequate vertical resolution (Δθ = 0.3° at the midplane). While this spatial resolution is sufficient to model the large-scale evolution of a MGU disk, it may not be fine enough to properly resolve the Roche lobes and Hill spheres of individual protoplanets. However, given that the masses of even the most massive protoplanets that form are much less than the total disk mass, to first order MGU disks evolve on their own, with only minor perturbations from the embedded protoplanets. In fact, searches for features in the disk gas distribution associated with the more massive protoplanets did not reveal any clear structures. The Jeans length criterion (e.g., Truelove et al. 1997; Boss 2002) was used to ensure that any clumps that formed were not numerical artifacts during the 400 yr of disk evolution leading from the analytical initial conditions (see below) to the relaxed disk phase when the protoplanets were inserted into the models.
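The uniform radial grid quoted above can be reproduced directly: 101 points spanning 4 to 20 AU give Δr = (20 − 4)/100 = 0.16 AU. A sketch (not the paper's code):

```python
def radial_grid(r_in=4.0, r_out=20.0, n_r=101):
    """Uniformly spaced radial grid points in AU, matching the quoted
    setup: 101 points spanning 4-20 AU with spacing 0.16 AU."""
    dr = (r_out - r_in) / (n_r - 1)
    return [r_in + i * dr for i in range(n_r)]

grid = radial_grid()
dr = grid[1] - grid[0]  # 0.16 AU
```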
In the interests of pushing the protoplanet models as far as possible in time, however, the Jeans length criterion was thereafter ignored, as it might at times have forced a significant refinement in the grid structure, slowing the subsequent evolution. This approach seemed reasonable, as studying any subsequent clump formation was not the goal of these models. Even still, the models typically required from two to three years of continual computation on a dedicated cluster node.

Initial Conditions for the Disk

The MGU disk system initially consists of a 1 M⊙ central protostar surrounded by a protoplanetary disk with a mass of 0.091 M⊙ between 4 AU and 20 AU. The initial protoplanetary disk structure is the same as that defined in Boss (2005), which is an approximate vertical density distribution (Boss 1993) for an adiabatic, self-gravitating disk of arbitrary thickness in near-Keplerian rotation about a point mass $M_s$:

$$\rho(R, Z)^{\gamma-1} = \rho_o(R)^{\gamma-1} - \frac{\gamma-1}{\gamma}\left[\frac{2\pi G \sigma(R)}{K}\, Z + \frac{G M_s}{K}\left(\frac{1}{R} - \frac{1}{(R^2+Z^2)^{1/2}}\right)\right],$$

where R and Z are cylindrical coordinates, ρ_o(R) is the midplane density, and σ(R) is the surface density. The adiabatic pressure used in the initial model is defined by p = Kρ^γ, where the adiabatic constant K = 1.7 × 10^17 (cgs units) and γ = 5/3 for the initial model. The radial variation of the midplane density is a power law that ensures near-Keplerian rotation throughout the disk:

$$\rho_o(R) = \rho_{o4}\left(\frac{R_4}{R}\right)^{3/2},$$

where ρ_o4 = 1.0 × 10^-10 g cm^-3 and R_4 = 4 AU. A lower density halo ρ_h of infalling molecular cloud gas and dust surrounds the disk, with

$$\rho_h(r) = \rho_{h4}\left(\frac{R_4}{r}\right)^{3/2},$$

where ρ_h4 = 1.0 × 10^-14 g cm^-3, and r is the spherical coordinate radius. The initial temperature profile is based on the two dimensional radiative hydrodynamics calculations of Boss (1996), specifically the "standard model" shown in Figure 9 of Boss (1996).
The models have an outer disk temperature of T_o = 50 K, resulting in an initial Q gravitational stability parameter as low as Q_min = 1.5 in the outermost disk, so that the outermost disk is gravitationally unstable. The initial midplane disk temperature at 4 AU (the inner boundary) is 600 K, leading to Q > 10 in the innermost disk and gravitational stability there. Overall, then, the disk is marginally gravitationally unstable. The initial disk model is then evolved for 400 yr (3.8 × 10^5 time steps) before the protoplanets are inserted into the disk, in order to allow the disk to settle into a steady phase of disk instability, as shown in Figure 1. Several distinct clumps and spiral arms are evident at this initial time for the protoplanet evolutions, allowing the models to investigate a "worst case scenario" for the survival of protoplanets during an MGU disk phase.

Numerical Methods for the Protoplanets

The protoplanets are handled in the same manner as described by Boss (2005), where a dense clump was represented by a virtual protoplanet (VP) in the dynamically evolving disk. A VP is a point mass object that accretes mass and angular momentum from the disk, thereby determining its orbital evolution, subject to the gravitational forces of the central protostar and the spiral arms and clumps of the MGU disk, while the disk itself reacts to the gravitational force of the VP. Rice et al. (2003) used a similar technique in their smoothed particle hydrodynamics (SPH) models of fragmentation in protostellar disks. Krumholz et al. (2004) described their own technique for inserting sink particles into an adaptive mesh refinement hydrodynamics code.
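The disk stability quoted above (Q_min ≈ 1.5 in the cold outer disk, Q > 10 in the hot inner disk) can be illustrated with a minimal Toomre Q sketch; the surface density values below are assumed placeholders, not the models' actual profile:

```python
import math

# Physical constants in cgs units
G = 6.674e-8
k_B = 1.381e-16
m_H = 1.673e-24
M_sun = 1.989e33
AU = 1.496e13

def toomre_Q(T, sigma, R, M_s=1.0 * M_sun, mu=2.3, gamma=5.0 / 3.0):
    """Toomre Q = c_s * Omega / (pi * G * sigma) for a near-Keplerian disk,
    approximating the epicyclic frequency by the orbital frequency Omega."""
    c_s = math.sqrt(gamma * k_B * T / (mu * m_H))
    Omega = math.sqrt(G * M_s / R ** 3)
    return c_s * Omega / (math.pi * G * sigma)
```

At fixed surface density, raising T from 50 K to 600 K raises c_s (and hence Q) by a factor of sqrt(12) ≈ 3.5, on top of the increase in Ω at small radii, which is why the hot inner disk is strongly stable while the cold outer disk is not.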
Each VP is assumed to accrete mass at the rate Ṁ given by the Bondi-Hoyle-Lyttleton (BHL) formula (Livio 1986; Ruffert & Arnett 1994):

$$\dot{M} = f \, \frac{4\pi \rho (G M)^2}{(v^2 + c_s^2)^{3/2}},$$

where f is a dimensionless coefficient, G is the gravitational constant, M is the VP mass, ρ is the local disk gas density, c_s is the local sound speed, and v is the speed of the VP through the local gas. The VPs also accrete orbital angular momentum from the disk gas, by accreting an amount of momentum from the local hydrodynamical cell proportional to the mass being accreted from that cell, i.e., by "consistent advection", in such a way as to guarantee the conservation of the total orbital angular momentum of the entire system. The mass and angular momentum accreted by each VP are removed from the cells in which they reside during the time step under consideration. The protoplanets' resulting updated velocities are used to calculate their updated positions, to second-order accuracy in space and time, consistent with the accuracy of the hydrodynamical solution. To keep the system synchronized, the same time step is used for both the disk hydrodynamics and the protoplanet orbital evolutions. The hydrodynamical time steps used are quite small, typically < 10^-3 yr. This is about 10^-4 of an orbital period at 4 AU, and about 10^-5 of an orbital period at 20 AU. These exceedingly small time steps help to ensure the accuracy of the protoplanets' orbital evolutions; symplectic integrators typically achieve excellent energy conservation when using time steps of order 10^-3 of the orbital period (e.g., Chambers 2003). As in Boss (2005), the VPs affect the disk's evolution through having the gravitational potential of each point mass (−GM/R) added into the total gravitational potential of the entire system, where the radius R is the distance from a VP position to a cell center.
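The BHL rate and consistent advection can be sketched as follows (a minimal illustration; the function names are ours, not those of the actual code):

```python
import math

def mdot_bhl(f, rho, G, M, v, c_s):
    """Bondi-Hoyle-Lyttleton rate: Mdot = f * 4*pi*rho*(G*M)**2 / (v**2 + c_s**2)**(3/2)."""
    return f * 4.0 * math.pi * rho * (G * M) ** 2 / (v * v + c_s * c_s) ** 1.5

def accrete_from_cell(m_cell, L_cell, dm):
    """Consistent advection: remove mass dm from a cell together with the
    proportional share of its angular momentum, so total L is conserved.
    Returns the updated cell (mass, L) and the angular momentum gained by the VP."""
    dL = L_cell * dm / m_cell
    return (m_cell - dm, L_cell - dL), dL
```

Note the strong Ṁ ∝ M² dependence: a protoplanet twice as massive accretes four times as fast, all else being equal.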
The radius R in these point-mass potentials is constrained to be no smaller than Δr/2, where Δr is the local grid spacing in the radial coordinate of the hydrodynamical grid, thereby softening the gravitational potential in order to avoid singularities when a VP approaches the center of a grid cell. The VPs evolve as a result of the mass and angular momentum accreted, subject to the gravitational potential of the protostar and disk, as well as to the effects of centrifugal force. Gas drag is neglected, as is appropriate for these relatively massive protoplanets (cf. Boss et al. 2012, where gas drag is included for small particles evolving in MGU disks). Numerical tests with Ṁ = 0 and the gravitational potential of only a central protostar (Boss 2005) showed that VPs on initially circular or elliptical orbits with semimajor axes of 5 AU orbit stably for at least 500 years, with the VP's angular momentum being conserved to at least 8 digits.

Initial Conditions for the Protoplanets

For the ∼ Earth-mass protoplanet models, all four models were initialized in the same manner, with the only variation being the initial mass of the protoplanet, as follows: model a: 0.01 M_⊕, model b: 0.10 M_⊕, model c: 1.0 M_⊕, and model d: 10.0 M_⊕. In each case, four initially equal mass protoplanets were inserted at the midplane locations denoted in Figure 1, at orbital radii of 6 AU, 8 AU, 10 AU, and 12 AU, respectively, with the variations in azimuthal locations evident in Figure 1. Each protoplanet was inserted at the center of a hydrodynamical cell with the same radial and azimuthal velocity as that of the disk gas in that cell. Because the hydrodynamical grid is restricted to the top hemisphere (π/2 ≥ θ ≥ 0) of the spherical coordinate grid, the protoplanets are limited to orbiting in the disk midplane. The ∼ Earth-mass models were allowed to accrete mass according to the BHL formula with f = 10^-4. Nelson & Benz (2003) explored a range of values of f, from 1 to 10^-4.
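The softening described above amounts to flooring the VP-cell separation at Δr/2 when evaluating the point-mass potential; a minimal sketch (illustrative, not the actual code):

```python
def softened_potential(G, M, R, dr):
    """Point-mass potential -G*M/R with the separation floored at dr/2,
    so the potential stays finite as a VP approaches a cell center."""
    return -G * M / max(R, 0.5 * dr)
```

With Δr = 0.16 AU this caps the depth of the potential well at -GM/(0.08 AU), which is also why close encounters in these models cannot accelerate protoplanets all the way to escape speed.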
The coefficient f should be less than unity because of various effects neglected in the analysis, such as the accretion of rotating gas of nonuniform density and temperature, shock fronts, and three-dimensional effects (e.g., Krumholz et al. 2005; Blondin & Raymer 2012). Krumholz et al. (2005) found that f could be as low as 10^-2 purely as a result of the vorticity of the gas being accreted. Table 1 summarizes the initial conditions and outcomes for the ∼ Earth-mass protoplanets, while Tables 2 and 3 show the same information for the giant protoplanet models, which start with VP masses in the range of 0.1 M_Jup to 3 M_Jup, again at initial distances of 6 AU, 8 AU, 10 AU, or 12 AU from the protostar. These models were run with either f = 10^-4 or 10^-3 in the BHL mass accretion formula. Also shown in all three tables are the final masses of the protoplanets (M_f) at the final time (t_f) for the protoplanets that were still between 4 AU and 20 AU at the end of the calculation. Protoplanets that hit either the inner or the outer boundary are noted as having been ejected "in" or "out", respectively. Note that close encounters between the protoplanets need not lead to "ejections", so long as the planet manages to stay between 4 AU and 20 AU. Similarly, not all ejections need be the immediate result of a close encounter: further interactions with the massive disk can also result in a protoplanet hitting either the inner or outer disk boundary. In any case, these ejections do not mean that the ejected protoplanet has received a large enough kinetic energy increase to reach the escape speed from the protostar and protoplanetary disk system.
In fact, because of the softening of the gravitational potential of the protoplanets, close encounters between protoplanets cannot have effectively closer approaches than Δr/2 = 0.08 AU, and so cannot lead to protoplanets that reach the escape speed, which would require closer approaches at least ten times smaller. The models also do not include the effects of extended atmospheres around the protoplanets, which could play a role in very close encounters. Finally, the tables also list the amount of disk mass that was accreted by the central protostar during the evolutions, ΔM_s.

Results

Earth-Mass Protoplanets

We first consider the possible fates of ∼ Earth-mass cores that are attempting to accrete gas and become giant planets during a MGU disk phase. This question has not been considered to date in the context of the classic core accretion scenario, where the disk mass is assumed to be low enough to preclude gravitational instability (e.g., Hubickyj et al. 2005; Lissauer et al. 2009). Figures 2 and 3 show the evolutions of the four ∼ Earth-mass protoplanet models. Monotonic inward migration of the type associated with Type I migration is not seen. In general, the protoplanets experienced a significant amount of orbital perturbation driven by the clumps and spiral arms in the MGU disk, resulting in continually evolving, quasi-periodic changes in the orbital semimajor axes and eccentricities. The time scales for these quasi-periodic wobbles in a and e are of order ∼ 30 yr, i.e., the orbital period at a distance of order ∼ 10 AU, where the MGU disk is most active at forming transient clumps and spiral arms (Figure 1). While vigorous, these perturbations tend to average out, resulting in little net overall migration of the cores. Nevertheless, in each of the four models, at least one core was perturbed enough to hit either the 4 AU or the 20 AU disk boundary; nearly half (7/16) of the cores were considered to be "ejected" in these models (Table 1).
Note that this relatively large fraction of "ejected" cores is in reality a severe overestimate, as in a more realistic disk model, cores would not be considered lost unless they collided with the central protostar or were physically ejected from the system on hyperbolic orbits. Given this caveat, the models depicted in Figure 2 show that ∼ Earth-mass cores should be able to survive a brief (∼ 10^3 yr) MGU phase of the sort associated with FU Orionis outbursts, though a few cores might undergo large excursions in semimajor axis as a result, and the surviving cores are likely to be left on significantly eccentric (e ∼ 0.3) orbits (Figure 3). These results are relatively independent of the initial core mass: all four models shown in Figures 2 and 3 appear qualitatively similar. Because the Earth-mass models all assumed f = 10^-4 for BHL mass accretion, and because all started off with relatively small masses, the amount of mass accreted during the ∼ 10^3 yr evolutions was negligible (Table 1); the largest increase was in model d, with initially 10 M_⊕ protoplanets, where the amount of mass accreted by one of the protoplanets was ∼ 0.002 M_⊕.

Giant-Planet Mass Protoplanets

We now turn to the models motivated by the Nice scenario for the orbital evolution of the Solar System's giant planets (Gomes et al. 2005; Levison et al. 2008). We seek to learn what might happen to massive gas and ice giant planets, formed by either core accretion or disk instability, if they should encounter a MGU disk phase, such as during an FU Orionis outburst. Table 2 summarizes the Nice scenario models with two different values of the BHL gas accretion factor f. In model M, with f = 10^-4, an initially 1 M_Jup protoplanet gained 0.6 M_Jup in mass during 3400 yr, yielding a mass accretion rate of Ṁ ∼ 2 × 10^-4 M_Jup yr^-1. In model Mh, with f = 10^-3, an initially 1 M_Jup protoplanet gained 1.5 M_Jup in mass during 1300 yr, yielding a mass accretion rate of ∼ 10^-3 M_Jup yr^-1.
The relatively high rate of mass accretion in model Mh may not be physically reasonable. Nelson & Benz (2003) argued that such rates should be less than ∼ 10^-4 M_Jup yr^-1. Kley (1999) found Ṁ = 4.35 × 10^-5 M_Jup yr^-1 in his standard model, while Machida et al. (2010) found Ṁ ∼ 10^-5 M_Jup yr^-1 in their numerical simulations. However, in all of these other studies the disks considered were not MGU disks, and hence the protoplanetary mass accretion rates could be expected to be significantly smaller. Nevertheless, the Ṁ estimates for models M and Mh imply that the models with f = 10^-4 are probably more realistic than those with f = 10^-3. The final masses listed in Table 2 show that the mass accretion rates are systematically considerably higher in the f = 10^-3 models than in those with f = 10^-4, as is to be expected. Table 2 also shows that only 10 of the initial total of 32 protoplanets remained within the disk during the evolutions: 22 hit either the inner or outer boundary, 10 in the f = 10^-4 models and 12 in the f = 10^-3 models. Considering the small number statistics, there does not appear to be a strong dependence on the assumed value of f. However, in the models with the most massive protoplanets (N, Nh), 7 out of 8 protoplanets were ejected, while 15 out of 24 were ejected in the lower mass models (M, Mh, O, Oh, P, Ph). Apparently protoplanets with initial masses of ∼ 3 M_Jup are more likely to undergo strong mutual close encounters, which, coupled with the MGU disk perturbations, eventually lead to their ejection. The higher overall ejection frequency for the Table 2 models compared to those in Table 1 is due in part to the longer time periods calculated for the Table 2 models, with the remaining difference being caused by the stronger effects of close encounters between the much more massive protoplanets in the Table 2 models. Figures 4 and 5 depict the semimajor axis evolutions of the eight models listed in Table 2.
As in the case of the ∼ Earth-mass cores, the protoplanets are subjected to quasi-periodic perturbations from the MGU disk's clumps and spiral arms, resulting in somewhat chaotic orbital evolutions, but again without any clear evidence for monotonically inward (or outward) migration. In general, the semimajor axes of the surviving protoplanets remained in the range of ∼ 5 AU to ∼ 15 AU, similar to the initial orbits, in spite of mutual close encounters leading to frequent ejections of the less fortunate, generally lower mass, protoplanets. Table 3 summarizes the models that are the closest to the Nice model for our Solar System, all calculated with the same value of f = 10 −4 . The initial protoplanet masses in these models are closer to those of the current masses of the giant planets in our Solar System than those of the previous models. Model Q, in particular, has masses similar to those of Jupiter, Saturn, Uranus, and Neptune, though the 0.1 M Jup protoplanets that represent the two ice giants start off with masses about twice that of Uranus and Neptune. Model R explores a situation with even more massive outer protoplanets, while models S and T investigate a system where only Jupiter and Saturn exist, starting at two different initial orbits for Saturn, 12 AU and 10 AU, respectively. Figures 6 and 7 display the evolutions of the semimajor axes and eccentricities for the four models listed in Table 3. Model Q, the closest model to our outer Solar System, manages to survive intact for a period of at least 1200 yr, though only after undergoing a major orbital reshuffling: at the final time shown in Figure 6, the initially innermost Jupiter-mass protoplanet is now the outermost body, the initially Saturn-mass protoplanet is the innermost body, and the two initially outer ice giants are orbiting between the two gas giants, with all four having semimajor axes between ∼ 9 AU and ∼ 13 AU. 
Note the high eccentricities for prolonged periods for the two ice giant-mass protoplanets in model Q (Figure 7), implying that ejections are eventually likely for these bodies. These systems would presumably become even more unstable once the disk gas is removed, with an uncertain final outcome. A similar reordering of the orbital distribution occurs in model R, though in this case the initially 0.33 M_Jup protoplanet and one of the 0.5 M_Jup protoplanets are ejected, leaving behind an outer 1.4 M_Jup gas giant and an inner 0.62 M_Jup protoplanet. Model T shows that two gas giant planets can survive for ∼ 4000 yr in a MGU disk, though they may well interchange their orbital positions. On the other hand, with a different initial orbital configuration, model S shows that the initially Saturn-mass protoplanet might be ejected, leaving the initially Jupiter-mass protoplanet as the sole survivor after 3800 yr. In both cases, the eccentricities of the protoplanets tend to be modest (Figure 7c,d), with e ∼ 0.05 to 0.2.

Discussion

Tables 2 and 3 show that in a system with protoplanets of different initial masses, in nearly every case the protoplanet that is ejected is one of the lower mass protoplanets. The only exception to this general rule was the initially 1 M_Jup protoplanet in model Ph, which was ejected along with an initially 0.5 M_Jup protoplanet. Note, however, that even in this case, because of the high value of f = 10^-3, the two other initially 0.5 M_Jup protoplanets grew to masses of 2.6 M_Jup and 1.6 M_Jup by the end of the calculation, and these were the bodies responsible for ejecting the initially 1 M_Jup protoplanet. Hence the general rule appears to be that the more massive protoplanets are left behind during close orbital encounters, as is expected to be the case based on equipartition of energy: the less massive body receives a larger velocity perturbation than the more massive body, and so is more likely to hit a disk boundary.
We now briefly compare the results to those of the previous studies of orbital migration in MGU disks. Boss (2005) considered the evolution of fully three dimensional MGU disks with a mass of 0.091 M_⊙ extending from 4 AU to 20 AU around a 1 M_⊙ protostar, i.e., the same disk situation as is considered here with multiple protoplanets of varied masses. Jupiter-mass protoplanets inserted at 8 AU were found by Boss (2005) to orbit fairly stably, or to move out to ∼ 10 AU, over ∼ 10^3 yr, even while gaining mass by accretion. This implied that protoplanets in MGU disks do not immediately open disk gaps and undergo Type II migration. These results are quite consistent with the present models, where the addition of other protoplanets does not prevent the surviving protoplanets from orbiting relatively stably. Baruteau et al. (2011) considered the evolution of two dimensional, thin MGU disks with a mass of 0.4 M_⊙ extending from 20 AU to 250 AU around a 1 M_⊙ protostar. Saturn-mass and Jupiter-mass protoplanets inserted at 100 AU were found to migrate rapidly inward to ∼ 25 AU, on a time scale comparable to that expected for Type I migration, ∼ 10^4 yr, while planets with 5 M_Jup migrated even faster, in ∼ 3 × 10^3 yr. Type II migration did not occur, as the planets were unable to open disk gaps. The MGU nature of the evolving disk resulted in periodic outward motions, rather than the monotonic inward motions of classic Type I migration. These results are in basic agreement with the present models. Baruteau et al. (2011) included the effects of the planet's gravity on the disk, but did not include planet mass accretion (i.e., they fixed the planet masses), and their models were restricted to considering the evolution of a single planet at a time, unlike the present models, where planet-planet interactions are an important factor. Most importantly, the Baruteau et al.
(2011) models were limited to orbits in the outer disk: their planets were forced to stop inward migration at ∼ 25 AU, whereas the present models have protoplanets that start inside 12 AU. The absence of inward migration in the present models appears to be linked to the high inner disk temperatures (∼ 600 K), leading to Q ≫ 1 and stifling to some extent the spiral arms just outside 4 AU (Figure 1), combined with the chaotic outcomes of protoplanet interactions with a MGU disk. Michael et al. (2011) considered the evolution of fully three dimensional MGU disks with a mass of 0.14 M_⊙ extending from 5 AU to 40 AU around a 1 M_⊙ protostar. Two Jupiter-mass protoplanets inserted at 25 AU were found to migrate rapidly inward to ∼ 17 AU in ∼ 10^3 yr, where both stalled. The inward motion was again not monotonic, but rather jerky, owing to the MGU disk interactions. Similar to Baruteau et al. (2011), Michael et al. (2011) included the effects of the planet's gravity on the disk, did not include planetary mass accretion, and calculated evolutions for only a single planet at a time. Michael et al. (2011) also studied considerably larger disks than the present models, but their inner disk boundary of 5 AU was quite similar to the 4 AU value in the present models. The Michael et al. (2011) protoplanets migrated inward but stopped at ∼ 17 AU, whereas in the present models, the survivors clustered around distances of ∼ 8 AU to ∼ 13 AU. This slight difference in outcomes can be attributed to the different underlying MGU disk structures used in the two sets of models: Michael et al. (2011) started with a disk with a minimum Q = 1.38 at 26.7 AU, whereas in the present models, the minimum Q = 1.5 occurred at the outer disk boundary (20 AU), and rose to Q > 10 in the inner disk. Given these MGU disk differences, the Michael et al. (2011) results seem to be compatible with the present results.
Finally, throughout the evolutions of all of these MGU disk models, disk mass flowed freely inward, past the orbiting protoplanets, and was accreted at the inner grid boundary at 4 AU. The total amount of mass accreted by the central region (i.e., the protostar) was typically ∼ 0.03 M_⊙ (Tables 1, 2, 3) over a time period of ∼ 3000 yr, leading to a protostellar mass accretion rate of ∼ 10^-5 M_⊙ yr^-1. This rate is comparable to the inferred mass accretion rates for T Tauri stars undergoing FU Orionis outbursts (e.g., Hartmann & Kenyon 1996), confirming the applicability of these MGU disk models to protoplanetary systems undergoing FU Orionis events. The tables show that this central mass accretion rate did not vary significantly across the models calculated, showing that the varied protoplanet masses had little effect on the overall evolution of the MGU disks.

Conclusions

Given the limited number of models run, and the resulting highly incomplete examination of the initial conditions parameter space, one cannot draw overly strong conclusions from these models, but at the least, these models illustrate the range of outcomes that could result from a MGU disk phase during planetary system formation. Nevertheless, it is clear that a MGU disk phase need not be fatal to growing cores in the core accretion scenario, or to giant planets formed by either core accretion or disk instability, at least not for protoplanets with initial orbits in the range of 6 AU to 12 AU from a solar-mass protostar. FU Orionis phases thus need not be fatal to the giant planet formation process. It is even conceivable that a Nice model-like scenario could be constructed for protoplanets that survive a MGU disk phase, though the most Nice-like model presented here (model Q) ended up with Jupiter as the outermost body, rather than the innermost. Other initial conditions might well lead to a more Nice-like outcome.
The ∼ Earth-mass protoplanets are excited to relatively high eccentricity orbits during the MGU disk phase, with e ∼ 0.2 to 0.5. For the giant-planet-mass models, the final eccentricities are more modest, typically with e ∼ 0.05 to 0.2, as is to be expected on the basis of equipartition of energy. Hence, except for limited periods of time, the orbital eccentricities for the giant planets are not as high as the highest values observed for Doppler-discovered extrasolar giant planets, where values as high as e ∼ 0.8 have been determined. However, most exoplanets have more modest eccentricities, with over half having e < 0.2. The highest eccentricities found for exoplanets presumably have their origins in planet-planet scattering events, which can also result in significant orbital inclinations, and even in retrograde orbits, a mechanism that seems to be required in order to explain the orbital inclinations deduced on the basis of the Rossiter-McLaughlin effect for short-period exoplanets (e.g., Albrecht et al. 2012). The present models cannot address this possibility, as their limitation to protoplanet orbits within the disk midplane precludes interactions leading to inclined orbits. As Boss (2005) found, protoplanets located at ∼ 10 AU in a MGU disk can orbit relatively stably for significant periods of time (∼ 10^3 yr or more), without undergoing monotonic inward Type-I-like migration, and without opening a disk gap leading to Type-II-like migration.
Instead, the quasi-periodic gravitational perturbations induced by the spiral arms and clumps of the MGU disk result in eccentric orbits (e ∼ 0.2), while close encounters with the other protoplanets, combined with the MGU disk interactions, can lead to a significant number of "ejections" of the less massive protoplanets through hitting the inner or outer disk boundaries, though these "ejections" might very well be ameliorated in models that included a disk that extended from the true surface of the protostar (∼ 0.05 AU) out to much larger distances (∼ 50 AU). Such improved models of protoplanet-MGU disk interactions are needed to determine whether observed exoplanets on distant orbits (e.g., around HR 8799; Marois et al. 2008, 2010) could have formed closer to their star and then been ejected outward, or were formed more or less in situ (e.g., Boss 2011). Virtual protoplanet models based on the 60 AU-radius disk models of Boss (2011) are in progress and will be the subject of a future report.

I thank Hal Levison for suggesting in 2007 the Nice model calculations at the Nobel Symposium #135, John Chambers for advice on orbital fitting procedures, Sandy Keiser for her invaluable assistance with the Carnegie Alpha Cluster, and the referee for numerous suggestions for improving the paper. This research was supported in part by the NASA Origins of Solar Systems Program (NNX09AF62G) and contributed in part to the NASA Astrobiology Institute (NNA09DA81A). The calculations were performed on the Carnegie Alpha Cluster, the purchase of which was partially supported by NSF Major Research Instrumentation grant MRI-9976645.

Table 1. Initial conditions and final outcomes for the ∼ Earth-mass protoplanets embedded in MGU disks with identical f BHL factors. M_i and a_i are the initial planet mass and orbital radius, M_f is the planet mass at the final time t_f, and ΔM_s/M_⊙ is the amount of disk mass accreted by the central protostar by the end of the evolution.
model   f       M_i/M_⊕   a_i (AU)   t_f (yr)   M_f/M_⊕     ΔM_s/M_⊙
a       10^-4   0.01      6          730        0.01        0.023
"       "       0.01      8          "          0.01        "
"       "       0.01      10         "          eject out   "
"       "       0.01      12         "          0.01        "
b       10^-4   0.1       6          900        eject in    0.029
"       "       0.1       8          "          eject in    "
"       "       0.1       10         "          0.1         "
"       "       0.1       12         "          0.1         "
c       10^-4   1.0       6          730        1.0         0.025
"       "       1.0       8          "          eject in    "
"       "       1.0       10         "          eject out   "
"       "       1.0       12         "          1.0         "
d       10^-4   10.0      6          760        10.0        0.025
"       "       10.0      8          "          eject out   "
"       "       10.0      10         "          eject out   "
"       "       10.0      12         "          10.0        "

Table 2. Initial conditions and final outcomes for the giant protoplanets embedded in MGU disks with varied f BHL factors.

model   f       M_i/M_Jup   a_i (AU)   t_f (yr)   M_f/M_Jup   ΔM_s/M_⊙
M       10^-4   1.0         6          3400       1.6         0.028
"       "       0.33        8          "          eject out   "
"       "       0.33        10         "          eject out   "
"       "       0.33        12         "          eject in    "
Mh      10^-3   1.0         6          1300       2.5         0.023
"       "       0.33        8          "          eject out   "
"       "       0.33        10         "          eject out   "
"       "       0.33        12         "          eject out   "
N       10^-4   1.0         6          3400       eject out   0.042
"       "       3.0         8          "          4.0         "
"       "       3.0         10         "          eject out   "
"       "       3.0         12         "          eject out   "
Nh      10^-3   1.0         6          410        eject out   0.019
"       "       3.0         8          "          eject out   "
"       "       3.0         10         "          eject in    "
"       "       3.0         12         "          eject in    "
O       10^-4   1.0         6          3050       eject out   0.035
"       "       1.0         8          "          eject in    "
"       "       1.0         10         "          1.5         "
"       "       1.0         12         "          1.1         "
Oh      10^-3   1.0         6          1080       eject out   0.023
"       "       1.0         8          "          eject in    "
"       "       1.0         10         "          3.1         "
"       "       1.0         12         "          eject out   "
P       10^-4   1.0         6          3050       1.4         0.029
"       "       0.5         8          "          eject out   "
"       "       0.5         10         "          eject out   "
"       "       0.5         12         "          0.91        "
Ph      10^-3   1.0         6          1500       eject out   0.024
"       "       0.50        8          "          eject out   "
"       "       0.50        10         "          2.6         "
"       "       0.50        12         "          1.6         "

Table 3. Initial conditions and final outcomes for the giant protoplanets embedded in MGU disks with identical f BHL factors.

model   f       M_i/M_Jup   a_i (AU)   t_f (yr)   M_f/M_Jup   ΔM_s/M_⊙
Q       10^-4   1.0         6          1200       1.1         0.024
"       "       0.33        8          "          0.37        "
"       "       0.10        10         "          0.10        "
"       "       0.10        12         "          0.10        "
R       10^-4   1.0         6          1200       1.4         0.022
"       "       0.33        8          "          0.62        "
"       "       0.50        10         "          eject in    "
"       "       0.50        12         "          eject in    "
S       10^-4   1.0         6          3800       1.3         0.031
"       "       0.33        12         "          eject in    "
T       10^-4   1.0         6          3800       1.3         0.029
"       "       0.33        10         "          0.40        "

[Figures removed to fit within the limits for submission; captions follow.]

Fig. 1.- Midplane density contours for the MGU disk at the phase when the protoplanets (red circles) are first inserted into the models. The disk starts with a mass of 0.091 M_⊙, with an outer radius of 20 AU and an inner radius of 4 AU, through which mass accretes onto the initially 1 M_⊙ central protostar. Density contours are shown in g cm^-3. Red circles denote the locations where protoplanets are inserted, at distances of 6, 8, 10, or 12 AU from the protostar, starting at 9 o'clock and rotating counterclockwise, respectively.

Fig. 2.- Time evolution of the semimajor axes of ∼ Earth-mass embedded protoplanets with initial masses of (a) 0.01 M_⊕ (model a), (b) 0.10 M_⊕ (model b), (c) 1.00 M_⊕ (model c), and (d) 10.0 M_⊕ (model d). Elapsed times since protoplanet insertion for each model are: (a) 730 yr, (b) 930 yr, (c) 730 yr, and (d) 760 yr. Data are plotted every 100 time steps. Protoplanets were inserted at radii of 6 AU (black), 8 AU (blue), 10 AU (red), or 12 AU (green), as shown in Figure 1. Protoplanets that strike the inner (4 AU) or outer (20 AU) disk boundaries are considered to be ejected and are dropped from the calculations.

Fig. 3.- Time evolution of the eccentricities of ∼ Earth-mass embedded protoplanets, plotted in the same manner as in Figure 2, for models a, b, c, and d.

Fig. 4.- Time evolution of the semimajor axes of giant planet-mass embedded protoplanets, plotted as in Figure 2. Elapsed times: (a) model N: 3400 yr, (b) model Nh: 410 yr, (c) model O: 3050 yr, and (d) model Oh: 1080 yr.

Fig. 5.- Time evolution of the semimajor axes of giant planet-mass embedded protoplanets, plotted as in Figure 2. Elapsed times: (a) model P: 3050 yr, (b) model Ph: 1500 yr, (c) model M: 3400 yr, and (d) model Mh: 1300 yr.

Fig. 6.- Time evolution of the semimajor axes of giant planet-mass embedded protoplanets, plotted as in Figure 2. Elapsed times: (a) model R: 1200 yr, (b) model Q: 1200 yr, (c) model S: 3800 yr, and (d) model T: 3800 yr.

Fig. 7.- Time evolution of the eccentricities of giant planet-mass embedded protoplanets, plotted as in Figure 6. Elapsed times: (a) model R: 1200 yr, (b) model Q: 1200 yr, (c) model S: 3800 yr, and (d) model T: 3800 yr.

References

Albrecht, S., et al. 2012, ApJ, 757, 18
Alibert, Y., Mordasini, C., Benz, W., & Winisdoerffer, C. 2005, A&A, 434, 343
Baruteau, C., & Masset, F. 2008, ApJ, 678, 483
Baruteau, C., Meru, F., & Paardekooper, S.-J. 2011, MNRAS, 416, 1971
Blondin, J. M., & Raymer, E. 2012, ApJ, 752, 30
Boss, A. P. 1993, ApJ, 417, 351
Boss, A. P. 1996, ApJ, 469, 906
Boss, A. P. 1997, Science, 276, 1836
Boss, A. P. 2002, ApJ, 576, 462
Boss, A. P. 2005, ApJ, 629, 535
Boss, A. P. 2010, ApJ, 725, L145
Boss, A. P. 2011, ApJ, 731, 74
Boss, A. P. 2012a, AREPS, 40, 23
Boss, A. P. 2012b, MNRAS, 419, 1930
Boss, A. P., & Myhill, E. A. 1992, ApJSS, 83, 311
Boss, A. P., Alexander, C. M. O'D., & Podolak, M. 2012, Earth Planet. Sci. Let., 345, 18
Brownlee, D. E., et al. 2006, Science, 314, 1711
Chambers, J. E. 2003, AJ, 126, 1119
Gomes, R., Levison, H. F., Tsiganis, K., & Morbidelli, A. 2005, Nature, 435, 466
Hartmann, L., & Kenyon, S. J. 1996, ARAA, 34, 207
Hubickyj, O., Bodenheimer, P., & Lissauer, J. J. 2005, Icarus, 179, 415
Kley, W. 1999, MNRAS, 303, 696
Orbital Migration of Protoplanets in a Marginally Gravitationally Unstable Disk

Alan P. Boss, Department of Terrestrial Magnetism, Carnegie Institution, 5241 Broad Branch Road NW, Washington, DC 20015-1305
ApJ 764, 194 (doi: 10.1088/0004-637X/764/2/194; arXiv:1301.3178)

Abstract: Core accretion and disk instability require giant protoplanets to form in the presence of disk gas. Protoplanet migration models generally assume disk masses low enough that the disk's self-gravity can be neglected. However, disk instability requires a disk massive enough to be marginally gravitationally unstable (MGU). Even for core accretion, a FU Orionis outburst may require a brief MGU disk phase. We present a new set of three-dimensional, gravitational radiation hydrodynamics models of MGU disks with multiple protoplanets, which interact gravitationally with the disk and with each other, including disk gas mass accretion. Initial protoplanet masses are 0.01 to 10 M_⊕ for core accretion models, and 0.1 to 3 M_Jup for Nice scenario models, starting on circular orbits with radii of 6, 8, 10, or 12 AU, inside a 0.091 M_⊙ disk extending from 4 to 20 AU around a 1 M_⊙ protostar. Evolutions are followed for up to ~4000 yr and involve phases of relative stability (e ~ 0.1) interspersed with chaotic phases (e ~ 0.4) of orbital interchanges. The 0.01 to 10 M_⊕ cores can orbit stably for ~1000 yr: monotonic inward or outward orbital migration of the type seen in low-mass disks does not occur. A system with giant planet masses similar to our Solar System (1.0, 0.33, 0.1, 0.1 M_Jup) was stable for over 1000 yr, and a Jupiter-Saturn-like system was stable for over 3800 yr, implying that our giant planets might well survive a MGU disk phase.
Machine learning & artificial intelligence in the quantum domain

Vedran Dunjko ([email protected]) and Hans J. Briegel ([email protected])
Institute for Theoretical Physics, University of Innsbruck, 6020 Innsbruck, Austria
Max Planck Institute of Quantum Optics, 85748 Garching, Germany
Department of Philosophy, University of Konstanz, 78457 Konstanz, Germany

Quantum information technologies, on the one side, and intelligent learning systems, on the other, are both emergent technologies that will likely have a transforming impact on our society in the future. The respective underlying fields of basic research - quantum information (QI) versus machine learning and artificial intelligence (AI) - have their own specific questions and challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question to what extent these fields can indeed learn and benefit from each other. QML explores the interaction between quantum computing and machine learning, investigating how results and techniques from one field can be used to solve the problems of the other. In recent times, we have witnessed significant breakthroughs in both directions of influence. For instance, quantum computing is finding a vital application in providing speed-ups for machine learning problems, critical in our "big data" world. Conversely, machine learning already permeates many cutting-edge technologies, and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical machine learning optimization used in quantum experiments, quantum enhancements have also been (theoretically) demonstrated for interactive learning tasks, highlighting the potential of quantum-enhanced learning agents.
Finally, works exploring the use of artificial intelligence for the very design of quantum experiments, and for performing parts of genuine research autonomously, have reported their first successes. Beyond the topics of mutual enhancement - exploring what ML/AI can do for quantum physics, and vice versa - researchers have also broached the fundamental issue of quantum generalizations of learning and AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is fully described by quantum mechanics. In this review, we describe the main ideas, recent developments, and progress in a broad spectrum of research investigating machine learning and artificial intelligence in the quantum domain.

I. INTRODUCTION

Quantum theory has influenced most branches of physical sciences. This influence ranges from minor corrections, to profound overhauls, particularly in fields dealing with sufficiently small scales. In the second half of the last century, it became apparent that genuine quantum effects can also be exploited in engineering-type tasks, where such effects enable features which are superior to those achievable using purely classical systems. The first wave of such engineering gave us, for example, the laser, transistors, and nuclear magnetic resonance devices. The second wave, which gained momentum in the '80s, constitutes a broad-scale, albeit not fully systematic, investigation of the potential of utilizing quantum effects for various types of tasks which, at their core, deal with the processing of information. This includes the research areas of cryptography, computing, sensing and metrology, all of which now share the common language of quantum information science. Often, the research into such interdisciplinary programs was exceptionally fruitful.
For instance, quantum computation, communication, cryptography and metrology are now mature, well-established and impactful research fields which have, arguably, revolutionized the way we think about information and its processing. In recent years, it has become apparent that the exchange of ideas between quantum information processing and the fields of artificial intelligence and machine learning has its own genuine questions and promises. Although such lines of research are only now receiving a broader recognition, the very first ideas were present already at the early days of QC, and we have made an effort to fairly acknowledge such visionary works. In this review we aim to capture research at the interplay between machine learning, artificial intelligence and quantum mechanics in its broad scope, with a reader with a physics background in mind. To this end, we dedicate a comparatively large amount of space to classical machine learning and artificial intelligence topics, which are often sacrificed in physics-oriented literature, while keeping the quantum information aspects concise. The structure of the paper is as follows. In the remainder of this introductory section I, we give quick overviews of the relevant basic concepts of the fields of quantum information processing, and of machine learning and artificial intelligence. We finish off the introduction with a glossary of useful terms, a list of abbreviations, and comments on notation. Subsequently, in section II we delve deeper into chosen methods, technical details, and the theoretical background of the classical theories. The selection of topics here is not necessarily balanced from a classical perspective. We place emphasis on elements which either appear in subsequent quantum proposals, which can sometimes be somewhat exotic, or on aspects which can help put the relevance of the quantum results into proper context. Section III briefly summarizes the topics covered in the quantum part of the review.
Sections IV-VII cover the four main topics we survey, and constitute the central body of the paper. We finish with an outlook in section VIII.

Remark: The overall objective of this survey is to give a broad, "bird's-eye" account of the topics which contribute to the development of various aspects of the interplay between quantum information sciences, and machine learning and artificial intelligence. Consequently, this survey does not necessarily present all the developments in a fully balanced fashion. Certain topics, which are in their very early stages of investigation, yet important for the nascent research area, were given perhaps a disproportionate level of attention compared to more developed themes. This is, for instance, particularly evident in section VII, which aims to address the topics of quantum artificial intelligence beyond mainstream data analysis applications of machine learning. This topic is relevant for a broad perspective on the emerging field; however, it has so far been broached by only a few works, including those of the authors of this review and collaborators. The more extensively explored topics of, e.g., quantum algorithms for machine learning and data mining, quantum computational learning theory, or quantum neural networks, have been addressed in more focused recent reviews (Wittek, 2014a; Schuld et al., 2014a; Biamonte et al., 2016; Arunachalam and de Wolf, 2017; Ciliberto et al., 2017).

A. Quantum mechanics, computation and information processing

Executive summary: Quantum theory leads to many counterintuitive and fascinating phenomena, including the results of the field of quantum information processing, and in particular, quantum computation. This field studies the intricacies of quantum information, its communication, processing and use. Quantum information admits a plethora of phenomena which do not occur in classical physics.
For instance, quantum information cannot be cloned - this restricts the types of processing that are possible for general quantum information. Other aspects lead to advantages, as has been shown for various communication and computation tasks: for solving algebraic problems, reduction of sample complexity in black-box settings, sampling problems and optimization. Even restricted models of quantum computing, amenable to near-term implementations, can solve interesting tasks. Machine learning and artificial intelligence tasks can, as components, rely on the solving of such problems, leading to an advantage.

Quantum mechanics, as commonly presented in quantum information, is based on a few simple postulates: 1) the pure state of a quantum system is given by a unit vector |ψ⟩ in a complex Hilbert space; 2) closed-system pure-state evolution is generated by a Hamiltonian H, specified by the linear Schrödinger equation H|ψ⟩ = i ∂/∂t |ψ⟩; 3) the structure of composite systems is given by the tensor product; and 4) projective measurements (observables) are specified by, ideally, non-degenerate Hermitian operators, and the measurement process changes the description of the observed system from the state |ψ⟩ to an eigenstate |φ⟩, with probability given by the Born rule p(φ) = |⟨φ|ψ⟩|² (Nielsen and Chuang, 2011). While the full theory still requires the handling of subsystems and classical ignorance, already the few mathematical axioms of pure-state closed-system theory give rise to many quintessentially quantum phenomena, like superpositions, no-cloning, entanglement, and others, most of which stem from just the linearity of the theory. Many of these properties re-define how researchers in quantum information perceive what information is, but also have a critical functional role in, say, quantum-enhanced cryptography, communication, sensing and other applications.
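The postulates above are easy to probe numerically. The following minimal sketch (our illustration, with ℏ = 1 and an arbitrarily chosen example Hamiltonian) simulates closed-system Schrödinger evolution of a single qubit and a Born-rule probability using NumPy:

```python
import numpy as np

# Example Hamiltonian: Pauli-X, which drives Rabi oscillations of a qubit (hbar = 1)
H = np.array([[0.0, 1.0], [1.0, 0.0]])

def evolve(psi, H, t):
    """Closed-system evolution |psi(t)> = exp(-iHt)|psi(0)>, per the Schrodinger equation."""
    eigvals, eigvecs = np.linalg.eigh(H)
    U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.conj().T
    return U @ psi

def born_probability(phi, psi):
    """Born rule: p(phi) = |<phi|psi>|^2."""
    return abs(np.vdot(phi, psi)) ** 2

psi0 = np.array([1.0, 0.0], dtype=complex)        # the state |0>
psi_t = evolve(psi0, H, np.pi / 4)                # partial Rabi rotation
p0 = born_probability(np.array([1.0, 0.0]), psi_t)  # probability to measure |0>
print(round(p0, 3))  # cos^2(pi/4) = 0.5
```

The unitary exp(-iHt) is built from the spectral decomposition of H; note that the evolution preserves the norm of the state, as linearity and unitarity demand.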
Some of the most fascinating consequences of quantum theory are, arguably, captured by the field of quantum information processing (QIP), and in particular quantum computing (QC), which is most relevant for our purposes. QC has revolutionized the theories and implementations of computation. This field originated from the observations by Manin (Manin, 1980) and Feynman (Feynman, 1982) that the calculation of certain properties of quantum systems, as they evolve in time, may be intractable, while the quantum systems themselves, in a manner of speaking, do perform that hard computation by merely evolving. Since these early ideas, QC has proliferated, and indeed the existence of quantum advantages offered by scalable universal quantum computers has been demonstrated in many settings. Perhaps most famously, quantum computers have been shown to have the capacity to efficiently solve algebraic computational problems which are believed to be intractable for classical computers. This includes the famous problems of factoring large integers and computing discrete logarithms (Shor, 1997), but also many others, such as Pell equation solving and some non-Abelian hidden subgroup problems; see e.g. (Childs and van Dam, 2010; Montanaro, 2016) for a review. Related to this, nowadays we also have access to a growing collection of quantum algorithms for various linear algebra tasks, as given in e.g. (Harrow et al., 2009; Rebentrost et al., 2016a), which may offer speed-ups.

[Box: query complexity - a (quantum) algorithm solves a problem by intermittently calling a black-box subroutine, defined only via its input-output relations. The query complexity of an algorithm is the number of calls to the oracle that the algorithm performs.]

Quantum algorithms can also be used to solve other types of problems. For instance, in statistical physics, the capacity to sample from Gibbs distributions is often the key tool to compute properties of the partition function.
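For readers unfamiliar with classical Markov chain Monte Carlo, the following toy sketch (purely illustrative: a 1D Ising chain with open boundaries and hand-picked parameters) shows how a Metropolis chain produces samples from a Gibbs distribution:

```python
import random, math

def gibbs_sample_ising(n=8, beta=1.0, steps=20000, seed=0):
    """Metropolis Markov chain targeting the Gibbs distribution
    p(s) ~ exp(-beta * E(s)) for a 1D Ising chain with E(s) = -sum_i s_i s_{i+1}."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        # energy change from flipping spin i (open boundary conditions)
        dE = 2 * s[i] * ((s[i - 1] if i > 0 else 0) + (s[i + 1] if i < n - 1 else 0))
        # accept the flip with the Metropolis probability min(1, exp(-beta*dE))
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i] = -s[i]
    return s

sample = gibbs_sample_ising()
print(sample)
```

After a burn-in period the chain's state is (approximately) a sample from the Gibbs distribution; quantum proposals aim, e.g., at speeding up the mixing of such chains.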
A broad class of quantum approaches to sampling problems focuses on quantum enhancements of such Markov chain methods (Temme et al., 2011; Yung and Aspuru-Guzik, 2012). Sampling tasks have been receiving an ever increasing amount of attention in the QIP community, as we will comment on shortly. Quantum computers are typically formalized in one of a few standard models of computation, many of which are, computationally speaking, equally powerful 4. Even if the models are computationally equivalent, they are conceptually different. Consequently, some are better suited, or more natural, for a given class of applications. Historically, the first formal model, the quantum Turing machine (Deutsch, 1985), was preferred for theoretical and computability-related considerations. The quantum circuit model (Nielsen and Chuang, 2011) is standard for algebraic problems. The measurement-based quantum computing (MBQC) model (Raussendorf and Briegel, 2001; Briegel et al., 2009) is, arguably, best suited for graph-related problems (Zhao et al., 2016), multi-party tasks and distributed computation (Kashefi and Pappa, 2016), and blind quantum computation (Broadbent et al., 2009). Topological quantum computation (Freedman et al., 2002) was an inspiration for certain knot-theoretic algorithms (Aharonov et al., 2006), and is closely related to algorithms for topological error-correction and fault tolerance. The adiabatic quantum computation model (Farhi et al., 2000) is constructed with the task of ground-state preparation in mind, and is thus well suited for optimization problems (Heim et al., 2017).

[FIG. 2: Computational models.]

Research into QIP also produced examples of interesting restricted models of computation: models which are in all likelihood not universal for efficient QC, but can still solve tasks which seem hard for classical machines.
Recently, there has been an increasing interest in such models, specifically the linear optics model, the so-called low-depth random circuits model, and the commuting quantum circuits model 5. In (Aaronson and Arkhipov, 2011) it was shown that the linear optics model can efficiently produce samples from a distribution specified by the permanents of certain matrices, and it was proven (barring certain plausible mathematical conjectures) that classical computers cannot reproduce the samples from the same distribution in polynomial time. Similar claims have been made for low-depth random circuits (Boixo et al., 2016; Bravyi et al., 2017) and commuting quantum circuits, which comprise only commuting gates (Shepherd and Bremner, 2009). Critically, these restricted models can be realized to sufficient size as to allow for a demonstration of computations which the most powerful classical computers that are currently available cannot achieve, with near-term technologies. This milestone, referred to as quantum supremacy (Preskill, 2012; Lund et al., 2017), has been getting a significant amount of attention in recent times. Another highly active field in QIP concentrates on (analogue) quantum simulations, with applications in quantum optics, condensed matter systems, and quantum many-body physics (Georgescu et al., 2014). Many, if not most, of the above-mentioned aspects of quantum computation are finding a role in quantum machine learning applications.

4 Various notions of "equally powerful" are usually expressed in terms of algorithmic reductions. In QIP, typically, the computational model B is said to be at least as powerful as the computational model A if any algorithm of complexity O(f(n)) (where f(n) is some scaling function, e.g. "polynomial" or "exponential"), defined for model A, can be efficiently (usually this means in polynomial time) translated to an algorithm for B which solves the same problem and whose computational complexity is O(poly(f(n))). Two models are then equivalent if A is as powerful as B and B is as powerful as A. Which specific reduction complexity we care about (polynomial, linear, etc.) depends on the setting: e.g. for factoring, polynomial reductions suffice, since there seems to be an exponential separation between classical and quantum computation. In contrast, for search, the reductions need to be sub-quadratic to maintain a quantum speed-up, since only a quadratic improvement is achievable.

5 Other restricted models exist, such as the one clean qubit model (DQC1), where the input comprises only one qubit in a pure state, and the others are maximally mixed. This model can be used to compute a function - the normalized trace of a unitary specified by a quantum circuit - which seems to be hard for classical devices.

Next, we briefly review basic concepts from the classical theories of artificial intelligence and machine learning.

B. Artificial intelligence and machine learning

Executive summary: The field of artificial intelligence incorporates various methods, which are predominantly focused on solving problems which are hard for computers, yet seemingly easy for humans. Perhaps the most important class of such tasks pertains to learning problems. Various algorithmic aspects of learning problems are tackled by the field of machine learning, which evolved from the study of pattern recognition in the context of AI. Modern machine learning addresses a variety of learning scenarios, dealing with learning from data, e.g. supervised (data classification) and unsupervised (data clustering) learning, or from interaction, e.g. reinforcement learning. Modern AI states, as its ultimate goal, the design of an intelligent agent which learns and thrives in unknown environments.
Artificial agents that are intelligent in a general, human sense must have the capacity to tackle all the individual problems addressed by machine learning and other more specialized branches of AI. They will consequently require a complex combination of techniques. In its broadest scope, the modern field of artificial intelligence (AI) encompasses a wide variety of sub-fields. Most of these sub-fields deal with the understanding and abstracting of aspects of various human capacities which we would describe as intelligent, and attempt to realize the same capacities in machines. The term "AI" was coined at the Dartmouth College conferences in 1956 (Russell and Norvig, 2009), which were organized to develop ideas about machines that can think, and the conferences are often cited as the birthplace of the field. The conferences aimed to "find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves". The history of the field has been turbulent, with strong opinions on how AI should be achieved. For instance, over the course of its first 30 years, the field crystalized into two main competing and opposite viewpoints (Eliasmith and Bechtel, 2006) on how AI may be realized: computationalism (holding that the mind functions by performing purely formal operations on symbols, in the manner of a Turing machine; see e.g. Newell and Simon, 1976), and connectionism (which models mental and behavioral phenomena as the emergent processes of interconnected networks of simple units, mimicking the biological brain; see e.g. Medler, 1998). Aspects of these two viewpoints still influence approaches to AI. Irrespective of the underlying philosophy, for the larger part of the history of AI, the realization of "genuine AI" was, purportedly, perpetually "a few years away" - a feature often attributed also to quantum computers by critics of the field.
In the case of AI, such runaway optimism had a calamitous effect on the field on multiple occasions, especially in the context of funding (leading to periods now dubbed "winters of AI"). By the late 90s, the reputation of the field was low, and, even in hindsight, there was no consensus on the reasons why AI had failed to produce human-level intelligence. Such factors played a vital role in the fragmentation of the field into various sub-fields which focused on specialized tasks, often appearing under different names. A particularly influential perspective on AI, often called nouvelle or embodied AI, was advocated by Brooks, who posited that intelligence emerges from (simple) embodied systems which learn through interaction with their environments (Brooks, 1990). In contrast to standard approaches of the time, nouvelle AI insists on learning, rather than having properties pre-programmed, and on the embodiment of AI entities, as opposed to abstract entities like chess-playing programs. To a physicist, this perspective that intelligence is embodied is reminiscent of the viewpoint that information is physical, which had been "the rallying cry of quantum information theory" (Steane, 1998). Such embodied approaches are particularly relevant in robotics, where the key issues involve perception (the capacity of the machine to interpret the external world using its sensors, which includes computer vision, machine hearing and touch) and motion and navigation (critical in e.g. automated cars). Related to human-computer interfaces, AI also incorporates the field of natural language processing, which includes language understanding - the capacity of the machine to derive meaning from natural language - and language generation - the ability of the machine to convey information in a natural language. Other general aspects of AI pertain to a few well-studied capacities of intelligent entities (Russell and Norvig, 2009).
For instance, automated planning is related to decision theory and, broadly speaking, addresses the task of identifying strategies (i.e. sequences of actions) which need to be performed in order to achieve a goal, while minimizing (a specified) cost. Already the simple class of so-called off-line planning tasks, where the task, cost function, and the set of possible actions are known beforehand, contains genuinely hard problems: e.g. it includes, as a special case, the NP-complete travelling salesman problem (TSP); for illustration see Fig. 3. In modern times, TSP itself would no longer be considered a genuine AI problem, but it serves to illustrate how already very specialized, simple sub-sub-tasks of AI may be hard. More general planning problems also include on-line variants, where not everything is known beforehand (e.g. TSP, but where the "map" may fail to include all the available roads, and one simply has to actually travel to find good strategies). On-line planning overlaps with reinforcement learning, discussed later in this section. Closely related to planning is the capacity of intelligent entities for problem solving. In the technical literature, problem solving is distinguished from planning by a lack of additional structure in the problem, usually assumed in planning - in other words, problem solving is more general and typically more broadly defined than planning. The lack of structure in general problem solving establishes a clear connection to (also unstructured) searching and optimization: in the setting of no additional information or structure, problem solving is the search for the solution to a precisely specified problem.
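To make the TSP example concrete, a brute-force O(n!) search and a greedy nearest-neighbour heuristic can be contrasted in a few lines (the city coordinates below are made up purely for illustration):

```python
import itertools, math

# Hypothetical city coordinates, for illustration only
cities = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0.5)]

def tour_length(order):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force():
    """Exact solution: exhaustive O(n!) search, feasible only for tiny n."""
    best = min(itertools.permutations(range(len(cities))), key=tour_length)
    return list(best), tour_length(best)

def nearest_neighbour(start=0):
    """Greedy heuristic: always travel to the closest unvisited city."""
    tour, left = [start], set(range(len(cities))) - {start}
    while left:
        tour.append(min(left, key=lambda j: math.dist(cities[tour[-1]], cities[j])))
        left.remove(tour[-1])
    return tour, tour_length(tour)

opt_tour, opt_len = brute_force()
greedy_tour, greedy_len = nearest_neighbour()
print(opt_len, greedy_len)  # the greedy tour is never shorter than the optimum
```

The heuristic runs in O(n²) time but carries no optimality guarantee, illustrating the trade-off between uninformed exhaustive search and informed (heuristic) strategies.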
While general problem solving can, theoretically, be achieved by a general search algorithm (which can still be subdivided into classes such as depth-first, breadth-first, depth-limited search, etc.), more often there is structure to the problem, in which case informed search strategies - often called heuristic search strategies - will be more efficient (Russell and Norvig, 2009). Human intelligence, to no small extent, relies on our knowledge. We can accumulate knowledge, reason over it, and use it to come to the best decisions, for instance in the context of problem solving and planning. An aspect of AI tries to formalize such logical reasoning, knowledge accumulation and knowledge representation, often relying on formal logic, most often first-order logic. A particularly important class of problems central to AI, and related to knowledge acquisition, involves the capacity of the machine to learn through experience. This feature was emphasized already in the early days of AI, and the derived field of machine learning (ML) now stands as arguably the most successful aspect (or spin-off) of AI, which we will address in more detail.

1. Learning from data: machine learning

Stemming from the traditions of pattern recognition, such as recognizing handwritten text, and statistical learning theory (which places ML ideas in a rigorous mathematical framework), ML, broadly speaking, explores the construction of algorithms that can learn from, and make predictions about, data. Traditionally, ML deals with two main learning settings: supervised and unsupervised learning, which are closely related to data analysis and data mining-type tasks (Shalev-Shwartz and Ben-David, 2014). A broader perspective (Alpaydin, 2010) on the field also includes reinforcement learning (Sutton and Barto, 1998), which is closely related to learning as it is realized by biological intelligent entities. We shall discuss reinforcement learning separately.
In broad terms, supervised learning deals with learning-by-example: given a certain number of labeled points (the so-called training set) {(x_i, y_i)}_i, where x_i denote data points, e.g. N-dimensional vectors, and y_i denote labels (e.g. binary variables, or real values), the task is to infer a "labeling rule" x_i → y_i which allows us to guess the labels of previously unseen data, that is, beyond the training set. Formally speaking, we deal with the task of inferring the conditional probability distribution P(Y = y|X = x) (more specifically, generating a labeling function which, perhaps probabilistically, assigns labels to points) based on a certain number of samples from the joint distribution P(X, Y). For example, we could be inferring whether a particular DNA sequence belongs to an individual who is likely to develop diabetes. Such an inference can be based on the datasets of patients whose DNA sequences had been recorded, along with the information on whether they actually developed diabetes. In this example, the variable Y (diabetes status) is binary, and the assignment of labels is not deterministic, as diabetes also depends on environmental factors. Another example could include two real variables, where x is the height from which an object is dropped, and y the duration of the fall. In this example, both variables are real-valued, and (in vacuum) the labeling relation will be essentially deterministic. In unsupervised learning, the algorithm is provided just with the data points without labels. Broadly speaking, the goal here is to identify the underlying distribution, or structure, and other informative features in the dataset. In other words, the task is to infer properties of the distribution P(X = x), based on a certain number of samples, relative to a user-specified guideline or rule.
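As a concrete toy instance of the supervised setting, a k-nearest-neighbour rule infers a labeling function directly from the training set (the data points below are made up for illustration):

```python
import math

# Toy training set {(x_i, y_i)}: 2D points with binary labels (illustrative data)
train = [((0.0, 0.0), 0), ((0.2, 0.1), 0), ((1.0, 1.0), 1), ((0.9, 1.1), 1)]

def predict(x, k=3):
    """k-nearest-neighbour rule: label a new point by majority vote
    among the k closest training points (a simple labeling function)."""
    neighbours = sorted(train, key=lambda pair: math.dist(pair[0], x))[:k]
    votes = sum(label for _, label in neighbours)
    return 1 if votes * 2 > k else 0

print(predict((0.1, 0.0)))  # near the label-0 cluster: predicts 0
print(predict((1.0, 0.9)))  # near the label-1 cluster: predicts 1
```

The "learning" here is trivial (the rule memorizes the training set), but it makes the abstract goal explicit: a function that extends the observed labels to previously unseen points.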
Standard examples of unsupervised learning are clustering tasks, where data-points are supposed to be grouped in a manner which minimizes within-group mean-distance, while maximizing the distance between the groups. Note that the group membership can be thought of as a label, thus this also corresponds to a labeling task, but lacks "supervision": examples of correct labelings. In basic examples of such tasks, the number of expected clusters is given by the user, but this too can be automatically optimized. Other types of unsupervised problems include feature extraction and dimensionality reduction, critical in combatting the so-called curse of dimensionality. The curse of dimensionality refers to problems which stem from the fact that the raw representations of real-life data often occupy very high dimensional spaces. For instance, a standard-resolution one-second video clip at standard refresh frequency, capturing events which are extended in time, maps to a vector in a ~10^8-dimensional space, even though the relevant information it carries (say, the licence-plate number of a speeding car filmed) may be significantly smaller. More generally, intuitively it is clear that, since geometric volume scales exponentially with the dimension of the space it is in, the number of points needed to capture (or learn) general features of an n-dimensional object will also scale exponentially. In other words, learning in high dimensional spaces is exponentially difficult. Hence, a means of dimensionality reduction, from raw representation space (e.g. moving car clips) to the relevant feature space (e.g. licence-plate numbers), is a necessity in any real-life scenario. These approaches map the data-points to a space of significantly reduced dimension, while attempting to maintain the main features - the relevant information - of the structure of the data. A typical example of a dimensionality reduction technique is principal component analysis.
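A minimal sketch of principal component analysis via the singular value decomposition (on synthetic data of our own choosing, purely for illustration):

```python
import numpy as np

def pca(X, d):
    """Project N-dimensional data onto its d leading principal components,
    i.e. the directions of largest variance (dimensionality reduction)."""
    Xc = X - X.mean(axis=0)                      # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T                          # coordinates in the top-d subspace

# Illustrative data: 3D points that in fact lie close to a 1D line
rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
X = np.hstack([t, 2 * t, -t]) + 0.01 * rng.normal(size=(100, 3))
Y = pca(X, 1)
print(Y.shape)  # (100, 1): a single coordinate retains almost all of the structure
```

The reduced representation Y can then be fed to downstream learning algorithms, which is exactly the pre-processing role described above.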
In practice, such algorithms also constitute an important step in data pre-processing for other types of learning and analysis. Furthermore, this setting also includes generative models (related to density estimation), where new samples from an unknown distribution are generated, based on a few exact samples. As humanity is amassing data at an exponential rate (insideBIGDATA, 2017), it becomes ever more relevant to extract genuinely useful information in an automated fashion. In the modern world, ubiquitous big data analysis and data mining are the central applications of supervised and unsupervised learning. Learning from interaction: reinforcement learning Reinforcement learning (RL) (Russell and Norvig, 2009;Sutton and Barto, 1998) is, traditionally, the third canonical category of ML. Partially because of the relatively recent prevalence of (un)supervised methods in the context of the pervasive data mining and big data analysis topics, many modern textbooks on ML focus on these methods, while RL strategies have mostly remained reserved for the robotics and AI communities. Lately, however, the surge of interest in adaptive and autonomous devices, robotics, and AI has increased the prominence of RL methods. One recent celebrated result which relies on the extensive use of standard ML and RL techniques in conjunction is that of AlphaGo (Silver et al., 2016), a learning system which mastered the game of Go, and achieved, arguably, superhuman performance, easily defeating the best human players. This result is notable for multiple reasons, including the fact that it illustrates the potential of learning machines over special-purpose solvers in the context of AI problems: while specialized devices which relied on programming over learning (such as Deep Blue) could surpass human performance in chess, they failed to do the same for the more complicated game of Go, which has a notably larger space of strategies.
The learning system AlphaGo achieved this many years ahead of typical predictions. The distinction between RL and other data-driven ML methods is particularly relevant from a quantum information perspective, which will be addressed in more detail in section VII.B. RL constitutes a broad learning setting, formulated within the general agent-environment paradigm (AE paradigm) of AI (Russell and Norvig, 2009). Here, we do not deal with a static database, but rather with an interactive task environment. The learning agent (or, a learning algorithm) learns through the interaction with the task environment. FIG. 5 An agent interacts with an environment by exchanging percepts and actions. In RL rewards can be issued. Basic environments are formalized by Markov Decision Processes (inset in Environment). Environments are reminiscent of oracles (see Fig. 1), in that the agent only has access to the input-output relations. Further, figures of merit for learning often count the number of interaction steps, which is analogous to the concept of query complexity. As an illustration, one can imagine a robot, acting on its environment, and perceiving it via its sensors -the percepts being, say, snapshots made by its visual system, and actions being, say, movements of the robot -as depicted in Fig. 5. The AE formalism is, however, more general and abstract. It is also unrestrictive, as it can also express supervised and unsupervised settings. In RL, it is typically assumed that the goal of the process is manifest in a reward function, which, roughly speaking, rewards the agent whenever the agent's behavior was correct (in which case we are dealing with positive reinforcement, but other variants of operant conditioning are also used). This model of learning seems to cover pretty well how most biological agents (i.e.
animals) learn: one can illustrate this through the process of training a dog to do a trick by giving it treats whenever it performs well. As mentioned earlier, RL is all about learning how to perform the "correct" sequence of actions, given the received percepts, which is an aspect of planning, in a setting which is fully on-line: the only way to learn about the environment is by interacting with it. Intermediary learning settings While supervised, unsupervised and reinforcement learning constitute the three broad categories of learning, there are many variations and intermediary settings. For instance, semi-supervised learning interpolates between the unsupervised and supervised settings: the number of labeled instances is very small compared to the total available training set. Nonetheless, even a small number of labeled examples has been shown to improve the bare unsupervised performance (Chapelle et al., 2010), or, from the opposite perspective, unlabeled data can help with classification when facing a small quantity of labeled examples. In active supervised learning, the learning algorithm can further query the human user, or supervisor, for the labels of particular points which would improve the algorithm's performance. This setting can only be realized when it is operatively possible for the user to correctly label all the points, and may yield advantages when this exact labeling process is expensive. Further, in supervised settings, one can consider so-called inductive learning algorithms, which output a classifier function based on the training data, which can be used to label all possible points. A classifier is simply a function which assigns labels to the points in the domain of the data. In contrast, in transductive learning (Chapelle et al., 2010) settings, the points that need to be labeled later are known beforehand -in other words, the classifier function is only required to be defined on a-priori known points.
Next, a supervised algorithm can perform lazy learning, meaning that the whole labeled dataset is kept in memory in order to label unknown points (which can then be added), or eager learning, in which case the (total) classifier function is output (and the training set is no longer explicitly required) (Alpaydin, 2010). Typical examples of eager learning are linear classifiers, such as basic support vector machines, described in the next section, whereas lazy learning is exemplified by e.g. nearest-neighbour methods. Our last example, online learning (Alpaydin, 2010), can be understood as either an extension of eager supervised learning, or a special case of RL. Online learning generalizes standard supervised learning, in the sense that the training data is provided sequentially to the learner, and used to, incrementally, update the classifying function. In some variants, the algorithm is asked to classify each point, and is given the correct response afterward, and the performance is based on the guesses. The match/mismatch of the guess and the actual label can also be understood as a reward, in which case online learning becomes a restricted case of RL. Putting it all together: the agent-environment paradigm The aforementioned specialized learning scenarios can be phrased in a unifying language, which also enables us to discuss how specialized tasks fit in the objective of realizing true AI. In the modern take on AI (Russell and Norvig, 2009), the central concept of the theory is that of an agent. An agent is an entity which is defined relative to its environment, and which has the capacity to act, that is, do something. In computer science terminology the requirements for something to be an agent (or for something to act) are minimal, and essentially everything can be considered an agent -for instance, all non-trivial computer programs are also agents.
While we, unsurprisingly, do not specify more precisely what intelligent behaviour entails, already this simple perspective on AI has non-trivial consequences. The first is that intelligence can be ascertained from the interaction history between the agent and its environment alone. Such a viewpoint on AI is also closely related to behavior-based AI and the ideas behind the Turing test (Turing, 1950); it is in line with an embodied viewpoint on AI (see embodied AI in section I.B) and it has influenced certain approaches towards quantum AI, touched upon in section VII.C. The second is that the development of better ML and other types of relevant algorithms does constitute genuine progress towards AI, conditioned only on the fact that such algorithms can be coherently combined into a whole agent. It is, however, important to note that actually achieving this integration may be far from trivial. In contrast to such strictly behavioral and operational points of view, an alternative approach towards whole agents (or complete intelligent agents) focuses on agent architectures and cognitive architectures (Russell and Norvig, 2009). In this approach to AI, the emphasis is placed not only on intelligent behaviour, but equally on forming a theory about the structure of the (human) mind. One of the main goals of a cognitive architecture is to design a comprehensive computational model which encapsulates various results stemming from research in cognitive psychology. The aspects predominantly focused on understanding human cognition are, however, not central to our take on AI. We discuss this further in section VII.C. b. Notation Throughout this review paper, we have strived to use the notation specified in the reviewed works. To avoid notational chaos, however, we keep the notation consistent within subsections -this means that, within one subsection, if inconsistencies arise we adhere to the notation used in the majority of works. II.
CLASSICAL BACKGROUND The main purpose of this section is to provide the background regarding the classical ML and AI techniques and concepts which are either addressed in the quantum proposals we discuss in the following sections, or important for the proper positioning of those proposals in the broader learning context. The concepts and models of this section include common models found in the classical literature, but also certain more exotic models which have been addressed in the modern quantum ML literature. While this section contains most of the classical background needed to understand the basic ideas of the quantum ML literature, to tame its length, certain very specialized classical ML ideas are presented on-the-fly during the upcoming reviews. We first provide the basic concepts related to common ML models, emphasizing neural networks in II.A.1 and support vector machines in II.A.2. Following this, in II.A.3, we also briefly describe a larger collection of algorithmic methods and ideas arising in the context of ML, including regression models, k-means/medians, decision trees, but also more general optimization and linear algebra methods which are now commonplace in ML. Beyond the more pragmatic aspects of model design for learning problems, in subsection II.B we provide the main ideas of the mathematical foundations of learning, which discuss learnability -i.e. the conditions under which learning is possible at all -namely computational learning theory and the theory of Vapnik and Chervonenkis, which rigorously investigate the bounds on learning efficiency for various supervised settings. Subsection II.C covers the basic concepts and methods of RL. A. Methods of machine learning Executive summary: Two particularly famous models in machine learning are artificial neural networks -inspired by biological brains -and support vector machines -arguably the best understood supervised learning model.
Neural networks come in many flavours, all of which model the parallel information processing of a network of simple computational units, neurons. Feed-forward networks (without loops) are typically used for supervised learning. Most of the popular deep learning approaches fit in this paradigm. Recurrent networks have loops -this allows e.g. feeding information from the outputs of a (sub-)network back to its own input. Examples include Hopfield networks, which can be used as content-addressable memories, and Boltzmann machines, typically used for unsupervised learning. These networks are related to Ising-type models, at zero or finite temperature, respectively -this sets the grounds for some of the proposals for quantization. Support vector machines classify data in a Euclidean space by identifying best separating hyperplanes, which allows for a comparatively simple theory. The linearity of this model is a feature making it amenable to quantum processing. The power of hyperplane classification can be improved by using kernels which, intuitively, map the data to higher dimensional spaces, in a non-linear way. ML naturally goes beyond these two models, and includes regression (data fitting) methods and many other specialized algorithms. Since the early days of the fields of AI and ML, there have been many proposals on how to achieve the flavours of learning we described above. In what follows we will describe two popular models for ML, specifically artificial neural networks and support vector machines. We highlight that many other models exist, and indeed, in many fields other learning methods (e.g. regression methods) are more commonly used. A selection of such other models is briefly mentioned thereafter, along with examples of techniques which overlap with ML topics in a broader sense, such as matrix decomposition techniques, and which can be used for e.g. unsupervised learning.
Our choice of emphasis is, in part, again motivated by later quantum approaches, and by features of the models which are particularly well-suited for cross-overs with quantum computing. Artificial neural networks and deep learning Artificial neural networks (artificial NNs, or just NNs) are a biologically inspired approach to tackling learning problems. Originating in 1943 (McCulloch and Pitts, 1943), the basic component of NNs is the artificial neuron (AN), which is, abstractly speaking, a real-valued function $AN : \mathbb{R}^k \rightarrow \mathbb{R}$ parametrized by a vector of real, non-negative weights $(w_i)_i = w \in \mathbb{R}^k$ and an activation function $\varphi : \mathbb{R} \rightarrow \mathbb{R}$, given by

$AN(x) = \varphi\left(\sum_i x_i w_i\right)$, with $x = (x_i)_i \in \mathbb{R}^k$.   (1)

For the particular choice where the activation function is the threshold function, $\varphi_\theta(x) = 1$ if $x > \theta \in \mathbb{R}^+$ and $\varphi_\theta(x) = 0$ otherwise, the AN is called a perceptron (Rosenblatt, 1957), and has been studied extensively. Already such simple perceptrons perform classification into the half-spaces specified by the hyperplane with normal vector w and offset θ (cf. support vector machines later in this section). Note that, in ML terminology, a distinction should be made between artificial neurons (ANs) and perceptrons: perceptrons are special cases of ANs, with a fixed activation function (the step function) and a specified update or training rule. Modern ANs use various activation functions (often differentiable sigmoid functions), and can use different learning rules. For our purposes, this distinction will not matter. The training of such a classifier/AN for supervised learning purposes consists in optimizing the parameters w and θ so as to correctly label the training set -there are various figures of merit particular approaches care about, and various algorithms that perform such an optimization, which are not relevant at this point. By combining ANs in a network we obtain NNs (if the ANs are perceptrons, we usually talk about multi-layered perceptrons).
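As an illustration of such training, the classic mistake-driven perceptron rule can be sketched on a hand-made linearly separable toy set (data chosen for illustration; the update w ← w + y·x is one of the "various algorithms" alluded to above):

```python
# Perceptron training sketch: guess with sign(w.x + b), and on a mistake
# update the weights with the classic rule w <- w + y*x, b <- b + y.
def sign(v):
    return 1 if v >= 0 else -1

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Toy set, separable by the hyperplane x0 + x1 = 3.
data = [((1.0, 1.0), -1), ((4.0, 1.0), 1), ((1.0, 4.0), 1),
        ((2.0, 0.5), -1), ((3.0, 3.0), 1), ((0.5, 0.5), -1)]

w, b = [0.0, 0.0], 0.0
for _ in range(800):                  # enough passes to guarantee convergence
    for x, y in data:
        if sign(dot(w, x) + b) != y:  # mistake-driven update
            w = [wi + y * xi for wi, xi in zip(w, x)]
            b += y

errors = sum(sign(dot(w, x) + b) != y for x, y in data)
```

By the perceptron convergence theorem the number of updates on separable data is finite, so after sufficiently many passes the learned hyperplane labels the whole training set correctly.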
While single perceptrons, or single-layered perceptrons, can realize only linear classification, already a three-layered network suffices to approximate any continuous real-valued function (with precision depending on the number of neurons in the inner, so-called hidden, layer). Cybenko (Cybenko, 1989) was the first to prove this for sigmoid activation functions, and Hornik soon thereafter generalized this to show that the same holds for all non-constant, monotonically increasing and bounded activation functions (Hornik, 1991). This shows that if sufficiently many neurons are available, a three-layered ANN can be trained to learn any dataset, in principle. Although this result seems very positive, it comes at the price of a large model complexity, which we discuss in section II.B.2. In recent times, it has become apparent that using multiple, sequential, hidden feed-forward layers (instead of one large one), i.e. deep neural networks (deep NNs), may have additional benefits. First, they may reduce the number of parameters (Poggio et al., 2017). Second, the sequential nature of the processing of information from layer to layer can be understood as a feature abstraction mechanism (each layer processes the input a bit, highlighting relevant features which are processed further). This increases the interpretability of the model (intuitively, the capacity for high level explanations of the model's performance) (Lipton, 2016), which is perhaps best illustrated in so-called convolutional (deep) NNs, whose structure is inspired by the visual cortex. One of the main practical disadvantages of such deep networks is the computational cost and the computational instabilities in training (cf. the vanishing gradient problem (Hochreiter et al., 2001)), as well as the size of the dataset, which has to be large (Larochelle et al., 2009). With modern technology and datasets, both obstacles are becoming less prohibitive, which has led to a minor revolution in the field of ML.
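The qualitative gain from a hidden layer can be seen on the smallest classic example: XOR is not linearly separable, so no single perceptron computes it, while a two-neuron hidden layer does. The weights below are hand-picked for illustration, not trained:

```python
# A single perceptron realizes only linear classification, so it cannot
# compute XOR; a small feed-forward network with one hidden layer can.
def step(v, theta=0.5):
    """Threshold (perceptron) activation."""
    return 1 if v > theta else 0

def xor_net(x1, x2):
    h_or  = step(x1 + x2)              # fires if at least one input is 1
    h_and = step(x1 + x2, theta=1.5)   # fires only if both inputs are 1
    return step(h_or - 2 * h_and)      # "OR but not AND" = XOR

results = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

The hidden units carve out two half-planes (OR and AND), and the output unit combines them into a region no single hyperplane could describe.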
Not all ANNs are feed-forward: recurrent neural networks (recurrent NNs) allow signals to be fed back. Particular examples of such networks are so-called Hopfield networks (HNs) and Boltzmann machines (BMs), which are often used for different purposes than feed-forward networks. In HNs, we deal with one layer, where the outputs of all the neurons serve as inputs to the same layer. The network is initialized by assigning binary values (traditionally, −1 and 1 are used, for reasons of convenience) to the neurons (more precisely, some neurons are set to fire, and some not), which are then processed by the network, leading to a new configuration. This update can be synchronous (the output values are "frozen" and all the second-round values are computed simultaneously) or asynchronous (the update is done one neuron at a time, in a random order). The connections in the network are represented by a matrix of weights $(w_{ij})_{ij}$, specifying the connection strength between the i-th and the j-th neuron. The neurons are perceptrons with a threshold activation function, given by the local threshold vector $(\theta_i)_i$. Such a dynamical system, under a few mild assumptions (Hopfield, 1982), converges to a configuration (i.e. bit-string) which (locally) minimizes the energy functional

$E(s) = -\frac{1}{2}\sum_{ij} w_{ij} s_i s_j + \sum_i \theta_i s_i$,   (2)

with $s = (s_i)_i$, $s_i \in \{-1, 1\}$; that is, the Ising model. In general, this model has many local minima, which depend on the weights $w_{ij}$ and the thresholds, the latter often set to zero. Hopfield provided a simple algorithm (called Hebbian learning, after D. Hebb, for historic reasons (Hopfield, 1982)) which enables one to "program" the minima -in other words, given a set of bit-strings S (more precisely, strings of signs +1/−1), one can find the matrix $w_{ij}$ such that exactly those strings S are local minima of the resulting functional E. Such programmed minima are then called stored patterns.
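The Hopfield dynamics and the Hebbian storage rule just described can be sketched as follows (network size and patterns are illustrative; the asynchronous update descends the energy of Eq. (2)):

```python
# Hopfield network sketch: Hebbian storage of two orthogonal +/-1 patterns,
# then asynchronous threshold updates that clean up a corrupted input.
n = 16
p1 = [1] * 8 + [-1] * 8
p2 = [1, -1] * 8                       # orthogonal to p1

# Hebbian rule: w_ij proportional to sum over patterns of s_i * s_j
# (local and incremental, as described in the text); no self-connections.
w = [[0.0 if i == j else (p1[i] * p1[j] + p2[i] * p2[j]) / n
      for j in range(n)] for i in range(n)]

def recall(state, sweeps=5):
    s = list(state)
    for _ in range(sweeps):
        for i in range(n):             # asynchronous: one neuron at a time
            field = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if field >= 0 else -1
    return s

noisy = list(p1)
for i in (0, 5, 11):                   # flip three bits of the first pattern
    noisy[i] *= -1
restored = recall(noisy)
```

Feeding the corrupted string back through the dynamics converges to the nearest stored pattern, which is precisely the content-addressable-memory behaviour discussed next.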
Furthermore, Hopfield's algorithm achieves this in a manner which is local (the weights $w_{ij}$ depend only on the i-th and j-th bits of the targeted strings, allowing parallelizability), incremental (one can modify the matrix $w_{ij}$ to add a new string without having to keep the old strings in memory), and immediate. Immediateness means that the computation of the weight matrix is not a limiting, but a finite process. Violating e.g. incrementality would lead to a lazy algorithm (see section I.B.3), which can be sub-optimal in terms of memory requirements, but often also in computational complexity (a lazy algorithm may have to process all the patterns/data-points, the number of which may be large and/or growing). It was shown that the minima of such a trained network are also attractive fixed points, with a finite basin of attraction. This means that if a trained network is fed a new string and let run, it will (eventually) converge to the stored pattern which is closest to it (the distance measure used depends on the learning rule, but typically it is the Hamming distance, i.e. the number of entries where the strings disagree). Such a system then forms an associative memory, also called a content-addressable memory (CAM). CAMs can be used for supervised learning (the "labels" are the stored patterns), and conversely, supervised learning machinery can be used for CAM (for this, one simply needs to add a look-up table connecting labels to fixed patterns). An important feature of HNs is their capacity: how many distinct patterns a network can reliably store. Reliable storage entails that previously stored patterns are recovered without change (i.e. they are local minima of the energy of Eq. (2)), but also that each has a basin of attraction: a ball around the stored pattern, with respect to a distance measure (most commonly the Hamming distance), from which the dynamical process of the network converges to that stored pattern. A related issue is the occurrence of spurious patterns: local minima with a non-trivial basin of attraction which were not stored. For the Hebbian update rule, the number of reliably storable patterns scales as roughly n/(2 log n), where n is the number of neurons, which Storkey (Storkey, 1997) improved to roughly n/√(2 log n) with a modified learning rule. In the meantime, more efficient learning algorithms have been invented (Hillar and Tran, 2014). Aside from applications as CAMs, due to the representation in terms of the energy functional in Eq. (2), and the fact that running an HN minimizes it, HNs have also been considered for optimization tasks early on (Hopfield and Tank, 1985). The operative isomorphism between Hopfield networks and the Ising model, technically, holds only in the case of a zero-temperature system. Boltzmann machines generalize this. Here, the value of the i-th neuron is set to −1 or 1 (called "off" and "on" in the literature, respectively) with probability

$p(s_i = -1) = \left(1 + \exp(-\beta \Delta E_i)\right)^{-1}$, with $\Delta E_i = \sum_j w_{ij} s_j + \theta_i$,   (3)

where $\Delta E_i$ is the energy difference between the configurations with the i-th neuron on or off, assuming the connections w are symmetric, and β is the inverse temperature of the system. In the limit of infinite running time, the network's configuration is given by the (input-state invariant) Boltzmann distribution over the configurations, which depends on the weights w, the local thresholds (weights) θ, and the temperature. BMs are typically used in a generative fashion, to model, and sample from, (conditional) probability distributions. In the simplest variant, the training of the network attempts to ensure that the limiting distribution of the network matches the observed frequencies in the dataset. This is achieved by tuning the parameters w and θ. The structure of the network dictates how complicated a distribution can be represented. To capture more complicated distributions over, say, k-dimensional data, the BMs have N > k neurons.
k of them will be denoted visible units, and the remainder are called hidden units; the latter capture latent, not directly observable, variables of the system which generated the dataset, and which we are in fact modelling. Training such networks consists in a gradient ascent of the log-likelihood of observing the training data, in the parameter space. While this seems conceptually simple, it is computationally intractable, in part because it requires accurate estimates of probabilities of equilibrium distributions, which are hard to obtain. In practice, this is somewhat mitigated by using restricted BMs, where the hidden and visible units form the two parts of a bipartite graph (so only connections between hidden and visible units exist). (Restricted) BMs have a large spectrum of uses, including providing generative models (producing new samples from the estimated distribution), serving as classifiers (via conditioned generation), as feature extractors (a form of unsupervised clustering), and as building blocks of deep architectures (Larochelle et al., 2009). However, their utility is mostly limited by the cost of training -for instance, the cost of obtaining equilibrium Gibbs distributions, or the errors stemming from heuristic training methods such as contrastive divergence (Larochelle et al., 2009;Bengio and Delalleau, 2009;Wiebe et al., 2014a). Support Vector Machines Support Vector Machines (SVMs) form a family of perhaps the best understood approaches to solving classification problems. The basic idea behind SVMs is that a natural way to classify points based on a dataset $\{x_i, y_i\}_i$, for binary labels $y_i \in \{-1, 1\}$, is to generate a hyperplane separating the negative instances from the positive ones. Such observations are not new, and indeed perceptrons, briefly discussed in the previous section, perform the same function. Such a hyperplane can then be used to classify all points.
Naturally, not all sets of points allow this (those that do are called linearly separable), but SVMs are further generalized to deal with sets which are not linearly separable: by using so-called kernels (which, effectively, realize non-linear mappings of the original dataset to higher dimensions, where the data may become separable, depending on a few technical conditions), and by allowing a certain degree of misclassification, which leads to so-called "soft-margin" SVMs. Even in the case where the dataset is linearly separable, there will still be many hyperplanes doing the job. This leads to various variants of SVMs, but the basic variant identifies a hyperplane which: a) correctly splits the training points, and b) maximizes the so-called margin: the distance of the hyperplane to the nearest point (see Fig. 7). The distance of choice is most often the geometric Euclidean distance, which leads to so-called maximum margin classifiers. In high-dimensional spaces, in general, the maximization of the margin ends in a situation where there are multiple +1 and −1 instances of training data points which are equally far from the hyperplane. These points are called support vectors. Finding a maximum margin classifier corresponds to finding a normal vector w and offset b of the separating hyperplane, i.e. to the optimization problem

$w^\ast = \mathrm{argmin}_{w,b}\ \frac{1}{2}\lVert w\rVert^2$   (4)
such that $y_i (w \cdot x_i + b) \geq 1$.   (5)

The formulation above is derived from the basic problem by noting that we may arbitrarily and simultaneously scale the pair (w, b) without changing the hyperplane. Therefore, we may always choose a scaling such that the realized margin is 1, in which case the margin corresponds to $\lVert w\rVert^{-1}$; this simply maps a maximization problem to a minimization problem as above. The square ensures the problem is stated as a standard quadratic programming problem. This problem is often expressed in its Lagrange dual form, which reduces to
$(\alpha^\ast_1, \ldots, \alpha^\ast_N) = \mathrm{argmax}_{\alpha_1 \ldots \alpha_N} \left[ \sum_i \alpha_i - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j\, x_i \cdot x_j \right]$   (6)
such that $\alpha_i \geq 0$ and $\sum_i \alpha_i y_i = 0$,   (7)

where the solution of the original problem is given by

$w^\ast = \sum_i y_i \alpha_i x_i$.   (8)

In other words, we have expressed $w^\ast$ in the basis of the data vectors, and the data vectors $x_i$ for which the corresponding coefficient $\alpha_i$ is non-zero are precisely the support vectors. The offset $b^\ast$ is easily computed having access to one support vector of, say, the +1 class, denoted $x_+$, by solving $w^\ast \cdot x_+ + b^\ast = 1$. The class of a new point z can also be computed directly using the support vectors, via the expression

$z \mapsto \mathrm{sign}\left(\sum_i y_i \alpha_i\, x_i \cdot z + b^\ast\right)$.   (9)

The dual representation of the optimization problem is convenient when dealing with kernels. As mentioned, a way of dealing with data which is not linearly separable is to first map all the points into a higher-dimensional space via a non-linear function $\varphi : \mathbb{R}^m \rightarrow \mathbb{R}^n$, where m < n and m is the dimensionality of the data points. As we can see, in the dual formulation the data points only appear in terms of inner products $x_i \cdot x_j$. This leads to the notion of the kernel function k which, intuitively, measures the similarity of the points in the larger space, typically defined by $k(x_i, x_j) = \varphi(x_i)^T \varphi(x_j)$. In other words, to train the SVM according to a non-trivial kernel k, induced by the non-linear mapping φ, the objective in Eq. (6) is replaced with $\mathrm{argmax}_{\alpha_1 \ldots \alpha_N} \left[ \sum_i \alpha_i - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j\, k(x_i, x_j) \right]$. The offset is computed analogously, using just one application of φ. The evaluation of a new point is given in the same way, with $z \mapsto \mathrm{sign}\left(\sum_i y_i \alpha_i\, k(x_i, z) + b^\ast\right)$. In other words, the data points need never be explicitly mapped via φ, as long as the map-inducing inner product k(·, ·) can be computed effectively.
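The support-vector expansion of Eqs. (8) and (9) can be checked on a minimal toy example where the dual solution is known in closed form (two points, one per class; all numbers are illustrative): for $x_+ = (1, 0)$ with label +1 and $x_- = (-1, 0)$ with label −1, the maximum-margin hyperplane is $w^\ast = (1, 0)$, $b^\ast = 0$, both points are support vectors, and $\alpha_1 = \alpha_2 = 1/2$ satisfies the constraint $\sum_i \alpha_i y_i = 0$.

```python
# Verifying the dual SVM relations on a two-point toy set with a known
# closed-form solution (illustrative data, alphas taken as given).
xs = [(1.0, 0.0), (-1.0, 0.0)]
ys = [1, -1]
alphas = [0.5, 0.5]                    # known dual solution for this set

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Eq. (8): w* expressed in the basis of the data vectors.
w = [sum(y * a * x[k] for x, y, a in zip(xs, ys, alphas)) for k in range(2)]
# Offset from a +1 support vector: w*.x+ + b* = 1.
b = 1.0 - dot(w, xs[0])

# Eq. (9): classify a new point using the support-vector expansion directly,
# without ever forming w explicitly (this is what kernelization exploits).
def classify(z):
    s = sum(y * a * dot(x, z) for x, y, a in zip(xs, ys, alphas)) + b
    return 1 if s >= 0 else -1
```

Replacing `dot(x, z)` by any kernel `k(x, z)` in `classify` gives exactly the kernelized decision rule stated above.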
The choice of the kernel is critical for the performance of the classifier, and finding good kernels is non-trivial, often a matter of trial and error. While increasing the dimension of the extended space (the co-domain of φ) may make the data points more nearly linearly separable (i.e. yield fewer mismatches for the optimal classifier), in practice they will not be fully separable (and, furthermore, increasing the kernel dimension comes with a cost, which we elaborate on later). To resolve this, SVMs allow for misclassification, with various options for measuring the "amount" of misclassification, inducing a penalty function. A typical approach is to introduce so-called "slack variables" $\xi_i \geq 0$ into the original optimization task:

$w^\ast = \mathrm{argmin}_{w,b}\ \frac{1}{2}\lVert w\rVert^2 + C \sum_i \xi_i$   (10)
such that $y_i (w \cdot x_i + b) \geq 1 - \xi_i$.   (11)

If the value $\xi_i$ of the optimal solution is between 0 and 1, the point i is correctly classified but lies within the margin, while $\xi_i > 1$ denotes a misclassification. The (hyper)parameter C controls the relative importance placed on maximizing the margin versus the importance placed on avoiding misclassification. Interestingly, the dual formulation of the above problem is near-identical to the hard-margin setting discussed thus far, with the small difference that the parameters $\alpha_i$ are now additionally constrained by $\alpha_i \leq C$ in Eq. (7). SVMs, as described above, have been extensively studied from the perspective of computational learning theory, and have been connected to other learning models. In particular, their generalization performance can be analyzed; this, roughly speaking, characterizes how well a trained model will perform beyond the training set, and is the most important feature of a classifying algorithm. We will briefly discuss generalization performance in section II.B.2. We end this short review of SVMs by considering a non-standard variant, which is interesting for our purposes as it has been beneficially quantized.
SVMs as described are trained by finding the maximal margin hyperplane. Another model, called least-squares SVM (LS-SVM), takes a regression (i.e. data-fitting) approach to the problem, and finds a hyperplane which, essentially, minimizes the least-squares distance between the vector of labels and the vector of distances from the hyperplane, whose i-th entry is given by $(w \cdot x_i + b)$. This is effected by a small modification of the soft-margin formulation:

$w^\ast_{LS} = \mathrm{argmin}_{w,b}\ \frac{1}{2}\lVert w\rVert^2 + C \sum_i \xi_i^2$   (12)
such that $y_i (w \cdot x_i + b) = 1 - \xi_i$,   (13)

where the only two differences are that the constraints are now equalities, and the slack variables are squared in the optimization expression. This seemingly innocuous change causes differences in performance, but also in the training. The dual formulation of the latter optimization problem reduces to a linear system of equations:

$\begin{pmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{pmatrix} \begin{pmatrix} b \\ \alpha \end{pmatrix} = \begin{pmatrix} 0 \\ Y \end{pmatrix}$,   (14)

where $\mathbf{1}$ is an "all ones" vector, Y is the vector of labels $y_i$, b is the offset, and γ is a parameter depending on C. The vector α collects the Lagrange multipliers yielding the solution; it again stems from the dual problem, which we omit due to space constraints and which can be found in (Suykens and Vandewalle, 1999). Finally, Ω is the matrix collecting the (mapped) "inner products" of the training vectors, so $\Omega_{i,j} = k(x_i, x_j)$, where k is a kernel function -in the simplest case, just the inner product. The training of LS-SVMs is thus simpler (and particularly convenient from a quantum algorithms perspective), but the theoretical understanding of the model, and its relationship to the well-understood SVMs, is still a matter of study, with few known results (see e.g. (Ye and Xiong, 2007)). Other models While NNs and SVMs constitute two popular approaches for ML tasks (in particular, supervised learning), many other models exist, suitable for a variety of ML problems.
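Returning briefly to the LS-SVM system of Eq. (14): unlike the quadratic program of the standard SVM, it can be solved directly with one linear solve. The sketch below uses toy 1-D data and an illustrative γ; following the convention above, $\Omega_{ij} = k(x_i, x_j)$ with a linear kernel, and a new point z is labeled by $\mathrm{sign}(\sum_i \alpha_i k(x_i, z) + b)$:

```python
import numpy as np

# LS-SVM training as a single linear solve of Eq. (14), linear kernel,
# illustrative 1-D toy data.
x = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([-1.0, -1.0, 1.0, 1.0])
gamma = 10.0

K = np.outer(x, x)                     # linear kernel matrix, Omega_ij = x_i x_j
n = len(x)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0                         # [ 0        1^T          ]
A[1:, 0] = 1.0                         # [ 1   Omega + I/gamma   ]
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate(([0.0], y))

sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

def predict(z):
    # sign(sum_i alpha_i * k(x_i, z) + b) with the linear kernel k = x_i * z
    return np.sign(alpha @ (x * z) + b)
```

For this symmetric dataset the solution can be checked by hand: b = 0 and the effective slope $\sum_i \alpha_i x_i = 60/101$, so all training points are labeled correctly.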
Here we very briefly list and describe some such models which have also appeared in the context of quantum ML. While classification typically assigns discrete labels to points, in the case when the labeling function has a continuous range (say the segment [0, 1]) we are dealing with function approximation tasks, often dealt with by using regression techniques. Typical examples here include linear regression, which approximates the relationship of points and labels with a linear function, most often minimizing the least-squares error. More broadly, such techniques are closely related to data-fitting, that is, fitting the parameters of a parametrized function so as to best fit observed (training) data. The k-nearest neighbour algorithm is an intuitive classification algorithm which, given a new point, considers the k nearest training points (with respect to a metric of choice), and assigns the label by majority vote (if used for classification), or by averaging (in the case of regression, i.e. continuous label values). The mutually related k-means and k-medians algorithms are typically used for clustering: the k specifies the number of clusters, and the algorithm defines them in a manner which minimizes the within-cluster distance to the mean (or median) point. Another method for classification and regression optimizes decision trees, where each dimension, or entry (or, more generally, a feature 24 ) of the new data point influences a move on a decision tree. The depth of the tree is the length of the vector (or number of features), and the degree of each node depends on the possible number of distinct features/levels per entry 25 . The vertices of the tree specify an arbitrary feature of interest, which can influence the classification result, but most often they consider the overlaps with geometrical regions of the data-point space.
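As an illustration, the k-nearest neighbour classifier described above requires no training beyond storing the dataset. A minimal sketch, in which the Euclidean metric and the toy points are arbitrary choices:

```python
import numpy as np
from collections import Counter

def knn_classify(x, data, labels, k=3):
    """Label x by majority vote among the k training points nearest to x."""
    dists = [np.linalg.norm(np.asarray(x) - np.asarray(p)) for p in data]
    nearest = np.argsort(dists)[:k]  # indices of the k closest training points
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2D toy dataset with two clusters
data = [(0.0, 0.0), (0.1, 0.2), (0.9, 1.0), (1.0, 0.8)]
labels = ["blue", "blue", "red", "red"]
label = knn_classify((0.2, 0.1), data, labels, k=3)  # majority of 3 nearest
```

For regression (continuous labels), the majority vote would simply be replaced by an average over the k nearest labels.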
Decision trees are in principle maximally expressive (they can represent any labeling function), but very difficult to train without constraints. More generally, classification tasks can be treated as the problem of finding a hypothesis h : Data → Labels (in ML, the term hypothesis is essentially synonymous with the term classifier, also called a learner) from some family H which minimizes error (or loss) under some loss function. For instance, the hypotheses realized by SVMs are given by the hyperplanes (in the kernel space), and in neural nets they are parametrized by the parameters of the nets: geometry, thresholds, activation functions, etc. In addition to loss terms, the minimization of which is called empirical risk minimization, ML applications benefit from adding a further component to the objective function: the regularization term, the purpose of which is to penalize complex functions, which could otherwise lead to poor generalization performance, see section II.B.2. The choices of loss functions, regularization terms, and classes of hypotheses lead to different particular models, and training corresponds to optimization problems given by the choice of the loss function and the hypothesis (function) family. Furthermore, it has been shown that essentially any learning algorithm which requires only convex optimization for training leads to poor performance under noise; thus non-convex optimization is necessary for optimal learning (see e.g. (Long and Servedio, 2010; Manwani and Sastry, 2011)). An important class of meta-algorithms for classification problems are boosting algorithms. The basic idea behind boosting algorithms is the highly non-trivial observation, first proven via the seminal AdaBoost algorithm (Freund and Schapire, 1997), that multiple weak classifiers, which perform better than random on distinct parts of the input space, can be combined into an overall better classifier.
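The composite-classifier structure underlying boosting can be sketched directly. Below, three hypothetical weak classifiers on a 1D input space, each correct on only part of a toy dataset, are combined by a weighted majority vote into a classifier that is correct everywhere; the stumps and the (uniform) weights are illustrative choices, not the output of an actual AdaBoost run:

```python
import numpy as np

# Three hypothetical weak classifiers on 1D inputs; each is only
# 2/3 correct on the toy set below, but on different points.
weak = [
    lambda x: 1 if x > 0 else -1,   # "right of 0 is positive"
    lambda x: 1 if x < 2 else -1,   # "left of 2 is positive"
    lambda x: -1,                   # constant "negative" guess
]

def composite(x, weights):
    """Weighted majority vote: sign( sum_i w_i h_i(x) )."""
    return int(np.sign(sum(w * h(x) for w, h in zip(weights, weak))))

points = [-1.0, 1.0, 3.0]
labels = [-1, 1, -1]   # positive labels only in the middle region

accuracy = lambda clf: sum(clf(x) == y for x, y in zip(points, labels)) / len(points)
weak_accuracies = [accuracy(h) for h in weak]                          # each 2/3
boosted_accuracy = accuracy(lambda x: composite(x, [1.0, 1.0, 1.0]))   # perfect
```

Actual boosting algorithms such as AdaBoost additionally specify how the weak hypotheses and the weights w_i are generated iteratively; the sketch only shows why a weighted combination can outperform each of its constituents.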
More precisely, given a set of (weak) hypotheses/classifiers {h_j}, h_j : R^n → {−1, 1}, under certain technical conditions there exists a set of weights {w_i}, w_i ∈ R, such that the composite classifier of the form hc_w(x) = sign(Σ_i w_i h_i(x)) performs better. Interestingly, a single (weak) learning model can be used to generate the weak hypotheses needed for the construction of a better composite classifier, one which, in principle, can achieve arbitrarily high success probabilities, i.e. a strong learner. The first step of this process is achieved by altering the frequencies at which the labeled training data-points appear; in this way, one can effectively alter the distributions over the data (in a black-box setting, these can be obtained by e.g. rejection sampling methods). The training of one and the same model on such differently distributed datasets can generate distinct weak learners, which emphasize distinct parts of the input space. Once such distinct hypotheses are generated, optimization of the weights w_i of the composite model is performed. In other words, weak learning models can be boosted 26 . 24 Features, however, have a more generic meaning in the context of ML. A data vector is a vector of features, where what a feature is depends on the context. For instance, features can be simply values at particular positions, or more global properties: e.g. a feature of data vectors depicting an image may be "contains a circle", and all vectors corresponding to pictures with circles have it. Even more generically, features pertain to observable properties of the objects the data-points represent ("observable" here simply means that the property can be manifested in the data vector). 25 For instance, we can classify humans, parrots, bats and turtles by the binary features can fly and is mammal. E.g.
choosing the root can fly leads to the branch can fly = no with two leaves decided by is mammal = yes, pinpointing the human, whereas is mammal = no would specify the turtle. Parrots and bats would be distinguished by the same feature in the can fly = yes subtree. 26 It should be mentioned that the above description only serves to illustrate the intuition behind boosting ideas. In practice, various boosting methods have distinct steps, e.g. they may perform the required optimizations in differing orders, use training phases in parallel, etc., which is beyond the needs of this review. Aside from the broad classes of approaches to solving various ML tasks, ML is also often conflated with the specific computational tools which are used to solve them. A prominent example of this is the development of algorithms for optimization problems, especially those arising in the training of standard learning models. This includes e.g. particle swarm optimization, genetic and evolutionary algorithms, and even variants of stochastic gradient descent. ML also relies on other methods, including linear algebra tools, e.g. matrix decomposition methods, such as singular value decomposition, QR-, LU- and other decompositions, derived methods such as principal component analysis, and various techniques from the field of signal analysis (Fourier, wavelet, cosine, and other transforms). The latter set of techniques serves to reduce the effective dimension of the data set, and helps combat the curse of dimensionality. The optimization, linear algebra, and signal processing techniques, and their interplay with quantum information, form an independent body of research with enough material to deserve a separate review, and we will only reflect on these methods when needed. B. Mathematical theories of supervised and inductive learning Executive summary: Aside from proposing learning models, such as NNs or SVMs, learning theory also provides formal tools to identify the limits of learnability.
No Free Lunch theorems provide sobering arguments that naïve notions of "optimal" learning models cannot be attained, and that all learning must rely on some prior assumptions. Computational learning theory relies on ideas from computational complexity theory to formalize many settings of supervised learning, such as the task of approximating or identifying an unknown (boolean) function, a concept, which is just the binary labeling function. The main question of the theory is the quantification of the number of invocations of the black-box, i.e. of the function (or of the oracle providing examples of the function's values on selected inputs), needed to reliably approximate the (partially) unknown concept to desired accuracy. In other words, computational learning theory considers the sample complexity bounds for various learning settings, specified by the concept families and the type of access. The theory of Vapnik and Chervonenkis, or simply VC theory, stems from the tradition of statistical learning. One of the key goals of the theory is to provide theoretical guarantees on generalization performance. This is captured by the following question: given a learning machine trained on a dataset of size N, stemming from some process, with a measured empirical risk (error on the training set) of some value R, what can be said about its future performance on other data-points which may stem from the same process? One of the key results of VC theory is that this can be answered with the help of a third parameter: the model complexity of the learning machine. Model complexity, intuitively, captures how complicated the functions are that the learner can learn: the more complicated the model, the higher the chance of "overfitting", and consequently, the weaker the guarantees on performance beyond the training set. Good learning models can control their model complexity, leading to a learning principle of structural risk minimization.
The art of ML is a juggling act, balancing sample complexity, model complexity, and the computational complexity of the learning algorithm 27 . Although the modern increase of interest in ML and AI is mostly due to applications, aspects of ML and AI do have strong theoretical backgrounds. Here we focus on such foundational results, which clarify what learning is, and which investigate the question of what the limits of learning are. We will very briefly sketch some of the basic ideas. The first collection of results, called No Free Lunch (NFL) theorems, places seemingly pessimistic bounds on the conditions under which learning is at all possible (Wolpert, 1996). No Free Lunch theorems are, essentially, a mathematical formalization of Hume's famous problem of induction (Hume, 1739; Vickers, 2016), which deals with the justification of inductive reasoning. One example of inductive reasoning occurs during generalization. Hume points out that, without a-priori assumptions, concluding any property concerning a class of objects based on any number of observations 28 is not justified. In a similar vein, learning based on experience cannot be justified without further assumptions: expecting that a sequence of events leads to the same outcome as it did in the past is only justified if we assume a uniformity of nature. The problems of generalization and of uniformity can be formulated in the context of supervised learning and RL, with (not uncontroversial) consequences (c.f. (NFL)).
For instance, one of the implications is that the expected performance of any two learning algorithms beyond the training set must be equal, if one uniformly averages over all possible labeling functions, and analogous statements hold for RL settings. In other words, without assumptions on environments/datasets, the expected performance of any two learning models will be essentially the same, and two learning models cannot be meaningfully compared in terms of performance without making statements about the task environments in question. In practice, however, we always have some assumptions on the dataset and environment: for instance, the principle of parsimony (i.e. Occam's razor), asserting that simpler explanations tend to be correct, prevalent in science, suffices to break the symmetries required for NFLs to hold in their strongest form (Lattimore and Hutter, 2011; Hutter, 2010; Ben-David et al., 2011). No review of the theoretical foundations of learning should circumvent the works of Valiant, and the general computational learning theory (CLT), which stems from a computer science tradition initiated by Valiant (Valiant, 1984), and the related VC theory of Vapnik and Chervonenkis, developed from a statistical viewpoint (Vapnik, 1995). We present the basic ideas of these theories in no particular order. Computational learning theory CLT can be understood as a rigorous formalization of supervised learning, which stems from a computational complexity theory tradition. The most famous model in CLT is that of probably approximately correct (PAC) learning. We will explain the basic notions of PAC learning with a simple example: optical character recognition. Consider the task of training an algorithm to decide whether a given image (given as a black and white bitmap) of a letter corresponds to the letter "A", by supplying a set of examples and counterexamples: a collection of images.
Each image x can be encoded as a binary vector in {0, 1}^n (where n = height × width of the image). Assuming that there exists a univocally correct assignment of label 0 (not "A") or 1 to each image implies that there exists a characteristic function f : {0, 1}^n → {0, 1} which discerns letters "A" from other images. Such an underlying characteristic function (or, equivalently, the subset of bitstrings for which it attains the value 1) is, in computational learning theory, called a concept. Any (supervised) learning algorithm will first be supplied with a collection of N examples (x_i, f(x_i))_i. In some variants of PAC learning, it is assumed that the data-points x are drawn from some distribution D attaining values in {0, 1}^n. Intuitively, this distribution can model the fact that, in practice, the examples that are given to the learner stem from its interaction with the world, which specifies what kinds of "A"s we are more likely to see 29 . PAC learning typically assumes inductive settings, meaning that the learning algorithm, given a sample set S_N (comprising N independent, identically distributed samples from D), outputs a hypothesis h : {0, 1}^n → {0, 1} which is, intuitively, the algorithm's "best guess" for the actual concept f. The quality of the guess is measured by the total error (also known as loss, or regret), err_D(h_{S_N}) = Σ_x P(D = x) |h_{S_N}(x) − f(x)|, (15) averaged according to the same (training) distribution D, where h_{S_N} is the hypothesis the (deterministic) learning algorithm outputs given the training set S_N. Intuitively, the larger the training set size N, the smaller the error will be, but this also depends on the actual examples (and thus on S_N and D). PAC theory concerns itself with probably (δ), approximately (ε) correct learning, i.e. with the following expression: P_{S_N ∼ D^N} [err_D(h_{S_N}) ≤ ε] ≥ 1 − δ, (16) where S ∼ D means S was drawn according to the distribution D.
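For small n, the total error of Eq. (15) can be evaluated exactly by summing over all inputs. The sketch below uses a hypothetical concept (the 2-bit AND function), a uniform distribution D, and a deliberately imperfect hypothesis:

```python
from itertools import product

n = 2
concept = lambda x: x[0] & x[1]   # hypothetical concept f: logical AND of the bits
hypothesis = lambda x: x[0]       # imperfect guess h: just copy the first bit
P = lambda x: 1 / 2**n            # uniform distribution D over {0,1}^n

# err_D(h) = sum_x P(D = x) |h(x) - f(x)|, Eq. (15)
err = sum(P(x) * abs(hypothesis(x) - concept(x))
          for x in product([0, 1], repeat=n))
# h and f disagree only on input (1, 0), so err == 0.25
```

A PAC learner, in the sense of Eq. (16), would have to output, with probability at least 1 − δ over the sampled training sets, a hypothesis whose error computed this way is at most ε.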
The above expression is a statement certifying that the learning algorithm, having been trained on a dataset sampled from D, will, except with probability δ, have a total error below ε. We say a concept f is (ε, δ)-learnable, under distribution D, if there exists a learning algorithm, and an N, such that Eq. (16) holds, and simply learnable if it is (ε, δ)-learnable for all choices of (ε, δ). The functional dependence of N on (ε, δ) (and on the concept and distribution D) is called the sample complexity. In PAC learning, we are predominantly concerned with identifying tractable problems, so a concept/distribution pair f, D is PAC-learnable if there exists an algorithm for which the sample complexity is polynomial in ε^{-1} and δ^{-1}. These basic ideas are generalized in many ways. First, in the case where the algorithm cannot output all possible hypotheses, but only a restricted set H (e.g. the hypothesis space is smaller than the total concept space), we can look for the best-case solution by substituting the actual concept f with the optimal choice h* ∈ H which minimizes the error in (15), in all the expressions above. Second, we are typically not interested in just distinguishing the letter "A" from all other letters, but rather in recognizing all letters. In this sense, we typically deal with a concept class (e.g. "letters"), which is a set of concepts, and it is (PAC) learnable if there exists an algorithm for which each of the concepts in the class is (PAC) learnable. If, furthermore, the same algorithm also learns for all distributions D, then the class is said to be (distribution-free) learnable. CLT contains other models, generalizing PAC. For instance, concepts may be noisy or stochastic. In the agnostic learning model, the labeled examples (x, y) are sampled from a distribution D over {0, 1}^n × {0, 1}, which also models probabilistic concepts 30 .
Further, in agnostic learning, we define a set of concepts C ⊆ {c | c : {0, 1}^n → {0, 1}}, and given D, we can identify the best deterministic approximation of D in the set C, given by opt_C = min_{c∈C} err_D(c). The goal of learning is to produce a hypothesis h ∈ C which performs not much worse than the best approximation opt_C, in the PAC sense: the algorithm is an (ε, δ)-agnostic learner for D and C if, given access to samples from D, it outputs a hypothesis h ∈ C such that err_D(h) ≤ ε + opt_C, except with probability δ. Another common model in CLT is the exact learning from membership queries model (Angluin, 1988), which is, intuitively, related to active supervised learning (see section I.B.3). Here, we have access to an oracle, a black-box, which outputs the concept value f(x) when queried with an example x. The basic setting is exact, meaning we are required to output a hypothesis which makes no errors whatsoever, however only with a bounded probability (say 3/4). In other words, this is PAC learning where ε = 0, but we get to choose which examples we are given, adaptively, and δ is bounded away from 1/2. The figure of merit usually considered in this setting is query complexity, which denotes the number of calls to the oracle the learning algorithm uses, and is for most intents and purposes synonymous with sample complexity 31 . This, in spirit, corresponds to an active supervised learning setting. Much of PAC learning deals with identifying examples of interesting concept classes which are learnable (or proving that relevant classes are not), but other, more general results exist connecting this learning framework. For instance, we can ask whether we can achieve a finite-sampling universal learning algorithm: that is, an algorithm that can learn any concept, under any distribution, using some fixed number of samples N.
The No Free Lunch theorems we mentioned previously imply that this is not possible: for each learning algorithm (and ε, δ), and any N, there is a setting (concept/distribution) which requires more than N samples to achieve (ε, δ)-learning. Typically, the criterion for a problem to be learnable assumes that there exists a classifier whose performance is essentially arbitrarily good, that is, it assumes the classifier is strong. The boosting result in ML, already touched upon in section II.A.3, shows that settling for weak classifiers, which perform only slightly better than random classification, does not generate a different concept of learnability (Schapire, 1990). Classical CLT has also been generalized to deal with concepts with continuous ranges. In particular, so-called p-concepts have range in [0, 1] (Kearns and Schapire, 1994). The generalization of the entire CLT to deal with such continuous-valued concepts is not without problems, but nonetheless some of the central results, for instance quantities which are analogs of the VC-dimension, and analogous theorems relating these to generalization performance, can still be provided (see (Aaronson, 2007) for an overview given in the context of the learning of quantum states discussed in section V.A.1). Computational learning theory is closely related to the statistical learning theory of Vapnik and Chervonenkis (VC theory), which we discuss next. VC theory The statistical learning formalism of Vapnik and Chervonenkis was developed over the course of more than 30 years, and in this review we are forced to present just a chosen aspect of the total 31 When the oracle allows non-trivial inputs, one typically talks about query complexity. Sample complexity deals with the question of "how many samples", which suggests a setting where the oracle only produces outputs, without taking inputs. The distinction is not relevant for our purposes and is more often a matter of convention of the research line.
theory, which deals with generalization performance guarantees. In the previous paragraph on PAC learning, we introduced the concept of total error, which we will refer to as (total) risk. It is defined as the average over all the data points, which is, for a hypothesis h, given by R(h) = error(h) = Σ_x P(D = x) |h(x) − f(x)| (we are switching notation to maintain consistency with the literature of differing communities). However, this quantity cannot be evaluated in practice, as in practice we only have access to the training data. This leads us to the notion of the empirical risk, given by R̂(h) = (1/N) Σ_{x∈S_N} |h(x) − f(x)|, (17) where S_N is the training set drawn independently from the underlying distribution D. The quantity R̂(h) is intuitive and directly measurable. However, the problem of finding learning models which optimize empirical risk alone is not in itself interesting, as it is trivially resolved with a look-up table. From a learning perspective, the more interesting and relevant quantity is the performance beyond the training set, which is contained in the unmeasurable R(h), and indeed the task of inductive supervised learning is identifying the h which minimizes R(h), given only the finite training set S_N. Intuitively, the hypothesis h which minimizes the empirical risk should also be our best bet for the hypothesis which minimizes R(h), but this can only make sense if our hypothesis family is somehow constrained, at least to a family of total functions: again, a look-up table has zero empirical risk, yet says nothing about what to do beyond. One of the key contributions of VC theory is to establish a rigorous relationship between the observable quantity R̂(h) (the empirical risk), the quantity we actually wish to bound, R(h) (the total risk), and the family of hypotheses our learning algorithm can realize. Intuitively, if the function family is too flexible (as is the case with just look-up tables), a perfect fit on the examples says little.
In contrast, having a very restrictive set of hypotheses, say just one (which is independent of the dataset/concept and the generating distribution), suggests that the empirical risk is a fair estimate of the total risk (however bad it may be), as nothing has been tailored to the training set. This brings us to the notion of the model complexity of the learning model, which has a few formalizations; here we focus on the Vapnik-Chervonenkis dimension of the model (VC dimension) 32 . The VC-dimension is an integer assigned to a set of hypotheses H ⊆ {h | h : S → {0, 1}} (e.g. the possible classification functions our learning algorithm can even in principle be trained to realize), where S can be, for instance, the set of bitstrings {0, 1}^n or, more generally, say, real vectors in R^n. In the context of basic SVMs, the set of hypotheses is "all hyperplanes" 33 . Consider now a subset C_k of k points in R^n in general position 34 . These points can attain binary labels in 2^k different ways. The hypothesis family H is said to shatter the set C_k if, for any labeling ℓ of the set C_k, there exists a hypothesis h ∈ H which correctly labels the set C_k according to ℓ. In other words, using functions from H we can learn any labeling function on the set C_k of k points in general position perfectly. The VC dimension of H is then the largest k_max such that there exists a set C_{k_max} of points in general position which is shattered (perfectly "labelable" for any labeling) by H. For instance, for n = 2, "rays" shatter three points but not four (imagine the vertices of a square where diagonally opposite vertices share the same label), and in n = N, "hyperplanes" 32 Another popular measure of model complexity is e.g. Rademacher complexity (Bartlett and Mendelson, 2003). 33 Naturally, a non-trivial kernel function enriches the set of hypotheses realized by SVMs. 34 General position implies that no sub-set of points is co-planar beyond what is necessary, i.e.
points in S ⊆ R^n are in general position if no hyperplane in R^n contains more than n points in S. shatter N + 1 points. While it is beguiling to think that the VC dimension corresponds to the number of free parameters specifying the hypothesis family, this is not the case 35 . The VC theorem (in one of its variants) (Devroye et al., 1996) then states that the empirical risk matches the total risk, up to a deviation which decays in the number of samples, but grows in the VC-dimension of the model. More formally: P[ R̂(h_{S_N}) − R(h_{S_N}) ≤ ε ] = 1 − δ, (18) with ε = sqrt( [d (log(2N/d) + 1) − log(δ/4)] / N ), (19) where d is the VC-dimension of the model, N the number of samples, and h_{S_N} is the hypothesis output by the model given the training set S_N, which is sampled from the underlying distribution D. The underlying distribution D implicitly appears also in the total risk R. Note that the chosen acceptable probability of incorrectly bounding the true error, that is, the probability δ, contributes only logarithmically to the misestimation bound ε, whereas the VC dimension and the number of samples contribute (mutually inversely) linearly to the square of ε. The VC theorem suggests that the ideal learning algorithm would have a low VC dimension (allowing a good estimate of the relationship of the empirical and total risk), while at the same time performing well on the training set. This leads to a learning principle called structural risk minimization. Consider a parametrized learning model (say, parametrized by an integer l ∈ N) such that each l induces a hypothesis family H_l, each more expressive than the previous, so H_l ⊆ H_{l+1}. Structural risk minimization (contrasted to empirical risk minimization, which just minimizes empirical risk) takes into account that, in order to have (a guarantee on) good generalization performance, we need to have both good observed performance (i.e. low empirical risk) and low model complexity.
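The deviation bound of Eq. (19) is straightforward to evaluate numerically. The sketch below computes ε for a hypothetical model of VC-dimension d = 10 and confirms that the guarantee tightens as the number of samples N grows (δ = 0.05 is an arbitrary choice):

```python
import math

def vc_bound(d, N, delta):
    """epsilon = sqrt( (d*(log(2N/d) + 1) - log(delta/4)) / N ), Eq. (19)."""
    return math.sqrt((d * (math.log(2 * N / d) + 1) - math.log(delta / 4)) / N)

d, delta = 10, 0.05
epsilons = [vc_bound(d, N, delta) for N in (100, 1000, 10000, 100000)]
# The misestimation bound shrinks, roughly as sqrt(log(N)/N), as N grows,
# while increasing d for fixed N loosens it.
```

Note the asymmetry the text points out: halving δ only adds a logarithmic term under the square root, whereas compensating for a doubled VC-dimension d requires roughly doubling N.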
High model complexity induces the risk stemming from the structure of the problem, manifested in common issues such as data overfitting. In practice, this is addressed by considering (meta-)parametrized models, like {H_l}, where we minimize a combination of l (influencing the VC-dimension) and the empirical risk associated to H_l. In practice, this is realized by adding a regularization term to the training optimization, so, generically, the (unregularized) learning process resulting in argmin_{h∈H} R̂(h) is updated to argmin_{h_l∈H_l} [ R̂(h_l) + reg(l) ], where reg(·) penalizes the complexity of the hypothesis family, or just of the given hypothesis. The VC dimension is also a vital concept in PAC learning, connecting the two frameworks. Note first that a concept class C, which is a set of concepts, is also a legitimate set of hypotheses, and thus has a well-defined VC dimension d_C. Then, the sample complexity of (ε, δ)-(PAC)-learning of C is given by O((d_C + ln(1/δ)) ε^{-1}). Many of the above results can also be applied in the contexts of unsupervised learning; however, the theory of unsupervised (or structure) learning is mostly concerned with the understanding of particular methodologies, the topic of which is beyond this review paper. 35 The canonical counterexample is the family specified by the partition of the real plane, halved by the graph of the two-parametric function h_{α,β}(x) = α sin(βx), which can be proven to shatter any finite number of points in n = 2. The fact that the number of parameters of a function does not fully capture the complexity of the function should not be surprising, as any (continuous) function over k + n variables (parameters + dimension) can be encoded as a function over 1 + n variables. C. Basic methods and theory of reinforcement learning Executive summary: While RL, in all generality, studies learning in and from interactive task environments, perhaps the best understood models consider more restricted settings.
Environments can often be characterized by Markov Decision Processes, i.e. they have states, which can be observed by the agent. The agent can cause transitions from state to state by its actions, but the rules of the transitions are not known beforehand. Some of the transitions are rewarded. The agent learns which actions to perform, given that the environment is in some state, such that it receives the highest value of rewards (expected return), either in a fixed time frame (finite-horizon) or over (asymptotically) long time periods, where future rewards are geometrically depreciated (infinite-horizon). Such models can be solved by estimating action-value functions, which assign an expected return to actions given states, for which the agent must explore the space of strategies; other methods exist as well. In more general models, the state of the environment need not be fully observable, and such settings are significantly harder to solve. RL settings can also be tackled by models from the so-called Projective Simulation framework for the design of learning agents, inspired by physical stochastic processes. While comparatively new, this model is of particular interest as it has been designed with the possibilities of beneficial quantization in mind. Interactive learning methods include models beyond textbook RL, including partially observable settings, which require generalization and more. Such extensions, e.g. generalization, typically require techniques from non-interactive learning scenarios, but also lead to agents with an ever increasing level of autonomy. In this sense, RL forms a bridge between ML and general AI models. Broadly speaking, RL deals with the problem of learning how to optimally behave in unknown environments. In the basic textbook formalism, we deal with a task environment, which is specified by a Markov decision process (MDP).
MDPs are labeled, directed graphs with additional structure, comprising discrete and finite sets of states S = {s_i} and actions A = {a_i}, which denote the possible states of the environment and the actions the learning agent can perform on it, respectively. The choice of the actions of the agent changes the state of the environment, in a manner which is specific to the environment (MDP), and which may be probabilistic. This is captured by a transition rule P(s|s', a), denoting the probability of the environment ending up in the state s if the action a had been performed in the state s'. Technically, this can be viewed as a collection of action-specific Markov transition matrices {P_a}_{a∈A} that the learner can apply on the environment by performing an action. These describe the dynamics of the environment, conditioned on the actions of the agent. The final component specifying the environment is a reward function R : S × A × S → Λ, where Λ is a set of rewards, often binary. In other words, the environment rewards certain transitions 36 . At each time instance, the action of the learner is specified by a policy: a conditional probability distribution π(a|s), specifying the probability of the agent outputting the action a provided it is in the state s. Given an MDP, intuitively the goal is finding good policies, i.e. those which yield high rewards. This can be formalized in many non-equivalent ways. Given a policy π and some initial state s we can e.g. define the finite-horizon expected total reward after N interaction steps as R^s_N(π) = Σ_{i=1}^{N} r_i, where r_i is the expected reward under policy π at time-step i, in the given environment, and assuming we started from the state s. If the environment is finite and strongly connected 37 , the finite-horizon rewards diverge as the horizon N grows.
However, by adding a geometrically depreciating factor (rate γ), we obtain an always bounded expression R_γ(π) = Σ_{i=1}^{∞} γ^i r_i, called the infinite-horizon expected reward (parametrized by γ), which is more commonly studied in the literature. The expected rewards in finite or infinite horizons form the typical figures of merit in solving MDP problems, which come in two flavours. First, in decision theory, or planning (in the context of AI), the typical goal is finding the policy π_opt which optimizes the (in)finite-horizon reward in a given MDP; formally: given the (full or partial) specification of the MDP M, solve π_opt = argmax_π R_{N/γ}(π), where R_{N/γ} is the expected reward in finite (for N steps) or infinite horizon (for a given depreciation γ) settings, respectively. Such problems can be solved by dynamic and linear programming. In RL (Sutton and Barto, 1998), in contrast, the specification of the environment (the MDP) is not given, but rather can be explored by interacting with it dynamically. The agent can perform an action, and receive the subsequent state (and perhaps a reward). The ultimate goal here comes in two related (but conceptually different) flavours. One is to design an agent which will over time learn the optimal policy π_opt, meaning the policy can be read out from the memory of the agent/program. Slightly differently, we may wish for an agent which will, over time, gradually alter its behaviour (policy) so as to act according to the optimal policy. While in theory these two are closely related, in practice, e.g. in robotics, they are quite different, as the reward rate before convergence (perfect learning) also matters 38. First of all, we point out that RL problems as given above can be solved reliably whenever the MDP is finite and strongly connected: a trivial solution is to stick to a random policy until a reliable tomography of the environment can be performed, after which the problem is resolved via dynamic programming 39.
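The planning problem for a fully specified MDP can be solved e.g. by value iteration, one of the dynamic-programming methods mentioned above. A minimal sketch (the dictionary layout for `P` and `R` is our own illustrative convention):

```python
def value_iteration(P, R, gamma=0.9, iters=200):
    """Dynamic-programming sketch for planning in a known MDP:
    iterate V(s) <- max_a sum_s2 P(s2|s,a) * (R(s,a,s2) + gamma * V(s2)).

    P[a][s] is a list of next-state probabilities for action a in state s;
    R[(s, a, s2)] is the reward of the transition (default 0).
    """
    states = sorted({s for a in P for s in P[a]})
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(sum(p * (R.get((s, a, s2), 0.0) + gamma * V[s2])
                        for s2, p in enumerate(P[a][s]))
                    for a in P)
             for s in states}
    return V
```

Because the update is a γ-contraction, the iterates approach the optimal value function V* geometrically, so a few hundred sweeps suffice for small examples.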
Often, environments have additional structure, so-called initial and terminal states: if the agent reaches the terminal state, it is "teleported" back to the fixed initial state. Such a structure is called episodic, and can be used as a means of ensuring the strong connectivity of the MDP. One way of obtaining solutions is by tracking so-called value functions V^π : S → R, which assign the expected reward under policy π assuming we start from state s; this is done recursively: the value of the current state is the current reward plus the averaged value of the subsequent state (averaged under the stochastic transition rule of the environment P(s′|a, s)). Optimal policies optimize these functions, and this too is achieved sequentially, by modifying the policy so as to maximize the value functions. This, however, assumes knowledge of the transition rule P(s′|a, s). In further development of the theory, it was shown that tracking action-value functions Q^π(s, a), given by

Q^π(s, a) = Σ_{s′} P(s′|a, s) (Λ(s, a, s′) + γ V^π(s′)),   (20)

which assign a value not only to the state but to the subsequent action as well, can be modified into an online learning algorithm 40. In particular, the Q-values can be continuously estimated by weighted averaging of the old value with the current reward and the estimate of the highest possible Q-value of the subsequent state:

Q_{t+1}(s_t, a_t) = Q_t(s_t, a_t) + α_t [ r_{t+1} + γ max_a Q_t(s_{t+1}, a) − Q_t(s_t, a_t) ],   (21)

where Q_t(s_t, a_t) is the old value, α_t the learning rate, r_{t+1} the reward, γ the discount, and max_a Q_t(s_{t+1}, a) the estimate of the optimal future value. Note that having access to the optimal Q-values suffices to find the optimal policy: given a state, simply pick an action with the highest Q-value; the algorithm above, however, says nothing about which policy the agent should employ while learning. In (Watkins and Dayan, 1992) it was shown that the algorithm specified by the update rule of Eq. 21, called Q-learning, indeed converges to the optimal Q-values as long as the agent employs any fixed policy which assigns non-zero probability to all actions in every state (the learning rate α_t, which is a function of time, has to satisfy certain conditions, and γ should be the γ of the targeted figure of merit R_γ) 41. In essence, this result suffices for solving the first flavour of RL, where the optimal policy is "learned" by the agent in the limit but, in principle, never actually used.

36 Rewards can also be probabilistic. This can be modelled by explicitly allowing stochastic reward functions, or by extending the state space to include rewarding and non-rewarding instances of states (note that the reward depends on the current state, the action and the reached state), in which case the probability of the reward is encoded in the transition probabilities.
37 In this context this means that the underlying MDP has finite return times for all states, that is, there is a finite probability of going back to the initial state from any state for some sequence of actions.
38 These two flavours are closely related to the notions of on-policy and off-policy learning. These labels typically pertain to how the estimates of the optimal policy are internally updated, which may be in accordance with the actual current policy and actions of the agent, or independently from the executed action, respectively. For more details see e.g. (Sutton and Barto, 1998).
39 If the environment is not strongly connected, this is not possible: for instance, the first move of the learner may lead to "good" or "bad" regions from which there is no way out, in which case optimal behaviour cannot be obtained with certainty.
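The Q-learning rule of Eq. 21 can be sketched as a short tabular algorithm. The ε-greedy behaviour policy used here is one common choice satisfying the "non-zero probability for all actions" condition; the interface `step(s, a) -> (next_state, reward)` stands in for the unknown environment:

```python
import random

def q_learning(step, states, actions, n_steps=3000, alpha=0.1, gamma=0.9,
               eps=0.2, rng=random):
    """Tabular Q-learning sketch implementing the update of Eq. (21).

    `step(s, a)` is the environment (unknown to the agent), returning
    (next_state, reward). The eps-greedy behaviour policy keeps every
    action at non-zero probability, as the convergence result requires.
    """
    Q = {(s, a): 0.0 for s in states for a in actions}
    s = states[0]
    for _ in range(n_steps):
        if rng.random() < eps:
            a = rng.choice(actions)                      # explore
        else:
            a = max(actions, key=lambda act: Q[(s, act)])  # exploit
        s2, r = step(s, a)
        target = r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])        # Eq. (21)
        s = s2
    return Q
```

On a small deterministic environment the largest learned Q-value settles on the rewarded state-action pair, from which the greedy policy can be read off.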
The convergence of the Q-learning update to the optimal Q-values, and consequently to the optimal behaviour, has been proven for all learning agents using greedy-in-the-limit, infinite-exploration (GLIE) policies. As the name suggests, such policies, in the asymptotic limit, perform the actions with the highest estimated values 42. At the same time, infinite exploration means that, in the limit, all state/action combinations will be tried out infinitely many times, ensuring that the true optimal action values are found and that local optima are avoided. In general, the optimal trade-off between these two competing properties, the exploration of the learning space and the exploitation of obtained knowledge, is quintessential for RL. There are many other RL algorithms which are based on state-value or action-value optimizations, such as SARSA 43, various value iteration methods, temporal difference methods, etc. (Sutton and Barto, 1998). In more recent times, progress has been achieved by using parametrized approximations of state/action-value functions - a cross-breed between function approximation and reinforcement learning - which reduces the search space of available Q-functions. Here, the results which combine deep learning for value function approximation with RL have been particularly successful (Mnih et al., 2015), and the same approach also underpins the AlphaGo (Silver et al., 2016) system. This brings us to a different class of methods which do not optimize state- or action-value functions, but rather learn complete policies, often by performing an estimate of gradient descent, or other means of direct optimization, in policy space. This is feasible whenever the policies are specified indirectly, by a comparably small number of parameters, and can in some cases be faster (Peshkin, 2001). The methods we discussed thus far consider special cases of environments, where the environment is Markovian, or, related to this, fully observable.
The most common generalization of this are so-called partially observable MDPs (POMDPs), where the underlying MDP structure is extended to include a set of observations O and a stochastic function defined by the conditional probability distribution P_POMDP(o ∈ O | s ∈ S, a ∈ A). The states of the environment are no longer directly accessible to the agent; rather, the agent perceives observations from the set O, which indirectly and, in general, stochastically depend on the actual unobservable environmental state, as given by the distribution P_POMDP, and on the action the agent took last. POMDPs are expressive enough to capture many real-world problems, and are thus a common world model in AI, but are significantly more difficult to deal with than MDPs 44. As mentioned, the setting of POMDPs moves us one step closer to arbitrary environment settings, which is the domain of artificial (general) intelligence 45. The context of AGI is often closely related to the modern view of robotics, where the structure of what can be observed, and what actions are possible, stems not only from the nature of the environment, but also from the (bodily) constraints of the agent: e.g. a robot is equipped with sensors, specifying and limiting what the robot can observe or perceive, and actuators, constraining the possible actions. In such an agent-centric viewpoint, we typically talk about the set of percepts - signals that the agent can perceive, which may correspond to full states or partial observations, depending on the agent-environment setting - and the set of actions 46. This latter viewpoint, that the percept/action structure stems from the physical constitution of the agent and the environment, which we will refer to as an embodied perspective, was one of the starting points of the development of the projective simulation (PS) model for AI. PS is a physics-inspired model for AI which can be used for solving RL tasks.
The centerpiece of the model is the so-called Episodic and Compositional Memory (ECM), which is a stochastic network of clips, see Fig. 9. Clips are representations of short autobiographical episodes, i.e. memories of the agent. Using the compositional aspects of the memory, which allow for a rudimentary notion of creativity, the agent can also combine actual memories to generate fictitious, conceivable clips which need not have actually occurred. More formally, clips can be defined recursively as either memorized percepts or actions, or otherwise structures (e.g. sequences) of clips. Given a current percept, the PS agent calls its ECM network to perform a stochastic random walk over its clip space (the structure of which depends on the history of the agent), projecting itself into conceivable situations before committing to an action. Aspects of this model have been beneficially quantized, and also used both in quantum experiments and in robotics, and we will focus more on this model in section VII.A. a. Learning efficiency and learnability for RL As mentioned in the introduction to this section, No Free Lunch theorems also apply to RL, and any statement about learning requires us to restrict the space of possible environments. For instance, "finite-space, time-independent MDPs" is a restriction which allows perfect learning relative to some of the standard figures of merit, as was first proven for the Q-learning algorithm. Beyond learnability, in more recent times, notions of sample complexity for RL tasks have also been explored, addressing the problem from different perspectives. The theory of sample complexity for RL settings is significantly more involved than for supervised learning, although the very basic desiderata remain the same: how many interaction steps are needed before the agent learns. Learning can naturally mean many things, but most often what is meant is that the agent learns the optimal policy.
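Returning briefly to the PS model described above, its simplest two-layered variant can be sketched in a few lines. This is our own deliberately reduced caricature: clips are just percepts and actions, the random walk over the clip network is a single hop with probabilities proportional to edge weights ("h-values"), and the reward update shown is a simplified strengthening rule with optional damping (forgetting), not the exact published dynamics:

```python
import random

class TwoLayerPS:
    """Minimal two-layered Projective Simulation sketch (our simplification)."""

    def __init__(self, percepts, actions, damping=0.0):
        self.actions = list(actions)
        self.h = {(p, a): 1.0 for p in percepts for a in actions}
        self.damping = damping

    def act(self, percept, rng=random):
        # One hop of the random walk: percept clip -> action clip,
        # with probability proportional to the edge's h-value.
        weights = [self.h[(percept, a)] for a in self.actions]
        return rng.choices(self.actions, weights=weights)[0]

    def learn(self, percept, action, reward):
        for edge in self.h:  # damping pulls h-values back toward 1
            self.h[edge] -= self.damping * (self.h[edge] - 1.0)
        self.h[(percept, action)] += reward  # strengthen the rewarded hop
```

Repeatedly rewarding one percept-action edge makes that hop increasingly likely, which is the basic learning mechanism of the model.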
Unlike supervised learning, RL has an additional temporal dimension in the definitions of optimality (e.g. finite or infinite horizons), leading to an even broader space of options one can explore. Further details on this important field of research are beyond the scope of this review, and we refer the interested reader to e.g. the thesis of Kakade (Kakade, 2003), which also does a good job of reviewing some of the early works and finds sample complexity bounds for RL in many basic settings, or e.g. (Lattimore et al., 2013; Dann and Brunskill, 2015) for some of the newer results. III. QUANTUM MECHANICS, LEARNING, AND AI Quantum mechanics has already had a profound effect on the fields of computation and information processing. However, its impact on AI and learning has, up until very recently, been modest. Although the fields of ML and AI have a strong connection to the theory of computation, these fields are still different, and not all progress in (quantum) computation implies qualitative progress in AI. For instance, although it has been more than 20 years, the arguably most celebrated result in QC is still Shor's factoring algorithm (Shor, 1997), which, on the face of it, has no impact on AI 47. Nonetheless, other, less famous results may have applications to various aspects of AI and learning. The field of QIP has thus, from its early stages, had a careful and tentative interplay with various aspects of AI, although it is only recently that this line of research has received broader attention. Roughly speaking, we can identify four main directions covering the interplay between ML/AI and quantum mechanics, summarized in Fig. 10.
Historically speaking, the first contacts between aspects of QIP and learning theory occurred in terms of the direct application of statistics and statistical learning in light of quantum theory, which forms the first line: classical machine learning applied in quantum theory and experiment, reviewed in section IV. In this first topic, ML techniques are applied to data stemming from quantum experiments. The second topic, in contrast, concerns machine learning over genuinely quantum data: quantum generalizations of machine learning-type tasks, discussed in section V. This brings us to the topic which has been receiving substantial interest in recent times: can quantum computers genuinely help in machine learning problems? This is addressed in section VI. The final topic we will investigate considers aspects of QIP which extend beyond machine learning (taken in a narrow sense), such as generalizations of RL, and which can be understood as stepping-stones towards quantum AI. This is reflected upon in section VII.C. It is worthwhile to note that there are many possible natural classifications of the comprehensive field we discuss in this review. Our chosen classification is motivated by two subtly differing perspectives on the classification of quantum ML, discussed further in section VII.B.1. IV. MACHINE LEARNING APPLIED TO (QUANTUM) PHYSICS In this section we review works and ideas where ML methods have been either directly utilized, or have otherwise been instrumental, for QIP results. To do so, we are, however, facing the thankless task of specifying the boundaries of what is considered a ML method. In recent times, partially due to its successes, ML has become a desirable keyword, and consequently an umbrella term for a broad spectrum of techniques. This includes algorithms for solving genuine learning problems, but also methods and techniques designed for indirectly related problems.
From such an all-encompassing viewpoint, ML also includes aspects of (parametric) statistical learning, the solving of black-box (or derivative-free) optimization problems, and also the solving of hard optimization problems in general 48. As we do not presume to establish hard boundaries, we adopt a more inclusive perspective. The collection of all works which utilize such methods, which could conceivably fit in broad-scope ML, for QIP applications cannot be covered in one review. Consequently, we place emphasis on pioneering works, and on works where the authors themselves advertise the ML flavour of the used methodologies, thereby emphasizing the potential of such ML/QIP interdisciplinary endeavors. The use of ML in the context of QIP, understood as above, has been considerable, with an effective explosion of related works in the last few years. ML has been shown to be effective in a great variety of QIP-related problems: in quantum signal processing, quantum metrology, Hamiltonian estimation, and in problems of quantum control. In recent times, the scope of applications has been significantly extended: ML and related techniques have also been applied to combating noise in the process of performing quantum computations, to problems in condensed-matter and many-body physics, and in the design of novel quantum optical experiments. Such results suggest that advanced ML/AI techniques will play an integral role in the quantum labs of the future, and in particular in the construction of advanced quantum devices and, eventually, quantum computers. In a complementary direction, QIP applications have also engaged many of the methods of ML, showing that QIP may also become a promising proving ground for cutting-edge ML research. Contacts between statistical learning theory (as a part of the theoretical foundations of ML) and quantum theory come naturally, due to the statistical foundations of quantum theory.
Already the very early theories of quantum signal processing (Helstrom, 1969), probabilistic aspects of quantum theory and quantum state estimation (Holevo, 1982), and early works (Braunstein and Caves, 1994) which would lead to modern quantum metrology (Giovannetti et al., 2011) included statistical analyses which establish tentative grounds for a more advanced ML/QIP interplay. Related early works further emphasize the applicability of statistical methods, in particular maximum likelihood estimation, to quantum tomographic scenarios, such as the tasks of state estimation (Hradil, 1997), the estimation of quantum processes (Fiurášek and Hradil, 2001) and measurements (Fiurášek, 2001), and the reconstruction of quantum processes from incomplete tomographic data (Ziman et al., 2005) 49. Works of this type generically focus on physical scenarios where a clean analytic theory can be applied. However, in particular in experimental, or noisy (thus, realistic) settings, many of the assumptions which are crucial for the purely analytic treatment fail. This leads to the first category of ML applications to QIP we consider. A. Hamiltonian estimation and metrology Executive summary: Metrological scenarios can involve complex measurement strategies, where, e.g., the measurements which need to be performed may depend on previous outcomes. Further, the physical system under analysis may be controlled with the help of additional parameters - so-called controls - which can be sequentially modified, leading to a more complicated space of possibilities. ML techniques can help us find optima in such a complex space of strategies, under various constraints, which are often pragmatically and experimentally motivated. The identification of the properties of physical systems, be they dynamic properties of evolutions (e.g. process tomography) or properties of the states of given systems (e.g. state tomography), is a fundamental task.
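As a toy instance of the maximum-likelihood estimation mentioned above (our own illustrative example, not taken from the cited works), consider estimating the Bloch z-component of a qubit from repeated σ_z measurements, where the outcome probabilities are p(up) = (1 + z)/2:

```python
import math

def log_likelihood(z, n_up, n_down):
    """Binomial log-likelihood of Bloch z-component z given sigma_z counts."""
    p = (1.0 + z) / 2.0
    return n_up * math.log(p) + n_down * math.log(1.0 - p)

def mle_bloch_z(n_up, n_down):
    """Closed-form maximum-likelihood estimate: p(up) = (1 + z)/2 gives
    z_hat = 2 * n_up / (n_up + n_down) - 1."""
    return 2.0 * n_up / (n_up + n_down) - 1.0
```

Realistic tomography replaces this single parameter with a density matrix and must handle positivity constraints and noise, which is precisely where the clean analytic treatment starts to fail.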
Such tasks are resolved by various (classical) metrological theories and methods, which can identify optimal strategies and characterize error bounds, and which have also been quite generally exported to the quantum realm. For instance, quantum metrology studies the estimation of the parameters of quantum systems and, generally, identifies optimal measurement strategies for their estimation. Further, quantum metrology places particular emphasis on scenarios where genuine quantum phenomena - a category of phenomena associated with, and sometimes even defined by, the need for complex and difficult-to-implement quantum devices for their realization - yield an advantage over simpler, classical strategies. The specification of optimal strategies, in general, constitutes a problem of planning 50, for which various ML techniques can be employed. The first examples of ML applications for finding measurement strategies originate from the problem of phase estimation, a special case of Hamiltonian estimation. Interestingly, already this simple case provides a fruitful playground for ML techniques: analytically optimal measurement strategies are relatively easy to find, but are experimentally unfeasible. In turn, if we limit ourselves to a set of "simple measurements", near-optimal results are possible, but they require difficult-to-optimize adaptive strategies - the type of problem ML is well suited for. Hamiltonian estimation problems have also been tackled in more general settings, invoking more complex machinery. We first briefly describe basic Hamiltonian estimation settings and metrological concepts. Then we will delve deeper into the results combining ML with metrology problems. Hamiltonian estimation The generic scenarios of Hamiltonian estimation, a common instance of metrology in the quantum domain, consider a quantum system governed by a (partially unknown) Hamiltonian within a specified family H(θ), where θ = (θ_1, . . . , θ_n) is a set of parameters.
Roughly speaking, Hamiltonian estimation deals with the task of identifying the optimal methods (and the performance thereof) for estimating the Hamiltonian parameters. This amounts to optimizing the choice of initial states (probe states), which will evolve under the Hamiltonian, and the choice of the subsequent measurements, which uncover the effect the Hamiltonian had and thus, indirectly, the parameter values 51. This prolific research area considers many restrictions, variations and generalizations of this task. For instance, one may assume settings in which we either have control over the Hamiltonian evolution time t, or it is fixed so that t = t_0; these are typically referred to as frequency and phase estimation, respectively. Further, the efficiency of the process can be measured in multiple ways. In a frequentist approach, one is predominantly interested in estimation strategies which, roughly speaking, allow for the best scaling of the precision of the estimate as a function of the number of measurements. The quantity of interest is the so-called quantum Fisher information, which bounds and quantifies this scaling. Intuitively, in this setting, also called the local regime, many repetitions of measurements are typically assumed. Alternatively, in the Bayesian, or single-shot, regime, the prior information, which is given as a distribution over the parameter to be estimated, and its update to the posterior distribution given a measurement strategy and outcome, are the central objects (Jarzyna and Demkowicz-Dobrzański, 2015). 50 More specifically, most metrology settings constitute instances of off-line planning, and thus not RL, as the specification of the "environment" is fully given - in other words, there is no need to actually run an experiment, and the optimal strategies can be found off-line. See section I.B for more detail. 51 Technically, the estimation also involves the use of a suitable estimator function, but these details will not matter.
The objective here is the identification of preparation/measurement strategies which optimally reduce the average variance of the posterior distribution, which is computed via Bayes' theorem. One of the key interests in this problem is that the utilization of arguably genuine quantum features, such as entanglement, squeezing etc., in the structure of the probe states and measurements may lead to provably more efficient estimation than is possible by so-called classical strategies, for many natural estimation problems. Such quantum enhancements are potentially of immense practical relevance (Giovannetti et al., 2011). The identification of optimal scenarios has been achieved in certain "clean" theoretical scenarios, which are, however, often unrealistic or impractical. It is in this context that ML-flavoured optimization, and other ML approaches, can help. Phase estimation settings Interesting estimation problems, from a ML perspective, can already be found in the simple example of a phase shift in an optical interferometer, where one of the arms of an otherwise balanced interferometer contains a phase shift of θ. Early on, it was shown that given an optimal probe state with mean photon number N, and an optimal (so-called canonical) measurement, the asymptotic phase uncertainty can decay as N^{-1} (Sanders and Milburn, 1995) 52, known as the Heisenberg limit. In contrast, the restriction to "simple measurement strategies" (as characterized by the authors), involving only photon number measurements in the two output arms, achieves a quadratically weaker scaling of 1/√N, referred to as the standard quantum limit. This was proven in more general terms: the optimal measurements cannot be achieved by classical post-processing of photon number measurements of the output arms, but constitute an involved, experimentally unfeasible POVM (Berry and Wiseman, 2000).
However, in (Berry and Wiseman, 2000) it was shown how this can be circumvented by using "simple measurements", provided they can be altered at run-time. Each measurement consists of a photon number measurement of the output arms, and is parametrized by an additional, controllable phase shift of φ in the free arm - equivalently, the unknown phase can be tweaked by a chosen φ. The optimal measurement process is an adaptive strategy: an entangled N-photon state is prepared (see e.g. (Berry et al., 2001)), the photons are sequentially injected into the interferometer, and photon numbers are measured. At each step, the measurement performed is modified by choosing a different phase shift φ, which depends on the previous measurement outcomes. In (Berry and Wiseman, 2000; Berry et al., 2001), an explicit strategy was given which achieves the Heisenberg scaling of the optimal order O(1/N). However, for N > 4 it was shown that this strategy is not strictly optimal. This type of planning is hard, as it reduces to the solving of non-convex optimization problems 53. The field of ML deals with such planning problems as well, and thus many optimization techniques have been developed for this purpose. The application of such ML techniques, specifically particle swarm optimization, was first suggested in the pioneering works (Hentschel and Sanders, 2010, 2011), and later in (Sergeevich and Bartlett, 2012). In subsequent work, the perhaps more well-known method of differential evolution was demonstrated to be superior and more computationally efficient (Lovett et al., 2013). Generalized Hamiltonian estimation settings ML techniques can also be employed in significantly more general settings of quantum process estimation. More general Hamiltonian estimation settings consider a partially controlled evolution given by H_C(θ), where C is a collection of control parameters of the system. This is a reasonable setting in e.g.
the production of quantum devices, which have controls (C), but whose actual performance (dependent on θ) needs to be confirmed. Further, since production devices are seldom identical, it is beneficial to generalize this setting even further, by allowing the unknown parameters θ to be only probabilistically characterized. More precisely, they are probabilistically dependent on another set of hyperparameters ζ = (ζ_1, . . . , ζ_k), such that the parameters θ are distributed according to a known conditional probability distribution P(θ|ζ). This generalized task of estimating the hyperparameters ζ thus allows the treatment of systems with inherent stochastic noise, when the influence of the noise is understood (given by P(θ|ζ)). Such very general scenarios are addressed in (Granade et al., 2012), relying on the classical learning techniques of Bayesian experimental design (BED) (Loredo, 2004), combined with Monte Carlo methods. The details of this method are beyond the scope of this review, but, roughly speaking, BED assumes a Bayesian perspective on experiments of the type described above. The estimation methods for the general problem (ignoring the hyperparameters and noise for simplicity, although the same techniques apply) realize a conditional probability distribution P(d|θ; C), where d corresponds to experimental data, i.e. measurement outcomes collected in the experiment. Assuming some prior distribution P(θ) over the hidden parameters, the posterior distribution, given experimental outcomes, is given via Bayes' theorem by

P(θ|d; C) = P(d|θ; C) P(θ) / P(d|C).   (22)

The evaluation of the above is already non-trivial, predominantly as the normalization factor P(d|C) includes an integration over the parameter space. Further, of particular interest are scenarios where an experiment is iterated many times. In this case, analogously to the adaptive setting for metrology discussed above, it is beneficial to tune the control parameters C depending on the outcomes.
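The update of Eq. 22 can be sketched on a discretised parameter grid, a crude stand-in for the sequential Monte Carlo integration used in robust Hamiltonian estimation; the phase-like likelihood in the usage example is our own toy model:

```python
import math

def bayes_update(prior, likelihood, d):
    """One step of the Bayes update of Eq. (22) on a parameter grid.

    prior: dict theta -> P(theta); likelihood(d, theta) -> P(d | theta).
    Returns the normalised posterior dict theta -> P(theta | d).
    """
    post = {th: p * likelihood(d, th) for th, p in prior.items()}
    z = sum(post.values())  # the normalisation factor P(d)
    return {th: v / z for th, v in post.items()}
```

For instance, with the toy likelihood P(d = 0 | θ) = cos²(θ/2), feeding in repeated outcomes d = 0 concentrates the posterior near θ = 0, mimicking how sequential data sharpens the estimate of a phase.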
BED (Loredo, 2004) tackles such adaptive settings by selecting the subsequent control parameters C so as to maximize a utility function 54 at each update step. The Bayes updates consist of the computing of P(θ|d_1, . . . , d_k) ∝ P(d_k|θ) P(θ|d_1, . . . , d_{k-1}) at each step. The evaluation of the normalization factor P(d|C) is, however, also non-trivial, as it includes an integration over the parameter space. In (Granade et al., 2012) this integration is tackled via numerical integration techniques, namely sequential Monte Carlo, yielding a novel technique for robust Hamiltonian estimation. The robust Hamiltonian estimation method was subsequently expanded to use access to trusted quantum simulators, which forms a more powerful and efficient estimation scheme (Wiebe et al., 2014b) 55, which was also shown to be robust to moderate noise and imperfections in the trusted simulators (Wiebe et al., 2014c). A restricted version of the method of estimation with simulators was experimentally realized in (Wang et al., 2017). More recently, connected to the methods of robust Hamiltonian estimation, Bayesian and sequential Monte Carlo based estimation have further been combined with particle swarm optimization techniques (Stenberg et al., 2016). 53 The non-convexity stems from the fact that the effective input state at each stage depends on the previous measurements performed. As the entire interferometer set-up can be viewed as a one-subsystem measurement, the conditional states also depend on the unknown parameters, and these are used in the subsequent stages of the protocol (Hentschel and Sanders, 2010). 54 The utility function is an object stemming from decision theory and, in the case of BED, it measures how well the experiment improves our inferences. It is typically defined by the prior-posterior gain of information as measured by the Shannon entropy, although there are other possibilities.
There the goal was to achieve reliable coupling-strength and frequency estimation in simple decohering systems, corresponding to realistic physical models. More specifically, the studied problem is the estimation of the field-atom coupling terms, and the mode frequency term, in the Jaynes-Cummings model. The controlled parameter is the local qubit field strength, and measurements are done via swap spectroscopy. Aside from using ML to perform partial process tomography of controlled quantum systems, ML can also help in the genuine problems of quantum control, specifically the design of target quantum gates. This forms the subsequent topic. B. Design of target evolutions Executive summary: One of the main tasks in quantum information is the design of target quantum evolutions, including quantum gate design. This task can be tackled by quantum control, which studies controlled physical systems where certain parameters can be adjusted during the system evolution, or by using extended systems and unmodulated dynamics. Here, the underlying problem is an optimization problem, that is, the problem of finding the optimal control functions, or extended system parameters, of a system which is otherwise fully specified. Under realistic constraints these optimization tasks are often non-convex, thus hard for conventional optimizers, yet amenable to advanced ML technologies. Target evolution design problems can also be tackled by using feedback from the actual experimental system, leading to the use of on-line optimization methods and RL. From a QIP perspective, one of the most important tasks is the design of the elementary quantum gates needed for quantum computation. The paradigmatic approach to this is via quantum control, which aims to identify how the control fields of physical systems need to be adapted in time to achieve desired evolutions. The design of target evolutions can also be achieved in other settings, e.g. by using larger systems and unmodulated dynamics.
In both cases, ML optimization techniques can be used to design optimal strategies off-line. However, target evolutions can also be achieved in run-time, by interacting with a tunable physical system, and without the need for a complete description of the system. We first consider off-line settings, and briefly comment on the latter on-line settings thereafter. Off-line design The paradigmatic setting in quantum control considers a Hamiltonian with a controllable (c) and a drift part (dr), e.g. H(C(t)) = H dr + C(t)H c . The controllable part is modulated via a (real-valued) control field C(t). The resulting (time-ordered) exponential U = U [C(t)] ∝ exp −i T 0 dtH(C(t)) , over some finite time T , is a function of the chosen field function C(t). The typical goal is to specify the control field C(t) which maximizes the transition probability from some initial state |0 to a final state |φ , thus to find argmax C | φ| U [C(t)] |0 | 56 . Generically, the mappings C(t) → U [C(t)] are highly involved, but nonetheless it was shown empirically that greedy optimization approaches provide optimal solutions (which is the reason why greedy approaches dominate in practice). This empirical observation was later elucidated theoretically (Rabitz et al., 2004), suggesting that in generic systems local minima do not exist, which leads to easy optimization (see also (Russell and Rabitz, 2017) for a more up-to-date account). This is good news for experiments, but it also suggests that quantum control has no need for advanced ML techniques. However, as is often the case with claims of such generality, the underlying assumptions are subtle and can often be broken. In particular, greedy algorithms for optimizing the control problem as above can fail, even in the low-dimensional case, if we simply place rather reasonable constraints on the control function and parameters.
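A minimal sketch of such a constrained control problem, assuming a single qubit with drift σ z, control σ x, and a bounded step-wise constant field optimized by a global derivative-free method; the Hamiltonians, bounds, and optimizer settings are illustrative, not those of the cited studies:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import differential_evolution

# Illustrative qubit: drift sigma_z, control sigma_x
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

K, T = 6, 2.0                 # step-wise constant field: K segments, total time T
dt = T / K
psi0 = np.array([1, 0], dtype=complex)    # initial state |0>
target = np.array([0, 1], dtype=complex)  # target state |1>

def infidelity(c):
    """1 - |<target| U[c] |0>|^2 for a piecewise-constant control field c."""
    psi = psi0
    for ck in c:
        psi = expm(-1j * dt * (sz + ck * sx)) @ psi
    return 1 - abs(target.conj() @ psi) ** 2

# Bounded amplitudes |c_k| <= 2: the kind of "reasonable constraint" under
# which greedy searches can fail; a global, derivative-free optimizer
# (differential evolution) handles the bounded search space directly.
res = differential_evolution(infidelity, bounds=[(-2, 2)] * K,
                             popsize=10, maxiter=100, seed=1)
print(res.fun)  # small infidelity
```

The objective is evaluated by simply composing the K segment propagators, which is all that step-wise constant control requires.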
Already for 3-level and 2-qubit systems with constraints on the allowed evolution time t, and on the precision of the linearization of the time-dependent control parameters 57 , it is possible to construct examples where greedy approaches fail, yet global (derivative-free) approaches, in particular differential evolution, succeed (Zahedinejad et al., 2014). Another example of hard off-line control concerns the design of high-fidelity single-shot three-qubit gates 58 , which is in (Zahedinejad et al., 2015) addressed using a specialized novel optimization algorithm the authors called subspace-selective self-adaptive differential evolution (SuSSADE). An interesting alternative approach to gate design is to utilize larger systems. Specifically designed larger systems can naturally implement desired evolutions on a subsystem, without the need for time-dependent control (cf. QC with always-on interaction (Benjamin and Bose, 2003)). In other words, local gates are realized despite the fact that the global dynamics is unmodulated. The non-trivial task of constructing such global dynamics, for the Toffoli gate, is in (Banchi et al., 2016) tackled by a method which relies on stochastic gradient descent, and draws from supervised learning techniques. On-line design Complementary to off-line methods, here we assume access to an actual quantum experiment, and the identification of optimal strategies relies on on-line feedback. In these cases, the quantum experiment need not be fully specified beforehand. 56 An example of such additional fields would be controlled laser fields in ion trap experiments, where the field function C specifies how the laser field strengths are modulated over time. 57 It is assumed that the field function C(t), describing parameter values as functions of time, is step-wise constant, split into K segments. The larger the value of K, the better the approximation of a smooth function, which would arguably be better suited to greedy approaches. 58 This includes the Toffoli (and Fredkin) gate, which is of particular interest as it forms a universal gate set together with the simple single-qubit Hadamard transform (Shi, 2002) (if ancilla qubits are used). Further, the required methodologies lean towards on-line planning and RL, rather than optimization. In cases where optimization is required, the parameters of the optimization are different due to experimental constraints; see (Shir et al., 2012) for an extensive treatment of the topic. On-line methods which use feedback from experiments to "steer" systems towards desired evolutions were connected to ML in early works (Bang et al., 2008; Gammelmark and Mølmer, 2009). These exploratory works deal with generic control problems via experimental feedback, and have, especially at the time, remained mostly unnoticed by the community. In more recent times, feedback-based learning and optimization has received more attention. For instance, in (Chen et al., 2014) the authors explored the applicability of a modified Q-learning algorithm for RL (see section II.C) on canonical control problems. Further, the potential of RL methods has been discussed in the context of optimal parameter estimation, but also in typical optimal control scenarios, in (Palittapongarnpim et al., 2016). In the latter work, the authors also provide a concise yet extensive overview of related topics, and outline a perspective which unifies various aspects of ML and RL in an approach to resolve hard quantum measurement and control problems. In (Clausen and Briegel, 2016), RL based on PS updates was analyzed in the context of general control-and-feedback problems. Finally, ideas of unified computational platforms for quantum control, albeit without explicit emphasis on ML techniques, had previously been provided in (Machnes et al., 2011).
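To make the flavour of such RL approaches concrete, here is a tabular Q-learning sketch on a hypothetical toy chain environment standing in for a discretized control problem; the environment, rewards, and parameters are our own illustrative choices, not those of the cited works:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy environment: states 0..4 on a chain, actions 0/1 move
# left/right, and a reward is given on reaching the terminal state 4.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

def step(s, a):
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, s2 == n_states - 1

for _ in range(1000):                        # episodes
    s = int(rng.integers(n_states - 1))      # random non-terminal start
    for _ in range(100):                     # cap episode length
        a = int(rng.integers(n_actions))     # random exploration: Q-learning is off-policy
        s2, r, done = step(s, a)
        # temporal-difference update with bootstrapped target
        target = r + (0.0 if done else gamma * np.max(Q[s2]))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
        if done:
            break

print(np.argmax(Q[:-1], axis=1))  # learned greedy policy: move right in every state
```

The single update line is the entire learning rule; modified Q-learning schemes such as the one explored in (Chen et al., 2014) alter details of this update and of the exploration, not the overall structure.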
In the next section, we further coarse-grain our perspective, and consider scenarios where ML techniques control various gates and more complex processes, and even help us learn how to do interesting experiments. C. Controlling quantum experiments, and machine-assisted research Executive summary: ML and RL techniques can help us control complex quantum systems, devices, and even quantum laboratories. Furthermore, almost as a by-product, they may also help us to learn more about the physical systems and processes studied in an experiment. Examples include adaptive control systems (agents) which learn how to control quantum devices, e.g. how to preserve the memory of a quantum computer, combat noise processes, generate entangled quantum states, and target evolutions of interest. In the process of learning such optimal behaviours, even simple artificial agents also learn, in an implicit, embodied sense, about the underlying physics, which can be used by us to obtain novel insights. In other words, artificial learning agents can genuinely help us do research. The prospects of utilizing ML and AI in quantum experiments have also been investigated for "higher-level" experimental design problems. Here one considers automated machines that control complex processes which e.g. specify the execution of longer sequences of simple gates, or the execution of quantum computations. Moreover, it has been suggested that learning machines can be used for, and integrated into, the very design of quantum experiments, thereby helping us in conducting genuine research. We first present two results where ML and RL methods have been utilized to control more complex processes (e.g. to generate sequences of quantum gates to preserve memory), and consider the perspectives of machines genuinely helping in research thereafter.
Controlling complex processes The simplest example of involved ML machinery used to generate control of slightly more complex systems arose in the context of dynamical decoupling for quantum memories. In this scenario, a quantum memory is modelled as a system coupled to a bath (with local Hamiltonians H S for the system and H B for the bath), and decoherence is realized by a coupling term H SB ; local unitary errors are captured by H S . The evolution under the total Hamiltonian H noise = H S + H B + H SB would destroy the contents of the memory, but this can be mitigated by adding a controllable local term H C acting on the system alone 59 . Certain optimal choices of the control Hamiltonian H C are known. For instance, we can consider the scenario where H C is modulated such that it implements instantaneous 60 Pauli-X and Pauli-Y unitary operations, sequentially, at intervals ∆t. As this interval, which is also the duration of the decoherence-causing free evolution, approaches zero (∆t → 0), this process is known to ensure perfect memory. However, the moment the setting is made more realistic, allowing finite ∆t times, the space of optimal sequences becomes complicated. In particular, optimal sequences start depending on ∆t, the form of the noise Hamiltonian, and the total evolution time. To identify optimal sequences, in (August and Ni, 2017), the authors employ recurrent NNs, which are trained as a generative model -meaning they are trained to generate sequences which minimize the final noise. The sequences of pulses (Pauli gates) which the networks generated were shown to outperform well-known sequences. In a substantially different setting, where interaction necessarily arises, the authors studied how AI/ML techniques can be used to make quantum protocols themselves adaptive.
Specifically, the authors applied RL methods based on PS (Briegel and De las Cuevas, 2012) (see section VII.A) to the task of protecting quantum computation from local stray fields. In MBQC (Raussendorf and Briegel, 2001; Briegel et al., 2009), the computation is driven by performing adaptive single-qubit projective measurements on a large entangled resource state, such as the cluster state (Raussendorf and Briegel, 2001). In a scenario where the resource state is exposed to a stray field, each qubit undergoes a local rotation. To mitigate this, the authors introduce a learning agent which "plays" with a local probe qubit, initialized in, say, the +1 eigenstate of σ x , denoted |+ , learning how to compensate for the unknown field. In essence, given a measurement, the agent chooses a different measurement, obtaining a reward whenever a +1 outcome is observed. The agent is thus trained to compensate for the unknown field, and serves as an "interpreter" between the desired measurements and the measurements which should be performed in the given setting (i.e. in the given field, with the given frequency of measurements (∆t)), see Fig. 11. The problem of mitigating such fixed stray fields could naturally be solved with non-adaptive methods where we use knowledge about the system to solve our problem, e.g. by measuring the field and adapting accordingly, or by using fault-tolerant constructions. From a learning perspective, such direct methods have a few shortcomings which may be worth presenting for didactic purposes. Fault-tolerant methods are clearly wasteful, as they fail to utilize any knowledge about the noise processes. In contrast, field estimation methods learn too much, and assume a model of the world. To clarify the latter: to compensate for the measured field, we need to use quantum mechanics, specifically the Born rule.
In contrast, the RL approach is model-free: the Born rule plays no part, and "correct behavior" is learned and established exclusively on the basis of experience. This is conceptually different, but also operatively critical, as model-free approaches allow for more autonomy and flexibility (i.e. the same machinery can be used in more settings without intervention) 61 . Regarding learning too much, one of the basic principles of statistical learning posits that "when solving a problem of interest, one should not solve a more general problem as an intermediate step" (Vapnik, 1995), which is intuitive. The problem of the presented setting is "how to adapt the measurement settings," and not "characterize the stray fields". While in the present context the information-theoretic content of the two questions may be the same, it should be easy to imagine that if more complex fields are considered, full process characterization contains a lot more information than is needed to optimally adapt the local measurements. These approaches can further be generalized to utilize information from stabilizer measurements (Orsucci et al., 2016), or similarly outcomes of syndrome measurements when codes are utilized (Combes et al., 2014), (instead of probe states) to similar ends. Addressing somewhat related problems, but using supervised learning methods, the authors in (Mavadia et al., 2017) have shown how to compensate for qubit decoherence (stochastic evolution) also in experiments. Learning how to experiment One of the first examples of applications of RL in QIP appears in the context of experimental photonics, where one of the current challenges lies in the generation of highly entangled, high-dimensional, multi-party states. Such states are generated on optical tables, the configuration of which, when it comes to generating complex quantum states, can be counter-intuitive and unsystematic.
The search for interesting configurations can be mapped to an RL problem, where a learning agent is rewarded whenever it generates an interesting state (in a simulation). In a precursor work (Krenn et al., 2016), the authors used a feedback-assisted search algorithm to identify previously unknown configurations which generate novel highly entangled states. This demonstrated that the design of novel quantum experiments can also be automatized, which can significantly aid in research. This idea, given in the context of optical tables, has subsequently been combined with earlier proposals to employ AI agents in quantum information protocols and as "lab robots" in future quantum laboratories (Briegel, 2013). This led to the application of more advanced RL techniques, based on the PS framework, to the tasks of understanding the Hilbert space accessible with optical tables, and the autonomous machine-discovery of useful optical gadgets (Melnikov et al., 2017). Related to the topic of learning new insights from experimenting machines, in (Bukov et al., 2017) the authors consider the problem of preparing target states by means of chosen pulses implementing (a restricted set of) rotations. This is a standard control task, and the authors show that RL achieves respectable and sometimes near-optimal results. However, for our purposes, the most relevant aspects of this work pertain to the fact that the authors also illustrate how ML/RL techniques can be used to obtain new insights in quantum experiments and non-equilibrium physics, by circumventing human intuition, which can be flawed. Interestingly, the authors also demonstrate the reverse, i.e. how physics insights can help elucidate learning problems 62 . D. Machine learning in condensed-matter and many-body physics Executive summary: One of the quintessential problems of many-body physics is the identification of phases of matter.
A popular line of research at the overlap of ML and this branch of physics demonstrates that supervised and unsupervised systems can be trained to classify different phases. More interestingly, unsupervised learning can be used to detect phases, and even discover order parameters -possibly genuinely leading to novel physical insights. Another important overlap considers the representational power of (generalized) neural networks for characterizing interesting families of quantum systems. Both suggest a deeper link between certain learning models, on the one hand, and physical systems, on the other, the scope of which is currently an important research topic. ML techniques have, over the course of the last 20 years, become an indispensable toolset of many natural sciences which deal with highly complex systems. These include biology (specifically genetics, genomics, proteomics, and the general field of computational biology) (Libbrecht and Noble, 2015), medicine (e.g. in epidemiology, disease development, etc.) (Cleophas and Zwinderman, 2015), chemistry (Cartwright, 2007), and high energy and particle physics (Castelvecchi, 2015). Unsurprisingly, they have also permeated various aspects of condensed matter and many-body physics. Early examples were proposed in the context of quantum chemistry and density functional theory (Curtarolo et al., 2003; Snyder et al., 2012; Li et al., 2015a), or for the approximation of the Green's function of the single-site Anderson impurity model (Arsenault et al., 2014). The interest in connections between NNs and many-body and condensed matter physics has undergone immense growth since. Some of the results which we cover next deviate from the primary topic of this review, namely the overlaps of QIP and ML. However, since QIP, condensed matter, and many-body physics share significant overlaps, we feel it is important to at least briefly flesh out the basic ideas.
One of the basic lines of research in this area deals with the learning of phases of matter, and the detection of phase transitions in physical systems. A canonical example is the discrimination of samples of configurations stemming from different phases of matter, e.g. Ising-model configurations of thermal states below or above the critical temperature. This problem has been tackled using principal component analysis and nearest-neighbour unsupervised learning techniques (Wang, 2016) (see also (Hu et al., 2017)). Such methods also have the potential to, beyond just detecting phases, actually identify order parameters (Wang, 2016) -in the above case, the magnetization. More complicated discrimination problems, e.g. discriminating Coulomb phases, have been resolved using basic feed-forward networks, and convolutional NNs were trained to detect topological phases, but also phases in fermionic systems on cubic lattices (Ch'ng et al., 2016). Neural networks have also been combined with quantum Monte Carlo methods (Broecker et al., 2016), and with unsupervised methods (van Nieuwenburg et al., 2017) (applied also in (Wang, 2016)), in both cases to improve classification performance in various systems. It is notable that all these methods prove quite successful in "learning" phases without any information about the system Hamiltonian. While the focus in this field has mostly been on neural network architectures, other supervised methods, specifically kernel methods (e.g. SVMs), have been used for the same purpose (Ponte and Melko, 2017). Kernel methods may in some cases be advantageous as they offer higher interpretability: it is often easier to understand the reason behind the optimal model for kernel methods than for NNs, which also means that learning about the underlying physics may be easier with kernel methods. Note that this will most likely be challenged by deep NN approaches in the years to come.
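A toy version of the PCA-based pipeline above can be sketched in a few lines; here synthetic ordered/disordered spin configurations stand in for Monte Carlo samples (an assumption made for brevity), and the leading principal component turns out to be essentially uniform over the spins, so projecting onto it recovers the magnetization:

```python
import numpy as np

rng = np.random.default_rng(0)
n_spins, n_samples = 100, 400

# Synthetic stand-ins for sampled Ising configurations (no Monte Carlo here):
# ordered samples are mostly aligned (with a random global sign), disordered random
ordered = np.sign(rng.normal(0.8, 1.0, (n_samples, n_spins)))
ordered *= rng.choice([-1, 1], (n_samples, 1))
disordered = rng.choice([-1.0, 1.0], (n_samples, n_spins))
X = np.vstack([ordered, disordered])

# PCA: leading eigenvector of the covariance matrix of mean-centered data
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(Xc.T @ Xc / len(X))
pc1 = evecs[:, -1]   # leading principal component

# pc1 is (up to sign) nearly uniform over the spins, so the projection onto it
# is proportional to the magnetization m = (1/N) sum_i s_i, the order parameter
proj = X @ pc1
m = X.mean(axis=1)
corr = np.corrcoef(proj, m)[0, 1]
print(abs(corr))  # close to 1
```

Nothing in the procedure refers to the Hamiltonian; the order parameter emerges from the sample statistics alone, which is the point made in (Wang, 2016).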
A partial explanation of the success of neural approaches for classifying phases of matter may lie in their form. Specifically, they may have the capacity to encode important properties of physical systems, both in the classical and in the quantum case. This motivates the second line of research we mention in this context. BMs, even in their restricted variant, are known to have the capacity to encode complicated distributions. In the same sense, restricted BMs, extended to accept complex weights (i.e. the weights w ij in Eqs. (2) and (3)), encode quantum states, where the hidden layer captures correlations, both classical and quantum (entanglement). It was shown that this approach describes equilibrium and dynamical properties of many prototypical systems accurately: that is, restricted BMs form a useful ansatz for interesting quantum states (called neural-network quantum states (NQS)), where the number of neurons in the hidden layer controls the size of the representable subset of the Hilbert space. This is analogous to how, for instance, the bond dimension controls the scope of the matrix product state ansatz (Verstraete et al., 2008). This property can also be exploited in order to achieve efficient quantum state tomography 63 (Torlai et al., 2017). In subsequent works, the authors have also analyzed the structure of entanglement of NQS states (Deng et al., 2017), and have provided analytic proofs of the representational power of deep restricted BMs, proving that they can e.g. represent ground states of any k-local Hamiltonians with polynomial-size gaps (Gao and Duan, 2017). It is worthwhile to note that the representational power of standard variational representations (e.g. that of the variational renormalization group) had previously been contrasted with that of deep NNs (Mehta and Schwab, 2014), with the goal of elucidating the success of deep networks.
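The complex-weight restricted BM ansatz can be written down directly: summing out the hidden layer analytically gives the unnormalized amplitude ψ(s) = exp(Σ i a i s i ) Π j 2 cosh(b j + Σ i W ij s i ). A sketch with random, unoptimized parameters (in actual use the parameters are trained variationally):

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 4

# Complex RBM parameters; random here for illustration only
a = rng.normal(size=n_visible) + 1j * rng.normal(size=n_visible)
b = rng.normal(size=n_hidden) + 1j * rng.normal(size=n_hidden)
W = 0.1 * (rng.normal(size=(n_hidden, n_visible))
           + 1j * rng.normal(size=(n_hidden, n_visible)))

def amplitude(s):
    """Unnormalized NQS amplitude: hidden spins summed out in closed form."""
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + W @ s))

# the ansatz assigns an amplitude to each of the 2^n_visible spin configurations
configs = np.array([[1 if (k >> i) & 1 else -1 for i in range(n_visible)]
                    for k in range(2 ** n_visible)])
psi = np.array([amplitude(s) for s in configs])
psi /= np.linalg.norm(psi)        # normalize over the full Hilbert space
probs = np.abs(psi) ** 2
print(probs.sum())  # 1.0 up to floating-point error
```

The hidden-layer width n_hidden is the knob that controls the expressiveness of the ansatz, playing the role the bond dimension plays for matrix product states.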
Related to these representational questions, the Tensor Network (Östlund and Rommer, 1995; Verstraete and Cirac, 2004) formalism has been used for the efficient description of deep convolutional arithmetic circuits, establishing a formal connection between quantum many-body states and deep learning (Levine et al., 2017). Very recently, the intersection between ML and many-body quantum physics has also inspired research into ML-motivated entanglement witnesses and classifiers (Ma and Yung, 2017), and into furthering the connections between ML and many-body physics, specifically entanglement theory. These recent results have positioned NNs as one of the most exciting new techniques to be applied in the context of both condensed-matter and many-body physics. Additionally, they also show the potential of the converse direction of influence -the application of the mathematical formalism of many-body physics to deepen our understanding of complex learning models. V. QUANTUM GENERALIZATIONS OF MACHINE LEARNING CONCEPTS The onset of quantum theory necessitated a change in how we describe physical systems, but also a change in our understanding of what information is 64 . Quantum information is a more general concept, and QIP exploits genuine quantum features for more efficient processing (using quantum computers) and more efficient communication. Quintessential quantum properties, such as the fact that even pure states cannot be perfectly copied (Wootters and Zurek, 1982), are often argued to be at the heart of many quantum applications, such as cryptography. Similarly, quintessential information processing operations are more general in the quantum world: closed quantum systems can undergo arbitrary unitary evolutions, whereas the corresponding classical closed-system evolutions correspond to the (finite) group of permutations 65 . The majority of ML literature deals with learning from, and about, data -that is, classical information.
This section examines the question of what ML looks like when the data (and perhaps its processing) is fundamentally quantum. We will first explore quantum generalizations of supervised learning, where the "data-points" are now genuine quantum states. This generates a plethora of scenarios which are indistinguishable in the classical case (e.g. having one or two copies of the same example is no longer the same!). Next, we will consider another quantum generalization of learning, where quantum states are used to represent the generalizations of unknown concepts in CLT -thus we talk about the learning of quantum states. Following this, we will present some results on quantum generalizations of POMDPs, which could lead to quantum-generalized reinforcement learning (although this actually just generalizes the mathematical structure). A. Quantum generalizations: machine learning of quantum data Executive summary: A significant fraction of the field of ML deals with data analysis, classification, clustering, etc. QIP generalizes standard notions of data to include quantum states. The processing of quantum information comes with restrictions (e.g. no-cloning or no-deleting), but also with new processing options. This section addresses the question of how conventional ML concepts can be extended to the quantum domain, mostly focusing on aspects of supervised learning and the learnability of quantum systems, but also on concepts underlying RL. One of the basic problems of ML is that of supervised learning, where a training set D = {(x i , y i )} i is used to infer a labeling rule mapping data points to labels, x i → y i (see section I.B for more details). More generally, supervised learning deals with the classification of classical data. In the tradition of QIP, data can also be quantum -that is, all quantum states carry, or rather represent, (quantum) information. What can be done with datasets of the type {(ρ i , y i )} i , where ρ i is a quantum state?
Colloquially it is often said that one of the critical distinctions between classical and quantum data is that quantum data cannot be copied. In other words, having one instance of an example, by notation abuse denoted (ρ i ⊗ y i ), is not generally as useful as having two copies (ρ i ⊗ y i ) ⊗2 . In contrast, in the classical case of classification with functional (deterministic) labeling rules, the two are equally useful. The closest classical analog of dealing with quantum data is the case where labelings are not deterministic, or equivalently, where the conditional distribution P (label|datapoint) is not extremal (Dirac). This is the case of classification (or learning) of random variables, or probabilistic concepts, where the task is to produce the best-guess label, specifying the random process which "most likely" produced the datapoint 66 . In this case, having access to two examples in the training phase which are independently sampled from the same distribution is not the same as having two copies of one and the same individual sample -the latter are perfectly correlated and carry no new information 67 . To obtain full information about a distribution, or random variable, one in principle needs infinitely many samples. Similarly, in the quantum case, having infinitely many copies of the same quantum state ρ is operatively equivalent to having a classical description of the given state. Despite these similarities, quantum information is still different from mere stochastic data. The precursors of ML-type classification tasks can be identified in the theory of quantum state discrimination, which we briefly comment on first. Next, we review some early works dealing with "quantum pattern matching", which span various generalizations of supervised settings, and the first works which explicitly propose the study of quantum-generalized machine learning. Next, we discuss more general results, which characterize inductive learning in quantum settings.
Finally, we present a CLT perspective on learning with quantum data, which addresses the learnability of quantum states. 1. State discrimination, state classification, and machine learning of quantum data a. State discrimination The entry point to this topic can again be traced to the seminal works of Helstrom and Holevo (Helstrom, 1969; Holevo, 1982), as the problems of state discrimination can be rephrased as variants of supervised learning problems. In typical state discrimination settings, the task is to identify a given quantum state (given as an instance of a quantum system prepared in that state), under the promise that it belongs to a (typically finite) set {ρ i } i , where the set is fully classically specified. Recall that state estimation, in contrast, typically assumes continuously parametrized families, and the task is the estimation of the parameter. In this sense, discrimination is a discretized estimation problem 68 , and the problems of identifying optimal measurements (under various figures of merit), and success bounds, have been considered extensively and continuously throughout the history of QIP (Helstrom, 1969; Croke et al., 2008; Slussarenko et al., 2017). Remark: Traditional quantum state discrimination can be rephrased as a degenerate supervised learning setting for quantum states. Here, the space of "data-points" is restricted to a finite (or parametrized) family {ρ i } i , and the training set contains an effectively infinite number of examples D = {(ρ i , i) ⊗∞ }; naturally, this notation is just a short-hand for having the complete classical description of the quantum states 69 . In what follows we will sometimes write ρ ⊗∞ to denote a quantum system containing the classical description of the density matrix ρ. 66 Note that in this setting we do not have the descriptions of the stochastic processes given a priori -they are to be inferred from the training examples.
67 In this sense, the no-cloning theorem also applies to classical information: an unknown random variable cannot be cloned. In QIP language this simply means that the no-cloning theorem applies to diagonal density matrices, i.e. the map ρ → ρ ⊗ ρ is impossible even when ρ is promised to be diagonal. 68 Intuitively, estimation is to discrimination what regression is to classification in the ML world. 69 From an operative, and information content, perspective, having infinitely many copies is equivalent to having a full classical description: infinitely many copies are sufficient and necessary for perfect tomography -yielding the exact classical description -whereas having an exact classical description is sufficient and necessary for generating an unbounded number of copies. b. Quantum template matching -classical templates A variant of discrimination, or class assignment task, which is one of the first instances of works establishing explicit connections between ML and discrimination-type problems, is "template matching" (Sasaki et al., 2001). In this pioneering work, the authors consider discrimination problems where the input states ψ may not correspond to the (known) template states {ρ i } i , and the correct matching label is determined by the largest Uhlmann fidelity. More precisely, the task is defined as follows: given a classically specified family of template states {ρ i } i , and given M copies of a quantum input ψ ⊗M , output the label i corr defined by i corr = argmax i (Tr[( √ ψ ρ i √ ψ) 1/2 ]) 2 . In this original work, the authors focused on two-class cases with pure-state inputs, and identify fully quantum and semi-classical strategies for this problem. "Fully quantum strategies" identify the optimal POVM. Semi-classical strategies restrict measurement strategies to separable measurements, or perform state estimation on the input, a type of "quantum feature extraction". c. Quantum template matching -quantum templates.
In a generalization of the work in (Sasaki et al., 2001), the authors in (Sasaki and Carlini, 2002) consider the case where, instead of having access to the classical descriptions of the template states {ρ i } i , we are given access to a certain number K of copies. In other words, we are given access to a quantum system in the state ⊗ i ρ i ⊗K . Setting K → ∞ recovers the case with classical templates. This generalized setting introduces many complications which do not exist in the "more classical" case with classical templates. For instance, classifying measurements now must "use up" copies of the template states, as they too cannot be cloned. The authors identify various flavors of semi-classical strategies for this problem. For instance, if the template states are first estimated, we are facing the scenario of classical templates (albeit with error). The classical template setting itself allows semiclassical strategies, where all systems are first estimated, and it also allows coherent strategies. The authors find optimal solutions for K = 1, and show that there exists a fully quantum procedure that is strictly superior to straightforward semiclassical extensions. Remark: Quantum template matching problems can be understood as quantum-generalized supervised learning, where the training set is of the form {(ρ i ⊗K , i)} i , data beyond the training set comes from the family ψ ⊗M (the number of copies is known), and the classes are defined via minimal distance, as measured by the Uhlmann fidelity. The case K → ∞ approaches the special case of classical templates. Restricting the states ψ to the set of template states (restricted template matching), and setting M = 1, recovers standard state discrimination. d.
Other known optimality results for (restricted) template matching For the restricted matching case, where the input is promised to be from the template set, the optimal solutions for the two-class setting, minimum error figure of merit, and uniform priors of inputs have been found in (Bergou and Hillery, 2005; Hayashi et al., 2005) for the qubit case. In (Hayashi et al., 2006) the authors found optimal solutions for the unambiguous discrimination case 70. An asymptotically optimal strategy for restricted matching with finite templates K < ∞, for arbitrary priors and mixed qubit states, was later found in (Guţȃ and Kot lowski, 2010). This work also provides a solid introduction to the topic, a review of quantum analogies for statistical learning, and emphasizes connections to ML methodologies and concepts.
70 In unambiguous discrimination, the device is allowed to output an ambiguous "I do not know" outcome, but is not allowed to err in the case it does output an outcome. The goal is to minimize the probability of the ambiguous outcome.
Later, in (Sentís et al., 2012) the authors introduced and compared all three strategies: classical estimate-and-discriminate, classical optimal, and quantum, for the restricted template matching case with finite templates. Recall, the adjective "classical" here denotes that the training states are fully measured out as the first step - the quantum set is converted to classical information, meaning that no quantum memory is further required, and that the learning can be truly inductive. A surprising result is that the intuitive estimate-and-discriminate strategy, which reduces supervised classification to optimal estimation coupled with a (standard) quantum state discrimination problem, is not optimal for learning. Another measurement provides not only better performance, but matches the optimal quantum strategy exactly (as opposed to asymptotically).
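For orientation on the unambiguous setting, the simplest quantitative statement is the textbook Ivanovic-Dieks-Peres bound for two equiprobable pure states; this is a minimal sketch of that special case, not the mixed-qubit results of the works cited above.

```python
import numpy as np

def idp_failure_probability(psi1, psi2):
    """Ivanovic-Dieks-Peres bound: the minimal probability of the
    inconclusive ("I do not know") outcome, when unambiguously
    discriminating two equiprobable pure states, equals |<psi1|psi2>|."""
    return float(abs(np.vdot(psi1, psi2)))
```

Orthogonal states can always be told apart (failure probability 0), while identical states can never be (failure probability 1).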
Interestingly, the results of (Guţȃ and Kot lowski, 2010) and (Sentís et al., 2012) make opposite claims for essentially the same setting: no separation vs. separation between coherent (fully quantum) and semi-classical strategies, respectively. This discrepancy is caused by differences in the chosen figures of merit and a different definition of asymptotic optimality, and serves as an effective reminder of the subtle nature of quantum learning. Optimal strategies have subsequently been explored in other settings as well, e.g. when the data-set comprises coherent states (Sentís et al., 2015), or in cases where an error margin is allowed in an otherwise unambiguous setting (Sentís et al., 2013). e. Quantum generalizations of (un)supervised learning The works of the previous paragraph consider particular families of generalizations of supervised learning problems. The first attempt to classify and characterize what ML could look like in a quantum world from a more general perspective was, however, explicitly made in (Aïmeur et al., 2006). There, the basic object introduced is the database of labeled quantum or classical objects, i.e. D_n^K = {(|ψ_i⟩^⊗K_i, y_i)}_{i=1}^n 71, which may come in copies. Such a database can then, in general, be processed to solve various types of tasks, using classical or quantum processing. The authors propose to characterize quantum learning scenarios in terms of classes, denoted L^context_goal. Here context denotes whether we are dealing with classical or quantum data, and whether the learning algorithm relies on quantum capabilities or not. The goal specifies the learning task or goal (perhaps in very broad terms). Examples include L^c_c, which corresponds to standard classical ML, and L^q_c, which could mean we use a quantum computer to analyze classical data.
The example of template matching with classical templates (K = ∞) (Sasaki et al., 2001) considered earlier in this section would be denoted L^c_q, and the generalization with finite template numbers K < ∞ would fit in L^⊗K_q. While the formalism above suggests a focus on supervised settings, the authors also suggest that datasets could be inputs for (unsupervised) clustering. The authors further study quantum algorithms for determining the closeness of quantum states 72, which could be the basic building block of quantum clustering algorithms, and also compute certain error bounds for special cases of classification (state discrimination) using the well-known results of Helstrom (Helstrom, 1969). Similar ideas were used in (Lu and Braunstein, 2014) to define a quantum decision tree algorithm for data classification in the quantum regime. The strong connection between the quantum-generalized learning theory sketched out in (Aïmeur et al., 2006) and the classical 73 theory of Helstrom (Helstrom, 1969) was more deeply explored in (Gambs, 2008). There, the author computed lower bounds on the sample complexity - in this case the minimal number of copies K - needed to solve a few types of classification problems. For this purpose the author introduced a few techniques which reduce ML-type classification problems to settings where the theory of (Helstrom, 1969) could be directly applied. These types of results contribute to establishing a deeper connection between problems of ML and techniques of QIP.
71 Such a dataset can be stored in, or instantiated by, a 2n-partite quantum system, prepared in the state ⊗_{i=1}^n |ψ_i⟩^⊗K_i |y_i⟩.
72 These are based on the SWAP-test (see section VI.C.2), in terms of the Uhlmann fidelity.
73 Here we mean classical in the sense of "being a classic", rather than pertaining to classical systems.
f.
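The Helstrom result invoked above gives the optimal success probability of a two-state discrimination measurement in closed form: 1/2 + ||p₀ρ₀ − p₁ρ₁||₁/2. A minimal NumPy sketch, in our own notation:

```python
import numpy as np

def helstrom_success(rho0, rho1, p0=0.5):
    """Optimal probability of correctly discriminating rho0 (prior p0)
    from rho1 (prior 1 - p0), via the Helstrom bound:
    1/2 + (1/2) * || p0*rho0 - (1-p0)*rho1 ||_1."""
    delta = p0 * rho0 - (1 - p0) * rho1
    # delta is Hermitian, so its trace norm is the sum of |eigenvalues|
    w = np.linalg.eigvalsh(delta)
    return 0.5 + 0.5 * float(np.sum(np.abs(w)))
```

Identical states give the guessing baseline 1/2, orthogonal states give 1, and |0⟩ vs |+⟩ with equal priors gives 1/2 + 1/(2√2) ≈ 0.854.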
Quantum inductive learning Recall that inductive, eager learning produces a best-guess classifier which can be applied to the entire domain of data-points, based on the training set. But already the results of (Sasaki and Carlini, 2002), discussed in the paragraph on template matching with quantum templates, point to problems with this concept in the quantum realm - the optimal classifier may require a copy of the quantum data-points to perform classification, which seemingly prohibits unlimited use. The perspectives of such quantum generalizations of supervised learning in its inductive form were recently addressed from a broad perspective in (Monràs et al., 2017). Recall that inductive learning algorithms, intuitively, use only the training set to specify a hypothesis (the estimation of the true labeling function). In contrast, in transductive learning, the learner is also given the data points the labels of which are unknown. These unlabeled points may correspond to the cross-validation test set, or the actual target data. Even though the labels are unknown, they carry additional information about the complete dataset which can be helpful in identifying the correct labeling rule 74. Another distinction is that transductive algorithms need only label the given points, whereas inductive algorithms need to specify a classifier, i.e., a labeling function, defined on the entire space of possible points. In (Monràs et al., 2017), the authors notice that the property of an algorithm being inductive corresponds to a no-signaling property 75, using which they can prove that "being inductive" (i.e. being "no-signaling") is equivalent to having an algorithm which outputs a classifier h based on the training set alone, which is then applied to every test instance. A third equivalent characterization of inductive learning is that the training and testing cleanly separate as phases.
While these observations are quite intuitive in the classical case, they are in fact problematic in the quantum world. Specifically, if the training examples are quantum objects, quantum no-cloning in general prohibits applying a hypothesis function (candidate labeling function) h arbitrarily many times. This is easy to see since each instance of h must depend on the quantum data in some non-trivial way, if we are dealing with a learning algorithm. Multiple copies of h would then require multiple copies of (at least parts of) the quantum data. A possible implication of this would be that, in the quantum realm, inductive learning cannot be cleanly separated into training and testing. Nonetheless, the authors show that the no-signaling criterion, for certain symmetric measures of performance, implies that a separation is asymptotically possible. Specifically, the authors show that for any quantum inductive no-signaling algorithm A there exists another, perhaps different, algorithm A′ which does separate into a training and a testing phase and which asymptotically attains the same performance (Monràs et al., 2017). Such a protocol A′ essentially utilizes a semi-classical strategy. In other words, for inductive settings, classical intuition survives, despite no-cloning theorems.
74 For instance, a transductive algorithm may use unsupervised clustering techniques to assign labels, as the whole set is given in advance.
75 The outcome of the entire learning and evaluation process can be viewed as a probability distribution P(y) = P(y_1 ... y_k | x_1 ... x_k; A), where A is the training set, x_1, ..., x_k are the points of the test set, and y_1 ... y_k the respective labels the algorithm assigns with probability P(y). No-signaling implies that the marginal distribution for the k-th test element, P(y_k), only depends on x_k and the training set, but not on the other test points {x_l}_{l≠k}.
Computational learning perspectives: quantum states as concepts The previous subsections addressed the topics of classification of quantum states, based on quantum database examples. The overall theory, however, relies on the assumption that there exists a labeling rule which generates such examples, and what is learned is the labeling rule. This rule is also known as a concept in CLT (e.g. PAC learning, see section II.B.1 for details). What would "the learning of quantum states" mean, from this perspective? What does it mean to "know a quantum state"? A natural criterion is that one "knows" a quantum state if one can predict the measurement outcome probabilities of any given measurement. A reasonable sufficient criterion is being able to predict the probabilities of outcomes of any two-outcome measurement on this state, as this already suffices for a full tomographic reconstruction. In (Aaronson, 2007), the author addressed the question of the learnability of quantum states in the sense above, where the role of a concept is played by a given quantum state, and "knowing" the concept then equates to the possibility of predicting the outcome probability of a given measurement and its outcome. One immediate distinction from conventional CLT, discussed in II.B.1, is that the concept range is no longer binary. However, as we clarified, classical CLT has generalizations with continuous ranges. In particular, so-called p-concepts have range in [0, 1] (Kearns and Schapire, 1994), and quantities which are analogs of the VC-dimension, and analogous theorems relating these to generalization performance, exist for the p-concept case as well (see (Aaronson, 2007)). Explicitly, the basic elements of such a generalized theory are: a domain X, a sample x ∈ X, and the p-concept f : X → [0, 1].
These abstract objects are mapped to central objects of quantum information theory (Aaronson, 2007) as follows: the domain is the set of two-outcome quantum measurements, and a sample is a POVM element Π 76 (in short: x ↔ Π); the p-concept to be learned is a quantum state ψ, and the evaluation of the concept/hypothesis on the sample corresponds to the probability Tr[Πψ] ∈ [0, 1] of observing the measurement outcome associated with Π when the state ψ is measured. To connect the data classification-based perspectives of supervised learning to the CLT perspective above, note that in this quantum-state CLT framework, the quantum concept - the quantum state - "classifies" quantum POVM elements (the effects) according to the probability of observing that effect. The training set elements for this model are of the form (Π, Tr(ρΠ)), with 0 ≤ Π ≤ 1. In the spirit of CLT, the concept class "quantum states" is said to be learnable under some distribution D over two-outcome generalized measurement elements (Π), if for every concept - quantum state ρ - there exists an algorithm with access to examples of the form (Π, Tr(ρΠ)), where Π is drawn according to D, which outputs a hypothesis h which (approximately) correctly predicts the label Tr(ρΠ′) with high probability, when Π′ is drawn from D. Note that the role of a hypothesis here can simply be played by a "best guess" classical description of the quantum state ρ. The key result of (Aaronson, 2007) is that quantum states are learnable with sample complexity scaling only linearly in the number of qubits 77, that is, logarithmically in the dimension of the density matrix. In operative terms, if Alice wishes to send an n-qubit quantum state to Bob who will perform on it a two-outcome measurement (and Alice does not know which), she can achieve near-ideal performance by sending O(n) classical bits 78, which has clear practical but also theoretical importance.
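To make the example format (Π, Tr(ρΠ)) concrete, here is a naive least-squares learner for a single qubit: it parameterizes the hypothesis as ρ = (I + r·σ)/2 and fits the Bloch vector r to the observed labels. This is our own toy illustration of the data format, not Aaronson's learning algorithm.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
          np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_y
          np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_z

def fit_qubit_state(examples):
    """Least-squares hypothesis from examples (Pi, Tr(rho Pi)).
    Since Tr(rho Pi) = Tr(Pi)/2 + sum_i r_i Tr(Pi sigma_i)/2,
    the Bloch vector r solves a linear system."""
    A, b = [], []
    for Pi, prob in examples:
        A.append([np.real(np.trace(Pi @ P)) / 2 for P in PAULIS])
        b.append(prob - np.real(np.trace(Pi)) / 2)
    r, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return (I2 + sum(ri * P for ri, P in zip(r, PAULIS))) / 2
```

Given the labels of the projectors onto |0⟩, |+⟩ and |+i⟩ for the state |0⟩⟨0|, the fit recovers |0⟩⟨0| exactly.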
In some sense, these results can also be thought of as a generalized variant of the Holevo bound theorems (Holevo, 1982), limiting how much information can be stored in, and retrieved from, quantum systems. This latter result has thus far been more influential in the context of tomography than quantum machine learning, despite being quite a fundamental result in quantum learning theory. However, for fully practical purposes, the results above come with a caveat. The learning of quantum states is efficient in sample complexity (e.g. the number of measurements one needs to perform); however, the computational complexity of the reconstruction of the hypothesis is, in fact, likely exponential in the qubit number. Very recently, the computational efficiency of the reconstruction algorithms for the learning of stabilizer states was also shown in (Rocchetto, 2017).
76 More precisely, Π is a positive-semidefinite operator such that 1 − Π is positive-semidefinite as well.
77 The dependencies on the inverse allowed error and inverse allowed failure probability are polynomial and polylogarithmic, respectively.
78 Here we assume Alice can locally generate her states at will. A classical strategy (using classical channels) is thus always possible, by having Alice send the outcomes of full state tomography (or, equivalently, the classical description of the state), but this requires using O(2^n) bits already for pure states.
B. (Quantum) learning and quantum processes Executive summary: The notion of quantum learning has been used in the literature to refer to the study of various aspects of "learning about" quantum systems. Beyond the learning of quantum states, one can also consider the learning of quantum evolutions. Here "knowing" is operatively defined as having the capacity to implement the given unitary at a later point - this is similar to how "knowing" in computational learning theory implies we can apply the concept function at a later point.
Finally, as learning can pertain to learning in interactive environments - RL - one can consider the quantum generalizations of such settings. One of the first results in this direction formulates a quantum generalization of POMDPs. Note that as POMDPs form the mathematical basis of RL, the quantum-generalized mathematical object - the quantum POMDP - may form a basis of quantum-generalized RL. a. Learning of quantum processes The concept of learning is quite diffuse, and "quantum learning" has been used in the literature quite often, and not every instance corresponds to generalizations of "classical learning" in a machine or statistical learning sense. Nonetheless, some such works further illustrate the distinctions between the approaches one can employ with access to classical (quantum) tools, while learning about classical or quantum objects. Learning unitaries For instance, "quantum learning of unitary operations" has been used to refer to the task of optimal storing and retrieval of unknown unitary operations, which is a two-stage process. In the storing phase, one is given access to a few uses of some unitary U. In the retrieval phase, one is asked to approximate the state U|ψ⟩, given one or a few instances of a (previously fully unknown) state |ψ⟩. As in the case of quantum template states (see section V.A.1), we can distinguish semi-classical prepare-and-measure strategies (where U is estimated and represented as classical information) and quantum strategies, where the unitaries are applied to some resource state, which is used together with the input state |ψ⟩ in the retrieval stage. There is no simple universal answer to the question of optimal strategies. In (Bisio et al., 2010), the authors have shown, under reasonable assumptions, the surprising result that optimal strategies are semi-classical. In contrast, in (Bisio et al., 2011) the same question was asked for generalized measurements, and the opposite was shown: optimal strategies require quantum memory.
See e.g. (Sedlák et al., 2017) for some recent results on probabilistic unitary storage and retrieval, which can be understood as genuinely quantum learning 79 of quantum operations. Learning measurements The problem of identifying which measurement apparatus one is facing has been studied in comparatively fewer works, see e.g. (Sedlák and Ziman, 2014) for a more recent example. Related to this, we encounter a more learning-theoretical perspective on the topic of learning measurements. In the comprehensive paper (Cheng et al., 2016) (which can serve as a review of parts of quantum ML in its own right), the authors explore the question of the learnability of quantum measurements. This can be thought of as the dual of the task of learning quantum states discussed previously in this section. Here, the examples are of the form (ρ, Tr(ρE)), and it is the measurement that is fixed. In this work, the authors compute a number of complexity measures, closely related to the VC dimension (see section II.B.1), for which sample complexity bounds are known. From such complexity bounds one can, for instance, rigorously answer various relevant operative questions, such as how many random quantum probe states we need to prepare, on average, to accurately estimate a quantum measurement. Complementing the standard estimation problems, here we do not compute the optimal strategy, but effectively gauge the information gain of a randomized strategy. These measures are computed for the family of hypotheses/concepts which can be obtained by either fixing the POVM element (thus learning the quantum measurement), or by fixing the state (which is the setting of (Aaronson, 2007)), and clearly illustrate the power of ML theory when applied in the QIP context. b. Foundations of quantum-generalized RL The majority of quantum generalizations of machine learning concepts fit neatly in the domain of supervised learning, with, however, a few notable exceptions.
In particular, in , the authors introduce a quantum generalization of partially observable Markov decision processes (POMDPs), discussed in section II.C. For the convenience of the reader we give a brief recap of these objects. A fully observable MDP is a formalization of task environments: the environment can be in any of a number of states S, which the agent can observe. An action a ∈ A of the agent triggers a transition of the state of the environment - the transition can be stochastic, and is specified by a Markov transition matrix P_a. 80 Additionally, beyond the dynamics, each MDP comes with a reward function R : S × A × S → ℝ, which rewards certain state-action-state transitions. In a POMDP, the agent does not see the actual state of the environment, but rather just observations o ∈ O, which are (stochastic) functions of the environmental state 81. Although the exact state of the environment is not directly accessible to the agent, given the full specification of the system, the agent can still assign a probability distribution over the state space given an interaction history. This is called a belief state, and it can be represented as a mixed state (mixing the "classical" actual environmental states), which is diagonal in the POMDP state basis. The quantum generalization promotes the environment belief state to any quantum state defined on the Hilbert space spanned by the orthonormal basis {|s⟩ | s ∈ S}. Actions correspond to quantum operations whose measurement outcomes play the role of observations, each occurring with the Born-rule probability of observing that outcome. Finally, rewards are defined via the expected values of action-specific positive operators R_a, so Tr[R_a ρ], given the state ρ. In , the authors have studied this model from the computational perspective of the hardness of identifying the best strategies for the agent, contrasting this setting with classical settings, and proving separations. In particular, the complexity of deciding policy existence for finite horizons 82 is the same for the quantum and classical cases 83.
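A minimal sketch of the belief-state dynamics just described, in our own toy formulation (not the cited model verbatim): an action is represented by a set of Kraus operators, each associated with one observation, and the reward is the expectation Tr[R_a ρ].

```python
import numpy as np

def qpomdp_step(rho, kraus_ops, reward_op):
    """One step of a toy quantum POMDP: for an action given by Kraus
    operators {K_o}, each observation o occurs with Born-rule probability
    p_o = Tr[K_o rho K_o^dag], updating the belief state to
    K_o rho K_o^dag / p_o.  Also returns the expected reward Tr[R_a rho]."""
    branches = []
    for K in kraus_ops:
        p = float(np.real(np.trace(K @ rho @ K.conj().T)))
        post = K @ rho @ K.conj().T / p if p > 0 else rho
        branches.append((p, post))
    return branches, float(np.real(np.trace(reward_op @ rho)))
```

For a maximally mixed qubit belief state and projective Kraus operators onto |0⟩ and |1⟩, each observation occurs with probability 1/2 and collapses the belief accordingly.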
However, a separation can be found with respect to the goal reachability problem, which asks whether there exists a policy (of any length) which, with probability 1, reaches some target state. This separation is maximal -this problem is decidable in the classical case, yet undecidable in the quantum case. While this particular separation may not have immediate consequences for quantum learning, it suggests that there may be other (dramatic) separations, with more immediate relevance. VI. QUANTUM ENHANCEMENTS FOR MACHINE LEARNING One of the most advertised aspects of quantum ML deals with the question of whether quantum effects can help us solve classical learning tasks more efficiently, ideally mirroring the successes of quantum computation. The very first attempts to apply quantum information techniques to ML problems were made even before the seminal works of Shor and Grover (Shor, 1997;Grover, 1996). Notable examples include the pioneering research into quantum neural networks and quantum perceptrons (Lewenstein, 1994;Kak, 1995), and also in the potential of quantum computational learning theory (Bshouty and Jackson, 1998). The topic of quantum neural networks (quantum NNs) has had sustained growth and development since these early days, exploring various types of questions regarding the interplay of quantum mechanics and neural networks. Most of the research in this area is not directly targeted at algorithmic improvements, hence will be only briefly mentioned here. A fraction of the research into quantum NNs, which was disproportionately more active in the early days, considered the speculative topics of the function of quantum effects in neural networks, both artificial and biological (Kak, 1995;Penrose, 1989). 
Parts of this research line have focused on concrete models, such as the effect of transverse fields in HNs (Nishimori and Nonomura, 1996), and decoherence in models of biological nets (Tegmark, 2000), which, it is argued, would destroy any potential quantum effect. A second topic which permeates the research in quantum NNs is concerned with the fundamental question of a meaningful quantization of standard feed-forward neural networks. The key question here is finding the best way to reconcile the linear nature of quantum theory with the necessity for non-linearities in the activation function of a neural network (see section II.A.1), and identifying suitable physical systems to implement such a scheme. Early ideas here included giving up on non-linearities per se, and considering networks of unitaries which substitute layers of neurons (Lewenstein, 1994). Another approach exploits non-linearities which stem from measurements and post-selection (arguably first suggested in (Kak, 1995)). The same issue is addressed by Behrman et al. (Behrman et al., 1996) by using a continuous mechanical system where the non-linearity is achieved by coupling the system with an environment 84, in the model system of quantum dots. The purely foundational research into implementations of such networks, and the analysis of their quantum mechanical features, has been and continues to be an active field of research (see e.g. (Altaisky et al., 2017)). For more information on this topic we refer the reader to more specialized reviews (Schuld et al., 2014b; Garman, 2011).
82 That is, given a full specification of the setting, decide whether there exists a policy for the agent which achieves a cumulative reward above some value, in a certain number of steps.
83 This decision problem is undecidable in the infinite-horizon case already for the classical problem, and thus trivially undecidable in the quantum case as well.
84 Similar ideas were also discussed by Peruš in (Peruš, 2000).
Unlike the research into quantum NNs, which has a foundational flavor, the majority of works studying quantum effects for classical ML problems are specifically focused on identifying improvements. The first examples of quantum advantages in this context were provided in the context of quantum computational learning theory, which is the topic of the first subsection below. In the second subsection we will survey research suggesting the possibility of improving the capacity of associative memories. The last subsection deals with proposals which address computational run-time improvements of classical learning algorithms, the first of which came out already in the early 2000s. Here we will differentiate approaches which focus on quantum improvements in the training phase of a classifier by means of quantum optimization (mostly focused on exploiting near-term technologies and restricted devices), and approaches which build algorithms based on, roughly speaking, quantum parallelism and "quantum linear algebra" - which typically assume universal quantum computers, and often a "pre-filled" database. It should be noted that the majority of research in quantum ML is focused precisely on this last aspect, and the results here are already quite numerous. We can thus afford to present only a chosen selection of results. A. Learning efficiency improvements: sample complexity Executive summary: The first results showing a separation between quantum and classical computers were obtained in the context of oracles, and for sample complexity - even the famous Grover's search algorithm constitutes such a result. Similarly, CLT deals with learning, i.e., the identification or the approximation of concepts, which are also nothing but oracles. Thus, quantum oracular computation settings and learning theory share the same underlying framework, which is investigated and exploited in this formal topic.
To talk about quantum CLT, and improvements, or bounds, on sample complexity, the classical concept oracles are thus upgraded to quantum concept oracles, which output quantum states, and/or allow access in superposition. As elaborated in section II.B.1, CLT deals with the problem of learning concepts, typically abstracted as boolean functions of bit-strings of length n, so c : {0, 1}^n → {0, 1}, from input-output relations alone. For intuitive purposes it is helpful to think of the task of optical character recognition (OCR), where we are given a bitmap image (black-and-white scan) of some size n = N × M; a concept may then be, say, "everything which represents the letter A" - more precisely, the concept specifying which bitmaps correspond to bitmaps of the letter "A". Further, we are most often interested in the learning performance for a set of concepts: a concept class C = {c | c : {0, 1}^n → {0, 1}} - in the context of the running example of OCR, we care about algorithms which are capable of recognising all letters, and not just "A". The three typical settings studied in the literature are the PAC model, exact learning from membership queries, and the agnostic model, see section II.B.1. These models differ in the type of access to the concept oracle which is allowed. In the PAC model, the oracle outputs labeled examples according to some specified distribution, analogous to basic supervised learning. In the membership queries model, the learner gets to choose the examples, and this is similar to active supervised learning. In the agnostic model, the concept is "noisy", i.e. forms a stochastic function, which is natural in supervised settings (the joint data point-label distribution P(x, y) need not be functional); for details we refer the reader to section II.B.1. All three models have been treated from a quantum perspective, and whether or not quantum advantages are obtainable greatly depends on the details of the settings.
Here we give a very succinct overview of the main results, partially following the structure of the recent survey on the topic by Arunachalam and de Wolf (Arunachalam and de Wolf, 2017). Quantum PAC learning The first quantum generalization of PAC learning was presented in (Bshouty and Jackson, 1998), where the quantum example oracle was defined to output coherent superpositions
Σ_x √(p_D(x)) |x, c(x)⟩,   (23)
for a given distribution D over the data points x, for a concept c. Recall, classical PAC oracles output a sample pair (x, c(x)), where x is drawn from D, which can be understood as copies of the mixed state Σ_x p_D(x) |x, c(x)⟩⟨x, c(x)|, with p_D(x) = P(D = x). The quantum oracle reduces to the standard oracle if the quantum example is measured in the standard (computational) basis. This first pioneering work showed that quantum algorithms with access to such a quantum-generalized oracle can provide more efficient learning of certain concept classes. The authors have considered the concept class of DNF formulas under the uniform distribution: here the concepts are s-term formulae in disjunctive normal form. In other words, each concept c is of the form c(x) = ⋁_I ⋀_j (x_I)_j, where x_I is a substring of x associated to I, which is a subset of the indices of cardinality at most s, and (x_I)_j is a variable or its negation (a literal). An example of a DNF is of the form (x_1 ∧ x_3 ∧ ¬x_6) ∨ (x_4 ∧ ¬x_8 ∧ x_1) ···, where parentheses (terms) only contain variables or their negations in conjunction (ANDs, ∧), whereas all the parentheses are in disjunction (ORs, ∨). The uniform DNF learning problem (for n variables, and poly(n) terms) is not known to be efficiently PAC learnable, but in (Bshouty and Jackson, 1998) it was proven to be efficiently quantum PAC learnable. The choice of this learning problem was not accidental: DNF learning is known to be learnable in the membership query model, which is described in detail in the next section.
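The oracle state of Eq. (23) and the DNF concepts above can be made concrete in a few lines; this sketch (our own helper names) builds a DNF concept and the amplitude vector of Σ_x √(p_D(x)) |x, c(x)⟩ for the uniform distribution, so that measuring in the computational basis reproduces classical uniform examples.

```python
import numpy as np

def dnf_concept(terms):
    """DNF concept: `terms` is a list of tuples of literals, where +j means
    x_j and -j means NOT x_j (1-indexed).  Returns c mapping a bit tuple
    to {0, 1}: the OR over terms of the AND over that term's literals."""
    def c(x):
        return int(any(all((x[abs(l) - 1] == 1) if l > 0 else (x[abs(l) - 1] == 0)
                           for l in term) for term in terms))
    return c

def quantum_example_state(c, n):
    """Amplitudes of sum_x sqrt(p_D(x)) |x, c(x)> for uniform D over {0,1}^n.
    Register order is |x>|c(x)>, so the vector has 2**(n+1) real entries."""
    amp = np.zeros(2 ** (n + 1))
    for i in range(2 ** n):
        x = tuple((i >> (n - 1 - k)) & 1 for k in range(n))
        amp[2 * i + c(x)] = 1.0 / np.sqrt(2 ** n)
    return amp
```

Each basis state |x, c(x)⟩ carries amplitude 2^(-n/2), and the "wrong label" entries |x, 1−c(x)⟩ stay zero, matching the mixed-state description obtained upon measurement.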
The corresponding classical algorithm which learns DNF in the membership query model directly inspired the quantum variant in the PAC case 85. If the underlying distribution over the concept domain is uniform, other concept classes can be learned with a quantum speed-up as well, specifically so-called k-juntas: n-bit binary functions which depend only on k < n bits. In (Atıcı and Servedio, 2007), Atıcı and Servedio have shown that there exists a quantum algorithm for learning k-juntas using O(k log(k)/ε) uniform quantum examples, O(2^k) uniform classical examples, and O(nk log(k)/ε + 2^k log(1/ε)) time. Note the improvement in this case is not in query complexity, but rather in the classical processing, which, for the best known classical algorithm, has complexity at least O(n^(2k/3)) (see (Arunachalam and de Wolf, 2017; Atıcı and Servedio, 2007) for further details). Diverging from perfect PAC settings, in (Cross et al., 2015) the authors considered the learning of linear boolean functions 86 under the uniform distribution over the examples. The twist in this work is the assumption of noise 87, which allows for evidence of a classical-quantum learnability separation. a. Distribution-free PAC While the assumption of the uniform distribution D constitutes a convenient theoretical setting, in reality we most often have few guarantees on the underlying distribution of the examples. For this reason PAC learning often refers to distribution-free learning, meaning learning under the worst-case distribution D. Perhaps surprisingly, it was recently shown that the quantum PAC learning model offers no advantages, in terms of sample complexity, over the classical model. Specifically, in (Arunachalam and de Wolf, 2016) the authors show that if C is a concept class of VC dimension d + 1, then for every (non-negative) δ ≤ 1/2 and ε ≤ 1/20, every (ε, δ)-quantum PAC learner requires Ω(d/ε + log(δ^(-1))/ε) samples.
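To illustrate the k-junta structure mentioned above, the following brute-force sketch (our own toy code, unrelated to the quantum algorithm of Atıcı and Servedio) searches all C(n, k) candidate coordinate subsets for one that explains the function, which is the source of the n^k-type classical scaling.

```python
from itertools import combinations, product

def learn_junta(f, n, k):
    """Brute-force exact learner for a k-junta f : {0,1}^n -> {0,1}.
    Tries every size-k subset of coordinates until one determines f
    (checked over the full truth table, so only sensible for small n)."""
    points = list(product([0, 1], repeat=n))
    for subset in combinations(range(n), k):
        table = {}
        ok = True
        for x in points:
            key = tuple(x[j] for j in subset)
            if table.setdefault(key, f(x)) != f(x):
                ok = False          # subset cannot determine f
                break
        if ok:
            return subset           # f depends only on these coordinates
    return None
```

For example, f(x) = x_0 XOR x_2 on 4 bits is a 2-junta on coordinates (0, 2).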
The same number of samples, however, is also known to suffice for a classical PAC learner (for any $\epsilon$ and $\delta$). A similar result, showing no separation between quantum and classical agnostic learning, was also proven in (Arunachalam and de Wolf, 2016) 88.

b. Quantum predictive PAC learning Standard PAC learning settings do not allow exponential separations between classical and quantum sample complexity of learning, and consequently the notion of learnable concepts is the same in the classical and the quantum case. This changes if we consider weaker learning settings, or rather, a weaker meaning of what it means to learn. The PAC learning setting assumes that the learning algorithm outputs a hypothesis h with a low error with high confidence. In the classical case, there is no distinction between expecting that the hypothesis h can be applied once, or any arbitrary number of times. However, in the quantum case, where the examples from the oracle may be quantum states, this changes, and inductive learning in general may not be possible in all settings; see section V. In (Gavinsky, 2012), the author considers a quantum PAC setting where only one (or polynomially few) evaluations of the hypothesis are required, called the Predictive Quantum (PQ) model 89. In this setting the author identifies a relational concept class (i.e. each data point may have many correct labels) which is not (polynomially) learnable in the classical case, but is PQ learnable under a standard quantum oracle, under the uniform distribution. The basic idea is to use quantum states, obtained by processing quantum examples, for each of the testing instances; in other words, the "implementation" of the hypothesis contains a quantum state obtained from the oracle. This quantum state cannot be efficiently estimated, but can be efficiently obtained using the PQ oracle.
The concept class and the labeling process are inspired by a distributed computation problem for which an exponential classical-quantum separation had been identified earlier in (Bar-Yossef et al., 2008). This work provides another noteworthy example of the intimate connection between various aspects of QIP (in this case, quantum communication complexity theory) and quantum learning.

Learning from membership queries

In the model of exact learning from membership queries, the learner can choose the elements from the concept domain it wishes labeled (similar to active learning); however, the task is to identify the concept exactly (no error), except with probability δ < 1/3 90. Learning from membership queries has, in the quantum domain, usually been called oracle identification. While quantum improvements in this context are possible, in (Servedio and Gortler, 2004) the authors show that they are at most low-degree polynomial improvements in the most general cases. More precisely, if a concept class C over n bits has classical and quantum membership query complexities D(C) and Q(C), respectively, then $D(C) = O(n\, Q(C)^3)$ 91; in other words, improvements in sample complexity can be at most polynomial.

88 The notions of efficiency and sample complexity in the agnostic model are analogous to those in the PAC model, as is the quantum oracle, which provides the coherent samples $\sum_{x,y} \sqrt{p_D(x, y)}\, |x, y\rangle$. See section II.B.1 for more details.
89 In a manner of speaking, to learn a concept in the PAC sense implies we can apply what we have learned arbitrarily many times. In PQ it suffices that the learner be capable of applying what it has learned just once to be considered successful. It follows, however, that if the number of examples is polynomial, PQ learnability also implies that the verification of learning can be successfully executed polynomially many times as well.
90 As usual, a success probability which is polynomially bounded away from 1/2 would also do.
Polynomial relationships have also been established for worst-case exact learning sample complexities (so-called (N, M)-query complexity); see (Kothari, 2013) and (Arunachalam and de Wolf, 2017). The above result is in spirit similar to earlier results in (Beals et al., 2001), where it was shown that quantum query complexity cannot provide a better than polynomial improvement over classical results, unless structural promises on the oracle are imposed. The results considered so far are standard, comparatively simple generalizations of classical learning settings, leading to somewhat restricted improvements in sample complexity. More dramatic improvements are possible if computational (time) complexity is taken into account, or if slightly non-standard generalizations of the learning model are considered. Note that we are not explicitly bringing computational complexity separations into the picture. Rather, under the assumption that certain computational problems are hard for the learner, we obtain a sample complexity separation. In particular, already in (Kearns and Valiant, 1994) the authors constructed several classes of Boolean functions in the distribution-free model whose efficient learning (in the sample complexity sense) implies the capacity to factor so-called Blum integers, a task not known to be solvable classically, but solvable on a quantum computer 92. Using these observations, Servedio and Gortler have demonstrated classes which are efficiently quantum PAC learnable, and classes which are efficiently learnable in the quantum membership query model, but which are not efficiently learnable in the corresponding classical models, unless Blum integers 93 can be efficiently factored on a classical computer (Servedio and Gortler, 2004).

91 This simple formulation of the claim of (Servedio and Gortler, 2004) was presented in (Arunachalam and de Wolf, 2017).
92 These ideas exploit the connections between asymmetric cryptography and learning.
In asymmetric cryptography, a message can be encrypted easily using a public key, but decryption is computationally hard unless one has the private key. To exemplify, the public key can be a Blum integer, whereas the private key is one of its factors. The data points are essentially the encryptions of integers k, i.e. E(k, N), for a public key N. The concept is defined by the least significant bit of k, which, provably, is not easier to obtain with bounded error than the decryption itself, which is computationally hard. A successful efficient learner of such a concept could thus factor Blum integers. The full proposal has further details we omit for simplicity.

93 The integer n is a Blum integer if it is a product of two distinct prime numbers p and q which are congruent to 3 mod 4 (i.e. both can be written in the form 4t + 3, for a non-negative integer t).

B. Improvements in learning capacity

Executive summary: The observation that a complete description of quantum systems typically requires the specification of exponentially many complex-valued amplitudes has led to the idea that those same amplitudes could be used to store data using only logarithmically few systems. While this idea fails for most applications, it has inspired some of the first proposals to use quantum systems for the dramatic improvement of the capacities of associative, or content-addressable, memories. More likely quantum upgrades of CAM memories, however, may come from a substantially different direction, which explores methods of extracting information from HNs used as CAM memories, and which is inspired by quantum adiabatic computing to realize a recall process similar to, yet different from, standard recall methods. The quantum methods may yield advantages by outputting superpositions of data, and it has been suggested they also utilize the memory more efficiently, leading to increased capacities.
The pioneering investigations in the areas between CLT, NNs and QIP have challenged the classical sample complexity bounds. Soon thereafter (and likely independently), the first proposals suggesting quantum improvements in the context of space complexity emerged, specifically concerning the efficiency of associative memories. Recall that an associative, or content-addressable, memory (abbreviated CAM) is a storage device which can be loaded with patterns, typically a subset of n-bit bit-strings $P = \{x_i\}_i$, $x_i \in \{0,1\}^n$, which are then, unlike in the case of standard RAM-type memories, not recovered by address but by content similarity: given an input string $y \in \{0,1\}^n$, the memory should return y if it is one of the stored patterns (i.e. $y \in P$), or a stored pattern which is "closest" to y with respect to some distance, typically the Hamming distance. Deterministic perfect storage of any set of patterns clearly requires $O(n \times 2^n)$ bits (there are in total $2^n$ distinct patterns, each requiring n bits), and the interesting aspects of CAMs begin when the requirements are somewhat relaxed. We can identify roughly two basic groups of ideas which were suggested to lead to improved capacities. The first group, sketched next, relies directly on the structure of the Hilbert space, whereas the second group of ideas stems from the quantization of a well-understood architecture for a CAM memory system: the Hopfield network.

Capacity from amplitude encoding

In some of the first works (Ventura and Martinez, 2000; Trugenberger, 2001) it was suggested that the proverbial "exponential-sized" Hilbert space describing systems of qubits may allow exponential improvements: intuitively, even exponentially numerous pattern sets P can be "stored" in a quantum state of only n qubits: $|\psi_P\rangle = |P|^{-1/2} \sum_{x \in P} |x\rangle$. These early works suggested creative ideas on how such a memory could be used to recover patterns (e.g.
via modified amplitude amplification), albeit often suffering from a lack of scalability and other quite fundamental issues preventing complete proposals 94, and thus we will not dig into details. We will, however, point out that these works may be interpreted as proposing some of the first examples of "amplitude encoding" of classical data, which is heavily used in modern approaches to quantum ML. In particular, the stored memory of a CAM can always be represented as a single bit-string $(b_{(0\cdots0)}, b_{(0\cdots1)}, \ldots, b_{(1\cdots1)})$ of length $2^n$ (each bit in the bit-string is indexed by a pattern, and its value encodes whether that pattern is stored). This data vector (in this case binary, but this is not critical) is thus encoded into the amplitudes of a quantum state of an exponentially smaller number of qubits: $b = (b_{(0\cdots0)}, b_{(0\cdots1)}, \ldots, b_{(1\cdots1)}) \mapsto \sum_{x \in \{0,1\}^n} b_x |x\rangle$ (up to normalization).

Capacity via quantized Hopfield networks

A different approach to increasing the capacities of CAM memories arises from the "quantization" of different aspects of classical HNs, which constitute well-understood classical CAM systems.

a. Hopfield networks as a content-addressable memory Recall that a HN is a recurrent NN characterized by a set of n neurons, whose connectivity is given by a (typically symmetric) real matrix of weights $W = (w_{ij})_{ij}$ and a vector of (real) local thresholds $\{\theta_i\}_{i=1}^n$. In the context of CAM memories, the matrix W encodes the stored patterns, which are in this setting best represented as sequences of signs, so $x \in \{1, -1\}^n$. The retrieval, given an input pattern $y \in \{1, -1\}^n$, is realized by setting the $k$-th neuron $s_k$ to the $k$-th value of the input pattern $y_k$, followed by the "running of the network" according to standard perceptron rules: each neuron k computes its subsequent value by checking if its inbound weighted sum is above the local threshold: $s_k \leftarrow \mathrm{sign}(\sum_l w_{kl} s_l - \theta_k)$ (assuming sign(0) = +1) 95.
As discussed previously, under moderate assumptions the described dynamical system converges to local attractive points, which also correspond to the energy minima of the Ising functional
$$E(s) = -\frac{1}{2}\sum_{ij} w_{ij} s_i s_j + \sum_i \theta_i s_i. \qquad (24)$$
Such a system still allows significant freedom in the rule specifying the matrix W, given a set of patterns to be stored: intuitively, we need to "program" the minima of E (choosing the appropriate W will suffice, as the local thresholds can be set to zero) to be the target patterns, ideally without storing too many unwanted, so-called spurious, patterns. This, and other properties of a useful storing rule, that is, a rule which specifies W given the patterns, are given as follows (Storkey, 1997): a) locality: an update of a particular connection should depend only on the information available to the neurons on either side of the connection 96; b) incrementality: the rule should allow the updating of the matrix W to store an additional pattern based only on the new pattern and W itself 97; c) immediateness: the rule should not require a limiting computational process for the evaluation of the weight matrix (rather, it should be a simple computation of few steps). The most critical property of a useful rule is that it d) results in a CAM with a non-trivial capacity: it should be capable of storing and retrieving some number of patterns, with controllable error (which includes few spurious patterns, for instance).

95 The updates can be synchronous, meaning all neurons update their values at the same time, or asynchronous, in which case usually a random order is assigned. In most analyses, and here, asynchronous updates are assumed.
96 Locality matters as its lack prohibits parallelizable architectures.
97 In particular, it should not be necessary to have external memory storing e.g. all stored patterns, which would render HN-based CAM memories undesirably non-adaptive and inflexible.
The historically first rule, the Hebbian rule, satisfies all the conditions above and is given by a simple recurrence relation: for the set of patterns $\{x^k\}_k$ the weight matrix is given by $w_{ij} = \sum_k x^k_i x^k_j / M$ (where $x^k_j$ is the $j$-th sign of the $k$-th pattern, and M is the number of patterns). The capacity of HNs under standard recall and Hebbian updates has been investigated from various perspectives, and in the context of absolute capacity (the asymptotic ratio of the number of patterns that can be stored without error to the number of neurons, as the network size tends to infinity), it is known to scale as $O(n/(2\ln n))$. A well-known result in the field improves on this to a capacity of $O(n/\sqrt{2\ln n})$, achieved by a different rule introduced by Storkey (Storkey, 1997), while maintaining all the desired properties. Here we should emphasize that, in broad terms, the capacity is typically (sub-)linear in n. Better results can be achieved in the classical setting if some of the assumptions a)-c) are dropped, but this is undesirable.

b. Quantization of Hopfield-based CAMs In early works (Rigatos and Tzafestas, 2006, 2007), the authors have considered fuzzy and probabilistic learning rules, and have broadly argued that a) such probabilistic rules correspond to a quantum deliberation process and that b) the resulting CAMs can have significantly larger capacities. However, more rigorous (and fully worked out) results were shown more recently, by combining HNs with ideas from adiabatic QC. The first idea, presented in (Neigovzen et al., 2009), connects HNs and quantum annealing. Recall that the HN can be characterized by the Ising functional $E(s) = -\frac{1}{2}\sum_{ij} w_{ij} s_i s_j$ (see Eq. (24)), where the stored patterns correspond to local minima, and where we have, without loss of generality, assumed that the local thresholds are zero. The classical recall corresponds to the problem of finding the local minimum closest to the input pattern y.
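The Hebbian rule and the asynchronous recall dynamics described above can be sketched in a few lines. The patterns and the corrupted query below are made-up toy values (n = 8, two mutually orthogonal patterns, zero thresholds):

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian rule: w_ij = sum_k x^k_i x^k_j / M, with zeroed diagonal."""
    X = np.array(patterns, dtype=float)   # rows are +/-1 patterns
    W = X.T @ X / len(X)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, y, sweeps=5):
    """Asynchronous recall: s_k <- sign(sum_l w_kl s_l), with sign(0) = +1."""
    s = np.array(y, dtype=float)
    for _ in range(sweeps):
        for k in range(len(s)):
            s[k] = 1.0 if W[k] @ s >= 0 else -1.0
    return s.astype(int)

pats = [[1, 1, 1, 1, -1, -1, -1, -1],
        [1, -1, 1, -1, 1, -1, 1, -1]]
W = hebbian_weights(pats)
noisy = [-1, 1, 1, 1, -1, -1, -1, -1]   # first pattern with one sign flipped
print(recall(W, noisy))                  # recovers the first stored pattern
```

For these orthogonal patterns the corrupted query settles onto the nearest stored pattern; with many correlated patterns (beyond the capacity bounds quoted above) the recall would instead drift to spurious minima.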
However, an alternative system, with similar features, is obtained if the input pattern is added in place of the local thresholds: $E(s, y) = -\frac{1}{2}\sum_{ij} w_{ij} s_i s_j - \Gamma \sum_i y_i s_i$. Intuitively, this lowers the energy landscape of the system specifically around the input pattern configuration. But then, the stored pattern (previously a local minimum) which is closest to the input pattern is the most likely candidate for a global minimum. Further, the problem of finding such configurations can now be tackled via quantum annealing: we define the quantum "memory Hamiltonian" naturally as $H_{mem} = -\frac{1}{2}\sum_{ij} w_{ij} \sigma^z_i \sigma^z_j$, and the HN Hamiltonian, given input y, as $H_p = H_{mem} + \Gamma H_{inp}$, where the input Hamiltonian is given by $H_{inp} = -\sum_i y_i \sigma^z_i$. The quantum recall is obtained by the adiabatic evolution via the Hamiltonian trajectory $H(t) = \Lambda(t) H_{init} + H_p$, where $\Lambda(0)$ is large enough that $H_{init}$ dominates, and $\Lambda(1) = 0$. The system is initialized in the ground state of the (arbitrary and simple) Hamiltonian $H_{init}$, and if the evolution in t is slow enough to satisfy the criteria of the adiabatic theorem, the system ends in the ground state of $H_p$. This proposal exchanged local optimization (classical retrieval) for global optimization. While this is generally a bad idea 98, what is gained is a quantum formulation of the problem which can be run on adiabatic architectures, and also the fact that this system can return quantum superpositions of recalled patterns if multiple stored patterns are approximately equally close to the input, which can be an advantage (Neigovzen et al., 2009). However, the system above does not behave exactly the same as the classical recall network, which was further investigated in subsequent work (Seddiqi and Humble, 2014) analysing the sensitivity of the quantum recall under various classical learning rules.
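Since $H_{mem}$, $H_{inp}$, and hence $H_p$ are all diagonal in the $\sigma^z$ basis, the endpoint of the adiabatic recall (the ground state of $H_p$) can be checked by brute force for a toy instance. The patterns, query, and Γ below are made-up values, with the Hebbian weights from the classical construction:

```python
import numpy as np
from itertools import product

patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])
M, n = patterns.shape
W = patterns.T @ patterns / M            # Hebbian weights
np.fill_diagonal(W, 0.0)

y = np.array([-1, 1, 1, 1, -1, -1, -1, -1])   # pattern 0 with one spin flipped
Gamma = 0.5

# H_mem and H_inp are diagonal in the computational basis, so their spectra
# are obtained by evaluating the Ising energies on all 2^n spin strings.
spins = np.array(list(product([1, -1], repeat=n)))
E_mem = -0.5 * np.einsum('ij,ki,kj->k', W, spins, spins)
E_inp = -spins @ y
ground = spins[np.argmin(E_mem + Gamma * E_inp)]
print(ground)   # the stored pattern nearest (in Hamming distance) to y
```

For this instance the global minimum of $H_p$ is the stored pattern closest to the corrupted input, which is exactly the behavior the annealing-based recall is meant to exploit.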
Further, in (Santra et al., 2016) the authors have provided an extensive analysis of the capacity of the Hebb-based HN, but under the quantum annealing recall proposed in (Neigovzen et al., 2009), showing, surprisingly, that this model yields exponential storage capacity under the assumption of random memories. This result stands in apparent stark contrast to the standard classical capacities reported in textbooks 99. Regarding near-term implementability, in (Santra et al., 2016) the authors have investigated the suitability of the Chimera graph-based architecture of the D-Wave programmable quantum annealing device for quantum recall HN tasks, showing potential for demonstrable quantum improvements in near-term devices.

C. Run-time improvements: computational complexity

Executive summary: The theory of quantum algorithms has provided examples of computational speed-ups for decision problems, various functional problems, oracular problems, sampling tasks, and optimization problems. This section presents quantum algorithms which provide speed-ups for learning-type problems. The two main classes of approaches differ in the underlying computational architecture: a large class of algorithms relies on quantum annealers, which may not be universal for QC, but may natively solve certain sub-tasks important in the context of ML. These approaches then have an increased likelihood of being realizable with near-term devices. In contrast, the second class of approaches assumes universal quantum computers, and often data prepared and accessible in a quantum database, but offers up to exponential improvements. Here we distinguish between quantum amplitude amplification and amplitude encoding approaches, which, with very few exceptions, cover all quantum algorithms for supervised and unsupervised learning.
The most prolific research area within quantum ML in the last few years has focused on identifying ML algorithms, or their computationally intensive subroutines, which may be sped up using quantum computers. While there are multiple natural ways to classify the performed research, an appealing first-order delineation follows the types of quantum computational architectures assumed 100. Here we can identify research which is focused on using quantum annealing architectures, which are experimentally well justified and even commercially available in recent times (mostly in terms of the D-Wave system set-ups). In most of such research, the annealing architecture is utilized to perform a classically hard optimization problem usually emerging in the training phases of many classical algorithms. An involved part of such approaches will often be a meaningful rephrasing of the ML optimization problem into a form which an annealing architecture can (likely) handle. While the overall supervised task comprises multiple computational elements, it is only the optimization that will be treated by a quantum system in these proposals. The second approach to speeding up ML algorithms assumes universal quantum computation capabilities. Here, the obtained algorithms are typically expressed in terms of quantum circuits.

99 At this point it should be mentioned that exponential capacities of HNs have recently been proposed for fully classical systems, by considering different learning rules (Hillar and Tran, 2014; Karbasi et al., 2014), which also tolerate moderate noise. The relationship to, and potential advantages of, the quantum proposals remain to be elucidated.
100 Other classification criteria could be according to tasks, i.e. supervised vs. unsupervised vs. generative models etc., or according to the underlying quantum algorithms used, e.g. amplitude amplification or equation solving.
For most proposals in this research line, there are additional assumptions needed to guarantee actual speed-ups. For instance, most proposals can only guarantee improvements if the data to be analyzed is already present in a type of quantum oracle or quantum memory, and, more generally, if certain quantum states, which depend on the data, can be prepared efficiently. The overhead of initializing such a memory in the first place is not counted, but this may not be unreasonable as, in practice, the same database is most often used for a great number of analyses. Other assumptions may also be placed on the structure of the dataset itself, such as low condition numbers of certain matrices containing the data (Aaronson, 2015).

Speed-up via adiabatic optimization

Quantum optimization techniques play an increasingly important role in quantum ML. Here, we can roughly distinguish two flavours of approaches, which differ in which computationally difficult aspect of training a classical model is tackled by adiabatic methods. In the (historically) first approach, we deal with clear-cut optimization in the context of binary classifiers, and more specifically, boosting (see II.A.3). Since then, it has been shown that annealers can also help by generating samples from hard-to-simulate distributions. We will mostly focus on the historically first approaches, and only briefly mention the other, more recent results.

a. Optimization for boosting The representative line of research, which also initiated the development of this topic of quantum-enhanced ML based on adiabatic quantum computation, focuses on a particular family of optimization problems called quadratic unconstrained binary optimization (QUBO) problems, of the form
$$x^* = (x^*_1, \ldots, x^*_n) = \mathrm{argmin}_{(x_1,\ldots,x_n)} \sum_{i<j} J_{ij} x_i x_j, \quad x_k \in \{0, 1\}, \qquad (25)$$
specified by a real matrix J.
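For intuition, here is a brute-force solver for Eq. (25) on a made-up 3-variable instance. Exhaustive search is exponential in n, which is precisely why dedicated annealing hardware for this problem family is interesting:

```python
import numpy as np
from itertools import product

def qubo_min(J):
    """Brute-force argmin_x sum_{i<j} J_ij x_i x_j over x in {0,1}^n."""
    n = J.shape[0]
    best, best_x = np.inf, None
    for x in product([0, 1], repeat=n):
        x = np.array(x)
        val = sum(J[i, j] * x[i] * x[j]
                  for i in range(n) for j in range(i + 1, n))
        if val < best:
            best, best_x = val, x
    return best_x, best

# made-up instance: rewards turning on variables 1 and 2 together
J = np.array([[0.,  2., -1.],
              [0.,  0., -3.],
              [0.,  0.,  0.]])
x_star, e = qubo_min(J)
print(x_star, e)
```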
QUBO problems are equivalent to the problem of identifying lowest-energy states of the Ising functional 101 $E(s) = -\frac{1}{2}\sum_{ij} J_{ij} s_i s_j + \sum_i \theta_i s_i$, provided we make no assumptions on the underlying lattice. Modern annealing architectures provide means for tackling the problem of finding such ground states using adiabatic quantum computation. Typically we are dealing with systems which can implement the tunable Hamiltonian of the form
$$H(t) = -A(t)\underbrace{\sum_i \sigma^x_i}_{H_{initial}} + B(t)\underbrace{\sum_{ij} J_{ij}\sigma^z_i \sigma^z_j}_{H_{target}}, \qquad (26)$$
where A, B are smooth positive functions such that $A(0) \gg B(0)$ and $B(1) \gg A(1)$; that is, by tuning t sufficiently slowly, we can perform adiabatic preparation of the ground state of the Ising Hamiltonian $H_{target}$, thereby solving the optimization problem. In practice, the parameters $J_{ij}$ cannot be chosen fully freely (e.g. the connectivity is restricted to the so-called Chimera graph (Hen et al., 2015) in D-Wave architectures), and the realized interaction strength values also have a limited precision and accuracy (Neven et al., 2009a; Bian et al., 2010), but we will ignore this for the moment. In general, finding ground states of the Ising model is functional NP-hard 102, which is likely beyond the reach of quantum computers. However, annealing architectures may still have many advantages: for instance, it is believed they may still provide speed-ups in all, or at least in average, instances, and/or that they may provide good heuristic methods, and hopefully near-optimal solutions 103. In other words, any aspect of optimization occurring in ML algorithms which has an efficient mapping to (non-trivial) instances of QUBO problems, specifically those which can be realized by experimental set-ups, is a valid candidate for quantum improvements. Such optimization problems have been identified in a number of contexts, mostly dealing with training binary classifiers, and thus belonging to the class of supervised learning problems.
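The equivalence between the QUBO and Ising forms rests on the standard substitution $x_i = (1 + s_i)/2$, which turns the quadratic terms in x into Ising couplings, local fields, and a constant offset. A quick numerical check of this identity on a random upper-triangular J (a made-up instance):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 4
J = np.triu(rng.normal(size=(n, n)), k=1)   # random upper-triangular couplings

def qubo(x):
    """sum_{i<j} J_ij x_i x_j for x in {0,1}^n."""
    return sum(J[i, j] * x[i] * x[j]
               for i in range(n) for j in range(i + 1, n))

def ising(s):
    """Same value written in spin variables s in {-1,+1}^n via x=(1+s)/2:
    couplings J/4, local fields from row/column sums, constant offset."""
    h = (J.sum(axis=1) + J.sum(axis=0)) / 4.0
    const = J.sum() / 4.0
    e = sum(J[i, j] * s[i] * s[j] / 4.0
            for i in range(n) for j in range(i + 1, n))
    return e + h @ s + const

# the two functionals agree on every configuration
for s in product([-1, 1], repeat=n):
    s = np.array(s)
    assert np.isclose(qubo((1 + s) / 2), ising(s))
```

(The Ising form here has a +J/4 coupling sign convention; matching the $-\frac{1}{2}\sum J_{ij} s_i s_j$ convention of the functional above is just a global rescaling of J and $\theta$.)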
The first setting considers the problem of building optimal classifiers from linear combinations of simple hypothesis functions, which minimize the empirical error while controlling the model complexity through a so-called regularization term. This is the common optimization setting of boosting (see II.A.3), and, with appropriate mathematical gymnastics and a few assumptions, it can be reduced to a QUBO problem. The overarching setting of this line of works can be expressed in the context of training a binary classifier by combining weaker hypotheses. For this setting, consider a dataset $D = \{x_i, y_i\}_{i=1}^M$, $x_i \in \mathbb{R}^n$, $y_i \in \{-1, 1\}$, and a set of hypotheses $\{h_j\}_{j=1}^K$, $h_j : \mathbb{R}^n \to \{-1, 1\}$. For a given weight vector $w \in \mathbb{R}^K$ we define the composite classifier of the form $hc_w(x) = \mathrm{sign}(\sum_k w_k h_k(x))$. The training of the composite classifier is achieved by the optimization of the vector w so as to minimize misclassification on the training set, and so as to decrease the risk of overtraining. The misclassification cost is specified via a loss function L, which depends on the dataset and, in the boosting context, the hypothesis set. The overtraining risk, which tames the complexity of the model, is controlled by a so-called regularization term R. Formally, we are solving $\mathrm{argmin}_w\; L(w; D) + R(w)$. This constitutes the standard boosting framework exactly, but is also closely related to the training of certain SVMs, i.e. hyperplane classifiers 104. In other words, quantum optimization techniques which work for the boosting setting can also help for hyperplane classification. There are a few well-justified choices for L and R, leading to classifiers with different properties. Often, the best choices (the definition of which depends on the context) lead to hard optimization (Long and Servedio, 2010), and some of those can be reduced to QUBOs, but not straightforwardly. In the pioneering paper on the topic (Neven et al., 2008), Neven and co-authors consider the boosting setting.
The regularization term is chosen to be proportional to the 0-norm, which counts the number of non-zero entries, that is, $R(w, \lambda) = \lambda \|w\|_0$. The parameter λ controls the relative importance of regularization in the overall optimization task. A common choice for the loss function would be the 0-1 loss function $L_{0-1}$, optimal in some settings, given by $L_{0-1}(w) = \sum_{j=1}^M \Theta(-y_j \sum_k w_k h_k(x_j))$ (where Θ is the step function), which simply counts the number of misclassifications. This choice is reasonably well motivated in terms of performance, and is likely to be computationally hard. With an appropriate discretization of the weights w, which the authors argue likely does not hurt performance, the above forms a solid candidate for a general adiabatic approach. However, it does not fit the QUBO structure (which has only quadratic terms), and hence cannot be tackled using existing architectures.

102 Finding ground states is not a decision problem, so, technically, it is not correct to state it is NP-hard. The class functional NP (FNP) is the extension of the NP class to functional (relational) problems.
103 Indeed, one of the features of adiabatic models in general is that they provide an elegant means for (generically) producing approximate solutions, by simply performing the annealing process faster than prescribed by the adiabatic theorem.
104 If we allow the hypotheses $h_j$ to attain continuous real values, then by setting $h_j$ to be the projection onto the $j$-th component of the input vector, so $h_j(x) = x_j$, the combined classifier attains the inner-product-threshold form $hc_w(x) = \mathrm{sign}(w^\tau x)$, which contains hyperplane classifiers; the only component missing is the hyperplane offset b, which is incorporated into the weight vector by increasing the dimension by 1.
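A direct implementation of the composite classifier, the 0-1 loss, and the 0-norm regularizer makes the objective concrete. The hypothesis evaluations H and the weights below are made-up toy values:

```python
import numpy as np

def composite(w, H):
    """hc_w(x) = sign(sum_k w_k h_k(x)); H[k, j] = h_k(x_j), sign(0) = +1."""
    return np.where(w @ H >= 0, 1, -1)

def loss_01(w, H, y):
    """Number of training points misclassified by the composite classifier."""
    return int(np.sum(composite(w, H) != y))

def reg_l0(w, lam):
    """lambda * ||w||_0: penalize the number of non-zero weights."""
    return lam * int(np.sum(np.asarray(w) != 0))

# toy problem: 3 weak hypotheses evaluated on 4 training points
H = np.array([[ 1,  1, -1, -1],
              [ 1, -1,  1, -1],
              [-1, -1, -1, -1]])
y = np.array([1, 1, -1, -1])
w = np.array([1, 0, 0])           # a sparse weight vector
print(loss_01(w, H, y) + reg_l0(w, lam=0.1))   # total objective L + R
```

This objective is exactly what the discretized-weight adiabatic proposal targets; the obstruction noted above is that the step function Θ is not quadratic in w.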
To achieve the desired QUBO structure the authors impose two modifications: they opt for a quadratic loss function $L_2(w) = \sum_{j=1}^M |y_j - \sum_k w_k h_k(x_j)|^2$, and restrict the weights to binary values (although this can be circumvented to an extent). Such a system was also tested using numerical experiments. In a follow-up paper (Neven et al., 2009a), the same team generalized the initial proposal to accommodate another practical issue: problem size. Available architectures allow optimization over a few thousand variables, whereas in practice the number of hypotheses one optimizes over (K) may be significantly larger. To resolve this, the authors show how to break a large optimization problem into more manageable chunks while maintaining (experimentally verified) good performance. These ideas were also tested in an actual physical architecture (Neven et al., 2009b), and combined and refined in a more general, iterative algorithm in , tested also using actual quantum architectures. While $L_{0-1}$ loss functions were known to be good choices, they were not the norm in practice as they lead to non-convex optimization, so convex functions were preferred. However, in 2010 it became increasingly clear that convex functions are provably bad choices. For instance, in the seminal paper (Long and Servedio, 2010), Long and Servedio 105 showed that boosting with convex optimization completely fails in noisy settings. Motivated by this, in , the authors re-investigate D-Wave type architectures and identify a reduction which allows non-convex optimization. Expressed in the hyperplane classification setting (as explained, this is equivalent in structure to the boosting setting), they identify a reduction which (indirectly) implements the non-convex function $l_q(x) = \min\{(1-q)^2, (\max(0, 1-x))^2\}$. This function is called the q-loss function, where q is a real parameter.
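With binary weights, the quadratic loss expands as $L_2(w) = y^\tau y - 2(Hy)^\tau w + w^\tau H H^\tau w$, and since $w_k^2 = w_k$ for $w_k \in \{0,1\}$, the diagonal of $HH^\tau$ folds into the linear part, leaving exactly a QUBO plus a constant. A sketch with made-up toy data verifying this expansion:

```python
import numpy as np
from itertools import product

def l2_qubo_coeffs(H, y):
    """Expand L2(w) = sum_j (y_j - sum_k w_k h_k(x_j))^2, w_k in {0,1},
    into constant + linear + off-diagonal quadratic (QUBO) coefficients.
    H[k, j] = h_k(x_j)."""
    const = float(y @ y)
    lin = -2.0 * (H @ y)               # coefficient of w_k
    quad = H @ H.T                     # coefficient of w_k w_l
    # w_k^2 = w_k for binary weights: fold the diagonal into the linear part
    lin = lin + np.diag(quad)
    quad = quad - np.diag(np.diag(quad))
    return const, lin, quad

def l2_direct(w, H, y):
    return float(np.sum((y - w @ H) ** 2))

# toy data: 2 hypotheses evaluated on 4 points
H = np.array([[1.,  1., -1., -1.],
              [1., -1.,  1., -1.]])
y = np.array([1., 1., -1., 1.])
const, lin, quad = l2_qubo_coeffs(H, y)
for w in product([0., 1.], repeat=2):
    w = np.array(w)
    assert np.isclose(const + lin @ w + w @ quad @ w, l2_direct(w, H, y))
```

The off-diagonal matrix `quad` plays the role of the QUBO couplings J; the remaining linear terms can be absorbed by standard tricks (e.g. an extra variable clamped to 1).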
The implementation of the q-loss function allows for the realization of optimization relative to the total loss of the form $L_q(w, b; D) = \sum_j l_q(y_j(w^\tau x_j + b))$. The resulting regularization term is in this case proportional to the 2-norm of w, instead of the 0-norm as in the previous examples, which may be sub-optimal. Nonetheless, the above forms a prime example where quantum architectures lead to ML settings which would not have been explored in the classical case (the loss $L_q$ is unlikely to appear naturally in many settings) yet are well motivated, as a) the function is non-convex and thus has the potential to circumvent all the no-go results for convex functions, and b) the optimization process can be realized in a physical system. The authors perform a number of numerical experiments demonstrating the advantages of this choice of a non-convex loss function when analysing noisy data, which is certainly promising. In later work (Denchev et al., 2015), it was also suggested that combinations of loss and regularization which are realizable in quantum architectures can also be used for so-called totally corrective boosting with cardinality penalization, which is believed to be classically intractable. The details of this go beyond the scope of this review, but we can at least provide a flavour of the problem. In corrective boosting, the algorithm updates the weights w essentially one step at a time. In totally corrective boosting, at the $t$-th step of the boosting algorithm optimization, t entries of w are updated simultaneously. This is known to lead to better regularized solutions, but the optimization is harder. Cardinality penalization pertains to explicitly using the 0-norm for the regularization (discussed earlier), rather than the more common 1-norm. This, too, leads to harder optimization, which may be treated using an annealing architecture.
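The q-loss is simply a clipped quadratic hinge on the classification margin; a minimal sketch (the margins and the value of q are made-up toy values):

```python
import numpy as np

def q_loss(x, q):
    """l_q(x) = min{(1-q)^2, (max(0, 1-x))^2}: a quadratic hinge on the
    margin x, clipped at (1-q)^2 so badly misclassified (e.g. noisy)
    points incur only a bounded penalty."""
    return np.minimum((1 - q) ** 2, np.maximum(0.0, 1 - x) ** 2)

# margins y_j * (w.x_j + b): confident correct, weakly correct, badly wrong
margins = np.array([2.0, 0.5, -3.0])
print(q_loss(margins, q=-1.0))
```

The clipping is what makes the function non-convex, and it is also what makes it robust: an outlier's contribution saturates at $(1-q)^2$ instead of growing quadratically, in line with the noisy-data experiments described above.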
In (Babbush et al., 2014), the authors significantly generalized the scope of loss functions which can be embedded into quantum architectures, by observing that any polynomial unconstrained binary optimization problem can, with small overhead, be mapped onto a (slightly larger) QUBO problem. This, in particular, opens up the possibility of implementing odd-degree polynomials which are non-convex and can approximate the 0-1 loss function. This approach introduced new classes of unusual yet promising loss functions. b. Applications of quantum boosting Building on the "quantum boosting" architecture described above, in (Pudenz and Lidar, 2013), the authors explore the possibility of (aside from boosting) realizing anomaly detection, specifically envisioned for the computationally challenging problem of software verification and validation 106 . In the proposed learning step, the authors use quantum optimization (boosting) to learn the characteristics of the program being tested. In the novel testing step, the authors modify the target Hamiltonian so as to lower the energy of the states which encode input-outputs where the real and ideal software differ. These can then be prepared in superposition (i.e. they can prepare a state which is a superposition over the inputs where the program P will produce an erroneous output), similarly to the previously mentioned proposals in the context of adiabatic recall of superpositions in HNs (Neigovzen et al., 2009). c. Beyond boosting Beyond the problems of boosting, annealers have been shown to be useful for the training of so-called Bayesian Network Structure Learning problems (O'Gorman et al., 2015), as their training can also be reduced to QUBOs. Further, annealing architectures can also be used for the training of deep neural networks, relying on sampling rather than optimization.
A notable approach to this is based on the fact that the training of deep networks usually relies on the use of so-called generative deep belief networks, which are, essentially, restricted BMs with multiple layers 107 . The training of deep belief networks, in turn, is the computational bottleneck, as it requires the sampling of hard-to-generate distributions, which may be more efficiently prepared using annealing architectures, see e.g. (Adachi and Henderson, 2015). Further, novel ideas introducing fully quantum BM-like models have been proposed (Amin et al., 2016). Further, in recent work (Sieberer and Lechner, 2017), which builds on the flexible construction in (Lechner et al., 2015), the authors have shown how to achieve programmable adiabatic architectures, which allows running algorithms where the weights themselves are in superposition. This possibility is also sure to inspire novel QML ideas. Moving on from BMs, in recent work (Wittek and Gogolin, 2017), the authors have also shown how suitable annealing architectures may be useful to speed up probabilistic inference in so-called Markov logic networks 108 . This task involves the estimation of partition functions arising from statistical models, concretely Markov random fields, which include the Ising model as a special case. Quantum annealing may speed up this sub-task. More generally, the ideas that restricted, even simple, quantum systems which may be realizable with current technologies could implement information processing elements useful for 106 A piece of software is represented as a map P from input to output spaces, here specified as a subset of the space of pairs (x_input, x_output). An implemented map (software) P is differentiated from the ideal software P̃ by the mismatches in the defining pairs. 107 In other words, they are slightly less restricted BMs, with multiple layers and no within-layer connectivity.
108 Markov logic networks (Richardson and Domingos, 2006) combine first-order logic, as used for knowledge representation and reasoning, with statistical modelling: essentially, the world is described via first-order sentences (a knowledge base), which give rise to a graphical statistical model (a Markov random field), where correlations stem from the relations in the knowledge base. supervised learning are beginning to be explored in settings beyond annealers. For instance, in (Schuld et al., 2017), a simple interferometric circuit is used for the efficient evaluation of distances between data-vectors, useful for classification and clustering. A more complete account of these recent ideas is beyond the scope of this review. Speed-ups in circuit architectures One of the most important applications of ML in recent times has been in the context of data mining, and analyzing so-called big data. The most impressive improvements in this context have been achieved by proposing specialized quantum algorithms which solve particular ML problems. Such algorithms assume the availability of full-blown quantum computers, and have been tentatively probed since the early 2000s. In recent times, however, we have witnessed a large influx of ideas. Unlike the situation we have seen in the context of quantum annealing, where an optimization subroutine alone was run on a quantum system, in most of the approaches of this section the entire algorithm, and even the dataset, may be quantized. The ideas for quantum enhancements for ML can roughly be classified into two groups: a) approaches which rely on Grover's search and amplitude amplification to obtain up-to-quadratic speed-ups, and b) approaches which encode relevant information into quantum amplitudes, and which have a potential for even exponential improvements.
The second group of approaches forms perhaps the most developed research line in quantum ML, and collects a plethora of quantum tools, most notably quantum linear algebra, utilized in quantum ML proposals. a. Speed-ups by amplitude amplification In (Anguita et al., 2003), it was noticed that the training of support vector machines may be a hard optimization task, with no obviously better approaches than brute-force search. In turn, for such cases of optimization with no structure, QIP offers at least a quadratic relief, in the guise of variants of Grover's search algorithm (Grover, 1996) or its application to minimum finding (Durr and Hoyer, 1999). This idea predates, and is, in spirit, similar to, some of the early adiabatic-based proposals of the previous subsection, but the methodology is substantially different. The potential of quadratic improvements stemming from Grover-like search mechanisms was explored more extensively in (Aïmeur et al., 2013), in the context of unsupervised learning tasks. There the authors assume access to a black-box oracle which computes a distance measure between any two data-points. Using this, combined with amplitude amplification techniques (e.g. minimum finding in (Durr and Hoyer, 1999)), the authors achieve up to quadratic improvements in key subroutines used in clustering (unsupervised learning) tasks. Specifically, improvements are obtained in algorithms performing minimum spanning tree clustering, divisive clustering and k-medians clustering 109 . Additionally, the authors also show that quantum effects allow for a better parallelization of clustering tasks, by constructing a distributed version of Grover's search. This construction may be particularly relevant as large databases can often be distributed. More recently, in (Wiebe et al., 2014a), the authors consider the problem of training deep (more than two-layered) BMs.
As we mentioned earlier, one of the bottlenecks of exactly training BMs stems from the fact that it requires the estimation of probabilities of certain equilibrium distributions. Computing these analytically is typically not possible (it is as hard as computing partition functions), and sampling approaches are costly, as they require attaining the equilibrium distribution and many iterations to reliably estimate small values. This is often circumvented by using proxy solutions (e.g. relying on contrastive divergence) to train approximately, but it is known that these methods are inferior to exact training. In (Wiebe et al., 2014a), a quantum algorithm is devised which prepares coherent encodings of the target distributions, relying on quantum amplitude amplification, often attaining quadratic improvements in the number of training points, and even exponential improvements in the number of neurons, in some regimes. Quadratic improvements have also been obtained in pure data mining contexts, specifically in association rules mining (Yu et al., 2016), which, roughly speaking, identifies correlations between objects in large databases 110 . As our final example in the class of quantum algorithms relying on amplitude amplification, we mention the algorithm for the training of perceptrons. Here, quantum amplitude amplification was used to quadratically speed up training, but, interestingly, also to quadratically reduce the error probability. Since perceptrons constitute special cases of SVMs, this result is similar in motivation to the much older proposal (Anguita et al., 2003), but relies on more modern and involved techniques. b. Precursors of amplitude encoding In an early pioneering, and often overlooked, work (Schützhold, 2003), Schützhold proposed an interesting application of QC to pattern recognition problems, which anticipated many ideas that have only been investigated, and re-invented, by the community relatively recently.
The author considers the problem of identifying "patterns" in images, specified by a binary-valued function f(x, y) of the pixel coordinates (x, y). The function f is given as a quantum oracle U_f : |x⟩|y⟩|b⟩ → |x⟩|y⟩|b ⊕ f(x, y)⟩. The oracle is used in quantum parallel (applied to a superposition of all coordinates), and conditioned on the bit-value being 1 (this process succeeds with constant probability whenever the density of marked points is constant), leading to the state |ψ⟩ = N Σ_{x,y : f(x,y)=1} |x⟩|y⟩, where N is a normalization factor. Note that this state is proportional to the vectorized bitmap image itself, when given in the computational basis. Next, the author points out that "patterns", that is, repeating macroscopic features, can often be detected by applying the discrete Fourier transform to the image vector, which has classical complexity O(NM log(NM)). However, the quantum Fourier transform (QFT) can be applied to the state |ψ⟩ utilizing exponentially fewer gates. The author proceeds to show that measurements of the QFT-transformed state may yield useful information, such as pattern localization. This work is innovative in a few aspects. First, the author utilized the encoding of data-points (here strings of binary values) into amplitudes by using a quantum memory, in a manner which is related to the applications in the context of content-addressable memories discussed in section VI.B.1. It should be pointed out, however, that in the present application of amplitude encoding, non-binary amplitudes have a clear meaning (in, say, grayscale images), although this is not explicitly discussed by the author. Second, in contrast to all previous proposals, the author shows the potential for a quantifiable exponential computational complexity improvement for a family of tasks. However, this is all contingent on having access to the pre-filled database (U_f), the loading of which would nullify any advantage.
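The pattern-detection step can be emulated classically on a toy example. The sketch below (stripe pattern and sizes are hypothetical) mimics the effect of the QFT on the amplitude-encoded bitmap by applying a discrete Fourier transform to the image and reading off the dominant frequency:

```python
import numpy as np

# Toy binary image f(x, y): vertical stripes of period 8 along x.
N = 32
stripe = (np.arange(N) % 8 < 4).astype(float)
img = np.tile(stripe, (N, 1))             # N x N bitmap, indexed img[y, x]

# Amplitude encoding: |psi> is proportional to the vectorized bitmap.
psi = img.flatten() / np.linalg.norm(img)

# The QFT of |psi> corresponds to the 2-D DFT of the bitmap; the dominant
# nonzero frequency reveals the repeating pattern.
spec = np.abs(np.fft.fft2(img)) ** 2
spec[0, 0] = 0.0                          # ignore the constant (DC) term
ky, kx = np.unravel_index(np.argmax(spec), spec.shape)
print(kx, ky)  # kx = 32/8 = 4 (the stripe frequency), ky = 0
```

Classically this costs O(NM log(NM)); the quantum version applies the QFT to log-many qubits, but, as noted above, only if the state |ψ⟩ can be prepared cheaply.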
Aside from the fact that this loading may be considered a one-off overhead, Schützhold discusses physical means of loading data from optical images in a quantum-parallel approach, which may be efficient in practice. c. Amplitude encoding: linear algebra tools The very basic idea of amplitude encoding is to treat the states of N-level quantum systems as data vectors themselves. More precisely, given a data vector x ∈ R^N, the amplitude encoding is the normalized quantum state |x⟩ = Σ_i x_i |i⟩/||x||, where it is often also assumed that the norm of the vector x can be accessed separately. Note that N-dimensional data-points are encoded into the amplitudes of n ∈ O(log(N)) qubits. Any polynomial circuit applied to the n-qubit register encoding the data thus constitutes only a polylogarithmic computation relative to the data-vector size, and this is at the basis of all exponential improvements (also in the case of (Schützhold, 2003), discussed in the previous section) 111 . These ideas have led to a research area which could be called "quantum linear algebra" (QLA), that is, a collection of algorithms which solve certain linear algebra problems by directly encoding numerical vectors into state vectors. These quantum sub-routines have then been used to speed up numerous ML algorithms, some of which we describe later in this section. QLA includes algorithms for matrix inversion and principal component analysis (Harrow et al., 2009), and many others. For didactic purposes, we will first give the simplest example, which performs the estimation of inner products in logarithmic time. Tool 1: inner product evaluation Given access to boxes which prepare quantum states |ψ⟩ and |φ⟩, the overlap |⟨φ|ψ⟩|² can be estimated to precision ε using O(1/ε) copies, via the so-called swap test. The swap test applies a controlled-SWAP gate onto the state |ψ⟩|φ⟩, where the control qubit is set to the uniform superposition |+⟩. The probability of "succeeding", i.e.
observing |+⟩ on the control after the circuit, is given by (1 + |⟨φ|ψ⟩|²)/2, and this can be estimated by iteration (a more efficient option using quantum phase estimation is also possible). If the states |ψ⟩ and |φ⟩ encode unit-length data vectors, the success probability encodes their inner product up to sign. Norms and phases can also be estimated by minor tweaks to this basic idea; in particular, the actual norms of the amplitude-encoded states will be accessible via a separate oracle, and used in the algorithms. The sample complexity of this process depends only on the precision, whereas the gate complexity is proportional to O(log(N)), as that many qubits need to be control-swapped and measured. The swap test also works as expected if the reduced states are mixed and the overall state is a product. This method of computing inner products, relative to classical vector multiplication, offers an exponential improvement with respect to N (if calls to the devices which generate |ψ⟩ and |φ⟩ take O(1)), at the cost of significantly worse scaling with respect to errors, as classical algorithms typically have error scaling with the logarithm of the inverse error, O(log(1/ε)). However, in the context of ML problems, this can constitute an excellent compromise. Tool 2: quantum linear system solving Perhaps the most influential technique for quantum-enhanced algorithms for ML is based on one of the quintessential problems of linear algebra: solving systems of equations. In their seminal paper (Harrow et al., 2009), the authors proposed the first algorithm for "quantum linear system" (QLS) solving, which performs the following. Consider an N × N linear system Ax = b, where κ and d are the condition number 112 and sparsity of the Hermitian system matrix A 113 . Given (quantum) oracles giving the positions and values of the non-zero elements of A (that is, given standard oracles for A as encountered in Hamiltonian simulation, cf.
(Berry et al., 2015)), and an oracle which prepares the quantum state |b⟩ which is the amplitude encoding of b (up to norm), the algorithm in (Harrow et al., 2009) prepares the quantum state |x⟩ which is ε-close to the amplitude encoding of the solution vector x. The run-time of the first algorithm is O(κ²d² log(N)/ε). Note that the complexity scales with the logarithm of the system size, whereas any classical algorithm must scale at least with N, and this offers room for exponential improvements. The original proposal in (Harrow et al., 2009) relies on Hamiltonian simulation (implementing exp(iAt)), upon which phase estimation is applied. Once the phases are estimated, inversely proportional amplitudes, that is, the inverses of the eigenvalues of A, are imprinted via a measurement. It has also been noted that certain standard matrix pre-conditioning techniques are applicable in the QLS scheme (Clader et al., 2013). The linear scaling in the error in these proposals stems from the phase estimation subroutine. In more recent work, the authors also rely on the best Hamiltonian simulation techniques, but forego the expensive phase estimation. Roughly speaking, they (probabilistically) implement a linear combination of unitaries of the form Σ_k α_k exp(ikAt) upon the input state. This constitutes a polynomial in the unitaries which can be made to approximate the inverse operator A⁻¹ (in a measurement-accessible subspace) more efficiently. This, combined with numerous other optimizations, yields a final algorithm with complexity Õ(κd polylog(N/ε)), which is essentially optimal. It is important to note that the apparently exponentially more efficient schemes above do not trivially imply provable computational improvements, even if we assume free access to all oracles. For instance, one of the issues is that the quantum algorithm outputs a quantum state, from which classical values can only be accessed by sampling.
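Classically, the effect of the HHL-style algorithm on the amplitudes can be mimicked by eigendecomposition: expand b in the eigenbasis of A and rescale each component by the inverse eigenvalue. A small NumPy sketch (with a random, artificially well-conditioned Hermitian A; the sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
A = rng.standard_normal((N, N))
A = (A + A.T) / 2 + N * np.eye(N)   # Hermitian; shifted to keep it well-conditioned

b = rng.standard_normal(N)

# Decompose b in the eigenbasis of A and imprint the inverse eigenvalues
# (in the quantum algorithm, phase estimation plus a controlled rotation
# accomplish this on the amplitudes).
lam, U = np.linalg.eigh(A)          # A = U diag(lam) U^T
beta = U.T @ b                      # components of b in the eigenbasis
x = U @ (beta / lam)                # x = sum_j (beta_j / lam_j) u_j

print(np.linalg.norm(A @ x - b))    # residual at machine-precision level
```

The quantum algorithm holds the analogue of x only as amplitudes of a state |x⟩, which is exactly the caveat discussed in the text.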
Sampling to reconstruct the complete output vector would kill any improvements. On the other hand, certain functions of the amplitudes can be computed efficiently, while their computation may still require O(N) steps classically, yielding the desired exponential improvement. Thus this algorithm will be most useful as a sub-routine, an intermediary step of bigger algorithms, such as those for quantum machine learning. Tool 3: density matrix exponentiation Density matrix exponentiation (DME) is a remarkably simple idea, with few subtleties, and, arguably, profound consequences. Consider an N-dimensional density matrix ρ. From a mathematical perspective, ρ is nothing but a positive semidefinite matrix, although it is also commonly used to denote the quantum state of a quantum system, and these two are subtly different concepts. In the first reading, where ρ is a matrix (we will denote it [ρ] to avoid confusion), [ρ] is also a valid description of a physical Hamiltonian, with time-integrated unitary evolution exp(−i[ρ]t). Could one approximate exp(−i[ρ]t), having access to quantum systems prepared in the state ρ? Given sufficiently many copies (ρ^⊗n), the obvious answer is yes: one could use full state tomography to reconstruct [ρ] to arbitrary precision, and then execute the unitary using, say, Hamiltonian simulation (efficiency notwithstanding). A significantly simpler method was given: for any input state σ and one copy of ρ, the quantum state σ′ = Tr_B[exp(−i∆t S)(σ_A ⊗ ρ_B) exp(i∆t S)], (28) where S is the Hermitian operator corresponding to the quantum SWAP gate, approximates the desired time evolution to first order, for small ∆t: σ′ = σ − i∆t[ρ, σ] + O(∆t²). If this process is iterated, using fresh copies of ρ, the target state σ_ρ = exp(−iρt)σ exp(iρt) can be approximated to precision ε by setting ∆t to O(ε/t) and using O(t²/ε) copies of the state ρ.
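The first-order claim in Eq. (28) is easy to check numerically. A sketch for single qubits follows, using the identity exp(−i∆t S) = cos(∆t) I − i sin(∆t) S (valid since S² = I); the random states and step size are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_density_matrix(d):
    # Random d-dimensional density matrix (positive semidefinite, trace 1).
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = M @ M.conj().T
    return rho / np.trace(rho)

d = 2
sigma, rho = rand_density_matrix(d), rand_density_matrix(d)

# SWAP operator S on C^d x C^d: S |i>|j> = |j>|i>.
S = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        S[j * d + i, i * d + j] = 1.0

dt = 1e-3
U = np.cos(dt) * np.eye(d * d) - 1j * np.sin(dt) * S   # exp(-i dt S), as S^2 = I
evolved = U @ np.kron(sigma, rho) @ U.conj().T
# Partial trace over the second (rho) subsystem, cf. Eq. (28).
sigma_out = evolved.reshape(d, d, d, d).trace(axis1=1, axis2=3)

# First-order prediction sigma - i dt [rho, sigma]; the mismatch is O(dt^2).
pred = sigma - 1j * dt * (rho @ sigma - sigma @ rho)
print(np.linalg.norm(sigma_out - pred))
```

Iterating this map with fresh copies of ρ, as described above, accumulates the commutator steps into the full evolution exp(−iρt)σ exp(iρt).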
DME is, in some sense, a generalization of the process of using SWAP tests between two quantum states to simulate aspects of a measurement specified by one of the quantum states. One immediate consequence of this result is in the context of Hamiltonian simulation, which can now be efficiently realized (with no dependency on the sparsity of the Hamiltonian) whenever one can prepare quantum systems in a state which is represented by the matrix of the Hamiltonian. In particular, this can be realized using qRAM-stored descriptions of the Hamiltonian, whenever the Hamiltonian itself is of low rank. More generally, this also implies, e.g., that QLS algorithms can also be efficiently executed when the system matrix is not sparse, but rather dominated by a few principal components, i.e. close to a low-rank matrix 114 . Remark: Algorithms for QLS, inner product evaluation, quantum PCA, and consequently almost all quantum algorithms listed in the remainder of this section, also assume "pre-loaded databases", which allow the accessing of information in quantum parallel, and/or the accessing or efficient preparation of amplitude-encoded states. The problem of parallel access, or even the storing of quantum states, has been addressed and mostly resolved using so-called quantum random access memory (qRAM) architectures (Giovannetti et al., 2008) 115 . The same qRAM structures can also be used to realize the oracles utilized in the approaches based on quantum search. However, having access to quantum databases pre-filled with classical data does not a priori imply that quantum amplitude-encoded states can also be generated efficiently, which is, at least implicitly, assumed in most works below. For a separate discussion of the cost of some of these assumptions, we refer the reader to (Aaronson, 2015). d.
Amplitude encoding: algorithms With all the quantum tools in place, we can now present a selection of quantum algorithms for various supervised and unsupervised learning tasks, grouped according to the class of problems they solve. The majority of the proposals of this section follow a clear paradigm: the authors investigate established ML approaches, and identify those where the computationally intensive parts can be reduced to linear algebra problems, most often diagonalization and/or equation solving. In this sense, further improvements in quantum linear algebra approaches are likely to lead to new results in quantum ML. As a final comment, all the algorithms below pertain to discrete-system implementations. Recently, in (Lau et al., 2017), the authors have also considered continuous-variable variants of qRAM, QLS and DME, which immediately lead to continuous-variable implementations of all the quantum tools and most of the quantum-enhanced ML algorithms listed below. Regression algorithms One of the first proposals for quantum enhancements tackled linear regression 114 Since a density operator is normalized, the eigenvalues of data-matrices are rescaled by the dimension of the system. If the eigenvalues are close to uniform, they are rendered exponentially small in the qubit number. This then requires exponential precision in DME, which would offset any speed-ups. However, if the spectrum is dominated by a constant number of terms, the precision required, and the overall complexity, is again independent of the dimension, allowing overall efficient algorithms. 115 qRAM realizes the following mapping: |addr⟩|b⟩ → |addr⟩|b ⊕ d_addr⟩, where d_addr represents the data stored at the address addr (the ⊕ represents modulo addition, as usual); this is the reversible variant of conventional RAM memories. In (Giovannetti et al., 2008), it was shown that a qRAM can be constructed such that its internal processing scales logarithmically in the number of memory cells.
problems, specifically least squares fitting, and relied on QLS. In least squares fitting, we are given N M-dimensional real data-points paired with real labels, (x_i, y_i)_{i=1}^N, x_i = (x_i^j)_j ∈ R^M, y = (y_i)_i ∈ R^N. In regression, y is called the response variable (also regressand or dependent variable), whereas the data-points x_i are called predictors (or regressors or explanatory variables), and the goal of least-squares linear regression is to establish the best linear model, that is, β = (β_j)_j ∈ R^M given by argmin_β ||Xβ − y||², where the data matrix X collects the data-points x_i as rows. In other words, linear regression assumes a linear relationship between the predictors and the response variables. It is well-established that the solution to the above least-squares problem is given by β = X⁺y, where X⁺ is the Moore-Penrose pseudoinverse of the data matrix, which, in the case that X†X is invertible, is given by X⁺ = (X†X)⁻¹X†. The basic idea is to apply X† onto the initial vector |y⟩ which amplitude-encodes the response variables, obtaining a state proportional to X†|y⟩. This can be done e.g. by modifying the original QLS algorithm (Harrow et al., 2009) to imprint not the inverses of the eigenvalues but the eigenvalues themselves. Following this, the task of applying (X†X)⁻¹ (onto the generated state proportional to X†|y⟩) is interpreted as an equation-solving problem for the system (X†X)β = X†y. The end result is a quantum state |β⟩ proportional to the solution vector β, obtained in time O(κ⁴d³ log(N)/ε), where κ, d and ε are the condition number, the sparsity of the "symmetrized" data matrix X†X, and the error, respectively. Again, we have in general few guarantees on the behaviour of κ, and an obvious restriction on the sparsity d of the data matrix. However, whenever both are O(polylog(N)), we have a potential 116 for exponential improvements.
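The classical counterpart of this computation is a one-liner. A toy sketch of the pseudoinverse solution β = (X†X)⁻¹X†y on synthetic real-valued data (the sizes and coefficients are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 50, 3                        # N data-points of dimension M
X = rng.standard_normal((N, M))     # data matrix, data-points as rows
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.01 * rng.standard_normal(N)

# beta = (X^T X)^{-1} X^T y, valid here since X^T X is invertible ...
beta = np.linalg.solve(X.T @ X, X.T @ y)

# ... which coincides with the Moore-Penrose pseudoinverse solution X^+ y.
print(np.linalg.norm(beta - np.linalg.pinv(X) @ y))
```

The quantum algorithm performs the analogous two steps (apply X†, then solve the system for X†X) directly on the amplitudes.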
The quantum algorithm is not obviously useful for actually finding the solution vector β, as it is encoded in a quantum state. Nonetheless, it is useful for estimating the quality of fit: essentially, by applying X onto |β⟩ we obtain the resulting prediction of y, which can be compared to the actual response variable vector via a swap test efficiently 117 . These basic ideas for quantum linear regression have since been extended in a few works. In an extensive and complementary work (Wang, 2014), the authors rely on the powerful technique of "qubitization" (Low and Chuang, 2016), and optimize for the goal of actually producing the best-fit parameters β. By necessity, the complexity of their algorithm is proportional to the number of data-points M, but it is logarithmic in the data dimension N, and quite efficient in the other relevant parameters. In (Schuld et al., 2016), the authors follow the original ideas more closely, and achieve the same results as in the original work also when the data matrix is not sparse, but rather low-rank. Further, they improve on the complexities by using other state-of-the-art methods. This latter work critically relies on the technique of DME. Clustering algorithms In (Lloyd et al., 2013), amplitude encoding and inner product estimation are used to estimate the distance ||u − v̄|| between a given data vector u and the average of a collection of data points (the centroid) v̄ = Σ_i v_i/M for M data-points {v_i}_i, in time which is logarithmic in both 116 In this section we often talk about the "potential" for exponential speed-ups because some of the algorithms as given do not solve classical computational problems for which classical lower bounds are known. Consider the conditions which have to be satisfied for the QLS algorithm to offer exponential speed-ups. First, we need to be dealing with problems where the preparation of the initial state and qRAM memory can be done in O(polylog(N)). Next, the problem condition number must be O(polylog(N)) as well.
Assuming all this is satisfied, we are still not done: the algorithm generates a quantum state, and as classical algorithms do not output quantum states, we cannot yet talk about quantum speed-ups. The quantum state can be measured, outputting at most O(polylog(N)) bits which are functions of the quantum state (more would kill the exponential speed-ups due to the printout alone). However, the hardness of classically computing these output bits, given all the initial assumptions, is not obvious and needs to be proven. 117 In the paper, the authors take care to appropriately symmetrize all the matrices in a manner we discussed in a previous footnote, but for clarity we ignore this technical step. the vector length N, and the number of points M. Using this as a building block, the authors also show an algorithm for k-means classification/clustering (where the computing of the distances to the centroid is the main cost), achieving an overall complexity of O(M log(MN)/ε), which may be further improved in some cases. Here, it is assumed that the amplitude-encoded state vectors, and their normalization values, are accessible via an oracle, or that they can be efficiently implemented from a qRAM storing all the values. Similar techniques, combined with coherent quantum phase estimation and Grover-based optimization, have also been used for k-nearest-neighbour algorithms for supervised and unsupervised learning. Quantum Principal Component Analysis The ideas of DME were, in the same paper, immediately applied to a quantum version of principal component analysis (PCA). PCA constitutes one of the most standard unsupervised learning techniques, useful for dimensionality reduction, but naturally it has a large scope of applications beyond ML. In quantum PCA, for a quantum state ρ, one applies quantum phase estimation of the unitary exp(−i[ρ]), realized using DME, onto the state ρ itself.
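Classically, the analogous computation is an eigendecomposition. The sketch below (with a synthetic, low-rank-dominated ρ; all values hypothetical) also mimics the characteristic quantum-PCA feature that eigenvalue λ_i is obtained with probability λ_i, so the dominant principal components are the ones most likely to be recovered:

```python
import numpy as np

rng = np.random.default_rng(3)

# A density-matrix-like object: normalized covariance with one dominant direction.
A = rng.standard_normal((100, 4)) * np.array([10.0, 3.0, 0.5, 0.1])
rho = A.T @ A
rho /= np.trace(rho)                  # trace 1, positive semidefinite

lam, V = np.linalg.eigh(rho)          # eigenvalues (ascending), summing to 1

# Quantum PCA samples the pair (lam_i, |lam_i>) with probability lam_i:
samples = rng.choice(len(lam), size=1000, p=lam)
print(lam[-1], (samples == len(lam) - 1).mean())
```

When the spectrum is dominated by a few eigenvalues, as here, few samples suffice to recover the principal components, which is the regime where quantum PCA is efficient (cf. footnote 114).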
In the ideal case of absolute precision, given the spectral decomposition ρ = Σ_i λ_i |λ_i⟩⟨λ_i|, the phase-estimation process generates the state Σ_i λ_i |λ_i⟩⟨λ_i| ⊗ |λ̃_i⟩⟨λ̃_i|, where λ̃_i denotes the numerical estimate of the eigenvalue λ_i, corresponding to the eigenvector |λ_i⟩. Sampling from this state recovers both the (larger) eigenvalues and the corresponding quantum states, which amplitude-encode the eigenvectors and may be used in further quantum algorithms. The recovery of the high-value eigenvalues and eigenvectors constitutes the essence of classical PCA as well. Quantum Support Vector Machines One of the most influential papers in quantum-enhanced ML relies on QLS and DME for the task of quantizing support vector machine algorithms. For the basic ideas behind SVMs see section II.A.2. We focus our attention on the problem of training SVMs, as given by the optimization task in its dual form in Eq. (6), repeated here for convenience: (α*_1, . . . , α*_N) = argmax_{α_1...α_N} Σ_i α_i − (1/2) Σ_{i,j} α_i α_j y_i y_j x_i·x_j, such that α_i ≥ 0 and Σ_i α_i y_i = 0. The solution of the desired SVM is then easily computed as w* = Σ_i y_i α*_i x_i. As a warm-up result, the authors point out that using the quantum evaluation of the inner products appearing in the expression above can already lead to exponential speed-ups with respect to the data-vector dimension N. The quantum algorithm complexity is, however, still polynomial in the number of data-points M, and the error dependence is now linear (as the error of the inner product estimation is linear). The authors proceed to show that full exponential improvements can be possible (with respect to both N and M), however only for the special case of least-squares SVMs. Given the background discussions of DME and QLS above, the basic idea is easy to explain. Recall that the problem of training least-squares SVMs reduces to a simpler optimization, specifically a least-squares minimization.
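The least-squares SVM system that this minimization reduces to (restated as Eq. (30) below) is easy to solve classically on a toy set. A sketch with a linear kernel follows (synthetic two-blob data; the value of γ and all sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy training set: M labelled points in R^2, labels +/-1, two blobs.
M = 20
y = np.concatenate([np.ones(10), -np.ones(10)])
X = rng.standard_normal((M, 2)) + np.outer(y, [3.0, 0.0])

# Least-squares SVM training: solve the (M+1) x (M+1) block system
#   [ 0         1^T          ] [b    ]   [0]
#   [ 1    Omega + gamma^-1 I] [alpha] = [y],   with Omega_ij = x_i . x_j
gamma = 10.0
A = np.zeros((M + 1, M + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = X @ X.T + np.eye(M) / gamma
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

# Classify with sign(sum_i alpha_i x_i . x + b); accurate on this easy set.
pred = np.sign(X @ (X.T @ alpha) + b)
print((pred == y).mean())
```

The quantum algorithm solves the same block system, but with the matrix realized through DME and the solution held in amplitudes, as described next.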
As we have seen previously, such a minimization reduces to equation solving, which was given by the system in Eq. (14), which we repeat here: [[0, 1ᵀ], [1, Ω + γ⁻¹I]] (b, α)ᵀ = (0, Y)ᵀ. (30) Here, 1 is an "all ones" vector, Y is the vector of the labels y_i, α is the vector of the Lagrange multipliers yielding the solution, b is the offset, γ is a parameter depending on the hyperparameter C, and Ω is the matrix collecting the (mapped) inner products of the training vectors, so Ω_{i,j} = x_i·x_j. The key technical contribution is to demonstrate how the system above can be realized in a manner suitable for QLS. To give a flavour of the approach, we simply point out that the system sub-matrix Ω is proportional to the reduced density matrix of the quantum state Σ_i |x_i| |i⟩_1 |x_i⟩_2, obtained after tracing out subsystem 2. This state can, under some constraints, be efficiently realized with access to a qRAM encoding the data-points. Following this, DME enables the application of QLS where the system matrix has a block proportional to Ω, up to technical details we omit for brevity. The overall quantum algorithm generates the quantum state |ψ_out⟩ ∝ b|0⟩ + Σ_{i=1}^M α_i |i⟩, encoding the offset and the multipliers. The multipliers need not be extracted from this state by sampling. Instead, any new point can be classified by (1) generating an amplitude-encoded state of the input, and (2) estimating the inner product between this state and |ψ'_out⟩ ∝ b|0⟩|0⟩ + Σ_{i=1}^M α_i |x_i| |i⟩|x_i⟩, which is obtained by calling the quantum data oracle on |ψ_out⟩. This process has an overall complexity of O(κ_eff³ ε⁻³ log(MN)), where κ_eff depends on the eigenstructure of the data matrix. Whenever this term is polylogarithmic in the data size, we have a potential for exponential improvements. Gaussian process regression In (Zhao et al., 2015), the authors demonstrate how QLS can be used to dramatically improve Gaussian process regression (GPR), a powerful supervised learning method.
GPR can be thought of as a stochastic generalization of standard regression: given a training set {x_i, y_i}, it models the latent function (which assigns labels y to data-points) assuming Gaussian noise on the labels, y = f(x) + ε, where ε encodes independent and identically distributed Gaussian noise. More precisely, GPR is a process in which an initial distribution over possible latent functions is refined by taking into account the training set points, using Bayesian inference. Consequently, the output of GPR is, roughly speaking, a distribution over models f which are consistent with the observed data (the training set). While the description of such a distribution may be large, in computational terms, to predict the value of a new point x* in GPR one needs to compute just two numbers: a linear predictor (also referred to as the predictive mean, or simply mean), and the variance of the predictor, both specific to x*. These numbers characterize the distribution of the predicted value y* under the GPR model consistent with the training data. Further, it turns out, both values can be computed using modified QLS algorithms. The fact that this final output size is independent of the dataset size, combined with QLS, provides possibilities for exponential speed-ups in terms of data size. This naturally holds provided the data is available in qRAM, as is the case in most algorithms of this section. It should be mentioned that the authors take meticulous care to list all the "hidden costs" (and to work out the intermediary algorithms) in the final tally of the computational complexity. Geometric and topological data analysis All the algorithms we presented in this subsection thus far critically depend on having access to "pre-loaded" databases - the loading itself would introduce a linear dependence on the database size, whereas the inner-product, QLS and DME algorithms provide potential for just logarithmic dependence.
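The two numbers in question are, in the standard GPR formulation, the predictive mean k*ᵀ(K + σ²I)⁻¹y and the predictive variance k(x*, x*) − k*ᵀ(K + σ²I)⁻¹k*; both hinge on a matrix inversion, which is where QLS enters. A classical sketch, with an illustrative RBF kernel and toy data:

```python
import numpy as np

# Classical GPR prediction: the two numbers computed for a new point x*.
def rbf(a, b, ell=1.0):
    """Squared-exponential kernel (an illustrative choice)."""
    return np.exp(-np.sum((a - b) ** 2) / (2 * ell ** 2))

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)   # noisy labels

K = np.array([[rbf(a, b) for b in X] for a in X])  # kernel matrix
sigma2 = 0.01                                       # noise variance
Ky = K + sigma2 * np.eye(len(X))

def predict(x_star):
    k_star = np.array([rbf(x_star, xi) for xi in X])
    mean = k_star @ np.linalg.solve(Ky, y)          # predictive mean
    var = rbf(x_star, x_star) - k_star @ np.linalg.solve(Ky, k_star)
    return mean, var

mean, var = predict(np.array([0.5]))
```

The quantum proposal replaces the two `solve` calls (the only steps scaling with the dataset size) by QLS, leaving the constant-size outputs unchanged.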
However, this can be circumvented in cases where the data-points in the quantum database can be efficiently computed individually. This is reminiscent of the fact that most applications of Grover's algorithm have a step in which the Grover oracle is efficiently computed. In ML applications, this can occur if the classical algorithm requires, as a computational step, a combinatorial exploration of the (comparatively small) dataset. Then, the quantum algorithm can generate the combinatorially larger space in quantum parallel, thereby efficiently computing the effective quantum database. The first example where this was achieved was presented in (Lloyd et al., 2016), in the context of topological and geometric data analysis. These techniques are very promising in the context of ML, as topological features of data do not depend on the metric of choice, and thus capture the truly robust features of the data. The notion of topological features (in the ML world of discrete data points) refers to those properties which persist when the data is observed at different spatial resolutions. Such persistent features are thus robust and less likely to be artefacts of noise or of the choice of parameters, and are mathematically formalized through so-called persistent homology. A particular family of features of interest are the numbers of connected components, holes, and voids (or cavities). These numbers, which are defined for simplicial complexes (roughly, closed sets of simplices), are called Betti numbers. To extract such features from data, one must thus construct nested families of simplicial complexes from the data, and compute the corresponding features captured by the Betti numbers. However, there are combinatorially many simplices to consider and analyze, and one can roughly think of each possible simplex as a data-point requiring further analysis. Fortunately, they are efficiently generated from a small set - essentially the collection of the pair-wise distances between datapoints.
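The simplest of these features, the zeroth Betti number (the number of connected components), can be sketched classically: build the graph linking points closer than a resolution ε and count components, then vary ε. Features surviving many resolutions are the persistent ones. The clustered toy data below is an illustrative construction.

```python
import numpy as np

# Betti-0 at resolution eps: connected components of the graph whose
# edges link data-points closer than eps (union-find on pairwise distances).
def betti0(points, eps):
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < eps:
                parent[find(i)] = find(j)   # union the components
    return len({find(i) for i in range(n)})

# Two well-separated clusters: at intermediate resolutions Betti-0 = 2
# persists, before collapsing to 1 when eps exceeds the cluster separation.
pts = np.vstack([np.random.default_rng(3).normal(0, 0.1, (10, 2)),
                 np.random.default_rng(4).normal(5, 0.1, (10, 2))])
```

The combinatorial blow-up mentioned above only appears for higher Betti numbers, where all candidate simplices at each resolution must be enumerated - the step the quantum algorithm compresses.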
The authors show how to generate quantum states which encode the simplices in logarithmically few qubits, and further show that from this representation the Betti numbers can be efficiently estimated. Iterating this at various resolutions allows the identification of persistent features. As usual, full exponential improvements happen under some assumptions on the data, and here they are manifest in the capacity to efficiently construct the simplicial states - in particular, having the total number of simplices in the complex be exponentially large would suffice, although it is not clear when this is the case, see (Aaronson, 2015). This proposal provides evidence that quantum ML methods based on amplitude encoding may, at least in some cases, yield exponential speed-ups even if data is not pre-stored in a qRAM or an analogous system. As mentioned, a large component of modern approaches to quantum-enhanced ML relies on quantum linear algebra techniques, and any progress in this area may lead to new quantum ML algorithms. Promising recent examples of this were given in terms of algorithms for quantum gradient descent (Rebentrost et al., 2016b; Kerenidis and Prakash, 2017), which could e.g. lead to novel quantum methods for training neural networks. VII. QUANTUM LEARNING AGENTS, AND ELEMENTS OF QUANTUM AI The topics discussed thus far in this review, with few exceptions, deal with the relationship between physics, mostly QIP, and traditional ML techniques which allow us to better understand data, or the process which generates it. In this section, we go one step beyond data analysis and optimization techniques and address the relationship between QIP and more general learning scenarios, or even between QIP and AI. As mentioned, in more general learning or AI discussions, we typically talk about agents, interacting with their environments, which may be, or more often fail to be, intelligent.
In our view, by far the most important aspect of any intelligent agent is its capacity to learn from its interactions with its environment. However, general intelligent agents learn in environments which are complex and changeable. Further, the environments are susceptible to being changed by the agent itself, which is the crux of e.g. learning by experiments. All this delineates general learning frameworks, which begin with RL, from more restricted settings of data-driven ML. In this section, we will consider physics-oriented approaches to learning via interaction, specifically the PS model, and then focus on quantum enhancements in the context of RL 118. Following this, we will discuss an approach for considering the most general learning scenarios, where the agent, the environment and their interaction are treated quantum-mechanically: this constitutes a quantum generalization of the broad AE framework underlying modern AI. We will finish by briefly discussing other results from QIP which do not directly deal with learning, but which may still play a role in the future of QAI. A. Quantum learning via interaction Executive summary: The first proposal which addressed the specification of learning agents, designed with the possibility of quantum processing of episodic memory in mind, was the model of Projective Simulation (PS). The results on quantum improvements of agents which learn by interacting with classical environments have mostly been given within this framework. The PS agent deliberates by effectively projecting itself into conceivable situations, using its memory, which organizes its episodic experiences in a stochastic network. Such an agent can solve basic RL problems, meta-learn, and solve problems with aspects of generalization. The deliberation is a stochastic diffusion process, allowing for a few routes to quantization. Using quantum random walks, quadratic speed-ups can be obtained.
The applications of QIP to reinforcement and other interactive learning problems have been studied comparatively less than quantum enhancements of supervised and unsupervised problems. One of the first proposals providing a coherent view on learning agents from a physics perspective was that of Projective Simulation (abbrv. PS) (Briegel and De las Cuevas, 2012). We first provide a detailed description of the PS model, and review the few other works related to this topic at the end of the section. PS is a flexible framework for the design of learning agents, motivated both from psychology and physics, and influenced by modern views on robotics. One of the principal reasons why we focus on this model is that it provides a natural route to quantization, which will be discussed presently. However, already the classical features of the model reveal an underlying physical perspective which may be of interest to the reader, and which we briefly expose first. The PS viewpoint on (quantum) agents is conceived around a few basic principles. First, in the PS view, the agent is a physical, or rather, an embodied entity, existing relative to its environment, rather than a mathematical abstraction 119. Note, this does not prohibit computer programs from being agents: while the print-out of the code is not an agent, the executed instantiation of the code - the running program, so to speak - has its own well-defined virtual interfaces, which delineate it from, and allow interaction with, other programs in its virtual world; in this sense, that program too is embodied. Second, the interfaces of the agent are given by its sensors, collecting the environmental input, and its actuators, enabling the agent to act on the environment. Third, the learning is learning from experience, and the interfaces of the agent constrain the elementary experiences of the agent to be collections from the sets of percepts S = {s_i}_i which the agent can perceive, and actions A = {a_i}_i.
At this point we remark that the basic model assumes discretized time and sensory space, which is consistent with actual realizations, although this could be generalized. (It is not fully general - for instance, learning in real environments always involves supervised and other learning paradigms to control the size of the exploration space, but also various other techniques which occur when we try to model settings in continuous, or otherwise not turn-based, fashion.) Fourth, a (good) learning agent's behaviour - that is, the choice of actions, given certain percepts - is based on its cumulative experience, accumulated in the agent's memory, which is structured. This brings us to the central concept of the PS framework, which is the memory of the agent: the episodic and compositional memory (ECM). The ECM is a structured network of units of experience which are called clips or episodes. A clip, denoted c_i, can represent 120 an individual percept or action, so c_i ∈ S ∪ A - and indeed there is no other external type appearing in the PS framework. However, experiences may be more complex (such as an autobiographical episodic memory, similar to short video-clips, where we remember a temporally extended sequence of actions and percepts that we experienced). This brings us to the following recursive definition: a clip is either a percept, an action, or a structure over clips. FIG. 12 a) The agent learns to associate symbols to one of the two movements. b) The internal PS network requires only action and percept clips, arranged in two layers, with connections only from percepts to actions. The "smiling" edges are rewarded. Adapted from (Briegel and De las Cuevas, 2012). Typical examples of structured clips are percept-action sequences (s_1, a_1, . . . , s_k, a_k) describing what happened, i.e. a length-k history of interaction between the agent and the environment. Another example are simple sets of percepts (s_1 or s_2 . .
.), which will later be used to generalize knowledge. The overall ECM is a network of clips (that is, a labeled directed graph whose vertices are the clips), where the edges organize the agent's previous experiences, and which has a functional purpose explained momentarily. Fifth, a learning agent must act: that is, there has to be a defined deliberation mechanism which, given the current percept and the state of memory (i.e. the current ECM network), probabilistically decides on (or rather "falls into") the next action and performs it. Finally, sixth, a learning agent must learn; that is, the ECM network must change under experiences, and this occurs in two modes: by (1) changing the weights of the edges, and (2) changing the topology of the network, through the addition or deletion of clips. The above six principles describe the basic blueprint behind PS agents. The construction of a particular agent requires us to further specify certain components, which we will exemplify using the simplest example: a reinforcement learning PS agent capable of solving the so-called invasion game. In the invasion game, the agent (Fig. 12) is facing an attacker, who must be blocked by appropriately moving to the left or right. These two options form the actions of the agent. The attacker presents a symbol, say a left- or right-pointing arrow, to signal what its next move will be. Initially, the percepts have no meaning for the agent, and indeed the attacker can alter the meaning in time. The basic scenario here is, in RL terms, a contextual two-armed bandit problem (Langford and Zhang, 2008), where the agent gets rewarded when it correctly couples the two percepts to the two actions. The basic PS agent that can solve this is specified as follows. The action and percept spaces are the two moves and two signals, so A = {−, +} (left and right move) and S = {←, →}, respectively. The clip set is just the union of the two sets.
The connections are directed edges from percepts to actions, weighted with real values, called h-values, h_ij ≥ 1, which form the h-matrix. The deliberation is realized by a random walk in the memory space, governed proportionally to the h-matrix: that is, the probability of a transition from percept s to action a is given by p(a|s) = h_{s,a} / Σ_{a'} h_{s,a'}. In other words, the column-wise normalized h-matrix specifies the stochastic transition matrix of the PS model, in the Markov chain sense. Finally, the learning is manifest in the tuning of the h-values via an update rule, which in its most basic form is given by: h_{t+1}(c_j, c_i) = h_t(c_j, c_i) + δ_{c_j,c_i} λ, (31) where t, t+1 denote consecutive time steps, λ denotes the reward received in the last step, and δ_{c_j,c_i} is 1 if and only if the c_i to c_j transition occurred in the previous step. Simply stated, used edges get rewards. The h-value h_t(c_i, c_j) associated to the edge connecting clips c_i, c_j will, when the time step t is clear from context, simply be denoted h_ij. One can easily see that the above rule constitutes a simple RL mechanism, and that it will indeed over time lead to a winning strategy in the invasion game: since only the correctly paired transitions get rewards, they are taken more and more frequently. However, the h-values in this simple process diverge, which also makes re-learning, in the eventuality that the rules of the game change, more difficult with time. To manage this, one typically introduces a decay, or dissipation, parameter γ, leading to the rule: h_{t+1}(c_j, c_i) = h_t(c_j, c_i) − γ(h_t(c_j, c_i) − 1) + δ_{c_j,c_i} λ. (32) The dissipation is applied at each time step. Note that the dissipating term diminishes the values of h_t(c_j, c_i) by an amount proportional to the deviation of these values from 1, which is the initial value.
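The basic dynamics of Eqs. (31)-(32) can be sketched in a few lines of code. The invasion-game environment, parameter values and seed below are illustrative choices for the sketch, not those of the original paper.

```python
import random

# Minimal PS agent for the invasion game: h-values on percept->action edges,
# transition probabilities p(a|s) = h[s][a] / sum_a' h[s][a'], and the
# reward-plus-dissipation update of Eq. (32).
percepts = ["<-", "->"]
actions = ["-", "+"]
h = {s: {a: 1.0 for a in actions} for s in percepts}   # initial h = 1

gamma_diss, reward = 0.01, 1.0
correct = {"<-": "-", "->": "+"}     # the attacker's (hidden) rule
rng = random.Random(0)

def choose(s):
    """Sample an action proportionally to the h-values (the random walk step)."""
    total = sum(h[s].values())
    r, acc = rng.random() * total, 0.0
    for a in actions:
        acc += h[s][a]
        if r < acc:
            return a
    return actions[-1]

for _ in range(5000):
    s = rng.choice(percepts)
    a = choose(s)
    lam = reward if a == correct[s] else 0.0
    for sp in percepts:              # dissipation, applied to every edge
        for ap in actions:
            h[sp][ap] -= gamma_diss * (h[sp][ap] - 1.0)
    h[s][a] += lam                   # only the used edge can be rewarded
```

After training, the rewarded edges saturate near the limiting value 1 + λ/γ while the others decay back towards 1, so the agent blocks correctly with high probability.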
The above rule leads to the unit value h = 1 when there are no rewards, and to a limiting upper value of 1 + λ/γ when every move is rewarded. This limits the maximal efficiency to 1 − (2 + λ/γ)^{−1}, but, as a trade-off, leads to much faster re-learning. This is illustrated in Fig. 13. The update rules get a bit more involved in the setting of delayed rewards. For instance, in a maze, or so-called grid-world settings, illustrated in Fig. 14, it is a sequence of actions that leads to a reward. In other words, the final reward must "propagate" to all relevant percept-action edges which were involved in the winning move sequence. In the basic PS model, this is done via a so-called glow mechanism: to each edge in the ECM, a glow value g_ij is assigned in addition to the h_ij-value. It is set to 1 whenever the edge is used, and decays at the rate η ∈ [0, 1], that is, g_{ij}^t = (1 − η) g_{ij}^{t−1}. The h-value update rule is amended to reward all "glowing" edges, proportionally to the glow value, whenever a reward is issued: h_{t+1}(c_j, c_i) = h_t(c_j, c_i) − γ(h_t(c_j, c_i) − 1) + g_t(c_j, c_i) λ. (33) In other words, all the edges which contributed to the final reward get a fraction of it, in proportion to how recently they were used. This parallels the intuition that the actions more recent relative to the rewarded move played a larger role in getting rewarded. The expression in Eq. 33 has functional similarities to the Q-learning action-value update rule in Eq. 21. However, the learning dynamics are different, and the expressions are conceptually different - Q-learning updates estimate bounded Q-values, whereas PS is not a state-value estimation method, but rather a purely reward-driven system. The PS framework allows other constructions as well. In (Briegel and De las Cuevas, 2012), the authors also introduced emoticons - edge-specific flags which capture aspects of intuition.
These can be used to speed up re-learning via a reflection mechanism, where a random walk can be iterated multiple times until a desired - flagged - set of actions is hit; see (Briegel and De las Cuevas, 2012) for more detail. Further in this direction, the deliberation of the agent can be based not on a hitting process - where the agent performs the first action it hits - but rather on a mixing process. In the latter case, the ECM is a collection of Markov chains, and the correct action is sampled from the stationary distribution over the ECM. This model is referred to as the reflective PS (rPS) model, see Fig. 15. Common to all models, however, is that the deliberation process is governed by a stochastic walk, specified by the ECM. FIG. 14 The environment is essentially a grid, where each site has an individual percept, the moves dictate the movements of the agent (say up, down, left, right), and certain sites are blocked off - walls. The agent explores this world looking for the rewarded site. When the exit is found, a reward is given and the agent is reset to the same initial position. Adapted from (Melnikov et al., 2014). Regarding performance, the basic PS structure, with a two-layered network encoding percepts and actions - which matches standard tabular RL approaches - was extensively analysed and benchmarked against other models (Melnikov et al., 2014; Mautner et al., 2015). However, the questions that are emphasized in the PS literature diverge from questions of performance in RL tasks, in two directions. First, the authors are interested in the capacities of the PS model beyond textbook RL. For instance, in (Mautner et al., 2015) it was shown that the action-composition aspects of the ECM allow the agent to perform better in some benchmarking scenarios, which had natural applications, for example, in the context of protecting MBQC from unitary noise, and in the context of finding novel quantum experiments (Melnikov et al., 2017), elaborated on in section IV.C.
Further, by utilizing the capacity of the ECM to encode larger and multiple networks, we can also address problems which require generalization - inferring correct behaviour from percept similarity - but also design agents which autonomously optimize their own meta-parameters, such as γ and η in the PS model. That is, the agents can meta-learn (Makmal et al., 2016). These problems go beyond the basic RL framework, and the PS framework is flexible enough to also allow the incorporation of other learning models - e.g. neural networks could be used to perform dimensionality reduction (which could allow for broader generalization capabilities), or even to directly optimize the ECM itself. The PS model has been combined with such additional learning machinery in an application to robotics and haptic skill learning (Hangl et al., 2016). However, there is an advantage in keeping the underlying PS dynamics homogeneous, that is, essentially solely based on random walks over the PS network, in that it offers a few natural routes to quantization. This is the second direction of foundational research in PS. For instance, in (Briegel and De las Cuevas, 2012) the authors expressed the entire classical PS deliberation dynamics as an incoherent part of a Liouvillian dynamics (a master equation for the quantum density operator), which also included a coherent part (Hamiltonian-driven unitary dynamics). This approach may yield advantages both in deliberation time, and it also expands the space of internal policies the agent can realize. Another perspective on the quantization of the PS model was developed in the framework of discrete-time quantum walks. In (Paparo et al., 2014), the authors exploited the paradigm of Szegedy-style quantum walks to quadratically improve the deliberation times of rPS agents.
The Szegedy (Szegedy, 2004) approach to random walks can be used to specify a unitary random walk operator U_P for a given transition matrix P 121, whose spectral properties are intimately related to those of P itself. We refer the reader to the original references for the exact specification of U_P, and just point out that U_P can be efficiently constructed via a simple circuit depending on P, or given black-box access to the entries of P. Assume P corresponds to an irreducible and aperiodic (guaranteeing a unique stationary distribution), and also time-reversible (meaning it satisfies the detailed balance conditions) Markov chain. Let π = (π_i)_i be the unique stationary distribution of P, δ the spectral gap of P 122, and |π⟩ = Σ_i √π_i |i⟩ the coherent encoding of the distribution π. Then we have that a) U_P |π⟩ = |π⟩, and b) the eigenvalues {λ_i} of P and the eigenphases θ_i of U_P are related by λ_i = cos(θ_i) 123. This is important as the spectral properties, specifically the spectral gap δ, more-or-less tightly fix the mixing time - that is, the number of applications of P needed to obtain the stationary distribution - to Õ(1/δ), by the famous Aldous bounds (Aldous, 1982). This quantity will later bound the complexity of classical agents. In contrast, for U_P we have that its non-zero eigenphases θ are not smaller than Õ(√δ). This quadratic difference between the inverse spectral eigenvalue gap in the classical case, and the inverse eigenphase gap in the quantum case, is at the crux of all speed-ups. In (Magniez et al., 2011), it was shown how the above properties of U_P can be used to construct a quantum operator R(π) ≈ 1 − 2|π⟩⟨π|, which exponentially efficiently approximates the reflection over the encoding of the stationary distribution |π⟩.
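The quadratic gap relation λ = cos(θ) can be checked numerically on a small example. The lazy walk on a cycle below is an illustrative reversible chain; for its spectral gap δ = 1 − λ_2, the smallest non-zero eigenphase arccos(1 − δ) ≈ √(2δ), which is much larger than δ for slowly mixing chains.

```python
import numpy as np

# Classical vs. quantum gap for a reversible Markov chain:
# eigenvalues lambda of P map to Szegedy-walk eigenphases theta = arccos(lambda).
n = 50
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5                 # lazy random walk on an n-cycle
    P[i, (i + 1) % n] = 0.25
    P[i, (i - 1) % n] = 0.25

# P is symmetric, hence reversible with uniform stationary distribution.
eigs = np.sort(np.linalg.eigvalsh(P))[::-1]
delta = 1.0 - eigs[1]             # classical spectral gap, O(1/n^2) here
phase_gap = np.arccos(eigs[1])    # smallest non-zero walk eigenphase
# phase_gap ~ sqrt(2*delta) >> delta: the source of the quadratic speed-up.
```

Classical mixing costs ~1/δ steps, while phase estimation on U_P only needs to resolve phases of order √δ, giving the ~1/√δ cost of the quantum reflection.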
The basic idea in the construction of R(π) is to apply phase estimation to U_P with precision high enough to detect non-zero phases, impose a phase on all states with a non-zero detected phase, and undo the process. Due to the quadratic relationship between the inverse spectral gap and the smallest eigenphase, this can be achieved in time Õ(1/√δ). That is, we can reflect over the (coherent encoding of the) stationary distribution, whereas obtaining it by classical mixing takes Õ(1/δ) applications of the classical walk operator. In (Paparo et al., 2014) this was used to obtain quadratically accelerated deliberation times for the rPS agent. In the rPS model, the ECM network has a special structure, enforced by the update rules. In particular, for each percept s we can consider the subnetwork ECM_s, which collects all the clips one can reach starting from s. By construction, it contains all the action clips, but also other, intermediary clips. The corresponding Markov chain P_s, governing the dynamics of ECM_s, is, by construction, irreducible, aperiodic and time-reversible. Given percept s, the deliberation process mixes the corresponding Markov chain P_s and outputs the reached clip, provided it is an action clip, repeating the process otherwise. Computationally speaking, we are facing the problem of outputting a single sample, clip c, drawn according to the conditional probability distribution p(c) = π_c/ε if c ∈ A and p(c) = 0 otherwise. Here ε is the total weight of all action clips in π. The classical computational complexity of this task is given by the product of Õ(1/δ) - which is the mixing cost - and O(1/ε), which is the average number of repetitions needed to actually hit an action clip. Using the Szegedy quantum walk techniques, based on constructing the reflector R(π), followed by amplitude amplification to "project" onto the action space, we obtain a quadratically better complexity of Õ(1/√δ) × O(1/√ε).
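The classical cost structure just described (mix, then rejection-sample until an action clip is hit) can be sketched directly; the small chain below, with the last two clips designated as actions, is an illustrative construction, and the expected number of rejection rounds comes out as ~1/ε.

```python
import numpy as np

# Classical rPS deliberation: mix P_s to stationarity, then rejection-sample
# until an action clip is hit.  Cost ~ (mixing time) x (1/epsilon).
rng = np.random.default_rng(5)
n, n_actions = 8, 2                    # clips 0..5 internal, 6..7 are actions
M = rng.random((n, n)) + 0.1
M = M + M.T                            # symmetrize -> a reversible chain
P = M / M.sum(axis=0, keepdims=True)   # column-stochastic transition matrix

# Stationary distribution via power iteration (the classical "mixing").
pi = np.ones(n) / n
for _ in range(500):
    pi = P @ pi

eps = pi[-n_actions:].sum()            # stationary weight on action clips

def deliberate():
    """Sample clips from pi until an action clip is hit; return it and the
    number of tries (geometric with mean 1/eps)."""
    tries = 0
    while True:
        tries += 1
        c = rng.choice(n, p=pi)
        if c >= n - n_actions:
            return c, tries
```

The quantum rPS replaces both factors by their square roots: the reflection R(π) costs ~1/√δ, and amplitude amplification onto the action subspace costs ~1/√ε rounds instead of ~1/ε.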
In full detail, this is achievable if we can generate one copy of the coherent encoding of the stationary distribution efficiently at each step; in the context of the rPS this can be done in many cases, as was shown in (Paparo et al., 2014) and further generalized in follow-up works. The proposal in (Paparo et al., 2014) was the first example of a provable quantum speed-up in the context of RL 124, and was followed up by a proposal for an experimental demonstration, which identified a possibility of a modular implementation based on coherent controlization - the process of adding control to almost unknown unitaries. It is worthwhile to note that further progress in algorithms for quantum walks and quantum Markov chain theory has the potential to lead to quantum improvements of the PS model. This to an extent mirrors the situation in quantum machine learning, where new algorithms for quantum linear algebra may lead to quantum speed-ups of other supervised and unsupervised algorithms. Computational speed-ups of deliberation processes in learning scenarios are certainly important, but in the strict RL paradigm such internal processing does not matter, and the learning efficiency depends only on the number of interaction steps needed to achieve high-quality performance. Since the rPS and its quantum analog, the so-called quantum rPS agent, are by definition behaviorally equivalent (i.e. they perform the same action with the same probability, given identical histories), their learning efficiency is the same. The same, however, holds in the context of all the supervised learning algorithms we discussed in previous sections, where the speed-ups were in the context of computational complexity. In contrast, quantum CLT learning results did demonstrate improvements in sample complexity, as discussed in section VI.A. While formally distinct, computational and sample complexity become more closely related the moment the learning settings are made more realistic.
For instance, if the training of a given SVM requires the solution of a BQP-complete problem 125, classical machines will most likely only be able to run classification instances which are uselessly small. In contrast, a quantum computer could run such a quantum-enhanced learner. The same observation motivates most of the research into quantum annealers for ML, see section VI.C.1. In (Paparo et al., 2014), similar ideas were more precisely formalized in the context of active reinforcement learning, where the interaction occurs relative to some external real time. This is critical, for instance, in settings where the environment changes relative to this real time, which is always the case in reality. If the deliberation time is slow relative to this change, the agent perceives a "blurred", time-averaged environment in which one cannot learn. In contrast, a faster agent will have time to learn before the environment changes - and this makes a qualitative difference between the two agents. In the next section we will show how actual learning efficiency, in the rigid metronomic turn-based setting, can also be improved, under stronger assumptions. As mentioned, works which directly apply quantum techniques to RL, or other interactive modes of learning, are comparatively few in number, despite the ever-growing importance of RL. These results still constitute quite isolated approaches, and we briefly review two recent papers. In (Crawford et al., 2016) the authors design an RL algorithm based on a deep Boltzmann machine, and combine this with quantum annealing methods for training such machines to achieve a possible speed-up. This work combines multiple interesting ideas, and may be particularly relevant in the light of recent advances in quantum annealing architectures. In (Lamata, 2017), the author demonstrated certain building blocks of larger quantum RL agents in systems of superconducting qubits. B.
Quantum agent-environment paradigm for reinforcement learning Executive summary: To characterize the ultimate scope and limits of learning agents in quantum environments, one must first establish a framework for quantum agents, quantum environments and their interaction: a quantum AE paradigm. Such a paradigm should maintain the correct classical limit, and preserve the critical conceptual components - in particular the history of the agent-environment interaction, which is non-trivial in the quantum case. With such a paradigm in place, the potential of quantum enhancements of classical agents is explored, and it is shown that quantum effects, under certain assumptions, can help near-generically improve the learning efficiency of agents. A by-product of the quantum AE paradigm is a classification of learning settings, which is different from and complementary to the classification stemming from a supervised learning perspective. The topics of learning agents acting in quantum environments, and the more general question of how agent-environment interactions should be defined, have to this day only been broached in a few works by the authors of this review and other co-authors. As these topics may form the general principles underlying the upcoming field of quantum AI, we take the liberty to present them in substantial detail. Motivated by the pragmatic question of the potential of quantum enhancements in general learning settings, it was suggested that the first step should be the identification of a quantum generalization of the AE paradigm, which underlies both RL and AI. This is comparatively easy to do in finite-sized, discrete-space settings. a. Quantum agent-environment paradigm The (abstract) AE paradigm, roughly illustrated in Fig. 6, can be understood as a two-party communication scenario, the quantum descriptions of which are well-understood in QIP.
In particular, the two players - here the agent and the environment - are modelled as (infinite) sequences of unitary maps {E_A^i}_i and {E_E^i}_i, respectively. They both have private memory registers R_A and R_E, with matching Hilbert spaces H_A and H_E, and to enable a precise specification of how they communicate (and to cleanly delineate the two players), the register of the communication channel, R_C, is introduced; it is the only register accessible to both players - that is, the maps of the agent act on H_A ⊗ H_C and those of the environment on H_E ⊗ H_C 126. The two players then interact by sequentially applying their respective maps in turn (see Fig. 16). To further tailor this fully general setting for the AE paradigm purposes, the percept and action sets are promoted to sets of orthonormal vectors {|s⟩ | s ∈ S} and {|a⟩ | a ∈ A}, which are also mutually orthogonal. These are referred to as classical states. The Hilbert space of the channel is spanned by these two sets, so H_C = span{|x⟩ | x ∈ S ∪ A}. This also captures the notion that the agent/environment only performs one action, or issues one percept, per turn. Without loss of generality, we can also assume that the state-spaces of the agent's and environment's registers are spanned by sequences of percepts and actions. It is also without loss of generality assumed that the reward status is encoded in the percept space. The percept and action sets thus become Hilbert spaces, H_A = span{|a_i⟩}, H_S = span{|s_i⟩}, with the classical states forming orthonormal bases. The percept and action states, and their mixtures, are referred to as classical states. Any figure of merit Rate(·) of the performance of an agent A in E is a function of the history of interaction H ∋ h = (a_1, s_1, . . .), collecting the exchanged percepts and actions. The history of interaction is thus the central concept in learning. The correct quantum generalization of the history is not trivial, and we will deal with this momentarily.
If either A or E is stochastic, the interaction of A and E is described by a distribution over histories (of length t), denoted A ↔ t E. Most figures of merit are then extended to such distributions by convex-linearity. To recover, e.g., supervised learning in this paradigm, take E to be characterized by the distribution P (x, y), where the agent is given an n-sized sample of (x, y) pairs as the first n percepts. After this, the agent is to respond with labels as actions to given percepts, now unlabeled data-points x. This setting is native to RL if the percept space also contains the reinforcement signal - the reward. We denote the percept space including the reward status as S̄ (e.g., if rewards are binary then S̄ = S × {0, 1}). The agent-environment paradigm is a two-party interactive setting, and thus convenient for a quantum information treatment of QML. All the existing results group into four categories: CC, CQ, QC and QQ, depending on whether the agent (first symbol) or the environment (second symbol) is classical (C) or quantum (Q) [30]. The CC scenario covers classical machine learning. The CQ setting asks how classical ML techniques may aid in quantum tasks, such as quantum control [14,15], quantum metrology [16], adaptive quantum computing [17] and the design of quantum experiments [18]. Here we deal, for example, with non-convex/non-linear optimization problems arising in quantum experiments, tackled by ML techniques. QC corresponds to quantum variants of learning algorithms [7, 10, 19] facing a classical environment. Figuratively speaking, this studies the potential of a learning robot enhanced with a "quantum chip". In QQ settings, the focus of this work, both A and E are quantum systems. Here, the interaction can be fully quantum, and even the question of what it means "to learn" becomes problematic as, for instance, the agent and environment may become entangled.
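The classical limit of this interaction - agent and environment alternately writing actions and percepts to a shared channel, with every figure of merit computed from the recorded history - can be sketched in a few lines. The two-state environment and the trivial agent below are hypothetical toy choices for illustration, not taken from the reviewed works:

```python
import random

# Minimal classical agent-environment (AE) interaction loop. The environment
# dynamics and the agent policy are hypothetical; the point is that every
# figure of merit is a function of the history h = (a1, s1, a2, s2, ...).

def environment(state, action):
    # Toy rule: action 1 taken in state 1 is rewarded; actions flip the state.
    reward = 1 if (state == 1 and action == 1) else 0
    next_state = (state + action) % 2
    return next_state, reward

def agent(last_reward, memory):
    # Trivial learner: repeat the last rewarded action, otherwise explore.
    if last_reward and memory:
        return memory[-1]
    return random.choice([0, 1])

random.seed(0)
state, last_reward = 0, 0
history, memory = [], []
for t in range(200):
    action = agent(last_reward, memory)
    state, last_reward = environment(state, action)
    history.append((action, state, last_reward))  # percepts carry the reward status
    if last_reward:
        memory.append(action)

rate = sum(r for (_, _, r) in history) / len(history)  # Rate(h): reward frequency
```

Any of the figures of merit mentioned in the text (efficiency at a time-step, learning speed) would be computed from `history` in the same way.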
Framework.- Since learning constitutes a two-player interaction, standard quantum extensions can be applied: the action and percept sets are represented by the aforementioned Hilbert spaces H A , H S . The agent and the environment act on a common communication register R C (capable of representing both percepts and actions). Thus, the agent (environment) is described as a sequence of CPTP maps {M t A } ({M t E }) - one for each time-step - which act on the register R C , but also on a private register R A (R E ) which constitutes the internal memory of the agent (environment). This is illustrated in Fig. 1 above the dashed line. The central object characterizing an interaction, namely its history, is, in the classical case, recovered by performing periodic measurements on R C in the classical (often called computational) basis. The generalization of this process to the quantum case is a tested interaction: we define the tester as a sequence of controlled maps of the form U T t |x⟩ R C ⊗ |ψ⟩ R T = |x⟩ R C ⊗ U x t |ψ⟩ R T , where x ∈ S ∪ A, and {U x t } x are unitary maps acting on the tester register R T , for all steps t. The history, relative to a given tester, is defined to be the state of the register R T . A tested interaction is shown in Fig. 1. FIG. 1. Tested agent-environment interaction. In general, each map of the tester U T k acts on a fresh subsystem of the register R T , which is not under the control of the agent or the environment. The crossed wires represent multiple systems. The restriction that testers are controlled maps relative to the classical basis guarantees that, for any choice of the local maps U x t , the interaction between classical A and E remains unchanged.
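As a concrete illustration of the tester definition, a classical tester on a toy two-dimensional channel is just a generalized CNOT: controlled in the classical basis, it records the channel content without disturbing classical states, but entangles with (and thus decoheres) superposed ones. A numpy sketch, under these toy assumptions:

```python
import numpy as np

d = 2  # toy channel dimension: one classical symbol per turn

def classical_tester(d):
    # U_T |x>_C |t>_T = |x>_C |t + x mod d>_T : a generalized CNOT that
    # copies the channel content relative to the classical basis.
    U = np.zeros((d * d, d * d))
    for x in range(d):
        for t in range(d):
            U[x * d + (t + x) % d, x * d + t] = 1.0
    return U

U = classical_tester(d)

# A classical basis state on the channel is copied without disturbance:
psi = np.kron([0.0, 1.0], [1.0, 0.0])   # |1>_C |0>_T
out = U @ psi                            # becomes |1>_C |1>_T

# A superposition on the channel becomes entangled with the tester, so the
# channel marginal decoheres: full classical testing perturbs quantum interactions.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
out_q = U @ np.kron(plus, [1.0, 0.0])
M = out_q.reshape(d, d)                  # M[x, t]: joint amplitudes
rho_C = M @ M.conj().T                   # partial trace over the tester register
```

For the basis input the channel marginal is unchanged (only the tester register acquires a record), while `rho_C` for the superposed input is maximally mixed; this is the mechanism behind the statement below that classically tested interactions admit equivalent classical agents.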
A classical tester copies the content of R C relative to the classical basis, which has essentially the same effect as measuring R C and copying the outcome. In other words, the interface between A and E is then classical. It can be shown that, in the latter case, for any quantum agent and/or environment there exist classical A and E which generate the same history under any tester [20]. In other words, classical agents can, in QC settings and, equivalently, in classically tested QQ settings, achieve the same performance as quantum agents, in terms of any history-dependent figure of merit. Thus, the only improvements can then be in terms of computational complexity. Scope and limits of quantum improvements.- What is the ultimate potential of quantum improvements in learning? In the QC and classically tested settings, we are bound to computational complexity improvements, which have been achieved in certain cases. Improvements in learning efficiency require a special type of access to the environment, one which is not fully tested. Exactly this is done in [6,8], for the purpose of improving computational complexity, with great success, as the improvement can be exponential. There, the classical source of samples is substituted by a quantum RAM [23] architecture, which allows for the accessing of many samples in superposition. Such a substitution comes naturally in (un)supervised settings, as the basic interaction comprises only two steps and is memoryless - the agent requests M samples, and the environment provides them. FIG. 16 (RL): Tested agent-environment interaction suitable for RL. In general, each map of the tester U T k acts on a fresh subsystem of the register R T , which is not under the control of the agent, nor of the environment. The crossed wires represent multiple systems. (DL): The simpler setting of standard quantum machine learning, where the environmental map is without internal memory, presented in the same framework.
It should be mentioned that the quantum AE paradigm also includes all other quantum ML settings as special cases. For instance, most quantum-enhanced ML algorithms assume access to a quantum database - a quantum memory - and this setting is illustrated in Fig. 16, part DL. Since the quantum database is without loss of generality a unitary map, it requires no additional memory of its own, nor does it change over interaction steps. At this point, the classical AE paradigm can be recovered when the maps of the agent and environment are restricted to "classical maps", which, roughly speaking, do not generate superpositions of classical states, nor entanglement, when applied to classical states. Further, we now obtain a natural classification of generalized AE settings: CC, CQ, QC and QQ, depending on whether the agent or the environment is classical (C) or quantum (Q). We will come back to this classification in section VII.B.1. The performance of a learning agent, beyond internal processing time, is a function of the history of interaction, which is a distribution over percept-action sequences (of a given finite length) which can occur between a given agent and environment. Any genuine learning-related figure of merit - for instance, the probability of a reward at a given time-step (efficiency), or the number of steps needed before the efficiency is above a threshold (learning speed) - is a function of the interaction history. In the classical case, the history can simply be read out by a classical-basis measurement of the register R C , as the local state of the communication register is diagonal in this basis, and not entangled to the other systems - meaning the measurement does not perturb, i.e. commutes with, the interaction. In the quantum case this no longer holds in general.
To recover a robust notion of a history (needed for gauging the learning), a more detailed description of measurement is used, which captures weaker measurements as well: an additional system, a tester, is added, which interchangeably couples to the R C register, and can copy full or partial information to a separate register. Formally, this is a sequence of controlled maps, relative to the classical basis, controlled by the states on H C and acting on a separate register, as illustrated in Fig. 16. The tester can copy the full information, when the maps are generalized controlled-NOT gates - in which case it is called a classical tester - or even do nothing, in which case the interaction is untested. The restriction of the tester to maps which are controlled with respect to the classical basis guarantees that a classical interaction will never be perturbed by its presence. With this basic framework in place, the authors show a couple of basic theorems characterizing when any quantum separations in learning-related figures of merit can be expected at all. The notion of a quantum separation here is the same as in the context of oracular computation, or quantum PAC theory: a separation means no classical agent could achieve the same performance. The authors prove the basic expected theorems: quantum improvements (separations) require a genuine quantum interaction, and, further, full classical testing prohibits this. Further, they show that for any specification of a classical environment, there exists a "quantum implementation" - a sequence of maps {E i E } i - which is consistent with the classical specification, and prohibits any quantum improvements. FIG. 17 The interactions for the classical (A) and quantum-enhanced classical agent (A q ). In Steps 1 and 2, A q uses quantum access to an oracularized environment E q oracle to obtain a rewarding sequence h r . Step 3: A q simulates the agent A, and 'trains' the simulation to produce the rewarding sequence.
In Step 4, A q uses the pre-trained agent for the remainder of the now classically tested interaction, with the classical environment E. Adapted from . b. Provable quantum improvements in RL However, if the above no-go scenarios are relaxed, much can be achieved. The authors provide a structure of task environments (roughly speaking, maze-type problems), a specification of quantum-accessible realizations of these environments, and a sporadic tester (which leaves a part of the interaction untested), for which classical learning agents can often be quantum-enhanced. The idea has a few steps, which we only very briefly sketch out. As a first step, the environments considered are deterministic and strictly episodic - this means the task is reset after some M steps. Since the environments are deterministic, whether or not rewards are given depends only on the sequence of actions, as the interlacing percepts are uniquely specified. Since everything is reset after M steps, there are no correlations in the memory of the environment between the blocks, i.e. episodes. This allows for the specification of a quantum version of the same environment, which can be accessed in superposition and which takes blocks of actions and returns the same sequence plus a reward status - moreover, it can be realized such that it is self-inverse. With access to such an object, a quantum agent can actually Grover-search for an example of a winning sequence. To convert this exploration advantage to a learning advantage, the set of agents and environments is restricted to pairs which are "luck-favoring", i.e. those where better performance in the past implies improved performance in the future, relative to a desired figure of merit. Under these conditions, any learning agent which is luck-favoring relative to a given environment can be quantum-enhanced by first using quantum access to find the first winning instance quadratically faster, which is then used to "pre-train" the agent in question.
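The exploration step can be sketched numerically. Under the stated assumptions (a deterministic, strictly episodic environment whose quantum realization acts as a self-inverse phase oracle on blocks of actions), Grover-searching for a rewarding episode looks as follows; the environment size and the winning sequence below are hypothetical:

```python
import numpy as np

# Toy episodic environment: 2 actions per step, M = 4 steps per episode,
# so N = 16 candidate action sequences; exactly one is rewarded (hypothetical).
M = 4
N = 2 ** M
winning = 0b1011  # the rewarded action sequence, unknown to the agent

# The quantum-accessible environment acts as a phase oracle on action blocks.
psi = np.full(N, 1 / np.sqrt(N))     # uniform superposition over episodes
oracle = np.ones(N)
oracle[winning] = -1.0

n_iter = int(np.floor(np.pi / 4 * np.sqrt(N)))   # ~O(sqrt(N)) oracle calls
for _ in range(n_iter):
    psi = oracle * psi               # phase flip on the rewarded sequence
    psi = 2 * psi.mean() - psi       # inversion about the mean (diffusion)

p_win = float(psi[winning] ** 2)
# A classical agent needs ~N/2 episodes on average before first hitting the
# reward; the quantum agent finds one with high probability after ~sqrt(N)
# queries, and can then "pre-train" a luck-favoring classical agent on it.
```

For N = 16 the three Grover iterations concentrate most of the amplitude on the winning sequence, which is the quadratic speed-up of the exploration phase described above.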
The overall quantum-enhanced agent provably outperforms the basic classical agent. The construction is illustrated in Fig. 17. These results can be generalized to a broader class of environments. Although these results form the first examples of quantum improvements in learning figures of merit in RL contexts, the assumptions of having access to "quantized" environments of the type used - in essence, the amount of quantum control the agent is assumed to have - are quite restrictive from a practical perspective. The questions of minimal requirements, and of the scope of possible improvements, are still unresolved. AE-based classification of quantum ML The AE paradigm is typically encountered in the contexts of RL, robotics, and more general AI settings, while it is less common in ML communities. Nonetheless, conventional ML scenarios can naturally be embedded in this paradigm, since it is, ultimately, mostly unrestrictive. For instance, supervised learning can be thought of as an interaction with an environment which is, for a certain number of steps, an effective database (or the underlying process generating the data), providing training examples. After a certain number of steps, the environment starts providing unlabeled data-points, and the agent responds with the labels. If we further assume the environment additionally responds with the correct label to whatever the agent sent, when the data-point/percept was from the training set, we can straightforwardly read out the empirical risk (training-set error) from the history. Since the quantization of the AE paradigm naturally leads to four settings - CC, CQ, QC and QQ - depending on whether the agent, or environment, or both are fully quantum systems, we can classify all of the results in quantum ML into one of the four groups.
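The embedding of supervised learning into the AE paradigm can be made concrete. In the sketch below, a hypothetical threshold concept plays the role of the data-generating process: the environment first issues labeled pairs as percepts, then re-issues the training points unlabeled, and echoes the correct label to each of the agent's label-actions, so the empirical risk is read directly off the history (all names and parameters are illustrative):

```python
import random

# Supervised learning as an AE interaction (toy threshold concept).
random.seed(1)
concept = lambda x: int(x >= 0.5)       # hidden labeling rule
train = [(x, concept(x)) for x in (random.random() for _ in range(50))]

history = []
threshold = 0.0                          # the agent's running hypothesis
for x, y in train:                       # phase 1: labeled percepts
    if y == 0:
        threshold = max(threshold, x)    # simple consistent learner
    history.append(("percept", (x, y)))

for x, y in train:                       # phase 2: unlabeled percepts
    a = int(x > threshold)               # action: predicted label
    history.append(("action", a))
    history.append(("percept", y))       # environment echoes the true label

# Empirical risk = fraction of action/echo pairs in the history that disagree.
# (Zero here, since the toy learner is consistent with its training set.)
empirical_risk = sum(
    int(history[i][1] != history[i + 1][1])
    for i in range(len(train), len(history), 2)
) / len(train)
```

Nothing in the loop refers to "supervised learning" as such; the setting is recovered purely from the structure of the environment, which is the point of the embedding.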
Such a coarse-grained division places standard ML in CC, results on using ML to control quantum systems in CQ, quantum speed-ups in ML algorithms (without a quantum database, as is the case in annealing approaches) in QC, and quantum ML/RL where the environments, databases or oracles are quantum-accessible in QQ. This classification is closely related to the classification introduced in (Aïmeur et al., 2006), which uses the L context goal notation, where "context" may denote whether we are dealing with classical or quantum data and/or learner, and "goal" specifies the learning task (see section V.A.1 for more details). The QAE-based separation is not, however, identical to it: for instance, classical learning tasks may require quantum or classical access - this distinguishes the examples of quantum speed-ups in internal processing in ML which require a quantum database from those which do not. In operative terms, this separation makes sense as the database must be pre-filled at some point, and if this is included we obtain a QC setting (which now may fail to be efficient in terms of communication complexity). On the other hand, the L context goal systematics does a nice job of separating classical ML from quantum generalizations of the same, discussed in section V. This mismatch also illustrates the difficulties one encounters if a sufficiently coarse-grained classification of the quantum ML field is required. The classification criteria of this field, and also aspects of QAI in this review, have been inspired by both the AE-induced criteria (perhaps natural from a physics perspective), and the L context goal classification (which is more objective-driven, and natural from a computer science perspective). C. Towards quantum artificial intelligence Executive summary: Can quantum computers help us build (quantum) artificial intelligence?
The answer to this question cannot be simpler than the answer to the deep, and largely open, question of what intelligence is in the first place. Nonetheless, at least for very pragmatic readings of AI, early research directions into what QAI may be in the future can be identified. We have seen that quantum machine learning enhancements and generalizations cover data analysis and pattern matching aspects. Quantum reinforcement learning demonstrates how interactive learning can be quantum-enhanced. General QC can help with various planning, reasoning, and similar symbol manipulation tasks intelligent agents seem to be good at. Finally, the quantum AE paradigm provides a framework for the design and evaluation of whole quantum agents, built also from quantum-enhanced subroutines. These conceptual components form a basis for a behaviour-based theory of quantum-enhanced intelligent agents. AI is quite a loaded concept, in a manner in which ML is not. The question of how genuine AI can be realized is likely to be as difficult as the more basic question of what intelligence is at all, which has been puzzling philosophers and scientists for centuries. Starting a broad discussion of when quantum AI will be reached, and what it will be like, is thus clearly ill-advised. We can nonetheless provide a few less controversial observations. The first observation is that the overall concept of quantum AI might have multiple meanings. First, it may pertain to a generalization of the very notion of intelligence, in the sense in which section V discusses how classical learning concepts generalize to include genuinely quantum extensions. A second, and perhaps more pragmatic, reading of quantum AI may ask whether quantum effects can be utilized to generate more intelligent agents, where the notion of intelligence itself is not generalized: quantum-enhanced artificial intelligence.
We will focus on this latter reading for the remainder of this review, as the quantum generalization of basic learning concepts on its own, just as the notion of intelligence on its own, seems complicated enough. To comment on the question of quantum-enhanced AI, we first remind the reader that the conceptual debates in AI often have two perspectives. The ultimately pragmatic perspective is concerned only with behavior in relevant situations. This is perhaps best captured by Alan Turing, who suggested that it may be irrelevant what intelligence is, if it can be recognized, by virtue of similarity to a "prototype" of intelligence - a human (Turing, 1950). Another perspective tends to try to capture cognitive architectures, such as SOAR, developed by John Laird, Allen Newell, and Paul Rosenbloom (Laird, 2012). Cognitive architectures try to identify the components needed to build intelligent agents capable of many tasks, and thus also care about how the intelligence is implemented. They often also serve as models of human cognition, and are both theories of what cognition is, and of how to implement it. A third perspective comes from the practitioners of AI, who often believe that AI will be a complicated combination of various methods and techniques, including learning and specialized algorithms, but who are also sympathetic to the Turing test as the definitional method. A simple reading of this third perspective is particularly appealing, as it allows us to all but equate computation, ML and AI. Consequently, all quantum machine learning algorithms - and, more broadly, even most quantum algorithms - already constitute progress in quantum AI. Aspects of such a reading can be found in a few works on the topic (Sgarbas, 2007; Wichert, 2014; Moret-Bonillo, 2015). The current status of the broad field of quantum ML and related research is showing signs of activity with respect to all of the three aspects mentioned.
The substantial activity in the context of ML improvements, in all aspects presented, is certainly filling the toolbox of methods which may one day play a role in the complicated designs of quantum AI practitioners. In this category, a relevant role may also be played by various algorithms which may help in planning, pruning, reasoning via symbol manipulation, and other tasks AI practice and theory encounter. Many possible quantum algorithms which may be relevant come to mind. Examples include the algorithm for performing Bayesian inference (Low et al., 2014), and algorithms for quadratic and super-polynomial improvements in NAND- and boolean-tree evaluations, which are important in the evaluation of optimal strategies in two-player games (Childs et al., 2009; Zhan et al., 2012; Farhi et al., 2008). Further, even more exotic ideas, such as quantum game theory (Eisert et al., 1999), may be relevant. Regarding approaches to quantum artificial general intelligence and, relatedly, to quantum cognitive architectures, while no proposals explicitly address this possibility, the framework of PS offers sufficient flexibility and structure that it may be considered a good starting point. Further, this framework is intended to keep a homogeneous structure, which may lead to a more straightforward global quantization, in comparison to models which are built out of inhomogeneous blocks - already in classical systems, combining inhomogeneous units may lead to difficult-to-control behaviour, and it stands to reason that quantum devices may have a more difficult time being synchronized. It should be mentioned that recently there have been works providing a broad framework describing how composite large quantum systems can be precisely treated (Portmann et al., 2017).
Finally, from the ultimately pragmatic perspective, the quantum AE paradigm presented can offer a starting point for a quantum-generalized Turing test for QAI, as the Turing test itself fits in the paradigm: the environment is the administrator of the test, and the agent is the machine trying to convince the environment it is intelligent. Although, at the moment, the only suitable referees for such a test are classical devices - humans - it is conceivable that they, too, may find quantum gadgets useful to better ascertain the nature of the candidate. However, at this point it is prudent to remind ourselves and the reader that all the above considerations are still highly speculative, and that the research into genuine AI has barely broken ground. VIII. OUTLOOK In this review, we have presented overviews of various lines of research that connect the fields of quantum information and quantum computation, on the one side, and machine learning and artificial intelligence, on the other side. Most of the work in this new area of research is still largely theoretical and conceptual, and there are, for example, hardly any dedicated experiments demonstrating how quantum mechanics can be exploited for ML and AI. However, there are a number of theoretical proposals (Lamata, 2017; Friis et al., 2015) and also first experimental works showing how these ideas can be implemented in the laboratory (Neigovzen et al., 2009; Li et al., 2015b; Cai et al., 2015; Ristè et al., 2017). At the same time it is clear that certain quantum technologies, which have been developed in the context of QIP and QC, can be readily applied to quantum learning, to the extent that learning agents or algorithms employ elements of quantum information processing in their very design.
Similarly, it is clear, and there are by now several examples, how techniques from classical machine learning can be fruitfully employed in data analysis and the design of experiments in quantum many-body physics (see section IV.D). One may ask about the long-term impact of the exchange of concepts and techniques between QM and ML/AI. What implications will this exchange have on the development of the individual fields, and what is the broader perspective of these individual activities leading towards a new field of research, with its own questions and promises? Indeed, returning the focus back to the topics of this review, we can highlight one overarching question encapsulating the collective effort of the presented research: ⇒ What are the potential, and the limitations, of an interaction between quantum physics, and ML and AI? From a purely theoretical perspective, we can learn from analogies with the fields of communication, computation, or sensing. QIP has shown that to understand the limits of such information processing disciplines, in both a pragmatic and a conceptual sense, one must consider the full extent of quantum theory. Consequently, we should expect that the limits of learning, and of intelligence, can also only be fully understood in this broader context. In this sense, the topics discussed in section V already point to the rich and complex theory describing what learning may be when even information itself is a quantum object, and aspects of section VII.C point to how a general theory of quantum learning may be phrased. The motivation for phrasing such a general theory may be fundamental, but it may also have more pragmatic consequences. In fact, arguments can be made that the field of quantum machine learning and the future field of quantum AI may constitute one of the most important research fields to emerge in recent times.
A part of the reason behind such a bold claim stems from the obvious potential of both directions of influence between the two constituent sides of quantum learning (and quantum AI). For instance, the potential of quantum enhancements for ML is profound. In a society where data is generated at a geometric rate, and where its understanding may help us combat global problems, the potential of faster, better analyses cannot be overestimated. In the other direction, ML and AI technologies are becoming indispensable tools in all high technologies, but they are also showing potential to help us do research in a novel, better way. A more subtle reason supporting optimism lies in the positive feedback loops between ML, AI and QIP which are becoming apparent, and which are, moreover, specific to these two disciplines. To begin with, we can claim that QC, once realized, will play an integral part in future AI systems, on general grounds. This can be deduced from even a cursory overview of the history of AI, which reveals that qualitative improvements in computing and information technologies result in progress in AI tasks, which is also intuitive. In simple terms, the state-of-the-art in AI will always rely on the state-of-the-art in computing. The perfect match between ML, AI and QIP, however, may have deeper foundations. In particular, → advancements in ML/AI may help with critical steps in the building of quantum computers. In recent times, it has become ever more apparent that learning methods may make the difference between a given technology being realizable or being effectively impossible - beyond the obvious examples: for instance, direct computational approaches to building human-level Go-playing software had failed, whereas AlphaGo (Silver et al., 2016), a fundamentally learning AI technology, achieved this complex goal.
QC may in fact end up being such a technology, where exquisitely fast and adaptive control - realized perhaps by an autonomous smart laboratory - helps mitigate the hurdles towards quantum computers. However, cutting-edge research discussed in sections IV.C and IV.D suggests that ML and AI techniques could help at an even deeper level, by helping us discover novel physics which may be the missing link for full-blown quantum technologies. Thus ML and AI may be what we need to build quantum computers. Another observation, which is hinted at with increasing frequency in the community, and which fully entwines ML, AI and QIP, is that → AI/ML applications may be the best reasons to build quantum computers. Quantum computers have been proven to dramatically outperform their classical counterparts only on a handful of (often obscure) problems. Perhaps the best applications of quantum computers that have enticed investors until recently were quantum simulation and quantum cryptanalysis (i.e. using QC to break encryption), which may have been simply insufficient to stimulate broad-scale public investments. In contrast, ML- and AI-type tasks may be regarded as the "killer applications" QC has been waiting for. Moreover, ML and AI applications are not only well motivated: in recent times, arguments have been put forward that ML-type applications may be uniquely suited to being tackled by quantum technologies. For instance, ML-type applications deal with massive parallel processing of high-dimensional data - quantum computers seem to be good for this. Further, while most simulation and numerics tasks require data stability, which is incompatible with the noise modern-day quantum devices undergo, ML applications always work with noisy data. This means that such an analysis makes sense only if it is robust to noise to start with, which is an often unspoken fact of ML: the important features are the robust features.
Under such a laxer set of constraints on the desired information processing, various current-day technologies, such as quantum annealing methods, may become a possible solution. The two main flavours, or directions of influence, in quantum ML thus have a natural synergistic effect, further motivating that, despite their quite fundamental differences, they should be investigated in close collaboration. Naturally, at the moment, each individual sub-field of quantum ML comes with its own set of open problems - key issues which need to be resolved before any credible verdict on the future of quantum ML can be made. Most fit into one of the two quintessential categories of research into any quantum-enhanced topic: a) what are the limits, and how much of an edge over the best classical solutions can be achieved, and b) could the proposals be implemented in practice in any reasonable term. For most of the topics discussed, both questions above remain widely open. For instance, regarding quantum enhancements using universal computation, only a few models have been beneficially quantized, and the exact problem they solve, even in theory, does not match the best established methods used in practice. Regarding the second facet, the most impressive improvements (barring isolated exceptions) can be achieved only under a significant number of assumptions, such as quantum databases and certain suitable properties of the structure of the data-sets. Beyond particular issues which were occasionally pointed out in various parts of this review, we will forego providing an extensive list of specific open questions for each of the research lines, and refer the interested reader to the more specialized reviews for more detail (Wittek, 2014a; Schuld et al., 2014a; Biamonte et al., 2016; Arunachalam and de Wolf, 2017; Ciliberto et al., 2017).
This leads us to the final topic of speculation of this outlook section: whether QC will truly be instrumental in the construction of genuine artificial (general) intelligence. On one hand, there is no doubt that quantum computers could help with the heavily computational problems one typically encounters in, e.g., ML. In so far as AI reduces to sets of ML tasks, quantum computing may help. But AI is more than a sum of such specific-task-solving parts. Moreover, human brains are (usually) taken as a reference for systems capable of generating intelligent behaviour. Yet there is little, and no uncontroversial, reason to believe genuine quantum effects play any critical part in their performance (rather, there is ample reason to dismiss the relevance of quantum effects). In other words, quantum computers may not be necessary for general AI. The extent to which quantum mechanics has something to say about general AI will be the subject of research in years to come. Nonetheless, already now, we can set aside any doubt that quantum computers and AI can help each other, to an extent which will not be disregarded.
FIG. 1 Oracular computation and
FIG. 3 TSP example: finding the shortest route visiting the largest cities in Germany.
FIG. 4 Supervised (in this case, best linear classifier) and unsupervised learning (here clustering into two most likely groups and outliers) illustrated.
FIG. 8 A three-state, two-action MDP.
FIG. 9 Illustration of the structure of the episodic and compositional memory in PS, comprising clips (episodes) and probabilistic transitions. The actuator of the agent performs the action. Adapted from (Briegel and De las Cuevas, 2012).
N × M black-and-white bitmaps, characterized by a function f : {1, . . . , N} × {1, . . . , M} → {0, 1} (which technically coincides with a concept in CLT, see II.B.1), specifying the color-value f(x, y) ∈ {0, 1} of a pixel at coordinate (x, y).
13 13Basic learning curves for PS with non-zero γ in the invasion game with a rules switch at time step 250. Adapted from(Briegel and De las Cuevas, 2012). FIG. 15 15QrPS representation of network, and its steady state over non-action (red) and action (blue) clips. Machine learning in condensed-matter and many-body physics 45 V. Quantum generalizations of machine learning concepts 47 A. Quantum generalizations: machine learning of quantum data 47 1. State discrimination, state classification, and machine learning of quantum data 48 2. Computational learning perspectives: quantum states as concepts 52 B. (Quantum) learning and quantum processes 53II. Classical background 15 A. Methods of machine learning 16 1. Artificial neural networks and deep learning 17 2. Support Vector Machines 19 3. Other models 22 B. Mathematical theories of supervised and inductive learning 24 1. Computational learning theory 25 2. VC theory 27 C. Basic methods and theory of reinforcement learning 30 III. Quantum mechanics, learning, and AI 34 IV. Machine learning applied to (quantum) physics 35 A. Hamiltonian estimation and metrology 37 1. Hamiltonian estimation 37 2. Phase estimation settings 38 3. Generalized Hamiltonian estimation settings 39 B. Design of target evolutions 40 1. Off-line design 41 2. On-line design 41 C. Controlling quantum experiments, and machine-assisted research 42 1. Controlling complex processes 43 2. Learning how to experiment 44 D. VI. Quantum enhancements for machine learning 55 A. Learning efficiency improvements: sample complexity 56 1. Quantum PAC learning 57 2. Learning from membership queries 58 B. Improvements in learning capacity 60 1. Capacity from amplitude encoding 60 2. Capacity via quantized Hopfield networks 61 C. Run-time improvements: computational complexity 63 1. Speed-up via adiabatic optimization 64 2. Speed-ups in circuit architectures 68 VII. Quantum learning agents, and elements of quantum AI 76 A. Quantum learning via interaction 77 B. 
Quantum agent-environment paradigm for reinforcement learning 83 1. AE-based classification of quantum ML 86 C. Towards quantum artificial intelligence 87 VIII. Outlook 88 Quantum control and gate design FIG. 10 Table of topics investigating the overlaps between quantum physics, machine learning, and AI.(3) Controlling quantum experiments, and machine-assisted research (4) Condensed matter and many body physics Quantum enhancements for ML (1) Quantum perceptrons and neural networks (2) Quantum computational learning theory (3) Quantum enhancement of learning capacity (4) Quantum computational algorithmic speed- ups for learning Quantum generalizations of ML-type tasks (1) Quantum generalizations: machine learning of quantum data (2) (Quantum) learning of quantum pro- cesses Quantum learning agents and elements of quan- tum AI (1) Quantum-enhanced learning through interaction (2) Quantum agent-environment paradigm (3) Towards quantum AI 79 Quantum in that that which is learned is encoded in a quantum state. 80 In other words, for any environment state s, producing an action a causes a transition to some state s with probability s τ P a s, where states are represented as canonical vectors. In general, the observations output can also depend on the previous action of the agent.The dynamic of the quantum POMDP are defined by actions which correspond to quantum instruments (superoperators) the agent can apply: to each action a, we associate the set of Krauss operators {K a o } o∈O , which satisfy o K a † o K a o = 1. If the agent performs the action a, and observes the observation o, the state of the environment is mapped as ρ → K a o ρK a o † /T r[K a o ρK a o † ], where T r[K a o ρK a o † ] is the probability 81 This requires more general and richer formalism of density operators, and leads to generalized measurements, completely positive evolutions, etc. Paraphrased from (McCarthy et al., 1955). Each frame is cca. 
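The Kraus-operator bookkeeping in the quantum POMDP footnote above is easy to check numerically. The sketch below builds a hypothetical single-qubit instrument (the operators K0, K1 and the parameter p are illustrative choices, not taken from the review) and verifies the completeness relation and that the outcome probabilities Tr[K_o ρ K_o†] sum to one; numpy is assumed available.

```python
import numpy as np

# Two Kraus operators for a single-qubit instrument associated with one
# action: a "weak measurement" in the computational basis (illustrative).
p = 0.3
K0 = np.sqrt(1 - p) * np.eye(2)
K1 = np.sqrt(p) * np.diag([1.0, -1.0])
kraus = [K0, K1]

# Completeness: sum_o K_o^dagger K_o must equal the identity.
completeness = sum(K.conj().T @ K for K in kraus)
assert np.allclose(completeness, np.eye(2))

# Environment state rho (here the pure state |+><+|).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

def apply_kraus(rho, K):
    """Return (probability of this outcome, post-observation state)."""
    unnormalized = K @ rho @ K.conj().T
    prob = np.trace(unnormalized).real
    return prob, unnormalized / prob

# Outcome probabilities Tr[K_o rho K_o^dagger] sum to one.
probs = [apply_kraus(rho, K)[0] for K in kraus]
assert np.isclose(sum(probs), 1.0)
```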
10^6-dimensional, as each pixel constitutes one dimension, multiplied by the 30 frames required for the one-second clip. More generally, we can distinguish four modes of such operant conditioning: positive reinforcement (reward when correct), negative reinforcement (removal of negative reward when correct), positive punishment (negative reward when incorrect), and negative punishment (removal of reward when incorrect). For example, in k-nearest-neighbour classification, the training set is split into disjoint subsets specified by the shared labels. Given a new point which is to be classified, the algorithm identifies the k nearest points in the data set to the new point. The label of the new point is decided by the majority label of these neighbours. The labeling process thus needs to refer to the entire training set. More specifically, there exists a set of weights doing the job, even though standard training algorithms may fail to converge to that point. Roughly speaking, models with high model complexity are more likely to "overfit", and it is more difficult to provide guarantees that they will generalize well, i.e., perform well beyond the training set. Indeed, this can be supported by hard theory; see Cover's theorem (Cover, 1965). In ML, the term model is often overloaded. Most often it refers to a classification system which has been trained on a dataset, and in that sense it "models" the actual labeling function. Often, however, it also refers to a class of learning algorithms (e.g., the SVM learning model). While the dichotomies between sample complexity and computational complexity are often considered in the literature, the authors first heard of the trichotomic setting, including model complexity, from (Wittek, 2014b). Examples of such balancing, and its failures, can be observed in sections V.A.2 and VI.A.1. An exception to this would be the uninteresting case where the class is finite and all instances have been observed.
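The k-nearest-neighbour procedure sketched above can be written in a few lines; the 2-D training set below is a made-up toy example, and plain Euclidean distance is assumed.

```python
from collections import Counter
import math

def knn_classify(train, new_point, k=3):
    """Label a new point by majority vote among its k nearest training points.

    `train` is a list of ((x, y), label) pairs. Note that the whole training
    set is scanned for every query, as remarked in the text.
    """
    by_distance = sorted(train, key=lambda item: math.dist(item[0], new_point))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Toy data: two clusters with labels 0 and 1 (illustrative).
train = [((0.0, 0.0), 0), ((0.1, 0.2), 0), ((0.2, 0.1), 0),
         ((1.0, 1.0), 1), ((0.9, 1.1), 1), ((1.1, 0.9), 1)]

print(knn_classify(train, (0.15, 0.1)))  # → 0
print(knn_classify(train, (0.95, 1.0)))  # → 1
```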
For instance, modern devices are (mostly) trained for the handwriting of the owner, which will most of the time be distinct from other persons' handwriting, although the device should in principle handle any (reasonable) handwriting. Note that we recover the standard PAC setting once the conditional probability distribution P_D(y|x), where the values of the first n bits (data-points) are fixed, is a Kronecker delta, i.e., the label is deterministic. This rule is inspired by the Bellman optimality equation, Q*(s, a) := E[R(s, a)] + γ E[max_{a′} Q*(s′, a′)], where the expected values are taken over the randomness of the MDP transition rule and the reward function, and whose solution (fixed point) is the optimal Q-value function. This equation can be used when the specification of the environment is fully known. Note that the optimal Q-values can be found without actually explicitly identifying an optimal policy. Q-learning is an example of an off-policy algorithm, as the estimate of the future value in Eq. 21 is not evaluated relative to the actual policy of the agent (indeed, that policy is not necessarily even defined), but rather relative to the so-called "greedy policy", which takes the action with the maximal value estimate (note that the estimate appears with a maximization term). To avoid any confusion, we have introduced the concept of a policy to refer to the conditional probability distributions specifying what the agent will do given a state. However, the same term is often overloaded to also refer to the specification of the effective policy an agent will use given some state/time-step. For instance, "ε-greedy policies" refer to behaviour in which, given a state, the agent outputs the action with the highest corresponding Q-value (i.e., acts greedily) with probability 1 − ε, and produces a random action otherwise. Clearly, this rule specifies a policy at any given time step, given the current Q-value table of the agent.
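The off-policy Q-learning update and ε-greedy action selection discussed above can be sketched in tabular form; the two-state MDP, learning rate, and exploration schedule below are illustrative assumptions, not taken from the review.

```python
import random

random.seed(0)  # fixed seed for reproducibility

# Toy deterministic MDP (made up): taking action 1 in state 0 yields
# reward 1 and moves to state 1; everything else returns to state 0
# with reward 0.
def step(s, a):
    if s == 0 and a == 1:
        return 1, 1.0   # (next state, reward)
    return 0, 0.0

n_states, n_actions, gamma, alpha = 2, 2, 0.9, 0.1
Q = [[0.0] * n_actions for _ in range(n_states)]

def epsilon_greedy(s, eps):
    # With probability eps explore at random, otherwise act greedily.
    if random.random() < eps:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[s][a])

s = 0
for t in range(1, 5001):
    eps = max(0.1, 1.0 / t)   # GLIE-style 1/t schedule with a floor
    a = epsilon_greedy(s, eps)
    s_next, r = step(s, a)
    # Off-policy Q-learning: bootstrap from the greedy (max) estimate,
    # regardless of which action will actually be taken next.
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
    s = s_next

assert Q[0][1] > Q[0][0]  # the rewarded action is preferred in state 0
```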
One can also think of time-dependent policies, where the policy also explicitly depends on the time-step. An example of such a time-dependent and (slowly converging) GLIE policy is an ε-greedy policy where ε = ε(t) = 1/t is a function of the time-step, converging to zero. SARSA is the acronym for state-action-reward-state-action. For instance, the problem of finding optimal infinite-horizon policies, which was solvable via dynamic programming in the fully observable (MDP) case, becomes, in general, uncomputable. To comment briefly on how RL methods and tasks may be generalized towards general AI: one can consider learning scenarios where one has to combine standard data-driven ML, to handle realistic percept spaces (which are effectively infinite), with RL techniques. An example of this is the famous AlphaGo system (Silver et al., 2016). Further, one could also consider more general types of interaction, beyond the strict turn-based metronomic model. For instance, in active reinforcement learning, the interaction occurs relative to an external clock, which intertwines the computational complexity and the learning efficiency of the agent (see section VII.A). Further, the interaction may occur in fully continuous time. This setting is not typically studied in the basic theory of AI, but occurs in the closely related problem of control theory (Wiseman and Milburn, 2010), which may be more familiar to physicists. Such generalizations are at the cutting edge of research, also in the classical realm, and beyond the scope of this paper. In this sense, a particular agent/robot may perceive the full state of the environment in some environments (making the percepts identical to states), whereas in other environments the sensors fail to observe everything, in which case the percepts correspond to observations.
In fact, this is not entirely true: certain proofs of separation between PAC learnability in the quantum and classical models assume hardness of factoring of certain integers (see section VI.A.2). Certain optimization problems, such as online optimization problems, where information is revealed incrementally and decisions are made before all information is available, are more clearly related to "quintessential" ML problems such as supervised, unsupervised, or reinforcement learning. Interestingly, such techniques allow for the identification of optimal approximations of unphysical processes, which can be used to shed light on the properties of quantum operations. This is often also expressed in terms of the variance (Δθ)², which then scales as N⁻², rather than the standard deviation. This addition partially circumvents the computation of the likelihood function P(d|θ; C), which requires the simulation of the quantum system and is, in general, intractable. For the sake of intuition, a frequent application of X gates, referred to as bang-bang control, on a system which is freely evolving with respect to σ_z effectively flips the direction of rotation of the system Hamiltonian, effectively undoing its action. By instantaneous we mean that it is assumed that the implementation requires no evolution time, e.g., by using infinite field strengths. Indeed, the authors also show that correct behaviour can be established when additional unknown parameters are introduced, like time- and space-dependent fields (see for results), where hand-crafted methods would fail. For instance, the authors investigate the strategies explored by the learning agent, and identify a spin-glass-like phase transition in the space of protocols as a function of the protocol duration. This highlights the difficulty of the learning problem. This method can be thought of as effectively assigning a prior stating that the analyzed state is well approximated by an NQS.
Arguably, in light of the physicalistic viewpoint on the nature of information, which posits that "information is [ultimately] physical". Classical evolutions are guaranteed to transform computational basis states (the "classical states") to computational basis states, and closed-system dynamics must be reversible, leaving only permutations. To provide a minimal amount of intuition: the best classical algorithm for the membership query model heavily depends on Fourier transforms (DFT) of certain sets; the authors then use the fact that the FT can be efficiently implemented on the amplitudes of the states generated by the quantum oracle using quantum computers. We refer the reader to (Bshouty and Jackson, 1998) for further details. The learning of such functions is, in QIP circles, also known as the (non-recursive) Bernstein-Vazirani problem, first defined in (Bernstein and Vazirani, 1997). However, the meaning of noise is not exactly the same in the classical and quantum cases. For a discussion of some of the shortcomings see e.g. (Brun et al., 2003; Trugenberger, 2003), and we also refer the reader to more recent reviews (Schuld et al., 2014b,c) for further details and analysis of the potential application of such memories to pattern recognition problems. Generically, local optimization is easier than global optimization, and in the context of the Ising system, global optimization is known to be NP-hard. More precisely, an efficient algorithm which solves general QUBO problems can also efficiently solve arbitrary Ising ground state problems. One direction is trivial, as QUBO optimization is a special case of ground state finding where the local fields are zero. Conversely, given an Ising ground state problem over n variables, we can construct a QUBO over n + 1 variables, in which the additional variable can be used to encode the local terms. Servedio also, incidentally, provided some of the earliest results in quantum computational learning theory, discussed in previous sections.
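The QUBO/Ising correspondence mentioned above can be illustrated with the standard substitution s_i = 2x_i − 1, which maps an Ising instance with local fields directly to a QUBO over the same n variables (a simpler route than the (n + 1)-variable construction, possible because a QUBO admits linear/diagonal terms). The 3-spin instance below is a made-up example, checked by brute force.

```python
import itertools

# Toy Ising instance (arbitrary couplings): n = 3 spins s_i in {-1, +1},
# E(s) = sum_{i<j} J[i,j] s_i s_j + sum_i h[i] s_i.
J = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.3}
h = [0.2, -0.4, 0.1]
n = 3

def ising_energy(s):
    return (sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
            + sum(h[i] * s[i] for i in range(n)))

def qubo_from_ising(J, h, n):
    """Change of variables s_i = 2 x_i - 1: yields x^T Q x up to an
    additive constant, with the linear terms on the diagonal."""
    Q = [[0.0] * n for _ in range(n)]
    for (i, j), Jij in J.items():
        Q[i][j] += 4 * Jij
        Q[i][i] += -2 * Jij
        Q[j][j] += -2 * Jij
    for i in range(n):
        Q[i][i] += 2 * h[i]
    return Q

def qubo_energy(Q, x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

Q = qubo_from_ising(J, h, n)

# Brute force: the minimizers coincide under x = (s + 1) / 2.
best_s = min(itertools.product([-1, 1], repeat=n), key=ising_energy)
best_x = min(itertools.product([0, 1], repeat=n), key=lambda x: qubo_energy(Q, x))
assert best_x == tuple((si + 1) // 2 for si in best_s)
```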
In minimum spanning tree clustering, the data is represented as a weighted graph (the weights being distances), and a minimum-weight spanning tree is found; k clusters are identified by simply removing the k − 1 highest-weight edges. Divisive clustering is an iterative method which splits sets into two subsets according to a chosen criterion, and this process is iterated. k-median clustering identifies clusters which minimize the cumulative within-cluster distances to the median point of the cluster. To exemplify the logic behind association rules mining, in the typical context of shopping: if shopping item (list element) B occurs in nearly every shopping list in which shopping item A occurs as well, one concludes that a person buying A is also likely to buy B. This is captured by the rule denoted B ⇒ A. In a related work (Wiebe and Granade, 2015), the authors investigate the learning capacity of "small" quantum systems, and identify certain limitations in the context of Bayesian learning, based on Grover optimality bounds. Here, "small" pertains to systems of logarithmic size, encoding information in amplitudes. This work thus probes the potential of space complexity improvements for quantum-enhanced learning, related to early ideas discussed in section VI.B. Here, the condition number of the matrix A is given by the quotient of the largest and smallest singular values of A. The assumption that A is Hermitian is not restrictive, as an oracle for any sparse matrix A can be modified to yield an oracle for the symmetrized matrix Ã = |0⟩⟨1| ⊗ A† + |1⟩⟨0| ⊗ A. Although RL is a particularly mathematically clean model for learning by interaction, it is worthwhile to note that, for instance, the Q-learning algorithm (see section II.C) is typically defined without an embodied agent-environment context. Naturally, we can easily promote this particular abstract model to an agent, by defining an agent which internally runs the Q-learning algorithm.
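The minimum-spanning-tree clustering described above (build the MST of the complete distance graph, then delete its k − 1 heaviest edges) can be sketched as follows; the two-blob data set is a made-up illustration.

```python
import math
from itertools import combinations

def mst_clusters(points, k):
    """Cluster by building a minimum spanning tree of the complete
    distance graph, then deleting its k-1 heaviest edges."""
    n = len(points)
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    parent = list(range(n))
    def find(i):                        # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    mst = []
    for w, i, j in edges:               # Kruskal's algorithm
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((w, i, j))
    mst.sort()                          # lightest first
    kept = mst[: len(mst) - (k - 1)]    # drop the k-1 heaviest MST edges
    # Connected components of the remaining forest are the clusters.
    parent = list(range(n))
    for _, i, j in kept:
        parent[find(i)] = find(j)
    groups = {}
    for idx in range(n):
        groups.setdefault(find(idx), set()).add(idx)
    return sorted(groups.values(), key=min)

# Two well-separated 2-D blobs (toy data).
pts = [(0, 0), (0.2, 0.1), (0.1, 0.3), (5, 5), (5.1, 4.9), (4.8, 5.2)]
print(mst_clusters(pts, k=2))
```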
Representation means that we, strictly speaking, distinguish actual percepts from the memorized percepts, and likewise for actions. This distinction is, however, not crucial for the purposes of this exposition. By a transition matrix, we mean an entry-wise non-negative matrix with columns adding to unity. The spectral gap is defined as δ = 1 − |λ₂|, where λ₂ is, in norm, the second largest eigenvalue. In full detail, these relations hold whenever the MC is lazy (all states transition back to themselves with probability at least 1/2), ensuring that all the eigenvalues are non-negative; this can be ensured by adding the identity transition with probability 1/2, which slows down the mixing and hitting processes by an irrelevant factor of 2. We point out that the first ideas suggesting that quantum effects could be useful here had been previously suggested in (Dong et al., 2005). BQP stands for bounded-error quantum polynomial time, and collects the decision problems which can be solved with bounded error using a quantum computer. Complete problems of a given class are, in a sense, the hardest problems in that class, as all others are reducible to the complete instances using weaker reductions. In particular, it is not believed that BQP-complete problems are solvable on a classical computer, whereas all decision problems solvable by classical computers do belong to the class BQP. Other delineations are possible, where the agent and environment have individually defined interfaces (a part of E accessible to A and a part of A accessible to E), leading to a four-partite system, but we will not be considering this here. This realization is possible under a couple of technical assumptions; for details see. Interestingly, the Turing test assumes that humans are good supervised learners of the concept of "intelligent agents", all the while being incapable of specifying the classifier, i.e., the definition of intelligence, explicitly.
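The transition-matrix, spectral-gap, and lazy-chain definitions above can be checked on a small example; the 2 × 2 column-stochastic matrix below is a made-up instance, and numpy is assumed available.

```python
import numpy as np

# Column-stochastic transition matrix of a small Markov chain
# (entries non-negative, columns summing to one).
P = np.array([[0.9, 0.3],
              [0.1, 0.7]])
assert np.allclose(P.sum(axis=0), 1.0)

# Spectral gap delta = 1 - |lambda_2|, with lambda_2 the second largest
# eigenvalue in norm (lambda_1 = 1 for a stochastic matrix).
eigvals = sorted(np.abs(np.linalg.eigvals(P)), reverse=True)
delta = 1.0 - eigvals[1]

# The "lazy" version (I + P)/2 stays put with probability >= 1/2, which
# maps every eigenvalue lambda to (1 + lambda)/2 and so makes all
# eigenvalues non-negative, at the cost of a factor-2 slowdown.
P_lazy = (np.eye(2) + P) / 2
lazy_eigs = np.linalg.eigvals(P_lazy)
assert np.all(np.real(lazy_eigs) >= -1e-12)
```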
It should be mentioned that some of the early discussions on quantum AI also consider the possibility that human brains utilize some form of quantum processing, which may be at the crux of human intelligence. Such claims are still highly hypothetical, and are not reviewed in this work. See http://www.scottaaronson.com/blog/?p=207 for a simple explanation. This is reminiscent of the problem of quantum verification, where "quantum Turing test" is a term used for a test which efficiently decides whether the agent is a genuine quantum device/computer (Kashefi, 2013). These complement the experimental work based on superconducting quantum annealers (Neven et al., 2009b; Adachi and Henderson, 2015), which is closely related to one of the approaches to QML. The question of whether information may be quantum, and whether we can talk about "quantum knowledge" as an outside observer, broaches the completely fundamental questions of interpretations of quantum mechanics: for instance, a quantum Bayesianist would likely reject such a third-person perspective on learning. https://insidebigdata.com/2017/02/16/the-exponential-growth-of-data/ (accessed July 2017). In many proposals, the condition number of a matrix depending on the dataset explicitly appears in the run-time; see section VI.C.2.

References

P. Wittek. Quantum Machine Learning: What Quantum Computing Means to Data Mining. Elsevier Insights. Elsevier, 2014a. ISBN 9780128009536.
Maria Schuld, Ilya Sinayskiy, and Francesco Petruccione. The quest for a quantum neural network. Quantum Information Processing, 13(11):2567-2586, 2014a.
Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd. Quantum machine learning, 2016, arXiv:1611.09347.
Srinivasan Arunachalam and Ronald de Wolf. A survey of quantum learning theory. CoRR, abs/1701.06806, 2017.
Carlo Ciliberto, Mark Herbster, Alessandro Davide Ialongo, Massimiliano Pontil, Andrea Rocchetto, Simone Severini, and Leonard Wossnig. Quantum machine learning: a classical perspective, 2017, arXiv:1707.08561.
Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press, New York, NY, USA, 10th edition, 2011. ISBN 9781107002173.
Yuri Manin. Computable and Uncomputable. Sovetskoye Radio, 1980.
Richard Feynman. Simulating physics with computers. International Journal of Theoretical Physics, 21(6-7):467-488, 1982.
Peter W. Shor.
Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Journal on Computing, 26(5):1484-1509, 1997.
Andrew M. Childs and Wim van Dam. Quantum algorithms for algebraic problems. Rev. Mod. Phys., 82:1-52, 2010.
Ashley Montanaro. Quantum algorithms: an overview. npj Quantum Information, 2:15023, 2016.
Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum algorithm for linear systems of equations. Phys. Rev. Lett., 103:150502, 2009.
Andrew M. Childs, Robin Kothari, and Rolando D. Somma. Quantum linear systems algorithm with exponentially improved dependence on precision, 2015, arXiv:1511.02306.
Patrick Rebentrost, Adrian Steffens, and Seth Lloyd. Quantum singular value decomposition of non-sparse low-rank matrices, 2016a, arXiv:1607.05404.
David Poulin and Pawel Wocjan.
Sampling from the thermal quantum Gibbs state and evaluating partition functions with a quantum computer. Phys. Rev. Lett., 103:220502, 2009.
E. Crosson and A. W. Harrow. Simulated quantum annealing can be exponentially faster than classical simulated annealing. In 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), pages 714-723, 2016.
Fernando G. S. L. Brandao and Krysta Svore. Quantum speed-ups for semidefinite programming, 2016, arXiv:1609.05537.
Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. A quantum approximate optimization algorithm, 2014, arXiv:1411.4028.
M. Georgescu, S. Ashhab, and Franco Nori. Quantum simulation. Rev. Mod. Phys., 86:153-185, 2014.
Lov K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of Computing, STOC '96, pages 212-219, New York, NY, USA, 1996.
Andrew M. Childs and Jeffrey Goldstone. Spatial search by quantum walk. Phys. Rev. A, 70:022314, 2004.
J. Kempe. Quantum random walks: An introductory overview. Contemporary Physics, 44(4):307-327, 2003.
Andrew M. Childs, Richard Cleve, Enrico Deotto, Edward Farhi, Sam Gutmann, and Daniel A. Spielman. Exponential algorithmic speedup by a quantum walk. In Proceedings of the Thirty-fifth Annual ACM Symposium on Theory of Computing, STOC '03, pages 59-68, New York, NY, USA, 2003.
Daniel Reitzner, Daniel Nagaj, and Vladimir Buzek. Quantum walks. Acta Physica Slovaca, 61(6):603-725, 2012.
Andrew M. Childs, Richard Cleve, Stephen P. Jordan, and David L. Yonge-Mallo. Discrete-query quantum algorithm for NAND trees. Theory of Computing, 5(1):119-123, 2009.
Bohua Zhan, Shelby Kimmel, and Avinatan Hassidim. Super-polynomial quantum speed-ups for boolean evaluation trees with hidden structure. In Innovations in Theoretical Computer Science 2012, Cambridge, MA, USA, January 8-10, 2012, pages 249-265, 2012.
Scott Aaronson, Shalev Ben-David, and Robin Kothari. Separations in query complexity using cheat sheets. In Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing, STOC '16, pages 863-876, New York, NY, USA, 2016.
Ronald de Wolf. Quantum communication and complexity. Theoretical Computer Science, 287(1):337-353, 2002.
K. Temme, T. J. Osborne, K. G. Vollbrecht, D. Poulin, and F. Verstraete. Quantum metropolis sampling. Nature, 471(7336):87-90, 2011.
Man-Hong Yung and Alán Aspuru-Guzik. A quantum-quantum metropolis algorithm.
Proceedings of the National Academy of Sciences, 109(3):754-759, 2012.
D. Deutsch. Quantum theory, the church-turing principle and the universal quantum computer. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 400(1818):97-117, 1985.
Robert Raussendorf and Hans J. Briegel. A one-way quantum computer. Phys. Rev. Lett., 86:5188-5191, 2001.
H. J. Briegel, D. E. Browne, W. Dur, R. Raussendorf, and M. Van den Nest. Measurement-based quantum computation. Nature Physics, pages 19-26, 2009.
Liming Zhao, Carlos A. Pérez-Delgado, and Joseph F. Fitzsimons. Fast graph operations in quantum computation. Phys. Rev. A, 93:032314, 2016.
Elham Kashefi and Anna Pappa. Multiparty delegated quantum computing, 2016, arXiv:1606.09200.
A. Broadbent, J. Fitzsimons, and E. Kashefi. Universal blind quantum computation. In 2009 50th Annual IEEE Symposium on Foundations of Computer Science, pages 517-526, 2009.
Michael H. Freedman, Alexei Kitaev, Michael J. Larsen, and Zhenghan Wang. Topological quantum computation. Bulletin of the American Mathematical Society, 40(01):31-39, 2002.
Dorit Aharonov, Vaughan Jones, and Zeph Landau. A polynomial quantum algorithm for approximating the jones polynomial. In Proceedings of the Thirty-eighth Annual ACM Symposium on Theory of Computing, STOC '06, pages 427-436, New York, NY, USA, 2006.
Edward Farhi, Jeffrey Goldstone, Sam Gutmann, and Michael Sipser. Quantum computation by adiabatic evolution, 2000, arXiv:quant-ph/0001106.
Bettina Heim, Ethan W. Brown, Dave Wecker, and Matthias Troyer. Designing adiabatic quantum optimization: A case study for the traveling salesman problem, 2017, arXiv:1702.06248.
The computational complexity of linear optics. Scott Aaronson, Alex Arkhipov, http:/doi.acm.org/10.1145/1993636.1993682978-1-4503-0691-1Proceedings of the Forty-third Annual ACM Symposium on Theory of Computing, STOC '11. the Forty-third Annual ACM Symposium on Theory of Computing, STOC '11New York, NY, USAACMScott Aaronson and Alex Arkhipov. The computational complexity of linear optics. In Proceedings of the Forty-third Annual ACM Symposium on Theory of Computing, STOC '11, pages 333-342, New York, NY, USA, 2011. ACM. ISBN 978-1-4503-0691-1. URL http://doi.acm.org/10.1145/1993636.1993682. Sergio Boixo, Sergei V Isakov, N Vadim, Ryan Smelyanskiy, Nan Babbush, Zhang Ding, Michael J Jiang, Bremner, arXiv:1608.00263John M. Martinis, and Hartmut Neven. Characterizing quantum supremacy in near-term devices. Sergio Boixo, Sergei V. Isakov, Vadim N. Smelyanskiy, Ryan Babbush, Nan Ding, Zhang Jiang, Michael J. Bremner, John M. Martinis, and Hartmut Neven. Characterizing quantum supremacy in near-term devices, 2016, arXiv:1608.00263. Quantum advantage with shallow circuits. Sergey Bravyi, David Gosset, Robert Koenig, arXiv:1704.00690Sergey Bravyi, David Gosset, and Robert Koenig. Quantum advantage with shallow circuits, 2017, arXiv:1704.00690. Temporally unstructured quantum computation. Dan Shepherd, Michael J Bremner, 1364-5021Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. 465Dan Shepherd and Michael J. Bremner. Temporally unstructured quantum computation. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 465(2105):1413-1439, 2009, http://rspa.royalsocietypublishing.org/content/465/2105/1413.full.pdf. ISSN 1364-5021. URL http://rspa.royalsocietypublishing.org/content/465/2105/1413. Achieving quantum supremacy with sparse and noisy commuting quantum computations. Quantum, 1:8. Michael J Bremner, Ashley Montanaro, Dan J Shepherd, 10.22331/q-2017-04-25-82521-327XMichael J. 
Bremner, Ashley Montanaro, and Dan J. Shepherd. Achieving quantum supremacy with sparse and noisy commuting quantum computations. Quantum, 1:8, April 2017. ISSN 2521-327X. URL https://doi.org/10.22331/q-2017-04-25-8. . J , 25th Solvay ConfJ. Preskill, 2012. 25th Solvay Conf. Quantum sampling problems, bosonsampling and quantum supremacy. A P Lund, Michael J Bremner, T C Ralph, 10.1038/s41534-017-0018-22056-6387npj Quantum Information. 315A. P. Lund, Michael J. Bremner, and T. C. Ralph. Quantum sampling problems, bosonsampling and quantum supremacy. npj Quantum Information, 3(1):15, 2017. ISSN 2056-6387. URL http://dx.doi. org/10.1038/s41534-017-0018-2. Artificial Intelligence: A Modern Approach. Stuart Russell, Peter Norvig, 0136042597Prentice Hall Press9780136042594Upper Saddle River, NJ, USA3rd editionStuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall Press, Upper Saddle River, NJ, USA, 3rd edition, 2009. ISBN 0136042597, 9780136042594. . M L Mccarthy, N Minsky, C E Rochester, Shannon, Proposal For The Dart-Mouth Summer, Research, On, Intelligence, McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon. A PROPOSAL FOR THE DART- MOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE. http://www- formal.stanford.edu/jmc/history/dartmouth/dartmouth.html, 1955. URL http://www-formal.stanford. edu/jmc/history/dartmouth/dartmouth.html. Symbolic versus Subsymbolic. Chris Eliasmith, William Bechtel, 10.1002/0470018860.s00022Ltd. John Wiley & SonsChris Eliasmith and William Bechtel. Symbolic versus Subsymbolic. John Wiley & Sons, Ltd, 2006. ISBN 9780470018866. URL http://dx.doi.org/10.1002/0470018860.s00022. Computer science as empirical inquiry: Symbols and search. Allen Newell, Herbert A Simon, http:/doi.acm.org/10.1145/360018.3600220001-0782Commun. ACM. 193Allen Newell and Herbert A. Simon. Computer science as empirical inquiry: Symbols and search. Commun. ACM, 19(3):113-126, March 1976. ISSN 0001-0782. 
URL http://doi.acm.org/10.1145/360018.360022. A brief history of connectionism. David A Medler, Neural Computing Surveys. 1David A. Medler. A brief history of connectionism. Neural Computing Surveys, 1:61-101, 1998. Elephants don't play chess. Rodney A Brooks, 0921-8890Robotics and Autonomous Systems. 61Rodney A. Brooks. Elephants don't play chess. Robotics and Autonomous Systems, 6(1):3 -15, 1990. ISSN 0921-8890. URL http://www.sciencedirect.com/science/article/pii/S0921889005800259. Design- ing Autonomous Agents. . Andrew Steane. Quantum computing. Reports on Progress in Physics. 612117Andrew Steane. Quantum computing. Reports on Progress in Physics, 61(2):117, 1998. URL http: //stacks.iop.org/0034-4885/61/i=2/a=002. Understanding Machine Learning: From Theory to Algorithms. Shai Shalev, - Shwartz, Shai Ben-David, Cambridge University Press11070571329781107057135New York, NY, USAShai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, New York, NY, USA, 2014. ISBN 1107057132, 9781107057135. Introduction to Machine Learning. Ethem Alpaydin, The MIT Press2nd edition, 2010. ISBN 026201243X, 9780262012430Ethem Alpaydin. Introduction to Machine Learning. The MIT Press, 2nd edition, 2010. ISBN 026201243X, 9780262012430. ISBN 0262193981. insideBIGDATA. The exponential growth of data. Richard S Sutton, Andrew G Barto, MIT PressCambridge, MA, USAIntroduction to Reinforcement LearningRichard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998. ISBN 0262193981. insideBIGDATA. The exponential growth of data. https://insidebigdata.com/2017/02/16/ the-exponential-growth-of-data/, 2017. Mastering the game of go with deep neural networks and tree search. 
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, 10.1038/nature169610028-0836Thore Graepel, and Demis Hassabis. 529David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, Jan 2016. ISSN 0028-0836. URL http://dx.doi.org/10.1038/nature16961. Article. Semi-Supervised Learning. Olivier Chapelle, Bernhard Schölkopf, Alexander Zien, The MIT Press97802625141251st edition, 2010. ISBN 0262514125Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien. Semi-Supervised Learning. The MIT Press, 1st edition, 2010. ISBN 0262514125, 9780262514125. Universal Artificial Intellegence. Marcus Hutter, 10.1007/b138233SpringerBerlin HeidelbergMarcus Hutter. Universal Artificial Intellegence. Springer Berlin Heidelberg, 2005. URL https://doi.org/ 10.1007/b138233. Computing machinery and intelligence. A M Turing, One of the most influential papers in the history of the cognitive sciences. A. M. Turing. Computing machinery and intelligence, 1950. URL http://cogprints. org/499/. One of the most influential papers in the history of the cognitive sciences: http://cogsci.umn.edu/millennium/final.html. A logical calculus of the ideas immanent in nervous activity. S Warren, Walter Mcculloch, Pitts, 10.1007/BF024782591522-9602The bulletin of mathematical biophysics. 54Warren S. McCulloch and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. 
The bulletin of mathematical biophysics, 5(4):115-133, Dec 1943. ISSN 1522-9602. URL http://dx.doi.org/ 10.1007/BF02478259. The Perceptron, a Perceiving and Recognizing Automaton (Project Para). F Rosenblatt, Cornell Aeronautical Laboratory. Cornell Aeronautical LaboratoryReportF. Rosenblatt. The Perceptron, a Perceiving and Recognizing Automaton (Project Para). Report: Cornell Aeronautical Laboratory. Cornell Aeronautical Laboratory, 1957. Approximation by superpositions of a sigmoidal function. G Cybenko, 10.1007/bf02551274Mathematics of Control, Signals, and Systems. 24G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2(4):303-314, dec 1989. URL https://doi.org/10.1007/bf02551274. Approximation capabilities of multilayer feedforward networks. Kurt Hornik, 10.1016/0893-6080(91)90009-tNeural Networks. 42Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251-257, jan 1991. URL https://doi.org/10.1016/0893-6080(91)90009-t. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, Qianli Liao, 10.1007/s11633-017-1054-21751-8520International Journal of Automation and Computing. Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, and Qianli Liao. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing, Mar 2017. ISSN 1751-8520. URL https://doi.org/10.1007/ s11633-017-1054-2. The mythos of model interpretability. Z C Lipton, abs/1606.03490CoRRZ. C. Lipton. The mythos of model interpretability. CoRR, abs/1606.03490, 2016. URL http://arxiv.org/ abs/1606.03490. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. S Hochreiter, Y Bengio, P Frasconi, J Schmidhuber, A Field Guide to Dynamical Recurrent Neural Networks. S. C. 
Kremer and J. F. KolenIEEE PressS. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In S. C. Kremer and J. F. Kolen, editors, A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press, 2001. Exploring strategies for training deep neural networks. Hugo Larochelle, Yoshua Bengio, Jérôme Louradour, Pascal Lamblin, 1532-4435J. Mach. Learn. Res. 10Hugo Larochelle, Yoshua Bengio, Jérôme Louradour, and Pascal Lamblin. Exploring strategies for training deep neural networks. J. Mach. Learn. Res., 10:1-40, June 2009. ISSN 1532-4435. URL http://dl.acm. org/citation.cfm?id=1577069.1577070. Neural networks and physical systems with emergent collective computational abilities. J J Hopfield, 0027-8424Proc Natl Acad Sci U S A. 798pmidJ. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A, 79(8):2554-2558, Apr 1982. ISSN 0027-8424. URL http://www.ncbi.nlm.nih. gov/pmc/articles/PMC346238/. 6953413[pmid]. Increasing the capacity of a hopfield network without sacrificing functionality. Amos Storkey, 3-540-63631-5Proceedings of the 7th International Conference on Artificial Neural Networks, ICANN '97. the 7th International Conference on Artificial Neural Networks, ICANN '97London, UK, UKSpringer-VerlagAmos Storkey. Increasing the capacity of a hopfield network without sacrificing functionality. In Proceedings of the 7th International Conference on Artificial Neural Networks, ICANN '97, pages 451-456, London, UK, UK, 1997. Springer-Verlag. ISBN 3-540-63631-5. URL http://dl.acm.org/citation.cfm?id= 646257.685557. Robust exponential memory in hopfield networks. Christopher Hillar, Ngoc M Tran, arXiv:1411.4625Christopher Hillar and Ngoc M. Tran. Robust exponential memory in hopfield networks, 2014, arXiv:1411.4625. neural" computation of decisions in optimization problems. 
J J Hopfield, D W Tank, 10.1007/BF003399431432-0770Biological Cybernetics. 523J. J. Hopfield and D. W. Tank. "neural" computation of decisions in optimization problems. Biological Cybernetics, 52(3):141-152, Jul 1985. ISSN 1432-0770. URL http://dx.doi.org/10.1007/BF00339943. Justifying and generalizing contrastive divergence. Yoshua Bengio, Olivier Delalleau, 10.1162/neco.2008.11-07-6470899-7667Neural Comput. 216Yoshua Bengio and Olivier Delalleau. Justifying and generalizing contrastive divergence. Neural Comput., 21 (6):1601-1621, June 2009. ISSN 0899-7667. URL http://dx.doi.org/10.1162/neco.2008.11-07-647. Nathan Wiebe, Ashish Kapoor, Krysta M Svore, arXiv:1412.3489Quantum deep learning. Nathan Wiebe, Ashish Kapoor, and Krysta M. Svore. Quantum deep learning, 2014a, arXiv:1412.3489. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. M Cover, 0367-7508IEEE Transactions on Electronic Computers, EC. 143M. Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Transactions on Electronic Computers, EC-14(3):326-334, June 1965. ISSN 0367-7508. Least squares support vector machine classifiers. J A K Suykens, J Vandewalle, 10.1023/A:10186286097421573-773XNeural Processing Letters. 93J.A.K. Suykens and J. Vandewalle. Least squares support vector machine classifiers. Neural Processing Letters, 9(3):293-300, Jun 1999. ISSN 1573-773X. URL https://doi.org/10.1023/A:1018628609742. Svm versus least squares svm. Jieping Ye, Tao Xiong, Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics. Marina Meila and Xiaotong Shenthe Eleventh International Conference on Artificial Intelligence and StatisticsSan Juan, Puerto Rico2of Proceedings of Machine Learning ResearchJieping Ye and Tao Xiong. Svm versus least squares svm. 
In Marina Meila and Xiaotong Shen, editors, Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, volume 2 of Proceedings of Machine Learning Research, pages 644-651, San Juan, Puerto Rico, 21-24 Mar 2007. PMLR. URL http://proceedings.mlr.press/v2/ye07a.html. Random classification noise defeats all convex potential boosters. M Philip, Rocco A Long, Servedio, 10.1007/s10994-009-5165-z1573-0565Machine Learning. 78Philip M. Long and Rocco A. Servedio. Random classification noise defeats all convex potential boost- ers. Machine Learning, 78(3):287-304, 2010. ISSN 1573-0565. URL http://dx.doi.org/10.1007/ s10994-009-5165-z. Noise tolerance under risk minimization. Naresh Manwani, P S Sastry, arXiv:1109.5231Naresh Manwani and P. S. Sastry. Noise tolerance under risk minimization, 2011, arXiv:1109.5231. A decision-theoretic generalization of on-line learning and an application to boosting. Yoav Freund, Robert E Schapire, 0022-0000Journal of Computer and System Sciences. 551Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119 -139, 1997. ISSN 0022-0000. URL http://www.sciencedirect.com/science/article/pii/S002200009791504X. The lack of a priori distinctions between learning algorithms. H David, Wolpert, 10.1162/neco.1996.8.7.1341Neural Computation. 87David H. Wolpert. The lack of a priori distinctions between learning algorithms. Neural Computation, 8 (7):1341-1390, 1996, http://dx.doi.org/10.1162/neco.1996.8.7.1341. URL http://dx.doi.org/10.1162/ neco.1996.8.7.1341. Treatise on Human Nature. David Hume, Oxford University Press1739David Hume. Treatise on Human Nature. Oxford University Press, 1739. The problem of induction. John Vickers, No free lunch theorems -discussions and links. Edward N. ZaltaThe Stanford Encyclopedia of Philosophy. 
Metaphysics Research Lab, Stanford Universityspring 2016 editionJohn Vickers. The problem of induction. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, spring 2016 edition, 2016. NFL. No free lunch theorems -discussions and links. http://www.no-free-lunch.org/. No free lunch versus occam's razor in supervised learning. Tor Lattimore, Marcus Hutter, 10.1007/978-3-642-44958-1_17Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence -Papers from the Ray Solomonoff 85th Memorial Conference. Melbourne, VIC, AustraliaTor Lattimore and Marcus Hutter. No free lunch versus occam's razor in supervised learning. In Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence -Papers from the Ray Solomonoff 85th Memorial Conference, Melbourne, VIC, Australia, November 30 -December 2, 2011, pages 223-235, 2011. URL https://doi.org/10.1007/978-3-642-44958-1_17. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Marcus Hutter, Springer-Verlag9783642060526Berlin, HeidelbergMarcus Hutter. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer-Verlag, Berlin, Heidelberg, 2010. ISBN 3642060528, 9783642060526. Universal learning vs. no free lunch results. Shai Ben-David, Nathan Srebro, Ruth Urner, Workshop at NIPS 2011. Shai Ben-David, Nathan Srebro, and Ruth Urner. Universal learning vs. no free lunch results. In Workshop at NIPS 2011, 2011. A theory of the learnable. G Valiant, http:/doi.acm.org/10.1145/1968.19720001-0782Commun. ACM. 2711G. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134-1142, November 1984. ISSN 0001-0782. URL http://doi.acm.org/10.1145/1968.1972. The Nature of Statistical Learning Theory. Vladimir N Vapnik, ISBN 0-387-94559-8Springer-Verlag New York, IncNew York, NY, USAVladimir N. Vapnik. The Nature of Statistical Learning Theory. 
Springer-Verlag New York, Inc., New York, NY, USA, 1995. ISBN 0-387-94559-8. Queries and concept learning. Dana Angluin, Machine learning. 24Dana Angluin. Queries and concept learning. Machine learning, 2(4):319-342, 1988. The strength of weak learnability. Robert E Schapire, 10.1023/A:10226488007600885-6125Mach. Learn. 52Robert E. Schapire. The strength of weak learnability. Mach. Learn., 5(2):197-227, July 1990. ISSN 0885-6125. URL http://dx.doi.org/10.1023/A:1022648800760. Efficient distribution-free learning of probabilistic concepts. J Michael, Robert E Kearns, Schapire, 10.1016/s0022-0000(05)80062-5Journal of Computer and System Sciences. 483Michael J. Kearns and Robert E. Schapire. Efficient distribution-free learning of probabilistic concepts. Journal of Computer and System Sciences, 48(3):464-497, jun 1994. URL https://doi.org/10.1016/ s0022-0000(05)80062-5. The learnability of quantum states. Scott Aaronson, 1364-5021Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. 463Scott Aaronson. The learnability of quantum states. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 463(2088):3089-3114, 2007, http://rspa.royalsocietypublishing.org/content/463/2088/3089.full.pdf. ISSN 1364-5021. URL http: //rspa.royalsocietypublishing.org/content/463/2088/3089. Rademacher and gaussian complexities: Risk bounds and structural results. L Peter, Shahar Bartlett, Mendelson, 1532-4435J. Mach. Learn. Res. 3Peter L. Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. J. Mach. Learn. Res., 3:463-482, March 2003. ISSN 1532-4435. URL http://dl.acm. org/citation.cfm?id=944919.944944. A Probabilistic Theory of Pattern Recognition. L Devroye, L Györfi, G Lugosi, SpringerL. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996. Technical note: Q-learning. 
J C H Christopher, Peter Watkins, Dayan, 10.1023/A:10226767223151573-0565Machine Learning. 8Christopher J.C.H. Watkins and Peter Dayan. Technical note: Q-learning. Machine Learning, 8(3):279-292, May 1992. ISSN 1573-0565. URL https://doi.org/10.1023/A:1022676722315. Human-level control through deep reinforcement learning. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, 10.1038/nature1423600280836Nature. 5187540Ioannis Antonoglou. and Demis HassabisVolodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, February 2015. ISSN 00280836. URL http://dx.doi.org/10.1038/nature14236. Projective simulation for artificial intelligence. Leonid Peshkin, 10.1038/srep00400Scientific Reports. Hans J. Briegel and Gemma De las Cuevas2400Brown University, USPhD thesisReinforcement Learning by Policy SearchLeonid Peshkin. Reinforcement Learning by Policy Search. PhD thesis, Brown University, US, 2001. Hans J. Briegel and Gemma De las Cuevas. Projective simulation for artificial intelligence. Scientific Reports, 2:400 EP -, May 2012. URL http://dx.doi.org/10.1038/srep00400. Article. Quantum Measurement and Control. M Wiseman, G J Milburn, Cambridge University PressM. Wiseman and G.J. Milburn. Quantum Measurement and Control. Cambridge University Press, 2010. ISBN 9780521804424. URL https://books.google.de/books?id=ZNjvHaH8qA4C. On the Sample Complexity of Reinforcement Learning. Sham Kakade, University College LondonPhD thesisSham Kakade. On the Sample Complexity of Reinforcement Learning. 
PhD thesis, University College London, 2003. The sample-complexity of general reinforcement learning. Tor Lattimore, Marcus Hutter, Peter Sunehag, arXiv:1308.4828Tor Lattimore, Marcus Hutter, and Peter Sunehag. The sample-complexity of general reinforcement learning, 2013, arXiv:1308.4828. Sample complexity of episodic fixed-horizon reinforcement learning. Christoph Dann, Emma Brunskill, Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS'15. the 28th International Conference on Neural Information Processing Systems, NIPS'15Cambridge, MA, USAMIT PressChristoph Dann and Emma Brunskill. Sample complexity of episodic fixed-horizon reinforcement learning. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS'15, pages 2818-2826, Cambridge, MA, USA, 2015. MIT Press. URL http://dl.acm.org/citation.cfm?id= 2969442.2969555. Quantum detection and estimation theory. Carl W Helstrom, 10.1007/BF010074791572-9613Journal of Statistical Physics. 12Carl W. Helstrom. Quantum detection and estimation theory. Journal of Statistical Physics, 1(2):231-252, 1969. ISSN 1572-9613. URL http://dx.doi.org/10.1007/BF01007479. Probabilistic and statistical aspects of quantum theory. North-Holland series in statistics and probability. A S Holevo, North-Holland Pub. CoA.S. Holevo. Probabilistic and statistical aspects of quantum theory. North-Holland series in statistics and probability. North-Holland Pub. Co., 1982. ISBN 9780444863331. URL https://books.google.de/ books?id=ELDvAAAAMAAJ. Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Advances in quantum metrology. L Samuel, Carlton M Braunstein, Caves, 10.1038/nphoton.2011.351749-4885Phys. Rev. Lett. 724Nat PhotonSamuel L. Braunstein and Carlton M. Caves. Statistical distance and the geometry of quantum states. Phys. Rev. Lett., 72:3439-3443, May 1994. URL https://link.aps.org/doi/10.1103/PhysRevLett.72.3439. 
Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Advances in quantum metrology. Nat Photon, 5 (4):222-229, Apr 2011. ISSN 1749-4885. URL http://dx.doi.org/10.1038/nphoton.2011.35. Quantum-state estimation. Z Hradil, https:/link.aps.org/doi/10.1103/PhysRevA.55.R1561Phys. Rev. A. 55Z. Hradil. Quantum-state estimation. Phys. Rev. A, 55:R1561-R1564, Mar 1997. URL https://link.aps. org/doi/10.1103/PhysRevA.55.R1561. Maximum-likelihood estimation of quantum processes. Jaromír Fiurášek, Zdeněk Hradil, https:/link.aps.org/doi/10.1103/PhysRevA.63.020101Phys. Rev. A. 6320101Jaromír Fiurášek and Zdeněk Hradil. Maximum-likelihood estimation of quantum processes. Phys. Rev. A, 63:020101, Jan 2001. URL https://link.aps.org/doi/10.1103/PhysRevA.63.020101. Maximum-likelihood estimation of quantum measurement. Jaromír Fiurášek, https:/link.aps.org/doi/10.1103/PhysRevA.64.024102Phys. Rev. A. 6424102Jaromír Fiurášek. Maximum-likelihood estimation of quantum measurement. Phys. Rev. A, 64:024102, Jul 2001. URL https://link.aps.org/doi/10.1103/PhysRevA.64.024102. Process reconstruction: From unphysical to physical maps via maximum likelihood. Mário Ziman, Martin Plesch, Vladimír Bužek, Peterštelmachovič , https:/link.aps.org/doi/10.1103/PhysRevA.72.022106Phys. Rev. A. 7222106Mário Ziman, Martin Plesch, Vladimír Bužek, and PeterŠtelmachovič. Process reconstruction: From unphysical to physical maps via maximum likelihood. Phys. Rev. A, 72:022106, Aug 2005. URL https://link.aps.org/doi/10.1103/PhysRevA.72.022106. True precision limits in quantum metrology. Marcin Jarzyna, Rafa L Demkowicz-Dobrzański , New Journal of Physics. 17113010Marcin Jarzyna and Rafa l Demkowicz-Dobrzański. True precision limits in quantum metrology. New Journal of Physics, 17(1):013010, 2015. URL http://stacks.iop.org/1367-2630/17/i=1/a=013010. Optimal quantum measurements for phase estimation. B C Sanders, G J Milburn, https:/link.aps.org/doi/10.1103/PhysRevLett.75.2944Phys. Rev. Lett. 75B. C. Sanders and G. 
J. Milburn. Optimal quantum measurements for phase estimation. Phys. Rev. Lett., 75:2944-2947, Oct 1995. URL https://link.aps.org/doi/10.1103/PhysRevLett.75.2944. Optimal states and almost optimal adaptive measurements for quantum interferometry. D W Berry, H M Wiseman, https:/link.aps.org/doi/10.1103/PhysRevLett.85.5098Phys. Rev. Lett. 85D. W. Berry and H. M. Wiseman. Optimal states and almost optimal adaptive measurements for quantum interferometry. Phys. Rev. Lett., 85:5098-5101, Dec 2000. URL https://link.aps.org/doi/10.1103/ PhysRevLett.85.5098. Optimal input states and feedback for interferometric phase estimation. D W Berry, H M Wiseman, J K Breslin, https:/link.aps.org/doi/10.1103/PhysRevA.63.053804Phys. Rev. A. 6353804D. W. Berry, H. M. Wiseman, and J. K. Breslin. Optimal input states and feedback for interferometric phase estimation. Phys. Rev. A, 63:053804, Apr 2001. URL https://link.aps.org/doi/10.1103/PhysRevA. 63.053804. Machine learning for precise quantum measurement. Alexander Hentschel, Barry C Sanders, https:/link.aps.org/doi/10.1103/PhysRevLett.104.063603Phys. Rev. Lett. 10463603Alexander Hentschel and Barry C. Sanders. Machine learning for precise quantum measurement. Phys. Rev. Lett., 104:063603, Feb 2010. URL https://link.aps.org/doi/10.1103/PhysRevLett.104.063603. Efficient algorithm for optimizing adaptive quantum metrology processes. Alexander Hentschel, Barry C Sanders, https:/link.aps.org/doi/10.1103/PhysRevLett.107.233601Phys. Rev. Lett. 107233601Alexander Hentschel and Barry C. Sanders. Efficient algorithm for optimizing adaptive quantum metrol- ogy processes. Phys. Rev. Lett., 107:233601, Nov 2011. URL https://link.aps.org/doi/10.1103/ PhysRevLett.107.233601. Optimizing qubit hamiltonian parameter estimation algorithm using PSO. Alexandr Sergeevich, Stephen D Bartlett, 10.1109/cec.2012.6252948IEEE Congress on Evolutionary Computation. IEEEAlexandr Sergeevich and Stephen D. Bartlett. 
Optimizing qubit hamiltonian parameter estimation algorithm using PSO. In 2012 IEEE Congress on Evolutionary Computation. IEEE, jun 2012. URL https: //doi.org/10.1109/cec.2012.6252948. Differential evolution for many-particle adaptive quantum metrology. Neil B Lovett, Cécile Crosnier, Martí Perarnau-Llobet, Barry C Sanders, https:/link.aps.org/doi/10.1103/PhysRevLett.110.220501Phys. Rev. Lett. 110220501Neil B. Lovett, Cécile Crosnier, Martí Perarnau-Llobet, and Barry C. Sanders. Differential evolution for many-particle adaptive quantum metrology. Phys. Rev. Lett., 110:220501, May 2013. URL https: //link.aps.org/doi/10.1103/PhysRevLett.110.220501. Robust online hamiltonian learning. Christopher Christopher E Granade, Nathan Ferrie, D G Wiebe, Cory, New Journal of Physics. 1410103013Christopher E Granade, Christopher Ferrie, Nathan Wiebe, and D G Cory. Robust online hamiltonian learning. New Journal of Physics, 14(10):103013, 2012. URL http://stacks.iop.org/1367-2630/14/ i=10/a=103013. Bayesian adaptive exploration. Thomas J Loredo, http:/aip.scitation.org/doi/abs/10.1063/1.1751377AIP Conference Proceedings. 7071Thomas J. Loredo. Bayesian adaptive exploration. AIP Conference Proceedings, 707(1):330-346, 2004, http://aip.scitation.org/doi/pdf/10.1063/1.1751377. URL http://aip.scitation.org/doi/abs/ 10.1063/1.1751377. Hamiltonian learning and certification using quantum resources. Nathan Wiebe, Christopher Granade, Christopher Ferrie, D G Cory, https:/link.aps.org/doi/10.1103/PhysRevLett.112.190501Phys. Rev. Lett. 112190501Nathan Wiebe, Christopher Granade, Christopher Ferrie, and D. G. Cory. Hamiltonian learning and certification using quantum resources. Phys. Rev. Lett., 112:190501, May 2014b. URL https://link. aps.org/doi/10.1103/PhysRevLett.112.190501. Quantum hamiltonian learning using imperfect quantum resources. Nathan Wiebe, Christopher Granade, Christopher Ferrie, David Cory, https:/link.aps.org/doi/10.1103/PhysRevA.89.042314Phys. Rev. A. 
Nathan Wiebe, Christopher Granade, Christopher Ferrie, and David Cory. Quantum Hamiltonian learning using imperfect quantum resources. Phys. Rev. A, 89:042314, Apr 2014. URL https://link.aps.org/doi/10.1103/PhysRevA.89.042314.
Jianwei Wang, Stefano Paesani, Raffaele Santagati, Sebastian Knauer, Antonio A. Gentile, Nathan Wiebe, Maurangelo Petruzzella, Jeremy L. O'Brien, John G. Rarity, Anthony Laing, and Mark G. Thompson. Experimental quantum Hamiltonian learning. Nat. Phys., 13(6):551-555, Jun 2017. URL http://dx.doi.org/10.1038/nphys4074.
Markku P. V. Stenberg, Oliver Köhn, and Frank K. Wilhelm. Characterization of decohering quantum systems: Machine learning approach. Phys. Rev. A, 93:012122, Jan 2016. URL https://link.aps.org/doi/10.1103/PhysRevA.93.012122.
Herschel A. Rabitz, Michael M. Hsieh, and Carey M. Rosenthal. Quantum optimally controlled transition landscapes. Science, 303(5666):1998-2001, 2004. URL http://science.sciencemag.org/content/303/5666/1998.
Benjamin Russell and Herschel Rabitz. Common foundations of optimal control across the sciences: evidence of a free lunch. Philosophical Transactions of the Royal Society of London A, 375(2088), 2017. URL http://rsta.royalsocietypublishing.org/content/375/2088/20160210.
Ehsan Zahedinejad, Sophie Schirmer, and Barry C. Sanders. Evolutionary algorithms for hard quantum control. Phys. Rev. A, 90:032310, Sep 2014. URL https://link.aps.org/doi/10.1103/PhysRevA.90.032310.
Yaoyun Shi. Both Toffoli and controlled-NOT need little help to do universal quantum computation, 2002. arXiv:quant-ph/0205115.
Ehsan Zahedinejad, Joydip Ghosh, and Barry C. Sanders. High-fidelity single-shot Toffoli gate via quantum control. Phys. Rev. Lett., 114:200502, May 2015. URL https://link.aps.org/doi/10.1103/PhysRevLett.114.200502.
Ehsan Zahedinejad, Joydip Ghosh, and Barry C. Sanders. Designing high-fidelity single-shot three-qubit gates: A machine-learning approach. Phys. Rev. Applied, 6:054005, Nov 2016. URL https://link.aps.org/doi/10.1103/PhysRevApplied.6.054005.
Simon C. Benjamin and Sougato Bose. Quantum computing with an always-on Heisenberg interaction. Phys. Rev. Lett., 90:247901, Jun 2003. URL https://link.aps.org/doi/10.1103/PhysRevLett.90.247901.
Leonardo Banchi, Nicola Pancotti, and Sougato Bose. Quantum gate learning in qubit networks: Toffoli gate without time-dependent control. npj Quantum Information, 2:16019, 2016. URL http://dx.doi.org/10.1038/npjqi.2016.19.
Ofer M. Shir, Jonathan Roslund, Zaki Leghtas, and Herschel Rabitz. Quantum control experiments as a testbed for evolutionary multi-objective algorithms. Genetic Programming and Evolvable Machines, 13(4):445-491, Dec 2012. URL http://dx.doi.org/10.1007/s10710-012-9164-7.
Jeongho Bang, James Lim, M. S. Kim, and Jinhyoung Lee. Quantum learning machine, 2008. arXiv:0803.2976.
S. Gammelmark and K. Mølmer. Quantum learning by measurement and feedback. New Journal of Physics, 11(3):033017, 2009. URL http://stacks.iop.org/1367-2630/11/i=3/a=033017.
C. Chen, D. Dong, H. X. Li, J. Chu, and T. J. Tarn. Fidelity-based probabilistic Q-learning for control of quantum systems. IEEE Transactions on Neural Networks and Learning Systems, 25(5):920-933, May 2014.
Pantita Palittapongarnpim, Peter Wittek, Ehsan Zahedinejad, and Barry C. Sanders. Learning in quantum control: High-dimensional global optimization for noisy quantum dynamics. CoRR, abs/1607.03428, 2016. URL http://arxiv.org/abs/1607.03428.
Jens Clausen and Hans J. Briegel. Quantum machine learning with glow for episodic tasks and decision games, 2016. arXiv:1601.07358.
S. Machnes, U. Sander, S. J. Glaser, P. de Fouquières, A. Gruslys, S. Schirmer, and T. Schulte-Herbrüggen. Comparing, optimizing, and benchmarking quantum-control algorithms in a unifying programming framework. Phys. Rev. A, 84:022305, Aug 2011. URL https://link.aps.org/doi/10.1103/PhysRevA.84.022305.
Moritz August and Xiaotong Ni. Using recurrent neural networks to optimize dynamical decoupling for quantum memory. Phys. Rev. A, 95:012335, Jan 2017. URL https://link.aps.org/doi/10.1103/PhysRevA.95.012335.
M. Tiersch, E. J. Ganahl, and H. J. Briegel. Adaptive quantum computation in changing environments using projective simulation. Scientific Reports, 5:12874, Aug 2015. URL http://dx.doi.org/10.1038/srep12874.
Davide Orsucci, Markus Tiersch, and Hans J. Briegel. Estimation of coherent error sources from stabilizer measurements. Phys. Rev. A, 93:042303, Apr 2016. URL https://link.aps.org/doi/10.1103/PhysRevA.93.042303.
Joshua Combes, Christopher Ferrie, Chris Cesare, Markus Tiersch, G. J. Milburn, Hans J. Briegel, and Carlton M. Caves. In-situ characterization of quantum devices with error correction, 2014. arXiv:1405.5656.
Sandeep Mavadia, Virginia Frey, Jarrah Sastrawan, Stephen Dona, and Michael J. Biercuk. Prediction and real-time compensation of qubit decoherence via machine learning. Nature Communications, 8:14106, Jan 2017. URL http://dx.doi.org/10.1038/ncomms14106.
Mario Krenn, Mehul Malik, Robert Fickler, Radek Lapkiewicz, and Anton Zeilinger. Automated search for new quantum experiments. Phys. Rev. Lett., 116:090405, Mar 2016. URL https://link.aps.org/doi/10.1103/PhysRevLett.116.090405.
Hans J. Briegel. Projective Simulation for Classical and Quantum Autonomous Agents. Talk delivered at the KITP Program Control of Complex Quantum Systems, Santa Barbara, 2013.
Alexey A. Melnikov, Hendrik Poulsen Nautrup, Mario Krenn, Vedran Dunjko, Markus Tiersch, Anton Zeilinger, and Hans J. Briegel. Active learning machine learns to create new quantum experiments, 2017. arXiv:1706.00868.
Marin Bukov, Alexandre G. R. Day, Dries Sels, Phillip Weinberg, Anatoli Polkovnikov, and Pankaj Mehta. Machine learning meets quantum state preparation: the phase diagram of quantum control, 2017. arXiv:1705.00565.
Maxwell W. Libbrecht and William Stafford Noble. Machine learning applications in genetics and genomics. Nature Reviews Genetics, 16(6):321-332, Jun 2015. URL http://dx.doi.org/10.1038/nrg3920.
Ton J. Cleophas and Aeilko H. Zwinderman. Machine Learning in Medicine - a Complete Overview. Springer International Publishing, 2015. URL https://doi.org/10.1007/978-3-319-15195-3.
Hugh Cartwright. Development and Uses of Artificial Intelligence in Chemistry, pages 349-390. John Wiley & Sons, Inc., 2007. URL http://dx.doi.org/10.1002/9780470189078.ch8.
Davide Castelvecchi. Artificial intelligence called in to tackle LHC data deluge. Nature, 528(7580):18-19, Dec 2015. URL https://doi.org/10.1038/528018a.
Stefano Curtarolo, Dane Morgan, Kristin Persson, John Rodgers, and Gerbrand Ceder. Predicting crystal structures with data mining of quantum calculations. Phys. Rev. Lett., 91:135503, Sep 2003. URL https://link.aps.org/doi/10.1103/PhysRevLett.91.135503.
John C. Snyder, Matthias Rupp, Katja Hansen, Klaus-Robert Müller, and Kieron Burke. Finding density functionals with machine learning. Phys. Rev. Lett., 108:253002, Jun 2012. URL https://link.aps.org/doi/10.1103/PhysRevLett.108.253002.
Matthias Rupp, Alexandre Tkatchenko, Klaus-Robert Müller, and O. Anatole von Lilienfeld. Fast and accurate modeling of molecular atomization energies with machine learning. Phys. Rev. Lett., 108:058301, Jan 2012. URL https://link.aps.org/doi/10.1103/PhysRevLett.108.058301.
Zhenwei Li, James R. Kermode, and Alessandro De Vita. Molecular dynamics with on-the-fly machine learning of quantum-mechanical forces. Phys. Rev. Lett., 114:096405, Mar 2015. URL https://link.aps.org/doi/10.1103/PhysRevLett.114.096405.
Louis-François Arsenault, Alejandro Lopez-Bezanilla, O. Anatole von Lilienfeld, and Andrew J. Millis. Machine learning for many-body physics: The case of the Anderson impurity model. Phys. Rev. B, 90:155136, Oct 2014. URL https://link.aps.org/doi/10.1103/PhysRevB.90.155136.
Lei Wang. Discovering phase transitions with unsupervised learning. Phys. Rev. B, 94:195105, Nov 2016. URL https://link.aps.org/doi/10.1103/PhysRevB.94.195105.
Wenjian Hu, Rajiv R. P. Singh, and Richard T. Scalettar. Discovering phases, phase transitions and crossovers through unsupervised machine learning: A critical examination, 2017. arXiv:1704.00080.
Juan Carrasquilla and Roger G. Melko. Machine learning phases of matter. Nat. Phys., 13(5):431-434, May 2017. URL http://dx.doi.org/10.1038/nphys4035.
Kelvin Ch'ng, Juan Carrasquilla, Roger G. Melko, and Ehsan Khatami. Machine learning phases of strongly correlated fermions, 2016. arXiv:1609.02552.
Peter Broecker, Juan Carrasquilla, Roger G. Melko, and Simon Trebst. Machine learning quantum phases of matter beyond the fermion sign problem, 2016. arXiv:1608.07848.
Evert P. L. van Nieuwenburg, Ye-Hua Liu, and Sebastian D. Huber. Learning phase transitions by confusion. Nat. Phys., 13(5):435-439, May 2017. URL http://dx.doi.org/10.1038/nphys4037.
Pedro Ponte and Roger G. Melko. Kernel methods for interpretable machine learning of order parameters, 2017. arXiv:1704.05848.
Giuseppe Carleo and Matthias Troyer. Solving the quantum many-body problem with artificial neural networks. Science, 355(6325):602-606, 2017. URL http://science.sciencemag.org/content/355/6325/602.
F. Verstraete, V. Murg, and J. I. Cirac. Matrix product states, projected entangled pair states, and variational renormalization group methods for quantum spin systems. Advances in Physics, 57(2):143-224, 2008. URL http://dx.doi.org/10.1080/14789940801912366.
Giacomo Torlai, Guglielmo Mazzola, Juan Carrasquilla, Matthias Troyer, Roger Melko, and Giuseppe Carleo. Many-body quantum state tomography with neural networks, 2017. arXiv:1703.05334.
Dong-Ling Deng, Xiaopeng Li, and S. Das Sarma. Quantum entanglement in neural network states. Phys. Rev. X, 7:021021, May 2017. URL https://link.aps.org/doi/10.1103/PhysRevX.7.021021.
Xun Gao and Lu-Ming Duan. Efficient representation of quantum many-body states with deep neural networks, 2017. arXiv:1701.05039.
Pankaj Mehta and David J. Schwab. An exact mapping between the variational renormalization group and deep learning, 2014. arXiv:1410.3831.
Stellan Östlund and Stefan Rommer. Thermodynamic limit of density matrix renormalization. Phys. Rev. Lett., 75:3537-3540, Nov 1995. URL https://link.aps.org/doi/10.1103/PhysRevLett.75.3537.
F. Verstraete and J. I. Cirac. Renormalization algorithms for quantum many-body systems in two and higher dimensions, 2004. arXiv:cond-mat/0407066.
Yoav Levine, David Yakira, Nadav Cohen, and Amnon Shashua. Deep learning and quantum entanglement: Fundamental connections with implications to network design, 2017. arXiv:1704.01552.
Yue-Chi Ma and Man-Hong Yung. Transforming Bell's inequalities into state classifiers with machine learning, 2017. arXiv:1705.00813.
Sirui Lu, Shilin Huang, Keren Li, Jun Li, Jianxin Chen, Dawei Lu, Zhengfeng Ji, Yi Shen, Duanlu Zhou, and Bei Zeng. A separability-entanglement classifier via machine learning, 2017. arXiv:1705.01523.
W. K. Wootters and W. H. Zurek. A single quantum cannot be cloned. Nature, 299(5886):802-803, Oct 1982. URL http://dx.doi.org/10.1038/299802a0.
Sarah Croke, Erika Andersson, and Stephen M. Barnett. No-signaling bound on quantum state discrimination. Phys. Rev. A, 77:012113, Jan 2008. URL https://link.aps.org/doi/10.1103/PhysRevA.77.012113.
Sergei Slussarenko, Morgan M. Weston, Jun-Gang Li, Nicholas Campbell, Howard M. Wiseman, and Geoff J. Pryde. Quantum state discrimination using the minimum average number of copies. Phys. Rev. Lett., 118:030502, Jan 2017. URL https://link.aps.org/doi/10.1103/PhysRevLett.118.030502.
Masahide Sasaki, Alberto Carlini, and Richard Jozsa. Quantum template matching. Phys. Rev. A, 64:022317, Jul 2001. URL https://link.aps.org/doi/10.1103/PhysRevA.64.022317.
Masahide Sasaki and Alberto Carlini. Quantum learning and universal quantum matching machine. Phys. Rev. A, 66:022303, Aug 2002. URL https://link.aps.org/doi/10.1103/PhysRevA.66.022303.
János A. Bergou and Mark Hillery. Universal programmable quantum state discriminator that is optimal for unambiguously distinguishing between unknown states. Phys. Rev. Lett., 94:160501, Apr 2005. URL https://link.aps.org/doi/10.1103/PhysRevLett.94.160501.
A. Hayashi, M. Horibe, and T. Hashimoto. Quantum pure-state identification. Phys. Rev. A, 72:052306, Nov 2005. URL https://link.aps.org/doi/10.1103/PhysRevA.72.052306.
A. Hayashi, M. Horibe, and T. Hashimoto. Unambiguous pure-state identification without classical knowledge. Phys. Rev. A, 73:012328, Jan 2006. URL https://link.aps.org/doi/10.1103/PhysRevA.73.012328.
Mȃdȃlin Guţȃ and Wojciech Kotłowski. Quantum learning: asymptotically optimal classification of qubit states. New Journal of Physics, 12(12):123032, 2010. URL http://stacks.iop.org/1367-2630/12/i=12/a=123032.
G. Sentís, J. Calsamiglia, R. Muñoz-Tapia, and E. Bagan. Quantum learning without quantum memory. Scientific Reports, 2:708, Oct 2012. URL http://dx.doi.org/10.1038/srep00708.
Gael Sentís. Personal communication, 2017.
Gael Sentís, Mȃdȃlin Guţȃ, and Gerardo Adesso. Quantum learning of coherent states. EPJ Quantum Technology, 2(17), Jul 2015. URL https://doi.org/10.1140/epjqt/s40507-015-0030-4.
G. Sentís, E. Bagan, J. Calsamiglia, and R. Muñoz-Tapia. Programmable discrimination with an error margin. Phys. Rev. A, 88:052304, Nov 2013. URL https://link.aps.org/doi/10.1103/PhysRevA.88.052304.
Esma Aïmeur, Gilles Brassard, and Sébastien Gambs. Machine Learning in a Quantum World, pages 431-442. Springer Berlin Heidelberg, Berlin, Heidelberg, 2006. URL http://dx.doi.org/10.1007/11766247_37.
Songfeng Lu and Samuel L. Braunstein. Quantum decision tree classifier. Quantum Information Processing, 13(3):757-770, 2014. URL http://dx.doi.org/10.1007/s11128-013-0687-5.
Sébastien Gambs. Quantum classification, 2008. arXiv:0809.0444.
Alex Monràs, Gael Sentís, and Peter Wittek. Inductive supervised quantum learning. Phys. Rev. Lett., 118:190503, May 2017. URL https://link.aps.org/doi/10.1103/PhysRevLett.118.190503.
Andrea Rocchetto. Stabiliser states are efficiently PAC-learnable, 2017. arXiv:1705.00345.
Alessandro Bisio, Giulio Chiribella, Giacomo Mauro D'Ariano, Stefano Facchini, and Paolo Perinotti. Optimal quantum learning of a unitary transformation. Phys. Rev. A, 81:032324, Mar 2010. URL https://link.aps.org/doi/10.1103/PhysRevA.81.032324.
Alessandro Bisio, Giacomo Mauro D'Ariano, Paolo Perinotti, and Michal Sedlák. Quantum learning algorithms for quantum measurements. Physics Letters A, 375(39):3425-3434, 2011. URL http://www.sciencedirect.com/science/article/pii/S0375960111009467.
Michal Sedlák, Alessandro Bisio, and Mário Ziman. Perfect probabilistic storing and retrieving of unitary channels, 2017. URL http://qpl.science.ru.nl/papers/QPL_2017_paper_30.pdf. Featured in QPL/IQSA 2017.
Michal Sedlák and Mário Ziman. Optimal single-shot strategies for discrimination of quantum measurements. Phys. Rev. A, 90:052312, Nov 2014. URL https://link.aps.org/doi/10.1103/PhysRevA.90.052312.
Hao-Chung Cheng, Min-Hsiu Hsieh, and Ping-Cheng Yeh. The learnability of unknown quantum measurements. Quantum Information & Computation, 16(7&8):615-656, 2016. URL http://www.rintonpress.com/xxqic16/qic-16-78/0615-0656.pdf.
Jennifer Barry, Daniel T. Barry, and Scott Aaronson. Quantum partially observable Markov decision processes. Phys. Rev. A, 90:032311, Sep 2014. URL https://link.aps.org/doi/10.1103/PhysRevA.90.032311.
M. Lewenstein. Quantum perceptrons. Journal of Modern Optics, 41(12):2491-2501, Dec 1994. URL https://doi.org/10.1080/09500349414552331.
Subhash Kak. On quantum neural computing. Information Sciences, 83(3):143-160, 1995. URL http://www.sciencedirect.com/science/article/pii/002002559400095S.
Nader H. Bshouty and Jeffrey C. Jackson. Learning DNF over the uniform distribution using a quantum example oracle. SIAM Journal on Computing, 28(3):1136-1153, Jan 1998. URL https://doi.org/10.1137/s0097539795293123. Appeared in the Computational Learning Theory (COLT) conference proceedings in 1995.
Roger Penrose. The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford University Press, Inc., New York, NY, USA, 1989. ISBN 0-19-851973-7.
Hidetoshi Nishimori and Yoshihiko Nonomura. Quantum effects in neural networks. Journal of the Physical Society of Japan, 65(12):3780-3796, 1996. URL http://dx.doi.org/10.1143/JPSJ.65.3780.
Max Tegmark. Importance of quantum decoherence in brain processes. Phys. Rev. E, 61:4194-4206, Apr 2000. URL https://link.aps.org/doi/10.1103/PhysRevE.61.4194.
E. C. Behrman, J. Niemel, J. E. Steck, and S. R. Skinner. A quantum dot neural network, 1996.
Mitja Peruš. Neural networks as a basis for quantum associative networks. Neural Netw. World, 10(6):1001-1013, 2000.
M. V. Altaisky, N. N. Zolnikova, N. E. Kaputkina, V. A. Krylov, Yu E. Lozovik, and N. S. Dattani. Entanglement in a quantum neural network based on quantum dots. Photonics and Nanostructures - Fundamentals and Applications, 24:24-28, 2017. URL http://www.sciencedirect.com/science/article/pii/S1569441017300317.
Maria Schuld, Ilya Sinayskiy, and Francesco Petruccione. The quest for a quantum neural network. Quantum Information Processing, 13(11):2567-2586, Aug 2014. URL https://doi.org/10.1007/s11128-014-0809-8.
Jesse A. Garman. A heuristic review of quantum neural networks. Master's thesis, Imperial College London, Department of Physics, United Kingdom, 2011.
Alp Atıcı and Rocco A. Servedio. Quantum algorithms for learning and testing juntas. Quantum Information Processing, 6(5):323-348, Sep 2007. URL https://doi.org/10.1007/s11128-007-0061-6.
Andrew W. Cross, Graeme Smith, and John A. Smolin. Quantum learning robust against noise. Phys. Rev. A, 92:012327, Jul 2015. URL https://link.aps.org/doi/10.1103/PhysRevA.92.012327.
Ethan Bernstein and Umesh Vazirani. Quantum complexity theory. SIAM Journal on Computing, 26(5):1411-1473, 1997. URL https://doi.org/10.1137/S0097539796300921.
Srinivasan Arunachalam and Ronald de Wolf. Optimal quantum sample complexity of learning algorithms, 2016. arXiv:1607.00932.
Dmitry Gavinsky. Quantum predictive learning and communication complexity with single input. Quantum Info. Comput., 12(7-8):575-588, July 2012. URL http://dl.acm.org/citation.cfm?id=2231016.2231019.
Ziv Bar-Yossef, T. S. Jayram, and Iordanis Kerenidis. Exponential separation of quantum and classical one-way communication complexity. SIAM Journal on Computing, 38(1):366-384, Jan 2008. URL https://doi.org/10.1137/060651835.
Rocco A. Servedio and Steven J. Gortler. Equivalences and separations between quantum and classical learnability. SIAM Journal on Computing, 33(5):1067-1092, Jan 2004. URL https://doi.org/10.1137/s0097539704412910.
Robin Kothari. An optimal quantum algorithm for the oracle identification problem. CoRR, abs/1311.7685, 2013. URL http://arxiv.org/abs/1311.7685.
Robert Beals, Harry Buhrman, Richard Cleve, Michele Mosca, and Ronald de Wolf. Quantum lower bounds by polynomials. J. ACM, 48(4):778-797, July 2001. URL http://doi.acm.org/10.1145/502090.502097.
Michael Kearns and Leslie Valiant. Cryptographic limitations on learning boolean formulae and finite automata. J. ACM, 41(1):67-95, January 1994. URL http://doi.acm.org/10.1145/174644.174647.
Dan Ventura and Tony Martinez. Quantum associative memory. Information Sciences, 124(1-4):273-296, 2000. URL http://www.sciencedirect.com/science/article/pii/S0020025599001012.
C. A. Trugenberger. Probabilistic quantum memories. Physical Review Letters, 87(6), Jul 2001. URL https://doi.org/10.1103/physrevlett.87.067901.
T. Brun, H. Klauck, A. Nayak, M. Rötteler, and Ch. Zalka. Comment on "Probabilistic quantum memories". Physical Review Letters, 91(20), Nov 2003. URL https://doi.org/10.1103/physrevlett.91.209801.
Carlo A. Trugenberger. Trugenberger replies. Physical Review Letters, 91(20), Nov 2003. URL https://doi.org/10.1103/physrevlett.91.209802.
Maria Schuld, Ilya Sinayskiy, and Francesco Petruccione. Quantum computing for pattern classification. In Trends in Artificial Intelligence, LNAI 8862, pages 208-220. Springer, 2014. arXiv:1412.3646.
G. G. Rigatos and S. G. Tzafestas. Quantum learning for neural associative memories. Fuzzy Sets and Systems, 157(13):1797-1813, 2006. URL http://www.sciencedirect.com/science/article/pii/S0165011406000923.
G. G. Rigatos and S. G. Tzafestas. Neurodynamics and attractors in quantum associative memories. Integr. Comput.-Aided Eng., 14(3):225-242, August 2007. URL http://dl.acm.org/citation.cfm?id=1367089.1367091.
Rodion Neigovzen, Jorge L. Neves, Rudolf Sollacher, and Steffen J. Glaser. Quantum pattern recognition with liquid-state nuclear magnetic resonance. Phys. Rev. A, 79:042321, Apr 2009. URL https://link.aps.org/doi/10.1103/PhysRevA.79.042321.
Hadayat Seddiqi and Travis S. Humble. Adiabatic quantum optimization for associative memory recall. Front. Phys., 2:79, 2014. arXiv:1407.1904.
Siddhartha Santra, Omar Shehab, and Radhakrishnan Balu. Exponential capacity of associative memories under quantum annealing recall, 2016. arXiv:1602.
Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, and Lav R. Varshney. Noise facilitation in associative memories of exponential capacity, 2014. arXiv:1403.3305.
Scott Aaronson. Read the fine print. Nat. Phys., 11(4):291-293, Apr 2015. URL http://dx.doi.org/10.1038/nphys3272.
Itay Hen, Joshua Job, Tameem Albash, Troels F. Rønnow, Matthias Troyer, and Daniel A. Lidar. Probing for quantum speedup in spin-glass problems with planted solutions. Phys. Rev. A, 92:042325, Oct 2015. URL https://link.aps.org/doi/10.1103/PhysRevA.92.042325.
Hartmut Neven, Vasil S. Denchev, Geordie Rose, and William G. Macready. Training a large scale classifier with the quantum adiabatic algorithm, 2009. arXiv:0912.0779.
Zhengbing Bian, Fabian Chudak, William G. Macready, and Geordie Rose. The Ising model: teaching an old problem new tricks, 2010.
Hartmut Neven, Vasil S. Denchev, Geordie Rose, and William G. Macready. Training a binary classifier with the quantum adiabatic algorithm, 2008. arXiv:0811.0416.
Hartmut Neven, Vasil S. Denchev, Marshall Drew-Brook, Jiayong Zhang, William G. Macready, and Geordie Rose. Binary classification using hardware implementation of quantum annealing. NIPS 2009 demonstration, 2009.
Qboost: Large scale classifier training with adiabatic quantum optimization.
H Neven, V S Denchev, G Rose, W G Macready, Proceedings of the Asian Conference on Machine Learning. Steven C. H. Hoi and Wray Buntinethe Asian Conference on Machine LearningSingapore25Singapore Management UniversityH. Neven, V.S. Denchev, G. Rose, and W.G. Macready. Qboost: Large scale classifier training with adiabatic quantum optimization. In Steven C. H. Hoi and Wray Buntine, editors, Proceedings of the Asian Conference on Machine Learning, volume 25 of Proceedings of Machine Learning Research, pages 333-348, Singapore Management University, Singapore, 04-06 Nov 2012. PMLR. URL http: //proceedings.mlr.press/v25/neven12.html. Robust classification with adiabatic quantum optimization. S Vasil, Nan Denchev, S V N Ding, Hartmut Vishwanathan, Neven, arXiv:1205.1148Vasil S. Denchev, Nan Ding, S. V. N. Vishwanathan, and Hartmut Neven. Robust classification with adiabatic quantum optimization, 2012, arXiv:1205.1148. S Vasil, Nan Denchev, Shin Ding, S V N Matsushima, Hartmut Vishwanathan, Neven, arXiv:1504.01446Totally corrective boosting with cardinality penalization. Vasil S. Denchev, Nan Ding, Shin Matsushima, S. V. N. Vishwanathan, and Hartmut Neven. Totally corrective boosting with cardinality penalization, 2015, arXiv:1504.01446. Construction of non-convex polynomial loss functions for training a binary classifier with quantum annealing. Ryan Babbush, Nan Vasil Denchev, Sergei Ding, Hartmut Isakov, Neven, 10.1007/s11128-012-0506-4arXiv:1406.42031573-1332Quantum Information Processing. Kristen L. Pudenz and Daniel A. Lidar12Quantum adiabatic machine learningRyan Babbush, Vasil Denchev, Nan Ding, Sergei Isakov, and Hartmut Neven. Construction of non-convex polynomial loss functions for training a binary classifier with quantum annealing, 2014, arXiv:1406.4203. Kristen L. Pudenz and Daniel A. Lidar. Quantum adiabatic machine learning. Quantum Information Processing, 12(5):2027-2070, 2013. ISSN 1573-1332. URL http://dx.doi.org/10.1007/ s11128-012-0506-4. 
Bayesian network structure learning using quantum annealing. B O&apos;gorman, R Babbush, A Perdomo-Ortiz, A Aspuru-Guzik, V Smelyanskiy, 10.1140/epjst/e2015-02349-91951-6401The European Physical Journal Special Topics. 2241B. O'Gorman, R. Babbush, A. Perdomo-Ortiz, A. Aspuru-Guzik, and V. Smelyanskiy. Bayesian network structure learning using quantum annealing. The European Physical Journal Special Topics, 224(1): 163-188, 2015. ISSN 1951-6401. URL http://dx.doi.org/10.1140/epjst/e2015-02349-9. Application of quantum annealing to training of deep neural networks. H Steven, Maxwell P Adachi, Henderson, arXiv:1510.06356Steven H. Adachi and Maxwell P. Henderson. Application of quantum annealing to training of deep neural networks, 2015, arXiv:1510.06356. Quantum boltzmann machine. H Mohammad, Evgeny Amin, Jason Andriyash, Bohdan Rolfe, Roger Kulchytskyy, Melko, arXiv:1601.02036Mohammad H. Amin, Evgeny Andriyash, Jason Rolfe, Bohdan Kulchytskyy, and Roger Melko. Quantum boltzmann machine, 2016, arXiv:1601.02036. . M Lukas, Wolfgang Sieberer, Lechner, arXiv:1708.02533Programmable superpositions of ising configurations. Lukas M. Sieberer and Wolfgang Lechner. Programmable superpositions of ising configurations, 2017, arXiv:1708.02533. A quantum annealing architecture with all-to-all connectivity from local interactions. Wolfgang Lechner, Philipp Hauke, Peter Zoller, Science Advances. 19Wolfgang Lechner, Philipp Hauke, and Peter Zoller. A quantum annealing architec- ture with all-to-all connectivity from local interactions. Science Advances, 1(9), 2015, http://advances.sciencemag.org/content/1/9/e1500838.full.pdf. URL http://advances.sciencemag. org/content/1/9/e1500838. Quantum enhanced inference in markov logic networks. Peter Wittek, Christian Gogolin, 10.1038/srep45672Scientific Reports. 745672Peter Wittek and Christian Gogolin. Quantum enhanced inference in markov logic networks. Scientific Reports, 7:45672, apr 2017. URL https://doi.org/10.1038/srep45672. 
Markov logic networks. Matthew Richardson, Pedro Domingos, 10.1007/s10994-006-5833-1Machine Learning. 62Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62(1-2):107-136, jan 2006. URL https://doi.org/10.1007/s10994-006-5833-1. Quantum machine learning with small-scale devices: Implementing a distance-based classifier with a quantum interference circuit. Maria Schuld, Mark Fingerhuth, Francesco Petruccione, arXiv:1703.10793Maria Schuld, Mark Fingerhuth, and Francesco Petruccione. Quantum machine learning with small-scale de- vices: Implementing a distance-based classifier with a quantum interference circuit, 2017, arXiv:1703.10793. Quantum optimization for training support vector machines. Davide Anguita, Sandro Ridella, Fabio Rivieccio, Rodolfo Zunino, 0893-6080Advances in Neural Networks Research: {IJCNN} '03. 16Davide Anguita, Sandro Ridella, Fabio Rivieccio, and Rodolfo Zunino. Quantum optimization for training support vector machines. Neural Networks, 16(5?6):763 -770, 2003. ISSN 0893-6080. URL http: //www.sciencedirect.com/science/article/pii/S089360800300087X. Advances in Neural Networks Research: {IJCNN} '03. A Quantum Algorithm for Finding the Minimum. Christoph Durr, Peter Hoyer, quant- ph/9607014Christoph Durr and Peter Hoyer. A Quantum Algorithm for Finding the Minimum, January 1999, quant- ph/9607014. URL http://arxiv.org/abs/quant-ph/9607014. Quantum speed-up for unsupervised learning. Esma Aïmeur, Gilles Brassard, Sébastien Gambs, 10.1007/s10994-012-5316-51573-0565Machine Learning. 90Esma Aïmeur, Gilles Brassard, and Sébastien Gambs. Quantum speed-up for unsupervised learning. Machine Learning, 90(2):261-287, 2013. ISSN 1573-0565. URL http://dx.doi.org/10.1007/s10994-012-5316-5. Quantum algorithm for association rules mining. Chao-Hua Yu, Fei Gao, Qing-Le Wang, Qiao-Yan Wen, https:/link.aps.org/doi/10.1103/PhysRevA.94.042311Phys. Rev. A. 9442311Chao-Hua Yu, Fei Gao, Qing-Le Wang, and Qiao-Yan Wen. 
Quantum algorithm for association rules mining. Phys. Rev. A, 94:042311, Oct 2016. URL https://link.aps.org/doi/10.1103/PhysRevA.94.042311. Nathan Wiebe, Ashish Kapoor, Krysta M Svore, arXiv:1602.04799Quantum perceptron models. Nathan Wiebe, Ashish Kapoor, and Krysta M Svore. Quantum perceptron models, 2016, arXiv:1602.04799. Pattern recognition on a quantum computer. Ralf Schützhold, https:/link.aps.org/doi/10.1103/PhysRevA.67.062311Phys. Rev. A. 6762311Ralf Schützhold. Pattern recognition on a quantum computer. Phys. Rev. A, 67:062311, Jun 2003. URL https://link.aps.org/doi/10.1103/PhysRevA.67.062311. Quantum principal component analysis. Nathan Wiebe, Christopher Granade, 10.1038/nphys3029arXiv:1512.031451745-2473Nat Phys. Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost109Can small quantum systems learn?Nathan Wiebe and Christopher Granade. Can small quantum systems learn?, 2015, arXiv:1512.03145. Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum principal component analysis. Nat Phys, 10(9):631-633, Sep 2014. ISSN 1745-2473. URL http://dx.doi.org/10.1038/nphys3029. Letter. Quantum fingerprinting. Harry Buhrman, Richard Cleve, John Watrous, Ronald De Wolf, https:/link.aps.org/doi/10.1103/PhysRevLett.87.167902Phys. Rev. Lett. 87167902Harry Buhrman, Richard Cleve, John Watrous, and Ronald de Wolf. Quantum fingerprinting. Phys. Rev. Lett., 87:167902, Sep 2001. URL https://link.aps.org/doi/10.1103/PhysRevLett.87.167902. Hamiltonian simulation with nearly optimal dependence on all parameters. D W Berry, A M Childs, R Kothari, IEEE 56th Annual Symposium on Foundations of Computer Science. D. W. Berry, A. M. Childs, and R. Kothari. Hamiltonian simulation with nearly optimal dependence on all parameters. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pages 792-809, Oct 2015. Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Quantum random access memory. 
B D Clader, B C Jacobs, C R Sprouse, https:/link.aps.org/doi/10.1103/PhysRevLett.100.160501Phys. Rev. Lett. 110160501Phys. Rev. Lett.B. D. Clader, B. C. Jacobs, and C. R. Sprouse. Preconditioned quantum linear system algorithm. Phys. Rev. Lett., 110:250504, Jun 2013. URL https://link.aps.org/doi/10.1103/PhysRevLett.110.250504. Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Quantum random access memory. Phys. Rev. Lett., 100:160501, Apr 2008. URL https://link.aps.org/doi/10.1103/PhysRevLett.100.160501. Quantum machine learning over infinite dimensions. Hoi-Kwan, Raphael Lau, George Pooser, Christian Siopsis, Weedbrook, https:/link.aps.org/doi/10.1103/PhysRevLett.118.080501Phys. Rev. Lett. 11880501Hoi-Kwan Lau, Raphael Pooser, George Siopsis, and Christian Weedbrook. Quantum machine learning over infinite dimensions. Phys. Rev. Lett., 118:080501, Feb 2017. URL https://link.aps.org/doi/10.1103/ PhysRevLett.118.080501. Quantum algorithm for data fitting. Nathan Wiebe, Daniel Braun, Seth Lloyd, https:/link.aps.org/doi/10.1103/PhysRevLett.109.050505Phys. Rev. Lett. 10950505Nathan Wiebe, Daniel Braun, and Seth Lloyd. Quantum algorithm for data fitting. Phys. Rev. Lett., 109: 050505, Aug 2012. URL https://link.aps.org/doi/10.1103/PhysRevLett.109.050505. New quantum algorithm for linear regression. Guoming Wang, arXiv:1402.0660Guoming Wang. New quantum algorithm for linear regression, 2014, arXiv:1402.0660. Hamiltonian simulation by qubitization. Hao Guang, Isaac L Low, Chuang, arXiv:1610.06546Guang Hao Low and Isaac L. Chuang. Hamiltonian simulation by qubitization, 2016, arXiv:1610.06546. Prediction by linear regression on a quantum computer. Maria Schuld, Ilya Sinayskiy, Francesco Petruccione, https:/link.aps.org/doi/10.1103/PhysRevA.94.022342Phys. Rev. A. 9422342Maria Schuld, Ilya Sinayskiy, and Francesco Petruccione. Prediction by linear regression on a quantum computer. Phys. Rev. A, 94:022342, Aug 2016. URL https://link.aps.org/doi/10.1103/PhysRevA. 
94.022342. Quantum algorithms for supervised and unsupervised machine learning. Seth Lloyd, Masoud Mohseni, Patrick Rebentrost, ; Rebentrost, arXiv:411Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum algorithms for supervised and unsupervised machine learning, 2013, arXiv:(Rebentrost et al., 2014)1307.0411. Quantum algorithms for nearest-neighbor methods for supervised and unsupervised learning. Nathan Wiebe, Ashish Kapoor, Krysta M Svore, 1533-7146Quantum Info. Comput. 153-4Nathan Wiebe, Ashish Kapoor, and Krysta M. Svore. Quantum algorithms for nearest-neighbor methods for supervised and unsupervised learning. Quantum Info. Comput., 15(3-4):316-356, March 2015. ISSN 1533-7146. URL http://dl.acm.org/citation.cfm?id=2871393.2871400. Quantum support vector machine for big data classification. Patrick Rebentrost, Masoud Mohseni, Seth Lloyd, https:/link.aps.org/doi/10.1103/PhysRevLett.113.130503Phys. Rev. Lett. 113130503Patrick Rebentrost, Masoud Mohseni, and Seth Lloyd. Quantum support vector machine for big data classification. Phys. Rev. Lett., 113:130503, Sep 2014. URL https://link.aps.org/doi/10.1103/ PhysRevLett.113.130503. Quantum assisted gaussian process regression. Zhikuan Zhao, Jack K Fitzsimons, Joseph F Fitzsimons, arXiv:1512.03929Zhikuan Zhao, Jack K. Fitzsimons, and Joseph F. Fitzsimons. Quantum assisted gaussian process regression, 2015, arXiv:1512.03929. Quantum algorithms for topological and geometric analysis of data. Seth Lloyd, Silvano Garnerone, Paolo Zanardi, 10.1038/ncomms10138Nature Communications. 710138Seth Lloyd, Silvano Garnerone, and Paolo Zanardi. Quantum algorithms for topological and geometric analysis of data. Nature Communications, 7:10138, jan 2016. URL https://doi.org/10.1038/ncomms10138. Quantum gradient descent and newton's method for constrained polynomial optimization. 
Patrick Rebentrost, Maria Schuld, Leonard Wossnig, Francesco Petruccione, Seth Lloyd, arXiv:1612.01789Patrick Rebentrost, Maria Schuld, Leonard Wossnig, Francesco Petruccione, and Seth Lloyd. Quantum gradient descent and newton's method for constrained polynomial optimization, 2016b, arXiv:1612.01789. Quantum gradient descent for linear systems and least squares. Iordanis Kerenidis, Anupam Prakash, arXiv:1704.04992Iordanis Kerenidis and Anupam Prakash. Quantum gradient descent for linear systems and least squares, 2017, arXiv:1704.04992. The epoch-greedy algorithm for multi-armed bandits with side information. John Langford, Tong Zhang, Advances in Neural Information Processing Systems. J. C. Platt, D. Koller, Y. Singer, and S. T. RoweisCurran Associates, Inc20John Langford and Tong Zhang. The epoch-greedy algorithm for multi-armed bandits with side informa- tion. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 817-824. Curran Associates, Inc., 2008. URL http://papers.nips.cc/ paper/3178-the-epoch-greedy-algorithm-for-multi-armed-bandits-with-side-information.pdf. Projective simulation applied to the grid-world and the mountain-car problem. Alexey A Melnikov, Adi Makmal, Hans J Briegel, arXiv:1405.5459Alexey A. Melnikov, Adi Makmal, and Hans J. Briegel. Projective simulation applied to the grid-world and the mountain-car problem, 2014, arXiv:1405.5459. Projective simulation for classical learning agents: A comprehensive investigation. Julian Mautner, Adi Makmal, Daniel Manzano, Markus Tiersch, Hans J Briegel, 10.1007/s00354-015-0102-01882-7055New Generation Computing. 331Julian Mautner, Adi Makmal, Daniel Manzano, Markus Tiersch, and Hans J. Briegel. Projective simulation for classical learning agents: A comprehensive investigation. New Generation Computing, 33(1):69-114, Jan 2015. ISSN 1882-7055. URL http://dx.doi.org/10.1007/s00354-015-0102-0. Projective simulation with generalization. 
CoRR, abs/1504.02247. Alexey A Melnikov, Adi Makmal, Vedran Dunjko, Hans-J Briegel, Alexey A. Melnikov, Adi Makmal, Vedran Dunjko, and Hans-J. Briegel. Projective simulation with generalization. CoRR, abs/1504.02247, 2015. URL http://arxiv.org/abs/1504.02247. Meta-learning within projective simulation. A Makmal, A A Melnikov, V Dunjko, H J Briegel, 2169-3536IEEE Access. 4A. Makmal, A. A. Melnikov, V. Dunjko, and H. J. Briegel. Meta-learning within projective simulation. IEEE Access, 4:2110-2122, 2016. ISSN 2169-3536. Robotic playing for hierarchical complex skill learning. S Hangl, E Ugur, S Szedmak, J Piater, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). S. Hangl, E. Ugur, S. Szedmak, and J. Piater. Robotic playing for hierarchical complex skill learning. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2799-2804, Oct 2016. Quantum speedup for active learning agents. Giuseppe Davide Paparo, Vedran Dunjko, Adi Makmal, Miguel Angel Martin-Delgado, Hans J Briegel, https:/link.aps.org/doi/10.1103/PhysRevX.4.031002Phys. Rev. X. 431002Giuseppe Davide Paparo, Vedran Dunjko, Adi Makmal, Miguel Angel Martin-Delgado, and Hans J. Briegel. Quantum speedup for active learning agents. Phys. Rev. X, 4:031002, Jul 2014. URL https: //link.aps.org/doi/10.1103/PhysRevX.4.031002. Quantum speed-up of markov chain based algorithms. M Szegedy, 45th Annual IEEE Symposium on Foundations of Computer Science. M. Szegedy. Quantum speed-up of markov chain based algorithms. In 45th Annual IEEE Symposium on Foundations of Computer Science, pages 32-41, Oct 2004. Some inequalities for reversible markov chains. David J Aldous, The Journal of the London Mathematical Society, Second Series. 25David J. Aldous. Some inequalities for reversible markov chains. The Journal of the London Mathematical Society, Second Series, 25:564-576, 1982. Search via quantum walk. 
Frédéric Magniez, Ashwin Nayak, Jérémie Roland, Miklos Santha, 10.1137/090745854SIAM J. Comput. 401Frédéric Magniez, Ashwin Nayak, Jérémie Roland, and Miklos Santha. Search via quantum walk. SIAM J. Comput., 40(1):142-164, 2011. URL https://doi.org/10.1137/090745854. Quantum mixing of markov chains for special distributions. V Dunjko, H J Briegel, New Journal of Physics. 17773004V. Dunjko and H. J. Briegel. Quantum mixing of markov chains for special distributions. New Journal of Physics, 17(7):073004, 2015a. URL http://stacks.iop.org/1367-2630/17/i=7/a=073004. Sequential quantum mixing for slowly evolving sequences of markov chains. Vedran Dunjko, Hans J Briegel, arXiv:1503.01334Vedran Dunjko and Hans J. Briegel. Sequential quantum mixing for slowly evolving sequences of markov chains, 2015b, arXiv:1503.01334. Quantum Reinforcement Learning. Daoyi Dong, Chunlin Chen, Zonghai Chen, 10.1007/11539117_97978-3-540-31858-3SpringerBerlin Heidelberg; Berlin, HeidelbergDaoyi Dong, Chunlin Chen, and Zonghai Chen. Quantum Reinforcement Learning, pages 686-689. Springer Berlin Heidelberg, Berlin, Heidelberg, 2005. ISBN 978-3-540-31858-3. URL http://dx.doi.org/10.1007/ 11539117_97. Quantum-enhanced deliberation of learning agents using trapped ions. V Dunjko, H J Friis, Briegel, New Journal of Physics. 17223006V Dunjko, N Friis, and H J Briegel. Quantum-enhanced deliberation of learning agents using trapped ions. New Journal of Physics, 17(2):023006, 2015a. URL http://stacks.iop.org/1367-2630/17/i=2/ a=023006. Reinforcement learning using quantum boltzmann machines. Daniel Crawford, Anna Levit, Navid Ghadermarzy, Jaspreet S Oberoi, Pooya Ronagh, arXiv:1612.05695Daniel Crawford, Anna Levit, Navid Ghadermarzy, Jaspreet S. Oberoi, and Pooya Ronagh. Reinforcement learning using quantum boltzmann machines, 2016, arXiv:1612.05695. Basic protocols in quantum reinforcement learning with superconducting circuits. Lucas Lamata, 10.1038/s41598-017-01711-62045-2322Scientific Reports. 
711609Lucas Lamata. Basic protocols in quantum reinforcement learning with superconducting circuits. Scientific Reports, 7(1):1609, 2017. ISSN 2045-2322. URL http://dx.doi.org/10.1038/s41598-017-01711-6. Quantum-enhanced machine learning. Vedran Dunjko, Jacob M Taylor, Hans J Briegel, https:/link.aps.org/doi/10.1103/PhysRevLett.117.130501Phys. Rev. Lett. 117130501Vedran Dunjko, Jacob M. Taylor, and Hans J. Briegel. Quantum-enhanced machine learning. Phys. Rev. Lett., 117:130501, Sep 2016. URL https://link.aps.org/doi/10.1103/PhysRevLett.117.130501. Framework for learning agents in quantum environments. Vedran Dunjko, Jacob M Taylor, Hans J Briegel, arXiv:1507.08482Vedran Dunjko, Jacob M. Taylor, and Hans J. Briegel. Framework for learning agents in quantum environments, 2015b, arXiv:1507.08482. ISBN 0262122960, 9780262122962. Kyriakos N. Sgarbas. The road to quantum artificial intelligence. John E Laird, arXiv:0705.3360Current Trends in Informatics. The MIT PressThe Soar Cognitive ArchitectureJohn E. Laird. The Soar Cognitive Architecture. The MIT Press, 2012. ISBN 0262122960, 9780262122962. Kyriakos N. Sgarbas. The road to quantum artificial intelligence. Current Trends in Informatics, pages 469-477, 2007, arXiv:0705.3360. Principles of quantum artificial intelligence. Andrzej Wichert, 978-9814566742World ScientificHackensack New JerseyAndrzej Wichert. Principles of quantum artificial intelligence. World Scientific, Hackensack New Jersey, 2014. ISBN 978-9814566742. Can artificial intelligence benefit from quantum computing?. Vicente Moret-Bonillo, 10.1007/s13748-014-0059-0s13748-014-0059-0Progress in Artificial Intelligence. 32Vicente Moret-Bonillo. Can artificial intelligence benefit from quantum computing? Progress in Artificial Intelligence, 3(2):89-105, Mar 2015. ISSN 2192-6360. URL https://doi.org/10.1007/ s13748-014-0059-0. Quantum inference on bayesian networks. 
Guang Hao Low, Theodore J Yoder, Isaac L Chuang, https:/link.aps.org/doi/10.1103/PhysRevA.89.062315Phys. Rev. A. 8962315Guang Hao Low, Theodore J. Yoder, and Isaac L. Chuang. Quantum inference on bayesian networks. Phys. Rev. A, 89:062315, Jun 2014. URL https://link.aps.org/doi/10.1103/PhysRevA.89.062315. A quantum algorithm for the hamiltonian NAND tree. Edward Farhi, Jeffrey Goldstone, Sam Gutmann, 10.4086/toc.2008.v004a008Theory of Computing. 41Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. A quantum algorithm for the hamiltonian NAND tree. Theory of Computing, 4(1):169-190, 2008. URL https://doi.org/10.4086/toc.2008.v004a008. Quantum games and quantum strategies. Jens Eisert, Martin Wilkens, Maciej Lewenstein, https:/link.aps.org/doi/10.1103/PhysRevLett.83.3077Phys. Rev. Lett. 83Jens Eisert, Martin Wilkens, and Maciej Lewenstein. Quantum games and quantum strategies. Phys. Rev. Lett., 83:3077-3080, Oct 1999. URL https://link.aps.org/doi/10.1103/PhysRevLett.83.3077. Causal boxes: Quantum informationprocessing systems closed under composition. C Portmann, C Matt, U Maurer, R Renner, B Tackmann, 0018-9448IEEE Transactions on Information Theory. 635C. Portmann, C. Matt, U. Maurer, R. Renner, and B. Tackmann. Causal boxes: Quantum information- processing systems closed under composition. IEEE Transactions on Information Theory, 63(5):3277-3305, May 2017. ISSN 0018-9448. Turing Resreach Symposuim. Elham Kashefi, Elham Kashefi. Turing Resreach Symposuim, May 2012. Link: https://www.youtube.com/watch?v=3y7JCjaNZLY, 2013. Coherent controlization using superconducting qubits. Nicolai Friis, Alexey A Melnikov, Gerhard Kirchmair, Hans J Briegel, 10.1038/srep18036Sci. Rep. 5Nicolai Friis, Alexey A. Melnikov, Gerhard Kirchmair, and Hans J. Briegel. Coherent controlization using superconducting qubits. Sci. Rep., 5, Dec 2015. URL http://dx.doi.org/10.1038/srep18036. Article. Experimental realization of a quantum support vector machine. 
Zhaokai Li, Xiaomei Liu, Nanyang Xu, Jiangfeng Du, https:/link.aps.org/doi/10.1103/PhysRevLett.114.140504Phys. Rev. Lett. 114140504Zhaokai Li, Xiaomei Liu, Nanyang Xu, and Jiangfeng Du. Experimental realization of a quantum support vector machine. Phys. Rev. Lett., 114:140504, Apr 2015b. URL https://link.aps.org/doi/10.1103/ PhysRevLett.114.140504. Entanglement-based machine learning on a quantum computer. X.-D Cai, D Wu, Z.-E Su, M.-C Chen, X.-L Wang, Li Li, N.-L Liu, C.-Y. Lu, J.-W Pan, https:/link.aps.org/doi/10.1103/PhysRevLett.114.110504Phys. Rev. Lett. 114110504X.-D. Cai, D. Wu, Z.-E. Su, M.-C. Chen, X.-L. Wang, Li Li, N.-L. Liu, C.-Y. Lu, and J.-W. Pan. Entanglement-based machine learning on a quantum computer. Phys. Rev. Lett., 114:110504, Mar 2015. URL https://link.aps.org/doi/10.1103/PhysRevLett.114.110504. Demonstration of quantum advantage in machine learning. Diego Ristè, Marcus P Da Silva, Colm A Ryan, Andrew W Cross, Antonio D Córcoles, John A Smolin, Jay M Gambetta, Jerry M Chow, Blake R Johnson, 10.1038/s41534-017-0017-32056-6387npj Quantum Information. 316Diego Ristè, Marcus P. da Silva, Colm A. Ryan, Andrew W. Cross, Antonio D. Córcoles, John A. Smolin, Jay M. Gambetta, Jerry M. Chow, and Blake R. Johnson. Demonstration of quantum advantage in machine learning. npj Quantum Information, 3(1):16, 2017. ISSN 2056-6387. URL https://doi.org/10. 1038/s41534-017-0017-3.
{'fraction_non_alphanumeric': 0.04838296231407243, 'fraction_numerical': 0.03352658878494257, 'mean_word_length': 4.783475783475783, 'pattern_counts': {'":': 1, '<': 6, '<?xml version=': 0, '>': 4, 'https://': 103, 'lorem ipsum': 0, 'www.': 18, 'xml': 0}, 'pii_count': 5, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 52, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'Quantum information technologies, on the one side, and intelligent learning systems, on the other, are both emergent technologies that will likely have a transforming impact on our society in the future. The respective underlying fields of basic researchquantum information (QI) versus machine learning and artificial intelligence (AI) -have their own specific questions and challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question to what extent these fields can indeed learn and benefit from each other. QML explores the interaction between quantum computing and machine learning, investigating how results and techniques from one field can be used to solve the problems of the other. In recent time, we have witnessed significant breakthroughs in both directions of influence. For instance, quantum computing is finding a vital application in providing speed-ups for machine learning problems, critical in our "big data" world. Conversely, machine learning already permeates many cutting-edge technologies, and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical machine learning optimization used in quantum experiments, quantum enhancements have also been (theoretically) demonstrated for interactive learning tasks, highlighting the potential of quantum-enhanced learning agents. Finally, works exploring the use of artificial intelligence for the very design of quantum experiments, and for performing parts of genuine research autonomously, have reported their first successes. Beyond the topics of mutual enhancement -exploring what ML/AI can do for quantum physics, and vice versa -researchers have also broached the fundamental issue of quantum generalizations of learning and AI concepts. 
This deals with questions of the very meaning of learning and intelligence in a world that is fully described by quantum mechanics. In this review, we describe the main ideas, recent developments, and progress in a broad spectrum of research investigating machine learning and artificial intelligence in the quantum domain.', 'arxivid': '1709.02779', 'author': ['Vedran Dunjko [email protected] ', 'Hans J Briegel [email protected] ', '\nInstitute for Theoretical Physics\nMax Planck Institute of Quantum Optics\nInstitute for Theoretical Physics\nUniversity of Innsbruck\n6020, 85748Innsbruck, GarchingAustria, Germany\n', '\nDepartment of Philosophy\nUniversity of Innsbruck\n6020InnsbruckAustria\n', '\nUniversity of Konstanz\n78457KonstanzGermany\n'], 'authoraffiliation': ['Institute for Theoretical Physics\nMax Planck Institute of Quantum Optics\nInstitute for Theoretical Physics\nUniversity of Innsbruck\n6020, 85748Innsbruck, GarchingAustria, Germany', 'Department of Philosophy\nUniversity of Innsbruck\n6020InnsbruckAustria', 'University of Konstanz\n78457KonstanzGermany'], 'corpusid': 3681629, 'doi': '10.1088/1361-6633/aab406', 'github_urls': [], 'n_tokens_mistral': 122540, 'n_tokens_neox': 104162, 'n_words': 65920, 'pdfsha': '0cc6dbfd929bc816d507527993f55f9b4e88615d', 'pdfurls': ['https://arxiv.org/pdf/1709.02779v1.pdf'], 'title': ['Machine learning & artificial intelligence in the quantum domain', 'Machine learning & artificial intelligence in the quantum domain'], 'venue': []}
arxiv
Many-body position operator in lattice fermionic systems with periodic boundary conditions

1 Sep 2009

Balázs Hetényi
Institut für Theoretische Physik, Technische Universität Graz, Petersgasse 16, A-8010 Graz, Austria, and Mathematisches Institut, Fakultät für Mathematik, Informatik und Statistik, Ludwig-Maximilians-Universität, Theresienstrasse 39, 80333 München, Germany

Short title: Periodic lattice position operator

A total position operator X in the position representation is derived for lattice fermionic systems with periodic boundary conditions. The operator is shown to be Hermitian and the generator of translations in momentum space, and its time derivative is shown to correspond to the total current operator in a periodic system. The operator is such that its moments can be calculated up to any order. To demonstrate its utility, finite-size scaling is applied to the Brinkman-Rice transition as well as to metallic and insulating Gutzwiller wavefunctions.

The position operator and its moments give important information about localization in quantum systems. As was shown by Kohn [1], metals and insulators are distinguished by the extent of their localization. Many real systems are periodic, and in many model systems periodic boundary conditions are imposed. In such cases the Hilbert space that forms the domain of operators is restricted, and hence the position operator is ill-defined [2]. The single-particle position operator in the crystal momentum representation was derived by Blount [2] and discussed extensively in the context of band theory. In the crystal momentum representation this operator can be generalized to the many-body case [3]. To calculate the total position in the position representation, Resta [4,5] suggests averaging the quantity $e^{i 2\pi \hat{X}/L}$.
The expectation value of the total position operator is then defined as
$$\langle x \rangle = \frac{L}{2\pi}\,\mathrm{Im}\,\ln \langle \Psi | e^{i 2\pi \hat{x}/L} | \Psi \rangle. \qquad (1)$$
Via first-order perturbation theory, Resta also shows [4] that the time derivative of the polarization operator based on the above definition gives the total current in the limit $L \rightarrow \infty$. This idea has been applied to lattice fermionic systems at half-filling [5], and extended to systems at arbitrary fillings [6]. A related formalism due to Souza et al. [7], based on the cumulant generating function (of which Eq. (1) is a special case), establishes relations between localization and polarization. It is important to note that the position operator in this method is calculated indirectly, by first evaluating the expectation value of $e^{i 2\pi \hat{x}/L}$. Eq. (1) is valid, as can be shown [5], but the calculation of higher moments is not straightforward: the spread functional suggested by Resta and Sorella [5] (based on Eq. (1)) is valid in the thermodynamic limit. Here it is shown that a total position operator for a lattice fermionic system with periodic boundary conditions can be defined as the generator of total momentum shifts. It is also demonstrated that the time derivative of the total position operator gives the current for a system with any number of sites (finite L). The total position operator derived below is such that expectation values of arbitrary powers are readily evaluated, hence an accurate assessment and finite size scaling of localization is enabled (up to any desired order). The utility of the operator is then demonstrated via variational calculations on the Hubbard model [8][9][10] based on the Gutzwiller wavefunction [10,11]. The derivation of the total position operator is closely related to that of the total momentum operator in Ref. [12]. The class of models to which the formalism presented below applies comprises those used for strongly correlated systems, consisting of site-to-site hopping terms and interaction terms.
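As a minimal numerical sketch of Eq. (1) (our own single-particle illustration; the function name and normalization are assumptions, not taken from Refs. [4,5]), the formula recovers the position of a state localized on one site of an L-site ring; for a fully delocalized plane wave the expectation value vanishes and the position becomes ill-defined, which is the localization criterion at work:

```python
import numpy as np

def resta_position(psi, L):
    # <x> = (L / 2*pi) * Im ln <psi| exp(i 2*pi x / L) |psi>, cf. Eq. (1)
    x = np.arange(L)
    z = np.vdot(psi, np.exp(1j * 2 * np.pi * x / L) * psi)
    return L / (2 * np.pi) * np.angle(z)  # Im ln z = arg z

L = 8
psi = np.zeros(L)
psi[3] = 1.0                      # wavefunction fully localized at site 3
print(resta_position(psi, L))     # recovers the position: 3.0
```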
An example of a lattice model is the Hubbard Hamiltonian,
$$H = -t \sum_{\langle i,j \rangle \sigma} (c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{H.c.}) + U \sum_{i} n_{i\uparrow} n_{i\downarrow}, \qquad (2)$$
consisting of L sites. In the following, the total position operator will be derived for the one-dimensional Hubbard model. Generalizations to higher dimensions and other lattice models will be discussed below. The real-space (Wannier state) and reciprocal-space (Bloch state) creation operators are related in the usual way,
$$\tilde{c}_k = \frac{1}{\sqrt{L}} \sum_{j=1}^{L} e^{i 2\pi k x_j / L}\, c_j, \qquad (3)$$
where $x_j$ is the position of site j. In order to define a total position operator we first define a momentum permutation operator as
$$P_{kl} = 1 - (\tilde{c}^{\dagger}_k - \tilde{c}^{\dagger}_l)(\tilde{c}_k - \tilde{c}_l), \qquad (4)$$
where $\tilde{c}^{\dagger}_k$ creates a particle in the Bloch state k. A momentum-space shift operator can be defined as
$$U_n = P_{n-1,n} \cdots P_{12}, \qquad (5)$$
with the property that
$$U_L \tilde{c}_k = \begin{cases} \tilde{c}_{k-1} U_L, & k = 2, \ldots, L \\ \tilde{c}_L U_L, & k = 1. \end{cases} \qquad (6)$$
For systems with spin-$\frac{1}{2}$ particles we can define the compound momentum-space shift operator as
$$U = U_{L\uparrow} U_{L\downarrow}, \qquad (7)$$
with the property
$$U c_{j,\sigma} = e^{i 2\pi x_j / L}\, c_{j,\sigma}\, U, \qquad (8)$$
where $c_{j,\sigma}$ is an annihilation operator for particles at site $x_j$ with spin σ. We define the total position operator X through three conditions. First, we require it to be the generator of total momentum shifts, i.e.
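The defining property (8) can be checked directly in the single-particle sector, where the Bloch states of Eq. (3) are plain discrete Fourier vectors. The sketch below is our own illustration under that restriction (it does not build the full many-body product of permutation operators): a cyclic shift of the momentum label is, in the Wannier basis, a diagonal matrix of phases $e^{i2\pi x_j/L}$:

```python
import numpy as np

L = 6
j = np.arange(L)
# Single-particle Bloch states |k>: position amplitudes <j|k> = exp(-i 2pi k j / L) / sqrt(L)
kets = np.exp(-1j * 2 * np.pi * np.outer(np.arange(L), j) / L) / np.sqrt(L)
# Momentum shift U |k> = |k-1> (cyclically), cf. Eqs. (5)-(6)
U = sum(np.outer(kets[(k - 1) % L], kets[k].conj()) for k in range(L))
# Eq. (8): in the Wannier basis U is diagonal with phases exp(i 2pi x_j / L)
print(np.allclose(U, np.diag(np.exp(1j * 2 * np.pi * j / L))))  # True
```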
$$U = e^{i 2\pi X / L}. \qquad (9)$$
We also require X to be Hermitian,
$$X = X^{\dagger}, \qquad (10)$$
and that the time derivative of X give the total current,
$$e\dot{X} = ie[H, X] = J, \qquad (11)$$
which for the Hubbard model is defined as
$$J = -iet \sum_{\langle i,j \rangle \sigma} (c^{\dagger}_{i\sigma} c_{j\sigma} - c^{\dagger}_{j\sigma} c_{i\sigma}). \qquad (12)$$
In order to derive the explicit form of X we first define
$$g(\alpha) = \sum_{x=0}^{L-1} i\, e^{-i 2\pi x \alpha / L}, \qquad (13)$$
which can be evaluated via the geometric sum formula to give
$$g(\alpha) = i\, \frac{1 - e^{-i 2\pi \alpha}}{1 - e^{-i 2\pi \alpha / L}}. \qquad (14)$$
We can take the derivative of $g(\alpha)$ at some integer value m of α,
$$g'(m) = \frac{2\pi}{L} \sum_{x=0}^{L-1} x\, e^{-i 2\pi x m / L}. \qquad (15)$$
Inverting the Fourier series, we obtain an expression for the position x, valid for $x = 0, \ldots, L-1$:
$$x = \frac{1}{2\pi} \sum_{m=1}^{L} g'(m)\, e^{i 2\pi x m / L}. \qquad (16)$$
For $m \neq L$,
$$g'(m) = \frac{2\pi}{e^{-i 2\pi m / L} - 1}, \qquad (17)$$
and $g'(L)$ can be evaluated from Eq. (15) using the arithmetic sum formula, giving $g'(L) = \pi(L-1)$. Thus, an overall expression for x reads
$$x = \sum_{m=1}^{L-1} \left( \frac{1}{2} + \frac{e^{i 2\pi x m / L}}{e^{-i 2\pi m / L} - 1} \right). \qquad (18)$$
The right-hand side of Eq. (18) is the sawtooth function $f(x) = x \bmod L$. We propose to take the sawtooth function as the definition of our position operator. Based on Eq. (9), we write the total position operator X for a many-particle system as a power series in the momentum shift operator,
$$X = \sum_{m=1}^{L-1} \left( \frac{1}{2} + \frac{U^m}{e^{-i 2\pi m / L} - 1} \right). \qquad (19)$$
It is to be emphasized that X is a genuine many-body operator (as is that of Resta [4]). Having defined our total position operator, we can now test whether it satisfies the requirements (Eqs. (9), (10), and (11)). Letting X operate on an arbitrary Wannier state ($|\mathbf{x}, \sigma\rangle = c^{\dagger}_{x_1,\sigma_1} \cdots c^{\dagger}_{x_N,\sigma_N} |0\rangle$) gives
$$X |\mathbf{x}, \sigma\rangle = \sum_{m=1}^{L-1} \left( \frac{1}{2} + \frac{e^{i 2\pi m (x_1 + \cdots + x_N) / L}}{e^{-i 2\pi m / L} - 1} \right) |\mathbf{x}, \sigma\rangle = \big( (x_1 + \cdots + x_N) \bmod L \big)\, |\mathbf{x}, \sigma\rangle, \qquad (20)$$
where we have used Eqs. (8) and (18). Since
$$U |\mathbf{x}, \sigma\rangle = e^{i 2\pi (x_1 + \cdots + x_N) / L}\, |\mathbf{x}, \sigma\rangle, \qquad (21)$$
Eq. (9) follows. Hermiticity of X follows from the unitarity of U and from the fact that $U^L = 1$. To demonstrate that the operator X satisfies the condition in Eq.
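Both the Fourier identity (18) and the diagonal action of Eq. (19) are easy to verify numerically in the single-particle sector (our own sketch; here U is represented by its position-basis phases from Eq. (8), so the power series can be summed as plain matrices):

```python
import numpy as np

L = 7
x = np.arange(L)
m = np.arange(1, L)

# Identity (18): the truncated Fourier series reproduces the sawtooth x mod L
for xi in x:
    s = np.sum(0.5 + np.exp(1j * 2 * np.pi * xi * m / L)
               / (np.exp(-1j * 2 * np.pi * m / L) - 1))
    assert abs(s - xi) < 1e-9

# Eq. (19) with the single-particle U = diag(exp(i 2pi x / L)), cf. Eq. (8):
U = np.diag(np.exp(1j * 2 * np.pi * x / L))
X = sum(0.5 * np.eye(L) + np.linalg.matrix_power(U, int(mm))
        / (np.exp(-1j * 2 * np.pi * mm / L) - 1) for mm in m)
print(np.allclose(X, np.diag(x)))  # True: X is Hermitian with sawtooth eigenvalues
```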
(11), we first note that U commutes with the interaction part of the Hamiltonian. This can be shown using Eq. (8). Thus our task consists of evaluating the commutator [T, X], with T denoting the kinetic part of the Hubbard Hamiltonian. We first define an operator
$$Y = \sum_{m=1}^{L} \frac{U^m}{e^{-i 2\pi m / L} - 1}. \qquad (22)$$
The last term in the sum is divergent. However, below we show that this divergence disappears in the commutator [T, Y]. We first evaluate
$$[T, Y] = \sum_{m=1}^{L} \frac{[T, U^m]}{e^{-i 2\pi m / L} - 1}. \qquad (23)$$
We split the kinetic energy in two parts,
$$A = -t \sum_{\langle i,j \rangle \sigma} c^{\dagger}_{i\sigma} c_{j\sigma}, \qquad A^{\dagger} = -t \sum_{\langle i,j \rangle \sigma} c^{\dagger}_{j\sigma} c_{i\sigma}, \qquad (24)$$
so that we can rewrite Eq. (23) as
$$[T, Y] = \sum_{m=1}^{L} \frac{[A, U^m] + [A^{\dagger}, U^m]}{e^{-i 2\pi m / L} - 1}. \qquad (25)$$
Each commutator in Eq. (25) can be evaluated using Eq. (8). We obtain
$$[A, U^m] = (e^{-i 2\pi m / L} - 1)\, U^m A, \qquad [A^{\dagger}, U^m] = (1 - e^{-i 2\pi m / L})\, A^{\dagger} U^m, \qquad (26)$$
giving a new expression for the commutator,
$$[T, Y] = \sum_{m=1}^{L} \left( U^m A - A^{\dagger} U^m \right). \qquad (27)$$
We now substitute the condition in Eq. (9) and obtain
$$[T, Y] = \sum_{m=1}^{L} \left( e^{i 2\pi X m / L} A - A^{\dagger} e^{i 2\pi X m / L} \right). \qquad (28)$$
It is easily seen that this commutator is zero, since X operating on a Wannier state gives an integer and
$$\sum_{m=1}^{L} e^{i 2\pi X m / L} = 0. \qquad (29)$$
On the other hand, using the same reasoning we used to arrive at Eq. (27), it can be shown that
$$[T, X] = \sum_{m=1}^{L-1} \left( U^m A - A^{\dagger} U^m \right), \qquad (30)$$
hence, from Eq. (27) we see that
$$[T, X] = A^{\dagger} - A, \qquad (31)$$
since $U^L = 1$. From Eq. (31) the expression for the current (Eq. (12)) follows straightforwardly. The total position operator X derived above can be generalized to higher dimensions as follows. In higher dimensions the operator becomes a vector operator. The generalization of the above derivation has to be based on a generalized total momentum shift operator consisting of the product of all one-dimensional momentum shift operators in a particular direction.
For example, for a three-dimensional system with dimensions x, y, z, a total momentum shift operator for the x direction (spinless case) would consist of the product of all one-dimensional momentum shift operators,
$$W_{L,x} = \prod_{y,z} U^{(y,z)}_{L,x}, \qquad (32)$$
where $U^{(y,z)}_{L,x}$ denotes the total momentum shift operator in the x direction for a given set of coordinates y, z (Eq. (5)). Such an operator satisfies the commutation relation
$$W_{L,x}\, \tilde{c}_{k_x,k_y,k_z} = \begin{cases} \tilde{c}_{k_x-1,k_y,k_z}\, W_{L,x}, & k_x = 2, \ldots, L; \; k_y, k_z = 1, \ldots, L \\ \tilde{c}_{L,k_y,k_z}\, W_{L,x}, & k_x = 1; \; k_y, k_z = 1, \ldots, L. \end{cases} \qquad (33)$$
The subsequent construction of a total position operator for a three-dimensional system follows the same steps as in the one-dimensional case. The total momentum shift operator for a spin-$\frac{1}{2}$ system can be written as
$$W_i = W_{L,i,\uparrow} W_{L,i,\downarrow}, \qquad (34)$$
where $W_i$ is a vector operator and $i = x, y, z$. A particular component of the total position operator can then be written as
$$R_i = \sum_{m=1}^{L-1} \left( \frac{1}{2} + \frac{W_i^m}{e^{-i 2\pi m / L} - 1} \right). \qquad (35)$$
The commutator of $R_i$ with the Hamiltonian gives the current in the i direction. This is a consequence of the fact that the operator $W_i$ commutes with the hoppings in directions other than i included in the Hubbard Hamiltonian. Extensions of the Hubbard model can also be handled. More complex interaction types (nearest neighbor, etc.) follow the same derivation as above, since the expression for the current does not change in this case. For more complex hoppings the expression for the current is modified to include the new hoppings, but the derivation presented above remains valid. For impurity models [13,14] the strategy for deriving a total position operator is modified slightly. For example, the one-dimensional periodic Anderson model, in which each site contains a set of localized f-orbitals, can be written as
$$H = -t \sum_{\langle i,j \rangle \sigma} (c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{H.c.}) + E_f \sum_{i,l,\sigma} n_f(i,l,\sigma) + \frac{1}{2} \sum_{i} \sum_{(l,\sigma) \neq (l',\sigma')} U(l,l')\, n_f(i,l,\sigma)\, n_f(i,l',\sigma') + H', \qquad (36)$$
with
$$H' = \sum_{i,l,\sigma} \left( V_l\, f^{\dagger}_{i,l,\sigma} c_{i,\sigma} + \mathrm{H.c.} \right). \qquad (37)$$
In Eqs.
(36) and (37), $n_f(i,l,\sigma)$ ($f^{\dagger}_{i,l,\sigma}$) denotes the density (creation operator) of the f-orbital with label l at site i and with spin σ. Each lattice site contains a set of f-orbitals, but there are no inter-site hoppings between the localized f-orbitals on different sites. As a consequence, the current operator is the same as that of the Hubbard model, in spite of the fact that the charge density includes the f-orbital terms [15]. One could construct a total position operator which does not include impurity orbitals and has the same form as the X derived above (only electrons in the conduction band enter the definition). As conduction takes place only on the standard lattice sites, not the ones associated with the f-orbitals, such an approach may in some cases be sufficient to characterize localization phenomena associated with metal-insulator transitions. However, it is also possible to construct a total position operator valid for a system with the periodic Anderson Hamiltonian. To do this one has to consider the f-orbitals as separate lattices and construct a total momentum shift operator for each set of f-orbitals localized on different lattice sites. One can construct an operator
$$V^{(l)}_L = Q^{(l)}_{L-1,L} \cdots Q^{(l)}_{12}, \qquad (38)$$
where
$$Q^{(l)}_{jk} = 1 - (\tilde{f}^{\dagger}_{j,l} - \tilde{f}^{\dagger}_{k,l})(\tilde{f}_{j,l} - \tilde{f}_{k,l}). \qquad (39)$$
Here $\tilde{f}_{k,l}$ denotes the Fourier transform of the annihilation operators of a particular f-orbital,
$$\tilde{f}_{k,l} = \frac{1}{\sqrt{L}} \sum_{j=1}^{L} e^{i 2\pi k x_j / L}\, f_{j,l}. \qquad (40)$$
The operator in Eq. (38) satisfies the property
$$V^{(l)}_L \tilde{f}_{k,l} = \begin{cases} \tilde{f}_{k-1,l}\, V^{(l)}_L, & k = 2, \ldots, L \\ \tilde{f}_{L,l}\, V^{(l)}_L, & k = 1. \end{cases} \qquad (41)$$
Thus a total momentum shift operator can be constructed as
$$Z = U \prod_{l} V^{(l)}, \qquad (42)$$
where
$$V^{(l)} = V^{(l)}_{L,\uparrow} V^{(l)}_{L,\downarrow}. \qquad (43)$$
The total momentum shift operator Z can be used to construct a total position operator
$$X_{PAM} = \sum_{m=1}^{L-1} \left( \frac{1}{2} + \frac{Z^m}{e^{-i 2\pi m / L} - 1} \right). \qquad (44)$$
The operator $X_{PAM}$ includes the positions of electrons in impurity orbitals as well as those in the conduction band.
The proof that $X_{PAM}$ satisfies the three required conditions proceeds as before. Proving that the time derivative of the position operator is equal to the current is simplified by the fact that the operators $V^{(l)}$ commute with the periodic Anderson Hamiltonian. This is another consequence of the fact that there are no hoppings between f-orbitals positioned on different sites. Hence all that needs to be proven is that the commutator corresponding to U gives the current operator corresponding to that of the Hubbard model [15]. This was already shown above. The operator X is well defined in the occupation number representation, and it and its moments can thus be calculated in practical situations. Here we demonstrate the utility of the operator X by calculating the moments and performing finite size scaling for the Gutzwiller approximate solution of the Hubbard model at half-filling. The Gutzwiller wavefunction (GWF) has the form
$$|\Psi\rangle = \exp\Big(-\gamma \sum_i n_{i\uparrow} n_{i\downarrow}\Big) |\Psi_0\rangle, \qquad (45)$$
where $|\Psi_0\rangle$ is a noninteracting wavefunction and γ is a variational parameter which projects out double occupations. Most often $|\Psi_0\rangle$ is the Fermi sea. In this case exact solutions in one [16,17] and infinite [18,19] dimensions are available. At half-filling the former is metallic for finite U, in contradiction with the exact solution [20]. An approximate solution to the GWF due to Gutzwiller (GA) results in the Brinkman-Rice metal-insulator transition [11,21,22]. In finite dimensions the GA is only approximate; in infinite dimensions, however, it corresponds to the exact solution [18,19]. In a one-dimensional system the Brinkman-Rice transition is known to occur at $U_c \approx 10$. If $|\Psi_0\rangle$ is a noninteracting antiferromagnetic wavefunction, the Gutzwiller wavefunction can be made insulating [23]. In the following, to assess the localization accompanying the metal-insulator transition we calculate the quantity
$$\chi_4 = \frac{\langle X^4 \rangle - \langle X^2 \rangle \langle X^2 \rangle}{L^2} \qquad (46)$$
via quantum Monte Carlo methods [24,25]. In Fig.
1, $\chi_4$ is shown as a function of the Hubbard interaction strength for three different system sizes. A transition at $U_c \approx 10$ is clearly visible from the simultaneous drop of all three curves. For large U ($U \geq 11$) the largest (smallest) system shows the smallest (largest) value of the fourth moment, which is the tendency one expects for the insulating state. (The same behaviour was found for the square root of the second-order deviation.) These results coincide with what is known about the Brinkman-Rice transition being a localization transition [22]. In Figs. 2 and 3 a metallic and an insulating wavefunction are compared. For the former, the noninteracting wavefunction (ground state of the U = 0 system) was used in place of $|\Psi_0\rangle$ in Eq. (45). For the insulating wavefunction an antiferromagnetic solution was used, with a magnetization of m = 0.33333. The size dependence of the quantity $\chi_4$ is clearly sensitive to whether the system is metallic or insulating: as the variational parameter γ is increased, $\chi_4$ decreases in both cases, but the size dependence of $\chi_4$ is opposite between the two cases. The metallic state (Fig. 2) shows an increase in delocalization with system size, whereas in the insulating state (Fig. 3) the larger system is more localized. The insets in Figs. 2 and 3 show the value of the fourth-order Binder cumulant [26][27][28], defined as
$$U_4 = 1 - \frac{\langle X^4 \rangle}{3 \langle X^2 \rangle \langle X^2 \rangle}, \qquad (47)$$
a quantity used in the finite size scaling [29] of phase transitions. $U_4$ approaches a value of two-thirds in the case of perfect localization. Again, total order (localization) is approached by both the metallic and insulating wavefunctions, but the size dependence is opposite between the two cases, with the larger system closer to the limiting value of two-thirds for the insulating wavefunction (hence more localized). In this paper a total position operator was derived for lattice models.
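The two-thirds limit of the cumulant (47) is easy to verify on synthetic data (an illustrative sketch of ours, not the Monte Carlo calculation of the paper): a sharply localized X gives $U_4 = 2/3$ exactly, while a Gaussian-distributed (delocalized) X gives $U_4 \approx 0$, since then $\langle X^4\rangle = 3\langle X^2\rangle^2$:

```python
import numpy as np

def binder_u4(x):
    # U4 = 1 - <X^4> / (3 <X^2>^2), cf. Eq. (47)
    return 1.0 - np.mean(x**4) / (3.0 * np.mean(x**2) ** 2)

rng = np.random.default_rng(0)
localized = np.full(10_000, 2.0)            # perfectly localized: X is fixed
gaussian = rng.normal(0.0, 1.0, 1_000_000)  # delocalized, Gaussian-distributed X

print(binder_u4(localized))  # 2/3 exactly
print(binder_u4(gaussian))   # close to 0
```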
The operator satisfies three crucial criteria: it is the generator of total momentum shifts, it is Hermitian, and its time derivative corresponds to the total current operator. The form of the operator is such that the average total position and its moments can be readily calculated. Hence Binder cumulants used in finite size scaling can also be evaluated. The sensitivity of such moments and cumulants was also demonstrated by investigating their size dependence in the Brinkman-Rice transition and in metallic and insulating Gutzwiller wavefunctions.

Part of this work was performed at the Institut für Theoretische Physik at TU-Graz under FWF (Förderung der wissenschaftlichen Forschung) grant number P21240-N16. Part of this work was performed under the HPC-EUROPA2 project (project number 228398). Helpful discussions with H. G. Evertz are gratefully acknowledged.

FIG. 1. $\chi_4$ (defined in Eq. (46)) for the Hubbard model using the Gutzwiller wavefunction evaluated in the Gutzwiller approximation scheme. The Brinkman-Rice transition is known to occur at $U_c \approx 10$.

FIG. 2. Size dependence of $\chi_4$ for a metallic Gutzwiller wavefunction. The inset shows the size dependence of the fourth-order Binder cumulant.

FIG. 3. Size dependence of $\chi_4$ for an insulating Gutzwiller wavefunction. The inset shows the size dependence of the fourth-order Binder cumulant.

[1] W. Kohn, Phys. Rev. 133, A171 (1964).
[2] E. I. Blount, in Solid State Physics: Advances in Research and Applications, Eds. F. Seitz and D. Turnbull, Vol. 13, p. 305 (1962).
[3] N. D. M. Hine and W. M. C. Foulkes, J. Phys.: Condens. Matter 19, 506212 (2007).
[4] R. Resta, Phys. Rev. Lett. 80, 1800 (1998).
[5] R. Resta and S. Sorella, Phys. Rev. Lett. 82, 370 (1999).
[6] A. A. Aligia and G. Ortiz, Phys. Rev. Lett. 82, 2560 (1999).
[7] I. Souza, T. Wilkens, and R. M. Martin, Phys. Rev. B 62, 1666 (2000).
[8] J. Hubbard, Proc. Roy. Soc. A 276, 238 (1963).
[9] J. Kanamori, Prog. Theoret. Phys. 30, 275 (1963).
[10] M. C. Gutzwiller, Phys. Rev. Lett. 10, 159 (1963).
[11] M. C. Gutzwiller, Phys. Rev. 137, A1726 (1965).
[12] F. H. L. Essler, H. Frahm, F. Göhmann, A. Klümper, and V. E. Korepin, The One-Dimensional Hubbard Model, Cambridge University Press (2005).
[13] G. D. Mahan, Many-Particle Physics, 3rd Ed., Kluwer Academic (2000).
[14] M. Imada, A. Fujimori, and Y. Tokura, Rev. Mod. Phys. 70, 1039 (1998).
[15] D. Baeriswyl, C. Gros, and T. M. Rice, Phys. Rev. B 35, 8391 (1987).
[16] W. Metzner and D. Vollhardt, Phys. Rev. Lett. 59, 121 (1987).
[17] W. Metzner and D. Vollhardt, Phys. Rev. B 37, 7382 (1988).
[18] W. Metzner and D. Vollhardt, Phys. Rev. Lett. 62, 324 (1989).
[19] W. Metzner and D. Vollhardt, Helv. Phys. Acta 63, 364 (1990).
[20] E. H. Lieb and F. Y. Wu, Phys. Rev. Lett. 20, 1445 (1968).
[21] W. F. Brinkman and T. M. Rice, Phys. Rev. B 2, 4302 (1970).
[22] D. Vollhardt, Rev. Mod. Phys. 56, 99 (1984).
[23] W. Metzner, Z. Phys. B 77, 253 (1989).
[24] H. Yokoyama and H. Shiba, J. Phys. Soc. Japan 56, 1490 (1987).
[25] B. Hetényi, H. G. Evertz, and W. von der Linden, Phys. Rev. B 045107 (2009).
[26] K. Binder, Phys. Rev. Lett. 47, 693 (1981).
[27] K. Binder, Ferroelectrics 73, 43 (1987).
[28] K. Binder, Annu. Rev. Phys. Chem. 43, 33 (1992).
[29] M. E. Fisher and M. N. Barber, Phys. Rev. Lett. 28, 1516 (1972).
{'abstract': 'A total position operator X in the position representation is derived for lattice fermionic systems with periodic boundary conditions. The operator is shown to be Hermitian, the generator of translations in momentum space, and its time derivative is shown to correspond to the total current operator in a periodic system. The operator is such that its moments can be calculated up to any order. To demonstrate its utility finite size scaling is applied to the Brinkman-Rice transition as well as metallic and insulating Gutzwiller wavefunctions.Short title: Periodic lattice position operator', 'arxivid': '0909.0211', 'author': ['Balázs Hetényi \nInstitut für Theoretische Physik\nAustria and Mathematisches Institut\nFakultät für Mathematik\nInformatik und Statistik\nTechnische Universität Graz\nPetersgasse 16A-8010Graz\n\nLudwig Maximilians Universität\nTheresienstrasse 3980333MünchenGermany\n'], 'authoraffiliation': ['Institut für Theoretische Physik\nAustria and Mathematisches Institut\nFakultät für Mathematik\nInformatik und Statistik\nTechnische Universität Graz\nPetersgasse 16A-8010Graz', 'Ludwig Maximilians Universität\nTheresienstrasse 3980333MünchenGermany'], 'corpusid': 115166825, 'doi': '10.1088/1751-8113/42/41/412003', 'github_urls': [], 'n_tokens_mistral': 7043, 'n_tokens_neox': 6196, 'n_words': 3809, 'pdfsha': '2fad31c3a637e03fcbb2af045131f4fdaae892eb', 'pdfurls': ['https://arxiv.org/pdf/0909.0211v1.pdf'], 'title': ['Many-body position operator in lattice fermionic systems with periodic boundary conditions', 'Many-body position operator in lattice fermionic systems with periodic boundary conditions'], 'venue': []}
arxiv
The Application of Driver Models in the Safety Assessment of Autonomous Vehicles: A Survey

Cheng Wang, Fengwei Guo, Ruilin Yu, Luyao Wang, Yuxin Zhang

Index Terms—Driver model, verification and validation, regulation, autonomous vehicles, safety assessment.

Driver models play a vital role in developing and verifying autonomous vehicles (AVs). Previously, they were mainly applied in traffic flow simulation to model realistic driver behavior. With the development of AVs, driver models attract much attention again due to their potential contributions to AV certification. The simulation-based testing method is considered an effective measure to accelerate AV testing due to its safe and efficient characteristics. Nonetheless, realistic driver models are prerequisites for valid simulation results. Additionally, an AV is assumed to be at least as safe as a careful and competent driver. Therefore, driver models are inevitable for AV safety assessment. However, no comparison or discussion of driver models regarding their utility to AVs has appeared in the last five years, despite their necessity for the release of AVs. This motivates us to present a comprehensive survey of driver models in this paper and compare their applicability. Requirements for driver models in terms of their application to AV safety assessment are discussed. A summary of driver models for simulation-based testing and AV certification is provided. Evaluation metrics are defined to compare their strengths and weaknesses. Finally, an architecture for a careful and competent driver model is proposed. Challenges and future work are elaborated. This study gives related researchers, especially regulators, an overview and helps them to define appropriate driver models for AVs.

I. INTRODUCTION

Autonomous vehicles (AVs) have been intensively studied in recent years.
This significantly drives the evolution of vehicles to a smarter and more intelligent level. As an example of the achievements, Level 2 AVs [1] have been introduced into the market recently. Although drivers may not use a Level 2 system as intended and thus additional risks emerge [2], the Level 2 system can fundamentally increase driving safety and comfort by controlling both the longitudinal and lateral motion of a vehicle. To further increase autonomy, a Level 3 AV is supposed to be the goal for the next stage, shifting the entire dynamic driving task (DDT) to the AV itself within a predefined operational design domain (ODD) [1]. Before bringing a Level 3 AV to the market, corresponding verification and validation (V&V) procedures are essential to prove its safety. However, because numerous known and unknown unsafe scenarios exist due to the complexity of the open world, validation without an explicit stop criterion seems infeasible. Thus, the typical question "How safe is safe enough?" [3] arises. To answer this question, defining safety goals for AVs becomes imperative. The safety goals differ across AV functions and systems. A safety goal in ISO 26262 [4] is defined as a low and acceptable residual risk for electric/electronic systems. In contrast, the accident rate per kilometer is described in ISO 21448 [5] as the validation target for AVs. Usually, the accident rate per kilometer of human drivers is utilized as a baseline, and an AV is supposed to have an accident rate per kilometer 100 or 1000 times lower than the baseline [6]. In this way, a reasonable validation target is derived, as an AV is expected to be safer. A similar concept is proposed in [7] [8], where a positive risk balance (PRB) compared with human driving performance is suggested prior to the launch of AVs. Motivated by this, the UNECE released Regulation No. 157 [9] for L3 Automated Lane Keeping Systems (ALKS) with a maximum driving speed of 130 km/h.
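A back-of-the-envelope calculation shows why such distance-based targets are hard to demonstrate by driving alone. The sketch below is our own simplification: the baseline rate is an assumed round number, and the one-sided "rule of three" (zero observed events) is a much cruder test than those used in the literature, which permit observed accidents and therefore demand even more kilometers:

```python
# Assumed human baseline: one fatal accident per 210 million km (illustrative).
BASELINE = 1 / 210e6      # fatal accidents per km
TARGET = BASELINE / 2     # require the AV to be twice as good as the baseline

# Rule of three: after n failure-free km, the one-sided 95% upper confidence
# bound on the accident rate is roughly 3 / n.  Demand 3 / n < TARGET.
n_km = 3 / TARGET
print(f"~{n_km / 1e9:.2f} billion failure-free km needed")
```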
This regulation suggests that an AV's performance shall be ensured at least to the level at which a competent and careful human driver (CC Driver) could minimize the risks. To depict a competent and careful human driver, two driver performance models are introduced: a reaction-time-based driver model (the Japanese driver model [10]) and a fuzzy safety model (FSM [11]). Cut-in, cut-out, and deceleration scenarios with various parameter combinations are simulated to identify the collision and non-collision boundaries of these two driver performance models. Subsequently, the performance boundaries are regarded as a baseline for AV safety assessment, i.e., the safety performance boundary of an AV shall be larger than that of a CC Driver. Figure 1 illustrates the concrete application process of a driver model to derive its performance boundary. First, reasonably foreseeable parameter ranges [12] are determined from data collected in the real world. They are defined as likely occurring scenarios within a specific ODD and period. According to risk acceptance and relevant exposure, the limits of the reasonably foreseeable parameter ranges vary. Subsequently, sampling techniques such as uniform sampling and adaptive sampling [13] are applied to generate concrete scenarios [14] based on the parameter ranges and correlations. Finally, the performance boundary of a driver model is determined by distinguishing collision and non-collision scenarios executed in simulations.

In addition, driver performance models are essential for the simulation-based method [15] [16] for AV V&V. In the simulation-based method, AVs are tested in a virtual environment with surrounding vehicles powered by driver models.

Fig. 1. The application of driver models for AV safety verification. (a) represents the determination of reasonably foreseeable parameter ranges using collected data; (b) illustrates the performance boundary of a driver model using uniform sampling within the parameter ranges; (c) depicts the performance boundary of a driver model using adaptive sampling within the parameter ranges.

Due to the huge test effort involved in the distance-based method, shifting V&V to simulations seems to be an inevitable choice to escape from the "approval trap". As indicated in [17], 6.62 billion test kilometers must be driven to prove that an AV is approximately twice as good as human-driven vehicles at a significance level of 5%, according to the fatal accident statistics of the Federal Statistical Office. With this fact in mind, projects like VV-Method [18] and SET Level [19] were initiated to develop a seamless chain of reasoning for the proof of safety and a holistic tool chain. Both projects emphasize the importance of the simulation-based method. Even within the simulation-based method, four subcategories [20] are recommended by P.E.A.R.S. (an open consortium to harmonize the prospective effectiveness assessment of active safety systems by simulation): 1) direct usage of real-world cases (i.e., reconstructed crash data or field data) without any changes; 2) usage of real-world cases plus varying the initial values by means of distributions; 3) deriving scenario mechanisms and distributions from real-world cases and selecting a low number of representative cases; 4) deriving scenario mechanisms and distributions from real-world cases and applying sampling to generate multiple cases. The last subcategory is suggested by Fries et al. [21] and Kaufmann et al. [7] because this approach does not rely directly on real-world cases but establishes the link to them via distributions.
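The boundary-identification step sketched in Fig. 1 can be illustrated with a deliberately crude stand-in driver model (our own toy, not the Japanese driver model or FSM of Regulation No. 157): a constant reaction time followed by constant braking, with made-up parameter values, classified over a uniformly sampled cut-in parameter grid:

```python
import numpy as np

TAU = 1.2     # s, assumed perception-reaction time
A_MAX = 6.0   # m/s^2, assumed maximum braking deceleration

def collides(gap0, dv):
    """Cut-in at initial gap gap0 [m] with closing speed dv [m/s] (ego faster)."""
    if dv <= 0:
        return False                          # the gap is not closing
    # gap consumed during the reaction time plus while braking to match speed
    closure = dv * TAU + dv**2 / (2 * A_MAX)
    return gap0 < closure

# Uniform sampling over reasonably foreseeable ranges (cf. Fig. 1(b))
gaps = np.linspace(5, 60, 12)   # m
dvs = np.linspace(0, 20, 11)    # m/s
n_crash = sum(collides(g, v) for g in gaps for v in dvs)
print(f"{n_crash} of {gaps.size * dvs.size} sampled scenarios end in collision")
```

The set of (gap, closing-speed) pairs on either side of the collision/non-collision split is exactly the "performance boundary" that a regulatory driver model would contribute as a pass/fail reference.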
Since no predefined trajectories are available, a driver model is required to update the state of the surrounding vehicles, given their initial states, during the simulation. According to the level of detail with which a traffic flow is represented, the following classifications are considered [22] [23]: 1) microscopic simulation models: the space-time behavior of vehicles and drivers as well as their interactions are modeled at a high level of detail; 2) mesoscopic simulation models: the traffic flow dynamics are described in aggregate terms using probability distribution functions, and the dynamics of these distributions are governed by individual drivers' behavior; 3) macroscopic simulation models: traffic flow is represented over large road networks, measured in terms of characteristics such as speed, flow, and density. Clearly, driver models for AV safety assessment fall into the microscopic category, since the behavior of an individual vehicle is essential to evaluate an AV's performance in handling the driving tasks of each scenario. Macroscopic simulation models, on the other hand, are more suitable for analyzing the impact of AVs on traffic flow or even society. As a result, driver models play a vital role in the safety assessment of AVs. Nevertheless, since AV certification is challenging and achieving PRB with driver models as references remains under exploration, few reviews regarding their application to AV verification are currently available, despite the fact that driver models have been studied for decades. In the recent five years, Singh and Kathuria [24] discussed the study of driving behavior using naturalistic driving data. Similarly, a review was conducted in [25] to analyze the effect of advanced features such as adaptive cruise control (ACC) on driver behavior. Park and Zahabi [26] presented a review of human performance models with a focus on human cognition and interactions with in-vehicle technology.
A survey on car-following models was performed in [27] [28] without covering AV safety assessment, as in the reviews made several years earlier: e.g., Rahman et al. [29] and Moridpour et al. [30] reviewed lane-changing models in 2013 and 2012, respectively. Therefore, we are motivated to present a detailed and holistic review of currently available driver models in terms of their application to AV safety assessment by considering the following three research questions: • RQ1: what are the requirements on the driver models for AV safety assessment? • RQ2: what driver models are available in this context considering the requirements? • RQ3: what kind of driver models are appropriate for what kind of AV safety assessment tasks? To the best of the authors' knowledge, no survey on driver models focuses on their application in AV safety assessment. Additionally, lateral avoidance is an important maneuver: apart from models for longitudinal braking when confronting a critical situation, appropriate driver models for lateral avoidance are also included in this survey, a point that few surveys of driver models take into account. After answering these three research questions, we finally propose an architecture for a CC Driver and discuss the current gaps and future research directions. Therefore, our contributions in this work are as follows: • requirements on driver models for AV safety assessment are derived, which facilitates AV developers in developing their driver models for AV testing; • a comprehensive survey of driver models is presented, which gives developers and regulators an overview of the current status; • appropriate driver models for AV safety assessment are compared based on our proposed metrics, making a selection of appropriate models possible for related researchers; • a thoughtful discussion of possible driver models for AV certification is given to indicate future working directions.
Section II primarily addresses the requirements for driver models in terms of their application in AV safety assessment, as well as the scope of relevant driver models. Based on the determined scope, the driver models aiming at modeling realistic driver behavior are elaborated in Section III. Section IV deals with driver models handling critical situations. Subsequently, the applicability of driver models for AV safety assessment is highlighted in Section V. Based on the analysis, a discussion is conducted and limitations are identified in Section VI. Lastly, conclusions and future work are summarized in Section VII. II. REQUIREMENTS AND SCOPE In this section, we first explore what kind of driver models are useful for AV V&V. This facilitates the determination of our survey scope. Based on this, we give an overview of driver models that have been utilized in AV safety assessment. This overview serves as a guide for the detailed description of the driver models in the following sections. A. Requirements on driver models According to the definition of simulation-based testing in Section I, AVs are tested in simulated scenarios, which can be multiple concrete scenarios or a large-scale random environment such as a simulated city. Concrete scenarios are usually described in a standard format such as OpenSCENARIO [31] for data exchange between software tools. In order to provide sound and credible test results, a high-validity simulated environment is usually required. To this end, the driver models should be realistic, and their trajectories should be human-like and reasonable given an initial driving state. In addition to testing AVs in predefined concrete scenarios, large-scale simulation environments such as a city with different traffic situations offer the possibility to discover unknown unsafe scenarios and thus validate AVs.
In order to replicate inattentive or distracted driving behaviors for the generation of critical scenarios, it is necessary to take into account the stochasticity of information processing and situation understanding [32]. Such driver models are categorized as cognitive models [33], in which the internal processes and states that produce the behavior are modeled. Predictive models [33], on the other hand, attempt to simulate the driver behavior itself without necessarily considering the underlying processes that lead to the behavior. The cause-and-effect relationships between the behavior and the external factors are ignored, which results in limited predictive capabilities [34]. Different from simulation-based testing, the performance boundary of human drivers should be measured in critical scenarios, because critical scenarios elicit peak performance capabilities, whereas routine scenarios elicit typical (not necessarily the best) behavior [35]. Therefore, a driver model used as a baseline for AV safety verification focuses on a driver's performance in critical scenarios. The driver model should essentially apply the maximum effort to avoid collisions or adjust the reaction intensity according to the level of risk. As a result, it is possible to determine in which scenarios the driver model can avoid a collision and in which it cannot. Consequently, both cognitive and predictive models are required in simulation-based testing. Depending on the scale of the simulation, predictive models are suitable for replicating human-like trajectories in concrete scenarios, whereas cognitive models are more appropriate for large-scale simulations in order to generate human-like driving errors randomly. In contrast, the driver model as a baseline for AV safety verification shall reflect the peak human driving capability in order to obtain the performance boundary. B. Scope Based on the derived requirements on driver models, we define the review scope, as illustrated in Fig. 2.
First, we investigate suitable driver models for simulations in terms of their suitability to generate realistic driver behavior. This includes existing car-following, lane-changing, and cognitive models. The human cognition process is included in cognitive models to model driving errors, whereas car-following and lane-changing models focus on the maneuver level. For each type of model, we further classify the models according to their characteristics. In addition to driver models for simulation, driver models as references are elaborated. These models attempt to model a driver's reaction to an imminent situation. A representative example from this category is the Japanese model [10] defined in UNECE Regulation No. 157 [9]. However, the reference driver models in the regulation only consider braking as the strategy to avoid collisions, and little attention is paid to driver models using steering for collision avoidance. Yet the steering maneuver is also a viable action for collision avoidance if free space is available on the side. Furthermore, simultaneous steering and braking maneuvers can mitigate the collision severity if a collision is inevitable. Therefore, we consider not only braking models, but also steering models and combined braking & steering models in this paper. Consequently, the review is beneficial for developers and regulators to develop a comprehensive driver model for AV safety verification. For each possible maneuver, we further divide the models into different categories considering their modeling processes. III. DRIVER MODELS FOR SIMULATIONS In this section, we present a detailed elaboration of driver models for simulation-based testing, including predictive and cognitive models. In particular, car-following and lane-changing models are distinguished within predictive models. Subsequently, an overview of simulation tools with corresponding integrated driver models is given to show their application in AV safety assessment. A.
Car-following models Car-following (CF) is one of the typical behaviors addressed by predictive models. For instance, a car-following model is necessary when testing the adaptive cruise capability of an AV. The model represents the interaction between preceding and following vehicles in the same lane and belongs to the microscopic driving behavior models. We divide CF models into analytic and data-driven models according to the modeling method. Analytic models: Analytic models provide a physical description of CF behavior based on artificially designed relations and can be calibrated to match real traffic flow. They can be further subdivided into mechanical models, psycho-physical models, cellular automata (CA) models, and adaptable models, as described in Fig. 3. The Gazis-Herman-Rothery (GHR) model, the earliest mechanical model, proposed by Chandler et al. [63], utilizes the relative speed between the preceding and following vehicles as the stimulus term and considers the speed of the following vehicle and the distance headway as influencing factors of the sensitivity coefficient. The model is expressed by:

a_n(t + T) = λ v_n^m(t + T) [v_{n−1}(t) − v_n(t)] / [x_{n−1}(t) − x_n(t)]^l    (1)

where n refers to the following vehicle; n − 1 refers to the preceding vehicle; a_n(t + T) is the acceleration of the following vehicle at time t + T; x_n(t) and v_n(t) are the position and velocity of the following vehicle at time t, respectively; λ is the sensitivity coefficient; T is the reaction time; and m, l are coefficients to be calibrated. Based on the GHR model, several important CF models, such as the Gipps model [37], Helly model [64], Newell model [65], Intelligent Driver Model (IDM) [36], and Optimal Velocity (OV) model [66], were successively developed. However, these models often rely on strong assumptions, which can limit their validity.
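As a concrete illustration, the GHR stimulus-response law of Equation (1) can be sketched in a few lines of Python; the parameter values below are illustrative placeholders, not calibrated values from the literature:

```python
def ghr_acceleration(v_follow, gap, dv, lam=0.5, m=0.2, l=1.6):
    """GHR stimulus-response law (Eq. 1): the follower's acceleration is the
    relative speed dv = v_lead - v_follow, scaled by a sensitivity term that
    grows with the follower's speed and shrinks with the spacing (gap)."""
    if gap <= 0:
        raise ValueError("spacing must be positive")
    return lam * v_follow**m * dv / gap**l

# A follower at 20 m/s closing on a slower leader (dv < 0) should brake:
a = ghr_acceleration(v_follow=20.0, gap=30.0, dv=-5.0)
```

With a positive exponent l, the same relative-speed stimulus produces a weaker response at larger spacings, which is the intended sensitivity behavior of the model.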
Studies have shown that CF behavior is not purely mechanical but involves the driver's perception, information processing, and decision-making processes [67]. To improve the mechanical models, especially the GHR model, some studies added a memory function [68] and the physical state of multiple vehicles ahead [69] as stimuli to better simulate the CF behaviors of drivers in the real world. Inspired by these improvements, some researchers established psycho-physical models that directly incorporate the driver's perception process. For instance, Wiedemann introduced the term "perceptual threshold" to define the minimum value of a stimulus that the driver can perceive and respond to [70]. The basic idea of the model is that once the following driver believes that the relative distance to the preceding vehicle is less than the psychological safety distance, the driver starts to slow down; because the driver cannot accurately estimate the speed of the preceding vehicle, the following vehicle's speed will be lower than the preceding vehicle's speed for a period of time, until the distance between the two vehicles reaches another psychological safety distance. Afterward, the following driver starts to slowly accelerate. Consequently, a repetitive pattern of deceleration and acceleration is established. Considering the way the brain estimates the collision time, Andersen et al. [71] proposed the Driving-by-Visual-Angle (DVA) model, which uses the visual angle and its rate of change as the variables on which the driver bases acceleration decisions. This model offers a more realistic representation of a driver's reaction while driving, and driving simulator studies show that it fits driving data better. However, it is difficult for psycho-physical models to find a balance between model simplicity and performance due to the complex perceptual processes of drivers.
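The visual-angle quantities on which models such as DVA operate can be sketched as follows; the vehicle width and the geometry here are illustrative assumptions:

```python
import math

def visual_angle(width, distance):
    """Optical size θ subtended by a lead vehicle of a given width (m)
    at a given distance (m), in radians."""
    return 2.0 * math.atan(width / (2.0 * distance))

def visual_angle_rate(width, distance, closing_speed):
    """Time derivative of the visual angle when the gap shrinks at
    closing_speed (m/s); obtained by differentiating visual_angle."""
    return width * closing_speed / (distance**2 + width**2 / 4.0)

def inverse_tau(width, distance, closing_speed):
    """Looming ratio (angle rate over angle): grows as the situation becomes
    more critical; for small angles it approaches closing_speed / distance,
    i.e. the reciprocal of the time-to-collision."""
    return visual_angle_rate(width, distance, closing_speed) / visual_angle(width, distance)
```

The small-angle limit links this perceptual quantity to the kinematic criticality metrics (such as TTC) used for braking triggers later in the paper.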
Cellular Automaton (CA) theory is considered a promising approach to address this challenge. A CA is defined as a dynamical system that evolves in discrete time steps according to certain local rules in a cellular space composed of cells with discrete and finite states. As a result, the discrete-continuous-discrete approximation process can be avoided by applying CA theory to model CF behavior. Since the model developed by Nagel and Schreckenberg (NaSch) [72], more CA-based driver models have been proposed [73] [74] [75]. Despite being widely used in traffic flow simulation, the CA model falls short of meeting the accuracy required by vehicle-level simulation, due to the contradiction between simplicity and authenticity inherent in the CA algorithm. Therefore, most automotive simulation software still uses mechanical CF models or psycho-physical models. Recently, adaptable driver models incorporating driver characteristics have been studied. Chen [76] investigated intrinsic long-term driving characteristics and their short-term changes in drivers experiencing external stimuli, and proposed a long- and short-term driving (LSTD) model that incorporates these changes into the CF model. Long-term driving characteristics were extracted through cluster analysis, and changes after external stimuli were measured as indicators of short-term driving characteristics. The model was validated using the NGSIM dataset [77]. Unlike Chen, Liao [78] proposed different theoretical models for three traffic states, considering that drivers exhibit varying driving styles in different traffic conditions. Afterwards, numerical tests were conducted to verify the safety, stability, comfort, fuel economy, and consistency of these models with various driving styles in various scenarios.
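A minimal single-lane NaSch update step, with the canonical four rules (acceleration, gap-limited braking, random dawdling, movement), might look as follows; the cell granularity, v_max, and the dawdling probability are illustrative choices:

```python
import random

def nasch_step(positions, speeds, road_len, v_max=5, p_dawdle=0.3, rng=random):
    """One synchronous NaSch update on a circular road of road_len cells.
    positions must be given in ring order; speeds are in cells per step."""
    n = len(positions)
    new_speeds = []
    for i in range(n):
        # empty cells to the next vehicle ahead (wraps around the ring)
        gap = (positions[(i + 1) % n] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, v_max)      # rule 1: accelerate
        v = min(v, gap)                    # rule 2: brake to respect the gap
        if v > 0 and rng.random() < p_dawdle:
            v -= 1                         # rule 3: random dawdling
        new_speeds.append(v)
    new_positions = [(x + v) % road_len for x, v in zip(positions, new_speeds)]
    return new_positions, new_speeds       # rule 4: move
```

Because each vehicle's speed is capped by the current gap, the synchronous update cannot produce collisions, which is the property that makes the scheme attractive for large traffic-flow simulations.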
Data-driven: With the advent of the era of big data and the rapid improvement of data collection technology, high-precision and large-sample trajectory data can be obtained easily, which stimulates the development of data-driven CF models. Instead of adhering to various theoretical assumptions and pursuing mathematical derivations in a strict sense, data-driven models use non-parametric methods to mine the intrinsic information of trajectory data and build CF models with high prediction accuracy. By learning from data, artificial neural network methods aim to establish a general description of driving behavior, and they typically have high prediction accuracy in previously observed situations. Therefore, a number of studies utilize artificial neural networks to model CF behavior. For instance, back-propagation (BP) neural networks [38], radial basis function neural networks [39] [79], and fuzzy neural networks [80] [81] [82] have been applied to model CF behavior. However, the generalization of these models is usually limited.

Fig. 3. The classification of analytic models. Mechanical models: constructed using mathematical and physical equations with a theoretical basis abstracted from traffic phenomena or driving processes. Psycho-physical models: constructed based on a driver's perception and response. Cellular automaton models: characterized by discrete time, space, and state variables with localized spatial interaction and temporal causality, capable of simulating the spatial-temporal evolution of complex systems. Adaptable driver models: consider driver characteristics in order to increase model generalization.

Support vector regression (SVR) is a regression algorithm based on the support vector machine framework. It can be used for regression fitting of trajectory data.
This method follows the principle of structural risk minimization and theoretically has stronger data learning and generalization abilities than artificial neural networks. An exemplary application is the model studied by Zhang et al. [83]. Based on the assumption that drivers tend to exhibit similar driving behaviors when facing the same driving scenario, He et al. [84] searched the K most similar historical driving scenarios for the most likely driving behaviors, which were then used as model output to generate a KNN (K-nearest-neighbor) CF model. Compared to other data-driven models with opaque structures, the KNN model has a clearer modeling structure and is more understandable. Deep learning (DL) models, compared to traditional neural network models, usually have multiple hidden layers and a correspondingly huge number of neuronal connection weights, thresholds, and other parameters. Various DL-based CF models have been proposed in the past five years [85] [86] [87] [88]. For instance, both Zhou et al. [85] and Wang et al. [86] proposed CF models based on recurrent neural networks (RNN), taking continuous historical time series and vehicle dynamic data as input, while the output is the desired speed for the subject vehicle. The results show that their models perform well in predicting the trajectory of the following vehicle. However, the high accuracy of DL models comes at the expense of data dependency, high computational costs, and poor generalization. Deep reinforcement learning (DRL) has addressed these issues to some extent. Zhu et al. [89] used the difference between simulated speed and observed speed as the reward function and considered a 1 s reaction delay to build a CF model. The model was able to reproduce human-like CF behavior and showed better generalization ability, as the agent learned decision-making mechanisms from the training data rather than estimating parameters through data fitting. As an extension, Hart et al.
[90] incorporated the idea of driving styles in the reward function to simulate different driver characteristics. As the preceding description indicates, CF behavior modeling is a challenging task when factors such as accuracy, generalization, computational cost, and driver characteristics are all considered; depending on the specific task, some aspects have to be sacrificed. Additionally, data-driven approaches have been studied intensively in recent years due to their generally high accuracy. B. Lane-changing models Lane-changing (LC) models usually incorporate three levels when responding to the surrounding environment, as shown in Fig. 4: 1) strategic level: the driver knows about the route in a network, which influences the lane choice, for example, with regard to lane blockages, on-ramps, off-ramps, or other mandatory merges; 2) tactical level: maneuvers are selected to achieve short-term objectives, such as a decision to pass a slow-moving vehicle or maintain the desired speed; 3) operational level: drivers decide about the maneuvers to control their vehicles and determine whether an immediate lane change is both safe and desirable. Lane-changing is divided into two types based on driving scenarios: mandatory lane-changing (MLC) and discretionary lane-changing (DLC) [41] [43]. MLC happens when the driver must leave the current lane (e.g., to use an off-ramp or avoid a lane blockage), and DLC happens when the driver performs a lane change to improve driving conditions (e.g., to attain the desired speed in the case of a slow leading vehicle). From the perspective of interaction, free, cooperative, and forced lane-changing have been proposed [91] [92]. In cooperative and forced lane-changing, the follower slows down, either willingly or reluctantly, to create enough space for the lane changer to merge in. Rule-based: The Gipps model [40] is a type of rule-based LC model.
In this model, LC is decided by considering necessity, desirability, and safety. Factors that affect LC are predefined, and their importance is evaluated in a deterministic manner. Three zones, depending on the distance to the intended turn, are defined to govern the driver's behavior for an intended LC. More specifically, a desired speed is kept if the intended turn is far away, while lane changes to the turning lanes or adjacent lanes are considered in the middle zone. When the intended turn is close, the driver focuses on keeping the correct lane and ignores gaining other advantages. Due to the clearly structured triggering conditions, the model has been applied in several traffic simulations. However, the variability in individual driver behavior [29], parameter estimation [43], and applicability in congested scenarios [30] are not addressed. Yang and Koutsopoulos [41] refined LC into MLC and DLC, and defined four steps to model a LC maneuver: the decision to consider a LC, the choice of the target lane, the search for an acceptable gap, and the execution of the change, as illustrated in Fig. 4. Different from the Gipps model, the initiation of an MLC is described with a probability that depends on the distance to the intended turn. Although driver behavior variability is modeled to some degree, parameter estimation and validation of the model are missing. To increase the interaction between vehicles during LC, Hidas [91] [92] considered free, cooperative, and forced lane-changing in his model. In cooperative LC, a certain deceleration, determined by the aggressiveness parameter and the urgency of the LC, is adopted by the lag driver. The lag gap at the end of the deceleration can be obtained from kinematic equations and compared with the minimum acceptable lag gap to determine whether a cooperative LC is feasible. Forced LC follows the same process as cooperative LC.
The differences lie only in the adjusted value of the deceleration of the lagging vehicle and the speed decrease of the subject vehicle. Discrete choice-based: In discrete choice-based models, LC decisions are described in a probabilistic manner. Ahmed's model [42] [69] is representative of this category. He modeled a LC maneuver as a sequence of three steps: the decision to consider LC, the choice of the target lane, and the determination of a sufficient gap. To consider the unusual LC maneuver in congested traffic scenarios where the subject vehicle forces the lag vehicle to yield, he took forced merging into account in addition to MLC and DLC. The probability of each type of LC maneuver is calculated in a discrete choice framework. Additionally, whether a gap is accepted or not for MLC, DLC, and forced merging is probabilistic as well. Consequently, the differences between MLC, DLC, and forced merging are captured in his model. However, a rigid separation between MLC and DLC can be unrealistic in some scenarios, because once MLC is activated, other considerations such as DLC are ignored. Therefore, Toledo [43] developed an integrated probabilistic LC model in which MLC and DLC can take effect simultaneously. To evaluate the model, a comparison between separate and integrated MLC & DLC was performed. The results demonstrated the importance of incorporating trade-offs between MLC and DLC into a LC model. Incentive-based: The idea of incentive-based models is to weigh the anticipated advantages and disadvantages of a prospective LC. Based on this concept, the minimizing overall braking induced by lane change (MOBIL) model [44] was proposed. In MOBIL, both the attractiveness of a given lane (i.e., its utility) and the risk associated with lane changes are measured using single-lane accelerations.
When a lane change is considered, it is assumed that a driver makes a trade-off between the expected advantage and the disadvantage imposed on other drivers. The advantages are measured by the difference between the accelerations after and before the lane change, while the disadvantages are quantified by the deceleration imposed on the lag vehicle. Additionally, a "politeness factor" was defined to tune LC behavior from purely egoistic to more cooperative driving. The MOBIL model has the advantage of transferring the assessment of the traffic situation to the acceleration function of the car-following model, allowing for a compact and general model formulation with only a few additional parameters. Nevertheless, empirical justification, model calibration, and validation remain unaddressed. The LC model with relaxation and synchronization (LMRS) [45] is another example based on the incentive framework. Three different incentives, including route following, speed gaining, and right keeping, are merged into a single desire. By comparing the single desire with three predefined thresholds, no LC, free LC, synchronized LC, and cooperative LC are distinguished. If the threshold for a free LC is exceeded, the LC is performed without preparation. If the single desire is greater than the threshold for a synchronized LC, the subject vehicle synchronizes its speed with the target lane and prepares for the LC. In a cooperative LC, the lagging vehicle creates a gap proactively by following the potential lane changer. This can occur when, for example, the subject vehicle turns on its indicators or shows lateral movement when its LC desire is large. Once the decision to LC is initiated, a gap model similar to MOBIL is utilized in LMRS. To calibrate and validate the model, data from a highway segment were applied. The results demonstrated that reality is reproduced in terms of lane volume distributions and lane-specific speeds.
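The MOBIL decision described above can be sketched as follows; the accelerations would come from any underlying CF model (e.g., IDM), and the politeness, threshold, and safe-braking defaults are illustrative values in the spirit of the model, not calibrated ones:

```python
def mobil_ok(acc_self_new, acc_self_old,
             acc_new_follower_new, acc_new_follower_old,
             acc_old_follower_new, acc_old_follower_old,
             politeness=0.3, a_threshold=0.1, b_safe=4.0):
    """MOBIL lane-change test: the change is safe if the new follower is not
    forced to brake harder than b_safe, and advantageous if the ego's gain,
    plus the politeness-weighted acceleration changes of both followers,
    exceeds a_threshold."""
    if acc_new_follower_new < -b_safe:          # safety criterion
        return False
    incentive = (acc_self_new - acc_self_old) + politeness * (
        (acc_new_follower_new - acc_new_follower_old)
        + (acc_old_follower_new - acc_old_follower_old))
    return incentive > a_threshold              # incentive criterion
```

With politeness set to 0 the rule degenerates to purely egoistic lane changing; values near 1 make the ego weigh the followers' acceleration changes as heavily as its own gain.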
Data-driven: Motivated by the lack of flexibility of rule-based approaches under dynamic driving situations and the resulting poor performance, data-driven approaches are trained on large sets of sample data. For instance, a neural network [93], a deep belief network (DBN) [46], and a support vector machine (SVM) [94] have been applied to model LC decisions. Additionally, deep reinforcement learning (DRL) also shows great potential [47] [95] [96]. Since a LC process incorporates a sequence of actions and the action to be executed affects the ultimate goal of the task, RL is well suited to this kind of problem. However, the mapping from state-action pairs to the total return (usually called the Q-value) grows significantly with the size of the state-action spaces, thus neural networks are applied to model this mapping. C. Cognitive models The goal of cognitive models is to simulate the human cognition process while driving, which includes perception, recognition, judgment, and operation. Original cognitive models such as ACT-R [97], Soar [98], and QN-MHP [99] are based on psychological cognitive architectures. These models can facilitate the understanding of driver behavior in the context of general human abilities and constraints. However, they are not suitable for simulation in arbitrary dynamic environments due to their complex structures. Recently, cognitive models aiming to simulate realistic traffic environments have been proposed. Driver failures that cause crashes can be simulated by taking into account inattentive or distracted driving during the information acquisition process in the cognitive models. A multi-agent traffic simulation software named Re:sim is proposed in [32] to model driver agents and their interactions with AVs. In this model, a driver agent perceives the surrounding objects that exist in its line of sight and field of view.
The relative states of the observed objects are then calculated, and recognition labels such as preceding or oncoming are assigned to them. Subsequently, hazardous objects are identified, and the risk of collision is estimated. Based on this judgment, the agent decides to operate and react using driving models such as the Wiedemann following model. Similarly, the stochastic cognitive model (SCM) [21] [100] consists of six modules: information acquisition, mental model, situation manager, action manager, action implementation, and driver characteristics. In information acquisition, the visual perception of the driver for perceiving the environment, such as the gaze allocation and fixation duration on a specific area of interest, is modeled. The mental model calculates and stores relevant driving states of observed objects. The situational risk is evaluated by the situation manager, which controls the action manager to provide an appropriate action for the action implementation. Importantly, the SCM model provides the opportunity to parameterize the driver characteristics so that the driver's perception, cognition, and compliance with traffic rules can be flexibly adjusted. Unlike the SCM model, the DReaM model [48] focuses on urban traffic, particularly junction scenarios. Aside from that, the DReaM model has a structure similar to the SCM model. D. Simulation tools Due to the utility of driver models, some of them are widely used in traffic simulation software, such as SUMO [101], PELOPS [102], and Aimsun [103]. Table I summarizes the traffic simulation software with known driver models. Various CF models are utilized in different traffic simulation software, whereas the Gipps [40] lane-changing model is integrated into traffic simulation more frequently than other LC models. Some recently proposed cognitive models continue to use common driver models, while more effort is spent on information acquisition and processing. IV.
DRIVER MODELS AS REFERENCES Driver models used as references for AV safety verification should accurately represent human drivers' driving abilities.

Table I. Traffic simulation software and their integrated driver models (car-following model / lane-changing model):
PELOPS [102]: Wiedemann [70] / Sparmann [106], Gipps [40]
PARAMICS [107]: Fritsche [108] / Fritsche [108]
MITSIMLab [109]: Ahmed [69], Gazis et al. [110] / Toledo [43], Gipps [40]
Aimsun [103]: Gipps [37] / Gipps [40]
SimTraffic [111]: headway-based / Gipps [40]
VISSIM [112]: Wiedemann [70] / Sparmann [106]
CORSIM [113]: headway-based / rule-based
Re:sim [114]: Wiedemann [70] / not applicable
DReaM [48] (OpenPASS [115]): IDM [36] / rule-based
Note: CORSIM and DReaM have their own rule-based lane-changing models; Re:sim currently only includes a car-following model.

Such driving abilities are typically shown in critical scenarios. To describe drivers' driving abilities, the concept of the careful and competent driver model is proposed in [9]. To present a clear scope, a definition of careful and competent is necessary. In this paper, careful means that a driver is capable of identifying a risk in time to avoid an unnecessarily intense response to a situation, whereas competent means that a driver uses all possible maneuvers to avoid or mitigate a collision. Thus, we discuss in the following how to identify situation risks in order to trigger the corresponding type of driver model. Braking is the most common reaction of drivers in critical situations. However, research [116] shows that if a collision cannot be avoided by braking only, steering behavior is also performed by drivers. The combination of braking and steering has the potential to further reduce the probability of a collision. Therefore, all three types of collision avoidance maneuvers are studied to present a holistic overview of driver models in critical situations. Cognitive models are not discussed: even though some of them, such as SCM [21], are supposed to be applicable in critical situations, their validity has not been demonstrated. A.
Braking models • RQ B1: What conditions cause an emergency braking maneuver to be activated? • RQ B2: What models are appropriate for describing emergency braking maneuvers? Regarding the triggering strategy (RQ B1): visual perception and criticality metrics are usually used. Visual looming is a typical representative of visual perception, which refers to the optical size and expansion of a preceding vehicle on the retina [50] [117]. To quantify this visual looming, the inverse tau [118], defined as τ⁻¹ = θ̇/θ, is applied, where θ̇ represents the preceding vehicle's optical expansion rate on the driver's retina and θ is the optical size. The inverse tau increases with the collision risk level. However, Markkula [119] argued that a driver's braking is not initiated by exceeding a perceptual threshold, but by the accumulation of noisy perceptual evidence over time. Following this idea, Svärd et al. [120] proposed a driver model for the initiation and modulation of the pre-crash brake response to deal with off-road glance behavior. In this model, the initiation time is obtained by the noisy accumulation of perceptual evidence for and against braking. Besides visual perception, criticality metrics are employed to estimate situation risk. In [9], hard braking is applied when a challenging vehicle cuts in and the time-to-collision (TTC) is smaller than 2 s. According to the study in [121], the TTC for braking onset in urban environments is between 3 and 4 s, while the threshold for participants in the driving test is 2.5 s. Besides TTC, time-to-brake (TTB) is also used in some cases. For instance, a value of 0 is used as the threshold to activate the emergency braking maneuver in [122]. Some driver models aim to model risk without an explicit threshold; depending on the risk level, different deceleration values are applied. The fuzzy safety model (FSM) [123] models the longitudinal risk by defining a safe and an unsafe distance.
If the actual distance is larger than the safe distance, no risk exists. Conversely, the highest risk is indicated if the actual distance is below the unsafe distance. The risk is interpolated if the distance lies between the two boundaries. Similarly, the risk-response driver model [124] utilizes risk field theory to model situation risk and then responds according to the risk level. Regarding emergency braking models (RQ B2): although existing CF models include deceleration behavior, they are less suitable as a comparison reference for AV safety verification, because they do not focus on a driver's braking reaction process in imminent situations, but rather on kinematic behavior at the vehicle level, without taking specific driver execution behavior into account. The study in [125] shows that the Gipps and GHR models exhibit unsatisfying behavior in critical scenarios. Depending on the modeling method, emergency braking models answering RQ B2 can be roughly divided into three categories: visual perception models, reaction time models, and fuzzy theory models. Visual perception models: Warren [49] described the braking process based on tau theory, where the brake-pedal position z is adjusted according to Equation (2):

Δz = b_pedal (τ̇_m − τ̇) + ε    (2)

where b_pedal is a stiffness parameter determining the speed of pedal adjustments, ε is a noise term, τ̇_m is the target margin value, and τ̇ is the rate of change of τ. Similar to the τ model, the deceleration-error model [50] adjusts the deceleration by comparing the current deceleration to an ideal deceleration, where the ideal deceleration is defined as the deceleration at which τ̇ equals −0.5. To capture the flexibility of the stiffness parameter in the deceleration-error model, an action boundary describing the braking urgency is defined in [126], beyond which a collision is unavoidable.
If a situation becomes urgent, the proximity to the action boundary decreases, and the driver should apply braking with increasing strength. Reaction time models: These models are designed to simulate the response delay that drivers may experience during emergency braking. The Japanese driver model proposed in [10] is a typical one in this category, where only braking is considered for collision avoidance. The driver model is separated into three segments: "Perception", "Decision" and "Braking". A risk perception point is defined to activate the decision and braking. In cut-in scenarios, the risk begins when the cut-in vehicle exceeds the normal lateral wandering zone and the TTC is below 2 s. When the lateral and longitudinal risks are identified, the driver starts to react: the gas pedal is released and the braking pedal begins to take effect after a reaction delay of 0.75 s. The deceleration rate then increases linearly up to the maximum deceleration of 0.744 g. Subsequently, the maximum deceleration rate is maintained. Meanwhile, the Reg157 model is defined in [9]. This model utilizes TTC to estimate situation risk and applies a braking maneuver to avoid collisions. The maximum deceleration is assumed to be at least 6 m/s², and the perception time, together with the time needed to achieve the maximum deceleration, equals 0.35 s. The Responsibility-Sensitive Safety (RSS) model [51] describes the rules that an AV should follow in order not to proactively cause accidents. The longitudinal safety distance considers the worst situation, where the preceding vehicle decelerates with maximum deceleration while the following vehicle accelerates with maximum acceleration during the reaction time and then decelerates moderately. This definition clearly yields a conservative distance. The RSS model provides safety requirements for a driving strategy. Consequently, it can be used as a reference model to determine the safety responsibility boundary for AVs.
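The worst-case reasoning behind the RSS longitudinal safe distance can be sketched as follows. This is the commonly quoted form of the formula; the parameter names are ours, and the exact definitions follow [51]:

```python
def rss_longitudinal_safe_distance(v_rear: float, v_front: float, rho: float,
                                   a_max_accel: float, b_min_brake: float,
                                   b_max_brake: float) -> float:
    """Commonly quoted form of the RSS minimum longitudinal gap [51].

    Worst case: during the reaction time rho the rear (following) vehicle
    accelerates at a_max_accel and then brakes at only b_min_brake, while
    the front vehicle brakes at its maximum b_max_brake. Speeds in m/s,
    accelerations in m/s^2, result in meters (clamped at zero).
    """
    v_rear_worst = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_rear_worst ** 2 / (2.0 * b_min_brake)
         - v_front ** 2 / (2.0 * b_max_brake))
    return max(d, 0.0)
```

Because the following vehicle is assumed to brake only moderately while the lead vehicle brakes maximally, the resulting gap is conservative, as noted above.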
Since the driver reaction time is an important parameter in this type of model and varies among drivers and situations, a classification is made based on the characteristics of reaction time: fixed reaction time [127] [128], variable buffer-based reaction time [129] [100] and random sampling reaction time [130]. Fixed reaction time means that the reaction time is a fixed value. Variable buffer-based reaction time enables the selection of different independent variables (e.g., speed, distance, or indicator state); for each selected variable, a reaction time is drawn based on the underlying distribution. The random sampling of brake reaction time attempts to model distraction by using sufficient statistical samples to characterize stochastic distributions of reaction time, which can be fed into the microscopic-level CF model for crash prediction [130]. Despite its simplicity, the reaction time model plays an important role in critical scenarios, describing the driver's extreme operating behavior in emergency situations. It is an important reference model in AV safety evaluation. Fuzzy theory models: Fuzzy logic inference systems are known for their great ability to simulate human reasoning processes as well as the possibility of considering various driving styles and driving environments. These benefits motivate the development of emergency braking systems using fuzzy logic theory. Three steps are included in the fuzzy reasoning process. Fuzzification converts the input values to fuzzy values based on predefined rules. Then, the inference engine mimics human reasoning by performing fuzzy inference on the inputs. The output fuzzy variables are finally converted to executable values by defuzzification. The states of the following vehicle and the preceding vehicle are commonly used as input values, and the output values are brake angle or brake pressure [52] [131] [132]. Fig. 5. An illustration of the braking process for collision avoidance.
The situation risk is estimated by a risk perception metric. When an emergency braking maneuver is necessary, deceleration onsets after a certain reaction time. The maximum deceleration is either constant or adjusted according to the situation risk level. Recently, the Fuzzy Safety Model (FSM) [11] for rear-end collisions was proposed. Depending on the scenario type, longitudinal and, if applicable, lateral distances are checked against the safe distance to judge whether braking should be initiated. Unlike previous studies, two fuzzy surrogate safety metrics are explicitly employed to evaluate the situation risk. Subsequently, a deceleration corresponding to the risk level is applied according to Equation (3). D = CFS (D_max − D_comf) + D_comf if CFS > 0; D = PFS · D_comf if CFS = 0 (3) where the Proactive Fuzzy surrogate Safety metric (PFS) and the Critical Fuzzy surrogate Safety metric (CFS) are the two metrics used to evaluate situation risk, and D_max and D_comf represent the maximum and comfortable deceleration of the following vehicle. From the above analysis, we identify the following commonalities among the discussed collision avoidance braking models. First, a risk perception metric is essential in order to trigger the braking model; either visual perception or criticality metrics are applied to estimate the situation risk. Second, most works consider the reaction parameter either explicitly (reaction time models) or implicitly (fuzzy theory models). Third, the deceleration in some models can be adapted according to the risk level, while in others it follows a fixed profile once the model is triggered. Accordingly, we use Fig. 5 to summarize the braking models. Given the particular application to AV safety verification, interpretability is an important factor for these models in order to provide an understandable reference for such safety-critical systems.
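The risk-dependent deceleration selection of Equation (3) can be sketched directly; the numeric defaults below are illustrative, not values from [11]:

```python
def fsm_deceleration(cfs: float, pfs: float,
                     d_max: float = 9.0, d_comf: float = 3.0) -> float:
    """Risk-dependent deceleration choice of the FSM, Equation (3).

    cfs, pfs in [0, 1] are the critical and proactive fuzzy surrogate
    safety metrics; d_max and d_comf are the maximum and comfortable
    decelerations (m/s^2, illustrative defaults).
    """
    if cfs > 0.0:
        # Critical risk: interpolate between comfortable and maximum braking.
        return cfs * (d_max - d_comf) + d_comf
    # No critical risk: only proactive, comfort-scaled braking.
    return pfs * d_comf
```

The structure makes the third commonality above explicit: the applied deceleration scales continuously with the risk level rather than following a fixed profile.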
Additionally, near-crash and crash data are limited, which further hinders the application of data-driven models in this case. B. Evasive steering models Similar to the braking models, two research questions should be answered regarding the evasive steering models: • RQ S1: What conditions cause an evasive steering model to be activated? Fig. 6. The illustration of three evasive steering models using the preview method. (a) models using a single preview point; (b) models optimizing over a preview horizon; (c) models using multiple preview points. • RQ S2: What models are appropriate for describing evasive steering maneuvers? For the triggering strategy RQ S1: Depending on the risk level, an evasive steering maneuver should be conducted or not. Despite its simplicity, TTC is the most common metric to activate such maneuvers. For instance, a TTC threshold of 2.5 s is utilized in [57], where the authors demonstrated that the proposed approach can perform human-like pedestrian collision avoidance maneuvers by steering at a maximum driving speed of 30 km/h. Zhao et al. [133] use a TTC threshold of 0.75 s for an emergency evasion maneuver. Similarly, a two-dimensional TTC with a threshold of 5 s is defined for a data-driven evasive steering model [58]. In [122], an evasive maneuver is activated when the time-to-steer (TTS) reaches 0. In [134], a "threat metric" related to acceleration or jerk level is utilized to trigger an evasion maneuver; it is defined by a cubic polynomial with zero derivatives at the knots, where the number of knots and their positions depend on the number of objects along the path and their positions. However, the trigger threshold is not presented. Some studies define the trigger moment implicitly. Isermann et al. [135] argue that the timing to evade is determined by the distance at which the evasion must be executed so that a collision can still be prevented. They utilize a sigmoid function to describe the evasive trajectory. Park et al.
[136] assume that braking is first applied, and the steering maneuver must wait until collision avoidance by braking is no longer possible. Additionally, the expected lateral acceleration should be greater than a threshold below which the collision can be avoided by the driver's steering. For the evasive steering model RQ S2: Typically, lateral position and heading angle errors are used to model an evasive steering maneuver. We divide the existing evasive steering models into five categories based on the various ways of calculating these errors: models using a single preview point, models using multiple preview points, models optimizing over a preview horizon, fuzzy-control models, and data-driven models. The first three of these are illustrated in Fig. 6. A single preview point: A preview point is the simplest way to achieve the desired steering angle for an evasive maneuver. Based on the lateral position or heading angle at a future point, the steering angle δ(t) is obtained [53]: δ(t) = K ε(t − T_R) (4) where ε(t) is the angle between the heading of the vehicle and the preview point, T_R is the driver reaction time, and K is a gain constant. As an improvement, Zhao et al. [133] take both the lateral distance and the heading angle at the preview point into account. Nevertheless, if the preview point is too close, the control performance is poor; conversely, relying solely on the preview information at the time of its acquisition can lead to inappropriate actions. Optimization over a preview horizon: Instead of determining the steering angle based on a single future point, this class of model optimizes the steering angle over a horizon (a sequence of multiple points). The model proposed by MacAdam [55] [56] minimizes the predicted lateral deviation from a desired path to determine the steering wheel angle. In this category, model predictive control (MPC) is usually utilized.
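The single-preview-point law of Equation (4) can be sketched as a delayed proportional controller. Realizing the reaction delay T_R as a FIFO buffer of past errors is our modeling choice, and the gain and timing values are illustrative:

```python
from collections import deque


class SinglePreviewSteering:
    """Equation (4): delta(t) = K * eps(t - T_R).

    eps is the angle between the vehicle heading and the preview point.
    The reaction delay T_R is realized as a FIFO buffer of past errors
    (dt is the simulation step; all values are illustrative).
    """

    def __init__(self, gain_k: float = 0.8, t_r: float = 0.5, dt: float = 0.1):
        self.gain_k = gain_k
        n_delay = max(1, round(t_r / dt))
        # Pre-fill with zeros: before T_R has elapsed, no error has been seen.
        self.buffer = deque([0.0] * n_delay, maxlen=n_delay)

    def step(self, eps: float) -> float:
        delayed_eps = self.buffer[0]  # error observed T_R seconds ago
        self.buffer.append(eps)
        return self.gain_k * delayed_eps
```

Calling `step` once per simulation tick yields the steering angle, which stays at zero until the reaction delay has passed and then follows the delayed error scaled by K.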
Multiple preview points: Models optimizing over a preview horizon show good stabilization capabilities in relation to rapid maneuvers. However, these models can be computationally intensive and may not converge if the constraints are not well-defined. To approximate this type of model while achieving similar performance, models using multiple preview points are an alternative. In this case, the driver model [54] utilizes a weighted sum of current and previewed path deviations e_i and the heading error e_ψ at the current position, as expressed by Equation (5): δ = K_ψ e_ψ + K_1 e_1 + K_p Σ_{i=2}^{n} K_i e_i (5) where K_ψ, K_1 and K_i are model constant parameters and n is the number of preview points. Fuzzy-control: In [57], a fuzzy controller is applied to perform evasive steering maneuvers. The fuzzy reasoning process incorporates three stages. The lateral displacement and vehicle speed are converted to fuzzy values in the fuzzification stage. A human-like reasoning process is conducted in the inference engine to yield the values of the output fuzzy variables. In the defuzzification stage, the fuzzy output values are converted to crisp values. Data-driven: A deep deterministic policy gradient (DDPG) algorithm [58], which can learn a sequential decision-making process over continuous action spaces, was used to model evasive behaviors. Another data-driven model, proposed by Das and Mishra [59], attempts to avoid collisions using left- and right-turn maneuvers learned from a dataset. Fig. 7. The process of an evasive steering maneuver performed by a driver. If a driver perceives a situation risk that necessitates a response, the steering maneuver is executed by minimizing errors to a given evasion path after the reaction time. Fig. 7 illustrates the process of the steering maneuver in a critical situation. The situation risk is estimated.
When the steering maneuver is possible, e.g., when there is enough free space on the steering side, the steering angle is typically determined and executed by minimizing errors to a reference after a certain reaction time. C. Braking and steering models In highly critical situations, or when a collision is unavoidable, simultaneous braking and steering maneuvers are appropriate to mitigate the collision severity as much as possible. The driver model proposed by Jurecki and Stańczyk [60] synthesizes these two maneuvers analytically. In this driver model, the braking model is described as: D + W_1 Ḋ = W_2 ∆y(t − T_R) + W_3 v_rel/d_rel (6) where D is the deceleration, ∆y represents the lateral relative distance, T_R is the driver's reaction time, and W_1, W_2 and W_3 are model constant parameters; v_rel and d_rel are the longitudinal relative velocity and distance, respectively. The corresponding steering model is expressed by: δ + W_4 δ̇ = W_5 ∆y(t − T_R) (7) where W_4 and W_5 are additional model parameters. These model parameters were identified based on 450 trials with 30 drivers. Schorn and Isermann [137] employ a sigmoidal function to generate the desired trajectory, which is followed by a feedforward controller to execute braking and steering. Similarly, many studies [61] [62] [138] [139] nowadays use the model predictive control (MPC) technique to control the steering angle and the brakes along a given path. As a result, optimization-based methods tend to be used to handle simultaneous braking and steering. V. APPLICABILITY In this section, we first define evaluation metrics for driver models in the two applications discussed in the paper. Next, we summarize and categorize the aforementioned driver models based on their model characteristics. We then analyze the potential applications and suitability of different driver model categories for the two applications based on the proposed evaluation metrics, to finally answer RQ3. A.
Evaluation metrics A driver model for simulation is primarily used to evaluate AV safety in mixed-traffic environments with human drivers. Therefore, the driver model shall replicate human driver behavior as closely as possible, including the variability of the behavior itself. To fulfill this purpose, a driver model shall meet four evaluation metrics: variability, adaptability, simplicity, and accuracy. These evaluation metrics are presented in Table II, along with their descriptions and examples of typical models that best meet each metric. Additionally, a possible research focus is provided for which each evaluation metric is particularly relevant. A driver model as a reference is used as a benchmark to evaluate AV safety performance, and it shall represent the capabilities of careful and competent drivers. Its underlying logic is that an AV shall demonstrate superior performance compared to a careful and competent human driver in critical scenarios, i.e., it shall successfully avoid collisions in at least the same critical scenarios in which a careful and competent human driver can. Therefore, a benchmark driver model for AV safety evaluation should satisfy four evaluation metrics: representativeness, maneuver-coverage, interpretability, and accuracy. These evaluation metrics are also listed in Table II together with their descriptions, typical driver models and possible research focus. B. Summarization of human driver models Regardless of whether they are used for simulation or as a reference, human driver models are commonly developed based on diverse mathematical models.
TABLE II. Evaluation metrics for driver models, with descriptions, typical models, and possible research focus.

Simulation — Variability: The driver model shall reflect the variability in human driver behavior, which arises from the complex processes of perception, cognition, and decision-making. Even for an identical driver in the same situations, various behaviors including errors may be exhibited. Although the likelihood of errors is low, they shall be considered since they can lead to hazardous scenarios for AVs. Therefore, the driver model shall demonstrate variability in simulations and avoid oversimplification or neglect of any behaviors that may be critical for AV safety assurance. Typical models: the model with stochastic brake reaction time [130], the stochastic cognitive model (SCM) [21]. Research focus: testing an AV's ability to handle the behavioral diversity of surrounding human drivers within the same scenario.

Simulation — Adaptability: The driver model shall demonstrate its adaptability to different situations. Human driver behavior can vary across different situations. Thus, the driver model needs to be able to adapt to these varying situations and reflect the corresponding changes in driver behavior. This adaptability is essential for accurately modeling human driving behavior and evaluating AV safety performance in simulations. Typical models: the model with driver characteristics included [78], the model proposed by Zhang et al. [83]. Research focus: mileage-based simulation.

Simulation — Simplicity: The driver model shall be designed to have relatively low computational complexity. In order to capture the diverse human driver behavior and different driving situations, simulations must be run on a large number of scenarios or driving miles, often necessitating the use of methods such as the Monte Carlo method. Moreover, concrete scenarios frequently involve multiple vehicles controlled by human drivers. Therefore, reducing the complexity of the driver model can help to minimize computational costs. Typical models: GHR model [46], IDM model [36]. Research focus: Monte-Carlo simulation, mileage-based simulation, coverage-oriented simulation.

Simulation — Accuracy: Accuracy refers to the degree to which a driver model is able to reproduce the actual driving behavior of human drivers. In other words, it measures how closely a model's output matches the real-world data. A highly accurate driver model will produce results that are very close to the actual human driving behavior in different scenarios. This is important for valid and credible AV safety assessment. Typical models: DBN [46], the model based on a gated RNN network [86]. Research focus: microscopic simulation.

Reference — Representativeness: The driver model shall be representative of the abilities of a careful and competent human driver in critical situations. Representativeness does not mean collision-free, as the model should not be expected to avoid all collisions. Instead, it should match the abilities of a careful and competent human driver in these situations. Typical models: the Japanese driver model [9], the FSM [123]. Research focus: addressing the question of "how good is good enough" in AV evaluations.

Reference — Maneuver-coverage: When faced with danger, humans often weigh several possible collision avoidance maneuvers, including braking, steering, and a combination of both. If a model is limited to only braking maneuvers, it demonstrates only the ability of human drivers under limited conditions, because braking is not always the optimal collision avoidance strategy. Therefore, the driver models used as references shall include different possible collision avoidance maneuvers.

Reference — Interpretability: Using a driver model as a benchmark for evaluating AV safety performance means that it serves as a criterion in the evaluation. As a criterion, its definition should be clear and interpretable to ensure the reliability of the evaluation and gain the public's trust. Typical models: models using the preview method [53], the RSS model [51]. Research focus: defining a benchmark for AV safety assessment.

Reference — Accuracy: Accuracy refers to the ability of the human driver model to accurately represent the peak performance of human drivers in critical scenarios. The accuracy of the human driver model is critical in ensuring that an AV is evaluated against a realistic and representative benchmark. Typical models: the FSM [11], the DDPG model [58]. Research focus: guidance on the design of vehicle dynamics under extreme conditions.

Based on the different abstract mathematical expressions, human driver models can mainly be divided into the following categories: Linear model: This type of model simulates human driver behavior by establishing a linear relationship between the factors considered and the maneuvers taken by human drivers, such as the GHR [46] and IDM [36] car-following models. Non-linear model: The perceptual, judgment, and decision-making processes of human drivers are complex and nonlinear. Many researchers choose to increase the complexity of the model and better fit human driver behavior by adding nonlinear terms to the model. Compared to pure data-driven models, the non-linear models in this category have relatively lower complexity or dimensionality. The methods used for nonlinearization mainly include the following: 1) Nonlinearity through activation function: Nonlinearity is achieved by adding a nonlinear activation function, such as the perceptual threshold introduced in [71] (similar to the ReLU function [140]), to the model. 2) Nonlinearity through integral terms: Nonlinearization is achieved by adding integral terms to the model, such as the memory function introduced in [69]. 3) Nonlinearity through adaptive coefficients: Nonlinearization of the model is achieved by adaptively adjusting coefficients based on the situation, such as in the model proposed by Liao et al. [78]. 4) Nonlinearity through fuzzy terms: Nonlinearization of the model is achieved by introducing fuzzy items, such as the fuzzy logic in [141]. 5) Nonlinearity through stochasticization: Nonlinearization of the model is achieved by introducing random terms, such as the reaction time in [130].
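Category 1) above, nonlinearity through an activation function, can be illustrated with a minimal sketch: a linear stimulus-response term gated by a ReLU-like perceptual threshold. The linear law and the threshold value are illustrative, not the exact model of [71]:

```python
def perceptual_threshold(stimulus: float, threshold: float) -> float:
    """ReLU-like activation: below the perceptual threshold the driver
    does not react; above it, the response grows with the stimulus."""
    return max(stimulus - threshold, 0.0)


def linear_cf_acceleration(delta_v: float, sensitivity: float = 0.5) -> float:
    """A simplified linear stimulus-response term (GHR-flavoured):
    acceleration proportional to the relative speed delta_v."""
    return sensitivity * delta_v


def nonlinear_cf_acceleration(delta_v: float, threshold: float = 0.5,
                              sensitivity: float = 0.5) -> float:
    """The same linear law, but gated by the perceptual threshold, so
    small relative speeds produce no reaction at all."""
    sign = 1.0 if delta_v >= 0 else -1.0
    return sign * sensitivity * perceptual_threshold(abs(delta_v), threshold)
```

The gated version is identical to the linear one far from equilibrium but has a dead zone around zero relative speed, which is the behavioral effect a perceptual threshold introduces.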
Optimization-based model: Unlike with non-linear models, it is in some cases impossible to express the relationship between influencing factors and control variables mathematically. However, such models can be characterized by defining objective functions and constraints. According to a given objective function, this type of model optimizes driving behavior to achieve the optimal result. This category includes the evasive steering model proposed by MacAdam [55] [56], as well as the widely used MPC technique. Data-driven model: Data-driven models capture the complexity of human driver behavior by adopting high-dimensional nonlinear models, such as the recurrent neural network in [85] and the DBN in [46], and training them with real human driver data. Although the models differ in their degree of reliance on data, data is indispensable. C. Applicability analysis This subsection evaluates the summarized driver models in terms of their applications for AV safety assessment and verification. Based on the proposed metrics, a comparison of linear, non-linear, optimization-based, and data-driven models is conducted. As a result, the advantages and disadvantages of each type of driver model are presented, which can guide developers in selecting proper driver models for their different purposes. In the context of simulations, Table III provides a comparison of different categories of driver models in meeting the proposed evaluation metrics. Nonlinear and data-driven models demonstrate superior capability compared to linear and optimization-based models, each meeting three out of the four evaluation metrics. The key distinguishing factor between nonlinear and data-driven models is the trade-off between complexity and accuracy. Nonlinear models exhibit relatively lower complexity and are thus more appropriate for scenarios that demand extensive simulation calculations, such as those involving high mileage and multi-scenario coverage.
In contrast, data-driven models, particularly those utilizing deep learning techniques, consider more feature dimensions and exhibit higher-order nonlinearity, which leads to higher accuracy but also entails increased complexity and computational costs. The "+" symbol indicates that the model is able to meet the evaluation metric; conversely, "-" means the metric is a challenge for the model. In the application where driver models serve as a reference, the comparative results are summarized in Table IV. The non-linear model is capable of meeting the evaluation metrics related to emergency braking. On the other hand, the optimization-based model can simultaneously fulfill the evaluation metrics for simulating both braking and steering avoidance behaviors, while demonstrating better interpretability. Conversely, the data-driven model, although able to meet the representativeness, accuracy, and even maneuver-coverage evaluation metrics, is disadvantaged by its lack of interpretability and transparency. As a benchmark for evaluating the performance of an AV, interpretability and transparency are crucial for regulation [142] and insurance [143]. The clear definition of the cost function, weights, and constraints of the optimization-based model is more easily understood by humans and can be explained to the public. Therefore, when constructing a driver model for reference, the optimization-based model can be a good choice. The "+" symbol indicates that the model is able to meet the evaluation metric; conversely, "-" means the metric is a challenge for the model. "B" stands for braking and "S" stands for steering. The combination of different driver models is not considered in each category. The data-driven category is supposed to be capable of modeling diverse collision avoidance maneuvers given enough relevant data. D.
Careful and competent driver models Based on the analysis, we propose an architecture to describe a careful and competent driver model, as illustrated in Fig. 8. For risk perception, the commonly used metrics with a fixed threshold are unable to describe driving behavior reasonably in emergency situations. In contrast, the fuzzy surrogate safety metrics provide promising results. A corresponding reaction is performed by continuously observing the metrics while driving to avoid sudden or overly late operations. The reaction process of a competent driver includes three stages. The braking maneuver is the first choice; even though the evasive maneuver is theoretically better than the braking maneuver at high velocities [144], a study [145] shows that drivers are more likely to brake in critical situations. If a collision cannot be avoided by the braking maneuver, the evasive maneuver is applied because it requires a shorter distance. If a collision is inevitable, simultaneous braking and steering are performed to mitigate the collision severity. The driver models corresponding to these three stages can be chosen from the aforementioned driver models in light of the evaluation metrics. Fig. 8. The architecture of a careful and competent driver model for handling critical scenarios. In stage 1, braking is applied based on the estimated risk; in stage 2, the steering maneuver is executed if free space on the side exists; in stage 3, simultaneous braking and steering are initiated to mitigate a collision. For each stage, suitable driver models can be selected based on a user's demand using the proposed evaluation metrics. VI. DISCUSSION In this paper, we investigated the role of driver models in the safety assessment of AVs.
By addressing three research questions, we provided guidance for safety engineers to select appropriate driver models, either for AV safety evaluation in simulations or for determining the AV safety performance level compared to careful and competent drivers. Meanwhile, we presented an architecture of a reference driver model for tackling critical scenarios. This architecture portrays the extreme capabilities of a careful and competent human driver and thus can be considered a reference when it comes to AV release. To the authors' best knowledge, this is the first work that summarizes driver models in terms of their applications for AV safety assessment. For RQ1, we proposed evaluation metrics for driver models when applying them for AV safety assessment. These metrics pose requirements on newly developed driver models according to their applications. Additionally, we found that predictive and cognitive models are required in simulation-based testing. More specifically, cognitive models are suitable for large-scale environments where modeling human driving errors is necessary. In contrast, predictive models are the choice when testing AVs in single concrete scenarios. When taking a driver model as a reference for AV safety verification, drivers' peak capability in critical scenarios should be modeled. For RQ2, we presented an overview of driver models based on the defined scope of the paper. Various car-following and lane-changing models were described under the category of predictive models. It is observed that applying artificial intelligence (AI) to model specific driving behavior is a growing trend. Due to the inscrutability of AI, hybrid models that combine AI and analytic models to increase explainability while maintaining high fidelity are being investigated [146]. Additionally, cognitive models that aim to model driving behavior both in normal traffic and critical scenarios show promising results, except for their high model complexity.
In critical situations, we considered possible maneuvers including collision avoidance by braking, collision avoidance by steering, and collision avoidance by simultaneous braking and steering in order to build a competent driver model. The models that correspond to those possible maneuvers are outlined and can serve as reference models for AV safety verification. In addition, we have consistently taken the timing for triggering avoidance maneuvers into account, which is beneficial when designing AV functions. Lastly, we provided an architecture to illustrate drivers' extreme driving capability when faced with critical situations, which can be used to build a driver model that poses a higher performance boundary for AV safety verification. For RQ3, we first proposed several metrics to assess the driver models discussed in the paper. Based on these metrics, we are able to analyze the strengths and weaknesses of each driver model, which facilitates the determination of its applicability. For driver models for simulation, non-linear and data-driven models are the preferred choices. When choosing between the two, a trade-off needs to be made between complexity and accuracy. If greater accuracy is desired and the additional computational cost of higher complexity is acceptable, a data-driven model may be chosen. Conversely, a non-linear model may be chosen if a relatively lower accuracy can be tolerated in exchange for the lower computational cost. As computer hardware performance continues to advance rapidly, the high complexity of data-driven models may no longer be a concern in the future, making them the optimal choice. Regarding driver models as a reference, optimization-based models are the optimal choice since they exhibit powerful performance in both steering and braking for collision avoidance and have good interpretability. If the interpretability issue of data-driven models can be addressed in the future, they could also become a good choice.
Compared to the work in [34], we analyzed in depth various driver models as references for AV safety verification and discussed their applicability, whereas they focus on human error modeling in traffic simulations. Additionally, their work lacks a discussion of a suitable architecture for a reference driver model. Other surveys either review only one category of driver models, such as car-following models [147], or do not consider driver models for AV safety assessment [24] [26]. Based on the scope defined in Section II, we limit our review to the driver models that are useful either for the safety assessment of AVs in a simulation environment or for AV approval by showing their reference roles. Thus, other topics like driver behavior analysis are not considered. In addition, there may be some selection bias in the choice of relevant papers. However, the commonly used driver models were identified carefully, so a significant influence on our summarized results is avoided. The proposed metrics to evaluate different types of driver models are summarized considering our two applications for AV safety assessment. Therefore, they are limited to our purpose, and a comprehensive evaluation of driver models, including their applications in other domains, should be studied further. Generally, our paper provides a fundamental consideration of currently existing driver models for AV safety assessment. On the one hand, it can aid policymakers such as the UNECE committee in choosing appropriate driver models when drafting regulations for AV approval. On the other hand, safety or simulation engineers can utilize suitable driver models depending on their demands to conduct simulations that provide evidence supporting a holistic safety argumentation for their developed AVs. In summary, the paper conducts a survey of driver models in order to answer three research questions regarding their applications for AV safety assessment.
By summarizing the identified relevant papers, we are able to present guidance on which types of driver models are suitable for which tasks. Compared with other related works, our work presents a holistic overview of driver models with a special focus on their applications in AV safety assessment. Though some limitations exist, the survey is generally useful for policymakers and developers.

VII. CONCLUSION AND FUTURE WORK

The paper presents a discussion of driver models in terms of their application in AV safety assessment. Based on the results, we have the following findings: 1) cognitive models reflect a trend in driver model development toward applicability in both normal traffic and critical situations; 2) collision avoidance by steering, and collision avoidance by simultaneous steering and braking, are less studied than collision avoidance by braking, and the proposed architecture combining three stages of collision avoidance maneuvers is promising for establishing a careful and competent reference model; 3) the interpretability of driver models is important for safety-critical systems such as AVs. Meanwhile, these findings indicate future working directions. More open-source driving-simulator datasets and crash datasets are desirable, in comparison to the number of open-source naturalistic driving datasets such as HighD [148] and AD4CHE [149]. Such data are valuable for studying driver steering and combined steering-and-braking behavior in critical situations. In this way, a holistic driver model incorporating different maneuvers at different stages of a collision could be created as a reference, instead of a single emergency braking model. Explainable AI must be prioritized in order to achieve both high interpretability and performance. In all, the paper provides a solid foundation for future driver models for AV safety assessment.

Fig. 2. The review scope of the paper and the classification of driver models.
Car-following, lane-changing, and cognitive models are discussed in terms of their applications in testing AVs in simulations. For driver models as references, braking, steering, and a combination of both for collision avoidance are elaborated.

Fig. 3. The classification of analytic models. Mechanical Models: a mechanical model is constructed using mathematical and physical equations with a theoretical basis abstracted from traffic phenomena or driving processes. Psycho-Physical Models: a psycho-physical model is constructed based on a driver's perception and response. Cellular Automaton Models: a CA-based model, characterized by discrete time, space, and state variables with localized spatial interaction and temporal causality, has the capability to simulate the spatial-temporal evolution process of complex systems. Adaptive Driver Models: these models consider driver characteristics in order to increase model generalization.

Fig. 4. The process of a lane-changing maneuver and its three-level hierarchical division. The desire to reach a goal (strategic level) or to increase efficiency (tactical level) motivates lane changing in decision-making. Gap selection is responsible for finding suitable timing. When the timing is right, the lane change is performed (operational level).
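As a minimal sketch of the mechanical model family in Fig. 3, the following implements the Intelligent Driver Model (IDM) of Treiber et al. (cited in the reference list), a closed-form car-following acceleration law; the parameter values are typical illustrative defaults, not calibrated ones.

```python
import math

def idm_acceleration(v, gap, dv,
                     v0=33.3,   # desired speed (m/s), assumed
                     T=1.5,     # desired time headway (s), assumed
                     a=1.0,     # maximum acceleration (m/s^2), assumed
                     b=2.0,     # comfortable deceleration (m/s^2), assumed
                     s0=2.0,    # minimum standstill gap (m), assumed
                     delta=4):  # acceleration exponent
    """IDM: dv/dt = a * [1 - (v/v0)^delta - (s*/gap)^2], where the
    desired gap is s* = s0 + max(0, v*T + v*dv / (2*sqrt(a*b))) and
    dv is the approach rate (own speed minus leader speed)."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# Free road (huge gap): mild acceleration toward the desired speed.
print(idm_acceleration(v=20.0, gap=1e6, dv=0.0))
# Closing fast on a slow leader at 30 m: strong (negative) braking.
print(idm_acceleration(v=20.0, gap=30.0, dv=20.0))
```

A single equation like this is what makes mechanical models cheap to simulate and easy to interpret, at the cost of the variability that data-driven models capture.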
TABLE I
OVERVIEW OF TRAFFIC SIMULATION SOFTWARE WITH CORRESPONDING INTEGRATED DRIVER MODELS

Software      Car-following models    Lane-changing models
SUMO [101]    Kraus [104]             Krajzewicz [105]

TABLE II
THE METRICS FOR EVALUATING DRIVER MODELS IN TERMS OF THEIR APPLICATIONS IN AV SAFETY ASSESSMENT

Application    Evaluation metrics    Descriptions    Typical models    Possible research
Simulation     Variability

TABLE III
EVALUATION AND COMPARISON OF DRIVER MODELS WITH DIFFERENT MODEL CHARACTERISTICS IN TERMS OF THEIR SUITABILITY FOR SIMULATION PURPOSES

Evaluation metrics    Linear    Non-linear    Optimization    Data-driven
Variability           -         +             -               +
Adaptability          -         +             +               +
Simplicity            +         +             -               -
Accuracy              -         -             -               +

TABLE IV
EVALUATION AND COMPARISON OF DRIVER MODELS WITH DIFFERENT MODEL CHARACTERISTICS IN TERMS OF THEIR SUITABILITY FOR REFERENCE PURPOSES

Evaluation metrics        Linear    Non-linear    Optimization    Data-driven
Representativeness (B)    +         +             +               +
Representativeness (S)    -         -             +               +
Maneuver-coverage         -         -             +               +
Interpretability          +         +             +               -
Accuracy (B)              -         +             +               +
Accuracy (S)              -         -             +               +

REFERENCES

SAE J3016, "Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles," Apr. 2021. [Online]. Available: https://doi.org/10.4271/J3016_202104
H. Kim, M. Song, and Z. Doerzaph, "Is driving automation used as intended? Real-world use of partially automated driving systems and their safety consequences," Transportation Research Record, vol. 2676, no. 1, pp. 30-37, 2022.
P. Liu, R. Yang, and Z. Xu, "How safe is safe enough for self-driving vehicles?" Risk Analysis, vol. 39, no. 2, pp. 315-325, 2019.
ISO, "ISO 26262-1:2018(en): Road vehicles - Functional safety," 2018. [Online]. Available: https://www.iso.org/standard/43464.html
ISO, "ISO 21448:2022: Road vehicles - Safety of the intended functionality," 2022. [Online]. Available: https://www.iso.org/standard/77490.html
F. Oboril, C. Buerkle, A. Sussmann, S. Bitton, and S. Fabris, "MTBF Model for AVs - From Perception Errors to Vehicle-Level Failures," arXiv preprint arXiv:2205.02621, 2022.
N. Kauffmann, F. Fahrenkrog, L. Drees, and F. Raisch, "Positive risk balance: a comprehensive framework to ensure vehicle safety," Ethics and Information Technology, vol. 24, no. 1, pp. 1-16, 2022.
ISO/TR 4804, "Road vehicles - Safety and security for automated driving systems - Design, verification and validation methods," 2020.
UN ECE, "UN Regulation No. 157 - Uniform provisions concerning the approval of vehicles with regards to Automated Lane Keeping Systems," Mar. 2021. [Online]. Available: http://op.europa.eu/en/publication-detail/-/publication/36fd3041-807a-11eb-9ac9-01aa75ed71a1
Experts of Japan, "Competent and Careful human driver performance model," 2020. [Online]. Available: https://wiki.unece.org/download/attachments/113344748/FRAV-07-10.pdf?api=v2
K. Mattas, G. Albano, R. Donà, M. C. Galassi, R. Suarez-Bertoa, S. Vass, and B. Ciuffo, "Driver models for the definition of safety requirements of automated vehicles in international regulations. Application to motorway driving conditions," Accident Analysis & Prevention, vol. 174, p. 106743, Sep. 2022.
H. Nakamura, H. Muslim, R. Kato, S. Préfontaine-Watanabe, H. Nakamura, H. Kaneko, H. Imanaga, J. Antona-Makoshi, S. Kitajima, N. Uchida, and others, "Defining reasonably foreseeable parameter ranges using real-world traffic data for scenario-based safety assessment of automated vehicles," IEEE Access, vol. 10, pp. 37743-37760, 2022.
Y. Wang, R. Yu, S. Qiu, J. Sun, and H. Farah, "Safety Performance Boundary Identification of Highly Automated Vehicles: A Surrogate Model-Based Gradient Descent Searching Approach," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 12, pp. 23809-23820, 2022.
T. Menzel, G. Bagschik, and M. Maurer, "Scenarios for development, test and validation of automated vehicles," in 2018 IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 1821-1827.
N. Weber, D. Frerichs, and U. Eberle, "A simulation-based, statistical approach for the derivation of concrete scenarios for the release of highly automated driving functions," in AmE 2020 - Automotive meets Electronics; 11th GMM-Symposium. VDE, 2020, pp. 1-6.
J. P. Espineira, J. Robinson, J. Groenewald, P. H. Chan, and V. Donzella, "Realistic LiDAR with noise model for real-time testing of automated vehicles in a virtual environment," IEEE Sensors Journal, vol. 21, no. 8, pp. 9919-9926, 2021.
W. Wachenfeld and H. Winner, "The release of autonomous vehicles," in Autonomous Driving. Springer, 2016, pp. 425-449.
VVM, "Verification and Validation Methods," 2022. [Online]. Available: https://www.vvm-projekt.de/
SET Level, "The SET Level project," 2022. [Online]. Available: https://setlevel.de/en/project
P.E.A.R.S. Consortium, "Prospective effectiveness assessment for road safety: Overview," 2021. [Online].
A. Fries, F. Fahrenkrog, K. Donauer, M. Mai, and F. Raisch, "Driver Behavior Model for the Safety Assessment of Automated Driving," in 2022 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2022, pp. 1669-1674.
A. Ferrara, S. Sacone, and S. Siri, "Microscopic and mesoscopic traffic models," in Freeway Traffic Modelling and Control. Springer, 2018, pp. 113-143.
S. P. Hoogendoorn and P. H. Bovy, "State-of-the-art of vehicular traffic flow modelling," Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, vol. 215, no. 4, pp. 283-303, 2001.
H. Singh and A. Kathuria, "Analyzing driver behavior under naturalistic driving conditions: A review," Accident Analysis & Prevention, vol. 150, p. 105908, Feb. 2021.
R. P. Gouribhatla and S. S. Pulugurtha, "Vehicles, Advanced Features, Driver Behavior, and Safety: A Systematic Review of the Literature," Journal of Transportation Technologies, vol. 12, no. 03, pp. 420-438, 2022.
J. Park and M. Zahabi, "A Review of Human Performance Models for Prediction of Driver Behavior and Interactions With In-Vehicle Technology," Human Factors: The Journal of the Human Factors and Ergonomics Society, p. 001872082211327, Oct. 2022.
B. N. Matcha, S. N. Namasivayam, M. Hosseini Fouladi, K. C. Ng, S. Sivanesan, and S. Y. Eh Noum, "Simulation Strategies for Mixed Traffic Conditions: A Review of Car-Following Models and Simulation Frameworks," Journal of Engineering, vol. 2020, pp. 1-22, Jan. 2020.
H. U. Ahmed, Y. Huang, and P. Lu, "A review of car-following models and modeling tools for human and autonomous-ready driving behaviors in micro-simulation," Smart Cities, vol. 4, no. 1, pp. 314-335, 2021.
M. Rahman, M. Chowdhury, Y. Xie, and Y. He, "Review of Microscopic Lane-Changing Models and Future Research Opportunities," IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 4, pp. 1942-1956, 2013.
S. Moridpour, M. Sarvi, and G. Rose, "Lane changing models: a critical review," Transportation Letters, vol. 2, no. 3, pp. 157-173, 2010.
ASAM, "OpenSCENARIO," 2021. [Online]. Available: https://www.asam.net/standards/detail/openscenario/
S. Kitajima, H. Chouchane, J. Antona-Makoshi, N. Uchida, and J. Tajima, "A Nationwide Impact Assessment of Automated Driving Systems on Traffic Safety Using Multiagent Traffic Simulations," IEEE Open Journal of Intelligent Transportation Systems, vol. 3, pp. 302-312, 2022.
H. Tattegrain-Veste, T. Bellet, A. Pauzié, and A. Chapon, "Computational driver model in transport engineering: COSMODRIVE," Transportation Research Record, vol. 1550, no. 1, pp. 1-7, 1996.
C. Siebke, M. Mai, and G. Prokop, "What do traffic simulations have to provide for virtual road safety assessment? Human error modeling in traffic simulations," IEEE Transactions on Intelligent Transportation Systems, 2022.
D. Shinar and I. Oppenheim, "Review of models of driver behaviour and development of a unified driver behaviour model for driving in safety critical situations," in Human Modelling in Assisted Transportation. Springer, 2011, pp. 215-223.
M. Treiber, A. Hennecke, and D. Helbing, "Congested traffic states in empirical observations and microscopic simulations," Physical Review E, vol. 62, no. 2, p. 1805, 2000.
P. Gipps, "A behavioural car-following model for computer simulation," Transportation Research Part B: Methodological, vol. 15, no. 2, pp. 105-111, Apr. 1981.
H. Jia, Z. Juan, and X. Wang, "Development of a Car-following Model Based on Artificial Neural Networks," Journal of Highway and Transportation Research and Development, vol. 18, no. 4, pp. 92-94, 2001.
X. Xu, J. Rong, and L. Wang, "Development of a Car-following Model Based on Combined Neural Network Model," Journal of Highway and Transportation Research and Development, vol. 24, no. 3, pp. 130-132, 2007.
P. G. Gipps, "A model for the structure of lane-changing decisions," Transportation Research Part B: Methodological, vol. 20, no. 5, pp. 403-414, 1986.
Q. Yang and H. N. Koutsopoulos, "A microscopic traffic simulator for evaluation of dynamic traffic management systems," Transportation Research Part C: Emerging Technologies, vol. 4, no. 3, pp. 113-129, 1996.
K. Ahmed, M. Ben-Akiva, H. Koutsopoulos, and R. Mishalani, "Models of freeway lane changing and gap acceptance behavior," Transportation and Traffic Theory, vol. 13, pp. 501-515, 1996.
T. Toledo, H. N. Koutsopoulos, and M. E. Ben-Akiva, "Modeling integrated lane-changing behavior," Transportation Research Record, vol. 1857, no. 1, pp. 30-38, 2003.
A. Kesting, M. Treiber, and D. Helbing, "General lane-changing model MOBIL for car-following models," Transportation Research Record, vol. 1999, no. 1, pp. 86-94, 2007.
W. J. Schakel, V. L. Knoop, and B. van Arem, "Integrated lane change model with relaxation and synchronization," Transportation Research Record, vol. 2316, no. 1, pp. 47-57, 2012.
D.-F. Xie, Z.-Z. Fang, B. Jia, and Z. He, "A data-driven lane-changing model based on deep learning," Transportation Research Part C: Emerging Technologies, vol. 106, pp. 41-60, 2019.
P. Wang, C.-Y. Chan, and A. de La Fortelle, "A Reinforcement Learning Based Approach for Automated Lane Change Maneuvers," in 2018 IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 1379-1384.
C. Siebke, M. Bäumler, M. Ringhand, I. M. Mai, F. Elrod, and I. G. Prokop, "Report on design of modules for the stochastic traffic simulation," Technische Universität Dresden, Tech. Rep., 2021.
W. H. Warren, "The dynamics of perception and action," Psychological Review, vol. 113, no. 2, pp. 358-389, 2006.
B. R. Fajen, "Calibration, information, and control strategies for braking to avoid a collision," Journal of Experimental Psychology: Human Perception and Performance, vol. 31, no. 3, p. 480, 2005.
S. Shalev-Shwartz, S. Shammah, and A. Shashua, "On a formal model of safe and scalable self-driving cars," arXiv preprint arXiv:1708.06374, 2017.
I. Rizianiza and D. Shoodiqin, "Automatic braking system using fuzzy logic method," in Journal of Physics: Conference Series, vol. 1833. IOP Publishing, 2021, p. 012005.
A. Reński, "Identification of driver model parameters," International Journal of Occupational Safety and Ergonomics, vol. 7, no. 1, pp. 79-92, 2001.
R. S. Sharp, D. Casanova, and P. Symonds, "A mathematical model for driver steering control, with design, tuning and performance results," Vehicle System Dynamics, vol. 33, no. 5, pp. 289-326, 2000.
C. C. MacAdam, "Application of an optimal preview control for simulation of closed-loop automobile driving," IEEE Transactions on Systems, Man, and Cybernetics, vol. 11, no. 6, pp. 393-399, 1981.
C. C. MacAdam, "Understanding and Modeling the Human Driver," Vehicle System Dynamics, vol. 40, no. 1-3, pp. 101-134, Jan. 2003.
D. F. Llorca, V. Milanés, I. P. Alonso, M. Gavilán, I. G. Daza, J. Pérez, and M. A. Sotelo, "Autonomous pedestrian collision avoidance using a fuzzy steering controller," IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 2, pp. 390-401, 2011.
H. Guo, K. Xie, and M. Keyvan-Ekbatani, "Modeling driver's evasive behavior during safety-critical lane changes: Two-dimensional time-to-collision and deep reinforcement learning," arXiv preprint arXiv:2209.15133, 2022.
S. Das and S. K. Mishra, "A Machine Learning approach for collision avoidance and path planning of mobile robot under dense and cluttered environments," Computers and Electrical Engineering, vol. 103, p. 108376, 2022.
R. Jurecki and T. Stańczyk, "Driver model for the analysis of pre-accident situations," Vehicle System Dynamics, vol. 47, no. 5, pp. 589-612, 2009.
P. Falcone, F. Borrelli, J. Asgari, H. E. Tseng, and D. Hrovat, "A model predictive control approach for combined braking and steering in autonomous vehicles," in 2007 Mediterranean Conference on Control & Automation. IEEE, 2007, pp. 1-6.
H. Li, T. Zheng, F. Xia, L. Gao, Q. Ye, and Z. Guo, "Emergency collision avoidance strategy for autonomous vehicles based on steering and differential braking," Scientific Reports, vol. 12, no. 1, p. 22647, 2022.
R. E. Chandler, R. Herman, and E. W. Montroll, "Traffic Dynamics: Studies in Car Following," Operations Research, vol. 6, no. 2, pp. 165-184, Apr. 1958.
W. Helly, "Simulation of bottlenecks in single-lane traffic flow," in Proceedings of the Symposium on Theory of Traffic Flow. New York: Elsevier, 1959, pp. 207-238.
G. F. Newell, "A simplified car-following theory: a lower order model," Transportation Research Part B: Methodological, vol. 36, no. 3, pp. 195-205, Mar. 2002.
M. Bando, K. Hasebe, A. Nakayama, A. Shibata, and Y. Sugiyama, "Dynamical model of traffic congestion and numerical simulation," Physical Review E, vol. 51, no. 2, pp. 1035-1042, Feb. 1995.
E. R. Boer, "Car following from the driver's perspective," Transportation Research Part F: Traffic Psychology and Behaviour, vol. 2, no. 4, pp. 201-206, Dec. 1999.
G. Lee, "A Generalization of Linear Car-Following Theory," Operations Research, vol. 14, no. 4, pp. 595-606, Aug. 1966.
K. I. Ahmed, "Modeling drivers' acceleration and lane changing behavior," PhD Thesis, Massachusetts Institute of Technology, 1999.
R. Wiedemann, "Simulation des Straßenverkehrsflusses," Institut für Verkehrswesen, University of Karlsruhe, Germany, 1974.
G. J. Andersen and C. W. Sauer, "Optical Information for Car Following: The Driving by Visual Angle (DVA) Model," Human Factors, vol. 49, no. 5, pp. 878-896, Oct. 2007.
K. Nagel and M. Schreckenberg, "A cellular automaton model for freeway traffic," Journal de Physique I, vol. 2, no. 12, pp. 2221-2229, Dec. 1992.
W. Knospe, L. Santen, A. Schadschneider, and M. Schreckenberg, "Towards a realistic microscopic description of highway traffic," Journal of Physics A: Mathematical and General, vol. 33, no. 48, p. L477, Dec. 2000.
B. S. Kerner, S. L. Klenov, and D. E. Wolf, "Cellular automata approach to three-phase traffic theory," Journal of Physics A: Mathematical and General, vol. 35, no. 47, p. 9971, Nov. 2002.
D. Helbing and M. Schreckenberg, "Cellular automata simulating experimental properties of traffic flow," Physical Review E, vol. 59, no. 3, pp. R2505-R2508, Mar. 1999.
X. Chen, J. Sun, Z. Ma, J. Sun, and Z. Zheng, "Investigating the long- and short-term driving characteristics and incorporating them into car-following models," Transportation Research Part C: Emerging Technologies, vol. 117, p. 102698, Aug. 2020.
US Department of Transportation, "Next Generation Simulation (NGSIM) Vehicle Trajectories and Supporting Data," Jan. 2016. [Online]. Available: https://catalog.data.gov/dataset/next-generation-simulation-ngsim-vehicle-trajectories-and-supporting-data
P. Liao, T.-Q. Tang, T. Wang, and J. Zhang, "A car-following model accounting for the driving habits," Physica A: Statistical Mechanics and its Applications, vol. 525, pp. 108-118, Jul. 2019.
L. Zhou, D. Wang, and W. Li, "Application of artificial neural network and particle swarm optimization in car-following model," Journal of Jilin University (Engineering and Technology Edition), no. 4, pp. 896-899, 2009.
S. Huang and W. Ren, "Use of neural fuzzy networks with mixed genetic/gradient algorithm in automated vehicle control," IEEE Transactions on Industrial Electronics, vol. 46, no. 6, pp. 1090-1102, Dec. 1999.
X. Ma, "A Neural-Fuzzy Framework for Modeling Car-following Behavior," in 2006 IEEE International Conference on Systems, Man and Cybernetics, vol. 2, Oct. 2006, pp. 1178-1183.
D. Li, X. Liu, J. Rong, and J. Hu, "Research on car-following modeling and simulation based on fuzzy neural network," Journal of Beijing University of Technology, vol. 33, no. 4, pp. 398-401, 2007.
J. Zhang, Y. Liao, S. Wang, and J. Han, "Study on Driving Decision-Making Mechanism of Autonomous Vehicle Based on an Optimized Support Vector Machine Regression," Applied Sciences, vol. 8, no. 1, p. 13, Jan. 2018.
Z. He, L. Zheng, and W. Guan, "A simple nonparametric car-following model driven by field data," Transportation Research Part B: Methodological, vol. 80, pp. 185-201, Oct. 2015.
M. Zhou, X. Qu, and X. Li, "A recurrent neural network based microscopic car following model to predict traffic oscillation," Transportation Research Part C: Emerging Technologies, vol. 84, pp. 245-264, Nov. 2017.
X. Wang, R. Jiang, L. Li, Y. Lin, X. Zheng, and F.-Y. Wang, "Capturing Car-Following Behaviors by Deep Learning," IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 3, pp. 910-920, Mar. 2018.
L. Liu, S. Feng, Y. Feng, X. Zhu, and H. X. Liu, "Learning-based stochastic driving model for autonomous vehicle testing," Transportation Research Record, vol. 2676, no. 1, pp. 54-64, 2022.
S. Lee, D. Ngoduy, and M. Keyvan-Ekbatani, "Integrated deep learning and stochastic car-following model for traffic dynamics on multi-lane freeways," Transportation Research Part C: Emerging Technologies, vol. 106, pp. 360-377, Sep. 2019.
M. Zhu, X. Wang, and Y. Wang, "Human-like autonomous car-following model with deep reinforcement learning," Transportation Research Part C: Emerging Technologies, vol. 97, pp. 348-368, Dec. 2018.
F. Hart, O. Okhrin, and M. Treiber, "Formulation and validation of a car-following model based on deep reinforcement learning," arXiv preprint arXiv:2109.14268, 2021.
P. Hidas, "Modelling lane changing and merging in microscopic traffic simulation," Transportation Research Part C: Emerging Technologies, vol. 10, no. 5-6, pp. 351-371, 2002.
P. Hidas, "Modelling vehicle interactions in microscopic simulation of merging and weaving," Transportation Research Part C: Emerging Technologies, vol. 13, no. 1, pp. 37-62, 2005.
G. Ren, Y. Zhang, H. Liu, K. Zhang, and Y. Hu, "A new lane-changing model with consideration of driving style," International Journal of Intelligent Transportation Systems Research, vol. 17, no. 3, pp. 181-189, 2019.
Y. Liu, X. Wang, L. Li, S. Cheng, and Z. Chen, "A novel lane change decision-making model of autonomous vehicle based on support vector machine," IEEE Access, vol. 7, pp. 26543-26550, 2019.
T. Shi, P. Wang, X. Cheng, C.-Y. Chan, and D. Huang, "Driving decision and control for automated lane change behavior based on deep reinforcement learning," in 2019 IEEE Intelligent Transportation Systems Conference (ITSC). IEEE, 2019.
Huang, "Driving decision and control for automated lane change behavior based on deep reinforcement learning," in 2019 IEEE intelligent transportation systems conference (ITSC). IEEE, 2019, pp. 2895-2900. An Integrated Model for Autonomous Speed and Lane Change Decision-Making Based on Deep Reinforcement Learning. J Peng, S Zhang, Y Zhou, Z Li, IEEE Transactions on Intelligent Transportation Systems. 2311J. Peng, S. Zhang, Y. Zhou, and Z. Li, "An Integrated Model for Autonomous Speed and Lane Change Decision-Making Based on Deep Reinforcement Learning," IEEE Transactions on Intelligent Trans- portation Systems, vol. 23, no. 11, pp. 21 848-21 860, 2022. An integrated theory of the mind. J R Anderson, D Bothell, M D Byrne, S Douglass, C Lebiere, Y Qin, Psychological review. 11141036J. R. Anderson, D. Bothell, M. D. Byrne, S. Douglass, C. Lebiere, and Y. Qin, "An integrated theory of the mind." Psychological review, vol. 111, no. 4, p. 1036, 2004. Aasman and others, Modelling driver behaviour in Soar. J , KPN Research Leidschendam. J. Aasman and others, Modelling driver behaviour in Soar. KPN Research Leidschendam, The Netherlands, 1995. Queueing Network-Model Human Processor (QN-MHP) A computational architecture for multitask performance in human-machine systems. Y Liu, R Feyen, O Tsimhoni, ACM Transactions on Computer-Human Interaction (TOCHI). 131Y. Liu, R. Feyen, and O. Tsimhoni, "Queueing Network-Model Hu- man Processor (QN-MHP) A computational architecture for multi- task performance in human-machine systems," ACM Transactions on Computer-Human Interaction (TOCHI), vol. 13, no. 1, pp. 37-70, 2006. Modelling stochastic gaze distribution for multi-agent traffic simulation: Impact of driver characteristics and situational traffic circumstances on the driver's gaze behaviour. M Witt, P Ring, L Wang, K Kompaß, G Prokop, Kognitive Systeme. 20181M. Witt, P. Ring, L. Wang, K. Kompaß, and G. 
Prokop, "Modelling stochastic gaze distribution for multi-agent traffic simulation: Impact of driver characteristics and situational traffic circumstances on the driver's gaze behaviour," Kognitive Systeme, vol. 2018, no. 1, 2018. SUMO-simulation of urban mobility: an overview. M Behrisch, L Bieker, J Erdmann, D Krajzewicz, The Third International Conference on Advances in System Simulation. ThinkMind. Proceedings of SIMUL 2011M. Behrisch, L. Bieker, J. Erdmann, and D. Krajzewicz, "SUMO-simulation of urban mobility: an overview," in Proceedings of SIMUL 2011, The Third International Conference on Advances in System Simulation. ThinkMind, 2011. The driver model of the traffic flow simulation PELOPS-modelling and application possibilities. F Christen, Q Huang, 2F. Christen and Q. Huang, "The driver model of the traffic flow simulation PELOPS-modelling and application possibilities," in 2nd Berlin Expert Conference on Driver Modelling "Driver Modelling in Science and Economy. Berlin Expert Conference on Driver Modelling "Driver Modelling in Science and Economy", 2008. Fundamentals of traffic simulation. J Casas, J L Ferrer, D Garcia, J Perarnau, A Torday, Traffic simulation with aimsunJ. Casas, J. L. Ferrer, D. Garcia, J. Perarnau, and A. Torday, "Traffic simulation with aimsun," Fundamentals of traffic simulation, pp. 173- 232, 2010. Microscopic modeling of traffic flow: Investigation of collision free vehicle dynamics. S Krauß, DLR Forschungszentrum fuer Luftund Raumfahrt e. S. Krauß, "Microscopic modeling of traffic flow: Investigation of collision free vehicle dynamics," DLR Forschungszentrum fuer Luft- und Raumfahrt e.V., Tech. Rep., 1998. Kombination von taktischen und strategischen Einflüssen in einer mikroskopischen Verkehrsflusssimulation. D Krajzewicz, Fahrermodellierung in Wissenschaft und Wirtschaft, 2. Berliner Fachtagung für Fahrermodellierung. D. 
Krajzewicz, "Kombination von taktischen und strategischen Einflüssen in einer mikroskopischen Verkehrsflusssimulation," Fahrermodellierung in Wissenschaft und Wirtschaft, 2. Berliner Fachtagung für Fahrermodellierung, no. 28, pp. 104-115, 2009. Spurwechselvorgänge auf Zweispurigen BABRichtungsfahrbahnen. ForschungStraßenbau und Straßenverkehrstechnik. U Sparmann, 263HeftU. Sparmann, "Spurwechselvorgänge auf Zweispurigen BABRich- tungsfahrbahnen. ForschungStraßenbau und Straßenverkehrstechnik," Heft, vol. 263, 1978. Fundamentals of traffic simulation. P Sykes, Traffic simulation with paramicsP. Sykes, "Traffic simulation with paramics," Fundamentals of traffic simulation, pp. 131-171, 2010. A model for traffic simulation. H.-T Fritzsche, D.-B Ag, Traffic Engineering+ Control. 355H.-T. Fritzsche and D.-b. Ag, "A model for traffic simulation," Traffic Engineering+ Control, vol. 35, no. 5, pp. 317-21, 1994. Traffic simulation with MITSIMLab. M Ben-Akiva, H N Koutsopoulos, T Toledo, Q Yang, C F Choudhury, C Antoniou, R Balakrishna, Fundamentals of traffic simulation. M. Ben-Akiva, H. N. Koutsopoulos, T. Toledo, Q. Yang, C. F. Choudhury, C. Antoniou, and R. Balakrishna, "Traffic simulation with MITSIMLab," Fundamentals of traffic simulation, pp. 233-268, 2010. Nonlinear Follow-the-Leader Models of Traffic Flow. D C Gazis, R Herman, R W Rothery, Operations Research. 94D. C. Gazis, R. Herman, and R. W. Rothery, "Nonlinear Follow-the- Leader Models of Traffic Flow," Operations Research, vol. 9, no. 4, pp. 545-567, Aug. 1961. Synchro Studio 7: Synchro plus SimTraffic and 3D Viewer. D Husch, J Albeck, Trafficware. D. Husch and J. Albeck, Synchro Studio 7: Synchro plus SimTraffic and 3D Viewer. Trafficware, 2006. Microscopic traffic flow simulator VISSIM. M Fellendorf, P Vortisch, Fundamentals of traffic simulationM. Fellendorf and P. Vortisch, "Microscopic traffic flow simulator VISSIM," Fundamentals of traffic simulation, pp. 63-93, 2010. 
CORSIM-corridor traffic simulation model," in Traffic Congestion and Traffic Safety in the 21st Century: Challenges, Innovations, and OpportunitiesUrban Transportation Division, ASCE; Highway Division, ASCE; Federal Highway Administration, USDOT; and National Highway Traffic Safety Administration. A Halati, H Lieu, S Walker, A. Halati, H. Lieu, and S. Walker, "CORSIM-corridor traffic simulation model," in Traffic Congestion and Traffic Safety in the 21st Century: Challenges, Innovations, and OpportunitiesUrban Transportation Di- vision, ASCE; Highway Division, ASCE; Federal Highway Adminis- tration, USDOT; and National Highway Traffic Safety Administration, USDOT., 1997. Re:sim (Multi-Agent Traffic Simulation Software). Misakidesign, MisakiDesign, "Re:sim (Multi-Agent Traffic Simulation Software)." [Online]. Available: https://github.com/Reisim The Eclipse Working Group openPASS-an open source approach to safety impact assessment via simulation. J Dobberstein, J Bakker, L Wang, T Vogt, M Düring, L Stark, J Gainey, A Prahl, R Mueller, G Blondelle, Proc. 25th ESV Conference. 25th ESV ConferenceJ. Dobberstein, J. Bakker, L. Wang, T. Vogt, M. Düring, L. Stark, J. Gainey, A. Prahl, R. Mueller, and G. Blondelle, "The Eclipse Working Group openPASS-an open source approach to safety impact assessment via simulation," in Proc. 25th ESV Conference, 2017. Toward autonomous collision avoidance by steering. A Eidehall, J Pohl, F Gustafsson, J Ekmark, IEEE Transactions on Intelligent Transportation Systems. 81A. Eidehall, J. Pohl, F. Gustafsson, and J. Ekmark, "Toward au- tonomous collision avoidance by steering," IEEE Transactions on Intelligent Transportation Systems, vol. 8, no. 1, pp. 84-94, 2007. A farewell to brake reaction times? Kinematics-dependent brake response in naturalistic rear-end emergencies. G Markkula, J Engström, J Lodin, J Bärgman, T Victor, Accident Analysis & Prevention. 95G. Markkula, J. Engström, J. Lodin, J. Bärgman, and T. 
Victor, "A farewell to brake reaction times? Kinematics-dependent brake response in naturalistic rear-end emergencies," Accident Analysis & Prevention, vol. 95, pp. 209-226, Oct. 2016. A Theory of Visual Control of Braking Based on Information about Time-to-Collision. D N Lee, Perception. 54D. N. Lee, "A Theory of Visual Control of Braking Based on Information about Time-to-Collision," Perception, vol. 5, no. 4, pp. 437-459, Dec. 1976. Modeling driver control behavior in both routine and near-accident driving. G Markkula, Proceedings of the Human Factors and Ergonomics Society Annual Meeting. the Human Factors and Ergonomics Society Annual Meeting58G. Markkula, "Modeling driver control behavior in both routine and near-accident driving," Proceedings of the Human Factors and Er- gonomics Society Annual Meeting, vol. 58, no. 1, pp. 879-883, Sep. 2014. Computational modeling of driver pre-crash brake response, with and without off-road glances: Parameterization using real-world crashes and near-crashes. M Svärd, G Markkula, J Bärgman, T Victor, Accident Analysis & Prevention. 163106433M. Svärd, G. Markkula, J. Bärgman, and T. Victor, "Computational modeling of driver pre-crash brake response, with and without off-road glances: Parameterization using real-world crashes and near-crashes," Accident Analysis & Prevention, vol. 163, p. 106433, Dec. 2021. Analyzing driver-pedestrian interaction at crosswalks: A contribution to autonomous driving in urban environments. F Schneemann, I Gohl, 2016 IEEE Intelligent Vehicles Symposium (IV). F. Schneemann and I. Gohl, "Analyzing driver-pedestrian interaction at crosswalks: A contribution to autonomous driving in urban environ- ments," in 2016 IEEE Intelligent Vehicles Symposium (IV), Jun. 2016, pp. 38-43. Active pedestrian safety by automatic braking and evasive steering. C G Keller, T Dang, H Fritz, A Joos, C Rabe, D M Gavrila, IEEE Transactions on Intelligent Transportation Systems. 124C. G. Keller, T. Dang, H. Fritz, A. 
Joos, C. Rabe, and D. M. Gavrila, "Active pedestrian safety by automatic braking and evasive steering," IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 4, pp. 1292-1304, 2011. Fuzzy Surrogate Safety Metrics for real-time assessment of rear-end collision risk. A study based on empirical observations. K Mattas, M Makridis, G Botzoris, A Kriston, F Minarini, B Papadopoulos, F Re, G Rognelund, B Ciuffo, Accident Analysis & Prevention. 148105794K. Mattas, M. Makridis, G. Botzoris, A. Kriston, F. Minarini, B. Pa- padopoulos, F. Re, G. Rognelund, and B. Ciuffo, "Fuzzy Surrogate Safety Metrics for real-time assessment of rear-end collision risk. A study based on empirical observations," Accident Analysis & Preven- tion, vol. 148, p. 105794, 2020. How do drivers respond to driving risk during car-following? Risk-response driver model and its application in human-like longitudinal control. X Zhao, R He, J Wang, Accident Analysis & Prevention. 148105783X. Zhao, R. He, and J. Wang, "How do drivers respond to driving risk during car-following? Risk-response driver model and its application in human-like longitudinal control," Accident Analysis & Prevention, vol. 148, p. 105783, 2020. A Review of Near-Collision Driver Behavior Models. G Markkula, O Benderius, K Wolff, M Wahde, Human Factors: The Journal of the Human Factors and Ergonomics Society. 546G. Markkula, O. Benderius, K. Wolff, and M. Wahde, "A Review of Near-Collision Driver Behavior Models," Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 54, no. 6, pp. 1117-1143, Dec. 2012. Affordance-based perception-action dynamics: A model of visually guided braking. H S Harrison, M T Turvey, T D Frank, Psychological Review. 1233305H. S. Harrison, M. T. Turvey, and T. D. Frank, "Affordance-based perception-action dynamics: A model of visually guided braking." Psychological Review, vol. 123, no. 3, p. 305, 2016. Analysis of optimal velocity model with explicit delay. 
M Bando, K Hasebe, K Nakanishi, A Nakayama, Physical Review E. 585M. Bando, K. Hasebe, K. Nakanishi, and A. Nakayama, "Analysis of optimal velocity model with explicit delay," Physical Review E, vol. 58, no. 5, pp. 5429-5435, Nov. 1998. Delays, inaccuracies and anticipation in microscopic traffic models. M Treiber, A Kesting, D Helbing, Physica A: Statistical Mechanics and its Applications. 3601M. Treiber, A. Kesting, and D. Helbing, "Delays, inaccuracies and anticipation in microscopic traffic models," Physica A: Statistical Mechanics and its Applications, vol. 360, no. 1, pp. 71-88, Jan. 2006. Modeling reaction time within a traffic simulation model. K Basak, S N Hetu, C L Zheminli, H Azevedo, T Loganathan, Toledo, Runminxu, Yanxu, M Li-Shiuanpeh, Ben-Akiva, 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013). The Hague; NetherlandsIEEEK. Basak, S. N. Hetu, ZheminLi, C. L. Azevedo, H. Loganathan, T. Toledo, RunminXu, YanXu, Li-ShiuanPeh, and M. Ben-Akiva, "Modeling reaction time within a traffic simulation model," in 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013). The Hague, Netherlands: IEEE, Oct. 2013, pp. 302-309. Simplified, data-driven, errorable car-following model to predict the safety effects of distracted driving. J Przybyla, J Taylor, J Jupe, X Zhou, 2012 15th International IEEE Conference on Intelligent Transportation Systems. Anchorage, AK, USAIEEEJ. Przybyla, J. Taylor, J. Jupe, and X. Zhou, "Simplified, data-driven, errorable car-following model to predict the safety effects of distracted driving," in 2012 15th International IEEE Conference on Intelligent Transportation Systems. Anchorage, AK, USA: IEEE, Sep. 2012, pp. 1149-1154. Hardware simulation of automatic braking system based on fuzzy logic control. N C Basjaruddin, K Kuspriyanto, S Suhendar, D Saefudin, V A Azis, Journal of Mechatronics, Electrical Power, and Vehicular Technology. 71N. C. Basjaruddin, K. Kuspriyanto, S. 
Suhendar, D. Saefudin, and V. A. Azis, "Hardware simulation of automatic braking system based on fuzzy logic control," Journal of Mechatronics, Electrical Power, and Vehicular Technology, vol. 7, no. 1, pp. 1-6, 2016. Fuzzy logic controller on automated car braking system. M Mamat, N Ghani, 2009 IEEE International Conference on Control and Automation. IEEEM. Mamat and N. Ghani, "Fuzzy logic controller on automated car braking system," in 2009 IEEE International Conference on Control and Automation. IEEE, 2009, pp. 2371-2375. Emergency Steering Evasion Assistance Control Based on Driving Behavior Analysis. Z Zhao, L Zhou, Y Luo, K Li, IEEE Transactions on Intelligent Transportation Systems. 202Z. Zhao, L. Zhou, Y. Luo, and K. Li, "Emergency Steering Evasion Assistance Control Based on Driving Behavior Analysis," IEEE Trans- actions on Intelligent Transportation Systems, vol. 20, no. 2, pp. 457- 475, Feb. 2019. Real time path planning for threat assessment and collision avoidance by steering. A Eidehall, D Madås, 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013). A. Eidehall and D. Madås, "Real time path planning for threat assessment and collision avoidance by steering," in 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013). . IEEE. IEEE, 2013, pp. 916-921. Collision-avoidance systems PRORETA: Situation analysis and intervention control. R Isermann, R Mannale, K Schmitt, Control Engineering Practice. 2011R. Isermann, R. Mannale, and K. Schmitt, "Collision-avoidance sys- tems PRORETA: Situation analysis and intervention control," Control Engineering Practice, vol. 20, no. 11, pp. 1236-1246, 2012. Emergency collision avoidance by steering in critical situations. J Park, D Kim, K Huh, International journal of automotive technology. 221J. Park, D. Kim, and K. Huh, "Emergency collision avoidance by steering in critical situations," International journal of automotive technology, vol. 22, no. 1, pp. 
173-184, 2021. Automatic steering and braking for a collision avoiding vehicle. M Schorn, R Isermann, IFAC Proceedings Volumes. 39M. Schorn and R. Isermann, "Automatic steering and braking for a collision avoiding vehicle," IFAC Proceedings Volumes, vol. 39, no. 16, pp. 378-383, 2006. Integrated Control of Steering and Braking for Effective Collision Avoidance with Autonomous Emergency Braking in Automated Driving. D Wang, K Nazem Tahmasebi, D Chen, 30D. Wang, K. Nazem Tahmasebi, and D. Chen, "Integrated Control of Steering and Braking for Effective Collision Avoidance with Au- tonomous Emergency Braking in Automated Driving," in 2022 30th . Mediterranean Conference on Control and Automation. Mediterranean Conference on Control and Automation (MED), 2022, pp. 945-950. Integrated Steering and Differential Braking for Emergency Collision Avoidance in Autonomous Vehicles. R Hajiloo, M Abroshan, A Khajepour, A Kasaiezadeh, S.-K Chen, IEEE Transactions on Intelligent Transportation Systems. 225R. Hajiloo, M. Abroshan, A. Khajepour, A. Kasaiezadeh, and S.-K. Chen, "Integrated Steering and Differential Braking for Emergency Collision Avoidance in Autonomous Vehicles," IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 5, pp. 3167-3178, 2021. A F Agarap, arXiv:1803.08375Deep learning using rectified linear units (relu). arXiv preprintA. F. Agarap, "Deep learning using rectified linear units (relu)," arXiv preprint arXiv:1803.08375, 2018. Car-following model based on fuzzy inference system. S Kikuchi, Transportation Research Record. S. Kikuchi, "Car-following model based on fuzzy inference system," Transportation Research Record, pp. 82-82, 1992. Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations. D J Fagnant, K Kockelman, Transportation Research Part A: Policy and Practice. 77D. J. Fagnant and K. 
Kockelman, "Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations," Trans- portation Research Part A: Policy and Practice, vol. 77, pp. 167-181, 2015. How will vehicle automation and electrification affect the automotive maintenance, repair sector?. M Grosso, I Raileanu, J Krause, M Alonso Raposo, A Duboz, A Garus, A Mourtzouchou, B Ciuffo, Transportation Research Interdisciplinary Perspectives. 12100495M. Grosso, I. Cristinel Raileanu, J. Krause, M. Alonso Raposo, A. Duboz, A. Garus, A. Mourtzouchou, and B. Ciuffo, "How will ve- hicle automation and electrification affect the automotive maintenance, repair sector?" Transportation Research Interdisciplinary Perspectives, vol. 12, p. 100495, Dec. 2021. Collision avoidance with automatic braking and swerving. C Ackermann, R Isermann, S Min, C Kim, IFAC proceedings volumes. 473C. Ackermann, R. Isermann, S. Min, and C. Kim, "Collision avoidance with automatic braking and swerving," IFAC proceedings volumes, vol. 47, no. 3, pp. 10 694-10 699, 2014. Review of the literature on obstacle avoidance maneuvers: braking versus steering. L D Adarns, P Place, L Adarns, The University of Michigan Transportation Research InstituteL. D. Adarns, P. Place, and L. Adarns, "Review of the literature on obstacle avoidance maneuvers: braking versus steering," The University of Michigan Transportation Research Institute, 1994. A hybrid rule-based and data-driven approach to driver modeling through particle filtering. R Bhattacharyya, S Jung, L A Kruse, R Senanayake, M J Kochenderfer, IEEE Transactions on Intelligent Transportation Systems. 238R. Bhattacharyya, S. Jung, L. A. Kruse, R. Senanayake, and M. J. Kochenderfer, "A hybrid rule-based and data-driven approach to driver modeling through particle filtering," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 8, pp. 13 055-13 068, 2021. 
Modeling the car-following behavior with consideration of driver, vehicle, and environment factors: A historical review. J Han, X Wang, G Wang, Sustainability. 14138179J. Han, X. Wang, and G. Wang, "Modeling the car-following behavior with consideration of driver, vehicle, and environment factors: A historical review," Sustainability, vol. 14, no. 13, p. 8179, 2022. The highd dataset: A drone dataset of naturalistic vehicle trajectories on german highways for validation of highly automated driving systems. R Krajewski, J Bock, L Kloeker, L Eckstein, 2018 21st International Conference on Intelligent Transportation Systems (ITSC). R. Krajewski, J. Bock, L. Kloeker, and L. Eckstein, "The highd dataset: A drone dataset of naturalistic vehicle trajectories on german highways for validation of highly automated driving systems," in 2018 21st International Conference on Intelligent Transportation Systems (ITSC). . IEEE. IEEE, 2018, pp. 2118-2125. The AD4CHE Dataset and its Application in Typical Congestion Scenarios of Traffic Jam Pilot Systems. Y Zhang, C Wang, R Yu, L Wang, W Quan, Y Gao, P Li, IEEE Transactions on Intelligent Vehicles. Y. Zhang, C. Wang, R. Yu, L. Wang, W. Quan, Y. Gao, and P. Li, "The AD4CHE Dataset and its Application in Typical Congestion Scenar- ios of Traffic Jam Pilot Systems," IEEE Transactions on Intelligent Vehicles, pp. 1-12, 2023.
Extension of a factorization method of nonlinear second order ODE's with variable coefficients

H. C. Rosu (1), O. Cornejo-Pérez (2), M. Pérez-Maldonado (1), J. A. Belinchón (3)

(1) IPICyT, Instituto Potosino de Investigación Científica y Tecnológica, Camino a la presa San José 2055, Col. Lomas 4a Sección, 78216 San Luis Potosí, S.L.P., Mexico
(2) Facultad de Ingeniería, Centro Universitario Cerro de las Campanas, Universidad Autónoma de Querétaro, 76010 Santiago de Querétaro, Mexico
(3) Departamento de Física, Facultad de Ciencias Naturales, Universidad de Atacama, Copayapu 485, Copiapó, Chile

Rev. Mex. Fís. 63 (2017)

Keywords: nonlinear second order equation, factorization, powers of first derivative. PACS numbers: 02.30.Hq, 11.30.Pb

arXiv:1612.01938v3

The factorization of nonlinear second-order differential equations proposed by Rosu and Cornejo-Pérez in 2005 is extended to equations containing quadratic and cubic forms in the first derivative. A few illustrative examples encountered in physics are provided.

I. INTRODUCTION

Finding exact solutions of nonlinear differential equations has long been an active field of research because of the insight they offer into the understanding of many processes in physics, biology, chemistry, and other scientific areas. Among the methods developed to find analytical solutions of nonlinear ordinary differential equations (ODEs) and nonlinear partial differential equations (PDEs), we enumerate the truncation procedure in Painlevé analysis [1], the Hirota bilinear method [2], the tanh function method [3,4], the Jacobi elliptic function method [5], and the Prelle-Singer method [6,7].
The factorization method, which in mathematics has roots going back to Euler and Cauchy, is a well-known technique used to find exact solutions of linear second order ODEs in an algebraic manner. In physics, it has attracted much interest as an elegant way of solving fundamental eigenvalue problems in quantum mechanics, and later due primarily to its natural association with supersymmetric quantum mechanics [8][9][10][11][12][13][14]. The latter approach has been extended to some types of nonlinear ODEs [15], and to more dimensions [16][17][18][19] as well. In recent times, the factorization technique has been applied to find exact solutions of many nonlinear ODEs [20], and of nonlinear PDEs, mainly in the context of traveling waves [21][22][23][24][25][26][27][28][29]. The factorization technique was further extended by Hazra et al. [30] to a class of coupled Liénard equations, which also includes a coupled version of the modified Emden equation. Their algorithm can be generalized to higher order scalar and coupled ODEs, but one has to pay the price of increased algebraic complexity. In addition, Tiwari et al. [31] factorized even more complicated quadratic and mixed Liénard-type nonlinear systems, among which are the coupled Mathews-Lakshmanan nonlinear oscillators. In this paper, we generalize the factorization technique that we introduced previously [22,23] for nonlinear equations with a monomial function in the first derivative, i.e., with a damping term which can also be nonlinear, to nonlinear equations with polynomial functions of second and third degree in the first derivative. In the following section, we review the factorization in the monomial case. Next, we present the factorization of nonlinear equations with a polynomial function of second degree in the first derivative and illustrate it with a couple of examples. The last section is devoted to the factorization of nonlinear equations with a polynomial function of third degree in the first derivative.
We end the paper with the conclusion section.

II. FACTORIZATION OF NONLINEAR EQUATIONS WITH A MONOMIAL OF FIRST DEGREE IN THE FIRST DERIVATIVE

Nonlinear equations of the type

y_ss + f(y,s) y_s + F(y,s) = 0 ,    (1)

where the subscript s denotes the derivative with respect to s, and f(y,s) and F(y,s) are arbitrary functions of y(s) and s (not necessarily polynomial), can be factorized as follows [32]:

[D_s − φ_2(y,s)] [D_s − φ_1(y,s)] y(s) = 0 ,    (2)

where D_s = d/ds. Expanding (2), one can use the following grouping of terms [22,23]:

D_s² y − ( φ_1 + φ_2 + (∂φ_1/∂y) y ) D_s y + ( φ_1 φ_2 − ∂φ_1/∂s ) y = 0 ,    (3)

and comparing Eq. (1) with Eq. (3), we get the conditions

φ_1 + φ_2 + (∂φ_1/∂y) y = −f ,    (4)

φ_1 φ_2 − ∂φ_1/∂s = F(y,s)/y .    (5)

Any factorization like (2) of a scalar equation of the form given in Eq. (1) allows us to find a compatible first order nonlinear differential equation,

[D_s − φ_1(y,s)] y ≡ D_s y − φ_1(y,s) y = 0 ,    (6)

whose solution provides a particular solution of (1). In other words, if we are able to find a couple of functions φ_1(y,s) and φ_2(y,s) such that they factorize Eq. (1) in the form (2), solving Eq. (6) allows one to get particular solutions of (1). The advantage of this factorization has been shown in the important particular case when there is no explicit dependence on s, i.e., for equations

y_ss + f(y) y_s + F(y) = 0 ,    (7)

for which the factorization conditions are

φ_1 + φ_2 + (dφ_1/dy) y = −f ,    (8)

φ_1 φ_2 = F(y)/y ,    (9)

and the two unknown functions φ_1(y) and φ_2(y) can be found easily by factoring F(y) when it is a polynomial or can be written as a product of two functions. This property of the nonlinear factorization has been used successfully since the method was introduced a decade ago and has contributed to its popularity [33]. An illustration of this technique in the case of the cubic Ginzburg-Landau equation can be found in [34].
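The recipe behind conditions (8)-(9) and the compatible first-order equation (6) can be checked on a concrete case. The sketch below (not from the paper) uses SymPy and, as an assumed illustration, the modified Emden equation y_ss + 3y y_s + y³ = 0, for which F(y)/y = y² factors as φ_1 φ_2 with φ_1 = φ_2 = −y:

```python
import sympy as sp

s, c, Y = sp.symbols('s c Y')  # Y plays the role of y as an algebraic variable

# Modified Emden equation: y'' + 3 y y' + y^3 = 0, i.e. f(y) = 3y, F(y) = y^3.
# The choice phi1 = phi2 = -Y factors F/y = Y^2 and satisfies both conditions.
phi1, phi2, f, F = -Y, -Y, 3*Y, Y**3
assert sp.simplify(phi1 + phi2 + sp.diff(phi1, Y)*Y + f) == 0   # condition (8)
assert sp.simplify(phi1*phi2 - F/Y) == 0                        # condition (9)

# Compatible first-order equation (6): y' = phi1(y) y = -y^2,
# whose general solution is y = 1/(s + c); verify it also solves
# the full second-order equation.
ysol = 1/(s + c)
residual = sp.diff(ysol, s, 2) + 3*ysol*sp.diff(ysol, s) + ysol**3
assert sp.simplify(residual) == 0
print("factorization conditions and particular solution verified")
```

The same pattern works whenever F(y)/y splits into a product φ_1 φ_2 such that (8) can be matched by adjusting a multiplicative constant in φ_1.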
Notice that interchanging the factoring functions turns (8) and (9) into

φ_1 + φ_2 + (dφ_2/dy) y = −f̃ ,    (10)

φ_1 φ_2 = F(y)/y ,    (11)

which correspond to the equation

y_ss + f̃(y) y_s + F(y) = 0 .    (12)

If s is a traveling variable, this suggests kinematic relationships between the kink solutions of (7) and (12) evolving under the different nonlinear dampings f(y) and f̃(y). Finally, in the case f = 0 and F(y,s) = V(s) y, the factoring functions φ depend only on s and the equations (1) are linear ones,

y_ss + V(s) y = 0 .    (13)

The factorization conditions take the simplified form

φ_1 + φ_2 = 0 ,    (14)

φ_1 φ_2 − dφ_1/ds = V(s) .    (15)

From (14), one has φ_1 = −φ_2 = φ, which upon substitution in (15) leads to the well-known Riccati equation −dφ/ds − φ^2 = V(s) defining the Schrödinger potential in quantum mechanics in terms of the factoring function. The interchange of φ_1 with φ_2 produces the partner Riccati equation dφ/ds − φ^2 = Ṽ(s) of much use in supersymmetric quantum mechanics [35,36].

III. FACTORIZATION OF NONLINEAR EQUATIONS WITH POLYNOMIAL FUNCTION OF SECOND DEGREE IN THE FIRST DERIVATIVE

Let us consider the following nonlinear second-order ODE with variable coefficients:

y_ss + f(y,s) y_s^2 + g(y,s) y_s + F(y,s) = 0 .    (16)

A factorization of the form

[D_s + f(y,s) y_s − φ_2(y,s)] [D_s − φ_1(y,s)] y = 0     (17)

is possible if the following constraint equations are satisfied:

φ_1 + φ_2 + ( ∂φ_1/∂y + f(y,s) φ_1 ) y = −g(y,s) ,    (18)

φ_1 φ_2 − ∂φ_1/∂s = F(y,s)/y .    (19)

There are also cases when one can work with φ_2 = 0. In such cases, the constraint equations take the form

φ_1 + ( ∂φ_1/∂y + f(y,s) φ_1 ) y = −g(y,s) ,    (20)

−∂φ_1/∂s = F(y,s)/y .    (21)

Finally, the degenerate case corresponding to φ_1 = 0, which also implies F = 0, leads to the simple constraint

φ_2 = −g(y,s) .    (22)
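The Riccati pair (14)-(15) is easy to check symbolically. The block below uses the illustrative factoring function φ(s) = −tanh(s) (our own choice for the demonstration, not taken from the text), builds the partner potentials, and verifies that the solution of y_s = φ y solves the linear equation (13):

```python
import sympy as sp

s = sp.symbols('s')
phi = -sp.tanh(s)                      # an illustrative factoring function

# Riccati pair from Eqs. (14)-(15): V = -phi' - phi^2, partner Vt = phi' - phi^2
V  = -sp.diff(phi, s) - phi**2
Vt =  sp.diff(phi, s) - phi**2

# y' = phi*y gives y = sech(s); it must then solve y'' + V*y = 0
y = sp.sech(s)
assert sp.simplify(sp.diff(y, s) - phi*y) == 0
print(sp.simplify(sp.diff(y, s, 2) + V*y))   # -> 0
```

Here V(s) = sech^2(s) − tanh^2(s) is the usual reflectionless-well potential (up to a constant), the textbook example of the supersymmetric construction.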
As an example of a degenerate case, we mention the equation for the radial function of the isotropic metric in general relativity [37],

y_ss − (3/y) y_s^2 − (1/s) y_s = 0 ,    (23)

for which (22) reads

φ_2 = 1/s .    (24)

The solution

y = 1 / ( 2a sqrt(1 + b s^2) ) ,    (25)

where a and b are integration constants, can be found by elementary means [37]. The most important application is when no explicit dependence on s occurs in the equation, so that neither F nor the φ's depend on s and the constraints are similar to (8) and (9). If moreover one assumes φ_1 = φ_2 = φ, then the second constraint equation provides the factorization function as

φ(y) = sqrt( F(y)/y ) .    (26)

Substituting (26) in the first constraint equation leads to the following expression for the coefficient g:

g(y) = − (1/2) sqrt( F(y)/y ) [ 3 + y F_y/F + 2 f(y) y ] .    (27)

For given f(y) and F(y), the latter equation gives the coefficient g(y) for which the nonlinear equation can be factorized in the form

[ D_s + f(y) y_s − sqrt(F(y)/y) ] [ D_s − sqrt(F(y)/y) ] y = 0 .    (28)

There are equations of this type which do not present a linear term in the first derivative. This implies g(y) = 0, i.e.,

3 + y F_y/F + 2 f(y) y = 0 ,    (29)

which is separable. The solution

F(y) = C y^{−3} e^{ −2 ∫^y f(u) du } ,    (30)

with C an integration constant, provides the form of F which, for given f, allows the factorization of the equation. However, as simple as it may look, the condition (30) is quite restrictive. In physical applications, differential equations with squares of the first derivative are encountered in highly nonlinear areas, such as cosmology [38] and gravitation theories, e.g., Weyl conformal gravity [39] and f(R) gravity [40], but occasionally they show up in other branches as well. In the following, we give two examples of factorization of such equations.

A.
An equation in Weyl's conformal cosmology

The following equation

y_ss − (α/y) y_s^2 + y^σ/s^2 = 0 ,    (31)

where α and σ are real constants, arises in intermediate calculations concerning the vacuum solution of the field equations in Weyl's conformal gravity [41,42]. Let us try the factorization

[ D_s − (α/y) y_s ] [ D_s − φ_1(y,s) ] y = 0 .    (32)

Therefore, the following constraint equations should be satisfied:

∂φ_1/∂s = − y^{σ−1}/s^2 ,    (33)

φ_1 − (α/y) φ_1 y + (∂φ_1/∂y) y = 0 .    (34)

Equation (34) is separable and generates the function φ_1(y,s) = f(s) y^{α−1}; then, from Eq. (33) we obtain

∂φ_1/∂s = ∂/∂s ( y^{α−1} f(s) ) = y^{α−1} f′(s) = − y^{σ−1}/s^2 ,    (35)

which implies α = σ and f(s) = 1/s + c_1, where c_1 is an arbitrary constant. Setting, as in [27],

[ D_s − φ_1(y,s) ] y = Ω ,    (36)

we get

Ω′ − α (y′/y) Ω = 0 ,    (37)

with solution Ω = k_0 y^α. Therefore, we get the first-order equation

y′ − (1/s + c_1) y^α = k_0 y^α ,    (38)

which can be rewritten in the form

y′ − (1/s + k_1) y^α = 0 ,    (39)

where k_1 = c_1 + k_0 is a constant. The general solution of Eq. (39) is given in the form

y = [ (α − 1) ( −k_1 s − k_2 − ln s ) ]^{1/(1−α)} ,    (40)

where k_2 is an integration constant. For k_1 = 0 and α = 5, we obtain the particular solutions

y_{1,2} = ± 1 / [ sqrt(2) ( −k_2 − ln s )^{1/4} ] ,    (41)

y_{3,4} = ± i / [ sqrt(2) ( −k_2 − ln s )^{1/4} ] .    (42)

B. A Langmuir-type equation

Consider now the equation

y_ss + (1/(3y)) y_s^2 + γ y_s + (y^2 − 1)/(3y) = 0 ,

which when γ = 4/3 provides Langmuir's radial β function occurring in the formula for the space charge between coaxial cylinders [43]. Using (19), one can choose

φ_1 = − ( 1 − 1/y ) ,    φ_2 = − (1/3) ( 1 + 1/y ) .    (43)

Substituting (43) in (18), one obtains γ = 5/3, which shows that the Langmuir case cannot be factored in this way. If γ = 5/3, we can obtain a particular solution from the first-order differential equation

[ D_s + 1 − 1/y ] y = 0  ⟹  y_s + y − 1 = 0 ,    (44)

which is

y(s) = C e^{−s} + 1 ,    (45)

where C is the integration constant.

IV.
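Several of the closed-form statements above lend themselves to quick symbolic checks. The sympy sketch below (our own verification, not code from the paper; the damping f(y) = α/y is an illustrative choice) confirms the g = 0 condition (30), the general solution (40) for α = 5, and the particular solution (45) of the γ = 5/3 quadratic-damping equation:

```python
import sympy as sp

# (a) Eq. (30): for f(y) = alpha/y, F = C*y**(-3)*exp(-2*Int f du) must make
#     the coefficient g(y) of Eq. (27) vanish.
y, C, alpha = sp.symbols('y C alpha', positive=True)
f = alpha/y
F = C*y**(-3)*sp.exp(-2*sp.integrate(f, y))
g = -sp.Rational(1, 2)*sp.sqrt(F/y)*(3 + y*sp.diff(F, y)/F + 2*f*y)
assert sp.simplify(g) == 0

# (b) Eq. (40) with alpha = 5, plugged into y'' - (alpha/y) y'^2 + y^alpha/s^2 = 0
s, k1, k2 = sp.symbols('s k1 k2')
a = 5
Y = ((a - 1)*(-k1*s - k2 - sp.log(s)))**sp.Rational(1, 1 - a)
res = sp.diff(Y, s, 2) - (a/Y)*sp.diff(Y, s)**2 + Y**a/s**2
assert sp.simplify(res) == 0

# (c) Eq. (45): y = C*exp(-s) + 1 solves the gamma = 5/3 Langmuir-type case
Y2 = C*sp.exp(-s) + 1
res2 = (sp.diff(Y2, s, 2) + sp.diff(Y2, s)**2/(3*Y2)
        + sp.Rational(5, 3)*sp.diff(Y2, s) + (Y2**2 - 1)/(3*Y2))
print(sp.simplify(res2))   # -> 0
```

Each residual cancels exactly, which is a useful sanity check on the signs and exponents appearing in (27), (30) and (40).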
FACTORIZATION OF NONLINEAR EQUATIONS WITH POLYNOMIAL FUNCTION OF THIRD DEGREE IN THE FIRST DERIVATIVE

It is well known that equations of the type

y_ss + f(y,s) y_s^3 + g(y,s) y_s^2 + h(y,s) y_s + F(y,s) = 0 ,    (46)

where the coefficient functions are mappings from two-dimensional disks to the set of real numbers, D^2 → R, define projective connections [44,45]. Such equations allow for the factorization

[ D_s + f(y,s) y_s^2 − φ_2(y,s) ] [ D_s − φ_1(y,s) ] y = 0 ,    (47)

with the compatible first-order equation

[ D_s − φ_1(y,s) ] y ≡ y_s − φ_1(y,s) y = 0 ,    (48)

under the constraint equations

f(y,s) φ_1 y = −g(y,s) ,    (49)

φ_1 + φ_2 + (∂φ_1/∂y) y = −h(y,s) ,    (50)

φ_1 φ_2 − ∂φ_1/∂s = F(y,s)/y .    (51)

On the other hand, for any symmetric affine connection Γ = (Γ^i_jk(s,y)), the so-called projective connection associated to Γ [44], which carries all the information about the unparametrized geodesics of Γ, is determined by the equation

y_ss − Γ^1_22 y_s^3 + (Γ^2_22 − 2Γ^1_12) y_s^2 − (Γ^1_11 − 2Γ^2_12) y_s + Γ^2_11 = 0 .    (52)

Thus, one finds that equations (52) can be factored if

φ_1 y = (Γ^2_22 − 2Γ^1_12)/Γ^1_22 ,    (53)

φ_1 + φ_2 + (∂φ_1/∂y) y = Γ^1_11 − 2Γ^2_12 ,    (54)

φ_1 φ_2 − ∂φ_1/∂s = Γ^2_11 / y .    (55)

We do not present any particular case. Rather, we notice that for given Γ's, (53) provides φ_1; then, substituting in (54), we get φ_2, but in the end (55) must still be satisfied. This looks complicated and makes the success of the method less probable.

V. CONCLUSION

In summary, we have discussed a simple factorization method for complicated nonlinear second-order differential equations containing quadratic and cubic polynomial forms in the first derivative, and we have presented some examples. Only those equations whose coefficients satisfy certain constraints involving the factoring functions can be factorized. By doing this, one can seek solutions of simpler first-order nonlinear differential equations, corresponding to the first factorization bracket from the right.
This works fine when there is only a linear term in the first derivative. When the powers of the first derivative are higher than one, the constraint conditions on the factoring functions become more complicated, and the factorization method is less appropriate. In general, the factorization method can still work when the coefficients of the nonlinear equation do not depend explicitly on the independent variable, because the constraint equations are less restrictive in these cases.

Acknowledgements

The authors wish to thank Dr. J. Poveromo for informing them of the nonlinear equation occurring in Weyl's conformal gravity model. M. Pérez-Maldonado thanks CONACyT for a doctoral fellowship.

References

[1] J. Weiss, J. Math. Phys. 24 (1983) 1405.
[2] R. Hirota, Phys. Rev. Lett. 27 (1971) 1192.
[3] E.J. Parkes and B.R. Duffy, Comput. Phys. Commun. 98 (1996) 288.
[4] E.G. Fan, Phys. Lett. A 277 (2000) 212.
[5] Z.T. Fu, S.K. Liu, S.D. Liu, and Q. Zhao, Phys. Lett. A 290 (2001) 72.
[6] M. Prelle and M. Singer, Trans. Am. Math. Soc. 279 (1983) 215.
[7] V.K. Chandrasekar, S.N. Pandey, M. Senthilvelan, and M. Lakshmanan, J. Math. Phys. 47 (2006) 023508.
[8] E. Schrödinger, Proc. R. Ir. Acad. A 47 (1941-1942) 53.
[9] L. Infeld and T.E. Hull, Rev. Mod. Phys. 23 (1951) 21.
[10] B. Mielnik, J. Math. Phys. 25 (1984) 3387.
[11] D.J. Fernández C., Lett. Math. Phys. 8 (1984) 337.
[12] C.V. Sukumar, J. Phys. A: Math. Gen. 18 (1985) L57.
[13] B. Mielnik and O. Rosas-Ortiz, J. Phys. A 37 (2004) 10007.
[14] S.-H. Dong, "Factorization Method in Quantum Mechanics", Springer (2007).
[15] A.A. Andrianov and M.V. Ioffe, J. Phys. A: Math. Theor. 45 (2012) 503001.
[16] A.A. Andrianov, N.V. Borisov, and M.V. Ioffe, JETP Lett. 39 (1984) 93.
[17] M.V. Ioffe, J. Phys. A: Math. Gen. 37 (2004) 10363.
[18] M.V. Ioffe, J. Negro, L.M. Nieto, and D.N. Nishnianidze, J. Phys. A: Math. Gen. 39 (2006) 9297.
[19] F. Cannata, M.V. Ioffe, and D.N. Nishnianidze, J. Math. Phys. 50 (2009) 052105.
[20] L.M. Berkovich, Appl. Anal. Discrete Math. 1 (2007) 122.
[21] L.M. Berkovich, "Factorizations and Transformations of Differential Equations", Regular and Chaotic Dynamics Editorial Center (in Russian) (2002).
[22] H.C. Rosu and O. Cornejo-Pérez, Phys. Rev. E 71 (2005) 046607.
[23] O. Cornejo-Pérez and H.C. Rosu, Prog. Theor. Phys. 114 (2005) 533.
[24] O. Cornejo-Pérez, J. Negro, L.M. Nieto, and H.C. Rosu, Found. Phys. 36 (2006) 1587.
[25] P.G. Estévez, Ş. Kuru, J. Negro, and L.M. Nieto, J. Phys. A: Math. Gen. 39 (2006) 11441.
[26] P.G. Estévez, Ş. Kuru, J. Negro, and L.M. Nieto, J. Phys. A: Math. Theor. 40 (2007) 9819.
[27] D.S. Wang and H. Li, J. Math. Anal. Appl. 343 (2008) 273.
[28] E.S. Fahmy, Chaos, Solitons and Fractals 38 (2008) 1209.
[29] S.C. Mancas and H.C. Rosu, Phys. Lett. A 377 (2013) 1434.
[30] T. Hazra, V.K. Chandrasekar, R. Gladwin Pradeep, and M. Lakshmanan, J. Math. Phys. 53 (2011) 023511.
[31] A.K. Tiwari, S.N. Pandey, V.K. Chandrasekar, and M. Lakshmanan, Appl. Math. Comp. 252 (2015) 457.
[32] P.G. Estévez, Ş. Kuru, J. Negro, and L.M. Nieto, Int. J. Theor. Phys. 50 (2011) 2046.
[33] G.W. Griffiths and W.E. Schiesser, "Traveling Wave Analysis of Partial Differential Equations: Numerical and Analytical Methods with MATLAB and MAPLE", Academic Press (2012).
[34] H.C. Rosu, O. Cornejo-Pérez, and P. Ojeda-May, Phys. Rev. E 85 (2012) 037102.
[35] B. Bagchi, "Supersymmetry in Quantum and Classical Mechanics", Chapman and Hall/CRC (2001).
[36] F. Cooper, A. Khare, and U. Sukhatme, "Supersymmetry in Quantum Mechanics", World Scientific (2001).
[37] H.A. Buchdahl, Astrophys. J. 140 (1964) 1512.
[38] M.K. Mak, T. Harko, and J.A. Belinchón, Int. J. Mod. Phys. D 11 (2002) 1265.
[39] C. Deliduman, O. Kasikci, and C. Yapiskan, "Flat galactic rotation curves from geometry in Weyl gravity", arXiv:1511.07731.
[40] C.G. Böhmer, T. Harko, and F.S.N. Lobo, Astropart. Phys. 29 (2008) 386.
[41] J. Poveromo, private communication.
[42] P.D. Mannheim and D. Kazanas, Astrophys. J. 342 (1989) 635.
[43] I. Langmuir and K.B. Blodgett, Phys. Rev. 22 (1923) 347.
[44] V.S. Matveev, Math. Ann. 352 (2012) 865.
[45] Y.Y. Bagderina, J. Appl. Ind. Math. 10 (2016) 37.
Analysis of the [56, 2+] Baryon Masses in the 1/Nc Expansion

J. L. Goity
Department of Physics, Hampton University, Hampton, VA 23668, USA
Thomas Jefferson National Accelerator Facility, Newport News, VA 23606, USA

C. Schat
Department of Physics, Duke University, Durham, NC 27708, USA

N. N. Scoccola†
Physics Department, Comisión Nacional de Energía Atómica, (1429) Buenos Aires, Argentina
Universidad Favaloro, Solís 453, (1078) Buenos Aires, Argentina
ECT*, Villa Tambosi, I-38050 Villazzano (Trento), Italy

arXiv:hep-ph/0304167v1, 16 Apr 2003

The mass spectrum of the positive parity [56, 2+] baryons is studied in the 1/Nc expansion up to and including O(1/Nc) effects with SU(3) symmetry breaking implemented to first order. A total of eighteen mass relations result, several of which are tested with the available data. The breaking of spin-flavor symmetry is dominated by the hyperfine interactions, while spin-orbit effects are found to be small.

† Fellow of CONICET, Argentina.

The mixing angles are θ_{Σ,Σ′}^J and θ_{Ξ,Ξ′}^J with J = 3/2, 5/2. The physical states are given by Σ_J = Σ^(8)_J cos θ_{Σ,Σ′}^J + Σ^(10)_J sin θ_{Σ,Σ′}^J and Σ′_J = −Σ^(8)_J sin θ_{Σ,Σ′}^J + Σ^(10)_J cos θ_{Σ,Σ′}^J, and in a similar way for the cascades. The mixing angles are determined by the ratio of the matrix elements of the operator B̄_2 to the spin-flavor mass splitting induced by the O(Nc^{−1}) singlet operators. This implies that the mixing angles are O(ǫ Nc^0). The mixings affect the mass eigenvalues at O(ǫ^2/Nc), which is beyond the accuracy of the present analysis.
The 1/Nc expansion has been applied to the ground state baryons [4,5,6,7,8,9,10], and to excited baryons, where the masses and decays of the negative parity spin-flavor 70-plet [11,12,13,14,15] and the positive parity Roper 56-plet [16] have been analyzed. Two frameworks have been used in implementing the 1/Nc expansion for baryons. One framework is based on the contracted spin-flavor SU(2Nf)_c symmetry, Nf being the number of light flavors, which is a symmetry of QCD in the Nc → ∞ limit [4,12,17]. In this framework, commutation relations of operators like axial currents and hadron masses are constrained by consistency relations. The observed baryons at Nc = 3 are identified with the low lying spin states of an infinite representation of the contracted symmetry. The second framework makes use of the spin-flavor SU(2Nf) algebra, with an explicit representation of operators that act on a space of states constructed as tensor products of Nc valence quarks [7]. Both approaches are consistent and deliver equivalent results order by order in the 1/Nc expansion. From the practical point of view, however, the second one is easier to work with, especially at subleading orders in 1/Nc, and for this reason it has been chosen in most analyses. Another advantage of this approach is the possibility of using the language of the constituent quark model, as applied to the spin-flavor degrees of freedom, without any loss of generality. The study of excited baryons is not free of difficulties. Although a significant amount of symmetry in the form of a contracted SU(2Nf)_c is always present in the Nc → ∞ limit [12,18], there is no strict spin-flavor symmetry in that limit. Indeed, as was shown in [11], spin-orbit interactions break spin-flavor symmetry at O(Nc^0) in states belonging to mixed symmetric spin-flavor representations, and configuration mixing, i.e., mixing of states belonging to different spin-flavor multiplets, in general occurs at O(Nc^0) as well.
The use of spin-flavor symmetry as a zeroth-order approximation is therefore not warranted for excited baryons. However, a phenomenological fact is that spin-orbit interactions are very small (in the real world with Nc = 3 they have a magnitude expected for O(Nc^{−2}) effects), and since all sources of O(Nc^0) spin-flavor breaking, including configuration mixing, require such orbital interactions, it is justified to treat them in practice as subleading. Thanks to this observation, the usage of spin-flavor SU(2Nf) as the zeroth-order symmetry is justified. A second problem is posed by the fact that excited baryons have finite widths. The impact of this on the analyses of the masses is not fully clarified yet. One likely possibility is that their effects are included in the effective parameters that determine the 1/Nc expansion of the masses. This issue has been recently considered in Ref. [19]. The analysis of the [56, 2+] masses is made along the lines established in previous investigations of the [70, 1−] baryons [11,13,15]. The [56, 2+] multiplet contains two SU(3) octets with total angular momentum J = 3/2 and 5/2, and four decuplets with J = 1/2, 3/2, 5/2 and 7/2, as listed in Table V. The mass operator can be expressed as a string of terms expanded in 1/Nc:

H_mass = Σ_i c_i O_i + Σ_i b_i B̄_i ,    (2)

where the operators O_i are SU(3) singlets and the operators B̄_i provide SU(3) breaking and are defined to have vanishing matrix elements between non-strange states. The effective coefficients c_i and b_i are reduced matrix elements that encode the QCD dynamics; they are determined by a fit to the empirically known masses. The operators O_i and B̄_i can be expressed as positive parity and rotationally invariant products of generators of SU(6) ⊗ O(3), as has been explained elsewhere [11].
A generic n-body operator has the structure

O^(n) = (1/Nc^{n−1}) O_ℓ O_SF ,    (3)

where the factors O_ℓ and O_SF can be expressed in terms of products of generators of the orbital group O(3) (ℓ^i) and of the spin-flavor group SU(6) (S^i, T^a and G^{ia}), respectively. The explicit 1/Nc factors originate in the n − 1 gluon exchanges required to give rise to an n-body operator. The matrix elements of operators may also carry a nontrivial Nc dependence due to coherence effects [4]: for the states considered, G^{ia} (a = 1, 2, 3) and T^8 have coherent matrix elements of O(Nc). This can be shown using reductions, valid for the symmetric representation, of matrix elements involving excited quark and/or core operators, such as:

⟨Sym| s^i |Sym⟩ = (1/Nc) ⟨Sym| S^i |Sym⟩ ,
⟨Sym| S_c^i |Sym⟩ = ((Nc − 1)/Nc) ⟨Sym| S^i |Sym⟩ , etc.,    (4)

where S = s + S_c, s being the spin operator acting only on one quark (the excited one for instance), and S_c acting on the remaining (Nc − 1) core quarks. Similarly, relations for two-body operators can also be derived, e.g.:

⟨Sym| s^i G_c^{ja} |Sym⟩ = ⟨Sym| s^i (G^{ja} − g^{ja}) |Sym⟩
= (1/Nc) ⟨Sym| S^i G^{ja} − (1/4) δ^{ij} T^a − (i/2) ǫ^{ijk} G^{ka} |Sym⟩ .    (5)

For Nc = 3 and ǫ ∼ 1/3, the ratios associated with the relations (1) to (8) in Table IV are estimated to be of the order of 4%. The ratios obtained with the physical masses are listed in the last column of Table IV, and they are within that estimated theoretical range. It is important to emphasize that all these empirically verified relations represent a genuine test of spin-flavor symmetry and its breaking according to the 1/Nc expansion, as pointed out above. The fact that they are all verified to the expected accuracy is remarkable and gives strong support to the analysis based on the premises of this work.

The fit to the available data, where states with three or more stars in the Particle Data Listings are included, leads to the effective constants c_i and b_i shown in Table I and the results for the masses shown in Table V, where fourteen of them are predictions. The χ² per degree of freedom (dof) of the fit is 0.7, where the number of dof is equal to four. The errors shown for the predictions in Table V are obtained by propagating the errors of the coefficients in Table I. There is also a systematic error of O(Nc^{−2}), resulting from having included only operators up to O(Nc^{−1}) in the analysis, which can be roughly estimated to be around 30 MeV. The better established Λ − Σ splitting, in the J = 5/2 octet, is almost 100 MeV, while the other splitting, in the J = 3/2 octet, is small and slightly negative. The latter involves, however, the one-star state Σ(1840), which might also be assigned to the radially excited 56′. The large Nc analysis implies that these splittings are O(ǫ/Nc), and are produced only by the operators B̄_2 and B̄_3. The result from the fit indicates that Λ_{5/2} − Σ_{5/2} receives a contribution of 63 MeV from B̄_3 and 40 MeV from B̄_2. It is interesting to observe that several mass splitting differences receive contributions only from B̄_2, as is obvious from Table III. These involve the octet splittings (Λ_{5/2} − N_{5/2}) − (Λ_{3/2} − N_{3/2}), (Σ_{5/2} − N_{5/2}) − (Σ_{3/2} − N_{3/2}) and (Ξ_{5/2} − N_{5/2}) − (Ξ_{3/2} − N_{3/2}), and the decuplet splittings (Σ_J − ∆_J) − (Σ_{J′} − ∆_{J′}), (Ξ_J − ∆_J) − (Ξ_{J′} − ∆_{J′}) and (Ω_J − ∆_J) − (Ω_{J′} − ∆_{J′}). Further information on these splittings would allow one to pin down with better confidence the relevance of B̄_2. The fit implies, for instance, that the contribution of B̄_2 to the Ω_{1/2} − Ω_{7/2} splitting is about 225 ± 100 MeV, a rather large effect.
The operator B̄_2 involves the orbital angular momentum operator, and since in all other known cases where orbital couplings occur their effects are suppressed, the same would be expected here. The naive expectation is that the coefficient of B̄_2 would be of order 2√3 ǫ times the coefficient of O_2. It is in fact substantially larger. However, this result is not very conclusive, because b_2 is largely determined by only a few inputs, resulting in a rather large relative error for this parameter. Related to this, the sign of the coefficient of B̄_2 determines the ascending or descending ordering of the masses of strange states in the decuplet as J increases. In the present analysis the higher J states are lighter. However, the structure of the SU(3) breaking splittings cannot be established better because of the rather small number of strange states available for the fit. This is perhaps the most important motivation for further experimental and lattice QCD study of the still unobserved states.
Table I. Operator basis and fitted coefficients (MeV):

O_1 = Nc 𝟙                                   c_1 = 541 ± 4
O_2 = (1/Nc) l^i S^i                          c_2 = 18 ± 16
O_3 = (1/Nc) S^i S^i                          c_3 = 241 ± 14
B̄_1 = −S                                     b_1 = 206 ± 18
B̄_2 = (1/Nc) l^i G^{i8} − (1/(2√3)) O_2      b_2 = 104 ± 64
B̄_3 = (1/Nc) S^i G^{i8} − (1/(2√3)) O_3      b_3 = 223 ± 68

Table IV (excerpt). Mass relations and their empirical accuracy:

(1) ∆_{5/2} − ∆_{3/2} = N_{5/2} − N_{3/2}    0.6%
(2) 5(∆_{7/2} − ∆_{5/2}) = 7(N_{5/2} − N_{3/2})    1.8%
(3) ∆_{7/2} − ∆_{1/2} = 3(N_{5/2} − N_{3/2})    1.5%
(4) 8(Λ_{3/2} − N_{3/2}) + 22(Λ_{5/2} − N_{5/2}) = 15(Σ_{5/2} − Λ_{5/2}) + 30(Σ_{7/2} − ∆_{7/2})    0.4%
(5) Λ_{5/2} − Λ_{3/2} + 3(Σ_{5/2} − Σ_{3/2}) = 4(N_{5/2} − N_{3/2})    1.7%
(6) Λ_{5/2} − Λ_{3/2} + Σ_{5/2} − Σ_{3/2} = 2(Σ′_{5/2} − Σ′_{3/2})    0.5%
(7) 7 Σ′_{3/2} + 5 Σ_{7/2} = 12 Σ′ …

In the mass range from 1600 to 2100 MeV there exists a set of positive parity baryons which might be assigned to an irreducible representation [56, 2+] of SU(6) ⊗ O(3), where SU(6) is the spin-flavor group and O(3) classifies the orbital excitations. Among the candidate states in that set, all non-strange states are known, as well as seven strangeness S = −1 states. Some of the strange states are, however, established with low certainty (two stars or less in the Particle Data Listings [1]). In this letter the available empirical information is used to implement an analysis of the masses based on the 1/Nc expansion of QCD [2,3], an approach that has turned out to be very successful in baryon phenomenology. Note that the octets have spin S = 1/2 while the decuplets have spin S = 3/2, as in the ground state baryons. For non-strange states this is the I = S rule. The states are obtained by coupling the orbital part with ℓ = 2 to the spin-flavor symmetric states, namely,

|J J_z; S; (p = 2S, q), Y, I I_z⟩_Sym = |S S_z; (p, q), Y, I I_z⟩_Sym ⊗ |ℓ = 2, m⟩ ,    (1)

where (p, q) label the SU(3) representation and Y stands for the hypercharge.
Note that, unlike the states in mixed-symmetric representations, where excited and core quarks have to be distinguished for the purpose of building a basis of mass operators, such a distinction is unnecessary for the symmetric representation. In this representation, G^{ia} (a = 1, 2, 3) and T^8 have matrix elements of O(Nc), while the rest of the generators have matrix elements of zeroth order. At each order in 1/Nc and ǫ, where the latter parameter measures SU(3) breaking, there is a basis of operators. The construction of these bases is straightforward, and the operators are listed in Table I. An important observation is that in the present case there is no SU(3) singlet operator breaking spin-flavor symmetry at O(Nc^0). In particular, operators involving the O(3) generators, which in the mixed-symmetric spin-flavor representations can be O(Nc^0), are demoted to O(1/Nc) in the spin-flavor symmetric representation. At O(Nc^{−1}) only two singlet operators appear, the spin-orbit operator O_2 and the hyperfine operator O_3, both being two-body operators. At order ǫ there is one operator of O(Nc^0). Since there are twenty-four independent masses in the isospin symmetric limit, and the basis consists of six operators, there are eighteen mass relations that hold independently of the values of the coefficients c_i and b_i. These relations are given in Table IV. In addition to the Gell-Mann-Okubo (GMO) relations for each octet (two such relations) and the equal spacing relations (EQS) for each decuplet (eight such relations), there are eight relations that involve states belonging to different SU(3) multiplets as well as different values of J: the first three in Table IV involve only the masses of non-strange states, while the remaining five relations have been chosen in such a way that several of them can be tested directly with the available data. These latter eight relations provide a useful test of the validity of the 1/Nc expansion as implemented in this analysis.
The GMO and EQS relations cannot be tested due to the scarcity of information on strange baryons. If the one- and two-star states are excluded, there are four relations that can be tested, namely, the three non-strange ones (1, 2 and 3) and relation (4). If the one- and two-star states are included (three such states), there are three additional relations that can be tested, namely (5, 6 and 7). In all cases they are found to be satisfied within the experimental errors. Following [9], in order to compare to what extent the empirical accuracies of the mass relations match the theoretical expectations, each of the mass relations in Table IV is cast in the form LHS = RHS, with the left hand side (LHS) and right hand side (RHS) possessing only terms with positive coefficients. The accuracy of a mass relation is then defined as |LHS − RHS|/[(LHS + RHS)/2]. These ratios are O(ǫ² Nc^{−2}) for the GMO and EQS relations, and O(Nc^{−3}), O(ǫ² Nc^{−2}) and/or O(ǫ Nc^{−3}) for the others. In Table V, the partial contributions from each operator to the mass of the different members of the multiplet are also shown. The operator O_1 provides the spin-flavor singlet mass of about 1625 MeV. The breaking of spin-flavor symmetry by the SU(3) singlet operators is given essentially in its entirety by the hyperfine interaction O_3, which produces a splitting between octet and decuplet states of approximately 240 MeV, while the spin-orbit operator O_2 is rather irrelevant, inducing spin-flavor breaking mass shifts of less than 30 MeV. Note that O_2 is the sole source of the splittings between the two N states and also between the ∆ states. The weakness of O_2 is thus very convincingly established. The breaking of SU(3) is dominated by the operator B̄_1, which gives a shift of about 200 MeV per unit of strangeness.
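The accuracy test just described is easy to reproduce for the non-strange relations. The snippet below uses approximate PDG central values for the candidate states (illustrative inputs chosen by us; the values used in the paper's fit may differ slightly) and evaluates |LHS − RHS|/[(LHS + RHS)/2] for relations (1)-(3), each rearranged so that both sides are positive:

```python
# Accuracy check of the non-strange mass relations (1)-(3), using
# approximate PDG central values in MeV (illustrative, not the fit inputs):
# N(1720)3/2+, N(1680)5/2+, D(1910)1/2+, D(1920)3/2+, D(1905)5/2+, D(1950)7/2+
masses = {
    "N3/2": 1720.0, "N5/2": 1685.0,
    "D1/2": 1910.0, "D3/2": 1920.0, "D5/2": 1905.0, "D7/2": 1950.0,
}

def accuracy(lhs, rhs):
    """|LHS - RHS| / [(LHS + RHS)/2], both sides arranged to be positive."""
    return abs(lhs - rhs) / ((lhs + rhs) / 2.0)

m = masses
# (1)  D5/2 - D3/2 = N5/2 - N3/2   rearranged as   D5/2 + N3/2 = D3/2 + N5/2
r1 = accuracy(m["D5/2"] + m["N3/2"], m["D3/2"] + m["N5/2"])
# (2)  5(D7/2 - D5/2) = 7(N5/2 - N3/2)
r2 = accuracy(5*m["D7/2"] + 7*m["N3/2"], 5*m["D5/2"] + 7*m["N5/2"])
# (3)  D7/2 - D1/2 = 3(N5/2 - N3/2)
r3 = accuracy(m["D7/2"] + 3*m["N3/2"], m["D1/2"] + 3*m["N5/2"])

for i, r in enumerate((r1, r2, r3), 1):
    print(f"relation ({i}): {100*r:.1f}%")
```

With these inputs all three ratios come out at the percent level, consistent with the O(Nc^{-3}) ~ 4% expectation quoted in the text.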
The main role of the subleading operators B_2 and B_3 is to provide the observed Λ − Σ splittings in the octets, the different splittings between the N and the average Λ − Σ masses in the two octets, and the Σ − ∆ splitting in the J = 7/2 decuplet. Finally, B_2 gives the only contributions to the state mixings. The mixing angles that result from the fit are: θ_{Σ,Σ′}^{3/2} = −0.16, θ_{Σ,Σ′}^{5/2} = −0.26, θ_{Ξ,Ξ′}^{3/2} = −0.21 and θ_{Ξ,Ξ′}^{5/2} = −0.19 (in radians). It is instructive to compare the present analysis with the earlier 1/N_c analyses of the [70, 1−] multiplet [13, 14, 15] and the Roper multiplet [56′, 0+] [16]. At the level of SU(3) singlet operators the hyperfine interaction is O(1/N_c) in all cases. It is interesting to compare the strength of the hyperfine interaction in the different multiplets by estimating the strength of the pairwise quark hyperfine interaction. In the large-N_c limit that strength should be the same for different low-lying excited states. For the ground state baryons the hyperfine operator, up to terms proportional to the identity operator, is given by Σ_{i≠j} s(i) · s(j), where the indices i, j run from 1 to N_c. The ground state ∆ − N splitting then gives a strength of about 100 MeV for this operator. In excited states with ℓ = 1, the results from the 70-plet depend in general on the mixing angles used as an input [18]. For the particular choice of angles used in the analyses [13, 14, 15], the hyperfine interaction involving the excited quark and the quarks in the core (the operator O_7 in [15]) is suppressed, indicating that the hyperfine interaction is predominantly short range. The relevant hyperfine interaction is in this case the one involving the N_c − 1 quarks in the core, i.e., the indices i and j run only over the quarks in the core. This leads in the [70, 1−] to a strength of about 160 MeV. In the [56, 2+], a reasonable assumption is that a single quark is excited with ℓ = 2.
From the result obtained in the [70, 1−], it is expected that the excited quark will also have negligible participation in the hyperfine interaction, which will then be given essentially by the hyperfine interaction of the core quarks. Using the relation (1 − 2/N_c) S^2 = S_c^2 − (3/4) 𝟙, valid in states belonging to the symmetric spin-flavor representation, the result of the fit implies a strength of 240 MeV. In the Roper [56′, 0+] multiplet the situation is less clear, as all quarks may participate in the hyperfine interaction. The average strength in the core turns out to be about 160 MeV. These results indicate an increase in the strength of the hyperfine interaction in going from the ground state baryons to excited baryons. This suggests the presence of an underlying dynamical mechanism that it might be possible to identify in specific models. The other SU(3) singlet interaction common to all multiplets, the spin-orbit interaction, is weak in the two known cases, namely the [70, 1−] and the [56, 2+]. The SU(3) breaking operator B_1 = −S gives a mass shift per unit of strangeness of about 200 MeV in all multiplets considered, which is in line with the value of the strange quark mass. The operator l_i g_{i8}, which contributes at O(ǫ N_c^0) in the 56-plet and at O(ǫ/N_c) in the 70-plet, carries coefficients of similar size but different sign. This issue can be further clarified when the role of B_2 is better established. Table I: List of operators and the coefficients (in MeV) resulting from the fit. The corresponding matrix elements between the states belonging to the [56, 2+] multiplet are given in Tables II and III.
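The ground-state strength of about 100 MeV quoted above can be reproduced with a short back-of-the-envelope computation; the normalization Σ_{i≠j} s(i)·s(j) = S(S+1) − 3·(3/4) for N_c = 3 is our reading of the text, and the ∆ − N splitting of 1232 − 939 MeV is standard input:

```python
# Pairwise hyperfine strength from the ground-state Delta - N splitting.
# For N_c = 3, sum_{i != j} s(i).s(j) = S^2 - sum_i s(i)^2 = S(S+1) - 3*(3/4).
def hyperfine_me(S, n_quarks=3):
    """Matrix element of sum_{i != j} s(i) . s(j) for total spin S."""
    return S * (S + 1) - n_quarks * 0.75

splitting = 1232.0 - 939.0  # Delta - N splitting in MeV (standard values)
strength = splitting / (hyperfine_me(1.5) - hyperfine_me(0.5))
print(f"pairwise hyperfine strength ~ {strength:.0f} MeV")  # ~ 98 MeV
```

The matrix element difference between ∆ (S = 3/2) and N (S = 1/2) is 3, so the 293 MeV splitting yields roughly 98 MeV, i.e. the "about 100 MeV" quoted in the text.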
Note that the operators used in the analysis of the [70, 1−] masses are reduced to the operators given here.

TABLE I: List of operators and the coefficients resulting from the fit with χ^2_dof = 0.7. [Fitted coefficients, in MeV, not recoverable from the extraction.]

TABLE II: Matrix elements of the SU(3) singlet operators.

State        O_1    O_2           O_3
^2 8_{3/2}   N_c    −3/(2 N_c)    3/(4 N_c)
^2 8_{5/2}   N_c     1/N_c        3/(4 N_c)
^4 10_{1/2}  N_c    −9/(2 N_c)   15/(4 N_c)
^4 10_{3/2}  N_c    −3/N_c       15/(4 N_c)
^4 10_{5/2}  N_c    −1/(2 N_c)   15/(4 N_c)
^4 10_{7/2}  N_c     3/N_c       15/(4 N_c)

TABLE III: Matrix elements of the SU(3) breaking operators B_1, B_2, B_3. [Not recoverable from the extraction.] Partial contributions left blank are equal to the one above in the same column.

TABLE IV: The 18 independent mass relations include the GMO relations for the two octets and the two EQS for each of the four decuplets. The accuracy is calculated as explained in the text.

At order ǫ the basis contains one O(N_c^0) operator, namely B_1, and two O(N_c^{-1}) operators, namely B_2 and B_3. Note that in the 56-plet there are no state mixings in the SU(3) symmetric limit; only the operator B_2 induces mixings. The mixings affect the octet and decuplet Σ(8), Σ(10) and Ξ(8), Ξ(10) states, and in the limit of isospin symmetry there are four mixing angles, namely θ_{Σ,Σ′}^J and θ_{Ξ,Ξ′}^J with J = 3/2, 5/2.

Acknowledgments: We thank Winston Roberts for useful comments on the manuscript. JLG thanks the Physics Department of TANDAR (Argentina) for the kind hospitality. This work was partially supported by …

References:
[1] R. Dashen and A. V. Manohar, Phys. Lett. B315 (1993) 425; Phys. Lett. B315 (1993) 438.
[2] G. 't Hooft, Nucl. Phys. B72 (1974) 461.
[3] E. Witten, Nucl. Phys. B160 (1979) 57.
[4] R. Dashen, E. Jenkins, and A. V. Manohar, Phys. Rev. D 49 (1994) 4713.
[5] R. Dashen, E. Jenkins, and A. V. Manohar, Phys. Rev. D 51 (1995) 3697.
[6] C. D. Carone, H. Georgi and S. Osofsky, Phys. Lett. B322 (1994) 227.
[7] M. A. Luty and J. March-Russell, Nucl. Phys. B426 (1994) 71.
[8] M. A. Luty, J. March-Russell and M. White, Phys. Rev. D 51 (1995) 2332.
[9] E. Jenkins, Phys. Lett. B315 (1993) 441.
[10] E. Jenkins and R. F. Lebed, Phys. Rev. D 52 (1995) 282.
[11] J. Dai, R. Dashen, E. Jenkins, and A. V. Manohar, Phys. Rev. D 53 (1996) 273.
[12] J. L. Goity, Phys. Lett. B414 (1997) 140.
[13] D. Pirjol and T. M. Yan, Phys. Rev. D 57 (1998) 1449.
[14] D. Pirjol and T. M. Yan, Phys. Rev. D 57 (1998) 5434.
[15] C. E. Carlson, C. D. Carone, J. L. Goity and R. F. Lebed, Phys. Lett. B438 (1998) 327; Phys. Rev. D 59 (1999) 114008.
[16] C. D. Carone, H. Georgi, L. Kaplan and D. Morin, Phys. Rev. D 50 (1994) 5793; C. E. Carlson and C. D. Carone, Phys. Lett. B441 (1998) 363; Phys. Rev. D 58 (1998) 053005.
[17] C. L. Schat, J. L. Goity and N. N. Scoccola, Phys. Rev. Lett. 88 (2002) 102002; J. L. Goity, C. L. Schat and N. N. Scoccola, Phys. Rev. D 66 (2002) 114014.
[18] C. E. Carlson and C. D. Carone, Phys. Lett. B484 (2000) 260.
[19] J. L. Gervais and B. Sakita, Phys. Rev. Lett. 52 (1984) 87; Phys. Rev. D 30 (1984) 1795.
[20] K. Bardakci, Nucl. Phys. B243 (1984) 197.
[21] D. Pirjol and C. Schat, hep-ph/0301187.
[22] T. D. Cohen and R. F. Lebed, hep-ph/0301219 and hep-ph/0301167.
Analysis of the [56, 2+] Baryon Masses in the 1/N_c Expansion (arXiv: hep-ph/0304167)

Authors: J. L. Goity, C. Schat, N. N. Scoccola. Affiliations: Department of Physics, Hampton University, Hampton, VA 23668, USA; Thomas Jefferson National Accelerator Facility, Newport News, VA 23606, USA; Department of Physics, Duke University, Durham, NC 27708, USA; Physics Department, Comisión Nacional de Energía Atómica, (1429) Buenos Aires, Argentina; Universidad Favaloro, Solís 453, (1078) Buenos Aires, Argentina; ECT*, Villa Tambosi, I-38050 Villazzano (Trento), Italy. † Fellow of CONICET, Argentina.

Abstract: The mass spectrum of the positive parity [56, 2+] baryons is studied in the 1/N_c expansion up to and including O(1/N_c) effects, with SU(3) symmetry breaking implemented to first order. A total of eighteen mass relations result, several of which are tested with the available data. The breaking of spin-flavor symmetry is dominated by the hyperfine interactions, while spin-orbit effects are found to be small.

The mixing angles are θ_{Σ,Σ′}^J and θ_{Ξ,Ξ′}^J with J = 3/2, 5/2. The physical states are given by Σ_J = Σ(8)_J cos θ_{Σ,Σ′}^J + Σ(10)_J sin θ_{Σ,Σ′}^J and Σ′_J = −Σ(8)_J sin θ_{Σ,Σ′}^J + Σ(10)_J cos θ_{Σ,Σ′}^J, and in a similar way for the cascades. The mixing angles are determined by the ratio of the matrix elements of the operator B_2 to the spin-flavor mass splitting induced by the O(N_c^{-1}) singlet operators. This implies that the mixing angles are O(ǫ N_c^0). The mixings affect the mass eigenvalues at O(ǫ^2/N_c), which is beyond the accuracy of the present analysis.
Depth-Optimal Synthesis of Clifford Circuits with SAT Solvers
Tom Peham [email protected], Nina Brandl [email protected], Richard Kueng [email protected], Robert Wille [email protected], Lukas Burgholzer [email protected]
Chair for Design Automation, Technical University of Munich, Germany; Institute for Integrated Circuits, Johannes Kepler University Linz, Austria; Software Competence Center Hagenberg GmbH, Austria
* Both authors contributed equally to this work.
Circuit synthesis is the task of decomposing a given logical functionality into a sequence of elementary gates. It is (depth-)optimal if it is impossible to achieve the desired functionality with even shorter circuits. Optimal synthesis is a central problem in both quantum and classical hardware design, but also plagued by complexity-theoretic obstacles. Motivated by fault-tolerant quantum computation, we consider the special case of synthesizing blocks of Clifford unitaries. Leveraging entangling input stimuli and the stabilizer formalism allows us to reduce the Clifford synthesis problem to a family of poly-size satisfiability (SAT) problems, one for each target circuit depth. On a conceptual level, our result showcases that the Clifford synthesis problem is contained in the first level of the polynomial hierarchy (NP), while the classical synthesis problem for logical circuits is known to be complete for the second level of the polynomial hierarchy (Σ_2^p). Based on this theoretical reduction, we formulate a SAT encoding for depth-optimal Clifford synthesis. We then employ SAT solvers to determine a satisfying assignment or to prove that no such assignment exists.
From that, the shortest depth for which synthesis is still possible (optimality) as well as the actual circuit (synthesis) can be obtained. Empirical evaluations show that the optimal synthesis approach yields a substantial depth improvement for random Clifford circuits and Clifford+T circuits for Grover search. I. INTRODUCTION Quantum computing is a computational paradigm that might offer computational advantages over classical algorithms for certain problems. State-of-the-art quantum computing hardware is still limited in scale, featuring a relatively small number of qubits that are prone to errors. While finding short-term applications for these noisy intermediate-scale quantum (NISQ) computers is an ongoing and vibrant research field [1], they cannot be used for advanced applications such as integer factoring (Shor; [2]), unstructured search (Grover; [3]), solving linear systems (HHL; [4]), or convex optimization [5]- [7]. To scale up quantum computing to longer computations with many qubits, fault-tolerant computation schemes have to be used that protect the information of a qubit against errors and allow the application of quantum gates with a low error rate, see e.g. [8]- [10]. One way to do this is using quantum error correcting codes and performing all computations in the Clifford+T gate-set [11]. The benefit of restricting the gate-set is that every gate can be performed in a fault-tolerant fashion on an appropriate code. Fault-tolerant quantum circuits can become quite large due to the overhead from breaking down every quantum computation into this restricted gate-set, with error syndrome extraction and error correction steps. For near-term applications they are usually optimized to have a minimal two-qubit gate count, as these gates tend to have the highest error rates. For fault-tolerant computations, a suitable performance metric is circuit depth because it directly correlates with the runtime of the computation.
The problem of synthesizing optimal circuits in the fault-tolerant regime has commonly been considered holistically, trying to minimize the entire circuit, or partially, by reducing the T-gate count only, see e.g. [13]- [16]. The general synthesis problem turns out to be very hard. Instead of minimizing the entire Clifford+T circuit, one can consider only the Clifford parts to try to make the general synthesis problem easier. In this work, we show that the problem of synthesizing depth-optimal Clifford circuits is, in fact, at most as hard as Boolean satisfiability (SAT). We achieve this by considering the synthesis problem for a larger circuit that receives a pairwise maximally entangled state as input. Although this theoretically increases the problem size, the utilization of the maximally entangled state allows for breaking down the synthesis problem in terms of stabilizers (Gottesman-Knill).

Fig. 1: (top) decomposition of one Grover block into Clifford+T gates; the actual synthesis is achieved using Qiskit's Grover class with a PhaseOracle [12]. (bottom) depth-optimized synthesis of all (nontrivial) Clifford blocks using the methods from this work. Our SAT approach certifies that the green and purple blocks are already depth-optimal. This is not the case for the central teal block, where our method yields considerable improvements (depth 9 vs. depth 5).

Based on this perspective, we formulate a SAT encoding for synthesizing and optimizing n-qubit Clifford circuits of maximal depth d_max with O(n^2 d_max) variables and O(n^4 d_max) constraints. For larger circuits, we furthermore develop heuristic optimization routines based on optimal SAT encodings that reduce Clifford circuit depth in a divide-and-conquer approach. We implemented the proposed methods and compared them to a state-of-the-art and openly available Clifford synthesis technique.
Results show that while the optimal synthesis scales only up to 5 qubits, the state of the art is, on average, two orders of magnitude away from the optimal depth. The heuristic approaches also yield better results than the state of the art and are much more scalable than the optimal approach. The efficacy of the methods for fault-tolerant quantum computations is illustrated by showing how the depth of Clifford+T circuits can be reduced by optimizing Clifford sub-circuits. All implementations are publicly available via the quantum circuit compilation tool QMAP [17], which is part of the Munich Quantum Toolkit (MQT) and accessible at https://github.com/cda-tum/qmap. The remainder of this work is structured as follows: Sec. II motivates the need for synthesizing depth-optimal Clifford circuits and provides context to the classical circuit synthesis problem. Then, Sec. III briefly goes over previous work on quantum circuit synthesis, especially in the Clifford case. In Sec. IV we show the main construction of this work and the reduction of Clifford synthesis to SAT before giving the full details of the SAT encoding in Sec. V. Based on that, we derive a heuristic optimization routine in Sec. VI. The evaluations of the methods introduced in this work are presented in Sec. VII. Finally, Sec. VIII concludes this work. II. BACKGROUND AND MOTIVATION To keep this work self-contained, the following sections review the central concepts of fault-tolerant quantum computing and circuit synthesis that are required throughout the rest of this work. A. Fault-Tolerant Quantum Computing Quantum computations are prone to errors from multiple sources, be it decoherence, information leakage or other kinds of noise. In order to protect quantum information, quantum error correcting codes (QECCs) have been introduced. Essentially, the information of a single logical qubit is encoded into multiple physical qubits, for which various different encoding schemes have been discovered.
Executing a quantum gate on a logical qubit is not a straightforward matter anymore, as it is not at all obvious how to extend the functionality of a physical gate to the logical level. Furthermore, applying a gate to a logical qubit should be possible in a fault-tolerant fashion, such that applying a gate only introduces errors that can be corrected with an arbitrarily high probability. The Clifford gate-set is a set of gates that can be applied transversally to many QECCs, i.e., applying a Clifford operation to a logical qubit can be done by performing that Clifford gate on every physical qubit individually. Therefore, if a gate application introduces an error onto a physical qubit, this error cannot spread throughout the encoded qubit and is simpler to correct. Any Clifford unitary can be obtained from a sequence of Hadamard, Phase, and CNOT gates. Notably, the Pauli X-, Y-, and Z-gates are also in the Clifford group. Clifford gates alone are not sufficient to perform universal quantum computation and can be classically simulated in polynomial time [11], [18]. However, adding a T-gate T = |0⟩⟨0| + e^{iπ/4} |1⟩⟨1| suffices to achieve universality. A T-gate can be realized fault-tolerantly using the magic state distillation protocol [19] and gate teleportation. A fault-tolerant quantum computation can therefore be realized by alternating blocks of Clifford gates and blocks of T-gates, where error syndromes are detected and corrected throughout the computation. Fig. 1 shows the quantum circuit for a 3-qubit Grover search that is implemented using the Clifford+T gate-set on the logical qubits |q_0⟩, |q_1⟩ and |q_2⟩. In this fault-tolerant regime, the most important performance metric is the circuit depth. Assuming gates on different qubits can be executed in parallel, the depth directly determines the runtime of the computation. As the cycle time can vary considerably depending on the technology the quantum computer is built upon, reducing the depth of a circuit can reduce the cost of a computation drastically. Example 1. The circuit in Fig. 1a has 3 Clifford blocks of interest. Fig. 1b shows the same circuit where each block was synthesized to have minimal depth. As one can see, the two outermost blocks were already depth-optimal in the original circuit. Although this looks intuitively true, it still needs some form of proof. The Clifford block in the middle, however, has a depth of 9 (the first Hadamard and CNOT gate are parallel) and can be optimized to depth 5. High-depth Clifford blocks like this appear frequently when synthesizing circuits using state-of-the-art synthesis tools for quantum computing¹. The quest of finding such depth-optimal realizations of Clifford circuits will be our primary motivation in this work, as the restricted nature of these circuits lends itself nicely to classical design automation methods such as SAT and SMT solvers. B. From Classical to Quantum Synthesis Before diving into the details of our quantum protocol, it is worthwhile to briefly review classical equivalence checking of logical circuits (or functions), as well as classical circuit synthesis. This will provide guidance and also motivate our use of SAT solvers for quantum circuit synthesis. 1) Classical Equivalence Checking and SAT: Let C, C′ be two circuits with n input bits. We say that these circuits are equivalent (C ≃ C′) if and only if they produce the same output for each conceivable input. In formulas, ∀x ∈ {0,1}^n : Cx = C′x. Taking the negation yields C ̸≃ C′ ⇔ ∃x ∈ {0,1}^n s.t. Cx ̸= C′x. The logical not-equality Cx ̸= C′x can readily be converted into a Boolean function ϕ_{C,C′}(x) with input x. This highlights an interesting one-to-one correspondence between (the negation of) circuit equivalence and the satisfiability problem (SAT).
On a theoretical level, SAT is a hard problem that is complete for the problem class NP. Nevertheless, a plethora of heuristic SAT solvers [20] perform very well in practice. 2) Classical Optimal Synthesis and QBF: Circuit synthesis aims to find a logical circuit that implements a desired n-bit target functionality with as few elementary gates as possible. Here, we will focus on circuit depth (i.e. layers of gates). A decision version of this optimization problem looks as follows: ∃C_d with depth(C_d) ≤ d_max : ∀x ∈ {0,1}^n, C_d x = Cx, (1) where C_d is a placeholder for another logical circuit. In words, this formula evaluates to true if and only if it is possible to exactly reproduce the functionality of circuit C with another logical circuit C_d that obeys depth(C_d) ≤ d_max. Multiple queries to this logical function with different values of d_max allow us to determine the optimal depth of any circuit synthesis, e.g. via binary search. It is also possible to rephrase Eq. (1) as a quantified Boolean formula (QBF). For starters, note that we can represent any logical circuit C_d by a binary encoding y of length (at most) poly(n d_max). Different bit strings y give rise to different circuits C_d and vice versa. With this binary encoding at hand, we can adapt the Boolean function reformulation of equivalence checking to the case at hand: ϕ_C(y, x) = 1 if Cx = C_d x and C_d is the circuit encoded by y. Otherwise, this formula evaluates to 0. Putting everything together, we obtain the following QBF reformulation of logical circuit synthesis: ∃y ∈ {0,1}^{poly(n d_max)} ∀x ∈ {0,1}^n : ϕ_C(y, x) = 1. (2) The exists quantifier (∃) ranges over all possible logical circuit encodings with depth at most d_max (C_d ↔ y), while the forall quantifier (∀) ranges over all 2^n possible input bitstrings. Such QBFs are, in general, much harder to handle than a mere SAT problem with only one type of quantifier. In fact, the reformulated circuit synthesis problem Eq.
(2) is complete for the problem class Σ_2^p, one branch of the second level of the polynomial hierarchy [21]. Unless the polynomial hierarchy collapses to the first level (which is widely believed to be false), these problems are much harder than SAT and, by extension, equivalence checking. QBFs do arise naturally in a variety of contexts [22]. Solvers do exist, see e.g. [23]- [25], and typically rely on the counterexample-guided inductive synthesis principle (CEGIS) [26], which has its roots in abstraction refinement (CEGAR) [27], [28]. For program synthesis, for example, CEGIS-style solvers alternate between generating candidate programs and checking them for counter-examples. 3) Going Quantum: Circuit Equivalence and Synthesis: The two classical challenges we just discussed have natural counterparts in the quantum realm. Quantum circuits that act on n qubits are also comprised of elementary (quantum) gates, but their functionality is radically different. We say that two such circuits U, V are equivalent if and only if U ≃ V ⇔ ∀|ψ⟩ ∈ C^{2^n} : U|ψ⟩ = e^{iϕ} V|ψ⟩, where ϕ ∈ [0, 2π) is a complex phase. In contrast to (classical) logical circuits, there are infinitely many possible input states |ψ⟩ to be checked.² Consistency and mutual interrelations permit us to compress this number to (at most) 4^n disjoint input states [30]- [34]. This number is, however, still exponential in the total number of qubits. It is known that the (negated) unitary equivalence problem is QMA-complete [35] (QMA is the appropriate quantum generalization of the classical problem class NP). So, at face value, the quantum circuit equivalence problem looks even harder than its classical counterpart. Suppose that we are given a target unitary U, e.g. in the form of a high-level quantum circuit, and we want to decompose it into as few gate layers as possible (i.e. circuit depth). This practical problem occurs whenever we want to execute a high-level unitary (e.g.
a quantum algorithm) on an actual n-qubit quantum computer. The details of this synthesis problem depend on the type of elementary gate-set, but it always produces a two-fold quantified problem ∃U_d with depth(U_d) ≤ d_max : ∀|ψ⟩, U_d|ψ⟩ = e^{iϕ} U|ψ⟩, that resembles Eq. (1) with non-binary quantum states and a complex phase ϕ ∈ [0, 2π). This problem is at least as hard as logical gate synthesis, because it includes (reversible) encodings of the latter one as a special case. Reversibility of quantum circuits allows us to slightly streamline this display: ∃U_d with depth(U_d) ≤ d_max : ∀|ψ⟩, U U_d†|ψ⟩ = e^{iϕ}|ψ⟩, (3) where U_d† is the adjoint or reverse circuit of U_d. For some special cases like Clifford circuits, we can recast Eq. (3) as a logical Boolean formula whose intrinsic difficulty is on par with logical circuit equivalence or SAT, as shown in Sec. IV. The difficulty level is then much lower than general classical circuit synthesis. Empowered by this rigorous theoretical insight, we then employ state-of-the-art SAT solvers to address the full Clifford synthesis problem. III. RELATED WORK Quantum circuits comprised of universal gate-sets are universal in the sense that they can approximate every unitary evolution. Quantum gate synthesis can be viewed as a quantitative take on this very issue: what is the best way to implement a given unitary evolution, e.g. a quantum computation? The celebrated Solovay-Kitaev theorem [8], [36] can be viewed as a very general synthesis protocol for arbitrary single-qubit unitaries (n = 1) and arbitrary universal gate-sets. Extensions to n qubits can be achieved by either combining single-qubit gate synthesis with certain entangling multi-qubit gates, or by directly generalizing the Solovay-Kitaev algorithm to d = 2^n-dimensional unitaries [36]. Although implementations do exist, see e.g. [37], this synthesis algorithm is typically too slow for practical purposes.
As a result, the community has moved away from this rigorous meta-algorithm and towards more scalable heuristics. Many of them address the universal gate-set comprised of Clifford+T gates, see e.g. [13]- [16]. Clifford circuits (without T gates) are an interesting special case in this context. Mathematically speaking, they form a representation of a finite symplectic group [38]- [40] with additional structure. For instance, it is known that every Clifford unitary can be decomposed into a Clifford circuit of depth at most O(n) [41]. Such insights highlight that Clifford circuits cannot be overly complex, a feature that also extends to Clifford synthesis. For the special case of n = 6 qubits, competitive Clifford synthesis protocols have been put forth in [42]. For general n, the algorithms by Koenig and Smolin can be used to associate a given Clifford circuit with exactly one element of the Clifford group. And, subsequently, their algorithm can be used to synthesize this very group element. More recently, the group of Robert Calderbank developed Clifford synthesis algorithms that even work on the logical level (i.e. on top of an error correcting stabilizer code) [43], while Bravyi et al. discovered constant-depth representations of arbitrary Clifford circuits, under the assumption that one is allowed to use global entangling operations of Ising type [44]. While the aforementioned approaches do produce a provably correct decomposition of Clifford circuits into elementary (Clifford) gates, it is not clear whether size and depth are (close to) optimal. This is where reformulations in terms of SAT/QBF can make a significant difference. They reformulate the gate synthesis problem as a family of quantified Boolean formulas (QBFs), one for each maximum circuit depth d_max we allow. Such a QBF evaluates to true if and only if an exact circuit representation with depth (at most) d_max is possible and returns the explicit representation. Otherwise, it evaluates to false.
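Once a per-depth feasibility query is available, the minimal depth can be located by binary search. The sketch below assumes a monotone oracle; `can_synthesize` is a hypothetical stand-in for one solver call per candidate depth, not an interface of any real tool:

```python
# Binary search for the minimal circuit depth given a per-depth
# feasibility oracle (one SAT/QBF solver call per candidate depth).
def minimal_depth(can_synthesize, d_upper):
    """Smallest d in [0, d_upper] with can_synthesize(d) True.

    Assumes monotonicity: if depth d suffices, so does any d' > d.
    """
    lo, hi = 0, d_upper
    while lo < hi:
        mid = (lo + hi) // 2
        if can_synthesize(mid):
            hi = mid        # feasible: try shallower circuits
        else:
            lo = mid + 1    # infeasible: more depth is needed
    return lo

# Stub oracle: pretend the target circuit needs depth 5 (cf. Example 1).
print(minimal_depth(lambda d: d >= 5, d_upper=9))  # -> 5
```

With d_upper in O(n) for Clifford circuits, this needs only logarithmically many oracle calls.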
Attempting to solve these QBFs for different depths bears the potential of identifying the best circuit representation of a given functionality. This is why SAT/QBF-based synthesis approaches have long been a mainstay in classical design automation [45]. In fact, the idea of combining SAT-based synthesis with Clifford circuits is not entirely new. In Ref. [29], a subset of authors proposes this very idea for optimal stabilizer state preparation: find the shortest Clifford circuit that takes |0, . . . , 0⟩ as input and produces a known target stabilizer state. The ideas presented here may be viewed as an extension of these earlier ideas to full Clifford circuit synthesis. In addition, we also supply rigorous proofs of correctness and provide additional context, as well as background. IV. MAIN RESULT AND THEORETICAL UNDERPINNING Our main conceptual result is a one-to-one correspondence between Clifford synthesis, on the one hand, and Boolean satisfiability (SAT) on the other. The main result is displayed in Theorem 1 and originates from two genuinely quantum twists to the original synthesis question: (i) maximally entangled input stimuli and (ii) the Gottesman-Knill theorem. It forms a rigorous foundation for optimal Clifford circuit synthesis with SAT, the topic of Sec. V below. A. Quantum twist 1: maximally entangled input states The first step in our theoretical argument goes by many names, including the Choi-Jamiolkowski isomorphism [46], [47], entanglement-assisted process tomography [48], and the flattening operation in tensor analysis [49], [50]. Conceptually, we take two quantum circuits U and V on n qubits. Instead of testing all possible input states, we create a single universally valid input state to check their equivalence. As a first thought experiment, we consider a test circuit twice as large and apply U to the top n qubits and the inverse of each gate in V to the remaining n qubits.
By entangling the kth input qubit of U with the kth input qubit of V, we create a new state |ω_2n⟩ on 2n qubits. As U and V operate on entangled qubits, all changes applied by the first circuit will be reverted by the second circuit if and only if the two circuits have the same functionality up to a global phase. If their unitaries instead differ at any point, the resulting state will not be equal to |ω_2n⟩ again. The idea of two circuits with identical unitaries reverting each other's changes still applies when we let U and V† both act on the top n qubits. Lemma 1. Two n-qubit circuits U and V are equivalent up to a global phase if and only if ((U V†) ⊗ I^{⊗n}) |ω_2n⟩⟨ω_2n| ((U V†)† ⊗ I^{⊗n}) = |ω_2n⟩⟨ω_2n|. (4) Here, |ω_2n⟩ is the tensor product of n 2-qubit Bell states that entangle the kth qubit with the (n + k)th qubit for all k ∈ [n]. Proof. Note that the Bell state is proportional to the vectorized identity matrix, i.e. |ω_2n⟩ √(2^n) = vec(I^{⊗n}). The operator-vector correspondence, see e.g. [49, Eq. (1.132)], then asserts (U V† ⊗ I_n) |ω_2n⟩ = vec(U V†)/√(2^n). This ensures that U V† = e^{iϕ} I_n (as a matrix-valued equality) if and only if (U V† ⊗ I) |ω_2n⟩ = vec(U V†)/√(2^n) = e^{iϕ} |ω_2n⟩. Applying Lemma 1 to the quantum circuit synthesis formula Eq. (3) gets rid of the complex phase and the forall quantifier over all possible input states: ∃U_d with depth(U_d) ≤ d_max : ((U U_d†) ⊗ I) Ω ((U U_d†)† ⊗ I) = Ω, (5) where we have used Ω as a shorthand notation for |ω_2n⟩⟨ω_2n|. However, for the erasure of an entire forall quantifier, we have to go to the mixed-state formalism and effectively double the number of qubits involved from n to 2n, see Fig. 2. B. Quantum twist 2: Gottesman-Knill theorem At first sight, Eq. (5) looks much less daunting than Eq. (3) or even its classical counterpart Eq. (1). This is due to the fact that entanglement allows us to check for equivalence with a single input state (Lemma 1) instead of a forall (∀) over exponentially many possibilities.
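The criterion of Lemma 1 can be checked numerically for small circuits. The sketch below uses a single qubit, so |ω_2⟩ is one Bell pair; the gate choices are our own toy examples, not circuits from this work:

```python
import numpy as np

# Lemma 1, n = 1: (U V^dag ⊗ I)|omega> equals e^{i phi}|omega> exactly
# when U and V agree up to a global phase; we test |<omega|out>| = 1.
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])
omega = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)

def equivalent_up_to_phase(U, V):
    out = np.kron(U @ V.conj().T, I2) @ omega
    return np.isclose(abs(np.vdot(omega, out)), 1.0)

U = H @ S @ H
V = np.exp(1j * 0.3) * H @ S @ H   # same circuit, different global phase
print(equivalent_up_to_phase(U, V))   # True
print(equivalent_up_to_phase(H, S))   # False
```

The overlap ⟨ω|out⟩ equals tr(U V†)/2 here, so its magnitude reaches 1 precisely for U V† proportional to the identity.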
To exploit this reformulation, there are two broad avenues on how to proceed: (i) Use an actual quantum computer to empirically check the single remaining equivalence in Eq. (5). This avenue was taken, for instance, in quantum assisted quantum compiling [51]. (ii) Use (strong) classical simulation of quantum circuits to check whether the two 2n-qubit states in Eq. (5) are really equivalent. Four broad simulation approaches come to mind: array-based [52], stabilizer-based [18], tensor networks [53] and decision diagrams [54], [55]. Here, we follow the second avenue and adopt stabilizer-based simulation. The main reason for this is that Ω = |ω_2n⟩⟨ω_2n|, the 2n-qubit state responsible for all these simplifications, is itself a stabilizer state with very desirable structure. Fact 1. The 2n-qubit state Ω = |ω_2n⟩⟨ω_2n| is a stabilizer state with generators ⟨(X I^⊗(n−1))^⊗2, (Z I^⊗(n−1))^⊗2⟩, where the parentheses ( · ) stand for cyclic permutation. It is well known that the 2-qubit Bell state is a stabilizer state with generators ⟨XX, ZZ⟩ [10]. Fact 1 follows from taking the n-fold tensor product of these generators at appropriate qubit locations. For n = 2, for example, this means ⟨XIXI, ZIZI, IXIX, IZIZ⟩, and for n = 3 it means ⟨XIIXII, ZIIZII, IXIIXI, IZIIZI, IIXIIX, IIZIIZ⟩, etc. In every case, the kth qubit is entangled with the (n + k)th qubit. According to the Gottesman-Knill theorem [56], we can efficiently simulate stabilizer circuits (of which Clifford circuits are a part) in polynomial time on a classical computer. To simulate the circuit's action on the 2n-qubit stabilizer state Ω, we perform a logical mapping of stabilizers depending on the gates and again obtain a stabilizer state. We can conclude whether the applied circuits performed the identity based on the equality of the input and output generators. Fact 2 (Gottesman-Knill). Suppose that both U and U_d are n-qubit Clifford circuits. 
Then, it is possible to efficiently check ((U U_d†) ⊗ I^⊗n) Ω ((U U_d†) ⊗ I^⊗n)† = Ω. (6) This also follows from the conceptual idea of the entangled input: if U and U_d have the same functionality, U_d† inverts the stabilizer mapping done by U, which results in not altering the input stabilizers at all. Note that input and output generators must match exactly. As the bottom half of this 2n-qubit circuit applies the identity, it will not alter the generators on the last n qubits. Therefore, we can also cut the generators we need to check in half, eliminating the ⊗2 operation in Fact 1. More precisely, it is possible to construct a Boolean function ϕ_U,Ω(U_d) that evaluates to 1 if the input circuit U_d achieves Eq. (6) and 0 otherwise. We will present such an explicit construction in Sec. V below. All that matters at this point is that we can represent any (at most) depth-d_max Clifford unitary U_d with a bitstring y ∈ {0, 1}^l that contains (at most) l(n, d_max) = O(n^2 d_max) Boolean variables. And, what is more, we can actually represent ϕ_U,Ω(y) as a CNF with O(n^4 d_max) clauses of constant length. Putting all this together ensures that we can rewrite the Clifford circuit synthesis problem as ∃ y ∈ {0, 1}^l(n,d_max) : ϕ_U,Ω(y) = 1. (7) C. Main result and synopsis The insights culminating in Eq. (7) are worth a prominent display and a bit of additional context. Theorem 1 (SAT reformulation of Clifford synthesis). Let U be an n-qubit Clifford unitary (target) and fix a maximum depth d_max ∈ N. Then, the decision problem "is it possible to exactly reproduce U with (at most) d_max Clifford layers?" can be rephrased as an instance of SAT with O(n^2 d_max) variables and O(n^4 d_max) clauses of constant size each. This insight has both conceptual and practical implications, especially if one keeps in mind that d_max ≤ O(n) for any Clifford circuit [41]. In turn, binary search allows for exactly determining the minimum circuit depth for a given Clifford unitary U by solving (at most) ⌈log_2(n)⌉ + O(1) SAT reformulations for varying ansatz depths d_max. What is more, satisfying assignments of the Boolean formula in Eq. (7) are bit encodings of actual Clifford circuits that exactly reproduce U and have depth at most d_max (synthesis). 
On a conceptual level, this approach provides a poly-time reduction of optimal Clifford synthesis to (logarithmically many instances of) SAT. This highlights that this special case is much easier than the general circuit synthesis problem (classical and quantum). In particular: optimal Clifford synthesis is at most as hard as classical circuit equivalence checking. On a practical level, Theorem 1 provides a rigorous and context-specific motivation for employing state of the art SAT solvers to address the Clifford synthesis problem. An actual step-by-step introduction to this encoding, as well as benchmarks, is the content of the remainder of this article. V. OPTIMAL CLIFFORD CIRCUIT SYNTHESIS WITH SAT The efficient simulability of Clifford circuits is based on the stabilizer tableau encoding of a stabilizer state, see e.g. [10] and references therein. This polynomial-sized representation of a quantum state is the key to deriving a polynomially-sized SAT encoding for the considered synthesis problem. Hence, before going into details on the encoding itself, we will give a brief recap on how to work with stabilizer tableaus. A. Stabilizer Tableau Representation of Stabilizer States An n-qubit stabilizer state can be represented by an n × (2n + 1) binary matrix called the stabilizer tableau. The idea is that every stabilizer generator for a state can be written using 2n + 1 bits of information. In the standard notation [18] for stabilizer tableaus, there are binary variables x_i,j, z_i,j for Pauli X- and Z-type stabilizers, with i, j ∈ {0, 1, . . . , n − 1}, and r_i = 1 if the i-th stabilizer has a negative phase. For a Pauli-Y type stabilizer at position i, j, both x_i,j and z_i,j must be set to 1. 
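As an illustration of this bit convention (a hypothetical helper, not taken from the reference implementation), a Pauli string maps to its x- and z-bit rows as follows:

```python
def pauli_to_bits(pauli):
    """Encode a Pauli string into (x, z) bit rows: X sets the x-bit,
    Z sets the z-bit, and Y sets both, matching the tableau convention."""
    x = [1 if p in "XY" else 0 for p in pauli]
    z = [1 if p in "ZY" else 0 for p in pauli]
    return x, z

print(pauli_to_bits("XYZ"))  # ([1, 1, 0], [0, 1, 1])
```

Reading the rows back, a position where both bits are 1 is a Pauli Y, exactly as stated above.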
The stabilizer states in this format can be altered by the usual Clifford gates via the following update rules, where ⊕ denotes a bitwise XOR operation: • Applying H on qubit j swaps the j-th X-type column with the j-th Z-type column and sets r ⊕= x_j z_j. This follows from the transformations HXH† = Z, HZH† = X and HY H† = −Y; hence it switches x_i,j ↔ z_i,j for all i and flips the phase only in case of a Pauli-Y stabilizer; • Applying S on qubit j is a bitwise XOR of the j-th X-type column onto the j-th Z-type column, z_j ⊕= x_j, again with r ⊕= x_j z_j; • Applying CNOT on control qubit c and target qubit t is a bitwise XOR of the c-th X-type column onto the t-th X-type column, x_t ⊕= x_c, and vice versa for the Z-type, z_c ⊕= z_t, as well as r ⊕= x_c z_t (x_t ⊕ z_c ⊕ 1). Further update rules for any other Clifford gate can be derived from these basic rules. More information about a stabilizer state can be encoded into the tableau by including its destabilizers [18]. These are Pauli strings that, together with the stabilizers, generate the entire Pauli group. They are treated identically to the stabilizer generators for the purpose of updating the stabilizer tableau. B. Tableau and Gate Variables In the following, let Q be the set of qubits acted on by a quantum circuit and d_max the maximal depth of the circuit. While all Clifford unitaries can be obtained from just H, S and CNOT gates, the target gate-set used for compilation may conveniently also include other gates like the Pauli X, Y and Z operations or two-qubit gates like the CZ gate. To reflect this flexibility in the encoding, we define two sets SQGs and TQGs, the sets of single-qubit gates and two-qubit gates, respectively, such that they can be used to implement any Clifford circuit. At every layer of the quantum circuit, a certain gate can either be applied or not. 
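These update rules can be sketched directly in code. The following illustrative Python re-implementation (our own; function and variable names are not the paper's) stores one bit-list per qubit column plus one phase bit per generator, and updates the phase before the columns so that each rule uses the old bit values:

```python
def apply_h(x, z, r, j):
    """Hadamard on qubit j: r ^= x_j z_j, then swap the X- and Z-columns."""
    for i in range(len(r)):
        r[i] ^= x[j][i] & z[j][i]
    x[j], z[j] = z[j], x[j]

def apply_s(x, z, r, j):
    """Phase gate on qubit j: r ^= x_j z_j, then z_j ^= x_j."""
    for i in range(len(r)):
        r[i] ^= x[j][i] & z[j][i]
        z[j][i] ^= x[j][i]

def apply_cnot(x, z, r, c, t):
    """CNOT with control c, target t: r ^= x_c z_t (x_t ^ z_c ^ 1),
    then x_t ^= x_c and z_c ^= z_t."""
    for i in range(len(r)):
        r[i] ^= x[c][i] & z[t][i] & (x[t][i] ^ z[c][i] ^ 1)
        x[t][i] ^= x[c][i]
        z[c][i] ^= z[t][i]

# Demo: |00> stabilizers ZI, IZ -> after H(0) and CNOT(0, 1): XX, ZZ (Bell state).
x = [[0, 0], [0, 0]]   # x[qubit][generator]
z = [[1, 0], [0, 1]]
r = [0, 0]
apply_h(x, z, r, 0)
apply_cnot(x, z, r, 0, 1)
print(x, z, r)  # [[1, 0], [1, 0]] [[0, 1], [0, 1]] [0, 0]
```

Propagating the |00⟩ stabilizers ZI and IZ through H on qubit 0 followed by CNOT(0, 1) thus yields the Bell-state generators XX and ZZ. Whether a given gate, and hence a given update rule, fires at a given layer is itself a binary decision.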
This suggests introducing the variables Svars = {g^d_q | g ∈ SQGs, q ∈ Q, 0 ≤ d < d_max} and Tvars = {g^d_{q0,q1} | g ∈ TQGs, q0 ∈ Q, q1 ∈ Q \ {q0}, 0 ≤ d < d_max}, representing the application of a gate to a specific qubit (or pair of qubits) at depth d. The possible stabilizer tableaus are encoded in a straightforward fashion according to their definition. The Z-, X- and R-parts of the tableau use the variables Zvars = {z^d_q | q ∈ Q, 0 ≤ d < d_max}, Xvars = {x^d_q | q ∈ Q, 0 ≤ d < d_max}, Rvars = {r^d | 0 ≤ d < d_max}, where every element of the sets Zvars, Xvars and Rvars is a bitvector. These bitvectors encode how all stabilizers act on the Z-, X- and R-parts for a particular qubit. Based on the construction in Lemma 1, this encoding requires 2n qubits in order to guarantee that all circuits synthesized from these variables have the same unitary. But having these n additional qubits has the undesirable side-effect that the synthesized circuit should act as the identity on the lower n qubits of the circuit. This unnecessarily blows up the search space, as the identity can be implemented in many different ways. One could enforce constraints on these qubits, but this would unnecessarily increase the size of the encoding. We can avoid this complication by considering only the upper n qubits and switching from stabilizers of the entangled input state to stabilizers and destabilizers of the |0⟩^⊗n state, as stated by the following fact. Fact 3. For a Clifford unitary U, the stabilizers of (U ⊗ I^⊗n)|ω_2n⟩ on the first n qubits are identical to the stabilizers and destabilizers of U|0⟩^⊗n. Together with Fact 1, Lemma 1 tells us that for a Clifford circuit U the 2n stabilizers of (U ⊗ I^⊗n)|ω_2n⟩ uniquely fix the unitary of the circuit. Given U, we can explicitly calculate these stabilizers by propagating the generators for Ω through U ⊗ I^⊗n, ignoring the lower n qubits. This boils down to only analysing the first half of every stabilizer generator of the 2n-qubit state Ω. 
The result is a 2n × (2n + 1) tableau for every given U. The initial tableau has diagonal entries with value 1 and coincides with the stabilizers of the |0⟩^⊗n input state (Z-type) combined with the respective destabilizers (X-type). Hence, we can encode our problems using only |Q| = n qubits, and each bitvector for the tableau variables has size 2n since the information about the destabilizers has to be included as well. C. Transition Relation With the variables defined, we can now encode how gates act on the stabilizer tableaus as described in Sec. V-A. Naturally, the transition between tableaus would then be a constraint along the lines of g^d_q ⇒ (UpdateZ(g, q, d) ∧ UpdateX(g, q, d) ∧ UpdateR(g, q, d)), where the update formulas on the right encode the action of the gate on a qubit at a certain depth. For a Hadamard this would mean UpdateZ(H, q, d) = (z^{d+1}_q ⇔ x^d_q), UpdateX(H, q, d) = (x^{d+1}_q ⇔ z^d_q), UpdateR(H, q, d) = (r^{d+1} ⇔ (r^d ⊕ (x^d_q ∧ z^d_q))). While this encoding is correct, it is also wasteful in the sense that it would lead to |SQGs| + |TQGs| implications of this type for every qubit and depth. This number can be decreased significantly by noting that many gates act identically on the different parts of the stabilizer tableau (quantum computation is local). The Pauli gates, for example, act as identity on the Z- and X-parts of the tableau, only differing in how they change the R-part. Since we know our gate-set, we can collect all possible transformations of the individual parts of the tableau a priori. Let Z-updates(q, d) be the set of all possible updates to the Z-part of the stabilizer tableau on qubit q at depth d. The elements of these sets are logical formulas over the tableau variables. We can then define a mapping Z-impliedby(q, d) : Z-updates(q, d) → P(Svars) that maps every update formula to the set of single-qubit gate variables that act on the stabilizer tableau with that update rule. 
The single-qubit changes to the Z-part are then encoded by introducing the following constraint for every qubit q, depth 0 ≤ d < d_max − 1 and Z-update ∈ Z-updates(q, d): (∨_{g^d_q ∈ Z-impliedby(Z-update)} g^d_q) ⇒ (z^{d+1}_q ⇔ Z-update). Obviously, this can be done in a similar fashion for the X- and R-parts of the tableau as well as for two-qubit gates. While the constraints at this point encode all possible stabilizer tableaus, there are some variable assignments that lead to invalid circuits, e.g. when a qubit is acted on by two gates at the same depth. We therefore need to introduce another set of constraints for every depth d and qubit q to ensure consistency of the obtained solution: ExactlyOne({g^d_q | g ∈ SQGs} ∪ {g^d_{q,q1} | q1 ∈ Q, g ∈ TQGs} ∪ {g^d_{q0,q} | q0 ∈ Q, g ∈ TQGs}). D. Symmetry Breaking Symmetry breaking [57] is a widespread technique from the SAT-solving community. It introduces additional constraints to an existing CNF formula to avoid searching in symmetric parts of the search space. This can be done in an automated fashion by analyzing the formula for automorphisms to obtain so-called "symmetry breakers". Doing this automatically has the downside that it is not clear which symmetries are found and whether the deduced constraints actually make the SAT instance any easier to solve, or even harder. In the case of the SAT formulation above, we can obtain symmetry breakers manually by using knowledge specific to Clifford synthesis. We can impose additional constraints on the SAT solver by eliminating valid solutions that could be expressed in a simpler manner. For example, the Hadamard gate is self-inverse. We can therefore add the constraint H^d_q ⇒ ¬H^{d+1}_q for q ∈ Q and 0 ≤ d < d_max − 1. This eliminates all assignments to the gate variables that model a sequence of two consecutive Hadamard gates. 
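Both the ExactlyOne consistency constraint and symmetry breakers of this kind translate directly into CNF clauses. A minimal sketch (illustrative, not the paper's encoder; positive integers denote variables and negative integers their negations, as in the DIMACS convention, and the variable mapping h_var is hypothetical):

```python
from itertools import combinations

def exactly_one(literals):
    """ExactlyOne as CNF: one at-least-one clause plus pairwise
    at-most-one clauses (a simple quadratic-size encoding)."""
    clauses = [list(literals)]                 # at least one literal is true
    for a, b in combinations(literals, 2):     # no two literals are both true
        clauses.append([-a, -b])
    return clauses

def hadamard_breakers(h_var, n_qubits, d_max):
    """Clauses for H^d_q => not H^{d+1}_q, where h_var(q, d) is any
    injective map from (qubit, depth) to positive CNF variables."""
    return [[-h_var(q, d), -h_var(q, d + 1)]
            for q in range(n_qubits) for d in range(d_max - 1)]

print(exactly_one([1, 2, 3]))  # [[1, 2, 3], [-1, -2], [-1, -3], [-2, -3]]
```

An implication a ⇒ ¬b is the single binary clause (¬a ∨ ¬b), so each breaker adds only one clause per qubit and pair of consecutive depths.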
Another symmetry addresses possible degrees of freedom in the gate ordering and is best illustrated by two equivalent circuits on qubits q0 and q1: in the first, H on q0 is applied in parallel to X on q1 (with S on q1 following); in the second, H on q0 is applied in parallel to S on q1 (with X on q1 preceding). The Hadamard on the first qubit can either be parallel to the X gate or the S gate on the second qubit. We can break this symmetry by imposing that the identity single-qubit gate cannot be followed by a non-identity single-qubit gate. Otherwise, the non-identity gate could be moved to the left without changing the Clifford unitary. More formally, for q ∈ Q and 0 ≤ d < d_max − 1 we impose I^d_q ⇒ ∧_{g ∈ SQGs \ {I}} ¬g^{d+1}_q, where I is the identity gate. A similar constraint can be imposed on two-qubit gates. If the identity is applied to a pair of qubits, no two-qubit gate can come after the identities. Again, we add the constraint (I^d_{q0} ∧ I^d_{q1}) ⇒ ∧_{g ∈ TQGs} ¬g^{d+1}_{q0,q1} for q0 ∈ Q, q1 ∈ Q \ {q0}, 0 ≤ d < d_max − 1. There are many more symmetries that can be broken in the encoding. In principle, any Clifford gate identity can be used to derive a symmetry breaking constraint. However, a trade-off has to be made between the number of constraints and the size of the solution space. E. Optimizing Circuit Depth The above encoding can be used to synthesize circuits that have at most a depth of d_max, but it does not necessarily synthesize depth-optimal circuits. This is only guaranteed if d_max is exactly the optimal depth, which has to be determined first. One approach would be to start with an initial guess for d_max and iteratively decrease it until the corresponding SAT instance has a solution but the SAT instance for d_max − 1 does not. A (theoretically) efficient way to achieve just that is binary search. The original circuit's depth can be used as an upper bound. If this is far away from the optimum, however, it can lead to the generation of instances that are tough to solve. Instead, the upper bound can be determined dynamically. 
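One dynamic strategy can be sketched as follows (our own illustration; the oracle is_sat(d) stands in for solving the SAT instance with ansatz depth d and is assumed monotone, i.e. once a depth is satisfiable, all larger depths are): grow the horizon geometrically until a satisfiable depth is found, then binary-search the remaining interval.

```python
def minimal_depth(is_sat, start=1):
    """Find the smallest d with is_sat(d) == True, assuming monotonicity."""
    hi = start
    while not is_sat(hi):              # geometrically grow the depth horizon
        hi *= 2
    lo = hi // 2 + 1 if hi > start else 0
    while lo < hi:                     # binary search on the monotone predicate
        mid = (lo + hi) // 2
        if is_sat(mid):
            hi = mid
        else:
            lo = mid + 1
    return hi
```

For an optimum of 5, the oracle is queried at depths 1, 2, 4 and 8, and binary search then narrows the interval [5, 8] down to 5.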
For a related problem (state preparation circuits), binary search has been explored in Ref. [29], where it was also proposed to geometrically increase the depth horizon in which a solution is searched for. Unfortunately, after a few iterations the SAT calls will be quite costly, and this strategy only promises to speed up the entire optimization if the runtime to solve a SAT instance grows sub-exponentially. In case of exponential growth, simply searching linearly or in an arithmetic progression is faster. Yet another way of gauging the initial depth is from empirical knowledge. If it is known that, on average, the SAT method produces solutions that are 20% shallower than the output circuit of another optimization routine, we can simply run that routine and start with the expected depth as an initial guess. From this initial guess, linear or quadratic probing can be employed to find the optimal solution. VI. HEURISTIC APPROACH VIA CIRCUIT DECOMPOSITION Above, we have seen how to reformulate Clifford synthesis as a SAT problem. However, the search space for that problem grows exponentially with the maximal circuit depth d_max and the number of qubits n. Depending on the specific SAT solver being used, this exact synthesis approach can quickly become prohibitively expensive. One way to diminish these scaling issues is to split a big Clifford circuit into a collection of sub-circuits that can be synthesized in parallel. This splitting can be done both horizontally (to reduce qubit number) as well as vertically (to reduce circuit depth) and considerably reduces the size of the SAT search space. The result is a versatile heuristic (it cannot be guaranteed that the splitting into sub-circuits is optimal) that can be applied to larger Clifford circuits. More precisely, let G be a target Clifford circuit on n qubits with maximum depth d_max. Then, the associated SAT encoding features bitstrings of length l = O(n^2 d_max), which corresponds to a search space of size 2^l = 2^{O(n^2 d_max)}. 
We can now vertically split up G = L_1 L_2, where each L_i has depth d' ≈ d_max/2, and apply our Clifford synthesis to each Clifford block. The result is two parallel SAT instances with bitstrings of length l' ≈ l/2 each. In turn, the size of the search space is only 2^{l'} ≈ √(2^l). This quadratic improvement in search space size comes at the cost of a (potentially) non-optimal decomposition into two blocks. This general idea extends to more than two vertical blocks. Let G = L_1 · · · L_m be a Clifford circuit with m layers, i.e. blocks of gates L_i = g^1_i · · · g^{k_i}_i such that all gates within a layer act on different qubits and can therefore be run in parallel. Given a split size s, we can partition the circuit into m/s sub-circuits G_i = L_{i·s+1} · · · L_{(i+1)·s} for 0 ≤ i < m/s (here it is assumed that m is divisible by s, but extending the argument is straightforward). We then simply need to compute the target stabilizer tableau for each of these sub-circuits. For G_i, this can be done by simulating the circuit up to L_{i·s}. We can then use the SAT reformulation proposed in Sec. V for each of the sub-circuits. The split size s can also provide a good initial guess as to the maximal depth needed for the encoding of each individual circuit. Since no data needs to be shared between the individual instances, all the SAT instances can be run in parallel to obtain the optimized sub-circuits G'_i, which are then concatenated for the final result, i.e., G' = G'_1 · · · G'_{m/s}. Note that such a splitting approach is guaranteed to produce a correct Clifford gate decomposition. This follows from applying Theorem 1 to each of the sub-blocks involved. However, it may not achieve optimal circuit depth. After all, this divide-and-conquer heuristic treats different blocks of the target circuit completely independently, and its cost still scales with circuit depth and qubit number of the initial circuit. 
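The layer-wise partition itself is a one-liner. A minimal sketch (our own illustration, with layers represented abstractly as list entries) that also covers the case where m is not divisible by s:

```python
def split_layers(layers, s):
    """Vertically partition a layered circuit into consecutive blocks of
    (at most) s layers; each block becomes an independent SAT instance."""
    return [layers[i:i + s] for i in range(0, len(layers), s)]

print(split_layers(["L1", "L2", "L3", "L4", "L5"], 2))
# [['L1', 'L2'], ['L3', 'L4'], ['L5']]
```

Each block is synthesized independently, so all instances can be solved in parallel, and the optimized blocks are concatenated in their original order.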
Scaling issues can be countered by decreasing the split size, but the point of diminishing returns is eventually reached, where the size of the split has a strong negative impact on the target metric. Since any sub-circuit can be optimized using the SAT method without changing the circuit's functionality, we can take the divide-and-conquer approach even further. Given a maximal number of qubits n_max, a circuit can be decomposed into sub-circuits G = G_1 · · · G_m such that the number of qubits in each circuit is bounded by n_max and there are no two-qubit gates between any two sub-circuits, enabling parallel optimization. These two splitting techniques can be combined to make the approach as scalable as possible. Given a depth threshold d_thr, a split size s < d_thr, and a maximal number of qubits n_max, the circuit can be split into sub-circuits of at most n_max qubits. If any of these sub-circuits is deeper than d_thr, the circuits can be further split vertically into blocks of depth s. These circuit blocks can then be optimized independently from each other (in parallel). VII. EVALUATIONS The methods proposed in Sec. V and Sec. VI have been implemented in C++ using the publicly available SMT solver Z3 [58]. The implementation is integrated into the quantum circuit compilation tool QMAP [17], which is part of the Munich Quantum Toolkit (MQT) and available at https://github.com/cda-tum/qmap. To see how well the proposed methods perform in practice, we considered two types of benchmarks: (i) Random Clifford circuits (inspired by randomized benchmarking [59]-[61]): The circuits were obtained by sampling a random stabilizer tableau (including information about the destabilizers). Since our SAT encoding is based on the stabilizer tableau, this is already a valid input format for our method and no explicit circuit has to be generated. For every qubit number n, 10 random stabilizer tableaus have been generated and the results have been averaged over all these runs. 
The proposed methods are compared to the state of the art greedy Clifford synthesizer by Bravyi et al. [62]. The timeout was set to 3 h. (ii) Clifford+T implementations of Grover search (inspired by fault-tolerant quantum computation [3], [8], [63]): We generated circuits for the Grover search algorithm using random Boolean functions as oracles. For each qubit number n, 10 circuits were generated in this fashion. Since these circuits contain T gates, each circuit is partitioned into Clifford blocks and T blocks, and each (non-trivial) Clifford block is optimized separately. All evaluations have been performed on a 3.6 GHz Intel Xeon W-1370P machine running Ubuntu 20.04 with 128 GiB of main memory and 16 hardware threads. For generating and synthesizing circuits, as well as for the greedy optimizer, the quantum computing SDK Qiskit [12] (version 0.42.1) by IBM has been used. The results of our experiments for synthesizing random Clifford circuits can be seen in Table I. The column Optimal shows the results using the proposed optimal SAT approach. Unfortunately, the scaling of the encoding in the number of variables and qubits manifests rather drastically here: it is only possible to synthesize random Clifford circuits up to 5 qubits within the given time limit. Nonetheless, we can see that other methods, especially the state of the art, synthesize n-qubit circuits that are far from depth-optimal, increasing the depth on average by 105.26% (n = 3), 142.42% (n = 4) and 201.32% (n = 5), respectively. The column Heuristic Vertical shows the results using the proposed heuristic approach where the circuits were partitioned vertically, i.e., the resulting sub-circuits all had the same number of qubits as the original circuit. Compared to the vertical splitting heuristic, the state of the art still produces circuits that are 21.92% deeper on average. 
The column Heuristic Horizontal shows the results using the proposed heuristic approach where the circuits were partitioned into sub-circuits of five qubits each. At this value, the optimal approach still yields results in an acceptable amount of time (thus there are no entries for three and four qubits). This decomposition leads to much better results for lower qubit numbers but eventually produces worse results than the vertical decomposition after nine qubits, partially as an artefact of the decomposition scheme. The circuits cannot always be split perfectly into five-qubit sub-circuits, and potentially parallel gates in the original circuit might not be parallel anymore after the optimization. Another reason is that it gets increasingly difficult to find deep sub-circuits of only five qubits for random Clifford circuits, since the interactions between qubits are bound to entangle more than five qubits rather quickly. A big upside of this synthesis method is its runtime. Since the runtime for synthesizing five-qubit circuits is predictable, synthesizing all these sub-circuits can be done within a predictable time as well. All in all, the results in Table I suggest that the heuristic approach could be improved with more sophisticated circuit decomposition techniques. The current state of the art produces circuits that are far from optimal, which leaves quite some room for improvement. The Grover search benchmark was chosen to analyse the possible improvement in depth of actual fault-tolerant quantum circuits. Guided by the results of the random Clifford benchmarks, we looked at Grover circuits for three to five qubits. The proposed optimization scheme resulted in circuits that were 13.38%, 21.71% and 16.72% shorter on average. The Grover benchmarks are made publicly available under https://github.com/cda-tum/qmap. VIII. 
CONCLUSION AND OUTLOOK Classical circuit synthesis, on the one hand, and Quantified Boolean Formulas (QBF), on the other, are two seemingly very different but related problems. This correspondence can be made one-to-one and forms the basis of several state of the art approaches for optimal (logical) circuit synthesis: QBF solvers are employed to determine the shortest circuit representation of a desired logical functionality. In this work, we have extended this general mindset to the quantum realm, considering the task of decomposing n-qubit Clifford circuits into as few elementary gates as possible. We then showed that deciding whether it is possible to represent a given Clifford functionality with at most d_max Clifford layers can be re-cast as a satisfiability (SAT) problem in O(n^2 d_max) Boolean variables. The reduction uses maximally entangled input stimuli, as well as the Gottesman-Knill theorem. It highlights that Clifford synthesis, which is contained in NP (the first level of the polynomial hierarchy), is easier than general logical circuit synthesis, which is complete for Σ^p_2 (the second level of the polynomial hierarchy). In the electronic design automation community, SAT solvers have been applied to tackle classical synthesis problems with great success. We showed that similar approaches apply to quantum computing and that there is large potential to replicate the success of solving classical synthesis problems. While the optimal synthesis approach scales poorly in the number of qubits, it shows that there is a large gap between the optimal solution and the state of the art. To the best of our knowledge, these are the first Clifford synthesis protocols that (i) are provably correct and (ii) come with a certificate of optimality. Furthermore, all of the proposed methods are publicly available within the Munich Quantum Toolkit (MQT) as part of the open-source quantum circuit compilation tool QMAP (https://github.com/cda-tum/qmap). 
QMAP already works natively with IBM's Qiskit, and even tighter integration with quantum SDKs is left for future work. These initial findings are encouraging and open the door for several interesting follow-up projects, e.g. a refinement of the proposed numerical solver. This will entail tweaks in the stabilizer encoding to squeeze out more performance, but also trying different SAT solvers and novel pre-processing techniques to determine sharper initial bounds on the maximum circuit depth. A configurable gate-set is also on our to-do list. For now, we only use H, S, S†, CNOT, as well as Pauli gates. In the future, this could be adapted to include additional single-qubit gates (e.g. the full single-qubit Clifford group) and two-qubit gates (e.g. SWAP, CZ, CY, . . . ). Note that more 'elementary' gates directly translate into a more complex encoding, and therefore a larger logical search space (more variables). On the other hand, this increased expressiveness per time step is bound to decrease the circuit depth and, therefore, result in shorter SAT formulas overall (fewer logical clauses). This trade-off might well be worthwhile. The proposed encoding procedure is flexible enough to facilitate architecture-aware synthesis of Clifford circuits. Some quantum architectures only support certain interactions between their qubits, typically defined by a coupling map. We can respect this coupling map in the proposed SAT encoding by only permitting Clifford gates that are also native to the concrete architecture. Virtually all these future research directions also extend to the proposed heuristic solver for near-optimal Clifford synthesis. We intend to explore different divide-and-conquer strategies (decomposing into sub-circuits) and (near-)optimal ways to synthesize each of these circuit blocks. 
Last but not least, researchers are now beginning to suggest and explore the use of quantum computers to solve challenging subroutines in quantum synthesis. Quantum assisted quantum compiling [51] falls into this category. For this work, the deliberate restriction to Clifford circuits has allowed us to not have to think along these lines (yet): the Gottesman-Knill theorem ensures that classical simulation remains tractable throughout. But fruitfully combining quantum assisted quantum compiling ideas with conventional SAT-solving techniques is an interesting direction; we leave such synergies for future work. Fig. 1. Two Clifford+T circuits for 3-qubit Grover search: the oracle corresponds to a random 3-SAT formula with 3 variables and 5 clauses. Fig. 2. Illustration of entanglement-assisted equivalence checking: two n-qubit circuits U, V have equivalent functionality (up to a global phase) if and only if the circuit produces the pairwise maximally entangled state |ω_2n⟩. Here, V† is the reverse circuit (adjoint) of V and I is the identity. 
TABLE I: EXPERIMENTAL RESULTS FOR RANDOM CLIFFORD CIRCUITS.

            Optimal                Heuristic Vertical       Heuristic Horizontal       Bravyi et al.
 n      d     |G|     t [s]       d      |G|     t [s]       d      |G|     t [s]       d      |G|    t [s]
 3    5.70   11.40     0.33     8.30    18.90     0.69       -       -        -       11.70   16.10    0.18
 4    6.60   16.70     4.10    13.30    23.70     2.03       -       -        -       16.00   23.50    0.16
 5    7.60   25.00   381.95    18.90    38.60     7.72     7.80    25.50   603.08     22.90   37.30    0.18
 6      -      -        -      24.60    55.40    27.27    17.00    51.10   180.77     29.40   55.10    0.20
 7      -      -        -      31.40    69.40    43.51    25.10    75.80    97.16     37.00   70.20    0.17
 8      -      -        -      37.40    88.50   153.96    32.00    97.20   186.42     42.10   86.30    0.20
 9      -      -        -      41.20   102.40   313.82    39.70   121.30   219.32     53.50  108.80    0.28
10      -      -        -      50.80   131.30   547.93    53.00   155.20   106.66     59.90  128.70    0.28
11      -      -        -      58.80   152.90   838.91    60.70   180.90   133.97     72.20  157.30    0.25
12      -      -        -      66.70   174.50  1464.46    75.80   217.40   117.73     78.90  170.10    0.26
13      -      -        -      75.50   206.50  2423.24    86.10   250.00   283.31     91.40  207.10    0.34
14      -      -        -      83.00   237.10  4412.97    99.80   305.70   111.89    100.30  235.20    0.33

n: number of qubits; d: average depth; |G|: average gate count; t: average runtime.

The circuit in question was generated using Qiskit 0.42.1. This is already our point of departure from earlier work, most notably [29]. There, a subset of the authors asked a related, but simpler, question that arises from only considering |ψ⟩ = |0, . . . , 0⟩ (i.e. fix a single input state).

ACKNOWLEDGMENTS The authors thank Armin Biere, David Gross, and Martina Seidl for inspiring discussions and valuable feedback. T.P., R.W. and L.B. acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 101001318), as well as financial support from the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus. 
All authors have been supported by the BMWK on the basis of a decision by the German Bundestag through the projects ProvideQ and QuaST, the Project QuantumReady (FFG 896217) and the State of Upper Austria in the frame of the COMET program (managed by the FFG). Quantum computing in the NISQ era and beyond. John Preskill, 279John Preskill, "Quantum computing in the NISQ era and beyond," vol. 2, p. 79, 2018. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. W Peter, Shor, SIAM J. Comput. Peter W. Shor, "Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer," SIAM J. Comput., 1997. A fast quantum mechanical algorithm for database search. K Lov, Grover, Proc. of the ACM. of the ACMLov K. Grover, "A fast quantum mechanical algorithm for database search," Proc. of the ACM, pp. 212-219, 1996. Quantum algorithm for linear systems of equations. Aram W Harrow, Avinatan Hassidim, Seth Lloyd, Physical Review Letters. 10315Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd, "Quantum algorithm for linear systems of equations," Physical Review Letters, vol. 103, no. 15, 2009. Quantum Speed-Ups for solving semidefinite programs. G S L Fernando, Krysta M Brandao, Svore, 10.1109/FOCS.2017.452017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS. Fernando G.S.L. Brandao and Krysta M. Svore, "Quan- tum Speed-Ups for solving semidefinite programs," in 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), 2017, pp. 415-426. DOI: 10.1109/FOCS.2017.45. Quantum SDP-Solvers: Better upper and lower bounds. Joran Van Apeldoorn, 10.22331/q-2020-02-14-2304230Joran van Apeldoorn et al., "Quantum SDP-Solvers: Better upper and lower bounds," vol. 4, p. 230, 2020. DOI: 10.22331/q-2020-02-14-230. Faster quantum and classical SDP approximations for quadratic binary optimization. 
G S L Fernando, Richard Brandão, Daniel Stilck Kueng, França, 10.22331/q-2022-01-20-6256625Fernando G. S. L. Brandão, Richard Kueng, and Daniel Stilck França, "Faster quantum and classical SDP ap- proximations for quadratic binary optimization," vol. 6, p. 625, 2022. DOI: 10.22331/q-2022-01-20-625. Quantum computations: Algorithms and error correction. Alexei Kitaev, Russian Mathematical Surveys. 526Alexei Kitaev, "Quantum computations: Algorithms and error correction," Russian Mathematical Surveys, vol. 52, no. 6, pp. 1191-1249, 1997. Fault-tolerant quantum computation. W Peter, Shor, 10.1109/SFCS.1996.548464DOI: 10 . 1109 / SFCS.1996.548464Proceedings of 37th Conference on Foundations of Computer Science. 37th Conference on Foundations of Computer SciencePeter W. Shor, "Fault-tolerant quantum computation," in Proceedings of 37th Conference on Foundations of Computer Science, 1996, pp. 56-65. DOI: 10 . 1109 / SFCS.1996.548464. A Michael, Isaac L Nielsen, Chuang, Quantum Computation and Quantum Information. Cambridge University PressMichael A. Nielsen and Isaac L. Chuang, Quantum Computation and Quantum Information. Cambridge University Press, 2010. Stabilizer codes and quantum error correction. Daniel Gottesman, Daniel Gottesman, "Stabilizer codes and quantum error correction.," 1997. Learn quantum computation using Qiskit. Abraham Asfaw, Abraham Asfaw et al. "Learn quantum computation using Qiskit." (2020). Synthesis of unitaries with Clif-ford+T circuits. Vadym Kliuchnikov, arXiv:1306.3200quant-ph. preprintVadym Kliuchnikov. "Synthesis of unitaries with Clif- ford+T circuits." arXiv: 1306 . 3200 [quant-ph]. (2013), preprint. A meet-in-the-middle algorithm for fast synthesis of depth-optimal quantum circuits. Matthew Amy, 10.1109/TCAD.2013.2244643IEEE Trans. on CAD of Integrated Circuits and Systems. 326Matthew Amy et al., "A meet-in-the-middle algorithm for fast synthesis of depth-optimal quantum circuits," IEEE Trans. 
on CAD of Integrated Circuits and Systems, vol. 32, no. 6, pp. 818-830, 2013. DOI: 10.1109/TCAD. 2013.2244643. Parallelizing quantum circuit synthesis. Olivia Di , Matteo , Michele Mosca, 10.1088/2058-9565/1/1/015003arXiv:1606.07413Quantum Sci. Technol. 11quant-phOlivia Di Matteo and Michele Mosca, "Parallelizing quantum circuit synthesis," Quantum Sci. Technol., vol. 1, no. 1, p. 015 003, 2016. DOI: 10 . 1088 / 2058 - 9565/1/1/015003. arXiv: 1606.07413 [quant-ph]. Advanced exact synthesis of Clifford+T circuits. Philipp Niemann, Robert Wille, Rolf Drechsler, 10.1007/s11128-020-02816-0Quantum Inf Process. 19317Philipp Niemann, Robert Wille, and Rolf Drechsler, "Advanced exact synthesis of Clifford+T circuits," Quantum Inf Process, vol. 19, no. 9, p. 317, 2020. DOI: 10.1007/s11128-020-02816-0. MQT QMAP: Efficient quantum circuit mapping. Robert Wille, Lukas Burgholzer, Int'l Symp. on Physical Design. Robert Wille and Lukas Burgholzer, "MQT QMAP: Efficient quantum circuit mapping," in Int'l Symp. on Physical Design, 2023. Improved simulation of stabilizer circuits. Scott Aaronson, Daniel Gottesman, 10.1103/PhysRevA.70.052328Phys. Rev. A. 70552Scott Aaronson and Daniel Gottesman, "Improved sim- ulation of stabilizer circuits," Phys. Rev. A, vol. 70, no. 5, p. 052 328, 2004. DOI: 10.1103/PhysRevA.70. 052328. Universal quantum computation with ideal Clifford gates and noisy ancillas. Sergey Bravyi, Alexei Kitaev, 10.1103/PhysRevA.71.022316Phys. Rev. A. 712Sergey Bravyi and Alexei Kitaev, "Universal quantum computation with ideal Clifford gates and noisy ancil- las," Phys. Rev. A, vol. 71, no. 2, p. 022 316, 2005. DOI: 10.1103/PhysRevA.71.022316. A comprehensive study and analysis on SAT-solvers: Advances, usages and achievements. Sahel Alouneh, 10.1007/s10462-018-9628-0Artif Intell Rev. 524Sahel Alouneh et al., "A comprehensive study and analysis on SAT-solvers: Advances, usages and achieve- ments," Artif Intell Rev, vol. 52, no. 4, pp. 2575-2601, 2019. 
DOI: 10.1007/s10462-018-9628-0. Sanjeev Arora, Boaz Barak, 10.1017/CBO9780511804090Computational Complexity: A Modern Approach. Cambridge University PressSanjeev Arora and Boaz Barak, Computational Com- plexity: A Modern Approach. Cambridge University Press, 2009. DOI: 10.1017/CBO9780511804090. A Survey on applications of quantified boolean formulas. Ankit Shukla, 10.1109/ICTAI.2019.00020International Conference on Tools with Artificial Intelligence. Ankit Shukla et al., "A Survey on applications of quan- tified boolean formulas," in International Conference on Tools with Artificial Intelligence, 2019, pp. 78-84. DOI: 10.1109/ICTAI.2019.00020. Incremental determinization. Markus N Rabe, Sanjit A Seshia, Conference on Theory and Applications of Satisfiability Testing. 9710Markus N. Rabe and Sanjit A. Seshia, "Incremental determinization," in Conference on Theory and Ap- plications of Satisfiability Testing, vol. 9710, 2016, pp. 375-392. DepQBF 6.0: A search-based QBF solver beyond traditional QCDCL. Florian Lonsing, Uwe Egly, International Conference on Automated Deduction. 10395Florian Lonsing and Uwe Egly, "DepQBF 6.0: A search-based QBF solver beyond traditional QCDCL," in International Conference on Automated Deduction, vol. 10395, 2017, pp. 371-384. CAQE: A Certifying QBF Solver. Markus N Rabe, Leander Tentrup, 10.1109/FMCAD.2015.7542263Int'l Conf. on Formal Methods in CAD. Markus N. Rabe and Leander Tentrup, "CAQE: A Cer- tifying QBF Solver," in Int'l Conf. on Formal Methods in CAD, 2015, pp. 136-143. DOI: 10.1109/FMCAD. 2015.7542263. Combinatorial sketching for finite programs. Armando Solar-Lezama, 10.1145/1168857.1168907DOI: 10 . 1145 / 1168857 . 1168907Int'l Conf. On Architectural Support for Programming Languages and Operating Systems. Armando Solar-Lezama et al., "Combinatorial sketch- ing for finite programs," in Int'l Conf. On Architectural Support for Programming Languages and Operating Systems, 2006, pp. 404-415. DOI: 10 . 1145 / 1168857 . 1168907. 
Counterexample-guided abstraction refinement. Edmund Clarke, Computer Aided Verification. 1855Edmund Clarke et al., "Counterexample-guided ab- straction refinement," in Computer Aided Verification, vol. 1855, 2000, pp. 154-169. A theory of formal synthesis via inductive learning. Susmit Jha, A Sanjit, Seshia, 10.1007/s00236-017-0294-5Acta Informatica. 547Susmit Jha and Sanjit A. Seshia, "A theory of for- mal synthesis via inductive learning," Acta Informatica, vol. 54, no. 7, pp. 693-726, 2017. DOI: 10.1007/s00236- 017-0294-5. A SAT encoding for optimal Clifford circuit synthesis. Sarah Schneider, Lukas Burgholzer, Robert Wille, Asia and South Pacific Design Automation Conf. Sarah Schneider, Lukas Burgholzer, and Robert Wille, "A SAT encoding for optimal Clifford circuit synthesis," in Asia and South Pacific Design Automation Conf., 2023. Quantum state tomography via compressed sensing. David Gross, 10.1103/PhysRevLett.105.150401DOI: 10 . 1103 / PhysRevLett . 105 . 150401Phys. Rev. Lett. 10515150David Gross et al., "Quantum state tomography via compressed sensing," Phys. Rev. Lett., vol. 105, no. 15, p. 150 401, 2010. DOI: 10 . 1103 / PhysRevLett . 105 . 150401. Guaranteed recovery of quantum processes from few measurements. Martin Kliesch, 10.22331/q-2019-08-12-1713171Martin Kliesch et al., "Guaranteed recovery of quantum processes from few measurements," vol. 3, p. 171, 2019. DOI: 10.22331/q-2019-08-12-171. Recovering quantum gates from few average gate fidelities. Ingo Roth, 10.1103/PhysRevLett.121.170502Phys. Rev. Lett. 12117170Ingo Roth et al., "Recovering quantum gates from few average gate fidelities," Phys. Rev. Lett., vol. 121, no. 17, p. 170 502, 2018. DOI: 10.1103/PhysRevLett. 121.170502. Random stimuli generation for the verification of quantum circuits. Lukas Burgholzer, Richard Kueng, Robert Wille, Asia and South Pacific Design Automation Conf. 
Lukas Burgholzer, Richard Kueng, and Robert Wille, "Random stimuli generation for the verification of quantum circuits," in Asia and South Pacific Design Automation Conf., 2021. Fast state tomography with optimal error bounds. Madalin Guţȃ, 10.1088/1751-8121/ab8111DOI: 10 . 1088 / 1751 -8121 / ab8111J. Phys. A: Math. Theor. 5320Madalin Guţȃ et al., "Fast state tomography with op- timal error bounds," J. Phys. A: Math. Theor., vol. 53, no. 20, p. 204 001, 2020. DOI: 10 . 1088 / 1751 -8121 / ab8111. Non-identity check" is QMA-complete. Dominik Janzing, Pawel Wocjan, Thomas Beth, Int. J. Quantum Inform. 0303Dominik Janzing, Pawel Wocjan, and Thomas Beth, ""Non-identity check" is QMA-complete," Int. J. Quan- tum Inform., vol. 03, no. 03, pp. 463-473, 2005. The Solovay-Kitaev algorithm. M Christopher, Michael A Dawson, Nielsen, Quantum Info. Comput. 61Christopher M. Dawson and Michael A. Nielsen, "The Solovay-Kitaev algorithm," Quantum Info. Comput., vol. 6, no. 1, pp. 81-95, 2006. Optimization of the Solovay-Kitaev algorithm. Tien Trung Pham, Rodney Van Meter, Clare Horsman, 10.1103/PhysRevA.87.052332Phys. Rev. A. 87552Tien Trung Pham, Rodney Van Meter, and Clare Hors- man, "Optimization of the Solovay-Kitaev algorithm," Phys. Rev. A, vol. 87, no. 5, p. 052 332, 2013. DOI: 10.1103/PhysRevA.87.052332. Clifford group, stabilizer states, and linear and quadratic operations over GF(2). Jeroen Dehaene, Bart De Moor, 10.1103/PhysRevA.68.042318Phys. Rev. A. 68442Jeroen Dehaene and Bart De Moor, "Clifford group, stabilizer states, and linear and quadratic operations over GF(2)," Phys. Rev. A, vol. 68, no. 4, p. 042 318, 2003. DOI: 10.1103/PhysRevA.68.042318. Hudson's theorem for finite-dimensional quantum systems. David Gross, 10.1063/1.2393152J. Math. Phys. 4712David Gross, "Hudson's theorem for finite-dimensional quantum systems," J. Math. Phys., vol. 47, no. 12, p. 122 107, 2006. DOI: 10.1063/1.2393152. The Clifford group fails gracefully to be a unitary 4-design. 
Huangjun Zhu, arXiv:1609.08172quant-ph. preprintHuangjun Zhu et al. "The Clifford group fails grace- fully to be a unitary 4-design." arXiv: 1609 . 08172 [quant-ph]. (2016), preprint. Linear Depth Stabilizer and Quantum Fourier Transformation Circuits with no Auxiliary Qubits in Finite Neighbor Quantum Architectures. D Maslov, 10.1103/PhysRevA.76.052310arXiv:quant-ph/0703211Phys. Rev. A. 76552D. Maslov, "Linear Depth Stabilizer and Quantum Fourier Transformation Circuits with no Auxiliary Qubits in Finite Neighbor Quantum Architectures," Phys. Rev. A, vol. 76, no. 5, p. 052 310, 2007. DOI: 10. 1103/PhysRevA.76.052310. arXiv: quant-ph/0703211. 6-qubit optimal Clifford circuits. Sergey Bravyi, Joseph A Latone, Dmitri Maslov, arXiv:2012.06074preprintSergey Bravyi, Joseph A. Latone, and Dmitri Maslov. "6-qubit optimal Clifford circuits." arXiv: 2012.06074. (2020), preprint. Logical Clifford synthesis for stabilizer codes. Narayanan Rengaswamy, 10.1109/TQE.2020.3023419DOI: 10.1109/ TQE.2020.3023419IEEE Transactions on Quantum Engineering. 1Narayanan Rengaswamy et al., "Logical Clifford syn- thesis for stabilizer codes," IEEE Transactions on Quan- tum Engineering, vol. 1, pp. 1-17, 2020. DOI: 10.1109/ TQE.2020.3023419. Constant-cost implementations of Clifford operations and multiply-controlled gates using global interactions. Sergey Bravyi, Dmitri Maslov, Yunseong Nam, 10.1103/PhysRevLett.129.230501Phys. Rev. Lett. 129232022Sergey Bravyi, Dmitri Maslov, and Yunseong Nam, "Constant-cost implementations of Clifford operations and multiply-controlled gates using global interactions," Phys. Rev. Lett., vol. 129, no. 23, p. 230 501, 2022. DOI: 10.1103/PhysRevLett.129.230501. SAT-Based Methods for Circuit Synthesis. Roderick Bloem, arXiv:1408.2333preprintRoderick Bloem et al. "SAT-Based Methods for Circuit Synthesis." arXiv: 1408.2333 [cs]. (2014), preprint. Completely positive linear maps on complex matrices. 
Man-Duen Choi, 10.1016/0024-3795(75)90075-0Linear Algebra and its Applications. 103Man-Duen Choi, "Completely positive linear maps on complex matrices," Linear Algebra and its Applications, vol. 10, no. 3, pp. 285-290, 1975. DOI: 10.1016/0024- 3795(75)90075-0. Linear transformations which preserve trace and positive semidefiniteness of operators. Andrzej Jamiołkowski, 10.1016/0034-4877(72)90011-0Reports on Mathematical Physics. 34Andrzej Jamiołkowski, "Linear transformations which preserve trace and positive semidefiniteness of opera- tors," Reports on Mathematical Physics, vol. 3, no. 4, pp. 275-278, 1972. DOI: 10.1016/0034-4877(72)90011- 0. Ancilla-assisted quantum process tomography. Joseph B Altepeter, 10.1103/PhysRevLett.90.193601DOI: 10 . 1103 / PhysRevLett . 90 . 193601Phys. Rev. Lett. 9019193Joseph B. Altepeter et al., "Ancilla-assisted quantum process tomography," Phys. Rev. Lett., vol. 90, no. 19, p. 193 601, 2003. DOI: 10 . 1103 / PhysRevLett . 90 . 193601. John Watrous, The Theory of Quantum Information. Cambridge University Press590ppJohn Watrous, The Theory of Quantum Information. Cambridge University Press, 2018, 590 pp. ACM 270: Quantum and classical information processing with tensors. Richard Kueng, Richard Kueng. "ACM 270: Quantum and classical information processing with tensors." (2019). Quantum-assisted quantum compiling. Sumeet Khatri, 3140Sumeet Khatri et al., "Quantum-assisted quantum com- piling," vol. 3, p. 140, 2019. Massively parallel quantum computer simulator, eleven years later. Hans De Raedt, 10.1016/j.cpc.2018.11.005DOI: 10. 1016/j.cpc.2018.11.005Computer Physics Communications. 237Hans De Raedt et al., "Massively parallel quantum com- puter simulator, eleven years later," Computer Physics Communications, vol. 237, pp. 47-61, 2019. DOI: 10. 1016/j.cpc.2018.11.005. Handwaving and interpretive dance: An introductory course on tensor networks. C Jacob, Christopher T Bridgeman, Chubb, J. Phys. A: Math. Theor. Jacob C. 
Bridgeman and Christopher T. Chubb, "Hand- waving and interpretive dance: An introductory course on tensor networks," J. Phys. A: Math. Theor., 2017. Just like the real thing: Fast weak simulation of quantum computation. Stefan Hillmich, Igor L Markov, Robert Wille, Design Automation Conf. Stefan Hillmich, Igor L. Markov, and Robert Wille, "Just like the real thing: Fast weak simulation of quan- tum computation," in Design Automation Conf., 2020. QMDDs: Efficient quantum function representation and manipulation. Philipp Niemann, IEEE Trans. on CAD of Integrated Circuits and Systems. Philipp Niemann et al., "QMDDs: Efficient quantum function representation and manipulation," IEEE Trans. on CAD of Integrated Circuits and Systems, 2016. The Heisenberg representation of quantum computers. Daniel Gottesman, arXiv:quant-ph/9807006preprintDaniel Gottesman. "The Heisenberg representation of quantum computers." arXiv: quant-ph/9807006. (1998), preprint. Armin Biere, Handbook of Satisfiability. IOS PressArmin Biere et al., Handbook of Satisfiability. IOS Press, 2009. Z3: An efficient SMT solver. Leonardo De Moura, Nikolaj Bjørner, Tools and Algorithms for the Construction and Analysis of Systems. Leonardo de Moura and Nikolaj Bjørner, "Z3: An efficient SMT solver," in Tools and Algorithms for the Construction and Analysis of Systems, 2008, pp. 337- 340. Randomized benchmarking of quantum gates. Emanuel Knill, 10.1103/PhysRevA.77.012307Phys. Rev. A. 77112Emanuel Knill et al., "Randomized benchmarking of quantum gates," Phys. Rev. A, vol. 77, no. 1, p. 012 307, 2008. DOI: 10.1103/PhysRevA.77.012307. Efficient measurement of quantum gate error by interleaved randomized benchmarking. Easwar Magesan, 10.1103/PhysRevLett.109.080505Phys. Rev. Lett. 1098Easwar Magesan et al., "Efficient measurement of quan- tum gate error by interleaved randomized benchmark- ing," Phys. Rev. Lett., vol. 109, no. 8, p. 080 505, 2012. DOI: 10.1103/PhysRevLett.109.080505. 
Comparing experiments to the fault-tolerance threshold. Richard Kueng, Phys. Rev. Lett. 11717Richard Kueng et al., "Comparing experiments to the fault-tolerance threshold," Phys. Rev. Lett., vol. 117, no. 17, p. 170 502, 2016. Clifford circuit optimization with templates and symbolic Pauli gates. Sergey Bravyi, 10.22331/q-2021-11-16-580arXiv:2105.022915580Sergey Bravyi et al., "Clifford circuit optimization with templates and symbolic Pauli gates," vol. 5, p. 580, 2021. DOI: 10.22331/q-2021-11-16-580. arXiv: 2105. 02291. Theory of fault-tolerant quantum computation. Daniel Gottesman, 10.1103/PhysRevA.57.127Phys. Rev. A. 571Daniel Gottesman, "Theory of fault-tolerant quantum computation," Phys. Rev. A, vol. 57, no. 1, pp. 127-137, 1998. DOI: 10.1103/PhysRevA.57.127.
Paper metadata: "Depth-Optimal Synthesis of Clifford Circuits with SAT Solvers," arXiv:2305.01674, by Tom Peham, Nina Brandl, Richard Kueng, Robert Wille, and Lukas Burgholzer (Chair for Design Automation, Technical University of Munich, Germany; Institute for Integrated Circuits, Johannes Kepler University Linz, Austria; Software Competence Center Hagenberg GmbH, Austria). Code: https://github.com/cda-tum/qmap.

Abstract: Circuit synthesis is the task of decomposing a given logical functionality into a sequence of elementary gates. It is (depth-)optimal if it is impossible to achieve the desired functionality with even shorter circuits. Optimal synthesis is a central problem in both quantum and classical hardware design, but is also plagued by complexity-theoretic obstacles. Motivated by fault-tolerant quantum computation, we consider the special case of synthesizing blocks of Clifford unitaries. Leveraging entangling input stimuli and the stabilizer formalism allows us to reduce the Clifford synthesis problem to a family of poly-size satisfiability (SAT) problems -- one for each target circuit depth. On a conceptual level, our result showcases that the Clifford synthesis problem is contained in the first level of the polynomial hierarchy (NP), while the classical synthesis problem for logical circuits is known to be complete for the second level of the polynomial hierarchy (Σ^P_2). Based on this theoretical reduction, we formulate a SAT encoding for depth-optimal Clifford synthesis. We then employ SAT solvers to determine a satisfying assignment or to prove that no such assignment exists. From that, the shortest depth for which synthesis is still possible (optimality) as well as the actual circuit (synthesis) can be obtained. Empirical evaluations show that the optimal synthesis approach yields a substantial depth improvement for random Clifford circuits and Clifford+T circuits for Grover search.
arXiv:physics/0011058v1 [physics.chem-ph] 23 Nov 2000

Generalized Heitler-London Theory for H3: A Comparison of the Surface Integral Method with Perturbation Theory

Tanja I. Sachse and Ulrich Kleinekathöfer
Max-Planck-Institut für Strömungsforschung, Bunsenstr. 10, D-37073 Göttingen, Germany, and Institut für Physik, Technische Universität Chemnitz, D-09107 Chemnitz, Germany

The generalized Heitler-London (GHL) theory provides a straightforward way to express the potential energy surface of H3 in terms of Coulomb and exchange energies, which can be calculated either by perturbation theory or using the surface integral method (SIM). By applying Rayleigh-Schrödinger perturbation theory, GHL theory for the quartet spin state of H3 is shown to yield results equivalent to the symmetrized Rayleigh-Schrödinger version of symmetry-adapted perturbation theory (SAPT). This equivalence allows a comparison with the corresponding results obtained by the surface integral method. The surface integral result calculated with a product of atomic wave functions is found to have certain advantages over the perturbation approach.

I. INTRODUCTION

The generalized Heitler-London (GHL) theory provides a useful framework to calculate the potential energy surfaces of polyatomic systems [1-4]. Since the potential energy is expressed in terms of Coulomb and exchange energies, it is possible to systematically separate out many-body effects in every single term contributing to the potential energy. In this paper some aspects of the three-body exchange effects occurring in H3 are examined in more detail.
Axilrod, Teller, and Muto [5] were the first to suggest a formula describing the leading long-range three-body dispersion term for three spherically symmetric atoms. Since then the non-additive effects have been intensively studied and several review articles have been published [6-8]. In the GHL approach the potentials can be decomposed into Coulomb and exchange energies, whereas in symmetry-adapted perturbation theory (SAPT) these interactions are expressed in terms of Coulomb and exchange integrals in the manner first introduced by Heitler and London. Recently, SAPT was formulated for the interactions of trimers [9] and has been applied to numerical calculations up to third order for the quartet spin state of H3 [10] and for the helium trimer [11]. Other three-body calculations for H3 are based on Heitler-London-type calculations [12] and on perturbation calculations making use of Unsöld approximations [13]. In the former, the splitting into Coulomb and exchange parts is, as pointed out by the author himself, not completely rigorous. In a previous paper [3], analytical results based on GHL theory were reported for the doublet as well as the quartet spin states of the H3 system. Two kinds of exchange energies appear: cyclic exchange energies, in which all three electrons are involved, and two-body exchange energies in the presence of the respective third atom. The cyclic exchange energy of three hydrogen and of three helium atoms [14] was calculated using the surface integral method (SIM), which had previously been applied to two atoms [1,2,4,15-17]. In a forthcoming paper [18] it will be demonstrated that all exchange energies occurring in the H3 system can be calculated either by the surface integral method or by using perturbation theory, and the corresponding results for the implicit three-body effect on the two-body exchange energies will be derived and compared. For H2 it was previously shown that SAPT and GHL are equivalent [19].
The purpose of this paper is to compare surface integral calculations of the three-body effects in the exchange energies, based on an atomic product wave function, with the results of first to third order of SAPT, which are only available for the quartet spin state of H3 [10]. In order to perform this comparison it is necessary to first prove that the SAPT and GHL expressions for the energy of the quartet state are equivalent. The results reveal that, with the zeroth-order wave function, the surface integral result already contains parts of the second-order SAPT result and is therefore more efficient. In Sections II and III the basic ideas of the GHL theory and of the polarization approximation are described. In Section IV the equivalence of the GHL and the symmetrized Rayleigh-Schrödinger (SRS) theories is demonstrated order by order; the latter is a weak-symmetry-forcing variant of SAPT. Section V reviews the surface integral method (SIM). Thereafter, in Section VI, the advantages of SIM over the perturbation approach are demonstrated by comparing the numerical results of perturbation theory and SIM.

II. GENERALIZED HEITLER-LONDON THEORY FOR H3

The application of generalized Heitler-London theory to H3 was previously discussed in Ref. [3]. The generalized Heitler-London equation is given by

\hat{H} F = \sum_g \epsilon_g \hat{T}(g) F ,   (1)

where F is the localized, i.e. non-symmetrized, wave function, \hat{T}(g) designates a permutation operator for the electron coordinates, and \epsilon_g stands for the Coulomb (g = I) and exchange energies (g \neq I). Applying results from the theory of the symmetric group, the energy eigenvalues of the Hamiltonian can be derived. For the H3 system, the result for the two doublet states is

{}^{1/2}E_{GHL} = \epsilon_I - \epsilon_{123} \pm \left\{ \tfrac{1}{2} \left[ (\epsilon_{12} - \epsilon_{23})^2 + (\epsilon_{23} - \epsilon_{13})^2 + (\epsilon_{13} - \epsilon_{12})^2 \right] \right\}^{1/2}   (2)

and for the quartet state

{}^{3/2}E_{GHL} = \epsilon_I - \epsilon_{12} - \epsilon_{23} - \epsilon_{13} + 2\epsilon_{123} .   (3)

The remainder of this paper will be concerned only with the quartet state.

III. POLARIZATION APPROXIMATION AND GENERALIZED HEITLER-LONDON (GHL) THEORY

The Born-Oppenheimer non-relativistic Hamiltonian of the three-body system is given by

\hat{H} = \hat{H}^0 + \hat{V}   (4)

with

\hat{H}^0 = \hat{H}^0_A + \hat{H}^0_B + \hat{H}^0_C ,   (5)
\hat{V} = \hat{V}_{AB} + \hat{V}_{BC} + \hat{V}_{AC} ,   (6)

where \hat{H}^0_A, \hat{H}^0_B and \hat{H}^0_C are the Hamiltonians of three free hydrogen atoms and \hat{V}_{AB}, \hat{V}_{BC} and \hat{V}_{AC} describe the interaction between atoms A and B, B and C, as well as A and C, respectively. The polarization approximation [20] is based on the equation

\hat{H} F = E_p F ,   (7)

where the polarization wave function F and the polarization energy E_p can be written as perturbation series

F = \sum_n \phi_n ,   (8)
E_p = \sum_n \epsilon^n .   (9)

The zeroth-order polarization wave function \phi_0 is the eigenfunction of the free Hamiltonian \hat{H}^0 and thus a product of three free hydrogen wave functions. Starting from the GHL equation with F chosen as the polarization wave function, Eq. (1) together with the Hamiltonian of Eq. (4) can be written as

(\hat{H}^0 + \hat{V}) \sum_{n=0}^{N} \phi_n = \sum_g \epsilon_g \hat{T}(g) \sum_{n=0}^{N} \phi_n .   (10)

Forming scalar products with \hat{T}(g)\phi_0 for each group element g,

\left( \hat{T}(g)\phi_0 , (\hat{H}^0 + \hat{V}) \sum_{n=0}^{N} \phi_n \right) = \sum_{g'} \epsilon_{g'} \left( \hat{T}(g)\phi_0 , \sum_{n=0}^{N} \hat{T}(g')\phi_n \right) ,   (11)

a system of linear equations can be derived for the Coulomb energy \epsilon_I as well as for the exchange energies \epsilon_g (g \neq I) in terms of Coulomb integrals J, exchange integrals K_g, and overlap integrals S_g:

E_0 + J \approx \epsilon_I + \sum_{g' \neq I} \epsilon_{g'} S_{g'^{-1}}   (g = I),
E_0 S_g + K_g \approx \epsilon_g + \sum_{g' \neq g} \epsilon_{g'} S_{g'^{-1} g}   (g \neq I).   (12)

The following notation for the nth-order overlap, Coulomb and exchange integrals was used:

S_g := \sum_{n=0}^{M} S^n_g ,   (13)
J := \sum_{n=0}^{M} J^n ,   (14)
K_g := \sum_{n=0}^{M} K^n_g = \sum_{n=1}^{M} K^n_g ,   (15)

where

S^n_g := (\hat{T}(g)\phi_0 , \phi_n) ,   (16)
J^n := (\phi_0 , \hat{V} \phi_{n-1}) ,   (17)
J^0 = E_0 ,   (18)
K^n_g := (\phi_0 , \hat{V} \hat{T}(g^{-1}) \phi_{n-1}) .   (19)

The equalities S^n_{g^{-1}} = S^n_g and K^n_{g^{-1}} = K^n_g hold. In Ref.
[18] it will be shown how the Coulomb and exchange energies can be expressed in terms of Coulomb, exchange and overlap integrals, and how the order-by-order contributions to the Coulomb and exchange energies can be found. The convergence properties of the polarization theory have been extensively discussed for the case of two hydrogen atoms [21]. For low orders it was shown that the perturbation series rapidly converges to the Coulomb energy [19,21-23], though this is not the limit of the infinite-order expansion. It is assumed that the behavior of this perturbation theory for a system of two atoms also roughly holds in the case of three atoms [9,10]. Since here we are only interested in low orders, especially the first, this expected behavior justifies approximating the localized wave function via the polarization approximation for three hydrogen atoms as well.

IV. EQUIVALENCE OF THE GHL AND SRS THEORY FOR QUARTET H3

In this section the order-by-order equivalence of the complete energy expressions obtained by using either the GHL or the SRS theory will be demonstrated. Both theories start from the Hamiltonian of Eq. (4) and a zeroth-order wave function which is a product of three free hydrogen atom wave functions. To demonstrate the equivalence of the first-order expressions, the first-order SRS term will be expressed in terms of Coulomb and exchange energies. In Eq. (12) of Ref. [10] this term is given by

{}^{3/2}E^1_{SRS} = N_0 \langle \psi_0 | \hat{V} \left( 1 - \hat{T}(12) - \hat{T}(23) - \hat{T}(13) + \hat{T}(123) + \hat{T}(132) \right) | \psi_0 \rangle ,   (20)

which can be expressed with Eqs. (16) to (19) as

{}^{3/2}E^1_{SRS} = N_0 \left[ J^1 - K^1_{12} - K^1_{23} - K^1_{13} + K^1_{123} + K^1_{132} \right] ,   (21)

where

N_0 = \left[ 1 - S^0_{12} - S^0_{23} - S^0_{13} + S^0_{123} + S^0_{132} \right]^{-1} .   (22)

With Eq. (12) it is possible to express the first-order contributions as

J^1 = \epsilon^1_I + \epsilon^1_{12} S^0_{12} + \epsilon^1_{23} S^0_{23} + \epsilon^1_{13} S^0_{13} + \epsilon^1_{123} S^0_{123} + \epsilon^1_{132} S^0_{123} ,   (23)
K^1_{12} = \epsilon^1_{12} + \epsilon^1_I S^0_{12} + \epsilon^1_{23} S^0_{123} + \epsilon^1_{13} S^0_{123} + \epsilon^1_{123} S^0_{23} + \epsilon^1_{132} S^0_{13} ,   (24)
K^1_{23} = \epsilon^1_{23} + \epsilon^1_I S^0_{23} + \epsilon^1_{12} S^0_{123} + \epsilon^1_{13} S^0_{123} + \epsilon^1_{123} S^0_{13} + \epsilon^1_{132} S^0_{12} ,   (25)
K^1_{13} = \epsilon^1_{13} + \epsilon^1_I S^0_{13} + \epsilon^1_{12} S^0_{123} + \epsilon^1_{23} S^0_{123} + \epsilon^1_{123} S^0_{12} + \epsilon^1_{132} S^0_{23} ,   (26)
K^1_{123} = \epsilon^1_{123} + \epsilon^1_I S^0_{123} + \epsilon^1_{12} S^0_{23} + \epsilon^1_{23} S^0_{13} + \epsilon^1_{13} S^0_{12} + \epsilon^1_{132} S^0_{123} ,   (27)
K^1_{132} = \epsilon^1_{132} + \epsilon^1_I S^0_{123} + \epsilon^1_{12} S^0_{13} + \epsilon^1_{23} S^0_{12} + \epsilon^1_{13} S^0_{23} + \epsilon^1_{123} S^0_{123} .   (28)

On inserting these into Eq. (21) many terms cancel, and Eq. (21) is found to equal the first-order contribution to Eq. (3):

{}^{3/2}E^1_{SRS} = N_0 \left[ J^1 - K^1_{12} - K^1_{23} - K^1_{13} + K^1_{123} + K^1_{132} \right] = \epsilon^1_I - \epsilon^1_{12} - \epsilon^1_{23} - \epsilon^1_{13} + \epsilon^1_{123} + \epsilon^1_{132} = {}^{3/2}E^1_{GHL} .   (29)

The rest of the proof proceeds by complete induction. The claim of the induction is the equivalence of the GHL and SRS energy expressions up to nth order. From Eq. (12) of Ref. [10], the general nth-order expression for the interaction energy in SRS theory is found to be

{}^{3/2}E^n_{SRS} = N_0 \Big[ \langle \psi_0 | \hat{V} \left( 1 - \hat{T}(12) - \hat{T}(23) - \hat{T}(13) + \hat{T}(123) + \hat{T}(132) \right) | \psi^{(n-1)}_{pol} \rangle
- \sum_{k=1}^{n-1} {}^{3/2}E^k_{SRS} \langle \psi_0 | \left( 1 - \hat{T}(12) - \hat{T}(23) - \hat{T}(13) + \hat{T}(123) + \hat{T}(132) \right) | \psi^{(n-k)}_{pol} \rangle \Big]
= N_0 \Big[ J^n - K^n_{12} - K^n_{23} - K^n_{13} + K^n_{123} + K^n_{132}
- \sum_{k=1}^{n-1} {}^{3/2}E^k_{SRS} \left( -S^{n-k}_{12} - S^{n-k}_{23} - S^{n-k}_{13} + S^{n-k}_{123} + S^{n-k}_{132} \right) \Big] ,   (30)

where N_0 is given by Eq. (22). Thus it is necessary to prove that

{}^{3/2}E^n_{GHL} = \epsilon^n_I - \epsilon^n_{12} - \epsilon^n_{23} - \epsilon^n_{13} + \epsilon^n_{123} + \epsilon^n_{132}   (31)
= {}^{3/2}E^n_{SRS} .   (32)

To complete the proof by induction it must be shown that the (n+1)st-order terms of both theories are also equal. To do so, the (n+1)st order of GHL theory is expressed in terms of the quantities occurring in SRS theory.
This can be achieved by inserting the solutions of the set of linear equations Eq. (12) into the complete GHL energy for the H3 quartet state [24]

  ^{3/2}E_{GHL} = ε_I − ε_{12} − ε_{23} − ε_{13} + ε_{123} + ε_{132}   (33)
             ≈ Σ_{n=0}^{M} ^{3/2}E^n_{GHL} = Σ_{n=0}^{M} [ ε^n_I − ε^n_{12} − ε^n_{23} − ε^n_{13} + ε^n_{123} + ε^n_{132} ]
             = E_0 + [ J − K_{12} − K_{23} − K_{13} + K_{123} + K_{132} ] [ 1 − S_{12} − S_{23} − S_{13} + S_{123} + S_{132} ]^{−1}   (34)

where J, K_g, and S_g have been defined in Eqs. (13) to (15). To find the expression for the (n+1)st-order contribution to the energy of the quartet state, the left-hand side is first multiplied by the denominator,

  Σ_{n=0}^{M} ^{3/2}E^n_{GHL} [ 1 − Σ_{n=0}^{M} (S^n_{12} + S^n_{23} + S^n_{13}) + Σ_{n=0}^{M} (S^n_{123} + S^n_{132}) ]
  = E_0 [ 1 − Σ_{n=0}^{M} (S^n_{12} + S^n_{23} + S^n_{13}) + Σ_{n=0}^{M} (S^n_{123} + S^n_{132}) ]
    + Σ_{n=0}^{M} [ J^n − K^n_{12} − K^n_{23} − K^n_{13} + K^n_{123} + K^n_{132} ].   (35)

Collecting terms of (n+1)st order leads to

  ^{3/2}E^{n+1}_{GHL} [ 1 − S^0_{12} − S^0_{23} − S^0_{13} + S^0_{123} + S^0_{132} ]
  = J^{n+1} − K^{n+1}_{12} − K^{n+1}_{23} − K^{n+1}_{13} + K^{n+1}_{123} + K^{n+1}_{132}
    + E_0 [ −S^{n+1}_{12} − S^{n+1}_{23} − S^{n+1}_{13} + S^{n+1}_{123} + S^{n+1}_{132} ]
    − Σ_{k=0}^{n} ^{3/2}E^k_{GHL} [ −S^{n+1−k}_{12} − S^{n+1−k}_{23} − S^{n+1−k}_{13} + S^{n+1−k}_{123} + S^{n+1−k}_{132} ]   (36)

with the result that

  ^{3/2}E^{n+1}_{GHL} = N_0^{−1} { J^{n+1} − K^{n+1}_{12} − K^{n+1}_{23} − K^{n+1}_{13} + K^{n+1}_{123} + K^{n+1}_{132}
                     − Σ_{k=1}^{n} ^{3/2}E^k_{GHL} [ −S^{n+1−k}_{12} − S^{n+1−k}_{23} − S^{n+1−k}_{13} + S^{n+1−k}_{123} + S^{n+1−k}_{132} ] }.   (37)

By the induction hypothesis, which states that the GHL term equals the SRS term for all orders up to the nth, ^{3/2}E^k_{GHL} in the last line can be replaced by ^{3/2}E^k_{SRS} for all orders 1, …, n. Thus Eq. (37) can be transformed into

  ^{3/2}E^{n+1}_{GHL} = N_0^{−1} { J^{n+1} − K^{n+1}_{12} − K^{n+1}_{23} − K^{n+1}_{13} + K^{n+1}_{123} + K^{n+1}_{132}
                     − Σ_{k=1}^{n} ^{3/2}E^k_{SRS} [ −S^{n+1−k}_{12} − S^{n+1−k}_{23} − S^{n+1−k}_{13} + S^{n+1−k}_{123} + S^{n+1−k}_{132} ] }   (38)
                   = ^{3/2}E^{n+1}_{SRS}   (39)

and the equality also holds at (n+1)st order. Thus the contributions to the energy of the H3 quartet state in the SRS and GHL theories are equal order by order.
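The recursion just derived is straightforward to implement once the order-by-order integrals are available. A minimal sketch (ours, not from the paper), reading the overall factor as 1/N_0 consistently with Eq. (29); here J[n], K[n] and S[n] stand for precomputed nth-order Coulomb, exchange and overlap integrals:

```python
def srs_energies(N0, J, K, S, n_max):
    """Order-by-order interaction energies via the SRS recursion, Eq. (30).

    N0   : zeroth-order normalization of Eq. (22)
    J[n] : Coulomb integral of order n (n = 1 .. n_max)
    K[n] : dict of exchange integrals of order n,
           keys '12', '23', '13', '123', '132'
    S[n] : dict of overlap integrals of order n, same keys
    Returns the list [E^1, ..., E^{n_max}].
    """
    def comb(d):
        # signed combination  -X_12 - X_23 - X_13 + X_123 + X_132
        return -d['12'] - d['23'] - d['13'] + d['123'] + d['132']

    E = {}
    for n in range(1, n_max + 1):
        val = J[n] + comb(K[n])
        for k in range(1, n):          # subtraction terms of Eq. (30)
            val -= E[k] * comb(S[n - k])
        E[n] = val / N0
    return [E[n] for n in range(1, n_max + 1)]
```

Feeding the same integrals into the GHL form, Eq. (37), gives identical numbers order by order, which is the content of the equivalence proof.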
One advantage of the GHL theory is that it permits the calculation of the exchange energies by other methods, such as the surface integral method. In Ref. [10], the nonadditive energy terms of the quartet spin state of H3 have been calculated up to third order. The first-order terms can be split into a polarization and an exchange part. Since the first-order polarization energy is pairwise additive, the only non-additive term in first order is contained in the exchange term, which in Eqs. (23) and (55) of Ref. [9] is given by

  E^1_{exch}(3,3) = <ψ_0| V_AB [ T(23) + T(13) + T(123) + T(132) − S^0_{23} − S^0_{13} − S^0_{123} − S^0_{132} ] |ψ_0>
                 + <ψ_0| V_BC [ T(12) + T(13) + T(123) + T(132) − S^0_{12} − S^0_{13} − S^0_{123} − S^0_{132} ] |ψ_0>
                 + <ψ_0| V_AC [ T(12) + T(23) + T(123) + T(132) − S^0_{12} − S^0_{23} − S^0_{123} − S^0_{132} ] |ψ_0>,   (40)

which can be expressed in terms of exchange energies as

  E^1_{exch}(3,3) = ε^1_{123} (1 − S^0_{123}) − ε^1_{12} (1 + S^0_{12}) − ε^{H2,1}_{12} (1 + S^0_{12}) − ε^1_{23} (1 + S^0_{23}) − ε^{H2,1}_{23} (1 + S^0_{23}) − ε^1_{13} (1 + S^0_{13}) − ε^{H2,1}_{13} (1 + S^0_{13}).   (41)

This term is also obtained if the pure two-body contributions are subtracted from Eq. (29).

V. SURFACE INTEGRAL METHOD (SIM) FOR THE CALCULATION OF EXCHANGE ENERGIES

As shown in Refs. [14] and [18], all exchange energies occurring in the GHL description of the H3 system, i.e. the two-body as well as the cyclic exchange energies, can be calculated by the surface integral method (SIM). The exchange energy ε_{g_0} associated with an arbitrary group element g_0 ≠ I is accordingly given by

  ε_{g_0} = [ ∫_V dv ( F² − (T(g_0)F)² ) ]^{−1} { (1/2) ∮_Σ [ F ∇_9 T(g_0)F − T(g_0)F ∇_9 F ] · ds
           − Σ_{g ≠ I, g_0} ε_g ∫_V dv [ F (T(g_0 g)F) − (T(g_0)F)(T(g)F) ] }.   (42)

In order to compare numerical results for three-body exchange effects with the published SAPT results for H3 [10], an expression for the non-additive exchange energy has to be obtained using SIM.
The non-additive exchange energy basically contains the cyclic exchange energy and the implicit three-body effects on the two-body exchange energies. As already pointed out in Ref. [14], it can be shown that, for a choice of the partial volume V such that F is localized inside it, all quantities occurring in the sum of Eq. (42) go to zero at least a factor of e^{−R} faster than the surface integral itself if all internuclear distances are larger than or equal to R. This holds for all exchange energies. In a different paper [18] it will be shown how to find the implicit three-body effect from the complete surface integral expression for the two-body exchange energies. For product wave functions as used here, the pure two-body part is given by the first line of Eq. (42), i.e. the surface integral (SI) over the denominator. The implicit three-body effect is contained in the second line of Eq. (42), i.e. the products of partial overlap integrals with exchange energies. Following the same scheme used in the Appendix of Ref. [14], these terms can be shown to go to zero asymptotically as e^{−5R}, which is faster by a factor of e^{−3R} than the surface integral (SI) itself. Using these results, a GHL non-additive exchange energy for the quartet state of H3 can be defined by simply subtracting the pure two-body contribution from the two-body exchange energies in the GHL result for the quartet state, Eq. (3),

  (^{3/2}E_{GHL})_{exch} = 2ε_{123} − ε_{12} − ε^{H2}_{12} − ε_{23} − ε^{H2}_{23} − ε_{13} − ε^{H2}_{13}   (43)

which can be calculated either by SIM or by perturbation theory. The first-order contribution to this non-additive term,

  (^{3/2}E^1_{GHL})_{exch} = 2ε^1_{123} − ε^1_{12} − ε^{H2,1}_{12} − ε^1_{23} − ε^{H2,1}_{23} − ε^1_{13} − ε^{H2,1}_{13},   (44)

is given in Tables I and II and will be discussed in the next section. In summary, the complete three-body exchange effect in H3, which consists of the cyclic exchange energy and the effect of the presence of the third atom on the two-body exchange energies, can asymptotically be approximated by the surface integral for the cyclic exchange energy.

VI. RESULTS

In Tables I and II as well as Figures 1 and 2, the numerical results for the first-order non-additive exchange energy of SRS theory are compared with three different SIM terms: (i) the non-additive exchange energy of GHL theory, Eq. (43); (ii) the cyclic exchange energy (complete SIM expression Eq. (42) with overlaps); (iii) the surface integral (SI) of the cyclic exchange energy only (without overlaps). All these quantities have been calculated using the zeroth-order localized wave function F = π^{−3/2} exp(−r_{1A} − r_{2B} − r_{3C}). Since the exchange energies calculated by SIM cannot be assigned a definite perturbative order (because only part of the complete space is used in the calculation), the quantity (i) is not expected to yield the same numerical results as the first-order non-additive exchange energy of SRS theory. But since the same zeroth-order product wave function was used to calculate both terms, it is expected that both quantities exhibit a similar overall behavior in the range of parameters studied. In Table I, results for equilateral triangular geometries of the nuclei with sides ranging between R = 4 and R = 10 atomic units are listed. Generally, all terms calculated by SIM have smaller absolute values than the first-order perturbative ones. At R = 4 a.u., the absolute value of the complete SIM term Eq. (43) is 27 % below the SRS result Eq. (41), the cyclic exchange energy is 38 % smaller, and only the surface integral of the cyclic exchange energy is 25 % greater in absolute value.
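Comparisons of this kind are simple arithmetic on the tabulated energies. As a quick check (ours, not from the paper), the 30-degree row of Table II reproduces the corresponding percentages quoted for the isosceles geometries:

```python
# 30-degree row of Table II (energies in hartree)
srs = -3.75e-4   # first-order SRS non-additive exchange energy, Eq. (41)
ghl = -2.60e-4   # GHL non-additive exchange energy, Eq. (43)
cyc = -2.23e-4   # cyclic exchange energy 2*eps_123 (SIM, with overlaps)
si  = -4.25e-4   # surface integral of the cyclic exchange only (2 SI)

def percent_vs_srs(x):
    """Signed percentage by which |x| deviates from |srs|."""
    return 100.0 * (abs(x) - abs(srs)) / abs(srs)

print(round(percent_vs_srs(ghl)))  # -31: the GHL term is 31 % smaller
print(round(percent_vs_srs(cyc)))  # -41: the cyclic exchange is 41 % smaller
print(round(percent_vs_srs(si)))   # 13: the surface integral is 13 % greater
```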
At R = 10 a.u., however, all three quantities calculated by SIM are no longer distinguishable and are only 6 % below the SRS result. In Table II the results for isosceles triangles with equal sides of length 6 a.u. and with angles γ_B varying between 30° and 180° are shown. All quantities except for the surface integral without overlaps exhibit a change of sign in the region around 120° and 150°. At 30°, (i) the absolute value of the SIM term Eq. (43) is 31 % smaller than the SRS result, (ii) the cyclic exchange energy is 41 % smaller, and again (iii) the surface integral of the cyclic exchange energy only is 13 % greater in absolute value. At 180°, on the other hand, only the value for the surface integral has the wrong sign, while both the other terms have become indistinguishable and are now 35 % greater in absolute value than the SRS term. The differences between the numerical results for the quantities compared in Tables I and II are, as already pointed out, not due to numerical problems but due to the fact that the quantities are different by definition. From the Tables it appears that for triangular geometries of the nuclei and internuclear distances R ≥ 4 a.u. the first-order non-additive exchange energy for the quartet state of H3 can be quite well approximated by the surface integral of the cyclic exchange energy. This was stated in Ref. [14] and has now been explained by the fact that all the SIM approximations (see Section V and Ref. [14]) hold in this region. In Tables III and IV as well as Figures 1 and 2, higher orders of SRS theory are also taken into account and compared with the complete GHL non-additive exchange energy Eq. (43) in order to show that SIM goes beyond the first order of SRS theory. For equilateral triangular geometries of the nuclei and internuclear distances larger than 6 a.u.
the results of GHL theory lie between the first-order SRS term and the sum of the first- and second-order terms, approaching the first-order term for increasing distances. At 6 a.u. GHL is very close to the first plus second order of SRS, and even at 4 a.u. GHL is only 17 % below the total sum up to third order of SRS theory. For isosceles structures of the nuclei with equal internuclear distances of 6 a.u., the advantage of SIM over the first-order SRS theory is even more apparent. Starting at 60°, the GHL result is closer to the first plus second order than to the first-order SRS term. The change of sign occurs for the first order between 120° and 150°, whereas for all other terms it occurs already between 90° and 120°. The differences of the GHL term from the first plus second order SRS term range from 0.4 % at 60° to 33 % at 120° and 10 % at 180°. At 30° the GHL result is again only 16 % smaller than the SRS term with the third-order term included. The advantage of SIM over the perturbative approach is that the surface integral SI is easily calculated numerically, and including the partial overlap terms provides part of the second-order SRS contributions.

VII. CONCLUSIONS

This paper demonstrates how the perturbation series consisting of Coulomb, exchange and overlap integrals can be used to express the Coulomb and exchange energies occurring in GHL theory. Combining the perturbation series with the GHL theory yields an energy expression for the quartet spin state equivalent to that of symmetrized Rayleigh-Schrödinger perturbation theory given in [10]. It is possible to evaluate the exchange energies using the surface integral method (SIM). The SIM has the advantage that it derives from a clear physical picture of the exchange process in terms of the electrons continuously trading places. For the cyclic exchange energies this method has already been described in detail in Ref.
[14], and for the implicit three-body effect on the two-body exchange energies it will be shown in Ref. [18]. The long-range behavior of the three-body terms entering the two-body exchange energies and of the partial overlap integrals (multiplied by two-body exchange energies in the expression for the cyclic exchange energy in Eq. (42)) indicates that for large internuclear separations the surface integral for the cyclic exchange energy is sufficient to describe the non-additive contribution to the exchange part of the quartet spin state. The numerical results in Tables I and II confirm this conclusion.

VIII. ACKNOWLEDGEMENTS

We thank K. T. Tang and J. P. Toennies for helpful discussions. U. K. gratefully acknowledges financial support from the DFG.

The GHL term Eq. (44) differs from the respective SRS term Eq. (41) only by overlap integrals that are negligible compared to one. A comparison of the numerical results of the first-order non-additive exchange energy Eq. (41) of SRS theory and the GHL term [Eq. (44)] calculated by SIM using the zeroth-order product wave function F = π^{−3/2} exp(−r_{1A} − r_{2B} − r_{3C}) is given in the figures.

FIG. 1. Comparison of different orders of the non-additive exchange energy in SRS theory with the GHL result (filled triangles) calculated with SIM from Eq. (43) for equilateral triangles. The first-order SRS contribution is denoted by circles, and all terms up to second order by open triangles. The stars show twice the surface integral of the cyclic exchange energy.

FIG. 2. Comparison of different orders of the non-additive exchange energy in SRS theory with the GHL result (filled triangles) calculated with SIM from Eq. (43) for isosceles triangles with R_AB = R_BC = 6 a.u. as a function of the included angle γ_B. The first-order SRS contribution is denoted by circles, and all terms up to second order by open triangles. The stars show twice the surface integral of the cyclic exchange energy only. Note the change in the energy axis from linear to logarithmic scale.

TABLE I. Comparison of the numerical results for the first-order non-additive exchange energy of SRS theory (SRS^1, Eq. (41)) with a similar but still different quantity derived from GHL theory, Eq. (43), with the cyclic exchange energy calculated by SIM (2ε_123 (SIM)) including overlaps, and with the surface integral SI of the cyclic exchange energy without overlaps (2 SI). The nuclei form equilateral triangles with sides of length R.

TABLE II. Comparison of the numerical results of SRS theory with the same quantities as in Table I. The nuclei form isosceles triangles with two sides of length R_AB = R_BC = 6 a.u.; γ_B is the included angle.

  E^1_exch [E_h], R_AB = R_BC = 6 a.u.
  γ_B [degrees]   SRS Eq. (41)    GHL Eq. (43)    2ε_123 (SIM)    2 SI
  30    −3.75 · 10^−4   −2.60 · 10^−4   −2.23 · 10^−4   −4.25 · 10^−4
  60    −5.90 · 10^−5   −5.19 · 10^−5   −5.15 · 10^−5   −5.70 · 10^−5
  90    −7.40 · 10^−6   −6.05 · 10^−6   −6.03 · 10^−6   −7.95 · 10^−6
  120   −3.42 · 10^−7    2.61 · 10^−7    2.60 · 10^−7   −1.62 · 10^−6
  150    8.84 · 10^−7    1.31 · 10^−6    1.30 · 10^−6   −5.83 · 10^−7
  180    1.10 · 10^−6    1.48 · 10^−6    1.48 · 10^−6   −4.10 · 10^−7

TABLE III. Comparison of the numerical results for the non-additive exchange energy in GHL theory (GHL, Eq. (43)) with the first-order non-additive exchange energy of SRS theory (SRS^1, Eq. (41)), with the SRS non-additive exchange energy up to second order (SRS^2) [10], and up to third order (SRS^3) [10]. The nuclei form equilateral triangles with sides of length R.

  E_exch [E_h]
  R [a_0]   SRS^1 Eq. (41)   SRS^2           SRS^3           GHL Eq. (43)
  4    −3.83 · 10^−3   −3.60 · 10^−3   −3.34 · 10^−3   −2.79 · 10^−3
  6    −5.90 · 10^−5   −5.21 · 10^−5   −5.03 · 10^−5   −5.19 · 10^−5
  7    −5.88 · 10^−6   −4.77 · 10^−6   −4.62 · 10^−6   −5.32 · 10^−6
  8    −5.33 · 10^−7   −3.71 · 10^−7   −3.57 · 10^−7   −4.89 · 10^−7
  10   −3.6 · 10^−9    −0.7 · 10^−9    −0.7 · 10^−9    −3.4 · 10^−9

TABLE IV. Comparison of the numerical results of GHL theory with the same quantities as in Table III. The nuclei form isosceles triangles with two sides of length R_AB = R_BC = 6 a.u.; γ_B is the included angle.

  E_exch [E_h], R_AB = R_BC = 6 a.u.
  γ_B [degrees]   SRS^1 Eq. (41)   SRS^2           SRS^3           GHL Eq. (43)
  30    −3.75 · 10^−4   −3.33 · 10^−4   −3.08 · 10^−4   −2.60 · 10^−4
  60    −5.90 · 10^−5   −5.21 · 10^−5   −5.03 · 10^−5   −5.19 · 10^−5
  90    −7.40 · 10^−6   −5.67 · 10^−6   −4.98 · 10^−6   −6.05 · 10^−6
  120   −3.42 · 10^−7    3.88 · 10^−7    9.02 · 10^−7    2.61 · 10^−7
  150    8.84 · 10^−7    1.43 · 10^−6    1.88 · 10^−6    1.31 · 10^−6
  180    1.10 · 10^−6    1.63 · 10^−6    2.07 · 10^−6    1.48 · 10^−6

References

[1] K. T. Tang, J. P. Toennies, and C. L. Yiu, Int. Rev. Phys. Chem. 17, 363 (1998).
[2] S. H. Patil and K. T. Tang, Asymptotic Methods in Quantum Mechanics: Applications to Atoms, Molecules and Nuclei (Springer, Berlin, 2000).
[3] U. Kleinekathöfer, K. T. Tang, J. P. Toennies, and C. L. Yiu, J. Chem. Phys. 111, 3377 (1999).
[4] U. Kleinekathöfer, Chem. Phys. Lett. 324, 403 (2000).
[5] B. M. Axilrod and E. Teller, J. Chem. Phys. 11, 299 (1943); Y. Muto, Proc. Phys. Soc. Jpn. 17, 629 (1943).
[6] M. J. Elrod and R. J. Saykally, Chem. Rev. 94, 1975 (1994).
[7] W. J. Meath and M. Koulis, J. Mol. Struct. (Theochem) 226, 1 (1991).
[8] W. J. Meath and R. A. Aziz, Mol. Phys. 52, 225 (1984).
[9] R. Moszynski, P. E. S. Wormer, B. Jeziorski, and A. van der Avoird, J. Chem. Phys. 103, 8058 (1995).
[10] T. Korona, R. Moszynski, and B. Jeziorski, J. Chem. Phys. 105, 8178 (1996).
[11] V. F. Lotrich and K. Szalewicz, J. Chem. Phys. 112, 112 (2000).
[12] R. J. Wheatley, Mol. Phys. 84, 899 (1995).
[13] Z. C. Zhang, A. R. Allnatt, J. D. Talman, and W. J. Meath, Mol. Phys. 81, 1425 (1994).
[14] U. Kleinekathöfer, T. I. Sachse, K. T. Tang, J. P. Toennies, and C. L. Yiu, J. Chem. Phys. 113, 948 (2000).
[15] K. T. Tang, J. P. Toennies, and C. L. Yiu, J. Chem. Phys. 94, 7266 (1991).
[16] K. T. Tang, J. P. Toennies, and C. L. Yiu, J. Chem. Phys. 99, 377 (1993).
[17] U. Kleinekathöfer, K. T. Tang, J. P. Toennies, and C. L. Yiu, J. Chem. Phys. 107, 9502 (1997).
[18] T. I. Sachse, K. T. Tang, and J. P. Toennies, in preparation.
[19] T. Cwiok, B. Jeziorski, W. Kolos, R. Moszynski, J. Rychlewski, and K. Szalewicz, Chem. Phys. Lett. 195, 67 (1992).
[20] J. O. Hirschfelder, Chem. Phys. Lett. 1, 325 (1967).
[21] B. Jeziorski, R. Moszynski, and K. Szalewicz, Chem. Rev. 94, 1887 (1994).
[22] G. Chalasinski, B. Jeziorski, and K. Szalewicz, Int. J. Quantum Chem. 11, 247 (1977).
[23] K. T. Tang, J. P. Toennies, and C. L. Yiu, Chem. Phys. Lett. 162, 170 (1989).
[24] The explicit expressions will be given in a forthcoming paper [18].
Source: arXiv:physics/0011058 [physics.chem-ph], T. I. Sachse and U. Kleinekathöfer, "Generalized Heitler-London Theory for H3: A Comparison of the Surface Integral Method with Perturbation Theory".
Local Geometry of Self-similar Sets: Typical Balls, Tangent Measures and Asymptotic Spectra

Short Title: Local Geometry of Self-similar Sets

Manuel Morán, Departamento de Análisis Económico y Economía Cuantitativa, Universidad Complutense de Madrid, Campus de Somosaguas, 28223 Madrid, Spain; IMI-Institute of Interdisciplinary Mathematics, Universidad Complutense de Madrid, Plaza de Ciencias 3, 28040 Madrid, Spain

Marta Llorente, Departamento de Análisis Económico: Economía Cuantitativa, Universidad Autónoma de Madrid, Campus de Cantoblanco, 28049 Madrid, Spain

María Eugenia Mera, Departamento de Análisis Económico y Economía Cuantitativa, Universidad Complutense de Madrid, Campus de Somosaguas, 28223 Madrid, Spain

Keywords: Self-Similar Sets; Hausdorff Measures; Tangent Measures; Density of Measures; Computability of Fractal Measures; Complexity of Topological Spaces; Sierpinski Gasket

Abstract. We analyse the local geometric structure of self-similar sets with the open set condition through the study of the properties of a distinguished family of spherical neighbourhoods, the typical balls. We quantify the complexity of the local geometry of self-similar sets, showing that there are uncountably many classes of spherical neighbourhoods that are not equivalent under similitudes. We show that, at a tangent level, the uniformity of the Euclidean space is recuperated in the sense that any typical ball is a tangent measure of the measure ν at ν-a.e. point, where ν is any self-similar measure. We characterise the spectrum of asymptotic densities of metric measures in terms of the packing and centred Hausdorff measures. As an example, we compute the spectrum of asymptotic densities of the Sierpinski gasket.
Introduction and main results

In order to gauge the vastness of the set of spherical neighbourhoods of a metric space X, it is useful to consider the quotient spaces Sph_X / ∼_F, where Sph_X is the set of spherical neighbourhoods of X and ∼_F is the equivalence relation associated with some group F of self-mappings of X: B ∼_F B′ ⇔ B = f(B′) for B, B′ ∈ Sph_X and some f ∈ F. The regularity of the Euclidean space R^n is made clear by the fact that if S_n is the set of similarities of R^n, then Sph_{R^n} / ∼_{S_n} consists of a unique equivalence class. In this paper, we study the local geometry of a self-similar set E ⊂ R^n satisfying the open set condition (OSC), geometry which is described by the spherical neighbourhoods of E as a metric space. Using Marstrand's theorem [1], together with the results in Sec. 3.2, we are able to prove that, for general self-similar sets with OSC, there are uncountably many equivalence classes in the quotient spaces Sph_E / ∼_{S_n}. This gives account of the complexity of the purely deterministic self-similar geometry. In spite of these facts, the literature has established the existence of a strong kind of regularity, on a tangent level and on average, in the neighbourhoods of a self-similar set. Recall that a self-similar set is defined as the unique compact set E ⊂ R^n that satisfies the basic equation of self-similarity

  E = ∪_{i=0}^{m−1} f_i(E),   (1)

where Ψ := {f_i : i ∈ M}, M = {0, …, m−1}, is a system of contracting similarities of R^n. The OSC requires the existence of a non-empty bounded open set O such that ∪_{i∈M} f_i(O) ⊂ O and f_i(O) ∩ f_j(O) = ∅ for i ≠ j; such an O is called a feasible open set for Ψ. We can assume, without loss of generality, as we shall from now on, that O ∩ E ≠ ∅ holds, which is also called the strong open set condition (SOSC) (cf. [2] and [3], see also [4]). If f_i(E) ∩ f_j(E) = ∅ for i, j ∈ M, i ≠ j, it is said that the strong separation condition (SSC) holds, in which case the OSC is also fulfilled. We want to understand the local geometry of E through the study of the local behaviour of the measures in

  M^s_E := { β_E : β ∈ M^s },   (2)

where s is the similarity dimension of E, dim E, that is, the unique real number s that satisfies Σ_{i∈M} r_i^s = 1, r_i being the contraction constant of the similarity f_i, i ∈ M.
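To make the last definition concrete, the similarity dimension can be computed numerically by solving Σ_{i∈M} r_i^s = 1 for s. A small sketch (ours, not from the paper), applied to the paper's prime example, the Sierpinski gasket, whose three maps have contraction ratio 1/2:

```python
import math

def similarity_dimension(ratios, tol=1e-12):
    """Solve sum(r**s for r in ratios) == 1 for s by bisection.

    For contraction ratios 0 < r_i < 1 the function f(s) = sum r_i^s
    is strictly decreasing in s, so the root is unique.
    """
    f = lambda s: sum(r ** s for r in ratios) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0.0:      # grow the bracket until f changes sign
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sierpinski gasket: three similarities with contraction ratio 1/2
s = similarity_dimension([0.5, 0.5, 0.5])
print(s)  # log(3)/log(2), approximately 1.58496
```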
Here β_E stands for the measure β restricted to the set E. The measures

  M^s := { H^s, H^s_{Sph}, C^s, P^s }   (3)

are the s-dimensional Hausdorff measure, spherical Hausdorff measure, centred Hausdorff measure and packing measure, respectively. Any two measures in M^s_E are multiples of each other; moreover, in the case that s takes the integer value n, they are also multiples of the n-dimensional Lebesgue measure. Each measure in M^s_E highlights different basic geometric properties of subsets of R^n. For α ∈ M^s_E, 0 < α(E) < ∞ holds and E is called an s-set (see [5] for further details and Sec. 2.2 for the definitions of the measures in M^s). We shall present in Sec. 2.1 below the natural probability measure µ. For the time being, we can see it as the normalised measure α/α(E) of any other α ∈ M^s_E. The results in this paper about the regularity of the metric measures are also shared by the wider class of self-similar measures, M_S(E) (see [6] and Sec. 2.1 for a definition). Whereas the metric measures M^s convey a strong geometric meaning, self-similar measures are an essential tool in the multifractal analysis of logarithmic densities, a topic that has generated a vast amount of literature over the past 30 years.

Scenery flow, tangent distribution and tangent measures

Let ν be a Radon measure on R^n and let x be a point in the support of ν. We can access the local geometry of ν_E around x through the following zooming process: let T_{x,t}(y) = t(y − x), t > 0, be the homothety that maps the ball B(x, t^{−1}) onto the unit ball D := B(0, 1). Let ν_{x,t} be the probability measure on D obtained from the normalisation of the restriction to D of the image measure of ν_E under the homothety T_{x,t}. If M(D) denotes the set of Radon measures on D, then the mapping t → ν_{x,t} can be considered as a measure-valued time series taking values in the metric space M(D) endowed with the weak topology. This time series is called the scenery flow of ν around x (cf. [7]).
The empirical distributions Φ x,t (ν), t > 0, associated with such "time" series, are probability measures on M(D) (so they belong to the set M(M(D)) of Radon measures on M(D)). The empirical distribution Φ x,t (ν) gives weight to a set A ⊂ M(D) according to the rate of the time interval [0, t] that the "empirical" data δ νx,t (unit mass at ν x,t ) stay in A. If the empirical distribution Φ x,t (ν) converges to a limit Φ x (ν) as t tends to infinity, then the limiting distribution Φ x (ν) is called the tangent distribution of ν at x (see [8]). S. Graf [9] proved that if E is a self-similar set with OSC and ν ∈ M S (E), then the limit Φ x (ν) exists ν-a.e. x, and it does not depend on x. Moreover, he constructed an explicit formula for the tangent distribution. This author gave credit for the first of these results to C. Bandt in [8], and Bandt in turn gives credit for the same result to S. Graf [10] (indeed a most refreshing case). M. Arbeiter [11], C. Bandt [10] and A. Pyörälä [12] extended these results in different ways. The uniqueness and independence of the limit Φ x (ν) from x is what M. Gavish, [13], calls, when displayed by a measure, the uniform scaling scenery property of such a measure. This means that, at a tangent level and in this sense, the flow scenery recovers the uniformity of the Euclidean space. Remark 1 There is another way to pass to the limit at the tangent level that leads to tangent measures, a concept prior to tangent distributions introduced by D. Preiss [14]. There, starting from a measure ν in the set M(R n ) of Radon measures on R n , he considers unrestricted zoomings ν x,t of ν at x by homotheties T x,t as above. Instead of performing an averaging procedure, Preiss considers non-null and locally finite limits, in the vague topology of M(R n ), of sequences {c n ν x,tn } with t n n→∞ → ∞ and c n > 0. Such limit points are called tangent measures of ν at x, and T an(ν, x) denotes the set of all such limits. 
In our approach, following C. Bandt [8], the measures ν_{x,t_n} are restricted and normalised zoomings, but the zoomings are through general expanding similitudes, rather than only homotheties. Let I_n be the group of isometries of R^n. We may define, in the set M(R^n), the equivalence relationship

  α ≅ β ⇔ there is a g ∈ I_n and a λ > 0 such that β = λ g_♯(α),   (4)

where g_♯(α) is the image measure of α under g, i.e. g_♯(α)(A) = α(g^{−1}(A)) for α-measurable A ⊂ R^n. Thus, we identify two measures if they are equal up to an isometry (see, for instance, [10], where measures equivalent up to isometries are identified in the construction of tangent measures), and we also identify all measures in the half-straight line {λα : λ > 0}, α ∈ M(R^n). For α ∈ M(R^n), let ᾱ denote the equivalence class in M(R^n)/≅ to which α belongs, i.e.

  ᾱ = {β ∈ M(R^n) : β ≅ α}.   (5)

Given a measure ν ∈ M(R^n), we now consider the zoomings ν_{x,t_n} to be of the form (g_n)_♯(ν_{B(x, d t_n^{−1})}), where g_n is a similitude of ratio t_n, d ≤ 1, and x ∈ spt(ν) (see (23)). We define the quotient space M̄(R^n) and the set of tangent equivalence classes of measures, Tan(ν, x), by

  M̄(R^n) = { ᾱ : α ∈ M(R^n) }   (6)

  Tan(ν, x) = { ᾱ : there is a sequence c_n ν_{x,t_n} →_w ᾱ as n → ∞, with t_n → ∞, ᾱ ≠ 0 and α ∈ M(R^n) },   (7)

where →_w denotes the weak convergence of measures on M̄(R^n). It turns out that, in the course of our research, the case in which the convergence of the magni-

Remark 2. In our definition (7), any two zoomings β = (g_n)_♯(ν_{B(x, d t_n^{−1})}) and β′ = (h_n)_♯(ν_{B(x, d t_n^{−1})}) of a given spherical neighbourhood B(x, d t_n^{−1}) are considered as valid steps in the construction of a tangent limiting measure ᾱ, where g_n, h_n are different similitudes. This can be considered as the identification of β and β′ as equivalent zoomings. Notice that β = (g_n ∘ h_n^{−1})_♯ β′ and that g_n ∘ h_n^{−1} is an isometry. Thus, the equivalence relationship (4) and the definition in (7) are consistent.
In contrast to the enlightening results obtained in [9], [10] and [11] on the uniform scaling scenery property of self-similar measures, to the best of our knowledge the members of Tan(ν, x) for ν ∈ M_S(E) remain unknown. Several natural issues arise here: What is the relationship between Φ_x(ν) and Tan(ν, x)? What do the measures in Tan(ν, x) look like? Do they display some uniform property? As for the first question, see Proposition 1 in [15]. Below, we give a partial answer to the second and third questions for measures in M_S(E) (see (10) and Theorem 12).

Typical balls

  M_S(B) := { α_B : B ∈ B, α ∈ M_S(E) }.   (8)

It is well known [6] that, for any x ∈ E, the set {f(x) : f ∈ G} is dense in E,

  p_{f_k}^{−1} (f_k^{−1})_♯ α_{B(y, d_k)}  →_{st}  α_{B(x,d)}  as k → ∞,   (9)

where the convergence in (9) is in the sense of the strong topology of Radon measures. Theorem 12 also states that, for all x ∈ E and α ∈ M_S(B),

  M̄_S(B) ⊂ Tan_{st}(α, x)   (10)

holds, where

  M̄_S(B) = { ᾱ : α ∈ M_S(B) }   (11)

(see (5) for the notation ᾱ). The results above imply that the use of general zooming similitudes grants the strong convergence of the zoomings to the tangent measures, whereas in the ordinary spaces of tangent measures, where only homotheties are allowed, convergence can only be ensured in a weak-topology sense. See the sections below for further details on identifications and topologies of measures.

Remark 4. Putting the results in Sec. 3.3, described in the first paragraph of this section, together with (10), we see that the self-similar scenery at x ∈ E depends on x on large scales, meaning that there is a broad variety of balls B(x, d) for varying x that, moreover, also vary with d for fixed x.

Let α ∈ M(R^n), 0 ≤ s ≤ ∞ and x ∈ R^n.
The upper and lower spherical s-densities of α at x are defined, respectively, by θ̄^s_α(x) = lim sup_{d→0} θ^s_α(x, d), (12) and θ̲^s_α(x) = lim inf_{d→0} θ^s_α(x, d), (13) where the s-density of the ball B(x, d), θ^s_α(x, d), is given by θ^s_α(x, d) = α(B(x, d))/(2d)^s. Here the zooming process is summarised in only two scalars, (12) and (13). If θ̲^s_α(x) = θ̄^s_α(x), then we write θ^s_α(x) for the common value and call it the s-density of α at x. Densities and their connections to their underlying measures have been studied extensively in the context of geometric measure theory. A major contribution from Marstrand (Marstrand's theorem, [1]) asserts that, in the Euclidean setting, if the s-density θ^s_α(x) exists in a set of finite and positive α-measure for some α ∈ M(R^n), then s is an integer. The widest class of subsets of Euclidean spaces that are s-sets (i.e. sets with a finite and positive α-measure) is the class of self-similar sets that satisfy the OSC, with s being their similarity dimension (see (2)), together with some variations of it, like the Mauldin and Williams graph-directed constructions, cf. [16], and the controlled Moran constructions, cf. [17]. Here, we are interested in the case in which the similarity dimension s is not an integer and, by Marstrand's theorem described above, θ̲^s_α(x) and θ̄^s_α(x) do not coincide in subsets of positive α-measure. This leads to the following definition of the asymptotic spectrum of densities of a given measure α at a point and, more generally, on a subset of points. Definition 5 Given a subset A ⊂ R^n, we define the asymptotic spectrum of (non-logarithmic) spherical s-densities, Spec(α, A), of a locally finite measure α by Spec(α, A) = {lim_{k→∞} θ^s_α(x, d_k) : x ∈ A and lim_{k→∞} d_k = 0}. (14) We use the non-logarithmic epithet above because there is an ample literature on the so-called multifractal spectrum of logarithmic spherical densities.
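For intuition, here is a toy illustration (our own construction, not from the paper) of the density (12)-(13) in the integer-dimensional case: for Lebesgue measure restricted to [0, 1], an s-set with s = 1, the density θ^1_α(x) exists at every point, equal to 1 at interior points and 1/2 at the endpoints, consistently with Marstrand's theorem.

```python
def theta(x, d, s=1.0):
    """Spherical s-density theta(x, d) = alpha(B(x, d)) / (2d)^s for
    alpha = Lebesgue measure restricted to [0, 1]."""
    lo, hi = max(0.0, x - d), min(1.0, x + d)
    return max(0.0, hi - lo) / (2 * d) ** s

print(round(theta(0.5, 1e-3), 6))  # interior point: density 1.0
print(round(theta(0.0, 1e-3), 6))  # endpoint: density 0.5
```

As d shrinks, these values stabilise, so the limit in (12)-(13) exists everywhere; the interesting behaviour of this paper occurs precisely when s is non-integer and no such limit exists on a set of positive measure.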
This literature also focuses on the limiting behaviour of α on small balls, but the interest is in the upper and lower limits of the quotients log α(B(x, d))/log d as d → 0 (for x ∈ E) and, in particular, in the fractal dimension of both the (α-null) sets where these limits exist and take particular values [18] and the sets of divergence points (see [19], [20], [21]), where the limits do not coincide. Much less is known about the behaviour of non-logarithmic densities, and the research in this paper can be considered a preliminary step in that direction. In particular, in Sec. 3, Theorem 14, we present the knowledge to date about the spectrum of non-logarithmic α-densities, α ∈ M^s_E, of self-similar sets E that satisfy the OSC. In particular, we show that Spec(α, x) is contained in the closed interval [α(E)/P^s(E), α(E)/C^s(E)] for all x in a subset Ẽ of E of full α-measure. There arises a natural class of self-similar sets with nice properties, the α-exact self-similar sets (see Definition 16), which are sets for which the endpoints of this interval belong to Spec(α, x), x ∈ Ẽ. Whereas the results for general self-similar sets with OSC presented in Sec. 3 are of a qualitative nature, in Sec. 4 we focus on our prime example of an α-exact self-similar set, the Sierpinski gasket S, and exploit its regularity to accurately approximate the range of values taken by its spectrum, which is the content of Theorem 26. Moreover, we give a full characterisation of the spectrum over all the points of S, which is given by the union of two closed intervals of positive length, namely, Spec(α, S) = [α(S) θ̲^s_µ(z_0), α(S) θ̄^s_µ(z_0)] ∪ [α(S)/P^s(S), α(S)/C^s(S)], α ∈ M^s_S, where z_0 := (0, 0). Using the numerical approximations of θ̲^s_µ(z_0) and θ̄^s_µ(z_0) obtained in Sec. 4 and of P^s(S) and C^s(S) obtained in [22] and [23], we can also show that these two intervals are disjoint.
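To see why the spectrum can contain more than one point when s is non-integer, consider the analogous (and simpler) one-dimensional situation of the middle-thirds Cantor set with its natural measure, s = log 2/log 3: zooming at x = 0 along the radii d = 3^{-k} and along d = (2/3)·3^{-k} produces two different density limits, 2^{-s} and 4^{-s}. A minimal numerical sketch, using our own discrete approximation of the Cantor measure (not the paper's construction):

```python
import math
import itertools

# Natural measure on the middle-thirds Cantor set, approximated by 2**n
# atoms of mass 2**-n placed at the level-n base-3 codings with digits {0, 2}.
s = math.log(2) / math.log(3)
n = 16
atoms = [sum(a * 3.0 ** -(i + 1) for i, a in enumerate(digs))
         for digs in itertools.product((0, 2), repeat=n)]

def theta(d):
    """theta^s(0, d) = mu(B(0, d)) / (2d)^s for the approximated measure."""
    mass = sum(1 for x in atoms if x < d) / len(atoms)
    return mass / (2 * d) ** s

# Two sequences of radii tending to 0 with different density limits,
# so Spec(mu, 0) is not a single point.
print(theta(3.0 ** -5), 2.0 ** -s)        # along d = 3^-k: limit 2^-s
print(theta(2.0 * 3.0 ** -6), 4.0 ** -s)  # along d = (2/3) 3^-k: limit 4^-s
```

The two printed pairs agree to high precision, exhibiting two distinct limit values of θ^s(0, d_k) for d_k → 0, exactly the kind of oscillation that the asymptotic spectrum (14) is designed to capture.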
In the case that α ∈ {µ, P^s_S, C^s_S}, we have numerical estimates of these two disjoint intervals. The Sierpinski gasket is, as far as we know, the first connected self-similar set with non-integer dimension for which the entire spectrum has been computed.

Notation and preliminaries

The self-similar set E given in (1) can be parametrised as E = {π(i) : i ∈ Σ} with parameter space Σ := M^∞ and geometric projection mapping π : Σ → E given by π(i) = ∩_{k=1}^∞ f_{i(k)}(E), where i(k) denotes the curtailment i_1 ... i_k ∈ M^k of i = i_1 i_2 ··· ∈ Σ and f_{i_1...i_k} = f_{i_1} ∘ f_{i_2} ∘ ··· ∘ f_{i_k}. We adopt the convention M^0 = ∅ and write M^* = ∪_{k=0}^∞ M^k for the set of words of finite length. Expressed in this notation, the semigroup generated by Ψ can be written as G = {f_i : i ∈ M^*}. For any i ∈ M^*, we denote by E_i the cylinder sets f_i(E), and if i ∈ M^0, then f_i(E) := E. The sets E_i are called k-cylinders if i ∈ M^k. We also shorten the notation f_i(A) to A_i for a general set A ⊂ R^n. We write r_i := r_{i_1} r_{i_2} ... r_{i_k} for the contraction ratio of the similitude f_i. Moreover, σ : Σ → Σ shall stand for the shift map given by σ(i_1 i_2 i_3 ...) = i_2 i_3 i_4 ... The code shift can be projected (as a correspondence) onto E, yielding the geometric shift T(x) := π ∘ σ ∘ π^{-1}(x), (15) x ∈ E. The shift orbit of x ∈ E is given by {T^k(x) : k ∈ N}.

Remark 6 Observe that x ∈ T^k(A) if and only if f_i(x) ∈ A for some i ∈ M^k.

Self-similar measures

Let P(R^n) be the space of compactly supported probability Borel measures on R^n, let p = (p_0, ..., p_{m−1}) ∈ R^m be a probability vector and let M_p : P(R^n) → P(R^n) be the Markov operator defined by M_p(α) = Σ_{i=0}^{m−1} p_i α ∘ f_i^{-1}, α ∈ P(R^n). The unique fixed point of the contractive operator M_p is called the self-similar measure µ_p; that is, µ_p = Σ_{i∈M} p_i µ_p ∘ f_i^{-1}. (16) Moreover, M_p^k(α) = Σ_{i∈M^k} p_i α ∘ f_i^{-1} →_w µ_p as k → ∞ (17) for any α ∈ P(R^n), where, for i ∈ M^k, p_i := p_{i_1} ··· p_{i_k}. Here M_p^k is the k-th iterate of M_p (see [6] and [24] for further details). Set M_S(E) := {µ_p : Σ_{i=0}^{m−1} p_i = 1, p_i > 0, i = 0, ..., m − 1}. (18) For p_s := (r_0^s, ..., r_{m−1}^s), where s is the similarity dimension of E (recall that r_i is the contraction constant of the similarity f_i, i ∈ M), the measure µ_{p_s} is called the natural probability measure on E. Furthermore, if α ∈ M^s_E (see (2) for notation), then µ := µ_{p_s} = α/α(E) (19) (see [25]). Notice that, whereas the measures in M^s (see (3) for notation) convey a strong geometrical meaning, the measures µ_p in M_S(E) do not. They are concentrated in dense subsets Ẽ_p of E, whose dimension is given by dim(Ẽ_p) = s_p := (Σ_{i=0}^{m−1} p_i log p_i)/(Σ_{i=0}^{m−1} p_i log r_i), but the measure µ_p is singular w.r.t. the measures H^{s_p} and P^{s_p} (see [26] and [27]).

Metric measures

We now briefly recall metric measures. They are the classical tools for analysing the geometric properties of subsets of R^n. The Hausdorff centred measure, C^s(A), of a subset A ⊂ R^n was defined by Saint Raymond and Tricot [28] in a two-step process. First, the premeasure C^s_0(A) is defined for any s > 0 by C^s_0(A) = lim_{δ→0} inf {Σ_{i=1}^∞ (2d_i)^s : 2d_i ≤ δ, i = 1, 2, ...}, (20) where the infimum is taken over all coverings, {B(x_i, d_i)}_{i∈N^+}, of A by closed balls B(x_i, d_i) centred at points x_i ∈ A. Then, the centred Hausdorff s-dimensional measure is defined by C^s(A) = sup {C^s_0(F) : F ⊂ A, F closed}. The second step in the definition of C^s(A) is needed due to the lack of monotonicity of C^s_0 (see [29] and [30, Example 4]). However, in [30], it was shown that the second step can be omitted when restricting oneself to self-similar sets with OSC.
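The dimension formula for the measures µ_p above can be checked numerically; the following sketch (our own helper, with hypothetical names) evaluates s_p on the Sierpinski gasket data (three maps of ratio 1/2) and confirms that the natural probability vector recovers the similarity dimension s = log 3/log 2, while any biased vector gives a strictly smaller dimension.

```python
import math

# dim = (sum p_i log p_i) / (sum p_i log r_i), the dimension s_p of the
# dense carrier set of mu_p quoted in the text.
def dim_mu_p(p, r):
    num = sum(pi * math.log(pi) for pi in p)
    den = sum(pi * math.log(ri) for pi, ri in zip(p, r))
    return num / den

r = [0.5, 0.5, 0.5]                       # Sierpinski gasket contraction ratios
s = math.log(3) / math.log(2)             # similarity dimension
print(round(dim_mu_p([1/3, 1/3, 1/3], r), 6))  # natural measure: 1.584963
print(round(dim_mu_p([0.6, 0.2, 0.2], r), 6))  # biased mu_p: strictly smaller
```

This makes concrete the remark that a generic µ_p ∈ M_S(E) carries no s-dimensional geometric meaning: only p = p_s aligns s_p with the similarity dimension of E.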
With regard to metric measures based on packings, the standard packing measure P^s (see [28] and [31]) is also defined in a two-step process, P^s_0(A) = lim_{δ→0} sup {Σ_{i=1}^∞ (2d_i)^s : 2d_i ≤ δ, i = 1, 2, ...}, where the supremum is taken over all packings {B(x_i, d_i)}_{i∈N^+} with x_i ∈ A for all i and B(x_i, d_i) ∩ B(x_j, d_j) = ∅ for i ≠ j. Then, P^s(A) = inf {Σ_{i=1}^∞ P^s_0(F_i)}, where the infimum is taken over all coverings {F_i}_{i∈N^+} of A by closed sets F_i (cf. [32]). In [33], it was proved that if A is a compact set with P^s_0(A) < ∞, then P^s(A) = P^s_0(A), so this simplification applies to any compact subset of a self-similar set with OSC. The spherical s-dimensional Hausdorff measure, H^s_Sph(A), is obtained by dropping in (20) the requirement that the covering balls be centred at points of A, and the Hausdorff measure, H^s(A), by further replacing the diameter 2d_i of each ball in (20) with the diameter |U_i| of an arbitrary covering set U_i (see [34] and [5]). No second step is required for these last two measures. The packing and the centred Hausdorff measures have a much simpler expression when dealing with self-similar sets E that satisfy the OSC, as the search for optimal packings or coverings can be reduced to the search for optimal-density balls within the class of typical balls, B (see Definition 3). In particular, for any self-similar set E that satisfies the OSC and has similarity dimension s, it is known (see [36]) that P^s(E) = [inf {θ^s_µ(x, d) : B(x, d) ∈ B}]^{-1}, (21) and Lemma 13 of Sec. 3.2 implies that C^s(E) = [sup {θ^s_µ(x, d) : B(x, d) ∈ B}]^{-1}. (22)

3 Local structure and typical balls

Scenery flow and tangent measures

We start by giving details on the construction of Tan(ν, x) for ν ∈ M_S(E) and x ∈ E (see (18) for notation).

Tangent measures, identifications and topologies. Recall that the construction of the sets Tan(ν, x) and Tan_st(ν, x) employs the identification, in the set M(R^n), of those measures that are equal up to isometries or mutual multiples (see (4), (5), (6) and (7) for notation).
We now examine the construction of the spaces of equivalence classes of tangent measures above in more detail. For ν ∈ M(R^n) and x ∈ spt(ν), we first consider sequences {c_n ν_{x,t_n}}_{n=0}^∞, where for every n ∈ N, c_n > 0, ν_{x,t_n} := (1/ν(B(x, d t_n^{-1}))) g_{x,t_n}(ν_{B(x, d t_n^{-1})}), (23) d ≤ 1 and g_{x,t_n} is some similarity with expanding ratio t_n that maps the ball B(x, t_n^{-1}) onto the ball B(z_n, 1), with z_n = g_{x,t_n}(x), so each ν_{x,t_n} is a probability measure supported on B(z_n, d). Then, Tan(ν, x) and Tan_st(ν, x) consist of the equivalence classes of non-null weak and strong limits, respectively, as t_n → ∞, of such sequences {c_n ν_{x,t_n}}_{n=0}^∞ (see (7)). Lemma 8 shows that the elements in Tan(ν, x) and Tan_st(ν, x) do not depend on either the sequence of constants c_n or the particular elements chosen in the equivalence classes of the ν_{x,t_n}, as long as the convergence of these elements is guaranteed.

Remark 7 The unit ball D does not play any essential role in our definition of tangent measures in the quotient space M̄(R^n). In the opposite direction (second approach) we may, in a way more akin to the classical one, require the similarities g_{x,t_n} to map B(x, t_n^{-1}) onto B(0, 1), and then define the corresponding sets Tan_D(ν, x) and Tan_st,D(ν, x), respectively. This second method gives spaces of tangent equivalence classes that are particular cases of those in our primary approach. Are these methods equivalent? In order to answer this question, let a sequence {c_n ν_{x,t_n}}_{n=0}^∞, as in (23), converge to a non-null Radon measure α. By Lemma 8 we may assume c_n = 1 for all n ∈ N^+. Since the measures ν_{x,t_n} are supported on balls B(z_n, d) with d ≤ 1 (see Theorem 12 (i)), the limiting measure α must also be supported on a ball B(z, d) with z_n → z. Each measure ν̃_{x,t_n} := τ_{z_n}(ν_{x,t_n}), where τ_{z_n}(y) = y − z_n, is then supported on B(0, d), so both approaches yield, up to isometries, the same tangent equivalence classes.

Lemma 8 (i) The equivalence classes obtained as limits in (7) do not depend on the sequence of constants {c_n}_{n=0}^∞. (ii) Let ν ∈ M(R^n), x ∈ spt(ν) and ᾱ ∈ Tan(ν, x). Let {t_n}_{n=0}^∞ ↑ ∞ be such that {ν_{x,t_n}}_{n=0}^∞ →_w α, and let {f_n}_{n=0}^∞ be a sequence in the set of isometries I_n such that {f_n(ν_{x,t_n})}_{n=0}^∞ →_w α′. Then, there is f ∈ I_n such that f(α) = α′. The same is true if the convergence holds in the topology of the strong convergence in M(R^n).

Proof. (i) By definition, a weak limiting measure α as in (7) is a non-null measure in M(R^n). Therefore, the sequence of constants {c_n} must be bounded above and below by two positive and finite constants. The argument also holds true for strong limits. (ii) For any n ∈ N^+, we can write f_n(·) = g_n(·) + a_n, where g_n is an orthogonal map and a_n ∈ R^n. Recall that ν_{x,t_n} is supported on B(z_n, d), so (g_n + a_n)(ν_{x,t_n}) is supported on a_n + B(z_n, d) ([5], Theorem 1.18), with z_n → z. This means that, if ν′_{x,t_n} := f_n(ν_{x,t_n}) converges, in the weak topology of M(R^n), to some non-null measure α′ in M(R^n), the sequence a_n must be bounded, and then the sequence {f_n}_{n=0}^∞ is also bounded in the supremum norm. Therefore, there is a convergent subsequence, {f_{n_k}}_{k=0}^∞, of {f_n}_{n=0}^∞. Let f := lim_{k→∞} f_{n_k}. Since the sequence {f_{n_k}(ν_{x,t_{n_k}})}_{k=0}^∞ converges to α′, we have that α′ = lim_{k→∞} f_{n_k}(ν_{x,t_{n_k}}) = f(α), (24) which proves that α′ ≅ α. The second equality in (24) holds true because, for any ϕ in the space C_0(R^n) of continuous, compactly supported functions on R^n and for any ε > 0, there is k_0 > 0 such that for k ≥ k_0 we have ‖ϕ ∘ f_{n_k} − ϕ ∘ f‖ ≤ ε/2 and |∫ ϕ ∘ f d(ν_{x,t_{n_k}}) − ∫ ϕ ∘ f dα| ≤ ε/2, and then |∫ ϕ d(f_{n_k}(ν_{x,t_{n_k}})) − ∫ ϕ d(f(α))| ≤ ∫ |ϕ ∘ f_{n_k} − ϕ ∘ f| d(ν_{x,t_{n_k}}) + |∫ ϕ ∘ f d(ν_{x,t_{n_k}}) − ∫ ϕ ∘ f dα| ≤ ε. If {ν′_{x,t_n}}_{n=0}^∞ converges to α′ in the strong topology of M(D), then it also converges in the weak topology and the argument above applies.

Scaling properties of typical balls and scenery flow

We need a preliminary lemma and the following definition.
Lemma 10 Let i ∈ M^*. Then, (i) µ_p(f_i(A)) = p_i µ_p(A) for µ_p ∈ M_S(E) and µ_p-measurable A ⊂ O, (25) (ii) µ_p(f_i^{-1}(C)) = p_i^{-1} µ_p(C) for µ_p ∈ M_S(E) and µ_p-measurable C ⊂ O_i, (26) (iii) B(f_i(x), r_i d) is α-density equivalent to B(x, d) for α ∈ M^s_E and B(x, d) ⊂ O, (27) (iv) f_i^{-1}(B(f_i(x), r_i d)) is α-density equivalent to B(x, d) for α ∈ M^s_E and B(x, d) ⊂ O_i. (28)

Proof. The proof of (25) is trivial from (16) if E satisfies the SSC. If the SSC does not hold, then µ_p(f_j^{-1}(f_i(A))) ≤ µ_p(∂O) = 0 for j ≠ i, because A ⊂ O and, hence, f_j^{-1}(f_i(A)) ∩ E ⊂ ∂O, which is known to be a µ_p-null set (cf. [27]), so (25) also follows from (16). If we set A = f_i^{-1}(C) in (25), we obtain (26) (see also [10]). By (19), we can apply (25) and (26) to any measure α ∈ M^s_E, which easily gives (27) and (28).

Before stating the main theorem of this section, we state the following lemma.

Lemma 11 (i) Let g, f : R^n → R^n, α ∈ M(R^n), λ > 0 and let A ⊂ R^n be an α-measurable subset. Then, the following equalities hold true: • λ g(α) = g(λα), • (f ∘ g)(α) = f(g(α)), and • (g(α))_A = g(α_{g^{-1}(A)}). (ii) Let α be a measure on M(R^n), g : R^n → R^n a bijective mapping and β := g(α). Then, α = g^{-1}(β). (iii) If {α_k}_{k∈N} is a sequence of measures on M(R^n) and g(α_k) →_st β, then α_k →_st g^{-1}(β). (iv) If B_n = B(x_n, d) is a sequence of balls with x_n → x and α is a measure with α(∂B(x, d)) = 0, then α_{B_n} →_st α_{B(x,d)}.

Proof. It is easy to check that α = g^{-1}(β). For (iv), let K ⊂ B := B(x, d) be closed; then min{‖z − y‖ : z ∈ K, y ∈ ∂B} must be a quantity ε > 0 and then K ⊂ B(x, d − ε). The convergence of B_n to B implies that there is an n_0 ∈ N^+ such that, for n > n_0, ‖x − x_n‖ ≤ ε. Then, if z ∈ K, ‖z − x_n‖ ≤ ‖z − x‖ + ‖x − x_n‖ ≤ d, so K ⊂ B_n. (Notice that any α-measurable set is also α_n-measurable for all n ∈ N^+.) It is easy to check that B ∈ A, that B − A := A^c ∩ B ∈ A if A ∈ A, and that A is closed under a finite union of its members or, in short, that A is a field. Let F_k be a sequence of members of A. In order to show that ∪_{k∈N^+} F_k ∈ A, we first write ∪_{k∈N^+} F_k = ∪_{k∈N^+} G_k, where G_k = ∪_{i=1}^k F_i.
This shows that ∪_{k∈N^+} F_k can be expressed as a countable union of the increasing sequence G_k of members of A. Furthermore, ∪_{k∈N^+} F_k = ∪_{k∈N^+} H_k, where H_k = G_k − G_{k−1} with G_0 = ∅. Now, each H_k ∈ A and H_k ∩ H_{k′} = ∅ for k ≠ k′. Then, using that each α_n is a measure, we have lim_{n→∞} α_n(∪_{k∈N^+} H_k) = Σ_{k∈N^+} lim_{n→∞} α_n(H_k) = Σ_{k∈N^+} α(H_k) = α(∪_{k∈N^+} H_k). This completes the proof that A is a σ-field. Notice that any closed set K ⊂ B can be written as the union of the α- and α_n-null set K ∩ ∂B and of the set K − ∂B, which belongs to A as a countable union of members of A.

Theorem 12 Let µ_p ∈ M_S(E), i ∈ M^* and let B = B(x, d) ⊂ O. Then, the following statements hold true. (i) µ_{p,B_i} = p_i f_i(µ_{p,B}). (ii) µ_{p,B} = p_i^{-1} f_i^{-1}(µ_{p,B_i}). (29) (iii) There is a subset Ẽ ⊂ E with µ_p(Ẽ) = 1 such that, if x ∈ Ẽ and B(x, d) ⊂ O, then for any y ∈ E there are a sequence {i_j}_{j∈N^+} with i_j ∈ M^* and a sequence of balls {B(y, d r_{i_j})}_{j∈N^+} such that p_{i_j}^{-1} f_{i_j}^{-1}(µ_{p,B(y, d r_{i_j})}) →_st µ_{p,B(x,d)} as j → ∞. (iv) For any x ∈ Ẽ, M̄_S(B) ⊂ Tan_st(µ_p, x), where M̄_S(B) is defined in (8).

Proof. In order to show (i), let µ_p ∈ M_S(E), i ∈ M^*, and let B ⊂ O and A ⊂ R^n be µ_p-measurable sets. Then, (p_i f_i(µ_{p,B}))(A) = p_i (µ_{p,B})(f_i^{-1}(A)) = p_i µ_p(f_i^{-1}(A ∩ B_i)) = µ_{p,B_i}(A), where the third equality follows from (26), and (i) is proved. Analogously, (ii) follows from (25). Now, let Ẽ = {y ∈ E : {T^k(y) : k ∈ N^+} is dense in E} (30) (see (15) in Sec. 2 for the definition of T). It is well known (cf. [37]) that the set Ẽ has full µ_p-measure. Let x ∈ Ẽ, B(x, d) ⊂ O, y ∈ E and {x_j}_{j∈N^+} be such that lim_{j→∞} x_j = x (in the Euclidean metric) with x_j ∈ T^{k_j}(y) for every j ∈ N^+. We may also assume that B(x_j, d) ⊂ O for every j ∈ N^+. We shorten B(x_j, d) to B_j and B(x, d) to B. Since lim_{j→∞} x_j = x, it follows that {B_j}_{j∈N} converges to B in the Hausdorff metric. Also, µ_p(∂B) = 0 because µ_p ∈ M_S(E) (cf. [35]).
Then, Lemma 11 (iv) implies that µ_{p,B_j} →_st µ_{p,B}. (31) Now, notice that, since x_j ∈ T^{k_j}(y) for each j ∈ N, there is i_j ∈ M^{k_j} such that f_{i_j}(x_j) = y (see Remark 6). Then, f_{i_j}^{-1}(B(y, d r_{i_j})) = B_j. By (29) applied to B_j and i_j ∈ M^*, we see that µ_{p,B_j} = p_{i_j}^{-1} f_{i_j}^{-1}(µ_{p,B(y, d r_{i_j})}), (32) which concludes the proof of (iii). Observe that, in the terminology of Sec. 3.1.1, the right-hand term in (32) is c_{t_j} ν_{y,t_j} for ν = µ_p, t_j = r_{i_j}^{-1} and c_{t_j} = p_{i_j}^{-1} µ_p(B(y, d r_{i_j})) (recall that ν_{y,t_j} was a normalised blowup, and notice also that we may assume, rescaling E if necessary, that all typical balls have a radius d ≤ 1). So, (31) and (7) give µ̄_{p,B} ∈ Tan_st(µ_p, x), and part (iv) is proved.

Asymptotic spectra and measure-exact self-similar sets

We shall write Im(θ^s_α, B) to designate the set {θ^s_α(x, d) : B(x, d) ∈ B}.

Lemma 13 (i) sup {θ^s_µ(x, d) : x ∈ E and d > 0} = sup Im(θ^s_µ, B). (ii) C^s(E) = [sup Im(θ^s_µ, B)]^{-1}.

Proof. It is known that, for a general self-similar set that satisfies the OSC (see [36] and [30]), C^s(E) = [sup {θ^s_µ(x, d) : x ∈ E and d > 0}]^{-1} (33) holds. Let O be any feasible open set. Then, it is enough to show that sup_{(x,d)∈E×R^+} θ^s_µ(x, d) ≤ sup_{B(x,d)∈B_O} θ^s_µ(x, d). Should this not be the case, there would exist (x_0, d_0) ∈ E × R^+ such that B(x_0, d_0) ∉ B_O and θ^s_µ(x_0, d_0) > sup_{B(x,d)∈B_O} θ^s_µ(x, d). In order to show that this contradicts (33), take x^* ∈ E ∩ O such that there is i ∈ M^* with f_i(x^*) = x^*. Let ρ_1 := min {‖x^* − z‖ : z ∈ ∂O}. Observe that, if we take ρ_2 > 0 so that B(x_0, d_0) ⊂ B(x^*, ρ_2) and k ∈ N^+ satisfying r_i^k ρ_2 < ρ_1, then f_i^k(B(x_0, d_0)) ⊂ f_i^k(B(x^*, ρ_2)) = B(x^*, r_i^k ρ_2) ⊂ O, which, using that f_i^k(B(x_0, d_0) ∩ E) ⊂ f_i^k(B(x_0, d_0)) ∩ E, yields the contradiction θ^s_µ(x_0, d_0) ≤ r_i^{-ks} µ(f_i^k(B(x_0, d_0)))/(2d_0)^s = µ(B(f_i^k(x_0), r_i^k d_0))/(2d_0 r_i^k)^s ≤ sup_{B(x,d)∈B_O} θ^s_µ(x, d). Part (ii) is trivial from (i).
In the next theorem, we establish the relationships between the pointwise and global spectra, the set Im(θ^s_α, B) and its extreme values α(E)(P^s(E))^{-1} and α(E)(C^s(E))^{-1}.

Theorem 14 Let E be a self-similar set satisfying the OSC and let α ∈ M^s_E. Then, the following statements hold true. (i) For any x ∈ E, Spec(α, x) = [θ̲^s_α(x), θ̄^s_α(x)] (see (13) and (12) for notation), and Spec(α, E) ⊂ [κ_1, κ_2] with 0 < κ_1 ≤ κ_2 < ∞. (ii) There is a subset Ẽ ⊂ E with µ(Ẽ) = 1 such that, for any y ∈ Ẽ, Spec(α, y) = Spec(α, E) = Spec(α, O ∩ E). (iii) (α(E)/P^s(E), α(E)/C^s(E)) ⊂ Im(θ^s_α, B) ⊂ Spec(α, O ∩ E) ⊂ [α(E)/P^s(E), α(E)/C^s(E)].

Proof. That Spec(α, x) coincides with the closed interval [θ̲^s_α(x), θ̄^s_α(x)] follows from the continuity of d ↦ θ^s_α(x, d) (cf. [35]) for any measure α ∈ M_S(E), which proves the first assertion of (i). The second assertion is well known [6]. In order to prove (ii), let Ẽ be the full µ-measure subset of points of E that have a dense geometric shift orbit in E (see (30)) and let y ∈ Ẽ; hence, lim_{k→∞} θ^s_α(x, d_k) ∈ Spec(α, y) easily follows from (i). This ends the proof of (ii). Finally, the first inclusion in (iii) for α = µ follows from the continuity of the function θ^s_µ(x, d) on R^n × R^+, since 1/P^s(E) ≤ θ^s_µ(x, d) ≤ 1/C^s(E) holds if B(x, d) ∈ B, as a straightforward consequence of (21) and (22).

Corollary 15 If there exist balls B(x_1, d_1) and B(x_2, d_2), both in B, such that θ^s_µ(x_1, d_1) = inf {θ^s_µ(x, d) : B(x, d) ∈ B} (34) and θ^s_µ(x_2, d_2) = sup {θ^s_µ(x, d) : B(x, d) ∈ B}, (35) then the inclusions in Theorem 14 (iii) can be replaced with equalities.

Proof. The first inclusion in Theorem 14 (iii), together with (21), (22), (34) and (35), implies that Im(θ^s_µ, B) = [1/P^s(E), 1/C^s(E)], which, in turn, gives that Im(θ^s_α, B) = Spec(α, O ∩ E). Corollary 15 motivates the introduction of the class of α-exact self-similar sets with special properties.
Definition 16 We say that the self-similar set E is α-exact if there exists B ∈ C_α such that µ(B)/|B|^s = sup {µ(B′)/|B′|^s : B′ ∈ C_α} if α ∈ {C^s_E, H^s_E, H^s_{Sph,E}}, and µ(B)/|B|^s = inf {µ(B′)/|B′|^s : B′ ∈ C_α} if α = P^s_E, where C_α is what we call "the relevant class of sets" for the measure α, which is defined as • C_α := B if α ∈ {P^s_E, C^s_E}, • C_{H^s_E} := {B ⊂ R^n : B is a convex set} and • C_{H^s_{Sph,E}} := {B ⊂ R^n : B is a closed ball}.

One nice property of α-exact self-similar sets is that they possess optimal coverings or packings, that is, almost-coverings (i.e. coverings of α-almost all points in E) or packings whose s-volume gives the exact value of the corresponding α-measure, whilst if α-exactness is not fulfilled, we can only hope to find coverings or packings with s-volume arbitrarily close to the corresponding α-measure.

Example 17 Self-similar sets E with the strong separation condition are an example of α-exact self-similar sets. See [38] for α ∈ {P^s_E, H^s_E, H^s_{Sph,E}} and [30] for α = C^s_E.

Example 18 The Sierpinski gasket S is an example of a set where the strong separation condition does not hold and that is a P^s_S-exact (see [22]) and C^s_S-exact (see [23]) set. In [39], a class of self-similar sets E with OSC in the line is exhibited whose members can be non-α-exact.

Complexity of the local structure of self-similar sets

We now show how these results allow us to explore the complexity of the local geometric structure of self-similar sets that satisfy the OSC. First, we need to properly define the equivalence classes of restricted balls. Notice that different Euclidean balls, even if they share the centre, can produce the same restricted balls. This motivates the following definitions, which are valid for general subsets of R^n.
Definition 20 Given a subset A ⊂ R^n, the spherical diameter of A is defined by |A|_Sph = inf {2d : A = A ∩ B(x, d) for some x ∈ A}.

Definition 21 Given a subset A ⊂ R^n, we say that the restricted ball B(x, d) ∩ A is proper, and write B(x, d) ∩ A ∈ P(A), if x ∈ A and 2d = |B(x, d) ∩ A|_Sph.

Definition 22 Given a measure α on R^n and an α-measurable s-set A ⊂ R^n, we define the α-spherical s-density of A by θ^s_{Sph(α)}(A) = α(A)/(|A|_Sph)^s.

Definition 23 Given a subset A ⊂ R^n and two restricted balls B(x, d) ∩ A and B(x′, d′) ∩ A, both in P(A), we say that they are similarity-equivalent, and write B(x, d) ∩ A ∼_{S_n} B(x′, d′) ∩ A, if there is an f ∈ S_n such that B(x′, d′) ∩ A = f(B(x, d) ∩ A).

Lemma 24 Let A ⊂ R^n and B(x, d) ∩ A ∈ P(A). (i) If f ∈ S_n has similarity constant r_f, then f(B(x, d)) ∩ f(A) ∈ P(f(A)) and |f(B(x, d)) ∩ f(A)|_Sph = 2 r_f d. (ii) Let α ∈ M^s and let A be an α-measurable s-set. If B(x, d) ∩ A ∼_{S_n} B(x′, d′) ∩ A, then θ^s_{Sph(α)}(B(x, d) ∩ A) = θ^s_{Sph(α)}(B(x′, d′) ∩ A).

Proof. Let A ⊂ R^n and B(x, d) ∩ A ∈ P(A). In order to show (i), assume that f(B(x, d)) ∩ f(A) is not proper. Then, there is a ball B(y, ρ) such that B(y, ρ) ∩ f(A) = f(B(x, d)) ∩ f(A) with y ∈ f(A) and ρ < r_f d. Then B(f^{-1}(y), r_f^{-1} ρ) ∩ A = B(x, d) ∩ A with f^{-1}(y) ∈ A and r_f^{-1} ρ < d, in contradiction with |B(x, d) ∩ A|_Sph = 2d. Therefore, f(B(x, d)) ∩ f(A) ∈ P(f(A)) and |f(B(x, d)) ∩ f(A)|_Sph = 2 r_f d. Part (ii) is now trivial since α ∈ M^s and, hence, α(B(x′, d′) ∩ A) = α(f(B(x, d) ∩ A)) = r_f^s α(B(x, d) ∩ A) and, by (i), (|B(x′, d′) ∩ A|_Sph)^s = (|f(B(x, d)) ∩ f(A)|_Sph)^s = r_f^s (2d)^s = r_f^s (|B(x, d) ∩ A|_Sph)^s.

Now we can proceed to state our result for the complexity of the local geometry of self-similar sets with OSC.

Corollary 25 Under the assumptions of Theorem 14, assume that s is a non-integer real number. Then, there is an uncountable number of equivalence classes in the quotient space Sph_E/S_n.

Proof.
By Lemma 24 (ii), we know that all restricted balls in an equivalence class of Sph_E/S_n share the same µ-spherical s-density, which allows us to naturally define a mapping θ^s_µ : Sph_E/S_n → Im(θ^s_µ, B). In particular, if the set of equivalence classes were countable, then Im(θ^s_µ, B) would be countable as well. Since s is non-integer, Marstrand's theorem gives 1/P^s(E) < 1/C^s(E) (it is easy to see that C^s(E) ≤ P^s(E)). This, together with Theorem 14 (iii), means that Im(θ^s_µ, B) contains an interval with uncountably many points, and the proof is completed.

The spectrum of the Sierpinski gasket

In this section, we apply the results obtained in Theorem 14 to fully characterise the asymptotic spectra of the Sierpinski gasket S. Recall that the Sierpinski gasket or Sierpinski triangle is a special case of a self-similar set generated by a system Ψ = {f_0, f_1, f_2} of three contracting similitudes of the plane, with contraction ratios r_i := 1/2, i ∈ M, given by f_0(x, y) = (1/2)(x, y), f_1(x, y) = (1/2)(x, y) + (1/2, 0) and f_2(x, y) = (1/2)(x, y) + (1/2)(1/2, √3/2). (36) We denote by z_i the fixed point of each f_i, i = 0, 1, 2, that is, z_0 = (0, 0), z_1 = (1, 0) and z_2 = (1/2, √3/2), and by T the equilateral triangle with vertices z_i, i ∈ M. It is well known that S is a connected set that satisfies the OSC and has similarity dimension s = log 3/log 2. Thanks to previous work on the packing and centred Hausdorff measures of the Sierpinski gasket (cf. [22] and [23]), we know that S is both P^s_S- and C^s_S-exact, and we have fairly precise approximations of the values of P^s(S) and C^s(S).

Theoretical results

Theorem 26 Let S be the Sierpinski gasket, S̃ = {y ∈ S : {T^k(y) : k ∈ N} is dense in S}, B the collection of typical balls, R a feasible open set for S, and α ∈ M^s_S. Then, the following statements hold true.
(i) Spec(α, y) = Spec(α, S) = Spec(α, R ∩ S) = Im(θ^s_α, B) = [α(S)/P^s(S), α(S)/C^s(S)], y ∈ S̃. (37) (ii) Spec(α, S) is given by the union of two closed intervals of positive length: Spec(α, S) = [θ̲^s_α(z_0), θ̄^s_α(z_0)] ∪ [α(S)/P^s(S), α(S)/C^s(S)], (38) where z_0 = (0, 0). Furthermore, θ̲^s_α(z_0) = min {θ^s_α(z_0, d) : 1/2 ≤ d ≤ 1} (39) and θ̄^s_α(z_0) = max {θ^s_α(z_0, d) : 1/2 ≤ d ≤ 1}. (40)

Proof. Our previous work guarantees that S is a P^s-exact (see [22]) and C^s-exact (see [23]) set. Then, the four equalities in (i) follow as a consequence of Theorem 14 and Corollary 15. In order to prove (38), let R_i, i ∈ {0, 1, 2}, be the three open rhombi composed of the topological interior of the union of the triangle T and its reflection across the edge of T opposite the point z_i, i ∈ M (see R_2 in Fig. 1). Using that S = {z_0, z_1, z_2} ∪ (S ∩ ∪_{i=0}^2 R_i), (41) we obtain Spec(α, S) = Spec(α, S ∩ ∪_{i=0}^2 R_i) ∪ (∪_{i=0}^2 Spec(α, z_i)) = [α(S)/P^s(S), α(S)/C^s(S)] ∪ Spec(α, z_0), where the last equality follows from (37), (41) and the fact that, by symmetry, the sets Spec(α, z_i) must be identical for i ∈ {0, 1, 2}. Observe now that, if d ≤ 1/2, then B(z_0, d) ∩ S = B(z_0, d) ∩ f_0(S). Hence, using that α is an s-dimensional metric measure, θ^s_α(z_0, d) = α(B(z_0, d) ∩ f_0(S))/(2d)^s = α(f_0(B(z_0, 2d) ∩ S))/(2d)^s = α(B(z_0, 2d) ∩ S)/(4d)^s = θ^s_α(z_0, 2d). If 2d ≤ 1/2, we can repeat the argument k times until 1/2 ≤ 2^k d ≤ 1, and θ^s_α(z_0, d) = θ^s_α(z_0, 2^k d). This shows that min {θ^s_α(z_0, d) : 0 < d ≤ 1} = min {θ^s_α(z_0, d) : 1/2 ≤ d ≤ 1} = θ̲^s_α(z_0), where the last equality can easily be checked, and, analogously, (40) holds.
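The similitudes (36), their fixed points z_i and the similarity dimension can be verified directly; a minimal sketch with our own function names:

```python
import math

# The three similitudes of (36); each contracts the plane by 1/2.
def f0(p):
    return (0.5 * p[0], 0.5 * p[1])

def f1(p):
    return (0.5 * p[0] + 0.5, 0.5 * p[1])

def f2(p):
    return (0.5 * p[0] + 0.25, 0.5 * p[1] + math.sqrt(3) / 4)

# Fixed points z0, z1, z2 of f0, f1, f2 as stated in the text.
z0, z1, z2 = (0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)
for f, z in ((f0, z0), (f1, z1), (f2, z2)):
    assert max(abs(a - b) for a, b in zip(f(z), z)) < 1e-12  # f_i(z_i) = z_i

# Similarity dimension: 3 maps of ratio 1/2 give s = log 3 / log 2.
s = math.log(3) / math.log(2)
print(round(s, 6))  # 1.584963
```

Note that the f_2 translation (1/2)(1/2, √3/2) equals (1/4, √3/4), which is the form used in the code.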
Remark 27 Notice that part (i) of Theorem 26 shows that there is a set of full α-measure whose points exhibit a strongly regular behaviour, whereas part (ii) underlines the special local behaviour of the vertices z_i, i ∈ M.

Numerical results

Following the structure of the algorithms developed in [22, 23, 41, 42] for the numerical estimation of the metric measures of self-similar sets, the construction of the computational algorithm used in this work in order to approximate the values of θ̲^s_µ(z_0) and θ̄^s_µ(z_0) relies upon discrete approximations of both the Sierpinski gasket and its invariant measure µ. Recall that any two measures in M^s_S are mutual multiples of each other (see (19)), so we can obtain Spec(α, S) from Spec(µ, S) if we know α(S). The Sierpinski gasket, as the attractor of Ψ = {f_0, f_1, f_2} (see (36)), is the unique non-empty compact set that admits the self-similar decomposition S = F(S), where F is the Hutchinson operator defined, for A ⊂ R^2, by F(A) := f_0(A) ∪ f_1(A) ∪ f_2(A). It is well known that, for any non-empty compact subset A ⊂ R^2, S can be built with an arbitrary level of detail by increasing the iterations k in F^k(A), where F^k = F ∘ F ∘ ··· ∘ F is the k-th iterate of the contracting operator F. This is because F^k(A) → S as k → ∞ in the Hausdorff metric (cf. [6]). Furthermore, if A ⊂ S, then F^k(A) ⊂ S for any k ∈ N^+. In particular, if we take A_1 := {z_0, z_1, z_2} as the initial compact set, we obtain the set A_k := F^{k−1}(A_1) ⊂ S, k ≥ 2, (42) which approximates S at iteration k of our algorithm.
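The construction of A_k = F^{k−1}(A_1) described above can be sketched in a few lines; the helper names below are ours, not the paper's. Each pair of level-k copies of S shares exactly one touching point, so |A_k| = (3^k + 3)/2.

```python
import math

# The Hutchinson operator F(A) = f0(A) ∪ f1(A) ∪ f2(A) for the gasket (36).
SQRT3 = math.sqrt(3)
MAPS = [
    lambda p: (0.5 * p[0], 0.5 * p[1]),
    lambda p: (0.5 * p[0] + 0.5, 0.5 * p[1]),
    lambda p: (0.5 * p[0] + 0.25, 0.5 * p[1] + SQRT3 / 4),
]

def hutchinson(points):
    """One application of F to a finite point set (duplicates merged)."""
    return {f(p) for p in points for f in MAPS}

def A(k):
    """Return A_k = F^(k-1)(A_1) as a set of points of S (see (42))."""
    pts = {(0.0, 0.0), (1.0, 0.0), (0.5, SQRT3 / 2)}  # A_1 = {z0, z1, z2}
    for _ in range(k - 1):
        pts = hutchinson(pts)
    return pts

print(len(A(1)), len(A(2)), len(A(3)))  # 3 6 15, i.e. (3^k + 3) / 2
```

Since A_1 ⊂ S and each f_i maps S into itself, every A_k is a genuine subset of the gasket, which is what makes it usable as the discrete support of the approximating measures µ_k below.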
The relation between the Markov operator and the natural probability measure µ_{p_s} given in (17), with s = log 3/log 2 and p_i = r_i^s = 1/3 (so that p_i = 3^{-k} for i ∈ M^k), together with (19), leads to the following relation: M^k_{p_s}(α) = (1/3^k) Σ_{i∈M^k} α ∘ f_i^{-1} →_w µ, α ∈ P(R^2). (43) If we consider µ_1 := (1/3)(δ_{z_0} + δ_{z_1} + δ_{z_2}) as the initial measure α in (43), where δ_x is a unit mass at x, then µ_k := M^{k−1}_{p_s}(µ_1) = (1/3^{k−1}) Σ_{i∈M^{k−1}} µ_1 ∘ f_i^{-1} = (1/3^k) Σ_{i∈M^{k−1}} (δ_{f_i(z_0)} + δ_{f_i(z_1)} + δ_{f_i(z_2)}) (44) is a probability measure supported on A_k ⊂ S and µ_k →_w µ. The discrete measure µ_k is the approximation of the invariant measure µ that our algorithm takes at iteration k. Lemmas 28 and 30 (Lemma 28 is proved in [23]) provide precise relationships between the measures µ_k and µ.

Lemma 28 (i) Let {S_i : i ∈ I ⊂ M^k}, k ∈ N^+, be a collection of k-cylinder sets of S. Then, µ(∪_{i∈I} S_i) ≤ µ_k(∪_{i∈I} S_i). (ii) Let A ⊂ S, k ∈ N^+, and let I = {i ∈ M^k : S_i ∩ A ≠ ∅}. Then, µ_k(A) ≤ µ(∪_{i∈I} S_i).

Remark 29 The comparisons between the measures µ and µ_k on collections of cylinders and sets given in the lemma above are passed to enlarged and reduced balls in part (i) of the next lemma. Since our algorithms compute only µ_k-densities of balls with centres in A_k (see (42)) and with some point of A_k on their boundaries, in part (ii) of this lemma we approximate the µ-measure of a ball centred at x with the µ_k-measure of a ball with the same centre and with a point of A_k on its boundary. In order to obtain more accurate estimates of θ̲^s_µ(z_0) and θ̄^s_µ(z_0) (as we also do in [22] and [23] for the estimation of P^s(S) and C^s(S)), it is necessary to consider open balls when searching for balls of minimal µ_k-density (see (46)), whereas in the search for balls with maximal µ_k-density, the approximating balls must be taken to be closed balls (see (47)).

Lemma 30 Let k > 0, x ∈ R^2, and 2^{-k} < d ≤ max_{i∈M} ‖z_i − x‖.
Then, (i) µ_k(B(x, d − 2^{-k})) ≤ µ(B(x, d)) ≤ µ_k(B(x, d + 2^{-k})). (ii) If B(x, d) ∩ A_k ≠ ∅, then there are points y_k and z_k in A_k such that µ_k(B(x, d_{y_k})) ≤ µ(B(x, d)) ≤ µ_k(B(x, d_{z_k})), where d_{y_k} := |y_k − x|, d_{z_k} := |z_k − x|, and d_{y_k}, d_{z_k} ∈ [d − 2^{-k}, d + 2^{-k}].

Proof. (i) Let H_k := {i ∈ M^k : B(x, d − 2^{-k}) ∩ S_i ≠ ∅}. For any i ∈ H_k, S_i ⊂ B(x, d) holds, so ∪_{i∈H_k} S_i ⊂ B(x, d). Using Lemma 28 (ii), we have µ_k(B(x, d − 2^{-k})) ≤ µ(∪_{i∈H_k} S_i) ≤ µ(B(x, d)). Let G_k := {i ∈ M^k : S_i ⊂ B̄(x, d + 2^{-k})}. Then, B(x, d) ∩ S ⊂ ∪_{i∈G_k} S_i and ∪_{i∈G_k} S_i ⊂ B̄(x, d + 2^{-k}). Using Lemma 28 (i), we get µ(B(x, d)) = µ(B(x, d) ∩ S) ≤ µ(∪_{i∈G_k} S_i) ≤ µ_k(∪_{i∈G_k} S_i) ≤ µ_k(B(x, d + 2^{-k})).

(ii) Let d^* = max_{i∈M} ‖z_i − x‖. If S ⊂ B(x, d), then d = d^* and µ(B(x, d^*)) = 1 = µ_k(B̄(x, d^*)) ≥ µ_k(B(x, d^*)), so property (ii) holds for d_{y_k} = d_{z_k} = d^*. Let us now assume that S ⊄ B(x, d). We prove first that F_k := {i ∈ M^k : ∂B(x, d) ∩ S_i ≠ ∅} ≠ ∅. (45) If F_k = ∅, then ∪_{i∈M^k} S_i ⊂ B(x, d) ∪ (B̄(x, d))^c. We know that (∪_{i∈M^k} S_i) ∩ B(x, d) ≠ ∅ because B(x, d) ∩ A_k ≠ ∅ and F_k = ∅, and we also know that (∪_{i∈M^k} S_i) ∩ (B̄(x, d))^c ≠ ∅ because S ⊄ B(x, d) and F_k = ∅. This contradicts the fact that ∪_{i∈M^k} S_i is a connected set, and (45) must hold. Using (i), we have that µ(B(x, d)) ≤ µ_k(B(x, d + 2^{-k})) = µ_k(B(x, d_{z_k})), where z_k satisfies d_{z_k} = ‖z_k − x‖ with d_{z_k} = max{‖y − x‖ : y ∈ A_k ∩ B(x, d + 2^{-k})}. The inequality d_{z_k} ≤ d + 2^{-k} is obvious, and d_{z_k} ≥ d − 2^{-k} follows because F_k ≠ ∅ and each k-cylinder S_i, i ∈ M^k, contains some point of A_k. Using the first inequality in (i), we have µ(B(x, d)) ≥ µ_k(B(x, d − 2^{-k})) = µ_k(B(x, d_{y_k})), where y_k satisfies d_{y_k} = ‖y_k − x‖ with d_{y_k} = min{‖y − x‖ : y ∈ A_k ∩ B(x, d − 2^{-k})^c}. The inequality d_{y_k} ≥ d − 2^{-k} is obvious, and d_{y_k} ≤ d + 2^{-k} follows because F_k ≠ ∅.
Theorem 26 allows us to characterise $\mathrm{Spec}(\alpha, S)$ for $\alpha\in\{\mu, P^s_S, C^s_S\}$ through only four numbers, namely $\underline{\theta}^s_\mu(z_0)$, $\overline{\theta}^s_\mu(z_0)$, $P^s(S)$ and $C^s(S)$. Thanks to previous numerical work that uses the measures $\mu_k$ and the sets $A_k$ (see (44) and (42)) as approximations of $\mu$ and $S$, respectively, we have the estimates $P_k$ of $P^s(S)$ (see [22]) and $C_k$ of $C^s(S)$ (see [23]) given by our algorithms, together with precise error bounds for such estimates. We show in Theorem 31 below how to obtain estimates $\xi_k$ of $\underline{\theta}^s_\mu(z_0)$ and $\overline{\xi}_k$ of $\overline{\theta}^s_\mu(z_0)$, prove that these estimates converge to the true values, and give accurate bounds for them, namely $\underline{\theta}^s_\mu(z_0)\in[\xi^{\inf}_k, \xi^{\sup}_k]$ and $\overline{\theta}^s_\mu(z_0)\in[\overline{\xi}^{\,\inf}_k, \overline{\xi}^{\,\sup}_k]$ (see the definitions of $\xi_k$, $\overline{\xi}_k$ and of the intervals $[\xi^{\inf}_k, \xi^{\sup}_k]$ and $[\overline{\xi}^{\,\inf}_k, \overline{\xi}^{\,\sup}_k]$ in Theorem 31). This allows us to implement an algorithm along the lines of those developed for the estimation of $C^s(S)$ and $P^s(S)$ (see [22, 23]).

Theorem 31 For $k>1$, let
$$\xi_k := \min\bigl\{\mathring{\theta}^s_{\mu_k}(z_0, d) : d = |x - z_0|,\ x\in A_k,\ d\in[\tfrac12 - 2^{-k},\, 1]\bigr\} \tag{46}$$
and
$$\overline{\xi}_k := \max\bigl\{\theta^s_{\mu_k}(z_0, d) : d = |x - z_0|,\ x\in A_k,\ d\in[\tfrac12 - 2^{-k},\, 1]\bigr\} \tag{47}$$
be the estimates of $\underline{\theta}^s_\mu(z_0)$ and $\overline{\theta}^s_\mu(z_0)$, respectively. Let $d_k$ be such that $\mathring{\theta}^s_{\mu_k}(z_0, d_k) = \xi_k$, and let $D_k$ be such that $\theta^s_{\mu_k}(z_0, D_k) = \overline{\xi}_k$. Then
$$\underline{\theta}^s_\mu(z_0),\ \xi_k \in [\xi^{\inf}_k, \xi^{\sup}_k] \tag{48}$$
and
$$\overline{\theta}^s_\mu(z_0),\ \overline{\xi}_k \in [\overline{\xi}^{\,\inf}_k, \overline{\xi}^{\,\sup}_k], \tag{49}$$
where
$$\xi^{\inf}_k = K_k\,\xi_k,\qquad K_k = (1-2^{1-k})^s,\qquad \xi^{\sup}_k = \frac{\mu_k(B(z_0, d_k+2^{-k}))}{(2d_k)^s}, \tag{50}$$
$$\overline{\xi}^{\,\inf}_k = \frac{\mu_k(B(z_0, D_k-2^{-k}))}{(2D_k)^s},\qquad \overline{K}_k = (1+2^{1-k})^s,\qquad \overline{\xi}^{\,\sup}_k = \overline{K}_k\,\overline{\xi}_k. \tag{51}$$

Proof. That $\xi_k\in[\xi^{\inf}_k, \xi^{\sup}_k]$ and $\overline{\xi}_k\in[\overline{\xi}^{\,\inf}_k, \overline{\xi}^{\,\sup}_k]$ is obvious from the definitions. We first prove that $\underline{\theta}^s_\mu(z_0)\in[\xi^{\inf}_k, \xi^{\sup}_k]$. Using Lemma 30 (i) and (39), we obtain
$$\underline{\theta}^s_\mu(z_0) \le \frac{\mu(B(z_0, d_k))}{(2d_k)^s} \le \frac{\mu_k(B(z_0, d_k+2^{-k}))}{(2d_k)^s} = \xi^{\sup}_k.$$
Let $d\in[\tfrac12, 1]$ be such that $\underline{\theta}^s_\mu(z_0) = \frac{\mu(B(z_0, d))}{(2d)^s}$.
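The search in (46)–(47) is a straightforward computation once the atoms of $\mu_k$ are available. The sketch below evaluates it at the small level $k=8$ (the paper's tables use $k=14$); the gasket vertices and the choice $z_0=(0,0)$ are my assumptions, and the resulting values are only crude analogues of the Table 1 entries.

```python
import numpy as np

Z = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
s_dim = np.log(3) / np.log(2)     # similarity dimension s = log 3 / log 2

k = 8
pts = Z.copy()
for _ in range(k - 1):            # atoms of mu_k, each of mass 3**-k
    pts = np.concatenate([(pts + z) / 2.0 for z in Z])
z0 = Z[0]
r = np.linalg.norm(pts - z0, axis=1)

# candidate radii d = |x - z0|, x in A_k, restricted to [1/2 - 2^-k, 1]
cand = np.unique(r[(r >= 0.5 - 2.0 ** (-k)) & (r <= 1.0)])

# open balls for the minimum (46), closed balls for the maximum (47)
dens_open = [np.mean(r < d) / (2.0 * d) ** s_dim for d in cand]
dens_closed = [np.mean(r <= d) / (2.0 * d) ** s_dim for d in cand]

xi_lo, xi_hi = min(dens_open), max(dens_closed)
print(xi_lo, xi_hi)
```

At $k=8$ the pair `(xi_lo, xi_hi)` should already be in the rough vicinity of the lower and upper densities at $z_0$ reported for $k=14$, since the multiplicative error factors $(1\pm 2^{1-k})^s$ in (50)–(51) are within a few percent of 1.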
Lemma 30 (ii) guarantees the existence of $y_k\in A_k$ such that $\mu(B(z_0,d)) \ge \mu_k(B(z_0, d_{y_k}))$, where $d_{y_k} := |y_k - z_0| \in [d-2^{-k}, d+2^{-k}] \subset [\tfrac12 - 2^{-k}, 1]$. This, together with (46) and the inequality $d\ge\tfrac12$, gives
$$\underline{\theta}^s_\mu(z_0) = \frac{\mu(B(z_0,d))}{(2d)^s} \ge \frac{\mu_k(B(z_0, d_{y_k}))}{(2d)^s} = \Bigl(\frac{d_{y_k}}{d}\Bigr)^s \frac{\mu_k(B(z_0, d_{y_k}))}{(2d_{y_k})^s} \ge \Bigl(\frac{d_{y_k}}{d}\Bigr)^s \xi_k \ge \Bigl(\frac{d-2^{-k}}{d}\Bigr)^s \xi_k \ge \xi^{\inf}_k.$$
The proof that $\overline{\theta}^s_\mu(z_0)\in[\overline{\xi}^{\,\inf}_k, \overline{\xi}^{\,\sup}_k]$ is analogous. Using Lemma 30 (i) and (40), we obtain
$$\overline{\theta}^s_\mu(z_0) \ge \frac{\mu(B(z_0, D_k))}{(2D_k)^s} \ge \frac{\mu_k(B(z_0, D_k-2^{-k}))}{(2D_k)^s} = \overline{\xi}^{\,\inf}_k.$$
Let $D\in[\tfrac12, 1]$ be such that $\overline{\theta}^s_\mu(z_0) = \frac{\mu(B(z_0,D))}{(2D)^s}$; Lemma 30 (ii) yields $z_k\in A_k$ with $\mu(B(z_0,D)) \le \mu_k(B(z_0, d_{z_k}))$ and $d_{z_k}\in[D-2^{-k}, D+2^{-k}]$, and hence
$$\overline{\theta}^s_\mu(z_0) \le \Bigl(\frac{d_{z_k}}{D}\Bigr)^s \frac{\mu_k(B(z_0, d_{z_k}))}{(2d_{z_k})^s} \le \Bigl(\frac{d_{z_k}}{D}\Bigr)^s \overline{\xi}_k \le \Bigl(\frac{D+2^{-k}}{D}\Bigr)^s \overline{\xi}_k \le \overline{\xi}^{\,\sup}_k.$$
We present in Table 1 the estimates $\xi_k$ and $\overline{\xi}_k$ of $\underline{\theta}^s_\mu(z_0)$ and $\overline{\theta}^s_\mu(z_0)$ obtained by our algorithm for $k=14$, together with the corresponding lower and upper bounds (see (50) and (51)). We also provide the radii, $d_k$ and $D_k$, of the $\mu_k$-optimal balls. Fig. 2a shows the graph of the function $\theta^s_{\mu_{14}}(z_0, d)$ as a function of $d\in[\varepsilon, 1]$, and Fig. 2b the points $(g(d), \theta^s_{\mu_{14}}(z_0, d))$, where $g(d) := \varepsilon + \frac{\varepsilon-1}{\log \varepsilon}(\log d - \log\varepsilon)$ and $\varepsilon = 0.05$. This is a suitable logarithmic scale [10], which allows us to see the periodicity of this function at such a scale. We present in Table 2 the estimates $P_k$ of $P^s(S)$ and $C_k$ of $C^s(S)$ obtained by our algorithms for $k=14$. The lower and upper bounds of $P^s(S)$ are denoted by $P^{\inf}_k$ and $P^{\sup}_k$, respectively, and the bounds of $C^s(S)$ by $C^{\inf}_k$ and $C^{\sup}_k$. These results were computed in [22] and [23], respectively. Recall that $(P^s(S))^{-1}$ and $(C^s(S))^{-1}$ are the $\mu$-densities of the balls of minimum and maximum $\mu$-density in the set of typical balls. The estimates $P_k$ and $C_k$ are obtained by replacing $S$ with $A_k$ and $\mu$ with $\mu_k$. Again, we have used open balls in the estimation of the density of the ball of minimum $\mu_k$-density, and closed balls for the density of the ball of maximum $\mu_k$-density.
The centre and radius of the open ball of minimum $\mu_k$-density are denoted by $x^*_k$ and $d_k$, respectively, and the centre and radius corresponding to the closed ball of maximum $\mu_k$-density by $y^*_k$ and $D_k$. The table also contains the corresponding optimal $\mu_k$-densities and their bounds. The upper bound $P^{\sup}_k := K^P_k P_k$ of $P^s(S)$ is slightly improved here with respect to the one given in [22]: here $K^P_k := \bigl(1 - \frac{2^{5-k}}{\sqrt{3}}\bigr)^{-s}$ instead of the value $K^P_k = \bigl(1 - \frac{2^{6-k}}{\sqrt{3}}\bigr)^{-s}$ used in [22]. This gives the value $P^{\sup}_{14} = 1.671292$ in Table 2 instead of the value $P^{\sup}_{14} = 1.668305$ given in Table 1 of [22]. The results of the following corollary are based on the estimates of Tables 1 and 2.

Corollary 32 Let $S$ be the Sierpinski gasket. (i) For any $\alpha\in M^s_S$, $\mathrm{Spec}(\alpha, S)$ is the union of two closed disjoint intervals. (ii)–(iv) are the numerical estimates of $\mathrm{Spec}(\mu, S)$, $\mathrm{Spec}(P^s_S, S)$ and $\mathrm{Spec}(C^s_S, S)$ given below.

Proof. We know (see (38) in Theorem 14) that
$$\mathrm{Spec}(\mu, S) = \bigl[\underline{\theta}^s_\mu(z_0),\ \overline{\theta}^s_\mu(z_0)\bigr] \cup \Bigl[\frac{1}{P^s(S)},\ \frac{1}{C^s(S)}\Bigr], \tag{52}$$
and that (see (19)) $\mathrm{Spec}(\alpha, S) = \alpha(S)\,\mathrm{Spec}(\mu, S)$ for $\alpha\in M^s_S$. The two intervals in $\mathrm{Spec}(\alpha, S)$, $\alpha\in M^s_S$, are disjoint if $\overline{\theta}^s_\mu(z_0) < \frac{1}{P^s(S)}$. Such a condition holds (see Theorem 31, and Tables 1 and 2).

…subspace of $\mathbb{R}^n$, i.e. by restricted balls of the form $B\cap E$, where $B$ is a Euclidean ball. For general points $x, y\in E$, if $B(x, d)$ denotes the closed Euclidean ball centred at $x$ and with radius $d$, then $B(x, d)\cap E$ and $B(y, d)\cap E$ are not, in general, equivalent by translation, and $B(x, d)\cap E$ and $B(x, d')\cap E$ with $d\ne d'$ are not homothety-equivalent. We use classical tools of fractal geometry, namely the $s$-densities of metric measures on balls (see Definitions 21 and 22) and Marstrand's Theorem, for a given system $\Psi = \{f_i\}_{i\in M}$, $M := \{0, 1, \dots, m-1\}$, of contractive similitudes in $\mathbb{R}^n$. We shall assume that the system $\Psi$ satisfies the OSC, meaning that there is an open set $O\subset\mathbb{R}^n$ such that $f_i(O)\subset O$ for all $i\in M$ and $f_i(O)\cap f_j(O) = \emptyset$ for $i, j\in M$, $i\ne j$.
We shall refer to such a set $O$ as a feasible open set. …The fact that the convergence of such magnifications occurs in the strong topology of measures in $M(\mathbb{R}^n)$ is relevant (see Sec. 1.2 below for a discussion of this result). We shall write $\mathrm{Tan}^{st}(\nu, x)$ for the set of equivalence classes, w.r.t. $\cong$, of such strong limits.

The balls in $\mathcal{B}$ are typical in the sense that, if $B\in\mathcal{B}$, then similar copies of $B$ are densely spread over $E$ at small scales by the action of $G$. These copies form a countable set of balls. As Theorem 12 shows, the measures in $M_S(\mathcal{B})$ are also typical in a deeper sense since, for any $f\in G$, $B\in\mathcal{B}$ and $\alpha\in M_S(B)$, the equality $\alpha_{f(B)} = p_f\, f(\alpha_B)$ holds for a certain constant $p_f < 1$ associated with $f$. This means that the images of typical balls are identical copies, up to the constant $p_f$, of the original ones, not only as subsets but also from the point of view of any property expressible in terms of self-similar measures. Moreover, in Theorem 12 it is shown that, for any typical ball $B(x,d)$, for any measure $\alpha\in M_S(E)$ and for all points $y$ in a set of full $\alpha$-measure, there is a sequence of balls $\{B(y, d_k)\}$ with $d_k\to 0$, a sequence $\{f_k\}$ of similitudes in $G$ and constants $p^{-1}_{f_k}\to\infty$ realising the convergence stated in Theorem 12. Additionally, on a tangent scale, for each $\alpha\in M_S(\mathcal{B})$ and each $x\in E$, each typical class of measures in $M_S(\mathcal{B})$ is a feasible outcome of the zooming process of $\alpha$ at $x$, so there is a wide variety of limiting measures in $\mathrm{Tan}^{st}(\alpha, x)$, $x\in E$. The uniformity of the self-similar setting emerges here in the fact that the inclusion $M_S(\mathcal{B})\subset \mathrm{Tan}^{st}(\alpha, x)$ stands true for any $x\in E$, so all the points in $E$ share the set $M_S(\mathcal{B})$ of tangent measures.

1.3 Spectrum of local densities of a self-similar set: the Sierpinski gasket case

The relevance of typical balls is stressed by the connection between typical balls and the spectrum of densities, which in turn determines some basic geometric features of $E$. …the requirement that the balls are centred at points of $A$.
The classical $s$-dimensional Hausdorff measure, $H^s(A)$, results if coverings of $A$ by arbitrary subsets $\{U_i\}_{i\in\mathbb{N}^+}$ are considered and $2d_i$ is replaced by the diameters of the covering sets. …We shall study the local structure of a self-similar set $E$ that satisfies the OSC for a feasible open set $O$ through the study of the scenery flow of $\alpha\in M_S(E)$ at a.e. $x\in E$, and through the characterisation of the spectrum of the spherical $s$-densities of measures in $M^s_E$ (Sec. 3.2), a limiting set that helps to summarise the structure in the neighbourhood of a point (Sec. 1.3).

…$\mathrm{Tan}_D(\nu, x)$ and $\mathrm{Tan}^{st}_D(\nu, x)$ as weak and strong limits in $M(D)$, respectively, of sequences of such measures $\nu_{x,t_n}$, and $\mathrm{Tan}(\nu, x)$, $\mathrm{Tan}^{st}(\nu, x)$ as the sets of equivalence classes of measures in $\mathrm{Tan}_D(\nu, x)$. …$\alpha'$ is equivalent by translation to $\nu_{x,t_n}$, and $\nu_{x,t_n}$ is supported on $D$. It is easy to see that $\nu' = (\tau_z)\alpha$, so $\alpha'$ is equivalent to $\alpha$ and supported on $D$. Thus, the second method gives the same space $\mathrm{Tan}(\nu, x)$ as our primary method. But $\nu'$ …, so the second method does not produce the same space $\mathrm{Tan}^{st}(\nu, x)$ as our method. Observe that, if we let $\nu'_{x,t_n} = (\tau_z)\nu_{x,t_n}$, then $\nu' = (\tau_z)\alpha$. But now the measure $\nu'_{x,t_n}$ is supported on the ball $B(z_n - z, d)$ rather than on $D$. This observation is useful because $D$ and all the balls $B(z_n - z, d)$ are contained in some ball $B(0, R)$ for $R$ large enough (notice that $z_n$ is a convergent sequence of points), so the convergence $\nu'_{x,t_n}\xrightarrow{n\to\infty}\alpha'$ (weak or strong) occurs in $M(B(0, R))$; and we can see that, if we consider vague convergence of measures, we do not obtain anything new, since in the Polish space $B(0, R)$ both convergences are equivalent ([15], Appendix).

Lemma 8 (i) The sequences $\{c_n\}_{n=0}^\infty$ in the construction of $\mathrm{Tan}(\nu, x)$ and $\mathrm{Tan}^{st}(\nu, x)$ can be taken to be $c_n = 1$, $n = 0, 1, 2, \dots$ …Assume also that there is a sequence $\{f_n\}_{n=0}^\infty$ …; we can choose a subsequence $\{c_{n_k}\}_{k=0}^\infty$ that converges to a constant $c$, and then the sequence $c\,\nu_{x,t_n}$ must converge to the weak limit $\alpha$.
This gives the weak limit $c^{-1}\alpha$, which belongs to the same equivalence class in $\mathrm{Tan}(\nu, x)$ as $\alpha$. On the other hand, the non-null weak limits in $M(\mathbb{R}^n)$ of sequences $\{\nu_{x,t_n}\}_{n=0}^\infty$ are particular cases of those of sequences $\{c_n\nu_{x,t_n}\}$. This completes the proof of part (i) for weak limits.

Definition 9 Given a measure $\alpha\in M^s_E$, two Euclidean balls $B(x, d)$ and $B(x', d')$ are said to be $\alpha$-density equivalent if $\theta^s_\alpha(x, d) = \theta^s_\alpha(x', d')$.

We start with two elementary scaling properties of typical balls for measures in $M_S(E)$ and in $M^s_E$.

Lemma 10 Let $E$ be a self-similar set generated by the system $\Psi = \{f_i\}_{i\in M}$ of similarities of $\mathbb{R}^n$, with $M = \{0, 1, \dots, m-1\}$ and similarity dimension $s$. Let $O$ be a feasible open set (for $\Psi$) and let $\mathbf{i}\in M^*$. Then (i) …

(iv) Let $B(x_n, d) := B_n$ be a sequence of closed balls that converges in the Hausdorff metric to a closed ball $B(x, d) := B$, and let $\alpha\in M(B)$ with $\alpha(\partial B) = 0$. Then $\alpha_{B_n} := \dots$ …that $\alpha_n(A)\to\alpha(A)$ as $n\to\infty$ for any Borel set $A\subset\mathbb{R}^n$.

Let $\alpha\in M(B)$ and let $K$ be any compact set contained in the interior $U$ of $B$. The distance $d(K, \partial B) = \min\{\|x-y\| : x\in K,\ y\in\partial B\}$ is positive, which shows that $K\subset B\cap B_n$ for $n > n_0$. Then, for such values of $n$, we have $\alpha_n(K) = \alpha(B_n\cap K) = \alpha(K)$. We now prove that $\alpha_n \xrightarrow{st} \alpha$ also holds on the $\sigma$-field $\mathcal{B}(B)$ of Borel subsets of $B$. Let $\mathcal{A} := \{A\subset B : A \text{ is } \alpha\text{-measurable and } \lim_{n\to\infty}\alpha_n(A) = \alpha(A)\}$. …union of compact sets $K\cap B(x, d - n^{-1})\subset U$. Thus, the class $\mathcal{K}$ of closed subsets of $B$ is contained in $\mathcal{A}$. We know that the $\sigma$-fields generated by $\mathcal{K}$ and by $\mathcal{A}$ satisfy $\mathcal{B}(B) = \sigma(\mathcal{K})\subset\sigma(\mathcal{A}) = \mathcal{A}$. This gives the strong convergence of $\alpha_n$ to $\alpha$ on $\mathcal{B}(B)$. We can now go to the scenery flow of measures in $M_S(E)$.

Theorem 12 Let $E$ be a self-similar set generated by the system $\Psi = \{f_i\}_{i\in M}$ of similarities on $\mathbb{R}^n$, with $M = \{0, 1, \dots, m-1\}$ and similarity dimension $s$. Let $O$ be a feasible open set (for $\Psi$) and $\mu_p\in M_S(E)$. Then, for any $\mu_p$-measurable set $B\subset O$ and $\mathbf{i}\in M^*$, the following statements hold true.
$$\mathrm{Im}(\theta^s_\alpha, \mathcal{B}) := \{\theta^s_\alpha(x, d) : B(x, d)\in\mathcal{B}\}$$
Let $E$ be a self-similar set generated by the system of similarities $\Psi = \{f_i\}_{i\in M}$ of $\mathbb{R}^n$, with $M = \{0, 1, \dots, m-1\}$ and similarity dimension $s$. If $E$ satisfies the OSC, then (i)
$$C^s(E) = \Bigl(\sup\bigl\{\theta^s_\mu(x, d) : B(x, d)\in\mathcal{B}_O\bigr\}\Bigr)^{-1},$$
where $\mathcal{B}_O := \{B(x, d)\in\mathcal{B} : B(x, d)\subset O\}$ and $O$ is any feasible open set for $\Psi$.

That these quantities belong to and are the extreme values of $\mathrm{Spec}(\alpha, x)$ follows from the definitions. That all the intermediate values in between also belong to $\mathrm{Spec}(\alpha, x)$ is a consequence of the continuity of $\theta^s_\alpha(x, d)$ with respect to $d$. This last property follows from the fact that the $\alpha$-measure of the boundary of Euclidean balls is always null.

…-exact (and, consequently, non-$H^{s,\mathrm{Sph}}_E$-exact, since these two measures coincide in the line), and the authors find conditions under which they are $H^s_E$-exact.

Example 19 Self-similar sets $E$ with OSC in the line, with similarity dimension $s$, that admit an open interval as a feasible open set are an example of $P^s_E$-exact self-similar sets [40].

…the restriction of such a mapping to $\mathrm{Im}(\theta^s_\mu, \mathcal{B})$ is an injective correspondence. Using Marstrand's Theorem, parts (ii) and (iii) of Theorem 14 and the fact that $\mu(E) = 1 > 0$, it follows that either $C^s(E) < P^s(E)$ or $s$ is an integer (notice that, from the definitions in Sec. 2.2, …).

Figure 1: A feasible open set. An open rhombus $R_2$ that is a feasible open set for $S$.

…the vertexes are the most isolated points in $S$. However, the set of exceptional points does not consist only of the vertexes, as there might be other exceptional points, all of them belonging to the set $\bigcup_{i=0}^{2}(R_i\cap S) - S$. The pointwise $\alpha$-density spectrum of such points is contained in $\bigl[\frac{\alpha(S)}{P^s(S)},\ \frac{\alpha(S)}{C^s(S)}\bigr]$. The detection and characterisation of the behaviour of these points remains an open issue.

For the measure $\mu$ itself, the use of open or closed balls has no relevance, because the $\mu$-measure of the boundary of any ball is null.
However, in the case of densities of the discrete measures $\mu_k$, the values obtained in one or the other case do matter, especially if $k$ is not large. From now on, we shall use the notation $\mathring{B}(x, d) := \{y\in\mathbb{R}^2 : |x-y| < d\}$, and $\mathring{\theta}^s_\alpha$ for the $s$-density of $\alpha$ defined using open balls.

…Lemma 30 (ii) guarantees the existence of $z_k\in A_k$ such that $\mu(B(z_0, D)) \le \mu_k(B(z_0, d_{z_k}))$, where $d_{z_k} := |z_k - z_0| \in [D-2^{-k}, D+2^{-k}] \subset [\tfrac12 - 2^{-k}, 1]$. This, together with (47) and the inequality $D \ge \tfrac12$, gives
$$\overline{\theta}^s_\mu(z_0) \le \Bigl(\frac{d_{z_k}}{D}\Bigr)^s \frac{\mu_k(B(z_0, d_{z_k}))}{(2d_{z_k})^s} \le \Bigl(\frac{d_{z_k}}{D}\Bigr)^s \overline{\xi}_k \le \Bigl(\frac{D+2^{-k}}{D}\Bigr)^s \overline{\xi}_k \le \overline{\xi}^{\,\sup}_k.$$

Table 1: Density estimates at $z_0$. The estimates $\xi_k$ and $\overline{\xi}_k$ of $\underline{\theta}^s_\mu(z_0)$ and $\overline{\theta}^s_\mu(z_0)$ (see (48) and (49) for definitions), and the corresponding lower and upper bounds of the 100% confidence intervals $[\xi^{\inf}_k, \xi^{\sup}_k]$ and $[\overline{\xi}^{\,\inf}_k, \overline{\xi}^{\,\sup}_k]$ (see (46), (47), (50) and (51)) obtained by our algorithm for $k = 14$.

Figure 2: Densities at $z_0$. (a) Graph of $\theta^s_{\mu_{14}}(z_0, d)$ for $d\in[\varepsilon, 1]$ and $\varepsilon = 0.05$. (b) Values of $(g(d), \theta^s_{\mu_{14}}(z_0, d))$, where $g(d) := \varepsilon + \frac{\varepsilon - 1}{\log\varepsilon}(\log d - \log\varepsilon)$ and $\varepsilon = 0.05$.

A distinguished class of neighbourhoods of $E$, in terms of which our results are expressed, is the class of typical balls.

Definition 3 A ball $B(x, d)$ is said to be typical if $x\in E$ and $B(x, d)\subset O$, where $O$ is some feasible open set. We shall write $\mathcal{B}$ for the set of typical balls.

The family of typical balls is invariant under the semigroup $G$ generated by $\Psi$ (see Sec. 2) since, for $f\in G$, it follows from $f(O)\subset O$ that $f(\mathcal{B})\subset\mathcal{B}$ holds.

Theorem 14 Let $E\subset\mathbb{R}^n$ be a self-similar set that satisfies the SOSC with feasible open set $O$ and similarity dimension $s$, and let $\alpha\in M^s_E$. Then the following statements hold true. (i) For $x\in E$, it holds that … The inclusions $\mathrm{Spec}(\alpha, y)\subset\mathrm{Spec}(\alpha, E)\subset\mathrm{Spec}(\alpha, O\cap E)$ are trivial, as $E\subset O$.
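The open/closed distinction for the discrete measures $\mu_k$ is easy to see concretely: atoms of $A_k$ sit exactly on the sphere $|x - z_0| = \tfrac12$ about a vertex, so the open and the closed ball of that radius carry different $\mu_k$-mass. The gasket vertices and the level $k=6$ below are my choices for the sketch.

```python
import numpy as np

Z = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
k = 6
pts = Z.copy()
for _ in range(k - 1):                 # atoms of mu_k, each of mass 3**-k
    pts = np.concatenate([(pts + z) / 2.0 for z in Z])
r = np.linalg.norm(pts - Z[0], axis=1)

mass_open = np.mean(r < 0.5)           # open ball: used when minimising, cf. (46)
mass_closed = np.mean(r <= 0.5)        # closed ball: used when maximising, cf. (47)
print(mass_open, mass_closed)
assert mass_closed > mass_open         # boundary atoms make the two values differ
```

The midpoint $(1/2, 0)$ is exactly representable in binary floating point, so the boundary atoms are detected reliably; in general, comparisons this close to a sphere should be made with a small tolerance.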
This follows from the fact that, if $y \notin O$, then $T(y) \cap O = \emptyset$, because $f_i(O) \subset O$ for any $i \in M$; repeating the same argument, we see that $T^k(y)$ could not be dense in $E$. The corresponding equalities would follow if we prove that $\mathrm{Spec}(\alpha, O \cap E) \subset \mathrm{Spec}(\alpha, y)$. This holds true because, if $z = \lim_{k\to\infty} \theta^s_\alpha(x, d_k)$ for $x \in O \cap E$ and $d_k \to 0$, then, since $B(x, d_k) \in \mathcal{B}$ for any sufficiently large $k$, we can apply Theorem 12 (iii) to see that, for such values of $k$, $\theta^s_\alpha(x, d_k) \in \mathrm{Spec}(\alpha, y)$.

The arguments given in the proof of (ii) applied to $\mu$ show that, if $B(x, d) \in \mathcal{B}$, then $\theta^s_\mu(x, d) \in \mathrm{Spec}(\mu, O \cap E)$, which gives the next inclusion in (iii). The last inclusion follows from the observation that $\mathrm{Spec}(\mu, O \cap E)$ consists of limiting values of sequences with terms in $\mathrm{Im}(\theta^s_\mu, \mathcal{B})$, whose extreme values are $\frac{1}{P^s(E)}$ and $\frac{1}{C^s(E)}$. Using (19), we get that $\theta^s_\alpha(x, d) = \alpha(E)\,\theta^s_\mu(x, d)$, and (iii) follows for any $\alpha \in M^s_E$.

Of note is the case in which the extreme values of $\theta^s_\alpha(x, d)$ are attained on $\mathcal{B}$. In this case, we have the following result.

Corollary 15 Let $\alpha \in \{\mu, P^s_E, C^s_E\}$. Under the hypotheses of Theorem 14, if there are two balls …

The numerical estimates announced in Corollary 32 are:
(ii) $\mathrm{Spec}(\mu, S) \sim [0.2997, 0.3567] \cup [0.5994, 0.9951]$, and
$$[0.2998, 0.3566] \cup [0.5999, 0.9944] \subset \mathrm{Spec}(\mu, S) \subset [0.2996, 0.3568] \cup [0.5983, 0.9970].$$
(iii) $\mathrm{Spec}(P^s_S, S) \sim [0.5, 0.5951] \cup [1, 1.6602]$, and
$$[0.5010, 0.5945] \cup [1, 1.6578] \subset \mathrm{Spec}(P^s_S, S) \subset [0.4995, 0.5963] \cup [1, 1.6662].$$
(iv) $\mathrm{Spec}(C^s_S, S) \sim [0.3012, 0.3584] \cup [0.6023, 1]$, and
$$[0.3015, 0.3577] \cup [0.6032, 1] \subset \mathrm{Spec}(C^s_S, S) \subset [0.3005, 0.3588] \cup [0.6002, 1].$$

Using (52), Theorem 31 and (53), we have that
$$\mathrm{Spec}(\mu, S) \sim [\xi_{14},\ \overline{\xi}_{14}] \cup \Bigl[\tfrac{1}{P_{14}},\ \tfrac{1}{C_{14}}\Bigr],$$
$$[\xi^{\sup}_{14},\ \overline{\xi}^{\,\inf}_{14}] \cup \Bigl[\tfrac{1}{P^{\inf}_{14}},\ \tfrac{1}{C^{\sup}_{14}}\Bigr] \subset \mathrm{Spec}(\mu, S) \subset [\xi^{\inf}_{14},\ \overline{\xi}^{\,\sup}_{14}] \cup \Bigl[\tfrac{1}{P^{\sup}_{14}},\ \tfrac{1}{C^{\inf}_{14}}\Bigr],$$
and
$$\mathrm{Spec}(P^s_S, S) \sim [P_{14}\,\xi_{14},\ P_{14}\,\overline{\xi}_{14}] \cup \Bigl[1,\ \tfrac{P_{14}}{C_{14}}\Bigr],$$
$$[P^{\sup}_{14}\,\xi^{\sup}_{14},\ P^{\inf}_{14}\,\overline{\xi}^{\,\inf}_{14}] \cup \Bigl[1,\ \tfrac{P^{\inf}_{14}}{C^{\sup}_{14}}\Bigr] \subset \mathrm{Spec}(P^s_S, S) \subset [P^{\inf}_{14}\,\xi^{\inf}_{14},\ P^{\sup}_{14}\,\overline{\xi}^{\,\sup}_{14}] \cup \Bigl[1,\ \tfrac{P^{\sup}_{14}}{C^{\inf}_{14}}\Bigr].$$
The two intervals are disjoint because $\overline{\theta}^s_\mu(z_0) \le \overline{\xi}^{\,\sup}_{14} < 0.3568$ and $\frac{1}{P^s(S)} \ge \frac{1}{P^{\sup}_{14}} > 0.5983$.

Table 1: Estimates $\xi_{14}$ and $\overline{\xi}_{14}$ of $\underline{\theta}^s_\mu(z_0)$ and $\overline{\theta}^s_\mu(z_0)$, their bounds, and the radii $d_{14}$ and $D_{14}$ of the $\mu_{14}$-optimal balls.

Table 2: Packing and centred measure estimates of $S$. Centres and radii of the balls $B(x^*_{14}, d_{14})$ and $B(y^*_{14}, D_{14})$ of minimum and maximum $\mu_{14}$-densities. The last two columns of the table are the estimates $P_{14}$ and $C_{14}$ of $P^s(S)$ and $C^s(S)$, and their bounds.

References

[1] J. M. Marstrand, The $(\phi, s)$ regular subsets of $n$ space, Trans. Am. Math. Soc. 113 (1964) 369-392.
[2] S. P. Lalley, The packing and covering functions of some self-similar fractals, Indiana Univ. Math. J. 37(3) (1988) 699-710.
[3] A. Schief, Separation properties for self-similar sets, Proc. Amer. Math. Soc. 122(1) (1994) 111-115.
[4] M. Morán, Dynamical boundary of a self-similar set, Fundam. Math. 160 (1999) 1-14.
[5] P. Mattila, Geometry of Sets and Measures in Euclidean Spaces (Cambridge University Press, 1995).
[6] J. E. Hutchinson, Fractals and self-similarity, Indiana Univ. Math. J. 30 (1981) 713-747.
[7] T. Bedford, A. M. Fisher and M. Urbanski, The scenery flow for hyperbolic Julia sets, Proc. London Math. Soc. (3) 85 (2002) 467-492.
[8] C. Bandt, The tangent distribution for self-similar measures, Lecture at the 5th Conference on Real Analysis and Measure Theory, Capri (1992).
[9] S. Graf, On Bandt's tangential distribution for self-similar measures, Monatsh. Math. 120 (1995) 223-246.
[10] C. Bandt, Local geometry of fractals given by tangent measure distributions, Monatsh. Math. (2001) 260-280.
[11] M. Arbeiter and N. Patzschke, Random self-similar multifractals, Math. Nachr. 181 (1996) 5-42.
[12] A. Pyörälä, The scenery flow of self-similar measures with weak separation condition, arXiv:2103.14018v2 [math.DS] (2021).
[13] M. Gavish, Measures with uniform scaling scenery, Ergod. Theory Dyn. Syst. 31(1) (2011) 33-48.
[14] D. Preiss, Geometry of measures in $\mathbb{R}^n$: distribution, rectifiability and densities, Ann. Math. 125 (1987) 537-643.
[15] P. Mörters and D. Preiss, Tangent measure distributions of fractal measures, Math. Ann. 312 (1998) 53-93.
[16] R. Mauldin and S. Williams, Hausdorff dimension in graph directed constructions, Trans. Am. Math. Soc. 309(2) (1988) 811-829.
[17] P. Moran, Additive functions of intervals and Hausdorff measure, Math. Proc. Camb. Philos. Soc. 42(1) (1946) 15-23.
[18] D. Harte, Multifractals: Theory and Applications (Chapman & Hall/CRC, Boca Raton, FL, 2001).
[19] H. Cajar, Billingsley Dimension in Probability Spaces (Springer, 1980).
[20] C. M. Colebrook, The Hausdorff dimension of certain sets of non-normal numbers, Michigan Math. J. 17 (1970) 103-116.
[21] J. Li and M. Wu, The sets of divergence points of self-similar measures are residual, J. Math. Anal. Appl. 404(2) (2013) 429-437.
[22] M. Llorente, M. E. Mera and M. Morán, On the packing measure of the Sierpinski gasket, Nonlinearity 31 (2018) 2571-2589.
[23] M. Llorente, M. E. Mera and M. Morán, On the centered Hausdorff measure of the Sierpinski gasket, preprint (2021). http://ssrn.com/abstract=3970808
[24] M. F. Barnsley, Fractals Everywhere (Courier Corporation, 2012).
[25] M. Llorente and M. Morán, An algorithm for computing the centered Hausdorff measures of self-similar sets, Chaos Solitons Fractals 45(3) (2012) 246-255.
[26] M. Morán and J. M. Rey, Geometry of self-similar measures, Ann. Acad. Sci. Fenn. Math. 22(2) (1997) 365-386.
[27] M. Morán and J. M. Rey, Singularity of self-similar measures with respect to Hausdorff measures, Trans. Amer. Math. Soc. 350(6) (1998) 2297-2310.
[28] X. Saint Raymond and C. Tricot, Packing regularity of sets in $n$-space, Math. Proc. Cambridge Philos. Soc. 103(1) (1988) 133-145.
[29] C. Tricot, Rectifiable and fractal sets, in Fractal Geometry and Analysis, eds. J. Bélair and S. Dubuc (NATO Adv. Sci. Inst. Ser. C 346, Kluwer, Dordrecht, 1991) 364-403.
[30] M. Llorente and M. Morán, Advantages of the centered Hausdorff measure from the computability point of view, Math. Scand. 107(1) (2010) 103-122.
[31] D. Sullivan, Entropy, Hausdorff measures old and new, and limit sets of geometrically finite Kleinian groups, Acta Math. 153(1) (1984) 259-277.
[32] C. Tricot, Sur la classification des ensembles boréliens de mesure de Lebesgue nulle, Thèse de doctorat, Genève (1979).
[33] D. J. Feng, S. Hua and Z. Y. Wen, Some relations between packing premeasure and packing measure, Bull. London Math. Soc. 31(6) (1999) 665-670.
[34] F. Hausdorff, Dimension und äusseres Mass, Math. Ann. 79 (1919) 157-179.
[35] P. Mattila, On the structure of self-similar fractals, Ann. Acad. Sci. Fenn. Ser. A I Math. 7(2) (1982) 189-195.
[36] M. Morán, Computability of the Hausdorff and packing measures on self-similar sets and the self-similar tiling principle, Nonlinearity 18(2) (2005) 559-570.
[37] P. Walters, An Introduction to Ergodic Theory (Springer-Verlag, New York, 1982).
[38] M. Llorente and M. Morán, Self-similar sets with optimal coverings and packings, J. Math. Anal. Appl. 334(2) (2007) 1088-1095.
[39] E. Ayer and R. S. Strichartz, Exact Hausdorff measure and intervals of maximum density for Cantor sets in the line, Trans. Am. Math. Soc. 351(9) (1999) 3725-3741.
[40] D. J. Feng, Exact packing measure of linear Cantor sets, Math. Nachr. 248(1) (2003) 102-109.
[41] M. Llorente and M. Morán, Computability of the packing measure of totally disconnected self-similar sets, Ergod. Theory Dyn. Syst. 36(5) (2016) 1534-1556.
[42] M. Llorente, M. E. Mera and M. Morán, Rate of convergence: the packing and centered Hausdorff measures of totally disconnected self-similar sets, Chaos Solitons Fractals 98 (2017) 220-232.
Manuel Morán, Marta Llorente and María Eugenia Mera, Local Geometry of Self-similar Sets: Typical Balls, Tangent Measures and Asymptotic Spectra, arXiv:2301.08338.

Abstract. We analyse the local geometric structure of self-similar sets with open set condition through the study of the properties of a distinguished family of spherical neighbourhoods, the typical balls. We quantify the complexity of the local geometry of self-similar sets, showing that there are uncountably many classes of spherical neighbourhoods that are not equivalent under similitudes. We show that, at a tangent level, the uniformity of the Euclidean space is recuperated in the sense that any typical ball is a tangent measure of the measure ν at ν-a.e. point, where ν is any self-similar measure. We characterise the spectrum of asymptotic densities of metric measures in terms of the packing and centred Hausdorff measures. As an example, we compute the spectrum of asymptotic densities of the Sierpinski gasket.
Pointwise and uniform convergence of Fourier extensions

Marcus Webb, Vincent Coppé and Daan Huybrechs
Department of Computer Science, KU Leuven, Celestijnenlaan 200A, 3001 Leuven, Belgium

November 26, 2018

Keywords: Fourier extension; Lebesgue function; Legendre polynomials on a circular arc; constructive approximation. Mathematics Subject Classification (2010): 42A10, 41A17, 65T40, 42C15.

Abstract. Fourier series approximations of continuous but nonperiodic functions on an interval suffer the Gibbs phenomenon, which means there is a permanent oscillatory overshoot in the neighbourhoods of the endpoints. Fourier extensions circumvent this issue by approximating the function using a Fourier series which is periodic on a larger interval. Previous results on the convergence of Fourier extensions have focused on the error in the $L^2$ norm, but in this paper we analyze pointwise and uniform convergence of Fourier extensions (formulated as the best approximation in the $L^2$-norm). We show that the pointwise convergence of Fourier extensions is more similar to Legendre series than classical Fourier series. In particular, unlike classical Fourier series, Fourier extensions yield pointwise convergence at the endpoints of the interval. Similar to Legendre series, pointwise convergence at the endpoints is slower by an algebraic order of a half compared to that in the interior. The proof is conducted by an analysis of the associated Lebesgue function, and Jackson- and Bernstein-type theorems for Fourier extensions. Numerical experiments are provided. We conclude the paper with open questions regarding the regularized and oversampled least squares interpolation versions of Fourier extensions.
Introduction

The Fourier series of a periodic function converges spectrally fast with respect to the number of terms in the series, that is, with an algebraic order which increases with the number of available derivatives, and exponentially fast for analytic functions. Furthermore, the truncated Fourier series can be approximated via the FFT in a fast and stable manner [40]. As such, it is the go-to approach to approximate a periodic function. However, when the function in question is nonperiodic, the situation is very different. Regardless of how smooth this function is, convergence is slow in the L 2 norm and there is a permanent oscillatory overshoot close to the endpoints due to the Gibbs phenomenon [42]. Fourier extensions have been shown to be an effective means for the approximation of nonperiodic functions while avoiding the Gibbs phenomenon [1,4,7,18,24,25,27]. The idea is as follows: For a function f ∈ L 2 (−1, 1), consider an approximant f N given by

f N (x) = Σ_{k=−n}^{n} c k e^{iπkx/T} , N = 2n + 1, (1)

where the coefficients c −n , . . . , c n are chosen to minimise the error ||f − f N || L 2 (−1,1) , and T > 1 is a user-determined parameter. This approximant f N is the nth Fourier extension of f to the periodic interval [−T, T ]. For the purposes of this paper, other kinds of Fourier extension, which might come from a discrete sampling of f or regularization, are a modification of this. There are many approximation schemes that avoid the Gibbs phenomenon. Chebyshev polynomial interpolants such as those implemented in the Chebfun [14,36] and ApproxFun [31] software packages are extremely successful, so why consider Fourier extensions? First, discrete collocation versions of Fourier extensions sample the function on equispaced or near-equispaced grids, which in some situations are more natural than Chebyshev grids, which cluster near the endpoints [5]. Second, the approach generalises naturally to higher dimensions.
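The persistence of the Gibbs overshoot can be seen in a short numerical sketch (assuming numpy; the test function f(x) = x on [−1, 1] and the partial-sum sizes are illustrative choices, not from the paper):

```python
import numpy as np

def fourier_partial_sum(x, n):
    # Partial Fourier series of f(x) = x on [-1, 1]:
    # S_n(x) = sum_{k=1}^{n} 2 (-1)^{k+1} sin(k pi x) / (k pi)
    k = np.arange(1, n + 1)
    return np.sum(2.0 * (-1.0) ** (k + 1) * np.sin(np.pi * np.outer(x, k))
                  / (np.pi * k), axis=1)

x = np.linspace(-1, 1, 20001)
overshoots = [fourier_partial_sum(x, n).max() - 1.0 for n in (50, 100, 200)]
# The overshoot does not decay as n grows: it stays near 9% of the
# periodic jump of size 2 at x = +-1, i.e. roughly 0.18.
```

The overshoot stalls at the Wilbraham–Gibbs level no matter how many terms are added, which is exactly the behaviour Fourier extensions avoid.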
If one has a function on a bounded subset Ω ⊂ R d , then one can use multivariate Fourier series which are periodic on a d-dimensional bounding box containing Ω [8,19,28]. Modifications of Fourier extensions which use discrete samples of a function are particularly relevant in this generalization, because the integrals defining the L 2 (Ω) norm can be difficult to compute. Fourier extensions can be computed stably in O(N log 2 (N )) floating point operations, with the following important caveats [21,27,24]. Computation of f N is equivalent to inversion of the so-called prolate matrix [37], which is a Toeplitz matrix G ∈ R N ×N with entries G k,j = sinc((k − j)π/T ), with right-hand-side vector b ∈ C N with entries b k = (1/2) ∫_{−1}^{1} e^{−iπkx/T} f (x) dx [27]. The prolate matrix is exponentially ill-conditioned [34, Eq. 63], so computation of the exact Fourier extension is practically impossible, even for moderately sized N . However, a truncated Singular Value Decomposition (SVD) solution is only worse than the exact solution (in the L 2 (−1, 1) norm) by a small factor O(ε^{1/2}) in the limit as N → ∞, where ε > 0 is the truncation parameter [4,3]. Furthermore, using an oversampled least squares interpolation in equispaced points in [−1, 1] can bring this down to O(ε) for a sufficient oversampling rate [4,3,2]. At the heart of these facts is the observation that while the Fourier basis on [−T, T ] does not form a Schauder basis for L 2 (−1, 1), it satisfies the weaker conditions of a frame [3]. Fourier extensions which approximate a truncated SVD solution rather than the exact solution are called regularized Fourier extensions. An approximate SVD of the prolate matrix can be computed in O(N log 2 (N )) operations using the Fast Fourier Transform (FFT) and exploiting the so-called plunge region in the profile of its singular values [21]. This is a vast improvement on the O(N 3 ) operations required for a standard SVD.
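The ill-conditioning of the prolate matrix and the effectiveness of a truncated-SVD solve can be illustrated with a minimal sketch (assuming numpy; f(x) = e^x, T = 2 and n = 20 are illustrative choices, and np.linalg.lstsq's rcond plays the role of the truncation parameter ε):

```python
import numpy as np

T, n = 2.0, 20
N = 2 * n + 1
k = np.arange(-n, n + 1)

# Prolate (Toeplitz) matrix: G[k, j] = sin(a)/a with a = (k - j) pi / T.
# Since np.sinc(t) = sin(pi t)/(pi t), this is np.sinc((k - j)/T).
G = np.sinc((k[:, None] - k[None, :]) / T)

# Right-hand side b_k = (1/2) * integral_{-1}^{1} exp(-i pi k x / T) f(x) dx,
# approximated here by trapezoidal quadrature.
x = np.linspace(-1, 1, 20001)
f = np.exp(x)
b = 0.5 * np.trapz(np.exp(-1j * np.pi * k[:, None] * x / T) * f, x, axis=1)

cond = np.linalg.cond(G)        # exponentially large in N

# Truncated-SVD solve: singular values below rcond * s_max are discarded.
c, *_ = np.linalg.lstsq(G.astype(complex), b, rcond=1e-12)
fN = (np.exp(1j * np.pi * np.outer(x, k) / T) @ c).real
err = np.max(np.abs(fN - f))    # small despite the huge condition number
```

Despite a condition number far beyond machine precision, the regularized solution approximates the analytic test function to high accuracy, consistent with the O(ε^{1/2}) bound quoted above.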
Fast algorithms for regularized, oversampled least squares interpolation Fourier extensions were developed in [27], building on the work of Lyon [24]. Previous convergence results on Fourier extensions have focused on convergence in the L 2 norm, because the Fourier extension by definition minimizes the error in the L 2 norm over the approximation space. Convergence in L 2 of algebraic order k for functions in the Sobolev space H k (−1, 1) was proved by Adcock and Huybrechs [4, Thm. 2.1]. It follows immediately that convergence is superalgebraic for smooth functions. Exponential convergence in L 2 and L ∞ norms for analytic functions was proved by Huybrechs for T = 2 [18] and by Adcock et al. for general T > 1 [4]. The proofs of exponential convergence appeal to connections between the Fourier extension problem and the sub-range Chebyshev polynomials [4], for which series approximations converge at an exponential rate which depends on analyticity in Bernstein ellipses in the complex plane. Regarding pointwise convergence of Fourier extensions for non-analytic functions, there are no proofs in the literature. Some numerical exploration of pointwise convergence appears in [9, Sec. 2], but a rigorous theoretical foundation is lacking.

Summary of new results

In this paper we prove that for f in the Hölder space C k,α ([−1, 1]),

f (x) − f N (x) = O(N^{−k−α} log(N )) for x ∈ [a, b] ⊂ (−1, 1), and f (x) − f N (x) = O(N^{1/2−k−α}) for x ∈ [−1, 1], (2)

see Theorem 3.2. The factors of log(N ) and N^{1/2} come from bounds on the Lebesgue function associated with Fourier extensions derived in Section 4, and the factor of N^{−k−α} comes from a Jackson-type theorem on best uniform approximation by Fourier extensions proved in Section 5.
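The predicted decay of the pointwise error can be probed empirically with a regularized Fourier extension (a sketch assuming numpy; f(x) = |x|, the interior window [−1/2, 1/2] and the values of n are illustrative choices, not the paper's experiments):

```python
import numpy as np

def fe_errors(f, n, T=2.0, rcond=1e-12):
    # Truncated-SVD Fourier extension of f on [-1, 1]; returns the max error
    # on an interior window [-1/2, 1/2] and on the whole interval.
    k = np.arange(-n, n + 1)
    x = np.linspace(-1, 1, 8001)
    G = np.sinc((k[:, None] - k[None, :]) / T).astype(complex)
    b = 0.5 * np.trapz(np.exp(-1j * np.pi * k[:, None] * x / T) * f(x), x, axis=1)
    c, *_ = np.linalg.lstsq(G, b, rcond=rcond)
    e = np.abs((np.exp(1j * np.pi * np.outer(x, k) / T) @ c).real - f(x))
    return e[np.abs(x) <= 0.5].max(), e.max()

# f(x) = |x| is Lipschitz on [-1, 1], i.e. k = 0, alpha = 1 in equation (2).
errs = [fe_errors(np.abs, n) for n in (5, 10, 20, 40)]
# Both the interior and the global maximum error decrease as n grows.
```

This only checks that the errors decay with N; measuring the precise algebraic orders in (2) requires larger N and more careful experiments, as in the paper's Section 7.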
This factor of N^{−k−α} can be pessimistic if f is least regular at the boundary; in Section 5 we discuss how a weighted form of regularity (as opposed to Hölder regularity taken uniformly over the interval [−1, 1]) might yield a more natural correspondence between regularity and convergence rate. This is precisely the case in best polynomial approximation on an interval, where weighted moduli of continuity have a tight correspondence with best approximation errors [11, Ch. 7, Thm. 7.7]. From equation (2), it is immediate that if f ∈ C α ([−1, 1]) where α ∈ (0, 1), then f N converges to f uniformly in any subinterval [a, b] ⊂ (−1, 1), and if α > 1/2, then we get uniform convergence over the whole interval [−1, 1]. We also prove a local pointwise convergence result, which states that if f ∈ L 2 (−1, 1), but f is uniformly Dini-Lipschitz in a subinterval [a, b], then the Fourier extension converges uniformly in compact subintervals of (a, b) (see Theorem 3.5). This is done by generalizing a localization theorem of Freud on convergence of orthogonal polynomial expansions in [−1, 1] (see Section 6). A key insight of this paper is that the kernel associated with approximation by Fourier extension has an explicit formula which is related to the Christoffel-Darboux kernel of the Legendre polynomials on a circular arc (see Lemma 4.3). The asymptotics of these polynomials were derived by Krasovsky using Riemann-Hilbert analysis [22,23,10], which we use to derive asymptotics of the kernel. The Lebesgue function for Fourier extensions is estimated using these asymptotics in Theorem 4.1. We find that the Lebesgue function is O(log(N )) in the interior of [−1, 1] and O(N^{1/2}) globally. This is just as with the Lebesgue function for Legendre series, and distinct from classical Fourier series, which has an O(log N ) Lebesgue function over the full periodic interval.
The results of this paper would become more interesting if they could be extended to the regularized and oversampled interpolation versions of Fourier extensions, because, as discussed above, these are the versions for which stable and efficient algorithms have been developed. The multivariate case is another direction to which this line of inquiry would ideally lead. We briefly discuss future research like this in Section 8. The paper is structured as follows. Section 2 recounts the known results about convergence of Fourier extensions in the L 2 norm. Section 3 gives new pointwise and uniform convergence theorems along with proofs which depend on results proved in the self-contained Sections 4, 5, and 6. Section 4 is on the Lebesgue function for Fourier extensions. Section 5 is on uniform best approximation by Fourier extensions, in which Jackson- and Bernstein-type theorems are proved. Section 6 is on an analogue of Freud's localization theorem for Fourier extensions. Section 7 provides the reader with results from numerical experiments, and Section 8 provides discussion. The appendix contains a derivation of asymptotics of Legendre polynomials on a circular arc, on the arc itself, from the Riemann-Hilbert analysis of Krasovsky [22,23,10].

Convergence of Fourier extensions in L 2

In this section we summarise the already known results regarding convergence in the L 2 norm.

Exponential convergence

As is discussed in [18,1], the Fourier extension f N in equation (1) is a polynomial in the mapped variable t = m(x), where

m(x) = 2 (cos(πx/T ) − cos(π/T )) / (1 − cos(π/T )) − 1. (3)

This change of variables transforms the Fourier extension problem into two series expansions in modified Jacobi polynomials [18].
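The basic properties of the map m(x) = 2(cos(πx/T) − cos(π/T))/(1 − cos(π/T)) − 1 are easy to verify numerically (a sketch assuming numpy; T = 2 is an illustrative value):

```python
import numpy as np

T = 2.0

def m(x, T=T):
    # m(x) = 2 (cos(pi x / T) - cos(pi / T)) / (1 - cos(pi / T)) - 1
    return (2.0 * (np.cos(np.pi * x / T) - np.cos(np.pi / T))
            / (1.0 - np.cos(np.pi / T)) - 1.0)

# m is even with m(0) = 1 and m(+-1) = -1, so it maps [-1, 1] onto [-1, 1]
# (two-to-one away from 0), which is what yields polynomial expansions in t.
vals = m(np.array([0.0, 1.0, -1.0]))
```

The evenness of m is what splits the Fourier extension into two polynomial series (for the even and odd parts of f), as described in [18].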
Since exponential convergence in this setting is dictated by Bernstein ellipses in the complex plane, which are defined to be the closed contours,

B(ρ) = { (1/2)(ρe^{iθ} + ρ^{−1}e^{−iθ}) : θ ∈ [−π, π] }, ρ > 1, (4)

it makes sense to consider the mapped contours,

D(ρ) := m^{−1}(B(ρ)), (5)

as a candidate for determining the rate of exponential convergence for Fourier extensions. They are indeed the relevant contours, as was proven in the following theorem.

Theorem 2.1. If f is an analytic function in D(ρ') and continuous on D(ρ') itself, then ||f − f N || L 2 (−1,1) = O(ρ^{−n}) ||f || L ∞ (D(ρ)), where ρ < min{ρ', cot²(π/(4T ))} and N = 2n + 1. The constant in the big O depends only on T .

Note that there is a T -dependent upper limit on the rate of exponential convergence.

Algebraic convergence

For functions in the Sobolev space H k (−1, 1) of L 2 (−1, 1) functions whose kth weak derivatives are in L 2 (−1, 1), we have algebraic convergence of order k.

Theorem 2.2. If f ∈ H k (−1, 1), then ||f − f N || L 2 (−1,1) = O(N^{−k}) ||f || H k (−1,1), where the constant in the big O depends only on k and T .

Corollary 2.3. If f is smooth then f N → f superalgebraically in the L 2 (−1, 1) norm.

Subalgebraic convergence

This elementary result says that Fourier extensions converge in the L 2 norm for L 2 functions.

Proposition 2.4. If f ∈ L 2 (−1, 1), then ||f − f N || L 2 (−1,1) → 0 as N → ∞.

Proof. Let g ∈ L 2 (−T, T ) be the function that is equal to f inside [−1, 1] and zero in the complement, and let g N be its truncated Fourier series on [−T, T ]. Since f N is the best approximation to f from H N in L 2 (−1, 1), we have ||f − f N || L 2 (−1,1) ≤ ||g − g N || L 2 (−T,T ) → 0 as N → ∞.

Pointwise and uniform convergence

We prove pointwise convergence rates for functions in various Hölder spaces. For k = 0, 1, 2, . . . and α ∈ [0, 1], the Hölder space C k,α ([−1, 1]) is the space,

C k,α ([−1, 1]) := { f ∈ C k ([−1, 1]) : |f (k) | C α ([−1,1]) < ∞ }, (6)

where

|g| C α ([−1,1]) := sup_{x,y∈[−1,1]} |g(x) − g(y)| / |x − y|^α. (7)

It is a Banach space when endowed with the norm ||f || C k,α ([−1,1]) = ||f || C k ([−1,1]) + |f (k) | C α ([−1,1]) [15]. For all α ∈ [0, 1], we have C α ([−1, 1]) := C 0,α ([−1, 1]).
Exponential convergence The pointwise convergence result for analytic functions is the same as Theorem 2.1. In fact, Theorem 2.1 is a corollary of the following theorem. ). If f is analytic inside of the mapped Bernstein ellipse, D(ρ ) (see equation (5)) and continuous on D(ρ ) itself, then f − f N L ∞ (−1,1) = O(ρ −n ) f L ∞ (D(ρ)) , where ρ < min ρ , cot 2 π 4T and N = 2n + 1. The constant in the big O depends only on T . Algebraic convergence Pointwise convergence for Hölder continuous functions is as follows. f − f N L ∞ (a,b) = O(N −α−k log N )|f (k) | C α ([−1,1]) . The constant in the big O depends on a, b, k, α, and T . Over the whole interval, [−1, 1], we have f − f N L ∞ (−1,1) = O(N 1 2 −α−k )|f (k) | C α ([−1,1]) . The constant in the big O depends on k, α, and T . We lose a half order of algebraic convergence at the endpoints, something that we could not possibly see in classical Fourier series because a periodic interval has no endpoints. Corollary 3.3. If f is smooth then f N → f superalgebraically in L ∞ (−1, 1). Subalgebraic convergence The loss of a half order of algebraic convergence at the endpoints predicted by Theorem 3.2 means that we require at least Hölder continuity with order greater than a half in order to guarantee uniform convergence. Theorem 3.4. If f ∈ C α ([−1, 1]), where α > 1 2 , then f − f N L ∞ (−1,1) → 0 as N → ∞. In order to guarantee local, pointwise convergence, there is a weak local continuity condition which can be employed as follows. A function f is uniformly Dini-Lipschitz in [a, b] if [42], lim δ 0 sup x,y∈[a,b] |x−y|<δ |(f (x) − f (y)) log δ| = 0.(8) This is a very weak condition, weaker than the Hölder condition for any α > 0, but it is sufficient for convergence of Fourier extensions in the interior of [−1, 1]. Theorem 3.5. If f ∈ L 2 (−1, 1) is uniformly Dini-Lipschitz in [a, b] ⊆ [−1, 1], then f − f N L ∞ (c,d) → 0 as N → ∞, for all [c, d] ⊂ (a, b). Remark 3.6. 
This theorem is stronger than it might appear at first. It says that even if a function is merely in L 2 (−1, 1), and can have for example jump discontinuities, we will still have pointwise convergence in regions where f is Dini-Lipschitz. However, the localization theorem (Theorem 6.1) which we use to prove this result does not give any indication of the rate of convergence.

Proofs of the results of this section

For each odd positive integer N = 2n + 1, let P N be the orthogonal projection from L 2 (−1, 1) onto the subspace H N ,

H N = span{e^{iπkx/T}}_{k=−n}^{n}. (9)

Then f N = P N (f ), since f N minimizes the L 2 (−1, 1) distance between f and H N . Let {e k } N k=1 be any orthonormal basis for H N ⊂ L 2 (−1, 1). Then the kernel,

K N (x, y) = Σ_{k=1}^{N} e k (x)e k (y), (10)

satisfies

P N f (x) = ∫_{−1}^{1} K N (x, y)f (y) dy, (11)

for all f ∈ L 2 (−1, 1). The Lebesgue function for the projection P N at a point x ∈ [−1, 1] is the L 1 norm of the kernel at x,

Λ(x; P N ) = ∫_{−1}^{1} |K N (x, y)| dy. (12)

The best approximation error functional on H N is defined for all f ∈ C([−1, 1]) by

E(f ; H N ) = inf_{r N ∈H N} ||f − r N || L ∞ (−1,1). (13)

The importance of Λ(x; P N ) and E(f ; H N ) is encapsulated in Lebesgue's Lemma, which states that for any f ∈ C([−1, 1]),

|f (x) − P N (f )(x)| ≤ (1 + Λ(x; P N ))E(f ; H N ), (14)

for all x ∈ [−1, 1]. Now we can proceed to prove the pointwise convergence results stated above. The proofs depend on the content of Sections 4, 5 and 6, which consist of self-contained results.

Lemma 3.7. Let f ∈ C([−1, 1]). Then for all closed subsets [a, b] ⊂ (−1, 1), we have ||f − P N (f )|| L ∞ (a,b) = O(log N )E (f ; H N ), where the constant in the big O depends on a, b and T . Over the whole interval [−1, 1], we have ||f − P N (f )|| L ∞ (−1,1) = O(N^{1/2})E (f ; H N ), where the constant in the big O depends only on T .

Proof. By Lebesgue's Lemma, given in equation (14), it suffices to show that sup_{x∈[a,b]} Λ(x; P N ) = O(log N ), and sup_{x∈[−1,1]} Λ(x; P N ) = O(N^{1/2}).
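The contrast between the interior and endpoint behaviour of the Lebesgue function Λ(x; P N) = ∫|K N(x, y)| dy can be observed directly (a sketch assuming numpy; the basis is orthonormalized by a weighted QR on a quadrature grid, and T = 2, n = 30 are illustrative choices):

```python
import numpy as np

T, n = 2.0, 30
N = 2 * n + 1
k = np.arange(-n, n + 1)

# Trapezoidal quadrature on [-1, 1].
m_pts = 4001
y, h = np.linspace(-1, 1, m_pts, retstep=True)
w = np.full(m_pts, h); w[0] = w[-1] = h / 2

# Orthonormalize the extension basis on [-1, 1] via weighted QR; the kernel
# K_N (and hence the Lebesgue function) is independent of the basis chosen.
E = np.exp(1j * np.pi * np.outer(y, k) / T)
Q, _ = np.linalg.qr(np.sqrt(w)[:, None] * E)
B = Q / np.sqrt(w)[:, None]            # B[j, l] = e_l(y_j)

def lebesgue(i):
    # Lambda(x_i) = integral |K_N(x_i, y)| dy,
    # with K_N(x, y) = sum_l e_l(x) conj(e_l(y)).
    return np.sum(w * np.abs(B[i] @ B.conj().T))

lam_mid, lam_end = lebesgue(m_pts // 2), lebesgue(m_pts - 1)
# Expect lam_mid = O(log N) but lam_end = O(sqrt(N)): larger at the endpoint.
```

Since the constant function lies in H N, Λ(x; P N) ≥ 1 everywhere; the discrete estimate shows the endpoint value well above the interior one, in line with Theorem 4.1.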
This is proved in Theorem 4.1. Proof of Theorem 3.2. By Lemma 3.7, it suffices to show that for f ∈ C k,α ([−1, 1]), we have E(f ; H N ) = O N −k−α |f | C α ([−1,f 1 (x) =      f (x) for x ∈ [a, b] f (a) for x ∈ [−1, a) f (b) for x ∈ (b, 1], and f 2 = f − f 1 . Since f 2 vanishes in [a, b] and is in L 2 (−1, 1), we have by Theorem 6.1 that P N (f 2 ) → 0 uniformly in all subintervals [c, d] ⊂ (a, b). It is clear by the definition of f 1 and the definition of Dini-Lipschitz continuity in equation (8) that f 1 is also uniformly Dini-Lipschitz in [−1, 1]. By Lemma 3.7, f 1 − P N (f 1 ) L ∞ (c,d) = O(log N )E (f 1 ; H N ) . By Lemma 5.2 and Theorem 5.3, E (f 1 ; H N ) = o(1/ log N ). This proves that P N (f 1 ) → f 1 uniformly on all subintervals [c, d] ⊂ (a, b). Now, since f = f 1 + f 2 , we have proved the result. The Lebesgue function of Fourier extensions Recall from Section 3 that the kernel associated with the Fourier extension operator P N is the bivariate function on [−1, 1] × [−1, 1], K N (x, y) = N k=1 e k (x)e k (y), where {e k } N k=1 is any orthonormal basis for H N . We call this kernel the prolate kernel, because one particular choice of orthonormal basis is the discrete prolate spheroidal wave functions (DPSWFs). These functions, denoted {ξ k,N } N k=1 , are the N eigenfunctions of a time-band-limiting operator; specifically, there exist eigenvalues {λ k,N } N k=1 such that 1 −1 ξ k,N (y) sin N π T (x − y) sin π T (x − y) dy = λ k,N ξ k,N (x), for k = 1, . . . N . DPSWFs play an important role in the analysis of perfectly bandlimited and nearly timelimited periodic signals, which was pioneered by Landau, Pollak and Slepian in the 1970s [34]. More recently, they have also been shown to be important for the computation of Fourier extensions, because the regularized version of Fourier extensions projects onto the DP-SWFs ξ k,N with eigenvalues λ k,N > ε for a given tolerance ε > 0 [4,3]. This is discussed in Section 8. 
The key outcome of this section is a proof of the following theorem. This will be proved by finding asymptotic formulae for the prolate kernel K N . The reader can verify that K N is invariant under a change of orthonormal basis for H N , so a suitable choice of basis is desired. We have found that rather than the DPSWF basis, a basis related to orthogonal polynomials on the unit circle have been more amenable to analysis. For N = 2n + 1, recall the definition of the N -dimensional space H N , H N = span e iπ T kx n k=−n . Any function r N ∈ H N is of the form r N (x) = e − iπ T nx p 2n (e iπ T x ),(15) where p 2n is a polynomial of degree 2n. Using this idea we prove the following lemma. Lemma 4.2 (Orthonormal basis for H N ). Let {Π k (z)} ∞ k=0 be the (normalized) orthogonal polynomials on the unit circle with respect to the weight f (θ) = 2T · χ [− π T , π T ] (θ), θ ∈ [−π, π],(16) i.e. for j, k = 0, 1, 2, . . ., 1 2π π −π Π k (e iθ )Π j (e iθ ) f (θ)dθ = δ j,k .(17) Then the set e − iπ T nx · Π k e iπ T x 2n k=0(18) forms an orthonormal basis for H N . Proof. By the observation immediately preceding this lemma, the set forms a basis for H N because {Π k } 2n k=0 forms a basis for polynomials of degree 2n. We need only show its orthonormality with respect to the inner product on H N induced by L 2 (−1, 1). Let j, k ∈ {0, . . . , 2n}. Then, making the change of variables θ = π T x, we have 1 −1 e − iπ T nx · Π j e iπ T x e − iπ T nx · Π k e iπ T x dx = 1 −1 Π j e iπ T x Π k e iπ T x dx = π T − π T Π j (e iθ )Π k e iθ T π dθ = 1 2π π −π Π k (e iθ )Π j (e iθ ) f (θ)dθ. By the orthonormal relationship between Π k and Π j on the unit circle, the basis is orthonormal on [−1, 1]. The Christoffel-Darboux formula for orthogonal polynomials on the unit circle states that, N −1 k=0 Π k (ζ)Π k (z) = Π * N (ζ)Π * N (z) − Π N (ζ)Π N (z) 1 − ζz , z, ζ ∈ C, ζz = 1,(19) where Π * N (z) = z N Π z −1 (which is also a polynomial of degree N ) [35,Thm 11.42]. 
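The basis-independence of K N asserted above is immediate to confirm numerically on a grid (a sketch assuming numpy; the second orthonormal basis is obtained by rotating the first with a random unitary matrix, and T = 2, n = 8 are illustrative choices):

```python
import numpy as np

T, n = 2.0, 8
N = 2 * n + 1
k = np.arange(-n, n + 1)

m_pts = 801
y, h = np.linspace(-1, 1, m_pts, retstep=True)
w = np.full(m_pts, h); w[0] = w[-1] = h / 2

# One orthonormal basis for H_N on [-1, 1], via weighted QR.
E = np.exp(1j * np.pi * np.outer(y, k) / T)
Q, _ = np.linalg.qr(np.sqrt(w)[:, None] * E)
B1 = Q / np.sqrt(w)[:, None]

# A second orthonormal basis: rotate the first by a random unitary matrix U.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
B2 = B1 @ U

K1 = B1 @ B1.conj().T      # K_N(y_i, y_j) from the first basis
K2 = B2 @ B2.conj().T      # the same kernel from the second basis
```

Algebraically, K2 = B1 U U^H B1^H = K1, since U is unitary; this is why any convenient orthonormal basis, such as the one built from orthogonal polynomials on the unit circle below, may be used to study K N.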
On the unit circle itself, where z = e iθ , ζ = e iφ , this reduces, after some elementary manipulations, to N −1 k=0 Π k (e iφ )Π k (e iθ ) = e i N −1 2 (θ−φ) · Imag   e −i N 2 φ · Π N (e iφ ) · e −i N 2 θ · Π N e iθ sin θ−φ 2   .(20) From this general formula for orthogonal polynomials on the unit circle, we prove the following lemma regarding the prolate kernel. Lemma 4.3 (Prolate kernel formula). For all x, y ∈ [−1, 1], K N (x, y) = Imag    e − iπ T N 2 y · Π N e iπ T y · e − iπ T N 2 x · Π N e iπ T x sin π 2T (x − y)    . The formula in fact holds for all x, y ∈ [−T, T ]. Proof. By the fact that e − iπ T nx · Π k e iπ T x 2n k=0 is an orthonormal basis for H N , from Lemma 4.2, we have that K N (x, y) = 2n k=0 e − iπ T ny Π k e iπ T y e − iπ T nx Π k e iπ T x = e iπ T n(y−x) N −1 k=0 Π k e iπ T y Π k e iπ T x . The proof is completed by considering the Christoffel-Darboux formula for orthogonal polynomials on the unit circle in equation (20) (note that N −1 2 = n). Now, to ascertain asymptotics of the prolate kernel, it is sufficient to ascertain asymptotics of the orthogonal polynomials {Π k (z)} ∞ k=0 . These polynomials have been studied before in the literature, and are known as the Legendre polynomials on a circular arc [26]. (θ) = 2T · χ [− π T , π T ] (θ), and for x ∈ [−1, 1] define the variable η ∈ [0, π] by η = cos −1 sin x π 2T sin π 2T . There exists a constant δ > 0 such that for x ∈ [−1 + δ, 1 − δ], Π N e iπ T x = e iπ T N 2 x 2T sin π 2T e − iπ 4T sin (1 + x) π 2T sin (1 − x) π 2T 1 4 cos N η − π 4 (21) − e iπ 4T sin (1 − x) π 2T sin (1 + x) π 2T 1 4 sin N η − π 4 + O(N −1 ), and for x ∈ [1 − δ, 1], Π N e iπ T x = e iπ T N 2 x 2T sin π 2T π 2 N η 1 2 e − iπ 4T sin (1 + x) π 2T sin (1 − x) π 2T 1 4 J 0 (N η) (22) − e iπ 4T sin (1 − x) π 2T sin (1 + x) π 2T 1 4 J 1 (N η) + O(N − 1 2 ). The constants in the big O depend only on T and δ. The asymptotics for x ∈ [−1, −1 + δ] are found by using the relation Π N e − iπ T x = Π N e iπ T x . 
In terms of magnitude with respect to N , we have Π N e iπ T x = O(1) for x ∈ [−1 + δ, 1 − δ], O(N 1 2 ) for x ∈ [−1, −1 + δ] ∪ [1 − δ, 1].(23) Remark 4.6. The asymptotic order of Π N e iπ T x with respect to N in equation (23) Proof. This result follows directly from Lemma A.1, because if we take α = π−π/T and f α (θ) ≡ 1, then the polynomials Π N (z) = (2T ) − 1 2 φ N (−z, α) satisfy the orthonormality conditions that define Π N as in Lemma 4.2. To obtain the asymptotic formula above, make the change of variables θ = π T x + π in the asymptotic formulae for φ N (z, α). Be careful to note that the endpoint with explicit formula given above (x = 1), corresponds to θ = 2π − α, which is not the endpoint with explicit formula given in Lemma A.1 (θ = α). This was done to shorten the expressions for the asymptotics at the endpoints. To complete the proof we must prove equation (23). For x ∈ [−1 + δ, 1 − δ], all of the terms are clearly bounded above by O(δ − 1 4 ) = O(1). Now let x ∈ [1 − δ, 1]. We have η 2 ≤ π 2 4 (1 − cos(η)) for all η ∈ 0, π 2 and x sin π 2T ≤ sin x π 2T ≤ x π 2T for all x ∈ [0, 1]. Assuming δ < 1 2 , we have x, y ∈ [0, 1] and η, λ ∈ 0, π 2 , and hence η 2 ≤ π 2 4 (1 − x). Since 1 − x ∈ [0, 1] we have 1 − x ≤ sin (1 − x) π 2T / sin π 2T , so η 2 ≤ sin (1 − x) π 2T π 2 4 sin( π 2T ) . This implies that η 2 sin (1 + x) π 2T sin (1 − x) π 2T 1 4 = O(1), uniformly for all x ∈ [0, 1]. Note also that Bessel functions are uniformly bounded in absolute value by 1 (see [13,Eq. 10.14.1]). This makes it clear that Π N e iπ T x = O(N 1 2 ) for x ∈ [1 − δ, 1]. For x ∈ [−1, −1 + δ], use the relation Π N e − iπ T x = Π N e iπ T x . We now have the required results to prove Theorem 4.1. Proof of Theorem 4.1 part (i). Let [a, b] ⊂ (−1, 1) and choose τ > 0 sufficiently small so that [a − τ, b + τ ] ⊂ (−1, 1). 
Applying the first part of Theorem 4.5 gives us Π N e iπ T x = O(1), x ∈ [a − τ, b + τ ].(24) For the proof of part (i) we need to bound the integral 1 −1 |K N (x, y)| dy uniformly for x ∈ [a, b] . We do so by dividing the interval [−1, 1] into the following subsets: I 1 = y ∈ [−1, 1] : |y − x| ≤ N −1 I 2 = y ∈ [−1, 1] : N −1 < |y − x| ≤ τ I 3 = {y ∈ [−1, 1] : τ < |y − x|} . We will obtain estimates for the kernel for x ∈ [a, b] and y in each of I 1 , I 2 and I 3 , and then estimate the associated integral over each of I 1 , I 2 and I 3 . For N > 1/τ , we have I 1 and I 2 are nonempty and are contained within [a − τ, b + τ ] ⊂ (−1, 1). By equation (24), Π N e iπ T y = O(1), y ∈ I 1 ∪ I 2 . For y ∈ I 1 , we have K N (x, y) = e iπ T n(y−x) N −1 k=0 Π k (e iπ T y )Π k (e iπ T x ) = O(N ). This implies I1 |K N (x, y)| dy ≤ O(N ) I1 dy = O(1), because |I 1 | ≤ N −1 . By Lemma 4.3, K N (x, y) = Imag    e − iπ T N 2 y · Π N e iπ T y · e − iπ T N 2 x · Π N e iπ T x sin π 2T (x − y)    . Note that since the sine function is concave in [0, π], we have | sin π 2T (x − y) | ≥ sin π 2T |x − y| for x, y ∈ [−1, 1]. Therefore, for all y ∈ [−1, 1], |K N (x, y)| ≤ O(1) 1 |x − y| Π N e iπ T y For y ∈ I 2 , this can be reduced to |K N (x, y)| ≤ O(1) 1 |x−y| . Therefore, I2 |K N (x, y)| dy ≤ O(1) I2 1 |x − y| dy ≤ O(1) τ N −1 1 s ds = O(log(N )). For y ∈ I 3 , since |x − y| −1 < τ −1 = O(1), we have |K N (x, y)| ≤ O(1) Π N e iπ T y . Therefore, I3 |K N (x, y)| dy ≤ O(1) I3 Π N e iπ T y dy ≤ O(1) 1 −1 Π N eI 1 = y ∈ [−1, 1] : |y − x| ≤ N −1 or |1 − y| ≤ N −1 I 2 = y ∈ [−1, 1] : N −1 < |y − x| ≤ δ and |1 − y| > N −1 I 3 = y ∈ [−1, 1] : δ < |y − x| and |1 − y| > N −1 . By Theorem 4.5, Π N e iπ T x = O(N 1 2 ), x ∈ [−1, 1].(25) Therefore, K N (x, y) = e iπ T n(y−x) N −1 k=0 Π k (e iπ T y )Π k (e iπ T x ) = O(N 2 ). By the Cauchy-Schwarz inequality and the fact that |I 1 | ≤ 2N −1 , we have I1 |K N (x, y)| dy ≤ I1 |K N (x, y)| 2 dy 1 2 I1 dy 1 2 ≤ 2 N 1 2 1 −1 |K N (x, y)| 2 dy 1 2 . 
By the connection between K N and P N , 1 −1 |K N (x, y)| 2 dy = P N K N (x, ·) (x). Since K N (x, y) = K N (y, x) and because K N (·, x) ∈ H N for each x ∈ [−1, 1], we have, 1 −1 |K N (x, y)| 2 dy = K N (x, x). Therefore, I1 |K N (x, y)| dy = O(N − 1 2 ) O(N 2 ) 1 2 = O(N|K N (x, y)| ≤ O(N 1 2 ) 1 |x − y| Π N e iπ T y . Therefore, for y ∈ I 3 , |K N (x, y)| ≤ O(N 1 2 ) Π N e iπ T y , because |x − y| > δ for y ∈ I 3 . Hence, I3 |K N (x, y)| dy ≤ O(N 1 2 ) I3 Π N e iπ T y dy ≤ O(N 1 2 ) 1 −1 Π N e iπ T y 2 dy 1 2 = O(N 1 2 ). All that remains is to show that I2 |K N (x, y)| dy = O(N Take the asymptotic expressions for Π N in Theorem 4.5 for the x and y currently in question, and consider the numerator in the formula for the kernel K N (x, y) (Lemma 4.3). An asymptotic formula is as follows: Imag e − iπ T N 2 y · Π N e iπ T y · e − iπ T N 2 x · Π N e iπ T x (26) = 1 2T π 2 N η 1 2 π 2 N λ 1 2 · sin (1 + x) π 2T sin (1 − x) π 2T 1 4 J 0 (N η) sin (1 − y) π 2T sin (1 + y) π 2T 1 4 J 1 (N λ)(27)− sin (1 − x) π 2T sin (1 + x) π 2T 1 4 J 1 (N η) sin (1 + y) π 2T sin (1 − y) π 2T 1 4 J 0 (N λ) + O(1) = N 1 2 π 4T · η 2 sin (1 + x) π 2T sin (1 − x) π 2T 1 4 J 0 (N η) sin (1 − y) π 2T sin (1 + y) π 2T 1 4 (N λ) 1 2 J 1 (N λ) (28) − sin (1 − x) π 2T sin (1 + x) π 2T 1 4 (N η) 1 2 J 1 (N η) λ 2 sin (1 + y) π 2T sin (1 − y) π 2T 1 4 J 0 (N λ) + O(1). This was an important step in the proof, because there was cancellation when we took the imaginary part. This cancellation is essential for the result to hold, and it is the reason for deriving and including a fully explicit description of the leading order asymptotics of the polynomials in Appendix A. We will now proceed to find upper bounds on the resulting expression. We showed in the proof of Theorem 4.5 that, η 2 sin (1 + x) π 2T sin (1 − x) π 2T 1 4 = O(1), uniformly for all x ∈ [0, 1]. The same is true when x and η are replaced by y and λ. 
It is straightforward to also show that (1 − y) π 2T ≤ sin π 2T λ 2 for y ∈ [0, 1] and λ ∈ 0, π 2 . From this, we have that for y ∈ I 2 , λ ≥ π 2T N . Combining this with the fact that for t → ∞, J α (t) = O t − 1 2 (see [13, Eq. 10.17.3]) we get that J 0 (N λ) = O N − 1 4 . Note also that Bessel functions are uniformly bounded in absolute value by 1 (see [13, Eq. 10.14.1]). Furthermore, as t → ∞, t . Collecting the bounds mentioned in the last three paragraphs, we conclude that for y ∈ I 2 , we have, Imag e − iπ T N 2 y · Π N e iπ T y · e − iπ T N 2 x · Π N e iπ T x = O(N 1 2 )J 0 (N η)(1 − y) 1 4 + O(N 1 4 ). (29) To conclude, we prove two refinements of equation (29), depending on whether x ∈ [1 − δ, 1 − N −1 ] or x ∈ [1 − N −1 , 1]. When x ∈ [1 − δ, 1 − N −1 ], we have J 0 (N η) = O(N − 1 4 ) (just like for y ∈ I 2 discussed above), and so, Finally, when x ∈ [1 − N −1 , 1] and y ∈ I 2 , we have 1 − y = x − y + 1 − x ≤ x − y + N −1 (since x ≥ y). By concavity of the function t → |t| Imag e − iπ T N 2 y · Π N e iπ T y · e − iπ T N 2 x · Π N e iπ T x = O(N1 4 at t = x − y > 0, we have (x − y + N −1 ) 1 4 ≤ (x − y) 1 4 + 1 4 N −1 (x − y) − 3 4 . Substituting this bound into equation (29), we get, K N (x, y) = O N 1 2 (1 − y) 1 4 |x − y| = O N 1 2 |x − y| − 3 4 + O N − 1 2 |x − y| − 7 4 The integral is bounded in the predictable manner as follows: I2 |K N (x, y)| dy = O(N 1 2 ) I2 |x − y| − 3 4 dy + O(N − 1 2 ) I2 |x − y| − 7 4 dy ≤ O(N 1 2 ) + O(N − 1 2 ) 1 N −1 s − 7 4 ds = O(N 1 2 ) + O(N − 1 2 · N Best uniform approximation by Fourier extensions We will compare best uniform approximation in three spaces. For odd positive integer N = 2n+1, define, 1, 1]). We will see that best uniform approximation by Fourier extensions is more similar to that of algebraic polynomials. 
The best approximation error functionals for these spaces are defined by H N = span eP N = span x k 2n k=0 ⊂ C([−E(f ; H N ) = inf r N ∈H N f − r N L ∞ (−1,1) for all f ∈ C([−1, 1]), E(g; T N ) = inf t N ∈T N g − t N L ∞ (−T,T ) for all g ∈ C per ([−T, T ]), E(h; P N ) = inf p N ∈P N h − p N L ∞ (−1,1) for all h ∈ C([−1, 1]). We wish to find bounds in terms of N and the regularity of the functions to be approximated. For f ∈ C([−1, 1]) the modulus of continuity is defined by [11,29], ω(f ; δ) = sup x,y∈[−1,1] |x−y|<δ |f (x) − f (y)|.(30) For g ∈ C per ([−T, T ]) we define the periodic modulus of continuity to be, ω per (g; δ) = sup x,y∈[−T,T ] d T (x,y)<δ |g(x) − g(y)|,(31) where d T (x, y) is the distance between x, y as elements of the periodic interval [−T, T ]. The following results are immediate. A Jackson-type theorem The original Jackson Theorem for classical Fourier series asserts that for all k = 0, 1, 2, . . . and all functions g ∈ C k per ([−T, T ]), we have There is also a polynomial version of Jackson's Theorem, which states that for all k = 0, 1, 2, . . . and all functions h ∈ C k ([−1, 1]), we have E(g; T N ) = O(N −k ) ω per g (k) ; 1 N ,(32)E(h; P N ) = O(N −k ) ω h (k) ; 1 N ,(33)|g(x) − g(y)| ≤ |f (1) − f (−1)| 2(T − 1) δ; (iii) if x ∈ [−1, 1], y ∈ [−T, T ]\[−1, 1] then |g(x) − g(y)| ≤ |f (ξ) − f (x)| + |g(y) − g(ξ)| ≤ ω(f, δ) + |f (1) − f (−1)| 2(T − 1) δ,|f (1) − f (−1)| ≤ 2m−1 k=0 f −1 + k m − f −1 + k + 1 m ≤ 2mω f ; 1 m It suffices to take m > 1/δ to show that |f (1)−f (−1)| 2(T −1) δ ≤ 1 T −1 ω(f ; δ) . Combining all four cases demonstrates ω per (g, δ) ≤ T T −1 ω(f, δ). Now let k > 0 and choose as extension of f the 2(k + 1)th degree Hermite interpolant in the points x = 1, and x = −1; then g (k) (x) is the linear interpolation between f (k) (1) and f (k) (−1) for x ∈ [−T, T ]\[−1, 1]. By the case k = 0 proved above, ω per (g (k) ; δ) ≤ T T −1 ω(f (k) ; δ). Proof of Theorem 5.3. Let f ∈ C k ([−1, 1]). 
By Lemma 5.4, this function can be extended to a function g ∈ C k per ([−T, T ]) such that ω per (g (k) ; δ) is bounded by T T −1 ω(f (k) ; δ). Let t N ∈ T N be the best uniform approximation to g, then (trivially) there exists a function r N ∈ H N such that r N (x) = t N (x) for all x ∈ [−1, 1]. Hence, E(f ; H N ) ≤ f − r N L ∞ (−1,1) ≤ g − t N L ∞ (−T,T ) = E(g; T N ). The original Jackson Theorem can now be used to bound E(g; T N ): E(g; T N ) = O(N −k ) ω per (g (k) ; δ) ≤ O(N −k ) ω(f (k) ; δ). This proves the result. The combination of Lemma 5.1 and Theorem 5.3 yields the following useful fact. If f ∈ C k,α ([−1, 1]) for k ≥ 0 and α ∈ [0, 1], then E(f ; H N ) = O(N −k−α ) |f (k) | C α ([−1,1]) . In the sequel we will see that this is not actually tight, in the sense that functions in C k,α ([−1, 1]) can see a decay of best approximation error with a rate faster than N −k−α . This is in contrast to the situation for classical Fourier series in which it is indeed tight (see Theorem 5.5). A Bernstein-type theorem While Jackson-type theorems bound the best approximation error functional by powers of N and moduli of continuity of derivatives, Bernstein-type theorems attempt to do the opposite. Bernstein-type theorems follow from Bernstein inequalities. For classical Fourier series, the Bernstein inequality is, t N ∞ ≤ π T n t N ∞ ,(34) for all t N ∈ T N , where N = 2n + 1 [11,Ch. 4,Th. 2.4]. Equality holds when t N (x) ∝ e ± iπ T nx . From Bernstein's inequality it is possible to show that there exists C T > 0 such that [11,Ch. 7,Thm. 3.1], ω per g; 1 N ≤ C T n N k=3 k odd E(g; T N ).(35) Now, this is not precisely a converse to Jackson's Theorem, but it implies the following tightness property. (θ) = h (cos (θ)) = |sin (θ)| 2α for θ ∈ [−π, π]. If α < 1 2 then g ∈ C 2α ([−π, π]) , so E(g; T N ) = O(N −2α ) by Theorem 5.5. Furthermore, the best approximations will be even since g is even, so the approximants are in fact polynomials in cos(θ). 
This implies that the best approximations to g are polynomials in x, showing that E(h; P_N) = O(N^{−2α}), twice as good as would be expected from Jackson's Theorem for algebraic polynomials (equation (33)). The weighted modulus of continuity is defined by

ω_φ(f; δ) = sup_{x±h ∈ [−1,1], 0 ≤ h < φ(x)δ} |f(x + h) − f(x − h)|. (36)

Taking the weight φ(x) = 1/2 returns the standard modulus of continuity in equation (30). It turns out that if this weighted modulus of continuity is used with φ(x) = √(1 − x²), then there is a direct analogue of Theorem 5.5 for best uniform approximation by algebraic polynomials:

E(h; P_N) = O(N^{−α}) ⟺ ω_φ(h; δ) = O(δ^α), where φ(x) = √(1 − x²).

The proof of Theorem 5.6 depends on the Bernstein inequality for algebraic polynomials, which states that for all p_N ∈ P_N,

||φ · p_N'||_{L^∞(−1,1)} ≤ N ||p_N||_{L^∞(−1,1)}, (37)

where φ(x) = √(1 − x²) [11, Ch. 4, Cor. 1.2]. Compare this with the Bernstein inequality for classical Fourier series (equation (34)). If we wish to remove the factor of φ on the left-hand side of equation (37), then we must change N to N² on the right-hand side; this is then Markov's inequality [11, Ch. 4, Thm. 1.4]. A Bernstein inequality was proved for Fourier extensions by Videnskii [38]; see also [6, p. 242] and [30, Sec. 2]. It states that for all r_N ∈ H_N,

||φ · r_N'||_{L^∞(−1,1)} ≤ (π/T) n ||r_N||_{L^∞(−1,1)}, (38)

where the weight function is

φ(x) = √( sin((1 − x)π/(2T)) sin((1 + x)π/(2T)) ) / cos(xπ/(2T)). (39)

Since the sine function is concave in [0, π], the factors sin((1 ± x)π/(2T)) are bounded below by positive multiples of 1 ± x; moreover, |sin((1 ± x)π/(2T))| ≤ (1 ± x)π/(2T) and cos(xπ/(2T)) ∈ [cos(π/(2T)), 1] for x ∈ [−1, 1]. Therefore,

sin(π/(2T)) √(1 − x²) ≤ φ(x) ≤ (π/(2T cos(π/(2T)))) √(1 − x²), (40)

and we can change equation (38) to

||φ · r_N'||_{L^∞(−1,1)} ≤ (π/(T sin(π/(2T)))) n ||r_N||_{L^∞(−1,1)}, (41)

where φ(x) = √(1 − x²). Using the Bernstein inequality in equation (41), we can prove a Bernstein-type theorem for Fourier extensions. Theorem 5.7 (Bernstein-type).
There exists a constant C_T > 0 such that for all f ∈ C([−1, 1]), the following holds:

ω_φ(f; 1/N) ≤ (C_T/n) Σ_{k=3, k odd}^{N} E(f; H_k),

where φ(x) = √(1 − x²) and N = 2n + 1.

Proof. This follows directly from [11, Ch. 6, Thm. 6.2] and [11, Ch. 7, Thm. 5.1(b)].

From this Bernstein-type theorem for Fourier extensions, we get one half of an equivalence theorem between best approximation errors and weighted moduli of continuity. For the full equivalence, one must prove Conjecture 5.9 below.

Theorem 5.8. Let f ∈ C([−1, 1]) and α ∈ (0, 1). It holds that E(f; H_N) = O(N^{−α}) ⟹ ω_φ(f; δ) = O(δ^α). If Conjecture 5.9 is true, then the reverse implication holds too.

Proof. The forward implication follows immediately from Theorem 5.7. Suppose now that Conjecture 5.9 is true. Then we would have

E(f; H_N) ≤ (C_T/n) ||φ · f'||_{L^∞(−1,1)} (42)

for all f ∈ C¹([−1, 1]), by setting f(x) = F(e^{iπx/T}): indeed, f ∈ C¹([−1, 1]) if and only if F ∈ C¹(A), the map x ↦ q_n(e^{iπx/T}) belongs to H_N, and |µ(e^{iπx/T})| ≤ (π/T) φ(x). We wish to extend this to all f ∈ W^{1,1}(−1, 1) such that φ · f' ∈ L^∞(−1, 1) by a density argument, where W^{1,p}(−1, 1) is the Sobolev space of L^p(−1, 1) functions whose weak derivatives lie in L^p(−1, 1). For such a function f, one can verify that the functions f_ρ(x) = f(ρx) for ρ ∈ (0, 1) satisfy: f_ρ ∈ W^{1,∞}(−1, 1), f_ρ → f in L^∞, and ||φ · f_ρ'||_∞ ≤ ||φ · f'||_∞. For each ρ and ε > 0 there exists f_{ρ,ε} ∈ C¹([−1, 1]) such that ||f_{ρ,ε} − f_ρ||_{W^{1,∞}} < ε, by density of C¹([−1, 1]) in W^{1,∞}(−1, 1). Therefore there exists f_ε ∈ C¹([−1, 1]) such that ||f − f_ε||_{L^∞(−1,1)} < ε and ||φ · f_ε'||_∞ ≤ ||φ · f'||_∞ + ε. Hence

E(f; H_N) ≤ ||f − f_ε||_{L^∞(−1,1)} + E(f_ε; H_N) ≤ (1 + C_T/n) ε + (C_T/n) ||φ · f'||_∞.

Since ε is arbitrary, we have the desired inequality. A similar argument may be found in [11, p. 280]. From the above it would follow that there exists a constant C_T > 0 such that

E(f; H_N) ≤ C_T ω_φ(f; 1/N), (43)

from equation (42), [11, Ch. 6, Thm. 6.2] and [11, Ch. 7, Thm.
5.1(a)], with r = 1, µ = 1, X = L^∞(−1, 1), Φ_n = H_N, and Y = W^1_∞(φ) := {f ∈ W^{1,1}(−1, 1) : φ · f' ∈ L^∞(−1, 1)}. Equation (43) would imply that if ω_φ(f; δ) = O(δ^α), then E(f; H_N) = O(N^{−α}), as required.

Conjecture 5.9 (Jackson inequality for polynomials on a circular arc). For any T > 1, define the arc on the complex unit circle A = {e^{iθ} : θ ∈ [−π/T, π/T]}. There exists a constant C_T > 0 such that for all F ∈ C¹(A) and all n ∈ N, there exists a polynomial q_n of degree n such that

sup_{z ∈ A} |F(z) − q_n(z)| ≤ (C_T/n) sup_{z ∈ A} |µ(z) F'(z)|,

where µ(z) = (z − e^{iπ/T})(z − e^{−iπ/T}).

Notice that to approximate f we conjecture that we only need to use positive powers of z, which means we do not need to utilize all of the functions in H_N. This is because, by Mergelyan's Theorem [33, Thm. 20.5], polynomials are dense in the space C(A). It is not surprising because of the redundant nature of approximation by Fourier extensions.

A localization theorem for Fourier extensions

The theorem proved in this section is a modification of a theorem of Freud ([16, Thm. IV.5.4]), which is a localization theorem for orthogonal polynomials on an interval. We, however, are working with the orthonormal basis given in Lemma 4.2, and there are some clear differences between the two situations. We show that these differences do not change the statement of the result.

Proof. First note that the pointwise error can be written in terms of the prolate kernel discussed in Section 4 as

P_N(f)(x) − f(x) = ∫_{−1}^{1} (f(y) − f(x)) K_N(x, y) dy.

Let x ∈ [c, d] ⊂ (a, b), so that f(x) = 0. By the formula for the prolate kernel (Lemma 4.3),

P_N(f)(x) − f(x) = ∫_{−1}^{1} [f(y) / sin((π/(2T))(x − y))] · Imag[ e^{−i(π/T)(N/2)y} Π_N(e^{iπy/T}) · e^{−i(π/T)(N/2)x} Π_N(e^{iπx/T}) ] dy.
By expressing the imaginary part as 1/(2i) times the difference of the complex conjugates, it is easy to see that for this expression to tend to zero as N → ∞, it is sufficient that for any f as in the statement of the theorem, we have

lim_{N→∞} ∫_{−1}^{1} [f(y) / sin((π/(2T))(x − y))] e^{i(π/T)(N/2)y} Π_N(e^{iπy/T}) dy = 0,

uniformly for x ∈ [c, d]. To prove this we consider the functions

g_ξ(y) = f(y) e^{iπy/(2T)} / sin((π/(2T))(ξ − y)), for ξ ∈ [c, d].

It holds that g_ξ ∈ L²(−1, 1), because g_ξ is equal to 0 inside [a, b] and equal to f (an L²(−1, 1) function) multiplied by a bounded function (y ↦ e^{iπy/(2T)} / sin((π/(2T))(ξ − y))) outside of [a, b]. Let ε > 0. By Proposition 2.4, for any ξ ∈ [c, d], there exists K_ξ ∈ N and a function h_{K_ξ} ∈ H_{K_ξ} such that ||g_ξ − h_{K_ξ}||_{L²(−1,1)} < ε. A key property of the function h_{K_ξ} is that for N ≥ K_ξ,

∫_{−1}^{1} h_{K_ξ}(y) e^{i(π/T)((N−1)/2)y} Π_N(e^{iπy/T}) dy = 0, (44)

because h_{K_ξ}(y) e^{i(π/T)((N−1)/2)y} is a polynomial of degree (K_ξ − 1)/2 + (N − 1)/2 ≤ N − 1 in the variable z = exp(iπy/T). Now, because the map x ↦ g_x is a continuous mapping from [c, d] → L²(−1, 1), there exists an interval I(ξ) such that for all x ∈ I(ξ), ||g_x − h_{K_ξ}||_{L²(−1,1)} < ε is still valid. In consequence of the Heine–Borel Compactness Theorem [33], the interval [c, d] will be covered by finitely many of these intervals I(ξ), which we denote I(ξ_1), I(ξ_2), . . . , I(ξ_s). Let K_ε be an odd integer such that h_{K_{ξ_i}} ∈ H_{K_ε} for i = 1, . . . , s. For an arbitrary x ∈ [c, d] there is an interval I(ξ_r) such that x ∈ I(ξ_r), and for N > K_ε we have (using equation (44)),

| ∫_{−1}^{1} [f(y) / sin((π/(2T))(x − y))] e^{i(π/T)(N/2)y} Π_N(e^{iπy/T}) dy |
= | ∫_{−1}^{1} (g_x(y) − h_{K_{ξ_r}}(y)) e^{i(π/T)((N−1)/2)y} Π_N(e^{iπy/T}) dy |
≤ ( ∫_{−1}^{1} |g_x(y) − h_{K_{ξ_r}}(y)|² dy )^{1/2} · ( ∫_{−1}^{1} | e^{i(π/T)((N−1)/2)y} Π_N(e^{iπy/T}) |² dy )^{1/2} < ε.

This last line used the Cauchy–Schwarz inequality and the normality of the basis for H_N discussed in Lemma 4.2. In conclusion, since ε is arbitrary and the inequality above is valid for all N > K_ε, the integral must converge to zero as N → ∞, uniformly with respect to x ∈ [c, d], as required.
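Before turning to the experiments, the Bernstein inequality in equation (41) can be sanity-checked numerically. The following Python sketch is our own illustration (not code from the paper): it draws a random element of H_N, differentiates it termwise, and compares the weighted derivative norm on a fine grid against the bound with constant (π/(T sin(π/(2T)))) n; the parameter choices T = 2 and n = 12 are arbitrary.

```python
import numpy as np

# Numerical sanity check of the Bernstein-type inequality (41) for a random
# element of H_N (hypothetical parameter choices: T = 2, n = 12, seed 0).
rng = np.random.default_rng(0)
T, n = 2.0, 12
k = np.arange(-n, n + 1)
c = rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size)

x = np.linspace(-1.0, 1.0, 20001)            # fine grid on [-1, 1]
E = np.exp(1j * np.pi * np.outer(x, k) / T)  # basis functions e^{i pi k x / T}
rN = E @ c                                   # r_N(x)
drN = E @ (1j * np.pi * k / T * c)           # r_N'(x), differentiated termwise

phi = np.sqrt(1.0 - x**2)
lhs = np.max(phi * np.abs(drN))              # ~ ||phi * r_N'|| on [-1, 1]
rhs = (np.pi / (T * np.sin(np.pi / (2 * T)))) * n * np.max(np.abs(rN))
```

For random coefficients the ratio lhs/rhs is typically well below 1; near-equality requires specially chosen extremal elements, as in the classical trigonometric case (34).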
Numerical experiments

In this section we provide numerically computed examples of pointwise and uniform convergence of Fourier extensions for functions with various regularity properties. It was discussed in the introduction that the linear system for computing the Fourier extension is extremely ill-conditioned, making computation of the exact solution of the Fourier extension practically impossible in ordinary floating point arithmetic. To deal with this issue, we used sufficiently high precision floating point arithmetic and we did not take N higher than 129, to ensure that the system could be inverted accurately. The right-hand-side vectors for the computations are computed by quadrature in high precision floating point arithmetic. In practice, one would compute a fast regularized oversampled interpolation Fourier extension using the algorithm in [27], requiring only O(N log²(N)) floating point operations. However, we are interested in the exact Fourier extension and want to avoid any artefacts that may be caused by the regularization or discretization of the domain. In some cases, we compare the convergence rate of Fourier extensions to that of Legendre series, because we predict that the qualitative behaviour of Legendre series will be similar (see Section 8). For the Legendre series approximations we computed the Legendre series coefficients one by one using adaptive quadrature in 64-bit floating point precision.

Analytic and entire functions

Theorem 3.1 gives an upper bound on the rate of exponential convergence of Fourier extension approximations to analytic functions. The regions of analyticity in the complex plane which dictate the rate are the mapped Bernstein ellipses D(ρ), where ρ > 1. The theorem is illustrated in Figure 1, where we approximate an entire function and four analytic functions which each have a pole in a different location in the complex plane.
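A Fourier extension of the kind just described can be reproduced at modest N in ordinary double precision by oversampled least squares. The sketch below is our own illustration, not the paper's code: the SVD cutoff inside `lstsq` acts like the regularization discussed in Section 8, so this computes a regularized rather than exact extension, and the choices T = 2, n = 16 and the entire test function e^x (one of the Figure 1 functions) are ours.

```python
import numpy as np

# Double-precision sketch of a Fourier extension via oversampled least squares
# (illustrative; the experiments in the text use high-precision arithmetic).
T, n = 2.0, 16
N = 2 * n + 1
k = np.arange(-n, n + 1)

xs = np.linspace(-1.0, 1.0, 4 * N)              # oversampled grid on [-1, 1]
A = np.exp(1j * np.pi * np.outer(xs, k) / T)    # collocation matrix

f = lambda t: np.exp(t)                         # entire function, as in Figure 1
c, *_ = np.linalg.lstsq(A, f(xs), rcond=None)   # SVD-based solve; the cutoff
                                                # tames the ill-conditioning
condA = np.linalg.cond(A)                       # grows rapidly with N

xe = np.linspace(-1.0, 1.0, 2001)
err = np.max(np.abs(np.exp(1j * np.pi * np.outer(xe, k) / T) @ c - f(xe)))
```

Even though cond(A) is already large at this N, the truncated least-squares solution delivers a very small uniform error for the entire function; pushing N much higher in double precision is what makes the exact extension impractical.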
All examples exhibit exponential convergence in the uniform norm at a rate which is predicted by Theorem 3.1. This is also the case for the entire function, where the exponential convergence rate is limited by a T-dependent upper bound.

Differentiable functions

The spline functions and the pointwise approximation errors for Fourier extensions with various values of N are plotted in Figure 2. The rates of convergence predicted by Theorem 3.2 fit reasonably well, sometimes performing slightly better. For comparison, we include the errors for a Legendre series approximation in a dashed line of the same color.

Non-differentiable functions

We investigate the approximation of functions with algebraic singularities, discontinuities, and Dini–Lipschitz continuity. Functions with an algebraic singularity at the endpoint are studied in Figure 3. We plot the pointwise errors for Fourier extension and Legendre series approximations to f(x) = x^α for α = 3/4, 1/2, and 1/10. These functions lie in the Hölder spaces C^α([0, 1/2]) for their respective values of α. While Theorem 3.2 guarantees uniform convergence over [−1, 1] only for the first function (since for the other two functions α ≤ 1/2), in our experiments we find that the error of the other two functions converges to zero too. We believe that this discrepancy is related to the weighted moduli of continuity of these functions being more favourable than the standard moduli (see Section 5). Overall, the observed convergence rates are sometimes better than the predicted rates, but when Fourier extensions are compared with Legendre series, we see similar rates of pointwise convergence, especially at the singular point. Three functions with a singularity in the interior are shown in Figure 4. The first has an algebraic singularity: f(x) = |x − r|^{1/4} where r = 0.29384 (chosen to avoid any symmetry with respect to the domain).
We observe agreement with the expected convergence rate of O(N^{−1/4} log N) for the error at interior points. The second function has a jump:

f(x) = x if x ∈ [0, 1/4), and f(x) = 1 if x ∈ [1/4, 1/2]. (45)

Even though the function is highly irregular because of the jump, this does not prevent convergence at regular points, corroborating the local nature of Theorem 3.5. The last function is uniformly Dini–Lipschitz continuous in [0, 1/2]:

f(x) = (log(|x − r|))^{−2} if x ∈ [0, 1/2]\{r}, and f(x) = 0 if x = r, (46)

where r = 0.29384 (chosen to avoid any symmetry with respect to the domain). In Figure 4, the expected convergence rate of O((log(N))^{−1}) of Lemma 3.7 is present. In all three cases, we compared the convergence of Fourier extension approximations and Legendre series. While there is sometimes a mismatch between the pessimistic predictions of Theorem 3.2 and Lemma 3.7 for the convergence rates (see Section 5), when we compare Fourier extensions and Legendre series, we observe agreement.

Discussion

We proved pointwise and uniform convergence results for Fourier extension approximations of functions in Hölder spaces and with local uniform Dini–Lipschitz conditions. This was achieved by proving upper bounds on the associated Lebesgue function and the decay rate of the best uniform approximation error for Fourier extensions, then appealing to Lebesgue's Lemma.

Comparison to Legendre series

For a function f ∈ L²(−1, 1), let us compare the Fourier extension approximant, f_N, to the Legendre series approximant,

f^L_N(x) = Σ_{k=0}^{N−1} a_k p^L_k(x), a_k = (1/2) ∫_{−1}^{1} f(x) p^L_k(x) dx, (47)

where p^L_k is the kth Legendre polynomial, normalized so that (1/2) ∫_{−1}^{1} p^L_k(x)² dx = 1. The Lebesgue function of this approximation scheme is O(log N) for x ∈ [a, b] ⊂ (−1, 1) and O(N^{1/2}) uniformly [17], which is precisely the same as the Lebesgue function for Fourier extensions (see Theorem 4.1). Best uniform approximation by Fourier extensions was compared to best uniform approximation by algebraic polynomials in Section 5.
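The Legendre approximant of equation (47) is straightforward to implement. The sketch below is our own illustration; it assumes the normalization p^L_k = √(2k + 1) P_k (with P_k the standard Legendre polynomial), which is the one consistent with the coefficient formula a_k = (1/2)∫ f p^L_k dx, and computes the coefficients by Gauss–Legendre quadrature.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def legendre_series_coeffs(f, N, nq=200):
    """Coefficients a_k = (1/2) * int_{-1}^{1} f(x) p_k(x) dx as in (47),
    with p_k = sqrt(2k + 1) * P_k so that (1/2) * int p_k^2 dx = 1."""
    x, w = leg.leggauss(nq)          # Gauss-Legendre nodes and weights
    fx = f(x)
    a = np.empty(N)
    for k in range(N):
        pk = np.sqrt(2 * k + 1) * leg.legval(x, [0.0] * k + [1.0])
        a[k] = 0.5 * np.dot(w, fx * pk)
    return a

def legendre_series_eval(a, x):
    """Evaluate f_N(x) = sum_k a_k p_k(x)."""
    out = np.zeros_like(np.asarray(x, dtype=float))
    for k, ak in enumerate(a):
        out += ak * np.sqrt(2 * k + 1) * leg.legval(x, [0.0] * k + [1.0])
    return out

a = legendre_series_coeffs(np.exp, 15)
xe = np.linspace(-1.0, 1.0, 501)
err = np.max(np.abs(legendre_series_eval(a, xe) - np.exp(xe)))
```

For an entire function such as e^x the coefficients decay superexponentially, so the truncation error with only 15 terms is already far below single precision; for the singular test functions of Section 7 the same code exhibits the algebraic decay rates seen in Figures 3 and 4.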
For any f ∈ C^k([−1, 1]) with k ∈ Z_{≥0}, we have

E(f; H_N) = O(N^{−k}) ω(f^{(k)}; 1/N) and E(f; P_N) = O(N^{−k}) ω(f^{(k)}; 1/N).

It follows that for C^{k,α}([−1, 1]) functions, the statement of Theorem 3.2 also applies to Legendre series approximations. The localized convergence result for Dini–Lipschitz functions, Theorem 3.5, also applies to Legendre series [16, Thm. IV.5.6]. Some of the experiments in Section 7 demonstrate these similarities. Theorem 2.1 on exponential convergence differs from the exponential convergence results for Legendre series in two ways. First, the region in the complex plane which determines the rate of exponential convergence is determined by Bernstein ellipses for Legendre series, but by mapped Bernstein ellipses for Fourier extensions. Second, there is an upper limit of cot²(π/(4T)) on the rate of exponential convergence of Fourier extensions regardless of the region of analyticity, whereas for Legendre series the rate can be arbitrarily fast, and for entire functions the rate of convergence is superexponential [39].

Extensions of this work

It was mentioned in the introduction that our convergence results will be more applicable if we can extend them to regularized and oversampled interpolation versions of Fourier extensions, because those are the kinds of Fourier extensions for which stable and efficient algorithms have been developed. Regularized Fourier extensions for a given regularization parameter ε > 0 can be defined as follows. Suppose the matrix G ∈ R^{N×N}, G_{k,j} = sinc((k − j)π/T), has eigendecomposition G = V S V*. Let S_ε be S but with all entries less than ε set to 0. The coefficients c_ε ∈ C^N of the regularized Fourier extension of f ∈ L²(−1, 1) are given by the formula below [27]. In other words, the solution is projected onto the eigenvectors of G whose eigenvalues are greater than or equal to ε. These eigenvectors are the Discrete Prolate Spheroidal Sequences (DPSSs), which are the Fourier coefficients of the DPSWFs {ξ_{k,N}}_{k=1}^{N} discussed in Section 4 [34].
c_ε = V S_ε† V* b, where b_k = (2T)^{−1/2} ∫_{−1}^{1} e^{−iπkx/T} f(x) dx.

The regularized Fourier extension therefore finds the best approximation not in H_N, but in the linear space H_{N,ε} ⊂ H_N ⊂ L²(−1, 1), where

H_{N,ε} = span{ξ_{k,N} : λ_{k,N} ≥ ε}.

Therefore, if the Lebesgue function Λ(x; P_{N,ε}) (where P_{N,ε} is the projection operator from L²(−1, 1) to H_{N,ε}) and the best approximation error functional E(f; H_{N,ε}) can be estimated as in Sections 4 and 5, then we immediately obtain pointwise convergence results for regularized Fourier extensions by Lebesgue's Lemma. Extensions to the regularized oversampled interpolation version of Fourier extensions can be conducted by considering the analogous quantities for the Periodic Discrete Prolate Spheroidal Sequences (PDPSSs) [41, 27]. Generalization of this work to the multivariate case would be extremely interesting, because the shape of the domain Ω ⊂ R^d and the regularity of its boundary will likely come into play [28].

To conclude, (τ(θ)² / (2 sin((θ − α)/2)))^{1/4} = O(1), uniformly for all θ ∈ [α, π]. Note also that Bessel functions are uniformly bounded in absolute value by 1 (see [13, Eq. 10.14.1]). This makes it clear that e_n(θ) = O(n^{1/2}), as required. The fact that ψ_n(e^{−iθ}, α) is the complex conjugate of ψ_n(e^{iθ}, α) follows from the fact that the weight satisfies f(−θ) = f(θ), so the coefficients of ψ_n(z, α) are real (see [35, p. 288]).

Let Σ_k c_k e^{iπkx/T} be its Fourier series, and for all odd integers N = 2n + 1, define t_N(x) = Σ_{k=−n}^{n} c_k e^{iπkx/T}. Then, following the definitions of f_N, g and t_N, we have

||f − f_N||_{L²(−1,1)} ≤ ||f − t_N||_{L²(−1,1)} = ||g − t_N||_{L²(−T,T)} → 0 as N → ∞.

Theorem 3.2. If f ∈ C^{k,α}([−1, 1]) where k ≥ 0 and α ∈ [0, 1], then for all [a, b] ⊂ (−1, 1), . . .

Proof. This follows from Lemma 5.1 and Theorem 5.3.

Proof of Theorem 3.4. This follows from Theorem 3.2 with k = 0, because N^{1/2−α} log N → 0 as N → ∞ for all α > 1/2.

Proof of Theorem 3.5.
The following proof is an analogue of a proof of Freud for polynomial approximation ([16, Thm. IV.5.6]). Define the functions f_1 and f_2 by . . .

Theorem 4.1 (Lebesgue function bounds). (i) For each closed interval [a, b] ⊂ (−1, 1), the Lebesgue function satisfies sup_{x ∈ [a,b]} Λ(x; H_N) = O(log N).

Remark 4.4. Setting T = 1 in this formula returns the Dirichlet kernel of classical Fourier series, because Π_N(z) = z^N for the trivial weight f(θ) ≡ 1.

Theorem 4.5. Let {Π_k}_{k=0}^{∞} be the (normalized) orthogonal polynomials on the unit circle with respect to the weight f . . . is the same as for the Nth (normalized) Legendre polynomial in [−1, 1] [35, Thm. 8.21.6]. This proves that Λ(x; P_N) = O(log(N)) uniformly for x ∈ [a, b].

Proof of Theorem 4.1 part (ii). Let δ ∈ (0, 1/4) be sufficiently small so that Theorem 4.5 applies to the intervals [−1 + 2δ, 1 − 2δ] and [1 − 2δ, 1]. Using part (i) of the present theorem, we have that for all x ∈ [−1 + δ, 1 − δ] the Lebesgue function satisfies Λ(x; P_N) = O(log(N)) = O(N^{1/2}) uniformly in such x. Now, since Π_N(e^{−iπx/T}) is the complex conjugate of Π_N(e^{iπx/T}), it follows that K_N(−x, y) = K_N(x, −y), so that Λ(−x; P_N) = Λ(x; P_N). Therefore, to complete the proof we need only show that Λ(x; P_N) = O(N^{1/2}) uniformly for x ∈ [1 − δ, 1]. For such x, we divide the interval [−1, 1] into the following subsets: . . . as in the proof of part (i) of the theorem, but this time using the estimate for Π_N . . ., we have for all x, y ∈ [−1, 1], . . . uniformly for x ∈ [1 − δ, 1]. For x ∈ [1 − δ, 1] and y ∈ I_2, we have y ∈ [1 − 2δ, 1], so that the asymptotic expression in Theorem 4.5 holds. Define the variables η = cos^{−1}(. . .) . . . Since J_α(t) = O(1) (see [13, Eq. 10.17.3]) for x ∈ [1 − δ, 1 − N^{−1}] and y ∈ I_2, we therefore have ∫_{I_2} |K_N(x, y)| dy ≤ O(. . .). Since this covers all x ∈ [−1, 1] with finitely many uniform O(N^{1/2}) upper bounds, we have the final result uniformly for all x ∈ [−1, 1]. ⊂ C_per([−T, T]), Lemma 5.
1. If f is in the Hölder space C^α([−1, 1]) for α ∈ [0, 1], then ω(f; δ) ≤ δ^α |f|_{C^α([−1,1])} for all δ > 0.

Lemma 5.2. If f ∈ C([−1, 1]) is uniformly Dini–Lipschitz [42], i.e.

lim_{δ→0} sup_{|x−y|<δ} |f(x) − f(y)| · |log δ| = 0,

then ω(f; δ) = o(1/|log δ|). In equation (32) the constant in the big O depends on k and T [20, Thm. 1.IV]; in equation (33) it depends only on k [20, Thm. 1.VIII]. We prove a version of Jackson's Theorem for Fourier extensions.

Theorem 5.3 (Jackson-type). For all k = 0, 1, 2, . . . and all functions f ∈ C^k([−1, 1]), E(f; H_N) = O(N^{−k}) ω(f^{(k)}; 1/N), where the constant in the big O depends only on k and T.

Lemma 5.4 (Periodic extension). Let f ∈ C^k([−1, 1]). Then f can be extended to a function g ∈ C^k_per([−T, T]) such that ω_per(g^{(k)}; δ) ≤ (T/(T − 1)) ω(f^{(k)}; δ).

Proof. First let k = 0. Define the function g ∈ C_per([−T, T]) such that for x ∈ [−1, 1], g(x) = f(x), and for x ∈ [−T, T]\[−1, 1], g(x) is the linear function which interpolates f at {−1, 1}. We distinguish between 4 different cases for points x, y ∈ [−T, T] such that d_T(x, y) ≤ δ: (i) if x, y ∈ [−1, 1], then |g(x) − g(y)| = |f(x) − f(y)| ≤ ω(f; δ); (ii) if x, y ∈ [−T, T]\[−1, 1], then since g is linear in this region, . . . where ξ is the closest of the endpoints to x; and (iv) if x ∈ [−T, T]\[−1, 1], y ∈ [−T, T], the bound is similar to the previous one. Now it remains to bound |f(1) − f(−1)| in terms of ω(f; δ). For any positive integer m, we can use a telescoping sum, . . .

Theorem 5.5 (Jackson–Bernstein [11, Ch. 7, Thm. 3.3]). Let g ∈ C_per([−T, T]) and α ∈ (0, 1). It holds that E(g; T_N) = O(N^{−α}) ⟺ ω_per(g; δ) = O(δ^α).

The direct analogue of Theorem 5.5 for best uniform approximation by algebraic polynomials in C([−1, 1]) is not true. Indeed, consider the function h(x) = (1 − x²)^α, whose modulus of continuity satisfies ω(h; δ) = O(δ^α) by Lemma 5.1. Define the function g.
It was only in the late twentieth century that characterizations of functions h ∈ C([−1, 1]) for which E(h; P_N) = O(N^{−α}) were developed [11, Ch. 8]. The key insight is to use weighted moduli of continuity. The weighted modulus of continuity with weight φ : [−1, 1] → [0, ∞) for a function f ∈ C([−1, 1]) is defined as in equation (36).

Theorem 5.6 (Ditzian–Totik [12, Cor. 7.2.5]). Let h ∈ C([−1, 1]) and α ∈ (0, 1). It holds that . . . 6.2] and [11, Ch. 7, Thm. 5.1(b)], with r = 1, µ = 1, X = L^∞(−1, 1), Φ_n = H_N, and Y = W^1_∞(φ) := {f ∈ W^{1,1}(−1, 1) : φ · f' ∈ L^∞(−1, 1)}, where W^{1,1}(−1, 1) is the Sobolev space of absolutely continuous functions on (−1, 1).

Theorem 6.1 (Localization theorem). Let f ∈ L²(−1, 1) be such that f(x) = 0 for all x ∈ [a, b] ⊆ [−1, 1]. Then P_N(f) → 0 uniformly in all subintervals [c, d] ⊂ (a, b). · Π_N(e^{iπy/T}) dy = 0.

Figure 1: We compute Fourier extension approximations to 5 functions: f(x) = e^x (yellow stars) and f(x) = 1/(x − r) for r = 0.3i, 0.6i, 1.5i, 2.0i (red circles, blue squares, green crosses and brown diamonds, respectively). The T parameter is 2.43. Left: the mapped Bernstein ellipses D(ρ) in the complex plane, for ρ = 1.891, 3.454, 8.913. The outermost outline (in blue) encloses the maximal mapped Bernstein ellipse; analyticity outside this largest region does not increase the exponential convergence rate. Right: the L^∞(−1, 1) error against values of N for each of the 5 functions. The black dashed lines indicate the convergence rates predicted by Theorem 3.1.

We investigate Fourier extension approximation of splines of degree d = 3, 9 and 15 on the interval [0, 1/2], which lie in the Hölder spaces C^{d−1,1}. By Theorem 3.2, we expect the pointwise errors to be O(N^{−d} log N) in the interior, and O(N^{1/2−d}) uniformly over the whole interval.
Figure 2: Above: plots of splines of degree 3, 9, and 15 in C^{2,1}, C^{8,1}, and C^{14,1}, respectively, with an interior point marked using a red circle and a boundary point marked with a blue square. Below: the pointwise error at an interior point (red circle) and an endpoint (blue square) using Fourier extension with T = 2 (full lines) and using Legendre series (dashed lines) against the number of degrees of freedom, N. The black dotted lines indicate the upper bounds on the algebraic rates of convergence predicted by Theorem 3.2.

Figure 3: Above left: f(x) = x^{3/4}. Above middle: f(x) = x^{1/2}. Above right: f(x) = x^{1/10}. Below: the pointwise error at an interior point (red circles), singular endpoint (green crosses), and non-singular endpoint (blue squares) using Fourier extension with T = 2 (full lines) and Legendre series (dashed lines) against the number of degrees of freedom, N. The black dotted lines indicate the upper bounds on the algebraic rates of convergence predicted by Theorem 3.2.

Figure 4: Above left: f(x) = |x − r|^{1/4} with r = 0.29384. Above middle: function with a jump, given in equation (45). Above right: Dini–Lipschitz continuous function given in equation (46); it has a strong cusp at x = 0.29384. Below: the pointwise error at an interior point (red circles), singular interior point (green crosses), and endpoint (blue squares) using Fourier extension with T = 2 (full lines) and Legendre series (dashed lines) against the number of degrees of freedom, N. The black dotted lines in the bottom left plot indicate the upper bounds on the algebraic rates of convergence predicted by Theorem 3.2. The black dotted line in the bottom right plot indicates the rate of convergence predicted by Lemma 3.7.

The Lebesgue function of this approximation scheme is O(log N) for x ∈ [a, b] ⊂ (−1, 1) and O(N^{1/2}) uniformly. E(f; H_N) = O(N^{−k}) ω(f^{(k)}; 1/N) and E(f; P_N) = O(N^{−k}) ω(f^{(k)}; 1/N).
Articles such as [4] refer to this type of Fourier extension as the exact continuous Fourier extension.

Acknowledgements

We benefited from useful discussions with Ben Adcock, Arno Kuijlaars, Walter Van Assche, and Andrew Gibbs.

A Asymptotics of Legendre polynomials on a circular arc

Krasovsky derived the asymptotics of polynomials orthogonal on an arc {e^{iθ} : α ≤ θ ≤ 2π − α} with respect to a positive analytic weight f_α(θ) by Riemann–Hilbert analysis [22, 23, 10]. We are interested in the case f_α(θ) ≡ 1, the Legendre polynomials on an arc of the unit circle. The following lemma follows Krasovsky's instructions on how to calculate an asymptotic expansion of these polynomials in various regions of the complex plane, where we restrict to the special case of the arc itself.

Lemma A.1. Let {φ_k}_{k=0}^{∞} be the polynomials in z with positive leading coefficient, satisfying the orthonormality relations (1/(2π)) ∫ . . . for n, m = 0, 1, 2, . . . . Then there exists δ > 0 such that (49)–(50) hold, where τ(θ) = cos^{−1}(cos(θ/2)/γ) and γ = cos(α/2). The asymptotics for θ ∈ [2π − α − δ, 2π − α] can be determined using the relation that φ_n(e^{−iθ}, α) is the complex conjugate of φ_n(e^{iθ}, α).

Proof. In the notation of [22], the arc on which the polynomials are orthogonal is contained within "region 1" of the complex plane. The asymptotics of φ_n(z, α) in region 1 (for f_α(θ) = 1) are given by . . ., where χ_n is the leading coefficient of φ_n(z, α), γ = cos(α/2), R(z) is a 2×2-matrix-valued function which is analytic and satisfies R(z) = I + O(n^{−1}), M(z) is a 2×2-matrix-valued analytic function whose expression changes depending on whether z is in a neighbourhood of the endpoints of the arc or not (see below), and ψ(z) is a conformal mapping of the outside of the arc to the outside of the unit circle, given by . . . . The branch of the square root which is positive for positive arguments is taken. This is similar to [22, Eqn. 2.56], which gives the asymptotics of φ_n(z, α) in subsets of the complex plane outside a fixed neighbourhood of the arc.
The job of this Lemma is to unpack this expression and convert to the variable θ ∈ [α, 2π − α], where z = e^{iθ}. The leading coefficient has asymptotic expression χ_n = γ^{−n−1/2}(1 + O(n^{−1})) (by [22, Eq. 2.58]), and we have, after some algebraic manipulation, for all θ ∈ [α, 2π − α], . . . (51). Since |ψ(e^{iθ})/√(e^{iθ})| = 1 (which can be shown directly or inferred from the conformal mapping definition of ψ above), the function τ(θ) defined in the statement of the lemma maps θ ∈ [α, 2π − α] to τ ∈ [0, π], and provides us with the simple identity ψ(e^{iθ})/√(e^{iθ}) = e^{−iτ(θ)}. Substituting this into equation (51) we write . . . . According to [22, Eq. 2.23], there exists δ > 0 so that this asymptotic expression is valid for θ ∈ [α + δ, 2π − α − δ] with M set as the function . . ., and for θ ∈ [α, α + δ] with M set as the function . . ., where H^{(1)}, H^{(2)} denote Hankel functions. Therefore, grouping terms to convert exponentials into trigonometric functions, we obtain (49)–(50). For θ ∈ [α, α + δ], we can simplify the formula for M and obtain . . . . Using the fact that J_ν = (1/2)(H^{(1)}_ν + H^{(2)}_ν), . . . For all τ ∈ [0, π/2], τ² ≤ (π²/4)(1 − cos(τ)), and cos(τ) = cos(θ/2)/γ, so τ² ≤ (π²/(4γ))(γ − cos(θ/2)) = (π²/(2γ)) sin((θ + α)/4) sin((θ − α)/4). For θ ∈ [α, π], we have sin((θ − α)/4) ≤ sin((θ − α)/2), so we can

References

[1] B. Adcock and D. Huybrechs. On the resolution power of Fourier extensions for oscillatory functions. Journal of Computational and Applied Mathematics, 260:312-336, 2014.
[2] B. Adcock and D. Huybrechs. Frames and numerical approximation II: generalized sampling. Submitted, 2017.
[3] B. Adcock and D. Huybrechs. Frames and numerical approximation. Accepted in SIAM Review, 2018.
[4] B. Adcock, D. Huybrechs, and J. Martín-Vaquero. On the numerical stability of Fourier extensions.
Foundations of Computational Mathematics, 14(4):635-687, 2014.
[5] B. Adcock and J. Ruan. Parameter selection and numerical approximation properties of Fourier extensions from fixed data. Journal of Computational Physics, 273:453-471, 2014.
[6] P. Borwein and T. Erdélyi. Polynomials and polynomial inequalities, volume 161. Springer Science & Business Media, 2012.
[7] J. P. Boyd. A comparison of numerical algorithms for Fourier extension of the first, second, and third kinds. Journal of Computational Physics, 178(1):118-160, 2002.
[8] J. P. Boyd. Fourier embedded domain methods: extending a function defined on an irregular region to a rectangle so that the extension is spatially periodic and C^∞. Applied Mathematics and Computation, 161(2):591-597, 2005.
[9] O. P. Bruno, Y. Han, and M. M. Pohlman. Accurate, high-order representation of complex three-dimensional surfaces via Fourier continuation analysis. Journal of Computational Physics, 227(2):1094-1125, 2007.
[10] P. Deift. Orthogonal polynomials and random matrices: a Riemann-Hilbert approach, volume 3. American Mathematical Society, 1999.
[11] R. A. DeVore and G. G. Lorentz. Constructive approximation, volume 303. Springer Science & Business Media, 1993.
[12] Z. Ditzian and V. Totik. Moduli of smoothness. Springer-Verlag, Berlin, 1987.
[13] NIST Digital Library of Mathematical Functions. http://dlmf.nist.gov/, Release 1.0.20 of 2018-09-15. F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, and B. V. Saunders, eds.
[14] T. A. Driscoll, N. Hale, and L. N. Trefethen. Chebfun guide. Pafnuty Publications, Oxford, 2014.
[15] L. C. Evans. Partial differential equations. American Mathematical Society, 2010.
[16] G. Freud. Orthogonal polynomials. Pergamon Press, 1971.
[17] T. H. Gronwall. Über die Laplacesche Reihe. Mathematische Annalen, 74(2):213-270, 1913.
[18] D. Huybrechs. On the Fourier extension of nonperiodic functions. SIAM Journal on Numerical Analysis, 47(6):4326-4355, 2010.
[19] D. Huybrechs and R. Matthysen. Computing with functions on domains with arbitrary shapes.
In International Conference Approximation Theory, pages 105-117. Springer, 2016. The theory of approximation. D Jackson, The American SocietyD. Jackson. The theory of approximation. The American Society, 1930. The fast Slepian transform. S Karnik, Z Zhu, M B Wakin, J Romberg, M A Davenport, S. Karnik, Z. Zhu, M. B. Wakin, J. Romberg, and M. A. Davenport. The fast Slepian transform. Applied and Computational Harmonic Analysis, 2017. Gap probability in the spectrum of random matrices and asymptotics of polynomials orthogonal on an arc of the unit circle. I V Krasovsky, International Mathematics Research Notices. 25I. V. Krasovsky. Gap probability in the spectrum of random matrices and asymptotics of polynomials orthogonal on an arc of the unit circle. International Mathematics Research Notices, 2004(25):1249-1272, 2004. The RiemannHilbert approach to strong asymptotics for orthogonal polynomials on. A B J Kuijlaars, R Mclaughlin, W Van Assche, M Vanlessen, Advances in Mathematics. 188A. B. J. Kuijlaars, R. Mclaughlin, W. Van Assche, and M. Vanlessen. The RiemannHilbert ap- proach to strong asymptotics for orthogonal polynomials on [-1,1]. Advances in Mathematics, 188:337-398, 2004. A fast algorithm for Fourier continuation. M Lyon, SIAM Journal on Scientific Computing. 336M. Lyon. A fast algorithm for Fourier continuation. SIAM Journal on Scientific Computing, 33(6):3241-3260, 2011. Approximation error in regularized SVD-based Fourier continuations. M Lyon, Applied Numerical Mathematics. 6212M. Lyon. Approximation error in regularized SVD-based Fourier continuations. Applied Numerical Mathematics, 62(12):1790-1803, 2012. Freud equations for Legendre polynomials on a circular arc and solution of the Grünbaum-Delsarte-Janssen-Vries problem. A P Magnus, Journal of Approximation Theory. 1391-2A. P. Magnus. Freud equations for Legendre polynomials on a circular arc and solution of the Grünbaum-Delsarte-Janssen-Vries problem. 
Journal of Approximation Theory, 139(1- 2):75-90, 2006. Fast algorithms for the computation of Fourier extensions of arbitrary length. R Matthysen, D Huybrechs, SIAM Journal on Scientific Computing. 382R. Matthysen and D. Huybrechs. Fast algorithms for the computation of Fourier extensions of arbitrary length. SIAM Journal on Scientific Computing, 38(2):A899-A922, 2016. Function approximation on arbitrary domains using Fourier extension frames. R Matthysen, D Huybrechs, SIAM Journal on Numerical Analysis. 563R. Matthysen and D. Huybrechs. Function approximation on arbitrary domains using Fourier extension frames. SIAM Journal on Numerical Analysis, 56(3):1360-1385, 2018. Fundamentals of approximation theory. H N Mhaskar, D V Pai, CRC PressH. N. Mhaskar and D. V. Pai. Fundamentals of approximation theory. CRC Press, 2000. B Nagy, V Totik, Bernsteins inequality for algebraic polynomials on circular arcs. Constructive approximation. 37B. Nagy and V. Totik. Bernsteins inequality for algebraic polynomials on circular arcs. Constructive approximation, 37(2):223-232, 2013. . JuliaApproximation/ApproxFun.jl. JuliaApproximation/ApproxFun.jl, 2018. Interpolation and approximation by polynomials. G M Phillips, Springer Science & Business Media14G. M. Phillips. Interpolation and approximation by polynomials, volume 14. Springer Science & Business Media, 2003. Real and Complex Analysis. W Rudin, McGraw-Hill International Editions3rd EditionW. Rudin. Real and Complex Analysis, 3rd Edition. McGraw-Hill International Editions, 1987. Prolate spheroidal wave functions, Fourier analysis, and uncertainty V: The discrete case. D Slepian, Bell System Technical Journal. 575D. Slepian. Prolate spheroidal wave functions, Fourier analysis, and uncertainty V: The discrete case. Bell System Technical Journal, 57(5):1371-1430, 1978. Orthogonal polynomials. G Szegő, American Mathematical Soc23G. Szegő. Orthogonal polynomials, volume 23. American Mathematical Soc., 1939. 
Approximation theory and approximation practice. L N Trefethen, SIAM128L. N. Trefethen. Approximation theory and approximation practice, volume 128. SIAM, 2013. The prolate matrix. Linear algebra and its applications. J Varah, 187J. Varah. The prolate matrix. Linear algebra and its applications, 187:269-278, 1993. Extremal estimates for the derivative of a trigonometric polynomial on an interval shorter than its period. V Videnskii, In Soviet Math. Dokl. 1V. Videnskii. Extremal estimates for the derivative of a trigonometric polynomial on an interval shorter than its period. In Soviet Math. Dokl, volume 1, pages 5-8, 1960. On the convergence rates of Legendre approximation. H Wang, S Xiang, Mathematics of Computation. 81278H. Wang and S. Xiang. On the convergence rates of Legendre approximation. Mathematics of Computation, 81(278):861-877, 2012. Extension of Chebfun to periodic functions. G B Wright, M Javed, H Montanelli, L N Trefethen, SIAM Journal on Scientific Computing. 375G. B. Wright, M. Javed, H. Montanelli, and L. N. Trefethen. Extension of Chebfun to periodic functions. SIAM Journal on Scientific Computing, 37(5):C554-C573, 2015. On the periodic discrete prolate spheroidal sequences. W Y Xu, C Chamzas, SIAM Journal on Applied Mathematics. 446W. Y. Xu and C. Chamzas. On the periodic discrete prolate spheroidal sequences. SIAM Journal on Applied Mathematics, 44(6):1210-1217, 1984. Trigonometric series. A Zygmund, Cambridge University Press1A. Zygmund. Trigonometric series, volume 1. Cambridge University Press, 2002.
Pointwise and uniform convergence of Fourier extensions

Marcus Webb, Vincent Coppé, Daan Huybrechs
Department of Computer Science, KU Leuven, Celestijnenlaan 200A, 3001 Leuven, Belgium
arXiv:1811.09527

Abstract. Fourier series approximations of continuous but nonperiodic functions on an interval suffer from the Gibbs phenomenon: a permanent oscillatory overshoot in the neighbourhoods of the endpoints. Fourier extensions circumvent this issue by approximating the function using a Fourier series that is periodic on a larger interval. Previous results on the convergence of Fourier extensions have focused on the error in the L2 norm, but in this paper we analyze pointwise and uniform convergence of Fourier extensions (formulated as the best approximation in the L2 norm). We show that the pointwise convergence of Fourier extensions is more similar to that of Legendre series than of classical Fourier series. In particular, unlike classical Fourier series, Fourier extensions yield pointwise convergence at the endpoints of the interval. Similar to Legendre series, pointwise convergence at the endpoints is slower by an algebraic order of one half compared to that in the interior. The proof is conducted by an analysis of the associated Lebesgue function, together with Jackson- and Bernstein-type theorems for Fourier extensions. Numerical experiments are provided. We conclude the paper with open questions regarding the regularized and oversampled least squares interpolation versions of Fourier extensions.
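The object of the abstract can be reproduced in a few lines. Below is a minimal numerical sketch, our own construction and not the paper's code: the function name `fourier_extension`, the grid, and the `rcond` threshold are illustrative assumptions. It computes an oversampled least-squares Fourier extension of the nonperiodic function f(x) = x from [-1, 1] to a series periodic on [-2, 2], with regularization supplied by the truncated SVD inside `np.linalg.lstsq`; unlike a plain Fourier series of f, the extension shows no Gibbs overshoot at the endpoints.

```python
import numpy as np

def fourier_extension(f, n=20, T=2.0, oversample=4):
    """Least-squares Fourier extension of f from [-1, 1] to a series
    that is 2T-periodic. Truncated-SVD regularization is applied by
    np.linalg.lstsq through its rcond cutoff."""
    m = oversample * (2 * n + 1)                   # oversampled grid on [-1, 1]
    x = np.linspace(-1.0, 1.0, m)
    k = np.arange(-n, n + 1)
    A = np.exp(1j * np.pi * np.outer(x, k) / T)    # Fourier basis of period 2T
    coeffs, *_ = np.linalg.lstsq(A, f(x).astype(complex), rcond=1e-14)
    def approximant(t):
        t = np.atleast_1d(np.asarray(t, dtype=float))
        return (np.exp(1j * np.pi * np.outer(t, k) / T) @ coeffs).real
    return approximant

# f(x) = x is continuous but nonperiodic on [-1, 1].
g = fourier_extension(lambda x: x, n=20)
grid = np.linspace(-1.0, 1.0, 1001)        # includes both endpoints
max_err = np.max(np.abs(g(grid) - grid))
```

The uniform error `max_err` (endpoints included) is small here because f is entire; the half-order endpoint penalty discussed in the paper shows up in the convergence *rate*, not as a Gibbs-type overshoot.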
FOLIATIONS ON QUATERNION CR-SUBMANIFOLDS

STERE IANUŞ, ADRIAN MIHAI IONESCU, GABRIEL EDUARD VÎLCU

25 Jul 2007
arXiv:0707.3812

Stere Ianuş: Department of Mathematics, University of Bucharest, C.P. 10-119, Post. Of. 10, 72200 Bucharest, Romania ([email protected])
Adrian Mihai Ionescu: Department of Mathematics, University Politehnica of Bucharest, Splaiul Independentei, Nr. 313, Sector 6, Bucureşti, Romania ([email protected])
Gabriel Eduard Vîlcu: Department of Mathematics and Computer Science, "Petroleum-Gas" University of Ploieşti, Bulevardul Bucureşti, Nr. 39, Ploieşti, Romania ([email protected])

AMS Mathematics Subject Classification: 53C15
Key Words and Phrases: quaternion CR-submanifold, quaternion Kähler manifold, foliation

Abstract. The purpose of this paper is to study the canonical foliations of a quaternion CR-submanifold of a quaternion Kähler manifold.

Corollary 5.2. Let F⊥ be the canonical totally real foliation on a quaternion CR-submanifold M of a quaternion Kähler manifold (M, σ, g) with µ = 0. Then the induced metric g on M is bundle-like for F⊥ if and only if the second fundamental form B of M satisfies B(U, JαV) + B(V, JαU) ∈ Jβ(D⊥) ⊕ Jγ(D⊥), for any U, V ∈ Γ(D) and α = 1, 2 or 3, where (α, β, γ) is an even permutation of (1, 2, 3).
Proof. The assertion is immediate from Theorem 5.1.

Corollary 5.3. Let F⊥ be the canonical totally real foliation on a quaternion CR-submanifold M of a quaternion Kähler manifold (M, σ, g) with µ = 0. Then F⊥ is totally geodesic with bundle-like metric g = g|M if and only if M is mixed geodesic and the second fundamental form B of M satisfies B(U, JαV) + B(V, JαU) ∈ Jβ(D⊥) ⊕ Jγ(D⊥), for any U, V ∈ Γ(D) and α = 1, 2 or 3, where (α, β, γ) is an even permutation of (1, 2, 3).
Proof. The proof follows from Corollary 3.3 and Corollary 5.2.

QR-products in quaternion Kähler manifolds

From Theorem 2.4 we deduce that any D-geodesic CR-submanifold of a quaternion Kähler manifold admits a σ-invariant totally geodesic foliation, which we denote by F.

Proposition 6.1. If M is a totally geodesic quaternion CR-submanifold of a quaternion Kähler manifold (M, σ, g), then M is a ruled submanifold with respect to both foliations F and F⊥.
Proof. The assertion follows from Corollary 4.3 and Theorem 2.4.

Theorem 6.2. Let M be a quaternion CR-submanifold of a quaternion Kähler manifold (M, σ, g). Then M is a QR-product if and only if the next three conditions are satisfied:
Proof. The proof is immediate from Theorems 2.4 and 4.2.

Corollary 6.3. Any totally geodesic quaternion CR-submanifold of a quaternion Kähler manifold is a QR-product.
Proof. The assertion is clear.

Corollary 6.4. Let M be a quaternion CR-submanifold of a quaternion Kähler manifold (M, σ, g) with µ = 0. Then M is a QR-product if and only if M is totally geodesic.
Proof. The assertion is immediate from Theorem 6.2.

Introduction

CR-submanifolds of Kähler manifolds were introduced by Bejancu [4]. They appear as generalizations both of totally real and of holomorphic submanifolds of Kähler manifolds. This notion was further extended to the quaternion setting by Barros, Chen and Urbano [3], who consider quaternion CR-submanifolds of quaternion Kähler manifolds as generalizations of both quaternion and totally real submanifolds. If M is a quaternion CR-submanifold of a quaternion Kähler manifold, then two distributions, denoted by D and D⊥, are defined on M. It follows that D⊥ is always integrable (and of constant rank), i.e. it is tangent to a foliation on M. Necessary and sufficient conditions are provided in order that this foliation be totally geodesic and Riemannian, respectively. The paper is organized as follows: in Section 2 we recall basic definitions and fundamental properties of quaternion CR-submanifolds of quaternion Kähler manifolds. Section 3 develops techniques that prove useful in characterizing the geometry of D and D⊥; conditions for total geodesicity are derived.
Section 4 deals with quaternion CR-submanifolds that are ruled with respect to D⊥. Section 5 studies the case where D⊥ is tangent to a Riemannian foliation. In the last section, QR-products in quaternion space forms are studied, and a characterization of such submanifolds is given.

Preliminaries on quaternion CR-submanifolds

Let M be a differentiable manifold of dimension n and assume that there is a rank-3 subbundle σ of End(TM) such that there exists a local basis {J1, J2, J3} of sections of σ satisfying:

(1) J1² = J2² = J3² = −Id, J1J2 = −J2J1 = J3.

Then the bundle σ is called an almost quaternion structure on M and {J1, J2, J3} is called a canonical local basis of σ. Moreover, M is said to be an almost quaternion manifold. It is easy to see that any almost quaternion manifold is of dimension n = 4m. A Riemannian metric g is said to be adapted to the almost quaternion structure σ if it satisfies:

(2) g(JαX, JαY) = g(X, Y), ∀α ∈ {1, 2, 3},

for all vector fields X, Y on M and any local basis {J1, J2, J3} of σ. Moreover, (M, σ, g) is said to be an almost quaternion hermitian manifold. If the bundle σ is parallel with respect to the Levi-Civita connection ∇ of g, then (M, σ, g) is said to be a quaternion Kähler manifold. Equivalently, there exist locally defined 1-forms ω1, ω2, ω3 such that:

(3) ∇X J1 = ω3(X)J2 − ω2(X)J3, ∇X J2 = −ω3(X)J1 + ω1(X)J3, ∇X J3 = ω2(X)J1 − ω1(X)J2,

for any vector field X on M.

Let (M, σ, g) be an almost quaternion hermitian manifold. If X ∈ TxM, x ∈ M, then the 4-plane Q(X) spanned by {X, J1X, J2X, J3X} is called a quaternion 4-plane. A 2-plane in TxM spanned by {X, Y} is called half-quaternion if Q(X) = Q(Y). The sectional curvature of a half-quaternion 2-plane is called quaternion sectional curvature. A quaternion Kähler manifold is a quaternion space form if its quaternion sectional curvatures are equal to a constant, say c.
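Relations (1) and (2) can be checked concretely: left multiplication by the quaternion units i, j, k on H ≅ R⁴ gives 4×4 real matrices satisfying exactly these identities, with the Euclidean metric as an adapted metric. The following sketch is our own illustration (the matrix names are ours, not the paper's):

```python
import numpy as np

# Left multiplication by i, j, k on a quaternion a + bi + cj + dk,
# written in the coordinates (a, b, c, d) of R^4.
J1 = np.array([[0., -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])
J2 = np.array([[0., 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]])
J3 = np.array([[0., 0, 0, -1], [0, 0, -1, 0], [0, 1, 0, 0], [1, 0, 0, 0]])
I4 = np.eye(4)

# Relations (1): J_alpha^2 = -Id and J1 J2 = -J2 J1 = J3.
assert np.allclose(J1 @ J1, -I4)
assert np.allclose(J2 @ J2, -I4)
assert np.allclose(J3 @ J3, -I4)
assert np.allclose(J1 @ J2, J3) and np.allclose(J2 @ J1, -J3)

# Relation (2): the Euclidean metric is adapted, i.e. each J_alpha is an
# isometry: g(J X, J Y) = (JX)^T (JY) = X^T (J^T J) Y = X^T Y = g(X, Y).
for J in (J1, J2, J3):
    assert np.allclose(J.T @ J, I4)
```

Since left multiplication is an algebra homomorphism, the cyclic identities J2J3 = J1 and J3J1 = J2 follow automatically from (1), which the matrices above also satisfy.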
It is well known that a quaternion Kähler manifold (M, σ, g) is a quaternion space form (denoted M(c)) if and only if its curvature tensor is:

(4) R(X, Y)Z = (c/4){g(Z, Y)X − g(X, Z)Y + Σα=1..3 [g(Z, JαY)JαX − g(Z, JαX)JαY + 2g(X, JαY)JαZ]},

for all vector fields X, Y, Z on M and any local basis {J1, J2, J3} of σ.

Remark 2.1. For a submanifold M of a quaternion Kähler manifold (M, σ, g), we denote by g the metric tensor induced on M. If ∇̄ denotes the Levi-Civita connection of the ambient manifold and ∇ the covariant differentiation induced on M, the Gauss and Weingarten formulas are given by:

(5) ∇̄XY = ∇XY + B(X, Y), ∀X, Y ∈ Γ(TM),

and

(6) ∇̄XN = −A_N X + ∇⊥XN, ∀X ∈ Γ(TM), ∀N ∈ Γ(TM⊥),

where B is the second fundamental form of M, ∇⊥ is the connection in the normal bundle and A_N is the shape operator of M with respect to N. The shape operator A_N is related to B by:

(7) g(A_N X, Y) = g(B(X, Y), N),

for all X, Y ∈ Γ(TM) and N ∈ Γ(TM⊥). If we denote by R and R̄ the curvature tensor fields of ∇ and ∇̄, we have the Gauss equation:

(8) g(R̄(X, Y)Z, U) = g(R(X, Y)Z, U) + g(B(X, Z), B(Y, U)) − g(B(Y, Z), B(X, U)),

for all X, Y, Z, U ∈ Γ(TM).

A submanifold M of a quaternion Kähler manifold (M, σ, g) is called a quaternion CR-submanifold if there exist two orthogonal complementary distributions D and D⊥ on M such that:

i. D is invariant under the quaternion structure, that is:
(9) Jα(Dx) ⊆ Dx, ∀x ∈ M, ∀α = 1, 2, 3;

ii. D⊥ is totally real, that is:
(10) Jα(D⊥x) ⊆ TxM⊥, ∀α = 1, 2, 3, ∀x ∈ M.

A submanifold M of a quaternion Kähler manifold (M, σ, g) is a quaternion submanifold (respectively, a totally real submanifold) if dim D⊥ = 0 (respectively, dim D = 0). We remark that condition ii. above implies that the subbundles Jα(D⊥x) are in direct sum, for any local basis as in (1).

Definition 2.2 ([3]). Let M be a quaternion CR-submanifold of a quaternion Kähler manifold (M, σ, g).
Then M is called a QR-product if M is locally the Riemannian product of a quaternion submanifold and a totally real submanifold of M.

Remark 2.3. If we denote by h⊥ and h the second fundamental forms of D⊥ and D, then we have the following two equations (see [5]):

i. the D-Gauss equation:

(11) g(R(X, Y)QZ, QU) = g(R^D(X, Y)QZ, QU) + g(h(X, QZ), h(Y, QU)) − g(h(Y, QZ), h(X, QU)),

for all X, Y, Z, U ∈ Γ(TM), where Q is the projection morphism of TM on D;

ii. the D⊥-Gauss equation:

(12) g(R(X, Y)Q⊥Z, Q⊥U) = g(R^{D⊥}(X, Y)Q⊥Z, Q⊥U) + g(h⊥(X, Q⊥Z), h⊥(Y, Q⊥U)) − g(h⊥(Y, Q⊥Z), h⊥(X, Q⊥U)),

for all X, Y, Z, U ∈ Γ(TM), where Q⊥ is the projection morphism of TM on D⊥.

Let M be a quaternion CR-submanifold of a quaternion Kähler manifold (M, σ, g). Then we say that:

i. M is D-geodesic if B(X, Y) = 0, ∀X, Y ∈ Γ(D);
ii. M is D⊥-geodesic if B(X, Y) = 0, ∀X, Y ∈ Γ(D⊥);
iii. M is mixed geodesic if B(X, Y) = 0, ∀X ∈ Γ(D), Y ∈ Γ(D⊥).

We recall now the following result, Theorem 2.4 below, which we shall need in the sequel. A distribution D in a Riemannian manifold is called minimal if the trace of its second fundamental form vanishes. We will illustrate here some of the techniques in this paper on the following proposition (see also [8], [11]).

Proposition 2.5. If M is a CR-submanifold of a quaternion Kähler manifold (M, σ, g), then the quaternion distribution D is minimal.

Proof. Take X ∈ Γ(D) and U ∈ Γ(D⊥). Then we have:

(13) g(∇XX, U) = g(Jα∇XX, JαU) = g(−(∇XJα)X + ∇XJαX, JαU) = g(ωβ(X)JγX − ωγ(X)JβX + ∇XJαX, JαU) = g(∇XJαX, JαU) = g(A_{JαU}JαX, X)

and

g(∇_{JαX}JαX, U) = −g(A_{JαU}X, JαX) = −g(X, A_{JαU}JαX).

Now, for the quaternion distribution D one takes a local orthonormal frame of the form {ei, J1ei, J2ei, J3ei}; summing over i gives the assertion.
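Formula (4) from the preliminaries can be sanity-checked numerically at a single tangent space: take the Euclidean metric on R⁸ ≅ H², let each Jα act blockwise by a quaternion unit, and evaluate the sectional curvature K(X, Y) = g(R(X, Y)Y, X)/(g(X, X)g(Y, Y) − g(X, Y)²) from (4). A half-quaternion 2-plane should give quaternion sectional curvature c, while a totally real 2-plane gives c/4 — the two constants that reappear in Theorem 6.5. This is our own hedged sketch; all names are ours:

```python
import numpy as np

# Quaternion units acting on H = R^4 by left multiplication, extended
# blockwise to R^8 = H^2.
Li = np.array([[0., -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])
Lj = np.array([[0., 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]])
Lk = np.array([[0., 0, 0, -1], [0, 0, -1, 0], [0, 1, 0, 0], [1, 0, 0, 0]])
J = [np.kron(np.eye(2), L) for L in (Li, Lj, Lk)]
g = lambda u, v: float(u @ v)            # Euclidean metric on R^8

c = 1.7                                  # an arbitrary constant curvature

def R(X, Y, Z):
    """Curvature tensor of a quaternion space form, formula (4)."""
    out = g(Z, Y) * X - g(X, Z) * Y
    for Ja in J:
        out += (g(Z, Ja @ Y) * (Ja @ X) - g(Z, Ja @ X) * (Ja @ Y)
                + 2.0 * g(X, Ja @ Y) * (Ja @ Z))
    return (c / 4.0) * out

def K(X, Y):
    """Sectional curvature of span{X, Y}."""
    return g(R(X, Y, Y), X) / (g(X, X) * g(Y, Y) - g(X, Y) ** 2)

e = np.eye(8)
K_half_quaternion = K(e[0], J[0] @ e[0])  # plane {X, J1 X}: expect c
K_totally_real = K(e[0], e[4])            # X, Y in different H-blocks: expect c/4
```

The same calculation done by hand: for Y = J1X only the α = 1 terms of (4) survive and R(X, Y)Y = cX, while for Y orthogonal to the quaternion 4-plane Q(X) every Jα-term vanishes and R(X, Y)Y = (c/4)X.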
Totally real foliation on a quaternion CR-submanifold

Let M be a quaternion CR-submanifold of a quaternion Kähler manifold (M, σ, g). Then we have the orthogonal decomposition: TM = D ⊕ D⊥. We also have the orthogonal decomposition: TM⊥ = µ ⊕ µ⊥, where µ is the subbundle of the normal bundle TM⊥ which is the orthogonal complement of: µ⊥ = J1D⊥ ⊕ J2D⊥ ⊕ J3D⊥.

Since the totally real distribution D⊥ of a quaternion CR-submanifold M of a quaternion Kähler manifold (M, σ, g) is always integrable, we conclude that there is a foliation F⊥ on M with structural distribution D⊥ and transversal distribution D. We say that F⊥ is the canonical totally real foliation on M. Theorem 3.1 states that the next assertions are equivalent:

i. F⊥ is totally geodesic;
ii. B(X, Y) ∈ Γ(µ), ∀X ∈ Γ(D), Y ∈ Γ(D⊥);
iii. A_N X ∈ Γ(D⊥), ∀X ∈ Γ(D⊥), N ∈ Γ(µ⊥);
iv. A_N Y ∈ Γ(D), ∀Y ∈ Γ(D), N ∈ Γ(µ⊥).

Proof. For X, Z ∈ Γ(D⊥) and Y ∈ Γ(D) we have:

g(Jα(∇XZ), Y) = −g(∇XZ − B(X, Z), JαY) = g(−(∇XJα)Z + ∇XJαZ, Y) = g(ωβ(X)JγZ − ωγ(X)JβZ + ∇XJαZ, Y) = g(−A_{JαZ}X + ∇⊥XJαZ, Y) = −g(A_{JαZ}X, Y),

where (α, β, γ) is an even permutation of (1, 2, 3); taking (7) into account we obtain:

(14) g(Jα(∇XZ), Y) = −g(B(X, Y), JαZ).

i. ⇒ ii. If F⊥ is totally geodesic, then ∇XZ ∈ Γ(D⊥) for X, Z ∈ Γ(D⊥), and from (14) we derive g(B(X, Y), JαZ) = 0; the implication is clear.

ii. ⇒ i. If we suppose B(X, Y) ∈ Γ(µ), ∀X ∈ Γ(D), Y ∈ Γ(D⊥), then from (14) we derive g(Jα(∇XZ), Y) = 0 and we conclude ∇XZ ∈ Γ(D⊥). Thus F⊥ is totally geodesic.

ii. ⇔ iii. This equivalence is clear from (7).

iii. ⇔ iv. This equivalence holds because A_N is a self-adjoint operator.

The proof of Corollary 3.2 is immediate from Theorem 3.1.

Totally real ruled quaternion CR-submanifolds

A submanifold M of a Riemannian manifold (M, g) is said to be a ruled submanifold if it admits a foliation whose leaves are totally geodesic immersed in (M, g).

Definition 4.1.
A quaternion CR-submanifold of a quaternion Kähler manifold which is a ruled submanifold with respect to the foliation F⊥ is called a totally real ruled quaternion CR-submanifold.

Theorem 4.2 states that the next assertions are equivalent:

i. M is a totally real ruled quaternion CR-submanifold.
ii. M is D⊥-geodesic and B(X, Y) ∈ Γ(µ), ∀X ∈ Γ(D), Y ∈ Γ(D⊥).
iii. The subbundle µ⊥ is D⊥-parallel, i.e. ∇⊥XJαZ ∈ Γ(µ⊥), ∀X, Z ∈ Γ(D⊥), α = 1, 2, 3, and the second fundamental form satisfies B(X, Y) ∈ Γ(µ), ∀X ∈ Γ(D⊥), Y ∈ Γ(TM).
iv. The shape operator satisfies A_{JαZ}X = 0, ∀X, Z ∈ Γ(D⊥), α = 1, 2, 3, and A_N X ∈ Γ(D), ∀X ∈ Γ(D⊥), N ∈ Γ(µ).

Proof. i. ⇔ ii. For any X, Z ∈ Γ(D⊥) we have:

∇XZ = ∇XZ + B(X, Z) = ∇^{D⊥}_X Z + h⊥(X, Z) + B(X, Z),

and thus we conclude that the leaves of D⊥ are totally geodesic immersed in M iff h⊥ = 0 and M is D⊥-geodesic. The equivalence is now clear from Theorem 3.1.

i. ⇔ iii. For any X, Z ∈ Γ(D⊥) and U ∈ Γ(D) we have:

g(∇XZ, U) = g(Jα∇XZ, JαU) = g(−(∇XJα)Z + ∇XJαZ, JαU) = g(ωβ(X)JγZ − ωγ(X)JβZ + ∇XJαZ, JαU) = g(−A_{JαZ}X + ∇⊥XJαZ, JαU) = −g(A_{JαZ}X, JαU),

where (α, β, γ) is an even permutation of (1, 2, 3); taking (7) into account we obtain:

(15) g(∇XZ, U) = −g(B(X, JαU), JαZ).

On the other hand, for any X, Z, W ∈ Γ(D⊥) we have:

(16) g(∇XZ, JαW) = g(∇XZ + B(X, Z), JαW) = g(B(X, Z), JαW).

If X, Z ∈ Γ(D⊥) and N ∈ Γ(µ), then we have:

g(∇XZ, N) = g(Jα∇XZ, JαN) = g(−(∇XJα)Z + ∇XJαZ, JαN) = g(ωβ(X)JβZ + ωγ(X)JγZ − Jα∇XJαZ, N) = g(∇XJαZ, JαN) = g(−A_{JαZ}X + ∇⊥XJαZ, JαN),

and thus we obtain:

(17) g(∇XZ, N) = g(∇⊥XJαZ, JαN).

Finally, M is a totally real ruled quaternion CR-submanifold iff ∇XZ ∈ Γ(D⊥), ∀X, Z ∈ Γ(D⊥), and by using (15), (16) and (17) we deduce the equivalence.

ii. ⇔ iv. This is clear from (7).

The proof of Corollary 4.3 is clear from Theorem 4.2.

Riemannian foliations and quaternion CR-submanifolds

Let (M, g) be a Riemannian manifold and F a foliation on M.
The metric g is said to be bundle-like for the foliation F if the induced metric on the transversal distribution is parallel with respect to the intrinsic connection on that distribution. This holds if and only if the Levi-Civita connection ∇ of (M, g) satisfies (see [5]):

(18) g(∇_{Q⊥Y}QX, Q⊥Z) + g(∇_{Q⊥Z}QX, Q⊥Y) = 0, ∀X, Y, Z ∈ Γ(TM).

If for a given foliation F there exists a Riemannian metric g on M which is bundle-like for F, then we say that F is a Riemannian foliation on (M, g).

Theorem 5.1. Let M be a quaternion CR-submanifold of a quaternion Kähler manifold (M, σ, g). The next assertions are equivalent:

i. The induced metric g on M is bundle-like for the totally real foliation F⊥.
ii. The second fundamental form B of M satisfies B(U, JαV) + B(V, JαU) ∈ Γ(µ) ⊕ Jβ(D⊥) ⊕ Jγ(D⊥), for any U, V ∈ Γ(D) and α = 1, 2 or 3, where (α, β, γ) is an even permutation of (1, 2, 3).

Proof. From (18) we deduce that g is bundle-like for the totally real foliation F⊥ iff:

(19) g(∇UX, V) + g(∇VX, U) = 0, ∀X ∈ Γ(D⊥), U, V ∈ Γ(D).

On the other hand, for any X ∈ Γ(D⊥), U, V ∈ Γ(D) we have:

g(∇UX, V) + g(∇VX, U) = g(∇UX − B(U, X), V) + g(∇VX − B(V, X), U) = g(−(∇UJα)X + ∇UJαX, JαV) + g(−(∇VJα)X + ∇VJαX, JαU) = g(ωβ(U)JγX − ωγ(U)JβX + ∇UJαX, JαV) + g(ωβ(V)JγX − ωγ(V)JβX + ∇VJαX, JαU) = g(∇UJαX, JαV) + g(∇VJαX, JαU) = −g(A_{JαX}U, JαV) − g(A_{JαX}V, JαU),

where (α, β, γ) is an even permutation of (1, 2, 3); taking (7) into account we derive:

(20) g(∇UX, V) + g(∇VX, U) = −g(B(U, JαV) + B(V, JαU), JαX),

for any X ∈ Γ(D⊥), U, V ∈ Γ(D). The proof is now complete from (19) and (20).

Theorem 2.4 ([3]). Let M be a CR-submanifold of a quaternion Kähler manifold (M, σ, g). Then:
i. The totally real distribution D⊥ is integrable.
ii. The quaternion distribution D is integrable if and only if M is D-geodesic.

Theorem 3.1. Let F⊥ be the canonical totally real foliation on a quaternion CR-submanifold M of a quaternion Kähler manifold (M, σ, g). The next assertions (i.-iv. of Section 3) are equivalent.

Corollary 3.2. Let F⊥ be the canonical totally real foliation on a quaternion CR-submanifold M of a quaternion Kähler manifold (M, σ, g). If M is mixed geodesic, then F⊥ is totally geodesic.
Proof. The assertion is clear from Theorem 3.1.

Corollary 3.3. Let F⊥ be the canonical totally real foliation on a quaternion CR-submanifold M of a quaternion Kähler manifold (M, σ, g) with µ = 0. Then M is mixed geodesic if and only if F⊥ is totally geodesic.

Theorem 4.2. Let M be a quaternion CR-submanifold of a quaternion Kähler manifold (M, σ, g). The next assertions are equivalent:
i. M is a totally real ruled quaternion CR-submanifold.
ii. M is D⊥-geodesic and B(X, Y) ∈ Γ(µ), ∀X ∈ Γ(D), Y ∈ Γ(D⊥).

Corollary 4.3. Let M be a quaternion CR-submanifold of a quaternion Kähler manifold (M, σ, g). If M is totally geodesic, then M is a totally real ruled quaternion CR-submanifold.
Since M is D-geodesic, from Gauss equation we obtain:for any X ′ , Y ′ , Z ′ , U ′ ∈ Γ(D).On the other hand, if M is D-geodesic, then F is a totally geodesic foliation and from (11) we obtain:From(25)and (26) we deduce:for any X ′ , Y ′ , Z ′ , U ′ ∈ Γ(D). Now, from (4) and (27) we derive: Thus we conclude that the the leaves of the foliation F are of constant quaternion sectional curvature c. R D ⊥ (x ′ , J Α X ′ )j Α X ′ , X ′ ) = G(r(x ′ , J Α X ′ )j Α X ′ , X ′ ) = C, for any unit vector field X ′ ∈ Γ(D). The proof is now complete from (24) and (28g(R D ⊥ (X ′ , J α X ′ )J α X ′ , X ′ ) = g(R(X ′ , J α X ′ )J α X ′ , X ′ ) = c, for any unit vector field X ′ ∈ Γ(D). Thus we conclude that the the leaves of the foliation F are of constant quaternion sectional curvature c. The proof is now complete from (24) and (28). Quaternionic and para-quaternionic CR structure on (4n+3)-dimensional manifolds. D Alekseevsky, Y Kamishima, Central European J. Math. 25D. Alekseevsky and Y. Kamishima, Quaternionic and para-quaternionic CR structure on (4n+3)-dimensional manifolds, Central European J. Math. 2 (2004), no. 5, 732-753. Quaternionic structures on a manifold and subordinate structures. D Alekseevsky, S Marchiafava, Ann. Mat. Purra Appl. 17D. Alekseevsky and S. Marchiafava, Quaternionic structures on a manifold and subordinate structures, Ann. Mat. Purra Appl. 17 (1996), 205-273. Quaternion CR-submanifolds of quaternion manifolds. M Barros, B Chen, F Urbano, Kodai Math. J. 4M. Barros, B.Y Chen and F. Urbano, Quaternion CR-submanifolds of quaternion manifolds, Kodai Math. J. 4 (1981), 399-417. CR submanifolds of a Kaehler manifold. A Bejancu, Proc. Am. Math. Soc. 69A. Bejancu, CR submanifolds of a Kaehler manifold, Proc. Am. Math. Soc. 69 (1978), 135- 142. A Bejancu, H R Farran, Foliations and geometric structures, Mathematics and Its Applications. SpringerA. Bejancu and H. R. Farran, Foliations and geometric structures, Mathematics and Its Applications, Springer, 2006. 
A Besse, Einstein manifolds. BerlinSpringer-VerlagA. Besse, Einstein manifolds, Springer-Verlag, Berlin, 1987. Sur le groupes d'holonomie des variétés a connexion affine et des variétés riemanniennes. M Berger, Bull. Soc. Math. France. 83M. Berger, Sur le groupes d'holonomie des variétés a connexion affine et des variétés rie- manniennes, Bull. Soc. Math. France 83 (1955), 279-310. Some foliations and harmonic morphisms. S Ianuş, A M Pastore, Rev. Roum. Math. Pures Appl. 505-6S. Ianuş and A. M. Pastore, Some foliations and harmonic morphisms, Rev. Roum. Math. Pures Appl. 50 (2005), no. 5-6, 671-676. Quaternion Kählerian manifolds. S Ishihara, J. Diff. Geometry. 9S. Ishihara, Quaternion Kählerian manifolds, J. Diff. Geometry 9 (1974), 483-500. A subfoliation of a CR-foliation on a local conformal almost Kähler manifold. T W Kim, H K Pak, J. Korean Math. Soc. 415T. W. Kim and H. K. Pak, A subfoliation of a CR-foliation on a local conformal almost Kähler manifold, J. Korean Math. Soc. 41 (2004), no. 5, 865-874. Canonical foliations of certain classes of almost contact metric structures. T W Kim, H K Pak, Acta Math. Sin., Engl. Ser. 214T. W. Kim and H. K. Pak, Canonical foliations of certain classes of almost contact metric structures, Acta Math. Sin., Engl. Ser. 21 (2005), no. 4, 841-846. Foliated manifolds with bundle-like metrics. B L Reinhart, Ann. Math. 692B. L. Reinhart, Foliated manifolds with bundle-like metrics, Ann. Math. 69 (1959), no. 2, 119-132. Foliations, submanifolds, and mixed curvature. V Rovenskii, J. Math. Sci. 6V. Rovenskii, Foliations, submanifolds, and mixed curvature, J. Math. Sci. New York, 99 (2000), no. 6, 1699-1787. Quaternionic Kähler manifolds. S Salamon, Invent. Math. 67S. Salamon, Quaternionic Kähler manifolds, Invent. Math. 67 (1982), 143-171. P Tondeur, Geometry of foliations. Birkhäuser, Basel90P. Tondeur, Geometry of foliations, Monographs in Mathematics. 90, Birkhäuser, Basel, 1997.
arxiv
Strain Tunable Band-gaps Of Two-dimensional Hexagonal BN And AlN: An FP-(L)APW+lo Study

Harihar Behera, Gautam Mukhopadhyay
Indian Institute of Technology Bombay, Powai, Mumbai-400076, India
Email: [email protected] (corresponding author)

Keywords: 2D h-AlN and h-BN, electronic structure, band-gap engineering, strain-tunable band-gap, NEMS
PACS: 63.22.Np, 73.61.Ey, 73.63.Bd, 73.22.-f

Abstract. Using full potential density functional calculations within the local density approximation (LDA), we found strain tunable band gaps of two-dimensional (2D) hexagonal BN (h-BN) and AlN (h-AlN) under in-plane homogeneous biaxial strain. The direct band gap of 2D h-BN turns indirect for compressive strains below 1.53% and remains direct under tensile strains up to 10%. However, the band gap of 2D h-AlN remains indirect for strains up to ±10%. While our result on 2D h-BN corroborates the reported strain effect on 2D h-BN (based on the pseudo-potential method), our result on the strain tunable band gap of 2D h-AlN is new. These results may find application in the fabrication of future nano-electromechanical systems (NEMS) based on 2D h-BN and h-AlN.

INTRODUCTION

Nano-materials that can change energy gaps under mechanical strain are desirable for the fabrication of nano-electromechanical systems (NEMS). With the recent synthesis [1-3] of 2D h-BN (so-called "white graphene"), comprising alternating B and N atoms in a honeycomb lattice, the study of this material assumes much significance at a time when it is emerging as a promising substrate/gate dielectric for high-quality graphene electronics [4,5]. Because of its wide direct band gap [6] (E_g = 5.971 eV), h-BN is seen as a promising material for ultraviolet laser devices [6,7].
On the other hand, there are reports of the synthesis of hexagonal AlN nanobelts [8] and serrated nanoribbons [9]. Theoretical studies [10,11] predict stable graphene-like 2D hexagonal structures of AlN, which has recently been considered [11] as an adequate template and/or gate insulator for silicene (the silicon analogue of graphene) [10-12]. Using a first principles pseudopotential method, J. Li et al. [13] reported strain induced (remarkable) modifications of the band gap of 2D h-BN, such as the transition from a direct to an indirect band gap. However, the effect of strain on the band structure of 2D h-AlN has not been reported yet. Here, we report our simulated study of the effect of biaxial strain (which mimics the experimental situation when the material in question is supported on a stretchable substrate) on the band gaps of 2D h-BN and h-AlN, using the density functional theory (DFT) based full potential (linearized) augmented plane wave plus local orbital (FP-(L)APW+lo) method [14].

CALCULATION METHODS

The calculations have been performed using the Perdew-Zunger variant of the LDA [15] and the FP-(L)APW+lo method [14] as implemented in the elk code [16]. A plane wave cutoff of |G+k|_max = 8.5/R_mt (a.u.^-1), where R_mt is the smallest muffin-tin radius in the unit cell, was used for the plane wave expansion of the wave function in the interstitial region. A k-point mesh of ( ) was used for all calculations. The total energy convergence was 2.0 µeV/atom between two successive steps. The 2D h-BN and h-AlN structures were simulated using three-dimensional hexagonal super-cells with a large value of the "c" parameter (c = 40 a.u.). The application of in-plane biaxial strain up to ±10% was simulated by varying the value of the in-plane lattice parameter "a" (= |a| = |b|).
RESULTS AND DISCUSSIONS

The calculated ground state in-plane lattice constants of unstrained 2D h-BN and h-AlN were obtained as a_0(h-BN) = 2.488 Å and a_0(h-AlN) = 3.09 Å, respectively, in excellent agreement with the reported theoretical results of a_0(h-BN) = 2.488 Å in [13] and a_0(h-AlN) = 3.09 Å in [10]. As seen in Figure 1(a), 2D h-BN, with both its valence-band maximum (VBM) and conduction band minimum (CBM) located at the K point of the Brillouin zone (BZ), revealed a direct band gap of E_g(h-BN) = 4.606 eV, in agreement with previous calculations of 4.61 eV in [10] and 4.613 eV in [13]. Our estimated band gap energy of 2D h-BN is about 23% less than the experimental [6] direct band gap energy of 5.971 eV. This is due to the well known band gap underestimation problem of the LDA. As seen in Figure 1(b), the band gap of 2D h-AlN is indirect, with the VBM located at K and the CBM at Γ, and has a value E_g(h-AlN) = 3.037 eV, in agreement with the reported [10] calculated value of 3.08 eV.

The variations of our calculated values of E_g with in-plane homogeneous biaxial strain ε = (a − a_0)/a_0 for 2D h-BN and h-AlN are depicted in Figure 2 and Figure 3, respectively. For strain values in the range from -1.53% up to +10%, E_g(h-BN) remains direct; for strains below -1.53% down to -10%, E_g(h-BN) remains indirect, with the VBM located at K and the CBM at Γ. These results corroborate the reported results [13] and give us confidence in our calculations, especially when applied to 2D h-AlN, for which we find a strong nonlinear variation of the band gap with strain, as shown in Figure 3; the gap remains indirect within the considered strain range of ±10%, which is our new result.

Figure 1. Energy bands of 2D h-BN (a) and h-AlN (b).
Figure 2. Variation of E_g of 2D h-BN with in-plane homogeneous biaxial strain ε = (a − a_0)/a_0.
Figure 3. Variation of E_g of 2D h-AlN with in-plane homogeneous biaxial strain ε = (a − a_0)/a_0.
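The strain convention used above is easy to apply in practice. The following minimal sketch (the numerical lattice parameters other than a_0(h-BN) are illustrative inputs, not values from the paper) computes ε = (a − a_0)/a_0 for a strained lattice constant:

```python
# Biaxial strain as defined in the text: eps = (a - a0)/a0 (dimensionless).
def biaxial_strain(a, a0):
    """In-plane homogeneous biaxial strain for lattice parameter a,
    relative to the unstrained lattice parameter a0."""
    return (a - a0) / a0

a0_hBN = 2.488  # unstrained in-plane lattice constant of 2D h-BN (angstrom)

# Illustrative strained lattice parameters: +10% tensile, -10% compressive.
print(round(biaxial_strain(2.7368, a0_hBN), 6))  # -> 0.1
print(round(biaxial_strain(2.2392, a0_hBN), 6))  # -> -0.1
```

A positive ε corresponds to tensile strain and a negative ε to compressive strain, matching the ±10% range scanned in the paper.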
CONCLUSIONS

In this DFT based FP-(L)APW+lo study of the effects of biaxial strain on 2D h-BN and h-AlN, we found strain tunable energy band gaps of these nanostructures. While our result on h-BN corroborates the reported strain effects on h-BN, our result on the strain tunable band gap of 2D h-AlN is new. These results may find applications in the fabrication of NEMS based on h-BN and h-AlN.

REFERENCES

[1] K. S. Novoselov et al., Proc. Natl. Acad. Sci. U.S.A. 102, 10451 (2005).
[2] C. Jin et al., Phys. Rev. Lett. 102, 195505 (2009).
[3] A. Nag et al., ACS Nano 4, 1539 (2010).
[4] I. Meric et al., Tech. Digest Int. Electron Devices Meeting (IEDM), 2010, pp. 23.2.1-23.2.4.
[5] C. R. Dean et al., Nat. Nanotechnol. 5, 722 (2010).
[6] K. Watanabe et al., Nature Mater. 3, 404 (2004).
[7] Y. Kubota et al., Science 317, 932 (2007).
[8] Q. Wu et al., J. Phys. Chem. B 107, 9726 (2003).
[9] T. Xie et al., Inorg. Chem. Commun. 7, 545 (2004).
[10] H. Şahin et al., Phys. Rev. B 80, 155453 (2009).
[11] M. Houssa et al., Appl. Phys. Lett. 97, 112106 (2010).
[12] H. Behera and G. Mukhopadhyay, AIP Conf. Proc. 1313, 152 (2010).
[13] J. Li, G. Gui, and J. Zhong, J. Appl. Phys. 104, 094311 (2008).
[14] E. Sjöstedt et al., Solid State Commun. 114, 15 (2000).
[15] J. P. Perdew and A. Zunger, Phys. Rev. B 23, 5048 (1981).
[16] Elk is an open source code freely available at http://elk.sourceforge.net/
Optimizing sparse fermionic Hamiltonians

Yaroslav Herasymenko (QuSoft & CWI, Science Park 123, 1098 XG Amsterdam, The Netherlands; QuTech, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The Netherlands)
Maarten Stroeks (QuTech, Delft University of Technology; EEMCS Faculty, Delft University of Technology, Van Mourik Broekmanweg 6, 2628 XE Delft, The Netherlands)
Jonas Helsen (QuSoft & CWI, Science Park 123, 1098 XG Amsterdam, The Netherlands)
Barbara Terhal (QuTech, Delft University of Technology; EEMCS Faculty, Delft University of Technology)

Abstract. We consider the problem of approximating the ground state energy of a fermionic Hamiltonian using a Gaussian state. In sharp contrast to the dense case [1], we prove that strictly q-local sparse fermionic Hamiltonians have a constant Gaussian approximation ratio; the result holds for any connectivity and interaction strengths. Sparsity means that each fermion participates in a bounded number of interactions, and strictly q-local means that each term involves exactly q fermionic (Majorana) operators. We extend our proof to give a constant Gaussian approximation ratio for sparse fermionic Hamiltonians with both quartic and quadratic terms. With additional work, we also prove a constant Gaussian approximation ratio for the so-called sparse SYK model with strictly 4-local interactions (sparse SYK-4 model). In each setting we show that the Gaussian state can be efficiently determined. Finally, we prove that the O(n^{-1/2}) Gaussian approximation ratio for the normal (dense) SYK-4 model extends to SYK-q for even q > 4, with an approximation ratio of O(n^{1/2-q/4}). Our results identify non-sparseness as the prime reason that the SYK-4 model can fail to have a constant approximation ratio [1].

Sparse fermionic Hamiltonians

Key to our work is the notion of a sparse Hamiltonian.

Definition 4.
Let H be a local traceless fermionic Hamiltonian on 2n Majorana operators. We say that H is k-sparse, for an integer k, if no Majorana operator c_i occurs in more than k terms of the Hamiltonian, i.e. all Majorana operators have degree at most k.

This condition allows us to efficiently find Gaussian states with a constant approximation ratio. We have the following theorem, which is the main result of our work (Theorem 5 below).

Introduction

Approximating the ground state energy of a local Hamiltonian is a central problem in both physics and computer science. In computer science it plays a key role in complexity theory [2], while in physics ground states capture the behaviour of systems at low energy. Two common families of Hamiltonians of interest are those defined on collections of qubits and those acting on fermionic degrees of freedom. Fermionic Hamiltonians model various physical systems, such as electrons in condensed matter and quantum chemistry, prime targets for quantum simulation. Fermions also define a model of quantum computation, equivalent to the one based on qubits [3]. Despite its practical and conceptual relevance, the general problem of approximating fermionic ground state energies is currently less well understood than its qubit counterpart.

Some rigorous progress in studying this problem, both for qubits and for fermions, was made from the perspective of optimization. In this subfield of computer science, one of the central tasks is efficiently finding problem solutions that are provably close to optimal [4]. The closeness is usually quantified by an approximation ratio, i.e. the ratio between the value attained by an algorithm and the optimal value for a given problem. For the classical equivalent of ground state energy minimization, Constraint Satisfaction Problems (CSPs), such approximation ratios have been extensively studied [5].
For quantum Hamiltonians, an interesting question is how well the ground state energy can be approximated using "classical" or "mean-field" states. For qubit Hamiltonians the natural choice of classical states are product states, while for fermionic Hamiltonians they are Gaussian states. Gaussian states play a prominent role in fermionic optimization problems using the mean-field Hartree-Fock method, see e.g. [6], in dynamical mean-field theory via solving impurity problems [7], and in the simulation of free fermionic computation [8,9]. Formal guarantees on approximation ratios characterize numerical simulation methods using classical states and outline their limitations compared to quantum computing.

For qubit Hamiltonians, it was first proved by Lieb [10] (see [11] for a simplified proof) that there always exists a product state which approximates the ground state energy of a traceless 2-local qubit Hamiltonian by a factor of 1/9. Many more results on approximating ground state energies of many-body systems by product states can be found in [12-16]. In [11] it was shown, through the Goemans-Williamson method, that for a 2-local qubit Hamiltonian a product state can always be efficiently found with approximation ratio O(1/log(n)), where n is the number of qubits. Ref. [11] also considered fermionic Hamiltonians with quadratic (q = 2) and quartic (q = 4) fermionic terms. They left as an open question whether all 4-local fermionic Hamiltonians have a constant approximation ratio with respect to Gaussian states (a Gaussian approximation ratio). A surprising counterexample to this conjecture was recently presented by Hastings & O'Donnell [1]. They showed that the family of SYK-4 models (Sachdev-Ye-Kitaev models with quartic fermionic interactions, see Definition 2) has, with high probability, a Gaussian approximation ratio no better than O(1/√n), where n is the number of fermionic modes. Contrasting this result to Refs.
[10,13], it means that qubit and fermionic ground states strongly differ in their approximability by classical states. Moreover, this opens up the question of which fermionic Hamiltonians do have constant Gaussian approximation ratios. This is the question that we aim to answer here.

We do this by considering sparse Hamiltonians, i.e. Hamiltonians where each fermionic mode participates in a bounded number of interactions. Sparsity holds for many physically relevant Hamiltonians, such as the Fermi-Hubbard model. It also holds for exotic Hamiltonians, such as those determined by constant-degree expander hypergraphs; notably, it does not hold for the SYK model. Sparsity of interactions has been considered in the classical CSP literature. It was shown in [17] that the MaxQP problem has an efficient constant approximation ratio algorithm on graphs of bounded chromatic number, in particular graphs with bounded degree. We show that a similar assumption of sparsity is enough to guarantee constant Gaussian approximation ratios for 4-local and strictly q-local Hamiltonians. Moreover, we show that a constant Gaussian approximation ratio can be achieved for the sparse SYK-4 model [18] (which has a logarithmically growing interaction participation and is thus not sparse by our definition). Finally, we consider in more detail the optimal approximation ratio for the dense SYK-q model for q > 4 (thus extending the work of [1]). We show that the shortfall of Gaussian states is even more pronounced in this setting.

To avoid confusion, we note that instead of the ground state energy, existing works often consider approximating the maximal eigenvalue of the Hamiltonian, λ_max(H). These two optimization problems are equivalent if the family of Hamiltonians considered is invariant under a change of sign (e.g. traceless q-local Hamiltonians).
For mathematical convenience and consistency with the literature, in the rest of the text we will also formulate our results in terms of approximating λ_max(H).

Statement of results

Preliminaries

Before surveying our results, we introduce the basic setup of fermionic Hamiltonians and q-locality. This subsection also defines the SYK-q model and spells out the previous result of a vanishing Gaussian approximation ratio for SYK-4.

We consider a system of 2n traceless Majorana fermion operators c_i, i = 1, ..., 2n, with c_i^2 = I, c_i^† = c_i, forming a Clifford algebra, i.e., {c_j, c_k} = 2δ_{j,k} I, and representing n fermionic modes. We denote by I an ordered subset I = {i_1, i_2, ..., i_q} ⊆ [2n] ≡ {1, ..., 2n}, where i_1 < i_2 < ... < i_q with q even. We denote by C_I the Hermitian Majorana monomial

  C_I ≡ i^{q/2} c_{i_1} ... c_{i_q},   (1)

and one can verify that C_I^2 = I. We can think of a subset I as corresponding to a term or interaction in a Hamiltonian. Indeed, it is natural to impose some form of locality:

Figure 1: Illustrating the key idea of the proof of Theorem 5. An example of a strictly 4-local Hamiltonian is given in (a), vertices and faces representing Majorana operators and their interactions. The Hamiltonian is split into sets of terms (different colors in (b)) well separated from each other inside each set (so-called diffuse sets, see Definition 15). This allows us to match all Majorana operators (panel (c)) by perfectly matching terms in one targeted set (color highlighted in (b) and (c)) while avoiding the matching of terms from all other sets. The Gaussian state is then created from this matching, with only terms from the targeted set contributing to the energy. By optimizing the choice of the targeted set, a finite approximation ratio can be guaranteed.

Definition 1 (q-local fermionic Hamiltonian). Let H be a fermionic Hamiltonian on 2n Majorana operators.
We say that H is q-local if H is a sum of Hermitian traceless terms C_I of weight at most q, i.e. each term is proportional to a product of at most q operators c_i. H is said to be strictly q-local when all terms have exactly weight q.

A local traceless fermionic Hamiltonian H = Σ_{I∈I} J_I C_I is thus characterized by an interaction set I and the coefficients J_I ∈ R. The maximum eigenvalue of H is denoted by λ_max(H) := max_ρ Tr(Hρ). Sometimes we will refer to a collection of sets I denoted as I = {I_1, I_2, ...}. The support of I is defined as Sup(I) = ∪_i I_i, and I' ⊆ I implies that the sets in I' are also sets in I.

Definition 2 (SYK-q Model). A degree-q (with q even) SYK model on 2n Majoranas is defined as a family of Hamiltonians

  H = C(2n, q)^{-1/2} Σ_{I⊆[2n], |I|=q} J_I C_I,   (2)

where C(2n, q) denotes the binomial coefficient, each J_I is a Gaussian random variable (with zero mean and unit variance), and each C_I is the product of the q distinct Majorana operators as in Eq. (1). We normalize the model in expectation, i.e. E Tr(H^2) = C(2n, q)^{-1} Σ_{I⊆[2n], |I|=q} E(J_I^2) = 1.

In [19] it was shown that, with high probability (over the draw of the J_I's), the expectation value of every Gaussian state is bounded by a constant for the SYK-4 model (compare Lemma 9 below at q = 4). In order to thus provide a counterexample to a constant Gaussian approximation ratio, one needs to prove a lower bound on λ_max(H) for the SYK-4 model which holds with high probability, which was done in [1]:

Theorem 3. [1] There is a poly(n)-time quantum algorithm that, given any SYK-4 Hamiltonian H, returns a quantum state ρ. With probability 1 − exp(−Ω(n)), this state ρ has Tr(Hρ) = Ω(√n).

Theorem 5. Let H be a traceless fermionic Hamiltonian on 2n Majorana operators with maximal eigenvalue λ_max(H). If H is k-sparse and strictly q-local and 2n > k(q^2 − 1), a Gaussian state ρ can be efficiently constructed such that

  Tr(Hρ)/λ_max(H) ≥ 1/Q,   (3)

for Q = q(q − 1)(k − 1)^2 + q(k − 1) + 2. The proof of this theorem is given in Section 5; its basic idea is explained in Figure 1.
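The k-sparsity hypothesis of Theorem 5 is a purely combinatorial condition on the interaction set and is easy to verify directly. The sketch below (a hypothetical helper, not part of the paper) counts the degree of each Majorana index over the given terms:

```python
from collections import Counter

def is_k_sparse(interactions, k):
    """interactions: iterable of index tuples I, one per Hamiltonian term C_I.
    Returns True iff every Majorana index occurs in at most k terms,
    i.e. the Hamiltonian is k-sparse in the sense of Definition 4."""
    degree = Counter(i for I in interactions for i in set(I))
    return all(d <= k for d in degree.values())

# Example: four strictly 4-local terms on 8 Majorana operators (indices 0..7).
terms = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 4, 5), (2, 3, 6, 7)]
print(is_k_sparse(terms, 2))  # every index appears in exactly 2 terms -> True
print(is_k_sparse(terms, 1))  # -> False
```

For this example, k = 2 and q = 4, so Theorem 5 applies once 2n > k(q^2 − 1) = 30, i.e. for large enough systems with this interaction pattern.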
We note that this proof only holds for Hamiltonians with terms of exactly weight q. Typical physical Hamiltonians, however, have quadratic terms (kinetic energy of the electrons) as well as quartic terms (potential energy due to the Coulomb interaction). Fortunately, we can also show that in the q = 4 case we can include q = 2 terms. For this we use a trick from [11] to lift such a 4-local Hamiltonian to a strictly 4-local Hamiltonian. This trick makes the Hamiltonian non-sparse. However, we show in Section 6 that, in this special case, we can circumvent the non-sparseness of the Hamiltonian and achieve a constant Gaussian approximation ratio (Theorem 6).

Recall the SYK Hamiltonian

  H = C(2n, q)^{-1/2} Σ_{I⊆[2n], |I|=q} J_I C_I,   (5)

where each J_I is a Gaussian random variable (i.e., with zero mean and unit variance). The SYK model is extremely non-sparse, in the sense that every Majorana operator occurs together with all other Majorana operators. This makes the SYK model somewhat unphysical, and several sparse versions of the model have been considered [18,20]. Such sparse models intend to produce the same (low energy) physics, while being easier to simulate on both quantum and classical computers (see Sections III and V in [18]). The sparse SYK model is generated by including terms via a Bernoulli trial with a certain probability p, tuned such that the expected interaction degree is bounded:

Definition 7. The sparse SYK-4 model (SSYK-4) keeps each weight-4 interaction I independently with a probability p chosen such that the expected degree of every Majorana operator is bounded by a constant k.

For the SSYK-4 model, a Gaussian state ρ can be efficiently constructed such that

  Tr(Hρ)/λ_max(H) ≥ 1/Q,   (7)

where Q = 1236 + 2752k + 1536k^2. Thus we arrive at the surprising conclusion that the SSYK-4 model has a constant Gaussian approximation ratio, while the dense SYK-4 does not, even though SSYK-4 has physical properties similar to those of SYK-4.

Higher-q SYK models

We investigate what Gaussian approximation ratios can be achieved for the dense SYK model of even weight q > 4, as this was left as an open question in [1]. We establish an upper bound on the largest Gaussian expectation value of SYK-q, which behaves rather dramatically for q > 4. We prove the following Lemma, employing a method similar to the one used in [19].

Lemma 9.
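The Bernoulli construction of the sparse SYK-4 model can be sketched as follows. The tuning p = k / C(2n−1, 3) is an illustrative assumption (each Majorana index lies in C(2n−1, 3) candidate weight-4 terms, so this choice makes the expected degree equal to k); the paper's exact prescription may differ:

```python
import itertools
import math
import random

def sample_sparse_syk4(n_modes, k, rng=random):
    """Sample a sparse SYK-4 instance on 2*n_modes Majorana operators.
    Each 4-subset I of the 2n indices is kept with probability p, tuned so
    that the expected degree of every Majorana operator equals k
    (illustrative tuning, not necessarily the paper's exact choice).
    Returns a list of (I, J_I) pairs with J_I a standard Gaussian coupling."""
    m = 2 * n_modes
    # Each fixed index belongs to C(m-1, 3) distinct weight-4 subsets.
    p = min(1.0, k / math.comb(m - 1, 3))
    terms = []
    for I in itertools.combinations(range(m), 4):
        if rng.random() < p:
            terms.append((I, rng.gauss(0.0, 1.0)))
    return terms
```

With m = 2n Majoranas there are C(m, 4) candidate terms, so the expected number of kept terms is p · C(m, 4) = k·m/4, i.e. linear in the system size, in contrast to the dense model's C(m, 4) terms.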
Let H be the dense SYK-q Hamiltonian (with even q ≥ 4 and q = O(1)). With probability at least 1 − exp(−Ω(n)) over the draw of SYK-q Hamiltonians, the expectation value of every Gaussian state ρ is bounded; more precisely:

  max_{ρ Gaussian} Tr(Hρ) = O(n^{1−q/4}).   (8)

This Lemma is proved in Section 8. Our second result establishes a lower bound on the largest eigenvalue for SYK-q, essentially generalizing what was established in [1] for q = 4. We prove the following Lemma (its proof can be found in Section 8):

Lemma 10. Let H be the dense SYK-q Hamiltonian with even q ≥ 4 (and q = O(1)). With probability at least 1 − exp(−Ω(n)) over the draw of SYK-q Hamiltonians, λ_max(H) = Ω(√n).

As an immediate consequence of the previous results, we see that the Gaussian approximation ratio of the dense SYK-q model can be no better than O(n^{1/2−q/4}):

Theorem 11. Let H be the dense SYK-q Hamiltonian (with even q ≥ 4 and q = O(1)). With probability at least 1 − exp(−Ω(n)) over the draw of SYK-q Hamiltonians, we have

  max_{ρ Gaussian} Tr(Hρ)/λ_max(H) = O(n^{1/2−q/4}).   (9)

Proof. Theorem 11 follows from combining Lemma 9 and Lemma 10 and applying the union bound.

Discussion

The goal of this section is to place our results in broader context and mention a few open questions. First, let us discuss the relation between this work and fermion-to-qubit mapping methods. As was shown in [3], one can map a sparse O(1)-local fermionic Hamiltonian onto a sparse O(1)-local qubit Hamiltonian (BK-superfast encoding). However, note that one cannot obtain our Theorem 5 from such a mapping, since an approximating product state for the qubit Hamiltonian does not necessarily map back to a Gaussian fermionic state, while Gaussian fermionic states map to stabilizer (non-separable) qubit states under the BK-superfast encoding. Ref. [3] also showed that one can map a general local fermionic Hamiltonian (like a SYK model) onto a qubit Hamiltonian with terms which are O(log n)-local.
Such a qubit Hamiltonian is generally not expected to have a constant approximation ratio by a product state, due to its n-dependent locality. In fact, one can easily prove that a dense model like the SYK model can only be mapped onto a qubit Hamiltonian which is Ω(log n)-local. We give the argument in Appendix A. These observations suggest that approximation ratios by classical states, such as Gaussian states or product states, are likely to be affected by sparsity in the case of fermions, which is consistent with our new results.

Another question raised by our work and that of [1] and [14] is whether studying fermionic Hamiltonians can lead to new insights into the possibility of a quantum PCP theorem [22]. In this context it is important to mention that, besides the lower bound in Theorem 3, Ref. [1] also determined an upper bound on λ_max of the SYK-4 model, showing that with high probability λ_max = Θ(√n). This shows that the SYK-4 model is extremely frustrated: the maximal average expected energy per term, the energy density, is only Θ(n^{−3/2}). In contrast, our results for the sparse SYK model (see Lemma 23) show that the maximal average expected energy per term is Ω(1), which is the more 'natural' physical scaling. A simple fermionic toy model in which the maximal average energy per term decreases is a model in which an extensive set of Majorana operators is mutually anti-commuting; see Lemma 29 in Appendix A. The presence of many such fully-anticommuting sets in the SYK model can be seen as one of the intuitive reasons why the maximal energy density achieved is so low. For k-local qubit Hamiltonians, researchers have looked at the hardness of approximating the maximal energy density with constant error ε: showing that this problem is QMA-complete would prove the quantum PCP theorem.
For dense (non-sparse) k-local qubit Hamiltonians, it was proved in [14] (Theorem 13) that there is a polynomial-time classical algorithm to approximate the maximal energy density, using product state approximations. Ref. [23] generalized this result and formulated an efficient classical algorithm which approximately estimates the free energy of a 2-local qubit Hamiltonian. One can similarly ask the question of approximating the maximal energy density for dense q-local fermionic Hamiltonians. Observe that the question is moot if the maximal energy density decreases as a function of n (as in the SYK model), since for large enough n (depending on ε) the classical algorithm could always output 0 and make an error less than ε. However, other dense O(1)-local fermionic Hamiltonians could exist for which this question is nontrivial and not already covered by the dense qubit case.

There are further open directions that are more practically oriented. One of these is achieving finite approximation ratios for at least some classes of non-sparse fermionic Hamiltonians (e.g., quantum chemistry or lattice systems with long-range Coulomb interactions). Furthermore, in most applications one is interested in obtaining approximation ratios as close to 1 as possible. To achieve these goals, the techniques outlined in this work can be improved further and complemented with other ideas. A few possible approaches are as follows.

One option is to extend the interaction subsets targeted by the constructed Gaussian state beyond the diffuse subsets considered here. If the overlapping interactions in the problem Hamiltonian are not prone to frustration, including them in the targeted set may dramatically increase the approximation ratio. The proof of Theorem 6 (Section 6) is a special case of this approach, with the constructed Gaussian state targeting multiple overlapping terms at the same time.
Another option for improvement is to minimize the contribution from frustration terms instead of avoiding frustration altogether. This could both improve the eventual approximation ratio by targeting a larger pool of interactions, and allow one to mitigate the issue of non-sparsity. An example of this approach is the proof of Theorem 8 (Section 7), where the contributions from the non-sparse part of the Hamiltonian are shown to be small compared to the energy achieved by the Gaussian state.

As a third option, one can modify the basis of fermionic modes so that non-sparsity and frustration in the Hamiltonian are minimized. In the simplest case of q = 2, such a basis rotation can always turn all interactions into a diffuse set (simply by diagonalizing the Hamiltonian). A similar improvement may be possible for some classes of q-local Hamiltonians with q ≥ 4. Developing these and other ideas for efficient Gaussian ground state approximation is an interesting direction for future research.

Finally, it would be interesting to provide a non-random family of fermionic Hamiltonians without a constant approximation ratio with respect to Gaussian states.
The matrix β can be block-diagonalized by a real orthogonal matrix R ∈ SO(2n) such that

  β = R [⊕_{j=1}^n ( 0  b_j ; −b_j  0 )] R^T,   (12)

with b_j ≥ 0. Therefore ρ can be brought to the following standard form

  ρ = (1/2^n) Π_{j=1}^n (I + i λ_j c̃_{2j−1} c̃_{2j}),   (13)

where c̃_i = Σ_j R_ij c_j and λ_j = tanh(2b_j) ∈ [−1, +1].

2. Each fermionic Gaussian state can be associated with a 2n × 2n correlation matrix Γ, with

  Γ_ij = (i/2) Tr(ρ [c_i, c_j]).   (14)

Γ is a real anti-symmetric matrix and hence there is a real orthogonal matrix R ∈ SO(2n) such that

  Γ = R [⊕_{j=1}^n ( 0  λ_j ; −λ_j  0 )] R^T,   (15)

where the λ_j are as in Eq. (13).

3. For pure fermionic Gaussian states, λ_j ∈ {−1, +1} and hence for pure Gaussian states Γ^T Γ = I. For mixed fermionic Gaussian states, Γ^T Γ ≤ I.

The Pfaffian of an anti-symmetric 2k × 2k matrix A is defined as

  Pf(A) = (1/(2^k k!)) Σ_{π∈S_2k} sign(π) Π_{i=1}^k A_{π(2i−1),π(2i)}.

Alternatively, we can see the Pfaffian as a sum over perfect matchings in a graph of 2k vertices, where an edge (i < j) has weight A_ij and each matching contributes the product of these weights to the sum. For a Gaussian state with correlation matrix Γ, one has for even |I|:

  Tr(C_I ρ) = Pf(Γ_I),   (16)

where Γ_I is the |I| × |I| submatrix of Γ restricted to rows and columns in the ordered set I.

A special class of pure Gaussian states is given by a perfect matching M of Majorana operators. Such a matching M is specified by n disjoint pairs (m_1, m_2) with m_1 < m_2. For each pair we have a coefficient λ_(m1,m2) = ±1, together forming the n-dimensional vector λ. The states in this class are of the form

  ρ(M, λ) = (1/2^n) Π_{(m1,m2)∈M} (I + i λ_(m1,m2) c_{m1} c_{m2}).   (17)

These states satisfy the following:

1. If M is consistent with interaction I, let the perfect matching on the subset I be given by pairs (i_{π(2l−1)}, i_{π(2l)}) for l = 1, …, q/2 and a permutation π ∈ S_q where i_{π(2l−1)} < i_{π(2l)}. Then the following holds:

  Tr(C_I ρ(M, λ)) = sign(π) Π_{l∈{1,…,q/2}} λ_(i_{π(2l−1)}, i_{π(2l)}),   (18)

where sign(π) = ±1.

2. If M is inconsistent with I, then Tr(C_I ρ(M, λ)) = 0.

Proof.
In order for the trace to be nonzero, one needs to exactly match the Majorana operators in C_I with some in the expansion of ρ(M, λ), since Tr(C_{I′}) = 0 for any non-empty subset I′. If M is inconsistent, there is no term in the expansion of ρ which precisely matches C_I, so the expectation vanishes. If M is consistent, we have

  Tr(C_I ρ(M, λ)) = (1/2^n) Tr[ C_I Π_{(m1,m2)∈M} (I + i λ_(m1,m2) c_{m1} c_{m2}) ]
                  = (1/2^n) Tr[ C_I Π_{(m1,m2)∈M, (m1,m2)∩I≠∅} (i λ_(m1,m2) c_{m1} c_{m2}) ]
                  = sign(π) Π_{l∈{1,…,q/2}} λ_(i_{π(2l−1)}, i_{π(2l)}).   (19)

Here we have used that one can first reorder C_I such that the pairs of the perfect matching are adjacent, i.e. C_I = sign(π) i^{q/2} c_{i_{π(1)}} c_{i_{π(2)}} … c_{i_{π(q)}}; then one can commute each pair through to its matching pair in ρ and use (c_i c_j)^2 = −I, i^q = (−1)^{q/2} and Tr(I) = 2^n.

Approximation ratios for sparse fermionic Hamiltonians

In this section we prove Theorem 5. We begin by setting up needed definitions and stating several technical Lemmas (which are proved in the Appendices). The key auxiliary notion in the proof of Theorem 5 is that of a diffuse subset of Hamiltonian terms. Intuitively, the terms in a diffuse subset are well separated from each other while covering only a limited part of the system. This idea is formalized as follows:

Definition 15 (Diffuse subset). Let I be the interaction set of a q-local, k-sparse Hamiltonian on the set of Majorana fermions [2n]. A subset I′ ⊆ I is called diffuse with respect to I if: 1. no two interactions in I′ share a Majorana operator; 2. no two interactions in I′ both share a Majorana operator with a common third interaction in I; 3. |Sup(I′)| ≤ 2nq/(q+1).

Lemma 16. Let I be the interaction set of a k-sparse, q-local Hamiltonian on the set of Majorana fermions [2n]. The set I can be split into Q disjoint subsets I_α, each of which is diffuse with respect to I:

  I = ∪_{α=1}^{Q} I_α.   (20)

The parameter Q is given as Q = q(q−1)(k−1)^2 + q(k−1) + 2 and does not depend on n. The construction of this splitting can be done efficiently, in time poly(n).

Lemma 16 is a special case of Lemma 19, which is proven in Appendix B. The proof relies on a combinatorial argument on a graph that takes Hamiltonian terms as vertices and connects them with an edge if the pair violates conditions 1 or 2 of Definition 15. By the sparsity assumption, this graph has an efficiently constructable coloring with a bounded number of colors, from which the split I = ∪_{α=1}^{Q} I_α can be constructed.
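The coloring argument behind Lemma 16 can be sketched in code. The snippet below is an illustrative sketch (the interaction set and all names are made up, not from the paper): it builds the conflict graph on Hamiltonian terms, where two terms are adjacent if they share a Majorana mode or both share a mode with a third term, greedily colors it, and checks that each color class is conflict-free.

```python
from itertools import combinations

# Hypothetical 4-local interaction set on Majorana modes (illustrative only).
terms = [(0, 1, 2, 3), (2, 4, 5, 6), (7, 8, 9, 10), (5, 9, 11, 12), (13, 14, 15, 16)]

def conflict(i, j, terms):
    """True iff terms i, j violate condition 1 or 2 of the diffuse-set definition."""
    a, b = set(terms[i]), set(terms[j])
    if a & b:                         # condition 1: share a Majorana mode
        return True
    for k, t in enumerate(terms):     # condition 2: both touch a common third term
        if k not in (i, j) and a & set(t) and b & set(t):
            return True
    return False

# Greedy coloring: each vertex gets the smallest color unused by its neighbors.
color = {}
for i in range(len(terms)):
    used = {color[j] for j in range(i) if conflict(i, j, terms)}
    color[i] = min(c for c in range(len(terms)) if c not in used)

classes = {}
for i, c in color.items():
    classes.setdefault(c, []).append(i)

# Within each color class, no two terms conflict:
for cls in classes.values():
    for i, j in combinations(cls, 2):
        assert not conflict(i, j, terms)
```

For a k-sparse, q-local set the conflict graph has bounded degree, so the greedy pass uses a bounded number of colors, mirroring the constant Q of Lemma 16.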
The usefulness of diffuse sets comes from Lemma 20, whose proof is given in Appendix C. Here we state its corollary, relevant to proving Theorem 5:

Lemma 17. Let I′ ⊆ I be diffuse with respect to a q-local, k-sparse interaction set I on [2n], with 2n > (q^2 − 1)k. Then one can efficiently construct a matching M(I′) of [2n] that is consistent with I′ and inconsistent with every interaction in I\I′.

With the matchings introduced above, one can construct useful Gaussian states. The tool to do so is given by the following statement:

Lemma 18. Given a matching M(I′) as in Lemma 17, a Gaussian state ρ_{I′} can be efficiently constructed such that

  Tr(H ρ_{I′}) = Σ_{I∈I′} |J_I|.   (21)

Lemma 18 is a specific case of a slightly more general Lemma 21, which is stated and proven in Appendix D. We denote

  J(I′) ≡ Σ_{I∈I′} |J_I|.   (22)

As shown below, Theorem 5 can be proven by constructing a diffuse I′ ⊆ I and a corresponding Gaussian state ρ_{I′} with large enough Tr(H ρ_{I′}) = J(I′).

Theorem (Repetition of Theorem 5). Let H be a traceless, k-sparse, q-local fermionic Hamiltonian on [2n] with maximal eigenvalue λ_max(H), such that 2n > (q^2 − 1)k. A Gaussian state ρ can be efficiently constructed such that

  Tr(Hρ)/λ_max(H) ≥ 1/Q,   (23)

for Q = q(q−1)(k−1)^2 + q(k−1) + 2.

Proof. For a Hamiltonian H = Σ_{I∈I} J_I C_I, we construct the splitting of I into diffuse subsets I = ∪_α I_α as guaranteed by Lemma 16. Next, find α* = argmax_α J(I_α); since Q in Lemma 16 is constant, α* can be found efficiently. Next, use Lemma 17 to construct a matching M(I_{α*}) (the condition 2n > (q^2 − 1)k is satisfied by the assumptions of Theorem 5). Since I_{α*} is diffuse with respect to I, the Gaussian state ρ_{I_{α*}} can be efficiently constructed from M(I_{α*}) via Lemma 18. Using Tr(H ρ_{I_{α*}}) = J(I_{α*}), the following inequality can be obtained for the resulting approximation ratio:

  Tr(H ρ_{I_{α*}})/λ_max(H) ≥ J(I_{α*}) / Σ_α J(I_α) ≥ 1/Q.   (24)

For the first inequality, note that λ_max(H) ≤ Σ_{I∈I} |J_I| = Σ_α J(I_α). The second inequality comes from a pigeonhole-type argument: since J(I_{α*}) = max_α J(I_α), it directly follows that J(I_{α*}) ≥ (1/Q) Σ_α J(I_α). Inequality (24) concludes the proof, as it asserts the approximation ratio bound claimed in the Theorem.

Sparse Hamiltonians with terms of weight 2 and 4

In this section we prove Theorem 6. We will again need the concept of diffuse subsets from Definition 15. The proof of Theorem 6 is similar in its basic idea to that of Theorem 5. The main obstacle in this case is the presence of terms of different weight, which does not allow one to use Lemmas 16-18 directly.
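The pigeonhole step in the proof above can be checked with a short numerical sketch (coupling strengths below are made up): the class with the largest total weight J(I_α) always carries at least a 1/Q fraction of Σ_α J(I_α), which is what turns the splitting into an approximation-ratio bound.

```python
import random

# Toy check of the pigeonhole argument in the proof of Theorem 5.
random.seed(1)
Q = 5
# |J_I| values grouped into Q diffuse classes (illustrative random data):
weights = [[abs(random.gauss(0, 1)) for _ in range(random.randint(1, 4))]
           for _ in range(Q)]
J = [sum(w) for w in weights]     # J(I_alpha) for each class
best = max(J)                     # the class picked by argmax
# max over Q numbers is at least their average:
assert best >= sum(J) / Q
```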
This can be resolved by a slightly more elaborate construction, applying the more general Lemmas 19-21, which are proved in the Appendices and of which Lemmas 16-18 are special cases.

Lemma 19 (Generalization of Lemma 16). Let I be the interaction set of a k-sparse, q-local Hamiltonian on the set of Majorana fermions [2n]. The set I can be split into (qQ)/2 disjoint, strictly 2q′-local subsets I^(2q′)_α (with α ∈ [Q] and q′ ∈ [q/2]), each of which is diffuse with respect to I:

  I = ∪_{q′=1}^{q/2} ∪_{α=1}^{Q} I^(2q′)_α.   (25)

The parameter Q = q(q−1)(k−1)^2 + q(k−1) + 2 does not grow with n. The construction of this splitting can be done efficiently, in time poly(n).

Lemma 20 (Generalization of Lemma 17). Let a strictly q′-local I′ be diffuse w.r.t. a q-local, k-sparse I on [2n], such that 2n > (q^2 − 1)k. One can efficiently construct a matching M of [2n] that is consistent with I′ and inconsistent with all interactions I ∈ I\I′ such that (1) |I| ≥ q′ or (2) I ⊄ Sup(I′).

Lemma 21 (Generalization of Lemma 18). Let H be a fermionic Hamiltonian on [2ñ] and let M be a matching of [2ñ] that is consistent with a subset I′ of its interaction set and inconsistent with all other interactions. Then a Gaussian state ρ_{I′} on [2ñ] can be efficiently constructed such that

  Tr(H ρ_{I′}) = Σ_{I∈I′} |J_I|.   (26)

In Lemma 21 we use ñ instead of n to avoid confusion, as the Lemma will also be used for ñ ≠ n. The Lemmas above are proven in Appendices B-D. With these in hand, we are ready to proceed with the proof of Theorem 6.

Theorem (Repetition of Theorem 6). Let H be a traceless fermionic Hamiltonian on [2n] with maximal eigenvalue λ_max(H). If H is k-sparse with terms of weight 2 and 4 and 2n > 15k, a Gaussian state ρ can be efficiently constructed such that

  Tr(Hρ)/λ_max(H) ≥ 1/(2Q),   (27)

with Q = 12(k−1)^2 + 4(k−1) + 2.

Proof. We make use of the construction in Ref. [11], which relates a Hamiltonian with weights 2 and 4 on a set of fermionic modes [2n], that is,

  H = Σ_{I∈I^(2)} J_I C_I + Σ_{I∈I^(4)} J_I C_I,   (28)

to a strictly 4-local Hamiltonian H̃ on an extended set of fermions [2n + 2]:

  H̃ = Σ_{I∈I^(2)} (−i c_{2n+1} c_{2n+2}) J_I C_I + Σ_{I∈I^(4)} J_I C_I.   (29)

Introducing Ĩ^(2) ≡ {(2n+1, 2n+2) ∪ I | I ∈ I^(2)}, H̃ can also be written as:

  H̃ = − Σ_{I∈Ĩ^(2)} J_I C_I + Σ_{I∈I^(4)} J_I C_I.   (30)

The relation between H̃ and H is captured by Lemma 22, which allows energies attained by Gaussian states for H̃ to be converted into energies attained for H. Although strictly 4-local, the Hamiltonian H̃ is no longer sparse, since the operators c_{2n+1} and c_{2n+2} participate in |I^(2)| terms (which is generally O(n)). This prevents a direct application of Lemma 16 to H̃. We resolve the issue as follows.
Similarly to the proof of Theorem 5, we start by splitting each of the original interaction sets I^(2), I^(4) in H into subsets diffuse w.r.t. I^(2) ∪ I^(4): I^(2) = ∪_α I^(2)_α, I^(4) = ∪_α I^(4)_α. Each of the two splittings exists and can be done efficiently, as guaranteed by Lemma 16 (since the original H is sparse). Since I^(2) ∪ I^(4) is k-sparse and 4-local, we can bound |{I^(2)_α}| < Q, |{I^(4)_α}| < Q for Q = 12(k−1)^2 + 4(k−1) + 2. In what follows, we will use the splittings I = ∪_α I^(2)_α ∪_α I^(4)_α to construct Gaussian states ρ̃(I^(2)_α) and ρ̃(I^(4)_α) on [2n+2] with good properties relative to H̃, that is,

  Tr(H̃ ρ̃(I^(2,4)_α)) = Σ_{I∈I^(2,4)_α} |J_I| ≡ J(I^(2,4)_α).   (31)

With these Gaussian states, we will then show that the Gaussian state ρ̃(I^(q′)_α) for (q′, α) = argmax_{q′,α} J(I^(q′)_α) is efficiently constructable and yields the desired approximation ratio for H̃. We will then apply Lemma 22 and extend the statement to the original Hamiltonian H, thus finishing the proof.

Following the outline above, we now move to construct the Gaussian state ρ̃(I^(2)_α). Consider an ansatz of the form ρ̃ ≡ ρ_[2n] ⊗ σ_{2n+1,2n+2}, where ρ_[2n] is itself a Gaussian state on [2n]. To construct ρ_[2n], note that each I^(2)_α is 2-local and diffuse w.r.t. I^(2) ∪ I^(4), which is 4-local. Since 2n > 15k by the assumptions of Theorem 6, we can apply Lemma 20 with q = 4 to construct a matching M(I^(2)_α) that is consistent with I^(2)_α. Since I^(2)_α is 2-local, Lemma 20 also implies that the matching M(I^(2)_α) is inconsistent with the entirety of I^(4) ∪ I^(2)\I^(2)_α. We then use M(I^(2)_α) in Lemma 21 (substituting ñ = n) to construct ρ_[2n] in ρ̃ = ρ_[2n] ⊗ σ_{2n+1,2n+2}. This implies the following expression (using Eq.
(29) for H̃):

  Tr(H̃ ρ̃) = Tr(σ(−i c_{2n+1} c_{2n+2})) Σ_{I∈I^(2)_α} |J_I|.   (32)

By choosing σ to be the projector onto the +1 eigenstate of the operator −i c_{2n+1} c_{2n+2}, we arrive at the desired outcome:

  Tr(H̃ ρ̃) = Σ_{I∈I^(2)_α} |J_I| = J(I^(2)_α).   (33)

The constructed Gaussian state ρ̃ we will denote as ρ̃(I^(2)_α).

[Figure: (a) Matching M(I^(2)_α) for I^(2)_α, here comprised of a single term (shown in green). To ensure consistency with Ĩ^(2)_α in H̃, M(I^(2)_α) perfectly matches these terms and the pair (2n+1, 2n+2). The rest of the vertices are matched so that each pair does not belong to the same term in I\I^(2)_α (grey). (b) Matching M(I^(4)_α) for I^(4)_α, shown in green. Vertices i_1, i_2 are chosen not to belong to the same term in I^(2), ensuring no accidental consistency with a term in H̃.]

For a diffuse I^(4)_α, we again use Lemma 20 to construct a matching M(I^(4)_α) that is consistent with I^(4)_α and inconsistent with I^(4)\I^(4)_α. Note the special status of the terms from Ĩ^(2): such a matching may still be consistent with some terms in Ĩ^(2) (as those I do not obey the |I| ≥ q′ = 4 condition). At the same time, we aim to achieve Tr(H̃ ρ̃(I^(4)_α)) = J(I^(4)_α), which excludes contributions from Ĩ^(2). Thus we cannot extend M(I^(4)_α) to the extended set [2n+2] directly, as was done for I^(2)_α. Instead, we will create a matching of [2n+2] using a reduced version of M(I^(4)_α) which inherits its beneficial properties, and then complete the matching by making it inconsistent with Ĩ^(2), eliminating the difficulty described above.

To enable this, we find and mark an edge (i_1, i_2) ∈ M(I^(4)_α) such that i_1, i_2 ∉ Sup(I^(4)_α) and such that i_1 and i_2 do not belong to the same term in I^(2). This is always possible since I^(4)_α is diffuse with respect to I. This implies that, as a two-fermion interaction, {i_1, i_2} is guaranteed not to belong to I^(2); the latter statement is the key property of the marked edge (i_1, i_2) that we will employ momentarily. We construct a reduced matching M̄(I^(4)_α):

  M̄(I^(4)_α) = M(I^(4)_α) \ (i_1, i_2).   (34)

Since {i_1, i_2} ⊄ Sup(I^(4)_α), we are guaranteed that M̄(I^(4)_α) is consistent with I^(4)_α and inconsistent with I^(4)\I^(4)_α (from the construction of M(I^(4)_α)).
In the second stage, we complete M̄(I^(4)_α) to the entire set of 2n+2 modes by adding two edges, (i_1, 2n+1) and (i_2, 2n+2):

  M̃(I^(4)_α) = M̄(I^(4)_α) ∪ {(i_1, 2n+1), (i_2, 2n+2)}.   (35)

These new edges render M̃(I^(4)_α) inconsistent with Ĩ^(2). To see it, note that all interactions in Ĩ^(2) take the form I = {j_1, j_2, 2n+1, 2n+2} where {j_1, j_2} ∈ I^(2). By construction {i_1, i_2} ∉ I^(2), thus we have {j_1, j_2} ≠ {i_1, i_2}. As a result, the matching M̃(I^(4)_α) of [2n+2] is consistent with I^(4)_α and inconsistent with Ĩ^(2) ∪ I^(4)\I^(4)_α. We continue by applying Lemma 21 to this M̃(I^(4)_α) and H̃ (substituting ñ = n+1). This efficiently constructs a Gaussian state ρ̃(I^(4)_α) that yields:

  Tr(H̃ ρ̃(I^(4)_α)) = Σ_{I∈I^(4)_α} |J_I| ≡ J(I^(4)_α),   (36)

as desired.

The Gaussian state claimed in Theorem 6 is to be chosen among the states ρ̃(I^(2,4)_α) whose existence we have proven above. We make the choice by identifying the highest energy among the respective Gaussian states: (q*, α*) = argmax_{(q′,α)} J(I^(q′)_α). As we showed, the respective Gaussian state ρ̃(I^(q*)_{α*}) can be efficiently constructed, and the following is guaranteed:

  Tr(H̃ ρ̃(I^(q*)_{α*}))/λ_max(H̃) ≥ J(I^(q*)_{α*}) / Σ_{q′,α} J(I^(q′)_α) ≥ 1/(2Q).   (37)

Here we used that λ_max(H̃) ≤ Σ_{q′,α} J(I^(q′)_α) and that J(I^(q*)_{α*}) = max_{(q′,α)} J(I^(q′)_α). With the state ρ̃(I^(q*)_{α*}) at hand, Lemma 22 yields a Gaussian state ρ(I^(q*)_{α*}) for the original Hamiltonian H such that

  Tr(H ρ(I^(q*)_{α*}))/λ_max(H) ≥ Tr(H̃ ρ̃(I^(q*)_{α*}))/λ_max(H̃) ≥ 1/(2Q),   (38)

which concludes the proof.

7 Approximation ratios for sparsified SYK Hamiltonians

Theorem (Repetition of Theorem 8). Let H be a sparsified SYK-4 (SSYK-4) Hamiltonian as in Eq. (6), with sparsity parameter k and n > 60(k+1). With high probability over the random couplings (quantified in Eq. (45) below), a Gaussian state ρ can be efficiently constructed such that

  Tr(Hρ)/λ_max(H) ≥ 1/Q,   (39)

where Q = 1236 + 2752k + 1536k^2.

Proof. In what follows we will omit the normalization 1/√(2kn) in Eq. (6); this normalization is of course irrelevant for lower bounding the Gaussian approximation ratio. We split H as H = H^(k′) + h^(k′), such that the Hamiltonian H^(k′) is k′-sparse and the residual Hamiltonian h^(k′) contains the rest of H. The term sets are denoted as follows:

  H^(k′) = Σ_{I∈I^(k′)} J_I C_I,   h^(k′) = Σ_{I∈Ī^(k′)} J_I C_I,   (40)

i.e. I = I^(k′) ∪ Ī^(k′). To define such a split, we use the following deterministic algorithm.
For every given Majorana, we list the interactions I ∈ I which involve that Majorana, using a lexicographical order for the words I = {i_1, i_2, i_3, i_4}. For each Majorana where such a list is longer than k′, we mark all elements except for the first k′. All terms of H which were marked this way at least once we include into h^(k′). The rest of the terms enter H^(k′), which by this construction is k′-sparse.

To continue the proof we need a pair of Lemmas. The first lower bounds the total interaction strength of the SSYK-4 Hamiltonian:

Lemma 23. With probability at least 1 − 2e^{−kn/32}, we have

  Σ_{I∈I} |J_I| ≥ kn/8.   (41)

This statement is proven in Appendix E, by splitting the problem into upper bounding |I| separately from Σ_I |J_I|, and then applying the Chernoff bound for both.

The second Lemma shows that the total interaction strength of the residual Hamiltonian h^(k′) is bounded from above with high probability:

Lemma 24. If k′ ≥ e^2 k + 1, we have with probability at least 1 − 2 exp(−e^{−2k′} k^3 n / (64(k′−1))) that

  Σ_{I∈Ī^(k′)} |J_I| ≤ (4k′^2/√(k′−1)) e^{−k′} n.   (42)

Lemma 24 is proven in Appendix E. The key technical difficulty is bounding the random variable |Ī^(k′)|, which does not reduce to a sum of independent variables, so that a simple Chernoff bound cannot be applied. Instead, we apply an exponential version of the Efron-Stein inequality [24].

To build a Gaussian state with a finite approximation ratio, we apply the construction of Theorem 5 to H^(k′), which is k′-sparse and strictly 4-local. If n is large enough (i.e. 2n > k′(q^2 − 1) for q = 4), this state ρ is guaranteed to yield energy Tr(H^(k′) ρ) > (1/Q′) Σ_{I∈I^(k′)} |J_I| for Q′ = 12k′^2 − 20k′ + 10 (see Eq. (24) in the proof of Theorem 5). At the same time, with high probability |Tr(h^(k′) ρ)| ≤ Σ_{I∈Ī^(k′)} |J_I| ≤ (4k′^2/√(k′−1)) e^{−k′} n and Σ_{I∈I} |J_I| ≥ kn/8 (Lemmas 23 and 24).
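As a small arithmetic sanity check (a sketch, not from the paper; k′ = 8(k+1) is the choice made later in the proof): doubling Q′ = 12k′² − 20k′ + 10 at k′ = 8(k+1) reproduces the constant 1236 + 2752k + 1536k² appearing in the Theorem.

```python
# Consistency check of the constants in the proof of Theorem 8:
# with k' = 8(k+1), 2 * Q' equals 1236 + 2752 k + 1536 k^2 for all k.
for k in range(1, 100):
    kp = 8 * (k + 1)                     # k' = 8(k+1)
    Qp = 12 * kp ** 2 - 20 * kp + 10     # Q' from Theorem 5 with q = 4
    assert 2 * Qp == 1236 + 2752 * k + 1536 * k ** 2
```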
The resulting approximation ratio is then:

  Tr(Hρ)/λ_max(H) ≥ [Tr(H^(k′) ρ) − |Tr(h^(k′) ρ)|] / Σ_{I∈I} |J_I|
                  ≥ [(1/Q′) Σ_{I∈I} |J_I| − (1 + 1/Q′) Σ_{I∈Ī^(k′)} |J_I|] / Σ_{I∈I} |J_I|
                  ≥ 1/Q′ − 32(Q′+1) k′^2 e^{−k′} / (k √(k′−1) Q′).   (43)

Crucially, the second term decays exponentially with k′, while the first term decays only algebraically (note here the definition of Q′). We now fix k′ = 8(k+1), consistent with the requirement k′ ≥ e^2 k + 1 of Lemma 24. In this case 32(Q′+1) k′^2 e^{−k′} / (k √(k′−1) Q′), as a function of k, is always smaller than 1/(2Q′). This allows us to bound the right-hand side of Eq. (43) from below by 1/(2Q′), and substituting k′ = 8(k+1) we obtain the bound claimed in the Theorem:

  Tr(Hρ)/λ_max(H) ≥ 1 / (1236 + 2752k + 1536k^2).   (44)

The earlier assumed condition 2n > k′(q^2 − 1) for q = 4 and k′ = 8(k+1) translates into n > 60(k+1). Given the conditions of Lemmas 23 and 24, the bound in Eq. (44) holds with probability:

  [1 − 2 exp(−e^{−16(k+1)} k^3 n / (64(8k+7)))] [1 − 2e^{−kn/32}] ≥ 1 − 4 exp(−e^{−16(k+1)} k^3 n / (64(8k+7))).   (45)

8 Upper bound on Gaussian approximation ratio for SYK-q Hamiltonians

Gaussian upper bound for SYK-q models

We consider the expectation value of a SYK-q Hamiltonian H with respect to fermionic Gaussian states, and we obtain an upper bound on this expectation value, with high probability over the random couplings J_I.

Lemma (Repetition of Lemma 9). Let H denote a Hamiltonian drawn from the q-local SYK Hamiltonians (with q ≥ 4 even and q = O(1)), i.e. the coupling strengths J_I are drawn according to their distribution. With probability at least 1 − exp(−Ω(n)), H has the property that, for any fermionic Gaussian state ρ,

  Tr(Hρ) ≤ (q−1)!! 2^{1/2−q/4} q^{1/2+q/2} √(log[q/log(3/2)]) (2n)^{1−q/4}.   (46)

Proof. We first use Wick's theorem on the expectation of a product of Majorana operators w.r.t. a fermionic Gaussian state ρ characterized by a correlation matrix Γ, see Eq. (16). Note that the upper-triangular part (Γ_ij)_{i<j} of the correlation matrix can be viewed as a real d := (2n^2 − n)-dimensional vector.
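The Wick-theorem reduction just invoked expresses Gaussian expectations as Pfaffians of submatrices of Γ, Eq. (16). A minimal numerical sketch of the Pfaffian (a deliberately naive, exponential-time expansion, for illustration only) checks the classical identity Pf(A)² = det(A) for anti-symmetric A:

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of a real anti-symmetric matrix via expansion along the
    first row (illustrative, exponential time; fine for small matrices)."""
    m = A.shape[0]
    if m == 0:
        return 1.0
    total = 0.0
    for j in range(1, m):
        keep = [i for i in range(m) if i not in (0, j)]
        total += (-1) ** (j - 1) * A[0, j] * pfaffian(A[np.ix_(keep, keep)])
    return total

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M - M.T                      # real anti-symmetric
assert np.isclose(pfaffian(A) ** 2, np.linalg.det(A))
```

Each summand of the expansion corresponds to pairing index 0 with index j, mirroring the perfect-matching picture of the Pfaffian described earlier.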
We note that Σ_{i<j} Γ_ij^2 = (1/2) Tr(Γ^T Γ) ≤ (1/2) Tr(I) = n, so that ‖Γ‖ ≤ n^{1/2}. Let M(I) be a perfect matching of the indices in I (|I| even); there are (q−1)!! such matchings. We have

  Tr(C_I ρ) = i^{q/2} Σ_{M(I)} sign_{M(I)} Tr(c_{i_1(M)} c_{i_2(M)} ρ) Tr(c_{i_3(M)} c_{i_4(M)} ρ) ⋯ Tr(c_{i_{q−1}(M)} c_{i_q(M)} ρ).   (47)

Here we have assumed that for each matching M(I): i_1(M) < i_2(M), i_3(M) < i_4(M), …, i_{q−1}(M) < i_q(M), i.e. any sign arising from getting the expression into this form is absorbed in sign_{M(I)}. The expectation of H in Eq. (2) w.r.t. fermionic Gaussian states ρ can be written as:

  Tr(Hρ) = C(2n, q)^{−1/2} i^{q/2} Σ_{I⊆[2n], |I|=q} J_I Σ_{M(I)} sign_{M(I)} Π_{t=1}^{q/2} Tr(c_{i_{2t−1}(M)} c_{i_{2t}(M)} ρ)
         = C(2n, q)^{−1/2} Σ_{I⊆[2n], |I|=q} J_I Σ_{M(I)} sign_{M(I)} Π_{t=1}^{q/2} Γ_{i_{2t−1}(M), i_{2t}(M)},   (48)

where C(2n, q) denotes the binomial coefficient. We note that we can view Tr(Hρ) as a sum of (q−1)!! terms H_M, one for each matching pattern M. Each such term can be bounded using the following Lemma on the spectral norm of random tensors (adapted from Ref. [25]).

Lemma 25. Let A be a random K-way tensor ∈ R^{d_1×d_2×⋯×d_K}, let w_i be vectors ∈ R^{d_i}, and define A(w_1, w_2, …, w_K) := Σ_{k_1,…,k_K} A_{k_1,…,k_K} (w_1)_{k_1} ⋯ (w_K)_{k_K}. If we have, for each fixed set of unit vectors w_i/‖w_i‖ (i ∈ {1, …, K}),

  Pr[ |A(w_1/‖w_1‖, …, w_K/‖w_K‖)| ≥ t ] ≤ 2 exp(−t^2/(2σ^2)),   (49)

then the spectral norm ‖A‖ := max_{w_1,…,w_K} A(w_1/‖w_1‖, …, w_K/‖w_K‖) (with w_i ∈ R^{d_i}) can be bounded as

  ‖A‖ ≤ [8σ^2 (Σ_{i=1}^K d_i log(2K/log(3/2)) + log(2/δ))]^{1/2},

with probability at least 1 − δ.

To apply the Lemma, note that the vectors w_i correspond to (Γ_ij)_{i<j}, viewing i < j as a single index, and we can use their norm ‖Γ‖ ≤ n^{1/2}. In addition, for each entry in the tensor we have E[exp(t J̃(M, I)_{k_1,…,k_{q/2}})] ≤ exp(t^2/2) (for t ≥ 0), as the entry is zero or a Gaussian variable with variance 1 and mean zero. Using Chernoff's bound and the fact that all entries of J̃(M, I) are statistically independent, we conclude that for any set of real vectors w_1, …, w_{q/2} one has

  Pr[ |Σ_{k_1,…,k_{q/2}} J̃(M, I)_{k_1,…,k_{q/2}} (w_1)_{k_1}/‖w_1‖ ⋯ (w_{q/2})_{k_{q/2}}/‖w_{q/2}‖| ≥ t ] ≤ 2 exp(−t^2/2).
(50)

Therefore, for each term H_M we can apply Lemma 25 and, using K = q/2 and σ = 1, obtain

  ‖J̃(M, I)‖ ≤ [4q(2n^2 − n) log(q/log(3/2)) + 8 log(2δ^{−1})]^{1/2},   (51)

with probability at least 1 − δ. Then we can first bound

  max_{ρ Gaussian} Tr(Hρ) ≤ C(2n, q)^{−1/2} ‖Γ‖^{q/2} Σ_M ‖J̃(M, I)‖ ≤ (q/√2)^{q/2} (2n)^{−q/4} Σ_M ‖J̃(M, I)‖,   (52)

where we have used that the binomial coefficient satisfies C(2n, q) ≥ (2n/q)^q. We can now combine the upper bounds in Eq. (51) and Eq. (52). Applying the union bound, we have with probability at least 1 − (q−1)!! δ that

  max_{ρ Gaussian} Tr(Hρ) ≤ (q−1)!! [ 2^{1−q/2} q^{q+1} ((2n)^{2−q/2} − (2n)^{1−q/2}) log(q/log(3/2)) + 2^{3−q/2} q^q (2n)^{−q/2} log(2δ^{−1}) ]^{1/2}.   (53)

Therefore, we can take δ = exp(−Ω(n)) such that, asymptotically, we have (assuming q = O(1)):

  max_{ρ Gaussian} Tr(Hρ) ≤ (q−1)!! 2^{1/2−q/4} q^{1/2+q/2} √(log[q/log(3/2)]) (2n)^{1−q/4},   (54)

with probability at least 1 − δ. Note that in deriving this upper bound we only use the norm of the correlation matrix Γ; hence this upper bound is not necessarily achievable by a Gaussian state, as the constraint Γ^T Γ ≤ I imposes more conditions on Γ than just an upper bound on its norm.

Maximum eigenvalue lower bound for q-local SYK Hamiltonians

To show that fermionic Gaussian states cannot achieve a constant approximation ratio for q ≥ 4 SYK models, we derive a lower bound on the maximum eigenvalue of the Hamiltonians H in Eq. (2): with probability at least 1 − exp(−Ω(n)), λ_max(H) = Ω(√n) (Lemma 10). The remainder of this section will be devoted to proving this Lemma. The techniques used are similar to those used in Section 6 of Ref. [1]. We note that throughout this section we shall use C to denote a quantity that is constant in n, or is bounded from above and below by a constant in n, and it will generally differ from appearance to appearance (for the sake of clarity). Importantly, C can contain factors of q (note that q = O(1)). We start by obtaining a lower bound on the maximum eigenvalue of a so-called 2-colored SYK model and will use this to prove Lemma 10.
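Stepping back to the Wick expansion of Eqs. (47)-(48): the count of (q−1)!! perfect matchings of q indices can be verified directly by brute force (a toy check, not from the paper):

```python
from itertools import permutations

def perfect_matchings(q):
    """Count the perfect matchings of q indices (q even) by brute force."""
    seen = set()
    for perm in permutations(range(q)):
        # a matching as an unordered set of unordered pairs:
        pairs = frozenset(frozenset(perm[i:i + 2]) for i in range(0, q, 2))
        seen.add(pairs)
    return len(seen)

def double_factorial(m):
    return 1 if m <= 0 else m * double_factorial(m - 2)

for q in (2, 4, 6):
    assert perfect_matchings(q) == double_factorial(q - 1)  # (q-1)!!
```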
The Hamiltonian of such a 2-colored SYK model is slightly different from the standard SYK model Hamiltonian in Eq. (2). We divide the 2n Majorana operators into two subsets, with sizes n_1 and n_2 (n_2 ≤ n_1), and denote the operators in the first set by φ_1, …, φ_{n_1} and the ones in the second set by χ_1, …, χ_{n_2}. The Hamiltonian is now given by¹:

  H^(2) = (i/√n_2) Σ_{j=1}^{n_2} τ_j χ_j,   (55)

where

  τ_j = C(n_1, q−1)^{−1/2} i^{q/2−1} Σ_{S⊆[n_1], |S|=q−1} J_{S,j} φ_S.   (56)

Here φ_S is the product of the q−1 φ-Majorana operators in subset S, the J_{S,j} are independent Gaussian random variables, and C(n_1, q−1) denotes the binomial coefficient. The subset S labels an ordered subset of q−1 Majorana operators (note that these are different from the subsets I defined before, which correspond to ordered subsets of q Majorana operators). We note that the (Hermitian) τ_j operators do not necessarily obey {τ_j, τ_k} = 2δ_jk I, but instead satisfy E({τ_j, τ_k}) = −i^{q−2} δ_jk I.

Lemma 26. Let {φ_i}_{i=1}^{n_1} and {χ_i}_{i=1}^{n_2} be n_1 + n_2 Majorana operators and let H^(2) be as in Eq. (55), with n_1, n_2 proportional to n. Then one can construct a Gaussian state ρ_θ such that, with probability at least 1 − exp(−Ω(n)), Tr(H^(2) ρ_θ) ≥ C√n.

Proof. We introduce a new set of Majorana operators (again of size n_2) σ_1, …, σ_{n_2} (which do obey {σ_j, σ_k} = 2δ_jk I), and we define the quadratic Hamiltonian H′:

  H′ = (i/√n_2) Σ_{j=1}^{n_2} σ_j χ_j.   (57)

This quadratic Hamiltonian H′ is optimized by a fermionic Gaussian state ρ_0. Setting ζ := Σ_{j=1}^{n_2} τ_j σ_j, we consider the family of Gaussian states

  ρ_θ = e^{−θζ} ρ_0 e^{+θζ}.   (58)

The expectation value of H^(2) w.r.t. ρ_θ is

  Tr(H^(2) ρ_θ) = Tr(H^(2)_θ ρ_0), where H^(2)_θ := e^{+θζ} H^(2) e^{−θζ}.   (59)

Using the BCH expansion of H^(2)_θ and Tr(H^(2) ρ_0) = 0, we obtain:

  Tr(H^(2) ρ_θ) = θ Tr([ζ, H^(2)] ρ_0) + θ^2 ∫_0^1 (1−s) Tr([ζ, [ζ, H^(2)]] ρ_{sθ}) ds
               = θ Tr([ζ, H^(2)] ρ_0) + θ^2 E_{s∼[0,1]} (1−s) Tr([ζ, [ζ, H^(2)]] ρ_{sθ})
               ≥ θ Tr([ζ, H^(2)] ρ_0) − θ^2 ‖[ζ, [ζ, H^(2)]]‖,   (60)

where we have used the triangle inequality and ‖·‖ denotes the spectral norm. To lower bound Tr(H^(2) ρ_θ), one now has to (i) lower bound θ Tr([ζ, H^(2)] ρ_0) and (ii) upper bound θ^2 ‖[ζ, [ζ, H^(2)]]‖.
This proof technique is similar in spirit to the proof in [26], although their proof is for qubit Hamiltonians with bounded-degree interactions. First, we find a lower bound for θ Tr([ζ, H^(2)] ρ_0) which holds with high probability:

  Tr([ζ, H^(2)] ρ_0) = (i/√n_2) Σ_{j,k=1}^{n_2} Tr([τ_j σ_j, τ_k χ_k] ρ_0)
                    = (i/√n_2) Σ_{j=1}^{n_2} Tr([τ_j σ_j, τ_j χ_j] ρ_0)
                    = (2i/√n_2) Σ_{j=1}^{n_2} Tr(σ_j χ_j τ_j^2 ρ_0)
                    = (2/√n_2) 2^{−(n_2+n_1/2)} Σ_{j=1}^{n_2} Tr(I_{n_2} τ_j^2)
                    = [2(−1)^{q/2} / (√n_2 C(n_1, q−1))] Σ_{j=1}^{n_2} Σ_{S⊆[n_1], |S|=q−1} J_{S,j}^2,   (61)

where we have used that Tr([τ_j σ_j, τ_k χ_k] ρ_0) is non-zero only for j = k, and the definition of τ_j. The quantity Tr([ζ, H^(2)] ρ_0) is thus a chi-squared random variable (up to normalization factors and potentially a sign) with n_2 C(n_1, q−1) degrees of freedom, and its expectation value is given by:

  E[Tr([ζ, H^(2)] ρ_0)] = [2(−1)^{q/2} / (√n_2 C(n_1, q−1))] Σ_{j=1}^{n_2} Σ_{S⊆[n_1], |S|=q−1} E[J_{S,j}^2] = 2√n_2 (−1)^{q/2},   (62)

where we have used that E[J_{S,j}^2] = 1. We note that in order to obtain a positive first-order contribution to Tr(H^(2) ρ_θ), one should take θ positive for q/2 even, and θ negative for q/2 odd. Since Tr([ζ, H^(2)] ρ_0) is a chi-squared random variable with n_2 C(n_1, q−1) degrees of freedom, the following tail bounds can be obtained [27]:

  Pr[ Tr([ζ, H^(2)] ρ_0) ≤ √n_2 ] ≤ exp(−Ω(n_2 n_1^{q−1})),   (63)

for q/2 even, and

  Pr[ Tr([ζ, H^(2)] ρ_0) ≥ −√n_2 ] ≤ exp(−Ω(n_2 n_1^{q−1})),   (64)

for q/2 odd. The random variable Tr([ζ, H^(2)] ρ_0) is thus equal to 2√n_2 (−1)^{q/2} in expectation, and the probability that, for any even q ≥ 4, its norm is smaller than half the norm of this expectation is at most exponentially small in the system size.
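The chi-squared concentration just used can be illustrated numerically (a sketch with made-up sizes, not the paper's parameters): a chi-squared variable with many degrees of freedom stays far above half its mean.

```python
import numpy as np

# Toy illustration of the concentration behind Eqs. (63)-(64):
# a sum of d squared standard Gaussians concentrates around its mean d,
# with fluctuations of order sqrt(2d).
rng = np.random.default_rng(42)
d = 10_000                           # degrees of freedom (illustrative)
samples = rng.standard_normal((100, d))
chi2 = (samples ** 2).sum(axis=1)    # 100 draws of a chi-squared_d variable
assert np.all(chi2 > d / 2)          # no draw falls below half the mean
assert abs(chi2.mean() / d - 1) < 0.05
```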
In order to upper bound θ^2 ‖[ζ, [ζ, H^(2)]]‖, we first evaluate [ζ, [ζ, H^(2)]]:

  [ζ, [ζ, H^(2)]] = (i^{q/2} / (√n_2 C(n_1, q−1)^{1/2})) Σ_{j=1}^{n_2} Σ_{S⊆[n_1], |S|=q−1} J_{S,j} [ζ, [ζ, φ_S χ_j]]
                 = (i^{q/2} / (√n_2 C(n_1, q−1)^{1/2})) Σ_{j,k,l=1}^{n_2} Σ_S J_{S,j} [τ_k σ_k, [τ_l σ_l, φ_S χ_j]]
                 = (i^{3q/2−2} / (√n_2 C(n_1, q−1)^{3/2})) Σ_{j,k,l=1}^{n_2} Σ_{S,S′,S″} J_{S,j} J_{S′,k} J_{S″,l} [φ_{S′} σ_k, [φ_{S″} σ_l, φ_S χ_j]],   (65)

where the final sum over S, S′, S″ is over all S, S′, S″ ⊆ [n_1] with |S| = |S′| = |S″| = q−1 (all sums over S, S′, S″ will implicitly carry this constraint from now on). The nested commutator in this expression simplifies as follows (note that the product of i^{3q/2−2} and the nested commutator is Hermitian):

  i^{3q/2−2} [φ_{S′} σ_k, [φ_{S″} σ_l, φ_S χ_j]] = C (φ_K σ_k σ_l χ_j)_H, if (|S″ ∩ S| is odd) ∧ (|S′ ∩ (S″ Δ S)| + δ_{k,l} is odd) ∧ (|S| = |S′| = |S″| = q−1), and 0 otherwise,   (66)

where (φ_K σ_k σ_l χ_j)_H denotes a Hermitian version of φ_K σ_k σ_l χ_j (i.e., φ_K σ_k σ_l χ_j up to integer powers of i) and K := (S Δ S′ Δ S″) ∪ (S ∩ S′ ∩ S″) (note that |K| is odd). We therefore have:

  [ζ, [ζ, H^(2)]] = C (1/√n_2) C(n_1, q−1)^{−3/2} Σ_{j,k,l=1}^{n_2} Σ_{S,S′,S″} J_{S,j} J_{S′,k} J_{S″,l} (φ_K σ_k σ_l χ_j)_H f(S, S′, S″, j, k, l),   (67)

where we have defined

  f(S, S′, S″, j, k, l) := 1 if (|S″ ∩ S| is odd) ∧ (|S′ ∩ (S″ Δ S)| + δ_{k,l} is odd) ∧ (|S| = |S′| = |S″| = q−1), and 0 otherwise.   (68)

We now wish to find an upper bound on the expected value of the spectral norm of [ζ, [ζ, H^(2)]], and in addition we would like to show that the spectral norm exceeds twice the value of this upper bound with probability at most exponentially small in the system size. To establish this, we will have to show the following:

  E ‖[ζ, [ζ, H^(2)]]‖^k ≤ α^k,   (69)

for even k proportional to the system size and for some α. Markov's inequality then implies

  Pr[ ‖[ζ, [ζ, H^(2)]]‖ ≥ α′ ] = Pr[ ‖[ζ, [ζ, H^(2)]]‖^k ≥ (α′)^k ] ≤ (α/α′)^k,   (70)

with α′ ≥ α.
So taking α′ = 2α and k equal to the system size 2n (= 2n_2 + n_1) yields the desired result: the probability of the spectral norm exceeding twice the value of the upper bound is at most exponentially small in the system size. For convenience, we define A := [ζ, [ζ, H^(2)]]. Since A is Hermitian (by direct calculation), the spectrum of A^2 is non-negative and therefore we have ‖A‖^k = λ_max(A^2)^{k/2} ≤ Tr(A^k) (for even k). Using Eq. (67), we express A as C Σ_{S̃⊆[2n_2+n_1]} Q_{S̃} C_{S̃} for convenience, where C is a non-negative constant, the Q_{S̃} are real random variables, and C_{S̃} denotes a Hermitian (even) Majorana monomial. In addition, we define the random variable (obtained by replacing the Majorana monomials in A with 1)

  A(1) := C Σ_{S̃⊆[2n_2+n_1]} Q_{S̃} = C (1/√n_2) C(n_1, q−1)^{−3/2} Σ_{j,k,l=1}^{n_2} Σ_{S,S′,S″} J_{S,j} J_{S′,k} J_{S″,l} f(S, S′, S″, j, k, l).   (71)

If we now assume that

  E[Q_{S̃_1} ⋯ Q_{S̃_k}] ≥ 0 and E[A(1)^k]/α^k ≤ 1/2^{n_2+n_1/2},   (72)

both hold for some even k and some constant α (note that the first condition is automatically satisfied, since {J_{S,j}} is a collection of independent standard Gaussian random variables), then for even k we can establish

  (E‖A‖)^k ≤ E‖A‖^k ≤ E[Tr(A^k)] = C^k Σ_{S̃_1,…,S̃_k ⊆ [2n_2+n_1]} E[Q_{S̃_1} ⋯ Q_{S̃_k}] Re Tr(C_{S̃_1} ⋯ C_{S̃_k})
            ≤ 2^{n_2+n_1/2} C^k Σ_{S̃_1,…,S̃_k} E[Q_{S̃_1} ⋯ Q_{S̃_k}] = 2^{n_2+n_1/2} E[A(1)^k] ≤ α^k,   (73)

where the first inequality is Jensen's inequality, and we have also used that E[Q_{S̃_1} ⋯ Q_{S̃_k}] ≥ 0 together with |Tr(C_{S̃_1} ⋯ C_{S̃_k})| ≤ 2^{n_2+n_1/2}.

From this point onward, we shall take n_1 and n_2 proportional to n, where 2n = 2n_2 + n_1 denotes the total number of Majorana operators. We now show that the second condition in Eq. (72) is satisfied for k = 2n and α = C√n. In order to do so, we show that E[A(1)^{2n}] ≤ (C√n)^{2n} (where the factor of 2^{n_2+n_1/2} is absorbed into C^{2n}). To that end, we need an upper bound on the (2n)-th moment of the random variable A(1) in Eq. (71). In Appendix F we derive this upper bound and indeed show that E[A(1)^{2n}] ≤ (C√n)^{2n}.
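The inequality ‖A‖^k ≤ Tr(A^k) for Hermitian A and even k, which drives the moment bound above, can be checked on a random matrix (a sketch; the matrix and its size are illustrative):

```python
import numpy as np

# For Hermitian A and even k: Tr(A^k) = sum_i lambda_i^k >= max_i |lambda_i|^k.
rng = np.random.default_rng(1)
B = rng.standard_normal((8, 8))
A = (B + B.T) / 2                               # real symmetric (Hermitian)
norm = np.max(np.abs(np.linalg.eigvalsh(A)))    # spectral norm of A
for k in (2, 4, 6):
    assert norm ** k <= np.trace(np.linalg.matrix_power(A, k)) + 1e-9
```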
Therefore,

  Tr(H^(2) ρ_θ) ≥ C√n,   (75)

with probability at least 1 − exp(−Ω(n)). What is left is to show that this result also holds for the standard SYK Hamiltonian. The translation from the 2-colored SYK Hamiltonian to the standard SYK Hamiltonian is given in Lemma 27 below, and its proof is given in Appendix G.

A Extensive sets of mutually anti-commuting terms

One can easily prove that when one maps a dense, non-sparse fermionic model such as the SYK model onto a qubit Hamiltonian, the locality of the resulting Hamiltonian has to grow as some function of n, due to the following Lemma:

Lemma 28. Any set of mutually anti-commuting Pauli strings {Q_i}_{i=0}^{m−1}, each of weight at most k, on n qubits has cardinality m bounded as

  m ≤ 3 × 2^{k(3k−1)},   (76)

assuming that k(k−1) < n.

Proof. Take Q_0 of weight at most k and let the m−1 Paulis Q_i anticommute with it. We can represent each Pauli string as a 2n-bit string, say Q_0 = (y_x, y_z) with Hamming weights |y_x| ≤ k, |y_z| ≤ k. Any other Q_i in the set has to anti-commute with Q_0 on the support of the string y. First, note that the set of strings of length at most 2k which have symplectic inner product equal to 1 with (so anti-commute with) a given string of length 2k has size at most 2^{2k−1}. Now we pick the largest subset M_1 of the set of elements Q_1, …, Q_{m−1} such that all elements in the subset act identically on the support of Q_0, i.e. are represented by the same string of length at most 2k, while differing beyond the support of Q_0. Let the cardinality of this set be |M_1| = m_1 ≤ m−1; we have m_1 ≥ (m−1)/2^{2k−1} ≥ m/2^{2k}, as the largest such set should contain at least a fraction 1/2^{2k−1} of the total. So now we consider this set M_1 and its action on the remaining n−k qubits (outside the support of Q_0), where these elements all have to anti-commute. In addition, each element has Pauli weight at most k−1 (as each had to overlap with Q_0 in at least one Pauli).
We then reapply this argument to this set, leading to a new set M_2 with |M_2| = m_2 ≥ (m_1 − 1)/2^{2(k−1)−1}, acting on n − 2k qubits and having weight at most k−2, etc. We can reiterate this process l times until the remaining weight of the set of Pauli strings M_l reaches k − l = 1. This implies that M_l can contain at most 3 elements, since they all need to anti-commute on a single qubit (assuming that n − kl > 0, i.e. n − k(k−1) > 0). So we have

  3 ≥ |M_{l=k−1}| = m_{k−1} ≥ m / 4^{k+(k−1)+…+l} = m / 2^{k(3k−1)}.   (77)

The SYK-4 model contains large (of size n) sets of mutually anti-commuting terms. An example is the set of all terms which overlap only on one fixed Majorana. Lemma 28 then shows that any fermion-to-qubit mapping (an encoding possibly using more qubits) will require the weight of some of the resulting Pauli terms to grow as a function of n. Note that the actual mapping by Bravyi and Kitaev [3] with k = O(log n) shows that the upper bound in Eq. (76) is not completely tight.

Another straightforward observation on the energy scaling of a model in which all terms anti-commute is that λ_max does not necessarily scale with the number of terms, as captured by the following Lemma: if all terms C_I in H = Σ_{I∈I} J_I C_I pairwise anti-commute, then

  H^2 = (Σ_{I∈I} J_I^2) I,   (78)

and hence λ_max(H) = (Σ_{I∈I} J_I^2)^{1/2}. Thus, if all J_I are of similar strength, we observe that the overall maximal energy scales as √|I| rather than |I|.

B Splitting sparse Hamiltonians into diffuse interaction sets

Lemma (Repetition of Lemma 19). Let I be the interaction set of a k-sparse, q-local Hamiltonian on the set of fermions [2n]. The set I can be split into (qQ)/2 disjoint, strictly 2q′-local subsets I^(2q′)_α (with α ∈ [Q] and q′ ∈ [q/2]), each of which is diffuse with respect to I:

  I = ∪_{q′=1}^{q/2} ∪_{α=1}^{Q} I^(2q′)_α.   (79)

The parameter Q = q(q−1)(k−1)^2 + q(k−1) + 2 does not grow with n. The construction of this splitting can be done efficiently, in time poly(n).

Proof. Consider a graph G with vertices corresponding to interaction sets I ∈ I, where two interaction sets I_1, I_2 are connected by an edge if either: 1.
they share at least one Majorana operator, or 2. $I_1$ and $I_2$ both share Majorana operators with another set $I \ne I_1, I_2$. For a q-local k-sparse Hamiltonian, G has maximal degree $Q'$ with $Q' = q(q-1)(k-1)^2 + q(k-1)$. Here $q(k-1)$ is the maximal number of interactions $I_2$ directly sharing a Majorana fermion with any given interaction $I_1$, and $q(q-1)(k-1)^2$ is the maximal number of interactions satisfying condition 2. Since a $Q'$-sparse graph is vertex-colorable by at most $Q'+1$ colors [28], we can split $\mathcal{I}$ into $Q'+1$ subsets $\mathcal{I}_\alpha$, s.t. any two interactions $I_1, I_2$ from a set $\mathcal{I}_\alpha$ are not connected by an edge in G. By the definition of G, this amounts to the sets $\mathcal{I}_\alpha$ satisfying the first two conditions of Definition 15. A greedy algorithm can be used to assign the vertices of G the $Q'+1$ colors, so the $\mathcal{I}_\alpha$ can be constructed efficiently. Each interaction set $\mathcal{I}_\alpha$ can contain terms of different weight. For each value of α we define strictly $2q'$-local sets $\mathcal{I}^{(2q')}_\alpha$ by restricting to the strictly $2q'$-local part of $\mathcal{I}_\alpha$, giving the preliminary splitting $\mathcal{I} = \bigcup_{q'=1}^{q/2}\bigcup_{\alpha=1}^{Q'+1} \mathcal{I}^{(2q')}_\alpha$, (80) where all sets $\mathcal{I}^{(2q')}_\alpha$ satisfy the first two conditions of Definition 15. Suppose now that condition 3 is violated for some $q'$ at some value of α, which we set to be $\alpha = Q'+1$. Since each interaction in a set $\mathcal{I}^{(2q')}_\beta$ with $\beta \ne Q'+1$ must then involve at least one Majorana from $[2n]\setminus \mathrm{Sup}(\mathcal{I}^{(2q')}_{Q'+1})$, and interactions within $\mathcal{I}^{(2q')}_\beta$ are pairwise disjoint, we get $|\mathrm{Sup}(\mathcal{I}^{(2q')}_\beta)| \le 2q'\,|[2n]\setminus \mathrm{Sup}(\mathcal{I}^{(2q')}_{Q'+1})|$. (81) This can be further bounded as $|\mathrm{Sup}(\mathcal{I}^{(2q')}_\beta)| \le q\,|[2n]\setminus \mathrm{Sup}(\mathcal{I}^{(2q')}_{Q'+1})|$, because $2q' \le q$. Since we assumed $|\mathrm{Sup}(\mathcal{I}^{(2q')}_{Q'+1})| \ge 2nq/(q+1)$ and thus $|[2n]\setminus \mathrm{Sup}(\mathcal{I}^{(2q')}_{Q'+1})| < 2n/(q+1)$, it follows that $|\mathrm{Sup}(\mathcal{I}^{(2q')}_\beta)| < 2nq/(q+1)$. Thus we have shown that, for a given $q'$, condition 3 of Definition 15 cannot be violated by more than one of the sets; splitting each violating set in half then yields $\mathcal{I} = \bigcup_{q'=1}^{q/2}\bigcup_{\alpha=1}^{Q} \mathcal{I}^{(2q')}_\alpha$, (82) where $Q = Q' + 2 = q(q-1)(k-1)^2 + q(k-1) + 2$. The interaction sets $\mathcal{I}^{(2q')}_\alpha$ are diffuse (satisfying all three conditions of Definition 15) with respect to $\mathcal{I}$ for all $q'$ and α. The construction of the $\mathcal{I}^{(2q')}_\alpha$ is efficient, because each step can be implemented in time poly(n).

C Majorana matchings from diffuse interaction sets

Lemma (Repetition of Lemma 20). Let a strictly $q'$-local $\mathcal{I}'$ be diffuse w.r.t. a q-local k-sparse $\mathcal{I}$ on $[2n]$, such that $2n > (q^2-1)k$.
One can efficiently construct a matching M of [2n] that is consistent with I and inconsistent with all interactions I ∈ I\I such that (1) |I| ≥ q or (2) I ⊂ Sup(I ). Proof. We first note that for I ∈ I\I the condition |I| ≥ q implies I ⊂ Sup(I ). Indeed, there are two possible options for I ∈ I\I such that I ⊂ Sup(I ). The first option is that I is a strict subset of a single interaction from I . However, this is not possible given |I| ≥ q , because I is q -local. The second option is for I to share Majorana modes with two or more interactions in I . This is ruled out because I is diffuse with respect to I (cf. Condition 2 in Definition 15). The above implies that it is sufficient to construct the matching M that is consistent with I and inconsistent with {I ∈ I\I |I ⊂ Sup(I )}. We construct M in two steps. To construct a matching M of [2n]\Sup(I ), we aim to ensure that no (m 1 , m 2 ) ∈ M is a subset of any interaction in I. For this, consider a 'permitted edge' graph P with vertices [2n]\Sup(I ), and edges inserted between every pair (i 1 , i 2 ) unless they belong to the same interaction in I. We aim to construct M as a perfect matching of P. Note that since I is q-local and k-sparse, the graph P has degree bounded from below as |[2n]\Sup(I )| − (q − 1)k. At the same time, since I is diffuse, we're guaranteed by Condition 3 in Definition 15 that |[2n]\Sup(I )| ≥ 2n q+1 . Therefore, since 2n > (q 2 − 1)k by assumption, the degree of the vertices in P is lower bounded as |[2n]\Sup(I )| − (q − 1)k ≥ |[2n]\Sup(I )|/2 + 2n q+1 − (q − 1)k > |[2n]\Sup(I )|/2. Given this lower bound, we apply Dirac's theorem [29], which yields an efficiently constructable Hamiltonian cycle in the graph P. Matching M is then obtained by pairing the sequential vertices in this cycle, making it a perfect matching of P. By definition of P, M is guaranteed to contain at least one outgoing edge from every interaction in {I ∈ I\I |I ⊂ Sup(I )}. 
This makes $M = M' \cup M''$ inconsistent with $\{I \in \mathcal{I}\setminus\mathcal{I}' \mid I \subset \mathrm{Sup}(\mathcal{I}')\}$, as desired. Lemma 17, which is used in the proof of Theorem 5, is a special case of Lemma 20. To obtain Lemma 17, one sets $q' = q$ and considers strictly q-local $\mathcal{I}$ instead of simply q-local. In this case all terms in $\mathcal{I}\setminus\mathcal{I}'$ satisfy the first condition of the Lemma, and therefore the constructed M is inconsistent with the entirety of $\mathcal{I}\setminus\mathcal{I}'$.

D Matchings and Gaussian states

Proof (of Lemma 21, whose claim is Eq. (83)). For the given matching M, consider its associated pure Gaussian state $\rho(M, \lambda)$ of the form: $\rho(M, \lambda) = \frac{1}{2^n} \prod_{\{m_1,m_2\} \in M} (I + i\lambda_{(m_1,m_2)} c_{m_1} c_{m_2})$. (84) Lemma 14 implies that the contribution to $\mathrm{Tr}(H\rho(M,\lambda))$ from the inconsistent interactions $\mathcal{I}\setminus\mathcal{I}'$ vanishes, while the contributions from $\mathcal{I}'$ yield: $\mathrm{Tr}(H\rho(M,\lambda)) = \sum_{I \in \mathcal{I}'} J_I \,\mathrm{sign}(\pi) \prod_{l \in \{1,..,|I|/2\}} \lambda_{(i_{\pi(2l-1)}, i_{\pi(2l)})}$. (85) The proof is completed by choosing an appropriate value for λ. Since $\mathcal{I}'$ is diffuse, by Condition 1 of Definition 15, distinct interactions from $\mathcal{I}'$ do not share Majorana fermions. This means that the values $\lambda_{(m_1,m_2)}$ for different I in Eq. (85) can be chosen independently. In particular, by picking appropriate $\lambda_{(m_1,m_2)} = \pm 1$, one can eliminate the sign of $J_I \,\mathrm{sign}(\pi)$ and achieve a contribution $|J_I|$ for each $I \in \mathcal{I}'$. Note that this procedure can be done efficiently, as it is simply a matter of choosing at most n values ±1 by checking the signs of at most $|\mathcal{I}'|$ terms. Denoting the thus chosen $\rho(M, \lambda)$ as $\rho(\mathcal{I}')$, this yields Eq. (83). A special case of Lemma 21 is Lemma 18, used in the proof of Theorem 5.

E Concentration bounds for sparse SYK-4

Here we derive the concentration bounds for the SSYK-4 Hamiltonian that were used in the proof of Theorem 8 (Section 7). We first prove an auxiliary Lemma that will be used later in this Section, allowing us to separate the statistics of interaction selection and interaction strength:

Lemma 30. Let $X_a \in \{0,1\}$ ($a \in [D]$) be random variables independent of the i.i.d. random variables $J_a$ ($a \in [D]$). Then, for any y and any $d' \le D$, $P\big[\sum_{a=1}^{D} X_a |J_a| < y\big] \le P\big[\sum_{a=1}^{D} X_a < d'\big] + P\big[\sum_{a=1}^{d'} |J_a| < y\big]$, (86) and $P\big[\sum_{a=1}^{D} X_a |J_a| > y\big] \le P\big[\sum_{a=1}^{D} X_a > d'\big] + P\big[\sum_{a=1}^{d'} |J_a| > y\big]$. (87)

Proof. To prove Eq.
(86), first show P D a=1 X a |J a | ≥ y = D d =1   P D a=1 X a = d P   d a=1 |J a | ≥ y     ≥ D d =d   P D a=1 X a = d P   d a=1 |J a | ≥ y     ≥ D d =d P D a=1 X a = d P d a=1 |J a | ≥ y = P D a=1 X a ≥ d P d a=1 |J a | ≥ y .(88) It follows that P D a=1 X a |J a | < y = 1 − P D a=1 X a |J a | ≥ y ≤1 − P D a=1 X a ≥ d P d a=1 |J a | ≥ y = 1 − 1 − P D a=1 X a < d 1 − P d a=1 |J a | < y ≤P D a=1 X a < d + P d a=1 |J a | < y .(89) This ends the proof of Eq. (86). In the same vein, one derives Eq. (87). Namely, we first have (cf. Eq. (88)): P D a=1 X a |J a | ≤ y ≥ d d =1   P D a=1 X a = d P   d a=1 |J a | ≤ y     ≥ P D a=1 X a ≤ d P d a=1 |J a | ≤ y .(90) Similarly to Eq. (89), one obtains Eq. (87) from Eq. (90): P D a=1 X a |J a | > y = 1 − P D a=1 X a |J a | ≥ y ≤ 1 − 1 − P D a=1 X a > d 1 − P d a=1 |J a | > y ≤ P D a=1 X a > d + P d a=1 |J a | > y .(91) We proceed with the proof of Lemmas 23 and 24, which were used in Section 7 to prove Theorem 8. Lemma (Repetition of Lemma 23). Let interactions I and interaction strengths {J D ≡ 2n − 1 3 ,(93) X I is drawn from a Bernoulli distribution with probability p = kD −1 , i.e. X I ∼ Bern kD −1 . The second set is J I for all I ∈ I, distributed normally J I ∼ N(0, 1). We introduce auxiliary variables J a for a ∈ [ kn/4 ] and J a ∼ N(0, 1). Then by Lemma 30: P I∈I |J I | < kn/8 ≤ P   I⊂[2n], |I|=4 X I < kn 4   + P   a∈[ kn/4 ] |J a | < kn/8   .(94) We can bound the first term using the Chernoff bound for sums of Bernoulli random variables. Substituting X I < kn 4   = P   I⊂[2n], |I|=4 X I < kn 4   ≤ exp − kn 4 (1 − log 2) .(95) On the other hand, standard concentration properties of Gaussian random variables imply, see Lemma 31 at the end of this Appendix, P   kn/4 a=1 |J a | < kn 8   < e −kn/32 .(96) Since exp − kn 4 (1 − log 2) ≤ e −kn/32 , the bound in Eq. (94) yields P I∈I |J I | < kn/8 < 2e −kn/32 ,(97) as desired. Lemma (Repetition of Lemma 24). 
If k ≥ e 2 k + 1, we have with probability at least 1 − 2 exp − e −2k k 3 64(k −1) n that I∈Ī (k ) |J I | ≤ 4k 2 √ k − 1 e −k n.(98) Proof. The random variable I∈Ī (k ) |J I | is a function of random variables X I ∼ Bern kD −1 for I ⊂ [2n], |I| = 4 and J I ∼ N(0, 1) for all I ∈ I. We introduce auxiliary random variables J a ∼ N(0, 1) for a ∈ [K] where K ≡ 4k 2 √ k − 1 e −k n .(99) By Lemma 30, one can upperbound P I∈Ī (k ) |J I | > 4k 2 √ k − 1 e −k n ≤ P |Ī (k ) | > 4k 2 √ k − 1 e −k n + P a∈[K] |J a | > 4k 2 √ k − 1 e −k n .(100) We now proceed with upper bounding P |Ī (k ) | > 4k 2 √ k −1 e −k n and then P a∈ [K] |J a | > 4k 2 √ k −1 e −k n . To bound P[|Ī (k ) | > 4k 2 √ k −1 e −k n], we introduce the Majorana degree function k i = k i ({X I }) ∈ [D] , which is a random variable that counts the number of interactions in I involving a given Majorana c i . Since X I ∼ Bern kD −1 , k i follows the binomial distribution Bin(D, kD −1 ) (note however that different k i and k j are not necessarily independent). Given the construction of h (k ) , it is clear that |Ī (k ) | can be bounded by the 'excess degree' summed over all Majoranas. Concretely, using the Majorana degree function k i we define a random variable Z ≡ Z({X I }) ≡ 1 2n 2n i=1 (k i − k ) I ki>k ,(101) which has the immediate property |Ī (k ) | ≤ 2nZ.(102) Here we used the indicator function I ki>k = 1 when k i > k and 0 otherwise. Given Eq. (102), P[Z > 2k 2 √ k −1 e −k ] ≥ P[|Ī (k ) | > 4k 2 √ k −1 e −k n] and thus it suffices to bound the former. We begin by calculating its mean: E[Z] = 1 2n 2n i=1 E[(k i − k )I ki>k ] = E[(k 1 − k )I k1>k ],(103) where we used linearity of E(.) and the permutation symmetry of the SSYK ensemble. Hence we now need to calculate E[(k 1 − k )I k1>k ] for a single Majorana (w.l.o.g. c 1 ). Since the associated degree k 1 ∼ Bin(D, kD −1 ), we calculate directly (denoting p = kD −1 ): E[(k 1 − k )I k1>k ] ≤ E[k 1 I k1>k ] = D k1=k +1 p k1 (1 − p) D−k1 D! (D − k 1 )!k 1 ! 
k 1 = Dp D−1 k1=k p k1 (1 − p) D−1−k1 (D − 1)! (D − 1 − k 1 )!k 1 ! .(104) The following identity holds [30]: z−1 x=y w x (1 − w) z−1−x (z − 1)! (z − 1 − x)!x! = β w (y, z − y),(105) where β w (y, z − y) is the regularized incomplete beta function. For integer y, z > y it is defined as β w (y, z − y) = (z − 1)! (y − 1)!(z − y − 1)! w t=0 t y−1 (1 − t) z−y−1 dt. Using the Stirling bound x! ≥ √ 2πx x+1/2 e −x , x ∈ N, one bounds β w (y, z − y) as: β w (y, z − y) < w(z − 1) 2π(y − 1) e w(z − 1) (y − 1) y−1 .(106) Substituting p = kD −1 and using Eqs. (105), (106) in Eq. (104) for k > e 2 k + 1 we obtain E[(k 1 − k )I ki>k ] < Dp p(D − 1) 2π(k − 1) e p(D − 1) k − 1 k −1 < e 2 k 4 2π(k − 1) e −k ⇒ E[Z] < e 2 k 4 2π(k − 1) e −k .(107) We now aim to apply the Efron-Stein inequality [24] to bound deviations from the mean E(Z). For this, we introduce an additional set of independent random variables {X I } such that X I ∼ Bern kD −1 . This allows to define auxiliary functions Z I Z I ≡ Z X I →X I(108) where for a single interaction I only, the variable X I is replaced by X I . Using the indicator function I Z>Z I , a further auxiliary function V = V ({X I }) can be defined: V ≡ E {X I }   I⊂[2n], |I|=4 (Z − Z I ) 2 I Z>Z I   ,(109) where the averaging is performed over the additional random variables {X I } alone. An exponential version of the Efron-Stein inequality (Theorem 2 of [24]) states for all θ > 0 and λ ∈ (0, θ −1 ): log E[ exp(λ(Z − E[Z]))] ≤ λθ 1 − λθ log E exp λV θ .(110) To employ Eq. (110), we have to bound E exp λV θ . First we upper bound V ({X I }) as a function. For all interactions I we claim, independent of {X I } and {X I }: (Z − Z I ) 2 I Z>Z I ≤ 4 n 2 I X I =1 .(111) To show this, we will go through four possible cases: (X I , X I ) = (0, 0), (1, 0), (0, 1), or (1, 1). If X I = X I , the left hand side of Eq. (111) vanishes, reproducing Eq. (111) for the cases (X I , X I ) = (0, 0) and (1,1). 
For (X I , X I ) = (0, 1), Z is smaller than Z I , because replacing X I = 0 by X I = 1 cannot decrease the excess degree for any Majorana (cf. definition of k i and 2nZ = V ({X I }) ≤ 4 n 2 I⊂[2n], |I|=4 X I .(112) Since X I ∼ Bin(1, kD −1 ), we have E exp λV θ ≤ E   exp   4λ θn 2 I⊂[2n], |I|=4 X I     = E exp 4λ θn 2 X 1 ( 2n 4 ) = (1 − kD −1 ) + kD −1 exp 4λ θn 2 ( 2n 4 ) ≤ exp k 2n 4 D −1 exp 4λ θn 2 − 1 = exp kn 2 exp 4λ θn 2 − 1 .(113) We further assume a constraint λ < n 2 θ 4 , which implies the inequality exp( 4λ θn 2 ) − 1 < 8λ θn 2 . This allows to further bound E exp λV θ : E exp λV θ ≤ exp 4λk θn .(114) We now assume an additional constraint λ < 1 2θ , which strengthens the condition λ < θ −1 of Eq. (110). With this constraint, using Eq. (114) in Eq. (110), we obtain: log E[ exp(λ(Z − E[Z]))] ≤ 4λ 2 1 − λθ k n ≤ 8λ 2 k n .(115) This inequality is true regardless of θ and λ, insofar both numbers are positive and satisfy the constraints we introduced: 4λ n 2 < θ < 1 2λ .(116) For a valid θ to exist, it's necessary and sufficient that λ belongs to the interval (0, n 2 √ 2 ). For such λ, Eq. (115) holds, and combined with a Markov inequality it implies for any t > 0: P[Z > E[Z] + t] < exp 8λ 2 k n − λt .(117) We next choose the value of λ ∈ (0, n 2 √ 2 ) that optimizes the right hand side. If t 2 √ 2k < 1, this is achieved with λ = tn 16k . This yields the result P[Z > E[Z] + t] < exp − nt 2 32k .(118) We choose t = k 4 2(k −1) e −k , which automatically ensures the desired condition t 2 √ 2k < 1 because of the constraint k > e 2 k + 1 that we assumed in the Lemma statement. We obtain: (Eq. (107)) and e 2 2π + 1 2 < 2, we arrive at an upper bound for the probability P |Ī (k ) | > 4k 2 √ k −1 e −k n : P Z > E[Z] + k 4 2(k − 1) e −k < exp − e −2k k 3 64(k − 1) n . 
(119) Since E[Z] < e 2 k 4 2π(k −1) e −kP Z > 2k 2 √ k − 1 e −k < exp − e −2k k 3 64(k − 1) n ⇒ P |Ī (k ) | > 4k 2 √ k − 1 e −k n < exp − e −2k k 3 64(k − 1) n .(120) To bound P a∈[K] |J a | > 4k 2 √ k −1 e −k n , we use the concentration properties of Gaussian random variables (see Lemma 31 at the end of this Appendix). Using K = 4k 2 √ k −1 e −k n in Lemma 31.1: P   a∈[K] |J a | > 4k 2 √ k − 1 e −k n   ≤ P   a∈[K] |J a | > K   < e −K/20 .(121) Note that our bound for P |Ī (k ) | > 4k 2 √ k −1 e −k n in Eq. (120) is always greater than our bound for P a∈[K] |J a | > 4k 2 √ k −1 e −k n in Eq. (121) . This allows us to conclude the proof of the Lemma, as Eqs. (100) and (120) imply: P   I∈Ī (k ) |J I | > 4k 2 √ k − 1 e −k n   ≤ 2 exp − e −2k k 3 64(k − 1) n .(122)1 + Erf λ √ 2 .(123) The Chernoff bound then implies: P A a=1 |J a | ≥ A ≤ e λ 2 2 −λ 1 + Erf λ √ 2 A , P A a=1 |J a | ≤ A/2 ≤ e λ 2 2 + λ 2 1 − Erf λ √ 2 A .(124) Evaluating the two expressions at λ = 1 2 and λ = 1 respectively and using basic inequalities for the resulting constants, we obtain the two bounds claimed in the Lemma. F Moment bound for dense SYK-q In this Appendix, we establish the moment bound E A(1) 2n ≤ C √ n 2n , where A(1) is defined as (in Eq. (71)): A(1) = C 1 √ n n q − 1 −3/2 n j,k,l=1 S,S ,S J S,j J S ,k J S ,l f (S, S , S , j, k, l).(125) The function f in this expression is defined as (in Eq. (68)): f (S, S , S , j, k, l) =                1, if (|S ∩ S| is odd) ∧ (|S ∩ (S S)| + δ k,l is odd) ∧ (|S| = |S | = |S | = q − 1), 0, otherwise.(126) We classify the terms in the sum in Eq. (125) into five classes whose total contributions to the sum are denoted by D 0 , D 1 , D 2 , D 3 and D 4 . D 0 comprises of all terms for which the three J's are distinct. We shall therefore call the call the D 0 contribution the diagonal-free contribution. D 1 comprises of all terms for which the three J's are equal. 
$D_2$, $D_3$ and $D_4$ comprise all terms for which exactly two out of the three J's are equal. Taking f into account, and thereby the terms that actually appear in A(1), we conclude that the terms appearing in each class $D_0, D_1, D_2, D_3, D_4$ correspond to the index sets given in Table 1. To upper bound the (2n)th moment of A(1), we upper bound the rth moments (for even $r \le 16 \cdot 2n$) of $D_0, D_1, D_2, D_3, D_4$ separately. In particular, if $E[(D_i)^r] \le (C\sqrt{n})^r$ for $i = 0, 1, \ldots, 4$ and all even $r \le 16 \cdot 2n$, then $E[A(1)^{2n}] \le (C\sqrt{n})^{2n}$. Note that through the multinomial expansion and successive application of the Cauchy-Schwarz inequality these former bounds indeed give an upper bound on the (2n)th moment of A(1): $E[A(1)^{2n}] = E[(D_0 + D_1 + D_2 + D_3 + D_4)^{2n}] = \sum_{k_0+\ldots+k_4=2n} \frac{(2n)!}{k_0!\ldots k_4!} E[D_0^{k_0} D_1^{k_1} \ldots D_4^{k_4}] \le \sum_{k_0+\ldots+k_4=2n} C^{2n} E[D_0^{2k_0}]^{1/2} E[D_1^{4k_1}]^{1/4} E[D_2^{8k_2}]^{1/8} E[D_3^{16k_3}]^{1/16} E[D_4^{16k_4}]^{1/16} \le \sum_{k_0+\ldots+k_4=2n} C^{2n} (C\sqrt{n})^{k_0+\ldots+k_4} \le (C\sqrt{n})^{2n}$, (127) where we have used that the multinomial coefficient can be upper bounded by $C^{2n}$ and that the number of 5-tuples of non-negative integers whose sum equals 2n is upper bounded by $Cn^4$ (which is smaller than $C^{2n}$ for some constant C). Although the rth moments of e.g. $D_0$ only have to be bounded for even $r \le 2 \cdot 2n$, we bound, for the sake of clarity, the rth moments for even $r \le 16 \cdot 2n$ for all $D_i$'s. We first deal with the case of $D_0$, since the fact that this contribution is diagonal-free allows one to employ a decoupling technique. Afterwards, we will consider the $D_1$, $D_2$, $D_3$ and $D_4$ contributions. First, we state the following lemma, which will be useful throughout this appendix.

Lemma 32. Let P and P′ be two polynomials of centered Gaussian random variables (i.e., the monomials are formed by products of elements from a sequence of independent centered Gaussian random variables, and each variable is allowed to appear in a monomial multiple times) with non-negative coefficients.
Then, for any even r, E |P + P | r ≥ E |P | r . Proof. We have E |P + P | r = E |P | r + r k=1 r k E P r−k (P ) k , and E P r−k (P ) k is non-negative (for any integers r, k) since P and P have non-negative coefficients and all moments of centered Gaussian random variables are non-negative. F.1 Upper bound for moments of D 0 (diagonal-free contribution) We start by noting that the function f takes on values 0 or 1, dependent on the index sets S, S , S , j, k, l labeling the Majorana operators. We consider replacing f in each term of D 0 (Eq. (125)) with δ a,b δ c,d , where either a ∈ (S ∪ k), b = c ∈ (S ∪ l), d ∈ (S ∪ j), (option 1) or a ∈ (S ∪ k), b = c ∈ (S ∪ j), d ∈ (S ∪ l). (option 2) We denote this modified sum as D 0,δδ . By inspection, the index sets for which f is non-zero all correspond to a non-zero contribution for δ a,b δ c,d . Note that those index sets for which δ a,b δ c,d is non-zero also include index sets for which f is zero. Hence, the terms associated with non-zero δ a,b δ c,d (for the two options listed above) are a superset of the terms that correspond to non-zero values of f . Therefore, by Lemma 32, the upper bounds on even moments of D 0 can be obtained by upper bounding the even moments of D 0,δδ . We will denote the part of the sum D 0,δδ corresponding to option 1 as D 0,min : D 0,min := C 1 √ n n q − 1 −3/2 j,k,l, S,S ,S , s.t. |(S ∪k)∩(S ∪l)|≥1 and |(S ∪l)∩(S∪j)|≥1 J S,j J S ,k J S ,l ,(128) where the sum is over indices such that (S, j) = (S , k) = (S , l) = (S, j) (by definition of D 0 ) and such that (S ∪ k) ∩ (S ∪ l) and (S ∪ l) ∩ (S ∪ j) differ by at least one element. Any bound for all even moments of D 0,min also holds for D 0,δδ − D 0,min which corresponds to option 2, due to the symmetry (S, j) ↔ (S , l) between the two options. An upper bound on all even moments of D 0,δδ (and, by implication, D 0 ) then follows from binomial expansion and application of the Cauchy-Schwarz inequality, similarly to Eq. (127). 
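As an aside, the repeated Cauchy-Schwarz step used in these moment bounds (cf. Eq. (127)) rests on $E|AB| \le (E A^2)^{1/2}(E B^2)^{1/2}$, which holds verbatim for empirical averages as well; the following small Python check is ours (the sample size and the choice $B = A^4$ are arbitrary illustrations):

```python
import math
import random

random.seed(11)
a = [random.gauss(0, 1) for _ in range(100000)]
n = len(a)
# Empirical E|A * A^4| versus its Cauchy-Schwarz bound
# sqrt(E A^2) * sqrt(E A^8); the inequality is deterministic
# for sample averages, not just in expectation.
lhs = sum(abs(x) ** 5 for x in a) / n
rhs = math.sqrt(sum(x * x for x in a) / n) * math.sqrt(sum(x ** 8 for x in a) / n)
assert lhs <= rhs
```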
Thus it only remains to prove E |D 0,min | r < (C √ n) r for all even r. To upper bound the even moments of D 0,min , we are going to employ a decoupling technique. To that end, we will study the even moments of a related decoupled quantity. This decoupled quantity is defined as D 0,min but with the standard Gaussian random variables J S,j , J S ,k and J S ,l (selected from a single sequence of standard Gaussian random variables) being replaced by their decoupled versions J (1) S,j , J (2) S ,k and J (3) S ,l (selected from three independent sequences of standard Gaussian random variables). The related decoupled quantity is given by (where the sum is again over indices (S, j) = (S , k) = (S , l) = (S, j) and again such that (S ∪ k) ∩ (S ∪ l) and (S ∪ l) ∩ (S ∪ j) differ by at least one element): C 1 n 3q/2−1 j,k,l, S,S ,S , s.t. |(S ∪k)∩(S ∪l)|≥1 and |(S ∪l)∩(S∪j)|≥1 J (1) S,j J (2) S ,k J (3) S ,l ,(129) where we have additionally used that (k/l) l ≤ k l ≤ (e k/l) l . To upper bound the even moments of this decoupled quantity, we will make use of Lemma 33 below from [31]. The even moments of this decoupled sum are upper bounded by upper bounding the even moments of a decoupled sum whose terms are a superset of the terms in the sum in Eq. 129. Through Lemma 32, the even moments of the latter sum are larger than those of the former sum. For each J (i) S,j , we introduce q! − 1 additional independent standard Gaussian random variables associated with the q! permutations of the indices in the subsets of size q. Furthermore, we introduce additional independent standard Gaussian random variables for which some or all of the q indices that label them are equal. We consider a sum over lists of 3q indices (which label the independent standard Gaussian random variables) i (1) 1 , . . . , i (1) q , i(2) 1 , . . . , i (2) q that are summed over each have any one index (denoted by resp. 
x and y) that is equal to an index in i 3,7 that would appear once in the sum over lists of indices but appears twice in the sum in Eq. (130) (once for p 1 = 1 and once for p 1 = 2). Through Lemma 32, the even moments of the sum in Eq. (130) will therefore be larger than those of the sum over lists of indices (and therefore larger than those of the sum in Eq. (129)), and it will thus suffice to upper bound the even moments of the sum in Eq. (130). p 1 −1 ,x, i (1) p 1 +1 ,..,i (1) q =1 2n i (2) 1 ,..,i(1)p 2 −1 ,y, i (2) p 2 +1 ,..,i (2) q =1 2n i (3) 1 ,..,i(2)p 3 −1 ,x,i(3)p 3 +1 ,.., i (3) p 4 −1 ,y,i (3) p 4 +1 ,..,i (3) q =1 J (1) i (1) 1 ,..,i(3)p 1 −1 ,x, i (1) p 1 +1 ,..,i (1) q J (2) i (2) 1 ,..,i (2) p 2 −1 ,y, i (2) p 2 +1 ,..,i (2) q J (3) i (3) 1 ,..,i (3) p 3 −1 ,x,i(1) q ) can be summed over to obtain new independent standard Gaussian random variables denoted by K (1) x,p1 , K (2) y,p2 and K (3) x,p3;y,p4 : K (1) x,p1 := 1/ (2n) q−1 2n i (1) 1 ,..,i(1)p 1 −1 ,i(1)p 1 +1 ,..,i (1) q =1 J (1) i (1) 1 ,..,i(1)p 1 −1 ,x,i(1)p 1 +1 ,..,i (1) q ,(131a)K (2) y,p2 := 1/ (2n) q−1 2n i (2) 1 ,..,i(2)p 2 −1 ,i(2) p 2 +1 ,..,i (2) q =1 J (2) i (2) 1 ,..,i(2)p 2 −1 ,y,i(2)p 2 +1 ,..,i (2) q ,(131b)K (3) x,p3;y,p4 := 1/ (2n) q−2 2n i(3) 1 ,..,i p 3 −1 ,i(3) p 3 +1 ,.., i p 4 −1 ,i(3)p 4 +1 ,..,i (3) q =1 J (3) i (3) 1 ,..,i(3)p 3 −1 ,x,i(3)p 3 +1 ,..,i(3)p 4 −1 ,y,i(3)p 4 +1 ,...,i (3) q ,(3) where we have used that the normalized sum 1/ √ m m i=1 J i of a sequence of standard Gaussian random variables J 1 , . . . , J m is again a standard Gaussian random variable. We now obtain the following expression for D decoupled K (1) x,p1 K (2) y,p2 K (3) x,p3;y,p4 .(132) The sum over all free indices gives an extra total factor of n 3q/2−2 , which partially cancels against n 3q/2−1 in Eq. (130). Importantly, we note that now the random variables K (1) x,p1 and K (1) x ,p1 are independent for x = x (and equivalently for K (2) y,p2 and K (3) x,p3;y,p4 ). 
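The normalization step in Eq. (131), i.e. that $\frac{1}{\sqrt{m}}\sum_{i=1}^{m} J_i$ of i.i.d. standard Gaussians is again a standard Gaussian, can be sanity-checked with a quick Monte Carlo estimate of its mean and variance (the parameters below are arbitrary choices of ours):

```python
import math
import random

random.seed(7)
m, trials = 64, 20000
# Draw 'trials' samples of (1/sqrt(m)) * (sum of m standard Gaussians)
samples = [sum(random.gauss(0, 1) for _ in range(m)) / math.sqrt(m)
           for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
# Mean ~ 0 and variance ~ 1, consistent with a standard Gaussian
assert abs(mean) < 0.05 and abs(var - 1) < 0.05
```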
We will apply Lemma 33 from [31] separately to each contribution to D decoupled 0,min in Eq. (132) (with a contribution corresponding to one combination of p i 's). Lemma 33 (Theorem 1 in [31]). Let Y ∈ R N ×...×N be a d-dimensional matrix and define: F {K (j) 1 , . . . , K (j) N } d j=1 := N i1,...,i d =1 Y i1,...,i d d j=1 K (j) ij ,(133)E F {K (j) 1 , . . . , K (j) N } d j=1 k ≤ C P k |P|/2 Y P k ≤ C max P k |P|/2 Y P k ,Y P = Y (P1,...,Ps) := max N i1,...,i d =1 Y i1,...,i d x (1) i P 1 . . . x (s) i Ps : i P 1 x (1) i P 1 2 ≤ 1, . . . , i P k x (k) i P k 2 ≤ 1 ,(134) with each x ∈ R. Remark. If F in Eq. (133) is diagonal-free (i.e., Y i1,...,i d = 0 if i j = i k for any j = k)E (F ) r ≤ CE (F ) r . See e.g. Theorem 2.1 in [32]. The fact that this decoupling inequality only holds for diagonal-free polynomials is exactly the reason for differentiating between the diagonal-free contribution D 0 and the diagonal contributions D 1 , D 2 , D 3 , D 4 to A (1). For 2n x,y=1 K (1) x,p1 K (2) y,p2 K x,p3;y,p4 in Eq. (132), we see that d = 3 and hence the possible partitions P are {1, 2, 3}, {1}{2, 3}, {2}{1, 3}, {1, 2}{3}, {1}{2}{3}. The associated Y P values can be (straightforwardly) calculated and are given in Table 2. Using Table 2 and Lemma 33, we find the following upper bound on E C/n 2n x,y=1 K (1) x,p1 K (2) y,p2 K (3) x,p3;y,p4 r (for all even r): C/n max √ rn, r √ n, r √ n, r, r 3/2 r ≤ C √ n r .(135) Note that D decoupled 0,min in Eq. 132 consists of q 4 (with q = O(1)) contributions, each corresponding to a given combination of p i 's. 
We can again use the multinomial expansion and successive application of the Cauchy-Schwarz inequality (together with the fact that the multinomial coefficients can be upper bounded by C r and that the number of q 4 -tuples of non-negative integers whose sum equals r is upper bounded by C r for some constant C) to conclude that the upper bounds of (C √ n) r for rth moments (for all even r) of these contributions imply an upper bound of (C √ n) r for rth moments (for all even r) of D decoupled 0,min . We now employ the decoupling inequality from the above remark to obtain E |D 0,min | r ≤ CE D decoupled 0,min r ≤ C √ n r . From the arguments given previously, this implies the desired bound E |D 0 | r ≤ C √ n r for all even r, in particular for r ≤ 16 · 2n. Table 2: The different partitions P of [3] into non-empty parts, with the associated number of parts |P|, and the associated Y P for 2n x,y=1 K (1) x,p1 K (2) y,p2 K x,p3;y,p4 in Eq. (132). Y P for the first four partitions can be straightforwardly evaluated by applying Eq. (134) to Eq. (129), and the fifth Y P can be evaluated by additional application of the Cauchy-Schwarz inequality. In the previous section we used a decoupling inequality to upper bound the rth moments (for even r ≤ 16 · 2n) of D 0 . These decoupling inequalities hold for (Gaussian) polynomials for which each Gaussian monomial is a product of distinct Gaussian random variables, i.e., diagonal-free polynomials. This holds indeed -by definition -for the D 0 contribution to A(1), but not for contributions D 1 , D 2 , D 3 and D 4 . For that reason, we cannot make use of the same decoupling inequality for the D 1 , D 2 , D 3 and D 4 contributions. Therefore, we have to resort to other methods to bound their rth moments (for even r ≤ 16 · 2n). 
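These direct methods repeatedly use elementary Gaussian moment facts, in particular $E[J^p] = (p-1)!!$ for even p; a short Monte Carlo check of ours (the sample size is an arbitrary choice):

```python
import math
import random

random.seed(3)
N = 400000
samples = [random.gauss(0, 1) for _ in range(N)]
for p in (2, 4, 6):
    emp = sum(s ** p for s in samples) / N
    exact = math.prod(range(p - 1, 0, -2))   # (p-1)!! = 1, 3, 15
    # Empirical moment agrees with the double factorial to within 10%
    assert abs(emp - exact) / exact < 0.1
```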
• The D 1 contribution can be written as: D 1 := C 1 √ n n q − 1 −3/2 j,S J S,j 3 .(136) The rth moment (with r even) of D 1 can be upper bounded as follows: E |D 1 | r ≤ C r 1 √ n n q − 1 where we have used that E m i=1 K i r = k1+...+km=r r! k1!...km! E K 1 ) k1 . . . E (K m ) km (for K 1 , . . . , K m independent random variables), the fact that (S, j) can take on 2n 2n q−1 values and the fact that the pth moment of a standard Gaussian random variable is equal to (p − 1)!! (≤ p p/2 ). For even r, we therefore conclude that E |D 1 | r ≤ C √ n r . • The D 2 and D 3 contributions are equivalent and can be written as: D 2 = D 3 := C 1 √ n n q − 1 −3/2 j,S,S s.t. S =S J S,j 2 J S ,j .(138) The rth moment (with r even) can be written as follows: E |D 2 | r , E |D 3 | r = C r 1 √ n n q − 1 for which E(g) = 0. We note that g is a homogeneous polynomial in standard Gaussian random variables of degree 3. To upper bound the moments of g, and thereby the moments of D 2 and D 3 , we use the following result from [33]. This result is an extension of Lemma 33 from [31] to the setting where diagonal terms are allowed to appear in the polynomial. The extension also includes inhomogeneous polynomials, although in the current setting we are considering only homogeneous polynomials. Lemma 34 (Theorem 1.3 in [33]). Let K := K 1 . . . , K N denote a sequence of N independent standard Gaussian random variables and g : R N → R a polynomial of degree D. Then, for all r ≥ 2: For g in Eq. (140), we have that N = n n q−1 , since the sequence of Gaussian random variables corresponds to {J S,j }. To find an upper bound for the rth moment of g using Eq. (141), we first calculate D d g for d = 1, 2, 3. Then, for each d, we upper bound E D d g P for all partitions P of [d]. 
We will show that for all d and associated partitions P([d]), E D d g P can be upper bounded in such a way that E g(K) − E g(K) r ≤ C r 1≤d≤3 P([d]) r |P|/2 E D d g(K) P r ,(141)E |D 2 | r , E |D 3 | r ≤ C √ n r for all even 2 ≤ r ≤ 16 · 2n. Finally, the 0th moment also (trivially) satisfies this upper bound, hence it holds for all even r ≤ 16 · 2n. The derivatives of g are equal to: (144) In Table 3, we give the values of E D d g P for all partitions P([d]) for d = 1, 2, 3. E D d g P for d = 1 can be straightforwardly evaluated using Eq. (134) and for d = 2 can be trivially evaluated by using E D 2 g = 0. For d = 3, E D d g P can be upper bounded using Eq. (134), and the triangle and Cauchy-Schwarz inequalities (for illustration purposes, we provide an example of the derivation of this upper bound for P = {1, 2}{3} below). for which E(h) = 0. We note that h is a homogeneous polynomial in standard Gaussian random variables of degree 3. To upper bound the moments of g, and thus the moments of D 4 , we again use Lemma 32 from [33]. We use Eq. J S ,p (S,j) =⇒ E Dh ≤ Cn q−1 (S,j) ,(150) where the sum over p runs from 0 to n and we have used the bounds on |σ(S)|. Note that this is a pointwise upper bound on the entries of the vector E Dh , which will be enough to bound the corresponding norm. In Combining the upper bounds for E D d h P in Table 4 with the factor of r |P|/2 (≤ Cn |P|/2 ) in Eq. (141) and the normalization factor in Eq. (148), we find -using E(h) = 0 -that indeed E |D 4 | r ≤ C √ n r for all even r ≤ 16 · 2n. In conclusion, we have shown that the rth moments (for even r ≤ 16 · 2n) of D 0 , D 1 , D 2 , D 3 and D 4 can be upper bounded by C √ n r , and hence, by Eq. (127), the (2n)th moment of A(1) can be upper bounded by C √ n 2n . Thereby, we have also established that the second condition in Eq. (72) is satisfied. G Two-colored SYK to standard SYK In this Appendix, we give the proof of Lemma 27. Lemma 17 . 17Let the interaction set I be diffuse w.r.t. 
I ⊃ I (I and I are strictly q-local and k-sparse). If 2n > k(q 2 − 1), one can efficiently construct a matching M of the set [2n] that is consistent with each interaction in I and inconsistent with each interaction in I\I . Lemma 18 . 18Let H = I∈I J I C I be strictly q-local and I be a diffuse subset of I. Let M be a matching of [2n] as guaranteed by Lemma 17. One can efficiently construct a Gaussian state ρ I with the property: Lemma 22 ( 22Lemma 6 of [11]). For H andH introduced above, λ max (H) = λ max (H). Moreover, for any Gaussian stateρ of 2n + 2 Majorana modes, one can efficiently compute a Gaussian state ρ of 2n Majorana modes s.t. Tr(Hρ) ≥ Tr(Hρ). ⊂ I(2) ∪ I(4) , we construct the Gaussian statesρ(I(4) α ) in a different way. First we use Lemma 20 to construct a matching M (I (4) α ) of [2n]. This matching is guaranteed to be consistent Figure 2 : 2Demonstration of the method in the proof of Theorem 6. (a) Matching M (I (2) term. From the perspective ofH, it is not consistent with M (I (4) α ) although it coincides with an edge from M (I (4)α ). This is due to the intentional absence of the edge (2n + 1. However, since I (2) is 2-local and I (4) is 4-local, while in general Sup(I(2) ) ∩ Sup(I α ) of [2n + 2] in two stages. First we construct an intermediate matching M (I (4) α ) of [2n+2]\{i 1 , i 2 , 2n+1, 2n+2} by removing the edge from M ( α ) on [2n + 2] fermions at hand, we finalize the proof by an application of Lemma 22. This relates λ max (H) to λ max (H) and allows us to efficiently construct the Gaussian state ρ(I (q) α ) of [2n], with the desired property: terms, one for each matching M of some subset of indices I, i.e. Tr(Hρ) = M Tr(H M ρ) where H M = IJ (M, I)q/2 t=1 Γ i2t−1(M ),i2t(M ) . We have defined the q/2-way, d × d × . . . × d, tensorJ(M, I), whose entries are equal to either zero (when the indices coincide or are not ordered properly) or to a standard Gaussian random variable. 
Each J_I appears only once in Tr(H_M ρ) and therefore all entries of J̃(M, I) are statistically independent. We note that sign(M) does not depend on which (ordered) subset I one chooses. To bound each term Tr(H_M ρ), with high probability, we invoke the following Lemma:

Lemma 25 (Theorem 1 in [...]). [...]

Lemma (Repetition of Lemma 10). For the class of q-local SYK Hamiltonians (with even q ≥ 4) in Eq. (2), λ_max(H) = Ω(√n) with probability at least 1 − exp(−Ω(n)) over the draw of Hamiltonians.

For the class of q-local 2-colored SYK Hamiltonians (with even q ≥ 4) in Eq. (55), defined in terms of these Majorana operators, the maximum eigenvalue λ_max(H) is lower bounded by C√n (with C a constant) with probability at least 1 − exp(−Ω(n)) over the draw of Hamiltonians.

[...] + iσ_j χ_j, which achieves Tr(Hρ_0) = √n/2. The idea is now to construct a new state ρ_θ, obtained by applying a unitary transformation to ρ_0, and to find a lower bound for the expectation value of H^(2) w.r.t. ρ_θ:

ρ_θ := e^{−θζ} ρ_0 e^{+θζ}, where ζ := Σ_{j=1}^{n/2} τ_j σ_j and θ ∈ R.

E‖[ζ, [ζ, H^(2)]]‖ ≤ C√n and Pr( ‖[ζ, [ζ, H^(2)]]‖ ≥ 2C√n ) ≤ exp(−Ω(n)), (74)

which is the desired result. Combining Eq. (60), Eqs. (63), (64) and Eq. (74), we conclude that there exists a θ = O(1) such that [...]

Lemma 27. For the class of q-local SYK Hamiltonians (with even q ≥ 4) in Eq. (2), ρ_θ (defined in Eq. (58)) achieves Tr(Hρ_θ) ≥ C√n with probability at least 1 − exp(−Ω(n)) over the draw of Hamiltonians, provided that ρ_θ achieves Tr(H^(2) ρ_θ) ≥ C√n (with H^(2) the 2-colored SYK Hamiltonian defined in Eq. (55)) with probability at least 1 − exp(−Ω(n)) over the draw of 2-colored Hamiltonians.

Lemma 29. Let H = Σ_{I∈I} J_I C_I, where the {C_I} are a set of all-mutually anti-commuting Majorana operators on [2n] (each C_I has even support). Then

λ_max(H) = √( Σ_I J_I² ). (78)

Proof. We have H = Σ_I J_I C_I = √(Σ_I J_I²) Σ_I β_I C_I with Σ_I β_I² = 1.
Take the state ρ = (1/2^n)(I + Σ_I β_I C_I), and thus Tr(Hρ) = √(Σ_I J_I²) Σ_I β_I² = √(Σ_I J_I²). This is the maximal eigenvalue that can be reached, since one can map each C_I onto a single Majorana operator c_{i(I)}, as these sets form identical algebras. Then we can use the normalization of the β_I to view Σ_I β_I c_{i(I)} = c̃_1 with a single Majorana operator c̃_1 (this is an example of the transformation in Eq. (10)). A single Majorana c̃_1 has spectrum ±1 and hence the (hugely degenerate) spectrum of H is simply ±√(Σ_I J_I²).

We define the subsets I^(2q′)_α (q′ = 1, .., q/2) by restricting to the strictly 2q′-local part of I_α. This gives a splitting of I into efficiently constructable subsets I^(2q′)_α (cf. Definition 15). The rest of the proof is concerned with the third condition of a diffuse set in Definition 15, for all sets I^(2q′)_α. This means ensuring that for all values of α and q′, the support size |Sup(I^(2q′)_α)| is smaller than 2nq′/(q′+1). Fix q′ and consider the sets I^(2q′)_α for all α ∈ [Q′+1]. Consider the case where |Sup(I^(2q′)_α)| < 2nq′/(q′+1) does not hold for at least one value of α, which we set to be α = Q′+1 without loss of generality. Let us prove that the violation |Sup(I^(2q′)_β)| ≥ 2nq′/(q′+1) cannot hold for any β ≠ Q′+1. Firstly, no interaction I from I^(2q′)_β can be a strict subset of an interaction in I^(2q′)_{Q′+1}, or share Majoranas with two terms in I^(2q′)_{Q′+1} simultaneously. The first scenario is excluded since I^(2q′)_{Q′+1} and I^(2q′)_β are both strictly 2q′-local, and the second scenario is excluded because I^(2q′)_{Q′+1} satisfies condition 2 of Definition 15. From these two facts it follows that each interaction in I^(2q′)_β must involve at least one Majorana from [2n]\Sup(I^(2q′)_{Q′+1}). This implies [...] for all q′ for which there exists a violation |Sup(I^(2q′)_{Q′+1})| ≥ 2nq′/(q′+1). Since q′/(q′+1) > 1/2 for any q′, this violation can be fixed by splitting I^(2q′)_{Q′+1} in half. Introduce non-overlapping sets Ĩ^(2q′)_{Q′+1} and Ĩ′^(2q′)_{Q′+1} of size |I^(2q′)_{Q′+1}|/2: I^(2q′)_{Q′+1} = Ĩ^(2q′)_{Q′+1} ∪ Ĩ′^(2q′)_{Q′+1}.
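The identity in Lemma 29 can be sanity-checked numerically. The sketch below is an illustration under simplifying assumptions: it uses the 2n single Jordan-Wigner Majorana operators, which are mutually anti-commuting and square to the identity, rather than the even-support products C_I of the lemma; the same algebra, H² = (Σ_I J_I²)·I, fixes the spectrum in both cases.

```python
import numpy as np

# Single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def majoranas(n_qubits):
    """Jordan-Wigner Majoranas c_1..c_{2n}: pairwise anti-commuting, c_i^2 = I."""
    cs = []
    for k in range(n_qubits):
        for P in (X, Y):
            cs.append(kron_all([Z] * k + [P] + [I2] * (n_qubits - k - 1)))
    return cs

rng = np.random.default_rng(0)
cs = majoranas(3)                       # 6 mutually anti-commuting operators
J = rng.normal(size=len(cs))            # random couplings J_I
H = sum(j * c for j, c in zip(J, cs))   # H = sum_I J_I C_I
lam_max = np.linalg.eigvalsh(H)[-1]
print(np.isclose(lam_max, np.sqrt(np.sum(J ** 2))))  # True
```

The minimal eigenvalue is likewise −√(Σ_I J_I²), and the spectrum is hugely degenerate, as the lemma states.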
By implication, |Sup(Ĩ^(2q′)_{Q′+1})| ≤ 2n/2 ≤ 2nq′/(q′+1), and similarly |Sup(Ĩ′^(2q′)_{Q′+1})| ≤ 2nq′/(q′+1). We conclude the construction by modifying the sets I^(2q′)_α for the considered q′: we redefine I^(2q′)_{Q′+1} ≡ Ĩ^(2q′)_{Q′+1}, and introduce one extra interaction set I^(2q′)_{Q′+2} ≡ Ĩ′^(2q′)_{Q′+1}. The proof can now be finalized. Performing the above procedure for all q′ where a violation was present, and completing the {I^(2q′)_α} without such violations with I^(2q′)_{α=Q′+2} = ∅, we arrive at the splitting [...].

Figure 3: Example of the construction from the proof of Lemma 20. (a) The q-local set of interactions I (q = 4). Highlighted in green is the diffuse and strictly q′-local I′ (q′ = 4), in red are the interactions in Sup(I′) with weight less than q′, and in grey are the rest of the interactions in I. The goal is to create a matching M consistent with the green-colored terms, inconsistent with the grey-colored terms, and with no guaranteed relation to the red-colored terms. (b) Matching M′ on Sup(I′), consistent with I′ by construction. Ensuring inconsistency with all red-colored terms is in general impossible: for example, consider the three overlapping red-colored terms at the top center. (c) Completing M = M′ ∪ M″ with a matching M″ on [2n]\Sup(I′), ensuring inconsistency with all grey-colored terms. For this, the vertices are matched only if they belong to different interactions.

First we construct a matching M′ of Sup(I′) (note that |Sup(I′)| is always even). Next, we construct a matching M″ of the remaining Majorana modes [2n]\Sup(I′). The desired matching of [2n] is the union M = M′ ∪ M″. To construct M′, we match the vertices of each I ∈ I′ in an arbitrary way: for every such I = {i_1, .., i_{q′}}, we put {i_{2l−1}, i_{2l}} ∈ M′ for l ∈ [1, .., q′/2]. This matching is always possible, since I′ is diffuse and thus different interactions from I′ do not overlap. The matching M′ thus constructed (and therefore also M = M′ ∪ M″) is explicitly consistent with all I ∈ I′.

Lemma (Repetition of Lemma 21).
Let H = Σ_{I∈I} J_I C_I on [2n′] be q-local and I′ be a diffuse subset of I. Consider a matching M of [2n′]. If M is consistent with I′ and inconsistent with I\I′, one can efficiently construct a Gaussian state ρ_I′ with the property: Tr(Hρ(I′)) = Σ_{I∈I′} |J_I|.

Lemma [...]. Let the couplings {J_I} be those of the SSYK-4 model with average degree k. With probability at least 1 − 2e^{−kn/32} we have

Σ_{I∈I} |J_I| ≥ kn/8. (92)

Proof. The random variable Σ_{I∈I} |J_I| is a function of two sets of random variables. The first set is X_I ∈ {0, 1} for all possible 4-Majorana interactions I ⊂ [2n], |I| = 4, indicating the presence of I in I. Denoting [...] Σ_i (k_i − k′) 1_{k_i > k′}). Due to the factor 1_{Z > Z^I}, the quantity (Z − Z^I)² 1_{Z > Z^I} in this case is zero, in agreement with Eq. (111). The last case is (X_I, X′_I) = (1, 0). As any interaction I only involves 4 fermions, the reduction of the total excess degree 2n(Z_{X_I=1} − Z_{X_I=0}) is at most equal to 4, independent of the rest of the variables {X_I}. Therefore (Z − Z^I)² 1_{Z > Z^I} for (X_I, X′_I) = (1, 0) is at most equal to 4/n², proving Eq. (111). From Eq. (111) it follows that E_{X}[(Z − Z^I)² 1_{Z > Z^I}] ≤ (4/n²) 1_{X_I=1}, which we can use to bound V({X_I}). From the definition stated in Eq. (109) we get: [...]

Figure 4: Illustration of examples of index sets (S, j), (S′, k) and (S″, l) (corresponding to non-zero values of f in Eq. (126)) associated with the different classes of contributions to A(1) in Eq. (125). The D_0 contribution in (a) is the diagonal-free contribution (i.e., (S, j), (S′, k) and (S″, l) are unequal). The D_1, D_2, D_3 and D_4 contributions in resp. (b), (c), (d) and (e) are the diagonal contributions (i.e., at least two of (S, j), (S′, k) and (S″, l) are equal).

An illustration of examples of the index sets (S, j), (S′, k) and (S″, l) associated with these different classes of contributions to A(1) is given in Figure 4.
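The constant kn/8 in Eq. (92) is comfortably below the mean: the expected number of present interactions is p·(2n choose 4) = kn/2, and E|J_I| = √(2/π) ≈ 0.80 for a standard Gaussian, so E Σ_I |J_I| ≈ 0.4·kn. A Monte Carlo sketch of the event {Σ_I |J_I| ≥ kn/8} (illustrative only; the values of n, k and the trial count are arbitrary choices, not from the paper):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
n, k, trials = 20, 4, 2000
p = k / comb(2 * n - 1, 3)        # Pr(X_I = 1), as in the SSYK-4 model
num_subsets = comb(2 * n, 4)      # number of candidate 4-Majorana interactions

hits = 0
for _ in range(trials):
    m = rng.binomial(num_subsets, p)            # number of interactions present
    total = np.abs(rng.normal(size=m)).sum()    # sum_I |J_I| over present terms
    hits += total >= k * n / 8
print(hits / trials)  # empirical Pr(sum_I |J_I| >= kn/8); essentially 1
```

The empirical frequency is essentially 1, consistent with the 1 − 2e^{−kn/32} guarantee.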
class | associated index sets of terms | associated index sets of terms in A(1)
D_0 | (S, j) ≠ (S′, k) ≠ (S″, l) ≠ (S, j) | (S, j) ≠ (S′, k) ≠ (S″, l) ≠ (S, j)
D_1 | (S, j) = (S′, k) = (S″, l) | (S, j) = (S′, k) = (S″, l)
D_2 | (S, j) = (S″, l) ≠ (S′, k) | S = S″ ≠ S′, j = l = k
D_3 | (S, j) = (S′, k) ≠ (S″, l) | S = S′ ≠ S″, j = k = l
D_4 | (S′, k) = (S″, l) ≠ (S, j) | (S′, k) = (S″, l) ≠ (S, j)

If we additionally sum over all 'positions' of the x and y indices (where p_1, p_2, p_3 and p_4 ∈ {1, . . . , q} label these positions), we obtain the sum (see Eq. (130) below) whose terms are a superset of those in the sum in Eq. (129). Note that this sum in Eq. (130) contains all the contributions from a sum over lists of indices, and contains some terms multiple times that would occur only once in a sum over lists of indices: for example, in the hypothetical case q = 2, one could have a contribution J [...]

F.2 Upper bound for moments of D_1, D_2, D_3 and D_4

[...] ( Σ_{S,S′ s.t. S′≠S} J²_{S,j} J_{S′,j} )^r. (139)

We define

g := Σ_j Σ_{S,S′ s.t. S′≠S} J²_{S,j} J_{S′,j}, (140)

with first derivative

(Dg)_{(S,j)} = Σ_{S′: S′≠S} J²_{S′,j} + 2 J_{S,j} Σ_{S′: S′≠S} J_{S′,j},

second derivative

(D²g)_{(S,j),(T,k)} = { 2 Σ_{S′: S′≠S} J_{S′,j}, if (S, j) = (T, k); 2(J_{T,j} + J_{S,j}), if S ≠ T and j = k; 0, if j ≠ k } ⟹ E D²g = 0,

and third derivative

(D³g)_{(S,j),(T,k),(U,l)} = { 2, if (S = T ≠ U or S = U ≠ T or T = U ≠ S) and j = k = l; 0, if (S = T = U) and j = k = l; 0, if j, k, l are not all equal } ⟹ E D³g = D³g.

The derivatives of h are analogous, with the free sum over S′ ≠ S replaced by a restricted sum over S′ ∈ σ(S) and p: [...] ⟹ E D³h = D³h.

Theorem 6. Let H be a traceless fermionic Hamiltonian on 2n Majorana operators with maximal eigenvalue λ_max(H). If H is k-sparse with terms of weight 2 and 4, and 2n > 15k, a Gaussian state ρ can be efficiently constructed such that

Tr(Hρ) / λ_max(H) ≥ 1/(2Q), (4)

for Q = 12(k − 1)² + 4(k − 1) + 2.
2.3 The sparse q = 4 SYK model

In view of Theorem 5, it is worth revisiting the lack of a constant Gaussian approximation for the SYK model. A q-local (with q even) SYK model (SYK-q) on 2n Majoranas is defined as a family of Hamiltonians

H = (2n choose q)^{−1/2} Σ_{I⊆[2n], |I|=q} J_I C_I.

For both the SYK and the sparse SYK models, the normalization of H is chosen such that E Tr(H²) = 1. Unlike the full SYK model with (2n choose q) terms in H, the sparse SYK model has a number of terms ∼ n in expectation. Note that the SSYK-4 model is only k-sparse in expectation, and with high probability there is a Majorana operator with degree Ω(log(n)/log log(n)) (the degree distribution follows that of an Erdos-Renyi hypergraph; see Theorem 3.4 in [21] for a proof of the statement for Erdos-Renyi graphs; the hypergraph version follows by the same logic). This means that Theorem 5 does not directly apply. However, one can show, through a truncation argument, that almost all instantiations of SSYK-4 can be sparsified, giving rise to a constant approximation ratio result that holds with high probability.

The sparse SYK-4 or SSYK-4 model on 2n Majorana operators with expected degree k = O(1) is given as

H = (1/√(2kn)) Σ_{I⊂[2n], |I|=4} X_I J_I C_I, (6)

where the X_I are i.i.d. Bernoulli random variables with p = Pr(X_I = 1) = k/(2n−1 choose 3) and the J_I are i.i.d. Gaussian random variables with mean 0 and variance 1.

Theorem 8. Let H be a SSYK-4 Hamiltonian as in Eq. (6) with expected degree k = O(1), such that n > 60(k + 1). With probability at least 1 − 4 exp(−(e^{−16(k+1)} k³ / (64(8k+7))) n), a Gaussian state ρ can be efficiently constructed such that [...]

Fermionic Gaussian state. Given 2n Majorana operators denoted by c_1, . . . , c_2n, a fermionic Gaussian state is a (generally mixed) state of the form [...]. [The transformation ...] preserves the properties of Majorana operators and hence gives rise to a new set of 2n Majorana operators {c̃_j}_{j=1}^{2n}.

Definition 12. [...]
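The degree statistics mentioned above are easy to probe empirically: with p = k/(2n−1 choose 3), each of the 2n Majorana modes has expected degree exactly p·(2n−1 choose 3) = k, while the maximum degree over the modes fluctuates above k, as for an Erdos-Renyi hypergraph. A small sampling sketch (the values of n, k and the trial count are arbitrary illustration choices):

```python
import numpy as np
from itertools import combinations
from math import comb

rng = np.random.default_rng(2)
n, k, trials = 10, 3, 100
p = k / comb(2 * n - 1, 3)  # Pr(X_I = 1) from the SSYK-4 definition

mean_degs, max_degs = [], []
for _ in range(trials):
    # Each 4-subset I of [2n] is present independently with probability p.
    interactions = [I for I in combinations(range(2 * n), 4) if rng.random() < p]
    deg = np.zeros(2 * n, dtype=int)
    for I in interactions:
        deg[list(I)] += 1
    mean_degs.append(deg.mean())
    max_degs.append(deg.max())

print(np.mean(mean_degs))  # concentrates near k
print(np.mean(max_degs))   # noticeably larger than k
```

The gap between the typical maximum degree and k is exactly why the truncation argument is needed before Theorem 5 can be applied.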
It is useful to introduce a notion of consistency between this class of Gaussian states specified by a matching M and an interaction subset I.

Definition 13. An interaction I ⊆ [2n] and a perfect matching M on [2n] are called consistent if M contains a perfect matching of the elements of I. Given a set of interactions I, we say that M is consistent (resp. inconsistent) with I if M is consistent (resp. inconsistent) with each interaction in I.

The following Lemma is straightforward.

Lemma 14. Consider a matching M and an interaction I = {i_1, i_2, .., i_q}. [...]

Definition 15. Consider a set of q-local interactions I on 2n Majorana operators. A subset of these interactions I′ ⊂ I is diffuse with respect to I if the following three conditions apply:
1. ∀I_1, I_2 ∈ I′, I_1 and I_2 don't share any Majorana operators, i.e. I_1 ∩ I_2 = ∅.
2. ∀I_1, I_2 ∈ I′, there exists no I_3 ∈ I which shares Majorana operators with both I_1 and I_2 (if I_3 ∩ I_1 ≠ ∅ then I_3 ∩ I_2 = ∅ and vice versa).
3. The size of the support of I′, i.e. |Sup(I′)|, is smaller than 2qn/(q+1).

In the setting of Theorem 5, diffuse sets of terms appear naturally due to the following Lemma.

Lemma 16. Consider a k-sparse strictly q-local fermionic Hamiltonian H on 2n Majoranas. The interaction set I of H can be split into Q disjoint subsets I_α (α ∈ [Q]), all of which are diffuse with respect to I, such that [...]

Let H = Σ_{I∈I} J_I C_I on [2n′] be q-local and I′ be a diffuse subset of I. Consider a matching M of [2n′]. If M is consistent with I′ and inconsistent with I\I′, one can efficiently construct a Gaussian state ρ_I′ with the property: Tr(Hρ_I′) = [...]

Lemma 20 (Generalization of Lemma 17). Let a strictly q′-local I′ be diffuse w.r.t. a q-local k-sparse I on [2n], such that 2n > (q² − 1)k. One can efficiently construct a matching M of [2n] that is consistent with I′ and inconsistent with all interactions I ∈ I\I′ such that (1) |I| ≥ q′ or (2) I ⊂ Sup(I′).

Lemma 21 (Generalization of Lemma 18).
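Definition 13 and the matching construction behind Lemmas 17 and 20 can be made concrete in a few lines. The sketch below is a simplification: it pairs the leftover modes in index order, whereas the actual proofs pair leftover modes across different interactions so as to force inconsistency with the remaining terms.

```python
def consistent(M, I):
    """Definition 13: a matching M (list of pairs) is consistent with interaction I
    iff the M-partner of every mode in I lies in I as well."""
    partner = {}
    for a, b in M:
        partner[a], partner[b] = b, a
    return all(partner[i] in I for i in I)

def matching_for_diffuse(diffuse, n_modes):
    """Build M = M' u M'': pair modes inside each (pairwise disjoint) interaction
    of the diffuse set, then pair the leftover modes (here: in index order)."""
    M, used = [], set()
    for I in diffuse:
        I = sorted(I)
        M += [(I[t], I[t + 1]) for t in range(0, len(I), 2)]
        used |= set(I)
    rest = [i for i in range(n_modes) if i not in used]
    M += [(rest[t], rest[t + 1]) for t in range(0, len(rest), 2)]
    return M

diffuse = [{0, 1, 2, 3}, {6, 7, 8, 9}]   # disjoint 4-local interactions
M = matching_for_diffuse(diffuse, 12)
print(all(consistent(M, I) for I in diffuse))  # True
print(consistent(M, {3, 4, 5, 6}))             # False: partner of 3 is 2, not in I
```

Pairing within each interaction is always possible precisely because a diffuse set consists of pairwise disjoint interactions (condition 1 of Definition 15).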
Let H = Σ_{I∈I} [...]

7 The sparse SYK-4 model

Theorem (Repetition of Theorem 8). Let H be a SSYK-4 Hamiltonian as in Eq. (6) with expected degree k = O(1), such that n > 60(k + 1). With probability at least 1 − 4 exp(−(e^{−16(k+1)} k³ / (64(8k+7))) n), a Gaussian state ρ can be efficiently constructed such that [...]

Table 1: The index sets associated with each class of terms, and the index sets associated with each class of terms that appear in the expression for A(1) (i.e., taking f into account).

[The sums run over index lists] i^(1)_1, . . . , i^(1)_q; i^(2)_1, . . . , i^(2)_q; and i^(3)_1, . . . , i^(3)_q (with each index in [2n]), instead of the sum over subsets of [2n] in Eq. (129). Note that the sum over lists, by definition, can contain terms for which two (or three) of the Gaussian random variables have equal index sets.

Suppose {K^(j)_1, . . . , K^(j)_N}_{j=1}^d are d independent sequences of N standard Gaussian random variables. Then for any integer k ≥ 2, the moments of the 'decoupled'

F̃ := Σ_{i_1,...,i_d=1}^N Y_{i_1,...,i_d} Π_{j=1}^d K^(j)_{i_j} (133)

are (up to constants only depending on d) an upper bound for the moments of its 'coupled' counterpart

F(K_1, . . . , K_N) := Σ_{i_1,...,i_d=1}^N Y_{i_1,...,i_d} Π_{j=1}^d K_{i_j}

(i.e., where the random variables are all taken from the same sequence of N standard Gaussian random variables): [...], where the P are partitions of [d] into non-empty parts (P_1, . . . , P_s). The second inequality holds because the number of partitions of [d] into non-empty parts is constant in n (since d is constant in n). The quantity ‖Y‖_P (with Y a d-way tensor) is defined in Eq. (134). D^d g(K) denotes the dth derivative of g(K), which corresponds to a d-way tensor with entries equal to

(D^d g(K))_{i_1,...,i_d} = (∂/∂K_{i_1}) · · · (∂/∂K_{i_d}) g(K).

For d = D, D^d g(K) is constant.

Table 3: The different partitions P of [3] into non-empty parts, with the associated number of parts |P|, and (the upper bounds for) the associated ‖E D^d g‖_P for g in Eq. (140).
(141) to find an upper bound for the rth moment of h. We first calculate D^d h for d = 1, 2, 3. Then, for each d, we upper bound ‖E D^d h‖_P for all partitions P of [d]. Thereby, we show that for all d and associated partitions P([d]), ‖E D^d h‖_P can be upper bounded such that E|D_4|^r ≤ (C√n)^r for all even 2 ≤ r ≤ 16·2n. The 0th moment trivially satisfies this bound, and therefore it holds for all even r ≤ 16·2n. The derivatives of h are equal to:

(Dh)_{(S,j)} = Σ_{S′∈σ(S), p} J²_{S′,p} + 2 J_{S,j} Σ_{S′∈σ(S), p} [...]

In Table 4, we give the values of ‖E D^d h‖_P for all partitions P([d]) for d = 1, 2, 3. For d = 1, ‖E D^d h‖_P can be straightforwardly evaluated using Eq. (134), and for d = 2 it can be trivially evaluated by using E D²h = 0. For d = 3, ‖E D^d h‖_P can be upper bounded using Eq. (134), and the triangle and Cauchy-Schwarz inequalities. To obtain these upper bounds, we have again used the bounds on |σ(S)|.

P | |P| | ‖E D^d h‖_P
{1} | 1 | C (n^{q−1})^{3/2}
{1, 2} | 1 | 0
{1}{2} | 2 | 0
{1, 2, 3} | 1 | C n^{q−1/2}
{1}{2, 3} | 2 | C n^q
{2}{1, 3} | 2 | C n^q
{1, 2}{3} | 2 | C n^q
{1}{2}{3} | 3 | C n^{q/2}

Table 4: The different partitions P of [3] into non-empty parts, with the associated number of parts |P|, and (the upper bounds for) the associated ‖E D^d h‖_P for h in Eq. (149).

We denote Hamiltonians from the class of 2-colored SYK Hamiltonians by H^(2). Combining the upper bounds for ‖E D^d g‖_P in Table 3 with the factor of r^{|P|/2} (≤ (Cn)^{|P|/2}) in Eq. (141) and the normalization factor in Eq. (139), we find, using E(g) = 0, that indeed E|D_2|^r, E|D_3|^r ≤ (C√n)^r for all even r ≤ 16·2n.

Example: For illustration purposes, we give an explicit evaluation of ‖E D^d g‖_P for P = {1, 2}{3} (the evaluations for other P's follow using similar methods). By definition (Eq. (134)), we have: [...] Using the expression obtained for (E D³g)_{(S,j),(T,k),(U,l)}, we obtain: [...] where we have used the triangle inequality in the first inequality, and the Cauchy-Schwarz inequality for the second inequality (and we note that e.g.
Σ_U y_{(U,j)} is simply equal to the inner product of y^(j) := (y_{(U_1,j)}, y_{(U_2,j)}, . . .) with the all-ones vector).

• The D_4 contribution can be written as: [...]

We note that the main difference with the D_2 and D_3 contributions is that, for D_4, the sum is over the double index j, k (instead of over the single index j), and over a restricted sum over sets S, S′ (instead of over a free sum over sets S, S′). To bound the moments of D_4, we will employ a similar method as for D_2 and D_3. The rth moment (with r even) can be upper bounded as follows (where we drop the '|S ∩ S′| is odd' constraint using Lemma 32 and denote the collection of subsets S′ such that 0 < |S ∩ S′| < q − 1 by σ(S)): [...] (148)

We note that |σ(S)| can be upper bounded and lower bounded by Cn^{q−2} (for some constants C). We define [...]

Proof. To establish Lemma 27, we now show that the state ρ that achieves Tr(H^(2) ρ) ≥ C√n (with H^(2) defined in Eq. (55)), with probability at least 1 − exp(−Ω(n)), also achieves Tr(Hρ) ≥ C√n for the standard SYK Hamiltonian. To that end, we consider a standard SYK model with 2n Majorana operators and partition these Majorana operators into a subset of size 2n(q−1)/q and a complementary subset of size 2n/q. The standard SYK model Hamiltonian H, see Eq. (2), consists of (2n choose q) terms. These terms are labeled by all ordered subsets {j_1 < . . . < j_q}, and I denotes the collection of these subsets. The terms in H for which q − 1 Majorana operators are in the first subset, and the other Majorana operator is in the complementary subset, are labelled by ordered subsets {j_1 < . . . < j_q : j_{q−1} ≤ 2n(q−1)/q < j_q}. We denote the collection of these subsets by T. The collection of other subsets is denoted by T̄ = I\T. T and T̄ thus correspond to collections of terms in the Hamiltonian. We denote the Hamiltonian consisting of the collection T by H_T and the Hamiltonian consisting of terms T̄ by H_T̄; hence H = H_T + H_T̄.
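The Chernoff step applied in the continuation of this proof to the zero-mean, variance-at-most-one Gaussian variable Tr(H_T̄ρ) reduces to a standard fact: optimizing exp(t²/2 − tτ) over t (at t = τ) gives Pr(X ≥ τ) ≤ exp(−τ²/2), which dominates the exact Gaussian tail; with τ = C√n this is the exp(−Ω(n)) failure probability. A quick numerical confirmation:

```python
import math

def chernoff_tail(tau):
    # min over t >= 0 of exp(t**2 / 2 - t * tau), attained at t = tau
    return math.exp(-tau ** 2 / 2)

def exact_gauss_tail(tau):
    # Pr(X >= tau) for X ~ N(0, 1)
    return 0.5 * math.erfc(tau / math.sqrt(2))

for tau in (0.5, 1.0, 2.0, 4.0):
    print(tau, exact_gauss_tail(tau) <= chernoff_tail(tau))  # always True
```

The bound is loose by a polynomial factor in τ, which is irrelevant here since only the exp(−Ω(n)) scaling is used.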
H_T corresponds exactly to the 2-colored Hamiltonian in Eq. (55) when multiplied by [...]. For any state ρ, E Tr(Hρ) = 0, where the expectation value is w.r.t. the couplings in H, since the couplings are random variables with zero mean. The state ρ_θ defined in Eq. (58) is able to achieve Tr(H^(2) ρ_θ) ≥ C√n (with high probability) since it is constructed using a circuit that itself depends on the random couplings J_I (I ∈ T) appearing in H^(2). Since ρ_θ does not depend on the couplings J_I with I ∈ T̄, we have E Tr(H_T̄ ρ_θ) = 0. Since (i) |Tr(C_I ρ)| ≤ 1 (for any ρ) for I ∈ T̄, (ii) each J_I is a standard Gaussian random variable, and (iii) |T̄| ≤ (2n choose q), the quantity Tr(H_T̄ ρ) is a Gaussian random variable with zero mean and variance at most one, for any ρ. Then, E exp(t Tr(H_T̄ ρ)) ≤ exp(t²/2) for all t ≥ 0. Applying Chernoff's bound to Tr(H_T̄ ρ), and choosing t = C√n, we obtain [...] (155) for any constant C. Using Eq. (155) and Tr(Hρ) = Tr(H_T ρ) + Tr(H_T̄ ρ), we conclude that the state ρ_θ which achieves Tr(H^(2) ρ_θ) ≥ C√n (i.e., for the 2-colored SYK Hamiltonian) with probability at least 1 − exp(−Ω(n)), also achieves Tr(Hρ_θ) ≥ C√n (where H is the standard SYK Hamiltonian in Eq. (2)) with probability at least 1 − exp(−Ω(n)). Therefore, λ_max(H) ≥ C√n with probability at least 1 − exp(−Ω(n)).

References

[1] Matthew B. Hastings and Ryan O'Donnell. Optimizing strongly interacting fermionic Hamiltonians. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2022), pages 776-789, New York, NY, USA, 2022. ACM. https://doi.org/10.1145/3519935.3519960
[2] Sevag Gharibian, Yichen Huang, Zeph Landau, and Seung Woo Shin. Quantum Hamiltonian Complexity. Foundations and Trends in Theoretical Computer Science, 10(3):159-282, 2015. https://doi.org/10.1561%2F0400000066
[3] Sergey B. Bravyi and Alexei Yu. Kitaev. Fermionic quantum computation. Annals of Physics, 298(1):210-226, 2002. https://www.sciencedirect.com/science/article/pii/S0003491602962548
[4] Sanjeev Khanna, Madhu Sudan, Luca Trevisan, and David P. Williamson. The approximability of constraint satisfaction problems. SIAM Journal on Computing, 30(6):1863-1920, 2001.
[5] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM, 42(6):1115-1145, 1995. https://doi.org/10.1145/227683.227684
[6] Christina V. Kraus and J. Ignacio Cirac. Generalized Hartree-Fock theory for interacting fermions in lattices: numerical methods. New Journal of Physics, 12(11):113004, 2010. https://dx.doi.org/10.1088/1367-2630/12/11/113004
[7] Sergey Bravyi and David Gosset. Complexity of quantum impurity problems. Communications in Mathematical Physics, 356(2):451-500, 2017. DOI: 10.1007/s00220-017-2976-9
[8] Sergey Bravyi and Robert Koenig. Classical simulation of dissipative fermionic linear optics. Quant. Inf. Comp., 12(11-12):925-943, 2012. https://arxiv.org/abs/1112.2184
[9] Fernando de Melo, Piotr Ćwikliński, and Barbara M. Terhal. The power of noisy fermionic quantum computation. New Journal of Physics, 15(1):013015, 2013. https://doi.org/10.1088/1367-2630/15/1/013015
[10] Elliott H. Lieb. The classical limit of quantum spin systems. Communications in Mathematical Physics, 31(4):327-340, 1973. https://doi.org/10.1007/BF01646493
[11] Sergey Bravyi, David Gosset, Robert König, and Kristan Temme. Approximation algorithms for quantum many-body problems. Journal of Mathematical Physics, 60(3):032203, 2019. https://doi.org/10.1063/1.5085428
[12] Nikhil Bansal, Sergey Bravyi, and Barbara M. Terhal. Classical approximation schemes for the ground-state energy of quantum and classical Ising spin Hamiltonians on planar graphs. Quantum Inf. Comp., 9(7-8):701-720, 2009. https://arxiv.org/abs/0705.1115
[13] Sevag Gharibian and Julia Kempe. Approximation algorithms for QMA-complete problems. SIAM Journal on Computing, 41(4):1028-1050, 2012. https://doi.org/10.1137/110842272
[14] Fernando G. S. L. Brandao and Aram W. Harrow. Product-state approximations to quantum ground states. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing (STOC '13), pages 871-880, New York, NY, USA, 2013. Association for Computing Machinery. https://doi.org/10.1145/2488608.2488719
[15] Aram W. Harrow and Ashley Montanaro. Extremal eigenvalues of local Hamiltonians. Quantum, 1:6, April 2017. https://doi.org/10.22331/q-2017-04-25-6
[16] Thiago Bergamaschi. Improved product-state approximation algorithms for quantum local Hamiltonians, 2022. https://arxiv.org/abs/2210.08680
[17] Noga Alon, Konstantin Makarychev, Yury Makarychev, and Assaf Naor. Quadratic forms on graphs. Inventiones mathematicae, 163(3):499-522, 2006.
[18] Shenglong Xu, Leonard Susskind, Yuan Su, and Brian Swingle. A sparse model of quantum holography, 2020. https://arxiv.org/abs/2008.02303
[19] Arijit Haldar, Omid Tavakol, and Thomas Scaffidi. Variational wave functions for Sachdev-Ye-Kitaev models. Phys. Rev. Research, 3:023020, 2021. https://link.aps.org/doi/10.1103/PhysRevResearch.3.023020
[20] Antonio M. García-García, Yiyang Jia, Dario Rosa, and Jacobus J. M. Verbaarschot. Sparse Sachdev-Ye-Kitaev model, quantum chaos, and gravity duals. Phys. Rev. D, 103:106002, 2021. https://link.aps.org/doi/10.1103/PhysRevD.103.106002
[21] Alan Frieze and Michał Karoński. Introduction to random graphs. Cambridge University Press, 2016.
[22] Dorit Aharonov, Itai Arad, and Thomas Vidick. The quantum PCP conjecture. ACM SIGACT News, 44:47-79, 2013. https://arxiv.org/abs/1309.7495
[23] Sergey Bravyi, Anirban Chowdhury, David Gosset, and Pawel Wocjan. On the complexity of quantum partition functions, 2021. https://arxiv.org/abs/2110.15466
[24] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities using the entropy method. The Annals of Probability, 31(3):1583-1614, 2003. http://www.jstor.org/stable/3481501
[25] Ryota Tomioka and Taiji Suzuki. Spectral norm of random tensors. arXiv, 2014. https://arxiv.org/abs/1407.1870
[26] Anurag Anshu, David Gosset, Karen J. Morenz Korol, and Mehdi Soleimanifar. Improved approximation algorithms for bounded-degree local Hamiltonians. Phys. Rev. Lett., 127:250502, 2021. https://link.aps.org/doi/10.1103/PhysRevLett.127.250502
[27] B. Laurent and P. Massart. Adaptive estimation of a quadratic functional by model selection. The Annals of Statistics, 28(5):1302-1338, 2000. https://doi.org/10.1214/aos/1015957395
[28] Béla Bollobás. Modern Graph Theory. Graduate Texts in Mathematics 184. Springer-Verlag New York, 1998.
[29] Gabriel Andrew Dirac. Some theorems on abstract graphs. Proceedings of the London Mathematical Society, 3(1):69-81, 1952.
[30] Milton Abramowitz and Irene A. Stegun. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover, New York, 1964.
[31] Rafal Latala. Estimates of moments and tails of Gaussian chaoses. The Annals of Probability, 34(6), 2006. https://doi.org/10.1214%2F009117906000000421
[32] V. H. de la Pena, S. J. Montgomery-Smith, and Jerzy Szulga. Contraction and decoupling inequalities for multilinear forms and u-statistics. The Annals of Probability, 22(4):1745-1765, 1994. http://www.jstor.org/stable/2244916
[33] Radosław Adamczak and Paweł Wolff. Concentration inequalities for non-Lipschitz functions [...]
Optimizing sparse fermionic Hamiltonians

Yaroslav Herasymenko (QuSoft & CWI, Amsterdam; QuTech, Delft University of Technology), Maarten Stroeks (QuTech and EEMCS Faculty, Delft University of Technology), Jonas Helsen (QuSoft & CWI, Amsterdam), Barbara Terhal (QuTech and EEMCS Faculty, Delft University of Technology). arXiv:2211.16518.

Abstract: We consider the problem of approximating the ground state energy of a fermionic Hamiltonian using a Gaussian state. In sharp contrast to the dense case [1], we prove that strictly q-local sparse fermionic Hamiltonians have a constant Gaussian approximation ratio; the result holds for any connectivity and interaction strengths. Sparsity means that each fermion participates in a bounded number of interactions, and strictly q-local means that each term involves exactly q fermionic (Majorana) operators. We extend our proof to give a constant Gaussian approximation ratio for sparse fermionic Hamiltonians with both quartic and quadratic terms. With additional work, we also prove a constant Gaussian approximation ratio for the so-called sparse SYK model with strictly 4-local interactions (sparse SYK-4 model). In each setting we show that the Gaussian state can be efficiently determined. Finally, we prove that the O(n^{-1/2}) Gaussian approximation ratio for the normal (dense) SYK-4 model extends to SYK-q for even q > 4, with an approximation ratio of O(n^{1/2-q/4}). Our results identify non-sparseness as the prime reason that the SYK-4 model can fail to have a constant approximation ratio [1].

Sparse fermionic Hamiltonians

Key to our work is the notion of a sparse Hamiltonian.

Definition 4. Let H be a local traceless fermionic Hamiltonian of 2n Majorana operators. We say that H is k-sparse, for an integer k, if no Majorana operator c_i occurs in more than k terms of the Hamiltonian, i.e. all Majorana operators have degree at most k.

This condition allows us to efficiently find Gaussian states with constant approximation ratio. We have the following theorem, which is the main result of our work:
Non-BPS domain wall configurations in a supersymmetric model

V. A. Gani (Institute of Theoretical and Experimental Physics, Moscow, and Moscow State Engineering Physics Institute (Technical University), Moscow, Russia) and A. E. Kudryavtsev (Institute of Theoretical and Experimental Physics, Moscow, Russia)

22 December 1999

Abstract: We study the time evolution of configurations in the form of two parallel domain walls moving towards each other in a supersymmetric field model. The configurations involved are not BPS-saturated. It is found that for such collisions there exists some critical value v_cr ≈ 0.9120 of the initial velocity v_i of the walls. At v_i < v_cr we observe reflection, not followed by a change of the sequence of vacuum states. In collisions with v_i > v_cr the sequence of vacuum states changes. The results of the numerical simulations are in agreement with the "potential" approach described below.

The dynamic properties of domain walls in supersymmetric theories have attracted some attention recently [1]-[6]. Depending on the particular form of the superpotential chosen, one obtains different sets of supersymmetric vacua and different structures of the domain wall configurations interpolating between them. We restrict ourselves to the theory described by the superpotential

W(Φ, X) = (m²/λ)Φ − (1/3)λΦ³ − αΦX²,   (1)

where m is a mass parameter and α and λ are coupling constants. We assume that α and λ are real and positive.
The Lagrangian for the real parts of the scalar fields is given for this theory by

L = (∂φ)² + (∂χ)² − (m²/λ − λφ² − αχ²)² − 4α²φ²χ².   (2)

The potential term of Eq. (2) has four degenerate vacuum states, shown in Fig. 1. This theory possesses a wide class of domain walls, which link different vacua. Some of them satisfy first-order differential equations analogous to the Bogomol'nyi-Prasad-Sommerfield (BPS) equations [7]. The dynamic properties of the BPS configurations of this model were intensively studied recently [1]-[6]. This work is devoted to the so-called non-BPS domain walls, i.e. configurations which link different vacua of the theory but do not satisfy the BPS equations. It is convenient to work with dimensionless field variables f and h, defined by φ = (m/λ) f and χ = (m/√(λα)) h. The Lagrangian (2) then yields the following equations of motion for the fields f and h:

f_tt − ∇²f − 2f(1 − f² − h²) + (4/ρ) f h² = 0,
h_tt − ∇²h − (2/ρ) h(1 − f² − h²) + (4/ρ²) h f² = 0.   (3)

Here ρ = λ/α, and we set m = 1. It was shown (see, e.g., Ref. [5]) that for the case ρ = 4 the field equations (3) possess "elementary" walls, connecting vacua 3 and 2, and 2 and 4. Their form may be obtained analytically [5]:

f_32(z) = (1/2)(1 + tanh(z/2)),   h_32(z) = [(1/2)(1 − tanh(z/2))]^{1/2};   (4)

f_24(z) = (1/2)(1 − tanh(z/2)),   h_24(z) = −[(1/2)(1 + tanh(z/2))]^{1/2},   (5)

where z is the space coordinate orthogonal to the walls. It is easy to see that the rest energy of each of these 3 → 2 and 2 → 4 walls equals E_0 = 4/3. Consider a non-BPS ansatz configuration 3 → 2 → 4 constructed from two elementary domain walls, 3 → 2 and 2 → 4, located at z = −z_0 and z = +z_0 respectively.
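The profiles (4)-(5) and the quoted energies can be checked numerically. The following sketch is my own illustration, not from the paper; it assumes ρ = 4, m = 1, and the static energy density f_z² + ρ h_z² + (1 − f² − h²)² + (4/ρ) f²h² implied by the rescaled Lagrangian. It verifies that the elementary wall (4) is a static solution of (3) with rest energy E_0 = 4/3, and that the diagonal wall (7) below has E_34 = 16/3:

```python
import numpy as np

# Dimensionless model at rho = 4 (so 4/rho = 1 and 4/rho^2 = 1/4), m = 1.
z = np.linspace(-40.0, 40.0, 400001)
dz = z[1] - z[0]

# (1 +- tanh(z/2))/2 in a cancellation-free form, accurate deep in the tails:
fp = 1.0 / (1.0 + np.exp(-z))      # (1 + tanh(z/2))/2
fm = 1.0 / (1.0 + np.exp(z))       # (1 - tanh(z/2))/2

# Elementary 3 -> 2 wall, Eq. (4): f linear in tanh, h a square root.
f, h = fp, np.sqrt(fm)

def d2(y):
    # Second derivative by repeated central differences.
    return np.gradient(np.gradient(y, dz), dz)

def energy(f, h):
    # E = int dz [ f_z^2 + rho h_z^2 + (1 - f^2 - h^2)^2 + (4/rho) f^2 h^2 ]
    fz, hz = np.gradient(f, dz), np.gradient(h, dz)
    dens = fz**2 + 4.0 * hz**2 + (1.0 - f**2 - h**2) ** 2 + f**2 * h**2
    return dens.sum() * dz

E0 = energy(f, h)                                  # expect 4/3
E34 = energy(np.zeros_like(z), -np.tanh(z / 2))    # diagonal wall, expect 16/3

# Static limit of the equations of motion (3): both residuals should vanish.
res_f = -d2(f) - 2 * f * (1 - f**2 - h**2) + f * h**2
res_h = -d2(h) - 0.5 * h * (1 - f**2 - h**2) + 0.25 * h * f**2

print(E0, E34, abs(res_f[2:-2]).max(), abs(res_h[2:-2]).max())
```

Note that the square root in h_32 (and not in f_32) is what makes the wall energy come out to 4/3; with both profiles linear in tanh the energy would be 5/3 instead.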
Let us take their simple superposition in the form

f_324(z, z_0) = f_32(z + z_0) + f_24(z − z_0) − 1,
h_324(z, z_0) = h_32(z + z_0) + h_24(z − z_0).   (6)

Note that from the system (3) a special "diagonal" solution 3 → 4 can easily be found by substituting f = 0:

f_34(z) ≡ 0,   h_34(z) = −tanh(z/2).   (7)

The energy of this configuration is E_34 = 16/3. To get the z_0-dependence of the energy of configuration (6) we insert (6) into the Hamiltonian of the model. As a result we obtain

E_324(z_0) = 2E_0 + ∆E_324(z_0),   (8)

where

∆E_324 = ∫_{−∞}^{+∞} dz { 2 (df_32/dz)(df_24/dz) + 2ρ (dh_32/dz)(dh_24/dz) + (1 − f_324² − h_324²)² + (4/ρ) f_324² h_324² − (1 − f_32² − h_32²)² − (4/ρ) f_32² h_32² − (1 − f_24² − h_24²)² − (4/ρ) f_24² h_24² }.   (9)

Here f_32 = f_32(z + z_0), h_32 = h_32(z + z_0), f_24 = f_24(z − z_0), h_24 = h_24(z − z_0). We calculated the z_0-dependence of ∆E_324 numerically; see Fig. 2 (solid curve). In the limit of large z_0 the configuration (6) looks like two isolated walls 3 → 2 and 2 → 4. Their total energy then equals 2E_0, and ∆E_324 ≈ 0. As seen from Fig. 2, the energy ∆E_324 increases with decreasing z_0. At z_0 = 0 we find ∆E_324(0) ≈ 3.119, corresponding to E_324(0) = 2E_0 + ∆E_324(0) ≈ 5.786. Note that E_324(0) is larger than E_34 = 16/3 ≈ 5.333. The energy of the configuration 3 → 2 → 4 (6) has its absolute maximum at z_0 ≈ −0.37, where (∆E_324)_max ≈ 3.202. At large negative z_0, ∆E_324(z_0) approaches an asymptotic value of about 2.274. In the range z_0 < 0 the configuration (6) actually has the shape of the 3 → 1 → 4 type; see Fig. 3.
It is clear that we can construct an ansatz configuration 3 → 1 → 4 in analogy to (6):

f_314(z, z_0) = f_31(z + z_0) + f_14(z − z_0) + 1,
h_314(z, z_0) = h_31(z + z_0) + h_14(z − z_0),   (10)

where

f_31(z) = −(1/2)(1 + tanh(z/2)),   h_31(z) = [(1/2)(1 − tanh(z/2))]^{1/2};   (11)

f_14(z) = −(1/2)(1 − tanh(z/2)),   h_14(z) = −[(1/2)(1 + tanh(z/2))]^{1/2}.   (12)

The energy of the 3 → 1 and 1 → 4 walls is exactly the same as that of 3 → 2 or 2 → 4. Hence, the z_0-dependence of the energy of configuration (10) is

E_314(z_0) = 2E_0 + ∆E_314(z_0),   (13)

with a "potential" ∆E_314(z_0) analogous to ∆E_324(z_0), Eq. (9). The shape of ∆E_314(z_0) is the same as that of ∆E_324(z_0). As already mentioned, at z_0 < 0 the ansatz (6) has the shape of the 3 → 1 → 4 type. Likewise, configuration (10) at negative z_0 has the shape of the 3 → 2 → 4 type; see Fig. 3. Notice that h_324(z, −z_0) = h_314(z, z_0). If we want to compare the energies of configurations (6) or (10) that belong to one of these two types, we have to place the curves ∆E_324(z_0) and ∆E_314(−z_0) (or the curves ∆E_314(z_0) and ∆E_324(−z_0)) in the same plot. Fig. 2 is constructed in just this way. We solved the field equations (3) numerically with initial conditions of the form (6), where the 3 → 2 and 2 → 4 walls are located at some initial distance 2z_0 ≫ 1 and move towards each other with some initial velocity v_i. Depending on the initial velocity we observed different types of evolution. If v_i is less than some critical value v_cr^num, the walls 3 → 2 and 2 → 4 collide and then escape from each other to infinity. As a result we return to a configuration of the 3 → 2 → 4 type. At initial velocities v_i > v_cr^num the walls collide in a different way: after the collision a configuration of the type 3 → 1 → 4 appears. From these numerical simulations we found v_cr^num ≈ 0.9120.
The presence of different regimes in such collisions is a consequence of the fact that the energy of the configuration 3 → 2 → 4 is not degenerate with respect to the parameter z_0. So we have a kind of "potential" interaction between the 3 → 2 and 2 → 4 domain walls. It is worth mentioning here that in the case of BPS-saturated (or simply BPS) walls [6] there is no potential interaction. The latter property is a consequence of the degeneracy in energy of configurations with different interwall distances, analogous to our parameter z_0. The existence of the critical velocity can be understood in terms of the potential approach. From Fig. 2 it is seen that if the initial kinetic energy of the walls 3 → 2 and 2 → 4 is smaller than ∆E* ≈ 3.119, then (inelastic) reflection may be expected. If the kinetic energy of the walls exceeds ∆E*, it is natural to expect that configurations of the type (6) with negative z_0 appear. But configuration (6) at negative z_0 is in fact of the 3 → 1 → 4 type, and from Fig. 2 we see that in this sector configurations (10) have smaller energy. Hence, configuration (6) at negative z_0 transforms into (10). In the further evolution the walls 3 → 1 and 1 → 4 escape to infinity. It is also worth mentioning that the initial configuration (6) with z_0 = 0 and v_i = 0 looks like an excitation over the static solution (7). After emission of part of its energy in the form of waves, the evolution of this initial configuration (z_0 = 0, v_i = 0) leads to the formation of an excited kink of the type (7) (a wobbling kink). We were unable to obtain this wobbling kink solution in numerical calculations of the equations of motion (3) with initial conditions (6) when either z_0 or v_i was nonzero.

Figure captions

Fig. 1. Locations of the vacuum states of the model.

Fig. 2. The profile of the potential ∆E_324 versus z_0 (solid curve) and the profile of the potential ∆E_314 versus (−z_0) (dashed curve).

Fig. 3.
Profiles of f(z) (solid lines) and h(z) (dashed lines) for the configurations 3 → 2 → 4 and 3 → 1 → 4 at z_0 = ±10.0.

Acknowledgments

We are thankful to M. B. Voloshin for useful discussions. One of the authors (V. A. Gani) would like to thank E. A. Smirnova for placing at our disposal some hardware resources and a data transfer channel.

References

[1] M. A. Shifman, M. B. Voloshin, Phys. Rev. D57, 2590 (1998).
[2] M. B. Voloshin, Phys. Rev. D57, 1266 (1998).
[3] M. A. Shifman, Phys. Rev. D57, 1258 (1998).
[4] A. V. Smilga, A. I. Veselov, Nucl. Phys. B515, 163 (1998).
[5] S. V. Troitsky, M. B. Voloshin, Phys. Lett. B449, 17 (1999).
[6] V. A. Gani, A. E. Kudryavtsev, preprint ITEP-15/99; hep-th/9904209.
[7] E. B. Bogomol'nyi, Sov. J. Nucl. Phys. 24, 449 (1976); M. K. Prasad, C. M. Sommerfield, Phys. Rev. Lett. 35, 760 (1975).
Backreaction effects of dissipation in neutrino decoupling

Roy Maartens (School of Computer Science and Mathematics, Portsmouth University, Portsmouth PO1 2EG, England) and Josep Triginer (Department of Physics, Autonomous University of Barcelona, 08193 Bellaterra, Spain)

7 October 2000

Abstract: Dissipative effects during neutrino decoupling in the early universe create a small backreaction on the Hubble rate, and lead to a small rise in temperature and entropy. We use a simplified thermo-hydrodynamic model, which provides a causal approximation to kinetic theory, in order to estimate the backreaction effects and the entropy production.

I. INTRODUCTION

Non-equilibrium processes in the early universe are typically associated with dynamical transitions or particle decouplings. In the case of neutrino decoupling, the standard approach is to treat the process as adiabatic (see e.g. [1]). The small non-equilibrium effects are thus usually neglected, which provides a reasonable approximation. However, given the increasing accuracy of cosmological observations and theoretical modeling, it is worthwhile revisiting the standard equilibrium models of processes such as neutrino decoupling, in order to see whether non-equilibrium corrections can lead to observable consequences. Recently, non-equilibrium corrections in neutrino decoupling have been calculated in a number of papers, using complicated kinetic theory and numerical computations (see [2] for a short review). The corrections are very small, as expected. For example, in [3-5] it was found that non-equilibrium effects lead to a small change in the decoupling temperature for neutrinos. Spectral distortions have also been analyzed [6], showing the remarkable fact that they amount to as much as 1% or more for the higher-energy side of the spectrum.
Although these corrections to the spectrum, energy density and temperature of the neutrino component have hardly any effect on primordial helium synthesis, yielding a change in the mass fraction of ∼ 10^{-4}, they can lead to other effects that may be observable. Thus it has been shown that the non-equilibrium increase in neutrino temperature, which leads to an extra injection of energy into the photon spectrum, shifts the epoch of equality between matter and radiation, which in turn modifies the angular spectrum of fluctuations of the cosmic microwave background radiation [7,8]. Despite the accuracy of these models in obtaining corrections to the decoupling temperature and distribution function due to non-equilibrium effects, they still make use of the standard Friedmann equations for a perfect (i.e. non-dissipative) fluid. This leads to the physically inconsistent situation in which, say, the energy density and expansion evolve in time like a radiative fluid in equilibrium. One expects that small distortions of the equilibrium particle distribution function should be reflected in the macroscopic (i.e. fluid) description, as given by the stress-energy tensor, by adding a bulk viscous pressure to the equilibrium one. Here we consider an alternative thermo-hydrodynamic model of dissipative effects in neutrino decoupling, simple enough to produce analytic solutions for the backreaction effects on the universal scale factor, and estimates for the entropy production due to dissipation. As explained above, these effects are not the focus of recent papers, which use sophisticated kinetic theory models focusing on the neutrino temperature. Our simplified approach cannot compete with these models for accuracy and completeness, but it has the advantage of simplicity, allowing for a qualitative understanding of effects not previously investigated in detail.
A similar approach was previously applied in [9] to the reheating era that follows inflation. The thermo-hydrodynamic model is based on an approximation to kinetic theory which respects relativistic causality. This approximation is the Grad moment method, leading to the causal thermodynamics of Israel and Stewart [10] in the hydrodynamic regime (see also [11] for an alternative but equivalent approach). This causal theory is a generalization of the more commonly used relativistic Navier-Stokes-Fourier theory. The latter, due to Eckart [12], may be derived via the Chapman-Enskog approximation in kinetic theory. The resulting theory is quasi-stationary and noncausal, and suffers from the pathologies of infinite wavefront speeds and instability of all equilibrium states [13]. The main new ingredient in the causal transport equations is a transient term which contains the relaxation time. Our simple model is based on a one-component fluid. In [14], relaxation-time processes are incorporated in a two-fluid model. In this setting, electrons and positrons on one side, and neutrinos and antineutrinos on the other, are found to be in two different equilibrium states with slightly different temperatures. The system evolves towards a state of thermal equilibrium in a characteristic relaxation time. Dissipative effects in the decoupling of a given species of particles arise from the growing mean free path of the decoupling particles in their weakening interaction with the cosmic fluid. Eventually the mean collision time exceeds the gravitational expansion time, and decoupling is complete. A hydrodynamic model may be used to cover the early stages of the decoupling process, but it will eventually break down when the mean collision time becomes large enough [15]. In the conditions prevailing at the time of neutrino decoupling, it is reasonable to neglect sub-horizon metric fluctuations and treat the spacetime as a Friedmann model.
(The incorporation of perturbations into our model would use the covariant formalism for dissipative fluids developed in [16].) The dynamical effects of spatial curvature and any surviving vacuum energy will be negligible, so we can reasonably assume a spatially flat geometry. Furthermore, we assume that the average 4-velocities of the neutrinos (regarded as massless) and of the photon-electron-positron gas are the same. With all these assumptions, only scalar dissipation is possible. Dissipation during neutrino decoupling arises because the falling temperature lowers the interaction rate with leptons, as the lepton mass can no longer be ignored relative to the thermal energy. Thus dissipation is directly reflected in a deviation of the equation of state from the thermalized radiation form p = ρ/3. Within a hydrodynamic one-fluid model, such dissipation is described via bulk viscosity, which vanishes in the p = ρ/3 limit, but is nonzero otherwise. We will use the full (i.e. non-truncated) version of the causal transport equation for the bulk stress.

II. CAUSAL TRANSPORT EQUATION FOR BULK STRESS

The particle number 4-current and the energy-momentum tensor are

N^a = n u^a,   T^{ab} = ρ u^a u^b + (p + Π) h^{ab},

where ρ is the energy density, p is the equilibrium (hydrostatic) pressure, n is the particle number density, Π is the bulk viscous pressure, and h^{ab} = g^{ab} + u^a u^b is the projector into the comoving instantaneous rest space. Particle and energy-momentum conservation,

∇_a N^a = 0,   ∇_b T^{ab} = 0,

lead to the equations

ṅ + 3Hn = 0,   (1)

ρ̇ + 3H(ρ + p + Π) = 0,   (2)

where H is the Hubble expansion rate. The specific entropy s and the temperature T are related via the Gibbs equation

nT ds = dρ − [(ρ + p)/n] dn.   (3)

It then follows that

nT ṡ = −3HΠ,   (4)

where Π is always non-positive.
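The step from (1)-(3) to (4) is a one-line substitution of the conservation equations into the Gibbs equation differentiated along the flow lines. A small symbolic check (illustrative only, not from the paper):

```python
import sympy as sp

H, rho, p, Pi, n, T = sp.symbols('H rho p Pi n T', positive=True)

# Conservation equations (1)-(2) along the fluid flow lines:
ndot = -3 * H * n                      # particle number conservation
rhodot = -3 * H * (rho + p + Pi)       # energy conservation

# Gibbs equation (3) differentiated along the flow:
#   n T sdot = rhodot - ((rho + p)/n) * ndot
sdot = (rhodot - (rho + p) / n * ndot) / (n * T)

# Eq. (4) states n T sdot = -3 H Pi; the difference should simplify to zero.
print(sp.simplify(sdot + 3 * H * Pi / (n * T)))
```

The (ρ + p) terms cancel between the two conservation laws, leaving only the bulk-stress contribution, which is why a perfect fluid (Π = 0) conserves the specific entropy along the flow.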
The Grad moment approximation in kinetic theory (or phenomenological arguments) leads to the full causal transport equation [10] for Π:

τ Π̇ + Π = −3ζH − (1/2) τ Π [ 3H + τ̇/τ − ζ̇/ζ − Ṫ/T ],   (5)

where τ is the relaxation time scale, which allows for causal propagation of viscous signals, and ζ ≥ 0 is the bulk viscosity coefficient, given below. Quasi-stationary, noncausal theories have τ = 0, which reduces the evolution equation (5) to the algebraic equation Π = −3ζH. This leads to instantaneous propagation of viscous signals. Note also that the causal relaxational effects lead to a small increase of the sound speed over its adiabatic value [17]: c_s² → c_s² + c_b², where

c_b² = ζ/[(ρ + p)τ].   (6)

This result, which is not well known, is derived in the appendix. The approximation used in deriving the transport equation (also in the quasi-stationary case) requires that |Π| ≪ ρ, which is reasonable for most dissipative processes (see [18] for a nonlinear generalization of the causal transport equation). Equation (5) as it stands is known as the full or non-truncated transport equation for the bulk viscous pressure [19-21]. When the term containing the square bracket on the right is neglected, we get the truncated equation which is usually used. Under many conditions, truncation leads to a reasonable approximation. We will use the full equation. Taking n and ρ as independent variables, the Gibbs equation (3) leads to the integrability condition

n (∂T/∂n)_ρ + (ρ + p) (∂T/∂ρ)_n = T (∂p/∂ρ)_n,   (7)

and together with the energy conservation equation (2) this gives the temperature evolution equation

Ṫ/T = −3H (∂p/∂ρ)_n − 3HΠ (1/T) (∂T/∂ρ)_n.   (8)

The first term on the right accounts for adiabatic cooling due to expansion, whereas in the second term viscosity contributes to heating of the fluid (note that Π is always non-positive).
Using equations (1) and (2), the Gibbs equation takes the form

n²T ds = [3nHΠ / (3H(ρ + p) + 3HΠ)] dρ + (ρ + p) (∂n/∂p)_ρ [ (ṗ/ρ̇) dρ − dp ].   (9)

As expected, we learn from the last equation that when the fluid is perfect (Π = 0), the specific entropy is conserved along the flow lines (ṡ = 0). Furthermore, if a barotropic equation of state for n holds, i.e. n = n(ρ), then ds = 0, so that s is a universal constant, the same on all flow lines, and the fluid is called isentropic.¹ Yet, as Eq. (9) shows, this is no longer true in the presence of dissipation, i.e. a barotropic particle number density no longer forces ds to vanish. For simplicity, we assume the linear barotropic equation of state

p = (γ − 1)ρ,   (10)

where γ is constant; we are interested in the case γ ≈ 4/3. The adiabatic sound speed c_s is given by c_s² = (∂p/∂ρ)_s, which for a perfect fluid (either barotropic or not) becomes c_s² = ṗ/ρ̇. When Eq. (10) holds, c_s = √(γ − 1). Using Eq. (10) and the integrability condition (7), we find

T = ρ^{(γ−1)/γ} F(ρ^{1/γ}/n),   (11)

where F is an arbitrary function satisfying Ḟ = 0. If T is barotropic, then F is constant and we have a power-law form with fixed exponent for the temperature [17,22]:

T ∝ ρ^{(γ−1)/γ}.   (12)

In the non-dissipative case, these barotropic equations for p and T are compatible with the ideal gas law

p = nT,   (13)

but in the presence of dissipation this is no longer true. In effect, equations (10), (12) and (13) imply n ∝ ρ^{1/γ}, i.e. ṅ/n = (1/γ) ρ̇/ρ, which implies, by using Eq. (2), that Π = 0. In the sequel we shall therefore drop the barotropic equation of state for the temperature in favour of the more physically appealing equation of state (13), together with the γ-law (10).

III. DISSIPATION IN NEUTRINO DECOUPLING

A hydrodynamic approach in the expanding universe requires a particle collision time t_c short enough to adjust to the falling temperature. As the natural time-scale for the expanding universe is H^{-1}, we have t_c < H^{-1}.
If t_c ≪ H^{-1}, then an equilibrium state can in principle be attained. Dissipative phenomena can play a prominent role for t_c ∼ H^{-1}. We learn from kinetic theory that t_c is determined by

t_c = 1/(nσv),   (14)

where n is the number density of the target particles with which the given species is interacting, σ the cross-section and v the mean relative speed of the interacting particles. For the decoupling of massless neutrinos in the early universe, v = 1, the target number density is that of the electrons, and [23] σ ≈ G_F² T², where G_F is the Fermi coupling constant. At the neutrino decoupling temperature T_d we have m_e/T_d ≈ 1/2, so that the rest mass energy m_e of the electrons starts to become important. Since the electron number density in the radiation-dominated era evolves as n_e ∝ a^{-3}, where a is the scale factor, we have from Eq. (14) that

t_c ∝ a³/T².   (15)

Dissipation due to massless particles with long mean free path in a hydrodynamic fluid is described by the radiative transfer model. The bulk viscosity coefficient takes the form [24]

ζ = 4rT⁴Γ²t_c,   (16)

where r is 7/8 times the radiation constant and Γ measures the deviation of p/ρ from its pure-radiation value:

Γ = 1/3 − (∂p/∂ρ)_n,   (17)

where p and ρ refer to the pressure and energy density of the radiation/matter mixture as a whole. Since we assume the linear equation of state (10), Γ is a perturbative constant parameter in our simple model:

Γ = 4/3 − γ ≪ 1.

The assumption that Γ is constant relies on the assumption that decoupling takes place rapidly. Since standard adiabatic treatments of decoupling [1] assume instantaneous decoupling, this assumption should be a reasonable first approximation. We may neglect the −3ζH term on the right of the transport equation (5), since it is O(Γ²). Note that our simple model would thus break down in the quasi-stationary Eckart theory, since it would immediately lead to Π = O(Γ²).
The relaxation timescale τ in causal radiative transfer [25] is given by τ = t_c. The term ζ̇/ζ on the right of Eq. (5) becomes

ζ̇/ζ = H + O(Γ),

on using equations (8) and (15). The full transport equation (5) then becomes, to lowest order,

τ Π̇ + Π = −4τHΠ.   (18)

(We can think of the right-hand side as an effective source term relative to the truncated transport equation.) We can rewrite this in the standard truncated form as

τ* Π̇ + Π = 0,   (19)

where the effective relaxation time acquires an expansion correction:

τ* = τ/(1 + 4τH).   (20)

The amount of reduction depends on the size of τ = t_c relative to H^{-1}. The hydrodynamic description requires τH < 1. If τH ≪ 1, then τ* ≈ τ; but if τH is close to 1, the reduction can be significant. The Friedmann equation

ρ = 3H²,   (21)

together with Eq. (2), leads to

Π = −2Ḣ − (4 − 3Γ)H².   (22)

Using equation (22), we obtain from (18) the evolution equation for H:

Ḧ + HḢ(8 − 3Γ + N) + H³ (2 − (3/2)Γ)(N + 4) = 0,   (23)

where

N = (τH)^{-1},   (24)

which is of the order of the number of interactions in an expansion time. Now, from equations (10), (13), (15) and (24) we have

N = [Ha/(H_d a_d)]³,   (25)

where the expression n ∝ a^{-3} has been used, and a_d and H_d = H(a_d) are the values at which N = 1, so that a_d is determined by the equation

t_c(a_d) H(a_d) = 1.   (26)

Changing the independent variable to the scale factor a, expanding equation (23) and collecting the previous results yields

a²HH'' + a²H'² + aHH' {9 − 3Γ + [Ha/(H_d a_d)]³} + (2 − (3/2)Γ) H² {4 + [Ha/(H_d a_d)]³} = 0,   (27)

where a prime denotes d/da. We expand H as H = H̄ + δH, where

δH = Γh + O(Γ²).   (28)

The equilibrium Hubble rate H̄ corresponds to the thermalized radiation state p = ρ/3, i.e. Γ = 0, for which Eq. (27) becomes

a²H̄H̄'' + a²H̄'² + 9aH̄H̄' + 8H̄² + [aH̄H̄' + 2H̄²] [H̄a/(H_d a_d)]³ = 0.

The unique power-law solution is the well-known perfect radiative solution

H̄ = H_0 (a_0/a)² = 1/(2t),   (29)

where a_0 marks the start of the dissipative decoupling process, so that H = H̄ for a < a_0.
Substituting Eq. (28) into (27) and using the fact that

H_0 a_0/(H_d a_d) = a_d/a_0 + O(Γ),

we find that to O(Γ):

a²h'' + a[5 + (a_d/a)³] h' + [4 + 2(a_d/a)³] h = (3/2) H_0 (a_0/a_d)² (a_d/a)⁵.   (30)

Defining α = a/a_d, we can rewrite this as

d²h/dα² + (5/α + 1/α⁴) dh/dα + (4/α² + 2/α⁵) h = (3/2) H_0 α_0² / α⁷.   (31)

Now we use the following general result [26]: if ϕ is a solution of y'' + f(x)y' + g(x)y = k(x) with k = 0, then the general solution is

y = C_1 ϕ + C_2 ϕ ∫ dx/(ϕ²E) + ϕ ∫ (1/(ϕ²E)) [ ∫ ϕEk dx ] dx,   where E = exp ∫ f dx.

By inspection, a solution of the homogeneous equation (31) is 1/α². It follows that the general solution is

h(a) = H_0 (a_0/a)² [ c_1 + c_2 Ei( (1/3)(a_d/a)³ ) + (3/2) ln(a/a_d) ],   (32)

where c_1 and c_2 are arbitrary integration constants and Ei is the exponential-integral function [27]

Ei(x) ≡ ∫_{−∞}^{x} (e^v/v) dv = C + ln x + Σ_{k=1}^{∞} x^k/(k! k),

with C denoting Euler's constant. By equations (22) and (32), the bulk stress to first order is

Π = (3H̄² − 4H̄h − 2h'H̄a) Γ.   (33)

This expression holds for a > a_0, where a_0 marks the onset of the dissipative evolution. Thereafter, the bulk stress decays according to the causal law (19). In order to relate the constants c_1 and c_2 we require, according to the standard matching conditions, that H be continuous. Thus h(a_0) = 0, which fixes c_1:

c_1 = −c_2 Ei( (1/3)(a_d/a_0)³ ) − (3/2) ln(a_0/a_d).   (34)

Thus, using Eq. (32), we see that the backreaction of the dissipative decoupling process on the expansion of the universe is given by

δH = H̄ { c_2 [ Ei( (1/3)(a_d/a)³ ) − Ei( (1/3)(a_d/a_0)³ ) ] + (3/2) ln(a/a_0) } Γ + O(Γ²).   (35)

Substituting Eq. (34) into Eq. (33), we find that the bulk stress becomes

Π = 2c_2 ρ̄ exp[ (1/3)(a_d/a)³ ] Γ + O(Γ²),   (36)

where ρ̄ = 3H̄² is the equilibrium energy density. Since Π < 0, we require c_2 < 0. Below we find a prescription for c_2 in terms of physical parameters.

IV. CONCLUSION

In order to complete the model, we need to determine the remaining arbitrary constant c_2 in terms of physical parameters.
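That (32) indeed solves (31) can be checked numerically with SciPy's exponential integral. In the sketch below (my own check, not from the paper) the constants H_0, α_0, c_1 and c_2 are arbitrary illustrative values, since c_1 and c_2 multiply homogeneous solutions:

```python
import numpy as np
from scipy.special import expi   # the exponential integral Ei

H0, alpha0 = 1.0, 0.5        # illustrative values only; alpha0 = a_0/a_d
c1, c2 = 0.3, -0.5           # integration constants, arbitrary for this check

alpha = np.linspace(0.6, 3.0, 300001)
da = alpha[1] - alpha[0]

# Solution (32), written in the variable alpha = a/a_d:
h = H0 * (alpha0 / alpha) ** 2 * (c1 + c2 * expi(1.0 / (3.0 * alpha**3))
                                  + 1.5 * np.log(alpha))

h1 = np.gradient(h, da)
h2 = np.gradient(h1, da)

# Residual of the ODE (31); it should vanish up to finite-difference error,
# to be compared with the source term, which is of order H0/alpha^7.
res = (h2 + (5.0 / alpha + 1.0 / alpha**4) * h1
          + (4.0 / alpha**2 + 2.0 / alpha**5) * h
          - 1.5 * H0 * alpha0**2 / alpha**7)
print(np.abs(res[5:-5]).max())
```

The check exercises all three pieces of (32): the power-law homogeneous solution 1/α², the Ei homogeneous solution, and the logarithmic particular solution.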
A rough estimate, consistent with the simplicity of the model, arises as follows. We estimate the duration of the dissipative process as
$$ \Delta a \approx a_d - a_0 , \qquad (37) $$
i.e. we assume that the process ends at $a_d$. Then, by Eqs. (8) and (13), the fractional viscous rise in temperature due to decoupling is approximately
$$ \frac{\Delta T}{T} \approx -\frac{\Pi(a_0)}{\rho(a_0)}\,\frac{\Delta a}{a_0} . \qquad (38) $$
We can consider the fractional temperature increase as an input from previous kinetic-theory investigations (as described in the introduction), which typically predict it to be $O(10^{-3})$. Then equations (36)-(38) and (33) allow us to estimate the constant $c_2$ in terms of the physical parameters $a_d/a_0$, $\Delta T/T$ and $\Gamma$:
$$ c_2\Gamma \approx -\frac{1}{2}\,\frac{\Delta T}{T}\left[\frac{\exp\!\left(-\frac{1}{3}\left(\frac{a_d}{a_0}\right)^3\right)}{\frac{a_d}{a_0} - 1}\right] . \qquad (39) $$
Finally, we can also estimate the entropy production due to decoupling. By Eqs. (4) and (38), the viscous increase in entropy per particle is approximately
$$ \Delta s \approx 3\,\frac{\Delta T}{T} . \qquad (40) $$
Our model describes the response of the cosmic fluid to a bulk stress. It is a very simple thermo-hydrodynamic approximation to more realistic kinetic-theory models of neutrino decoupling, but it nevertheless accommodates the dissipative effects and respects relativistic causality. The simplicity of the model allows us to derive analytic forms for the dynamical quantities and the backreaction effects, but it does not incorporate a mechanism for bringing the dissipative process to an end. The same reasoning applies when the temperature is barotropic. Note that this small temperature increase is due to dissipative heating, and should not be confused with the larger temperature increase arising from electron-positron annihilation, which occurs after neutrino decoupling; our model does not consider the annihilation process.
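For orientation, Eqs. (39) and (40) can be evaluated with illustrative numbers; $\Delta T/T = 10^{-3}$ and $a_d/a_0 = 2$ below are assumptions for the sketch, not values derived in the text:

```python
import math

def c2_Gamma(dT_over_T: float, ad_over_a0: float) -> float:
    """Eq. (39): the combination c2*Gamma in terms of Delta T/T and a_d/a_0."""
    x0 = (ad_over_a0 ** 3) / 3.0
    return -0.5 * dT_over_T * math.exp(-x0) / (ad_over_a0 - 1.0)

def entropy_per_particle(dT_over_T: float) -> float:
    """Eq. (40): viscous entropy increase per particle."""
    return 3.0 * dT_over_T

dT = 1.0e-3                        # illustrative kinetic-theory input
print(c2_Gamma(dT, 2.0))           # negative, as required by Pi < 0
print(entropy_per_particle(dT))    # of order 3 x 10^-3
```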
Acknowledgements: This work was partially supported by a European Science Exchange Programme grant.

APPENDIX A: CHARACTERISTIC VELOCITIES FOR BULK VISCOUS PERTURBATIONS

Following [17], we derive equation (6) for the dissipative contribution to the sound speed. The full analysis of the causality and stability of the Israel-Stewart theory was performed in a series of papers by Hiscock and Lindblom [13,28]. They showed that both issues are closely related, and they obtained general expressions for the characteristic velocities of dissipative perturbations. Here we extract from their general expressions specific results for the case in which only bulk viscosity is present.

The purely bulk viscous case stems from the general expressions of [13] by setting all the coefficients coupled to heat flux and shear viscosity to zero. This yields a vanishing speed for propagating transverse modes, which is what one expects, since bulk viscosity generates only scalar sound-wave perturbations. Equation (128) of [13] governs the speed $v = v_L$ of propagating longitudinal modes. On dividing by $\beta_0\beta_2$, setting $\alpha_0 = \alpha_1 = 0$, then dividing by $\beta_1$ and taking $\beta_1 \to \infty$, it reduces to
$$ v^2 = c_s^2 + \frac{\zeta}{(\rho + p)\,\tau} . \qquad (A1) $$
The first term on the right is the adiabatic contribution $c_s^2$ to $v^2$, and the second term is the dissipative contribution $c_b^2$, which, on requiring $v^2 \leq 1$, leads to
$$ c_b^2 = \frac{\zeta}{(\rho + p)\,\tau} \leq 1 - c_s^2 . \qquad (A2) $$
We also learn from [13] that causality and stability impose a further inequality for all $\lambda$ such that $0 \leq \lambda \leq 1$; this condition is seen to hold on account of the inequality (A2). The expression for $c_b$ refines and corrects the statement in [29] (the first paper to apply causal bulk viscosity in cosmology) that $\zeta/\rho\tau = 1$ is required by causality.

References

[1] S. Weinberg, Gravitation and Cosmology (Wiley, New York, 1972).
[2] A. D. Dolgov, astro-ph/9807134.
[3] M. A. Herrera and S. Hacyan, Astrophys. J. 336, 539 (1989).
[4] N. C. Raha and B. Mitra, Phys. Rev. D 44, 393 (1991).
[5] N. Fornengo, C. W. Kim, and J. Song, Phys. Rev. D 56, 5213 (1997).
[6] A. D. Dolgov and M. Fukugita, Phys. Rev. D 46, 5378 (1992).
[7] N. Y. Gnedin and O. Y. Gnedin, Astrophys. J. 509, 11 (1998).
[8] A. D. Dolgov, S. H. Hansen, and D. V. Semikoz, Nucl. Phys. B 503, 426 (1997).
[9] W. Zimdahl, D. Pavon, and R. Maartens, Phys. Rev. D 55, 4681 (1997).
[10] W. Israel and J. M. Stewart, Ann. Phys. (NY) 118, 341 (1979).
[11] D. Pavon, D. Jou, and J. Casas-Vázquez, Ann. Inst. H. Poincaré 36, 79 (1982).
[12] C. Eckart, Phys. Rev. 58, 919 (1940).
[13] W. A. Hiscock and L. Lindblom, Ann. Phys. (NY) 151, 466 (1983).
[14] M. A. Herrera and S. Hacyan, Phys. Fluids 28, 3253 (1985).
[15] R. Maartens and J. Triginer, Phys. Rev. D 58, 123507 (1998).
[16] R. Maartens and J. Triginer, Phys. Rev. D 56, 4640 (1997).
[17] R. Maartens, in Hanno Rund Conference on Relativity and Thermodynamics, ed. S. D. Maharaj (University of Natal, South Africa, 1996), astro-ph/9609119.
[18] R. Maartens and V. Méndez, Phys. Rev. D 55, 1937 (1997).
[19] W. A. Hiscock and J. Salmonson, Phys. Rev. D 43, 3249 (1991).
[20] R. Maartens, Class. Quantum Grav. 12, 1455 (1995).
[21] W. Zimdahl, Phys. Rev. D 53, 5483 (1996).
[22] V. Méndez and J. Triginer, J. Math. Phys. 37, 2906 (1996).
[23] T. Padmanabhan, Structure Formation in the Universe (Cambridge University Press, Cambridge, 1993).
[24] S. Weinberg, Astrophys. J. 168, 175 (1971).
[25] N. Udey and W. Israel, Mon. Not. R. Astron. Soc. 199, 1137 (1982).
[26] E. Kamke, Differentialgleichungen: Lösungsmethoden und Lösungen I (Teubner, Stuttgart, 1983), p. 117.
[27] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products (Academic, London, 1980), p. 925.
[28] W. A. Hiscock and L. Lindblom, in Contemporary Mathematics: Mathematics and General Relativity, ed. J. Isenberg (American Math. Society, 1988).
[29] V. A. Belinskii, E. S. Nikomarov, and I. M. Khalatnikov, Sov. Phys. JETP 50, 213 (1979).
Title: Backreaction effects of dissipation in neutrino decoupling
Authors: Roy Maartens (School of Computer Science and Mathematics, Portsmouth University, PO1 2EG, Portsmouth, England) and Josep Triginer (Department of Physics, Autonomous University of Barcelona, 08193 Bellaterra, Spain)
arXiv: astro-ph/9901211; doi: 10.1023/a:1001994404420

Abstract: Dissipative effects during neutrino decoupling in the early universe create a small backreaction on the Hubble rate, and lead to a small rise in temperature and entropy. We use a simplified thermo-hydrodynamic model, which provides a causal approximation to kinetic theory, in order to estimate the backreaction effects and the entropy production.

I. INTRODUCTION

Non-equilibrium processes in the early universe are typically associated with dynamical transitions or particle decouplings. In the case of neutrino decoupling, the standard approach is to treat the process as adiabatic (see e.g. [1]). The small non-equilibrium effects are thus usually neglected, which provides a reasonable approximation. However, given the increasing accuracy of cosmological observations and theoretical modeling, it is worthwhile revisiting the standard equilibrium models of processes such as neutrino decoupling, in order to see whether non-equilibrium corrections can lead to observable consequences. Recently, non-equilibrium corrections in neutrino decoupling have been calculated in a number of papers, using complicated kinetic theory and numerical computations (see [2] for a short review). The corrections are very small, as expected. For example, in [3-5] it was found that non-equilibrium effects lead to a small change in the decoupling temperature for neutrinos. Spectral distortions have also been analyzed [6], showing the remarkable fact that they amount to as much as 1% or more on the higher-energy side of the spectrum. Although these corrections in the spectrum, energy density and temperature of the neutrino component have hardly any effect on primordial helium synthesis, yielding a change in the mass fraction of order 10^-4, they can lead to other effects that may be observable. Thus it has been shown that the non-equilibrium increase in neutrino temperature, which leads to an extra injection of energy into the photon spectrum, shifts the epoch of equality between matter and radiation, which, in turn, modifies the angular spectrum of fluctuations of the cosmic microwave background radiation [7,8].

Despite the accuracy of these models in obtaining corrections to the decoupling temperature and distribution function due to non-equilibrium effects, they still make use of the standard Friedmann equations for a perfect (i.e. non-dissipative) fluid. This leads to the physically inconsistent situation in which, say, the energy density and expansion evolve in time like a radiative fluid in equilibrium. One expects that small distortions in the particle equilibrium distribution function should be reflected in the macroscopic (i.e. fluid) description, as given by the stress-energy tensor, by adding a bulk viscous pressure to the equilibrium pressure. Here we consider an alternative thermo-hydrodynamic model of dissipative effects in neutrino decoupling, simple enough to produce analytic solutions for the backreaction effects on the universal scale factor, and estimates for the entropy production due to dissipation. As explained above, these effects are not the focus of recent papers, which use sophisticated kinetic-theory models focusing on the neutrino temperature. Our simplified approach cannot compete with these models for accuracy and completeness, but it has the advantage of simplicity, allowing a qualitative understanding of effects not previously investigated in detail. A similar approach has previously been applied in [9] to the reheating era that follows inflation.

The thermo-hydrodynamic model is based on an approximation to kinetic theory which respects relativistic causality. This approximation is the Grad moment method, leading to the causal thermodynamics of Israel and Stewart [10] in the hydrodynamic regime (see also [11] for an alternative but equivalent approach). This causal theory is a generalization of the more commonly used relativistic Navier-Stokes-Fourier theory. The latter, due to Eckart [12], may be derived via the Chapman-Enskog approximation in kinetic theory. The resulting theory is quasi-stationary and noncausal, and suffers from the pathologies of infinite wavefront speeds and instability of all equilibrium states.
Defect wormhole: A traversable wormhole without exotic matter
F. R. Klinkhamer* ([email protected]), Institute for Theoretical Physics, Karlsruhe Institute of Technology (KIT), 76128 Karlsruhe, Germany
Acta Phys. Polon. B; 8 Jun 2023 (v7)

We present a traversable-wormhole solution of the gravitational field equation of general relativity without need of exotic matter (exotic matter can, for example, have negative energy density and vanishing isotropic pressure). Instead of exotic matter, the solution relies on a 3-dimensional "spacetime defect" characterized by a locally vanishing metric determinant.

I. INTRODUCTION

Traversable wormholes [1] appear to require "exotic" matter, for example matter violating the Null Energy Condition (NEC); see, e.g., Ref. [2] for further discussion and references. In this paper, we look for a way around the necessity of having exotic matter, while making no essential changes in the established theories (general relativity and the standard model of elementary particle physics). Throughout, we use natural units with $c = 1$ and $\hbar = 1$.

II. BASIC IDEA

The regularized-big-bang spacetime [3,4] is a solution of the gravitational field equation of general relativity with normal matter and a degenerate metric. This spacetime corresponds to a traversable cosmic bounce [5,6] (a brief review appears in Ref. [7]). For comments on the standard version of general relativity and the extended version used here, see the last two paragraphs in Sec. I of Ref. [3]. As noted briefly in Sec. II of Ref. [4] and more extensively in Sec. II of Ref. [6], the degeneracy of the regularized-big-bang metric gives an effective matter component which is "exotic," specifically NEC violating. (The NEC [2] corresponds to the following requirement on the energy-momentum tensor $T^\mu{}_\nu$ for an arbitrary null vector $k^\mu$: $T^\mu{}_\nu\, k_\mu k^\nu \geq 0$.)
The heuristics, then, is that the exotic effects of the metric degeneracy turn the singular (concave) big-bang behavior [$a(t) \sim \sqrt{t} \to 0$ for $t \downarrow 0$] into a smooth (convex) bounce behavior [$a(T) \sim a_B + T^2$ for $T \in (-\Delta T, +\Delta T)$, with $a_B > 0$ and $\Delta T > 0$]. We now try to do something similar for the traversable wormhole, using general relativity with normal matter but allowing for a degenerate metric.

III. SIMPLE EXAMPLE

A. Nondegenerate metric: special case

We can test the basic idea of Sec. II by starting from the simple example discussed by Morris and Thorne (MT) in Box 2 of Ref. [1]. There, the special case of a more general metric is given by (recall $c = 1$)
$$ ds^2\big|_{\text{(EBMT-worm-spec)}} \equiv g_{\mu\nu}(x)\,dx^\mu dx^\nu\big|_{\text{(EBMT-worm-spec)}} = -dt^2 + dl^2 + \left(b_0^2 + l^2\right)\left(d\theta^2 + \sin^2\theta\,d\phi^2\right) , \qquad (3.1) $$
with a nonzero real constant $b_0$ (taken to be positive, for definiteness). The coordinates $t$ and $l$ in (3.1) range over $(-\infty, \infty)$, and $\theta \in [0, \pi]$ and $\phi \in [0, 2\pi)$ are the standard spherical polar coordinates (see the paragraph below for a technical remark). Earlier discussions of this type of metric appeared in the independent papers of Ellis [8] and Bronnikov [9]; for this reason, we have added "EB" to the suffix in (3.1).

The announced technical remark, which can be skipped in a first reading, concerns the coordinates of the 2-sphere. Instead of the single set $\{\theta, \phi\}$, we should really use two (or more) appropriate coordinate patches for the 2-sphere [10,11]. A well-known example has the coordinates $\{X, Y\}$ obtained by stereographic projection from the North Pole ($\theta = 0$) onto the equatorial plane $\mathbb{R}^2$, and the coordinates $\{U, V\}$ obtained by stereographic projection from the South Pole ($\theta = \pi$); see Exercise 5.1 in Sec. 5.1 of Ref. [11].
For the coordinates of the first patch, the last term in the squared line element from (3.1) is replaced by $4\,(b_0^2 + l^2)\,(1 + X^2 + Y^2)^{-2}\left(dX^2 + dY^2\right)$ and, for the coordinates of the second patch, there is the term $4\,(b_0^2 + l^2)\,(1 + U^2 + V^2)^{-2}\left(dU^2 + dV^2\right)$. In the first coordinate patch, the resulting metric components $g_{XX}$ and $g_{YY}$ vanish nowhere, and similarly for $g_{UU}$ and $g_{VV}$ in the second patch [by contrast, the metric component $g_{\phi\phi}$ from (3.1) vanishes at the two points $\theta = 0$ and $\pi$, where $\sin^2\theta = 0$].

Let us continue the discussion of the metric (3.1) as it stands. Then, according to items (d) and (e) of Box 2 in Ref. [1], the wormhole from (3.1) is traversable; see also Fig. 6 in Ref. [8]. The crucial question, however, is the dynamics: can the wormhole metric (3.1) be a solution of the Einstein equation? Morris and Thorne used a type of engineering approach: fix the desired specifications and see what it takes. The Einstein equation, $G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}g_{\mu\nu}R = 8\pi G\,T_{\mu\nu}$, then requires the following components of the energy-momentum tensor [1]:
$$ T^t{}_t\big|_{\text{(EBMT-worm-spec)}} = \frac{1}{8\pi G}\,\frac{b_0^2}{(b_0^2 + l^2)^2} , \qquad (3.2a) $$
$$ T^l{}_l\big|_{\text{(EBMT-worm-spec)}} = -\frac{1}{8\pi G}\,\frac{b_0^2}{(b_0^2 + l^2)^2} , \qquad (3.2b) $$
$$ T^\theta{}_\theta\big|_{\text{(EBMT-worm-spec)}} = \frac{1}{8\pi G}\,\frac{b_0^2}{(b_0^2 + l^2)^2} , \qquad (3.2c) $$
$$ T^\phi{}_\phi\big|_{\text{(EBMT-worm-spec)}} = \frac{1}{8\pi G}\,\frac{b_0^2}{(b_0^2 + l^2)^2} , \qquad (3.2d) $$
with all other components vanishing. The energy density is given by $\rho = T^{tt} = -T^t{}_t$, and we have $\rho < 0$ from (3.2a), which definitely corresponds to unusual matter. Moreover, we verify, for the radial null vector $k^\mu = (1, 1, 0, 0)$, the inequality
$$ T^\mu{}_\nu\,k_\mu k^\nu\big|_{\text{(EBMT-worm-spec)}} = \frac{1}{8\pi G}\,\frac{b_0^2}{(b_0^2 + l^2)^2}\,(-1 - 1) < 0 . \qquad (3.3) $$

We, next, consider the following metric Ansatz:
$$ ds^2\big|_{\text{(K-worm-spec)}} = -dt^2 + \frac{\xi^2}{\lambda^2 + \xi^2}\,d\xi^2 + \left(b_0^2 + \xi^2\right)\left(d\theta^2 + \sin^2\theta\,d\phi^2\right) , \qquad (3.4) $$
with nonzero real constants $\lambda$ and $b_0$ (both taken to be positive, for definiteness) and coordinates $t$ and $\xi$ ranging over $(-\infty, \infty)$.
The metric from (3.4) gives the following Ricci and Kretschmann curvature scalars:
$$ R\big|_{\text{(K-worm-spec)}} = -2\,\frac{b_0^2 - \lambda^2}{(b_0^2 + \xi^2)^2} , \qquad (3.5a) $$
$$ K\big|_{\text{(K-worm-spec)}} = 12\,\frac{(b_0^2 - \lambda^2)^2}{(b_0^2 + \xi^2)^4} , \qquad (3.5b) $$
both of which are finite, perfectly smooth, and vanishing for $\xi \to \pm\infty$. The metric $g_{\mu\nu}(x)$ from (3.4) is degenerate, with a vanishing determinant $g(x) \equiv \det[g_{\mu\nu}(x)]$ at $\xi = 0$. [Note that the metric $g_{\mu\nu}(x)$ from (3.1) is nondegenerate, as its determinant $g(x)$ vanishes nowhere, provided two suitable coordinate patches are used for the 2-sphere.] In physical terms, this 3-dimensional hypersurface at $\xi = 0$ corresponds to a "spacetime defect" [12-15], and the Einstein equation is defined at $\xi = 0$ by continuous extension from its limit $\xi \to 0$ (for this last point, see, in particular, Sec. 3.3.1 of Ref. [14] and also the related discussion in the second and third paragraphs of Sec. IV C 1). The terminology "spacetime defect" is by analogy with crystallographic defects in an atomic crystal (such crystallographic defects are typically formed during a rapid crystallization process).

We now have two further technical remarks, which can be skipped in a first reading. First, we might consider changing the quasi-radial coordinate $\xi$ to
$$ l = \xi\,\sqrt{1 + \lambda^2/\xi^2} \in (-\infty, -\lambda] \cup [\lambda, \infty) , \qquad (3.6) $$
which would give a metric similar to (3.1),
$$ ds^2 = -dt^2 + dl^2 + \left(b_0^2 + l^2 - \lambda^2\right)\left(d\theta^2 + \sin^2\theta\,d\phi^2\right) . \qquad (3.7) $$
But this coordinate transformation $\xi \to l$ is discontinuous and, therefore, not a diffeomorphism. We also remark that the coordinate $l$ from (3.6) is unsatisfactory for the correct description of the whole spacetime manifold, as, for given values of $\{t, \theta, \phi\}$, both $l = -\lambda$ and $l = \lambda$ correspond to a single point of the manifold (with the single coordinate $\xi = 0$). The proper coordinates of the defect-wormhole spacetime (3.4) are $\{t, \xi, \theta, \phi\}$ and not $\{t, l, \theta, \phi\}$, or possible regularizations based on the latter coordinates.
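As a quick sanity check on Eq. (3.5), the two scalars can be coded directly (a sketch of ours, not from the paper): both vanish identically for $\lambda^2 = b_0^2$, the flat vacuum case encountered below, and reduce to the values for the metric (3.1) when $\lambda = 0$:

```python
def ricci_scalar(xi, b0, lam):
    """Ricci scalar R of Eq. (3.5a) for the defect-wormhole metric (3.4)."""
    return -2.0 * (b0**2 - lam**2) / (b0**2 + xi**2) ** 2

def kretschmann_scalar(xi, b0, lam):
    """Kretschmann scalar K of Eq. (3.5b)."""
    return 12.0 * (b0**2 - lam**2) ** 2 / (b0**2 + xi**2) ** 4

# lambda^2 = b0^2: both scalars vanish identically (flat vacuum wormhole)
assert ricci_scalar(0.7, 1.0, 1.0) == 0.0
assert kretschmann_scalar(0.7, 1.0, 1.0) == 0.0

# lambda = 0 (metric (3.1) with l = xi): R = -2/b0^2, K = 12/b0^4 at the throat
assert abs(ricci_scalar(0.0, 1.0, 0.0) + 2.0) < 1e-12
assert abs(kretschmann_scalar(0.0, 1.0, 0.0) - 12.0) < 1e-12
```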
For further discussion of some of the physics and mathematics issues of such spacetime defects, see Sec. III of Ref. [13] and Sec. 3 of Ref. [14]. Second, the embedding diagram of the spacetime (3.4) for $(t, \theta) = (\text{const}, \pi/2)$ and $0 < \lambda^2 < b_0^2$ is similar [with a 3-dimensional Euclidean embedding space] to the embedding diagram of the spacetime (3.1) for the same values of $(t, \theta)$ and $b_0^2$, as given by item (b) of Box 2 in Ref. [1]. The embedding diagram of the spacetime (3.4) for $\lambda^2 > b_0^2$ is similar to the embedding diagrams for $\lambda^2 \in [0, b_0^2)$, except that, for $\lambda^2 > b_0^2$, there is a (2+1)-dimensional Minkowski embedding space. These new embedding diagrams for $\lambda^2 > 0$ (and, for the moment, $\lambda^2 \neq b_0^2$) are nonsmooth at $\xi = 0$, which is a direct manifestation of the presence of the spacetime defect. In order to obtain smooth motion, we are led to nonstandard identifications at the wormhole throat. This is especially clear in the description of the spacetime (3.4) at $\lambda^2 = b_0^2$, which has a flat metric (3.7) in terms of the auxiliary quasi-radial variable $l$. The description then uses two copies of flat Euclidean space $E^3$, with the interiors of two balls of radius $\lambda$ removed and their surfaces at $l = \pm\lambda$ identified "antipodally" (see Sec. IV B for further details).

After these technical remarks, we return to the metric (3.4) and observe that the Einstein equation (defined at $\xi = 0$ by the limit; see above) requires
$$ T^t{}_t\big|_{\text{(K-worm-spec)}} = \frac{1}{8\pi G}\,\frac{b_0^2 - \lambda^2}{(b_0^2 + \xi^2)^2} , \qquad (3.8a) $$
$$ T^\xi{}_\xi\big|_{\text{(K-worm-spec)}} = -\frac{1}{8\pi G}\,\frac{b_0^2 - \lambda^2}{(b_0^2 + \xi^2)^2} , \qquad (3.8b) $$
$$ T^\theta{}_\theta\big|_{\text{(K-worm-spec)}} = \frac{1}{8\pi G}\,\frac{b_0^2 - \lambda^2}{(b_0^2 + \xi^2)^2} , \qquad (3.8c) $$
$$ T^\phi{}_\phi\big|_{\text{(K-worm-spec)}} = \frac{1}{8\pi G}\,\frac{b_0^2 - \lambda^2}{(b_0^2 + \xi^2)^2} . $$
(3.8d)

Compared to the previous results (3.2), we see that the previous factors $b_0^2$ in the numerators have been replaced by new factors $(b_0^2 - \lambda^2)$, with corresponding changes in the denominators [$b_0^2 \to b_0^2 - \lambda^2$ and $l^2 \to l^2 = \lambda^2 + \xi^2$, so that $b_0^2 + l^2 \to b_0^2 + \xi^2$]. Starting from $\lambda^2 = 0^+$, these new numerator factors $(b_0^2 - \lambda^2)$ change sign as $\lambda^2$ increases above $b_0^2$, and we no longer require exotic matter. Indeed, we have from (3.8a) that $\rho = -T^t{}_t > 0$ for $\lambda^2 > b_0^2$. Moreover, we readily obtain, for any null vector $k^\mu$ and parameters $\lambda^2 \geq b_0^2$, the inequality
$$ T^\mu{}_\nu\,k_\mu k^\nu\big|_{\text{(K-worm-spec)},\ \lambda^2 \geq b_0^2} \geq 0 , \qquad (3.9) $$
which verifies the NEC [this result follows equally from the expressions in Sec. III A, if we again replace the numerator factors $b_0^2$ there by $(b_0^2 - \lambda^2)$ and make the corresponding changes in the denominators]. There is, of course, also the special case $\lambda^2 = b_0^2$, for which the energy-momentum tensor vanishes altogether,
$$ T^\mu{}_\nu\big|_{\text{(K-worm-spec)},\ \lambda^2 = b_0^2} = 0 , \qquad (3.10) $$
and so do the curvature scalars (3.5). In that case, we have a wormhole in the vacuum, which will be discussed further in Sec. IV C, where the radial geodesics will also be presented.

IV. DEGENERATE WORMHOLE METRIC

A. General Ansatz

The special degenerate metric (3.4) can be generalized as follows:
$$ ds^2\big|_{\text{(K-worm-gen)}} \equiv g_{\mu\nu}(x)\,dx^\mu dx^\nu\big|_{\text{(K-worm-gen)}} = -e^{2\phi(\xi)}\,dt^2 + \frac{\xi^2}{\lambda^2 + \xi^2}\,d\xi^2 + r^2(\xi)\left(d\theta^2 + \sin^2\theta\,d\phi^2\right) , \qquad (4.1) $$
with a positive length scale $\lambda$ and real functions $\phi(\xi)$ and $r(\xi)$. Again, the coordinates $t$ and $\xi$ range over $(-\infty, \infty)$, while $\theta \in [0, \pi]$ and $\phi \in [0, 2\pi)$ are the standard spherical polar coordinates [as mentioned in Sec. III A, we should really use two appropriate coordinate patches for the 2-sphere]. If, moreover, we assume that $\phi(\xi)$ remains finite everywhere and that $r(\xi)$ is positive with $r(\xi) \sim |\xi|$ for $\xi \to \pm\infty$, then the spacetime from (4.1) corresponds to a wormhole (see also the discussion at the beginning of Sec. 11.2 in Ref. [2]).
If the global minimum of the function $r(\xi)$ has the value $b_0 > 0$ at $\xi = \xi_0 \equiv 0$, and if the function $\phi(\xi)$ is essentially constant near $\xi = 0$, then we expect interesting behavior for $\lambda^2$ of the order of $b_0^2$ or larger. In fact, using power series in $\xi^2$ for the Ansatz functions of the metric (4.1) [specifically, $\phi(\xi) = c_0 + c_2\,\xi^2 + c_4\,\xi^4 + \ldots$ and $r^2(\xi) = b_0^2 + d_2\,\xi^2 + d_4\,\xi^4 + \ldots$], we get energy-momentum components without singular behavior at $\xi = 0$. It is clear that further work will be cumbersome but perhaps not impossible. Some numerical results will be discussed at the end of Sec. V.

For later use, we already give the tetrad $e^a{}_\mu$ corresponding to the general metric $g_{\mu\nu} = \eta_{ab}\,e^a{}_\mu\,e^b{}_\nu$ from (4.1):
$$ e^0{}_\mu(x)\big|_{\text{(K-worm-gen)}} = e^{\phi(\xi)}\,\delta^0_\mu , \qquad (4.2a) $$
$$ e^1{}_\mu(x)\big|_{\text{(K-worm-gen)}} = \frac{\xi}{\sqrt{\lambda^2 + \xi^2}}\,\delta^1_\mu , \qquad (4.2b) $$
$$ e^2{}_\mu(x)\big|_{\text{(K-worm-gen)}} = \sqrt{r^2(\xi)}\,\delta^2_\mu , \qquad (4.2c) $$
$$ e^3{}_\mu(x)\big|_{\text{(K-worm-gen)}} = \sqrt{r^2(\xi)}\,\sin\theta\,\delta^3_\mu , \qquad (4.2d) $$
where the argument $x$ of the tetrad stands for the coordinates $\{t, \xi, \theta, \phi\}$. The particular choice for (4.2b) will be commented on in Sec. IV C 2.

B. Topology and orientability

For a brief discussion of the topology of the spacetime with metric (4.1), we can set $\phi(\xi) = 0$ and $r^2(\xi) = b_0^2 + \xi^2$, so that we are back to the special metric (3.4).
Then, from the auxiliary coordinates $\{l, \theta, \phi\}$ in the metric (3.7), for general $\lambda > 0$ and $b_0 > 0$, we get the following two sets of Cartesian coordinates (one for the "upper" universe with $l > \lambda$ and the other for the "lower" universe with $l < -\lambda$):
$$ \begin{pmatrix} Z_+ \\ Y_+ \\ X_+ \end{pmatrix} = l \begin{pmatrix} \cos\theta \\ \sin\theta\,\sin\phi \\ \sin\theta\,\cos\phi \end{pmatrix} , \quad \text{for } l \geq \lambda > 0 , \qquad (4.3a) $$
$$ \begin{pmatrix} Z_- \\ Y_- \\ X_- \end{pmatrix} = l \begin{pmatrix} \cos\theta \\ \sin\theta\,\sin\phi \\ \sin\theta\,\cos\phi \end{pmatrix} , \quad \text{for } l \leq -\lambda < 0 , \qquad (4.3b) $$
$$ \{Z_+, Y_+, X_+\} \,\hat{=}\, \{Z_-, Y_-, X_-\} , \quad \text{for } |l| = \lambda , \qquad (4.3c) $$
where the last relation implements the identification of "antipodal" points on the two 2-spheres $S^2_\pm$ with $|l| = \lambda$ (the quotation marks are used because, normally, antipodal points are identified on a single 2-sphere, as for the $\mathbb{RP}^3$ defect discussed in Refs. [12,13,16]). Note that the two coordinate sets $\{Z_\pm, Y_\pm, X_\pm\}$ from (4.3a) and (4.3b) have different orientation (see the penultimate paragraph of Sec. IV C 2 for a further comment).

The spatial topology of our degenerate-wormhole spacetime (4.1) is that of two copies of the Euclidean space $E^3$, with the interiors of two balls removed and "antipodal" identification (4.3c) of their two surfaces. It can be verified that the wormhole spacetime from (4.1) and (4.3) is simply connected (all loops in space are contractible to a point), whereas the original exotic-matter wormhole [1] is multiply connected (there are noncontractible loops in space, for example, a loop in the upper universe encircling the wormhole mouth).

C. Vacuum solution

1. First-order equations

Awaiting the final analysis of the general metric (4.1), we recall, from Sec. III B, that we already have an analytic wormhole-type solution of the Einstein gravitational field equation (defined at $\xi = 0$ by the limit $\xi \to 0$):
$$ \left\{\phi(\xi),\, r^2(\xi)\right\}\big|_{\text{(K-worm-gen)}}^{\text{vacuum sol}} = \left\{0,\, \lambda^2 + \xi^2\right\} , \qquad (4.4a) $$
$$ T^\mu{}_\nu(\xi)\big|_{\text{(K-worm-gen)}}^{\text{vacuum sol}} = 0 . $$
(4.4b)

Unlike Minkowski spacetime, this flat vacuum-wormhole spacetime has asymptotically two flat 3-spaces with different orientations (see Sec. IV C 2 for further comments). Before we turn to the geodesics of the vacuum-wormhole spacetime (4.4), we present an important mathematical result on the spacetime structure at the wormhole throat, $\xi = 0$. It has been observed by Horowitz [18] that the first-order (Palatini) formalism of general relativity is especially suited to the case of degenerate metrics, the essential point being that the first-order formalism does not require the inverse metric. Let us have a look at the degenerate vacuum-wormhole metric from (4.1) and (4.4). We refer to Refs. [10,11] for background on Cartan's differential-form approach and adopt the notation of Ref. [10]. Take, then, the following dual basis $e^a \equiv e^a{}_\mu\,dx^\mu$ from the general expression (4.2) with restrictions (4.4a):
$$ e^0\big|^{\text{vacuum sol}} = dt , \qquad (4.5a) $$
$$ e^1\big|^{\text{vacuum sol}} = \frac{\xi}{\sqrt{\lambda^2 + \xi^2}}\,d\xi , \qquad (4.5b) $$
$$ e^2\big|^{\text{vacuum sol}} = \sqrt{\lambda^2 + \xi^2}\,d\theta , \qquad (4.5c) $$
$$ e^3\big|^{\text{vacuum sol}} = \sqrt{\lambda^2 + \xi^2}\,\sin\theta\,d\phi . \qquad (4.5d) $$
This basis gives, from the metricity condition ($\omega_{ab} = -\omega_{ba}$) and the no-torsion condition ($de^a + \omega^a{}_b \wedge e^b = 0$), the following nonzero components of the Levi-Civita spin connection:
$$ \omega^1{}_2 = -d\theta , \quad \omega^1{}_3 = -\sin\theta\,d\phi , \quad \omega^2{}_3 = -\cos\theta\,d\phi , \qquad (4.6) $$
and the corresponding curvature 2-forms vanish identically,
$$ R^a{}_b \equiv d\omega^a{}_b + \omega^a{}_c \wedge \omega^c{}_b = 0 . \qquad (4.7) $$
The crucial observation is that the above spin-connection and curvature components are well behaved at $\xi = 0$, so that there is no direct need for the $\xi \to 0$ limit. All in all, the degenerate vacuum-wormhole metric from (4.1) and (4.4) provides a smooth solution (4.5)-(4.6) of the first-order equations of general relativity [18],
$$ e^{[a} \wedge D\,e^{b]} = 0 , \qquad (4.8a) $$
$$ e^b \wedge R^{cd}\,\epsilon_{abcd} = 0 , \qquad (4.8b) $$
with the completely antisymmetric symbol $\epsilon_{abcd}$, the covariant derivative $D\,e^b \equiv de^b + \omega^b{}_c \wedge e^c$, and the square brackets around the Lorentz indices $a$ and $b$ denoting antisymmetrization.
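The "antipodal" identification (4.3c) of Sec. IV B can be made concrete in a few lines. The sketch below (ours, with a hypothetical function name) embeds the two throat spheres via (4.3a) and (4.3b) and checks that a given $(\theta, \phi)$ on the lower-chart throat sits at minus the Cartesian position of the same $(\theta, \phi)$ on the upper-chart throat, i.e. at the antipode in the embedding space:

```python
import math

def embed(l, theta, phi):
    """Cartesian (Z, Y, X) of Eqs. (4.3a)/(4.3b) for |l| >= lambda."""
    return (l * math.cos(theta),
            l * math.sin(theta) * math.sin(phi),
            l * math.sin(theta) * math.cos(phi))

lam = 1.0                          # wormhole length scale (illustrative value)
theta, phi = 0.3, 1.2              # arbitrary point on the throat sphere
upper = embed(+lam, theta, phi)    # throat point, upper chart (l = +lambda)
lower = embed(-lam, theta, phi)    # identified point, lower chart (l = -lambda)

# the identification (4.3c) glues (Z+, Y+, X+) to (Z-, Y-, X-) = -(Z+, Y+, X+)
assert all(abs(u + v) < 1e-12 for u, v in zip(upper, lower))
```

This also makes the orientation flip explicit: the map $(Z, Y, X) \to (-Z, -Y, -X)$ has determinant $-1$ in three dimensions, consistent with the different orientations of the two coordinate sets noted after (4.3c).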
We see that the complete vacuum solution is given by the tetrad $e^a{}_\mu(x)$ from (4.5) and the connection $\omega_\mu{}^a{}_b(x)$ from (4.6), not just the metric $g_{\mu\nu}(x)$ from (4.1) and (4.4).

2. Geodesics

We now obtain explicitly the radial geodesics $\xi(t)$ passing through the vacuum-wormhole throat, by adapting result (3.6b) of Ref. [16] to our case:
$$ \xi(t)\big|_{\text{rad-geod}} = \begin{cases} \pm\sqrt{(Bt)^2 + 2B\lambda t} , & \text{for } t \geq 0 , \\[2pt] \mp\sqrt{(Bt)^2 - 2B\lambda t} , & \text{for } t \leq 0 , \end{cases} \qquad (4.9) $$
with $X_- = -\lambda$ and $X_+ = +\lambda$ identified at $t = 0$. The apparent discontinuity of $X_\pm$ is an artifact of using two copies of Euclidean 3-space for the embedding of the trajectory; the curve on the real manifold is smooth, as shown by (4.9). This point is also illustrated, for the $\mathbb{RP}^3$ defect, by Figs. 2 and 3 in Ref. [16]. The curves from (4.10) in the $(t, X_-)$ and $(t, X_+)$ planes have two parallel straight-line segments, shifted at $t = 0$, with constant positive slope $B \leq 1$ (velocity in units with $c = 1$). This equal velocity before and after the defect crossing is the main argument for using the "antipodal" identification in (4.3). This identification also agrees with the observation that the tetrad from (4.5) provides a smooth solution of the first-order equations of general relativity, which would not be the case for the tetrad with $\xi$ in the numerator on the right-hand side of (4.5b) replaced by $\sqrt{\xi^2}$.

The discussion of nonradial geodesics for the metric (4.4) is similar to the discussion in Ref. [17], which considers a related spacetime defect. The metric of that spacetime defect resembles the metric of the wormhole presented here, but their global spatial structures are different. Still, it appears possible to take over the results from Ref. [17] on defect-crossing nonradial geodesics (which stay in a single universe), if we realize that, for our vacuum wormhole, the defect-crossing geodesics come in from one universe and re-emerge in the other universe.
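The smoothness of the defect crossing claimed for (4.9) can be verified directly: in the auxiliary coordinate $l = \mathrm{sign}(\xi)\sqrt{\lambda^2 + \xi^2}$, the geodesic reduces to uniform motion with velocity $B$ on both sides of the throat. A sketch (ours), with an arbitrary choice of branch signs:

```python
import math

def xi_of_t(t, B, lam):
    """Radial geodesic xi(t) of Eq. (4.9), upper-branch sign choice."""
    if t >= 0.0:
        return +math.sqrt((B * t) ** 2 + 2.0 * B * lam * t)
    return -math.sqrt((B * t) ** 2 - 2.0 * B * lam * t)

B, lam = 0.5, 1.0   # illustrative velocity (B <= 1) and wormhole scale

# upper universe (t >= 0): l(t) = +sqrt(lam^2 + xi^2) = B t + lam
for t in (0.0, 0.5, 2.0):
    xi = xi_of_t(t, B, lam)
    assert abs(math.sqrt(lam**2 + xi**2) - (B * t + lam)) < 1e-12

# lower universe (t < 0): l(t) = -sqrt(lam^2 + xi^2) = B t - lam
t = -1.0
xi = xi_of_t(t, B, lam)
assert abs(-math.sqrt(lam**2 + xi**2) - (B * t - lam)) < 1e-12
```

The quasi-radial coordinate $l$ jumps from $-\lambda$ to $+\lambda$ at $t = 0$ (the identification of the two throat spheres), while the velocity $dl/dt = B$ is the same before and after the crossing.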
These nonradial geodesics of the vacuum wormhole will be discussed later. Following up on the issue of spatial orientability mentioned in the first paragraph of Sec. IV C 1, we have the following comment. If the "advanced civilization" of Ref. [1] has access to our type of defect wormhole, then it is perhaps advisable to start exploration by sending in parity-invariant machines or robots. The reason is that humans of finite size and with right-handed DNA may not be able to pass safely through this particular wormhole-throat defect at $\xi = 0$, which separates two universes with different 3-space orientation. The vacuum solution (4.4), with tetrad (4.5) and connection (4.6), has one free parameter, the length scale $\lambda$, which can, in principle, be determined as the limiting value of circumferences divided by $2\pi$ for great circles centered on the wormhole mouth [in the metric (3.7) for $\lambda^2 = b_0^2$, circles with, for example, $\phi \in [0, 2\pi)$, constant $\theta = \pi/2$, constant $t$, and various positive values of $l$ (keeping the same spatial orientation)]. An alternative way to determine the length scale $\lambda$ of a single vacuum-defect wormhole is to measure its lensing effects (cf. Sec. 5 of Ref. [17]); an explicit example is presented in App. A. Some further comments on this length scale $\lambda$ appear in Sec. V.

V. DISCUSSION

We have five final remarks. First, we have obtained, in line with earlier work by Horowitz [18], a smooth vacuum-wormhole solution of the first-order equations of general relativity, where the tetrad is given by (4.5) and the connection by (4.6). Vacuum wormholes also appear in certain modified-gravity theories (see, e.g., Ref. [19] and references therein), where the exotic effects trace back to the extra terms in the gravitational action (see also Refs. [20-23] and references therein). Our vacuum-wormhole solution does not require any fundamental change of the theory: 4-dimensional general relativity suffices, except that we now allow for degenerate metrics.
The degeneracy hypersurface of the vacuum-wormhole solution corresponds to a "spacetime defect," as discussed extensively in Refs. [3-7, 12-17]. Second, this vacuum-wormhole solution (4.5) has the length scale λ as a free parameter and, if there is a preferred value of λ in Nature, then that value can only come from a theory beyond general relativity. An example of such a theory would be nonperturbative superstring theory in the formulation of the IIB matrix model [24, 25]. That matrix model could give rise to an emergent spacetime with or without spacetime defects [26, 27] and, if defects do appear, then the typical length scale λ of a remnant vacuum-wormhole defect would be related to the IIB-matrix-model length scale ℓ (the Planck length G^(1/2) might also be related to this length scale ℓ). Third, the main objective of the present paper has been to reduce the hurdles to overcome in the quest for traversable wormholes (specifically, we have removed the requirement of exotic matter). But there remains at least one important hurdle, namely to construct a suitable spacetime defect or to harvest one, if already present as a remnant from an early phase. Fourth, if it is indeed possible to harvest a vacuum-defect wormhole, then its length scale λ is most likely very small (perhaps of the order of the Planck length G^(1/2) ∼ 10⁻³⁵ m). The question now arises whether it is, in principle, feasible to enlarge (fatten) that harvested defect wormhole. Preliminary numerical results presented in App. B suggest that the answer may be affirmative. Fifth, the construction of multiple vacuum-defect-wormhole solutions is relatively straightforward, provided the respective wormhole throats do not touch. Details and further discussion are given in a follow-up paper [28].

ACKNOWLEDGMENTS

It is a pleasure to thank Z.L. Wang for useful comments on the manuscript and E. Guendelman for a helpful remark after a recent wormhole talk by the author.
The referee is thanked for a practical suggestion to improve the presentation.

Appendix A: Gedankenexperiment

In this appendix, we describe a Gedankenexperiment designed to measure the length scale λ of a vacuum-defect wormhole by its lensing effects. The lensing property is illustrated in Fig. 1. More specifically, the Gedankenexperiment is based on the fifth remark of Sec. 5 in Ref. [17], which, adapted to our case, states that a permanent point-like source at point P from Fig. 1 will be seen as a luminous disk at point P′ from the same figure. (The lensing properties have, of course, first been discussed for exotic-matter wormholes, and a selection of references appears in Refs. [29-33].) The concrete procedure of the Gedankenexperiment involves three steps (cf. Fig. 1):

1. place a permanent point-like light source at an arbitrary point P;
2. search for the point P′ where a luminous disk is seen;
3. measure two quantities, the angle 2α that the disk subtends as seen from P′ and the shortest distance D_PP′ between the points P and P′.

If no point with a luminous disk can be found in Step 2, then the point P must have been on the wormhole throat and we return to Step 1 by changing the position of point P. Now use the auxiliary coordinates and metric from (3.7) for b₀² = λ². With l_P > λ denoting the quasi-radial coordinate of point P (assumed to be in the "upper" universe), we have D_PP′ = 2 (l_P − λ) and sin α = λ/l_P. Combined, these two expressions give the following result for the length scale λ of the vacuum-defect wormhole:

λ = [sin α / (1 − sin α)] · (D_PP′ / 2) , (A1)

solely in terms of the measured quantities α and D_PP′. We have two further observations. First, recall that a permanent point-like source in Minkowski spacetime will, in principle, illuminate the whole of 3-space.
But, in the vacuum-defect-wormhole spacetime, there will exist dark regions, even in the upper universe where the source is located. In the upper universe of Fig. 1, there is indeed a dark (shadow) region behind the wormhole throat (which has drained away some of the light emitted by the source). It is a straightforward exercise to describe the dark regions exactly. Second, if the light source at P is no longer permanent but a flash instead, then we observe at P′, after a certain moment, an expanding ring, even though there is no motion of the source position. This effect is entirely due to differences in the time-of-flight; for example, the time-of-flight of the short-dashed geodesic in Fig. 1 is longer than that of the dotted geodesic. The maximal angular extension (2α) of the expanding ring and the source-observer distance (D_PP′) can, in principle, give the vacuum-defect-wormhole length scale λ by the expression (A1).

Appendix B: Preliminary numerical results

In the general Ansatz (4.1) for the defect-wormhole metric, we set

φ(ξ) = f(ξ) , (B1a)
r²(ξ) = λ² + ξ² + g(ξ) , (B1b)

so that a nonzero function g(ξ) signals a non-vacuum wormhole configuration. For the numerical analysis, it turns out to be useful to compactify the quasi-radial ξ coordinate,

η = sgn(ξ) ξ² / (λ² + ξ²) ∈ [−1, 1] . (B2)

In the following, we consider only nonnegative η and ξ. Next, we expand the metric functions f(η) = f(ξ) and g(η) = g(ξ) over η ∈ [0, 1],

f(η) = Σ_{n=1}^{N_f} c_n sin(n π η) , (B3a)
g(η) = d₀ (1 − η) + Σ_{n=1}^{N_g} d_n sin(n π η) , (B3b)

with finite cutoffs N_f and N_g on the sums. In this appendix, we take

λ = 1 , d₀ = 1/10 , (B4)

where the small but nonzero ratio d₀/λ² quantifies the deviation from the vacuum configuration. Consider the radial null vector k^μ = (exp f(η), |η|, 0, 0) and the tangential null vector k̃^μ = (exp f(η), 0, √(r²), 0), with replacement (B1b) in terms of η.
Then define the following quantities:

Θ_rad ≡ G^μ_ν k_μ k^ν , (B5a)
Θ_tang ≡ G^μ_ν k̃_μ k̃^ν , (B5b)

where G^μ_ν ≡ R^μ_ν − (1/2) g^μ_ν R is the Einstein tensor, which equals the energy-momentum tensor T^μ_ν from the assumed validity of the Einstein equation for units with 8πG = 1. We now introduce the following "penalty" measure:

P ≡ ∫₀¹ dη { [√((Θ_rad)²) − Θ_rad]/2 + [√((Θ_tang)²) − Θ_tang]/2 } , (B6)

which integrates the negative parts of Θ_rad and Θ_tang. The Null Energy Condition (NEC), as discussed in Sec. II, gives P = 0. With the ad hoc expansion (B3), the quantity P from (B6) is a function of the coefficients c_n and d_n, which can be minimized numerically. We have used the NMinimize routine of Mathematica 5.0. With the eight coefficients from Table I, we are able to reduce the penalty P to a value of order 10⁻⁵, and the corresponding results are shown in Fig. 2. Even more important than this small number for P is the fact that we have established a trend of dropping P for an increasing number of coefficients, as shown in Figs. 3 and 4. Observe also that the function shapes of f(η) and g(η) for N_coeff ≡ N_f + N_g = 4, 6, 8 in Figs. 3 and 4 are more or less stable and that the absolute values of the coefficients c_n and d_n in Table I drop for increasing order n. All this suggests that the numerical results converge on a nontrivial configuration, but this needs to be established rigorously. The numerical results as they stand appear to indicate that

1. the obtained wormhole throat is larger than the one of the original (harvested) vacuum-defect wormhole, min_ξ r²(ξ) > λ²;
2. the energy density ρ and the pressures {p_rad, p_tang} go to zero as ξ⁻⁴ asymptotically;
3. there is only a small violation of the NEC far away from the wormhole throat [this NEC violation can be expected to vanish for an infinite number of appropriate coefficients, and the resulting ρ(η) is perhaps nonnegative everywhere].
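For readers who want to experiment, the compactification (B2), the series Ansatz (B3), and the penalty (B6) are easy to sketch. The snippet below is our own illustration in Python (not the paper's Mathematica 5.0 setup); it reads (B6) as an integral over the negative parts of Θ_rad and Θ_tang, so that P = 0 exactly when the NEC holds, and the Θ profiles passed in would have to come from an actual Einstein-tensor computation:

```python
import math

def eta_of_xi(x, lam=1.0):
    # Compactified quasi-radial coordinate, Eq. (B2): eta in [-1, 1]
    return math.copysign(x * x / (lam**2 + x * x), x)

def f_ansatz(eta, c):
    # Eq. (B3a): finite sine series with coefficients c = [c_1, ..., c_Nf]
    return sum(cn * math.sin(n * math.pi * eta) for n, cn in enumerate(c, start=1))

def g_ansatz(eta, d0, d):
    # Eq. (B3b): linear piece plus finite sine series
    return d0 * (1 - eta) + sum(dn * math.sin(n * math.pi * eta) for n, dn in enumerate(d, start=1))

def penalty(theta_rad, theta_tang, etas):
    # Eq. (B6), read as the integrated negative parts (|Theta| - Theta)/2,
    # evaluated with the trapezoidal rule on the grid `etas` over [0, 1]
    neg = lambda x: (abs(x) - x) / 2
    vals = [neg(a) + neg(b) for a, b in zip(theta_rad, theta_tang)]
    return sum((vals[i] + vals[i + 1]) / 2 * (etas[i + 1] - etas[i])
               for i in range(len(vals) - 1))
```

With Θ_rad and Θ_tang nonnegative everywhere, penalty(...) returns 0, reproducing the statement that the NEC gives P = 0.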
If these preliminary numerical results are confirmed, this implies that we can, in principle, widen the throat of a harvested vacuum-defect wormhole by adding a finite amount of non-exotic matter.

Addendum: Using an adapted penalty function P_NEW and the NMinimize routine of Mathematica 12.1, we have obtained metric functions (B3) with N_coeff = 12 and a penalty value P ≈ 6.1 × 10⁻⁶. The corresponding plots are similar to those of Fig. 2, but here we only show Fig. 5 as a continuation of the sequence in Figs. 3 and 4.

In (4.9), B is a dimensionless constant, B ∈ (0, 1], and the different signs (upper or lower) in front of the square roots correspond to motion in opposite directions. The same curves (4.9) can be more easily obtained from straight lines l(t), with radial velocity magnitude v, in the 2-dimensional Minkowski subspace of the spacetime (3.7) and the definition (3.6) of the coordinate l. The Minkowski-subspace analysis identifies the constant B in (4.9) with the ratio v/c, so that the B = 1 curves are light-like and those with B < 1 timelike. For a more detailed description of these geodesics, it appears worthwhile to change to the Cartesian coordinates of Sec. IV B. Consider, indeed, the radial geodesic (4.9) with the upper signs and fixed {θ, φ} = {π/2, 0}, and obtain the trajectory in terms of the Cartesian coordinates:

X₋(t) |_(vacuum sol; rad-geod) = −λ + B t , for t ≤ 0 , (4.10c)
X₊(t) |_(vacuum sol; rad-geod) = +λ + B t , for t ≥ 0 . (4.10d)

Up to antisymmetry, the connection components of (4.6) are built from {dθ, sin θ dφ, cos θ dφ}. The resulting curvature 2-form R^a_b ≡ dω^a_b + ω^a_c ∧ ω^c_b has all components vanishing identically.

FIG. 1. Geodesics emanating from point P in the "upper" universe of the vacuum-defect-wormhole spacetime with metric (4.1) and (4.4). Certain geodesics (thin solid lines) avoid the wormhole throat and stay in the upper universe, while other geodesics (heavy short-dashed/long-dashed/dotted lines) reach the wormhole throat (here shown as a thick circle with antipodal points identified) and cross over into the "lower" universe. The wormhole acts as a lens, with the transmitted geodesics refocussing at the point P′ in the "lower" universe.

TABLE I. Eight nonzero coefficients giving P = 1.72084 × 10⁻⁵, for numerical parameters λ = 1 and d₀ = 1/10. The coefficients shown are exact numbers and can also be written as rational numbers, for example d₁ = 152362/10⁶.

FIG. 2. Metric Ansatz functions f(η) = f(ξ) and g(η) = g(ξ) in the metric (4.1) as defined by (B1), (B2), and (B3) for the eight coefficients from Table I, giving for the penalty function P from (B6) the value 1.72084 × 10⁻⁵. In addition, there are the following numerical parameters: λ = 1 and d₀ = 1/10. Several quantities result from the basic functions f(η) and g(η). The second row shows, at the left, the energy density ρ ≡ G^t_t and, at the right, the pressures p_rad ≡ G^ξ_ξ [solid curve] and p_tang ≡ G^θ_θ [dashed curve], where G^μ_ν is the Einstein tensor. On the third row, these last three quantities have been multiplied by ξ⁴ = [λ² η/(1 − η)]², in order to display their rapid asymptotic decrease (approximately as ξ⁻⁴ for ξ → ∞ or η → 1). The fourth row shows the quantities Θ_rad and Θ_tang as defined by (B5). The Null Energy Condition would correspond to having Θ_rad ≥ 0 and Θ_tang ≥ 0. With only eight coefficients, there are still small Null-Energy-Condition violations of Θ_rad at η ∼ 0.8 and Θ_tang at η ∼ 0.9; see the main text in App. B for further discussion.

FIG. 3. Numerical results for N_coeff ≡ N_f + N_g = 2 in the top quadrangle [penalty P = 0.000812553 for nonzero coefficients d₁ = 0.0805092 and d₂ = −0.0151493] and for N_coeff = 4 in the bottom quadrangle [penalty P = 0.000111302 for nonzero coefficients c₁ = −0.0789967, c₂ = 0.0186060, d₁ = 0.144524, and d₂ = −0.0286358].

FIG. 4. Numerical results for N_coeff = 6 in the top quadrangle [penalty P = 0.0000316721 for nonzero coefficients c₁ = −0.0686991, c₂ = 0.0168485, c₃ = −0.00924705, d₁ = 0.137967, d₂ = −0.0416794, and d₃ = 0.00952128] and for N_coeff = 8 in the bottom quadrangle [penalty P = 0.0000172084 for the nonzero coefficients from Table I]. For N_coeff = 8, the Θ_rad value drops to approximately −0.001 at η ∼ 0.8 and the Θ_tang value to approximately −0.01 at η ∼ 0.9.

FIG. 5. Numerical results for N_coeff = 12 with P = 6.12347 × 10⁻⁶ for nonzero coefficients c₁ = −0.0782196, c₂ = 0.0107336, c₃ = −0.00647395, c₄ = 0.00391987, c₅ = −0.00119384, c₆ = 0.00229041 and d₁ = 0.151704, d₂ = −0.0321288, d₃ = 0.0195948, d₄ = −0.00525931, d₅ = 0.00290803, d₆ = −0.000818478. The Θ_rad value drops to approximately −5 × 10⁻⁵ at η ∼ 0.8 and the Θ_tang value to approximately −5 × 10⁻³ at η ∼ 0.9.

References

[1] M.S. Morris and K.S. Thorne, "Wormholes in space-time and their use for interstellar travel: A tool for teaching general relativity," Am. J. Phys. 56, 395 (1988).
[2] M. Visser, Lorentzian Wormholes: From Einstein to Hawking (Springer, New York, NY, 1996).
[3] F.R. Klinkhamer, "Regularized big bang singularity," Phys. Rev. D 100, 023536 (2019), arXiv:1903.10450.
[4] F.R. Klinkhamer, "More on the regularized big bang singularity," Phys. Rev. D 101, 064029 (2020), arXiv:1907.06547.
[5] F.R. Klinkhamer and Z.L. Wang, "Nonsingular bouncing cosmology from general relativity," Phys. Rev. D 100, 083534 (2019), arXiv:1904.09961.
[6] F.R. Klinkhamer and Z.L. Wang, "Nonsingular bouncing cosmology from general relativity: Scalar metric perturbations," Phys. Rev. D 101, 064061 (2020), arXiv:1911.06173.
[7] F.R. Klinkhamer, "M-theory and the birth of the Universe," Acta Phys. Polon. B 52, 1007 (2021), arXiv:2102.11202.
[8] H.G. Ellis, "Ether flow through a drainhole: A particle model in general relativity," J. Math. Phys. 14, 104 (1973); Errata, J. Math. Phys. 15, 520 (1974).
[9] K.A. Bronnikov, "Scalar-tensor theory and scalar charge," Acta Phys. Polon. B 4, 251 (1973).
[10] T. Eguchi, P.B. Gilkey, and A.J. Hanson, "Gravitation, gauge theories and differential geometry," Phys. Rept. 66, 213 (1980).
[11] M. Nakahara, Geometry, Topology and Physics, Second Edition (Institute of Physics Publ., Bristol, UK, 2003).
[12] F.R. Klinkhamer, "Skyrmion spacetime defect," Phys. Rev. D 90, 024007 (2014), arXiv:1402.7048.
[13] F.R. Klinkhamer and F. Sorba, "Comparison of spacetime defects which are homeomorphic but not diffeomorphic," J. Math. Phys. 55, 112503 (2014), arXiv:1404.2901.
[14] M. Guenther, "Skyrmion spacetime defect, degenerate metric, and negative gravitational mass," Master Thesis, KIT, September 2017; available from https://www.itp.kit.edu/en/publications/diploma
[15] F.R. Klinkhamer, "On a soliton-type spacetime defect," J. Phys. Conf. Ser. 1275, 012012 (2019), arXiv:1811.01078.
[16] F.R. Klinkhamer, "A new type of nonsingular black-hole solution in general relativity," Mod. Phys. Lett. A 29, 1430018 (2014), arXiv:1309.7011.
[17] F.R. Klinkhamer and Z.L. Wang, "Lensing and imaging by a stealth defect of spacetime," Mod. Phys. Lett. A 34, 1950026 (2019), arXiv:1808.02465.
[18] G.T. Horowitz, "Topology change in classical and quantum gravity," Class. Quant. Grav. 8, 587 (1991).
[19] M. Calzà, M. Rinaldi, and L. Sebastiani, "A special class of solutions in F(R)-gravity," Eur. Phys. J. C 78, 178 (2018), arXiv:1802.00329.
[20] S. Kar, S. Lahiri, and S. SenGupta, "Can extra dimensional effects allow wormholes without exotic matter?," Phys. Lett. B 750, 319 (2015), arXiv:1505.06831.
[21] R.A. Konoplya and A. Zhidenko, "Traversable wormholes in general relativity," Phys. Rev. Lett. 128, 091104 (2022), arXiv:2106.05034.
[22] D.R. Terno, "Inaccessibility of traversable wormholes," Phys. Rev. D 106, 044035 (2022), arXiv:2203.03770.
[23] S. Biswas, M. Rahman, and S. Chakraborty, "Echoes from braneworld wormholes," Phys. Rev. D 106, 124003 (2022), arXiv:2205.14743.
[24] N. Ishibashi, H. Kawai, Y. Kitazawa, and A. Tsuchiya, "A large-N reduced model as superstring," Nucl. Phys. B 498, 467 (1997), arXiv:hep-th/9612115.
[25] H. Aoki, S. Iso, H. Kawai, Y. Kitazawa, A. Tsuchiya, and T. Tada, "IIB matrix model," Prog. Theor. Phys. Suppl. 134, 47 (1999), arXiv:hep-th/9908038.
[26] F.R. Klinkhamer, "IIB matrix model: Emergent spacetime from the master field," Prog. Theor. Exp. Phys. 2021, 013B04 (2021), arXiv:2007.08485.
[27] F.R. Klinkhamer, "IIB matrix model and regularized big bang," Prog. Theor. Exp. Phys. 2021, 063B05 (2021), arXiv:2009.06525.
[28] F.R. Klinkhamer, "Vacuum defect wormholes and a mirror world," arXiv:2305.13278.
[29] L. Chetouani and G. Clément, "Geometrical optics in the Ellis geometry," Gen. Rel. Grav. 16, 111 (1984).
[30] V. Perlick, "On the exact gravitational lens equation in spherically symmetric and static space-times," Phys. Rev. D 69, 064017 (2004), arXiv:gr-qc/0307072.
[31] K.K. Nandi, Y.Z. Zhang, and A.V. Zakharov, "Gravitational lensing by wormholes," Phys. Rev. D 74, 024020 (2006), arXiv:gr-qc/0602062.
[32] N. Tsukamoto and T. Harada, "Light curves of light rays passing through a wormhole," Phys. Rev. D 95, 024030 (2017), arXiv:1607.01120.
[33] R. Shaikh, P. Banerjee, S. Paul, and T. Sarkar, "A novel gravitational lensing feature by wormholes," Phys. Lett. B 789, 270 (2019) [erratum: Phys. Lett. B 791, 422 (2019)], arXiv:1811.08245.
F.R. Klinkhamer (Institute for Theoretical Physics, Karlsruhe Institute of Technology), "Defect wormhole: A traversable wormhole without exotic matter," Acta Phys. Polon. B, arXiv:2301.00724. Abstract: We present a traversable-wormhole solution of the gravitational field equation of general relativity without need of exotic matter (exotic matter can, for example, have negative energy density and vanishing isotropic pressure). Instead of exotic matter, the solution relies on a 3-dimensional "spacetime defect" characterized by a locally vanishing metric determinant.
Dynamical Methods for Target Control of Biological Networks

Thomas Parmer
Center for Complex Networks and Systems Research, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, Indiana 47408, USA

Filippo Radicchi
Center for Complex Networks and Systems Research, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, Indiana 47408, USA

Estimating the influence that individual nodes have on one another in a Boolean network is essential to predict and control the system's dynamical behavior, for example, detecting key therapeutic targets to control pathways in models of biological signaling and regulation. Exact estimation is generally not possible due to the fact that the number of configurations that must be considered grows exponentially with the system size. However, approximate, scalable methods exist in the literature. These methods can be divided into two main classes: (i) graph-theoretic methods that rely on representations of Boolean dynamics as static graphs, and (ii) mean-field approaches that describe average trajectories of the system but neglect dynamical correlations. Here, we compare systematically the performance of these state-of-the-art methods on a large collection of real-world gene regulatory networks. We find comparable performance across methods. All methods underestimate the ground truth, with mean-field approaches having a better recall but a worse precision than graph-theoretic methods. Computationally speaking, graph-theoretic methods are faster than mean-field ones in sparse networks, but are slower in dense networks. The preference of which method to use, therefore, depends on a network's connectivity and the relative importance of recall vs. precision for the specific application at hand.
INTRODUCTION

Understanding the influence that individual elements have on other elements in a complex dynamical system is essential for the prediction and control of the system's behavior. Such a notion of influence is studied broadly on Boolean networks. Examples include studies concerning perturbations [1-4], causal inference [5-10], and control [11-14]. Some papers focus on the effects that pinning specific variables to invariant values has on the rest of the system's dynamics, without knowledge of the exact configuration that the system is in [7, 14-17]. Pinning a set of seed nodes may drive other nodes to deterministic long-term dynamical states. This set of controlled nodes is known as the domain of influence of the seed set [15]. The identification of domains of influence is useful in target control, whereby partial knowledge of a system's state can be used to infer the state of an uncontrolled, target set of variables, e.g., driving a cancer cell towards an apoptotic state. Unfortunately, the exact determination of the domain of influence of a seed set is a computationally infeasible task because of the exponentially large number of configurations that a Boolean network can assume. Some approximate methods exist in the literature. These include graph-theoretic models such as the logical interaction hypergraph (LIH) [5], the expanded network [8], and the dynamics canalization map (DCM) [9]. The three methods above provide static graphs that represent the dynamics of a Boolean network (Fig. 1a-b). Specifically, the LIH represents a Boolean network as a signed directed hypergraph by adding signs to each interaction and hyperarcs to represent logical AND relationships. The expanded network uses composite nodes to represent AND relationships and complementary nodes to represent variable negation for interactions involving NOT operations (Fig. 1c).
Finally, the DCM uses s-unit nodes to represent all possible states of a Boolean network and thresholds (t-unit nodes) to represent AND and OR relationships present in a node's redescribed look-up table (LUT, Fig. 1d). Inference is performed by evaluating the transfer functions that determine a node's state update, which may be written as a logical mapping between possible input vectors and the node's output, or as Boolean expressions (see Fig. 2). Any LUT can be converted to a Boolean expression, and any Boolean expression can be converted to disjunctive normal form (DNF). In this way, the Boolean expression can be described only by AND, OR, and NOT operators, and the logical satisfaction of any clause guarantees the entire expression to be true. Klamt et al. use DNF of Boolean functions to infer propagation of downstream signals based on the perturbation of certain input nodes [5]. Similarly, Wang et al. use DNF of Boolean functions to infer cascading failures by removing certain nodes in the network [8]. Alternatively, the Quine-McCluskey Boolean minimization algorithm [18] can be used to reduce a Boolean expression to its prime implicants [19]. Marques-Pita et al. take advantage of it in a process called schemata redescription to remove redundancy from node transfer functions and infer downstream influence of controlled nodes [9]. This algorithm is also used to reduce transfer functions to DNF in the expanded network [10, 15]. An additional method for the estimation of domains of influence is the individual-based mean-field approximation (IBMFA) proposed by Parmer et al. [16]. In their work, the IBMFA is used to estimate the probability of a node's state given a pinning perturbation of a seed set. Parmer et al. use these estimates to identify seed sets that drive the
At the same time, there is a large amount of similarity between the various methods. All of the graph-theoretic approaches mentioned above, for example, are exact in their description of the network dynamics; the difference lies only in how transfer functions are represented. However, the specific representations of the transfer functions are important as they determine how much inference can be made in estimating a node's domain of influence. The goal of the present paper is to fill these gaps of knowledge. We first introduce a framework called the generalized threshold network (GTN). This framework is a generalization of the graph-theoretic approaches by [8] and [9]. As with these approaches, the GTN also represents the exact dynamics of a network; however, it is ambivalent towards the representation of the transfer function used. Next, we show that a simple search algorithm on the GTN can be used to estimate the domain of influence of a node; this is similar to the calculation of the logical domain of influence on the expanded network [15] and the calculation of pathway modules on the DCM [17]. Finally, we estimate the domain of influence of nodes using the IBMFA [16]. We test the performance of the various methods on the corpus of biological signaling and regulatory networks obtained from the Cell Collective repository [20]. We find that graph representations based on DNF or schematic redescription of transfer functions perform very similarly to one another and very similarly to the IBMFA method. All three methods underestimate the true domain of influence, but outperform naive methods based on LUT representations. The IBMFA performs somewhat better at recall of node states found within the domain of influence as compared to the other methods, but performs worse in terms of precision. 
The computational cost of each method also varies: the IBMFA takes longer than the other methods to run in sparse networks, but runs quicker than the other methods in dense networks.

BOOLEAN NETWORKS

A Boolean network B is composed of N nodes, each of which has an associated binary state variable σ_i(t) = 0, 1 at time t. Nodes are connected via directed edges, as defined by an adjacency matrix A with element A_ij = 1 if node j has a dynamical dependence on node i (Fig. 1a). The network can contain self-loops. We consider synchronous update rules, so that time is represented by a discrete, integer variable. At time t, node i updates its state based on the states of its neighbors N_i at time t − 1 and the transfer function F_i that uniquely maps every possible combination of the input values to an output state. This map is called the look-up table, or LUT, of node i. Clearly, the transfer function F_i can also be written as a logical expression; for example, we can use σ_i = σ_j ∨ σ_k to specify a logical OR dependency of node i on neighbors j and k (Fig. 1b). A network's dynamical configuration at an arbitrary time t is represented by the vector σ(t) = [σ_1(t), σ_2(t), . . . , σ_N(t)]. Since we consider synchronous update, where all nodes simultaneously update their states at each time step t, the dynamics of the system is deterministic. Also, irrespective of the initial condition σ(t = 0), the network is guaranteed to eventually reach an attractor, either a fixed point or a limit cycle. In this work, we consider biological signaling and regulatory networks from the Cell Collective repository [20]. Biological networks are useful case studies to understand node influence in nonlinear systems, as nodes generally have reversible states and transfer functions are heterogeneous, making analytical approaches difficult.
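To make the LUT-to-expression correspondence concrete, the following sketch (ours, not code from the paper) converts a LUT into DNF by OR-ing the minterms of all input rows with output 1; the Quine-McCluskey reduction mentioned in the introduction would then compress this raw expression to its prime implicants:

```python
from itertools import product

def lut_to_dnf(names, lut):
    """Write a look-up table as a disjunctive-normal-form expression.

    names: input variable names, e.g. ["s_j", "s_k"]
    lut:   dict mapping each input tuple to the Boolean output
    """
    minterms = []
    for row in product((0, 1), repeat=len(names)):
        if lut[row]:  # keep rows whose output is 1
            lits = [v if bit else f"NOT {v}" for v, bit in zip(names, row)]
            minterms.append("(" + " AND ".join(lits) + ")")
    return " OR ".join(minterms) if minterms else "FALSE"

# Example: sigma_i = sigma_j OR sigma_k, written out from its LUT
lut_or = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
print(lut_to_dnf(["s_j", "s_k"], lut_or))
# -> (NOT s_j AND s_k) OR (s_j AND NOT s_k) OR (s_j AND s_k)
```

The raw minterm expansion grows with the LUT size, which is why the graph-theoretic methods rely on reduced forms of the transfer functions.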
DOMAIN OF INFLUENCE OF A SEED SET

We consider the effect on the system's dynamics of pinning perturbations, consisting of imposing and keeping invariant the state of a subset of nodes in the Boolean network. We refer to the set of pinned nodes as the seed set, and we indicate it using the notation X = {(i_1, σ̄_{i_1}), (i_2, σ̄_{i_2}), . . . , (i_{|X|}, σ̄_{i_{|X|}})}, that is, σ_i(t) = σ̄_i = 0, 1 for all (i, σ̄_i) ∈ X and for all t ≥ 0. Note that, to avoid contradiction, (i, σ̄_i) ∈ X implies that (i, 1 − σ̄_i) ∉ X. By contrast, the unperturbed nodes are allowed to change state over time. If a configuration is sampled at random from the dynamical state space of B at time t given an initial condition σ(t = 0), each node i has probability P(σ_i(t) = 1) to be found in the state σ_i = 1 at time t. We refer to this as the activation probability of node i at time t [16]. Here, we assume that the state of each node i is initialized with maximally uncertain probability P(σ_i(t = 0) = 1) = 1/2 if i ∉ X. If i ∈ X, instead, P(σ_i(t) = 1) = σ̄_i for t ≥ 0. This assumption leaves us with a total of 2^(N−|X|) possible initial configurations, each having the same probability to occur due to the imposed condition on the initial states of the N − |X| nodes that are outside the seed set. After a transient time period, the dynamics started from each of these configurations settles down into an attractor. As a result, the long-term activation probability of node i, i.e., lim_{t→∞} P(σ_i(t) = 1), converges to a fixed value or oscillates, depending on the nature of the attractors being averaged over. For networks with N ≤ 10, we calculate the true activation probabilities by brute-force evaluation over all R = 2^(N−|X|) possible initial configurations; otherwise, we sample R = 100 randomly chosen initial configurations to obtain an estimate of the ground-truth activation probabilities.
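The ground-truth procedure just described can be sketched in a few lines. The 3-node toy network, transfer functions, and seed set below are our own illustrative choices (the Cell Collective models used in the paper are much larger):

```python
from itertools import product

# Toy Boolean network: each transfer function maps the full state tuple
# to the node's next state.
F = [
    lambda s: s[2],           # node 0 copies node 2
    lambda s: 1 - s[1],       # node 1 negates itself (oscillates forever)
    lambda s: s[0] and s[2],  # node 2 = node 0 AND node 2
]

def activation_probabilities(F, seed, T=10):
    """Average node states at time T over all initial configurations
    consistent with the pinned seed set (a dict node -> 0/1)."""
    n = len(F)
    free = [i for i in range(n) if i not in seed]
    totals = [0.0] * n
    for bits in product((0, 1), repeat=len(free)):
        state = [0] * n
        for i, b in zip(free, bits):
            state[i] = b
        for i, v in seed.items():
            state[i] = v
        for _ in range(T):                      # synchronous update
            state = [int(F[i](state)) for i in range(n)]
            for i, v in seed.items():           # keep the seed pinned
                state[i] = v
        totals = [tot + s for tot, s in zip(totals, state)]
    R = 2 ** len(free)                          # brute force: R = 2^(N - |X|)
    return [tot / R for tot in totals]

probs = activation_probabilities(F, seed={2: 1})
print(probs)  # -> [1.0, 0.5, 1.0]
# Domain of influence: nodes whose long-term probability is exactly 0 or 1
doi = {(i, int(p)) for i, p in enumerate(probs) if p in (0.0, 1.0)}
print(doi)    # contains (0, 1) and (2, 1); node 1 keeps oscillating, so it is excluded
```

For larger networks, the loop over all configurations would be replaced by the R = 100 random samples described above.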
The average state value of a node i at time t based on the R sampled initial configurations is computed as

⟨σ_i(t)⟩ = (1/R) Σ_{r=1}^{R} σ_i^{(r)}(t),   (1)

where σ_i^{(r)}(t) is the state of node i in the r-th sampled configuration at time t. Given the perturbation of the seed set X, the state of another variable i ∉ X in the network may eventually become certain, i.e., P(σ_i(t) = 1) = 0, 1 for all t ≥ T. All nodes with deterministic long-term behavior compose the domain of influence of the seed set X, i.e.,

D(X) = {(i, σ_i(T)) | i ∈ B ∧ σ_i(T) = 0, 1},   (2)

where T is a finite number of iterations after which the network dynamics is not expected to change, and which can therefore be considered representative of the long-term dynamics of the network. As in [16], we use T = 10 to estimate the long-term states of the nodes. Please note that the set D automatically includes all elements of the seed set X.

APPROXIMATING THE DOMAIN OF INFLUENCE OF A SEED SET

Determining the ground-truth domain of influence of the seed set X is generally infeasible, as the task requires testing whether nodes reach long-term invariant states under all possible 2^{N−|X|} initial configurations. There are, however, several approaches that can be used to approximate the ground-truth solution in a computationally feasible manner. Below, we provide a brief description of the various approximate methods considered in this paper.

The individual-based mean-field approximation

The individual-based mean-field approximation (IBMFA) introduced by Parmer et al. provides a computationally feasible algorithm to approximate the activation probability of individual nodes in Boolean networks [16]. The approximation is inspired by the one generally used in the study of spreading processes on complex networks [21]. The approximation neglects dynamical correlations among state variables to produce predictions in a time that grows as 2^{k_max} N, where k_max indicates the maximum degree of the network. The approximation works as follows.
Indicate with s_i(t) the activation probability of node i at time t under the IBMFA. Based on the framing of the problem of identifying the domain of influence of the seed set X, we have s_i(t = 0) = 1/2 if i ∉ X and s_i(t ≥ 0) = σ̄_i if i ∈ X. Then, the activation probability of each node i ∉ X is computed at each time step t > 0 according to

s_i(t) = Σ_{{n_j : j ∈ N_i}} δ_{1, F_i(n_{N_i})} Π_{j ∈ N_i} [s_j(t−1)]^{n_j} [1 − s_j(t−1)]^{1−n_j},   (3)

where N_i = {j_1^{(i)}, . . . , j_{k_i}^{(i)}} = {j ∈ B | A_ji = 1} is the neighborhood of i and F_i(σ_{N_i}) is the transfer function of i, which depends on the network configuration at time t − 1 restricted to i's neighborhood. We use the IBMFA to estimate the domain of influence of the seed set X as

D_IBMFA(X) = {(i, s_i(T)) | i ∈ B ∧ s_i(T) = 0, 1}.   (4)

Graph-theoretic approximations

We consider a class of approximations based on static graph representations of Boolean dynamical systems. The various methods are presented within a unified framework based on the so-called generalized threshold network (GTN), which serves as a generalization of the LIH [5], the expanded network [8], and the DCM [9]. The GTN is a thresholded network that represents the entire dynamics of a Boolean network B. As with the aforementioned graphs, the GTN is dynamically and logically complete, in that every possible dynamical interaction is represented and state transitions are unambiguous. The GTN of a Boolean network B is composed of three sets: the set of state nodes, the set of threshold nodes, and the set of directed edges connecting state and threshold nodes. If B contains N nodes, its GTN has 2N state nodes. Each state node represents a specific state of a node in the Boolean network. The state node with label i-0 indicates the state σ_i = 0 of node i ∈ B; the state node with label i-1 stands for the state σ_i = 1 of node i ∈ B.
The set of state nodes is thus functionally equivalent to the set of original and complementary nodes in the expanded network, or to the set of s-units in the DCM. State nodes interact through threshold nodes; threshold nodes determine the logic that allows for state transitions. For example, a composite node in the expanded network representing an AND relationship between k inputs is replaced by a threshold node with threshold equal to k. Threshold nodes are similar to t-units in the DCM. Unlike the DCM, however, the GTN does not have hyperedges; instead, permutation redundancy is represented by an additional threshold node (similar to the intermediate representation of canalyzing maps in [9] before the inclusion of hyperedges). Furthermore, multiple threshold nodes can be connected to one another, offering full flexibility in the representation of the transfer function. One major limitation of using the LIH or the expanded network is that these constructions rely on a simplified representation of the transfer functions, such as the disjunctive normal form (DNF) of the logical expression dictating a node's update. However, for large Boolean expressions, determining the DNF of the expression is infeasible and does not lead to a concise description; additionally, not all networks have their update functions in logical rule format, requiring the logical rules to be determined from the nodes' LUTs. The DCM, by contrast, is constructed via redescribed transfer functions that are in general more concise; however, the process of redescription is also NP-hard and impractical for expressions with a large number of inputs. It is thus unclear how best to represent transfer functions in a way that is both concise and computationally efficient. The GTN has several advantages. First, it offers a generalization of the expanded network and the DCM in which transfer functions are not restricted in their construction.
Second, it allows a shared description for different types of dynamical networks, so that expanded networks and DCMs can be compared directly. Third, it can naturally accommodate variables with any discrete number of values, not just two as in the case of Boolean networks; further, not all variables need to have the same number of states. Finally, it enables the study of which transfer function representations are most useful for predicting dynamics in different types of networks.

Transfer function representations

The exact structure of a GTN depends on how transfer functions are represented. More precisely, the set of state nodes is always the same; however, the set of edges and the set of threshold nodes may change depending on the specific choice for the representation of the transfer functions. There are many such possible representations, and we do not attempt to enumerate them all here. We consider a representation based on disjunctive normal form (DNF), as this is used in studies of the LIH and the expanded network [5,7,10,15]. Also, we consider a representation based on schematic redescription (SR), as this is used in studies of the DCM [9,17]. Finally, we consider a naive representation that relies only on the LUT of each node, without any additional logical reduction. To construct a GTN from DNF, we first find the DNF of the logical update expression of each node and also the DNF of the negation of the logical update expression. Please note that each network in the Cell Collective repository already has these logical expressions available. For a given logical expression of node i, each disjunctive clause is separated; the state node for every input in that clause is connected to a threshold node whose threshold is equal to the number of inputs in the clause. The threshold node is then connected to the state node i-1 if the logical expression implies σ_i = 1, or to the state node i-0 otherwise.
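The clause-to-threshold-node construction just described can be sketched as follows; the node names, labeling scheme, and the two-clause rule are illustrative, and only the activating part of an expression is handled.

```python
def gtn_from_dnf(target, clauses):
    """Each DNF clause becomes one threshold node whose threshold equals the
    number of literals; the literals' state nodes feed the threshold node,
    which feeds the target's state node."""
    threshold_nodes, edges = [], []
    for c, clause in enumerate(clauses):
        t_node = f"T{target}-{c}"
        threshold_nodes.append((t_node, len(clause)))
        for node, state in clause:  # e.g. ("a", 1) -> state node "a-1"
            edges.append((f"{node}-{state}", t_node))
        edges.append((t_node, f"{target}-1"))
    return threshold_nodes, edges

# Illustrative rule: sigma_d = (sigma_a AND sigma_c) OR (sigma_b AND sigma_c)
thr, edges = gtn_from_dnf("d", [[("a", 1), ("c", 1)], [("b", 1), ("c", 1)]])
```

Both clauses have two literals, so both threshold nodes get threshold 2; the negated expression would be wired in the same way, into d-0 instead of d-1.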
For example, in Fig. 2a, the logical DNF expression dictating the state of node d is σ_d = (σ_a ∧ σ_c) ∨ (σ_b ∧ σ_c). There are two separate clauses in this expression, and each one is represented in the GTN using a threshold node. Each clause has two literals, and thus each associated threshold node has threshold equal to 2 (Fig. 2b). To construct a GTN from the SR form, we first find the two-symbol schematic redescription of each node's LUT and construct the DCM [9,22]. All s-units are kept as state nodes, and all t-units are kept as threshold nodes. Then, we add further threshold nodes wherever two edges are fused together, so as to remove all hyperedges in the network. Thus, the GTN transfer functions become equivalent to the intermediate threshold network representation of the canalyzing maps mentioned in [9]. In Fig. 2e, the redescribed LUT shows that either a or b can be active (σ_a = 1 or σ_b = 1) while the state of the other does not matter; in addition, c must also be in state σ_c = 1 in order for d to have state σ_d = 1. The corresponding GTN representation in Fig. 2f has two threshold nodes: the first indicates that either a-1 or b-1 must be present, and the second indicates that the first threshold must be met and c-1 must be present to reach d-1. Finally, to construct a GTN from the LUT form, we first find the LUT mapping of each node's transfer function. Then, we split the LUT of node i into rows with output σ_i = 1 and rows with output σ_i = 0. Next, we create a single Boolean expression for the rows with output σ_i = 1 using OR expressions between each row. We do the same for all rows with output σ_i = 0. After that, we create AND expressions between each input in each row. Thus, the expression is automatically in DNF and can be converted into the GTN representation in the same way as described above for the DNF method. In Fig. 2c, the LUT of node d has three rows that result in state σ_d = 1, and each row has three inputs.
This can be converted to the logical expression σ_d = (¬σ_a ∧ σ_b ∧ σ_c) ∨ (σ_a ∧ ¬σ_b ∧ σ_c) ∨ (σ_a ∧ σ_b ∧ σ_c), which is automatically in DNF. As there are three separate clauses, the corresponding GTN representation has three threshold nodes; each clause has three literals, and so each threshold node has threshold equal to 3 (Fig. 2d). The LUT representation provides upper bounds on the number of threshold nodes M and edges E needed in a GTN to represent the dynamical system of a Boolean network B with N nodes. We remind the reader that the number of state nodes in the GTN is 2N, while the number of threshold nodes M and edges E depends on the number of LUT entries as

M = Σ_{i=1}^{N} 2^{k_i}   (5)

and

E = Σ_{i=1}^{N} (k_i + 1) 2^{k_i},   (6)

where k_i is the degree of node i ∈ B. Eq. (6) can be derived by noting that there are 2^{k_i} rows in the LUT of node i; each row requires k_i edges from the neighbors of node i to a threshold node, plus an additional edge from the threshold node to node i. We note an exponential dependence of the size of the GTN on the nodes' degrees. However, the GTN representation can be made much more concise by using the DNF or SR forms (see Fig. S1).

Identification of the domain of influence

Given a GTN representation Z of a Boolean network, with Z = SR, DNF or LUT, we estimate the domain of influence D_Z(X) of an arbitrary seed set X via a breadth-first search (BFS) algorithm. This algorithm is similar to the one used in Ref. [15] to find so-called logical domains of influence. The algorithm works as follows. We indicate with r the stage of the algorithm, with Q_r the queue of stage r, and with S the set of already visited nodes in the GTN. We set r = 0, and we include all elements of the seed set X in the initial queue, i.e., Q_{r=0} = {i-σ̄_i | (i, σ̄_i) ∈ X}. Further, we initialize the set S = ∅ and the domain of influence D_Z(X) = ∅. We then iterate the following instructions:

1.
We create an empty queue for the next stage of the algorithm, i.e., Q_{r+1} = ∅.

2. While the queue Q_r is not empty, we pop one element e out of the queue Q_r. We add e to the set S. If e is a state node, we add the corresponding element to the domain of influence, i.e., if e = i-σ_i then element (i, σ_i) is included in D_Z(X). Next, for each neighbor n of e in the GTN, if n ∉ S, we consider the following options:
• We add n to Q_r if n is a threshold node and its threshold is met by state nodes currently in S.
• We add n to Q_{r+1} if n is a state node that does not contradict any state node already in S.

3. If Q_{r+1} ≠ ∅, we increase r → r + 1 and go back to point 1. Otherwise, we terminate the algorithm.

The necessity of having two queues Q_r and Q_{r+1} is that state nodes are updated in discrete iterations (i.e., BFS levels). Threshold nodes, by contrast, are dealt with immediately, so that transfer functions can be evaluated before the next stage of the algorithm. The order used to remove elements from the queue Q_r at point 2 of the algorithm is irrelevant for the actual output of the algorithm, i.e., the set D_Z(X). Note that the two sets S and D_Z(X) differ, since the former is composed of (state and threshold) nodes of the GTN, whereas the latter contains nodes of the original graph, along with their corresponding state. Note also that although different GTN representations all give a complete mapping of the network's dynamics, they in general give different estimates for the domain of influence of a seed set X. For example, in Fig. 2, the domain of influence of the seed set X = {(a, σ_a = 1), (c, σ_c = 1)} is {a-1, c-1, d-1} for the DNF or SR representations, but only {a-1, c-1} for the LUT representation. This is because no logical reduction is done on the node LUTs in this representation, and so inference on the state of d requires knowledge of all three inputs, rather than only two.
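The two-queue search above can be sketched on a toy GTN in which d-1 is reached through either of two threshold-2 nodes (the graph, node labels, and dictionaries are illustrative):

```python
# Toy GTN: d-1 requires (a-1 and c-1) or (b-1 and c-1).
succ = {                      # adjacency of the GTN
    "a-1": ["T0"], "b-1": ["T1"], "c-1": ["T0", "T1"],
    "T0": ["d-1"], "T1": ["d-1"],
}
thresholds = {"T0": 2, "T1": 2}
preds = {"T0": ["a-1", "c-1"], "T1": ["b-1", "c-1"]}

def domain_bfs(seeds):
    """Estimate the domain of influence of the seed state nodes."""
    S, D = set(), set()
    Q = list(seeds)           # queue for the current stage r
    while Q:
        Q_next = []           # queue for stage r + 1
        while Q:
            e = Q.pop()
            if e in S:
                continue
            S.add(e)
            if e not in thresholds:  # state node: record (node, state)
                node, state = e.rsplit("-", 1)
                D.add((node, int(state)))
            for n in succ.get(e, []):
                if n in S:
                    continue
                if n in thresholds:
                    # threshold nodes fire immediately within stage r
                    if sum(p in S for p in preds[n]) >= thresholds[n]:
                        Q.append(n)
                else:
                    node, state = n.rsplit("-", 1)
                    # state nodes wait for the next stage, unless contradicted
                    if f"{node}-{1 - int(state)}" not in S:
                        Q_next.append(n)
        Q = Q_next
    return D
```

With the seed {a-1, c-1} both inputs of the first threshold node are visited, so d-1 joins the domain; with the single seed a-1 neither threshold is met, and the domain contains only the seed itself.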
As such, the performance of this method should be a lower bound on the performance of other representations, such as DNF or SR. Furthermore, although the DNF and SR representations predict the same domain of influence in this example, note that the SR representation is slightly more concise.

Metrics of performance

Given a Boolean network and a seed set X, we obtain various approximations of the domain of influence, namely D_IBMFA(X), D_DNF(X), D_LUT(X), and D_SR(X). Also, we estimate the ground-truth domain of influence D(X) via Eqs. (1) and (2). For compactness of notation, we remove the explicit dependence on X, so that we write D_Z to denote the generic approximate set D_Z(X) and D to denote the ground-truth set D(X). We take advantage of multiple metrics to compare the various sets. Specifically, we compare each approximate solution D_Z against the ground truth D in terms of precision and recall. We determine the set of true positives as A_TP = {(i, σ̄_i) ∈ D ∧ (i, σ̄_i) ∈ D_Z}, the set of true negatives as A_TN = {(i, σ̄_i) ∉ D ∧ (i, σ̄_i) ∉ D_Z}, the set of false positives as A_FP = {(i, σ̄_i) ∉ D ∧ (i, σ̄_i) ∈ D_Z}, and the set of false negatives as A_FN = {(i, σ̄_i) ∈ D ∧ (i, σ̄_i) ∉ D_Z}. We compute precision as the ratio |A_TP|/(|A_TP| + |A_FP|), and recall as the ratio |A_TP|/(|A_TP| + |A_FN|). Also, we measure the similarity of the approximation Z with the ground truth using the Jaccard index, i.e., J_{D,D_Z} = |D ∩ D_Z|/|D ∪ D_Z|. The same similarity metric is used in direct comparisons between pairs of approximations, i.e., we rely on J_{D_Y,D_Z} = |D_Y ∩ D_Z|/|D_Y ∪ D_Z| to contrast approximations Y and Z.

RESULTS

In order to test which methods best elucidate influence on biological signaling and regulatory networks, we create GTN representations of networks from the Cell Collective repository [20]. For each of these networks, we consider only seed sets of size 1, 2, and 3. Specifically, we consider all possible seed sets of size 1 (in a Boolean network with N nodes there are 2N possible seed sets of size 1). We instead randomly sample 1,000 seed sets of size 2 and 3. For each of these sets, we determine the approximate and ground-truth domains of influence. Comparisons between the various approximations of the domain of influence and the ground-truth domain of influence are displayed in Fig. 3. All methods underestimate the true size of the domain of influence (Fig. 3a). The LUT method performs the worst, while the DNF and SR methods are nearly identical.
The IBMFA performs slightly better than the other approximate methods. These results hold for Jaccard similarity and recall as well (Fig. 3b-c): the IBMFA performs the best, the DNF and SR methods are nearly identical, and the LUT method is much worse than the others. For precision, by contrast, all GTN-based methods perform very well, and only the IBMFA method makes some small mistakes (Fig. 3d). We also measure Spearman's rank correlation coefficient, across the entire corpus of Boolean networks in the Cell Collective repository, between the size of the domain of influence D_Z predicted by approximation Z and its ground-truth counterpart D. The above results are again confirmed (Fig. S2). The DNF, SR, and IBMFA methods are nearly identical, and they all perform well in ranking the seed sets, whereas the LUT method performs poorly. The poor performance of the LUT method is expected, as this method naively creates a representation from the node LUTs without any logical reduction taking place (unlike the DNF or SR methods). The similarity between the DNF and SR methods is interesting; the two-symbol redescription of the SR method generally reduces transfer functions further than the DNF method is able to. However, it appears that this does not have a large impact on the results. Thus, for many networks, the DNF description appears sufficient for good inference of the domain of influence. Furthermore, it is noteworthy that the IBMFA performs better than the GTN methods on all measures other than precision, even though it approximates node activation probabilities and does not make exact, causal inferences, as the GTN methods do. However, the IBMFA does not have perfect precision, unlike the GTN methods, meaning that it sometimes makes incorrect inferences. There is therefore a tradeoff between recall and precision when choosing methods: the DNF and SR methods have better precision but worse recall, on average, than the IBMFA.
We measure the similarity between the domains of influence obtained through the various approximations in Fig. 4. We see that SR and DNF generate almost identical predictions; those predictions are also quite similar to those obtained with the IBMFA method (see also Fig. S3). By contrast, the LUT method makes predictions of the domain of influence quite different from those of the other methods. We further analyze the dependence of our results on the network size in Fig. S4. Interestingly, the size of the domain of influence of a seed set increases with the network size, regardless of the size of the seed set (Pearson's correlation coefficient r = 0.46 for seed set sizes equal to 3). This property holds for the DNF, SR, and IBMFA methods, but not for the LUT method, which shows no positive correlation with network size. This finding suggests that the LUT method performs worse as network size increases, as shown also by its decreased similarity and recall scores in larger networks. However, the performance of the DNF, SR, and IBMFA methods, as measured by similarity and recall, also decreases as network size increases. We note an important limitation of our ground-truth estimate of the domain of influence, in that the percentage of the dynamical state space sampled by R = 100 random configurations becomes vanishingly small as N increases (see Fig. S5). Under-sampling of the state space can cause our ground-truth predictions to overestimate the size of the domain of influence by predicting false positives, and this could be an alternative explanation as to why the performance of the DNF, SR, and IBMFA methods appears to decrease as network size increases. It is important, therefore, to verify that our estimated domain of influence accurately reflects the true dynamics of the various networks under study. Toward this end, we select six networks from the Cell Collective that have size 10 ≤ N ≤ 14.
We calculate the true domain of influence for seed sets on these networks via brute-force enumeration of all possible configurations. We then compare the estimated ground-truth domain of influence D_p, based on sampling a fraction p of the state space, to the true domain of influence D. This is done by randomly sampling R = p · 2^N initial configurations. We test five values: p = 0.01, 0.10, 0.25, 0.50 and 0.75. As the results of Fig. S6 show, the accuracy of the estimate D_p depends on the metric being used and the value of p. In this problem, there are no false negatives; therefore, similarity is equal to precision, and recall is always 100%. As such, we show only size and similarity in the figure. For size and similarity/precision, all p values perform very similarly, except for p = 0.01, which performs worse than the other values. The value p = 0.10 also deviates from the true value, although this deviation is small and decreases as the seed set size is increased. By contrast, the deviation of p = 0.01 increases as the seed set size increases. However, even in this case, the similarity/precision is high, at about 90% for seed sets of size 3. Nevertheless, we find that small samples do indeed overestimate the size of the domain of influence and have lower precision (i.e., they have more false positives). This suggests that our results for the ground-truth estimates in Fig. 3 may similarly overestimate the size of D and the number of true positives, and this may contribute to the decreased scores of the DNF, SR, and IBMFA methods in terms of size, similarity, and recall. Finally, we compare the computational time required by the various methods to generate approximations of the domain of influence; see Fig. 5. Times are calculated per network over all seed sets of a given size (computations were performed using an Intel Core i5 3.2 GHz processor).
For the GTN-based approximations, the time necessary to create the GTN is added into the calculation and similarly averaged over the number of seed sets. Depending on the sparsity of the Boolean network and the representation of the transfer functions, it may take a long time to create a GTN; however, the advantage of using such a graph dynamical approach is that this operation only has to be performed once. Afterwards, the domain set can be approximated in a time that grows linearly with the number of edges of the GTN, upper-bounded by Eq. (6). The runtime for the creation of the GTN is especially noticeable for SR graph representations (see Fig. 5). In networks with low degree, like the Drosophila melanogaster single-cell segment polarity network [23], the average time to approximate domain sets with the SR method is lower than for the other methods considered; however, in networks with high degree, like the EGFR & ErbB signaling network [24], the average runtime is much higher for the SR method than for the other methods. When we consider all networks in the Cell Collective, we see that the time to calculate domains of influence with the IBMFA method grows linearly with the network size (see Fig. 5c). This time is on average greater than the time to find domains of influence using the DNF or LUT methods on a GTN, while the time to approximate domains of influence using the SR method is more variable and is sometimes much greater than for the IBMFA. None of the GTN-based methods is characterized by a clear relationship between computational time and network size. If we consider the total LUT size over the nodes of the network, however, we see a clear relationship between this quantity and the runtimes of the GTN-based approximations, as running time tends to increase exponentially with total LUT size (Fig. 5d).
DISCUSSION

In this paper, we presented the results of a systematic analysis comparing the performance of different types of approximate methods for estimating the domain of influence of seed nodes in Boolean networks. The analysis was carried out on a corpus of 74 real-world biological networks from the Cell Collective repository [20]. Seeds are nodes in the Boolean network with pinned dynamical state. The domain of influence is defined as the set of nodes (seed and non-seed nodes) whose long-term dynamical states become deterministic as a consequence of the external perturbation that pins the state of the seed nodes. The approximate methods considered in this paper belong to two classes: (i) graph-theoretic methods, and (ii) mean-field methods. Methods in class (i) are based on representations of the Boolean dynamics as static graphs; methods in class (ii) rely instead on descriptions of average trajectories of the Boolean dynamics in which fluctuations are ignored. Despite the different spirit of the two approximations, one of the main findings of our systematic study is that methods from the two classes display similar performance, and they can perform quite well if the goal is to determine the seed sets that have the greatest influence on a network. In more detail, all approximate methods underestimate the ground truth, with mean-field approaches having better recall but worse precision than the other class of methods. Computationally speaking, graph-theoretic methods are faster than mean-field ones in sparse networks, but slower in dense networks. An important theoretical byproduct of the present study was the introduction of the so-called generalized threshold network (GTN), i.e., a graphical representation of the state space of a discrete dynamical system taking place on a network structure.
The GTN serves as a generalization of the existing approaches of [8] and [9], offering a unified framework that can be applied regardless of the specific representation. In this paper, we considered three different representations: one based on the disjunctive normal form of node logical expressions, one based on the schematic redescription of node look-up tables, and one based naively on the nodes' look-up tables, without further logical inference being made. We stress that the results of this paper are affected by some limitations. For example, our estimates of the ground-truth domain of influence of a seed set depend on sampled configurations that constitute only a small fraction of the actual state space, therefore leading to the appearance of false positives in the ground-truth estimates of the domain of influence. Although our results were also confirmed in small networks where ground-truth estimates are exact (see Fig. S6c-d), some of the gaps in size, similarity, and recall seen for the methods tested here may be due to this systematic bias. Another major limitation is that these methods were tested only on biological networks of moderate size from the Cell Collective repository. It is unclear how the different methods would perform in larger and/or non-biological networks. Further research is needed for this purpose. Finally, we note that we relied on pre-existing logical expressions in the Cell Collective for the DNF method, and that such expressions are not in general available for all networks. However, as an alternative, it is possible to find the disjunctive normal form of the prime implicants of node LUTs using the Quine-McCluskey algorithm [18]; we find that both methods for finding the DNF of node expressions make nearly identical predictions (see Fig. S7), suggesting that the results seen here are also valid for networks that do not have such pre-existing logical expressions available.
Despite such limitations, this work is a step towards understanding how the behavior of Boolean networks can be effectively and efficiently predicted. This work further elucidates the type of strategy to be used depending on the network, i.e., sparse vs. dense, and/or the specific application at hand, i.e., when recall is favored over precision or vice versa. ACKNOWLEDGMENTS This project was partially supported by the Army Research Office under contract number W911NF-21-1-0194 and by the Air Force Office of Scientific Research under award number FA9550-21-1-0446. The funders had no role in study design, data collection and analysis, the decision to publish, or any opinions, findings, and conclusions or recommendations expressed in the manuscript. Figure S2. Seed set rankings based on the domain of influence in real-world networks. We measure the Spearman's correlation coefficient between the size of the domains of influence predicted by the various approximate methods and the size of the ground-truth domains of influence for networks in the Cell Collective. Results are shown for the ground-truth domain of influence (GT) as well as approximate domains of influence as estimated by the individual-based mean-field approximation (IBMFA) and three different GTN representations whose transfer functions are defined by the node look-up tables (LUT), by disjunctive normal form (DNF) of logical expressions, and by schematic redescription (SR). Results are calculated per network for a given seed set size and then averaged across all networks. As a term of comparison, rankings based on degree centrality (by summing the out-degree of seed nodes in the original structural graph) are also considered (SG); note that in this case, seed nodes of different states will have the same rank-order. Dynamical methods except for LUT perform similarly; however, LUT still performs better than predicting seed set rankings based on degree centrality. 
Results are averaged over all seed sets of size 1. Each point in the plot is a network; time is plotted as a function of the network size N. For many networks, the PI representation takes longer than regular DNF, and nearly as long as the SR representation. The average time per seed set, across networks, is 0.11 seconds for the DNF representation, 34.6 seconds for the PI representation, and 39.9 seconds for the SR representation. (b) The average size of the domain of influence found by the DNF method without reduction, as compared to the PI method. Each point in the plot is a network. Results are averaged over all seed sets of size 1; the dashed line indicates an equal average size of the domains of influence. As panel b shows, the DNF method without reduction is almost equivalent to the PI method. The average Jaccard similarity between the two methods across networks and across seed set sizes is 0.995. This high similarity does not depend on the seed set size. Figure 1. Static representations of the dynamics on a Boolean network. (a) We consider a toy example of a Boolean network. A direct connection indicates that a variable's state depends on the state of the other variable. In this network, for example, the state of node e depends on the states of nodes d and f. Nodes a and f are instead inputs in this network, in the sense that their state is time invariant. (b) The transfer functions are represented in logical form for each node based on the states of its neighbors. (c) Expanded network representation of the Boolean network. Each node of the original network of panel (a) is denoted by its label plus its state. For example, "e-1" indicates the state σ_e = 1 of node e; the composite node with label "f-1 & d-1" denotes instead the simultaneous appearance of the states σ_f = 1 and σ_d = 1.
In the visualization, nodes representing active states are denoted in black, nodes representing inactive states are displayed in white, and composite nodes representing AND relationships are denoted in grey. (d) Dynamics canalization map representation; s-units representing active nodes are denoted with black circles and s-units representing inactive nodes are denoted with white circles. T-units representing redescribed schemata are displayed with grey diamonds. The label appearing in each of the t-units represents the specific value of the threshold that they represent. T-units with threshold equal to one that represent schemata with no permutation redundancy are left out of the figure for simplicity. Additionally, self-loops are left out of panels (a), (c) and (d). Figure 2. Representations of transition functions and associated generalized threshold networks. (a) A Boolean network composed of four nodes. The network contains three input nodes, a, b and c. The logical expression of the transfer function of node d is provided and converted to disjunctive normal form (DNF). (b) The corresponding generalized threshold network (GTN) of the DNF is displayed as a graph with 6 nodes and 5 edges. To keep the visualization compact, we display only the portion of the GTN that concerns the activation of node d. (c) The transition function of node d is shown as a look-up table (LUT). (d) The GTN representation of the LUT is displayed as a graph with 9 nodes and 12 edges. Also here, we display only the portion of the GTN that concerns the activation of node d. (e) The transition function of node d is shown as the two-symbol schemata redescription.
The wildcard symbol (#) indicates a node that can have either state value; the position-free symbol (o) denotes that the two indicated state values can switch; that is, either a or b may be active to ensure that node d is in the active state, while the state of the other input does not matter. (f) The GTN representation of the two-symbol schemata redescription is displayed as a graph with 6 nodes and 5 edges. Here too, only the portion of the GTN that concerns the activation of node d is displayed.

We define the set of false positives as A_FP = {(i, σ_i) : (i, σ_i) ∉ D ∧ (i, σ_i) ∈ D_Z}, and the set of false negatives as A_FN = {(i, σ_i) : (i, σ_i) ∈ D ∧ (i, σ_i) ∉ D_Z}. We compute precision as the ratio |A_TP| / (|A_TP| + |A_FP|), and recall as the ratio |A_TP| / (|A_TP| + |A_FN|). Also, we measure the similarity of the approximation Z with the ground truth using the Jaccard index, i.e., J_{D,D_Z} = |D ∩ D_Z| / |D ∪ D_Z|. The same metric of similarity is used in direct comparisons between pairs of approximations, i.e., we rely on J_{D_Y,D_Z} = |D_Y ∩ D_Z| / |D_Y ∪ D_Z| to contrast approximations Y and Z.

Figure 3. Ground-truth vs. approximate domains of influence in real-world networks. (a) Average size of the domain of influence for networks in the Cell Collective repository. Data are grouped based on the size of the seed set, and displayed results are obtained by taking the average over all sets of a given size included in our analysis. The figure contains results for the ground-truth estimate of the domain of influence (GT), its individual-based mean-field approximation (IBMFA), and its three approximations obtained from the GTN representations based on the transfer functions defined by the node look-up tables (LUT), disjunctive normal form (DNF) of logical functions, and schematic redescription (SR). (b) Average value of the similarity (i.e., Jaccard index) between approximations of the domain of influence and the ground-truth domain of influence.
(c) Same as in panel (b), but for the average recall of approximate domains of influence as compared to the ground truth. (d) Same as in panel (c), but for the average precision of approximate domains of influence as compared to the ground truth.

Figure 4. Comparison between approximate domains of influence in real-world networks. (a) We consider the same set of results as in Fig. 3 and measure the Jaccard index between approximate domains of influence obtained by the various identification methods. Each entry in the tables reports the average value of the similarity score across all networks in the data set, and for all values of the size of the seed set. (b) Same as panel (a), but results are calculated only for seed sets of size 1. (c) Same as panel (a), but results are calculated only for seed sets of size 2. (d) Same as panel (a), but results are calculated only for seed sets of size 3.

Figure 5. Computational time for the estimation of the domain of influence in real networks. (a) We measure the time required to estimate domains of influence in the Drosophila melanogaster single-cell segment polarity network [9, 23]. The size of this network is N = 17, while its maximum degree is k_max = 4. Results are averaged over all seed sets of a given size. (b) Same as panel (a), but for the tumor cell migration EGFR & ErbB signaling network (N = 104, k_max = 14) [24]. (c) Time required to estimate the domains of influence in all networks within the Cell Collective. Results are averaged over all seed sets of size 1. Each point in the plot is a network; time is plotted as a function of the network size N. (d) Same as in panel (c), but time is plotted as a function of the total size of the look-up tables in the network, i.e., ∑_{i=1}^{N} 2^{k_i}.

Figure S1. Size comparison of GTN representations of networks in the Cell Collective.
(a) Generalized threshold network (GTN) size N_G is plotted against the original network size N_B for three different GTN representations whose transfer functions are defined by the node look-up tables (LUT), by disjunctive normal form (DNF) of logical expressions, and by schematic redescription (SR). Each point in the plot is a GTN representation of a network in the Cell Collective. The GTN size is normalized by the number of nodes in the original graph. (b) Same as panel (a), but the number of edges of different GTN representations E_G is plotted against the number of edges in the original network E_B. Each point in the plot is a GTN. The number of edges is normalized by the number of edges in the original graph.

Figure S3. Similarity of dynamical methods in approximating the domain of influence. (a) Cumulative probability distribution of the average similarity between domains of influence for networks in the Cell Collective. Results are shown for approximate domains of influence as estimated by the individual-based mean-field approximation (IBMFA) and two different GTN representations whose transfer functions are defined by disjunctive normal form (DNF) of logical expressions, and by schematic redescription (SR). Similarity is calculated as the average Jaccard index between approximate domains of influence obtained by the various methods. Results are averaged over all seed sets of size |X| = 1 and across all networks. The cumulative probability distribution is shown for similarities between three pairs of methods: DNF and SR, DNF and IBMFA, and SR and IBMFA. (b) Same as panel (a), but results are calculated only for seed sets of size |X| = 2. (c) Same as panel (a), but results are calculated only for seed sets of size |X| = 3. (d) The average size of the approximate domain of influence found by the SR and IBMFA methods, as compared to the DNF method. Each point in the plot is a network. Results are averaged over all seed sets of size |X| = 1.
The dashed line indicates an equal average size of the approximate domain of influence. (e) Same as panel (d), but results are calculated only for seed sets of size |X| = 2. (f) Same as panel (d), but results are calculated only for seed sets of size |X| = 3. Results show that the DNF and SR methods are nearly identical in terms of size and similarity across networks and across seed set sizes. By contrast, the IBMFA is similar to DNF and SR in most networks but deviates significantly in some networks, where it predicts larger domains of influence.

Figure S4. Ground-truth vs. approximate domains of influence by network size. (a) Average size of the domain of influence for networks in the Cell Collective. Results are shown for the ground-truth domain of influence (GT) as well as approximate domains of influence as estimated by the individual-based mean-field approximation (IBMFA) and three different GTN representations whose transfer functions are defined by the node look-up tables (LUT), by disjunctive normal form (DNF) of logical expressions, and by schematic redescription (SR). Results are averaged over all seed sets of size |X| = 1. Each point in the plot is a network. (b) Same as panel (a), but results are calculated only for seed sets of size |X| = 2. (c) Same as panel (a), but results are calculated only for seed sets of size |X| = 3. (d) Same as panel (a), but for the average value of the similarity (i.e., Jaccard index) between approximations of the domain of influence and the ground-truth domain of influence per network. (e) Same as panel (d), but results are calculated only for seed sets of size |X| = 2. (f) Same as panel (d), but results are calculated only for seed sets of size |X| = 3. (g) Same as in panel (a), but for the average recall of approximate domains of influence as compared to the ground truth per network.
(h) Same as panel (g), but results are calculated only for seed sets of size |X| = 2. (i) Same as panel (g), but results are calculated only for seed sets of size |X| = 3. Across seed set sizes, DNF, SR, IBMFA, and the ground-truth estimate predict larger domains of influence as network size increases. Additionally, all methods have decreasing similarity and recall compared to the ground truth as network size increases.

Figure S5. Proportion of the state space sampled in the Cell Collective. Fraction of configurations sampled as a function of the network size. Different colors correspond to different sizes of the seed set X. For networks of size N ≤ 10, the entire state space is sampled (R = 2^N); otherwise, R = 100 randomly chosen initial configurations are used. The size of the state space grows as 2^{N−|X|}, and thus R becomes a vanishingly small proportion as N increases. The uppermost line denotes p = 1 (the whole state space); the line below denotes p = 0.01 (1% of the state space).

Figure S6. Brute-force vs. approximate domains of influence in the Cell Collective. (a) Average size of the domain of influence, as estimated by sampling a proportion p of the state space. Results are averaged over all seed sets of a given size and across 6 networks from the Cell Collective with size 10 ≤ N ≤ 14. This is compared to the actual domain of influence, as measured via brute-force enumeration of each network's possible configurations. (b) Same as panel (a), but for the average value of the similarity (i.e., Jaccard index) between the estimated domain of influence and the brute-force solution. Given that recall is perfect for each estimate, average similarity here is equivalent to average precision. (c) Average size of the domain of influence for the 6 networks in panel (a).
Results are shown for the ground-truth domain of influence as based on brute-force enumeration (BF) as well as approximate domains of influence as estimated by the individual-based mean-field approximation (IBMFA) and three different GTN representations whose transfer functions are defined by the node look-up tables (LUT), by disjunctive normal form (DNF) of logical expressions, and by schematic redescription (SR). Results are averaged over all seed sets of a given size and across all networks. (d) Same as panel (c), but for the average value of the similarity (i.e., Jaccard index) between approximations of the domain of influence and the brute-force solution.

Figure S7. Comparison of DNF methods in approximating the domain of influence. (a) Average time required to estimate domains of influence for networks in the Cell Collective. Results are shown for the ground-truth domain of influence (GT) as well as approximate domains of influence as estimated by the individual-based mean-field approximation (IBMFA) and three different GTN representations whose transfer functions are defined by disjunctive normal form of logical functions without any reduction (DNF), by disjunctive normal form of the prime implicants of each logical function (PI), and by schematic redescription (SR).

et al. leverage the IBMFA to optimally identify the minimal sets of nodes able to drive a Boolean

arXiv:2304.10443v1 [physics.soc-ph] 20 Apr 2023

References

[1] Stuart A. Kauffman. The origins of order: Self-organization and selection in evolution. Oxford University Press, USA, 1993.
[2] Stuart Kauffman. A proposal for using the ensemble approach to understand genetic regulatory networks. Journal of Theoretical Biology, 230(4):581-590, 2004.
[3] Roberto Serra, Marco Villani, and Alessandro Semeria. Genetic network models and statistical properties of gene expression data in knock-out experiments. Journal of Theoretical Biology, 227(1):149-157, 2004.
[4] P. Rämö, J. Kesseli, and O. Yli-Harja. Perturbation avalanches and criticality in gene regulatory networks. Journal of Theoretical Biology, 242(1):164-170, 2006.
[5] Steffen Klamt, Julio Saez-Rodriguez, Jonathan A. Lindquist, Luca Simeoni, and Ernst D. Gilles. A methodology for the structural and functional analysis of signaling and regulatory networks. BMC Bioinformatics, 7(1):1-26, 2006.
[6] Julio Saez-Rodriguez, Luca Simeoni, Jonathan A. Lindquist, Rebecca Hemenway, Ursula Bommhardt, Boerge Arndt, Utz-Uwe Haus, Robert Weismantel, Ernst D. Gilles, Steffen Klamt, et al. A logical model provides insights into T cell receptor signaling. PLoS Computational Biology, 3(8):e163, 2007.
[7] Regina Samaga, Axel von Kamp, and Steffen Klamt. Computing combinatorial intervention strategies and failure modes in signaling networks. Journal of Computational Biology, 17(1):39-53, 2010.
[8] Rui-Sheng Wang and Réka Albert. Elementary signaling modes predict the essentiality of signal transduction network components. BMC Systems Biology, 5(1):1-14, 2011.
[9] Manuel Marques-Pita and Luis M. Rocha. Canalization and control in automata networks: body segmentation in Drosophila melanogaster. PLoS ONE, 8(3):e55946, 2013.
[10] Jorge G. T. Zanudo and Réka Albert. Cell fate reprogramming by control of intracellular network dynamics. PLoS Computational Biology, 11(4):e1004193, 2015.
[11] Fakhteh Ghanbarnejad and Konstantin Klemm. Impact of individual nodes in Boolean network dynamics. EPL (Europhysics Letters), 99(5):58006, 2012.
[12] Bernold Fiedler, Atsushi Mochizuki, Gen Kurosawa, and Daisuke Saito. Dynamics and control at feedback vertex sets. I: Informative and determining nodes in regulatory networks. Journal of Dynamics and Differential Equations, 25(3):563-604, 2013.
[13] Atsushi Mochizuki, Bernold Fiedler, Gen Kurosawa, and Daisuke Saito. Dynamics and control at feedback vertex sets. II: A faithful monitor to determine the diversity of molecular activities in regulatory networks. Journal of Theoretical Biology, 335:130-146, 2013.
[14] Jorge Gomez Tejeda Zañudo, Gang Yang, and Réka Albert. Structure-based control of complex networks with nonlinear dynamics. Proceedings of the National Academy of Sciences, 114(28):7234-7239, 2017.
[15] Gang Yang, Jorge Gómez Tejeda Zañudo, and Réka Albert. Target control in logical models using the domain of influence of nodes. Frontiers in Physiology, page 454, 2018.
[16] Thomas Parmer, Luis M. Rocha, and Filippo Radicchi. Influence maximization in Boolean networks. Nature Communications, 13(1):1-11, 2022.
[17] Thomas Parmer and Luis M. Rocha. Dynamical modularity in automata models of biochemical networks, 2023.
[18] W. V. Quine. A way to simplify truth functions. American Mathematical Monthly, 62:627-631, 1955.
[19] René Thomas. Boolean formalization of genetic control circuits. Journal of Theoretical Biology, 42(3):563-585, 1973.
[20] Tomáš Helikar, Bryan Kowal, Sean McClenathan, Mitchell Bruckner, Thaine Rowley, Alex Madrahimov, Ben Wicks, Manish Shrestha, Kahani Limbu, and Jim A. Rogers. The Cell Collective: toward an open and collaborative approach to systems biology. BMC Systems Biology, 6(1):1-14, 2012.
[21] Romualdo Pastor-Satorras, Claudio Castellano, Piet Van Mieghem, and Alessandro Vespignani. Epidemic processes in complex networks. Reviews of Modern Physics, 87(3):925, 2015.
[22] Rion Brattig Correia, Alexander J. Gates, Xuan Wang, and Luis M. Rocha. CANA: A Python package for quantifying control and canalization in Boolean networks. arXiv preprint arXiv:1803.04774, 2018.
[23] Réka Albert and Hans G. Othmer. The topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in Drosophila melanogaster. Journal of Theoretical Biology, 223(1):1-18, 2003.
[24] Regina Samaga, Julio Saez-Rodriguez, Leonidas G. Alexopoulos, Peter K. Sorger, and Steffen Klamt. The logic of EGFR/ErbB signaling: theoretical properties and analysis of high-throughput data. PLoS Computational Biology, 5(8):e1000438, 2009.
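The precision, recall, and Jaccard definitions used throughout the figures above reduce to a few set operations on node-state pairs. A minimal sketch, with hypothetical ground-truth and approximate domains of influence (the specific (node, state) pairs below are invented for illustration):

```python
# Precision, recall, and Jaccard similarity between a ground-truth
# domain of influence D and an approximation D_Z, following the set
# definitions in the text. The node-state pairs below are hypothetical.

def evaluate(D, D_Z):
    tp = D & D_Z        # true positives: correctly predicted (i, sigma_i)
    fp = D_Z - D        # false positives: predicted but not in ground truth
    fn = D - D_Z        # false negatives: in ground truth but missed
    precision = len(tp) / (len(tp) + len(fp))
    recall = len(tp) / (len(tp) + len(fn))
    jaccard = len(D & D_Z) / len(D | D_Z)
    return precision, recall, jaccard

D = {("a", 1), ("b", 1), ("c", 0)}      # hypothetical ground truth
D_Z = {("a", 1), ("b", 1), ("d", 1)}    # hypothetical approximation
precision, recall, jaccard = evaluate(D, D_Z)
print(precision, recall, jaccard)       # 2/3, 2/3, and 1/2 for these sets
```

Note that recall penalizes missed node states and precision penalizes spurious ones, which is why the text can report a perfect recall together with an imperfect similarity: in that case the Jaccard index coincides with the precision.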
Dynamical Methods for Target Control of Biological Networks
Thomas Parmer and Filippo Radicchi
Center for Complex Networks and Systems Research, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, Indiana 47408, USA
arXiv:2304.10443

Abstract: Estimating the influence that individual nodes have on one another in a Boolean network is essential to predict and control the system's dynamical behavior, for example, detecting key therapeutic targets to control pathways in models of biological signaling and regulation. Exact estimation is generally not possible due to the fact that the number of configurations that must be considered grows exponentially with the system size. However, approximate, scalable methods exist in the literature. These methods can be divided in two main classes: (i) graph-theoretic methods that rely on representations of Boolean dynamics into static graphs, and (ii) mean-field approaches that describe average trajectories of the system but neglect dynamical correlations. Here, we compare systematically the performance of these state-of-the-art methods on a large collection of real-world gene regulatory networks. We find comparable performance across methods. All methods underestimate the ground truth, with mean-field approaches having a better recall but a worse precision than graph-theoretic methods. Computationally speaking, graph-theoretic methods are faster than mean-field ones in sparse networks, but are slower in dense networks. The preference of which method to use, therefore, depends on a network's connectivity and the relative importance of recall vs. precision for the specific application at hand.
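The synchronous Boolean dynamics that all of the methods above approximate can be sketched in a few lines. The topology loosely follows the toy network of Fig. 1 (node e depends on nodes d and f; a and f are time-invariant inputs), but the transfer functions themselves are hypothetical stand-ins, not the ones shown in the figure:

```python
# Minimal synchronous Boolean-network simulator (sketch).
# Topology loosely mirrors the toy example of Fig. 1; the rules are
# hypothetical stand-ins for the figure's transfer functions.

def step(state, rules):
    # Synchronous update: every node applies its transfer function
    # to the *previous* global state simultaneously.
    return {node: rule(state) for node, rule in rules.items()}

rules = {
    "a": lambda s: s["a"],             # input node: time-invariant
    "f": lambda s: s["f"],             # input node: time-invariant
    "d": lambda s: s["a"] or s["f"],   # hypothetical DNF rule for d
    "e": lambda s: s["d"] and s["f"],  # e depends on d and f, as in Fig. 1
}

state = {"a": 1, "f": 1, "d": 0, "e": 0}
for _ in range(3):                     # a few steps reach a fixed point here
    state = step(state, rules)
print(state)                           # -> {'a': 1, 'f': 1, 'd': 1, 'e': 1}
```

Brute-force estimation of a domain of influence amounts to running such updates from every (or, for N > 10, from R = 100 sampled) initial configurations consistent with the seed set, which is exactly the exponential cost that the GTN and mean-field approximations avoid.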
Electronic structures of B-2p and C-2p of boron-doped diamond film by soft X-ray absorption and emission spectroscopy

Jin Nakamura, Eiki Kabasawa, Nobuyoshi Yamada, Yasuaki Einaga, Daisuke Saito, Hideo Isshiki, Shigemi Yugo, and Rupert C. C. Perera

Department of Applied Physics & Chemistry, The University of Electro-Communications, Chofu-shi, Tokyo 182-8585, Japan
Department of Chemistry, Keio University, Hiyoshi, Kanagawa, Japan
Department of Electro-Engineering, The University of Electro-Communications, Chofu-shi, Tokyo 182-8585, Japan
Center for X-ray Optics, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA

(Dated: submitted to Phys. Rev. B, June 14, 2004; arXiv preprint, 16 Jul 2004)
Keywords: XAS & XES of doped diamond
PACS numbers: 81.05.Uw, 71.55.-i, 74.25.Jb, 78.70.En, 78.70.Dm

Abstract: X-ray absorption (XAS) and emission (XES) spectroscopy near the B-K and C-K edges have been performed on metallic (~1 at% B, B-diamond) and semiconducting (~0.1 at% B and N, BN-diamond) doped-diamond films. Both B-K XAS and XES spectra show a metallic partial density of states (PDOS) with a Fermi energy of 185.3 eV, and there is no apparent boron-concentration dependence, in contrast to the different electric properties. In the C-K XAS spectrum of B-diamond, the impurity state ascribed to boron is clearly observed near the Fermi level. The Fermi energy is found to be almost the same as the top of the valence band of non-doped diamond, E_V = 283.9 eV. The C-K XAS of BN-diamond shows both the B-induced shallow level and the N-induced deep and broad levels as in-gap states, in which the shallow level is in good agreement with the activation energy (E_a = 0.37 eV) estimated from the temperature dependence of the conductivity; namely, the change in the C-2p PDOS upon impurity-induced metallization is directly observed. The electric property of this diamond is mainly ascribed to the electronic structure of C-2p near the Fermi level.
The observed XES spectra are compared with the DVXα cluster calculation. The DVXα result supports the strong hybridization between B-2p and C-2p observed in the XAS and XES spectra, and suggests that the small amount of boron (≤1 at%) in diamond occupies substitutional sites rather than interstitial sites.

I. INTRODUCTION

Diamond is a very attractive material for industrial applications because of its maximum hardness, high surface stability (chemical inertness), large energy gap (~5.5 eV), high thermal conductivity, and so on. Boron-doped diamond expands these possibilities toward applications in electric devices. [1] Lightly boron-doped diamond shows p-type character with an activation energy of about 0.37 eV, [2] and heavily doped diamond shows metallic conductivity. [3] Furthermore, the recent discovery of superconductivity in more heavily boron-doped diamond brought new attention to the problem of superconductivity by impurity-induced metallization in semiconductors. [4] However, the crystallinity of these heavily doped compounds is not clear, in contrast to the lightly (≤0.5%) doped diamonds. It seems that boron atoms occupy interstitial sites in the heavily doped case (~4%) [5,6] and substitute for carbon in the lightly doped case (≤0.5%). [7] It should be important to clarify the remarkable electronic structure of more heavily doped diamonds, but at present the priority should be the study of the lightly doped diamond because of its crystallinity. Therefore, in this paper, we study the electronic structures of lightly doped diamond with metallic (~1 at%) and semiconducting (~0.1 at%) characters. The partial densities of states (PDOS's) of boron- and carbon-2p, measured using X-ray absorption (XAS) and X-ray emission (XES) spectroscopy near the B-K and C-K edges of these doped diamonds, are reported.
XAS and XES near the B-K and C-K edges are powerful techniques for the direct measurement of the PDOS's of dopant boron and host carbon, especially for semiconducting or insulating materials, in comparison with electron spectroscopy.

II. EXPERIMENTAL

Highly boron-doped diamond thin films were deposited on Si (100) wafers in a microwave plasma-assisted chemical vapor deposition (MPCVD) system (ASTeX Corp.). Details of the preparation are described elsewhere. [8] A mixture of acetone and methanol in the volume ratio of 9/1 was used as the carbon source. B_2O_3, the boron source, was dissolved in the acetone-methanol solution at a B/C atomic ratio of 1:100. This 1 at% boron-doped diamond, B-diamond, shows metallic conductivity at room temperature. The lightly doped diamond film was synthesized using the MPCVD method with an h-BN target (BN-diamond). [9] The boron and nitrogen concentrations are both estimated to be 0.1 at% by SIMS measurements, and the electric property is semiconducting with an activation energy E_a of about 0.37 eV. [9] Although the value of E_a depends on the impurity concentration, this value is consistent with previous reports on lightly boron-doped diamond films. [1,7] Several values of the nitrogen-impurity levels (deep n-type) have been reported. [10] Soft X-ray absorption (XAS) and XES measurements were performed at beamline BL-8.0.1 [11] of the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory (LBNL). The energy resolutions of the incoming and outgoing X-rays were 0.2-0.3 eV. For the calibration of the monochromator and spectrometer, h-BN, B_2O_3, HOPG and natural diamond were used as standard samples. [12,13,14] Although all the samples are polycrystalline, there is a possibility of orientation. In order to check the orientation and the surface π-resonant state reported for some borides, the polarization (angle) dependences of XAS and XES were measured.
There are no essential differences in the XAS and XES spectra among those with different angles, which suggests that there is neither orientation nor any borides showing the surface π-resonant state in these samples. It is noticed that there is no sharp peak at 192 eV (* in Fig. 1), which corresponds to the surface π-resonant state of some borides, h-BN and B_2O_3. This means there is no trace of these borides in the diamond samples. Figure 2 shows the angular and energy dependence of the B-K XES spectra of B- and BN-diamonds. Figure 2(a) shows two B-K XES spectra of B-diamond with different incident angles θ of 70° and 20°. The excitation energy, E_ex, is 200 eV. These two spectra coincide with each other, which indicates that there is no orientation in this sample. The most important point is the observation of a clear Fermi edge in both samples with the same threshold at 185.0 eV, consistent with the B-K XAS spectra. The inset shows the B-K XES spectrum with E_ex of 185.5 eV, which corresponds to the sharp state near the Fermi level in the B-K XAS spectrum (Fig. 1). In the inset, the spectrum of h-BN with E_ex of 200 eV is also plotted. The bonding in h-BN is ideal sp² between B and N, having a peak at around 182-183 eV. The observed B-K XES spectra of B-diamond differ from that of h-BN, but it is hard to say from these results alone that the spectra are due to sp³ bonding of B in diamond. Figure 2(b) shows the detailed excitation-energy dependence for B-diamond. For the spectrum with E_ex of 185.5 eV, an elastic (intense) peak is observed. We therefore magnified the spectrum with E_ex of 185.5 eV five times in Fig. 2(b). The detailed features of these spectra in the energy region E ≤ 184 eV agree well with each other. This means that all B-2p states have a unique electronic structure. Furthermore, the spectrum of BN-diamond shows almost the same form, which means there is no B-concentration dependence in this doping region.
3(a), the difference between B-and non-doped diamonds is also shown. It is clearly seen that the 1-at%B in diamond makes a metallic state in C-2p PDOS. But for BN-diamond (dotted line), the threshold energy shifts higher a little. Figure 3(b) shows detailed in-gap states and shows the edge of C-K XES of non-doped diamond. The threshold energy of the impurity state in XAS of Bdiamond, 283.9 eV, is agreement with the edge energy in C-K XES of non-doped diamond (the top of the valence band, E v ). It is noted that all the profiles of normal C-K XES of B-, BN-and non-doped diamond are almost same (see section III C). In contrast to the metallic Bdiamond, BN-diamond seems to has a small gap. For this semiconducting sample the activation energy, E a , of 0.37 eV was measured from the temperature dependence of the conductivity. [9] Then, the shift of 0.3∼0.4 eV between metallic and semiconducting samples is consistently explained by the small gap with E a of 0.37 eV. In other words, 0.1-at%B makes a shallow level near the valence band with E a of 0.37 eV. In addition, a broad in-gap state spreads over the gap is observed in BN-diamond. It is reported that nitrogen-dopant makes several deep levels in diamond [10] and the present spectra are similar to the reported C-K XAS spectra for graphite-carbon nitride system. [15] Then this broad in-gap state is ascribed to N-dopant. Figure 4(a) shows the C-K XES spectra with E ex of 284 eV and 300 eV. The spectrum with E ex of 300 eV (dotted line) is almost same as the spectrum of the nondoped-diamond. [12] This is consistent with the band calculation results. [16] This agreement between doped and non-doped diamond suggests that the host C-1s core level does not change by B-doping within the experimental error. However, the spectrum with E ex of 284 eV (thick solid line) shows a sharp peak at about 284 eV (elastic peak) and a broad tail toward the low energy side. 
Figure 4(b) shows the subtraction of the elastic peak from the spectrum. The elastic peak was assumed to be Gaussian with a FWHM of 0.6 eV. [17] The result of the subtraction is essentially the same as before the subtraction because the elastic peak is narrow. Because the unoccupied state (the peak at 284 eV) is observed only in B-doped diamond, the XES spectrum with Eex of 284 eV represents the PDOS of the carbon atoms hybridized with the dopant boron, i.e., the nearest-neighboring (N.N.) carbon of boron. Because it is difficult to study the electronic structures of the dilute dopant boron and the neighboring carbon atoms using band calculations, calculations were performed with the DVXα method, [18] a cluster-calculation method.

C. C-K XES of doped diamond film

D. DVXα cluster calculation

The program SCAT [18] was used for the DVXα calculation. Because the samples are covalent, the Madelung potential was not taken into account. It is also known that an average of the PDOSs of a few atoms near the center of a large cluster reproduces experimental results well. In the present work, we calculated the PDOSs for non-doped diamond and for two doped-diamond cases, in which boron occupies an interstitial site or substitutes for a carbon site. The typical cluster size is about 200 atoms, a limitation due to the memory size of the program. First, the PDOS of the non-doped cluster model, C184, was calculated and the results were compared with the experimental data and the band calculations. Although the Fermi levels in the calculations and the observation do not agree exactly, the overall features of the PDOSs are in agreement with each other. For the B-doped diamond case, typical cluster models of C174BH16 and C184BH12 were used for the substitutional and interstitial cases, respectively. In these cases, the non-doped models C175H16 and C184H12 were also applied, and the results were confirmed to be the same as those of C184. 
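The elastic-peak removal described above can be sketched as follows. This is a hedged illustration of the stated procedure (a Gaussian of fixed 0.6 eV FWHM scaled to the measured intensity at the elastic energy), not the authors' actual analysis code, and the test spectrum is synthetic:

```python
import numpy as np

def subtract_elastic_peak(energy, intensity, e_elastic, fwhm=0.6):
    """Subtract a Gaussian elastic peak of given FWHM (eV), scaled to the
    measured intensity at the elastic (excitation) energy."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> std. dev.
    amplitude = np.interp(e_elastic, energy, intensity)
    gauss = amplitude * np.exp(-0.5 * ((energy - e_elastic) / sigma) ** 2)
    return intensity - gauss

# Synthetic check: a pure elastic peak at Eex = 284 eV should subtract to ~0
SIGMA = 0.6 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
energy = np.linspace(282.0, 286.0, 401)            # grid contains 284.0
spectrum = 5.0 * np.exp(-0.5 * ((energy - 284.0) / SIGMA) ** 2)
inelastic = subtract_elastic_peak(energy, spectrum, 284.0)
```

For a real spectrum the inelastic tail also contributes at the elastic energy, so the scaled-amplitude choice slightly over-subtracts; as the text notes, with so narrow a peak the effect on the remaining spectrum is minor.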
In both B-doped cluster models, the boron atom is always set at the center of the cluster. In these large clusters, the effect of H-termination is found to be negligible for the PDOSs of the inner C or B atoms. It is noted that a few in-gap states appear even in the non-doped case; these might be ascribed to surface states. Therefore, in the present DVXα results, the origin of the energy is set to the maximum energy of the electrons in the occupied states, excluding the spurious surface states, i.e., the energy is measured from the top of the valence band (V.B.), EV. The results are shown in Fig. 5. These DVXα results suggest that the dopant boron replaces carbon sites in this concentration region, consistent with the previous report. [7]

IV. CONCLUSIONS

X-ray absorption (XAS) and emission (XES) spectroscopy at the B-K and C-K edges has been performed on metallic (∼1at%B) and semiconducting (∼0.1at%B and N) doped-diamond films. Both B-K XAS and XES spectra show a metallic partial density of states (PDOS) with a Fermi energy of 185.3 eV, and there is no apparent boron-concentration dependence, in contrast to the different electrical properties. In the C-K XAS spectrum of metallic B-diamond, the impurity state ascribed to boron is clearly observed near the Fermi level. The Fermi energy is found to be almost the same as the top of the valence band of non-doped diamond, EV, 283.9 eV. The C-K XAS of semiconducting BN-diamond shows both the B-induced shallow level and the N-induced deep and broad levels as in-gap states, in which the shallow level is in good agreement with the activation energy (Ea=0.37 eV) estimated from the temperature dependence of the conductivity; namely, the change in the C-2p PDOS upon impurity-induced metallization is directly observed. The electronic properties of these diamonds are mainly attributed to the electronic structure of C-2p near the Fermi level. The observed XAS and XES spectra are compared with the DVXα cluster calculations. 
The DVXα result supports the strong hybridization between B-2p and C-2p observed in the XAS and XES spectra, and suggests that boron in diamond occupies the substitutional site rather than the interstitial site in the present doping range between 0.1at%B and 1at%B.

FIG. 1: B-K XAS and XES spectra of B- and BN-diamonds. The incident angle was set to θ=20°. The excitation energy of the XES measurement was 200 eV.

III. RESULTS AND DISCUSSIONS

A. B-K XAS and XES of doped diamond film

Figure 1 shows the B-K XAS and XES spectra of B- and BN-diamonds. In both compounds, clear metallic states of B-2p are observed, in which the Fermi levels are the same with each other at 185.3 eV measured from the B-1s core level. There is a pseudo-gap state between 187 and 190 eV, and the intensity steeply increases with energy with a threshold of 190.5 eV.

FIG. 2: B-K XES spectra of B- and BN-diamonds. (a) The incident-angle dependence of B-diamond. The inset shows the excitation-energy dependence and the B-K XES spectrum of h-BN. (b) The excitation-energy dependence of B-diamond, in which the spectrum with Eex of 185.5 eV is magnified 5 times. The spectrum of BN-diamond with Eex of 200 eV is also shown.

FIG. 3: C-K XAS spectra of B-, BN- and non-doped diamonds. (a) Overall features of those spectra, (b) detailed spectra of the in-gap states. The XAS and XES of non-doped diamond are also shown. The dashed lines indicate the top of the valence band of non-doped diamond and the impurity level expected from the activation energy of BN-diamond, respectively.

B. C-K XAS of doped diamond film

Figure 3 shows the C-K XAS spectra of B-, BN- and non-doped diamonds. Figure 3(a) shows the overall features of the C-K XAS spectra. The spectrum of non-doped diamond (thin solid line) shows a clear gap for E ≤ 289.1 eV, which corresponds to the bottom of the conduction band (C.B.). On the other hand, the spectra of B- and BN-diamonds show in-gap states. 
For B-diamond (thick solid line), only one peak, at 284 eV, is observed as the in-gap state. The threshold energy of this peak is estimated to be about 283.9 eV, which is consistent with the energy expected from both the observed bottom of the C.B. (289.1 eV) and the band gap of non-doped diamond.

FIG. 4: C-K XES spectra of B-diamond: (a) XES spectra with Eex of 284 eV (near the Fermi energy) and of 300 eV. The C-K XAS spectrum of B-diamond is also shown (two arrows indicate the two Eex positions). (b) C(N.N.) XES spectra derived from the subtraction of the elastic peak from the observed XES spectra with Eex of 284 eV (near the Fermi energy). The inset shows the C(N.N.) XES and C-K XAS spectra.

Figure 5(a) shows the experimental XES spectra of B-K with Eex of 200 eV, C-K with Eex of 284 eV and C-K with Eex of 300 eV of B-diamond, corresponding to the PDOS of B-2p, C(N.N.)-2p and host C-2p, respectively. The B-K XAS spectrum of B-diamond near the Fermi level is also shown (red dashed line). The horizontal axes for B-K and C-K are shifted so that the Fermi levels coincide with each other. One can see that there is good agreement between the B-2p and C(N.N.)-2p PDOS, which means strong hybridization between these orbitals. Figures 5(b) and (c) show the results for the substitutional and interstitial cases. The origin of the energy in DVXα is set to the maximum energy of the electrons in the occupied states, i.e., the top of the V.B. The PDOS of C(N.N.)-2p (blue line) is derived as the averaged PDOS of the four N.N. carbon atoms [the four blue balls in the insets of Figs. 5(b) and (c)]. The PDOS of host C-2p (black line) is the average over a few carbon atoms near the center of the cluster but far from the dopant boron, which is in good agreement with the result of the non-doped cluster, C184. All the calculated PDOSs are convoluted using a Gaussian function with the experimental width. In the substitutional case [Fig. 5(b)], the PDOS of B-2p (red line) shows a large peak around E−EV=0 eV, and the PDOS of C(N.N.)-2p possesses a main peak around E−EV=-3.8 eV. 
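The Gaussian convolution of the calculated PDOS mentioned above can be illustrated with a short sketch. The level positions and weights below are hypothetical stand-ins for cluster eigenvalues (only the two peak energies quoted in the text are reused); the broadening step itself mirrors what the text describes:

```python
import numpy as np

def broaden_pdos(levels, weights, grid, fwhm=0.6):
    """Broaden discrete cluster eigenvalues (delta functions) with a
    unit-area Gaussian of the experimental width (FWHM in eV)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    pdos = np.zeros_like(grid)
    for e, w in zip(levels, weights):
        pdos += w * np.exp(-0.5 * ((grid - e) / sigma) ** 2)
    return pdos / (sigma * np.sqrt(2.0 * np.pi))  # normalize each Gaussian

# Hypothetical levels (eV relative to the valence-band top, E_V):
# a C(N.N.)-2p-like peak at -3.8 eV and a B-2p-like peak at 0 eV
grid = np.linspace(-6.0, 2.0, 801)
pdos = broaden_pdos([-3.8, 0.0], [1.0, 0.5], grid)
```

Because each delta function is replaced by a unit-area Gaussian, the integrated PDOS equals the total weight, so spectral weight is conserved by the convolution.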
It is worthwhile to note that considerable amounts of states around EV appear in both the B-2p and C(N.N.)-2p PDOSs. This is consistent with the experimental result that the observed PDOSs of B-2p and C(N.N.)-2p also show their main peaks on the higher-energy side of the host C-2p PDOS. Furthermore, the broad structures in the low-energy tail of the B-2p PDOS agree with those of the C(N.N.)-2p PDOS, which is consistent with the strong hybridization between the dopant B-2p and C(N.N.)-2p observed experimentally [inset of Fig. 4(b) and Fig. 5(a)]. The result in the substitutional case is thus consistent with the experimental one. On the other hand, in the interstitial case, both the PDOSs of B-2p and C(N.N.)-2p shift and broaden toward the low-energy side. In particular, large in-gap states for both the B-2p and C(N.N.)-2p PDOSs appear with a threshold energy of about E−EV ∼+3 eV. There are almost no states near EV, which does not support the experimental results.

FIG. 5: Comparison of observed B-K XAS and XES, and C-K XES spectra with DVXα simulations. (a) Observed XAS and XES spectra, (b) DVXα results of the substitutional case, and (c) DVXα results of the interstitial case. In the inset pictures of (b) and (c), the red and blue balls represent the boron and N.N. carbon atoms, respectively.

Acknowledgment

We express our thanks to Dr. Y. Muramatsu.

[1] G.S. Gildenblat, S.A. Grot and A. Badzian, Proc. IEEE 79, 647 (1991).
[2] J.W. Glesener, Appl. Phys. Lett. 64, 217 (1994).
[3] H. Shimomi, Y. Nishibayashi and N. Fujimori, Jpn. J. Appl. Phys. 30, 1363 (1991).
[4] E.A. Ekimov, V.A. Sidorov, E.D. Bauer, N.N. Mel'nik, N.J. Curro, J.D. Thompson and S.M. Stishov, Nature 428, 542 (2004).
[5] Y.H. Chen, C.T. Hu and I.N. Lin, Appl. Phys. Lett. 75, 2857 (1999).
[6] K. Thonke, Semicond. Sci. Tech. 18, S20 (2003).
[7] M. Werner, R. Job, A. Zaitzev, W.R. Fahrer, W. Seifert, C. Johnston and P.R. Chalker, Phys. Stat. Sol. (a) 154, 385 (1996).
[8] T. Yano, D.A. Tryk, K. Hashimoto and A. Fujishima, J. Electrochem. Soc. 145, 1870 (1998).
[9] D. Saito, E. Tsutsumi, N. Ishigaki, T. Tashiro, T. Kimura and S. Yugo, Diamond Relat. Mater. 11, 1804 (2002).
[10] K. Iakoubovskii and G.J. Adriaenssens, J. Phys. Condens. Matter 12, L77 (2000), and refs. therein.
[11] J.J. Jia, T.A. Callcott, J. Yurkas, A.W. Ellis, F.J. Himpsel, M.G. Samant, J. Stöhr, D.L. Ederer, J.A. Carlisle, E.A. Hudson, L.J. Terminello, D.K. Shuh and R.C.C. Perera, Rev. Sci. Instrum. 67, 3372 (1996).
[12] Y. Ma, P. Skytt, N. Wassdahl, P. Glans, D.C. Mancini, J. Guo and J. Nordgren, Phys. Rev. Lett. 71, 3725 (1993).
[13] P. Skytt, P. Glans, D.C. Mancini, J.H. Guo, N. Wassdahl, J. Nordgren and Y. Ma, Phys. Rev. B 50, 10457 (1994).
[14] Y. Muramatsu, T. Kaneyoshi, E.M. Gullikson and R.C.C. Perera, Spectrochimica Acta A 59, 1951 (2003).
[15] I. Jimenez, R. Gago, J.M. Albella and L.J. Terminello, Diamond Relat. Mater. 10, 1170 (2001).
[16] T. Oguchi, private communication.
[17] The intensity of the peak at 284.2 eV in the XAS spectrum is weak; to increase the flux, the slits of the monochromator were widened so that the resolution is the same as the FWHM of the 284.2 eV peak (∼0.6 eV).
[18] H. Adachi, M. Tsukada and C. Satoko, J. Phys. Soc. Jpn. 45, 875 (1978).
Jin Nakamura, Eiki Kabasawa, Nobuyoshi Yamada, Yasuaki Einaga, Daisuke Saito, Hideo Isshiki, Shigemi Yugo and Rupert C.C. Perera, "Electronic structures of B-2p and C-2p of boron-doped diamond film by soft X-ray absorption and emission spectroscopy," Phys. Rev. B 70, 245111 (2004); arXiv:cond-mat/0407438.
Engineering heat transport across epitaxial lattice-mismatched van der Waals heterointerfaces

Emigdio Chavez-Angel, Catalan Institute of Nanoscience and Nanotechnology (ICN2), CSIC and BIST, Campus UAB, 08193 Bellaterra, Barcelona, Spain
Polychronis Tsipas, Institute of Nanoscience and Nanotechnology, National Center for Scientific Research "Demokritos", 15341 Agia Paraskevi, Athens, Greece
Peng Xiao, Catalan Institute of Nanoscience and Nanotechnology (ICN2), CSIC and BIST, Campus UAB, 08193 Bellaterra, Barcelona, Spain
Mohammad Taghi Ahmadi, School of Engineering, University of Warwick, CV4 7AL Coventry, United Kingdom
Abdalghani Daaoub, School of Engineering, University of Warwick, CV4 7AL Coventry, United Kingdom
Hatef Sadeghi, School of Engineering, University of Warwick, CV4 7AL Coventry, United Kingdom
Clivia M. Sotomayor Torres, Catalan Institute of Nanoscience and Nanotechnology (ICN2), CSIC and BIST, Campus UAB, 08193 Bellaterra, Barcelona, Spain; ICREA, Passeig Lluis Companys 23, 08010 Barcelona, Spain
Athanasios Dimoulas, Institute of Nanoscience and Nanotechnology, National Center for Scientific Research "Demokritos", 15341 Agia Paraskevi, Athens, Greece
Alexandros El Sachat, Catalan Institute of Nanoscience and Nanotechnology (ICN2), CSIC and BIST, Campus UAB, 08193 Bellaterra, Barcelona, Spain; Institute of Nanoscience and Nanotechnology, National Center for Scientific Research "Demokritos", 15341 Agia Paraskevi, Athens, Greece

Artificially engineered 2D materials offer unique physical properties for thermal management, surpassing naturally occurring materials. 
Here, using van der Waals epitaxy, we demonstrate the ability to engineer extremely insulating ultra-thin thermal metamaterials based on crystalline lattice-mismatched Bi2Se3/MoSe2 superlattices and graphene/PdSe2 heterostructures with exceptional thermal resistances (70-202 m2K/GW) and ultralow cross-plane thermal conductivities (0.01-0.07 W/mK) at room temperature, comparable to those of amorphous materials. Experimental data obtained using frequency-domain thermoreflectance and low-frequency Raman spectroscopy, supported by tight-binding phonon calculations, reveal the impact of lattice mismatch, phonon-interface scattering, size effects, temperature and interface thermal resistance on cross-plane heat dissipation, uncovering different thermal transport regimes and the dominant role of long-wavelength phonons. Our findings provide essential insights into emerging synthesis and thermal characterization methods and valuable guidance for the development of large-area heteroepitaxial van der Waals films of dissimilar materials with tailored thermal transport characteristics. The recent advent of van der Waals (vdW) heterostructures and superlattices (SLs) has opened new perspectives in nanoelectronics with ultra-high mobility and topological properties, in optics with high absorption and sensitivity, as well as in the field of heat transport engineering, with low scattering rates and highly anisotropic properties. [1][2][3][4] Specifically, thermodynamically stable misfit layer compounds, vdW heterostructures and SLs with tailored thermal transport and thermoelectric conversion properties have recently been proposed for thermal management applications. [4][5][6][7][8] Owing to the periodic nature of SLs, new phonon modes and bandgaps can be formed by the zone-folding effect, which results in strong modifications of the phonon group velocity and thermal conductivity depending on the period thickness. 
9 More interestingly, vdW SLs assembled from layers with structural lattice mismatch and weak vdW interactions could enable a strong reduction of phonon transport along the c-axis while retaining their in-plane crystallinity. Despite significant efforts in this direction, many experimental studies have reported vdW stacks made using top-down fabrication methods, such as exfoliation, which can only prepare small flakes on a micrometer scale. 8,[10][11][12] Additionally, such flakes are likely to have defects or contamination. Nevertheless, Vaziri et al. 12 and Sood et al. 8 have demonstrated high thermal isolation across few-micrometer-size exfoliated Gr/MoSe2/MoS2/WSe2 heterostructures and graphene/MoS2 SLs, respectively. In addition, Kim et al. have achieved ultra-low cross-plane thermal conductivity at room temperature (~0.041 W/mK) using van der Waals films with random interlayer rotations, but exclusively in polycrystalline WS2 films. 13 In contrast, bottom-up epitaxial techniques, such as molecular beam epitaxy (MBE), enable the fabrication of high-order vdW SLs at wafer scale with atomically smooth and abrupt periodic interfaces. 1,[14][15][16] The ability to control atomic layer thickness and chemical composition also allows precise design of the transport properties of the SLs. Moreover, due to the weak vdW interactions between layers, vdW epitaxy offers great flexibility for integrating atomic layers of distinct materials such as metals, semiconductors, superconductors, or insulators, without lattice-matching requirements. 1,17 Here, using wafer-scale heteroepitaxial growth, we demonstrate superior cross-plane thermal insulation based on atomically thin crystalline vdW layered materials. Specifically, we grow directly on different substrates high-quality lattice-mismatched Bi2Se3/MoSe2 SLs and graphene/PdSe2 heterostructures of varying thickness that exhibit tailored thermal transport properties at the atomic scale. 
Combining contactless characterization techniques, e.g., frequency-domain thermoreflectance (FDTR) and low-frequency Raman spectroscopy, we study the acoustic and thermal properties of these epitaxial films. We focus on unraveling the impact of vibrational mismatch, phonon-interface scattering, temperature and film thickness on cross-plane thermal transport, and we estimate the effective cross-plane thermal conductivity and total thermal resistance of the films taking into account all the interfacial contributions in our multilayer structures. Phonon transport calculations support our experimental data and further reveal the impact of the thermal contact resistance on the thermal conduction of ultra-thin layered materials and the presence of different thermal transport regimes in such atomically thin 2D films. Until now, only the in-plane thermal transport properties of Bi2Se3, MoSe2 and PdSe2 films have been investigated; [18][19][20][21][22] such high cross-plane thermal insulating properties have previously been achieved only using either polycrystalline films consisting of a single material or exfoliated vdW stacks.

Results

We grew the epitaxial Bi2Se3 films, Bi2Se3/MoSe2 SLs and graphene/PdSe2 heterostructures on various single-crystal substrates such as strontium titanate (STO), sapphire and silicon carbide (SiC) by MBE, yielding 2D thin films with minimal disorder. A schematic representation of each vdW material is shown in Fig. 1a-c. The structural and chemical characterization of the samples was performed by X-ray photoelectron spectroscopy (XPS), low-frequency Raman spectroscopy, scanning tunnelling microscopy (STM) and high-resolution scanning transmission electron microscopy (HR-STEM) measurements. Figure 1d displays HR-STEM images of Bi2Se3/MoSe2 SLs of different thickness that confirm the expected layering structures, which consist of vertically stacked Bi2Se3 and MoSe2 sublayers with atomically sharp and contamination-free interfaces. 
Reflection high-energy electron diffraction (RHEED) patterns show that the MoSe2 and Bi2Se3 layers are repeatedly grown highly oriented on top of each other despite their large lattice mismatch (~20%) (see also Fig. S1 in the Supporting Information (SI)). The RHEED patterns of one quintuple layer (QL) Bi2Se3 and of a Bi2Se3/MoSe2 heterostructure show the difference in the relative positions of the streaks, which reflects the large lattice mismatch between Bi2Se3 and MoSe2 at room temperature. In situ XPS data for the Bi2Se3 thin films and Bi2Se3/MoSe2 heterostructures are presented in Fig. 1k-n. The binding energies of the Bi 4f7/2 and Se 3d5/2 core levels for 1 QL Bi2Se3 grown directly on STO were 158.00 eV and 53.71 eV, respectively, in good agreement with previous reports 23 on thin-film single-crystal Bi2Se3. After 2 ML MoSe2 growth, the Mo 3d5/2 and Se 3d5/2 peak positions at 228.54 eV and 54.25 eV, respectively, indicate Mo-Se bonds and agree well with MoSe2 formation. 24 The Se 3d peak in Fig. 1n is deconvoluted into four peaks to account for two types of bonds, namely Bi-Se and Mo-Se bonds, keeping the Se 3d5/2-3d3/2 spin-orbit splitting fixed at 0.86 eV. The two distinct Se environments suggest Bi2Se3 and MoSe2 formation rather than the formation of a mixed Bi-Mo-Se compound. The latter is reinforced by the fact that the Bi 4f7/2 peak position remains the same after MoSe2 film growth (see Fig. S2 in the SI). The XPS spectra from the graphene/PdSe2 heterostructures are shown in the SI (Fig. S3). Next, we systematically study the phonon properties of Bi2Se3 films, Bi2Se3/MoSe2 SLs and graphene/PdSe2 heterostructures using low-frequency Raman spectroscopy. Figure 2a shows the Raman spectra of Bi2Se3 films of varying thickness (from 1 to 20 QL), where all the out-of-plane and in-plane Raman-active optical modes are observed (2Eg and 2A1g). 
Specifically, the Eg(1), Eg(2), A1g(1) and A1g(2) modes are detected at ~37 cm-1, ~132 cm-1, ~71 cm-1 and ~173 cm-1, respectively, in agreement with previous studies. 25,26 Both out-of-plane modes (A1g(1), A1g(2)) show a pronounced red shift (about 2.7 cm-1) as the thickness decreases, while the in-plane modes (Eg(1), Eg(2)) are red-shifted with decreasing thickness by about 1.7 and 3.5 cm-1, respectively (see also Fig. S11a). We note that the A1g(1) mode is more sensitive to thickness because it reflects the out-of-plane vibrations of the Se and Bi atoms. 26 We also observe a broadening of the Eg(2) mode as the film thickness decreases (see Fig. 2a), in agreement with previous reports, 26 suggesting that the layer-to-layer stacking strongly affects the interlayer bonding. In the SLs (see Fig. 2b), in addition to the Raman modes that correspond to Bi2Se3, we detect the A1g mode in the spectral range of 240.1-241.1 cm-1, which confirms the existence of the MoSe2 layers. 27,28 In Figure 2c we also plot the ratios of the Raman intensities for the out-of-plane modes. In graphene/PdSe2 heterostructures, we detected 6 main peaks in the high-frequency region (>130 cm-1) that belong to the Ag and B1g intralayer phonon modes of PdSe2 (Fig. 2d). These modes can be attributed to the intralayer vibrations of PdSe2. The Raman peak positions of all phonon modes showed a red shift with increasing number of layers, in agreement with previous CVD-grown PdSe2 films. 29,30 Interestingly, the intralayer vibration at 149.1 cm−1, originating from the intralayer Se-Se bonds, exhibits sufficiently strong Raman intensity in 3, 5, and 7L PdSe2, indicating its strong coupling to the electronic states. 29 Furthermore, we observe Raman-inactive modes in the frequency region between 50-130 cm−1, which are activated due to the breakdown of translation symmetry in few-layer films, as has been recently shown. 
30,31 The phonon modes detected in the frequency region between 100 and 300 cm-1 are also consistent with a recent study where graphene/PdSe2 heterostructures were formed by exfoliation. 32 Finally, the two Raman peaks SiC1 and SiC2, at 196.6 cm−1 and 204.3 cm−1, respectively, originate from the undoped SiC substrate. 33 We note that one of these PdSe2 modes is not clearly visible because it partially overlaps with the SiC2 mode. The thermal measurements were performed using our custom-built frequency-domain thermoreflectance (FDTR) setup, which essentially combines simultaneous measurements of the cross-plane thermal conductivity (kz) and the interface thermal conductance. 34,35 For the case of the SLs, our multilayer structures consist of Au/SL/substrate stacks (see Fig. 3a). For each SL, we obtained FDTR measurements and extracted the effective kz and the interface thermal resistances between Au (transducer)/SL (R1) and SL/substrate (R2) following a multilayer three-dimensional heat diffusion model. 36 The required material properties for this model are the thickness (t), the volumetric heat capacity (C), R1, R2 and kz. The thermal conductivity and volumetric heat capacity of Au as well as the volumetric specific heat of Bi2Se3 and MoSe2 were taken from the literature. 37,38 Therefore, the unknown parameters are kz, R1 and R2. To estimate these three parameters, we first quantified the sensitivity of the recorded phase signal to the different parameters according to our multilayer geometry (see Fig. S4 in the SI), following a similar methodology as reported elsewhere. [34][35][36] Typical examples of the recorded phase signal and the corresponding best model fits for bilayer MoSe2 and Bi2Se3/MoSe2 SLs with periods 2, 2.5 and 3 are shown in Fig. 3b. To extract the kz of each SL from a single measurement, we followed the same approach as used in previous works, [34][35][36]39 supported by our sensitivity analysis. 
First, we extract kz by fitting the experimental data in a low frequency range (20 kHz to 1 MHz), where the sensitivity of the phase signal to R1, R2, and C of the films is negligible. Then, we fix kz and fit the high frequency range (1−45 MHz) to estimate R1 and R2. The same procedure was followed to extract kz, R1 and R2 for all the epitaxial films. The sensitivity analysis and FDTR data for the case of graphene/PdSe2 are shown in Fig. S5 and Fig. S6, respectively. In Figure S9, we also show all the interface thermal resistance values, Rint = R1 + R2, where R1 = 1/G1 and R2 = 1/G2, extracted from the FDTR experiments and compare them with previous reports. In the SLs, we observe that kz increases slightly with increasing thickness, with values between 0.059-0.07 W/mK (see Fig. 3c). However, in the graphene/PdSe2 heterostructures, we found a strong thickness dependence of kz. Specifically, kz increased by a factor of six with increasing the thickness of the top PdSe2 layers from one to seven layers. To confirm the robustness of our approach for measuring the intrinsic kz of ultra-thin films, we performed FDTR measurements on Bi2Se3 films of different thicknesses on both STO and sapphire substrates. We found that on both substrates the kz of the Bi2Se3 films increases by a factor of five with increasing thickness from 1 to 20 QL. The excellent agreement in kz values is shown in Figure 3c. The origin of the weak thickness dependence of kz in the SLs can be understood considering both interface-phonon scattering and size effects. By increasing the film thickness, and thus the period of the SLs, thermal phonons (especially short-wavelength ones) are scattered by multiple Bi2Se3-MoSe2 interfaces, reducing their contribution to cross-plane thermal transport. In contrast, the increase in the volume fraction of the constituents of the SLs with increasing thickness allows more long-wavelength phonons to propagate and contribute to kz until they are scattered at the sample boundaries. 
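The two-step fitting logic can be illustrated with a simplified one-dimensional version of the thermal model. The real analysis uses a three-dimensional multilayer model with Gaussian pump and probe spots; the sketch below, with purely illustrative parameter values (not our measured ones), only reproduces the qualitative behavior exploited in the fit: the low-frequency phase is substrate-dominated (near −45°), while the film and interface resistances pull the phase toward 0° at high frequency:

```python
import numpy as np

def fdtr_phase(freqs, kz, C, t, R1, R2, k_sub, C_sub):
    """Phase (deg) of the surface thermal impedance of a
    transducer/film/substrate stack, 1-D thermal-quadrupole method."""
    w = 2.0 * np.pi * np.asarray(freqs, dtype=float)
    q = np.sqrt(1j * w * C / kz)                   # thermal wavevector in film
    A, B = np.cosh(q * t), np.sinh(q * t) / (kz * q)
    Cq, D = kz * q * np.sinh(q * t), np.cosh(q * t)
    Z_sub = 1.0 / np.sqrt(1j * w * C_sub * k_sub)  # semi-infinite substrate
    Z_load = R2 + Z_sub                            # film/substrate interface
    Z = R1 + (A * Z_load + B) / (Cq * Z_load + D)  # add transducer interface
    return np.degrees(np.angle(Z))

# Illustrative SI-unit parameters: 5 nm film with kz ~ 0.06 W/mK
phase = fdtr_phase([1.0, 1e7], kz=0.06, C=1.5e6, t=5e-9,
                   R1=5e-8, R2=5e-8, k_sub=10.0, C_sub=2.7e6)
```

Evaluating `phase` at low and high frequency shows why kz can be fixed first from the low-frequency data and R1, R2 extracted afterwards from the high-frequency range.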
These opposing effects result in the suppressed thickness dependence of kz presented in Fig. 3c. However, the absence of interfaces in pure Bi2Se3, or their limited number in graphene/PdSe2, allows the majority of the thermally excited phonons to contribute to cross-plane thermal transport, i.e., kz is limited mainly by finite-size effects. This is reflected in the different rates of increase of kz observed in Fig. 3c. In particular, for the same thickness range, in Bi2Se3/MoSe2 SLs we found only a 28% increase in kz with increasing thickness, while in Bi2Se3 and graphene/PdSe2 films the increases are about 42% and 68%, respectively. We note that to study coherent and incoherent effects in SLs, the volume fraction of the constituents and the total thickness of the films should remain constant while the thickness of each layer in a period is adjusted to vary the interface density. 16 To quantify the impact of cross-plane ballistic phonon transport on the total thermal resistance in our epitaxial films, we estimate the total thermal resistance per unit area, Rtot, which can be written as the sum of the combined interface thermal resistance, Rint = R1 + R2, and the volumetric cross-plane thermal resistance, Rfilm = t/kz. 34,44 From these calculations, we found that in the SLs Rtot is linearly proportional to the total thickness (or number of periods, n), such that Rtot, n=3 > Rtot, n=2.5 > Rtot, n=2 > Rtot, n=1.5 > Rtot, n=1 (Fig. 3d). Furthermore, we observe that Rint increases with increasing period of the SLs (Fig. S9), in agreement with previous thermal resistance measurements in short-period SLs. 8,40 We note that the large lattice mismatch between Bi2Se3 and MoSe2 (~20%) most likely enhances the phonon interface scattering and further contributes to the increased values of Rtot. This is in agreement with previous studies showing that lattice-mismatched interfaces exhibit reduced interface thermal conductance and phonon transmission due to the increased lattice disorder. 
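The decomposition Rtot = Rint + t/kz used above is simple enough to sketch directly. The interface-resistance value in the example is illustrative, not one of our measured ones; only the thickness and kz are taken from the ranges quoted in the text:

```python
def total_resistance(t_nm, kz, R_int):
    """Total area-normalized thermal resistance, in m^2 K/GW.

    t_nm  : film thickness in nm
    kz    : cross-plane thermal conductivity in W/(m K)
    R_int : combined interface resistance R1 + R2 in m^2 K/W
    """
    R_film = (t_nm * 1e-9) / kz       # volumetric film resistance, m^2 K/W
    return (R_film + R_int) * 1e9     # express per GW instead of per W

# Illustrative: a 7.2 nm SL with kz = 0.07 W/mK and a hypothetical
# R_int = 6e-8 m^2 K/W
R_tot = total_resistance(7.2, 0.07, 6e-8)   # ≈ 163 m^2 K/GW
```

With plausible inputs the result lands inside the 70-202 m2K/GW range reported in the abstract, showing that for such thin films the interface term is comparable to the volumetric term.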
45,46 In graphene/PdSe2 heterostructures, however, Rtot remains constant as the thickness of PdSe2 increases from 1 to 7 layers. In fact, the different slopes in Fig. 3d suggest different thermal transport regimes. The similar values of Rtot in graphene/PdSe2 indicate strong ballistic thermal transport, as expected in a thin film with no internal scattering. 47 Similarly, cross-plane ballistic thermal transport has been found in graphene, 48 MoS2 44 and PtSe2 34 thin films. In contrast, the increase of Rtot with increasing thickness observed in the SLs indicates additional phonon scattering (quasi-ballistic regime). We attribute this result to the scattering of high-frequency phonons at multiple Bi2Se3/MoSe2 interfaces, which largely disrupts ballistic phonon transport. Therefore, interface roughness and lattice mismatch are effective in destroying the coherence of high-frequency phonons. In Fig. S12 in the SI, we also plot the volumetric cross-plane thermal resistance, Rfilm, as a function of thickness t, which further supports the previous discussion. Specifically, we observe that Rfilm increases by more than a factor of two as the thickness increases from 1.4 to 7.2 nm, which further suggests quasi-ballistic phonon transport and diffusive phonon scattering at interfaces. To gain further insight into the influence of phonon-interface scattering on cross-plane thermal conductance, we constructed a tight-binding model. 49 Finally, to consider the effect of thermal contact resistance to the electrodes on the overall thermal conductance, we considered two scenarios, with weak and strong coupling to the electrodes. In summary, we have successfully developed ultra-thin epitaxial vdW films that can act as highly insulating thermal metamaterials, comprising dissimilar atomically thin layers of 2D semiconductors (MoSe2 and PdSe2), the 3D topological insulator Bi2Se3, and monolayer graphene.
Our study reveals that short-period crystalline Bi2Se3/MoSe2 superlattices can be used to achieve a superior thermal resistance of up to 202 m2K/GW and an ultralow effective cross-plane thermal conductivity at room temperature, down to 0.059 W/mK.

Methods

MBE growth and in-situ characterization (XPS, STM)
The experiments were carried out in an ultrahigh vacuum MBE system equipped with RHEED, an X-ray photoelectron spectrometer and a scanning tunneling microscope. High-purity Mo, Bi, Pd and Se sources were used (growth details are given below). After PdSe2 growth, the samples were transferred to the STM chamber without breaking the vacuum for in-situ STM characterization. STM images were obtained at room temperature using a Pt/Ir tip. The scanning conditions were V = 0.4 V and I = 400 pA.

Raman spectroscopy
The Raman spectra were recorded using a customized setup based on a Monovista Raman spectrometer manufactured by Princeton and assembled by S&I GmbH. It was used in single-grating mode (2400 lines) with a spectral resolution better than 0.4 cm-1. The laser line was rejected using a Bragg filter around ±5 cm-1. The samples were placed on an automatic xyz stage. A green diode laser (λ = 532 nm) was then focused on the sample using a 100× microscope objective. The power of the laser was kept as low as possible (<100 μW) to avoid any possible self-heating effects.

Frequency domain thermoreflectance

Computational Method
To model thermal conductance due to phonons, we first construct a tight-binding dynamical matrix of the MoSe2 and Bi2Se3 lattices using the parameters shown in Table S1 of the SI. The choice of these parameters is informed by comparing our TB Debye frequency with the Debye frequency computed using density functional theory. 50,51 We then construct the dynamical matrix of junctions including the layered materials between electrodes. Following the method described in a previous report, 49 we then calculate the phonon transmission through the junctions.

Data availability
The data that support the findings of this study are available from the corresponding author upon request.
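As a minimal illustration of this Green's-function approach (not the paper's actual model, which uses the full three-dimensional dynamical matrices of MoSe2 and Bi2Se3 with fitted force constants), the sketch below evaluates the Caroli transmission T = Tr[Gamma_L G Gamma_R G†] for a one-dimensional harmonic chain between two semi-infinite leads. All masses and force constants are illustrative.

```python
import numpy as np

def lead_surface_g(w2, kappa=1.0, eta=1e-6):
    """Retarded surface Green's function of a semi-infinite 1D chain.

    Solves kappa^2 g^2 - (w^2 - 2 kappa) g + 1 = 0 and picks the
    decaying (retarded) root, i.e. the root of smaller modulus.
    """
    c = w2 + 1j * eta - 2.0 * kappa
    r = np.sqrt(c * c - 4.0 * kappa**2)
    g1, g2 = (c - r) / (2 * kappa**2), (c + r) / (2 * kappa**2)
    return g1 if abs(g1) < abs(g2) else g2

def transmission(w, masses, kappa=1.0):
    """Caroli formula T = Tr[Gamma_L G Gamma_R G^dagger] for a device chain."""
    n = len(masses)
    m = np.asarray(masses, dtype=float)
    # Mass-weighted dynamical matrix of the device region.
    D = np.zeros((n, n), dtype=complex)
    for i in range(n):
        D[i, i] = 2.0 * kappa / m[i]
        if i + 1 < n:
            D[i, i + 1] = D[i + 1, i] = -kappa / np.sqrt(m[i] * m[i + 1])
    g = lead_surface_g(w * w, kappa)
    sigL = np.zeros((n, n), dtype=complex); sigL[0, 0] = kappa**2 * g
    sigR = np.zeros((n, n), dtype=complex); sigR[-1, -1] = kappa**2 * g
    G = np.linalg.inv((w * w) * np.eye(n) - D - sigL - sigR)
    gamL = 1j * (sigL - sigL.conj().T)
    gamR = 1j * (sigR - sigR.conj().T)
    return float(np.real(np.trace(gamL @ G @ gamR @ G.conj().T)))

# A uniform chain transmits perfectly inside the band (0 < w < 2*sqrt(kappa));
# a heavy "impurity" layer in the device region scatters phonons and lowers T.
t_uniform = transmission(1.0, [1.0] * 5)
t_heavy = transmission(1.0, [1.0, 1.0, 4.0, 1.0, 1.0])
```

The thermal conductance then follows by weighting T(w) with the temperature derivative of the Bose-Einstein distribution in a Landauer-type integral, as in the main-text method.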
FDTR sensitivity analysis and experimental data
For the sensitivity analysis, we used a polynomial fit of the experimental thickness dependence of the cross-plane thermal conductivity (kz). This fit provides a continuous function with which to plot the sensitivity as a function of excitation frequency (20 kHz-45 MHz) and sample thickness. From Fig. S4a,b and Fig. S5a,b we find that the sensitivity of the phase signal to G12 = 1/R1 and G23 = 1/R2 is relatively low and appears mainly at high frequencies. In contrast, the sensitivity of the phase signal to kz is high over the whole frequency range (Fig. S4d and Fig. S5d). In our measurements, the error bars of kz were estimated from the standard deviation of several measurements, including the numerical errors from the fits (see Fig. S6-S8).

Tight-binding phonon calculations

Fig. 1. Structural and chemical characterization of epitaxial vdW films. Top and side views of (a) Bi2Se3, (b) MoSe2 and (c) PdSe2 crystal structures. The purple, blue and orange spheres represent Bi, Mo and Pd atoms, respectively, while Se atoms are shown with grey spheres. 1 QL Bi2Se3 consists of five atoms per unit cell (Se-Bi-Se-Bi-Se), while each monolayer of MoSe2 consists of three atomic sublayers, in which Mo atoms are sandwiched between Se atoms. In PdSe2, each Pd atom is connected with four Se atoms, and each Se atom is bonded with two Pd atoms and another Se atom. (d) Cross-sectional STEM images of the as-synthesized vdW Bi2Se3/MoSe2 SLs with periods n = 3 and n = 1 (inset figure) on STO substrates. After the growth, a 4 nm thick Se capping layer was deposited in situ to protect the SL from oxidation. RHEED patterns of 1 QL Bi2Se3 and of a Bi2Se3/MoSe2 SL with period n = 1 along the (e), (g) [11-20] and (f), (h) [11-10] azimuths. (i) STM image of the as-grown monolayer PdSe2 on graphene and (j) the corresponding fast Fourier transform (FFT) image of the whole region. (k-n) XPS data of a Bi2Se3/MoSe2 SL grown on an STO substrate. Fig. 2.
Low-frequency Raman spectroscopy in epitaxial vdW films. Raman scattering spectra in (a) Bi2Se3 films of different thickness and (b) Bi2Se3/MoSe2 SLs. (c) Ratios of the Raman intensities for the out-of-plane modes in the case of Bi2Se3 films (black squares) and Bi2Se3/MoSe2 SLs (red circles). (d) Low-frequency Raman scattering spectra in graphene/PdSe2 heterostructures of different thicknesses. The red-shaded area shows the frequency region where the Raman-inactive modes are detected. (e) Atomic displacements (grey arrows) of all the Raman modes detected in Bi2Se3 films, Bi2Se3/MoSe2 SLs and graphene/PdSe2 heterostructures. The purple, blue and orange spheres represent Bi, Mo and Pd atoms, respectively, while Se atoms are shown with grey spheres.

Notably, in ultra-thin graphene/PdSe2 stacks, Rfilm saturates at a finite value of about 54 m2K/GW, i.e., ballistic thermal transport. Similarly, Sood et al., using density functional theory (DFT) calculations, estimated a constant cross-plane ballistic thermal resistance of ~10 m2K/GW in MoS2 films in the limit of 2-3 monolayers. 44 The significant contributions of both the Rfilm and Rint components to Rtot in all the epitaxial films suggest that interfacial effects do not entirely govern cross-plane thermal transport.

Fig. 3. Thermal conductivity and interfacial heat transport measurements. (a) Schematic illustrations of the FDTR technique and the multilayer system of the SLs. (b) Typical FDTR data measured in bilayer MoSe2 and in Bi2Se3/MoSe2 SLs with periods n = 2, 2.5, and 3, and the corresponding best model fits over the entire frequency range (20 kHz-45 MHz). (c) Cross-plane thermal conductivity versus film thickness measured in bilayer MoSe2 (green rhombus), Bi2Se3/MoSe2 SLs (black circles), graphene/PdSe2 heterostructures (red squares) and Bi2Se3 films on STO (purple circles) and sapphire (pink squares) substrates. The blue open triangles show previously reported kz values measured in thicker Bi2Se3 films.
20 By extrapolating the measured kz trends, we find the minimum film thickness from which phonon-interface scattering starts to have a strong impact on cross-plane heat dissipation, i.e., kz(SLs) < kz(Bi2Se3) < kz(MoSe2). (d) Total thermal resistance, Rtot = Rint + Rfilm, of Bi2Se3/MoSe2 SLs (black solid circles) and graphene/PdSe2 heterostructures (red solid squares) versus film thickness. The uncertainty of the estimated Rtot was calculated on the basis of error propagation for the input parameters. Total thermal resistance measurements in Au/graphene/SiO2 48 (purple open triangles) and Al/graphene-MoS2 (SLs)/SiO2 8 (green open triangles) are included for comparison. The colored lines in (c) and (d) are guides for the eye.

Using this tight-binding model (see details in Methods), we calculated the transmission coefficients of phonons of different frequencies traversing MoSe2, Bi2Se3 and Bi2Se3/MoSe2 heterostructures from one metallic electrode to the other (see Fig. 4a). The DFT calculations indicate that the Debye frequencies of MoSe2 and Bi2Se3 are 24 meV and 48 meV, respectively. 50,51 We adjusted the parameters of our tight-binding model to obtain similar Debye frequencies for both materials (see Fig. S13 in the SI). Furthermore, we took into account the lattice mismatch between the heterostructure layers by choosing a different coupling strength and different coupling configurations between the MoSe2 and Bi2Se3 layers (see details in Figure S13 and Methods). Figures 4b,c,d show the calculated kz of Bi2Se3, MoSe2 and Bi2Se3/MoSe2 heterostructures of different thickness as a function of temperature. The kz increases with increasing temperature and starts to saturate at higher temperatures. Figure 4e shows the room-temperature kz as a function of film thickness, which is in good qualitative agreement with the experimental data when we consider a strong coupling between the films and the electrodes (see blue-shaded area).
Specifically, the kz trends follow a similar order to that in the experiment, and, unlike in Bi2Se3, the kz in the SLs varies slowly with thickness. Interestingly, when we consider a weak coupling to the electrodes, the calculated absolute kz values are in much better quantitative agreement with the experiments (see red-shaded area). This suggests that the total interface thermal resistance, Rint, is increased when the films are weakly coupled to the top and bottom electrodes, confirming the relatively high Rint values obtained from the FDTR experiments (see Fig. S9). These results highlight the role of thermal contact resistance in the cross-plane thermal conductivity of ultra-thin vdW layered materials.

Fig. 4. Tight-binding phonon calculations. (a) Schematic structure of a Bi2Se3/MoSe2 vdW layered structure between electrodes. The calculated cross-plane thermal conductivity of (b) Bi2Se3, (c) bilayer MoSe2 and (d) Bi2Se3/MoSe2 heterostructures of different thickness as a function of temperature (0-400 K), considering weak and strong coupling of the vdW films with the electrodes. (e) The calculated room-temperature cross-plane thermal conductivity as a function of thickness for MoSe2, Bi2Se3 and Bi2Se3/MoSe2 heterostructures with weak (solid star symbols) and strong (open star symbols) coupling with the electrodes. The experimental kz values for MoSe2, Bi2Se3 and Bi2Se3/MoSe2 are shown with green, purple and black solid circles in (e) for direct comparison with the calculations. The difference between the calculated values indicated by the blue and red shaded areas in (e) shows the importance of quantifying interfacial thermal transport in vdW films bonded to a substrate.
We attribute this result to the interface roughness and large lattice mismatch between the constituent layers of the superlattices, which boost phonon-interface scattering and suppress cross-plane heat dissipation. Conversely, graphene/PdSe2 heterostructures exhibit a strongly thickness-dependent effective cross-plane thermal conductivity and a constant total thermal resistance of about 70 m2K/GW due to ballistic thermal transport. Given the sub-3 nm thickness of these heterostructures, their effective thermal conductivities at room temperature are estimated to lie between 0.012 and 0.06 W/mK. Our phonon transport calculations align well with the experimental results and further shed light on the impact of the interfacial thermal resistances between ultra-thin heterointerfaces and the top and bottom metallic contacts on cross-plane heat dissipation. Importantly, this work has yielded significant advancements in the epitaxial growth of high-quality heterogeneous interfaces over large areas, as well as in the quantitative understanding of interfacial thermal transport across atomically thin vdW films on various substrates. The implications of these findings are extensive for the design of heat-sensitive electronic components and 2D electronic devices, such as 2D transistors and microchips, that can operate without thermal limitations such as overheating. The ability to obstruct heat dissipation in the vertical direction while maintaining in-plane crystallinity in wafer-scale engineered vdW stacks not only facilitates thermal management applications but also enhances their suitability as active materials in thermoelectric devices, thereby increasing their thermoelectric efficiency. Combined with bandgap engineering strategies, the phonon engineering approach followed here could provide a promising route to realizing a wide variety of functional semiconductor heterojunctions and superlattices for nanoscale electronic and thermoelectric devices.
Mo and Pd were evaporated from an e-gun evaporator, whereas Se and Bi were evaporated from effusion cells. The Bi2Se3 and MoSe2 films were grown on STO (111) and sapphire substrates at ~300 °C under Se-rich conditions with a Se/Mo (Bi, Pd) flux ratio of about 20. The PdSe2 films were grown on 4H-SiC (0001)/monolayer graphene substrates at ~240 °C under Se-rich conditions. The Mo, Bi and Pd growth rates were kept constant at ~0.1 Å/s. In-situ XPS measurements were performed with excitation by Mg Kα radiation (1253.6 eV) using a SPECS XR50 source.

FDTR is an ultrafast laser-based pump-probe technique which can measure the thermal properties of bulk, thin-film and nanostructured materials. The experimental setup is based on two lasers operating at 488 nm (pump) and 532 nm (probe). The pump beam is modulated over a wide frequency range (20 kHz-45 MHz), generating a periodic heat flux with a Gaussian spatial distribution on the sample surface. The reflectivity of the sample, which is probed at the same position by the probe laser, changes as a function of temperature and displays a phase lag with respect to the pump signal. The phase response of the reflected probe beam to the thermal wave was recorded using a lock-in amplifier, with the pump signal used as a reference. We also used a 50× objective to repeat the FDTR measurements and obtained consistent results. The measurements were performed under ambient conditions at room temperature (~22 °C). Before the FDTR measurements, 100 nm Au films were deposited on top of the MBE-grown samples and used as transducers.
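The two-stage fitting procedure described earlier (kz from the low-frequency window, then the interface resistances from the high-frequency window with kz fixed) can be sketched with a deliberately simplified, hypothetical phase model. The model function, its coefficients and the parameter values below are illustrative stand-ins for the full layered heat-diffusion model used in the actual analysis; only the two-stage fitting machinery is the point.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical toy phase model (NOT the paper's heat-diffusion model):
# a sqrt(f) term controlled by kz that dominates at low frequency, plus
# a linear-in-f term controlled by an interface resistance R.
def phase_model(f, kz, R):
    return -(0.01 * np.sqrt(f)) / kz - 5e-9 * R * f

rng = np.random.default_rng(0)
f = np.logspace(np.log10(20e3), np.log10(45e6), 200)  # 20 kHz - 45 MHz
kz_true, R_true = 0.065, 120.0                        # illustrative values
phase = phase_model(f, kz_true, R_true) + rng.normal(0, 0.05, f.size)

# Stage 1: fit kz in the low-frequency window, R held at a rough guess.
low = f < 1e6
(kz_fit,), _ = curve_fit(lambda ff, kz: phase_model(ff, kz, 100.0),
                         f[low], phase[low], p0=[0.05])

# Stage 2: fix kz at the stage-1 value and fit R at high frequency.
high = f >= 1e6
(R_fit,), _ = curve_fit(lambda ff, R: phase_model(ff, kz_fit, R),
                        f[high], phase[high], p0=[100.0])
```

Because the low-frequency window is insensitive to R in this toy, the stage-1 kz estimate is barely biased by the wrong fixed R, which is the rationale for the staged procedure.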
The phonon transmission T(ω) can then be calculated from the relation T(ω) = Tr(Γ_L(ω) G^r(ω) Γ_R(ω) G^r†(ω)), where Γ_L,R(ω) = i(Σ_L,R(ω) − Σ_L,R†(ω)) describes the level broadening due to the coupling to the left (L) and right (R) electrodes, Σ_L,R(ω) are the retarded self-energies associated with this coupling, and G^r = (ω²I − D − Σ_L − Σ_R)^(−1) is the retarded Green's function, where D and I are the dynamical and unit matrices, respectively. The thermal conductance then follows from a Landauer-type integral over T(ω), in which f(ω, T) is the Bose-Einstein distribution function, ℏ is the reduced Planck constant and k_B is the Boltzmann constant.

Fig. S2. XPS spectra of a Bi2Se3/MoSe2 heterostructure grown on an STO substrate, showing that the Bi 4f(7/2) peak position remains unaffected after MoSe2 film growth.

Fig. S3. (a) XPS data showing the Pd 3d(5/2) (336.44 eV) and Pd 3d(3/2) (341.68 eV) core levels and Se 3d electrons. The Se 3d(5/2) (blue) and Se 3d(3/2) (yellow) peaks overlap and are shown as one peak (black). No reaction with the substrate is observed, and the positions and line shapes of the Pd 3d and Se 3d peaks indicate Pd-Se bonding.

Fig. S4. Calculated phase sensitivity (-Vin/Vout) to different parameters, (a) G12 = 1/R1, (b) G23 = 1/R2, (c) Cv, (d) kz, and (e) anisotropy, as a function of thickness and modulation frequency for the case of Au/SLs/STO stacks.

Fig. S5. Calculated phase sensitivity (-Vin/Vout) to different parameters, (a) G12, (b) G23, (c) Cv, (d) kz, and (e) anisotropy, as a function of thickness and modulation frequency for the case of Au/PdSe2/graphene/SiC stacks.

Fig. S6. FDTR data sets measured in graphene/PdSe2 films of different thickness grown on silicon carbide and the corresponding best model fits over the whole frequency range (20 kHz-45 MHz).

Fig. S7. FDTR data sets measured in 1, 3, 7, 10 and 20 QL Bi2Se3 films on sapphire substrates and the corresponding best model fits over the whole frequency range (20 kHz-45 MHz).

Fig. S8.
FDTR data measured in 1 QL Bi2Se3 (red circles) and in layered Bi2Se3/MoSe2 SLs with periods n = 1, 1.5, and the corresponding best model fits over the whole frequency range (20 kHz-45 MHz).

Fig. S9. Total interface thermal resistance values, Rint = R1 + R2, extracted from the FDTR measurements taking into account the multilayer geometry of the Bi2Se3/MoSe2 SLs and graphene/PdSe2 heterostructures. The error bars were estimated from the standard deviation of several measurements, including the numerical errors from the fits. For comparison, we plot previous interface thermal resistance values measured at 2D/2D and 3D/2D interfaces.

Fig. S10. Typical examples of measured (a) pump and (b) probe spot sizes in Au/Bi2Se3/MoSe2 (SLs)/STO stacks. The spot size of each measurement was determined using the knife-edge method. The edge of the Au transducer layer was used as a sharp edge to measure the intensity of the reflected light as a function of stage position. The beam intensity as a function of translation distance was fitted to an error-function curve 8 and the 1/e2 radius of this curve was taken as the laser spot radius.

Fig. S11. Peak position of all the Raman-active modes versus thickness in (a) Bi2Se3 films and (b) Bi2Se3/MoSe2 SLs.

Fig. S12. The volumetric cross-plane thermal resistance, Rfilm = t/kz, as a function of film thickness t for the Bi2Se3/MoSe2 SLs (black solid circles) and graphene/PdSe2 heterostructures (red solid squares).

Fig. S13. Tight-binding phonon band structure. (a) The schematic structure of the layered vdW structures; phonon band structure of (b) MoSe2, (c) Bi2Se3 and (d) the Bi2Se3/MoSe2 heterostructure.

Fig. S14. Calculated cross-plane thermal conductivity of MoSe2, Bi2Se3 and Bi2Se3/MoSe2 heterostructures considering a weak coupling with the electrodes. (a) The schematic structure of the layered vdW structures between electrodes.
The thermal conductivity as a function of temperature of (b) MoSe2, (c) Bi2Se3 and (d) Bi2Se3/MoSe2 heterostructures. (e) The thermal conductivity as a function of thickness for MoSe2, Bi2Se3 and Bi2Se3/MoSe2 heterostructures at room temperature.

Fig. S15. Calculated cross-plane thermal conductivity of MoSe2 considering a strong coupling with the electrodes. (a) The schematic structure of MoSe2 between electrodes, (b) phonon transmission coefficient as a function of phonon energy, (c) thermal conductivity as a function of temperature and (d) thermal conductivity as a function of thickness at room temperature.

Fig. S16. Calculated cross-plane thermal conductivity of Bi2Se3 considering a strong coupling with the electrodes. (a) The schematic structure of Bi2Se3 between electrodes, (b) phonon transmission coefficient as a function of phonon energy, (c) thermal conductivity as a function of temperature and (d) thermal conductivity as a function of thickness at room temperature.

Fig. S17. Calculated cross-plane thermal conductivity of Bi2Se3/MoSe2 heterostructures considering a strong coupling with the electrodes. (a) The schematic structure of Bi2Se3/MoSe2 between electrodes, (b) phonon transmission coefficient as a function of phonon energy, (c) thermal conductivity as a function of temperature and (d) thermal conductivity as a function of thickness at room temperature.

Therefore, despite the influence of interface phonon scattering on kz, the apparent increase of kz with increasing thickness in the SLs indicates that in this thickness range finite size effects are still dominant. This is in agreement with previous studies showing that the transmission of phonons across short-period AlN/GaN 40 and SiGe 41,42 SLs depends strongly on the phonon wavelength, suggesting that long-wavelength phonons are the dominant carriers of heat.
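The knife-edge spot-size extraction described in the caption of Fig. S10 amounts to fitting an error function to the measured edge scan. A minimal sketch with synthetic data (the beam radius, scan range and noise level are illustrative, not the measured values) is:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Power past a knife edge for a Gaussian beam of 1/e^2 radius w:
# P(x) = P0/2 * (1 + erf(sqrt(2) * (x - x0) / w))
def knife_edge(x, P0, x0, w):
    return 0.5 * P0 * (1.0 + erf(np.sqrt(2.0) * (x - x0) / w))

rng = np.random.default_rng(1)
x = np.linspace(-10.0, 10.0, 101)   # stage position (um)
w_true = 3.0                        # illustrative 1/e^2 radius (um)
data = knife_edge(x, 1.0, 0.0, w_true) + rng.normal(0, 0.01, x.size)

popt, _ = curve_fit(knife_edge, x, data, p0=[1.0, 0.5, 2.0])
w_fit = popt[2]                     # recovered laser spot radius
```

The fitted w is then used directly as the pump or probe spot radius in the FDTR thermal model.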
Molecular dynamics simulations have also shown that the kz of SLs with lattice-mismatched interfaces increases monotonically with period length. 43 Therefore, in our SLs, the apparent linear scaling of kz with period is not sufficient to conclude whether phonons travel coherently across the film thickness. For instance, previous calculations showed that, despite the linear increase of kz with thickness, lattice mismatch destroys the Bragg reflection conditions and phonons are diffusely scattered at interfaces, thus losing their coherency. 43 In Fig. S9, we show all our calculated Rtot values and compare them with previous Rtot measurements on different 2D materials and SLs, where similar interfacial contributions from the bottom (2D material/substrate) and top (metal/2D material) interfaces were considered. 8,48

Table S1. The parameters used to construct the tight-binding model, expressed in units of the reference energy t.

Force constant:  k1   k2   k3    k4    k5
Value:           t    t    t/5   t/6   t/2

Competing interests
The authors declare no competing interests.

Additional Information

References
1. Jin, G. et al. Heteroepitaxial van der Waals semiconductor superlattices. Nat. Nanotechnol. 16, 1092-1098 (2021).
2. Zhao, B. et al. High-order superlattices by rolling up van der Waals heterostructures. Nature 591, 385-390 (2021).
3. Jiang, J. et al. Flexible ferroelectric element based on van der Waals heteroepitaxy. Sci. Adv. 3, e1700121 (2017).
4. Ryu, Y. K., Frisenda, R. & Castellanos-Gomez, A. Superlattices based on van der Waals 2D materials. Chem. Comm. 55, 11498-11510 (2019).
5. Merrill, D. R., Moore, D. B., Bauers, S. R., Falmbigl, M. & Johnson, D. C. Misfit Layer Compounds and Ferecrystals: Model Systems for Thermoelectric Nanocomposites. Mater. 8, 2000-2029 (2015).
6. Jood, P. et al. Microstructural Control and Thermoelectric Properties of Misfit Layered Sulfides (LaS)1+mTS2 (T = Cr, Nb): The Natural Superlattice Systems. Chem. Mater. 26, 2684-2692 (2014).
7. Guo, R., Jho, Y.-D. & Minnich, A. J. Coherent control of thermal phonon transport in van der Waals superlattices. Nanoscale 10, 14432-14440 (2018).
8. Sood, A. et al. Engineering Thermal Transport across Layered Graphene-MoS2 Superlattices. ACS Nano 15, 19503-19512 (2021).
9. Chen, G. & Neagu, M. Thermal conductivity and heat transfer in superlattices. Appl. Phys. Lett. 71, 2761-2763 (1997).
10. Lin, Z. et al. High-yield exfoliation of 2D semiconductor monolayers and reassembly of organic/inorganic artificial superlattices. Chem 7, 1887-1902 (2021).
11. Wang, C. et al. Monolayer atomic crystal molecular superlattices. Nature 555, 231-236 (2018).
12. Vaziri, S. et al. Ultrahigh thermal isolation across heterogeneously layered two-dimensional materials. Sci. Adv.
5, eaax1325 (2019).
13. Kim, S. E. et al. Extremely anisotropic van der Waals thermal conductors. Nature 597, 660-665, doi:10.1038/s41586-021-03867-8 (2021).
14. Lin, M. et al. Controlled Growth of Atomically Thin In2Se3 Flakes by van der Waals Epitaxy. J. Am. Chem. Soc. 135, 13274-13277 (2013).
15. Koma, A. Van der Waals epitaxy - a new epitaxial growth method for a highly lattice-mismatched system. Thin Solid Films 216, 72-76 (1992).
16. Ravichandran, J. et al. Crossover from incoherent to coherent phonon scattering in epitaxial oxide superlattices. Nat. Mater. 13, 168-172 (2014).
17. Nunn, W., Truttmann, T. K. & Jalan, B. A review of molecular-beam epitaxy of wide bandgap complex oxide semiconductors. J. Mater. Res. 36, 4846-4864 (2021).
18. Saleta Reig, D. et al. Unraveling Heat Transport and Dissipation in Suspended MoSe2 from Bulk to Monolayer. Adv. Mater. 34, 2108352 (2022).
19. Fournier, D., Marangolo, M., Eddrief, M., Kolesnikov, N. N. & Fretigny, C. Straightforward measurement of anisotropic thermal properties of a Bi2Se3 single crystal. J. Phys. Condens. Matter 30, 115701 (2018).
20. Paulatto, L. et al. Thermal conductivity of Bi2Se3 from bulk to thin films: Theory and experiment. Phys. Rev. B 101, 205419 (2020).
21. Chen, L. et al. In-Plane Anisotropic Thermal Conductivity of Low-Symmetry PdSe2. Sustainability 13, 4155 (2021).
22. Jena, T., Hossain, M. T. & Giri, P. K. Temperature-dependent Raman study and determination of anisotropy ratio and in-plane thermal conductivity of low-temperature CVD-grown PdSe2 using unpolarized laser excitation. J. Mater. Chem. C 9, 16693-16708 (2021).
23. Tsipas, P. et al. Observation of Surface Dirac Cone in High-Quality Ultrathin Epitaxial Bi2Se3 Topological Insulator on AlN(0001) Dielectric. ACS Nano 8, 6614-6619 (2014).
24. Tsipas, P. et al. Epitaxial ZrSe2/MoSe2 semiconductor v.d. Waals heterostructures on wide band gap AlN substrates. Microelectron. Eng. 147, 269-272 (2015).
25. Le, P. H., Wu, K. H., Luo, C. W. & Leu, J. Growth and characterization of topological insulator Bi2Se3 thin films on SrTiO3 using pulsed laser deposition. Thin Solid Films 534, 659-665 (2013).
26. Zhang, J. et al. Raman Spectroscopy of Few-Quintuple Layer Topological Insulator Bi2Se3 Nanoplatelets. Nano Lett. 11, 2407-2414 (2011).
27. Tonndorf, P. et al. Photoluminescence emission and Raman response of monolayer MoS2, MoSe2, and WSe2. Opt. Express 21, 4908-4916 (2013).
28. Varghese, S. et al. Fabrication and characterization of large-area suspended MoSe2 crystals down to the monolayer. J. Phys. Chem. Mater. 4, 046001 (2021).
29. Li, Z. et al. Phonon-assisted electronic states modulation of few-layer PdSe2 at terahertz frequencies. npj 2D Mater. Appl. 5, 87 (2021).
30. Wei, M. et al. Layer-dependent optical and dielectric properties of centimeter-scale PdSe2 films grown by chemical vapor deposition. npj 2D Mater. Appl. 6, 1 (2022).
31. Yu, J. et al. Direct Observation of the Linear Dichroism Transition in Two-Dimensional Palladium Diselenide. Nano Lett. 20, 1172-1182 (2020).
32. Yang, H. et al. Self-powered and high-performance all-fiber integrated photodetector based on graphene/palladium diselenide heterostructures. Opt. Express 29, 15631-15640 (2021).
33. Bauer, M., Gigler, A. M., Huber, A. J., Hillenbrand, R. & Stark, R. W. Temperature-depending Raman line-shift of silicon carbide. J. Raman Spectrosc. 40, 1867-1874 (2009).
34. El Sachat, A. et al. Effect of crystallinity and thickness on thermal transport in layered PtSe2. npj 2D Mater. Appl. 6, 32 (2022).
35. Xiao, P. et al. Anisotropic Thermal Conductivity of Crystalline Layered SnSe2. Nano Lett. 21, 9172-9179 (2021).
36. Schmidt, A. J., Cheaito, R. & Chiesa, M. A frequency-domain thermoreflectance method for the characterization of thermal properties. Rev. Sci. Instrum. 80, 094901 (2009).
37. Bismuth selenide (Bi2Se3) Debye temperature, heat capacity: Datasheet from Landolt-Börnstein - Group III Condensed Matter, Volume 41C: "Non-Tetrahedrally Bonded Elements and Binary Compounds I" in SpringerMaterials (https://doi.org/10.1007/10681727_948) (Springer-Verlag Berlin Heidelberg).
38. Kiwia, H. L. & Westrum, E. F. Low-temperature heat capacities of molybdenum diselenide and ditelluride. J. Chem. Thermodyn. 7, 683-691 (1975).
39. Schmidt, A. J., Chen, X. & Chen, G.
Pulse accumulation, radial heat conduction, and anisotropic thermal conductivity in pump-probe transient thermoreflectance. Rev. Sci. Instrum. 79, 114902 (2008). Heat-Transport Mechanisms in Superlattices. Y K Koh, Y Cao, D G Cahill, D Jena, Koh, Y. K., Cao, Y., Cahill, D. G. & Jena, D. Heat-Transport Mechanisms in Superlattices. . Adv. Funct. Mater. 19Adv. Funct. Mater. 19, 610-615 (2009). Experimental Investigation of Size Effects on the Thermal Conductivity of Silicon-Germanium Alloy Thin Films. R Cheaito, Phys. Rev. Lett. 109195901Cheaito, R. et al. Experimental Investigation of Size Effects on the Thermal Conductivity of Silicon-Germanium Alloy Thin Films. Phys. Rev. Lett. 109, 195901 (2012). Thermal interface conductance in Si/Ge superlattices by equilibrium molecular dynamics. Y Chalopin, K Esfarjani, A Henry, S Volz, G Chen, Phys. Rev. B. 85195302Chalopin, Y., Esfarjani, K., Henry, A., Volz, S. & Chen, G. Thermal interface conductance in Si/Ge superlattices by equilibrium molecular dynamics. Phys. Rev. B 85, 195302 (2012). Minimum superlattice thermal conductivity from molecular dynamics. Y Chen, D Li, J R Lukes, Z Ni, M Chen, Phys. Rev. B. 72174302Chen, Y., Li, D., Lukes, J. R., Ni, Z. & Chen, M. Minimum superlattice thermal conductivity from molecular dynamics. Phys. Rev. B 72, 174302 (2005). Quasi-Ballistic Thermal Transport Across MoS2 Thin Films. A Sood, Nano Lett. 19Sood, A. et al. Quasi-Ballistic Thermal Transport Across MoS2 Thin Films. Nano Lett. 19, 2434-2442 (2019). Effect of lattice mismatch on phonon transmission and interface thermal conductance across dissimilar material interfaces. X Li, R Yang, Phys. Rev. B. 8654305Li, X. & Yang, R. Effect of lattice mismatch on phonon transmission and interface thermal conductance across dissimilar material interfaces. Phys. Rev. B 86, 054305 (2012). Thermal characterization of Bi2Te3/Sb2Te3 superlattices. M N Touzelbaev, P Zhou, R Venkatasubramanian, K E Goodson, J. Appl. Phys. 90Touzelbaev, M. 
N., Zhou, P., Venkatasubramanian, R. & Goodson, K. E. Thermal characterization of Bi2Te3/Sb2Te3 superlattices. J. Appl. Phys. 90, 763-767 (2001). Electrical resistance of disordered one-dimensional lattices. R Landauer, Philos. Mag. (Abingdon). 21Landauer, R. Electrical resistance of disordered one-dimensional lattices. Philos. Mag. (Abingdon) 21, 863-867 (1970). Heat Conduction across Monolayer and Few-Layer Graphenes. Y K Koh, M.-H Bae, D G Cahill, E Pop, Nano Lett. 10Koh, Y. K., Bae, M.-H., Cahill, D. G. & Pop, E. Heat Conduction across Monolayer and Few-Layer Graphenes. Nano Lett. 10, 4363-4368 (2010). Theory of electron, phonon and spin transport in nanoscale quantum devices. H Sadeghi, Nanotechnology. 29373001Sadeghi, H. Theory of electron, phonon and spin transport in nanoscale quantum devices. Nanotechnology 29, 373001 (2018). Structural evolution and phase transition mechanism of MoSe2 under high pressure. Y Xiao, Sci. Rep. 1122090Xiao, Y. et al. Structural evolution and phase transition mechanism of MoSe2 under high pressure. Sci. Rep. 11, 22090 (2021). Phonon spectrum and bonding properties of Bi2Se3: Role of strong spin-orbit interaction. B.-T Wang, P Zhang, Appl. Phys. Lett. 10082109Wang, B.-T. & Zhang, P. Phonon spectrum and bonding properties of Bi2Se3: Role of strong spin-orbit interaction. Appl. Phys. Lett. 100, 082109 (2012). RHEED patterns and XPS data. RHEED patterns and XPS data RHEED patterns of all the Bi2Se3/MoSe2 films along the. Fig, S1, 10-10] and [11-20Fig. S1. RHEED patterns of all the Bi2Se3/MoSe2 films along the [10-10] and [11-20] The white arrows in (c), (d) and (e), (f) show Bi2Se3 and MoSe2 streaks, respectively. azimuths. The white arrows in (c), (d) and (e), (f) show Bi2Se3 and MoSe2 streaks, respectively. Temperature-Dependent Thermal Boundary Conductance of. E Yalon, Yalon, E. et al. Temperature-Dependent Thermal Boundary Conductance of Monolayer MoS2 by Raman Thermometry. ACS Appl. Mater. Interfaces. 
9Monolayer MoS2 by Raman Thermometry. ACS Appl. Mater. Interfaces 9, 43013- 43020 (2017). Interfacial Thermal Transport in Monolayer MoS2-and Graphene-Based Devices. P Yasaei, Adv. Mater. Interfaces. 41700334Yasaei, P. et al. Interfacial Thermal Transport in Monolayer MoS2-and Graphene- Based Devices. Adv. Mater. Interfaces 4, 1700334 (2017). Effect of crystallinity and thickness on thermal transport in layered PtSe2. A El Sachat, npj 2D Mater. Appl. 632El Sachat, A. et al. Effect of crystallinity and thickness on thermal transport in layered PtSe2. npj 2D Mater. Appl. 6, 32 (2022). Tuning Interfacial Thermal and Electrical Conductance across a. Y.-J Wu, Wu, Y.-J. et al. Tuning Interfacial Thermal and Electrical Conductance across a Metal/MoS2 Monolayer through N-Methyl-2-pyrrolidone Wet Cleaning. Adv. Mater. Interfaces. 72000364Metal/MoS2 Monolayer through N-Methyl-2-pyrrolidone Wet Cleaning. Adv. Mater. Interfaces 7, 2000364 (2020). Thermal Conductance of the 2D MoS2/h-BN and graphene/h-BN Interfaces. Y Liu, Sci. Rep. 743886Liu, Y. et al. Thermal Conductance of the 2D MoS2/h-BN and graphene/h-BN Interfaces. Sci. Rep. 7, 43886 (2017). Temperature and interlayer coupling induced thermal transport across graphene/2D-SiC van der Waals heterostructure. M S Islam, I Mia, A S M J Islam, C Stampfl, J Park, Sci. Rep. 12761Islam, M. S., Mia, I., Islam, A. S. M. J., Stampfl, C. & Park, J. Temperature and interlayer coupling induced thermal transport across graphene/2D-SiC van der Waals heterostructure. Sci. Rep. 12, 761 (2022). Ultrahigh thermal isolation across heterogeneously layered twodimensional materials. S Vaziri, Sci. Adv. 51325Vaziri, S. et al. Ultrahigh thermal isolation across heterogeneously layered two- dimensional materials. Sci. Adv. 5, eaax1325 (2019). Measurement of the refractive index of liquid using laser beam displacement. S Nemoto, 10.1364/AO.31.006690Appl. Opt. 31Nemoto, S. Measurement of the refractive index of liquid using laser beam displacement. 
Appl. Opt. 31, 6690-6694, doi:10.1364/AO.31.006690 (1992).
{'fraction_non_alphanumeric': 0.050780060542012215, 'fraction_numerical': 0.04087481416468144, 'mean_word_length': 4.6536708860759495, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 5, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 3, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'Artificially engineered 2D materials offer unique physical properties for thermal management, surpassing naturally occurring materials. Here, using van der Waals epitaxy, we demonstrate the ability to engineer extremely insulating ultra-thin thermal metamaterials based on crystalline lattice-mismatched Bi2Se3/MoSe2 superlattices and graphene/PdSe2 heterostructures with exceptional thermal resistances (70-202 m 2 K/GW) and ultralow cross-plane thermal conductivities (0.01-0.07 W/mK) at room temperature, comparable to those of amorphous', 'arxivid': '2303.05808', 'author': ['Emigdio Chavez-Angel \nCatalan Institute of Nanoscience and Nanotechnology (ICN2)\nCSIC and BIST\nCampus UAB08193Bellaterra, BarcelonaSpain\n', 'Polychronis Tsipas \nInstitute of Nanoscience and Nanotechnology\nNational Center for Scientific Research "Demokritos\n15341 Agia ParaskeviAthensGreece\n', 'Peng Xiao \nCatalan Institute of Nanoscience and Nanotechnology (ICN2)\nCSIC and BIST\nCampus UAB08193Bellaterra, BarcelonaSpain\n', 'Mohammad Taghi Ahmadi \nSchool of Engineering\nUniversity of Warwick\nCV4 7ALCoventryUnited Kingdom\n', 'Abdalghani Daaoub \nSchool of Engineering\nUniversity of Warwick\nCV4 7ALCoventryUnited Kingdom\n', 'Hatef Sadeghi \nSchool of Engineering\nUniversity of Warwick\nCV4 7ALCoventryUnited Kingdom\n', 'Clivia M Sotomayor Torres \nCatalan Institute of Nanoscience and Nanotechnology (ICN2)\nCSIC and BIST\nCampus UAB08193Bellaterra, BarcelonaSpain\n\nICREA\nPasseig Lluis Companys 23\n08010BarcelonaSpain\n', 'Athanasios Dimoulas \nInstitute of Nanoscience and Nanotechnology\nNational Center for Scientific Research "Demokritos\n15341 Agia ParaskeviAthensGreece\n', 'Alexandros El Sachat \nCatalan Institute of Nanoscience and Nanotechnology (ICN2)\nCSIC and BIST\nCampus UAB08193Bellaterra, BarcelonaSpain\n\nInstitute of Nanoscience and Nanotechnology\nNational Center for Scientific Research "Demokritos\n15341 Agia ParaskeviAthensGreece\n'], 'authoraffiliation': 
['Catalan Institute of Nanoscience and Nanotechnology (ICN2)\nCSIC and BIST\nCampus UAB08193Bellaterra, BarcelonaSpain', 'Institute of Nanoscience and Nanotechnology\nNational Center for Scientific Research "Demokritos\n15341 Agia ParaskeviAthensGreece', 'Catalan Institute of Nanoscience and Nanotechnology (ICN2)\nCSIC and BIST\nCampus UAB08193Bellaterra, BarcelonaSpain', 'School of Engineering\nUniversity of Warwick\nCV4 7ALCoventryUnited Kingdom', 'School of Engineering\nUniversity of Warwick\nCV4 7ALCoventryUnited Kingdom', 'School of Engineering\nUniversity of Warwick\nCV4 7ALCoventryUnited Kingdom', 'Catalan Institute of Nanoscience and Nanotechnology (ICN2)\nCSIC and BIST\nCampus UAB08193Bellaterra, BarcelonaSpain', 'ICREA\nPasseig Lluis Companys 23\n08010BarcelonaSpain', 'Institute of Nanoscience and Nanotechnology\nNational Center for Scientific Research "Demokritos\n15341 Agia ParaskeviAthensGreece', 'Catalan Institute of Nanoscience and Nanotechnology (ICN2)\nCSIC and BIST\nCampus UAB08193Bellaterra, BarcelonaSpain', 'Institute of Nanoscience and Nanotechnology\nNational Center for Scientific Research "Demokritos\n15341 Agia ParaskeviAthensGreece'], 'corpusid': 257482347, 'doi': None, 'github_urls': [], 'n_tokens_mistral': 17436, 'n_tokens_neox': 14599, 'n_words': 8124, 'pdfsha': '49163fe4f40219f7be6f56c97a504c3af96920bb', 'pdfurls': ['https://export.arxiv.org/pdf/2303.05808v1.pdf'], 'title': ['Engineering heat transport across epitaxial lattice- mismatched van der Waals heterointerfaces', 'Engineering heat transport across epitaxial lattice- mismatched van der Waals heterointerfaces'], 'venue': []}
arxiv
MULTIPLE SOLUTIONS FOR A WEIGHTED p-LAPLACIAN PROBLEM

Rohit Kumar, Abhishek Sarkar

arXiv:2207.04462v1 [math.AP], 10 Jul 2022

Abstract. We prove the existence of at least three solutions for a weighted p-Laplacian operator involving a Dirichlet boundary condition in a weighted Sobolev space. The main tool we use here is a three-solution theorem in reflexive Banach spaces due to G. Bonanno and B. Ricceri.

2010 Mathematics Subject Classification. 35B38, 35J62, 35J92.

1. Introduction

In this article we are interested in proving the existence of three solutions for a Dirichlet boundary value problem involving a weighted p-Laplacian operator. We consider the following problem:

−div(a(x)|∇u|^{p−2}∇u) + |u|^{p−2}u = λf(x,u) + μg(x,u) in Ω,   u = 0 on ∂Ω,   (1.1)

where Ω ⊂ R^N is a bounded domain, p > 1 and N ≥ 1. The restriction between p and N will be specified as we proceed. We assume that the weight a satisfies the following conditions:

a is positive a.e. in Ω;  a^{−1/(p−1)} ∈ L^1_loc(Ω);  a ∈ L^1_loc(Ω);  a^{−s} ∈ L^1(Ω) for some s > 0.   (1.2)

We look for solutions in the weighted Sobolev space W^{1,p}_0(a; Ω) associated with the weight a(x), which is defined in Section 2. The weighted operator was first introduced by Murthy-Stampacchia [9] for second-order linear PDEs. It was later generalized to higher-order linear PDEs and also to quasilinear elliptic PDEs. The interested reader may consult the book by Drábek et al. [6] and the research article [8], where boundary value problems for weighted p-Laplacian operators have been studied independently. Our aim is to show the existence of at least three solutions of problem (1.1) by using a three critical points theorem introduced by Ricceri, and also by Bonanno, in their series of articles. First we state the theorem proved by Ricceri [12].

Theorem 1.1.
Let X be a separable and reflexive real Banach space; I ⊂ R an interval; φ : X → R a sequentially weakly lower semicontinuous C^1 functional whose derivative admits a continuous inverse on X*; J : X → R a C^1 functional with compact derivative. Assume that

lim_{‖u‖→∞} (φ(u) + λJ(u)) = +∞  for all λ ∈ I,

and that there exists ρ ∈ R such that

sup_{λ∈I} inf_{u∈X} (φ(u) + λ(J(u) + ρ)) < inf_{u∈X} sup_{λ∈I} (φ(u) + λ(J(u) + ρ)).

Then there exist a non-empty open set Γ ⊂ I and a positive real number r such that, for each λ ∈ Γ, the equation

φ′(u) + λJ′(u) = 0

has at least three solutions in X whose norms are less than r.

We note that the first result of this kind appeared in the literature due to Ricceri [11], under the assumptions that the space is reflexive and separable. Later, Bonanno [3] gave equivalent conditions to Ricceri's theorem. Ricceri [10] then generalized his result to all reflexive Banach spaces (with some compensation). Here we state the equivalent theorem combining [3, 10].

Theorem 1.2. Let X be a reflexive Banach space; φ : X → R a continuously Gâteaux differentiable and sequentially weakly lower semicontinuous C^1 functional, bounded on each bounded subset of X, whose Gâteaux derivative admits a continuous inverse on X*; Φ : X → R a C^1 functional with compact Gâteaux derivative. Assume that

(i) lim_{‖u‖→∞} (φ(u) + λΦ(u)) = +∞ for all λ ≥ 0;
(ii) there exist r ∈ R and u_0, u_1 ∈ X such that φ(u_0) < r < φ(u_1);
(iii) inf_{u∈φ^{−1}((−∞,r])} Φ(u) > [(φ(u_1) − r)Φ(u_0) + (r − φ(u_0))Φ(u_1)] / (φ(u_1) − φ(u_0)).

Then there exist a non-empty open set Γ ⊂ [0, ∞) and a positive real number ρ with the following property: for each λ ∈ Γ and every C^1 functional J : X → R with compact Gâteaux derivative, there exists δ > 0 such that for each μ ∈ [0, δ], the equation

φ′(u) + λΦ′(u) + μJ′(u) = 0

has at least three solutions in X whose norms are less than ρ.
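As a toy illustration of the multiplicity phenomenon in Theorem 1.2 (our own sketch, not from the paper): take the trivially reflexive space X = R, φ(u) = u²/2 and Φ(u) = −cos u, so that Φ′(u) = sin u is bounded (hence trivially compact on R) and φ + λΦ is coercive. Critical points of φ + λΦ solve u + λ sin u = 0; for moderately large λ this equation picks up several solutions, in line with the at-least-three conclusion, while for small λ only u = 0 survives. We make no attempt to verify hypotheses (ii)-(iii) here — the snippet merely counts the critical points numerically.

```python
import math

def count_roots(f, a, b, n=20000):
    """Count roots of f on [a, b] by bracketing sign changes on a uniform grid."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    vals = [f(x) for x in xs]
    return sum(1 for v0, v1 in zip(vals, vals[1:])
               if v0 == 0 or (v0 < 0 < v1) or (v1 < 0 < v0))

# phi(u) = u^2/2 (coercive, derivative u has a continuous inverse),
# Phi(u) = -cos(u) (smooth; derivative sin(u) is bounded).
# Critical points of phi + lambda*Phi solve u + lambda*sin(u) = 0.
euler_lagrange = lambda lam: (lambda u: u + lam * math.sin(u))

n_small = count_roots(euler_lagrange(1.0), -10, 10)  # small lambda: only u = 0
n_large = count_roots(euler_lagrange(5.0), -10, 10)  # larger lambda: 5 critical points
```

Note that the theorem only guarantees a lower bound of three solutions; in this toy example the functional actually has five critical points for λ = 5.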
As applications of the aforementioned theorems we refer to [1, 5] for Dirichlet boundary value problems, and for Neumann boundary value problems to [2, 4] and the references therein. We follow a path similar to [5]. The rest of this paper is organized as follows. In Section 2, we briefly discuss weighted Sobolev spaces and state the main theorem. Section 3 deals with the proof of the main theorem, together with some necessary lemmas.

2. Preliminaries and Result

We briefly discuss weighted Sobolev spaces, following the approach of [6]. Given a satisfying (1.2), the weighted Sobolev space W^{1,p}(a; Ω) is defined to be the set of all real-valued measurable functions u for which

‖u‖ := ( ∫_Ω |u|^p dx + ∫_Ω a(x)|∇u|^p dx )^{1/p} < ∞.   (2.1)

Since a^{−1/(p−1)} ∈ L^1_loc(Ω) (see (1.2)), it follows that W^{1,p}(a; Ω) equipped with the norm ‖·‖ is a uniformly convex Banach space; thus, by the Milman-Pettis theorem, it is a reflexive Banach space. The assumption a ∈ L^1_loc(Ω) (see (1.2)) ensures C^∞_0(Ω) ⊂ W^{1,p}(a; Ω), which allows us to consider the closure of C^∞_0(Ω) with respect to the norm ‖·‖; we denote it by W^{1,p}_0(a; Ω). Moreover, with p_s := ps/(s + 1), the continuous embedding

W^{1,p}_0(a; Ω) ↪ W^{1,p_s}_0(Ω)   (2.2)

holds. Note that p > p_s. When p_s > N, the classical Sobolev embedding theorem gives the compact embedding

W^{1,p}_0(a; Ω) ↪ W^{1,p_s}_0(Ω) ↪↪ C^{0,α}(Ω̄)   (2.3)

for all 0 < α < 1 − (N/p_s). Hereafter it is always assumed that s > 0 (in (1.2)) is chosen such that p > p_s > N, i.e., s > N/(p − N). From the embedding (2.3), we have

k := sup_{u ∈ W^{1,p}_0(a;Ω)\{0}} max_{Ω̄}|u(x)| / ‖u‖ < ∞.   (2.4)

Remark 2.3. We can give an upper bound for the constant k above. Using the embedding (2.2) and [13], it follows that

k ≤ N^{−1/p_s} π^{−1/2} [Γ(1 + N/2)]^{1/N} ((p_s − 1)/(p_s − N))^{1−1/p_s} |Ω|^{1/N − 1/p_s}.

Definition 2.4. A weak solution of problem (1.1) is a function u ∈ W^{1,p}_0(a; Ω) which satisfies

∫_Ω a(x)|∇u|^{p−2}∇u·∇v dx + ∫_Ω |u|^{p−2}uv dx = λ ∫_Ω f(x, u)v dx + μ ∫_Ω g(x, u)v dx   (2.5)

for every v ∈ W^{1,p}_0(a; Ω).
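As a quick numerical sanity check of the restriction p > p_s > N (equivalently s > N/(p − N)) — assuming the standard choice p_s := ps/(s + 1) used for this kind of weighted embedding in Drábek-Kufner-Nicolosi — one can tabulate p_s for sample parameters. This snippet is our own illustration, not part of the paper.

```python
# Sanity check of the exponent p_s := p*s/(s+1) governing the embedding
# W^{1,p}_0(a; Omega) -> W^{1,p_s}_0(Omega) (assumed standard definition).

def p_s(p: float, s: float) -> float:
    """Effective Sobolev exponent of the weighted space, assuming p_s = ps/(s+1)."""
    return p * s / (s + 1)

N, p = 2, 3.0
s_threshold = N / (p - N)        # the paper's condition s > N/(p - N)

# s above the threshold: p > p_s > N holds, so the C^{0,alpha} embedding applies.
s_good = 3.0
ok = p > p_s(p, s_good) > N      # 3 > 2.25 > 2

# s below the threshold: p_s <= N and the compact embedding (2.3) fails.
s_bad = 1.5
fail = p_s(p, s_bad) <= N        # 1.8 <= 2
```

Both branches confirm that the admissible window for s shrinks as p approaches N, consistent with the requirement p > N implicit in s > N/(p − N).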
Fix x_0 ∈ Ω and choose r_1, r_2 with 0 < r_1 < r_2 such that B(x_0, r_1) ⊂ B(x_0, r_2) ⊂⊂ Ω, where B(x, r) denotes the ball in R^N centered at x and of radius r. Let

ξ = ξ(p, r_1, r_2) := 2kr_1 ‖a‖_{L^1(A_{r_1}^{r_2})}^{1/p} / (r_2² − r_1²),   (2.6)

and

η = η(p, N, r_1, r_2) := k [ 2^p r_2^p ‖a‖_{L^1(A_{r_1}^{r_2})} / (r_2² − r_1²)^p + w_N r_2^N / N + w_N r_1^N ]^{1/p},   (2.7)

where A_{r_1}^{r_2} := B(x_0, r_2) \ B(x_0, r_1) and w_N denotes the measure of the unit ball in R^N. We note that ξ and η are both finite since a ∈ L^1_loc(Ω). We also define

F(x, t) := ∫_0^t f(x, s) ds  and  G(x, t) := ∫_0^t g(x, s) ds.

Theorem 2.5 (Main Result). Assume that there exist three positive constants c, d and γ, with d^p ξ^p > c^p and γ < p, and functions h, w_τ ∈ L^1(Ω) such that

(H1) F(x, t) ≥ 0 for each (x, t) ∈ {Ω \ B(x_0, r_1)} × [0, d];
(H2) d^p η^p |Ω| sup_{(x,t)∈Ω×[−c,c]} F(x, t) < c^p ∫_{B(x_0,r_1)} F(x, d) dx;
(H3) F(x, t) < h(x)(1 + |t|^γ) for a.e. x ∈ Ω and |t| large;
(H4) F(x, 0) = 0 for a.e. x ∈ Ω;
(H5) g : Ω × R → R is a Carathéodory function such that for all τ > 0, sup_{|t|≤τ} |g(·, t)| ≤ w_τ(·) a.e. in Ω.

Then there exist an open interval Λ ⊂ [0, ∞) and a positive real number ρ with the following property: for each λ ∈ Λ there exists δ > 0 such that for each μ ∈ [0, δ], problem (1.1) has at least three weak solutions in W^{1,p}_0(a; Ω) whose norms are less than ρ.

3. Proof of Main Result

In this section we prove the main result and the necessary lemmas. We define the following functionals φ, Φ and Υ on W^{1,p}_0(a; Ω):

φ(u) := (1/p) ∫_Ω a(x)|∇u|^p dx + (1/p) ∫_Ω |u|^p dx = (1/p)‖u‖^p,
Φ(u) := −∫_Ω F(x, u) dx,   Υ(u) := −∫_Ω G(x, u) dx.

It is worth mentioning that, since p_s > N, and together with the assumptions on f and g, the functionals Φ and Υ are well defined. For any u, v ∈ W^{1,p}_0(a; Ω) we have

(φ′(u), v) = ∫_Ω a(x)|∇u|^{p−2}∇u·∇v dx + ∫_Ω |u|^{p−2}uv dx,
(Φ′(u), v) = −∫_Ω f(x, u)v dx,   (Υ′(u), v) = −∫_Ω g(x, u)v dx.

From (2.5) it is clear that u ∈ W^{1,p}_0(a; Ω) is a weak solution of problem (1.1) if for every v ∈ W^{1,p}_0(a; Ω) the identity

(φ′(u), v) + λ(Φ′(u), v) + μ(Υ′(u), v) = 0

holds. Thus we can look for weak solutions of problem (1.1) by applying Bonanno's three-solution theorem.

Lemma 3.1 (Continuous Inverse). The map (φ′)^{−1} : X* → X exists and is continuous.

Proof.
For any x, y ∈ R^N, by applying the inequality from [7],

⟨|x|^{p−2}x − |y|^{p−2}y, x − y⟩ ≥ (1/2^p)|x − y|^p,  p ≥ 2,

for all x, y ∈ R^N, where ⟨·,·⟩ denotes the usual inner product in R^N. Thus, noting that a(x) > 0 a.e., we have

(φ′(u) − φ′(v), u − v) ≥ c_p ‖u − v‖^p  for all u, v ∈ W^{1,p}_0(a; Ω),

for p ≥ 2. Hence φ′ is a uniformly monotone operator on W^{1,p}_0(a; Ω) for p ≥ 2. For the case 1 < p < 2, we can proceed as in [8, Lemma 4] and obtain the desired uniform monotonicity. In addition, a simple computation shows that φ′ is coercive; indeed,

(φ′(u), u)/‖u‖ ≥ ‖u‖^p/‖u‖ = ‖u‖^{p−1}.

Also note that the map t ↦ (φ′(u + tv), w) is continuous on [0, 1] for all u, v, w ∈ W^{1,p}_0(a; Ω); hence φ′ is hemicontinuous. The conclusion then follows immediately by applying Theorem 26.A of [14].

Next we prove another lemma, which is essential for the proof of Theorem 2.5.

Lemma 3.2. Assume that there exist two positive constants c, d with d^p ξ^p > c^p such that

(F1) F(x, t) ≥ 0 for each (x, t) ∈ {Ω \ B(x_0, r_1)} × [0, d];
(F2) d^p η^p |Ω| sup_{(x,t)∈Ω×[−c,c]} F(x, t) < c^p ∫_{B(x_0,r_1)} F(x, d) dx.

Then there exist r > 0 and u* ∈ W^{1,p}_0(a; Ω) such that

φ(u*) = (1/p)‖u*‖^p > r   (3.1)

and

|Ω| max_{(x,t)∈Ω×[−c,c]} F(x, t) < (c/(k‖u*‖))^p ∫_Ω F(x, u*) dx.   (3.2)

Proof. Define

u*(x) := d  for x ∈ B(x_0, r_1);
u*(x) := d(r_2² − |x − x_0|²)/(r_2² − r_1²)  for x ∈ B(x_0, r_2) \ B(x_0, r_1);
u*(x) := 0  for x ∈ Ω \ B(x_0, r_2).

It is easy to check that u* ∈ W^{1,p}_0(a; Ω). Note that

‖u*‖^p = (2^p d^p/(r_2² − r_1²)^p) ∫_{A_{r_1}^{r_2}} a(x)|x − x_0|^p dx + (w_N d^p/(r_2² − r_1²)^p) ∫_{r_1}^{r_2} (r_2² − r²)^p r^{N−1} dr + d^p w_N r_1^N.   (3.3)

From (2.6), (2.7) and (3.3) we deduce

ξ^p d^p / k^p < ‖u*‖^p < η^p d^p / k^p.   (3.4)

Using the fact that d^p ξ^p > c^p together with (3.4), we get

φ(u*) = (1/p)‖u*‖^p > (1/p) ξ^p d^p / k^p > (1/p)(c/k)^p,   (3.5)

so (3.1) follows from (3.5) immediately, with r = (1/p)(c/k)^p. Since 0 ≤ u* ≤ d, condition (F1) gives

∫_{Ω\B(x_0,r_2)} F(x, u*(x)) dx + ∫_{B(x_0,r_2)\B(x_0,r_1)} F(x, u*(x)) dx ≥ 0.   (3.6)

Now, by using condition (F2), (3.6) and the definition of u*, we get

|Ω| max_{(x,t)∈Ω×[−c,c]} F(x, t) < (c/(ηd))^p ∫_{B(x_0,r_1)} F(x, d) dx = (c/(ηd))^p ∫_{B(x_0,r_1)} F(x, u*) dx < (c/(k‖u*‖))^p ∫_{B(x_0,r_1)} F(x, u*) dx ≤ (c/(k‖u*‖))^p ∫_Ω F(x, u*) dx,

i.e., (3.2) follows. The proof is complete.

Now we give the proof of the main theorem of this article.

Proof of Theorem 2.5. First we note the following observations, which are immediate consequences:

(i) Φ belongs to C^1 and Φ′ is compact.
(ii) φ is weakly lower semicontinuous (since it is a norm) and bounded on each bounded subset of W^{1,p}_0(a; Ω).
(iii) (φ′)^{−1} exists and is continuous, thanks to Lemma 3.1.
(iv) From the assumptions on g, it follows that Υ is continuously Gâteaux differentiable on W^{1,p}_0(a; Ω), with compact derivative.

Thanks to (H3), for each λ ≥ 0 we have

lim_{‖u‖→∞} (φ(u) + λΦ(u)) = +∞.

Put r = (1/p)(c/k)^p. Note that max_{Ω̄} |u(x)| ≤ k‖u‖ for every u ∈ W^{1,p}_0(a; Ω). Hence for each u such that φ(u) = (1/p)‖u‖^p ≤ r, one has max_{Ω̄} |u(x)| ≤ k‖u‖ ≤ c. Thanks to Lemma 3.2, there exists u* ∈ W^{1,p}_0(a; Ω) such that φ(u*) = (1/p)‖u*‖^p > r > 0 = φ(0). Therefore, using (2.4) and (3.2), we get

−inf_{u∈φ^{−1}((−∞,r])} Φ(u) = sup_{u∈φ^{−1}((−∞,r])} (−Φ(u)) ≤ sup_{{u : ‖u‖^p ≤ pr}} ∫_Ω F(x, u) dx < ∫_Ω sup_{|t|≤c} F(x, t) dx < |Ω| max_{Ω̄×[−c,c]} F(x, t) < (c/(k‖u*‖))^p ∫_Ω F(x, u*) dx = (rp/‖u*‖^p) ∫_Ω F(x, u*) dx = r(−Φ(u*))/φ(u*).   (3.8)

Choose u_0 = 0 and u_1 = u*, so that Φ(u_0) = 0 = φ(u_0); from (3.8) we get

inf_{u∈φ^{−1}((−∞,r])} Φ(u) > [(φ(u_1) − r)Φ(u_0) + (r − φ(u_0))Φ(u_1)] / (φ(u_1) − φ(u_0)).

Hence all the conditions of Theorem 1.2 are satisfied, and the existence of three nontrivial distinct solutions follows immediately.

Remark 3.3. We note that the method is applicable for other boundary conditions as well. For example, we can consider the same problem (1.1) with a Neumann boundary condition and look for solutions in the space W^{1,p}(a; Ω).

Remark 3.4.
Remark 2.1 hints that we can also consider the following Dirichlet boundary value problem and discuss the existence of at least three solutions in W^{1,p}_0(a; Ω):

−div(a(x)|∇u|^{p−2}∇u) = λf(x, u) + μg(x, u) in Ω,   u = 0 on ∂Ω,   (3.9)

where Ω ⊂ R^N is a bounded domain.

Remark 2.1. It is worth mentioning that, by recalling a version of the Friedrichs-type inequality associated with the weight (see [6, eq. (1.28), p. 27]), the norm

‖u‖_a := ( ∫_Ω a(x)|∇u|^p dx )^{1/p}

on the space W^{1,p}_0(a; Ω) is equivalent to the norm ‖·‖ defined in (2.1).

Example 2.2. A typical example of a weight satisfying (1.2) is a(x) := 1/dist(x, ∂Ω)^l for l ≥ 0, where 'dist' denotes the distance from the point x ∈ Ω to the boundary ∂Ω.

Acknowledgment. The first author acknowledges the support of the CSIR fellowship for his Ph.D. work. A part of the research was conducted while the second author was at the University of West Bohemia, Pilsen, Czech Republic. The second author was supported by the Grant Agency of the Czech Republic, project no. 18-03253S, and also by the DST-INSPIRE Grant DST/INSPIRE/04/2018/002208.

References

[1] G.A. Afrouzi and S. Heidarkhani, Three solutions for a Dirichlet boundary value problem involving the p-Laplacian, Nonlinear Anal. 66 (2007) 2281-2288.
[2] G. Anello and G. Cordaro, Existence of solutions of the Neumann problem for a class of equations involving the p-Laplacian via a variational principle of Ricceri, Arch. Math. 79 (2002) 274-287.
[3] G. Bonanno, A minimax inequality and its applications to ordinary differential equations, J. Math. Anal. Appl. 270 (2002) 210-219.
[4] G. Bonanno and P. Candito, Three solutions to a Neumann problem for elliptic equations involving the p-Laplacian, Arch. Math. 80 (2003) 424-429.
[5] G. Bonanno and R. Livrea, Multiplicity theorems for the Dirichlet problem involving the p-Laplacian, Nonlinear Anal. 54 (2003) 1-7.
[6] P. Drábek, A. Kufner and F. Nicolosi, Quasilinear Elliptic Equations with Degenerations and Singularities, de Gruyter Series in Nonlinear Analysis and Applications, vol. 5, Walter de Gruyter & Co., Berlin, 1997.
[7] S. Kichenassamy and L. Veron, Singular solutions of the p-Laplace equation, Math. Ann. 275 (1985) 599-615.
[8] V. Le and K. Schmitt, On boundary value problems for degenerate quasilinear elliptic equations and inequalities, J. Differential Equations 144 (1998) 170-218.
[9] V. Murthy and G. Stampacchia, Boundary value problems for some degenerate-elliptic operators, Ann. Mat. Pura Appl. 80 (1968) 1-122.
[10] B. Ricceri, A three critical points theorem revisited, Nonlinear Anal. 70 (2009) 3084-3089.
[11] B. Ricceri, Existence of three solutions for a class of elliptic eigenvalue problems, Math. Comput. Model. 32 (2000) 1485-1494.
[12] B. Ricceri, On a three critical points theorem, Arch. Math. (Basel) 75 (2000) 220-226.
[13] G. Talenti, Some inequalities of Sobolev type on two-dimensional spheres, in: W. Walter (Ed.), General Inequalities, Vol. 5, Internat. Ser. Numer. Math. 80 (1987) 401-408.
[14] E. Zeidler, Nonlinear Functional Analysis and its Applications, II/B: Nonlinear Monotone Operators, Springer, New York, 1990.
Statics and diffusive dynamics of surfaces driven by p-atic topological defects

Farzan Vafa (Center of Mathematical Sciences and Applications, Harvard University, Cambridge, MA 02138, USA)
L. Mahadevan (School of Engineering and Applied Sciences, and Departments of Physics and Organismic and Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA)

(Dated: March 2, 2023)

Inspired by epithelial morphogenesis, we consider a minimal model for the shaping of a surface driven by p-atic topological defects. We show that a positive (negative) defect can dynamically generate a (hyperbolic) cone whose shape evolves diffusively, and predict that a defect of charge +1/p leads to a final semi-cone angle β which satisfies the inequality sin β ≥ 1 − 1/p + 1/(2p²). By exploiting the fact that for axisymmetric surfaces the extrinsic geometry is tightly coupled to the intrinsic geometry, we further show that the resulting stationary shape of a membrane with negligible bending modulus and embedded polar order is a deformed lemon with two defects at antipodal points. Finally, we close by pointing out that our results may be relevant beyond epithelial morphogenesis, in such contexts as shape transitions in macroscopic closed spheroidal surfaces such as pollen grains.

I. INTRODUCTION

A two-dimensional surface embedded in R³ is fully described, up to rigid motions, by the first and second fundamental forms, or equivalently, the induced metric and the curvature tensor. The first fundamental form encodes the intrinsic geometry, whereas the second fundamental form encodes both intrinsic and extrinsic aspects of the geometry. More specifically, the eigenvalues of the second fundamental form are the two principal curvatures of the surface; their average, the mean (extrinsic) curvature, describes how the surface is embedded in R³, whereas their product, the Gaussian (intrinsic) curvature, is independent of the embedding. For example, a cylinder and a cone have zero Gaussian curvature but nonzero mean curvature, whereas minimal surfaces, such as helicoids and catenoids, have nonzero Gaussian curvature but zero mean curvature. The six quantities characterizing the first and second fundamental forms are not all independent; for a surface to be embeddable in three dimensions, there are three additional compatibility relations. (See Ref. [1] for a comprehensive review of these ideas.)

In biology, epithelial morphogenesis of thin sheet-like structures in plants and animals is responsible for the vast majority of functional structures that make up organs and organisms.
These may be modeled effectively as two-dimensional surfaces whose geometry is driven by active processes that are intimately connected to the presence of orientational order in the tangent plane, which modifies the embedding and in turn is modified by it. The nature of the in-plane order is akin to that of polar molecules, liquid crystals, etc., or more generally to p-fold rotational order, denoted "p-atic". There is a growing body of evidence suggesting that topological defects, singular disruptions of the rotational order, play a crucial role in guiding or controlling morphogenesis, as seen in experimental observations of cell extrusion and apoptosis [2], mound formation [3,4], layer formation [5], and body shaping using bulges, pits and tentacles [6]. Previous work on the role of defects in deformable surfaces has focused on dynamics driven by the extrinsic geometry [7-13] (see Ref. [14] for a recent review). In contrast, in this work, following the formalism introduced in Ref. [15] and taking advantage of the results of Ref. [16], we view the intrinsic geometry as the fundamental field and study its dynamics. Unlike our previous work [15], which included the effect of activity, here we consider a passive system and demonstrate that even in this passive setting the dynamics is rich. It has long been known that defects drive geometry (see, for example, [9-11]); what is novel here is that we find a simple and robust link between topological defects and the resulting geometry. This paper is organized as follows. We begin in Sec. II by reviewing a minimal model for a p-atic on a curved surface that incorporates intrinsic geometry, and then extend it to include extrinsic geometry as well. Throughout the paper, we consider three running examples: an isolated positive defect, an isolated negative defect, and multiple defects. In Sec.
III, we introduce the dynamical equation for the intrinsic geometry, and then in Sec. IV we study the dynamics of the intrinsic geometry of defects on the plane. In particular, we show that a positive (negative) defect can dynamically generate a cone (hyperbolic cone), and predict its shape. In Sec. V, we turn to the dynamics of extrinsic geometry for axisymmetric surfaces. For an isolated positive defect, we analytically find the height h(t) of the surface as a function of time t, and show that h(t) ∝ √t. In Sec. VI, we consider surfaces and focus on the intrinsic and extrinsic dynamics of the sphere and lemon geometries. In Sec. VII, we incorporate the effect of mean curvature through the bending energy. We review the crucial fact that for axisymmetric surfaces the intrinsic geometry entirely encodes the extrinsic geometry, which we exploit to write the bending energy in terms of the intrinsic metric. Numerically, we find that for small bending modulus the final geometry is a deformed lemon. Moreover, we propose a model for pollen grains in which the transition between spherical and lemon geometries is driven by an order-disorder phase transition that depends on hydration. We conclude in Sec. VIII by reviewing our results and suggesting future directions of research.

II. MINIMAL MODEL

In this section, we first review aspects of Ref. [16], which develops techniques to study p-atic liquid crystals deep in the ordered limit on fixed curved surfaces, and then apply them to a minimal model of morphogenesis [15].

A. Isothermal coordinates

Following Gauss [17], in two dimensions it is always possible to choose local coordinates z and z̄, known as isothermal (conformal) coordinates, such that the metric takes the form

ds² = g_{zz̄} dz dz̄ + g_{z̄z} dz̄ dz = 2 g_{zz̄} |dz|² ≡ e^ϕ |dz|².   (1)

In terms of z = x + iy and z̄ = x − iy, we also have ds² = e^{ϕ(x,y)} (dx² + dy²). We thus immediately see that the metric is conformally flat, i.e.
proportional to the identity matrix, where e^ϕ, known as the conformal factor, describes position-dependent isotropic stretching. Following Ref. [16], in analogy to electrostatics, we will interpret ϕ as the geometric potential.

B. Orientational order

1. p-atic tensor order parameter

Now suppose our curved 2D surface is equipped with p-atic order, that is, p-fold rotational symmetry. Let Q be the p-atic tensor order parameter, a traceless, real, symmetrized rank-p tensor. In isothermal coordinates, since Q is traceless (the contraction of any pair of indices vanishes), the only nonzero components of Q are Q ≡ Q_{z...z} and Q̄ ≡ Q̄_{z̄...z̄}, where the ellipses denote p copies of the index. By reality, Q̄ = (Q)*. For ease of notation, let ∇ ≡ ∇_z denote the covariant derivative with respect to z and ∇̄ ≡ ∇_z̄ the covariant derivative with respect to z̄. Explicitly, the covariant derivatives of the p-atic tensor are

∇Q = ∂Q + p(∂ϕ)Q,   ∇̄Q = ∂̄Q,   (2a)
∇̄Q̄ = ∂̄Q̄ + p(∂̄ϕ)Q̄,   ∇Q̄ = ∂Q̄,   (2b)

where ∂ ≡ ∂_z and ∂̄ ≡ ∂_z̄.

2. Topological defects

The in-plane orientational order can be interrupted by topological defects, points where the phase of the order parameter winds around a closed loop and the amplitude vanishes. Topological defects have been observed to play a key role in diverse biological processes [2-6]; in Fig. 1, we show a few examples and sketches of the defects we aim to describe.

C. Free energy

For a surface with intrinsic in-plane p-atic order that is embedded in three dimensions, the three main contributions to the free energy that we consider are: (i) F_Q, from the p-atic tensor Q describing order in the plane; (ii) F_g, from the metric g_ab; and (iii) F_el, due to the embedding. The total free energy F is then the sum of the contributions from the p-atic field, the intrinsic metric, and the embedding,

F = F_Q + F_g + F_el.   (3)

In isothermal coordinates, F_Q, the contribution from the p-atic field to the free energy (Eq.
(3)), is given by

F_Q = 2^{p+1} p² ∫ d²z √g [ K |∇Q|² + K′ |∇̄Q|² + ε^{−2} (1 − 2^p |Q|²)² ],   (4)

where |∇Q|² = g_{zz̄}^{p−1} ∇Q ∇̄Q̄, |∇̄Q|² = g_{zz̄}^{p−1} ∇̄Q ∇Q̄, and |Q|² = g_{zz̄}^{p} Q Q̄. Here K, K′ > 0 are Frank-type elastic constants (which have the same effect in flat space), and the last term governs the p-atic order, with ε controlling the microscopic p-atic coherence length (or defect core radius) ξ = ε√(K + K′).

F_g, the geometric contribution to the free energy (Eq. (3)), is written in isothermal coordinates as

F_g = ∫ d²z √g [ 2K_ϕ R ϕ + λ ],   (5)

where R = −2e^{−ϕ} ∂∂̄ϕ is the Gaussian curvature. K_ϕ is an elastic constant penalizing changes in the curvature; this term is a manifestation of the well-known trace anomaly, in which the response of the system to a conformal rescaling of the metric is proportional to the curvature [19]. λ controls the growth rate of the area. In general λ = λ(x, t), but for simplicity we take λ = λ(t), with λ chosen such that the surface area A = ∫ d²z √g does not change. Here we mostly focus on the case λ < 0, which corresponds to positive Gaussian curvature. The final contribution to the free energy (Eq. (3)) is the bending energy, F_el = B ∫ d²z √g H², where H is the mean curvature [20,21]. In Sec. VII, we express H in terms of ϕ.

D. Strongly ordered limit

We work deep in the ordered limit (ε ≪ 1). In this limit, F_Q (Eq. (4)) is minimized when

2^p |Q|² = 1.   (6)

Given Eq. (6), writing the order parameter Q in terms of its amplitude A and phase θ as

Q_{z...z} = A e^{iθ}   (7)

leads to

A = e^{−pϕ/2},   (8)

where we have used g_{zz̄} = ½ e^ϕ from Eq. (1). Upon substitution of Q (Eq. (7), with the amplitude A given by Eq. (8)) into Eq. (4), F_Q simplifies to

F_Q = (K + K′) ∫ d²z | (p/2) ∂ϕ + i ∂θ |²,   (9)

where we have used

∇_z Q_{z...z} = [ (p/2) ∂ϕ + i ∂θ ] Q_{z...z},
∇_z̄ Q_{z...z} = [ −(p/2) ∂̄ϕ + i ∂̄θ ] Q_{z...z},

obtained by evaluating the covariant derivatives (Eq. (2)) using Eqs. (7) and (8). Minimizing F_Q (Eq.
(9)) with respect to θ gives

∂∂̄θ = 0.   (10)

In the presence of a topological defect of charge σ ∈ ℤ/p, the phase θ winds by 2πpσ. Thus a solution to Eq. (10) with defects labeled j, at positions z_j with charges σ_j, is

θ = −(i/2) Σ_j (pσ_j) ln[ (z − z_j) / (z̄ − z̄_j) ].   (11)

Using Eq. (11) and the Green's function G(z, z′), which satisfies ∂∂̄ G(z, z′) = (1/4) δ²(z − z′), we can compute the contribution of defects to F_Q (Eq. (9)), leading to

F_Q = 2(K + K′) [ −4 Σ_{m≠n} σ_m σ_n G(z_m, z_n) − π Σ_m ( σ_m − ½σ_m² ) ϕ(z_m) + ½ ∫ d²z |∂ϕ|² ]   (12)

(see [16] for more details). The first term in Eq. (12) is the usual elastic interaction between defect pairs, and the second term is the interaction between topological defects and the geometry [16,22], in which a topological defect of charge σ_m acquires an effective charge q_m = σ_m − ½σ_m². The third term is an elastic contribution to the free energy from the geometry.

III. RELAXATIONAL DYNAMICS OF THE INTRINSIC GEOMETRY

In this paper, we are interested in the interaction between topological defects and geometry, which is captured by the last two terms of F_Q (Eq. (12)); we focus on these in what follows. For simplicity, we consider the case when the defects are frozen, i.e., we fix Q and assume that the geometry responds to the presence of the defects. This assumption is valid if the defects are already at their equilibrium positions, or if we are in a regime where defect dynamics are slow compared to changes in the geometry. We start by limiting ourselves to the dynamics of the intrinsic geometry, as it is simpler but still capable of yielding insights into the shape of the surface. We then incorporate the extrinsic geometry via the embedding and the mean curvature in axisymmetric cases, noting that in these situations there is a tight link between intrinsic and extrinsic geometry. With these assumptions, the relevant part of the free energy (using the last line of Eq. (12) and Eq.
(5)) is given by

F = −2π(K + K′) Σ_m ( σ_m − ½σ_m² ) ϕ(z_m) + (K + K′ + 2K_ϕ) ∫ d²z |∂ϕ|² + ∫ d²z √g λ.   (13)

We assume relaxational dynamics for ϕ, i.e.,

∂_t ϕ = −γ_ϕ^{−1} (1/√g) δF/δϕ.   (14)

Stationary solutions of Eq. (14) satisfy δF/δϕ = 0, leading to

(1/D) ∂_t ϕ = −2(R − R₀) + 4π e^{−ϕ} Σ_j χ_j δ²(z − z_j) = 0,   (15)

where D = 2γ_ϕ^{−1}(2K_ϕ + K + K′) is the diffusivity, R = −2e^{−ϕ}∂∂̄ϕ is the Gaussian curvature, R₀ = −2(Dγ_ϕ)^{−1}λ, and

χ_j = [ (K + K′) / (2K_ϕ + K + K′) ] ( σ_j − ½σ_j² ) ≤ σ_j − ½σ_j²,   (16)

where the inequality for χ_j follows because K, K′, K_ϕ ≥ 0. We can interpret Eq. (15) as saying that the Gaussian curvature R is sourced by defects at z_j with strengths χ_j, while away from the defects R is locked to an effective target curvature via R = R₀ and is thus constant. Related aspects were noted in Ref. [11]. We now turn to the evolution of the geometry. Rewriting Eq. (15) explicitly in terms of ϕ, we have

e^ϕ ∂_t ϕ = D [ ∂∂̄ϕ + π Σ_{j=1}^{n} χ_j δ²(z − z_j) + R₀ e^ϕ / 2 ].   (17)

Except for the nonlinearity due to the e^ϕ terms, Eq. (17) looks like the regular diffusion equation with sources at the defect positions z_j, with strengths χ_j. Linearizing Eq. (17) in the neighborhood of ϕ = 0 yields the usual linear diffusion equation with point sources, whose solutions can be written by convolving with the usual Green's function. The full nonlinear equation (Eq. (17)) corresponds to Ricci flow [23] with sources; the nonlinearity arising from the e^ϕ factor has been studied extensively by mathematicians, and those results confirm this physical intuition. We begin our analysis of Eq. (14) with defects on the plane, with the intrinsic geometry covered in Sec. IV and the extrinsic geometry covered in Sec. V. In Sec. VI, we generalize the analysis to closed surfaces. We then take into account the effect of the mean curvature in Sec. VII.

IV. INTRINSIC GEOMETRY OF DEFECTS ON THE PLANE

A.
Stationary solution

Here we study a single defect at the origin of the plane, with R₀ = 0. A solution to Eq. (15) is ϕ = −χ log(zz̄), which is in fact the geometry of a cone, i.e., the cone half-angle β satisfies 1 − sin β = χ. A positive (negative) defect thus ultimately generates a cone with a positive (negative) curvature singularity. Related aspects were noted in [9-11]. Since χ ≤ σ − σ²/2, there is an upper bound on χ. Since σ comes in units of 1/p, for p = 1 we have χ ≤ 1/2, which corresponds to sin β ≥ 1/2, i.e., β ≥ π/6. This means that there is an upper bound on how sharp a cone can be. Over all p-atics, the extreme bound is attained at p = 1, and as p increases the cone becomes less sharp. For example, we predict for a nematic that χ ≤ 1/2 − (1/2)²/2 = 3/8.

B. Dynamics

We study the evolution of the intrinsic geometry by starting with the case of an isolated defect on the plane (R₀ = 0), with initial condition ϕ(z, z̄, t = 0) = 0, i.e., flat geometry. To see how a defect can generate nontrivial geometry, we assume an axisymmetric solution, ϕ = ϕ(r, t), which upon substitution of ∂∂̄ = ¼[∂²/∂r² + (1/r)∂/∂r] and δ²(z) = δ(r)/(2πr) into Eq. (17) gives

∂_t ϕ = D e^{−ϕ} [ ¼ ∂²ϕ/∂r² + (1/4r) ∂ϕ/∂r + (χ/2r) δ(r) ].   (18)

We propose a self-similar ansatz,

ϕ = ϕ( u ≡ r²/(Dt) ),   (19)

which upon substitution into Eq. (18) gives

−u e^ϕ ∂_u ϕ = ∂_u ( u ∂_u ϕ ) + χ δ(u).   (20)

In Fig. 2, we compare the solution of the geometric diffusion equation (Eq. (20)) with the solution of the linearized diffusion equation

−u ∂_u ϕ = ∂_u ( u ∂_u ϕ ) + χ δ(u)   (21)

(for χ = 0.5), and find excellent agreement. The difference becomes more significant as χ approaches 1. The u → 0 limit of Eq. (20) corresponds to the steady state of Eq. (17), and yields the short-distance / long-time behavior of the geometry. To understand the long-distance / short-time behavior, we now consider the u → ∞ limit. Starting from the flat configuration ϕ = 0, and to leading order in ϕ, Eq.
(20) becomes

−u ∂_u ϕ = ∂_u ( u ∂_u ϕ ).   (22)

Let f(u) = u ∂_u ϕ. Integrating Eq. (22) once immediately gives f = −C₁ e^{−u}, with constant of integration C₁ > 0 (because f = u ∂_u ϕ < 0), and integrating once more gives

ϕ = C₁ ∫_u^∞ (du′/u′) e^{−u′},   (23)

where by construction ϕ(u = ∞) = 0, and the constant of integration C₁ can in principle be determined by matching this solution with the one for u → 0, leading to C₁ = χ. Intriguingly, in both the u → 0 and u → ∞ limits the factor e^ϕ is negligible. In fact, Eq. (23) is the Green's-function solution of the geometric diffusion equation with the e^ϕ terms ignored. Having seen that the solution of the linearized equation is sufficiently good for sufficiently small χ_j, in this regime we can obtain a good approximation to the full dynamical solution by simply solving the linearized equation, which leads to

ϕ(r, t) = Σ_j χ_j ∫_0^t (dt′/t′) exp[ −|r − r_j|² / (Dt′) ].

V. EXTRINSIC GEOMETRY OF A DEFECT ON THE PLANE

We now find the extrinsic geometry of a defect on the plane. In particular, for a single positive defect we find the exact dynamical solution, and show that at all times the height grows as h(t) ∝ √t. We begin by considering a surface over the flat (z′, z̄′) plane with height h(z′, z̄′) above the plane, chosen to reproduce the intrinsic metric we have found. In other words, we look for solutions to

ds² = e^ϕ dz dz̄ = dh² + dz′ dz̄′.   (24)

In terms of polar coordinates, z = r e^{iφ} and z′ = r′ e^{iφ′}, Eq. (24) becomes

e^ϕ (dr² + r² dφ²) = dh² + dr′² + r′² dφ′².   (25)

We now consider separately the cases of a single positive or negative defect.

A. Positive defect

For a positive defect, noting that φ′ = φ implies that

r′² = r² e^ϕ,   (26)

and so

dh² + dr′² = e^ϕ dr².   (27)

As in the case of the intrinsic geometry, it is useful to work in terms of u ≡ r²/(Dt). Dividing Eq.
(27) by dr², and using

dr′/dr = (1 + u ∂_u ϕ) e^{ϕ/2},   dh/du = (dh/dr)(dr/du),   dr/du = √(Dt)/(2√u),

leads to

h/√(Dt) = I − ½ ∫_0^u du′ √[ −2 ∂_{u′}ϕ − u′ (∂_{u′}ϕ)² ] e^{ϕ/2},   (28)

where I = ½ ∫_0^∞ du′ √[ −2 ∂_{u′}ϕ − u′ (∂_{u′}ϕ)² ] e^{ϕ/2} is a constant that depends only on χ. We now study two limits of Eq. (28): u ≪ 1 and u → ∞.

1. u ≪ 1 limit

We first study the u ≪ 1 limit of Eq. (28), or equivalently the t → ∞ (or r → 0) limit. In this limit, since ϕ = −χ log u, Eq. (28) simplifies to

h/√(Dt) ≈ I − ½ ∫_0^u du′ √[ 1 − (1−χ)² ] u′^{−(χ+1)/2} = I − [ √(1 − (1−χ)²) / (1−χ) ] u^{(1−χ)/2}.   (29)

Then from Eq. (26), r′² = r² e^ϕ implies u = ( r′²/(Dt) )^{1/(1−χ)}, and upon substitution into Eq. (29) we have

h = I √(Dt) − (cot β) r′,   (30)

where sin β = 1 − χ. In other words, we have a cone with half-cone angle β whose height grows proportionally to √t, with proportionality constant I√D determined within our model (see Fig. 3 for a plot). Here, for simplicity, we have assumed λ = 0. If we were to restore λ, then away from the defect the steady state has constant Gaussian curvature, whose sign is opposite to that of λ. This makes contact with Ref. [10], which corresponds to the case λ > 0. The main difference is that at the defect we predict a finite, fixed-angle cone, whereas Ref. [10] finds that the slope diverges, and comments that mean curvature is needed to smooth the divergence. Note that our results are consistent with the experimental observation that positive defects correlate with positive Gaussian curvature in Hydra [6].

2. u → ∞ limit

We now study the opposite limit, u → ∞, of Eq. (28) (or equivalently t → 0). Starting from the flat configuration ϕ = 0, we can assume ϕ is small. Therefore, in the u → ∞ limit (and thus to leading order in ϕ),

h/√(Dt) ≈ ½ ∫_u^∞ du′ √( −2 ∂_{u′}ϕ ) = ½ √(2C₁) ∫_u^∞ du′ e^{−u′/2}/√u′ = √(πC₁) erfc(√(u/2)),

where erfc(x) = 1 − erf(x) = 1 − (2/√π) ∫_0^x e^{−t²} dt is the complementary error function. See Fig. 3 for plots.
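The √t growth of the far-field profile can be checked directly from the erfc expression above; a minimal sketch, with the illustrative choices D = 1 and C₁ = χ = 0.5 (the same values used in Fig. 3):

```python
import math

D, chi = 1.0, 0.5  # illustrative parameters; C1 = chi from the matching argument

def height(r, t):
    """Far-field height: h ~ sqrt(D t) * sqrt(pi * chi) * erfc(sqrt(u/2))."""
    u = r * r / (D * t)
    return math.sqrt(D * t) * math.sqrt(math.pi * chi) * math.erfc(math.sqrt(u / 2))

# Self-similarity: at fixed u (i.e. comparing radii r ~ sqrt(t)), the height
# grows exactly like sqrt(t).
t1, t2 = 1.0, 4.0
r1 = 1.0
r2 = r1 * math.sqrt(t2 / t1)   # same u at the later time
ratio = height(r2, t2) / height(r1, t1)
print(ratio)  # 2.0, i.e. sqrt(t2/t1)
```

This is the dilation by √t of both axes described in the caption of Fig. 3.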
If we had more than one positive defect, then as long as the defects do not influence each other diffusively, i.e., as long as the distance between defects is large compared to √(Dt), each defect creates its own conical geometry.

B. Negative defect

As before, we look for solutions to Eq. (25). Unlike the case of a positive defect, here there are no rotationally symmetric embeddings: even though the intrinsic metric is rotationally invariant, the embedding breaks the rotational symmetry. Hence we assume that ϕ = ϕ(r), h = f(r′) cos(mφ′), r′ = r′(r, φ), and φ′ = φ′(φ). For example, m = 2 corresponds to a regular saddle, and m = 3 to a monkey saddle. Equating the coefficients of the differentials in Eq. (25), we find a coupled system of nonlinear equations to solve. We check explicitly the case of a hyperbolic cone (see Fig. 4 for a diagram), whose embedding is

x = r′ cos φ′,   y = r′ sin φ′,   h = a r′ cos(mφ′).   (31)

With a change of variables of the form r′ = (1 + χ) √(1 + a² cos²(mφ′)) r, the metric (Eq. (25)) takes the form ds² = dr′² + r′² dφ′² + dh², as expected.

VI. DEFECTS ON SURFACES

While we can consider the general case of surfaces with constant positive or negative Gaussian curvature, by Hilbert's theorem we cannot embed constant-negative-curvature surfaces in R³ [24]. Ideally, we would like to answer the following questions: (i) What kinds of stationary solutions exist? (ii) Are they stable? (iii) What is the embedding? (iv) What are the dynamics of the intrinsic and extrinsic geometry? To attempt answers to these questions, we first consider multiple defects on general surfaces, and then, in more detail, multiple defects on the sphere.

A. Intrinsic geometry

We begin with general closed surfaces. Integrating Eq. (15) over the surface gives

(1/D) ∂_t A = 4π(2g − 2) + 4π Σ_j χ_j + 2R₀ A = 0,

where g is the genus of the surface and A = ∫ d²z √g is the area, leading to

(2 − 2g) − Σ_j χ_j = R₀ A / (2π).
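The area-balance relation above fixes the sign of the target curvature R₀ once the genus and defect strengths are known, since A and 2π are positive. A minimal sketch (the χ values are illustrative):

```python
# Sign of R0 from the balance (2 - 2g) - sum(chi_j) = R0 * A / (2*pi).
def r0_sign(genus, chis):
    lhs = (2 - 2 * genus) - sum(chis)
    return (lhs > 0) - (lhs < 0)   # +1, 0, or -1

# Sphere (g = 0) with two polar defects of strength chi = 0.5 each:
print(r0_sign(0, [0.5, 0.5]))   # 1  -> positive curvature away from the defects

# Torus (g = 1) with a single chi = 0.3 defect:
print(r0_sign(1, [0.3]))        # -1 -> negative curvature away from the defects
```

The first case is the lemon geometry discussed below; the second illustrates the R < 0 branch of Eq. (32).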
Since away from the defects the sign of the constant curvature is the same as the sign of R₀, it follows from the above that the sign of the left-hand side correlates with the sign of the constant curvature away from the defects, i.e.,

(2 − 2g) ≷ Σ_j χ_j  ⟺  R₀ ≷ 0.   (32)

In particular, for the sphere, g = 0, in which case

2 ≷ Σ_j χ_j  ⟺  R₀ ≷ 0.   (33)

We now comment on the stability of the stationary solution. For positive curvature, R > 0, the stationary solution is not stable if R₀ is a constant, since ∂_t A = 2R₀DA + const., which leads to runaway. However, in principle we can choose R₀ to be time-dependent, so that the stationary solution can always be reached, as we shall assume. For negative curvature, R < 0, the stationary solution is automatically stable. Whether or not we are at a stationary point, we can always choose R₀(t) such that the area does not change, as we assume in the following. We now consider in more detail two cases in turn: R > 0 and R < 0.

1. R > 0

We first consider R > 0 on the sphere. There is no stationary solution to Eq. (14) for n = 1. For n = 2, there exists a stationary solution if and only if χ₁ = χ₂ and the defects are at antipodal points [25]. In this case, the shape resembles that of a lemon (see Fig. 5). For n ≥ 3 defects with χ_i ∈ (0, 1), denoting the deficit angles by 2πχ_i, and assuming the Troyanov inequality [26,27],

2 max_i χ_i < Σ_i χ_i < 2,   (34)

there exists a (2n − 6)-parameter family of metrics on the sphere with constant positive curvature and n conical singularities with deficit angles 2πχ_i. Here 2n counts the coordinate degrees of freedom of the defect positions, and the 6 constraints are associated with the Möbius transformations of the sphere. The first inequality is an intriguing prediction that would be interesting to interpret physically, while the second inequality in Eq. (34) follows from Eq. (33).
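The Troyanov condition (Eq. (34)) for n ≥ 3 defects is easy to check mechanically; a minimal sketch with illustrative strength values:

```python
# Troyanov inequality for constant-positive-curvature metrics on the sphere
# with n >= 3 conical defects of strengths chi_i: 2*max(chi) < sum(chi) < 2.
def troyanov_ok(chis):
    return 2 * max(chis) < sum(chis) < 2

print(troyanov_ok([0.375] * 4))       # True:  four equal nematic-like defects
print(troyanov_ok([0.5] * 3))         # True:  three equal defects
print(troyanov_ok([0.9, 0.1, 0.1]))   # False: one defect dominates
```

Note that for equal strengths the first inequality is automatic once n ≥ 3, which is why it never obstructs the p-atic configurations considered below (the n = 2 lemon case is handled separately in the text).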
For the p-atic defects on a sphere that we discuss later, all the χ_i are positive and equal, so this inequality is automatically satisfied. Moreover, as t → ∞, in all cases where a solution exists the dynamics indeed converges to the unique constant-curvature solution [28-30], which can be embedded uniquely in R³ up to translation and rotation [31]. We note that even though our model does not directly contain any information about the embedding of the surface in R³ or the extrinsic geometry, the solution nevertheless determines a unique extrinsic geometry. Moreover, Ref. [32] extended the existence results for stationary solutions to allow some χ_i < 0, finding that a solution can exist if additional constraints on the χ_i are satisfied. Note that, naively, if we had defects of mixed sign we would expect them to annihilate due to the Coulomb interaction, consistent with the mathematical result that the steady-state metric is not unique in this case [26].

2. R < 0

We now consider R < 0. In this case, for a compact Riemann surface S of genus g with all χ_i < 1, we know from Eq. (32) that we need Σ_i χ_i > 2 − 2g, which has also been shown to be sufficient [26]. Note that here we do not assume that all χ_i are positive. The position degrees of freedom give 2n parameters for genus g > 1, 2n − 2 for g = 1 (the −2 accounts for the two translations of the torus), and 2n − 6 for g = 0 (the −6 accounts for the Möbius transformations of the sphere).

3. p-atic on the sphere

Here we consider in more detail a p-atic on the sphere. Since the net charge is 2, we consider 2p defects, each of the minimal charge +1/p. Since, according to Eq. (16), Σ_j χ_j < 2, R is constant and positive away from the defects, by Eq. (33). Moreover, the left-hand side of the Troyanov inequality (Eq. (34)) is also satisfied because all the χ_j are equal. Thus a unique solution exists.
An example where we can explicitly write the stationary-state metric is p = 1, corresponding to a polar liquid crystal, with two +1 defects of equal strength χ placed at the north and south poles of the sphere. We construct this metric starting from the round (spherically symmetric) metric,

ds² = [ 4 / (1 + |z|²)² ] |dz|²,

which has Gaussian curvature 1. The coordinate transformation z → z^{1−χ} then gives rise to conical singularities of strength χ at the north and south poles, giving the round conical metric [25],

ds² = [ 4(1−χ)² |z|^{−2χ} / (1 + |z|^{2(1−χ)})² ] |dz|².   (35)

The Gaussian curvature is still 1 away from the poles, while at the two poles z = 0 and z = ∞ there are conical singularities of strength χ. We call this the lemon geometry.

B. Extrinsic geometry

For the lemon geometry, defined by Eq. (35), the embedding x_i is [33]

x₁ = a sin θ cos φ,   x₂ = a sin θ sin φ,   x₃ = ∫_0^θ dθ′ √(1 − a² cos² θ′),   (36)

where θ ∈ [0, π], φ ∈ [0, 2π], and a = 1 − χ. It can be checked that

ds² = dx₁² + dx₂² + dx₃² = dθ² + a² sin²θ dφ² = e^ϕ |dz|²,

where

z = ( tan(θ/2) )^{1/a} e^{iφ},   e^ϕ = a² sin²θ ( tan(θ/2) )^{−2/a}.

See Fig. 5 for a plot.

VII. INCLUDING MEAN CURVATURE

In this section, we consider the full model by including the effect of the bending energy. The key point is that for axisymmetric surfaces the extrinsic geometry is entirely encoded by the intrinsic geometry. We first review this fact and explicitly express the mean curvature, via the principal curvatures κ₁ and κ₂, in terms of ϕ. For an axisymmetric surface, the embedding is

X_i = ( ρ(u) cos θ, ρ(u) sin θ, h(u) ),

from which it follows that the metric is

ds² = dX_i² = ( ρ′² + h′² ) du² + ρ² dθ².   (37)

The task at hand is to express the mean curvature,

H = ½ (κ₁ + κ₂),   (38)

and thus the principal curvatures,

κ₁ = (d²h/dρ²) / [ 1 + (dh/dρ)² ]^{3/2},   (39a)
κ₂ = (dh/dρ) / { ρ [ 1 + (dh/dρ)² ]^{1/2} },   (39b)

in terms of ϕ.
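As a consistency check on the lemon embedding above (Eq. (36)), the sketch below differentiates x_i(θ) numerically at fixed φ and verifies that the θθ-component of the induced metric equals 1, i.e. ds² = dθ² + a² sin²θ dφ². The values χ = 0.3 and θ = 1.0 are illustrative.

```python
import math

chi = 0.3
a = 1.0 - chi

def x3(theta, n=20000):
    # x3 = int_0^theta sqrt(1 - a^2 cos^2 t) dt, simple midpoint quadrature
    dt = theta / n
    return sum(math.sqrt(1.0 - a**2 * math.cos((k + 0.5) * dt)**2) * dt
               for k in range(n))

def embed(theta, phi=0.0):
    return (a * math.sin(theta) * math.cos(phi),
            a * math.sin(theta) * math.sin(phi),
            x3(theta))

# Numerical tangent vector along theta at theta = 1.0 (central difference):
th, eps = 1.0, 1e-5
p_minus, p_plus = embed(th - eps), embed(th + eps)
g_thth = sum(((b - c) / (2 * eps))**2 for b, c in zip(p_plus, p_minus))
print(g_thth)  # ~1.0: the coefficient of dtheta^2 in the induced metric
```

Analytically the check is immediate: the tangent vector is (a cos θ, 0, √(1 − a² cos²θ)) at φ = 0, whose squared norm is exactly 1.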
Choosing the parameter u in the metric (Eq. (37)) such that [34]

ρ′² + h′² = ρ²   (40)

immediately gives

ds² = ρ(u)² ( du² + dθ² ).   (41)

Here (ρ, θ) can be viewed as cylindrical coordinates, and in terms of isothermal coordinates z = u + iθ,

ds² = e^{ϕ(z,z̄)} |dz|²,  with  ρ(u)² = e^{ϕ(u)}   (42)

and u = (z + z̄)/2. Since θ is periodic, z ∼ z + 2πi; rotating the surface by α thus corresponds to shifting z by iα. The metric (Eq. (41)) is manifestly rotationally invariant, as ρ (Eq. (42)) depends only on u and not on θ. We briefly consider the explicit examples of the sphere and the lemon before turning to general axisymmetric surfaces.

A. Sphere geometry

The round sphere has metric

ds² = 4 |dw|² / (1 + |w|²)².   (43)

We define w = e^z, so that a phase rotation of w is identified with a shift of the imaginary part of z. In terms of z, Eq. (43) becomes

ds² = 4 |d(e^z)|² / (1 + |e^z|²)² = 4 |e^z|² |dz|² / (1 + |e^z|²)².

Using |e^z| = e^u, we learn that

ds² = 4 e^{2u} |dz|² / (1 + e^{2u})² = sech²(u) |dz|²,

and thus ρ(u) = sech(u). For the height,

h = ± ∫ du sech²(u) = ± tanh(u).

As a consistency check, ρ² + h² = 1, which indeed describes a sphere.

B. Lemon geometry

The lemon geometry has metric

ds² = [ 4(1−χ)² |w|^{−2χ} / (1 + |w|^{2(1−χ)})² ] |dw|².   (44)

In terms of z = ln w, Eq. (44) becomes

ds² = 4(1−χ)² |e^z|^{−2χ} |d(e^z)|² / (1 + |e^z|^{2(1−χ)})² = 4(1−χ)² e^{2u(1−χ)} |dz|² / (1 + e^{2u(1−χ)})².

Thus we learn that

ρ(u) = (1−χ) sech( (1−χ) u ).

C. General axisymmetric surfaces

We now turn to general axisymmetric surfaces. We first note, using Eq. (40), that

(dh/dρ)² = h′²/ρ′² = (ρ² − ρ′²)/ρ′²,

from which it follows that

dh/dρ = √( 4/ϕ′² − 1 ),   [ 1 + (dh/dρ)² ]^{1/2} = 2/ϕ′.   (45)

We also note that

d²h/dρ² = (1/ρ′) (d/du)(dh/dρ) = (2 e^{−ϕ/2}/ϕ′) (d/du) √( 4/ϕ′² − 1 ) = −8 e^{−ϕ/2} ϕ″ / ( ϕ′³ √(4 − ϕ′²) ).   (46)

Now substituting Eqs. (45) and (46) into the principal curvatures (Eq.
(39)) gives

κ₁ = −ϕ″ e^{−ϕ/2} / √(4 − ϕ′²),   (47a)
κ₂ = ½ e^{−ϕ/2} √(4 − ϕ′²).   (47b)

Note that in these coordinates the Gaussian curvature R takes the form

R = κ₁ κ₂ = −½ ϕ″ e^{−ϕ}.   (48)

Upon substitution of Eq. (47) into F = F_Q + F_g + F_el, we arrive at

F_Q = 2(K + K′) [ −π Σ_m ( σ_m − ½σ_m² ) ϕ(u_m) + ⅛ ∫ d²u ϕ′² ],   (49a)
F_g = ∫ d²u e^ϕ [ K_ϕ R ϕ + λ ],   (49b)
F_el = B ∫ d²u [ −ϕ″ / √(4 − ϕ′²) + ½ √(4 − ϕ′²) ]²,   (49c)

completing the task at hand. We now comment on the effect of F_el via the principal curvatures κ₁ and κ₂. Near a singularity, the κ₁ term (the first term in the brackets of Eq. (49c)) vanishes, whereas the κ₂ term (the second term) remains finite. Hence κ₂ contributes to the coefficient of ϕ′² in the last term of Eq. (49a), leading to the enhanced charge

χ_j = [ (K + K′) / (2K_ϕ + K + K′ − B) ] ( σ_j − ½σ_j² ),

which is valid when B / (2K_ϕ + K + K′) < 1. We now comment on the choice of λ needed to keep the surface area constant. Let F̃ = F − λ ∫ d²u e^ϕ = F − λA. Then

0 = ∂_t A = ∫ d²u e^ϕ ∂_t ϕ ∝ ∫ d²u δF/δϕ = ∫ d²u δF̃/δϕ + λA,

from which it follows that

λ = −(R₀/2) ∫ d²u δF̃/δϕ   (50)

fixes A(t) = 2/R₀. In other words, minimizing the free energy at fixed area is equivalent to minimizing

F = F̃ + λ ( ∫ d²u e^ϕ − 2R₀^{−1} ),

where λ is treated as a Lagrange multiplier, given by Eq. (50). Analyzing the dynamics of the shaping of these surfaces requires the formulation of a gradient flow based on the free energy contributions in F = F_Q + F_g + F_el, leading to an equation of the form

∂_t ϕ = −(1/√g) δF/δϕ.

Here we do not solve this complicated equation, but resort to heuristic arguments to illuminate the basic physics. To gain insight, we take the lemon as the initial geometry and study its deformation under the flow. We start with a lemon such that at B = 0 it is a stationary solution. We then turn on the bending term and consider the effect of B > 0 on the dynamics.
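Equation (47) can be verified numerically on the unit sphere, where ρ(u) = sech(u), ϕ = 2 ln sech(u), and both principal curvatures must equal 1. A minimal sketch, using finite differences for ϕ′ and ϕ″ (the point u = 0.7 is arbitrary):

```python
import math

def phi(u):
    # Conformal factor of the unit sphere in the (u, theta) coordinates:
    # rho(u) = sech(u), phi = ln rho^2.
    return 2.0 * math.log(1.0 / math.cosh(u))

def curvatures(u, eps=1e-4):
    # finite-difference phi' and phi''
    p1 = (phi(u + eps) - phi(u - eps)) / (2 * eps)
    p2 = (phi(u + eps) - 2 * phi(u) + phi(u - eps)) / eps**2
    root = math.sqrt(4.0 - p1 * p1)
    k1 = -p2 * math.exp(-phi(u) / 2) / root      # Eq. (47a)
    k2 = 0.5 * math.exp(-phi(u) / 2) * root      # Eq. (47b)
    return k1, k2

k1, k2 = curvatures(0.7)
print(k1, k2)  # both ~1.0, so R = k1 * k2 ~ 1, as expected for the unit sphere
```

The same routine applied to ϕ of the lemon (ρ = (1−χ) sech((1−χ)u)) gives κ₁ ≠ κ₂ away from the equator, which is what the bending energy in Eq. (49c) penalizes.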
Naively, we would expect the bending energy to flatten the lemon so that the principal curvatures become equal, as for the sphere. We see in Fig. 5 that near the equator (u = 0) the geometry does indeed flatten. However, near the tips, in order to keep the area constant, the surface becomes more conical.

D. Connecting our results to the shapes of pollen grains

We now apply the preceding results to the shapes of some spherical shell-like pollen grains, which fold reversibly into lemon-like shapes when dehydrated and return to spherical shells when hydrated [35]. We assume polar order, i.e., p = 1. On the sphere, the low-energy configuration involves two +1 defects at the north and south poles [36]. Although so far we have assumed that we are in the ordered phase, |Q| ≠ 0, we now extend the potential V[Q] to allow |Q| = 0, i.e., to account for the disordered phase as well. In terms of the humidity ρ and a critical humidity ρ_c at which the grain switches between the spherical and conically folded phases, we let V[Q] take the form

V[Q] = ε^{−2} (1 + r|Q|²)²,

where r ∝ (ρ − ρ_c). For r > 0 the minimum is at Q = 0; for r < 0 it is at Q ≠ 0. There is thus a second-order phase transition at ρ = ρ_c, which separates the hydrated and dehydrated phases. In the hydrated phase, corresponding to Q = 0, there are no topological defects, and the pollen grains remain spherical. In the dehydrated phase, corresponding to Q ≠ 0, two topological defects at the north and south poles drive the pollen grains toward the shape of a lemon, deformed by the bending energy term, as in Fig. 5. This allows us to recover the two geometries (Fig. 5) shown in Ref. [35] and to explain their origin in terms of the topological defects that drive these shape changes.

VIII. DISCUSSION

Our minimal framework for the geometry of curved surfaces with frozen p-atic defects, driven by relaxational dynamics, leads to their diffusive equilibration.
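The order-disorder switch encoded in the extended potential above can be checked by brute force; a minimal sketch (ε is set to 1 and r = ±0.5 are illustrative values, not taken from the text):

```python
# Minimize V(q) = (1 + r*q)^2 over q = |Q|^2 >= 0 on a grid (epsilon set to 1).
def argmin_q(r, qmax=10.0, n=100001):
    qs = [qmax * k / (n - 1) for k in range(n)]
    return min(qs, key=lambda q: (1.0 + r * q) ** 2)

print(argmin_q(+0.5))  # 0.0 -> disordered (hydrated) phase, no defects
print(argmin_q(-0.5))  # 2.0 -> ordered (dehydrated) phase, |Q| != 0 at q = -1/r
```

For r > 0 the quartic is monotone on q ≥ 0 and the minimum sits at q = 0, while for r < 0 it vanishes at q = −1/r, which is the sign change that nucleates the two polar defects and hence the lemon shape.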
In particular, we show that a positive (negative) defect can dynamically generate a cone (hyperbolic cone), and we predict that the half-cone angle β satisfies 1 − sin β ≤ (1/p)(1 − 1/(2p)). Although we focused primarily on the intrinsic geometry of surfaces, we showed that for axisymmetric surfaces, where the extrinsic geometry can be deduced entirely from the intrinsic geometry, we can deduce the changes in extrinsic shape as well. For nominally flat surfaces, this leads to the simple, intuitive prediction that in the presence of a positive defect a bump forms with height profile h(t) ∼ √t at early times t, while for polar order on spheres we find that the resulting stationary geometry is a deformed lemon. More generally, we can ask what would happen if the defects were mobile and moved in response to spatial variations in the geometry, while themselves changing the surface geometry. Over long times, if we have both positive and negative defects, naively we would expect them to annihilate each other. However, for charges of the same sign, as in the case of the sphere, there can in principle be a steady state for the defects. For example, Ref. [36] found equilibrium configurations of p-atic defects on a sphere. Using these equilibrium configurations as the initial condition for geometric growth, we find that the defects do not move, but the surface develops conical singularities at the defect locations, thus pinning the defects. For p = 1, this yields the lemon configuration. Understanding the various equilibrium configurations as a function of the number and type of defects is a natural next question. While our study has mainly focused on positive defects, there are other cases where an equilibrium configuration with both positive and negative defects can be attained; for example, on a torus with varying mean and Gaussian curvature, plus-minus defect pairs can nucleate [37].
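The half-cone-angle bound quoted above follows directly from χ_max = σ − σ²/2 evaluated at the minimal charge σ = 1/p; a short sketch tabulating the sharpest allowed cone for the first few p:

```python
import math

# chi_max = (1/p) * (1 - 1/(2p)); the sharpest allowed half-cone angle obeys
# sin(beta_min) = 1 - chi_max.
results = {}
for p in (1, 2, 3):
    chi_max = (1.0 / p) * (1.0 - 1.0 / (2 * p))
    beta_min = math.degrees(math.asin(1.0 - chi_max))
    results[p] = (chi_max, beta_min)
    print(p, chi_max, beta_min)

# p = 1: chi_max = 0.5,   beta_min = 30 degrees (the text's beta >= pi/6)
# p = 2: chi_max = 0.375, beta_min ~ 38.7 degrees (the nematic bound 3/8)
```

As p grows, χ_max → 0 and β_min → 90°: higher p-atics can only support increasingly shallow cones.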
In active systems, we might expect that activity can stabilize defect configurations of both signs, as shown for example in [15], and another interesting direction is to study these cases further. With the recently increasing interest in the mathematical and physical study of textiles [38,39] that are knit or woven from filaments, or active versions thereof, our study also suggests new ways to engineer shape: using p-atic defects to generate complex curvature patterns and enhance drapability on the human body, building on empirical approaches long known to artists and artisans.

FIG. 1. Examples of p-atic liquid crystals exhibiting topological defects. (a) Hydra, adapted from Fig. 1 of [6]. Schematics in the left and right corners depict textures of +1 and ±1/2 defects. Insets: zoomed-in pictures of the corresponding actin fiber orientation and scalar order parameter. (b) Starfish embryos, adapted from Fig. 1 of [18]. Schematics in the left and right corners depict textures of ±1/6 defects.

FIG. 2. Plot of ϕ(u) for the exact diffusion equation (Eq. (20)) and the linearized equation (Eq. (21)) for χ = 0.5.

FIG. 3. Plots of h(r′, t) (Eq. (28)). (a): h(r′, t) for t = 1.0. (b): h(r′, t) for t = 0.5, 1.0, ..., 2.5. As t increases, the curve is dilated by a factor of √t (in both r′ and h(r′, t)). Parameters used are D = 1.0 and χ = 0.5.

FIG. 4. Plot of the saddle geometry (Eq. (31)) for χ = −0.15, for (a) m = 2 (the regular saddle) and (b) m = 3 (the monkey saddle).

FIG. 5. (a): Plot of the lemon geometry (Eq. (36)) for a = 0.7. (b): Plot of the long-time solution of the flow of Sec. VII C, starting from an initial spherical configuration. Parameters used are χ = 0.1 and B/(K + K′ + 2K_ϕ) = 0.6. (c): Figure adapted from Fig. 2 of Ref. [35]. For both lily pollen (left) and euphorbia pollen (right), spherical shapes evolve into lemon shapes.
ACKNOWLEDGMENTS

We thank David Nelson and Grace Zhang for valuable discussions of defect dynamics on a cone, and Pengfei Guan, Craig Hodgson, Puskar Mondal, Freid Tong, Marc Troyanov, and Shing-Tung Yau for valuable discussions on Ricci flow equations and on reconstructing the embedding from the intrinsic metric. We would also like to thank Yeonsu Jung for discussions of experimental realizations of the model. This work is partially supported by the Center for Mathematical Sciences and

[1] M. Deserno, Fluid lipid membranes: From differential geometry to curvature stresses, Chemistry and Physics of Lipids 185, 11 (2015); membrane mechanochemistry: from the molecular to the cellular scale.
[2] T. B. Saw, A. Doostmohammadi, V. Nier, L. Kocgozlu, S. Thampi, Y. Toyama, P. Marcq, C. T. Lim, J. M. Yeomans, and B. Ladoux, Topological defects in epithelia govern cell death and extrusion, Nature 544, 212-216 (2017).
[3] K. Kawaguchi, R. Kageyama, and M. Sano, Topological defects control collective dynamics in neural progenitor cell cultures, Nature 545, 327-331 (2017).
[4] C. Blanch-Mercader, P. Guillamat, A. Roux, and K. Kruse, Quantifying material properties of cell monolayers by analyzing integer topological defects, Physical Review Letters 126, 028101 (2021).
[5] K. Copenhagen, R. Alert, N. S. Wingreen, and J. W. Shaevitz, Topological defects promote layer formation in Myxococcus xanthus colonies, Nature Physics 17, 211 (2021).
[6] Y. Maroudas-Sacks, L. Garion, L. Shani-Zerbib, A. Livshits, E. Braun, and K. Keren, Topological defects in the nematic order of actin fibres as organization centres of hydra morphogenesis, Nature Physics (2020), doi:10.1038/s41567-020-01083-1.
[7] H. S. Seung and D. R. Nelson, Defects in flexible membranes with crystalline order, Phys. Rev. A 38, 1005 (1988).
[8] J.-M. Park and T. C. Lubensky, Topological defects on fluctuating surfaces: General properties and the Kosterlitz-Thouless transition, Phys. Rev. E 53, 2648 (1996).
[9] M. W. Deem and D. R. Nelson, Free energies of isolated five- and sevenfold disclinations in hexatic membranes, Phys. Rev. E 53, 2551 (1996).
[10] J. R. Frank and M. Kardar, Defects in nematic membranes can buckle into pseudospheres, Phys. Rev. E 77, 041705 (2008).
[11] L. Giomi, Hyperbolic interfaces, Phys. Rev. Lett. 109, 136101 (2012).
[12] L. Metselaar, J. M. Yeomans, and A. Doostmohammadi, Topology and morphology of self-deforming active shells, Physical Review Letters 123, 208001 (2019).
[13] L. A. Hoffmann, L. N. Carenza, J. Eckert, and L. Giomi, Defect-mediated morphogenesis (2021), arXiv:2105.15200 [cond-mat.soft].
[14] S. C. Al-Izzi and R. G. Morris, Active flows and deformable surfaces in development, Seminars in Cell & Developmental Biology 120, 44 (2021); special issue: The mechanics of development, edited by Timothy Saunders and Ivo Telley.
[15] F. Vafa and L. Mahadevan, Active nematic defects and epithelial morphogenesis, Phys. Rev. Lett. 129, 098102 (2022).
[16] F. Vafa, G. H. Zhang, and D. R. Nelson, Defect absorption and emission for p-atic liquid crystals on cones, Phys. Rev. E 106, 024704 (2022).
[17] C. F. Gauss, On conformal representations, in A Source Book in Mathematics, edited by D. E. Smith (Dover, 1959), pp. 463-475.
[18] T. H. Tan, A. Mietke, J. Li, Y. Chen, H. Higinbotham, P. J. Foster, S. Gokhale, J. Dunkel, and N. Fakhri, Odd dynamics of living chiral crystals, Nature 607, 287 (2022).
[19] A. Polyakov, Quantum geometry of bosonic strings, Physics Letters B 103, 207 (1981).
[20] T. J. Willmore, Note on embedded surfaces, An. Sti. Univ. "Al. I. Cuza" Iasi Sect. I a Mat. (N.S.) B 11, 20 (1965).
[21] W. Helfrich, Elastic properties of lipid bilayers: theory and possible experiments, Zeitschrift für Naturforschung C 28, 693 (1973).
[22] V. Vitelli and A. M. Turner, Anomalous coupling between topological defects and curvature, Phys. Rev. Lett. 93, 215301 (2004).
[23] R. S. Hamilton, Three-manifolds with positive Ricci curvature, J. Differential Geom. 17, 255 (1982).
[24] D. Hilbert, Ueber Flächen von constanter Gaussscher Krümmung, Transactions of the American Mathematical Society 2, 87 (1901).
[25] M. Troyanov, Metrics of constant curvature on a sphere with two conical singularities, in Differential Geometry: Proceedings of the 3rd International Symposium, held at Peñiscola, Spain, June 5-12, 1988, edited by F. J. Carreras, O. Gil-Medrano, and A. M. Naveira (Springer Berlin Heidelberg, Berlin, Heidelberg, 1989), pp. 296-306.
[26] M. Troyanov, Prescribing curvature on compact surfaces with conical singularities, Transactions of the American Mathematical Society 324, 793 (1991).
[27] F. Luo and G. Tian, Liouville equation and spherical convex polytopes, Proceedings of the American Mathematical Society 116, 1119 (1992).
[28] H. Yin, Ricci flow on surfaces with conical singularities, Journal of Geometric Analysis 20, 970 (2010).
[29] R. Mazzeo, Y. Rubinstein, and N. Sesum, Ricci flow on surfaces with conic singularities, Analysis & PDE 8, 839 (2015).
[30] D. H. Phong, J. Song, J. Sturm, and X. Wang, The Ricci flow on the sphere with marked points, Journal of Differential Geometry 114, 117 (2020).
[31] J. A. Gálvez, L. Hauswirth, and P. Mira, Surfaces of constant curvature in R^3 with isolated singularities, Advances in Mathematics 241, 103 (2013).
[32] G. Mondello and D. Panov, Spherical metrics with conical singularities on a 2-sphere: angle constraints (2015), arXiv:1505.01994 [math.DG].
[33] M. P. do Carmo, Differential Geometry of Curves & Surfaces, revised & updated second ed. (Dover Publications, Mineola, New York, 2016).
[34] J. Arteaga and M. A. Malakhaltsev, Infinitesimal Ricci flows of minimal surfaces in the three-dimensional Euclidean space, Russian Mathematics 51, 29 (2007).
[35] E. Katifori, S. Alben, E. Cerda, D. R. Nelson, and J. Dumais, Foldable structures and the natural design of pollen grains, Proceedings of the National Academy of Sciences 107, 7635 (2010).
[36] T. C. Lubensky and J. Prost, Orientational order and vesicle shape, Journal de Physique II 2, 371 (1992).
[37] M. Bowick, D. R. Nelson, and A. Travesset, Curvature-induced defect unbinding in toroidal geometries, Phys. Rev. E 69, 041102 (2004).
[38] N.-K. Persson, J. G. Martinez, Y. Zhong, A. Maziz, and E. W. Jager, Actuating textiles: next generation of smart textiles, Advanced Materials Technologies 3, 1700397 (2018).
[39] H. Yasuda, P. R. Buskohl, A. Gillman, T. D. Murphey, S. Stepney, R. A. Vaia, and J. R. Raney, Mechanical computing, Nature 598, 39 (2021).
{'fraction_non_alphanumeric': 0.07057779530664564, 'fraction_numerical': 0.04103727075527509, 'mean_word_length': 3.7491103202846974, 'pattern_counts': {'":': 0, '<': 13, '<?xml version=': 0, '>': 11, 'https://': 1, 'lorem ipsum': 0, 'www.': 1, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 34, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'Inspired by epithelial morphogenesis, we consider a minimal model for the shaping of a surface driven by p-atic topological defects. We show that a positive (negative) defect can dynamically generate a (hyperbolic) cone whose shape evolves diffusively, and predict that a defect of charge +1/p leads to a final semi-cone angle β which satisfies the inequality sin β ≥ 1 − 1 p + 1 2p 2 . By exploiting the fact that for axisymmetric surfaces, the extrinsic geometry is tightly coupled to the intrinsic geometry, we further show that the resulting stationary shape of a membrane with negligible bending modulus and embedded polar order is a deformed lemon with two defects at antipodal points. Finally, we close by pointing out that our results may be relevant beyond epithelial morphogenesis in such contexts as shape transitions in macroscopic closed spheroidal surfaces such as pollen grains.', 'arxivid': '2303.00007', 'author': ['Farzan Vafa \nCenter of Mathematical Sciences and Applications\nHarvard University\n02138CambridgeMAUSA\n', 'L Mahadevan \nSchool of Engineering and Applied Sciences\nHarvard University\n02138CambridgeMAUSA\n\nDepartments of Physics, and Organismic and Evolutionary Biology\nHarvard University\n02138CambridgeMAUSA\n'], 'authoraffiliation': ['Center of Mathematical Sciences and Applications\nHarvard University\n02138CambridgeMAUSA', 'School of Engineering and Applied Sciences\nHarvard University\n02138CambridgeMAUSA', 'Departments of Physics, and Organismic and Evolutionary Biology\nHarvard University\n02138CambridgeMAUSA'], 'corpusid': 257254991, 'doi': None, 'github_urls': [], 'n_tokens_mistral': 17295, 'n_tokens_neox': 14508, 'n_words': 8971, 'pdfsha': 'a1ac5c150360c710715f94b24b4614c89437e616', 'pdfurls': ['https://export.arxiv.org/pdf/2303.00007v1.pdf'], 'title': ['Statics and diffusive dynamics of surfaces driven by p-atic topological defects', 'Statics and diffusive dynamics of surfaces driven by p-atic topological defects'], 
'venue': []}
arxiv
Finite temperature hadrons from holographic QCD
11 Oct 2010
Floriana Giannuzzi
Dipartimento di Fisica dell'Università degli Studi di Bari and I.N.F.N., Sezione di Bari, I-70126 Bari, Italy

The properties of scalar mesons and glueballs at finite temperature are analyzed through a bottom-up holographic approach. We focus on the spectral functions and mass spectra. A discussion of hadron dissociation and the deconfinement phase transition is also put forward.

PACS numbers: 12.38.Mh, 11.25.Tq, 25.75.Nq

Since its appearance in 1998, the AdS/CFT correspondence [1] has been considered a very promising tool for studying the non-perturbative regime of QCD, by relating it to a weakly-coupled theory. In particular, the correspondence is better suited to the finite-temperature case, one of the main reasons being that in this limit the theory is no longer conformal. The conjecture states that type IIB string theory in an AdS_5 × S^5 space, where AdS_5 is a five-dimensional anti-de Sitter space and S^5 is a five-dimensional sphere, is dual to N = 4 Super-Yang-Mills theory in four-dimensional Minkowski space. It is also known as the holographic conjecture, since the gauge theory can be constructed through a projection of the gravity theory onto the boundary of the AdS space [2]. According to the correspondence, a five-dimensional field φ, whose boundary value is φ_0, is related to a four-dimensional operator O by:

Z_S[φ_0(x)] = ⟨ exp( ∫_{∂AdS_{d+1}} φ_0(x) O(x) ) ⟩_CFT ;  (1)

strictly speaking, the generating functional on the left-hand side of (1), computed at φ_0, is equal to the generating functional of the correlation functions of the operator whose source is φ_0, on the right-hand side. In Poincaré coordinates, the AdS space is characterized by the metric

ds² = (R²/z²) (dt² − dx⃗² − dz²) ,  (2)

where z is called the holographic coordinate. The boundary of the space is at z = 0.
Up to now, the dual theory of QCD has not been found. From a phenomenological point of view, one tries to construct an ad hoc theory in a five-dimensional AdS space such that its projection onto the four-dimensional boundary reproduces as many QCD properties as possible. One such bottom-up approach is the Soft Wall model [3], in which conformal symmetry (proper of AdS spaces) is broken by inserting a factor e^{−c²z²} in the action, with c a mass scale, here fixed from the ρ meson mass: c = m_ρ/2 = 388 MeV [3]. In this holographic picture, temperature effects are introduced by modifying the metric of the anti-de Sitter space. In this respect, one can either impose a periodicity of the Euclidean time, in which case the temperature is the inverse of the compactification radius, or introduce a black hole in the metric along the fifth dimension z, such that the temperature is related to the inverse of the position of the black-hole horizon. From now on, the former case will be referred to as the "Thermal-AdS" model and the latter as the "AdS-Black Hole" model. The "Thermal-AdS" model is therefore characterized by the metric

ds² = (R²/z²) (dτ² + dx⃗² + dz²) ,  0 < τ < β′ = 1/T ,  (3)

where τ is the Euclidean time and T is the temperature. On the other hand, the "AdS-Black Hole" metric is given by

ds² = (R²/z²) ( f(z) dτ² + dx⃗² + dz²/f(z) ) ,  f(z) = 1 − z⁴/z_h⁴ ,  (4)

where z_h is the position of the black-hole horizon, such that

0 < z < z_h = 1/(πT) ;  (5)

in this case, the metric is smooth and complete if and only if the Euclidean time is periodic [4], with period β = 1/T = π z_h.
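The period β = πz_h quoted above follows from the standard smoothness (no conical singularity) condition at the horizon, T = |f′(z_h)|/(4π). A quick numerical cross-check (our own sketch; units are arbitrary):

```python
import math

def f(z, zh):
    # Blackening factor of Eq. (4)
    return 1.0 - (z/zh)**4

zh = 1.0                                    # horizon position (arbitrary units)
eps = 1e-6
fprime = (f(zh, zh) - f(zh - eps, zh))/eps  # numerical f'(z_h) (= -4/z_h here)
T_smooth = abs(fprime)/(4.0*math.pi)        # smoothness condition at the horizon
T_text = 1.0/(math.pi*zh)                   # Eq. (5): T = 1/(pi z_h)
```

The two temperatures agree, confirming that the identification z_h = 1/(πT) in Eq. (5) is exactly the smoothness requirement.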
To find out which model better describes QCD holographically at finite temperature, one can either analyze both models separately and compare their outcomes with predictions from other approaches to QCD, or introduce a criterion for determining which metric is the stable one, for instance by comparing the corresponding free energies and choosing the model with the lowest one. Here we consider both possibilities and compute spectral functions of scalar mesons and scalar glueballs. Let us start by analyzing each model separately. From the point of view of spectral functions, "Thermal-AdS" is completely analogous to the zero-temperature model, and it yields the same results: finite-temperature spectral functions are therefore expected to be characterized by zero-width peaks at fixed positions, as at T = 0. "AdS-Black Hole", instead, deserves more attention. In the following, it will be shown how to compute spectral functions in a particular case, i.e. for scalar glueballs; the procedure, however, is general and can be extended to any other observable. Scalar glueballs have been investigated in the Soft Wall model at zero temperature in [5]. The five-dimensional field dual to the QCD operator β(α_s) Tr(G²(x)), where β(α_s) is the Callan-Symanzik function, is a massless scalar field X(x, z), given the relation between the mass and the conformal dimension ∆ of a p-form operator [2]:

m_5² R² = (∆ − p)(∆ + p − 4) .  (6)

The five-dimensional action for this field in the Soft Wall model is

S = (1/2k) ∫ d⁵x √g e^{−c²z²} g^{MN} ∂_M X ∂_N X ,  (7)

where g is the determinant of the metric and k is a parameter which makes the action dimensionless. In order to compute spectral functions we move to Fourier space by defining

X(x, z) = ∫ d⁴q e^{iq·x} X̃(q, z) ,  (8)

and we write the Fourier-transformed field X̃(q, z) = K(q, z) X̃_0(q) as the product of the bulk-to-boundary propagator K(q, z) and the source X̃_0(q).
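Before turning to the dynamics, Eq. (6) can be checked directly for the two standard cases used in this framework (a quick consistency check of ours, not part of the original text): the 0-form β(α_s) Tr G² with ∆ = 4, and a conserved vector current with ∆ = 3, both give a massless dual field:

```python
def m5sq_R2(delta, p):
    # Eq. (6): m_5^2 R^2 = (Delta - p)(Delta + p - 4)
    return (delta - p)*(delta + p - 4)

glueball = m5sq_R2(4, 0)  # Tr G^2: dimension-4 scalar -> massless dual scalar
current = m5sq_R2(3, 1)   # conserved current: dimension-3 1-form -> massless dual vector
```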
From the action (7) one can derive the equation of motion for K(q, z):

K″(q, z) − [ (4 − f(z) + 2c²z² f(z)) / (z f(z)) ] K′(q, z) + [ q_0²/f(z)² − q⃗²/f(z) ] K(q, z) = 0 ;  (9)

for simplicity, the case q⃗ = 0 (rest frame of the glueball) will be considered. The first boundary condition is K(q, 0) = 1, since the source X̃_0 is defined as the value of the field X̃ at z = 0; the second is that the bulk-to-boundary propagator must behave as an infalling solution near the black-hole horizon:

K(q, u) → (1 − u)^{−i q_0 z_h/4}  as u → 1 ,  (10)

where u = z/z_h. The latter condition allows us to obtain, in the end, the retarded Green's function. By virtue of Eq. (1), the two-point correlation function can be computed by functionally differentiating the action (7) twice with respect to the source X̃_0(q), thus obtaining

Π(q_0²) = δ²S/δX̃_0 δX̃_0 |_{X̃_0=0} = (R³/2k) [ (f(u)/(u³ z_h⁴)) e^{−c²z²} K(q, u) ∂_u K(q, u) ]_{u=0} .  (11)

The spectral function is the imaginary part of the Green's function, ρ(q_0²) = ℑ(Π(q_0²)); the first two peaks of the spectral function are shown in Fig. 1 for four values of the temperature [6]. The figure shows that at low temperatures the spectral function is characterized by very narrow peaks, which become broader and move towards smaller values of the mass as the temperature is increased; at T ≳ 44 MeV we find no more peaks in the spectral function, so bound states no longer exist. We also notice that excited states dissociate at lower temperatures than the ground state. By fitting each peak with a Breit-Wigner function,

ρ(x) = a m Γ x^b / [ (x − m²)² + m² Γ² ] ,  (12)

it is possible to find how the squared masses and widths of glueballs vary with temperature; they are shown in Fig. 2. As expected, we find that the masses decrease with increasing temperature, starting from the T = 0 values m_n² = 4c²(n + 2), while the widths increase.
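The T = 0 spectrum m_n² = 4c²(n + 2) and the peak-extraction idea behind the Breit-Wigner fit of Eq. (12) can be illustrated in a few lines (our own sketch; we take b = 0, for which the maximum of Eq. (12) sits exactly at x = m²):

```python
import math

c = 0.388  # GeV, soft-wall scale c = m_rho/2

# Zero-temperature scalar-glueball masses, m_n^2 = 4 c^2 (n + 2)
masses = [math.sqrt(4.0*c**2*(n + 2)) for n in range(3)]  # ground state ~ 1.10 GeV

def bw(x, a=1.0, m=masses[0], gam=0.05):
    # Breit-Wigner of Eq. (12) with b = 0
    return a*m*gam/((x - m*m)**2 + m*m*gam*gam)

# Locating the peak on a grid recovers the squared mass, as done for Fig. 2
xs = [1.0 + 1e-4*i for i in range(5000)]
peak = max(xs, key=bw)
```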
The qualitative behavior we have found for spectral functions, masses and widths is similar to the one observed in lattice simulations [7], but the dissociation temperature of the ground-state glueball, i.e. the temperature at which the first peak in the spectral function disappears, turns out to be much lower than the one found in lattice studies. Similar results are observed in the scalar-meson sector [6]; the two main differences are that the dissociation temperature of the ground state (around 75 MeV) is higher than the one found for glueballs, and that the ground-state squared mass, although initially decreasing with temperature from its T = 0 value [8], at a certain temperature starts growing until dissociation. The mass and width of scalar mesons versus temperature are plotted in Fig. 3. In general, one can see that the behavior of the spectral function in Fig. 1 is quite universal, since it is similar for glueballs, scalar mesons and also vector mesons [9]. A further possibility for studying holographic QCD at finite temperature is to make the metric a dynamical quantity, so that it can vary with temperature. To determine which metric between "AdS-Black Hole" (4) and "Thermal-AdS" (3) should be used for each value of the temperature, one can compute and compare the corresponding free energies and choose the one with the lowest result [4]. In the Soft Wall model [10], the free energy in "Thermal-AdS" is

V_TH(ǫ) = (4R³/κ²) ∫_0^{β′} dτ ∫_ǫ^∞ dz (1/z⁵) e^{−c²z²} ,  (13)

and in "AdS-Black Hole"

V_BH(ǫ) = (4R³/κ²) ∫_0^{πz_h} dτ ∫_ǫ^{z_h} dz (1/z⁵) e^{−c²z²} ,  (14)

where the cutoff ǫ → 0 has been introduced in order to regularize the two quantities. It turns out that at low (resp. high) temperatures "Thermal-AdS" (resp. "AdS-Black Hole") is the correct metric to use. A first-order Hawking-Page phase transition [11] between the two metrics occurs at T_c ≈ 191 MeV [10], and it has been identified with the deconfinement transition of QCD.
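The transition temperature can be reproduced numerically. Matching the proper periods of the two metrics at z = ǫ and letting ǫ → 0, the difference of Eqs. (13) and (14) reduces (this reduction is our own sketch of the computation in [10]) to the condition ∫_{z_h}^∞ dz e^{−c²z²}/z⁵ = 1/(8 z_h⁴), i.e. E_3(x²) = 1/4 with x = c z_h = c/(πT), where E_3 is the third exponential integral:

```python
import math

c = 0.388  # GeV, soft-wall scale

def E3(y, tmax=80.0, n=8000):
    # E_3(y) = int_1^inf e^(-y t)/t^3 dt, composite Simpson rule on [1, tmax]
    h = (tmax - 1.0)/n
    total = 0.0
    for i in range(n + 1):
        t = 1.0 + i*h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        total += w*math.exp(-y*t)/t**3
    return total*h/3.0

# Bisection for E_3(x^2) = 1/4; E_3 decreases monotonically in its argument
lo, hi = 0.3, 1.2
for _ in range(60):
    mid = 0.5*(lo + hi)
    if E3(mid*mid) > 0.25:
        lo = mid
    else:
        hi = mid
x_star = 0.5*(lo + hi)
Tc = c/(math.pi*x_star)
```

With c = 388 MeV this gives T_c ≈ 191 MeV, in agreement with the value quoted from [10].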
As a matter of fact, in this framework the low-temperature spectral function has to be computed using the "Thermal-AdS" model, so it is characterized by zero-width peaks at fixed positions for every T < T_c. On the other hand, at temperatures higher than the critical one, Fig. 1 shows that the spectral function computed in "AdS-Black Hole" has no peaks, so hadrons have already melted. Thus, in this model the dissociation of scalar glueballs takes place together with deconfinement as a first-order phase transition. The same holds for scalar mesons. Some concluding remarks are then in order. We have analyzed two possible phenomenological models describing finite-temperature QCD in a holographic framework, looking at scalar-glueball spectral functions. If we use a model with a non-dynamical metric, in which the temperature is introduced through a black hole, we get a realistic qualitative description of the behavior of hadrons in a hot medium, but with a temperature scale different from the one predicted by other models of QCD. If we let the metric change, describing the deconfinement transition on the boundary of the AdS space as a Hawking-Page phase transition between two metrics in the bulk, the resulting spectral function in the confined phase is always equal to the zero-temperature one, while becoming suddenly flat in the deconfined phase. The first description better simulates how hadron properties change in a medium, but it seems to fail from a quantitative point of view. The second description may reproduce hadron properties in the limit of large N_c. Therefore, the finite-temperature holographic representation of QCD in this bottom-up approach still requires much effort, and slight modifications of the model may be needed. Recently, some developments of the "AdS-Black Hole" model have been put forward.
In [12] the deconfinement transition in the chemical potential-temperature plane is investigated in the Soft Wall model, using as order parameter the behavior of the static quark-antiquark potential at large distances. Furthermore, in [13] the authors construct a slightly different model, whose parameters are fitted to the masses and decay constants of J/ψ and ψ′, thus finding higher dissociation temperatures also in the light-meson sector.

FIG. 1: Spectral function of scalar glueballs in the model with black hole ("AdS-Black Hole") at T = 21 MeV (solid blue line), T = 25 MeV (dashed purple line), T = 29 MeV (dotted yellow line), and T = 44 MeV (dot-dashed green line).
FIG. 2: Squared mass (left panel) and width (right panel) of scalar glueballs versus temperature in the model with black hole ("AdS-Black Hole").
FIG. 3: Squared mass (left panel) and width (right panel) of scalar mesons versus temperature in the model with black hole ("AdS-Black Hole").

Acknowledgments

This work was supported, in part, by the EU contract No. MRTN-CT-2006-035482, "FLAVIAnet" and by the grant "Borse di ricerca in collaborazione internazionale" by Regione Puglia, Italy. I thank the IPPP, Durham, for hospitality during the completion of this work. Finally, I would like to join the Organizers of QCD@Work 2010 in warmly remembering Beppe Nardulli as a man devoted to science.

[1] J. M. Maldacena, Adv. Theor. Math. Phys. 2, 231 (1998) [Int. J. Theor. Phys. 38, 1113 (1999)] [arXiv:hep-th/9711200].
[2] E. Witten, Adv. Theor. Math. Phys. 2, 253 (1998) [arXiv:hep-th/9802150].
[3] A. Karch, E. Katz, D. T. Son and M. A. Stephanov, Phys. Rev. D 74, 015005 (2006) [arXiv:hep-ph/0602229].
[4] E. Witten, Adv. Theor. Math. Phys. 2, 505 (1998) [arXiv:hep-th/9803131].
[5] P. Colangelo, F. De Fazio, F. Jugeau and S. Nicotri, Phys. Lett. B 652, 73 (2007) [arXiv:hep-ph/0703316]; Int. J. Mod. Phys. A 24, 4177 (2009) [arXiv:0711.4747 [hep-ph]]; H. Forkel, Phys. Rev. D 78, 025001 (2008) [arXiv:0711.1179 [hep-ph]]; H. Boschi-Filho and N. R. F. Braga, Eur. Phys. J. C 32, 529 (2004) [arXiv:hep-th/0209080]; JHEP 0305, 009 (2003) [arXiv:hep-th/0212207].
[6] P. Colangelo, F. Giannuzzi and S. Nicotri, Phys. Rev. D 80, 094019 (2009) [arXiv:0909.1534 [hep-ph]].
[7] N. Ishii, H. Suganuma and H. Matsufuru, Phys. Rev. D 66, 014507 (2002) [arXiv:hep-lat/0109011]; Phys. Rev. D 66, 094506 (2002) [arXiv:hep-lat/0206020]; X. F. Meng, G. Li, Y. Chen, C. Liu, Y. B. Liu, J. P. Ma and J. B. Zhang, Phys. Rev. D 80, 114502 (2009) [arXiv:0903.1991 [hep-lat]].
[8] P. Colangelo, F. De Fazio, F. Giannuzzi, F. Jugeau and S. Nicotri, Phys. Rev. D 78, 055009 (2008) [arXiv:0807.1054 [hep-ph]].
[9] M. Fujita, K. Fukushima, T. Misumi and M. Murata, Phys. Rev. D 80, 035001 (2009) [arXiv:0903.2316 [hep-ph]].
[10] C. P. Herzog, Phys. Rev. Lett. 98, 091601 (2007) [arXiv:hep-th/0608151].
[11] S. W. Hawking and D. N. Page, Commun. Math. Phys. 87, 577 (1983).
[12] P. Colangelo, F. Giannuzzi and S. Nicotri, arXiv:1008.3116 [hep-ph].
[13] H. R. Grigoryan, P. M. Hohler and M. A. Stephanov, Phys. Rev. D 82, 026005 (2010) [arXiv:1003.1138 [hep-ph]].
{'fraction_non_alphanumeric': 0.07436125587898818, 'fraction_numerical': 0.05357823821024533, 'mean_word_length': 3.883612662942272, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 0, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 4, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'abstract': 'The properties of scalar mesons and glueballs at finite temperature are analyzed through a bottomup holographic approach. We focus on the spectral functions and mass spectra. A discussion on hadron dissociation and deconfinement phase transition is also put forward. PACS numbers: 12.38.Mh,11.25.Tq,25.75.Nq Since its appearence in 1998, the AdS/CFT correspondence [1] has been considered as a very promising tool for studying the non-perturbative regime of QCD, by relating it to a weakly-coupled theory. In particular, the correspondence can be better applied to the finite temperature case, one of the main reason being that in this limit the theory is no more conformal.The conjecture states that type IIB string theory in a AdS 5 × S 5 space, where AdS 5 is a five-dimensional anti-de Sitter space and S 5 is a five-dimensional sphere, is dual to N = 4 Super-Yang-Mills theory in a four-dimensional Minkowski space. It is also known as the holographic conjecture since the gauge theory can be constructed through a projection of the gravity theory on the boundary of the AdS space [2]. According to the correspondence, a fivedimensional field φ, whose boudary value is φ 0 , is related to a four-dimensional operator O by:', 'arxivid': '1010.2161', 'author': ["Floriana Giannuzzi \nDipartimento di Fisica dell'Università degli Studi di Bari\nF.N\nSezione di BariI-70126, I-70126Bari, BariItaly I.N., Italy\n"], 'authoraffiliation': ["Dipartimento di Fisica dell'Università degli Studi di Bari\nF.N\nSezione di BariI-70126, I-70126Bari, BariItaly I.N., Italy"], 'corpusid': 119113985, 'doi': '10.1063/1.3536582', 'github_urls': [], 'n_tokens_mistral': 5536, 'n_tokens_neox': 4585, 'n_words': 2659, 'pdfsha': 'feac3976b7ed3bd1ab98f9065cf6bea0235d41c5', 'pdfurls': ['https://arxiv.org/pdf/1010.2161v1.pdf'], 'title': ['Finite temperature hadrons from holographic QCD', 'Finite temperature hadrons from holographic QCD'], 'venue': []}
Thermalization of Isolated BEC Under a PT-Symmetric Environment

Javed Akram, Asad Hussain, Muhammad Nouman (Department of Physics, COMSATS University Islamabad, Islamabad, Pakistan)
Jameel Hussain (Department of Electronics, Quaid-i-Azam University, Islamabad, Pakistan)

(Dated: November 2, 2022)

The eigenstate thermalization hypothesis (ETH) postulates that thermalization occurs through the individual eigenstates of the system's Hamiltonian, but it sheds no light on the dynamics that lead toward thermalization. In this paper, we observe the thermalization of a Bose-Einstein condensate (BEC) confined in an optical lattice potential that is embedded in a harmonic trap. Such an optical lattice potential offers local friction to the oscillating BEC, and the spread in the temporal density plot of the BEC shows its thermalization.
Moreover, we observe that the presence of a PT-symmetric potential greatly influences the BEC dynamics and the thermalization of the system, and offers a way to steer the mean position of the BEC to a desired location for a desired length of time.

I. INTRODUCTION

Confirmation of long-standing, diverse ideas of condensed-matter physics began with the first realization of Bose-Einstein condensates (BECs) of dilute atomic gases [1-4]. These ideas include the nature of superfluidity, the critical velocity for the onset of dissipation [5, 6], the quantization of vortices [7-9], the generation and dynamics of soliton waves [10-13], and the impact of impurities in various practical applications [10, 11, 14]. Recently, BECs have also made it possible to probe the long-standing question of the thermalization of an isolated quantum system, both theoretically and experimentally [15-17]. In these studies, thermalization was observed in a double-well potential under the influence of the Josephson interaction [15], and it was also investigated experimentally in an optical-lattice environment [17]. In this paper, we study the thermalization of a BEC in a harmonic trap embedded with an optical lattice potential, and we investigate the impact of a PT-symmetric periodic potential on this thermalization. We initially trap the BEC in a harmonic trap; after equilibrium is reached, we shift the harmonic potential minimum for times t > 0 and simultaneously switch on the optical lattice embedded in the harmonic trap. Such a lattice potential provides friction for the dipole oscillations of the BEC. The idea of a PT-symmetric potential represents a scenario with alternating gain and loss regions in an optical lattice, as explained, e.g., for double-well confinement [18]. In an optical lattice, the transmission between adjacent wells is controlled by controlling the tunneling between them.
The periodic lattice potential here acts as a medium that absorbs part of the kinetic and potential energy of the BEC; this medium helps bring the BEC into thermal equilibrium. Furthermore, we also test the localization and thermalization of the BEC under a periodic PT-symmetric environment. The idea of a non-Hermitian Hamiltonian obeying PT symmetry was introduced by Bender and Boettcher [19] and appears as an extension of quantum mechanics from the real to the complex domain. The PT-symmetric condition is more physical than the earlier, strictly mathematical condition of Hermiticity of the Hamiltonian for real eigenvalues. The operators P and T represent parity reflection and time reversal, respectively: P acts on the position and momentum operators as P: x → −x, p → −p, while T acts as T: x → x, p → −p, i → −i. The remainder of this paper is organized as follows. In Sec. II we describe the working models (analytical, numerical, and Ehrenfest methods) and discuss all the relevant issues. In Sec. III we discuss our results and present figures for the BEC dynamics through a periodic potential embedded on a harmonic confining potential; we also discuss the impact of the PT-symmetric terms on the thermalization of the BEC and examine the effect of the complex part of the potential in the PT-symmetric Hamiltonian. The conclusion follows in Sec. IV, together with suggestions for future research, where we compare the BEC dynamics with and without PT symmetry.

II. THEORETICAL MODELS

To accurately model an elongated BEC, we use a dimensionless quasi-1D Gross-Pitaevskii equation (GPE) [20].
The dimensionless equation is obtained by measuring time t in units of ω_x^{-1}, length in units of the harmonic-oscillator length L = \sqrt{\hbar/(m\omega_x)} along the x axis, and energy in units of \hbar\omega_x:

    i \frac{\partial \psi(x,t)}{\partial t} = \left[ -\frac{1}{2}\frac{\partial^2}{\partial x^2} + U(x) + g_s|\psi|^2 \right] \psi(x,t),    (1)

where ψ(x,t) is the dimensionless macroscopic wavefunction of the BEC, and t and x stand for the time and the 1D space coordinate, respectively. We use a normalized wavefunction, ∫|ψ(x,t)|² dx = 1. The 1D interaction strength is g_s = 2N a_s ω_r/(ω_x L), where a_s is the s-wave scattering length [11], N is the number of atoms in the BEC, and ω_r is the radial frequency of the harmonic trap [10]. To study the thermalization and the dipole oscillations of an isolated BEC, we propose the trapping potential

    U(x) = V(x) + iW(x),    (2)

where the real part is V(x) = x²/2 + V_0 cos²(x): the first term represents the dimensionless harmonic confinement, while the second term models the periodic lattice potential of the system, which serves as an optical lattice of strength V_0 and offers friction to the BEC during its dipole oscillations. The complex part, W(x) = W_0 sin(x), accounts for the gain and loss of BEC atoms; such a potential makes our system non-Hermitian, but it obeys the PT-symmetric condition. Here W(x) < 0 represents the loss and W(x) > 0 the gain of BEC atoms.

A. Analytical method

The solution of the time-dependent GPE by a variational approach [21-24] helps to extract qualitative and quantitative information about the system. The variational approach relies on the initial choice of the trial wavefunction; in our case, we use a Gaussian wavefunction with time-dependent parameters. This approach yields second-order ordinary differential equations for the time-dependent variational parameters, which in turn characterize the dynamics of the BEC.
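As a quick numerical check of the PT condition U(−x) = U*(x) for the trapping potential of Eq. (2) (a sketch; the values V_0 = 5 and W_0 = 0.2 match the paper's dimensionless choices, while the grid is illustrative):

```python
import numpy as np

def U(x, V0=5.0, W0=0.2):
    """Complex trapping potential U(x) = V(x) + i W(x) of Eq. (2)."""
    V = 0.5 * x**2 + V0 * np.cos(x)**2  # harmonic trap + optical lattice (even)
    W = W0 * np.sin(x)                  # antisymmetric gain/loss profile (odd)
    return V + 1j * W

# PT symmetry requires U(-x) = conj(U(x)): the real part must be even
# in x and the imaginary part odd, which holds for this choice.
x = np.linspace(-10.0, 10.0, 2001)
assert np.allclose(U(-x), np.conj(U(x)))
```

This even/odd split is exactly why W(x) = W_0 sin(x) keeps the non-Hermitian system PT-symmetric.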
Here we take the initial ansatz

    \psi(x,t) = \frac{1}{\sqrt{a(t)\sqrt{\pi}}} \exp\left[ -\frac{(x-x_0(t))^2}{2a(t)^2} + i x\,\alpha(t) + i x^2 \beta(t) \right],    (3)

a Gaussian distribution centered at x_0(t). Here x is the dimensionless space coordinate, x_0(t) is the dimensionless mean position of the BEC, a(t) describes the dimensionless width of the BEC, and α(t) and β(t) are variational parameters. To find the unknown variational parameters, we write the Lagrangian density of our system as

    \mathcal{L} = \frac{i}{2}\left( \psi \frac{\partial \psi^*}{\partial t} - \psi^* \frac{\partial \psi}{\partial t} \right) - \frac{1}{2}\left| \frac{\partial \psi}{\partial x} \right|^2 + U(x)|\psi|^2 + \frac{g_s}{2}|\psi|^4.    (4)

Using this Lagrangian density and the trial wavefunction, we find the effective Lagrangian L = ∫ 𝓛 dx of the quantum-mechanical system. We write the total Lagrangian as a sum of two terms, L = L_c + L_nc, where the conservative term L_c contains only the real part of the external potential, while the non-conservative term L_nc describes the complex part. Using L, we determine the complex Ginzburg-Landau equations (CGLE) [18, 25, 26]

    \frac{d}{dt}\frac{\partial L_c}{\partial \dot{s}} - \frac{\partial L_c}{\partial s} = 2\,\mathrm{Re}\left[ \int_{-\infty}^{\infty} i W(x)\, \psi^* \frac{\partial \psi}{\partial s}\, dx \right],    (5)

where s runs over the set of dimensionless variational parameters x_0(t), α(t), a(t), and β(t). Using Eqs. (4) and (5), we determine the time-dependent equation for the dimensionless mean position of the BEC,

    \ddot{x}_0(t) + x_0(t) = V_0 \sin(2 x_0(t))\, e^{-a^2(t)},    (6)

where we have set W(x) = 0, while the dynamics of the width of the BEC is given analytically by

    \ddot{a}(t) + a(t) = \frac{1}{a^3(t)} + \frac{g_s}{\sqrt{2\pi}\, a^2(t)} + 2 a(t) V_0 \cos(2 x_0(t))\, e^{-a^2(t)}.    (7)
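Equations (6) and (7) form a closed system for x_0(t) and a(t) and can be integrated directly; the sketch below uses SciPy with the dimensionless values V_0 = 5, g_s = 3, and x_0(0) = 35 taken from the figures, while the initial width a(0) = 1 and the release from rest are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

V0, g_s = 5.0, 3.0  # lattice strength and 1D interaction strength

def rhs(t, y):
    """Right-hand side of Eqs. (6)-(7); y = (x0, dx0/dt, a, da/dt)."""
    x0, v, a, w = y
    ddx0 = -x0 + V0 * np.sin(2.0 * x0) * np.exp(-a**2)          # Eq. (6)
    dda = (-a + 1.0 / a**3 + g_s / (np.sqrt(2.0 * np.pi) * a**2)
           + 2.0 * a * V0 * np.cos(2.0 * x0) * np.exp(-a**2))    # Eq. (7)
    return [v, ddx0, w, dda]

# BEC released at rest from x0 = 35 with unit width
sol = solve_ivp(rhs, (0.0, 50.0), [35.0, 0.0, 1.0, 0.0],
                rtol=1e-8, atol=1e-8)
```

Since these equations derive from the conservative part L_c of the Lagrangian, the resulting x_0(t) oscillates without net damping, which is exactly the limitation of the variational method discussed below.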
In Eqs. (6) and (7) we deliberately omit the complex part of the potential, i.e., we consider only the conservative part L_c of the Lagrangian; the non-conservative part makes the equations cumbersome, so we do not present those lengthy equations here.

B. Ehrenfest method

The classical approximation of a quantum system can be obtained by the Ehrenfest method [27],

    \langle \ddot{x} \rangle = -\langle V'(x) \rangle,    (8)

where V(x) is the trapping potential for the BEC wave packet. The Ehrenfest theorem leads to the equation for the dimensionless mean position of the BEC,

    \langle \ddot{x} \rangle = -\langle x \rangle + 2 V_0 \cos(\langle x \rangle) \sin(\langle x \rangle),    (9)

so the mean position of the wave packet depends strongly on the optical lattice strength V_0.

C. Numerical method

To solve the quasi-1D GPE numerically, we use the time-splitting spectral method [28]. We choose a time step Δt = 0.0001 and a space step Δx = 0.0177 to discretize the dimensionless quasi-1D GPE, Eq. (1). To give a momentum kick to the BEC wave packet, we initially trap the BEC in the potential V(x) = (x − x_0)²/2, where x_0 is the initial mean position of the BEC. Later, we switch off this trapping potential and switch to the potential V(x) = x²/2 + V_0 cos²(x). In this way, the BEC experiences a kick and starts dipole oscillations in the remaining potential.

III. BEC DYNAMICS

To study the dynamics of the BEC in this closed environment, we compare the analytical, Ehrenfest, and numerical methods discussed in Sec. II (A-C). First, a qualitative comparison of the analytical and numerical results is presented in Fig. 1 in the form of a temporal density plot of the BEC in the absence of the PT-symmetric potential. The BEC dynamics under a PT-symmetric environment is presented in Fig. 5. We discuss the two cases separately in the following two subsections.
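The time-splitting spectral scheme of Sec. II C can be sketched as a split-step Fourier iteration (Strang splitting) for Eq. (1) with the real potential V(x); the grid size, box length, number of steps, and the off-centre Gaussian initial state below are illustrative assumptions, not the production values Δt = 0.0001 and Δx = 0.0177:

```python
import numpy as np

# Illustrative grid for the dimensionless quasi-1D GPE
L_box, N, dt = 40.0, 2048, 1e-4
x = np.linspace(-L_box / 2, L_box / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)   # angular wavenumbers
V0, g_s = 5.0, 3.0
V = 0.5 * x**2 + V0 * np.cos(x)**2          # harmonic trap + lattice

def split_step(psi, dt):
    """One Strang step for i psi_t = [-0.5 d2/dx2 + V + g_s |psi|^2] psi."""
    psi = psi * np.exp(-0.5j * dt * (V + g_s * np.abs(psi)**2))     # half potential step
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))  # full kinetic step
    psi = psi * np.exp(-0.5j * dt * (V + g_s * np.abs(psi)**2))     # half potential step
    return psi

# Normalized Gaussian released off-centre to trigger dipole oscillations
psi = np.exp(-(x - 5.0)**2 / 2.0).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
for _ in range(100):
    psi = split_step(psi, dt)
```

For a real potential both sub-steps are pure phase rotations in their respective representations, so the norm ∫|ψ|² dx is conserved to machine precision; the complex part W(x) would enter the potential step as a real exponential factor and break this conservation, modelling gain and loss.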
A. Without PT-symmetric Potential

In this subsection, we compare and discuss in detail the BEC dynamics without the PT-symmetric potential, i.e., W_0 = 0, using the analytical, Ehrenfest, and numerical methods.

Density dynamics of the BEC

The temporal density graph of the BEC, obtained numerically and analytically, is shown in Fig. 1. For small values of the periodic potential strength, V_0 = 5, the analytical and numerical results agree with each other, as shown in Fig. 1(a-b). However, for higher values of the periodic potential strength, e.g., V_0 = 50, as presented in Fig. 1(c-d), the two plots differ. The analytical result, Fig. 1(d), fails to reflect the physics of the BEC dynamics; in particular, it shows no impact of the scattering of the BEC due to the friction offered by the lattice potential. The numerical result, Fig. 1(c), reveals the impact of the scattering from the peaks of the lattice potential and of the dissipation: quasi-particles are generated at the top of the BEC, which leads to its thermalization. Hence, for the rest of this section we discuss only the numerical results, because they present the true picture of the BEC dynamics. For further investigation, we plot the mean position of the BEC obtained by the numerical method, as shown in Fig. 2. For periodic potential strength V_0 = 0, the BEC performs dipole oscillations without any dissipation in the closed environment. For V_0 = 30, the numerical results show that the mean position of the BEC localizes at the global minimum of the harmonic trap, and the dipole oscillations of the mean position have a smaller amplitude. For a larger value of the periodic potential, V_0 = 100, only small BEC dipole oscillations can be seen, as depicted in Fig. 2.
Initially, the BEC moves toward the global minimum of the external potential, but on the way it loses energy and turns back, even without reaching the global minimum; after some time it becomes localized at a local minimum, as shown in Fig. 2. This localization away from the global minimum of the harmonic potential is due to the loss of energy of the BEC to the periodic potential embedded on the harmonic potential. It is also quite surprising that the tunneling of the BEC is suppressed in this special scenario; eventually the BEC will localize at the global minimum, but only after a long time.

The mean position vs initial energy of the BEC

A qualitative comparison of the dimensionless mean position of the BEC is plotted in Fig. 3 for the three methods: numerical (N), analytical (A), and Ehrenfest (Eh). We compare the impact of the initial potential energy (PE) of the BEC on its dipole oscillations without any PT-symmetric environment. Under this condition, it is evident from Fig. 3 that the initial potential energy of the BEC depends on the choice of its initial mean position x_0; the dimensionless potential energy obeys PE ∝ x_0². The initial energy of the BEC has a considerable influence on the dipole oscillations, as plotted in Fig. 3. For x_0 = 25, the numerical study shows that the dissipation of the BEC results in an earlier localization of its mean position. On the other hand, for higher values of the PE, say x_0 = 50, the BEC essentially experiences only the global harmonic potential and a to-and-fro motion results, the high initial PE compensating for the periodic frictional potential. We note that for low initial potential energy the BEC localizes earlier, as presented in Fig. 3.
We also observe that the Ehrenfest and analytical methods fail to capture the physics of the dimensionless mean position of the BEC: they show no dependence of the mean position on the initial PE of the system, which is not physical. From the numerical calculation, we conclude that a high initial PE of the BEC delays the localization of its mean position; a BEC with high PE maintains dipole oscillations for a longer time. It is therefore appropriate for experimentalists to consider this point when localizing a BEC: one should not place the BEC far away from the global minimum, as this could lead to decoherence in the experiment, which could destroy the BEC. In a classical harmonic system, the total energy is dissipated through the interaction with the environment; in our special case, however, the system is isolated, so the total energy is conserved, as shown in Fig. 4. This means that the periodic potential does not store energy during the thermalization process; instead, it redistributes the energy, a phenomenon that has also been studied in disordered potentials [29]. As can be seen from Fig. 4, for higher values of V_0 the kinetic energy (KE) and the potential energy (PE) oscillate; as time passes, the oscillations become smaller and smaller, which leads to equilibrium. In fact, this energy redistribution indicates another kind of transition from a non-equilibrium to an equilibrium state. Since our system is in an isolated environment, the localization is quite surprising; it can, however, be explained by the thermalization of the BEC. As the BEC starts to move from its initial position, it experiences friction due to the periodic potential. This periodic resistance generates quasi-particles at the top of the BEC, which can be seen in Fig. 1(c).
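The kinetic/potential bookkeeping behind Fig. 4 can be computed spectrally from ψ. As a self-contained sketch (using the non-interacting harmonic-oscillator ground state as a test case, for which the exact dimensionless energies KE = PE = 1/4 are known; grid values are illustrative):

```python
import numpy as np

# Illustrative grid
L_box, N = 40.0, 1024
x = np.linspace(-L_box / 2, L_box / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

# Non-interacting harmonic-oscillator ground state: psi = pi^(-1/4) exp(-x^2/2)
psi = np.pi**(-0.25) * np.exp(-x**2 / 2.0)

def energies(psi, V, g_s=0.0):
    """Kinetic, trap, and interaction energy per particle (dimensionless units)."""
    psi_k = np.fft.fft(psi)
    # Parseval: sum_k |psi_k|^2 / N  equals  sum_x |psi|^2
    ke = 0.5 * np.sum(k**2 * np.abs(psi_k)**2) * dx / N
    pe = np.sum(V * np.abs(psi)**2) * dx
    ie = 0.5 * g_s * np.sum(np.abs(psi)**4) * dx
    return ke, pe, ie

ke, pe, ie = energies(psi, 0.5 * x**2)
```

Tracking ke, pe, and ie along the split-step evolution reproduces the even KE/PE distribution at equilibrium seen in Fig. 4, with their sum constant in the isolated (W_0 = 0) system.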
This is not new: many studies have already pointed out such quasi-particle generation due to the collision of a BEC wave packet with external potentials [30].

B. With PT-Symmetric Environment

In this subsection, we compare the analytical and numerical BEC dynamics by observing the temporal density and the mean position of the BEC under a PT-symmetric environment. The amount of PT symmetry, i.e., the imaginary part of the potential, is controlled by the parameter W_0.

The temporal density graph

The temporal density graph in Fig. 5 is obtained by analytical (right column) and numerical (left column) simulation. For a small amount of PT symmetry, W_0 = 0.2, and a small strength of the external periodic potential, V_0 = 5, the analytically and numerically obtained results for the BEC dynamics are in good agreement, while they disagree for higher values of the periodic potential strength, such as V_0 = 30. For larger values of V_0 we observe once again that the numerical results are consistent with the physics of the dissipative dynamics, whereas the analytical method is no longer valid.

Temporal mean position

In this subsection, we discuss the dynamics of the dimensionless mean position of the BEC, calculated by the numerical method. For a small PT-symmetric potential, the BEC shows the familiar dipole oscillations with a gradual decrease of their amplitude, as shown in Fig. 6; this describes the localization of the BEC, as already discussed in the previous subsection. As we increase the PT-symmetric potential to W_0 = 2, the mean position of the BEC starts dipole oscillations, but around t = 15 the oscillations rapidly cease and the BEC localizes at a local minimum. As time passes, however, the mean position of the BEC jumps to the global minimum, as shown in Fig. 6.
We find this jump quite natural, since particles are continuously ejected from the adjacent wells of the BEC and there is a continuous tendency for the BEC to move towards the global minimum. As we raise the PT-symmetric strength to W_0 = 5, we observe that the mean position of the BEC localizes around x = 30 in a relatively short time, as shown in Fig. 6, and it stays in this local minimum for a relatively long time before switching towards the global minimum. Remarkably, we do not see any localization of the BEC between x = 30 and x = 0. This is quite interesting, as it suggests a kind of digital switching. We conclude that our proposed model can be used to study the localization of the BEC and, additionally, for discrete switching.

IV. CONCLUSION

In this paper, we compared the analytical, numerical, and Ehrenfest methods for studying the BEC dynamics in a dissipative environment, together with the impact of a PT-symmetric potential on these dissipative dynamics. The dissipative environment is created by adding a periodic potential on top of a harmonic potential, and the dissipation is controlled by the height of the periodic potential. We conclude that the analytical and Ehrenfest methods have limitations for larger values of the periodic potential strength V_0, for which only the numerical method remains valid. The presence of a periodic PT-symmetric environment influences the dynamics of the BEC in such a way that it can control the localization of the BEC: by adjusting the PT-symmetric strength W_0 and the periodic potential strength, we can localize the BEC at a desired location for a desired time.
As a future perspective, this work can be extended to spin-orbit-coupled BECs [31] and to dipolar condensates [32, 33].

V. ACKNOWLEDGMENT

Jameel Hussain gratefully acknowledges support from the COMSATS University Islamabad for providing him a workspace.

FIG. 1: (Color online) Comparison of numerical results (left column) and analytical results (right column) of the BEC temporal density. The other dimensionless parameters are: interaction strength gs = 3, initial mean position of the BEC x0 = 35. In the upper row the optical periodic potential is V0 = 5, and in the lower row V0 = 50.

FIG. 2: (Color online) The dimensionless mean position of the BEC versus the dimensionless time for different periodic potentials V0. Other dimensionless parameters are W0 = 0, gs = 3, and the initial mean position of the BEC is x0 = 35.

FIG. 3: (Color online) The dimensionless mean position of the BEC plotted against the dimensionless time t for different initial mean positions of the BEC: x0 = 25 (a), x0 = 35 (b), and x0 = 50 (c). The mean position is calculated by the numerical (blue box), analytical (red circle), and Ehrenfest (black line) methods. Other dimensionless parameters are W0 = 0, gs = 3, and the strength of the periodic potential is V0 = 30.

FIG. 4: (Color online) Energy evolution of a Bose-Einstein condensate. The total energy is conserved during the whole process, and the kinetic energy and the potential energy associated with the harmonic trapping are evenly distributed at equilibrium. The parameters used are (a) V0 = 0, (b) V0 = 30, and (c) V0 = 40.

FIG. 5: (Color online) Comparison of numerical (left column) and analytical (right column) results of the BEC temporal density with a PT-symmetric potential. The dimensionless parameters are: interaction strength gs = 3, initial mean position of the BEC x0 = 35.
The periodic potential strength for the upper row is V0 = 5 with imaginary-part strength W0 = 0.2, while for the lower row the periodic potential strength is V0 = 30 with W0 = 0.2.

FIG. 6: (Color online) The impact of the amount of PT-symmetric potential on the mean position of the BEC. The BEC is initially located at x0 = 35. Other dimensionless parameters are gs = 3, V0 = 30.

* Electronic address: [email protected]
* Electronic address: [email protected]

[1] M. H. Anderson, J. R. Ensher, M. R. Matthews, C. E. Wieman, and E. A. Cornell, Science 269, 198 (1995).
[2] K. B. Davis, M. O. Mewes, M. R. Andrews, N. J. van Druten, D. S. Durfee, D. M. Kurn, and W. Ketterle, Phys. Rev. Lett. 75, 3969-3973 (1995).
[3] C. C. Bradley, C. A. Sackett, and R. G. Hulet, Phys. Rev. Lett. 78, 985-989 (1997).
[4] D. G. Fried, T. C. Killian, L. Willmann, D. Landhuis, S. C. Moss, D. Kleppner, and T. J. Greytak, Phys. Rev. Lett. 81, 3811-3814 (1998).
[5] C. Raman, M. Köhl, R. Onofrio, D. S. Durfee, C. E. Kuklewicz, Z. Hadzibabic, and W. Ketterle, Phys. Rev. Lett. 83, 2502-2505 (1999).
[6] R. Onofrio, C. Raman, J. M. Vogels, J. R. Abo-Shaeer, A. P. Chikkatur, and W. Ketterle, Phys. Rev. Lett. 85, 2228-2231 (2000).
[7] K. W. Madison, F. Chevy, W. Wohlleben, and J. Dalibard, Phys. Rev. Lett. 84, 806-809 (2000).
[8] P. C. Haljan, I. Coddington, P. Engels, and E. A. Cornell, Phys. Rev. Lett. 87, 210403 (2001).
[9] E. Hodby, G. Hechenblaikner, S. A. Hopkins, O. M. Maragò, and C. J. Foot, Phys. Rev. Lett. 88, 010405 (2001).
[10] J. Akram and A. Pelster, Phys. Rev. A 93, 023606 (2016).
[11] J. Akram and A. Pelster, Phys. Rev. A 93, 033610 (2016).
[12] J. Akram and A. Pelster, Laser Physics 26, 065501 (2016).
[13] J. Hussain, J. Akram, and F. Saif, Journal of Low Temperature Physics 195, 429-436 (2019).
[14] J. Akram, Laser Physics Letters 15, 025501 (2018).
[15] A. Posazhennikova, M. Trujillo-Martinez, and J. Kroha, Annalen der Physik 530, 1700124 (2018).
[16] A. Polkovnikov, K. Sengupta, A. Silva, and M. Vengalattore, Rev. Mod. Phys. 83, 863-883 (2011).
[17] S. Trotzky, Y.-A. Chen, A. Flesch, I. P. McCulloch, U. Schollwöck, J. Eisert, and I. Bloch, Nature Physics 8, 325-330 (2012).
[18] J. Hussain, M. Nouman, F. Saif, and J. Akram, Physica B: Condensed Matter 587, 412152 (2020).
[19] C. M. Bender and S. Boettcher, Phys. Rev. Lett. 80, 5243-5246 (1998).
[20] L. Pitaevskii and S. Stringari, Bose-Einstein Condensation, International Series of Monographs on Physics (Clarendon Press, 2003).
[21] D. Anderson, Phys. Rev. A 27, 3135-3145 (1983).
[22] V. M. Pérez-García, H. Michinel, J. I. Cirac, M. Lewenstein, and P. Zoller, Phys. Rev. A 56, 1424-1432 (1997).
[23] V. M. Pérez-García, H. Michinel, J. I. Cirac, M. Lewenstein, and P. Zoller, Phys. Rev. Lett. 77, 5320-5323 (1996).
[24] R. Borghi, European Journal of Physics, 035410.
[25] I. S. Aranson and L. Kramer, Rev. Mod. Phys. 74, 99-143 (2002).
[26] L. Devassy, C. P. Jisha, A. Alberucci, and V. C. Kuriakose, Phys. Rev. E 92, 022914 (2015).
[27] P. Ehrenfest, Zeitschrift für Physik, 455-457.
[28] W. Bao, D. Jaksch, and P. A. Markowich, Journal of Computational Physics 187, 318-342 (2003).
[29] Y.-W. Hsueh, C.-H. Hsueh, and W.-C. Wu, Entropy 22 (2020).
[30] D. Dries, S. E. Pollack, J. M. Hitchcock, and R. G. Hulet, Phys. Rev. A 82, 033603 (2010).
[31] R. Liao, Y. Yi-Xiang, and W.-M. Liu, Phys. Rev. Lett. 108, 080406 (2012).
[32] A. R. P. Lima and A. Pelster, Phys. Rev. A 84, 041604 (2011).
[33] B. Nikolić, A. Balaž, and A. Pelster, Phys. Rev. A 88, 013624 (2013).
arxiv
Topological Fulde-Ferrel-Larkin-Ovchinnikov states in Spin-orbit Coupled Fermi Gases

Wei Zhang, Department of Physics, Renmin University of China, 100872 Beijing, People's Republic of China
Wei Yi, Key Laboratory of Quantum Information, University of Science and Technology of China, CAS, 230026 Hefei, Anhui, People's Republic of China

(Dated: May 7, 2014)

PACS numbers: 03.75.Ss, 03.75.Lm, 05.30.Fk

Pairing in an attractively interacting two-component Fermi gas in the absence of the inversion symmetry and/or the time-reversal symmetry may give rise to exotic superfluid states. Notable examples range from the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state with a finite center-of-mass momentum in a polarized Fermi gas, to the topological superfluid state in a two-dimensional Fermi gas under Rashba spin-orbit coupling and an out-of-plane Zeeman field. Here, we show that a topological FFLO state can be stabilized in a two-dimensional Fermi gas with Rashba spin-orbit coupling and both in-plane and out-of-plane Zeeman fields. We characterize the topological FFLO state by a non-trivial Berry phase, and demonstrate the stability region of the state on the zero-temperature phase diagram. Given its unique properties in both the quasi-particle dispersion spectra and the momentum distribution, signatures of the topological FFLO state can be detected using existing experimental techniques.

Introduction.-Since its original proposal in the 1960s, the search for unconventional pairing states with finite center-of-mass momentum has attracted considerable attention in different physical contexts [1,2], e.g., heavy fermions [3], dense quark matter [4], and ultracold atomic gases [5]. Initially proposed as a compromise between superconductivity and finite magnetization, the key ingredient of this so-called Fulde-Ferrel-Larkin-Ovchinnikov (FFLO) state is a pairing mechanism between fermions with a finite center-of-mass momentum. In the weak-coupling limit, this can be achieved by pairing particles residing either on distinct Fermi surfaces, as in the case of spin-polarized systems where spin-up and spin-down Fermi surfaces are mismatched, or on a single deformed Fermi surface which breaks the spatial inversion symmetry. The latter possibility has been discussed in the context of non-centrosymmetric superconductors, where the presence of Rashba spin-orbit coupling (SOC) and an external magnetic field would lead to a non-uniform superconducting state [6-8]. The pairing physics in spin-orbit coupled Fermi systems is particularly interesting due to the lack of inversion symmetry. The study of exotic pairing superfluid states in these systems has attracted much attention recently, partly due to the realization of synthetic spin-orbit coupling in ultracold atoms [9-12].
Theoretical investigation has demonstrated that the interplay of SOC, pairing superfluidity and effective Zeeman fields can lead to exotic superfluid phases in various dimensions. Notably, since the presence of SOC mixes different spin states, both intra- and inter-branch pairings can take place, and the competition between them results in rich phase structures. An important example here is the topological superfluid state in a two-dimensional (2D) Fermi gas with Rashba spin-orbit coupling and an out-of-plane Zeeman field. When the chemical potential lies within or below the gap opened by the out-of-plane Zeeman field, the subsequent intra-branch pairing results in a topological superfluid (TSF) state, in which a chiral Majorana edge mode is protected by the gap in the bulk. As both the inversion and the time-reversal symmetries are broken in the system, the topological superfluid state here belongs to class D. In this work, we show that a topological FFLO state can be stabilized in a two-dimensional Fermi gas with Rashba spin-orbit coupling and both in-plane and out-of-plane Zeeman fields. Similar to the case of a topological superfluid state, in the weak-coupling limit, the emergence of the topological FFLO state can be understood as a result of single-band pairing within the lower helicity branch. As illustrated in Fig. 1, the application of the additional in-plane Zeeman field introduces a deformation of the single-particle dispersion and, as a consequence, drives the system towards a more stable pairing state with a single-component non-zero center-of-mass momentum, i.e., the Fulde-Ferrel (FF) state. The resulting pairing state would preserve all topological properties provided that the deformation of the Fermi surface is not drastic enough to violate the single-band pairing scenario. This last condition is equivalent to the requirement that the introduction of the in-plane Zeeman field does not close the bulk gap.
Thus, the topological nature of this state is protected by a full gap in the quasi-particle spectra, and the topological FF (tFF) state belongs to the same classification as the topological superfluid state found in 2D Fermi gases with Rashba SOC [25]. We note that the center-of-mass momentum of this tFF state is antiparallel to the in-plane Zeeman field. By mapping out the zero-temperature phase diagram, we further discuss the competition between various FF states with different center-of-mass momenta. In particular, we find a nodal FF (nFF) state and characterize the evolution of its non-trivial gapless contours in momentum space. The tFF and nFF states should leave features in the spin-selective momentum distribution and the momentum-resolved radio-frequency spectroscopy, respectively, which can in principle be detected using existing experimental techniques.

Model.-We consider a two-component Fermi gas in two dimensions with a Rashba-type SOC and crossed Zeeman fields, with the Hamiltonian

H = Σ_{k,σ=↑,↓} ξ_k a†_{kσ} a_{kσ} − h Σ_k (a†_{k↑} a_{k↑} − a†_{k↓} a_{k↓}) + Σ_k {[α(k_x + i k_y) − h_x] a†_{k↑} a_{k↓} + H.c.} + U Σ_{k,k′,q} a†_{k+q↑} a†_{−k+q↓} a_{−k′+q↓} a_{k′+q↑}.   (1)

Here, ξ_k = ε_k − μ with ε_k = ℏ²k²/2m, a_{kσ} (a†_{kσ}) is the annihilation (creation) operator for the hyperfine spin state σ = ↑, ↓, m is the atomic mass, α denotes the strength of the SOC, and H.c. stands for the Hermitian conjugate. The out-of-plane (h) and in-plane (h_x) Zeeman fields can be effectively induced depending on how the synthetic SOC is implemented. As an example, h and h_x are proportional to the effective Rabi frequency and the two-photon detuning, respectively, of the Raman process in the current experimental scheme [10,11].
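The single-band pairing picture above can be made concrete: diagonalizing the single-particle part of Eq. (1) gives two helicity branches E_±(k) = ξ_k ± sqrt(h² + |α(k_x + i k_y) − h_x|²). A minimal numerical sketch (assuming ℏ = m = 1 and illustrative parameter values, not ones taken from the paper) shows how a finite in-plane field h_x tilts the lower branch and shifts its minimum away from k_x = 0, which is what favors pairing at a finite center-of-mass momentum:

```python
import numpy as np

# Helicity branches of the single-particle part of Eq. (1), with hbar = m = 1.
# All parameter values below are illustrative, not taken from the paper.
def helicity_branches(kx, ky, mu=0.0, h=1.0, hx=0.1, alpha=1.0):
    xi = 0.5 * (kx**2 + ky**2) - mu            # xi_k = eps_k - mu
    lam = alpha * (kx + 1j * ky) - hx          # off-diagonal coupling Lambda_k
    gap = np.sqrt(h**2 + np.abs(lam)**2)
    return xi - gap, xi + gap                  # lower and upper branch

kx = np.linspace(-3.0, 3.0, 2001)
lower_sym, _ = helicity_branches(kx, 0.0, hx=0.0)   # no in-plane field
lower_tilt, _ = helicity_branches(kx, 0.0, hx=0.3)  # in-plane field on

# With hx = 0 the lower branch is even in kx (minimum at kx = 0 for these
# parameters); a finite hx tilts it, moving the minimum to kx < 0 --
# consistent with a pairing momentum antiparallel to the in-plane field.
print(kx[np.argmin(lower_sym)], kx[np.argmin(lower_tilt)])
```

The same tilt survives in the full many-body treatment, which is why the FF momentum below is locked to the x-axis.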
The bare s-wave interaction rate U should be renormalized as [37]: 1/U = −S⁻¹ Σ_k (E_b + 2ε_k)⁻¹, where S is the quantization area and E_b is the binding energy of the two-body bound state in two dimensions without SOC, which can be tuned, for instance, via the Feshbach resonance technique. We focus on the zero-temperature properties of the Fulde-Ferrell (FF) pairing states with a single-valued center-of-mass momentum Q on the mean-field level [1]. This should provide a qualitatively correct phase diagram at zero temperature. The effective mean-field Hamiltonian can then be arranged into a matrix form in the hyperfine spin basis (a_{k↑}, a†_{Q−k↑}, a†_{Q−k↓}, a_{k↓})^T:

H_eff = (1/2) Σ_k M_k + Σ_k ξ_{|Q−k|} − |Δ_Q|²/U,   (2)

where the matrix in this basis reads (the lower triangle is fixed by Hermiticity)

M_k = [ ξ_k − h ,    Δ_Q ,           0 ,             Λ_k ;
        Δ*_Q ,      −ξ_{Q−k} − h ,  −Λ_{Q−k} ,       0 ;
        0 ,         −Λ*_{Q−k} ,     −ξ_{Q−k} + h ,  −Δ*_Q ;
        Λ*_k ,       0 ,            −Δ_Q ,           ξ_k + h ],

with ξ_{Q−k} = ε_{Q−k} − μ, Λ_k = α(k_x + i k_y) − h_x, and the order parameter Δ_Q = U Σ_k ⟨a_{Q−k↓} a_{k↑}⟩. It is then straightforward to diagonalize the effective Hamiltonian and evaluate the thermodynamic potential at zero temperature,

Ω = Σ_k ξ_{|Q−k|} + Σ_{k,ν,η} Θ(−E^η_{k,ν}) E^η_{k,ν} − |Δ_Q|²/U,   (3)

where the quasi-particle (η = +) and quasi-hole (η = −) dispersions E^η_{k,ν} (ν = 1, 2) are the eigenvalues of the matrix in Hamiltonian (2), and Θ(x) is the Heaviside step function. Without loss of generality, we assume h, h_x > 0 and take Δ_0 = Δ and Δ_Q to be real throughout this work. The pairing order parameter Δ_Q as well as the center-of-mass momentum Q of the pairs can then be found by minimizing the thermodynamic potential in Eq. (3). In general, the Hamiltonian (2) cannot be diagonalized analytically, and the thermodynamic potential needs to be evaluated numerically. However, for pairing states with zero center-of-mass momentum (Q = 0), an analytical form of the dispersion spectrum can be obtained for h_x = 0 [25].
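The workflow implied by Eqs. (2) and (3) — assemble the 4×4 mean-field matrix on a momentum grid, diagonalize it, and sum the occupied branches — can be sketched directly. This is only a shape-of-the-calculation illustration, not the paper's production code: ℏ = m = 1, the parameter values are placeholders, and the renormalized coupling is replaced by a fixed effective constant `U_eff` rather than the regularized sum over E_b:

```python
import numpy as np

# 4x4 mean-field matrix of Eq. (2) in the basis
# (a_{k,up}, a†_{Q-k,up}, a†_{Q-k,dn}, a_{k,dn}); hbar = m = 1,
# and all parameter values are illustrative placeholders.
def bdg_matrix(kx, ky, Qx=0.2, Delta=0.5, mu=1.0, h=1.0, hx=0.1, alpha=1.0):
    xi = lambda px, py: 0.5 * (px**2 + py**2) - mu
    lam = lambda px, py: alpha * (px + 1j * py) - hx   # Lambda_k
    xk, xq = xi(kx, ky), xi(Qx - kx, -ky)
    lk, lq = lam(kx, ky), lam(Qx - kx, -ky)
    return np.array(
        [[xk - h,          Delta,         0.0,           lk],
         [np.conj(Delta), -xq - h,       -lq,            0.0],
         [0.0,            -np.conj(lq),  -xq + h,       -np.conj(Delta)],
         [np.conj(lk),     0.0,          -Delta,         xk + h]],
        dtype=complex)

def omega_density(Qx, Delta, mu=1.0, cutoff=4.0, n=41, U_eff=-2.0):
    """Zero-temperature thermodynamic potential per area in the spirit of
    Eq. (3); U_eff stands in for the renormalized coupling."""
    ks = np.linspace(-cutoff, cutoff, n)
    dk = ks[1] - ks[0]
    total = 0.0
    for kx in ks:
        for ky in ks:
            E = np.linalg.eigvalsh(bdg_matrix(kx, ky, Qx, Delta, mu=mu))
            total += ((Qx - kx)**2 + ky**2) / 2.0 - mu   # xi_{|Q-k|}
            total += 0.5 * E[E < 0].sum()                # occupied branches
    return total * dk**2 / (2.0 * np.pi)**2 - Delta**2 / U_eff

# Delta_Q and Q would then be fixed by minimizing omega_density over both.
M = bdg_matrix(0.3, -0.2)
print(np.max(np.abs(M - M.conj().T)))   # Hermitian by construction: 0.0
```

In practice the minimization over (Δ_Q, Q_x) is wrapped around `omega_density`; the grid cutoff and spacing control the accuracy of the momentum sum.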
In this case, a fully gapped topological superfluid phase can be stabilized in a fairly large parameter region. The topological nature of this phase is characterized by a non-trivial topological number, and is protected by the underlying particle-hole symmetry. For the case of h_x ≠ 0, it can be proved that the zero center-of-mass momentum state becomes unstable against an FF state with pairing momentum Q = Q_x x̂. Thus, a topologically non-trivial FF state can be expected provided that the in-plane field h_x is not large enough to close the gap of the bulk. This topological FF state hence belongs to the same classification as the TSF phase, and acquires all topological features including gapless edge modes and Majorana fermions in vortex cores.

Phase diagram and topological FF state.-We map out the phase diagram on the μ-α plane with fixed h and h_x ≠ 0 (see Fig. 2). Under the local density approximation, the phases traversed by a downward vertical line in the diagram represent those one should encounter by moving from a trap center to its edge. From Fig. 2, we see that the topological superfluid phase in a 2D polarized Fermi gas with Rashba SOC and zero in-plane field is now replaced by a topological FF phase with center-of-mass momentum along the x-direction (tFF_x), as we have anticipated. To characterize the non-trivial topological nature of this state, we further calculate the Berry phase associated with each quasi-particle (η = +) or quasi-hole (η = −) band,

γ^η_{ν=1,2} = (1/2π) ∫ dk_x dk_y Γ^η_ν(k_x, k_y),   (4)

where the Berry curvature is defined as [38]

Γ^η_ν(k) = i Σ_{ℓ′≠ℓ} [⟨ℓ|∂_{k_x}H|ℓ′⟩⟨ℓ′|∂_{k_y}H|ℓ⟩ − (k_x ↔ k_y)] / (E^η_{k,ν} − E^{η′}_{k,ν′})²,   (5)

with ℓ ≡ (ν, η) a shorthand notation. The Berry phase of the superfluid phase is then a summation of the contributions γ^{η=−}_ν from the two occupied quasi-hole bands.
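Eqs. (4)-(5) translate into a direct double loop over a momentum grid. As a self-contained check that the spectral formula is implemented correctly, the sketch below applies it to a minimal two-band lattice model with a known quantized Chern number (a stand-in chosen for verifiability, not the paper's Hamiltonian); feeding the same loop with the matrix of Eq. (2) and its two occupied quasi-hole bands gives the Berry phase discussed in the text:

```python
import numpy as np

# Berry curvature via the spectral formula of Eq. (5), integrated as in
# Eq. (4). Demonstrated on a two-band lattice model (a stand-in with a
# known unit Chern number), not on the paper's 4x4 BdG Hamiltonian.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky, u=1.0):
    return np.sin(kx) * sx + np.sin(ky) * sy + (u + np.cos(kx) + np.cos(ky)) * sz

def dHx(kx, ky):   # analytic dH/dkx
    return np.cos(kx) * sx - np.sin(kx) * sz

def dHy(kx, ky):   # analytic dH/dky
    return np.cos(ky) * sy - np.sin(ky) * sz

def berry_phase(n=120):
    """(1/2pi) * integral of the Eq. (5) curvature of the occupied band."""
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    total = 0.0
    for kx in ks:
        for ky in ks:
            E, V = np.linalg.eigh(H(kx, ky))
            # Eq. (5) with l = lower band, summed over the single l' = upper:
            num = (V[:, 0].conj() @ dHx(kx, ky) @ V[:, 1]) * \
                  (V[:, 1].conj() @ dHy(kx, ky) @ V[:, 0])
            total += -2.0 * num.imag / (E[0] - E[1])**2   # i*(num - num*)
    return total * (2.0 * np.pi / n)**2 / (2.0 * np.pi)

print(berry_phase())   # magnitude very close to 1 (quantized)
```

Because the integrand is smooth and periodic, the plain Riemann sum over the Brillouin zone converges rapidly to the quantized value.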
A numerical evaluation shows that the resulting Berry phase vanishes in the topologically trivial phase and becomes unity in the tFF_x state. As we have discussed before, the stabilization of the tFF_x state is due to the SOC-induced single-branch pairing and the Fermi-surface asymmetry. The picture of single-branch pairing is complicated with increasing chemical potential or increasing SOC intensity, such that particles on the higher helicity branch get involved in pairing. Due to the spin mixing induced by the SOC, both intra- and inter-band pairings can take place with center-of-mass momentum along either the x- or the y-direction. As a consequence, the ground state of the system is the result of competitions between the various FFLO pairing states, as depicted in Fig. 2. For strong SOC intensity, or equivalently weak out-of-plane Zeeman field, the tFF_x state is separated from a topologically trivial FF state (depicted as gFF_x in Fig. 2) via a continuous phase transition. This gFF_x phase is also fully gapped and with center-of-mass momentum along the x-axis. By tuning through the tFF-gFF phase boundary, the excitation gap closes and opens again, while the pairing order parameter Δ_Q remains finite. This leads to a change of topology as the boundary is crossed. Typical variations of the minimum excitation gap E_g, the pairing order parameter Δ_Q, and the pairing momentum Q_x with increasing chemical potential are plotted in Figs. 3(a) and 3(b), respectively. The inter-band pairing scenario becomes significant with decreasing SOC intensity or increasing out-of-plane Zeeman field, which can effectively polarize the helices within each helicity branch and hence hinder intra-branch pairing. As a result, we identify a region where the globally stable state is a nodal FF phase with pairing momentum along the x-direction, as denoted by nFF_x in Fig. 2.
This gapless superfluid state acquires two disconnected gapless contours in momentum space, which shrink to two separated gapless points and disappear at the continuous phase boundaries. A typical evolution of the two gapless contours is displayed in Fig. 4(a) with increasing chemical potential. Notice that at the phase boundaries between nFF_x and the gapped FF states (gFF_x or tFF_x), the quasi-particle and quasi-hole dispersions touch the Fermi surface at different places [see Fig. 4(c) for example]. This is in contrast to the phase boundary between the tFF_x state and the gFF_x state, where the gap closes at a single point k = (Q_x/2, 0) in momentum space [see Fig. 4(d)]. Hence, the phase boundary between the tFF_x and the gFF_x states can be worked out by examining the gap-closing condition

h² + h_x² − Δ² − (ℏ²Q_x²/8m − μ)² − α h_x Q_x + α²Q_x²/4 = 0.   (6)

On the other hand, the phase boundary between the nFF_x state and the tFF_x state needs to be determined numerically. Finally, we note that by further increasing the effective Zeeman field h, inter-band pairing with center-of-mass momentum along other directions has to be taken into account, and the system can be stabilized as a general FFLO state (labeled as Mixed in Fig. 2), where multiple FF states with various pairing momenta coexist [35]. We then investigate the effect of an increasing in-plane Zeeman field h_x. In the weak-coupling limit, the presence of a larger h_x lifts the upper helicity branch and enlarges the gap between the two branches. As a consequence, the tFF_x state, which is dominated by the intra-band pairing within the lower helicity branch, becomes stable in an extended parameter region, and the phase boundaries surrounding the tFF_x state are shifted towards larger values of chemical potential and SOC intensity.
In Figs. 3(c) and 3(d), we show the variation of the minimum gap and the pairing order parameter with increasing in-plane field h_x, indicating two representative evolution paths of the system: from gFF_x to nFF_x and eventually to tFF_x [Fig. 3(c)], or from gFF_x directly to tFF_x [Fig. 3(d)], depending on the starting point on the phase diagram.

Characterizing the FF states.-To characterize the properties of the various phases in the phase diagram, we calculate their respective momentum distributions. In Fig. 5, we show the momentum profiles of the minority component for cases within the tFF_x region (a-b) and the gFF_x region (c-d). It is apparent that the momentum distribution in a topologically non-trivial phase is drastically different from that in a topologically trivial phase. In particular, the density profile in momentum space for the minority spin features a dip near zero momentum in the tFF_x phase, in contrast to a peak in the gFF_x case. As the momentum distribution can be extracted by species-selective time-of-flight imaging, this qualitative difference may serve as a signature for the detection of the topological FF state in the underlying system.

Conclusion.-In this manuscript, we investigate the pairing states of a two-dimensional Fermi gas with Rashba spin-orbit coupling and both in-plane and out-of-plane Zeeman fields. We show that the BCS pairing state becomes unstable towards an FFLO state with finite center-of-mass momentum as the Fermi surface becomes asymmetric in the presence of an in-plane Zeeman field. In particular, we identify a topological FF state with center-of-mass momentum antiparallel to the direction of the in-plane field. The topological nature of this tFF_x phase is characterized by a non-trivial Berry phase, which vanishes for a topologically trivial state. We further map out the zero-temperature phase diagram, where multiple FF states are separated by either first-order or second-order phase transitions.
These FF states are characterized by different quasi-particle dispersion spectra and momentum distributions, which can be distinguished via spectroscopic detection and the species-selective time-of-flight imaging technique, respectively.

Note added.-After finalizing the present manuscript, we noticed a simultaneous work by Qu et al., who also discuss the existence of a topological FFLO state in Fermi systems with spin-orbit coupling [39].

PACS numbers: 03.75.Ss, 03.75.Lm, 05.30.Fk

FIG. 1: (Color online) Illustration of pairing within the lower helicity branch, which can lead to a topological FF state in the presence of both out-of-plane and in-plane Zeeman fields.

FIG. 2: (Color online) Phase diagram on the μ-α plane for E_b/h = 0.5, h_x/h = 0.1. The solid curves are first-order boundaries, while the dashed-dotted curves represent phase boundaries of continuous phase transitions. The dashed curves surrounding the normal region (N) are the threshold with Δ/h = 10⁻³, while the dotted curves are the boundary against vacuum. The axial Zeeman field h is taken to be the unit of energy, while the unit of momentum k_h is defined through ℏ²k_h²/2m = h.

FIG. 3: (Color online) (a-b) Evolution of the minimum excitation gap (a), the order parameter (a, inset), and the pairing momentum (b) with increasing chemical potential. A full gap closes and opens again when traversing from the tFF_x state to the gFF_x state through a continuous phase boundary. In these subplots, h_x/h = 0.1. (c-d) Evolution of the minimum excitation gap and the pairing order parameter (insets) with increasing in-plane field h_x, with (c) μ/h = 1 and (d) μ/h = 0.8. Other parameters used in this figure are E_b/h = 0.5 and αk_h/h = 1.

FIG. 4: (Color online) (a) Evolution of gapless contours in momentum space for the nFF_x states with μ/h = 1.002 (red), 1.05 (blue), and 1.15 (black).
(b-d) Dispersion spectra of quasi-particles and quasi-holes along the k_x-axis, with (b) αk_h/h = 0.49, μ/h = 1.05; (c) αk_h/h = 0.49, μ/h = 1.002; and (d) αk_h/h = 0.8, μ/h = 0.87. At the topological phase boundaries, two gapless points exist between the nFF_x and the tFF_x states (c), and a single gapless point exists between the gFF_x and the tFF_x states. Other parameters used in these plots are E_b/h = 0.5 and h_x/h = 0.1.

FIG. 5: (Color online) Density distribution in momentum space for the minority component. Parameters used in this plot are E_b/h = 0.5, h_x/h = 0.1, αk_h/h = 1, and the chemical potentials are (a) μ/h = −0.85, (b) μ/h = 0.6977, (c) μ/h = 1, and (d) μ/h = 0.6980.

Acknowledgements.-We thank Hong Yao and Ying Ran for helpful discussions. This work is supported by NFRP (2011CB921200, 2011CBA00200), NKBRP (2013CB922000), NNSF (60921091), NSFC (11105134, 11274009), the Fundamental Research Funds for the Central Universities (WK2470000006), and the Research Funds of Renmin University of China (10XNL016).

[1] P. Fulde and R. A. Ferrell, Phys. Rev. 135, A550 (1964); A. I. Larkin and Y. N. Ovchinnikov, Sov. Phys. JETP 20, 762 (1965).
[2] R. Casalbuoni and G. Nardulli, Rev. Mod. Phys. 76, 263 (2004).
[3] H. A. Radovan, N. A. Fortune, T. P. Murphy, S. T. Hannahs, E. C. Palm, S. W. Tozer, and D. Hall, Nature 425, 52 (2003).
[4] M. Alford, J. A. Bowers, and K. Rajagopal, Phys. Rev. D 63, 074016 (2001).
[5] Y. A. Liao, A. S. C. Rittner, T. Paprotta, W. Li, G. B. Partridge, R. G. Hulet, S. K. Baur, and E. J. Mueller, Nature 467, 567 (2010).
[6] D. F. Agterberg, Physica C 387, 13 (2003).
[7] K. V. Samokhin, Phys. Rev. B 70, 104521 (2004).
[8] R. P. Kaur, D. F. Agterberg, and M. Sigrist, Phys. Rev. Lett. 94, 137002 (2005).
[9] Y.-J. Lin, K. Jiménez-García, and I. B. Spielman, Nature (London) 471, 83 (2011).
[10] P. Wang, Z.-Q. Yu, Z. Fu, J. Miao, L. Huang, S. Chai, H. Zhai, and J. Zhang, Phys. Rev. Lett. 109, 095301 (2012).
[11] L. W. Cheuk, A. T. Sommer, Z. Hadzibabic, T. Yefsah, W. S. Bakr, and M. W. Zwierlein, Phys. Rev. Lett. 109, 095302 (2012).
[12] J.-Y. Zhang et al., Phys. Rev. Lett. 109, 115301 (2012).
[13] C. Zhang, S. Tewari, R. M. Lutchyn, and S. Das Sarma, Phys. Rev. Lett. 101, 160401 (2008).
[14] M. Sato, Y. Takahashi, and S. Fujimoto, Phys. Rev. Lett. 103, 020401 (2009).
[15] J. P. Vyasanakere, S. Zhang, and V. B. Shenoy, Phys. Rev. B 84, 014512 (2011).
[16] M. Gong, S. Tewari, and C. Zhang, Phys. Rev. Lett. 107, 195303 (2011).
[17] Z.-Q. Yu and H. Zhai, Phys. Rev. Lett. 107, 195305 (2011).
[18] H. Hu, L. Jiang, X.-J. Liu, and H. Pu, Phys. Rev. Lett. 107, 195304 (2011).
[19] M. Iskin and A. L. Subasi, Phys. Rev. Lett. 107, 050402 (2011).
[20] W. Yi and G.-C. Guo, Phys. Rev. A 84, 031608(R) (2011).
[21] L. Dell'Anna, G. Mazzarella, and L. Salasnich, Phys. Rev. A 84, 033633 (2011).
[22] M. Gong, G. Chen, S. Jia, and C. Zhang, Phys. Rev. Lett. 109, 105302 (2012).
[23] L. Han and C. A. R. Sá de Melo, Phys. Rev. A 85, 011606(R) (2012).
[24] L. He and X.-G. Huang, Phys. Rev. Lett. 108, 145302 (2012).
[25] J. Zhou, W. Zhang, and W. Yi, Phys. Rev. A 84, 063603 (2011).
[26] X. Yang and S. Wan, Phys. Rev. A 85, 023633 (2012).
[27] W. Yi and W. Zhang, Phys. Rev. Lett. 109, 140402 (2012).
[28] L. Han and C. A. R. Sá de Melo, arXiv:1206.4984.
[29] M. Iskin and A. L. Subasi, arXiv:1211.4020.
[30] F. Wu, G.-C. Guo, W. Zhang, and W. Yi, Phys. Rev. Lett. 110, 110401 (2013).
[31] X.-F. Zhou, G.-C. Guo, W. Zhang, and W. Yi, Phys. Rev. A 87, 063606 (2013).
[32] Z. Zheng, M. Gong, X. Zou, C. Zhang, and G.-C. Guo, Phys. Rev. A 87, 031602(R) (2013).
[33] L. Dong, L. Jiang, and H. Pu, arXiv:1302.1189.
[34] M. Iskin, arXiv:1304.1473.
[35] Y. Xu, C. Qu, M. Gong, and C. Zhang, arXiv:1305.2152.
[36] G. J. Conduit, P. H. Conlon, and B. D. Simons, Phys. Rev. A 77, 053617 (2008).
[37] M. Randeria, J.-M. Duan, and L.-Y. Shieh, Phys. Rev. Lett. 62, 981 (1989).
[38] D. Xiao, M. C. Chang, and Q. Niu, Rev. Mod. Phys. 82, 1959 (2010).
[39] C. Qu, Z. Zheng, M. Gong, Y. Xu, L. Mao, X. Zou, G.-C. Guo, and C. Zhang, arXiv:1307.1207.