Global seismicity on all three solar system bodies with in situ seismic measurements (Earth, the Moon, and Mars) is due mainly to mechanical Rieger resonance (RR) of the solar wind's macroscopic flapping, driven by the well-known $P_{Rg} \approx 154$-day Rieger period and commonly detected in most heliophysical data types and in the interplanetary magnetic field (IMF). Thus, InSight mission marsquake rates are periodic with $P_{Rg}$, as characterized by a very high ($\gg 12$) fidelity $\Phi = 2.8 \times 10^{6}$ and by being the only >99%-significant spectral peak in the 385.8-64.3-nHz (1-180-day) band of highest planetary energies; the longest-span (v.9) release of raw data revealed the entire RR, ruling out a tectonically active Mars. As a check, I analyze the rates of Oct 2015-Feb 2019 Mw 5.6+ earthquakes and of all (1969-1977) Apollo mission moonquakes. To decouple magnetospheric and IMF effects, I study Earth and Moon seismicity during traversals of the Earth magnetotail vs. the IMF. The analysis shows, with 67% to >99% confidence and $\Phi \gg 12$ fidelity, that (an unspecified majority of) moonquakes and Mw 5.6+ earthquakes also recur at Rieger periods. About half of the spectral peaks split, but into clusters that average to the usual Rieger periodicities, where magnetotail reconnection clears the signal. Moonquakes are mostly forced at times of solar-wind resonance, not just during tides as previously and simplistically believed. Earlier claims that solar plasma dynamics could be seismogenic are thus confirmed. This result calls for reinterpreting the seismicity phenomenon and for reliance on global magnitude scales. Predictability of solar-wind macroscopic dynamics is now within reach, paving the way for long-term, physics-based seismic and space-weather prediction and for the safety of space missions. Gauss-Vanicek Spectral Analysis revolutionizes geophysics by computing nonlinear global dynamics directly, rendering the approximation of dynamics obsolete.
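For orientation, the sketch below shows the basic idea behind least-squares spectral analysis (the family of methods to which the Gauss-Vanicek approach belongs) applied to a synthetic daily event-rate series with an injected 154-day cycle. The synthetic data, noise model, and trial-period grid are assumptions for illustration only and do not reproduce the paper's pipeline or its significance testing.

# A minimal sketch of least-squares spectral analysis applied to a synthetic
# daily event-rate series with an assumed 154-day cycle. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1500.0)                      # days
rate = 5 + 1.5 * np.sin(2 * np.pi * t / 154.0) + rng.normal(0, 1.0, t.size)

def ls_spectrum(t, y, periods):
    """Fraction of variance explained by a sinusoid + constant fit at each trial period."""
    y = y - y.mean()
    total = np.sum(y**2)
    power = []
    for P in periods:
        w = 2 * np.pi / P
        A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        power.append(1.0 - np.sum(resid**2) / total)
    return np.array(power)

periods = np.linspace(60.0, 180.0, 601)    # part of the 1-180-day band of interest
power = ls_spectrum(t, rate, periods)
print(f"strongest period: {periods[np.argmax(power)]:.1f} days")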
We give an explicit solution formula for the polynomial regression problem in terms of Schur polynomials and Vandermonde determinants. We thereby generalize the work of Chang, Deng, and Floater to the case of model functions of the form $\sum _{i=1}^{n} a_{i} x^{d_{i}}$ for some integer exponents $d_{1} >d_{2} >\dotsc >d_{n} \geq 0$ and phrase the results using Schur polynomials. Even though the solution circumvents the well-known problems with the forward stability of the normal equation, it is only of practical value if $n$ is small because the number of terms in the formula grows rapidly with the number $m$ of data points. The formula can be evaluated essentially without rounding.
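As a point of reference only, the sketch below sets up the same regression problem for a sparse monomial model $\sum_i a_i x^{d_i}$ and solves it by brute force through the normal equations in exact rational arithmetic; it does not implement the Schur-polynomial/Vandermonde formula of the paper, and the sample data and exponents are made up for illustration.

# Brute-force least-squares fit of a sparse monomial model sum_i a_i x^{d_i},
# solving the normal equations exactly over the rationals. NOT the paper's formula.
from fractions import Fraction

def fit_sparse_monomials(xs, ys, exponents):
    xs = [Fraction(x) for x in xs]
    ys = [Fraction(y) for y in ys]
    V = [[x**d for d in exponents] for x in xs]          # design matrix
    n = len(exponents)
    # normal equations V^T V a = V^T y
    G = [[sum(V[k][i] * V[k][j] for k in range(len(xs))) for j in range(n)] for i in range(n)]
    b = [sum(V[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]
    # Gauss-Jordan elimination over the rationals (exact, no rounding)
    for i in range(n):
        piv = next(r for r in range(i, n) if G[r][i] != 0)
        G[i], G[piv] = G[piv], G[i]; b[i], b[piv] = b[piv], b[i]
        for r in range(n):
            if r != i and G[r][i] != 0:
                f = G[r][i] / G[i][i]
                G[r] = [g - f * h for g, h in zip(G[r], G[i])]
                b[r] -= f * b[i]
    return [b[i] / G[i][i] for i in range(n)]

# model 3x^4 - 2x + 5 sampled at x = 0..4, with one perturbed observation
coeffs = fit_sparse_monomials([0, 1, 2, 3, 4], [5, 6, 49, 242, 766], (4, 1, 0))
print(coeffs)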
Selecting the step size for the Metropolis-adjusted Langevin algorithm (MALA) is necessary in order to obtain satisfactory performance. However, finding an adequate step size for an arbitrary target distribution can be a difficult task and even the best step size can perform poorly in specific regions of the space when the target distribution is sufficiently complex. To resolve this issue we introduce autoMALA, a new Markov chain Monte Carlo algorithm based on MALA that automatically sets its step size at each iteration based on the local geometry of the target distribution. We prove that autoMALA has the correct invariant distribution, despite continual automatic adjustments of the step size. Our experiments demonstrate that autoMALA is competitive with related state-of-the-art MCMC methods, in terms of the number of log density evaluations per effective sample, and it outperforms state-of-the-art samplers on targets with varying geometries. Furthermore, we find that autoMALA tends to find step sizes comparable to optimally-tuned MALA when a fixed step size suffices for the whole domain.
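For context, here is a minimal sketch of the baseline that autoMALA builds on: a single textbook MALA step with a fixed step size, applied to a toy Gaussian target. The automatic, locally adapted step-size selection that is the paper's contribution is not reproduced; the target, step size, and chain length are illustrative assumptions.

# One textbook MALA step with a FIXED step size (the baseline, not autoMALA).
import numpy as np

def mala_step(x, log_p, grad_log_p, eps, rng):
    """Metropolis-adjusted Langevin proposal with step size eps."""
    mean_fwd = x + 0.5 * eps**2 * grad_log_p(x)
    prop = mean_fwd + eps * rng.standard_normal(x.shape)
    mean_bwd = prop + 0.5 * eps**2 * grad_log_p(prop)
    # log q(x | prop) - log q(prop | x)
    log_q_ratio = (-np.sum((x - mean_bwd) ** 2) + np.sum((prop - mean_fwd) ** 2)) / (2 * eps**2)
    log_alpha = log_p(prop) - log_p(x) + log_q_ratio
    return prop if np.log(rng.uniform()) < log_alpha else x

# toy target: standard 2D Gaussian
rng = np.random.default_rng(1)
log_p = lambda x: -0.5 * np.sum(x**2)
grad = lambda x: -x
x = np.zeros(2)
samples = []
for _ in range(5000):
    x = mala_step(x, log_p, grad, eps=0.9, rng=rng)
    samples.append(x)
print(np.mean(samples, axis=0), np.var(np.array(samples), axis=0))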
Human groups are able to converge on more accurate beliefs through deliberation, even in the presence of polarization and partisan bias, a phenomenon known as the "wisdom of partisan crowds." Agents powered by Large Language Models (LLMs) are increasingly used to simulate human collective behavior, yet few benchmarks exist for evaluating their dynamics against the behavior of human groups. In this paper, we examine the extent to which the wisdom of partisan crowds emerges in groups of LLM-based agents that are prompted to role-play as partisan personas (e.g., Democrat or Republican). We find that these agents not only display human-like partisan biases, but also converge to more accurate beliefs through deliberation, as humans do. We then identify several factors that interfere with convergence, including the use of chain-of-thought prompting and a lack of detail in the personas. Conversely, fine-tuning on human data appears to enhance convergence. These findings show the potential and limitations of LLM-based agents as a model of human collective intelligence.
The current study investigates the asymptotic spectral properties of a finite difference approximation of nonlocal Helmholtz equations with a Caputo fractional Laplacian and a variable-coefficient wave number $\mu$, as occurs when considering wave propagation in complex media characterized by nonlocal interactions and spatially varying wave speeds. More specifically, by using tools from Toeplitz and generalized locally Toeplitz theory, the present research delves into the spectral analysis of nonpreconditioned and preconditioned matrix-sequences. We report numerical evidence supporting the theoretical findings. Finally, open problems and potential extensions in various directions are presented and briefly discussed.
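To illustrate the Toeplitz/GLT machinery invoked above, the sketch below checks numerically that the eigenvalues of a Toeplitz matrix $T_n(f)$ distribute like samples of its generating symbol $f$; the symbol used (the 1D discrete Laplacian, $f(\theta)=2-2\cos\theta$) is a stand-in chosen for simplicity and is not the fractional-Laplacian discretization studied in the paper.

# Eigenvalues of T_n(f) vs. samples of the symbol f(theta) = 2 - 2*cos(theta).
import numpy as np
from scipy.linalg import toeplitz

n = 200
col = np.zeros(n); col[0], col[1] = 2.0, -1.0
T = toeplitz(col)                                   # symmetric Toeplitz matrix T_n(f)
eigs = np.sort(np.linalg.eigvalsh(T))

theta = np.arange(1, n + 1) * np.pi / (n + 1)       # grid in (0, pi); f is even
symbol_samples = np.sort(2.0 - 2.0 * np.cos(theta))
print("max deviation:", np.max(np.abs(eigs - symbol_samples)))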
Uncertainty is an inherent property of any complex system, especially those that integrate physical parts or operate in real environments. In this paper, we focus on the Digital Twins of adaptive systems, which are particularly complex to design, verify, and optimize. One of the problems of having two systems (the physical one and its digital replica) is that their behavior may not always be consistent. In addition, both twins are normally subject to different types of uncertainties, which complicates their comparison. We propose the explicit representation and treatment of the uncertainty of both twins, and show how this enables a more accurate comparison of their behaviors. Furthermore, this allows us to reduce the overall system uncertainty and improve its behavior by properly averaging the individual uncertainties of the two twins. An exemplary incubator system is used to illustrate and validate our proposal.
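As one possible (assumed) reading of "properly averaging the individual uncertainties of the two twins", the sketch below fuses a physical-twin measurement and a digital-twin prediction by classical inverse-variance weighting, which always yields a variance no larger than either input; the incubator numbers are invented and the paper's actual averaging scheme may differ.

# Inverse-variance fusion of two uncertain estimates of the same quantity.
def fuse(mean_phys, var_phys, mean_digital, var_digital):
    w_p, w_d = 1.0 / var_phys, 1.0 / var_digital
    fused_mean = (w_p * mean_phys + w_d * mean_digital) / (w_p + w_d)
    fused_var = 1.0 / (w_p + w_d)          # never larger than either input variance
    return fused_mean, fused_var

# e.g. incubator temperature: sensor reading vs. simulation prediction (made-up values)
print(fuse(36.4, 0.25, 36.9, 0.16))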
We consider the Sherrington-Kirkpatrick model of spin glasses at high temperature with no external field, and study the problem of sampling from the Gibbs distribution $\mu$ in polynomial time. We prove that, for any inverse temperature $\beta<1/2$, there exists an algorithm with complexity $O(n^2)$ that samples from a distribution $\mu^{alg}$ which is close in normalized Wasserstein distance to $\mu$. Namely, there exists a coupling of $\mu$ and $\mu^{alg}$ such that if $(x,x^{alg})\in\{-1,+1\}^n\times \{-1,+1\}^n$ is a pair drawn from this coupling, then $n^{-1}\mathbb E\{||x-x^{alg}||_2^2\}=o_n(1)$. The best previous results, by Bauerschmidt and Bodineau and by Eldan, Koehler, and Zeitouni, implied efficient algorithms to approximately sample (under a stronger metric) for $\beta<1/4$. We complement this result with a negative one, by introducing a suitable "stability" property for sampling algorithms, which is verified by many standard techniques. We prove that no stable algorithm can approximately sample for $\beta>1$, even under the normalized Wasserstein metric. Our sampling method is based on an algorithmic implementation of stochastic localization, which progressively tilts the measure $\mu$ towards a single configuration, together with an approximate message passing algorithm that is used to approximate the mean of the tilted measure.
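The toy sketch below discretizes the stochastic localization process described above for a tiny SK instance, computing the mean of the exponentially tilted measure by brute-force enumeration instead of the approximate message passing step that gives the paper its $O(n^2)$ complexity; the coupling normalization, step size, and horizon are illustrative assumptions.

# Toy stochastic localization sampler for a small SK-type Gibbs measure.
# The exact tilted mean below (enumeration over 2^n configs) stands in for AMP.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, beta = 10, 0.3
A = rng.standard_normal((n, n)) / np.sqrt(n)
J = (A + A.T) / 2                                   # GOE-like couplings (toy normalization)
configs = np.array(list(itertools.product([-1, 1], repeat=n)))
energies = beta * np.einsum('ki,ij,kj->k', configs, J, configs) / 2

def tilted_mean(y):
    """Mean of mu_y(x) proportional to exp(beta/2 * x'Jx + y.x), by enumeration."""
    logw = energies + configs @ y
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return w @ configs

# discretized SDE dy_t = m(y_t) dt + dB_t; the tilted measure localizes as t grows
y, dt = np.zeros(n), 0.02
for _ in range(2000):
    y += tilted_mean(y) * dt + np.sqrt(dt) * rng.standard_normal(n)
x_sample = np.sign(tilted_mean(y))                  # nearly deterministic by now
print(x_sample)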
We consider the classical problems of interpolating a polynomial given a black box for evaluation, and of multiplying two polynomials, in the setting where the bit-lengths of the coefficients may vary widely, so-called unbalanced polynomials. Writing $s$ for the total bit-length and $D$ for the degree, our new algorithms have expected running time $\tilde{O}(s \log D)$, whereas previous methods for (resp.) dense or sparse arithmetic have at least $\tilde{O}(sD)$ or $\tilde{O}(s^2)$ bit complexity.
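For contrast with the unbalanced-coefficient setting, the sketch below implements a standard baseline (not the paper's algorithm): multiplying two nonnegative integer polynomials by Kronecker substitution, where a single huge coefficient forces the same wide packing slot on every term, precisely the kind of waste the new algorithms avoid.

# Kronecker-substitution multiplication of sparse integer polynomials (baseline).
def kronecker_multiply(p, q):
    """p, q: dicts {exponent: coefficient} with nonnegative integer coefficients."""
    deg = max(p) + max(q)
    # every packed slot must hold the largest possible convolution coefficient
    bound = max(p.values()) * max(q.values()) * min(len(p), len(q))
    width = bound.bit_length() + 1
    pack = lambda poly: sum(c << (width * e) for e, c in poly.items())
    prod = pack(p) * pack(q)
    mask = (1 << width) - 1
    return {e: (prod >> (width * e)) & mask
            for e in range(deg + 1) if (prod >> (width * e)) & mask}

# unbalanced example: one tiny polynomial, one with a single enormous coefficient
p = {0: 3, 5: 2**1000}          # 3 + 2^1000 * x^5
q = {1: 7, 2: 1}                # 7x + x^2
print(sorted(kronecker_multiply(p, q)))   # exponents of the product's terms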
Non-Hermitian topological phases can produce some remarkable properties compared with their Hermitian counterparts, such as the breakdown of conventional bulk-boundary correspondence and the non-Hermitian topological edge mode. Here, we introduce several algorithms based on the multi-layer perceptron (MLP) and the convolutional neural network (CNN) from the field of deep learning to predict the winding of the eigenvalues of non-Hermitian Hamiltonians. Subsequently, we use the smallest module of the periodic circuit as one unit to construct high-dimensional circuit data features. Further, we use the Dense Convolutional Network (DenseNet), a type of convolutional neural network that utilizes dense connections between layers, to design a non-Hermitian topolectrical Chern circuit, as DenseNet is better suited to processing high-dimensional data. Our results demonstrate the effectiveness of the deep learning network in capturing the global topological characteristics of a non-Hermitian system based on training data.
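A minimal, assumed setup in the same spirit (not the paper's models, circuits, or data): generate single-band non-Hermitian Bloch Hamiltonians $h(k)=t_R e^{ik}+t_L e^{-ik}$, label each by the spectral winding number of its eigenvalue loop, and train an off-the-shelf MLP to predict that winding from samples of $h(k)$.

# Learn the spectral winding of a toy non-Hermitian band with an MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
k_feat = np.linspace(0, 2 * np.pi, 32, endpoint=False)    # coarse grid -> input features
k_fine = np.linspace(0, 2 * np.pi, 2048)                  # fine grid -> reliable labels

def make_dataset(m):
    feats, labels = [], []
    while len(labels) < m:
        tR, tL = rng.uniform(0.1, 2.0, 2)
        if abs(tR - tL) < 0.15:                            # skip loops passing near E = 0
            continue
        E = tR * np.exp(1j * k_feat) + tL * np.exp(-1j * k_feat)
        E_fine = tR * np.exp(1j * k_fine) + tL * np.exp(-1j * k_fine)
        phase = np.unwrap(np.angle(E_fine))
        w = int(np.rint((phase[-1] - phase[0]) / (2 * np.pi)))   # spectral winding number
        feats.append(np.concatenate([E.real, E.imag]))
        labels.append(w)
    return np.array(feats), np.array(labels)

X, y = make_dataset(600)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))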
In the realm of cost-sharing mechanisms, the vulnerability to Sybil strategies (also known as false-name strategies, where agents create fake identities to manipulate outcomes) has not yet been studied. In this paper, we delve into the details of different cost-sharing mechanisms proposed in the literature, highlighting their non-Sybil-resistant nature. Furthermore, we prove that under mild conditions a Sybil-proof cost-sharing mechanism for public excludable goods is at least $(n+1)/2$-approximate. This finding reveals an exponential increase in the worst-case social cost relative to environments in which agents are restricted from using Sybil strategies. To circumvent these negative results, we introduce the concept of \textit{Sybil Welfare Invariant} mechanisms, in which a mechanism does not decrease its welfare under Sybil strategies when agents choose weakly dominant strategies and have subjective prior beliefs over other players' actions. Finally, we prove that the Shapley value mechanism for symmetric and submodular cost functions satisfies this property, and thus deduce that the worst-case social cost of this mechanism is the $n$th harmonic number $\mathcal H_n$ under equilibrium with Sybil strategies, matching the worst-case social cost bound for cost-sharing mechanisms. This finding suggests that any group of agents, each with private valuations, can fund public excludable goods both permissionlessly and anonymously, achieving efficiency comparable to that of permissioned and non-anonymous domains, even when the total number of participants is unknown.
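The sketch below simulates the Shapley-value (Moulin) cost-sharing mechanism for a public excludable good with cost $C(S)=1$ for every nonempty $S$: each remaining agent is offered the equal share $1/|S|$, and anyone whose reported value falls below it is dropped until the served set stabilizes. The classic worst-case profile $v_i = 1/i - \epsilon$ drives the social cost to roughly the $n$th harmonic number $\mathcal H_n$, matching the bound quoted above; Sybil strategies themselves are not simulated.

# Shapley-value (Moulin) mechanism for a public excludable good with C(S) = 1.
def moulin_shapley(values):
    served = set(range(len(values)))
    while served:
        share = 1.0 / len(served)
        drop = {i for i in served if values[i] < share}
        if not drop:
            break
        served -= drop
    return served

n = 50
eps = 1e-9
values = [1.0 / (i + 1) - eps for i in range(n)]            # classic worst case
served = moulin_shapley(values)
social_cost = (1.0 if served else 0.0) + sum(values[i] for i in range(n) if i not in served)
print(len(served), round(social_cost, 3), round(sum(1.0 / (i + 1) for i in range(n)), 3))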
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
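The core ingredient named above, Shepard's universal law of generalization, says that perceived similarity decays exponentially with distance in a psychological similarity space. The sketch below applies it to toy low-dimensional codes of an AI's saliency explanation and the explanation a participant would give; the feature vectors, the distance, and the choice rule are illustrative assumptions rather than the paper's fitted model.

# Shepard's law: similarity = exp(-c * distance) in a similarity space.
import numpy as np

def shepard_similarity(x, y, sensitivity=1.0):
    return np.exp(-sensitivity * np.linalg.norm(np.asarray(x) - np.asarray(y), ord=1))

# toy codes of "which regions matter" in the AI's explanation, the explanation
# the participant would give, and an unrelated alternative explanation
ai_explanation    = [0.9, 0.1, 0.0]
human_explanation = [0.7, 0.2, 0.1]
alt_explanation   = [0.0, 0.1, 0.9]

sim_match = shepard_similarity(ai_explanation, human_explanation)
sim_other = shepard_similarity(alt_explanation, human_explanation)
# the closer the AI's explanation is to the participant's own, the more they
# generalize their own judgment to the AI (here via a simple choice ratio)
p_agree = sim_match / (sim_match + sim_other)
print(round(sim_match, 3), round(sim_other, 3), round(p_agree, 3))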