We introduce a family of identities that express general linear non-unitary evolution operators as a linear combination of unitary evolution operators, each solving a Hamiltonian simulation problem. This formulation can exponentially enhance the accuracy of the recently introduced linear combination of Hamiltonian simulation (LCHS) method [An, Liu, and Lin, Physical Review Letters, 2023]. For the first time, this approach enables quantum algorithms to solve linear differential equations with both optimal state preparation cost and near-optimal scaling of matrix queries with respect to all parameters.
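As a minimal numerical illustration of the kind of identity involved, the sketch below checks, for a time-independent generator, an LCHS-type formula of the form $e^{-At} = \int_{\mathbb R} \frac{1}{\pi(1+k^2)}\, e^{-\mathrm{i}t(kL+H)}\,\mathrm{d}k$, where $A = L + \mathrm{i}H$ with $L$ Hermitian positive semidefinite and $H$ Hermitian. The sign and weight conventions may differ from those of the cited works, and the truncation range and step size are illustrative choices rather than part of any algorithm.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, t = 2, 1.0

# Random A = L + i*H with L Hermitian positive semidefinite and H Hermitian,
# with L normalized so the quadrature grid below resolves the oscillations.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
L = B @ B.conj().T
L /= np.linalg.norm(L, 2)
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (C + C.conj().T) / 2
A = L + 1j * H

# Trapezoidal quadrature of the Cauchy-weighted combination of unitaries over |k| <= 200.
ks = np.linspace(-200.0, 200.0, 20001)
weights = 1.0 / (np.pi * (1.0 + ks**2))
approx = np.zeros((n, n), dtype=complex)
for k, w in zip(ks, weights):
    approx += w * expm(-1j * t * (k * L + H))
approx *= ks[1] - ks[0]

print(np.linalg.norm(approx - expm(-A * t)))   # small (order 1e-3), dominated by truncating k
```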
We provide two families of algorithms to compute characteristic polynomials of endomorphisms and norms of isogenies of Drinfeld modules. Our algorithms work for Drinfeld modules of any rank, defined over any base curve. When the base curve is $\mathbb P^1_{\mathbb F_q}$, we carry out a thorough complexity analysis, showing that our algorithms are, in many cases, the asymptotically fastest known. The first family of algorithms relies on the correspondence between Drinfeld modules and Anderson motives, reducing the computation to linear algebra over a polynomial ring. The second family, available only for the Frobenius endomorphism, is based on a formula expressing the characteristic polynomial of the Frobenius as a reduced norm in a central simple algebra.
In this work, we determine the material parameters of the relaxed micromorphic generalized continuum model for a given periodic microstructure. This is achieved through a least squares fitting of the total energy of the relaxed micromorphic homogeneous continuum to the total energy of the fully-resolved heterogeneous microstructure, governed by classical linear elasticity. The relaxed micromorphic model is a generalized continuum that utilizes the $\Curl$ of a micro-distortion field instead of its full gradient as in the classical micromorphic theory, leading to several advantages and differences. The most crucial advantage is that it operates between two well-defined scales. These scales are determined by linear elasticity with microscopic and macroscopic elasticity tensors, which bound the stiffness of the relaxed micromorphic continuum from above and below, respectively. While the macroscopic elasticity tensor is established a priori through standard periodic first-order homogenization, the microscopic elasticity tensor remains to be determined. Additionally, the characteristic length parameter, associated with the curvature measure, controls the transition between the micro- and macro-scales. Both the microscopic elasticity tensor and the characteristic length parameter are determined here using a computational approach based on the least squares fitting of energies. This process involves an adequate number of quadratic deformation modes and different specimen sizes. We conduct a comparative analysis between the least squares fitting results of the relaxed micromorphic model, the fitting of a skew-symmetric micro-distortion field (Cosserat-micropolar model), and the fitting of the classical micromorphic model with two different formulations for the curvature...
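The toy sketch below only illustrates the structure of such an energy-based least squares fit: a scalar stand-in for the relaxed micromorphic energy, a set of specimen sizes, synthetic target energies in place of the fully-resolved linear-elastic computations, and a joint fit of a microscopic stiffness and a characteristic length via scipy.optimize.least_squares. All functions and numbers are hypothetical placeholders, not the tensorial quantities of the actual model.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical setup: one quadratic deformation mode of amplitude `strain` applied to
# specimens of several sizes; the energies of the fully resolved heterogeneous
# microstructure are assumed precomputed (synthetic values are used here instead).
sizes = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
strain = 0.01
c_macro = 10.0    # known a priori from standard periodic first-order homogenization

def model_energy(size, c_micro, L_c):
    # Toy scalar stand-in for the homogeneous relaxed micromorphic energy: the apparent
    # stiffness interpolates between c_micro (small specimens) and c_macro (large
    # specimens), with L_c controlling the transition between the two scales.
    w = 1.0 / (1.0 + (size / L_c) ** 2)
    return 0.5 * (w * c_micro + (1.0 - w) * c_macro) * strain**2 * size**2

# Synthetic "heterogeneous" target energies (in practice: linear-elastic FEM results).
E_target = model_energy(sizes, c_micro=25.0, L_c=3.0) * (1.0 + 0.01 * np.array([1, -1, 0.5, -0.5, 0]))

def residuals(p):
    c_micro, L_c = p
    return model_energy(sizes, c_micro, L_c) / E_target - 1.0

fit = least_squares(residuals, x0=[15.0, 1.0], bounds=([c_macro, 1e-3], [np.inf, np.inf]))
print(fit.x)      # fitted (c_micro, L_c), approximately (25.0, 3.0)
```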
This manuscript examines nonlinear stochastic fractional neutral integro-differential equations with weakly singular kernels. Our focus is on obtaining precise estimates that cover all possible cases of Abel-type singular kernels. We first establish the existence, uniqueness, and continuous dependence on the initial value of the true solution under a local Lipschitz condition and a linear growth condition. We then develop the Euler-Maruyama method for the numerical solution of the equation and prove its strong convergence under the same conditions as the well-posedness. Moreover, we determine the exact convergence rate of this method under global Lipschitz and linear growth conditions. Finally, we prove a generalized Gronwall inequality with multiple weak singularities.
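A minimal sketch of an Euler-Maruyama-type discretization for a scalar Volterra equation with Abel-type kernels is given below; the neutral term is omitted, and the kernel exponents, coefficients, and step count are illustrative choices rather than those analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar Volterra-type SDE with Abel (weakly singular) kernels,
#   X(t) = X0 + int_0^t (t-s)^(alpha-1) f(X(s)) ds + int_0^t (t-s)^(beta-1) g(X(s)) dW(s).
alpha, beta = 0.7, 0.8
f = lambda x: -x             # drift coefficient (globally Lipschitz)
g = lambda x: 0.2 * x        # diffusion coefficient (globally Lipschitz)
X0, T, N = 1.0, 1.0, 2**10
h = T / N
t = np.linspace(0.0, T, N + 1)
dW = np.sqrt(h) * rng.standard_normal(N)

X = np.empty(N + 1)
X[0] = X0
for n in range(1, N + 1):
    s = t[:n]                                    # past grid points t_0, ..., t_{n-1}
    k_drift = (t[n] - s) ** (alpha - 1.0)        # singular weights, finite on the grid
    k_diff = (t[n] - s) ** (beta - 1.0)
    X[n] = X0 + h * np.sum(k_drift * f(X[:n])) + np.sum(k_diff * g(X[:n]) * dW[:n])

print("X(T) for one sample path:", X[-1])
```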
It is known that standard stochastic Galerkin methods encounter challenges when solving partial differential equations with high-dimensional random inputs, a difficulty typically caused by the large number of stochastic basis functions required. It thus becomes crucial to choose effective basis functions properly, so that the dimension of the stochastic approximation space can be reduced. In this work, we focus on the stochastic Galerkin approximation associated with generalized polynomial chaos (gPC), and explore the gPC expansion based on the analysis of variance (ANOVA) decomposition. A concise form of the gPC expansion is presented for each component function of the ANOVA expansion, and an adaptive ANOVA procedure is proposed to construct the overall stochastic Galerkin system. Numerical results demonstrate the efficiency of the proposed adaptive ANOVA stochastic Galerkin method for both diffusion and Helmholtz problems.
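The sketch below illustrates only the adaptive dimension-selection idea: first-order (anchored) ANOVA components of a hypothetical model function are expanded in Legendre gPC, their variance contributions are estimated from the coefficients, and higher-order terms are retained only for dimensions above a tolerance. The model function u, the anchor point, the polynomial order, and the tolerance are all assumptions for illustration; the actual method works with the Galerkin system itself.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Hypothetical model with xi_i ~ U(-1, 1) i.i.d.; dimensions 0 and 2 carry most variance.
def u(xi):
    return np.exp(0.8 * xi[0]) + 0.5 * xi[2] ** 2 + 0.01 * xi[1] * xi[3]

d, p, tol = 4, 4, 1e-2                   # dimension, gPC order per variable, ANOVA tolerance
anchor = np.zeros(d)                     # anchored-ANOVA reference point (an assumption)
nodes, weights = leggauss(p + 1)         # Gauss-Legendre rule on [-1, 1]

def first_order_variance(i):
    # Variance of the anchored component f_i(xi_i) = u(anchor with xi_i varied) - u(anchor),
    # estimated from its coefficients in the orthonormal Legendre (gPC) basis.
    f_vals = np.array([u(np.where(np.arange(d) == i, xn, anchor)) for xn in nodes]) - u(anchor)
    var = 0.0
    for k in range(1, p + 1):
        phi = np.zeros(p + 1)
        phi[k] = np.sqrt(2 * k + 1)      # orthonormal Legendre polynomial of degree k
        coeff = 0.5 * np.sum(weights * f_vals * legval(nodes, phi))
        var += coeff ** 2
    return var

variances = np.array([first_order_variance(i) for i in range(d)])
active = np.where(variances / variances.sum() > tol)[0]
print("dimensions kept for higher-order gPC/ANOVA terms:", active)   # expected: [0 2]
```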
Any interactive protocol between a pair of parties can be reliably simulated in the presence of noise with a multiplicative overhead on the number of rounds (Schulman 1996). The reciprocal of the best (least) overhead is called the interactive capacity of the noisy channel. In this work, we present lower bounds on the interactive capacity of the binary erasure channel. Our lower bound improves the best known bound due to Ben-Yishai et al. 2021 by roughly a factor of 1.75. The improvement is due to a tighter analysis of the correctness of the simulation protocol using error pattern analysis. More precisely, instead of using the well-known technique of bounding the least number of erasures needed to make the simulation fail, we identify and bound the probability of specific erasure patterns causing simulation failure. We remark that error pattern analysis can be useful in solving other problems involving stochastic noise, such as bounding the interactive capacity of different channels.
We study the problem of parameter estimation for large exchangeable interacting particle systems when a sample of discrete observations from a single particle is available. We propose a novel method based on martingale estimating functions constructed by employing the eigenvalues and eigenfunctions of the generator of the mean field limit, where the law of the process is replaced by the (unique) invariant measure of the mean field dynamics. We then prove that our estimator is asymptotically unbiased and asymptotically normal when the number of observations and the number of particles tend to infinity, and we provide a rate of convergence towards the exact value of the parameters. Finally, we present several numerical experiments which show the accuracy of our estimator and corroborate our theoretical findings, even when the mean field dynamics exhibit more than one steady state.
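A toy linear example of the estimating-function idea is sketched below: an exchangeable system with mean-field interaction is simulated, one particle is observed on a coarse time grid, and the parameter is recovered from a martingale estimating function built on the eigenpair phi(x) = x of the generator of the limiting dynamics with its law replaced by the mean-zero invariant measure. The drift, the parameters treated as known, and all sizes are illustrative assumptions; the cited work covers general mean-field dynamics.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy exchangeable system of N particles with mean-field interaction (Euler-Maruyama):
#   dX^i = -theta*X^i dt + kappa*(Xbar - X^i) dt + sigma dW^i .
# Replacing the law of the mean-field limit by its (mean-zero) invariant measure turns the
# limit into an OU process of rate theta + kappa, whose generator has the eigenpair
# phi(x) = x with eigenvalue -(theta + kappa); that eigenpair is used in the estimator below.
theta, kappa, sigma = 1.0, 0.5, 0.7          # kappa and sigma are treated as known here
N, dt, n_steps, obs_every = 200, 1e-3, 200_000, 100
X = sigma / np.sqrt(2.0 * (theta + kappa)) * rng.standard_normal(N)
obs = []
for n in range(n_steps):
    Xbar = X.mean()
    X = X + (-theta * X + kappa * (Xbar - X)) * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    if (n + 1) % obs_every == 0:
        obs.append(X[0])                     # discrete observations of a single particle

obs, Delta = np.array(obs), obs_every * dt
# Martingale estimating function sum_j phi(X_j) * [phi(X_{j+1}) - exp(-rate*Delta) * phi(X_j)] = 0:
rate_hat = -np.log(np.sum(obs[1:] * obs[:-1]) / np.sum(obs[:-1] ** 2)) / Delta
print("estimated theta:", rate_hat - kappa)  # typically within ~0.1-0.2 of the true value 1.0
```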
A new sparse semiparametric model is proposed, which incorporates the influence of two functional random variables on a scalar response in a flexible and interpretable manner. One of the functional covariates is included through a single-index structure, while the other is included linearly through the high-dimensional vector formed by its discretised observations. For this model, two new algorithms are presented for selecting relevant variables in the linear part and estimating the model. Both procedures utilise the functional origin of the linear covariates. Finite sample experiments demonstrate the scope of application of both algorithms: the first is a fast algorithm that avoids, without loss in predictive ability, the significant computational time required by standard variable selection methods for estimating this model, while the second completes the set of relevant linear covariates provided by the first, thus improving its predictive efficiency. Some asymptotic results theoretically support both procedures. A real data application illustrates the applicability of the presented methodology from a predictive perspective, in terms of interpretability of outputs and low computational cost.
In arXiv:2305.03945 [math.NA], a first-order optimization algorithm was introduced to solve time-implicit schemes of reaction-diffusion equations. In this work, we conduct a theoretical study of this first-order algorithm equipped with a quadratic regularization term. We provide sufficient conditions under which the proposed algorithm and its time-continuous limit converge exponentially fast to a desired time-implicit numerical solution. We show both theoretically and numerically that the convergence rate is independent of the grid size, which makes our method suitable for large-scale problems. The efficiency of our algorithm is verified via a series of numerical examples on various types of reaction-diffusion equations. The choice of optimal hyperparameters as well as comparisons with some classical root-finding algorithms are also discussed in the numerical section.
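For concreteness, the generic sketch below (not the algorithm of arXiv:2305.03945) solves a single backward-Euler step of an Allen-Cahn-type reaction-diffusion equation by minimizing an objective that contains a quadratic (proximal) regularization term, using plain gradient descent; the grid, step size, and iteration count are illustrative.

```python
import numpy as np

# Generic illustration: one backward-Euler step of u_t = D u_xx + u - u^3, written as the
# minimization of an objective with the quadratic regularization ||u - u_old||^2 / (2 dt)
# and solved with plain gradient descent.
D, dt, m = 1e-3, 0.05, 128
x = np.linspace(0.0, 1.0, m)
h = x[1] - x[0]
u_old = np.tanh((x - 0.5) / 0.1)                 # previous time step / initial profile

def laplacian(u):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    lap[0] = (u[1] - u[0]) / h**2                # zero-flux ends
    lap[-1] = (u[-2] - u[-1]) / h**2
    return lap

def grad_J(u):
    # Gradient of J(u) = ||u - u_old||^2/(2 dt) + (D/2) u^T(-Lap)u + sum (1 - u^2)^2 / 4;
    # its stationary point is exactly the implicit Euler update.
    return (u - u_old) / dt - D * laplacian(u) + (u**3 - u)

u = u_old.copy()
step = 0.01                                      # heuristic step size for this grid
for _ in range(500):
    u -= step * grad_J(u)

residual = u - u_old - dt * (D * laplacian(u) + u - u**3)
print("max residual of the implicit step:", np.max(np.abs(residual)))  # near machine precision
```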
We present new convergence estimates of generalized empirical interpolation methods in terms of the entropy numbers of the parametrized function class. Our analysis is transparent and leads to sharper convergence rates than the classical analysis via the Kolmogorov n-width. In addition, we derive novel entropy-based convergence estimates of the Chebyshev greedy algorithm for sparse n-term nonlinear approximation of a target function. This also improves on the classical convergence analysis when the corresponding entropy numbers decay fast enough.
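As a reminder of the object being analyzed, the sketch below runs the plain (pointwise) empirical interpolation greedy loop on a snapshot matrix of a parametrized family; the generalized method replaces point evaluations with general linear functionals, and the family, grid, and number of terms here are illustrative.

```python
import numpy as np

# Minimal sketch of the (discrete) empirical interpolation greedy loop on a
# parametrized family f(x; mu), with snapshots stored as columns.
x = np.linspace(-1.0, 1.0, 400)
mus = np.linspace(-0.8, 0.8, 200)
F = np.exp(-((x[:, None] - mus[None, :]) ** 2) / 0.05)

n_terms = 12
basis, points, errors = [], [], []
for m in range(n_terms):
    if m == 0:
        R = F.copy()
    else:
        Q = np.column_stack(basis)                 # current interpolation basis
        coeffs = np.linalg.solve(Q[points, :], F[points, :])
        R = F - Q @ coeffs                         # interpolation residuals
    j = np.argmax(np.max(np.abs(R), axis=0))       # worst-approximated snapshot
    i = np.argmax(np.abs(R[:, j]))                 # new interpolation point
    errors.append(np.abs(R[i, j]))
    basis.append(R[:, j] / R[i, j])
    points.append(i)

print(np.array(errors))   # greedy sup-norm errors, expected to decay quickly
```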
We present a novel combination of dynamic embedded topic models and change-point detection to explore diachronic change of lexical semantic modality in classical and early Christian Latin. We demonstrate several methods for finding and characterizing patterns in the output, and for relating them to traditional scholarship in Comparative Literature and Classics. This simple approach to unsupervised models of semantic change can be applied to any suitable corpus, and we conclude with future directions and refinements aimed at allowing noisier, less-curated materials to meet that threshold.
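A minimal sketch of the change-point step is given below, using synthetic per-period weights standing in for the output of a dynamic embedded topic model; real corpora would call for multiple change points and significance testing.

```python
import numpy as np

# Hypothetical input: per-period weights of one modal sense/topic (synthetic numbers here),
# e.g. produced by a dynamic embedded topic model over century-sized slices of a corpus.
weights = np.array([0.12, 0.11, 0.13, 0.12, 0.14, 0.27, 0.29, 0.31, 0.30, 0.28])

def best_change_point(y):
    # Least-squares single change-point detection: choose the split that minimizes the
    # total within-segment squared deviation from the segment means.
    n = len(y)
    scores = []
    for k in range(1, n):
        left, right = y[:k], y[k:]
        scores.append(((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum())
    return 1 + int(np.argmin(scores))

print("estimated change point at period index:", best_change_point(weights))  # expect 5
```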