We propose a non-linear state-space model to examine the relationship between CO$_2$ emissions, energy sources, and macroeconomic activity, using data from 1971 to 2019. CO$_2$ emissions are modeled as a weighted sum of fossil fuel use, with emission conversion factors that evolve over time to reflect technological changes. GDP is expressed as the outcome of linearly increasing energy efficiency and total energy consumption. The model is estimated using CO$_2$ data from the Global Carbon Budget, GDP statistics from the World Bank, and energy data from the International Energy Agency (IEA). Model projections of CO$_2$ emissions and GDP for 2020 to 2100 are based on energy scenarios from the Shared Socioeconomic Pathways (SSP) and the IEA's Net Zero roadmap. Emissions projections from the model are consistent with these scenarios but predict lower GDP growth. An alternative model version, assuming exponential energy efficiency improvement, produces GDP growth rates more in line with the benchmark projections. Our results imply that if internationally agreed net-zero objectives are to be fulfilled and economic growth is to follow the SSP or IEA scenarios, then drastic changes in energy efficiency, not consistent with historical trends, are needed.
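Schematically, and in our own notation rather than the paper's, the two observation equations of such a model can be written as
$$ \mathrm{CO}_{2,t} \;=\; \sum_{i} \phi_{i,t}\, F_{i,t} + \varepsilon_t, \qquad \mathrm{GDP}_t \;=\; (a + b\,t)\, E_t + \eta_t, $$
where $F_{i,t}$ is the consumption of fossil fuel $i$, $\phi_{i,t}$ a slowly evolving emission conversion factor, $E_t$ total energy consumption, $a + b\,t$ the linearly increasing energy efficiency, and $\varepsilon_t$, $\eta_t$ noise terms; the alternative version mentioned above replaces $a + b\,t$ by an exponential trend such as $a\,e^{b t}$.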
We prove sharp bounds on certain impedance-to-impedance maps (and their compositions) for the Helmholtz equation with large wavenumber (i.e., at high frequency) using semiclassical defect measures. The paper [GGGLS] (Gong-Gander-Graham-Lafontaine-Spence, 2022) recently showed that the behaviour of these impedance-to-impedance maps (and their compositions) dictates the convergence of the parallel overlapping Schwarz domain-decomposition method with impedance boundary conditions on the subdomain boundaries. For a model decomposition with two subdomains and sufficiently large overlap, the results of this paper combined with those in [GGGLS] show that the parallel Schwarz method is power contractive, independent of the wavenumber. For strip-type decompositions with many subdomains, the results of this paper show that the composite impedance-to-impedance maps, in general, behave "badly" with respect to the wavenumber; nevertheless, by proving results about the composite maps applied to a restricted class of data, we give insight into the wavenumber-robustness of the parallel Schwarz method observed in the numerical experiments in [GGGLS].
In this paper we analyze a method for approximating the first-passage time density and the corresponding distribution function for a CIR process. The approximation is obtained by truncating a series expansion involving the generalized Laguerre polynomials and the gamma probability density. The suggested approach raises a number of numerical issues that depend strongly on the coefficient of variation of the first-passage time random variable. These issues are examined, and solutions are proposed that also involve the first-passage time distribution function. Numerical results and comparisons with alternative approximation methods show the strengths and weaknesses of the proposed method. A general acceptance-rejection-like procedure that makes use of the approximation is presented. It allows the generation of first-passage time data even when their distribution is unknown.
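A generic expansion of this type (written in our notation, which need not match the paper's exact parametrization) represents the first-passage time density as a gamma density multiplied by a truncated Laguerre series,
$$ f_T(t) \;\approx\; \frac{\beta^{\alpha} t^{\alpha-1} e^{-\beta t}}{\Gamma(\alpha)} \left[ 1 + \sum_{k=1}^{K} c_k\, L_k^{(\alpha-1)}(\beta t) \right], $$
where $L_k^{(\alpha-1)}$ are generalized Laguerre polynomials, the coefficients $c_k$ are typically computed from the moments of $T$, and the truncation level $K$ controls the accuracy; a large coefficient of variation tends to make such expansions numerically delicate.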
We consider the inverse scattering problem for time-harmonic acoustic waves in a medium with pointwise inhomogeneities. In the Foldy-Lax model, the estimation of the scatterers' locations and intensities from far field measurements can be recast as the recovery of a discrete measure from nonlinear observations. We propose a "linearize and locally optimize" approach to perform this reconstruction. We first solve a convex program in the space of measures (known as the Beurling LASSO), which involves a linearization of the forward operator (the far field pattern in the Born approximation). Then, we locally minimize a second functional involving the nonlinear forward map, using the output of the first step as initialization. We provide guarantees that the output of the first step is close to the sought-after measure when the scatterers have small intensities and are sufficiently separated. We also provide numerical evidence that the second step still allows for accurate recovery in settings that are more involved.
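The convex program of the first step is an instance of the Beurling LASSO, which in generic form (our notation) reads
$$ \min_{\mu \in \mathcal{M}(\mathcal{X})} \; \tfrac{1}{2}\, \big\| \Phi \mu - y \big\|^2 \;+\; \lambda\, |\mu|(\mathcal{X}), $$
where $\mathcal{M}(\mathcal{X})$ denotes measures on the search domain, $\Phi$ is the linearized (Born-approximation) far-field operator, $y$ collects the measurements, and the total-variation norm $|\mu|(\mathcal{X})$ promotes sparse, discrete solutions.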
The volume function $V(t)$ of a compact set $S \subset \mathbb{R}^d$ is just the Lebesgue measure of the set of points whose distance to $S$ is not larger than $t$. According to some classical results in geometric measure theory, the volume function turns out to be a polynomial, at least on a finite interval, under a quite intuitive, easy-to-interpret sufficient condition (called ``positive reach'') which can be seen as an extension of the notion of convexity. However, many other simple sets, not fulfilling the positive reach condition, also have a polynomial volume function. To our knowledge, there is no general, simple geometric description of such sets. Still, the polynomial character of $V(t)$ has some relevant consequences, since the polynomial coefficients carry useful geometric information. In particular, the constant term is the volume of $S$ and the first-order coefficient is the boundary measure (in Minkowski's sense). This paper is focused on sets whose volume function is polynomial on some interval starting at zero, whose length (which we call the ``polynomial reach'') might be unknown. Our main goal is to approximate such a polynomial reach by statistical means, using only a large enough random sample of points inside $S$. The practical motivation is simple: when the value of the polynomial reach, or rather a lower bound for it, is approximately known, the polynomial coefficients can be estimated from the sample points by using standard methods in polynomial approximation. As a result, we get a quite general method to estimate the volume and boundary measure of the set, relying only on an inner sample of points and not requiring the use of any smoothing parameter. This paper explores the theoretical and practical aspects of this idea.
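For convex sets this polynomial behaviour is classical (Steiner's formula). For instance, for a disc $S$ of radius $r$ in $\mathbb{R}^2$,
$$ V(t) \;=\; \pi r^2 \;+\; 2\pi r\, t \;+\; \pi t^2, $$
so the constant term is the area of $S$ and the coefficient of $t$ is its perimeter, matching the interpretation of the coefficients given above.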
Diffusion-based generative models in SE(3)-invariant space have demonstrated promising performance in molecular conformation generation, but typically require solving stochastic differential equations (SDEs) with thousands of update steps. Until now, it has remained unclear how to effectively accelerate this procedure explicitly in SE(3)-invariant space, which greatly hinders its wide application in the real world. In this paper, we systematically study the diffusion mechanism in SE(3)-invariant space through the lens of the approximation errors induced by existing methods. We thereby develop more precise approximations in SE(3)-invariant space in the context of projected differential equations. Theoretical analysis is provided, as well as empirical evidence relating hyper-parameters to such errors. Altogether, we propose a novel acceleration scheme for generating molecular conformations in SE(3)-invariant space. Experimentally, our scheme can generate high-quality conformations with a 50x--100x speedup compared to existing methods.
The problem of estimating a parameter in the drift coefficient is addressed for $N$ discretely observed independent and identically distributed stochastic differential equations (SDEs). This is done under additional constraints, wherein only public data can be published and used for inference. The concept of local differential privacy (LDP) is formally introduced for a system of stochastic differential equations. The objective is to estimate the drift parameter by proposing a contrast function based on a pseudo-likelihood approach. A suitably scaled Laplace noise is incorporated to meet the privacy requirements. Our key findings encompass the derivation of explicit conditions tied to the privacy level. Under these conditions, we establish the consistency and asymptotic normality of the associated estimator. Notably, the convergence rate is intricately linked to the privacy level, and in some situations may be completely different from the case where privacy constraints are ignored. Our results hold true as the discretization step approaches zero and the number of processes $N$ tends to infinity.
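Such a privatization step is typically the Laplace mechanism for $\varepsilon$-local differential privacy; schematically (in our notation, with details that are illustrative rather than taken from the paper), if each agent would transmit a statistic $S_i$ with sensitivity $\Delta$, it instead releases
$$ Z_i \;=\; S_i \;+\; \frac{\Delta}{\varepsilon}\, \xi_i, \qquad \xi_i \sim \mathrm{Laplace}(0,1) \ \text{i.i.d.}, $$
and only the privatized $Z_i$ enter the contrast function, which is one way in which the attainable convergence rate becomes tied to the privacy level $\varepsilon$.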
We propose an adaptive iteratively linearized finite element method (AILFEM) in the context of strongly monotone nonlinear operators in Hilbert spaces. The approach combines adaptive mesh-refinement with an energy-contractive linearization scheme (e.g., the Ka\v{c}anov method) and a norm-contractive algebraic solver (e.g., an optimal geometric multigrid method). Crucially, a novel parameter-free algebraic stopping criterion is designed and we prove that it leads to a uniformly bounded number of algebraic solver steps. Unlike available results requiring sufficiently small adaptivity parameters to ensure even plain convergence, the new AILFEM algorithm guarantees full R-linear convergence for arbitrary adaptivity parameters. Thus, parameter-robust convergence is guaranteed. Moreover, for sufficiently small adaptivity parameters, the new adaptive algorithm guarantees optimal complexity, i.e., optimal convergence rates with respect to the overall computational cost and, hence, time.
For the Crouzeix-Raviart and enriched Crouzeix-Raviart elements, asymptotic expansions of eigenvalues of the Stokes operator are derived by establishing two pseudostress interpolations, which admit a full one-order supercloseness with respect to the numerical velocity and the pressure, respectively. The design of these interpolations overcomes the difficulty caused by the lack of supercloseness of the canonical interpolations for the two nonconforming elements, and leads to an intrinsic and concise asymptotic analysis of numerical eigenvalues, which proves an optimal superconvergence of eigenvalues by the extrapolation algorithm. Meanwhile, an optimal superconvergence of postprocessed approximations for the Stokes equation is proved by use of this supercloseness. Finally, numerical experiments are carried out to verify the theoretical results.
We use an energy-based model for bridge-type innovation. The loss function is explained through game theory; the logic is clear and the formula is simple. This avoids invoking maximum likelihood estimation to explain the loss function and eliminates the need for Monte Carlo methods to evaluate the normalizing denominator. Assuming that the bridge-type population follows a Boltzmann distribution, a neural network is constructed to represent the energy function. Langevin dynamics is used to generate new samples with low energy values, thereby establishing an energy-based generative model of bridge types. The energy function is trained on a symmetric structured image dataset of three-span beam, arch, cable-stayed, and suspension bridges so that it accurately assigns energy values to real and fake samples. Points sampled from the latent space are transformed by a gradient descent algorithm on the energy function into low-energy samples, thereby generating new bridge types that differ from those in the dataset. Because training in this attempt was unstable and slow, new bridge types are generated only rarely and the generated images have low definition.
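As a rough illustration only (not the authors' code; the network architecture, image size, and hyper-parameters below are our own assumptions), Langevin-style sampling from a learned energy function moves random starting images toward low energy by noisy gradient descent:

import torch
import torch.nn as nn

class EnergyNet(nn.Module):
    # Small CNN mapping a grayscale bridge image to a scalar energy value.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.Flatten(), nn.LazyLinear(1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def langevin_sample(energy, x, n_steps=60, step_size=10.0, noise_scale=0.005):
    # Noisy gradient descent on the energy: x <- x - step * dE/dx + noise.
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        x = x - step_size * grad + noise_scale * torch.randn_like(x)
        x = x.clamp(-1.0, 1.0)  # keep pixel values in a valid range
    return x.detach()

energy = EnergyNet()                    # would be trained on the bridge image dataset
x0 = torch.randn(8, 1, 64, 64)          # start from noise in image space
samples = langevin_sample(energy, x0)   # candidate low-energy bridge images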
A modification of Amdahl's law for the case of an increment in the number of processor elements in a computer system is considered. The coefficient $k$ linking the accelerations of parallel and parallel specialized computer systems is determined. The limiting values of the coefficient are investigated and its theoretical maximum is calculated. It is proved that $k > 1$ for any positive increment of processor elements. The obtained formulas are combined into a single method allowing one to determine the maximum theoretical acceleration of a parallel specialized computer system in comparison with the acceleration of a minimal parallel computer system. The method is tested on the Apriori, k-nearest neighbors, CDF 9/7, fast Fourier transform, and naive Bayesian classifier algorithms.
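For reference, Amdahl's law in its standard form gives the acceleration of a program with parallelizable fraction $p$ on $n$ processor elements as
$$ S(n) \;=\; \frac{1}{(1-p) + p/n}, $$
and one natural reading of the coefficient above is the ratio $k = S(n + \Delta n)/S(n)$ between the system with $\Delta n$ additional processor elements and the original one; since $S$ is strictly increasing in $n$ whenever $p > 0$, this ratio exceeds $1$ for any positive increment, which is consistent with the statement $k > 1$.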