
We study the problem of approximating a matrix $\mathbf{A}$ with a matrix that has a fixed sparsity pattern (e.g., diagonal, banded, etc.), when $\mathbf{A}$ is accessed only by matrix-vector products. We describe a simple randomized algorithm that returns an approximation with the given sparsity pattern with Frobenius-norm error at most $(1+\varepsilon)$ times the best possible error. When each row of the desired sparsity pattern has at most $s$ nonzero entries, this algorithm requires $O(s/\varepsilon)$ non-adaptive matrix-vector products with $\mathbf{A}$. We also prove a matching lower bound, showing that, for any sparsity pattern with $\Theta(s)$ nonzeros per row and column, any algorithm achieving $(1+\varepsilon)$ approximation requires $\Omega(s/\varepsilon)$ matrix-vector products in the worst case. We thus resolve the matrix-vector product query complexity of the problem up to constant factors, even for the well-studied case of diagonal approximation, for which no previous lower bounds were known.
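As a concrete point of reference for the diagonal case of this query model, the sketch below implements a classical randomized diagonal estimator in the Bekas-Kokiolis-Saad style, built from the same kind of Gaussian probe vectors. It is a simpler relative of the algorithm above, not the paper's $(1+\varepsilon)$-optimal method, and the function name `estimate_diagonal` is ours.

```python
import numpy as np

def estimate_diagonal(matvec, n, num_queries, rng=None):
    """Randomized diagonal estimate using only matrix-vector products.

    Bekas-Kokiolis-Saad style: diag(A) is approximated entrywise by
    sum_k g_k * (A g_k) / sum_k g_k * g_k over random probes g_k.
    """
    rng = np.random.default_rng(rng)
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(num_queries):
        g = rng.standard_normal(n)      # random Gaussian probe
        num += g * matvec(g)            # elementwise g * (A g)
        den += g * g
    return num / den

# Usage: a hidden matrix accessed only through products.
A = np.random.randn(200, 200)
d_est = estimate_diagonal(lambda v: A @ v, 200, num_queries=50, rng=0)
print(np.linalg.norm(d_est - np.diag(A)) / np.linalg.norm(np.diag(A)))
```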

Related Content

We develop the frozen Gaussian approximation (FGA) for the fractional Schr\"odinger equation in the semi-classical regime, where the solution is highly oscillatory when the scaled Planck constant $\varepsilon$ is small. The method approximates the solution to the Schr\"odinger equation by an integral representation based on asymptotic analysis and provides a highly efficient computational method for high-frequency wave-function evolution. In particular, we revise the standard FGA formula to address singularities that arise in the higher-order derivatives of the coefficients of the associated Hamiltonian flow, which conventional FGA analysis assumes to be twice continuously differentiable or smooth. We then establish convergence to the true solution. Additionally, we provide numerical examples to verify the accuracy and convergence behavior of the frozen Gaussian approximation method.
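For orientation, one common semiclassical scaling of the model is written below; the paper's exact normalization of the fractional operator may differ.

```latex
% Semiclassical fractional Schrodinger equation (one common scaling):
\[
  i\varepsilon\,\partial_t u^\varepsilon
  = \tfrac{1}{2}\left(-\varepsilon^2\Delta\right)^{\alpha/2} u^\varepsilon
    + V(x)\,u^\varepsilon,
  \qquad u^\varepsilon(0,x) = u_0^\varepsilon(x),
\]
% with fractional order $\alpha$ (typically $\alpha \in (1,2]$) and
% potential $V$; solutions oscillate at frequency $O(1/\varepsilon)$
% as $\varepsilon \to 0$, which is what FGA is designed to resolve.
```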

We establish a connection between stochastic optimal control and generative models based on stochastic differential equations (SDEs), such as recently developed diffusion probabilistic models. In particular, we derive a Hamilton-Jacobi-Bellman equation that governs the evolution of the log-densities of the underlying SDE marginals. This perspective allows us to transfer methods from optimal control theory to generative modeling. First, we show that the evidence lower bound is a direct consequence of the well-known verification theorem from control theory. Further, we formulate diffusion-based generative modeling as a minimization of the Kullback-Leibler divergence between suitable measures in path space. Finally, we develop a novel diffusion-based method for sampling from unnormalized densities -- a problem frequently occurring in statistics and computational sciences. We demonstrate that our time-reversed diffusion sampler (DIS) can outperform other diffusion-based sampling approaches on multiple numerical examples.
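To make the reverse-time sampling idea concrete, here is a minimal Euler-Maruyama sketch of a generic reverse-time SDE sampler, assuming an Ornstein-Uhlenbeck forward (noising) process and access to some score estimate; the DIS method of the paper is more refined than this template, and all names here are ours.

```python
import numpy as np

def reverse_sde_sample(score, n_samples, dim, n_steps=500, T=1.0, rng=None):
    """Euler-Maruyama sketch of time-reversed diffusion sampling.

    Assumed forward (noising) SDE: dX = -X dt + sqrt(2) dW, an OU
    process with standard-normal stationary law. The reverse SDE then
    reads dY = [Y + 2*score(Y, t)] dt + sqrt(2) dW, integrated from
    t = T down to t = 0, where score(x, t) approximates
    grad_x log p_t(x); any learned estimate can be plugged in.
    """
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    y = rng.standard_normal((n_samples, dim))   # start from the prior
    for k in range(n_steps):
        t = T - k * dt
        drift = y + 2.0 * score(y, t)
        y = y + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(y.shape)
    return y

# Toy check: if the data law is N(0, I), the true score is -x at all
# times (OU noising leaves it invariant), so samples stay Gaussian.
samples = reverse_sde_sample(lambda x, t: -x, 2000, 2, rng=0)
print(samples.mean(axis=0), samples.std(axis=0))
```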

The present work is devoted to strong approximations of a generalized A\"{i}t-Sahalia model arising from mathematical finance. The numerical study of the considered model faces essential difficulties caused by a drift that blows up at the origin, highly nonlinear drift and diffusion coefficients, and the requirement that positivity be preserved. In this paper, a novel explicit Euler-type scheme is proposed, which is easily implementable and preserves positivity of the original model unconditionally, i.e., for any time step size $h > 0$. A mean-square convergence rate of order $0.5$ is also obtained for the proposed scheme in both the non-critical and the general critical cases. Our work is motivated by the need to justify multi-level Monte Carlo (MLMC) simulations for the underlying model, where the rate of mean-square convergence is required and the preservation of positivity is desirable, particularly for large discretization time steps. Numerical experiments are finally provided to confirm the theoretical findings.
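For illustration, the generalized model and a naive explicit simulation are sketched below. The positivity floor used here is a crude stand-in for exposition only and is not the unconditionally positivity-preserving scheme proposed in the paper; all parameter values are arbitrary demo choices.

```python
import numpy as np

def ait_sahalia_paths(x0, h, n_steps, a=(1.0, 2.0, 2.0, 1.0),
                      b=1.0, r=2.0, rho=1.5, rng=None):
    """Illustrative explicit Euler-type simulation of the generalized
    Ait-Sahalia interest-rate model
        dX = (a_{-1}/X - a_0 + a_1 X - a_2 X^r) dt + b X^rho dW,
    (classical choice: r = 2, rho = 3/2). Positivity is enforced by a
    small floor -- NOT the paper's positivity-preserving scheme.
    """
    rng = np.random.default_rng(rng)
    am1, a0, a1, a2 = a
    x = np.full(1000, x0, dtype=float)      # 1000 sample paths
    floor = 1e-8
    for _ in range(n_steps):
        drift = am1 / x - a0 + a1 * x - a2 * x**r
        dw = np.sqrt(h) * rng.standard_normal(x.shape)
        x = x + drift * h + b * x**rho * dw
        x = np.maximum(x, floor)            # keep iterates positive
    return x

print(ait_sahalia_paths(1.0, 1e-3, 1000, rng=0).mean())
```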

We build a transient, multidimensional, multiphysics model based on continuum theories, involving the coupled mechanical, thermal, and electrochemical phenomena occurring simultaneously during the discharge or charge of lithium-ion batteries. The process delivers a system of coupled nonlinear partial differential equations. Besides initial and boundary conditions, we highlight the treatment of the electrode-electrolyte interface condition, which corresponds to a Butler-Volmer reaction kinetics equation. We present the derivation of the strong and weak forms of the model, as well as the discretization procedure in space and in time. The discretized model is computationally solved in two dimensions by means of a finite element method that employs $hp$ layered meshes, along with staggered second-order semi-implicit time integration. The expected error estimate is of higher order, both in space and in time, than in comparable works. A representative battery cell geometry, under distinct operating scenarios, is simulated. The numerical results show that the full model yields important additional insights beyond those obtained when accounting only for the electrochemical coupling. Considering the multiphysics becomes more important as the applied current increases, whether for discharge or for charge. Our full model provides battery design professionals with a valuable tool to optimize designs and advance the energy storage industry.
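For reference, the interface condition mentioned above is typically written in the following standard Butler-Volmer form; the paper's exchange current density $j_0$ and its concentration dependence may differ in detail.

```latex
% Butler-Volmer kinetics at the electrode-electrolyte interface:
\[
  j = j_0 \left[
        \exp\!\left(\frac{\alpha_a F \eta}{R T}\right)
      - \exp\!\left(-\frac{\alpha_c F \eta}{R T}\right)
      \right],
  \qquad \eta = \phi_s - \phi_e - U_{\mathrm{eq}},
\]
% where $\alpha_a,\alpha_c$ are the transfer coefficients, $F$ the
% Faraday constant, $R$ the gas constant, $T$ the temperature, and
% $\eta$ the overpotential relative to the open-circuit potential
% $U_{\mathrm{eq}}$.
```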

We consider the cubic nonlinear Schr\"odinger (NLS) equation with a spatially rough potential, a key equation in the mathematical setup for nonlinear Anderson localization. Our study comprises two main parts: new optimal results on well-posedness analysis at the PDE level, and subsequently a new efficient numerical method, together with its convergence analysis and simulations that illustrate our analytical results. In the analysis part, our results focus on understanding how the regularity of the solution is influenced by the regularity of the potential, for which we provide quantitative and explicit characterizations. Ill-posedness results are also established to demonstrate the sharpness of the obtained regularity characterizations and to indicate the minimum regularity required of the potential for the NLS to be solvable. Building upon the obtained regularity results, we design an appropriate numerical discretization for the model and establish its convergence with an optimal error bound. The numerical experiments in the end not only verify the theoretical regularity results, but also confirm the established convergence rate of the proposed scheme. Additionally, a comparison with other existing schemes demonstrates the superior accuracy of our new scheme in the case of a rough potential.
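As a baseline for comparison, a generic Strang-splitting Fourier step for the cubic NLS with a potential is sketched below. This is the standard reference discretization that low-regularity schemes are measured against, not the paper's tailored method; the grid and potential in the usage lines are arbitrary demo choices.

```python
import numpy as np

def nls_strang_step(u, V, dt, dx):
    """One Strang-splitting Fourier step for the 1D cubic NLS
        i u_t = -u_xx + V(x) u + |u|^2 u    (periodic in x).
    Generic reference scheme, not a low-regularity discretization.
    """
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # Fourier wavenumbers
    # Half step of the potential + nonlinear flow (exact pointwise
    # phase rotation, since |u| is conserved by this subflow).
    u = u * np.exp(-0.5j * dt * (V + np.abs(u) ** 2))
    # Full step of the free Schroedinger flow in Fourier space.
    u = np.fft.ifft(np.exp(-1j * dt * k ** 2) * np.fft.fft(u))
    # Second half step of the potential + nonlinearity.
    u = u * np.exp(-0.5j * dt * (V + np.abs(u) ** 2))
    return u

# Example grid and a smooth potential; a rough V can be plugged in.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.exp(1j * x) + 0.1 * np.exp(-((x - np.pi) ** 2))
for _ in range(100):
    u = nls_strang_step(u, np.cos(x), 1e-3, x[1] - x[0])
```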

Quantifying heterogeneity is an important issue in meta-analysis, and among the existing measures, the $I^2$ statistic is the most commonly used. In this paper, we first illustrate with a simple example that the $I^2$ statistic is heavily dependent on the study sample sizes, mainly because it quantifies the heterogeneity between the observed effect sizes. To reduce the influence of sample sizes, we introduce an alternative measure that aims to directly quantify the heterogeneity between the study populations involved in the meta-analysis. We further propose a new estimator, namely the $I_A^2$ statistic, to estimate the newly defined measure of heterogeneity. For practical implementation, exact formulas for the $I_A^2$ statistic are also derived under two common scenarios, with the effect size being the mean difference (MD) or the standardized mean difference (SMD). Simulations and real data analysis demonstrate that the $I_A^2$ statistic provides an asymptotically unbiased estimator of the absolute heterogeneity between the study populations, and that it is independent of the study sample sizes, as expected. To conclude, our newly defined $I_A^2$ statistic can be used as a supplemental measure of heterogeneity to monitor situations where the study effect sizes are indeed similar, with little biological difference. In such scenarios, the fixed-effect model can be appropriate; nevertheless, when the sample sizes are sufficiently large, the $I^2$ statistic may still increase to 1 and consequently suggest the random-effects model for the meta-analysis.
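For concreteness, the classical $I^2$ statistic can be computed from Cochran's $Q$ as in this short sketch; the $I_A^2$ formulas are derived in the paper and not reproduced here, and the example numbers are arbitrary.

```python
import numpy as np

def i_squared(y, v):
    """Higgins-Thompson I^2 from study effect estimates y and their
    within-study variances v."""
    w = 1.0 / np.asarray(v, dtype=float)
    y = np.asarray(y, dtype=float)
    ybar = np.sum(w * y) / np.sum(w)      # fixed-effect pooled estimate
    Q = np.sum(w * (y - ybar) ** 2)       # Cochran's Q statistic
    if Q == 0.0:
        return 0.0
    k = y.size
    return max(0.0, (Q - (k - 1)) / Q)

# Example with mean differences and known within-study variances:
y = [0.30, 0.10, 0.25, 0.05, 0.18]
v = [0.02, 0.03, 0.015, 0.04, 0.025]
print(i_squared(y, v))
```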

Quantifying heterogeneity is an important issue in meta-analysis, and among the existing measures, the $I^2$ statistic is the most commonly used in the literature. In this paper, we show that the $I^2$ statistic was, in fact, problematically defined, or even plainly wrong, from the very beginning. To support this claim, we first present a motivating example showing that the $I^2$ statistic is heavily dependent on the study sample sizes and may consequently yield contradictory results for the amount of heterogeneity. Moreover, by drawing a connection between ANOVA and meta-analysis, the $I^2$ statistic is shown to mistakenly apply the sampling errors of the estimators rather than the variances of the study populations. Inspired by this, we introduce an Intrinsic measure for Quantifying the heterogeneity in meta-analysis, and we study its statistical properties to clarify why it is superior to the existing measures. We further propose an optimal estimator, referred to as the IQ statistic, for the new measure of heterogeneity, which can be readily applied in meta-analysis. Simulations and real data analysis demonstrate that the IQ statistic provides a nearly unbiased estimate of the true heterogeneity and is also independent of the study sample sizes.
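The sample-size dependence criticized above is easy to reproduce in a toy simulation: hold the between-population heterogeneity fixed, let the studies grow, and $I^2$ drifts toward 1. The IQ statistic itself is defined in the paper and is not reproduced here; all values below are illustrative.

```python
import numpy as np

def i_squared(y, v):
    w = 1.0 / np.asarray(v)
    y = np.asarray(y)
    Q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
    return max(0.0, (Q - (len(y) - 1)) / Q)

rng = np.random.default_rng(1)
tau = 0.2                                  # fixed population heterogeneity
theta = rng.normal(0.0, tau, size=15)      # true study-level effects
for n in (10, 100, 1000, 10000):
    v = np.full(15, 1.0 / n)               # sampling variance shrinks with n
    y = theta + rng.normal(0.0, np.sqrt(v))
    print(f"n={n:>5}  I^2={i_squared(y, v):.3f}")   # creeps toward 1
```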

We consider a class of symmetry hypothesis testing problems, including testing isotropy on $\mathbb{R}^d$ and testing rotational symmetry on the hypersphere $\mathcal{S}^{d-1}$. For this class, we study the null and non-null behaviors of Sobolev tests, with emphasis on their consistency rates. Our main results show that: (i) Sobolev tests exhibit a detection threshold (see Bhattacharya, 2019, 2020) that depends on more than the coefficients defining these tests; and (ii) tests with non-zero coefficients at odd (respectively, even) ranks only are blind to alternatives with angular functions whose $k$th-order derivatives at zero vanish for every odd (respectively, even) $k$. Our non-standard asymptotic results are illustrated with Monte Carlo exercises. A case study in astronomy applies the testing toolbox to evaluate the symmetry of the orbits of long- and short-period comets.
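As a minimal sketch of the setup, the Rayleigh test of circular uniformity on $\mathcal{S}^1$ is the simplest member of the Sobolev family (only the rank-one coefficient is non-zero, so by point (ii) it is blind to certain alternatives); the implementation below uses its standard asymptotic $\chi^2_2$ calibration.

```python
import numpy as np
from scipy import stats

def rayleigh_test(theta):
    """Rayleigh test of circular uniformity: statistic 2n(C^2 + S^2)
    with C, S the mean cosine and sine, asymptotically chi-square(2)
    under the null of uniformity on the circle.
    """
    theta = np.asarray(theta, dtype=float)
    n = theta.size
    cbar, sbar = np.mean(np.cos(theta)), np.mean(np.sin(theta))
    T = 2.0 * n * (cbar ** 2 + sbar ** 2)
    return T, stats.chi2.sf(T, df=2)

# Uniform sample: large p-value expected; von Mises sample: small.
rng = np.random.default_rng(0)
print(rayleigh_test(rng.uniform(0, 2 * np.pi, 500)))
print(rayleigh_test(rng.vonmises(0.0, 1.0, 500)))
```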

Given a hypergraph $\mathcal{H}$, the dual hypergraph of $\mathcal{H}$ is the hypergraph of all minimal transversals of $\mathcal{H}$. The dual hypergraph is always Sperner, that is, no hyperedge contains another. A special case of Sperner hypergraphs are the conformal Sperner hypergraphs, which correspond to the families of maximal cliques of graphs. All these notions play an important role in many fields of mathematics and computer science, including combinatorics, algebra, and database theory. In this paper, we study conformality of dual hypergraphs and prove several results related to the problem of recognizing this property. In particular, we show that the problem is in co-NP and can be solved in polynomial time for hypergraphs of bounded dimension. In the special case of dimension $3$, we reduce the problem to $2$-Satisfiability. Our approach has an implication in algorithmic graph theory: we obtain a polynomial-time algorithm for recognizing graphs in which all minimal transversals of maximal cliques have size at most $k$, for any fixed $k$.
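For small instances, the dual hypergraph can be computed by brute force, which makes the definitions above concrete; the sketch below is exponential-time and purely illustrative, unrelated to the paper's polynomial-time recognition results.

```python
from itertools import combinations

def minimal_transversals(vertices, hyperedges):
    """All minimal transversals (hitting sets) of a hypergraph, i.e.
    its dual hypergraph. Brute force: illustration only."""
    def hits(s):
        return all(s & e for e in hyperedges)
    found = []
    # Enumerating by increasing size guarantees minimality: a hitting
    # set is kept only if no smaller transversal is contained in it.
    for k in range(len(vertices) + 1):
        for cand in combinations(vertices, k):
            s = set(cand)
            if hits(s) and not any(t < s for t in found):
                found.append(s)
    return found

# Triangle, viewed as the hypergraph of its edges: the dual consists
# of the minimal vertex covers {1,2}, {1,3}, {2,3} -- a Sperner family.
edges = [{1, 2}, {2, 3}, {1, 3}]
print(minimal_transversals([1, 2, 3], edges))
```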

Entropy conditions play a crucial role in the extraction of a physically relevant solution of a system of conservation laws, thus motivating the construction of entropy stable schemes that satisfy a discrete analogue of such conditions. TeCNO schemes (Fjordholm et al., 2012) form a class of arbitrarily high-order entropy stable finite difference solvers, which require specialized reconstruction algorithms satisfying the sign property at each cell interface. Recently, third-order WENO schemes called SP-WENO (Fjordholm and Ray, 2016) and SP-WENOc (Ray, 2018) have been designed to satisfy the sign property. However, these WENO algorithms can perform poorly near shocks, with the numerical solutions exhibiting large spurious oscillations. In the present work, we propose a variant of SP-WENO, termed Deep Sign-Preserving WENO (DSP-WENO), in which a neural network learns the WENO weighting strategy. The sign property and third-order accuracy are strongly imposed in the algorithm, which constrains the WENO weight selection region to a convex polygon. The network is then trained to select the WENO weights from this convex region with the goal of improving the shock-capturing capabilities without sacrificing the rate of convergence in smooth regions. The proposed synergistic approach retains the mathematical framework of the TeCNO scheme while integrating deep learning to remedy the computational issues of the WENO-based reconstruction. We present several numerical experiments to demonstrate the significant improvement of DSP-WENO over the existing variants of WENO satisfying the sign property.
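For reference, the sign property that the reconstruction must satisfy at each cell interface (Fjordholm and Ray, 2016) is the following condition:

```latex
% The sign property for an interface reconstruction: the jump of the
% reconstructed traces at each cell interface must share the sign of
% the jump in the underlying cell values,
\[
  \operatorname{sign}\!\left(v_{i+1/2}^{+} - v_{i+1/2}^{-}\right)
  = \operatorname{sign}\!\left(v_{i+1} - v_{i}\right)
  \quad \text{for all interfaces } i + \tfrac{1}{2},
\]
% which DSP-WENO's constrained weight-selection region enforces by
% construction, alongside third-order accuracy in smooth regions.
```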
