
We investigate time complexities of finite difference methods for solving the high-dimensional linear heat equation, the high-dimensional linear hyperbolic equation, and the multiscale hyperbolic heat system with quantum algorithms (henceforth referred to as the "quantum difference methods"). For the heat and linear hyperbolic equations we study the impact of explicit and implicit time discretizations on quantum advantages over the classical difference method. For the multiscale problem, we find that the time complexity of both the classical and quantum treatments of the explicit scheme scales as $\mathcal{O}(1/\varepsilon)$, where $\varepsilon$ is the scaling parameter, while the scaling for the multiscale Asymptotic-Preserving (AP) schemes does not depend on $\varepsilon$. This indicates that it is still of great importance to develop AP schemes for multiscale problems in quantum computing.
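As a point of reference for the explicit-versus-implicit comparison, the following minimal sketch (a toy 1-D heat equation, not the paper's high-dimensional or multiscale setting; all names are illustrative) shows how the CFL-type stability constraint of an explicit difference scheme forces the number of time steps, and hence the classical time complexity, to grow as the mesh is refined:

```python
import numpy as np

# Toy 1-D heat equation u_t = u_xx on [0,1] with homogeneous Dirichlet BCs,
# solved by the explicit FTCS difference scheme.  The stability condition
# dt <= dx^2 / 2 forces the step count to grow like 1/dx^2; this is the kind
# of step-count scaling that the complexity analysis compares against
# unconditionally stable implicit schemes.
def explicit_heat(nx=64, t_final=0.1):
    dx = 1.0 / nx
    dt = 0.5 * dx**2                      # largest stable explicit step
    steps = int(np.ceil(t_final / dt))
    x = np.linspace(0.0, 1.0, nx + 1)
    u = np.sin(np.pi * x)                 # smooth initial condition
    r = dt / dx**2
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u, steps

u, steps = explicit_heat()
print(f"explicit scheme needed {steps} steps")   # grows ~ 1/dx^2
```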

Related content

We present an extension of the linear sampling method for solving the sound-soft inverse acoustic scattering problem with randomly distributed point sources. The theoretical justification of our sampling method is based on the Helmholtz--Kirchhoff identity, the cross-correlation between measurements, and the volume and imaginary near-field operators, which we introduce and analyze. Implementations in MATLAB using boundary elements, the SVD, Tikhonov regularization, and Morozov's discrepancy principle are also discussed. We demonstrate the robustness and accuracy of our algorithms with several numerical experiments in two dimensions.
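The regularization machinery named in the abstract (the SVD, Tikhonov regularization, and Morozov's discrepancy principle) can be illustrated on a generic ill-posed linear system. The sketch below uses a synthetic test matrix as a stand-in for the discretized near-field operator, whose actual entries would come from a boundary-element discretization; the problem setup is illustrative:

```python
import numpy as np

# Tikhonov regularization via the SVD, with Morozov's discrepancy principle
# selecting alpha so that the residual ||F g - b|| matches the noise level.
def tikhonov_morozov(F, b, delta):
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    beta = U.T @ b

    def solve(alpha):
        g = Vt.T @ (s / (s**2 + alpha) * beta)   # Tikhonov filter factors
        return g, np.linalg.norm(F @ g - b)

    lo, hi = 1e-14, 1e6                   # bracket; residual increases in alpha
    for _ in range(60):                   # bisection in log(alpha)
        alpha = np.sqrt(lo * hi)
        _, r = solve(alpha)
        lo, hi = (alpha, hi) if r < delta else (lo, alpha)
    return solve(np.sqrt(lo * hi))

# synthetic ill-posed test problem with small additive noise
rng = np.random.default_rng(0)
n = 50
F = 1.0 / (1.0 + np.abs(np.subtract.outer(range(n), range(n))))**2
g_true = np.sin(np.linspace(0.0, np.pi, n))
noise = 1e-3 * rng.standard_normal(n)
b = F @ g_true + noise
g_rec, res = tikhonov_morozov(F, b, np.linalg.norm(noise))
print(res, np.linalg.norm(g_rec - g_true))
```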

Heterogeneity is a dominant factor in the behaviour of many biological processes. Despite this, it is common for mathematical and statistical analyses to ignore biological heterogeneity as a source of variability in experimental data. Therefore, methods for exploring the identifiability of models that explicitly incorporate heterogeneity through variability in model parameters are relatively underdeveloped. We develop a new likelihood-based framework, based on moment matching, for inference and identifiability analysis of differential equation models that capture biological heterogeneity through parameters that vary according to probability distributions. As our novel method is based on an approximate likelihood function, it is highly flexible; we demonstrate identifiability analysis using both a frequentist approach based on profile likelihood, and a Bayesian approach based on Markov chain Monte Carlo. Through three case studies, we demonstrate our method by providing a didactic guide to inference and identifiability analysis of hyperparameters that relate to the statistical moments of model parameters, using independent observed data. Our approach has a computational cost comparable to analysis of models that neglect heterogeneity, a significant improvement over many existing alternatives. We demonstrate how analysis of random parameter models can aid better understanding of the sources of heterogeneity from biological data.
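A minimal sketch of the moment-matching idea, under illustrative assumptions (a toy decay model $y(t)=e^{-kt}$ with $k \sim N(\mu,\sigma^2)$, one independent individual per observation time; not the authors' implementation): the output distribution at each time is summarized by its first two moments, and the approximate likelihood is a Gaussian with those moments, which can then be profiled over a hyperparameter:

```python
import numpy as np

# Moment-matching approximate likelihood for a model with a heterogeneous
# (random) parameter.  Moments are estimated by Monte Carlo with common
# random numbers so the profile over mu is smooth.
t = np.linspace(0.1, 5.0, 40)

def approx_loglik(mu, sigma, data, n_mc=4000):
    k = np.random.default_rng(42).normal(mu, sigma, size=n_mc)
    sims = np.exp(-np.outer(k, t))                   # simulated trajectories
    m, v = sims.mean(axis=0), sims.var(axis=0) + 1e-9
    return -0.5 * np.sum((data - m)**2 / v + np.log(2.0 * np.pi * v))

# synthetic data: one independent individual observed per time point,
# true hyperparameters (mu, sigma) = (1.0, 0.2)
rng = np.random.default_rng(1)
data = np.exp(-rng.normal(1.0, 0.2, size=t.size) * t)

# crude profile likelihood over mu, with sigma held at its true value
grid = np.linspace(0.5, 1.5, 41)
prof = [approx_loglik(mu, 0.2, data) for mu in grid]
print("profile argmax: mu ~", grid[int(np.argmax(prof))])
```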

Despite the numerous ways now available to quantify which parts or subsystems of a network are most important, there remains a lack of centrality measures that are related to the complexity of information flows and are derived directly from entropy measures. Here, we introduce a ranking of edges based on how each edge's removal would change a system's von Neumann entropy (VNE), which is a spectral-entropy measure that has been adapted from quantum information theory to quantify the complexity of information dynamics over networks. We show that a direct calculation of such rankings is computationally inefficient (or infeasible) for large networks: e.g.\ the scaling is $\mathcal{O}(N^3)$ per edge for networks with $N$ nodes. To overcome this limitation, we employ spectral perturbation theory to estimate VNE perturbations and derive an approximate edge-ranking algorithm that is accurate and fast to compute, scaling as $\mathcal{O}(N)$ per edge. Focusing on a form of VNE that is associated with a transport operator $e^{-\beta L}$, where $L$ is a graph Laplacian matrix and $\beta>0$ is a diffusion timescale parameter, we apply this approach to diverse applications including a network encoding polarized voting patterns of the 117th U.S. Senate, a multimodal transportation system including roads and metro lines in London, and a multiplex brain network encoding correlated human brain activity. Our experiments highlight situations where the edges that are considered to be most important for information diffusion complexity can dramatically change as one considers short, intermediate and long timescales $\beta$ for diffusion.
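The direct $\mathcal{O}(N^3)$-per-edge baseline that the paper's perturbative algorithm accelerates can be sketched in a few lines: compute the VNE of $\rho = e^{-\beta L}/\operatorname{Tr}(e^{-\beta L})$ via an eigendecomposition of $L$, then rank edges by the entropy change their removal causes. The small example graph is illustrative:

```python
import numpy as np

# Brute-force edge ranking by von Neumann entropy change.  Each edge removal
# triggers a fresh eigendecomposition, i.e. O(N^3) work per edge -- the
# direct calculation the abstract describes as infeasible at scale.
def vne(L, beta):
    """Spectral VNE of rho = exp(-beta L) / Tr(exp(-beta L))."""
    w = np.linalg.eigvalsh(L)
    p = np.exp(-beta * w)
    p /= p.sum()
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

def laplacian(n, edges):
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def rank_edges(n, edges, beta=1.0):
    base = vne(laplacian(n, edges), beta)
    deltas = {e: vne(laplacian(n, [f for f in edges if f != e]), beta) - base
              for e in edges}
    return sorted(deltas, key=lambda e: abs(deltas[e]), reverse=True)

# small example: a 5-node path plus one shortcut edge
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
print(rank_edges(5, edges, beta=2.0))
```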

We present a highly scalable strategy for developing mesh-free neuro-symbolic partial differential equation solvers from existing numerical discretizations found in scientific computing. This strategy is unique in that it can be used to efficiently train neural network surrogate models for the solution functions and the differential operators, while retaining the accuracy and convergence properties of state-of-the-art numerical solvers. This neural bootstrapping method is based on minimizing residuals of discretized differential systems on a set of random collocation points with respect to the trainable parameters of the neural network, achieving unprecedented resolution and optimal scaling for solving physical and biological systems.
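The core training principle (minimizing residuals of a numerical discretization at random collocation points with respect to the surrogate's trainable parameters) can be shown in a dependency-free sketch. The paper trains neural networks; here a linear-in-parameters sine basis is used instead so the fit reduces to least squares, which is an illustration of the residual-minimization idea, not the authors' architecture:

```python
import numpy as np

# Fit a surrogate for -u'' = pi^2 sin(pi x) style problems by minimizing the
# residual of a standard central-difference discretization at random
# collocation points.  Basis sin(m pi x) satisfies u(0) = u(1) = 0 exactly.
rng = np.random.default_rng(0)
M, n_pts, h = 10, 200, 1e-3               # basis size, collocation pts, FD step
x = rng.uniform(0.05, 0.95, n_pts)        # random collocation points

def features(x):
    m = np.arange(1, M + 1)
    return np.sin(np.pi * np.outer(x, m))

# central-difference approximation of u'' applied to each basis function
D2 = (features(x + h) - 2.0 * features(x) + features(x - h)) / h**2
f = -np.pi**2 * np.sin(np.pi * x)         # target: u'' = f, so u = sin(pi x)
theta, *_ = np.linalg.lstsq(D2, f, rcond=None)

xt = np.linspace(0.0, 1.0, 5)
print(features(xt) @ theta)               # ~ sin(pi * xt)
```

With a neural network in place of the linear basis, the same residual loss is minimized by gradient descent over the network weights, which is the bootstrapping step the abstract describes.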

This paper is concerned with developing an efficient numerical algorithm for fast implementation of the sparse grid method for computing the $d$-dimensional integral of a given function. The new algorithm, called the MDI-SG ({\em multilevel dimension iteration sparse grid}) method, implements the sparse grid method based on a dimension iteration/reduction procedure. It does not need to store the integration points, nor does it compute the function values independently at each integration point; instead, it reuses the computation for function evaluations as much as possible by performing the function evaluations at all integration points in clusters, iterating along coordinate directions. It is shown numerically that the computational complexity (in terms of CPU time) of the proposed MDI-SG method is of polynomial order $O(Nd^3)$ or better, compared to the exponential order $O(N(\log N)^{d-1})$ for the standard sparse grid method, where $N$ denotes the maximum number of integration points in each coordinate direction. As a result, the proposed MDI-SG method effectively circumvents the curse of dimensionality suffered by the standard sparse grid method for high-dimensional numerical integration.
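To make the baseline concrete, the sketch below counts the points of a standard sparse grid built from nested 1-D levels of size $2^l+1$ (one common construction, assumed here for illustration) and compares it with the $N^d$ points of a full tensor grid; the sparse count grows like the $O(N(\log N)^{d-1})$ quoted above:

```python
import numpy as np
from itertools import product

# Point-count illustration for the standard sparse grid that MDI-SG
# accelerates: union of tensor grids over level multi-indices with |l| <= q.
def sparse_grid_size(d, q):
    """Count points of the level-q sparse grid in d dimensions."""
    pts = set()
    for levels in product(range(q + 1), repeat=d):
        if sum(levels) > q:
            continue
        axes = [np.linspace(0.0, 1.0, 2**l + 1) for l in levels]
        pts.update(product(*(map(float, ax) for ax in axes)))
    return len(pts)

for d in (2, 3, 4):
    print(d, sparse_grid_size(d, 5), (2**5 + 1)**d)   # sparse vs full tensor
```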

Spatially inhomogeneous functions, which may be smooth in some regions and rough in other regions, are modelled naturally in a Bayesian manner using so-called Besov priors which are given by random wavelet expansions with Laplace-distributed coefficients. This paper studies theoretical guarantees for such prior measures - specifically, we examine their frequentist posterior contraction rates in the setting of non-linear inverse problems with Gaussian white noise. Our results are first derived under a general local Lipschitz assumption on the forward map. We then verify the assumption for two non-linear inverse problems arising from elliptic partial differential equations, the Darcy flow model from geophysics as well as a model for the Schr\"odinger equation appearing in tomography. In the course of the proofs, we also obtain novel concentration inequalities for penalized least squares estimators with $\ell^1$ wavelet penalty, which have a natural interpretation as maximum a posteriori (MAP) estimators. The true parameter is assumed to belong to some spatially inhomogeneous Besov class $B^{\alpha}_{11}$, $\alpha>0$. In a setting with direct observations, we complement these upper bounds with a lower bound on the rate of contraction for arbitrary Gaussian priors. An immediate consequence of our results is that while Laplace priors can achieve minimax-optimal rates over $B^{\alpha}_{11}$-classes, Gaussian priors are limited to a contraction rate that is slower by a polynomial factor. This gives information-theoretical justification for the intuition that Laplace priors are more compatible with $\ell^1$ regularity structure in the underlying parameter.
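A draw from such a prior is easy to simulate. The sketch below samples a 1-D Besov $B^{\alpha}_{11}$ draw as a Haar wavelet expansion with i.i.d. Laplace coefficients, using the level scaling $2^{-j(\alpha+1/2-1)}$ for $L^2$-normalized wavelets (one common convention for $d=1$, $p=1$; the paper's analysis covers general wavelet bases):

```python
import numpy as np

# Sample from a 1-D Besov B^alpha_{11} prior: random Haar wavelet expansion
# with Laplace coefficients whose scale decays geometrically in the level j.
rng = np.random.default_rng(0)

def haar(x):                      # mother wavelet supported on [0, 1)
    return np.where((0.0 <= x) & (x < 0.5), 1.0,
                    np.where((0.5 <= x) & (x < 1.0), -1.0, 0.0))

def besov_draw(alpha, J=10, n=2048):
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = rng.laplace() * np.ones(n)            # coarse-scale (father) term
    for j in range(J):
        xi = rng.laplace(size=2**j)           # i.i.d. Laplace coefficients
        for k in range(2**j):
            u += 2.0**(-j * (alpha - 0.5)) * xi[k] \
                 * 2.0**(j / 2) * haar(2.0**j * x - k)
    return x, u

x, u = besov_draw(alpha=1.0)      # larger alpha gives smoother draws
```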

In this study we consider unconditionally non-oscillatory, high order implicit time marching based on time-limiters. The first aspect of our work is to propose the high resolution Limited-DIRK3 (L-DIRK3) scheme for conservation laws and convection-diffusion equations in the method-of-lines framework. The scheme can be used in conjunction with an arbitrary high order spatial discretization scheme such as the 5th-order WENO scheme. It can be shown that the strongly S-stable DIRK3 scheme is not SSP and may introduce strong oscillations at large time steps. To overcome the oscillatory nature of DIRK3, the key idea of the L-DIRK3 scheme is to apply local time-limiters (K. Duraisamy, J. D. Baeder, J.-G. Liu), with which the order of accuracy in time is locally dropped to first order in the regions where the evolution of the solution is not smooth. In this way, the monotonicity condition is locally satisfied, while a high order of accuracy is still maintained in most of the solution domain. For convenience of applications to systems of equations, we propose a new and simple construction of time-limiters which allows flexible choice of reference quantity with minimal computation cost. Another key aspect of our work is to extend the application of time-limiter schemes to multidimensional problems and convection-diffusion equations. Numerical experiments for scalar/systems of equations in one and two dimensions confirm the high resolution and the improved stability of L-DIRK3 under large time steps. Moreover, the results indicate the potential of time-limiter schemes to serve as a generic and convenient methodology to improve the stability of arbitrary DIRK methods.
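A schematic of the time-limiting mechanism, under stated stand-ins (Crank-Nicolson in place of DIRK3, first-order upwind in space, a simple curvature-ratio sensor; a sketch of the idea, not the paper's L-DIRK3): a non-oscillatory first-order implicit Euler step is blended with an oscillation-prone second-order step, with the blend weight pushed toward first order where the solution is not smooth:

```python
import numpy as np

# 1-D linear advection u_t + u_x = 0, periodic, first-order upwind in space.
# Per step: solve implicit Euler (1st order, monotone) and Crank-Nicolson
# (2nd order, oscillatory at large CFL), then blend them with a local
# smoothness sensor so first order is used only near the discontinuity.
n, cfl, steps = 200, 4.0, 40
dx = 1.0 / n
dt = cfl * dx
x = np.arange(n) * dx
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)        # discontinuous data

I = np.eye(n)
A = (I - np.roll(I, 1, axis=0)) / dx                 # periodic upwind d/dx

for _ in range(steps):
    u_be = np.linalg.solve(I + dt * A, u)                        # 1st order
    u_cn = np.linalg.solve(I + 0.5 * dt * A, (I - 0.5 * dt * A) @ u)  # 2nd
    curv = np.abs(np.roll(u, -1) - 2.0 * u + np.roll(u, 1))
    slope = np.abs(np.roll(u, -1) - u) + np.abs(u - np.roll(u, 1)) + 1e-12
    theta = np.clip(2.0 * curv / slope, 0.0, 1.0)    # ~1 near discontinuities
    u = theta * u_be + (1.0 - theta) * u_cn
```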

For the well-known Survivable Network Design Problem (SNDP) we are given an undirected graph $G$ with edge costs, a set $R$ of terminal vertices, and an integer demand $d_{s,t}$ for every terminal pair $s,t\in R$. The task is to compute a subgraph $H$ of $G$ of minimum cost, such that there are at least $d_{s,t}$ disjoint paths between $s$ and $t$ in $H$. If the paths are required to be edge-disjoint we obtain the edge-connectivity variant (EC-SNDP), while internally vertex-disjoint paths result in the vertex-connectivity variant (VC-SNDP). Another important case is the element-connectivity variant (LC-SNDP), where the paths are disjoint on edges and non-terminals. In this work we shed light on the parameterized complexity of the above problems. We consider several natural parameters, which include the solution size $\ell$, the sum of demands $D$, the number of terminals $k$, and the maximum demand $d_\max$. Using simple, elegant arguments, we prove the following results.
- We give a complete picture of the parameterized tractability of the three variants w.r.t. parameter $\ell$: both EC-SNDP and LC-SNDP are FPT, while VC-SNDP is W[1]-hard.
- We identify some special cases of VC-SNDP that are FPT:
  * when $d_\max\leq 3$ for parameter $\ell$,
  * on locally bounded treewidth graphs (e.g., planar graphs) for parameter $\ell$, and
  * on graphs of treewidth $tw$ for parameter $tw+D$.
- The well-known Directed Steiner Tree (DST) problem can be seen as single-source EC-SNDP with $d_\max=1$ on directed graphs, and is FPT parameterized by $k$ [Dreyfus & Wagner 1971]. We show that, in contrast, the 2-DST problem, where $d_\max=2$, is W[1]-hard, even when parameterized by $\ell$.
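For readers less familiar with the problem, checking whether a candidate subgraph $H$ satisfies the EC-SNDP demands reduces, by Menger's theorem, to local edge-connectivity (max-flow) computations. A small sketch on a hypothetical instance, using networkx for the flow computation:

```python
import networkx as nx

# EC-SNDP feasibility check: by Menger's theorem, H contains d_{s,t}
# edge-disjoint s-t paths iff the local edge connectivity of s and t in H
# is at least d_{s,t}.
def is_feasible(H, demands):
    return all(nx.edge_connectivity(H, s, t) >= d
               for (s, t), d in demands.items())

H = nx.cycle_graph(6)                       # a 2-edge-connected subgraph
demands = {(0, 3): 2, (1, 4): 2}
print(is_feasible(H, demands))              # True: the cycle gives 2 paths
demands[(0, 3)] = 3
print(is_feasible(H, demands))              # False
```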

In this paper, we propose and study a fast multilevel dimension iteration (MDI) algorithm for computing arbitrary $d$-dimensional integrals based on tensor product approximations. It reduces the computational complexity (in terms of the CPU time) of a tensor product method from the exponential order $O(N^d)$ to the polynomial order $O(d^3N^2)$ or better, where $N$ stands for the number of quadrature points in each coordinate direction. As a result, the proposed MDI algorithm effectively circumvents the curse of dimensionality of tensor product methods for high-dimensional numerical integration. The main idea of the proposed MDI algorithm is to compute the function evaluations at all integration points in clusters, iterating along each coordinate direction, so that much of the computation for function evaluations can be reused in each iteration. This idea is also applicable to any quadrature rule whose integration points have a lattice-like structure.
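One concrete instance of the dimension-iteration reuse idea, under a strong structural assumption for illustration (an integrand of the form $f(x)=g(x_1+\cdots+x_d)$ on a uniform, lattice-like grid; a toy case, not the full MDI algorithm for general integrands): the $d$-dimensional tensor-product sum can be built one coordinate at a time by convolving weight vectors, so each partial sum is computed once and reused, and $g$ is evaluated at only $O(dN)$ points instead of $N^d$:

```python
import numpy as np

# Dimension-iteration toy: tensor-product trapezoid rule for the integral of
# g(x_1 + ... + x_d) over [0,1]^d, assembled by convolving 1-D weight
# vectors along one coordinate direction at a time.
def mdi_sum_rule(g, d, N):
    """Trapezoid rule for the integral of g(x_1+...+x_d) over [0,1]^d."""
    h = 1.0 / (N - 1)
    w1 = np.full(N, h)
    w1[0] = w1[-1] = h / 2.0                        # 1-D trapezoid weights
    w = w1.copy()
    for _ in range(d - 1):                          # iterate over dimensions
        w = np.convolve(w, w1)                      # reuse partial weights
    s = np.arange(w.size) * h                       # attainable coord sums
    return np.dot(w, g(s))                          # g evaluated O(dN) times

print(mdi_sum_rule(np.cos, d=8, N=65))   # integral of cos(sum x_i), d = 8
```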

This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook the confounding effects, and hence their estimation error can be significant. As such, we propose another approach to construct the estimators such that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical estimators and the proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction of estimation error is strikingly substantial if the causal effects are accounted for correctly.
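The abstract does not spell out the proposed estimators, but the confounding problem it targets is easy to reproduce. The sketch below simulates a lending scenario where latent creditworthiness drives both the credit decision and repayment, so the naive difference-in-means estimator is biased, while an inverse-propensity-weighted estimator (a standard correction, used here purely for illustration, not the paper's construction) recovers the true effect:

```python
import numpy as np

# Simulated confounded lending data: z confounds both the credit decision a
# and the repayment outcome y, biasing the naive estimator upward.
rng = np.random.default_rng(0)
n = 200_000
z = rng.normal(size=n)                           # latent creditworthiness
p = 1.0 / (1.0 + np.exp(-2.0 * z))               # approval prob. depends on z
a = rng.random(n) < p                            # credit decision (treatment)
tau = 0.5                                        # true causal effect
y = tau * a + z + rng.normal(scale=0.5, size=n)  # repayment outcome

naive = y[a].mean() - y[~a].mean()               # confounded by z
w1, w0 = a / p, (~a) / (1.0 - p)                 # IPW with known propensities
ipw = np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)
print(f"naive: {naive:.3f}  IPW: {ipw:.3f}  true: {tau}")
```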
