
Randomized iterative algorithms for solving a factorized linear system, $\mathbf A\mathbf B\mathbf x=\mathbf b$ with $\mathbf A\in{\mathbb{R}}^{m\times \ell}$, $\mathbf B\in{\mathbb{R}}^{\ell\times n}$, and $\mathbf b\in{\mathbb{R}}^m$, have recently been proposed. They take advantage of the factorized form and avoid forming the matrix $\mathbf C=\mathbf A\mathbf B$ explicitly. However, they can only find the minimum norm (least squares) solution. In contrast, the regularized randomized Kaczmarz (RRK) algorithm can find solutions with certain structures from consistent linear systems. In this work, by combining the randomized Kaczmarz algorithm or the randomized Gauss--Seidel algorithm with the RRK algorithm, we propose two novel regularized randomized iterative algorithms that find (least squares) solutions with certain structures of $\mathbf A\mathbf B\mathbf x=\mathbf b$. We prove linear convergence of the new algorithms. Computed examples illustrate that the new algorithms can find sparse (least squares) solutions of $\mathbf A\mathbf B\mathbf x=\mathbf b$ and can outperform the existing randomized iterative algorithms applied to the corresponding full linear system $\mathbf C\mathbf x=\mathbf b$ with $\mathbf C=\mathbf A\mathbf B$.
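For context, here is a minimal numpy sketch of one interlaced randomized Kaczmarz scheme for $\mathbf A\mathbf B\mathbf x=\mathbf b$ that never forms $\mathbf C=\mathbf A\mathbf B$. The particular interlacing of the two RK updates and all parameter choices are illustrative assumptions, not the paper's regularized (RRK-based) algorithms:

```python
import numpy as np

def rk_rk(A, B, b, iters=10000, seed=None):
    """Interlaced randomized Kaczmarz sketch for A @ B @ x = b.

    Maintains u ~ solution of A u = b and x ~ solution of B x = u,
    never forming C = A @ B.  Rows are sampled with probability
    proportional to their squared norms, as in classical RK.
    """
    rng = np.random.default_rng(seed)
    m, ell = A.shape
    _, n = B.shape
    pA = np.sum(A**2, axis=1); pA /= pA.sum()
    pB = np.sum(B**2, axis=1); pB /= pB.sum()
    u, x = np.zeros(ell), np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=pA)                      # RK step on A u = b
        u += (b[i] - A[i] @ u) / (A[i] @ A[i]) * A[i]
        j = rng.choice(ell, p=pB)                    # RK step on B x = u
        x += (u[j] - B[j] @ x) / (B[j] @ B[j]) * B[j]
    return x
```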

Related Content

We prove a strong rate (resp. weak rate) ${\mathcal O}(\tau)$ for a structure-preserving temporal discretization (with $\tau$ the step size) of the stochastic Allen--Cahn equation with additive (resp. multiplicative) colored noise in $d=1,2,3$ dimensions. Direct variational arguments exploit the one-sided Lipschitz property of the cubic nonlinearity in the first setting to settle the first-order strong rate. The same property allows for uniform bounds on the derivatives of the solution of the related Kolmogorov equation, and then leads to the weak rate ${\mathcal O}(\tau)$ in the presence of multiplicative noise. Hence, we obtain twice the rate of convergence known for the strong error in the presence of multiplicative noise.
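As a point of reference (not the structure-preserving scheme analyzed above), a linear-implicit Euler discretization of the 1D stochastic Allen--Cahn equation might look as follows. The interface parameter `eps`, the white-in-space noise, and the Dirichlet boundary conditions are illustrative assumptions:

```python
import numpy as np

def allen_cahn_1d(N=128, T=1.0, tau=1e-3, eps=0.05, sigma=0.1, seed=0):
    """Linear-implicit Euler for du = (eps*u_xx + u - u**3) dt + sigma dW
    on (0, 1) with homogeneous Dirichlet boundary conditions.

    Illustrative only: the Laplacian is treated implicitly, the cubic
    nonlinearity explicitly, and the noise is white in space (the
    abstract treats colored noise and a structure-preserving scheme).
    """
    rng = np.random.default_rng(seed)
    h = 1.0 / (N + 1)
    x = np.linspace(h, 1.0 - h, N)
    L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / h**2
    M = np.eye(N) - tau * eps * L          # implicit part (I - tau*eps*L)
    u = np.sign(x - 0.5) * 0.5             # two-phase initial datum
    for _ in range(int(T / tau)):
        dW = np.sqrt(tau / h) * rng.standard_normal(N)
        u = np.linalg.solve(M, u + tau * (u - u**3) + sigma * dW)
    return x, u
```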

Given a sound first-order p-time theory $T$ capable of formalizing the syntax of first-order logic, we define a p-time function $g_T$ that stretches all inputs by one bit, and we use its properties to show that $T$ must be incomplete. We leave it as an open problem whether for some $T$ the range of $g_T$ intersects all infinite NP sets (i.e., whether it is a proof complexity generator hard for all proof systems). A propositional version of the construction shows that at least one of the following three statements is true:
- there is no p-optimal propositional proof system (this is equivalent to the non-existence of a time-optimal propositional proof search algorithm),
- $E \not\subseteq P/poly$,
- there exists a function $h$ that stretches all inputs by one bit, is computable in sub-exponential time, and whose range $Rng(h)$ intersects all infinite NP sets.

Solving multiphysics-based inverse problems for geological carbon storage monitoring can be challenging when multimodal time-lapse data are expensive to collect and costly to simulate numerically. We overcome these challenges by combining computationally cheap learned surrogates with learned constraints. Not only does this combination lead to vastly improved inversions for the important fluid-flow property, permeability, it also provides a natural platform for inverting multimodal data, including well measurements and active-source time-lapse seismic data. By adding a learned constraint, we arrive at a computationally feasible inversion approach that remains accurate. This is accomplished by including a trained deep neural network, known as a normalizing flow, which forces the model iterates to remain in-distribution, thereby safeguarding the accuracy of trained Fourier neural operators that act as surrogates for the computationally expensive multiphase flow simulations involving partial differential equation solves. By means of carefully selected experiments, centered around the problem of geological carbon storage, we demonstrate the efficacy of the proposed constrained optimization method on two different data modalities, namely time-lapse well data and time-lapse seismic data. While permeability inversions from these two modalities each have their pluses and minuses, their joint inversion benefits from both, yielding superior permeability inversions and CO2 plume predictions near, and far away from, the monitoring wells.
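A schematic of the constrained-optimization loop described above, with `surrogate` standing in for a trained Fourier neural operator and `flow` for a pretrained normalizing flow (both hypothetical callables; the finite-difference gradient is only to keep the sketch dependency-free):

```python
import numpy as np

def invert_latent(surrogate, flow, d_obs, z0, steps=100, lr=0.05, fd=1e-3):
    """Latent-space inversion sketch: minimize the surrogate's data
    misfit over z, with K = flow(z) kept in-distribution by the
    pretrained flow.  `surrogate(K) -> simulated data` and
    `flow(z) -> permeability K` are assumed, hypothetical callables.
    """
    z = np.asarray(z0, dtype=float).copy()

    def misfit(z):
        residual = surrogate(flow(z)) - d_obs
        return 0.5 * float(np.sum(residual**2))

    for _ in range(steps):
        f0 = misfit(z)
        g = np.zeros_like(z)
        for k in range(z.size):          # finite-difference gradient
            zp = z.copy()
            zp[k] += fd
            g[k] = (misfit(zp) - f0) / fd
        z -= lr * g                      # gradient step in latent space
    return flow(z), z
```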

Transition amplitudes and transition probabilities are relevant to many areas of physics simulation, including the calculation of response properties and correlation functions. These quantities can also be related to solving linear systems of equations. Here we present three related algorithms for calculating transition probabilities. First, we extend a previously published short-depth algorithm, allowing for the two input states to be non-orthogonal. Building on this first procedure, we then derive a higher-depth algorithm based on Trotterization and Richardson extrapolation that requires fewer circuit evaluations. Third, we introduce a tunable algorithm that allows for trading off circuit depth and measurement complexity, yielding an algorithm that can be tailored to specific hardware characteristics. Finally, we implement proof-of-principle numerics for models in physics and chemistry and for a subroutine in variational quantum linear solving (VQLS). The primary benefits of our approaches are that (a) arbitrary non-orthogonal states may now be used with small increases in quantum resources, (b) we (like another recently proposed method) entirely avoid subroutines such as the Hadamard test that may require three-qubit gates to be decomposed, and (c) in some cases fewer quantum circuit evaluations are required as compared to the previous state-of-the-art in NISQ algorithms for transition probabilities.
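The target quantity itself is easy to state classically. Below is a toy numpy check of the transition probability $|\langle\chi|U|\psi\rangle|^2$ for non-orthogonal random states; this is the quantity the quantum algorithms estimate, not the algorithms themselves:

```python
import numpy as np

# Classical toy computation of |<chi| U |psi>|^2 for a 2-qubit example.
rng = np.random.default_rng(0)

def random_state(dim, rng):
    v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def random_unitary(dim, rng):
    z = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))           # fix column phases -> Haar unitary

dim = 4                                  # 2 qubits
psi, chi = random_state(dim, rng), random_state(dim, rng)
U = random_unitary(dim, rng)
p = abs(np.vdot(chi, U @ psi)) ** 2      # np.vdot conjugates its first argument
print(f"|<chi|U|psi>|^2 = {p:.6f}")
```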

Standard multiparameter eigenvalue problems (MEPs) are systems of $k\ge 2$ linear $k$-parameter square matrix pencils. Recently, a new form of multiparameter eigenvalue problem has emerged: a rectangular MEP (RMEP) with only one multivariate rectangular matrix pencil, where we are looking for combinations of the parameters for which the rank of the pencil is not full. Applications include finding the optimal least squares autoregressive moving average (ARMA) model and the optimal least squares realization of an autonomous linear time-invariant (LTI) dynamical system. For linear and polynomial RMEPs, we give the number of solutions and show how these problems can be solved numerically by a transformation into a standard MEP. For the transformation we provide new linearizations for quadratic multivariate matrix polynomials with a specific structure of monomials and consider mixed systems of rectangular and square multivariate matrix polynomials. This numerical approach appears considerably more attractive computationally than the block Macaulay method, the only other currently available numerical method for polynomial RMEPs.
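To illustrate the rank-drop definition of an RMEP eigenvalue in the simplest one-parameter case, here is a naive singular-value scan (a brute-force demonstration, not the transformation-based method above):

```python
import numpy as np

# For a rectangular pencil P(lam) = A0 + lam*A1 of size (n+1) x n, the
# eigenvalues are the lam where P(lam) loses full column rank, i.e. where
# its smallest singular value hits zero.  We plant an eigenpair with
# lam* = 2 and recover it by scanning.
rng = np.random.default_rng(1)
n = 4
A1 = rng.standard_normal((n + 1, n))
x = rng.standard_normal(n)
R = rng.standard_normal((n + 1, n))
# Correct R so that (A0 + 2*A1) @ x = 0 holds exactly.
A0 = R - np.outer(R @ x + 2.0 * (A1 @ x), x) / (x @ x)

lams = np.linspace(0.0, 4.0, 2001)
smin = [np.linalg.svd(A0 + lam * A1, compute_uv=False)[-1] for lam in lams]
print("recovered eigenvalue ~", lams[int(np.argmin(smin))])   # ~2.0
```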

Iterative refinement (IR) is a popular scheme for solving a linear system of equations based on gradually improving the accuracy of an initial approximation. Originally developed to improve upon the accuracy of Gaussian elimination, interest in IR has been revived because of its suitability for execution on fast low-precision hardware such as analog devices and graphics processing units. IR generally converges when the error associated with the solution method is small, but is known to diverge when this error is large. We propose and analyze a novel enhancement to the IR algorithm by adding a line search optimization step that guarantees the algorithm will not diverge. Numerical experiments verify our theoretical results and illustrate the effectiveness of our proposed scheme.
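A minimal sketch of iterative refinement with a residual line search, under the assumption that the inner solver is a low-precision factorization (the paper's scheme and analysis may differ in detail). The step size $\alpha = \langle r, Ad\rangle / \langle Ad, Ad\rangle$ exactly minimizes $\|b - A(x + \alpha d)\|_2$, so the residual norm cannot increase even when the correction $d$ is very inaccurate:

```python
import numpy as np

def ir_line_search(A, b, inner_solve, iters=50, tol=1e-12):
    """Iterative refinement where each correction step is scaled by an
    exact line search on the residual.  inner_solve(r) returns an
    approximate solution of A d = r, e.g. a low-precision solve.
    """
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        d = inner_solve(r)
        Ad = A @ d
        denom = Ad @ Ad
        if denom == 0.0:                 # no progress possible along d
            break
        x = x + (r @ Ad) / denom * d     # optimal step, residual non-increasing
    return x

# Example: the "inner solver" is a single-precision dense solve.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)) + 100.0 * np.eye(100)
b = np.ones(100)
A32 = A.astype(np.float32)
inner = lambda r: np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
x = ir_line_search(A, b, inner)
print(np.linalg.norm(b - A @ x))         # small residual in double precision
```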

In this paper we introduce a multilevel Picard approximation algorithm for semilinear parabolic partial integro-differential equations (PIDEs). We prove that the numerical approximation scheme converges to the unique viscosity solution of the PIDE under consideration. To that end, we derive a Feynman-Kac representation for the unique viscosity solution of the semilinear PIDE, extending the classical Feynman-Kac representation for linear PIDEs. Furthermore, we show that the algorithm does not suffer from the curse of dimensionality, i.e., the computational complexity of the algorithm is bounded polynomially in the dimension $d$ and the reciprocal of the prescribed accuracy $\varepsilon$. We also provide a numerical example in up to 10,000 dimensions to demonstrate its applicability.
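As a sanity check on the representation underlying the method, the classical (linear) Feynman-Kac formula can be verified by plain Monte Carlo. The semilinear PIDE setting above generalizes this; the multilevel Picard recursion itself is not reproduced here:

```python
import numpy as np

# Linear Feynman-Kac check: for u_t + 0.5*u_xx = 0 with terminal datum
# u(T, x) = g(x), the solution is u(t, x) = E[g(x + W_{T-t})].
# For g(x) = x^2 the closed form is u(t, x) = x^2 + (T - t).
rng = np.random.default_rng(0)
T, t, x = 1.0, 0.25, 0.7
samples = x + np.sqrt(T - t) * rng.standard_normal(10**6)
print(np.mean(samples**2))   # Monte Carlo estimate
print(x**2 + (T - t))        # exact value 1.24
```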

In the Activation Edge-Multicover problem we are given a multigraph $G=(V,E)$ with activation costs $\{c_{e}^u,c_{e}^v\}$ for every edge $e=uv \in E$, and degree requirements $r=\{r_v:v \in V\}$. The goal is to find an edge subset $J \subseteq E$ of minimum activation cost $\sum_{v \in V}\max\{c_{uv}^v:uv \in J\}$, such that every $v \in V$ has at least $r_v$ neighbors in the graph $(V,J)$. Let $k= \max_{v \in V} r_v$ be the maximum requirement and let $\theta=\max_{e=uv \in E} \frac{\max\{c_e^u,c_e^v\}}{\min\{c_e^u,c_e^v\}}$ be the maximum quotient between the two costs of an edge. For $\theta=1$ the problem admits approximation ratio $O(\log k)$. For $k=1$ it generalizes the Set Cover problem (when $\theta=\infty$), and admits a tight approximation ratio $O(\log n)$. This implies approximation ratio $O(k \log n)$ for general $k$ and $\theta$, and no better approximation ratio was known. We obtain the first logarithmic approximation ratio $O(\log k +\log\min\{\theta,n\})$, which bridges the two known ratios -- $O(\log k)$ for $\theta=1$ and $O(\log n)$ for $k=1$. This implies approximation ratio $O\left(\log k +\log\min\{\theta,n\}\right) +\beta \cdot (\theta+1)$ for the Activation $k$-Connected Subgraph problem, where $\beta$ is the best known approximation ratio for the ordinary min-cost version of the problem.
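The objective is easy to pin down on a toy instance. A brute-force check with hypothetical costs and requirements (exponential time, only for illustration):

```python
from itertools import combinations

# Each edge e = uv carries activation costs (c_e^u, c_e^v); a node pays
# the max cost over its chosen incident edges, and every node v needs
# at least r_v chosen neighbors.
edges = [  # (u, v, c_e^u, c_e^v)
    (0, 1, 2.0, 1.0), (1, 2, 1.0, 3.0),
    (2, 3, 2.0, 2.0), (0, 3, 1.0, 4.0), (0, 2, 5.0, 1.0),
]
r = {0: 1, 1: 2, 2: 1, 3: 1}

def cost(J):
    total = 0.0
    for v in r:
        levels = [cu if v == u else cv
                  for (u, w, cu, cv) in J if v in (u, w)]
        total += max(levels, default=0.0)
    return total

def feasible(J):
    deg = {v: 0 for v in r}
    for (u, w, _, _) in J:
        deg[u] += 1
        deg[w] += 1
    return all(deg[v] >= r[v] for v in r)

best = min((cost(J), J) for k in range(len(edges) + 1)
           for J in combinations(edges, k) if feasible(J))
print(best[0], [e[:2] for e in best[1]])   # optimal cost and edge set
```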

We study the classical problem of approximating a non-decreasing function $f: \mathcal{X} \to \mathcal{Y}$ in $L^p(\mu)$ norm by sequentially querying its values, for known compact real intervals $\mathcal{X}$, $\mathcal{Y}$ and a known probability measure $\mu$ on $\mathcal{X}$. For any function $f$ we characterize the minimum number of evaluations of $f$ that algorithms need to guarantee an approximation $\hat{f}$ with an $L^p(\mu)$ error below $\epsilon$ after stopping. Unlike worst-case results that hold uniformly over all $f$, our complexity measure is dependent on each specific function $f$. To address this problem, we introduce GreedyBox, a generalization of an algorithm originally proposed by Novak (1992) for numerical integration. We prove that GreedyBox achieves an optimal sample complexity for any function $f$, up to logarithmic factors. Additionally, we uncover results regarding piecewise-smooth functions. Perhaps as expected, the $L^p(\mu)$ error of GreedyBox decreases much faster for piecewise-$C^2$ functions than predicted by the algorithm (without any knowledge of the smoothness of $f$). A simple modification even achieves optimal minimax approximation rates for such functions, which we compute explicitly. In particular, our findings highlight multiple performance gaps between adaptive and non-adaptive algorithms, smooth and piecewise-smooth functions, as well as monotone and non-monotone functions. Finally, we provide numerical experiments to support our theoretical results.
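A simplified GreedyBox-style strategy for uniform $\mu$: monotonicity traps $f$ inside the box $[x_i,x_{i+1}]\times[f(x_i),f(x_{i+1})]$ on each cell, and we always bisect the cell with the largest resulting error bound. The paper's exact criterion and stopping rule are simplified here:

```python
import numpy as np

def greedy_box(f, a, b, budget, p=1):
    """Adaptive queries of a non-decreasing f on [a, b].  The quantity
    (x_{i+1} - x_i) * (f(x_{i+1}) - f(x_i))**p bounds (up to constants)
    a cell's contribution to the L^p error, so we bisect the worst cell.
    """
    xs, ys = [a, b], [f(a), f(b)]
    for _ in range(budget - 2):
        scores = [(xs[i + 1] - xs[i]) * (ys[i + 1] - ys[i]) ** p
                  for i in range(len(xs) - 1)]
        i = int(np.argmax(scores))
        mid = 0.5 * (xs[i] + xs[i + 1])
        xs.insert(i + 1, mid)
        ys.insert(i + 1, f(mid))
    return np.array(xs), np.array(ys)

# The cube root has unbounded slope at 0; queries cluster there.
xs, ys = greedy_box(lambda u: u ** (1.0 / 3.0), 0.0, 1.0, budget=16)
print(np.round(xs, 4))
```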

We define a family of $C^1$ functions which we call "nowhere coexpanding functions" that is closed under composition and includes all $C^3$ functions with non-positive Schwarzian derivative. We establish results on the number and nature of the fixed points of these functions, including a generalisation of a classic result of Singer.
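The Schwarzian condition is easy to check symbolically. A quick sympy verification that Moebius maps (Schwarzian identically zero) and the logistic map (Schwarzian $\le 0$) fall in the class of $C^3$ functions covered above:

```python
import sympy as sp

x, r = sp.symbols('x r', real=True)

def schwarzian(f):
    """Schwarzian derivative S(f) = f'''/f' - (3/2)*(f''/f')**2."""
    f1, f2, f3 = (sp.diff(f, x, k) for k in (1, 2, 3))
    return sp.simplify(f3 / f1 - sp.Rational(3, 2) * (f2 / f1) ** 2)

print(schwarzian((2 * x + 1) / (x + 3)))   # 0: Moebius maps
print(schwarzian(r * x * (1 - x)))         # -6/(2*x - 1)**2 <= 0: logistic map
```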
