
We present a loosely coupled, non-iterative time-splitting scheme based on Robin-Robin coupling conditions, and develop a unified analysis of this scheme as applied to both a Parabolic/Parabolic coupled system and a Parabolic/Hyperbolic coupled system. For both systems we show that the scheme is stable and that the error converges as $\mathcal{O}\big(\Delta t \sqrt{T +\log{\frac{1}{\Delta t}}}\big)$, where $\Delta t$ is the time step.

Related content

Randomization has shown catalyzing effects in linear algebra, with promising perspectives for tackling computational challenges in large-scale problems. For solving a system of linear equations, we demonstrate the convergence of a broad class of algorithms that at each step pick a subset of $n$ equations at random and update the iterate with the orthogonal projection onto the subspace those equations represent. In this context, we identify a specific degree-$n$ polynomial that non-linearly transforms the singular values of the system towards equalization. This transformation of the singular values, and the corresponding condition number, then characterizes the expected convergence rate of the iterations. As a consequence, our results specify the convergence rate of the stochastic gradient descent algorithm, in terms of the mini-batch size $n$, when used for solving systems of linear equations.
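The iteration the abstract describes (pick $n$ equations at random, project the iterate orthogonally onto their solution set) can be sketched in a few lines; the function name and parameters below are illustrative, assuming a consistent system:

```python
import numpy as np

def randomized_block_projection(A, b, n, iters=2000, seed=0):
    """Solve A x = b (assumed consistent) by repeatedly picking n rows
    at random and orthogonally projecting the current iterate onto the
    affine subspace {x : A_S x = b_S} defined by those equations."""
    rng = np.random.default_rng(seed)
    m, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        S = rng.choice(m, size=n, replace=False)
        A_S, b_S = A[S], b[S]
        # Orthogonal projection: x <- x + A_S^+ (b_S - A_S x)
        x += np.linalg.pinv(A_S) @ (b_S - A_S @ x)
    return x
```

With $n = 1$ this is the classical randomized Kaczmarz method; larger $n$ corresponds to the mini-batch regime the abstract analyzes.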

This paper introduces novel splitting schemes of first and second order for the wave equation with kinetic and acoustic boundary conditions of semi-linear type. For kinetic boundary conditions, we propose a reinterpretation of the system equations as a coupled system: the bulk and surface dynamics are modeled separately and connected through a coupling constraint. This allows the implementation of splitting schemes, which show first-order convergence in numerical experiments. Acoustic boundary conditions, on the other hand, naturally separate bulk and surface dynamics; here, Lie and Strang splitting schemes attain first- and second-order convergence, respectively, as we demonstrate numerically.
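The order difference between Lie and Strang splitting can be observed on a toy linear system $u' = (A+B)u$ with non-commuting $A$ and $B$; this is a generic illustration, not the paper's wave-equation setting:

```python
import numpy as np
from scipy.linalg import expm

# Two non-commuting matrices standing in for the split vector fields.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[-0.5, 0.0], [1.0, -0.2]])

def split_step(u, tau, scheme):
    if scheme == "lie":     # first order: e^{tau B} e^{tau A}
        return expm(tau * B) @ (expm(tau * A) @ u)
    else:                   # Strang, second order: e^{tau A/2} e^{tau B} e^{tau A/2}
        return expm(0.5 * tau * A) @ (expm(tau * B) @ (expm(0.5 * tau * A) @ u))

def error(scheme, n_steps, T=1.0):
    """Error at time T against the exact flow e^{T(A+B)} u0."""
    u = np.array([1.0, 0.0])
    tau = T / n_steps
    for _ in range(n_steps):
        u = split_step(u, tau, scheme)
    return np.linalg.norm(u - expm(T * (A + B)) @ np.array([1.0, 0.0]))
```

Halving the step size halves the Lie error but quarters the Strang error, matching the first- and second-order convergence the abstract reports.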

Performing exact Bayesian inference for complex models is computationally intractable. Markov chain Monte Carlo (MCMC) algorithms can provide reliable approximations of the posterior distribution but are expensive for large datasets and high-dimensional models. A standard approach to mitigate this complexity consists in using subsampling techniques or distributing the data across a cluster. However, these approaches are typically unreliable in high-dimensional scenarios. We focus here on a recent alternative class of MCMC schemes exploiting a splitting strategy akin to the one used by the celebrated alternating direction method of multipliers (ADMM) optimization algorithm. These methods appear to provide empirically state-of-the-art performance but their theoretical behavior in high dimension is currently unknown. In this paper, we propose a detailed theoretical study of one of these algorithms known as the split Gibbs sampler. Under regularity conditions, we establish explicit convergence rates for this scheme using Ricci curvature and coupling ideas. We support our theory with numerical illustrations.
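A minimal one-dimensional sketch of the split Gibbs sampler: for a target $\pi(x) \propto \exp(-f(x)-g(x))$, the splitting introduces an auxiliary $z$ and alternates the two conditionals of the augmented density $\exp(-f(x)-g(z)-(x-z)^2/(2\rho^2))$. Below $f$ and $g$ are both quadratic so the conditionals are exact Gaussians; all parameter values are illustrative:

```python
import numpy as np

def split_gibbs(n_samples=20000, rho=0.5, seed=0):
    """Split Gibbs sampler targeting pi(x) ∝ exp(-x^2/2 - x^2/2) = N(0, 1/2),
    via the augmented density exp(-x^2/2 - z^2/2 - (x - z)^2 / (2 rho^2)).
    Both conditionals are Gaussian with precision 1 + 1/rho^2."""
    rng = np.random.default_rng(seed)
    x = z = 0.0
    xs = np.empty(n_samples)
    prec = 1.0 + 1.0 / rho**2
    for i in range(n_samples):
        # x | z ~ N(z / (1 + rho^2), 1/prec), and symmetrically for z | x
        x = rng.normal(z / (1.0 + rho**2), prec**-0.5)
        z = rng.normal(x / (1.0 + rho**2), prec**-0.5)
        xs[i] = x
    return xs
```

The coupling parameter $\rho$ trades bias for mixing speed: here the stationary $x$-marginal has variance $(1+\rho^2)/(2+\rho^2)$, which converges to the target value $1/2$ as $\rho \to 0$, while smaller $\rho$ slows the chain.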

We present a practical algorithm to approximate the exponential of skew-Hermitian matrices up to round-off error, based on an efficient computation of Chebyshev polynomials of matrices and the corresponding error analysis. It uses Chebyshev polynomials of degrees 2, 4, 8, 12 and 18, which are computed with only 1, 2, 3, 4 and 5 matrix-matrix products, respectively. For problems of the form $\exp(-iA)$, with $A$ a real and symmetric matrix, an improved version is presented that computes the sine and cosine of $A$ with a reduced computational cost. The theoretical analysis, supported by numerical experiments, indicates that the new methods are more efficient than schemes based on rational Pad\'e approximants and Taylor polynomials for all tolerances and time interval lengths. The new procedure is particularly recommended to be used in conjunction with exponential integrators for the numerical time integration of the Schr\"odinger equation.
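The underlying idea can be illustrated with a naive Chebyshev expansion of $\exp(-iA)$ using the Jacobi-Anger coefficients and the three-term recurrence; this sketch is not the paper's optimized low-product scheme, and the function name and degree are made up for illustration:

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind J_k

def cheb_exp_miA(A, degree=40):
    """Approximate exp(-iA) for real symmetric A using the expansion
    exp(-i a x) = J_0(a) + 2 * sum_{k>=1} (-i)^k J_k(a) T_k(x),
    after scaling the spectrum of A into [-1, 1]."""
    a = np.linalg.norm(A, 2)         # spectral norm bounds the eigenvalues
    B = A / a
    n = A.shape[0]
    T_prev, T_curr = np.eye(n), B
    S = jv(0, a) * np.eye(n) + 2.0 * (-1j) * jv(1, a) * T_curr
    for k in range(2, degree + 1):
        # Chebyshev three-term recurrence: T_{k} = 2 B T_{k-1} - T_{k-2}
        T_prev, T_curr = T_curr, 2.0 * B @ T_curr - T_prev
        S = S + 2.0 * (-1j) ** k * jv(k, a) * T_curr
    return S
```

Since $J_k(a)$ decays super-exponentially once $k$ exceeds $a$, a moderate degree already reaches round-off accuracy; the paper's contribution is evaluating such polynomials with far fewer matrix-matrix products than this plain recurrence.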

We consider a standard distributed optimisation setting where $N$ machines, each holding a $d$-dimensional function $f_i$, aim to jointly minimise the sum of the functions $\sum_{i = 1}^N f_i (x)$. This problem arises naturally in large-scale distributed optimisation, where a standard solution is to apply variants of (stochastic) gradient descent. We focus on the communication complexity of this problem: our main result provides the first fully unconditional bounds on the total number of bits which need to be sent and received by the $N$ machines to solve this problem under point-to-point communication, within a given error-tolerance. Specifically, we show that $\Omega( Nd \log d / N\varepsilon)$ total bits need to be communicated between the machines to find an additive $\varepsilon$-approximation to the minimum of $\sum_{i = 1}^N f_i (x)$. The result holds for both deterministic and randomised algorithms, and, importantly, requires no assumptions on the algorithm structure. The lower bound is tight under certain restrictions on parameter values, and is matched within constant factors for quadratic objectives by a new variant of quantised gradient descent, which we describe and analyse. Our results bring over tools from communication complexity to distributed optimisation, which has potential for further applications.
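The flavour of quantised gradient descent for quadratic objectives can be sketched as follows: each machine transmits a lattice-quantised gradient instead of full-precision floats. The quantiser below is a generic deterministic rounding, not the paper's specific variant, and all names are illustrative:

```python
import numpy as np

def quantize(v, levels=4096):
    """Round each coordinate to a uniform grid of `levels` steps,
    scaled by the vector's max magnitude (illustrative quantiser)."""
    s = np.max(np.abs(v)) + 1e-12
    return np.round(v / s * levels) / levels * s

def distributed_qgd(As, bs, steps=300, lr=0.2):
    """Minimise sum_i ||A_i x - b_i||^2 where machine i holds (A_i, b_i):
    each machine sends a quantised gradient, and their average drives
    a gradient step at the coordinator."""
    x = np.zeros(As[0].shape[1])
    for _ in range(steps):
        g = np.mean([quantize(2 * A.T @ (A @ x - b))
                     for A, b in zip(As, bs)], axis=0)
        x -= lr * g
    return x
```

Coarser grids (smaller `levels`) cut the bits per round but raise the error floor around the optimum, which is exactly the bits-versus-accuracy trade-off the lower bound quantifies.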

We establish improved uniform error bounds on time-splitting methods for the long-time dynamics of the Dirac equation with small electromagnetic potentials characterized by a dimensionless parameter $\varepsilon\in (0, 1]$ representing the amplitude of the potentials. We begin with a semi-discretization of the Dirac equation in time by a time-splitting method, followed by a full discretization in space by the Fourier pseudospectral method. Employing the unitary flow property of the second-order time-splitting method for the Dirac equation, we prove uniform error bounds at $C(t)\tau^2$ and $C(t)(h^m+\tau^2)$ for the semi-discretization and full discretization, respectively, for any time $t\in[0,T_\varepsilon]$ with $T_\varepsilon = T/\varepsilon$ for $T > 0$, which are uniform for $\varepsilon \in (0, 1]$, where $\tau$ is the time step, $h$ is the mesh size, $m\geq 2$ depends on the regularity of the solution, and $C(t) = C_0 + C_1\varepsilon t\le C_0+C_1T$ grows at most linearly with respect to $t$, with $C_0\ge0$ and $C_1>0$ two constants independent of $t$, $h$, $\tau$ and $\varepsilon$. Then, by adopting the regularity compensation oscillation (RCO) technique, which controls the high frequency modes by the regularity of the solution and the low frequency modes by phase cancellation and the energy method, we establish improved uniform error bounds at $O(\varepsilon\tau^2)$ and $O(h^m +\varepsilon\tau^2)$ for the semi-discretization and full discretization, respectively, up to the long time $T_\varepsilon$. Numerical results are reported to confirm our error bounds and to demonstrate that they are sharp. Comparisons of the accuracy of different time discretizations for the Dirac equation are also provided.

This work is devoted to designing and studying efficient and accurate numerical schemes to approximate a chemo-attraction model with consumption effects, a nonlinear parabolic system for two variables: the cell density and the concentration of the chemical signal to which the cells are attracted. We present several finite element schemes to approximate the system, detailing the main properties of each of them, such as conservation of cells, energy-stability and approximated positivity. Moreover, we carry out several numerical simulations to study the efficiency of each of the schemes and to compare them with other classical schemes.

Progressively applying Gaussian noise transforms complex data distributions into approximately Gaussian ones. Reversing this dynamic defines a generative model. When the forward noising process is given by a Stochastic Differential Equation (SDE), Song et al. (2021) demonstrate how the time-inhomogeneous drift of the associated reverse-time SDE may be estimated using score-matching. A limitation of this approach is that the forward-time SDE must be run for a sufficiently long time for the final distribution to be approximately Gaussian. In contrast, solving the Schr\"odinger Bridge problem (SB), i.e. an entropy-regularized optimal transport problem on path spaces, yields diffusions which generate samples from the data distribution in finite time. We present Diffusion SB (DSB), an original approximation of the Iterative Proportional Fitting (IPF) procedure to solve the SB problem, and provide theoretical analysis along with generative modeling experiments. The first DSB iteration recovers the methodology proposed by Song et al. (2021), with the flexibility of using shorter time intervals, as subsequent DSB iterations reduce the discrepancy between the final-time marginal of the forward (resp. backward) SDE with respect to the prior (resp. data) distribution. Beyond generative modeling, DSB offers a widely applicable computational optimal transport tool as the continuous state-space analogue of the popular Sinkhorn algorithm (Cuturi, 2013).
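The discrete ancestor mentioned in the last sentence, Sinkhorn's algorithm (IPF on a fixed cost grid), fits in a few lines; this is an illustrative sketch of the classical discrete algorithm, not of the DSB implementation itself:

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, iters=1000):
    """IPF / Sinkhorn iterations: alternately rescale the Gibbs kernel
    K = exp(-C / eps) so that the transport plan's marginals match
    the source distribution mu and target distribution nu."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(iters):
        v = nu / (K.T @ u)   # fit the second marginal
        u = mu / (K @ v)     # fit the first marginal
    return u[:, None] * K * v[None, :]
```

Each half-update projects onto one marginal constraint, exactly as each DSB half-iteration fits one endpoint distribution of the diffusion.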

In this paper, we construct some piecewise-defined functions and study their $c$-differential uniformity. As a by-product, we improve upon several prior results. Further, we study concatenations of functions with low differential uniformity and establish several results. For example, we prove that, given $\beta_i$ (a basis of $\mathbb{F}_{q^n}$ over $\mathbb{F}_q$), some functions $f_i$ of $c$-differential uniformities $\delta_i$, and $L_i$ (specific linearized polynomials defined in terms of $\beta_i$), $1\leq i\leq n$, the function $F(x)=\sum_{i=1}^n\beta_i f_i(L_i(x))$ has $c$-differential uniformity equal to $\prod_{i=1}^n \delta_i$.
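For concreteness, $c$-differential uniformity can be brute-forced over a small prime field $\mathbb{Z}_p$ using the standard definition $\delta = \max_{a,b} \#\{x : f(x+a) - c\,f(x) = b\}$, with $a \neq 0$ excluded only in the classical case $c = 1$; this checker is an illustration, not code from the paper:

```python
def c_differential_uniformity(f, p, c):
    """Brute-force the c-differential uniformity of f: Z_p -> Z_p."""
    best = 0
    for a in range(p):
        if c == 1 and a == 0:
            continue  # a = 0 is excluded only in the classical case c = 1
        counts = [0] * p
        for x in range(p):
            counts[(f((x + a) % p) - c * f(x)) % p] += 1
        best = max(best, max(counts))
    return best
```

For instance, $x \mapsto x^2$ over $\mathbb{Z}_7$ has classical differential uniformity 1 (it is perfect nonlinear), since $f(x+a)-f(x) = 2ax + a^2$ is a bijection in $x$ for every $a \neq 0$, whereas for $c = 2$ the difference is a genuine quadratic and the uniformity rises to 2.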

The sum-utility maximization problem is known to be important in the energy systems literature. The conventional assumption to address this problem is that the utility is concave. But for some key applications, such an assumption is not reasonable and does not accurately reflect the actual behavior of the consumer. To address this issue, we pose and address a more general optimization problem, namely by assuming the consumer's utility to be sigmoidal and in a given class of functions. The considered class of functions is very attractive for at least two reasons. First, the classical NP-hardness issue associated with sum-utility maximization is circumvented. Second, the considered class of functions encompasses well-known performance metrics used to analyze the problems of pricing and energy-efficiency. This allows one to design a new and optimal inclining block rates (IBR) pricing policy which also has the virtue of flattening the power consumption and reducing the peak power. We also show how to maximize the energy-efficiency by a low-complexity algorithm. Simulations comparing against existing policies support the benefits of the proposed approach.
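Why concavity matters here can be seen in a toy example: with sigmoidal utilities, splitting a scarce power budget evenly can be strictly worse than concentrating it on one consumer, which is the structural feature that breaks concave-programming approaches. The names and parameter values below are illustrative, not from the paper:

```python
import numpy as np

def sigmoid_utility(p, a=5.0, b=1.0):
    """S-shaped utility of power p: convex below the inflection
    point b, concave above it."""
    return 1.0 / (1.0 + np.exp(-a * (p - b)))

def best_two_consumer_split(P_total, n_grid=2001):
    """Brute-force max of u(p1) + u(P_total - p1) over the budget line."""
    p1 = np.linspace(0.0, P_total, n_grid)
    vals = sigmoid_utility(p1) + sigmoid_utility(P_total - p1)
    i = int(np.argmax(vals))
    return p1[i], float(vals[i])
```

With a total budget of 1.5 (below twice the inflection point), the optimum allocates essentially the whole budget to a single consumer, so the equal split is far from optimal and the overall problem is nonconcave.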
