
The first globally convergent numerical method for a Coefficient Inverse Problem (CIP) for the Riemannian Radiative Transfer Equation (RRTE) is constructed. This is a version of the so-called \textquotedblleft convexification\textquotedblright\ method, which has been pursued by this research group for a number of years for other CIPs for PDEs that are significantly different from the RRTE. The presence of the Carleman Weight Function (CWF) in the numerical scheme is the key element that ensures global convergence. Convergence analysis is presented along with the results of numerical experiments, which confirm the theory. The RRTE governs the propagation of photons in a diffuse medium when they travel along geodesic lines between collisions; these geodesic lines are generated by the spatially variable dielectric constant of the medium.
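
To convey the flavour of the Carleman weighting, the following is a minimal 1-D toy sketch of a weighted least-squares cost in which the weight $e^{2\lambda\varphi(x)}$ emphasizes part of the domain as $\lambda$ grows; the weight exponent $\varphi$, the placeholder residual, and the synthetic data are illustrative assumptions, not the functional constructed in the paper.

```python
import numpy as np

# Toy illustration of a Carleman-weighted least-squares cost.  The weight
# exp(2*lam*phi(x)) with phi decreasing in x makes the misfit near x = 0
# dominate as lam grows, which is the mechanism convexification exploits.
# The residual below is a placeholder, not the RRTE operator.

x = np.linspace(0.0, 1.0, 201)
phi = 1.0 - x                               # assumed weight exponent phi(x)

def carleman_cost(q, data, lam):
    """Weighted misfit J_lam(q) = int exp(2*lam*phi(x)) * residual(x)^2 dx."""
    residual = q - data                     # placeholder residual r(x; q)
    weight = np.exp(2.0 * lam * phi)
    return np.trapz(weight * residual**2, x)

data = np.sin(np.pi * x)                    # synthetic "measurements"
q_guess = np.zeros_like(x)

for lam in (0.0, 2.0, 5.0):
    print(f"lambda = {lam:3.1f}   J = {carleman_cost(q_guess, data, lam):.3e}")
```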

Related Content

This paper investigates gradient descent for solving low-rank matrix approximation problems. We begin by establishing the local linear convergence of gradient descent for symmetric matrix approximation. Building on this result, we prove the rapid global convergence of gradient descent, particularly when initialized with small random values. Remarkably, we show that even with moderate random initialization, which includes small random initialization as a special case, gradient descent achieves fast global convergence in scenarios where the top eigenvalues are identical. Furthermore, we extend our analysis to address asymmetric matrix approximation problems and investigate the effectiveness of a retraction-free eigenspace computation method. Numerical experiments strongly support our theory. In particular, the retraction-free algorithm outperforms the corresponding Riemannian gradient descent method, resulting in a significant 29\% reduction in runtime.
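
For reference, here is a minimal sketch of factorized gradient descent with small random initialization for symmetric low-rank approximation; the step size, the initialization scale, and the exactly rank-$r$ test matrix are illustrative choices rather than the settings analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3

# Symmetric PSD target of exact rank r.
G = rng.standard_normal((n, r))
A = G @ G.T

# Small random initialization of the factor X; the approximation is X @ X.T.
X = 1e-3 * rng.standard_normal((n, r))
eta = 0.2 / np.linalg.norm(A, 2)            # assumed step size

for _ in range(2000):
    grad = (X @ X.T - A) @ X                # gradient of 0.25 * ||X X^T - A||_F^2
    X -= eta * grad

err = np.linalg.norm(X @ X.T - A) / np.linalg.norm(A)
print(f"relative Frobenius error after gradient descent: {err:.2e}")
```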

We present and analyze a hybridizable discontinuous Galerkin (HDG) finite element method for the coupled Stokes--Biot problem. Of particular interest is that the discrete velocities and displacement are $H(\text{div})$-conforming and satisfy the compressibility equations pointwise on the elements. Furthermore, in the incompressible limit, the discretization is strongly conservative. We prove well-posedness of the discretization and, after combining the HDG method with backward Euler time stepping, present a priori error estimates that demonstrate that the method is free of volumetric locking. Numerical examples further demonstrate optimal rates of convergence in the $L^2$-norm for all unknowns and that the discretization is locking-free.
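
Purely as an algebraic illustration of the time-stepping component, the sketch below applies backward Euler to a generic coupled linear system with a saddle point block structure; the random stand-in matrices and block sizes are assumptions, and the code does not implement the HDG spatial discretization itself.

```python
import numpy as np

rng = np.random.default_rng(1)
nu, npr = 20, 10                            # assumed sizes of the two blocks

# Random stand-ins for the discrete operators: A and M symmetric positive
# definite, B a coupling block.  These are NOT an HDG assembly.
Qa = rng.standard_normal((nu, nu)); A = Qa @ Qa.T + nu * np.eye(nu)
Qm = rng.standard_normal((npr, npr)); M = Qm @ Qm.T + npr * np.eye(npr)
B = rng.standard_normal((npr, nu))
f = rng.standard_normal(nu)                 # constant forcing

dt, nsteps = 0.01, 50
u, p = np.zeros(nu), np.zeros(npr)

# Backward Euler: at each step solve the coupled block system
#   [ A   B^T   ] [u^{n+1}]   [ f           ]
#   [ B  -M/dt  ] [p^{n+1}] = [ -M p^n / dt ]
K = np.block([[A, B.T], [B, -M / dt]])
for _ in range(nsteps):
    rhs = np.concatenate([f, -M @ p / dt])
    sol = np.linalg.solve(K, rhs)
    u, p = sol[:nu], sol[nu:]

print("||u|| =", np.linalg.norm(u), "  ||p|| =", np.linalg.norm(p))
```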

In this paper, we present a numerical approach to solve the McKean-Vlasov equations, which are distribution-dependent stochastic differential equations, under non-globally Lipschitz conditions on both the drift and diffusion coefficients. We establish a propagation of chaos result, based on which the McKean-Vlasov equation is approximated by an interacting particle system. A truncated Euler scheme is then proposed for the interacting particle system, allowing for a Khasminskii-type condition on the coefficients. To reduce the computational cost, the random batch approximation proposed in [Jin et al., J. Comput. Phys., 400(1), 2020] is extended to the interacting particle system, where the interaction may take place in the diffusion term. A convergence rate of almost one half is proved in the $L^p$ sense. Numerical tests are performed to verify the theoretical results.
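
A minimal sketch of the interacting particle approximation with a random batch treatment of the interaction and a simple truncation of the drift is given below; the particular drift, diffusion, truncation level, and batch size are illustrative stand-ins rather than the exact ingredients of the scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 200, 1.0, 1e-3                   # particles, time horizon, step size
p = 10                                      # assumed random-batch size
R = 50.0                                    # assumed truncation threshold

def truncate(v, bound):
    """Clip componentwise to [-bound, bound] (a simple stand-in for truncation)."""
    return np.clip(v, -bound, bound)

X = rng.standard_normal(N)                  # initial particle positions (1-D)

for _ in range(int(T / dt)):
    perm = rng.permutation(N)
    Xnew = X.copy()
    for start in range(0, N, p):
        batch = perm[start:start + p]
        xb = X[batch]
        # Interaction evaluated only within the batch (random batch method);
        # the drift -x^3 + (batch mean - x) has a non-globally Lipschitz cubic
        # term and is truncated before use.
        drift = truncate(-xb**3 + (xb.mean() - xb), R)
        diffusion = 0.5 * np.sqrt(1.0 + xb**2)        # placeholder diffusion
        dW = np.sqrt(dt) * rng.standard_normal(len(batch))
        Xnew[batch] = xb + drift * dt + diffusion * dW
    X = Xnew

print("empirical mean:", X.mean(), "  empirical std:", X.std())
```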

Deflation techniques are typically used to shift isolated clusters of small eigenvalues in order to obtain a tighter distribution and a smaller condition number. Such changes induce a positive effect in the convergence behavior of Krylov subspace methods, which are among the most popular iterative solvers for large sparse linear systems. We develop a deflation strategy for symmetric saddle point matrices by taking advantage of their underlying block structure. The vectors used for deflation come from an elliptic singular value decomposition relying on the generalized Golub-Kahan bidiagonalization process. The block targeted by deflation is the off-diagonal one since it features a problematic singular value distribution for certain applications. One example is the Stokes flow in elongated channels, where the off-diagonal block has several small, isolated singular values, depending on the length of the channel. Applying deflation to specific parts of the saddle point system is important when using solvers such as CRAIG, which operates on individual blocks rather than the whole system. The theory is developed by extending the existing framework for deflating square matrices before applying a Krylov subspace method like MINRES. Numerical experiments confirm the merits of our strategy and lead to interesting questions about using approximate vectors for deflation.
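
As a toy illustration of targeting the off-diagonal block, the sketch below extracts the singular vectors of $B$ associated with its few small, isolated singular values and shifts them toward the rest of the spectrum; the block sizes, the shift value, and the use of a dense SVD (in place of the generalized Golub-Kahan bidiagonalization) are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 40, 15, 3                         # B is m x n; deflate k small singular values

# Off-diagonal block of a saddle point matrix [[A, B^T], [B, 0]], built so that
# it has a few isolated, very small singular values (as in elongated channels).
U0, _ = np.linalg.qr(rng.standard_normal((m, m)))
V0, _ = np.linalg.qr(rng.standard_normal((n, m)))
svals = np.concatenate([np.full(k, 1e-4), np.linspace(1.0, 5.0, m - k)])
B = U0 @ np.diag(svals) @ V0.T

# Deflation vectors: singular vectors of B for its k smallest singular values.
# (Dense SVD here; in practice they would come from the generalized Golub-Kahan
# bidiagonalization of the saddle point system.)
U, s, Vt = np.linalg.svd(B, full_matrices=False)
Uk, Vk = U[:, -k:], Vt[-k:, :].T

# Shift the deflated directions next to the rest of the spectrum.
shift = s[:m - k].min()
B_defl = B + Uk @ np.diag(shift - s[-k:]) @ Vk.T
s_defl = np.linalg.svd(B_defl, compute_uv=False)

print("smallest singular values before:", np.round(np.sort(s)[:k], 6))
print("smallest singular values after :", np.round(np.sort(s_defl)[:k], 6))
```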

The combined global increase in energy demand and environmental pollution is prompting a rethinking of production models in sustainable terms. As a consequence, energy suppliers are starting to adopt strategies that flatten demand peaks in power plants by means of pricing policies that encourage a change in customers' consumption practices. A representative example is the Time-of-Use (TOU)-based tariffs policy, which encourages electricity usage at off-peak hours by means of low prices, while penalizing peak hours with higher prices. A TOU-based tariffs policy induces a partitioning of the time horizon into a set of time slots, each associated with a cost that becomes part of the optimization objective. This thesis focuses on a representative bi-objective energy-efficient job scheduling problem on parallel identical machines under TOU-based tariffs, examining its inherent properties, mathematical formulations, and solution approaches. Specifically, the thesis starts by reviewing the flourishing literature on the subject and providing a useful framework for theoreticians and practitioners. Subsequently, it describes the considered problem and investigates its theoretical properties. In the same chapter, it presents a first mathematical model for the problem, as well as a reformulation that exploits the structure of the solution space to achieve a considerably more compact formulation. Afterwards, the thesis introduces a sophisticated heuristic scheme to tackle the inherent hardness of the problem, and an exact algorithm that exploits the mathematical models. Then, it demonstrates the computational efficiency of the presented solution approaches on a wide test benchmark. Finally, it offers a perspective on future research directions for the class of energy-efficient scheduling problems under TOU-based tariffs as a whole.
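
To fix ideas about the two objectives, the toy enumeration below evaluates makespan and TOU energy cost for all assignments of a handful of unit-power jobs to two identical machines on a small slotted horizon and reports the non-dominated points; the instance, the slot prices, and the rule that each machine runs its jobs contiguously are illustrative assumptions, not the thesis's models.

```python
import itertools
import numpy as np

# Toy bi-objective instance: unit-power jobs on two identical machines under a
# TOU tariff.  Prices per unit-length time slot (the middle slots are "peak").
prices = np.array([1.0, 1.0, 4.0, 4.0, 4.0, 2.0, 1.0, 1.0])
jobs = [2, 2, 1, 3]                         # integer processing times
H, M = len(prices), 2

points = set()                              # (makespan, energy cost) pairs
for assign in itertools.product(range(M), repeat=len(jobs)):
    loads = [sum(p for p, a in zip(jobs, assign) if a == m) for m in range(M)]
    # Each machine runs its jobs contiguously, starting at a chosen slot.
    for starts in itertools.product(*(range(H - L + 1) if L else [0] for L in loads)):
        makespan = max(s + L for s, L in zip(starts, loads))
        cost = sum(prices[s:s + L].sum() for s, L in zip(starts, loads))
        points.add((makespan, round(float(cost), 2)))

# Keep only the non-dominated (Pareto-optimal) objective vectors.
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
print(sorted(pareto))
```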

We study the convergence to local Nash equilibria of gradient methods for two-player zero-sum differentiable games. It is well-known that such dynamics converge locally when $S \succ 0$ and may diverge when $S=0$, where $S\succeq 0$ is the symmetric part of the Jacobian at equilibrium that accounts for the "potential" component of the game. We show that these dynamics also converge as soon as $S$ is nonzero (partial curvature) and the eigenvectors of the antisymmetric part $A$ are in general position with respect to the kernel of $S$. We then study the convergence rates when $S \ll A$ and prove that they typically depend on the average of the eigenvalues of $S$, instead of the minimum as an analogy with minimization problems would suggest. To illustrate our results, we consider the problem of computing mixed Nash equilibria of continuous games. We show that, thanks to partial curvature, conic particle methods -- which optimize over both weights and supports of the mixed strategies -- generically converge faster than fixed-support methods. For min-max games, it is thus beneficial to add degrees of freedom "with curvature": this can be interpreted as yet another benefit of over-parameterization.
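
The sketch below simulates simultaneous gradient descent-ascent on a scalar-per-player quadratic zero-sum game whose Jacobian at the equilibrium is $S+A$, contrasting the purely rotational case $S=0$ with a partial-curvature case where $S$ is nonzero but singular; the coefficients and step size are illustrative.

```python
import numpy as np

def gda(P, Q, C, eta=0.05, steps=2000):
    """Simultaneous GDA on f(x, y) = (P/2) x^2 + C x y - (Q/2) y^2."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        gx = P * x + C * y                  # df/dx (player 1 descends)
        gy = C * x - Q * y                  # df/dy (player 2 ascends)
        x, y = x - eta * gx, y + eta * gy
    return np.hypot(x, y)                   # distance to the equilibrium (0, 0)

# Purely rotational game, S = 0: simultaneous GDA spirals outwards.
print("S = 0            :", gda(P=0.0, Q=0.0, C=1.0))
# Partial curvature, S = diag(0.2, 0) nonzero but singular: GDA converges.
print("partial curvature:", gda(P=0.2, Q=0.0, C=1.0))
```

In the second run, the rotation generated by the antisymmetric part keeps mixing the flat direction with the curved one, which is the phenomenon behind convergence under partial curvature.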

We describe a family of decidable propositional dynamic logics, where atomic modalities satisfy some extra conditions (for example, given by axioms of the logics K5, S5, or K45 for different atomic modalities). It follows from recent results (Kikot, Shapirovsky, Zolin, 2014; 2020) that if a modal logic $L$ admits a special type of filtration (so-called definable filtration), then its enrichments with modalities for the transitive closure and converse relations also admit definable filtration. We use these results to show that if logics $L_1, \ldots , L_n$ admit definable filtration, then the propositional dynamic logic with converse extended by the fusion $L_1*\ldots * L_n$ has the finite model property.
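
As background for readers less familiar with filtration, here is a small self-contained sketch of the minimal filtration of a finite Kripke model through a subformula-closed set; the toy model and formula are illustrative, and the code does not touch the definable-filtration or PDL-with-converse machinery of the paper.

```python
# Formulas are tuples: ('var', p), ('not', f), ('and', f, g), ('box', f).
def subformulas(f):
    yield f
    if f[0] in ('not', 'box'):
        yield from subformulas(f[1])
    elif f[0] == 'and':
        yield from subformulas(f[1])
        yield from subformulas(f[2])

def truth(w, f, R, V):
    if f[0] == 'var':
        return f[1] in V[w]
    if f[0] == 'not':
        return not truth(w, f[1], R, V)
    if f[0] == 'and':
        return truth(w, f[1], R, V) and truth(w, f[2], R, V)
    if f[0] == 'box':
        return all(truth(v, f[1], R, V) for v in R[w])

# A toy Kripke model: four worlds, a relation, and a valuation.
W = [0, 1, 2, 3]
R = {0: [1], 1: [2], 2: [3], 3: [3]}
V = {0: {'p'}, 1: set(), 2: {'p'}, 3: {'p'}}

sigma = list(subformulas(('box', ('var', 'p'))))      # subformula-closed set

# Two worlds are identified when they agree on every formula in sigma.
def profile(w):
    return tuple(truth(w, f, R, V) for f in sigma)

classes = {}
for w in W:
    classes.setdefault(profile(w), []).append(w)
quotient = list(classes.values())

# Minimal filtration: [x] -> [y] iff some x' in [x] sees some y' in [y].
Rf = {i: [j for j, cj in enumerate(quotient)
          if any(y in R[x] for x in ci for y in cj)]
      for i, ci in enumerate(quotient)}

print("equivalence classes:", quotient)
print("filtered relation  :", Rf)
```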

Differentially private synthetic data provide a powerful mechanism to enable data analysis while protecting sensitive information about individuals. However, when the data lie in a high-dimensional space, the accuracy of the synthetic data suffers from the curse of dimensionality. In this paper, we propose a differentially private algorithm to generate low-dimensional synthetic data efficiently from a high-dimensional dataset with a utility guarantee with respect to the Wasserstein distance. A key step of our algorithm is a private principal component analysis (PCA) procedure with a near-optimal accuracy bound that circumvents the curse of dimensionality. Different from the standard perturbation analysis using the Davis-Kahan theorem, our analysis of private PCA works without assuming the spectral gap for the sample covariance matrix.
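
A minimal sketch of one standard way to privatize PCA, namely adding symmetric Gaussian noise to the empirical covariance before the eigendecomposition, is shown below; the noise calibration, the norm-clipped synthetic data, and the accuracy metric are illustrative assumptions, and this is not the paper's private PCA procedure or its synthetic-data pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 5000, 50, 5                       # samples, ambient dimension, target dimension

# Synthetic data with an approximately rank-k covariance; rows are clipped to
# norm <= 1 so each individual has bounded influence on the covariance.
U = np.linalg.qr(rng.standard_normal((d, k)))[0]
X = rng.standard_normal((n, k)) @ U.T + 0.05 * rng.standard_normal((n, d))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))

eps, delta = 1.0, 1e-5
cov = X.T @ X / n
# Gaussian mechanism: replacing one row changes the covariance by at most 2/n
# in Frobenius norm, so the noise scale is calibrated to that sensitivity.
sigma = (2.0 / n) * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
noise = rng.standard_normal((d, d)) * sigma
noise = (noise + noise.T) / np.sqrt(2.0)    # symmetrize (off-diagonal variance kept)
V_priv = np.linalg.eigh(cov + noise)[1][:, -k:]   # private top-k eigenvectors

V_true = np.linalg.eigh(cov)[1][:, -k:]
overlap = np.linalg.norm(V_true.T @ V_priv) / np.sqrt(k)
print(f"subspace overlap (1.0 = identical): {overlap:.3f}")
```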

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
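
As a concrete minimal example of the basic object, the sketch below evaluates a tiny neural ODE forward pass with an explicit Euler scheme in plain NumPy; the architecture, random weights, and step count are illustrative, and no training or adjoint computation is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small MLP vector field f_theta(t, z): R x R^d -> R^d with random weights.
d, hidden = 2, 16
W1 = rng.standard_normal((hidden, d + 1)) / np.sqrt(d + 1)
b1 = np.zeros(hidden)
W2 = rng.standard_normal((d, hidden)) / np.sqrt(hidden)
b2 = np.zeros(d)

def vector_field(t, z):
    h = np.tanh(W1 @ np.append(z, t) + b1)
    return W2 @ h + b2

def odeint_euler(z0, t0=0.0, t1=1.0, n_steps=100):
    """Explicit Euler integration of dz/dt = f_theta(t, z): the NDE forward pass."""
    z, t = z0.copy(), t0
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        z = z + dt * vector_field(t, z)
        t += dt
    return z

z0 = np.array([1.0, -1.0])
print("z(0) =", z0, " z(1) =", odeint_euler(z0))
```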

Sampling methods (e.g., node-wise, layer-wise, or subgraph sampling) have become an indispensable strategy for speeding up the training of large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on graph structural information and ignore the dynamics of optimization, which leads to high variance in estimating the stochastic gradients. The high variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method can be decomposed into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage, and that both types of variance must be mitigated to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance, and explicitly reduces the variance introduced by embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and achieves better generalization than existing methods.
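
As a simplified illustration of the backward-stage idea only, the sketch below compares uniform sampling with importance sampling proportional to per-example gradient norms when estimating a full gradient; the toy least-squares model stands in for a GNN, and the embedding-approximation component is not modelled.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, batch = 2000, 20, 32

# Toy supervised problem: per-example gradients of a least-squares loss, with
# deliberately heterogeneous example scales so gradient norms vary widely.
X = rng.standard_normal((n, d)) * rng.uniform(0.2, 3.0, size=(n, 1))
w = rng.standard_normal(d)
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
per_example_grads = 2.0 * (X @ w - y)[:, None] * X        # shape (n, d)
full_grad = per_example_grads.mean(axis=0)

def estimator_variance(probs, trials=2000):
    """Monte Carlo estimate of the error of the importance-sampled mini-batch gradient."""
    errs = []
    for _ in range(trials):
        idx = rng.choice(n, size=batch, p=probs)
        est = (per_example_grads[idx] / (n * probs[idx, None])).sum(axis=0) / batch
        errs.append(np.sum((est - full_grad) ** 2))
    return np.mean(errs)

uniform = np.full(n, 1.0 / n)
norms = np.linalg.norm(per_example_grads, axis=1)
adaptive = norms / norms.sum()              # gradient-norm importance sampling

print("uniform sampling variance :", estimator_variance(uniform))
print("adaptive sampling variance:", estimator_variance(adaptive))
```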
