A parallel implementation of a compatible discretization scheme for steady-state Stokes problems is presented in this work. The scheme uses generalized moving least squares (GMLS) to generate differential operators and apply boundary conditions. This meshless scheme achieves high-order convergence for both the velocity and the pressure, while also incorporating a finite-difference-like sparse discretization. Additionally, the method is inherently scalable: the stencil generation process requires only local matrix inversions amenable to GPU acceleration, and the divergence-free treatment of the velocity replaces the traditional saddle-point structure of the global system with elliptic diagonal blocks amenable to algebraic multigrid. The implementation in this work uses a variety of Trilinos packages to exploit this local and global parallelism, and benchmarks demonstrating high-order convergence and weak scalability are provided.
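
The core of the stencil-generation phase is a small weighted least-squares fit per node. As a hedged illustration (not the paper's implementation, which targets Stokes operators and uses Trilinos on GPUs), the following one-dimensional sketch recovers a derivative at a point from scattered neighbors by a Gaussian-weighted quadratic fit, solving the small normal-equation system locally:

```python
import math

def gmls_derivative(x0, nodes, values, eps):
    """Weighted least-squares quadratic fit around x0; return d/dx at x0.
    A minimal 1-D sketch of the GMLS stencil idea (not the paper's code)."""
    # Normal equations A a = b for p(x) = a0 + a1*d + a2*d^2, with d = x - x0.
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for xi, fi in zip(nodes, values):
        d = xi - x0
        w = math.exp(-(d / eps) ** 2)        # Gaussian weight
        basis = [1.0, d, d * d]
        for r in range(3):
            b[r] += w * basis[r] * fi
            for c in range(3):
                A[r][c] += w * basis[r] * basis[c]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            m = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    a = [0.0] * 3
    for r in (2, 1, 0):
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, 3))) / A[r][r]
    return a[1]   # coefficient of (x - x0) is the derivative at x0

h = 0.05
x0 = 0.3
nodes = [x0 + h * k for k in (-2, -1, 0, 1, 2)]
vals = [math.sin(x) for x in nodes]
dfdx = gmls_derivative(x0, nodes, vals, eps=2 * h)
```

Each such local solve is independent of all the others, which is what makes the stencil-generation phase embarrassingly parallel.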

The Bayesian persuasion paradigm of strategic communication models the interaction between a privately informed agent, called the sender, and an uninformed but rational agent, called the receiver. The goal is typically to design a (near-)optimal communication (or signaling) scheme for the sender, which enables the sender to disclose information in a way that incentivizes the receiver to take an action preferred by the sender. Finding the optimal signaling scheme is known to be computationally difficult in general. This hardness is further exacerbated when there is also a constraint on the size of the message space, leading to NP-hardness of approximating the optimal sender utility within any constant factor. In this paper, we show that in several natural and prominent cases the optimization problem is tractable even when the message space is limited. In particular, we study signaling under a symmetry or an independence assumption on the distribution of utility values for the actions. For symmetric distributions, we provide a novel characterization of the optimal signaling scheme, which yields a polynomial-time algorithm to compute an optimal scheme for many compactly represented symmetric distributions. In the independent case, we design a constant-factor approximation algorithm, which stands in marked contrast to the hardness of approximation in the general case.
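
For intuition, the classic two-state, two-action example (often told with a prosecutor and a judge) admits a closed-form optimal scheme. The sketch below is a hedged illustration of that textbook case, not of this paper's algorithms; `threshold` denotes the posterior at which the receiver switches to the sender's preferred action:

```python
def optimal_binary_signaling(prior_guilty, threshold=0.5):
    """Optimal sender scheme for the two-state, two-action example:
    the sender always wants action 1 ('convict'); the receiver takes it
    iff P(guilty | signal) >= threshold.
    Returns (q, sender_utility), where q = P(signal 'convict' | innocent)."""
    p = prior_guilty
    if p >= threshold:
        return 1.0, 1.0      # no information needed: always recommend action 1
    # Recommend 'convict' always when guilty, and with probability q when
    # innocent, chosen so the posterior exactly meets the receiver's threshold:
    #   p / (p + (1 - p) q) = threshold  =>  q = p (1 - threshold) / ((1 - p) threshold)
    q = p * (1 - threshold) / ((1 - p) * threshold)
    utility = p + (1 - p) * q    # probability the 'convict' signal is sent
    return q, utility

q, u = optimal_binary_signaling(0.3)
```

With a prior of 0.3 the sender obtains her preferred action with probability 0.6, double the 0.3 she would get under full disclosure.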

We investigate fast and communication-efficient algorithms for the classic problem of minimizing a sum of strongly convex and smooth functions that are distributed among $n$ different nodes, which can communicate using a limited number of bits. Most previous communication-efficient approaches for this problem are limited to first-order optimization, and therefore have \emph{linear} dependence on the condition number in their communication complexity. We show that this dependence is not inherent: communication-efficient methods can in fact have sublinear dependence on the condition number. For this, we design and analyze the first communication-efficient distributed variants of preconditioned gradient descent for Generalized Linear Models, and for Newton's method. Our results rely on a new technique for quantizing both the preconditioner and the descent direction at each step of the algorithms, while controlling their convergence rate. We also validate our findings experimentally, showing fast convergence and reduced communication.
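
The following toy sketch (hypothetical code, not the paper's algorithm: a single node, a diagonal Jacobi preconditioner, and a deterministic uniform quantizer in place of the paper's lattice quantizers) illustrates the basic idea of quantizing the preconditioned descent direction that is communicated at each step:

```python
def quantize(v, step):
    """Uniform deterministic quantizer: round each coordinate to a grid of
    width `step` (a stand-in for the quantizers analyzed in the paper)."""
    return [round(x / step) * step for x in v]

def quantized_precond_gd(A, b, steps=200, qstep=1e-6, lr=1.0):
    """Jacobi-preconditioned gradient descent on f(x) = 0.5 x'Ax - b'x,
    applying only quantized descent directions (a toy single-node sketch)."""
    n = len(b)
    x = [0.0] * n
    P = [1.0 / A[i][i] for i in range(n)]        # diagonal preconditioner
    for _ in range(steps):
        g = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        d = quantize([P[i] * g[i] for i in range(n)], qstep)  # compressed message
        x = [x[i] - lr * d[i] for i in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = quantized_precond_gd(A, b)   # exact solution is (1/11, 7/11)
```

The quantization step bounds the accuracy of the final iterate by the grid width, which mirrors the paper's trade-off between bits communicated and convergence accuracy.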

In this entry point into the subject, we combine two elementary proofs to decrease the gap between the upper and lower bounds by $0.2\%$ in a classical problem of combinatorial number theory. We show that the maximum size of a Sidon subset of $\{ 1, 2, \ldots, n\}$ is at most $\sqrt{n}+ 0.998n^{1/4}$ for sufficiently large $n$.
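
For small $n$, the quantity being bounded can be computed exactly by brute force; the sketch below (illustrative only: the paper's content is the asymptotic upper bound, not an algorithm) enumerates Sidon subsets by backtracking:

```python
def is_sidon(s):
    """A set is Sidon iff all pairwise sums a+b (a <= b) are distinct."""
    sums = set()
    items = sorted(s)
    for i, a in enumerate(items):
        for b in items[i:]:
            if a + b in sums:
                return False
            sums.add(a + b)
    return True

def max_sidon(n):
    """Exact maximum size of a Sidon subset of {1,...,n} by backtracking
    (exponential time; only for small n)."""
    best = 0
    def extend(current, start):
        nonlocal best
        best = max(best, len(current))
        for k in range(start, n + 1):
            current.append(k)
            if is_sidon(current):
                extend(current, k + 1)
            current.pop()
    extend([], 1)
    return best
```

For example, the maximum sizes for $n = 7$ and $n = 12$ are 4 and 5, attained e.g. by $\{1,2,5,7\}$ and $\{1,3,8,9,12\}$.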

We consider the dissipative spin-orbit problem in Celestial Mechanics, which describes the rotational motion of a triaxial satellite moving on a Keplerian orbit subject to tidal forcing and "drift". Our goal is to construct quasi-periodic solutions with fixed frequency, satisfying appropriate conditions. With the goal of applying rigorous KAM theory, we compute such quasi-periodic solutions with very high precision. To this end, we have developed a very efficient algorithm. The first step is to compute very accurately the return map to a surface of section (using a high-order Taylor method with extended precision). Then, we find an invariant curve for the return map using recent algorithms that take advantage of the geometric features of the problem. This method is based on a rapidly convergent Newton iteration, which is guaranteed to converge if the initial error is small enough, so it is very well suited to a continuation algorithm. The resulting algorithm is quite efficient: we only need to deal with a one-dimensional function. If this function is discretized in $N$ points, the algorithm requires $O(N \log N)$ operations and $O(N)$ storage. The most costly step (the numerical integration of the equation along one turn) is trivial to parallelize. The main goal of the paper is to present the algorithms, implementation details, and several sample results of runs. We also present both a rigorous and a numerical comparison of the results of averaged and non-averaged models.
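
The first step, computing the return map by integrating over one period, can be sketched as follows; this hedged example uses a classical RK4 integrator in double precision in place of the paper's high-order Taylor method with extended precision, and a harmonic oscillator as a toy vector field:

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta 4 step (a stand-in for the high-order
    Taylor integrator used in the paper)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def return_map(f, y0, period, n_steps=2000):
    """Integrate one full period and return the final state: the stroboscopic
    return map whose invariant curves the Newton iteration targets."""
    h = period / n_steps
    t, y = 0.0, list(y0)
    for _ in range(n_steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Toy check: the harmonic oscillator x'' = -x has period 2*pi, so its
# return map over one period is the identity.
osc = lambda t, y: [y[1], -y[0]]
y1 = return_map(osc, [1.0, 0.0], 2 * math.pi)
```

Because each trajectory in the discretized curve is integrated independently, this step parallelizes trivially, as noted above.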

Reliable and efficient trajectory generation methods are a fundamental need for autonomous dynamical systems of tomorrow. The goal of this article is to provide a comprehensive tutorial of three major convex optimization-based trajectory generation methods: lossless convexification (LCvx), and two sequential convex programming algorithms known as SCvx and GuSTO. In this article, trajectory generation is the computation of a dynamically feasible state and control signal that satisfies a set of constraints while optimizing key mission objectives. The trajectory generation problem is almost always nonconvex, which typically means that it is not readily amenable to efficient and reliable solution onboard an autonomous vehicle. The three algorithms that we discuss use problem reformulation and a systematic algorithmic strategy to nonetheless solve nonconvex trajectory generation tasks through the use of a convex optimizer. The theoretical guarantees and computational speed offered by convex optimization have made the algorithms popular in both research and industry circles. To date, the list of applications includes rocket landing, spacecraft hypersonic reentry, spacecraft rendezvous and docking, aerial motion planning for fixed-wing and quadrotor vehicles, robot motion planning, and more. Among these applications are high-profile rocket flights conducted by organizations like NASA, Masten Space Systems, SpaceX, and Blue Origin. This article aims to give the reader the tools and understanding necessary to work with each algorithm, and to know what each method can and cannot do. A publicly available source code repository supports the provided numerical examples. By the end of the article, the reader should be ready to use the methods, to extend them, and to contribute to their many exciting modern applications.
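
As a minimal illustration of the sequential convexification idea shared by SCvx and GuSTO (this sketch is not their actual update rules, which include trust regions and virtual controls), consider minimizing $\|x\|^2$ outside a circular keep-out zone, a common nonconvex ingredient of trajectory problems; the nonconvex constraint is linearized at the current iterate, and here the resulting convex subproblem happens to have a closed-form solution:

```python
import math

def scp_keepout(x0, iters=10):
    """Toy sequential convex programming loop: minimize ||x||^2 subject to
    the nonconvex keep-out constraint ||x|| >= 1, by repeatedly linearizing
    the constraint and solving the convex subproblem."""
    x = list(x0)
    for _ in range(iters):
        nrm = math.hypot(x[0], x[1])
        a = [x[0] / nrm, x[1] / nrm]      # gradient direction of ||x|| at x_k
        # Linearized constraint: a . x >= 1 (supporting half-plane of the
        # keep-out disk). The subproblem min ||x||^2 s.t. a.x >= 1 has the
        # closed-form solution x = a / ||a||^2 = a, since a is a unit vector.
        x = a
    return x

x = scp_keepout([2.0, 1.0])   # converges to the boundary of the keep-out zone
```

In the real algorithms the convex subproblem is a full trajectory optimization solved by a conic solver, but the structure is the same: linearize the nonconvexity, solve a convex problem, repeat.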

This paper introduces an ultra-weak space-time DPG method for the heat equation. We prove well-posedness of the variational formulation with broken test functions and verify quasi-optimality of a practical DPG scheme. Numerical experiments visualize beneficial properties of an adaptive and parabolically scaled mesh-refinement driven by the built-in error control of the DPG method.

We present a stationary iteration method, namely the Alternating Symmetric positive definite and Scaled symmetric positive semidefinite Splitting (ASSS) method, for solving the system arising from finite element discretization of a distributed optimal control problem with time-periodic parabolic equations. We give an upper bound for the spectral radius of the iteration operator that is always less than one, so convergence of the ASSS iteration method is guaranteed. The induced ASSS preconditioner is applied to accelerate the convergence of the GMRES method for solving the system. Numerical results are presented to demonstrate the effectiveness of both the ASSS iteration method and the ASSS preconditioner.
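
The two-half-step structure of such alternating splitting iterations can be sketched generically; the example below implements the classical Hermitian/skew-Hermitian (HSS-type) alternation on a tiny system, purely to illustrate the mechanism — the actual ASSS splitting operators of this paper differ:

```python
def solve2(M, r):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - M[0][1] * r[1]) / det,
            (M[0][0] * r[1] - r[0] * M[1][0]) / det]

def hss_like_iteration(A, b, alpha=1.0, iters=60):
    """Generic alternating splitting iteration with A = H + S
    (H symmetric part, S skew-symmetric part), shown only to illustrate
    the alternating two-half-step structure of splittings such as ASSS."""
    n = 2
    H = [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]
    S = [[(A[i][j] - A[j][i]) / 2 for j in range(n)] for i in range(n)]
    aI_H = [[alpha * (i == j) + H[i][j] for j in range(n)] for i in range(n)]
    aI_S = [[alpha * (i == j) + S[i][j] for j in range(n)] for i in range(n)]
    x = [0.0, 0.0]
    for _ in range(iters):
        # First half-step: (alpha*I + H) x_half = (alpha*I - S) x + b
        r = [sum((alpha * (i == j) - S[i][j]) * x[j] for j in range(n)) + b[i]
             for i in range(n)]
        xh = solve2(aI_H, r)
        # Second half-step: (alpha*I + S) x = (alpha*I - H) x_half + b
        r = [sum((alpha * (i == j) - H[i][j]) * xh[j] for j in range(n)) + b[i]
             for i in range(n)]
        x = solve2(aI_S, r)
    return x

A = [[3.0, 1.0], [-1.0, 3.0]]
b = [1.0, 2.0]
x = hss_like_iteration(A, b)   # exact solution is (0.1, 0.7)
```

Any fixed point of the two half-steps satisfies $(H + S)x = Ax = b$, which is why such alternations define both a solver and, when truncated, a preconditioner.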

We analyse parallel overlapping Schwarz domain decomposition methods for the Helmholtz equation, where the subdomain problems satisfy first-order absorbing (impedance) transmission conditions, and exchange of information between subdomains is achieved using a partition of unity. We provide a novel analysis of this method at the PDE level (without discretization). First, we formulate the method as a fixed point iteration, and show (in dimensions 1,2,3) that it is well-defined in a tensor product of appropriate local function spaces, each with $L^2$ impedance boundary data. Given this, we then obtain a bound on the norm of the fixed point operator in terms of the local norms of certain impedance-to-impedance maps arising from local interactions between subdomains. These bounds provide conditions under which (some power of) the fixed point operator is a contraction. In 2-d, for rectangular domains and strip-wise domain decompositions (with each subdomain only overlapping its immediate neighbours), we present two techniques for verifying the assumptions on the impedance-to-impedance maps which ensure power contractivity of the fixed point operator. The first is through semiclassical analysis, which gives rigorous estimates valid as the frequency tends to infinity. These results verify the required assumptions for sufficiently large overlap. For more realistic domain decompositions, we directly compute the norms of the impedance-to-impedance maps by solving certain canonical (local) eigenvalue problems. We give numerical experiments that illustrate the theory. These also show that the iterative method remains convergent and/or provides a good preconditioner in cases not covered by the theory, including for general domain decompositions, such as those obtained via automatic graph-partitioning software.
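
The subdomain-iteration structure can be illustrated on a much simpler model problem; the hedged sketch below runs classical alternating Schwarz with Dirichlet transmission for a 1-D Poisson equation (not the Helmholtz/impedance setting analyzed in the paper), with each subdomain solve done in closed form:

```python
def local_solve(p, q, up, uq):
    """Exact solution of u'' = -1 on [p, q] with u(p)=up, u(q)=uq.
    The general solution is u(x) = -x^2/2 + c1*x + c0."""
    c1 = (uq - up + (q * q - p * p) / 2) / (q - p)
    c0 = up + p * p / 2 - c1 * p
    return lambda x: -x * x / 2 + c1 * x + c0

def alternating_schwarz(p1=0.0, m1=0.6, m2=0.4, p2=1.0, iters=40):
    """Classical alternating Schwarz with Dirichlet transmission for
    u'' = -1, u(0)=u(1)=0, on the overlapping intervals [0,0.6] and [0.4,1].
    A simpler model than the paper's impedance/Helmholtz setting, shown only
    to illustrate the exchange of interface data between subdomains."""
    g1 = 0.0   # current guess for u at x = m1 (right end of subdomain 1)
    for _ in range(iters):
        u1 = local_solve(p1, m1, 0.0, g1)    # solve on [0, 0.6]
        g2 = u1(m2)                          # pass trace at x = 0.4 rightward
        u2 = local_solve(m2, p2, g2, 0.0)    # solve on [0.4, 1]
        g1 = u2(m1)                          # pass trace at x = 0.6 back
    return u1, u2

u1, u2 = alternating_schwarz()   # exact solution is u(x) = x(1-x)/2
```

The convergence rate of this toy iteration depends on the overlap width, echoing the role the overlap plays in the contraction estimates above.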

This work presents a suitable mathematical analysis to understand the convergence and bounded-variation properties of a new fully discrete, locally conservative, explicit Lagrangian--Eulerian numerical scheme with respect to the entropy solution in the sense of Kruzhkov, via the weak asymptotic method. We also use the weak asymptotic method to connect the theoretical developments with the computational approach within the practical framework of a solid numerical analysis. This method also serves to address the issue of notions of solution, and its resulting algorithms have proven effective for studying nonlinear wave formation and rarefaction interactions in intricate applications. The weak asymptotic solutions we compute in this study with our novel Lagrangian--Eulerian framework are shown to coincide with classical solutions and Kruzhkov entropy solutions in the scalar case. Moreover, we present and discuss significant computational aspects by means of numerical experiments on nontrivial problems: a nonlocal traffic model, the $2 \times 2$ symmetric Keyfitz--Kranzer system, and numerical studies via the Wasserstein distance to explain shock interaction in the fundamental inviscid Burgers model for fluids. Therefore, the proposed weak asymptotic analysis, applied to the Lagrangian--Eulerian framework, fits properly within the classical theory while optimizing the mathematical computations for the construction of new accurate numerical schemes.
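
For the inviscid Burgers experiments mentioned above, the defining property of a locally conservative scheme is that cell updates are differences of interface fluxes, so total mass is preserved exactly by telescoping. The hedged sketch below uses a generic Lax--Friedrichs scheme, not the Lagrangian--Eulerian scheme of the paper, to illustrate that property:

```python
import math

def lax_friedrichs_burgers(u0, dx, dt, steps):
    """Conservative Lax-Friedrichs scheme for the inviscid Burgers equation
    u_t + (u^2/2)_x = 0 with periodic boundary conditions (a generic
    conservative scheme, used only to illustrate discrete conservation)."""
    u = list(u0)
    n = len(u)
    flux = lambda v: 0.5 * v * v
    for _ in range(steps):
        # Lax-Friedrichs numerical flux at interface i+1/2:
        F = [0.5 * (flux(u[i]) + flux(u[(i + 1) % n]))
             - 0.5 * (dx / dt) * (u[(i + 1) % n] - u[i]) for i in range(n)]
        # Update is a difference of interface fluxes -> exact conservation.
        u = [u[i] - (dt / dx) * (F[i] - F[i - 1]) for i in range(n)]
    return u

n, dx, dt = 100, 1.0 / 100, 0.004       # CFL number dt*max|u|/dx = 0.4 < 1
u0 = [math.sin(2 * math.pi * (i + 0.5) * dx) for i in range(n)]
u = lax_friedrichs_burgers(u0, dx, dt, 100)
mass0 = sum(u0) * dx
mass = sum(u) * dx
```

Even after the sine wave steepens into a shock, the discrete mass is unchanged up to roundoff, which is the property the locally conservative Lagrangian--Eulerian construction is designed to retain.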

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
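
The local smoothing underlying DRS replaces $f$ by $f_\mu(x) = \mathbb{E}[f(x+\mu u)]$ with Gaussian $u$, whose gradient admits a simple two-point estimator. The 1-D sketch below (illustrative only; the paper's algorithm is distributed and $d$-dimensional) estimates the smoothed gradient of the non-smooth function $|x|$:

```python
import random

def smoothed_gradient_estimate(f, x, mu=0.1, samples=20000, seed=0):
    """Two-point randomized-smoothing gradient estimator for a possibly
    non-smooth scalar function: average of (f(x+mu*u) - f(x-mu*u))/(2*mu) * u
    with u ~ N(0,1), which approximates the gradient of the smoothed
    function f_mu(x) = E[f(x + mu*u)].  A 1-D sketch of the smoothing idea
    behind DRS, not the full distributed algorithm."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        u = rng.gauss(0.0, 1.0)
        acc += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return acc / samples

# At x = 1 the true gradient of |x| is 1, and the estimate concentrates there.
g = smoothed_gradient_estimate(lambda t: abs(t), 1.0)
```

Smoothing trades a controllable bias (of order $\mu$) for differentiability, which is what lets DRS apply accelerated smooth-optimization machinery to non-smooth objectives.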