
This paper deals with the derivation of Non-Intrusive Reduced Basis (NIRB) techniques for sensitivity analysis, more specifically the direct and adjoint state methods. For highly complex parametric problems, these two approaches may become too costly. To reduce computational times, Proper Orthogonal Decomposition (POD) and Reduced Basis Methods (RBMs) have already been investigated. The majority of these algorithms are however intrusive in the sense that the High-Fidelity (HF) code must be modified. To address this issue, non-intrusive strategies are employed. The NIRB two-grid method uses the HF code solely as a ``black-box'', requiring no code modification. Like other RBMs, it is based on an offline-online decomposition. The offline stage is time-consuming, but it is only executed once, whereas the online stage is significantly less expensive than an HF evaluation. In this paper, we propose new NIRB two-grid algorithms for both the direct and adjoint state methods. On a classical model problem, the heat equation, we prove that HF evaluations of sensitivities reach an optimal convergence rate in $L^{\infty}(0,T;H^1(\Omega))$, and then establish that these rates are recovered by the proposed NIRB approximations. These results are supported by numerical simulations. We then numerically demonstrate that a further deterministic post-treatment can be applied to the direct method. This further reduces the computational cost of the online step while requiring only a coarse solution of the initial problem. All numerical experiments are carried out on the model problem as well as on a more complex problem, namely the Brusselator system.
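
To make the offline-online structure concrete, the following minimal Python sketch illustrates a generic two-grid reduced-basis workflow: a POD basis is built once from high-fidelity snapshots, and the online stage only requires a coarse solve projected onto that basis. The helpers `solve_hf`, `solve_coarse` and `prolongate` are hypothetical placeholders, not the paper's code, and the rectification post-treatment is omitted.

```python
import numpy as np

def build_rb_basis(training_params, solve_hf, n_modes):
    """Offline stage: high-fidelity snapshots compressed by POD (via SVD)."""
    snapshots = np.column_stack([solve_hf(mu) for mu in training_params])
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes]                 # columns = orthonormal RB modes

def nirb_online(mu, basis, solve_coarse, prolongate):
    """Online stage: one cheap coarse solve, projected onto the fine RB space."""
    u_H = prolongate(solve_coarse(mu))    # coarse solution mapped to the fine mesh
    coeffs = basis.T @ u_H                # projection coefficients
    return basis @ coeffs                 # NIRB approximation of the fine solution
```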

Related content

In the literature on game-theoretic equilibrium finding, the focus has mainly been on solving a single game in isolation. In practice, however, strategic interactions -- ranging from routing problems to online advertising auctions -- evolve dynamically, leading to many similar games that need to be solved. To address this gap, we introduce meta-learning for equilibrium finding and learning to play games. We establish the first meta-learning guarantees for a variety of fundamental and well-studied classes of games, including two-player zero-sum games, general-sum games, and Stackelberg games. In particular, we obtain rates of convergence to different game-theoretic equilibria that depend on natural notions of similarity between the sequence of games encountered, while at the same time recovering the known single-game guarantees when the sequence of games is arbitrary. Along the way, we prove a number of new results in the single-game regime through a simple and unified framework, which may be of independent interest. Finally, we evaluate our meta-learning algorithms on endgames faced by the poker agent Libratus against top human professionals. The experiments show that games with varying stack sizes can be solved significantly faster using our meta-learning techniques than by solving them separately, often by an order of magnitude.
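
As a toy illustration of the warm-starting idea (one simple instantiation of meta-learning across similar games, not the paper's algorithms), the sketch below solves a sequence of slowly drifting zero-sum matrix games with multiplicative-weights self-play, initializing each game at the previous game's average strategies. All matrices and step sizes are invented for the example.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def solve_zero_sum(A, x0, y0, eta=0.1, iters=2000):
    """Multiplicative-weights self-play; returns time-averaged strategies."""
    lx, ly = np.log(x0), np.log(y0)        # warm-started logits
    x, y = x0.copy(), y0.copy()
    x_avg, y_avg = np.zeros_like(x0), np.zeros_like(y0)
    for _ in range(iters):
        lx -= eta * (A @ y)                # row player minimizes x^T A y
        ly += eta * (A.T @ x)              # column player maximizes it
        x, y = softmax(lx), softmax(ly)
        x_avg += x
        y_avg += y
    return x_avg / iters, y_avg / iters

rng = np.random.default_rng(0)
base = rng.standard_normal((5, 5))
games = [base + 0.05 * rng.standard_normal((5, 5)) for _ in range(10)]  # similar games
x0 = y0 = np.full(5, 0.2)                  # uniform start for the first game
for A in games:
    x0, y0 = solve_zero_sum(A, x0, y0)     # warm-start from the previous solution
```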

Algorithms for data assimilation try to predict the most likely state of a dynamical system by combining information from observations and prior models. Variational approaches, such as the weak-constraint four-dimensional variational data assimilation formulation considered in this paper, can ultimately be interpreted as a minimization problem. One of the main challenges of such a formulation is the solution of large linear systems of equations which arise within the inner linear step of the adopted nonlinear solver. Depending on the adopted approach, these linear algebraic problems amount to either a saddle-point linear system or a symmetric positive definite (SPD) one. Both formulations can be solved by means of a Krylov method, like GMRES or CG, that needs to be preconditioned to ensure fast convergence in terms of the number of iterations. In this paper we illustrate novel, efficient preconditioning operators which involve the solution of certain Stein matrix equations. In addition to achieving better computational performance, the latter machinery allows us to derive tighter bounds for the eigenvalue distribution of the preconditioned linear system for certain problem settings. A panel of diverse numerical results displays the effectiveness of the proposed methodology compared to current state-of-the-art approaches.
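
As a purely illustrative sketch of how a nonstandard preconditioner is wired into a Krylov solver (here SciPy's GMRES), the snippet below wraps a user-supplied approximate inverse as a LinearOperator. The Jacobi placeholder stands in for the paper's Stein-equation-based operators, and the test matrix is synthetic rather than a 4D-Var system.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
n = 200
A = np.diag(np.linspace(1.0, 100.0, n)) + 0.01 * rng.standard_normal((n, n))  # synthetic test matrix
b = np.ones(n)

def apply_prec(r):
    # Placeholder approximate inverse (Jacobi); the paper instead applies an
    # operator defined through the solution of a Stein matrix equation here.
    return r / np.diag(A)

M = LinearOperator((n, n), matvec=apply_prec)
x, info = gmres(A, b, M=M)               # info == 0 indicates convergence
print(info, np.linalg.norm(A @ x - b))
```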

The August 2022 special election for U.S. House Representative in Alaska featured three main candidates and was conducted by the single-winner ranked choice voting method known as ``instant runoff voting''. The results of this election displayed a well-known but relatively rare phenomenon known as the ``center squeeze'': The most centrist candidate, Mark Begich, was eliminated in the first round despite winning an overwhelming majority of second-place votes. In fact, Begich was the {\em Condorcet winner} of this election: Based on the cast vote record, he would have defeated both of the other two candidates in head-to-head contests, but he was eliminated in the first round of ballot counting due to receiving the fewest first-place votes. The purpose of this paper is to use the data in the cast vote record to explore the range of likely outcomes if this election had been conducted under two alternative voting methods: Approval Voting and STAR (``Score Then Automatic Runoff'') Voting. We find that under the best assumptions available about voter behavior, the most likely outcomes are that Peltola would still have won the election under Approval Voting, while Begich would have won under STAR Voting.
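
For readers who want to reproduce the kind of tabulation involved, the sketch below computes an instant-runoff winner and checks for a Condorcet winner from ranked ballots. The ballot counts are invented to reproduce a center-squeeze pattern and are not the Alaska cast vote record; Approval and STAR tabulations additionally require the behavioral assumptions discussed in the paper and are omitted.

```python
from collections import Counter

ballots = ([("Begich", "Peltola", "Palin")] * 4
           + [("Peltola", "Begich", "Palin")] * 6
           + [("Palin", "Begich", "Peltola")] * 5)       # illustrative counts only

def irv_winner(ballots):
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        top, votes = tally.most_common(1)[0]
        if 2 * votes > len(ballots):
            return top
        remaining.discard(min(tally, key=tally.get))     # eliminate the last-place candidate

def condorcet_winner(ballots):
    cands = {c for b in ballots for c in b}
    for c in cands:
        if all(2 * sum(b.index(c) < b.index(d) for b in ballots) > len(ballots)
               for d in cands - {c}):
            return c
    return None

print(irv_winner(ballots), condorcet_winner(ballots))    # Peltola vs. Begich: the center squeeze
```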

Many problems arising in control require the determination of a mathematical model of the application. This often has to be performed starting from input-output data, leading to a task known as system identification in the engineering literature. One emerging topic in this field is the estimation of networks consisting of several interconnected dynamic systems. We consider the linear setting, assuming that system outputs are the result of many correlated inputs, which makes system identification severely ill-conditioned. This is a scenario often encountered when modeling complex cybernetic systems composed of many sub-units with feedback and algebraic loops. We develop a strategy cast in a Bayesian regularization framework where each impulse response is seen as the realization of a zero-mean Gaussian process. Each covariance is defined by the so-called stable spline kernel, which encodes information on smooth exponential decay. We design a novel Markov chain Monte Carlo scheme able to reconstruct the posterior of the impulse responses by efficiently dealing with collinearity. Our scheme relies on a variation of the Gibbs sampling technique: beyond considering blocks forming a partition of the parameter space, some other (overlapping) blocks are also updated on the basis of the level of collinearity of the system inputs. Theoretical properties of the algorithm are studied, and its convergence rate is obtained. Numerical experiments are included, using systems containing hundreds of impulse responses and highly correlated inputs.
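
To fix ideas on the prior, the snippet below builds a first-order stable spline (TC-type) covariance $K(i,j)=\lambda\,\alpha^{\max(i,j)}$ and draws impulse responses from the corresponding zero-mean Gaussian prior; the exact kernel variant and hyperparameters used in the paper may differ, and the MCMC scheme itself is not reproduced here.

```python
import numpy as np

def stable_spline_cov(n, alpha=0.9, lam=1.0):
    """First-order stable spline (TC) kernel: smooth, exponentially decaying samples."""
    idx = np.arange(1, n + 1)
    return lam * alpha ** np.maximum.outer(idx, idx)

n = 50
K = stable_spline_cov(n) + 1e-10 * np.eye(n)              # jitter for numerical stability
rng = np.random.default_rng(0)
draws = rng.multivariate_normal(np.zeros(n), K, size=5)   # prior realizations of impulse responses
```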

In this work, we focus on the Neumann-Neumann method (NNM), which is one of the most popular non-overlapping domain decomposition methods. Even though the NNM is widely used and proves itself very efficient when applied to discrete problems in practical applications, it is in general not well defined at the continuous level when the geometric decomposition involves cross-points. Our goals are to investigate this well-posedness issue and to provide a complete analysis of the method at the continuous level, when applied to a simple elliptic problem on a configuration involving one cross-point. More specifically, we prove that the algorithm generates solutions that are singular near the cross-points. We also exhibit the type of singularity introduced by the method, and show how it propagates through the iterations. Then, based on this analysis, we design a new set of transmission conditions that makes the new NNM geometrically convergent for this simple configuration. Finally, we illustrate our results with numerical experiments.
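
For orientation only, the following sketch runs the classical Neumann-Neumann iteration on the 1D model problem $-u''=1$ on $(0,1)$ with homogeneous Dirichlet data and an interface at $x=1/2$ (a configuration with no cross-points, so it does not exhibit the singularities analyzed in the paper). The local Dirichlet and Neumann solves are evaluated analytically for this simple case.

```python
# Neumann-Neumann iteration for -u'' = 1 on (0,1), u(0) = u(1) = 0,
# two subdomains separated at a = 1/2; the exact interface value is u(1/2) = 1/8.
a, theta = 0.5, 0.25      # interface location and relaxation parameter
lam = 0.0                 # initial guess for the interface trace
for k in range(5):
    # Dirichlet step: local solves with trace lam give these outward normal fluxes
    flux1 = 2 * lam - 0.25            # du1/dn1 at x = a (computed analytically)
    flux2 = 2 * lam - 0.25            # du2/dn2 at x = a (symmetric subdomain)
    r = flux1 + flux2                 # flux jump across the interface
    # Neumann step: harmonic correctors with Neumann data r have traces r*a and r*(1-a)
    lam -= theta * (r * a + r * (1 - a))
    print(k, lam)                     # converges to 0.125 (in one step here)
```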

Binary duadic codes are an interesting subclass of cyclic codes since they have large dimensions and their minimum distances may have a square-root bound. In this paper, we present several families of binary duadic codes of length $2^m-1$ and develop some lower bounds on their minimum distances by using the BCH bound on cyclic codes, which partially solves one case of the open problem proposed in \cite{LLD}. It is shown that the lower bounds on their minimum distances are close to the square-root bound. Moreover, the parameters of the dual and extended codes of these binary duadic codes are investigated.
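
The sketch below shows the basic computations involved: the 2-cyclotomic cosets modulo $n=2^m-1$ (unions of which serve as defining sets of binary cyclic codes) and the BCH bound obtained from the longest run of consecutive exponents in a defining set. The particular union of cosets chosen at the end is only illustrative, not a duadic splitting from the paper.

```python
def cyclotomic_cosets(n, q=2):
    """q-cyclotomic cosets modulo n (n odd)."""
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:
            coset.append(x)
            x = (x * q) % n
        cosets.append(sorted(coset))
        seen.update(coset)
    return cosets

def bch_bound(defining_set, n):
    """BCH bound: one plus the longest cyclic run of consecutive exponents in the defining set."""
    T = set(defining_set)
    best = 0
    for start in range(n):
        run = 0
        while (start + run) % n in T and run < n:
            run += 1
        best = max(best, run)
    return best + 1

m = 5
n = 2**m - 1                                    # length 31
cosets = cyclotomic_cosets(n)
T = sorted(set().union(*cosets[1:4]))           # an illustrative union of cosets
print(n, T, bch_bound(T, n))                    # BCH bound 7 for this defining set
```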

We investigate how to efficiently compute the difference of the results of two (or multiple) conjunctive queries, which is the last operator in relational algebra to be unraveled. The standard approach in practical database systems is to materialize the result of every input query as a separate set, and then compute the difference of two (or multiple) sets. This approach is bottlenecked by the complexity of evaluating every input query individually, which could be very expensive, particularly when there are only a few results in the difference. In this paper, we introduce a new approach that exploits the structural properties of the input queries and rewrites the original query by pushing the difference operator down as far as possible. We show that for a large class of difference queries, this approach leads to a linear-time algorithm in terms of the input size and (final) output size, i.e., the number of query results that survive the difference operator. We complement this result by showing the hardness of computing the remaining difference queries in linear time. Although a linear-time algorithm is hard to achieve in general, we also provide some heuristics that provably improve the standard approach. Finally, we compare our approach with standard SQL engines over graph and benchmark datasets. The experimental results demonstrate order-of-magnitude speedups achieved by our approach over vanilla SQL.
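
To make the contrast with full materialization concrete, here is a toy Python sketch for the difference of two conjunctive queries $Q_1(x,z) = R(x,y), S_1(y,z)$ and $Q_2(x,z) = R(x,y), S_2(y,z)$: instead of materializing $Q_2$ as a set, each candidate answer of $Q_1$ is checked against hash indexes on $Q_2$'s body. The relations and query shapes are invented for illustration, and the sketch does not implement the paper's rewriting rules or its linear-time guarantees.

```python
from collections import defaultdict

R  = [(1, "a"), (2, "a"), (3, "b")]      # toy relations
S1 = [("a", 10), ("b", 20)]
S2 = [("a", 10)]

# Hash indexes over Q2's body, keyed on the join variables.
R_by_x = defaultdict(set)
for x, y in R:
    R_by_x[x].add(y)
S2_by_y = defaultdict(set)
for y, z in S2:
    S2_by_y[y].add(z)

def in_q2(x, z):
    """Membership test for Q2(x, z) = exists y: R(x, y) and S2(y, z)."""
    return any(z in S2_by_y[y] for y in R_by_x[x])

# Enumerate Q1 and keep only the answers that fail the Q2 membership test.
diff = {(x, z) for x, y in R for yy, z in S1 if yy == y and not in_q2(x, z)}
print(sorted(diff))                       # [(3, 20)]
```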

We consider the problem of iteratively solving large and sparse double saddle-point systems arising from the stationary Stokes-Darcy equations in two dimensions, discretized by the Marker-and-Cell (MAC) finite difference method. We analyze the eigenvalue distribution of a few ideal block preconditioners. We then derive practical preconditioners that are based on approximations of Schur complements that arise in a block decomposition of the double saddle-point matrix. We show that including the interface conditions in the preconditioners is key in the pursuit of scalability. Numerical results show good convergence behavior of our preconditioned GMRES solver and demonstrate robustness of the proposed preconditioner with respect to the physical parameters of the problem.
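
The snippet below assembles a small synthetic double saddle-point matrix and an ``ideal'' block-diagonal preconditioner built from exact Schur complements, applied within SciPy's GMRES. It only illustrates the algebraic structure; the MAC discretization of Stokes-Darcy, the interface terms and the practical Schur-complement approximations of the paper are not reproduced.

```python
import numpy as np
from scipy.linalg import solve
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n1, n2, n3 = 40, 20, 10
G = rng.standard_normal((n1, n1))
A = np.eye(n1) + 0.1 * (G @ G.T)                      # SPD leading block
B = rng.standard_normal((n2, n1))
C = rng.standard_normal((n3, n2))

K = np.block([[A, B.T, np.zeros((n1, n3))],
              [B, np.zeros((n2, n2)), C.T],
              [np.zeros((n3, n1)), C, np.zeros((n3, n3))]])

S1 = B @ solve(A, B.T)                                # first Schur complement
S2 = C @ solve(S1, C.T)                               # second Schur complement

def apply_prec(r):
    r1, r2, r3 = np.split(r, [n1, n1 + n2])
    return np.concatenate([solve(A, r1), solve(S1, r2), solve(S2, r3)])

M = LinearOperator(K.shape, matvec=apply_prec)
b = np.ones(K.shape[0])
x, info = gmres(K, b, M=M)
print(info, np.linalg.norm(K @ x - b))                # info == 0 indicates convergence
```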

The investigation of fluid-solid systems is very important in many industrial processes. From a computational point of view, the simulation of such systems is very expensive, especially when a huge number of parametric configurations needs to be studied. In this context, we develop a non-intrusive data-driven reduced order model (ROM) built using the proper orthogonal decomposition with interpolation (PODI) method for Computational Fluid Dynamics (CFD) -- Discrete Element Method (DEM) simulations. The main novelties of the proposed approach lie in (i) the combination of ROM and Finite Volume (FV) methods, (ii) a numerical sensitivity analysis of the ROM accuracy with respect to the number of POD modes and to the cardinality of the training set, and (iii) a parametric study with respect to the Stokes number. We test our ROM on the fluidized bed benchmark problem. The accuracy of the ROM is assessed against results obtained with the full order model (FOM), both for Eulerian (the fluid volume fraction) and Lagrangian (position and velocity of the particles) quantities. We also discuss the efficiency of our ROM approach.
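
As a minimal, self-contained illustration of the PODI idea (POD compression of training snapshots followed by interpolation of the modal coefficients over the parameter), the sketch below uses a synthetic 1D snapshot generator in place of CFD-DEM data and a single scalar parameter standing in for, e.g., the Stokes number.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def snapshot(mu, x):
    """Synthetic parametric field used as a stand-in for a full-order solve."""
    return np.exp(-mu * x) * np.sin(np.pi * x)

x = np.linspace(0.0, 1.0, 200)
train_mu = np.linspace(0.5, 3.0, 12)
S = np.column_stack([snapshot(mu, x) for mu in train_mu])   # snapshot matrix

U, sig, _ = np.linalg.svd(S, full_matrices=False)
r = 4                                           # number of retained POD modes
coeffs = U[:, :r].T @ S                         # training modal coefficients

interp = RBFInterpolator(train_mu[:, None], coeffs.T)       # coefficient interpolant
def podi_predict(mu):
    return U[:, :r] @ interp(np.array([[mu]]))[0]

mu_test = 1.7
err = (np.linalg.norm(podi_predict(mu_test) - snapshot(mu_test, x))
       / np.linalg.norm(snapshot(mu_test, x)))
print(err)                                      # relative error of the PODI prediction
```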

Models with high-dimensional parameter spaces are common in many applications. Global sensitivity analyses can provide insights on how uncertain inputs and their interactions influence the outputs. Many sensitivity analysis methods face nontrivial challenges for computationally demanding models. Common approaches to tackle these challenges are to (i) use a computationally efficient emulator and (ii) sample adaptively. However, these approaches still involve potentially large computational costs and approximation errors. Here we compare the results and computational costs of four existing global sensitivity analysis methods applied to a test problem. We sample different model evaluation times and numbers of model parameters. We find that the emulation and adaptive sampling approaches are faster than the Sobol' method for slow models. The Bayesian adaptive spline surface method is the fastest for most slow and high-dimensional models. Our results can guide the choice of a sensitivity analysis method under computational resource constraints.
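
For reference, first-order Sobol' indices can be estimated with a short pick-freeze (Saltelli-type) Monte Carlo scheme, as in the plain-NumPy sketch below on the Ishigami test function; this is a generic estimator and test problem, not one of the four methods or the benchmark studied in the paper.

```python
import numpy as np

def model(X):                        # Ishigami function, X has shape (N, 3)
    x1, x2, x3 = X.T
    return np.sin(x1) + 7 * np.sin(x2) ** 2 + 0.1 * x3 ** 4 * np.sin(x1)

rng = np.random.default_rng(0)
N, d = 20_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S1 = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # vary only input i between the two sample sets
    S1[i] = np.mean(fB * (model(ABi) - fA)) / var
print(S1)                            # roughly [0.31, 0.44, 0.0] for this function
```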
