
We present in this paper the results of research motivated by the need for a very fast solution of the thermal flow in solar receivers. These receivers are composed of a large number of parallel pipes with the same geometry. We have introduced a reduced Schwarz algorithm that skips the computation in a large part of the pipes. The computation of the temperature in the skipped domain is replaced by a reduced mapping that provides the transmission conditions. This reduced mapping is computed in an off-line stage. We have performed an error analysis of the reduced Schwarz algorithm, proving that the error is bounded in terms of the linearly decreasing error of the standard Schwarz algorithm, plus the error stemming from the reduction of the trace mapping. The latter error is asymptotically dominant in the Schwarz iterative process. We obtain $L^2$ errors below $2\%$ with relatively small overlapping lengths.
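
The skipped-domain idea builds on the classical overlapping Schwarz iteration. As a minimal sketch of that underlying iteration only (a 1D Poisson toy with closed-form subdomain solves; this is not the paper's solar-receiver model or its reduced trace mapping):

```python
# Overlapping Schwarz for -u'' = 1 on (0,1), u(0) = u(1) = 0, with two
# subdomains (0, beta) and (gamma, 1), gamma < beta (overlap = beta - gamma).
# Exact solution: u(x) = x(1-x)/2.  Subdomain solves are closed-form quadratics.

def solve_left(lam, beta, x):
    # -u'' = 1 on (0, beta), u(0) = 0, u(beta) = lam; evaluate at x
    c = (lam + beta**2 / 2) / beta
    return -x**2 / 2 + c * x

def solve_right(lam, gamma, x):
    # -u'' = 1 on (gamma, 1), u(gamma) = lam, u(1) = 0; evaluate at x
    a = (lam + gamma**2 / 2 - 0.5) / (gamma - 1.0)
    b = 0.5 - a
    return -x**2 / 2 + a * x + b

def schwarz(beta=0.6, gamma=0.4, sweeps=30):
    lam1 = 0.0                                   # guess for u at x = beta
    for _ in range(sweeps):
        lam2 = solve_left(lam1, beta, gamma)     # trace passed to the right
        lam1 = solve_right(lam2, gamma, beta)    # trace passed back
    return lam1, lam2

lam1, lam2 = schwarz()
print(abs(lam2 - 0.4 * 0.6 / 2))   # interface error; contracts by (4/9) per sweep here
```

The interface errors propagate as harmonic (here: linear) functions, which is what makes the per-sweep contraction factor depend on the overlap length, as in the abstract's error analysis.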

Related content

This paper is focused on the approximation of the Euler equations of compressible fluid dynamics on a staggered mesh. With this aim, the flow parameters are described by the velocity, the density and the internal energy. The thermodynamic quantities are described on the elements of the mesh, and thus the approximation is only in $L^2$, while the kinematic quantities are globally continuous. The method is general in the sense that the thermodynamic and kinematic parameters are described by polynomials of arbitrary degree. In practice, the difference between the degrees of the kinematic parameters and the thermodynamic ones is set to $1$. The integration in time is done using the forward Euler method but can be extended straightforwardly to higher-order methods. In order to guarantee that the limit solution will be a weak solution of the problem, we introduce a general correction method in the spirit of the Lagrangian staggered method described in \cite{Svetlana,MR4059382, MR3023731}, and we prove a Lax-Wendroff theorem. The proof is valid for multidimensional versions of the scheme, even though most of the numerical illustrations in this work, on classical benchmark problems, are one-dimensional because we have easy access to the exact solution for comparison. We conclude by explaining that the method is general and can be used in different settings, for example Finite Volume or discontinuous Galerkin methods, not just the specific one presented in this paper.
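
To give a flavor of a staggered arrangement, here is a minimal sketch on linear acoustics rather than the full Euler system (cell-centered thermodynamic variable, nodal velocity). It is purely illustrative, not the authors' scheme or correction method; the velocity is updated first and then used in the cell update, which makes the Euler-type pair stable for small CFL numbers:

```python
import numpy as np

# Staggered storage: "thermodynamic" variable p on cell centers, velocity u
# on nodes.  Linear acoustics: p_t + K u_x = 0, u_t + p_x / rho = 0,
# with wall boundaries u = 0 at both ends.
N, K, rho = 200, 1.0, 1.0
dx = 1.0 / N
x_cells = (np.arange(N) + 0.5) * dx
p = np.exp(-200 * (x_cells - 0.5) ** 2)    # initial pressure pulse
u = np.zeros(N + 1)                        # nodal velocities
c = np.sqrt(K / rho)
dt = 0.2 * dx / c                          # small CFL number
mass0 = p.sum()                            # telescoping sum is conserved
for _ in range(100):
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])  # nodal momentum update
    p -= dt * K / dx * (u[1:] - u[:-1])            # cell update (new u)
```

Because the cell update telescopes and the wall velocities stay zero, the discrete integral of `p` is conserved exactly, a toy analogue of the conservation the correction method is designed to guarantee.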

Designing efficient and rigorous numerical methods for sequential decision-making under uncertainty is a difficult problem that arises in many application frameworks. In this paper we focus on the numerical solution of a subclass of impulse control problems for piecewise deterministic Markov processes (PDMPs) when the jump times are hidden. We first state the problem as a partially observed Markov decision process (POMDP) on a continuous state space and with controlled transition kernels corresponding to some specific skeleton chains of the PDMP. Then we proceed to build a numerically tractable approximation of the POMDP by tailor-made discretizations of the state spaces. The main difficulty in evaluating the discretization error comes from the possible random or boundary jumps of the PDMP between consecutive epochs of the POMDP and requires special care. Finally we extensively discuss the practical construction of discretization grids and illustrate our method on simulations.
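
As a generic illustration of making a continuous-state decision problem tractable by grid discretization (nearest-neighbor projection of sampled next states, then value iteration on the quantized chain; the toy dynamics below are a placeholder, not a PDMP with hidden jump times):

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 21)          # discretization grid on [0, 1]
actions = [0.0, 0.1]                      # toy action set (drift control)

def step(x, a):
    # placeholder noisy dynamics (illustrative only)
    return np.clip(x + a - 0.05 + 0.05 * rng.standard_normal(), 0.0, 1.0)

def nearest(x):
    # project continuous states onto their nearest grid points
    return np.abs(grid - x[..., None]).argmin(axis=-1)

# Monte-Carlo transition kernels on the grid
n, M = len(grid), 500
P = np.zeros((len(actions), n, n))
for ai, a in enumerate(actions):
    for i, g in enumerate(grid):
        nxt = np.array([step(g, a) for _ in range(M)])
        np.add.at(P[ai, i], nearest(nxt), 1.0 / M)

reward = -np.abs(grid - 0.7)              # prefer staying near x = 0.7
V = np.zeros(n)
for _ in range(200):                      # value iteration, discount 0.95
    V = np.max(reward + 0.95 * P @ V, axis=0)
```

The discretization error the abstract analyzes is exactly the gap between the value function of such a quantized chain and that of the original continuous-state process.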

In an earlier paper (https://doi.org/10.1137/21M1393315), the Switch Point Algorithm was developed for solving optimal control problems whose solutions are either singular or bang-bang or both singular and bang-bang, and which possess a finite number of jump discontinuities in an optimal control at the points in time where the solution structure changes. The class of control problems that were considered had a given initial condition, but no terminal constraint. The theory is now extended to include problems with both initial and terminal constraints, a structure that often arises in boundary-value problems. Substantial changes to the theory are needed to handle this more general setting. Nonetheless, the derivative of the cost with respect to a switch point is again the jump in the Hamiltonian at the switch point.
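
Schematically, in generic optimal-control notation, the sensitivity result reads (the precise statement, including the multipliers attached to the terminal constraints, is in the paper):

```latex
\frac{dC}{ds_k}
  = H\bigl(x(s_k),\lambda(s_k),u(s_k^-)\bigr)
  - H\bigl(x(s_k),\lambda(s_k),u(s_k^+)\bigr),
```

where $H$ is the Hamiltonian, $x$ the state, $\lambda$ the costate, and $u$ the control, which jumps at the switch point $s_k$.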

Variable independence and decomposability are algorithmic techniques for simplifying logical formulas by tearing apart connections between free variables. These techniques were originally proposed to speed up query evaluation in constraint databases, in particular by representing the query as a Boolean combination of formulas with no interconnected variables. They also have many other applications in SMT, string analysis, databases, automata theory and other areas. However, the precise complexity of variable independence and decomposability has been left open, especially for the quantifier-free theory of linear real arithmetic (LRA), which is central in database applications. We introduce a novel characterization of formulas admitting decompositions and use it to show that it is coNP-complete to decide variable decomposability over LRA. As a corollary, we obtain that deciding variable independence is in $ \Sigma_2^p $. These results substantially improve the best known double-exponential time algorithms for variable decomposability and independence. In many practical applications, it is crucial to be able to efficiently eliminate connections between variables whenever possible. We design and implement an algorithm for this problem, which is optimal in theory, exponentially faster than the current state-of-the-art algorithm and efficient on various microbenchmarks. In particular, our algorithm is the first one to overcome a fundamental barrier between non-discrete and discrete first-order theories. Formulas arising in practice often have few or even no free variables that are perfectly independent. In this case, our algorithm can compute a best-possible approximation of a decomposition, which can be used to optimize database queries by exploiting the partial variable independence present in almost every logical formula or database query constraint.
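
A tiny brute-force illustration of the simplest (single-conjunct) case: on a finite sample grid, $\varphi(x, y)$ splits as $g(x) \wedge h(y)$ iff its truth set equals the Cartesian product of its projections. This is only a finite-sample check, not the paper's coNP decision procedure for LRA:

```python
import numpy as np

def product_of_projections(F):
    """F[i, j] = truth of phi(x_i, y_j) on a sample grid.

    On the samples, phi splits as g(x) & h(y) iff its truth set equals
    the Cartesian product of its x- and y-projections.
    """
    gx = F.any(axis=1)          # projection onto x samples
    hy = F.any(axis=0)          # projection onto y samples
    return np.array_equal(F, np.outer(gx, hy))

xs = ys = np.arange(3)
X, Y = np.meshgrid(xs, ys, indexing="ij")
print(product_of_projections((X < 1) & (Y < 1)))   # -> True  (decomposable)
print(product_of_projections(X + Y < 2))           # -> False (x and y entangled)
```

The second formula is the classic non-example: `x + y < 2` cannot be torn into independent constraints on `x` and on `y`.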

When considered as a standalone iterative solver for elliptic boundary value problems, the Dirichlet-Neumann (DN) method is known to converge geometrically for domain decompositions into strips, even for a large number of subdomains. However, whenever the domain decomposition includes cross-points, i.e. points where more than two subdomains meet, the convergence proof does not hold anymore as the method generates subproblems that might not be well-posed. Focusing on a simple two-dimensional example involving one cross-point, we proposed in a previous work a decomposition of the solution into two parts: an even symmetric part and an odd symmetric part. Based on this decomposition, we proved that the DN method was geometrically convergent for the even symmetric part and that it was not well-posed for the odd symmetric part. Here, we introduce a new variant of the DN method which generates subproblems that remain well-posed for the odd symmetric part as well. Taking advantage of the symmetry properties of the domain decomposition considered, we manage to prove that our new method converges geometrically in the presence of cross-points. We also extend our results to the three-dimensional case, and present numerical experiments that illustrate our theoretical findings.
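
For readers unfamiliar with the plain DN iteration, here is a minimal 1D sketch with two subdomains and no cross-point (closed-form solves, relaxation $\theta = 1/2$); the paper's setting, with cross-points in 2D and 3D, is of course much richer:

```python
# -u'' = 1 on (0,1), u(0) = u(1) = 0, interface at alpha.
# Exact solution: u(x) = x(1-x)/2.  One DN sweep = Dirichlet solve on the
# left, Neumann solve on the right, relaxed update of the interface trace.
alpha, theta = 0.5, 0.5
lam = 0.0                                  # guess for u(alpha)
for _ in range(20):
    # left: -u'' = 1, u(0) = 0, u(alpha) = lam  ->  flux u'(alpha^-)
    c = (lam + alpha**2 / 2) / alpha
    flux = -alpha + c
    # right: -u'' = 1, u'(alpha) = flux, u(1) = 0  ->  trace u(alpha^+)
    a = flux + alpha
    b = 0.5 - a
    trace = -alpha**2 / 2 + a * alpha + b
    lam = theta * trace + (1 - theta) * lam
print(abs(lam - alpha * (1 - alpha) / 2))  # -> 0.0 for this symmetric split
```

For the symmetric split $\alpha = 1/2$ with $\theta = 1/2$ the iteration converges in a single sweep, a well-known special property of the DN method; the point of the abstract is what happens when this clean picture is broken by cross-points.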

We consider gradient-related methods for low-rank matrix optimization with a smooth cost function. The methods operate on single factors of the low-rank factorization and share aspects of both alternating and Riemannian optimization. Two possible choices for the search directions based on Gauss-Southwell type selection rules are compared: one using the gradient of a factorized non-convex formulation, the other using the Riemannian gradient. While both methods provide gradient convergence guarantees that are similar to the unconstrained case, numerical experiments on a quadratic cost function indicate that the version based on the Riemannian gradient is significantly more robust with respect to small singular values and the condition number of the cost function. As a side result of our approach, we also obtain new convergence results for the alternating least squares method.
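
For context, the alternating least squares baseline mentioned at the end can be sketched in a few lines on $\min_{L,R} \|A - LR^\top\|_F^2$ (a generic sketch; the Gauss-Southwell selection rules and the Riemannian-gradient variant studied in the paper are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 30, 20, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r target
L = rng.standard_normal((m, r))
R = rng.standard_normal((n, r))
for _ in range(20):
    # alternate exact minimization in each factor (least-squares solves)
    L = np.linalg.lstsq(R, A.T, rcond=None)[0].T   # argmin_L ||A - L R^T||
    R = np.linalg.lstsq(L, A, rcond=None)[0].T     # argmin_R ||A - L R^T||
err = np.linalg.norm(A - L @ R.T) / np.linalg.norm(A)
```

On an exactly rank-$r$ target this converges essentially immediately; the abstract's point is how gradient-related factor updates behave when small singular values and ill-conditioned cost functions make the problem genuinely hard.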

In this paper we derive tight lower bounds resolving the hardness status of several fundamental weighted matroid problems. One notable example is budgeted matroid independent set, for which we show there is no fully polynomial-time approximation scheme (FPTAS), indicating the Efficient PTAS of [Doron-Arad, Kulik and Shachnai, SOSA 2023] is the best possible. Furthermore, we show that there is no pseudo-polynomial time algorithm for exact-weight matroid independent set, implying the algorithm of [Camerini, Galbiati and Maffioli, J. Algorithms 1992] for representable matroids cannot be generalized to arbitrary matroids. Similarly, we show there is no FPTAS for constrained minimum basis of a matroid and knapsack cover with a matroid, implying the existing Efficient PTAS for the former is optimal. For all of the above problems, we obtain unconditional lower bounds in the oracle model, where the independent sets of the matroid can be accessed only via a membership oracle. We complement these results by showing that the same lower bounds hold under standard complexity assumptions, even if the matroid is encoded as part of the instance. All of our bounds are based on a specifically structured family of paving matroids.

In this paper, we extend the Generalized Finite Difference Method (GFDM) to unknown compact submanifolds of Euclidean space, identified by randomly sampled data that (almost surely) lie in the interior of the manifolds. Theoretically, we formalize GFDM by exploiting a representation of smooth functions on the manifolds with Taylor expansions of polynomials defined on the tangent bundles. We illustrate the approach by approximating the Laplace-Beltrami operator, where a stable approximation is achieved by combining the Generalized Moving Least-Squares algorithm with a novel linear program that relaxes the diagonal-dominance constraint on the estimator to allow for a feasible solution even when higher-order polynomials are employed. We establish the theoretical convergence of GFDM in solving Poisson PDEs and numerically demonstrate the accuracy on simple smooth manifolds of low and moderately high co-dimension as well as on unknown 2D surfaces. For the Dirichlet Poisson problem, where no data points on the boundaries are available, we employ GFDM with the volume-constraint approach, which imposes the boundary conditions on data points close to the boundary. When the location of the boundary is unknown, we introduce a novel technique to detect points close to the boundary without needing to estimate the distance of the sampled data points to the boundary. We demonstrate the effectiveness of the volume constraint imposed on the data points detected by this new technique, compared to imposing the boundary conditions on all points within a certain distance from the boundary; the latter is sensitive to the choice of truncation distance and requires knowledge of the boundary location.
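
The flat (Euclidean) skeleton of a generalized finite difference is easy to sketch: fit a local quadratic to scattered neighbors by least squares and read off the Laplacian from the second-order coefficients. The manifold version in the paper additionally works in tangent coordinates with GMLS and a linear-programming stabilization, none of which appears in this toy:

```python
import numpy as np

rng = np.random.default_rng(2)

def gfd_laplacian(center, nbrs, f_center, f_nbrs):
    # Least-squares fit of f(center + d) - f(center)
    #   ~ a dx + b dy + c dx^2/2 + d dx dy + e dy^2/2
    # over the scattered neighbors; the discrete Laplacian is c + e.
    d = nbrs - center
    V = np.column_stack([d[:, 0], d[:, 1],
                         d[:, 0]**2 / 2, d[:, 0] * d[:, 1], d[:, 1]**2 / 2])
    coef, *_ = np.linalg.lstsq(V, f_nbrs - f_center, rcond=None)
    return coef[2] + coef[4]

center = np.array([0.3, 0.4])
nbrs = center + 0.05 * rng.standard_normal((12, 2))   # scattered stencil
f = lambda p: p[..., 0]**2 + p[..., 1]**2             # Laplacian = 4
lap = gfd_laplacian(center, nbrs, f(center), f(nbrs))
print(lap)   # ~4.0 (exact for quadratic f, up to roundoff)
```

Because the basis spans all quadratics, the fit reproduces quadratic functions exactly; the stability question the paper addresses arises when higher-order polynomials and curved geometry enter.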

We present a combination technique, based on mixed differences of both spatial approximations and quadrature formulae for the stochastic variables, to efficiently solve a class of Optimal Control Problems (OCPs) constrained by random partial differential equations. The method requires solving the OCP for several low-fidelity spatial grids and quadrature formulae for the objective functional. All the computed solutions are then linearly combined to obtain a final approximation which, under suitable regularity assumptions, preserves the accuracy of fine tensor-product approximations while drastically reducing the computational cost. The combination technique involves only tensor-product quadrature formulae, so the discretized OCPs preserve the convexity of the continuous OCP. Hence, the combination technique avoids the inconveniences of Multilevel Monte Carlo and/or sparse-grid approaches, but remains suitable for high-dimensional problems. The manuscript presents an a priori procedure to choose the most important mixed differences and an asymptotic complexity analysis, which shows that the complexity is determined exclusively by the spatial solver. Numerical experiments validate the results.
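
The classical two-dimensional combination identity, $u^c_L=\sum_{i+j=L}A_{ij}-\sum_{i+j=L-1}A_{ij}$, is easy to demonstrate on tensor-product quadrature (a generic sparse-grid illustration; the paper's version combines spatial grids with stochastic quadrature for the OCP):

```python
import numpy as np

def trap2(f, ni, nj):
    # tensor-product trapezoid rule on [0,1]^2 with 2^ni x 2^nj cells
    x = np.linspace(0, 1, 2**ni + 1)
    y = np.linspace(0, 1, 2**nj + 1)
    wx = np.full(x.size, 1.0 / 2**ni); wx[[0, -1]] /= 2
    wy = np.full(y.size, 1.0 / 2**nj); wy[[0, -1]] /= 2
    return wx @ f(x[:, None], y[None, :]) @ wy

def combination(f, L):
    # combination formula: sum_{i+j=L} A_ij - sum_{i+j=L-1} A_ij
    tot = sum(trap2(f, i, L - i) for i in range(L + 1))
    tot -= sum(trap2(f, i, L - 1 - i) for i in range(L))
    return tot

f = lambda x, y: np.exp(x + y)
exact = (np.e - 1.0) ** 2
err_comb = abs(combination(f, 8) - exact)   # anisotropic grids, far fewer points
err_full = abs(trap2(f, 8, 8) - exact)      # full fine tensor grid
```

The combined result matches the accuracy of the full fine tensor grid (up to a logarithmic factor) while only ever evaluating anisotropic low-fidelity grids, which is the cost-saving mechanism the abstract describes.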

When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.
