
We propose a new geometrically unfitted finite element method based on discontinuous Trefftz ansatz spaces. Trefftz methods reduce the number of degrees of freedom in discontinuous Galerkin methods and thereby significantly lower the cost of solving the arising linear systems. This work shows that they are also an excellent way to reduce the number of degrees of freedom in an unfitted setting. We present a unified analysis of a class of geometrically unfitted discontinuous Galerkin methods with different stabilisation mechanisms to deal with small cuts between the geometry and the mesh. We prove stability and derive a priori error bounds, including the errors arising from the geometry approximation, for this class of discretisations of a model Poisson problem in a unified manner. The analysis covers Trefftz and full polynomial ansatz spaces alike. Numerical examples validate the theoretical findings and demonstrate the potential of the approach.


We present Surjective Sequential Neural Likelihood (SSNL) estimation, a novel method for simulation-based inference in models where the likelihood function cannot be evaluated and only a simulator that generates synthetic data is available. SSNL fits a dimensionality-reducing surjective normalizing flow model and uses it as a surrogate likelihood function, which allows for conventional Bayesian inference using either Markov chain Monte Carlo methods or variational inference. By embedding the data in a low-dimensional space, SSNL resolves several issues that previous likelihood-based methods faced on high-dimensional data sets that, for instance, contain non-informative data dimensions or lie along a lower-dimensional manifold. We evaluate SSNL on a wide variety of experiments and show that it generally outperforms contemporary methods used in simulation-based inference, for instance on a challenging real-world example from astrophysics that models the magnetic field strength of the Sun using a solar dynamo model.

Global seismicity on all three solar-system bodies with in situ measurements (Earth, Moon, and Mars) is due mainly to mechanical Rieger resonance (RR) of the solar wind's macroscopic flapping, driven by the well-known PRg ≈ 154-day Rieger period and detected commonly in most heliophysical data types and the interplanetary magnetic field (IMF). Thus, InSight mission marsquake rates are periodic with PRg, as characterized by a very high (>>12) fidelity Φ = 2.8×10^6 and by being the only >99%-significant spectral peak in the 385.8-64.3-nHz (1-180-day) band of highest planetary energies; the longest-span (v.9) release of raw data revealed the entire RR, ruling out a tectonically active Mars. As a check, I analyze rates of Oct 2015-Feb 2019, Mw5.6+ earthquakes, and all (1969-1977) Apollo mission moonquakes. To decouple magnetosphere and IMF effects, I study Earth and Moon seismicity during traversals of the Earth magnetotail vs. the IMF. The analysis shows with >99-67% confidence and Φ >> 12 fidelity that (an unspecified majority of) moonquakes and Mw5.6+ earthquakes also recur at Rieger periods. About half of the spectral peaks split, but into clusters that average to the usual Rieger periodicities, and magnetotail reconnection clears the signal. Moonquakes are mostly forced at times of solar-wind resonance and not just during tides, as previously and simplistically believed. Earlier claims that solar plasma dynamics could be seismogenic are confirmed. This result calls for reinterpreting the seismicity phenomenon and for reliance on global magnitude scales. Predictability of solar-wind macroscopic dynamics is now within reach, which paves the way for long-term physics-based seismic and space-weather prediction and for the safety of space missions. Gauss-Vaníček spectral analysis revolutionizes geophysics by computing nonlinear global dynamics directly (rendering the approximating of dynamics obsolete).
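
The spectral machinery invoked here, Gauss-Vaníček (least-squares) spectral analysis, fits sine-cosine pairs directly to unevenly sampled records instead of interpolating them for an FFT. A minimal sketch of that idea (function name, trial-frequency grid and the variance-fraction normalisation are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def lssa_power(t, y, freqs):
    """Least-squares spectrum for arbitrarily sampled data: at each trial
    frequency, fit a sine-cosine pair by least squares and report the
    fraction of the (demeaned) signal variance it explains."""
    y = y - y.mean()
    power = []
    for f in freqs:
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        power.append(((A @ coef) ** 2).sum() / (y ** 2).sum())
    return np.array(power)
```

Because the fit is done per frequency on the raw sampling times, gaps and uneven spacing in the quake catalogues need no resampling.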

A non-intrusive model order reduction (MOR) method that combines features of dynamic mode decomposition (DMD) and radial basis function (RBF) networks is proposed to predict the dynamics of parametric nonlinear systems. In many applications we have only limited access to information about the whole system, which motivates non-intrusive model reduction. One bottleneck is capturing the dynamics of the solution without knowing the physics inside the "black-box" system. DMD is a powerful tool to mimic the dynamics of the system and give a reliable approximation of the solution in the time domain using only the dominant DMD modes. However, DMD cannot reproduce the parametric behavior of the dynamics. Our contribution focuses on extending DMD to parametric DMD by RBF interpolation. Specifically, an RBF network is first trained using snapshot matrices at a limited number of parameter samples. The snapshot matrix at any new parameter sample can then be quickly obtained from the RBF network. At the online stage, DMD uses the newly generated snapshot matrix to predict the time patterns of the dynamics corresponding to the new parameter sample. The proposed framework and algorithm are tested and validated by numerical examples, including models with parametrized and time-varying inputs.
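
The offline/online split described above can be sketched in a few lines: exact DMD on a snapshot matrix, plus a Gaussian-RBF interpolant over parameter samples that produces the snapshot matrix at a new parameter. The kernel choice, its width and the rank truncation below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def dmd_predict(X, steps, r=5):
    """Exact DMD: fit linear dynamics x_{k+1} ~ A x_k from the snapshot
    matrix X (states in columns), then extrapolate `steps` future states
    from the last column, working in the rank-r POD subspace."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    r = min(r, int((s > 1e-10 * s[0]).sum()))
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_r = (U.T @ X2 @ Vh.T) / s          # reduced operator U^T A U
    x_r = U.T @ X[:, -1]
    preds = []
    for _ in range(steps):
        x_r = A_r @ x_r
        preds.append(U @ x_r)
    return np.column_stack(preds)

def rbf_fit(params, snapshots, eps=1.0):
    """Gaussian-RBF interpolation of flattened snapshot matrices over
    scalar parameter samples; returns a map p -> flattened snapshot."""
    P = np.asarray(params, float)[:, None]
    K = np.exp(-eps * (P - P.T) ** 2)    # kernel Gram matrix
    W = np.linalg.solve(K, np.stack([S.ravel() for S in snapshots]))
    return lambda p: np.exp(-eps * (p - P.ravel()) ** 2) @ W
```

At the online stage one would call `rbf_fit(...)` once offline, evaluate it at the new parameter, reshape the result into a snapshot matrix, and hand that to `dmd_predict`.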

We consider the problem of finite-time identification of linear dynamical systems from $T$ samples of a single trajectory. Recent results have predominantly focused on the setup where no structural assumption is made on the system matrix $A^* \in \mathbb{R}^{n \times n}$, and have consequently analyzed the ordinary least squares (OLS) estimator in detail. We assume prior structural information on $A^*$ is available, which can be captured in the form of a convex set $\mathcal{K}$ containing $A^*$. For the solution of the ensuing constrained least squares estimator, we derive non-asymptotic error bounds in the Frobenius norm that depend on the local size of $\mathcal{K}$ at $A^*$. To illustrate the usefulness of these results, we instantiate them for three examples, namely when (i) $A^*$ is sparse and $\mathcal{K}$ is a suitably scaled $\ell_1$ ball; (ii) $\mathcal{K}$ is a subspace; (iii) $\mathcal{K}$ consists of matrices each of which is formed by sampling a bivariate convex function on a uniform $n \times n$ grid (convex regression). In all these situations, we show that $A^*$ can be reliably estimated for values of $T$ much smaller than what is needed for the unconstrained setting.
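
For case (i), the constrained least squares estimator can be approximated by projected gradient descent onto a scaled ℓ1 ball. A minimal sketch under that assumption (the sorting-based projection is the standard algorithm; the step size, iteration count and toy system are illustrative choices, not the paper's experiments):

```python
import numpy as np

def project_l1(v, radius):
    """Euclidean projection of a vector onto the l1 ball of given radius."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def constrained_ols(X, radius, iters=500):
    """Projected gradient for min_A ||A X0 - X1||_F^2 subject to
    ||vec(A)||_1 <= radius, where X0/X1 are shifted trajectory blocks."""
    X0, X1 = X[:, :-1], X[:, 1:]
    n = X.shape[0]
    L = np.linalg.norm(X0 @ X0.T, 2)     # smoothness constant for the step
    A = np.zeros((n, n))
    for _ in range(iters):
        G = (A @ X0 - X1) @ X0.T         # half-gradient of the objective
        A = project_l1((A - G / L).ravel(), radius).reshape(n, n)
    return A
```

Setting `radius` to the (in practice unknown) value `||vec(A*)||_1` mirrors the "suitably scaled" ball in example (i).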

This article studies the convergence properties of trans-dimensional MCMC algorithms when the total number of models is finite. It is shown that, for reversible and some non-reversible trans-dimensional Markov chains, under mild conditions, geometric convergence is guaranteed if the Markov chains associated with the within-model moves are geometrically ergodic. This result is proved in an $L^2$ framework using the technique of Markov chain decomposition. While the technique was previously developed for reversible chains, this work extends it to the point that it can be applied to some commonly used non-reversible chains. Under geometric convergence, a central limit theorem holds for ergodic averages, even in the absence of Harris ergodicity. This allows for the construction of simultaneous confidence intervals for features of the target distribution. This procedure is rigorously examined in a trans-dimensional setting, and special attention is given to the case where the asymptotic covariance matrix in the central limit theorem is singular. The theory and methodology herein are applied to reversible jump algorithms for two Bayesian models: a robust autoregression with unknown model order, and a probit regression with variable selection.
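
The simultaneous confidence intervals mentioned above can be formed, once a CLT for ergodic averages holds, from batch-means standard errors with a Bonferroni correction. A sketch under those assumptions (the batch count and the Bonferroni-style correction are illustrative choices, not the article's exact procedure, which also handles a singular asymptotic covariance):

```python
import numpy as np
from scipy.stats import t

def batch_means_ci(chain, n_batches=20, level=0.95):
    """Bonferroni-adjusted simultaneous confidence intervals for the means
    of a (T, d) chain of functional evaluations, using marginal batch-means
    estimates of the asymptotic variances."""
    T, d = chain.shape
    b = T // n_batches                              # batch length
    means = chain[: b * n_batches].reshape(n_batches, b, d).mean(axis=1)
    mu = chain.mean(axis=0)
    se = means.std(axis=0, ddof=1) / np.sqrt(n_batches)
    z = t.ppf(1 - (1 - level) / (2 * d), n_batches - 1)  # Bonferroni over d
    return mu - z * se, mu + z * se
```

For a trans-dimensional sampler, `chain` would hold features of the target (e.g. indicator of each model times a within-model functional) so the intervals are simultaneous across models.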

Stabbing Planes (also known as Branch and Cut) is a recently introduced proof system which, informally speaking, extends the DPLL method by branching on integer linear inequalities instead of single variables. The techniques known so far for proving size and depth lower bounds for Stabbing Planes are generalizations of those used for the Cutting Planes proof system: size lower bounds are established by monotone circuit arguments, while depth lower bounds are found via communication complexity. As such, these bounds apply to lifted versions of combinatorial statements. Rank lower bounds for Cutting Planes are also obtained by geometric arguments called protection lemmas. In this work we introduce two new geometric approaches for proving size/depth lower bounds in Stabbing Planes that work for any formula: (1) the antichain method, relying on Sperner's theorem, and (2) the covering method, which uses results on essential coverings of the Boolean cube by linear polynomials, which in turn rely on Alon's combinatorial Nullstellensatz. We demonstrate their use on classes of combinatorial principles such as the Pigeonhole principle, the Tseitin contradictions and the Linear Ordering principle. By the first method we prove almost linear size lower bounds and optimal logarithmic depth lower bounds for the Pigeonhole principle, and analogous lower bounds for the Tseitin contradictions over the complete graph and for the Linear Ordering principle. By the covering method we obtain a superlinear size lower bound and a logarithmic depth lower bound for Stabbing Planes proofs of Tseitin contradictions over a grid graph.

This article introduces a new neural-network stochastic model to generate a one-dimensional stochastic field with turbulent velocity statistics. Both the model architecture and the training procedure are grounded in the Kolmogorov and Obukhov statistical theories of fully developed turbulence, thereby guaranteeing descriptions of 1) energy distribution, 2) energy cascade and 3) intermittency across scales in agreement with experimental observations. The model is a Generative Adversarial Network with multiple multiscale optimization criteria. First, we use three physics-based criteria: the variance, skewness and flatness of the increments of the generated field, which respectively recover the turbulent energy distribution, energy cascade and intermittency across scales. Second, the Generative Adversarial Network criterion, based on reproducing statistical distributions, is applied to segments of different lengths of the generated field. Furthermore, to mimic the multiscale decompositions frequently used in turbulence studies, the model architecture is fully convolutional with kernel sizes varying along the multiple layers of the model. To train our model we use turbulent velocity signals measured in grid turbulence at the Modane wind tunnel.
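
The three physics-based criteria act on the increments u(x+l) - u(x) of the field. A minimal sketch of how their variance, skewness and flatness can be measured per scale and compared against a reference signal (function names and the squared-difference aggregation are illustrative assumptions, not the paper's loss):

```python
import numpy as np

def increment_stats(u, lags):
    """For each lag l: variance, skewness and flatness of the centered
    increments u[x+l] - u[x] of a 1-D field."""
    stats = {}
    for l in lags:
        d = u[l:] - u[:-l]
        d = d - d.mean()
        var = (d ** 2).mean()
        stats[l] = (var,
                    (d ** 3).mean() / var ** 1.5,   # skewness
                    (d ** 4).mean() / var ** 2)     # flatness
    return stats

def physics_loss(u_gen, u_ref, lags):
    """Squared mismatch of the three increment statistics across scales."""
    sg, sr = increment_stats(u_gen, lags), increment_stats(u_ref, lags)
    return sum((g - r) ** 2 for l in lags for g, r in zip(sg[l], sr[l]))
```

For a Gaussian field the flatness stays near 3 at every scale; growth of the flatness as the lag shrinks is the intermittency signature the third criterion enforces.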

We consider parametrized linear-quadratic optimal control problems and provide their online-efficient solutions by combining greedy reduced basis methods and machine learning algorithms. To this end, we first extend the greedy control algorithm, which builds a reduced basis for the manifold of optimal final time adjoint states, to the setting where the objective functional consists of a penalty term measuring the deviation from a desired state and a term describing the control energy. Afterwards, we apply machine learning surrogates to accelerate the online evaluation of the reduced model. The error estimates proven for the greedy procedure are further transferred to the machine learning models and thus allow for efficient a posteriori error certification. We discuss the computational costs of all considered methods in detail and show by means of two numerical examples the tremendous potential of the proposed methodology.
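
The greedy construction of a reduced basis can be sketched as follows: at each step, add the (orthonormalised) snapshot that is worst approximated by the current basis. This toy version measures errors in the Euclidean norm on given snapshots; the paper's greedy control algorithm instead targets optimal final-time adjoint states and drives the selection with certified error estimates:

```python
import numpy as np

def greedy_basis(snapshots, tol=1e-8, max_size=10):
    """Greedy reduced-basis selection: repeatedly append the normalized
    residual of the snapshot with the largest projection error onto the
    current orthonormal basis, until all errors fall below tol."""
    S = np.asarray(snapshots, float)        # rows are snapshots
    V = np.empty((0, S.shape[1]))           # rows are basis vectors
    for _ in range(max_size):
        resid = S - S @ V.T @ V             # projection residuals
        errs = np.linalg.norm(resid, axis=1)
        i = errs.argmax()
        if errs[i] < tol:
            break
        V = np.vstack([V, resid[i] / errs[i]])
    return V
```

Because each new vector is a residual orthogonal to the current span, the returned basis is orthonormal by construction.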

The distance geometry problem asks for a realization of a given simple edge-weighted graph in a Euclidean space of given dimension K, where the edges are realized as straight segments of lengths equal (or as close as possible) to the edge weights. The problem is often modelled as a mathematical programming formulation involving decision variables that determine the position of the vertices in the given Euclidean space. Solution algorithms are generally constructed using local or global nonlinear optimization techniques. We present a new modelling technique for this problem where, instead of deciding vertex positions, the formulations decide the lengths of the segments representing the edges in each cycle of the graph, projected onto every dimension. We propose an exact formulation and a relaxation based on an Eulerian cycle. We then compare computational results from protein conformation instances obtained with stochastic global optimization techniques on the new cycle-based formulation and on the existing edge-based formulation. While edge-based formulations take less time to reach termination, cycle-based formulations are generally better on solution quality measures.
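
For reference, the existing edge-based formulation that the experiments compare against can be sketched as an unconstrained least-squares model over vertex positions, solved here by multistart local optimisation (the toy instance, the solver and the restart count are illustrative assumptions; the paper uses stochastic global optimization on much larger protein instances):

```python
import numpy as np
from scipy.optimize import minimize

def realize_edges(edges, n, K, restarts=5):
    """Edge-based DGP model: minimize over X in R^{n x K} the penalty
    sum over edges (u, v, d) of (||x_u - x_v||^2 - d^2)^2, restarting
    the local solver from several random initial configurations."""
    def obj(x):
        X = x.reshape(n, K)
        return sum((((X[u] - X[v]) ** 2).sum() - d * d) ** 2
                   for u, v, d in edges)
    best_fun, best_X = np.inf, None
    for seed in range(restarts):
        x0 = np.random.default_rng(seed).standard_normal(n * K)
        res = minimize(obj, x0, method="L-BFGS-B")
        if res.fun < best_fun:
            best_fun, best_X = res.fun, res.x.reshape(n, K)
    return best_X, best_fun
```

The cycle-based alternative would instead carry one variable per edge and per dimension for the projected segment lengths, constrained to sum to zero around each cycle.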

Monte Carlo methods represent a cornerstone of computer science. They make it possible to sample high-dimensional distribution functions efficiently. In this paper we consider the extension of Automatic Differentiation (AD) techniques to Monte Carlo processes, addressing the problem of obtaining derivatives (and, in general, the Taylor series) of expectation values. Borrowing ideas from the lattice field theory community, we examine two approaches. One is based on reweighting, while the other represents an extension of the Hamiltonian approach typically used by Hybrid Monte Carlo (HMC) and similar algorithms. We show that the Hamiltonian approach can be understood as a change of variables of the reweighting approach, resulting in much reduced variances of the coefficients of the Taylor series. This work opens the door to finding other variance-reduction techniques for derivatives of expectation values.
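
The reweighting approach writes E_θ[O] = E_{θ0}[O · p_θ/p_{θ0}] and differentiates at the sampling point θ0, which reduces the first derivative to the covariance of the observable with the score ∂θ log p. A one-parameter Gaussian sketch (the target, observable and sample size are illustrative assumptions, not the paper's lattice setting):

```python
import numpy as np

def reweighted_derivative(x, obs, score):
    """Monte Carlo estimate of d/dtheta E_theta[O] at theta0, from samples
    x ~ p_theta0: differentiating the reweighted average gives
    dE/dtheta = Cov(O, d/dtheta log p)."""
    O, s = obs(x), score(x)
    return (O * s).mean() - O.mean() * s.mean()

# For x ~ N(theta, 1): E_theta[x^2] = theta^2 + 1, so the exact
# derivative at theta0 = 1 is 2; the score is d/dtheta log p = x - theta.
rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, 1_000_000)
d = reweighted_derivative(x, lambda x: x ** 2, lambda x: x - 1.0)
```

The paper's point is that the naive estimator above can have large variance, and that the Hamiltonian change of variables reduces it.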
