
Atmospheric systems incorporating thermal dynamics must be stable with respect to both energy and entropy. While energy conservation can be enforced via the preservation of the skew-symmetric structure of the Hamiltonian form of the equations of motion, entropy conservation is typically derived as an additional invariant of the Hamiltonian system, and satisfied via the exact preservation of the chain rule. This is particularly challenging since the function spaces used to represent the thermodynamic variables in compatible finite element discretisations are typically discontinuous at element boundaries. In the present work we circumvent this problem by constructing our equations of motion via weighted averages of skew-symmetric formulations using both flux-form and material-form advection of thermodynamic variables, which allow for the cancellations required to conserve entropy without invoking the chain rule. We show that such formulations allow for the stable simulation of both the thermal shallow water and 3D compressible Euler equations on the sphere using mixed compatible finite elements without entropy damping.
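Schematically, for a thermodynamic variable $s$ advected by a flow with density $\rho$ and velocity $\mathbf{u}$, the two advection forms referred to above are (a sketch, not the paper's exact discrete weak forms):

\[ \partial_t(\rho s) + \nabla\cdot(\rho s\,\mathbf{u}) = 0 \quad \text{(flux form)}, \qquad \partial_t s + \mathbf{u}\cdot\nabla s = 0 \quad \text{(material form)}. \]

At the continuous level the two are equivalent via the continuity equation $\partial_t \rho + \nabla\cdot(\rho\mathbf{u}) = 0$ and the product rule; with thermodynamic variables in discontinuous discrete spaces this product-rule step is lost, which is why a weighted average of the corresponding skew-symmetric weak forms is used, so that the problematic terms cancel discretely instead.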

Related content

In calculus, the chain rule is a formula for computing the derivative of a composite function. That is, if $f$ and $g$ are differentiable functions, the chain rule expresses the derivative of their composite $f \circ g$ in terms of the derivatives of $f$ and $g$.
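In its standard single-variable form, the chain rule reads

\[ (f \circ g)'(x) = f'\big(g(x)\big)\, g'(x). \]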

In phase-only compressive sensing (PO-CS), our goal is to recover low-complexity signals (e.g., sparse signals, low-rank matrices) from the phases of complex linear measurements. While perfect recovery of the signal direction in PO-CS was observed quite early, an exact reconstruction guarantee for a fixed, real signal was only recently established by Jacques and Feuillen [IEEE Trans. Inf. Theory, 67 (2021), pp. 4150-4161]. However, two questions remain open: a uniform recovery guarantee and exact recovery of complex signals. In this paper, we almost completely address these two open questions. We prove that all complex sparse signals or low-rank matrices can be uniformly and exactly recovered from a near-optimal number of complex Gaussian measurement phases. By recasting PO-CS as a linear compressive sensing problem, the exact recovery follows from the restricted isometry property (RIP). Our approach to the uniform recovery guarantee is based on covering arguments that involve a delicate control of the (original linear) measurements with overly small magnitude. To handle complex signals, a different sign-product embedding property and a careful rescaling of the sensing matrix are employed. In addition, we show, as an extension, that the uniform recovery is stable under moderate bounded noise. We also propose to add Gaussian dither before capturing the phases to achieve full reconstruction with norm information. Experimental results are reported to corroborate and demonstrate our theoretical results.

Several physical problems modeled by second-order partial differential equations can be efficiently solved using mixed finite elements of the Raviart-Thomas family for N-simplexes, introduced in the seventies. When Neumann conditions are prescribed on a curvilinear boundary, the normal component of the flux variable should preferably not take up values at nodes shifted to the boundary of the approximating polytope in the corresponding normal direction, since the method's accuracy degrades in that case, as shown in \cite{FBRT}. In that work an order-preserving technique was studied, based on a parametric version of these elements with curved simplexes. In this paper an alternative with straight-edged triangles for two-dimensional problems is proposed. The key point of this method is a Petrov-Galerkin formulation of the mixed problem, in which the test-flux space differs slightly from the shape-flux space. After carrying out a well-posedness and stability analysis, error estimates of optimal order are proven.
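For orientation, a minimal sketch of a Petrov-Galerkin mixed formulation of the Poisson problem $-\Delta p = f$ with flux $\mathbf{u} = -\nabla p$ (illustrative only; the paper's boundary treatment and choice of spaces differ): find $(\mathbf{u}_h, p_h) \in V_h \times Q_h$ such that

\[ (\mathbf{u}_h, \mathbf{w}_h) - (p_h, \nabla\cdot\mathbf{w}_h) = -\langle p_D, \mathbf{w}_h\cdot\mathbf{n}\rangle_{\partial\Omega} \quad \forall\, \mathbf{w}_h \in W_h, \qquad (\nabla\cdot\mathbf{u}_h, q_h) = (f, q_h) \quad \forall\, q_h \in Q_h, \]

where the trial (shape-)flux space $V_h$ is of Raviart-Thomas type on straight-edged triangles and the test-flux space $W_h$ is a slightly different space; using $W_h \neq V_h$ is what makes the formulation Petrov-Galerkin rather than Galerkin.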

The randomized singular value decomposition (R-SVD) is a popular sketching-based algorithm for efficiently computing the partial SVD of a large matrix. When the matrix is low-rank, the R-SVD produces its partial SVD exactly; but when the rank is large, it only yields an approximation. Motivated by applications in data science and principal component analysis (PCA), we analyze the R-SVD under a low-rank signal plus noise measurement model; specifically, when its input is a spiked random matrix. The singular values produced by the R-SVD are shown to exhibit a BBP-like phase transition: when the SNR exceeds a certain detectability threshold, which depends on the dimension-reduction factor, the largest singular value is an outlier; below the threshold, no outlier emerges from the bulk of singular values. We further compute asymptotic formulas for the overlap between the ground truth signal singular vectors and the approximations produced by the R-SVD. Dimensionality reduction has the adverse effect of amplifying the noise in a highly nonlinear manner. Our results demonstrate the statistical advantage -- in both signal detection and estimation -- of the R-SVD over more naive sketched PCA variants; the advantage is especially dramatic when the sketching dimension is small. Our analysis is asymptotically exact, and substantially more fine-grained than existing operator-norm error bounds for the R-SVD, which largely fail to give meaningful error estimates in the moderate SNR regime. It applies to a broad family of sketching matrices previously considered in the literature, including Gaussian i.i.d. sketches, random projections, and the sub-sampled Hadamard transform, among others. Lastly, we derive an optimal singular value shrinker for singular values and vectors obtained through the R-SVD, which may be useful for applications in matrix denoising.
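As background, a minimal NumPy sketch of the standard randomized SVD with a Gaussian test matrix (no power iterations); the sketch size k plays the role of the dimension-reduction factor above, and the function name and parameters are illustrative, not the paper's code.

import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    # Randomized SVD with a Gaussian sketch (Halko-Martinsson-Tropp style).
    rng = np.random.default_rng(seed)
    _, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))   # Gaussian test matrix
    Y = A @ Omega                                      # sample the range of A
    Q, _ = np.linalg.qr(Y)                             # orthonormal basis of the sketch
    B = Q.T @ A                                        # small projected matrix
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_hat)[:, :k], s[:k], Vt[:k, :]

In the spiked-model setting of the abstract, one would apply this to a matrix of the form signal-plus-noise and compare the returned leading singular values and vectors with the planted pair.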

Many stochastic processes in the physical and biological sciences can be modelled using Brownian dynamics with multiplicative noise. However, numerical integrators for these processes can lose accuracy or even fail to converge when the diffusion term is configuration-dependent. One remedy is to construct a transform to a constant-diffusion process and sample the transformed process instead. In this work, we explain how coordinate-based and time-rescaling-based transforms can be used either individually or in combination to map a general class of variable-diffusion Brownian motion processes into constant-diffusion ones. The transforms are invertible, thus allowing recovery of the original dynamics. We motivate our methodology using examples in one dimension before considering multivariate diffusion processes. We illustrate the benefits of the transforms through numerical simulations, demonstrating how the right combination of integrator and transform can improve computational efficiency and the order of convergence to the invariant distribution. Notably, the transforms that we derive are applicable to a class of multibody, anisotropic Stokes-Einstein diffusion that has applications in biophysical modelling.
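As a one-dimensional illustration of a coordinate-based transform of this type (a standard Lamperti-type change of variables, not necessarily the authors' exact construction): for $dX_t = a(X_t)\,dt + b(X_t)\,dW_t$ with $b > 0$, setting $Y_t = F(X_t)$ with $F'(x) = 1/b(x)$ gives, by Ito's formula,

\[ dY_t = \left( \frac{a(X_t)}{b(X_t)} - \frac{1}{2}\, b'(X_t) \right) dt + dW_t, \]

so the transformed process has constant (unit) diffusion, and the original dynamics are recovered through $X_t = F^{-1}(Y_t)$.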

We propose a novel a-posteriori error estimation technique where the target quantities of interest are ratios of high-dimensional integrals, as occur, e.g., in PDE-constrained Bayesian inversion and PDE-constrained optimal control subject to an entropic risk measure. We consider in particular parametric, elliptic PDEs with affine-parametric diffusion coefficient, on high-dimensional parameter spaces. We combine our recent a-posteriori Quasi-Monte Carlo (QMC) error analysis with Finite Element a-posteriori error estimation. The proposed approach yields a computable a-posteriori estimator which is reliable, up to higher order terms. The estimator's reliability is uniform with respect to the PDE discretization, and robust with respect to the parametric dimension of the uncertain PDE input.
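Schematically, and in notation of ours rather than the paper's, such a quantity of interest has the form of a ratio of high-dimensional parametric integrals, e.g., a Bayesian posterior expectation

\[ Q = \frac{\int_U \phi\big(u(\boldsymbol{y})\big)\, \exp\!\big(-\Phi(u(\boldsymbol{y}))\big)\, \mathrm{d}\boldsymbol{y}}{\int_U \exp\!\big(-\Phi(u(\boldsymbol{y}))\big)\, \mathrm{d}\boldsymbol{y}}, \]

where $u(\boldsymbol{y})$ is the solution of the affine-parametric elliptic PDE at parameter $\boldsymbol{y} \in U$, $\phi$ is a quantity of interest, and $\Phi$ is a misfit (negative log-likelihood). Numerator and denominator are both approximated by QMC in $\boldsymbol{y}$ and finite elements in space, and the a-posteriori estimator must control the resulting error of the ratio.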

In this paper we extend to two-dimensional data two recently introduced one-dimensional compressibility measures: the $\gamma$ measure defined in terms of the smallest string attractor, and the $\delta$ measure defined in terms of the number of distinct substrings of the input string. Concretely, we introduce the two-dimensional measures $\gamma_{2D}$ and $\delta_{2D}$ as natural generalizations of $\gamma$ and $\delta$ and study some of their properties. Among other things, we prove that $\delta_{2D}$ is monotone and can be computed in linear time, and we show that, although it is still true that $\delta_{2D} \leq \gamma_{2D}$, the gap between the two measures can be $\Omega(\sqrt{n})$ for families of $n\times n$ matrices, and is therefore asymptotically larger than the gap in one dimension. Finally, we use the measures $\gamma_{2D}$ and $\delta_{2D}$ to provide the first analysis of the space usage of the two-dimensional block tree introduced in [Brisaboa et al., Two-dimensional block trees, The Computer Journal, 2023].
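For context, in one dimension $\delta$ is defined from substring complexity; one standard way to state it, together with the natural 2D analogue suggested above (the paper's exact definition may differ in details), is

\[ \delta(S) = \max_{k\ge 1} \frac{d_k(S)}{k}, \qquad \delta_{2D}(M) = \max_{k\ge 1} \frac{d_{k\times k}(M)}{k^2}, \]

where $d_k(S)$ is the number of distinct length-$k$ substrings of $S$ and $d_{k\times k}(M)$ is the number of distinct $k\times k$ submatrices of $M$.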

Data reduction is a fundamental challenge of modern technology, where classical statistical methods are not applicable because of computational limitations. We consider linear regression for an extraordinarily large number of observations but only a few covariates. Subsampling aims at the selection of a given percentage of the existing original data. Under distributional assumptions on the covariates, we derive D-optimal subsampling designs and study their theoretical properties. We make use of fundamental concepts of optimal design theory and an equivalence theorem from constrained convex optimization. The resulting subsampling designs provide simple rules for whether to accept or reject a data point, allowing for an easy algorithmic implementation. In addition, we propose a simplified subsampling method that differs from the D-optimal design but requires less computing time. We present a simulation study comparing both subsampling schemes with the IBOSS method.
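Schematically, and in notation of ours rather than the paper's, acceptance rules of this type take a threshold form derived from the equivalence theorem: with regression vector $f(x) = (1, x^\top)^\top$ and information matrix $M(\xi) = \int f(x) f(x)^\top\, \xi(\mathrm{d}x)$ of a subsampling design $\xi$, a data point $x$ is accepted when its sensitivity exceeds a threshold,

\[ f(x)^\top M(\xi^*)^{-1} f(x) \ \ge\ c, \]

i.e., points are accepted outside a fixed ellipsoid determined by the optimal design $\xi^*$ and the prescribed subsampling proportion. This is a sketch of the structure of such rules, not the paper's exact criterion.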

The Independent Cutset problem asks whether there is a set of vertices in a given graph that is both independent and a cutset. This problem is $\textsf{NP}$-complete even when the input graph is planar and has maximum degree five. In this paper, we first present an $\mathcal{O}^*(1.4423^{n})$-time algorithm for the problem. We also show how to compute a minimum independent cutset (if any) in the same running time. Since the property of having an independent cutset is MSO$_1$-expressible, our main results are concerned with structural parameterizations for the problem considering parameters that are not bounded by a function of the clique-width of the input. We present $\textsf{FPT}$-time algorithms for the problem considering the following parameters: the dual of the maximum degree, the dual of the solution size, the size of a dominating set (where a dominating set is given as an additional input), the size of an odd cycle transversal, the distance to chordal graphs, and the distance to $P_5$-free graphs. We close by introducing the notion of $\alpha$-domination, which allows us to identify more fixed-parameter tractable and polynomial-time solvable cases.
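To make the object concrete, a naive exponential-time check usable only on small, connected graphs (emphatically not the $\mathcal{O}^*(1.4423^{n})$ algorithm of the paper; the function name is ours):

from itertools import combinations
import networkx as nx

def has_independent_cutset(G):
    # Assumes G is connected. Tries every vertex subset S of size 1..n-2:
    # S must induce no edges (independent) and G - S must be disconnected (cutset).
    nodes = list(G.nodes)
    for r in range(1, len(nodes) - 1):
        for S in combinations(nodes, r):
            if any(G.has_edge(u, v) for u, v in combinations(S, 2)):
                continue  # S is not an independent set
            H = G.copy()
            H.remove_nodes_from(S)
            if not nx.is_connected(H):
                return True  # S is an independent cutset
    return False

# A 4-cycle has one (two opposite vertices); a complete graph does not.
print(has_independent_cutset(nx.cycle_graph(4)))     # True
print(has_independent_cutset(nx.complete_graph(4)))  # False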

To improve the convergence property of the randomized Kaczmarz (RK) method for solving linear systems, Bai and Wu (SIAM J. Sci. Comput., 40(1):A592--A606, 2018) introduced a greedy probability criterion for effectively selecting the working row from the coefficient matrix and constructed the greedy randomized Kaczmarz (GRK) method. Due to its simplicity and efficiency, this approach has inspired numerous subsequent works in recent years, such as the capped adaptive sampling rule, the greedy augmented randomized Kaczmarz method, and the greedy randomized coordinate descent method. Since the iterates of the GRK method are random variables, existing convergence analyses all concern the expectation of the error. In this note, we prove that the linear convergence rate of the GRK method is deterministic, i.e., it holds not merely in the sense of expectation. Moreover, Polyak's heavy-ball momentum technique is incorporated to improve the performance of the GRK method. We propose a convergence analysis of momentum variants of randomized iterative methods that refines the technique used in Loizou and Richt\'{a}rik (Comput. Optim. Appl., 77(3):653--710, 2020), and shows that the proposed GRK method with momentum (mGRK) also enjoys deterministic linear convergence. Numerical experiments show that the mGRK method is more efficient than the GRK method.
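A schematic NumPy sketch of GRK with a heavy-ball momentum term, following the Bai-Wu greedy criterion as we understand it; the step size alpha and momentum beta are illustrative parameters, not the tuned values of the paper.

import numpy as np

def mgrk(A, b, x0, alpha=1.0, beta=0.3, iters=500, seed=0):
    # Greedy randomized Kaczmarz with heavy-ball momentum (schematic sketch).
    rng = np.random.default_rng(seed)
    x_prev = x0.astype(float).copy()
    x = x0.astype(float).copy()
    row_norms2 = np.sum(A * A, axis=1)   # squared row norms ||a_i||^2
    fro2 = row_norms2.sum()              # ||A||_F^2
    for _ in range(iters):
        r = b - A @ x
        rn2 = float(r @ r)
        if rn2 == 0.0:
            break
        # Greedy probability criterion of Bai & Wu (2018): keep only rows whose
        # relative residual exceeds the adaptive threshold, then sample among them.
        eps = 0.5 * (np.max(r**2 / row_norms2) / rn2 + 1.0 / fro2)
        mask = r**2 >= eps * rn2 * row_norms2
        probs = np.where(mask, r**2, 0.0)
        probs /= probs.sum()
        i = rng.choice(len(b), p=probs)
        step = (r[i] / row_norms2[i]) * A[i]   # Kaczmarz projection onto row i
        x, x_prev = x + alpha * step + beta * (x - x_prev), x
    return x

# Example: A = np.random.randn(500, 50); x_true = np.random.randn(50)
#          x_hat = mgrk(A, A @ x_true, np.zeros(50))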

The accurate and efficient simulation of Partial Differential Equations (PDEs) in and around arbitrarily defined geometries is critical for many application domains. Immersed boundary methods (IBMs) alleviate the usually laborious and time-consuming process of creating body-fitted meshes around complex geometry models (described by CAD or other representations, e.g., STL, point clouds), especially when high levels of mesh adaptivity are required. In this work, we advance the field of IBMs in the context of the recently developed Shifted Boundary Method (SBM). In the SBM, the location where boundary conditions are enforced is shifted from the actual boundary of the immersed object to a nearby surrogate boundary, and the boundary conditions are corrected using Taylor expansions. This approach allows choosing surrogate boundaries that conform to a Cartesian mesh without losing accuracy or stability. Our contributions in this work are as follows: (a) we show that the SBM numerical error can be greatly reduced by an optimal choice of the surrogate boundary, (b) we mathematically prove the optimal convergence of the SBM for this optimal choice of the surrogate boundary, (c) we deploy the SBM on massively parallel octree meshes, including algorithmic advances to handle incomplete octrees, and (d) we showcase the applicability of these approaches with a wide variety of simulations involving complex shapes, sharp corners, and different topologies. Specific emphasis is given to Poisson's equation and the linear elasticity equations.
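For concreteness, the basic SBM correction for a Dirichlet condition $u = g$ on the true boundary $\Gamma$ can be written as follows (a standard schematic, with $\tilde{\Gamma}$ the surrogate boundary and $\mathbf{d}(\tilde{x})$ the vector from a surrogate point $\tilde{x}$ to its image on $\Gamma$):

\[ u(\tilde{x}) + \nabla u(\tilde{x}) \cdot \mathbf{d}(\tilde{x}) = g\big(\tilde{x} + \mathbf{d}(\tilde{x})\big), \qquad \tilde{x} \in \tilde{\Gamma}, \]

i.e., a first-order Taylor expansion transfers the boundary data from $\Gamma$ to $\tilde{\Gamma}$, where the shifted condition is then enforced weakly. The choice of which mesh faces form $\tilde{\Gamma}$ is precisely the optimal-surrogate-boundary question studied in the paper.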
