
We present a reduced basis method for cheaply constructing (possibly rough) approximations to the nodal basis functions of the virtual element space, and propose to use such approximations for the design of the stabilization term in the virtual element method and for the post-processing of the solution.
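As a hedged illustration (not the paper's construction): once rough approximations of the nodal basis functions are available at quadrature points, a stabilization-like term can be assembled from their non-polynomial remainders. The arrays `PHI`, `PI_PHI`, and the weights `w` below are hypothetical inputs standing in for the approximate basis values and their polynomial projections.

```python
# Hedged sketch: given approximate nodal basis values PHI (n_quad x n_dofs),
# their polynomial projections PI_PHI at the same quadrature points, and
# quadrature weights w, assemble a stabilization-like matrix
#   S_ij = sum_q w_q (phi_i - Pi phi_i)(q) (phi_j - Pi phi_j)(q).
import numpy as np

def stabilization_matrix(PHI, PI_PHI, w):
    R = PHI - PI_PHI               # non-polynomial remainder of each basis function
    return R.T @ (w[:, None] * R)  # weighted Gram matrix of the remainders
```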

Related Content

Gács' coarse-grained algorithmic entropy leverages universal computation to quantify the information content of any given physical state. Unlike the Boltzmann and Shannon-Gibbs entropies, it requires no prior commitment to macrovariables or probabilistic ensembles, rendering it applicable to settings arbitrarily far from equilibrium. For Markovian coarse-grainings, we prove a number of algorithmic fluctuation inequalities. The most important of these is a very general formulation of the second law of thermodynamics. In the presence of a heat and work reservoir, it implies algorithmic versions of Jarzynski's equality and Landauer's principle. Finally, to demonstrate how a deficiency of algorithmic entropy can be used as a resource, we model an information engine powered by compressible strings.
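As a hedged illustration only (not the paper's construction): the compressed length of a string is a computable upper bound on its algorithmic complexity, so the gap between raw and compressed length gives a crude lower bound on the entropy deficiency that an information engine could exploit, at most $k_B T \ln 2$ of work per bit by Landauer's principle. The function name below is hypothetical.

```python
# Hedged sketch: use zlib's compressed length as a computable upper bound on
# algorithmic complexity; the raw-minus-compressed gap lower-bounds the
# entropy deficiency (in bits) of the string, i.e. the "fuel" available to an
# information engine powered by compressible strings.
import zlib

def entropy_deficiency_bits(s: bytes) -> int:
    compressed = zlib.compress(s, 9)
    return max(0, 8 * (len(s) - len(compressed)))

print(entropy_deficiency_bits(b"0" * 1000))  # highly compressible: large deficiency
```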

Numerical simulations of kinetic problems can become prohibitively expensive due to their large memory footprint and computational costs. A method that has proven to successfully reduce these costs is the dynamical low-rank approximation (DLRA). One key question when using DLRA methods is the construction of robust time integrators that preserve the invariances and associated conservation laws of the original problem. In this work, we demonstrate that the augmented basis update & Galerkin (BUG) integrator preserves solution invariances and the associated conservation laws when a conservative truncation step and an appropriate time and space discretization are used. We present numerical comparisons to existing conservative integrators and discuss advantages and disadvantages.
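A minimal sketch of one step of an augmented BUG-type integrator is given below, assuming a generic matrix ODE $\dot{A} = F(A)$ and explicit Euler substeps; the conservative truncation used in the paper is replaced here by a plain SVD truncation for brevity, so this is an illustration of the basic mechanics rather than the paper's conservative scheme.

```python
# Hedged sketch of one augmented BUG step for dA/dt = F(A), A ~ U S V^T
# (U: m x r, S: r x r, V: n x r).  Explicit Euler substeps; non-conservative
# SVD truncation back to rank r.
import numpy as np

def augmented_bug_step(U, S, V, F, dt, r):
    A0 = U @ S @ V.T
    # K-step: evolve K = U S with V frozen, then augment the column basis.
    K = U @ S + dt * F(A0) @ V
    U_hat, _ = np.linalg.qr(np.hstack([K, U]))           # m x 2r
    # L-step: evolve L = V S^T with U frozen, then augment the row basis.
    L = V @ S.T + dt * F(A0).T @ U
    V_hat, _ = np.linalg.qr(np.hstack([L, V]))           # n x 2r
    # Galerkin (S-)step in the augmented bases.
    S_hat = (U_hat.T @ U) @ S @ (V.T @ V_hat)
    S_hat = S_hat + dt * U_hat.T @ F(U_hat @ S_hat @ V_hat.T) @ V_hat
    # Truncation back to rank r (the paper uses a conservative variant instead).
    P, sig, Qt = np.linalg.svd(S_hat)
    return U_hat @ P[:, :r], np.diag(sig[:r]), V_hat @ Qt[:r, :].T
```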

We study Whitney-type estimates for approximation of convex functions in the uniform norm on various convex multivariate domains, paying particular attention to the dependence of the involved constants on the dimension and the geometry of the domain.
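For context, one classical form of a Whitney-type estimate is recalled below; the abstract's results concern how the constant behaves with the dimension $d$ and the geometry of the domain.

```latex
% E_{k-1}(f)_{L_\infty(\Omega)} is the error of best uniform approximation of f
% on \Omega by polynomials of total degree < k, and \omega_k is the k-th
% modulus of smoothness.  A classical Whitney-type inequality reads
\[
  E_{k-1}(f)_{L_\infty(\Omega)} \;\le\; C(k, d, \Omega)\,
  \omega_k\!\bigl(f, \operatorname{diam}\Omega\bigr)_{\infty}.
\]
```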

We give an efficient perfect sampling algorithm for weighted, connected induced subgraphs (or graphlets) of rooted, bounded degree graphs. Our algorithm utilizes a vertex-percolation process with a carefully chosen rejection filter and works under a percolation subcriticality condition. We show that this condition is optimal in the sense that the task of (approximately) sampling weighted rooted graphlets becomes impossible in finite expected time for infinite graphs and intractable for finite graphs when the condition does not hold. We apply our sampling algorithm as a subroutine to give near linear-time perfect sampling algorithms for polymer models and weighted non-rooted graphlets in finite graphs, two widely studied yet very different problems. This new perfect sampling algorithm for polymer models gives improved sampling algorithms for spin systems at low temperatures on expander graphs and unbalanced bipartite graphs, among other applications.
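A hedged sketch of the vertex-percolation growth process from a root vertex follows; the carefully chosen rejection filter that makes the sampler perfect is specific to the paper and is only indicated by a placeholder comment, and the function and parameter names are assumptions.

```python
# Hedged sketch: grow a connected vertex set from `root`, revealing each newly
# reachable vertex independently with probability p (the percolation proposal).
import random

def percolation_cluster(adj, root, p, rng=random):
    """adj: dict mapping each vertex to an iterable of its neighbors."""
    included, excluded = {root}, set()
    frontier = [root]
    while frontier:
        v = frontier.pop()
        for u in adj[v]:
            if u in included or u in excluded:
                continue
            if rng.random() < p:     # vertex survives percolation
                included.add(u)
                frontier.append(u)
            else:
                excluded.add(u)
    return included

# A perfect sampler would wrap this proposal in an accept/reject step, e.g.
# accept with probability weight(included) / bound(included) for a suitable
# bound that is valid under the percolation subcriticality condition.
```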

We study the high-order local discontinuous Galerkin (LDG) method for the $p$-Laplace equation. We reformulate our spatial discretization as an equivalent convex minimization problem and use a preconditioned gradient descent method as the nonlinear solver. For the first time, a weighted preconditioner that provides $hk$-independent convergence is applied in the LDG setting. For polynomial order $k \geqslant 1$, we rigorously establish the solvability of our scheme and provide a priori error estimates in a mesh-dependent energy norm. Our error estimates are measured in a distance that differs from, and is not equivalent to, those used in existing LDG results. For arbitrarily high-order polynomials, under the assumption that the exact solution is sufficiently regular, the error estimates demonstrate the potential for high-order accuracy. Our numerical results exhibit the desired convergence speed facilitated by the preconditioner; we observe the best convergence rates in the gradient variables, in alignment with linear LDG, and optimal rates in the primal variable when $1 < p \leqslant 2$.
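A minimal sketch of the solver idea (not the paper's LDG discretization or its weighted preconditioner) is given below for a generic discrete $p$-Laplace energy $J(u) = \tfrac{1}{p}\sum_e w_e |(Du)_e|^p - f^\top u$, using the linear ($p=2$) stiffness matrix as a stand-in preconditioner; all names are hypothetical.

```python
# Hedged sketch: preconditioned gradient descent for a discrete p-Laplace energy.
import numpy as np

def solve_p_laplace(D, w, f, p, u0, steps=200, tau=0.5):
    """D: discrete gradient (edges x nodes), w: edge weights, f: load vector."""
    P = D.T @ (w[:, None] * D) + 1e-12 * np.eye(D.shape[1])  # p = 2 preconditioner
    u = u0.copy()
    for _ in range(steps):
        Du = D @ u
        g = D.T @ (w * (np.abs(Du) + 1e-12) ** (p - 2) * Du) - f  # gradient of J
        u -= tau * np.linalg.solve(P, g)                           # preconditioned step
    return u
```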

In arXiv:1811.04313, a definition of the determinant is formalized in the bounded arithmetic $VNC^{2}$. Following the presentation of [Gathen, 1993], we can formalize a definition of matrix rank in the same bounded arithmetic. In this article, we define a bounded arithmetic $LAPPD$ and argue that $LAPPD$ is a natural arithmetic theory for formalizing the treatment of the rank function following Mulmuley's algorithm. Furthermore, we give a formalization of rank in $VNC^{2}$ by interpreting $LAPPD$ in $VNC^{2}$.

For a nonlinear dynamical system that depends on parameters, the paper introduces a novel tensorial reduced-order model (TROM). The reduced model is projection-based, and for systems with no parameters involved, it resembles proper orthogonal decomposition (POD) combined with the discrete empirical interpolation method (DEIM). For parametric systems, TROM employs low-rank tensor approximations in place of truncated SVD, a key dimension-reduction technique in POD with DEIM. Three popular low-rank tensor compression formats are considered for this purpose: canonical polyadic, Tucker, and tensor train. The use of multilinear algebra tools allows the incorporation of information about the parameter dependence of the system into the reduced model and leads to a POD-DEIM type ROM that (i) is parameter-specific (localized) and predicts the system dynamics for out-of-training set (unseen) parameter values, (ii) mitigates the adverse effects of high parameter space dimension, (iii) has online computational costs that depend only on tensor compression ranks but not on the full-order model size, and (iv) achieves lower reduced space dimensions compared to the conventional POD-DEIM ROM. The paper explains the method, analyzes its prediction power, and assesses its performance for two specific parameter-dependent nonlinear dynamical systems.
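As a hedged illustration of one ingredient only: the Tucker format mentioned above can be obtained from a snapshot tensor (space $\times$ time $\times$ parameter) via a truncated higher-order SVD. The sketch below shows that compression step; the TROM construction itself (parameter interpolation, DEIM, the online stage) is not reproduced, and the function name and ranks are assumptions.

```python
# Hedged sketch: truncated HOSVD (Tucker compression) of a snapshot tensor.
import numpy as np

def hosvd(T, ranks):
    """Return a Tucker core and factor matrices, one factor per tensor mode."""
    factors = []
    for mode, r in enumerate(ranks):
        mat = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)  # mode unfolding
        U, _, _ = np.linalg.svd(mat, full_matrices=False)
        factors.append(U[:, :r])
    core = T.copy()
    for U in factors:                                  # project each mode onto its factor
        core = np.tensordot(core, U, axes=([0], [0]))  # contracted mode moves to the back
    return core, factors
```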

Factor models have been widely used to summarize the variability of high-dimensional data through a set of factors with much lower dimensionality. Gaussian linear factor models have been particularly popular due to their interpretability and ease of computation. However, in practice, data often violate the multivariate Gaussian assumption. To characterize higher-order dependence and nonlinearity, models that include factors as predictors in flexible multivariate regression are popular, with Gaussian process latent variable models (GP-LVMs) using Gaussian process (GP) priors for the regression function and variational autoencoders (VAEs) using deep neural networks. Unfortunately, such approaches lack identifiability and interpretability and tend to produce brittle and non-reproducible results. To address these problems by simplifying the nonparametric factor model while maintaining flexibility, we propose the NIFTY framework, which parsimoniously transforms uniform latent variables using one-dimensional nonlinear mappings and then applies a linear generative model. The induced multivariate distribution falls into a flexible class while maintaining simple computation and interpretation. We prove that this model is identifiable and empirically study NIFTY using simulated data, observing good performance in density estimation and data visualization. We then apply NIFTY to bird song data in an environmental monitoring application.
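A hedged generative sketch of the model class described above (not the authors' implementation or their inference procedure): uniform latent variables are passed through coordinatewise nonlinear maps and then through a linear loading plus Gaussian noise. The specific one-dimensional maps and dimensions below are hypothetical.

```python
# Hedged sketch: uniform latents -> one-dimensional nonlinear maps -> linear model.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 10, 3                        # samples, observed dim, latent dim

Z = rng.uniform(size=(n, k))                # uniform latent variables
# hypothetical one-dimensional nonlinear transformations of each latent coordinate
transports = [lambda u: np.log(u / (1 - u)),          # logit
              lambda u: u ** 2,
              lambda u: np.sin(2 * np.pi * u)]
H = np.column_stack([g(Z[:, j]) for j, g in enumerate(transports)])

Lambda = rng.normal(size=(d, k))            # linear loading matrix
X = H @ Lambda.T + 0.1 * rng.normal(size=(n, d))      # observed data
```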

We introduce and analyse a family of hash and predicate functions that are more likely to produce collisions for small reducible configurations of vectors. These may offer practical improvements to lattice sieving for short vectors. In particular, in one asymptotic regime the family exhibits significantly different convergent behaviour than existing hash functions and predicates.
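For orientation, the sketch below shows a standard sign-based hash (SimHash) of the kind commonly used in lattice sieving: vectors whose difference is short tend to agree on many hash coordinates, flagging candidate reductions. The new family introduced in the abstract has different collision behaviour and is not reproduced here; all names and dimensions below are assumptions.

```python
# Hedged sketch: SimHash-style sign hash for detecting nearly reducible pairs.
import numpy as np

def simhash(v, directions):
    """Sign pattern of v against random directions (rows of `directions`)."""
    return (directions @ v > 0).astype(np.uint8)

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 40))               # 64 random directions in dimension 40
v = rng.normal(size=40)
w = v + 0.05 * rng.normal(size=40)          # v - w is short: a reducible configuration
print(np.sum(simhash(v, A) == simhash(w, A)))   # high agreement signals a candidate
```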

We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
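For context, a classical information-theoretic generalization bound is recalled below (Xu and Raginsky, 2017); the bounds described above instead measure the information contained in the predictions rather than in the algorithm's output $W$, which is what makes them easier to estimate.

```latex
% For an n-sample training set S, algorithm output W, and \sigma-sub-Gaussian
% loss, the expected generalization gap satisfies
\[
  \bigl|\mathbb{E}\,[\mathrm{gen}(W, S)]\bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}\, I(W; S)}{n}} .
\]
```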
