
The computational cost of inference and prediction for statistical models based on Gaussian processes with Mat\'ern covariance functions scales cubically with the number of observations, limiting their applicability to large data sets. The cost can be reduced in certain special cases, but there are currently no generally applicable exact methods with linear cost. Several approximate methods have been introduced to reduce the cost, but most of these lack theoretical guarantees on their accuracy. We consider Gaussian processes on bounded intervals with Mat\'ern covariance functions and, for the first time, develop a generally applicable method with linear cost whose covariance error decreases exponentially fast in the order $m$ of the proposed approximation. The method is based on an optimal rational approximation of the spectral density and yields an approximation that can be represented as a sum of $m$ independent Gaussian Markov processes, which facilitates efficient implementation in general statistical inference software. Besides the theoretical justification, we demonstrate the accuracy empirically through carefully designed simulation studies, which show that the method outperforms all state-of-the-art alternatives in terms of accuracy for a fixed computational cost in statistical tasks such as Gaussian process regression.
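The representational idea behind the linear cost can be sketched numerically: a sum of independent Gaussian Markov (Ornstein--Uhlenbeck) processes is itself Gaussian, and each component is an exact AR(1) recursion on a grid, so simulation is linear in the number of points. The weights and rates below are illustrative placeholders, not the optimal rational-approximation coefficients from the paper.

```python
import numpy as np

def simulate_ou_sum(ts, weights, rates, sigmas, rng):
    """Simulate a weighted sum of independent stationary OU processes on
    the grid ts.  Each OU component is Gaussian and Markov, hence an exact
    AR(1) recursion on a grid; the sum is a Gaussian process whose
    covariance is a weighted sum of exponentials.  The coefficients here
    are illustrative, not the paper's optimal rational approximation."""
    m, n = len(weights), len(ts)
    x = np.zeros((m, n))
    for i, (lam, sig) in enumerate(zip(rates, sigmas)):
        var = sig**2 / (2.0 * lam)              # stationary variance
        x[i, 0] = rng.normal(0.0, np.sqrt(var))
        for k in range(1, n):
            dt = ts[k] - ts[k - 1]
            phi = np.exp(-lam * dt)             # AR(1) coefficient
            x[i, k] = phi * x[i, k - 1] + rng.normal(
                0.0, np.sqrt(var * (1.0 - phi**2)))
    return weights @ x                          # weighted sum of components

rng = np.random.default_rng(0)
ts = np.linspace(0.0, 1.0, 200)
path = simulate_ou_sum(ts, np.array([0.6, 0.4]), [1.0, 5.0], [1.0, 1.0], rng)
```

The Markov structure of each summand is what standard sparse-precision inference software exploits.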

Related content

Change point detection (CPD) methods aim to identify abrupt shifts in the distribution of input data streams. Accurate estimators for this task are crucial across various real-world scenarios. Yet, traditional unsupervised CPD techniques face significant limitations, often relying on strong assumptions or suffering from low expressive power due to inherent model simplicity. In contrast, representation learning methods overcome these drawbacks by offering flexibility and the ability to capture the full complexity of the data without imposing restrictive assumptions. However, these approaches are still emerging in the CPD field and lack robust theoretical foundations to ensure their reliability. Our work addresses this gap by integrating the expressive power of representation learning with the groundedness of traditional CPD techniques. We adopt spectral normalization (SN) for deep representation learning in CPD tasks and prove that the embeddings after SN are highly informative for CPD. Our method significantly outperforms current state-of-the-art methods in a comprehensive evaluation on three standard CPD datasets.
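Spectral normalization itself is simple to state: each weight matrix is divided by its largest singular value, making the corresponding linear map 1-Lipschitz. A minimal numpy sketch of that operation, using power iteration as is conventional (this illustrates the generic technique, not the paper's training code):

```python
import numpy as np

def spectral_normalize(w, n_iter=50):
    """Divide a weight matrix by an estimate of its largest singular value
    (obtained by power iteration), as in spectral normalization.  The
    normalized map has spectral norm ~1, i.e. it is 1-Lipschitz."""
    u = np.random.default_rng(0).normal(size=w.shape[0])
    for _ in range(n_iter):
        v = w.T @ u
        v /= np.linalg.norm(v)
        u = w @ v
        u /= np.linalg.norm(u)
    sigma = u @ w @ v                    # estimate of top singular value
    return w / sigma

w = np.random.default_rng(1).normal(size=(8, 5))
w_sn = spectral_normalize(w)
print(np.linalg.norm(w_sn, 2))          # spectral norm close to 1
```

Controlling the Lipschitz constant of the embedding network is what makes distances between embeddings informative about distances between input distributions.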

Generative diffusion models apply the concept of Langevin dynamics from physics to machine learning, attracting considerable interest from engineering, statistics and physics, but a complete picture of their inherent mechanisms is still lacking. In this paper, we provide a transparent physics analysis of diffusion models, formulating the fluctuation theorem, entropy production, equilibrium measure, and Franz-Parisi potential to understand the dynamic process and intrinsic phase transitions. Our analysis is rooted in a path integral representation of both forward and backward dynamics, and in treating the reverse diffusion generative process as statistical inference, where the time-dependent state variables serve as quenched disorder akin to that in spin glass theory. Our study thus links stochastic thermodynamics, statistical inference and geometry-based analysis together to yield a coherent picture of how generative diffusion models work.
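The forward (noising) half of such models is ordinary Langevin/Ornstein--Uhlenbeck dynamics, whose marginal is available in closed form; a quick numerical check of that relaxation toward the equilibrium measure (generic variance-preserving notation with a constant noise rate $\beta$, chosen here for illustration and not taken from the paper):

```python
import numpy as np

# Forward noising process of a variance-preserving diffusion:
#   dx = -0.5 * beta * x dt + sqrt(beta) dW,
# whose marginal at time t is exactly Gaussian:
#   x_t = exp(-0.5*beta*t) * x_0 + sqrt(1 - exp(-beta*t)) * eps.
# Illustrative sketch with constant beta; the reverse generative
# dynamics is what inverts this relaxation.
rng = np.random.default_rng(0)
beta, t = 1.0, 2.0
x0 = rng.normal(loc=3.0, size=100_000)          # "data" distribution
eps = rng.normal(size=x0.shape)
xt = np.exp(-0.5 * beta * t) * x0 + np.sqrt(1.0 - np.exp(-beta * t)) * eps

# As t grows, the mean decays like exp(-0.5*beta*t) and the marginal
# approaches the standard-normal equilibrium measure.
print(xt.mean(), xt.var())
```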

Many partial differential equations (PDEs) such as the Navier--Stokes equations in fluid mechanics, inelastic deformation in solids, and transient parabolic and hyperbolic equations do not have an exact, primal variational structure. Recently, a variational principle based on the dual (Lagrange multiplier) field was proposed. The essential idea in this approach is to treat the given PDE as a constraint, and to invoke an arbitrarily chosen auxiliary potential with strong convexity properties to be optimized. This leads to requiring a convex dual functional to be minimized subject to Dirichlet boundary conditions on dual variables, with the guarantee that even PDEs that do not possess a variational structure in primal form can be solved via a variational principle. The vanishing of the first variation of the dual functional is, up to Dirichlet boundary conditions on dual fields, the weak form of the primal PDE problem with the dual-to-primal change of variables incorporated. We derive the dual weak form for the linear, one-dimensional, transient convection-diffusion equation. A Galerkin discretization is used to obtain the discrete equations, with the trial and test functions chosen as linear combinations of either RePU activation functions (shallow neural network) or B-spline basis functions; the corresponding stiffness matrix is symmetric. For transient problems, a space-time Galerkin implementation is used with tensor-product B-splines as approximating functions. Numerical results are presented for the steady-state and transient convection-diffusion equation, and transient heat conduction. The proposed method delivers sound accuracy for ODEs and PDEs, and rates of convergence are established in the $L^2$ norm and $H^1$ seminorm for the steady-state convection-diffusion problem.

This paper addresses the challenge of proving the existence of solutions for nonlinear equations in Banach spaces, focusing on the Navier-Stokes equations and discretizations thereof. Traditional methods, such as monotonicity-based approaches and fixed-point theorems, often face limitations in handling general nonlinear operators or finite element discretizations. A novel concept, mapped coercivity, provides a unifying framework to analyze nonlinear operators through a continuous mapping. We apply these ideas to saddle-point problems in Banach spaces, emphasizing both infinite-dimensional formulations and finite element discretizations. Our analysis includes stabilization techniques to restore coercivity in finite-dimensional settings, ensuring stability and existence of solutions. For linear problems, we explore the relationship between the inf-sup condition and mapped coercivity, using the Stokes equation as a case study. For nonlinear saddle-point systems, we extend the framework to mapped coercivity via surjective mappings, enabling concise proofs of existence of solutions for various stabilized Navier-Stokes finite element methods. These include Brezzi-Pitk\"aranta, a simple variant, and local projection stabilization (LPS) techniques, with extensions to convection-dominant flows. The proposed methodology offers a robust tool for analyzing nonlinear PDEs and their discretizations, bypassing traditional decompositions and providing a foundation for future developments in computational fluid dynamics.

We consider the task of out-of-distribution (OOD) generalization, where the distribution shift is due to an unobserved confounder ($Z$) affecting both the covariates ($X$) and the labels ($Y$). In this setting, traditional assumptions of covariate and label shift are unsuitable due to the confounding, which introduces heterogeneity in the predictor, i.e., $\hat{Y} = f_Z(X)$. OOD generalization differs from traditional domain adaptation by not assuming access to the covariate distribution ($X^\text{te}$) of the test samples during training. These conditions create a challenging scenario for OOD robustness: (a) $Z^\text{tr}$ is an unobserved confounder during training, (b) $P^\text{te}(Z) \neq P^\text{tr}(Z)$, (c) $X^\text{te}$ is unavailable during training, and (d) the posterior predictive distribution depends on $P^\text{te}(Z)$, i.e., $\hat{Y} = E_{P^\text{te}(Z)}[f_Z(X)]$. In general, accurate predictions are unattainable in this scenario, and existing literature has proposed complex predictors based on identifiability assumptions that require multiple additional variables. Our work investigates a set of identifiability assumptions that substantially simplify the predictor, and shows that the resulting simple predictor outperforms existing approaches.
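The structure of the confounded posterior predictive in (d) is easy to illustrate with a discrete confounder: the prediction averages the $Z$-specific predictors $f_Z$ under the test-time mixture $P^\text{te}(Z)$. The linear predictors and mixture weights below are hypothetical choices, used only to fix notation:

```python
import numpy as np

# Toy sketch of the confounded posterior predictive with a binary
# confounder Z.  Both the Z-specific predictors f_Z and the test-time
# mixture weights P^te(Z) are hypothetical, for illustration only.
f = {0: lambda x: 2.0 * x, 1: lambda x: -1.0 * x}   # f_Z for Z in {0, 1}
p_te = np.array([0.3, 0.7])                          # P^te(Z)

def predict(x):
    """E_{P^te(Z)}[f_Z(x)]: mixture of Z-specific predictions."""
    return sum(p_te[z] * f[z](x) for z in (0, 1))

print(predict(1.0))   # 0.3*2 + 0.7*(-1) = -0.1
```

The difficulty the abstract describes is precisely that $P^\text{te}(Z)$ is unknown at training time, so the mixture weights cannot simply be plugged in.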

We consider a mixed variational formulation recently proposed for the coupling of the Brinkman--Forchheimer and Darcy equations and develop the first reliable and efficient residual-based a posteriori error estimator for the 2D version of the associated conforming mixed finite element scheme. For the reliability analysis, due to the nonlinear nature of the problem, we make use of the inf-sup condition and the strong monotonicity of the operators involved, along with a stable Helmholtz decomposition in Hilbert spaces and local approximation properties of the Raviart--Thomas and Cl\'ement interpolants. On the other hand, inverse inequalities, the localization technique through bubble functions, and known results from previous works are the main tools yielding the efficiency estimate. Finally, several numerical examples confirming the theoretical properties of the estimator and illustrating the performance of the associated adaptive algorithms are reported. In particular, the case of flow through a heterogeneous porous medium is considered.

Approximating a univariate function on the interval $[-1,1]$ with a polynomial is among the most classical problems in numerical analysis. When the function evaluations come with noise, a least-squares fit is known to reduce the effect of noise as more samples are taken. The generic algorithm for the least-squares problem requires $O(Nn^2)$ operations, where $N+1$ is the number of sample points and $n$ is the degree of the polynomial approximant. This algorithm is unstable when $n$ is large, for example $n\gg \sqrt{N}$ for equispaced sample points. In this study, we blend numerical analysis and statistics to introduce a stable and fast $O(N\log N)$ algorithm called NoisyChebtrunc based on Chebyshev interpolation. It has the same error reduction effect as least-squares and the convergence is spectral until the error reaches $O(\sigma \sqrt{{n}/{N}})$, where $\sigma$ is the noise level, after which the error continues to decrease at the Monte-Carlo $O(1/\sqrt{N})$ rate. To determine the polynomial degree, NoisyChebtrunc employs a statistical criterion, namely Mallows' $C_p$. We analyze NoisyChebtrunc in terms of the variance and the concentration, in the infinity norm, of its error relative to the underlying noiseless function. These results show that with high probability the infinity-norm error is bounded by a small constant times $\sigma \sqrt{{n}/{N}}$, when the noise is independent and follows a subgaussian or subexponential distribution. We illustrate the performance of NoisyChebtrunc with numerical experiments.
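The noise-floor behaviour described above can be seen with an ordinary Chebyshev least-squares fit. The sketch below uses numpy's `chebfit` with a fixed degree, so it illustrates the error scale $O(\sigma\sqrt{n/N})$ only; it is not the $O(N\log N)$ NoisyChebtrunc algorithm, and the degree is not chosen by Mallows' $C_p$:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Least-squares Chebyshev fit to noisy samples of a smooth function.
# The error to the *noiseless* function decays spectrally with the
# degree until it hits the noise floor, of order sigma*sqrt(n/N).
rng = np.random.default_rng(0)
N, n, sigma = 4000, 20, 0.1
x = np.cos(np.pi * np.arange(N + 1) / N)      # Chebyshev points on [-1, 1]
f = np.exp(x)                                  # smooth target function
y = f + sigma * rng.normal(size=x.shape)      # noisy evaluations

coef = C.chebfit(x, y, deg=n)                 # least-squares fit, degree n
err = np.max(np.abs(C.chebval(x, coef) - f))  # infinity-norm error vs. f
print(err)                                     # around the sigma*sqrt(n/N) scale
```

Here $\sigma\sqrt{n/N} \approx 0.007$, and the fit error lands near that scale rather than near $\sigma = 0.1$.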

This paper proposes a novel canonical correlation analysis for semiparametric inference in $I(1)/I(0)$ systems via functional approximation. The approach can be applied coherently to panels of $p$ variables with a generic number $s$ of stochastic trends, as well as to subsets or aggregations of variables. This study discusses inferential tools on $s$ and on the loading matrix $\psi$ of the stochastic trends (and on their duals $r$ and $\beta$, the cointegration rank and the cointegrating matrix): asymptotically pivotal test sequences and consistent estimators of $s$ and $r$, $T$-consistent, mixed Gaussian and efficient estimators of $\psi$ and $\beta$, Wald tests thereof, and misspecification tests for checking model assumptions. Monte Carlo simulations show that these tools have reliable performance uniformly in $s$ for small, medium and large-dimensional systems, with $p$ ranging from 10 to 300. An empirical analysis of 20 exchange rates illustrates the methods.
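To fix ideas about the basic tool, textbook canonical correlation analysis computes the singular values of the whitened cross-covariance between two blocks of variables. The sketch below is this generic CCA, shown only for orientation; the paper's procedure is a specialised semiparametric, functional-approximation variant for $I(1)/I(0)$ systems, not this:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between the columns of X and Y, computed as
    singular values of Qx' Qy where Qx, Qy are orthonormal bases of the
    centred column spaces.  Generic textbook CCA for illustration only."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))                  # shared latent component
X = np.hstack([z + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 1))])
Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 1))])
rho = canonical_correlations(X, Y)
print(rho)   # leading correlation near 1 (shared trend), second near 0
```

In the paper's setting, the number of correlations near one plays the role of the count of common stochastic trends $s$.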

Block majorization-minimization (BMM) is a simple iterative algorithm for constrained nonconvex optimization that sequentially minimizes majorizing surrogates of the objective function in each block while the others are held fixed. BMM encompasses a large class of optimization algorithms such as block coordinate descent and its proximal-point variant, expectation-maximization, and block projected gradient descent. We first establish that for general constrained nonsmooth nonconvex optimization, BMM with $\rho$-strongly convex and $L_g$-smooth surrogates can produce an $\epsilon$-approximate first-order optimal point within $\widetilde{O}((1+L_g+\rho^{-1})\epsilon^{-2})$ iterations and asymptotically converges to the set of first-order optimal points. Next, we show that BMM combined with trust-region methods with diminishing radius has an improved complexity of $\widetilde{O}((1+L_g) \epsilon^{-2})$, independent of the inverse strong convexity parameter $\rho^{-1}$, allowing improved theoretical and practical performance with `flat' surrogates. Our results hold robustly even when the convex sub-problems are solved inexactly, as long as the optimality gaps are summable. Central to our analysis is a novel continuous first-order optimality measure, by which we bound the worst-case sub-optimality in each iteration by the first-order improvement the algorithm makes. We apply our general framework to obtain new results on various algorithms such as the celebrated multiplicative update algorithm for nonnegative matrix factorization by Lee and Seung, regularized nonnegative tensor decomposition, and the classical block projected gradient descent algorithm. Lastly, we numerically demonstrate that the additional use of diminishing radius can improve the convergence rate of BMM in many instances.
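The Lee-Seung multiplicative updates mentioned above are the classic instance of BMM: each update exactly minimizes a separable majorizing surrogate of $\|V - WH\|_F^2$ in one block with the other fixed, so the objective is monotonically non-increasing. A minimal sketch of that algorithm (the unregularized Frobenius-norm version, with a small `eps` added for numerical safety):

```python
import numpy as np

def nmf_multiplicative(V, r, n_iter=200, eps=1e-12, seed=0):
    """Lee-Seung multiplicative updates for min ||V - WH||_F^2 over
    W, H >= 0.  Each block update minimizes a separable majorizing
    surrogate, so the objective never increases (a BMM instance)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.uniform(0.1, 1.0, size=(m, r))
    H = rng.uniform(0.1, 1.0, size=(r, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # block update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # block update for W
    return W, H

rng = np.random.default_rng(1)
V = rng.uniform(size=(30, 5)) @ rng.uniform(size=(5, 20))   # rank-5 data
W, H = nmf_multiplicative(V, r=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))        # small residual
```

Because the updates are ratios of nonnegative quantities, nonnegativity of the iterates is preserved automatically, with no projection step.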

We study the numerical approximation of SDEs with singular drifts (including distributions) driven by a fractional Brownian motion. Under the Catellier-Gubinelli condition that imposes the regularity of the drift to be strictly greater than $1-1/(2H)$, we obtain an explicit rate of convergence of a tamed Euler scheme towards the SDE, extending results for bounded drifts. Beyond this regime, when the regularity of the drift is $1-1/(2H)$, we derive a non-explicit rate. As a byproduct, strong well-posedness for these equations is recovered. Proofs use new regularising properties of discrete-time fBm and a new critical Gr\"onwall-type lemma. We present examples and simulations.
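The taming in a tamed Euler scheme replaces the drift increment $b(x)h$ by $b(x)h/(1 + h|b(x)|)$, which bounds each step and keeps the scheme stable for irregular or superlinearly growing drifts. A one-dimensional sketch with standard Brownian noise (the $H = 1/2$ case, for simplicity; the paper's setting is fractional Brownian motion and the drift below is a generic illustrative choice):

```python
import numpy as np

def tamed_euler(x0, b, T, n, rng):
    """Tamed Euler scheme for dX = b(X) dt + dW on [0, T] with n steps.
    The drift increment b(x)*h is tamed to b(x)*h / (1 + h*|b(x)|),
    bounding each step even when b grows superlinearly."""
    h = T / n
    x = x0
    for _ in range(n):
        drift = b(x)
        x = x + drift * h / (1.0 + h * abs(drift)) \
              + rng.normal(0.0, np.sqrt(h))
    return x

rng = np.random.default_rng(0)
b = lambda x: -x**3                      # superlinear mean-reverting drift
samples = np.array([tamed_euler(1.0, b, T=1.0, n=200, rng=rng)
                    for _ in range(2000)])
print(samples.mean(), samples.std())     # iterates stay bounded; no blow-up
```

An untamed Euler scheme with this drift can explode for large excursions; the tamed step size-limits the drift contribution to at most 1 per step.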
