
In this work, we present a positivity-preserving high-order flux reconstruction method for the Boltzmann--BGK equation, augmented with a discrete velocity model that ensures the scheme is discretely conservative. By modeling the internal degrees of freedom, the approach extends to polyatomic molecules and can encompass arbitrary constitutive laws. The approach is validated on a series of large-scale complex numerical experiments, ranging from shock-dominated flows computed on unstructured grids to direct numerical simulation of three-dimensional compressible turbulent flows, the latter of which is the first instance of such a flow computed by directly solving the Boltzmann equation. The results show the ability of the scheme to directly resolve shock structures without any ad hoc numerical shock capturing method and to approximate turbulent flow phenomena in a manner consistent with the hydrodynamic equations.
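As a hedged, minimal sketch of the discrete velocity BGK setting that underlies this class of methods, the following Python snippet evolves a 1D Sod-like problem with first-order upwind transport and explicit relaxation toward a pointwise Maxwellian. It is not the paper's high-order, discretely conservative flux reconstruction scheme: the velocity-domain truncation, resolutions, relaxation time, and the periodic boundaries implied by np.roll are all illustrative assumptions, and the pointwise Maxwellian is only approximately conservative on the truncated velocity grid.

```python
# Minimal 1D discrete-velocity BGK sketch: free transport plus relaxation.
import numpy as np

nx, nv = 200, 64                        # space / velocity resolution (assumed)
x = np.linspace(0.0, 1.0, nx)
v = np.linspace(-6.0, 6.0, nv)          # truncated velocity domain (assumed)
dx, dv = x[1] - x[0], v[1] - v[0]
tau = 1e-2                              # relaxation time (assumed)

def maxwellian(rho, u, T):
    """Pointwise Maxwellian evaluated on the discrete velocity grid."""
    return rho[:, None] / np.sqrt(2 * np.pi * T[:, None]) * \
           np.exp(-(v[None, :] - u[:, None]) ** 2 / (2 * T[:, None]))

def moments(f):
    """Density, bulk velocity, and temperature via velocity quadrature."""
    rho = f.sum(1) * dv
    u = (f * v).sum(1) * dv / rho
    T = (f * (v - u[:, None]) ** 2).sum(1) * dv / rho
    return rho, u, T

# Sod-like initial condition: two equilibrium states
rho0 = np.where(x < 0.5, 1.0, 0.125)
T0 = np.where(x < 0.5, 1.0, 0.8)
f = maxwellian(rho0, np.zeros(nx), T0)

dt = 0.5 * dx / np.abs(v).max()         # CFL-limited time step
for _ in range(400):
    # first-order upwind transport, velocity by velocity (periodic via roll)
    fp = np.where(v[None, :] > 0,
                  f - np.roll(f, 1, axis=0), np.roll(f, -1, axis=0) - f)
    f = f - dt / dx * v[None, :] * fp
    rho, u, T = moments(f)
    f += dt / tau * (maxwellian(rho, u, T) - f)   # BGK relaxation step
```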

Related content

With the success of neural volume rendering in novel view synthesis, neural implicit reconstruction with volume rendering has become popular. However, most methods optimize per-scene functions and are unable to generalize to novel scenes. We introduce VolRecon, a generalizable implicit reconstruction method based on the Signed Ray Distance Function (SRDF). To reconstruct with fine details and little noise, we combine projection features, aggregated from multi-view features with a view transformer, and volume features interpolated from a coarse global feature volume. A ray transformer computes the SRDF values of all samples along a ray to estimate the surface location, which are then used for volume rendering of color and depth. Extensive experiments on DTU and ETH3D demonstrate the effectiveness and generalization ability of our method. On DTU, our method outperforms SparseNeuS by about 30% in sparse view reconstruction and achieves quality comparable to MVSNet in full view reconstruction. Our method also shows good generalization ability on the large-scale ETH3D benchmark. Project page: https://fangjinhuawang.github.io/VolRecon.

We introduce and analyze Structured Stochastic Zeroth order Descent (S-SZD), a finite-difference approach that approximates a stochastic gradient on a set of $l \leq d$ orthogonal directions, where $d$ is the dimension of the ambient space. These directions are randomly chosen and may change at each step. For smooth convex functions, we prove almost sure convergence of the iterates and a convergence rate on the function values of the form $O((d/l)\,k^{-c})$ for every $c < 1/2$, which is arbitrarily close to that of Stochastic Gradient Descent (SGD) in terms of the number of iterations. Our bound also shows the benefit of using $l$ multiple directions instead of one. For non-convex functions satisfying the Polyak-{\L}ojasiewicz condition, we establish the first convergence rates for stochastic zeroth order algorithms under such an assumption. We corroborate our theoretical findings in numerical simulations where the assumptions are satisfied and on the real-world problem of hyper-parameter optimization, observing that S-SZD performs very well in practice.
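The following is a minimal sketch of the S-SZD idea under stated assumptions: at each step, $l$ orthonormal directions are drawn via QR of a Gaussian matrix, a finite-difference surrogate gradient is assembled along them, and a descent step is taken. The step-size and smoothing schedules and the $d/l$ rescaling are illustrative choices, not the schedules analyzed in the paper.

```python
# Hedged sketch of structured stochastic zeroth-order descent.
import numpy as np

def s_szd(f, x0, l, n_iters=2000, alpha0=0.1, h0=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    for k in range(1, n_iters + 1):
        # fresh set of l orthonormal directions at each step
        Q, _ = np.linalg.qr(rng.standard_normal((d, l)))
        alpha, h = alpha0 / np.sqrt(k), h0 / k        # decaying schedules (assumed)
        g = np.zeros(d)
        fx = f(x)
        for i in range(l):
            p = Q[:, i]
            g += (f(x + h * p) - fx) / h * p          # directional FD estimate
        x = x - alpha * (d / l) * g                   # rescaled descent step
    return x

# usage: minimize a simple convex quadratic in d = 20 with l = 5 directions
d = 20
A = np.diag(np.linspace(0.5, 2.0, d))
x_star = s_szd(lambda x: 0.5 * x @ A @ x, np.ones(d), l=5)
print(np.linalg.norm(x_star))  # should be small: the minimizer is 0
```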

In this paper, we propose a low-cost, parameter-free, and pressure-robust Stokes solver based on the enriched Galerkin (EG) method with a discontinuous velocity enrichment function. The EG method employs the interior penalty discontinuous Galerkin (IPDG) formulation to weakly impose the continuity of the velocity function. However, the symmetric IPDG formulation, despite the advantage of symmetry, requires considerable computational effort to choose an optimal penalty parameter and to compute the various trace terms. To reduce this effort, we replace the derivatives of the velocity function with weak derivatives computed from the geometric data of the elements. The resulting modified EG (mEG) method is therefore a parameter-free numerical scheme with reduced computational complexity and optimal rates of convergence. Moreover, we achieve pressure-robustness for the mEG method by applying a velocity reconstruction operator to the load vector on the right-hand side of the discrete system. The theoretical results are confirmed through numerical experiments with two- and three-dimensional examples.

Transformers have become the state-of-the-art neural network architecture across numerous domains of machine learning. This is partly due to their celebrated ability to transfer and to learn in-context from a few examples. Nevertheless, the mechanisms by which Transformers become in-context learners are not well understood and remain largely a matter of intuition. Here, we argue that training Transformers on auto-regressive tasks can be closely related to well-known gradient-based meta-learning formulations. We start by providing a simple weight construction that shows the equivalence of the data transformations induced by 1) a single linear self-attention layer and by 2) gradient descent (GD) on a regression loss. Motivated by that construction, we show empirically that when training self-attention-only Transformers on simple regression tasks, either the models learned by GD and by the Transformer show great similarity or, remarkably, the weights found by optimization match the construction. We thus show how trained Transformers implement gradient descent in their forward pass. This allows us, at least in the domain of regression problems, to mechanistically understand the inner workings of optimized Transformers that learn in-context. Furthermore, we identify how Transformers surpass plain gradient descent by learning an iterative curvature correction and by learning linear models on deep data representations to solve non-linear regression tasks. Finally, we discuss intriguing parallels to a mechanism identified as crucial for in-context learning, termed the induction head (Olsson et al., 2022), and show how it can be understood as a specific case of in-context learning by gradient descent within Transformers.
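A hedged numerical check of the kind of weight construction described above: with keys and queries reading the input part of each token, values reading the label part, and the query token carrying a zero label, the linear self-attention prediction for the query coincides with the prediction after one GD step on the in-context regression loss starting from zero weights. Scalar targets and the specific learning rate are simplifying assumptions.

```python
# Linear self-attention vs. one gradient-descent step on in-context regression.
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 32                          # input dim, number of context pairs
W_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))       # context inputs
y = X @ W_true                        # context targets
x_q = rng.standard_normal(d)          # query input
eta = 1.0 / n                         # GD learning rate (assumed)

# (1) one GD step on L(W) = 1/2 sum_j (W.x_j - y_j)^2, starting from W = 0
W1 = eta * (y[:, None] * X).sum(0)    # W1 = eta * sum_j y_j x_j
pred_gd = W1 @ x_q

# (2) linear self-attention with constructed weights: keys/queries read the
# x-part of each token, values read the y-part; the query token carries y = 0
pred_attn = eta * sum(y[j] * (X[j] @ x_q) for j in range(n))

print(np.isclose(pred_gd, pred_attn))  # True: identical predictions
```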

We propose a First-Order System Least Squares (FOSLS) method based on deep learning for numerically solving second-order elliptic PDEs. The proposed method can handle both variational and non-variational problems, and because of its meshless nature, it can also deal with problems posed in high-dimensional domains. We prove the $\Gamma$-convergence of the neural network approximation towards the solution of the continuous problem, and extend the convergence proof to some well-known related methods. Finally, we present several numerical examples illustrating the performance of our discretization.
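A minimal sketch of a deep FOSLS discretization under stated assumptions, for the model problem $-\Delta u = f$ on the unit square with homogeneous Dirichlet data: the network outputs $(u, \sigma)$, and the loss is the least-squares residual of the first-order system $\sigma = \nabla u$, $\nabla \cdot \sigma = -f$ at random collocation points, plus a boundary penalty. The architecture, sampling, penalty weight, and manufactured right-hand side are illustrative choices, not the paper's setup.

```python
# Hedged deep-FOSLS sketch for -Laplace(u) = f on (0,1)^2, u = 0 on the boundary.
import math
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 3))                     # outputs (u, sigma_x, sigma_y)

def f_rhs(x):
    # manufactured source for the exact solution u = sin(pi x) sin(pi y)
    return 2 * math.pi**2 * torch.sin(math.pi * x[:, 0]) * torch.sin(math.pi * x[:, 1])

def boundary_points(m):
    # m random points on each of the four edges of the unit square
    t = torch.rand(m, 1)
    z, o = torch.zeros(m, 1), torch.ones(m, 1)
    return torch.cat([torch.cat([z, t], 1), torch.cat([o, t], 1),
                      torch.cat([t, z], 1), torch.cat([t, o], 1)])

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x = torch.rand(512, 2, requires_grad=True)  # interior collocation points
    out = net(x)
    u, sigma = out[:, 0], out[:, 1:]
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    div_sigma = sum(torch.autograd.grad(sigma[:, i].sum(), x,
                                        create_graph=True)[0][:, i]
                    for i in range(2))
    # least-squares residuals of the first-order system + boundary penalty
    loss = ((sigma - grad_u) ** 2).sum(1).mean() \
         + ((div_sigma + f_rhs(x)) ** 2).mean() \
         + 10.0 * (net(boundary_points(128))[:, 0] ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```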

Extreme streamflow is a key indicator of flood risk, and quantifying changes in its distribution under non-stationary climate conditions is key to mitigating the impact of flooding events. We propose a non-stationary process mixture model (NPMM) for annual streamflow maxima over the central US (CUS) which uses downscaled climate model precipitation projections to forecast extremal streamflow. Spatial dependence is specified as a convex combination of transformed Gaussian and max-stable processes, indexed by a weight parameter which identifies the asymptotic regime of the process. The weight parameter is modeled as a function of region and of regional precipitation, introducing spatio-temporal non-stationarity into the model. The NPMM is flexible, with desirable tail dependence properties, but yields an intractable likelihood. To address this, we embed a neural network within a density regression model to learn a synthetic likelihood function from simulations of the NPMM with different parameter settings. Our model is fitted to observational data for 1972-2021, and inference is carried out in a Bayesian framework. Annual streamflow maxima forecasts for 2021-2035 indicate an increase in the frequency and magnitude of extreme streamflow, with the changes more pronounced in the largest quantiles of the projected annual streamflow maxima.

A recent line of work has shown remarkable behaviors of the generalization error curves in simple learning models. Even least-squares regression exhibits atypical features such as model-wise double descent, and further works have observed triple or multiple descents. Another important characteristic is the epoch-wise descent structure that emerges during training. Model-wise and epoch-wise descents have been analytically derived in limited theoretical settings (such as the random feature model) and are otherwise experimental observations. In this work, we provide a full and unified analysis of the whole time-evolution of the generalization curve, in the asymptotic large-dimensional regime and under gradient flow, within a wider theoretical setting stemming from a Gaussian covariate model. In particular, we cover most cases already observed, disparately, in the literature, and also provide examples of multiple descent structures arising as a function of a model parameter or of time. Furthermore, we show that our theoretical predictions adequately match the learning curves obtained by gradient descent over realistic datasets. Technically, we compute averages of rational expressions involving random matrices using recent developments in random matrix theory based on "linear pencils". Another contribution, also of independent interest in random matrix theory, is a new derivation of the related fixed-point equations (and an extension thereof) using Dyson Brownian motions.
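A hedged empirical illustration of the model-wise double descent discussed above, using minimum-norm least squares on random ReLU features: the test error typically peaks near the interpolation threshold $p = n$ and descends again as $p$ grows. The data model, noise level, and feature map are illustrative assumptions, not the Gaussian covariate model analyzed in the paper.

```python
# Model-wise double descent of min-norm least squares on random ReLU features.
import numpy as np

rng = np.random.default_rng(0)
d, n, n_test = 20, 100, 2000
beta = rng.standard_normal(d) / np.sqrt(d)
X, Xt = rng.standard_normal((n, d)), rng.standard_normal((n_test, d))
y = X @ beta + 0.1 * rng.standard_normal(n)     # noisy linear teacher
yt = Xt @ beta

for p in [10, 50, 90, 100, 110, 200, 1000]:     # number of random features
    W = rng.standard_normal((d, p)) / np.sqrt(d)
    F, Ft = np.maximum(X @ W, 0), np.maximum(Xt @ W, 0)
    theta = np.linalg.lstsq(F, y, rcond=None)[0]  # minimum-norm solution
    print(p, np.mean((Ft @ theta - yt) ** 2))     # peak expected near p = n
```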

This paper focuses on optimal control problems governed by fourth-order linear elliptic equations with clamped boundary conditions, in the framework of the Hessian discretisation method (HDM). The HDM is an abstract framework that enables the convergence analysis of numerical methods through a quadruplet known as a Hessian discretisation (HD) and three core properties of the HD. The HDM covers several numerical schemes, such as conforming finite element methods, the Adini and Morley non-conforming finite element methods (ncFEMs), methods based on gradient recovery (GR) operators, and finite volume methods (FVMs). Basic error estimates and superconvergence results are established for the state, adjoint, and control variables in the HDM framework. The article concludes with numerical results that illustrate the theoretical convergence rates for the GR method, the Adini ncFEM, and the FVM.

For large-scale data fitting, least-squares progressive iterative approximation is widely used in many applied domains because of its intuitive geometric meaning and efficiency. In this work, we present a randomized progressive iterative approximation (RPIA) for B-spline curve and surface fitting. In each iteration, RPIA locally adjusts the control points according to a random criterion for index selection. The update for each control point is computed following the randomized block coordinate descent method. We provide geometric and algebraic interpretations of RPIA. We prove that RPIA constructs a series of fitting curves (resp., surfaces) whose limit curve (resp., surface) converges in expectation to the least-squares fitting result for the given data points. Numerical experiments confirm our results and show the benefits of RPIA.
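A minimal sketch of the RPIA idea for curve fitting under stated assumptions: at each iteration, a random block of control points is adjusted by the corresponding block of the least-squares residual gradient, i.e. a randomized block coordinate descent step. The knot layout, step size, block size, and initialization below are illustrative choices, not the paper's exact update.

```python
# Hedged RPIA-style randomized block updates for B-spline curve fitting.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)

# data points on a noisy circle, with uniform parameters (chord-length is common)
n = 200
t = np.linspace(0, 1, n)
Q = np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)] \
    + 0.02 * rng.standard_normal((n, 2))

k, m = 3, 15                                    # cubic curve, m control points
knots = np.r_[np.zeros(k), np.linspace(0, 1, m - k + 1), np.ones(k)]
# collocation matrix: column j evaluates basis N_j at all parameters
A = np.stack([BSpline(knots, np.eye(m)[j], k)(t) for j in range(m)], axis=1)

P = A.T @ Q / A.sum(0)[:, None]                 # crude initial control points
mu = 1.0 / np.linalg.norm(A, 2) ** 2            # step size from spectral norm
for _ in range(3000):
    J = rng.choice(m, size=4, replace=False)    # random block of indices
    r = Q - A @ P                               # current fitting residual
    P[J] += mu * (A[:, J].T @ r)                # adjust only the chosen block

# P now approximates the least-squares control points: A^T A P ~ A^T Q
print(np.linalg.norm(A.T @ (Q - A @ P)))        # normal-equation residual, small
```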

In this article, we present a numerical analysis for a third-order differential equation with non-periodic boundary conditions and time-dependent coefficients, namely the linear Korteweg-de Vries-Burgers equation. This analysis is motivated by the dispersive and dissipative phenomena that govern this kind of equation. The work builds on previous methods for dispersive equations with constant coefficients, extending the field to a new class of equations whose time-evolving parameters have so far eluded analysis. More precisely, through the Legendre-Petrov-Galerkin method we prove stability and convergence results for the approximation in appropriate weighted Sobolev spaces. These results reveal the role and trade-offs of the temporal parameters in the model. We then numerically investigate the dispersion-dissipation relation for several profiles and provide insights into the implementation, exhibiting the accuracy and efficiency of our numerical algorithms.
