
Machine learned partial differential equation (PDE) solvers trade the reliability of standard numerical methods for potential gains in accuracy and/or speed. The only way for a solver to guarantee that it outputs the exact solution is to use a convergent method in the limit that the grid spacing $\Delta x$ and timestep $\Delta t$ approach zero. Machine learned solvers, which learn to update the solution at large $\Delta x$ and/or $\Delta t$, can never guarantee perfect accuracy. Some amount of error is inevitable, so the question becomes: how do we constrain machine learned solvers to give us the sorts of errors that we are willing to tolerate? In this paper, we design more reliable machine learned PDE solvers by preserving discrete analogues of the continuous invariants of the underlying PDE. Examples of such invariants include conservation of mass, conservation of energy, the second law of thermodynamics, and/or non-negative density. Our key insight is simple: to preserve invariants, at each timestep apply an error-correcting algorithm to the update rule. Though this strategy is different from how standard solvers preserve invariants, it is necessary to retain the flexibility that allows machine learned solvers to be accurate at large $\Delta x$ and/or $\Delta t$. This strategy can be applied to any autoregressive solver for any time-dependent PDE in arbitrary geometries with arbitrary boundary conditions. Although this strategy is very general, the specific error-correcting algorithms need to be tailored to the invariants of the underlying equations as well as to the solution representation and time-stepping scheme of the solver. The error-correcting algorithms we introduce have two key properties. First, by preserving the right invariants they guarantee numerical stability. Second, in closed or periodic systems they do so without degrading the accuracy of an already-accurate solver.
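To make the error-correction idea concrete, here is a minimal sketch (an illustration of the general strategy, not the specific algorithms introduced in the paper): after an arbitrary learned update, the state is projected back onto the constraint set, in this case enforcing non-negative density and exact conservation of total mass for a 1D finite-volume state on a closed or periodic domain. The update function, grid size, and tolerances are placeholders.

```python
import numpy as np

def corrected_step(learned_step, u, dx):
    """Apply a learned update, then restore discrete invariants.

    Illustrative only: `learned_step` stands in for any autoregressive ML
    update; the correction enforces non-negativity and conservation of
    total mass for a 1D finite-volume state `u` on a periodic domain.
    """
    mass_before = dx * np.sum(u)           # discrete invariant to preserve
    u_next = learned_step(u)               # unconstrained ML prediction
    u_next = np.maximum(u_next, 0.0)       # clip to non-negative density
    # Redistribute the conservation error uniformly over the cells; a full
    # algorithm would iterate so that both constraints hold simultaneously.
    u_next += (mass_before - dx * np.sum(u_next)) / (dx * u.size)
    return u_next

# Usage with a dummy "learned" update that injects a small error:
u0 = np.abs(np.random.rand(64))
u1 = corrected_step(lambda u: u + 1e-3 * np.random.randn(u.size), u0, dx=0.1)
assert np.isclose(np.sum(u1), np.sum(u0))  # total mass conserved to round-off
```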

Related content

Training machine learning (ML) algorithms is a computationally intensive process, which is frequently memory-bound due to repeatedly accessing large training datasets. As a result, processor-centric systems (e.g., CPU, GPU) suffer from costly data movement between memory units and processing units, which consumes large amounts of energy and execution cycles. Memory-centric computing systems, i.e., those with processing-in-memory (PIM) capabilities, can alleviate this data movement bottleneck. Our goal is to understand the potential of modern general-purpose PIM architectures to accelerate ML training. To do so, we (1) implement several representative classic ML algorithms (namely, linear regression, logistic regression, decision tree, and K-Means clustering) on a real-world general-purpose PIM architecture, (2) rigorously evaluate and characterize them in terms of accuracy, performance, and scaling, and (3) compare them to their counterpart implementations on CPU and GPU. Our evaluation on a real memory-centric computing system with more than 2500 PIM cores shows that general-purpose PIM architectures can greatly accelerate memory-bound ML workloads when the necessary operations and datatypes are natively supported by PIM hardware. For example, our PIM implementation of decision tree is $27\times$ faster than a state-of-the-art CPU version on an 8-core Intel Xeon, and $1.34\times$ faster than a state-of-the-art GPU version on an NVIDIA A100. Our K-Means clustering on PIM is $2.8\times$ and $3.2\times$ faster than state-of-the-art CPU and GPU versions, respectively. To our knowledge, our work is the first to evaluate ML training on a real-world PIM architecture. We conclude with key observations, takeaways, and recommendations that can inspire users of ML workloads, programmers of PIM architectures, and hardware designers and architects of future memory-centric computing systems.
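To illustrate why such training workloads are memory-bound, consider the two passes of a K-Means iteration: each one streams the entire dataset from memory while performing only a few arithmetic operations per element. The NumPy sketch below is purely illustrative and is not the paper's PIM implementation.

```python
import numpy as np

def kmeans_assign(X, centroids):
    """K-Means assignment step: label each sample with its nearest centroid.

    Each call reads the full (n x d) dataset X once to compute distances to
    the k centroids -- a memory-bound access pattern (few flops per byte
    moved) of the kind that processing-in-memory aims to accelerate.
    """
    # Squared distances via ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2
    d2 = (np.sum(X * X, axis=1, keepdims=True)
          - 2.0 * X @ centroids.T
          + np.sum(centroids * centroids, axis=1))
    return np.argmin(d2, axis=1)

def kmeans_update(X, labels, k):
    """Centroid update: per-cluster means (a second full pass over X).

    Assumes every cluster is non-empty, since this is only a sketch.
    """
    return np.stack([X[labels == j].mean(axis=0) for j in range(k)])
```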

Subsampling of node sets is useful in contexts such as multilevel methods, computer graphics, and machine learning. On uniform grid-based node sets, the process of subsampling is simple. However, on node sets with high density variation, the process of coarsening a node set through node elimination is more interesting. A novel method for the subsampling of variable density node sets is presented here. Additionally, two novel node set quality measures are presented to determine the ability of a subsampling method to preserve the quality of an initial node set. The new subsampling method is demonstrated on the test problems of solving the Poisson and Laplace equations by multilevel radial basis function-generated finite differences (RBF-FD) iterations. High-order solutions with robust convergence are achieved in linear time with respect to node set size.

The solution of computational fluid dynamics problems is one of the most computationally demanding tasks, especially in the case of complex geometries and turbulent flow regimes. We propose to use Tensor Train (TT) methods, which possess logarithmic complexity in the problem size and have strong similarities with quantum algorithms in how data are represented. We develop the Tensor Train Finite Element Method (TetraFEM) and an explicit numerical scheme for the solution of the incompressible Navier-Stokes equation via Tensor Trains. We test this approach on the simulation of the mixing of liquids in a T-shaped mixer, which, to our knowledge, is the first time tensor methods have been applied in such a non-trivial geometry. As expected, we achieve exponential memory compression of all FEM matrices and demonstrate an exponential speed-up compared to a conventional FEM implementation on dense meshes. In addition, we discuss the possibility of extending this method to quantum computers to solve more complex problems. This paper is based on work we conducted for Evonik Industries AG.
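For readers unfamiliar with the format, the sketch below (illustrative only, not the TetraFEM code) compresses a discretized 1D field of length $2^d$ into a train of small 3-way cores via repeated truncated SVDs; the test function, grid size, and tolerance are chosen only for demonstration. The small internal ranks are the source of the exponential memory compression mentioned above.

```python
import numpy as np

def tt_decompose(vec, d, eps=1e-10):
    """TT-SVD of a length-2**d vector into d cores of shape (r_prev, 2, r_next).

    Storage drops from O(2**d) to O(d * max_rank**2) entries for fields
    whose TT ranks stay small, e.g. smooth functions on fine grids.
    """
    cores, r = [], 1
    mat = vec.reshape(1, -1)
    for _ in range(d - 1):
        mat = mat.reshape(r * 2, -1)
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))   # truncate small singular values
        cores.append(U[:, :rank].reshape(r, 2, rank))
        mat = s[:rank, None] * Vt[:rank]
        r = rank
    cores.append(mat.reshape(r, 2, 1))
    return cores

# A smooth field sampled on 2**16 points compresses to tiny TT ranks:
x = np.linspace(0.0, 1.0, 2**16)
cores = tt_decompose(np.sin(2.0 * np.pi * x), d=16)
print([c.shape for c in cores])  # the internal ranks stay small
```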

We derive several sets of sufficient conditions for the applicability of the new efficient numerical realization of the inverse $Z$-transform. For large $n$, the complexity of the new scheme is dozens of times smaller than that of the trapezoid rule. As applications, the pricing of European options and single barrier options with discrete monitoring is considered; applications to more general options with barrier-lookback features are outlined. In the case of sectorial transition operators, hence for symmetric L\'evy models, the proof is straightforward. In the case of non-symmetric L\'evy models, we construct a non-linear deformation of the dual space which makes the transition operator sectorial, with an arbitrarily small opening angle, and justify the new realization. We impose mild conditions that are satisfied for wide classes of non-symmetric Stieltjes-L\'evy processes.
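As background on the baseline: the trapezoid-rule inversion that the new scheme is compared against discretizes the Cauchy integral for the coefficients on a circle inside the domain of analyticity. The sketch below uses the convention $F(z) = \sum_{m \ge 0} f_m z^{-m}$ and is purely illustrative; it is not the realization proposed in the paper.

```python
import numpy as np

def inverse_z_trapezoid(F, n, radius=1.0, M=2048):
    """Trapezoid-rule inverse Z-transform for F(z) = sum_{m>=0} f_m z^{-m}.

    Discretizes f_n = (1/(2*pi*i)) * oint F(z) z^(n-1) dz over |z| = radius
    with M equally spaced nodes -- the baseline whose complexity the new
    realization is stated to reduce.
    """
    theta = 2.0 * np.pi * np.arange(M) / M
    z = radius * np.exp(1j * theta)
    return np.real(np.mean(F(z) * z**n))

# Sanity check: F(z) = z / (z - a) is the transform of f_n = a**n (|z| > |a|).
a = 0.5
print(inverse_z_trapezoid(lambda z: z / (z - a), n=5))  # ~ 0.5**5 = 0.03125
```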

Coronal Mass Ejections (CMEs) are dramatic expulsions of plasma and magnetic field from the solar corona into the heliosphere. CMEs are scientifically relevant because they are involved in the physical mechanisms characterizing the active Sun. More recently, however, CMEs have attracted attention for their impact on space weather, as they are correlated with geomagnetic storms and may induce the generation of Solar Energetic Particle streams. In this space weather framework, the present paper introduces a physics-driven artificial intelligence (AI) approach to the prediction of CME travel time, in which the deterministic drag-based model is exploited to improve the training phase of a cascade of two neural networks fed with both remote sensing and in-situ data. This study shows that the use of physical information in the AI architecture significantly improves both the accuracy and the robustness of the travel time prediction.
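For reference, the deterministic drag-based model typically used in this context (stated here as background; the exact variant adopted in the paper may differ) propagates the CME front under an aerodynamic drag acceleration, which admits a closed-form kinematic solution for constant drag parameter $\gamma$ and ambient solar-wind speed $w$:

$$
\ddot r = -\gamma\,(\dot r - w)\,\lvert \dot r - w\rvert,
\qquad
v(t) = w + \frac{v_0 - w}{1 \pm \gamma\,(v_0 - w)\,t},
\qquad
r(t) = r_0 + w\,t \pm \frac{1}{\gamma}\ln\!\bigl(1 \pm \gamma\,(v_0 - w)\,t\bigr),
$$

with the plus (minus) sign for $v_0 > w$ ($v_0 < w$). The travel time follows from solving $r(t) = 1\,\mathrm{AU}$, and it is this physics-based prediction that is exploited during the training of the neural-network cascade.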

The problem of packing equal spheres in a spherical container is a classic global optimization problem that has attracted extensive study in academia and found various applications in industry. The problem is computationally challenging, and most efforts in the literature focus on small-scale instances with fewer than 200 spherical items. In this work, we propose an efficient local search heuristic algorithm, named solution space exploring and descent, for solving this problem; it quantifies the quality of a solution to determine the number of exploring actions and quickly discovers high-quality solutions. In addition, we propose an adaptive neighbor object maintenance method to speed up the convergence of the continuous optimization process and reduce time consumption. Computational experiments on a large number of benchmark instances with $5 \leq n \leq 400$ spherical items show that our algorithm significantly outperforms the state-of-the-art algorithm. In particular, it improves 274 and matches 84 of the best-known results across the 396 well-known benchmark instances.
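For concreteness, one standard formulation of the problem (used here only for illustration) scales the items to unit radius and minimizes the container radius $R$ over the item centers $x_1, \dots, x_n \in \mathbb{R}^3$:

$$
\min_{R,\;x_1,\dots,x_n} R
\quad \text{s.t.} \quad
\lVert x_i - x_j \rVert \ge 2 \;\; (1 \le i < j \le n),
\qquad
\lVert x_i \rVert \le R - 1 \;\; (1 \le i \le n).
$$

Local search heuristics of this kind typically work at a fixed $R$ by minimizing a penalty such as the sum of squared pairwise and boundary overlap depths, shrinking $R$ whenever an overlap-free configuration is found.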

This paper proposes a novel model-based policy gradient algorithm for tracking dynamic targets with a mobile robot equipped with an onboard sensor that has a limited field of view. The task is to obtain a continuous control policy for the mobile robot to collect sensor measurements that reduce uncertainty in the target states, as measured by the entropy of the target distribution. We design a neural network control policy that takes the robot $SE(3)$ pose and the mean vector and information matrix of the joint target distribution as inputs, and uses attention layers to handle variable numbers of targets. We also derive the gradient of the target entropy with respect to the network parameters explicitly, allowing efficient model-based policy gradient optimization.
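As background on the objective (a standard identity, not the paper's derivation): for a jointly Gaussian target distribution in $\mathbb{R}^d$ with covariance $\Sigma$ and information matrix $\Omega = \Sigma^{-1}$, the entropy has the closed form

$$
\mathcal{H} = \tfrac{1}{2}\log\det\bigl(2\pi e\,\Sigma\bigr)
            = \tfrac{d}{2}\log(2\pi e) - \tfrac{1}{2}\log\det\Omega,
$$

so reducing target uncertainty amounts to increasing $\log\det\Omega$, and the entropy can be differentiated through the measurement model and the policy network via the chain rule.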

We characterise the learning of a mixture of two clouds of data points with generic centroids via empirical risk minimisation in the high-dimensional regime, under the assumptions of generic convex loss and convex regularisation. Each cloud of data points is obtained by sampling from a possibly uncountable superposition of Gaussian distributions, whose variance has a generic probability density $\varrho$. Our analysis therefore covers a large family of data distributions, including the case of power-law-tailed distributions with no covariance. We study the generalisation performance of the obtained estimator, analyse the role of regularisation, and characterise the dependence of the separability transition on the scale parameters of the distribution.

In this work, we develop a new algorithm to solve large-scale incompressible time-dependent fluid--structure interaction (FSI) problems using a matrix-free finite element method in an arbitrary Lagrangian--Eulerian (ALE) frame of reference. We derive a semi-implicit time integration scheme that improves on the geometry-convective explicit (GCE) scheme for problems involving the interaction between incompressible hyperelastic solids and incompressible fluids. The proposed algorithm relies on the reformulation of the time-discrete problem as a generalized Stokes problem with strongly variable coefficients, for which optimal preconditioners have recently been developed. The resulting algorithm is scalable, optimal, and robust: we test our implementation on model problems that mimic the classical Turek benchmarks in two and three dimensions, and report timing and scalability results.
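To indicate the structure being exploited (a typical time-discrete form, stated as an assumption rather than the paper's exact formulation): with velocity and pressure treated implicitly and geometry and convection treated explicitly, each time step requires the solution of a generalized Stokes problem of the form

$$
\frac{\rho}{\Delta t}\,u - \nabla\cdot\bigl(2\mu\,\varepsilon(u)\bigr) + \nabla p = f,
\qquad
\nabla\cdot u = 0,
$$

where the density $\rho$ and the viscosity-like coefficient $\mu$ take strongly different values in the fluid and solid subdomains; this variable-coefficient structure is what the recently developed preconditioners target.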

The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
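A minimal concrete instance of the implicit-regularization phenomenon described above: for overparametrized linear regression, gradient descent initialized at zero converges to the minimum-norm interpolating solution. The sketch below checks this numerically; it is illustrative and not taken from the survey, and the dimensions and step size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                      # more parameters than samples
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Gradient descent on the squared loss, initialized at zero.
w = np.zeros(d)
lr = 0.01
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y) / n

# Minimum-norm interpolant: w* = X^T (X X^T)^{-1} y (the pseudo-inverse solution).
w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)

print(np.max(np.abs(X @ w - y)))       # ~0: the training data is fit exactly
print(np.linalg.norm(w - w_min_norm))  # ~0: GD found the minimum-norm solution
```

Because the iterates never leave the row space of $X$ when started from zero, the interpolating solution that gradient descent selects is exactly the minimum-norm one, which is the simplest example of the implicit regularization discussed in the survey.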
