
Rapid evolution of sensor technology, advances in instrumentation, and progress in data-acquisition software and hardware are providing vast amounts of data on complex phenomena, ranging from the atmospheric environment to large-scale porous formations and biological systems. The tremendous increase in the speed of scientific computing has also made it possible to emulate diverse high-dimensional, multiscale, and multiphysics phenomena that contain elements of stochasticity, and to generate large volumes of numerical data for them in heterogeneous systems. The difficulty, however, is that the governing equations for such phenomena are often not known. A prime example is flow, transport, and deformation processes in macroscopically heterogeneous materials and geomedia. In other cases, the governing equations are only partially known: they either contain coefficients that must be evaluated from data, or they require constitutive relations, such as the relationship between the stress tensor and the velocity gradient for non-Newtonian fluids in the momentum conservation equation, in order to be useful for modeling. Several classes of approaches are emerging to address such problems, based on machine learning, symbolic regression, the Mori-Zwanzig projection operator formulation, sparse identification of nonlinear dynamics, data assimilation, and stochastic optimization and analysis, or a combination of two or more of these approaches. This Perspective describes the latest developments in this highly important area and discusses possible future directions.
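One of the approaches named above, sparse identification of nonlinear dynamics (SINDy), can be illustrated in a few lines. The sketch below is a minimal toy, not any paper's implementation: it recovers a one-dimensional law dx/dt = -2x + x^2 from sampled derivatives by sequentially thresholded least squares over a small candidate library.

```python
import numpy as np

def sindy_1d(x, dx, threshold=0.1, n_iter=10):
    # Candidate library of monomials: [1, x, x^2, x^3]
    theta = np.column_stack([np.ones_like(x), x, x**2, x**3])
    xi = np.linalg.lstsq(theta, dx, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0              # prune negligible terms
        big = ~small
        if big.any():                # refit the surviving terms
            xi[big] = np.linalg.lstsq(theta[:, big], dx, rcond=None)[0]
    return xi

x = np.linspace(-1.0, 1.0, 200)
dx = -2.0 * x + x**2                 # "measured" derivatives of the true law
xi = sindy_1d(x, dx)
print(np.round(xi, 3))               # coefficients ~ [0, -2, 1, 0]
```

The thresholding step is what makes the recovered model sparse and interpretable: only the terms that genuinely appear in the dynamics survive the refitting loop.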

Related content

CASES: International Conference on Compilers, Architectures, and Synthesis for Embedded Systems. Explanation: international conference on embedded-system compilers, architectures, and synthesis. Publisher: ACM. SIT:

Modern climate projections lack adequate spatial and temporal resolution due to computational constraints. A consequence is inaccurate and imprecise prediction of critical processes such as storms. Hybrid methods that combine physics with machine learning (ML) have introduced a new generation of higher fidelity climate simulators that can sidestep Moore's Law by outsourcing compute-hungry, short, high-resolution simulations to ML emulators. However, this hybrid ML-physics simulation approach requires domain-specific treatment and has been inaccessible to ML experts because of a lack of training data and relevant, easy-to-use workflows. We present ClimSim, the largest-ever dataset designed for hybrid ML-physics research. It comprises multi-scale climate simulations developed by a consortium of climate scientists and ML researchers. It consists of 5.7 billion pairs of multivariate input and output vectors that isolate the influence of locally-nested, high-resolution, high-fidelity physics on a host climate simulator's macro-scale physical state. The dataset is global in coverage, spans multiple years at high sampling frequency, and is designed such that resulting emulators are compatible with downstream coupling into operational climate simulators. We implement a range of deterministic and stochastic regression baselines to highlight the ML challenges and how they are scored. The data (//huggingface.co/datasets/LEAP/ClimSim_high-res) and code (//leap-stc.github.io/ClimSim) are released openly to support the development of hybrid ML-physics and high-fidelity climate simulations for the benefit of science and society.
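The deterministic-baseline idea described above can be sketched generically: fit a regression model mapping multivariate input vectors to output vectors and score it with RMSE. The data, shapes, and ridge model below are synthetic stand-ins chosen for illustration, not the actual ClimSim variables or baselines.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 2000, 10, 4
X = rng.normal(size=(n, d_in))                 # stand-in "input vectors"
W_true = rng.normal(size=(d_in, d_out))
Y = X @ W_true + 0.1 * rng.normal(size=(n, d_out))  # stand-in "output vectors"

# Ridge regression emulator: (X^T X + lam I) W = X^T Y
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(d_in), X.T @ Y)

rmse = np.sqrt(np.mean((X @ W - Y)**2))        # scoring metric
print(rmse)                                    # close to the 0.1 noise level
```

A real emulator for this setting would replace the linear map with a deep network and evaluate on held-out years, but the input/output pairing and scoring loop have the same shape.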

In the present work, advanced spatial and temporal discretization techniques are tailored to hyperelastic physics-augmented neural networks, i.e., neural-network-based constitutive models which fulfill all relevant mechanical conditions of hyperelasticity by construction. The framework takes into account the structure of neural-network-based constitutive models, in particular, that their derivatives are more complex compared to analytical models. The proposed framework allows for convenient mixed Hu-Washizu-like finite element formulations applicable to nearly incompressible material behavior. The key feature of this work is a tailored energy-momentum scheme for time discretization, which allows for energy- and momentum-preserving dynamical simulations. Both the mixed formulation and the energy-momentum discretization are applied in finite element analysis. For this, a hyperelastic physics-augmented neural network model is calibrated to data generated with an analytical potential. In all finite element simulations, the proposed discretization techniques show excellent performance. All of this demonstrates that, from a formal point of view, neural networks are essentially mathematical functions. As such, they can be applied in numerical methods as straightforwardly as analytical constitutive models. Nevertheless, their special structure suggests tailoring advanced discretization methods to arrive at compact mathematical formulations and convenient implementations.
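The point that a neural-network potential is "just a mathematical function" can be made concrete. The toy below (an assumed architecture, not the paper's model) builds a tiny potential over the invariants I1 = tr(F^T F) and J = det(F) with non-negative weights and softplus activations, echoing the convexity-by-construction idea of physics-augmented networks, and then obtains the first Piola-Kirchhoff stress P = dW/dF by finite differences, exactly as one would for an analytical model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny one-hidden-layer potential over (I1, J); non-negative weights are a
# toy stand-in for the convexity constraints of physics-augmented networks.
W1 = np.abs(rng.normal(size=(2, 8)))
b1 = rng.normal(size=8)
w2 = np.abs(rng.normal(size=8))

def softplus(z):
    return np.logaddexp(0.0, z)      # numerically stable log(1 + e^z)

def potential(F):
    C = F.T @ F
    I1 = np.trace(C)
    J = np.linalg.det(F)
    h = softplus(np.array([I1, J]) @ W1 + b1)
    return float(h @ w2)

def piola_stress(F, eps=1e-6):
    # P_ij = dW/dF_ij via central differences: the network differentiates
    # like any other smooth constitutive function.
    P = np.zeros_like(F)
    for i in range(3):
        for j in range(3):
            E = np.zeros((3, 3))
            E[i, j] = eps
            P[i, j] = (potential(F + E) - potential(F - E)) / (2 * eps)
    return P

F = np.eye(3) + 0.05 * rng.normal(size=(3, 3))   # deformation gradient
P = piola_stress(F)
print(P.shape)
```

In practice one would use automatic differentiation rather than finite differences, and the paper's mixed formulations add further structure, but the calling convention from the finite element code's side is identical to an analytical model.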

The ultimate goal of studying the magnetopause position is to accurately determine its location. Both traditional empirical computation methods and the currently popular machine learning approaches have shown promising results. In this study, we propose Empirical Physics-Informed Neural Networks (Emp-PINNs), which combine physics-based numerical computation with vanilla machine learning. This new generation of physics-informed neural networks overcomes the limitations of previous methods, which were restricted to solving ordinary and partial differential equations, by incorporating conventional empirical models to aid convergence and enhance the generalization capability of the neural network. Compared to Shue et al. [1998], our model achieves a reduction of approximately 30% in root mean square error. The methodology presented in this study is not only applicable to space research but can also inform studies across various fields, particularly those involving empirical models.
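The "empirical model plus learned correction" structure can be sketched as follows. This is a toy of the general idea, not the authors' Emp-PINN architecture: the Shue-type magnetopause shape r(theta) = r0 * (2 / (1 + cos theta))^alpha serves as the physics-based backbone, and a small fitted residual (here a linear model in cosine features, standing in for a neural network, with synthetic "observations") corrects its systematic error.

```python
import numpy as np

def shue_shape(theta, r0=10.0, alpha=0.58):
    # Shue-type functional form; r0 and alpha are illustrative values, not
    # the solar-wind-dependent fits of the 1998 paper.
    return r0 * (2.0 / (1.0 + np.cos(theta)))**alpha

rng = np.random.default_rng(1)
theta = np.linspace(0.0, 0.6 * np.pi, 100)
# Synthetic observations: empirical shape plus a smooth systematic offset.
obs = shue_shape(theta) + 0.5 * np.cos(2 * theta) + 0.05 * rng.normal(size=100)

base = shue_shape(theta)
feats = np.column_stack([np.ones_like(theta), np.cos(theta), np.cos(2 * theta)])
w = np.linalg.lstsq(feats, obs - base, rcond=None)[0]   # fit the residual only
hybrid = base + feats @ w

rmse_base = np.sqrt(np.mean((obs - base)**2))
rmse_hybrid = np.sqrt(np.mean((obs - hybrid)**2))
print(rmse_hybrid < rmse_base)
```

Because the backbone already captures most of the shape, the learned part only has to model a small residual, which is what aids convergence and generalization in the hybrid approach.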

Personal mobility data from mobile phones and other sensors are increasingly used to inform policymaking during pandemics, natural disasters, and other humanitarian crises. However, even aggregated mobility traces can reveal private information about individual movements to potentially malicious actors. This paper develops and tests an approach for releasing private mobility data, which provides formal guarantees over the privacy of the underlying subjects. Specifically, we (1) introduce an algorithm for constructing differentially private mobility matrices, and derive privacy and accuracy bounds on this algorithm; (2) use real-world data from mobile phone operators in Afghanistan and Rwanda to show how this algorithm can enable the use of private mobility data in two high-stakes policy decisions: pandemic response and the distribution of humanitarian aid; and (3) discuss practical decisions that need to be made when implementing this approach, such as how to optimally balance privacy and accuracy. Taken together, these results can help enable the responsible use of private mobility data in humanitarian response.
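A minimal version of the core construction, releasing an origin-destination count matrix under differential privacy, can be sketched with the standard Laplace mechanism. This is a textbook baseline, not the paper's actual algorithm or its privacy/accuracy bounds: if each person contributes at most one trip, the L1 sensitivity of the count matrix is 1, so adding Laplace(1/epsilon) noise to each cell yields epsilon-differential privacy.

```python
import numpy as np

def private_od_matrix(trips, n_regions, epsilon=1.0, seed=0):
    """Release a differentially private origin-destination count matrix."""
    rng = np.random.default_rng(seed)
    counts = np.zeros((n_regions, n_regions))
    for origin, dest in trips:
        counts[origin, dest] += 1
    # Sensitivity 1 (one trip per person) => Laplace(scale=1/epsilon) per cell.
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    return counts, noisy

trips = [(0, 1), (0, 1), (1, 2), (2, 0), (0, 2)]
true_m, noisy_m = private_od_matrix(trips, n_regions=3, epsilon=2.0)
print(true_m[0, 1])   # true count; the released matrix perturbs every cell
```

Choosing epsilon is exactly the privacy-accuracy trade-off the paper discusses: smaller epsilon means larger noise per cell, which matters most for sparse matrices with many small counts.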

Given tensors $\boldsymbol{\mathscr{A}}, \boldsymbol{\mathscr{B}}, \boldsymbol{\mathscr{C}}$ of size $m \times 1 \times n$, $m \times p \times 1$, and $1\times p \times n$, respectively, their Bhattacharya-Mesner (BM) product will result in a third-order tensor of dimension $m \times p \times n$ and BM-rank of 1 (Mesner and Bhattacharya, 1990). Thus, if a third-order tensor can be written as a sum of a small number of such BM-rank-1 terms, this BM-decomposition (BMD) offers an implicitly compressed representation of the tensor. Motivated by this, in this paper we give a generative model which illustrates that spatio-temporal video data can be expected to have low BM-rank. Then, we discuss non-uniqueness properties of the BMD and give an improved bound on the BM-rank of a third-order tensor. We present and study properties of an iterative algorithm for computing an approximate BMD, including convergence behavior and appropriate choices for starting guesses that allow for the decomposition of our spatio-temporal data into stationary and non-stationary components. Several numerical experiments show the impressive ability of our BMD algorithm to extract important temporal information from video data while simultaneously compressing the data. In particular, we compare our approach with dynamic mode decomposition (DMD): first, we show how the matrix-based DMD can be reinterpreted in tensor BM-product form, then we explain why the low BM-rank decomposition can produce results with superior compression properties while simultaneously providing better separation of stationary and non-stationary features in the data. We conclude with a comparison of our low BM-rank decomposition to two other tensor decompositions, CP and the t-SVDM.
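The BM-rank-1 product of the three tensors above can be written out directly. Under the indexing convention assumed here (the general BM product sums over the shared middle dimension, which is 1 in this case), the entries are $T_{ijk} = A_{i,1,k}\, B_{i,j,1}\, C_{1,j,k}$, which NumPy broadcasting expresses in one line:

```python
import numpy as np

def bm_rank1(A, B, C):
    # A: m x 1 x n,  B: m x p x 1,  C: 1 x p x n  ->  T: m x p x n
    # T[i, j, k] = A[i, 0, k] * B[i, j, 0] * C[0, j, k]
    return (A[:, 0, :][:, None, :]      # m x 1 x n
            * B[:, :, 0][:, :, None]    # m x p x 1
            * C[0, :, :][None, :, :])   # 1 x p x n

rng = np.random.default_rng(0)
m, p, n = 4, 3, 5
A = rng.normal(size=(m, 1, n))
B = rng.normal(size=(m, p, 1))
C = rng.normal(size=(1, p, n))
T = bm_rank1(A, B, C)
print(T.shape)   # (4, 3, 5)
```

A BM-rank-$r$ approximation is then a sum of $r$ such terms, which stores $r(mn + mp + pn)$ numbers instead of $mpn$; this is the implicit compression the decomposition exploits.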

The important phenomenon of "stickiness" of chaotic orbits in low-dimensional dynamical systems has been investigated for several decades, in view of its applications to various areas of physics, such as classical and statistical mechanics, celestial mechanics, and accelerator dynamics. Most of the work to date has focused on two-degree-of-freedom Hamiltonian models, often represented by two-dimensional (2D) area-preserving maps. In this paper, we extend earlier results using a 4-dimensional extension of the 2D McMillan map, and show that a symplectic model of two coupled McMillan maps also exhibits stickiness phenomena in limited regions of phase space. To this end, we employ probability distributions in the sense of the Central Limit Theorem to demonstrate that, as in the 2D case, sticky regions near the origin are also characterized by "weak" chaos and Tsallis entropy, in sharp contrast to the "strong" chaos that extends over much wider domains and is described by Boltzmann-Gibbs statistics. Remarkably, similar stickiness phenomena have been observed in higher-dimensional Hamiltonian systems around unstable simple periodic orbits at various values of the total energy of the system.
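For readers unfamiliar with the 2D McMillan map, one common parameterization (conventions for the nonlinearity parameter vary across the literature) is $x' = y$, $y' = -x + 2\mu y/(1+y^2)$, an area-preserving map whose orbits can be iterated directly:

```python
import numpy as np

def mcmillan(x, y, mu=1.6):
    # One common form of the 2D McMillan map (area-preserving).
    return y, -x + 2.0 * mu * y / (1.0 + y * y)

def orbit(x0, y0, n_steps, mu=1.6):
    pts = np.empty((n_steps, 2))
    x, y = x0, y0
    for i in range(n_steps):
        pts[i] = (x, y)
        x, y = mcmillan(x, y, mu)
    return pts

# The 2D map conserves (in one convention) I = x^2 + y^2 - 2*mu*x*y + x^2*y^2,
# so an orbit started near the origin stays on a bounded level set.
pts = orbit(0.1, 0.0, 10_000)
print(np.isfinite(pts).all())
```

The 4D model studied in the paper couples two such maps symplectically; there the invariant of the single map is destroyed and the interplay of weak and strong chaos described above emerges.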

Representation learning based on multi-task pretraining has become a powerful approach in many domains. In particular, task-aware representation learning aims to learn an optimal representation for a specific target task by sampling data from a set of source tasks, while task-agnostic representation learning seeks to learn a universal representation for a class of tasks. In this paper, we propose a general and versatile algorithmic and theoretic framework for \textit{active representation learning}, where the learner optimally chooses which source tasks to sample from. This framework, along with a tractable meta algorithm, accommodates nearly arbitrary target and source task spaces (from discrete to continuous), covers both task-aware and task-agnostic settings, and is compatible with deep representation learning practices. We provide several instantiations under this framework, from bilinear and feature-based nonlinear to general nonlinear cases. In the bilinear case, by leveraging the non-uniform spectrum of the task representation and the calibrated source-target relevance, we prove that the sample complexity to achieve $\varepsilon$-excess risk on the target scales with $(k^*)^2 \|v^*\|_2^2 \varepsilon^{-2}$, where $k^*$ is the effective dimension of the target and $\|v^*\|_2^2 \in (0,1]$ represents the connection between source and target space. Compared to passive sampling, this can reduce the sample complexity by a factor of up to $d_W$, where $d_W$ is the task space dimension. Finally, we demonstrate different instantiations of our meta algorithm on synthetic datasets and robotics problems, from pendulum simulations to real-world drone flight datasets. On average, our algorithms outperform baselines by $20\%-70\%$.

Deep learning has shown great potential for modeling the physical dynamics of complex particle systems such as fluids (in Lagrangian descriptions). Existing approaches, however, require the supervision of consecutive particle properties, including positions and velocities. In this paper, we consider a partially observable scenario known as fluid dynamics grounding, that is, inferring the state transitions and interactions within the fluid particle systems from sequential visual observations of the fluid surface. We propose a differentiable two-stage network named NeuroFluid. Our approach consists of (i) a particle-driven neural renderer, which incorporates fluid physical properties into the volume rendering function, and (ii) a particle transition model optimized to reduce the differences between the rendered and the observed images. NeuroFluid provides the first solution to unsupervised learning of particle-based fluid dynamics by training these two models jointly. It is shown to reasonably estimate the underlying physics of fluids with different initial shapes, viscosities, and densities. It is a potential alternative approach to understanding complex fluid mechanics, such as turbulence, that is difficult to model using traditional methods of mathematical physics.

Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm, drawing tremendous attention in both academia and industry. A unique characteristic of FL is heterogeneity, which resides in the various hardware specifications and dynamic states across the participating devices. Theoretically, heterogeneity can exert a huge influence on the FL training process, e.g., causing a device to become unavailable for training or unable to upload its model updates. Unfortunately, these impacts have never been systematically studied and quantified in existing FL literature. In this paper, we carry out the first empirical study to characterize the impacts of heterogeneity in FL. We collect large-scale data from 136k smartphones that can faithfully reflect heterogeneity in real-world settings. We also build a heterogeneity-aware FL platform that complies with the standard FL protocol but takes heterogeneity into consideration. Based on the data and the platform, we conduct extensive experiments to compare the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings. Results show that heterogeneity causes non-trivial performance degradation in FL, including up to a 9.2% accuracy drop, 2.32x lengthened training time, and undermined fairness. Furthermore, we analyze potential impact factors and find that device failure and participant bias are two key factors behind the performance degradation. Our study provides insightful implications for FL practitioners. On the one hand, our findings suggest that FL algorithm designers account for heterogeneity during evaluation. On the other hand, our findings urge system providers to design specific mechanisms to mitigate the impacts of heterogeneity.
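The device-failure effect described above is easy to reproduce in miniature. The toy below is an illustration of the mechanism, not the paper's platform or data: federated averaging over synthetic clients whose local optima differ, where unreliable devices randomly fail to upload, so the global model is averaged over a biased subset each round.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim, rounds = 20, 5, 50
optima = rng.normal(size=(n_clients, dim))     # each client's local optimum
fail_prob = np.linspace(0.0, 0.8, n_clients)   # heterogeneous reliability

def fedavg(drop):
    w = np.zeros(dim)
    for _ in range(rounds):
        updates = []
        for c in range(n_clients):
            if drop and rng.random() < fail_prob[c]:
                continue                        # device unavailable this round
            local = w - 0.5 * (w - optima[c])   # one local gradient step
            updates.append(local)
        w = np.mean(updates, axis=0)            # server aggregation
    return w

target = optima.mean(axis=0)                    # the unbiased consensus model
err_ideal = np.linalg.norm(fedavg(drop=False) - target)
err_hetero = np.linalg.norm(fedavg(drop=True) - target)
print(err_ideal < err_hetero)
```

With no drops the global model converges to the mean of the local optima; with reliability correlated across rounds, the aggregate drifts toward the reliable clients, which is precisely the participant-bias effect the study quantifies at scale.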

Over the past few years, we have seen fundamental breakthroughs in core problems in machine learning, largely driven by advances in deep neural networks. At the same time, the amount of data collected in a wide array of scientific domains is dramatically increasing in both size and complexity. Taken together, this suggests many exciting opportunities for deep learning applications in scientific settings. But a significant challenge to this is simply knowing where to start. The sheer breadth and diversity of different deep learning techniques makes it difficult to determine what scientific problems might be most amenable to these methods, or which specific combination of methods might offer the most promising first approach. In this survey, we focus on addressing this central issue, providing an overview of many widely used deep learning models, spanning visual, sequential, and graph-structured data, associated tasks and different training methods, along with techniques to use deep learning with less data and better interpret these complex models --- two central considerations for many scientific use cases. We also include overviews of the full design process, implementation tips, and links to a plethora of tutorials, research summaries, and open-sourced deep learning pipelines and pretrained models developed by the community. We hope that this survey will help accelerate the use of deep learning across different scientific domains.
