
Mesh-based numerical solvers are an important part of many design tool chains. However, accurate simulations such as computational fluid dynamics are time- and resource-consuming, which is why surrogate models are employed to speed up the solution process. Machine-learning-based surrogate models, on the other hand, are fast at predicting approximate solutions but often lack accuracy. Thus, the focus here is the development of the predictor in a predictor-corrector approach, where the surrogate model predicts a flow field and the numerical solver corrects it. This paper scales a state-of-the-art surrogate model from the domain of graph-based machine learning to industry-relevant mesh sizes of a numerical flow simulation. The approach partitions the flow domain, distributes the partitions across multiple GPUs, and provides halo exchange between the partitions during training. The graph neural network operates directly on the numerical mesh and preserves complex geometries as well as all other properties of the mesh. The proposed surrogate model is evaluated on a three-dimensional turbomachinery setup and compared to a traditionally trained distributed model. The results show that the traditional approach produces superior predictions and outperforms the proposed surrogate model. Possible explanations, improvements, and future directions are outlined.
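
To make the halo-exchange idea concrete, here is a minimal single-process sketch that partitions a toy mesh graph and collects the halo (ghost) nodes each partition must read from its neighbor; in the paper's multi-GPU setting this copy would be a device-to-device send/receive before each message-passing step. The toy mesh and all names are illustrative, not the paper's implementation.

```python
# Minimal single-process sketch of halo exchange between two mesh partitions.
import numpy as np

# Toy mesh: 6 nodes, edges as (src, dst) pairs; partition 0 owns nodes 0-2,
# partition 1 owns nodes 3-5. Edge (2, 3) crosses the partition boundary.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
owner = {n: 0 if n < 3 else 1 for n in range(6)}
features = np.arange(6, dtype=np.float64).reshape(6, 1)  # one feature per node

def build_halo(part):
    """Return a partition's own nodes and the remote (halo) nodes it reads."""
    local = {n for n, p in owner.items() if p == part}
    halo = {d for s, d in edges if s in local and d not in local}
    halo |= {s for s, d in edges if d in local and s not in local}
    return local, halo

for part in (0, 1):
    local, halo = build_halo(part)
    # In a multi-GPU run this copy would be a send/recv (e.g. via NCCL or
    # MPI); in this single-process toy it is a plain array read.
    halo_feats = {n: float(features[n, 0]) for n in halo}
    print(f"partition {part}: local={sorted(local)}, halo features={halo_feats}")
```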

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition offers the modeling community an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.

Parallel alternating direction method of multipliers (ADMM) algorithms have gained popularity in statistics and machine learning for their efficient handling of large-sample problems. However, the parallel structure of these algorithms is based on the consensus problem, which can lead to an excessive number of auxiliary variables for high-dimensional data. In this paper, we propose a partition-insensitive parallel framework based on the linearized ADMM (LADMM) algorithm and apply it to solve nonconvex penalized smooth quantile regression problems. Compared to existing parallel ADMM algorithms, ours does not rely on the consensus problem, resulting in a significant reduction in the number of variables that must be updated at each iteration. Notably, the solution of our algorithm is unchanged regardless of how the total sample is divided, a property known as partition-insensitivity. Furthermore, under some mild assumptions, we prove that the iterative sequence generated by the parallel LADMM algorithm converges to a critical point of the nonconvex optimization problem. Numerical experiments on synthetic and real datasets demonstrate the feasibility and validity of the proposed algorithm.
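
As a rough illustration of the linearized-ADMM building block, the following sketch solves an L1-penalized quantile regression, a convex stand-in for the paper's nonconvex penalties, with a serial LADMM loop; the parallel, partition-insensitive structure is not reproduced, and all parameter choices are illustrative.

```python
# Serial LADMM sketch for L1-penalized quantile regression:
#   min_beta  sum_i rho_tau(y_i - x_i'beta) + lam * ||beta||_1,
# reformulated with r = y - X beta and an augmented Lagrangian in (beta, r, z).
import numpy as np

def check_prox(v, t, tau):
    """Proximal operator of the quantile check loss rho_tau with step t."""
    return np.where(v > tau * t, v - tau * t,
                    np.where(v < -(1 - tau) * t, v + (1 - tau) * t, 0.0))

def soft(v, t):
    """Soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ladmm_qr(X, y, tau=0.5, lam=0.1, sigma=1.0, iters=500):
    n, p = X.shape
    beta, r, z = np.zeros(p), y.copy(), np.zeros(n)
    eta = sigma * np.linalg.norm(X, 2) ** 2       # linearization constant
    for _ in range(iters):
        g = X.T @ (z + sigma * (X @ beta + r - y))  # gradient of the
        beta = soft(beta - g / eta, lam / eta)      # linearized quadratic
        r = check_prox(y - X @ beta - z / sigma, 1.0 / sigma, tau)
        z += sigma * (X @ beta + r - y)             # dual ascent step
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta_true = np.zeros(10); beta_true[:3] = [2.0, -1.0, 0.5]
y = X @ beta_true + rng.standard_t(df=3, size=200)  # heavy-tailed noise
print(ladmm_qr(X, y).round(2))
```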

Accurate analytical and numerical modeling of multiscale systems is a daunting task. The need to properly resolve spatial and temporal scales spanning multiple orders of magnitude pushes the limits of both our theoretical models as well as our computational capabilities. Rigorous upscaling techniques enable efficient computation while bounding/tracking errors and helping to make informed cost-accuracy tradeoffs. The biggest challenges arise when the applicability conditions of upscaled models break down. Here, we present a non-intrusive two-way (iterative bottom-up top-down) coupled hybrid model, applied to thermal runaway in battery packs, that combines fine-scale and upscaled equations in the same numerical simulation to achieve predictive accuracy while limiting computational costs. First, we develop two methods with different orders of accuracy to enforce continuity at the coupling boundary. Then, we derive weak (i.e., variational) formulations of the fine-scale and upscaled governing equations for finite element (FE) discretization and numerical implementation in FEniCS. We demonstrate that hybrid simulations can accurately predict the average temperature fields within error bounds determined a priori by homogenization theory. Finally, we demonstrate the computational efficiency of the hybrid algorithm against fine-scale simulations.
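
As a toy illustration of enforcing continuity at the coupling boundary, the sketch below couples a fine-scale 1D heat equation with oscillatory conductivity to an upscaled region carrying the homogenized (harmonic-mean) conductivity, using a relaxed Dirichlet-Neumann iteration. It is a finite-difference stand-in, not the paper's FEniCS finite-element implementation, and all parameters are illustrative.

```python
import numpy as np

# Fine region [0, 0.5]: rapidly oscillating conductivity; upscaled region
# [0.5, 1]: constant homogenized conductivity (the harmonic mean), which in
# 1D admits a closed-form solve. Boundary data: T(0) = 1, T(1) = 0.
eps = 0.02
k_fine = lambda s: 1.0 / (2.0 + np.cos(2 * np.pi * s / eps))
k_eff = 1.0 / np.mean(2.0 + np.cos(2 * np.pi * np.linspace(0, 1, 10001) / eps))

N = 400
x = np.linspace(0.0, 0.5, N + 1)
kf = k_fine(0.5 * (x[:-1] + x[1:]))        # conductivity at the N cell faces
h = x[1] - x[0]

def solve_fine(T_left, T_iface):
    """Solve -(k T')' = 0 on [0, 0.5]; return the flux -k T' at x = 0.5."""
    A = np.zeros((N - 1, N - 1)); b = np.zeros(N - 1)
    for i in range(N - 1):
        A[i, i] = kf[i] + kf[i + 1]
        if i > 0: A[i, i - 1] = -kf[i]
        if i < N - 2: A[i, i + 1] = -kf[i + 1]
    b[0] += kf[0] * T_left
    b[-1] += kf[-1] * T_iface
    T = np.linalg.solve(A, b)
    return kf[-1] * (T[-1] - T_iface) / h

# Dirichlet-Neumann iteration: the fine solve hands its interface flux to
# the upscaled side, which returns an updated interface temperature.
T_i, theta = 0.0, 0.5
for it in range(100):
    q = solve_fine(1.0, T_i)
    T_new = 0.5 * q / k_eff                # linear upscaled profile, T(1)=0
    if abs(T_new - T_i) < 1e-10:
        break
    T_i = (1 - theta) * T_i + theta * T_new
print(f"interface temperature ≈ {T_i:.4f} after {it + 1} iterations (exact ≈ 0.5)")
```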

Distribution-dependent stochastic dynamical systems arise widely in engineering and science. We consider a class of such systems which model the limit behaviors of interacting particles moving in a vector field with random fluctuations. We aim to examine the most likely transition path between equilibrium stable states of the vector field. In the small noise regime, the action functional does not involve the solution of the skeleton equation, which describes the unperturbed deterministic flow of the vector field shifted by the interaction at zero distance. As a result, we are led to study the most likely transition path for a stochastic differential equation without distribution dependency. This enables computing the most likely transition path for these distribution-dependent stochastic dynamical systems with the adaptive minimum action method, and we illustrate our approach in two examples.
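
For intuition, the following sketch minimizes a discretized Freidlin-Wentzell action for a one-dimensional double-well drift between its two stable states. The paper's adaptive minimum action method additionally adapts the temporal grid; this fixed-grid version with a generic optimizer is only illustrative.

```python
# Minimize the discretized action S[phi] = (1/2) * int |phi' - b(phi)|^2 dt
# with endpoints clamped to the stable states -1 and +1 of b(x) = x - x^3.
import numpy as np
from scipy.optimize import minimize

b = lambda x: x - x**3            # drift of dX = b(X) dt + sqrt(eps) dW
T, M = 10.0, 200                  # time horizon and number of grid cells
dt = T / M

def action(inner):
    phi = np.concatenate(([-1.0], inner, [1.0]))   # clamp the endpoints
    dphi = np.diff(phi) / dt
    mid = 0.5 * (phi[:-1] + phi[1:])               # midpoint quadrature
    return 0.5 * dt * np.sum((dphi - b(mid)) ** 2)

x0 = np.linspace(-1.0, 1.0, M + 1)[1:-1]           # straight-line guess
res = minimize(action, x0, method="L-BFGS-B")
path = np.concatenate(([-1.0], res.x, [1.0]))
# For this gradient drift the action approaches 2*DeltaV = 0.5 as T grows.
print(f"minimal action ≈ {res.fun:.4f}; "
      f"path crosses x=0 near t ≈ {dt * np.argmin(np.abs(path)):.2f}")
```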

Collision detection is essential to virtually all robotics applications. However, traditional geometric collision detection methods generally require pre-existing workspace geometry representations; thus, they are unable to infer the collision detection function from sampled data when geometric information is unavailable. Learning-based approaches can overcome this limitation. Following this line of research, we present DeepCollide, an implicit neural representation method for approximating the collision detection function from sampled collision data. As shown by our theoretical analysis and empirical evidence, DeepCollide presents clear benefits over the state of the art in terms of time-cost scalability with respect to training data and degrees of freedom, as well as the ability to accurately represent complex workspace geometries. We publicly release our code.
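
As a toy version of the underlying idea, an implicit neural representation of the collision function, the sketch below trains an MLP to map joint configurations to collision labels produced by a synthetic oracle. The architecture, the oracle, and all hyperparameters are illustrative and are not DeepCollide.

```python
# Learn a collision-detection function from sampled configurations: an MLP
# maps a joint configuration q to the logit of "in collision".
import torch
import torch.nn as nn

DOF = 4
def in_collision(q):                       # toy oracle: a ball-shaped
    return (q.norm(dim=-1) < 1.0).float()  # obstacle in configuration space

model = nn.Sequential(
    nn.Linear(DOF, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    q = torch.empty(256, DOF).uniform_(-2.0, 2.0)   # sampled configurations
    loss = loss_fn(model(q).squeeze(-1), in_collision(q))
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    q_test = torch.empty(1000, DOF).uniform_(-2.0, 2.0)
    pred = (model(q_test).squeeze(-1) > 0).float()  # logit > 0 <=> p > 0.5
    print(f"accuracy: {(pred == in_collision(q_test)).float().mean():.3f}")
```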

Agent-based simulation is a versatile and potent computational modeling technique employed to analyze intricate systems and phenomena spanning diverse fields. However, because of their computational intensity, agent-based models become even more resource-demanding when geographic considerations are introduced. This study examines diverse strategies for crafting a series of agent-based models, named "agent-in-the-cell," which emulate a city. These models incorporate geographical attributes of the city, employ real-world open-source mobility data from SafeGraph's publicly available dataset, and simulate the dynamics of COVID spread under varying scenarios. The "agent-in-the-cell" concept designates that our representative agents, called meta-agents, are linked to specific home cells in the city's tessellation. We scrutinize tessellations of the mobility map with varying complexities and experiment with the agent density, ranging from matching the actual population to reducing the number of (meta-)agents for computational efficiency. Our findings demonstrate that tessellations constructed from the Voronoi diagram of specific location types on the street network preserve the dynamics better than both Census Block Group tessellations and Euclidean-based tessellations. Furthermore, the Voronoi tessellation, as well as a hybrid tessellation combining the Voronoi diagram with Census Block Groups, requires fewer meta-agents to adequately approximate full-scale dynamics. Our analysis spans a range of city sizes in the United States, encompassing small (Santa Fe, NM), medium (Seattle, WA), and large (Chicago, IL) urban areas. This examination also provides valuable insights into the effects of agent-count reduction, varying sensitivity metrics, and the influence of city-specific factors.
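
The cell-assignment step can be illustrated in a few lines: tessellate the city by the Voronoi diagram of chosen seed locations (nearest-seed lookup is equivalent to point-in-Voronoi-cell assignment), attach each meta-agent to its home cell, and downsample agents per cell. The seeds, homes, and downsampling factor below are random stand-ins for the street-network location types and SafeGraph-derived homes used in the paper.

```python
# Voronoi-based "agent-in-the-cell" assignment with per-cell downsampling.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
seeds = rng.uniform(0, 10, size=(50, 2))    # e.g. intersections of one type
homes = rng.uniform(0, 10, size=(5000, 2))  # meta-agent home coordinates

# Nearest-seed lookup is exactly the Voronoi-cell assignment.
cell_of_agent = cKDTree(seeds).query(homes)[1]

# Downsample agents per cell to trade accuracy for compute, as in the
# paper's meta-agent density experiments (the factor 10 is arbitrary).
keep = np.zeros(len(homes), dtype=bool)
for c in range(len(seeds)):
    idx = np.flatnonzero(cell_of_agent == c)
    if len(idx) == 0:
        continue
    keep[rng.choice(idx, size=max(1, len(idx) // 10), replace=False)] = True
print(f"kept {keep.sum()} of {len(homes)} agents across {len(seeds)} cells")
```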

Markov processes are widely used mathematical models for describing dynamic systems in various fields. However, accurately simulating large-scale systems at long time scales is computationally expensive due to the short time steps required for accurate integration. In this paper, we introduce an inference process that maps complex systems into a simplified representational space and models large jumps in time. To achieve this, we propose Time-lagged Information Bottleneck (T-IB), a principled objective rooted in information theory, which aims to capture relevant temporal features while discarding high-frequency information to simplify the simulation task and minimize the inference error. Our experiments demonstrate that T-IB learns information-optimal representations for accurately modeling the statistical properties and dynamics of the original process at a selected time lag, outperforming existing time-lagged dimensionality reduction methods.
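
A rough sketch of the predictive half of such an objective is given below: an encoder is trained with an InfoNCE lower bound on the time-lagged mutual information I(z_t; x_{t+tau}), on a toy noisy-oscillator trajectory. The compression term of T-IB, which penalizes retained high-frequency information about x_t, is omitted, and all architectural choices are illustrative.

```python
# Contrastive (InfoNCE) training of a time-lagged representation.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
T, tau, d_obs, d_z = 5000, 10, 8, 2
c, s = math.cos(0.05), math.sin(0.05)
R = torch.tensor([[c, -s], [s, c]])        # slow rotation of a 2D state
A = torch.randn(2, d_obs)                  # random observation map

state = torch.zeros(T, 2); state[0] = torch.tensor([1.0, 0.0])
for t in range(T - 1):
    state[t + 1] = state[t] @ R.T + 0.02 * torch.randn(2)
x = state @ A + 0.1 * torch.randn(T, d_obs)

enc = nn.Sequential(nn.Linear(d_obs, 64), nn.ReLU(), nn.Linear(64, d_z))
proj = nn.Sequential(nn.Linear(d_obs, 64), nn.ReLU(), nn.Linear(64, d_z))
opt = torch.optim.Adam(list(enc.parameters()) + list(proj.parameters()), lr=1e-3)

for step in range(1000):
    i = torch.randint(0, T - tau, (256,))
    z_now, z_fut = enc(x[i]), proj(x[i + tau])
    logits = z_now @ z_fut.T          # match each z_t with its own x_{t+tau}
    loss = F.cross_entropy(logits, torch.arange(256))
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final InfoNCE loss: {loss.item():.3f} (chance level ≈ {math.log(256):.2f})")
```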

Research on the dynamics of robotic manipulators provides promising support for model-based control. In general, rigorous first-principles-based dynamics modeling and accurate identification of mechanism parameters are critical to achieving high precision in model-based control, while data-driven model reconstruction provides an alternative to this process. Taking the activation level of the data as an indicator, this paper classifies the collected robotic manipulator data by means of the K-means clustering algorithm. Using fundamental prior knowledge, we identify the dynamical properties underlying each class of data. Afterwards, the sparse identification of nonlinear dynamics (SINDy) method is used to reconstruct the dynamics model of the robotic manipulator step by step according to the activation level of the classified data. The simulation results show that the proposed method not only reduces the complexity of the basis function library, enabling the application of the SINDy method to multi-degree-of-freedom robotic manipulators, but also decreases the influence of data noise on the regression results. Finally, dynamic control based on the reconstructed model is deployed on the experimental platform, and the experimental results prove the effectiveness of the proposed method.
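
The SINDy step itself reduces to sequentially thresholded least squares over a candidate function library, as in the toy single-pendulum example below; the paper's K-means classification by activation level and the stepwise per-cluster fitting are not reproduced here, and the library and thresholds are illustrative.

```python
# SINDy via sequentially thresholded least squares (STLSQ) on pendulum data.
import numpy as np

# Simulate theta'' = -0.2 theta' - 9.81 sin(theta) with simple Euler steps.
dt, n = 1e-3, 20000
X = np.zeros((n, 2)); X[0] = [1.0, 0.0]           # columns: theta, omega
for t in range(n - 1):
    th, om = X[t]
    X[t + 1] = [th + dt * om, om + dt * (-0.2 * om - 9.81 * np.sin(th))]

dX = np.gradient(X, dt, axis=0)                   # numerical derivatives

def library(X):
    """Candidate terms: [1, theta, omega, sin(theta), cos(theta), theta*omega]."""
    th, om = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(th), th, om,
                            np.sin(th), np.cos(th), th * om])

def stlsq(Theta, dX, threshold=0.1, iters=10):
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(iters):
        Xi[np.abs(Xi) < threshold] = 0.0          # prune small coefficients
        for k in range(dX.shape[1]):              # refit the active terms
            big = np.abs(Xi[:, k]) >= threshold
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dX[:, k],
                                             rcond=None)[0]
    return Xi

Xi = stlsq(library(X), dX)
print(np.round(Xi.T, 3))  # expect omega' ≈ -0.2*omega - 9.81*sin(theta)
```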

Until high-fidelity quantum computers with a large number of qubits become widely available, classical simulation remains a vital tool for algorithm design, tuning, and validation. We present a simulator for the Quantum Approximate Optimization Algorithm (QAOA). Our simulator is designed with the goal of reducing the computational cost of QAOA parameter optimization and supports both CPU and GPU execution. Our central observation is that the computational cost of both simulating the QAOA state and computing the QAOA objective can be reduced by precomputing the diagonal Hamiltonian that encodes the problem. We reduce the time for a typical QAOA parameter optimization by a factor of eleven for $n = 26$ qubits compared to a state-of-the-art GPU quantum circuit simulator based on cuQuantum. Our simulator is available on GitHub: https://github.com/jpmorganchase/QOKit
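
The core trick can be shown in a toy statevector simulator: precompute the diagonal of the cost Hamiltonian once, so that every QAOA layer reduces to an elementwise phase followed by single-qubit X rotations. The MaxCut instance and all names below are illustrative and unrelated to the QOKit code itself.

```python
# Toy QAOA statevector simulation with a precomputed diagonal cost.
import numpy as np

n = 12
rng = np.random.default_rng(2)
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.3]

# Precompute the diagonal cost Hamiltonian over all 2^n bitstrings (cut size).
bits = (np.arange(2**n)[:, None] >> np.arange(n)) & 1
cost = sum((bits[:, i] != bits[:, j]).astype(float) for i, j in edges)

def qaoa_state(gammas, betas):
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)    # |+>^n
    for gamma, beta in zip(gammas, betas):
        psi *= np.exp(-1j * gamma * cost)              # diagonal phase, O(2^n)
        for q in range(n):                             # mixer: Rx(2*beta) on q
            psi = psi.reshape(2**(n - q - 1), 2, 2**q)
            a, b = psi[:, 0, :].copy(), psi[:, 1, :].copy()
            psi[:, 0, :] = np.cos(beta) * a - 1j * np.sin(beta) * b
            psi[:, 1, :] = np.cos(beta) * b - 1j * np.sin(beta) * a
            psi = psi.reshape(-1)
    return psi

psi = qaoa_state([0.4], [0.3])
print(f"expected cut value: {np.real(np.vdot(psi, cost * psi)):.3f}")
```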

Recently, graph neural networks have been gaining attention for simulating dynamical systems, since their inductive nature lends them zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing volume of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks: Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulation, highlighting the similarities and differences in the inductive biases and graph architectures of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems and compare their performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulating large-scale realistic systems.
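
One of the inductive biases compared, the Hamiltonian formulation, can be sketched without the graph machinery: learn a scalar H(q, p) with a neural network and obtain the dynamics from its symplectic gradient. The single-spring system and all hyperparameters below are illustrative and much simpler than the paper's graph networks on many-body systems.

```python
# Hamiltonian neural network sketch: regress H, not the vector field.
import torch
import torch.nn as nn

H = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def dynamics(qp):
    qp = qp.requires_grad_(True)
    dH = torch.autograd.grad(H(qp).sum(), qp, create_graph=True)[0]
    return torch.stack([dH[:, 1], -dH[:, 0]], dim=1)  # (dH/dp, -dH/dq)

# Training data from a unit spring: dq/dt = p, dp/dt = -q.
opt = torch.optim.Adam(H.parameters(), lr=1e-3)
for step in range(3000):
    qp = torch.randn(256, 2)
    target = torch.stack([qp[:, 1], -qp[:, 0]], dim=1)
    loss = ((dynamics(qp) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# The symplectic-gradient structure conserves the learned H along exact
# trajectories; the explicit Euler rollout below adds a small drift.
qp = torch.tensor([[1.0, 0.0]])
for _ in range(1000):
    qp = (qp + 0.01 * dynamics(qp)).detach()
print(f"state after rollout: {qp.numpy().round(3)}")
```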

We introduce a multi-task setup for identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports the construction of a scientific knowledge graph, which we use to analyze information in the scientific literature.
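
The shared-span-representation idea can be sketched as one encoder producing a vector per candidate span, consumed by separate heads for the three tasks. The dimensions, label-set sizes, and heads below are illustrative and do not reproduce the SciIE architecture.

```python
# Shared span representations feeding entity, relation, and coreference heads.
import torch
import torch.nn as nn

V, D, H, MAX_W = 1000, 64, 128, 4          # vocab, emb dim, hidden, max width
emb = nn.Embedding(V, D)
encoder = nn.LSTM(D, H // 2, bidirectional=True, batch_first=True)

def span_reps(token_ids):
    h, _ = encoder(emb(token_ids))          # (1, T, H) contextual states
    spans, reps = [], []
    T = token_ids.size(1)
    for i in range(T):
        for j in range(i, min(i + MAX_W, T)):
            spans.append((i, j))            # endpoint concatenation
            reps.append(torch.cat([h[0, i], h[0, j]]))
    return spans, torch.stack(reps)         # shared by all three task heads

entity_head = nn.Linear(2 * H, 6)            # illustrative label-set sizes
relation_head = nn.Bilinear(2 * H, 2 * H, 8) # scores a pair of spans
coref_head = nn.Bilinear(2 * H, 2 * H, 1)    # antecedent score for a pair

tokens = torch.randint(0, V, (1, 12))        # stand-in for a real sentence
spans, reps = span_reps(tokens)
print(entity_head(reps).shape,
      relation_head(reps[0:1], reps[1:2]).shape,
      coref_head(reps[0:1], reps[1:2]).shape)
```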
