
The accurate prediction of aerodynamic drag on satellites orbiting in the upper atmosphere is critical to the operational success of modern space technologies, such as satellite-based communication or navigation systems, which have grown rapidly in recent years with the deployment of constellations of satellites in low-Earth orbit. As a result, physics-based models of the ionosphere and thermosphere have emerged as a necessary tool for the prediction of atmospheric outputs under highly variable space weather conditions. This paper proposes a high-fidelity approach to physics-based space weather modeling based on the solution of the Navier-Stokes equations with a high-order discontinuous Galerkin method, combined with a matrix-free strategy suitable for high-performance computing on GPU architectures. The approach consists of a thermospheric model that describes a chemically frozen neutral atmosphere in non-hydrostatic equilibrium driven by the external excitation of the Sun. A novel set of variables is introduced to handle the low densities of the upper atmosphere and to accommodate the wide range of scales present in the problem. At the same time, and unlike most existing approaches, the radial and angular directions are treated in a non-segregated manner. The study presents a set of numerical examples that demonstrate the accuracy of the approximation and validate the approach against observational data along a satellite orbit, including comparisons with estimates from established empirical and physics-based models of the ionosphere-thermosphere system. Finally, a 1D radial derivation of the physics-based model is presented and used for a parametric study of the main thermal quantities under various solar conditions.
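The paper's discretization is far too involved to reproduce here, but one ingredient is easy to illustrate: why a change of variables helps with the extremely low densities. The following minimal numpy sketch, assuming a toy isothermal atmosphere with placeholder constants (not the paper's variable set or parameter values), shows how evolving the logarithm of density keeps the unknown well-scaled across hundreds of kilometers of altitude.

```python
import numpy as np

# Toy isothermal atmosphere: rho(h) = rho0 * exp(-h / H).
# The values below are illustrative placeholders, not the paper's inputs.
rho0 = 1e-7      # density at the base altitude [kg/m^3], rough order of magnitude
H = 50e3         # scale height [m], placeholder
h = np.linspace(0.0, 500e3, 11)   # altitude above the base [m]

rho = rho0 * np.exp(-h / H)
print(f"density spans {rho.max() / rho.min():.1e} : 1")

# Evolving rho directly forces the discretization to resolve quantities
# spanning several orders of magnitude; evolving q = log(rho) instead keeps
# the unknown O(1)-smooth (linear in altitude for this profile).
q = np.log(rho)
print("log-density varies only between", q.min().round(1), "and", q.max().round(1))
```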

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry practitioners. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition gives the modeling community an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
July 24, 2023

We present a machine-learning strategy for finite element analysis of solid mechanics wherein we replace complex portions of a computational domain with a data-driven surrogate. In the proposed strategy, we decompose a computational domain into an "outer" coarse-scale domain that we resolve using a finite element method (FEM) and an "inner" fine-scale domain. We then develop a machine-learned (ML) model for the impact of the inner domain on the outer domain. In essence, for solid mechanics, our machine-learned surrogate performs static condensation of the inner domain degrees of freedom. This is achieved by learning the map from (virtual) displacements on the inner-outer domain interface boundary to forces contributed by the inner domain to the outer domain on the same interface boundary. We consider two such mappings, one that directly maps from displacements to forces without constraints, and one that maps from displacements to forces by virtue of learning a symmetric positive semi-definite (SPSD) stiffness matrix. We demonstrate, in a simplified setting, that learning an SPSD stiffness matrix results in a coarse-scale problem that is well-posed with a unique solution. We present numerical experiments on several exemplars, ranging from finite deformations of a cube to finite deformations with contact of a fastener-bushing geometry. We demonstrate that enforcing an SPSD stiffness matrix is critical for accurate FEM-ML coupled simulations, and that the resulting methods can accurately characterize out-of-sample loading configurations with significant speedups over the standard FEM simulations.
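To make the SPSD idea concrete, here is a minimal numpy sketch of the second mapping on synthetic data: fit the linear displacement-to-force map by least squares, then project it onto the SPSD cone. The sizes, data, and the projection step are illustrative stand-ins; the paper learns this map with ML surrogates trained on FEM data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # interface degrees of freedom (toy size)

# Synthetic "ground truth": an SPSD stiffness for the condensed inner domain.
A = rng.standard_normal((n, n))
K_true = A @ A.T                        # SPSD by construction

# Training pairs: interface displacements u and the forces f = K u that the
# inner domain contributes to the outer domain (noisy, as a learned model is).
U = rng.standard_normal((200, n))
F = U @ K_true.T + 0.01 * rng.standard_normal((200, n))

# Step 1: unconstrained least-squares fit of the linear map u -> f.
K_fit, *_ = np.linalg.lstsq(U, F, rcond=None)
K_fit = K_fit.T

# Step 2: project onto the SPSD cone (symmetrize, clip negative eigenvalues),
# which is what keeps the coupled coarse-scale problem well-posed.
K_sym = 0.5 * (K_fit + K_fit.T)
w, V = np.linalg.eigh(K_sym)
K_spsd = V @ np.diag(np.clip(w, 0.0, None)) @ V.T

print("relative fit error:", np.linalg.norm(K_spsd - K_true) / np.linalg.norm(K_true))
```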

Model-based control requires an accurate model of the system dynamics to control the robot precisely and safely in complex and dynamic environments. Moreover, in the presence of variations in the operating conditions, the model should be continuously refined to compensate for changes in the dynamics. In this paper, we present a self-supervised learning approach that actively models the dynamics of nonlinear robotic systems. We combine offline learning from past experience and online learning from current robot interaction with the unknown environment. These two ingredients enable a highly sample-efficient and adaptive learning process, capable of accurately inferring model dynamics in real time even in operating regimes that greatly differ from the training distribution. Moreover, we design an uncertainty-aware model predictive controller that is heuristically conditioned on the aleatoric (data) uncertainty of the learned dynamics. This controller actively chooses the optimal control actions that (i) optimize the control performance and (ii) improve the efficiency of online learning sample collection. We demonstrate the effectiveness of our method through a series of challenging real-world experiments using a quadrotor system. Our approach showcases high resilience and generalization capabilities by consistently adapting to unseen flight conditions, while significantly outperforming classical and adaptive control baselines.
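As a rough illustration of an uncertainty-aware objective, the sketch below implements random-shooting MPC on a toy one-dimensional system, with a hand-written stand-in for the learned mean/variance model. The dynamics, the variance model, and the penalty weight are all placeholder assumptions, not the paper's quadrotor setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def learned_dynamics(x, u):
    """Stand-in for the learned probabilistic model: returns the mean next
    state and a placeholder aleatoric variance that grows with input size."""
    mean = x + 0.1 * u                      # toy 1D dynamics
    var = 0.01 + 0.1 * np.abs(u)            # assumed heteroscedastic noise
    return mean, var

def mpc_action(x0, x_goal, horizon=10, n_samples=256, lam=5.0):
    """Random-shooting MPC: sample action sequences, roll out the learned
    model, and penalize tracking error plus predicted (aleatoric) variance."""
    U = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))
    costs = np.zeros(n_samples)
    for i, u_seq in enumerate(U):
        x, cost = x0, 0.0
        for u in u_seq:
            x, var = learned_dynamics(x, u)
            cost += (x - x_goal) ** 2 + lam * var   # uncertainty-aware cost
        costs[i] = cost
    return U[np.argmin(costs), 0]           # apply first action, then re-plan

x = 0.0
for step in range(20):
    x = x + 0.1 * mpc_action(x, x_goal=1.0)  # "true" plant matches the mean here
print("final state:", round(x, 3))
```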

Given data on the choices made by consumers for different offer sets, a key challenge is to develop parsimonious models that describe and predict consumer choice behavior while being amenable to prescriptive tasks such as pricing and assortment optimization. The marginal distribution model (MDM) is one such model, which requires only the specification of the marginal distributions of the random utilities. This paper aims to establish necessary and sufficient conditions for given choice data to be consistent with the MDM hypothesis, inspired by the utility of similar characterizations for the random utility model (RUM). This endeavor leads to an exact characterization of the set of choice probabilities that the MDM can represent. Verifying the consistency of choice data with this characterization is equivalent to solving a polynomial-sized linear program. Since the analogous verification task for RUM is computationally intractable and neither of these models subsumes the other, the MDM helps strike a balance between tractability and representational power. The characterization can conveniently be combined with robust optimization to make data-driven sales and revenue predictions for new, unseen assortments. When the choice data lack consistency with the MDM hypothesis, finding the best-fitting MDM choice probabilities reduces to solving a mixed-integer convex program. The results extend naturally to the case where the alternatives can be grouped based on the similarity of the marginal distributions of the utilities. Numerical experiments show that the MDM provides better representational power and prediction accuracy than multinomial logit and significantly better computational performance than RUM.
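The exact MDM characterization is not reproduced here, but the computational pattern it enables is easy to sketch: consistency checking becomes a linear-programming feasibility problem. In the snippet below the constraint system is an explicitly labeled toy placeholder, standing in for the paper's polynomial-sized characterization.

```python
import numpy as np
from scipy.optimize import linprog

# Observed choice probabilities for a few offer sets (toy data).
p = np.array([0.5, 0.3, 0.2, 0.6, 0.4])

# Placeholder linear system A_eq x = b_eq with x >= 0. In the real test, the
# auxiliary variables x and the constraints would come from the paper's MDM
# characterization; here they are arbitrary stand-ins for the pattern.
A_eq = np.array([[1.0, 1.0, 0.0],
                 [0.0, 1.0, 1.0]])
b_eq = np.array([p[0] + p[1], p[3]])

# Feasibility LP: any bounded objective works, so we minimize zero.
res = linprog(c=np.zeros(3), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print("data consistent with the (placeholder) characterization:", res.success)
```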

Atmospheric near-surface wind speed and wind direction play an important role in many applications, ranging from air quality modeling, building design, and wind turbine placement to climate change research. It is therefore crucial to accurately estimate the joint probability distribution of wind speed and direction. In this work we develop a conditional approach to model these two variables, where the joint distribution is decomposed into the product of the marginal distribution of wind direction and the conditional distribution of wind speed given wind direction. To accommodate the circular nature of wind direction, a von Mises mixture model is used; the conditional wind speed distribution is modeled as a direction-dependent Weibull distribution via a two-stage estimation procedure, consisting of directionally binned Weibull parameter estimation followed by a harmonic regression to estimate the dependence of the Weibull parameters on wind direction. A Monte Carlo simulation study indicates that our method outperforms an alternative method based on periodic spline quantile regression in terms of estimation efficiency. We illustrate our method by using the output from a regional climate model to investigate how the joint distribution of wind speed and direction may change under future climate scenarios.
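A minimal sketch of the two-stage wind speed estimation, on synthetic data rather than climate model output (the bin count, the true direction-dependence, and the first-order harmonic are illustrative choices):

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(2)

# Synthetic data: wind direction theta (radians) and a direction-dependent
# Weibull wind speed, standing in for observations or climate-model output.
theta = rng.uniform(0, 2 * np.pi, 5000)
true_scale = 8.0 + 2.0 * np.cos(theta)          # scale varies with direction
speed = weibull_min.rvs(c=2.0, scale=true_scale, random_state=rng)

# Stage 1: estimate Weibull parameters in directional bins.
n_bins = 12
edges = np.linspace(0, 2 * np.pi, n_bins + 1)
centers = 0.5 * (edges[:-1] + edges[1:])
scales = np.empty(n_bins)
for i in range(n_bins):
    sel = (theta >= edges[i]) & (theta < edges[i + 1])
    _, _, scales[i] = weibull_min.fit(speed[sel], floc=0)

# Stage 2: harmonic regression of the binned parameters on direction,
# scale(theta) ~ a0 + a1*cos(theta) + b1*sin(theta).
X = np.column_stack([np.ones(n_bins), np.cos(centers), np.sin(centers)])
coef, *_ = np.linalg.lstsq(X, scales, rcond=None)
print("fitted harmonic coefficients:", coef.round(2))   # roughly (8, 2, 0)
```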

We consider single-phase flow with solute transport, where ions in the fluid can precipitate and form a mineral, and where the mineral can dissolve and release solute into the fluid. Such a setting includes an evolving interface between fluid and mineral. We approximate the evolving interface with a diffuse interface, which is modeled with an Allen-Cahn equation. We also include temperature effects, so that the reaction rate can depend on temperature, and allow heat conduction through fluid and mineral. As the Allen-Cahn equation is generally not conservative due to curvature-driven motion, we include a conservative reformulation. This reformulation contains a non-local term which makes standard Newton iterations for the resulting non-linear system of equations very slow. We instead apply L-scheme iterations, which can be proven to converge for any starting guess, albeit only at a linear rate. The three coupled equations for the diffuse interface, solute transport, and heat transport are solved via an iterative coupling scheme. This allows the three equations to be solved more efficiently than with a monolithic scheme, and only a few iterations are needed for high accuracy. Through numerical experiments we highlight the usefulness and efficiency of the suggested numerical scheme and the applicability of the resulting model.
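To make the L-scheme concrete, the sketch below applies it to a scalar stand-in problem rather than the paper's coupled system: one implicit Euler step of an Allen-Cahn-type ODE with the double-well derivative W'(u) = u^3 - u. The constant L replaces the Newton Jacobian, which is what buys convergence from any starting guess, at a linear rate.

```python
import numpy as np

# One implicit Euler step of u' = -W'(u) with W'(u) = u^3 - u:
# find u solving (u - u_old)/dt + W'(u) = 0.
dt = 0.1
u_old = np.linspace(-1.5, 1.5, 7)       # a batch of starting states

def residual(u):
    return (u - u_old) / dt + u**3 - u

# L-scheme: (1/dt + L)(u_next - u) = -residual(u). Choosing L at least as
# large as an upper bound on |W''| over the relevant range gives guaranteed,
# if only linear, convergence; no Jacobian assembly is needed.
L = 3 * 1.5**2                           # bound on |3u^2 - 1| for |u| <= 1.5
u = u_old.copy()
for k in range(100):
    u = u - residual(u) / (1.0 / dt + L)
    if np.max(np.abs(residual(u))) < 1e-10:
        break
print(f"converged in {k + 1} iterations, max residual {np.max(np.abs(residual(u))):.1e}")
```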

Despite temperature rise being a first-order design constraint, traditional thermal estimation techniques have severe limitations in modeling critical aspects affecting the temperature of modern-day chips. Existing thermal modeling techniques often ignore the effects of parameter variation, which can lead to significant errors. Such methods also ignore the dependence of thermal conductivity on temperature and its variation. Leakage power is likewise incorporated inadequately by state-of-the-art techniques. Thermal modeling has to be repeated at least thousands of times in the design cycle, so speed is of utmost importance. To overcome these limitations, we propose VarSim, an ultrafast thermal simulator based on Green's functions. Green's functions have been shown to be faster than traditional finite-difference and finite-element approaches but have rarely been employed in thermal modeling. We therefore propose a new Green's function-based method that captures the effects of leakage power as well as process variation analytically. We provide a closed-form solution for the Green's function considering the effects of variation on the process, temperature, and thermal conductivity. In addition, we propose a novel way of dealing with the anisotropy introduced by process variation by splitting the Green's functions into shift-variant and shift-invariant components. Since our solutions are analytical expressions, we obtain speedups of several orders of magnitude over state-of-the-art proposals, with a mean absolute error limited to 4% for a wide range of test cases. Furthermore, our method accurately captures both the steady-state and the transient variation in temperature.
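The closed-form Green's functions derived in the paper are not reproduced here, but the mechanism behind the speedups is easy to demonstrate: for the shift-invariant component, the temperature rise is the convolution of the power map with the Green's function, computable via FFT. The Gaussian kernel and power map below are toy placeholders, not the paper's analytical solution.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy 64x64 chip power map [W per cell]: two hotspots on a low background.
P = np.full((64, 64), 0.01)
P[16:20, 16:20] += 0.5
P[40:48, 30:34] += 0.3

# Placeholder shift-invariant Green's function (temperature rise per unit
# power); the paper derives this kernel analytically, including leakage and
# variation effects, rather than assuming a Gaussian as we do here.
x = np.arange(-16, 17)
X, Y = np.meshgrid(x, x)
G = np.exp(-(X**2 + Y**2) / (2 * 4.0**2))
G /= G.sum()

# Superposition: for a linear(ized) heat equation, the steady-state response
# is power convolved with the Green's function; FFT-based convolution is
# what makes Green's function simulators so fast.
T_rise = fftconvolve(P, G, mode="same")
print("peak temperature rise (arb. units):", T_rise.max().round(4))
```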

We consider the inverse acoustic obstacle problem for sound-soft star-shaped obstacles in two dimensions, wherein the boundary of the obstacle is determined from measurements of the scattered field at a collection of receivers outside the object. One of the standard approaches for solving this problem is to reformulate it as an optimization problem: finding the boundary of the domain that minimizes the $L^2$ distance between computed values of the scattered field and the given measurement data. The optimization problem is computationally challenging since the region of local convexity shrinks with increasing frequency, resulting in a growing number of local minima in the vicinity of the true solution. In many practical experimental settings, low-frequency measurements are unavailable due to limitations of the experimental setup or the sensors used for measurement. Obtaining a good initial guess for the optimization problem therefore plays a vital role in this regime. We present a neural network warm-start approach for solving the inverse scattering problem, where an initial guess for the optimization problem is obtained using a trained neural network. We demonstrate the effectiveness of our method with several numerical examples. For high-frequency problems, this approach outperforms traditional iterative methods such as Gauss-Newton initialized without any prior (i.e., initialized using a unit circle), or initialized using the solution of a direct method such as the linear sampling method. The algorithm remains robust to noise in the scattered field measurements and also converges to the true solution for limited-aperture data. However, the number of training samples required to train the neural network scales exponentially with the frequency and the complexity of the obstacles considered. We conclude with a discussion of this phenomenon and potential directions for future research.
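A full Helmholtz solver is out of scope for a sketch, so the snippet below mimics the warm-start pipeline on a toy oscillatory forward map (an assumed stand-in with a frequency-like parameter, not a scattering model): train a small network to invert simulated data, then refine its prediction with local least squares, and run the naive zero (unit-circle-like) initialization alongside for comparison.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)   # receiver angles
kappa = 6.0                                          # "frequency" of the toy problem

def forward(a):
    """Placeholder forward map from star-shaped boundary coefficients to
    oscillatory 'measurements'; a stand-in for the scattering solve."""
    r = 1.0 + a[0] * np.cos(t) + a[1] * np.cos(2 * t) + a[2] * np.cos(3 * t)
    return np.sin(kappa * r)

# Train an MLP to invert the forward map from simulated (data, shape) pairs.
A_train = rng.uniform(-0.2, 0.2, size=(2000, 3))
D_train = np.array([forward(a) for a in A_train])
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(D_train, A_train)

# Inverse problem for unseen data: NN warm start, then local optimization.
a_true = np.array([0.15, -0.1, 0.05])
d_meas = forward(a_true)
a0_net = net.predict(d_meas.reshape(1, -1))[0]
fit_net = least_squares(lambda a: forward(a) - d_meas, a0_net)
fit_naive = least_squares(lambda a: forward(a) - d_meas, np.zeros(3))

print("warm-start error:", np.linalg.norm(fit_net.x - a_true).round(4))
print("naive-init error:", np.linalg.norm(fit_naive.x - a_true).round(4))
```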

With computational models becoming more expensive and complex, surrogate models have gained increasing attention in many scientific disciplines and are often necessary for conducting sensitivity studies, parameter optimization, and similar tasks. In the discipline of uncertainty quantification (UQ), model input quantities are often described by probability distributions. For the construction of surrogate models, space-filling designs are generated in the input space to define training points, and the computational model is then evaluated at these points. The physical parameter space is often transformed into an i.i.d. uniform input space so that space-filling training procedures can be applied in a sensible way. Owing to this transformation, however, surrogate modeling techniques tend to lose prediction accuracy. We therefore propose a new method in which input parameter transformations are applied to the basis functions of universal kriging. To speed up hyperparameter optimization for universal kriging, suitable expressions for efficient gradient-based optimization are developed. Several benchmark functions are investigated and the proposed method is compared with conventional approaches.
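A minimal numpy sketch of universal kriging with transformed basis functions, assuming a one-dimensional toy model, a standard-normal physical input, and a fixed kernel length-scale (the paper additionally optimizes hyperparameters with the gradient expressions it develops):

```python
import numpy as np
from scipy.stats import norm
from scipy.spatial.distance import cdist

def model(x_phys):
    return np.sin(3 * x_phys) + 0.5 * x_phys        # toy computational model

# Training design in the i.i.d. uniform space, as space-filling methods expect;
# the physical input is assumed Gaussian, so the inverse CDF maps back.
u_train = np.linspace(0.02, 0.98, 12).reshape(-1, 1)
y = model(norm.ppf(u_train)).ravel()

# Universal kriging in uniform space, but with the trend basis evaluated on
# the *transformed* (physical) inputs, mirroring the paper's modification.
def basis(u):
    return np.column_stack([np.ones(len(u)), norm.ppf(u).ravel()])

ell = 0.15                                          # fixed length-scale (no tuning here)
K = np.exp(-cdist(u_train, u_train)**2 / (2 * ell**2)) + 1e-10 * np.eye(len(u_train))
F = basis(u_train)

# Solve the dual universal kriging system [[K, F], [F^T, 0]] [w; b] = [y; 0].
Z = np.zeros((F.shape[1], F.shape[1]))
KKT = np.block([[K, F], [F.T, Z]])
sol = np.linalg.solve(KKT, np.concatenate([y, np.zeros(F.shape[1])]))
w, b = sol[:len(y)], sol[len(y):]

u_test = np.array([[0.5], [0.9]])
k_star = np.exp(-cdist(u_test, u_train)**2 / (2 * ell**2))
pred = k_star @ w + basis(u_test) @ b
print("prediction vs truth:", pred.round(3), model(norm.ppf(u_test)).ravel().round(3))
```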

The challenging deployment of compute-intensive applications from domains such as Artificial Intelligence (AI) and Digital Signal Processing (DSP) is forcing the computing systems community to explore new design approaches. Approximate Computing has emerged as a solution, allowing designers to tune the quality of results in order to improve energy efficiency and/or performance. This radical paradigm shift has attracted interest from both academia and industry, resulting in significant research on approximation techniques and methodologies at different design layers (from systems down to integrated circuits). Motivated by the wide appeal of Approximate Computing over the last 10 years, we conduct a two-part survey to cover key aspects (e.g., terminology and applications) and review the state-of-the-art approximation techniques from all layers of the traditional computing stack. In Part II of our survey, we classify and present the technical details of application-specific and architectural approximation techniques, which both target the design of resource-efficient processors/accelerators and systems. Moreover, we present a detailed analysis of the application spectrum of Approximate Computing and discuss open challenges and future directions.
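For readers new to the paradigm, here is a minimal illustration, not taken from the survey itself, of the quality/effort knob that approximation techniques expose, using loop perforation (skipping iterations) on a simple reduction:

```python
import numpy as np

# Loop perforation: execute only every skip-th iteration of a reduction,
# trading result quality for proportionally less work. This is one classic
# software-level approximation technique among those the survey classifies.
x = np.random.default_rng(6).uniform(0, 1, 1_000_000)

exact = x.mean()
for skip in (2, 4, 8):
    approx = x[::skip].mean()             # roughly 1/skip of the work
    err = abs(approx - exact) / exact
    print(f"skip={skip}: relative error {err:.2e}, work ~{100 // skip}% of baseline")
```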

Knowledge graph embedding, which aims to represent entities and relations as low-dimensional vectors (or matrices, tensors, etc.), has been shown to be a powerful technique for predicting missing links in knowledge graphs. Existing knowledge graph embedding models mainly focus on modeling relation patterns such as symmetry/antisymmetry, inversion, and composition. However, many existing approaches fail to model semantic hierarchies, which are common in real-world applications. To address this challenge, we propose a novel knowledge graph embedding model---namely, Hierarchy-Aware Knowledge Graph Embedding (HAKE)---which maps entities into the polar coordinate system. HAKE is inspired by the fact that concentric circles in the polar coordinate system can naturally reflect hierarchy. Specifically, the radial coordinate aims to model entities at different levels of the hierarchy, with entities at higher levels expected to have smaller radii; the angular coordinate aims to distinguish entities at the same level of the hierarchy, which are expected to have roughly the same radii but different angles. Experiments demonstrate that HAKE can effectively model the semantic hierarchies in knowledge graphs, and significantly outperforms existing state-of-the-art methods on benchmark datasets for the link prediction task.
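A sketch of the polar-coordinate scoring, paraphrasing the distance function from the paper (the released model adds bias/mixture terms omitted here):

```python
import numpy as np

def hake_distance(h_mod, h_phase, r_mod, r_phase, t_mod, t_phase, lam=0.5):
    """HAKE-style distance: a radial (modulus) part, where element-wise
    scaling of moduli models hierarchy levels, plus an angular (phase) part
    that separates entities within the same level. Lower is more plausible."""
    d_mod = np.linalg.norm(h_mod * r_mod - t_mod)
    d_phase = np.abs(np.sin((h_phase + r_phase - t_phase) / 2)).sum()
    return d_mod + lam * d_phase

rng = np.random.default_rng(5)
dim = 8
h_m, t_m = rng.uniform(0.5, 1.5, dim), rng.uniform(0.5, 1.5, dim)
h_p, t_p = rng.uniform(0, 2 * np.pi, dim), rng.uniform(0, 2 * np.pi, dim)

# A relation that exactly maps h onto t should score better than a random one.
r_m_good, r_p_good = t_m / h_m, (t_p - h_p) % (2 * np.pi)
r_m_rand, r_p_rand = rng.uniform(0.5, 1.5, dim), rng.uniform(0, 2 * np.pi, dim)
print("matched relation:", hake_distance(h_m, h_p, r_m_good, r_p_good, t_m, t_p).round(3))
print("random relation :", hake_distance(h_m, h_p, r_m_rand, r_p_rand, t_m, t_p).round(3))
```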
