
In this paper, we present a unified nonequilibrium model of continuum mechanics for compressible multiphase flows. The model, which is formulated within the framework of Symmetric Hyperbolic Thermodynamically Compatible (SHTC) equations, can describe an arbitrary number of phases that can be heat-conducting inviscid or viscous fluids, as well as elastoplastic solids. The phases are allowed to have different velocities, pressures, temperatures, and shear stresses, while the material interfaces are treated as diffuse interfaces with the volume fraction playing the role of the interface field. To relate our model to other multiphase approaches, we reformulate the SHTC governing equations in terms of the phase state parameters and put them in the form of Baer-Nunziato-type models. This Baer-Nunziato form of the SHTC equations is then solved numerically using a robust second-order path-conservative MUSCL-Hancock finite volume method on Cartesian meshes. Because the resulting governing equations are very challenging, we restrict our numerical examples to a simplified version of the model, focusing on the isentropic limit for three-phase mixtures. To address the stiffness of the relaxation source terms present in the model, the implemented scheme incorporates a semi-analytical time integration method specifically designed for the nonlinear stiff source terms governing the strain relaxation. The validation process involves a wide range of benchmarks and several applications for compressible multiphase problems. Notably, results are presented for multiphase flows in all the relaxation limit cases of the model, including inviscid and viscous Newtonian fluids, as well as nonlinear hyperelastic and elastoplastic solids.

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. MODELS attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will provide the modeling community with an opportunity to further advance the foundations of modeling, and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
May 20, 2024

Spinodal metamaterials, with architectures inspired by natural phase-separation processes, have presented a significant alternative to periodic and symmetric morphologies when designing mechanical metamaterials with extreme performance. While their elastic mechanical properties have been systematically determined, their large-deformation, nonlinear responses have been challenging to predict and design, in part due to limited data sets and the need for complex nonlinear simulations. This work presents a novel physics-enhanced machine learning (ML) and optimization framework tailored to address the challenges of designing intricate spinodal metamaterials with customized mechanical properties in large-deformation scenarios where computational modeling is restrictive and experimental data is sparse. By utilizing large-deformation experimental data directly, this approach facilitates the inverse design of spinodal structures with precise finite-strain mechanical responses. The framework sheds light on instability-induced pattern formation in spinodal metamaterials -- observed experimentally and in selected nonlinear simulations -- leveraging physics-based inductive biases in the form of nonconvex energetic potentials. Altogether, this combined ML, experimental, and computational effort provides a route for efficient and accurate design of complex spinodal metamaterials for large-deformation scenarios where energy absorption and prediction of nonlinear failure mechanisms is essential.

We study three kinetic Langevin samplers: the Euler discretization and the BU and UBU splitting schemes. We provide contraction results in $L^1$-Wasserstein distance for non-convex potentials. These results are based on a carefully tailored distance function and an appropriate coupling construction. Additionally, the error in the $L^1$-Wasserstein distance between the true target measure and the invariant measure of the discretization scheme is bounded. To reach an $\varepsilon$-accuracy in $L^1$-Wasserstein distance, we show complexity guarantees of order $\mathcal{O}(\sqrt{d}/\varepsilon)$ for the Euler scheme and $\mathcal{O}(d^{1/4}/\sqrt{\varepsilon})$ for the UBU scheme under appropriate regularity assumptions on the target measure. The results are applicable to interacting particle systems and provide bounds for sampling probability measures of mean-field type.
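As an illustrative sketch (not the paper's analysis), the Euler discretization of kinetic Langevin dynamics that the abstract refers to can be written in a few lines; the potential gradient `grad_U`, the step size, and the friction `gamma` below are assumptions chosen for the example, not values from the paper.

```python
import numpy as np

def euler_kinetic_langevin(grad_U, x0, v0, step, n_steps, gamma=1.0, rng=None):
    """Euler discretization of kinetic Langevin dynamics:
        dX = V dt,   dV = -grad U(X) dt - gamma * V dt + sqrt(2 gamma) dW.
    Returns the final position and velocity."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    v = np.array(v0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x_new = x + step * v
        v_new = v - step * grad_U(x) - step * gamma * v \
                + np.sqrt(2.0 * gamma * step) * noise
        x, v = x_new, v_new
    return x, v

# Usage: sample a standard Gaussian target, U(x) = x^2 / 2, so grad U(x) = x.
# Running many independent chains in parallel (x is a vector of chains) gives
# an empirical distribution close to N(0, 1), up to O(step) discretization bias.
xs, _ = euler_kinetic_langevin(lambda x: x, np.zeros(500), np.zeros(500),
                               step=0.05, n_steps=2000, rng=0)
```

The BU and UBU schemes mentioned above split the dynamics into exactly solvable sub-flows instead of taking a single explicit step, which is what yields the improved $\mathcal{O}(d^{1/4}/\sqrt{\varepsilon})$ complexity.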

In the present paper, we introduce new tensor Krylov subspace methods for solving large Sylvester tensor equations. The proposed methods use the well-known T-product for tensors and tensor subspaces. We introduce some new tensor products and their related algebraic properties. These new products enable us to develop the third-order tensor FOM (tFOM), GMRES (tGMRES), tubal Block Arnoldi, and tensor tubal Block Arnoldi methods for solving large Sylvester tensor equations. We give some properties of these methods and present some numerical experiments.
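The T-product underlying these methods is commonly evaluated slice-wise in the Fourier domain along the third mode. A minimal NumPy sketch of that standard construction (not the paper's implementation) is:

```python
import numpy as np

def t_product(A, B):
    """T-product of third-order tensors A (n1 x n2 x n3) and B (n2 x m x n3):
    FFT along the third mode, frontal-slice matrix products in the Fourier
    domain, then inverse FFT back."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    # For each Fourier slice k: Ch[:, :, k] = Ah[:, :, k] @ Bh[:, :, k]
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
    return np.real(np.fft.ifft(Ch, axis=2))

# Usage: the identity tensor (identity matrix in the first frontal slice,
# zeros elsewhere) acts as the multiplicative identity for the T-product.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 5))
B = rng.standard_normal((4, 2, 5))
E = np.zeros((3, 3, 5))
E[:, :, 0] = np.eye(3)
```

Krylov methods such as tFOM and tGMRES then build subspaces by repeated T-products with the coefficient tensor, mirroring matrix FOM/GMRES slice by slice.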

To improve the robustness of transformer neural networks used for temporal-dynamics prediction of chaotic systems, we propose a novel attention mechanism called easy attention, which we demonstrate on time-series reconstruction and prediction. While standard self-attention only makes use of the inner product of queries and keys, we demonstrate that the keys, queries, and softmax are not necessary for obtaining the attention score required to capture long-term dependencies in temporal sequences. Through a singular-value decomposition (SVD) of the softmax attention score, we further observe that self-attention compresses the contributions from both queries and keys in the space spanned by the attention score. Therefore, our proposed easy-attention method directly treats the attention scores as learnable parameters. This approach produces excellent results when reconstructing and predicting the temporal dynamics of chaotic systems, exhibiting more robustness and lower complexity than self-attention or the widely used long short-term memory (LSTM) network. We show the improved performance of the easy-attention method on the Lorenz system, a turbulent shear flow, and a model of a nuclear reactor.
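A minimal sketch of the contrast described above, assuming a single head and a fixed sequence length (the names `scores` and `w_v` are ours, not the authors'): easy attention replaces the query/key/softmax pipeline with a directly learnable score matrix.

```python
import numpy as np

def easy_attention(x, scores, w_v):
    """Easy-attention forward pass: the T x T score matrix is itself a
    learnable parameter, so no queries, keys, or softmax are computed."""
    v = x @ w_v          # values, shape (T, d_v)
    return scores @ v    # learned mixing over time steps

def self_attention(x, w_q, w_k, w_v):
    """Standard single-head self-attention, for comparison."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    a = q @ k.T / np.sqrt(k.shape[1])
    a = np.exp(a - a.max(axis=1, keepdims=True))   # row-wise softmax
    a /= a.sum(axis=1, keepdims=True)
    return a @ v

# Usage: with identity scores, easy attention reduces to the value projection.
rng = np.random.default_rng(1)
x = rng.standard_normal((6, 4))
w_v = rng.standard_normal((4, 4))
out = easy_attention(x, np.eye(6), w_v)
```

Because the score matrix no longer depends on the input, the per-step cost drops from the query/key products of self-attention to a single matrix multiply, which is consistent with the lower complexity claimed above.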

Many of the tools available for robot learning were designed for Euclidean data. However, many applications in robotics involve manifold-valued data. A common example is orientation; this can be represented as a 3-by-3 rotation matrix or a quaternion, the spaces of which are non-Euclidean manifolds. In robot learning, manifold-valued data are often handled by relating the manifold to a suitable Euclidean space, either by embedding the manifold or by projecting the data onto one or several tangent spaces. These approaches can result in poor predictive accuracy and convoluted algorithms. In this paper, we propose an "intrinsic" approach to regression that works directly within the manifold. It involves taking a suitable probability distribution on the manifold, letting its parameter be a function of a predictor variable, such as time, then estimating that function non-parametrically via a "local likelihood" method that incorporates a kernel. We name the method kernelised likelihood estimation. The approach is conceptually simple, and generally applicable to different manifolds. We implement it with three different types of manifold-valued data that commonly appear in robotics applications. The results of these experiments show better predictive accuracy than projection-based algorithms.
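As a hedged illustration of kernel-weighted local likelihood on a manifold (a simplified stand-in for the paper's method, not its implementation): for von Mises-Fisher data on the unit sphere, the kernel-weighted maximum-likelihood estimate of the mean direction at a query time reduces to a renormalized weighted sum of the unit observations.

```python
import numpy as np

def local_vmf_mean(times, dirs, t, bandwidth):
    """Kernel-weighted MLE of the von Mises-Fisher mean direction at time t:
    a Gaussian kernel in time weights the unit observations, and the weighted
    sum is renormalized back onto the sphere."""
    w = np.exp(-0.5 * ((times - t) / bandwidth) ** 2)   # kernel weights
    s = (w[:, None] * dirs).sum(axis=0)                 # weighted resultant
    return s / np.linalg.norm(s)                        # project to sphere

# Usage: with observations all pointing along the z-axis, the local estimate
# at any query time recovers that direction exactly.
times = np.linspace(0.0, 1.0, 50)
dirs = np.tile([0.0, 0.0, 1.0], (50, 1))
m = local_vmf_mean(times, dirs, t=0.5, bandwidth=0.1)
```

The estimate never leaves the manifold, which is the point of the intrinsic approach: no embedding or tangent-space projection is needed.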

In this paper, we propose an algorithmic framework to automatically generate efficient deep neural networks and optimize their associated hyperparameters. The framework is based on evolving directed acyclic graphs (DAGs), defining a more flexible search space than the existing ones in the literature. It allows mixtures of different classical operations: convolutions, recurrences and dense layers, but also more newfangled operations such as self-attention. Based on this search space we propose neighbourhood and evolution search operators to optimize both the architecture and hyperparameters of our networks. These search operators can be used with any metaheuristic capable of handling mixed search spaces. We tested our algorithmic framework with an evolutionary algorithm on a time series prediction benchmark. The results demonstrate that our framework was able to find models outperforming the established baseline on numerous datasets.

As training datasets become increasingly drawn from unstructured, uncontrolled environments such as the web, researchers and industry practitioners have increasingly relied upon data filtering techniques to "filter out the noise" of web-scraped data. While datasets have been widely shown to reflect the biases and values of their creators, in this paper we contribute to an emerging body of research that assesses the filters used to create these datasets. We show that image-text data filtering also has biases and is value-laden, encoding specific notions of what is counted as "high-quality" data. In our work, we audit a standard approach of image-text CLIP-filtering on the academic benchmark DataComp's CommonPool by analyzing discrepancies of filtering through various annotation techniques across multiple modalities of image, text, and website source. We find that data relating to several imputed demographic groups -- such as LGBTQ+ people, older women, and younger men -- are associated with higher rates of exclusion. Moreover, we demonstrate cases of exclusion amplification: not only are certain marginalized groups already underrepresented in the unfiltered data, but CLIP-filtering excludes data from these groups at higher rates. The data-filtering step in the machine learning pipeline can therefore exacerbate representation disparities already present in the data-gathering step, especially when existing filters are designed to optimize a specifically-chosen downstream performance metric like zero-shot image classification accuracy. Finally, we show that the NSFW filter fails to remove sexually-explicit content from CommonPool, and that CLIP-filtering includes several categories of copyrighted content at high rates. Our conclusions point to a need for fundamental changes in dataset creation and filtering practices.

Manifold data analysis is challenging due to the lack of parametric distributions on manifolds. To address this, we introduce a series of Riemannian radial distributions on Riemannian symmetric spaces. By utilizing the symmetry, we show that for many Riemannian radial distributions, the Riemannian $L^p$ center of mass is uniquely given by the location parameter, and the maximum likelihood estimator (MLE) of this parameter is given by an M-estimator. Therefore, these parametric distributions provide a promising tool for statistical modeling and algorithmic design. In addition, our paper develops a novel theory for parameter estimation and minimax optimality by integrating statistics, Riemannian geometry, and Lie theory. We demonstrate that the MLE achieves a convergence rate of root-$n$ up to logarithmic terms, where the rate is quantified by both the Hellinger distance between distributions and the geodesic distance between parameters. Then we derive a root-$n$ minimax lower bound for the parameter estimation rate, demonstrating the optimality of the MLE. Our minimax analysis is limited to the case of simply connected Riemannian symmetric spaces for technical reasons, but is still applicable to numerous applications. Finally, we extend our studies to Riemannian radial distributions with an unknown temperature parameter, and establish the convergence rate of the MLE. We also derive the model complexity of von Mises-Fisher distributions on spheres and discuss the effects of geometry in statistical estimation.

We present and analyze three distinct semi-discrete schemes for solving nonlocal geometric flows incorporating perimeter terms. These schemes are based on the finite difference method, the finite element method, and the finite element method with a specific tangential motion. We offer rigorous proofs of quadratic convergence under $H^1$-norm for the first scheme and linear convergence under $H^1$-norm for the latter two schemes. All error estimates rely on the observation that the error of the nonlocal term can be controlled by the error of the local term. Furthermore, we explore the relationship between the convergence under $L^\infty$-norm and manifold distance. Extensive numerical experiments are conducted to verify the convergence analysis, and demonstrate the accuracy of our schemes under various norms for different types of nonlocal flows.

In this paper, we focus on exploiting the group structure for large-dimensional factor models, which captures the homogeneous effects of common factors on individuals within the same group. Since datasets in macroeconomics and finance are typically heavy-tailed, we propose to identify the unknown group structure using the agglomerative hierarchical clustering algorithm and an information criterion with the robust two-step (RTS) estimates as initial values. The loadings and factors are then re-estimated conditional on the identified groups. Theoretically, we demonstrate the consistency of the estimators for both group membership and the number of groups determined by the information criterion. Under a finite second-moment condition, we provide the convergence rate for the newly estimated factor loadings with group information, which are shown to achieve efficiency gains compared to those obtained without group structure information. Numerical simulations and real data analysis demonstrate the strong finite-sample performance of our proposed approach in the presence of both group structure and heavy-tailedness.
