
This paper presents a novel direct Jacobian total Lagrangian explicit dynamics (DJ-TLED) finite element algorithm for real-time nonlinear mechanics simulation. The nodal force contributions are expressed using only the Jacobian operator, instead of the deformation gradient tensor and finite deformation tensor, requiring fewer computational operations at run time. Owing to this Jacobian formulation, novel expressions are developed for strain invariants and constant components, which are likewise based on the Jacobian operator. Results show that the proposed DJ-TLED required only 0.70x to 0.88x of the CPU solution time of state-of-the-art TLED and achieved up to 121.72x and 94.26x speed improvements on tetrahedral and hexahedral meshes, respectively, using GPU acceleration. Compared to TLED, the most notable difference is that the notions of stress and strain are not explicitly visible in the proposed DJ-TLED but are embedded implicitly in the formulation of nodal forces. Such a force formulation is beneficial for fast deformation computation and is particularly useful when the displacement field is of primary interest, which is demonstrated using a neurosurgical simulation of brain deformations for image-guided neurosurgery. The present work contributes a comprehensive DJ-TLED algorithm covering isotropic and anisotropic hyperelastic constitutive models and GPU implementation. The source code is available at //github.com/jinaojakezhang/DJTLED.
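To make the explicit-dynamics setting concrete, the sketch below shows a generic TLED-style central-difference time step. The paper's actual contribution (evaluating internal nodal forces from the Jacobian operator alone) is abstracted into the `f_int` argument; the lumped-mass layout and the damping coefficient `alpha` are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def tled_step(u_prev, u_curr, f_int, f_ext, m, dt, alpha=0.0):
    """One explicit central-difference step of a TLED-style solver.

    The DJ-TLED force evaluation is abstracted into f_int. m is the
    lumped nodal mass, shape (n_nodes, 1), broadcast over components;
    alpha is an assumed mass-proportional damping coefficient.
    """
    c0 = m / dt**2                      # inertia coefficient per node
    c1 = alpha * m / (2.0 * dt)         # damping coefficient per node
    rhs = (f_ext - f_int) + 2.0 * c0 * u_curr - (c0 - c1) * u_prev
    return rhs / (c0 + c1)              # displacements at t + dt
```

Because the mass matrix is lumped, the update is independent per node, which is what makes GPU speed-ups of the kind reported above achievable.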

Related Content

Two novel parallel Newton-Krylov Balancing Domain Decomposition by Constraints (BDDC) and Dual-Primal Finite Element Tearing and Interconnecting (FETI-DP) solvers are here constructed, analyzed, and tested numerically for implicit time discretizations of the three-dimensional Bidomain system of equations. This model represents the most advanced mathematical description of cardiac bioelectrical activity, and it consists of a degenerate system of two non-linear reaction-diffusion partial differential equations (PDEs) coupled with a stiff system of ordinary differential equations (ODEs). A finite element discretization in space and a segregated implicit discretization in time, based on decoupling the PDEs from the ODEs, yield at each time step a non-linear algebraic system to be solved. The Jacobian linear system at each Newton iteration is solved by a Krylov method, accelerated by BDDC or FETI-DP preconditioners, both augmented with the recently introduced deluxe scaling of the dual variables. A polylogarithmic convergence rate bound is proven for the resulting parallel Bidomain solvers. Extensive numerical experiments on Linux clusters with up to two thousand processors confirm the theoretical estimates, showing that the proposed parallel solvers are scalable and quasi-optimal.
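The outer structure of such a solver can be sketched in a few lines: a Newton loop whose inner Jacobian systems are handed to a preconditioned Krylov method. The BDDC/FETI-DP preconditioners themselves are substantial machinery, so in the SciPy-based sketch below `precond` is only a placeholder showing where such a preconditioner plugs in.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def newton_krylov(residual, jacobian, u0, precond=None, tol=1e-8, max_newton=20):
    """Skeleton of the nonlinear stage: a Newton loop whose Jacobian
    systems are solved by preconditioned GMRES.

    `precond` stands in for a BDDC or FETI-DP preconditioner (passed as
    a scipy LinearOperator); constructing one is beyond this sketch.
    """
    u = u0.copy()
    for _ in range(max_newton):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        J = jacobian(u)                     # sparse matrix or LinearOperator
        delta, info = gmres(J, -r, M=precond)
        if info != 0:
            raise RuntimeError("GMRES failed to converge")
        u = u + delta
    return u
```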

Although Reinforcement Learning (RL) is effective for sequential decision-making problems under uncertainty, it still struggles to succeed in real-world systems where risk or safety is a binding constraint. In this paper, we formulate the RL problem with safety constraints as a non-zero-sum game. When combined with maximum entropy RL, this formulation leads to a safe adversarially guided soft actor-critic framework, called SAAC. In SAAC, the adversary aims to break the safety constraint while the RL agent aims to maximize the constrained value function given the adversary's policy. The safety constraint on the agent's value function manifests only as a repulsion term between the agent's and the adversary's policies. Unlike previous approaches, SAAC can address different safety criteria such as safe exploration, mean-variance risk sensitivity, and CVaR-like coherent risk sensitivity. We illustrate the design of the adversary for these constraints. Then, in each of these variations, we show that the agent differentiates itself from the adversary's unsafe actions in addition to learning to solve the task. Finally, for challenging continuous control tasks, we demonstrate that SAAC achieves faster convergence, better efficiency, and fewer failures to satisfy the safety constraints than risk-averse distributional RL and risk-neutral soft actor-critic algorithms.
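The repulsion term is the key mechanism and is easy to illustrate: the SAC actor loss is augmented with a divergence that pushes the agent's policy away from the adversary's. The PyTorch sketch below is one interpretation, not the paper's exact objective; the choice of KL divergence and the weight `beta` are assumptions.

```python
import torch
from torch.distributions import Normal, kl_divergence

def saac_actor_loss(agent_dist, adversary_dist, q_value, log_prob, alpha, beta):
    """SAC-style actor loss with an added policy-repulsion term.

    Interpretation of the idea only: KL as the divergence and the
    weight beta are assumptions, not the paper's exact formulation.
    """
    sac_term = (alpha * log_prob - q_value).mean()    # standard SAC actor loss
    repulsion = kl_divergence(agent_dist, adversary_dist).mean()
    return sac_term - beta * repulsion                # reward divergence from adversary

# toy usage with diagonal Gaussian policy heads (shapes assumed)
agent = Normal(torch.zeros(8, 2), torch.ones(8, 2))
adversary = Normal(0.5 * torch.ones(8, 2), torch.ones(8, 2))
loss = saac_actor_loss(agent, adversary, q_value=torch.randn(8),
                       log_prob=torch.randn(8), alpha=0.2, beta=0.1)
```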

The phase retrieval problem is concerned with recovering an unknown signal $\mathbf{x} \in \mathbb{R}^n$ from a set of magnitude-only measurements $y_j=|\langle \mathbf{a}_j,\mathbf{x} \rangle|, \; j=1,\ldots,m$. A natural least squares formulation can be used to solve this problem efficiently even with random initialization, despite the non-convexity of the loss function. One way to explain this surprising phenomenon is the benign geometric landscape: (1) all local minimizers are global; and (2) the objective function has a negative curvature around each saddle point and local maximizer. In this paper, we show that $m=O(n \log n)$ Gaussian random measurements are sufficient to guarantee that the loss function of a commonly used estimator has such a benign geometric landscape with high probability. This is a step toward answering the open problem posed by Sun-Qu-Wright, in which the authors suggest that $O(n \log n)$ or even $O(n)$ measurements are enough to guarantee the favorable geometric property.
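The phenomenon is easy to reproduce numerically: run plain gradient descent on the least squares loss $f(\mathbf{x}) = \frac{1}{4m}\sum_j ((\mathbf{a}_j^\top \mathbf{x})^2 - y_j^2)^2$ from a random starting point. The constant in $m$, the step size, and the iteration count below are ad hoc demo choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
m = int(10 * n * np.log(n))          # m = O(n log n) Gaussian measurements
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))      # rows are the measurement vectors a_j
y = np.abs(A @ x_true)               # magnitude-only observations

x = rng.standard_normal(n)           # random initialization, no spectral step
step = 0.1 / np.mean(y**2)           # heuristic step size (assumption)
for _ in range(2000):
    Ax = A @ x
    x -= step * (A.T @ ((Ax**2 - y**2) * Ax)) / m   # gradient of the LS loss

# the global sign of x is unrecoverable from magnitudes, so test both
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print(f"relative error: {err / np.linalg.norm(x_true):.2e}")
```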

This work presents a numerical formulation to model isotropic viscoelastic material behavior for membranes and thin shells. The surface and shell theories are formulated within a curvilinear coordinate system, which allows the representation of general surfaces and deformations. The kinematics follow from Kirchhoff-Love theory, and the discretization makes use of isogeometric shape functions. A multiplicative split of the surface deformation gradient is employed, such that an intermediate surface configuration is introduced. The surface metric and curvature of this intermediate configuration follow from the solution of nonlinear evolution laws (ordinary differential equations, ODEs) that stem from a generalized viscoelastic solid model. The evolution laws are integrated numerically with the implicit Euler scheme and linearized within the Newton-Raphson scheme of the nonlinear finite element framework. The implementation of surface and bending viscosity is verified with the help of analytical solutions and shows ideal convergence behavior. The chosen numerical examples capture large deformations and typical viscoelastic behavior, such as creep, relaxation, and strain rate dependence. It is shown that the proposed formulation can also be straightforwardly applied to model boundary viscoelasticity of 3D bodies.
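The structure of such an evolution law and its implicit Euler update can be shown on the simplest possible analogue: a 1D standard linear solid with one internal variable. All parameters below are illustrative; the paper's evolution laws act on surface metric and curvature tensors rather than a scalar strain.

```python
def viscoelastic_step(eps, eps_v, dt, E_inf=1.0, E_1=1.0, eta=1.0):
    """Implicit (backward) Euler update of a 1D standard linear solid.

    Scalar analogue only; moduli and viscosity are illustrative.
    Internal variable: eta * d(eps_v)/dt = E_1 * (eps - eps_v).
    """
    k = dt * E_1 / eta
    eps_v_new = (eps_v + k * eps) / (1.0 + k)        # unconditionally stable
    stress = E_inf * eps + E_1 * (eps - eps_v_new)   # equilibrium + Maxwell branch
    return stress, eps_v_new

# relaxation test: strain held fixed, stress decays toward E_inf * eps
eps, eps_v = 1.0, 0.0
for i in range(5):
    stress, eps_v = viscoelastic_step(eps, eps_v, dt=0.5)
    print(f"t = {0.5 * (i + 1):.1f}, stress = {stress:.3f}")
```

The relaxation printout mirrors one of the typical viscoelastic behaviors the abstract mentions, and the backward Euler update stays stable for any step size.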

Bayesian optimization is a popular method for optimizing expensive black-box functions. Yet it often struggles in high dimensions, where the computation can become prohibitively heavy. To alleviate this problem, we introduce Coordinate backoff Bayesian Optimization (CobBO) with two-stage kernels. During each round, the first stage uses a simple coarse kernel that sacrifices approximation accuracy for computational efficiency. It captures the global landscape by purposely smoothing away local fluctuations. Then, in the second stage of the same round, past observed points in the full space are projected onto the selected subspace to form virtual points. These virtual points, along with the means and variances of their unknown function values estimated using the simple kernel of the first stage, are fitted to a more sophisticated kernel model in the second stage. Within the selected low-dimensional subspace, the computational cost of conducting Bayesian optimization becomes affordable. To further enhance performance, a sequence of consecutive observations in the same subspace is collected, which can effectively refine the approximation of the function. This refinement lasts until a stopping rule determines when to back off from a certain subspace and switch to another. This two-stage decoupling significantly reduces the computational burden in high dimensions while fully leveraging the observations in the whole space, rather than relying only on observations in each coordinate subspace. Extensive evaluations show that CobBO finds solutions comparable to or better than those of other state-of-the-art methods for dimensions ranging from tens to hundreds, while reducing both the trial complexity and computational costs.
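A simplified reading of the two-stage idea, using scikit-learn Gaussian processes: the kernel choices, the anchoring of the off-subspace coordinates, and the use of stage-one means alone (the paper also propagates variances) are all assumptions made to keep the sketch short.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern

def two_stage_fit(X, y, coords, x_anchor):
    """Two-stage kernel fit in the spirit of CobBO (simplified).

    Stage 1 fits a deliberately smooth coarse kernel in the full space;
    stage 2 projects all observations onto the coordinate subspace
    `coords` (other coordinates pinned to x_anchor), queries stage-1
    means at these virtual points, and fits a finer kernel there.
    """
    coarse = GaussianProcessRegressor(kernel=RBF(length_scale=2.0))
    coarse.fit(X, y)                        # smooths away local fluctuations

    X_virtual = np.tile(x_anchor, (len(X), 1))
    X_virtual[:, coords] = X[:, coords]     # virtual points in the subspace
    y_virtual = coarse.predict(X_virtual)   # coarse-kernel estimates

    fine = GaussianProcessRegressor(kernel=Matern(length_scale=0.5, nu=2.5))
    fine.fit(X_virtual[:, coords], y_virtual)
    return fine                             # BO proceeds cheaply in the subspace
```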

Embodied AI is a recent research area that aims at creating intelligent agents that can move and operate inside an environment. Existing approaches in this field require agents to act in completely new and unexplored scenes. However, this setting is far from realistic use cases, which instead require executing multiple tasks in the same environment. Even if the environment changes over time, the agent could still count on its global knowledge about the scene while trying to adapt its internal representation to the current state of the environment. To take a step towards this setting, we propose Spot the Difference: a novel task for Embodied AI where the agent has access to an outdated map of the environment and needs to recover the correct layout within a fixed time budget. To this end, we collect a new dataset of occupancy maps, starting from existing datasets of 3D spaces and generating a number of possible layouts for a single environment. This dataset can be employed in the popular Habitat simulator and is fully compliant with existing methods that employ reconstructed occupancy maps during navigation. Furthermore, we propose an exploration policy that can take advantage of previous knowledge of the environment and identify changes in the scene faster and more effectively than existing agents. Experimental results show that the proposed architecture outperforms existing state-of-the-art models for exploration in this new setting.
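The task's core quantity, the discrepancy between the outdated map and the true layout, can be expressed directly over occupancy grids. The cell encoding below (0 free, 1 occupied, -1 unknown) is an assumption for illustration, not the dataset's actual format.

```python
import numpy as np

def changed_cells(outdated, current):
    """Grid cells whose occupancy differs between the outdated map and
    the current layout; cells unknown in either map are ignored.

    Encoding assumed: 0 = free, 1 = occupied, -1 = unknown.
    """
    known = (outdated != -1) & (current != -1)
    return np.argwhere(known & (outdated != current))

old = np.array([[0, 1], [0, -1]])
new = np.array([[0, 0], [1, -1]])
print(changed_cells(old, new))   # [[0 1] [1 0]]
```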

The simulation of multi-body systems with frictional contacts is a fundamental tool for many fields, such as robotics, computer graphics, and mechanics. Hard frictional contacts are particularly troublesome to simulate because they make the differential equations stiff, calling for computationally demanding implicit integration schemes. We suggest tackling this issue with exponential integrators, a long-standing class of integration schemes (first introduced in the 1960s) that in recent years has enjoyed a resurgence of interest. We show that this scheme can be easily applied to multi-body systems subject to stiff viscoelastic contacts, producing accurate results at lower computational cost than classic explicit or implicit schemes. In our tests with quadruped and biped robots, our method demonstrated stable behavior with large time steps (10 ms) and stiff contacts ($10^5$ N/m). Its excellent properties, especially for fast and coarse simulations, make it a valuable candidate for many applications in robotics, such as simulation, Model Predictive Control, Reinforcement Learning, and controller design.
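The essence of an exponential integrator is that the stiff linear(ized) part of the dynamics is propagated through a matrix exponential instead of being resolved by tiny explicit steps. Below is a minimal sketch for $\dot{x} = Ax + b$ using the standard augmented-matrix trick; the 1-DoF contact example, its unit mass, and the critical damping value are assumptions, not the paper's multi-body formulation.

```python
import numpy as np
from scipy.linalg import expm

def exponential_euler_step(A, b, x, h):
    """One exponential-integrator step for linearized dynamics x' = Ax + b.

    The augmented matrix packs both the matrix exponential and the
    integral of the forcing term into a single expm call, so stiff
    modes are propagated exactly rather than limiting the step size.
    """
    n = len(x)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A * h
    M[:n, n] = b * h
    E = expm(M)
    return E[:n, :n] @ x + E[:n, n]

# 1-DoF stiff contact (unit mass assumed): x'' = -k x - c x', k = 1e5 N/m
k, c = 1e5, 2.0 * np.sqrt(1e5)            # critically damped (assumption)
A = np.array([[0.0, 1.0], [-k, -c]])
x = np.array([0.01, 0.0])                 # 1 cm penetration, at rest
print(exponential_euler_step(A, np.zeros(2), x, h=0.01))  # stable at a 10 ms step
```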

Extracting non-Gaussian information from the non-linear regime of structure formation is key to fully exploiting the rich data from upcoming cosmological surveys probing the large-scale structure of the universe. However, due to theoretical and computational complexities, this remains one of the main challenges in analyzing observational data. We present a set of summary statistics for cosmological matter fields based on 3D wavelets to tackle this challenge. These statistics are computed as the spatial average of the complex modulus of the 3D wavelet transform raised to a power $q$ and are therefore known as invariant wavelet moments. The 3D wavelets are constructed to be radially band-limited and separable on a spherical polar grid, and they come in three types: isotropic, oriented, and harmonic. Within the Fisher forecast framework, we evaluate the performance of these summary statistics on matter fields from the Quijote suite, where they are shown to reach state-of-the-art parameter constraints on the base $\Lambda$CDM parameters, as well as on the sum of neutrino masses. We show that we can improve constraints by a factor of 5 to 10 for all parameters with respect to the power spectrum baseline.
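The definition of the statistic itself is short enough to sketch: filter the field with a band-limited wavelet, take the modulus, raise it to the power $q$, and average over space. In the sketch below an isotropic Gaussian band-pass stands in for the paper's wavelets, and the oriented and harmonic types are omitted entirely.

```python
import numpy as np

def wavelet_moments(field, qs=(0.5, 1.0, 2.0), scales=(2, 4, 8)):
    """Invariant wavelet moments of a 3D field: the spatial average of
    the modulus of a band-limited transform raised to the power q.

    Sketch only; the band-pass profile and scale grid are assumptions.
    """
    n = field.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)            # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2)
    f_hat = np.fft.fftn(field)

    moments = {}
    for s in scales:
        k0 = n / (2.0 * s)                      # band centre for this scale
        band = np.exp(-((k_mag - k0) ** 2) / (2.0 * (0.5 * k0) ** 2))
        w = np.fft.ifftn(f_hat * band)          # wavelet-filtered field
        for q in qs:
            moments[(s, q)] = np.mean(np.abs(w) ** q)
    return moments

# e.g. on a Gaussian random field standing in for a matter density cube
print(wavelet_moments(np.random.default_rng(1).standard_normal((32, 32, 32))))
```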

The minimum energy path (MEP) describes the mechanism of a reaction, and the energy barrier along the path can be used to calculate the reaction rate in thermal systems. The nudged elastic band (NEB) method is one of the most commonly used schemes to compute MEPs numerically. It approximates an MEP by a discrete set of configuration images, where the discretization size determines both the computational cost and the accuracy of the simulations. In this paper, we consider a discrete MEP to be a stationary state of the NEB method and prove an optimal convergence rate of the discrete MEP with respect to the number of images. Numerical simulations of the transitions of several prototypical model systems are performed to support the theory.
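For readers unfamiliar with NEB, the per-image force whose zeros define the discrete MEP is simple to state: the true force perpendicular to the path tangent plus a spring force along it. The sketch below uses the simple central-difference tangent; practical NEB codes typically use the improved tangent and climbing-image variants.

```python
import numpy as np

def neb_forces(images, grad_V, k_spring=1.0):
    """NEB force on each interior image: the true force component
    perpendicular to the path tangent, plus a spring force along it.

    images has shape (n_images, dim); grad_V returns the gradient of
    the potential at a single configuration.
    """
    forces = np.zeros_like(images)
    for i in range(1, len(images) - 1):
        tau = images[i + 1] - images[i - 1]
        tau /= np.linalg.norm(tau)                         # unit tangent
        g = grad_V(images[i])
        f_perp = -(g - np.dot(g, tau) * tau)               # relax off the path
        f_par = k_spring * (np.linalg.norm(images[i + 1] - images[i])
                            - np.linalg.norm(images[i] - images[i - 1]))
        forces[i] = f_perp + f_par * tau                   # keep images equispaced
    return forces
```

A discrete MEP in the paper's sense is a configuration of images at which these forces all vanish.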

Task graphs provide a simple way to describe scientific workflows (sets of tasks with dependencies) that can be executed both on HPC clusters and in the cloud. An important aspect of executing such graphs is the scheduling algorithm used. Many scheduling heuristics have been proposed in existing works; nevertheless, they are often tested in oversimplified environments. We provide an extensible simulation environment designed for prototyping and benchmarking task schedulers, which contains implementations of various scheduling algorithms and is open source so that results are fully reproducible. We use this environment to perform a comprehensive analysis of workflow scheduling algorithms, with a focus on quantifying the effect of scheduling challenges that have so far been mostly neglected, such as delays between scheduler invocations or partially unknown task durations. Our results indicate that the network models used by many previous works can produce results that are off by an order of magnitude compared to a more realistic model. Additionally, we show that certain implementation details of scheduling algorithms, which are often neglected, can have a large effect on the scheduler's performance, and they should therefore be described in detail to enable proper evaluation.
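To show what a simulator of this kind schedules, here is a toy earliest-finish-time list scheduler over a task DAG. The uniform `net_delay` between workers is exactly the sort of simplified network model the paper warns about; names and the input format are assumptions.

```python
from collections import deque

def greedy_schedule(deps, durations, n_workers, net_delay=0.0):
    """Greedy earliest-finish-time list scheduler over a task DAG.

    deps maps task -> list of parent tasks; durations maps task -> time.
    A realistic simulator would also model scheduler-invocation delays
    and partially unknown task durations.
    """
    indeg = {t: len(deps.get(t, [])) for t in durations}
    children = {t: [] for t in durations}
    for t, parents in deps.items():
        for p in parents:
            children[p].append(t)

    finish, worker_of = {}, {}
    ready_at = [0.0] * n_workers
    queue = deque(t for t, d in indeg.items() if d == 0)
    while queue:                                # Kahn topological order
        t = queue.popleft()
        best = None
        for w in range(n_workers):
            # inputs produced on another worker pay the network delay
            data_ready = max((finish[p] + (net_delay if worker_of[p] != w else 0.0)
                              for p in deps.get(t, [])), default=0.0)
            f = max(ready_at[w], data_ready) + durations[t]
            if best is None or f < best[0]:
                best = (f, w)
        finish[t], worker_of[t] = best
        ready_at[best[1]] = best[0]
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return finish, worker_of

# diamond DAG: a feeds b and c, which both feed d
deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
durations = {"a": 2.0, "b": 3.0, "c": 3.0, "d": 1.0}
print(greedy_schedule(deps, durations, n_workers=2, net_delay=0.5))
```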
