
Permitting multiple materials within a topology optimization setting enlarges the search space of the technique, which facilitates obtaining high-performing and efficient optimized designs. Multi-material structures bearing fluidic pressure loads find various applications. However, dealing with the design-dependent nature of pressure loads is challenging in topology optimization, and the challenge becomes even more pronounced in a multi-material framework. This paper provides a density-based topology optimization method to design fluidic-pressure-loaded multi-material structures. The design domain is parameterized using hexagonal elements, as they ensure nonsingular connectivity. Pressure is modeled using Darcy's law with a conceptualized drainage term. The flow coefficient of each element is determined using a smooth Heaviside function that accounts for its solid and void states. The consistent nodal loads are determined using standard finite element methods. Multiple materials are modeled using an extended SIMP scheme. Compliance minimization with volume constraints is performed to achieve optimized load-bearing structures. A few examples are presented to demonstrate the efficacy and versatility of the proposed approach. The optimized results contain the prescribed amounts of the different materials.
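As a rough illustration of two ingredients above, the sketch below implements a smooth Heaviside interpolation of the Darcy flow coefficient and a two-material extended SIMP stiffness interpolation in NumPy. All function names, parameter values, and the specific interpolation forms are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def flow_coefficient(rho, K_void=1.0, K_solid=1e-7, beta=10.0, eta=0.3):
    """Darcy flow coefficient interpolated between void (high flow)
    and solid (near-impermeable) states via a smooth Heaviside
    projection; beta controls the sharpness of the transition."""
    H = (np.tanh(beta * eta) + np.tanh(beta * (rho - eta))) / \
        (np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta)))
    return K_void + (K_solid - K_void) * H

def extended_simp(rho1, rho2, E1=1.0, E2=3.0, p=3.0, E_min=1e-9):
    """Extended SIMP for two materials plus void: rho1 interpolates
    solid vs. void, rho2 interpolates material 1 vs. material 2."""
    return E_min + rho1**p * ((1.0 - rho2**p) * E1 + rho2**p * E2)

print(flow_coefficient(np.array([0.0, 0.5, 1.0])))  # void -> solid
print(extended_simp(0.5, 0.8))                      # mixed-state element
```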

Related content

In this paper, practically computable low-order approximations of potentially high-dimensional differential equations driven by geometric rough paths are proposed and investigated. In particular, the equations studied cover the linear setting, but a certain type of dissipative nonlinearity in the drift is allowed as well. In a first step, a linear subspace is found that contains the solution space of the underlying rough differential equation (RDE). This subspace is associated with the covariances of linear Itô stochastic differential equations, which is shown by exploiting a Gronwall lemma for matrix differential equations. Orthogonal projections onto the identified subspace lead to a first exact reduced-order system. In a second step, a linear map of the RDE solution (the quantity of interest) is analyzed for redundant information, i.e., state variables are identified that do not contribute to the quantity of interest. Once more, a link to Itô stochastic differential equations is used. Removing such unnecessary information from the RDE provides a further dimension reduction without causing an error. Finally, we discretize a linear parabolic rough partial differential equation in space. The resulting large-order RDE is subsequently tackled with the exact reduction techniques studied in this paper. We illustrate the enormous complexity-reduction potential in the corresponding numerical experiments.
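The following minimal sketch illustrates only the covariance-based subspace construction for a linear system: a Gramian of an auxiliary linear Itô SDE is obtained from a Lyapunov equation, and its dominant eigenspace is used for an orthogonal (Galerkin) projection. The rough-path driving terms and the exactness arguments of the paper are not reproduced; matrix sizes and coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, r = 200, 20

# Stable drift A and diffusion factor B of an auxiliary linear Ito SDE
# (illustrative stand-ins; the paper derives these from the RDE data).
A = -2.0 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 2))

# Covariance (Gramian) P solves the Lyapunov equation A P + P A^T = -B B^T;
# its dominant eigenspace serves as the reduction subspace.
P = solve_continuous_lyapunov(A, -B @ B.T)
eigvals, eigvecs = np.linalg.eigh(P)
V = eigvecs[:, -r:]            # orthonormal basis of the dominant subspace

# Orthogonal (Galerkin) projection yields the reduced-order drift.
A_r = V.T @ A @ V
print(A_r.shape)               # (r, r)
```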

In this work, a comprehensive numerical study involving analysis and experiments shows why a two-layer neural network has difficulties handling high frequencies in approximation and learning when machine precision and computation cost are important factors in practice. In particular, the following fundamental computational issues are investigated: (1) the best accuracy one can achieve given a finite machine precision, (2) the computation cost needed to achieve a given accuracy, and (3) stability with respect to perturbations. The key to the study is the spectral analysis of the corresponding Gram matrix of the activation functions, which also shows how the properties of the activation function come into play.
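A minimal sketch of the kind of spectral analysis described above: assemble the Gram matrix of random two-layer ReLU activation features on sampled points and inspect its eigenvalue decay relative to machine epsilon. The sampling scheme, sizes, and feature distribution are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 256, 512                    # neurons, sample points

# Random two-layer ReLU features phi_j(x) = max(w_j * x + b_j, 0)
# evaluated on uniform samples of [-1, 1].
x = np.linspace(-1.0, 1.0, n)
w = rng.standard_normal(m)
b = rng.uniform(-1.0, 1.0, m)
Phi = np.maximum(w[:, None] * x[None, :] + b[:, None], 0.0)

# Gram matrix of the activations; fast eigenvalue decay below
# machine epsilon bounds the accuracy attainable in finite precision.
G = (Phi @ Phi.T) / n
eigs = np.linalg.eigvalsh(G)[::-1]
eps = np.finfo(np.float64).eps
print("largest eigenvalue:", eigs[0])
print("share below eps * max:", np.mean(eigs < eps * eigs[0]))
```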

Given the high incidence of cardiovascular and cerebrovascular diseases (CVD) and their association with morbidity and mortality, their prevention is a major public health issue. A high level of blood pressure is a well-known risk factor for these events, and an increasing number of studies suggest that blood pressure variability may also be an independent risk factor. However, these studies suffer from significant methodological weaknesses. In this work we propose a new location-scale joint model for the repeated measures of a marker and competing events. This joint model combines a mixed model, including a subject-specific and time-dependent residual variance modeled through random effects, with cause-specific proportional intensity models for the competing events. The risk of events may depend simultaneously on the current value of the variance as well as on the current value and current slope of the marker trajectory. The model is estimated by maximizing the likelihood function using the Marquardt-Levenberg algorithm. The estimation procedure is implemented in an R package and is validated through a simulation study. The model is applied to study the association between blood pressure variability and the risk of CVD and death from other causes. Using data from a large clinical trial on the secondary prevention of stroke, we find that the current individual variability of blood pressure is associated with the risk of CVD and death. Moreover, the comparison with a model without heterogeneous variance shows the importance of taking this variability into account for goodness of fit and for dynamic predictions.
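One plausible write-up of the location-scale joint model sketched above, with all symbols (design vectors X, Z, W, V, random effects b_i and tau_i, association parameters alpha) chosen here for illustration rather than taken from the paper:

```latex
% Location-scale mixed model for the marker (all notation illustrative):
% subject-specific, time-dependent residual variance via random effects,
% linked to cause-specific event intensities (k = 1, ..., K causes).
\begin{aligned}
Y_{ij} &= \underbrace{X_{ij}^\top \beta + Z_{ij}^\top b_i}_{\tilde{Y}_i(t_{ij})}
          + \epsilon_{ij},
&\epsilon_{ij} &\sim \mathcal{N}\!\bigl(0, \sigma_i^2(t_{ij})\bigr),\\
\log \sigma_i(t) &= W_i(t)^\top \gamma + V_i(t)^\top \tau_i,
&\tau_i &\sim \mathcal{N}(0, \Sigma_\tau),\\
\lambda_{ik}(t) &= \lambda_{0k}(t)
\exp\!\bigl\{\alpha_{1k}\,\tilde{Y}_i(t) + \alpha_{2k}\,\tilde{Y}_i'(t)
          + \alpha_{3k}\,\sigma_i^2(t)\bigr\}.
\end{aligned}
```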

Topology optimization is a powerful tool utilized in various fields for structural design. However, its application has primarily been restricted to static or passively moving objects, mainly focusing on hard materials with limited deformations and contact capabilities. Designing soft and actively moving objects, such as soft robots equipped with actuators, poses challenges because it requires simulating dynamics problems involving large deformations and intricate contact interactions. Moreover, the optimal structure depends on the object's motion, necessitating a simultaneous design approach. To address these challenges, we propose "4D topology optimization," an extension of density-based topology optimization that incorporates the time dimension. This enables the simultaneous optimization of both the structure and the self-actuation of soft bodies for specific dynamic tasks. Our method utilizes multi-indexed and hierarchized density variables distributed over the spatiotemporal design domain, representing the material layout, actuator layout, and time-varying actuation. These variables are efficiently optimized using gradient-based methods. Forward and backward simulations of soft bodies are performed using the material point method, a Lagrangian-Eulerian hybrid approach, implemented on a recent automatic differentiation framework. We present several numerical examples of self-actuating soft body designs aimed at achieving locomotion, posture control, and rotation tasks. The results demonstrate the effectiveness of our method in successfully designing soft bodies with complex structures and biomimetic movements, benefiting from its high degree of design freedom.
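The sketch below shows one way the multi-indexed, hierarchized design variables could be parameterized and projected to densities. The differentiable MPM simulation that supplies the actual gradients is not reproduced, so a random placeholder stands in for it; all names and sizes are illustrative assumptions.

```python
import numpy as np

def project(theta, beta=8.0):
    """Map unconstrained design variables to (0, 1) densities."""
    return 1.0 / (1.0 + np.exp(-beta * theta))

rng = np.random.default_rng(2)
n_cells, n_steps = 64, 100

# Hierarchized spatiotemporal design variables (names illustrative):
theta_mat = rng.standard_normal(n_cells)             # material layout
theta_act = rng.standard_normal(n_cells)             # actuator layout
theta_sig = rng.standard_normal((n_cells, n_steps))  # actuation over time

rho = project(theta_mat)              # material density in (0, 1)
act = project(theta_act) * rho        # actuators exist only inside material
u   = 2.0 * project(theta_sig) - 1.0  # time-varying actuation in (-1, 1)

# One gradient step on the material variables; in the paper the gradient
# comes from differentiating an MPM simulation of the task loss, which is
# not reproduced here -- a random placeholder stands in for it.
grad_theta_mat = rng.standard_normal(n_cells)
theta_mat -= 0.1 * grad_theta_mat
print(rho.shape, act.shape, u.shape)
```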

We present DiffXPBD, a novel and efficient analytical formulation for the differentiable position-based simulation of compliant constrained dynamics (XPBD). Our proposed method allows simultaneous computation of the gradients of a goal function with respect to numerous parameters while leveraging a performant simulation model. The method is efficient, enabling differentiable simulations of high-resolution geometries with many degrees of freedom (DoFs). Collisions are naturally included in the framework. Our differentiable model allows a user to easily add optimization variables. Every control-variable gradient requires the computation of only a few partial derivatives, which can be computed using automatic differentiation code. We demonstrate the efficacy of the method with examples such as elastic material parameter estimation, initial value optimization, optimizing for the underlying body shape and pose by observing only the clothing, and optimizing a time-varying external force sequence to match sparse keyframe shapes at specific times. Our approach is highly efficient, which we demonstrate on high-resolution meshes with optimizations involving over 26 million degrees of freedom. Making an existing solver differentiable requires only a few modifications, and the model is compatible with both modern CPU and GPU multi-core hardware.
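For concreteness, here is a minimal sketch of a single standard XPBD projection step for a distance constraint; because the update is a smooth closed-form function of positions and parameters, each control-variable gradient indeed reduces to a few partial derivatives. The constraint choice and values are illustrative, not DiffXPBD's API.

```python
import numpy as np

def xpbd_distance_step(x1, x2, w1, w2, rest, lam, compliance, dt):
    """One XPBD projection of the distance constraint
    C(x1, x2) = |x1 - x2| - rest, with accumulated multiplier lam.
    The update is a smooth closed-form map of positions/parameters,
    which is what makes analytic differentiation tractable."""
    d = x1 - x2
    length = np.linalg.norm(d)
    n = d / length                     # direction of grad_{x1} C
    C = length - rest
    alpha_t = compliance / dt**2       # time-scaled compliance (XPBD)
    dlam = (-C - alpha_t * lam) / (w1 + w2 + alpha_t)
    return x1 + w1 * dlam * n, x2 - w2 * dlam * n, lam + dlam

x1, x2 = np.array([0.0, 0.0]), np.array([1.5, 0.0])
x1, x2, lam = xpbd_distance_step(x1, x2, w1=1.0, w2=1.0, rest=1.0,
                                 lam=0.0, compliance=1e-4, dt=1e-2)
print(x1, x2, lam)                     # endpoints pulled toward rest length
```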

In this paper we report a new finding on the linear sampling and factorization methods: in addition to shape identification, these methods are also capable of parameter identification. Our demonstration concerns shape/parameter identification associated with a restricted Fourier integral operator, which arises from the multi-frequency inverse source problem for a fixed observation direction and from Born inverse scattering problems. Within the framework of the linear sampling method, we develop both a shape identification theory and a parameter identification theory, which are motivated, analyzed, and implemented with the help of the prolate spheroidal wave functions and their generalizations. Both the shape and parameter identification theories are general, since they allow any general regularization scheme such as Tikhonov regularization or singular value cut-off. We further propose a prolate-Galerkin formulation of the linear sampling method for implementation and provide numerical experiments to demonstrate how the linear sampling method is capable of reconstructing both the shape and the parameter.
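A toy sketch of a Tikhonov-regularized linear sampling indicator applied to a discretized restricted Fourier-type operator; the prolate-Galerkin formulation of the paper is not reproduced, and the operator, grids, and regularization parameter are illustrative assumptions.

```python
import numpy as np

def lsm_indicator(F, phi_z, alpha=1e-6):
    """Tikhonov-regularized linear sampling: solve the discretized
    equation F g = phi_z and return 1/||g||; larger values flag
    sampling points z lying inside the unknown support."""
    g = np.linalg.solve(alpha * np.eye(F.shape[1]) + F.conj().T @ F,
                        F.conj().T @ phi_z)
    return 1.0 / np.linalg.norm(g)

# Toy restricted Fourier-type operator on [0, 1]; the (unknown)
# source support is [0, 0.4).
n = 200
t = np.linspace(0.0, 1.0, n)
k = np.linspace(-50.0, 50.0, n)
F = np.exp(-1j * np.outer(k, t)) * (t < 0.4) / n

for z in (0.2, 0.7):                   # inside vs. outside the support
    phi_z = np.exp(-1j * k * z)
    print(z, lsm_indicator(F, phi_z))
```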

This paper develops reduced-order models (ROMs) for the repeated simulation of fractional elliptic partial differential equations (PDEs) for multiple values of the parameters (e.g., diffusion coefficients or the fractional exponent) governing these models. Such problems arise in many applications, including the simulation of Gaussian processes and geophysical electromagnetics. The approach uses the Kato integral formula to express the solution as an integral involving the solution of a parametrized elliptic PDE, which is discretized using finite elements in space and sinc quadrature for the fractional part. The offline stage of the ROM is accelerated using a solver for shifted linear systems, MPGMRES-Sh, and a randomized approach for compressing the snapshot matrix. Our approach is both computationally and memory efficient. Numerical experiments on a range of model problems, including an application to Gaussian processes, show the benefits of our approach.
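A minimal sketch of the sinc-quadrature idea, written here for the Balakrishnan integral representation of A^{-s} (a close relative of the Kato formula): each quadrature node costs one shifted linear solve, which is exactly where a shifted-system solver such as MPGMRES-Sh would replace the dense solve used below. Sizes, quadrature parameters, and the sanity check are illustrative assumptions.

```python
import numpy as np

def fractional_solve(A, b, s=0.5, h=0.25, N=60):
    """Approximate A^{-s} b by sinc quadrature of the Balakrishnan
    integral A^{-s} = (sin(pi*s)/pi) * int_0^inf t^{-s} (t I + A)^{-1} dt
    after substituting t = exp(y).  Each node costs one shifted solve."""
    acc = np.zeros_like(b)
    for j in range(-N, N + 1):
        y = j * h
        acc += np.exp((1.0 - s) * y) * np.linalg.solve(
            np.exp(y) * np.eye(len(b)) + A, b)
    return (np.sin(np.pi * s) / np.pi) * h * acc

# Sanity check against an eigendecomposition on a small SPD matrix.
rng = np.random.default_rng(3)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)
b = rng.standard_normal(50)
w, V = np.linalg.eigh(A)
exact = V @ ((V.T @ b) / np.sqrt(w))
print(np.linalg.norm(fractional_solve(A, b) - exact))
```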

Reinforcement Learning (RL), bolstered by the expressive capabilities of Deep Neural Networks (DNNs) for function approximation, has demonstrated considerable success in numerous applications. However, its practicality in addressing a wide range of real-world scenarios, characterized by diverse and unpredictable dynamics, noisy signals, and large state and action spaces, remains limited. This limitation stems from issues such as poor data efficiency, limited generalization capabilities, a lack of safety guarantees, and the absence of interpretability, among other factors. To overcome these challenges and improve performance across these crucial metrics, one promising avenue is to incorporate additional structural information about the problem into the RL learning process. Various sub-fields of RL have proposed methods for incorporating such inductive biases. We amalgamate these diverse methodologies under a unified framework, shedding light on the role of structure in the learning problem, and classify these methods into distinct patterns of incorporating structure. By leveraging this comprehensive framework, we provide valuable insights into the challenges associated with structured RL and lay the groundwork for a design pattern perspective on RL research. This novel perspective paves the way for future advancements and aids in the development of more effective and efficient RL algorithms that can potentially handle real-world scenarios better.

Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph structure is available. In practice, however, real-world graphs are often noisy and incomplete, or might not be available at all. With this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution over the edges of the graph. This allows one to apply GCNs not only in scenarios where the given graph is incomplete or corrupted, but also in those where a graph is not available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
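A minimal sketch of two inner ingredients, assuming a symmetric Bernoulli distribution over edges: sample a graph from learned edge probabilities and run one normalized GCN layer on the sample. The bilevel program and hypergradient machinery of the paper are not shown; all names and sizes are illustrative.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer with self-loops and
    symmetric degree normalization, followed by a ReLU."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)

rng = np.random.default_rng(4)
n_nodes, n_feat, n_hidden = 10, 5, 8

# Learned (outer-level) edge probabilities: symmetric, no self-loops.
theta = np.triu(rng.uniform(size=(n_nodes, n_nodes)), 1)
theta = theta + theta.T

# Inner level: sample a discrete graph and evaluate the GCN on it.
upper = np.triu((rng.uniform(size=theta.shape) < theta).astype(float), 1)
A_sample = upper + upper.T

X = rng.standard_normal((n_nodes, n_feat))
W = rng.standard_normal((n_feat, n_hidden))
print(gcn_layer(A_sample, X, W).shape)  # embeddings on the sampled graph
```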
