
In traditional topology optimization, the computing time required to iteratively update the material distribution within a design domain strongly depends on the complexity or size of the problem, limiting its application in real engineering contexts. This work proposes a multi-stage machine learning strategy that aims to predict an optimal topology and the related stress fields of interest, either in 2D or 3D, without resorting to any iterative analysis and design process. The overall topology optimization is treated as a regression task in a low-dimensional latent space that encodes the variability of the target designs. First, a fully-connected model is employed to surrogate the functional link between the parametric input space characterizing the design problem and the latent space representation of the corresponding optimal topology. The decoder branch of an autoencoder is then exploited to reconstruct the desired optimal topology from its latent representation. The deep learning models are trained on a dataset generated through a standard method of topology optimization implementing the solid isotropic material with penalization, for varying boundary and loading conditions. The underlying hypothesis behind the proposed strategy is that optimal topologies share enough common patterns to be compressed into small latent space representations without significant information loss. Results relevant to a 2D Messerschmitt-B\"olkow-Blohm beam and a 3D bridge case demonstrate the capabilities of the proposed framework to provide accurate optimal topology predictions in a fraction of a second.
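
To make the two-stage pipeline concrete, the following PyTorch sketch wires a fully-connected parameter-to-latent regressor to the decoder branch of an autoencoder. The latent dimension, layer widths, grid size, and input parametrization are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of the two-stage surrogate: parameters -> latent code -> topology.
# All sizes below are assumed for illustration.
import torch
import torch.nn as nn

LATENT_DIM = 16   # assumed latent-space size
N_PARAMS = 3      # e.g. load position, load angle, volume fraction (assumed)

# Stage 1: fully-connected map from design parameters to the latent code.
param_to_latent = nn.Sequential(
    nn.Linear(N_PARAMS, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, LATENT_DIM),
)

# Stage 2: decoder branch of a (separately trained) autoencoder, reconstructing
# a 64x64 material-density field from the latent code.
decoder = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Sigmoid(),  # densities in [0, 1]
)

def predict_topology(design_params: torch.Tensor) -> torch.Tensor:
    """One forward pass: no iterative analysis/design loop at inference time."""
    z = param_to_latent(design_params)
    return decoder(z).reshape(-1, 64, 64)

rho = predict_topology(torch.rand(1, N_PARAMS))  # predicted density field
```

Since prediction is a single forward pass through two small networks, the sub-second inference times reported above are plausible once training is complete.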

Related Content

Quantum neural networks (QNNs) use parameterized quantum circuits with data-dependent inputs and generate outputs through the evaluation of expectation values. Calculating these expectation values necessitates repeated circuit evaluations, thus introducing fundamental finite-sampling noise even on error-free quantum computers. We reduce this noise by introducing variance regularization, a technique for reducing the variance of the expectation value during quantum model training. This technique requires no additional circuit evaluations if the QNN is properly constructed. Our empirical findings demonstrate that the reduced variance speeds up the training, lowers the output noise, and decreases the number of necessary evaluations of gradient circuits. This regularization method is benchmarked on the regression of multiple functions and the potential energy surface of water. We show that in our examples, it lowers the variance by an order of magnitude on average and leads to a significantly reduced noise level of the QNN. We finally demonstrate QNN training on a real quantum device and evaluate the impact of error mitigation. Here, the optimization is feasible only due to the reduced number of necessary shots in the gradient evaluation resulting from the reduced variance.
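
As a rough illustration of the training objective, the sketch below adds the variance of the measured observable as a penalty to an ordinary regression loss. The callables `qnn_expval` and `qnn_variance` are hypothetical stand-ins for a framework-specific QNN, and the weight `alpha` is an assumed hyperparameter.

```python
# Conceptual sketch of a variance-regularized QNN training loss. The quantum
# forward pass and variance evaluation are hypothetical callables, since the
# abstract does not fix a specific framework.
import numpy as np

def regularized_loss(theta, X, y, qnn_expval, qnn_variance, alpha=0.1):
    """L(theta) = MSE(f(x; theta), y) + alpha * mean variance of the observable.

    qnn_expval(theta, x)   -> <O> for input x            (hypothetical callable)
    qnn_variance(theta, x) -> <O^2> - <O>^2 for input x  (hypothetical callable)
    alpha                  -> regularization weight (assumed value)
    """
    preds = np.array([qnn_expval(theta, x) for x in X])
    variances = np.array([qnn_variance(theta, x) for x in X])
    # Lowering Var[O] lowers the shot noise Var[O]/n_shots of each estimate,
    # which is why fewer circuit evaluations suffice after training.
    return np.mean((preds - y) ** 2) + alpha * np.mean(variances)
```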

This work concerns the analysis of the discontinuous Galerkin spectral element method (DGSEM) with implicit time stepping for the numerical approximation of nonlinear scalar conservation laws in multiple space dimensions. We consider either the DGSEM with a backward Euler time stepping, or a space-time DGSEM discretization to remove the restriction on the time step. We design graph viscosities in space, and in time for the space-time DGSEM, to make the schemes maximum principle preserving and entropy stable for every admissible convex entropy. We also establish well-posedness of the discrete problems by showing existence and uniqueness of the solutions to the nonlinear implicit algebraic relations that need to be solved at each time step. Numerical experiments in one space dimension are presented to illustrate the properties of these schemes.
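
A schematic of the kind of graph-viscosity stabilization involved may help fix ideas; the notation below follows the standard algebraic graph-viscosity literature and is an assumption rather than the paper's exact formulation.

```latex
% Backward Euler update of the nodal values $u_i$ with a graph viscosity
% $d_{ij}$ acting on the stencil graph (notation assumed for illustration):
\[
  m_i \frac{u_i^{n+1} - u_i^n}{\Delta t}
  + \sum_{j \in \mathcal{I}(i)} \mathbf{f}(u_j^{n+1}) \cdot \mathbf{c}_{ij}
  = \sum_{j \in \mathcal{I}(i)} d_{ij}^{n+1} \, \big( u_j^{n+1} - u_i^{n+1} \big),
\]
% where $m_i$ are lumped masses, $\mathbf{c}_{ij}$ are discrete divergence
% coefficients, and the symmetric viscosities $d_{ij} = d_{ji} \ge 0$ are
% chosen large enough that each update is a convex combination of neighboring
% values, which is the mechanism behind the maximum principle.
```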

Over the last two decades, the field of geometric curve evolutions has attracted significant attention in scientific computing. One of the most popular numerical methods for solving geometric flows is the so-called BGN scheme, proposed by Barrett, Garcke, and N\"urnberg (J. Comput. Phys., 222 (2007), pp.~441--467), owing to its favorable properties (e.g., its computational efficiency and good mesh quality). However, the BGN scheme is limited to first-order accuracy in time, and developing a higher-order numerical scheme is challenging. In this paper, we propose a fully discrete, temporally second-order parametric finite element method, integrated with two different mesh regularization techniques, for solving geometric flows of curves. The scheme is constructed based on the BGN formulation and a semi-implicit Crank-Nicolson leap-frog time stepping discretization, together with a linear finite element approximation in space. More importantly, we point out that shape metrics, such as the manifold distance and the Hausdorff distance, rather than function norms, should be employed to measure numerical errors. Extensive numerical experiments demonstrate that the proposed BGN-based scheme is second-order accurate in time in terms of shape metrics. Moreover, by employing the classical BGN scheme as a mesh regularization technique, our proposed second-order schemes exhibit good properties with respect to the mesh distribution. In addition, an unconditional interlaced energy stability property is obtained for one of the mesh regularization techniques.
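
To illustrate the time discretization, here is a schematic Crank-Nicolson leap-frog step for an abstract evolution law; the operator notation is an assumption for illustration and does not reproduce the paper's BGN formulation in detail.

```latex
% Schematic Crank--Nicolson leap-frog step for an evolution
% $\partial_t X = \mathcal{A}(X)\,X$ (notation assumed): the operator is
% frozen at the known middle time level $X^{m}$, so each step is linear,
\[
  \frac{X^{m+1} - X^{m-1}}{2\,\Delta t}
  = \mathcal{A}\!\left(X^{m}\right) \frac{X^{m+1} + X^{m-1}}{2},
\]
% which is formally second order in time while still requiring only one
% linear solve per step, matching the cost profile of the first-order
% BGN scheme.
```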

The problem of computing posterior functionals in general high-dimensional statistical models with possibly non-log-concave likelihood functions is considered. Based on the proof strategy of [49], but using only local likelihood conditions and without relying on M-estimation theory, nonasymptotic statistical and computational guarantees are provided for a gradient-based MCMC algorithm. Given a suitable initialiser, these guarantees scale polynomially in key algorithmic quantities. The abstract results are applied to several concrete statistical models, including density estimation, nonparametric regression with generalised linear models and a canonical statistical non-linear inverse problem from PDEs.
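
For concreteness, the following sketch implements the unadjusted Langevin algorithm, a common instance of the class of gradient-based MCMC samplers to which such guarantees apply; the step size and initialiser are left to the caller, and nothing here is specific to the paper's models.

```python
# Minimal sketch of a gradient-based MCMC sampler (unadjusted Langevin
# algorithm); the paper's exact algorithm and step-size choices are assumed.
import numpy as np

def ula_chain(grad_log_post, theta0, step, n_steps, rng=None):
    """theta_{k+1} = theta_k + step * grad log pi(theta_k) + sqrt(2*step) * xi_k."""
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(theta.shape)
        theta = theta + step * grad_log_post(theta) + np.sqrt(2.0 * step) * noise
        samples.append(theta.copy())
    return np.array(samples)

# Posterior functionals are then estimated by ergodic averages, e.g.
# np.mean([f(t) for t in samples], axis=0) for a functional f of interest.
```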

Complex conjugate matrix equations (CCME) have attracted the interest of many researchers because of their applications in computation and antilinear systems. Existing research is dominated by methods for solving the time-invariant case, while theory for solving the time-variant version is lacking. Moreover, artificial neural networks have rarely been studied for solving CCME. In this paper, starting from the earliest CCME, zeroing neural dynamics (ZND) is applied to solve its time-variant version. Firstly, vectorization and the Kronecker product in the complex field are defined uniformly. Secondly, the Con-CZND1 and Con-CZND2 models are proposed, and their convergence and effectiveness are proved theoretically. Thirdly, three numerical experiments are designed to illustrate the effectiveness of the two models, compare their differences, highlight the significance of neural dynamics in the complex field, and refine the theory related to ZND.
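
The core ZND design rule can be written schematically as follows; the concrete equation and the activation choice are illustrative assumptions rather than the paper's exact models.

```latex
% ZND design, specialized to a simple time-variant complex conjugate matrix
% equation $A(t)X(t) + B(t)\overline{X(t)} = C(t)$ (equation form assumed):
\[
  E(t) = A(t)\,X(t) + B(t)\,\overline{X(t)} - C(t), \qquad
  \dot{E}(t) = -\gamma\,\Phi\!\big(E(t)\big), \quad \gamma > 0,
\]
% so each entry of the error $E(t)$ is driven to zero exponentially fast;
% $\Phi$ is a monotone activation (e.g. the identity), and vectorization with
% the complex-field Kronecker product turns this matrix ODE into an
% implementable dynamical system for $X(t)$.
```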

A new type of systematic approach to studying the incompressible Euler equations numerically via the vanishing viscosity limit is proposed in this work. We show the new strategy is unconditionally stable, in the sense that the $L^2$-energy dissipates and the $H^s$-norm is uniformly bounded in time, without any restriction on the time step. Moreover, first-order convergence of the proposed method is established, including both low-regularity and high-regularity error estimates. The proposed method is extended to full discretization with a newly developed iterative Fourier spectral scheme. Another main contribution of this work is a new integration by parts technique that lowers the regularity requirement from $H^4$ to $H^3$ in order to perform the $L^2$-error estimate. To the best of our knowledge, this is one of the very first works to study the incompressible Euler equations by designing stable numerical schemes via the inviscid limit with rigorous analysis. Furthermore, we present both low- and high-regularity errors from numerical experiments and demonstrate the dynamics in several benchmark examples.
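
For orientation, the underlying vanishing-viscosity regularization takes the standard form below; the scheme-specific discretization details (time stepping and the iterative Fourier spectral solver) are not reproduced here.

```latex
% Viscous regularization of the incompressible Euler equations:
\[
  \partial_t u^{\nu} + (u^{\nu} \cdot \nabla)\, u^{\nu} + \nabla p^{\nu}
  = \nu \Delta u^{\nu}, \qquad \nabla \cdot u^{\nu} = 0,
\]
% with $u^{\nu} \to u$ as $\nu \to 0^{+}$. The schemes are designed so that
% the discrete $L^2$ energy dissipates and $H^s$ norms remain uniformly
% bounded in time, independently of the time step.
```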

We present a method to generate contingency tables that follow loglinear models with prescribed marginal probabilities and dependence structures. We make use of (loglinear) Poisson regression, where the dependence structures, described using odds ratios, are implemented using an offset term. We apply this methodology to carry out simulation studies in the context of population size estimation using dual-system and triple-system estimators, popular in official statistics. These estimators use contingency tables that summarise the counts of elements enumerated or captured within lists that are linked. The simulation is used to investigate these estimators both when the model assumptions are fulfilled and when they are violated.
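
A minimal simulation in this spirit is sketched below. Note that instead of the paper's offset-based loglinear Poisson regression, it uses Plackett's closed-form construction for a 2x2 table with given margins and odds ratio as a simpler stand-in, and all numerical values are illustrative.

```python
# Generate a 2x2 contingency table with prescribed marginal probabilities and
# odds ratio (Plackett's construction), draw Poisson counts, and apply the
# dual-system (Petersen) estimator. N, p1, p2 and psi are assumed values.
import numpy as np

def cell_probs_2x2(p1, p2, psi):
    """Joint probabilities of a 2x2 table with margins p1, p2 and odds ratio psi."""
    if abs(psi - 1.0) < 1e-12:          # independence case
        p11 = p1 * p2
    else:                                # root of (psi-1)p^2 - s*p + psi*p1*p2 = 0
        s = 1.0 + (p1 + p2) * (psi - 1.0)
        p11 = (s - np.sqrt(s * s - 4.0 * psi * (psi - 1.0) * p1 * p2)) / (2.0 * (psi - 1.0))
    return np.array([[p11, p1 - p11],
                     [p2 - p11, 1.0 - p1 - p2 + p11]])

rng = np.random.default_rng(1)
probs = cell_probs_2x2(p1=0.7, p2=0.6, psi=3.0)   # capture probs for two lists
table = rng.poisson(5000 * probs)                  # true population size 5000

# Petersen estimate of the population size from the two linked lists:
n11, n10, n01 = table[0, 0], table[0, 1], table[1, 0]
N_hat = (n11 + n10) * (n11 + n01) / n11
```

With psi != 1 the independence assumption behind the Petersen estimator is violated by construction, which is exactly the kind of misspecification scenario the simulation study probes.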

We present a simple algorithm to approximate the viscosity solution of Hamilton-Jacobi~(HJ) equations by means of an artificial deep neural network. The method uses stochastic gradient descent to minimize the least squares principle defined by a monotone, consistent numerical scheme. We analyze the least squares principle's critical points and derive conditions that guarantee that any critical point approximates the sought viscosity solution. The use of a deep artificial neural network on a finite difference scheme lifts the restriction of conventional finite difference methods that rely on computing functions on a fixed grid. This feature makes it possible to solve HJ equations posed in higher dimensions, where conventional methods are infeasible. We demonstrate the efficacy of our algorithm through numerical studies on various canonical HJ equations across different dimensions, showcasing its potential and versatility.
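
The following PyTorch sketch shows the idea on the one-dimensional eikonal equation $|u_x| = 1$ with a Lax-Friedrichs numerical Hamiltonian; the network size, viscosity coefficient, boundary penalty, and training schedule are illustrative assumptions.

```python
# Sketch: parametrize u by a network and minimize the squared residual of a
# monotone (Lax-Friedrichs) finite-difference operator at random collocation
# points. Equation |u_x| = 1 on (-1, 1) with u(+-1) = 0 is assumed.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
h, sigma = 1e-2, 1.0   # grid spacing and artificial-viscosity coefficient

def lf_residual(x):
    """Residual of the monotone Lax-Friedrichs discretization of |u_x| - 1 = 0."""
    up = (net(x + h) - net(x)) / h          # forward difference
    um = (net(x) - net(x - h)) / h          # backward difference
    H = torch.abs(0.5 * (up + um)) - 1.0    # eikonal Hamiltonian (assumed)
    return H - 0.5 * sigma * (up - um)      # LF numerical Hamiltonian

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bdry = torch.tensor([[-1.0], [1.0]])
for _ in range(2000):
    x = torch.rand(256, 1) * 2.0 - 1.0      # random points: no fixed grid needed
    loss = (lf_residual(x) ** 2).mean() + (net(bdry) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the residual is evaluated at freshly sampled points each iteration rather than on a stored grid, the same construction extends to higher dimensions where a fixed grid would be infeasible.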

In this work, we extend the data-driven It\^{o} stochastic differential equation (SDE) framework for the pathwise assessment of short-term forecast errors to account for the time-dependent upper bound that naturally constrains the observable historical data and forecast. We propose a new nonlinear and time-inhomogeneous SDE model with a Jacobi-type diffusion term for the phenomenon of interest, simultaneously driven by the forecast and the constraining upper bound. We rigorously demonstrate the existence and uniqueness of a strong solution to the SDE model by imposing a condition on the time-varying mean-reversion parameter appearing in the drift term. The normalized forecast function is thresholded to keep this mean-reversion parameter bounded. The SDE model parameters are calibrated using user-selected approximations of the likelihood function. Another novel contribution is estimating the unknown transition density of the forecast error process with a tailored kernel smoothing technique combined with the control variate method, coupling an adequate SDE to the original one. We provide a theoretical study of how to choose the optimal bandwidth. We fit the model to the 2019 photovoltaic (PV) solar power daily production and forecast data in Uruguay, computing the daily maximum solar PV production estimate. Two statistical versions of the constrained SDE model are fit, with the beta and truncated normal distributions as proxies for the transition density. Empirical results include simulations of the normalized solar PV power production and pathwise confidence bands generated through an indirect inference method. An objective comparison of optimal parametric points associated with the two selected statistical approximations is provided by applying our innovative kernel smoothing estimation technique for the transition function of the forecast error process.
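
A schematic Jacobi-type SDE of the kind described is given below; the coefficient names and the exact way the forecast and the bound enter the drift are assumptions for illustration.

```latex
% Schematic Jacobi-type SDE on the moving band $[0, B(t)]$ (notation assumed;
% in the paper the drift additionally tracks the normalized forecast):
\[
  dX_t = \theta(t)\,\big(\mu(t) - X_t\big)\,dt
       + \sigma \sqrt{X_t\,\big(B(t) - X_t\big)}\; dW_t,
  \qquad 0 \le X_t \le B(t),
\]
% where the square-root diffusion vanishes at both boundaries, keeping paths
% inside the band provided the time-varying mean-reversion rate $\theta(t)$
% is suitably bounded -- which is what thresholding the normalized forecast
% function enforces.
```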

Agglomeration techniques are important to reduce the computational costs of numerical simulations and stand at the basis of multilevel algebraic solvers. To automatically perform the agglomeration of polyhedral grids, we propose a novel Geometrical Deep Learning-based algorithm that exploits the geometrical and physical information of the underlying computational domain to construct the agglomerated grid while simultaneously guaranteeing the agglomerated grid's quality. In particular, we propose a bisection model based on Graph Neural Networks (GNNs) to partition a suitable connectivity graph of computational three-dimensional meshes. The new approach has a high online inference speed and can simultaneously process the graph structure of the mesh, the geometrical information of the mesh (e.g. elements' volumes, centers' coordinates), and the physical information of the domain (e.g. physical parameters). Taking advantage of this new approach, our algorithm can agglomerate meshes of a domain composed of heterogeneous media in an automatic way. The proposed GNN techniques are compared with the k-means algorithm and METIS, standard approaches for graph partitioning that process only the connectivity information of the mesh. We demonstrate that our algorithm outperforms these approaches in terms of quality metrics and runtimes. Moreover, our algorithm shows a good level of generalization when applied to more complex geometries, such as three-dimensional geometries reconstructed from medical images. Finally, the capabilities of the model in performing agglomeration of heterogeneous domains are tested in the framework of problems containing microstructures and on a complex geometry such as the human brain.
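
A minimal sketch of the bisection idea follows: a small message-passing network maps per-element geometric and physical features to a soft two-way partition, trained against a relaxed normalized-cut objective. The architecture and loss below are illustrative assumptions, not the paper's exact model.

```python
# Sketch of GNN-based graph bisection for mesh agglomeration. Node features
# mix geometry (element volume, centroid coordinates) and physics (e.g. a
# material parameter); the output is a soft 2-way partition per element.
import torch
import torch.nn as nn

class BisectionGNN(nn.Module):
    def __init__(self, n_feats, hidden=32):
        super().__init__()
        self.lin1 = nn.Linear(n_feats, hidden)
        self.lin2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 2)

    def forward(self, X, A_norm):
        # Two rounds of mean aggregation over the element-connectivity graph.
        H = torch.relu(self.lin1(A_norm @ X))
        H = torch.relu(self.lin2(A_norm @ H))
        return torch.softmax(self.out(H), dim=-1)   # soft 2-way partition

def normalized_cut_loss(P, A):
    """Differentiable relaxation of the normalized cut for a soft partition P."""
    cut = (P.T @ A @ (1.0 - P)).diagonal().sum()    # weight of edges crossing the cut
    volume = (P.T @ A).sum(dim=1)                   # soft volume of each part
    return (cut / volume.clamp_min(1e-9)).sum()

# X: (n_elements, n_feats) features; A: dense adjacency; A_norm: D^{-1} A.
# Recursive application of the trained bisector yields the agglomerated grid.
```

At inference time the partition is a forward pass, which is what gives such an approach its online speed advantage over iterative partitioners like METIS on this task.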
