The main respiratory muscle, the diaphragm, is an example of a thin structure. We aim to perform detailed numerical simulations of the muscle mechanics based on individual patient data. This requires a representation of the diaphragm geometry extracted from medical image data. We design an adaptive reconstruction method based on a least-squares radial basis function partition of unity method. The method is adapted to thin structures by subdividing the structure rather than the surrounding space, and by introducing an anisotropic scaling of local subproblems. The resulting representation is an infinitely smooth level set function, which is stabilized such that there are no spurious zero level sets. We show reconstruction results for 2D cross sections of the diaphragm geometry as well as for the full 3D geometry. We also show solutions to basic PDE test problems in the reconstructed geometries.
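
As a rough illustration of the local subproblems, the following Python sketch fits a least-squares Gaussian RBF approximation to level-set samples on a single, anisotropically scaled patch. The kernel choice, function names, and patch handling are illustrative assumptions, and the partition-of-unity blending of the patches is omitted; this is a minimal sketch, not the paper's exact formulation.

import numpy as np

def gaussian_rbf(r, shape=2.0):
    # Infinitely smooth kernel; the smoothness of the kernel carries over
    # to the reconstructed level set function.
    return np.exp(-(shape * r) ** 2)

def fit_local_level_set(points, values, centers, scale):
    """Least-squares RBF fit on one patch with anisotropic scaling.

    points  : (n, d) sample locations in the patch
    values  : (n,)   approximate level-set values at the samples
    centers : (m, d) RBF centers, with m <= n for a least-squares fit
    scale   : (d,)   per-axis scaling that stretches the thin direction
    """
    P = points * scale                    # anisotropic rescaling of the patch
    C = centers * scale
    A = gaussian_rbf(np.linalg.norm(P[:, None, :] - C[None, :, :], axis=-1))
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeffs

def evaluate_local(x, centers, scale, coeffs):
    # Evaluate the local approximant at a single point x of shape (d,).
    A = gaussian_rbf(np.linalg.norm(x * scale - centers * scale, axis=-1))
    return A @ coeffs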

Related content

In CFD, mesh smoothing methods are commonly used to improve mesh quality and thereby enable high-precision numerical simulations. In particular, optimization-based smoothing yields high-quality meshes, but incurs significant computational overhead. Pioneering works improve its efficiency by adopting supervised learning to learn smoothing from high-quality meshes. However, these methods struggle to smooth mesh nodes of varying degree and require data augmentation to cope with the ordering of the node inputs. Additionally, the labeled high-quality meshes they require further limit their applicability. In this paper, we present GMSNet, a lightweight neural network model for intelligent mesh smoothing. GMSNet adopts graph neural networks to extract features of a node's neighbors and outputs the optimal node position. During smoothing, we also introduce a fault-tolerance mechanism to prevent GMSNet from generating negative-volume elements. With a lightweight model, GMSNet can effectively smooth mesh nodes of varying degree and remains unaffected by the order of the input data. A novel loss function, MetricLoss, is also developed to eliminate the need for high-quality meshes and provides stable and rapid convergence during training. We compare GMSNet with commonly used mesh smoothing methods on two-dimensional triangular meshes. The experimental results show that GMSNet achieves outstanding mesh smoothing performance with only 5% of the parameters of the previous model, and is 13.56 times faster than optimization-based smoothing.
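
The fault-tolerance mechanism lends itself to a simple illustration: accept a proposed node position only if no incident element becomes inverted. The sketch below, in 2D with a placeholder proposal function standing in for the network, is an assumption about the general shape of such a check, not GMSNet's actual implementation.

import numpy as np

def signed_area2(a, b, c):
    # Twice the signed area of triangle (a, b, c); positive when counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def laplacian_propose(coords, node, ring):
    # Classical Laplacian proposal: centroid of the distinct one-ring neighbours.
    # In GMSNet, the network output would take the place of this function.
    neighbours = {k for edge in ring for k in edge}
    return np.mean([coords[k] for k in neighbours], axis=0)

def smooth_node(coords, node, ring, propose=laplacian_propose):
    """coords  : dict mapping node index -> np.array([x, y])
    node    : index of the free node being smoothed
    ring    : list of edges (i, j) such that (node, i, j) is an incident triangle
    propose : callable returning a candidate position for the node
    """
    candidate = propose(coords, node, ring)
    for i, j in ring:
        if signed_area2(candidate, coords[i], coords[j]) <= 0.0:
            return coords[node]   # reject: the move would invert an element
    return candidate              # accept the smoothed position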

We develop a theory for the representation of opaque solids as volumes. Starting from a stochastic representation of opaque solids as random indicator functions, we prove the conditions under which such solids can be modeled using exponential volumetric transport. We also derive expressions for the volumetric attenuation coefficient as a functional of the probability distributions of the underlying indicator functions. We generalize our theory to account for isotropic and anisotropic scattering at different parts of the solid, and for representations of opaque solids as stochastic implicit surfaces. We derive our volumetric representation from first principles, which ensures that it satisfies physical constraints such as reciprocity and reversibility. We use our theory to explain, compare, and correct previous volumetric representations, as well as propose meaningful extensions that lead to improved performance in 3D reconstruction tasks.
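
For concreteness, exponential volumetric transport expresses the transmittance along a ray in terms of an attenuation coefficient $\sigma$; the form below is the standard one, and the paper's contribution lies in deriving $\sigma$ from the statistics of the random indicator functions:

$$ T(\mathbf{x}, \boldsymbol{\omega}, t) = \exp\!\left( -\int_0^t \sigma(\mathbf{x} + s\,\boldsymbol{\omega}) \,\mathrm{d}s \right). $$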

One of the key elements of probabilistic seismic risk assessment studies is the fragility curve, which represents the conditional probability of failure of a mechanical structure for a given scalar measure derived from seismic ground motion. For many structures of interest, estimating these curves is a daunting task because of the limited amount of data available; data which is only binary in our framework, i.e., only describing the structure as being in a failure or non-failure state. A large number of methods described in the literature tackle this challenging framework through parametric log-normal models. Bayesian approaches, on the other hand, allow model parameters to be learned more efficiently. However, the impact of the choice of the prior distribution on the posterior distribution cannot be readily neglected and, consequently, neither can its impact on any resulting estimation. This paper proposes a comprehensive study of this parametric Bayesian estimation problem for limited and binary data. Using the reference prior theory as a cornerstone, this study develops an objective approach to choosing the prior. This approach leads to the Jeffreys prior, which is derived for this problem for the first time. The posterior distribution is proven to be proper with the Jeffreys prior but improper with some traditional priors found in the literature. With the Jeffreys prior, the posterior distribution is also shown to vanish at the boundaries of the parameters' domain, which means that sampling the posterior distribution of the parameters does not result in anomalously small or large values. Therefore, the use of the Jeffreys prior does not result in degenerate fragility curves such as unit-step functions, and leads to more robust credibility intervals. The numerical results obtained from different case studies, including an industrial example, illustrate the theoretical predictions.
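
For reference, the parametric log-normal fragility model takes the standard form below, with median capacity $\alpha$, log-standard deviation $\beta$, and $\Phi$ the standard normal CDF; the binary data $z_i \in \{0,1\}$ (failure or non-failure at intensity $a_i$) then enter through a Bernoulli likelihood:

$$ P_f(a) = \Phi\!\left( \frac{\ln(a/\alpha)}{\beta} \right), \qquad \mathcal{L}(\alpha, \beta) = \prod_{i=1}^{n} P_f(a_i)^{z_i} \bigl( 1 - P_f(a_i) \bigr)^{1 - z_i}. $$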

We introduce a high-dimensional cubical complex, for any dimension t>0, and apply it to the design of quantum locally testable codes. Our complex is a natural generalization of the constructions by Panteleev and Kalachev and by Dinur et al. of a square complex (case t=2), which have been applied to the design of classical locally testable codes (LTC) and quantum low-density parity check codes (qLDPC) respectively. We turn the geometric (cubical) complex into a chain complex by relying on constant-sized local codes $h_1,\ldots,h_t$ as gadgets. A recent result of Panteleev and Kalachev on the existence of tuples of codes that are product expanding enables us to prove lower bounds on the cycle and co-cycle expansion of our chain complex. For t=4 our construction gives a new family of "almost-good" quantum LTCs -- with constant relative rate, inverse-polylogarithmic relative distance and soundness, and constant-size parity checks. Both the distance of the quantum code and its local testability are proven directly from the cycle and co-cycle expansion of our chain complex.
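
As background for the soundness claim, the classical notion being generalized reads as follows: a code $C = \ker H \subseteq \mathbb{F}_2^n$ with parity-check matrix $H \in \mathbb{F}_2^{m \times n}$, each row of weight at most $q$, is locally testable with soundness $\rho$ if (in one common normalization)

$$ \frac{|Hx|}{m} \;\ge\; \rho\, \frac{d(x, C)}{n} \qquad \text{for all } x \in \mathbb{F}_2^n, $$

so that the rejection probability of a random parity check grows linearly with the distance from the code; the quantum analogue replaces $H$ by the boundary and co-boundary maps of the chain complex.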

We consider a sharp interface formulation for the multi-phase Mullins-Sekerka flow. The flow is characterized by a network of curves evolving such that the total surface energy of the curves is reduced, while the areas of the enclosed phases are conserved. Making use of a variational formulation, we introduce a fully discrete finite element method. Our discretization features a parametric approximation of the moving interfaces that is independent of the discretization used for the equations in the bulk. The scheme can be shown to be unconditionally stable and to satisfy an exact volume conservation property. Moreover, an inherent tangential velocity for the vertices on the discrete curves leads to asymptotically equidistributed vertices, meaning no remeshing is necessary in practice. Several numerical examples, including a convergence experiment for the three-phase Mullins-Sekerka flow, demonstrate the capabilities of the introduced method.
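
For orientation, the two-phase version of the flow (up to sign and scaling conventions) couples a harmonic chemical potential to the interface through the Gibbs-Thomson law and moves the interface with the jump of its normal derivative:

$$ \Delta \mu = 0 \ \text{in } \Omega \setminus \Gamma(t), \qquad \mu = \varkappa \ \text{on } \Gamma(t), \qquad \mathcal{V} = \bigl[ \partial_{\nu} \mu \bigr]_{-}^{+} \ \text{on } \Gamma(t), $$

where $\varkappa$ denotes the curvature of $\Gamma(t)$, $\nu$ a unit normal, and $\mathcal{V}$ the normal velocity; in the multi-phase setting considered here, these conditions hold along each curve of the network.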

We exploit the similarities between Tikhonov regularization and Bayesian hierarchical models to propose a regularization scheme that acts like a distributed Tikhonov regularization where the amount of regularization varies from component to component. In the standard formulation, Tikhonov regularization compensates for the inherent ill-conditioning of linear inverse problems by augmenting the data fidelity term, which measures the mismatch between the data and the model output, with a scaled penalty functional. The selection of the scaling is the core problem in Tikhonov regularization. If an estimate of the amount of noise in the data is available, a popular choice is the Morozov discrepancy principle, stating that the scaling parameter should be chosen so as to guarantee that the norm of the data fitting error is approximately equal to the norm of the noise in the data. Too small a value of the regularization parameter would yield a solution that fits the noise, while too large a value would lead to an excessive penalization of the solution. In many applications, it would be preferable to apply distributed regularization, replacing the regularization scalar by a vector-valued parameter, allowing different regularization for different components of the unknown, or for groups of them. A distributed Tikhonov-inspired regularization is particularly well suited when the data have significantly different sensitivity to different components, or to promote sparsity of the solution. The numerical scheme that we propose, while exploiting the Bayesian interpretation of the inverse problem and identifying the Tikhonov regularization with the Maximum A Posteriori (MAP) estimation, requires no statistical tools. A combination of numerical linear algebra and optimization tools makes the scheme computationally efficient and suitable for problems where the matrix is not explicitly available.
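
Schematically, and suppressing the regularization operator, the step from standard to distributed Tikhonov regularization replaces a single scaling parameter by one per component,

$$ \min_x \left\{ \|Ax - b\|^2 + \lambda \|x\|^2 \right\} \quad \longrightarrow \quad \min_x \left\{ \|Ax - b\|^2 + \sum_j \frac{x_j^2}{\theta_j} \right\}, $$

where the Bayesian reading identifies the penalty with the negative log-density of a Gaussian prior $x_j \sim \mathcal{N}(0, \theta_j)$, so that the minimizer is the MAP estimate; the Morozov discrepancy principle correspondingly selects $\lambda$ so that $\|A x_\lambda - b\|$ matches the norm of the noise.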

The high volatility of renewable energies calls for more energy efficiency. Thus, different physical systems need to be coupled efficiently, although they run on various time scales. Here, the port-Hamiltonian (pH) modeling framework comes into play, as it has several advantages: physical properties are encoded in the system structure, and systems running on different time scales can be coupled easily. Additionally, pH systems coupled by energy-preserving conditions are still pH. Furthermore, hydrogen is becoming an important player in the energy transition, and unlike for natural gas, its temperature dependence is of importance. Thus, we introduce an infinite-dimensional pH formulation of the compressible non-isothermal Euler equations to model flow with temperature dependence. We set up the underlying Stokes-Dirac structure and deduce the boundary port variables. We introduce coupling conditions into our pH formulation such that the whole network system is itself pH. This is achieved by using energy-preserving coupling conditions, i.e., mass conservation and equality of total enthalpy, at the coupling nodes. Furthermore, a third coupling condition is needed to close the system. Here, equality of the outgoing entropy at coupling nodes is used and included in our system in a structure-preserving way. Following that, we adapt the structure-preserving approximation methods from the isothermal to the non-isothermal case. Academic numerical examples support our analytical findings.
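
In one space dimension and without source terms such as pipe friction, the compressible non-isothermal Euler equations underlying this formulation read

$$ \partial_t \rho + \partial_x (\rho v) = 0, \qquad \partial_t (\rho v) + \partial_x \bigl( \rho v^2 + p \bigr) = 0, \qquad \partial_t E + \partial_x \bigl( (E + p)\, v \bigr) = 0, $$

with total energy density $E = \rho \bigl( e + \tfrac{1}{2} v^2 \bigr)$; the coupling conditions at a network node then enforce conservation of mass across the adjacent pipes and equality of the total enthalpy $h + \tfrac{1}{2} v^2$, where $h = e + p/\rho$.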

In this work we consider the Allen--Cahn equation, a prototypical model problem in nonlinear dynamics that exhibits bifurcations corresponding to variations of a deterministic bifurcation parameter. Going beyond the state of the art, we introduce a random coefficient function in the linear reaction part of the equation, thereby accounting for random, spatially-heterogeneous effects. Importantly, we assume a spatially constant, deterministic mean value of the random coefficient. We show that this mean value is in fact a bifurcation parameter in the Allen--Cahn equation with random coefficients. Moreover, we show that the bifurcation points and bifurcation curves become random objects. We consider two distinct modelling situations: (i) for a spatially homogeneous coefficient we derive analytical expressions for the distribution of the bifurcation points and show that the bifurcation curves are random shifts of a fixed reference curve; (ii) for a spatially heterogeneous coefficient we employ a generalized polynomial chaos expansion to approximate the statistical properties of the random bifurcation points and bifurcation curves. We present numerical examples in 1D physical space, where we combine the popular software package Continuation Core and Toolboxes (CoCo) for numerical continuation and the Sparse Grids Matlab Kit for the polynomial chaos expansion. Our exposition addresses both dynamical systems and uncertainty quantification, highlighting how analytical and numerical tools from both areas can be combined efficiently for the challenging uncertainty quantification analysis of bifurcations in random differential equations.
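
Schematically, the setting can be written as an Allen--Cahn equation whose linear reaction coefficient is random with a deterministic, spatially constant mean (the scaling and symbols here are illustrative):

$$ \partial_t u = \varepsilon^2 \Delta u + \lambda(x, \omega)\, u - u^3, \qquad \mathbb{E}\bigl[ \lambda(x, \cdot) \bigr] \equiv \bar{\lambda}, $$

so that $\bar{\lambda}$ plays the role of the deterministic bifurcation parameter while the fluctuations of $\lambda$ make the bifurcation points and curves random.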

We propose and analyse numerical schemes for a system of quasilinear, degenerate evolution equations modelling biofilm growth as well as other processes such as flow through porous media and the spreading of wildfires. The first equation in the system is parabolic and exhibits degenerate and singular diffusion, while the second is either uniformly parabolic or an ordinary differential equation. First, we introduce a semi-implicit time discretisation that has the benefit of decoupling the equations. We prove the positivity, boundedness, and convergence of the time-discrete solutions to the time-continuous solution. Then, we introduce an iterative linearisation scheme to solve the resulting nonlinear time-discrete problems. Under weak assumptions on the time-step size, we prove that the scheme converges irrespective of the space discretisation and mesh. Moreover, if the problem is non-degenerate, the convergence becomes faster as the time-step size decreases. Finally, employing the finite element method for the spatial discretisation, we study the behaviour of the scheme, and compare its performance to other commonly used schemes. These tests confirm that the proposed scheme is robust and fast.
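
To indicate what the iterative linearisation can look like, consider a generic degenerate parabolic equation $\partial_t b(u) = \nabla \cdot (D(u) \nabla u)$; an L-scheme-type iteration for time step $n$ and stabilization constant $L > 0$, given here as a generic template rather than the paper's exact scheme, reads

$$ \frac{b(u^{n,i-1}) - b(u^{n-1})}{\tau} + \frac{L}{\tau} \bigl( u^{n,i} - u^{n,i-1} \bigr) = \nabla \cdot \bigl( D(u^{n,i-1})\, \nabla u^{n,i} \bigr), $$

iterated over $i$ until $\|u^{n,i} - u^{n,i-1}\|$ drops below a tolerance; each iterate only requires the solution of a linear elliptic problem, which is what makes such schemes robust with respect to the spatial discretisation.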
