
A comprehensive mathematical model of the multiphysics flow of blood and cerebrospinal fluid (CSF) in the brain can be expressed as the coupling of a poromechanics system and Stokes' equations: the former describes fluid filtration through the cerebral tissue and the tissue's elastic response, while the latter models the flow of the CSF in the brain ventricles. This model describes the functioning of the brain's waste clearance mechanism, which has recently been found to play an essential role in the progression of neurodegenerative diseases. To model the interactions between different scales in the porous medium, we propose a physically consistent coupling between Multi-compartment Poroelasticity (MPE) equations and Stokes' equations. In this work, we introduce a numerical scheme for the discretization of the coupled MPE-Stokes system, employing a high-order discontinuous Galerkin method on polytopal grids to efficiently account for the geometric complexity of the domain. We analyze the stability and convergence of the spatially semidiscretized formulation, prove a priori error estimates, and present a temporal discretization based on a combination of Newmark's $\beta$-method for the elastic wave equation and the $\theta$-method for the other equations of the model. Numerical simulations carried out on test cases with manufactured solutions validate the theoretical error estimates. We also present numerical results on a two-dimensional slice of a patient-specific brain geometry reconstructed from diagnostic images, to test in practice the advantages of the proposed approach.
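
As a rough illustration of the time discretization described above, the sketch below (Python/NumPy, with illustrative function names) advances a generic linear second-order system with Newmark's $\beta$-method and a generic first-order system with the $\theta$-method. In the actual coupled MPE-Stokes scheme the two updates additionally exchange coupling terms at each step, which this toy omits.

```python
import numpy as np

def newmark_step(M, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    """One Newmark-beta step for M a + K u = f (elastic-wave-like part)."""
    u_pred = u + dt * v + dt**2 * (0.5 - beta) * a           # predictor
    a_next = np.linalg.solve(M + beta * dt**2 * K, f_next - K @ u_pred)
    u_next = u_pred + beta * dt**2 * a_next
    v_next = v + dt * ((1 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next

def theta_step(C, A, s, s_next, p, dt, theta=0.5):
    """One theta-method step for C p' + A p = s (first-order-in-time part)."""
    lhs = C / dt + theta * A
    rhs = (C / dt - (1 - theta) * A) @ p + theta * s_next + (1 - theta) * s
    return np.linalg.solve(lhs, rhs)

# Tiny usage on small SPD systems (purely illustrative).
M = np.eye(2); K = np.array([[2.0, -1.0], [-1.0, 2.0]])
u, v, a = newmark_step(M, K, np.zeros(2), np.ones(2), np.zeros(2), np.zeros(2), dt=1e-2)
p = theta_step(np.eye(2), K, np.zeros(2), np.zeros(2), np.ones(2), dt=1e-2)
```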

Related content

We study the long-time behavior of an underdamped mean-field Langevin (MFL) equation and provide a general convergence result, as well as an exponential convergence rate, under different sets of conditions. The results on the MFL equation can be applied to study the convergence of the Hamiltonian gradient descent algorithm for overparametrized optimization. We then provide a numerical example of the algorithm, training a generative adversarial network (GAN).
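
For orientation, a minimal Euler-Maruyama discretization of an underdamped Langevin equation with a fixed potential is sketched below. In the mean-field setting the drift also depends on the law of $X_t$ (typically approximated by an interacting particle system), which this single-particle toy omits; all names and parameter values are illustrative.

```python
import numpy as np

def underdamped_langevin(grad_U, x0, v0, gamma=1.0, sigma=1.0,
                         dt=1e-2, n_steps=10_000, rng=None):
    """Euler-Maruyama discretization of
        dX = V dt,
        dV = (-grad U(X) - gamma V) dt + sigma dW."""
    rng = np.random.default_rng() if rng is None else rng
    x, v = np.array(x0, dtype=float), np.array(v0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        v = v + dt * (-grad_U(x) - gamma * v) + sigma * np.sqrt(dt) * noise
        x = x + dt * v
    return x, v

# Example: quadratic potential U(x) = |x|^2 / 2, so grad U(x) = x.
x, v = underdamped_langevin(lambda x: x, x0=np.ones(2), v0=np.zeros(2))
```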

Flexoelectricity, the generation of an electric field in response to a strain gradient, is a universal electromechanical coupling that is dominant only at small scales because it requires high strain gradients. This phenomenon is governed by a set of coupled fourth-order partial differential equations (PDEs), which require $C^1$ continuity of the basis functions when solved with finite element methods. While isogeometric analysis (IGA) has been proven to meet this continuity requirement thanks to its higher-order B-spline basis functions, it is limited to simple geometries that can be discretized with a single IGA patch. For domains that require more than one patch for discretization, e.g., architected materials, IGA faces the challenge of only $C^0$ continuity across the patch boundaries. Here we present a discontinuous Galerkin-based isogeometric analysis framework capable of solving the fourth-order PDEs of flexoelectricity in truss-based architected materials. An interior-penalty stabilization is implemented to ensure the stability of the solution. The present formulation is advantageous over analogous finite element methods since it only requires the computation of interior boundary contributions on the boundaries of the patches. As each strut can be modeled with only two trapezoidal patches, the number of $C^0$-continuous boundaries is greatly reduced. Further, we consider four distinct unit cells to construct the truss lattices and analyze their flexoelectric response. The truss lattices show a higher magnitude of flexoelectricity than a solid beam and retain this superior electromechanical response as the size of the structure increases. These results indicate the potential of architected materials to scale flexoelectricity up to larger scales, towards achieving a universal electromechanical response in meso- and macro-scale dielectric materials.
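
As a minimal illustration of the coupling itself, not of the DG-IGA solver, the sketch below evaluates the simplest one-dimensional flexoelectric constitutive relation, a polarization proportional to the strain gradient. The displacement field and the flexoelectric coefficient are assumed purely for illustration.

```python
import numpy as np

# Simplest 1D constitutive picture: P = mu * d(eps)/dx, i.e. polarization is
# driven by the strain gradient, which is why slender struts at small scales
# (high gradients) are attractive. Not the coupled fourth-order solver.
x = np.linspace(0.0, 1.0, 201)           # beam axis (illustrative units)
u = 0.01 * x**2                          # assumed axial displacement field
eps = np.gradient(u, x)                  # strain
strain_gradient = np.gradient(eps, x)    # strain gradient
mu = 1e-7                                # assumed flexoelectric coefficient
P = mu * strain_gradient                 # flexoelectric polarization
```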

We propose a new reduced order modeling strategy for tackling parametrized Partial Differential Equations (PDEs) with linear constraints, in particular Darcy flow systems in which the constraint is given by mass conservation. Our approach employs classical neural network architectures and supervised learning, but it is constructed in such a way that the resulting Reduced Order Model (ROM) is guaranteed to satisfy the linear constraints exactly. The procedure is based on a splitting of the PDE solution into a particular solution satisfying the constraint and a homogeneous solution. The homogeneous solution is approximated by mapping a suitable potential function, generated by a neural network model, onto the kernel of the constraint operator; for the particular solution, we propose an efficient spanning tree algorithm. Starting from this paradigm, we present three approaches that follow this methodology, obtained by exploring different choices of the potential spaces: from empirical ones, derived via Proper Orthogonal Decomposition (POD), to more abstract ones based on differential complexes. All proposed approaches combine computational efficiency with rigorous mathematical interpretation, thus guaranteeing the explainability of the model outputs. To demonstrate the efficacy of the proposed strategies and to emphasize their advantages over vanilla black-box approaches, we present a series of numerical experiments on fluid flows in porous media, ranging from mixed-dimensional problems to nonlinear systems. This research lays the foundation for further exploration and development in the realm of model order reduction, potentially unlocking new capabilities and solutions in computational geosciences and beyond.
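
A minimal algebraic sketch of the splitting idea: write the discrete solution as a particular solution of the constraint $Bu=b$ plus an element of $\ker B$, so the constraint holds exactly for any learned coefficients. Below, a least-squares solve and an SVD nullspace basis stand in for the paper's spanning-tree algorithm and POD/complex-based potential spaces; names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 8))    # constraint operator (e.g., discrete divergence)
b = rng.standard_normal(3)         # prescribed right-hand side (e.g., mass fluxes)

u_part = np.linalg.lstsq(B, b, rcond=None)[0]   # stand-in for the spanning-tree solve
_, s, Vt = np.linalg.svd(B)
N = Vt[s.size:].T                                # orthonormal basis of ker(B)

z = rng.standard_normal(N.shape[1])              # would come from the neural network
u = u_part + N @ z                               # full ROM solution
assert np.allclose(B @ u, b)                     # constraint satisfied exactly, for any z
```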

Stochastic Primal-Dual Hybrid Gradient (SPDHG) is an algorithm proposed by Chambolle et al. (2018) to efficiently solve a wide class of nonsmooth large-scale optimization problems. In this paper we contribute to its theoretical foundations and prove its almost-sure convergence for functionals that are convex but not necessarily strongly convex or smooth, and for arbitrary random sampling. In addition, we study SPDHG for parallel Magnetic Resonance Imaging reconstruction, where data from different coils are randomly selected at each iteration. We apply SPDHG with a wide range of random sampling methods and compare its performance across a range of settings, including mini-batch size and step-size parameters. We show that the sampling can significantly affect the convergence speed of SPDHG and that in many cases an optimal sampling can be identified.
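
A schematic of the serial-sampling SPDHG iteration for $\min_x \sum_i f_i(A_i x) + g(x)$ is sketched below. Function names, the toy least-squares example, and the chosen step sizes are ours; the usual step-size condition $\sigma_i \tau \|A_i\|^2 \le p_i$ is only noted in a comment, and the MRI-specific operators are omitted.

```python
import numpy as np

def spdhg(A_blocks, prox_g, prox_fconj, x0, tau, sigmas, probs,
          n_iter=1000, theta=1.0, rng=None):
    """Sketch of SPDHG; prox_fconj[i](v, sigma) is the prox of sigma*f_i^* at v."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    y = [np.zeros(A.shape[0]) for A in A_blocks]
    z = sum(A.T @ yi for A, yi in zip(A_blocks, y))
    zbar = z.copy()
    for _ in range(n_iter):
        x = prox_g(x - tau * zbar, tau)                       # primal step
        i = rng.choice(len(A_blocks), p=probs)                # sample one block
        y_new = prox_fconj[i](y[i] + sigmas[i] * (A_blocks[i] @ x), sigmas[i])
        dz = A_blocks[i].T @ (y_new - y[i])
        y[i] = y_new
        z = z + dz
        zbar = z + (theta / probs[i]) * dz                    # extrapolation
        # step sizes should satisfy e.g. sigmas[i] * tau * ||A_i||^2 <= probs[i]
    return x

# Toy usage: min_x sum_i 0.5*||A_i x - b_i||^2 + (lam/2)*||x||^2
rng = np.random.default_rng(0)
A_blocks = [rng.standard_normal((5, 4)) for _ in range(3)]
b = [rng.standard_normal(5) for _ in range(3)]
lam = 0.1
prox_g = lambda v, tau: v / (1.0 + tau * lam)
prox_fconj = [lambda v, s, bi=bi: (v - s * bi) / (1.0 + s) for bi in b]
x = spdhg(A_blocks, prox_g, prox_fconj, np.zeros(4),
          tau=0.01, sigmas=[0.01] * 3, probs=[1 / 3] * 3, n_iter=5000)
```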

We resurrect the infamous harmonic mean estimator for computing the marginal likelihood (Bayesian evidence) and resolve its problematically large variance. The marginal likelihood is a key component of Bayesian model selection, used to evaluate model posterior probabilities; however, its computation is challenging. The original harmonic mean estimator, first proposed by Newton and Raftery in 1994, involves computing the harmonic mean of the likelihood given samples from the posterior. It was immediately realised that the original estimator can fail catastrophically since its variance can become very large (possibly not finite). A number of variants of the harmonic mean estimator have been proposed to address this issue, although none have proven fully satisfactory. We present the \emph{learnt harmonic mean estimator}, a variant of the original estimator that solves its large variance problem. This is achieved by interpreting the harmonic mean estimator as importance sampling and introducing a new target distribution. The new target distribution is learned to approximate the optimal but inaccessible target, while minimising the variance of the resulting estimator. Since the estimator requires samples of the posterior only, it is agnostic to the sampling strategy used. We validate the estimator on a variety of numerical experiments, including a number of pathological examples where the original harmonic mean estimator fails catastrophically. We also consider a cosmological application, where our approach leads to $\sim$3 to 6 times more samples than current state-of-the-art techniques in one third of the time. In all cases our learnt harmonic mean estimator is shown to be highly accurate. The estimator is computationally scalable and can be applied to problems of dimension $O(10^3)$ and beyond. Code implementing the learnt harmonic mean estimator is made publicly available.
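
To make the importance-sampling view concrete, the toy sketch below compares the original harmonic mean estimator with a re-targeted one on a conjugate-Gaussian example where the evidence is known in closed form. A fixed narrow Gaussian stands in for the learned target; the actual estimator learns this target from posterior samples.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy conjugate-Gaussian example: prior N(0,1), likelihood N(y=0 | theta, 1),
# so the posterior is N(0, 1/2) and the evidence z is known in closed form.
theta = rng.normal(0.0, np.sqrt(0.5), size=100_000)      # posterior samples
log_L = norm.logpdf(0.0, loc=theta, scale=1.0)
log_prior = norm.logpdf(theta, 0.0, 1.0)

# Original harmonic mean estimator of 1/z: mean(1/L), notoriously high variance.
rho_original = np.mean(np.exp(-log_L))

# Re-targeted estimator of 1/z: mean(phi / (L * prior)) for a normalised target
# phi; here a fixed narrow Gaussian stands in for the learned target.
log_phi = norm.logpdf(theta, 0.0, np.sqrt(0.4))
rho_retargeted = np.mean(np.exp(log_phi - log_L - log_prior))

z_exact = norm.pdf(0.0, 0.0, np.sqrt(2.0))
print(1 / rho_original, 1 / rho_retargeted, z_exact)
```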

The characterization of the solution set for a class of algebraic Riccati inequalities is studied. This class arises in the passivity analysis of linear time-invariant control systems. Eigenvalue perturbation theory for the Hamiltonian matrix associated with the Riccati inequality is used to analyze the extremal points of the solution set.

Common regularization algorithms for linear regression, such as LASSO and Ridge regression, rely on a regularization hyperparameter that balances the tradeoff between minimizing the fitting error and the norm of the learned model coefficients. Because this hyperparameter is scalar, it can easily be selected via random or grid search optimizing a cross-validation criterion. However, using a scalar hyperparameter limits the algorithm's flexibility and potential for better generalization. In this paper, we address the problem of linear regression with $\ell_2$-regularization, where a different regularization hyperparameter is associated with each input variable. We optimize these hyperparameters using a gradient-based approach, wherein the gradient of a cross-validation criterion with respect to the regularization hyperparameters is computed analytically through matrix differential calculus. Additionally, we introduce two strategies tailored to sparse model learning problems, aimed at reducing the risk of overfitting the validation data. Numerical examples demonstrate that our multi-hyperparameter regularization approach outperforms LASSO, Ridge, and Elastic Net regression. Moreover, the analytical computation of the gradient proves to be more efficient in terms of computational time than automatic differentiation, especially when handling a large number of input variables. An application to the identification of over-parameterized Linear Parameter-Varying models is also presented.
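
A compact sketch of the idea, using a simple hold-out validation loss instead of the paper's cross-validation criterion: the per-feature ridge solution is differentiated implicitly with respect to the penalties, and the penalties are updated by projected gradient descent. All variable names, step sizes, and the toy data are illustrative.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution with a per-feature penalty vector lam."""
    A = X.T @ X + np.diag(lam)
    return np.linalg.solve(A, X.T @ y), A

def val_loss_and_grad(X_tr, y_tr, X_val, y_val, lam):
    """Validation MSE and its analytical gradient w.r.t. the penalties,
    via implicit differentiation of (X'X + diag(lam)) w = X'y."""
    w, A = ridge_fit(X_tr, y_tr, lam)
    r = X_val @ w - y_val
    loss = np.mean(r**2)
    g_w = 2.0 / len(y_val) * (X_val.T @ r)       # dLoss/dw
    grad = -w * np.linalg.solve(A, g_w)          # dLoss/dlam_j = -w_j (A^{-1} g_w)_j
    return loss, grad

# Projected gradient descent on the per-feature penalties (kept nonnegative).
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(120)
X_tr, y_tr, X_val, y_val = X[:80], y[:80], X[80:], y[80:]
lam = np.ones(10)
for _ in range(200):
    loss, grad = val_loss_and_grad(X_tr, y_tr, X_val, y_val, lam)
    lam = np.maximum(lam - 0.5 * grad, 1e-8)
```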

We introduce numerical solvers for the steady-state Boltzmann equation based on the symmetric Gauss-Seidel (SGS) method. Due to the quadratic collision operator in the Boltzmann equation, the SGS method requires solving a nonlinear system on each grid cell, and we consider two methods, namely Newton's method and fixed-point iteration, in our numerical tests. For small Knudsen numbers, the efficiency of our method lies between that of the classical source iteration and the modern generalized synthetic iterative scheme, while the complexity of its implementation is closer to that of the source iteration. A variety of numerical tests are carried out to demonstrate its performance, and we conclude that the proposed method is well suited to applications with moderate to large Knudsen numbers.
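
The structure of the iteration can be sketched on a toy nonlinear 1D problem in place of the Boltzmann equation: each cell's nonlinear equation is solved by a few local Newton iterations with its neighbours frozen, sweeping forward and then backward. The model problem and all parameters below are purely illustrative.

```python
import numpy as np

def sgs_sweeps(u, f, h, n_sweeps=500, newton_iters=3):
    """Symmetric Gauss-Seidel with a cell-local Newton solve for -u'' + u^2 = f."""
    n = len(u)
    for _ in range(n_sweeps):
        for order in (range(1, n - 1), range(n - 2, 0, -1)):   # forward, then backward
            for i in order:
                for _ in range(newton_iters):                  # local nonlinear solve
                    res = (2 * u[i] - u[i - 1] - u[i + 1]) / h**2 + u[i]**2 - f[i]
                    jac = 2.0 / h**2 + 2.0 * u[i]
                    u[i] -= res / jac
    return u

# Manufactured solution u = sin(pi x) with homogeneous Dirichlet boundaries.
n = 32; h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x) + np.sin(np.pi * x)**2
u = sgs_sweeps(np.zeros(n), f, h)
```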

Improving the resolution of fluorescence microscopy beyond the diffraction limit can be achieved by acquiring and processing multiple images of the sample under different illumination conditions. One of the simplest techniques, Random Illumination Microscopy (RIM), forms the super-resolved image from the variance of images obtained with random speckled illuminations. However, the validity of this process has not been fully theorized. In this work, we characterize mathematically the sample information contained in the variance of diffraction-limited speckled images as a function of the statistical properties of the illuminations. We show that an unambiguous two-fold resolution gain is obtained when the speckle correlation length coincides with the width of the observation point spread function. Last, we analyze the difference between the variance-based techniques using random speckled illuminations (as in RIM) and those obtained using random fluorophore activation (as in Super-resolution Optical Fluctuation Imaging, SOFI).
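
A minimal numerical sketch of the quantity RIM builds on, the per-pixel variance of diffraction-limited images under random speckled illuminations, is given below. The object, the Gaussian PSF model, and the speckle statistics are toy choices for illustration, not those analyzed in the work.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_frames = 128, 200
yy, xx = np.mgrid[:n, :n]
sample = ((xx // 8 + yy // 8) % 2).astype(float)      # toy fluorescent object

def gaussian_otf(width):
    """Gaussian optical transfer function (Fourier-side PSF), width in pixels."""
    k = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k)
    return np.exp(-2 * (np.pi * width)**2 * (kx**2 + ky**2))

otf = gaussian_otf(width=2.0)                          # observation PSF
speckle_otf = gaussian_otf(width=2.0)                  # speckle correlation ~ PSF width

stack = []
for _ in range(n_frames):
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(speckle_otf))
    speckle = np.abs(field)**2                         # random speckled illumination
    img = np.real(np.fft.ifft2(np.fft.fft2(sample * speckle) * otf))
    stack.append(img)

variance_image = np.var(np.stack(stack), axis=0)       # the RIM reconstruction input
```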

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
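
For reference, Shepard's law states that generalization decays exponentially with distance in psychological similarity space. The toy sketch below evaluates it for two hypothetical explanation vectors; the metric, the sensitivity parameter, and how the study embeds saliency maps into the similarity space are assumptions, not the paper's model.

```python
import numpy as np

def generalization(s, t, sensitivity=1.0):
    """Shepard's universal law: similarity decays exponentially with distance."""
    d = np.linalg.norm(np.asarray(s) - np.asarray(t), ord=1)   # assumed city-block metric
    return np.exp(-sensitivity * d)

# e.g. similarity between the AI's explanation and the one a participant would give
print(generalization([0.2, 0.7, 0.1], [0.3, 0.6, 0.1]))
```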
