
The analysis of the psoas muscle in morphological and functional imaging has proved to be an accurate approach to assessing sarcopenia, i.e. a systemic loss of skeletal muscle mass and function that may be correlated with multifactorial etiological aspects. Including sarcopenia assessment in a radiological workflow would require the implementation of computational pipelines for image processing that guarantee segmentation reliability and a significant degree of automation. The present study applies three-dimensional numerical schemes to psoas segmentation in low-dose X-ray computed tomography images. Specifically, we focus on the level set methodology and compare the performance of two standard approaches, a classical evolution model and a three-dimensional geodesic model, with that of an original first-order modification of the latter. The results of this analysis show that these gradient-based schemes are reliable with respect to manual segmentation, and that the first-order scheme incurs a significantly smaller computational burden than the second-order approach.
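As a rough illustration of a geodesic-type level-set segmentation, the sketch below uses scikit-image's morphological geodesic active contour on a 3-D volume. This is a generic gradient-based scheme, not the authors' specific first- or second-order models; the volume path and the seed box are hypothetical.

```python
# Sketch: geodesic-type level-set segmentation of a 3-D CT volume with
# scikit-image's morphological geodesic active contour (a standard
# gradient-based scheme, not the paper's first-order modification).
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

volume = np.load("ct_volume.npy")          # hypothetical low-dose CT volume (z, y, x)

# Edge-stopping image: small values near strong gradients (muscle boundaries).
gimage = inverse_gaussian_gradient(volume.astype(float), alpha=100.0, sigma=2.0)

# Initialize the level set with a box roughly enclosing the psoas muscle.
init = np.zeros(volume.shape, dtype=np.int8)
init[10:40, 60:120, 50:100] = 1            # hypothetical seed region

mask = morphological_geodesic_active_contour(
    gimage, 200, init_level_set=init, smoothing=2, balloon=-1)
print("segmented voxels:", int(mask.sum()))
```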

Related Content

We present a complete numerical analysis for a general discretization of a coupled flow-mechanics model in fractured porous media, considering single-phase flows and including frictionless contact at matrix-fracture interfaces, as well as nonlinear poromechanical coupling. Fractures are described as planar surfaces, yielding the so-called mixed- or hybrid-dimensional models. Small displacements and a linear elastic behavior are considered for the matrix. The model accounts for discontinuous fluid pressures at matrix-fracture interfaces in order to cover a wide range of normal fracture conductivities. The numerical analysis is carried out in the Gradient Discretization framework, encompassing a large family of conforming and nonconforming discretizations. The convergence result also yields, as a by-product, the existence of a weak solution to the continuous model. A numerical experiment in 2D is presented to support the obtained result, employing a Hybrid Finite Volume scheme for the flow and second-order finite elements ($\mathbb P_2$) for the mechanical displacement coupled with face-wise constant ($\mathbb P_0$) Lagrange multipliers on fractures, representing normal stresses, to discretize the contact conditions.
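For concreteness, frictionless contact at a matrix-fracture interface is typically expressed by Signorini-type complementarity conditions (notation ours, with the convention that a non-negative normal jump $[\![\mathbf u]\!]_n$ denotes fracture opening and $\lambda_n \ge 0$ is the normal Lagrange multiplier, i.e. the contact pressure):

$$ [\![\mathbf u]\!]_n \ge 0, \qquad \lambda_n \ge 0, \qquad \lambda_n \, [\![\mathbf u]\!]_n = 0, \qquad \boldsymbol\lambda_\tau = \mathbf 0, $$

where the last condition expresses the absence of friction (no tangential traction). The face-wise constant $\mathbb P_0$ multipliers mentioned above discretize $\lambda_n$ in this setting.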

We introduce a predictor-corrector discretisation scheme for the numerical integration of a class of stochastic differential equations and prove that it converges with weak order 1.0. The key feature of the new scheme is that it builds up sequentially (and recursively) in the dimension of the state space of the solution, hence making it suitable for approximations of high-dimensional state space models. We show, using the stochastic Lorenz 96 system as a test model, that the proposed method can operate with larger time steps than the standard Euler-Maruyama scheme and therefore generate valid approximations at a smaller computational cost. We also present a theoretical analysis of the error incurred by the new predictor-corrector scheme when used as a building block for discrete-time Bayesian filters for continuous-time systems. Finally, we assess the performance of several ensemble Kalman filters that incorporate the proposed sequential predictor-corrector Euler scheme and the standard Euler-Maruyama method. The numerical experiments show that the filters employing the new sequential scheme can operate with larger time steps, smaller Monte Carlo ensembles and noisier systems.
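For intuition, the sketch below implements a plain (non-sequential) predictor-corrector Euler step for the stochastic Lorenz 96 model: an Euler-Maruyama predictor followed by a trapezoidal correction of the drift using the same Brownian increment. The paper's scheme additionally builds the correction sequentially in the state dimension; this baseline variant is for illustration only.

```python
# Sketch: generic predictor-corrector Euler step for stochastic Lorenz 96,
# next to the standard Euler-Maruyama step it corrects.
import numpy as np

def lorenz96_drift(x, F=8.0):
    # f_i = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, indices modulo d
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def euler_maruyama_step(x, h, sigma, rng):
    dw = rng.normal(scale=np.sqrt(h), size=x.shape)
    return x + lorenz96_drift(x) * h + sigma * dw

def predictor_corrector_step(x, h, sigma, rng):
    dw = rng.normal(scale=np.sqrt(h), size=x.shape)
    x_pred = x + lorenz96_drift(x) * h + sigma * dw             # predictor (Euler)
    drift_avg = 0.5 * (lorenz96_drift(x) + lorenz96_drift(x_pred))
    return x + drift_avg * h + sigma * dw                       # trapezoidal corrector

rng = np.random.default_rng(0)
x = rng.normal(size=40)                 # d = 40 dimensional state
for _ in range(1000):
    x = predictor_corrector_step(x, h=0.01, sigma=0.5, rng=rng)
print("state norm after integration:", np.linalg.norm(x))
```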

It is well known that decision-making problems in stochastic control can be formulated by means of forward-backward stochastic differential equations (FBSDEs). Recently, Ji et al. (2022) proposed an efficient deep learning-based algorithm grounded in the stochastic maximum principle (SMP). In this paper, we provide a convergence result for this deep SMP-BSDE algorithm and compare its performance with other existing methods. In particular, adopting a strategy similar to that of Han and Long (2020), we derive an a posteriori error estimate and show that the total approximation error can be bounded by the value of the loss functional and the discretization error. We present numerical examples for high-dimensional stochastic control problems, covering both drift and diffusion control, which showcase superior performance compared to existing algorithms.
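To fix ideas, the sketch below shows one training step of a generic deep BSDE-style solver in the spirit of Han et al., where networks parameterize $Y_0$ and $Z_t$ and the loss penalizes the terminal mismatch; the SMP-based algorithm analyzed here parameterizes the control and adjoint processes differently, so this is only a structural illustration. All dimensions and the zero-driver dynamics are hypothetical simplifications.

```python
# Sketch: one step of a deep BSDE-style solver for
#   dX_t = s dW_t,   dY_t = Z_t dW_t  (driver f = 0),   Y_T = g(X_T).
# Networks approximate Y_0 and Z at each time step; the loss is the
# terminal mismatch E|Y_T - g(X_T)|^2.
import torch

d, N, batch, T = 10, 20, 256, 1.0
h = T / N
z_nets = torch.nn.ModuleList(
    [torch.nn.Sequential(torch.nn.Linear(d, 64), torch.nn.ReLU(),
                         torch.nn.Linear(64, d)) for _ in range(N)])
y0 = torch.nn.Parameter(torch.zeros(1))
opt = torch.optim.Adam(list(z_nets.parameters()) + [y0], lr=1e-3)

def g(x):                                 # terminal condition, e.g. g(x) = ||x||^2
    return (x ** 2).sum(dim=1)

x = torch.zeros(batch, d)
y = y0.expand(batch)
for n in range(N):
    dw = torch.randn(batch, d) * h ** 0.5
    z = z_nets[n](x)
    y = y + (z * dw).sum(dim=1)           # backward process increment
    x = x + dw                            # drift-free forward dynamics
loss = ((y - g(x)) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
print("terminal mismatch loss:", float(loss))
```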

Lattices are architected metamaterials whose properties strongly depend on their geometrical design. The analogy between lattices and graphs enables the use of graph neural networks (GNNs) as a faster surrogate model compared to traditional methods such as finite element modelling. In this work we present a higher-order GNN model trained to predict the fourth-order stiffness tensor of periodic strut-based lattices. The key features of the model are (i) SE(3) equivariance, and (ii) consistency with the thermodynamic law of conservation of energy. We compare the model to non-equivariant models based on a number of error metrics and demonstrate the benefits of the encoded equivariance and energy conservation in terms of predictive performance and reduced training requirements.
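The energy-conservation constraint has a concrete, checkable meaning: a stiffness tensor that derives from an energy potential must have major symmetry, and thermodynamic admissibility requires non-negative strain energy. The sketch below verifies both for a predicted tensor in the 6x6 Mandel/Voigt representation; the file name is hypothetical and this is our illustration, not the authors' pipeline.

```python
# Sketch: sanity checks that a predicted fourth-order stiffness tensor is
# consistent with an energy potential: major symmetry C_ijkl = C_klij and
# non-negative strain energy 1/2 e:C:e, in 6x6 Mandel/Voigt form.
import numpy as np

C = np.load("predicted_stiffness_mandel.npy")   # hypothetical (6, 6) prediction

# Major symmetry: the 6x6 matrix must be symmetric.
assert np.allclose(C, C.T, atol=1e-6), "stiffness tensor lacks major symmetry"

# Admissibility: all eigenvalues >= 0 (positive semidefinite).
eigvals = np.linalg.eigvalsh(C)
assert eigvals.min() >= -1e-8, "negative strain energy mode detected"

# Strain energy density for a random strain state (Mandel components).
eps = np.random.default_rng(0).normal(size=6)
print("strain energy density:", 0.5 * eps @ C @ eps)
```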

A popular method for variance reduction in observational causal inference is propensity-based trimming, the practice of removing units with extreme propensities from the sample. This practice has theoretical grounding when the data are homoscedastic and the propensity model is parametric (Yang and Ding, 2018; Crump et al., 2009), but in modern settings where heteroscedastic data are analyzed with non-parametric models, existing theory fails to support current practice. In this work, we address this challenge by developing new methods and theory for sample trimming. Our contributions are threefold: first, we describe novel procedures for selecting which units to trim. Our procedures differ from previous work in that we trim not only units with small propensities, but also units with extreme conditional variances. Second, we give new theoretical guarantees for inference after trimming. In particular, we show how to perform inference on the trimmed subpopulation without requiring that our regressions converge at parametric rates. Instead, we make only fourth-root rate assumptions like those in the double machine learning literature. This result applies to conventional propensity-based trimming as well and thus may be of independent interest. Finally, we propose a bootstrap-based method for constructing simultaneously valid confidence intervals for multiple trimmed subpopulations, which are valuable for navigating the trade-off between sample size and variance reduction inherent in trimming. We validate our methods in simulation, on the 2007-2008 National Health and Nutrition Examination Survey, and on a semi-synthetic Medicare dataset, and find promising results in all settings.
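For reference, the conventional baseline that this work extends is shown below: estimate propensities and keep only units with moderate estimated propensity, e.g. the common Crump et al. (2009) rule $0.1 \le \hat e(x) \le 0.9$. The paper's procedures additionally trim on extreme conditional variances; this sketch covers only the standard practice on synthetic data.

```python
# Sketch: conventional propensity-based trimming with a Crump-style rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                        # covariates
T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))       # treatment assignment

# Estimate propensities e(x) = P(T = 1 | X = x).
e_hat = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]

keep = (e_hat >= 0.1) & (e_hat <= 0.9)                # trim extreme propensities
print(f"kept {keep.sum()} of {len(keep)} units")
X_trim, T_trim = X[keep], T[keep]
```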

In this work, we determine the material parameters of the relaxed micromorphic generalized continuum model for a given periodic microstructure. This is achieved through a least-squares fit of the total energy of the relaxed micromorphic homogeneous continuum to the total energy of the fully resolved heterogeneous microstructure, governed by classical linear elasticity. The relaxed micromorphic model is a generalized continuum that uses the $\Curl$ of a micro-distortion field instead of its full gradient, as in the classical micromorphic theory, leading to several advantages and differences. The most crucial advantage is that it operates between two well-defined scales. These scales are determined by linear elasticity with microscopic and macroscopic elasticity tensors, which bound the stiffness of the relaxed micromorphic continuum from above and below, respectively. While the macroscopic elasticity tensor is established a priori through standard periodic first-order homogenization, the microscopic elasticity tensor remains to be determined. Additionally, the characteristic length parameter, associated with the curvature measurement, controls the transition between the micro- and macro-scales. Both the microscopic elasticity tensor and the characteristic length parameter are determined here using a computational approach based on the least-squares fitting of energies. This process considers an adequate number of quadratic deformation modes and different specimen sizes. We conduct a comparative analysis between the least-squares fitting results of the relaxed micromorphic model, the fit of a skew-symmetric micro-distortion field (Cosserat-micropolar model), and the fit of the classical micromorphic model with two different formulations for the curvature...
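Schematically, such an energy fit can be written as follows (notation ours, not the authors'): with $E^{\mathrm{RM}}_{m,s}(\mathbb{C}_{\mathrm{micro}}, L_c)$ the total energy of the relaxed micromorphic continuum for quadratic deformation mode $m$ and specimen size $s$, and $E^{\mathrm{het}}_{m,s}$ the corresponding energy of the fully resolved heterogeneous microstructure,

$$ (\mathbb{C}_{\mathrm{micro}}^{\ast},\, L_c^{\ast}) \in \arg\min_{\mathbb{C}_{\mathrm{micro}},\, L_c} \; \sum_{m=1}^{M} \sum_{s=1}^{S} \Big( E^{\mathrm{RM}}_{m,s}(\mathbb{C}_{\mathrm{micro}}, L_c) - E^{\mathrm{het}}_{m,s} \Big)^{2}, $$

subject to the micro-macro stiffness bounds described above.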

Any interactive protocol between a pair of parties can be reliably simulated in the presence of noise with a multiplicative overhead on the number of rounds (Schulman 1996). The reciprocal of the best (least) overhead is called the interactive capacity of the noisy channel. In this work, we present lower bounds on the interactive capacity of the binary erasure channel. Our lower bound improves on the best known bound, due to Ben-Yishai et al. (2021), by roughly a factor of 1.75. The improvement comes from a tighter analysis of the correctness of the simulation protocol using error-pattern analysis. More precisely, instead of using the well-known technique of bounding the least number of erasures needed to make the simulation fail, we identify and bound the probability of the specific erasure patterns that cause simulation failure. We remark that error-pattern analysis can be useful in other problems involving stochastic noise, such as bounding the interactive capacity of other channels.
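A toy calculation illustrates why pattern-level analysis is tighter: the classical argument bounds failure by the tail probability of seeing at least $t$ erasures, while pattern analysis sums the probabilities of only those patterns that actually break the simulation. The failure predicate below (two consecutive erasures) is entirely hypothetical and unrelated to the actual protocol.

```python
# Sketch: exact failure probability over erasure patterns vs. the coarser
# "at least t erasures" tail bound, on a toy block of n channel uses.
from itertools import product
from math import comb

n, eps = 12, 0.1                        # block length, erasure probability

def fails(pattern):
    # Hypothetical predicate: two consecutive erasures break the block.
    return any(pattern[i] and pattern[i + 1] for i in range(len(pattern) - 1))

exact = sum(
    eps ** sum(p) * (1 - eps) ** (n - sum(p))
    for p in product((0, 1), repeat=n) if fails(p))

t = 2                                   # least #erasures that can cause failure
tail = sum(comb(n, k) * eps ** k * (1 - eps) ** (n - k) for k in range(t, n + 1))

print(f"pattern-exact failure prob: {exact:.4f}  vs  tail bound: {tail:.4f}")
```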

The notion of an e-value has recently been proposed as a possible alternative to critical regions and p-values in statistical hypothesis testing. In this paper we consider testing the nonparametric hypothesis of symmetry, introduce e-value analogues of three popular nonparametric tests, define an e-value analogue of Pitman's asymptotic relative efficiency, and apply it to the three tests. We discuss limitations of our simple definition of asymptotic relative efficiency and list directions for further research.
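To make the e-value idea concrete, the sketch below builds a simple e-variable for symmetry about zero by betting on signs: under the null, $\operatorname{sign}(X_i)$ is $\pm 1$ with probability $1/2$, so each factor $1 + \lambda\,\operatorname{sign}(X_i)$ has null expectation one and the product is a valid e-value. This is a generic construction, not necessarily one of the three analogues studied in the paper.

```python
# Sketch: a betting-style e-value for testing symmetry of X about 0.
# Large values are evidence against symmetry.
import numpy as np

def symmetry_e_value(x, lam=0.3):
    signs = np.sign(x[x != 0])          # ties at 0 carry no sign information
    return float(np.prod(1.0 + lam * signs))

rng = np.random.default_rng(0)
e_null = symmetry_e_value(rng.normal(0.0, 1.0, size=200))   # symmetric data
e_alt = symmetry_e_value(rng.normal(0.5, 1.0, size=200))    # shifted data
print(f"e-value under H0: {e_null:.3f}, under shift: {e_alt:.3f}")
```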

A new sparse semiparametric model is proposed, which incorporates the influence of two functional random variables on a scalar response in a flexible and interpretable manner. One functional covariate enters through a single-index structure, while the other enters linearly through the high-dimensional vector formed by its discretised observations. For this model, two new algorithms are presented for selecting the relevant variables in the linear part and estimating the model. Both procedures exploit the functional origin of the linear covariates. Finite-sample experiments demonstrate the scope of application of both algorithms: the first is a fast algorithm that avoids, without loss of predictive ability, the significant computational cost that standard variable selection methods require to estimate this model, and the second completes the set of relevant linear covariates provided by the first, thus improving its predictive efficiency. Asymptotic results theoretically support both procedures. A real data application demonstrates the applicability of the presented methodology from a predictive perspective, in terms of interpretability of outputs and low computational cost.
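As a baseline for the linear part, the sketch below selects relevant discretisation points of a functional covariate with an $\ell_1$ penalty on synthetic curves. The paper's two algorithms exploit the functional origin of the covariates and differ from a plain lasso; this is only for orientation.

```python
# Sketch: selecting relevant impact points of a discretised functional
# covariate with cross-validated lasso, on synthetic random curves.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 100                                   # n curves observed at p points
t = np.linspace(0, 1, p)
X = np.cumsum(rng.normal(size=(n, p)), axis=1)    # rough random curves
beta = np.zeros(p); beta[[20, 60]] = [1.5, -2.0]  # two relevant impact points
y = X @ beta + rng.normal(scale=0.5, size=n)

model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-3)
print("selected discretisation points:", t[selected])
```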

Graph representation learning for hypergraphs can be used to extract patterns among higher-order interactions that are critically important in many real-world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic across learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention based graph neural network called Hyper-SAGNN, applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms state-of-the-art methods on traditional tasks while also achieving strong performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
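The core scoring idea can be sketched compactly: each node in a candidate hyperedge gets a static embedding (a position-wise transform of its features) and a dynamic embedding (self-attention over the tuple), and the hyperedge probability aggregates per-node scores computed from their squared difference. The layer sizes below are illustrative, not the paper's configuration.

```python
# Sketch: static/dynamic scoring of a candidate hyperedge, in the spirit of
# Hyper-SAGNN. Variable tuple sizes are handled naturally by self-attention.
import torch

d_in, d_h = 32, 64
static_mlp = torch.nn.Sequential(torch.nn.Linear(d_in, d_h), torch.nn.Tanh())
proj = torch.nn.Linear(d_in, d_h)
attn = torch.nn.MultiheadAttention(embed_dim=d_h, num_heads=4, batch_first=True)
score_head = torch.nn.Linear(d_h, 1)

def hyperedge_prob(node_feats):            # (k, d_in) features of a k-node tuple
    x = node_feats.unsqueeze(0)            # (1, k, d_in)
    static = static_mlp(x)                 # static embedding, per node
    h = proj(x)
    dynamic, _ = attn(h, h, h)             # dynamic embedding via self-attention
    per_node = torch.sigmoid(score_head((dynamic - static) ** 2))
    return per_node.mean()                 # probability the tuple is a hyperedge

tuple_feats = torch.randn(3, d_in)         # a candidate 3-node hyperedge
print("hyperedge probability:", float(hyperedge_prob(tuple_feats)))
```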
