
Topology optimization (TO) has experienced a dramatic development over the last decades, aided by the rise of metamaterials and additive manufacturing (AM) techniques, and it is poised to meet current and future challenges. In this paper we propose an extension to linear orthotropic materials of a three-dimensional TO algorithm which operates directly on the six elastic properties -- three longitudinal moduli and three shear moduli, with the three Poisson ratios held fixed -- of the finite element (FE) discretization of a given analysis domain. By performing a gradient-descent-like optimization on these properties, the standard deviation of a strain-energy measure is minimized, yielding optimized, strain-homogenized structures with variable longitudinal and shear stiffness along their different material directions. To this end, an orthotropic formulation with two approaches -- direct (strain-based) and complementary (stress-based) -- has been developed for this optimization problem, the stress-based approach being the more efficient, as previous work on this topic has shown. The key advantages of our proposal are: (1) the use of orthotropic rather than isotropic materials, which enables a more versatile optimization process since the design space is six times larger, and (2) no constraint (such as a maximum volume) needs to be imposed, in contrast to other methods widely used in this field such as Solid Isotropic Material with Penalization (SIMP); all of this requires setting only a single hyper-parameter. Results for four designed load cases show that this orthotropic-TO algorithm outperforms the isotropic case, both against the similar algorithm from which it is extended and against a SIMP run in commercial FE software, at a comparable computational cost. We remark that it performs particularly well on pure-shear or shear-governed problems such as torsion loading.
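
As a rough illustration of the loop described above, the following numpy sketch applies one gradient-descent-like update that nudges per-element orthotropic moduli toward homogenizing the strain-energy field; the function name, the update rule, and the learning rate eta (standing in for the single hyper-parameter) are our assumptions, not the authors' implementation.

```python
# Minimal sketch, not the authors' code: one update that pushes per-element
# orthotropic moduli toward the mean strain energy, shrinking the standard
# deviation of the strain-energy field. All names are hypothetical.
import numpy as np

def update_orthotropic_moduli(moduli, energies, eta=0.1, lo=1e-3, hi=1.0):
    """moduli: (n_elem, 6) array of E1, E2, E3, G12, G13, G23 per element.
    energies: (n_elem,) strain-energy measure per element."""
    mean_e = energies.mean()
    # Elements storing more energy than average are stiffened, and vice versa;
    # the sense of the update depends on the direct (strain-based) vs
    # complementary (stress-based) formulation mentioned in the abstract.
    scale = 1.0 + eta * (energies - mean_e) / (abs(mean_e) + 1e-12)
    return np.clip(moduli * scale[:, None], lo, hi)  # keep admissible bounds

# Toy usage: four elements, uniform initial properties, uneven energies.
moduli = np.full((4, 6), 0.5)
energies = np.array([0.2, 1.0, 1.4, 0.6])
print(update_orthotropic_moduli(moduli, energies))
```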

Related content

The investigation of mixture models is key to understanding and visualizing the distribution of multivariate data. Most mixture-model approaches are based on likelihoods and are not adapted to distributions with finite support or without a well-defined density function. This study proposes the Augmented Quantization method, a reformulation of the classical quantization problem that uses the p-Wasserstein distance. This metric can be computed in very general distribution spaces, in particular for distributions with varying supports. The clustering interpretation of quantization is revisited in a more general framework. The performance of Augmented Quantization is first demonstrated on analytical toy problems. Subsequently, it is applied to a practical case study involving river flooding, wherein mixtures of Dirac and Uniform distributions are built in the input space, enabling the identification of the most influential variables.
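
To make the quantization-with-Wasserstein idea concrete, here is a minimal 1D sketch, assuming a Lloyd-style alternation between nearest-representative assignment and a robust refit, with fit quality measured by the 1-Wasserstein distance via scipy; none of this reproduces the paper's Augmented Quantization algorithm.

```python
# Toy quantization sketch: assign points to the nearest representative,
# refit representatives, and score the result with the W1 distance, which
# is well defined even for Dirac-plus-Uniform mixtures.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
# Data from a Dirac-plus-Uniform mixture, echoing the flooding case study.
sample = np.concatenate([np.full(200, 2.0),            # Dirac mass at 2
                         rng.uniform(0.0, 1.0, 300)])  # Uniform on [0, 1]

def quantization_sketch(sample, reps, n_iter=10):
    """reps: initial 1D representatives; returns refined reps and W1 error."""
    for _ in range(n_iter):
        labels = np.argmin(np.abs(sample[:, None] - reps[None, :]), axis=1)
        reps = np.array([np.median(sample[labels == k])
                         for k in range(len(reps))])
    labels = np.argmin(np.abs(sample[:, None] - reps[None, :]), axis=1)
    # Quantization error: W1 distance between data and its quantized version.
    return reps, wasserstein_distance(sample, reps[labels])

reps, err = quantization_sketch(sample, np.array([0.2, 1.5]))
print(reps, err)
```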

Batch effects are pervasive in biomedical studies. One approach to addressing them is to repeatedly measure a subset of samples in each batch; these remeasured samples are then used to estimate and correct the batch effects. However, rigorous statistical methods for batch effect correction with remeasured samples are severely under-developed. In this study, we developed a framework for batch effect correction using remeasured samples in highly confounded case-control studies. We provided theoretical analyses of the proposed procedure, evaluated its power characteristics, and provided a power calculation tool to aid in study design. We found that the number of samples that need to be remeasured depends strongly on the between-batch correlation: when the correlation is high, remeasuring a small subset of samples can rescue most of the power.
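
A toy version of the remeasurement idea, under the simplifying assumption of a purely additive batch effect (the paper's framework is far more general): samples measured in both batches give a paired estimate of the shift, which is then removed.

```python
# Our illustration, not the paper's estimator: estimate an additive batch
# effect from remeasured samples and correct batch 2. Names are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
true_shift = 1.5                                   # additive batch effect
x_batch1 = rng.normal(0.0, 1.0, 50)                # controls in batch 1
x_batch2 = rng.normal(0.5, 1.0, 50) + true_shift   # cases in batch 2 (confounded)

# A subset of batch-1 samples is remeasured in batch 2; small remeasurement
# noise corresponds to high between-batch correlation.
n_re = 10
signal = x_batch1[:n_re]
remeasured = signal + true_shift + rng.normal(0.0, 0.3, n_re)

shift_hat = (remeasured - signal).mean()   # paired estimate of the batch effect
x_batch2_corrected = x_batch2 - shift_hat
print(f"estimated shift: {shift_hat:.2f} (truth {true_shift})")
```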

Within the next decade the Laser Interferometer Space Antenna (LISA) is due to be launched, providing the opportunity to extract physics from stellar objects and systems, such as \textit{Extreme Mass Ratio Inspirals} (EMRIs), otherwise undetectable to ground-based interferometers and Pulsar Timing Arrays (PTAs). Unlike previous sources detected by currently available observational methods, these sources can \textit{only} be simulated using an accurate computation of the gravitational self-force. Whereas the field has seen outstanding progress in the frequency domain, metric reconstruction and self-force calculations are still an open challenge in the time domain. Such computations would not only further corroborate frequency-domain calculations and models, but also allow for fully self-consistent evolution of the orbit under the effect of the self-force. Given that we have \textit{a priori} information about the local structure of the discontinuity at the particle, we show how to construct discontinuous spatial and temporal discretisations by operating on discontinuous Lagrange and Hermite interpolation formulae, and hence recover higher-order accuracy. In this work we demonstrate how this technique, in conjunction with a well-suited gauge choice (hyperboloidal slicing) and numerical methods (discontinuous collocation with time-symmetric integration), provides a relatively simple method-of-lines numerical algorithm for the problem. This is the first of a series of papers studying the behaviour of a point particle undergoing circular geodesic motion in Schwarzschild in the \textit{time domain}. Here we describe the numerical machinery necessary for these computations and show that our approach is not only capable of highly accurate radiation-flux measurements but is also suitable for evaluating the necessary field and its derivatives at the particle limit.
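
The jump-corrected interpolation idea can be illustrated in a simple 1D setting: if the jump of the function at a known location is available a priori, subtracting it at the nodes, interpolating the smooth remainder, and adding it back restores smooth-order accuracy. The sketch below is a toy analogue under our own assumptions, not the paper's discontinuous collocation scheme.

```python
# Toy demonstration: polynomial interpolation through a known jump.
import numpy as np

x0, J = 0.3, 2.0                          # known jump location and size
f = lambda x: np.sin(x) + J * (x >= x0)   # smooth part plus a step

nodes = np.linspace(-1.0, 1.0, 9)
x_eval = np.linspace(-1.0, 1.0, 1001)

# Naive interpolation straight through the discontinuity: O(1) error.
naive = np.polyval(np.polyfit(nodes, f(nodes), len(nodes) - 1), x_eval)

# Corrected: remove the known jump at the nodes, interpolate the smooth
# remainder, then add the jump back at the evaluation points.
smooth = f(nodes) - J * (nodes >= x0)
corrected = (np.polyval(np.polyfit(nodes, smooth, len(nodes) - 1), x_eval)
             + J * (x_eval >= x0))

print("naive max error:    ", np.abs(naive - f(x_eval)).max())
print("corrected max error:", np.abs(corrected - f(x_eval)).max())
```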

We collect robust proposals given in the field of regression models with heteroscedastic errors. Our motivation stems from the fact that the practitioner frequently faces the confluence of two phenomena in data analysis: non-linearity and heteroscedasticity. The impact of heteroscedasticity on the precision of the estimators is well known; however, the conjunction of these two phenomena makes handling outliers more difficult. An iterative procedure to estimate the parameters of a heteroscedastic non-linear model is considered. The studied estimators combine weighted $MM$-regression estimators, to control the impact of high-leverage points, with a robust method to estimate the parameters of the variance function.
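
A numpy-only caricature of such an iterative procedure, assuming a linear model with variance function proportional to $x^\gamma$ and Huber weights standing in for the weighted $MM$-estimator; every modelling choice here is illustrative.

```python
# Alternate between a robust weighted fit for the regression parameter and a
# crude fit of the variance exponent. A stand-in sketch, not the paper's method.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(1.0, 5.0, 200)
y = 2.0 * x + rng.normal(0.0, 0.3 * x)    # heteroscedastic: sd grows with x
y[:5] += 10.0                             # a handful of outliers

def huber_weight(r, c=1.345):
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / a)

beta, gamma = 1.0, 0.0                    # slope and variance exponent
for _ in range(20):
    sigma = x ** gamma                    # working variance function
    res = (y - beta * x) / sigma
    s = np.median(np.abs(res)) / 0.6745   # robust (MAD) scale
    w = huber_weight(res / s) / sigma**2  # robustness + variance weighting
    beta = np.sum(w * x * y) / np.sum(w * x * x)
    # Slope of log|raw residual| on log x estimates the variance exponent
    # (a crude, non-robust stand-in for the paper's robust variance fit).
    gamma = np.polyfit(np.log(x), np.log(np.abs(y - beta * x) + 1e-12), 1)[0]
print(f"beta ~ {beta:.2f} (truth 2.0), gamma ~ {gamma:.2f} (truth 1.0)")
```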

Quadratic NURBS-based discretizations of the Galerkin method suffer from volumetric locking when applied to nearly-incompressible linear elasticity. Volumetric locking causes not only smaller displacements than expected, but also large-amplitude spurious oscillations of normal stresses. Continuous-assumed-strain (CAS) elements were recently introduced to remove membrane locking in quadratic NURBS-based discretizations of linear plane curved Kirchhoff rods (Casquero et al., CMAME, 2022). In this work, we propose two generalizations of CAS elements (named CAS1 and CAS2 elements) to overcome volumetric locking in quadratic NURBS-based discretizations of nearly-incompressible linear elasticity. CAS1 elements linearly interpolate the strains at the knots in each direction for the term in the variational form involving the first Lam\'e parameter, while CAS2 elements linearly interpolate the dilatational strains at the knots in each direction. For both element types, a displacement vector with $C^1$ continuity across element boundaries results in assumed strains with $C^0$ continuity across element boundaries. In addition, the implementation of the two locking treatments proposed in this work does not require any additional global or element matrix operations such as matrix inversions or matrix multiplications. The locking treatments are applied at the element level and the nonzero pattern of the global stiffness matrix is preserved. The numerical examples solved in this work show that CAS1 and CAS2 elements, using either two or three Gauss-Legendre quadrature points per direction, are effective locking treatments: they not only result in more accurate displacements for coarse meshes, but also remove the spurious oscillations of normal stresses.
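
A 1D analogue of the CAS interpolation step, under our own simplifications: the compatible strain is evaluated only at the element's end knots and interpolated linearly to the quadrature points, giving an assumed strain field with $C^0$ continuity across elements.

```python
# Illustrative 1D assumed-strain interpolation; names and setting are ours.
import numpy as np

def assumed_strain_1d(strain_at_knots, xi_quad):
    """strain_at_knots: (eps_left, eps_right) at the element's end knots.
    xi_quad: quadrature points in the parent element [0, 1]."""
    eps_l, eps_r = strain_at_knots
    return (1.0 - xi_quad) * eps_l + xi_quad * eps_r   # linear interpolation

# Two Gauss-Legendre points per direction, as in the paper's examples.
xi = 0.5 + 0.5 * np.array([-1.0, 1.0]) / np.sqrt(3.0)
print(assumed_strain_1d((0.1, 0.3), xi))
```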

Augmented Reality (AR) has emerged as a significant advancement in surgical procedures, offering a solution to the challenges posed by traditional neuronavigation methods. These conventional techniques often require surgeons to split their focus between the surgical site and a separate monitor displaying guiding images. Over the years, many systems have been developed to register and track holograms at targeted locations, each employing its own evaluation technique. Measuring hologram displacement, however, is not a straightforward task because of factors such as occlusion, the Vergence-Accommodation Conflict, and holograms that are unstable in space. In this study, we explore and classify different techniques for assessing an AR-assisted neurosurgery system and propose a new technique to systematize the assessment procedure. Moreover, we investigate surgeon error in the pre- and intra-operative phases of the surgery based on the feedback provided in each phase. We found that although the system is subject to registration and tracking errors, physical feedback can significantly reduce the error caused by hologram displacement. However, the lack of visual feedback on the hologram does not have a significant effect on the user's 3D perception.

The use of the non-parametric Restricted Mean Survival Time (RMST) endpoint has grown in popularity as trialists look to analyse time-to-event outcomes without the restrictions of the proportional hazards assumption. In this paper, we evaluate the power and type I error rate of the parametric and non-parametric RMST estimators when the treatment effect is explained by multiple covariates, including an interaction term. Utilising the RMST estimator in this way allows the combined treatment effect to be summarised as a one-dimensional estimator, which is evaluated using a one-sided Z-test. The estimators are either fully specified or misspecified, whether in terms of unaccounted-for covariates or misspecified knot points (where trials exhibit crossing survival curves). A placebo-controlled trial of Gamma interferon is used as a motivating example to simulate associated survival times. When correctly specified, the parametric RMST estimator has the greatest power, regardless of the time of analysis. The misspecified RMST estimator generally performs similarly when covariates mirror those of the fitted case-study dataset. However, as the magnitude of the unaccounted-for covariate increases, the power of the estimator decreases. In all cases, the non-parametric RMST estimator has the lowest power, and its power remains heavily reliant on the time of analysis (with a later analysis time correlated with greater power).
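
For reference, a minimal sketch of the non-parametric estimator: the RMST is the area under the Kaplan-Meier curve up to a horizon tau, and an arm difference can be tested with a one-sided Z-test; the bootstrap standard error below is our simplification of the variance estimation, not the paper's procedure.

```python
# Non-parametric RMST via a hand-rolled Kaplan-Meier curve; illustrative only.
import numpy as np

def km_rmst(time, event, tau):
    """Area under the Kaplan-Meier curve on [0, tau]."""
    order = np.argsort(time)
    t, d = time[order], event[order]
    s, last_t, area, at_risk = 1.0, 0.0, 0.0, len(t)
    for ti, di in zip(t, d):
        if ti > tau:
            break
        area += s * (ti - last_t)
        if di:                       # survival drops only at event times
            s *= 1.0 - 1.0 / at_risk
        at_risk -= 1
        last_t = ti
    return area + s * (tau - last_t)

rng = np.random.default_rng(3)
n, tau = 200, 3.0
t_ctrl, t_trt = rng.exponential(2.0, n), rng.exponential(2.6, n)
cens_c, cens_t = rng.uniform(1.0, 5.0, n), rng.uniform(1.0, 5.0, n)
time_c, ev_c = np.minimum(t_ctrl, cens_c), t_ctrl <= cens_c
time_t, ev_t = np.minimum(t_trt, cens_t), t_trt <= cens_t

diff = km_rmst(time_t, ev_t, tau) - km_rmst(time_c, ev_c, tau)
boot = []
for _ in range(200):                 # bootstrap SE for the one-sided Z-test
    i, j = rng.integers(0, n, n), rng.integers(0, n, n)
    boot.append(km_rmst(time_t[i], ev_t[i], tau) -
                km_rmst(time_c[j], ev_c[j], tau))
z = diff / np.std(boot)
print(f"RMST difference {diff:.3f}, one-sided Z = {z:.2f}")
```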

Surface parameterization plays a fundamental role in many science and engineering problems. In particular, as genus-0 closed surfaces are topologically equivalent to a sphere, many spherical parameterization methods have been developed over the past few decades. In practice, however, mapping a genus-0 closed surface onto a sphere may result in a large distortion due to the geometric difference between them. In this work, we propose a new framework for computing ellipsoidal conformal and quasi-conformal parameterizations of genus-0 closed surfaces, in which the target parameter domain is an ellipsoid instead of a sphere. By combining simple conformal transformations with different types of quasi-conformal mappings, we can easily achieve a large variety of ellipsoidal parameterizations whose bijectivity is guaranteed by quasi-conformal theory. Numerical experiments are presented to demonstrate the effectiveness of the proposed framework.
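
As a bare-bones illustration of moving from a sphere to an ellipsoid (assuming a spherical parameterization is already available), anisotropic scaling maps the unit sphere onto an ellipsoid; this scaling is a quasi-conformal homeomorphism, so bijectivity is retained, though the paper's framework controls the conformality distortion far more carefully.

```python
# Push a spherical parameterization onto an ellipsoid by axis scaling.
import numpy as np

def sphere_to_ellipsoid(points, a=1.0, b=0.8, c=0.6):
    """points: (n, 3) unit-sphere coordinates; returns ellipsoid coordinates."""
    return points * np.array([a, b, c])

# Toy usage: random points on the unit sphere mapped onto the ellipsoid.
rng = np.random.default_rng(4)
p = rng.normal(size=(100, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
q = sphere_to_ellipsoid(p)
print(np.allclose(((q / [1.0, 0.8, 0.6]) ** 2).sum(axis=1), 1.0))  # on ellipsoid
```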

In prediction settings where data are collected over time, it is often of interest to understand both the importance of variables for predicting the response at each time point and their importance summarized over the entire time series. Building on recent advances in estimation and inference for variable importance measures, we define summaries of variable importance trajectories. These measures can be estimated, and the same inference approaches apply, regardless of the algorithm(s) used to estimate the prediction function. We propose a nonparametric efficient estimation and inference procedure, as well as a null hypothesis testing procedure, that are valid even when complex machine learning tools are used for prediction. Through simulations, we demonstrate that our proposed procedures have good operating characteristics, and we illustrate their use by investigating the longitudinal importance of risk factors for suicide attempt.
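
One simple instantiation of a variable importance trajectory, with a linear fit standing in for an arbitrary prediction algorithm: at each time point, importance is the drop in R^2 when the variable is removed, and the trajectory is then summarized (e.g., by its mean); this is our illustration, not the paper's efficient estimator.

```python
# Importance trajectory of one variable across time points; illustrative only.
import numpy as np

rng = np.random.default_rng(5)
n, T = 500, 6
X = rng.normal(size=(n, 2))
# Variable 0's effect on the outcome grows over time; variable 1's is constant.
Y = np.stack([(0.2 * t) * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)
              for t in range(T)], axis=1)

def r2(Z, y):
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return 1.0 - np.var(y - Z @ beta) / np.var(y)

# Importance at time t: drop in R^2 when variable 0 is excluded from the fit.
traj = np.array([r2(X, Y[:, t]) - r2(X[:, [1]], Y[:, t]) for t in range(T)])
print("importance trajectory of variable 0:", np.round(traj, 3))
print("summaries: mean =", traj.mean().round(3), ", final =", traj[-1].round(3))
```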

We present novel Monte Carlo (MC) and multilevel Monte Carlo (MLMC) methods for unbiased estimation of the covariance of random variables using h-statistics. The advantage of this procedure lies in the closed-form, unbiased construction of the estimator's mean square error. This is in contrast to conventional MC and MLMC covariance estimators, whose mean square errors are biased and defined solely by upper bounds, particularly within MLMC. Finally, the algorithms are demonstrated numerically by estimating the covariance of the stochastic response of a simple 1D stochastic elliptic PDE, such as the Poisson model.
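
A sketch of the MLMC part under our own toy model: per-level unbiased sample covariances are telescoped across coupled levels; the h-statistic machinery for the closed-form unbiased MSE is not reproduced here, and all names are hypothetical.

```python
# MLMC-style covariance estimator telescoping unbiased per-level covariances.
import numpy as np

rng = np.random.default_rng(6)

def sample_pair(level, n, z):
    """Toy level-l approximation of two coupled outputs (X_l, Y_l); accuracy
    improves geometrically with the level, mimicking a discretized PDE."""
    err = 2.0 ** (-level)
    return z[0] + err * rng.normal(size=n), z[0] + z[1] + err * rng.normal(size=n)

def unbiased_cov(x, y):
    return ((x - x.mean()) * (y - y.mean())).sum() / (len(x) - 1)

L, n_per_level = 4, [4000, 2000, 1000, 500]
est = 0.0
for l in range(L):
    z = rng.normal(size=(2, n_per_level[l]))   # shared randomness couples levels
    xf, yf = sample_pair(l, n_per_level[l], z)
    if l == 0:
        est += unbiased_cov(xf, yf)
    else:
        xc, yc = sample_pair(l - 1, n_per_level[l], z)
        est += unbiased_cov(xf, yf) - unbiased_cov(xc, yc)
print(f"MLMC covariance estimate: {est:.3f} (truth 1.0)")
```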
