
Non-conformance decision-making in high-precision manufacturing of engineering structures is often delayed by the numerical simulations needed to analyze defective parts and assemblies. Interfaces between parts of an assembly can only be simulated by modeling contact. Efficient parametric reduced-order models (ROMs) are therefore necessary to perform contact mechanics simulations in near real-time scenarios. Typical strategies for reducing the cost of contact models rely on low-rank approximations, assuming the existence of a low-dimensional subspace for the displacement and a low-dimensional non-negative subcone for the contact pressure. However, the contact pressure is local in nature, as the position of contact can vary with parameters such as loading or geometry. The adequacy of low-rank approximations for contact mechanics is investigated, and alternative routes based on sparse regression techniques are explored. It is shown that this local nature leads to a loss of linear separability of the contact pressure, thereby limiting the accuracy of low-rank methods. The applicability of the low-rank assumption to the contact pressure is analyzed using three different criteria: compactness, generalization and specificity. Subsequently, over-complete dictionaries with a large number of snapshots are investigated to mitigate the inseparability issues. Two strategies are devised: a greedy active-set method, in which the dictionary elements are selected greedily, and a convex hull approximation method that eliminates the need to explicitly enforce non-penetration constraints in convex problems. Lastly, Dynamic Time Warping is studied as a possible non-linear interpolation method that permits exploration of the non-linear manifold, synthesising snapshots not computed in the training set at low complexity and thereby reducing offline costs.
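
As a rough illustration of the two ingredients discussed above (a minimal sketch with synthetic snapshot matrices and illustrative tolerances, not the paper's implementation), the snippet below builds a low-rank POD basis for displacements and reconstructs a pressure field as a non-negative combination of dictionary snapshots, which keeps the approximation inside the admissible cone:

    import numpy as np
    from scipy.optimize import nnls

    # U_disp: displacement snapshots (n_dof x n_snapshots), collected offline
    # P_dict: contact-pressure snapshots used as an over-complete dictionary
    rng = np.random.default_rng(0)
    U_disp = rng.standard_normal((500, 40))
    P_dict = np.abs(rng.standard_normal((120, 40)))

    # Low-rank (POD) basis for displacements: keep modes capturing 99.9% energy
    U, s, _ = np.linalg.svd(U_disp, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999)) + 1
    basis = U[:, :r]

    # Non-negative reconstruction of a new pressure field from the dictionary,
    # so the approximation stays in the non-negative (admissible) cone
    p_new = np.abs(rng.standard_normal(120))
    coeffs, residual = nnls(P_dict, p_new)
    p_approx = P_dict @ coeffs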

Related content

Electromagnetic stimulation probes and modulates the neural systems that control movement. Key to understanding their effects is the muscle recruitment curve, which maps evoked potential size against stimulation intensity. Current methods to estimate curve parameters require large samples; however, obtaining these is often impractical due to experimental constraints. Here, we present a hierarchical Bayesian framework that accounts for small samples, handles outliers, simulates high-fidelity data, and returns a posterior distribution over curve parameters that quantifies estimation uncertainty. It uses a rectified-logistic function that estimates motor threshold and outperforms conventionally used sigmoidal alternatives in predictive performance, as demonstrated through cross-validation. In simulations, our method outperforms non-hierarchical models by reducing threshold estimation error on sparse data, and it requires fewer participants to detect shifts in threshold than frequentist testing. We present two common use cases involving electrical and electromagnetic stimulation data and provide an open-source Python library, hbMEP, for diverse applications.
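
To fix ideas, the sketch below uses one plausible rectified-logistic parametrization (a hypothetical form chosen for illustration, not necessarily hbMEP's exact definition): the curve stays at baseline below a threshold intensity and grows sigmoidally above it.

    import numpy as np

    def rectified_logistic(x, a, b, L, H):
        """Illustrative rectified-logistic recruitment curve.

        x : stimulation intensity
        a : motor threshold, below which the response stays at baseline
        b : growth rate above threshold
        L : baseline response size
        H : saturation amplitude above baseline
        """
        # Shift the logistic so it is zero at x = a, then rectify: the curve is
        # flat at L for x <= a and rises sigmoidally towards L + H for x > a.
        z = H * (2.0 / (1.0 + np.exp(-b * (x - a))) - 1.0)
        return L + np.maximum(0.0, z)

    # Example: evoked potential size over a sweep of stimulation intensities
    intensities = np.linspace(0.0, 100.0, 201)
    response = rectified_logistic(intensities, a=40.0, b=0.15, L=0.05, H=2.0)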

Recent methods in modeling spatial extreme events have focused on utilizing parametric max-stable processes and their underlying dependence structure. In this work, we provide a unified approach for analyzing spatial extremes with little available data by estimating the distribution of model parameters or the spatial dependence directly. By employing recent developments in generative neural networks, we predict a full sample-based distribution, allowing for a direct assessment of uncertainty regarding model parameters or other parameter-dependent functionals. We validate our method by fitting several simulated max-stable processes, showing high accuracy of the approach with respect to both parameter estimation and uncertainty quantification. Additional robustness checks highlight the generalization and extrapolation capabilities of the model, while an application to precipitation extremes across Western Germany demonstrates the usability of our approach in real-world scenarios.
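
One way such a sample-based predictor can be set up (a sketch under assumed details: the summary dimensions, architecture, and energy-score objective below are illustrative choices, not necessarily those of the paper) is a conditional generator that maps data summaries plus noise to parameter draws and is trained with a proper scoring rule:

    import torch
    import torch.nn as nn

    class ConditionalGenerator(nn.Module):
        """Maps (summary statistics s, latent noise z) to one parameter sample."""
        def __init__(self, dim_s: int, dim_z: int, dim_theta: int, width: int = 64):
            super().__init__()
            self.dim_z = dim_z
            self.net = nn.Sequential(
                nn.Linear(dim_s + dim_z, width), nn.ReLU(),
                nn.Linear(width, width), nn.ReLU(),
                nn.Linear(width, dim_theta),
            )

        def forward(self, s: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
            return self.net(torch.cat([s, z], dim=-1))

    def energy_score_loss(gen, s, theta):
        """Proper scoring rule: small when the generator's conditional samples
        are both close to the true parameters and suitably spread out."""
        z1 = torch.randn(s.shape[0], gen.dim_z)
        z2 = torch.randn(s.shape[0], gen.dim_z)
        x1, x2 = gen(s, z1), gen(s, z2)
        fidelity = (x1 - theta).norm(dim=-1).mean()
        spread = (x1 - x2).norm(dim=-1).mean()
        return fidelity - 0.5 * spread

    # Training on simulated (parameters, summaries) pairs from max-stable processes:
    # gen = ConditionalGenerator(dim_s=10, dim_z=8, dim_theta=2)
    # opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    # for s_batch, theta_batch in loader:
    #     opt.zero_grad(); energy_score_loss(gen, s_batch, theta_batch).backward(); opt.step()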

A common method for estimating the Hessian operator from random samples on a low-dimensional manifold involves locally fitting a quadratic polynomial. Although widely used, it is unclear whether this estimator introduces bias, especially on complex manifolds with boundaries and nonuniform sampling. Rigorous theoretical guarantees of its asymptotic behavior have been lacking. We show that, under mild conditions, this estimator asymptotically converges to the Hessian operator, with nonuniform sampling and curvature effects proving negligible, even near boundaries. Our analysis framework simplifies the intensive computations required for direct analysis.
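
For concreteness, here is a minimal Euclidean sketch of the local quadratic-fit estimator (the neighbourhood size and test function are illustrative; on a manifold the neighbours would first be expressed in tangent coordinates, for example via local PCA):

    import numpy as np

    def local_quadratic_hessian(X, f, x0, k=30):
        """Estimate the Hessian of f at x0 from scattered samples by fitting a
        quadratic polynomial to the k nearest neighbours."""
        d = X.shape[1]
        idx = np.argsort(np.linalg.norm(X - x0, axis=1))[:k]
        dX = X[idx] - x0                                  # local coordinates
        # Design matrix: [1, linear terms, quadratic/cross terms]
        cols = [np.ones(len(idx))] + [dX[:, i] for i in range(d)]
        quad_index = []
        for i in range(d):
            for j in range(i, d):
                cols.append(dX[:, i] * dX[:, j])
                quad_index.append((i, j))
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, f[idx], rcond=None)
        # Convert quadratic-form coefficients back into a symmetric Hessian
        H = np.zeros((d, d))
        for c, (i, j) in zip(coef[1 + d:], quad_index):
            if i == j:
                H[i, i] = 2.0 * c
            else:
                H[i, j] = H[j, i] = c
        return H

    # Example: f(x) = x1^2 + 3*x1*x2 has Hessian [[2, 3], [3, 0]] at the origin
    X = np.random.default_rng(2).uniform(-0.1, 0.1, size=(200, 2))
    f = X[:, 0]**2 + 3.0 * X[:, 0] * X[:, 1]
    H_hat = local_quadratic_hessian(X, f, x0=np.zeros(2))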

Analyzing longitudinal data in health studies is challenging due to sparse and error-prone measurements, strong within-individual correlation, missing data and various trajectory shapes. While mixed-effects models (MM) effectively address these challenges, they remain parametric and can be computationally costly. In contrast, Functional Principal Component Analysis (FPCA) is a non-parametric approach developed for regular and dense functional data that flexibly describes temporal trajectories at a potentially lower computational cost. This paper presents an empirical simulation study evaluating the behaviour of FPCA with sparse and error-prone repeated measures and its robustness under different missing-data schemes in comparison with MM. The results show that FPCA is well suited in the presence of missing-at-random data caused by dropout, except in scenarios involving the most frequent and systematic dropout. Like MM, FPCA fails under a missing-not-at-random mechanism. FPCA was then applied to describe the trajectories of four cognitive functions before clinical dementia and to contrast them with those of matched controls in a case-control study nested in a population-based aging cohort. The average cognitive decline of future dementia cases diverged suddenly from that of their matched controls, with a sharp acceleration 5 to 2.5 years prior to diagnosis.
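
For intuition, a minimal dense-grid FPCA sketch follows (the sparse, error-prone case typically relies on smoothed covariance estimation and conditional-expectation scores, as in PACE; the dimensions and retained variance below are illustrative, not those of the study):

    import numpy as np

    # Simulated dense trajectories: n subjects observed on a common time grid
    rng = np.random.default_rng(1)
    n, T = 200, 50
    t = np.linspace(0, 1, T)
    scores_true = rng.standard_normal((n, 2)) * np.array([1.0, 0.4])
    curves = (scores_true[:, [0]] * np.sin(np.pi * t)
              + scores_true[:, [1]] * np.cos(2 * np.pi * t)
              + 0.1 * rng.standard_normal((n, T)))

    # FPCA = eigendecomposition of the empirical covariance of centred curves
    mu = curves.mean(axis=0)
    C = np.cov(curves - mu, rowvar=False)
    eigval, eigvec = np.linalg.eigh(C)
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]

    # Keep enough components to explain 95% of the variance
    k = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(), 0.95)) + 1
    phi = eigvec[:, :k]                      # discretized eigenfunctions
    xi = (curves - mu) @ phi                 # individual FPC scores
    reconstruction = mu + xi @ phi.T         # low-rank trajectory approximation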

The problem of scheduling conflicting jobs on parallel machines consists in assigning a set of jobs to a set of machines so that no two conflicting jobs are allocated to the same machine, and the maximum processing time among all machines is minimized. We propose a new compact mixed integer linear formulation based on the representatives model for the vertex coloring problem, which overcomes a number of issues inherent in the natural assignment model. We present a polyhedral study of the associated polytope, and describe classes of valid inequalities inherited from the stable set polytope. We describe branch-and-cut algorithms for the problem, and report on computational experiments with benchmark instances. Our computational results on the hardest instances of the benchmark set show that the proposed algorithms are superior (either in running time or quality of the solutions) to the current state-of-the-art methods. We find that our new method performs better than the existing ones, especially when the gap between the optimal value and the trivial lower bound (i.e., the sum of all processing times divided by the number of machines) increases.
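
For orientation, the natural assignment model that the representatives formulation is designed to improve upon can be stated as follows (a standard textbook formulation with illustrative notation, not taken verbatim from the paper): with binary variables $x_{jm}$ indicating that job $j$ is assigned to machine $m$, processing times $p_j$, conflict graph $G=(J,E)$ and makespan variable $C_{\max}$,

\begin{align*}
\min\ & C_{\max} \\
\text{s.t.}\ & \textstyle\sum_{m=1}^{M} x_{jm} = 1, & \forall j \in J,\\
& \textstyle\sum_{j \in J} p_j x_{jm} \le C_{\max}, & \forall m \in \{1,\dots,M\},\\
& x_{im} + x_{jm} \le 1, & \forall \{i,j\} \in E,\ \forall m,\\
& x_{jm} \in \{0,1\}, & \forall j \in J,\ \forall m.
\end{align*}

One well-known drawback of this model is the symmetry among machines; a representatives formulation, in the spirit of the representatives model for vertex coloring, avoids it by indexing machines through representative jobs.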

Finding the optimal design of experiments in the Bayesian setting typically requires estimation and optimization of the expected information gain functional. This functional consists of one outer and one inner integral, separated by the logarithm function applied to the inner integral. When the mathematical model of the experiment contains uncertainty about the parameters of interest as well as nuisance uncertainty (i.e., uncertainty about parameters that affect the model but are not themselves of interest to the experimenter), two inner integrals must be estimated. Thus, the already considerable computational effort required to determine good approximations of the expected information gain is increased further. The Laplace approximation has been applied successfully in the context of experimental design in various ways, and we propose two novel estimators featuring the Laplace approximation to alleviate the computational burden of both inner integrals considerably. The first estimator applies Laplace's method followed by a Laplace approximation, introducing a bias. The second estimator uses two Laplace approximations as importance sampling measures for Monte Carlo approximations of the inner integrals. Both estimators use Monte Carlo approximation for the remaining outer integral. We provide four numerical examples demonstrating the applicability and effectiveness of our proposed estimators.
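
In symbols (notation assumed for illustration: design $\xi$, parameters of interest $\theta$, nuisance parameters $\eta$, data $y$), the expected information gain with nuisance uncertainty reads

\[
\mathrm{EIG}(\xi) \;=\; \int\!\!\int p(y,\theta \mid \xi)\, \log \frac{p(y \mid \theta, \xi)}{p(y \mid \xi)}\, \mathrm{d}y\, \mathrm{d}\theta,
\]

with the two inner integrals

\[
p(y \mid \theta, \xi) = \int p(y \mid \theta, \eta, \xi)\, p(\eta \mid \theta)\, \mathrm{d}\eta,
\qquad
p(y \mid \xi) = \int\!\!\int p(y \mid \theta, \eta, \xi)\, p(\theta, \eta)\, \mathrm{d}\theta\, \mathrm{d}\eta.
\]

The proposed estimators replace these inner integrals either directly with Laplace approximations (introducing a bias) or use the Laplace approximations as importance-sampling measures for inner Monte Carlo estimates, while the outer integral is always estimated by Monte Carlo.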

We consider a high-dimensional sparse normal means model where the goal is to estimate the mean vector, assuming the proportion of non-zero means is unknown. We model the mean vector by a one-group global-local shrinkage prior belonging to a broad class of such priors that includes the horseshoe prior. We address some questions related to the asymptotic properties of the resulting posterior distribution of the mean vector for this class of priors. We consider two ways to model the global parameter: first, by treating it as an unknown fixed parameter and plugging in an empirical Bayes estimate of it; second, by a hierarchical Bayes treatment that assigns a suitable non-degenerate prior distribution to it. We first show that, for the class of priors under study, the posterior distribution of the mean vector contracts around the true parameter at a near-minimax rate when the empirical Bayes approach is used. Next, we prove that in the hierarchical Bayes approach the corresponding Bayes estimate attains the minimax risk asymptotically under the squared error loss function. We also show that the posterior contracts around the true parameter at a near-minimax rate. These results generalize those of van der Pas et al. (2014) \cite{van2014horseshoe}, (2017) \cite{van2017adaptive}, proved for the horseshoe prior. We also study the asymptotic Bayes optimality of global-local shrinkage priors when the number of non-null hypotheses is unknown. Here our aim is to propose conditions on the prior density of the global parameter under which the Bayes risk induced by the decision rule attains the optimal Bayes risk up to a multiplicative constant. Under the asymptotic framework of Bogdan et al. (2011) \cite{bogdan2011asymptotic}, our proposed condition allows us to answer this question in the affirmative.
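
Concretely, with the horseshoe as the canonical member of the class (the half-Cauchy choice below is the standard illustration, not the most general prior covered by the results), the hierarchy is

\[
X_i \mid \theta_i \sim \mathcal{N}(\theta_i, 1), \qquad
\theta_i \mid \lambda_i, \tau \sim \mathcal{N}(0, \lambda_i^2 \tau^2), \qquad
\lambda_i \sim \mathcal{C}^{+}(0,1), \quad i = 1, \dots, n,
\]

where the local scales $\lambda_i$ let individual means escape shrinkage, while the global scale $\tau$ (treated as fixed and estimated by empirical Bayes, or given a non-degenerate prior in the hierarchical Bayes approach) adapts to the unknown proportion of non-zero means.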

Shape-restricted inference has exhibited empirical success in various applications with survival data. However, certain works fall short in providing a rigorous theoretical justification and an easy-to-use variance estimator with theoretical guarantees. Motivated by Deng et al. (2023), this paper delves into an additive and shape-restricted partially linear Cox model for right-censored data, where each additive component satisfies a specific shape restriction, encompassing monotone increasing/decreasing and convexity/concavity. We systematically investigate the consistency and convergence rates of the shape-restricted maximum partial likelihood estimator (SMPLE) of all the underlying parameters. We further establish the asymptotic normality and semiparametric efficiency of the SMPLE for the linear covariate shift. To estimate the asymptotic variance, we propose an innovative data-splitting variance estimation method that boasts exceptional versatility and broad applicability. Our simulation results and an analysis of the Rotterdam Breast Cancer dataset demonstrate that the SMPLE has comparable performance with the maximum likelihood estimator under the Cox model when the Cox model is correct, and outperforms the latter and Huang (1999)'s method when the Cox model is violated or the hazard is nonsmooth. Meanwhile, the proposed variance estimation method usually leads to reliable interval estimates based on the SMPLE and its competitors.
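
In the notation one would expect for such a model (symbols chosen here for orientation, not copied from the paper), the conditional hazard takes the form

\[
\lambda(t \mid X, Z) \;=\; \lambda_0(t)\, \exp\!\Big( \beta^{\top} X + \sum_{k=1}^{K} \psi_k(Z_k) \Big),
\]

where $\lambda_0$ is an unspecified baseline hazard, $\beta$ collects the linear covariate effects, and each additive component $\psi_k$ is restricted to be monotone increasing/decreasing or convex/concave rather than being given a parametric form.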

The spatial auditory attention decoding (Sp-AAD) technology aims to determine the direction of auditory attention in multi-talker scenarios via neural recordings. Despite the success of recent Sp-AAD algorithms, their performance is hindered by trial-specific features in EEG data. This study aims to make decoding robust to these features. Studies in neuroscience indicate that spatial auditory attention is reflected in the topological distribution of EEG energy across different frequency bands. This insight motivates us to propose Prototype Training, a neuroscience-inspired training method for Sp-AAD. The method constructs prototypes with enhanced energy-distribution representations and reduced trial-specific characteristics, enabling the model to better capture auditory attention features. To implement prototype training, an EEGWaveNet that employs the wavelet transform of EEG is further proposed. Detailed experiments indicate that EEGWaveNet with prototype training outperforms other competitive models on various datasets, validating the effectiveness of the proposed method. As a training method independent of model architecture, prototype training offers new insights into the field of Sp-AAD.
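
A rough sketch of the underlying idea follows (the function names, wavelet choice, and prototype construction are assumptions for illustration; the paper's EEGWaveNet and prototype-training pipeline are not reproduced here): compute per-channel wavelet band energies for each trial and average them within each attended-direction class, suppressing trial-specific fluctuations while retaining the spatial energy distribution.

    import numpy as np
    import pywt

    def band_energy_topography(trial, wavelet="db4", level=5):
        """Per-channel energy in each wavelet sub-band for one EEG trial
        (channels x samples); returns a (level+1) x channels map."""
        coeffs = pywt.wavedec(trial, wavelet, level=level, axis=-1)
        return np.stack([np.sum(c**2, axis=-1) for c in coeffs])

    def build_prototypes(trials, labels):
        """Average the band-energy maps of trials sharing an attention label."""
        maps = np.stack([band_energy_topography(tr) for tr in trials])
        return {lab: maps[labels == lab].mean(axis=0) for lab in np.unique(labels)}

    # Illustrative shapes: 64-channel EEG, 2-second trials at 128 Hz
    rng = np.random.default_rng(0)
    trials = rng.standard_normal((20, 64, 256))
    labels = np.array([0, 1] * 10)
    prototypes = build_prototypes(trials, labels)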

Recent advances in active materials and fabrication techniques have enabled the production of cyclically self-deployable metamaterials with an expanded functionality space. However, designing metamaterials that possess continuously tunable mechanical properties after self-deployment remains a challenge, notwithstanding its importance. Inspired by push puppets, we introduce an efficient design strategy to create reversibly self-deployable metamaterials with continuously tunable post-deployment stiffness and damping. Our metamaterial comprises contracting actuators threaded through beads with matching conical concavo-convex interfaces in networked chains. The slack network conforms to arbitrary shapes, but when actuated, it self-assembles into a preprogrammed configuration with beads gathered together. Further contraction of the actuators can dynamically tune the assembly's mechanical properties through the beads' particle jamming, while maintaining the overall structure with minimal change. We show that, after deployment, such metamaterials exhibit pronounced tunability in bending-dominated configurations: they can become more than 35 times stiffer and change their damping capability by over 50%. Through systematic analysis, we find that the beads' conical angle can introduce geometric nonlinearity, which has a major effect on the self-deployability and tunability of the metamaterial. Our work provides routes towards reversibly self-deployable, lightweight, and tunable metamaterials, with potential applications in soft robotics, reconfigurable architectures, and space engineering.
