
Temporal analysis of products (TAP) reactors enable experiments that probe numerous kinetic processes within a single set of experimental data through variations in pulse intensity, delay, or temperature. Selecting additional TAP experiments, however, often involves an arbitrary choice of reaction conditions or reliance on chemical intuition. To make experiment selection in TAP more robust, we explore the efficacy of model-based design of experiments (MBDoE) for precision in TAP reactor kinetic modeling. We successfully applied this approach to a case study of synthetic oxidative propane dehydrogenation (OPDH) involving pulses of propane and oxygen. We found that experiments identified as optimal by MBDoE for precision generally reduce parameter uncertainties more than alternative experiments. The performance of MBDoE for model divergence was also explored for OPDH in the setting where the relevant active sites (catalyst structure) are unknown. An experiment that maximized the divergence between the three proposed mechanisms was identified and led to clear mechanism discrimination. However, re-optimizing the kinetic parameters eliminated the ability to discriminate. The findings yield insight into the prospects and limitations of MBDoE for TAP and transient kinetic experiments.
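
A minimal sketch of how MBDoE for precision can rank candidate TAP experiments, assuming a toy two-parameter pulse-response model in place of the real OPDH kinetics: the next pulse delay is chosen to maximize the D-optimality criterion (the log-determinant of the accumulated Fisher information). All names, values, and the model itself are illustrative assumptions.

```python
# Sketch of D-optimal experiment selection (MBDoE for precision).
# A toy exponential pulse response stands in for a real TAP model.
import numpy as np

def response(theta, t, delay):
    """Toy two-parameter pulse response; a placeholder for TAP kinetics."""
    k1, k2 = theta
    s = np.clip(t - delay, 0.0, None)
    return np.exp(-k1 * t) + np.exp(-k2 * s) * (t >= delay)

def fisher(theta, t, delay, sigma=0.01, eps=1e-6):
    """Fisher information from finite-difference sensitivities dy/dtheta."""
    J = np.empty((t.size, len(theta)))
    for i in range(len(theta)):
        dth = np.zeros(len(theta))
        dth[i] = eps
        J[:, i] = (response(theta + dth, t, delay) -
                   response(theta - dth, t, delay)) / (2 * eps)
    return J.T @ J / sigma**2

theta_hat = np.array([1.5, 0.4])             # current parameter estimates
t = np.linspace(0.0, 5.0, 200)               # observation times within a pulse
fim_prior = fisher(theta_hat, t, delay=0.0)  # information already collected

# D-optimal choice: the pulse delay maximizing log det of total information.
candidates = [0.1, 0.5, 1.0, 2.0]
best = max(candidates,
           key=lambda d: np.linalg.slogdet(fim_prior + fisher(theta_hat, t, d))[1])
print("next pulse delay:", best)
```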

Related Content

ACM Transactions on Applied Perception (TAP) aims to strengthen the synergy between computer science and psychology/perception by publishing high-quality papers that help unify research in these fields. The journal publishes interdisciplinary research of significant and lasting value in any topic area spanning computer science and perceptual psychology. All papers must include both a perception and a computer science component. Topics include, but are not limited to: visual perception: computer graphics, scientific/data/information visualization, digital imaging, computer vision, stereoscopic and 3D display technology; auditory perception: auditory displays and interfaces, auditory coding, spatial sound, speech synthesis and recognition; haptics: haptic rendering, haptic input and perception; sensorimotor perception: gestural input, body movement input; sensory perception: sensory integration, multimodal rendering and interaction. Official website:

Randomness in the void distribution within a ductile metal complicates quantitative modeling of damage following the void growth to coalescence failure process. Though the sequence of micro-mechanisms leading to ductile failure is known from unit cell models, often based on assumptions of a regular distribution of voids, the effect of randomness remains a challenge. In the present work, mesoscale unit cell models, each containing an ensemble of four voids of equal size that are randomly distributed, are used to find statistical effects on the yield surface of the homogenized material. A yield locus is found based on a mean yield surface and a standard deviation of yield points obtained from 15 realizations of the four-void unit cells. It is found that the classical GTN model very closely agrees with the mean of the yield points extracted from the unit cell calculations with random void distributions, while the standard deviation $\textbf{S}$ varies with the imposed stress state. It is shown that the standard deviation is nearly zero for stress triaxialities $T\leq1/3$, while it rapidly increases for triaxialities above $T\approx 1$, reaching maximum values of about $\textbf{S}/\sigma_0\approx0.1$ at $T \approx 4$. At even higher triaxialities it decreases slightly. The results indicate that the dependence of the standard deviation on the stress state follows from variations in the deformation mechanism since a well-correlated variation is found for the volume fraction of the unit cell that deforms plastically at yield. Thus, the random void distribution activates different complex localization mechanisms at high stress triaxialities that differ from the ligament thinning mechanism forming the basis for the classical GTN model. A method for introducing the effect of randomness into the GTN continuum model is presented, and an excellent comparison to the unit cell yield locus is achieved.
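
For reference, a small sketch of the classical GTN yield function discussed above, together with an illustrative scatter band $\textbf{S}(T)$ that mimics the reported trend (near zero for $T\leq 1/3$, rising past $T\approx 1$, peaking near $\textbf{S}/\sigma_0\approx 0.1$ at $T\approx 4$). The $q_1$, $q_2$, and $f$ values are common defaults, not the paper's calibration.

```python
# Classical GTN yield function plus an assumed stress-state-dependent scatter band.
import numpy as np

def gtn(sig_eq, sig_m, sigma0=1.0, f=0.01, q1=1.5, q2=1.0):
    """Gurson-Tvergaard-Needleman yield function; yield occurs when gtn(...) = 0."""
    return ((sig_eq / sigma0) ** 2
            + 2.0 * q1 * f * np.cosh(1.5 * q2 * sig_m / sigma0)
            - 1.0 - (q1 * f) ** 2)

def scatter_band(T):
    """Illustrative S(T)/sigma0: ~0 for T <= 1/3, rising past T ~ 1, ~0.1 near T ~ 4."""
    return 0.1 * np.clip((T - 1.0) / 3.0, 0.0, 1.0)

T = 2.0                          # stress triaxiality sig_m / sig_eq
# Solve gtn = 0 along a ray of fixed triaxiality by bisection on sig_eq.
lo, hi = 1e-6, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gtn(mid, T * mid) < 0 else (lo, mid)
print(f"mean yield stress at T={T}: {lo:.4f}, scatter band +/- {scatter_band(T):.3f}")
```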

The joint modeling of multiple longitudinal biomarkers together with a time-to-event outcome is a challenging modeling task of continued scientific interest. In particular, the computational complexity of high-dimensional (generalized) mixed effects models often restricts the flexibility of shared parameter joint models, even when the subject-specific marker trajectories follow highly nonlinear courses. We propose a parsimonious multivariate functional principal components representation of the shared random effects. This allows better scalability, as the dimension of the random effects does not increase directly with the number of markers but only with the chosen number of principal component basis functions used in the approximation of the random effects. The functional principal component representation additionally allows the estimation of highly flexible subject-specific random trajectories without parametric assumptions. The modeled trajectories can thus be distinctly different for each biomarker. We build on the framework of flexible Bayesian additive joint models implemented in the R package 'bamlss', which also supports estimation of nonlinear covariate effects via Bayesian P-splines. The flexible yet parsimonious functional principal components basis used in the estimation of the joint model is estimated in a preliminary step. We validate our approach in a simulation study and illustrate its advantages by analyzing a study on primary biliary cholangitis.
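
The core dimension-reduction idea can be sketched outside the Bayesian machinery: stack every marker's curves, estimate a multivariate principal component basis once, and represent each subject by a few shared scores, so the random-effects dimension depends on the number of components rather than the number of markers. The simulation below is an illustrative toy, not the 'bamlss' implementation.

```python
# Toy multivariate FPCA: shared scores across markers via SVD of stacked curves.
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 100, 3, 50                  # subjects, markers, grid points per marker
t = np.linspace(0, 1, m)

# Simulate subject deviations shared across the p markers on a common grid.
scores_true = rng.normal(size=(n, 2))
phi = np.stack([np.sin(np.pi * t), np.cos(np.pi * t)])    # latent functions
curves = np.concatenate(
    [scores_true @ phi * (j + 1) + 0.1 * rng.normal(size=(n, m))
     for j in range(p)], axis=1)       # shape (n, p*m): markers stacked

# Multivariate FPCA via SVD of the centered stacked curves.
centered = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
K = 2                                  # number of principal component bases kept
psi = Vt[:K]                           # multivariate eigenfunctions, shape (K, p*m)
rho = centered @ psi.T                 # subject scores: dimension is K, not p

recon = curves.mean(axis=0) + rho @ psi
print("relative reconstruction error:",
      np.linalg.norm(recon - curves) / np.linalg.norm(curves))
```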

Numerical simulations of kinetic problems can become prohibitively expensive due to their large memory footprint and computational costs. A method that has proven successful at reducing these costs is the dynamical low-rank approximation (DLRA). One key question when using DLRA methods is the construction of robust time integrators that preserve the invariances and associated conservation laws of the original problem. In this work, we demonstrate that the augmented basis update & Galerkin (BUG) integrator preserves solution invariances and the associated conservation laws when using a conservative truncation step and an appropriate time and space discretization. We present numerical comparisons to existing conservative integrators and discuss advantages and disadvantages.
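
A minimal sketch of one augmented BUG step for a matrix ODE $dA/dt = F(A)$ with $A \approx U S V^T$, assuming a toy linear right-hand side and explicit Euler substeps; a conservative variant would replace the plain SVD truncation below with a truncation that preserves the chosen invariants.

```python
# One augmented BUG (basis update & Galerkin) step for low-rank matrix ODEs.
import numpy as np

def F(A):                       # toy right-hand side: a stable linear flow
    return -A + 0.1 * np.roll(A, 1, axis=0)

def orth(M):
    Q, _ = np.linalg.qr(M)
    return Q

def augmented_bug_step(U, S, V, h, r):
    A0 = U @ S @ V.T
    # K- and L-steps: evolve and augment each basis (rank grows to 2r).
    K1 = U @ S + h * F(A0) @ V
    L1 = V @ S.T + h * F(A0).T @ U
    U_hat = orth(np.hstack([U, K1]))
    V_hat = orth(np.hstack([V, L1]))
    # Galerkin step in the augmented bases.
    S_hat = U_hat.T @ U @ S @ V.T @ V_hat
    S_hat = S_hat + h * U_hat.T @ F(U_hat @ S_hat @ V_hat.T) @ V_hat
    # Truncation back to rank r; a conservative truncation would instead
    # preserve chosen invariants (e.g., mass) during this step.
    P, sig, Qt = np.linalg.svd(S_hat)
    return U_hat @ P[:, :r], np.diag(sig[:r]), V_hat @ Qt[:r].T

n, r, h = 64, 5, 1e-2
rng = np.random.default_rng(1)
U = orth(rng.normal(size=(n, r)))
V = orth(rng.normal(size=(n, r)))
S = np.diag(np.logspace(0, -4, r))
for _ in range(100):
    U, S, V = augmented_bug_step(U, S, V, h, r)
print("low-rank solution norm after integration:", np.linalg.norm(U @ S @ V.T))
```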

This study compares the performance of (1) fine-tuned models and (2) extremely large language models on the task of check-worthy claim detection. For the purpose of the comparison, we composed a multilingual and multi-topical dataset comprising texts of various sources and styles. Building on this, we performed a benchmark analysis to determine the most general multilingual and multi-topical claim detector. We chose three state-of-the-art models for the check-worthy claim detection task and fine-tuned them, and we selected three state-of-the-art extremely large language models used without any fine-tuning. We modified the models to adapt them to multilingual settings and, through extensive experimentation and evaluation, assessed the performance of all the models in terms of accuracy, recall, and F1-score in in-domain and cross-domain scenarios. Our results demonstrate that, despite the technological progress in natural language processing, models fine-tuned for check-worthy claim detection still outperform zero-shot approaches in cross-domain settings.
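
As an illustration of the fine-tuned side of this comparison, the sketch below fine-tunes a multilingual encoder for binary check-worthiness; the model name, toy data, and hyperparameters are assumptions rather than the paper's exact configuration.

```python
# Sketch: fine-tuning a multilingual encoder for check-worthy claim detection.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

texts = ["The minister said taxes rose 40% last year.", "Good morning everyone!"]
labels = [1, 0]                 # 1 = check-worthy claim, 0 = not check-worthy

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)

ds = Dataset.from_dict({"text": texts, "label": labels})
ds = ds.map(lambda b: tok(b["text"], truncation=True, padding="max_length",
                          max_length=128), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="claim-detector", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=ds,
)
trainer.train()
```

The zero-shot counterpart would instead prompt a large language model with the text and a check-worthiness instruction, with no gradient updates.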

The maximum likelihood estimator for mixtures of elliptically symmetric distributions is shown to be consistent for its population version, where the underlying distribution $P$ is nonparametric and does not necessarily belong to the class of mixtures on which the estimator is based. In a situation where $P$ is a mixture of sufficiently well-separated but nonparametric distributions, it is shown that the components of the population version of the estimator correspond to the well-separated components of $P$. This provides some theoretical justification for the use of such estimators for cluster analysis when $P$ has well-separated subpopulations, even if these subpopulations differ from what the mixture model assumes.
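
To make the target of consistency concrete, the population version can be written (with assumed notation) as the parameter maximizing the expected log-likelihood under $P$:
\[
\theta^{*}(P) = \operatorname*{arg\,max}_{\theta \in \Theta} \int \log\Big( \sum_{k=1}^{K} \pi_k \, f_{\mu_k,\Sigma_k}(x) \Big) \, dP(x),
\]
where $\theta$ collects the weights $\pi_k$ and the parameters of the elliptically symmetric component densities $f_{\mu_k,\Sigma_k}$; consistency then means $\hat{\theta}_n \to \theta^{*}(P)$ even when $P$ itself is not such a mixture.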

Causal investigations in observational studies pose a great challenge in scientific research where randomized trials or intervention-based studies are not feasible. Leveraging Shannon's seminal work on information theory, we consider a framework of asymmetry in which any causal link between a putative cause and effect must be explained through a mechanism governing the cause as well as a generative process yielding an effect of the cause. Under weak assumptions, this framework enables the assessment of whether X is a stronger predictor of Y or vice versa. Under stronger identifiability assumptions, our framework can distinguish between cause and effect using observational data. We establish key statistical properties of this framework. Our proposed methodology relies on scalable nonparametric density estimation using the fast Fourier transform. The resulting estimation method is many times faster than classical bandwidth-based density estimation while maintaining comparable mean integrated squared error rates. We investigate key asymptotic properties of our methodology and introduce a data-splitting technique to facilitate inference. The key attraction of our framework is its inference toolkit, which allows researchers to quantify uncertainty in causal discovery findings. We illustrate the performance of our methodology through simulation studies as well as multiple real data examples.
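
The FFT-based density estimation step can be sketched as binning the data and convolving with a Gaussian kernel in Fourier space, which is what makes it much faster than direct kernel sums; the grid size and bandwidth below are illustrative, and the paper's estimator may differ in detail.

```python
# FFT-based kernel density estimation: bin, then convolve in Fourier space.
import numpy as np

def kde_fft(x, grid_n=1024, bandwidth=0.2, pad=3.0):
    lo, hi = x.min() - pad, x.max() + pad
    edges = np.linspace(lo, hi, grid_n + 1)
    counts, _ = np.histogram(x, bins=edges)
    dx = edges[1] - edges[0]
    # Gaussian kernel sampled on the same grid, re-centered for circular convolution.
    grid = (np.arange(grid_n) - grid_n // 2) * dx
    kernel = np.exp(-0.5 * (grid / bandwidth) ** 2)
    kernel /= kernel.sum()
    dens = np.real(np.fft.ifft(np.fft.fft(counts) *
                               np.fft.fft(np.fft.ifftshift(kernel))))
    return edges[:-1] + dx / 2, dens / (x.size * dx)

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
centers, dens = kde_fft(x)
print("density integrates to ~1:", np.trapz(dens, centers))
```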

We consider distributed recursive estimation of consensus+innovations type in the presence of heavy-tailed sensing and communication noises. We allow the sensing and communication noises to be mutually correlated while independent and identically distributed (i.i.d.) in time, and both may have infinite moments of order higher than one (hence infinite variances). Such heavy-tailed, infinite-variance noises are highly relevant in practice and are shown to occur, e.g., in dense internet of things (IoT) deployments. We develop a consensus+innovations distributed estimator that employs a general nonlinearity in both the consensus and innovations steps to combat the noise. We establish the estimator's almost sure convergence, asymptotic normality, and mean squared error (MSE) convergence. Moreover, we establish and explicitly quantify a sublinear MSE convergence rate for the estimator. We then quantify, through analytical examples, the effects of the nonlinearity choices and the noise correlation on system performance. Finally, numerical examples corroborate our findings and verify that the proposed method works in the simultaneous heavy-tailed communication-sensing noise setting, while existing methods fail under the same noise conditions.
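
A toy simulation of a nonlinear consensus+innovations recursion, with a clipping nonlinearity and Student-t noises standing in for the paper's general nonlinearities and heavy-tailed noise models; the graph, step sizes, and degrees of freedom are illustrative assumptions.

```python
# Toy nonlinear consensus+innovations estimation under heavy-tailed noises.
import numpy as np

rng = np.random.default_rng(0)
n, theta, T = 10, 1.0, 5000                 # agents, true scalar parameter, steps
A = rng.random((n, n)) < 0.3                # random connectivity
A = np.triu(A, 1)
A = A | A.T                                 # symmetric adjacency, no self-loops

def psi(u, c=1.0):                          # bounded nonlinearity tames heavy tails
    return np.clip(u, -c, c)

x = np.zeros(n)                             # local estimates
for t in range(1, T + 1):
    alpha, beta = 1.0 / t, 1.0 / t ** 0.7   # innovations and consensus step sizes
    y = theta + rng.standard_t(df=1.5, size=n)        # heavy-tailed sensing noise
    cons = np.zeros(n)
    for i in range(n):
        for j in np.flatnonzero(A[i]):
            comm_noise = rng.standard_t(df=1.5)       # heavy-tailed link noise
            cons[i] += psi(x[i] - (x[j] + comm_noise))
    x = x - beta * cons + alpha * psi(y - x)
print("local estimates (true theta = 1.0):", x.round(3))
```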

Uncontrolled hypertension is a global problem that needs to be addressed. Despite the many mHealth solutions on the market, nonadherence to intended use jeopardizes treatment success. Although investigating user experience is one of the most important mechanisms for understanding mHealth discontinuance, the core determinants of overall user experience (both positive and negative) with mHealth apps for hypertension are, surprisingly, unknown. To address this gap in knowledge, this study adopts the computational grounded theory methodological framework and employs deep learning algorithms to predict the core quality criteria that affect the overall user experience of hypertension apps published in the Apple App Store. This study contributes to the theory and practice of designing evidence-based interventions for hypertension in the form of propositions, and provides valuable managerial implications and recommendations for manufacturers.

Model-based optimization (MBO) is increasingly applied to design problems in science and engineering. A common scenario involves using a fixed training set to train models, with the goal of designing new samples that outperform those present in the training data. A major challenge in this setting is distribution shift, where the distributions of training and design samples differ. While some shift is expected, since the goal is to create better designs, this change can negatively affect model accuracy and, subsequently, design quality. Despite the widespread nature of this problem, addressing it demands deep domain knowledge and artful application. To tackle this issue, we propose a straightforward method for design practitioners that detects distribution shifts. The method trains a binary classifier, using knowledge of the unlabeled design distribution, to separate the training data from the design data; the classifier's logit scores are then used as a proxy measure of distribution shift. We validate our method in a real-world application by running offline MBO and evaluating the effect of distribution shift on design quality. We find that the intensity of the shift in the design distribution varies with the number of steps taken by the optimization algorithm, and that our simple approach can identify these shifts. This enables users to constrain their search to regions where the model's predictions are reliable, thereby increasing the quality of designs.
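
The proposed detector is simple enough to sketch end to end: label training inputs 0 and design inputs 1, fit a classifier, and read its logit scores as the shift measure. Logistic regression and the Gaussian toy data below are assumptions standing in for a practitioner's actual model and design distribution.

```python
# Sketch of distribution-shift detection via a train-vs-design classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 8))       # original training inputs
X_design = rng.normal(0.8, 1.0, size=(500, 8))      # candidate designs (shifted)

X = np.vstack([X_train, X_design])
z = np.concatenate([np.zeros(500), np.ones(500)])   # 0 = train, 1 = design
clf = LogisticRegression(max_iter=1000).fit(X, z)

# Logit score per design: large positive values flag strong distribution shift.
logits = clf.decision_function(X_design)
print("mean shift score:", logits.mean().round(3))

# Constrain the search to designs the model is more likely to predict reliably.
keep = X_design[logits < np.quantile(logits, 0.5)]
```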

The analysis of multivariate functional curves has the potential to yield important scientific discoveries in domains such as healthcare, medicine, economics and social sciences. However, it is common for real-world settings to present data that are both sparse and irregularly sampled, and this introduces important challenges for the current functional data methodology. Here we propose a Bayesian hierarchical framework for multivariate functional principal component analysis which accommodates the intricacies of such sampling designs by flexibly pooling information across subjects and correlated curves. Our model represents common latent dynamics via shared functional principal component scores, thereby effectively borrowing strength across curves while circumventing the computationally challenging task of estimating covariance matrices. These scores also provide a parsimonious representation of the major modes of joint variation of the curves, and constitute interpretable scalar summaries that can be employed in follow-up analyses. We perform inference using a variational message passing algorithm which combines efficiency, modularity and approximate posterior density estimation, enabling the joint analysis of large datasets with parameter uncertainty quantification. We conduct detailed simulations to assess the effectiveness of our approach in sharing information under complex sampling designs. We also exploit it to estimate the molecular disease courses of individual patients with SARS-CoV-2 infection and characterise patient heterogeneity in recovery outcomes; this study reveals key coordinated dynamics across the immune, inflammatory and metabolic systems, which are associated with survival and long-COVID symptoms up to one year after disease onset. Our approach is implemented in the R package bayesFPCA.
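
Schematically, the shared-score structure described above can be written (with assumed notation) as
\[
y_{ij}(t) = \mu_j(t) + \sum_{k=1}^{K} \zeta_{ik}\,\psi_{jk}(t) + \varepsilon_{ij}(t), \qquad \zeta_{ik} \sim \mathcal{N}(0,\lambda_k), \quad \varepsilon_{ij}(t) \sim \mathcal{N}(0,\sigma_j^2),
\]
so the $j$ curves of subject $i$ are coupled only through the shared scores $\zeta_{i1},\dots,\zeta_{iK}$, which avoids estimating cross-curve covariance matrices and yields interpretable scalar summaries for follow-up analyses.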
