
Digital technologies can be used to gather accurate information about the behavior of structural components, both for improving system design and for enabling advanced Structural Health Monitoring strategies. The development of virtualization approaches that deliver so-called Digital Twins, i.e., digital mirrored representations of physical assets, opens up new avenues for automated and continuous structural assessment. In this framework, the main motivation of this work stems from the existing challenges in implementing and deploying a real-time predictive framework for the virtualization of dynamic systems. Kalman-based filters are usually employed in this context to address the task of joint input-state prediction in structural dynamics. A Gaussian Process Latent Force Model (GPLFM) approach is exploited in this work to construct flexible, data-driven a priori models for the unknown inputs, which are then coupled with a mechanistic model of the structural component under study for input-state estimation. The use of GP regression for this task overcomes the limitations of the conventional random-walk model, thus reducing the need for offline, user-dependent calibration of this type of data assimilation method. This paper proposes the use of alternative covariance functions for GP regression in structural dynamics and offers a theoretical analysis of the GPLFMs linked to the investigated covariance functions. The outcome of this study provides insights into the applicability of each covariance type for GP-based input-state estimation. The proposed framework is validated via an illustrative simulated example, namely a 3 Degrees of Freedom system subjected to an array of different loading scenarios. Additionally, the performance of the method is experimentally assessed on the task of joint input-state estimation during testing of a 3D-printed scaled wind turbine blade.
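
As a hedged illustration of the GPLFM idea (not the paper's implementation or its 3-DOF case), the sketch below models the unknown force acting on a single-DOF oscillator as an Ornstein-Uhlenbeck process, i.e., a GP with exponential (Matérn-1/2) covariance, augments it into the state vector, and runs a standard Kalman filter for joint input-state estimation. All physical and GP hyperparameters are assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Toy GPLFM sketch (hypothetical parameters): 1-DOF oscillator with an unknown
# input force p(t) modelled as an Ornstein-Uhlenbeck process, augmented into
# the state vector z = [displacement, velocity, force].
m, c, k = 1.0, 0.4, 50.0           # mass, damping, stiffness (assumed)
ell, sigma = 0.5, 2.0              # GP length-scale and magnitude (assumed)
dt = 0.01

Ac = np.array([[0.0,  1.0,  0.0],
               [-k/m, -c/m, 1.0/m],
               [0.0,  0.0,  -1.0/ell]])
Lc = np.array([[0.0], [0.0], [1.0]])
qc = 2.0 * sigma**2 / ell          # OU process spectral density

# Exact discretisation of (Ac, Lc qc Lc^T) via the Van Loan matrix exponential
M = np.block([[Ac, Lc @ Lc.T * qc],
              [np.zeros((3, 3)), -Ac.T]]) * dt
Phi = expm(M)
Ad = Phi[:3, :3]
Qd = Phi[:3, 3:] @ Ad.T

H = np.array([[1.0, 0.0, 0.0]])    # measure displacement only
R = np.array([[1e-6]])             # measurement noise variance (assumed)

def kalman_step(z, P, y):
    """One predict/update step of the augmented (joint input-state) filter."""
    z, P = Ad @ z, Ad @ P @ Ad.T + Qd
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z + (K @ (y - H @ z)).ravel()
    P = (np.eye(3) - K @ H) @ P
    return z, P                    # z[2] is the GP-based latent force estimate
```

Running `kalman_step` over a measurement sequence yields the third state component as the estimate of the unmeasured force, which is the essence of replacing the random-walk input model with a GP prior in state-space form.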

Related content

Stochastic multi-scale modeling and simulation for nonlinear thermo-mechanical problems of composite materials with complicated random microstructures remains a challenging issue. In this paper, we develop a novel statistical higher-order multi-scale (SHOMS) method for nonlinear thermo-mechanical simulation of random composite materials, designed to overcome the prohibitive cost of resolving both the macro-scale and the micro-scale directly. By virtue of statistical multi-scale asymptotic analysis and the Taylor series method, the SHOMS computational model is rigorously derived for accurately analyzing nonlinear thermo-mechanical responses of random composite materials at both the macro-scale and the micro-scale. Moreover, the point-wise local error analysis of the SHOMS solutions clearly illustrates why the higher-order asymptotic correction terms in the SHOMS computational model are indispensable for preserving local energy and momentum conservation. Then, the corresponding space-time multi-scale numerical algorithm, with off-line and on-line stages, is designed to efficiently simulate the nonlinear thermo-mechanical behavior of random composite materials. Finally, extensive numerical experiments are presented to gauge the efficiency and accuracy of the proposed SHOMS approach.
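
The SHOMS model itself is beyond the scope of a short snippet, but the off-line/on-line structure it relies on can be illustrated on a drastically simplified surrogate problem. The sketch below uses first-order 1D thermal homogenisation (harmonic-mean effective conductivity of sampled random microstructures) as the off-line stage and a macro-scale finite difference solve as the on-line stage; every parameter and the microstructure model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- offline stage: sample unit-cell realisations of a random two-phase medium
# --- and precompute the statistically averaged effective conductivity
def offline_effective_conductivity(n_samples=200, n_cells=64):
    a_eff = np.empty(n_samples)
    for s in range(n_samples):
        a = np.where(rng.random(n_cells) < 0.5, 1.0, 10.0)   # two-phase medium
        a_eff[s] = 1.0 / np.mean(1.0 / a)                    # 1D harmonic mean
    return a_eff.mean()

# --- online stage: solve the macro problem -(a* u')' = f on (0,1), u(0)=u(1)=0
def online_macro_solve(a_star, f=1.0, n=100):
    h = 1.0 / n
    main = 2.0 * a_star / h**2 * np.ones(n - 1)
    off = -a_star / h**2 * np.ones(n - 2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f * np.ones(n - 1))

a_star = offline_effective_conductivity()
u_macro = online_macro_solve(a_star)
```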

Ecological spatial areal models encounter the well-known and challenging problem of spatial confounding. This issue makes it arduous to distinguish between the impacts of observed covariates and spatial random effects. Despite previous research and various proposed methods to tackle this problem, finding a definitive solution remains elusive. In this paper, we propose a one-step version of the spatial+ approach that involves dividing the covariate into two components. One component captures large-scale spatial dependence, while the other accounts for short-scale dependence. This approach eliminates the need to separately fit spatial models for the covariates. We apply this method to analyze two forms of crimes against women, namely rapes and dowry deaths, in Uttar Pradesh, India, exploring their relationship with socio-demographic covariates. To evaluate the performance of the new approach, we conduct extensive simulation studies under different spatial confounding scenarios. The results demonstrate that the proposed method provides reliable estimates of fixed effects and posterior correlations between different responses.
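
As a hedged sketch of the covariate split that drives this one-step approach (not the authors' exact basis or response model), the snippet below projects a covariate onto a low-frequency spatial basis to extract its large-scale component, keeps the residual as the short-scale component, and assembles both as separate regressors for the response. The sine/cosine basis and the simulated locations are illustrative assumptions.

```python
import numpy as np

def split_covariate(coords, x, n_freq=3):
    """Large-scale part = projection onto a low-frequency spatial surface;
    short-scale part = the residual of that projection."""
    s1, s2 = coords[:, 0], coords[:, 1]
    cols = [np.ones_like(s1)]
    for k in range(1, n_freq + 1):
        cols += [np.sin(k * np.pi * s1), np.cos(k * np.pi * s1),
                 np.sin(k * np.pi * s2), np.cos(k * np.pi * s2)]
    B = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(B, x, rcond=None)
    x_large = B @ beta
    return x_large, x - x_large

# toy usage on simulated locations in the unit square
rng = np.random.default_rng(1)
coords = rng.random((500, 2))
x = np.sin(np.pi * coords[:, 0]) + 0.3 * rng.standard_normal(500)
x_large, x_short = split_covariate(coords, x)
X = np.column_stack([np.ones(500), x_large, x_short])  # design for the response model
```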

High-level synthesis (HLS) refers to the automatic translation of a software program written in a high-level language into a hardware design. Modern HLS tools have moved away from the traditional approach of static (compile-time) scheduling of operations towards generating dynamic circuits that schedule operations at run time. Such circuits trade off area utilisation for increased dynamism and throughput. However, existing lowering flows in dynamically scheduled HLS tools rely on conservative assumptions about their input program, due both to the intermediate representations (IRs) utilised and to the lack of formal specifications of the translation into hardware. These assumptions cause suboptimal hardware performance. In this work, we lift these assumptions by proposing a new and efficient abstraction for hardware mapping, namely h-GSA, an extension of the Gated Single Assignment (GSA) IR. Using this abstraction, we propose a lowering flow that transforms GSA into h-GSA and maps h-GSA into dynamically scheduled hardware circuits. We compare the schedules generated by our approach to those of the state-of-the-art dynamically scheduled HLS tool, Dynamatic, and illustrate the potential performance improvement from hardware mapping using the proposed abstraction.
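
Purely as a toy illustration (this is neither Dynamatic's IR nor the paper's h-GSA definition), the sketch below represents gated SSA gamma nodes in a tiny data structure and applies a naive lowering of each node to a handshake-style dataflow unit, to convey the flavour of mapping a gated IR onto dynamically scheduled hardware.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GammaNode:          # gated selection: out = val_true if pred else val_false
    pred: str
    val_true: str
    val_false: str
    out: str

@dataclass
class OpNode:             # plain arithmetic operation
    op: str
    args: List[str]
    out: str

def lower_to_dataflow(nodes):
    """Map each IR node to a (hypothetical) dynamically scheduled unit."""
    units = []
    for n in nodes:
        if isinstance(n, GammaNode):
            units.append(("mux", n.pred, [n.val_true, n.val_false], n.out))
        elif isinstance(n, OpNode):
            units.append((n.op, None, n.args, n.out))
    return units

ir = [OpNode("add", ["a", "b"], "t1"),
      GammaNode("cond", "t1", "c", "x1")]
print(lower_to_dataflow(ir))
```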

Although compartmental dynamical systems are used in many different areas of science, model selection based on the maximum entropy principle (MaxEnt) is challenging because of the lack of methods for quantifying the entropy of this type of system. Here, we take advantage of the interpretation of compartmental systems as continuous-time Markov chains to obtain entropy measures that quantify model information content. In particular, we quantify the uncertainty of a single particle's path as it travels through the system, as described by path entropy and entropy rates. Path entropy measures the uncertainty of the entire path of a traveling particle from its entry into the system until its exit, whereas entropy rates measure the average uncertainty of the instantaneous future of a particle while it is in the system. We derive explicit formulas for these two types of entropy for compartmental systems in equilibrium based on Shannon information entropy and show how they can be used to solve equifinality problems in the process of model selection by means of MaxEnt.
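
As a hedged sketch (not the paper's exact formulas), the snippet below treats a compartmental system as a continuous-time Markov chain with a toy generator Q and computes two building blocks of path-entropy style measures: the Shannon entropy per jump of the embedded chain and the stationary-average differential entropy of the exponential sojourn times.

```python
import numpy as np

def jump_chain(Q):
    """Exit rates and embedded jump-chain probabilities of a CTMC generator."""
    rates = -np.diag(Q)                       # exit rate of each compartment
    P = Q / rates[:, None]
    np.fill_diagonal(P, 0.0)                  # embedded jump probabilities
    return P, rates

def entropy_measures(Q):
    P, rates = jump_chain(Q)
    # stationary distribution of the embedded chain: pi P = pi
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    H_jump = -np.sum(pi[:, None] * P * logP)          # Shannon entropy per jump
    h_sojourn = np.sum(pi * (1.0 - np.log(rates)))    # Exp(rate) differential entropy
    return H_jump, h_sojourn

Q = np.array([[-1.0, 0.7, 0.3],
              [0.2, -0.5, 0.3],
              [0.4, 0.1, -0.5]])              # assumed toy generator
print(entropy_measures(Q))
```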

To minimize the average of a set of log-convex functions, the stochastic Newton method iteratively updates its estimate using subsampled versions of the full objective's gradient and Hessian. We contextualize this optimization problem as sequential Bayesian inference on a latent state-space model with a discriminatively-specified observation process. Applying Bayesian filtering then yields a novel optimization algorithm that considers the entire history of gradients and Hessians when forming an update. We establish matrix-based conditions under which the effect of older observations diminishes over time, in a manner analogous to Polyak's heavy ball momentum. We illustrate various aspects of our approach with an example and review other relevant innovations for the stochastic Newton method.
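
A hedged sketch of a history-aware subsampled Newton update for l2-regularised logistic regression (a sum of log-convex losses): minibatch gradients and Hessians are accumulated with an exponential forgetting factor, loosely mimicking the filtering view in which older observations are downweighted over time. The forgetting factor and step logic are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def stochastic_newton(X, y, batch=32, iters=200, lam=1e-2, forget=0.9, seed=0):
    """Subsampled Newton with exponentially forgotten gradient/Hessian estimates."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    g_bar, H_bar = np.zeros(d), lam * np.eye(d)
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        Xb, yb = X[idx], y[idx]
        p = 1.0 / (1.0 + np.exp(-Xb @ theta))
        g = Xb.T @ (p - yb) / batch + lam * theta                 # subsampled gradient
        W = p * (1.0 - p)
        H = (Xb * W[:, None]).T @ Xb / batch + lam * np.eye(d)    # subsampled Hessian
        g_bar = forget * g_bar + (1.0 - forget) * g               # filtered estimates
        H_bar = forget * H_bar + (1.0 - forget) * H
        theta -= np.linalg.solve(H_bar, g_bar)                    # Newton-type update
    return theta

# toy usage on synthetic data
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)
theta_hat = stochastic_newton(X, y)
```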

The rise of information technology has transformed the business landscape, with organizations increasingly relying on information systems to collect and store vast amounts of data. To stay competitive, businesses must harness this data to make informed decisions that optimize their actions in response to the market. Business intelligence (BI) is an approach that enables organizations to leverage data-driven insights for better decision-making, but implementing BI comes with its own set of challenges. Accordingly, understanding the key factors that contribute to successful implementation is crucial. This study examines the factors affecting the implementation of BI projects by analyzing the interactions between these factors using system dynamics modeling. The research draws on interviews with five BI experts and a review of the background literature to identify effective implementation strategies. Specifically, the study compares traditional and self-service implementation approaches and simulates their respective impacts on organizational acceptance of BI. The results show that the two approaches were equally effective in generating organizational acceptance until the twenty-fifth month of implementation, after which the self-service strategy generated significantly higher levels of acceptance than the traditional strategy. In fact, after 60 months, the self-service approach was associated with a 30% increase in organizational acceptance over the traditional approach. The paper also provides recommendations for increasing the acceptance of BI in both implementation strategies. Overall, this study underscores the importance of identifying and addressing key factors that impact BI implementation success, offering practical guidance to organizations seeking to leverage the power of BI in today's competitive business environment.
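
Purely as an illustration of the kind of stock-and-flow simulation involved (none of these parameters come from the paper's calibrated system dynamics model), the snippet below evolves an acceptance stock in [0, 1] through a strategy-dependent adoption flow over 60 months for both implementation strategies.

```python
import numpy as np

def simulate_acceptance(months=60, strategy="self-service"):
    """Toy stock-and-flow model: acceptance grows via a saturating adoption flow."""
    acceptance, path = 0.0, []
    for t in range(1, months + 1):
        if strategy == "traditional":
            rate = 0.045                           # assumed constant adoption rate
        else:
            rate = 0.03 if t <= 25 else 0.08       # assumed slower start, faster later
        acceptance += rate * (1.0 - acceptance)    # adoption flow into the stock
        path.append(acceptance)
    return np.array(path)

trad = simulate_acceptance(strategy="traditional")
self_service = simulate_acceptance(strategy="self-service")
```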

Quantization summarizes continuous distributions by calculating a discrete approximation. Among the widely adopted methods for data quantization is Lloyd's algorithm, which partitions the space into Voronoï cells, which can be seen as clusters, and constructs a discrete distribution based on their centroids and probabilistic masses. Lloyd's algorithm estimates the optimal centroids in a minimal expected distance sense, but this approach poses significant challenges in scenarios where data evaluation is costly and relates to rare events: the single cluster associated with the absence of an event then takes the majority of the probability mass. In this context, a metamodel is required, and adapted sampling methods are necessary to increase the precision of the computations on the rare clusters.
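
A minimal sketch of Lloyd's algorithm, assuming random initialisation and a fixed iteration budget: samples are assigned to Voronoï cells, centroids move to cell means, and the resulting discrete distribution is given by the centroids and cell masses. No rare-event treatment or metamodel is included here.

```python
import numpy as np

def lloyd(samples, n_centroids=5, iters=50, seed=0):
    """Lloyd's algorithm: centroids and probability masses of the Voronoi cells."""
    rng = np.random.default_rng(seed)
    centroids = samples[rng.choice(len(samples), n_centroids, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)                        # Voronoi cell of each sample
        for j in range(n_centroids):
            if np.any(labels == j):
                centroids[j] = samples[labels == j].mean(axis=0)
    masses = np.bincount(labels, minlength=n_centroids) / len(samples)
    return centroids, masses                             # discrete approximation

samples = np.random.default_rng(1).normal(size=(2000, 2))
centroids, masses = lloyd(samples)
```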

A central problem in computational statistics is to convert a procedure for sampling combinatorial objects into a procedure for counting those objects, and vice versa. We consider sampling problems coming from *Gibbs distributions*, which are probability distributions of the form $\mu^\Omega_\beta(\omega) \propto e^{\beta H(\omega)}$ for $\beta$ in an interval $[\beta_\min, \beta_\max]$ and $H( \omega ) \in \{0 \} \cup [1, n]$. The *partition function* is the normalization factor $Z(\beta)=\sum_{\omega \in\Omega}e^{\beta H(\omega)}$. Two important parameters are the log partition ratio $q = \log \tfrac{Z(\beta_\max)}{Z(\beta_\min)}$ and the vector of counts $c_x = |H^{-1}(x)|$. Our first result is an algorithm to estimate the counts $c_x$ using roughly $\tilde O( \frac{q}{\epsilon^2})$ samples for general Gibbs distributions and $\tilde O( \frac{n^2}{\epsilon^2} )$ samples for integer-valued distributions (ignoring some second-order terms and parameters). We show this is optimal up to logarithmic factors. We illustrate with improved algorithms for counting connected subgraphs and perfect matchings in a graph. We develop a key subroutine for global estimation of the partition function. Specifically, we produce a data structure to estimate $Z(\beta)$ for \emph{all} values $\beta$, without further samples. Constructing the data structure requires $O(\frac{q \log n}{\epsilon^2})$ samples for general Gibbs distributions and $O(\frac{n^2 \log n}{\epsilon^2} + n \log q)$ samples for integer-valued distributions. This improves over a prior algorithm of Kolmogorov (2018) which computes the single point estimate $Z(\beta_\max)$ using $\tilde O(\frac{q}{\epsilon^2})$ samples. We also show that this complexity is optimal as a function of $n$ and $q$ up to logarithmic terms.
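
For context, a hedged sketch of the classical telescoping-product estimator of the log partition ratio $q$, which is the kind of baseline the proposed algorithms improve upon. Here `sample_H` stands in for an assumed sampling oracle for $\mu_\beta$, and the uniform temperature grid is an illustrative choice, not the paper's schedule.

```python
import numpy as np
from math import comb

def log_partition_ratio(sample_H, beta_min, beta_max, n_steps=50, m=2000):
    """Estimate log Z(beta_max)/Z(beta_min) by a telescoping product of ratios."""
    betas = np.linspace(beta_min, beta_max, n_steps + 1)
    log_ratio = 0.0
    for b0, b1 in zip(betas[:-1], betas[1:]):
        H = sample_H(b0, m)
        # Z(b1) / Z(b0) = E_{mu_b0}[ exp((b1 - b0) * H(omega)) ]
        log_ratio += np.log(np.mean(np.exp((b1 - b0) * H)))
    return log_ratio

# toy check: counts c_x = C(n, x), so Z(beta) = (1 + e^beta)^n and q is known
n = 20
def sample_H(beta, m, rng=np.random.default_rng(2)):
    w = np.array([comb(n, x) for x in range(n + 1)]) * np.exp(beta * np.arange(n + 1))
    return rng.choice(n + 1, size=m, p=w / w.sum())

print(log_partition_ratio(sample_H, 0.0, 1.0), n * np.log((1 + np.e) / 2))
```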

This paper focuses on investigating the density convergence of a fully discrete finite difference method when applied to numerically solve the stochastic Cahn--Hilliard equation driven by multiplicative space-time white noise. The main difficulty lies in the control of the drift coefficient, which is neither globally Lipschitz nor one-sided Lipschitz. To handle this difficulty, we propose a novel localization argument and derive the strong convergence rate of the numerical solution in order to estimate the total variation distance between the exact and numerical solutions. This, together with the existence of the density of the numerical solution, finally yields the convergence in $L^1(\mathbb{R})$ of the density of the numerical solution. Our results give a partial positive answer to the open problem raised in [J. Cui and J. Hong, J. Differential Equations (2020)] on numerically computing the density of the exact solution.
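
A hedged and much-simplified sketch (not the scheme analysed in the paper): a 1D stochastic Cahn-Hilliard equation with periodic boundary conditions, discretised by finite differences in space and a semi-implicit Euler step in time (biharmonic part implicit, nonlinearity explicit), with space-time white noise approximated by scaled Gaussian increments and an assumed multiplicative coefficient sigma(u) = u.

```python
import numpy as np

N, T, eps = 128, 0.05, 0.05
h, dt = 1.0 / N, 1e-5
rng = np.random.default_rng(3)

# periodic second-difference matrix
D = (-2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
     + np.eye(N, k=N - 1) + np.eye(N, k=-(N - 1))) / h**2
A = np.eye(N) + dt * eps**2 * (D @ D)            # implicit biharmonic operator

u = 0.1 * np.cos(2 * np.pi * np.arange(N) * h)   # smooth initial datum (assumed)
for _ in range(int(T / dt)):
    noise = np.sqrt(dt / h) * rng.standard_normal(N)   # discrete space-time white noise
    rhs = u + dt * D @ (u**3 - u) + u * noise          # sigma(u) = u (assumption)
    u = np.linalg.solve(A, rhs)
```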

We consider distributed recursive estimation of consensus+innovations type in the presence of heavy-tailed sensing and communication noises. We allow the sensing and communication noises to be mutually correlated while independent identically distributed (i.i.d.) in time, and both may have infinite moments of order higher than one (hence infinite variances). Such heavy-tailed, infinite-variance noises are highly relevant in practice and are shown to occur, e.g., in dense internet of things (IoT) deployments. We develop a consensus+innovations distributed estimator that employs a general nonlinearity in both the consensus and innovations steps to combat the noise. We establish the estimator's almost sure convergence, asymptotic normality, and mean squared error (MSE) convergence. Moreover, we establish and explicitly quantify a sublinear MSE convergence rate for the estimator. We then quantify through analytical examples the effects of the nonlinearity choices and the noise correlation on the system performance. Finally, numerical examples corroborate our findings and verify that the proposed method works in the simultaneous heavy-tailed communication-sensing noise setting, while existing methods fail under the same noise conditions.
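
A hedged sketch of a consensus+innovations recursion with a bounded (saturation) nonlinearity applied in both the consensus and innovations terms, estimating a scalar parameter over a ring network under heavy-tailed noises. The step-size schedules, clipping level, and Student-t noise model are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(4)
n_agents, theta, T = 10, 2.5, 5000
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}

def psi(u, c=1.0):
    """Bounded (saturation) nonlinearity applied to noisy differences."""
    return np.clip(u, -c, c)

x = np.zeros(n_agents)                                    # local estimates of theta
for t in range(1, T + 1):
    w_cons, w_innov = 1.0 / t**0.55, 1.0 / t**0.8         # assumed step-size schedules
    y = theta + rng.standard_t(df=1.5, size=n_agents)     # heavy-tailed sensing noise
    x_new = x.copy()
    for i in range(n_agents):
        consensus = sum(psi(x[i] - x[j] + rng.standard_t(df=1.5))   # noisy links
                        for j in neighbors[i])
        innovation = psi(y[i] - x[i])
        x_new[i] = x[i] - w_cons * consensus + w_innov * innovation
    x = x_new
# the local estimates x should concentrate around theta despite infinite variances
```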
