
This paper proposes a variance-based measure of importance for coherent systems with dependent and heterogeneous components. The particular cases of independent components and of homogeneous components are also considered. We model the dependence structure among the components with copulas. The proposed measure allows us to provide the best estimate of the system lifetime, in terms of the mean squared error, under the assumption that the lifetime of one of its components is known. We include theoretical results that are useful to calculate a closed-form expression of our measure and to compare two components of a system. We also provide procedures to approximate the importance measure by Monte Carlo simulation. Finally, we illustrate the main results with several examples.
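
As a rough illustration of how such a measure can be approximated by simulation, the following Python sketch estimates, for a 2-out-of-3 system with a Gaussian copula and exponential margins (all illustrative assumptions, not the paper's setting), the share of the variance of the system lifetime explained by each component; this corresponds to the variance reduction achieved by the mean-squared-error-optimal prediction given that component's lifetime.

```python
# A minimal Monte Carlo sketch (not the authors' exact estimator) of a
# variance-based importance measure for a coherent system: the fraction of
# Var(T) explained by knowing one component lifetime, estimated by binning.
# The Gaussian copula, exponential margins, and the 2-out-of-3 structure
# are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, bins = 200_000, 100

# Dependent uniforms from a Gaussian copula with exchangeable correlation 0.5.
corr = 0.5 * np.ones((3, 3)) + 0.5 * np.eye(3)
z = rng.multivariate_normal(np.zeros(3), corr, size=n)
u = norm.cdf(z)

# Heterogeneous exponential component lifetimes X_1, X_2, X_3.
rates = np.array([1.0, 1.5, 2.0])
x = -np.log(1.0 - u) / rates

# Lifetime of a 2-out-of-3 system: the second-smallest component lifetime.
t = np.sort(x, axis=1)[:, 1]

var_t = t.var()
for i in range(3):
    # Var(E[T | X_i]) estimated by binning on the empirical quantiles of X_i.
    order = np.argsort(x[:, i])
    bin_means = t[order].reshape(bins, -1).mean(axis=1)
    importance = bin_means.var() / var_t  # share of Var(T) explained by X_i
    print(f"component {i + 1}: importance ~ {importance:.3f}")
```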

Related content

CASES: International Conference on Compilers, Architectures, and Synthesis for Embedded Systems. Explanation: international conference on compilers, architectures, and synthesis for embedded systems. Publisher: ACM. SIT:

This paper introduces a new algorithm to improve the accuracy of numerical phase-averaging in oscillatory, multiscale differential equations. Phase-averaging is a timestepping method that averages a mapped variable to remove highly oscillatory linear terms from the differential equation. This retains the main effect of the fast waves on the low frequencies without explicitly resolving the rapid oscillations. However, it comes at the cost of introducing an averaging error. To offset this, we propose a modified mapping that includes a mean correction term encoding an average measure of the nonlinear interactions. This mapping was introduced in Tao (2019) for weak nonlinearity and relied on classical time-averaging, which leaves only the zero frequencies. Our algorithm instead considers mean-corrected phase-averaging when (1) the nonlinearity is not weak but the linear oscillations are fast, and (2) finite averaging windows are applied via a smooth kernel, which has the advantage of retaining low frequencies whilst still eliminating the fastest oscillations. In particular, we introduce a local mean correction that combines the concepts of a mean correction and finite averaging; this retains low-frequency components in the mean correction that are removed with classical time-averaging. We show that the new timestepping algorithm reduces phase errors in the mapped variable for the swinging spring ODE in various dynamical configurations. We also show accuracy improvements with a local mean correction compared to standard phase-averaging in the one-dimensional rotating shallow water equations, a useful test case for weather and climate applications.
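
To make the kernel-averaging step concrete, here is a minimal Python sketch of finite-window phase-averaging for a scalar toy ODE; the toy equation, the bump kernel, and the forward-Euler stepping are illustrative assumptions and do not reproduce the paper's algorithm or its mean correction.

```python
# A minimal sketch of finite-window phase-averaging for a scalar toy ODE
#   du/dt = i*omega*u + u**2,   omega >> 1   (illustrative, not the paper's).
# The mapped variable v = exp(-i*omega*t) * u has its nonlinearity averaged
# over a window of width T with a smooth kernel rho, which damps the fastest
# oscillations while retaining low frequencies.
import numpy as np

omega, T, n_quad = 50.0, 0.5, 64

# Smooth, compactly supported bump kernel on [-T/2, T/2].
s = np.linspace(-T / 2, T / 2, n_quad)
rho = np.exp(-1.0 / (1.0 - (2.0 * s / T) ** 2 + 1e-12))
rho /= np.trapz(rho, s)

def averaged_rhs(v, t):
    """Kernel-weighted average of the mapped nonlinearity e^{-iws} N(e^{iws} v)."""
    phases = np.exp(1j * omega * (t + s))
    integrand = rho * (v * phases) ** 2 / phases
    return np.trapz(integrand, s)

# Forward-Euler step on the slow, mapped variable (illustration only).
v, t, dt = 1.0 + 0.0j, 0.0, 0.01
for _ in range(100):
    v += dt * averaged_rhs(v, t)
    t += dt
print("mapped variable after averaging:", v)
```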

We introduce a robust, first-order accurate meshfree method to numerically solve time-dependent nonlinear conservation laws. The main contribution of this work is the meshfree construction of first-order consistent summation-by-parts (SBP) differentiation operators. We describe how to efficiently construct such operators on a point cloud. We then study the performance of these operators and combine them with a numerical flux-based formulation to approximate the solution of nonlinear conservation laws, with a focus on the advection equation and the compressible Euler equations. We observe numerically that, while the resulting meshfree differentiation operators are only $O(h^{1/2})$ accurate in the $L^2$ norm, they achieve $O(h)$ rates of convergence when applied to the numerical solution of PDEs.
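
For readers unfamiliar with the SBP structure, the following sketch builds the classical uniform-grid first-derivative SBP operator (the paper's contribution is the meshfree construction on point clouds, which is not reproduced here) and measures its empirical $L^2$ error.

```python
# A minimal 1D summation-by-parts (SBP) sketch on a uniform grid, illustrating
# the SBP structure D = H^{-1} Q with Q + Q^T = B (boundary matrix).
import numpy as np

def sbp_first_derivative(n, h):
    # Diagonal norm (quadrature) matrix H and almost-skew-symmetric Q.
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1], Q[i + 1, i] = 0.5, -0.5
    Q[0, 0], Q[-1, -1] = -0.5, 0.5          # boundary closure: Q + Q^T = B
    return np.linalg.inv(H) @ Q, H

# Empirical discrete-L2 error of D applied to a smooth function.
for n in (50, 100, 200, 400):
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    D, H = sbp_first_derivative(n, h)
    err = D @ np.sin(2 * np.pi * x) - 2 * np.pi * np.cos(2 * np.pi * x)
    print(n, np.sqrt(err @ H @ err))        # discrete L2 norm of the error
```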

This paper shows how an uncertainty-aware, deep neural network can be trained to detect, recognise and localise objects in 2D RGB images, in applications lacking annotated training datasets. We propose a self-supervising teacher-student pipeline, in which a relatively simple teacher classifier, trained with only a few labelled 2D thumbnails, automatically processes a larger body of unlabelled RGB-D data to teach a student network based on a modified YOLOv3 architecture. Firstly, 3D object detection with back projection is used to automatically extract and teach 2D detection and localisation information to the student network. Secondly, a weakly supervised 2D thumbnail classifier, with minimal training on a small number of hand-labelled images, is used to teach object category recognition. Thirdly, we use a Gaussian Process (GP) to encode and teach a robust uncertainty estimation functionality, so that the student can output confidence scores with each categorisation. The resulting student significantly outperforms the same YOLO architecture trained directly on the same amount of labelled data. Our GP-based approach yields robust and meaningful uncertainty estimates for complex industrial object classifications. The end-to-end network is also capable of real-time processing, as needed for robotics applications. Our method can be applied to many important industrial tasks where labelled datasets are typically unavailable. In this paper, we demonstrate an example of detection, localisation, and object category recognition of nuclear mixed-waste materials in highly cluttered and unstructured scenes. This is critical for robotic sorting and handling of legacy nuclear waste, which poses complex environmental remediation challenges in many nuclearised nations.
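
The uncertainty-estimation component can be illustrated in isolation: the sketch below fits a Gaussian Process classifier on stand-in feature embeddings so that predicted class probabilities serve as confidence scores; the synthetic features and the sklearn backend are assumptions, not the paper's pipeline.

```python
# A minimal sketch of the uncertainty-estimation idea: fit a Gaussian Process
# classifier on feature embeddings so that predictions carry confidence
# scores. The synthetic features stand in for real network embeddings.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Stand-in embeddings: two object categories in an 8-D feature space.
X = np.vstack([rng.normal(0.0, 1.0, (50, 8)), rng.normal(2.5, 1.0, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gp.fit(X, y)

# Probabilistic predictions double as per-object confidence scores;
# points far from the training data fall back toward 0.5 (uncertain).
query = np.vstack([rng.normal(0.0, 1.0, (1, 8)), rng.normal(10.0, 1.0, (1, 8))])
print(gp.predict_proba(query))
```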

In real-world data, information is stored in extremely large feature vectors. These variables are typically correlated due to complex interactions involving many features simultaneously. Such correlations qualitatively correspond to semantic roles and are naturally recognized by both the human brain and artificial neural networks. This recognition enables, for instance, the prediction of missing parts of an image or text based on their context. We present a method to detect these correlations in high-dimensional data represented as binary numbers. We estimate the binary intrinsic dimension of a dataset, which quantifies the minimum number of independent coordinates needed to describe the data and is therefore a proxy for semantic complexity. The proposed algorithm is largely insensitive to the so-called curse of dimensionality, and can therefore be used in big data analysis. We test this approach by identifying phase transitions in model magnetic systems, and we then apply it to the detection of semantic correlations in images and text inside deep neural networks.
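
One simple way to make "binary intrinsic dimension" concrete is a moment-matching estimate from pairwise Hamming distances; the sketch below illustrates the idea under a strong assumption (normalized distances behave like averages of $d$ independent Bernoulli comparisons) and is not necessarily the estimator proposed in the paper.

```python
# A hedged sketch of one way to get a binary intrinsic dimension: treat the
# normalized pairwise Hamming distance as an average of d independent
# Bernoulli(q) comparisons, so mean = q and variance = q(1-q)/d, then solve
# for d. An illustration of the idea, not necessarily the paper's estimator.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: 1000 vectors of 256 bits, but only 16 independent
# coordinates (each repeated 16 times), so the intrinsic dimension is ~16.
free = rng.integers(0, 2, (1000, 16))
data = np.repeat(free, 16, axis=1)

# Normalized Hamming distances on a random subsample of distinct pairs.
i, j = rng.integers(0, len(data), (2, 5000))
mask = i != j
f = (data[i[mask]] != data[j[mask]]).mean(axis=1)

q = f.mean()
d_hat = q * (1.0 - q) / f.var()
print("estimated binary intrinsic dimension:", d_hat)
```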

Coupled decompositions are a widely used tool for data fusion. This paper studies the coupled matrix factorization (CMF), where two matrices $X$ and $Y$ are represented in a low-rank format sharing one common factor, as well as the coupled matrix and tensor factorization (CMTF), where a matrix $Y$ and a tensor $\mathcal{X}$ are represented in a low-rank format sharing a factor matrix. We show that these problems are equivalent to the low-rank approximation of the matrix $[X \ Y]$ for CMF and of $[X_{(1)} \ Y]$ for CMTF. Then, in order to speed up the computation, we adapt several randomization techniques, namely randomized SVD, randomized subspace iteration, and randomized block Krylov iteration, to the algorithms for coupled decompositions. We present extensive numerical test results. Furthermore, as a novel application, we apply our randomized algorithms to the face recognition problem with a high success rate.
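
The reduction to a single low-rank approximation makes the randomized approach easy to sketch: below, a rank-$r$ randomized SVD of the stacked matrix $[X \ Y]$ recovers the shared factor of a synthetic CMF instance; the shapes, the rank, and the sklearn routine are illustrative choices.

```python
# A minimal sketch of the CMF-to-low-rank-approximation reduction: stack X
# and Y side by side, compute a rank-r randomized SVD of [X Y]; the left
# factor U is the shared factor, and splitting the right factor recovers the
# two coupled factors.
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.default_rng(0)
m, n1, n2, r = 300, 200, 150, 5

# Synthetic coupled data sharing a common left factor A.
A = rng.standard_normal((m, r))
X = A @ rng.standard_normal((r, n1))
Y = A @ rng.standard_normal((r, n2))

U, S, Vt = randomized_svd(np.hstack([X, Y]), n_components=r, random_state=0)
SVt = S[:, None] * Vt
B, C = SVt[:, :n1].T, SVt[:, n1:].T        # X ~ U @ B.T,  Y ~ U @ C.T

print("relative error X:", np.linalg.norm(X - U @ B.T) / np.linalg.norm(X))
print("relative error Y:", np.linalg.norm(Y - U @ C.T) / np.linalg.norm(Y))
```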

Algorithms for generating random numbers that follow a gamma distribution with shape parameter less than unity are proposed. Acceptance-rejection algorithms are developed based on the generalized exponential distribution. The squeeze technique is applied to our method, and piecewise envelope functions are further considered. The proposed methods achieve excellent acceptance efficiency and promising speed.
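
For context, a classical acceptance-rejection sampler for shape parameters below one looks as follows; this sketch uses the well-known Ahrens-Dieter GS envelope as a baseline, whereas the paper's proposal builds the envelope from generalized exponential distributions instead.

```python
# Acceptance-rejection for Gamma(alpha, 1) with 0 < alpha < 1, using the
# classical Ahrens-Dieter GS envelope: a power density on (0, 1] glued to an
# exponential tail. A baseline for comparison, not the paper's method.
import math
import random

def gamma_lt1(alpha, rng=random):
    """Sample Gamma(alpha, 1) for 0 < alpha < 1 by acceptance-rejection."""
    assert 0.0 < alpha < 1.0
    b = (alpha + math.e) / math.e
    while True:
        p = b * rng.random()
        if p <= 1.0:
            x = p ** (1.0 / alpha)          # body: power-density proposal
            if rng.random() <= math.exp(-x):
                return x
        else:
            x = -math.log((b - p) / alpha)  # tail: exponential proposal
            if rng.random() <= x ** (alpha - 1.0):
                return x

samples = [gamma_lt1(0.3) for _ in range(100_000)]
print("sample mean (should be near alpha = 0.3):", sum(samples) / len(samples))
```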

We show that confidence intervals in a variance component model, with asymptotically correct uniform coverage probability, can be obtained by inverting certain test statistics based on the score for the restricted likelihood. The results apply in settings where the variance is near or at the boundary of the parameter set. Simulations indicate the proposed test statistics are approximately pivotal and lead to confidence intervals with near-nominal coverage even in small samples. We illustrate our method's application in spatially resolved transcriptomics, where we compute approximately 15,000 confidence intervals, used for gene ranking, in less than 4 minutes. In the settings we consider, the proposed method is between 2 and 28,000 times faster than popular alternatives, depending on how many confidence intervals are computed.
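
The test-inversion mechanics can be shown in a deliberately simplified setting: the sketch below inverts the standardized score test for the variance of an i.i.d. normal sample over a grid; the paper instead inverts restricted-likelihood score statistics in a variance component model, which this toy example does not reproduce.

```python
# A simplified sketch of confidence intervals by score-test inversion: for an
# i.i.d. N(0, s) sample, keep every hypothesized variance s that the
# standardized score test does not reject. Because the score is evaluated at
# the hypothesized value rather than the MLE, the construction remains
# well-behaved near the boundary s = 0.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(0.5), size=50)   # true variance 0.5
n, ss = len(x), np.sum(x**2)

grid = np.linspace(1e-4, 2.0, 4000)
# Standardized score for the variance of N(0, s): (ss/s - n) / sqrt(2 n).
score = (ss / grid - n) / np.sqrt(2.0 * n)
keep = np.abs(score) <= norm.ppf(0.975)
print("95%% CI for the variance: [%.3f, %.3f]" % (grid[keep].min(), grid[keep].max()))
```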

This paper considers the problem of reconstructing missing parts of functions based on their observed segments. It provides, for Gaussian processes and arbitrary bijective transformations thereof, theoretical expressions for the $L^2$-optimal reconstruction of the missing parts. These reconstructions are obtained as solutions of explicit integral equations. In the discrete case, approximations of the solutions provide consistent estimates of all missing values of the processes. Rates of convergence of these approximations, under extra assumptions on the transformation function, are provided. In the case of Gaussian processes with a parametric covariance structure, the estimation can be conducted separately for each function, and yields nonlinear solutions in the presence of memory. Simulated examples show that the proposed reconstruction indeed fares better than conventional interpolation methods in various situations.
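
In the plain Gaussian case, the $L^2$-optimal reconstruction is the conditional expectation given the observed segment, which the following sketch computes for an illustrative squared-exponential covariance; the transformed, nonlinear case treated in the paper is not covered here.

```python
# A minimal sketch of the L2-optimal reconstruction in the plain Gaussian
# case: the conditional mean of the missing values given the observed
# segment, computed from a known covariance kernel. The kernel and the
# missing interval are illustrative choices.
import numpy as np

def k(s, t, ell=0.2):
    """Squared-exponential covariance kernel."""
    return np.exp(-((s[:, None] - t[None, :]) ** 2) / (2.0 * ell**2))

t = np.linspace(0.0, 1.0, 200)
missing = (t > 0.4) & (t < 0.6)           # unobserved middle segment
obs = ~missing

rng = np.random.default_rng(0)
path = rng.multivariate_normal(np.zeros(len(t)), k(t, t) + 1e-10 * np.eye(len(t)))

# Conditional mean: E[X_miss | X_obs] = K_mo K_oo^{-1} X_obs.
K_oo = k(t[obs], t[obs]) + 1e-10 * np.eye(obs.sum())
K_mo = k(t[missing], t[obs])
recon = K_mo @ np.linalg.solve(K_oo, path[obs])
print("max reconstruction error:", np.abs(recon - path[missing]).max())
```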

This paper presents a novel generic asymptotic expansion formula of expectations of multidimensional Wiener functionals through a Malliavin calculus technique. The uniform estimate of the asymptotic expansion is shown under a weaker condition on the Malliavin covariance matrix of the target Wiener functional. In particular, the method provides a tractable expansion for the expectation of an irregular functional of the solution to a multidimensional rough differential equation driven by fractional Brownian motion with Hurst index $H<1/2$, without using complicated fractional integral calculus for the singular kernel. In a numerical experiment, our expansion shows a much better approximation for a probability distribution function than its normal approximation, which demonstrates the validity of the proposed method.
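
Schematically, and only to recall the general shape of Watanabe-type expansions (the precise weights and their uniform estimates are the subject of the paper), such an expansion of an expectation of a Wiener functional $F^\epsilon = F_0 + \epsilon F_1 + \cdots$ reads:

```latex
% Schematic form of a Watanabe-type asymptotic expansion; the Malliavin
% weights \pi_k are obtained by integration by parts under a non-degeneracy
% condition on the Malliavin covariance matrix of F^\epsilon.
\mathbb{E}\left[f(F^{\epsilon})\right]
  \sim \mathbb{E}\left[f(F_{0})\right]
  + \sum_{k \ge 1} \epsilon^{k}\, \mathbb{E}\left[f(F_{0})\, \pi_{k}\right],
  \qquad \epsilon \downarrow 0 .
```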

This paper presents a new approach to urban sustainability assessment that uses Large Language Models (LLMs) to automate and standardise the assessment of urban initiatives against the six “sustainability purposes” and twelve “issues” outlined in the ISO 37101 standard. The methodology includes the development of a custom prompt based on the standard's definitions and its application to two different datasets: 527 projects from the Paris Participatory Budget and 398 activities from the PROBONO Horizon 2020 project. The results show the effectiveness of LLMs in quickly and consistently categorising different urban initiatives according to sustainability criteria. The approach is particularly promising for breaking down silos in urban planning by providing a holistic view of the impact of projects. The paper discusses the advantages of this method over traditional human-led assessments, including significant time savings and improved consistency, but also points out the continued importance of human expertise in interpreting results, as well as ethical considerations. We hope this study contributes to the growing body of work on AI applications in urban planning and provides a novel method for operationalising standardised sustainability frameworks in different urban contexts.
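
A minimal sketch of the prompting step is given below; the prompt wording, the model name, and the JSON output format are illustrative assumptions rather than the paper's exact custom prompt, and the OpenAI client is just one possible backend.

```python
# A hedged sketch of the prompting step: ask an LLM to tag one urban
# initiative with ISO 37101 purposes and issues. Prompt wording, model name,
# and output format are illustrative assumptions, not the paper's prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You are an urban sustainability assessor. Using the ISO 37101 framework, "
    "assign the initiative below to the relevant sustainability purposes "
    "(out of the six defined by the standard) and issues (out of the twelve). "
    "Answer as JSON with keys 'purposes' and 'issues'.\n\nInitiative: {text}"
)

def assess(initiative: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": PROMPT.format(text=initiative)}],
    )
    return response.choices[0].message.content

print(assess("Install shared compost bins and a rainwater garden in the 12th arrondissement."))
```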
