
This work sheds light on the relationship between a distribution's standard deviation and its range, a topic that has been discussed extensively in the literature. While many previous studies have proposed inequalities or relationships that depend on the shape of the population distribution, the approach here is built on a family of bounded probability distributions based on skewing functions. We offer closed-form expressions for the family's moments and its asymptotic behavior as the support's semi-range tends to zero and to $\infty$. We also establish an inequality of which the well-known Popoviciu inequality is a special case. Finally, we provide an example using US dollar prices in four different currencies traded on foreign exchange markets to illustrate the results developed here.
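
As a concrete illustration of the special case recovered by the paper's inequality, the sketch below checks Popoviciu's bound $\mathrm{Var}(X) \le (b-a)^2/4$ on simulated draws from a bounded distribution. The Beta shape parameters and support are arbitrary choices for the demonstration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

a, b = 2.0, 10.0  # support [a, b]; semi-range is (b - a) / 2
samples = a + (b - a) * rng.beta(2.0, 5.0, size=100_000)  # bounded draws

var_hat = samples.var()
popoviciu_bound = (b - a) ** 2 / 4  # Popoviciu: Var(X) <= (b - a)^2 / 4

print(f"sample variance      : {var_hat:.4f}")
print(f"Popoviciu upper bound: {popoviciu_bound:.4f}")
assert var_hat <= popoviciu_bound
```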

Related content

The question of whether $Y$ can be predicted from $X$ often arises, and while a well-adjusted model may perform well on observed data, the risk of overfitting always exists, leading to poor generalization error on unseen data. This paper proposes a rigorous permutation test to assess the credibility of high $R^2$ values in regression models; the test applies to any measure of goodness of fit and requires no sample splitting, generating new pairings of $(X_i, Y_j)$ to provide an overall interpretation of the model's accuracy. It introduces a new formulation of the null hypothesis and a justification for the test, which distinguishes it from previous literature. The theoretical findings are applied to both simulated data and sensor data of tennis serves in an experimental context. The simulation study underscores how the available information affects the test, showing that the less informative the predictors, the lower the probability of rejecting the null hypothesis, and emphasizing that detecting weaker dependence between variables requires a sufficient sample size.
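
A minimal sketch of the general idea, assuming a simple linear model and the classical permute-the-responses scheme (the paper's exact null formulation differs and is not reproduced here): permuting $Y$ breaks any dependence on $X$, so the observed $R^2$ can be compared against the permutation null distribution.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Toy data with weakly informative predictors (assumed for illustration).
n, p = 100, 5
X = rng.normal(size=(n, p))
y = 0.3 * X[:, 0] + rng.normal(size=n)

def r2(X, y):
    return LinearRegression().fit(X, y).score(X, y)

r2_obs = r2(X, y)

# Under the null, y is exchangeable with respect to X, so permuting y
# destroys any dependence while preserving its marginal distribution.
n_perm = 2000
r2_null = np.array([r2(X, rng.permutation(y)) for _ in range(n_perm)])

# One-sided p-value with the +1 correction for a valid permutation test.
p_value = (1 + np.sum(r2_null >= r2_obs)) / (n_perm + 1)
print(f"observed R^2 = {r2_obs:.3f}, permutation p-value = {p_value:.4f}")
```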

Numerical models are widely used for parameter reconstructions in the field of optical nanometrology. To obtain the geometrical parameters of a nanostructured line grating, we fit a finite element numerical model to an experimental data set using the Bayesian target vector optimization method. Gaussian process surrogate models are trained during the reconstruction. Afterwards, we employ a Markov chain Monte Carlo sampler on the surrogate models to determine the full model parameter distribution for the reconstructed model parameters. The choice of numerical discretization parameters, such as the polynomial order of the finite element ansatz functions, impacts the numerical discretization error of the forward model. In this study we investigate the impact of the numerical discretization parameters of the forward problem on the reconstructed parameters as well as on the model parameter distributions. We show that such a convergence study allows one to determine numerical parameters that enable efficient and accurate reconstruction results.
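
The sketch below illustrates the surrogate-plus-MCMC pattern on a toy problem: a hypothetical one-parameter forward model stands in for the finite element solve, a Gaussian process surrogate is trained on a few forward evaluations, and a plain Metropolis-Hastings sampler then explores the posterior over the cheap surrogate. All functions and parameter values are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

# Hypothetical scalar forward model standing in for the FEM solve.
def forward(theta):
    return np.sin(theta) + 0.5 * theta

# Train a GP surrogate on a handful of (expensive) forward evaluations.
theta_train = np.linspace(-2.0, 2.0, 25).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
gp.fit(theta_train, forward(theta_train.ravel()))

# Synthetic measurement with an assumed noise level.
theta_true, sigma = 0.8, 0.05
y_obs = forward(theta_true) + rng.normal(scale=sigma)

def log_post(theta):
    """Gaussian log-likelihood over the surrogate; flat prior on [-2, 2]."""
    if not -2.0 <= theta <= 2.0:
        return -np.inf
    mu = gp.predict(np.array([[theta]]))[0]
    return -0.5 * ((y_obs - mu) / sigma) ** 2

# Plain Metropolis-Hastings over the cheap surrogate.
theta, lp, chain = 0.0, log_post(0.0), []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

chain = np.array(chain[1000:])  # discard burn-in
print(f"posterior mean {chain.mean():.3f} +/- {chain.std():.3f} (true {theta_true})")
```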

Grey-box fuzzing is the lightweight approach of choice for finding bugs in sequential programs. It provides a balance between efficiency and effectiveness by conducting a biased random search over the domain of program inputs using a feedback function from observed test executions. For distributed system testing, however, the state of practice today is represented only by black-box tools that do not attempt to infer and exploit any knowledge of the system's past behaviors to guide the search for bugs. In this work, we present Mallory: the first framework for grey-box fuzz-testing of distributed systems. Unlike popular black-box distributed system fuzzers, such as Jepsen, that search for bugs by randomly injecting network partitions and node faults or by following human-defined schedules, Mallory is adaptive. It employs a novel metric to learn how to maximize the number of observed system behaviors by choosing different sequences of faults, thus increasing the likelihood of finding new bugs. The key enablers for our approach are the new ideas of timeline-driven testing and timeline abstraction, which provide the feedback function guiding a biased random search for failures. Mallory dynamically constructs Lamport timelines of the system's behavior, abstracts these timelines into happens-before summaries, and injects faults guided by its real-time observation of the summaries. We have evaluated Mallory on a diverse set of widely used industrial distributed systems. Compared to the state-of-the-art black-box fuzzer Jepsen, Mallory explores more behaviors and takes less time to find bugs. Mallory discovered 22 zero-day bugs (of which 18 were confirmed by developers), including 10 new vulnerabilities, in rigorously tested distributed systems such as Braft, Dqlite, and Redis. Six new CVEs have been assigned.
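
The sketch below conveys the flavor of timeline abstraction on a toy event log (an assumed format, not Mallory's implementation): Lamport clocks are assigned to node-local and message events, and the concrete timeline is abstracted into a summary of ordered event kinds that a fuzzer could try to enlarge as its coverage signal.

```python
from collections import defaultdict

# Toy event log: (node, kind, message id) in global arrival order.
events = [
    ("n1", "send", "m1"), ("n2", "recv", "m1"),
    ("n2", "send", "m2"), ("n1", "recv", "m2"),
    ("n1", "fault", None),
]

clock = defaultdict(int)  # per-node Lamport clock
send_time = {}            # message id -> sender's clock at send
timeline = []             # (lamport_clock, node, kind)

for node, kind, msg in events:
    if kind == "recv":
        # Lamport rule: receive takes max(local, sender's send clock).
        clock[node] = max(clock[node], send_time[msg])
    clock[node] += 1
    if kind == "send":
        send_time[msg] = clock[node]
    timeline.append((clock[node], node, kind))

# Abstract the concrete timeline into ordered pairs of event *kinds*.
# Lamport clock order over-approximates true happens-before, which is
# acceptable for a coverage-style summary sketch like this one.
summary = {(a_kind, b_kind)
           for (ta, _, a_kind) in timeline
           for (tb, _, b_kind) in timeline if ta < tb}
print(sorted(summary))
```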

Travel time derivatives are introduced as financial derivatives based on road travel times, a non-tradable underlying asset. In the transportation area, they are proposed as a more fundamental approach to value pricing because they base road pricing not only on the level but also on the volatility of travel time; in the financial market, they are proposed as an innovative hedging instrument against market risk, especially after the recent stress in the crypto market and the traditional banking sector. The paper addresses (a) the motivation for introducing such derivatives (that is, the demand for hedging), (b) the potential market, and (c) the product design and pricing schemes. Pricing schemes are designed based on travel time data captured by real-time sensors, which are modeled as Ornstein-Uhlenbeck processes and, more generally, continuous-time autoregressive moving average (CARMA) models. The calibration of such models is conducted via a hidden factor model, which describes the dynamics of the travel time processes. The risk-neutral pricing principle is used to generate the derivative price, with reasonably designed procedures to identify the market price of risk.
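
A minimal sketch of the pricing pipeline under the simplest model mentioned above: simulate travel times as an Ornstein-Uhlenbeck process with Euler-Maruyama steps and price a hypothetical call-style payoff on the terminal travel time by Monte Carlo. All parameters are illustrative, and the market price of risk is assumed to be already absorbed into the simulated dynamics.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ornstein-Uhlenbeck travel-time model (illustrative, not calibrated):
#   dT_t = kappa * (mu - T_t) dt + sigma dW_t
kappa, mu, sigma = 2.0, 30.0, 5.0   # mean reversion, long-run minutes, volatility
T0, horizon, n_steps, n_paths = 35.0, 1.0, 250, 50_000
dt = horizon / n_steps

T = np.full(n_paths, T0)
for _ in range(n_steps):  # Euler-Maruyama discretization
    T += kappa * (mu - T) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

# Hypothetical "travel time call": pays (T_horizon - K)^+, discounted at
# rate r under the assumed risk-neutral dynamics above.
K, r = 32.0, 0.03
payoff = np.maximum(T - K, 0.0)
price = np.exp(-r * horizon) * payoff.mean()
print(f"Monte Carlo price: {price:.3f} "
      f"(std err {payoff.std() / np.sqrt(n_paths):.3f})")
```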

Tukey's depth offers a powerful tool for nonparametric inference and estimation, but it also encounters serious computational and methodological difficulties in modern statistical data analysis. This paper studies how to generalize and compute Tukey-type depths in multiple dimensions. A general framework of influence-driven polished subspace depth, which emphasizes the importance of the underlying influence space and discrepancy measure, is introduced. The new matrix formulation enables us to utilize state-of-the-art optimization techniques to develop scalable algorithms that are easy to implement and have guaranteed fast convergence. In particular, half-space depth as well as regression depth can now be computed much faster than previously possible, as demonstrated by extensive experiments. A companion paper is also offered to the reader in the same issue of this journal.
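
For orientation, the sketch below computes the classical half-space depth that this framework generalizes, using the naive random-direction approximation, which is the kind of slow baseline the optimization-based matrix formulation is designed to beat.

```python
import numpy as np

rng = np.random.default_rng(4)

def halfspace_depth(point, data, n_dirs=5000):
    """Approximate Tukey half-space depth of `point` in `data` by scanning
    random directions u: depth = min over u of the fraction of points x
    with <x - point, u> >= 0 (an upper bound that tightens with n_dirs)."""
    u = rng.normal(size=(n_dirs, data.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    proj = (data - point) @ u.T          # shape (n_points, n_dirs)
    return (proj >= 0).mean(axis=0).min()

data = rng.normal(size=(500, 2))
print(f"depth at center : {halfspace_depth(np.zeros(2), data):.3f}")          # ~0.5
print(f"depth at fringe : {halfspace_depth(np.array([3.0, 3.0]), data):.3f}") # ~0
```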

Enriched Dirichlet process mixture (EDPM) models are Bayesian nonparametric models which can be used for nonparametric regression and conditional density estimation and which overcome a key disadvantage of jointly modeling the response and predictors as a Dirichlet process mixture (DPM) model: when there is a large number of predictors, the clusters induced by the DPM will be overwhelmingly determined by the predictors rather than the response. A truncation approximation to the DPM allows a blocked Gibbs sampling algorithm to be used rather than a Polya urn sampling algorithm; the blocked Gibbs sampler offers potential improvements in mixing, and the truncation approximation also allows for implementation in standard software ($\textit{rjags}$ and $\textit{rstan}$). In this paper we introduce an analogous truncation approximation for an EDPM. We show that with sufficiently large truncation values in the approximation of the EDP prior, a precise approximation to the EDP is available. Using a simulated example, we verify that the truncation approximation and blocked Gibbs sampler with the minimum truncation values that achieve adequate error bounds attain accuracy similar to that obtained with large truncation values. Further, we use the simulated example to show that the blocked Gibbs sampler improves upon the mixing of the Polya urn sampler, especially as the number of covariates increases.
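
The sketch below shows the single-level building block of such truncation approximations: the stick-breaking representation of a DP truncated at $N$ components, where setting the last stick-breaking fraction to one makes the weights sum exactly to one. (The EDP version nests a second level of sticks, which is omitted here.)

```python
import numpy as np

rng = np.random.default_rng(5)

def truncated_dp_weights(alpha, N):
    """Stick-breaking weights of a Dirichlet process truncated at N
    components: V_k ~ Beta(1, alpha) for k < N and V_N = 1, so the
    weights w_k = V_k * prod_{j<k} (1 - V_j) sum exactly to one."""
    V = rng.beta(1.0, alpha, size=N)
    V[-1] = 1.0  # truncation: the last stick takes all remaining mass
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - V[:-1])))
    return V * remaining

alpha, N = 2.0, 25
w = truncated_dp_weights(alpha, N)
print(f"sum of weights        : {w.sum():.6f}")   # exactly 1 by construction
print(f"mass in last component: {w[-1]:.2e}")     # small when N is large enough
```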

Separating environmental effects from those of interspecific interactions on species distributions has always been a central objective of community ecology. Despite years of effort in analysing patterns of species co-occurrence and the development of sophisticated tools, we are still unable to address this major objective. A key reason is that the wealth of ecological knowledge is not sufficiently harnessed in current statistical models, notably the knowledge on interspecific interactions. Here, we develop ELGRIN, a statistical model that simultaneously combines knowledge on interspecific interactions (i.e., the metanetwork), environmental data and species occurrences to tease apart their relative effects on species distributions. Instead of focusing on single effects of pairwise species interactions, which make little sense in complex communities, ELGRIN contrasts the overall effect of species interactions with that of the environment. Using various simulated and empirical data sets, we demonstrate the suitability of ELGRIN for addressing this objective for various types of interspecific interactions, such as mutualism, competition and trophic interactions. We then apply the model to vertebrate trophic networks in the European Alps to map the effect of biotic interactions on species distributions. Data on ecological networks are increasing every day, and we believe the time is ripe to mobilize these data to better understand biodiversity patterns. ELGRIN provides the opportunity to unravel how interspecific interactions actually influence species distributions.

Automated Driving Systems (ADS) have made great achievements in recent years thanks to efforts from both academia and industry. A typical ADS is composed of multiple modules, including sensing, perception, planning and control, which brings together the latest advances in multiple domains. Despite these achievements, safety assurance of such systems is still of great significance, since unsafe behavior of an ADS can bring catastrophic consequences and unacceptable economic and social losses. Testing is an important approach to system validation for deployment in practice; in the context of ADS, it is extremely challenging due to the complexity of the system and its multidisciplinary nature. There has been a great deal of literature that focuses on the testing of ADS, and a number of surveys have emerged to summarize the technical advances. However, most of these surveys focus on system-level testing performed within software simulators and thereby ignore the distinct features of individual modules. In this paper, we provide a comprehensive survey of the existing ADS testing literature that takes into account both module-level and system-level testing. Specifically, we make the following contributions: (1) we build a threat model that reveals the potential safety threats for each module of an ADS; (2) we survey the module-level testing techniques for ADS and highlight the technical differences arising from the properties of the modules; (3) we also survey the system-level testing techniques, focusing on empirical studies that take a bird's-eye view of the system, the problems due to the collaboration between modules, and the gaps between ADS testing in simulators and the real world; (4) we identify the challenges and opportunities in ADS testing to facilitate future research in this field.

Out-of-distribution (OOD) detection is critical to ensuring the reliability and safety of machine learning systems. For instance, in autonomous driving, we would like the driving system to issue an alert and hand over control to humans when it detects unusual scenes or objects that it has never seen before and cannot make a safe decision about. This problem first emerged in 2017 and has since received increasing attention from the research community, leading to a plethora of methods, ranging from classification-based to density-based to distance-based approaches. Meanwhile, several other problems are closely related to OOD detection in terms of motivation and methodology. These include anomaly detection (AD), novelty detection (ND), open set recognition (OSR), and outlier detection (OD). Despite having different definitions and problem settings, these problems often confuse readers and practitioners, and as a result some existing studies misuse terms. In this survey, we first present a generic framework called generalized OOD detection, which encompasses the five aforementioned problems, i.e., AD, ND, OSR, OOD detection, and OD. Under our framework, these five problems can be seen as special cases or sub-tasks, and are easier to distinguish. Then, we conduct a thorough review of each of the five areas by summarizing their recent technical developments. We conclude this survey with open challenges and potential research directions.

Classic machine learning methods are built on the $i.i.d.$ assumption that training and testing data are independent and identically distributed. However, in real scenarios the $i.i.d.$ assumption can hardly be satisfied, causing a sharp drop in the performance of classic machine learning algorithms under distributional shifts and highlighting the significance of investigating the out-of-distribution (OOD) generalization problem, which addresses the challenging setting where the testing distribution is unknown and different from the training distribution. This paper serves as the first effort to systematically and comprehensively discuss the OOD generalization problem, from definition, methodology and evaluation to implications and future directions. Firstly, we provide a formal definition of the OOD generalization problem. Secondly, existing methods are categorized into three parts based on their positions in the whole learning pipeline, namely unsupervised representation learning, supervised model learning, and optimization, and typical methods for each category are discussed in detail. We then demonstrate the theoretical connections among the different categories, and introduce the commonly used datasets and evaluation metrics. Finally, we summarize the literature and raise some future directions for the OOD generalization problem. A summary of the OOD generalization methods reviewed in this survey can be found at //out-of-distribution-generalization.com.
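
As a toy illustration of the performance drop the survey motivates (not an example from the paper): a linear model fit in a region where $\sin(x) \approx x$ generalizes well to i.i.d. test data but fails badly under covariate shift, since the learned relationship was only locally valid.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)

def true_fn(x):
    return np.sin(x)  # nonlinear ground truth

x_train = rng.uniform(-1, 1, size=500)     # training distribution
x_test_iid = rng.uniform(-1, 1, size=500)  # i.i.d. test distribution
x_test_ood = rng.uniform(2, 4, size=500)   # shifted (OOD) test distribution

# Linear fit is locally adequate since sin(x) ~ x near the origin.
model = LinearRegression().fit(x_train[:, None], true_fn(x_train))

for name, x in [("i.i.d.", x_test_iid), ("OOD", x_test_ood)]:
    mse = np.mean((model.predict(x[:, None]) - true_fn(x)) ** 2)
    print(f"{name:6s} test MSE: {mse:.3f}")
```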
