
There is a wide range of applications where the local extrema of a function are the key quantity of interest. However, there is surprisingly little work on methods to infer local extrema with uncertainty quantification in the presence of noise. By viewing the function as an infinite-dimensional nuisance parameter, a semiparametric formulation of this problem poses daunting challenges, both methodologically and theoretically, as (i) the number of local extrema may be unknown, and (ii) the induced shape constraints associated with local extrema are highly irregular. In this article, we build upon a derivative-constrained Gaussian process prior recently proposed by Yu et al. (2022) to derive what we call an encompassing approach that indexes possibly multiple local extrema by a single parameter. We provide a closed-form characterization of the posterior distribution and study its large sample behavior under this unconventional encompassing regime. We show that the posterior measure converges to a mixture of Gaussians with the number of components matching the underlying truth, leading to posterior exploration that accounts for multi-modality. Point and interval estimates of local extrema with frequentist properties are also provided. The encompassing approach leads to a remarkably simple, fast semiparametric approach for inference on local extrema. We illustrate the method through simulations and a real data application to event-related potential analysis.
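A minimal sketch of the core idea, in Python: a Gaussian process fit whose derivative posterior flags candidate local extrema at its zero crossings. The squared-exponential kernel, its hyperparameters, and the test function below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# Sketch: GP posterior over the derivative f'(x); its zero crossings are
# candidate local extrema. Hyperparameters (sig, ell, noise) are
# illustrative assumptions, not fitted values.
def se_kernel(x1, x2, sig=1.0, ell=0.3):
    d = x1[:, None] - x2[None, :]
    return sig**2 * np.exp(-d**2 / (2 * ell**2))

def cross_cov_deriv(xs, x, sig=1.0, ell=0.3):
    # cov(f'(xs), f(x)) = d/dxs k(xs, x) for the SE kernel
    d = xs[:, None] - x[None, :]
    return -se_kernel(xs, x, sig, ell) * d / ell**2

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(40)  # one max, one min

K = se_kernel(x, x) + 0.1**2 * np.eye(40)
xs = np.linspace(0, 1, 400)
mu_deriv = cross_cov_deriv(xs, x) @ np.linalg.solve(K, y)

# Sign changes of the posterior mean derivative flag candidate extrema.
cand = xs[:-1][np.sign(mu_deriv[:-1]) != np.sign(mu_deriv[1:])]
print("candidate extrema near:", np.round(cand, 3))
```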

Related Content

Identifying neural populations is a central problem in neuroscience, as we can now observe the spiking activity of multiple neurons simultaneously with expanding recording capabilities. Although previous work has tried to summarize neural activity within and between known populations by extracting low-dimensional latent factors, in many cases what determines a unique population may be unclear. Neurons differ not only in their anatomical locations but also in their cell types and response properties. To define populations directly in terms of neural activity, we develop a clustering method based on a mixture of dynamic Poisson factor analyzers (mixDPFA), with the number of clusters and the dimension of the latent factors treated as unknown parameters. To fit the proposed mixDPFA model, we develop a Markov chain Monte Carlo (MCMC) algorithm that efficiently samples from its posterior distribution. Validating the proposed MCMC algorithm through simulations, we find that it accurately recovers the unknown parameters and the true clustering in the model, and is insensitive to the initial cluster assignments. We then apply the proposed mixDPFA model to multi-region experimental recordings, where we find that the method can identify novel, reliable clusters of neurons based on their activity, and may thus be a useful tool for neural data analysis.
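To make the model concrete, here is a minimal sketch of the Poisson factor likelihood that drives cluster assignment in a mixDPFA-type model. The factor dimensions, parameter values, and the simple argmax assignment step are illustrative assumptions; the paper's MCMC sampler is more involved.

```python
import numpy as np
from scipy.stats import poisson

# Sketch: each cluster j has latent factors X_j (p x T); a neuron i in
# cluster j fires with rate exp(c_i + a_i @ X_j[:, t]). All names and the
# argmax assignment step are illustrative, not the paper's exact sampler.
def cluster_loglik(y_i, c_i, a_i, X_j):
    """Poisson log-likelihood of one neuron's spike counts under cluster j."""
    lam = np.exp(c_i + a_i @ X_j)            # length-T rate vector
    return poisson.logpmf(y_i, lam).sum()

rng = np.random.default_rng(1)
T, p = 200, 2
X = [rng.standard_normal((p, T)) * 0.5 for _ in range(2)]  # two clusters
a_i, c_i = rng.standard_normal(p) * 0.3, 1.0
y_i = rng.poisson(np.exp(c_i + a_i @ X[0]))  # neuron truly in cluster 0

# A collapsed assignment step would compare these (plus prior terms):
ll = [cluster_loglik(y_i, c_i, a_i, Xj) for Xj in X]
print("log-lik per cluster:", np.round(ll, 1), "-> assign to", int(np.argmax(ll)))
```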

Optimal transport (OT) based data analysis is often faced with the issue that the underlying cost function is (partially) unknown. This paper is concerned with the derivation of distributional limits for the empirical OT value when the cost function and the measures are estimated from data. For statistical inference purposes, but also from the viewpoint of a stability analysis, understanding the fluctuation of such quantities is paramount. Our results find direct application in the problem of goodness-of-fit testing for group families, in machine learning applications where invariant transport costs arise, in the problem of estimating the distance between mixtures of distributions, and for the analysis of empirical sliced OT quantities. The established distributional limits assume either weak convergence of the cost process in uniform norm or that the cost is determined by an optimization of the OT value over a fixed parameter space. For the first setting, we rely on careful lower and upper bounds for the OT value in terms of the measures and the cost in conjunction with a Skorokhod representation. The second setting is based on a functional delta method for the OT value process over the parameter space. The proof techniques might be of independent interest.
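As a concrete illustration, the sketch below computes an empirical OT value between two n-point uniform measures (an assignment problem) under both a "true" and a perturbed, "estimated" cost, the kind of fluctuation the distributional limits quantify. All numbers are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Sketch: OT between two n-point uniform measures reduces to an
# assignment problem. Compare the value under the true cost with the
# value under an estimated (perturbed) cost. Parameters are illustrative.
rng = np.random.default_rng(2)
n = 200
X, Y = rng.standard_normal((n, 2)), rng.standard_normal((n, 2)) + 0.3

def ot_value(cost):
    r, c = linear_sum_assignment(cost)
    return cost[r, c].mean()  # uniform weights 1/n

true_cost = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared Euclidean
est_cost = true_cost + 0.05 * rng.standard_normal((n, n))   # estimated cost

print("OT value, true cost:     ", round(ot_value(true_cost), 4))
print("OT value, estimated cost:", round(ot_value(est_cost), 4))
```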

Controllers for dynamical systems that operate in safety-critical settings must account for stochastic disturbances. Such disturbances are often modeled as process noise in a dynamical system, and common assumptions are that the underlying distributions are known and/or Gaussian. In practice, however, these assumptions may be unrealistic and can lead to poor approximations of the true noise distribution. We present a novel controller synthesis method that does not rely on any explicit representation of the noise distributions. In particular, we address the problem of computing a controller that provides probabilistic guarantees on safely reaching a target, while also avoiding unsafe regions of the state space. First, we abstract the continuous control system into a finite-state model that captures noise by probabilistic transitions between discrete states. As a key contribution, we adapt tools from the scenario approach to compute probably approximately correct (PAC) bounds on these transition probabilities, based on a finite number of samples of the noise. We capture these bounds in the transition probability intervals of a so-called interval Markov decision process (iMDP). This iMDP is, with a user-specified confidence probability, robust against uncertainty in the transition probabilities, and the tightness of the probability intervals can be controlled through the number of samples. We use state-of-the-art verification techniques to provide guarantees on the iMDP and compute a controller for which these guarantees carry over to the original control system. In addition, we develop a tailored computational scheme that reduces the complexity of the synthesis of these guarantees on the iMDP. Benchmarks on realistic control systems show the practical applicability of our method, even when the iMDP has hundreds of millions of transitions.
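A minimal sketch of the sampling step: interval bounds on one transition probability from N noise samples. For simplicity it uses Clopper-Pearson (beta) intervals as a stand-in for the scenario-approach bounds in the paper; the dynamics and target region below are illustrative.

```python
import numpy as np
from scipy.stats import beta

# Sketch: sample-based interval on a transition probability for an iMDP.
# Clopper-Pearson (beta) intervals are used here as a stand-in for the
# scenario-approach bounds; the role in the abstraction is the same.
def prob_interval(k, N, conf=1e-3):
    """Interval for a success probability from k hits in N samples."""
    lo = beta.ppf(conf / 2, k, N - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - conf / 2, k + 1, N - k) if k < N else 1.0
    return lo, hi

rng = np.random.default_rng(3)
N = 1000
next_states = 1.0 + 0.5 * rng.standard_normal(N)   # x' = f(x,u) + noise
k = int(np.sum((0.8 <= next_states) & (next_states <= 1.2)))  # hits in region

lo, hi = prob_interval(k, N)
print(f"transition probability in [{lo:.3f}, {hi:.3f}] with high confidence")
```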

We study the problem of selecting the best $m$ units from a set of $n$ as $m / n \to \alpha \in (0, 1)$, where noisy, heteroskedastic measurements of the units' true values are available and the decision-maker wishes to maximize the average true value of the units selected. Given a parametric prior distribution, the empirical Bayes decision rule incurs $O_p(n^{-1})$ regret relative to the Bayesian oracle that knows the true prior. More generally, if the error in the estimated prior is of order $O_p(r_n)$, regret is $O_p(r_n^2)$. In this sense selecting the best units is easier than estimating their values. We show this regret bound is sharp in the parametric case, by giving an example in which it is attained. Using priors calibrated from a dataset of over four thousand internet experiments, we find that empirical Bayes methods perform well in practice for detecting the best treatments given only a modest number of experiments.
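A minimal sketch of the selection rule under a normal prior with known heteroskedastic noise: estimate the prior by the method of moments, shrink each measurement toward the estimated prior mean, and select the top $m$ posterior means. All numbers are illustrative.

```python
import numpy as np

# Sketch: empirical Bayes selection of the top m of n units. True values
# theta_i ~ N(mu, tau^2); measurements have known noise levels s_i.
rng = np.random.default_rng(4)
n, m = 5000, 500
theta = rng.normal(0.0, 1.0, n)              # true values
s = rng.uniform(0.5, 2.0, n)                 # known heteroskedastic noise
x = theta + s * rng.standard_normal(n)       # noisy measurements

mu_hat = x.mean()
tau2_hat = max(x.var() - (s**2).mean(), 1e-8)  # moment estimate of tau^2

# Posterior means shrink noisier measurements harder.
post_mean = mu_hat + tau2_hat / (tau2_hat + s**2) * (x - mu_hat)
top = np.argsort(post_mean)[-m:]
naive = np.argsort(x)[-m:]
print("avg true value, EB selection:   ", theta[top].mean().round(3))
print("avg true value, naive selection:", theta[naive].mean().round(3))
```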

Conventional methods for extreme event estimation rely on well-chosen parametric models asymptotically justified by extreme value theory (EVT). These methods, while powerful and theoretically grounded, can encounter a difficult bias-variance tradeoff that is exacerbated when the data size is small, deteriorating the reliability of the tail estimation. In this paper, we study a framework based on the recently surging literature on distributionally robust optimization. This approach can be viewed as a nonparametric alternative to conventional EVT: it imposes general shape beliefs on the tail instead of parametric assumptions and uses worst-case optimization to handle the nonparametric uncertainty. We explain how this approach bypasses the bias-variance tradeoff in EVT. In exchange, we face a conservativeness-variance tradeoff, and we describe how to tackle it. We also demonstrate computational tools for the optimization problems involved and compare our performance with conventional EVT across a range of numerical examples.
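As a concrete, simplified instance of the worst-case idea: with only mean and variance constraints, the worst-case tail probability over all distributions is Cantelli's bound, shown below. The paper's formulation adds tail shape constraints, which tighten such bounds; the data here are illustrative.

```python
import numpy as np

# Sketch: a moment-based worst-case tail bound in the DRO spirit. With
# only mean and variance constraints, the supremum over all laws is
# Cantelli's bound; richer shape constraints tighten it considerably.
rng = np.random.default_rng(5)
data = rng.lognormal(0.0, 1.0, 300)          # small heavy-tailed sample
mu, var = data.mean(), data.var(ddof=1)

def worst_case_tail(t, mu, var):
    """sup P(X >= t) over all laws with the given mean and variance (t > mu)."""
    return var / (var + (t - mu) ** 2)

t = 15.0
print("empirical tail frequency:   ", np.mean(data >= t).round(4))
print("worst-case (Cantelli) bound:", round(worst_case_tail(t, mu, var), 4))
```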

This article examines whether a multivariate distribution differs from a specified distribution, and it also tests the equality of two multivariate distributions. In the course of this study, we propose a graphical toolkit based on the well-known halfspace depth, which yields a two-dimensional plot regardless of the dimension of the data and is useful even for comparing high-dimensional distributions. The simple interpretability of the proposed graphical toolkit motivates us to formulate test statistics for the corresponding hypothesis testing problems. We establish that the proposed tests are consistent, and we derive the asymptotic distributions of the test statistics under contiguous alternatives, which enables us to compute the asymptotic power of these tests. Furthermore, the computations associated with the proposed tests are inexpensive. These tests also perform better than many other tests available in the literature when data are generated from various distributions, such as heavy-tailed distributions, which indicates that the proposed methodology is robust as well. Finally, the usefulness of the proposed graphical toolkit and tests is illustrated on two benchmark real data sets.
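A minimal sketch of the two ingredients: an approximate halfspace depth computed by minimizing over random projection directions, and the paired depth values behind a DD-plot style comparison of two samples. Sample sizes and the number of directions are illustrative.

```python
import numpy as np

# Sketch: approximate halfspace (Tukey) depth by minimizing, over random
# directions u, the fraction of the sample on x's upper side.
rng = np.random.default_rng(6)

def halfspace_depth(x, sample, n_dir=500, rng=rng):
    d = sample.shape[1]
    u = rng.standard_normal((n_dir, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    frac = (u @ sample.T >= (u @ x)[:, None]).mean(axis=1)
    return frac.min()

X = rng.standard_normal((300, 5))            # sample from F
Y = rng.standard_normal((300, 5)) * 1.5      # sample from G (heavier spread)

pooled = np.vstack([X, Y])
dd = np.array([[halfspace_depth(z, X), halfspace_depth(z, Y)] for z in pooled])
# Under H0: F = G, the DD-plot points concentrate near the 45-degree line.
print("mean |depth_F - depth_G|:", np.abs(dd[:, 0] - dd[:, 1]).mean().round(4))
```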

The classical house allocation problem involves assigning $n$ houses (or items) to $n$ agents according to their preferences. A key criterion in such problems is satisfying some fairness constraints such as envy-freeness. We consider a generalization of this problem wherein the agents are placed along the vertices of a graph (corresponding to a social network), and each agent can only experience envy towards its neighbors. Our goal is to minimize the aggregate envy among the agents as a natural fairness objective, i.e., the sum of all pairwise envy values over all edges in the social graph. When agents have identical and evenly-spaced valuations, our problem reduces to the well-studied minimum linear arrangement problem. For identical valuations with possibly uneven spacing, we show a number of deep and surprising ways in which our setting departs from this classical problem. More broadly, we contribute several structural and computational results for various classes of graphs, including NP-hardness results for disjoint unions of paths, cycles, stars, or cliques, and fixed-parameter tractable (and, in some cases, polynomial-time) algorithms for paths, cycles, stars, cliques, and their disjoint unions. Additionally, a conceptual contribution of our work is the formulation of a structural property for disconnected graphs that we call separability, which yields efficient parameterized algorithms for finding optimal allocations.
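To fix ideas, the sketch below evaluates the aggregate-envy objective for identical valuations on a small path graph; the graph, valuations, and allocation are illustrative.

```python
import numpy as np

# Sketch: aggregate envy of a house allocation on a social graph. With
# identical valuations v over houses, agent i envies neighbor j by
# max(v[house[j]] - v[house[i]], 0); the objective sums envy in both
# directions over every edge.
def aggregate_envy(edges, house, v):
    total = 0.0
    for i, j in edges:
        total += max(v[house[j]] - v[house[i]], 0.0)
        total += max(v[house[i]] - v[house[j]], 0.0)
    return total

# A path on 4 agents with unevenly spaced identical valuations.
edges = [(0, 1), (1, 2), (2, 3)]
v = np.array([0.0, 1.0, 1.5, 5.0])   # uneven spacing
house = [0, 1, 2, 3]                 # allocation: agent i gets house i
print("aggregate envy:", aggregate_envy(edges, house, v))
```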

Variational inference uses optimization, rather than integration, to approximate the marginal likelihood, and thereby the posterior, in a Bayesian model. Thanks to advances in computational scalability made in the last decade, variational inference is now the preferred choice for many high-dimensional models and large datasets. This tutorial introduces variational inference from the parametric perspective that dominates these recent developments, in contrast to the mean-field perspective commonly found in other introductory texts.
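A minimal sketch of the parametric perspective: stochastic gradient ascent on the ELBO with the reparameterization trick, fitting a Gaussian $q$ to a Gaussian target whose normalizer is treated as unknown. The target, step size, and sample counts are illustrative.

```python
import numpy as np

# Sketch: parametric VI via stochastic gradient ascent on the ELBO with
# the reparameterization trick. Target: unnormalized log-density of
# N(3, 0.5^2); variational family: q = N(m, s^2).
rng = np.random.default_rng(7)
def dlogp(x):                       # d/dx log p(x) for the N(3, 0.5^2) target
    return -(x - 3.0) / 0.25

m, log_s = 0.0, 0.0                 # initialize q = N(0, 1)
lr = 0.01
for _ in range(2000):
    eps = rng.standard_normal(64)
    s = np.exp(log_s)
    x = m + s * eps                 # reparameterized samples from q
    g = dlogp(x)
    grad_m = g.mean()                           # d ELBO / d m
    grad_log_s = (g * eps).mean() * s + 1.0     # d ELBO / d log s (+ entropy)
    m += lr * grad_m
    log_s += lr * grad_log_s

print(f"q approx N({m:.3f}, {np.exp(log_s):.3f}^2)  (target N(3, 0.5^2))")
```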

We study the Bayesian inverse problem of inferring the log-normal slowness function of the eikonal equation, given noisy observation data on its solution at a set of spatial points. We approximate the posterior probability measure by solving the truncated eikonal equation, which retains only a finite number of terms in the Karhunen-Loève expansion of the slowness function, with the Fast Marching Method. The error of this approximation in the Hellinger metric is deduced in terms of the truncation level of the slowness and the grid size of the Fast Marching Method resolution. It is well known that the plain Markov Chain Monte Carlo procedure for sampling the posterior probability is highly expensive. We develop, and justify the convergence of, a Multilevel Markov Chain Monte Carlo method. Using the heap sort procedure in solving the forward eikonal equation by the Fast Marching Method, our Multilevel Markov Chain Monte Carlo method achieves a prescribed level of accuracy for approximating the posterior expectation of quantities of interest at an essentially optimal level of complexity. Numerical examples confirm the theoretical results.
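A minimal sketch of the forward solver: the Fast Marching Method on a 2-D grid, using a heap to accept grid points in increasing travel-time order and the upwind quadratic update for the eikonal equation. The grid size and constant slowness field are illustrative.

```python
import heapq
import numpy as np

# Sketch: Fast Marching Method for |grad T| = s(x) on a 2-D grid, with a
# heap accepting points in increasing travel-time order.
def fast_marching(slowness, src, h=1.0):
    n, m = slowness.shape
    T = np.full((n, m), np.inf)
    T[src] = 0.0
    done = np.zeros((n, m), bool)
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if done[i, j]:
            continue
        done[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if not (0 <= a < n and 0 <= b < m) or done[a, b]:
                continue
            # Upwind neighbor values along each axis.
            tx = min(T[a - 1, b] if a > 0 else np.inf,
                     T[a + 1, b] if a < n - 1 else np.inf)
            ty = min(T[a, b - 1] if b > 0 else np.inf,
                     T[a, b + 1] if b < m - 1 else np.inf)
            f = h * slowness[a, b]
            if abs(tx - ty) >= f:          # one-sided update
                t_new = min(tx, ty) + f
            else:                          # two-sided quadratic update
                t_new = 0.5 * (tx + ty + np.sqrt(2 * f**2 - (tx - ty)**2))
            if t_new < T[a, b]:
                T[a, b] = t_new
                heapq.heappush(heap, (t_new, (a, b)))
    return T

T = fast_marching(np.ones((50, 50)), (0, 0))
print("travel time to far corner:", round(T[-1, -1], 2))  # ~ Euclidean distance
```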

We set up a formal framework to characterize encompassing of nonparametric models through the $L_2$ distance. We contrast it with previous literature on the comparison of nonparametric regression models. We then develop testing procedures for the encompassing hypothesis that are fully nonparametric. Our test statistics depend on kernel regression, raising the issue of bandwidth choice. We investigate two alternative approaches to obtain a "small bias property" for our test statistics. We show the validity of a wild bootstrap method. We empirically study the use of a data-driven bandwidth and illustrate the attractive features of our tests for small and moderate samples.
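A minimal sketch of the building blocks: a Nadaraya-Watson kernel regression and a wild-bootstrap loop with Rademacher weights. The $L_2$-type statistic comparing two fits, the bandwidth, and the data-generating process are illustrative stand-ins, not the paper's exact test.

```python
import numpy as np

# Sketch: Nadaraya-Watson kernel regression plus a wild bootstrap of its
# residuals. The statistic is an illustrative L2-type distance between
# fits on two candidate regressors.
rng = np.random.default_rng(8)

def nw_fit(x_grid, x, y, h):
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(1) / w.sum(1)

n = 300
x1, x2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x1) + 0.3 * rng.standard_normal(n)

grid = np.linspace(0.05, 0.95, 50)
m1 = nw_fit(grid, x1, y, h=0.1)       # fit on the encompassing regressor
m2 = nw_fit(grid, x2, y, h=0.1)       # fit on the candidate regressor
stat = np.mean((m1 - m2) ** 2)        # L2-type statistic

fitted = nw_fit(x1, x1, y, h=0.1)
resid = y - fitted
boot = []
for _ in range(200):
    eta = rng.choice([-1.0, 1.0], n)  # Rademacher wild-bootstrap weights
    y_b = fitted + eta * resid        # resample under the fitted model
    boot.append(np.mean((nw_fit(grid, x1, y_b, 0.1)
                         - nw_fit(grid, x2, y_b, 0.1)) ** 2))
print("stat:", round(stat, 4), " bootstrap p-value:",
      np.mean(np.array(boot) >= stat))
```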
