
We present a novel class of ambiguity sets for distributionally robust optimization (DRO). These ambiguity sets, called cost-aware ambiguity sets, are defined as halfspaces which depend on the cost function evaluated at an independent estimate of the optimal solution, thus excluding only those distributions that are expected to have significant impact on the obtained worst-case cost. We show that the resulting DRO method provides both a high-confidence upper bound and a consistent estimator of the out-of-sample expected cost, and demonstrate empirically that it results in less conservative solutions compared to divergence-based ambiguity sets.
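To make the construction concrete, here is a minimal sketch in notation assumed for illustration (not taken from the paper): let $\ell(x,\xi)$ denote the cost, $\hat{P}_N$ an empirical reference distribution, $\hat{x}$ an independently obtained estimate of the optimal solution, and $r \ge 0$ a radius chosen to reach the desired confidence level. A cost-aware halfspace ambiguity set can then be written as
$$\mathcal{A} = \big\{ P \,:\, \mathbb{E}_P[\ell(\hat{x},\xi)] \le \mathbb{E}_{\hat{P}_N}[\ell(\hat{x},\xi)] + r \big\},$$
which is a halfspace in the space of distributions because the constraint is linear in $P$; the associated DRO problem is $\min_x \sup_{P \in \mathcal{A}} \mathbb{E}_P[\ell(x,\xi)]$.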

Related Content

In many industrial applications, obtaining labeled observations is not straightforward, as it often requires the intervention of human experts or the use of expensive testing equipment. In these circumstances, active learning can be highly beneficial in suggesting the most informative data points to be used when fitting a model. Reducing the number of observations needed for model development alleviates both the computational burden of training and the operational expenses of labeling. Online active learning, in particular, is useful in high-volume production processes, where the decision about whether to acquire the label for a data point must be made within an extremely short time frame. However, despite recent efforts to develop online active learning strategies, the behavior of these methods in the presence of outliers has not been thoroughly examined. In this work, we investigate the performance of online active linear regression in contaminated data streams. Our study shows that the currently available query strategies are prone to sampling outliers, whose inclusion in the training set eventually degrades the predictive performance of the models. To address this issue, we propose a solution that bounds the search area of a conditional D-optimal algorithm and uses a robust estimator. Our approach strikes a balance between exploring unseen regions of the input space and protecting against outliers. Through numerical simulations, we show that the proposed method is effective in improving the performance of online active learning in the presence of outliers, thus expanding the potential applications of this powerful tool.
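As a rough sketch of the idea (class name, thresholds, the robust-z-score search bound, and the use of scikit-learn's HuberRegressor are illustrative assumptions, not the paper's implementation), a conditional D-optimal query rule with a bounded search area and a robust estimator might look as follows:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

# Hypothetical sketch of an outlier-robust online D-optimal query strategy.
# Names and thresholds are illustrative, not taken from the paper.

class RobustOnlineDOptimal:
    def __init__(self, dim, threshold=2.0, bound=3.0, reg=1e-3):
        self.A = reg * np.eye(dim)      # running information matrix X^T X
        self.threshold = threshold      # leverage threshold for querying a label
        self.bound = bound              # search-area bound (in robust z-scores)
        self.X, self.y = [], []
        self.model = HuberRegressor()   # robust estimator

    def _in_search_area(self, x):
        # Bound the search area: reject points far from the bulk of the data,
        # measured coordinate-wise with median/MAD (robust z-scores).
        if len(self.X) < 10:
            return True
        X = np.asarray(self.X)
        med = np.median(X, axis=0)
        mad = np.median(np.abs(X - med), axis=0) + 1e-12
        return np.all(np.abs(x - med) / (1.4826 * mad) <= self.bound)

    def should_query(self, x):
        # Conditional D-optimality: query if x has high leverage with respect
        # to the current design, i.e. it would substantially grow det(A).
        if not self._in_search_area(x):
            return False
        leverage = x @ np.linalg.solve(self.A, x)
        return leverage > self.threshold

    def update(self, x, y):
        self.A += np.outer(x, x)
        self.X.append(x); self.y.append(y)
        if len(self.y) >= 10:
            self.model.fit(np.asarray(self.X), np.asarray(self.y))
```

The search-area check is what keeps the D-optimal criterion, which by design favors extreme points, from repeatedly selecting contaminated observations.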

If a probability distribution model aims to approximate a hidden mother distribution, it is imperative to establish a useful criterion for the resemblance between the mother and model distributions. This study proposes a criterion that measures the Hellinger distance between discretized (quantized) samples from both distributions. First, unlike information criteria such as AIC, this criterion does not require the probability density function of the model distribution, which cannot be obtained explicitly for a complicated model such as a deep learning machine. Second, it can draw a positive conclusion (i.e., that both distributions are sufficiently close) under a given threshold, whereas a statistical hypothesis test, such as the Kolmogorov-Smirnov test, cannot genuinely lead to a positive conclusion when the null hypothesis is not rejected. In this study, we establish a reasonable threshold for the criterion, deduced from the Bayes error rate, and also present the asymptotic bias of the estimator of the criterion. From these results, a reasonable and easy-to-use criterion is established that can be directly calculated from the two sets of samples.
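A minimal sketch of how such a criterion can be computed from two sample sets, with the shared binning scheme assumed here for illustration:

```python
import numpy as np

def hellinger_from_samples(x, y, bins=50):
    # Discretize (quantize) both samples on a shared grid and compare the
    # resulting histograms; an illustrative stand-in for the paper's criterion.
    lo = min(x.min(), y.min()); hi = max(x.max(), y.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(x, bins=edges)
    q, _ = np.histogram(y, bins=edges)
    p = p / p.sum(); q = q / q.sum()
    # Hellinger distance between the two discrete distributions.
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

rng = np.random.default_rng(0)
print(hellinger_from_samples(rng.normal(size=5000), rng.normal(0.1, 1, 5000)))
```

In practice, the computed distance would be compared against a threshold such as the Bayes-error-rate-based one proposed in the paper.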

We develop the notion of discrete degrees of freedom of a log-concave sequence and use it to prove that the geometric distribution minimises the R\'enyi entropy of order infinity under fixed variance among all discrete log-concave random variables on $\mathbb{Z}$. We also show that the quantity $\mathbb{P}(X=\mathbb{E} X)$ is maximised, among all ultra-log-concave random variables with fixed integral mean, by the Poisson distribution.
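For reference, the standard definitions involved are (stated here for convenience, with the usual conventions assumed): the R\'enyi entropy of order infinity (min-entropy) of a discrete random variable $X$ with probability mass function $(p_k)$ is
$$H_\infty(X) = -\log \sup_{k \in \mathbb{Z}} \mathbb{P}(X = k),$$
the sequence $(p_k)$ is log-concave if $p_k^2 \ge p_{k-1}\,p_{k+1}$ for all $k$, and a random variable on the nonnegative integers is ultra-log-concave if $k\,p_k^2 \ge (k+1)\,p_{k+1}\,p_{k-1}$ for all $k \ge 1$, i.e., its mass function is log-concave relative to that of a Poisson distribution.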

Bayesian optimization is a class of global optimization techniques. In Bayesian optimization, the underlying objective function is modeled as a realization of a Gaussian process. Although the Gaussian process assumption implies that the outputs of Bayesian optimization are random, quantification of this uncertainty is rarely studied in the literature. In this work, we propose a novel approach to assessing the output uncertainty of Bayesian optimization algorithms, which proceeds by constructing confidence regions for the maximum point (or value) of the objective function. These regions can be computed efficiently, and their confidence levels are guaranteed by uniform error bounds for sequential Gaussian process regression newly developed in the present work. Our theory provides a unified uncertainty quantification framework for all existing sequential sampling policies and stopping criteria.
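The following toy sketch illustrates the general principle with an assumed uniform-bound multiplier $\beta$ (the paper derives its own uniform error bounds; this is not its construction): if $|f(x) - \mu(x)| \le \beta\,\sigma(x)$ holds uniformly, then every $x$ whose upper bound reaches the best lower bound may still be the maximizer.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x) + 0.5 * x          # unknown objective (demo only)
X = rng.uniform(0, 3, size=(12, 1))
y = f(X).ravel() + 0.05 * rng.normal(size=12)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=0.05**2).fit(X, y)
grid = np.linspace(0, 3, 400).reshape(-1, 1)
mu, sigma = gp.predict(grid, return_std=True)

beta = 2.0                                     # assumed uniform-bound multiplier
lower, upper = mu - beta * sigma, mu + beta * sigma
# Confidence region for the maximum point: all x whose upper confidence
# bound is at least the best lower confidence bound.
region = grid[upper >= lower.max()].ravel()
print(region.min(), region.max())
```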

We consider multi-agent delegated search without money; to our knowledge, this is the first study of the multi-agent extension of Kleinberg and Kleinberg (EC'18). In our model, given a set of agents, each agent samples a fixed number of solutions and privately sends a signal, e.g., a subset of solutions, to the principal. The principal then selects a final solution based on the agents' signals. Our model captures a variety of real-world scenarios, spanning classical economic applications to modern intelligent systems. In stark contrast to the single-agent setting of Kleinberg and Kleinberg (EC'18), which admits an approximate Bayesian mechanism, we show that there exist efficient approximate prior-independent mechanisms with both information and performance gains, thanks to the competitive tension between the agents. Interestingly, however, the extent of this power varies significantly with the information available to the agents and the degree of correlation between the principal's and the agents' utilities. Technically, we conduct a comprehensive study of the multi-agent delegated search problem and derive several results on the approximation factors of Bayesian/prior-independent mechanisms in complete/incomplete information settings. As a special case of independent interest, we obtain comparative statics regarding the number of agents, which imply the dominance of the multi-agent setting ($n \ge 2$) over the single-agent setting ($n=1$) in terms of the principal's utility. We further extend our problem by considering an examination cost for the mechanism and derive analogous results in the complete information setting.
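The effect of competitive tension can be illustrated with a toy simulation (the sampling model, correlation parameter, and signaling rule below are assumptions made for illustration, not the paper's mechanism):

```python
import numpy as np

# Toy model: each of n agents samples k solutions; every solution has a
# principal utility and a correlated agent utility. Each agent signals its
# own favourite solution; the principal accepts the signaled solution with
# the highest principal utility.

rng = np.random.default_rng(0)

def principal_utility(n_agents, k=5, corr=0.5, trials=10000):
    total = 0.0
    for _ in range(trials):
        signals = []
        for _ in range(n_agents):
            x = rng.random(k)                          # principal utilities
            a = corr * x + (1 - corr) * rng.random(k)  # correlated agent utilities
            signals.append(x[np.argmax(a)])            # agent signals its favourite
        total += max(signals)                          # principal picks the best signal
    return total / trials

for n in (1, 2, 4, 8):
    print(n, round(principal_utility(n), 3))
```

In this toy model the principal's utility grows with $n$, consistent with the dominance of the multi-agent setting over the single-agent one.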

We investigate expansions of Presburger arithmetic, i.e., the theory of the integers with addition and order, with additional structure related to exponentiation: either the base-$2$ exponential function $x \mapsto 2^x$, or a predicate for the powers of $2$. The latter theory, denoted $\mathrm{PresPower}$, was introduced by B\"uchi as a first attempt at characterising the sets of tuples of numbers that can be expressed using finite automata; B\"uchi's method does not give an elementary upper bound, and the complexity of this theory has been open. The former theory, denoted $\mathrm{PresExp}$, was shown decidable by Semenov; while the decision procedure for this theory differs radically from the automata-based method proposed by B\"uchi, it is also non-elementary. In fact, the theory with the power function has a non-elementary lower bound. In this paper, we show that while Semenov's and B\"uchi's approaches yield non-elementary blow-ups for $\mathrm{PresPower}$, the theory is in fact decidable in triply exponential time, similar to the best known quantifier-elimination algorithm for Presburger arithmetic. We also provide an $\mathrm{NExpTime}$ upper bound for the existential fragment of $\mathrm{PresExp}$, a step towards a finer-grained analysis of its complexity. Both results are established by analysing a single parameterized satisfiability algorithm for $\mathrm{PresExp}$, which can be specialized either to $\mathrm{PresPower}$ or to the existential theory of $\mathrm{PresExp}$. Besides the new upper bounds for the existential theory of $\mathrm{PresExp}$ and for $\mathrm{PresPower}$, we believe our algorithm provides new intuition for the decidability of these theories, and for the features that lead to non-elementary blow-ups.
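As a concrete illustration of the two signatures (the example sentences are assumed here, not drawn from the paper): with the powers-of-$2$ predicate $P_2$, the $\mathrm{PresPower}$ sentence
$$\forall x\,\forall y\, \Big( \big(P_2(x) \wedge P_2(y) \wedge x < y \wedge \forall z\,\big(P_2(z) \rightarrow (z \le x \vee y \le z)\big)\big) \rightarrow y = x + x \Big)$$
states that consecutive powers of $2$ differ by a factor of $2$, using only addition, order, and $P_2$; in $\mathrm{PresExp}$ the same fact is directly expressible with the function symbol, as $\forall x\; 2^{x+1} = 2^x + 2^x$.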

Patient-specific brain mesh generation from magnetic resonance imaging (MRI) can be a time-consuming task and may require manual corrections, e.g., for meshing the ventricular system or defining subdomains. To address this issue, we consider an image registration approach. The idea is to use the registration of an input MRI to a respective target in order to obtain a new mesh from a high-quality template mesh. To obtain the transformation, we solve an optimization problem that is constrained by a linear hyperbolic transport equation. We use a higher-order discontinuous Galerkin finite element method for discretization and show that, under a restrictive assumption, the numerical upwind scheme can be derived from the continuous weak formulation of the transport equation. We present a numerical implementation that builds on the established finite element packages FEniCS and dolfin-adjoint. To demonstrate the efficacy of the proposed approach, numerical results for the registration of an input to a target MRI of two distinct individuals are presented. Moreover, it is shown that the registration transforms a manually crafted input mesh into a new mesh for the target subject whilst preserving mesh quality. Challenges posed by the complex cortical folding structure are discussed.
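In generic form (the specific data misfit and regularizer used by the authors may differ; this is a sketch of the problem class), such a transport-constrained registration problem reads
$$\min_{v}\ \frac{1}{2}\int_\Omega \big(I(1,\cdot) - I_{\mathrm{target}}\big)^2\,\mathrm{d}x + \mathcal{R}(v) \quad \text{subject to} \quad \partial_t I + v\cdot\nabla I = 0,\quad I(0,\cdot) = I_{\mathrm{input}},$$
where the velocity field $v$ is the control, the transported image $I(1,\cdot)$ should match the target, and $\mathcal{R}$ regularizes $v$. Automatic differentiation through the discretized transport equation, as provided by dolfin-adjoint, yields the gradient needed for the outer optimization, and the resulting transformation can then be applied to the template mesh vertices.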

When an exposure of interest is confounded by unmeasured factors, an instrumental variable (IV) can be used to identify and estimate certain causal contrasts. Identification of the marginal average treatment effect (ATE) from IVs relies on strong untestable structural assumptions. When one is unwilling to assert such structure, IVs can nonetheless be used to construct bounds on the ATE. Famously, Balke and Pearl (1997) proved tight bounds on the ATE for a binary outcome, in a randomized trial with noncompliance and no covariate information. We demonstrate how these bounds remain useful in observational settings with baseline confounders of the IV, as well as randomized trials with measured baseline covariates. The resulting bounds on the ATE are non-smooth functionals, and thus standard nonparametric efficiency theory is not immediately applicable. To remedy this, we propose (1) under a novel margin condition, influence function-based estimators of the bounds that can attain parametric convergence rates when the nuisance functions are modeled flexibly, and (2) estimators of smooth approximations of these bounds. We propose extensions to continuous outcomes, explore finite sample properties in simulations, and illustrate the proposed estimators in a randomized experiment studying the effects of vaccination encouragement on flu-related hospital visits.
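Because such bounds are pointwise maxima and minima of several regular functionals, a standard smoothing device (whether it is the one adopted in the paper is an assumption made here for illustration) is the log-sum-exp approximation
$$\max(a_1,\dots,a_m) \;\le\; \frac{1}{\lambda}\log\sum_{j=1}^m e^{\lambda a_j} \;\le\; \max(a_1,\dots,a_m) + \frac{\log m}{\lambda},$$
which is smooth in $(a_1,\dots,a_m)$ for any $\lambda > 0$, so that influence-function-based estimation can proceed on the smoothed functional while the approximation error is explicitly controlled by $\lambda$.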

In various stereological problems, an $n$-dimensional convex body is intersected with an $(n-1)$-dimensional isotropic uniformly random (IUR) hyperplane. In this paper, the cumulative distribution function associated with the $(n-1)$-dimensional volume of such a random section is studied. This distribution is also known as the chord length distribution and the cross-section area distribution in the planar and spatial cases, respectively. For various classes of convex bodies, it is shown that these distribution functions are absolutely continuous with respect to the Lebesgue measure. A Monte Carlo simulation scheme is proposed for approximating the corresponding probability density functions.
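For the planar case, a minimal Monte Carlo sketch for the unit disk (chosen here as an illustrative body, not taken from the paper) exploits the fact that an IUR line hitting the disk has signed distance to the centre uniform on $[-1,1]$, with the direction irrelevant by rotational symmetry:

```python
import numpy as np

# Monte Carlo approximation of the chord length density of the unit disk
# under IUR lines: sample the line's signed distance p to the centre
# uniformly on [-1, 1]; the resulting chord has length 2*sqrt(1 - p^2).

rng = np.random.default_rng(0)
p = rng.uniform(-1.0, 1.0, size=200_000)   # line offsets
chords = 2.0 * np.sqrt(1.0 - p**2)         # chord lengths

# Histogram approximation of the probability density of the chord length.
density, edges = np.histogram(chords, bins=100, range=(0.0, 2.0), density=True)
print(edges[density.argmax()])             # mode lies near the diameter, 2
```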

Deep reinforcement learning algorithms can perform poorly in real-world tasks due to the discrepancy between source and target environments. This discrepancy is commonly viewed as a disturbance in the transition dynamics. Many existing algorithms learn robust policies by modeling the disturbance and applying it to source environments during training, which usually requires prior knowledge about the disturbance and control over the simulator. However, these algorithms can fail in scenarios where the disturbance from the target environment is unknown or intractable to model in a simulator. To tackle this problem, we propose a novel model-free actor-critic algorithm -- namely, state-conservative policy optimization (SCPO) -- that learns robust policies without modeling the disturbance in advance. Specifically, SCPO reduces the disturbance in the transition dynamics to a disturbance in the state space and then approximates it by a simple gradient-based regularizer. SCPO is appealing in that it is simple to implement and requires neither additional knowledge about the disturbance nor specially designed simulators. Experiments on several robot control tasks demonstrate that SCPO learns policies that are robust to disturbances in the transition dynamics.
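A minimal sketch of a gradient-based state regularizer in this spirit (the one-step perturbation rule and the penalty below are assumptions for illustration, not the paper's exact SCPO regularizer):

```python
import torch
import torch.nn as nn

# Penalize the change in the policy's output under a small adversarial
# perturbation of the state, found by a single gradient step (FGSM-style).
# This approximates a worst-case disturbance in state space.

def state_regularizer(policy: nn.Module, state: torch.Tensor, eps: float = 0.01):
    state = state.detach().requires_grad_(True)
    action = policy(state)
    # Gradient of the action magnitude w.r.t. the state gives a sensitive direction.
    grad = torch.autograd.grad(action.norm(), state)[0]
    perturbed = (state + eps * grad.sign()).detach()
    # Penalty: how much the policy moves when the state is perturbed.
    return (policy(perturbed) - policy(state.detach())).pow(2).mean()

policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
s = torch.randn(32, 4)
loss = state_regularizer(policy, s)  # added to the actor loss during training
print(loss.item())
```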
