
In this work we study the fundamental limits of approximate recovery in the context of group testing. One of the most well-known, theoretically optimal, and easy-to-implement testing procedures is non-adaptive Bernoulli group testing, where all tests are conducted in parallel and each item is included in each test independently with some fixed probability. In this setting, there is an observed gap between the number of tests above which recovery is information-theoretically (IT) possible and the number of tests required by the best currently known efficient algorithms to succeed. Oftentimes such gaps are explained by a phase transition in the landscape of the solution space of the problem (an Overlap Gap Property phase transition). In this paper we seek to understand whether such a phenomenon takes place for Bernoulli group testing as well. Our main contributions are the following: (1) we provide first-moment evidence that, perhaps surprisingly, such a phase transition does not take place throughout the regime for which recovery is IT possible, which suggests that the model is in fact amenable to local search algorithms; (2) we prove the complete absence of "bad" local minima for a part of the "hard" regime, which implies an improvement over known theoretical results on the performance of efficient algorithms for approximate recovery without false negatives; and (3) we present extensive simulations that strongly suggest that a very simple local algorithm known as Glauber dynamics does indeed succeed and can be used to efficiently implement the well-known (theoretically optimal) Smallest Satisfying Set (SSS) estimator.
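
The sketch below is a minimal, illustrative implementation of the setting and of a Glauber-dynamics local search for a small satisfying set: a Bernoulli test design is drawn, noiseless OR outcomes are observed, and a penalised objective (set size plus a penalty for violated tests) is sampled with single-site Glauber updates. The sizes, design probability, penalty weight lam, and inverse temperature beta are assumptions made for illustration, not values from the paper, and a fixed temperature stands in for any annealing schedule one might actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not taken from the paper)
n, k, T = 200, 8, 100                       # items, defectives, tests
p = np.log(2) / k                           # Bernoulli design probability (a common choice)

defective = np.zeros(n, dtype=int)
defective[rng.choice(n, size=k, replace=False)] = 1

A = (rng.random((T, n)) < p).astype(int)    # which items take part in which tests
y = (A @ defective > 0).astype(int)         # noiseless OR outcomes

def energy(x, lam=5.0):
    """Candidate-set size plus a penalty for every test whose outcome it violates."""
    y_hat = (A @ x > 0).astype(int)
    return x.sum() + lam * np.sum(y_hat != y)

# Glauber dynamics on the inclusion vector x, favouring small satisfying sets
x = np.zeros(n, dtype=int)
beta = 2.0                                  # inverse temperature (assumed)
for sweep in range(60):
    for i in rng.permutation(n):
        x1, x0 = x.copy(), x.copy()
        x1[i], x0[i] = 1, 0
        d = energy(x1) - energy(x0)         # energy change of setting x[i] = 1
        x[i] = int(rng.random() < 1.0 / (1.0 + np.exp(np.clip(beta * d, -50, 50))))

print("recovered:", np.flatnonzero(x))
print("truth    :", np.flatnonzero(defective))
```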

Related content

The change-plane Cox model is a popular tool for the subgroup analysis of survival data. Despite the rich literature on this model, there has been limited investigation into the asymptotic properties of the estimators of the finite-dimensional parameters. In particular, the convergence rate, let alone the asymptotic distribution, remains an open problem for the general model in which classification is based on multiple covariates. To bridge this theoretical gap, this study proposes a maximum smoothed partial likelihood estimator and establishes the following asymptotic properties. First, it shows that the convergence rate for the classification parameter can be arbitrarily close to $n^{-1}$ up to a logarithmic factor, depending on the choice of tuning parameter. Second, it establishes asymptotic normality for the regression parameter.
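
For orientation, one common change-plane Cox specification (the notation here is illustrative and not taken verbatim from the abstract) has hazard $\lambda(t \mid X, Z, W) = \lambda_0(t)\exp\{\beta^\top X + (\alpha^\top Z)\,\mathbf{1}\{\gamma^\top W > 0\}\}$, and the smoothing replaces the indicator by a smooth surrogate $K(\gamma^\top W / h_n)$ (e.g., the standard normal CDF) with bandwidth $h_n \to 0$, which plays the role of the tuning parameter governing the attainable rate. The smoothed partial log-likelihood then reads

$$\ell_n(\beta,\alpha,\gamma) \;=\; \sum_{i:\,\delta_i=1}\Big[\eta_i \;-\; \log\!\sum_{j:\,t_j \ge t_i}\exp(\eta_j)\Big], \qquad \eta_i \;=\; \beta^\top X_i + (\alpha^\top Z_i)\,K\!\big(\gamma^\top W_i/h_n\big).$$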

For nearly three decades, spatial games have produced a wealth of insights into the study of behavior and its relation to population structure. However, as different rules and factors are added or altered, the dynamics of spatial models often become increasingly complicated to interpret. To tackle this problem, we introduce persistent homology as a rigorous framework that can be used to both define and compute higher-order features of data in a manner that is invariant to parameter choices, robust to noise, and independent of human observation. Our work demonstrates its relevance for spatial games by showing how topological features of simulation data that persist over different spatial scales reflect the stability of strategies in 2D lattice games. To do so, we analyze the persistent homology of scenarios from two games: a Prisoner's Dilemma and an SIRS epidemic model. The experimental results show how the method accurately detects features that correspond to real aspects of the game dynamics. Unlike other tools that study the dynamics of spatial systems, persistent homology can tell us something meaningful about population structure while remaining neutral about the underlying structure itself. Regardless of game complexity, since strategies either succeed or fail to conform to shapes of a certain topology, there is much potential for the method to provide novel insights for a wide variety of spatially extended systems in biology, social science, and physics.
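
As a toy illustration of the kind of simulation data involved (not the pipeline of this paper), the sketch below runs synchronous best-neighbour imitation dynamics for a lattice Prisoner's Dilemma and extracts the cooperator coordinates as a 2D point cloud; the lattice size, temptation parameter b, and number of rounds are arbitrary choices. Such a point cloud is the sort of input one would pass to a persistent-homology package (e.g., ripser or GUDHI) to compute persistence diagrams.

```python
import numpy as np

rng = np.random.default_rng(1)
L, b = 50, 1.6                          # lattice size and defection temptation (illustrative)
coop = rng.random((L, L)) < 0.5         # True = cooperator, False = defector

def payoffs(c):
    """Total payoff of each site against its four neighbours in a weak
    Prisoner's Dilemma: C vs C -> 1, D vs C -> b, anything vs D -> 0."""
    pay = np.zeros(c.shape, dtype=float)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(c, shift, axis=(0, 1))
        pay += np.where(c, nb * 1.0, nb * b)    # payoff collected from this neighbour
    return pay

for _ in range(50):                     # synchronous imitation of the best-scoring neighbour
    pay = payoffs(coop)
    best_pay, best_strat = pay.copy(), coop.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb_pay = np.roll(pay, shift, axis=(0, 1))
        nb_strat = np.roll(coop, shift, axis=(0, 1))
        better = nb_pay > best_pay
        best_pay = np.where(better, nb_pay, best_pay)
        best_strat = np.where(better, nb_strat, best_strat)
    coop = best_strat

points = np.argwhere(coop)              # cooperator coordinates as a 2D point cloud
print(points.shape[0], "cooperators; pass `points` to a persistence package (not shown)")
```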

Unlike many other metaheuristics, Differential Evolution (DE) employs a crossover operation that filters which variables are mutated, and this contributes to its successful application to a variety of complicated optimization problems. However, the underlying working principles of the crossover operation are not yet fully understood. In this paper, we try to reveal the influence of the binomial crossover by performing a theoretical comparison between the $(1+1)EA$ and its variants, the $(1+1)EA_{C}$ and the $(1+1)EA_{CM}$. In general, introducing the binomial crossover enhances the exploration ability while degrading the exploitation ability and, under some conditions, leads to dominance of the transition matrix for binary optimization problems. As a result, both the $(1+1)EA_{C}$ and the $(1+1)EA_{CM}$ outperform the $(1+1)EA$ on the unimodal OneMax problem, but do not always dominate it on the Deceptive problem. Finally, we analyze exploration by investigating the probabilities of transitioning from non-optimal states to the optimal state of the Deceptive problem; inspired by this analysis, we propose adaptive strategies to improve the ability of global exploration. This suggests that incorporating the binomial crossover is a feasible strategy for improving the performance of randomized search heuristics.
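
A minimal sketch of the comparison on OneMax is given below: a plain $(1+1)$ EA with standard bit mutation versus a variant in which the offspring is additionally filtered through binomial crossover with the parent before elitist selection. The crossover rate cr, the problem size, and the exact way the variant is defined here are illustrative assumptions that only approximate the $(1+1)EA_{C}$ studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def onemax(x):
    return x.sum()

def run(n=100, use_crossover=False, cr=0.5, max_iters=100_000):
    """(1+1) EA with standard bit mutation (rate 1/n); optionally the mutant is
    recombined with the parent by binomial crossover before elitist selection."""
    x = rng.integers(0, 2, size=n)
    fx = onemax(x)
    for t in range(1, max_iters + 1):
        y = x ^ (rng.random(n) < 1.0 / n)      # standard bit mutation
        if use_crossover:                      # binomial crossover filter
            mask = rng.random(n) < cr
            mask[rng.integers(n)] = True       # guarantee at least one mutant coordinate
            y = np.where(mask, y, x)
        fy = onemax(y)
        if fy >= fx:                           # elitist selection
            x, fx = y, fy
        if fx == n:
            return t
    return max_iters

print("plain (1+1) EA        :", run(use_crossover=False), "iterations to optimum")
print("with binomial crossover:", run(use_crossover=True), "iterations to optimum")
```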

Evaluation of treatment effects and more general estimands is typically achieved via parametric modelling, which is unsatisfactory since model misspecification is likely. Data-adaptive model building (e.g., statistical/machine learning) is commonly employed to reduce the risk of misspecification. Naive use of such methods, however, delivers estimators whose bias may shrink too slowly with sample size for inferential methods to perform well, including those based on the bootstrap. Bias arises because standard data-adaptive methods are tuned towards minimal prediction error rather than, e.g., minimal MSE of the estimator. This may cause excess variability that is difficult to acknowledge, owing to the complexity of such strategies. Building on results from non-parametric statistics, targeted learning and debiased machine learning overcome these problems by constructing estimators using the estimand's efficient influence function under the non-parametric model. These increasingly popular methodologies typically assume that the efficient influence function is given, or that the reader is familiar with its derivation. In this paper, we focus on the derivation of the efficient influence function and explain how it may be used to construct statistical/machine-learning-based estimators. We discuss the requisite conditions for these estimators to perform well and use diverse examples to convey the broad applicability of the theory.
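
As one standard, concrete instance (the abstract itself does not single out a particular estimand), the average treatment effect under no unmeasured confounding has a well-known efficient influence function, and plugging estimated nuisances into it yields the augmented IPW (doubly robust) estimator. The sketch below, on simulated data and with off-the-shelf learners, is illustrative only; in practice one would add cross-fitting and more flexible nuisance models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(3)

# Simulated observational data (illustrative data-generating process, true ATE = 1)
n = 5000
X = rng.normal(size=(n, 3))
ps_true = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
A = rng.binomial(1, ps_true)
Y = 1.0 * A + X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)

# Nuisance estimates: propensity score and outcome regressions
ps = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)

# Efficient influence function of the ATE (up to centring):
#   phi = mu1 - mu0 + A (Y - mu1) / ps - (1 - A)(Y - mu0) / (1 - ps)
phi = mu1 - mu0 + A * (Y - mu1) / ps - (1 - A) * (Y - mu0) / (1 - ps)
ate = phi.mean()
se = phi.std(ddof=1) / np.sqrt(n)
print(f"AIPW estimate of the ATE: {ate:.3f} (SE {se:.3f})")
```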

Publication bias is a major concern in conducting systematic reviews and meta-analyses. Various sensitivity-analysis and bias-correction methods based on selection models have been developed, and they have some advantages over the widely used trim-and-fill method. However, likelihood methods based on selection models may have difficulty obtaining precise estimates and reasonable confidence intervals, or may require a complicated sensitivity-analysis process. In this paper, we develop a simple publication-bias adjustment method that utilizes information on conducted but still unpublished trials from clinical trial registries. We introduce an estimating equation for parameter estimation in the selection function by regarding the publication-bias issue as a missing-data problem with data missing not at random. With the estimated selection function, we introduce the inverse probability weighting (IPW) method to estimate the overall mean across studies. Furthermore, IPW versions of heterogeneity measures such as the between-study variance and the $I^2$ measure are proposed. We propose methods to construct asymptotic confidence intervals and suggest intervals based on parametric bootstrapping as an alternative. Through numerical experiments, we observed that the estimators successfully eliminate bias and that the confidence intervals have empirical coverage probabilities close to the nominal level. On the other hand, the asymptotic confidence interval is much wider in some scenarios than the bootstrap confidence interval; the latter is therefore recommended for practical use.
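
The sketch below is a deliberately simplified illustration of the IPW step, not the paper's estimating-equation procedure: publication is assumed to follow a logistic selection function of each study's z-statistic, its single free parameter is calibrated so that the inverse-probability-weighted count of published trials matches the registry total, and the overall mean is then re-estimated with weights $1/\hat{\pi}$. The selection-function form, its fixed offset, and the simulation parameters are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated meta-analysis with publication bias (illustrative)
N_registered = 80                                   # trials known from a registry
theta = rng.normal(0.3, 0.2, N_registered)          # true study effects (mean 0.3)
se = rng.uniform(0.1, 0.3, N_registered)
est = rng.normal(theta, se)                         # observed effect estimates
z = est / se

def pub_prob(z, a):
    """Assumed selection function: larger z-statistics are more likely published."""
    return 1 / (1 + np.exp(-a * (z - 1.0)))

published = rng.random(N_registered) < pub_prob(z, a=3.0)
est_pub, z_pub = est[published], z[published]

# Calibrate the selection parameter so the weighted count of published trials
# matches the registry total (a crude stand-in for the paper's estimating equation)
grid = np.linspace(0.1, 10, 400)
a_hat = grid[np.argmin([abs(np.sum(1 / pub_prob(z_pub, a)) - N_registered) for a in grid])]

w = 1 / pub_prob(z_pub, a_hat)                      # inverse probability weights
print(f"naive mean of published studies: {est_pub.mean():.3f}")
print(f"IPW-adjusted overall mean      : {np.sum(w * est_pub) / np.sum(w):.3f} (true mean 0.3)")
```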

Piecewise deterministic Markov processes (PDMPs) are a class of stochastic processes with applications in several fields of applied mathematics, ranging from the mathematical modeling of physical phenomena to computational methods. A PDMP is specified by three characteristic quantities: the deterministic motion, the law of the random event times, and the jump kernels. The applicability of PDMPs to real-world scenarios is currently limited by the fact that these processes can be simulated only when these three characteristics can be simulated exactly. To overcome this problem, we introduce discretisation schemes for PDMPs which make their approximate simulation possible. In particular, we design both first-order and higher-order schemes that rely on approximations of one or more of the three characteristics. For the proposed approximation schemes we study both pathwise convergence to the continuous PDMP as the step size converges to zero and convergence in law to the invariant measure of the PDMP in the long-time limit. Moreover, we apply our theoretical results to several PDMPs that arise in the computational statistics and mathematical biology literature.
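
For concreteness, the sketch below simulates a toy one-dimensional PDMP (a Zig-Zag-like velocity-flip process with deterministic motion $dx/dt = v$, event rate $\lambda(x,v) = \max(0, vx)$, and jump kernel $v \mapsto -v$) with a simple first-order scheme: the flow is advanced by an Euler step and an event fires in each step with probability $1-\exp(-\lambda h)$. This is only an illustration of the kind of scheme analysed here, not one of the paper's specific constructions.

```python
import numpy as np

rng = np.random.default_rng(5)

def lam(x, v):
    """Event rate of the toy PDMP; with this rate the invariant law of x is N(0, 1)."""
    return max(0.0, v * x)

def simulate(T=100.0, h=1e-2):
    """First-order scheme: Euler step for the deterministic flow, then an event
    with probability 1 - exp(-lam * h), after which the jump kernel flips v."""
    x, v = 0.0, 1.0
    path = []
    for _ in range(int(T / h)):
        x += v * h                                   # Euler step of the flow
        if rng.random() < 1.0 - np.exp(-lam(x, v) * h):
            v = -v                                   # apply the jump kernel
        path.append(x)
    return np.array(path)

path = simulate()
print("time-average of x:", path.mean(), "(the invariant mean here is 0)")
```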

We prove tight H\"olderian error bounds for all $p$-cones. Surprisingly, the exponents differ in several ways from those that have been previously conjectured; moreover, they illuminate $p$-cones as a curious example of a class of objects that possess properties in 3 dimensions that they do not in 4 or more. Using our error bounds, we analyse least squares problems with $p$-norm regularization, where our results enable us to compute the corresponding KL exponents for previously inaccessible values of $p$. Another application is a (relatively) simple proof that most $p$-cones are neither self-dual nor homogeneous. Our error bounds are obtained under the framework of facial residual functions and we expand it by establishing for general cones an optimality criterion under which the resulting error bound must be tight.

We study a randomized quadrature algorithm to approximate the integral of periodic functions defined over the high-dimensional unit cube. Recent work by Kritzer, Kuo, Nuyens and Ullrich (2019) shows that rank-1 lattice rules with a randomly chosen number of points and a good generating vector achieve almost the optimal order of the randomized error in weighted Korobov spaces, and moreover, that the error is bounded independently of the dimension if the weight parameters satisfy the summability condition $\sum_{j=1}^{\infty}\gamma_j^{1/\alpha}<\infty$. The argument is based on the existence result that at least half of the possible generating vectors yield almost the optimal order of the worst-case error in the same function spaces. In this paper we provide a component-by-component construction algorithm for such randomized rank-1 lattice rules, without any need to check whether the constructed generating vectors satisfy a desired worst-case error bound. As in the above-mentioned work, we prove that our algorithm achieves almost the optimal order of the randomized error and that the error bound is independent of the dimension if the same condition $\sum_{j=1}^{\infty}\gamma_j^{1/\alpha}<\infty$ holds. We also provide analogous results for tent-transformed lattice rules in weighted half-period cosine spaces and for polynomial lattice rules in weighted Walsh spaces.
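
The sketch below shows the basic object: a randomly shifted rank-1 lattice rule whose number of points is drawn at random from a range of primes, applied to a smooth periodic test integrand. The Korobov-type generating vector $z = (1, a, a^2, \dots) \bmod N$ with a random $a$ is a simple stand-in for the vectors produced by the paper's component-by-component construction, and the integrand and sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(6)

def is_prime(m):
    return m > 1 and all(m % q for q in range(2, int(m**0.5) + 1))

def lattice_rule(f, N, z, shift):
    """Rank-1 lattice rule with N points, generating vector z, and a random shift."""
    i = np.arange(N)[:, None]
    pts = np.mod(i * z[None, :] / N + shift[None, :], 1.0)
    return f(pts).mean()

# Smooth periodic test integrand on [0,1]^d with exact integral 1
f = lambda x: np.prod(1.0 + np.sin(2 * np.pi * x) / 2.0, axis=1)

d = 5
primes = [m for m in range(512, 1024) if is_prime(m)]
N = int(rng.choice(primes))                       # randomly chosen number of points
a = int(rng.integers(2, N))                       # Korobov-type generating vector:
z = np.array([pow(a, j, N) for j in range(d)])    #   z = (1, a, a^2, ...) mod N
shift = rng.random(d)                             # random shift

print("N =", N, " estimate =", lattice_rule(f, N, z, shift), " (exact value = 1)")
```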

Machine learning models are becoming increasingly proficient at complex tasks. However, even for experts in the field, it can be difficult to understand what a model has learned. This hampers trust and acceptance, and it obstructs the possibility of correcting the model. There is therefore a need for transparency of machine learning models. The development of transparent classification models has received much attention, but there are few developments for achieving transparent Reinforcement Learning (RL) models. In this study we propose a method that enables an RL agent to explain its behavior in terms of the expected consequences of state transitions and outcomes. First, we define a translation of states and actions into a description that is easier for human users to understand. Second, we develop a procedure that enables the agent to obtain the consequences of a single action as well as of its entire policy. The method calculates contrasts between the consequences of a policy derived from a user query and those of the agent's learned policy. Third, we construct a format for generating explanations. A pilot survey study was conducted to explore user preferences for different explanation properties. Results indicate that human users tend to favor explanations about the policy rather than about single actions.
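
A toy sketch of the contrastive idea (not this paper's translation scheme or survey design) is given below: the agent's policy and a user-queried alternative are both rolled out in a small corridor world, outcomes are tallied in human-readable terms, and the two tallies are what an explanation would contrast. The environment, policies, and outcome labels are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# A 1D corridor with states 0..6: the goal is at 6, a pit at 0, start in the middle.
GOAL, PIT, START, SLIP = 6, 0, 3, 0.2

def step(s, a):
    """a = +1 (right) or -1 (left); with probability SLIP the move is reversed."""
    move = a if rng.random() > SLIP else -a
    return min(max(s + move, PIT), GOAL)

def rollout(policy, episodes=2000, horizon=30):
    """Tally a policy's consequences in human-readable terms."""
    outcomes = {"reached goal": 0, "fell in pit": 0, "timed out": 0, "steps": 0}
    for _ in range(episodes):
        s = START
        for t in range(horizon):
            s = step(s, policy(s))
            if s in (GOAL, PIT):
                break
        outcomes["steps"] += t + 1
        outcomes[{GOAL: "reached goal", PIT: "fell in pit"}.get(s, "timed out")] += 1
    return {k: v / episodes for k, v in outcomes.items()}

learned = lambda s: +1          # the agent's learned policy: always move right
queried = lambda s: -1          # a user-queried alternative: always move left

for name, pol in [("learned", learned), ("queried", queried)]:
    print(name, rollout(pol))
# An explanation contrasts the two tallies, e.g. "my policy reaches the goal far
# more often and falls into the pit far less often than the one you asked about".
```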

We consider the task of learning the parameters of a {\em single} component of a mixture model, for the case where we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than that of solving the overall original problem, where one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and improved computational complexity compared to existing moment-based mixture-model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
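
As a toy illustration of why side information helps (this is not the paper's matrix-based algorithm), the sketch below uses a handful of samples known to come from the target Gaussian component to seed a localized, single-component weighted-mean update, recovering that component's mean without fitting the full mixture. All model choices (unit covariances, the number of side-information samples, the kernel width) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(8)

# Mixture of three Gaussians in R^2; we only care about the component centred at (5, 5).
means = np.array([[0.0, 0.0], [5.0, 5.0], [-4.0, 3.0]])
X = np.vstack([rng.normal(m, 1.0, size=(400, 2)) for m in means])

# Side information: a few samples known to come from the target component.
side = rng.normal(means[1], 1.0, size=(5, 2))

# Seed with the side-information mean, then run a few weighted-mean updates
# (soft responsibilities under a unit-variance Gaussian around the current guess).
mu = side.mean(axis=0)
for _ in range(10):
    w = np.exp(-0.5 * np.sum((X - mu) ** 2, axis=1))   # unnormalised responsibilities
    mu = (w[:, None] * X).sum(axis=0) / w.sum()

print("estimated target mean:", mu, " (truth: [5, 5])")
```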
