
Candidates arrive sequentially for an interview process which results in them being ranked relative to their predecessors. Based on the ranks available at each time, one must design a decision mechanism that selects or dismisses the current candidate in an effort to maximize the probability of selecting the best. This classical version of the "Secretary problem" has been studied in depth, mostly with combinatorial approaches, along with numerous other variants. In this work we consider a new version in which, during the review process, one is allowed to query an external expert to improve the probability of making the correct decision. Unlike existing formulations, we consider experts that are not necessarily infallible and may provide faulty suggestions. To solve our problem we adopt a probabilistic methodology and view the querying times as consecutive stopping times which we optimize with the help of optimal stopping theory. For each querying time we must also design a mechanism that decides whether the search should terminate at that time or not. This decision is straightforward under the usual assumption of infallible experts but, when experts are faulty, it has a far more intricate structure.
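For context, the sketch below simulates only the classical baseline: the well-known 1/e threshold rule for the standard secretary problem, with no expert queries. The function name and parameters are illustrative; the optimal-stopping analysis with fallible experts described above is not reproduced here.

```python
import math
import random

def secretary_threshold_rule(n, trials=100_000, seed=0):
    """Estimate the success probability of the classical 1/e rule: observe the
    first n/e candidates without committing, then accept the first candidate
    who is better than everyone seen so far."""
    rng = random.Random(seed)
    cutoff = int(n / math.e)
    successes = 0
    for _ in range(trials):
        ranks = list(range(n))          # rank 0 denotes the best candidate
        rng.shuffle(ranks)
        best_seen = min(ranks[:cutoff], default=n)
        chosen = None
        for r in ranks[cutoff:]:
            if r < best_seen:
                chosen = r
                break
        successes += (chosen == 0)      # success only if the overall best was picked
    return successes / trials

print(secretary_threshold_rule(100))    # approximately 1/e ~ 0.37
```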

Related Content

Chernoff bounds are a powerful application of the Markov inequality to produce strong bounds on the tails of probability distributions. They are often used to bound the tail probabilities of sums of Poisson trials, or in regression to produce conservative confidence intervals for the parameters of such trials. The bounds provide expressions for the tail probabilities that can be inverted, for a given probability/confidence level, to provide tail intervals. The inversions involve solving transcendental equations, so it is often convenient to substitute approximations that can be solved exactly, e.g., via the quadratic formula. In this paper we introduce approximations for the Chernoff bounds whose inversion can be solved exactly with a quadratic equation, but which are closer approximations than those adopted previously.
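As an illustration of the kind of inversion involved (using a standard textbook relaxation, not the sharper approximations proposed in the paper), the bound $P(X \ge (1+\delta)\mu) \le \exp(-\mu\delta^2/(2+\delta))$ for a sum of independent Poisson trials with mean $\mu$ can be inverted for $\delta$ by solving a quadratic. A minimal sketch, with illustrative function names:

```python
import math

def chernoff_upper_tail(mu, delta):
    """Relaxed Chernoff bound: P(X >= (1+delta)*mu) <= exp(-mu*delta^2/(2+delta))
    for a sum X of independent Poisson (Bernoulli) trials with mean mu, delta > 0."""
    return math.exp(-mu * delta**2 / (2 + delta))

def invert_upper_tail(mu, alpha):
    """Solve exp(-mu*delta^2/(2+delta)) = alpha for delta.  Writing L = ln(1/alpha),
    this is the quadratic mu*delta^2 - L*delta - 2*L = 0; its positive root gives
    the threshold t = (1+delta)*mu with P(X >= t) <= alpha."""
    L = math.log(1.0 / alpha)
    delta = (L + math.sqrt(L * L + 8.0 * mu * L)) / (2.0 * mu)
    return (1.0 + delta) * mu

mu, alpha = 50.0, 0.05
t = invert_upper_tail(mu, alpha)
print(t, chernoff_upper_tail(mu, t / mu - 1.0))   # the second value equals alpha
```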

A ubiquitous learning problem in today's digital market is how, during repeated interactions with a buyer, a seller can gradually learn optimal pricing decisions from the buyer's past purchase responses. A fundamental challenge of learning in such a strategic setup is that the buyer will naturally have incentives to manipulate his responses in order to induce more favorable learning outcomes for him. To understand the limits of the seller's learning when facing such a strategic and possibly manipulative buyer, we study a natural yet powerful buyer manipulation strategy. That is, before the pricing game starts, the buyer simply commits to "imitate" a different value function by pretending to always react optimally according to this imitative value function. We fully characterize the optimal value function that the buyer should imitate, as well as the resultant seller revenue and buyer surplus under this optimal buyer manipulation. Our characterizations reveal many useful insights about what happens at equilibrium. For example, a seller with concave production cost will obtain essentially 0 revenue at equilibrium, whereas the revenue for a seller with convex production cost is the Bregman divergence of her cost function between no production and certain production. Finally, and importantly, we show that a more powerful class of pricing schemes does not necessarily increase, and may in fact be harmful to, the seller's revenue. Our results not only lead to an effective prescriptive way for buyers to manipulate learning algorithms but also shed light on the limits of what a seller can really achieve when pricing in the dark.
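For reference, and keeping in mind that the exact quantities involved are specified in the paper, the Bregman divergence of a differentiable convex cost function $C$ between production levels $x$ and $y$ is the standard

$$ D_C(x, y) \;=\; C(x) - C(y) - C'(y)\,(x - y), $$

so the convex-cost result above says the seller's equilibrium revenue equals this quantity evaluated between the no-production and certain-production levels.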

Online testing procedures aim to control the extent of false discoveries over a sequence of hypothesis tests, allowing for the possibility that early-stage test results influence the choice of hypotheses to be tested in later stages. Typically, online methods assume that a permanent decision regarding the current test (reject or do not reject) must be made before advancing to the next test. We instead assume that each hypothesis requires an immediate preliminary decision, but that this decision may be updated until a preset deadline. Roughly speaking, this lets us apply a Benjamini-Hochberg-type procedure over a moving window of hypotheses, where the threshold parameters for upcoming tests can be determined based on preliminary results. Our method controls the false discovery rate (FDR) at every stage of testing, as well as at adaptively chosen stopping times. These results hold even under arbitrary p-value dependency structures.
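The moving-window online procedure itself is not reproduced here; as a point of reference, below is a minimal sketch of the classical (offline) Benjamini-Hochberg step-up procedure that it generalizes, with illustrative names:

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Classical (offline) BH step-up procedure: reject the hypotheses with the k
    smallest p-values, where k is the largest index such that p_(k) <= k*alpha/m."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.60, 0.74]))
# [ True  True False False False False]
```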

To deal with the ill-posed nature of the inverse heat conduction problem (IHCP), a regularization parameter $\alpha$ can be incorporated into a minimization problem, a technique known as Tikhonov regularization, which is a popular way to obtain stable sequential solutions. Because $\alpha$ acts as a penalty term, an excessively large value may cause large bias errors. Ridge regression was developed as an estimator of the optimal $\alpha$ that appropriately minimizes the magnitude of the gain coefficient matrix. However, the sensitivity coefficient matrix contained in the gain coefficient matrix depends on the time integrator; thus, certain parameters of the time integrator should be considered carefully, together with $\alpha$, to handle instability. Motivated by this, we propose an effective iterative hybrid parameter selection algorithm to obtain stable inverse solutions.
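The hybrid parameter-selection algorithm itself is not sketched here; below is only a minimal illustration of the Tikhonov-regularized step it builds on, with a hypothetical ill-conditioned sensitivity matrix X and noisy data y (all names are illustrative):

```python
import numpy as np

def tikhonov_solve(X, y, alpha):
    """Tikhonov-regularized least squares: minimize ||X q - y||^2 + alpha ||q||^2.
    The 'gain coefficient matrix' here is (X^T X + alpha I)^{-1} X^T."""
    n = X.shape[1]
    gain = np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T)
    return gain @ y

rng = np.random.default_rng(0)
X = np.vander(np.linspace(0.0, 1.0, 50), 8, increasing=True)   # nearly collinear columns
q_true = rng.normal(size=8)
y = X @ q_true + 0.01 * rng.normal(size=50)

for alpha in (0.0, 1e-6, 1e-2):
    q_hat = tikhonov_solve(X, y, alpha)
    print(alpha, np.linalg.norm(q_hat - q_true))   # error against the known test solution
```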

The noncentral Wishart distribution has become more mainstream in statistics as the prevalence of applications involving sample covariances with underlying multivariate Gaussian populations has dramatically increased since the advent of computers. Multiple sources in the literature deal with local approximations of the noncentral Wishart distribution with respect to its central counterpart. However, no source has yet developed explicit local approximations for the (central) Wishart distribution in terms of a normal analogue, which is important since Gaussian distributions are at the heart of the asymptotic theory for many statistical methods. In this paper, we prove a precise asymptotic expansion for the ratio of the Wishart density to the symmetric matrix-variate normal density with the same mean and covariances. The result is then used to derive an upper bound on the total variation between the corresponding probability measures and to find the pointwise variance of a new density estimator on the space of positive definite matrices with a Wishart asymmetric kernel. For the sake of completeness, we also find expressions for the pointwise bias of our new estimator, the pointwise variance as we move towards the boundary of its support, the mean squared error, and the mean integrated squared error away from the boundary, and we prove its asymptotic normality.
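For reference, the (central) Wishart density appearing in this ratio has the standard form (up to the exact parameterization adopted by the authors): for $S \succ 0$,

$$ f_{W_d(n,\Sigma)}(S) \;=\; \frac{|S|^{(n-d-1)/2} \exp\!\big(-\tfrac{1}{2}\operatorname{tr}(\Sigma^{-1} S)\big)}{2^{nd/2}\,|\Sigma|^{n/2}\,\Gamma_d(n/2)}, $$

where $\Gamma_d$ denotes the multivariate gamma function.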

The planted densest subgraph detection problem refers to the task of testing whether a given (random) graph contains a subgraph that is unusually dense. Specifically, we observe an undirected and unweighted graph on $n$ nodes. Under the null hypothesis, the graph is a realization of an Erd\H{o}s-R\'{e}nyi graph with edge probability (or, density) $q$. Under the alternative, there is a subgraph on $k$ vertices with edge probability $p>q$. The statistical as well as the computational barriers of this problem are well understood for a wide range of the edge parameters $p$ and $q$. In this paper, we consider a natural variant of the above problem, where one can only observe a small part of the graph using adaptive edge queries. For this model, we determine the number of queries necessary and sufficient for detecting the presence of the planted subgraph. Specifically, we show that any (possibly randomized) algorithm must make $\mathsf{Q} = \Omega(\frac{n^2}{k^2\chi^4(p||q)}\log^2n)$ adaptive queries (in expectation) to the adjacency matrix of the graph to detect the planted subgraph with probability more than $1/2$, where $\chi^2(p||q)$ is the Chi-Square distance. On the other hand, we devise a quasi-polynomial-time algorithm that finds the planted subgraph with high probability while making $\mathsf{Q} = O(\frac{n^2}{k^2\chi^4(p||q)}\log^2n)$ adaptive queries. We then propose a polynomial-time algorithm which is able to detect the planted subgraph using $\mathsf{Q} = O(\frac{n^4}{k^4\chi^2(p||q)}\log n)$ queries. We conjecture that in the remaining regime, where $\frac{n^2}{k^2}\ll\mathsf{Q}\ll \frac{n^4}{k^4}$, no polynomial-time algorithms exist; we provide evidence for this hypothesis using the planted clique conjecture. Our results resolve three questions posed in \cite{racz2020finding}, where the special case of adaptive detection and recovery of a planted clique was considered.
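Reading $\chi^2(p||q)$ as the chi-square divergence between the Bernoulli edge distributions with parameters $p$ and $q$ (the authors' exact convention may differ), it takes the explicit form

$$ \chi^2(p\,\|\,q) \;=\; \frac{(p-q)^2}{q} + \frac{(p-q)^2}{1-q} \;=\; \frac{(p-q)^2}{q(1-q)}, $$

which quantifies how distinguishable planted edges are from background edges in the query bounds above.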

We consider the problem of scheduling to minimize mean response time in M/G/1 queues where only estimated job sizes (processing times) are known to the scheduler, where a job of true size $s$ has estimated size in the interval $[\beta s, \alpha s]$ for some $\alpha \geq \beta > 0$. We evaluate each scheduling policy by its approximation ratio, which we define to be the ratio between its mean response time and that of Shortest Remaining Processing Time (SRPT), the optimal policy when true sizes are known. Our question: is there a scheduling policy that (a) has approximation ratio near 1 when $\alpha$ and $\beta$ are near 1, (b) has approximation ratio bounded by some function of $\alpha$ and $\beta$ even when they are far from 1, and (c) can be implemented without knowledge of $\alpha$ and $\beta$? We first show that naively running SRPT using estimated sizes in place of true sizes is not such a policy: its approximation ratio can be arbitrarily large for any fixed $\beta < 1$. We then provide a simple variant of SRPT for estimated sizes that satisfies criteria (a), (b), and (c). In particular, we prove its approximation ratio approaches 1 uniformly as $\alpha$ and $\beta$ approach 1. This is the first result showing this type of convergence for M/G/1 scheduling. We also study the Preemptive Shortest Job First (PSJF) policy, a cousin of SRPT. We show that, unlike SRPT, naively running PSJF using estimated sizes in place of true sizes satisfies criteria (b) and (c), as well as a weaker version of (a).
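A rough event-driven sketch of naive SRPT run on estimated sizes in an M/G/1 queue is given below; it is only meant to make the setup concrete (the paper's SRPT variant, the PSJF comparison, and the analysis are not reproduced), and all names and parameter choices are illustrative.

```python
import random

def mg1_srpt_sim(lam, sample_size, estimate, n_jobs=100_000, seed=0):
    """Simulate an M/G/1 queue under SRPT driven by *estimated* remaining sizes.
    `sample_size()` draws a true job size; `estimate(s)` returns its estimate.
    Returns the mean response time over n_jobs arrivals."""
    rng = random.Random(seed)
    t = 0.0
    next_arrival = rng.expovariate(lam)
    arrivals_left = n_jobs
    jobs = []                    # each job is [est_remaining, true_remaining, arrival_time]
    total_response, completed = 0.0, 0

    while completed < n_jobs:
        if jobs:
            idx = min(range(len(jobs)), key=lambda i: jobs[i][0])   # smallest estimated remaining
            finish = t + jobs[idx][1]
        else:
            idx, finish = None, float("inf")

        if arrivals_left > 0 and next_arrival < finish:
            # serve the chosen job until the next arrival, then re-evaluate priorities
            if idx is not None:
                work = next_arrival - t
                jobs[idx][0] -= work
                jobs[idx][1] -= work
            t = next_arrival
            s = sample_size()
            jobs.append([estimate(s), s, t])
            arrivals_left -= 1
            next_arrival = t + rng.expovariate(lam) if arrivals_left else float("inf")
        else:
            # the job in service completes before the next arrival
            t = finish
            total_response += t - jobs[idx][2]
            jobs.pop(idx)
            completed += 1
    return total_response / n_jobs

# example: exponential sizes with mean 1, load 0.8, multiplicative estimation error
rng = random.Random(1)
mean_rt = mg1_srpt_sim(
    lam=0.8,
    sample_size=lambda: rng.expovariate(1.0),
    estimate=lambda s: s * rng.uniform(0.5, 1.5),
)
print(mean_rt)
```

Here the `estimate` callback plays the role of drawing an estimated size in $[\beta s, \alpha s]$ for a job of true size $s$; the priority key is where a more robust variant would differ from this naive rule.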

Mobile network operator (MNO) data are a rich data source for official statistics, such as present population, mobility, migration, and tourism. Estimating the geographic location of mobile devices is an essential step for statistical inference. Most studies use the Voronoi tessellation for this, which is based on the assumption that mobile devices are always connected to the nearest radio cell. This paper uses a modular Bayesian approach, allowing for different modules of prior knowledge about where devices are expected to be, and different modules for the likelihood of connection given a geographic location. We discuss and compare the use of several prior modules, including one that is based on land use. We show that the Voronoi tessellation can be used as a likelihood module. Alternatively, we propose a signal strength model using radio cell properties such as antenna height, propagation direction, and power. Using Bayes' rule, we derive a posterior probability distribution that is an estimate for the geographic location, which can be used for further statistical inference. We describe the method and illustrate it with a fictional example that resembles a real-world situation. The method has been implemented in the R packages mobloc and mobvis, which are briefly described.
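A toy sketch of the modular Bayes step described above (a prior over grid tiles combined with a connection likelihood per tile, then normalized); it is not the mobloc implementation, and all arrays are illustrative:

```python
import numpy as np

def device_location_posterior(prior, likelihood):
    """Bayes' rule over grid tiles: combine prior knowledge of where devices are
    expected to be with the likelihood of connecting to the observed cell from
    each tile, then normalize to obtain a posterior location distribution."""
    unnormalized = prior * likelihood
    return unnormalized / unnormalized.sum()

# toy example on a strip of 5 grid tiles
prior = np.array([0.1, 0.3, 0.3, 0.2, 0.1])                 # e.g. land-use based prior
voronoi_likelihood = np.array([0.0, 0.0, 1.0, 1.0, 0.0])    # nearest-cell indicator module
signal_likelihood = np.array([0.05, 0.2, 0.5, 0.2, 0.05])   # signal-strength module

print(device_location_posterior(prior, voronoi_likelihood))
print(device_location_posterior(prior, signal_likelihood))
```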

Linear logic was conceived in 1987 by Girard and, in contrast to classical logic, restricts the use of the structural inference rules of weakening and contraction. As a result, atoms of the logic are no longer interpreted as truth values, but as information or resources. This interpretation makes linear logic a useful tool for formalisation in mathematics and computer science. Linear logic has, for example, found applications in proof theory, quantum logic, and the theory of programming languages. A central problem of the logic is the question of whether a given list of formulas is provable in the calculus. Research on the complexity of this problem has produced a number of results, but other questions remain open. To present these questions and offer new perspectives, this thesis consists of three main parts which build on each other: We present the syntax, proof theory, and various approaches to a semantics for linear logic; already here we encounter some open research questions. We present the current state of the complexity-theoretic characterization of the most important fragments of linear logic; here, further research problems are presented, and it becomes apparent that the known results all rely on different approaches. Finally, we prove an original complexity characterization of a fragment of the logic and present ideas for a new, structural approach to the examination of provability in linear logic.

Feature attribution is often loosely presented as the process of selecting a subset of relevant features as a rationale for a prediction. This lack of clarity stems from the fact that we usually do not have access to any notion of ground-truth attribution, and from a more general debate on what good interpretations are. In this paper we propose to formalise feature selection/attribution based on the concept of relaxed functional dependence. In particular, we extend these notions to the instance-wise setting and derive necessary properties for candidate selection solutions, while leaving room for task-dependence. By computing ground-truth attributions on synthetic datasets, we evaluate many state-of-the-art attribution methods and show that, even when optimised, some fail to satisfy the proposed properties and return incorrect solutions.
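As a toy illustration of the underlying notion (exact functional dependence only; the paper's relaxed version and its instance-wise extension are not reproduced here), one can check whether a candidate feature subset determines the target. All names are illustrative:

```python
import pandas as pd

def is_functionally_dependent(df, features, target):
    """Exact functional dependence: within every group of identical feature values,
    the target takes a single value."""
    return bool((df.groupby(list(features))[target].nunique() <= 1).all())

df = pd.DataFrame({"x1": [0, 0, 1, 1], "x2": [0, 1, 0, 1], "y": [0, 0, 1, 1]})
print(is_functionally_dependent(df, ["x1"], "y"))   # True: y is determined by x1 alone
print(is_functionally_dependent(df, ["x2"], "y"))   # False: x2 carries no information about y
```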
