
Let $p(n)$ denote the smallest prime divisor of the integer $n$. Define the function $g(k)$ to be the smallest integer $>k+1$ such that $p(\binom{g(k)}{k})>k$. So we have $g(2)=6$ and $g(3)=g(4)=7$. In this paper we present the following new results on the Erd\H{o}s-Selfridge function $g(k)$: We give a new algorithm to compute the value of $g(k)$, and use it both to verify previous work and to compute new values of $g(k)$, with our current limit being $$ g(323)= 1\ 69829\ 77104\ 46041\ 21145\ 63251\ 22499. $$ We define a new function $\hat{g}(k)$, and under the assumption of our Uniform Distribution Heuristic we show that $$ \log g(k) = \log \hat{g}(k) + O(\log k) $$ with high "probability". We also provide computational evidence to support our claim that $\hat{g}(k)$ estimates $g(k)$ reasonably well in practice. There are several open conjectures on the behavior of $g(k)$ which we are able to prove for $\hat{g}(k)$, namely that $$ 0.525\ldots +o(1) \quad \le \quad \frac{\log \hat{g}(k)}{k/\log k} \quad \le \quad 1+o(1), $$ and that $$ \limsup_{k\rightarrow\infty} \frac{\hat{g}(k+1)}{\hat{g}(k)}=\infty.$$ Let $G(x,k)$ count the number of integers $n\le x$ such that $p(\binom{n}{k})>k$. Unconditionally, we prove that for large $x$, $G(x,k)$ is asymptotic to $x/\hat{g}(k)$. Finally, we show that the running time of our new algorithm is at most $g(k) \exp[ -c (k\log\log k) /(\log k)^2 (1+o(1))]$ for a constant $c>0$.
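As a minimal illustration of the definition (a brute-force sketch, not the paper's algorithm, which is what makes values as large as $g(323)$ reachable), the following Python snippet computes $g(k)$ for very small $k$ by direct search and reproduces $g(2)=6$ and $g(3)=g(4)=7$.

```python
from math import comb

def smallest_prime_factor(n):
    """p(n): the smallest prime divisor of an integer n >= 2."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def g(k):
    """Smallest n > k + 1 with p(C(n, k)) > k, found by direct search.

    Only feasible for very small k; values like g(323) require the
    algorithm described in the paper.
    """
    n = k + 2
    while smallest_prime_factor(comb(n, k)) <= k:
        n += 1
    return n

# Reproduces the values quoted above.
assert g(2) == 6 and g(3) == 7 and g(4) == 7
print(g(5), g(6))  # further small values, computed almost instantly
```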

Related content

Given a subset $A$ of the $n$-dimensional Boolean hypercube $\mathbb{F}_2^n$, the sumset $A+A$ is the set $\{a+a': a, a' \in A\}$ where addition is in $\mathbb{F}_2^n$. Sumsets play an important role in additive combinatorics, where they feature in many central results of the field. The main result of this paper is a sublinear-time algorithm for the problem of sumset size estimation. In more detail, our algorithm is given oracle access to (the indicator function of) an arbitrary $A \subseteq \mathbb{F}_2^n$ and an accuracy parameter $\epsilon > 0$, and with high probability it outputs a value $0 \leq v \leq 1$ that is $\pm \epsilon$-close to $\mathrm{Vol}(A' + A')$ for some perturbation $A' \subseteq A$ of $A$ satisfying $\mathrm{Vol}(A \setminus A') \leq \epsilon.$ It is easy to see that without the relaxation of dealing with $A'$ rather than $A$, any algorithm for estimating $\mathrm{Vol}(A+A)$ to any nontrivial accuracy must make $2^{\Omega(n)}$ queries. In contrast, we give an algorithm whose query complexity depends only on $\epsilon$ and is completely independent of the ambient dimension $n$.
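For intuition, here is a brute-force Python sketch (emphatically not the sublinear-time algorithm) that computes $\mathrm{Vol}(A+A)$ exactly; its cost grows with $|A|^2$ and with the ambient $2^n$, which is precisely the dependence the paper's estimator avoids.

```python
def sumset_volume(A, n):
    """Exact Vol(A + A) = |A + A| / 2^n for a subset A of F_2^n (brute force)."""
    S = {tuple(a_i ^ b_i for a_i, b_i in zip(a, b)) for a in A for b in A}
    return len(S) / 2 ** n

# Toy example: A is a coset of a 2-dimensional subspace of F_2^4,
# so A + A is that subspace and Vol(A + A) = 4 / 16 = 0.25.
A = [(1, 0, 0, 0), (1, 1, 0, 0), (1, 0, 1, 0), (1, 1, 1, 0)]
print(sumset_volume(A, n=4))
```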

We analyze a number of natural estimators for the optimal transport map between two distributions and show that they are minimax optimal. We adopt the plugin approach: our estimators are simply optimal couplings between measures derived from our observations, appropriately extended so that they define functions on $\mathbb{R}^d$. When the underlying map is assumed to be Lipschitz, we show that computing the optimal coupling between the empirical measures, and extending it using linear smoothers, already gives a minimax optimal estimator. When the underlying map enjoys higher regularity, we show that the optimal coupling between appropriate nonparametric density estimates yields faster rates. Our work also provides new bounds on the risk of corresponding plugin estimators for the quadratic Wasserstein distance, and we show how this problem relates to that of estimating optimal transport maps using stability arguments for smooth and strongly convex Brenier potentials. As an application of our results, we derive a central limit theorem for a density plugin estimator of the squared Wasserstein distance, which is centered at its population counterpart when the underlying distributions have sufficiently smooth densities. In contrast to known central limit theorems for empirical estimators, this result easily lends itself to statistical inference for Wasserstein distances.
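As a hedged sketch of the plugin idea (using generic SciPy routines, not the authors' code): with quadratic cost and two empirical measures on equally many points, the optimal coupling reduces to an assignment problem, the resulting permutation is the discrete transport map, and its average cost is a plugin estimate of the squared Wasserstein-2 distance. Extending the map off the sample, e.g. with linear smoothers, is where the estimators analysed in the paper come in.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def empirical_ot_map(X, Y):
    """Optimal coupling between two empirical measures on n points each.

    With quadratic cost and uniform weights the optimal coupling is a
    permutation (an assignment problem); the permutation defines the
    discrete transport map X[i] -> Y[sigma[i]].
    """
    cost = cdist(X, Y, metric="sqeuclidean")
    rows, cols = linear_sum_assignment(cost)
    return cols, cost[rows, cols].mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y = rng.normal(loc=1.0, size=(200, 2))
sigma, w2_squared = empirical_ot_map(X, Y)
print(w2_squared)  # plugin estimate of the squared Wasserstein-2 distance
```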

Consider any locally checkable labeling problem $\Pi$ in rooted regular trees: there is a finite set of labels $\Sigma$, and for each label $x \in \Sigma$ we specify what are permitted label combinations of the children for an internal node of label $x$ (the leaf nodes are unconstrained). This formalism is expressive enough to capture many classic problems studied in distributed computing, including vertex coloring, edge coloring, and maximal independent set. We show that the distributed computational complexity of any such problem $\Pi$ falls in one of the following classes: it is $O(1)$, $\Theta(\log^* n)$, $\Theta(\log n)$, or $\Theta(n)$ rounds in trees with $n$ nodes (and all of these classes are nonempty). We show that the complexity of any given problem is the same in all four standard models of distributed graph algorithms: deterministic LOCAL, randomized LOCAL, deterministic CONGEST, and randomized CONGEST model. In particular, we show that randomness does not help in this setting, and complexity classes such as $\Theta(\log \log n)$ or $\Theta(\sqrt{n})$ do not exist (while they do exist in the broader setting of general trees). We also show how to systematically determine the distributed computational complexity of any such problem $\Pi$. We present an algorithm that, given the description of $\Pi$, outputs the round complexity of $\Pi$ in these models. While the algorithm may take exponential time in the size of the description of $\Pi$, it is nevertheless practical: we provide a freely available implementation of the classifier algorithm, and it is fast enough to classify many typical problems of interest.
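To make the formalism concrete, here is a hypothetical Python encoding of one such problem $\Pi$ (proper 3-coloring of rooted binary trees): the label set $\Sigma$ together with, for each label, the permitted multisets of children labels. This only illustrates the kind of description a classifier takes as input; it is not the authors' implementation.

```python
from itertools import combinations_with_replacement

# Hypothetical encoding of an LCL problem Pi on rooted binary trees
# (arity 2): proper 3-colouring.  For each label x we list the permitted
# multisets of children labels; leaf nodes are unconstrained.
SIGMA = ("1", "2", "3")
DELTA = 2

PI = {
    x: {tuple(sorted(c))
        for c in combinations_with_replacement([y for y in SIGMA if y != x], DELTA)}
    for x in SIGMA
}

def locally_valid(label, child_labels):
    """Check one internal node: is (label, multiset of children labels) permitted?"""
    return tuple(sorted(child_labels)) in PI[label]

print(PI["1"])                          # {('2', '2'), ('2', '3'), ('3', '3')}
print(locally_valid("1", ["2", "3"]))   # True: children avoid the parent's colour
print(locally_valid("1", ["1", "3"]))   # False
```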

The Minimum Linear Arrangement problem (MLA) consists of finding a mapping $\pi$ from the vertices of a graph to distinct integers that minimizes $\sum_{\{u,v\}\in E}|\pi(u) - \pi(v)|$. In that setting, vertices are often assumed to lie on a horizontal line and edges are drawn as semicircles above said line. For trees, various algorithms are available to solve the problem in time polynomial in $n=|V|$. There exist variants of the MLA in which the arrangements are constrained. Iordanskii, and later Hochberg and Stallmann (HS), put forward $O(n)$-time algorithms that solve the problem when arrangements are constrained to be planar (also known as one-page book embeddings). We also consider linear arrangements of rooted trees that are constrained to be projective (planar embeddings where the root is not covered by any edge). Gildea and Temperley (GT) sketched an algorithm for projective arrangements which they claimed runs in $O(n)$ time but did not provide any justification for its cost. In contrast, Park and Levy claimed that GT's algorithm runs in $O(n \log d_{max})$, where $d_{max}$ is the maximum degree, but did not provide sufficient detail. Here we correct an error in HS's algorithm for the planar case, show its relationship with the projective case, and derive simple algorithms for the projective and planar cases that provably run in $O(n)$ time.
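For concreteness, a minimal Python sketch of the objective (not of the linear-time algorithms discussed): the cost of an arrangement is the total edge length, and already for a star the position of the centre on the line matters.

```python
def arrangement_cost(edges, pi):
    """Cost of a linear arrangement pi (vertex -> distinct integer position)."""
    return sum(abs(pi[u] - pi[v]) for u, v in edges)

# A star with centre 0 and leaves 1..4: placing the centre in the middle
# of the line is better than placing it at one end.
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]
end    = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5}
middle = {1: 1, 2: 2, 0: 3, 3: 4, 4: 5}
print(arrangement_cost(edges, end))     # 1 + 2 + 3 + 4 = 10
print(arrangement_cost(edges, middle))  # 2 + 1 + 1 + 2 = 6
```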

Estimation of reliability and hazard rate is one of the most important problems arising in many applications, especially in engineering studies and the analysis of human lifetimes. Different methods of estimation have been used in this regard; each exploits different tools and suffers from drawbacks such as computational complexity, low precision, and so forth. This study employs the E-Bayesian method to estimate the parameter and survival functions of the Weibull Generalized Exponential distribution. The estimators are obtained under squared error and LINEX loss functions from progressively type-II censored samples. E-Bayesian estimates are derived under three priors on the hyperparameters in order to investigate the influence of different priors on the estimates. The asymptotic behaviour of the E-Bayesian estimates, as well as the relationships among them, is also investigated. Finally, a comparison among the maximum likelihood, Bayes, and E-Bayesian estimates is made using real data and a Monte Carlo simulation. The results show that the new method is more efficient than previous methods.
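As a small, generic illustration of the two loss functions involved (not the paper's E-Bayesian derivations for the Weibull Generalized Exponential model): under squared error loss the Bayes point estimate is the posterior mean, while under LINEX loss $L(\Delta)=\exp(a\Delta)-a\Delta-1$ with $\Delta=\hat\theta-\theta$ it is $-(1/a)\log E[\exp(-a\theta)]$. The sketch below approximates both from posterior draws.

```python
import numpy as np

def bayes_point_estimates(posterior_draws, a=0.5):
    """Bayes point estimates from posterior draws under two loss functions.

    Squared error loss                      -> posterior mean.
    LINEX loss exp(a*d) - a*d - 1,
    with d = estimate - theta               -> -(1/a) * log E[exp(-a * theta)].
    """
    theta = np.asarray(posterior_draws)
    sel_estimate = theta.mean()
    linex_estimate = -np.log(np.mean(np.exp(-a * theta))) / a
    return sel_estimate, linex_estimate

rng = np.random.default_rng(1)
draws = rng.gamma(shape=3.0, scale=0.5, size=100_000)  # stand-in posterior sample
print(bayes_point_estimates(draws))  # with a > 0, LINEX penalises over-estimation more
```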

This paper studies the optimal rate of estimation in a finite Gaussian location mixture model in high dimensions without separation conditions. We assume that the number of components $k$ is bounded and that the centers lie in a ball of bounded radius, while allowing the dimension $d$ to be as large as the sample size $n$. Extending the one-dimensional result of Heinrich and Kahn \cite{HK2015}, we show that the minimax rate of estimating the mixing distribution in Wasserstein distance is $\Theta((d/n)^{1/4} + n^{-1/(4k-2)})$, achieved by an estimator computable in time $O(nd^2+n^{5/4})$. Furthermore, we show that the mixture density can be estimated at the optimal parametric rate $\Theta(\sqrt{d/n})$ in Hellinger distance and provide a computationally efficient algorithm to achieve this rate in the special case of $k=2$. Both the theoretical and methodological development rely on a careful application of the method of moments. Central to our results is the observation that the information geometry of finite Gaussian mixtures is characterized by the moment tensors of the mixing distribution, whose low-rank structure can be exploited to obtain a sharp local entropy bound.
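A one-dimensional illustration of why moments of the mixing distribution are accessible (a sketch assuming unit-variance Gaussian noise, not the paper's tensor-valued estimator): if $X=U+Z$ with $Z\sim N(0,1)$ independent of $U$, then $E[\mathrm{He}_r(X)]=E[U^r]$ for the probabilists' Hermite polynomials, so empirical Hermite moments estimate the mixing moments directly; the moment tensors mentioned above are the $d$-dimensional analogue.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def mixing_moments(x, max_order):
    """Estimates of the moments of the mixing distribution from the sample.

    If X = U + Z with Z ~ N(0, 1) independent of U, then
    E[He_r(X)] = E[U^r] for the probabilists' Hermite polynomials He_r,
    so empirical Hermite moments are unbiased for the mixing moments.
    """
    return [float(np.mean(hermeval(x, [0] * r + [1])))
            for r in range(1, max_order + 1)]

rng = np.random.default_rng(2)
u = rng.choice([-1.0, 2.0], size=200_000)   # two-point mixing distribution
x = u + rng.normal(size=u.size)             # observed Gaussian mixture sample
print(mixing_moments(x, 3))                 # roughly [0.5, 2.5, 3.5] = E[U], E[U^2], E[U^3]
```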

This paper concentrates on the approximation power of deep feed-forward neural networks in terms of width and depth. It is proved by construction that ReLU networks with width $\mathcal{O}\big(\max\{d\lfloor N^{1/d}\rfloor,\, N+2\}\big)$ and depth $\mathcal{O}(L)$ can approximate a H\"older continuous function on $[0,1]^d$ with an approximation rate $\mathcal{O}\big(\lambda\sqrt{d} (N^2L^2\ln N)^{-\alpha/d}\big)$, where $\alpha\in (0,1]$ and $\lambda>0$ are the H\"older order and constant, respectively. Such a rate is optimal up to a constant in terms of width and depth separately, while existing results are only nearly optimal without the logarithmic factor in the approximation rate. More generally, for an arbitrary continuous function $f$ on $[0,1]^d$, the approximation rate becomes $\mathcal{O}\big(\,\sqrt{d}\,\omega_f\big( (N^2L^2\ln N)^{-1/d}\big)\,\big)$, where $\omega_f(\cdot)$ is the modulus of continuity. We also extend our analysis to any continuous function $f$ on a bounded set. In particular, if ReLU networks with depth $31$ and width $\mathcal{O}(N)$ are used to approximate one-dimensional Lipschitz continuous functions on $[0,1]$ with a Lipschitz constant $\lambda>0$, the approximation rate in terms of the total number of parameters, $W=\mathcal{O}(N^2)$, becomes $\mathcal{O}(\tfrac{\lambda}{W\ln W})$, a rate that had not previously been established in the literature for fixed-depth ReLU networks.
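For contrast with these rates, here is a simple baseline (a sketch, not the paper's construction): a one-hidden-layer ReLU network realising the piecewise-linear interpolant of a one-dimensional Lipschitz function. With $N$ breakpoints (roughly $W = \Theta(N)$ parameters) this baseline only achieves uniform error $O(\lambda/N) = O(\lambda/W)$, whereas the fixed-depth constructions analysed above reach $O(\lambda/(W\ln W))$ with $W=\mathcal{O}(N^2)$ parameters.

```python
import numpy as np

def relu_interpolant(f, breakpoints):
    """One-hidden-layer ReLU network realising the piecewise-linear
    interpolant of f on an interval with the given breakpoints."""
    t = np.asarray(breakpoints, dtype=float)
    y = f(t)
    slopes = np.diff(y) / np.diff(t)
    kink_coeffs = np.concatenate([[slopes[0]], np.diff(slopes)])  # slope changes

    def net(x):
        x = np.asarray(x, dtype=float)
        hidden = np.maximum(x[:, None] - t[:-1][None, :], 0.0)  # ReLU activations
        return y[0] + hidden @ kink_coeffs

    return net

f = lambda x: np.abs(np.sin(3 * np.pi * x))        # Lipschitz with lambda = 3*pi
net = relu_interpolant(f, np.linspace(0.0, 1.0, 65))
xs = np.linspace(0.0, 1.0, 10_001)
print(np.max(np.abs(net(xs) - f(xs))))             # uniform error of the baseline
```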

This paper studies higher-order inference properties of nonparametric local polynomial regression methods under random sampling. We prove Edgeworth expansions for $t$ statistics and coverage error expansions for interval estimators that (i) hold uniformly in the data generating process, (ii) allow for the uniform kernel, and (iii) cover estimation of derivatives of the regression function. The terms of the higher-order expansions, and their associated rates as a function of the sample size and bandwidth sequence, depend on the smoothness of the population regression function, the smoothness exploited by the inference procedure, and on whether the evaluation point is in the interior or on the boundary of the support. We prove that robust bias corrected confidence intervals have the fastest coverage error decay rates in all cases, and we use our results to deliver novel, inference-optimal bandwidth selectors. The main methodological results are implemented in companion \textsf{R} and \textsf{Stata} software packages.
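As a generic illustration of the estimator being analysed (a sketch with the uniform kernel, not the companion R/Stata packages), local polynomial regression solves a kernel-weighted least squares problem centred at the evaluation point; the $j$-th fitted coefficient estimates $m^{(j)}(x_0)/j!$, which covers the derivative estimation mentioned above.

```python
import numpy as np

def local_poly_fit(x, y, x0, h, degree=1):
    """Local polynomial regression at x0 with the uniform kernel.

    Solves a kernel-weighted least squares problem in powers of (x - x0);
    the j-th fitted coefficient estimates m^(j)(x0) / j!.
    """
    w = (np.abs(x - x0) <= h).astype(float)            # uniform kernel weights
    X = np.vander(x - x0, degree + 1, increasing=True)
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)

rng = np.random.default_rng(3)
x = rng.uniform(size=2_000)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)
beta = local_poly_fit(x, y, x0=0.25, h=0.1)
print(beta[0], beta[1])   # estimates of m(0.25) = 1 and m'(0.25) = 0
```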

We develop an error estimator for neural network approximations of PDEs. The proposed approach is based on the dual weighted residual (DWR) estimator. It is intended to serve as a stopping criterion that guarantees the accuracy of the solution independently of how the neural network training is designed. The approach is illustrated with computational examples for the Laplace and Stokes problems.

We examine the behaviour of the Laplace and saddlepoint approximations in the high-dimensional setting, where the dimension of the model is allowed to increase with the number of observations. Approximations to the joint density, the marginal posterior density and the conditional density are considered. Our results show that, under the mildest assumptions on the model, the error of the joint density approximation is $O(p^4/n)$ for both the Laplace and saddlepoint approximations provided $p = o(n^{1/4})$, with improvements being possible under additional assumptions. Stronger results are obtained for the approximation to the marginal posterior density.
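For reference, a minimal sketch of the $p$-dimensional Laplace approximation in the form analysed here (generic, with the Hessian supplied by the user; not the paper's refined versions): $\log \int \exp\{-g(\theta)\}\,d\theta \approx -g(\hat\theta)+\tfrac{p}{2}\log(2\pi)-\tfrac12\log\det \nabla^2 g(\hat\theta)$, where $\hat\theta$ is the mode.

```python
import numpy as np
from scipy.optimize import minimize

def laplace_log_integral(neg_log_f, hess, theta0):
    """Laplace approximation to log( integral of exp(-neg_log_f(theta)) dtheta ).

    log I  ~  -neg_log_f(theta_hat) + (p/2) log(2*pi)
              - 0.5 * log det hess(theta_hat),  theta_hat the mode.
    """
    theta_hat = minimize(neg_log_f, theta0, method="BFGS").x
    p = theta_hat.size
    _, logdet = np.linalg.slogdet(hess(theta_hat))
    return -neg_log_f(theta_hat) + 0.5 * p * np.log(2 * np.pi) - 0.5 * logdet

# Sanity check on a Gaussian integrand, for which the approximation is exact:
# integral of exp(-0.5 theta' A theta) dtheta = (2 pi)^{p/2} det(A)^{-1/2}.
A = np.diag([1.0, 2.0, 4.0])
approx = laplace_log_integral(lambda th: 0.5 * th @ A @ th, lambda th: A, np.ones(3))
exact = 1.5 * np.log(2 * np.pi) - 0.5 * np.log(np.linalg.det(A))
print(approx, exact)   # agree up to optimisation tolerance
```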
