
The length function $\ell_q(r,R)$ is the smallest length of a $q$-ary linear code with codimension (redundancy) $r$ and covering radius $R$. In this work, new upper bounds on $\ell_q(tR+1,R)$ are obtained in the following forms:
\begin{equation*}
\begin{split}
(a)~&\ell_q(r,R)\le cq^{(r-R)/R}\cdot\sqrt[R]{\ln q},\quad R\ge3,~r=tR+1,~t\ge1,\\
&q\text{ is an arbitrary prime power},~c\text{ is independent of }q.
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
(b)~&\ell_q(r,R)< 4.5Rq^{(r-R)/R}\cdot\sqrt[R]{\ln q},\quad R\ge3,~r=tR+1,~t\ge1,\\
&q\text{ is an arbitrary prime power},~q\text{ is large enough}.
\end{split}
\end{equation*}
In the literature, smaller upper bounds are known for $q=(q')^R$ with $q'$ a prime power; however, when $q$ is an arbitrary prime power, the bounds of this paper are better than the known ones. For $t=1$, we use a one-to-one correspondence between $[n,n-(R+1)]_qR$ codes and $(R-1)$-saturating $n$-sets in the projective space $\mathrm{PG}(R,q)$. A new construction of such saturating sets, providing sets of small size, is proposed. The $[n,n-(R+1)]_qR$ codes obtained by these geometrical methods are then taken as starting codes in lift-constructions (the so-called "$q^m$-concatenating constructions") for covering codes, yielding infinite families of codes with growing codimension $r=tR+1$, $t\ge1$.
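For concreteness, here is a minimal numerical sketch that simply evaluates the right-hand side of bound (b) for a few illustrative parameter choices; the values of $q$, $R$, $t$ below are arbitrary examples, not values singled out by the paper.
\begin{verbatim}
import math

# Evaluate the right-hand side of bound (b):
#   4.5 * R * q^((r-R)/R) * (ln q)^(1/R),  with r = t*R + 1.
def bound_b(q, R, t):
    r = t * R + 1
    return 4.5 * R * q ** ((r - R) / R) * math.log(q) ** (1.0 / R)

for q in (16, 64, 256):          # prime powers (illustrative)
    for R in (3, 4):
        for t in (1, 2):
            r = t * R + 1
            print(f"q={q:3d} R={R} r={r:2d}: ell_q(r,R) < {bound_b(q, R, t):.1f}")
\end{verbatim}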

Related content

The $\beta$-model is a powerful tool for modeling network generation driven by node degree heterogeneity. Its simple yet expressive nature makes it particularly well suited to large and sparse networks, where many network models become infeasible due to computational challenges and observation scarcity. However, existing estimation algorithms for the $\beta$-model do not scale up, and theoretical understanding remains limited to dense networks. This paper brings several major improvements to the method and theory of the $\beta$-model to address the urgent needs of practical applications. Our contributions include: 1. method: we propose a new $\ell_2$-penalized MLE scheme and design a novel algorithm that comfortably handles sparse networks with millions of nodes, much faster and more memory-parsimonious than any existing algorithm; 2. theory: we present new error bounds on $\beta$-models under much weaker assumptions, and we establish new lower bounds and new asymptotic normality results; distinct from the existing literature, our results cover both the small and large regularization regimes and reveal their distinct asymptotic dependency structures; 3. application: we apply our method to large COVID-19 network data sets and discover meaningful results.
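To make the estimation scheme concrete, the following is a minimal sketch of an $\ell_2$-penalized MLE for the $\beta$-model (edge probabilities $\mathrm{sigmoid}(\beta_i+\beta_j)$), fitted by plain gradient ascent on a dense adjacency matrix. It only illustrates the penalized objective; the penalty weight, step size, and toy simulation are assumptions made here for illustration, and the paper's algorithm for sparse million-node networks is far more efficient than this $O(n^2)$-per-step sketch.
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_beta_model(A, lam=1.0, lr=0.01, n_iter=2000):
    """l2-penalized MLE for the beta-model by gradient ascent.
    A: symmetric 0/1 adjacency matrix with zero diagonal."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    beta = np.zeros(n)
    for _ in range(n_iter):
        P = sigmoid(beta[:, None] + beta[None, :])
        np.fill_diagonal(P, 0.0)
        # d/d(beta_i) of penalized log-likelihood: deg_i - sum_j P_ij - 2*lam*beta_i
        beta += lr * (deg - P.sum(axis=1) - 2.0 * lam * beta)
    return beta

# Toy check: simulate a small beta-model network and re-estimate beta.
rng = np.random.default_rng(0)
n = 200
beta_true = rng.normal(-1.0, 0.5, size=n)
P = sigmoid(beta_true[:, None] + beta_true[None, :])
A = np.triu((rng.random((n, n)) < P).astype(float), 1)
A = A + A.T
print("max abs error:", np.abs(fit_beta_model(A) - beta_true).max())
\end{verbatim}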

This paper is devoted to the estimation of the minimal dimension $P$ of the state-space realizations of a high-dimensional time series $y$, defined as a noisy version (the noise is white and Gaussian) of a useful signal with low-rank rational spectral density, in the high-dimensional asymptotic regime where the number of available samples $N$ and the dimension of the time series $M$ converge towards infinity at the same rate. In the classical low-dimensional regime, $P$ is estimated as the number of significant singular values of the empirical autocovariance matrix between the past and the future of $y$, or as the number of significant estimated canonical correlation coefficients between the past and the future of $y$. Generalizing large random matrix methods developed in the past to analyze classical spiked models, the behaviour of the above singular values and canonical correlation coefficients is studied in the high-dimensional regime. It is proved that they are smaller than certain thresholds depending on the statistics of the noise, except for a finite number of outliers that are due to the useful signal. The number of singular values of the sample autocovariance matrix above the threshold is evaluated and shown to be almost independent of $P$ in general, and it therefore cannot be used to estimate $P$ accurately. In contrast, the number $s$ of canonical correlation coefficients larger than the corresponding threshold is shown to be less than or equal to $P$, and explicit conditions under which it is equal to $P$ are provided. Under the corresponding assumptions, $s$ is thus a consistent estimate of $P$ in the high-dimensional regime. The core of the paper is the development of the necessary large random matrix tools.
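As a point of reference for the classical recipe re-examined here, the sketch below forms past and future blocks of a vector time series, whitens them, and reads off the sample canonical correlation coefficients as the singular values of the whitened cross-covariance. The window length and the ad-hoc threshold are placeholders of my choosing; the paper's contribution is precisely the correct high-dimensional thresholds derived from the noise statistics via random matrix theory.
\begin{verbatim}
import numpy as np

def past_future_canonical_correlations(y, L):
    """y: (M, N) array; L: window length of the past and future blocks."""
    M, N = y.shape
    T = N - 2 * L + 1
    past = np.vstack([y[:, L - 1 - k : L - 1 - k + T] for k in range(L)])
    future = np.vstack([y[:, L + k : L + k + T] for k in range(L)])

    def whiten(Z):                       # orthonormal basis of the row space
        Z = Z - Z.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(Z, full_matrices=False)
        return U.T @ Z / s[:, None]

    # Canonical correlations = singular values of the whitened cross-covariance.
    return np.linalg.svd(whiten(past) @ whiten(future).T, compute_uv=False)

rng = np.random.default_rng(1)
M, N, L = 20, 2000, 3
y = rng.normal(size=(M, N))              # pure white noise, so P = 0 here
ccs = past_future_canonical_correlations(y, L)
print("largest canonical correlations:", np.round(ccs[:5], 3))
print("count above an ad-hoc threshold 0.5:", int((ccs > 0.5).sum()))
\end{verbatim}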

We prove that a formula predicted on the basis of non-rigorous physics arguments [Zdeborova and Krzakala: Phys. Rev. E (2007)] provides a lower bound on the chromatic number of sparse random graphs. The proof is based on the interpolation method from mathematical physics. In the case of random regular graphs the lower bound can be expressed algebraically, while in the case of the binomial random graph we obtain a variational formula. As an application we calculate improved explicit lower bounds on the chromatic number of random graphs for small (average) degrees. Additionally, we show how asymptotic formulas for large degrees that were previously obtained by lengthy and complicated combinatorial arguments can be re-derived easily from these new results.

List-decodable codes have been an active topic in theoretical computer science since the seminal papers of M. Sudan and V. Guruswami in 1997-1998. List-decodable codes have also been considered in the rank-metric, subspace-metric, cover-metric, pair-metric and insdel-metric settings. In this paper we show that the rate, list-decoding radius and list size are closely related to the classical topic of covering codes. We prove new, simple but strong general upper bounds for list-decodable codes in general finite metric spaces, based on various covering codes of finite metric spaces. These general covering-code upper bounds apply to the case when the volumes of the balls depend on the centers, not only to the case when they depend on the radius alone. Thus any good upper bound on the covering radius or on the size of a covering code implies a good upper bound on the size of list-decodable codes. Our results give exponential improvements on the recent generalized Singleton upper bound of STOC 2020 for Hamming-metric list-decodable codes when the code lengths are large. Even for the list size $L=1$, our covering-code upper bounds give highly non-trivial upper bounds on the sizes of codes with a given minimum distance. A generalized Singleton upper bound for average-radius list-decodable codes is also given. The asymptotic forms of our covering-code bounds can partially recover the Blinovsky bound and the combinatorial bound of Guruswami-H{\aa}stad-Sudan-Zuckerman in the Hamming-metric setting. We also suggest studying combinatorial covering list-decodable codes as a natural generalization of combinatorial list-decodable codes. We apply our general covering-code upper bounds to list-decodable rank-metric codes, list-decodable subspace codes, list-decodable insertion codes and list-decodable deletion codes. Some new and better results about the non-list-decodability of rank-metric codes and subspace codes are obtained.
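To fix the two notions being related, the brute-force sketch below computes, for a tiny binary code, its covering radius and the maximum list size within a given decoding radius. The $[7,4]$ Hamming code and the radius used are arbitrary toy choices; nothing here reproduces the paper's bounds, only the definitions they connect.
\begin{verbatim}
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def covering_radius(C, n):
    return max(min(hamming(x, c) for c in C) for x in product((0, 1), repeat=n))

def max_list_size(C, n, e):
    """Largest number of codewords in any Hamming ball of radius e."""
    return max(sum(hamming(x, c) <= e for c in C) for x in product((0, 1), repeat=n))

# Toy code: the [7,4] binary Hamming code, generated from 4 rows over GF(2).
G = [(1,0,0,0,0,1,1), (0,1,0,0,1,0,1), (0,0,1,0,1,1,0), (0,0,0,1,1,1,1)]
C = [tuple(sum(m[j] * G[j][i] for j in range(4)) % 2 for i in range(7))
     for m in product((0, 1), repeat=4)]
print("covering radius:", covering_radius(C, 7))      # 1: this code is perfect
print("max list size at radius 2:", max_list_size(C, 7, 2))
\end{verbatim}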

Decision trees are important both as interpretable models amenable to high-stakes decision-making, and as building blocks of ensemble methods such as random forests and gradient boosting. Their statistical properties, however, are not well understood. The most cited prior works have focused on deriving pointwise consistency guarantees for CART in a classical nonparametric regression setting. We take a different approach, and advocate studying the generalization performance of decision trees with respect to different generative regression models. This allows us to elicit their inductive bias, that is, the assumptions the algorithms make (or do not make) to generalize to new data, thereby guiding practitioners on when and how to apply these methods. In this paper, we focus on sparse additive generative models, which have both low statistical complexity and some nonparametric flexibility. We prove a sharp squared error generalization lower bound for a large class of decision tree algorithms fitted to sparse additive models with $C^1$ component functions. This bound is surprisingly much worse than the minimax rate for estimating such sparse additive models. The inefficiency is due not to greediness, but to the loss in power for detecting global structure when we average responses solely over each leaf, an observation that suggests opportunities to improve tree-based algorithms, for example, by hierarchical shrinkage. To prove these bounds, we develop new technical machinery, establishing a novel connection between decision tree estimation and rate-distortion theory, a sub-field of information theory.
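The following small simulation sketches the setting (not the lower-bound argument): responses drawn from a sparse additive model with smooth component functions, fitted by a single CART-style regression tree via scikit-learn. The model, sample size, and tree depth are illustrative assumptions, and the measured empirical error says nothing about the stated rates.
\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n, d = 5000, 20                      # n samples, d features, 3 of them active

def f_additive(X):
    # Sparse additive signal with smooth (C^1) component functions.
    return np.sin(2 * np.pi * X[:, 0]) + (X[:, 1] - 0.5) ** 2 + X[:, 2] ** 3

X = rng.random((n, d))
y = f_additive(X) + 0.1 * rng.normal(size=n)

tree = DecisionTreeRegressor(max_depth=8, random_state=0).fit(X, y)
X_test = rng.random((20000, d))
mse = np.mean((tree.predict(X_test) - f_additive(X_test)) ** 2)
print(f"squared generalization error of a depth-8 tree: {mse:.4f}")
\end{verbatim}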

We consider the termination problem for triangular weakly non-linear loops (twn-loops) over some ring $\mathcal{S}$ like $\mathbb{Z}$, $\mathbb{Q}$, or $\mathbb{R}$. Essentially, the guard of such a loop is an arbitrary quantifier-free Boolean formula over (possibly non-linear) polynomial inequations, and the body is a single assignment of the form $(x_1, \ldots, x_d) \longleftarrow (c_1 \cdot x_1 + p_1, \ldots, c_d \cdot x_d + p_d)$ where each $x_i$ is a variable, $c_i \in \mathcal{S}$, and each $p_i$ is a (possibly non-linear) polynomial over $\mathcal{S}$ and the variables $x_{i+1},\ldots,x_{d}$. We show that the question of termination can be reduced to the existential fragment of the first-order theory of $\mathcal{S}$ and $\mathbb{R}$. For loops over $\mathbb{R}$, our reduction implies decidability of termination. For loops over $\mathbb{Z}$ and $\mathbb{Q}$, it proves semi-decidability of non-termination. Furthermore, we present a transformation to convert certain non-twn-loops into twn-form. Then the original loop terminates iff the transformed loop terminates over a specific subset of $\mathbb{R}$, which can also be checked via our reduction. Moreover, we formalize a technique to linearize twn-loops in our setting and analyze its complexity. Based on these results, we prove complexity bounds for the termination problem of twn-loops as well as tight bounds for two important classes of loops which can always be transformed into twn-loops. Finally, we show that there is an important class of linear loops where our decision procedure results in an efficient procedure for termination analysis, i.e., where the parameterized complexity of deciding termination is polynomial.
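For illustration of the loop shape only, here is a toy twn-loop over $\mathbb{Z}$ run from a few sample states. This empirical run is not the decision procedure of the paper, which instead reduces termination to the existential fragment of the first-order theory of $\mathcal{S}$ and $\mathbb{R}$; the particular guard and polynomials are my own example.
\begin{verbatim}
def run_twn_loop(x1, x2, max_steps=10000):
    """Toy twn-loop over Z:  while x1 > 0: (x1, x2) <- (x1 - x2**2, x2 + 1)."""
    steps = 0
    while x1 > 0 and steps < max_steps:
        x1, x2 = x1 - x2 ** 2, x2 + 1    # c1 = 1, p1 = -x2^2; c2 = 1, p2 = 1
        steps += 1
    return steps if x1 <= 0 else None    # None: gave up after max_steps

for state in [(10, 0), (10 ** 6, 0), (7, -3)]:
    print(state, "terminates after", run_twn_loop(*state), "iterations")
\end{verbatim}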

The approximate uniform sampling of graphs with a given degree sequence is a well-known, extensively studied problem in theoretical computer science and has significant applications, e.g., in the analysis of social networks. In this work we study an extension of the problem, where degree intervals are specified rather than a single degree sequence. We are interested in sampling and counting graphs whose degree sequences satisfy the degree interval constraints. A natural scenario where this problem arises is in hypothesis testing on social networks that are only partially observed. In this work, we provide the first fully polynomial almost uniform sampler (FPAUS) as well as the first fully polynomial randomized approximation scheme (FPRAS) for sampling and counting, respectively, graphs with near-regular degree intervals, in which every node $i$ has a degree from an interval not too far away from a given $d \in \mathbb{N}$. In order to design our FPAUS, we rely on various state-of-the-art tools from Markov chain theory and combinatorics. In particular, we provide the first non-trivial algorithmic application of a breakthrough result of Liebenau and Wormald (2017) regarding an asymptotic formula for the number of graphs with a given near-regular degree sequence. Furthermore, we also make use of the recent breakthrough of Anari et al. (2019) on sampling a base of a matroid under a strongly log-concave probability distribution. As a more direct approach, we also study a natural Markov chain recently introduced by Rechner, Strowick and M\"uller-Hannemann (2018), based on three simple local operations: switches, hinge flips, and additions/deletions of a single edge. We obtain the first theoretical results for this Markov chain by showing it is rapidly mixing for the case of near-regular degree intervals of size at most one.
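As an illustration of the three local operations, the sketch below implements one proposal step of a switch / hinge-flip / edge insertion-deletion chain that respects per-node degree intervals. The data structures and the proposal scheme are simplifications chosen here, and this toy code establishes nothing about mixing times, which is what the paper proves.
\begin{verbatim}
import random

def mc_step(edges, lo, hi, n, rng=random):
    """One proposal of the switch / hinge-flip / edge insert-delete chain.
    edges : set of frozenset({u, v}) on vertex set {0, ..., n-1}
    lo, hi: dicts with the degree interval [lo[v], hi[v]] of each vertex."""
    deg = {v: 0 for v in range(n)}
    for e in edges:
        for v in e:
            deg[v] += 1
    edge_list = [tuple(sorted(e)) for e in edges]
    move = rng.choice(["switch", "hinge", "insert", "delete"])

    if move == "switch" and len(edge_list) >= 2:
        # Replace {a,b},{c,d} by {a,d},{c,b}; all degrees stay unchanged.
        (a, b), (c, d) = rng.sample(edge_list, 2)
        old = {frozenset((a, b)), frozenset((c, d))}
        new = {frozenset((a, d)), frozenset((c, b))}
        if all(len(e) == 2 for e in new) and not (new - old) & edges:
            edges -= old
            edges |= new
    elif move == "hinge" and edge_list:
        # Move one endpoint of an edge: {u, v} -> {u, w}.
        u, v = rng.choice(edge_list)
        w = rng.randrange(n)
        if (w not in (u, v) and frozenset((u, w)) not in edges
                and deg[v] - 1 >= lo[v] and deg[w] + 1 <= hi[w]):
            edges.remove(frozenset((u, v)))
            edges.add(frozenset((u, w)))
    elif move == "insert":
        u, w = rng.randrange(n), rng.randrange(n)
        if (u != w and frozenset((u, w)) not in edges
                and deg[u] + 1 <= hi[u] and deg[w] + 1 <= hi[w]):
            edges.add(frozenset((u, w)))
    elif move == "delete" and edge_list:
        u, v = rng.choice(edge_list)
        if deg[u] - 1 >= lo[u] and deg[v] - 1 >= lo[v]:
            edges.remove(frozenset((u, v)))
    return edges

# Toy run: intervals [0, 3] make the empty graph a feasible starting point.
n = 8
lo, hi = {v: 0 for v in range(n)}, {v: 3 for v in range(n)}
E = set()
for _ in range(200):
    E = mc_step(E, lo, hi, n)
print("sampled graph with", len(E), "edges")
\end{verbatim}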

Recently (Elkin, Filtser, Neiman 2017) introduced the concept of a {\it terminal embedding} from one metric space $(X,d_X)$ to another $(Y,d_Y)$ with a set of designated terminals $T\subset X$. Such an embedding $f$ is said to have distortion $\rho\ge 1$ if $\rho$ is the smallest value such that there exists a constant $C>0$ satisfying \begin{equation*} \forall x\in T\ \forall q\in X,\ C d_X(x, q) \le d_Y(f(x), f(q)) \le C \rho d_X(x, q) . \end{equation*} In the case that $X,Y$ are both Euclidean metrics with $Y$ being $m$-dimensional, recently (Narayanan, Nelson 2019), following work of (Mahabadi, Makarychev, Makarychev, Razenshteyn 2018), showed that distortion $1+\epsilon$ is achievable via such a terminal embedding with $m = O(\epsilon^{-2}\log n)$ for $n := |T|$. This generalizes the Johnson-Lindenstrauss lemma, which only preserves distances within $T$ and not to $T$ from the rest of space. The downside is that evaluating the embedding on some $q\in \mathbb{R}^d$ requires solving a semidefinite program with $\Theta(n)$ constraints in $m$ variables, and thus superlinear $\mathrm{poly}(n)$ runtime. Our main contribution in this work is to give a new data structure for computing terminal embeddings. We show how to pre-process $T$ to obtain an almost linear-space data structure that supports computing the terminal embedding image of any $q\in\mathbb{R}^d$ in sublinear time $n^{1-\Theta(\epsilon^2)+o(1)} + dn^{o(1)}$. To accomplish this, we leverage tools developed in the context of approximate nearest neighbor search.
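The snippet below only illustrates the distortion quantity in the definition above, evaluated empirically for a plain Gaussian Johnson-Lindenstrauss map on a finite sample of queries. A random linear map controls only the sampled queries with high probability, whereas a terminal embedding must control every $q\in\mathbb{R}^d$, which requires the constructions discussed in the text; the dimensions and $\epsilon$ below are arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 100, 1000, 0.5
m = int(np.ceil(8 * np.log(n) / eps ** 2))   # illustrative JL target dimension

T = rng.normal(size=(n, d))                  # terminals
Q = rng.normal(size=(200, d))                # a finite sample of queries
G = rng.normal(size=(m, d)) / np.sqrt(m)     # linear map f(x) = Gx  (C = 1)

TG, QG = T @ G.T, Q @ G.T
ratios = []
for q, qg in zip(Q, QG):
    num = np.linalg.norm(TG - qg, axis=1)    # ||f(x) - f(q)|| over terminals x
    den = np.linalg.norm(T - q, axis=1)      # ||x - q||
    ratios.append(num / den)
ratios = np.concatenate(ratios)
print("empirical distortion over the sampled queries:",
      round(float(ratios.max() / ratios.min()), 3))
\end{verbatim}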

Consider a system of $m$ polynomial equations $\{p_i(x) = b_i\}_{i \leq m}$ of degree $D\geq 2$ in an $n$-dimensional variable $x \in \mathbb{R}^n$, where every coefficient of each $p_i$ and each $b_i$ is chosen independently at random from some continuous distribution. We study the basic question of determining the smallest $m$ -- the algorithmic threshold -- for which efficient algorithms can find refutations (i.e., certificates of unsatisfiability) for such systems. This setting generalizes problems such as refuting random SAT instances, low-rank matrix sensing, and certifying pseudo-randomness of Goldreich's candidate generators and their generalizations. We show that, for every $d \in \mathbb{N}$, the $(n+m)^{O(d)}$-time canonical sum-of-squares (SoS) relaxation refutes such a system with high probability whenever $m \geq O(n) \cdot (\frac{n}{d})^{D-1}$. We prove a lower bound in the restricted low-degree polynomial model of computation which suggests that this trade-off between SoS degree and the number of equations is nearly tight for all $d$. We also confirm the predictions of this lower bound in a limited setting by showing a lower bound on the canonical degree-$4$ sum-of-squares relaxation for refuting random quadratic polynomials. Together, our results provide evidence for an algorithmic threshold for the problem at $m \gtrsim \widetilde{O}(n) \cdot n^{(1-\delta)(D-1)}$ for $2^{n^{\delta}}$-time algorithms, for all $\delta$.
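For a rough sense of scale, the following back-of-the-envelope sketch prints the quantity $n\cdot(n/d)^{D-1}$ governing the number of equations at which the degree-$d$ SoS relaxation refutes the system, for a few illustrative $(n, D, d)$ chosen here; the constant hidden in the $O(\cdot)$ is unspecified, so these are orders of magnitude only.
\begin{verbatim}
# Orders of magnitude only: the O(.) constant is unspecified.
for n in (100, 1000):
    for D in (2, 3):
        for d in (2, 10):
            scale = n * (n / d) ** (D - 1)
            print(f"n={n:4d} D={D} d={d:2d}: SoS refutation needs m ~ {scale:,.0f}")
\end{verbatim}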

In 1992 Mansour proved that every size-$s$ DNF formula is Fourier-concentrated on $s^{O(\log\log s)}$ coefficients. We improve this to $s^{O(\log\log k)}$ where $k$ is the read number of the DNF. Since $k$ is always at most $s$, our bound matches Mansour's for all DNFs and strengthens it for small-read ones. The previous best bound for read-$k$ DNFs was $s^{O(k^{3/2})}$. For $k$ up to $\tilde{\Theta}(\log\log s)$, we further improve our bound to the optimal $\mathrm{poly}(s)$; previously no such bound was known for any $k = \omega_s(1)$. Our techniques involve new connections between the term structure of a DNF, viewed as a set system, and its Fourier spectrum.
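To make "Fourier-concentrated" concrete, the brute-force sketch below computes all $2^n$ Fourier coefficients of a tiny DNF and counts how many of the largest ones capture $1-\epsilon$ of the Fourier weight. The three-term read-2 DNF, $n=6$, and $\epsilon$ are toy choices of mine, far too small to reflect the stated bounds.
\begin{verbatim}
from itertools import product
import numpy as np

n = 6
terms = [(0, 1), (1, 2), (3, 4)]        # OR of ANDs of positive literals (read-2)

def f(x):                                # +1 / -1 valued DNF
    return 1 if any(all(x[i] for i in t) for t in terms) else -1

points = list(product((0, 1), repeat=n))
vals = np.array([f(x) for x in points], dtype=float)

# Fourier coefficient: hat{f}(S) = E_x[ f(x) * (-1)^{sum_{i in S} x_i} ].
coeffs = []
for S in product((0, 1), repeat=n):
    chi = np.array([(-1) ** sum(x[i] for i in range(n) if S[i]) for x in points])
    coeffs.append(np.mean(vals * chi))

w = np.sort(np.array(coeffs) ** 2)[::-1]  # Fourier weights; sum to 1 by Parseval
eps = 0.05
k = int(np.searchsorted(np.cumsum(w), 1 - eps) + 1)
print(f"{k} of {2 ** n} coefficients capture {1 - eps:.2f} of the Fourier weight")
\end{verbatim}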
