
We introduce the notion of logarithmically concave (or log-concave) sequences in coding theory. A sequence $a_0, a_1, \dots, a_n$ of real numbers is called log-concave if $a_i^2 \ge a_{i-1}a_{i+1}$ for all $1 \le i \le n-1$. A natural sequence of positive numbers in coding theory is the weight distribution of a linear code, consisting of the nonzero values among the $A_i$'s, where $A_i$ denotes the number of codewords of weight $i$. We call a linear code log-concave if its nonzero weight distribution is log-concave. Our main contribution is to show that all binary Hamming codes of length $2^r -1$ ($r=3$ or $r \ge 5$), the binary extended Hamming codes of length $2^r$ ($r \ge 3$), and the second-order Reed-Muller codes $R(2, m)$ ($m \ge 2$) are log-concave, while the homogeneous and projective second-order Reed-Muller codes are either log-concave or 1-gap log-concave. Furthermore, we show that any MDS $[n, k]$ code over $\mathbb{F}_q$ with $3 \le k \le n/2 + 3$ is log-concave whenever $q \ge q_0(n, k)$, the larger root of a quadratic polynomial. We therefore expect that the concept of log-concavity will stimulate many interesting problems in coding theory.
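
The defining inequality is easy to check computationally. The following Python sketch (ours, not from the paper) tests log-concavity of the nonzero weight distribution, illustrated on the binary $[7,4]$ Hamming code, whose weight distribution is $A_0=1$, $A_3=A_4=7$, $A_7=1$.

```python
# Minimal sketch: check log-concavity of the nonzero weight distribution of a code.

def is_log_concave(seq):
    """Return True if a_i^2 >= a_{i-1} * a_{i+1} for all interior indices."""
    return all(seq[i] ** 2 >= seq[i - 1] * seq[i + 1] for i in range(1, len(seq) - 1))

# Full weight distribution A_0, ..., A_7 of the binary [7,4] Hamming code.
A = [1, 0, 0, 7, 7, 0, 0, 1]

# The paper's notion applies to the subsequence of nonzero A_i's.
nonzero = [a for a in A if a != 0]          # [1, 7, 7, 1]
print(is_log_concave(nonzero))              # True: 49 >= 7 and 49 >= 7
```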

Related Content


Evaluating real-valued expressions to high precision is a key building block in computational mathematics, physics, and numerics. A typical implementation uses a uniform precision for each operation, and doubles that precision until the real result can be bounded to some sufficiently narrow interval. However, this is wasteful: usually only a few operations really need to be performed at high precision, and the bulk of the expression could use much lower precision. Uniform precision can also waste iterations discovering the necessary precision and then still overestimate by up to a factor of two. We propose to instead use mixed-precision interval arithmetic to evaluate real-valued expressions. A key challenge is deriving the mixed-precision assignment both soundly and quickly. To do so, we introduce a sound variation of error Taylor series and condition numbers, specialized to interval arithmetic, that can be evaluated with minimal overhead thanks to an "exponent trick". Our implementation, Reval, achieves an average speed-up of 1.47x compared to the state-of-the-art Sollya tool, with the speed-up increasing to 4.92x on the most difficult input points. An examination of the precisions used with and without precision tuning shows that the speed-up results come from quickly assigning lower precisions for the majority of operations.
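
For contrast with the proposed mixed-precision approach, the following Python sketch (using mpmath's interval arithmetic; this is our illustration, not the Reval implementation) shows the uniform precision-doubling baseline described above: every operation runs at one shared precision, which is doubled until the result interval is narrow enough.

```python
# Sketch of the uniform-precision baseline (not Reval): evaluate an expression
# in interval arithmetic, doubling the working precision until the enclosing
# interval is narrower than the requested tolerance.
from mpmath import iv

def eval_uniform(expr, tol=1e-30, start_prec=53, max_prec=4096):
    prec = start_prec
    while prec <= max_prec:
        iv.prec = prec              # every operation uses this one precision
        result = expr()
        if result.delta < tol:      # interval is narrow enough; stop
            return result, prec
        prec *= 2                   # otherwise redo everything at double precision
    raise RuntimeError("precision limit reached without a tight enclosure")

# Example expression with heavy cancellation: exp(1e-10) - 1 - 1e-10
expr = lambda: iv.exp(iv.mpf("1e-10")) - 1 - iv.mpf("1e-10")
enclosure, prec_used = eval_uniform(expr)
print(enclosure, prec_used)
```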

We study functions $f : [0, 1]^d \rightarrow [0, 1]^d$ that are both monotone and contracting, and we consider the problem of finding an $\varepsilon$-approximate fixed point of $f$. We show that the problem lies in the complexity class UEOPL. We give an algorithm that finds an $\varepsilon$-approximate fixed point of a three-dimensional monotone contraction using $O(\log (1/\varepsilon))$ queries to $f$. We also give a decomposition theorem that allows us to use this result to obtain an algorithm that finds an $\varepsilon$-approximate fixed point of a $d$-dimensional monotone contraction using $O((c \cdot \log (1/\varepsilon))^{\lceil d / 3 \rceil})$ queries to $f$ for some constant $c$. Moreover, each step of both of our algorithms takes time that is polynomial in the representation of $f$. These results are strictly better than the best-known results for functions that are only monotone, or only contracting. All of our results also apply to Shapley stochastic games, which are known to be reducible to the monotone contraction problem. Thus we put Shapley games in UEOPL, and we give a faster algorithm for approximating the value of a Shapley game.
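
To convey the flavour of query-efficient fixed-point search, here is a one-dimensional sketch (ours, not the paper's algorithm): for a monotone $f:[0,1]\to[0,1]$, if $f(m) \ge m$ then $f$ maps $[m,1]$ into itself and so has a fixed point there (symmetrically for $f(m) \le m$), so binary search localizes a fixed point to an interval of width $\varepsilon$ using $O(\log(1/\varepsilon))$ queries; contraction then makes the midpoint an approximate fixed point.

```python
def approx_fixed_point(f, eps=1e-6):
    """Binary search for an approximate fixed point of a monotone f:[0,1]->[0,1].

    Invariant: [lo, hi] always contains a fixed point, since f(lo) >= lo and
    f(hi) <= hi plus monotonicity imply f maps [lo, hi] into itself.
    Uses O(log(1/eps)) queries to f.
    """
    lo, hi = 0.0, 1.0
    queries = 0
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        queries += 1
        if f(mid) >= mid:
            lo = mid            # a fixed point survives in [mid, hi]
        else:
            hi = mid            # a fixed point survives in [lo, mid]
    return 0.5 * (lo + hi), queries

# Example: a monotone contraction on [0,1] with contraction factor 1/2.
f = lambda x: 0.5 * x + 0.25    # unique fixed point at x = 0.5
x, q = approx_fixed_point(f, eps=1e-9)
print(x, abs(f(x) - x), q)      # x close to 0.5, small residual, ~30 queries
```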

Several statistical models exist for regression of a function $F$ on $\mathbb{R}^d$ without the statistical and computational curse of dimensionality, for example by imposing and exploiting geometric assumptions on the distribution of the data (e.g. that its support is low-dimensional), strong smoothness assumptions on $F$, or a special structure of $F$. Among the latter, compositional models, which assume $F=f\circ g$ with $g$ mapping to $\mathbb{R}^r$ for $r\ll d$, have been studied; they include classical single- and multi-index models and recent works on neural networks. While the case where $g$ is linear is rather well understood, much less is known when $g$ is nonlinear, and in particular for which $g$'s the curse of dimensionality in estimating $F$, or both $f$ and $g$, may be circumvented. In this paper, we consider a model $F(X):=f(\Pi_\gamma X)$, where $\Pi_\gamma:\mathbb{R}^d\to[0,\mathrm{len}_\gamma]$ is the closest-point projection onto the parameter of a regular curve $\gamma: [0,\mathrm{len}_\gamma]\to\mathbb{R}^d$ and $f:[0,\mathrm{len}_\gamma]\to\mathbb{R}$. The input data $X$ is not low-dimensional and may lie far from $\gamma$, conditioned on $\Pi_\gamma(X)$ being well-defined. The distribution of the data, $\gamma$, and $f$ are unknown. This model is a natural nonlinear generalization of the single-index model, which corresponds to $\gamma$ being a line. We propose a nonparametric estimator, based on conditional regression, and show that under suitable assumptions, the strongest of which is that $f$ is coarsely monotone, it can achieve the one-dimensional optimal minimax rate for nonparametric regression, up to the level of noise in the observations, and can be constructed in time $\mathcal{O}(d^2n\log n)$. All the constants in the learning bounds, in the minimal number of samples required for our bounds to hold, and in the computational complexity are at most low-order polynomials in $d$.
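
The following toy sketch (ours, not the paper's estimator) only illustrates the data model $F(X)=f(\Pi_\gamma X)$ and why the regression is intrinsically one-dimensional: here $\gamma$ is known and the projection is computed by brute force, whereas the paper's estimator must learn $\gamma$ as well.

```python
# Toy illustration of the model F(X) = f(Pi_gamma(X)) with a *known* curve gamma,
# followed by a simple 1D binned regression along the estimated curve parameter.
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 1000

def gamma(t):
    """A regular curve in R^d parameterized on [0, 1] (first 3 coords active)."""
    g = np.zeros((t.size, d))
    g[:, 0] = np.cos(2 * np.pi * t)
    g[:, 1] = np.sin(2 * np.pi * t)
    g[:, 2] = t
    return g

f = lambda t: np.sin(3 * t)                     # the unknown 1D link function

# Sample points near the curve and evaluate F(X) = f(Pi_gamma(X)) + noise.
t_true = rng.uniform(0, 1, n)
X = gamma(t_true) + 0.05 * rng.standard_normal((n, d))
y = f(t_true) + 0.01 * rng.standard_normal(n)

# Closest-point projection onto a dense discretization of gamma.
t_grid = np.linspace(0, 1, 500)
G = gamma(t_grid)                               # (500, d)
t_hat = t_grid[np.argmin(((X[:, None, :] - G[None]) ** 2).sum(-1), axis=1)]

# One-dimensional binned (piecewise-constant) regression of y on t_hat.
bins = np.linspace(0, 1, 30)
idx = np.digitize(t_hat, bins)
f_hat = np.array([y[idx == j].mean() if np.any(idx == j) else np.nan
                  for j in range(1, len(bins) + 1)])
print(f_hat[:5])
```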

We say that a function is rare-case hard against a given class of algorithms (the adversary) if every algorithm in the class computes the function correctly only on an $o(1)$-fraction of instances of size $n$ for large enough $n$. Starting from any NP-complete language, for each $k > 0$, we construct a function that cannot be computed correctly on even a $1/n^k$-fraction of instances by polynomial-sized circuit families if NP $\not \subset$ P/poly, and by polynomial-time algorithms if NP $\not \subset$ BPP; that is, functions that are rare-case hard against polynomial-sized circuits and polynomial-time algorithms, respectively. The constructed function is a number-theoretic polynomial evaluated over specific finite fields. For NP-complete languages that admit parsimonious reductions from all of NP (for example, SAT), the constructed functions are hard to compute on even a $1/n^k$-fraction of instances by polynomial-time algorithms and polynomial-sized circuit families simply if $P^{\#P} \not \subset$ BPP and $P^{\#P} \not \subset$ P/poly, respectively. We also show that if the Randomized Exponential Time Hypothesis (RETH) is true, none of these constructed functions can be computed on even a $1/n^k$-fraction of instances in subexponential time. These functions are very hard, almost always. While one may not be able to compute the values of these constructed functions efficiently, one can verify in polynomial time that an evaluation $s = f(x)$ is correct simply by asking a prover to compute $f(y)$ on targeted queries.
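
A classical mechanism behind such constructions is the random self-reducibility of low-degree polynomials over finite fields: the value at any point can be recovered from values on a random line through it, so an oracle that is correct on most inputs can be self-corrected everywhere. The sketch below shows this standard technique in Python (it is background illustration, not necessarily the exact reduction or verification protocol of this paper; the modulus and polynomial are hypothetical choices).

```python
# Sketch of the classical random self-reduction for low-degree polynomials mod a prime.
import random

P = 2**61 - 1          # a Mersenne prime modulus (illustrative choice)
DEGREE = 4

def true_f(x):
    """The 'hard' function: here just a fixed low-degree polynomial mod P."""
    return (3 * pow(x, 4, P) + 7 * pow(x, 2, P) + 11) % P

def unreliable_oracle(x):
    """An oracle that is correct on most inputs but occasionally lies."""
    return true_f(x) if random.random() > 0.01 else random.randrange(P)

def lagrange_at_zero(ts, vals):
    """Interpolate the unique degree-<=len(ts)-1 polynomial through (ts, vals), evaluate at 0."""
    acc = 0
    for i, ti in enumerate(ts):
        num, den = 1, 1
        for j, tj in enumerate(ts):
            if i != j:
                num = num * (-tj) % P
                den = den * (ti - tj) % P
        acc = (acc + vals[i] * num * pow(den, -1, P)) % P
    return acc

def self_correct(x):
    """Recover f(x) by querying the oracle on a random line through x."""
    r = random.randrange(1, P)
    ts = list(range(1, DEGREE + 2))                     # DEGREE+1 sample points
    vals = [unreliable_oracle((x + t * r) % P) for t in ts]
    return lagrange_at_zero(ts, vals)

x = 123456789
print(true_f(x), self_correct(x))   # equal unless one of the few queries was faulty
```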

Recently, Miller and Wu introduced the positive $\lambda$-calculus, a call-by-value $\lambda$-calculus with sharing obtained by assigning proof terms to the positively polarized focused proofs for minimal intuitionistic logic. The positive $\lambda$-calculus stands out among $\lambda$-calculi with sharing for a compactness property related to the sharing of variables. We show that -- thanks to compactness -- the positive calculus neatly captures the core of useful sharing, a technique for the study of reasonable time cost models.

This work addresses the fundamental linear inverse problem in compressive sensing (CS) by introducing a new type of regularizing generative prior. Our proposed method utilizes ideas from classical dictionary-based CS and, in particular, sparse Bayesian learning (SBL), to integrate a strong regularization towards sparse solutions. At the same time, by leveraging the notion of conditional Gaussianity, it also incorporates the adaptability from generative models to training data. However, unlike most state-of-the-art generative models, it is able to learn from a few compressed and noisy data samples and requires no optimization algorithm for solving the inverse problem. Additionally, similar to Dirichlet prior networks, our model parameterizes a conjugate prior enabling its application for uncertainty quantification. We support our approach theoretically through the concept of variational inference and validate it empirically using different types of compressible signals.
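
For context, the classical sparse Bayesian learning iteration that the abstract builds on is sketched below for the linear model $y = Ax + n$ (this is the standard EM-style SBL baseline, not the proposed generative prior): the posterior of $x$ given $y$ is Gaussian, and per-coefficient prior variances are re-estimated until most of them shrink toward zero.

```python
# Classical (EM-style) sparse Bayesian learning for y = A x + noise.
import numpy as np

def sbl(A, y, noise_var=1e-2, n_iter=100):
    m, n = A.shape
    gamma = np.ones(n)                                  # per-coefficient prior variances
    for _ in range(n_iter):
        # Posterior of x given y is Gaussian (conditional Gaussianity):
        Sigma = np.linalg.inv(A.T @ A / noise_var + np.diag(1.0 / gamma))
        mu = Sigma @ A.T @ y / noise_var
        # EM update of hyperparameters; small gammas prune coefficients.
        gamma = np.maximum(mu ** 2 + np.diag(Sigma), 1e-10)
    return mu, gamma

# Toy example: recover a 5-sparse vector from 40 random measurements.
rng = np.random.default_rng(1)
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat, _ = sbl(A, y, noise_var=1e-4)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # small relative error
```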

The tensor notation used in several areas of mathematics is a useful one, but it is not widely available to the functional programming community. In practice, the (embedded) domain-specific languages (DSLs) currently in use for tensor algebra are either 1. array-oriented languages that do not enforce or take advantage of tensor properties and algebraic structure, or 2. languages that follow the categorical structure of tensors but require the programmer to manipulate tensors in an unwieldy point-free notation. A deeper issue is that, for tensor calculus, the dominant pedagogical paradigm either assumes an audience comfortable with notational liberties that programmers cannot afford, or focuses on the applied mathematics of tensors, largely leaving their linguistic aspects (behaviour of variable binding, syntax and semantics, etc.) for the reader to figure out on their own. This state of affairs is hardly surprising because, as we highlight, several properties of standard tensor notation are somewhat exotic from the perspective of lambda calculi. We bridge the gap by defining a DSL, embedded in Haskell, whose syntax closely captures the index notation for tensors in wide use in the literature. The semantics of this EDSL is defined in terms of the algebraic structures which define tensors in their full generality. In this way, we believe that our EDSL can be used both as a tool for scientific computing and as a vehicle to express and present the theory and applications of tensors.
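
The Haskell EDSL itself is not reproduced here. As a rough point of reference for readers unfamiliar with index notation, numpy's einsum exposes a comparable index-based style in Python; this is only an analogy, since einsum enforces none of the typing or algebraic structure the EDSL provides.

```python
# Rough analogy only: einsum uses index labels to express contractions, which is
# the notational style the EDSL captures (with types and structure einsum lacks).
import numpy as np

g = np.eye(3)                         # a metric tensor g_{ij} (here Euclidean)
v = np.array([1.0, 2.0, 3.0])         # a vector v^j
T = np.arange(27.0).reshape(3, 3, 3)  # a rank-3 tensor T^{i j k}

v_lower = np.einsum("ij,j->i", g, v)          # lowering an index: v_i = g_{ij} v^j
contracted = np.einsum("ijk,j->ik", T, v)     # contraction over a shared index
trace = np.einsum("iji->j", T)                # trace over a repeated index of one tensor
print(v_lower, contracted.shape, trace)
```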

We develop algorithms for the optimization of convex objectives that have H\"older continuous $q$-th derivatives with respect to a $p$-norm by using a $q$-th order oracle, for $p, q \geq 1$. We can also optimize other structured functions. We do this by developing a non-Euclidean inexact accelerated proximal point method that makes use of an inexact uniformly convex regularizer. We also provide nearly matching lower bounds for any deterministic algorithm that interacts with the function via a local oracle.

The inductive biases of graph representation learning algorithms are often encoded in the background geometry of their embedding space. In this paper, we show that general directed graphs can be effectively represented by an embedding model that combines three components: a pseudo-Riemannian metric structure, a non-trivial global topology, and a unique likelihood function that explicitly incorporates a preferred direction in embedding space. We demonstrate the representational capabilities of this method by applying it to the task of link prediction on a series of synthetic and real directed graphs from natural language applications and biology. In particular, we show that low-dimensional cylindrical Minkowski and anti-de Sitter spacetimes can produce equal or better graph representations than curved Riemannian manifolds of higher dimensions.
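
To make the geometric ingredient concrete, here is a small sketch (ours; the paper's actual likelihood and training procedure are not reproduced) of the squared Minkowski interval between two embedded points, together with a hypothetical sigmoid-based edge score that treats the time coordinate as the preferred direction.

```python
# Sketch only: the Lorentzian (Minkowski) squared interval as embedding geometry,
# plus a *hypothetical* directed edge score built on it.  The metric signature and
# the exact likelihood are modelling choices for illustration, not the paper's.
import numpy as np

def minkowski_sq_interval(x, y):
    """Squared interval with signature (-, +, ..., +): -(dt)^2 + |dx|^2."""
    d = y - x
    return -d[0] ** 2 + np.dot(d[1:], d[1:])

def edge_score(u, v, r=1.0, tau=1.0):
    """Hypothetical score for a directed edge u -> v: prefers v in the 'future'
    of u (positive time difference) and at small squared interval."""
    s = minkowski_sq_interval(u, v)
    future = v[0] - u[0]                       # preferred direction in embedding space
    return 1.0 / (1.0 + np.exp((s + r - future) / tau))

u = np.array([0.0, 0.1, -0.2])
v = np.array([0.8, 0.2, 0.1])
print(edge_score(u, v), edge_score(v, u))      # asymmetric, as needed for directed graphs
```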

Cold-start problems are long-standing challenges for practical recommendation. Most existing recommendation algorithms rely on extensive observed data and are brittle in recommendation scenarios with few interactions. This paper addresses such problems using few-shot learning and meta learning. Our approach is based on the insight that good generalization from a few examples relies on both a generic model initialization and an effective strategy for adapting this model to newly arising tasks. To accomplish this, we combine scenario-specific learning with model-agnostic sequential meta-learning and unify them into an integrated end-to-end framework, namely the Scenario-specific Sequential Meta learner (or s^2 meta). By doing so, our meta-learner produces a generic initial model by aggregating contextual information from a variety of prediction tasks, while effectively adapting to specific tasks by leveraging learning-to-learn knowledge. Extensive experiments on various real-world datasets demonstrate that our proposed model achieves significant gains over the state of the art for cold-start problems in online recommendation. The model has been deployed in the "Guess You Like" section of the Mobile Taobao front page.
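
To illustrate the "shared initialization plus fast per-task adaptation" idea in isolation, the sketch below runs a generic first-order meta-learning loop (Reptile-style) on toy linear regression tasks. It is our simplification for exposition, not the s^2 meta architecture.

```python
# Generic first-order meta-learning loop (Reptile-style) on toy linear tasks.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
W0 = rng.standard_normal(dim)             # shared structure across "scenarios"
theta = np.zeros(dim)                     # meta-initialization to be learned

def sample_task():
    """Each scenario is a linear regression whose weights are close to W0."""
    w = W0 + 0.1 * rng.standard_normal(dim)
    X = rng.standard_normal((20, dim))
    y = X @ w + 0.1 * rng.standard_normal(20)
    return X, y

def adapt(theta, X, y, lr=0.05, steps=5):
    """A few gradient steps of task-specific adaptation from the shared init."""
    w = theta.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

meta_lr = 0.1
for it in range(500):
    X, y = sample_task()
    w_task = adapt(theta, X, y)
    theta += meta_lr * (w_task - theta)   # outer update: move init toward adapted weights

# At deployment, a new cold-start scenario adapts from theta using few examples.
X_new, y_new = sample_task()
w_new = adapt(theta, X_new[:5], y_new[:5])
print(np.mean((X_new[5:] @ w_new - y_new[5:]) ** 2))   # held-out error after few-shot adaptation
```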
