
A minimum storage regenerating (MSR) subspace family of $\mathbb{F}_q^{2m}$ is a set $\mathcal{S}$ of $m$-spaces in $\mathbb{F}_q^{2m}$ such that for any $m$-space $S$ in $\mathcal{S}$ there exists an element of $\mathrm{PGL}(2m, q)$ which maps $S$ to itself and fixes $\mathcal{S} \setminus \{ S \}$ pointwise. We show that an MSR subspace family of $2$-spaces in $\mathbb{F}_q^4$ has size at most $6$, with equality if and only if it is a particular subset of a Segre variety. This implies that an $(n, n-2, 4)$-MSR code has $n \leq 9$.
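
As a concrete illustration of the objects involved (our own sketch, not from the paper), the following Python snippet enumerates the $2$-spaces of $\mathbb{F}_2^4$ by their reduced-row-echelon-form bases; an MSR subspace family is a subset of these. The count matches the Gaussian binomial coefficient $\binom{4}{2}_2 = 35$.

```python
# Illustrative sketch: enumerate the 2-spaces of F_2^4 by their canonical
# (reduced row echelon form) bases.
from itertools import product

import numpy as np

def rref_mod2(M):
    """Reduced row echelon form of a 0/1 matrix over F_2."""
    M = M.copy() % 2
    pivot = 0
    for col in range(M.shape[1]):
        rows = [r for r in range(pivot, M.shape[0]) if M[r, col]]
        if not rows:
            continue
        M[[pivot, rows[0]]] = M[[rows[0], pivot]]   # swap pivot row up
        for r in range(M.shape[0]):
            if r != pivot and M[r, col]:
                M[r] ^= M[pivot]                    # eliminate over F_2
        pivot += 1
    return M

# Collect canonical bases of all 2-dimensional subspaces of F_2^4.
seen = set()
for rows in product(product(range(2), repeat=4), repeat=2):
    R = rref_mod2(np.array(rows, dtype=int))
    if R.any(axis=1).sum() == 2:                    # rank 2 => a 2-space
        seen.add(R.tobytes())

print(len(seen))  # 35 = (2^4-1)(2^4-2) / ((2^2-1)(2^2-2))
```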

Related content

In Bayesian inference, a widespread technique for approximately sampling from, and computing statistics of, a high-dimensional posterior is the Laplace approximation, a Gaussian proxy to the posterior. The accuracy of the Laplace approximation improves as the sample size grows, but the question of how fast the dimension $d$ can grow with the sample size $n$ has not been fully resolved. Prior works have shown that $d^3\ll n$ is a sufficient condition for accuracy of the approximation. By deriving the leading-order contribution to the TV error, we show that $d^2\ll n$ suffices. We also show, for a logistic regression posterior, that this growth condition is necessary.
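
The following toy sketch (our own, not the paper's analysis) shows the Laplace approximation in the logistic regression setting the abstract mentions: compute the MAP estimate, form the negative log-posterior Hessian there, and use the resulting Gaussian as a proxy posterior.

```python
# Toy Laplace approximation for a logistic regression posterior:
# approximate the posterior by N(theta_hat, H^{-1}), where theta_hat is
# the MAP and H is the negative log-posterior Hessian at theta_hat.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d = 2000, 10                                  # note d^2 << n here
X = rng.standard_normal((n, d)) / np.sqrt(d)
theta_true = rng.standard_normal(d)
y = rng.random(n) < 1 / (1 + np.exp(-X @ theta_true))

def neg_log_post(theta):
    z = X @ theta
    # log(1 + e^z) - y*z, plus a standard normal prior
    return np.sum(np.logaddexp(0, z) - y * z) + 0.5 * theta @ theta

theta_hat = minimize(neg_log_post, np.zeros(d), method="BFGS").x

p = 1 / (1 + np.exp(-X @ theta_hat))
H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(d)   # Hessian at the MAP

# Samples from the Gaussian proxy to the posterior:
samples = rng.multivariate_normal(theta_hat, np.linalg.inv(H), size=1000)
print(samples.mean(axis=0))                      # close to theta_hat
```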

Reinforcement learning (RL) algorithms have proven transformative in a range of domains. To tackle real-world domains, these systems often use neural networks to learn policies directly from pixels or other high-dimensional sensory input. By contrast, much theory of RL has focused on discrete state spaces or worst-case analysis, and fundamental questions remain about the dynamics of policy learning in high-dimensional settings. Here, we propose a solvable high-dimensional model of RL that can capture a variety of learning protocols, and derive its typical dynamics as a set of closed-form ordinary differential equations (ODEs). We derive optimal schedules for the learning rates and task difficulty, analogous to annealing schemes and curricula during training in RL, and show that the model exhibits rich behaviour, including delayed learning under sparse rewards; a variety of learning regimes depending on reward baselines; and a speed-accuracy trade-off driven by reward stringency. Experiments on variants of the Procgen game "Bossfight" and Arcade Learning Environment game "Pong" also show such a speed-accuracy trade-off in practice. Together, these results take a step towards closing the gap between theory and practice in high-dimensional RL.
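
As a purely illustrative toy (our own construction, not the paper's solvable model), the sketch below runs REINFORCE with a Gaussian policy, comparing a constant learning rate against an annealed schedule, one of the ingredients whose effects the ODE analysis characterizes; the printed numbers are only indicative of a speed-accuracy trade-off.

```python
# Toy policy-gradient experiment: Gaussian policy N(w, sigma^2 I) with a
# running reward baseline; constant vs annealed learning-rate schedule.
import numpy as np

rng = np.random.default_rng(1)
d, sigma, steps = 10, 0.1, 10000
w_star = rng.standard_normal(d) / np.sqrt(d)        # target policy parameters

def run(eta0, tau):
    w, b = np.zeros(d), 0.0
    for t in range(steps):
        a = w + sigma * rng.standard_normal(d)      # sample an action
        r = -np.sum((a - w_star) ** 2)              # reward
        eta = eta0 / (1 + t / tau) if tau else eta0 # annealed or constant rate
        w += eta * (r - b) * (a - w) / sigma**2     # REINFORCE update
        b = 0.9 * b + 0.1 * r                       # running reward baseline
    return np.sum((w - w_star) ** 2)                # final squared error

print("constant rate:", run(2e-3, tau=None))        # fast, higher error floor
print("annealed rate:", run(2e-3, tau=1000))        # slower, lower final error
```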

The algebraic degree is an important parameter of Boolean functions used in cryptography. When a function in a large number of variables is not given explicitly in algebraic normal form, it might not be feasible to compute its degree. Instead, one can try to estimate the degree using probabilistic tests. We propose a probabilistic test for deciding whether the algebraic degree of a Boolean function $f$ is below a certain value $k$. The test involves picking an affine space of dimension $k$ and testing whether the values of $f$ on that space sum to zero. If $\deg(f)<k$, then $f$ will always pass the test; otherwise it will sometimes pass and sometimes fail, depending on which affine space was chosen. The probability of failing the proposed test is closely related to the number of monomials of degree $k$ in a polynomial $g$, averaged over all the polynomials $g$ which are affine equivalent to $f$. We initiate the study of the probability of failing the proposed ``$\deg(f)<k$'' test. We show that in the particular case when the degree of $f$ is actually equal to $k$, this probability lies in the interval $(0.288788, 0.5]$, and therefore a small number of runs of the test suffices to give, with very high probability, the correct answer. Exact values of this probability for all the polynomials in 8 variables were computed using the representatives listed by Hou and by Langevin and Leander.
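
A minimal sketch of the test (our own illustration), under the standard identification of points of $\mathbb{F}_2^n$ with $n$-bit integers:

```python
# Probabilistic "deg(f) < k" test: XOR the values of f over a random
# k-dimensional affine subspace of F_2^n.  If deg(f) < k the sum is
# always 0; otherwise the test fails (returns 1) with positive probability.
from itertools import product
import random

def insert(basis, v):
    """Insert bitmask v into an echelon F_2 basis (dict: leading bit -> vector).
    Returns False if v already lies in the span."""
    while v:
        h = v.bit_length() - 1
        if h not in basis:
            basis[h] = v
            return True
        v ^= basis[h]
    return False

def degree_test(f, n, k, rng=random):
    """Return the XOR of f over a random k-dimensional affine subspace."""
    base = rng.randrange(2 ** n)
    basis = {}
    while len(basis) < k:                      # k independent directions
        insert(basis, rng.randrange(1, 2 ** n))
    dirs = list(basis.values())
    s = 0
    for coeffs in product((0, 1), repeat=k):   # all 2^k points of the space
        x = base
        for c, d in zip(coeffs, dirs):
            if c:
                x ^= d
        s ^= f(x)
    return s

# Example: f = x0*x1*x2 (degree 3) on n = 6 variables.
f = lambda x: (x & 1) & ((x >> 1) & 1) & ((x >> 2) & 1)
fails = sum(degree_test(f, 6, 3) for _ in range(1000))
print(fails / 1000)   # empirical failure probability of the "deg < 3" test
```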

The Plackett--Luce model is a popular approach for ranking data analysis, where a utility vector is employed to determine the probability of each outcome based on Luce's choice axiom. In this paper, we investigate the asymptotic theory of utility vector estimation by maximizing different types of likelihood, such as the full-, marginal-, and quasi-likelihood. We provide a rank-matching interpretation for the estimating equations of these estimators and analyze their asymptotic behavior as the number of items being compared tends to infinity. In particular, we establish the uniform consistency of these estimators under conditions characterized by the topology of the underlying comparison graph sequence and demonstrate that the proposed conditions are sharp for common sampling scenarios such as the nonuniform random hypergraph model and the hypergraph stochastic block model; we also obtain the asymptotic normality of these estimators and discuss the trade-off between statistical efficiency and computational complexity for practical uncertainty quantification. Both results allow for nonuniform and inhomogeneous comparison graphs with varying edge sizes and different asymptotic orders of edge probabilities. We verify our theoretical findings by conducting detailed numerical experiments.
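
A minimal sketch of full-likelihood estimation in the Plackett--Luce model (illustrating the model and estimator, not the paper's asymptotic analysis): rankings are sampled via the Gumbel-max trick, and utilities are fit by gradient ascent on the log-likelihood.

```python
# Plackett-Luce full-likelihood fit: for a ranking pi, the log-likelihood
# is sum_t [u_{pi_t} - log sum_{s>=t} exp(u_{pi_s})].
import numpy as np

rng = np.random.default_rng(0)
m, n_rankings = 8, 3000
u_true = rng.standard_normal(m)

def sample_ranking(u):
    """Sample a full ranking from Plackett-Luce via the Gumbel-max trick."""
    return np.argsort(-(u + rng.gumbel(size=len(u))))

P = np.array([sample_ranking(u_true) for _ in range(n_rankings)])

u = np.zeros(m)
for _ in range(1000):                              # gradient ascent
    W = np.exp(u[P])                               # weights in ranked order
    T = np.cumsum(W[:, ::-1], axis=1)[:, ::-1]     # T[t] = sum over positions >= t
    G = 1.0 - W * np.cumsum(1.0 / T, axis=1)       # d loglik / du per (ranking, item)
    grad = np.zeros(m)
    np.add.at(grad, P, G)                          # scatter-add back to items
    u += 0.2 * grad / n_rankings
    u -= u.mean()                                  # utilities identified up to a shift
print(np.max(np.abs(u - (u_true - u_true.mean()))))  # estimation error
```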

We study a pricing problem where a seller has $k$ identical copies of a product, buyers arrive sequentially, and the seller prices the items aiming to maximize social welfare. When $k=1$, this is the so-called "prophet inequality" problem, for which there is a simple pricing scheme achieving a competitive ratio of $1/2$. On the other end of the spectrum, as $k$ goes to infinity, the asymptotic performance of both static and adaptive pricing is well understood. We provide a static pricing scheme for the small-supply regime, where $k$ is small but larger than $1$. Prior to our work, the best competitive ratio known for this setting was the $1/2$ that follows from the single-unit prophet inequality. Our pricing scheme is easy to describe as well as practical: it is anonymous, non-adaptive, and order-oblivious. We pick a single price that equalizes the expected fraction of items sold and the probability that the supply does not sell out before all customers are served; this price is then offered to each customer while supply lasts. This extends an approach introduced by Samuel-Cahn for the case of $k=1$. This pricing scheme achieves a competitive ratio that increases gradually with the supply. Subsequent work by Jiang, Ma, and Zhang shows that our pricing scheme is the optimal static pricing for every value of $k$.
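
The balancing price is easy to compute numerically. The sketch below (our own simplification, assuming i.i.d. uniform buyer values and reading "does not sell out" as demand staying below $k$) estimates both sides by Monte Carlo and solves for the price by bisection:

```python
# Find the single price at which the expected fraction of k items sold
# equals the probability that demand stays below k.
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 50, 4, 200_000

def gap(price):
    # Demand B = number of buyers with value >= price (values ~ U[0,1]).
    B = rng.binomial(n, max(0.0, 1.0 - price), size=trials)
    frac_sold = np.minimum(B, k).mean() / k
    no_sellout = (B < k).mean()
    return frac_sold - no_sellout

lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    # Higher price -> fewer sales, more likely no sellout -> gap decreases.
    if gap(mid) > 0:
        lo = mid
    else:
        hi = mid
print(f"balancing price ~ {(lo + hi) / 2:.3f}")
```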

We propose and study a new privacy definition, termed Probably Approximately Correct (PAC) Privacy. PAC Privacy characterizes the information-theoretic hardness of recovering sensitive data given arbitrary information disclosure/leakage during/after any processing. Unlike the classic cryptographic definition and Differential Privacy (DP), which consider the adversarial (input-independent) worst case, PAC Privacy is a simulatable metric that quantifies the instance-based impossibility of inference. A fully automatic analysis and proof generation framework is proposed: security parameters can be produced with arbitrarily high confidence via Monte-Carlo simulation for any black-box data processing oracle. This appealing automation property enables analysis of complicated data processing, where the worst-case proof in the classic privacy regime could be loose or even intractable. Moreover, we show that the produced PAC Privacy guarantees enjoy simple composition bounds, and that the automatic analysis framework can be implemented in an online fashion to analyze the composite PAC Privacy loss even under correlated randomness. On the utility side, the magnitude of (necessary) perturbation required in PAC Privacy is not lower bounded by $\Theta(\sqrt{d})$ for a $d$-dimensional release but could be $O(1)$ for many practical data processing tasks, in contrast to the input-independent worst-case information-theoretic lower bound. Example applications of PAC Privacy are included with comparisons to existing works.
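
A highly simplified sketch of the Monte-Carlo style of analysis described above (our own toy, not the paper's algorithm; it assumes a Gaussian mutual-information bound of the form $\tfrac{1}{2}\log\det(I + \Sigma_M/\sigma^2)$): estimate the output covariance of a black-box mechanism over resampled datasets, then calibrate Gaussian noise to keep that bound under a budget.

```python
# Monte-Carlo calibration of Gaussian noise for a black-box mechanism.
import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 5, 200, 2000

def mechanism(data):
    """Black-box data processing oracle: here, just the column means."""
    return data.mean(axis=0)

# Monte-Carlo estimate of the output covariance over the data distribution.
outputs = np.array([mechanism(rng.standard_normal((n, d))) for _ in range(trials)])
Sigma_M = np.cov(outputs, rowvar=False)

budget = 0.1  # mutual-information budget (nats)

def mi_bound(sigma2):
    eig = np.linalg.eigvalsh(Sigma_M)
    return 0.5 * np.sum(np.log1p(eig / sigma2))    # (1/2) log det(I + Sigma_M/s^2)

# Smallest isotropic noise variance meeting the budget, by bisection.
lo, hi = 1e-8, 1e3
for _ in range(100):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if mi_bound(mid) > budget else (lo, mid)

release = mechanism(rng.standard_normal((n, d))) + np.sqrt(hi) * rng.standard_normal(d)
print(f"noise variance {hi:.2e}, MI bound {mi_bound(hi):.3f} nats")
print("noisy release:", np.round(release, 3))
```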

The Taylor expansion, which stems from Linear Logic and its differential extensions, is an approximation framework for the $\lambda$-calculus (and many of its variants). The reduction of the approximants of a $\lambda$-term induces a reduction on the $\lambda$-term itself, which enjoys a simulation property: whenever a term reduces to another, the approximants reduce accordingly. In recent work, we extended this result to an infinitary $\lambda$-calculus (namely, $\Lambda_{\infty}^{001}$). This short paper settles the question of whether the converse property also holds: if the approximants of one term reduce to the approximants of another, is there a $\beta$-reduction between these terms? This turns out to be true for the $\lambda$-calculus, as we show, but our proof fails in the infinitary case. We exhibit a counter-example, refuting conservativity for $\Lambda_{\infty}^{001}$.

We study the approximation properties of complex-valued polynomial Trefftz spaces for the $(d+1)$-dimensional linear time-dependent Schr\"odinger equation. More precisely, we prove that for the space-time Trefftz discontinuous Galerkin variational formulation proposed by G\'omez and Moiola (SIAM J. Numer. Anal., 60(2):688-714, 2022), the same $h$-convergence rates as for polynomials of degree $p$ in $(d + 1)$ variables can be obtained in a mesh-dependent norm by using a space of Trefftz polynomials of anisotropic degree. For such a space, the dimension is equal to that of the space of polynomials of degree $2p$ in $d$ variables, and bases are easily constructed.
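
The dimension statement is easy to check numerically: the anisotropic Trefftz space has the dimension $\binom{2p+d}{d}$ of degree-$2p$ polynomials in $d$ variables, versus $\binom{p+d+1}{d+1}$ for full degree-$p$ polynomials in the $d+1$ space-time variables. A quick comparison (our own check):

```python
# Compare dim P_{2p}(R^d) (the Trefftz space size) with dim P_p(R^{d+1}).
from math import comb

for d in (1, 2, 3):
    for p in (1, 2, 3, 4):
        trefftz = comb(2 * p + d, d)        # dim P_{2p}(R^d)
        full = comb(p + d + 1, d + 1)       # dim P_p(R^{d+1})
        print(f"d={d} p={p}: Trefftz {trefftz:3d} vs full {full:3d}")
```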

This paper proposes a connection method \`a la Bibel for an exception-tolerant family of description logics (DLs). As for the language, we assume the DL $\mathcal{ALCH}$ extended with two typicality operators: one on (complex) concepts and one on role names. The language is a variant of defeasible DLs, as broadly studied in the literature over the past decade, into which most of them can be embedded. We revisit the definition of the matrix representation of a knowledge base and establish the conditions for a given axiom to be provable. We show that the calculus terminates and is sound and complete w.r.t. a DL version of the preferential semantics widely adopted in non-monotonic reasoning.

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
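
A sketch of the re-weighting scheme (our own illustration of the stated formula, not the authors' released code): per-class weights are inverses of the effective numbers $(1-\beta^{n})/(1-\beta)$, normalized to mean one.

```python
# Class-balanced weights from the effective number of samples.
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    """Per-class weights: inverse effective number (1 - beta^n) / (1 - beta)."""
    counts = np.asarray(counts, dtype=float)
    effective = (1.0 - beta ** counts) / (1.0 - beta)
    weights = 1.0 / effective
    return weights * len(counts) / weights.sum()   # normalize to mean 1

counts = [5000, 2000, 500, 100, 10]                # long-tailed class sizes
w = class_balanced_weights(counts)
print(np.round(w, 3))                              # tail classes get larger weights

# These weights can be passed, e.g., to a weighted cross-entropy loss:
#   torch.nn.functional.cross_entropy(logits, targets,
#       weight=torch.tensor(w, dtype=torch.float))
```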
