
We describe a decisional attack against a version of the PLWE problem in which the samples are taken from a certain proper subring of large dimension of the cyclotomic ring $\mathbb{F}_q[x]/(\Phi_{p^k}(x))$ with $k>1$ in the case where $q\equiv 1\pmod{p}$ but $\Phi_{p^k}(x)$ is not totally split over $\mathbb{F}_q$. Our attack exploits the fact that the roots of $\Phi_{p^k}(x)$ in suitable extensions of $\mathbb{F}_q$ have zero trace, and it succeeds with overwhelming probability as a function of the number of input samples. An implementation in Maple and some examples of our attack are also provided.
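
The paper's implementation is in Maple; the following SymPy sketch merely illustrates the zero-trace fact the attack relies on, for toy parameters $p=3$, $k=2$, $q=7$ chosen here for illustration ($q\equiv 1\pmod 3$ but $\Phi_9$ is not totally split modulo $7$):

```python
# Check the zero-trace property of roots of Phi_{p^k} used by the attack.
# Toy parameters (illustrative, not from the paper): p = 3, k = 2, q = 7,
# so q = 1 (mod p) but q != 1 (mod p^2): Phi_9 is not totally split mod q.
from sympy import symbols, Poly, cyclotomic_poly

x = symbols('x')
p, k, q = 3, 2, 7
phi = Poly(cyclotomic_poly(p**k, x), x, modulus=q)

# Factor Phi_{p^k} over F_q; each irreducible factor f of degree d
# realizes F_{q^d} = F_q[x]/(f), with the class of x a root alpha.
for f, _ in phi.factor_list()[1]:
    d = f.degree()
    # Trace from F_{q^d} down to F_q: alpha + alpha^q + ... + alpha^(q^(d-1)),
    # computed as x^(q^i) reduced modulo f.
    trace = Poly(0, x, modulus=q)
    for i in range(d):
        trace += Poly(x, x, modulus=q) ** (q**i) % f
    print(f"factor {f.as_expr()} (degree {d}): trace of root =", trace.as_expr())
```

For these parameters $\Phi_9(x)=x^6+x^3+1$ splits into two cubics modulo $7$, and each root $\alpha$ satisfies $\alpha+\alpha^{7}+\alpha^{49}=\alpha(1+\alpha^{3}+\alpha^{6})=0$, so both printed traces vanish.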

Related content

The max-relative entropy together with its smoothed version is a basic tool in quantum information theory. In this paper, we derive the exact exponent for the asymptotic decay of the perturbation incurred by a quantum state when smoothing the max-relative entropy based on the purified distance. We then apply this result to the problem of privacy amplification against quantum side information, obtaining an upper bound on the exponent of the asymptotic decrease of the insecurity, measured using either purified distance or relative entropy. Our upper bound complements the earlier lower bound established by Hayashi, and the two bounds match when the rate of randomness extraction is above a critical value; thus, in the high-rate case, we have determined the exact security exponent. Following this, we give examples and show that in the low-rate case neither the upper bound nor the lower bound is tight in general. This exhibits a picture similar to that of the error exponent in channel coding. Lastly, we investigate the asymptotics of equivocation and its exponent under the security measure given by the sandwiched R\'enyi divergence of order $s\in (1,2]$, which has not been addressed previously in the quantum setting.
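
For reference, the standard definitions (not restated in the abstract): the max-relative entropy and its smoothing under the purified distance $P$ are

$$D_{\max}(\rho\,\|\,\sigma)=\log\min\{\lambda>0 : \rho\le\lambda\sigma\},\qquad D_{\max}^{\varepsilon}(\rho\,\|\,\sigma)=\min_{\tilde\rho:\,P(\tilde\rho,\rho)\le\varepsilon} D_{\max}(\tilde\rho\,\|\,\sigma),$$

where $P(\rho,\sigma)=\sqrt{1-F(\rho,\sigma)^{2}}$ and $F$ denotes the fidelity.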

One of the major open problems in complexity theory is proving super-logarithmic lower bounds on the depth of circuits (i.e., $\mathbf{P}\not\subseteq\mathbf{NC}^{1}$). Karchmer, Raz, and Wigderson (Computational Complexity 5(3/4), 1995) suggested approaching this problem by proving that the depth complexity of a composition of functions $f\diamond g$ is roughly the sum of the depth complexities of $f$ and $g$. They showed that the validity of this conjecture would imply that $\mathbf{P}\not\subseteq\mathbf{NC}^{1}$. The intuition that underlies the KRW conjecture is that the composition $f\diamond g$ should behave like a "direct-sum problem", in a certain sense, and therefore the depth complexity of $f\diamond g$ should be the sum of the individual depth complexities. Nevertheless, there are two obstacles toward turning this intuition into a proof: first, we do not know how to prove that $f\diamond g$ must behave like a direct-sum problem; second, we do not know how to prove that the complexity of the latter direct-sum problem is indeed the sum of the individual complexities. In this work, we focus on the second obstacle. To this end, we study a notion called "strong composition", which is the same as $f\diamond g$ except that it is forced to behave like a direct-sum problem. We prove a variant of the KRW conjecture for strong composition, thus overcoming the above second obstacle. This result demonstrates that the first obstacle above is the crucial barrier toward resolving the KRW conjecture. Along the way, we develop some general techniques that might be of independent interest.
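
For context, the standard formulation (paraphrased): for Boolean functions $f:\{0,1\}^m\to\{0,1\}$ and $g:\{0,1\}^n\to\{0,1\}$, the composition and the KRW conjecture read

$$(f\diamond g)(x_1,\dots,x_m)=f\bigl(g(x_1),\dots,g(x_m)\bigr),\qquad \mathsf{D}(f\diamond g)\approx\mathsf{D}(f)+\mathsf{D}(g),$$

where $\mathsf{D}(\cdot)$ denotes the minimal depth of a fan-in-two Boolean circuit computing the function.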

In this article, we discuss the error analysis for a certain class of monotone finite volume schemes approximating nonlocal scalar conservation laws, modeling traffic flow and crowd dynamics, without any additional assumptions on monotonicity or linearity of the kernel $\mu$ or the flux $f$. We first prove a novel Kuznetsov-type lemma for this class of PDEs and thereby show that the finite volume approximations converge to the entropy solution at the rate of $\sqrt{\Delta t}$ in $L^1(\mathbb{R})$. To the best of our knowledge, this is the first proof of any type of convergence rate for this class of conservation laws. We also present numerical experiments to illustrate this result.
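
As a concrete (and deliberately simple) instance of such a scheme, the sketch below applies an upwind monotone finite volume method to an illustrative nonlocal LWR-type model $u_t+\bigl(u\,v(\mu*u)\bigr)_x=0$ with $v(s)=1-s$ on a periodic domain; the model, kernel, and parameters here are assumptions for illustration, and the paper's class is more general:

```python
import numpy as np

# Minimal upwind finite volume sketch for an illustrative nonlocal LWR model
#   u_t + ( u * v(mu conv u) )_x = 0,  v(s) = 1 - s,  u in [0, 1],
# on a periodic domain; the paper's scheme class and assumptions are broader.
nx, T = 400, 0.2
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx
u = np.where(np.abs(x - 0.5) < 0.1, 0.8, 0.1)     # initial density in [0, 1]

r = 20                                            # kernel support = r cells
mu = np.full(r, 1.0 / r)                          # discrete kernel weights, sum 1

dt = 0.4 * dx                                     # CFL time step, since |v| <= 1
t = 0.0
while t < T:
    # Nonlocal term at the right interface of each cell: weighted average
    # of the r downstream cell values (discrete convolution mu * u).
    conv = sum(mu[i] * np.roll(u, -(i + 1)) for i in range(r))
    v = 1.0 - conv                                # nonlocal velocity, >= 0 here
    F = u * v                                     # upwind flux (v >= 0)
    u = u - (dt / dx) * (F - np.roll(F, 1))       # conservative update
    t += dt

print("mass:", u.sum() * dx, " min/max:", u.min(), u.max())
```

The printed mass stays constant up to rounding, reflecting the conservative form of the update.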

Bayesian Optimization (BO) is typically used to optimize an unknown function $f$ that is noisy and costly to evaluate, by exploiting an acquisition function that must be maximized at each optimization step. Although provably asymptotically optimal BO algorithms are efficient at optimizing low-dimensional functions, scaling them to high-dimensional spaces remains an open research problem, often tackled by assuming an additive structure for $f$. However, such algorithms introduce additional restrictive assumptions on the additive structure that reduce their applicability domain. In this paper, we relax the restrictive assumptions on the additive structure of $f$, at the expense of weakening the maximization guarantees of the acquisition function, and we address the over-exploration problem for decentralized BO algorithms. To these ends, we propose DuMBO, an asymptotically optimal decentralized BO algorithm that achieves very competitive performance against state-of-the-art BO algorithms, especially when the additive structure of $f$ does not exist or comprises high-dimensional factors.
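
For contrast with the paper's setting, here is a minimal sketch of the classical decentralized additive-GP-UCB recipe under the restrictive assumption that DuMBO relaxes, namely disjoint low-dimensional factors $f(x)=\sum_i f_i(x_{A_i})$; the objective, groups, and hyperparameters are all illustrative, and regressing $y$ directly on each group is a crude decentralized surrogate:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Sketch of additive BO with *disjoint* factors f(x) = f1(x[0:2]) + f2(x[2:4]):
# the restrictive structural assumption that methods like DuMBO relax.
rng = np.random.default_rng(0)
groups = [np.array([0, 1]), np.array([2, 3])]      # disjoint coordinate groups

def f(x):                                          # illustrative objective
    return -np.sum((x[:2] - 0.3) ** 2) - np.sum((x[2:] + 0.2) ** 2)

X = rng.uniform(-1, 1, size=(10, 4))               # initial design
y = np.array([f(xi) for xi in X])

for step in range(20):
    x_next = np.empty(4)
    for g in groups:
        # One local GP per factor: regress y on x[g] alone, a crude
        # decentralized surrogate that suffices for this sketch.
        gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-2).fit(X[:, g], y)
        cand = rng.uniform(-1, 1, size=(256, len(g)))   # random candidates
        m, s = gp.predict(cand, return_std=True)
        x_next[g] = cand[np.argmax(m + 2.0 * s)]        # per-factor UCB argmax
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))

print("best value found:", y.max())
```

Because the groups are disjoint, each factor's acquisition is maximized independently over its own coordinates; when factors overlap or are high-dimensional, as in the paper's target setting, this decomposition no longer applies directly.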

Increasingly often in applications, the observed data come from one or more random variables taking values in an infinite-dimensional space, e.g. curves. The need for tools adapted to the nature of these data explains the growing interest in the field of functional data analysis. The model we study in this paper assumes a linear dependence between a quantity of interest and several covariates, at least one of which has infinite dimension. To select the relevant covariates in this context, we investigate adaptations of the Lasso method. Two estimation methods are defined. The first consists in minimizing a Group-Lasso criterion on the multivariate functional space H. The second minimizes the same criterion on a finite-dimensional subspace of H whose dimension is chosen by a penalized least squares method. We prove sparsity oracle inequalities in the case where the design is fixed or random. To compute the solutions of both criteria in practice, we propose a coordinate descent algorithm. A numerical study on simulated and real data illustrates the behavior of the estimators.
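
The block update behind such a coordinate descent is group-wise soft-thresholding; the NumPy sketch below illustrates it for a finite-dimensional Group-Lasso criterion $\tfrac{1}{2n}\|y-\sum_g X_g\beta_g\|^2+\lambda\sum_g\|\beta_g\|$, with each group's columns orthonormalized so the update is closed-form (data and parameters are illustrative; the functional criterion in the paper acts on basis coefficients):

```python
import numpy as np

# Block coordinate descent for the Group-Lasso criterion
#   (1/2n) || y - sum_g X_g beta_g ||^2 + lam * sum_g ||beta_g||_2,
# with each X_g orthonormalized so that X_g^T X_g = n * I, which makes the
# block update an exact group soft-thresholding step. Illustrative sketch.
rng = np.random.default_rng(1)
n = 200
groups = [list(range(0, 5)), list(range(5, 10)), list(range(10, 15))]
X = rng.standard_normal((n, 15))
for g in groups:                                  # orthonormalize group columns
    Q, _ = np.linalg.qr(X[:, g])
    X[:, g] = np.sqrt(n) * Q
beta_true = np.zeros(15)
beta_true[:5] = [1, -2, 0.5, 0, 3]                # only the first group is active
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam, beta = 0.1, np.zeros(15)
r = y - X @ beta                                  # full residual
for it in range(100):
    for g in groups:
        z = beta[g] + X[:, g].T @ r / n           # group-wise least squares fit
        nz = np.linalg.norm(z)
        new = np.zeros_like(z) if nz == 0 else max(0.0, 1.0 - lam / nz) * z
        r += X[:, g] @ (beta[g] - new)            # incremental residual update
        beta[g] = new                             # group soft-thresholding

print("estimated group norms:",
      [round(float(np.linalg.norm(beta[g])), 3) for g in groups])
```

The inactive groups are thresholded exactly to zero, which is the variable-selection behavior the oracle inequalities quantify.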

The Tensor Isomorphism problem (TI) has recently emerged as having connections to multiple areas of research within complexity and beyond, but the current best upper bound is essentially the brute force algorithm. Being an algebraic problem, TI (or rather, proving that two tensors are non-isomorphic) lends itself very naturally to algebraic and semi-algebraic proof systems, such as the Polynomial Calculus (PC) and Sum of Squares (SoS). For its combinatorial cousin Graph Isomorphism, essentially optimal lower bounds are known for approaches based on PC and SoS (Berkholz & Grohe, SODA '17). Our main results are an $\Omega(n)$ lower bound on PC degree or SoS degree for Tensor Isomorphism, and a nontrivial upper bound for testing isomorphism of tensors of bounded rank. We also show that PC cannot perform basic linear algebra in sub-linear degree, such as comparing the rank of two matrices, or deriving $BA=I$ from $AB=I$. As linear algebra is a key tool for understanding tensors, we introduce a strictly stronger proof system, PC+Inv, which allows as derivation rules all substitution instances of the implication $AB=I \rightarrow BA=I$. We conjecture that even PC+Inv cannot solve TI in polynomial time, but we leave open proving lower bounds on PC+Inv for any system of equations, let alone those arising from TI. We also highlight many other open questions about proof complexity approaches to TI.
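
For orientation, one natural algebraic encoding (a standard phrasing; the paper's exact encoding may differ in details): 3-tensors $S,T\in\mathbb{F}^{n\times n\times n}$ are isomorphic iff invertible matrices $P,Q,R$ act on the three modes, with invertibility itself expressed by equations:

$$\sum_{a,b,c=1}^{n} P_{ia}Q_{jb}R_{kc}\,S_{abc}=T_{ijk}\quad(\forall\, i,j,k),\qquad PP'=QQ'=RR'=I,$$

and PC+Inv additionally admits every substitution instance of the rule $AB=I\rightarrow BA=I$ applied to matrices of polynomials.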

Stochastic Gradient Descent (SGD) algorithms are widely used in optimizing neural networks, with Random Reshuffling (RR) and Single Shuffle (SS) being popular choices for cycling through random or single permutations of the training data. However, the convergence properties of these algorithms in the non-convex case are not fully understood. Existing results suggest that, in realistic training scenarios where the number of epochs is smaller than the training set size, RR may perform worse than SGD. In this paper, we analyze a general SGD algorithm that allows for arbitrary data orderings and show improved convergence rates for non-convex functions. Specifically, our analysis reveals that SGD with random or single shuffling always performs at least as well as classical SGD with replacement, regardless of the number of iterations. Overall, our study highlights the benefits of using SGD with random/single shuffling and provides new insights into its convergence properties for non-convex optimization.
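
To make the three sampling schemes concrete, here is a small NumPy sketch comparing with-replacement SGD, Random Reshuffling, and Single Shuffle on a toy least-squares problem (the objective and hyperparameters are illustrative, not from the paper's analysis):

```python
import numpy as np

# Contrast with-replacement SGD, Random Reshuffling (RR), and Single Shuffle (SS)
# on a toy least-squares objective; illustrative only.
rng = np.random.default_rng(0)
n, d = 1000, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

def grad(w, i):                       # per-sample gradient of 0.5*(a_i.w - b_i)^2
    return (A[i] @ w - b[i]) * A[i]

def run(shuffling, epochs=20, lr=1e-2):
    w = np.zeros(d)
    perm = rng.permutation(n)         # fixed permutation for Single Shuffle
    for _ in range(epochs):
        if shuffling == "replacement":
            order = rng.integers(0, n, size=n)   # i.i.d. sampling w/ replacement
        elif shuffling == "RR":
            order = rng.permutation(n)           # fresh permutation every epoch
        else:                                    # "SS": one permutation reused
            order = perm
        for i in order:
            w -= lr * grad(w, i)
    return 0.5 * np.mean((A @ w - b) ** 2)

for mode in ["replacement", "RR", "SS"]:
    print(mode, "final loss:", run(mode))
```

The only difference between the three runs is the `order` array, which is exactly the data-ordering freedom the paper's general analysis covers.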

When analysing Quantum Key Distribution (QKD) protocols, several metrics can be determined, but one of the most important is the Secret Key Rate: the number of bits per transmission that end up as part of a secret key shared between the two parties. There are equations that give the Secret Key Rate; for example, for the BB84 protocol, equation 52 from [1, p.1032] gives the Secret Key Rate for a given Quantum Bit Error Rate (QBER). However, the analysis leading to such equations often relies on an asymptotic approach, in which it is assumed that an infinite number of transmissions is sent between the two communicating parties (henceforth denoted Alice and Bob). In a practical implementation this is obviously impossible. Moreover, some QKD protocols belong to a category called asymmetric protocols, for which such an analysis is significantly more difficult. As such, there is currently much investigation into a different approach, the finite-key regime. Work by Bunandar et al. [2] produced code that uses semi-definite programming to compute lower bounds on the Secret Key Rate of even asymmetric protocols. Our work devises a novel QKD protocol, taking inspiration from both the 3-state version of BB84 [3] and the Twin-Field protocol [4], and then uses this code to analyse the new protocol.
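
As a point of reference for the asymptotic regime mentioned above, the sketch below evaluates the standard Shor-Preskill asymptotic key rate for BB84, $r(Q)=1-2h(Q)$ with $h$ the binary entropy; this is the textbook formula and is not necessarily identical to equation 52 of [1]:

```python
import math

# Standard asymptotic BB84 key-rate formula r(Q) = 1 - 2*h(Q) (Shor-Preskill);
# shown for illustration -- it may differ from the exact equation 52 cited in [1].
def h(x):
    """Binary entropy in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def bb84_rate(qber):
    """Asymptotic secret fraction per sifted bit; <= 0 means no key."""
    return 1.0 - 2.0 * h(qber)

for q in [0.0, 0.02, 0.05, 0.08, 0.11]:
    print(f"QBER = {q:.2f} -> rate = {bb84_rate(q):.4f}")
```

The rate crosses zero near a QBER of about 11%, the well-known threshold for BB84 under this analysis; finite-key corrections reduce the rate further.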

We consider the problem of inferring an unknown number of clusters in replicated multinomial data. From a model-based clustering point of view, this task can be treated by estimating finite mixtures of multinomial distributions with or without covariates. Both Maximum Likelihood (ML) and Bayesian estimation are taken into account. Under the ML approach, we provide an Expectation--Maximization (EM) algorithm which exploits a careful initialization procedure combined with a ridge--stabilized implementation of the Newton--Raphson method in the M--step. Under a Bayesian setup, a stochastic gradient Markov chain Monte Carlo (MCMC) algorithm embedded within a prior parallel tempering scheme is devised. The number of clusters is selected according to the Integrated Completed Likelihood criterion in the ML approach, and by estimating the number of non-empty components of an overfitting mixture model in the Bayesian case. Our method is illustrated on simulated data and applied to two real datasets. An R package is available at //github.com/mqbssppe/multinomialLogitMix.
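
To fix ideas, here is a minimal NumPy sketch of the E- and M-steps for a mixture of multinomials without covariates (illustrative data and parameters; the paper's algorithm additionally uses the careful initialization and the ridge-stabilized Newton--Raphson M-step needed for the covariate case):

```python
import numpy as np

# Minimal EM sketch for a K-component mixture of multinomials (no covariates).
rng = np.random.default_rng(2)
n, J, K, N = 300, 6, 2, 50                      # samples, categories, clusters, trials
true_theta = rng.dirichlet(np.ones(J), size=K)  # per-cluster category probabilities
z = rng.integers(0, K, size=n)                  # latent cluster labels
X = np.array([rng.multinomial(N, true_theta[zi]) for zi in z])

theta = rng.dirichlet(np.ones(J), size=K)       # random start
pi = np.full(K, 1.0 / K)
for it in range(200):
    # E-step: responsibilities from log-likelihoods (the multinomial
    # coefficient is the same across components, so it cancels).
    logp = X @ np.log(theta).T + np.log(pi)     # shape (n, K)
    logp -= logp.max(axis=1, keepdims=True)     # stabilize before exponentiating
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: weighted category frequencies and mixing weights.
    pi = resp.mean(axis=0)
    theta = resp.T @ X + 1e-10                  # tiny smoothing avoids log(0)
    theta /= theta.sum(axis=1, keepdims=True)

print("estimated mixing weights:", np.round(pi, 3))
```

With overfitted $K$, some mixing weights shrink toward zero, which is the "non-empty components" signal the Bayesian selection criterion exploits.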

Evolutionary game theory provides a mathematical foundation for cross-disciplinary fertilization, especially for integrating ideas from artificial intelligence and game theory. Such integration offers a transparent and rigorous approach to complex decision-making problems in a variety of important contexts, ranging from evolutionary computation to machine behavior. Despite the astronomically huge space of individual behavioral strategies for interactions in the iterated Prisoner's Dilemma (IPD), the so-called Zero-Determinant (ZD) strategies form a set of rather simple memory-one strategies that can nevertheless unilaterally set a linear payoff relationship between themselves and their opponent. Although knowledge of ZD strategies gives players an upper hand in IPD games, we find and characterize unbending strategies that can force ZD players to be fair in their own interest. Moreover, our analysis reveals the ubiquity of unbending properties in common IPD strategies, which were previously overlooked. In this work, we demonstrate the important steering role of unbending strategies in fostering fairness and cooperation in pairwise interactions. Our results will help bring a new perspective, combining game theory and multi-agent learning systems, for optimizing winning strategies that are robust to noise, errors, and deception in non-zero-sum games.
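
As background on the ZD machinery (a sketch under textbook assumptions, using the conventional payoffs $(T,R,P,S)=(5,3,1,0)$ and a Press--Dyson extortionate strategy with $\chi=3$; none of these are parameters taken from the paper), the following computes stationary payoffs of two memory-one strategies and checks numerically that the ZD player enforces $s_X-P=\chi\,(s_Y-P)$ against an arbitrary opponent:

```python
import numpy as np

# Stationary payoffs for two memory-one IPD strategies, plus a numerical check
# that an extortionate Zero-Determinant strategy enforces the linear relation
#   s_X - P = chi * (s_Y - P).
# States are ordered (CC, CD, DC, DD) from player X's point of view.
R, S, T, P = 3, 0, 5, 1

def payoffs(p, q):
    """p, q: cooperation probs after (CC, CD, DC, DD), each from own viewpoint."""
    M = np.zeros((4, 4))
    # Y sees X's states CD and DC swapped, hence the reordering of q below.
    for s, (pc, qc) in enumerate(zip(p, [q[0], q[2], q[1], q[3]])):
        M[s] = [pc * qc, pc * (1 - qc), (1 - pc) * qc, (1 - pc) * (1 - qc)]
    # Stationary distribution: left eigenvector of M for eigenvalue 1.
    w, V = np.linalg.eig(M.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1))])
    v /= v.sum()
    return v @ np.array([R, S, T, P]), v @ np.array([R, T, S, P])

# Extortionate ZD strategy with chi = 3 (Press-Dyson form, phi = 1/13):
#   p_tilde = phi * [(S_X - P*1) - chi * (S_Y - P*1)].
p_zd = [9 / 13, 0.0, 7 / 13, 0.0]
q = np.random.default_rng(3).uniform(0.05, 0.95, 4)   # arbitrary opponent
sx, sy = payoffs(p_zd, q)
print(f"s_X - P = {sx - P:.4f},  chi * (s_Y - P) = {3 * (sy - P):.4f}")
```

The two printed quantities agree regardless of the opponent's strategy `q`, which is precisely the unilateral enforcement that unbending opponents exploit to push ZD players back toward fairness.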
