
Modular addition is, on its face, a simple operation: given $N$ elements in $\mathbb{Z}_q$, compute their sum modulo $q$. Yet, scalable machine learning solutions to this problem remain elusive: prior work trains ML models that sum $N \le 6$ elements mod $q \le 1000$. Promising applications of ML models in cryptanalysis, which often involve modular arithmetic with large $N$ and $q$, motivate reconsideration of this problem. This work proposes three changes to the modular addition model training pipeline: more diverse training data, an angular embedding, and a custom loss function. With these changes, we demonstrate that our approach succeeds for $N = 256$, $q = 3329$, a setting of interest in cryptographic applications and a significant increase in $N$ and $q$ over prior work. These techniques also generalize to other modular arithmetic problems, motivating future work.
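The angular embedding maps each residue to a point on the unit circle, making the wrap-around at $q$ continuous; a loss function can then compare angles rather than raw integers. Below is a minimal numerical sketch of this idea (the function names and the exact loss are illustrative assumptions, not the paper's code):

```python
import numpy as np

def angular_embedding(x, q):
    """Map residues in Z_q to points on the unit circle.

    Wrap-around is continuous: q-1 and 0 embed to nearby points,
    unlike a raw integer encoding."""
    theta = 2.0 * np.pi * (np.asarray(x) % q) / q
    return np.stack([np.cos(theta), np.sin(theta)], axis=-1)

def angular_loss(pred_angle, target, q):
    """Toy angular loss: penalize the angular distance between a
    predicted angle and the target residue's angle on the circle."""
    target_angle = 2.0 * np.pi * (target % q) / q
    return 1.0 - np.cos(pred_angle - target_angle)  # zero iff equal mod 2*pi

# The label for inputs x_1..x_N is sum(x_i) mod q, e.g. N = 256, q = 3329:
q = 3329
x = np.random.randint(0, q, size=256)
label = x.sum() % q
emb = angular_embedding(x, q)   # shape (256, 2): the model's input
```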

Related Content

We present two new explicit constructions of Cayley high dimensional expanders (HDXs) over the abelian group $\mathbb{F}_2^n$. Our expansion proofs use only linear algebra and combinatorial arguments. The first construction gives local spectral HDXs of any constant dimension and subpolynomial degree $\exp(n^\epsilon)$ for every $\epsilon >0$, improving on a construction by Golowich [Gol23] which achieves $\epsilon =1/2$. [Gol23] derives these HDXs by sparsifying the complete Grassmann poset of subspaces. The novelty in our construction is the ability to sparsify any expanding Grassmannian poset, leading to iterated sparsification and much smaller degrees. The sparse Grassmannian (which is of independent interest in the theory of HDXs) serves as the generating set of the Cayley graph. Our second construction gives a 2-dimensional HDX of any polynomial degree $\exp(\epsilon n)$ for any constant $\epsilon > 0$, which is simultaneously a spectral expander and a coboundary expander. To the best of our knowledge, this is the first such non-trivial construction. We name it the Johnson complex, as it is derived from the classical Johnson scheme, whose vertices serve as the generating set of this Cayley graph. This construction may be viewed as a derandomization of the recent random geometric complexes of [LMSY23]. Establishing coboundary expansion through Gromov's "cone method" and the associated isoperimetric inequalities is the most intricate aspect of this construction. While these two constructions are quite different, we show that they both share a common structure, resembling the intersection patterns of vectors in the Hadamard code. We propose a general framework of such "Hadamard-like" constructions in the hope that it will yield new HDXs.
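For readers who want to experiment, the spectral expansion of any Cayley graph over $\mathbb{F}_2^n$ can be computed exactly from characters: its eigenvalues are the $\pm 1$ character sums over the generating set. A small sketch using an arbitrary toy generating set (not the paper's Grassmannian or Johnson-scheme generators):

```python
import itertools
import numpy as np

def cayley_spectral_expansion(n, gens):
    """Eigenvalues of Cay(F_2^n, S) are the character sums
    lambda_y = sum_{s in S} (-1)^{<y, s>}, one for each y in F_2^n.
    Returns the normalized second eigenvalue max_{y != 0} |lambda_y| / |S|."""
    gens = np.array(gens)                        # |S| x n matrix over {0,1}
    lam2 = 0.0
    for y in itertools.product([0, 1], repeat=n):
        if not any(y):
            continue                             # skip the trivial character
        lam = np.sum((-1) ** (gens @ np.array(y) % 2))
        lam2 = max(lam2, abs(lam) / len(gens))
    return lam2

# Illustrative generating set: all vectors of weight 1 or 2 in F_2^4.
n = 4
gens = [g for g in itertools.product([0, 1], repeat=n) if 1 <= sum(g) <= 2]
print(cayley_spectral_expansion(n, gens))
```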

In the distributed localization problem (DLP), $n$ anonymous robots (agents) $A_0, A_1, \ldots, A_{n-1}$ begin at arbitrary positions $p_0, \ldots, p_{n-1}$ in $S$, where $S$ is a Euclidean space. The primary goal in DLP is for the agents to reach consensus on a unified coordinate system that accurately reflects the relative positions of all points $p_0, \ldots, p_{n-1}$ in $S$. Extensive research on DLP has primarily focused on the feasibility and complexity of achieving consensus when agents have limited access to inter-agent distances, often due to missing or imprecise data. In this paper, however, we examine a minimalist, computationally efficient model of distributed computing in which agents have access to all pairwise distances, if needed. Specifically, we introduce a novel variant of population protocols, referred to as the spatial population protocols model. In this variant, each agent can memorise one coordinate or a fixed number of coordinates, and when agents $A_i$ and $A_j$ interact, they can not only exchange their current knowledge but also either determine the distance $d_{i,j}$ between them in $S$ (distance query model) or obtain the vector $v_{i,j}$ spanning points $p_i$ and $p_j$ (vector query model). We examine three DLP scenarios:
- Self-stabilising localisation protocol with distance queries. We propose and analyse a self-stabilising localisation protocol based on pairwise distance adjustment. We also discuss several hard instances in this scenario and suggest possible improvements to the considered protocol.
- Leader-based localisation protocol with distance queries. We propose and analyse several leader-based protocols which stabilise in $o(n)$ parallel time. These protocols rely on an efficient solution to the multi-contact epidemic problem.
- Self-stabilising localisation protocol with vector queries. We propose and analyse a superfast self-stabilising DLP protocol which stabilises in $O(\log n)$ parallel time.
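To make the vector query model concrete, here is a toy simulation (an illustrative rule only, not the protocol analysed in the paper): one agent serves as an anchor for the coordinate frame, and a consistent estimate spreads epidemically, since any informed agent can localise its interaction partner with a single vector query.

```python
import random

def simulate_vector_query(positions, rounds=10000, seed=0):
    """Toy illustration of the vector query model (not the paper's
    protocol). Agent 0 anchors the frame at the origin; on each
    interaction (i, j), the responder adopts est_i + v(i, j), where
    v(i, j) = p_j - p_i is what the vector query returns."""
    rng = random.Random(seed)
    n, d = len(positions), len(positions[0])
    est = [None] * n                       # coordinates unknown initially
    est[0] = tuple(0.0 for _ in range(d))  # anchor fixes the frame
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)
        if est[i] is not None:
            vij = tuple(positions[j][k] - positions[i][k] for k in range(d))
            est[j] = tuple(est[i][k] + vij[k] for k in range(d))
    return est  # relative positions expressed in agent 0's frame

pts = [(0.0, 0.0), (1.0, 2.0), (3.0, 1.0), (-2.0, 4.0)]
print(simulate_vector_query(pts))
```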

In this work, we show that the class of multivariate degree-$d$ polynomials mapping $\{0,1\}^{n}$ to any Abelian group $G$ is locally correctable with $\widetilde{O}_{d}((\log n)^{d})$ queries for up to a fraction of errors approaching half the minimum distance of the underlying code. In particular, this result holds even for polynomials over the reals or the rationals, special cases that were previously not known. Further, we show that they are locally list correctable up to a fraction of errors approaching the minimum distance of the code. These results build on and extend the prior work of the authors [ABPSS24] (STOC 2024), who considered the case of linear polynomials and gave analogous results. Low-degree polynomials over the Boolean cube $\{0,1\}^{n}$ arise naturally in Boolean circuit complexity and learning theory, and our work furthers the study of their coding-theoretic properties. Extending the results of [ABPSS24] from linear to higher-degree polynomials involves several new challenges, and handling them gives us further insights into properties of low-degree polynomials over the Boolean cube. For local correction, we construct a set of points in the Boolean cube that lies between two exponentially close parallel hyperplanes and is moreover an interpolating set for degree-$d$ polynomials. To show that the class of degree-$d$ polynomials is list decodable up to the minimum distance, we stitch together results on anti-concentration of low-degree polynomials, the Sunflower lemma, and the Footprint bound for counting common zeroes of polynomials. Analyzing the local list corrector of [ABPSS24] for higher degree polynomials involves understanding random restrictions of non-zero degree-$d$ polynomials on a Hamming slice. In particular, we show that a simple random restriction process for reducing the dimension of the Boolean cube is a suitably good sampler for Hamming slices.
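The phrase "half the minimum distance" can be made concrete via a standard DeMillo-Lipton-Schwartz-Zippel-type bound: a nonzero degree-$d$ polynomial on the Boolean cube is nonzero on at least a $2^{-d}$ fraction of points, so the code has relative distance $2^{-d}$. A short proof sketch, included for the reader's convenience:

```latex
% Standard fact: a nonzero multilinear degree-d polynomial
% P : {0,1}^n -> G is nonzero on at least 2^{n-d} points of the cube.
\begin{lemma}
If $P \neq 0$ has degree $d$, then
$|\{x \in \{0,1\}^n : P(x) \neq 0\}| \geq 2^{n-d}$.
\end{lemma}
\begin{proof}[Proof sketch]
Induct on $n$; if no variable occurs, $P$ is a nonzero constant. Otherwise
pick $x_i$ occurring in a monomial with nonzero coefficient and write
$P = x_i Q + R$, where $Q \neq 0$, $\deg Q \leq d-1$, and $R$ does not
involve $x_i$. By induction $Q(z) \neq 0$ for at least $2^{(n-1)-(d-1)}$
points $z$ of the subcube, and for each such $z$ the two values
$P(z, x_i{=}0) = R(z)$ and $P(z, x_i{=}1) = Q(z) + R(z)$ differ by
$Q(z) \neq 0$, so at least one is nonzero. Hence $P$ is nonzero on at
least $2^{n-d}$ points.
\end{proof}
```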

The singular value decomposition (SVD) factors a matrix into a product of three matrices: a matrix of left singular vectors, a diagonal matrix of nonnegative singular values, and a matrix of right singular vectors. There are two main approaches to computing the SVD: the classical method and the randomized method. The classical approach yields accurate singular values. The randomized approach is mainly used for high-dimensional matrices and trades some approximation accuracy for speed by not necessarily computing all singular values. In this paper, the SVD computation is formalized as an optimization problem solved by a gradient search algorithm. This yields a power method that computes either all singular values or the first (largest) ones, together with their associated right singular vectors. In this iterative search, the accuracy of the singular values and of the associated singular-vector matrix depends on user-chosen settings. Two applications of the SVD are principal component analysis and the autoencoder used in neural network models.
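As a point of reference, below is a minimal power-method sketch operating on $A^\top A$ with deflation; the paper's gradient-search formulation may differ in its details, but the fixed points are the same (illustrative code, not the authors'):

```python
import numpy as np

def top_singular_triplets(A, k=3, iters=500, seed=0):
    """Power method with deflation: recover the k largest singular
    values of A and their right singular vectors, one at a time."""
    rng = np.random.default_rng(seed)
    A = np.asarray(A, dtype=float)
    B = A.T @ A                   # right singular vectors = eigvecs of A^T A
    sigmas, vecs = [], []
    for _ in range(k):
        v = rng.standard_normal(B.shape[0])
        for _ in range(iters):
            v = B @ v
            v /= np.linalg.norm(v)          # normalize to avoid overflow
        sigma = np.sqrt(max(v @ B @ v, 0.0))
        sigmas.append(sigma)
        vecs.append(v)
        B -= sigma**2 * np.outer(v, v)      # deflate the found component
    return np.array(sigmas), np.array(vecs)

A = np.random.default_rng(1).standard_normal((50, 8))
s_est, _ = top_singular_triplets(A, k=3)
# Should agree with the classical SVD to several decimals:
print(s_est, np.linalg.svd(A, compute_uv=False)[:3])
```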

A $(\beta,\delta,\Delta)$-padded decomposition of an edge-weighted graph $G = (V,E,w)$ is a stochastic decomposition into clusters of diameter at most $\Delta$ such that for every vertex $v\in V$, the probability that $\mathrm{ball}_G(v,\gamma\Delta)$ is entirely contained in the cluster containing $v$ is at least $e^{-\beta\gamma}$ for every $\gamma \in [0,\delta]$. Padded decompositions have been studied for decades and have found numerous applications, including metric embedding, multicommodity flow-cut gap, multicut, and zero extension problems, to name a few. In these applications, the parameter $\beta$, called the padding parameter, is the most important, since it determines the distortion or the approximation ratio. For general graphs with $n$ vertices, $\beta = \Theta(\log n)$. Klein, Plotkin, and Rao showed that $K_r$-minor-free graphs have padding parameter $\beta = O(r^3)$, which is a significant improvement over general graphs when $r$ is a constant. A long-standing conjecture is to construct a padded decomposition for $K_r$-minor-free graphs with padding parameter $\beta = O(\log r)$. Despite decades of research, the best-known result is $\beta = O(r)$, even for graphs with treewidth at most $r$. In this work, we make significant progress toward the aforementioned conjecture by showing that graphs with treewidth $\mathrm{tw}$ admit a padded decomposition with padding parameter $O(\log \mathrm{tw})$, which is tight. As corollaries, we obtain an exponential improvement in dependency on treewidth in a host of algorithmic applications: an $O(\sqrt{\log n \cdot \log(\mathrm{tw})})$ flow-cut gap, a max flow-min multicut ratio of $O(\log(\mathrm{tw}))$, an $O(\log(\mathrm{tw}))$ approximation for the 0-extension problem, an $\ell^{O(\log n)}_\infty$ embedding with distortion $O(\log \mathrm{tw})$, and an $O(\log \mathrm{tw})$ bound on the integrality gap for the uniform sparsest cut.
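For context, the classical exponential-shift clustering (in the style of Miller, Peng, and Xu) already achieves the general-graph bound $\beta = O(\log n)$ and is easy to implement; the treewidth-sensitive construction of this paper is substantially more involved. A sketch, with illustrative parameter choices:

```python
import heapq
import math
import random

def exp_shift_decomposition(adj, delta, seed=0):
    """Exponential-shift clustering: each vertex v draws a shift
    d_v ~ Exp(beta) with beta = O(log n)/delta, and u joins the cluster
    of the center v minimizing dist(u, v) - d_v.  Implemented as one
    multi-source Dijkstra whose start key at v is -d_v.
    `adj` maps each vertex v to a list of (neighbor, edge_weight)."""
    rng = random.Random(seed)
    n = len(adj)
    beta = math.log(max(n, 2)) / delta
    shift = {v: rng.expovariate(beta) for v in adj}
    dist = {v: -shift[v] for v in adj}   # every vertex starts as a source
    center = {}
    heap = [(-shift[v], v, v) for v in adj]
    heapq.heapify(heap)
    while heap:
        d, c, u = heapq.heappop(heap)
        if u in center:
            continue                      # stale heap entry
        center[u] = c
        for w, wt in adj[u]:
            if w not in center and d + wt < dist[w]:
                dist[w] = d + wt
                heapq.heappush(heap, (d + wt, c, w))
    return center  # cluster label (its center) for every vertex

# Toy weighted graph: a 4-cycle with unit weights.
adj = {0: [(1, 1.0), (3, 1.0)], 1: [(0, 1.0), (2, 1.0)],
       2: [(1, 1.0), (3, 1.0)], 3: [(2, 1.0), (0, 1.0)]}
print(exp_shift_decomposition(adj, delta=2.0))
```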

In our work, we consider the problem of computing a vector $x \in \mathbb{Z}^n$ of minimum $\|\cdot\|_p$-norm such that $a^\top x \not= a_0$ for every vector $(a,a_0)$ from a given subset of $\mathbb{Z}^{n+1}$ of size $m$. In other words, we search for a vector of minimum norm that avoids a given finite set of hyperplanes, which it is natural to call the $\textit{Hyperplanes Avoiding Problem}$. This problem naturally appears as a subproblem in Barvinok-type algorithms for counting integer points in polyhedra. More precisely, it appears when one needs to evaluate certain rational generating functions at an avoidable critical point. We show that:
1) With respect to $\|\cdot\|_1$, the problem admits a feasible solution $x$ with $\|x\|_1 \leq (m+n)/2$, and such a solution can be constructed by a deterministic polynomial-time algorithm with $O(n \cdot m)$ operations. Moreover, this inequality is the best possible. This is a significant improvement over the previous randomized algorithm, which computes $x$ with the guarantee $\|x\|_{1} \leq n \cdot m$. The original approach of A.~Barvinok can guarantee only $\|x\|_1 = O\bigl((n \cdot m)^n\bigr)$;
2) The problem is NP-hard with respect to any norm $\|\cdot\|_p$, for $p \in \mathbb{R}_{\geq 1} \cup \{\infty\}$;
3) As an application, we show that the problem of counting integer points in a polytope $P = \{x \in \mathbb{R}^n \colon A x \leq b\}$, for given $A \in \mathbb{Z}^{m \times n}$ and $b \in \mathbb{Q}^m$, can be solved by an algorithm with $O\bigl(\nu^2 \cdot n^3 \cdot \Delta^3 \bigr)$ operations, where $\nu$ is the maximum size of a normal fan triangulation of $P$, and $\Delta$ is the maximum absolute value of rank-order subdeterminants of $A$. This refines the previous state-of-the-art $O\bigl(\nu^2 \cdot n^4 \cdot \Delta^3\bigr)$-time algorithm.
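The deterministic $O(n \cdot m)$ construction achieving $\|x\|_1 \le (m+n)/2$ is the paper's contribution and is not reproduced here. For context, the simple randomized baseline it improves upon is easy to state: a hyperplane contains at most $(m+1)^{n-1}$ of the $(m+1)^n$ points of the grid $\{0,\ldots,m\}^n$, so by a union bound a uniform grid point avoids all $m$ hyperplanes with probability at least $1/(m+1)$, giving $\|x\|_1 \le n \cdot m$ after $O(m)$ expected tries. A sketch:

```python
import random

def avoid_hyperplanes(hyperplanes, n, seed=0, max_tries=10000):
    """Randomized baseline for the Hyperplanes Avoiding Problem.

    Each hyperplane a.x = a0 covers at most a 1/(m+1) fraction of the
    grid {0,...,m}^n, so a uniform grid point avoids all m hyperplanes
    with probability >= 1/(m+1).  Returns x with ||x||_1 <= n*m; the
    paper's deterministic algorithm guarantees ||x||_1 <= (m+n)/2."""
    rng = random.Random(seed)
    m = len(hyperplanes)
    for _ in range(max_tries):
        x = [rng.randint(0, m) for _ in range(n)]
        if all(sum(ai * xi for ai, xi in zip(a, x)) != a0
               for a, a0 in hyperplanes):
            return x
    return None

# Avoid the hyperplanes x1 + x2 = 1 and 2*x1 - x2 = 0 in Z^2:
print(avoid_hyperplanes([((1, 1), 1), ((2, -1), 0)], n=2))
```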

The Kaczmarz method is a way to iteratively solve a linear system of equations $Ax = b$. One interprets the solution $x$ as the point where hyperplanes intersect and then iteratively projects an approximate solution onto these hyperplanes to get better and better approximations. We note a somewhat related idea: one could take two random hyperplanes and project one into the orthogonal complement of the other. This leads to a sequence of linear systems $A^{(k)} x = b^{(k)}$ which is fast to compute, preserves the original solution, and whose small singular values grow like $\sigma_{\ell}(A^{(k)}) \sim \exp(k/n^2) \cdot \sigma_{\ell}(A)$.
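A minimal numerical illustration of the row-projection idea described above (the exact randomization and normalization in the paper may differ): each step replaces one equation by itself minus its projection onto another. This is an elementary row operation, so the solution set, and even the determinant, is preserved exactly.

```python
import numpy as np

def orthogonalize_step(A, b, rng):
    """Pick two random rows and project row i into the orthogonal
    complement of row j.  Any x with Ax = b still solves the new
    system, since the same combination is applied to b."""
    m = A.shape[0]
    i, j = rng.choice(m, size=2, replace=False)
    c = (A[i] @ A[j]) / (A[j] @ A[j])
    A, b = A.copy(), b.copy()
    A[i] -= c * A[j]
    b[i] -= c * b[j]
    return A, b

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
for k in range(2000):                    # iterate the random projections
    A, b = orthogonalize_step(A, b, rng)
print(np.linalg.norm(np.linalg.solve(A, b) - x_true))  # solution preserved
```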

Consider the following NP problem DOUBLE CLIQUE (abbr.: CLIQ$_{2}$): given a natural number $k>2$ and a pair of disjoint subgraphs of a fixed graph $G$, decide whether each subgraph in question contains a $k$-clique. I prove that CLIQ$_{2}$ cannot be solved in polynomial time by a deterministic TM, which implies $\mathbf{P}\neq \mathbf{NP}$. This proof upgrades the well-known proof of polynomial unsolvability of the partial result with respect to the analogous monotone problem CLIQUE (abbr.: CLIQ), as well as my previous presentation that used an appropriate 3-valued semantics. Note that the problem CLIQ$_{2}$ is not monotone and appears more complex than just an iterated CLIQ, since the required subgraphs are mutually dependent.

Accurate approximation of a real-valued function depends on two aspects of the available data: the density of inputs within the domain of interest and the variation of the outputs over that domain. There are few methods for assessing whether the density of inputs is \textit{sufficient} to identify the relevant variations in outputs -- i.e., the ``geometric scale'' of the function -- despite the fact that sampling density is closely tied to the success or failure of an approximation method. In this paper, we introduce a general-purpose computational approach to detecting the geometric scale of real-valued functions over a fixed domain using a deterministic interpolation technique from computational geometry. The algorithm is intended to work on scalar data in moderate dimensions (2-10). Our algorithm is based on the observation that a sequence of piecewise linear interpolants will converge to a continuous function at a quadratic rate (in $L^2$ norm) if and only if the data are sampled densely enough to distinguish the feature from noise (assuming sufficiently regular sampling). We present numerical experiments demonstrating how our method can identify feature scale, estimate uncertainty in feature scale, and assess the sampling density for fixed (i.e., static) datasets of input-output pairs. We include analytical results in support of our numerical findings and have released lightweight code that can be adapted for use in a variety of data science settings.
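A one-dimensional toy version of this diagnostic is easy to write down (the paper's method targets dimensions 2-10 and uses a deterministic interpolation technique from computational geometry; the code below is only illustrative): on dyadically refined grids, the observed $L^2$ convergence rate of the piecewise linear interpolant stays near 0 while the feature is aliased by the samples, and approaches 2 once the sampling density resolves it.

```python
import numpy as np

def pl_convergence_rates(f, levels=6, grid=4096):
    """Estimate L2 convergence rates of piecewise linear interpolants
    of f on [0, 1] under dyadic refinement.  A rate near 2 (quadratic)
    signals that the sampling density resolves the function's features;
    a rate near 0 signals the feature is still aliased."""
    xs = np.linspace(0.0, 1.0, grid)
    fx = f(xs)
    errs = []
    for lvl in range(1, levels + 1):
        knots = np.linspace(0.0, 1.0, 2**lvl + 1)
        interp = np.interp(xs, knots, f(knots))   # PL interpolant
        errs.append(np.sqrt(np.mean((interp - fx) ** 2)))
    errs = np.array(errs)
    return np.log2(errs[:-1] / errs[1:])          # observed rates

# sin(4*pi*x) is aliased to zero on the two coarsest grids (rate ~ 0),
# then the rates climb toward 2 once the wavelength is resolved.
print(pl_convergence_rates(lambda x: np.sin(4 * np.pi * x)))
```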

In submodular multiway partition (SUB-MP), the input is a non-negative submodular function $f:2^V \rightarrow \mathbb{R}_{\ge 0}$ given by an evaluation oracle along with $k$ terminals $t_1, t_2, \ldots, t_k\in V$. The goal is to find a partition $V_1, V_2, \ldots, V_k$ of $V$ with $t_i\in V_i$ for every $i\in [k]$ in order to minimize $\sum_{i=1}^k f(V_i)$. In this work, we focus on SUB-MP when the input function is monotone (termed MONO-SUB-MP). MONO-SUB-MP formulates partitioning problems over several interesting structures -- e.g., matrices, matroids, graphs, and hypergraphs. MONO-SUB-MP is NP-hard since the graph multiway cut problem can be cast as a special case. We investigate the approximability of MONO-SUB-MP: we show that it admits a $4/3$-approximation and does not admit a $(10/9-\epsilon)$-approximation for every constant $\epsilon>0$. Next, we study a special case of MONO-SUB-MP where the monotone submodular function of interest is the coverage function of an input graph, termed GRAPH-COVERAGE-MP. GRAPH-COVERAGE-MP is equivalent to the classic multiway cut problem for the purposes of exact optimization. We show that GRAPH-COVERAGE-MP admits a $1.125$-approximation and does not admit a $(1.00074-\epsilon)$-approximation for every constant $\epsilon>0$ assuming the Unique Games Conjecture. These results separate GRAPH-COVERAGE-MP from graph multiway cut in terms of approximability.
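The claimed equivalence with multiway cut has a short explanation if the coverage function of a graph is taken to be $f(S) =$ the number of edges incident to $S$ (our reading; the paper's exact definition may differ): an edge internal to a part is counted once and a crossing edge twice, so $\sum_{i} f(V_i) = |E| + \#\{\text{crossing edges}\}$, and exact minimization coincides with minimizing the multiway cut. A quick numerical check of this identity:

```python
import itertools
import random

def coverage(S, edges):
    """f(S) = number of edges with at least one endpoint in S
    (assumed form of the graph coverage function)."""
    return sum(1 for u, v in edges if u in S or v in S)

random.seed(0)
nodes = range(8)
edges = [(u, v) for u, v in itertools.combinations(nodes, 2)
         if random.random() < 0.4]

# Random 3-way partition of the vertices.
parts = [set(), set(), set()]
for v in nodes:
    parts[random.randrange(3)].add(v)

crossing = sum(1 for u, v in edges
               if not any(u in P and v in P for P in parts))
# Sum of coverages = |E| + number of crossing edges.
assert sum(coverage(P, edges) for P in parts) == len(edges) + crossing
```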
