
We study a class of functional problems reducible to computing $f^{(n)}(x)$ for inputs $n$ and $x$, where $f$ is a polynomial-time bijection. As we prove, the definition is robust against variations in the type of reduction used, and in whether we require $f$ to have a polynomial-time inverse or to be computable by a reversible logic circuit. These problems are characterized by the complexity class $\mathsf{FP}^{\mathsf{PSPACE}}$, and include natural $\mathsf{FP}^{\mathsf{PSPACE}}$-complete problems in circuit complexity, cellular automata, graph algorithms, and the dynamical systems described by piecewise-linear transformations.
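As a minimal illustration of the computational task in question, the sketch below (in Python, with a hypothetical polynomial-time bijection chosen purely for concreteness) computes $f^{(n)}(x)$ by naive repeated application; when $n$ is given in binary, this loop takes time exponential in the input length, which is the gap the $\mathsf{FP}^{\mathsf{PSPACE}}$ characterization addresses.

```python
def iterate_bijection(f, x, n):
    """Compute f^(n)(x) by applying f to x exactly n times."""
    for _ in range(n):
        x = f(x)
    return x

# Hypothetical polynomial-time bijection on k-bit integers:
# an affine permutation modulo 2**k (a is odd, so the map is invertible).
def affine_step(x, k=16, a=5, b=3):
    return (a * x + b) % (1 << k)

# When n is written in binary, this naive loop takes time exponential
# in the length of the input, matching the abstract's placement of the
# problem in FP^PSPACE rather than FP.
print(iterate_bijection(affine_step, 7, 1000))
```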

Related Content

Multidimensional scaling (MDS) is the act of embedding proximity information about a set of $n$ objects in $d$-dimensional Euclidean space. As originally conceived by the psychometric community, MDS was concerned with embedding a fixed set of proximities associated with a fixed set of objects. Modern concerns, e.g., that arise in developing asymptotic theories for statistical inference on random graphs, more typically involve studying the limiting behavior of a sequence of proximities associated with an increasing set of objects. Standard results from the theory of point-to-set maps imply that, if $n$ is fixed and a sequence of proximities converges, then the limit of the embedded structures is the embedded structure of the limiting proximities. But what if $n$ increases? It then becomes necessary to reformulate MDS so that the entire sequence of embedding problems can be viewed as a sequence of optimization problems in a fixed space. We present such a reformulation and derive some consequences.
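For readers unfamiliar with the embedding step itself, the following sketch shows classical (Torgerson) MDS for a fixed $n$ and $d$, implemented via the double-centered Gram matrix; this is a standard textbook procedure, not the reformulation proposed in the abstract.

```python
import numpy as np

def classical_mds(D, d=2):
    """Classical (Torgerson) multidimensional scaling.

    D : (n, n) matrix of pairwise dissimilarities.
    Returns an (n, d) embedding whose pairwise Euclidean distances
    approximate the entries of D.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:d]         # top-d eigenpairs
    scale = np.sqrt(np.maximum(vals[idx], 0.0))
    return vecs[:, idx] * scale              # n x d coordinates

# Usage: points on a line are recovered up to a rigid motion.
pts = np.array([[0.0], [1.0], [3.0]])
D = np.abs(pts - pts.T)
print(classical_mds(D, d=1))
```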

It is well known that a complex circulant matrix can be diagonalized by a discrete Fourier matrix with imaginary unit $\mathtt{i}$. The main aim of this paper is to demonstrate that a quaternion circulant matrix cannot be diagonalized by a discrete quaternion Fourier matrix with three imaginary units $\mathtt{i}$, $\mathtt{j}$ and $\mathtt{k}$. Instead, a quaternion circulant matrix can be block-diagonalized, with 1-by-1 and 2-by-2 blocks, by a permuted discrete quaternion Fourier transform matrix. With such a block-diagonalized form, the inverse of a quaternion circulant matrix can be determined efficiently, in a manner similar to the inverse of a complex circulant matrix. We make use of this block-diagonalized form to study the singular value decomposition of quaternion tensors, i.e., tensors whose entries are quaternion numbers. Applications, including computing the inverse of a quaternion circulant matrix and solving quaternion Toeplitz systems arising from the linear prediction of quaternion signals, are employed to validate the efficiency of the proposed block-diagonalized form. A numerical example with a color video, represented as a third-order quaternion tensor, is employed to validate the effectiveness of the quaternion tensor singular value decomposition.
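The classical fact that the abstract starts from is easy to check numerically; the sketch below verifies, for a random complex circulant matrix, that conjugation by the discrete Fourier matrix produces a diagonal matrix whose entries are the FFT of the first column (the quaternion case, which is the paper's subject, does not admit this verbatim).

```python
import numpy as np
from scipy.linalg import circulant, dft

rng = np.random.default_rng(0)
n = 6
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)

C = circulant(c)   # complex circulant matrix whose first column is c
F = dft(n)         # DFT matrix, F[m, j] = exp(-2j * pi * m * j / n)

# Conjugating C by F diagonalizes it, and the diagonal is the FFT of c.
D = F @ C @ np.linalg.inv(F)
print(np.allclose(D, np.diag(np.fft.fft(c))))   # True
```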

A graph $G=(V,E)$ is said to be distance magic if there is a bijection $f$ from the vertex set of $G$ to the first $|V(G)|$ natural numbers such that for each vertex $v$, its weight $\sum_{u \in N(v)}f(u)$ is constant, where $N(v)$ is the open neighborhood of $v$. In this paper, we introduce the concept of $p$-distance magic labeling and establish a necessary and sufficient condition for a graph to be distance magic. Additionally, we give necessary and sufficient conditions for a connected regular graph to be distance magic in terms of the eigenvalues of its adjacency and Laplacian matrices. Furthermore, we study the spectra of distance magic graphs, focusing on singular distance magic graphs. We also show that the number of distance magic labelings of a graph is at most the size of its automorphism group.
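A direct way to make the definition concrete is to check a candidate labeling; the sketch below (using networkx, with the 4-cycle as an example that happens to be distance magic) verifies that every vertex has the same weight.

```python
import networkx as nx

def is_distance_magic(G, labeling):
    """Check whether a bijective labeling V -> {1, ..., |V|} is distance magic:
    the weight sum(labeling[u] for u in N(v)) must be the same for every v."""
    assert sorted(labeling.values()) == list(range(1, G.number_of_nodes() + 1))
    weights = {v: sum(labeling[u] for u in G.neighbors(v)) for v in G}
    return len(set(weights.values())) == 1

# Usage: the 4-cycle is distance magic with the labeling below; the two
# neighbors of every vertex have labels summing to 5.
G = nx.cycle_graph(4)
print(is_distance_magic(G, {0: 1, 1: 2, 2: 4, 3: 3}))   # True
```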

This paper contains a recipe for deriving new PAC-Bayes generalisation bounds based on the $(f, \Gamma)$-divergence and, in addition, presents PAC-Bayes generalisation bounds that interpolate between a series of probability divergences (including, but not limited to, KL, Wasserstein, and total variation), making the best out of many worlds depending on the properties of the posterior distribution. We explore the tightness of these bounds and connect them to earlier results from statistical learning, which arise as special cases. We also instantiate our bounds as training objectives, yielding non-trivial guarantees and practical performance.

Let $X$ be a $d$-dimensional simplicial complex. A function $F\colon X(k)\to \{0,1\}^k$ is said to be a direct product function if there exists a function $f\colon X(1)\to \{0,1\}$ such that $F(\sigma) = (f(\sigma_1), \ldots, f(\sigma_k))$ for each $k$-face $\sigma$. In an effort to simplify components of the PCP theorem, Goldreich and Safra introduced the problem of direct product testing, which asks whether one can test if $F\colon X(k)\to \{0,1\}^k$ is correlated with a direct product function by querying $F$ on only $2$ inputs. Dinur and Kaufman conjectured that there exist bounded-degree complexes with a direct product test in the small-soundness regime. We resolve their conjecture by showing that for all $\delta>0$, there exists a family of high-dimensional expanders with degree $O_{\delta}(1)$ and a $2$-query direct product tester with soundness $\delta$. We use the characterization given by a subset of the authors, and independently by Dikstein and Dinur, who showed that a form of non-Abelian coboundary expansion (which they called "Unique-Games coboundary expansion") is a necessary and sufficient condition for a complex to admit such direct product testers. Our main technical contribution is a general technique for showing coboundary expansion of complexes with coefficients in a non-Abelian group. This allows us to prove that the high-dimensional expanders constructed by Chapman and Lubotzky satisfy the necessary conditions, and thus admit a 2-query direct product tester with small soundness.
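To make the objects concrete, the sketch below builds a direct product function $F$ from $f$ on the complete complex and runs a naive two-query consistency check on overlapping faces; this only illustrates the query pattern, and is not the tester or the soundness analysis of the paper.

```python
import itertools, random

def direct_product(f, sigma):
    """F(sigma) = (f(sigma_1), ..., f(sigma_k)) for a k-face sigma."""
    return tuple(f[v] for v in sigma)

def two_query_consistency_test(F, faces, trials=1000):
    """Naive 2-query check (illustration only): pick two intersecting k-faces
    and verify that F assigns the same bits to their shared vertices."""
    for _ in range(trials):
        sigma, tau = random.sample(faces, 2)
        common = set(sigma) & set(tau)
        if not common:
            continue
        if any(F[sigma][sigma.index(v)] != F[tau][tau.index(v)] for v in common):
            return False
    return True

# Usage on the complete complex over 6 vertices with k = 3.
f = {v: random.randint(0, 1) for v in range(6)}
faces = list(itertools.combinations(range(6), 3))
F = {sigma: direct_product(f, sigma) for sigma in faces}
print(two_query_consistency_test(F, faces))   # True: F is a genuine direct product
```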

The frame scaling problem is: given vectors $U := \{u_{1}, ..., u_{n} \} \subseteq \mathbb{R}^{d}$, marginals $c \in \mathbb{R}^{n}_{++}$, and precision $\varepsilon > 0$, find left and right scalings $L \in \mathbb{R}^{d \times d}, r \in \mathbb{R}^n$ such that $(v_1,\dots,v_n) := (Lu_1 r_1,\dots,Lu_nr_n)$ simultaneously satisfies $\sum_{i=1}^n v_i v_i^{\mathsf{T}} = I_d$ and $\|v_{j}\|_{2}^{2} = c_{j}, \forall j \in [n]$, up to error $\varepsilon$. This problem has appeared in a variety of fields throughout linear algebra and computer science. In this work, we give a strongly polynomial algorithm for frame scaling with $\log(1/\varepsilon)$ convergence. This answers a question of Diakonikolas, Tzamos and Kane (STOC 2023), who gave the first strongly polynomial randomized algorithm with poly$(1/\varepsilon)$ convergence for the special case $c = \frac{d}{n} 1_{n}$. Our algorithm is deterministic, applies for general $c \in \mathbb{R}^{n}_{++}$, and requires $O(n^{3} \log(n/\varepsilon))$ iterations as compared to $O(n^{5} d^{11}/\varepsilon^{5})$ iterations of DTK. By lifting the framework of Linial, Samorodnitsky and Wigderson (Combinatorica 2000) for matrix scaling to frames, we are able to simplify both the algorithm and analysis. Our main technical contribution is to generalize the potential analysis of LSW to the frame setting and compute an update step in strongly polynomial time that achieves geometric progress in each iteration. In fact, we can adapt our results to give an improved analysis of strongly polynomial matrix scaling, reducing the $O(n^{5} \log(n/\varepsilon))$ iteration bound of LSW to $O(n^{3} \log(n/\varepsilon))$. Additionally, we prove a novel bound on the size of approximate frame scaling solutions, involving the condition measure $\bar{\chi}$ studied in the linear programming literature, which may be of independent interest.
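The two conditions defining an $\varepsilon$-approximate frame scaling are easy to state in code; the sketch below (a verification routine only, not the scaling algorithm itself) measures how far a candidate pair $(L, r)$ is from satisfying them.

```python
import numpy as np

def frame_scaling_error(U, L, r, c):
    """Measure how far the scaled frame v_j = L @ u_j * r_j is from the two
    frame-scaling conditions: sum_j v_j v_j^T = I_d and ||v_j||^2 = c_j.

    U : (d, n) matrix with columns u_1, ..., u_n
    L : (d, d) left scaling, r : (n,) right scalings, c : (n,) target marginals.
    """
    V = (L @ U) * r                                      # column j is L u_j r_j
    gram_err = np.linalg.norm(V @ V.T - np.eye(U.shape[0]))
    marg_err = np.linalg.norm((V ** 2).sum(axis=0) - c)
    return gram_err, marg_err

# Usage: an orthonormal basis with L = I and r = 1 is already exactly scaled
# for the marginals c = (1, ..., 1) (note sum(c) must equal d).
d = 3
U = np.eye(d)
print(frame_scaling_error(U, np.eye(d), np.ones(d), np.ones(d)))   # (0.0, 0.0)
```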

On an orientable surface $S$, consider a collection $\Gamma$ of closed curves. The (geometric) intersection number $i_S(\Gamma)$ is the minimum number of self-intersections that a collection $\Gamma'$ can have, where $\Gamma'$ results from a continuous deformation (homotopy) of $\Gamma$. We provide algorithms that compute $i_S(\Gamma)$ and such a $\Gamma'$, assuming that $\Gamma$ is given by a collection of closed walks of length $n$ in a graph $M$ cellularly embedded on $S$, in $O(n \log n)$ time when $M$ and $S$ are fixed. The state of the art is a paper of Despr\'e and Lazarus [SoCG 2017, J. ACM 2019], who compute $i_S(\Gamma)$ in $O(n^2)$ time, and $\Gamma'$ in $O(n^4)$ time if $\Gamma$ is a single closed curve. Our result is more general since we can put an arbitrary number of closed curves in minimal position. Also, our algorithms are quasi-linear in $n$ instead of quadratic and quartic, and our proofs are simpler and shorter. We use techniques from two-dimensional topology and from the theory of hyperbolic surfaces. Most notably, we prove a new property of the reducing triangulations introduced by Colin de Verdi\`ere, Despr\'e, and Dubois [SODA 2024], reducing our problem to the case of surfaces with boundary. As a key subroutine, we rely on an algorithm of Fulek and T\'oth [JCO 2020].

Recently, Ainsworth et al. showed that using weight matching (WM) to minimize the $L_2$ distance in a permutation search over model parameters effectively identifies permutations that satisfy linear mode connectivity (LMC), in which the loss along a linear path between two independently trained models with different seeds remains nearly constant. This paper provides a theoretical analysis of LMC using WM, which is crucial for understanding the effectiveness of stochastic gradient descent and its application in areas such as model merging. We first show, experimentally and theoretically, that the permutations found by WM do not significantly reduce the $L_2$ distance between two models, and that the occurrence of LMC is not merely due to the distance reduction achieved by WM itself. We then provide theoretical insights showing that permutations can change the directions of the singular vectors, but not the singular values, of the weight matrices in each layer. This finding shows that the permutations found by WM mainly align the directions of the singular vectors associated with large singular values across models. This alignment brings the singular vectors with large singular values, which determine the model functionality, closer between the pre-merged and post-merged models, so that the post-merged model retains functionality similar to that of the pre-merged models, making it easier to satisfy LMC. Finally, we analyze the difference between WM and the straight-through estimator (STE), a dataset-dependent permutation search method, and show that WM outperforms STE, especially when merging three or more models.
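As a simplified sketch of what weight matching does (a single-hidden-layer version of the per-layer linear assignment idea, with hypothetical weight shapes, not the full coordinate-descent procedure of Ainsworth et al.), the code below finds the permutation of model B's hidden units that minimizes the $L_2$ distance to model A; permuting hidden units never changes B's function, which is why such a search is admissible in the first place.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_hidden_units(Wa_in, Wa_out, Wb_in, Wb_out):
    """Find the permutation of B's hidden units minimizing the L2 distance to A,
    i.e. maximizing <Wa_in, P Wb_in> + <Wa_out, Wb_out P^T>."""
    score = Wa_in @ Wb_in.T + Wa_out.T @ Wb_out   # (h, h) unit-to-unit similarity
    _, col = linear_sum_assignment(-score)        # maximize total similarity
    return Wb_in[col], Wb_out[:, col], col        # B's units reordered to match A

# Usage: when B is an exact permuted copy of A, matching recovers the alignment.
h, d_in, d_out = 8, 32, 32
Wa_in, Wa_out = np.random.randn(h, d_in), np.random.randn(d_out, h)
perm0 = np.random.permutation(h)
Wb_in, Wb_out = Wa_in[perm0], Wa_out[:, perm0]    # B = permuted copy of A
Wb_in2, Wb_out2, _ = match_hidden_units(Wa_in, Wa_out, Wb_in, Wb_out)
print(np.allclose(Wb_in2, Wa_in), np.allclose(Wb_out2, Wa_out))   # True True
```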

Two graphs $G$ and $H$ are homomorphism indistinguishable over a class of graphs $\mathcal{F}$ if for all graphs $F \in \mathcal{F}$ the number of homomorphisms from $F$ to $G$ is equal to the number of homomorphisms from $F$ to $H$. Many natural equivalence relations comparing graphs, such as (quantum) isomorphism and spectral and logical equivalences, can be characterised as homomorphism indistinguishability relations over certain graph classes. Abstracting from the wealth of such instances, we show in this paper that equivalence with respect to any self-complementarity logic admitting a characterisation as a homomorphism indistinguishability relation can be characterised by homomorphism indistinguishability over a minor-closed graph class. Self-complementarity is a mild property satisfied by most well-studied logics. This result follows from a correspondence between closure properties of a graph class and preservation properties of its homomorphism indistinguishability relation. Furthermore, we classify all graph classes which are in a sense finite (essentially profinite) and satisfy the maximality condition of being homomorphism distinguishing closed, i.e. adding any graph to the class strictly refines its homomorphism indistinguishability relation. Thereby, we answer various questions raised by Roberson (2022) on general properties of the homomorphism distinguishing closure.
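The basic quantity in this abstract is the homomorphism count $\mathrm{hom}(F, G)$; the brute-force sketch below computes it for small graphs, recovering for instance that counts over $K_2$ and $K_3$ record twice the number of edges and six times the number of triangles, respectively.

```python
from itertools import product

def hom_count(F, G):
    """Count graph homomorphisms from F to G by brute force.

    F, G : graphs given as (vertex list, edge list of 2-tuples).
    A map h: V(F) -> V(G) is a homomorphism if every edge of F is
    sent to an edge of G.
    """
    VF, EF = F
    VG, EG = G
    edges_G = {frozenset(e) for e in EG}
    count = 0
    for image in product(VG, repeat=len(VF)):
        h = dict(zip(VF, image))
        if all(frozenset((h[u], h[v])) in edges_G for u, v in EF):
            count += 1
    return count

# Usage: counts from K_2 and K_3 into the 4-cycle.
K2 = ([0, 1], [(0, 1)])
K3 = ([0, 1, 2], [(0, 1), (1, 2), (0, 2)])
C4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
print(hom_count(K2, C4), hom_count(K3, C4))   # 8 (= 2|E|), 0 (no triangles)
```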

We consider a high-dimensional stochastic contextual linear bandit problem when the parameter vector is $s_{0}$-sparse and the decision maker is subject to privacy constraints under both central and local models of differential privacy. We present PrivateLASSO, a differentially private LASSO bandit algorithm. PrivateLASSO is based on two sub-routines: (i) a sparse hard-thresholding-based privacy mechanism and (ii) an episodic thresholding rule for identifying the support of the parameter $\theta$. We prove minimax private lower bounds and establish privacy and utility guarantees for PrivateLASSO for the central model under standard assumptions.
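As a generic illustration of a sparse hard-thresholding privacy mechanism (not the exact PrivateLASSO sub-routine, and with a noise scale that is assumed rather than calibrated to a specific sensitivity and privacy budget), the sketch below perturbs an estimate and keeps only its $s_0$ largest entries.

```python
import numpy as np

def private_hard_threshold(theta_hat, s0, scale):
    """Illustrative sparse hard-thresholding mechanism: add Laplace noise
    (scale assumed, not calibrated here) and keep the s0 largest entries."""
    noisy = theta_hat + np.random.laplace(scale=scale, size=theta_hat.shape)
    support = np.argsort(np.abs(noisy))[-s0:]      # estimated support of theta
    out = np.zeros_like(noisy)
    out[support] = noisy[support]
    return out, np.sort(support)

# Usage on a 20-dimensional, 3-sparse parameter estimate.
theta = np.zeros(20)
theta[[2, 7, 11]] = [5.0, -4.0, 3.0]
theta_hat = theta + 0.1 * np.random.randn(20)
print(private_hard_threshold(theta_hat, s0=3, scale=0.2))
```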
