
Our work explores the hardness of $3$SUM instances without certain additive structures, and its applications. As our main technical result, we show that solving $3$SUM on a size-$n$ integer set that avoids solutions to $a+b=c+d$ for $\{a, b\} \ne \{c, d\}$ still requires $n^{2-o(1)}$ time, under the $3$SUM hypothesis. Such sets are called Sidon sets and are well-studied in the field of additive combinatorics.

- Combined with previous reductions, this implies that the All-Edges Sparse Triangle problem on $n$-vertex graphs with maximum degree $\sqrt{n}$ and at most $n^{k/2}$ $k$-cycles for every $k \ge 3$ requires $n^{2-o(1)}$ time, under the $3$SUM hypothesis. This strengthens the previous conditional lower bounds of Abboud, Bringmann, Khoury, and Zamir [STOC'22] for $4$-Cycle Enumeration, Offline Approximate Distance Oracle, and Approximate Dynamic Shortest Path. In particular, we show that no algorithm for the $4$-Cycle Enumeration problem on $n$-vertex, $m$-edge graphs with $n^{o(1)}$ delay has $O(n^{2-\varepsilon})$ or $O(m^{4/3-\varepsilon})$ pre-processing time for any $\varepsilon > 0$. We also present a matching upper bound via simple modifications of the known algorithms for $4$-Cycle Detection.
- A slight generalization of the main result also extends the result of Dudek, Gawrychowski, and Starikovskaya [STOC'20] on the $3$SUM hardness of nontrivial 3-Variate Linear Degeneracy Testing (3-LDTs): we show $3$SUM hardness for all nontrivial 4-LDTs.

The proof of our main technical result combines a wide range of tools: the Balog-Szemer{\'e}di-Gowers theorem, sparse convolution algorithms, and a new almost-linear hash function with an almost $3$-universal guarantee for integers that do not have small-coefficient linear relations.
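To make the Sidon-set condition concrete, here is a minimal brute-force check (an illustration only, not part of the paper's machinery): a set is Sidon exactly when all pairwise sums, taken as multisets, are distinct.

```python
from itertools import combinations_with_replacement

def is_sidon(S):
    """Return True iff S is a Sidon set: every pairwise sum a + b with
    a <= b is distinct, i.e. a + b = c + d has no solution with
    {a, b} != {c, d} as multisets."""
    seen = set()
    for a, b in combinations_with_replacement(sorted(S), 2):
        if a + b in seen:
            return False
        seen.add(a + b)
    return True

# {1, 2, 5, 11} is a Sidon set; {1, 2, 3, 4} is not, since 1 + 4 = 2 + 3.
assert is_sidon({1, 2, 5, 11})
assert not is_sidon({1, 2, 3, 4})
```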

Related content

A code of length $n$ is said to be (combinatorially) $(\rho,L)$-list decodable if the Hamming ball of radius $\rho n$ around any vector in the ambient space contains at most $L$ codewords. We study a recently introduced class of higher order MDS codes, which are closely related (via duality) to codes that achieve a generalized Singleton bound for list decodability. For some $\ell\geq 1$, higher order MDS codes of length $n$, dimension $k$, and order $\ell$ are denoted as $(n,k)$-MDS($\ell$) codes. We present a number of results on the structure of these codes, identifying the `extendability' of their parameters in various scenarios. Specifically, for some parameter regimes, we identify conditions under which $(n_1,k_1)$-MDS($\ell_1$) codes can be obtained from $(n_2,k_2)$-MDS($\ell_2$) codes via various techniques. We believe that these results will aid in efficient constructions of higher order MDS codes. We also obtain a new field size upper bound for the existence of such codes, which arguably improves over the best known existing bound in some parameter regimes.
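As a toy illustration of the combinatorial definition (not from the paper), the following exhaustive check verifies $(\rho, L)$-list decodability for a tiny binary code; it is only feasible for very small $n$.

```python
from itertools import product

def is_list_decodable(code, n, rho, L, q=2):
    """Exhaustively test combinatorial (rho, L)-list decodability:
    every Hamming ball of radius rho*n in the ambient space F_q^n
    must contain at most L codewords."""
    radius = int(rho * n)
    for center in product(range(q), repeat=n):
        close = sum(
            1 for c in code
            if sum(a != b for a, b in zip(c, center)) <= radius
        )
        if close > L:
            return False
    return True

# The binary repetition code of length 4: two codewords at distance 4,
# so no radius-1 ball can contain both of them.
rep4 = [(0, 0, 0, 0), (1, 1, 1, 1)]
print(is_list_decodable(rep4, n=4, rho=0.25, L=1))  # True
```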

We study an envy-free pricing problem in which each buyer wishes to buy a shortest path connecting her individual pair of vertices in a network owned by a single vendor. The vendor sets the prices of individual edges with the aim of maximizing the total revenue generated by all buyers. Each customer buys a path as long as its cost does not exceed her individual budget, in which case the revenue generated by her equals the sum of the prices of the edges along this path. We consider the unlimited supply setting, where each edge can be sold to arbitrarily many customers. The problem is to find a price assignment that maximizes the vendor's revenue. The special case in which the network is a tree is known as the tollbooth problem. Gamzu and Segev proposed an $\mathcal{O} \left( \frac{\log m}{\log \log m} \right)$-approximation algorithm for revenue maximization in that setting. Note that paths in a tree network are unique, and hence the tollbooth problem falls under the category of single-minded bidders, i.e., each buyer is interested in a single fixed set of goods. In this work we step out of the single-minded setting and consider more general networks that may contain cycles. We obtain an algorithm for pricing cactus-shaped networks, namely networks in which each edge belongs to at most one simple cycle. Our result is a polynomial-time $\mathcal{O} \left( \frac{\log m}{\log \log m}\right)$-approximation algorithm for revenue maximization in tollbooth pricing on a cactus graph. It builds upon the framework of Gamzu and Segev, but requires substantially extending its main ideas: the recursive decomposition of the graph, the dynamic programming for rooted instances, and the rounding of prices.
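For intuition about the objective, here is a toy brute-force solver for the single-minded tree case, enumerating integer price vectors over a small grid; the function name, the integer-price restriction, and the tiny instance are illustrative choices, not the paper's polynomial-time algorithm.

```python
from itertools import product

def best_revenue(num_edges, buyers, max_price):
    """Toy exhaustive search for the tollbooth problem: buyers is a list
    of (path_edges, budget); a buyer purchases her path iff its total
    price is within her budget, contributing that total to the revenue.
    Enumerates all integer price vectors in [0, max_price]^num_edges."""
    best = 0
    for prices in product(range(max_price + 1), repeat=num_edges):
        revenue = 0
        for path, budget in buyers:
            cost = sum(prices[e] for e in path)
            if cost <= budget:
                revenue += cost
        best = max(best, revenue)
    return best

# Path graph with 2 edges; one buyer per edge plus one buyer for both.
buyers = [([0], 3), ([1], 3), ([0, 1], 5)]
print(best_revenue(2, buyers, max_price=5))  # 10, e.g. prices (3, 2)
```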

Tensor networks (TNs) are a powerful framework in machine learning, but selecting a good TN model, known as TN structure search (TN-SS), is a challenging and computationally intensive task. The recent approach TNLS~\cite{li2022permutation} showed promising results for this task; however, its computational cost remains prohibitive, requiring too many evaluations of the objective function. We propose TnALE, a new algorithm that updates each structure-related variable alternately by local enumeration, \emph{greatly} reducing the number of evaluations compared to TNLS. We theoretically investigate the descent steps of TNLS and TnALE, proving that both algorithms can achieve linear convergence up to a constant if a sufficient reduction of the objective is \emph{reached} in each neighborhood. We also compare the evaluation efficiency of TNLS and TnALE, revealing that $\Omega(2^N)$ evaluations are typically required in TNLS to \emph{reach} the objective reduction in a neighborhood, while ideally $O(N^2 R)$ evaluations suffice in TnALE, where $N$ denotes the tensor order and $R$ reflects the \emph{``low-rankness''} of the neighborhood. Experimental results verify that TnALE can find practically good TN-ranks and permutations with vastly fewer evaluations than the state-of-the-art algorithms.
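The alternating local-enumeration idea can be sketched generically as follows; this is a schematic illustration under simplifying assumptions (an abstract objective and neighborhood function), not the exact TnALE update rules.

```python
def alternating_local_enumeration(x0, objective, neighborhood, sweeps=10):
    """Generic sketch of alternating local enumeration: each
    structure-related variable x[i] is updated in turn by enumerating
    candidate values in a small local neighborhood and keeping the one
    that most decreases the objective."""
    x = list(x0)
    best = objective(x)
    for _ in range(sweeps):
        improved = False
        for i in range(len(x)):
            for cand in neighborhood(x[i]):  # local enumeration on x[i]
                x_try = x[:i] + [cand] + x[i + 1:]
                val = objective(x_try)
                if val < best:
                    x, best, improved = x_try, val, True
        if not improved:  # no variable changed during a full sweep
            break
    return x, best

# Toy use: tune integer "ranks" toward a target, enumerating +/-1 moves.
obj = lambda r: sum((ri - ti) ** 2 for ri, ti in zip(r, [3, 1, 4]))
print(alternating_local_enumeration([1, 1, 1], obj, lambda v: [v - 1, v, v + 1]))
```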

In decentralized settings, the shuffle model of differential privacy has emerged as a promising alternative to the classical local model. Analyzing privacy amplification via shuffling is a critical component in both single-message and multi-message shuffle protocols. However, current methods used in these two areas are distinct and specific, making them less convenient for protocol designers and practitioners. In this work, we introduce variation-ratio reduction as a unified framework for privacy amplification analyses in the shuffle model. This framework utilizes total variation bounds of local messages and probability ratio bounds of other users' blanket messages, converting them into indistinguishability levels. Our results indicate that the framework yields tighter bounds for both single-message and multi-message encoders (e.g., with local DP, local metric DP, or multi-message randomizers). Specifically, for a broad range of local randomizers having extremal probability design, our amplification bounds are precisely tight. We also demonstrate that variation-ratio reduction is well-suited for parallel composition in the shuffle model and results in stricter privacy accounting for common sampling-based local randomizers. Our experimental findings show that, compared to existing amplification bounds, our numerical amplification bounds can save up to $30\%$ of the budget for single-message protocols, $75\%$ of the budget for multi-message protocols, and $75\%$-$95\%$ of the budget for parallel composition. Additionally, our implementation for numerical amplification bounds has only $\tilde{O}(n)$ complexity and is highly efficient in practice, taking just $2$ minutes for $n=10^8$ users. The code for our implementation can be found at \url{https://github.com/wangsw/PrivacyAmplification}.
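As a standard example of the total-variation input such an analysis consumes, binary randomized response with parameter $\varepsilon$ has total variation distance $(e^\varepsilon - 1)/(e^\varepsilon + 1)$ between its two output distributions; the snippet below checks this numerically (an illustration of the ingredient, not the paper's variation-ratio machinery).

```python
import math

def rr_distribution(x, eps):
    """Output distribution of binary randomized response: report the
    true bit x with probability e^eps / (e^eps + 1), else flip it."""
    p = math.exp(eps) / (math.exp(eps) + 1)
    return {x: p, 1 - x: 1 - p}

def total_variation(P, Q):
    """Total variation distance between two finite distributions."""
    keys = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(k, 0) - Q.get(k, 0)) for k in keys)

eps = 1.0
tv = total_variation(rr_distribution(0, eps), rr_distribution(1, eps))
# Matches the closed form (e^eps - 1) / (e^eps + 1).
assert abs(tv - (math.exp(eps) - 1) / (math.exp(eps) + 1)) < 1e-12
print(tv)
```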

The method of common lines is a well-established reconstruction technique in cryogenic electron microscopy (cryo-EM), which can be used to extract the relative orientations of an object given tomographic projection images from different directions. In this paper, we deal with an analogous problem in optical diffraction tomography. Based on the Fourier diffraction theorem, we show that rigid motions of the object, i.e., rotations and translations, can be determined by detecting common circles in the Fourier-transformed data. We introduce two methods to identify common circles. The first one is motivated by the common line approach for projection images and detects the relative orientation by parameterizing the common circles in the two images. The second one assumes a smooth motion over time and calculates the angular velocity of the rotational motion via an infinitesimal version of the common circle method. Interestingly, using the stereographic projection, both methods can be reformulated as common line methods, but these lines are, in contrast to those used in cryo-EM, not confined to pass through the origin and allow for a full reconstruction of the relative orientations. Numerical proof-of-concept examples demonstrate the performance of our reconstruction methods.
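The stereographic projection used in this reformulation is the standard projection from the north pole; a minimal sketch (illustrative only) follows. Circles on the sphere map to circles or lines in the plane, which is the geometric fact behind recasting common circles as common lines.

```python
import numpy as np

def stereographic(points):
    """Standard stereographic projection from the north pole (0, 0, 1):
    a unit-sphere point (x, y, z) with z != 1 maps to (x, y) / (1 - z)
    in the equatorial plane."""
    x, y, z = points[..., 0], points[..., 1], points[..., 2]
    return np.stack([x / (1 - z), y / (1 - z)], axis=-1)

# A great circle through the poles projects onto a line (here the x-axis).
t = np.linspace(0, 2 * np.pi, 7, endpoint=False)
circle = np.stack([np.sin(t), np.zeros_like(t), np.cos(t)], axis=-1)
print(stereographic(circle[np.abs(circle[:, 2] - 1) > 1e-9]))
```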

A family $\mathcal F$ has covering number $\tau$ if the size of the smallest set intersecting all sets from $\mathcal F$ is equal to $\tau$. Let $M(n,k,\tau)$ stand for the size of the largest intersecting family $\mathcal F$ of $k$-element subsets of $\{1,\ldots,n\}$ with covering number $\tau$. It is a classical result of Erd\H os and Lov\'asz that $M(n,k,k)\le k^k$ for any $n$. In this short note, we explore the behaviour of $M(n,k,\tau)$ for $n<k^2$ and large $\tau$. The results are quite surprising. For example, we show that $M(n,k,\tau) = (1-o(1))\binom{n-1}{k-1}$ if $n = \lfloor k^{3/2}\rfloor$ and $\tau\le k-k^{3/4+o(1)}$ as $k\to\infty$, but $M(n,k,\tau) < e^{-ck^{1/2}}\binom{n}{k}$ if $n = \lfloor k^{3/2}\rfloor$ and $\tau > k-\frac{1}{2}k^{1/2}$.
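The covering number itself is easy to state algorithmically; a brute-force illustration of the definition (not part of the proofs) is:

```python
from itertools import combinations

def covering_number(family):
    """Brute-force covering number tau(F): the size of the smallest set
    of ground elements that intersects every member of the family."""
    ground = set().union(*family)
    for size in range(len(ground) + 1):
        for T in combinations(ground, size):
            if all(set(T) & set(F) for F in family):
                return size
    return None

# The family of all 2-subsets of {1, 2, 3} is intersecting with tau = 2:
# no single element meets all three sets, but {1, 2} does.
print(covering_number([{1, 2}, {1, 3}, {2, 3}]))  # 2
```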

Given access to a hypergraph through a subset query oracle in the query model, we give sublinear-time algorithms for Hitting-Set with almost tight parameterized query complexity. In parameterized query complexity, we estimate the number of queries to the oracle in terms of the parameter $k$, the size of the Hitting-Set. The subset query oracle we use in this paper is called the Generalized $d$-partite Independent Set query oracle (GPIS); it was introduced by Bishnu et al. (ISAAC'18). GPIS is a generalization to hypergraphs of the Bipartite Independent Set query oracle (BIS) introduced by Beame et al. (ITCS'18 and TALG'20) for estimating the number of edges in graphs. Formally, GPIS is defined as follows: the GPIS oracle for a $d$-uniform hypergraph $\mathcal{H}$ takes as input $d$ pairwise disjoint non-empty subsets $A_1, \ldots, A_d$ of vertices in $\mathcal{H}$ and answers whether there is a hyperedge in $\mathcal{H}$ that intersects each set $A_i$, where $i \in \{1, \, 2, \, \ldots, d\}$. For $d=2$, the GPIS oracle is nothing but the BIS oracle. We show that $d$-Hitting-Set, the hitting set problem for $d$-uniform hypergraphs, can be solved using $\widetilde{\mathcal{O}}_d(k^{d} \log n)$ GPIS queries. Additionally, we show that $d$-Decision-Hitting-Set, the decision version of $d$-Hitting-Set, can be solved with $\widetilde{\mathcal{O}}_d\left( \min \left\{ k^d\log n, k^{2d^2} \right\} \right)$ GPIS queries. We complement these parameterized upper bounds with an almost matching parameterized lower bound: any algorithm that solves $d$-Decision-Hitting-Set requires $\Omega \left( \binom{k+d}{d} \right)$ GPIS queries.
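A direct (and of course non-sublinear) simulation of the GPIS oracle on an explicitly stored hypergraph, for illustration only, looks as follows; the function and variable names are our own.

```python
def gpis_oracle(hyperedges, sets):
    """Sketch of the GPIS oracle for a d-uniform hypergraph: given d
    pairwise disjoint non-empty vertex sets A_1, ..., A_d, answer
    whether some hyperedge intersects every A_i. (For d = 2 this is
    the BIS oracle on graphs.)"""
    return any(
        all(any(v in A for v in edge) for A in sets)
        for edge in hyperedges
    )

# 3-uniform hypergraph with edges {1,2,3} and {4,5,6}.
H = [{1, 2, 3}, {4, 5, 6}]
print(gpis_oracle(H, [{1}, {2}, {3}]))  # True: edge {1,2,3} hits all sets
print(gpis_oracle(H, [{1}, {2}, {6}]))  # False: no single edge hits all
```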

Motivated by an application from geodesy, we introduce a novel clustering problem which is a $k$-center (or $k$-diameter) problem with a side constraint. For the side constraint, we are given an undirected connectivity graph $G$ on the input points, and a clustering is now only feasible if every cluster induces a connected subgraph in $G$. We call the resulting problems the connected $k$-center problem and the connected $k$-diameter problem. We prove several results on the complexity and approximability of these problems. Our main result is an $O(\log^2{k})$-approximation algorithm for the connected $k$-center and the connected $k$-diameter problem. For Euclidean metrics and metrics with constant doubling dimension, the approximation factor of this algorithm improves to $O(1)$. We also consider the special cases in which the connectivity graph is a line or a tree. For the line we give optimal polynomial-time algorithms, and for the case that the connectivity graph is a tree, we give either an optimal polynomial-time algorithm or a $2$-approximation algorithm for all variants of our model. We complement our upper bounds with several lower bounds.
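The side constraint is straightforward to verify for a given clustering; a minimal feasibility check, sketched here with the networkx library (an illustrative choice, with our own function name), is:

```python
import networkx as nx

def is_feasible_clustering(G, clusters):
    """Check the side constraint of the connected k-center/k-diameter
    model: a clustering is feasible iff every cluster induces a
    connected subgraph of the connectivity graph G."""
    return all(
        len(c) > 0 and nx.is_connected(G.subgraph(c))
        for c in clusters
    )

# Path 1-2-3-4: {1, 2} and {3, 4} are fine, but {1, 3} is disconnected.
G = nx.path_graph([1, 2, 3, 4])
print(is_feasible_clustering(G, [{1, 2}, {3, 4}]))  # True
print(is_feasible_clustering(G, [{1, 3}, {2, 4}]))  # False
```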

In 1971, Peter Andrews proposed the problem of finding an analog of the Skolem theorem for Simple Type Theory. A first idea led to a naive rule that worked only for Simple Type Theory with the axiom of choice, and the general case was only solved, more than ten years later, by Dale Miller. More recently, with Th{\'e}r{\`e}se Hardin and Claude Kirchner, we have proposed a new way to prove analogs of Miller's theorem for different, but equivalent, formulations of Simple Type Theory. In this paper, which contains no new technical results, I try to show that the history of the skolemization problem and of its various solutions is an illustration of a tension between two points of view on Simple Type Theory: the logical and the theoretical points of view.

In Bayesian inference, the approximation of integrals of the form $\psi = \mathbb{E}_{F}[l(X)] = \int_{\mathcal{X}} l(\mathbf{x}) \, \mathrm{d}F(\mathbf{x})$ is a fundamental challenge. Such integrals are crucial for evidence estimation, which is important for various purposes, including model selection and numerical analysis. The existing strategies for evidence estimation are classified into four categories: deterministic approximation, density estimation, importance sampling, and vertical representation (Llorente et al., 2020). In this paper, we show that the Riemann sum estimator due to Yakowitz (1978) can be used in the context of nested sampling (Skilling, 2006) to achieve an $O(n^{-4})$ rate of convergence, faster than the rate given by the usual ergodic central limit theorem. We provide a brief overview of the literature on Riemann sum estimators and on the nested sampling algorithm and its connections to vertical likelihood Monte Carlo. We provide theoretical and numerical arguments showing how merging these two ideas may result in improved and more robust estimators for evidence estimation, especially in higher-dimensional spaces. We also briefly discuss the idea of simulating the Lorenz curve, which avoids the problem of intractable $\Lambda$ functions that are essential for the vertical representation and nested sampling.
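For intuition, here is a crude one-dimensional comparison of a Riemann sum with random nodes against plain Monte Carlo; this illustrates Yakowitz's idea only schematically (a left-endpoint variant on $[0,1]$), while the paper's $O(n^{-4})$ rate concerns a refined estimator in the nested-sampling setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def riemann_sum_estimate(l, n):
    """Riemann sum with random nodes: sort n uniform samples on [0, 1]
    and integrate l by the induced left-endpoint step function. For
    smooth l this converges faster than plain Monte Carlo."""
    x = np.sort(rng.uniform(0.0, 1.0, n))
    x = np.concatenate([[0.0], x, [1.0]])  # pin down both endpoints
    return np.sum(l(x[:-1]) * np.diff(x))

l = lambda x: np.exp(-x)                   # true integral: 1 - 1/e
truth = 1 - np.exp(-1)
for n in [10, 100, 1000]:
    mc = np.mean(l(rng.uniform(0.0, 1.0, n)))  # plain Monte Carlo
    rs = riemann_sum_estimate(l, n)
    print(n, abs(mc - truth), abs(rs - truth))
```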
