
In this paper we study the problem of finding $(\epsilon, \phi)$-expander decompositions of a graph in the streaming model, in particular for dynamic streams of edge insertions and deletions. The goal is to partition the vertex set so that every component induces a $\phi$-expander, while the number of inter-cluster edges is only an $\epsilon$ fraction of the total volume. It was recently shown that there exists a simple algorithm to construct an $(O(\phi \log n), \phi)$-expander decomposition of an $n$-vertex graph using $\widetilde{O}(n/\phi^2)$ bits of space [Filtser, Kapralov, Makarov, ITCS'23]. This result raises the question of the extent to which a space dependence on the sparsity parameter $\phi$ is inherent. We move towards answering this question on two fronts. We prove that an $(O(\phi \log n), \phi)$-expander decomposition can be found using $\widetilde{O}(n)$ space, for every $\phi$. At the core of our result is the first streaming algorithm for computing boundary-linked expander decompositions, a recently introduced strengthening of the classical notion [Goranci et al., SODA'21]. The key advantage is that a classical sparsifier [Fung et al., STOC'11], whose size is independent of $\phi$, preserves the cuts inside the clusters of a boundary-linked expander decomposition up to a multiplicative error. Notable algorithmic applications use sequences of expander decompositions; in particular, one often repeatedly computes a decomposition of the subgraph induced by the inter-cluster edges (e.g., the seminal work of Spielman and Teng on spectral sparsifiers [Spielman, Teng, SIAM Journal on Computing 40(4)], or the recent maximum flow breakthrough [Chen et al., FOCS'22], among others). We prove that any streaming algorithm that computes a sequence of $(O(\phi \log n), \phi)$-expander decompositions requires $\widetilde{\Omega}(n/\phi)$ bits of space, even in insertion-only streams.
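To make the decomposition condition concrete, the following is a minimal brute-force checker (a sketch of ours, not the streaming algorithm of the paper; the function names are hypothetical) that verifies whether a given vertex partition of a small graph is an $(\epsilon, \phi)$-expander decomposition.

```python
from itertools import combinations

def conductance(adj, S, C):
    """Conductance of the cut (S, C \\ S) in the subgraph induced on cluster C."""
    S, C = set(S), set(C)
    cut = sum(1 for u in S for v in adj[u] if v in C - S)
    vol_S = sum(1 for u in S for v in adj[u] if v in C)
    vol_rest = sum(1 for u in C - S for v in adj[u] if v in C)
    denom = min(vol_S, vol_rest)
    return cut / denom if denom > 0 else float("inf")

def is_expander_decomposition(adj, clusters, eps, phi):
    # Condition 1: every cluster induces a phi-expander
    # (checked by brute force over all cuts; tiny graphs only).
    for C in clusters:
        C = list(C)
        for r in range(1, len(C)):
            for S in combinations(C, r):
                if conductance(adj, S, C) < phi:
                    return False
    # Condition 2: inter-cluster edges are at most an eps fraction
    # of the total volume (the sum of all degrees).
    label = {v: i for i, C in enumerate(clusters) for v in C}
    inter = sum(1 for u in adj for v in adj[u] if label[u] != label[v]) // 2
    volume = sum(len(adj[u]) for u in adj)
    return inter <= eps * volume

# Two triangles joined by a single inter-cluster edge:
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(is_expander_decomposition(adj, [{0, 1, 2}, {3, 4, 5}], eps=0.1, phi=0.3))
```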

Related Content

This note provides a basic description of subgaussianity, defining $(\sigma, \rho)$-subgaussian random variables $X$ ($\sigma>0, \rho>0$) as those satisfying $\mathbb{E}(\exp(\lambda X))\leq \rho\exp(\frac{1}{2}\sigma^2\lambda^2)$ for all $\lambda\in\mathbb{R}$. The introduction of the parameter $\rho$ may be particularly useful for refining bounds, or for reconciling results from different sources, in the analysis of stochastic processes and concentration inequalities.
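As a quick sanity check (our worked example, not part of the note), a centered Gaussian satisfies the definition with $\rho = 1$, since its moment generating function attains the bound with equality:

```latex
% For X ~ N(0, sigma^2) the moment generating function is exact:
\[
  \mathbb{E}\bigl(\exp(\lambda X)\bigr)
    = \exp\!\Bigl(\tfrac{1}{2}\sigma^2\lambda^2\Bigr)
    \le \rho \exp\!\Bigl(\tfrac{1}{2}\sigma^2\lambda^2\Bigr)
    \quad \text{for all } \lambda \in \mathbb{R},\ \rho \ge 1,
\]
% so X is (sigma, 1)-subgaussian; values rho > 1 buy slack for
% variables whose MGF only approximately matches a Gaussian's.
```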

Despite the success of input transformation-based attacks in boosting adversarial transferability, their performance remains unsatisfactory because they ignore the discrepancies across models. In this paper, we propose a simple but effective feature augmentation attack (FAUG) method, which improves adversarial transferability without introducing extra computation cost. Specifically, we inject random noise into the intermediate features of the model to enlarge the diversity of the attack gradients, thereby mitigating the risk of overfitting to a specific model and notably amplifying adversarial transferability. Moreover, our method can be combined with existing gradient-based attacks to further boost their performance. Extensive experiments conducted on the ImageNet dataset across CNN and transformer models corroborate the efficacy of our method; e.g., we achieve improvements of +26.22% and +5.57% over input transformation-based attacks and combination methods, respectively.
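A minimal PyTorch sketch of the idea (our illustration; the hooked layer, noise scale `sigma`, number of noisy passes, and the single FGSM-style step are all assumptions, not the paper's exact recipe):

```python
import torch

def fgsm_with_feature_noise(model, layer, x, y, eps=8 / 255, sigma=0.1, passes=4):
    """Average attack gradients over several forward passes in which a hook
    injects Gaussian noise into `layer`'s output, diversifying the gradient."""
    loss_fn = torch.nn.CrossEntropyLoss()
    handle = layer.register_forward_hook(
        lambda mod, inp, out: out + sigma * torch.randn_like(out)  # feature noise
    )
    grad = torch.zeros_like(x)
    try:
        for _ in range(passes):
            x_adv = x.clone().detach().requires_grad_(True)
            loss = loss_fn(model(x_adv), y)
            grad += torch.autograd.grad(loss, x_adv)[0]
    finally:
        handle.remove()  # restore the clean forward pass
    # One FGSM-style step using the averaged, feature-augmented gradient.
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```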

The goal of this paper is to investigate the complexity of gradient algorithms when learning sparse functions (juntas). We introduce a type of Statistical Query ($\mathsf{SQ}$), which we call a Differentiable Learning Query ($\mathsf{DLQ}$), to model gradient queries on a specified loss with respect to an arbitrary model. We provide a tight characterization of the query complexity of $\mathsf{DLQ}$ for learning the support of a sparse function over generic product distributions. This complexity crucially depends on the loss function. For the squared loss, $\mathsf{DLQ}$ matches the complexity of Correlation Statistical Queries ($\mathsf{CSQ}$), which can be much worse than $\mathsf{SQ}$. But for other simple loss functions, including the $\ell_1$ loss, $\mathsf{DLQ}$ always achieves the same complexity as $\mathsf{SQ}$. We also provide evidence that $\mathsf{DLQ}$ can indeed capture learning with (stochastic) gradient descent, by showing that it correctly describes the complexity of learning with a two-layer neural network in the mean-field regime with linear scaling.
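To see why the loss matters (a one-line derivation of ours, consistent with the abstract's claim about the squared loss):

```latex
% With the squared loss, the gradient query decomposes as
\[
  \mathbb{E}\,\nabla_\theta \tfrac{1}{2}\bigl(f_\theta(x) - y\bigr)^2
  = \underbrace{\mathbb{E}\bigl[f_\theta(x)\,\nabla_\theta f_\theta(x)\bigr]}_{\text{independent of } y}
  - \underbrace{\mathbb{E}\bigl[y\,\nabla_\theta f_\theta(x)\bigr]}_{\text{a correlation query}},
\]
% so such a query reveals no more about y than a CSQ does, whereas a
% nonlinear loss such as the l1 loss, whose gradient is sign(f - y) grad f,
% mixes higher-order information about y into the query.
```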

Differentiable simulators continue to push the state of the art across a range of domains, including computational physics, robotics, and machine learning. Their main value is the ability to compute gradients of physical processes, which allows differentiable simulators to be readily integrated into commonly employed gradient-based optimization schemes. To achieve this, a number of design decisions need to be made, representing trade-offs in versatility, computational speed, and accuracy of the gradients obtained. This paper presents an in-depth review of the evolving landscape of differentiable physics simulators. We introduce the foundations and core components of differentiable simulators alongside common design choices. This is followed by a practical guide to, and overview of, open-source differentiable simulators that have been used in past research. Finally, we review and contextualize prominent applications of differentiable simulation. By offering a comprehensive review of the current state of the art in differentiable simulation, this work aims to serve as a resource for researchers and practitioners looking to understand and integrate differentiable physics into their research. We conclude by highlighting current limitations and providing insights into future directions for the field.
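The core mechanism fits in a few lines (a toy example of ours, not drawn from the survey): simulate a spring-mass system with differentiable operations, then let autodiff produce the gradient of the outcome with respect to a physical parameter.

```python
import jax
import jax.numpy as jnp

def simulate(stiffness, steps=100, dt=0.01):
    """Explicit-Euler rollout of a unit-mass spring; every op is differentiable."""
    def step(state, _):
        x, v = state
        a = -stiffness * x                    # Hooke's law
        return (x + dt * v, v + dt * a), None
    (x, _), _ = jax.lax.scan(step, (1.0, 0.0), None, length=steps)
    return x                                  # final position

# Gradient of a loss on the final position w.r.t. the stiffness parameter:
dloss = jax.grad(lambda k: (simulate(k) - 0.5) ** 2)
k = 4.0
for _ in range(50):                           # plain gradient descent on k
    k = k - 0.1 * dloss(k)
```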

We prove that for any integers $\alpha, \beta > 1$, the existential fragment of the first-order theory of the structure $\langle \mathbb{Z}; 0,1,<, +, \alpha^{\mathbb{N}}, \beta^{\mathbb{N}}\rangle$ is decidable (where $\alpha^{\mathbb{N}}$ is the set of positive integer powers of $\alpha$, and likewise for $\beta^{\mathbb{N}}$). On the other hand, we show by way of hardness that decidability of the existential fragment of the theory of $\langle \mathbb{N}; 0,1, <, +, x\mapsto \alpha^x, x \mapsto \beta^x\rangle$ for any multiplicatively independent $\alpha,\beta > 1$ would lead to mathematical breakthroughs regarding base-$\alpha$ and base-$\beta$ expansions of certain transcendental numbers.
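For concreteness (our example, with $\alpha = 2$ and $\beta = 3$), the decidable existential fragment covers sentences such as:

```latex
% "Some power of 2 is immediately followed by a power of 3":
\[
  \exists x\,\exists y\;\bigl(\, x \in 2^{\mathbb{N}} \,\wedge\, y \in 3^{\mathbb{N}} \,\wedge\, x + 1 = y \,\bigr),
\]
% witnessed by 2 + 1 = 3 and 8 + 1 = 9; by Mihailescu's theorem (formerly
% Catalan's conjecture), 8 + 1 = 9 is the only witness with both exponents >= 2.
```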

We study the algorithmic task of finding large independent sets in Erd\H{o}s--R\'enyi $r$-uniform hypergraphs on $n$ vertices having average degree $d$. Krivelevich and Sudakov showed that the maximum independent set has density $\left(\frac{r\log d}{(r-1)d}\right)^{1/(r-1)}$ asymptotically. We show that the class of low-degree polynomial algorithms can find independent sets of density $\left(\frac{\log d}{(r-1)d}\right)^{1/(r-1)}$ but no larger. This extends and generalizes earlier results of Gamarnik and Sudan, Rahman and Vir\'ag, and Wein on graphs, and answers a question of Bal and Bennett. We conjecture that this statistical-computational gap holds for this problem. Additionally, we explore the universality of this gap by examining $r$-partite hypergraphs. A hypergraph $H=(V,E)$ is $r$-partite if there is a partition $V=V_1\cup\cdots\cup V_r$ such that each edge contains exactly one vertex from each set $V_i$. We consider the problem of finding large balanced independent sets (independent sets containing the same number of vertices in each part) in random $r$-partite hypergraphs with $n$ vertices in each part and average degree $d$. We prove that the maximum balanced independent set has density $\left(\frac{r\log d}{(r-1)d}\right)^{1/(r-1)}$ asymptotically. Furthermore, we prove an analogous low-degree computational threshold of $\left(\frac{\log d}{(r-1)d}\right)^{1/(r-1)}$. Our results recover and generalize recent work of Perkins and the second author on bipartite graphs. While the graph case has been extensively studied, this work is the first to consider statistical-computational gaps of optimization problems on random hypergraphs. Our results suggest that these gaps persist for larger uniformities as well as across many models. A somewhat surprising aspect of the gap for balanced independent sets is that the algorithm achieving the lower bound is a simple degree-1 polynomial.
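To illustrate what a degree-1 algorithm can look like (a toy heuristic of ours, not the authors' construction): a vertex's degree is a linear, i.e. degree-1, polynomial of the edge-indicator variables, and one keeps the lowest-degree vertices before a greedy repair pass.

```python
import random
from collections import defaultdict

def low_degree_heuristic(n, edges, keep_frac=0.3):
    """Toy degree-1 selection for an r-uniform hypergraph on vertices 0..n-1."""
    deg = defaultdict(int)
    for e in edges:
        for v in e:
            deg[v] += 1                       # degree: a degree-1 polynomial
    kept = set(sorted(range(n), key=lambda v: deg[v])[: int(keep_frac * n)])
    for e in edges:                           # repair: break any contained edge
        if all(v in kept for v in e):
            kept.discard(max(e, key=lambda v: deg[v]))
    return kept

# Random 3-uniform hypergraph with average degree 3 * m / n = 6:
n, m = 300, 600
edges = [tuple(random.sample(range(n), 3)) for _ in range(m)]
ind = low_degree_heuristic(n, edges)
assert all(not all(v in ind for v in e) for e in edges)  # independence holds
```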

Given integers $\Delta\ge 2$ and $t\ge 2\Delta$, suppose there is a graph of maximum degree $\Delta$ and a partition of its vertices into blocks of size at least $t$. By a seminal result of Haxell, there must be some independent set of the graph that is transversal to the blocks, a so-called independent transversal. We show that, if moreover $t\ge2\Delta+1$, then every independent transversal can be transformed within the space of independent transversals into any other through a sequence of one-vertex modifications, establishing connectivity of the so-called reconfigurability graph of independent transversals. This is sharp in that for $t=2\Delta$ (and $\Delta\ge 2$) the connectivity conclusion can fail. In this case we show, furthermore, that in an essential sense connectivity can only fail for the disjoint union of copies of the complete bipartite graph $K_{\Delta,\Delta}$. This constitutes a qualitative strengthening of Haxell's theorem.

In this paper, we study several generalizations of multiway cut, where the terminals can be chosen as \emph{representatives} from sets of \emph{candidates} $T_1,\ldots,T_q$. In this setting, one is allowed to choose these representatives so that the minimum-weight cut separating these sets \emph{via their representatives} is as small as possible. We distinguish different cases depending on (A) whether the representative of a candidate set has to be separated from the other candidate sets completely or only from their representatives, and (B) whether there is a single representative for each candidate set or the choice of representative is independent for each pair of candidate sets. For fixed $q$, we give approximation algorithms for each of these problems that match the best known approximation guarantee for multiway cut. Our technical contribution is a new extension of the CKR relaxation that preserves approximation guarantees. For general $q$, we show $o(\log q)$-inapproximability for all cases where the choice of representatives may depend on the pair of candidate sets, as well as for the case where the goal is to separate a fixed node from a single representative of each candidate set. As a positive result, we give a $2$-approximation algorithm for the case where we need to choose a single representative from each candidate set. This is a generalization of the $(2-2/k)$-approximation for $k$-cut, and we solve it by relating the tree case to optimization over a gammoid.
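For orientation (our formalization of one variant, the single-representative case where only the representatives must be separated):

```latex
\[
  \min_{t_1 \in T_1,\,\ldots,\,t_q \in T_q}\;
  \min\Bigl\{\, w(F) : F \subseteq E,\ \text{no two of } t_1,\ldots,t_q
    \text{ lie in the same component of } G - F \,\Bigr\},
\]
% which reduces to the classical multiway cut problem when every
% candidate set T_i is a singleton.
```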

Many organizations use algorithms that have a disparate impact, i.e., the benefits or harms of the algorithm fall disproportionately on certain social groups. Addressing an algorithm's disparate impact can be challenging, however, because it is often unclear whether it is possible to reduce this impact without sacrificing other objectives of the organization, such as accuracy or profit. Establishing the improvability of algorithms with respect to multiple criteria is of both conceptual and practical interest: in many settings, disparate impact that would otherwise be prohibited under US federal law is permissible if it is necessary to achieve a legitimate business interest. The question is how a policy-maker can formally substantiate, or refute, this "necessity" defense. In this paper, we provide an econometric framework for testing the hypothesis that it is possible to improve on the fairness of an algorithm without compromising on other pre-specified objectives. Our proposed test is simple to implement and can be applied under any exogenous constraint on the algorithm space. We establish the large-sample validity and consistency of our test, and illustrate its practical application by evaluating a healthcare algorithm originally considered by Obermeyer et al. (2019). In this application, we reject the null hypothesis that it is not possible to reduce the algorithm's disparate impact without compromising the accuracy of its predictions.
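A heavily simplified illustration of the underlying question (our toy sketch, not the paper's econometric test): given estimated (accuracy, disparity) pairs for the feasible algorithms, ask whether any of them improves disparity without losing accuracy beyond a sampling-error tolerance.

```python
def necessity_defense_looks_refutable(candidates, baseline, tol):
    """candidates: list of (accuracy, disparity) estimates for feasible
    algorithms; baseline: the deployed algorithm's pair; tol: slack for
    sampling error. A real test would control error rates formally."""
    acc0, disp0 = baseline
    return any(acc >= acc0 - tol and disp < disp0 - tol
               for acc, disp in candidates)

# The second candidate matches the baseline's accuracy but is fairer:
print(necessity_defense_looks_refutable(
    candidates=[(0.90, 0.30), (0.91, 0.18)],
    baseline=(0.90, 0.25), tol=0.01))  # True
```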

Let $X$ be an $n$-element point set in the $k$-dimensional unit cube $[0,1]^k$, where $k \geq 2$. According to an old result of Bollob\'as and Meir (1992), there exists a cycle (tour) $x_1, x_2, \ldots, x_n$ through the $n$ points, with $x_{n+1} \equiv x_1$, such that $\left(\sum_{i=1}^n |x_i - x_{i+1}|^k \right)^{1/k} \leq c_k$, where $|x-y|$ is the Euclidean distance between $x$ and $y$, and $c_k$ is a constant depending only on $k$. In the other direction, for every $k \geq 2$ and $n \geq 2$, there exist $n$ points in $[0,1]^k$ such that their shortest tour satisfies $\left(\sum_{i=1}^n |x_i - x_{i+1}|^k \right)^{1/k} = 2^{1/k} \cdot \sqrt{k}$. For the plane, the best constant is $c_2=2$, and this is the only exact value known. Bollob\'as and Meir showed that one can take $c_k = 9 \left(\frac23 \right)^{1/k} \cdot \sqrt{k}$ for every $k \geq 3$ and conjectured that the best constant is $c_k = 2^{1/k} \cdot \sqrt{k}$ for every $k \geq 2$. Here we significantly improve the upper bound and show that one can take $c_k = 3 \sqrt5 \left(\frac23 \right)^{1/k} \cdot \sqrt{k}$, or $c_k = 2.91 \sqrt{k} \ (1+o_k(1))$. Our bounds are constructive. We also show that $c_3 \geq 2^{7/6}$, which disproves the conjecture for $k=3$. Connections to matching problems, power assignment problems, and other related problems, including algorithms, are discussed in this context. A slightly revised version of the Bollob\'as--Meir conjecture is proposed.
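The quantity in question is easy to experiment with (helper names are ours; the brute force is only feasible for tiny $n$). For instance, two antipodal corners of $[0,1]^k$ realize the lower bound $2^{1/k}\sqrt{k}$, since the closed tour traverses the main diagonal of length $\sqrt{k}$ twice.

```python
import math
from itertools import permutations

def tour_cost(points, order, k):
    """( sum over the closed tour of |x_i - x_{i+1}|^k )^(1/k)."""
    total = 0.0
    for i in range(len(order)):
        a, b = points[order[i]], points[order[(i + 1) % len(order)]]
        total += math.dist(a, b) ** k
    return total ** (1 / k)

def best_tour_cost(points, k):
    n = len(points)  # fix point 0 first; try all orders of the rest
    return min(tour_cost(points, (0,) + p, k)
               for p in permutations(range(1, n)))

k = 3
pts = [(0.0,) * k, (1.0,) * k]
print(best_tour_cost(pts, k), 2 ** (1 / k) * math.sqrt(k))  # both ~2.1822
```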
