
Minimum Weight Cycle (MWC) is the problem of finding a simple cycle of minimum weight in a graph $G=(V,E)$. This is a fundamental graph problem with classical sequential algorithms that run in $\tilde{O}(n^3)$ and $\tilde{O}(mn)$ time, where $n=|V|$ and $m=|E|$. In recent years this problem has received significant attention in the context of fine-grained sequential complexity and in the design of faster sequential approximation algorithms, but little is known about it in the distributed CONGEST model. We present sublinear-round approximation algorithms for computing MWC in directed graphs and in weighted graphs. Our algorithms use a variety of techniques in non-trivial ways: our approximate directed unweighted MWC algorithm efficiently computes BFS from all vertices, restricted to certain implicitly computed neighborhoods, in sublinear rounds, and our weighted approximation algorithms run unweighted MWC algorithms on scaled graphs, combined with a fast and streamlined method for computing approximate SSSP from multiple sources. We also present $\tilde{\Omega}(\sqrt{n})$ lower bounds for any constant-factor approximation of MWC in directed graphs and in undirected weighted graphs.
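As context for the directed unweighted case, the classical sequential baseline computes the minimum weight cycle (the girth, in unweighted graphs) by running one BFS per vertex; the CONGEST algorithm above parallelizes this idea over implicitly computed neighborhoods. A minimal sketch of that baseline, with an illustrative adjacency-dict representation (not the paper's construction):

```python
from collections import deque

def directed_girth(adj):
    """Minimum directed cycle length via BFS from every vertex, O(n*m).

    `adj` maps each vertex to its out-neighbours.  The shortest cycle
    through a root v has length dist(v, u) + 1 for some edge (u, v),
    so one BFS per root suffices.
    """
    best = float("inf")
    for v in adj:
        dist = {v: 0}
        q = deque([v])
        while q:
            u = q.popleft()
            if dist[u] + 1 >= best:      # BFS pops in distance order: prune
                break
            for w in adj[u]:
                if w == v:               # edge (u, v) closes a cycle through v
                    best = min(best, dist[u] + 1)
                elif w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
    return best

# A 3-cycle plus the back edge 1 -> 0: the shortest cycle has length 2.
print(directed_girth({0: [1], 1: [2, 0], 2: [0]}))   # -> 2
```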

Related Content

The rise of powerful AI models, more formally $\textit{General-Purpose AI Systems}$ (GPAIS), has led to impressive leaps in performance across a wide range of tasks. At the same time, researchers and practitioners alike have raised a number of privacy concerns, resulting in a wealth of literature covering various privacy risks and vulnerabilities of AI models. Existing surveys of these risks differ in focus, yielding disparate sets of privacy risks with no clear unifying taxonomy. We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS, as well as of proposed mitigation strategies. The developed privacy framework strives to unify the identified privacy risks and mitigations at a technical level accessible to non-experts, and serves as the basis for a practitioner-focused interview study assessing technical stakeholders' perceptions of privacy risks and mitigations in GPAIS.

In the Directed Steiner Tree (DST) problem the input is a directed edge-weighted graph $G=(V,E)$, a root vertex $r$, and a set $S \subseteq V$ of $k$ terminals. The goal is to find a min-cost subgraph that connects $r$ to each of the terminals. DST admits an $O(\log^2 k/\log \log k)$-approximation in quasi-polynomial time, and an $O(k^{\epsilon})$-approximation for any fixed $\epsilon > 0$ in polynomial time. Resolving the existence of a polynomial-time poly-logarithmic approximation is a major open problem in approximation algorithms. In a recent work, Friggstad and Mousavi [ICALP 2023] obtained a simple and elegant polynomial-time $O(\log k)$-approximation for DST in planar digraphs via Thorup's shortest path separator theorem. We build on their work and obtain several new results on DST and related problems.

- We develop a tree embedding technique for rooted problems in planar digraphs via an interpretation of the recursion in Friggstad and Mousavi [ICALP 2023]. Using this we obtain polynomial-time poly-logarithmic approximations for Group Steiner Tree, Covering Steiner Tree, and Polymatroid Steiner Tree in planar digraphs. All of these problems are hard to approximate to within a factor of $\Omega(\log^2 n/\log \log n)$ even on trees.
- We prove that the natural cut-based LP relaxation for DST (stated below) has an integrality gap of $O(\log^2 k)$ in planar graphs. This contrasts with general graphs, where the integrality gap of this LP is known to be $\Omega(k)$ and $\Omega(n^{\delta})$ for some fixed $\delta > 0$.
- We combine the preceding results with density-based arguments to obtain poly-logarithmic approximations for the multi-rooted versions of these problems in planar digraphs. For DST, our result improves on the $O(R + \log k)$-approximation of Friggstad and Mousavi [ICALP 2023] when $R = \omega(\log^2 k)$.
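For concreteness, the cut-based LP referenced above is the standard one, where $\delta^{-}(U)$ denotes the set of edges entering $U$:

$$\min \sum_{e \in E} c_e\, x_e \quad \text{s.t.} \quad \sum_{e \in \delta^{-}(U)} x_e \ge 1 \;\; \text{for all } U \subseteq V \setminus \{r\} \text{ with } U \cap S \neq \emptyset, \qquad x \ge 0.$$

Although it has exponentially many constraints, separation reduces to min-cut computations, so the LP is solvable in polynomial time.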

We propose a general-purpose approximation to the Ferguson-Klass algorithm for generating samples from L\'evy processes without Gaussian components. We show that the proposed method is more than 1000 times faster than the standard Ferguson-Klass algorithm without significant loss of precision. This method opens an avenue for computationally efficient and scalable Bayesian nonparametric models that go beyond conjugacy assumptions, as demonstrated in the examples section.
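The abstract does not spell out the construction, but the standard Ferguson-Klass algorithm it accelerates expresses the jumps of the process as $J_k = \psi^{-1}(\Gamma_k)$, where $\Gamma_k$ are arrival times of a unit-rate Poisson process and $\psi$ is the tail of the L\'evy intensity. A minimal sketch for a gamma process, where each jump requires a numerical inversion (the cost the proposed approximation avoids); parameter names and brackets are illustrative:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import exp1

def ferguson_klass_gamma(a=1.0, n_jumps=100, rng=None):
    """Standard Ferguson-Klass sampler for the jumps of a gamma process.

    Jump sizes are J_k = psi^{-1}(Gamma_k), where Gamma_k are arrival
    times of a unit-rate Poisson process and psi(x) = a * E_1(x) is the
    tail of the gamma Levy intensity nu(dx) = a x^{-1} e^{-x} dx.
    """
    rng = np.random.default_rng(rng)
    gammas = np.cumsum(rng.exponential(size=n_jumps))   # Poisson arrivals
    # Each jump needs a numerical root find; the bracket [1e-300, 50]
    # is valid for moderate a and n_jumps (a * E_1(1e-300) ~ 690 * a).
    jumps = [brentq(lambda x, g=g: a * exp1(x) - g, 1e-300, 50.0)
             for g in gammas]
    return np.array(jumps)   # decreasing jump sizes

# e.g. total mass of a truncated gamma process:
print(ferguson_klass_gamma(n_jumps=50).sum())
```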

Given a graph $G=(V,E)$ and an integer $k\in \mathbb{N}$, we investigate the 2-Eigenvalue Vertex Deletion (2-EVD) problem. The objective is to remove at most $k$ vertices such that the adjacency matrix of the resulting graph has at most two distinct eigenvalues. It is established that the adjacency matrix of a graph has at most two distinct eigenvalues if and only if the graph is a disjoint union of equal-sized cliques. Thus, 2-Eigenvalue Vertex Deletion amounts to removing a set of at most $k$ vertices to transform the graph into a disjoint union of equal-sized cliques. The 2-Eigenvalue Edge Editing (2-EEE), 2-Eigenvalue Edge Deletion (2-EED), and 2-Eigenvalue Edge Addition (2-EEA) problems are defined analogously. We present a kernel of size $\mathcal{O}(k^{3})$ for 2-EVD, along with an FPT algorithm running in $\mathcal{O}^{*}(2^{k})$ time. For 2-EEE, we provide a kernel of size $\mathcal{O}(k^{2})$. Additionally, we present linear kernels of size $5k$ and $6k$ for 2-EEA and 2-EED, respectively. For 2-EED, we also construct an algorithm with running time $\mathcal{O}^{*}(1.47^{k})$. These results address open questions posed by Misra et al. (ISAAC 2023) regarding the complexity of these problems when parameterized by the solution size.
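The characterization above is easy to verify numerically: a disjoint union of $t$ copies of the clique $K_p$ has exactly two distinct adjacency eigenvalues, $p-1$ and $-1$. A quick check (the helper name is illustrative):

```python
import numpy as np

def equal_clique_union(p, t):
    """Adjacency matrix of t disjoint copies of the clique K_p."""
    block = np.ones((p, p)) - np.eye(p)       # K_p adjacency
    return np.kron(np.eye(t), block)          # block-diagonal union

A = equal_clique_union(p=4, t=3)
# Exactly two distinct eigenvalues: p-1 = 3 and -1.
print(np.unique(np.round(np.linalg.eigvalsh(A), 8)))   # [-1.  3.]
```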

The continuous random energy model (CREM) is a toy model of spin glasses on $\{0,1\}^N$ that, in the limit, exhibits an infinitely hierarchical correlation structure. We give two polynomial-time algorithms, one based on a Markov chain and one on a sequential sampler, to approximately sample from the Gibbs distribution of the CREM in the high-temperature regime. The running time depends algebraically on the desired TV distance and failure probability, and exponentially on $(1/g')^{O(1)}$, where $g'$ is the gap to a certain inverse-temperature threshold; this contrasts with previous results, which only attain $o(N)$ accuracy in KL divergence. If the covariance function $A$ of the CREM is concave, the algorithms work up to the critical threshold $\beta_c$, which is the static phase transition point; moreover, for certain $A$, the algorithms work up to the known algorithmic threshold $\beta_G$ proposed in Addario-Berry and Maillard (2020) for non-trivial sampling guarantees. Our result depends on quantitative bounds for the fluctuations of the partition function and a new contiguity result for the ``tilted'' CREM obtained from sampling, which is of independent interest. We also show that the spectral gap is exponentially small with high probability, suggesting that the algebraic dependence is unavoidable with a Markov chain approach.
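As a point of reference (and not the paper's polynomial-time samplers), the Gibbs distribution in question can be sampled by brute force for small $N$ by materializing the Gaussian energies level by level; the sketch below assumes the standard tree construction in which level-$k$ increments have variance $N(A(k/N)-A((k-1)/N))$:

```python
import numpy as np

def crem_energies(N, A, rng):
    """Energies of a depth-N CREM with covariance function A.

    Edge increments at level k are independent centred Gaussians with
    variance N*(A(k/N) - A((k-1)/N)), so Cov(X_s, X_t) = N * A(overlap).
    """
    X = np.zeros(1)
    for k in range(1, N + 1):
        var = N * (A(k / N) - A((k - 1) / N))
        X = np.repeat(X, 2) + rng.normal(0.0, np.sqrt(var), size=2 ** k)
    return X

# Brute-force Gibbs sampling for a small instance (A(t) = t is concave);
# usable only as a correctness reference, since it enumerates all 2^N leaves.
rng = np.random.default_rng(0)
beta, N = 0.5, 12
X = crem_energies(N, lambda t: t, rng)
w = np.exp(beta * X - beta * X.max())            # stabilized Gibbs weights
sigma = rng.choice(2 ** N, p=w / w.sum())        # a sampled leaf of {0,1}^N
```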

Deep learning-based partial differential equation (PDE) solvers have received much attention in the past few years. Methods of this category can solve a wide range of PDEs with high accuracy, typically by transforming the problems into highly nonlinear optimization problems over neural network parameters. This work reviews several deep learning solvers proposed in recent years, including PINN, WAN, DRM, and VPINN. Numerical results are provided to compare them and to highlight the importance of the loss formulation and the optimization method. A rigorous error analysis for PINN is also presented. Finally, we discuss the current limitations and bottlenecks of these methods.
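To make the "loss formulation" point concrete, here is a minimal PINN sketch (in PyTorch, with illustrative hyperparameters) for $-u''(x) = \pi^2 \sin(\pi x)$ on $(0,1)$ with zero boundary values, whose exact solution is $u(x)=\sin(\pi x)$; WAN, DRM, and VPINN replace the strong-form residual below with weak or variational losses:

```python
import torch

# Small MLP ansatz for the solution u(x).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(128, 1, requires_grad=True)        # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = -d2u - torch.pi ** 2 * torch.sin(torch.pi * x)
    xb = torch.tensor([[0.0], [1.0]])                 # boundary points
    loss = residual.pow(2).mean() + net(xb).pow(2).mean()   # PDE + BC terms
    opt.zero_grad(); loss.backward(); opt.step()
```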

The $N$-point discrete Fourier transform (DFT) is a cornerstone of several signal processing applications. Many of these applications operate in real time, making the computational complexity of the DFT a critical performance indicator. Unfortunately, whether the $\mathcal{O}(N\log_2 N)$ time complexity of the fast Fourier transform (FFT) can be outperformed remains an unresolved question in the theory of computation. However, in many applications of the DFT -- such as compressive sensing, image processing, and wideband spectral analysis -- only a small fraction of the output signal needs to be computed because the signal is sparse. This motivates the development of algorithms that compute specific DFT coefficients more efficiently than the FFT algorithm. In this article, we show that the number of input points involved in computing certain DFT coefficients can be dramatically reduced by means of elementary mathematical properties. We present an algorithm that compacts the square index coefficients (SICs) of the DFT (i.e., $X_{k\sqrt{N}}$, $k=0,1,\cdots, \sqrt{N}-1$, for a square number $N$) from $N$ to $\sqrt{N}$ points at the expense of $N-1$ complex sums and no multiplication. Based on this, any regular DFT algorithm can be applied directly to compute the SICs with a reduced number of complex multiplications. If $N$ is a power of two, one can combine our algorithm with the FFT to calculate all SICs in $\mathcal{O}(\sqrt{N}\log_2\sqrt{N})$ time complexity.
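Under the standard DFT convention $X_k=\sum_{n=0}^{N-1} x_n e^{-2\pi i nk/N}$, the SIC $X_{k\sqrt{N}}$ depends on $n$ only through $n \bmod \sqrt{N}$, so summing the input over each residue class and taking a $\sqrt{N}$-point DFT yields all SICs at once. A quick numerical check of this folding identity:

```python
import numpy as np

N = 64                                   # square (also a power of two)
s = int(np.sqrt(N))
x = np.random.randn(N) + 1j * np.random.randn(N)

y = x.reshape(s, s).sum(axis=0)          # fold: y[m] = sum_j x[m + j*s]
sics = np.fft.fft(y)                     # sqrt(N)-point FFT of the folds
assert np.allclose(sics, np.fft.fft(x)[::s])   # matches X_{k*sqrt(N)}
```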

An effective exact method is proposed for computing generalized eigenspaces of a matrix of integers or rational numbers. The keys to our approach are the use of minimal annihilating polynomials and the concept of the Jordan-Krylov basis. A new method, called Jordan-Krylov elimination, is introduced to design an algorithm for computing a Jordan-Krylov basis. The resulting algorithm outputs generalized eigenspaces in the form of Jordan chains. Notably, in the output, the components of generalized eigenvectors are expressed as polynomials in the associated eigenvalue as a variable.
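For orientation, generalized eigenspaces of an exact rational matrix can also be computed by the textbook route of taking null spaces of powers of $A-\lambda I$; this is not the paper's Jordan-Krylov elimination, only a sketch fixing the objects involved:

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])

# For each eigenvalue lam with algebraic multiplicity m, the generalized
# eigenspace is the null space of (A - lam*I)^m, computed exactly.
for lam, alg_mult in A.eigenvals().items():
    space = ((A - lam * sp.eye(3)) ** alg_mult).nullspace()
    print(f"eigenvalue {lam}: generalized eigenspace of dimension {len(space)}")
```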

Estimating the density of a distribution from samples is a fundamental problem in statistics. In many practical settings, the Wasserstein distance is an appropriate error metric for density estimation. For example, when estimating population densities in a geographic region, a small Wasserstein distance means that the estimate captures roughly where the population mass is. In this work, we study differentially private density estimation in the Wasserstein distance. We design and analyze instance-optimal algorithms for this problem that can adapt to easy instances. For distributions $P$ over $\mathbb{R}$, we consider a strong notion of instance-optimality: an algorithm that uniformly achieves the instance-optimal estimation rate is competitive with an algorithm that is told that the distribution is either $P$ or $Q_P$ for some distribution $Q_P$ whose probability density function (pdf) is within a factor of 2 of the pdf of $P$. For distributions over $\mathbb{R}^2$, we use a different notion of instance-optimality: an algorithm is instance-optimal if it is competitive with an algorithm that is given a constant-factor multiplicative approximation of the density of the distribution. We characterize the instance-optimal estimation rates in both these settings and show that they are uniformly achievable (up to polylogarithmic factors). Our approach for $\mathbb{R}^2$ extends to arbitrary metric spaces, as it goes via hierarchically separated trees. As a special case, our results yield instance-optimal private learning in TV distance for discrete distributions.
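A small illustration of why $W_1$ is a forgiving metric for "where the mass is": in one dimension it equals the $L_1$ distance between CDFs, so a pure location error of $\delta$ costs exactly $\delta$. A sketch with illustrative values (not the paper's private algorithm):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=10_000)
shifted = samples + 0.05                         # small location error
print(wasserstein_distance(samples, shifted))    # exactly 0.05 for a pure shift
```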

Visual Question Answering (VQA) models have so far struggled with counting objects in natural images. We identify soft attention in these models as a fundamental cause of this problem. To circumvent it, we propose a neural network component that enables robust counting from object proposals. Experiments on a toy task show the effectiveness of this component, and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting the other categories, even outperforming ensemble models with our single model. On a difficult balanced-pair metric, the component improves counting over a strong baseline by 6.6%.
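A toy illustration of the identified problem: because soft attention normalizes its weights, the attended feature is unchanged when an object proposal is duplicated, so the count is invisible downstream. The names below are illustrative:

```python
import numpy as np

def attend(features, scores):
    """Soft attention: softmax-normalized weighted average of features."""
    w = np.exp(scores) / np.exp(scores).sum()
    return w @ features

cat = np.array([1.0, 0.0])                       # one "cat" proposal feature
one = attend(np.stack([cat]), np.array([5.0]))
two = attend(np.stack([cat, cat]), np.array([5.0, 5.0]))
print(one, two)                                   # identical -> the count is lost
```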
