
A kernelization for a parameterized decision problem $\mathcal{Q}$ is a polynomial-time preprocessing algorithm that reduces any parameterized instance $(x,k)$ to an instance $(x',k')$ whose size is bounded by a function of $k$ alone and which has the same yes/no answer for $\mathcal{Q}$. Such preprocessing algorithms cannot exist in the context of counting problems, when the answer to be preserved is the number of solutions, since this number can be arbitrarily large compared to $k$. However, we show that for counting minimum feedback vertex sets of size at most $k$, and for counting minimum dominating sets of size at most $k$ in a planar graph, there is a polynomial-time algorithm that either outputs the answer or reduces the input to an instance $(G',k')$ of size polynomial in $k$ with the same number of minimum solutions. This shows that a meaningful theory of kernelization for counting problems is possible and opens the door to future developments. Our algorithms exploit the fact that if the number of solutions exceeds $2^{\mathsf{poly}(k)}$, then the size of the input is exponential in $k$, so the running time of a parameterized counting algorithm is bounded by $\mathsf{poly}(n)$. Otherwise, we can use gadgets that slightly increase $k$ to represent choices among $2^{O(k)}$ options using only $\mathsf{poly}(k)$ vertices.
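
The quantity being preserved here can be made concrete with a brute-force reference implementation (exponential in $n$ and purely illustrative; all names are our own): enumerate vertex subsets of size at most $k$, smallest first, and count those whose removal leaves a forest.

```python
from itertools import combinations

def count_min_feedback_vertex_sets(n, edges, k):
    """Count minimum feedback vertex sets of size at most k by brute force.

    S is a feedback vertex set of G iff G - S is acyclic (a forest).
    Exponential in n; illustrative only.
    """
    def leaves_forest(removed):
        # Union-find: an undirected graph is a forest iff no edge joins
        # two vertices already in the same component.
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in edges:
            if u in removed or v in removed:
                continue
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
        return True

    for size in range(k + 1):  # try smaller sizes first
        count = sum(1 for S in combinations(range(n), size)
                    if leaves_forest(set(S)))
        if count:
            return size, count
    return None  # no feedback vertex set of size at most k

# Example: the 4-cycle has four minimum feedback vertex sets, each one vertex.
# count_min_feedback_vertex_sets(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 2) == (1, 4)
```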

Related content

For $d \ge 2$, let $X$ be a random vector having a Bingham distribution on $\mathcal{S}^{d-1}$, the unit sphere centered at the origin in $\mathbb{R}^d$, and let $\Sigma$ denote the symmetric matrix parameter of the distribution. Let $\Psi_d(\Sigma)$ be the normalizing constant of the distribution and let $\nabla \Psi_d(\Sigma)$ be the matrix of first-order partial derivatives of $\Psi_d(\Sigma)$ with respect to the entries of $\Sigma$. We derive complete asymptotic expansions for $\Psi_d(\Sigma)$ and $\nabla \Psi_d(\Sigma)$ as $d \to \infty$; these expansions are obtained subject to the growth condition that $\|\Sigma\|$, the Frobenius norm of $\Sigma$, satisfies $\|\Sigma\| \le \gamma_0 d^{r/2}$ for all $d$, where $\gamma_0 > 0$ and $r \in [0,1)$. Consequently, we obtain for the covariance matrix of $X$ an asymptotic expansion up to terms of arbitrary degree in $\Sigma$. For a range of values of $d$ that have appeared in a variety of applications of high-dimensional spherical data analysis, we tabulate the bounds on the remainder terms in the expansions of $\Psi_d(\Sigma)$ and $\nabla \Psi_d(\Sigma)$ and demonstrate the rapid convergence of the bounds to zero as $r$ decreases.
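
For context, the objects in play are the following (stated with our own normalization conventions, which may differ by constants elsewhere):
\[
f_\Sigma(x) \;=\; \Psi_d(\Sigma)^{-1} \exp\!\left(x^{\top} \Sigma\, x\right),
\qquad
\Psi_d(\Sigma) \;=\; \int_{\mathcal{S}^{d-1}} \exp\!\left(x^{\top} \Sigma\, x\right) d\mu(x),
\]
where $\mu$ is the uniform measure on $\mathcal{S}^{d-1}$. Differentiating under the integral sign gives $\partial \Psi_d(\Sigma)/\partial \sigma_{ii} = \Psi_d(\Sigma)\,\mathbb{E}[X_i^2]$, so the matrix of second moments is $\mathbb{E}[XX^{\top}] = \nabla \Psi_d(\Sigma)/\Psi_d(\Sigma)$, up to the convention adopted for derivatives with respect to the off-diagonal entries of a symmetric matrix; since the density is antipodally symmetric, $\mathbb{E}[X] = 0$ and this matrix is exactly the covariance of $X$.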

In this paper, we consider the \emph{planar two-center problem}: Given a set $S$ of $n$ points in the plane, the goal is to find two smallest congruent disks whose union contains all points of $S$. We present an $O(n\log n)$-time algorithm for the planar two-center problem. This matches the best known lower bound of $\Omega(n\log n)$ and improves upon the previously best known algorithms, which take $O(n\log^2 n)$ time.
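
As a point of reference, a deliberately naive exact solver (our own names; exponential time, so usable only for tiny $n$) makes the problem statement concrete: try every bipartition of $S$ and cover each side with its minimum enclosing disk.

```python
from itertools import combinations
from math import dist

def mec_radius(pts):
    """Radius of the minimum enclosing disk of pts, by brute force over
    candidate disks determined by pairs (as diameter) and triples
    (as circumcircle)."""
    if len(pts) <= 1:
        return 0.0
    def covers(c, r):
        return all(dist(c, p) <= r + 1e-9 for p in pts)
    best = float("inf")
    for a, b in combinations(pts, 2):
        c = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
        r = dist(a, b) / 2
        if r < best and covers(c, r):
            best = r
    for a, b, c in combinations(pts, 3):
        den = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1])
                   + c[0] * (a[1] - b[1]))
        if abs(den) < 1e-12:
            continue  # collinear triple: pair candidates already cover it
        sa, sb, sc = a[0]**2 + a[1]**2, b[0]**2 + b[1]**2, c[0]**2 + c[1]**2
        ux = (sa * (b[1] - c[1]) + sb * (c[1] - a[1]) + sc * (a[1] - b[1])) / den
        uy = (sa * (c[0] - b[0]) + sb * (a[0] - c[0]) + sc * (b[0] - a[0])) / den
        r = dist((ux, uy), a)
        if r < best and covers((ux, uy), r):
            best = r
    return best

def two_center_radius(pts):
    """Exact two-center radius by trying every bipartition of pts
    (exponential in n; contrast with the paper's O(n log n) algorithm)."""
    n = len(pts)
    best = mec_radius(pts)  # degenerate case: one disk covers everything
    for mask in range(1, 1 << max(n - 1, 0)):  # fix pts[-1] on one side
        left = [p for i, p in enumerate(pts) if mask >> i & 1]
        right = [p for i, p in enumerate(pts) if not mask >> i & 1]
        best = min(best, max(mec_radius(left), mec_radius(right)))
    return best
```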

Given a function $f$ from the set $[N]$ to a $d$-dimensional integer grid, we consider data structures that allow efficient orthogonal range searching queries in the image of $f$, without explicitly storing it. We show that, if $f$ is of the form $[N]\to [2^{w}]^d$ for some $w=\mathrm{polylog}(N)$ and is computable in constant time, then, for any $0<\alpha <1$, we can obtain a data structure using $\tilde {O}(N^{1-\alpha / 3})$ words of space such that, for a given $d$-dimensional axis-aligned box $B$, we can search for some $x\in [N]$ such that $f(x) \in B$ in time $\tilde{O}(N^{\alpha})$. This result is obtained simply by combining integer range searching with the Fiat-Naor function inversion scheme, which was already used in data-structure problems previously. We further obtain
- data structures for range counting and reporting, predecessor, selection, and ranking queries, and combinations thereof, on the set $f([N])$,
- data structures for preimage size and preimage selection queries for a given value of $f$, and
- data structures for selection and ranking queries on geometric quantities computed from tuples of points in $d$-space.
These results unify and generalize previously known results on 3SUM-indexing and string searching, and are widely applicable as a black box to a variety of problems. In particular, we give a data structure for a generalized version of gapped string indexing, and show how to preprocess a set of points on an integer grid in order to efficiently compute (in sublinear time), for points contained in a given axis-aligned box, their Theil-Sen estimator, the $k$th largest-area triangle, or the induced hyperplane that is the $k$th furthest from the origin.
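
The query semantics can be pinned down by the trivial zero-space endpoint of this tradeoff (a minimal sketch; the `box` representation and all names are our own):

```python
def search_image(f, N, box):
    """Scan all x in [N] and test whether f(x) lies in the axis-aligned box,
    given as one (lo, hi) interval per coordinate. Takes O(N) time and no
    extra space; the paper's structure answers the same query in ~N^alpha
    time after precomputing ~N^(1 - alpha/3) words."""
    for x in range(N):
        if all(lo <= yi <= hi for yi, (lo, hi) in zip(f(x), box)):
            return x
    return None

# Example with an illustrative constant-time f: [N] -> [2^w]^2:
# search_image(lambda x: (x * x % 1021, x % 97), 10**6, [(0, 10), (0, 3)])
```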

We study local filters for the Lipschitz property of real-valued functions $f: V \to [0,r]$, where the Lipschitz property is defined with respect to an arbitrary undirected graph $G=(V,E)$. We give nearly optimal local Lipschitz filters both with respect to $\ell_1$-distance and $\ell_0$-distance. Previous work only considered unbounded-range functions over $[n]^d$. Jha and Raskhodnikova (SICOMP '13) gave an algorithm for such functions with lookup complexity exponential in $d$, which Awasthi et al. (ACM Trans. Comput. Theory) showed was necessary in this setting. We demonstrate that important applications of local Lipschitz filters can be accomplished with filters for functions with bounded range. For functions $f: [n]^d\to [0,r]$, we circumvent the lower bound and achieve running time $(d^r\log n)^{O(\log r)}$ for the $\ell_1$-respecting filter and $d^{O(r)}\,\text{polylog}\, n$ for the $\ell_0$-respecting filter. Our local filters provide a novel Lipschitz extension that can be implemented locally. Furthermore, we show that our algorithms have nearly optimal dependence on $r$ for the domain $\{0,1\}^d$. In addition, our lower bound resolves an open question of Awasthi et al., removing one of the conditions necessary for their lower bound for general range. We prove our lower bound via a reduction from distribution-free Lipschitz testing and a new technique for proving hardness for {\em adaptive} algorithms. We provide two applications of our local filters to arbitrary real-valued functions. In the first application, we use them in conjunction with the Laplace mechanism for differential privacy and noisy binary search to provide mechanisms for privately releasing outputs of black-box functions, even in the presence of malicious clients. In the second application, we use our local filters to obtain the first nontrivial tolerant tester for the Lipschitz property.
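
To fix notation, here is a sketch of the basic objects (assumed adjacency-list input; names are our own). The extension shown is the classical global repair, in contrast to the local filters the paper constructs, which must answer from only a few lookups of $f$:

```python
from collections import deque

def bfs_dist(adj, s):
    """Hop distances from s in an undirected graph given as adjacency lists."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def is_lipschitz(f, edges):
    """f is Lipschitz w.r.t. G iff |f(u) - f(v)| <= d_G(u, v) for all u, v,
    which holds iff |f(u) - f(v)| <= 1 on every edge."""
    return all(abs(f[u] - f[v]) <= 1 for u, v in edges)

def mcshane_extension(f, adj, r):
    """Classical *global* Lipschitz repair g(x) = min_v (f(v) + d_G(x, v)),
    clipped to [0, r]; if f is already Lipschitz then g = f. A local filter
    must produce such a g(x) from few lookups of f."""
    g = {}
    for v in adj:
        dv = bfs_dist(adj, v)
        for x in adj:
            g[x] = min(g.get(x, float("inf")), f[v] + dv.get(x, float("inf")))
    return {x: min(max(val, 0), r) for x, val in g.items()}
```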

Given an edge-weighted (metric/general) complete graph with $n$ vertices, where $k$ divides $n$, the maximum weight (metric/general) $k$-cycle/path packing problem is to find a set of $\frac{n}{k}$ vertex-disjoint $k$-cycles/paths such that the total weight is maximized. In this paper, we consider approximation algorithms. For metric $k$-cycle packing, we improve the previous approximation ratio from $3/5$ to $7/10$ for $k=5$, and from $7/8\cdot(1-1/k)^2$ for $k>5$ to $(7/8-0.125/k)(1-1/k)$ for constant odd $k>5$ and to $7/8\cdot (1-1/k+\frac{1}{k(k-1)})$ for even $k>5$. For metric $k$-path packing, we improve the approximation ratio from $7/8\cdot (1-1/k)$ to $\frac{27k^2-48k+16}{32k^2-36k-24}$ for even $k$ with $6\leq k\leq 10$. For the case of $k=4$, we improve the approximation ratio from $3/4$ to $5/6$ for metric 4-cycle packing, from $2/3$ to $3/4$ for general 4-cycle packing, and from $3/4$ to $14/17$ for metric 4-path packing.

Multimodal summarization with multimodal output (MSMO) has emerged as a promising research direction. Nonetheless, numerous limitations exist within existing public MSMO datasets, including insufficient maintenance, data inaccessibility, limited size, and the absence of proper categorization, which pose significant challenges. To address these challenges and provide a comprehensive dataset for this new direction, we have meticulously curated the \textbf{MMSum} dataset. Our new dataset features (1) Human-validated summaries for both video and textual content, providing superior human instruction and labels for multimodal learning. (2) Comprehensively and meticulously arranged categorization, spanning 17 principal categories and 170 subcategories to encapsulate a diverse array of real-world scenarios. (3) Benchmark tests performed on the proposed dataset to assess various tasks and methods, including \textit{video summarization}, \textit{text summarization}, and \textit{multimodal summarization}. To champion accessibility and collaboration, we will release the \textbf{MMSum} dataset and the data collection tool as fully open-source resources, fostering transparency and accelerating future developments. Our project website can be found at~\url{https://mmsum-dataset.github.io/}

We design algorithms for minimizing $\max_{i\in[n]} f_i(x)$ over a $d$-dimensional Euclidean or simplex domain. When each $f_i$ is $1$-Lipschitz and $1$-smooth, our method computes an $\epsilon$-approximate solution using $\widetilde{O}(n \epsilon^{-1/3} + \epsilon^{-2})$ gradient and function evaluations, and $\widetilde{O}(n \epsilon^{-4/3})$ additional runtime. For large $n$, our evaluation complexity is optimal up to polylogarithmic factors. In the special case where each $f_i$ is linear -- which corresponds to finding a near-optimal primal strategy in a matrix game -- our method finds an $\epsilon$-approximate solution in runtime $\widetilde{O}(n (d/\epsilon)^{2/3} + nd + d\epsilon^{-2})$. For $n>d$ and $\epsilon=1/\sqrt{n}$, this improves over all existing first-order methods. When additionally $d = \omega(n^{8/11})$, our runtime also improves over all known interior point methods. Our algorithm combines three novel primitives: (1) a dynamic data structure which enables efficient stochastic gradient estimation in small $\ell_2$ or $\ell_1$ balls; (2) a mirror descent algorithm tailored to our data structure implementing an oracle which minimizes the objective over these balls; (3) a simple ball oracle acceleration framework suitable for non-Euclidean geometry.
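
For the linear special case, the textbook baseline that this work accelerates is entropic mirror descent on the simplex (a hedged sketch with our own names; this is the classical method, not the paper's ball-oracle algorithm):

```python
import numpy as np

def matrix_game_primal(A, T=4000):
    """Approximately minimize max_i (A x)_i over the probability simplex via
    entropic mirror descent (multiplicative weights) on a subgradient of the
    max. With rows of A bounded in sup-norm by 1, the classical analysis
    makes the averaged iterate O(sqrt(log(d) / T))-suboptimal."""
    n, d = A.shape
    x = np.full(d, 1.0 / d)
    avg = np.zeros(d)
    eta = np.sqrt(np.log(d) / T)
    for _ in range(T):
        i = int(np.argmax(A @ x))    # worst-case row = subgradient of the max
        x = x * np.exp(-eta * A[i])  # exponentiated-gradient (descent) step
        x /= x.sum()
        avg += x
    return avg / T

# Example: for rock-paper-scissors payoffs the minimizer is near-uniform.
# A = np.array([[0, 1, -1], [-1, 0, 1], [1, -1, 0]]); matrix_game_primal(A)
```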

Lawvere showed that generalised metric spaces are categories enriched over $[0, \infty]$, the quantale of the positive extended reals. The statement of enrichment is a quantitative analogue of being a preorder. Towards seeking a logic for quantitative metric reasoning, we investigate three $[0,\infty]$-valued propositional logics over the Lawvere quantale. The basic logical connectives shared by all three logics are those that can be interpreted in any quantale, viz.\ finite conjunctions and disjunctions, tensor (addition for the Lawvere quantale) and linear implication (here a truncated subtraction); to these we add, in turn, the constant $1$ to express integer values, and scalar multiplication by a non-negative real to express general affine combinations. Quantitative equational logic can be interpreted in the third logic if we allow inference systems instead of axiomatic systems. For each of these logics we develop a natural deduction system which we prove to be decidably complete w.r.t. the quantale-valued semantics. The heart of the completeness proof makes use of the Motzkin transposition theorem. Consistency is also decidable; the proof makes use of Fourier-Motzkin elimination of linear inequalities. Strong completeness does not hold in general, even (as is known) for theories over finitely many propositional variables; indeed, even an approximate form of strong completeness in the sense of Pavelka or Ben Yaacov -- provability up to arbitrary precision -- does not hold. However, we can show it for theories axiomatized by a (not necessarily finite) set of judgements in normal form over a finite set of propositional variables when we restrict to models that do not map variables to $\infty$; the proof uses Hurwicz's general form of Farkas' Lemma.
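
Concretely, under the Lawvere quantale convention where the order is the reversed numerical order and $0$ plays the role of truth, these connectives receive the following semantics (a standard reading, stated here in our notation):
\[
\llbracket \varphi \otimes \psi \rrbracket = \llbracket\varphi\rrbracket + \llbracket\psi\rrbracket,
\qquad
\llbracket \varphi \multimap \psi \rrbracket = \max\bigl(\llbracket\psi\rrbracket - \llbracket\varphi\rrbracket,\, 0\bigr),
\]
\[
\llbracket \varphi \wedge \psi \rrbracket = \max\bigl(\llbracket\varphi\rrbracket, \llbracket\psi\rrbracket\bigr),
\qquad
\llbracket \varphi \vee \psi \rrbracket = \min\bigl(\llbracket\varphi\rrbracket, \llbracket\psi\rrbracket\bigr),
\]
so that $\varphi \multimap \psi$ evaluates to $0$ (truth) exactly when $\llbracket\varphi\rrbracket \ge \llbracket\psi\rrbracket$, i.e., when $\varphi$ entails $\psi$ in the reversed order.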

The circuit class $\mathsf{QAC}^0$ was introduced by Moore (1999) as a model for constant-depth quantum circuits where the gate set includes many-qubit Toffoli gates. Proving lower bounds against such circuits is a longstanding challenge in quantum circuit complexity; in particular, showing that polynomial-size $\mathsf{QAC}^0$ cannot compute the parity function has remained an open question for over 20 years. In this work, we identify a notion of the Pauli spectrum of $\mathsf{QAC}^0$ circuits, which can be viewed as the quantum analogue of the Fourier spectrum of classical $\mathsf{AC}^0$ circuits. We conjecture that the Pauli spectrum of $\mathsf{QAC}^0$ circuits satisfies low-degree concentration, in analogy to the famous Linial-Nisan-Mansour theorem on the low-degree Fourier concentration of $\mathsf{AC}^0$ circuits. If true, this conjecture immediately implies that polynomial-size $\mathsf{QAC}^0$ circuits cannot compute parity. We prove this conjecture for the class of depth-$d$, polynomial-size $\mathsf{QAC}^0$ circuits with at most $n^{O(1/d)}$ auxiliary qubits. We obtain new circuit lower bounds and learning results as applications: this class of circuits cannot correctly compute
- the $n$-bit parity function on more than a $(\frac{1}{2} + 2^{-\Omega(n^{1/d})})$-fraction of inputs, and
- the $n$-bit majority function on more than a $(1 - 1/\mathrm{poly}(n))$-fraction of inputs.
Additionally, we show that this class of $\mathsf{QAC}^0$ circuits with limited auxiliary qubits can be learned with quasipolynomial sample complexity, giving the first learning result for $\mathsf{QAC}^0$ circuits. More broadly, our results add evidence that "Pauli-analytic" techniques can be a powerful tool in studying quantum circuits.
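
For orientation, the expansion underlying this analogy is standard: any operator $M$ on $n$ qubits decomposes uniquely over the Pauli basis (written here in our notation),
\[
M \;=\; \sum_{P \in \{I,X,Y,Z\}^{\otimes n}} \widehat{M}(P)\, P,
\qquad
\widehat{M}(P) \;=\; \frac{1}{2^{n}}\operatorname{Tr}\!\left(P M\right),
\]
mirroring the Fourier expansion $f = \sum_{S \subseteq [n]} \hat{f}(S)\,\chi_S$ of a Boolean function. The degree of a term is the number of non-identity tensor factors of $P$, and low-degree concentration asserts that the weight $\sum_P |\widehat{M}(P)|^2$ is dominated by low-degree terms.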

While existing work in robust deep learning has focused on small pixel-level $\ell_p$ norm-based perturbations, these may not account for perturbations encountered in several real-world settings. In many such cases, although test data might not be available, broad specifications about the types of perturbations (such as an unknown degree of rotation) may be known. We consider a setup where robustness is expected over an unseen test domain that is not i.i.d. but deviates from the training domain. While this deviation may not be exactly known, its broad characterization is specified a priori, in terms of attributes. We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attribute space, without having access to the data from the test domain. Our adversarial training solves a min-max optimization problem, with the inner maximization generating adversarial perturbations, and the outer minimization finding model parameters by optimizing the loss on the adversarial perturbations generated by the inner maximization. We demonstrate the applicability of our approach on three types of naturally occurring perturbations -- object-related shifts, geometric transformations, and common image corruptions. Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations. We demonstrate the usefulness of the proposed approach by showing the robustness gains of deep neural networks trained using our adversarial training on MNIST, CIFAR-10, and a new variant of the CLEVR dataset.
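
The min-max structure is the standard adversarial-training loop; a minimal PyTorch sketch follows (names and hyperparameters are our own, and PGD-style ascent on an additive perturbation stands in for the paper's attribute-space search, e.g., over rotation angles):

```python
import torch

def adversarial_training_step(model, loss_fn, opt, x, y,
                              eps=0.1, alpha=0.02, inner_steps=7):
    """One min-max step: the inner loop ascends the loss over a bounded
    perturbation (inner maximization); the outer step updates the model
    on the perturbed batch (outer minimization)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(inner_steps):                 # inner maximization
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()         # ascend the loss
            delta.clamp_(-eps, eps)              # stay within the threat model
    opt.zero_grad()
    loss = loss_fn(model(x + delta.detach()), y)  # outer minimization
    loss.backward()
    opt.step()
    return loss.item()
```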
