
The problem of non-monotone $k$-submodular maximization under a knapsack constraint ($\kSMK$) over a ground set of size $n$ arises in many machine learning applications, such as data summarization and information propagation. However, existing algorithms for the problem struggle with the non-monotone case and with returning a good solution quickly on large datasets. This paper introduces two deterministic approximation algorithms for the problem that substantially improve the query complexity of existing algorithms. Our first algorithm, $\LAA$, returns an approximation ratio of $1/19$ within $O(nk)$ query complexity. The second one, $\RLA$, improves the approximation ratio to $1/5-\epsilon$ in $O(nk)$ queries, where $\epsilon$ is an input parameter. Our algorithms are the first to provide constant approximation ratios within only $O(nk)$ query complexity for the non-monotone objective; they therefore require fewer queries than state-of-the-art algorithms by a factor of $\Omega(\log n)$. Besides the theoretical analysis, we evaluate our algorithms experimentally on two instances of the problem: Influence Maximization and Sensor Placement. The results confirm that our algorithms match the solution quality of cutting-edge techniques while significantly reducing the number of queries.
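The abstract does not spell out $\LAA$ or $\RLA$; the sketch below is only a generic, hypothetical single-pass greedy for $k$-submodular maximization under a knapsack constraint, included to make the $O(nk)$-query regime concrete: each element is probed once per item type against a value oracle. The function name and the `marginal_gain` oracle interface are assumptions, not the paper's.

```python
# Hypothetical sketch (NOT the paper's LAA/RLA): a single-pass greedy for
# k-submodular maximization under a knapsack constraint. Each element is
# probed once per item type, so the pass issues O(nk) value-oracle queries.

def greedy_ksubmodular_knapsack(elements, k, cost, budget, marginal_gain,
                                density_threshold=0.0):
    """marginal_gain(solution, e, i): assumed value oracle returning the gain
    of assigning element e to item type i given the current partial solution."""
    solution = {}                      # element -> chosen item type
    remaining = budget
    for e in elements:                 # one pass over the ground set
        if cost[e] > remaining:
            continue
        # k oracle probes for this element
        best_gain, best_type = max((marginal_gain(solution, e, i), i)
                                   for i in range(k))
        if best_gain / cost[e] > density_threshold:
            solution[e] = best_type
            remaining -= cost[e]
    return solution
```

The $1/19$ and $1/5-\epsilon$ guarantees quoted above rely on the paper's specific thresholding and candidate-solution rules, which this skeleton deliberately omits.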

Related Content

We consider the problem of maintaining a $(1+\epsilon)\Delta$-edge coloring in a dynamic graph $G$ with $n$ nodes and maximum degree at most $\Delta$. The state-of-the-art update time is $O_\epsilon(\text{polylog}(n))$, by Duan, He and Zhang [SODA'19] and by Christiansen [STOC'23], and more precisely $O(\log^7 n/\epsilon^2)$, where $\Delta = \Omega(\log^2 n / \epsilon^2)$. The following natural question arises: What is the best possible update time of an algorithm for this task? More specifically, \textbf{ can we bring it all the way down to some constant} (for constant $\epsilon$)? This question coincides with the \emph{static} time barrier for the problem: Even for $(2\Delta-1)$-coloring, there is only a naive $O(m \log \Delta)$-time algorithm. We answer this fundamental question in the affirmative, by presenting a dynamic $(1+\epsilon)\Delta$-edge coloring algorithm with $O(\log^4 (1/\epsilon)/\epsilon^9)$ update time, provided $\Delta = \Omega_\epsilon(\text{polylog}(n))$. As a corollary, we also get the first linear time (for constant $\epsilon$) \emph{static} algorithm for $(1+\epsilon)\Delta$-edge coloring; in particular, we achieve a running time of $O(m \log (1/\epsilon)/\epsilon^2)$. We obtain our results by carefully combining a variant of the \textsc{Nibble} algorithm from Bhattacharya, Grandoni and Wajc [SODA'21] with the subsampling technique of Kulkarni, Liu, Sah, Sawhney and Tarnawski [STOC'22].
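For intuition, here is a minimal sketch of the naive greedy $(2\Delta-1)$-edge coloring mentioned above (not the paper's dynamic Nibble-based algorithm): since an edge $(u,v)$ conflicts with at most $2(\Delta-1)$ other edges, a palette of $2\Delta-1$ colors always contains a free color.

```python
# Minimal sketch of the naive greedy (2*Delta - 1)-edge coloring: each edge
# (u, v) takes the smallest palette color unused at both endpoints.

def greedy_edge_coloring(edges, delta):
    palette = range(2 * delta - 1)
    used = {}                      # vertex -> set of colors on incident edges
    coloring = {}
    for u, v in edges:
        busy = used.setdefault(u, set()) | used.setdefault(v, set())
        color = next(c for c in palette if c not in busy)
        coloring[(u, v)] = color
        used[u].add(color)
        used[v].add(color)
    return coloring

# Example: a triangle has Delta = 2, so 3 colors suffice.
print(greedy_edge_coloring([(0, 1), (1, 2), (2, 0)], delta=2))
```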

The problem of determining whether a graph $G$ contains another graph $H$ as a minor, referred to as the minor containment problem, is a fundamental problem in the field of graph algorithms. While it is NP-complete when $G$ and $H$ are general graphs, it is sometimes tractable on more restricted graph classes. This study focuses on the case where both $G$ and $H$ are trees, known as the tree minor containment problem. Even in this case, the problem is known to be NP-complete. In contrast, polynomial-time algorithms are known for the case when both trees are caterpillars or when the maximum degree of $H$ is a constant. Our research aims to clarify the boundary of tractability and intractability for the tree minor containment problem. Specifically, we provide dichotomies for the computational complexities of the problem based on three structural parameters: the diameter, pathwidth, and path eccentricity.

Erd\H{o}s and West (Discrete Mathematics'85) considered the class of $n$ vertex intersection graphs which have a {\em $d$-dimensional} {\em $t$-representation}, that is, each vertex of a graph in the class has an associated set consisting of at most $t$ $d$-dimensional axis-parallel boxes. In particular, for a graph $G$ and for each $d \geq 1$, they consider $i_d(G)$ to be the minimum $t$ for which $G$ has such a representation. For fixed $t$ and $d$, they consider the class of $n$ vertex labeled graphs for which $i_d(G) \leq t$, and prove an upper bound of $(2nt+\frac{1}{2})d \log n - (n - \frac{1}{2})d \log(4\pi t)$ on the logarithm of size of the class. In this work, for fixed $t$ and $d$ we consider the class of $n$ vertex unlabeled graphs which have a {\em $d$-dimensional $t$-representation}, denoted by $\mathcal{G}_{t,d}$. We address the problem of designing a succinct data structure for the class $\mathcal{G}_{t,d}$ in an attempt to generalize the relatively recent results on succinct data structures for interval graphs (Algorithmica'21). To this end, for each $n$ such that $td^2$ is in $o(n / \log n)$, we first prove a lower bound of $(2dt-1)n \log n - O(ndt \log \log n)$-bits on the size of any data structure for encoding an arbitrary graph that belongs to $\mathcal{G}_{t,d}$. We then present a $((2dt-1)n \log n + dt\log t + o(ndt \log n))$-bit data structure for $\mathcal{G}_{t,d}$ that supports navigational queries efficiently. Contrasting this data structure with our lower bound argument, we show that for each fixed $t$ and $d$, and for all $n \geq 0$ when $td^2$ is in $o(n/\log n)$ our data structure for $\mathcal{G}_{t,d}$ is succinct. As a byproduct, we also obtain succinct data structures for graphs of bounded boxicity (denoted by $d$ and $t = 1$) and graphs of bounded interval number (denoted by $t$ and $d=1$) when $td^2$ is in $o(n/\log n)$.

In adaptive data analysis, a mechanism gets $n$ i.i.d. samples from an unknown distribution $D$, and is required to provide accurate estimations to a sequence of adaptively chosen statistical queries with respect to $D$. Hardt and Ullman (FOCS 2014) and Steinke and Ullman (COLT 2015) showed that in general, it is computationally hard to answer more than $\Theta(n^2)$ adaptive queries, assuming the existence of one-way functions. However, these negative results strongly rely on an adversarial model that significantly advantages the adversarial analyst over the mechanism, as the analyst, who chooses the adaptive queries, also chooses the underlying distribution $D$. This imbalance raises questions with respect to the applicability of the obtained hardness results -- an analyst who has complete knowledge of the underlying distribution $D$ would have little need, if at all, to issue statistical queries to a mechanism which only holds a finite number of samples from $D$. We consider more restricted adversaries, called \emph{balanced}, where each such adversary consists of two separated algorithms: The \emph{sampler} who is the entity that chooses the distribution and provides the samples to the mechanism, and the \emph{analyst} who chooses the adaptive queries, but has no prior knowledge of the underlying distribution (and hence has no a priori advantage with respect to the mechanism). We improve the quality of previous lower bounds by revisiting them using an efficient \emph{balanced} adversary, under standard public-key cryptography assumptions. We show that these stronger hardness assumptions are unavoidable in the sense that any computationally bounded \emph{balanced} adversary that has the structure of all known attacks, implies the existence of public-key cryptography.
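As a toy illustration of the setting (not the paper's adversary or mechanism construction), the snippet below shows the standard mechanism that answers each adaptively chosen statistical query with its empirical mean over the $n$ held samples; the hardness results above bound how many such adaptive queries can be answered accurately with respect to the true distribution $D$.

```python
import numpy as np

# Toy mechanism: answer each [0,1]-valued statistical query with its
# empirical mean over the n i.i.d. samples it holds.

class EmpiricalMechanism:
    def __init__(self, samples):
        self.samples = np.asarray(samples)

    def answer(self, query):
        # query: callable mapping a sample to a value in [0, 1]
        return float(np.mean([query(x) for x in self.samples]))

rng = np.random.default_rng(0)
mech = EmpiricalMechanism(rng.normal(size=1000))
print(mech.answer(lambda x: float(x > 0)))   # estimates Pr[X > 0] under D
```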

The well-known Lee-Carter model uses a bilinear form $\log(m_{x,t})=a_x+b_xk_t$ to represent the log mortality rate and has been widely researched and developed over the past thirty years. However, little attention has been paid to the robustness of the parameter estimates against outliers, especially when estimating $b_x$. In response, we propose a robust estimation method for a wide family of Lee-Carter-type models, treating the problem as a Probabilistic Principal Component Analysis (PPCA) with multivariate $t$-distributions. An efficient Expectation-Maximization (EM) algorithm is also derived for implementation. The benefits of the method are threefold: 1) it produces more robust estimates of both $b_x$ and $k_t$, 2) it can be naturally extended to a large family of Lee-Carter type models, including those for modelling multiple populations, and 3) it can be integrated with other existing time series models for $k_t$. Using numerical studies based on United States mortality data from the Human Mortality Database, we show that the proposed model is more robust than conventional methods in the presence of outliers.
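To make the bilinear form concrete, here is a sketch of the classical (non-robust) Lee-Carter fit via SVD under the usual identification constraints $\sum_x b_x = 1$ and $\sum_t k_t = 0$; the paper's contribution replaces this fit with a robust PPCA formulation using multivariate $t$-distributions and an EM algorithm.

```python
import numpy as np

# Classical Lee-Carter fit: rank-1 SVD of the age-centered log mortality
# surface, giving log(m_{x,t}) ~ a_x + b_x * k_t.

def lee_carter_svd(log_m):
    """log_m: (ages x years) array of log central mortality rates."""
    a = log_m.mean(axis=1)                          # a_x: age-specific average
    U, S, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b, k = U[:, 0], S[0] * Vt[0]                    # rank-1 factors
    s = b.sum()                                     # impose sum(b_x) = 1
    b, k = b / s, k * s
    a = a + b * k.mean()                            # absorb mean of k_t into a_x
    k = k - k.mean()                                # impose sum(k_t) = 0
    return a, b, k
```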

We design and implement two single-pass semi-streaming algorithms for the maximum weight $k$-disjoint matching ($k$-DM) problem. Given an integer $k$, the $k$-DM problem is to find $k$ pairwise edge-disjoint matchings such that the sum of the weights of the matchings is maximized. For $k \geq 2$, this problem is NP-hard. Our first algorithm is based on the primal-dual framework of a linear programming relaxation of the problem and is $\frac{1}{3+\varepsilon}$-approximate. We also develop an approximation preserving reduction from $k$-DM to the maximum weight $b$-matching problem. Leveraging this reduction and an existing semi-streaming $b$-matching algorithm, we design a $\frac{k}{(2+\varepsilon)(k+1)}$-approximate semi-streaming algorithm for $k$-DM. For any constant $\varepsilon > 0$, both of these algorithms require $O(nk \log_{1+\varepsilon}^2 n)$ bits of space. To the best of our knowledge, this is the first study of semi-streaming algorithms for the $k$-DM problem. We compare our two algorithms to state-of-the-art offline algorithms on 82 real-world and synthetic test problems. On the smaller instances, our streaming algorithms used significantly less memory (ranging from 6$\times$ to 114$\times$ less) and were faster in runtime than the offline algorithms. Our solutions were often within 5\% of the best weights from the offline algorithms. On a collection of six large graphs with a memory limit of 1 TB and with $k=8$, the offline algorithms terminated only on one graph (mycielskian20). The best offline algorithm on this instance required 640 GB of memory and 20 minutes to complete. In contrast, our slowest streaming algorithm for this instance took under four minutes and produced a matching that was 18\% better in weight, using only 1.4 GB of memory.
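For context, a simple offline greedy baseline for $k$-DM (not the paper's primal-dual or $b$-matching-based semi-streaming algorithms) peels off $k$ edge-disjoint matchings by scanning the remaining edges in non-increasing weight order:

```python
# Greedy baseline: build k edge-disjoint matchings, one round at a time.
# Each round keeps an edge if both endpoints are still unmatched in that round.

def greedy_k_disjoint_matchings(edges, k):
    """edges: list of (weight, u, v); returns a list of k matchings."""
    remaining = sorted(edges, reverse=True)
    matchings = []
    for _ in range(k):
        matched, matching, leftover = set(), [], []
        for w, u, v in remaining:
            if u not in matched and v not in matched:
                matching.append((w, u, v))
                matched.update((u, v))
            else:
                leftover.append((w, u, v))
        matchings.append(matching)
        remaining = leftover
    return matchings
```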

Classically, the edit distance of two length-$n$ strings can be computed in $O(n^2)$ time, whereas an $O(n^{2-\epsilon})$-time procedure would falsify the Orthogonal Vectors Hypothesis. If the edit distance does not exceed $k$, the running time can be improved to $O(n+k^2)$, which is near-optimal (conditioned on OVH) as a function of $n$ and $k$. Our first main contribution is a quantum $\tilde{O}(\sqrt{nk}+k^2)$-time algorithm that uses $\tilde{O}(\sqrt{nk})$ queries, where $\tilde{O}(\cdot)$ hides polylogarithmic factors. This query complexity is unconditionally optimal, and any significant improvement in the time complexity would resolve a long-standing open question of whether edit distance admits an $O(n^{2-\epsilon})$-time quantum algorithm. Our divide-and-conquer quantum algorithm reduces the edit distance problem to a case where the strings have small Lempel-Ziv factorizations. Then, it combines a quantum LZ compression algorithm with a classical edit-distance subroutine for compressed strings. The LZ factorization problem can be classically solved in $O(n)$ time, which is unconditionally optimal in the quantum setting. We can, however, hope for a quantum speedup if we parameterize the complexity in terms of the factorization size $z$. Already a generic oracle identification algorithm yields the optimal query complexity of $\tilde{O}(\sqrt{nz})$ at the price of exponential running time. Our second main contribution is a quantum algorithm that achieves the optimal time complexity of $\tilde{O}(\sqrt{nz})$. The key tool is a novel LZ-like factorization of size $O(z\log^2n)$ whose subsequent factors can be efficiently computed through a combination of classical and quantum techniques. We can then obtain the string's run-length encoded Burrows-Wheeler Transform (BWT), construct the $r$-index, and solve many fundamental string processing problems in time $\tilde{O}(\sqrt{nz})$.
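For reference, the bounded-distance regime admits a very simple classical algorithm: the banded dynamic program below decides whether the edit distance is at most $k$ in $O(nk)$ time by restricting the DP table to a diagonal band of width $2k+1$. The stronger $O(n+k^2)$ bound and the quantum algorithms above require substantially more machinery (LZ factorization, compressed edit-distance subroutines).

```python
def edit_distance_at_most_k(a, b, k):
    """Banded DP: returns the edit distance if it is <= k, else None."""
    n, m = len(a), len(b)
    if abs(n - m) > k:
        return None
    INF = k + 1
    prev = {j: j for j in range(min(m, k) + 1)}          # DP row i = 0
    for i in range(1, n + 1):
        curr = {}
        for j in range(max(0, i - k), min(m, i + k) + 1):
            if j == 0:
                best = i                                  # delete prefix of a
            else:
                diag = prev.get(j - 1, INF)
                best = diag if a[i - 1] == b[j - 1] else diag + 1
                best = min(best, curr.get(j - 1, INF) + 1)  # insertion
            best = min(best, prev.get(j, INF) + 1)          # deletion
            curr[j] = best
        prev = curr
    d = prev.get(m, INF)
    return d if d <= k else None

print(edit_distance_at_most_k("abc", "abd", 1))   # -> 1
```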

We present a novel stochastic variational Gaussian process ($\mathcal{GP}$) inference method, based on a posterior over a learnable set of weighted pseudo input-output points (coresets). Instead of a free-form variational family, the proposed coreset-based, variational tempered family for $\mathcal{GP}$s (CVTGP) is defined in terms of the $\mathcal{GP}$ prior and the data-likelihood; hence, accommodating the modeling inductive biases. We derive CVTGP's lower bound for the log-marginal likelihood via marginalization of the proposed posterior over latent $\mathcal{GP}$ coreset variables, and show it is amenable to stochastic optimization. CVTGP reduces the learnable parameter size to $\mathcal{O}(M)$, enjoys numerical stability, and maintains $\mathcal{O}(M^3)$ time- and $\mathcal{O}(M^2)$ space-complexity, by leveraging a coreset-based tempered posterior that, in turn, provides sparse and explainable representations of the data. Results on simulated and real-world regression problems with Gaussian observation noise validate that CVTGP provides better evidence lower-bound estimates and predictive root mean squared error than alternative stochastic $\mathcal{GP}$ inference methods.
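To make the $\mathcal{O}(M^3)$ time and $\mathcal{O}(M^2)$ space claims concrete, the sketch below computes a generic inducing-point (subset-of-regressors) GP predictive mean with $M$ pseudo-inputs; this is a standard sparse GP baseline, not the CVTGP coreset-based tempered posterior, and the RBF kernel choice is an assumption.

```python
import numpy as np

# Generic inducing-point GP regression: the dominant cost is factoring an
# M x M matrix (O(M^3) time, O(M^2) space), which is the complexity regime
# the abstract refers to.

def rbf(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def sparse_gp_mean(X, y, Z, Xstar, noise=0.1):
    """X: (N,d) inputs, y: (N,) targets, Z: (M,d) inducing points."""
    Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))     # M x M
    Kmn = rbf(Z, X)                              # M x N
    A = Kmm + Kmn @ Kmn.T / noise ** 2           # M x M system
    w = np.linalg.solve(A, Kmn @ y) / noise ** 2
    return rbf(Xstar, Z) @ w                     # predictive mean at Xstar
```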

A new $H(\textrm{divdiv})$-conforming finite element is presented, which avoids the need for super-smoothness by redistributing the degrees of freedom to edges and faces. This leads to a hybridizable mixed method with superconvergence for the biharmonic equation. Moreover, new finite element divdiv complexes are established. Finally, new weak Galerkin and $C^0$ discontinuous Galerkin methods for the biharmonic equation are derived.

We revisit the noisy binary search model of Karp and Kleinberg, in which we have $n$ coins with unknown probabilities $p_i$ that we can flip. The coins are sorted by increasing $p_i$, and we would like to find where the probability crosses (to within $\varepsilon$) a target value $\tau$. This generalizes the fixed-noise model of Burnashev and Zigangirov, in which $p_i = \frac{1}{2} \pm \varepsilon$, to a setting where coins near the target may be indistinguishable from it. Karp and Kleinberg showed that $\Theta(\frac{1}{\varepsilon^2} \log n)$ samples are necessary and sufficient for this task. We produce a practical algorithm by solving two theoretical challenges: high-probability behavior and sharp constants. We give an algorithm that succeeds with probability $1-\delta$ from \[ \frac{1}{C_{\tau, \varepsilon}} \cdot \left(\lg n + O(\log^{2/3} n \log^{1/3} \frac{1}{\delta} + \log \frac{1}{\delta})\right) \] samples, where $C_{\tau, \varepsilon}$ is the optimal such constant achievable. For $\delta > n^{-o(1)}$ this is within $1 + o(1)$ of optimal, and for $\delta \ll 1$ it is the first bound within constant factors of optimal.
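A simple (suboptimal-constant) version of noisy binary search illustrates the regime: flip the middle coin enough times to estimate $p_{\text{mid}}$ to within $\varepsilon$ with high probability, then recurse on the indicated half. The sketch below is only this textbook strategy, not the paper's algorithm with the sharp constant $C_{\tau,\varepsilon}$; it uses $\Theta(\log(n/\delta)/\varepsilon^2)$ flips per level, so its total sample count is within logarithmic factors of the Karp-Kleinberg bound.

```python
import math, random

def noisy_binary_search(flip, n, tau, eps, delta=0.01):
    """flip(i) draws a Bernoulli(p_i) sample, with p_i nondecreasing in i.
    Returns an index whose p_i is within about eps of tau, w.h.p."""
    lo, hi = 0, n - 1
    reps = math.ceil(math.log(2 * n / delta) / (2 * eps ** 2))
    while lo < hi:
        mid = (lo + hi) // 2
        estimate = sum(flip(mid) for _ in range(reps)) / reps
        if estimate < tau:
            lo = mid + 1          # crossing point is to the right
        else:
            hi = mid              # crossing point is here or to the left
    return lo

# Example: coins with p_i = i/100, target tau = 0.5, tolerance eps = 0.05.
print(noisy_binary_search(lambda i: random.random() < i / 100,
                          n=100, tau=0.5, eps=0.05))
```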
