
Let $G=(V,E)$ be an undirected weighted graph on $n=|V|$ vertices and let $S\subseteq V$ be a Steiner set. The Steiner mincut is a well-studied concept that generalizes both the $(s,t)$-mincut (when $|S|=2$) and the global mincut (when $|S|=n$). Here, we address the problem of designing a compact data structure that can efficiently report a Steiner mincut and its capacity after the failure of any edge in $G$; such a data structure is known as a \textit{Sensitivity Oracle} for Steiner mincut. In the area of minimum cuts, although many Sensitivity Oracles have been designed for unweighted graphs, in weighted graphs Sensitivity Oracles exist only for $(s,t)$-mincut [Annals of Operations Research 1991, NETWORKS 2019, ICALP 2024], which is just a special case of Steiner mincut. Here, we generalize this result to an arbitrary set $S\subseteq V$.

1. Sensitivity Oracle: Assuming the capacity of every edge is known,
   a. there is an ${\mathcal O}(n)$ space data structure that can report the capacity of a Steiner mincut in ${\mathcal O}(1)$ time, and
   b. there is an ${\mathcal O}(n(n-|S|+1))$ space data structure that can report a Steiner mincut in ${\mathcal O}(n)$ time after the failure of any edge in $G$.

2. Lower Bound: We show that any data structure that, after the failure of any edge, can report a Steiner mincut or its capacity must occupy $\Omega(n^2)$ bits of space in the worst case, irrespective of the size of the Steiner set.

The lower bound in (2) shows that the assumption in (1) is essential to break the $\Omega(n^2)$ lower bound on space. For $|S|=n-k$ with any constant $k\ge 0$, the data structure in 1(b) occupies only ${\mathcal O}(n)$ space; in particular, this yields the first Sensitivity Oracle for global mincut occupying ${\mathcal O}(n)$ space.
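
To make the query concrete, here is a minimal brute-force baseline in Python (not the paper's oracle): it recomputes the Steiner mincut of $G - e$ from scratch, using the standard fact that any cut separating $S$ must separate a fixed terminal $s_1$ from some other terminal. The `networkx` library is assumed; the oracle in (1a) answers the same capacity query in ${\mathcal O}(1)$ time instead of via many max-flow computations.

```python
# Brute-force baseline, NOT the paper's data structure.
import networkx as nx

def steiner_mincut_capacity(G, S):
    """Capacity of a minimum cut separating at least two vertices of S."""
    terminals = list(S)
    s1 = terminals[0]
    # any Steiner cut separates s1 from some other terminal
    return min(nx.minimum_cut_value(G, s1, t, capacity="capacity")
               for t in terminals[1:])

def capacity_after_failure(G, S, e):
    """Steiner mincut capacity of G - e (the Sensitivity Oracle query)."""
    H = G.copy()
    H.remove_edge(*e)
    return steiner_mincut_capacity(H, S)

# toy usage
G = nx.Graph()
G.add_edge("a", "b", capacity=3)
G.add_edge("b", "c", capacity=2)
G.add_edge("a", "c", capacity=1)
print(capacity_after_failure(G, {"a", "c"}, ("b", "c")))  # prints 1
```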

Related content

Oracle Corporation, in full Oracle Corporation (Oracle Software Systems Co., Ltd.), is the world's largest enterprise software company, headquartered in Redwood Shores, California, USA. It formally entered the Chinese market in 1989. In 2013, Oracle surpassed IBM to become the world's second-largest software company after Microsoft.

Modern text-to-vision generative models often hallucinate when the prompt describing the scene to be generated is underspecified. In large language models (LLMs), a prevalent strategy to reduce hallucinations is to retrieve factual knowledge from an external database. While such retrieval augmentation strategies have great potential to enhance text-to-vision generators, existing static top-K retrieval methods explore the knowledge pool once, missing the broader context necessary for high-quality generation. Furthermore, LLMs internally possess rich world knowledge learned during large-scale training (parametric knowledge) that could mitigate the need for external data retrieval. This paper proposes Contextual Knowledge Pursuit (CKPT), a framework that leverages the complementary strengths of external and parametric knowledge to help generators produce reliable visual content. Instead of the one-time retrieval of facts from an external database to improve a given prompt, CKPT uses (1) an LLM to decide whether to seek external knowledge or to self-elicit descriptions from LLM parametric knowledge, (2) a knowledge pursuit process to contextually seek and sequentially gather the most relevant facts, (3) a knowledge aggregator for prompt enhancement with the gathered fact context, and (4) a filtered fine-tuning objective to improve visual synthesis with richer prompts. We evaluate CKPT across multiple text-driven generative tasks (image, 3D rendering, and video) on datasets of rare objects and daily scenarios. Our results show that CKPT is capable of generating faithful and semantically rich content across diverse visual domains, offering a promising data source for zero-shot synthesis and filtered fine-tuning of text-to-vision generative models.
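
As a rough illustration of components (1)-(3), here is a hedged Python sketch of the CKPT control flow; `llm`, `retrieve`, and the prompt strings are hypothetical stand-ins, not the paper's API, and step (4), filtered fine-tuning, happens downstream of this loop.

```python
# Hedged sketch of the CKPT-style loop under assumed helper interfaces.
def ckpt_enhance(prompt, llm, retrieve, max_facts=5):
    facts = []
    for _ in range(max_facts):
        # (1) decide knowledge source: external database vs. parametric
        answer = llm(f"Given prompt '{prompt}' and facts {facts}, is external "
                     f"knowledge still needed? Answer yes or no.")
        if answer.strip().lower().startswith("yes"):
            # (2) contextual pursuit: the query is conditioned on facts so far
            fact = retrieve(prompt, context=facts)
        else:
            fact = llm(f"State one relevant fact about: {prompt}")
        facts.append(fact)
    # (3) aggregate the gathered fact context into an enriched prompt
    return llm(f"Rewrite '{prompt}' into a detailed scene description "
               f"using these facts: {facts}")
```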

Matrix sketching, aimed at approximating a matrix $\boldsymbol{A} \in \mathbb{R}^{N\times d}$ consisting of vector streams of length $N$ with a smaller sketching matrix $\boldsymbol{B} \in \mathbb{R}^{\ell\times d}, \ell \ll N$, has garnered increasing attention in fields such as large-scale data analytics and machine learning. A well-known deterministic matrix sketching method is the Frequent Directions algorithm, which achieves the optimal $O\left(\frac{d}{\varepsilon}\right)$ space bound and provides a covariance error guarantee of $\varepsilon = \lVert \boldsymbol{A}^\top \boldsymbol{A} - \boldsymbol{B}^\top \boldsymbol{B} \rVert_2/\lVert \boldsymbol{A} \rVert_F^2$. The matrix sketching problem becomes particularly interesting in the context of sliding windows, where the goal is to approximate the matrix $\boldsymbol{A}_W$, formed by input vectors over the most recent $N$ time units. However, despite recent efforts, whether achieving the optimal $O\left(\frac{d}{\varepsilon}\right)$ space bound on sliding windows is possible has remained an open question. In this paper, we introduce the DS-FD algorithm, which achieves the optimal $O\left(\frac{d}{\varepsilon}\right)$ space bound for matrix sketching over row-normalized, sequence-based sliding windows. We also present matching upper and lower space bounds for time-based and unnormalized sliding windows, demonstrating the generality and optimality of DS-FD across various sliding window models. This conclusively answers the open question regarding the optimal space bound for matrix sketching over sliding windows. Furthermore, we conduct extensive experiments with both synthetic and real-world datasets, validating our theoretical claims and confirming the effectiveness of DS-FD in practice.
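
For context, here is a minimal sketch of the classic Frequent Directions algorithm that DS-FD builds on (the non-sliding-window setting), in the common double-space variant with a $2\ell$-row buffer; this is an illustrative baseline, not the DS-FD algorithm itself.

```python
import numpy as np

def frequent_directions(rows, ell):
    """Sketch a stream of d-dimensional rows into a buffer of 2*ell rows."""
    rows = np.asarray(rows, dtype=float)
    B = np.zeros((2 * ell, rows.shape[1]))
    nxt = 0
    for a in rows:
        if nxt == 2 * ell:                       # buffer full: shrink
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[ell - 1] ** 2              # ell-th largest squared s.v.
            s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = s[:, None] * Vt                  # rows ell-1 .. 2*ell-1 are now zero
            nxt = ell - 1
        B[nxt] = a
        nxt += 1
    return B

# covariance error guarantee: ||A^T A - B^T B||_2 <= ||A||_F^2 / ell
A = np.random.randn(1000, 20)
B = frequent_directions(A, ell=8)
err = np.linalg.norm(A.T @ A - B.T @ B, ord=2)
print(err, np.linalg.norm(A, "fro") ** 2 / 8)    # err should be the smaller value
```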

A {\em bipartite tournament} is a directed graph $T:=(A \cup B, E)$ such that every pair of vertices $(a,b)$ with $a\in A$ and $b\in B$ is connected by an arc, and no arc connects two vertices of $A$ or two vertices of $B$. A {\em feedback vertex set} is a set $S$ of vertices in $T$ such that $T - S$ is acyclic. In this article we consider the {\sc Feedback Vertex Set} problem in bipartite tournaments. Here the input is a bipartite tournament $T$ on $n$ vertices together with an integer $k$, and the task is to determine whether $T$ has a feedback vertex set of size at most $k$. We give a new algorithm for {\sc Feedback Vertex Set in Bipartite Tournaments}. The running time of our algorithm is upper-bounded by $O(1.6181^k + n^{O(1)})$, improving over the previously best known algorithm with running time $2^kk^{O(1)} + n^{O(1)}$ [Hsiao, ISAAC 2011]. As a by-product, we also obtain the fastest currently known exact exponential-time algorithm for the problem, with running time $O(1.3820^n)$.
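
For intuition, here is a hedged brute-force branching baseline (roughly $O(4^k \cdot n^4)$, far slower than the algorithms above). It relies on the standard fact that a bipartite tournament is acyclic if and only if it contains no directed 4-cycle, so it suffices to hit all directed $C_4$s; `adj` maps each vertex to its set of out-neighbors.

```python
from itertools import permutations

def find_c4(adj, A, B):
    """Return a directed 4-cycle a -> b -> a2 -> b2 -> a, or None."""
    for a, a2 in permutations(A, 2):
        for b, b2 in permutations(B, 2):
            if b in adj[a] and a2 in adj[b] and b2 in adj[a2] and a in adj[b2]:
                return [a, b, a2, b2]
    return None

def has_fvs(adj, A, B, k):
    """True iff the bipartite tournament has a feedback vertex set of size <= k."""
    cyc = find_c4(adj, A, B)
    if cyc is None:        # C4-free bipartite tournaments are acyclic
        return True
    if k == 0:
        return False
    for v in cyc:          # some vertex of this C4 must be deleted
        adj2 = {u: nbrs - {v} for u, nbrs in adj.items() if u != v}
        if has_fvs(adj2, [u for u in A if u != v],
                   [u for u in B if u != v], k - 1):
            return True
    return False

# toy usage: a directed 4-cycle needs exactly one deletion
adj = {"a1": {"b1"}, "b1": {"a2"}, "a2": {"b2"}, "b2": {"a1"}}
print(has_fvs(adj, ["a1", "a2"], ["b1", "b2"], 0))  # False
print(has_fvs(adj, ["a1", "a2"], ["b1", "b2"], 1))  # True
```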

Sensitivity measures how much the output of an algorithm changes, in terms of Hamming distance, when part of the input is modified. While approximation algorithms with low sensitivity have been developed for many problems, no sensitivity lower bounds were previously known for approximation algorithms. In this work, we establish the first polynomial lower bound on the sensitivity of (randomized) approximation algorithms for constraint satisfaction problems (CSPs) by adapting the probabilistically checkable proof (PCP) framework to preserve sensitivity lower bounds. From this, we derive polynomial sensitivity lower bounds for approximation algorithms for a variety of problems, including maximum clique, minimum vertex cover, and maximum cut. Given the connection between sensitivity and distributed algorithms, our sensitivity lower bounds also allow us to recover various round complexity lower bounds for distributed algorithms in the LOCAL model. Additionally, we present new lower bounds for distributed CSPs.
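
To make the notion concrete, here is a small self-contained illustration (not from the paper): the sensitivity of a deterministic algorithm on a given input is measured as the worst-case Hamming distance between its outputs before and after a single-edge deletion, with greedy vertex cover standing in as the algorithm.

```python
def greedy_vertex_cover(edges):
    """Greedy 2-approximation with deterministic tie-breaking."""
    cover, uncovered = set(), set(edges)
    while uncovered:
        u, v = min(uncovered)                    # smallest uncovered edge
        cover |= {u, v}
        uncovered = {e for e in uncovered if u not in e and v not in e}
    return cover

def edge_deletion_sensitivity(edges):
    """Max over single-edge deletions of the Hamming distance between outputs."""
    base = greedy_vertex_cover(edges)
    return max(len(base ^ greedy_vertex_cover([f for f in edges if f != e]))
               for e in edges)

# on a 5-cycle, deleting one edge can change two output bits
print(edge_deletion_sensitivity([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # 2
```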

Scalarization is a general, parallelizable technique that can be deployed in any multiobjective setting to reduce multiple objectives into one, yet some have dismissed this versatile approach because linear scalarizations cannot explore concave regions of the Pareto frontier. To that end, we aim to find simple non-linear scalarizations that provably explore a diverse set of $k$ objectives on the Pareto frontier, as measured by the dominated hypervolume. We show that hypervolume scalarizations with uniformly random weights achieve an optimal sublinear hypervolume regret bound of $O(T^{-1/k})$, with matching lower bounds that preclude any algorithm from doing better asymptotically. For the setting of multiobjective stochastic linear bandits, we utilize properties of hypervolume scalarizations to derive a novel non-Euclidean analysis to get regret bounds of $\tilde{O}( d T^{-1/2} + T^{-1/k})$, removing unnecessary $\text{poly}(k)$ dependencies. We support our theory with the strong empirical performance of non-linear scalarizations, which outperform both their linear counterparts and other standard multiobjective algorithms in a variety of natural settings.
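
As a sanity check of the identity underlying hypervolume scalarizations (a Monte Carlo sketch for $k=2$ with positive objectives, an assumed setup): the dominated hypervolume of a point set $Y$ with respect to the origin equals $c_k\,\mathbb{E}_{\lambda}\big[\max_{y\in Y}\min_i (y_i/\lambda_i)^k\big]$ for $\lambda$ uniform on the positive part of the unit sphere, with $c_2=\pi/4$.

```python
import numpy as np

def exact_hv_2d(Y):
    """Area dominated between the origin and the points (simple 2-D sweep)."""
    pts = sorted(Y.tolist(), reverse=True)       # descending in objective 1
    xs = [p[0] for p in pts] + [0.0]
    area = max_y2 = 0.0
    for i, (x, y) in enumerate(pts):
        max_y2 = max(max_y2, y)
        area += (x - xs[i + 1]) * max_y2
    return area

rng = np.random.default_rng(0)
Y = np.array([[0.9, 0.3], [0.6, 0.7], [0.2, 1.0]])
theta = rng.uniform(0.0, np.pi / 2, 200_000)     # lambda uniform on positive arc
lam = np.stack([np.cos(theta), np.sin(theta)], axis=1)
scal = np.minimum(Y[None, :, 0] / lam[:, None, 0],
                  Y[None, :, 1] / lam[:, None, 1]).max(axis=1) ** 2
print(exact_hv_2d(Y), (np.pi / 4) * scal.mean())  # the two values should agree
```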

When learning an input-output mapping from very few examples, is it better to first infer a latent function that explains the examples, or is it better to directly predict new test outputs, e.g. using a neural network? We study this question on ARC, a highly diverse dataset of abstract reasoning tasks. We train neural models for induction (inferring latent functions) and transduction (directly predicting the test output for a given test input). Our models are trained on synthetic data generated by prompting LLMs to produce Python code specifying a function to be inferred, plus a stochastic subroutine for generating inputs to that function. We find inductive and transductive models solve very different problems, despite training on the same problems, and despite sharing the same neural architecture.
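
Schematically, the two modes differ as in the hedged sketch below; `model.generate_program` and `model.predict_output` are hypothetical interfaces, not the paper's code.

```python
def solve_by_induction(model, train_pairs, test_input):
    # infer an explicit latent function (Python source), then execute it
    src = model.generate_program(train_pairs)        # hypothetical call
    scope = {}
    exec(src, scope)                                 # assumed to define solve(grid)
    return scope["solve"](test_input)

def solve_by_transduction(model, train_pairs, test_input):
    # predict the test output directly, conditioned on the examples + test input
    return model.predict_output(train_pairs, test_input)   # hypothetical call
```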

Privacy-preserving machine learning (PPML) solutions are gaining widespread popularity. Among these, many rely on homomorphic encryption (HE), which offers confidentiality of the model and the data, but at the cost of large latency and memory requirements. Pruning neural network (NN) parameters improves latency and memory in plaintext ML but has little impact if directly applied to HE-based PPML. We introduce a framework called HE-PEx that comprises new pruning methods, on top of a packing technique called tile tensors, for reducing the latency and memory of PPML inference. HE-PEx uses permutations to prune additional ciphertexts, and expansion to recover inference loss. We demonstrate the effectiveness of our methods for pruning fully-connected and convolutional layers in NNs on PPML tasks, namely, image compression, denoising, and classification, with autoencoders, multilayer perceptrons (MLPs) and convolutional neural networks (CNNs). We implement and deploy our networks atop a framework called HElayers, achieving a 10-35% improvement in inference speed and a 17-35% decrease in memory requirement over the unpruned network, corresponding to 33-65% fewer ciphertexts, within a 2.5% degradation in inference accuracy over the unpruned network. Compared to the state-of-the-art pruning technique for PPML, our techniques generate networks with 70% fewer ciphertexts, on average, for the same degradation limit.
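
A toy numpy illustration (not HE-PEx itself) of why naive pruning has little impact under tile-tensor packing: a ciphertext is saved only when an entire tile of the weight matrix becomes zero, which is what HE-PEx's permutations try to arrange.

```python
import numpy as np

def zero_tiles(W, t):
    """Count all-zero t x t tiles; each corresponds to a droppable ciphertext."""
    r, c = W.shape
    return sum(not W[i:i + t, j:j + t].any()
               for i in range(0, r, t) for j in range(0, c, t))

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 64))
scattered = W * (rng.random(W.shape) >= 0.5)     # prune 50% of weights at random
structured = W.copy()
structured[32:, :] = 0.0                         # same sparsity, in whole rows
print(zero_tiles(scattered, 8), zero_tiles(structured, 8))   # 0 vs. 32
```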

In the classical context, the cooperative game theory concept of the Shapley value has been adapted for post hoc explanations of machine learning models. However, this approach does not easily translate to eXplainable Quantum ML (XQML). Computing Shapley values exactly can be highly computationally complex. We propose quantum algorithms that extract Shapley values to within a given confidence interval and run in polynomial time. We demonstrate the validity of each approach on specific examples of cooperative voting games.
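
For reference, here is the classical Monte Carlo baseline that such quantum routines aim to speed up: permutation sampling estimates a Shapley value, and a Hoeffding bound gives the confidence interval (a sketch for a 0/1-valued weighted voting game).

```python
import math
import random

def shapley_ci(players, v, i, m=20_000, delta=0.05):
    """Monte Carlo estimate of player i's Shapley value with a (1-delta) CI."""
    total = 0.0
    for _ in range(m):
        perm = random.sample(players, len(players))
        pred = set(perm[:perm.index(i)])         # players preceding i
        total += v(pred | {i}) - v(pred)         # marginal contribution in [0, 1]
    est = total / m
    half_width = math.sqrt(math.log(2 / delta) / (2 * m))   # Hoeffding
    return est, half_width

# weighted voting game [quota 5; weights 4, 2, 1, 1]: win iff weight >= quota
weights, quota = {0: 4, 1: 2, 2: 1, 3: 1}, 5
v = lambda S: 1.0 if sum(weights[p] for p in S) >= quota else 0.0
print(shapley_ci(list(weights), v, i=0))         # true value is 3/4
```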

In computer graphics, simplifying a polygonal mesh surface~$\mathcal{M}$ into a geometric proxy that maintains close conformity to~$\mathcal{M}$ is crucial, as it can significantly reduce computational demands in various applications. In this paper, we introduce the Implicit Thin Shell~(ITS), a concept designed to implicitly represent the sandwich-walled space surrounding~$\mathcal{M}$, defined as~$\{\textbf{x}\in\mathbb{R}^3|\epsilon_1\leq f(\textbf{x}) \leq \epsilon_2, \epsilon_1< 0, \epsilon_2>0\}$. Here, $f$ is an approximation of the signed distance function~(SDF) of~$\mathcal{M}$, and we aim to minimize the thickness~$\epsilon_2-\epsilon_1$. To achieve a balance between mathematical simplicity and expressive capability in~$f$, we employ a tri-variate tensor-product B-spline to represent~$f$. This representation is coupled with adaptive knot grids that adapt to the inherent shape variations of~$\mathcal{M}$, while restricting~$f$'s basis functions to the first degree. In this manner, the analytical form of~$f$ can be rapidly determined by solving a sparse linear system. Moreover, the process of identifying the extreme values of~$f$ among the infinitely many points on~$\mathcal{M}$ can be simplified to seeking extremes among a finite set of candidate points. By enumerating these candidate points, we find the extreme values~$\epsilon_1<0$ and $\epsilon_2>0$ that minimize the thickness. The constructed ITS is guaranteed to wrap~$\mathcal{M}$ rigorously, without any intersections between the bounding surfaces and~$\mathcal{M}$. ITS offers numerous potential applications thanks to its rigor, tightness, expressiveness, and computational efficiency. We demonstrate the efficacy of ITS in rapid inside-outside tests and in mesh simplification through the control of global error.
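
On a uniform knot grid (an assumption for brevity; the paper uses adaptive grids), a degree-one tri-variate tensor-product B-spline reduces to trilinear interpolation of the control values, as in this minimal sketch; the candidate points below are random stand-ins for the paper's specific candidate set on~$\mathcal{M}$.

```python
import numpy as np

def eval_f(c, x):
    """Degree-1 tensor-product B-spline = trilinear interpolation on [0,1]^3."""
    n = c.shape[0]
    g = np.clip(np.asarray(x) * (n - 1), 0, n - 1 - 1e-9)
    i0 = g.astype(int)                           # lower grid corner
    t = g - i0                                   # local parameters in [0, 1)
    val = 0.0
    for di in (0, 1):                            # blend the 8 surrounding values
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((t[0] if di else 1 - t[0]) *
                     (t[1] if dj else 1 - t[1]) *
                     (t[2] if dk else 1 - t[2]))
                val += w * c[i0[0] + di, i0[1] + dj, i0[2] + dk]
    return val

# shell bounds: extremes of f over a finite candidate set P (stand-in data)
c = np.random.randn(9, 9, 9)                     # stand-in control values
P = np.random.rand(100, 3)                       # stand-in candidate points
eps1, eps2 = min(eval_f(c, p) for p in P), max(eval_f(c, p) for p in P)
print(eps1, eps2)
```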

We propose the convex floating body membership problem, which consists of efficiently determining when a query point $a\in\mathbb{R}^d$ belongs to the so-called $\varepsilon$-convex floating body of a given bounded convex domain $K\subset\mathbb{R}^d$. We consider this problem in an approximate setting, i.e., given a parameter $\delta>0$, the query can be answered either way if the Hilbert distance in $K$ of $a$ from the boundary of a relatively-scaled $\varepsilon$-convex floating body is less than $\delta$. We present a data structure for this problem that has storage size $O(\delta^{-d}\varepsilon^{-(d-1)/2})$ and achieves query time of $O({\delta^{-1}}\ln 1/\varepsilon)$. Our construction is motivated by a recent work of Abdelkader and Mount on approximate polytope membership (APM) queries, and relies on a comparison of convex floating bodies with balls in the Hilbert metric on $K$.
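
For concreteness, here is the Hilbert distance for the simplest domain, $K$ the open unit disk (a sketch, not the paper's data structure): with $p$ and $q$ the boundary points of the chord through $x$ and $y$, $d_K(x,y)=\tfrac12\ln\frac{\lVert q-x\rVert\,\lVert p-y\rVert}{\lVert q-y\rVert\,\lVert p-x\rVert}$.

```python
import numpy as np

def hilbert_distance_disk(x, y):
    """Hilbert distance between interior points of the open unit disk."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if np.allclose(x, y):
        return 0.0
    d = (y - x) / np.linalg.norm(y - x)          # chord direction
    b = x @ d                                    # solve |x + s d|^2 = 1 for s
    disc = np.sqrt(b * b - (x @ x - 1.0))
    p, q = x + (-b - disc) * d, x + (-b + disc) * d   # p behind x, q beyond y
    ratio = (np.linalg.norm(q - x) * np.linalg.norm(p - y) /
             (np.linalg.norm(q - y) * np.linalg.norm(p - x)))
    return 0.5 * np.log(ratio)

print(hilbert_distance_disk([0.0, 0.0], [0.5, 0.0]))  # 0.5 * ln(3) ~ 0.5493
```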
