
It is a long-standing open question to construct a classical oracle relative to which BQP/qpoly $\neq$ BQP/poly or QMA $\neq$ QCMA. In this paper, we construct classically-accessible classical oracles relative to which BQP/qpoly $\neq$ BQP/poly and QMA $\neq$ QCMA. Here, a classically-accessible classical oracle is an oracle that can be queried only classically, even by quantum algorithms. Based on a similar technique, we also give an alternative proof of the separation between QMA and QCMA relative to a distributional quantumly-accessible classical oracle, which was recently shown by Natarajan and Nirkhe.

Related content


Recent work by Dhulipala et al. \cite{DLRSSY22} initiated the study of the $k$-core decomposition problem under differential privacy via a connection between low round/depth distributed/parallel graph algorithms and private algorithms with small error bounds. They showed that one can output differentially private approximate $k$-core numbers while incurring only a multiplicative error of $(2+\eta)$ (for any constant $\eta > 0$) and an additive error of $\mathrm{poly}(\log n)/\epsilon$. In this paper, we revisit this problem. Our main result is an $\epsilon$-edge differentially private algorithm for $k$-core decomposition which outputs the core numbers with no multiplicative error and $O(\log(n)/\epsilon)$ additive error. This improves upon previous work by a factor of $2$ in the multiplicative error, while giving near-optimal additive error. Our result relies on a novel generalized form of the sparse vector technique, which is especially well-suited for threshold-based graph algorithms; thus, we further strengthen the connection between distributed/parallel graph algorithms and differentially private algorithms.
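
As background, here is a minimal sketch of the standard AboveThreshold form of the sparse vector technique (not the generalized variant introduced in the paper); the function name, budget split, and noise scales below are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def above_threshold(queries, data, threshold, epsilon, sensitivity=1.0):
    """Standard AboveThreshold sparse vector technique: for a stream of
    low-sensitivity queries, report whether each noisy answer exceeds a
    noisy threshold, halting after the first "above" report."""
    # Split the privacy budget between the threshold noise and the per-query noise.
    noisy_threshold = threshold + np.random.laplace(scale=2.0 * sensitivity / epsilon)
    answers = []
    for q in queries:
        noisy_answer = q(data) + np.random.laplace(scale=4.0 * sensitivity / epsilon)
        if noisy_answer >= noisy_threshold:
            answers.append(True)
            break  # the budget covers only one "above" answer
        answers.append(False)
    return answers

# Toy usage: sensitivity-1 counting queries over a list of numbers.
data = [3, 7, 1, 9, 4]
queries = [lambda d, t=t: sum(1 for x in d if x >= t) for t in range(10)]
print(above_threshold(queries, data, threshold=3, epsilon=1.0))
```

In the $k$-core setting one would compare noisy degree-like counts against level thresholds; the abstract's generalized form handles many such threshold tests, which this basic sketch does not.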

A string $w$ is said to be a minimal absent word (MAW) for a string $S$ if $w$ does not occur in $S$ and any proper substring of $w$ occurs in $S$. We focus on non-trivial MAWs, which are of length at least 2. Finding such non-trivial MAWs for a given string is motivated by applications in bioinformatics and data compression. Fujishige et al. [TCS 2023] proposed a data structure of size $\Theta(n)$ that can output the set $\mathsf{MAW}(S)$ of all MAWs for a given string $S$ of length $n$ in $O(n + |\mathsf{MAW}(S)|)$ time, based on the directed acyclic word graph (DAWG). In this paper, we present a more space efficient data structure based on the compact DAWG (CDAWG), which can output $\mathsf{MAW}(S)$ in $O(|\mathsf{MAW}(S)|)$ time with $O(e)$ space, where $e$ denotes the minimum of the sizes of the CDAWGs for $S$ and for its reversal $S^R$. For any string of length $n$, it holds that $e < 2n$, and for highly repetitive strings $e$ can be sublinear (up to logarithmic) in $n$. We also show that MAWs and their generalization, minimal rare words, have close relationships with extended bispecial factors, via the CDAWG.
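
A brute-force illustration of the MAW definition (not the CDAWG-based data structure described above); the function names and the example string are illustrative.

```python
def is_maw(w: str, s: str) -> bool:
    """w is a MAW for s iff w does not occur in s but every proper substring
    of w does.  Every proper substring of w is contained in w[:-1] or w[1:],
    so checking those two maximal proper substrings suffices."""
    return w not in s and w[:-1] in s and w[1:] in s

def naive_maws(s: str, max_len: int):
    """Enumerate all non-trivial MAWs (length >= 2) up to max_len by brute force."""
    alphabet = sorted(set(s))
    maws = set()
    for length in range(2, max_len + 1):
        # Any MAW a + u + b must have its inner part u occurring in s,
        # so it is enough to extend every occurring factor of length - 2.
        factors = {s[i:i + length - 2] for i in range(len(s) - (length - 2) + 1)}
        for u in factors:
            for a in alphabet:
                for b in alphabet:
                    w = a + u + b
                    if is_maw(w, s):
                        maws.add(w)
    return sorted(maws)

print(naive_maws("abaab", max_len=4))
```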

Prior work studies the question of ``fairly'' ordering transactions in a replicated state machine. Each of $n$ replicas receives transactions in a possibly different order, and the system must aggregate the observed orderings into a single order. We argue that this problem is best viewed through the lens of social choice theory, in which (in the preference aggregation problem) rankings on candidates are aggregated into an election result. Two features make this problem novel. First, the number of transactions is unbounded, and an ordering must be defined over a countably infinite set. And second, decisions must be made quickly, with only partial information. Additionally, some faulty replicas might alter their reported observations; their influence on the output should be bounded and well understood. Prior work studies a ``$\gamma$-batch-order-fairness'' property, which divides an ordering into contiguous batches. If a $\gamma$ fraction of replicas receive $\tau$ before $\tau^\prime$, then $\tau^\prime$ cannot be in an earlier batch than $\tau$. We strengthen this definition to require that batches have minimal size ($\gamma$-batch-order-fairness can be vacuously satisfied by large batches) while accounting for the possibility of faulty replicas. This social choice lens enables an ordering protocol with strictly stronger fairness and liveness properties than prior work. We study the Ranked Pairs method. Analysis of how missing information moves through the algorithm allows our streaming version to know when it can output a transaction. Deliberate construction of a tiebreaking rule ensures our algorithm outputs a transaction after a bounded time (in a synchronous network). Prior work relies on a fixed choice of $\gamma$ and bound on the number of faulty replicas $f$, but our algorithm satisfies our definition for every $\frac{1}{2}<\gamma\leq 1$ simultaneously and for any $f$.
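
A minimal batch version of the Ranked Pairs method over a finite candidate set, assuming complete rankings; the streaming, fault-tolerant protocol and the carefully constructed tie-breaking rule from the paper are not reflected here, and all names are illustrative.

```python
from itertools import combinations

def ranked_pairs(ballots, candidates):
    """Batch Ranked Pairs (Tideman): lock in majority preferences from
    strongest to weakest, skipping any pair that would create a cycle,
    then output a topological order of the locked graph."""
    # Pairwise tallies: margin[(a, b)] = number of ballots ranking a above b.
    margin = {(a, b): 0 for a in candidates for b in candidates if a != b}
    for ballot in ballots:
        pos = {c: i for i, c in enumerate(ballot)}
        for a, b in combinations(candidates, 2):
            if pos[a] < pos[b]:
                margin[(a, b)] += 1
            else:
                margin[(b, a)] += 1

    # Majority pairs sorted by strength; ties broken lexicographically here.
    pairs = sorted(
        [(a, b) for (a, b) in margin if margin[(a, b)] > margin[(b, a)]],
        key=lambda p: (-margin[p], p),
    )

    locked = set()

    def reaches(x, y, edges):
        stack, seen = [x], set()
        while stack:
            u = stack.pop()
            if u == y:
                return True
            if u not in seen:
                seen.add(u)
                stack.extend(v for (w, v) in edges if w == u)
        return False

    for a, b in pairs:
        if not reaches(b, a, locked):  # adding a -> b must not create a cycle
            locked.add((a, b))

    order, remaining = [], set(candidates)
    while remaining:
        source = next(c for c in sorted(remaining)
                      if not any((d, c) in locked for d in remaining))
        order.append(source)
        remaining.remove(source)
    return order

ballots = [["t1", "t2", "t3"], ["t1", "t3", "t2"], ["t2", "t1", "t3"]]
print(ranked_pairs(ballots, ["t1", "t2", "t3"]))  # -> ['t1', 't2', 't3']
```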

Maximum mean discrepancy (MMD) is a particularly useful distance metric for differentially private data generation: when used with finite-dimensional features it allows us to summarize and privatize the data distribution once, which we can repeatedly use during generator training without further privacy loss. An important question in this framework is, then, what features are useful to distinguish between real and synthetic data distributions, and whether those enable us to generate quality synthetic data. This work considers using the features of \textit{neural tangent kernels (NTKs)}, more precisely \textit{empirical} NTKs (e-NTKs). We find that, perhaps surprisingly, the expressiveness of the untrained e-NTK features is comparable to that of perceptual features taken from models pre-trained on public data. As a result, our method improves the privacy-accuracy trade-off compared to other state-of-the-art methods, without relying on any public data, as demonstrated on several tabular and image benchmark datasets.
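
A toy sketch of the summarize-once/privatize-once pattern with a finite-dimensional feature map; random Fourier features stand in for the e-NTK features actually used, and the clipping, sensitivity bound, and Gaussian-mechanism noise scale are standard textbook choices rather than the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_features(x, W, b):
    """Fixed finite-dimensional feature map (random Fourier features here,
    standing in for the e-NTK features used in the paper)."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(x @ W + b)

def private_mean_embedding(real_data, W, b, epsilon, delta, clip=1.0):
    """Privatize the real data's mean embedding once with the Gaussian mechanism."""
    phi = random_features(real_data, W, b)
    phi = phi / np.maximum(np.linalg.norm(phi, axis=1, keepdims=True) / clip, 1.0)
    mean_phi = phi.mean(axis=0)
    sensitivity = 2.0 * clip / real_data.shape[0]  # L2 sensitivity of the clipped mean
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return mean_phi + rng.normal(scale=sigma, size=mean_phi.shape)

def mmd2(private_real_embedding, synthetic_data, W, b):
    """Squared MMD with finite features: distance between mean embeddings.
    Reusable at every generator step with no further privacy loss."""
    synth_mean = random_features(synthetic_data, W, b).mean(axis=0)
    return float(np.sum((private_real_embedding - synth_mean) ** 2))

# Toy usage: 2-D data, 128 random features.
d, D = 2, 128
W, b = rng.normal(size=(d, D)), rng.uniform(0, 2 * np.pi, size=D)
real = rng.normal(size=(1000, d))
emb = private_mean_embedding(real, W, b, epsilon=1.0, delta=1e-5)
print(mmd2(emb, rng.normal(size=(500, d)), W, b))
```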

Simulation-based inference (SBI) is constantly in search of more expressive algorithms for accurately inferring the parameters of complex models from noisy data. We present consistency models for neural posterior estimation (CMPE), a new free-form conditional sampler for scalable, fast, and amortized SBI with generative neural networks. CMPE combines the advantages of normalizing flows and flow matching methods into a single generative architecture: It essentially distills a continuous probability flow and enables rapid few-shot inference with an unconstrained architecture that can be tailored to the structure of the estimation problem. Our empirical evaluation demonstrates that CMPE not only outperforms current state-of-the-art algorithms on three hard low-dimensional problems but also achieves competitive performance in a high-dimensional Bayesian denoising experiment and in estimating a computationally demanding multi-scale model of tumor spheroid growth.

Federated Learning (FL) is a decentralized machine-learning paradigm in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges to FL, which can incur drifted global models that are slow to converge. Knowledge Distillation has recently emerged to tackle this issue by refining the server model using aggregated knowledge from heterogeneous users, rather than directly averaging their model parameters. This approach, however, depends on a proxy dataset, making it impractical unless such a prerequisite is satisfied. Moreover, the ensemble knowledge is not fully utilized to guide local model learning, which may in turn affect the quality of the aggregated model. Inspired by the prior art, we propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner, which is then broadcast to users, regulating local training using the learned knowledge as an inductive bias. Empirical studies powered by theoretical implications show that our approach facilitates FL with better generalization performance using fewer communication rounds, compared with the state-of-the-art.
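
For contrast, a sketch of the plain parameter-averaging (FedAvg-style) aggregation step that the abstract describes as the baseline; the data-free generator-based distillation itself is not shown, and the function and variable names are illustrative.

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Plain FedAvg aggregation: the server takes a data-size-weighted average
    of client parameters, without ever seeing client data."""
    total = float(sum(client_sizes))
    keys = client_params[0].keys()
    return {
        k: sum(n / total * p[k] for p, n in zip(client_params, client_sizes))
        for k in keys
    }

# Toy usage: two clients, each holding a single weight matrix.
clients = [{"w": np.ones((2, 2))}, {"w": 3 * np.ones((2, 2))}]
print(fedavg_aggregate(clients, client_sizes=[100, 300])["w"])  # weighted mean = 2.5
```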

Most object recognition approaches focus predominantly on learning discriminative visual patterns while overlooking the holistic object structure. Though important, structure modeling usually requires significant manual annotations and therefore is labor-intensive. In this paper, we propose to "look into object" (explicitly yet intrinsically model the object structure) by incorporating self-supervisions into the traditional framework. We show that the recognition backbone can be substantially enhanced for more robust representation learning, without any extra annotation cost or impact on inference speed. Specifically, we first propose an object-extent learning module for localizing the object according to the visual patterns shared among the instances in the same category. We then design a spatial context learning module for modeling the internal structures of the object, through predicting the relative positions within the extent. These two modules can be easily plugged into any backbone networks during training and detached at inference time. Extensive experiments show that our look-into-object approach (LIO) achieves large performance gains on a number of benchmarks, including generic object recognition (ImageNet) and fine-grained object recognition tasks (CUB, Cars, Aircraft). We also show that this learning paradigm is highly generalizable to other tasks such as object detection and segmentation (MS COCO). Project page: //github.com/JDAI-CV/LIO.

Deep Learning (DL) is vulnerable to out-of-distribution and adversarial examples, which result in incorrect outputs. To make DL more robust, several post-hoc anomaly detection techniques to detect (and discard) these anomalous samples have been proposed in the recent past. This survey provides a structured and comprehensive overview of the research on anomaly detection for DL based applications. We provide a taxonomy for existing techniques based on their underlying assumptions and adopted approaches. We discuss the techniques in each category and describe the relative strengths and weaknesses of the approaches. Our goal is to provide a clearer understanding of the techniques in each category of research on this topic. Finally, we highlight the unsolved research challenges in applying anomaly detection techniques to DL systems and present some high-impact future research directions.

It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
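
A minimal sketch of one common instantiation of the Outlier Exposure objective for classification, where predictions on auxiliary outliers are pushed toward the uniform label distribution; the weighting factor and names are assumptions, not the paper's exact settings.

```python
import torch.nn.functional as F

def outlier_exposure_loss(model, x_in, y_in, x_out, lam=0.5):
    """Cross-entropy on in-distribution data plus a term that pushes the
    model's predictive distribution on auxiliary outliers toward uniform."""
    logits_in = model(x_in)
    loss_in = F.cross_entropy(logits_in, y_in)

    logits_out = model(x_out)
    log_probs_out = F.log_softmax(logits_out, dim=1)
    # Cross-entropy to the uniform distribution, averaged over the outlier batch.
    loss_out = -log_probs_out.mean()

    return loss_in + lam * loss_out
```

At test time, anomalies can then be flagged by a low maximum softmax probability (or high predictive entropy), since the outlier term discourages confident predictions off-distribution.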

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This allows us to relax both the optimal value and solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
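
A worked instance of smoothing the max operator with a strongly convex (negative-entropy) regularizer: the smoothed max becomes a log-sum-exp whose gradient is the softmax, which is what makes a max-based DP recursion differentiable. The temperature and names below are illustrative.

```python
import numpy as np

def smoothed_max(x, gamma=1.0):
    """Entropy-regularized max: max_q <q, x> + gamma * H(q) over the simplex,
    whose closed form is gamma * logsumexp(x / gamma).  Its gradient is the
    softmax distribution softmax(x / gamma)."""
    z = x / gamma
    m = np.max(z)
    lse = m + np.log(np.sum(np.exp(z - m)))  # numerically stable logsumexp
    value = gamma * lse
    grad = np.exp(z - lse)                   # softmax(x / gamma)
    return value, grad

# As gamma -> 0 the smoothed max approaches the hard max and its gradient
# approaches a one-hot argmax indicator.
print(smoothed_max(np.array([1.0, 2.0, 0.5]), gamma=0.1))
```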
