
In this paper, we establish anti-concentration inequalities for additive noise mechanisms which achieve $f$-differential privacy ($f$-DP), a notion of privacy phrased in terms of a tradeoff function (a.k.a. ROC curve) $f$ which limits the ability of an adversary to determine which individuals were in the database. We show that canonical noise distributions (CNDs), proposed by Awan and Vadhan (2023), match the anti-concentration bounds at half-integer values, indicating that their tail behavior is near-optimal. We also show that all CNDs are sub-exponential, regardless of the $f$-DP guarantee. In the case of log-concave CNDs, we show that they are stochastically smallest among all noise distributions with the same privacy guarantee. In terms of integer-valued noise, we propose a new notion of discrete CND and prove that a discrete CND always exists, can be constructed by rounding a continuous CND, and that the discrete CND is unique when designed for a statistic with sensitivity 1. We further show that the discrete CND at sensitivity 1 is stochastically smallest among all integer-valued noises. Our theoretical results shed light on the different types of privacy guarantees possible in the $f$-DP framework and can be incorporated in more complex mechanisms to optimize performance.
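As a concrete illustration of a tradeoff function, the classical $\epsilon$-DP guarantee corresponds to $f_\epsilon(\alpha) = \max\{0,\ 1 - e^{\epsilon}\alpha,\ e^{-\epsilon}(1-\alpha)\}$; a minimal sketch (the function name is ours, not from the paper):

```python
import math

def eps_dp_tradeoff(alpha: float, eps: float) -> float:
    """Tradeoff (ROC-style) function of classical eps-DP: the smallest
    type-II error an adversary can achieve at type-I error alpha when
    distinguishing two neighboring databases."""
    return max(0.0, 1.0 - math.exp(eps) * alpha, math.exp(-eps) * (1.0 - alpha))

# A mechanism is f-DP if no test against neighboring databases beats
# the curve alpha -> f(alpha); larger f means stronger privacy.
for alpha in (0.0, 0.05, 0.5, 1.0):
    print(alpha, eps_dp_tradeoff(alpha, eps=1.0))
```

Note that $f_\epsilon(0) = 1$ and $f_\epsilon(1) = 0$, as for any tradeoff function.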

Related Content

Recent methods in text-to-3D leverage powerful pretrained diffusion models to optimize NeRF. Notably, these methods are able to produce high-quality 3D scenes without training on 3D data. Due to the open-ended nature of the task, most studies evaluate their results with subjective case studies and user experiments, which makes it difficult to answer quantitatively: how far has current progress in text-to-3D actually come? In this paper, we introduce T$^3$Bench, the first comprehensive text-to-3D benchmark containing diverse text prompts of three increasing complexity levels that are specially designed for 3D generation. To assess both subjective quality and text alignment, we propose two automatic metrics based on multi-view images rendered from the 3D content. The quality metric combines multi-view text-image scores and regional convolution to detect quality and view inconsistency. The alignment metric uses multi-view captioning and Large Language Model (LLM) evaluation to measure text-3D consistency. Both metrics closely correlate with different dimensions of human judgment, providing a paradigm for efficiently evaluating text-to-3D models. The benchmarking results, shown in Fig. 1, reveal performance differences among six prevalent text-to-3D methods. Our analysis further highlights the common struggles of current methods in generating surroundings and multi-object scenes, as well as the bottleneck of leveraging 2D guidance for 3D generation. Our project page is available at: //t3bench.com.

Based on the theory of homogeneous spaces we derive \textit{geometrically optimal edge attributes} to be used within the flexible message passing framework. We formalize the notion of weight sharing in convolutional networks as the sharing of message functions over point-pairs that should be treated equally. We define equivalence classes of point-pairs that are identical up to a transformation in the group and derive attributes that uniquely identify these classes. Weight sharing is then obtained by conditioning message functions on these attributes. As an application of the theory, we develop an efficient equivariant group convolutional network for processing 3D point clouds. The theory of homogeneous spaces tells us how to do group convolutions with feature maps over the homogeneous space of positions $\mathbb{R}^3$, positions and orientations $\mathbb{R}^3 {\times} S^2$, and the group SE$(3)$ itself. Among these, $\mathbb{R}^3 {\times} S^2$ is an optimal choice due to its ability to represent directional information, which $\mathbb{R}^3$ methods cannot, while being significantly more computationally efficient than indexing features on the full SE$(3)$ group. We empirically support this claim by reaching state-of-the-art results -- in accuracy and speed -- on three different benchmarks: interatomic potential energy prediction, trajectory forecasting in N-body systems, and generating molecules via equivariant diffusion models.

Spoken language understanding (SLU) typically includes two subtasks: intent detection and slot filling. While SLU has achieved great success in high-resource languages, it remains challenging in low-resource languages due to the scarcity of labeled training data; hence, there is growing interest in zero-shot cross-lingual SLU. Despite the success of existing zero-shot cross-lingual SLU models, most of them neglect the mutual guidance between intents and slots. To address this issue, we propose an Intra-Inter Knowledge Distillation framework for zero-shot cross-lingual Spoken Language Understanding (I$^2$KD-SLU) to model this mutual guidance. Specifically, we not only apply intra-knowledge distillation between intent predictions or slot predictions of the same utterance in different languages, but also apply inter-knowledge distillation between the intent predictions and slot predictions of the same utterance. Our experimental results demonstrate that the proposed framework significantly outperforms strong baselines and achieves new state-of-the-art performance on the MultiATIS++ dataset, with a significant improvement in overall accuracy over the previous best model.

Let $G$ be an undirected graph, and $s,t$ distinguished vertices of $G$. A minimal $s,t$-separator is an inclusion-wise minimal vertex-set whose removal places $s$ and $t$ in distinct connected components. We present an algorithm for listing the minimal $s,t$-separators of a graph, whose cardinality is at most $k$, with FPT-delay, where the parameter depends only on $k$. This problem finds applications in various algorithms parameterized by treewidth, which include query evaluation in relational databases, probabilistic inference, and many more. We also present a simple algorithm that enumerates all of the (not necessarily minimal) $s,t$-separators of a graph in ranked order by size.
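The defining property of a minimal $s,t$-separator can be checked directly: $S$ must disconnect $s$ from $t$, while no proper subset of $S$ may. A minimal sketch using BFS over an adjacency-set representation (function names and the example graph are ours, not from the paper):

```python
from collections import deque

def reachable(adj, s, removed):
    """Vertices reachable from s in adj after deleting the set `removed`."""
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen and v not in removed:
                seen.add(v)
                q.append(v)
    return seen

def is_minimal_st_separator(adj, s, t, S):
    """S is a minimal s,t-separator iff removing S disconnects s from t
    and every vertex of S is necessary for that disconnection."""
    if s in S or t in S:
        return False
    if t in reachable(adj, s, S):
        return False  # S does not even separate s from t
    # inclusion-wise minimality: dropping any single vertex reconnects s and t
    return all(t in reachable(adj, s, S - {v}) for v in S)

# Path s - a - b - t plus a chord a - t:
adj = {'s': {'a'}, 'a': {'s', 'b', 't'}, 'b': {'a', 't'}, 't': {'a', 'b'}}
print(is_minimal_st_separator(adj, 's', 't', {'a'}))       # True
print(is_minimal_st_separator(adj, 's', 't', {'a', 'b'}))  # False: {'b'} is redundant
```

The listing algorithms in the paper enumerate all such sets of cardinality at most $k$; this check only verifies one candidate.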

Symbolic automata are finite state automata that support potentially infinite alphabets, such as the set of rational numbers, generally applied to regular expressions/languages over finite words. In symbolic automata (or automata modulo theories), an alphabet is represented by an effective Boolean algebra, supported by a decision procedure for satisfiability. Regular languages over infinite words (so-called $\omega$-regular languages) have a rich history paralleling that of regular languages over finite words, with well known applications to model checking via B\"uchi automata and temporal logics. We generalize symbolic automata to support $\omega$-regular languages via symbolic transition terms and symbolic derivatives, bringing together a variety of classic automata and logics in a unified framework that provides all the necessary ingredients to support symbolic model checking modulo $A$. In particular, we define: (1) alternating B\"uchi automata modulo $A$ ($ABW_A$) as well as (non-alternating) nondeterministic B\"uchi automata modulo $A$ ($NBW_A$); (2) an alternation elimination algorithm that incrementally constructs an $NBW_A$ from an $ABW_A$, and can also be used for constructing the product of two $NBW_A$'s; (3) a definition of linear temporal logic (LTL) modulo $A$ that generalizes Vardi's construction of alternating B\"uchi automata from LTL, using (2) to go from LTL modulo $A$ to $NBW_A$ via $ABW_A$. Finally, we present a combination of LTL modulo $A$ with extended regular expressions modulo $A$ that generalizes the Property Specification Language (PSL). Our combination allows regex complement, which is not supported in PSL but can be supported naturally by using symbolic transition terms.
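For finite words, the core idea of symbolic automata, transitions guarded by predicates rather than concrete letters, can be sketched in a few lines (this toy omits the effective Boolean algebra, ω-words, and the symbolic derivatives the paper actually develops; all names are ours):

```python
# Transitions are guarded by predicates over an infinite alphabet (here: ints),
# instead of being enumerated per concrete letter.
def run(transitions, start, accepting, word):
    """transitions: dict state -> list of (predicate, next_state) pairs.
    Deterministically follows the first enabled guard per letter."""
    state = start
    for letter in word:
        for guard, nxt in transitions[state]:
            if guard(letter):
                state = nxt
                break
        else:
            return False  # no transition enabled: the word is rejected
    return state in accepting

# A symbolic DFA accepting integer words with strictly alternating signs,
# starting with a positive letter.
trans = {
    'expect_pos': [(lambda x: x > 0, 'expect_neg')],
    'expect_neg': [(lambda x: x < 0, 'expect_pos')],
}
print(run(trans, 'expect_pos', {'expect_pos', 'expect_neg'}, [3, -1, 5]))  # True
print(run(trans, 'expect_pos', {'expect_pos', 'expect_neg'}, [3, 4]))     # False
```

In a full implementation the guards live in a Boolean algebra with $\wedge$, $\vee$, $\neg$ and a satisfiability check, which is what makes constructions like products and complementation effective.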

In this paper, we argue that current work has failed to provide a comprehensive and maintainable in-memory representation for persistent memory. PM data should be easily mappable into a process address space, shareable across processes, shippable between machines, consistent after a crash, and accessible to legacy code with fast, efficient pointers as first-class abstractions. While existing systems have provided niceties like mmap()-based load/store access, they have not been able to support all these necessary properties due to conflicting requirements. We propose Puddles, a new persistent memory abstraction, to solve these problems. Puddles provide application-independent recovery after a power outage; they make recovery from a system failure a system-level property of the stored data rather than the responsibility of the programs that access it. Puddles use native pointers, so they are compatible with existing code. Finally, Puddles implement support for sharing and shipping of PM data between processes and systems without expensive serialization and deserialization. Compared to existing systems, Puddles are at least as fast as and up to 1.34$\times$ faster than PMDK while being competitive with other PM libraries across YCSB workloads. Moreover, to demonstrate Puddles' ability to relocate data, we showcase a sensor network data-aggregation workload that results in a 4.7$\times$ speedup over PMDK.

In this paper, we establish a joint (bivariate) functional central limit theorem of the sample quantile and the $r$-th absolute centred sample moment for functionals of mixing processes. More precisely, we consider $L_2$-near epoch dependent processes that are functionals of either $\phi$-mixing or absolutely regular processes. The general results we obtain can be used for two classes of popular and important processes in applications: The class of augmented GARCH($p$,$q$) processes with independent and identically distributed innovations (including many GARCH variations used in practice) and the class of ARMA($p$,$q$) processes with mixing innovations (including, e.g., ARMA-GARCH processes). For selected examples, we provide exact conditions on the moments and parameters of the process for the joint asymptotics to hold.

In this paper we propose two new subclasses of Petri nets with resets, for which the reachability and coverability problems become tractable. We add an acyclicity condition that only applies to the consumptions and productions, not the resets. The first class is acyclic Petri nets with resets, and we show that coverability is PSPACE-complete for them. This contrasts with the known Ackermann-hardness for coverability in (not necessarily acyclic) Petri nets with resets. We prove that the reachability problem remains undecidable for acyclic Petri nets with resets. The second class concerns workflow nets, a practically motivated and natural subclass of Petri nets. Here, we show that both coverability and reachability in acyclic workflow nets with resets are PSPACE-complete. Without the acyclicity condition, reachability and coverability in workflow nets with resets are known to be equally hard as for Petri nets with resets, that being Ackermann-hard and undecidable, respectively.
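The firing rule for Petri nets with resets, consume along pre-arcs, empty the places touched by reset arcs, then produce along post-arcs, can be sketched as follows (a hypothetical marking-as-dict encoding; the reset-before-production ordering on a shared place is one common convention, not the paper's formal definition):

```python
def fire(marking, pre, post, resets):
    """Fire one transition of a Petri net with resets, if enabled.
    pre/post: dict place -> tokens consumed/produced;
    resets: set of places emptied after consumption, before production.
    Returns the new marking, or None when the transition is not enabled."""
    if any(marking.get(p, 0) < n for p, n in pre.items()):
        return None  # some pre-place lacks tokens
    m = dict(marking)  # markings are immutable from the caller's view
    for p, n in pre.items():
        m[p] -= n
    for p in resets:
        m[p] = 0       # reset arcs drop ALL tokens, however many there are
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

m = {'a': 2, 'b': 5, 'c': 0}
print(fire(m, pre={'a': 1}, post={'c': 1}, resets={'b'}))
# {'a': 1, 'b': 0, 'c': 1}
```

It is this "drop all tokens" step that breaks the monotonicity arguments behind classical coverability algorithms and drives the hardness results discussed above.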

Large Language Models (LLMs) have shown promise in the autonomous driving sector, particularly in generalization and interpretability. We introduce a unique object-level multimodal LLM architecture that merges vectorized numeric modalities with a pre-trained LLM to improve context understanding in driving situations. We also present a new dataset of 160k QA pairs derived from 10k driving scenarios, paired with high-quality control commands collected with an RL agent and question-answer pairs generated by a teacher LLM (GPT-3.5). A distinct pretraining strategy is devised to align numeric vector modalities with static LLM representations using vector captioning language data. We also introduce an evaluation metric for Driving QA and demonstrate our LLM-driver's proficiency in interpreting driving scenarios, answering questions, and decision-making. Our findings highlight the potential of LLM-based driving action generation in comparison to traditional behavioral cloning. We make our benchmark, datasets, and model available for further exploration.

Given a graph $G$ that is modified by a sequence of edge insertions and deletions, we study the Maximum $k$-Edge Coloring problem: Having access to $k$ colors, how can we color as many edges of $G$ as possible such that no two adjacent edges share the same color? While this problem is different from simply maintaining a $b$-matching with $b=k$, the two problems are closely related: a maximum $k$-matching always contains a $\frac{k+1}k$-approximate maximum $k$-edge coloring. However, maximum $b$-matching can be solved efficiently in the static setting, whereas the Maximum $k$-Edge Coloring problem is NP-hard and even APX-hard for $k \ge 2$. We present new results on both problems: For $b$-matching, we show a new integrality gap result and, for the case where $b$ is a constant, we adapt Wajc's matching sparsification scheme~[STOC20]. Using these as a basis, we give three new algorithms for the dynamic Maximum $k$-Edge Coloring problem: Our MatchO algorithm builds on the dynamic $(2+\epsilon)$-approximation algorithm of Bhattacharya, Gupta, and Mohan~[ESA17] for $b$-matching and achieves a $(2+\epsilon)\frac{k+1} k$-approximation in $O(poly(\log n, \epsilon^{-1}))$ update time against an oblivious adversary. Our MatchA algorithm builds on the dynamic $8$-approximation algorithm by Bhattacharya, Henzinger, and Italiano~[SODA15] for fractional $b$-matching and achieves a $(8+\epsilon)\frac{3k+3}{3k-1}$-approximation in $O(poly(\log n, \epsilon^{-1}))$ update time against an adaptive adversary. Moreover, our reductions use the dynamic $b$-matching algorithm as a black box, so any future improvement in the approximation ratio for dynamic $b$-matching will automatically translate into a better approximation ratio for our algorithms. Finally, we present a greedy algorithm that runs in $O(\Delta+k)$ update time, while guaranteeing a $2.16$-approximation factor.
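A first-fit greedy step for edge insertions, far simpler than the paper's algorithms and without any deletion handling or approximation guarantee, illustrates the constraint that adjacent edges must receive distinct colors from $\{1,\dots,k\}$ (all names are ours):

```python
def greedy_insert(coloring, incident, u, v, k):
    """On inserting edge (u, v), give it the smallest color in 1..k not
    already used by an edge incident to u or v; leave it uncolored (None)
    if every color is blocked.
    coloring: dict edge -> color; incident: dict vertex -> set of used colors."""
    used = incident.setdefault(u, set()) | incident.setdefault(v, set())
    color = next((c for c in range(1, k + 1) if c not in used), None)
    coloring[(u, v)] = color
    if color is not None:
        incident[u].add(color)
        incident[v].add(color)
    return color

coloring, incident = {}, {}
print(greedy_insert(coloring, incident, 'x', 'y', k=2))  # 1
print(greedy_insert(coloring, incident, 'y', 'z', k=2))  # 2
print(greedy_insert(coloring, incident, 'x', 'z', k=2))  # None: both colors blocked
```

The objective is to maximize the number of colored (non-`None`) edges; the paper's $O(\Delta+k)$-time greedy additionally recolors on deletions to keep its $2.16$-approximation.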
