
We show that interactive protocols between a prover and a verifier, a well-known tool of complexity theory, can be used in practice to certify the correctness of automated reasoning tools. Theoretically, interactive protocols exist for all $\textsf{PSPACE}$ problems. The verifier of a protocol checks the prover's answer to a problem instance in probabilistic polynomial time, with polynomially many bits of communication, and with exponentially small probability of error. (The prover may need exponential time.) Existing interactive protocols are not used in practice because their provers use naive algorithms, inefficient even for small instances, that are incompatible with practical implementations of automated reasoning. We bridge the gap between theory and practice by means of an interactive protocol whose prover uses BDDs. We consider the problem of counting the number of assignments to a QBF instance ($\#\textrm{CP}$), which has a natural BDD-based algorithm. We give an interactive protocol for $\#\textrm{CP}$ whose prover is implemented on top of an extended BDD library. The prover has only a linear overhead in computation time over the natural algorithm. We have implemented our protocol in $\textsf{blic}$, a certifying tool for $\#\textrm{CP}$. Experiments on standard QBF benchmarks show that $\textsf{blic}$ is competitive with state-of-the-art QBF-solvers. The run time of the verifier is negligible. While loss of absolute certainty can be concerning, the error probability in our experiments is at most $10^{-10}$ and reduces to $10^{-10k}$ by repeating the verification $k$ times.
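
The "natural BDD-based algorithm" the prover builds on can be pictured in miniature. The sketch below is a plain ROBDD model counter for propositional conjunctions; quantifier handling, the extended BDD library, and the prover/verifier interaction of the protocol itself are all omitted, and every name is illustrative.

```python
# Minimal ROBDD model counter -- a sketch of BDD-based assignment counting,
# not the blic implementation. Hash-consing in `mk` keeps the diagram
# reduced, which is what makes the counting recursion efficient.

class BDD:
    def __init__(self, nvars):
        self.nvars = nvars
        self.nodes = [None, None]      # node ids: 0 = False, 1 = True
        self.unique = {}               # (var, lo, hi) -> node id

    def mk(self, var, lo, hi):
        if lo == hi:
            return lo
        key = (var, lo, hi)
        if key not in self.unique:
            self.unique[key] = len(self.nodes)
            self.nodes.append(key)
        return self.unique[key]

    def var(self, i):                  # the BDD of the literal x_i
        return self.mk(i, 0, 1)

    def conj(self, u, v, memo=None):
        """AND of two BDDs by simultaneous Shannon expansion."""
        memo = {} if memo is None else memo
        if u == 0 or v == 0:
            return 0
        if u == 1:
            return v
        if v == 1:
            return u
        if (u, v) not in memo:
            (uv, ul, uh), (vv, vl, vh) = self.nodes[u], self.nodes[v]
            t = min(uv, vv)
            lo = self.conj(ul if uv == t else u, vl if vv == t else v, memo)
            hi = self.conj(uh if uv == t else u, vh if vv == t else v, memo)
            memo[(u, v)] = self.mk(t, lo, hi)
        return memo[(u, v)]

    def count(self, u):
        """Number of satisfying assignments over all nvars variables."""
        memo = {}
        def go(u, level):
            if u == 0:
                return 0
            if u == 1:
                return 2 ** (self.nvars - level)
            if (u, level) not in memo:
                v, lo, hi = self.nodes[u]
                memo[(u, level)] = 2 ** (v - level) * (go(lo, v + 1) + go(hi, v + 1))
            return memo[(u, level)]
        return go(u, 0)

# e.g. x0 AND x2 over three variables has exactly 2 models:
b = BDD(3)
assert b.count(b.conj(b.var(0), b.var(2))) == 2
```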

Related content

The IFIP TC13 Conference on Human-Computer Interaction is an important venue for researchers and practitioners in human-computer interaction to present their work. Over the years, these conferences have attracted researchers from many countries and cultures.
October 23, 2023

Increases in the deployment of machine learning algorithms for applications that deal with sensitive data have brought attention to the issue of fairness in machine learning. Many works have been devoted to applications that require different demographic groups to be treated fairly. However, algorithms that aim to satisfy inter-group fairness (also called group fairness) may inadvertently treat individuals within the same demographic group unfairly. To address this issue, we introduce a formal definition of within-group fairness that maintains fairness among individuals from within the same group. We propose a pre-processing framework to meet both inter- and within-group fairness criteria with little compromise in accuracy. The framework maps the feature vectors of members from different groups to an inter-group-fair canonical domain before feeding them into a scoring function. The mapping is constructed to preserve the relative relationship between the scores obtained from the unprocessed feature vectors of individuals from the same demographic group, guaranteeing within-group fairness. We apply this framework to the COMPAS risk assessment and Law School datasets and compare its performance in achieving inter-group and within-group fairness to two regularization-based methods.
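
As a rough illustration of the pre-processing idea (not the paper's exact construction), a strictly monotone per-group map to within-group quantiles preserves the score order inside each group while giving every group the same canonical score distribution:

```python
import numpy as np

# Hedged sketch: send each individual's score to its within-group quantile.
# The map is strictly increasing inside each group, so within-group order
# is preserved, while all groups land on the same canonical distribution.

def to_canonical(scores, groups):
    scores, groups = np.asarray(scores, float), np.asarray(groups)
    out = np.empty_like(scores)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        ranks = scores[idx].argsort().argsort()   # 0 .. |group|-1
        out[idx] = (ranks + 1) / (len(idx) + 1)   # quantiles in (0, 1)
    return out

# e.g. two groups with very different raw scales map into the same range:
print(to_canonical([10, 30, 20, 0.1, 0.3], ["A", "A", "A", "B", "B"]))
```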

Simultaneous machine translation (SiMT) requires a robust read/write policy in conjunction with a high-quality translation model. Traditional methods rely on either a fixed wait-$k$ policy coupled with a standalone wait-$k$ translation model, or an adaptive policy jointly trained with the translation model. In this study, we propose a more flexible approach by decoupling the adaptive policy model from the translation model. Our motivation stems from the observation that a standalone multi-path wait-$k$ model performs competitively with the adaptive policies used in state-of-the-art SiMT approaches. Specifically, we introduce DaP, a divergence-based adaptive policy that makes read/write decisions for any translation model based on the potential divergence in translation distributions caused by future information. DaP extends a frozen wait-$k$ model with lightweight parameters and is both memory- and computation-efficient. Experimental results across various benchmarks demonstrate that our approach offers an improved trade-off between translation accuracy and latency, outperforming strong baselines.
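
A toy version of a divergence-based read/write rule looks as follows. The divergence measure and threshold are illustrative, and DaP's actual estimate of the divergence caused by future source information is learned rather than computed from an oracle extended prefix as here:

```python
import math

# Toy divergence-based read/write decision in the spirit of DaP
# (illustrative only; not the paper's formulation).

def js_divergence(p, q):
    """Jensen-Shannon divergence between two distributions given as lists."""
    def kl(a, b):
        return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def read_or_write(p_prefix, p_extended, threshold=0.05):
    """READ more source if extra context would noticeably change the
    next-token distribution; otherwise WRITE the next target token."""
    return "READ" if js_divergence(p_prefix, p_extended) > threshold else "WRITE"

# e.g. nearly identical distributions -> safe to WRITE:
assert read_or_write([0.7, 0.2, 0.1], [0.68, 0.22, 0.1]) == "WRITE"
```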

Passage retrieval is a fundamental task in many information systems, such as web search and question answering, where both efficiency and effectiveness are critical concerns. In recent years, neural retrievers based on pre-trained language models (PLMs), such as dual-encoders, have achieved huge success. Yet, studies have found that the performance of dual-encoders is often limited because they neglect the interaction between queries and candidate passages. Therefore, various interaction paradigms have been proposed to improve the performance of vanilla dual-encoders. In particular, recent state-of-the-art methods often introduce late interaction during model inference. However, such late-interaction methods usually incur substantial computation and storage costs on large corpora. Despite their effectiveness, these efficiency and space-footprint concerns remain important factors limiting the application of interaction-based neural retrieval models. To tackle this issue, we incorporate implicit interaction into dual-encoders and propose the I^3 retriever. In particular, our implicit interaction paradigm leverages generated pseudo-queries to simulate query-passage interaction, and is jointly optimized with the query and passage encoders in an end-to-end manner. It can be fully pre-computed and cached, and its inference involves only a simple dot product of the query vector and the passage vector, making it as efficient as vanilla dual-encoders. We conduct comprehensive experiments on the MSMARCO and TREC 2019 Deep Learning datasets, demonstrating the I^3 retriever's superiority in both effectiveness and efficiency. Moreover, the proposed implicit interaction is compatible with special pre-training and knowledge distillation for passage retrieval, yielding new state-of-the-art performance.
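
The inference-time cost story is easy to see schematically (placeholder shapes, not the paper's models): whatever interaction the passage encoder used offline is already baked into the cached passage vectors, so online ranking is a single matrix-vector product, exactly as cheap as a vanilla dual-encoder:

```python
import numpy as np

# Schematic online ranking with precomputed passage vectors.

def rank_passages(query_vec, passage_matrix, top_k=10):
    """query_vec: (d,); passage_matrix: (num_passages, d), precomputed."""
    scores = passage_matrix @ query_vec        # one dot product per passage
    top = np.argsort(-scores)[:top_k]
    return top, scores[top]

# e.g. 5 cached passages in a 4-dim space (random placeholders):
P = np.random.randn(5, 4)
q = np.random.randn(4)
ids, scores = rank_passages(q, P, top_k=3)
```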

Concentration shift keying (CSK) is a widely adopted modulation technique for molecular communication-based nanonetworks, a key enabler of the Internet of Bio-NanoThings (IoBNT). However, existing methods achieve optimal error performance at the cost of high operational complexity that scales poorly as the number of transmitters, $K$, increases. This paper proposes a novel $M$-ary CSK method termed CSK with Common detection Thresholds (CSK-CT). CSK-CT uses $\textit{common}$ thresholds that are sufficiently low to ensure reliable detection of symbols transmitted by every transmitter, regardless of its distance. We derive closed-form expressions for the common thresholds and release concentrations. To enhance error performance, we optimize the release concentration using a scaling exponent that further refines the common thresholds. We evaluate the performance of CSK-CT against the benchmark CSK for varying values of $K$ and $M$. In terms of error probability, CSK-CT achieves between $10^{-7}$ and $10^{-4}$, a substantial improvement over the $10^{-4}$ to $10^{-3}$ of the benchmark. In terms of complexity, CSK-CT is $O(n)$ and scales with $M$ rather than $K$ ($M\ll K$), whereas the benchmark is $O(n^2)$. Furthermore, CSK-CT shows potential to mitigate inter-symbol interference (ISI), although this aspect warrants further investigation. Owing to its low error probability, improved scalability, low complexity, and potential ISI mitigation, CSK-CT is well suited to data-gathering IoBNT applications, particularly settings where a computationally constrained receiver collects sensitive health-related data from multiple transmitters.
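
Detection with common thresholds can be pictured as follows; the threshold values themselves come from the paper's closed-form expressions and scaling exponent, which are not reproduced here (the numbers below are placeholders):

```python
import bisect

# Symbol detection with common thresholds (a sketch). All K transmitters
# share the same M-1 sorted thresholds, so the receiver's detection cost
# grows with M, not with K.

def detect(concentration, thresholds):
    """Map a received concentration to a symbol in {0, ..., M-1}."""
    return bisect.bisect_right(thresholds, concentration)

# e.g. M = 4 symbols need 3 thresholds (illustrative values):
assert detect(0.7, [0.2, 0.5, 1.1]) == 2
```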

This tutorial gives an advanced introduction to string diagrams and graph languages for higher-order computation. The subject matter develops in a principled way, starting from the two dimensional syntax of key categorical concepts such as functors, adjunctions, and strictification, and leading up to Cartesian Closed Categories, the core mathematical model of the lambda calculus and of functional programming languages. This methodology inverts the usual approach of proceeding from syntax to a categorical interpretation, by rationally reconstructing a syntax from the categorical model. The result is a graph syntax -- more precisely, a hierarchical hypergraph syntax -- which in many ways is shown to be an improvement over the conventional linear term syntax. The rest of the tutorial focuses on applications of interest to programming languages: operational semantics, general frameworks for type inference, and complex whole-program transformations such as closure conversion and automatic differentiation.
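
To make "hierarchical hypergraph syntax" concrete, here is a toy encoding (illustrative only; the tutorial develops the syntax formally from the categorical model): hyperedges carry a label and, optionally, a nested graph, which is how binders such as lambda abstraction enclose their bodies.

```python
from dataclasses import dataclass, field
from typing import Optional

# Toy data structure for hierarchical hypergraph terms (a sketch).

@dataclass
class Edge:
    label: str
    sources: list                        # input wire names
    targets: list                        # output wire names
    nested: Optional["Graph"] = None     # present for binders

@dataclass
class Graph:
    wires: set = field(default_factory=set)
    edges: list = field(default_factory=list)

# e.g. the identity term \x. x as one "lambda" edge whose nested graph
# simply exposes its bound wire:
body = Graph(wires={"x"}, edges=[])
lam = Edge(label="lambda", sources=[], targets=["f"], nested=body)
```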

Higher-order regularization problem formulations are popular frameworks used in machine learning, inverse problems and image/signal processing. In this paper, we consider the computational problem of finding the minimizer of the Sobolev $\mathrm{W}^{1,p}$ semi-norm with a data-fidelity term. We propose a discretization procedure and prove convergence rates between our numerical solution and the target function. Our approach consists of discretizing an appropriate gradient flow problem in space and time. The space discretization is a nonlocal approximation of the $p$-Laplacian operator and our rates directly depend on the localization parameter $\epsilon_n$ and the time mesh-size $\tau_n$. We precisely characterize the asymptotic behaviour of $\epsilon_n$ and $\tau_n$ in order to ensure convergence to the considered minimizer. Finally, we apply our results to the setting of random graph models.
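
To fix ideas, one explicit step of a graph (nonlocal) $p$-Laplacian gradient flow with a quadratic fidelity term looks as follows. This is a sketch with illustrative constants; the precise coupling of the weights, $\epsilon_n$, and $\tau_n$ is exactly what the paper's convergence analysis prescribes:

```python
import numpy as np

# One explicit gradient-flow step for the graph p-Laplacian plus a
# quadratic data-fidelity term (constants illustrative).

def p_laplacian_step(u, W, f, p=1.5, tau=1e-3, lam=1.0):
    """u, f: node values (n,); W: symmetric nonnegative weights (n, n)."""
    diff = u[None, :] - u[:, None]                   # diff[i, j] = u_j - u_i
    flux = W * np.sign(diff) * np.abs(diff) ** (p - 1)
    grad = -flux.sum(axis=1) + lam * (u - f)         # dE/du_i up to constants
    return u - tau * grad
```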

Motivated by demand-responsive parking pricing systems, we consider posted-price algorithms for the online metric matching problem. We give an $O(\log n)$-competitive posted-price randomized algorithm in the case that the metric space is a line. In particular, in this setting we show how to implement the ubiquitous guess-and-double technique using prices.
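
The guess-and-double technique itself, stripped of prices, is simple to sketch (this is the generic wrapper only; how the paper realizes it with posted prices is not reproduced, and `serve` is a placeholder):

```python
# Generic guess-and-double: run in phases under a guessed bound on the
# optimal cost, and restart with a doubled guess whenever the phase
# budget is exceeded.

def guess_and_double(requests, serve, guess=1.0):
    i = 0
    while i < len(requests):
        spent = 0.0                       # cost of the current phase
        while i < len(requests):
            cost = serve(requests[i], budget=guess - spent)
            if spent + cost > guess:      # guess was too small:
                guess *= 2                # double it and redo this request
                break
            spent += cost
            i += 1
    return guess
```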

We study the question of which visibly pushdown languages (VPLs) are in the complexity class $\mathsf{AC}^0$ and how to effectively decide this question. Our contribution is to introduce a particular subclass of one-turn VPLs, called intermediate VPLs, for which the question is entirely open: to the best of our knowledge, containment or non-containment in $\mathsf{AC}^0$ is not known for any intermediate VPL. Our main result states that there is an algorithm that, given a visibly pushdown automaton with language $L$, correctly outputs one of the following: that $L$ is in $\mathsf{AC}^0$; some $m\geq 2$ such that $\mathsf{MOD}_m$ is constant-depth reducible to $L$ (implying that $L$ is not in $\mathsf{AC}^0$); or a finite disjoint union of intermediate VPLs to which $L$ is constant-depth equivalent. In the latter case one can moreover effectively compute $k,l\in\mathbb{N}_{>0}$ with $k\not=l$ such that the concrete intermediate VPL $L(S\rightarrow\varepsilon\mid a c^{k-1} S b_1\mid ac^{l-1}Sb_2)$ is constant-depth reducible to the language $L$. Due to their particular nature we conjecture that either all intermediate VPLs are in $\mathsf{AC}^0$ or none are. As a corollary of our main result we obtain that when the input language is a visibly counter language, our algorithm can effectively determine whether it is in $\mathsf{AC}^0$ -- hence our main result generalizes a result by Krebs et al. stating that it is decidable if a given visibly counter language is in $\mathsf{AC}^0$ (when restricted to well-matched words). For our proofs we revisit so-called Ext-algebras (introduced by Czarnetzki et al.), which are closely related to forest algebras (introduced by Boja\'nczyk and Walukiewicz), and use Green's relations.
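
For concreteness, membership in the specific intermediate VPL named above can be checked directly from the grammar. This is a straightforward recursive test with $b_1$, $b_2$ written as 'x', 'y' (an illustrative encoding; nothing here reflects the paper's constant-depth reductions):

```python
# Membership test for L(S -> eps | a c^{k-1} S b1 | a c^{l-1} S b2),
# with b1 = 'x' and b2 = 'y'.

def in_intermediate_vpl(w, k, l):
    assert k != l and k >= 1 and l >= 1
    if w == "":
        return True
    for m, b in ((k, "x"), (l, "y")):
        prefix = "a" + "c" * (m - 1)
        if w.startswith(prefix) and w.endswith(b):
            if in_intermediate_vpl(w[len(prefix):-1], k, l):
                return True
    return False

# e.g. with k = 1, l = 2: "aacyx" parses as a (ac y) x:
assert in_intermediate_vpl("aacyx", 1, 2)
assert not in_intermediate_vpl("axax", 1, 2)   # wrong nesting
```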

With the advances of data-driven machine learning research, a wide variety of prediction problems have been tackled. It has become critical to explore how machine learning, and specifically deep learning, methods can be exploited to analyse healthcare data. A major limitation of existing methods is their focus on grid-like data; the structure of physiological recordings, however, is often irregular and unordered, which makes them difficult to conceptualise as a matrix. As such, graph neural networks have attracted significant attention by exploiting implicit information residing in a biological system, with interacting nodes connected by edges whose weights can be either temporal associations or anatomical junctions. In this survey, we thoroughly review the different types of graph architectures and their applications in healthcare. We provide an overview of these methods in a systematic manner, organized by their domain of application, including functional connectivity, anatomical structure and electrical-based analysis. We also outline the limitations of existing techniques and discuss potential directions for future research.
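
A generic message-passing layer of the kind the surveyed models build on (a schematic, not any specific surveyed architecture): each node aggregates its neighbours' features through edge weights, e.g. functional-connectivity strengths between brain regions, then applies a shared linear map and nonlinearity:

```python
import numpy as np

# One schematic GNN message-passing layer.

def gnn_layer(X, A, W):
    """X: node features (n, d_in); A: weighted adjacency (n, n) with
    self-loops; W: shared weight matrix (d_in, d_out)."""
    deg = A.sum(axis=1, keepdims=True)
    H = (A / deg) @ X @ W          # normalized neighbourhood aggregation
    return np.maximum(H, 0.0)      # ReLU
```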

Most existing works in visual question answering (VQA) are dedicated to improving the accuracy of predicted answers, while disregarding explanations. We argue that the explanation for an answer is as important as, or even more important than, the answer itself, since it makes the question-answering process more understandable and traceable. To this end, we propose a new task of VQA-E (VQA with Explanation), in which computational models are required to generate an explanation along with the predicted answer. We first construct a new dataset, and then frame the VQA-E problem in a multi-task learning architecture. Our VQA-E dataset is automatically derived from the VQA v2 dataset by intelligently exploiting the available captions. We have conducted a user study to validate the quality of the explanations synthesized by our method. We quantitatively show that the additional supervision from explanations not only produces insightful textual sentences that justify the answers, but also improves answer-prediction performance. Our model outperforms the state-of-the-art methods by a clear margin on the VQA v2 dataset.
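
Schematically, the multi-task setup adds an explanation loss to the answer loss over a shared backbone. The weighting and shapes below are illustrative; the paper's architecture and loss balance are not reproduced:

```python
import torch
import torch.nn.functional as F

# Schematic multi-task objective: answer classification plus per-token
# explanation generation, summed with an illustrative weight alpha.

def vqa_e_loss(answer_logits, answer_labels, expl_logits, expl_tokens,
               alpha=1.0, pad_id=0):
    """answer_logits: (B, num_answers); answer_labels: (B,);
    expl_logits: (B, T, vocab); expl_tokens: (B, T)."""
    ans = F.cross_entropy(answer_logits, answer_labels)
    expl = F.cross_entropy(expl_logits.flatten(0, 1), expl_tokens.flatten(),
                           ignore_index=pad_id)   # skip padding tokens
    return ans + alpha * expl
```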
