Judgment aggregation is a framework for aggregating individual opinions on multiple, logically connected issues into a collective outcome. These opinions are cast by judges, who can be, for example, referees, experts, advisors, or jurors, depending on the application and context. The framework is open to manipulative attacks such as \textsc{Manipulation}, where judges cast their judgments strategically. Previous work has shown that most computational problems corresponding to these manipulative attacks are \NP-hard. This desired computational barrier, however, often relies on formulas that are either of unbounded size or of complex structure. We revisit the computational complexity of various \textsc{Manipulation} and \textsc{Bribery} problems in premise-based judgment aggregation, now focusing on simple and realistic formulas. We restrict all formulas to be clauses that are (positive) monotone, Horn clauses, or of bounded length. For basic variants of \textsc{Manipulation}, we show that these restrictions make several variants, which are \NP-hard in general, polynomial-time solvable. Moreover, we provide a P vs.\ NP dichotomy for a large class of clause restrictions (generalizing monotone and Horn clauses) by showing a close relationship between variants of \textsc{Manipulation} and variants of \textsc{Satisfiability}. For Hamming-distance-based \textsc{Manipulation}, we show that \NP-hardness holds even for positive monotone clauses of length three, but the problem becomes polynomial-time solvable for positive monotone clauses of length two. For \textsc{Bribery}, we show that \NP-hardness holds even for positive monotone clauses of length two, but the problem becomes polynomial-time solvable for the same clause set if the budget is constant.
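To illustrate one classical source of such tractability (this is the textbook unit-propagation procedure for Horn satisfiability, given only for orientation; it is not the manipulation algorithm of the paper), Horn formulas can be decided in polynomial time as follows:
\begin{verbatim}
def horn_sat(clauses):
    """Decide satisfiability of a Horn formula.

    Each clause is a list of integer literals: v means variable v,
    -v means its negation; a Horn clause has at most one positive literal.
    Returns the set of variables forced to True, or None if unsatisfiable.
    """
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            positives = [l for l in clause if l > 0]
            body = [-l for l in clause if l < 0]   # variables that must be True to fire the clause
            if all(v in true_vars for v in body):
                if not positives:                  # body satisfied, no head: contradiction
                    return None
                head = positives[0]
                if head not in true_vars:
                    true_vars.add(head)            # unit-propagation step
                    changed = True
    return true_vars

# Example: (x1) AND (NOT x1 OR x2) AND (NOT x2 OR NOT x3)
# is satisfiable with x1 = x2 = True and x3 = False.
print(horn_sat([[1], [-1, 2], [-2, -3]]))   # -> {1, 2}
\end{verbatim}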
The round complexity of interactive proof systems is a key question of practical and theoretical relevance in complexity theory and cryptography. Moreover, results such as QIP = QIP(3) (STOC'00) show that quantum resources significantly help in this task. In this work, we initiate the study of round compression of protocols in the bounded quantum storage model (BQSM). In this model, the malicious parties have bounded quantum memory and cannot store all the qubits that are transmitted in the protocol. Our main results in this setting are the following: 1. There is a non-interactive (statistical) witness-indistinguishable proof for any language in NP (and even QMA) in the BQSM in the plain model. We note that in this protocol, only the memory of the verifier is bounded. 2. Any classical proof system can be compressed into a two-message quantum proof system in the BQSM. Moreover, if the original proof system is zero-knowledge, the quantum protocol is zero-knowledge as well. In this result, we assume that the prover has bounded memory. Finally, we give evidence towards the "tightness" of our results. First, we show that NIZK in the plain model against BQS adversaries is unlikely to be achievable with standard techniques. Second, we prove that, without the BQS restriction, there is no two-message zero-knowledge quantum interactive proof, even under computational assumptions.
Principal stratification provides a causal inference framework that allows adjustment for confounded post-treatment variables when comparing treatments. Although the literature has focused mainly on binary post-treatment variables, there is a growing interest in principal stratification involving continuous post-treatment variables. However, characterizing the latent principal strata with a continuous post-treatment presents a significant challenge, which is further complicated in observational studies where the treatment is not randomized. In this paper, we introduce the Confounders-Aware SHared atoms BAyesian mixture (CASBAH), a novel approach for principal stratification with continuous post-treatment variables that can be directly applied to observational studies. CASBAH leverages a dependent Dirichlet process, utilizing shared atoms across treatment levels, to effectively control for measured confounders and facilitate information sharing between treatment groups in the identification of principal strata membership. CASBAH also offers a comprehensive quantification of uncertainty surrounding the membership of the principal strata. Through Monte Carlo simulations, we show that the proposed methodology has excellent performance in characterizing the latent principal strata and estimating the effects of treatment on post-treatment variables and outcomes. Finally, CASBAH is applied to a case study in which we estimate the causal effects of US national air quality regulations on pollution levels and health outcomes.
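As a rough illustration of the shared-atoms idea (a simplified toy sketch, not the CASBAH model itself; the atoms, weights, and noise level below are invented for the example), both treatment arms draw from a common set of atoms, but with treatment-specific mixture weights:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Shared atoms (e.g., cluster-specific means of the post-treatment variable);
# the values are purely illustrative.
atoms = np.array([-1.0, 0.5, 2.0])

# Treatment-specific weights over the SAME atoms: information is shared
# across treatment levels because both arms mix the common atoms.
weights = {0: np.array([0.6, 0.3, 0.1]),   # control
           1: np.array([0.2, 0.3, 0.5])}   # treated

def sample_post_treatment(z, n, sigma=0.3):
    """Draw post-treatment values for treatment arm z from the shared-atom mixture."""
    labels = rng.choice(len(atoms), size=n, p=weights[z])
    return atoms[labels] + sigma * rng.normal(size=n), labels

y0, s0 = sample_post_treatment(0, 5)
y1, s1 = sample_post_treatment(1, 5)
print(s0, s1)  # cluster labels play the role of latent strata membership
\end{verbatim}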
While a number of promising uncertainty quantification methods have been proposed to address prevailing shortcomings of deep neural networks, such as overconfidence and lack of explainability, quantifying predictive uncertainties in the context of joint semantic segmentation and monocular depth estimation has not been explored yet. Since many real-world applications are multi-modal in nature and, hence, have the potential to benefit from multi-task learning, this is a substantial gap in the current literature. To this end, we conduct a comprehensive series of experiments to study how multi-task learning influences the quality of uncertainty estimates in comparison to solving both tasks separately.
Lane detection is a fundamental task in autonomous driving. While the problem is typically formulated as the detection of continuous boundaries, we study the problem of detecting lane boundaries that are sparsely marked by 2D points with many false positives. This problem arises in the Formula Student Driverless (FSD) competition and is challenging due to its inherent ambiguity. Previous methods are inefficient and unable to find long-horizon solutions. We propose a deterministic algorithm called CLC that uses backtracking graph search with a learned likelihood function to overcome these limitations. We impose geometric constraints on the lane candidates to guarantee a geometrically sound lane. Our exhaustive search finds the global optimum in 45% of instances, and the algorithm is robust to up to 50% false positives. It runs in less than 15 ms on a single CPU core, meeting the low-latency requirements of autonomous racing. We extensively evaluate our method on real data and realistic racetrack layouts, and show that it outperforms the state of the art by detecting long lanes over 100 m with few (0.6%) critical failures. This allows our autonomous racecar to drive close to its physical limits on a previously unknown racetrack without being limited by perception. We release our dataset with realistic Formula Student racetracks to enable further research.
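A highly simplified sketch of a backtracking search with geometric pruning of this flavour (the scoring function and the step-length and curvature thresholds are placeholders, not the learned likelihood or the constraints used by CLC):
\begin{verbatim}
import math

def angle(p, q, r):
    """Turning angle (radians) at q for the polyline p -> q -> r."""
    a1 = math.atan2(q[1] - p[1], q[0] - p[0])
    a2 = math.atan2(r[1] - q[1], r[0] - q[0])
    return abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi)

def extend(lane, points, score, best, max_step=5.0, max_turn=0.6):
    """Depth-first backtracking over candidate points with geometric pruning."""
    if score(lane) > score(best[0]):
        best[0] = list(lane)                       # keep the best lane found so far
    last = lane[-1]
    for p in points:
        if p in lane:
            continue
        if math.dist(last, p) > max_step:          # step-length constraint
            continue
        if len(lane) >= 2 and angle(lane[-2], last, p) > max_turn:  # curvature constraint
            continue
        lane.append(p)
        extend(lane, points, score, best, max_step, max_turn)
        lane.pop()                                 # backtrack

# Toy usage: score = number of points; a real system would use a learned likelihood.
points = [(0, 0), (1, 0.1), (2, -0.1), (3, 0.2), (10, 10)]
best = [[points[0]]]
extend([points[0]], points, lambda l: len(l), best)
print(best[0])
\end{verbatim}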
Scalable surrogate models enable efficient emulation of computer models (or simulators), particularly when dealing with large ensembles of runs. While Gaussian Process (GP) models are commonly employed for emulation, they face limitations in scaling to truly large datasets. Furthermore, when dealing with dense functional output, such as spatial or time-series data, additional complexities arise, requiring careful handling to ensure fast emulation. This work presents a highly scalable emulator for functional data, building upon the works of Kennedy and O'Hagan (2001) and Higdon et al. (2008), while incorporating the local approximate Gaussian Process framework proposed by Gramacy and Apley (2015). The emulator utilizes global GP lengthscale parameter estimates to scale the input space, leading to a substantial improvement in prediction speed. We demonstrate that our fast approximation-based emulator can serve as a viable alternative to the methods outlined in Higdon et al. (2008) for functional response, while drastically reducing computational costs. The proposed emulator is applied to quickly calibrate the multiphysics continuum hydrodynamics simulator FLAG with a large ensemble of 20000 runs. The methods presented are implemented in the R package FlaGP.
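A toy sketch of the underlying idea, i.e., pre-scaling the inputs by global lengthscale estimates and then fitting a small GP on the nearest neighbors of each prediction location (an illustration only, not the FlaGP implementation; the data and lengthscales are invented):
\begin{verbatim}
import numpy as np

def local_gp_predict(X, y, xstar, lengthscales, k=30, nugget=1e-6):
    """Predictive mean at xstar from a GP fit only to the k nearest neighbors,
    after scaling each input dimension by a globally estimated lengthscale."""
    Xs = X / lengthscales                      # scaled inputs -> unit-lengthscale kernel
    xs = xstar / lengthscales
    idx = np.argsort(np.sum((Xs - xs) ** 2, axis=1))[:k]
    Xn, yn = Xs[idx], y[idx]
    def kern(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-0.5 * d2)
    K = kern(Xn, Xn) + nugget * np.eye(k)
    kstar = kern(xs[None, :], Xn)              # 1 x k cross-covariances
    return float(kstar @ np.linalg.solve(K, yn))

# Toy usage with invented data and lengthscales.
rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 2))
y = np.sin(4 * X[:, 0]) + X[:, 1] ** 2
print(local_gp_predict(X, y, np.array([0.3, 0.7]), np.array([0.25, 0.5])))
\end{verbatim}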
We introduce a new concept of the locally conservative flux and investigate its relationship with the compatible discretization pioneered by Dawson, Sun, and Wheeler [11]. We then demonstrate how the new concept of the locally conservative flux can play a crucial role in obtaining the L2-norm stability of the discontinuous Galerkin finite element scheme for the transport in the coupled system with flow. In particular, the lowest-order discontinuous Galerkin finite element method for the transport is shown to inherit the positivity and maximum principle when the locally conservative flux is used, which has been elusive for many years in the literature. The theoretical results established in this paper are based on the equivalence between the Lesaint-Raviart discontinuous Galerkin scheme and the Brezzi-Marini-Suli discontinuous Galerkin scheme for the linear hyperbolic system, as well as on the relationship between the Lesaint-Raviart discontinuous Galerkin scheme and the characteristic method along the streamline. Sample numerical experiments have also been performed to justify our theoretical findings.
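For orientation, and under the usual flow convention with source term $f$ (an assumption made here; the paper's precise definition in the coupled flow-transport setting may differ), a numerical flux $\hat{u}_h$ is called locally conservative if it balances the source element by element:
\[
\int_{\partial K} \hat{u}_h \cdot n \, ds \;=\; \int_K f \, dx \qquad \text{for every mesh element } K.
\]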
A proof of quantumness is an efficiently verifiable interactive test that an efficient quantum computer can pass, but all efficient classical computers cannot (under some cryptographic assumption). Such protocols play a crucial role in the certification of quantum devices. Existing single-round protocols (like asking the quantum computer to factor a large number) require large quantum circuits, whereas multi-round ones use smaller circuits but require experimentally challenging mid-circuit measurements. As such, current proofs of quantumness are out of reach for near-term devices. In this work, we construct efficient single-round proofs of quantumness based on existing knowledge assumptions. While knowledge assumptions have not been previously considered in this context, we show that they provide a natural basis for separating classical and quantum computation. Specifically, we show that multi-round protocols based on Decisional Diffie-Hellman (DDH) or Learning With Errors (LWE) can be "compiled" into single-round protocols using a knowledge-of-exponent assumption or knowledge-of-lattice-point assumption, respectively. We also prove an adaptive hardcore-bit statement for a family of claw-free functions based on DDH, which might be of independent interest. Previous approaches to constructing single-round protocols relied on the random oracle model and thus incurred the overhead associated with instantiating the oracle with a cryptographic hash function. In contrast, our protocols have the same resource requirements as their multi-round counterparts without necessitating mid-circuit measurements, making them, arguably, the most efficient single-round proofs of quantumness to date. Our work also helps in understanding the interplay between black-box/white-box reductions and cryptographic assumptions in the design of proofs of quantumness.
We present a logical framework that enables us to define a formal theory of computational trust in which this notion is analysed in terms of epistemic attitudes towards the possible objects of trust and in relation to existing evidence in favour of the trustworthiness of these objects. The framework is based on a quantified epistemic and justification logic featuring a non-standard handling of identities. Thus, the theory is able to account for the hyperintensional nature of computational trust. We present a proof system and a frame semantics for the logic, prove soundness and completeness results, and introduce the syntactic machinery required to define a theory of trust.
We conduct a systematic study of the approximation properties of the Transformer for sequence modeling with long, sparse, and complicated memory. We investigate the mechanisms through which different components of the Transformer, such as the dot-product self-attention, positional encoding, and feed-forward layer, affect its expressive power, and we study their combined effects through establishing explicit approximation rates. Our study reveals the roles of critical parameters in the Transformer, such as the number of layers and the number of attention heads. These theoretical insights are validated experimentally and offer natural suggestions for alternative architectures.
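For reference, the dot-product self-attention component analysed here is the standard map
\[
\mathrm{Attention}(Q,K,V) \;=\; \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V,
\]
where $Q$, $K$, and $V$ are the query, key, and value matrices and $d_k$ is the key dimension; the approximation rates quantify how compositions of this map with positional encodings and feed-forward layers approximate the target sequence relationships.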
In knowledge graph construction, a challenging issue is how to extract complex (e.g., overlapping) entities and relationships from a small amount of unstructured historical data. Traditional pipeline methods divide the extraction into two separate subtasks, which misses the potential interaction between the subtasks and may lead to error propagation. In this work, we propose an effective cascade dual-decoder method to extract overlapping relational triples, which includes a text-specific relation decoder and a relation-corresponded entity decoder. Our approach is straightforward: the text-specific relation decoder detects relations from a sentence at the text level, that is, according to the semantic information of the whole sentence. For each extracted relation, which has a trainable embedding, the relation-corresponded entity decoder detects the corresponding head and tail entities using a span-based tagging scheme. In this way, the overlapping triple problem can be tackled naturally. We conducted experiments on a real-world open-pit mine dataset and two public datasets to verify the method's generalizability. The experimental results demonstrate the effectiveness and competitiveness of our proposed method, which achieves better F1 scores under strict evaluation metrics. Our implementation is available at //github.com/prastunlp/DualDec.
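A minimal sketch of the cascade structure described above (the encoder is omitted and all module names, dimensions, and thresholds are placeholders invented for illustration; see the linked repository for the actual implementation):
\begin{verbatim}
import torch
import torch.nn as nn

class CascadeDualDecoder(nn.Module):
    """Toy cascade: first score relations from the sentence, then tag
    head/tail entity spans conditioned on each predicted relation."""
    def __init__(self, hidden=768, num_relations=24):
        super().__init__()
        self.rel_embed = nn.Embedding(num_relations, hidden)   # trainable relation embeddings
        self.rel_decoder = nn.Linear(hidden, num_relations)    # text-level relation detection
        self.span_tagger = nn.Linear(2 * hidden, 4)            # head-start, head-end, tail-start, tail-end

    def forward(self, token_states):                           # (batch, seq_len, hidden) from an encoder
        sent = token_states.mean(dim=1)                        # sentence-level representation
        rel_logits = self.rel_decoder(sent)                    # which relations are expressed
        rels = (torch.sigmoid(rel_logits) > 0.5).nonzero(as_tuple=False)
        triples = []
        for b, r in rels.tolist():
            rel_vec = self.rel_embed.weight[r].expand_as(token_states[b])
            feats = torch.cat([token_states[b], rel_vec], dim=-1)
            span_logits = self.span_tagger(feats)              # per-token span scores for this relation
            triples.append((b, r, span_logits))
        return rel_logits, triples

# Usage with random encoder states, just to show the shapes.
model = CascadeDualDecoder()
rel_logits, triples = model(torch.randn(2, 16, 768))
print(rel_logits.shape, len(triples))
\end{verbatim}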