In this paper, we present two non-overlapping Schwarz algorithms for the hybridizable discontinuous Galerkin (HDG) method. The first algorithm is based on the Neumann-Neumann method. The second is an iterative algorithm that uses both trace and flux unknowns on the interfaces between subdomains. Numerical results are provided to verify the validity of our algorithms.
This work explores the relationship between the set of Wardrop equilibria~(WE) of a routing game, the total demand of that game, and the occurrence of Braess's paradox~(BP). The BP formalizes the counter-intuitive fact that for some networks, removing a path from the network decreases congestion at WE. For single origin-destination routing games with affine cost functions, the first part of this work provides tools for analyzing the evolution of the WE as the demand varies. It characterizes the piecewise affine nature of this dependence by showing that the set of directions in which the WE can vary in each piece is the solution of a variational inequality problem. In the process, we establish various properties of changes in the set of used and minimal-cost paths as demand varies. As a consequence of these characterizations, we derive a procedure to obtain the WE for all demands above a certain threshold. The second part of the paper deals with detecting the presence of BP in a network. We supply a number of sufficient conditions that reveal the presence of BP and that are computationally tractable. We also discuss a different perspective on BP, where we establish that a path causing BP at a particular demand must be strictly beneficial to the network at a lower demand. Several examples throughout this work illustrate and elaborate our findings.
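The paradox is easy to reproduce numerically on the classic four-node Braess network (a hedged sketch of the phenomenon, not the paper's general machinery): the WE minimizes the Beckmann potential over path flows, and at unit demand, deleting the zero-cost shortcut lowers the equilibrium travel cost from 2 to 1.5.

```python
def beckmann(x1, x2, x3):
    # Classic Braess network, demand = 1. Link loads and costs:
    # O->A: x1+x3 (cost f), A->D: x1 (cost 1), O->B: x2 (cost 1),
    # B->D: x2+x3 (cost f), shortcut A->B: x3 (cost 0).
    # Beckmann potential = sum over links of the integral of the cost.
    return 0.5 * (x1 + x3) ** 2 + x1 + x2 + 0.5 * (x2 + x3) ** 2

def wardrop_cost(with_shortcut=True, step=0.01):
    """Equilibrium travel cost at unit demand, by grid-minimizing the
    Beckmann potential over path flows (x1, x2, x3) on the simplex."""
    best = None
    n = int(round(1 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            x1, x2 = i * step, j * step
            x3 = 1 - x1 - x2
            if not with_shortcut and x3 > 1e-9:
                continue                 # shortcut removed: force x3 = 0
            phi = beckmann(x1, x2, x3)
            if best is None or phi < best[0]:
                best = (phi, x1, x2, x3)
    _, x1, x2, x3 = best
    costs = [(x1 + x3) + 1,              # path O->A->D
             1 + (x2 + x3),              # path O->B->D
             (x1 + x3) + (x2 + x3)]      # path O->A->B->D (shortcut)
    return min(c for c, f in zip(costs, (x1, x2, x3)) if f > 1e-9)
```

Here `wardrop_cost(True)` returns 2.0 and `wardrop_cost(False)` returns 1.5: removing the shortcut strictly improves the equilibrium, exactly the BP behavior the abstract describes.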
In this paper, we present a novel algorithm called the Hybrid Search algorithm that integrates Zermelo's navigation initial value problem with the Ferraro-Mart\'in de Diego-Almagro algorithm to find the optimal route for a vessel to reach its destination. Our algorithm is designed to work in both Euclidean and spherical spaces and utilizes a heuristic that allows the vessel to move forward while remaining within a predetermined search cone centred around the destination. This approach not only improves efficiency but also includes obstacle avoidance, making it well-suited for real-world applications. We evaluate the performance of the Hybrid Search algorithm on synthetic vector fields and real ocean-current data, demonstrating its effectiveness and efficiency.
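The cone heuristic can be sketched in a few lines (a planar illustration under our own assumptions; the function name and half-angle are not from the paper): a candidate move is kept only if its heading lies within a fixed half-angle of the bearing to the destination.

```python
import math

def in_search_cone(pos, dest, candidate, half_angle_deg=45.0):
    """Accept a candidate next position only if the direction pos->candidate
    lies inside the cone centred on the bearing pos->dest (planar sketch)."""
    bx, by = dest[0] - pos[0], dest[1] - pos[1]       # bearing to destination
    cx, cy = candidate[0] - pos[0], candidate[1] - pos[1]
    nb, nc = math.hypot(bx, by), math.hypot(cx, cy)
    if nb == 0.0 or nc == 0.0:
        return True                                    # degenerate: keep it
    cos_ang = max(-1.0, min(1.0, (bx * cx + by * cy) / (nb * nc)))
    return math.degrees(math.acos(cos_ang)) <= half_angle_deg
```

Pruning candidates this way keeps the expansion moving toward the goal, which is the efficiency gain the abstract attributes to the heuristic.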
In this report, we present a versatile and efficient preconditioned Anderson acceleration (PAA) method for fixed-point iterations. The proposed framework offers flexibility in balancing convergence rates (linear, super-linear, or quadratic) and computational costs related to the Jacobian matrix. Our approach recovers various fixed-point iteration techniques, including Picard, Newton, and quasi-Newton iterations. The PAA method can be interpreted as employing Anderson acceleration (AA) as its own preconditioner or as an accelerator for quasi-Newton methods when their convergence is insufficient. Adaptable to a wide range of problems with differing degrees of nonlinearity and complexity, the method achieves improved convergence rates and robustness by incorporating suitable preconditioners. We test multiple preconditioning strategies on various problems and investigate a delayed update strategy for preconditioners to further reduce the computational costs.
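For reference, here is a minimal sketch of plain (unpreconditioned) Anderson acceleration in the least-squares form of Walker and Ni, applied to the scalar fixed point $x = \cos x$; the paper's preconditioned variant and delayed-update strategy are not reproduced here.

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, maxit=100):
    """Anderson acceleration AA(m) for the fixed-point iteration x = g(x).

    gamma solves min ||f_k - dF gamma||_2 over the stored residual
    differences, and the next iterate is g(x_k) - dG gamma. With an empty
    history this reduces to a plain Picard step x_{k+1} = g(x_k).
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    gx = g(x)
    f = gx - x                              # residual g(x) - x
    G_hist, F_hist = [gx], [f]
    for _ in range(maxit):
        if np.linalg.norm(f) < tol:
            break
        if len(F_hist) > 1:
            dF = np.column_stack([F_hist[i + 1] - F_hist[i]
                                  for i in range(len(F_hist) - 1)])
            dG = np.column_stack([G_hist[i + 1] - G_hist[i]
                                  for i in range(len(G_hist) - 1)])
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
            x = gx - dG @ gamma             # accelerated step
        else:
            x = gx                          # Picard step to seed the history
        gx = g(x)
        f = gx - x
        G_hist.append(gx)
        F_hist.append(f)
        if len(F_hist) > m + 1:             # sliding window of m differences
            G_hist.pop(0)
            F_hist.pop(0)
    return x
```

For example, `anderson(np.cos, 1.0)` converges to the fixed point 0.739085... in a handful of iterations, versus dozens for plain Picard iteration.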
This paper focuses on the performance of equalizer zero-determinant (ZD) strategies in discounted repeated Stackelberg asymmetric games. In the leader-follower adversarial scenario, the strong Stackelberg equilibrium (SSE) derived from the opponent's best response (BR) is technically the optimal strategy for the leader. However, computing an SSE strategy may be difficult since it requires solving a mixed-integer program and has exponential complexity in the number of states. To this end, we propose to adopt an equalizer ZD strategy, which can unilaterally restrict the opponent's expected utility. We first study the existence of an equalizer ZD strategy in one-on-one settings and analyze an upper bound on its performance relative to the baseline SSE strategy. Then we turn to multi-player models in which one player adopts an equalizer ZD strategy. We derive bounds on the sum of the opponents' utilities and compare them with the SSE strategy. Finally, we present simulations on unmanned aerial vehicles (UAVs) and moving target defense (MTD) to verify the effectiveness of our approach.
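The equalizing property is easy to verify numerically in the simplest one-on-one setting. The sketch below uses the classic Press-Dyson construction in the iterated prisoner's dilemma (our illustrative payoffs and parameters, not the paper's Stackelberg model): a memory-one strategy $p$ with $\tilde{p} = \beta S_Y + \gamma\mathbf{1}$ pins the opponent's long-run payoff at $-\gamma/\beta$ regardless of the opponent's strategy $q$.

```python
import numpy as np

R, S, T, P = 3, 0, 5, 1                 # prisoner's dilemma payoffs
S_Y = np.array([R, T, S, P])            # opponent payoff in states (cc, cd, dc, dd)

# Equalizer ZD strategy: p_i = 1 + beta*S_Y[i] + gamma for i in {0,1},
# p_i = beta*S_Y[i] + gamma for i in {2,3}; beta = -0.2, gamma = 0.6
# pins the opponent's payoff at -gamma/beta = 3.
beta, gamma = -0.2, 0.6
p = np.array([1 + beta * R + gamma, 1 + beta * T + gamma,
              beta * S + gamma, beta * P + gamma])   # = [1.0, 0.6, 0.6, 0.4]

def opponent_payoff(p, q, iters=5000):
    """Stationary expected payoff of the opponent (memory-one strategy q)
    against p, via power iteration on the 4-state Markov chain."""
    qy = np.array([q[0], q[2], q[1], q[3]])   # swap cd/dc: q is from Y's view
    M = np.zeros((4, 4))
    for s in range(4):
        M[s] = [p[s] * qy[s], p[s] * (1 - qy[s]),
                (1 - p[s]) * qy[s], (1 - p[s]) * (1 - qy[s])]
    v = np.full(4, 0.25)
    for _ in range(iters):
        v = v @ M                             # converge to stationary dist.
    return float(v @ S_Y)
```

Running `opponent_payoff(p, q)` for any interior `q` returns 3.0: the opponent's utility is fixed unilaterally, which is the restriction the abstract exploits in place of the costly SSE computation.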
This paper develops a weak Galerkin (WG) finite element method of arbitrary order for the steady incompressible Magnetohydrodynamics equations. The WG scheme uses piecewise polynomials of degrees $k$ $(k\geq 1)$, $k$, $k-1$, and $k-1$ respectively for the approximations of the velocity, the magnetic field, the pressure, and the magnetic pseudo-pressure in the interior of elements, and uses piecewise polynomials of degree $k$ for their numerical traces on the interfaces of elements. The method is shown to yield globally divergence-free approximations of the velocity and magnetic fields. We give existence and uniqueness results for the discrete scheme and derive optimal a priori error estimates. We also present a convergent linearized iterative algorithm. Numerical experiments are provided to verify the obtained theoretical results.
A comprehensive picture of three Bethe-Kikuchi variational principles, including their relationship to belief propagation (BP) algorithms on hypergraphs, is given. The structure of BP equations is generalized to define continuous-time diffusions, solving localized versions of the max-entropy principle (A), the variational free energy principle (B), and a less usual equilibrium free energy principle (C), Legendre dual to A. Both critical points of Bethe-Kikuchi functionals and stationary beliefs are shown to lie at the non-linear intersection of two constraint surfaces, enforcing energy conservation and marginal consistency respectively. The hypersurface of singular beliefs, across which equilibria become unstable as the constraint surfaces meet tangentially, is described by polynomial equations in the convex polytope of consistent beliefs. These polynomial equations are given by a loop series expansion for graphs of binary variables.
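For intuition on the marginal-consistency side, here is a minimal sum-product computation on a three-variable chain of binary variables (our toy model, not the paper's hypergraph diffusions). On a tree, the BP messages reproduce the exact marginals, which is why the Bethe critical points are exact there.

```python
import numpy as np

# Chain MRF x1 - x2 - x3 over binary variables:
# p(x) proportional to phi1(x1) * psi(x1, x2) * psi(x2, x3)
phi1 = np.array([2.0, 1.0])             # unary potential on x1
psi = np.array([[2.0, 1.0],
                [1.0, 2.0]])            # shared pairwise potential

def bp_marginal_x1():
    m32 = psi @ np.ones(2)              # message x3 -> x2: sum_x3 psi(x2, x3)
    m21 = psi @ m32                     # message x2 -> x1: sum_x2 psi(x1, x2) m32(x2)
    b1 = phi1 * m21                     # belief at x1 = local factor * message
    return b1 / b1.sum()

def brute_marginal_x1():
    p = np.zeros(2)
    for x1 in range(2):
        for x2 in range(2):
            for x3 in range(2):
                p[x1] += phi1[x1] * psi[x1, x2] * psi[x2, x3]
    return p / p.sum()
```

Both functions return the marginal $(2/3, 1/3)$; on loopy hypergraphs this agreement breaks down, which is where the singular-belief analysis above becomes relevant.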
We introduce BSDetector, a method for detecting bad and speculative answers from a pretrained Large Language Model by estimating a numeric confidence score for any output it generates. Our uncertainty quantification technique works for any LLM accessible only via a black-box API, whose training data remains unknown. By expending a bit of extra computation, users of any LLM API can now get the same response as they would ordinarily, as well as a confidence estimate that cautions when not to trust this response. Experiments on both closed- and open-form question-answering benchmarks reveal that BSDetector more accurately identifies incorrect LLM responses than alternative uncertainty estimation procedures (for both GPT-3 and ChatGPT). By sampling multiple responses from the LLM and considering the one with the highest confidence score, we can additionally obtain more accurate responses from the same LLM, without any extra training steps. In applications involving automated evaluation with LLMs, accounting for our confidence scores leads to more reliable evaluation in both human-in-the-loop and fully-automated settings (across both GPT-3.5 and GPT-4).
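The consistency component of such a score can be sketched as follows. This is a simplified stand-in: BSDetector also incorporates the model's own self-reflection, which is omitted here, and `sample_llm` is a placeholder for any black-box API call.

```python
from collections import Counter

def answer_with_confidence(sample_llm, prompt, k=10):
    """Sample k answers from a black-box LLM and score the modal answer
    by its agreement rate (observed-consistency sketch only)."""
    answers = [sample_llm(prompt) for _ in range(k)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / k          # (answer, confidence in [0, 1])
```

A caller would then return `best` as usual but flag the response when the confidence falls below a threshold, matching the "same response plus a caution" usage described above.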
Classical forgery attacks against Offset Two-round (OTR) structures require harsh conditions, such as knowledge of certain plaintext-ciphertext pairs, and their success probability is not high. To address these problems, a quantum forgery attack on the OTR structure using Simon's algorithm is proposed. The attacker intercepts a ciphertext-tag pair $(C,T)$ between the sender and receiver, and Simon's algorithm is used to find the period of the tag-generation function in OTR; we can then successfully forge a new ciphertext $C'$ ($C'\ne C$) for the intercepted tag $T$. For a variant of the OTR structure (the Pr{\o}st-OTR-Even-Mansour structure), a universal forgery attack is proposed, in which it is easy to generate the correct tag of any given message if the attacker is allowed to change a single block in it. The attack first obtains the secret parameter $L$ using Simon's algorithm, and then uses $L$ to find the keys $k_1$ and $k_2$, so that the attacker can forge arbitrary modified messages. Only a few plaintext blocks are needed to obtain the keys and forge any message. Performance analysis shows that the query complexity of our attack is $O(n)$, and its success probability is very close to 1.
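Concretely, the "period" that Simon's algorithm extracts is the hidden string $s$ with $f(x) = f(x \oplus s)$ for all $x$. The exhaustive search below (our illustration, exponential in $n$, whereas Simon's algorithm needs only $O(n)$ quantum queries) makes that object concrete:

```python
def find_simon_period(f, n):
    """Brute-force the hidden XOR period s of a 2-to-1 function f on n bits,
    i.e. the nonzero s with f(x) == f(x ^ s) for all x. Exponential time:
    this only illustrates what Simon's algorithm recovers in O(n) queries."""
    seen = {}
    for x in range(2 ** n):
        y = f(x)
        if y in seen:
            return seen[y] ^ x      # a collision x, x' satisfies x ^ x' = s
        seen[y] = x
    return 0                        # f is injective: no nonzero period
```

In the attack, `f` is the tag-generation function restricted to a chosen block, and recovering its period is what enables the forged $(C', T)$ pair.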
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of an explainee's inferences conditioned on the explanation. Our theory posits that absent an explanation, humans expect the AI to make similar decisions to themselves, and that they interpret an explanation by comparison to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
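The similarity-space comparison can be sketched as follows. This is a generic combination of Shepard's exponential law with a Luce choice rule; the paper's exact parameterization is not reproduced, and all names here are illustrative.

```python
import numpy as np

def shepard_similarity(x, y, sensitivity=1.0):
    """Shepard's universal law: generalization (similarity) decays
    exponentially with psychological distance between stimuli."""
    d = np.abs(np.asarray(x, float) - np.asarray(y, float)).sum()
    return float(np.exp(-sensitivity * d))

def predict_agreement(ai_map, own_map, alt_maps, sensitivity=1.0):
    """Probability the explainee predicts the AI's label: compare the AI's
    saliency map to the map the explainee would give for the AI's label
    (own_map) and for each alternative label, then normalize by a Luce
    choice rule over the similarities."""
    sims = [shepard_similarity(ai_map, m, sensitivity)
            for m in [own_map] + alt_maps]
    return sims[0] / sum(sims)
```

Under this sketch, the closer the AI's saliency map is to the one the participant would produce, the higher the predicted probability that the participant expects the AI's decision, which is the qualitative behavior the user study tests.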
* "Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs", A. Jolicoeur-Martineau, I. Mitliagkas [Mila] (2019)