A $1-N$ generalized Stackelberg game (single-leader multi-follower game) couples hierarchical interaction between a leader and followers with simultaneous interaction among the followers. Obtaining the leader's optimal strategy is generally challenging because of these intertwined interactions. Here, we propose a general methodology for finding a generalized Stackelberg equilibrium of a $1-N$ generalized Stackelberg game. Specifically, we first provide conditions under which a generalized Stackelberg equilibrium always exists, using the variational equilibrium concept. Next, to find an equilibrium in polynomial time, we transform the $1-N$ generalized Stackelberg game into a $1-1$ Stackelberg game whose Stackelberg equilibrium is identical to that of the original game. Finally, we propose an effective computation procedure, based on the projected implicit gradient descent algorithm, for finding a Stackelberg equilibrium of the transformed $1-1$ Stackelberg game. We validate the proposed approach on two problems of deriving operating strategies for electric vehicle (EV) charging stations: (1) optimizing a one-time charging price, in which a platform operator sets the price of electricity and EV users choose how much to charge to maximize their satisfaction; and (2) determining spatially varying charging prices that optimally balance demand and supply across all charging stations.
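To make the projected implicit gradient descent step concrete, here is a minimal sketch on a toy $1-1$ pricing game; the quadratic follower utility, parameter values, and price bounds are illustrative assumptions, not the paper's model. Since the follower's best response has a closed form, the leader can differentiate through it (the implicit gradient) and project each iterate back onto the feasible price interval.

```python
import numpy as np

# Toy 1-1 Stackelberg game: a leader sets a charging price p in [0, p_max];
# one aggregate follower buys x*(p) = argmax_x (a*x - 0.5*b*x**2 - p*x),
# which is max(0, (a - p) / b) in closed form.
a, b, p_max = 10.0, 2.0, 8.0  # assumed utility parameters and price cap

def best_response(p):
    return max(0.0, (a - p) / b)

def leader_grad(p):
    # d/dp [p * x*(p)] = x*(p) + p * dx*/dp, where the implicit gradient
    # of the follower's solution is dx*/dp = -1/b on the interior.
    x = best_response(p)
    dxdp = -1.0 / b if x > 0.0 else 0.0
    return x + p * dxdp

p, step = 1.0, 0.1
for _ in range(200):
    p = float(np.clip(p + step * leader_grad(p), 0.0, p_max))  # projection

print(f"price p = {p:.3f}, demand x = {best_response(p):.3f}")
# the analytic optimum for this toy instance is p = a/2 = 5.0
```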
We prove that an $m$ out of $n$ bootstrap procedure for Chatterjee's rank correlation is consistent whenever asymptotic normality of Chatterjee's rank correlation can be established. In particular, we prove that the $m$ out of $n$ bootstrap works for continuous data as well as for discrete, independent data; furthermore, simulations indicate that it also performs well for discrete, dependent data and that it outperforms alternative estimation methods.
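As a sketch of how such a procedure can be implemented; the subsample size $m = \lfloor n^{2/3}\rfloor$ and sampling without replacement (which avoids introducing ties into continuous data) are illustrative choices, not the paper's prescriptions.

```python
import numpy as np

def chatterjee_xi(x, y):
    # Chatterjee's rank correlation (no-ties formula for continuous data):
    # sort the pairs by x, rank the y's, and sum |r_{i+1} - r_i|.
    n = len(x)
    r = np.argsort(np.argsort(y[np.argsort(x)])) + 1
    return 1.0 - 3.0 * np.abs(np.diff(r)).sum() / (n * n - 1)

def m_out_of_n_ci(x, y, alpha=0.05, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    m = int(n ** (2 / 3))
    xi_hat = chatterjee_xi(x, y)
    # sqrt(m) * (xi*_m - xi_n) mimics the law of sqrt(n) * (xi_n - xi)
    t = np.empty(B)
    for i in range(B):
        idx = rng.choice(n, size=m, replace=False)
        t[i] = np.sqrt(m) * (chatterjee_xi(x[idx], y[idx]) - xi_hat)
    q_lo, q_hi = np.quantile(t, [alpha / 2, 1 - alpha / 2])
    return xi_hat - q_hi / np.sqrt(n), xi_hat - q_lo / np.sqrt(n)

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.sin(x) + 0.1 * rng.normal(size=500)
print(chatterjee_xi(x, y), m_out_of_n_ci(x, y))
```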
In this note, we first introduce a new problem called the longest common subsequence and substring problem. Let $X$ and $Y$ be two strings over an alphabet $\Sigma$. The longest common subsequence and substring problem for $X$ and $Y$ is to find the longest string which is a subsequence of $X$ and a substring of $Y$. We propose an algorithm to solve the problem.
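One natural $O(|X||Y|)$ dynamic program for this problem is sketched below (the note's own algorithm may differ): let $g[i][j]$ be the length of the longest suffix of $Y[1..j]$ that is a subsequence of $X[1..i]$; the answer is then the maximum of $g[|X|][j]$ over all $j$.

```python
def lcs_subseq_substring(X, Y):
    m, n = len(X), len(Y)
    g = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                g[i][j] = g[i - 1][j - 1] + 1  # match Y[j-1] with X[i-1]
            else:
                g[i][j] = g[i - 1][j]  # Y[j-1] must be matched before X[i-1]
    best_len, best_end = max((g[m][j], j) for j in range(n + 1))
    return Y[best_end - best_len:best_end]

print(lcs_subseq_substring("abcbdab", "bdcaba"))  # -> "aba"
```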
We prove a non-asymptotic distribution-independent lower bound for the expected mean squared generalization error caused by label noise in ridgeless linear regression. Our lower bound generalizes a similar known result to the overparameterized (interpolating) regime. In contrast to most previous works, our analysis applies to a broad class of input distributions with almost surely full-rank feature matrices, which allows us to cover various types of deterministic or random feature maps. Our lower bound is asymptotically sharp and implies that in the presence of label noise, ridgeless linear regression does not perform well around the interpolation threshold for any of these feature maps. We analyze the imposed assumptions in detail and provide a theory for analytic (random) feature maps. Using this theory, we can show that our assumptions are satisfied for input distributions with a (Lebesgue) density and feature maps given by random deep neural networks with analytic activation functions like sigmoid, tanh, softplus or GELU. As further examples, we show that feature maps from random Fourier features and polynomial kernels also satisfy our assumptions. We complement our theory with further experimental and analytic results.
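As a toy illustration of the phenomenon (isotropic Gaussian inputs with arbitrary dimension, noise level, and trial counts, far simpler than the class of distributions the paper covers), the test error of the min-norm interpolator blows up as the sample size crosses the number of features:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma, trials = 40, 0.5, 50
w_star = rng.normal(size=d) / np.sqrt(d)

for n in (10, 20, 30, 38, 40, 42, 50, 80, 160):
    errs = []
    for _ in range(trials):
        X = rng.normal(size=(n, d))
        y = X @ w_star + sigma * rng.normal(size=n)
        w_hat = np.linalg.pinv(X) @ y  # min-norm (ridgeless) solution
        X_test = rng.normal(size=(1000, d))
        errs.append(np.mean((X_test @ (w_hat - w_star)) ** 2))
    print(f"n = {n:4d}   test MSE = {np.mean(errs):8.3f}")
# the error peaks near the interpolation threshold n = d = 40
```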
An expurgating linear function (ELF) is a linear outer code that disallows the low-weight codewords of the inner code. ELFs can be designed either to maximize the minimum distance or to minimize the codeword error rate (CER) of the expurgated code. A list-decoding sieve of the inner code, starting from the noiseless all-zeros codeword, is an efficient way to identify ELFs that maximize the minimum distance of the expurgated code. For convolutional inner codes, this paper provides distance spectrum union (DSU) upper bounds on the CER of the concatenated code. For short codeword lengths, ELFs transform a good inner code into a great concatenated code. For a constant message size of $K=64$ bits or a constant codeword blocklength of $N=152$ bits, an ELF can reduce the gap at a CER of $10^{-6}$ between the DSU and random-coding union (RCU) bounds from over 1 dB for the inner code alone to 0.23 dB for the concatenated code. The DSU bounds can also characterize puncturing that mitigates the rate overhead of the ELF while maintaining the DSU-to-RCU gap. The reduction in the DSU-to-RCU gap comes with a minimal increase in average complexity at the desired CER operating points. List Viterbi decoding guided by the ELF approaches maximum likelihood (ML) decoding of the concatenated code, and the average list size converges to 1 as SNR increases. Thus, average complexity is similar to that of Viterbi decoding on the trellis of the inner code at high SNR. For rare large-magnitude noise events, which occur less often than the frame error rate (FER) of the inner code, a deep search in the list finds the ML codeword.
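For reference, a DSU-style bound is cheap to evaluate once a distance spectrum is known; the sketch below uses the standard union bound for ML decoding with BPSK on the AWGN channel, and the truncated spectrum is hypothetical rather than taken from the paper.

```python
import math

def q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def dsu_bound(spectrum, rate, ebno_db):
    # CER <= sum over distances d of A_d * Q(sqrt(2 * R * d * Eb/N0))
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(A_d * q_func(math.sqrt(2.0 * rate * d * ebno))
               for d, A_d in spectrum.items())

# hypothetical truncated distance spectrum {distance: multiplicity};
# in practice it would come from a list-decoding sieve of the code
spectrum = {10: 11, 12: 50, 14: 286}
for ebno_db in (2.0, 3.0, 4.0):
    print(f"Eb/N0 = {ebno_db} dB: CER <= {dsu_bound(spectrum, 64/152, ebno_db):.3e}")
```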
We develop a combinatorial theory of vector bundles with connection that is natural with respect to appropriate mappings of the base space. The base space is a simplicial complex, the main objects defined are discrete vector-bundle-valued cochains, and the main operators we develop are a discrete exterior covariant derivative and a combinatorial wedge product. Key properties of these operators are demonstrated, and we show that they are natural with respect to the mappings referred to above. We also formulate a well-behaved definition of metric-compatible discrete connections. A characterization is given for when a discrete vector bundle with connection is trivializable or has a trivial lower-rank subbundle. This machinery is used to define discrete curvature as linear maps, and we show that our formulation satisfies a discrete Bianchi identity. Recently, an alternative framework for discrete vector bundles with connection was given by Christiansen and Hu. We show that our framework reproduces and extends theirs when we apply our constructions to a subdivision of the base simplicial complex.
The optimal branch number of MDS matrices makes them a preferred choice for designing diffusion layers in many block ciphers and hash functions. However, in lightweight cryptography, Near-MDS (NMDS) matrices, whose branch number is sub-optimal, offer a better balance between security and efficiency as a diffusion layer than MDS matrices. In this paper, we study NMDS matrices, exploring their construction in both recursive and nonrecursive settings. We provide several theoretical results, explore the hardware efficiency of NMDS matrix constructions, and compare our results with those for MDS matrices wherever possible. For the recursive approach, we study DLS matrices and provide some theoretical results on their use; some of these results are used to restrict the search space of DLS matrices. We also show that over a field of characteristic 2, no sparse matrix of order $n\geq 4$ with a fixed XOR value of 1 can be NMDS when raised to a power $k\leq n$. Following that, we use generalized DLS (GDLS) matrices to provide some lightweight recursive NMDS matrices of several orders that outperform existing matrices in terms of hardware cost or number of iterations. For the nonrecursive construction of NMDS matrices, we study various structures, such as circulant and left-circulant matrices, and their generalizations, Toeplitz and Hankel matrices. In addition, we prove that Toeplitz matrices of order $n>4$ cannot be simultaneously NMDS and involutory over a field of characteristic 2. Finally, we use GDLS matrices to provide some lightweight NMDS matrices that can be computed in one clock cycle. The proposed nonrecursive NMDS matrices of orders 4, 5, 6, 7, and 8 can be implemented with 24, 50, 65, 96, and 108 XORs over $\mathbb{F}_{2^4}$, respectively.
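To make the branch number concrete, here is a brute-force check of the differential branch number over $\mathbb{F}_{2^4}$; the field modulus is one common choice, and the example matrix is the involutory almost-MDS matrix used in Midori.

```python
import itertools

POLY = 0b10011  # modulus x^4 + x + 1 for GF(2^4), one common choice

def gf_mul(a, b):
    # carry-less "Russian peasant" multiplication in GF(2^4)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= POLY
    return r

def diff_branch_number(M):
    # min over nonzero x of wt(x) + wt(M x); MDS matrices of order n attain
    # n + 1, while NMDS matrices have differential and linear branch number
    # n (the linear one is computed analogously from the transpose).
    n = len(M)
    best = n + 1
    for x in itertools.product(range(16), repeat=n):
        if not any(x):
            continue
        wt = sum(v != 0 for v in x)
        for row in M:
            acc = 0
            for m_ij, x_j in zip(row, x):
                acc ^= gf_mul(m_ij, x_j)
            wt += acc != 0
        best = min(best, wt)
    return best

M = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
print(diff_branch_number(M))  # 4 = n, the almost-MDS value for n = 4
```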
Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network framework in which the neurons are randomly connected; once initialized, the connection strengths remain unchanged. This simple structure makes RC a nonlinear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since its complex dynamics can be realized in various physical hardware implementations and biological devices, yielding greater flexibility and shorter computation times. Moreover, the neuronal responses triggered by the model's dynamics shed light on brain mechanisms that exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments spanning machine learning, physics, biology, and neuroscience. We first review the early RC models and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms with RC. Finally, we offer new perspectives on the development of RC, including reservoir design, the unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution.
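A minimal echo state network, the classic RC instance, fits in a few dozen lines; all hyperparameters below (reservoir size, leak rate, spectral radius, ridge penalty) are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200
leak, rho, ridge = 0.3, 0.9, 1e-6  # leak rate, spectral radius, readout penalty

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed random input weights
W = rng.normal(0.0, 1.0, (n_res, n_res))       # fixed random recurrent weights
W *= rho / np.abs(np.linalg.eigvals(W)).max()  # rescale to spectral radius rho

def run_reservoir(u):
    # u: (T, n_in) input sequence -> (T, n_res) reservoir states
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(W_in @ u_t + W @ x)
        states[t] = x
    return states

# toy task: one-step-ahead prediction of a noisy sine wave
T, washout = 2000, 100
u = np.sin(0.2 * np.arange(T))[:, None] + 0.01 * rng.normal(size=(T, 1))
X, Y = run_reservoir(u[:-1]), u[1:]

# only the linear readout is trained (ridge regression on the states)
A = X[washout:]
W_out = np.linalg.solve(A.T @ A + ridge * np.eye(n_res), A.T @ Y[washout:])
mse = np.mean((X[washout:] @ W_out - Y[washout:]) ** 2)
print(f"one-step prediction MSE: {mse:.2e}")
```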
For a long time, the von Neumann model has been a successful model of computation for sequential computing. Many models, including the dataflow model, have been developed in an attempt to replicate that success in parallel computing, without lasting impact. It is widely accepted that high-performance computation is better achieved with parallel architectures, which, given the ever-increasing need for high performance, are seen as the basis for future computational architectures. We describe a new model of parallel computation, the Arithmetic Deduction Model (AriDeM), which has some similarities with the von Neumann model. A theoretical evaluation comparing AriDeM with the predominant von Neumann model indicated that AriDeM utilizes resources more efficiently. In this paper, we conduct an empirical evaluation of the model; the results are consistent with the theoretical evaluation.
Neural machine translation (NMT) is a deep-learning-based approach to machine translation that yields state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and vanilla NMT therefore performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and monolingual corpora for in-domain translation, is thus very important for domain-specific translation. In this paper, we give a comprehensive survey of state-of-the-art domain adaptation techniques for NMT.
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as differences in image style, illumination, etc., and 2) the instance-level shift, such as differences in object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, at the image level and the instance level, to reduce the domain discrepancy. The two components are based on H-divergence theory and are implemented by learning domain classifiers in an adversarial training manner. The domain classifiers at the different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate the proposed approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate its effectiveness for robust object detection in various domain-shift scenarios.
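The adversarial domain classifiers can be implemented with a gradient reversal layer, one standard construction for this kind of training; the classifier architecture, feature dimension, and batch sizes below are assumptions for illustration, not the paper's exact design.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    # identity in the forward pass; negates (and scales) gradients in backward
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class DomainClassifier(nn.Module):
    # predicts source (0) vs. target (1) domain from pooled features
    def __init__(self, dim, lam=1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, feats):
        return self.net(GradReverse.apply(feats, self.lam))

# toy step: image- or instance-level features from the two domains (assumed
# to be produced by the detector backbone / ROI pooling)
clf, bce = DomainClassifier(dim=512), nn.BCEWithLogitsLoss()
feats_s, feats_t = torch.randn(8, 512), torch.randn(8, 512)
d_loss = bce(clf(feats_s), torch.zeros(8, 1)) + bce(clf(feats_t), torch.ones(8, 1))
d_loss.backward()  # reversed gradients push features toward domain invariance
# in training, d_loss would be added to the usual Faster R-CNN detection loss
```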