Since the seminal result of Karger, Motwani, and Sudan, algorithms for approximate 3-coloring have primarily centered around SDP-based rounding. However, it is likely that important combinatorial or algebraic insights are needed in order to break the $n^{o(1)}$ threshold. One way to develop new understanding in graph coloring is to study special subclasses of graphs. For instance, Blum studied the 3-coloring of random graphs, and Arora and Ge studied the 3-coloring of graphs with low threshold-rank. In this work, we study graphs that arise from a tensor product, which appear to be novel instances of the 3-coloring problem. We consider graphs of the form $H = (V,E)$ with $V = V(K_3 \times G)$ and $E = E(K_3 \times G) \setminus E'$, where $E' \subseteq E(K_3 \times G)$ is any edge set such that no vertex has more than an $\epsilon$ fraction of its edges in $E'$. We show that one can construct $\widetilde{H} = K_3 \times \widetilde{G}$ with $V(\widetilde{H}) = V(H)$ that is close to $H$: for arbitrary $G$, $\widetilde{H}$ satisfies $|E(H) \Delta E(\widetilde{H})| \leq O(\epsilon|E(H)|)$. Additionally, when $G$ is a mild expander, we provide a 3-coloring of $H$ in polynomial time. These results partially generalize an exact tensor factorization algorithm of Imrich. On the other hand, without any assumptions on $G$, we show that it is NP-hard to 3-color $H$.
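To make the construction concrete, here is a minimal sketch (assuming Python with networkx, whose tensor_product implements $K_3 \times G$; the factor graph below is an arbitrary stand-in): vertex $(i, v)$ of the product receives color $i$, which is proper because every product edge joins vertices with adjacent, hence distinct, first coordinates in $K_3$.

```python
# A minimal sketch (networkx assumed): the tensor product K_3 x G and its
# canonical 3-coloring, which colors vertex (i, v) with i.
import networkx as nx

G = nx.erdos_renyi_graph(50, 0.2, seed=0)    # an arbitrary factor graph G
K3 = nx.complete_graph(3)
H = nx.tensor_product(K3, G)                 # vertices are pairs (i, v)

# Proper: every edge ((i, v), (j, w)) of H requires (i, j) in E(K3), so i != j.
coloring = {(i, v): i for (i, v) in H.nodes}
assert all(coloring[u] != coloring[w] for u, w in H.edges)
```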
Goal representation affects the performance of Hierarchical Reinforcement Learning (HRL) algorithms, which decompose a complex learning problem into easier subtasks. Recent studies show that representations that preserve temporally abstract environment dynamics succeed on difficult problems and come with theoretical guarantees of optimality. These methods, however, cannot scale to tasks where the environment dynamics grow in complexity, i.e., where the temporally abstract transition relations depend on a larger number of variables. On the other hand, other efforts have tried to use spatial abstraction to mitigate these issues; their limitations include poor scalability to high-dimensional environments and dependence on prior knowledge. In this paper, we propose a novel three-layer HRL algorithm that introduces, at different levels of the hierarchy, both a spatial and a temporal goal abstraction. We provide a theoretical study of the regret bounds of the learned policies and evaluate the approach on complex continuous control tasks, demonstrating the effectiveness of the spatial and temporal abstractions it learns.
Stride determines the distance between adjacent filter positions as the filter moves across the input. A fixed stride can cause important information in the image to go uncaptured, so that it is unavailable for classification. Previous research therefore applied the DiffStride method, a strided convolution that learns its own stride values. The max-pooling downsampling method, in turn, suffers from severe quantization and a constraining lower bound on the amount of information preserved. Spectral pooling relaxes this lower bound by truncating the representation in the frequency domain. In this research, we propose a CNN model that combines a downsampling technique with strides learned by backpropagation and the spectral pooling technique. Together, DiffStride and spectral pooling are expected to retain most of the information contained in the image. In this study, we compare the hybrid method, a combined implementation of spectral pooling and DiffStride, against the baseline method, a DiffStride implementation on ResNet-18. The hybrid of DiffStride and spectral pooling improves accuracy over the DiffStride baseline by 0.0094. This shows that the hybrid method can retain most of the information by truncating the representation in the frequency domain while determining the stride by backpropagation.
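As an illustration of the frequency-domain truncation that spectral pooling performs, here is a minimal sketch (plain numpy, single-channel input; the function and variable names are ours, not from the paper):

```python
# A minimal sketch of spectral pooling: downsample a 2-D array by keeping
# only the low-frequency block of its Fourier transform.
import numpy as np

def spectral_pool(x, out_h, out_w):
    """Downsample x to (out_h, out_w) by frequency truncation."""
    F = np.fft.fftshift(np.fft.fft2(x))             # center low frequencies
    h, w = F.shape
    top, left = (h - out_h) // 2, (w - out_w) // 2
    F_crop = F[top:top + out_h, left:left + out_w]  # keep low-frequency block
    # Rescale so a constant image keeps its value after the size change.
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_crop))) * (out_h * out_w) / (h * w)

pooled = spectral_pool(np.random.rand(32, 32), 16, 16)
```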
Quasiperiodic systems are important space-filling ordered structures that exhibit neither decay nor translational invariance. Solving quasiperiodic systems accurately and efficiently is a great challenge. A useful approach, the projection method (PM) [J. Comput. Phys., 256: 428, 2014], has been proposed to compute quasiperiodic systems, and various studies have demonstrated that it is accurate and efficient. However, a theoretical analysis of the PM has been lacking. In this paper, we present a rigorous convergence analysis of the PM by establishing a mathematical framework relating quasiperiodic functions to their high-dimensional periodic parent functions. We also give a theoretical analysis of the quasiperiodic spectral method (QSM) based on this framework. The results show that the errors of both the PM and the QSM decay exponentially, and that the QSM (PM) generalizes the periodic Fourier spectral (pseudo-spectral) method. We then analyze the computational complexity of the PM and the QSM for calculating quasiperiodic systems: the PM can use the fast Fourier transform, while the QSM cannot. Moreover, we investigate the accuracy and efficiency of the PM, the QSM, and the periodic approximation method in solving the linear time-dependent quasiperiodic Schr\"{o}dinger equation.
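The lift underlying this framework can be sketched as follows (notation assumed for illustration; one common convention writes a $d$-dimensional quasiperiodic $f$ as the restriction of an $n$-dimensional periodic parent $F$, $n \geq d$, through a projection matrix $\mathbf{P} \in \mathbb{R}^{d \times n}$):

```latex
% Sketch of the periodic lift (notation assumed, one common convention):
% f quasiperiodic on R^d; F periodic parent on the n-torus; P a d x n matrix.
f(\mathbf{x}) = F(\mathbf{P}^{\mathsf{T}}\mathbf{x}),
\qquad
F(\mathbf{y}) = \sum_{\mathbf{k}\in\mathbb{Z}^{n}} \hat{F}_{\mathbf{k}}\,
  e^{\mathrm{i}\,\mathbf{k}\cdot\mathbf{y}}
\;\Longrightarrow\;
f(\mathbf{x}) = \sum_{\mathbf{k}\in\mathbb{Z}^{n}} \hat{F}_{\mathbf{k}}\,
  e^{\mathrm{i}\,(\mathbf{P}\mathbf{k})\cdot\mathbf{x}}.
```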
Verifiable credentials are a digital analogue of physical credentials. Their authenticity and integrity are protected by cryptographic techniques, and they can be presented to verifiers to reveal attributes, or even predicates over the attributes, included in the credential. One way to preserve privacy during presentation is to selectively disclose the attributes in a credential. In this paper we present the most widespread cryptographic mechanisms used to enable selective disclosure of attributes, identifying two categories: those based on hiding commitments (e.g., mDL, ISO/IEC 18013-5) and those based on non-interactive zero-knowledge proofs (e.g., BBS signatures). We also describe the cryptographic primitives used to design such mechanisms. We describe the design of the cryptographic mechanisms and compare their maturity in terms of standardization, cryptographic agility, and quantum safety. We then compare the features they support, focusing on the unlinkability of presentations, the ability to create predicate proofs, and support for threshold credential issuance. Finally, we perform an experimental evaluation based on the open-source Rust implementations that we consider most relevant. In particular, we evaluate the size of credentials and presentations built using the different cryptographic mechanisms and the time needed to generate and verify them, and we highlight some trade-offs that must be considered when instantiating these mechanisms.
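To illustrate the hiding-commitment category, here is a minimal, hypothetical sketch (plain Python; not the mechanism of any specific standard): the issuer signs salted digests of the attributes, and the holder reveals only the salt-value pairs it chooses to disclose.

```python
# Hypothetical sketch of selective disclosure via salted hash commitments.
import hashlib, os

attributes = {"name": "Alice", "birth_year": "1990", "nationality": "IT"}
salts = {k: os.urandom(16) for k in attributes}
digests = {k: hashlib.sha256(salts[k] + attributes[k].encode()).hexdigest()
           for k in attributes}
# ... the issuer signs `digests`; to disclose "birth_year" the holder sends:
disclosure = ("birth_year", salts["birth_year"], attributes["birth_year"])
# ... the verifier recomputes the digest and checks it against the signed set.
k, salt, value = disclosure
assert hashlib.sha256(salt + value.encode()).hexdigest() == digests[k]
```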
Strategies for partially observable Markov decision processes (POMDPs) typically require memory. One way to represent this memory is via automata. We present a method to learn an automaton representation of a strategy using the L*-algorithm. Compared to the tabular representation of a strategy, the resulting automaton is dramatically smaller and thus also more explainable. Moreover, in the learning process, our heuristics may even improve the strategy's performance. In contrast to approaches that synthesize an automaton directly from the POMDP, thereby solving it, our approach is far more scalable.
We explore the fairness of a redistricting game introduced by Mixon and Villar, which provides a two-party protocol for dividing a state into electoral districts without the participation of an impartial independent authority. We analyze the game in an abstract setting that ignores the geographic distribution of voters and assumes that voter preferences are fixed and known. We first show that the minority player can always win at least $p-1$ districts, where $p$ is proportional to the percentage of minority voters, and that when the minority is large, it can win more than $p$ districts. We also show that a "cracking" strategy by the majority party limits the number of districts the minority player can win as a function of the size of the minority.
Adsorption energy, a reactivity descriptor, must be assessed accurately for efficient catalyst screening. This evaluation requires determining the lowest energy across the various adsorption configurations on the catalytic surface. While graph neural networks (GNNs) have gained popularity as a machine learning approach for computing the energy of catalyst systems, they rely heavily on atomic spatial coordinates and often lack clarity in their interpretations. Recent advances in language models have broadened their applicability to predicting catalytic properties, allowing us to bypass the complexities of graph representations. These models are adept at handling textual data, making it possible to incorporate observable features in a human-readable format. However, language models struggle to predict the energy of adsorption configurations accurately, typically showing a high mean absolute error (MAE) of about 0.71 eV. Our study addresses this limitation by introducing a self-supervised multimodal learning approach, termed graph-assisted pretraining. Combined with data augmentation, this method significantly reduces the MAE to 0.35 eV, achieving accuracy comparable to DimeNet++ while using only 0.4% of its training data. Furthermore, the Transformer encoder at the core of the language model can reveal which features it focuses on through its attention scores. This analysis shows that our multimodal training effectively redirects the model's attention away from adsorbate-related features and toward relevant adsorption configurations, enhancing both prediction accuracy and interpretability.
Transformer-based models have revolutionized the field of deep learning, achieving remarkable performance in many tasks. Recent research has recognized that these models are robust to shuffling, but only with respect to inter-token permutation in the forward propagation. In this work, we propose our definition of permutation equivariance, a broader concept covering both inter- and intra-token permutation in the forward and backward propagation of neural networks. We rigorously prove that this permutation equivariance property is satisfied by most vanilla Transformer-based models with almost no adaptation. We examine the property over a range of state-of-the-art models, including ViT, Bert, GPT, and others, with experimental validation. Further, as a proof of concept, we explore how real-world applications, including privacy-enhancing split learning and model authorization, could exploit the permutation equivariance property, which suggests wider, intriguing application scenarios.
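The inter-token half of the property is easy to check directly. A minimal numpy sketch (single-head attention, no positional encodings; the names are ours) verifies that permuting the input rows permutes the output rows identically:

```python
# A minimal sketch of inter-token permutation equivariance in self-attention:
# attn(P @ X) == P @ attn(X) for any permutation matrix P.
import numpy as np

def softmax(z):  # row-wise softmax
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    return softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V

rng = np.random.default_rng(0)
n, d = 6, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
P = np.eye(n)[rng.permutation(n)]          # random permutation matrix

out = self_attention(X, Wq, Wk, Wv)
assert np.allclose(self_attention(P @ X, Wq, Wk, Wv), P @ out)
```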
Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of the network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
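For context, the standard presentation of the ResNet/neural-ODE link that this abstract questions can be sketched as follows (notation assumed for illustration): with $L$ layers and residual blocks scaled by $1/L$, the forward pass is an explicit Euler step, which discretizes an ODE only if the weights converge to a smooth function of depth.

```latex
% Sketch of the ResNet / neural-ODE correspondence (notation assumed).
% Forward pass of an L-layer ResNet with 1/L-scaled residual blocks:
x_{k+1} = x_k + \tfrac{1}{L}\, f(x_k, \theta_k), \qquad k = 0, \dots, L-1,
% which, if \theta_k \to \theta(k/L) for a smooth \theta(\cdot) as L \to \infty,
% is an explicit Euler discretization of
\dot{x}(t) = f\big(x(t), \theta(t)\big), \qquad t \in [0, 1].
```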
While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task are remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
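As an illustration of the kind of deep-feature "perceptual loss" discussed here, the following minimal sketch (assuming torch and torchvision; the layer indices and channel normalization are illustrative choices of ours, not the paper's metric) compares two images by distances between ImageNet-trained VGG features:

```python
# A minimal sketch of a VGG-feature "perceptual" distance (illustrative only).
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

def deep_feature_distance(x, y, layers=(3, 8, 15)):  # illustrative layer picks
    d = 0.0
    for i, block in enumerate(vgg):
        x, y = block(x), block(y)
        if i in layers:
            # Unit-normalize along the channel dim, then average squared diff.
            fx = x / (x.norm(dim=1, keepdim=True) + 1e-8)
            fy = y / (y.norm(dim=1, keepdim=True) + 1e-8)
            d = d + ((fx - fy) ** 2).mean()
        if i == max(layers):
            break
    return d

img1, img2 = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
with torch.no_grad():
    print(deep_feature_distance(img1, img2).item())
```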