In this paper, we study two graph convexity parameters: iteration time and general position number. The iteration time was defined in 1981 in the geodesic convexity, but its computational complexity had remained open. The general position number was defined in the geodesic convexity and proved NP-hard in 2018. We extend these parameters to any graph convexity and prove that the iteration time is NP-hard in the $P_3$ convexity. Using this result, we prove that the iteration time is also NP-hard in the geodesic convexity, even in graphs with diameter two, settling a natural question open since 1981. These results are also important because they are the last two missing NP-hardness results among the ten most studied graph convexity parameters in the geodesic and $P_3$ convexities. Finally, we also prove that the general position number in the monophonic convexity is NP-hard, W[1]-hard (parameterized by the size of the solution), and $n^{1-\varepsilon}$-inapproximable in polynomial time for any $\varepsilon>0$ unless P = NP, even in graphs with diameter two.
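To make the $P_3$ convexity concrete, the following is a minimal Python sketch (our own illustration, not code from the paper above): the interval operator adds every vertex with at least two neighbours in the current set, and the iteration time of a set is the number of applications of the operator needed to reach a fixed point (the hull).

```python
# Minimal sketch of the P_3 interval operator and its iteration time.
# Graph names, the toy graph, and function names are our own illustration.

def p3_interval(G, S):
    """One application of the P_3 interval operator on adjacency dict G."""
    S = set(S)
    added = {v for v in G if v not in S
             and sum(1 for u in G[v] if u in S) >= 2}
    return S | added

def p3_iteration_time(G, S):
    """Number of applications of the interval operator until it stabilizes."""
    t, S = 0, set(S)
    while True:
        T = p3_interval(G, S)
        if T == S:
            return t
        S, t = T, t + 1

# Toy graph: a path a-b-c-d-e with an extra edge b-d.
G = {"a": {"b"}, "b": {"a", "c", "d"}, "c": {"b", "d"},
     "d": {"b", "c", "e"}, "e": {"d"}}
print(p3_iteration_time(G, {"a", "c", "e"}))   # -> 1
```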
In this work, we study the convergence and performance of nonlinear solvers for the Bidomain equations after decoupling the ordinary and partial differential equations of the cardiac system. First, we provide a rigorous proof of the global convergence of Quasi-Newton methods, such as BFGS, and nonlinear Conjugate-Gradient methods, such as Fletcher--Reeves, for the Bidomain system, by analyzing an auxiliary variational problem under physically reasonable hypotheses. Second, we compare several nonlinear Bidomain solvers in terms of execution time, robustness with respect to the data, and parallel scalability. Our findings indicate that Quasi-Newton methods are the best choice for nonlinear Bidomain systems, since they exhibit faster convergence than standard Newton--Krylov methods while maintaining robustness and scalability. Furthermore, first-order methods also prove competitive and serve as a viable alternative, particularly for matrix-free implementations that are well suited to GPU computing.
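As a small illustration of the kind of solver comparison described above, the sketch below (not the paper's Bidomain code) runs a Quasi-Newton method (BFGS) and SciPy's nonlinear conjugate-gradient method on a model convex energy standing in for the auxiliary variational problem; all problem data are synthetic.

```python
# Sketch only: BFGS vs. nonlinear CG on a toy convex energy, not the
# Bidomain system itself. Matrix, right-hand side, and nonlinearity are
# synthetic placeholders.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
# Discrete Laplacian-like SPD matrix plus a smooth nonlinearity.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = rng.standard_normal(n)

def energy(u):
    return 0.5 * u @ A @ u - b @ u + np.sum(np.log(np.cosh(u)))

def gradient(u):
    return A @ u - b + np.tanh(u)

u0 = np.zeros(n)
for method in ("BFGS", "CG"):   # "CG" is SciPy's nonlinear conjugate gradient
    res = minimize(energy, u0, jac=gradient, method=method,
                   options={"maxiter": 500})
    print(method, res.nit, res.fun)
```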
The Gromov--Hausdorff distance measures the difference in shape between compact metric spaces and poses a notoriously difficult problem in combinatorial optimization. We introduce a quadratic relaxation of it over a convex polytope whose solutions provably deliver the Gromov--Hausdorff distance. The optimality guarantee is enabled by the fact that the search space of our approach is not constrained to a generalization of bijections, unlike in other relaxations such as the Gromov--Wasserstein distance. We suggest the Frank--Wolfe algorithm for solving the relaxation in $O(n^3)$ time per iteration, and numerically demonstrate its performance on metric spaces of hundreds of points. In particular, we use it to obtain a new bound on the Gromov--Hausdorff distance between the unit circle and the unit hemisphere equipped with the Euclidean metric. Our approach is implemented as the Python package dGH.
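For illustration, here is a generic Frank--Wolfe template (a sketch under our own simplifying choices, not the dGH package's API or its actual polytope): it minimizes a toy smooth quadratic over row-stochastic matrices, whose linear minimization oracle simply picks, in every row, the column with the smallest gradient entry.

```python
# Generic Frank--Wolfe sketch; the feasible set and objective are stand-ins,
# not the relaxation used by dGH.

import numpy as np

def frank_wolfe(grad_f, X0, n_iter=200):
    X = X0.copy()
    for k in range(n_iter):
        G = grad_f(X)
        S = np.zeros_like(X)                       # vertex returned by the LMO
        S[np.arange(X.shape[0]), G.argmin(axis=1)] = 1.0
        gamma = 2.0 / (k + 2.0)                    # standard open-loop step size
        X = (1.0 - gamma) * X + gamma * S          # convex update stays feasible
    return X

rng = np.random.default_rng(1)
n = 50
Q = rng.standard_normal((n, n))
f = lambda X: 0.5 * np.sum((Q @ X) ** 2)           # toy smooth objective
grad = lambda X: Q.T @ (Q @ X)                     # its gradient
X0 = np.full((n, n), 1.0 / n)                      # uniform starting point
X = frank_wolfe(grad, X0)
print(f(X0), f(X))                                 # value before and after FW
```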
Learned cardinality estimation methods have achieved high precision compared to traditional methods. Among learned methods, query-driven approaches have long faced the problem of data and workload drift. Although both data-driven and hybrid methods have been proposed to avoid this problem, even the state of the art among them suffers from high training and estimation costs, limited scalability, instability, and the long-tailed distribution problem on high-cardinality and high-dimensional tables, which seriously limits the practical application of learned cardinality estimators. In this paper, we prove that most of these problems are directly caused by the widely used progressive sampling. We solve this problem by introducing predicates into the autoregressive model and propose Duet, a stable, efficient, and scalable hybrid method that estimates cardinality directly, without sampling or any other non-differentiable process. This not only reduces the inference complexity from $O(n)$ to $O(1)$ compared with Naru and UAE, but also achieves higher accuracy on high-cardinality and high-dimensional tables. Experimental results show that Duet achieves all of the design goals above, is considerably more practical, and even has a lower inference cost on CPU than most learned methods have on GPU.
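For context, the sketch below illustrates the Naru-style progressive-sampling estimate that the abstract argues against (it is not Duet itself); the two-column autoregressive "model" and its conditional tables are hypothetical toy data.

```python
# Toy sketch of progressive sampling for cardinality estimation: sample
# columns left to right from an autoregressive model restricted to the
# query predicate, and multiply the predicate masses. Model tables below
# are invented for illustration.

import random

N_ROWS = 1000
p_a = {0: 0.5, 1: 0.3, 2: 0.2}                       # hypothetical P(a)
p_b_given_a = {0: {0: 0.9, 1: 0.1},                  # hypothetical P(b | a)
               1: {0: 0.4, 1: 0.6},
               2: {0: 0.2, 1: 0.8}}

def estimate_cardinality(pred_a, pred_b, n_samples=1000):
    total = 0.0
    for _ in range(n_samples):
        # mass of the predicate on column a, then sample a value satisfying it
        vals_a = [v for v in p_a if pred_a(v)]
        mass_a = sum(p_a[v] for v in vals_a)
        a = random.choices(vals_a, weights=[p_a[v] for v in vals_a])[0]
        # conditional mass of the predicate on column b given the sampled a
        mass_b = sum(p for v, p in p_b_given_a[a].items() if pred_b(v))
        total += mass_a * mass_b
    return N_ROWS * total / n_samples

print(estimate_cardinality(lambda a: a >= 1, lambda b: b == 1))  # ~340
```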
The Weighted Path Order (WPO) of Yamada is a powerful technique for proving termination. It is also supported by CeTA, a certifier for checking untrusted termination proofs. More precisely, CeTA contains a verified function that computes, for two terms, whether one of them is larger than the other with respect to a given WPO, i.e., a WPO in which all parameters have been fixed. The problem with this verified function is its exponential runtime in the worst case. Therefore, in this work we develop a polynomial-time implementation of WPO based on memoization. It also improves upon an earlier verified implementation of the Recursive Path Order (RPO): the RPO implementation uses full terms as keys for the memory, a design that simplified the soundness proofs but incurs some runtime overhead. In this work, keys are just numbers, so that lookups in the memory are faster. Although trivial on paper, this change introduces some challenges for the verification task.
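The design choice of integer keys can be illustrated with a small Python sketch (not the verified Isabelle/HOL code, and using a simplified recursive path comparison rather than WPO itself): each term node gets an integer id, and the memoized comparison is keyed by pairs of ids instead of by full subterms.

```python
# Illustration of memoization keyed by integer node ids rather than whole
# terms. The comparison below is a toy path-order-like recursion, not WPO.

from functools import lru_cache

nodes = []                          # node id -> (symbol, list of child ids)
def mk(symbol, *children):
    nodes.append((symbol, list(children)))
    return len(nodes) - 1           # the node's integer id

@lru_cache(maxsize=None)            # cache keys are just pairs of ints
def greater(s, t):
    """Toy recursive comparison on node ids s, t."""
    fs, s_children = nodes[s]
    ft, t_children = nodes[t]
    if any(si == t or greater(si, t) for si in s_children):
        return True
    # symbols compared as strings, standing in for a precedence
    return fs > ft and all(greater(s, ti) for ti in t_children)

x = mk("a"); y = mk("b")
f_xy = mk("f", x, y); c_x = mk("c", x)
print(greater(f_xy, c_x))           # -> True
```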
The behaviour of multi-agent learning in many-player games has been shown to display complex dynamics outside of restrictive examples such as network zero-sum games. In addition, it has been shown that convergent behaviour is less likely to occur as the number of players increases. To make progress towards resolving this problem, we study Q-Learning dynamics and determine a sufficient condition for the dynamics to converge to a unique equilibrium in any network game. We find that this condition depends on the nature of pairwise interactions and on the network structure, but is explicitly independent of the total number of agents in the game. We evaluate this result on a number of representative network games and show that, under suitable network conditions, stable learning dynamics can be achieved with an arbitrary number of agents.
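The dynamics in question can be illustrated with a minimal simulation sketch (our own, not the paper's analysis or its convergence condition): smoothed Q-learning for two agents on a single edge of a network game, where the payoff matrices, learning rate, and exploration temperature are arbitrary placeholders.

```python
# Toy simulation of smoothed (Boltzmann) Q-learning dynamics for two agents.
# All parameters are placeholders chosen only for illustration.

import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0]])   # payoff of agent 1 (row player)
B = A.T                                   # payoff of agent 2; a coordination game
T, alpha, tau = 2000, 0.05, 1.0           # steps, learning rate, temperature

def softmax(q, tau):
    z = np.exp(q / tau - np.max(q / tau))
    return z / z.sum()

Q1, Q2 = np.zeros(2), np.zeros(2)
for _ in range(T):
    x, y = softmax(Q1, tau), softmax(Q2, tau)
    # expected-payoff (mean-field) update of each agent's Q-values
    Q1 += alpha * (A @ y - Q1)
    Q2 += alpha * (B @ x - Q2)

print(softmax(Q1, tau), softmax(Q2, tau))  # the agents' final mixed strategies
```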
Stochastic gradient descent (SGD) is the simplest deep learning optimizer with which to train deep neural networks. While SGD can use various learning rates, such as constant or diminishing rates, previous numerical results showed that SGD performs better than other deep learning optimizers when it uses learning rates given by line search methods. In this paper, we perform a convergence analysis of SGD with a learning rate given by an Armijo line search for nonconvex optimization. The analysis indicates that the upper bound on the expectation of the squared norm of the full gradient becomes small when the number of steps and the batch size are large. Next, we show that, for SGD with the Armijo-line-search learning rate, the number of steps needed for nonconvex optimization is a monotone decreasing convex function of the batch size; that is, the number of steps needed for nonconvex optimization decreases as the batch size increases. Furthermore, we show that the stochastic first-order oracle (SFO) complexity, which is the stochastic gradient computation cost, is a convex function of the batch size; that is, there exists a critical batch size that minimizes the SFO complexity. Finally, we provide numerical results that support our theoretical results. They indicate that the number of steps needed for training deep neural networks decreases as the batch size increases and that there exist critical batch sizes that can be estimated from the theoretical results.
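For concreteness, here is an illustrative sketch of one SGD step whose learning rate is chosen by Armijo backtracking on the current mini-batch (the paper's exact algorithm and constants may differ); the least-squares problem and all hyperparameters are toy choices.

```python
# Sketch of SGD with an Armijo backtracking line search on a mini-batch.

import numpy as np

def armijo_sgd_step(params, batch, loss_fn, grad_fn,
                    lr0=1.0, c=1e-4, shrink=0.5, max_backtracks=20):
    """One SGD step with the learning rate chosen by Armijo backtracking."""
    g = grad_fn(params, batch)
    loss = loss_fn(params, batch)
    lr = lr0
    for _ in range(max_backtracks):
        # Armijo condition: sufficient decrease along the negative gradient
        if loss_fn(params - lr * g, batch) <= loss - c * lr * np.dot(g, g):
            break
        lr *= shrink
    return params - lr * g, lr

# Toy least-squares problem with random mini-batches.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((256, 10)), rng.standard_normal(256)
loss_fn = lambda w, idx: 0.5 * np.mean((X[idx] @ w - y[idx]) ** 2)
grad_fn = lambda w, idx: X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)

w = np.zeros(10)
for step in range(100):
    batch = rng.choice(256, size=32, replace=False)
    w, lr = armijo_sgd_step(w, batch, loss_fn, grad_fn)
print(loss_fn(w, np.arange(256)))   # full-dataset loss after training
```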
We study the consistent $k$-center clustering problem. In this problem, the goal is to maintain a constant factor approximate $k$-center solution during a sequence of $n$ point insertions and deletions while minimizing the recourse, i.e., the number of changes made to the set of centers after each point insertion or deletion. Previous works by Lattanzi and Vassilvitskii [ICML '12] and Fichtenberger, Lattanzi, Norouzi-Fard, and Svensson [SODA '21] showed that in the incremental setting, where deletions are not allowed, one can obtain $k \cdot \textrm{polylog}(n) / n$ amortized recourse for both $k$-center and $k$-median, and demonstrated a matching lower bound. However, no algorithm for the fully dynamic setting achieves fewer than the trivial $O(k)$ changes per update, which can be obtained by simply reclustering the full dataset after every update. In this work, we give the first algorithm for consistent $k$-center clustering in the fully dynamic setting, i.e., when both point insertions and deletions are allowed, that improves upon the trivial $O(k)$ recourse bound. Specifically, our algorithm maintains a constant factor approximate solution while ensuring worst-case constant recourse per update, which is optimal in the fully dynamic setting. Moreover, our algorithm is deterministic and is therefore correct even when an adaptive adversary chooses the insertions and deletions.
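For background, the classic greedy (Gonzalez) 2-approximation for static $k$-center is sketched below; it is not the paper's fully dynamic algorithm, only the static baseline whose repeated recomputation yields the trivial $O(k)$ recourse mentioned above.

```python
# Static greedy k-center (farthest-first traversal), shown only for context.

import numpy as np

def greedy_k_center(points, k):
    """Pick k centers by repeatedly taking the point farthest from the
    current centers."""
    centers = [0]                                   # arbitrary first center
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                  # farthest point so far
        centers.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return centers, dist.max()                      # centers and clustering radius

rng = np.random.default_rng(0)
P = rng.standard_normal((500, 2))
centers, radius = greedy_k_center(P, k=10)
print(centers, radius)
```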
Graph Neural Networks (GNNs) have been successfully used in many problems involving graph-structured data, achieving state-of-the-art performance. GNNs typically employ a message-passing scheme, in which every node aggregates information from its neighbors using a permutation-invariant aggregation function. Standard, well-examined choices such as the mean or sum aggregation functions have limited capabilities, as they are not able to capture interactions among neighbors. In this work, we formalize these interactions using an information-theoretic framework that notably includes synergistic information. Driven by this definition, we introduce the Graph Ordering Attention (GOAT) layer, a novel GNN component that captures interactions between nodes in a neighborhood. This is achieved by learning local node orderings via an attention mechanism and processing the ordered representations using a recurrent neural network aggregator. This design allows us to make use of a permutation-sensitive aggregator while maintaining the permutation equivariance of the proposed GOAT layer. The GOAT model demonstrates increased performance in modeling graph metrics that capture complex information, such as the betweenness centrality and the effective size of a node. In practical use cases, its superior modeling capability is confirmed by its success on several real-world node classification benchmarks.
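A minimal PyTorch sketch of the idea (our own simplification, not the authors' implementation) is shown below: each neighbour is scored with an attention module, the neighbours are sorted by that score, and the ordered sequence is aggregated by an LSTM; the hard sort here is only for illustration.

```python
# Simplified GOAT-style aggregation for a single node and its neighbours.
# Module structure and dimensions are our own illustrative choices.

import torch
import torch.nn as nn

class OrderedAttentionAggregation(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)    # attention score of (node, neighbour)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, h_v, h_neighbors):
        # h_v: (dim,); h_neighbors: (num_neighbors, dim)
        pairs = torch.cat([h_v.expand_as(h_neighbors), h_neighbors], dim=-1)
        scores = self.score(pairs).squeeze(-1)           # (num_neighbors,)
        order = torch.argsort(scores, descending=True)   # attention-induced ordering
        seq = h_neighbors[order].unsqueeze(0)            # (1, num_neighbors, dim)
        _, (h_last, _) = self.rnn(seq)                   # permutation-sensitive aggregator
        return h_last.squeeze(0).squeeze(0)              # (dim,)

layer = OrderedAttentionAggregation(dim=16)
out = layer(torch.randn(16), torch.randn(5, 16))
print(out.shape)   # torch.Size([16])
```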
Transformers have achieved superior performance in many tasks in natural language processing and computer vision, which has also sparked great interest in the time series community. Among the many advantages of Transformers, the ability to capture long-range dependencies and interactions is especially attractive for time series modeling, leading to exciting progress in various time series applications. In this paper, we systematically review Transformer schemes for time series modeling, highlighting their strengths as well as their limitations, through a new taxonomy that summarizes existing time series Transformers from two perspectives. From the perspective of network modifications, we summarize the module-level and architecture-level adaptations made in time series Transformers. From the perspective of applications, we categorize time series Transformers based on common tasks, including forecasting, anomaly detection, and classification. Empirically, we perform robustness analysis, model size analysis, and seasonal-trend decomposition analysis to study how Transformers perform on time series. Finally, we discuss and suggest future directions to provide useful research guidance. To the best of our knowledge, this paper is the first work to comprehensively and systematically summarize the recent advances of Transformers for modeling time series data. We hope this survey will ignite further research interest in time series Transformers.
As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
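As a worked example of the most basic building block such surveys discuss, here is one standard form of uniform (affine) quantization and its dequantization (our own illustration; the bit width, calibration range, and rounding choices below are assumptions, not a prescription from the article).

```python
# Uniform affine quantization: q = round(x / scale) + zero_point, clipped to
# the integer grid, with dequantization x_hat = scale * (q - zero_point).

import numpy as np

def quantize(x, num_bits=4):
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)        # min-max calibration
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.default_rng(0).standard_normal(8).astype(np.float32)
q, s, z = quantize(x, num_bits=4)
print(x)
print(dequantize(q, s, z))   # low-precision reconstruction of x
```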