In this paper, we propose a method for solving a PPAD-complete problem [Papadimitriou, 1994]: given the payoff matrix $C$ of a symmetric bimatrix game $(C, C^T)$, compute a Nash equilibrium of $(C, C^T)$. We devise a nonlinear replicator dynamic (whose right-hand side can be obtained by solving a pair of convex optimization problems) with the following property: for any invertible payoff matrix $C$ with entries in $[0, 1]$, every orbit of our dynamic starting at an interior strategy of the standard simplex approaches a set of strategies of $(C, C^T)$ such that, from each strategy in this set, a symmetric Nash equilibrium strategy can be computed by solving the aforementioned convex mathematical programs. We prove convergence using previous results in analysis (the analytic implicit function theorem), nonlinear optimization theory (duality theory, Berge's maximum principle, and a theorem of Robinson [1980] on the Lipschitz continuity of parametric nonlinear programs), and dynamical systems theory (a theorem of Losert and Akin [1983] related to the LaSalle invariance principle, which yields a stronger conclusion under a stronger assumption).
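For orientation only, the sketch below integrates the classical (linear) replicator dynamic $\dot{x}_i = x_i\left((Cx)_i - x^T C x\right)$ on the standard simplex, the baseline that the proposed nonlinear dynamic modifies; the payoff matrix, step size, and horizon are illustrative assumptions, and no convex programs are solved here, so this is not the paper's dynamic.

```python
import numpy as np

def replicator_step(x, C, dt=0.01):
    """One explicit-Euler step of the classical replicator dynamic
    x_i' = x_i * ((C x)_i - x^T C x) on the standard simplex."""
    payoffs = C @ x                      # payoff to each pure strategy
    avg = x @ payoffs                    # population-average payoff
    x = x + dt * x * (payoffs - avg)
    return x / x.sum()                   # renormalize against numerical drift

rng = np.random.default_rng(0)
C = rng.uniform(0.0, 1.0, size=(3, 3))  # payoff matrix with entries in [0, 1]
x = np.full(3, 1.0 / 3.0)               # interior starting strategy
for _ in range(5000):
    x = replicator_step(x, C)
print("approximate rest point:", x)     # a candidate symmetric equilibrium
```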
This paper analyzes a popular loss function used in machine learning, the log-cosh loss function. A number of papers have been published using this loss but, to date, no statistical analysis of it has appeared in the literature. In this paper, we present the distribution function from which the log-cosh loss arises. We compare it to a similar distribution, the Cauchy distribution, and carry out various statistical procedures that characterize its properties. In particular, we examine its associated pdf, cdf, likelihood function, and Fisher information. We study the Cauchy and Cosh distributions side by side, along with the MLE of the location parameter, its asymptotic bias, asymptotic variance, and confidence intervals. We also provide a comparison with robust estimators derived from several other loss functions, including the Huber loss function and the rank dispersion function. Further, we examine the use of the log-cosh function for quantile regression. In particular, we identify a quantile distribution function from which a maximum likelihood estimator for quantile regression can be derived. Finally, we compare a log-cosh-based quantile M-estimator with robust monotonicity against another approach to quantile regression based on convolution smoothing.
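As a minimal sketch of the objects discussed above: the density proportional to $e^{-\log\cosh(x)} = \operatorname{sech}(x)$ integrates to $\pi$ over the real line, giving the pdf $\frac{1}{\pi\cosh(x-\mu)}$, and the location MLE minimizes $\sum_i \log\cosh(x_i - \mu)$. The data and grid below are illustrative placeholders, not the paper's experiments.

```python
import numpy as np

def log_cosh(r):
    """Numerically stable log(cosh(r)): log((e^r + e^-r)/2) = logaddexp(r, -r) - log 2.
    Avoids overflow of cosh for large |r|."""
    return np.logaddexp(r, -r) - np.log(2.0)

def cosh_pdf(x, mu=0.0):
    """Density proportional to exp(-log cosh(x - mu)) = sech(x - mu);
    the normalizing constant is pi since sech integrates to pi over R."""
    return 1.0 / (np.pi * np.cosh(x - mu))

# The location MLE minimizes the summed log-cosh loss; a crude grid search:
data = np.array([-1.2, 0.4, 0.9, 2.3, -0.1])
grid = np.linspace(-3, 3, 2001)
nll = np.array([log_cosh(data - m).sum() for m in grid])
print("location MLE ~", grid[nll.argmin()])
```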
In this paper, we propose two algorithms for a hybrid construction of all $n\times n$ MDS and involutory MDS matrices over a finite field $\mathbb{F}_{p^m}$, respectively. The proposed algorithms effectively narrow the search space: identifying all $(n-1) \times (n-1)$ MDS matrices suffices to generate all $n \times n$ MDS and involutory MDS matrices over $\mathbb{F}_{p^m}$. To the best of our knowledge, the existing literature lacks methods for generating all such matrices. In our approach, we introduce a representative matrix form for generating all $n\times n$ MDS and involutory MDS matrices over $\mathbb{F}_{p^m}$; determining these representative MDS matrices involves searching through all $(n-1)\times (n-1)$ MDS matrices over $\mathbb{F}_{p^m}$. Our contributions extend to proving that the count of all $3\times 3$ MDS matrices over $\mathbb{F}_{2^m}$ is precisely $(2^m-1)^5(2^m-2)(2^m-3)(2^{2m}-9\cdot 2^m+21)$. Furthermore, we explicitly provide the count of all $4\times 4$ MDS and involutory MDS matrices over $\mathbb{F}_{2^m}$ for $m=2, 3, 4$.
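For concreteness, a matrix is MDS exactly when every square submatrix is nonsingular. The brute-force checker below verifies this over a prime field $\mathbb{F}_p$ for simplicity; the paper works over general $\mathbb{F}_{p^m}$, whose extension-field arithmetic is omitted here, and this naive test is not the paper's hybrid search procedure.

```python
from itertools import combinations

def det_mod_p(M, p):
    """Determinant over the prime field F_p via Gaussian elimination
    with modular inverses (p must be prime)."""
    M = [[v % p for v in row] for row in M]
    n, det = len(M), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col]), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det = det * M[col][col] % p
        inv = pow(M[col][col], p - 2, p)       # inverse via Fermat's little theorem
        for r in range(col + 1, n):
            f = M[r][col] * inv % p
            for c in range(col, n):
                M[r][c] = (M[r][c] - f * M[col][c]) % p
    return det % p

def is_mds(M, p):
    """A matrix is MDS iff every square submatrix is nonsingular."""
    n = len(M)
    return all(
        det_mod_p([[M[r][c] for c in cols] for r in rows], p) != 0
        for k in range(1, n + 1)
        for rows in combinations(range(n), k)
        for cols in combinations(range(n), k)
    )

print(is_mds([[1, 1], [1, 2]], 7))   # True: all 1x1 and 2x2 minors are nonzero mod 7
```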
This paper proposes a methodology for constructing corpora of child-directed speech (CDS) paired with sentential logical forms, and uses this method to create two such corpora, in English and Hebrew. The approach enforces a cross-linguistically consistent representation, building on recent advances in dependency representation and semantic parsing. Specifically, the approach involves two steps. First, we annotate the corpora using the Universal Dependencies (UD) scheme for syntactic annotation, which has been developed to apply consistently to a wide variety of domains and typologically diverse languages. Next, we further annotate these data by applying an automatic method for transducing sentential logical forms (LFs) from UD structures. The UD and LF representations have complementary strengths: UD structures are language-neutral and support consistent and reliable annotation by multiple annotators, whereas LFs are neutral as to their syntactic derivation and transparently encode semantic relations. Using this approach, we provide syntactic and semantic annotation for two corpora from CHILDES: Brown's Adam corpus (English; we annotate ~80% of its child-directed utterances) and all child-directed utterances from Berman's Hagar corpus (Hebrew). We verify the quality of the UD annotation using an inter-annotator agreement study, and manually evaluate the transduced meaning representations. We then demonstrate the utility of the compiled corpora through (1) a longitudinal corpus study of the prevalence of different syntactic and semantic phenomena in the CDS, and (2) applying an existing computational model of language acquisition to the two corpora and briefly comparing the results across languages.
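UD annotations are conventionally distributed in the CoNLL-U format, so a minimal reader like the sketch below suffices for corpus studies of the kind described; the file name is hypothetical, and this is a generic consumer of UD data, not the authors' LF transduction method.

```python
def read_conllu(path):
    """Minimal CoNLL-U reader: yields one sentence at a time as a list of
    (id, form, lemma, upos, head, deprel) tuples, skipping comment lines,
    multi-word token ranges, and empty nodes."""
    sent = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                     # blank line ends a sentence
                if sent:
                    yield sent
                    sent = []
                continue
            if line.startswith("#"):         # sentence-level metadata
                continue
            cols = line.split("\t")
            if "-" in cols[0] or "." in cols[0]:
                continue                     # multi-word tokens / empty nodes
            sent.append((int(cols[0]), cols[1], cols[2], cols[3],
                         int(cols[6]), cols[7]))
    if sent:
        yield sent

# e.g., tally root predicates across CDS utterances (file name is hypothetical):
# for sent in read_conllu("adam_cds.conllu"):
#     roots = [tok for tok in sent if tok[4] == 0]
```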
This paper studies the problem of estimating the order of arrival of the vertices in a random recursive tree. Specifically, we study two fundamental models: the uniform attachment model and the linear preferential attachment model. We propose an order estimator based on the Jordan centrality measure and define a family of risk measures to quantify the quality of the ordering procedure. Moreover, we establish a minimax lower bound for this problem, and prove that the proposed estimator is nearly optimal. Finally, we numerically demonstrate that the proposed estimator outperforms degree-based and spectral ordering procedures.
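As a rough illustration of the statistic involved: the Jordan centrality of a vertex $v$ in a tree $T$ is the size of the largest component of $T - v$, and early arrivals tend to have small values. The sketch below ranks vertices of a simulated uniform attachment tree by this quantity; the tree size and seed are illustrative assumptions, and the paper's estimator and risk measures refine this naive ranking.

```python
import random
import networkx as nx

def uniform_attachment_tree(n, seed=0):
    """Each new vertex attaches to a uniformly random existing vertex."""
    rng = random.Random(seed)
    T = nx.Graph()
    T.add_node(0)
    for v in range(1, n):
        T.add_edge(v, rng.randrange(v))
    return T

def jordan_centrality(T, v):
    """Size of the largest component of T - v; smaller means more central."""
    H = T.copy()
    H.remove_node(v)
    return max((len(c) for c in nx.connected_components(H)), default=0)

T = uniform_attachment_tree(200)
ranking = sorted(T.nodes, key=lambda v: jordan_centrality(T, v))
print("estimated earliest arrivals:", ranking[:5])  # central vertices come first
```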
The recent paper (IEEE Trans. IT 69, 1680) introduced an analytical method for calculating the channel capacity without the need for iteration. This method has certain limitations that restrict its applicability, and the paper does not explain why the channel capacity can be computed analytically in this particular case. In order to broaden the scope of this method and address its limitations, we turn our attention to the reverse em-problem proposed by Toyota (Information Geometry, 3, 1355 (2020)). The reverse em-problem involves iteratively applying the inverse map of the em iteration to calculate the channel capacity, i.e., the maximum mutual information. However, several open problems remained unresolved in Toyota's work. To overcome these challenges, we formulate the reverse em-problem based on Bregman divergence and provide solutions to these open problems. Building upon these results, we transform the reverse em-problem into em-problems and derive a non-iterative formula for the reverse em-problem. This formula can be viewed as a generalization of the aforementioned analytical calculation method, and its derivation sheds light on the information-geometrical structure underlying this special case. By addressing the limitations of the previous analytical method and providing a deeper understanding of this structure, our work significantly expands the applicability of non-iterative channel capacity calculation.
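For reference, the Bregman divergence generated by a differentiable convex function $F$ is the standard quantity $$D_F(p \,\|\, q) \;=\; F(p) - F(q) - \langle \nabla F(q),\, p - q \rangle,$$ and choosing $F(p) = \sum_i p_i \log p_i$ (negative Shannon entropy) recovers the Kullback-Leibler divergence that underlies the mutual-information maximization in the channel capacity problem. The paper's specific formulation of the reverse em-problem is built on this definition.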
In this paper, we present a~generalisation of proof simulation procedures for Frege systems by Bonet and Buss to some logics for which the deduction theorem does not hold. In particular, we study the case of finite-valued \L{}ukasiewicz logics. To this end, we provide proof systems that augment Avron's Frege system for \L{}ukasiewicz three-valued logic with nested and general versions of the disjunction elimination rule, respectively. For these systems we provide upper bounds on speed-ups w.r.t.\ both the number of steps in proofs and the length of proofs. We also consider Tamminga's natural deduction and Avron's hypersequent calculus for three-valued \L{}ukasiewicz logic and generalise our results concerning the disjunction elimination rule to all finite-valued \L{}ukasiewicz logics.
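To fix the semantics referred to above, the standard truth-functional clauses of the finite-valued \L{}ukasiewicz logics are sketched below; the proof-theoretic content of the paper (Frege systems, hypersequents, speed-up bounds) is not reproduced, and this is only a reference implementation of the truth tables.

```python
from fractions import Fraction

ONE = Fraction(1)

def values(n):
    """Truth values of the n-valued Lukasiewicz logic: 0, 1/(n-1), ..., 1."""
    return [Fraction(k, n - 1) for k in range(n)]

def imp(a, b):           # v(A -> B) = min(1, 1 - v(A) + v(B))
    return min(ONE, ONE - a + b)

def neg(a):              # v(~A) = 1 - v(A)
    return ONE - a

def disj(a, b):          # v(A v B) = max(v(A), v(B)), definable as (A -> B) -> B
    return max(a, b)

# Implication table for the three-valued logic (designated value: 1):
for a in values(3):
    print([str(imp(a, b)) for b in values(3)])
```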
Confidence intervals based on the central limit theorem (CLT) are a cornerstone of classical statistics. Despite being only asymptotically valid, they are ubiquitous because they permit statistical inference under weak assumptions and can often be applied even to problems where nonasymptotic inference is impossible. This paper introduces time-uniform analogues of such asymptotic confidence intervals, adding to the literature on confidence sequences (CSs) -- sequences of confidence intervals that are uniformly valid over time -- which provide valid inference at arbitrary stopping times and incur no penalties for "peeking" at the data, unlike classical confidence intervals, which require the sample size to be fixed in advance. Existing CSs in the literature are nonasymptotic, enjoying finite-sample guarantees but not the aforementioned broad applicability of asymptotic confidence intervals. This work provides a definition of "asymptotic CSs" and a general recipe for deriving them. Asymptotic CSs forgo nonasymptotic validity in exchange for CLT-like versatility and (asymptotic) time-uniform guarantees. While the CLT approximates the distribution of a sample average by that of a Gaussian at a fixed sample size, we use strong invariance principles (stemming from the seminal 1960s work of Strassen) to uniformly approximate the entire sample average process by an implicit Gaussian process. As an illustration, we derive asymptotic CSs for the average treatment effect in observational studies (for which nonasymptotic bounds are essentially impossible to derive even in the fixed-time regime) as well as in randomized experiments, enabling causal inference in sequential environments.
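The simulation below illustrates the "peeking" failure mode that confidence sequences are designed to repair: a nominal 95% CLT interval, checked repeatedly as data accumulate, excludes the true mean far more than 5% of the time. The sample sizes and number of replications are illustrative assumptions, and this demonstrates only the problem, not the paper's construction of asymptotic CSs.

```python
import numpy as np

rng = np.random.default_rng(1)
n_max, n_sims, z = 1000, 2000, 1.96          # nominal 95% CLT intervals
ever_miss = 0
for _ in range(n_sims):
    x = rng.normal(0.0, 1.0, n_max)          # true mean 0, sigma = 1 known
    t = np.arange(1, n_max + 1)
    mean = np.cumsum(x) / t
    # fixed-n interval mean +/- z/sqrt(n), checked at every n >= 10 ("peeking")
    miss = np.abs(mean) > z / np.sqrt(t)
    ever_miss += bool(miss[9:].any())
print("P(some 95% CI excludes the truth):", ever_miss / n_sims)  # well above 0.05
```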
In this paper, we present and analyze fully discrete finite difference schemes designed for solving the initial value problem associated with the fractional Korteweg-de Vries (KdV) equation involving the fractional Laplacian. We design the scheme by introducing a discrete fractional Laplacian operator which is consistent with the continuous operator and possesses certain properties instrumental for the convergence analysis. Assuming initial data $u_0 \in H^{1+\alpha}(\mathbb{R})$, where $\alpha \in [1,2)$, our study establishes the convergence of the approximate solutions obtained by the fully discrete finite difference schemes to a classical solution of the fractional KdV equation. Theoretical results are validated through several numerical illustrations for various values of the fractional exponent $\alpha$. Furthermore, we demonstrate that the Crank-Nicolson finite difference scheme preserves the inherent conserved quantities and achieves improved convergence rates.
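For intuition about the operator involved: the fractional Laplacian acts in Fourier space as multiplication by $|\xi|^\alpha$. The sketch below realizes this spectrally on a periodic domain and checks it on the eigenfunction $\sin(x)$; it is only an illustration of the symbol, not the paper's finite difference operator on $\mathbb{R}$, and the grid size is an assumption.

```python
import numpy as np

def fractional_laplacian_periodic(u, L, alpha):
    """Spectral (-Delta)^(alpha/2) on a periodic domain of length L:
    multiply the Fourier coefficients by |xi|^alpha."""
    n = u.size
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)      # angular wavenumbers
    return np.real(np.fft.ifft(np.abs(xi) ** alpha * np.fft.fft(u)))

# Eigenfunction check: sin(kx) is mapped to |k|^alpha sin(kx); with k = 1
# the output reproduces sin(x) for every alpha.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
v = fractional_laplacian_periodic(np.sin(x), 2.0 * np.pi, alpha=1.5)
print(np.max(np.abs(v - np.sin(x))))                   # ~ 1e-15
```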
The aim of this paper is to design explicit radial basis function (RBF) Runge-Kutta methods for initial value problems. We construct two-, three-, and four-stage RBF Runge-Kutta methods based on the Gaussian RBF Euler method with a shape parameter, where analysis of the local truncation error shows that the $s$-stage RBF Runge-Kutta method can formally achieve order $s+1$. A proof of the convergence of these RBF Runge-Kutta methods follows. We then plot the stability region of each proposed RBF Runge-Kutta method and compare it with that of the corresponding standard Runge-Kutta method. Numerical experiments are provided to exhibit the improved behavior of the RBF Runge-Kutta methods over the standard ones.
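For the comparison baseline: the stability region of an explicit Runge-Kutta method is the set where its stability function satisfies $|R(z)| \le 1$. The sketch below plots the boundaries for the classical one- through four-stage methods, whose stability polynomials are standard; the RBF variants' stability functions depend on the shape parameter and are not reproduced here.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stability functions R(z) of the classical explicit RK methods; a method is
# absolutely stable where |R(z)| <= 1.
R = {
    "Euler (s=1)": lambda z: 1 + z,
    "RK2 (s=2)":   lambda z: 1 + z + z**2 / 2,
    "RK3 (s=3)":   lambda z: 1 + z + z**2 / 2 + z**3 / 6,
    "RK4 (s=4)":   lambda z: 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24,
}
xs, ys = np.meshgrid(np.linspace(-5, 2, 400), np.linspace(-4, 4, 400))
z = xs + 1j * ys
for name, r in R.items():
    plt.contour(xs, ys, np.abs(r(z)), levels=[1.0])   # boundary |R(z)| = 1
plt.xlabel("Re(z)"); plt.ylabel("Im(z)")
plt.title("Classical explicit RK stability boundaries")
plt.show()
```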
Multi-modal Large Language Models (MLLMs) have shown remarkable capabilities in various multi-modal tasks. Nevertheless, their performance on fine-grained image understanding tasks is still limited. To address this issue, this paper proposes a new framework to enhance the fine-grained image understanding abilities of MLLMs. Specifically, we present a new method for constructing the instruction tuning dataset at low cost by leveraging annotations in existing datasets. A self-consistent bootstrapping method is also introduced to extend existing dense object annotations into high-quality referring-expression-bounding-box pairs. These methods enable the generation of high-quality instruction data covering a wide range of fundamental abilities essential for fine-grained image perception. Moreover, we argue that the visual encoder should be tuned during instruction tuning to mitigate the gap between full image perception and fine-grained image perception. Experimental results demonstrate the superior performance of our method. For instance, our model exhibits a 5.2% accuracy improvement over Qwen-VL on GQA and surpasses the accuracy of Kosmos-2 by 24.7% on RefCOCO_val. We have also attained the top rank on the leaderboard of MMBench. This promising performance is achieved by training on only publicly available data, making it easily reproducible. The models, datasets, and code are publicly available at //github.com/SY-Xuan/Pink.
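Referring-expression grounding results such as the RefCOCO numbers above are conventionally scored with Acc@0.5: a predicted box counts as correct when its intersection-over-union (IoU) with the ground-truth box is at least 0.5. The sketch below implements that common metric with toy boxes; it reflects standard practice, not necessarily the paper's exact evaluation script.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def accuracy_at_05(preds, gts):
    """A prediction is correct when IoU with the ground truth >= 0.5."""
    return sum(iou(p, g) >= 0.5 for p, g in zip(preds, gts)) / len(gts)

print(accuracy_at_05([(0, 0, 10, 10)], [(1, 1, 10, 10)]))  # 1.0: IoU = 0.81
```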