Computing the edit distance of two strings is one of the most basic problems in computer science and combinatorial optimization. Tree edit distance is a natural generalization of edit distance in which the task is to compute a measure of dissimilarity between two (unweighted) rooted trees with node labels. Perhaps the most notable recent application of tree edit distance is in NoSQL big databases, such as MongoDB, where each row of the database is a JSON document represented as a labeled rooted tree, and finding dissimilarity between two rows is a basic operation. Until recently, the fastest algorithm for tree edit distance ran in cubic time (Demaine, Mozes, Rossman, Weimann; TALG'10); however, Mao (FOCS'21) broke the cubic barrier for the tree edit distance problem using fast matrix multiplication. Given a parameter $k$ as an upper bound on the distance, an $O(n+k^2)$-time algorithm for edit distance has been known since the 1980s due to the works of Myers (Algorithmica'86) and Landau and Vishkin (JCSS'88). The existence of an $\tilde{O}(n+\mathrm{poly}(k))$-time algorithm for tree edit distance has been posed as an open question, e.g., by Akmal and Jin (ICALP'21), who gave a state-of-the-art $\tilde{O}(nk^2)$-time algorithm. In this paper, we answer this question positively.
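As context for the $O(n+k^2)$ bound, here is a minimal Python sketch of the diagonal ("furthest-reaching") formulation behind the string edit distance algorithms of Myers and of Landau and Vishkin. With the naive match extension used below it runs in $O(nk)$ time; the $O(n+k^2)$ bound follows once the extension step is answered by constant-time LCE queries (e.g., via a suffix tree). The function name and interface are illustrative, not taken from the cited papers.

```python
def bounded_edit_distance(a: str, b: str, k: int):
    """Return the edit distance of a and b if it is at most k, else None."""
    n, m = len(a), len(b)
    if abs(n - m) > k:
        return None
    NEG = -(10 ** 9)  # sentinel for diagonals not yet reachable

    def slide(i, d):
        # Naive "LCE": extend a run of matches along diagonal d = j - i.
        while i < n and i + d < m and a[i] == b[i + d]:
            i += 1
        return i

    # prev[d] = furthest row i reached on diagonal d using e errors.
    prev = {d: NEG for d in range(-k - 1, k + 2)}
    prev[0] = slide(0, 0)
    if n == m and prev[0] >= n:
        return 0
    for e in range(1, k + 1):
        cur = {d: NEG for d in range(-k - 1, k + 2)}
        for d in range(-e, e + 1):
            row = max(prev[d] + 1,      # substitution
                      prev[d - 1],      # insertion into a
                      prev[d + 1] + 1)  # deletion from a
            row = min(row, n, m - d)    # stay inside the DP grid
            if row >= 0:
                cur[d] = slide(row, d)
        if cur[m - n] >= n:             # all of a (and hence b) consumed
            return e
        prev = cur
    return None

print(bounded_edit_distance("kitten", "sitting", 5))  # 3
```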
As one of the challenging NLP tasks, designing math word problem (MWP) solvers has attracted increasing research attention over the past few years. In previous work, models designed by taking into account the properties of the binary tree structure of mathematical expressions at the output side have achieved better performance. However, the expressions corresponding to an MWP are often diverse (e.g., $n_1+n_2 \times n_3-n_4$, $n_3\times n_2-n_4+n_1$, etc.), and so are the corresponding binary trees, which creates difficulties in model learning due to the non-deterministic output space. In this paper, we propose the Structure-Unified M-Tree Coding Solver (SUMC-Solver), which applies a tree with any number $M$ of branches (an M-tree) to unify the output structures. To learn the M-tree, we use a mapping to convert the M-tree into M-tree codes, where the codes store the information of the paths from the tree root to the leaf nodes together with the information of the leaf nodes themselves, and then devise a Sequence-to-Code (seq2code) model to generate the codes. Experimental results on the widely used MAWPS and Math23K datasets demonstrate that SUMC-Solver not only outperforms several state-of-the-art models under similar experimental settings but also performs much better under low-resource conditions.
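As a quick illustration of the output non-determinism (this is only an illustration, not the authors' M-tree coding), Python's own ast module shows that two value-equivalent expressions parse to different binary trees:

```python
import ast

# Two expressions that solve the same MWP parse to different trees,
# so a tree-structured decoder faces a non-deterministic target space.
# n1..n4 stand for quantities extracted from the problem text.
t1 = ast.dump(ast.parse("n1 + n2 * n3 - n4", mode="eval"))
t2 = ast.dump(ast.parse("n3 * n2 - n4 + n1", mode="eval"))
print(t1 == t2)  # False: equivalent expressions, different tree structures
```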
Stochastic versions of proximal methods have gained much attention in statistics and machine learning. These algorithms tend to admit simple, scalable forms and enjoy numerical stability via implicit updates. In this work, we propose and analyze a stochastic version of the recently proposed proximal distance algorithm, a class of iterative optimization methods that recover the solution of a desired constrained estimation problem as the penalty parameter $\rho \rightarrow \infty$. By uncovering connections to related stochastic proximal methods and interpreting the penalty parameter as the learning rate, we justify heuristics used in practical manifestations of the proximal distance method, establishing their convergence guarantees for the first time. Moreover, we extend recent theoretical devices to establish finite error bounds and a complete characterization of convergence rate regimes. We validate our analysis via a thorough empirical study, which also shows that, unsurprisingly, the proposed method outpaces batch versions on popular learning tasks.
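To make the iteration concrete, here is a minimal sketch (under our own illustrative assumptions, not the paper's exact scheme) of a stochastic proximal distance step for least squares under a nonnegativity constraint: project the iterate onto the constraint set, then apply the proximal operator of a single sampled loss term, with an increasing penalty $\rho_t$ playing the role of an inverse learning rate. The schedule and constants are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 10
A = rng.standard_normal((n, p))
x_true = np.maximum(rng.standard_normal(p), 0.0)     # nonnegative ground truth
b = A @ x_true + 0.1 * rng.standard_normal(n)

x = np.zeros(p)
for t in range(1, 20001):
    rho = 0.1 * t ** 0.6          # increasing penalty = decaying learning rate
    i = rng.integers(n)
    z = np.maximum(x, 0.0)        # project the iterate onto the constraint set
    a, y = A[i], b[i]
    # Implicit update: x = argmin 0.5*(a@x - y)**2 + (rho/2)*||x - z||^2,
    # available in closed form for a single least-squares term.
    x = z + (y - a @ z) / (rho + a @ a) * a
print(np.linalg.norm(x - x_true))  # small residual error
```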
Mixed data arise when observations are described by a mixture of numerical and categorical variables. The R package PCAmixdata extends standard multivariate analysis methods to this type of data, allowing description, exploration, and visualization. The key methods included in the package are principal component analysis for mixed data (PCAmix), varimax-like orthogonal rotation for PCAmix, and multiple factor analysis for mixed multi-table data. This paper proposes a unified mathematical presentation of the different methods with common notation, together with a summarised presentation of the three algorithms and details that help the user understand the graphical and numerical outputs of the corresponding R functions. This allows the user to easily provide relevant interpretations of the results obtained. The three main methods are illustrated on a real dataset composed of four data tables characterizing living conditions in different municipalities in the Gironde region of southwest France.
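PCAmixdata itself is an R package; for uniformity with the other sketches in this collection, the following Python snippet shows the core idea behind a PCAmix-style analysis (in the spirit of factor analysis of mixed data): standardize the numerical columns, center and rescale the categorical indicator columns, and take an SVD. This is a simplified sketch, not the package's implementation.

```python
import numpy as np
import pandas as pd

def pca_mix(df: pd.DataFrame, n_comp: int = 2):
    """Sketch of PCA for mixed data: numeric columns are standardized,
    categorical indicators get a chi-square-type scaling, then SVD."""
    num = df.select_dtypes(include=np.number)
    cat = df.select_dtypes(exclude=np.number)
    blocks = []
    if not num.empty:
        blocks.append(((num - num.mean()) / num.std(ddof=0)).to_numpy())
    for col in cat:
        G = pd.get_dummies(cat[col]).to_numpy(dtype=float)
        prop = G.mean(axis=0)                       # category proportions
        blocks.append((G - prop) / np.sqrt(prop))   # center and rescale
    Z = np.hstack(blocks) / np.sqrt(len(df))
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    scores = np.sqrt(len(df)) * U[:, :n_comp] * s[:n_comp]
    return scores, s[:n_comp] ** 2                  # PC scores, eigenvalues
```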
Machine learning models built on datasets containing discriminatory instances, attributable to various underlying factors, result in biased and unfair outcomes. It is well established, and intuitive, that existing bias mitigation strategies often sacrifice accuracy in order to ensure fairness. But when an AI engine's predictions are used for decision making that affects revenue or operational efficiency, as in credit risk modelling, the business would prefer that accuracy be reasonably preserved. This conflicting requirement of maintaining both accuracy and fairness in AI motivates our research. In this paper, we propose a fresh approach for the simultaneous improvement of fairness and accuracy of ML models within a realistic paradigm. The essence of our work is a data preprocessing technique that detects instances responsible for a specific kind of bias and removes them from the dataset before training; we further show that such instance removal has no adverse impact on model accuracy. In particular, we claim that in problem settings where instances with similar features but different labels exist, caused by variation in protected attributes, an inherent bias is induced in the dataset, which can be identified and mitigated through our novel scheme. Our experimental evaluation on two open-source datasets demonstrates how the proposed method can mitigate bias while improving, rather than degrading, accuracy, and offers the end user a degree of control over the process.
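The following sketch shows one way such conflicting instances could be detected and removed; exact agreement on the non-protected features stands in for the paper's notion of similar features, and all names are illustrative, not the authors' scheme.

```python
import pandas as pd

def remove_conflicting_instances(df, protected, label):
    """Drop groups of rows that agree on every non-protected feature
    but differ in both the protected attribute and the label."""
    feats = [c for c in df.columns if c not in (protected, label)]
    g = df.groupby(feats)[[protected, label]].nunique()
    conflicts = g[(g[protected] > 1) & (g[label] > 1)].index
    keep = ~df.set_index(feats).index.isin(conflicts)
    return df[keep].reset_index(drop=True)

df = pd.DataFrame({"income": [40, 40, 70],
                   "sex":    ["F", "M", "F"],
                   "label":  [0, 1, 1]})
# Rows 0 and 1 match on income but differ in sex and label: both dropped.
print(remove_conflicting_instances(df, protected="sex", label="label"))
```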
The independent set polynomial is important in many areas. For every integer $\Delta\geq 2$, the Shearer threshold is the value $\lambda^*(\Delta)=(\Delta-1)^{\Delta-1}/\Delta^{\Delta}$. It is known that for $\lambda < - \lambda^*(\Delta)$, there are graphs~$G$ with maximum degree~$\Delta$ whose independent set polynomial, evaluated at~$\lambda$, is at most~$0$. Also, there are no such graphs for any $\lambda > -\lambda^*(\Delta)$. This paper is motivated by the computational problem of approximating the independent set polynomial when $\lambda < - \lambda^*(\Delta)$. The key issue in complexity bounds for this problem is "implementation". Informally, an implementation of a real number $\lambda'$ is a graph whose hard-core partition function, evaluated at~$\lambda$, simulates a vertex-weight of~$\lambda'$ in the sense that $\lambda'$ is the ratio between the contribution to the partition function from independent sets containing a certain vertex and the contribution from independent sets that do not contain that vertex. Implementations are the cornerstone of intractability results for the problem of approximately evaluating the independent set polynomial. Our main result is that, for any $\lambda < - \lambda^*(\Delta)$, it is possible to implement a set of values that is dense over the reals. The result is tight in the sense that it is not possible to implement a set of values that is dense over the reals for any $\lambda > -\lambda^*(\Delta)$. Our result has already been used in a paper with Bezáková (STOC 2018) to show that it is \#P-hard to approximate the evaluation of the independent set polynomial on graphs of degree at most~$\Delta$ at any value $\lambda<-\lambda^*(\Delta)$. In the appendix, we give an additional, incomparable inapproximability result (strengthening the inapproximability bound to an exponential factor, but weakening the hardness to NP-hardness).
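Since an implemented value is just a ratio of partition-function contributions, it can be verified by brute force on small graphs. The sketch below (exponential-time, purely illustrative) computes the value implemented at a chosen vertex; for instance, a single edge implements $\lambda/(1+\lambda)$.

```python
from itertools import combinations

def implemented_value(n, edges, v, lam):
    """Ratio of hard-core partition function contributions at vertex v:
    independent sets containing v versus those avoiding v."""
    adj = {u: set() for u in range(n)}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    z_in = z_out = 0.0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            sset = set(S)
            if all(adj[u].isdisjoint(sset) for u in S):  # independent set
                if v in sset:
                    z_in += lam ** r
                else:
                    z_out += lam ** r
    return z_in / z_out

# One edge {0, 1}: implements lam/(1 + lam) at vertex 0.
print(implemented_value(2, [(0, 1)], 0, -0.2))  # -0.25
```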
A recent work of Larsen [Lar23] gave a faster combinatorial alternative to Bansal's SDP algorithm for finding a coloring $x\in\{-1,1\}^n$ that approximately minimizes the discrepancy $\mathrm{disc}(A,x) := \| A x \|_{\infty}$ of a general real-valued $m\times n$ matrix $A$. Larsen's algorithm runs in $\widetilde{O}(mn^2)$ time, compared to Bansal's $\widetilde{O}(mn^{4.5})$-time algorithm, at the price of a slightly weaker logarithmic approximation ratio in terms of the hereditary discrepancy of $A$ [Ban10]. In this work we present a combinatorial $\widetilde{O}(\mathrm{nnz}(A) + n^3)$-time algorithm with the same approximation guarantee as Larsen's, which is optimal for tall matrices $m=\mathrm{poly}(n)$. Using a more intricate analysis and fast matrix multiplication, we achieve $\widetilde{O}(\mathrm{nnz}(A) + n^{2.53})$ time, which breaks the cubic runtime for square matrices and bypasses the barrier of linear-programming approaches [ES14], for which input-sparsity time is currently out of reach. Our algorithm relies on two main ideas: (i) a new sketching technique for finding a projection matrix with a short $\ell_2$-basis using implicit leverage-score sampling; (ii) a data structure for a faster implementation of the iterative Edge-Walk partial-coloring algorithm of Lovett-Meka, using an alternative analysis that enables ``lazy'' batch-updates with low-rank corrections. Our result nearly closes the computational gap between real-valued and binary matrices (set systems), for which input-sparsity-time coloring was very recently obtained [JSS23].
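For reference, the objective itself is cheap to evaluate. The sketch below computes $\mathrm{disc}(A,x)$ and the trivial random-signs baseline, which already achieves $O(\sqrt{n\log m})$ discrepancy with high probability and is the bar the algorithms above improve upon; the matrix and sizes are arbitrary.

```python
import numpy as np

def disc(A, x):
    """disc(A, x) = ||A x||_inf for a coloring x in {-1, 1}^n."""
    return float(np.abs(A @ x).max())

rng = np.random.default_rng(1)
m, n = 2000, 200
A = rng.integers(0, 2, size=(m, n)).astype(float)  # a random set system
x = rng.choice([-1.0, 1.0], size=n)                # random coloring baseline
print(disc(A, x))  # roughly sqrt(n log m) scale for this A
```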
The profitable tour problem (PTP) is a well-known NP-hard routing problem that searches for a tour visiting a subset of customers while maximizing the profit, defined as the difference between the total revenue collected and the traveling costs. PTP is known to be solvable in polynomial time when special structures of the underlying graph are considered. However, the computational complexity of the corresponding probabilistic generalizations is still an open issue in many cases. In this paper, we analyze the probabilistic PTP where customers are located on a tree and each requires service, with a known probability, at a predefined prize. The objective is to select a priori a subset of customers with whom to commit the service so as to maximize the expected profit. We provide a polynomial-time algorithm computing the optimal solution in $O(n^2)$ time, where $n$ is the number of nodes in the tree.
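The paper's probabilistic algorithm is not reproduced here, but the deterministic special case on a tree admits a simple dynamic program that conveys the structure: a tour serving a customer subset traverses every edge of the induced subtree twice, so each child branch is worth including only if its gain covers the doubled edge cost. A minimal sketch, with an illustrative instance:

```python
def best_profit(children, prize, root=0):
    """Deterministic PTP on a tree rooted at the depot:
    gain(v) = prize[v] + sum over children c of max(0, gain(c) - 2*cost)."""
    def gain(v):
        return prize[v] + sum(max(0.0, gain(c) - 2.0 * w)
                              for c, w in children[v])
    return gain(root)

children = {0: [(1, 3.0), (2, 1.0)], 1: [], 2: [(3, 2.0)]}
prize = {0: 0.0, 1: 5.0, 2: 1.0, 3: 6.0}
# Serving {2, 3} earns 1 + 6 and pays 2 * (1 + 2), so profit 1.
print(best_profit(children, prize))  # 1.0
```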
V. Levenshtein first proposed the sequence reconstruction problem in 2001. This problem studies the model in which the same sequence from some set is transmitted over multiple channels and the decoder receives the distinct outputs. Assume that the transmitted sequence is at distance at least $d$ from every other sequence of the set and that there are at most $r$ errors in every channel. The sequence reconstruction problem is then to find the minimum number of channels required to exactly recover the transmitted sequence; this number has to be greater than the maximum size of the intersection of two metric balls of radius $r$ whose centers are at distance at least $d$. In this paper, we study the sequence reconstruction problem for permutations under the Hamming distance. In this model, we define a Cayley graph and find the exact value of the largest intersection of two metric balls in this graph under the Hamming distance for $r=4$ with $d\geqslant 5$, and for $d=2r$.
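The quantity in question, the largest intersection of two radius-$r$ balls whose centers are at distance at least $d$, can be checked by brute force for tiny $n$; by the left-invariance of the Hamming distance on permutations, one center can be fixed. A purely illustrative sketch:

```python
from itertools import permutations

def hamming(p, q):
    return sum(a != b for a, b in zip(p, q))

def max_ball_intersection(n, d, r):
    """Largest |B_r(p) ∩ B_r(q)| over permutations with d(p, q) >= d."""
    perms = list(permutations(range(n)))
    p = perms[0]  # fix one center by symmetry of the Cayley graph
    best = 0
    for q in perms:
        if hamming(p, q) >= d:
            size = sum(1 for s in perms
                       if hamming(s, p) <= r and hamming(s, q) <= r)
            best = max(best, size)
    return best

print(max_ball_intersection(5, 4, 2))  # feasible only for very small n
```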
It is now well documented that genetic covariance between functionally related traits leads to an uneven distribution of genetic variation across multivariate trait combinations, and possibly to a large part of phenotype space being inaccessible to evolution. How the size of this nearly-null genetic space translates to the broader phenome level is unknown. High-dimensional phenotype data to address these questions are now within reach; however, incorporating these data into genetic analyses remains a challenge. Multi-trait genetic analyses of more than a handful of traits are slow and often fail to converge when fit with REML. This makes it challenging to estimate the genetic covariance matrix ($\mathbf{G}$) underlying thousands of traits, let alone study its properties. We present a previously proposed REML algorithm that is feasible for high-dimensional genetic studies in the specific setting of a balanced nested half-sib design, common in quantitative genetics. We show that it substantially outperforms other common approaches when the number of traits is large, and we use it to investigate the bias in the estimated eigenvalues of $\mathbf{G}$ and the size of the nearly-null genetic subspace. We show that the high-dimensional biases observed are qualitatively similar to those substantiated by asymptotic approximation in the simpler setting of a sample covariance matrix based on i.i.d. vector observations, and that interpreting the estimated size of the nearly-null genetic subspace requires considerable caution in high-dimensional studies of genetic variation. Our results provide the foundation for future research characterizing the asymptotic approximation of estimated genetic eigenvalues, and a statistical null distribution for phenome-wide studies of genetic variation.
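The REML algorithm itself is beyond an abstract-level sketch, but the eigenvalue-bias phenomenon is easy to see with a method-of-moments analogue for the same balanced half-sib design (the constants and the diagonal true $\mathbf{G}$ below are illustrative assumptions): between- and within-sire mean squares give an unbiased estimate of $\mathbf{G}$, yet its sample eigenvalues are strongly dispersed, inflated at the top and deflated, often negative, in the nearly-null subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
s, n, p = 200, 10, 30                     # sires, offspring per sire, traits
G = np.diag(np.linspace(2.0, 0.0, p))     # true G with a nearly-null subspace
R = np.eye(p)                             # residual covariance
u = rng.multivariate_normal(np.zeros(p), G / 4, size=s)        # sire effects
Y = u[:, None, :] + rng.multivariate_normal(np.zeros(p), R, size=(s, n))

Ybar = Y.mean(axis=1)                                  # sire means
MSb = n * np.cov(Ybar, rowvar=False)                   # between-sire mean squares
W = Y - Ybar[:, None, :]
MSw = np.einsum("sni,snj->ij", W, W) / (s * (n - 1))   # within-sire mean squares
G_hat = 4 * (MSb - MSw) / n                            # E[G_hat] = G
eig = np.linalg.eigvalsh(G_hat)[::-1]
print(eig[:3], eig[-3:])  # top eigenvalues inflated, smallest ones negative
```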
Minimum distance estimation methodology based on the empirical distribution function has been popular due to its desirable properties, including robustness. Even though the statistical literature is awash with research on minimum distance estimation, most of it is confined to theoretical findings: only a few statisticians have conducted research on applying the method to real-world problems. Through this paper, we extend the domain of application of this methodology to various applied fields by providing a solution to a rather challenging and complicated computational problem. The problem we tackle is image segmentation, which is used in various fields. We propose a novel method based on classical minimum distance estimation theory to solve the image segmentation problem. The performance of the proposed method is then further elevated by integrating it with the "segmenting-together" strategy.
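As a pointer to what the underlying methodology looks like, here is a classical Cramér-von Mises-type minimum distance estimator for a normal location parameter (a textbook instance, not the paper's segmentation method): it minimizes the distance between the model CDF and the empirical CDF and inherits the robustness mentioned above.

```python
import numpy as np
from scipy import optimize, stats

def mde_normal_mean(x):
    """Cramer-von Mises-type minimum distance estimate of a normal mean:
    minimize the squared distance between model CDF and empirical CDF."""
    x = np.sort(np.asarray(x))
    n = len(x)
    ecdf_mid = (np.arange(1, n + 1) - 0.5) / n   # midpoint ECDF values

    def cvm(mu):
        return np.mean((stats.norm.cdf(x, loc=mu) - ecdf_mid) ** 2)

    return optimize.minimize_scalar(cvm).x

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(3.0, 1.0, 200), [50.0, 60.0]])  # outliers
print(mde_normal_mean(sample))  # stays near 3 despite the gross outliers
```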