In the context of linear regression, we construct a data-driven convex loss function with respect to which empirical risk minimisation yields optimal asymptotic variance in the downstream estimation of the regression coefficients. Our semiparametric approach targets the best decreasing approximation of the derivative of the log-density of the noise distribution. At the population level, this fitting process is a nonparametric extension of score matching, corresponding to a log-concave projection of the noise distribution with respect to the Fisher divergence. The procedure is computationally efficient, and we prove that it attains the minimal asymptotic covariance among all convex $M$-estimators. As an example of a non-log-concave setting, for Cauchy errors the optimal convex loss function is Huber-like, and our procedure yields an asymptotic efficiency greater than 0.87 relative to the oracle maximum likelihood estimator of the regression coefficients that uses knowledge of this error distribution; in this sense, we obtain robustness without sacrificing much efficiency. Numerical experiments confirm the practical merits of our proposal.
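As a rough illustration of the general idea of convex $M$-estimation under heavy-tailed errors (with a fixed Huber loss and hypothetical parameter choices, not the paper's data-driven loss), the following sketch compares a Huber $M$-estimator with ordinary least squares when the regression errors are Cauchy.

```python
# Toy illustration (not the paper's data-driven loss): a fixed Huber loss,
# minimised as a convex M-estimator, is far more stable than least squares
# when the regression errors are Cauchy-distributed.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 500, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.standard_cauchy(size=n)   # heavy-tailed noise

def huber(r, delta=1.0):
    # Huber loss: quadratic near zero, linear in the tails (convex).
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def empirical_risk(beta):
    return huber(y - X @ beta).mean()

beta_huber = minimize(empirical_risk, x0=np.zeros(p), method="BFGS").x
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

print("Huber M-estimate error:", np.linalg.norm(beta_huber - beta_true))
print("OLS estimate error:    ", np.linalg.norm(beta_ols - beta_true))
```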
Randomized iterative methods, such as the Kaczmarz method and its variants, have gained growing attention due to their simplicity and efficiency in solving large-scale linear systems. Meanwhile, absolute value equations (AVE) have attracted increasing interest due to their connection with the linear complementarity problem. In this paper, we investigate the application of randomized iterative methods to generalized AVE (GAVE). Our approach differs from most existing works in that we tackle GAVE with non-square coefficient matrices. We establish more comprehensive necessary and sufficient conditions for characterizing the solvability of GAVE and propose precise error bound conditions. Furthermore, we introduce a flexible and efficient randomized iterative algorithmic framework for solving GAVE, which employs sampling matrices drawn from user-specified distributions. This framework is capable of encompassing many well-known methods, including the Picard iteration method and the randomized Kaczmarz method. Leveraging our findings on solvability and error bounds, we establish both almost sure convergence and linear convergence rates for this versatile algorithmic framework. Finally, we present numerical examples to illustrate the advantages of the new algorithms.
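For context, the classical randomized Kaczmarz iteration, one of the special cases the framework above is said to encompass, can be sketched as follows for a consistent (possibly non-square) linear system; this is a standard textbook version, not the paper's GAVE algorithm.

```python
# Classical randomized Kaczmarz for a consistent system Ax = b;
# rows are sampled with probability proportional to their squared norms.
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 50
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true                      # consistent, non-square system

row_norms2 = np.sum(A**2, axis=1)
probs = row_norms2 / row_norms2.sum()

x = np.zeros(n)
for _ in range(5000):
    i = rng.choice(m, p=probs)      # sample a row index
    # project the current iterate onto the hyperplane {x : a_i^T x = b_i}
    x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```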
In this paper, we study the Boltzmann equation with uncertainties and prove that the spectral convergence of the semi-discretized numerical system holds in a combined velocity and random space, where the Fourier-spectral method is applied for approximation in the velocity space whereas the generalized polynomial chaos (gPC)-based stochastic Galerkin (SG) method is employed to discretize the random variable. Our proof is based on a delicate energy estimate for showing the well-posedness of the numerical solution as well as a rigorous control of its negative part in our well-designed functional space that involves high-order derivatives of both the velocity and random variables. This paper rigorously justifies the statement proposed in [Remark 4.4, J. Hu and S. Jin, J. Comput. Phys., 315 (2016), pp. 150-168].
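To make the gPC-based stochastic Galerkin discretization of the random variable concrete, here is a minimal sketch on a scalar model problem $u_t = -z\,u$ with $z \sim \mathrm{Uniform}(-1,1)$, expanded in Legendre polynomials; this only illustrates the SG machinery, not the Boltzmann collision operator or the velocity-space Fourier-spectral method treated in the paper.

```python
# Toy gPC stochastic Galerkin (SG) discretisation of the random variable for
# u_t = -z u, z ~ U(-1,1), with Legendre polynomials P_0, ..., P_K.
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre
from scipy.integrate import solve_ivp

K = 8                                    # highest Legendre degree retained
nodes, weights = leggauss(2 * K + 2)     # Gauss-Legendre quadrature on [-1, 1]
P = [Legendre.basis(k)(nodes) for k in range(K + 1)]

# Galerkin matrices under the uniform density rho(z) = 1/2 on [-1, 1]:
#   N[j, j] = <P_j, P_j>_rho,   M[j, k] = <z P_j, P_k>_rho
N = np.diag([np.sum(weights * 0.5 * P[j] * P[j]) for j in range(K + 1)])
M = np.array([[np.sum(weights * 0.5 * nodes * P[j] * P[k])
               for k in range(K + 1)] for j in range(K + 1)])

rhs = lambda t, u: -np.linalg.solve(N, M @ u)   # Galerkin system N u' = -M u
u0 = np.zeros(K + 1); u0[0] = 1.0               # initial data u(0, z) = 1
sol = solve_ivp(rhs, (0.0, 1.0), u0, rtol=1e-10, atol=1e-12)

mean_sg = sol.y[0, -1]              # E[u(1, .)] is the P_0 coefficient
mean_exact = np.sinh(1.0) / 1.0     # (1/2) \int_{-1}^{1} e^{-z} dz = sinh(1)
print("SG mean:", mean_sg, " exact mean:", mean_exact)
```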
Petersen's theorem, one of the earliest results in graph theory, states that any bridgeless cubic multigraph contains a perfect matching. While the original proof was neither constructive nor algorithmic, Biedl, Bose, Demaine, and Lubiw [J. Algorithms 38(1)] showed how to implement a later constructive proof by Frink in $\mathcal{O}(n\log^{4}n)$ time using a fully dynamic 2-edge-connectivity structure. Then, Diks and Sta\'nczyk [SOFSEM 2010] described a faster approach that only needs a fully dynamic connectivity structure and works in $\mathcal{O}(n\log^{2}n)$ time. Both algorithms, while reasonably simple, utilize non-trivial (2-edge-)connectivity structures. We show that this is not necessary, and in fact a structure for maintaining a dynamic tree, e.g., link-cut trees, suffices to obtain a simple $\mathcal{O}(n\log n)$ time algorithm.
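As a quick, concrete check of the statement of Petersen's theorem (using the general blossom-based matcher in networkx rather than the specialised $\mathcal{O}(n\log n)$ algorithm discussed above), one can verify that the Petersen graph itself, which is cubic and bridgeless, admits a perfect matching.

```python
# Sanity check of Petersen's theorem on the (bridgeless, cubic) Petersen graph.
import networkx as nx

G = nx.petersen_graph()                          # 3-regular, bridgeless
assert all(d == 3 for _, d in G.degree())
assert not list(nx.bridges(G))

matching = nx.max_weight_matching(G, maxcardinality=True)
matched_vertices = {v for edge in matching for v in edge}
print("perfect matching:", sorted(matching))
print("covers all vertices:", matched_vertices == set(G.nodes))
```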
We present a novel data-driven strategy for choosing the hyperparameter $k$ in the $k$-NN regression estimator without using any hold-out data. We treat the choice of the hyperparameter as an iterative procedure (over $k$) and propose a strategy, based on the idea of early stopping and the minimum discrepancy principle, that is easy to implement in practice. This model selection strategy is proven to be minimax-optimal, under the fixed-design assumption on the covariates, over some smoothness function classes, for instance the class of Lipschitz functions on a bounded domain. The novel method often improves statistical performance on artificial and real-world data sets compared with other model selection strategies, such as the hold-out method, 5-fold cross-validation, and the AIC criterion. The novelty of the strategy comes from reducing the computational time of the model selection procedure while preserving the statistical (minimax) optimality of the resulting estimator. More precisely, given a sample of size $n$, if one must choose $k$ among $\left\{ 1, \ldots, n \right\}$ and $\left\{ f^1, \ldots, f^n \right\}$ are the corresponding estimators of the regression function, the minimum discrepancy principle requires the calculation of only a fraction of these estimators, whereas this is not the case for generalized cross-validation, Akaike's AIC criterion, or the Lepskii principle.
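A minimal sketch of a discrepancy-type stopping rule for $k$-NN regression is given below; it assumes the noise level $\sigma^2$ is known and uses a simple "stop when the empirical risk reaches $\sigma^2$" rule, which captures the early-stopping flavour of the method without reproducing the paper's exact procedure.

```python
# Early stopping over k for k-NN regression via a discrepancy-type rule:
# increase k and stop as soon as the empirical risk reaches sigma^2, so only
# a fraction of the n candidate estimators is ever computed.
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 300, 0.5
x = np.sort(rng.uniform(0, 1, n))             # fixed design on [0, 1]
f_true = np.sin(2 * np.pi * x)
y = f_true + sigma * rng.normal(size=n)

dist = np.abs(x[:, None] - x[None, :])        # pairwise distances
order = np.argsort(dist, axis=1)              # neighbours sorted by distance

def knn_fit(k):
    # k-NN estimate at every design point (each point is its own neighbour).
    return y[order[:, :k]].mean(axis=1)

k_stop = None
for k in range(1, n + 1):
    risk = np.mean((y - knn_fit(k)) ** 2)     # empirical risk at this k
    if risk >= sigma ** 2:                    # discrepancy-type stopping rule
        k_stop = k
        break

print("selected k:", k_stop,
      " MSE to truth:", np.mean((knn_fit(k_stop) - f_true) ** 2))
```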
Measuring the complexity of real numbers is of major importance in computer science, for the purpose of knowing which computations are allowed. Consider a non-computable real number $s$, i.e. a real number which cannot be stored on a computer. We can store only an approximation of $s$, for instance by considering a finite bitstring representing a finite prefix of its binary expansion. For a fixed approximation error $\varepsilon>0$, the size of this finite bitstring depends on the \textit{algorithmic complexity} of the finite prefixes of the binary expansion of $s$. The \textit{algorithmic complexity} of a binary sequence $x$, often referred to as \textit{Kolmogorov complexity}, is the length of the smallest binary sequence $x'$ for which there exists an algorithm that, when presented with $x'$ as input, outputs $x$. The algorithmic complexity of the binary expansion of real numbers is widely studied, but the algorithmic complexity of other ways of representing real numbers remains poorly understood. However, knowing the algorithmic complexity of different representations may allow us to define new and more efficient strategies to represent real numbers. Several papers have established the equivalence of the algorithmic complexities of the $q$-ary expansions, with $q \in \mathbb{N}$, $q \geq 2$, i.e. representations of real numbers in any integer base. In this paper, we study the algorithmic complexity of the so-called $\beta$-expansions, which are representations of real numbers in a base $\beta \in (1,2)$ that display a much more complex behavior compared with the $q$-ary expansions. We show that for a given real number $s$, the binary expansion is a minimizer of algorithmic complexity, and that for every given $\beta \in (1,2)$, there exists a $\beta$-expansion of $s$ which achieves the lower bound on algorithmic complexity attained by the binary expansion of $s$.
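For readers unfamiliar with $\beta$-expansions, the greedy construction of such a representation for $s \in [0,1)$ and $\beta \in (1,2)$ is easy to state: repeatedly multiply by $\beta$ and emit digit 1 whenever the value reaches 1. The sketch below is a plain illustration of this representation, not of the complexity results in the paper.

```python
# Greedy beta-expansion of s in [0, 1) for a base beta in (1, 2): iterate
# x -> beta * x, emit digit 1 whenever the value reaches 1, keep the rest.
def greedy_beta_expansion(s, beta, n_digits=32):
    digits, x = [], s
    for _ in range(n_digits):
        x *= beta
        if x >= 1.0:
            digits.append(1)
            x -= 1.0
        else:
            digits.append(0)
    return digits

beta = 1.618033988749895            # golden ratio, a base in (1, 2)
digits = greedy_beta_expansion(0.3, beta)
# Reconstruct an approximation of s from its digits: s ~ sum_i d_i * beta^{-i}
approx = sum(d * beta ** -(i + 1) for i, d in enumerate(digits))
print(digits, approx)
```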
First-order shape optimization methods, in general, require a large number of iterations until they reach a locally optimal design. While higher-order methods can significantly reduce the number of iterations, they exhibit only local convergence properties, necessitating a sufficiently close initial guess. In this work, we present an unregularized shape-Newton method and combine shape optimization with homotopy (or continuation) methods in order to allow for the use of higher-order methods even if the initial design is far from a solution. The idea of homotopy methods is to continuously connect the problem of interest with a simpler problem and to follow the corresponding solution path by a predictor-corrector scheme. We use a shape-Newton method as a corrector and arbitrary order shape derivatives for the predictor. Moreover, we also apply homotopy methods to multi-objective shape optimization to efficiently obtain well-distributed points on a Pareto front. Finally, our results are substantiated with a set of numerical experiments.
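The predictor-corrector idea can be illustrated on a scalar toy root-finding problem (not a shape optimization problem): a Newton homotopy connects a trivially solved equation to the target one, an Euler step along the path serves as predictor, and Newton's method as corrector. The functions and step size below are illustrative choices only.

```python
# Minimal predictor-corrector continuation: the Newton homotopy
# H(x, t) = F(x) - (1 - t) F(x0) is solved by x0 at t = 0 and reduces to the
# target equation F(x) = 0 at t = 1.
import numpy as np

F  = lambda x: np.cos(x) - x**3           # target problem F(x) = 0
dF = lambda x: -np.sin(x) - 3 * x**2

x0 = 1.0                                  # initial design, H(x0, 0) = 0
H  = lambda x, t: F(x) - (1 - t) * F(x0)
Hx = dF                                   # dH/dx
Ht = lambda x, t: F(x0)                   # dH/dt

x, t, dt = x0, 0.0, 0.1
while t < 1.0:
    t_new = min(t + dt, 1.0)
    x = x - Ht(x, t) / Hx(x) * (t_new - t)    # Euler predictor along the path
    for _ in range(5):                        # Newton corrector at t = t_new
        x -= H(x, t_new) / Hx(x)
    t = t_new

print("root of F:", x, " residual:", F(x))
```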
Wilf-Zeilberger pairs are fundamental in the algorithmic theory of Wilf and Zeilberger for computer-generated proofs of combinatorial identities. Wilf-Zeilberger forms are their high-dimensional generalizations, which can be used for proving and discovering convergence acceleration formulas. This paper presents a structural description of all possible rational forms of this kind, which can be viewed as an additive analog of the classical Ore-Sato theorem. Based on this analog, we show a structural decomposition of so-called multivariate hyperarithmetic terms, which extend multivariate hypergeometric terms to the additive setting.
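To recall what a Wilf-Zeilberger pair is, the snippet below numerically checks the WZ relation $F(n+1,k) - F(n,k) = G(n,k+1) - G(n,k)$ for the classical pair certifying $\sum_k \binom{n}{k} = 2^n$; the specific identity is chosen here purely for illustration.

```python
# Numerical check of a classical Wilf-Zeilberger pair: with
# F(n, k) = C(n, k) / 2^n and G(n, k) = -C(n, k - 1) / 2^(n + 1),
# the relation F(n+1, k) - F(n, k) = G(n, k+1) - G(n, k) holds for all n, k.
from math import comb
from fractions import Fraction

def F(n, k):
    return Fraction(comb(n, k), 2 ** n) if 0 <= k <= n else Fraction(0)

def G(n, k):
    return -Fraction(comb(n, k - 1), 2 ** (n + 1)) if 1 <= k <= n + 1 else Fraction(0)

ok = all(F(n + 1, k) - F(n, k) == G(n, k + 1) - G(n, k)
         for n in range(0, 20) for k in range(-2, 25))
print("WZ relation verified on a grid:", ok)
```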
We present a new algorithm for imitation learning in infinite horizon linear MDPs dubbed ILARL which greatly improves the bound on the number of trajectories that the learner needs to sample from the environment. In particular, we remove exploration assumptions required in previous works and we improve the dependence on the desired accuracy $\epsilon$ from $\mathcal{O}(\epsilon^{-5})$ to $\mathcal{O}(\epsilon^{-4})$. Our result relies on a connection between imitation learning and online learning in MDPs with adversarial losses. For the latter setting, we present the first result for infinite horizon linear MDPs, which may be of independent interest. Moreover, we are able to provide a strengthened result for the finite horizon case where we achieve $\mathcal{O}(\epsilon^{-2})$. Numerical experiments with linear function approximation show that ILARL outperforms other commonly used algorithms.
In this paper, we introduce a novel approach to improve the diversity of Top-N recommendations while maintaining recommendation performance. Our approach employs a user-centric pre-processing strategy aimed at exposing users to a wide array of content categories and topics. We personalize this strategy by selectively adding and removing a percentage of interactions from user profiles. This personalization ensures we remain closely aligned with user preferences while gradually introducing distribution shifts. Our pre-processing technique offers flexibility and can seamlessly integrate into any recommender architecture. To evaluate our approach, we run extensive experiments on two publicly available data sets for news and book recommendations. We test various standard and neural network-based recommender system algorithms. Our results show that our approach generates diverse recommendations, ensuring users are exposed to a wider range of items. Furthermore, leveraging pre-processed data for training leads to recommender systems achieving performance levels comparable to, and in some cases, better than those trained on original, unmodified data. Additionally, our approach promotes provider fairness by facilitating exposure to minority or niche categories.
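The pre-processing idea can be sketched as follows on toy data; the selection rules, percentages, and item/category names below are illustrative assumptions, not the paper's exact procedure.

```python
# Toy sketch of category-aware profile pre-processing: drop a percentage of
# interactions from a user's dominant category and add unseen items from the
# least-represented categories, before training a recommender on the result.
import random
from collections import Counter

random.seed(3)
item_category = {f"item{i}": f"cat{i % 4}" for i in range(40)}   # toy catalogue
all_categories = set(item_category.values())

def rebalance_profile(profile, alter_frac=0.2):
    counts = Counter(item_category[i] for i in profile)
    n_alter = max(1, int(alter_frac * len(profile)))

    # Remove a few interactions from the most-represented category ...
    top_cat = counts.most_common(1)[0][0]
    removable = [i for i in profile if item_category[i] == top_cat]
    removed = set(random.sample(removable, min(n_alter, len(removable))))
    kept = [i for i in profile if i not in removed]

    # ... and add unseen items from the least-represented categories.
    min_count = min(counts.get(c, 0) for c in all_categories)
    rare = {c for c in all_categories if counts.get(c, 0) == min_count}
    candidates = [i for i, c in item_category.items()
                  if c in rare and i not in set(profile)]
    return kept + random.sample(candidates, min(n_alter, len(candidates)))

profile = [f"item{i}" for i in (0, 4, 8, 12, 16, 20, 1, 5, 2)]   # mostly cat0
print(rebalance_profile(profile))
```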
Matching on a low-dimensional vector of scalar covariates consists of constructing groups of individuals in which each individual in one group is within a pre-specified distance of an individual in another group. However, matching in high-dimensional spaces is more challenging because the distance can be sensitive to implementation details, caliper width, and measurement error of observations. To partially address these problems, we propose to use extensive sensitivity analyses and identify the main sources of variation and bias. We illustrate these concepts by examining the racial disparity in all-cause mortality in the US using the National Health and Nutrition Examination Survey (NHANES 2003-2006). In particular, we match African Americans to Caucasian Americans on age, gender, BMI and objectively measured physical activity (PA). PA is measured every minute using accelerometers for up to seven days and then transformed into an empirical distribution of all of the minute-level observations. The Wasserstein metric is used as the measure of distance between these participant-specific distributions.
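A minimal sketch of the distance computation, using synthetic activity data rather than NHANES, is shown below: each participant is summarised by the empirical distribution of minute-level values, and pairs across groups are compared with the 1-Wasserstein distance before a simple nearest-neighbour match.

```python
# Pairwise 1-Wasserstein distances between participant-specific empirical
# distributions of minute-level activity, followed by greedy matching.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(4)
# Synthetic minute-level activity (7 days) for two groups of participants.
group_a = [rng.gamma(2.0, 50 + 10 * j, size=7 * 24 * 60) for j in range(5)]
group_b = [rng.gamma(2.0, 45 + 12 * j, size=7 * 24 * 60) for j in range(5)]

D = np.array([[wasserstein_distance(a, b) for b in group_b] for a in group_a])

# Greedy nearest-neighbour match of each group-A participant to group B.
for i, row in enumerate(D):
    print(f"participant A{i} matched to B{row.argmin()} (distance {row.min():.1f})")
```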