A dataset with two labels is linearly separable if it can be split into its two classes by a hyperplane. This is a curse for some statistical tools (such as logistic regression) but a blessing for others (e.g. support vector machines). Recently, the following question has regained interest: what is the probability that the data are linearly separable? We provide a formula for the probability of linear separability for Gaussian features and labels that depend only on one marginal of the features (as in generalized linear models). In this setting, we derive an upper bound that complements the recent result by Hayakawa, Lyons, and Oberhauser [2023], and a sharp upper bound for sign-flip noise. To prove our results, we exploit the fact that this probability can be expressed as a sum of the intrinsic volumes of a polyhedral cone of the form $\text{span}\{v\}\oplus[0,\infty)^n$, as shown in Cand\`es and Sur [2020]. After providing the inequality description of this cone, and an algorithm to project onto it, we calculate its intrinsic volumes. In doing so, we encounter Youden's demon problem, for which we provide a formula following Kabluchko and Zaporozhets [2020]. The key insight of this work is the following: the number of correctly labeled observations in the data affects the structure of this polyhedral cone, allowing insights from geometry to be translated into statistics.
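For orientation, recall the classical benchmark (not the formula derived here): when the $\pm 1$ labels are symmetric and independent of the $n$ Gaussian feature vectors in $\mathbb{R}^d$ (which are then in general position almost surely), Cover's function-counting argument gives the probability of separability by a hyperplane through the origin as
\[
\mathbb{P}(\text{linearly separable}) \;=\; 2^{1-n}\sum_{k=0}^{d-1}\binom{n-1}{k}.
\]
The setting considered here differs in that the labels depend on one marginal of the features, which is what brings the cone $\text{span}\{v\}\oplus[0,\infty)^n$ and its intrinsic volumes into play.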
This paper introduces a new numerical scheme for a system that includes evolution equations describing a perfect plasticity model with a time-dependent yield surface. We show that the solution of the proposed scheme is stable in suitable norms. Moreover, this stability yields the existence of an exact solution, and we prove that the solution of the proposed scheme converges strongly to the exact solution in suitable norms.
Underdetermined generalized absolute value equations (GAVE) arise in real applications. The underdetermined GAVE may have no solution, a unique solution, finitely many solutions, or infinitely many solutions. This paper aims to give sufficient conditions that guarantee the existence or nonexistence of solutions of the underdetermined GAVE. In particular, sufficient conditions are given under which the underdetermined GAVE has infinitely many solutions with a certain sign pattern, or with every sign pattern. In addition, iterative methods are developed to compute a solution of the underdetermined GAVE. Some existing results on the square GAVE are extended.
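For concreteness, we recall the standard form of the GAVE, where $|x|$ is taken componentwise and the underdetermined case means $m<n$:
\[
Ax + B|x| = b, \qquad A,\,B \in \mathbb{R}^{m\times n},\ b\in\mathbb{R}^m,\ m<n.
\]
A generic Picard-type iteration for such equations, given here only as an illustrative sketch and not necessarily one of the iterative methods developed in the paper, is $x^{k+1} = A^{\dagger}\bigl(b - B|x^{k}|\bigr)$, where $A^{\dagger}$ denotes the Moore-Penrose pseudoinverse of $A$.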
We consider the task of constructing confidence intervals with differential privacy. We propose two private variants of the non-parametric bootstrap, which privately compute the median of the results of multiple "little" bootstraps run on partitions of the data, and we give asymptotic bounds on the coverage error of the resulting confidence intervals. For a fixed differential privacy parameter $\epsilon$, our methods enjoy the same error rates as the non-private bootstrap, up to logarithmic factors in the sample size $n$. We empirically validate the performance of our methods for mean estimation, median estimation, and logistic regression on both real and synthetic data. Our methods achieve coverage accuracy similar to that of existing methods (and of non-private baselines) while providing confidence intervals that are notably shorter ($\gtrsim 10$ times) than those of previous approaches.
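As a rough illustration of the aggregation structure only, the following toy sketch runs "little" bootstraps on partitions of the data and aggregates the interval endpoints by their median; the differentially private release of that median (the key step of the proposed methods) is deliberately omitted, and all names below are hypothetical.
\begin{verbatim}
# Toy, non-private skeleton of a median-of-little-bootstraps interval;
# the differentially private median step is omitted here.
import numpy as np

def little_bootstrap_ci(x, k=10, B=200, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(x), k)  # disjoint partitions
    lowers, uppers = [], []
    for p in parts:
        boots = [np.mean(rng.choice(p, size=p.size, replace=True))
                 for _ in range(B)]                # bootstrap means
        lowers.append(np.quantile(boots, alpha / 2))
        uppers.append(np.quantile(boots, 1 - alpha / 2))
    # Aggregate across partitions by the median of the endpoints.
    return np.median(lowers), np.median(uppers)

x = np.random.default_rng(1).normal(loc=2.0, size=2000)
print(little_bootstrap_ci(x))
\end{verbatim}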
Scoring rules promote rational and honest decision-making, which is becoming increasingly important for automated procedures in `auto-ML'. In this paper we survey common squared and logarithmic scoring rules for survival analysis and determine which are proper and which are improper. We prove that commonly utilised squared and logarithmic scoring rules that are claimed to be proper, such as the Integrated Survival Brier Score (ISBS), are in fact improper. We further prove that, under a strict set of assumptions, a class of scoring rules is strictly proper for what we term `approximate' survival losses. Despite the difference in properness, experiments on simulated and real-world datasets show no major difference between the improper and proper versions of the widely-used ISBS, so previous experiments that used the original score for evaluation can still be reasonably trusted. We nevertheless advocate the use of proper scoring rules, as even minor differences between losses can have important implications in automated processes such as model tuning. We hope our findings encourage further research into the properties of survival measures so that robust and honest evaluation of survival models can be achieved.
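For reference, the ISBS is usually written in its inverse-probability-of-censoring-weighted form (conventions for the weights and the integration range vary): with $\hat S(t\mid x_i)$ the predicted survival function, $(t_i,\delta_i)$ the observed time and event indicator, and $\hat G$ the Kaplan-Meier estimate of the censoring distribution, the time-$t$ Brier score is
\[
\mathrm{BS}(t) \;=\; \frac{1}{n}\sum_{i=1}^{n}\left[
\frac{\hat S(t\mid x_i)^2\,\mathbf{1}(t_i\le t,\ \delta_i=1)}{\hat G(t_i)}
+\frac{\bigl(1-\hat S(t\mid x_i)\bigr)^2\,\mathbf{1}(t_i> t)}{\hat G(t)}
\right],
\]
and the ISBS integrates $\mathrm{BS}(t)$ over a time window.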
Model-based clustering of moderate- or large-dimensional data is notoriously difficult. We propose a model for simultaneous dimensionality reduction and clustering by assuming a mixture model for a set of latent scores, which are then linked to the observations via a Gaussian latent factor model. This approach was recently investigated by Chandra et al. (2023). The authors use a factor-analytic representation and assume a mixture model for the latent factors. However, performance can deteriorate in the presence of model misspecification. Assuming a repulsive point process prior for the component-specific means of the mixture for the latent scores is shown to yield a more robust model that outperforms the standard mixture model for the latent factors in several simulated scenarios. The repulsive point process must be anisotropic to favor well-separated clusters of data, and its density should be tractable for efficient posterior inference. We address these issues by proposing a general construction for anisotropic determinantal point processes. We illustrate our model in simulations as well as on a plant species co-occurrence dataset.
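Schematically, and in our own notation, a determinantal point process prior penalizes component means that lie close together: with a kernel $C$, the prior density over the component-specific means $\mu_1,\dots,\mu_k$ is proportional to a determinant,
\[
p(\mu_1,\dots,\mu_k)\;\propto\;\det\bigl[C(\mu_i,\mu_j)\bigr]_{i,j=1}^{k},
\]
which vanishes as any two means coincide; the construction proposed here concerns kernels $C$ that are anisotropic while keeping this density tractable.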
This paper deals with unit root issues in time series analysis. It has been known for a long time that unit root tests may be flawed when a series, although stationary, has a root close to unity. This has motivated recent papers dedicated to autoregressive processes in which the bridge between stability and instability is expressed by means of time-varying coefficients. The process we consider has a companion matrix $A_{n}$ with spectral radius $\rho(A_{n}) < 1$ satisfying $\rho(A_{n}) \rightarrow 1$, a situation described as `nearly unstable'. The question we investigate is: given an observed path supposed to come from a nearly unstable process, is it possible to test for the `extent of instability', i.e. to test how close we are to the unit root? In this regard, when $\rho(A_{n})$ lies in an inner $O(n^{-\alpha})$-neighborhood of unity for some $0 < \alpha < 1$, we develop a strategy to estimate $\alpha$ and to test $\mathcal{H}_0 : ``\alpha = \alpha_0"$ against $\mathcal{H}_1 : ``\alpha > \alpha_0"$. Empirical evidence is given of the advantages of the flexibility induced by such a procedure compared with common unit root tests. We also build a symmetric procedure for the usually overlooked situation where the dominant root lies around $-1$.
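As an illustrative special case (not the general setting of the paper), the scalar nearly unstable autoregression reads
\[
X_{k,n} \;=\; \theta_n\,X_{k-1,n} + \varepsilon_k, \qquad \theta_n = 1 - \frac{c}{n^{\alpha}},\quad c>0,\ 0<\alpha<1,
\]
so that $\rho(A_{n})=\theta_n<1$ while $\theta_n\to 1$, and the parameter $\alpha$ tested in $\mathcal{H}_0$ against $\mathcal{H}_1$ measures how fast the root approaches unity.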
Many combinatorial optimization problems can be formulated as the search for a subgraph that satisfies certain properties and minimizes the total weight. We assume here that the vertices correspond to points in a metric space and can take any position in given uncertainty sets. The cost function to be minimized is then the sum of the distances for the worst-case positions of the vertices in their uncertainty sets. We propose two types of polynomial-time approximation algorithms. The first relies on solving a deterministic counterpart of the problem in which the uncertain distances are replaced with maximum pairwise distances. We study in detail the resulting approximation ratio, which depends on the structure of the feasible subgraphs and on whether the metric space is Ptolemaic. The second algorithm is a fully polynomial-time approximation scheme for the special case of $s$-$t$ paths.
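In symbols (our notation, not necessarily that of the paper), writing $U_v$ for the uncertainty set of vertex $v$ and $\mathcal{X}$ for the family of feasible subgraphs, the first algorithm replaces the robust problem on the left by the deterministic counterpart on the right:
\[
\min_{H\in\mathcal{X}}\ \max_{p_v\in U_v,\ v\in V(H)}\ \sum_{\{u,v\}\in E(H)} d(p_u,p_v)
\quad\text{vs.}\quad
\min_{H\in\mathcal{X}}\ \sum_{\{u,v\}\in E(H)} \bar d(u,v),
\qquad \bar d(u,v):=\max_{p\in U_u,\,q\in U_v} d(p,q).
\]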
A swarm intelligence-based optimization algorithm, named the Duck Swarm Algorithm (DSA), is proposed in this study; it is inspired by the food-searching and foraging behaviors of duck swarms. Two rules are modeled from the ducks' food finding and foraging, corresponding to the exploration and exploitation phases of the proposed DSA, respectively. The performance of the DSA is verified on multiple CEC benchmark functions, where its statistical results (best, mean, standard deviation, and average running time) are compared with those of seven well-known algorithms, namely Particle swarm optimization (PSO), the Firefly algorithm (FA), Chicken swarm optimization (CSO), the Grey wolf optimizer (GWO), the Sine cosine algorithm (SCA), the Marine predators algorithm (MPA), and the Archimedes optimization algorithm (AOA). Moreover, the Wilcoxon rank-sum test, the Friedman test, and convergence curves of the comparison results are used to demonstrate the superiority of the DSA over the other algorithms. The results demonstrate that the DSA is a high-performance optimization method in terms of convergence speed and exploration-exploitation balance for solving numerical optimization problems. The DSA is also applied to the optimal design of six constrained engineering optimization problems and to the node-deployment optimization task of a Wireless Sensor Network (WSN). Overall, the comparison results reveal that the DSA is a promising and very competitive algorithm for solving different optimization problems.
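Purely as an illustration of the statistical comparison step (the fitness values below are synthetic stand-ins, not the results reported here), a Wilcoxon rank-sum test between two optimizers' best-fitness samples can be run as follows.
\begin{verbatim}
# Illustrative Wilcoxon rank-sum comparison of two optimizers over 30 runs;
# the fitness samples are synthetic placeholders.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
dsa_best = rng.normal(0.01, 0.005, size=30)  # hypothetical DSA best values
pso_best = rng.normal(0.03, 0.010, size=30)  # hypothetical PSO best values
stat, p = ranksums(dsa_best, pso_best)
print(f"rank-sum statistic = {stat:.3f}, p-value = {p:.3g}")
\end{verbatim}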
Symplectic integrators are widely used numerical integrators for Hamiltonian mechanics that preserve the Hamiltonian structure (symplecticity) of the system. Although a symplectic integrator does not conserve the energy of the system, it is well known that there exists a conserved modified Hamiltonian, called the shadow Hamiltonian. For Nambu mechanics, a kind of generalized Hamiltonian mechanics, we can also construct structure-preserving integrators by the same procedure used to construct symplectic integrators. For such structure-preserving integrators, however, the existence of shadow Hamiltonians is nontrivial, because Nambu mechanics is driven by multiple Hamiltonians and it is not obvious whether the time evolution generated by the integrator can be cast as a Nambu mechanical time evolution driven by multiple shadow Hamiltonians. In this paper we present a general procedure to calculate the shadow Hamiltonians of structure-preserving integrators for Nambu mechanics, and we give an example in which the shadow Hamiltonians exist. This is the first attempt to determine the concrete forms of the shadow Hamiltonians for a Nambu mechanical system. We show that the fundamental identity, which corresponds to the Jacobi identity in Hamiltonian mechanics, plays an important role in calculating the shadow Hamiltonians using the Baker-Campbell-Hausdorff formula. It turns out that the resulting shadow Hamiltonians are not uniquely determined; their forms depend on how the fundamental identities are used. This is not a technical artifact, because the exact shadow Hamiltonians obtained independently exhibit the same indefiniteness.
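For comparison, recall the familiar Hamiltonian case (up to the sign convention for the Poisson bracket and the order of composition): for a splitting $H=H_1+H_2$ integrated by composing the exact flows of $H_1$ and $H_2$ over a step $h$, the Baker-Campbell-Hausdorff formula yields a shadow Hamiltonian of the form
\[
\tilde H \;=\; H_1 + H_2 + \frac{h}{2}\{H_1,H_2\} + O(h^{2}),
\]
and it is the analogous computation for Nambu mechanics that requires repeated use of the fundamental identity.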
Multivariate Cryptography is one of the main candidates for Post-quantum Cryptography. Multivariate schemes are usually constructed by applying two secret invertible affine transformations $\mathcal S,\mathcal T$ to a set of multivariate polynomials $\mathcal{F}$ (often quadratic). The secret polynomials $\mathcal{F}$ possess a trapdoor that allows the legitimate user to find a solution of the corresponding system, while the public polynomials $\mathcal G=\mathcal S\circ\mathcal F\circ\mathcal T$ look like random polynomials. The polynomials $\mathcal G$ and $\mathcal F$ are said to be affine equivalent. In this article, we present a more general way of constructing a multivariate scheme, based on CCZ equivalence, which has been introduced and studied in the context of vectorial Boolean functions.
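For completeness, we recall the definition, stated here for vectorial Boolean functions $F,G:\mathbb{F}_2^n\to\mathbb{F}_2^m$: $F$ and $G$ are CCZ equivalent if some affine permutation $\mathcal{A}$ of $\mathbb{F}_2^n\times\mathbb{F}_2^m$ maps the graph of $F$ onto the graph of $G$,
\[
\mathcal{A}(\Gamma_F)=\Gamma_G, \qquad \Gamma_F:=\{(x,F(x)) : x\in\mathbb{F}_2^n\},
\]
which generalizes affine equivalence, the special case in which $\mathcal{A}$ acts separately on the input and output coordinates.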