After reviewing a large body of literature on the modeling of bivariate discrete distributions with finite support, \cite{Gee20} made a compelling case for the use of $I$-projections in the sense of \cite{Csi75} as a sound way to attempt to decompose a bivariate probability mass function (p.m.f.) into its two univariate margins and a bivariate p.m.f.\ with uniform margins playing the role of a discrete copula. From a practical perspective, the necessary $I$-projections on Fr\'echet classes can be carried out using the iterative proportional fitting procedure (IPFP), also known as Sinkhorn's algorithm or matrix scaling in the literature. After providing conditions under which a bivariate p.m.f.\ can be decomposed in the aforementioned sense, we investigate, for starting bivariate p.m.f.s with rectangular supports, nonparametric and parametric estimation procedures as well as goodness-of-fit tests for the underlying discrete copula. Related asymptotic results are provided and build upon a differentiability result for $I$-projections on Fr\'echet classes which can be of independent interest. Theoretical results are complemented by finite-sample experiments and a data example.
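To make the projection step above concrete, the following is a minimal NumPy sketch of the IPFP/Sinkhorn iteration; the function name \texttt{ipfp}, the tolerance, and the toy p.m.f.\ are illustrative assumptions, and the sketch presumes a strictly positive starting matrix (rectangular support) so that the rescaling steps are well defined.
\begin{verbatim}
import numpy as np

def ipfp(p, row_marg, col_marg, tol=1e-12, max_iter=10_000):
    # Alternately rescale rows and columns of the starting p.m.f. p until
    # its margins match row_marg and col_marg; at convergence the result
    # is the I-projection of p onto the corresponding Frechet class.
    q = np.array(p, dtype=float)
    for _ in range(max_iter):
        q *= (row_marg / q.sum(axis=1))[:, None]   # match row margins
        q *= (col_marg / q.sum(axis=0))[None, :]   # match column margins
        if np.abs(q.sum(axis=1) - row_marg).max() < tol:
            break
    return q

# Toy example: projecting onto uniform margins yields the discrete
# "copula" p.m.f. associated with the starting p.m.f.
p = np.array([[0.30, 0.10],
              [0.20, 0.40]])
u = ipfp(p, row_marg=np.full(2, 0.5), col_marg=np.full(2, 0.5))
\end{verbatim}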
We introduce a single-set axiomatisation of cubical $\omega$-categories, including connections and inverses. We justify these axioms by establishing a series of equivalences between the categories of single-set cubical $\omega$-categories (and their variants with connections and inverses) and the corresponding cubical $\omega$-categories. We also report on the formalisation of cubical $\omega$-categories with the Isabelle/HOL proof assistant, which has been instrumental in finding the single-set axioms.
We consider the problem of zero-error function computation with side information. Alice has a source $X$ and Bob has a correlated source $Y$, and they can communicate via either a classical or a quantum channel. Bob wants to compute $f(X,Y)$ with zero error. We aim to characterize the minimum amount of information that Alice needs to send to Bob for this to happen. In the classical setting, this quantity depends on the asymptotic growth of $\chi(G^{(m)})$, the chromatic number of an appropriately defined $m$-instance ``confusion graph''. In this work we present structural characterizations of $G^{(m)}$ and exhibit two function computation scenarios that have the same single-instance confusion graph. However, in one case there is a strict advantage in using quantum transmission over classical transmission, whereas there is no such advantage in the other case.
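For context, the dependence on the asymptotic growth of the chromatic number is often expressed through the rate (the exact normalization below is our assumption, stated for orientation rather than taken from the abstract)
\[
  R_{\mathrm{classical}} \;=\; \lim_{m\to\infty} \frac{1}{m}\,\log_2 \chi\bigl(G^{(m)}\bigr),
\]
so structural characterizations of $G^{(m)}$ bear directly on this limit.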
Classical tests are available for the two-sample problem of comparing distribution functions. Among these, the Kolmogorov-Smirnov test also provides a graphical interpretation of the test results, in different forms. Here, we propose modifications of the Kolmogorov-Smirnov test with higher power. The proposed tests are based on the so-called global envelope test, which allows for a graphical interpretation similar to that of the Kolmogorov-Smirnov test. The tests are based on rank statistics and are also suitable for the comparison of $n$ samples, with $n \geq 2$. We compare the alternatives for the two-sample case through an extensive simulation study and discuss their interpretation. Finally, we apply the tests to real data: using the proposed methodologies, we compare the height distributions of boys and girls at different ages, as well as the sepal length distributions of different flower species.
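As a baseline for the graphical interpretation mentioned above, here is a minimal Python sketch of the classical two-sample Kolmogorov-Smirnov comparison via empirical distribution functions; the function name and the synthetic data are illustrative assumptions, and the proposed global envelope modifications are not implemented here.
\begin{verbatim}
import numpy as np
from scipy import stats

def ks_two_sample_curve(x, y):
    # Curve of ECDF differences over the pooled sample; its maximal
    # absolute value is the two-sample KS statistic, and plotting the
    # curve shows where the two distribution functions differ most.
    grid = np.sort(np.concatenate([x, y]))
    ecdf_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    ecdf_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    diff = ecdf_x - ecdf_y
    stat, pval = stats.ks_2samp(x, y)   # reference implementation
    return grid, diff, stat, pval

# Synthetic two-sample comparison (illustration only).
rng = np.random.default_rng(0)
sample_a = rng.normal(150.0, 8.0, size=200)
sample_b = rng.normal(146.0, 7.0, size=200)
grid, diff, stat, pval = ks_two_sample_curve(sample_a, sample_b)
\end{verbatim}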
This paper studies distribution-free inference in settings where the data set has a hierarchical structure -- for example, groups of observations, or repeated measurements. In such settings, standard notions of exchangeability may not hold. To address this challenge, a hierarchical form of exchangeability is derived, facilitating extensions of distribution-free methods, including conformal prediction and jackknife+. The standard theoretical guarantee obtained by the conformal prediction framework is a marginal predictive coverage guarantee. In the special case of independent repeated measurements, however, it is possible to achieve a stronger form of coverage -- the ``second-moment coverage'' property -- which provides better control of conditional miscoverage rates, and distribution-free prediction sets that achieve this property are constructed. Simulations illustrate that this guarantee indeed leads to uniformly small conditional miscoverage rates. Empirically, the stronger guarantee comes at the cost of a larger width of the prediction set when the fitted model is poorly calibrated, but this cost is very mild when the fitted model is accurate.
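For readers unfamiliar with the baseline being extended, the following is a minimal sketch of standard split conformal prediction for regression (marginal coverage under exchangeability); the hierarchical and second-moment-coverage variants described above would change how calibration scores are grouped and how the quantile is taken. The names are assumptions, and \texttt{model} stands for any fitted regressor with a \texttt{predict} method.
\begin{verbatim}
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_new, alpha=0.1):
    # Absolute residuals on a held-out calibration set define a quantile
    # that is added to and subtracted from new predictions, giving
    # marginal coverage >= 1 - alpha when calibration and test points
    # are exchangeable.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    pred = model.predict(X_new)
    return pred - q, pred + q
\end{verbatim}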
This study explores the sample complexity for two-layer neural networks to learn a generalized linear target function under Stochastic Gradient Descent (SGD), focusing on the challenging regime where many flat directions are present at initialization. It is well-established that in this scenario $n=O(d \log d)$ samples are typically needed. However, we provide precise results concerning the pre-factors in high-dimensional contexts and for varying widths. Notably, our findings suggest that overparameterization can only enhance convergence by a constant factor within this problem class. These insights are grounded in the reduction of SGD dynamics to a stochastic process in lower dimensions, where escaping mediocrity equates to calculating an exit time. Yet, we demonstrate that a deterministic approximation of this process adequately represents the escape time, implying that the role of stochasticity may be minimal in this scenario.
We address the problem of testing the conditional mean and conditional variance of non-stationary data. We build e-values and p-values for four types of non-parametric composite hypotheses with specified mean and variance, as well as other conditions on the shape of the data-generating distribution. These shape conditions include symmetry, unimodality, and their combination. Using the obtained e-values and p-values, we construct tests via e-processes, also known as testing by betting, as well as some tests based on combining p-values for comparison. Although we mainly focus on one-sided tests, the two-sided test for the mean is also studied. Simulation and empirical studies are conducted in a few settings and illustrate features of the methods based on e-processes.
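To illustrate the testing-by-betting idea in its simplest form, here is a sketch of a generic e-process for a bounded conditional mean; this is not the paper's specific construction for mean--variance nulls under shape constraints, and the names and constants are assumptions.
\begin{verbatim}
import numpy as np

def betting_e_process(x, mu0, lam=0.5):
    # Wealth M_t = prod_{s<=t} (1 + lam * (x_s - mu0)) for observations
    # x_s in [0, 1] and a bet lam in [0, 1/mu0].  Under the null
    # E[X_t | past] <= mu0 each factor has conditional expectation at
    # most 1, so M is a nonnegative supermartingale; rejecting once
    # M_t >= 1/alpha controls the type-I error (Ville's inequality).
    x = np.asarray(x, dtype=float)
    return np.cumprod(1.0 + lam * (x - mu0))

# Toy usage: data with true mean 0.6 tested against mean <= 0.5.
rng = np.random.default_rng(1)
wealth = betting_e_process(rng.uniform(0.2, 1.0, size=300), mu0=0.5, lam=0.8)
rejected = bool((wealth >= 1 / 0.05).any())   # alpha = 0.05
\end{verbatim}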
We systematically evaluated the performance of seven large language models in generating programming code using various prompt strategies, programming languages, and task difficulties. GPT-4 substantially outperforms other large language models, including Gemini Ultra and Claude 2. The coding performance of GPT-4 varies considerably with different prompt strategies. In most LeetCode and GeeksforGeeks coding contests evaluated in this study, GPT-4 employing the optimal prompt strategy outperforms 85 percent of human participants. Additionally, GPT-4 demonstrates strong capabilities in translating code between different programming languages and in learning from past errors. The computational efficiency of the code generated by GPT-4 is comparable to that of human programmers. These results suggest that GPT-4 has the potential to serve as a reliable assistant in programming code generation and software development.
We study the problem of adaptive variable selection in a Gaussian white noise model of intensity $\varepsilon$ under certain sparsity and regularity conditions on an unknown regression function $f$. The $d$-variate regression function $f$ is assumed to be a sum of functions each depending on a smaller number $k$ of variables ($1 \leq k \leq d$). These functions are unknown to us and only a few of them are nonzero. We assume that $d=d_\varepsilon \to \infty$ as $\varepsilon \to 0$ and consider the cases when $k$ is fixed and when $k=k_\varepsilon \to \infty$, $k=o(d)$ as $\varepsilon \to 0$. In this work, we introduce an adaptive selection procedure that, under some model assumptions, identifies exactly all nonzero $k$-variate components of $f$. In addition, we establish conditions under which exact identification of the nonzero components is impossible. These conditions ensure that the proposed selection procedure is the best possible in the asymptotically minimax sense with respect to the Hamming risk.
This paper presents a method for assessing the thematic agreement of geospatial data products with different semantics and spatial granularities, which may be affected by spatial offsets between test and reference data. The proposed method uses a multi-scale framework that allows for a probabilistic evaluation of whether thematic disagreement between the datasets is induced by spatial offsets arising from their different nature. We test our method using real-estate-derived settlement locations and remote-sensing-derived building footprint data.
We derive sharp-interface models for one-dimensional brittle fracture via the inverse-deformation approach. Methods of $\Gamma$-convergence are employed to obtain the singular limits of previously proposed models. The latter feature a local, non-convex stored energy of inverse strain, augmented by small interfacial energy, formulated in terms of the inverse-strain gradient. They predict spontaneous fracture with exact crack-opening discontinuities, without the use of damage (phase) fields or pre-existing cracks; crack faces are endowed with a thin layer of surface energy. The models obtained herewith inherit the same properties, except that surface energy is now concentrated at the crack faces. Accordingly, we construct energy-minimizing configurations. For a composite bar with a breakable layer, our results predict a pattern of equally spaced cracks whose number is given as an increasing function of the applied load.