Within dependent type theory, we provide a topological counterpart of well-founded trees (for short, W-types) by using a proof-relevant version of the notion of inductively generated suplattices, introduced in the context of formal topology under the name of inductively generated basic covers. In more detail, we show, firstly, that in Homotopy Type Theory, W-types and proof-relevant inductively generated basic covers are propositionally mutually encodable. Secondly, we prove that they are definitionally mutually encodable in the Agda implementation of intensional Martin-Löf's type theory. Finally, we reframe the equivalence in the Minimalist Foundation framework by introducing well-founded predicates as the logical counterpart, at the level of predicates, of dependent W-types. All the results have been checked in the Agda proof assistant.
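For reference, the standard formation and introduction rules for W-types (as in the HoTT book; this is background notation, not the paper's specific encoding) are

\[
\frac{A : \mathcal{U} \qquad B : A \to \mathcal{U}}{\mathsf{W}_{(a:A)}\, B(a) : \mathcal{U}},
\qquad
\mathsf{sup} : \prod_{(a:A)} \Big( B(a) \to \mathsf{W}_{(a:A)}\, B(a) \Big) \to \mathsf{W}_{(a:A)}\, B(a),
\]

so an element of the W-type is a well-founded tree whose nodes are labelled by some $a : A$ and whose branching at such a node is indexed by $B(a)$.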
We consider nonlinear solvers for the incompressible, steady (or at a fixed time step for unsteady) Navier-Stokes equations in the setting where partial measurement data of the solution is available. The measurement data is incorporated (assimilated) into the solution through the addition of a nudging term to the Picard iteration that penalizes the difference between the coarse mesh interpolants of the true solution and the solver solution, analogous to how continuous data assimilation (CDA) is implemented for time-dependent PDEs. This was considered in the paper [Li et al. {\it CMAME} 2023], and we extend the methodology by improving the analysis to be in the $L^2$ norm instead of a weighted $H^1$ norm whose weight depends on the coarse mesh width, and by extending it to the case of noisy measurement data. For noisy measurement data, we prove that the CDA-Picard method is stable and convergent, up to the size of the noise. Numerical tests illustrate the results and show that a very effective strategy when using noisy data is to use CDA-Picard to generate an initial guess for the classical Newton iteration.
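As a schematic illustration only (our notation; the nudging parameter $\mu$, the interpolant $I_H$, and the variational forms are generic placeholders rather than the paper's exact formulation), a nudged Picard step for the steady Navier-Stokes equations reads

\[
\nu(\nabla u_{k+1}, \nabla v) + b(u_k, u_{k+1}, v) - (p_{k+1}, \nabla \cdot v) + \mu\big(I_H(u_{k+1} - u), I_H v\big) = (f, v),
\]

together with the incompressibility constraint, where $u$ is the true solution (accessible only through its coarse-mesh interpolant $I_H u$), $\mu > 0$ is the nudging parameter, and $b$ is the convective trilinear form.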
Covariance matrices of random vectors contain information that is crucial for modelling. Certain structures and patterns of the covariances (or correlations) may be used to justify parametric models, e.g., autoregressive models. Until now, there have been only a few approaches for testing such covariance structures systematically and in a unified way. In the present paper, we propose such a unified testing procedure, and we exemplify the approach with a large variety of covariance structure models. These include common structures such as diagonal matrices, Toeplitz matrices, and compound symmetry, as well as the more involved autoregressive matrices. We propose hypothesis tests for these structures, and we use bootstrap techniques for better small-sample approximation. The structure of the proposed tests invites adaptations to other covariance patterns by choosing the hypothesis matrix appropriately. We prove their correctness for large sample sizes, and the proposed methods require only weak assumptions. With the help of a simulation study, we assess the small-sample properties of the tests. We also analyze a real data set to illustrate the application of the procedure.
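To sketch the idea in our own (hedged) notation: structural hypotheses of this kind can be encoded linearly in the half-vectorized covariance, $H_0 : \mathbf{C}\,\mathrm{vech}(\boldsymbol{\Sigma}) = \mathbf{0}$, for a suitable hypothesis matrix $\mathbf{C}$. For compound symmetry in dimension $3$, with $\mathrm{vech}(\boldsymbol{\Sigma}) = (\sigma_{11}, \sigma_{21}, \sigma_{31}, \sigma_{22}, \sigma_{32}, \sigma_{33})^\top$, one such choice is

\[
\mathbf{C} =
\begin{pmatrix}
1 & 0 & 0 & -1 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & -1\\
0 & 1 & -1 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & -1 & 0
\end{pmatrix},
\]

which forces equal variances and equal covariances; the paper's actual parametrization of the hypothesis matrix may differ.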
Characterizing the solution sets of a problem by closedness under operations is recognized as one of the key aspects of algorithm development, especially in constraint satisfaction. An example from the Boolean satisfiability problem is that the solution set of a Horn conjunctive normal form (CNF) is closed under the minimum operation, and this property implies that minimizing a nonnegative linear function over a Horn CNF can be done in polynomial time. In this paper, we focus on the sets of integer points (vectors) in polyhedra and study the relation between these sets and closedness under operations from the viewpoint of 2-decomposability. By adding further conditions to 2-decomposable polyhedra, we show that important classes of sets of integer vectors in polyhedra are characterized by 2-decomposability and closedness under certain operations, and in some classes, by closedness under operations alone. The most prominent result is that the set of integer vectors in a unit-two-variable-per-inequality polyhedron can be characterized by closedness under the median and directed discrete midpoint operations, each of which was independently considered in constraint satisfaction and discrete convex analysis.
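The Horn example above is easy to verify computationally. The following minimal Python sketch (the formula and encodings are our own, purely for illustration) enumerates the models of a small Horn CNF and checks that the componentwise minimum (logical AND) of any two models is again a model:

from itertools import product

# A Horn CNF: each clause has at most one positive literal.
# Literal encoding: integer v means variable v, and -v means NOT v.
horn_cnf = [(1, -2), (-1, -2, 3), (-3, 2)]

def satisfies(assignment, cnf):
    # assignment: dict mapping variable -> bool
    return all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in cnf)

variables = sorted({abs(l) for clause in horn_cnf for l in clause})
models = [dict(zip(variables, bits))
          for bits in product([False, True], repeat=len(variables))
          if satisfies(dict(zip(variables, bits)), horn_cnf)]

# Closedness under minimum: the AND of any two models is a model.
for a in models:
    for b in models:
        meet = {v: a[v] and b[v] for v in variables}
        assert satisfies(meet, horn_cnf)
print("solution set is closed under the minimum operation")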
Nurmuhammad et al. developed Sinc-Nystr\"{o}m methods for initial value problems whose solutions exhibit exponentially decaying end behavior. In these methods, the Single-Exponential (SE) transformation or the Double-Exponential (DE) transformation is combined with the Sinc approximation. Hara and Okayama improved on these transformations to attain a better convergence rate, which was later supported by theoretical error analyses. However, these methods have a computational drawback owing to the inclusion of a special function in the basis functions. To address this issue, Okayama and Hara proposed Sinc-collocation methods, which do not include any special function in the basis functions. This study conducts error analyses of these methods.
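For background (standard definitions; the improved transformations analyzed in this study differ in their details), the Sinc approximation on a uniform grid with step size $h$ and the classical SE and DE transformations mapping $\mathbb{R}$ onto $(-1, 1)$ are

\[
f(x) \approx \sum_{j=-N}^{N} f(jh)\, \mathrm{sinc}\Big(\frac{x - jh}{h}\Big),
\qquad
\psi_{\mathrm{SE}}(t) = \tanh\frac{t}{2},
\qquad
\psi_{\mathrm{DE}}(t) = \tanh\Big(\frac{\pi}{2}\sinh t\Big),
\]

where $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$.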
The advent of large-scale inference has spurred reexamination of conventional statistical thinking. In a Gaussian model for $n$ $z$-scores with at most $k < \frac{n}{2}$ nonnulls, Efron suggests estimating the location and scale parameters of the null distribution. Placing no assumptions on the nonnull effects, the statistical task can be viewed as a robust estimation problem. However, the best known robust estimators fail to be consistent in the regime $k \asymp n$, which is especially relevant in large-scale inference. The failure of estimators which are minimax rate-optimal with respect to other formulations of robustness (e.g. Huber's contamination model) might suggest the impossibility of consistent estimation in this regime and, consequently, a major weakness of Efron's suggestion. A sound evaluation of Efron's model thus requires a complete understanding of consistency. We sharply characterize the regime of $k$ for which consistent estimation is possible and further establish the minimax estimation rates. It is shown that consistent estimation of the location parameter is possible if and only if $\frac{n}{2} - k = \omega(\sqrt{n})$, and consistent estimation of the scale parameter is possible in the entire regime $k < \frac{n}{2}$. Faster rates than those in Huber's contamination model are achievable by exploiting the Gaussian character of the data. The minimax upper bound is obtained by considering estimators based on the empirical characteristic function. The minimax lower bound involves constructing two marginal distributions whose characteristic functions match on a wide interval containing zero. The construction notably differs from those in the literature by sharply capturing a scaling of $n-2k$ in the minimax estimation rate of the location.
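In our (hedged) notation, the model is $z_i \sim N(\theta + \mu_i, \sigma^2)$ independently, with at most $k$ nonzero effects $\mu_i$ and unknown null parameters $(\theta, \sigma)$. The empirical characteristic function underlying the upper bound is

\[
\widehat{\varphi}(t) = \frac{1}{n}\sum_{j=1}^{n} e^{\mathrm{i} t z_j},
\qquad
\mathbb{E}\,\widehat{\varphi}(t) = e^{\mathrm{i} t \theta - \sigma^2 t^2 / 2}\cdot\frac{1}{n}\sum_{j=1}^{n} e^{\mathrm{i} t \mu_j},
\]

where at least $n - k$ of the terms in the last sum equal one, so the contamination perturbs the Gaussian null factor by at most $2k/n$ in absolute value.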
The paper introduces a tree-based varying coefficient model (VCM) in which the varying coefficients are modelled using the cyclic gradient boosting machine (CGBM) of Delong et al. (2023). Modelling the coefficient functions with a CGBM allows for dimension-wise early stopping and feature importance scores. The dimension-wise early stopping not only reduces the risk of dimension-specific overfitting but also reveals differences in model complexity across dimensions, while the feature importance scores allow for simple feature selection and easy model interpretation. The model is evaluated on the same simulated and real data examples as those used in Richman and W\"uthrich (2023), and it achieves out-of-sample losses comparable to those of their neural network-based VCM, LocalGLMnet.
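For orientation, the shared varying coefficient structure can be written (our hedged notation, following the LocalGLMnet form; the paper's exact parametrization may differ) as

\[
g\big(\mathbb{E}[Y \mid \boldsymbol{x}]\big) = \beta_0 + \sum_{j=1}^{d} \beta_j(\boldsymbol{x})\, x_j,
\]

where $g$ is a link function and, in the proposed model, each coefficient function $\beta_j$ is fitted by the CGBM, with one boosting dimension per coefficient enabling dimension-wise early stopping.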
We are interested in generating surfaces with arbitrary roughness and forming patterns on those surfaces. Two methods are applied to construct rough surfaces. In the first method, a superposition of wave functions with random frequencies and angles of propagation is used to obtain periodic rough surfaces with analytic parametric equations. The amplitude of such surfaces is also an important variable in the accompanying eigenvalue analysis for the Laplace-Beltrami operator and in the generation of pattern formation. Numerical experiments show that the patterns become irregular as the amplitude and frequency of the rough surface increase. For the sake of easy generalization to closed manifolds, we propose a second construction method for rough surfaces, which uses random nodal values and discretized heat filters. We provide numerical evidence that both surface construction methods yield patterns comparable to those observed in real-life animals.
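A minimal Python sketch of the first construction (all parameter choices are our own and purely illustrative): drawing integer wavevectors yields a $2\pi$-periodic surface, with each wavevector encoding both a frequency and an angle of propagation.

import numpy as np

rng = np.random.default_rng(0)
n_waves, amplitude = 20, 0.1                  # illustrative values
kx = rng.integers(1, 9, n_waves)              # integer wavevectors give a
ky = rng.integers(1, 9, n_waves)              # 2*pi-periodic surface
phases = rng.uniform(0.0, 2 * np.pi, n_waves)

def rough_height(x, y):
    # Superposition of plane waves with random wavevectors and phases;
    # the resulting parametric equation is analytic in (x, y).
    z = sum(np.cos(a * x + b * y + p) for a, b, p in zip(kx, ky, phases))
    return amplitude * z / np.sqrt(n_waves)

xx, yy = np.meshgrid(np.linspace(0, 2 * np.pi, 200),
                     np.linspace(0, 2 * np.pi, 200))
surface = rough_height(xx, yy)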
In the context of social choice theory with ordinal preferences, we say that the defensible set is the set of alternatives $x$ such that for any alternative $y$, if $y$ beats $x$ in a head-to-head majority comparison, then there is an alternative $z$ that beats $y$ in a head-to-head majority comparison by a margin at least as large as the margin by which $y$ beat $x$. We show that any ordinal voting method satisfying two well-known axioms from voting theory--positive involvement and the Condorcet winner criterion--refines the defensible set. Using this lemma, we prove an impossibility theorem: there is no such voting method that also satisfies the Condorcet loser criterion, resolvability, and a common invariance property for Condorcet methods, namely that the choice of winners depends only on the relative sizes of majority margins.
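The definition translates directly into code. In the following Python sketch (the representation and names are our own), margins are stored in an antisymmetric matrix M, where M[y][x] > 0 means y beats x head-to-head by that margin:

def defensible_set(M):
    n = len(M)
    def is_defensible(x):
        for y in range(n):
            if M[y][x] > 0:
                # y beats x, so some z must beat y at least as decisively.
                if not any(M[z][y] >= M[y][x] for z in range(n)):
                    return False
        return True
    return [x for x in range(n) if is_defensible(x)]

# Majority cycle a > b > c > a with margins 2, 4, 2:
M = [[ 0,  2, -2],
     [-2,  0,  4],
     [ 2, -4,  0]]
print(defensible_set(M))  # [0, 1]

Here c (index 2) is excluded: b beats c by 4, but no alternative beats b by a margin of at least 4.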
High order schemes are known to be unstable in the presence of shock discontinuities or under-resolved solution features for nonlinear conservation laws. Entropy stable schemes address this instability by ensuring that physically relevant solutions satisfy a semi-discrete entropy inequality independently of discretization parameters. This work extends high order entropy stable schemes to the quasi-1D shallow water equations and the quasi-1D compressible Euler equations, which model one-dimensional flows through channels or nozzles with varying width. We introduce new non-symmetric entropy conservative finite volume fluxes for both sets of quasi-1D equations, as well as a generalization of the entropy conservation condition to non-symmetric fluxes. When combined with an entropy stable interface flux, the resulting schemes are high order accurate, conservative, and semi-discretely entropy stable. For the quasi-1D shallow water equations, the resulting schemes are also well-balanced.
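For context (standard background, not the paper's new condition): for symmetric two-point fluxes, Tadmor's entropy conservation condition requires

\[
(v_R - v_L)^{\top} f_S(u_L, u_R) = \psi_R - \psi_L,
\]

where $v$ denotes the entropy variables and $\psi$ the entropy potential; the generalization of this condition to non-symmetric fluxes is part of the paper's contribution.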
Large-amplitude current-driven plasma instabilities, which can transition to the Buneman instability, were observed in one-dimensional (1D) simulations to generate high-energy backstreaming ions. We investigate the saturation of multi-dimensional plasma instabilities and its effects on energetic ion formation. Such ions directly impact spacecraft thruster lifetimes and are associated with magnetic reconnection and cosmic ray inception. An Eulerian Vlasov--Poisson solver employing the grid-based direct kinetic method is used to study the growth and saturation of 2D2V collisionless, electrostatic, current-driven instabilities, spanning two dimensions each in the configuration (D) and velocity (V) spaces to support ion and electron phase-space transport. Four stages characterise the evolution of the electric potential in such instabilities: linear modal growth, harmonic growth, accelerated growth via quasi-linear mechanisms alongside non-linear fill-in, and saturated turbulence. The transition to turbulence and the accompanying isotropisation process bear considerable similarities to the development of hydrodynamic turbulence. While a tendency towards isotropy is observed first in the plasma waves, followed by the electron and then the ion phase space after several ion-acoustic periods, the formation of energetic backstreaming ions is more limited in the 2D2V simulations than in the 1D1V simulations. Plasma waves formed by two-dimensional electrostatic kinetic instabilities can propagate in the direction perpendicular to the net electron drift. Thus, large-amplitude multi-dimensional waves generate high-energy transverse-streaming ions and eventually limit energetic backward-streaming ions along the longitudinal direction. This multi-dimensional study sheds light on interactions between longitudinal and transverse electrostatic plasma instabilities, as well as on fundamental characteristics of the inception and sustenance of unmagnetised plasma turbulence.
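The solver advances the electrostatic Vlasov--Poisson system (standard form, with species index $s$):

\[
\frac{\partial f_s}{\partial t} + \boldsymbol{v}\cdot\nabla_{\boldsymbol{x}} f_s + \frac{q_s \boldsymbol{E}}{m_s}\cdot\nabla_{\boldsymbol{v}} f_s = 0,
\qquad
\nabla^2 \phi = -\frac{1}{\varepsilon_0}\sum_s q_s \int f_s\, \mathrm{d}\boldsymbol{v},
\qquad
\boldsymbol{E} = -\nabla\phi,
\]

where $f_s(\boldsymbol{x}, \boldsymbol{v}, t)$ is the phase-space distribution of species $s$; in the 2D2V runs, $\boldsymbol{x}$ and $\boldsymbol{v}$ each have two components.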