This paper investigates structural changes in the parameters of first-order autoregressive models by analyzing the edge eigenvalues of the precision matrices. Specifically, edge eigenvalues of the precision matrix are observed if and only if there is a structural change in the autoregressive coefficients. We demonstrate that these edge eigenvalues correspond to the zeros of a determinantal equation. Additionally, we propose a consistent estimator for detecting outliers within the panel time series framework, supported by numerical experiments.
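As a hedged illustration of the underlying idea (not the paper's detection procedure), the following Python sketch builds the precision matrix of a Gaussian AR(1) sample whose coefficient changes mid-sample and inspects its extreme eigenvalues; the break location, coefficients, and function names are hypothetical.

    import numpy as np

    def ar1_precision(phis, sigma2=1.0):
        """Tridiagonal precision matrix of a Gaussian AR(1) sample x_1,...,x_n
        with (possibly time-varying) coefficients phi_t for t = 2,...,n."""
        n = len(phis) + 1
        P = np.zeros((n, n))
        P[0, 0] = 1.0
        for t, phi in enumerate(phis, start=1):
            P[t, t] += 1.0
            P[t - 1, t - 1] += phi ** 2
            P[t - 1, t] -= phi
            P[t, t - 1] -= phi
        return P / sigma2

    # hypothetical structural change: coefficient jumps from 0.3 to 0.9 halfway through
    phis = np.r_[np.full(100, 0.3), np.full(100, 0.9)]
    eigvals = np.sort(np.linalg.eigvalsh(ar1_precision(phis)))
    print(eigvals[:3], eigvals[-3:])   # extreme ("edge") eigenvalues versus the bulk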
This paper compares statistical experiments in discounted problems, ranging from the simplest ones, where the state is fixed and the flow of information is exogenous, to more complex ones, where the decision-maker controls the flow of information or the state changes over time.
Scoring rules promote rational and honest decision-making, which is becoming increasingly important for automated procedures in `auto-ML'. In this paper we survey common squared and logarithmic scoring rules for survival analysis and determine which losses are proper and which are improper. We prove that several commonly utilised squared and logarithmic scoring rules that are claimed to be proper, including the Integrated Survival Brier Score (ISBS), are in fact improper. We further prove that, under a strict set of assumptions, a class of scoring rules is strictly proper for what we term `approximate' survival losses. Despite the difference in properness, experiments on simulated and real-world datasets show no major difference between the improper and proper versions of the widely used ISBS, so previous experiments that used the original score for evaluation can reasonably be trusted. We still advocate the use of proper scoring rules, as even minor differences between losses can have important implications in automated processes such as model tuning. We hope our findings encourage further research into the properties of survival measures so that robust and honest evaluation of survival models can be achieved.
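As a toy, hedged illustration of a squared survival scoring rule (not the censoring-adjusted ISBS analysed in the paper), the sketch below evaluates the Brier score of predicted survival probabilities at a fixed horizon on uncensored data; the function name and data are hypothetical.

    import numpy as np

    def survival_brier(event_times, surv_prob_at_t, t):
        """Squared survival loss at horizon t for uncensored data:
        the mean of (1{T > t} - S_hat(t))^2 over individuals."""
        still_alive = (event_times > t).astype(float)
        return np.mean((still_alive - surv_prob_at_t) ** 2)

    event_times = np.array([2.0, 5.0, 7.5, 1.2, 9.0])    # hypothetical event times
    surv_at_4 = np.array([0.3, 0.6, 0.8, 0.1, 0.9])      # model's predicted S_hat(4) per subject
    print(survival_brier(event_times, surv_at_4, t=4.0))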
A central challenge in the verification of quantum computers is benchmarking their performance as a whole and demonstrating their computational capabilities. In this work, we find a universal model of quantum computation, Bell sampling, that can be used for both of these tasks and thus provides an ideal stepping stone towards fault tolerance. In Bell sampling, we measure two copies of a state prepared by a quantum circuit in the transversal Bell basis. We show that the Bell samples are classically intractable to produce and at the same time constitute what we call a circuit shadow: from the Bell samples we can efficiently extract information about the quantum circuit preparing the state, as well as diagnose circuit errors. In addition to known properties that can be efficiently extracted from Bell samples, we give several new and efficient protocols: an estimator of state fidelity, a test for the depth of the circuit, and an algorithm to estimate a lower bound on the number of T gates in the circuit. With some additional measurements, our algorithm learns a full description of states prepared by circuits with low T-count.
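The following is a minimal numerical sketch of the Bell-sampling primitive for a single qubit: two copies of a one-qubit state are measured in the Bell basis and outcomes are drawn from the Born-rule distribution. It only illustrates the measurement itself, not the paper's protocols, and the state is hypothetical.

    import numpy as np

    # Bell basis for a pair of qubits (one qubit taken from each copy)
    bell = np.array([[1, 0, 0, 1],     # |Phi+>
                     [1, 0, 0, -1],    # |Phi->
                     [0, 1, 1, 0],     # |Psi+>
                     [0, 1, -1, 0]]) / np.sqrt(2)

    psi = np.array([np.cos(0.4), np.sin(0.4)])       # hypothetical one-qubit state
    two_copies = np.kron(psi, psi)                   # |psi> tensor |psi>

    probs = np.abs(bell @ two_copies) ** 2           # Born-rule outcome probabilities
    samples = np.random.choice(4, size=10, p=probs)  # ten Bell samples
    print(probs, samples)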
Helmholtz decompositions of elastic fields open up new avenues for the solution of linear elastic scattering problems via boundary integral equations (BIE) [Dong, Lai, Li, Mathematics of Computation, 2021]. The main appeal of this approach is that the ensuing systems of BIE feature only integral operators associated with the Helmholtz equation. However, these BIE involve non-standard boundary integral operators that do not result from the application of either the Dirichlet or the Neumann trace to Helmholtz single and double layer potentials. Rather, the Helmholtz decomposition approach leads to BIE formulations of elastic scattering problems with Neumann boundary conditions that involve boundary traces of the Hessians of Helmholtz layer potentials. As a consequence, the classical combined field approach applied in the framework of Helmholtz decompositions leads to BIE formulations which, although robust, are not of the second kind. Following the regularizing methodology introduced in [Boubendir, Dominguez, Levadoux, Turc, SIAM Journal on Applied Mathematics, 2015], we design and analyze novel robust Helmholtz decomposition BIE for the solution of elastic scattering problems that are of the second kind in the case of smooth scatterers in two dimensions. We present a variety of numerical results based on Nyström discretizations that illustrate the good performance of the second-kind regularized formulations in connection with iterative solvers.
This paper is devoted to the study of a novel mixed Finite Element Method for approximating the solutions of fourth-order variational problems subject to a constraint. The first problem we consider consists in establishing the convergence of the numerical approximation of the solution of a biharmonic obstacle problem. This part of the paper generalises the approach originally proposed by Ciarlet \& Raviart and later complemented by Ciarlet \& Glowinski. The second problem we consider amounts to studying a two-dimensional variational problem for linearly elastic shallow shells that are constrained to remain confined in a prescribed half-space. We first study the case where the parametrisation of the middle surface of the linearly elastic shallow shell under consideration has non-zero curvature, and we observe that the numerical approximation of this model via a mixed Finite Element Method based on conforming elements requires enforcing the additional constraint that the gradient matrix of the dual variable be symmetric. However, unlike the biharmonic obstacle problem studied previously, we show that this constraint cannot be implemented by resorting to Courant triangles alone. Finally, we show that if the middle surface of the linearly elastic shallow shell under consideration is flat, the symmetry constraint appearing in the constrained mixed variational problem of the second part of the paper is no longer needed, and the solution can thus be approximated by resorting to Courant triangles alone. The theoretical results we derive are complemented by a series of numerical experiments.
We present LinApart, a routine designed for efficiently performing the univariate partial fraction decomposition of large symbolic expressions. Our method is based on an explicit closed formula for the decomposition of rational functions with fully factorized denominators. We provide implementations in both the Wolfram Mathematica and C languages, made available at //github.com/fekeshazy/LinApart. The routine can provide very significant performance gains over available tools such as the Apart command in Mathematica.
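For readers unfamiliar with the task LinApart addresses, here is a small, hedged illustration of univariate partial fraction decomposition using SymPy's apart (an analogue of Mathematica's Apart); it does not use LinApart itself, and the expression is arbitrary.

    from sympy import symbols, apart

    x, y = symbols('x y')

    # decompose a rational function whose denominator is fully factorized in x
    expr = (3*x + y) / ((x + 1) * (x + 2) * (x - 3))
    print(apart(expr, x))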
This paper studies the convergence of a spatial semidiscretization of a three-dimensional stochastic Allen-Cahn equation with multiplicative noise. For non-smooth initial values, the regularity of the mild solution is investigated, and an error estimate is derived in the spatial $ L^2 $-norm. For smooth initial values, two error estimates in general spatial $ L^q $-norms are established.
General ridge estimators are typical linear estimators in a general linear model. This class includes some shrinkage estimators in addition to classical linear unbiased estimators such as the ordinary least squares estimator and the weighted least squares estimator. We derive necessary and sufficient conditions under which two typical general ridge estimators coincide. In particular, two noteworthy conditions are added to those from previous studies. The first condition is given as a column space relationship involving the covariance matrix of the error term, and the second is based on the biases of general ridge estimators. We also derive an equivalence condition under which the residual sums of squares of two general ridge estimators are equal.
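As a hedged numerical sketch of the objects involved (not of the paper's equivalence conditions), the code below computes a general ridge estimator $(X^{\top}X + K)^{-1}X^{\top}y$ and recovers ordinary least squares when $K = 0$; all names and data are hypothetical.

    import numpy as np

    def general_ridge(X, y, K):
        """General ridge estimator (X'X + K)^{-1} X'y for a given nonnegative definite K."""
        return np.linalg.solve(X.T @ X + K, X.T @ y)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(50)

    beta_ols   = general_ridge(X, y, np.zeros((3, 3)))    # K = 0: ordinary least squares
    beta_ridge = general_ridge(X, y, 2.0 * np.eye(3))     # K = 2I: classical ridge shrinkage
    print(beta_ols, beta_ridge)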
In the contemporary data landscape characterized by multi-source data collection and third-party sharing, ensuring individual privacy stands as a critical concern. While various anonymization methods exist, their utility preservation and privacy guarantees remain challenging to quantify. In this work, we address this gap by studying the utility and privacy of the spectral anonymization (SA) algorithm, particularly in an asymptotic framework. Unlike conventional anonymization methods that directly modify the original data, SA operates by perturbing the data in a spectral basis and subsequently reverting them to their original basis. Alongside the original version $\mathcal{P}$-SA, employing random permutation transformation, we introduce two novel SA variants: $\mathcal{J}$-spectral anonymization and $\mathcal{O}$-spectral anonymization, which employ sign-change and orthogonal matrix transformations, respectively. We show how well, under some practical assumptions, these SA algorithms preserve the first and second moments of the original data. Our results reveal, in particular, that the asymptotic efficiency of all three SA algorithms in covariance estimation is exactly 50% when compared to the original data. To assess the applicability of these asymptotic results in practice, we conduct a simulation study with finite data and also evaluate the privacy protection offered by these algorithms using distance-based record linkage. Our research reveals that while no method exhibits clear superiority in finite-sample utility, $\mathcal{O}$-SA distinguishes itself for its exceptional privacy preservation, never producing identical records, albeit with increased computational complexity. Conversely, $\mathcal{P}$-SA emerges as a computationally efficient alternative, demonstrating unmatched efficiency in mean estimation.
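Below is a minimal sketch of the permutation-based variant $\mathcal{P}$-SA as we read it from the description above: move the data to a spectral basis, permute, and revert. The published algorithm may differ in details, and all names and data here are hypothetical.

    import numpy as np

    def p_sa(X, rng):
        """Permutation-based spectral anonymization sketch: express the data in a
        spectral (PCA) basis, permute each score column independently, and map back."""
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        scores = U * s                              # coordinates in the spectral basis
        for j in range(scores.shape[1]):            # independent row permutation per column
            scores[:, j] = rng.permutation(scores[:, j])
        return scores @ Vt + mean                   # revert to the original basis

    rng = np.random.default_rng(1)
    X = rng.multivariate_normal([0, 0], [[2.0, 0.8], [0.8, 1.0]], size=500)
    X_anon = p_sa(X, rng)
    print(np.cov(X.T), np.cov(X_anon.T))            # second moments approximately preserved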
Temporal reasoning with conditionals is more complex than both classical temporal reasoning and reasoning with timeless conditionals, and it can lead to some rather counter-intuitive conclusions. For instance, Aristotle's famous "Sea Battle Tomorrow" puzzle leads to a fatalistic conclusion: either there will be a sea battle tomorrow or there will not be one, and whichever of the two it is, it is already necessarily the case now. We propose a branching-time logic LTC to formalise reasoning about temporal conditionals and provide that logic with adequate formal semantics. The logic LTC extends the Nexttime fragment of CTL* with operators for model updates that restrict the domain to only those future moments where the antecedent can still be satisfied. We provide formal semantics for these operators that implements the restrictor interpretation of antecedents of temporalized conditionals by suitably restricting the domain of discourse. As a motivating example, we demonstrate that a natural formalisation of the `Sea Battle' argument in our logic renders it unsound, thereby resolving the fatalist conclusion it appears to entail: its underlying reasoning-by-cases argument no longer applies when the cases are treated not as material implications but as temporal conditionals. On the technical side, we analyze the semantics of LTC and provide a series of reductions of LTC-formulae, first recursively eliminating the dynamic update operators and then the path quantifiers in such formulae. Using these reductions, we obtain a sound and complete axiomatization for LTC and reduce its decision problem to that of the modal logic KD.