In this paper we propose an approach for solving systems of nonlinear equations without computing function derivatives. Motivated by the application area of tomographic absorption spectroscopy, a highly nonlinear problem with coupled variables, we consider a situation where a straightforward translation to a fixed point problem is not possible because the operators that represent the relevant systems of nonlinear equations are not self-mappings, i.e., they operate between spaces of different dimensions. To overcome this difficulty we suggest an "alternating common fixed points algorithm" that acts alternatingly on the different vector variables. This approach translates the original problem into a common fixed point problem, for which iterative algorithms abound, and offers a viable alternative to translation into an optimization problem, which usually requires derivative information. However, applying any of these iterative algorithms requires ascertaining the conditions that appear in their convergence theorems. To circumvent the need to verify conditions for convergence, we propose and motivate a derivative-free algorithm that better suits the tomographic absorption spectroscopy problem at hand and is further improved by applying the superiorization approach to it. This is presented along with experimental results that demonstrate our approach.
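The alternating idea above can be illustrated on a toy coupled system (this is only a hypothetical two-variable sketch, not the paper's algorithm or its spectroscopy operators): update each variable in turn by its own fixed-point map until the pair stabilizes.

```python
import math

# Toy illustration (not the paper's method): solve the coupled system
#   x = 0.5*cos(y),  y = 0.5*sin(x)
# by alternating fixed-point updates on each variable in turn.
def alternating_fixed_point(x0=0.0, y0=0.0, tol=1e-12, max_iter=1000):
    x, y = x0, y0
    for _ in range(max_iter):
        x_new = 0.5 * math.cos(y)      # update x holding y fixed
        y_new = 0.5 * math.sin(x_new)  # update y using the fresh x
        if abs(x_new - x) + abs(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

x, y = alternating_fixed_point()
```

Because both maps here are contractions, the alternating iteration converges to a common fixed point without any derivative information.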
In this paper, we consider a change-point problem for a centered, stationary and $m$-dependent multivariate random field. Under a distribution-free assumption, a change-point test based on the CUSUM statistic is proposed to detect anomalies within a multidimensional random field, controlling the false positive rate as well as the family-wise error rate in the multiple hypothesis testing context.
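A one-dimensional sketch of the CUSUM statistic may help fix ideas (the paper itself treats $m$-dependent multivariate random fields; the sequence, parameters, and threshold below are illustrative assumptions only):

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative 1-D sequence with a mean shift at index 200
x = np.concatenate([rng.normal(0.0, 1.0, 200),   # pre-change mean 0
                    rng.normal(1.0, 1.0, 200)])  # post-change mean 1

def cusum_statistic(x):
    n = len(x)
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    # |S_k - (k/n) S_n| / sqrt(n): large values indicate a mean shift
    stat = np.abs(s - k / n * s[-1]) / np.sqrt(n)
    return stat.max(), int(stat.argmax()) + 1

stat, khat = cusum_statistic(x)
```

The maximizing index `khat` serves as an estimate of the change-point location, and `stat` is compared against a critical value to control the false positive rate.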
Factor models are widely used for dimension reduction in the analysis of multivariate data. This is achieved through the decomposition of a p x p covariance matrix into the sum of two components which, through a latent factor representation, can be interpreted as a diagonal matrix of idiosyncratic variances and a shared variation matrix, that is, the product of a p x k factor loadings matrix and its transpose. If k << p, this defines a parsimonious factorisation of the covariance matrix. Historically, little attention has been paid to incorporating prior information in Bayesian analyses using factor models where, at best, the prior for the factor loadings is order invariant. In this work, a class of structured priors is developed that can encode ideas of dependence structure about the shared variation matrix. The construction allows data-informed shrinkage towards sensible parametric structures while also facilitating inference over the number of factors. Using an unconstrained reparameterisation of stationary vector autoregressions, the methodology is extended to stationary dynamic factor models. For computational inference, parameter-expanded Markov chain Monte Carlo samplers are proposed, including an efficient adaptive Gibbs sampler. Two substantive applications showcase the scope of the methodology and its inferential benefits.
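The covariance decomposition described above can be written out in a few lines (a generic sketch with arbitrary illustrative dimensions, not the paper's priors or samplers):

```python
import numpy as np

rng = np.random.default_rng(1)
p, k = 10, 2                              # k << p gives a parsimonious factorisation
Lambda = rng.normal(size=(p, k))          # p x k factor loadings matrix
psi = rng.uniform(0.5, 1.5, size=p)       # idiosyncratic variances
Sigma = Lambda @ Lambda.T + np.diag(psi)  # covariance = shared + idiosyncratic

# Free-parameter counts (ignoring rotational non-identifiability of Lambda):
n_full = p * (p + 1) // 2                 # unrestricted covariance
n_factor = p * k + p                      # loadings plus idiosyncratic variances
```

Here `Sigma` is symmetric positive definite by construction, and for k << p the factor parameterisation uses far fewer parameters than the unrestricted covariance.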
This paper presents a multivariate normal integral expression for the joint survival function of the cumulated components of any multinomial random vector. This result can be viewed as a multivariate analog of Equation (7) from Carter & Pollard (2004), who improved Tusn\'ady's inequality. Our findings are based on a crucial relationship between the joint survival function of the cumulated components of any multinomial random vector and the joint cumulative distribution function of a corresponding Dirichlet distribution. We offer two distinct proofs: the first expands the logarithm of the Dirichlet density, while the second employs Laplace's method applied to the Dirichlet integral.
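The survival-function/CDF relationship underlying these results can be checked numerically in its simplest special case: for two categories the multinomial reduces to a binomial and the Dirichlet to a Beta, giving the classical identity P(Bin(n, p) >= k) = P(Beta(k, n - k + 1) <= p). The sketch below (with arbitrary illustrative values of n, k, p) verifies this:

```python
import math

n, k, p = 20, 7, 0.3

# Left side: binomial upper-tail probability P(X >= k)
lhs = sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Right side: Beta(k, n - k + 1) CDF at p, via the trapezoid rule;
# normalizing constant 1/B(k, n - k + 1) = n! / ((k - 1)! (n - k)!)
const = math.factorial(n) / (math.factorial(k - 1) * math.factorial(n - k))
m = 200_000
h = p / m
dens = [const * (i * h) ** (k - 1) * (1 - i * h) ** (n - k) for i in range(m + 1)]
rhs = h * (sum(dens) - 0.5 * (dens[0] + dens[-1]))
```

The two quantities agree to within the quadrature error of the trapezoid rule.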
In this paper we combine the principled approach to modalities from multimodal type theory (MTT) with the computationally well-behaved realization of identity types from cubical type theory (CTT). The result -- cubical modal type theory (Cubical MTT) -- has the desirable features of both systems. In fact, the whole is more than the sum of its parts: Cubical MTT validates desirable extensionality principles for modalities that MTT only supported through ad hoc means. We investigate the semantics of Cubical MTT and provide an axiomatic approach to producing models of Cubical MTT based on the internal language of topoi and use it to construct presheaf models. Finally, we demonstrate the practicality and utility of this axiomatic approach to models by constructing a model of (cubical) guarded recursion in a cubical version of the topos of trees. We then use this model to justify an axiomatization of L\"ob induction and thereby use Cubical MTT to smoothly reason about guarded recursion.
In this paper, we propose nonparametric estimators for the varextropy function of an absolutely continuous random variable. Consistency of the estimators is established under suitable regularity conditions. Moreover, a simulation study is performed to compare the performance of the proposed estimators in terms of mean squared error (MSE) and bias. Furthermore, using the proposed estimators, some tests for uniformity are constructed. It is shown that the varextropy-based test proposed here performs well in power compared to other uniformity hypothesis tests.
Rational approximation is a powerful tool to obtain accurate surrogates for nonlinear functions that are easy to evaluate and linearize. The interpolatory adaptive Antoulas--Anderson (AAA) method is one approach to construct such approximants numerically. For large-scale vector- and matrix-valued functions, however, the direct application of the set-valued variant of AAA becomes inefficient. We propose and analyze a new sketching approach for such functions called sketchAAA that, with high probability, leads to much better approximants than previously suggested approaches while retaining efficiency. The sketching approach works in a black-box fashion where only evaluations of the nonlinear function at sampling points are needed. Numerical tests with nonlinear eigenvalue problems illustrate the efficacy of our approach, with speedups above 200 for sampling large-scale black-box functions without sacrificing accuracy.
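For readers unfamiliar with AAA, a minimal scalar version conveys the core idea (this is a bare-bones sketch of the classical greedy algorithm on a toy function, without the sketching or set-valued machinery the paper contributes): support points are chosen greedily at the worst-approximated sample, and barycentric weights come from the smallest singular vector of a Loewner matrix.

```python
import numpy as np

def aaa(F, Z, mmax=12, tol=1e-10):
    """Greedy scalar AAA: barycentric rational approximation of samples F on Z."""
    Z, F = np.asarray(Z, float), np.asarray(F, float)
    J = np.arange(len(Z))                        # indices not yet chosen as support
    z = np.empty(0); f = np.empty(0); w = np.empty(0)
    R = np.full(len(Z), F.mean())                # initial constant approximant
    for _ in range(mmax):
        j = J[np.argmax(np.abs(F[J] - R[J]))]    # greedy: worst-approximated point
        z = np.append(z, Z[j]); f = np.append(f, F[j])
        J = J[J != j]
        C = 1.0 / (Z[J, None] - z[None, :])      # Cauchy matrix
        A = (F[J, None] - f[None, :]) * C        # Loewner matrix
        w = np.linalg.svd(A)[2][-1]              # weights: smallest singular vector
        R = F.copy()                             # interpolation at support points
        R[J] = (C @ (w * f)) / (C @ w)
        if np.max(np.abs(F - R)) < tol * np.max(np.abs(F)):
            break
    return z, f, w

def reval(x, z, f, w):
    """Evaluate the barycentric rational approximant at new points x."""
    C = 1.0 / (x[:, None] - z[None, :])
    return (C @ (w * f)) / (C @ w)

Z = np.linspace(-1.0, 1.0, 500)
z, f, w = aaa(np.exp(Z), Z)                      # toy target: exp on [-1, 1]
```

On smooth targets like the exponential, a handful of greedy steps already yields near machine-precision accuracy, which is what makes AAA attractive as a surrogate builder.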
We develop a high order accurate numerical method for solving the elastic wave equation in second-order form. We hybridize the computationally efficient Cartesian grid formulation of finite differences with geometrically flexible discontinuous Galerkin methods on unstructured grids by a penalty based technique. At the interface between the two methods, we construct projection operators for the pointwise finite difference solutions and discontinuous Galerkin solutions based on piecewise polynomials. In addition, we optimize the projection operators for both accuracy and spectrum. We prove that the overall discretization conserves a discrete energy, and verify optimal convergence in numerical experiments.
In this paper, we propose an implicit staggered algorithm for the crystal plasticity finite element method (CPFEM) which makes use of dynamic relaxation at the constitutive integration level. An uncoupled version of the constitutive system consists of a multi-surface flow law complemented by an evolution law for the hardening variables. Since a saturation law is adopted for hardening, a sequence of a nonlinear iteration followed by a linear system solve is feasible. To tie the constitutive unknowns, the dynamic relaxation method is adopted. A Green-Naghdi plasticity model is adopted based on the Hencky strain calculated using a $[2/2]$ Pad\'e approximation. For the incompressible case, the approximation error is calculated exactly. An enhanced-assumed strain (EAS) element technology is adopted, which was found to be especially suited to localization problems such as those resulting from slip in crystal plasticity. Analysis of the results shows significant reduction of drift and well defined localization without spurious modes or hourglassing.
Practical parameter identifiability in ODE-based epidemiological models is a known issue, yet one that merits further study. It is essentially ubiquitous due to noise and errors in real data. In this study, to avoid uncertainty stemming from data of unknown quality, simulated data with added noise are used to investigate practical identifiability in two distinct epidemiological models. Particular emphasis is placed on the role of initial conditions, which are assumed unknown, except those that are directly measured. Instead of just focusing on one method of estimation, we use and compare results from various broadly used methods, including maximum likelihood and Markov Chain Monte Carlo (MCMC) estimation. Among other findings, our analysis revealed that the MCMC estimator is overall more robust than the point estimators considered. Its estimates and predictions are improved when the initial conditions of certain compartments are fixed so that the model becomes globally identifiable. For the point estimators, whether fixing or fitting the initial conditions that are not directly measured improves parameter estimates is model-dependent. Specifically, in the standard SEIR model, fixing the initial condition for the susceptible population S(0) improved parameter estimates, while this was not true when fixing the initial condition of the asymptomatic population in a more involved model. Our study corroborates that the quality of parameter estimates changes depending on whether the pre-peak or post-peak portion of the time series is used. Finally, our examples suggest that in the presence of significantly noisy data, the value of structural identifiability is moot.
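The synthetic-data setup described above can be sketched as follows (a standard SEIR model integrated by forward Euler with multiplicative noise added to the observed series; all parameter values, the noise model, and the step size are illustrative assumptions, not the study's actual configuration):

```python
import numpy as np

def seir(beta=0.4, sigma=0.2, gamma=0.1, N=1e5, I0=10.0, days=160, dt=0.1):
    """Forward-Euler integration of a standard SEIR model; returns daily I(t)."""
    S, E, I, R = N - I0, 0.0, I0, 0.0
    out = []
    steps_per_day = int(round(1 / dt))
    for step in range(int(days / dt)):
        dS = -beta * S * I / N
        dE = beta * S * I / N - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        S, E, I, R = S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR
        if step % steps_per_day == 0:
            out.append(I)
    return np.array(out)

rng = np.random.default_rng(2)
true_I = seir()
# Multiplicative lognormal noise mimics "simulated data with added noise"
noisy_I = true_I * rng.lognormal(0.0, 0.1, size=true_I.size)
```

Fitting model parameters (and, depending on the protocol, the unmeasured initial conditions) back to `noisy_I` is the kind of experiment in which practical identifiability issues surface.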
Solving partial differential equations with an annealing-based approach reduces to solving generalized eigenvalue problems. When a partial differential equation is discretized, it leads to a system of linear equations (SLE). Solving an SLE can be expressed as a generalized eigenvalue problem, which in turn can be converted into an optimization problem whose objective function is a generalized Rayleigh quotient. The proposed algorithm allows the computation of eigenvectors to arbitrary precision without increasing the number of variables on an Ising machine. Simple examples solved using this method, together with theoretical analysis, provide a guideline for appropriate parameter settings.
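The key variational fact used above is that the generalized Rayleigh quotient R(x) = (x^T A x)/(x^T B x), with B symmetric positive definite, is minimized by the eigenvector of the smallest generalized eigenvalue of A x = lambda B x. A small dense-matrix check (illustrative matrices only, solved classically rather than on an Ising machine):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
M = rng.normal(size=(n, n))
A = M @ M.T                      # symmetric positive semi-definite
B = np.eye(n) + 0.1 * A          # symmetric positive definite

# Reduce A x = lambda B x to a standard problem via B^{-1/2} A B^{-1/2}
Bv, BQ = np.linalg.eigh(B)
Bmh = BQ @ np.diag(Bv**-0.5) @ BQ.T
evals, evecs = np.linalg.eigh(Bmh @ A @ Bmh)
x = Bmh @ evecs[:, 0]            # minimizer of the generalized Rayleigh quotient

def rayleigh(v):
    return (v @ A @ v) / (v @ B @ v)
```

Any other vector attains a quotient value no smaller than the minimal generalized eigenvalue, which is what makes the quotient a usable objective function for an optimization-based (e.g. annealing) solver.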