A new format for commutator-free Lie group methods is proposed based on explicit classical Runge-Kutta schemes. In this format, exponentials are reused at every stage, and storage is required for only two quantities: the right-hand side of the differential equation evaluated at a given Runge-Kutta stage and the function value updated at the same stage. The next stage of the scheme can overwrite these values. The result is proven for a 3-stage third-order method, and a conjecture for higher-order methods is formulated. Five numerical examples are provided in support of the conjecture. This new class of structure-preserving integrators has a wide variety of applications for numerically solving differential equations on manifolds.
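As an illustration of the basic building block such methods rely on, the sketch below advances a matrix ODE on the orthogonal group with a single-exponential (Lie-Euler) step. It is a minimal example of an exponential-based, structure-preserving update under assumed data, not the 3-stage scheme proposed here; the coefficient matrix A is a made-up example.

```python
# Minimal sketch of the exponential-map building block used by commutator-free
# Lie group methods (illustrative only; not the 3-stage scheme of the paper).
# Matrix ODE Y'(t) = A(t, Y) Y with skew-symmetric A, so the flow stays orthogonal.
import numpy as np
from scipy.linalg import expm

def A(t, Y):
    # Example skew-symmetric coefficient matrix (does not depend on Y here).
    return np.array([[0.0, t], [-t, 0.0]])

def lie_euler_step(Y, t, h):
    # One first-order commutator-free step: a single exponential applied to Y.
    return expm(h * A(t, Y)) @ Y

Y = np.eye(2)
t, h = 0.0, 0.01
for _ in range(100):
    Y = lie_euler_step(Y, t, h)
    t += h
print("orthogonality defect:", np.linalg.norm(Y.T @ Y - np.eye(2)))
```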
We present novel upper and lower bounds for estimating the collision probability of motion plans for autonomous agents with discrete-time linear Gaussian dynamics. Motion plans generated by planning algorithms cannot be executed perfectly by autonomous agents in reality due to the inherent uncertainties of the real world. Estimating the collision probability is crucial for characterizing the safety of trajectories and planning risk-optimal trajectories. Our approach applies standard results in probability theory, including the inequalities of Hunter, Kounias, Fréchet, and Dawson. Using a ground robot navigation example, we numerically demonstrate that our method is considerably faster than naive Monte Carlo sampling and that the proposed bounds are significantly less conservative than Boole's bound commonly used in the literature.
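The sketch below evaluates classical union-probability bounds of this kind on made-up marginal and pairwise collision probabilities (not the robot example from the paper); the arrays p and P2 are assumed inputs that would in practice come from the Gaussian distribution of the motion plan.

```python
# Illustrative computation of union-probability bounds for a collision event
# over a short horizon. p[i] = P(collision at step i); P2[i, j] = P(collision
# at both steps i and j). Numbers are made up for the sketch.
import numpy as np

p = np.array([0.02, 0.05, 0.04, 0.03])             # marginal event probabilities
P2 = np.array([[0.000, 0.010, 0.005, 0.002],
               [0.010, 0.000, 0.020, 0.008],
               [0.005, 0.020, 0.000, 0.010],
               [0.002, 0.008, 0.010, 0.000]])       # symmetric pairwise joints

S1 = p.sum()
S2 = np.triu(P2, k=1).sum()

boole_upper = min(1.0, S1)                                    # Boole (union bound)
kounias_upper = S1 - max(P2[j].sum() for j in range(len(p)))  # Kounias upper bound
frechet_lower = p.max()                                       # Frechet-style lower bound
k = int(2 * S2 // S1) + 1
dawson_lower = 2 * S1 / (k + 1) - 2 * S2 / (k * (k + 1))      # Dawson-Sankoff lower bound

print(f"Boole upper:   {boole_upper:.4f}")
print(f"Kounias upper: {kounias_upper:.4f}")
print(f"Frechet lower: {frechet_lower:.4f}")
print(f"Dawson lower:  {dawson_lower:.4f}")
```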
Examinations of any experiment involving living organisms require justification of the need for and moral defensibility of the study. Statistical planning, design, and sample size calculation of the experiment are no less important review criteria than general medical and ethical points to consider. Errors made in the statistical planning and data evaluation phase can have severe consequences for both results and conclusions. They might proliferate and thus affect future trials, an unintended outcome of fundamental research with profound ethical consequences. Therefore, any trial must be efficient in both a medical and a statistical sense in answering the questions of interest in order to be considered approvable. Unified statistical standards are currently missing for animal review boards in Germany. To accompany this process, we developed a biometric form to be filled in and handed in with the proposal to the local authority on animal welfare. It addresses relevant points to consider for the biostatistical planning of animal experiments and can help both the applicants and the reviewers in overseeing the entire experiment(s) planned. Furthermore, the form might also aid in meeting the current standards set by the 3+3R principle of animal experimentation: Replacement, Reduction, Refinement, as well as Robustness, Registration, and Reporting. The form has already been in use by the local authority on animal welfare in Berlin, Germany. In addition, we provide a reference to our user guide, which gives more detailed explanations and examples for each section of the biometric form. Unifying the set of biostatistical aspects will hold both the applicants and the reviewers to equal standards and increase the quality of preclinical research projects, also for translational, multicenter, or international studies.
In image processing, the amount of data to be processed grows rapidly, in particular when imaging methods yield images of more than two dimensions or time series of images. Thus, efficient processing is a challenge, as data sizes may push even supercomputers to their limits. Quantum image processing promises to encode images with logarithmically fewer qubits than classical pixels in the image. In theory, this is huge progress, but so far few experiments have been conducted in practice, in particular on real backends. Often, the precise conversion of classical data to quantum states, the exact implementation, and the interpretation of the measurements in the classical context are challenging. We investigate these practical questions in this paper. In particular, we study the feasibility of the Flexible Representation of Quantum Images (FRQI). Furthermore, we experimentally determine the limit in the current noisy intermediate-scale quantum (NISQ) era, i.e., up to which size an image can be encoded, both on simulators and on real backends. Finally, we propose a method for simplifying the circuits needed for the FRQI. With our alteration, the number of gates needed, especially of the error-prone controlled-NOT gates, can be reduced. As a consequence, the size of manageable images increases.
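A toy sketch of the FRQI qubit-count argument follows, built classically as a NumPy state vector rather than on a simulator or real backend; the 2x2 image and the qubit ordering (color qubit most significant) are illustrative assumptions.

```python
# Toy illustration of the FRQI encoding: an N-pixel grayscale image needs only
# ceil(log2 N) + 1 qubits, one color qubit plus position qubits.
import numpy as np

img = np.array([[0.0, 0.25],
                [0.5, 1.0]])                 # tiny 2x2 image, values in [0, 1]
pixels = img.flatten()
n_pos = int(np.ceil(np.log2(pixels.size)))   # position qubits
n_qubits = n_pos + 1                         # plus one color qubit
theta = pixels * np.pi / 2                   # map intensity to a rotation angle

# FRQI state: (1/sqrt(N)) * sum_i (cos(theta_i)|0> + sin(theta_i)|1>) (x) |i>
state = np.zeros(2 ** n_qubits)
for i, th in enumerate(theta):
    state[i] = np.cos(th) / np.sqrt(pixels.size)               # |0>|i> amplitude
    state[i + 2 ** n_pos] = np.sin(th) / np.sqrt(pixels.size)  # |1>|i> amplitude

print("qubits needed:", n_qubits, "for", pixels.size, "pixels")
print("norm:", np.linalg.norm(state))        # should be 1.0
```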
In numerical simulations of complex flows with discontinuities, it is necessary to use nonlinear schemes. The spectrum of the scheme used has a significant impact on the resolution and stability of the computation. Based on the approximate dispersion relation method, we combine the corresponding spectral property with the dispersion relation preservation proposed by De and Eswaran (J. Comput. Phys. 218 (2006) 398-416) and propose a quasi-linear dispersion relation preservation (QL-DRP) analysis method, through which the group velocity of a nonlinear scheme can be determined. In particular, we derive the group velocity property when a high-order Runge-Kutta scheme is used and compare the performance of different time schemes with QL-DRP. The soundness of the QL-DRP method is verified by numerical simulation and the discrete Fourier transform method. To further evaluate the performance of a nonlinear scheme in finding the group velocity, new hyperbolic equations are designed. The validity of QL-DRP and the group-velocity preservation of several schemes are investigated using two examples: the equation for one-dimensional wave propagation and the new hyperbolic equations. The results show that the QL-DRP method integrated with high-order time schemes can determine the group velocity of nonlinear schemes and evaluate their performance reasonably and efficiently.
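To make the underlying notion concrete, the sketch below performs the standard linear ingredient of such analyses: the modified wavenumber of a second-order central difference and the numerical group velocity it implies for linear advection. This is only the linear building block under assumed parameters, not the QL-DRP procedure itself.

```python
# Linear dispersion analysis sketch: modified wavenumber of a spatial scheme
# and the implied numerical group velocity for u_t + c u_x = 0.
import numpy as np

k_dx = np.linspace(1e-3, np.pi, 400)         # reduced wavenumber k*dx

# Modified wavenumber of the 2nd-order central difference for u_x:
# (u_{j+1} - u_{j-1}) / (2 dx)  ->  k'*dx = sin(k*dx)
k_mod = np.sin(k_dx)

# Numerical group velocity: c * d(Re k'dx)/d(k dx); the exact value is c.
c = 1.0
group_velocity = c * np.gradient(k_mod, k_dx)

print("group velocity at k*dx = pi/8:", group_velocity[np.searchsorted(k_dx, np.pi / 8)])
print("group velocity at k*dx = pi/2:", group_velocity[np.searchsorted(k_dx, np.pi / 2)])
```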
Projections of bipartite or two-mode networks capture co-occurrences, and are used in diverse fields (e.g., ecology, economics, bibliometrics, politics) to represent unipartite networks. A key challenge in analyzing such networks is determining whether an observed number of co-occurrences between two nodes is significant, and therefore whether an edge exists between them. One approach, the fixed degree sequence model (FDSM), evaluates the significance of an edge's weight by comparison to a null model in which the degree sequences of the original bipartite network are fixed. Although the FDSM is an intuitive null model, it is computationally expensive because it requires Monte Carlo simulation to estimate each edge's $p$-value, and therefore is impractical for large projections. In this paper, we explore four potential alternatives to FDSM: fixed fill model (FFM), fixed row model (FRM), fixed column model (FCM), and stochastic degree sequence model (SDSM). We compare these models to FDSM in terms of accuracy, speed, statistical power, similarity, and ability to recover known communities. We find that the computationally fast SDSM offers a statistically conservative but close approximation of the computationally impractical FDSM under a wide range of conditions, and that it correctly recovers a known community structure even when the signal is weak. Therefore, although each backbone model may have particular applications, we recommend SDSM for extracting the backbone of bipartite projections when FDSM is impractical.
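For concreteness, the following sketch runs an FDSM-style Monte Carlo test for a single projection edge on a random toy incidence matrix, using degree-preserving checkerboard swaps; the matrix size, number of swaps between samples, and number of samples are arbitrary illustrative choices, not the settings compared in the paper.

```python
# FDSM-style Monte Carlo test for one projection edge: checkerboard swaps
# preserve both bipartite degree sequences, and the p-value is the fraction of
# null samples whose co-occurrence count reaches the observed one.
import numpy as np

rng = np.random.default_rng(0)
B = rng.integers(0, 2, size=(20, 50))        # toy bipartite incidence matrix
i, j = 0, 1                                  # focal pair of row nodes
observed = int(B[i] @ B[j])                  # observed co-occurrence (edge weight)

def checkerboard_swap(M, n_swaps, rng):
    M = M.copy()
    for _ in range(n_swaps):
        r = rng.choice(M.shape[0], 2, replace=False)
        c = rng.choice(M.shape[1], 2, replace=False)
        sub = M[np.ix_(r, c)]
        # Only 2x2 checkerboards [[1,0],[0,1]] / [[0,1],[1,0]] can be flipped
        # without changing any row or column sum.
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            M[np.ix_(r, c)] = sub[::-1]
    return M

null_counts = []
M = B
for _ in range(500):                         # Monte Carlo samples from the null
    M = checkerboard_swap(M, 200, rng)
    null_counts.append(int(M[i] @ M[j]))

p_value = np.mean([count >= observed for count in null_counts])
print("observed co-occurrence:", observed, " p-value:", round(p_value, 3))
```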
We consider the numerical solution of the Cauchy problem for an evolutionary equation with memory, where the kernel of the integral term is a difference kernel. The computational implementation is complicated by the need to work with the approximate solution at all previous points in time. In this paper, the nonlocal problem under consideration is transformed into a local one, in which a weakly coupled system of equations with additional ordinary differential equations is solved. This approach is based on the approximation of the difference kernel by a sum of exponentials. Estimates for the stability of the solution with respect to the initial data and the right-hand side are obtained for the corresponding Cauchy problem. Two-level schemes with weights that are convenient for computational implementation are constructed and investigated. The theoretical consideration is supplemented by results of the numerical solution of the integro-differential equation when the kernel is a stretched exponential function.
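A minimal sketch of this reduction, with assumed exponential rates and a toy time grid: the stretched-exponential kernel is fitted by a sum of exponentials in the least-squares sense, after which the memory term can be replaced by a weakly coupled system of local ODEs (the paper's construction and estimates are not reproduced here).

```python
# Sketch: approximate a difference kernel k(t) by a sum of exponentials, so the
# memory integral  int_0^t k(t-s) u(s) ds  becomes  sum_j c_j z_j(t)  with local
# auxiliary ODEs  z_j' = -a_j z_j + u,  z_j(0) = 0.
import numpy as np

t = np.linspace(0.0, 5.0, 500)
kernel = np.exp(-np.sqrt(t))                 # stretched exponential, beta = 1/2

a = np.array([0.2, 1.0, 5.0, 25.0])          # prescribed exponential rates (assumed)
basis = np.exp(-np.outer(t, a))              # columns exp(-a_j t)
c, *_ = np.linalg.lstsq(basis, kernel, rcond=None)   # least-squares weights

fit = basis @ c
print("rates:", a, " weights:", np.round(c, 4))
print("max kernel fit error:", np.abs(fit - kernel).max())
# Each z_j then solves a scalar linear ODE, so the nonlocal term is replaced by
# a local system that requires no storage of the solution history.
```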
Regular expressions with capture variables, also known as regex-formulas, extract relations of spans (intervals identified by their start and end indices) from text. In turn, the class of regular document spanners is the closure of the regex-formulas under the Relational Algebra. We investigate the computational complexity of querying text by aggregate functions, such as sum, average, and quantile, on top of regular document spanners. To this end, we formally define aggregate functions over regular document spanners and analyze the computational complexity of their exact and approximate computation. More precisely, we show that in a restricted case, all studied aggregate functions can be computed in polynomial time. In general, however, even though exact computation is intractable, some aggregates can still be approximated with fully polynomial-time randomized approximation schemes (FPRAS).
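As a toy illustration of the querying setup (not of the formal spanner semantics or the complexity results), the snippet below extracts number spans with a capture group and computes sum, average, and a quantile over them.

```python
# Toy example: aggregate over values extracted by a regex with a capture group.
import re
import statistics

text = "item A costs 12 dollars, item B costs 7 dollars, item C costs 30 dollars"
prices = [int(m.group("price")) for m in re.finditer(r"costs (?P<price>\d+)", text)]

print("sum:    ", sum(prices))
print("average:", statistics.mean(prices))
print("median: ", statistics.median(prices))   # a quantile
```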
The support vector machine (SVM) and minimum Euclidean norm least squares regression are two fundamentally different approaches to fitting linear models, but they have recently been connected in models for very high-dimensional data through a phenomenon of support vector proliferation, where every training example used to fit an SVM becomes a support vector. In this paper, we explore the generality of this phenomenon and make the following contributions. First, we prove a super-linear lower bound on the dimension (in terms of sample size) required for support vector proliferation in independent feature models, matching the upper bounds from previous works. We further identify a sharp phase transition in Gaussian feature models, bound the width of this transition, and give experimental support for its universality. Finally, we hypothesize that this phase transition occurs only in much higher-dimensional settings in the $\ell_1$ variant of the SVM, and we present a new geometric characterization of the problem that may elucidate this phenomenon for the general $\ell_p$ case.
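The phenomenon is easy to observe empirically. The sketch below fits a (nearly) hard-margin linear SVM to independent Gaussian features with random labels and counts the support vectors; the dimensions and seed are illustrative choices, not the paper's bounds or phase-transition analysis.

```python
# Empirical illustration of support vector proliferation in high dimensions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 30, 5000                               # sample size and dimension, d >> n
X = rng.standard_normal((n, d))
y = rng.choice([-1, 1], size=n)

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # large C approximates a hard margin
print(f"{len(clf.support_)} of {n} training points are support vectors")
# In the proliferation regime every point is a support vector, and the SVM
# solution coincides with minimum-norm least-squares interpolation of the labels.
```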
Models of stochastic processes are widely used in almost all fields of science. Theory validation, parameter estimation, and prediction all require model calibration and statistical inference using data. However, data are almost always incomplete observations of reality. This leads to a great challenge for statistical inference because the likelihood function will be intractable for almost all partially observed stochastic processes. This renders many statistical methods, especially within a Bayesian framework, impossible to implement. Therefore, computationally expensive likelihood-free approaches are applied that replace likelihood evaluations with realisations of the model and observation process. For accurate inference, however, likelihood-free techniques may require millions of expensive stochastic simulations. To address this challenge, we develop a new method based on recent advances in multilevel and multifidelity methods. Our approach combines the multilevel Monte Carlo telescoping summation, applied to a sequence of approximate Bayesian posterior targets, with a multifidelity rejection sampler to minimise the number of computationally expensive exact simulations required for accurate inference. We present the derivation of our new algorithm for likelihood-free Bayesian inference, discuss practical implementation details, and demonstrate substantial performance improvements. Using examples from systems biology, we demonstrate improvements of more than two orders of magnitude over standard rejection sampling techniques. Our approach is generally applicable to accelerate other sampling schemes, such as sequential Monte Carlo, to enable feasible Bayesian analysis for realistic practical applications in physics, chemistry, biology, epidemiology, ecology and economics.
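For reference, the baseline that such approaches accelerate is plain likelihood-free (ABC) rejection sampling. The sketch below implements it for a toy Poisson model with an assumed uniform prior, summary statistic, and tolerance, not the systems biology examples from the paper.

```python
# Baseline ABC rejection sampler (the standard scheme to be accelerated).
import numpy as np

rng = np.random.default_rng(1)
data = rng.poisson(lam=4.0, size=50)          # "observed" data from true rate 4
obs_summary = data.mean()

def simulate(rate, rng):
    # Forward simulation of the stochastic model, summarised by the sample mean.
    return rng.poisson(lam=rate, size=50).mean()

eps, n_samples = 0.2, 500
posterior, attempts = [], 0
while len(posterior) < n_samples:
    rate = rng.uniform(0.0, 10.0)             # draw from the prior
    attempts += 1
    if abs(simulate(rate, rng) - obs_summary) < eps:   # accept if close to data
        posterior.append(rate)

print("posterior mean:", np.mean(posterior), " acceptance rate:", n_samples / attempts)
```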
We show NP-completeness for various problems about the existence of arithmetic expression trees. When given a set of operations, inputs, and a target value, does there exist an expression tree with those inputs and operations that evaluates to the target? We consider the variation where the structure of the tree is also given and the variation where no parentheses are allowed in the expression.
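On small instances the question can be answered by brute force; the sketch below enumerates all values reachable by expression trees over a toy input multiset and operation set (the NP-completeness results mean this approach does not scale).

```python
# Brute-force check of the expression-tree existence problem on a small instance.
from itertools import combinations
from fractions import Fraction

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
}

def reachable(values):
    # All values computable by some expression tree that uses every input once.
    if len(values) == 1:
        return {values[0]}
    results = set()
    for i, j in combinations(range(len(values)), 2):
        rest = [v for k, v in enumerate(values) if k not in (i, j)]
        for x, y in ((values[i], values[j]), (values[j], values[i])):
            for op in OPS.values():
                results |= reachable(rest + [op(x, y)])
    return results

inputs = [Fraction(v) for v in (3, 5, 7, 2)]
target = Fraction(24)
print("expression tree exists:", target in reachable(inputs))   # e.g. 3*5 + 7 + 2
```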