Objective: The Bland-Altman plot is a widely cited and applied graphical approach for assessing the equivalence of quantitative measurement techniques, usually with the aim of replacing a traditional technique with a new, less invasive, or less expensive one. Although easy to communicate, the Bland-Altman plot is often misinterpreted because it lacks suitable inferential statistical support. The usual alternatives, such as Pearson's correlation or ordinary least-squares linear regression, also fail to locate the weaknesses of each measurement technique. Method: Here, inferential statistical support for the equivalence of measurement techniques is proposed in three nested tests based on structural regressions: equivalence of structural means (accuracy), equivalence of structural variances (precision), and concordance with the structural bisector line (agreement in measurements obtained from the same subject), assessed by analytical methods and by a robust bootstrap approach. Graphical outputs are also implemented, following Bland and Altman's principles for easy communication. Results: The performance of this method is demonstrated on five data sets from previously published articles that applied the Bland-Altman method. One case showed strict equivalence, three showed partial equivalence, and one showed poor equivalence. The developed R package, containing open code and data, is available with installation instructions for free distribution at Harvard Dataverse: //doi.org/10.7910/DVN/AGJPZH. Conclusion: It is possible to test whether two techniques are fully equivalent while preserving graphical communication according to Bland and Altman's principles, but adding robust and suitable inferential statistics. Decomposing equivalence into accuracy, precision, and agreement helps locate the source of the problem when a new technique needs to be corrected.
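As a point of reference for the graphical component, here is a minimal sketch of the classical Bland-Altman limits-of-agreement plot (differences against means, with the mean difference and approximate 95% limits). It illustrates the plotting principle only, not the structural-regression tests proposed above; the function name and toy data are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman_plot(a, b, ax=None):
    """Classical Bland-Altman plot: differences vs. means, with the
    mean difference (bias) and approximate 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mean = (a + b) / 2.0
    diff = a - b
    md = diff.mean()                  # estimated bias
    sd = diff.std(ddof=1)             # SD of the differences
    if ax is None:
        ax = plt.gca()
    ax.scatter(mean, diff, s=15)
    ax.axhline(md, color="k", label=f"bias = {md:.3f}")
    for lim in (md - 1.96 * sd, md + 1.96 * sd):
        ax.axhline(lim, color="k", linestyle="--")
    ax.set_xlabel("mean of the two techniques")
    ax.set_ylabel("difference (A - B)")
    ax.legend()
    return md, sd

# toy data: technique B adds a small constant bias to technique A
rng = np.random.default_rng(0)
a = rng.normal(100, 10, 80)
b = a + 2 + rng.normal(0, 3, 80)
bland_altman_plot(a, b)
plt.show()
```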
We present new Dirichlet-Neumann and Neumann-Dirichlet algorithms with a time domain decomposition applied to unconstrained parabolic optimal control problems. After a spatial semi-discretization, we use the Lagrange multiplier approach to derive a coupled forward-backward optimality system, which can then be solved using a time domain decomposition. Due to the forward-backward structure of the optimality system, three variants can be found for the Dirichlet-Neumann and Neumann-Dirichlet algorithms. We analyze their convergence behavior and determine the optimal relaxation parameter for each algorithm. Our analysis reveals that the most natural algorithms are actually only good smoothers, and there are better choices which lead to efficient solvers. We illustrate our analysis with numerical experiments.
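For concreteness, the coupled forward-backward structure can be seen in the following generic linear-quadratic model problem; the tracking target $y_d$, regularization weight $\nu$, and operator $A$ are generic placeholders, not the specific setting analyzed in the paper.

```latex
% Generic model after spatial semi-discretization: minimize
%   J(y,u) = 1/2 \int_0^T |y - y_d|^2 dt + (\nu/2) \int_0^T |u|^2 dt
% subject to y' = A y + u. The Lagrange multiplier approach yields the
% coupled forward-backward optimality system (with u = -\lambda/\nu):
\begin{align*}
  \dot{y}(t) &= A\,y(t) - \tfrac{1}{\nu}\,\lambda(t), & y(0) &= y_0,\\
  -\dot{\lambda}(t) &= A^{\top}\lambda(t) + y(t) - y_d(t), & \lambda(T) &= 0.
\end{align*}
% The state equation runs forward in time, the adjoint backward, which
% is what makes time domain decomposition of this system nontrivial.
```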
Eigenvalue decomposition (EVD) is an unavoidable operation for obtaining the precoders in practical massive multiple-input multiple-output (MIMO) systems. Due to the large antenna arrays and the finite computational resources at the base station (BS), the overwhelming computational complexity of EVD is one of the key factors limiting system performance. To address this problem, we propose an eigenvector prediction (EGVP) method that interpolates the precoding matrix with predicted eigenvectors. The basic idea is to exploit a few historical precoders to interpolate the remaining ones without an EVD of the channel state information (CSI). We transform the nonlinear EVD into a linear prediction problem and prove that the eigenvectors can be predicted with a complex exponential model. Furthermore, a channel prediction method called fast matrix pencil prediction (FMPP) is proposed to cope with the CSI delay when applying the EGVP method in mobile environments. An asymptotic analysis demonstrates how many samples are needed to achieve asymptotically error-free eigenvector and channel predictions. Finally, simulation results demonstrate the spectral efficiency improvement of our scheme over the benchmarks and its robustness across different mobility scenarios.
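The core of the linear-prediction step can be illustrated with a toy sketch: a signal composed of $p$ complex exponentials satisfies a linear recurrence of order $p$, so the recurrence coefficients can be fitted from historical samples and applied recursively. This is a generic illustration under that model assumption, not the paper's EGVP/FMPP implementation; all names are hypothetical.

```python
import numpy as np

def linear_predict(x, p, n_ahead):
    """Predict a signal that is (approximately) a sum of p complex
    exponentials. Such a signal obeys x[n] = sum_{m=1..p} a_m x[n-m];
    we fit the a_m by least squares and apply the recurrence forward."""
    x = np.asarray(x, complex)
    # Hankel-structured least-squares system for the recurrence coefficients
    H = np.column_stack([x[p - m:len(x) - m] for m in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(H, x[p:], rcond=None)
    out = list(x)
    for _ in range(n_ahead):
        out.append(np.dot(a, np.asarray(out[-p:][::-1])))
    return np.asarray(out[len(x):])

# toy check: two complex exponentials (Doppler-like phase rotations)
n = np.arange(32)
x = 1.0 * np.exp(1j * 0.3 * n) + 0.5 * np.exp(1j * 1.1 * n)
pred = linear_predict(x[:24], p=2, n_ahead=8)
print(np.max(np.abs(pred - x[24:])))   # near machine precision
```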
We introduce a new class of absorbing boundary conditions (ABCs) for the Helmholtz equation. The proposed ABCs are derived from a simple class of perfectly matched layers using $L$ discrete layers together with the $Q_N$ Lagrange finite element and the $N$-point Gauss-Legendre reduced integration rule. The proposed ABCs are classified by a tuple $(L,N)$ and achieve a reflection error of order $O(R^{2LN})$ for some $R<1$. The new ABCs generalise the perfectly matched discrete layers proposed by Guddati and Lim [Int. J. Numer. Meth. Engng 66 (6) (2006) 949-977], which they include as type $(L,1)$. An analysis of the proposed ABCs, motivated by the work of Ainsworth [J. Comput. Phys. 198 (1) (2004) 106-130], is presented. The new ABCs facilitate numerical implementation of the Helmholtz problem when $Q_N$ finite elements are used in the physical domain, and the analysis offers insight that may aid the development of ABCs in related areas.
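To illustrate what an ABC does in the simplest setting, the sketch below solves a 1D Helmholtz problem truncated by a first-order impedance condition $u'(1) = iku(1)$, for which the outgoing wave $e^{ikx}$ is exact. This is a generic demonstration of an absorbing condition, not the proposed $(L,N)$ family; all parameter values are illustrative.

```python
import numpy as np

# 1D Helmholtz problem u'' + k^2 u = 0 on (0, 1) with Dirichlet data
# u(0) = 1 and a first-order absorbing (impedance) condition
# u'(1) = i k u(1). The exact outgoing solution is u(x) = exp(i k x),
# so the error below measures discretization plus reflection error.
k, J = 20.0, 400
h = 1.0 / J
x = np.linspace(0.0, 1.0, J + 1)

A = np.zeros((J + 1, J + 1), dtype=complex)
b = np.zeros(J + 1, dtype=complex)
A[0, 0] = 1.0
b[0] = 1.0                                    # u(0) = 1
for j in range(1, J):                         # centered interior stencil
    A[j, j - 1] = A[j, j + 1] = 1.0 / h**2
    A[j, j] = -2.0 / h**2 + k**2
# eliminate the ghost point via u'(1) = i k u(1) at the last node
A[J, J - 1] = 2.0 / h**2
A[J, J] = -2.0 / h**2 + k**2 + 2.0j * k / h

u = np.linalg.solve(A, b)
print(np.max(np.abs(u - np.exp(1j * k * x))))  # small, O(h^2)-dominated
```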
We consider the solution of large stiff systems of ordinary differential equations with explicit exponential Runge--Kutta integrators. These problems arise from semi-discretized semi-linear parabolic partial differential equations on continuous domains or on inherently discrete graph domains. A series of results reduces the requirement of computing linear combinations of $\varphi$-functions in exponential integrators to approximating the action of a smaller number of matrix exponentials on certain vectors. State-of-the-art computational methods use polynomial Krylov subspaces of adaptive size for this task. They have the drawback that the number of Krylov iterations required to reach a desired tolerance increases drastically with the spectral radius of the discrete linear differential operator, and hence with the problem size. We present an approach that leverages rational Krylov subspace methods, which promise superior approximation quality. We prove a novel a posteriori error estimate for rational Krylov approximations to the action of the matrix exponential on vectors at single time points, which allows for an adaptive approach similar to existing polynomial Krylov techniques. We discuss pole selection and the efficient solution of the arising sequences of shifted linear systems by direct and preconditioned iterative solvers. Numerical experiments show that our method outperforms the state of the art for sufficiently large spectral radii of the discrete linear differential operators. The key to this is the approximately constant number of rational Krylov iterations, which enables near-linear scaling of the runtime with respect to the problem size.
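For orientation, the standard polynomial Krylov (Arnoldi) approximation of the action of a matrix exponential, the baseline this work improves upon, can be sketched as follows; a rational Krylov variant would build the subspace from shifted solves $(A - \sigma I)^{-1}v$ instead of products $Av$. The toy operator and parameters below are illustrative only.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm(A, b, m):
    """Polynomial Krylov (Arnoldi) approximation of exp(A) @ b:
    exp(A) b ~= ||b|| * V_m @ expm(H_m) @ e_1, where (V_m, H_m) come
    from m Arnoldi steps with modified Gram-Schmidt."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for k in range(m):
        w = A @ V[:, k]
        for i in range(k + 1):            # orthogonalize against V_0..V_k
            H[i, k] = V[:, i] @ w
            w -= H[i, k] * V[:, i]
        H[k + 1, k] = np.linalg.norm(w)
        V[:, k + 1] = w / H[k + 1, k]
    return beta * V[:, :m] @ expm(H[:m, :m])[:, 0]

# toy check on a small tridiagonal (Laplacian-like) operator
n = 200
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
b = np.random.default_rng(1).standard_normal(n)
print(np.linalg.norm(krylov_expm(A, b, 30) - expm(A) @ b))
```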
We consider a linear implicit-explicit (IMEX) time discretization of the Cahn-Hilliard equation with a source term, endowed with Dirichlet boundary conditions. For every sufficiently small time step, we build an exponential attractor of the discrete-in-time dynamical system associated with the discretization. We prove that, as the time step tends to 0, this attractor converges in the symmetric Hausdorff distance to an exponential attractor of the continuous-in-time dynamical system associated with the PDE. We also prove that the fractal dimension of the exponential attractor (and, consequently, of the global attractor) is bounded by a constant independent of the time step. The results also apply to the classical Cahn-Hilliard equation with Neumann boundary conditions.
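A minimal illustration of such a linear IMEX splitting, in a simplified periodic 1D Fourier-spectral setting rather than the Dirichlet setting analyzed here: the stiff fourth-order linear term is treated implicitly (a diagonal solve in Fourier space) and the nonlinearity explicitly. All parameter values are illustrative assumptions.

```python
import numpy as np

# 1D periodic Fourier-spectral sketch of a linear IMEX scheme for
# u_t = Laplace(u^3 - u - eps^2 Laplace u): the biharmonic term is
# implicit, the nonlinearity explicit, so each step costs two FFTs.
N, L, eps, dt = 256, 2.0 * np.pi, 0.05, 1e-4
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)     # angular wavenumbers
x = np.linspace(0.0, L, N, endpoint=False)
u = 0.1 * np.cos(x) + 0.01 * np.random.default_rng(0).standard_normal(N)

u_hat = np.fft.fft(u)
for _ in range(2000):
    nl_hat = np.fft.fft(u**3 - u)                # explicit nonlinear term
    u_hat = (u_hat - dt * k**2 * nl_hat) / (1.0 + dt * eps**2 * k**4)
    u = np.real(np.fft.ifft(u_hat))
print(u.min(), u.max())   # stays bounded, coarsening toward phases near +-1
```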
We analyze a space-time hybridizable discontinuous Galerkin method to solve the time-dependent advection-diffusion equation on deforming domains. We prove stability of the discretization in the advection-dominated regime by using weighted test functions and derive a priori space-time error estimates. A numerical example illustrates the theoretical results.
Many economic panel and dynamic models, such as models of rational behavior and Euler equations, imply that the parameters of interest are identified by conditional moment restrictions with high-dimensional conditioning instruments. We develop a novel inference method for parameters identified by such restrictions when the dimension of the conditioning instruments is high and there is no prior information about which instruments are weak or irrelevant. Building on Bierens (1990), we propose penalized maximum statistics and combine bootstrap inference with model selection. Our method optimizes the asymptotic power against a set of $n^{-1/2}$-local alternatives of interest by solving a data-dependent max-min problem for tuning parameter selection. We demonstrate the efficacy of our method with two empirical examples: the elasticity of intertemporal substitution and rational unbiased reporting of ability status. Extensive Monte Carlo experiments based on the first empirical example show that our inference procedure outperforms those available in the literature in realistic settings.
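As a stylized illustration of bootstrap inference with a maximum statistic over many instruments (omitting the penalization and the data-dependent tuning selection that are the paper's contribution), consider the following sketch; the function names and data-generating process are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_stat_test(rho, G, n_boot=2000, alpha=0.05):
    """Stylized max-type test of E[rho * g_j(Z)] = 0 over many
    instruments g_j, with a Gaussian-multiplier bootstrap critical
    value. rho: (n,) residuals; G: (n, p) instrument matrix."""
    n = len(rho)
    M = rho[:, None] * G                          # moment contributions
    mu, sd = M.mean(0), M.std(0, ddof=1)
    T = np.max(np.abs(np.sqrt(n) * mu / sd))      # studentized max statistic
    Mc = (M - mu) / sd                            # centered, studentized
    e = rng.standard_normal((n_boot, n))          # multiplier weights
    T_boot = np.max(np.abs(e @ Mc) / np.sqrt(n), axis=1)
    crit = np.quantile(T_boot, 1 - alpha)
    return T, crit, T > crit

# toy example: 50 instruments, moment condition holds under H0
n, p = 500, 50
Z = rng.standard_normal((n, p))
rho = rng.standard_normal(n)                      # H0: mean zero given Z
print(max_stat_test(rho, Z))
```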
In recent decades, a growing number of discoveries in mathematics have been assisted by computer algorithms, primarily for exploring large parameter spaces that humans would take too long to investigate. As computers and algorithms become more powerful, an intriguing possibility arises: the interplay between human intuition and computer algorithms can lead to discoveries of novel mathematical concepts that would otherwise remain elusive. To pursue this possibility, we have developed a massively parallel computer algorithm that discovers an unprecedented number of continued fraction formulas for fundamental mathematical constants. The sheer number of formulas discovered by the algorithm unveils a novel mathematical structure that we call the conservative matrix field. Such matrix fields (1) unify thousands of existing formulas, (2) generate infinitely many new formulas, and, most importantly, (3) lead to unexpected relations between different mathematical constants, including multiple integer values of the Riemann zeta function. Conservative matrix fields also enable new mathematical proofs of irrationality; in particular, we use them to generalize the celebrated proof by Ap\'ery of the irrationality of $\zeta(3)$. Utilizing thousands of personal computers worldwide, our computer-supported research strategy demonstrates the power of experimental mathematics, highlighting the prospects of large-scale computational approaches for tackling longstanding open problems and discovering unexpected connections across diverse fields of science.
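As a toy illustration of the kind of object being searched for, the sketch below evaluates a classical continued fraction formula, the well-known simple continued fraction of $e$, exactly with rational arithmetic; the conservative matrix field itself is beyond a short example.

```python
from fractions import Fraction
import math

def simple_cf(coeffs):
    """Evaluate a simple continued fraction [a0; a1, a2, ...] exactly
    with rationals, folding from the tail."""
    v = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        v = a + 1 / v
    return v

# e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ...]  (known pattern)
coeffs, k = [2], 1
while len(coeffs) < 30:
    coeffs += [1, 2 * k, 1]
    k += 1
approx = simple_cf(coeffs[:30])
print(float(approx), abs(float(approx) - math.e))  # error at float precision
```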
Differential geometric approaches are ubiquitous in several fields of mathematics, physics, and engineering, and their discretizations enable the development of network-based mathematical and computational frameworks, which are essential for large-scale data science. The Forman-Ricci curvature (FRC), a statistical measure based on Riemannian geometry and designed for networks, is known for its high capacity for extracting geometric information from complex networks. However, extracting information from dense networks remains challenging due to the combinatorial explosion of high-order network structures. Motivated by this challenge, we develop a set-theoretic representation theory for high-order network cells and FRC, together with their associated concepts and properties, which provides an alternative and efficient formulation for computing high-order FRC in complex networks. We provide pseudo-code, a software implementation coined FastForman, and a benchmark comparison with alternative implementations. Crucially, our representation theory reveals previous computational bottlenecks and accelerates the computation of FRC. As a consequence, our findings open new research possibilities for complex systems where higher-order geometric computations are required.
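For orientation, on an unweighted graph with no higher-order cells attached, the classical edge-level Forman curvature reduces to the closed form $F(u,v) = 4 - \deg(u) - \deg(v)$; the sketch below computes it. The higher-order cell contributions, whose combinatorial cost motivates this work, are deliberately omitted.

```python
from collections import defaultdict

def forman_edge_curvature(edges):
    """First-order Forman-Ricci curvature of each edge of an unweighted,
    undirected graph with no higher-order cells:
        F(u, v) = 4 - deg(u) - deg(v).
    Higher-order variants add contributions from incident 2-cells
    (e.g. triangles), which is where the combinatorics explode."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {(u, v): 4 - deg[u] - deg[v] for u, v in edges}

# toy example: a triangle with one pendant vertex
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
print(forman_edge_curvature(edges))
# {('a','b'): 0, ('b','c'): -1, ('c','a'): -1, ('c','d'): 0}
```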
Stochastic multi-scale modeling and simulation of nonlinear thermo-mechanical problems in composite materials with complicated random microstructures remains a challenging issue. In this paper, we develop a novel statistical higher-order multi-scale (SHOMS) method for the nonlinear thermo-mechanical simulation of random composite materials, designed to overcome the prohibitive cost of computations coupling the macro-scale and micro-scale. By virtue of statistical multi-scale asymptotic analysis and the Taylor series method, the SHOMS computational model is rigorously derived for accurately analyzing nonlinear thermo-mechanical responses of random composite materials at both the macro-scale and the micro-scale. Moreover, a point-wise local error analysis of the SHOMS solutions clearly illustrates why the higher-order asymptotic correction terms in the SHOMS computational model are indispensable for preserving local conservation of energy and momentum. Then, a corresponding space-time multi-scale numerical algorithm with off-line and on-line stages is designed to efficiently simulate nonlinear thermo-mechanical behaviors of random composite materials. Finally, extensive numerical experiments are presented to gauge the efficiency and accuracy of the proposed SHOMS approach.
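Schematically, SHOMS-type methods rest on a higher-order two-scale expansion of the following generic form; the symbols are generic placeholders rather than the paper's exact notation.

```latex
% Generic higher-order two-scale expansion: the multiscale solution
% u^\epsilon on a medium with microstructure period \epsilon is expanded
% in the macro variable x and the micro variable y = x/\epsilon,
\begin{equation*}
  u^{\epsilon}(x) \approx u_0(x)
    + \epsilon\, u_1\!\left(x, \tfrac{x}{\epsilon}\right)
    + \epsilon^{2}\, u_2\!\left(x, \tfrac{x}{\epsilon}\right),
\end{equation*}
% where u_0 solves a homogenized macro-scale problem and u_1, u_2 are
% built from cell (corrector) problems solved off-line on the unit cell;
% retaining the O(\epsilon^2) corrector is what restores local
% conservation properties in the point-wise error analysis.
```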