This paper proves convergence of one-level and multilevel unsymmetric collocation for second-order elliptic boundary value problems on bounded domains. Using Schaback's linear discretization theory, $L_2$ error bounds are obtained for kernel-based trial spaces generated by compactly supported radial basis functions. For one-level unsymmetric collocation, we obtain convergence when the test discretization is finer than the trial discretization. The convergence rates depend on the regularity of the solution, the smoothness of the computational domain, and the approximation properties of the scaled kernel-based spaces. The multilevel process is implemented by employing successively refined scattered data sets and scaled compactly supported radial basis functions with varying support radii. Convergence of multilevel collocation is then proved from the theoretical results for one-level unsymmetric collocation. In addition to the dependencies present in the one-level case, the convergence rates of multilevel unsymmetric collocation depend in particular on the refinement rules for the scattered data and on the selection of the scaling parameters.
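For concreteness, the following is a minimal one-dimensional sketch of one-level unsymmetric (Kansa-type) collocation with a compactly supported Wendland $C^2$ kernel and a test discretization finer than the trial discretization. The model problem $-u'' = f$ with $u(0) = u(1) = 0$, the kernel choice, and all parameter values are illustrative assumptions, not the paper's exact setting.

import numpy as np

# Wendland C2 kernel phi(r) = (1-r)^4 (4r+1) for r < 1 (zero otherwise),
# with second derivative phi''(r) = -20 (1-r)^2 (1-4r).
def phi(r):
    return np.where(r < 1.0, (1 - r)**4 * (4*r + 1), 0.0)

def phi2(r):
    return np.where(r < 1.0, -20.0 * (1 - r)**2 * (1 - 4*r), 0.0)

# Model problem: -u'' = f on (0,1), u(0) = u(1) = 0, exact u = sin(pi x).
u_exact = lambda x: np.sin(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)

delta = 0.4                          # support radius of the scaled kernel
xc = np.linspace(0, 1, 17)           # trial centers
xt = np.linspace(0, 1, 65)           # test points, finer than the trial set

R = np.abs(xt[:, None] - xc[None, :]) / delta
interior = (xt > 0) & (xt < 1)

# Unsymmetric collocation: PDE rows at interior test points, boundary
# rows at x = 0 and x = 1; the system is overdetermined by construction.
A = np.where(interior[:, None], -phi2(R) / delta**2, phi(R))
b = np.where(interior, f(xt), 0.0)
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

xe = np.linspace(0, 1, 201)
Re = np.abs(xe[:, None] - xc[None, :]) / delta
print("max error:", np.max(np.abs(phi(Re) @ coef - u_exact(xe))))

A multilevel variant would repeat such a solve on successively refined center sets with decreasing support radius $\delta$, collocating at each level the residual left by the previous levels.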
The widespread use of maximum Jeffreys'-prior penalized likelihood in binomial-response generalized linear models, and in logistic regression in particular, is supported by the results of Kosmidis and Firth (2021, Biometrika), who show that the resulting estimates are always finite, even in cases where the maximum likelihood estimates are not, which is a practical issue regardless of the size of the data set. In logistic regression, the implied adjusted score equations are formally bias-reducing in asymptotic frameworks with a fixed number of parameters, and appear to deliver a substantial reduction in the persistent bias of the maximum likelihood estimator in high-dimensional settings where the number of parameters grows asymptotically linearly but slower than the number of observations. In this work, we develop and present two new variants of iteratively reweighted least squares (IWLS) for estimating generalized linear models with adjusted score equations for mean bias reduction, and for maximizing the likelihood penalized by a positive power of the Jeffreys'-prior penalty. These variants eliminate the requirement of storing $O(n)$ quantities in memory and can operate with data sets that exceed computer memory or even hard-drive capacity. We achieve this through incremental QR decompositions, which allow the IWLS iterations to access only data chunks of predetermined size. We assess the procedures through a real-data application with millions of observations and through high-dimensional logistic regression, where a large-scale simulation experiment produces concrete evidence for the existence of a simple adjustment to the maximum Jeffreys'-penalized likelihood estimates that delivers high accuracy in terms of signal recovery, even in cases where estimates from maximum likelihood and other recently proposed corrective methods do not exist.
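As an illustration of the memory-bounded building block, here is a minimal sketch of a weighted least-squares solve that sees the data one chunk at a time and retains only $O(p^2)$ state through incremental QR decompositions; the chunk layout, sizes, and toy data are illustrative assumptions, and the adjusted-score and penalized-likelihood details of the paper are omitted.

import numpy as np

def chunked_wls(chunks, p):
    """Weighted least squares over chunks (X, z, w); only the p x p
    triangular factor R and the rotated response t persist between chunks."""
    R = np.zeros((0, p))
    t = np.zeros(0)
    for X, z, w in chunks:
        sw = np.sqrt(w)
        # Stack the current factor with the new weighted rows and
        # re-triangularize; equivalent to a QR of all rows seen so far.
        Q, R_new = np.linalg.qr(np.vstack([R, sw[:, None] * X]))
        t = Q.T @ np.concatenate([t, sw * z])
        R = R_new
    return np.linalg.solve(R, t)

# Toy usage: 10 chunks of 1000 rows, 5 covariates, unit working weights.
rng = np.random.default_rng(0)
beta = np.arange(1.0, 6.0)
chunks = []
for _ in range(10):
    X = rng.standard_normal((1000, 5))
    chunks.append((X, X @ beta + 0.1 * rng.standard_normal(1000), np.ones(1000)))
print(chunked_wls(chunks, p=5))      # close to [1, 2, 3, 4, 5]

Within IWLS, each iteration would make one such pass using the working weights and working responses of the current iterate, so nothing of size $O(n)$ ever needs to be held in memory.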
Blow-up solutions to a heat equation with spatial periodicity and a quadratic nonlinearity are studied through asymptotic analysis and a variety of numerical methods. The focus is on the dynamics of the singularities in the complexified space domain. Blow-up in finite time is caused by these singularities eventually reaching the real axis. The analysis distinguishes between small and large nonlinear effects and gives insight into the various time scales on which blow-up is approached. It is shown that an ordinary differential equation with quadratic nonlinearity plays a central role in the asymptotic analysis. This equation is studied in detail, including its numerical computation on multiple Riemann sheets, and its far-field solutions are shown to be given at leading order by a Weierstrass elliptic function.
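As a worked illustration of the mechanism (a minimal model, not the paper's equation), consider the scalar quadratic ODE $u' = u^2$ with $u(0) = u_0$. Separation of variables gives $u(t) = u_0/(1 - u_0 t)$, whose single pole sits at $t_* = 1/u_0$; for complex $u_0$ the pole lies off the real axis, and blow-up occurs on the real axis precisely when such a singularity reaches it. The Weierstrass function arises in the same spirit: $\wp$ satisfies $(\wp')^2 = 4\wp^3 - g_2\wp - g_3$, which upon differentiation becomes the quadratically nonlinear equation $\wp'' = 6\wp^2 - g_2/2$.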
We design a monotone meshfree finite difference method for linear elliptic equations in non-divergence form on point clouds via a nonlocal relaxation method. The key idea is a novel combination of a nonlocal integral relaxation of the PDE problem with a robust meshfree discretization on point clouds. Minimal positive stencils are obtained through a local $l_1$-type optimization procedure that automatically guarantees the stability, and therefore the convergence, of the meshfree discretization for linear elliptic equations. A major theoretical contribution concerns the existence of consistent and positive stencils for a given point cloud geometry. We provide sufficient conditions for the existence of positive stencils by finding neighbors within an ellipse (2d) or ellipsoid (3d) surrounding each interior point, generalizing the study for Poisson's equation by Seibold (Comput Methods Appl Mech Eng 198(3-4):592-601, 2008). It is well known that wide stencils are in general needed for constructing consistent and monotone finite difference schemes for linear elliptic equations. Our result represents a significant improvement in the stencil width estimate for positive-type finite difference methods for linear elliptic equations in the near-degenerate regime (when the ellipticity constant becomes small), compared to previously known works in this area. Numerical algorithms and practical guidance are provided with an eye toward the case of small ellipticity constant. Finally, we present numerical results for the performance of our method in both 2d and 3d, examining a range of ellipticity constants including the near-degenerate regime.
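To make the stencil construction concrete, below is a minimal 2d sketch that searches for nonnegative finite difference weights reproducing a constant-coefficient operator $\mathrm{tr}(A D^2 u)$ at a single interior point via linear programming; the neighbor set, the target matrix $A$, and the use of scipy's linprog are illustrative assumptions rather than the paper's exact optimization procedure.

import numpy as np
from scipy.optimize import linprog

def positive_stencil(x0, nbrs, A):
    """Nonnegative weights w with sum_k w_k (x_k - x0) = 0 and
    sum_k w_k (x_k - x0)(x_k - x0)^T = 2A, minimizing the l1 norm
    (= sum of the weights, since w >= 0) to promote a minimal stencil."""
    d = nbrs - x0                          # offsets, shape (K, 2)
    Aeq = np.vstack([
        d[:, 0], d[:, 1],                  # first moments vanish
        d[:, 0]**2, d[:, 1]**2,            # match A_xx and A_yy
        d[:, 0] * d[:, 1],                 # match A_xy
    ])
    beq = np.array([0.0, 0.0, 2*A[0, 0], 2*A[1, 1], 2*A[0, 1]])
    res = linprog(np.ones(len(d)), A_eq=Aeq, b_eq=beq,
                  bounds=[(0, None)] * len(d))
    return res.x if res.success else None  # None: no positive stencil here

# Toy usage: perturbed grid neighbors around the origin, anisotropic A.
rng = np.random.default_rng(1)
grid = np.array([[i, j] for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0)])
nbrs = grid + 0.05 * rng.standard_normal(grid.shape)
print(positive_stencil(np.zeros(2), nbrs, A=np.array([[1.0, 0.3], [0.3, 0.5]])))

A Taylor expansion shows why these moment conditions suffice: with vanishing first moments and second moments equal to $2A$, the weighted differences $\sum_k w_k (u(x_k) - u(x_0))$ reproduce $\mathrm{tr}(A D^2 u(x_0))$ up to higher-order terms, and nonnegativity of the weights is exactly the positivity (monotonicity) of the scheme.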
A population-averaged additive subdistribution hazards model is proposed to assess the marginal effects of covariates on the cumulative incidence function and to analyze correlated failure time data subject to competing risks. This approach extends the population-averaged additive hazards model by accommodating potentially dependent censoring due to competing events other than the event of interest. Assuming an independent working correlation structure, an estimating equations approach is outlined to estimate the regression coefficients, and a new sandwich variance estimator is proposed. The proposed sandwich variance estimator accounts for the correlations both between failure times and between censoring times within each cluster, and is robust to misspecification of the unknown dependency structure. We further develop goodness-of-fit tests to assess the adequacy of the additive structure of the subdistribution hazards, both for the overall model and for each covariate. Simulation studies are conducted to investigate the performance of the proposed methods in finite samples. We illustrate our methods using data from the STrategies to Reduce Injuries and Develop confidence in Elders (STRIDE) trial.
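For readers unfamiliar with the mechanics, here is a generic sketch of a cluster-robust sandwich variance estimator assembled from per-cluster estimating functions; the linear estimating equation and toy data are illustrative assumptions and do not reproduce the paper's subdistribution hazards estimator.

import numpy as np

def sandwich_variance(psi_list, dpsi_list):
    """Cluster-robust sandwich V = A^{-1} B A^{-T}: A sums the (negative)
    derivatives of the estimating functions, B sums the outer products
    of the per-cluster score contributions."""
    A = sum(dpsi_list)
    B = sum(np.outer(s, s) for s in psi_list)
    Ainv = np.linalg.inv(A)
    return Ainv @ B @ Ainv.T

# Toy usage: linear estimating equation psi_c = X_c^T (y_c - X_c beta)
# evaluated at the estimate, with 50 clusters sharing within-cluster noise.
rng = np.random.default_rng(2)
beta = np.array([0.5, -1.0, 2.0])
Xs, ys = [], []
for _ in range(50):
    X = rng.standard_normal((20, 3))
    y = X @ beta + rng.standard_normal() + 0.5 * rng.standard_normal(20)
    Xs.append(X); ys.append(y)
bhat = np.linalg.lstsq(np.vstack(Xs), np.concatenate(ys), rcond=None)[0]
psi  = [X.T @ (y - X @ bhat) for X, y in zip(Xs, ys)]
dpsi = [X.T @ X for X in Xs]
print(np.sqrt(np.diag(sandwich_variance(psi, dpsi))))  # robust std. errors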
Penalized $M$-estimators for logistic regression models have previously been studied in the fixed-dimension setting in order to obtain sparse statistical models and automatic variable selection. In this paper, we derive asymptotic results for penalized $M$-estimators when the dimension $p$ grows to infinity with the sample size $n$. Specifically, we obtain consistency and rates of convergence for some choices of the penalty function. Moreover, we prove that these estimators consistently select variables with probability tending to one, and we derive their asymptotic distribution.
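As a minimal computational illustration, the sketch below fits an $\ell_1$-penalized logistic regression, the simplest member of the penalized $M$-estimation family (the negative log-likelihood as the loss), and reports which variables are selected; the data, penalty strength, and solver are illustrative assumptions, not the paper's estimators.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Sparse truth: only the first 3 of 50 coefficients are nonzero.
rng = np.random.default_rng(3)
n, p = 400, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

# l1-penalized fit; C is the inverse penalty strength (a tuning choice).
fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print("selected variables:", np.flatnonzero(fit.coef_[0]))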
This paper considers a regularization continuation method with a trust-region updating strategy for nonlinearly equality-constrained optimization problems. Namely, it uses the inverse of the regularized quasi-Newton matrix as the preconditioner to improve computational efficiency in the well-posed phase, and it adopts the inverse of the regularized two-sided projection of the Hessian as the preconditioner to improve robustness in the ill-conditioned phase. Since the new method solves only a linear system of equations at every iteration, whereas sequential quadratic programming (SQP) must solve a quadratic programming subproblem at every iteration, it is faster than SQP. Numerical results also show that it is more robust and faster than SQP (the built-in subroutine fmincon.m of the MATLAB R2020a environment and the subroutine SNOPT executed in the GAMS v28.2 (2019) environment); the computational time of the new method is about one third of that of fmincon.m for large-scale problems. Finally, a global convergence analysis of the new method is given.
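To indicate the kind of per-iteration work involved, here is a minimal sketch of a single regularized Newton-type step for an equality-constrained problem, obtained from one linear KKT solve; the regularization parameter, the toy problem, and the dense solve are illustrative assumptions and do not reproduce the paper's continuation or trust-region machinery.

import numpy as np

def regularized_kkt_step(g, H, c, J, sigma):
    """Solve the regularized KKT system
        [H + sigma I  J^T] [dx ]   [-g]
        [J            0  ] [lam] = [-c],
    a single linear solve per iteration, in contrast to an SQP subproblem."""
    n, m = len(g), len(c)
    K = np.block([[H + sigma * np.eye(n), J.T],
                  [J, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -c]))
    return sol[:n], sol[n:]

# Toy usage: minimize 0.5 ||x||^2 subject to x_1 + x_2 = 1, from x = 0.
x = np.zeros(2)
g = x.copy()                          # gradient of the objective
H = np.eye(2)                         # Hessian of the objective
c = np.array([x[0] + x[1] - 1.0])     # constraint residual
J = np.array([[1.0, 1.0]])            # constraint Jacobian
dx, lam = regularized_kkt_step(g, H, c, J, sigma=0.1)
print(x + dx)                         # lands on the minimizer (0.5, 0.5)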
The power of Clifford, or geometric, algebra lies in its ability to represent geometric operations in a concise and elegant manner. Clifford algebras provide natural generalizations of complex numbers, dual numbers, and quaternions to non-commutative multivectors. This paper demonstrates an algorithm for the computation of inverses of such numbers in a non-degenerate Clifford algebra of arbitrary dimension. The algorithm is a variation of the Faddeev-LeVerrier-Souriau algorithm and is implemented in the open-source Computer Algebra System Maxima. Symbolic and numerical examples in different Clifford algebras are presented.
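For reference, the classical Faddeev-LeVerrier iteration underlying such algorithms computes a matrix inverse along with the characteristic polynomial. The numpy sketch below shows the plain matrix form (multivectors in a non-degenerate Clifford algebra admit faithful matrix representations); it is a matrix illustration only, not the paper's Maxima implementation on multivectors.

import numpy as np

def flv_inverse(A):
    """Faddeev-LeVerrier: generate the characteristic polynomial
    coefficients c_{n-1}, ..., c_0 and return A^{-1} = -(A_{n-1} + c_1 I)/c_0."""
    n = A.shape[0]
    I = np.eye(n)
    M = A.copy()                     # A_1 = A
    c = -np.trace(M)                 # c_{n-1}
    M_prev = I                       # covers the n == 1 case
    for k in range(2, n + 1):
        M_prev = M + c * I           # A_{k-1} + c_{n-k+1} I
        M = A @ M_prev               # A_k
        c = -np.trace(M) / k         # c_{n-k}
    if np.isclose(c, 0.0):           # c = c_0 = (-1)^n det(A)
        raise np.linalg.LinAlgError("matrix is singular")
    return -M_prev / c

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(flv_inverse(A) @ A)            # approximately the identity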
We prove lower bounds for the randomized approximation of the embedding $\ell_1^m \rightarrow \ell_\infty^m$ based on algorithms that use arbitrary linear (hence non-adaptive) information provided by a (randomized) measurement matrix $N \in \mathbb{R}^{n \times m}$. These lower bounds reflect the increasing difficulty of the problem as $m \to \infty$, namely a term $\sqrt{\log m}$ in the complexity $n$. This result implies that non-compact operators between arbitrary Banach spaces are not approximable using non-adaptive Monte Carlo methods. We also compare these lower bounds for non-adaptive methods with upper bounds based on adaptive randomized recovery methods, for which the complexity $n$ exhibits only a $(\log\log m)$-dependence. In doing so, we give an example of linear problems where the error of adaptive and non-adaptive Monte Carlo methods exhibits a gap of order $n^{1/2}(\log n)^{-1/2}$.
We study the numerical solution of a Cahn-Hilliard/Allen-Cahn system with strong coupling through state- and gradient-dependent non-diagonal mobility matrices. A fully discrete approximation scheme in space and time is proposed which preserves the underlying gradient flow structure and leads to dissipation of the free energy on the discrete level. Existence and uniqueness of the discrete solution are established, and relative energy estimates are used to prove optimal convergence rates in space and time under minimal smoothness assumptions. Numerical tests are presented to illustrate the theoretical results and to demonstrate the viability of the proposed methods.
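As a minimal illustration of a scheme that inherits a gradient flow structure, the sketch below takes implicit Euler steps for a toy gradient flow $u' = -\nabla E(u)$ and verifies the discrete energy dissipation $E(u^{n+1}) \le E(u^n)$; the double-well energy, the fixed-point solver, and the step size are illustrative assumptions, far simpler than the paper's coupled Cahn-Hilliard/Allen-Cahn system.

import numpy as np

# Toy energy: nodewise double well, E(u) = sum_i (u_i^2 - 1)^2 / 4.
E  = lambda u: np.sum((u**2 - 1.0)**2) / 4.0
dE = lambda u: (u**2 - 1.0) * u

def implicit_euler_step(u, tau, iters=100):
    """Solve u_new = u - tau * dE(u_new) by fixed-point iteration; for
    small tau this step dissipates E, mimicking the continuous flow."""
    v = u.copy()
    for _ in range(iters):
        v = u - tau * dE(v)
    return v

rng = np.random.default_rng(4)
u = 0.5 * rng.standard_normal(32)
for n in range(5):
    u_new = implicit_euler_step(u, tau=0.1)
    print(f"E: {E(u):.6f} -> {E(u_new):.6f}")   # monotonically decreasing
    u = u_new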
We consider finite element approximations to the optimal constant for the Hardy inequality with exponent $p=2$ in bounded domains of dimension $n=1$ or $n\geq 3$. For finite element spaces of piecewise linear and continuous functions on a mesh of size $h$, we prove that the approximate Hardy constant, $S_h^n$, converges to the optimal Hardy constant $S^n$ no slower than $O(1/\vert \log h \vert)$. We also show that the convergence is no faster than $O(1/\vert \log h \vert^2)$ if $n=1$, or if $n\geq 3$ and the domain is the unit ball and the finite element discretization exploits the rotational symmetry of the problem. Our estimates are compared to exact values of $S_h^n$ obtained computationally.
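To indicate how such approximations can be computed, the sketch below assembles piecewise linear finite elements for the one-dimensional Hardy quotient $\int_0^1 |u'|^2\,dx \big/ \int_0^1 u^2/x^2\,dx$ on a uniform mesh and extracts the smallest generalized eigenvalue; the exact constant is $S^1 = 1/4$, and the uniform mesh, quadrature, and boundary treatment are illustrative assumptions rather than the paper's computations.

import numpy as np
from scipy.linalg import eigh

def hardy_constant_1d(N):
    """Smallest eigenvalue of K v = S_h M v for P1 elements on (0,1)
    vanishing at both endpoints; K is the stiffness matrix and M the
    mass matrix weighted by 1/x^2."""
    h = 1.0 / N
    nodes = np.linspace(0.0, 1.0, N + 1)
    ndof = N - 1                                   # interior nodes only
    K = np.zeros((ndof, ndof)); M = np.zeros((ndof, ndof))
    gp, gw = np.polynomial.legendre.leggauss(4)    # 4-point Gauss rule
    for e in range(N):
        a, b = nodes[e], nodes[e + 1]
        xq = 0.5 * (b - a) * gp + 0.5 * (a + b)
        wq = 0.5 * (b - a) * gw
        phi  = np.array([(b - xq) / h, (xq - a) / h])     # local hats
        dphi = np.array([-np.ones_like(xq) / h, np.ones_like(xq) / h])
        for i_loc, i in enumerate((e - 1, e)):     # global dof indices
            if not 0 <= i < ndof: continue
            for j_loc, j in enumerate((e - 1, e)):
                if not 0 <= j < ndof: continue
                K[i, j] += np.sum(wq * dphi[i_loc] * dphi[j_loc])
                M[i, j] += np.sum(wq * phi[i_loc] * phi[j_loc] / xq**2)
    return eigh(K, M, subset_by_index=[0, 0])[0][0]

for N in (16, 64, 256):
    print(N, hardy_constant_1d(N))    # decreases slowly toward 1/4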