Capturing the solution near the singular point of a nonlinear SBVP is challenging because the coefficients of the differential equation blow up near the singularity. In this article, we aim to construct a general method based on orthogonal polynomials as wavelets. We discuss multiresolution analysis for wavelets generated by orthogonal polynomials, e.g., Hermite, Legendre, Chebyshev, Laguerre, and Gegenbauer, and then use these wavelets for solving nonlinear SBVPs. These wavelets can deal with singularities easily and efficiently. To deal with the nonlinearity, we use both Newton's quasilinearization and the Newton-Raphson method. To demonstrate the importance and accuracy of the proposed methods, we solve Lane-Emden type problems and compare the computed solutions with the known solutions. As the resolution is increased, the computed solutions converge to the exact or known solutions. We observe that the proposed technique performs well on a class of Lane-Emden type BVPs, which involve both singularity and nonlinearity, and we compare results across the different wavelet families.
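As a concrete point of reference for the benchmark family above (independent of the wavelet machinery), the Lane-Emden equation of index $p$ has the known closed-form solution $\sin(x)/x$ for $p=1$, which a standard ODE solver can reproduce; the sketch below only illustrates the singular test problem, not the proposed wavelet method.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lane-Emden equation of index p:  y'' + (2/x) y' + y^p = 0,
# with y(0) = 1, y'(0) = 0.  The coefficient 2/x blows up at the
# singular point x = 0; for p = 1 the exact solution is sin(x)/x.
def lane_emden(x, u, p=1.0):
    y, dy = u
    return [dy, -2.0 * dy / x - np.sign(y) * np.abs(y) ** p]

x0 = 1e-8  # start just off the singularity at x = 0
sol = solve_ivp(lane_emden, (x0, 3.0), [1.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

xs = np.linspace(0.5, 3.0, 6)
err = np.max(np.abs(sol.sol(xs)[0] - np.sin(xs) / xs))
print(err)  # small: the integrator recovers the known solution
```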
We consider a nonlinear inverse problem $\mathbf{y}= f(\mathbf{Ax})$, where the observations $\mathbf{y} \in \mathbb{R}^m$ are a componentwise nonlinear transformation of $\mathbf{Ax} \in \mathbb{R}^m$, $\mathbf{x} \in \mathbb{R}^n$ is the signal of interest, and $\mathbf{A}$ is a known linear mapping. By properly specifying the nonlinear processing function, this model can be particularized to many signal processing problems, including compressed sensing and phase retrieval. Our main goal in this paper is to understand the impact of sensing matrices, or more specifically the spectrum of sensing matrices, on the difficulty of recovering $\mathbf{x}$ from $\mathbf{y}$. Towards this goal, we study the performance of one of the most successful recovery methods, the expectation propagation (EP) algorithm. We define a notion of spikiness for the spectrum of $\mathbf{A}$ and show the importance of this measure for the performance of EP. Whether the spikiness of the spectrum hurts or helps the recovery performance of EP depends on $f$. We define certain quantities based on the function $f$ that enable us to describe the impact of the spikiness of the spectrum on EP recovery. Based on our framework, we show, for instance, that in phase-retrieval problems matrices with spikier spectra are better for EP, while in 1-bit compressed sensing problems less spiky (flatter) spectra offer better recovery. Our results unify and substantially generalize the existing results that compare sub-Gaussian and orthogonal matrices, and provide a platform toward designing optimal sensing systems.
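For concreteness, the observation model $\mathbf{y}=f(\mathbf{Ax})$ can be instantiated as follows; the i.i.d. Gaussian matrix and the particular nonlinearities are illustrative choices, not the paper's general setting.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 300, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)   # one possible sensing matrix
x = rng.standard_normal(n)                     # signal of interest
z = A @ x

# Componentwise nonlinearities f that particularize y = f(Ax):
y_linear = z                # ordinary compressed sensing (identity f)
y_phase  = np.abs(z)        # (noiseless) phase retrieval: magnitudes only
y_1bit   = np.sign(z)       # 1-bit compressed sensing: signs only
```

A Gaussian $\mathbf{A}$ has a spread-out (Marchenko-Pastur) spectrum, while a partial orthogonal matrix has a flat one; these are the two matrix families that the existing results mentioned above compare.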
We consider methods for finding a simple polygon of minimum (Min-Area) or maximum (Max-Area) possible area for a given set of points in the plane. Both problems are known to be NP-hard and were at the center of the recent CG Challenge, where practical methods received considerable attention. However, those methods were heuristic, with no proof of optimality. We develop exact methods based on a combination of geometry and integer programming. As a result, we are able to solve instances of up to n=25 points to provable optimality. While this extends the range of solvable instances by a considerable amount, it also illustrates the practical difficulty of both problem variants.
In this paper, we study two kinds of structure-preserving splitting methods, including the Lie--Trotter type splitting method and the finite difference type method, for the stochastic logarithmic Schr\"odinger equation (SlogS equation) via a regularized energy approximation. We first introduce a regularized SlogS equation with a small parameter $0<\epsilon\ll1$ which approximates the SlogS equation and avoids the singularity near zero density. Then we present a priori estimates, the regularized entropy and energy, and the stochastic symplectic structure of the proposed numerical methods. Furthermore, we derive both the strong convergence rates and the convergence rates of the regularized entropy and energy. To the best of our knowledge, this is the first result concerning the construction and analysis of numerical methods for stochastic Schr\"odinger equations with logarithmic nonlinearities.
This paper proposes a data-driven set-based estimation algorithm for a class of nonlinear systems with polynomial nonlinearities. Using the system's input-output data, the proposed method computes in real-time a set that guarantees the inclusion of the system's state. Although the system is assumed to be of polynomial type, the exact polynomial functions and their coefficients need not be known. To this end, the estimator relies on offline and online phases. The offline phase utilizes past input-output data to estimate a set of possible coefficients of the polynomial system. Then, using this estimated set of coefficients and the side information about the system, the online phase provides a set estimate of the state. Finally, the proposed methodology is evaluated by applying it to the SIR (Susceptible, Infected, Recovered) epidemic model.
We study the asymptotic behavior of second-order algorithms mixing Newton's method and inertial gradient descent in non-convex landscapes. We show that, despite the Newtonian behavior of these methods, they almost always escape strict saddle points. We also highlight the role played by the hyper-parameters of these methods in their qualitative behavior near critical points. The theoretical results are supported by numerical illustrations.
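As a toy illustration of the strict-saddle escape phenomenon (using plain gradient descent rather than the inertial Newton methods analyzed here), consider the quadratic saddle f(x, y) = (x^2 - y^2)/2:

```python
import numpy as np

# f(x, y) = (x^2 - y^2)/2 has a strict saddle at the origin: its
# Hessian has eigenvalues +1 and -1.  From a generic initialization,
# gradient descent is repelled along the negative-curvature
# direction y and escapes the saddle.
grad = lambda p: np.array([p[0], -p[1]])

p = np.array([1.0, 1e-3])   # generic start near the saddle
eta = 0.1                   # step size
for _ in range(100):
    p = p - eta * grad(p)

print(p)  # x-coordinate collapses, y-coordinate grows: escape
```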
In this paper, we derive the mixed and componentwise condition numbers for a linear function of the solution to the total least squares with linear equality constraint (TLSE) problem. Explicit expressions for the mixed and componentwise condition numbers are obtained by dual techniques under both unstructured and structured componentwise perturbations. As an intermediate result, we recover both the unstructured and structured condition numbers for the TLS problem. We adopt the small-sample statistical condition estimation method to estimate both unstructured and structured condition numbers with high reliability. Numerical experiments are provided to illustrate the obtained results.
We present Bayesian techniques for solving inverse problems which involve mean-square convergent random approximations of the forward map. Noisy approximations of the forward map arise in several fields, such as multiscale problems and probabilistic numerical methods. In these fields, a random approximation can enhance the quality or the efficiency of the inference procedure, but entails additional theoretical and computational difficulties due to the randomness of the forward map. A standard technique to address this issue is to combine Monte Carlo averaging with Markov chain Monte Carlo samplers, as for example in the pseudo-marginal Metropolis--Hastings methods. In this paper, we consider mean-square convergent random approximations, and quantify how Monte Carlo errors propagate from the forward map to the solution of the inverse problem. Moreover, we review and describe simple techniques to solve such inverse problems, and compare their performance through a series of numerical experiments.
We show that the mass matrix derived from finite elements can be effectively used as a preconditioner for iteratively solving the linear system arising from finite-difference discretization of the Poisson equation, using the conjugate gradient method. We derive analytically the condition number of the preconditioned operator. Theoretical analysis shows that the ratio of the condition number of the Laplacian to that of the preconditioned operator is $8/3$ in one dimension, $9/2$ in two dimensions, and $2^9/3^4 \approx 6.3$ in three dimensions. From this it follows that the expected iteration count for achieving a fixed reduction of the residual norm is less than half the number of iterations of unpreconditioned CG in 2D and 3D. The scheme is easy to implement, and numerical experiments show its efficiency.
In order to avoid the curse of dimensionality frequently encountered in Big Data analysis, the field of linear and nonlinear dimension reduction techniques has developed rapidly in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lie on a lower dimensional manifold, so the high dimensionality can be overcome by learning this lower dimensional structure. However, in real-life applications, data is often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located "near" the lower dimensional manifold and suggest a nonlinear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework to perform operations directly on the high dimensional data in a computationally efficient manner. This way, the preparatory step of dimension reduction, which induces distortions in the data, can be avoided altogether.