In this paper, we revisit the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems with sparse representations, which arise in signal and image processing. In a numerical experiment on image deblurring, the convergence curve on a logarithmic-scale ordinate tends to be linear rather than logarithmic (i.e., nearly flat). On closer observation, we find that the standard assumption that the smooth part is merely convex weakens the least-squares model. Specifically, it is more reasonable to assume that the smooth part of the least-squares model is strongly convex, even though the image matrix is probably ill-conditioned. Furthermore, we tighten the pivotal inequality for composite optimization, first derived in [Li et al., 2022], by assuming the smooth part to be strongly convex instead of generally convex. Based on this tighter inequality, we generalize linear convergence to composite optimization, in terms of both the objective value and the squared proximal subgradient norm. Meanwhile, we replace the original blur matrix with a simple ill-conditioned matrix whose singular values are easy to compute. The new numerical experiment shows that the proximal generalization of Nesterov's accelerated gradient descent (NAG) for strongly convex functions enjoys a faster linear convergence rate than ISTA. Based on the tighter pivotal inequality, we also generalize this faster linear convergence rate to composite optimization, in terms of both the objective value and the squared proximal subgradient norm, by taking advantage of a slightly modified Lyapunov function and the phase-space representation from the implicit-velocity scheme of the high-resolution differential equation framework.
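For reference, the following is a minimal sketch of ISTA applied to the sparse least-squares (LASSO) model discussed above. The matrix `A`, the regularization weight `lam`, and the step size `1/L` are illustrative assumptions for this sketch, not the deblurring setup used in the paper's experiments.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau*||.||_1 (the shrinkage-thresholding step)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with step size 1/L."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Illustrative ill-conditioned matrix (assumed; not the paper's blur matrix).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) @ np.diag(np.logspace(0, -3, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, b, lam=0.01)
```

The proximal NAG variant mentioned above replaces the plain gradient step with a momentum step; comparing the two linear rates is the subject of the new numerical experiment.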
Numerical predictions of quantities of interest measured within physical systems rely on mathematical models that should be validated or, at best, not invalidated. Model validation usually involves comparing experimental data (outputs from the system of interest) with model predictions, both obtained at a specific validation scenario. The design of this validation experiment should be directly relevant to the objective of the model, namely predicting a quantity of interest at a prediction scenario. In this paper, we address two specific issues arising when designing validation experiments. The first issue is to determine an appropriate validation scenario in cases where the prediction scenario cannot be carried out in a controlled environment. The second issue concerns the selection of observations when the quantity of interest cannot be readily observed. The proposed methodology involves computing influence matrices that characterize the response surface of given model functionals. Minimizing the distance between influence matrices allows one to select the validation experiment most representative of the prediction scenario. We illustrate our approach on two numerical examples. The first considers the validation of a simple model, based on an ordinary differential equation governing an object in free fall, to highlight the importance of the choice of validation experiment. The second focuses on the transport of a pollutant and demonstrates the impact that the choice of the quantity of interest has on the validation experiment to be performed.
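As a rough illustration of this methodology, the sketch below computes influence matrices by finite differences and selects, among candidate validation scenarios, the one whose influence matrix is closest to that of the prediction scenario. The Frobenius norm, the function names, and the finite-difference scheme are assumptions made for this sketch, not details taken from the paper.

```python
import numpy as np

def influence_matrix(functional, theta0, scenario, eps=1e-6):
    """Finite-difference sensitivities of a model functional with respect
    to the model parameters theta, at a given scenario. `functional` is a
    hypothetical callable returning a vector of model outputs."""
    f0 = functional(theta0, scenario)
    cols = []
    for i in range(len(theta0)):
        theta = theta0.copy()
        theta[i] += eps
        cols.append((functional(theta, scenario) - f0) / eps)
    return np.column_stack(cols)

def select_validation_scenario(functional, theta0, prediction_scenario,
                               candidate_scenarios):
    """Pick the candidate whose influence matrix is closest, in Frobenius
    norm, to that of the prediction scenario."""
    S_pred = influence_matrix(functional, theta0, prediction_scenario)
    dists = [np.linalg.norm(influence_matrix(functional, theta0, s) - S_pred)
             for s in candidate_scenarios]
    return candidate_scenarios[int(np.argmin(dists))]
```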
The main focus of this paper is the study of efficient multigrid methods for large linear systems with a particular saddle-point structure. When the system matrix is symmetric but indefinite, the variational convergence theory usually used to prove multigrid convergence cannot be applied directly. However, several algebraic approaches analyze properly preconditioned saddle-point problems and prove convergence of the two-grid method. This approach is particularly efficient when the blocks of the coefficient matrix possess a Toeplitz or circulant structure, since sufficient conditions for convergence and optimal preconditioning parameters for the saddle-point problem can be derived in terms of the associated generating symbols. In this paper, we propose a symbol-based convergence analysis for problems that have a hidden block Toeplitz structure. Such problems can then be investigated by focusing on the properties of the associated generating function $f$, which is a matrix-valued function whose dimension depends on the block size of the problem. As numerical tests we focus on the matrix sequence stemming from the finite element approximation of the Stokes problem. We show the efficiency of the methods by studying the hidden 9-by-9 block multilevel structure of the obtained matrix sequence. Moreover, we propose an efficient algebraic multigrid method with a convergence rate independent of the matrix size. Finally, we present several numerical tests comparing the results with state-of-the-art strategies.
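For background, the sketch below shows a generic two-grid correction cycle on a symmetric Toeplitz system (the 1D discrete Laplacian, with generating symbol $2-2\cos\theta$). The weighted-Jacobi smoother and linear-interpolation prolongation are illustrative textbook choices, not the symbol-based preconditioners analyzed in the paper.

```python
import numpy as np

def two_grid(A, b, x, P, n_smooth=2, omega=2/3):
    """One two-grid cycle: weighted-Jacobi pre-smoothing, coarse-grid
    correction with the Galerkin operator P^T A P, post-smoothing."""
    D = np.diag(A)
    for _ in range(n_smooth):                     # pre-smoothing
        x = x + omega * (b - A @ x) / D
    r = b - A @ x
    Ac = P.T @ A @ P                              # Galerkin coarse operator
    x = x + P @ np.linalg.solve(Ac, P.T @ r)      # coarse-grid correction
    for _ in range(n_smooth):                     # post-smoothing
        x = x + omega * (b - A @ x) / D
    return x

# Illustrative Toeplitz system: 1D Laplacian, symbol f(theta) = 2 - 2*cos(theta).
n = 63
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P = np.zeros((n, (n - 1) // 2))                   # linear interpolation
for j in range(P.shape[1]):
    P[2 * j:2 * j + 3, j] = [0.5, 1.0, 0.5]
b = np.ones(n); x = np.zeros(n)
for _ in range(20):
    x = two_grid(A, b, x, P)   # residual contracts at a rate independent of n
```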
This paper is concerned with the convergence of a series associated with a certain version of the convexification method, recently developed by the research group of the first author for solving coefficient inverse problems. The convexification method constructs a globally convex Tikhonov-like functional containing a Carleman weight function. In previous works, the construction of the strictly convex weighted Tikhonov-like functional assumes a truncated Fourier series (i.e., a finite series instead of an infinite one) for a function generated by the total wave field. In this paper we prove a convergence property for this truncated Fourier series approximation. More precisely, we show that the residual of the approximate PDE obtained by using the truncated Fourier series tends to zero in $L^{2}$ as the truncation index tends to infinity. The proof relies on a convergence result in the $H^{1}$-norm for a sequence of $L^{2}$-orthogonal projections onto finite-dimensional subspaces spanned by elements of a special Fourier basis. However, due to the ill-posed nature of coefficient inverse problems, we cannot prove that the solution of that approximate PDE, obtained by minimizing the Tikhonov-like functional, converges to the correct solution.
In this paper, we investigate the almost sure convergence, in supremum norm, of the rank-based linear wavelet estimator for a multivariate copula density. Using empirical process tools, we prove a uniform limit law for the deviation of an oracle estimator (obtained for known margins) from its expectation, from which we derive the exact convergence rate of the rank-based linear estimator. This rate turns out to be optimal in a minimax sense over Besov balls for the supremum norm loss, whenever the resolution level is suitably chosen.
We consider the problem of finding the matching map between two sets of $d$-dimensional noisy feature vectors. The distinctive feature of our setting is that we do not assume that every vector of the first set has a corresponding vector in the second set. If $n$ and $m$ are the sizes of the two sets, we assume that the matching map to be recovered is defined on a subset of unknown cardinality $k^*\le \min(n,m)$. We show that, in the high-dimensional setting, if the signal-to-noise ratio is larger than $5(d\log(4nm/\alpha))^{1/4}$, then the true matching map can be recovered with probability $1-\alpha$. Interestingly, this threshold does not depend on $k^*$ and is the same as the one obtained in prior work for the case $k^* = \min(n,m)$. The procedure for which this property is proved is obtained by a data-driven selection among candidate mappings $\{\hat\pi_k:k\in[\min(n,m)]\}$, where each $\hat\pi_k$ minimizes the sum of squared distances over matchings of size $k$. The resulting optimization problem can be formulated as a minimum-cost flow problem and thus solved efficiently. Finally, we report the results of numerical experiments on both synthetic and real-world data that illustrate our theoretical results and provide further insight into the properties of the algorithms studied in this work.
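The family of candidate matchings can be built incrementally via successive shortest augmenting paths, the textbook algorithm behind the minimum-cost flow formulation mentioned above: after the $k$-th augmentation, the current matching minimizes the sum of squared distances among all matchings of size $k$. The sketch below is one such implementation, written for clarity rather than speed, and is not necessarily the authors' own procedure.

```python
import numpy as np

def matching_family(X, Y):
    """Return candidate matchings pi_1, ..., pi_{min(n,m)} (as row -> col
    dictionaries) by successive shortest augmenting paths."""
    n, m = len(X), len(Y)
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared distances
    match, family = {}, []
    for _ in range(min(n, m)):
        # Bellman-Ford on the residual graph; matched pairs induce negative
        # backward arcs, so Dijkstra does not apply directly.
        dist = {('r', i): (0.0 if i not in match else np.inf) for i in range(n)}
        dist.update({('c', j): np.inf for j in range(m)})
        pred = {}
        for _ in range(n + m):
            for i in range(n):
                for j in range(m):
                    if match.get(i) == j:                # backward arc j -> i
                        if dist[('c', j)] - C[i, j] < dist[('r', i)] - 1e-12:
                            dist[('r', i)] = dist[('c', j)] - C[i, j]
                            pred[('r', i)] = ('c', j)
                    elif dist[('r', i)] + C[i, j] < dist[('c', j)] - 1e-12:
                        dist[('c', j)] = dist[('r', i)] + C[i, j]  # forward arc
                        pred[('c', j)] = ('r', i)
        free = [j for j in range(m) if j not in match.values()]
        node = ('c', min(free, key=lambda j: dist[('c', j)]))
        while node in pred:                              # flip the augmenting path
            prev = pred[node]
            if node[0] == 'c':
                match[prev[1]] = node[1]
            node = prev
        family.append(dict(match))
    return family
```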
In this paper, we establish novel data-dependent upper bounds on the generalization error through the lens of a "variable-size compressibility" framework that we introduce here. In this framework, the generalization error of an algorithm is linked to a variable-size 'compression rate' of its input data. This is shown to yield bounds that depend on the empirical measure of the given input data at hand, rather than on its unknown distribution. The new generalization bounds that we establish are tail bounds, tail bounds on the expectation, and in-expectation bounds. Moreover, our framework also allows us to derive general bounds on any function of the input data and output hypothesis random variables. In particular, these general bounds are shown to subsume, and possibly improve over, several existing PAC-Bayes and data-dependent intrinsic-dimension-based bounds that are recovered as special cases, thus unveiling the unifying character of our approach. For instance, a new data-dependent intrinsic-dimension-based bound is established, which connects the generalization error to optimization trajectories and reveals various interesting connections with the rate-distortion dimension of a process, the R\'enyi information dimension of a process, and the metric mean dimension.
In Part I of this paper, we introduced a two-dimensional eigenvalue problem (2DEVP) of a matrix pair and investigated its fundamental theory, such as existence, variational characterization, and the number of 2D-eigenvalues. In Part II, we proposed a Rayleigh quotient iteration (RQI)-like algorithm (2DRQI) for computing a 2D-eigentriplet of the 2DEVP near a prescribed point, and discussed applications of the 2DEVP and 2DRQI to solving the minimax problem of Rayleigh quotients and computing the distance to instability. In this third part, we present the convergence analysis of the 2DRQI. We show that, under some mild conditions, the 2DRQI is locally quadratically convergent for computing a nonsingular 2D-eigentriplet.
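For context, the 2DRQI extends the classical Rayleigh quotient iteration, whose one-dimensional form is sketched below for a symmetric matrix; the 2DEVP version operates on a matrix pair and is given in Part II, not here.

```python
import numpy as np

def rqi(A, x, tol=1e-12, max_iter=50):
    """Classical Rayleigh quotient iteration for symmetric A; locally
    cubically convergent to an eigenpair."""
    x = x / np.linalg.norm(x)
    for _ in range(max_iter):
        mu = x @ A @ x                          # Rayleigh quotient
        try:
            y = np.linalg.solve(A - mu * np.eye(len(x)), x)
        except np.linalg.LinAlgError:           # shift is an exact eigenvalue
            break
        x = y / np.linalg.norm(y)
        if np.linalg.norm(A @ x - mu * x) < tol:
            break
    return mu, x

A = np.diag([1.0, 2.0, 3.0]) + 0.1 * np.ones((3, 3))
mu, x = rqi(A, np.array([1.0, 0.3, 0.1]))
```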
We introduce a class of networked Markov potential games where agents are associated with nodes in a network. Each agent has its own local potential function, and the reward of each agent depends only on the states and actions of agents within a $\kappa$-hop neighborhood. In this context, we propose a localized actor-critic algorithm. The algorithm is scalable since each agent uses only local information and does not need access to the global state. Further, the algorithm overcomes the curse of dimensionality through the use of function approximation. Our main results provide finite-sample guarantees up to a localization error and a function approximation error. Specifically, we achieve an $\tilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity measured by the averaged Nash regret. This is the first finite-sample bound for multi-agent competitive games that does not depend on the number of agents.
Can machines think? Since Alan Turing asked this question in 1950, no one has been able to give a direct answer, due to the lack of solid mathematical foundations for general intelligence. In this paper, we introduce a categorical framework towards this goal, consisting of four components: the sensor, the world category, the planner with objectives, and the actor. By leveraging category theory, many important notions in general intelligence can be rigorously defined and analyzed. For instance, we introduce the concept of self-state awareness as a categorical analogue of self-consciousness and provide algorithms for learning and evaluating it. For communication with other agents, we propose to use diagrams that capture the exact representation of the context, instead of natural language. Additionally, we demonstrate that by designing the objectives as the output of a function of the self-state, the model's human-friendliness is guaranteed. Most importantly, our framework naturally introduces various constraints, based on categorical invariance, that can serve as alignment signals for training a model that fits into the framework.
For a coercive function $V : \mathbb{R}^d \to \mathbb{R}$, we study the convergence rate, in $L^1$-distance, of the empirical minimizer, i.e., the minimum of the function $V$ sampled with noise from a finite number $n$ of samples, to the minimum of $V$. We show that, in general, for unbounded functions with fast growth, the convergence rate is bounded above by $a_n n^{-1/q}$, where $q$ is the dimension of the latent random variable and $a_n = o(n^\varepsilon)$ for every $\varepsilon > 0$. We then present applications to optimization problems arising in machine learning and in Monte Carlo simulation.
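A small simulation conveys the flavor of this rate. The potential $V(x)=\|x\|_1$, the uniform sampling on $[-1,1]^q$, and the Gaussian observation noise are assumptions chosen for illustration, not the paper's setting; rescaling the error by $n^{1/q}$ should give a roughly constant value.

```python
import numpy as np

rng = np.random.default_rng(1)
q = 3                                        # dimension of the latent variable
V = lambda x: np.abs(x).sum(axis=-1)         # coercive; min V = 0 at the origin

def empirical_minimum(n, noise=1e-3):
    """Minimum of V over n uniform sample points, observed with noise."""
    X = rng.uniform(-1.0, 1.0, size=(n, q))
    return (V(X) + noise * rng.standard_normal(n)).min()

# Monte Carlo estimate of E|min_n - min V|.
for n in [10**2, 10**3, 10**4]:
    err = np.mean([abs(empirical_minimum(n)) for _ in range(200)])
    print(n, err, err * n ** (1 / q))        # last column roughly constant
```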