This paper is concerned with efficient numerical methods for solving linear systems $\phi(A)x = b$, where $\phi(z)$ is a $\phi$-function and $A\in \mathbb R^{N\times N}$. In particular, we are interested in computing ${\phi(A)}^{-1} b$ for $\phi(z)=\phi_1(z)=\displaystyle\frac{e^z-1}{z}$ or $\phi(z)=\phi_2(z)=\displaystyle\frac{e^z-1-z}{z^2}$. Under suitable conditions on the spectrum of $A$, we design fast algorithms for computing both ${\phi_\ell(A)}^{-1}$ and ${\phi_\ell(A)}^{-1} b$, based on Newton's iteration and Krylov-type methods, respectively. Adaptations of these schemes to structured matrices are considered; in particular, the cases of banded and, more generally, quasiseparable matrices are investigated. Numerical results are presented to show the effectiveness of the proposed algorithms.
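As a concrete illustration of the Newton branch, the following minimal NumPy sketch inverts $\phi_1(A)$ with the classical Newton--Schulz iteration $X_{k+1} = X_k(2I - MX_k)$. The dense eigendecomposition used to form $\phi_1(A)$, the symmetric negative definite test matrix, and the iteration count are simplifying assumptions for illustration; the paper's structured (banded/quasiseparable) adaptations are not reproduced here.

```python
import numpy as np

def phi1(A):
    """phi_1(A) = (e^A - I) A^{-1} via eigendecomposition (A symmetric here)."""
    w, V = np.linalg.eigh(A)
    f = np.where(np.abs(w) > 1e-12, (np.exp(w) - 1.0) / w, 1.0)  # phi_1(0) = 1
    return (V * f) @ V.T

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = -(B @ B.T) - np.eye(50)   # symmetric negative definite: phi_1(A) is nonsingular
M = phi1(A)

# Newton--Schulz iteration X_{k+1} = X_k (2I - M X_k) for M^{-1};
# this scaled initial guess is the classical choice guaranteeing convergence.
X = M.T / (np.linalg.norm(M, 1) * np.linalg.norm(M, np.inf))
for _ in range(50):
    X = X @ (2.0 * np.eye(50) - M @ X)

print(np.linalg.norm(X @ M - np.eye(50)))  # residual near machine precision
```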
The ubiquitous multi-camera setup on modern autonomous vehicles provides an opportunity to construct surround-view depth. Existing methods, however, either perform independent monocular depth estimation on each camera or rely on computationally heavy self-attention mechanisms. In this paper, we propose a novel guided attention architecture, EGA-Depth, which can improve both the efficiency and the accuracy of self-supervised multi-camera depth estimation. More specifically, for each camera, we use its perspective view as the query to cross-reference its neighboring views and derive informative features for this camera view. This allows the model to perform attention only across views with considerable overlap and avoids the costly computations of standard self-attention. Given its efficiency, EGA-Depth enables us to exploit higher-resolution visual features, leading to improved accuracy. Furthermore, EGA-Depth can incorporate more frames from previous time steps, as it scales linearly with the number of views and frames. Extensive experiments on two challenging autonomous driving benchmarks, nuScenes and DDAD, demonstrate the efficacy of the proposed EGA-Depth and show that it achieves a new state of the art in self-supervised multi-camera depth estimation.
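To make the guided-attention idea concrete, here is a minimal single-head NumPy sketch of cross-view attention: one camera's features form the queries, and only the overlapping neighbor views supply keys and values. The shapes, random projections, and two-neighbor layout are illustrative assumptions, not the actual EGA-Depth module.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                    # feature channels
hw = 24 * 40              # flattened spatial tokens per view (assumed resolution)

q_view = rng.standard_normal((hw, d))                         # current camera (query view)
neighbors = [rng.standard_normal((hw, d)) for _ in range(2)]  # overlapping left/right views

Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

Q = q_view @ Wq
KV = np.concatenate(neighbors, axis=0)   # only the neighbors, not all N views
K, V = KV @ Wk, KV @ Wv

# scaled dot-product cross-attention: cost O(hw * 2hw) per view, versus
# O((N*hw)^2) for full self-attention over all N camera views stacked together
S = Q @ K.T / np.sqrt(d)
P = np.exp(S - S.max(axis=1, keepdims=True))
P /= P.sum(axis=1, keepdims=True)
out = P @ V                              # attended features for the query view
print(out.shape)                         # (hw, d)
```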
Given a Hilbert space $\mathcal H$ and a finite measure space $\Omega$, the approximation of a vector-valued function $f: \Omega \to \mathcal H$ by a $k$-dimensional subspace $\mathcal U \subset \mathcal H$ plays an important role in dimension reduction techniques, such as reduced basis methods for solving parameter-dependent partial differential equations. For functions in the Lebesgue--Bochner space $L^2(\Omega;\mathcal H)$, the best possible subspace approximation error $d_k^{(2)}$ is characterized by the singular values of $f$. However, for practical reasons, $\mathcal U$ is often restricted to be spanned by point samples of $f$. We show that this restriction only has a mild impact on the attainable error: there always exist $k$ samples such that the resulting error is not larger than $\sqrt{k+1} \cdot d_k^{(2)}$. Our work extends existing results by Binev et al. (SIAM J. Math. Anal., 43(3):1457--1472, 2011) on approximation in the supremum norm and by Deshpande et al. (Theory Comput., 2:225--247, 2006) on column subset selection for matrices.
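A toy discrete analogue illustrates the statement: the columns of a snapshot matrix play the role of point samples $f(\omega_j)$, the best rank-$k$ error $d_k^{(2)}$ comes from the singular values, and a greedy residual-based column selection (a common heuristic, not the selection constructed in the existence proof) typically lands well within the $\sqrt{k+1}\,d_k^{(2)}$ bound.

```python
import numpy as np

rng = np.random.default_rng(1)
# snapshot matrix: column F[:, j] plays the role of the point sample f(omega_j)
F = rng.standard_normal((200, 30)) @ np.diag(0.5 ** np.arange(30)) \
    @ rng.standard_normal((30, 400))

k = 8
s = np.linalg.svd(F, compute_uv=False)
d_k = np.sqrt(np.sum(s[k:] ** 2))      # best rank-k error, Frobenius = discrete L^2

# greedy pick of k sample columns: repeatedly take the column of largest residual
R, idx = F.copy(), []
for _ in range(k):
    j = int(np.argmax(np.linalg.norm(R, axis=0)))
    idx.append(j)
    q = R[:, j] / np.linalg.norm(R[:, j])
    R -= np.outer(q, q @ R)            # project all residuals off the chosen direction

err = np.linalg.norm(R)                # error of the span of the k chosen samples
print(err, np.sqrt(k + 1) * d_k)       # err is typically well below the bound
```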
Log-concave sampling has witnessed remarkable algorithmic advances in recent years, but the corresponding problem of proving lower bounds for this task has remained elusive, with lower bounds previously known only in dimension one. In this work, we establish the following query lower bounds: (1) sampling from strongly log-concave and log-smooth distributions in dimension $d\ge 2$ requires $\Omega(\log \kappa)$ queries, which is sharp in any constant dimension, and (2) sampling from Gaussians in dimension $d$ (hence also from general log-concave and log-smooth distributions in dimension $d$) requires $\widetilde \Omega(\min(\sqrt\kappa \log d, d))$ queries, which is nearly sharp for the class of Gaussians. Here $\kappa$ denotes the condition number of the target distribution. Our proofs rely upon (1) a multiscale construction inspired by work on the Kakeya conjecture in harmonic analysis, and (2) a novel reduction that demonstrates that block Krylov algorithms are optimal for this problem, as well as connections to lower bound techniques based on Wishart matrices developed in the matrix-vector query literature.
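For intuition about the query model behind the second result, the sketch below runs a block Krylov (block Lanczos-style) method that accesses a Gaussian's covariance only through counted matrix-vector queries and estimates its condition number $\kappa$. This illustrates the query model only, not the lower-bound construction itself.

```python
import numpy as np

rng = np.random.default_rng(2)
d, blk, t = 100, 4, 10                           # dimension, block size, Krylov depth
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
Sigma = (U * np.geomspace(1.0, 50.0, d)) @ U.T   # covariance with condition number 50

queries = 0
def matvec(X):
    """The only access to Sigma: matrix-vector queries, counted per column."""
    global queries
    queries += X.shape[1]
    return Sigma @ X

blocks = [rng.standard_normal((d, blk))]
for _ in range(t - 1):
    blocks.append(matvec(blocks[-1]))            # power blocks X, SX, S^2 X, ...
Q, _ = np.linalg.qr(np.hstack(blocks))
ritz = np.linalg.eigvalsh(Q.T @ matvec(Q))       # Rayleigh--Ritz values
print(queries, ritz.max() / ritz.min())          # rough estimate of kappa = 50
```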
We investigate the fundamental limits of unsourced random access over the binary-input Gaussian channel. By fundamental limits, we mean the minimal energy per bit required to achieve a target per-user probability of error. The original method proposed by Y. Polyanskiy (2017), based on Gallager's trick, does not work well for binary signaling. We instead utilize Fano's method, which is based on the choice of a so-called ``good'' region. We apply this method to the cases of Gaussian and binary codebooks and obtain two achievability bounds. The first bound is very close to Polyanskiy's bound but does not improve on it. At the same time, the numerical results show that the bound for the binary case practically coincides with the bound for the Gaussian codebook. Thus, we conclude that binary modulation does not lead to performance degradation, and that energy-efficient schemes with binary modulation do exist.
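The conclusions above are analytic, but a toy Monte Carlo check shows the same qualitative effect. The sketch below uses a single-user AWGN channel with maximum-likelihood decoding, a far simpler setting than the unsourced multi-user problem of the paper; all parameters are illustrative assumptions. Equal-energy binary and Gaussian random codebooks give almost identical error rates.

```python
import numpy as np

rng = np.random.default_rng(3)
n, M, trials, snr = 64, 256, 2000, 0.5   # blocklength, codebook size, per-symbol SNR

def error_rate(codebook):
    errs = 0
    for _ in range(trials):
        # transmit codeword 0 (WLOG by symmetry), add Gaussian noise
        y = codebook[0] + rng.standard_normal(n) / np.sqrt(snr)
        # ML decoding over AWGN = nearest codeword in Euclidean distance
        errs += np.argmin(((codebook - y) ** 2).sum(axis=1)) != 0
    return errs / trials

gauss = rng.standard_normal((M, n))              # Gaussian codebook, E||c||^2 = n
binary = rng.choice([-1.0, 1.0], size=(M, n))    # binary codebook, ||c||^2 = n
print(error_rate(gauss), error_rate(binary))     # typically very close
```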
We consider the classic 1-center problem: given a set $P$ of $n$ points in a metric space, find the point in $P$ that minimizes the maximum distance to the other points of $P$. We study the complexity of this problem in $d$-dimensional $\ell_p$-metrics and in edit and Ulam metrics over strings of length $d$. Our results for the 1-center problem may be classified based on $d$ as follows. $\bullet$ Small $d$: Assuming the hitting set conjecture (HSC), we show that when $d=\omega(\log n)$, no subquadratic algorithm can solve the 1-center problem in any of the $\ell_p$-metrics, or in the edit or Ulam metrics. $\bullet$ Large $d$: When $d=\Omega(n)$, we extend our conditional lower bound to rule out subquartic algorithms for the 1-center problem in the edit metric (assuming the Quantified SETH). On the other hand, we give a $(1+\varepsilon)$-approximation for 1-center in the Ulam metric with running time $\tilde{O}_{\varepsilon}(nd+n^2\sqrt{d})$. We also strengthen some of the above lower bounds by allowing approximations or by reducing the dimension $d$, but only against a weaker class of algorithms which list all requisite solutions. Moreover, we extend one of our hardness results to rule out subquartic algorithms for the well-studied 1-median problem in the edit metric, where, given a set of $n$ strings, each of length $n$, the goal is to find a string in the set that minimizes the sum of the edit distances to the rest of the strings in the set.
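For reference, the baseline that these conditional lower bounds are measured against is the quadratic-time brute force; a short NumPy sketch for the $\ell_p$ case:

```python
import numpy as np

def one_center(P, p=2):
    """Exact 1-center in the l_p metric by brute force: O(n^2 d) time."""
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], ord=p, axis=2)  # pairwise distances
    ecc = D.max(axis=1)                      # eccentricity of each point
    return int(np.argmin(ecc)), ecc.min()    # the point minimizing the max distance

pts = np.random.default_rng(4).standard_normal((500, 16))
idx, radius = one_center(pts, p=1)           # e.g., the l_1 metric
print(idx, radius)
```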
One method to compute multiple-precision integer quotients is to use a Newton iteration with multiple-precision fixed-point or floating-point values. On one hand, this allows quotients to be calculated efficiently by employing an efficient multiplication method. On the other hand, it leads to a library structure in which exact and approximate arithmetic are interdependent. This paper develops the concept of a shifted inverse and a modified Newton iteration to compute quotients efficiently using whole numbers only. The method is equally applicable to computing polynomial quotients efficiently.
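A minimal Python sketch of the idea, under our own naming and parameter choices (the paper's precise algorithm and complexity analysis are not reproduced): compute the shifted inverse $\lfloor 2^k/d \rfloor$ with an all-integer Newton iteration, then recover $\lfloor a/b \rfloor$ with a final correction. A production version would double the working precision across iterations instead of iterating at full precision.

```python
def shifted_inverse(d: int, k: int) -> int:
    """floor(2**k / d) for d >= 1 and 2**k >= d, using integer arithmetic only."""
    t = d.bit_length()
    x = 1 << (k - t + 1)                      # initial guess in [v, 2v), v = 2**k / d
    for _ in range(t + k.bit_length() + 2):   # enough steps for quadratic convergence
        x = max(1, (x * ((1 << (k + 1)) - d * x)) >> k)   # x <- x * (2 - d*x / 2**k)
    while (x + 1) * d <= (1 << k):            # final correction to the exact floor
        x += 1
    while x * d > (1 << k):
        x -= 1
    return x

def quotient(a: int, b: int) -> int:
    """floor(a / b) via the shifted inverse, assuming 1 <= b <= a."""
    k = a.bit_length() + 1                    # 2**k > a, so the estimate is off by <= 1
    q = (a * shifted_inverse(b, k)) >> k
    while (q + 1) * b <= a:                   # fix the possible off-by-one underestimate
        q += 1
    return q

assert quotient(10**50 + 12345, 98765) == (10**50 + 12345) // 98765
```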
In this paper we investigate the stability properties of the so-called gBBKS and GeCo methods, which belong to the class of nonstandard schemes and preserve the positivity as well as all linear invariants of the underlying system of ordinary differential equations for any step size. A stability investigation for these methods, which lie outside the class of general linear methods, is challenging, since the iterates are always generated by a nonlinear map, even for linear problems. Recently, a stability theorem was derived presenting criteria for understanding such schemes. For the analysis, the schemes are applied to general linear equations and proven to be generated by $\mathcal C^1$-maps with locally Lipschitz continuous first derivatives. As a result, the above-mentioned stability theorem can be applied to investigate the Lyapunov stability of non-hyperbolic fixed points of the numerical method by analyzing the spectrum of the corresponding Jacobian of the generating map. In addition, if a fixed point is proven to be stable, the theorem guarantees the local convergence of the iterates towards it. In the case of the first- and second-order gBBKS schemes, the stability domain coincides with that of the underlying Runge--Kutta method. Furthermore, while the first-order GeCo scheme turns steady states into stable fixed points for all step sizes and all linear test problems of finite size, the second-order GeCo scheme has a bounded stability region for the considered test problems. Finally, all theoretical predictions from the stability analysis are validated numerically.
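The analysis pattern can be demonstrated numerically. The sketch below applies a modified Patankar-type two-stage scheme (an MPRK22(1)-style method, used here as a stand-in since the gBBKS and GeCo formulas are not reproduced) to a linear, positive, conservative test problem: the generating map is nonlinear even though the problem is linear, and the spectrum of its finite-difference Jacobian at a steady state exhibits the non-hyperbolic eigenvalue $1$ along the line of steady states.

```python
import numpy as np

# linear, positive, conservative test problem y' = A y (columns of A sum to zero)
A = np.array([[-2.0,  1.0],
              [ 2.0, -1.0]])
I = np.eye(2)
h = 0.75  # step size

def step(y):
    """One step of a Patankar-type two-stage scheme applied to y' = A y;
    the map is nonlinear in y despite the linearity of the problem."""
    y2 = np.linalg.solve(I - h * A, y)               # Patankar-weighted first stage
    w = (y + y2) / y2                                # nonlinear Patankar weights
    return np.linalg.solve(I - 0.5 * h * A * w, y)   # weighted second stage

ystar = np.array([1.0, 2.0])                         # steady state: A @ ystar = 0
assert np.allclose(step(ystar), ystar)               # it is a fixed point of the map

# finite-difference Jacobian of the generating map at the fixed point
eps = 1e-6
J = np.column_stack([(step(ystar + eps * e) - step(ystar - eps * e)) / (2 * eps)
                     for e in I])
print(np.abs(np.linalg.eigvals(J)))   # one eigenvalue 1 (line of steady states);
                                      # stability requires the rest inside the unit circle
```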
A reliable model order reduction process for parametric analysis in electromagnetics is detailed. Special emphasis is placed on certifying the accuracy of the reduced-order model. For this purpose, a sharp state error estimator is proposed. Standard a posteriori state error estimation for model order reduction relies on the inf-sup constant. For parametric systems, the inf-sup constant is parameter-dependent. A posteriori error estimation for systems with a very small or vanishing inf-sup constant poses a challenge, since the estimator is inversely proportional to the inf-sup constant, resulting in rather useless, overly pessimistic error estimators. Such systems appear in electromagnetics, since the inf-sup constant is close to zero at points near resonant frequencies, where it eventually vanishes. We propose a novel a posteriori state error estimator which avoids the calculation of the inf-sup constant. The proposed state error estimator is compared with the standard error estimator and one recently proposed in the literature. It is shown that our proposed error estimator outperforms both existing estimators. Numerical experiments are performed on real-life microwave devices such as narrowband and wideband antennas, two types of dielectric resonator filters, as well as a dual-mode waveguide filter. These examples show the capabilities and efficiency of the proposed methodology.
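The difficulty with the standard estimator is easy to reproduce. In the toy sketch below (our own frequency-domain test system, not one of the paper's devices), the residual-based bound $\|x - x_r\| \le \|b - A(\omega)x_r\|/\beta(\omega)$, with $\beta$ the inf-sup constant (smallest singular value), is reasonable away from resonance but becomes wildly pessimistic close to one.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
K = (Q * np.arange(1.0, n + 1.0) ** 2) @ Q.T   # eigenfrequencies at 1, 2, ..., n
b = rng.standard_normal(n)

def system(omega):
    return K - omega ** 2 * np.eye(n)          # singular exactly at the resonances

# crude reduced basis from three snapshot solutions
V, _ = np.linalg.qr(np.column_stack(
    [np.linalg.solve(system(w), b) for w in (1.4, 2.3, 3.05)]))

for omega in (2.5, 3.001):                     # the second point sits next to a resonance
    A = system(omega)
    x = np.linalg.solve(A, b)                              # truth, for reference
    x_r = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)        # Galerkin reduced solution
    beta = np.linalg.svd(A, compute_uv=False)[-1]          # inf-sup constant
    bound = np.linalg.norm(b - A @ x_r) / beta
    print(omega, np.linalg.norm(x - x_r), bound)           # bound explodes as beta -> 0
```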
We derive sharp approximation error bounds for inverse block Toeplitz matrices associated with multivariate long-memory stationary processes. The error bounds are evaluated for both column and row sums. These results are used to prove the strong convergence of the solutions of general block Toeplitz systems. A crucial part of the proof is to bound sums consisting of the Fourier coefficients of the phase function attached to the singular symbol of the Toeplitz matrices.
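As a small numerical illustration of the strong-convergence statement (scalar rather than block, with an ARFIMA$(0,\delta,0)$ long-memory autocovariance as our choice of test case), one can watch the first column of $T_n^{-1}$ stabilize as $n$ grows:

```python
import math
import numpy as np
from scipy.linalg import solve_toeplitz

# autocovariances of a long-memory ARFIMA(0, delta, 0) process, 0 < delta < 1/2
delta, n_max = 0.3, 800
g = np.empty(n_max)
g[0] = math.gamma(1 - 2 * delta) / math.gamma(1 - delta) ** 2
for k in range(1, n_max):
    g[k] = g[k - 1] * (k - 1 + delta) / (k - delta)

b = np.zeros(n_max)
b[0] = 1.0
prev = None
for n in (100, 200, 400, 800):
    x = solve_toeplitz(g[:n], b[:n])     # first column of T_n^{-1} (Levinson recursion)
    if prev is not None:
        # gap between successive truncations shrinks with n (slowly, under long memory)
        print(n, np.abs(x[:len(prev)] - prev).sum())
    prev = x
```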
Homogeneity tests and interval estimation of the risk difference between two groups are of general interest under paired Bernoulli settings in the presence of stratification effects. Dallal [1] proposed a model that parameterizes the probability of an occurrence at one site given an occurrence at the other site. Based on this model, we propose three test statistics and evaluate their performance with regard to type I error control and power. Confidence intervals for a common risk difference with satisfactory coverage probability and interval length are constructed. Our simulation results show that the score test is the most robust and that the profile likelihood confidence interval outperforms the other proposed methods. Data from a study of acute otitis media are used to illustrate our proposed procedures.
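A generic Monte Carlo harness of the kind used for such evaluations looks as follows. It uses a simple Wald-type homogeneity statistic with a Dallal-style conditional parameterization of within-pair dependence; the paper's score and profile-likelihood procedures, and Dallal's exact model, are not reproduced, so all names and parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def paired_bernoulli(p, gamma, m):
    """m subjects, two sites each; P(site) = p, P(other site | one site) = gamma."""
    probs = [1 - p * (2 - gamma), p * (1 - gamma), p * (1 - gamma), p * gamma]
    counts = rng.choice(4, size=m, p=probs)        # cells (0,0), (0,1), (1,0), (1,1)
    return np.array([0, 1, 1, 2])[counts]          # occurrences per subject

def group_stats(s):
    phat = s.mean() / 2.0                          # per-site risk estimate
    var = s.var(ddof=1) / (4.0 * len(s))           # subject-level (clustered) variance
    return phat, var

reject, trials = 0, 4000
for _ in range(trials):
    # two strata with equal true risk difference, i.e., H0 (homogeneity) holds
    d_hat, v = [], []
    for p_ctrl in (0.20, 0.30):                    # stratum-specific control risks
        p1, v1 = group_stats(paired_bernoulli(p_ctrl + 0.10, 0.6, 80))
        p0, v0 = group_stats(paired_bernoulli(p_ctrl, 0.6, 80))
        d_hat.append(p1 - p0)
        v.append(v1 + v0)
    T = (d_hat[0] - d_hat[1]) ** 2 / (v[0] + v[1]) # Wald homogeneity statistic
    reject += T > 3.841                            # chi^2_1 critical value at 5%
print(reject / trials)                             # empirical type I error, near 0.05
```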