Preconditioning is essential in iterative methods for solving linear systems of equations. We study a nonclassical matrix condition number, the $\omega$-condition number, in the context of optimal conditioning for low-rank updating of positive definite matrices. For a positive definite matrix, this condition measure is the ratio of the arithmetic and geometric means of the eigenvalues. In particular, we concentrate on linear systems with low-rank updates of positive definite matrices that are close to singular. Such systems arise in nonsmooth Newton methods that use generalized Jacobians. We derive an explicit formula for the optimal $\omega$-preconditioned update in this framework. Evaluating or estimating the classical condition number $\kappa$ can be expensive; in contrast, we show that the $\omega$-condition number can be evaluated exactly after a Cholesky or LU factorization and that it estimates the actual conditioning of a linear system significantly better. Moreover, our empirical results show a significant decrease in the number of iterations required to reach a requested residual accuracy in an iterative method, confirming the efficacy of the $\omega$-condition number compared to the classical condition number.
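To make the "evaluated exactly after a Cholesky factorization" claim concrete, here is a minimal sketch (our own, with illustrative function names) of computing $\omega(A) = (\operatorname{trace}(A)/n)\,/\,\det(A)^{1/n}$, the ratio of the arithmetic and geometric means of the eigenvalues, using only the diagonal of the Cholesky factor for the determinant:

```python
# Sketch: evaluate the omega-condition number of an SPD matrix from its
# Cholesky factor, since det(A) = prod(diag(L))^2 when A = L L^T.
import numpy as np

def omega_condition_number(A: np.ndarray) -> float:
    n = A.shape[0]
    L = np.linalg.cholesky(A)                    # A = L L^T
    arith_mean = np.trace(A) / n                 # arithmetic mean of eigenvalues
    log_geo_mean = 2.0 * np.log(np.diag(L)).sum() / n   # logs avoid under/overflow
    return arith_mean / np.exp(log_geo_mean)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 5))
    A = B @ B.T + 0.5 * np.eye(5)                # SPD test matrix
    eigs = np.linalg.eigvalsh(A)
    direct = eigs.mean() / np.exp(np.log(eigs).mean())
    print(omega_condition_number(A), direct)     # the two values agree
```

Unlike $\kappa$, no eigenvalue or singular value computation is needed beyond the factorization itself.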
Markov Chain Monte Carlo methods for sampling from complex distributions and estimating normalization constants often simulate samples from a sequence of intermediate distributions along an annealing path, which bridges between a tractable initial distribution and a target density of interest. Prior work has constructed annealing paths using quasi-arithmetic means, and interpreted the resulting intermediate densities as minimizing an expected divergence to the endpoints. We provide a comprehensive analysis of this 'centroid' property using Bregman divergences under a monotonic embedding of the density function, thereby associating common divergences such as Amari's and Rényi's ${\alpha}$-divergences, ${(\alpha,\beta)}$-divergences, and the Jensen-Shannon divergence with intermediate densities along an annealing path. Our analysis highlights the interplay between parametric families, quasi-arithmetic means, and divergence functions using the $\rho$-$\tau$ Bregman divergence framework of Zhang (2004, 2013).
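As a toy illustration of the quasi-arithmetic-mean construction (our sketch; the endpoint densities and the power embedding below are illustrative choices, not from the paper), the intermediate unnormalized density is $f^{-1}\big((1-\beta)f(\pi_0) + \beta f(\pi_1)\big)$, with $f=\log$ recovering the familiar geometric path:

```python
# Sketch: quasi-arithmetic-mean annealing path between two unnormalized densities.
import numpy as np

def qam_path(pi0, pi1, beta, f, f_inv):
    """Unnormalized intermediate density along the quasi-arithmetic-mean path."""
    return lambda x: f_inv((1.0 - beta) * f(pi0(x)) + beta * f(pi1(x)))

pi0 = lambda x: np.exp(-0.5 * x**2)              # illustrative endpoint densities
pi1 = lambda x: np.exp(-0.5 * (x - 4.0) ** 2)

geometric = qam_path(pi0, pi1, 0.5, np.log, np.exp)   # f = log: geometric path

q = 0.5                                          # power ("q-path") embedding
f_q    = lambda u: (u ** (1.0 - q) - 1.0) / (1.0 - q)
f_qinv = lambda v: (1.0 + (1.0 - q) * v) ** (1.0 / (1.0 - q))
qpath = qam_path(pi0, pi1, 0.5, f_q, f_qinv)

print(geometric(1.0), qpath(1.0))                # the two paths differ between the endpoints
```

Different embeddings $f$ thus yield different intermediate densities, which the paper ties to different divergence-centroid interpretations.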
Directional interpolation is a fast and efficient compression technique for high-frequency Helmholtz boundary integral equations, but it requires a very large amount of storage in its original form. Algebraic recompression can significantly reduce the storage requirements and speed up the solution process accordingly. During the recompression process, weight matrices are required to correctly measure the influence of different basis vectors on the final result, and for highly accurate approximations, these weight matrices require more storage than the final compressed matrix. We present a compression method for the weight matrices and demonstrate that it introduces only a controllable error to the overall approximation. Numerical experiments show that the new method leads to a significant reduction in storage requirements.
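The basic building block behind such compression is error-controlled low-rank truncation; the following generic sketch (ours, with illustrative data, and far simpler than the paper's weighted recompression algorithm) shows how a truncation tolerance directly controls the introduced error:

```python
# Sketch: error-controlled low-rank truncation of a weight-like matrix via SVD.
import numpy as np

def truncate_svd(W: np.ndarray, tol: float):
    """Return factors U, Vt with spectral error ||W - U @ Vt||_2 <= tol."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = int(np.sum(s > tol))                     # keep singular values above tol
    return U[:, :r] * s[:r], Vt[:r, :]

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 60))
W = (A * np.exp(-0.3 * np.arange(60))) @ rng.standard_normal((60, 60))  # decaying spectrum
Uf, Vt = truncate_svd(W, tol=1e-6 * np.linalg.norm(W, 2))
print(Uf.shape, np.linalg.norm(W - Uf @ Vt, 2) / np.linalg.norm(W, 2))  # rank and relative error
```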
Consider the problem of estimating a random variable $X$ from noisy observations $Y = X+ Z$, where $Z$ is standard normal, under the $L^1$ fidelity criterion. It is well known that the optimal Bayesian estimator in this setting is the conditional median. This work shows that the only prior distribution on $X$ that induces linearity in the conditional median is Gaussian. Along the way, several other results are presented. In particular, it is demonstrated that if the conditional distribution $P_{X|Y=y}$ is symmetric for all $y$, then $X$ must follow a Gaussian distribution. Additionally, we consider other $L^p$ losses and observe the following phenomenon: for $p \in [1,2]$, Gaussian is the only prior distribution that induces a linear optimal Bayesian estimator, and for $p \in (2,\infty)$, infinitely many prior distributions on $X$ can induce linearity. Finally, extensions are provided to encompass noise models leading to conditional distributions from certain exponential families.
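A quick numerical sanity check (our illustration, not the paper's proof) of the Gaussian case: for $X \sim N(0, \sigma^2)$ and $Y = X + Z$, the posterior is Gaussian, so the conditional median coincides with the conditional mean $\frac{\sigma^2}{\sigma^2+1} y$ and is linear in $y$:

```python
# Sketch: the conditional median under a Gaussian prior is linear in y.
import numpy as np

s2 = 2.0                                          # illustrative prior variance
xs = np.linspace(-20.0, 20.0, 20001)
phi = lambda t, var=1.0: np.exp(-0.5 * t**2 / var) / np.sqrt(2.0 * np.pi * var)

def conditional_median(y: float) -> float:
    post = phi(xs, s2) * phi(y - xs)              # unnormalized posterior density
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    return float(np.interp(0.5, cdf, xs))         # x where the posterior CDF hits 1/2

for y in (-3.0, 0.0, 1.5, 4.0):
    print(y, conditional_median(y), s2 / (s2 + 1.0) * y)   # last two columns agree
```

The paper's contribution is the converse: among all priors, only the Gaussian produces this linearity under the $L^1$ criterion.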
We consider online reinforcement learning (RL) in episodic Markov decision processes (MDPs) under the linear $q^\pi$-realizability assumption, where the action-values of all policies are assumed to be expressible as linear functions of state-action features. This class is known to be more general than that of linear MDPs, where the transition kernel and the reward function are assumed to be linear functions of the feature vectors. As our first contribution, we show that the difference between the two classes lies in the presence, in linearly $q^\pi$-realizable MDPs, of states where, for any policy, all actions have approximately equal values; skipping over these states by following an arbitrary fixed policy in them transforms the problem into a linear MDP. Based on this observation, we derive a novel (computationally inefficient) learning algorithm for linearly $q^\pi$-realizable MDPs that simultaneously learns which states should be skipped over and runs another learning algorithm on the linear MDP hidden in the problem. The method returns an $\epsilon$-optimal policy after $\text{polylog}(H, d)/\epsilon^2$ interactions with the MDP, where $H$ is the time horizon and $d$ is the dimension of the feature vectors, giving the first online RL algorithm with polynomial sample complexity for this setting. The results are proved for the misspecified case, where the sample complexity is shown to degrade gracefully with the misspecification error.
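The key structural observation can be illustrated with a toy check (our construction, not the paper's algorithm): flag states whose per-policy action-value gap is uniformly small, since those are exactly the states that can be skipped:

```python
# Sketch: flag "skippable" states, i.e., states where every considered policy
# assigns approximately equal values to all actions.
import numpy as np

def skippable_states(q_values: np.ndarray, eps: float) -> np.ndarray:
    """q_values: (n_policies, n_states, n_actions). Returns a bool mask over
    states whose action-value gap is below eps for every policy."""
    gaps = q_values.max(axis=2) - q_values.min(axis=2)   # (n_policies, n_states)
    return (gaps < eps).all(axis=0)

rng = np.random.default_rng(0)
q = rng.uniform(0.0, 1.0, size=(5, 8, 3))                # 5 policies, 8 states, 3 actions
q[:, [2, 6], :] = q[:, [2, 6], :1] + rng.uniform(0, 1e-3, size=(5, 2, 3))  # near-flat states
print(skippable_states(q, eps=1e-2))                     # True at states 2 and 6
```

In the actual algorithm, of course, the action-values are not given and must be learned from interaction, which is where the sample-complexity analysis enters.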
In this paper we give a detailed error analysis of two algorithms (denoted $W_1$ and $W_2$) for computing the symplectic factorization of a symmetric positive definite and symplectic matrix $A \in \mathbb R^{2n \times 2n}$ in the form $A=LL^T$, where $L \in \mathbb R^{2n \times 2n}$ is a symplectic block lower triangular matrix. Algorithm $W_1$ is an implementation of the $HH^T$ factorization from [Dopico et al., 2009]. Algorithm $W_2$, proposed in [Bujok et al., 2023], uses both Cholesky and reverse Cholesky decompositions of the symmetric positive definite matrix blocks that appear during the factorization. We prove that Algorithm $W_2$ is numerically stable for a broader class of symmetric positive definite matrices $A \in \mathbb R^{2n \times 2n}$, producing computed factors $\tilde L$ in floating-point arithmetic with machine precision $u$ such that $\|A-\tilde L {\tilde L}^T\|_2= {\cal O}(u \|A\|_2)$. However, Algorithm $W_1$ is unstable in general for a symmetric positive definite and symplectic matrix $A$, as confirmed by the numerical experiments in [Bujok et al., 2023]. We also give corresponding bounds for Algorithm $W_1$; these are weaker, since we show that the factorization error depends on the size of the inverse of the principal submatrix $A_{11}$. Tests performed in MATLAB illustrate that our error bounds for the considered algorithms are reasonably sharp.
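For readers unfamiliar with the reverse Cholesky decomposition used by Algorithm $W_2$, here is a minimal sketch (our own, using the standard flip construction; not the paper's code) producing an upper-triangular factor $U$ with $A = UU^T$:

```python
# Sketch: reverse Cholesky factorization A = U U^T with U upper triangular,
# via ordinary Cholesky on the row/column-reversed matrix.
import numpy as np

def reverse_cholesky(A: np.ndarray) -> np.ndarray:
    """Return upper-triangular U with A = U @ U.T for SPD A."""
    J = np.flip(np.eye(A.shape[0]), axis=1)      # exchange (reversal) matrix, J = J^T, J^2 = I
    L = np.linalg.cholesky(J @ A @ J)            # ordinary lower Cholesky of the flipped matrix
    return J @ L @ J                             # flipping back yields an upper-triangular factor

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4.0 * np.eye(4)                    # SPD test matrix
U = reverse_cholesky(A)
print(np.allclose(A, U @ U.T), np.allclose(U, np.triu(U)))   # True True
```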
Time-Aware Shaper (TAS) is a time-triggered scheduling mechanism that ensures bounded latency for time-critical Scheduled Traffic (ST) flows. The Linux kernel implementation (a.k.a. TAPRIO) has limited capabilities due to varying CPU workloads and thus does not offer a tight latency bound for ST flows; moreover, it currently supports only relatively large cycle times. Other software implementations are limited to simulation studies without a physical implementation. In this paper, we present $\mu$TAS, a MicroC-based hardware implementation of TAS on a programmable SmartNIC. $\mu$TAS takes advantage of the parallel-processing architecture of the SmartNIC to configure the scheduling behaviour of its queues at runtime. To demonstrate the effectiveness of $\mu$TAS, we built a Time-Sensitive Networking (TSN) testbed from scratch. The testbed consists of multiple end-hosts capable of generating ST and Best Effort (BE) flows, along with TSN switches equipped with SmartNICs running $\mu$TAS. Time synchronization is maintained between the switches and hosts. Our experiments demonstrate that ST flows experience a bounded latency on the order of tens of microseconds.
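To clarify the TAS mechanism itself, here is a toy model (ours, not the $\mu$TAS code; the cycle layout and queue assignments are illustrative) of a gate-control list: the cycle is divided into windows, each opening a subset of queues, with ST traffic typically given an exclusive window:

```python
# Sketch: which queues may transmit at a given time offset under a TAS
# gate-control list of (duration_ns, gate_mask) entries.
from typing import List, Tuple

def open_gates(gcl: List[Tuple[int, int]], cycle_ns: int, t_ns: int) -> List[int]:
    """Bit i of gate_mask opens queue i during that entry's window."""
    t = t_ns % cycle_ns                          # position within the repeating cycle
    for duration, mask in gcl:
        if t < duration:
            return [q for q in range(8) if mask & (1 << q)]
        t -= duration
    return []

# Illustrative 100 us cycle: 20 us exclusive window for the ST queue (queue 7),
# then 80 us for best-effort queues 0-6.
gcl = [(20_000, 0b1000_0000), (80_000, 0b0111_1111)]
print(open_gates(gcl, 100_000, 10_000))          # [7]       -> ST window
print(open_gates(gcl, 100_000, 150_000))         # [0,...,6] -> BE window
```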
The top-$k$-sum operator computes the sum of the largest $k$ components of a given vector. The Euclidean projection onto the top-$k$-sum constraint serves as a crucial subroutine in iterative methods to solve composite superquantile optimization problems. In this paper, we introduce a solver that implements two finite-termination algorithms to compute this projection. Both algorithms have complexity $O(n)$ when applied to a sorted $n$-dimensional input vector, where the absorbed constant is independent of $k$. This stands in contrast to the existing grid-search-inspired method that has $O(k(n-k))$ complexity. The improvement is significant when $k$ scales linearly with $n$, a regime frequently encountered in practical superquantile optimization applications. In instances where the input vector is unsorted, an additional cost is incurred to (partially) sort the vector. To reduce this cost, we further derive a rigorous procedure that leverages approximate sorting to compute the projection, which is particularly useful when solving a sequence of similar projection problems. Numerical results show that our methods solve problems of scale $n=10^7$ and $k=10^4$ within $0.05$ seconds, whereas the existing grid-search-based method and the Gurobi QP solver can take minutes to hours.
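For concreteness, here is a minimal sketch (ours) of the top-$k$-sum operator and a feasibility check for the constraint $\text{topksum}(x) \le r$; the paper's $O(n)$ projection algorithms are substantially more involved than this:

```python
# Sketch: top-k-sum operator via partial selection (no full sort needed).
import numpy as np

def top_k_sum(x: np.ndarray, k: int) -> float:
    """Sum of the k largest components; np.partition selects them in O(n)."""
    return float(np.partition(x, -k)[-k:].sum())

rng = np.random.default_rng(3)
x = rng.standard_normal(10)
print(top_k_sum(x, 3), np.sort(x)[-3:].sum())    # identical values
print(top_k_sum(x, 3) <= 5.0)                    # feasibility w.r.t. a bound r = 5
```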
We present approximation algorithms for the Fault-tolerant $k$-Supplier with Outliers ($\mathsf{F}k\mathsf{SO}$) problem. This is a common generalization of two known problems -- $k$-Supplier with Outliers, and Fault-tolerant $k$-Supplier -- each of which generalizes the well-known $k$-Supplier problem. In the $k$-Supplier problem the goal is to serve $n$ clients $C$ by opening $k$ facilities from a set of possible facilities $F$; the objective is the farthest distance that any client must travel to reach an open facility. In $\mathsf{F}k\mathsf{SO}$, each client $v$ has a fault tolerance $\ell_v$ and requires $\ell_v$ facilities to serve it, so client $v$'s contribution to the objective is its distance to the $\ell_v^{\text{th}}$ closest open facility. Furthermore, we are allowed to choose $m$ clients to serve, and only those clients contribute to the objective, while the remaining $n-m$ are treated as outliers. Our main result is a $\min\{4t-1,2^t+1\}$-approximation for the $\mathsf{F}k\mathsf{SO}$ problem, where $t$ is the number of distinct values of $\ell_v$ that appear in the instance. For $t=1$, i.e., when all the $\ell_v$'s equal some uniform $\ell$, this yields a $3$-approximation, improving upon the $11$-approximation given for the uniform case by Inamdar and Varadarajan [2020], who also introduced the problem. Our result for the uniform case matches the tight $3$-approximations known for $k$-Supplier, $k$-Supplier with Outliers, and Fault-tolerant $k$-Supplier. Our key technical contribution is an application of the round-or-cut schema to $\mathsf{F}k\mathsf{SO}$. Guided by an LP relaxation, we reduce to a simpler optimization problem, which we can solve to obtain distance bounds for the "round" step, and valid inequalities for the "cut" step.
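To pin down the objective being approximated, the following small helper (our illustration, not the paper's algorithm) evaluates an $\mathsf{F}k\mathsf{SO}$ solution: each client contributes its distance to its $\ell_v$-th closest open facility, and the $n-m$ worst contributions are discarded as outliers:

```python
# Sketch: evaluate the FkSO objective for a fixed set of open facilities.
import numpy as np

def fkso_objective(dist: np.ndarray, ell: np.ndarray, m: int) -> float:
    """dist[v, f]: distance from client v to open facility f; ell[v]: fault tolerance."""
    d_sorted = np.sort(dist, axis=1)
    contrib = d_sorted[np.arange(len(ell)), ell - 1]   # distance to ell_v-th closest facility
    served = np.sort(contrib)[:m]                      # serve the m best clients, rest are outliers
    return float(served.max())

rng = np.random.default_rng(4)
dist = rng.uniform(0.0, 10.0, size=(6, 3))             # 6 clients, 3 open facilities
ell = np.array([1, 1, 2, 2, 1, 3])                     # per-client fault tolerances
print(fkso_objective(dist, ell, m=4))
```

The hard part, of course, is choosing which $k$ facilities to open, which is what the round-or-cut framework addresses.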
This paper introduces a novel class of fair and interpolatory curves called $p\kappa$-curves. These curves are composed of smoothly stitched B\'ezier curve segments, where the curvature distribution of each segment is made to closely resemble a parabola, resulting in an aesthetically pleasing shape. Moreover, each segment passes through an interpolated point at a parameter where the parabola has an extremum, encouraging the alignment of interpolated points with curvature extrema. To achieve these properties, we tailor an energy function that guides the optimization process to obtain the desired curve characteristics. Additionally, we develop an efficient algorithm and an initialization method, enabling interactive modeling of the $p\kappa$-curves without the need for global optimization. We provide various examples and comparisons with existing state-of-the-art methods to demonstrate the curve modeling capabilities and visually pleasing appearance of $p\kappa$-curves.
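The quantity being shaped toward a parabola is the curvature distribution $\kappa(t)$ of each segment. Here is a short sketch (ours, with illustrative control points; not the paper's energy) of sampling the curvature of a planar cubic B\'ezier segment via $\kappa(t) = |x'y'' - y'x''| / \|B'(t)\|^3$:

```python
# Sketch: curvature samples of a planar cubic Bezier segment.
import numpy as np
from math import comb

def bezier_eval(Q: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Evaluate a Bezier curve with control points Q (shape (d+1, 2)) at parameters t."""
    d = len(Q) - 1
    basis = np.stack([comb(d, i) * t**i * (1 - t) ** (d - i) for i in range(d + 1)])
    return basis.T @ Q

def cubic_bezier_curvature(P: np.ndarray, t: np.ndarray) -> np.ndarray:
    D1 = 3.0 * (P[1:] - P[:-1])                  # control points of B' (degree 2)
    D2 = 2.0 * (D1[1:] - D1[:-1])                # control points of B'' (degree 1)
    v, a = bezier_eval(D1, t), bezier_eval(D2, t)
    cross = v[:, 0] * a[:, 1] - v[:, 1] * a[:, 0]
    return np.abs(cross) / np.linalg.norm(v, axis=1) ** 3

P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
t = np.linspace(0.0, 1.0, 5)
print(cubic_bezier_curvature(P, t))              # curvature profile along the segment
```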
It is known from the literature that the choice of basis functions in $hp$-FEM heavily influences the computational cost of obtaining an approximate solution. Depending on the choice of the reference element, suitable tensor-product-like basis functions built from Jacobi polynomials with different weights lead to optimal properties with respect to condition number and sparsity. This paper presents biorthogonal basis functions to the primal basis functions mentioned above. The authors investigate hypercubes and simplices as reference elements, as well as the cases of $H^1$ and $H(\mathrm{curl})$. The resulting functions can be expressed as sums of tensor products of Jacobi polynomials with at most two summands.
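As a small illustration of the tensor-product structure (ours; the weights $a=b=0$, i.e., the Legendre case, are an illustrative choice, not the paper's specific weights), a shape function on the quadrilateral reference element $[-1,1]^2$ is a product of two Jacobi polynomials $P_n^{(a,b)}$:

```python
# Sketch: tensor-product shape functions from Jacobi polynomials on [-1,1]^2.
import numpy as np
from scipy.special import eval_jacobi

def tensor_jacobi(i: int, j: int, a: float, b: float, x: np.ndarray, y: np.ndarray):
    """phi_{ij}(x, y) = P_i^{(a,b)}(x) * P_j^{(a,b)}(y)."""
    return eval_jacobi(i, a, b, x) * eval_jacobi(j, a, b, y)

x = y = np.linspace(-1.0, 1.0, 5)
X, Y = np.meshgrid(x, y)
print(tensor_jacobi(2, 3, 0.0, 0.0, X, Y))       # Legendre case a = b = 0
```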