The problem of low-rank approximation is ubiquitous in science. Traditionally, this problem is solved in unitarily invariant norms such as the Frobenius or spectral norm, owing to the existence of efficient methods for building such approximations. However, recent results reveal the potential of low-rank approximations in the Chebyshev norm, which arises naturally in many applications. In this paper we tackle the problem of building optimal rank-1 approximations in the Chebyshev norm. We investigate the properties of the alternating minimization algorithm for building low-rank approximations and demonstrate how to use it to construct an optimal rank-1 approximation. As a result, we propose an algorithm that is capable of building optimal rank-1 approximations in the Chebyshev norm for small matrices.
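To make the alternating scheme concrete, here is a minimal sketch (an illustration of ours, not the authors' algorithm, and without the optimality guarantees discussed in the paper): with one factor fixed, the update of the other factor decouples into independent one-dimensional Chebyshev problems, each solvable exactly as a tiny linear program.

```python
# Sketch of alternating minimization for a rank-1 approximation in the
# Chebyshev (entrywise max) norm; illustrative only.
import numpy as np
from scipy.optimize import linprog

def chebyshev_1d(a, u):
    """Solve min_t max_i |a_i - u_i * t| exactly as a small linear program."""
    # variables x = (t, r); minimize r subject to |a_i - u_i t| <= r
    c = np.array([0.0, 1.0])
    A_ub = np.vstack([np.column_stack([u, -np.ones_like(u)]),    # u_i t - r <= a_i
                      np.column_stack([-u, -np.ones_like(u)])])  # -u_i t - r <= -a_i
    b_ub = np.concatenate([a, -a])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None), (0, None)])
    return res.x[0]

def alt_min_rank1(A, n_iter=50, seed=None):
    """Alternate exact Chebyshev updates of u and v for A ~ u v^T."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    u = rng.standard_normal(m)
    for _ in range(n_iter):
        v = np.array([chebyshev_1d(A[:, j], u) for j in range(n)])
        u = np.array([chebyshev_1d(A[i, :], v) for i in range(m)])
    return u, v

A = np.random.default_rng(0).standard_normal((6, 5))
u, v = alt_min_rank1(A, seed=1)
print(np.max(np.abs(A - np.outer(u, v))))  # Chebyshev error of the rank-1 fit
```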
While several previous studies have devised methods for the segmentation of polyps, most of these methods have not been rigorously assessed on multi-center datasets. Variability in the appearance of polyps from one center to another, differences in endoscopic instrument grades, and differences in acquisition quality result in methods that perform well on in-distribution test data but poorly on out-of-distribution or underrepresented samples. Unfair models have serious implications and pose a critical challenge to clinical applications. We adapt an implicit bias mitigation method that leverages Bayesian epistemic uncertainties during training to encourage the model to focus on underrepresented sample regions. We demonstrate the potential of this approach to improve generalisability without sacrificing state-of-the-art performance on a challenging multi-center polyp segmentation dataset (PolypGen) with different centers and image modalities.
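As a rough illustration of the idea (a sketch of ours, not the authors' exact method): a common way to exploit epistemic uncertainty during training is to estimate it per pixel with MC dropout and up-weight the segmentation loss in uncertain regions. The snippet below assumes a binary-segmentation model with dropout layers.

```python
# Illustrative uncertainty-weighted loss; hypothetical model and weighting scheme.
import torch
import torch.nn.functional as F

def mc_dropout_uncertainty(model, x, n_samples=8):
    """Per-pixel epistemic uncertainty as the variance of MC-dropout predictions."""
    model.train()  # keep dropout active during the forward passes
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return probs.var(dim=0)  # shape (B, 1, H, W)

def uncertainty_weighted_bce(model, x, target, beta=1.0):
    """Up-weight the per-pixel BCE loss where the model is epistemically uncertain."""
    uncertainty = mc_dropout_uncertainty(model, x)
    weights = 1.0 + beta * uncertainty / (uncertainty.max() + 1e-8)
    logits = model(x)
    loss = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (weights * loss).mean()
```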
We analyse a numerical scheme for a system arising from a novel description of the standard elastic--perfectly plastic response. The elastic--perfectly plastic response is described via rate-type equations that do not make use of the standard elastic-plastic decomposition, and the model does not require the use of variational inequalities. Furthermore, the model naturally includes the evolution equation for temperature. We present a low-order discretisation based on the finite element method. Under certain restrictions on the mesh, we subsequently prove the existence of discrete solutions, and we discuss the stability properties of the numerical scheme. The analysis is supplemented with computational examples.
The nonnegative rank of nonnegative matrices is an important quantity that appears in many fields, such as combinatorial optimization, communication complexity, and information theory. In this paper, we study the asymptotic growth of the nonnegative rank of a fixed nonnegative matrix under the Kronecker product. This quantity, called the asymptotic nonnegative rank, has already been studied in information theory. By applying the theory of asymptotic spectra of V. Strassen (J. Reine Angew. Math., 1988), we introduce the asymptotic spectrum of nonnegative matrices and give a dual characterization of the asymptotic nonnegative rank. As a counterpart to the nonnegative rank, we introduce the notion of the subrank of a nonnegative matrix and show that it is exactly equal to the size of a maximum induced matching of the bipartite graph defined on the support of the matrix (and is therefore independent of the values of the entries). Finally, we show that two matrix parameters, namely the rank and the fractional cover number, belong to the asymptotic spectrum of nonnegative matrices.
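For concreteness, the asymptotic growth rate described above is standardly formalised as
\[
  \lim_{n \to \infty} \bigl(\operatorname{rank}_{+}(A^{\otimes n})\bigr)^{1/n},
\]
where $\operatorname{rank}_{+}$ denotes the nonnegative rank and $A^{\otimes n}$ the $n$-fold Kronecker power; the limit exists by Fekete's lemma, since the nonnegative rank is submultiplicative under Kronecker products.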
We study a variant of quantum hypothesis testing wherein an additional 'inconclusive' measurement outcome is added, allowing one to abstain from attempting to discriminate the hypotheses. The error probabilities are then conditioned on a successful attempt, with inconclusive trials disregarded. We completely characterise this task in both the single-shot and asymptotic regimes, providing exact formulas for the optimal error probabilities. In particular, we prove that the asymptotic error exponent of discriminating any two quantum states $\rho$ and $\sigma$ is given by the Hilbert projective metric $D_{\max}(\rho\|\sigma) + D_{\max}(\sigma \| \rho)$ in asymmetric hypothesis testing, and by the Thompson metric $\max \{ D_{\max}(\rho\|\sigma), D_{\max}(\sigma \| \rho) \}$ in symmetric hypothesis testing. This endows these two quantities with fundamental operational interpretations in quantum state discrimination. Our findings extend to composite hypothesis testing, where we show that the asymmetric error exponent with respect to any convex set of density matrices is given by a regularisation of the Hilbert projective metric. We apply our results also to quantum channels, showing that no advantage is gained by employing adaptive or even more general discrimination schemes over parallel ones, in both the asymmetric and symmetric settings. Our state discrimination results make use of no properties specific to quantum mechanics and are also valid in general probabilistic theories.
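For reference, $D_{\max}$ above is the max-relative entropy,
\[
  D_{\max}(\rho \| \sigma) \;=\; \log \min \{\, \lambda \ge 0 \,:\, \rho \le \lambda \sigma \,\},
\]
so the two error exponents are precisely the Hilbert projective metric and the Thompson metric between $\rho$ and $\sigma$.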
The (modern) arbitrary derivative (ADER) approach is a popular technique for the numerical solution of differential problems, based on iteratively solving an implicit discretization of their weak formulation. In this work, focusing on the ODE context, we investigate several strategies to improve this approach. Our initial emphasis is on the order of accuracy of the method in connection with the polynomial discretization of the weak formulation. We demonstrate that precise choices lead to higher orders of convergence than those reported in the existing literature. We then cast ADER methods into the Deferred Correction (DeC) formalism. This allows us to determine the optimal number of iterations, which equals the formal order of accuracy of the method, and to introduce efficient $p$-adaptive modifications, defined by matching, at each iteration, the order of accuracy achieved and the degree of the polynomial reconstruction. We provide analytical and numerical results, including the stability analysis of the new modified methods, an investigation of computational efficiency, an application to adaptivity, and an application to hyperbolic PDEs with a Spectral Difference (SD) space discretization.
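As a rough illustration of the iterative ADER/DeC idea in the ODE setting (a sketch under simplifying assumptions, not the schemes analysed in the paper), one time step can be viewed as a Picard-type fixed-point sweep over a collocation discretization, where each sweep is expected to gain roughly one order of accuracy, so the number of iterations is tied to the formal order:

```python
# Sketch: scalar ODE u' = f(t, u), one time step, Chebyshev-Lobatto collocation nodes.
import numpy as np

def integration_matrix(nodes):
    """S[q, j] = integral of the j-th Lagrange basis polynomial from nodes[0] to nodes[q]."""
    n = len(nodes)
    S = np.zeros((n, n))
    for j in range(n):
        y = np.zeros(n); y[j] = 1.0
        coeffs = np.polyfit(nodes, y, n - 1)   # j-th Lagrange basis in monomial form
        antider = np.polyint(coeffs)
        S[:, j] = np.polyval(antider, nodes) - np.polyval(antider, nodes[0])
    return S

def ader_step(f, t0, u0, dt, n_nodes=4, n_iter=None):
    tau = 0.5 * (1.0 - np.cos(np.pi * np.arange(n_nodes) / (n_nodes - 1)))  # nodes in [0, 1]
    S = dt * integration_matrix(tau)
    n_iter = n_nodes if n_iter is None else n_iter   # iterations tied to the formal order
    u = np.full(n_nodes, u0, dtype=float)            # constant initial guess
    for _ in range(n_iter):
        u = u0 + S @ np.array([f(t0 + dt * tau[q], u[q]) for q in range(n_nodes)])
    return u[-1]

# usage: u' = -u with u(0) = 1; compare one step against the exact solution exp(-dt)
print(ader_step(lambda t, u: -u, 0.0, 1.0, 0.1), np.exp(-0.1))
```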
We present a constant-factor approximation algorithm for the Nash social welfare maximization problem with subadditive valuations accessible via demand queries. More generally, we propose a template for NSW optimization that solves a configuration-type LP and uses a rounding procedure for (utilitarian) social welfare as a black box, which could be applicable to other variants of the problem.
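For reference, the Nash social welfare of an allocation $(S_1, \dots, S_n)$ of items to $n$ agents with valuations $v_i$ is the geometric mean of the agents' values (standard definition, not specific to this paper):
\[
  \mathrm{NSW}(S_1, \dots, S_n) \;=\; \Bigl( \prod_{i=1}^{n} v_i(S_i) \Bigr)^{1/n}.
\]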
Challenging combinatorial optimization problems are ubiquitous in science and engineering. Several quantum methods for optimization have recently been developed, in settings that include both exact and approximate solvers. Addressing this field of research, this manuscript has three distinct purposes. First, we present an intuitive method for synthesizing and analyzing discrete (i.e., integer-based) optimization problems, wherein the problem and the corresponding algorithmic primitives are expressed using a discrete quantum intermediate representation (DQIR) that is encoding-independent. This compact representation often allows for more efficient problem compilation, automated analyses of different encoding choices, easier interpretability, more complex runtime procedures, and richer programmability compared to previous approaches, which we demonstrate with a number of examples. Second, we perform numerical studies comparing several qubit encodings; the results exhibit a number of preliminary trends that help guide the choice of encoding for a given combination of hardware, problem, and algorithm. Our study includes problems related to graph coloring, the traveling salesperson problem, factory/machine scheduling, financial portfolio rebalancing, and integer linear programming. Third, we design low-depth graph-derived partial mixers (GDPMs) for quantum variables of up to 16 levels, demonstrating that compact (binary) encodings are more amenable to QAOA than previously understood. We expect this toolkit of programming abstractions and low-level building blocks to aid in designing quantum algorithms for discrete combinatorial problems.
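As a small illustration of why the encoding choice matters (an example of ours, not taken from the paper): the number of qubits needed to represent $N$ discrete variables with $d$ levels each differs sharply between one-hot and compact binary encodings.

```python
# Qubit counts for encoding N discrete d-level variables under two common encodings.
import math

def qubit_count(n_vars, d, encoding):
    per_var = d if encoding == "one-hot" else math.ceil(math.log2(d))
    return n_vars * per_var

for enc in ("one-hot", "binary"):
    print(enc, qubit_count(n_vars=10, d=16, encoding=enc))  # 160 vs 40 qubits
```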
We consider the linear lambda-calculus extended with the sup type constructor, which provides an additive conjunction along with a non-deterministic destructor. The sup type constructor was introduced in the context of quantum computing. In this paper, we study this type constructor within a simple linear logic categorical model, employing the category of semimodules over a commutative semiring. We demonstrate that the non-deterministic destructor finds a suitable model in a "weighted" codiagonal map. This approach offers a valid and insightful alternative for interpreting non-determinism, especially in instances where the conventional powerset monad interpretation does not align with the structure of the category, as is the case for the category of semimodules. The validity of this alternative relies on the presence of biproducts within the category.
Besov priors are nonparametric priors that can model spatially inhomogeneous functions. They are routinely used in inverse problems and imaging, where they exhibit attractive sparsity-promoting and edge-preserving features. A recent line of work has initiated the study of their asymptotic frequentist convergence properties. In the present paper, we consider the theoretical recovery performance of the posterior distributions associated with Besov-Laplace priors in the density estimation model, under the assumption that the observations are generated by a possibly spatially inhomogeneous true density belonging to a Besov space. We improve on existing results and show that carefully tuned Besov-Laplace priors attain optimal posterior contraction rates. Furthermore, we show that hierarchical procedures involving a hyper-prior on the regularity parameter lead to adaptation to any smoothness level.
A new hybridizable discontinuous Galerkin method, named the CHDG method, is proposed for solving time-harmonic scalar wave propagation problems. This method relies on a standard discontinuous Galerkin scheme with upwind numerical fluxes and high-order polynomial bases. Auxiliary unknowns corresponding to characteristic variables are defined at the interfaces between elements, and the physical fields are eliminated to obtain a reduced system. The reduced system can be written as a fixed-point problem that can be solved with stationary iterative schemes. Numerical results with 2D benchmarks are presented to study the performance of the approach. Compared to the standard HDG approach, the properties of the reduced system are improved with CHDG, which is better suited to iterative solution procedures. The condition number of the reduced system is smaller with CHDG than with the standard HDG method, and iterative solution procedures with CGNR or GMRES require fewer iterations with CHDG.
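To illustrate the kind of solver setting described above (a generic sketch of ours, not the CHDG system itself): once the reduced system is written as a fixed-point problem $g = S g + b$, it can be solved either with a stationary iteration or by applying a Krylov method such as GMRES to the equivalent system $(I - S) g = b$, using only matrix-vector products.

```python
# Generic fixed-point problem g = S g + b solved two ways; S is a stand-in operator.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 200
S = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)   # contraction-like stand-in operator
b = rng.standard_normal(n)

# stationary (fixed-point / Richardson) iteration
g = np.zeros(n)
for _ in range(100):
    g = S @ g + b

# same solution via GMRES on (I - S) g = b, using only matrix-vector products
A = LinearOperator((n, n), matvec=lambda x: x - S @ x)
g_gmres, info = gmres(A, b)
print(np.linalg.norm(g - g_gmres))  # the two solutions agree up to the GMRES tolerance
```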