Determining the capacities of quantum channels is a fundamental question in quantum information theory. Although rigorous coding theorems quantify the flow of information across quantum channels, these capacities remain poorly understood due to super-additivity effects. Studying these phenomena is important for deepening our understanding of quantum information, yet simple and clean examples of super-additive channels are scarce. Here we study a family of channels called platypus channels. Its simplest member, a qutrit channel, is shown to display super-additivity of coherent information when used jointly with a variety of qubit channels. Higher-dimensional family members display super-additivity of quantum capacity when used jointly with an erasure channel. Subject to the "spin-alignment conjecture" introduced in the companion paper [IEEE Trans. Inf. Theory 69(6), pp. 3825-3849, 2023; arXiv:2202.08380], our results on super-additivity of quantum capacity extend to lower-dimensional channels as well as larger parameter ranges. In particular, super-additivity occurs between two weakly additive channels, each with a large capacity on its own, in stark contrast to previous results. Remarkably, a single, novel transmission strategy achieves super-additivity in all examples. Our results show that super-additivity is much more prevalent than previously thought: it can occur across a wide variety of channels, even when both participating channels have large quantum capacity.
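For context, the quantities involved can be stated in their standard textbook form (these definitions are background material, not specific to the paper): the one-shot coherent information of a channel $\mathcal{N}$ and the quantum capacity are
\[
Q^{(1)}(\mathcal{N}) \;=\; \max_{\rho}\,\big[\, S(\mathcal{N}(\rho)) - S(\mathcal{N}^{c}(\rho)) \,\big],
\qquad
Q(\mathcal{N}) \;=\; \lim_{n\to\infty} \tfrac{1}{n}\, Q^{(1)}(\mathcal{N}^{\otimes n}),
\]
where $\mathcal{N}^{c}$ denotes a complementary channel. Super-additivity of coherent information for a pair of channels then means
\[
Q^{(1)}(\mathcal{N}_1 \otimes \mathcal{N}_2) \;>\; Q^{(1)}(\mathcal{N}_1) + Q^{(1)}(\mathcal{N}_2).
\]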
We derive a model for the optimization of the bending and torsional rigidities of non-homogeneous elastic rods. This is achieved by studying a sharp-interface shape optimization problem with perimeter penalization that treats both rigidities as objectives. We then formulate a phase-field approximation of the optimization problem and prove its convergence to the aforementioned sharp-interface model via $\Gamma$-convergence. In the final part of this work we numerically approximate minimizers of the phase-field problem using a steepest descent approach and relate the resulting optimal shapes to the development of the morphology of plant stems.
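As an illustration of the phase-field step (the paper's exact functional may differ; this is the classical Modica--Mortola form), perimeter penalization is typically approximated by an energy of the type
\[
P_\varepsilon(\varphi) \;=\; \int_\Omega \Big( \varepsilon\, |\nabla \varphi|^2 + \tfrac{1}{\varepsilon}\, \varphi^2 (1-\varphi)^2 \Big)\, dx,
\]
which $\Gamma$-converges, as $\varepsilon \to 0$ and up to a multiplicative constant, to the perimeter of the interface between the phases $\{\varphi = 0\}$ and $\{\varphi = 1\}$, while the rigidity objectives remain as bulk terms.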
The power of Clifford, or geometric, algebra lies in its ability to represent geometric operations in a concise and elegant manner. Clifford algebras provide natural generalizations of complex numbers, dual numbers, and quaternions into non-commutative multivectors. This paper demonstrates an algorithm for computing the inverses of such multivectors in a non-degenerate Clifford algebra of arbitrary dimension. The algorithm is a variation of the Faddeev-LeVerrier-Souriau algorithm and is implemented in the open-source computer algebra system Maxima. Symbolic and numerical examples in different Clifford algebras are presented.
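To convey the flavor of the method, here is a minimal Python sketch of the classical Faddeev-LeVerrier recursion for a matrix inverse; the paper's contribution is an analogous iteration carried out directly on multivectors in Maxima, so this is an illustration of the underlying recursion, not the paper's implementation.

```python
import numpy as np

# Classical Faddeev-LeVerrier recursion: computes the coefficients of the
# characteristic polynomial of A and, as a by-product, A^{-1} = -M_n / c_0.
def flv_inverse(A):
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)   # M_0 = 0
    c = 1.0                             # c_n = 1
    for k in range(1, n + 1):
        M = A @ M + c * np.eye(n)       # M_k = A M_{k-1} + c_{n-k+1} I
        c = -np.trace(A @ M) / k        # c_{n-k}
    if c == 0:
        raise ValueError("matrix is singular (det A = 0)")
    return -M / c                       # c_0 = (-1)^n det(A)

A = np.array([[2.0, 1.0], [0.0, 3.0]])
print(np.allclose(flv_inverse(A) @ A, np.eye(2)))  # True
```

The appeal of this recursion, and presumably of its multivector analogue, is that it needs only multiplication and trace-like operations, which is what makes it amenable to symbolic computation.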
In recent years, the concept of introducing physics to machine learning has become widely popular. Most physics-inclusive ML techniques, however, are still limited to a single geometry or a set of parametrizable geometries, so a new model must be trained for every new geometry, even a slightly modified one. With this work we introduce a technique for learning approximate solutions to the steady-state Navier--Stokes equations in varying geometries without the need for parametrization. The technique is based on a combination of a U-Net-like CNN and well-established discretization methods from the finite difference method. The results of our physics-aware CNN are compared to a state-of-the-art data-based approach. Additionally, we show how our approach performs when combined with the data-based approach.
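A hypothetical sketch of the "physics-aware" ingredient: a finite-difference stencil applied as a convolution to the network output, with the PDE residual as the training penalty. It is shown here for a Poisson-type model problem; the paper's scheme discretizes the steady-state Navier-Stokes equations, so all names and the stencil choice below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def laplacian(u, h):
    # 5-point finite-difference Laplacian applied as a convolution;
    # u has shape (batch, 1, H, W), h is the grid spacing
    k = torch.tensor([[0., 1., 0.],
                      [1., -4., 1.],
                      [0., 1., 0.]]).view(1, 1, 3, 3) / h**2
    return F.conv2d(u, k)

def residual_loss(u_pred, f, h):
    # mean-squared interior residual of the model problem -Laplace(u) = f
    r = -laplacian(u_pred, h) - f[:, :, 1:-1, 1:-1]
    return (r ** 2).mean()

u = torch.randn(2, 1, 32, 32, requires_grad=True)  # stand-in for CNN output
f = torch.randn(2, 1, 32, 32)
loss = residual_loss(u, f, h=1.0 / 31)
loss.backward()  # gradients flow through the stencil back to the network
```

Because the residual is evaluated pointwise on whatever grid the CNN outputs, no parametrization of the geometry is needed, which matches the motivation stated above.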
This work focuses on the numerical solution of super-linear stochastic differential equations (SDEs) involving different time scales. We propose a multiscale truncated Euler-Maruyama scheme for slow-fast SDEs with locally Lipschitz coefficients; the scheme is attractive because it is explicit and easily implementable. By virtue of the averaging principle, we obtain the strong convergence of its numerical solutions to the exact ones in the $p$th moment. Furthermore, under mild conditions on the coefficients, we also provide the corresponding strong error estimate. Finally, two examples and some numerical simulations are given to verify the theoretical results.
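A schematic sketch of one step of such a scheme follows; the precise truncation function and its step-size dependence are specified in the paper, so the fixed truncation radius and model functions below are illustrative assumptions only.

```python
import numpy as np

# Slow-fast system:
#   dX = f(X, Y) dt + g(X, Y) dW1                       (slow)
#   dY = (1/eps) b(X, Y) dt + (1/sqrt(eps)) s(X, Y) dW2 (fast)
def trunc(x, R):
    # project the state onto the ball of radius R before evaluating
    # the (super-linear) coefficients
    n = np.linalg.norm(x)
    return x if n <= R else x * (R / n)

def multiscale_tem(f, g, b, s, x0, y0, T, dt, eps, R, rng=None):
    rng = rng or np.random.default_rng()
    x = np.atleast_1d(x0).astype(float)
    y = np.atleast_1d(y0).astype(float)
    for _ in range(int(T / dt)):
        xt, yt = trunc(x, R), trunc(y, R)
        dw1 = np.sqrt(dt) * rng.standard_normal(x.shape)
        dw2 = np.sqrt(dt) * rng.standard_normal(y.shape)
        x = x + f(xt, yt) * dt + g(xt, yt) * dw1
        y = y + b(xt, yt) * dt / eps + s(xt, yt) * dw2 / np.sqrt(eps)
    return x, y
```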
In the estimation of a normal mean matrix under the matrix quadratic loss, we develop a general formula for the matrix quadratic risk of orthogonally invariant estimators. The derivation is based on several formulas for matrix derivatives of orthogonally invariant functions of matrices. As an application, we calculate the matrix quadratic risk of a singular value shrinkage estimator motivated by Stein's proposal, made 50 years ago, for improving on the Efron--Morris estimator.
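For orientation, the Efron--Morris estimator of a $p \times q$ mean matrix (with $p \ge q + 2$) has the classical closed form sketched below, which shrinks each singular value $\sigma_i$ of the observation $X$ to $\sigma_i - (p-q-1)/\sigma_i$. This is background material in its assumed standard form; the paper's estimator is a variant motivated by Stein's proposal.

```python
import numpy as np

# Classical Efron-Morris estimator and its equivalent singular-value form.
def efron_morris(X):
    p, q = X.shape
    assert p >= q + 2, "requires p >= q + 2"
    return X @ (np.eye(q) - (p - q - 1) * np.linalg.inv(X.T @ X))

def efron_morris_svd(X):
    # shrink each singular value: sigma_i -> sigma_i - (p - q - 1) / sigma_i
    p, q = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * (s - (p - q - 1) / s)) @ Vt

X = np.random.default_rng(1).standard_normal((10, 3))
print(np.allclose(efron_morris(X), efron_morris_svd(X)))  # True
```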
This paper investigates learning operators for identifying weak solutions to the Navier-Stokes equations. Our objective is to establish a connection between the initial data as input and the weak solution as output. To achieve this, we employ a combination of deep learning methods and a compactness argument to derive learning operators for weak solutions for arbitrarily large initial data in 2D, and for low-dimensional initial data in 3D. Additionally, we utilize the universal approximation theorem to derive a lower bound on the number of sensors required to achieve accurate identification of weak solutions to the Navier-Stokes equations. Our results demonstrate the potential of deep learning techniques for addressing challenges in the study of fluid mechanics, particularly in identifying weak solutions to the Navier-Stokes equations.
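The mention of sensors and the universal approximation theorem suggests an operator-learning setup in the spirit of DeepONet; the following schematic is our illustrative assumption of such an architecture, not the paper's construction, and all sizes and names are hypothetical.

```python
import torch
import torch.nn as nn

# Branch net encodes the initial data sampled at m "sensor" locations,
# trunk net encodes the space-time query point; their inner product
# approximates the solution value at that point.
class OperatorNet(nn.Module):
    def __init__(self, m, coord_dim, width=128, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m, width), nn.Tanh(),
                                    nn.Linear(width, p))
        self.trunk = nn.Sequential(nn.Linear(coord_dim, width), nn.Tanh(),
                                   nn.Linear(width, p))

    def forward(self, u0_sensors, xt):
        # u0_sensors: (batch, m) initial data at sensors
        # xt: (batch, coord_dim) query points, e.g. (x, y, t) in 2D
        return (self.branch(u0_sensors) * self.trunk(xt)).sum(dim=-1)

net = OperatorNet(m=100, coord_dim=3)
u_val = net(torch.randn(8, 100), torch.randn(8, 3))  # (8,) predicted values
```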
We investigate the error of the Euler scheme when the right-hand side function of the underlying ODE satisfies nonstandard assumptions, such as a local one-sided Lipschitz condition and local H\"older continuity. Moreover, we consider two cases regarding the available information about the right-hand side function: exact and noisy evaluations. An optimality analysis of the Euler scheme is also provided. Finally, we present the results of some numerical experiments.
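A minimal sketch of the two information settings; the explicit Euler step is standard, while the noise model below (an additive perturbation of each right-hand-side evaluation) is an illustrative assumption.

```python
import numpy as np

# Explicit Euler scheme for x' = f(t, x), x(t0) = x0, on [t0, T] with n steps.
# With noise > 0, each evaluation of f is corrupted, modelling inexact
# information about the right-hand side.
def euler(f, t0, x0, T, n, noise=0.0, rng=None):
    rng = rng or np.random.default_rng()
    h = (T - t0) / n
    t, x = t0, float(x0)
    for _ in range(n):
        fx = f(t, x)
        if noise > 0:
            fx += noise * rng.standard_normal()  # noisy evaluation of f
        x += h * fx
        t += h
    return x

# usage: x' = -x with exact vs. noisy information
print(euler(lambda t, x: -x, 0.0, 1.0, 1.0, 1000))             # ~ exp(-1)
print(euler(lambda t, x: -x, 0.0, 1.0, 1.0, 1000, noise=0.1))  # perturbed
```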
We consider finite element approximations to the optimal constant for the Hardy inequality with exponent $p=2$ in bounded domains of dimension $n=1$ or $n\geq 3$. For finite element spaces of piecewise linear and continuous functions on a mesh of size $h$, we prove that the approximate Hardy constant, $S_h^n$, converges to the optimal Hardy constant $S^n$ no slower than $O(1/\vert \log h \vert)$. We also show that the convergence is no faster than $O(1/\vert \log h \vert^2)$ if $n=1$, or if $n\geq 3$ and the domain is the unit ball and the finite element discretization exploits the rotational symmetry of the problem. Our estimates are compared to exact values of $S_h^n$ obtained computationally.
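For reference, in its classical point-singularity form (presumably the one intended here, since it matches the restriction to $n=1$ or $n \geq 3$: the constant degenerates for $n=2$), the inequality and its discrete Rayleigh-quotient analogue read
\[
\int_\Omega |\nabla u|^2\, dx \;\ge\; S^n \int_\Omega \frac{u^2}{|x|^2}\, dx,
\qquad S^n = \Big(\frac{n-2}{2}\Big)^2,
\]
\[
S_h^n \;=\; \inf_{0 \neq u_h \in V_h} \frac{\int_\Omega |\nabla u_h|^2\, dx}{\int_\Omega u_h^2/|x|^2\, dx},
\]
where $V_h$ is the space of piecewise linear, continuous finite element functions on the mesh of size $h$.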
The capacity of a channel characterizes the maximum rate at which information can be transmitted through it asymptotically faithfully. For a channel with multiple senders and a single receiver, computing the sum capacity is possible in theory but challenging in practice because of the nonconvex optimization involved. To address this challenge, we investigate three topics. In the first part, we study the sum capacity of a family of multiple access channels (MACs) obtained from nonlocal games. For any MAC in this family, we obtain an upper bound on the sum rate that depends only on the properties of the game when allowing assistance from an arbitrary set of correlations between the senders. This approach can be used to prove separations between sum capacities when the senders are allowed to share different sets of correlations, such as classical, quantum, or no-signalling correlations. We also construct a specific nonlocal game showing that bounding the sum capacity by relaxing the nonconvex optimization can give arbitrarily loose bounds. Motivated by this result, in the second part we study algorithms for nonconvex optimization of a class of functions we call Lipschitz-like functions. This class includes entropic quantities, so these results may be of independent interest in information theory. In the third part, we show that these techniques can be used to compute the sum capacity of an arbitrary two-sender MAC to a fixed additive precision in quasi-polynomial time. We showcase our method by efficiently computing the sum capacity of a family of two-sender MACs for which one of the input alphabets has size two. Furthermore, we demonstrate with an example that our algorithm may compute the sum capacity to a higher precision than the convex relaxation.
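Concretely, for a two-sender MAC $p(y \mid x_1, x_2)$ the sum capacity is the standard nonconvex program
\[
C_{\mathrm{sum}} \;=\; \max_{p_{X_1},\, p_{X_2}} I(X_1, X_2; Y),
\]
where the maximization runs over independent input distributions $p_{X_1} \times p_{X_2}$; it is precisely this restriction to product distributions that makes the optimization nonconvex.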
High-dimensional Partial Differential Equations (PDEs) are a popular mathematical modelling tool, with applications ranging from finance to computational chemistry. However, standard numerical techniques for solving these PDEs are typically affected by the curse of dimensionality. In this work, we tackle this challenge while focusing on stationary diffusion equations defined over a high-dimensional domain with periodic boundary conditions. Inspired by recent progress in sparse function approximation in high dimensions, we propose a new method called compressive Fourier collocation. Combining ideas from compressive sensing and spectral collocation, our method replaces the use of structured collocation grids with Monte Carlo sampling and employs sparse recovery techniques, such as orthogonal matching pursuit and $\ell^1$ minimization, to approximate the Fourier coefficients of the PDE solution. We conduct a rigorous theoretical analysis showing that the approximation error of the proposed method is comparable with the best $s$-term approximation (with respect to the Fourier basis) to the solution. Using the recently introduced framework of random sampling in bounded Riesz systems, our analysis shows that the compressive Fourier collocation method mitigates the curse of dimensionality with respect to the number of collocation points under sufficient conditions on the regularity of the diffusion coefficient. We also present numerical experiments that illustrate the accuracy and stability of the method for the approximation of sparse and compressible solutions.
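A toy one-dimensional sketch of the method follows: the solution of $-a u'' = f$ with periodic boundary conditions is assumed sparse in the Fourier basis, and its coefficients are recovered from random collocation points via orthogonal matching pursuit. The constant diffusion coefficient, problem sizes, and bare-bones OMP are illustrative simplifications of the method described above.

```python
import numpy as np

rng = np.random.default_rng(0)
a, N, s, m = 1.0, 41, 3, 20              # coefficient, basis size, sparsity, samples
ks = np.arange(-(N // 2), N // 2 + 1)    # Fourier frequencies
ks = ks[ks != 0]                         # drop k = 0 (kernel of -a d^2/dx^2)

# s-sparse ground-truth solution u(x) = sum_k c_k exp(2*pi*i*k*x)
c_true = np.zeros(len(ks), dtype=complex)
c_true[rng.choice(len(ks), s, replace=False)] = rng.standard_normal(s)

x = rng.random(m)                                  # Monte Carlo collocation points
Phi = np.exp(2j * np.pi * np.outer(x, ks))         # Fourier basis at those points
A = (a * (2 * np.pi * ks) ** 2) * Phi              # column k: (-a u'')(x_i) per mode
b = A @ c_true                                     # right-hand side f(x_i)

# orthogonal matching pursuit for the sparse coefficient vector
support, r = [], b.copy()
for _ in range(s):
    j = int(np.argmax(np.abs(A.conj().T @ r)))     # most correlated column
    support.append(j)
    sol, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
    r = b - A[:, support] @ sol                    # residual after refit

c_hat = np.zeros_like(c_true)
c_hat[support] = sol
print("recovery error:", np.linalg.norm(c_hat - c_true))
```

Note that only $m = 20$ random samples are used against $40$ candidate Fourier modes, which is the compressive-sensing regime the analysis above addresses.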