The capacity of a channel characterizes the maximum rate at which information can be transmitted faithfully through the channel in the asymptotic limit. For a channel with multiple senders and a single receiver, computing its sum capacity is possible in theory, but challenging in practice because of the nonconvex optimization involved. To address this challenge, we investigate three topics in our study. In the first part, we study the sum capacity of a family of multiple access channels (MACs) obtained from nonlocal games. For any MAC in this family, we obtain an upper bound on the sum rate that depends only on the properties of the game when allowing assistance from an arbitrary set of correlations between the senders. This approach can be used to prove separations between sum capacities when the senders are allowed to share different sets of correlations, such as classical, quantum or no-signalling correlations. We also construct a specific nonlocal game to show that the approach of bounding the sum capacity by relaxing the nonconvex optimization can give arbitrarily loose bounds. Motivated by this result, in the second part, we study algorithms for nonconvex optimization of a class of functions we call Lipschitz-like functions. This class includes entropic quantities, and hence these results may be of independent interest in information theory. In the third part, we show that these techniques can be used to compute the sum capacity of an arbitrary two-sender MAC to a fixed additive precision in quasi-polynomial time. We showcase our method by efficiently computing the sum capacity of a family of two-sender MACs for which one of the input alphabets has size two. Furthermore, we demonstrate with an example that our algorithm can compute the sum capacity to a higher precision than the convex relaxation allows.
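For orientation, the quantity in question for a two-sender MAC $p(y|x_1,x_2)$ is the standard sum capacity (this is the textbook expression, not a result specific to the family of channels studied here):
\[
C_{\mathrm{sum}} \;=\; \max_{p_{X_1}\, p_{X_2}} I(X_1 X_2 ; Y), \qquad I(X_1 X_2 ; Y) = H(Y) - H(Y \mid X_1, X_2),
\]
where the maximization ranges over product input distributions $p_{X_1} p_{X_2}$; it is this restriction to product distributions that makes the optimization nonconvex.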
Selfadhesivity is a property of entropic polymatroids which guarantees that the polymatroid can be glued to an identical copy of itself along arbitrary restrictions such that the two pieces are independent given the common restriction. We show that positive definite matrices satisfy this property as well and examine consequences for Gaussian conditional independence (CI) structures. New axioms of Gaussian CI are obtained by applying selfadhesivity to the previously known axioms of structural semigraphoids and orientable gaussoids.
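A hedged formalization of the gluing property, following the standard polymatroid notion (the precise definition used here may differ in details): a polymatroid $h$ on a ground set $N$ is selfadhesive at $M \subseteq N$ if, for a disjoint copy $N'$ of $N$ glued to $N$ along $M$, there exists a polymatroid $g$ on $N \cup N'$ with
\[
g|_{N} = h, \qquad g|_{N'} = h', \qquad g(N) + g(N') = g(N \cup N') + g(M),
\]
where $h'$ denotes the copy of $h$ on $N'$; the last identity expresses that the two copies are independent given the common restriction to $M$.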
Dye experimentation is a widely used method in experimental fluid mechanics for flow analysis or for the study of the transport of particles within a fluid. This technique is particularly useful in biomedical diagnostic applications ranging from hemodynamic analysis of cardiovascular systems to ocular circulation. However, simulating dyes governed by convection-diffusion partial differential equations (PDEs) can also be a useful post-processing analysis approach for computational fluid dynamics (CFD) applications. Such simulations can be used to identify the relative significance of different spatial subregions in particular time intervals of interest in an unsteady flow field. Additionally, dye evolution is closely related to non-discrete particle residence time (PRT) calculations that are governed by similar PDEs. This contribution introduces a pseudo-spectral method based on Fourier continuation (FC) for conducting dye simulations and non-discrete particle residence time calculations without numerical diffusion errors. Convergence and error analyses are performed with both manufactured and analytical solutions. The methodology is applied to three distinct physical/physiological cases: 1) flow over a two-dimensional (2D) cavity; 2) pulsatile flow in a simplified partially-grafted aortic dissection model; and 3) non-Newtonian blood flow in a Fontan graft. Although velocity data is provided in this work by numerical simulation, the proposed approach can also be applied to velocity data collected through experimental techniques such as particle image velocimetry.
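As a rough illustration of the kind of transport equation being solved, the sketch below advances a passive dye concentration governed by a 1D convection-diffusion equation using a periodic Fourier pseudo-spectral treatment of derivatives. The paper's FC-based method is designed for non-periodic domains, so this periodic simplification, and all parameter values, are assumptions for illustration only.

```python
# Minimal 1D periodic Fourier pseudo-spectral sketch of passive dye transport,
#   dc/dt + u * dc/dx = D * d2c/dx2,
# illustrating only the spectral treatment of derivatives. The FC method in the
# paper handles non-periodic data; this periodic toy and its parameters are
# illustrative assumptions.
import numpy as np

N, L, D, dt = 256, 2 * np.pi, 1e-3, 1e-3
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi      # angular wavenumbers
u = 1.0                                         # uniform advection velocity
c = np.exp(-((x - np.pi) ** 2) / 0.1)           # initial dye concentration

def rhs(c):
    c_hat = np.fft.fft(c)
    dcdx = np.real(np.fft.ifft(1j * k * c_hat))       # spectral first derivative
    d2cdx2 = np.real(np.fft.ifft(-(k ** 2) * c_hat))  # spectral second derivative
    return -u * dcdx + D * d2cdx2

for _ in range(1000):            # explicit Euler steps (small dt for stability)
    c = c + dt * rhs(c)

print("total dye (conserved up to time-stepping error):", c.mean() * L)
```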
A rigidity circuit (in 2D) is a minimal dependent set in the rigidity matroid, i.e. a minimal graph supporting a non-trivial stress in any generic placement of its vertices in $\mathbb R^2$. Any rigidity circuit on $n\geq 5$ vertices can be obtained from rigidity circuits on fewer vertices by applying the combinatorial resultant (CR) operation. The inverse operation is called a combinatorial resultant decomposition (CR-decomp). Any rigidity circuit on $n\geq 5$ vertices can be successively decomposed into smaller circuits, until the complete graphs $K_4$ are reached. This sequence of CR-decomps has the structure of a rooted binary tree called the combinatorial resultant tree (CR-tree). A CR-tree encodes an elimination strategy for computing circuit polynomials via Sylvester resultants. Different CR-trees lead to elimination strategies that can vary greatly in time and memory consumption. It is an open problem to establish criteria for optimal CR-trees, or at least to characterize those CR-trees that lead to good elimination strategies. In [12] we presented an algorithm for enumerating CR-trees, together with polynomial-time algorithms for decomposing 3-connected rigidity circuits. In this paper we focus on those circuits that are not 3-connected, which we simply call 2-connected. In order to enumerate the CR-decomps of a 2-connected circuit $G$, a brute-force exponential-time search has to be performed among the subgraphs induced by the subsets of $V(G)$. This exponential-time bottleneck is not present in the 3-connected case. In this paper we argue that we do not have to account for all possible CR-decomps of 2-connected rigidity circuits to find a good elimination strategy; we only have to account for those CR-decomps that are 2-splits, all of which can be enumerated in polynomial time. We present algorithms and computational evidence in support of this heuristic.
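To make the elimination step that a CR-tree encodes concrete, the toy sketch below eliminates a shared variable from two polynomials with SymPy's Sylvester resultant; the polynomials are arbitrary illustrations, not actual circuit polynomials.

```python
# Toy illustration of the Sylvester-resultant elimination step encoded by a
# CR-tree: eliminating a shared variable from two polynomials. The polynomials
# below are arbitrary examples, not circuit polynomials.
from sympy import symbols, resultant, expand

x, y, z = symbols('x y z')
f = x**2 + y**2 - 1          # stand-in for one child's polynomial (illustrative)
g = x + y*z - 2              # stand-in for the other child's polynomial (illustrative)

# Resultant with respect to x: a polynomial in the remaining variables that
# vanishes whenever f and g share a root in x.
r = resultant(f, g, x)
print(expand(r))             # y**2*z**2 - 4*y*z + y**2 + 3
```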
We consider a sharp interface formulation for the multi-phase Mullins-Sekerka flow. The flow is characterized by a network of curves evolving such that the total surface energy of the curves is reduced, while the areas of the enclosed phases are conserved. Making use of a variational formulation, we introduce a fully discrete finite element method. Our discretization features a parametric approximation of the moving interfaces that is independent of the discretization used for the equations in the bulk. The scheme can be shown to be unconditionally stable and to satisfy an exact volume conservation property. Moreover, an inherent tangential velocity for the vertices on the discrete curves leads to asymptotically equidistributed vertices, meaning no remeshing is necessary in practice. Several numerical examples, including a convergence experiment for the three-phase Mullins-Sekerka flow, demonstrate the capabilities of the introduced method.
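For context, the classical two-phase Mullins-Sekerka problem, of which the multi-phase network flow considered here is a generalization, can be written as follows (sign conventions vary; this is the standard two-phase statement, not the paper's multi-phase formulation verbatim):
\[
\Delta u = 0 \quad \text{in } \Omega \setminus \Gamma(t), \qquad
u = \kappa \quad \text{on } \Gamma(t), \qquad
\mathcal{V} = \big[\partial_{\nu} u\big] \quad \text{on } \Gamma(t),
\]
where $u$ is the chemical potential, $\kappa$ the curvature of the interface $\Gamma(t)$, $\mathcal{V}$ its normal velocity, and $[\partial_{\nu} u]$ the jump of the normal derivative across $\Gamma(t)$.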
We introduce so-called functional input neural networks defined on a possibly infinite dimensional weighted space with values also in a possibly infinite dimensional output space. To this end, we use an additive family as hidden layer maps and a non-linear activation function applied to each hidden layer. Relying on Stone-Weierstrass theorems on weighted spaces, we prove a global universal approximation result for generalizations of continuous functions going beyond the usual approximation on compact sets. This applies in particular to the approximation of (non-anticipative) path space functionals via functional input neural networks. As a further application of the weighted Stone-Weierstrass theorem, we prove a global universal approximation result for linear functions of the signature. We also introduce the viewpoint of Gaussian process regression in this setting and show that the reproducing kernel Hilbert spaces of signature kernels are Cameron-Martin spaces of certain Gaussian processes. This paves the way towards uncertainty quantification for signature kernel regression.
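As a small illustration of a "linear function of the signature", the sketch below computes a depth-2 truncated signature of piecewise-linear paths and fits a linear readout by least squares; the data and target functional are illustrative assumptions, not the setting of the approximation theorem.

```python
# Minimal sketch of a depth-2 truncated path signature with a linear readout.
# The paths and the regression target are illustrative assumptions.
import numpy as np

def signature_level2(path):
    """Truncated signature (levels 0, 1, 2) of a piecewise-linear path in R^d."""
    increments = np.diff(path, axis=0)            # (n-1, d)
    d = path.shape[1]
    level1 = increments.sum(axis=0)               # first iterated integrals
    level2 = np.zeros((d, d))
    running = np.zeros(d)
    for dx in increments:
        # second iterated integrals, exact on each linear piece
        level2 += np.outer(running, dx) + 0.5 * np.outer(dx, dx)
        running += dx
    return np.concatenate(([1.0], level1, level2.ravel()))

rng = np.random.default_rng(0)
paths = [np.vstack([np.zeros((1, 2)),
                    np.cumsum(rng.normal(size=(50, 2)), axis=0)])
         for _ in range(200)]
X = np.stack([signature_level2(p) for p in paths])
y = np.array([p[-1, 0] * p[-1, 1] for p in paths])   # toy path functional

# linear function of the signature, fitted by least squares
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("training RMSE:", np.sqrt(np.mean((X @ coef - y) ** 2)))
```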
Neuromorphic computing is one of the few current approaches that have the potential to significantly reduce power consumption in Machine Learning and Artificial Intelligence. Imam & Cleland presented an odour-learning algorithm that runs on a neuromorphic architecture and is inspired by circuits described in the mammalian olfactory bulb. They assess the algorithm's performance in "rapid online learning and identification" of gaseous odorants and odorless gases (collectively referred to as "gases") using a set of gas sensor recordings of different odour presentations, corrupted by impulse noise. We replicated parts of the study and discovered limitations that affect some of the conclusions drawn. First, the dataset used suffers from sensor drift and a non-randomised measurement protocol, rendering it of limited use for odour identification benchmarks. Second, we found that the model is restricted in its ability to generalise over repeated presentations of the same gas. We demonstrate that the task the study refers to can be solved with a simple hash table approach, matching or exceeding the reported results in accuracy and runtime. Therefore, a validation of the model that goes beyond restoring a learned data sample remains to be shown, in particular its suitability for odour identification tasks.
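A minimal sketch of a lookup-table baseline in the spirit described above: stored presentations are kept in a hash table, and a noise-corrupted query is assigned the label of the stored entry agreeing with it on the most sensors. The quantization and matching rule below are assumptions, not the exact procedure used in the replication.

```python
# Sketch of a simple hash-table baseline for identifying a noise-corrupted
# repeat of a learned sensor sample. Alphabet size, noise level and the
# matching rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_gases = 72, 10

# "learned" presentations: one quantized sensor vector per gas
learned = {g: tuple(rng.integers(0, 16, size=n_sensors)) for g in range(n_gases)}
table = {v: g for g, v in learned.items()}        # hash table: sample -> label

def identify(sample):
    """Return the stored label whose key agrees with `sample` on most sensors."""
    return max(table.items(),
               key=lambda kv: sum(a == b for a, b in zip(kv[0], sample)))[1]

# corrupt 20% of the sensors with impulse noise and try to recover the label
correct = 0
for g, v in learned.items():
    noisy = np.array(v)
    idx = rng.choice(n_sensors, size=n_sensors // 5, replace=False)
    noisy[idx] = rng.integers(0, 16, size=idx.size)
    correct += (identify(tuple(noisy)) == g)
print(f"recovered {correct}/{n_gases} corrupted presentations")
```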
We propose a method to modify a polygonal mesh in order to fit the zero-isoline of a level set function by extending a standard body-fitted strategy to a tessellation with arbitrarily-shaped elements. The novel level set-fitted approach, in combination with a Discontinuous Galerkin finite element approximation, provides an ideal setting to model physical problems characterized by embedded or evolving complex geometries, since it avoids any mesh post-processing aimed at ensuring grid quality. The proposed methodology is first assessed on the linear elasticity equation, by verifying the approximation capability of the level set-fitted approach when dealing with configurations with heterogeneous material properties. Subsequently, we combine the level set-fitted methodology with a minimum compliance topology optimization technique, in order to deliver optimized layouts exhibiting crisp boundaries and reliable mechanical performance. An extensive numerical test campaign confirms the effectiveness of the proposed method.
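The sketch below illustrates the basic ingredient of a level set-fitted mesh modification: zero crossings of the level set are located along edges by linear interpolation and the nearer endpoint is snapped onto them. The full strategy for arbitrarily-shaped polygonal elements is more general; this toy mesh and snapping rule are illustrative assumptions.

```python
# Simplified illustration of fitting mesh vertices to the zero-isoline of a
# level set function. The mesh, the level set and the snapping rule are
# illustrative assumptions, not the paper's algorithm.
import numpy as np

def snap_to_isoline(vertices, edges, phi):
    v = vertices.copy()
    for i, j in edges:
        if phi[i] * phi[j] < 0.0:                       # sign change along the edge
            t = phi[i] / (phi[i] - phi[j])              # linear zero crossing
            p = (1 - t) * v[i] + t * v[j]
            k = i if abs(phi[i]) <= abs(phi[j]) else j  # move the closer vertex
            v[k] = p                                    # (a vertex may be snapped more than once)
    return v

# toy mesh: unit square corners plus centre; level set = circle of radius 0.75
vertices = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
phi = np.linalg.norm(vertices, axis=1) - 0.75
print(snap_to_isoline(vertices, edges, phi))
```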
We examine a stochastic formulation for data-driven optimization wherein the decision-maker is not privy to the true distribution, but knows that it lies in some hypothesis set and possesses a historical data set from which information about it can be gleaned. We define a prescriptive solution as a decision rule mapping such a data set to decisions. As no prescriptive solution is generalizable over the entire hypothesis set, we define out-of-sample optimality as a local average over a neighbourhood of hypotheses, averaged over the sampling distribution. We prove sufficient conditions for local out-of-sample optimality, which reduce to functions of the sufficient statistic of the hypothesis family. We present an optimization problem that solves for such an out-of-sample optimal solution, and does so efficiently through a combination of sampling and bisection search algorithms. Finally, we illustrate our approach on the newsvendor problem and find strong performance when compared against alternatives in the literature. Our results have potential implications for end-to-end learning and Bayesian optimization.
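For comparison, a textbook data-driven newsvendor baseline (sample average approximation) simply orders the empirical quantile of historical demand at the critical ratio. The sketch below shows this standard rule, which is the kind of alternative a prescriptive solution could be benchmarked against; it is not the method developed here, and the data and prices are assumptions.

```python
# Standard data-driven newsvendor baseline: order the empirical quantile of
# historical demand at the critical ratio. Prices and demand data are
# illustrative assumptions.
import numpy as np

def newsvendor_order(demand_data, price, cost):
    """Order quantity maximizing expected profit under the empirical distribution."""
    critical_ratio = (price - cost) / price      # underage / (underage + overage)
    return float(np.quantile(demand_data, critical_ratio))

rng = np.random.default_rng(0)
demand_data = rng.gamma(shape=5.0, scale=20.0, size=200)   # historical demands
q = newsvendor_order(demand_data, price=10.0, cost=4.0)
print("order quantity:", round(q, 1))
```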
We propose an approach to compute inner- and outer-approximations of the sets of values satisfying constraints expressed as arbitrarily quantified formulas. Such formulas arise, for instance, when specifying important problems in control such as robustness, motion planning or controller comparison. We propose an interval-based method which allows for tractable yet tight approximations. We demonstrate its applicability through a series of examples and benchmarks using a prototype implementation.
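A minimal sketch of the interval idea: boxes of parameter values are classified as certainly satisfying, certainly violating, or undecided with respect to a universally quantified constraint, and undecided boxes are bisected; the certain boxes form the inner-approximation and the certain-plus-undecided boxes form the outer-approximation. The constraint, domain and tolerance below are illustrative assumptions.

```python
# Interval-based inner/outer-approximation of the set
#     { x in [-3, 3] : for all d in [-1, 1],  x**2 + d*x - 2 <= 0 },
# whose exact answer is [-1, 1]. The constraint and the bisection strategy
# are illustrative assumptions.

def imul(a, b):
    """Interval product (a sound, if conservative, enclosure)."""
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def f_range(x, d):
    """Interval enclosure of x**2 + d*x - 2 over the boxes x and d."""
    return iadd(iadd(imul(x, x), imul(d, x)), (-2.0, -2.0))

def classify(x, d=(-1.0, 1.0), tol=1e-2):
    lo, hi = f_range(x, d)
    if hi <= 0.0:
        return [("inside", x)]                 # constraint holds for all x, all d
    if lo > 0.0:
        return [("outside", x)]                # constraint fails for all x in the box
    if x[1] - x[0] < tol:
        return [("unknown", x)]                # undecided: kept only in the outer set
    mid = 0.5 * (x[0] + x[1])
    return classify((x[0], mid), d, tol) + classify((mid, x[1]), d, tol)

boxes = classify((-3.0, 3.0))
inner = sum(b - a for tag, (a, b) in boxes if tag == "inside")
outer = sum(b - a for tag, (a, b) in boxes if tag != "outside")
print(f"inner measure ~ {inner:.3f}, outer measure ~ {outer:.3f}")  # both near 2
```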
Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function to encourage similar images to be projected close to each other. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
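For concreteness, the sketch below shows a generic pairwise similarity-preserving hashing loss of the kind such deep supervised methods build on: continuous codes are pushed towards +1/-1 while pairwise inner products are pushed towards agreement with the similarity labels. This is a common baseline formulation, not the proposed SRH loss, and the code values are random stand-ins for CNN outputs.

```python
# Generic pairwise similarity-preserving hashing loss (a common baseline
# formulation, NOT the proposed SRH loss). Codes are random stand-ins for
# CNN outputs.
import numpy as np

def pairwise_hashing_loss(codes, similarity, quant_weight=0.1):
    """codes: (n, K) real-valued outputs; similarity: (n, n) with entries +1/-1."""
    K = codes.shape[1]
    inner = codes @ codes.T                                  # pairwise inner products
    similarity_term = np.mean((inner - K * similarity) ** 2)
    quantization_term = np.mean((np.abs(codes) - 1.0) ** 2)  # push codes towards +/-1
    return similarity_term + quant_weight * quantization_term

rng = np.random.default_rng(0)
codes = np.tanh(rng.normal(size=(8, 16)))                # stand-in for CNN outputs
labels = rng.integers(0, 2, size=8)
similarity = np.where(labels[:, None] == labels[None, :], 1.0, -1.0)
print("loss:", round(pairwise_hashing_loss(codes, similarity), 3))
```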