Ill-posed inverse imaging problems arise across many domains of science and technology, from medical diagnosis to astronomical studies. To reconstruct images from incomplete and distorted data, algorithms must account for both the physical mechanisms that generate the measurements and the intrinsic characteristics of the images being analyzed. In this work, we review the sparse representation of images, a realistic, compact, and effective generative model for natural images inspired by the visual system of mammals. Training this model on a vast collection of images enables us to address ill-posed linear inverse problems. Moreover, we extend the application of sparse coding to the non-linear and ill-posed problem of microwave tomography imaging, which could lead to significant improvements over state-of-the-art algorithms.
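To make the linear inverse-problem setting concrete, here is a minimal sketch of sparse coding via the classical iterative shrinkage-thresholding algorithm (ISTA), which minimizes 0.5*||y - Dz||^2 + lam*||z||_1; the dictionary `D`, measurement `y`, and parameter values are illustrative assumptions, not the trained model from the abstract.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=500):
    """Sparse coding by ISTA: min_z 0.5*||y - D z||^2 + lam*||z||_1."""
    L = np.linalg.norm(D, 2) ** 2                   # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)                    # gradient of the smooth term
        u = z - grad / L                            # gradient step
        z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))                  # overcomplete dictionary (assumed given)
z_true = np.zeros(256)
z_true[rng.choice(256, 5, replace=False)] = 1.0     # sparse ground-truth code
y = D @ z_true + 0.01 * rng.standard_normal(64)     # noisy measurement
z_hat = ista(D, y)
print("recovered support:", np.sort(np.argsort(-np.abs(z_hat))[:5]))
```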
We consider Maxwell eigenvalue problems on uncertain shapes, with perfectly conducting TESLA cavities as the driving example. Due to the shape uncertainty, the resulting eigenvalues and eigenmodes are also uncertain, and it is well known that the eigenvalues may exhibit crossings or bifurcations under perturbation. We discuss how the shape uncertainties can be modelled using the domain mapping approach and how the deformation mapping can be expressed as coefficients in Maxwell's equations. Using derivatives of these coefficients and derivatives of the eigenpairs, we follow a perturbation approach to compute approximations of the mean and covariance of the eigenpairs. For small perturbations, these approximations are faster and more accurate than Monte Carlo or similar sampling-based strategies. Numerical experiments for a three-dimensional 9-cell TESLA cavity are presented.
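As a toy illustration of the perturbation idea (on a small symmetric matrix rather than the actual Maxwell operator), the sketch below propagates a random parameter through the first-order eigenvalue derivative dλ/dω = vᵀ(∂A/∂ω)v for a simple eigenpair and compares the resulting mean and variance with Monte Carlo sampling; the matrix family A(ω) is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
A0 = np.diag([1.0, 2.0, 5.0])                        # nominal operator (illustrative)
dA = rng.standard_normal((3, 3))
dA = (dA + dA.T) / 2                                 # symmetric perturbation direction

def eigmin(omega):
    """Smallest eigenvalue of the perturbed matrix A(omega) = A0 + omega*dA."""
    return np.linalg.eigvalsh(A0 + omega * dA)[0]

# First-order perturbation: for a simple eigenpair (lam, v) of A0,
# d(lam)/d(omega) = v^T dA v, so for omega ~ N(0, sigma^2):
lam0, V = np.linalg.eigh(A0)
v = V[:, 0]
dlam = v @ dA @ v
sigma = 1e-2
mean_pert = lam0[0]                                  # first-order mean (omega has zero mean)
var_pert = (dlam * sigma) ** 2                       # first-order variance

samples = [eigmin(sigma * rng.standard_normal()) for _ in range(20000)]
print(mean_pert, np.mean(samples))                   # agree up to O(sigma^2) corrections
print(var_pert, np.var(samples))                     # agree to leading order
```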
We study the complexity (that is, the weight of the multiplication table) of the elliptic normal bases introduced by Couveignes and Lercier. We give an upper bound on the complexity of these elliptic normal bases, and we analyze the weight of some special vectors related to the multiplication table of those bases. This analysis leads us to some perspectives on the search for low-complexity normal bases from elliptic periods.
Nosocomial infections have important consequences for patients and hospital staff: they worsen patient outcomes, and their management stresses already overburdened health systems. Accurate judgements of whether an infection is nosocomial help staff make appropriate choices to protect other patients within the hospital. Nosocomiality cannot be properly assessed without considering whether the infected patient came into contact with high-risk potential infectors within the hospital. We developed a Bayesian model that integrates epidemiological, contact, and pathogen genetic data to determine how likely an infection is to be nosocomial and the probability that given infection candidates are the source of the infection.
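A minimal sketch of the kind of evidence combination involved, under loudly illustrative assumptions: the posterior probability that a candidate is the source is taken proportional to a prior times independent contact and genetic-distance likelihood terms. The candidate names, numbers, independence assumption, and Poisson SNP-distance model are hypothetical simplifications, not the paper's actual model.

```python
import numpy as np
from scipy.stats import poisson

# Hypothetical candidates: two hospital contacts plus a community source.
candidates = ["ward_patient_A", "ward_patient_B", "community"]
prior = np.array([0.3, 0.3, 0.4])              # assumed prior source probabilities
p_contact = np.array([0.9, 0.2, 1.0])          # P(observed contact pattern | source), assumed
snp_dist = np.array([2, 15, 8])                # pathogen SNP distances (illustrative)
mu = 5.0                                       # assumed Poisson mean SNP distance for true pairs

# Genetic likelihood: a simple Poisson model for SNP distance (a common toy choice).
p_genetic = poisson.pmf(snp_dist, mu)

post = prior * p_contact * p_genetic
post /= post.sum()
for c, p in zip(candidates, post):
    print(f"P(source = {c} | data) = {p:.3f}")
print("P(nosocomial | data) =", round(post[:2].sum(), 3))
```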
Recently, a family of unconventional integrators for ODEs with polynomial vector fields was proposed, based on the polarization of vector fields. The simplest instance is the by now well-known Kahan discretization for quadratic vector fields. All these integrators seem to possess remarkable conservation properties. In particular, it has been proved that, when the underlying ODE is Hamiltonian, its polarization discretization possesses an integral of motion and an invariant volume form. In this note, we propose a new algebraic approach to the derivation of integrals of motion for polarization discretizations.
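For concreteness, here is a minimal sketch of the Kahan discretization of a quadratic vector field, using the identity that the polarized form equals 2f((u+u')/2) − (f(u)+f(u'))/2 for quadratic f; since this expression is linear in u', each step is only linearly implicit. The Lotka–Volterra example and step size are illustrative choices, not taken from the note.

```python
import numpy as np
from scipy.optimize import fsolve

def f(u):
    """Quadratic Lotka-Volterra vector field (illustrative example)."""
    x, y = u
    return np.array([x * (2.0 - y), y * (x - 1.0)])

def kahan_step(u, h):
    """Kahan update: (u' - u)/h = 2 f((u + u')/2) - (f(u) + f(u'))/2.
    The residual is linear in u', so the solve is cheap and exact."""
    def residual(up):
        return up - u - h * (2.0 * f((u + up) / 2.0) - (f(u) + f(up)) / 2.0)
    return fsolve(residual, u)

u = np.array([1.0, 1.0])
traj = [u]
for _ in range(1000):
    u = kahan_step(u, 0.05)
    traj.append(u)
# Empirically the orbit stays on a closed curve rather than spiraling,
# reflecting the conservation properties discussed above.
print(traj[-1])
```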
The need to Fourier transform data sets with irregular sampling is shared by various domains of science, for example astronomy and seismology. Iterative methods have been developed that yield approximate solutions. Here, an exact solution to the problem for band-limited periodic signals is presented. The exact spectrum can be deduced from the spectrum of the non-equispaced data through the inversion of a Toeplitz matrix. The result applies to data of any dimension. This method also provides an excellent approximation for non-periodic band-limited signals. The method reaches very high dynamic ranges ($10^{13}$ in double precision), which depend on the regularity of the samples.
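A minimal numerical sketch of the underlying idea, with illustrative parameters: for a periodic signal band-limited to |k| ≤ K sampled at irregular times t_j, the normal-equation matrix has entries (AᴴA)_{kl} = Σ_j e^{−2πi(k−l)t_j}, which depend only on k−l, i.e., the matrix is Toeplitz, and solving the system recovers the exact Fourier coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)
K = 8                                            # band limit: coefficients c_k, |k| <= K
ks = np.arange(-K, K + 1)
c_true = rng.standard_normal(len(ks)) + 1j * rng.standard_normal(len(ks))

# Irregular samples of the band-limited periodic signal on [0, 1).
t = np.sort(rng.uniform(0.0, 1.0, 60))
A = np.exp(2j * np.pi * np.outer(t, ks))         # A[j, k] = exp(2*pi*i*k*t_j)
s = A @ c_true

# Normal equations: (A^H A) c = A^H s; A^H A is Toeplitz because its (k, l)
# entry sum_j exp(-2*pi*i*(k - l)*t_j) depends only on k - l.
T = A.conj().T @ A
c_hat = np.linalg.solve(T, A.conj().T @ s)
print("max coefficient error:", np.max(np.abs(c_hat - c_true)))  # near machine precision
```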
Deterministic communication is required by applications in several industry verticals, including manufacturing, automotive, finance, and health care. These applications rely on reliable and time-synchronized delivery of information among the communicating devices; therefore, large delay variations in packet delivery or inaccuracies in time synchronization cannot be tolerated. In particular, the industrial drive toward digitization, the connectivity of digital and physical systems, and flexible production design require deterministic and time-synchronized communication. A network supporting deterministic communication guarantees data delivery within a specified time with high reliability. The IEEE 802.1 TSN task group is developing standards to provide deterministic communication over IEEE 802 networks. The IEEE 802.1AS standard defines a time synchronization mechanism for the accurate distribution of time among the communicating devices. The time synchronization accuracy depends on the accurate calculation of the residence time, i.e., the time between the ingress and egress ports of a bridge, which includes the processing, queuing, transmission, and link latency of the timing information. This paper discusses the time synchronization mechanisms supported in current wired and wireless integrated systems.
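To illustrate the quantities involved, here is a minimal sketch of an 802.1AS-style peer-delay computation, assuming a symmetric link and ignoring clock rate differences: the link delay is estimated from the four timestamps of a peer-delay exchange, and a bridge adds its residence time (plus ingress link delay) to the correction it forwards. The timestamp values are hypothetical.

```python
def peer_link_delay(t1, t2, t3, t4):
    """Mean link delay from a peer-delay exchange (symmetric link assumed):
    t1 = request sent, t2 = request received,
    t3 = response sent, t4 = response received."""
    return ((t4 - t1) - (t3 - t2)) / 2.0

def forwarded_correction(correction_in, ingress_ts, egress_ts, link_delay):
    """Correction a bridge forwards with timing information: the upstream
    correction plus the ingress link delay plus this bridge's residence time."""
    residence_time = egress_ts - ingress_ts   # processing + queuing + transmission
    return correction_in + link_delay + residence_time

# Illustrative nanosecond timestamps (hypothetical values).
d = peer_link_delay(t1=0.0, t2=500.0, t3=600.0, t4=1100.0)
print("link delay:", d, "ns")                 # 500.0 ns
print("correction:", forwarded_correction(0.0, ingress_ts=5000.0,
                                          egress_ts=12000.0, link_delay=d), "ns")
```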
Neural network wavefunctions optimized using the variational Monte Carlo method have been shown to produce highly accurate results for the electronic structure of atoms and small molecules, but the high cost of optimizing such wavefunctions prevents their application to larger systems. We propose the Subsampled Projected-Increment Natural Gradient Descent (SPRING) optimizer to reduce this bottleneck. SPRING combines ideas from the recently introduced minimum-step stochastic reconfiguration optimizer (MinSR) and the classical randomized Kaczmarz method for solving linear least-squares problems. We demonstrate that SPRING outperforms both MinSR and the popular Kronecker-Factored Approximate Curvature method (KFAC) across a number of small atoms and molecules, when the learning rates of all methods are optimally tuned. For example, on the oxygen atom, SPRING attains chemical accuracy after forty thousand training iterations, whereas both MinSR and KFAC fail to do so even after one hundred thousand iterations.
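As background for the Kaczmarz ingredient, a minimal sketch of the classical randomized Kaczmarz iteration on a consistent least-squares system: each step projects the iterate onto one randomly chosen row constraint, with rows sampled proportionally to their squared norms. The example problem is illustrative; SPRING itself applies this idea in the natural-gradient setting, not reproduced here.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=5000, seed=0):
    """Solve A x = b (consistent) by projecting onto one randomly chosen
    row constraint per step; rows are sampled with P(i) ~ ||a_i||^2."""
    rng = np.random.default_rng(seed)
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]   # project onto {x : a_i . x = b_i}
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
b = A @ x_true                                   # consistent system
x = randomized_kaczmarz(A, b)
print("error:", np.linalg.norm(x - x_true))      # decays linearly in expectation
```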
We generalize the Poisson limit theorem to binary functions of random objects whose law is invariant under the action of an amenable group. Examples include stationary random fields, exchangeable sequences, and exchangeable graphs. A celebrated result of E. Lindenstrauss shows that normalized sums over certain increasing subsets of such groups approximate expectations. Our results clarify that the corresponding unnormalized sums of binary statistics are asymptotically Poisson, provided suitable mixing conditions hold. They extend further to randomly subsampled sums and also show that strict invariance of the distribution is not needed if the requisite mixing condition defined by the group holds. We illustrate the results with applications to random fields, Cayley graphs, and Poisson processes on groups.
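A toy sanity check of the statement in the simplest case, an i.i.d. (hence trivially mixing) binary field, with illustrative parameters: when the event probability scales like 1/n, the unnormalized sum over a window of size n is approximately Poisson distributed.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
n, lam = 10_000, 3.0                 # window size and target Poisson mean
p = lam / n                          # rare-event probability scales like 1/n

# Unnormalized sums of binary statistics over an i.i.d. (trivially mixing) field.
sums = rng.binomial(n, p, size=100_000)
emp = np.bincount(sums, minlength=10)[:10] / len(sums)
print("empirical :", np.round(emp, 4))
print("Poisson(3):", np.round(poisson.pmf(np.arange(10), lam), 4))
```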
We hypothesize that, due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the others. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate the model's dependence on each modality, we compute the gain in accuracy obtained when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in the conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm that balances the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
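A minimal sketch of the conditional utilization rate as defined above: the rate for modality m1 given m2 is the accuracy gain from adding m1 on top of m2. The accuracy numbers below are hypothetical, chosen only to illustrate the kind of imbalance described; how the single-modality accuracies are measured (e.g., dropping or replacing an input) is model-specific.

```python
def conditional_utilization_rate(acc_both, acc_single):
    """u(m1 | m2) = accuracy(m1, m2) - accuracy(m2): the gain in held-out
    accuracy when the model has access to m1 in addition to m2."""
    return acc_both - acc_single

# Hypothetical numbers illustrating an imbalance between two modalities:
acc_audio_video = 0.92   # model sees both modalities
acc_video_only = 0.90    # audio input dropped/replaced
acc_audio_only = 0.55    # video input dropped/replaced

u_audio = conditional_utilization_rate(acc_audio_video, acc_video_only)  # 0.02
u_video = conditional_utilization_rate(acc_audio_video, acc_audio_only)  # 0.37
print(f"u(audio | video) = {u_audio:.2f}, u(video | audio) = {u_video:.2f}")
```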
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.