In the context of visual perception, the optical signal from a scene is converted into the electronic domain by detectors in the form of image data, which are then processed to extract visual information. In noisy, weak-signal environments such as thermal imaging for night-vision applications, however, the performance of neural computing tasks faces a significant bottleneck due to the inherent degradation of data quality upon noisy detection. Here, we propose a concept of optical signal processing before detection to address this issue. We demonstrate that spatially redistributing optical signals through a properly designed linear transformer can enhance the detection-noise resilience of visual perception tasks, as benchmarked with MNIST classification. Our idea is supported by a quantitative analysis detailing the relationship between signal concentration and noise robustness, as well as by a practical implementation in an incoherent imaging system. This compute-first detection scheme can pave the way for advancing infrared machine vision technologies widely used in industrial and defense applications.
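As a rough numerical illustration of the signal-concentration argument above (a minimal sketch with hypothetical numbers, not the authors' optical model), the snippet below compares the per-pixel signal-to-noise ratio when the same total optical power reaches the detector either spread over many pixels or concentrated onto a few pixels by a lossless linear redistribution, under a fixed additive detector noise level.

import numpy as np

noise_sigma = 1.0  # additive detector (read) noise level per pixel, arbitrary units

# Same total optical power, two spatial distributions before detection.
spread = np.full(64, 0.5)                            # power spread over 64 pixels
concentrated = np.zeros(64); concentrated[:8] = 4.0  # same total power on 8 pixels

def per_pixel_snr(intensity):
    """Mean signal on the illuminated pixels divided by the detector noise level."""
    lit = intensity[intensity > 0]
    return lit.mean() / noise_sigma

print(per_pixel_snr(spread))        # 0.5 -> signal buried in detector noise
print(per_pixel_snr(concentrated))  # 4.0 -> concentration improves noise robustness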
We study half-space separation in the convexity of chordless paths of a graph, i.e., monophonic convexity. In this problem, one is given a graph and two disjoint subsets of vertices, and the task is to decide whether these two sets can be separated by complementary convex sets, called half-spaces. While this problem is known to be $\mathbf{NP}$-complete for geodesic convexity -- the convexity of shortest paths -- we show that it can be solved in polynomial time for monophonic convexity.
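For concreteness, the decision problem just described can be formalized as follows (a direct restatement of the abstract in our own notation): given a graph $G=(V,E)$ and disjoint sets $A,B\subseteq V$, decide whether there exists $H\subseteq V$ with
\[
A \subseteq H,\qquad B \subseteq V\setminus H,\qquad \text{and both } H \text{ and } V\setminus H \text{ monophonically convex},
\]
where a set $S\subseteq V$ is monophonically convex if every chordless (induced) path between two vertices of $S$ lies entirely in $S$.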
The influence of natural image transformations on receptive field responses is crucial for modelling visual operations in computer vision and biological vision. In this regard, covariance properties with respect to geometric image transformations in the earliest layers of the visual hierarchy are essential for expressing robust image operations, and for formulating invariant visual operations at higher levels. This paper defines and proves a set of joint covariance properties under compositions of spatial scaling transformations, spatial affine transformations, Galilean transformations and temporal scaling transformations, which make it possible to characterize how different types of image transformations interact with each other and with the associated spatio-temporal receptive field responses. For this purpose, we also extend the notion of scale-normalized derivatives to affine-normalized derivatives, to obtain truly affine-covariant properties of spatial derivatives that are computed based on spatial smoothing with affine Gaussian kernels. The derived relations show how the parameters of the receptive fields need to be transformed in order to match the output from spatio-temporal receptive fields under composed spatio-temporal image transformations. As a side effect, the presented proof of the joint covariance property over the integrated combination of the different geometric image transformations also provides specific proofs for the individual transformation properties, which have not previously been fully reported in the literature. The paper also presents an in-depth theoretical analysis of geometric interpretations of the derived covariance properties, and outlines a number of biological interpretations of these results.
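As a reference point for the purely spatial part of these covariance properties, the classical affine covariance of smoothing with affine Gaussian kernels (a standard result, stated here in our own notation rather than in the paper's exact formulation) reads as follows. If two images are related by $f'(x') = f(x)$ with $x' = A\,x$ for a non-singular matrix $A$, and
\[
L(\cdot;\,\Sigma) = g(\cdot;\,\Sigma) * f, \qquad L'(\cdot;\,\Sigma') = g(\cdot;\,\Sigma') * f',
\]
with $g(x;\Sigma)$ denoting the affine Gaussian kernel with covariance matrix $\Sigma$, then
\[
L'(x';\,\Sigma') = L(x;\,\Sigma) \quad \text{provided that} \quad \Sigma' = A\,\Sigma\,A^{\mathrm T}.
\]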
Twin nodes in a static network capture the idea of being substitutes for each other for maintaining paths of the same length anywhere in the network. In dynamic networks, we model twin nodes over a time-bounded interval, denoted $(\Delta,d)$-twins, as follows. A periodic undirected time-varying graph $\mathcal G=(G_t)_{t\in\mathbb N}$ of period $p$ is an infinite sequence of static graphs where $G_t=G_{t+p}$ for every $t\in\mathbb N$. For two integers $\Delta$ and $d$, two distinct nodes $u$ and $v$ in $\mathcal G$ are $(\Delta,d)$-twins if, starting at some instant, the outside neighbourhoods of $u$ and $v$ have a non-empty intersection and differ by at most $d$ elements for $\Delta$ consecutive instants. In particular, when $d=0$, $u$ and $v$ can act during these $\Delta$ instants as substitutes for each other in order to maintain journeys of the same length in the time-varying graph $\mathcal G$. We propose a distributed deterministic algorithm enabling each node to enumerate its $(\Delta,d)$-twins in $2p$ rounds, using messages of size $O(\delta_\mathcal G\log n)$, where $n$ is the total number of nodes and $\delta_\mathcal G$ is the maximum degree over the graphs $G_t$. Moreover, using randomized techniques borrowed from distributed hash function sampling, we reduce the message size down to $O(\log n)$ w.h.p.
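A centralized brute-force check of this definition (a minimal sketch for intuition only, not the distributed $2p$-round algorithm of the paper; it assumes the outside neighbourhood of $u$ with respect to $v$ is $N(u)\setminus\{v\}$ and inspects only the given list of snapshots) could look as follows.

import itertools
import networkx as nx

def delta_d_twins(snapshots, delta, d):
    """Enumerate pairs (u, v) whose outside neighbourhoods intersect and have
    symmetric difference at most d for delta consecutive snapshots."""
    nodes = list(snapshots[0].nodes)   # assumes a fixed node set across snapshots
    twins = set()
    for u, v in itertools.combinations(nodes, 2):
        run = 0
        for G in snapshots:
            Nu = set(G.neighbors(u)) - {v}
            Nv = set(G.neighbors(v)) - {u}
            if Nu & Nv and len(Nu ^ Nv) <= d:
                run += 1
                if run >= delta:
                    twins.add((u, v))
                    break
            else:
                run = 0
    return twins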
We introduce a new conjecture on the computational hardness of detecting random lifts of graphs: we claim that there is no polynomial-time algorithm that can distinguish between a large random $d$-regular graph and a large random lift of a Ramanujan $d$-regular base graph (provided that the lift is corrupted by a small amount of extra noise), and likewise for bipartite random graphs and lifts of bipartite Ramanujan graphs. We give evidence for this conjecture by proving lower bounds against the local statistics hierarchy of hypothesis testing semidefinite programs. We then explore the consequences of this conjecture for the hardness of certifying bounds on numerous functions of random regular graphs, expanding on a direction initiated by Bandeira, Banks, Kunisky, Moore, and Wein (2021). Conditional on this conjecture, we show that no polynomial-time algorithm can certify tight bounds on the maximum cut of random 3- or 4-regular graphs, the maximum independent set of random 3- or 4-regular graphs, or the chromatic number of random 7-regular graphs. We show similar gaps asymptotically for large degree for the maximum independent set and for any degree for the minimum dominating set, finding that naive spectral and combinatorial bounds are optimal among all polynomial-time certificates. Likewise, for small-set vertex and edge expansion in the limit of very small sets, we show that the spectral bounds of Kahale (1995) are optimal among all polynomial-time certificates.
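As background for the objects appearing in this conjecture, a random $n$-lift of a simple base graph can be generated as follows (a standard construction, sketched here for illustration; the extra noise corruption and the Ramanujan condition on the base graph are not modelled).

import random
import networkx as nx

def random_lift(base, n):
    """Random n-lift of a simple base graph: each base vertex becomes n copies,
    and each base edge becomes a uniformly random perfect matching between the
    corresponding groups of copies."""
    lift = nx.Graph()
    lift.add_nodes_from((v, i) for v in base for i in range(n))
    for u, v in base.edges():
        perm = list(range(n))
        random.shuffle(perm)
        lift.add_edges_from(((u, i), (v, perm[i])) for i in range(n))
    return lift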
This paper addresses the construction and analysis of a class of domain decomposition methods for the iterative solution of the quasi-static Biot problem in three-field formulation. The considered discrete model arises from time discretization by the implicit Euler method and space discretization by a family of strongly mass-conserving methods exploiting $H(\mathrm{div})$-conforming approximations of the solid displacement and fluid flux fields. For the resulting saddle-point problem, we construct monolithic overlapping domain decomposition (DD) methods whose analysis relies on a transformation into an equivalent symmetric positive definite system and on stable decompositions of the involved finite element spaces under proper problem-dependent norms. Numerical results on two-dimensional test problems are in accordance with the provided theoretical uniform convergence estimates for the two-level multiplicative Schwarz method.
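For orientation, one common three-field formulation of the quasi-static Biot model, written in terms of the displacement $\boldsymbol u$, the Darcy flux $\boldsymbol w$ and the pore pressure $p$ (our assumption of the form intended here; the paper's exact scaling and boundary conditions may differ), is
\[
\begin{aligned}
-\operatorname{div}\bigl(2\mu\,\varepsilon(\boldsymbol u) + \lambda\,(\operatorname{div}\boldsymbol u)\,I\bigr) + \alpha\,\nabla p &= \boldsymbol f,\\
K^{-1}\boldsymbol w + \nabla p &= \boldsymbol 0,\\
\partial_t\bigl(c_0\,p + \alpha\operatorname{div}\boldsymbol u\bigr) + \operatorname{div}\boldsymbol w &= q,
\end{aligned}
\]
to which implicit Euler in time and $H(\mathrm{div})$-conforming discretization in space are applied.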
This paper focuses on the development of a space-variant regularization model for solving an under-determined linear inverse problem. The case study is medical image reconstruction from noisy few-view tomographic data. The primary objective of the proposed optimization model is to achieve a good balance between denoising and the preservation of fine details and edges, improving on the performance of the popular and widely used Total Variation (TV) regularization through the application of appropriate pixel-dependent weights. The proposed strategy leverages the role of gradient approximations in the computation of the space-variant TV weights. To this end, a convolutional neural network is designed to approximate both the ground-truth image and its gradient, using an elastic loss function in its training. Additionally, the paper provides a theoretical analysis of the proposed model, showing the uniqueness of its solution, and illustrates a Chambolle-Pock algorithm tailored to the specific problem at hand. This comprehensive framework integrates innovative regularization techniques with advanced neural network capabilities, demonstrating promising results in achieving high-quality reconstructions from low-sampled tomographic data.
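A generic space-variant (weighted) TV reconstruction model of the kind described, written in our own notation as an illustration rather than the paper's exact formulation, is
\[
\min_{x}\;\tfrac12\,\|Ax-b\|_2^2 \;+\; \sum_{i}\lambda_i\,\|(\nabla x)_i\|_2,
\]
where $A$ is the few-view tomographic forward operator, $b$ the noisy data, $(\nabla x)_i$ the discrete gradient at pixel $i$, and $\lambda_i>0$ the pixel-dependent weights, here assumed to be computed from the network's gradient estimate.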
High-dimensional linear models have been widely studied, but developments in high-dimensional generalized linear models, or GLMs, have been slower. In this paper, we propose an empirical or data-driven prior leading to an empirical Bayes posterior distribution which can be used for estimation of and inference on the coefficient vector in a high-dimensional GLM, as well as for variable selection. We prove that the proposed posterior concentrates around the true sparse coefficient vector at the optimal rate, provide conditions under which the posterior achieves variable selection consistency, and prove a Bernstein--von Mises theorem that implies asymptotically valid uncertainty quantification. Computation of the proposed empirical Bayes posterior is simple and efficient, and the method is shown to perform well in simulations compared to existing Bayesian and non-Bayesian methods in terms of estimation and variable selection.
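For reference, the high-dimensional GLM setting can be written in generic form (the paper's specific empirical prior construction is not reproduced here) as independent responses $y_i$ with canonical-link exponential family density
\[
p(y_i \mid x_i, \beta) \;\propto\; \exp\bigl\{ y_i\, x_i^{\top}\beta - b(x_i^{\top}\beta) \bigr\}, \qquad i=1,\dots,n,
\]
where $\beta\in\mathbb R^p$ is sparse and $p$ may greatly exceed $n$; the empirical Bayes posterior targets this sparse $\beta$.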
Many interesting physical problems described by systems of hyperbolic conservation laws are stiff, and thus impose a very small time-step because of the restrictive CFL stability condition. In this case, one can exploit the superior stability properties of implicit time integration, which allows the time-step to be chosen based only on accuracy requirements and thus avoids the use of small time-steps. We discuss an efficient framework to devise high order implicit schemes for stiff hyperbolic systems without tailoring it to a specific problem. The nonlinearity of high order schemes, due to the space- and time-limiting procedures which control nonphysical oscillations, makes implicit time integration difficult, e.g.~because the discrete system is nonlinear even for linear problems. This nonlinearity of the scheme is circumvented as proposed in (Puppo et al., Comm.~Appl.~Math.~\& Comput., 2023) for scalar conservation laws, where a first order implicit predictor is computed to freeze the nonlinear coefficients of the essentially non-oscillatory space reconstruction, and also to assist limiting in time. In addition, we propose a novel conservative flux-centered a-posteriori time-limiting procedure using numerical entropy indicators to detect troubled cells. The numerical tests involve classical and artificially devised stiff problems based on the Euler equations of gas dynamics.
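As a schematic illustration of the predictor-based linearization (written generically for a 1D conservation law $\partial_t u + \partial_x f(u) = 0$; the notation is ours and the time-limiting step is omitted), the implicit finite-volume update reads
\[
u_i^{n+1} = u_i^{n} - \frac{\Delta t}{\Delta x}\Bigl( F_{i+1/2}^{\,n+1} - F_{i-1/2}^{\,n+1} \Bigr),
\]
where the numerical fluxes $F_{i\pm 1/2}^{\,n+1}$ are built from a high order essentially non-oscillatory reconstruction whose nonlinear coefficients are frozen at the values produced by a first order implicit predictor step, so that the resulting algebraic system becomes linear whenever $f$ is linear.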
Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close to one another. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfying performance of SRH.
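To make the kind of objective used in deep supervised hashing concrete, the sketch below shows a generic pairwise similarity-preserving loss over relaxed binary codes (a common baseline formulation, not the specific SRH "shadow" objective; the feature dimension, code length and margin are placeholder choices).

import torch
import torch.nn as nn

class HashHead(nn.Module):
    """Hashing head on top of a CNN backbone (assumed given): a fully
    connected layer followed by tanh produces relaxed codes in (-1, 1)."""
    def __init__(self, feat_dim=512, bits=48):
        super().__init__()
        self.fc = nn.Linear(feat_dim, bits)

    def forward(self, feats):
        return torch.tanh(self.fc(feats))

def pairwise_hash_loss(codes, labels, margin=2.0):
    """Pull codes of same-label images together, push codes of
    different-label images at least `margin` apart."""
    dist = torch.cdist(codes, codes)                      # pairwise code distances
    same = (labels[:, None] == labels[None, :]).float()   # similarity matrix from labels
    loss_pos = same * dist.pow(2)
    loss_neg = (1 - same) * torch.clamp(margin - dist, min=0).pow(2)
    return (loss_pos + loss_neg).mean()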
Graph representation learning for hypergraphs can be used to extract patterns among the higher-order interactions that are critically important in many real-world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic enough to support various learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention based graph neural network called Hyper-SAGNN, applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms the state-of-the-art methods on traditional tasks while also achieving strong performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
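To illustrate how self-attention can score a variable-size node set as a candidate hyperedge (a generic sketch in the spirit of the abstract, not the exact Hyper-SAGNN architecture; the embedding dimension, head count and mean pooling are assumptions), consider the following.

import torch
import torch.nn as nn

class HyperedgeScorer(nn.Module):
    """Score a variable-size set of node embeddings as a candidate hyperedge."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, node_emb):
        # node_emb: (1, k, dim) embeddings of the k nodes in a candidate hyperedge
        dyn, _ = self.attn(node_emb, node_emb, node_emb)   # context-aware (dynamic) embeddings
        return torch.sigmoid(self.score(dyn.mean(dim=1)))  # probability the set forms a hyperedge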