We propose a scheme for imaging periodic surfaces using a superlens. By employing an inverse scattering model and the transformed field expansion method, we derive an approximate reconstruction formula for the surface profile under a small-amplitude assumption. This formula suggests that unlimited resolution can be achieved for the linearized inverse problem with perfectly matched parameters. Our method requires only a single incident wave at a fixed frequency and can be implemented efficiently using the fast Fourier transform (FFT). Through numerical experiments, we demonstrate that our method achieves resolution significantly surpassing the resolution limit for both smooth and non-smooth surface profiles, with either perfect or marginally imperfect parameters.
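For intuition, the linearized reconstruction amounts to a deconvolution in Fourier space: each Fourier mode of the measured field is divided by a transfer function derived from the transformed field expansion. A minimal NumPy sketch, assuming the transfer function `transfer` has already been computed analytically (its exact form depends on the superlens parameters):

```python
import numpy as np

def reconstruct_profile(measured_field, transfer, alpha=1e-8):
    """Linearized inverse step: deconvolve the measured field by the
    transfer function in Fourier space (Tikhonov guard for tiny modes)."""
    d_hat = np.fft.fft(measured_field)
    f_hat = d_hat * np.conj(transfer) / (np.abs(transfer) ** 2 + alpha)
    return np.real(np.fft.ifft(f_hat))  # approximate surface profile samples
```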
We present near-optimal algorithms for detecting small vertex cuts in the CONGEST model of distributed computing. Despite extensive research in this area, our understanding of the vertex connectivity of a graph is still incomplete, especially in the distributed setting. To date, all distributed algorithms for detecting cut vertices suffer from an inherent dependency on the maximum degree of the graph, $\Delta$. Hence, in particular, there is no truly sub-linear time algorithm for this problem, not even for detecting a single cut vertex. We take a new algorithmic approach to vertex connectivity which allows us to bypass the existing $\Delta$ barrier. As a warm-up to our approach, we show a simple $\widetilde{O}(D)$-round randomized algorithm for computing all cut vertices in a $D$-diameter $n$-vertex graph. This improves upon the $O(D+\Delta/\log n)$-round algorithm of [Pritchard and Thurimella, ICALP 2008]. Our key technical contribution is an $\widetilde{O}(D)$-round randomized algorithm for computing all cut pairs in the graph, improving upon the state-of-the-art $O((\Delta \cdot D)^4)$-round algorithm of [Parter, DISC '19]. Note that even in the considerably simpler setting of edge cuts, $\widetilde{O}(D)$-round algorithms are currently known only for detecting pairs of cut edges. Our approach employs the well-known linear graph sketching technique [Ahn, Guha and McGregor, SODA 2012] together with the heavy-light tree decomposition of [Sleator and Tarjan, STOC 1981]. Combining these with a careful characterization of the survivable subgraphs allows us to determine the connectivity of $G \setminus \{x,y\}$ for every pair $x,y \in V$ in $\widetilde{O}(D)$ rounds. We believe that the tools provided in this paper will be useful for removing the $\Delta$-dependency even for larger cut values.
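As a reference for one of the two named tools, here is a minimal centralized sketch of the Sleator-Tarjan heavy-light decomposition on a rooted tree (the adjacency-list representation `adj` is an assumption); every root-to-leaf path crosses only $O(\log n)$ light edges, which is what makes the technique attractive for aggregating sketches along tree paths:

```python
def heavy_light(adj, root=0):
    """Sleator-Tarjan heavy-light labeling of a rooted tree: each node's edge
    to its largest-subtree child is heavy; every root-to-leaf path then
    crosses only O(log n) light edges."""
    n = len(adj)
    size, parent, order = [1] * n, [-1] * n, []
    stack = [root]
    while stack:                          # iterative DFS, parents before children
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if v != parent[u]:
                parent[v] = u
                stack.append(v)
    for u in reversed(order):             # accumulate subtree sizes bottom-up
        if parent[u] != -1:
            size[parent[u]] += size[u]
    heavy = {}                            # heavy[u] = child across the heavy edge
    for u in range(n):
        children = [v for v in adj[u] if v != parent[u]]
        if children:
            heavy[u] = max(children, key=lambda v: size[v])
    return heavy
```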
Recently, deep learning and machine learning approaches have been widely employed for various applications in acoustics. Nonetheless, in the area of sound field processing and reconstruction, classic methods based on solutions of the wave equation are still widespread. Recently, physics-informed neural networks have been proposed as a deep learning paradigm for solving the partial differential equations that govern physical phenomena, bridging the gap between purely data-driven and model-based methods. Here, we exploit physics-informed neural networks to reconstruct the early part of missing room impulse responses in a uniform linear array. This methodology allows us to exploit the underlying law of acoustics, i.e., the wave equation, forcing the neural network to generate physically meaningful solutions given only a limited number of data points. Results on real measurements show that the proposed model achieves accurate reconstruction, with performance in line with state-of-the-art deep learning and compressed sensing techniques, while maintaining a lightweight architecture.
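To illustrate the paradigm, a minimal PyTorch sketch of the physics term for the 1-D wave equation $p_{tt} = c^2 p_{xx}$ along the array axis; the network `net` and the collocation points are assumptions, and the full loss would add a data-fitting term on the available microphones:

```python
import torch

def wave_residual(net, x, t, c=343.0):
    """PDE residual p_tt - c^2 * p_xx of the network output p = net([x, t])."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    p = net(torch.stack([x, t], dim=-1))
    p_x, p_t = torch.autograd.grad(p.sum(), (x, t), create_graph=True)
    p_xx = torch.autograd.grad(p_x.sum(), x, create_graph=True)[0]
    p_tt = torch.autograd.grad(p_t.sum(), t, create_graph=True)[0]
    return p_tt - c ** 2 * p_xx

# physics-informed loss = data misfit on measured RIRs + residual penalty, e.g.
# loss = mse(p_pred, p_meas) + lam * wave_residual(net, x_col, t_col).pow(2).mean()
```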
Obtaining a relevant dataset is central to conducting empirical studies in software engineering. However, in the context of mining software repositories, the lack of appropriate tooling for large-scale mining tasks hinders the creation of new datasets. Moreover, limitations related to data sources that change over time (e.g., code bases) and the lack of documentation of extraction processes make it difficult to reproduce datasets over time. This threatens the quality and reproducibility of empirical studies. In this paper, we propose a tool-supported approach that facilitates the creation of large tailored datasets while ensuring their reproducibility. We leverage all the sources feeding the Software Heritage append-only archive, which are accessible through a unified programming interface, to outline a generic and reproducible extraction process. We propose a way to define a unique fingerprint that characterizes a dataset and, when provided to the extraction process, ensures that the same dataset will be extracted. We demonstrate the feasibility of our approach by implementing a prototype and show how it can help reduce the limitations researchers face when creating or reproducing datasets.
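The fingerprint idea can be illustrated with a short sketch: hash the canonicalized selection criteria together with an identifier pinning the archive state. The field names and the SWHID placeholder below are illustrative, not the tool's actual format:

```python
import hashlib, json

def dataset_fingerprint(selection_criteria, snapshot_id):
    """Hash the extraction parameters plus the archive snapshot they ran against.
    Because the archive is append-only and the snapshot pins its state,
    re-running the extractor with an equal fingerprint yields the same dataset."""
    canonical = json.dumps(
        {"criteria": selection_criteria, "snapshot": snapshot_id},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode()).hexdigest()

# hypothetical usage with a Software Heritage snapshot identifier
fp = dataset_fingerprint({"language": "Python", "min_commits": 100}, "swh:1:snp:...")
```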
Eye blinking detection in the wild plays an essential role in deception detection, driving fatigue detection, etc. Although numerous attempts have already been made, most encounter difficulties such as the extracted eye images having different resolutions as the distance between the face and the camera changes, or the requirement of a lightweight detection model with a short inference time for real-time operation. In this research, two problems are addressed: how the eye blinking detection model can learn efficiently from eye pictures of different resolutions in diverse conditions, and how to reduce the size of the detection model for a faster inference time. For the first problem, we propose upsampling and downsampling the input eye images to a common resolution, and we investigate which interpolation method yields the highest detection performance. For the second problem, although a recent spatiotemporal convolutional neural network used for eye blinking detection has a strong capacity to extract both spatial and temporal characteristics, it still has a large number of parameters, leading to a long inference time. We therefore consider replacing the conventional convolution layers inside each branch with depthwise separable convolutions.
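A depthwise separable convolution factorizes a standard convolution into a per-channel spatial filter followed by a 1x1 pointwise mixing step, cutting the parameter count from roughly k·k·C_in·C_out to k·k·C_in + C_in·C_out. A minimal PyTorch sketch (layer sizes are placeholders):

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per input channel, groups=in_ch) followed
    by a 1x1 pointwise conv that mixes channels."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```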
In general, high-order splitting methods suffer from an order reduction phenomenon when applied to the time integration of partial differential equations with non-periodic boundary conditions. In the last decade, several modifications have been introduced to prevent the second-order Strang splitting method from suffering this phenomenon. In this article, inspired by these recent corrector techniques, we introduce a splitting method of order three for a class of semilinear parabolic problems that avoids order reduction in the context of non-periodic boundary conditions. We prove the third-order convergence of the method in a simplified linear setting and confirm the result by numerical experiments. Moreover, we show numerically that the result persists with a nonlinear source term.
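For context, the classical second-order Strang splitting that the corrector techniques modify advances the two subproblems in a symmetric half/full/half pattern. A generic sketch, with `flow_A` and `flow_B` assumed to solve the subflows of $u' = A(u) + B(u)$ (the paper's third-order corrected scheme is not reproduced here):

```python
def strang_step(u, dt, flow_A, flow_B):
    """One Strang step for u' = A(u) + B(u): B for dt/2, then A for dt,
    then B for dt/2; second order when the subflows are solved exactly."""
    u = flow_B(u, dt / 2)
    u = flow_A(u, dt)
    return flow_B(u, dt / 2)

# e.g. for a semilinear parabolic problem u' = Lu + g(u): flow_A applies the
# (discretized) diffusion semigroup, flow_B integrates the ODE u' = g(u)
```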
Communication plays a vital role in multi-agent systems, fostering collaboration and coordination. However, in real-world scenarios where communication is bandwidth-limited, existing multi-agent reinforcement learning (MARL) algorithms often provide agents with a binary choice: either transmitting a fixed number of bytes or no information at all. This limitation hinders the ability to effectively utilize the available bandwidth. To overcome this challenge, we present the Dynamic Size Message Scheduling (DSMS) method, which introduces a finer-grained approach to scheduling by considering the actual size of the information to be exchanged. Our contribution lies in adaptively adjusting message sizes using Fourier transform-based compression techniques, enabling agents to tailor their messages to match the allocated bandwidth while striking a balance between information loss and transmission efficiency. Receiving agents can reliably decompress the messages using the inverse Fourier transform. Experimental results demonstrate that DSMS significantly improves performance in multi-agent cooperative tasks by optimizing the utilization of bandwidth and effectively balancing information value.
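The compression step can be pictured as keeping only the largest Fourier coefficients that fit in the allocated budget; below is a NumPy sketch of this idea (the paper's exact coefficient-selection and scheduling policy may differ):

```python
import numpy as np

def compress(message, budget):
    """Keep the `budget` largest-magnitude Fourier coefficients."""
    coeffs = np.fft.rfft(message)
    keep = np.argsort(np.abs(coeffs))[-budget:]
    return keep, coeffs[keep]               # what is actually transmitted

def decompress(keep, values, length):
    """Receiver side: re-embed the coefficients and invert the transform."""
    coeffs = np.zeros(length // 2 + 1, dtype=complex)
    coeffs[keep] = values
    return np.fft.irfft(coeffs, n=length)
```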
Sea surface temperature (SST) is an essential climate variable that can be measured via ground truth, remote sensing, or hybrid model methodologies. Here, we celebrate SST surveillance progress by applying a few relevant technological advances from the late 20th and early 21st centuries. We further develop our existing water cycle observation framework, Flux to Flow (F2F), to fuse AMSR-E and MODIS into a higher-resolution product, with the goal of capturing gradients and filling cloud gaps that are otherwise unavailable. Our neural network architecture is constrained to a deep convolutional residual regressive neural network (dcrrnn). We utilize three snapshots of twelve monthly SST measurements in 2010, as measured by the passive microwave radiometer AMSR-E, the visible and infrared monitoring MODIS instrument, and the in situ Argo dataset ISAS. The performance of the platform and the success of this approach are evaluated using the root mean squared error (RMSE) metric. We determine that the 1:1 configuration of input and output data over a large observation region is too challenging for a single compute node and the dcrrnn structure as-is. When constrained to a single 100 x 100 pixel region and a small training dataset, the algorithm improves over the baseline experiment covering a much larger geography. As next steps, we envision considering a large input range with a very small output range. Furthermore, we see the need to integrate land and sea variables before performing computer vision tasks like those presented here. Finally, we see parallelization as necessary to overcome the compute obstacles we encountered.
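As a rough illustration of the architecture family, a residual convolutional block with a continuous-valued (regression) output is the kind of unit a dcrrnn stacks; the details below are assumptions, not the exact F2F configuration:

```python
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity skip connection; keeping the
    output continuous (no classification head) gives the 'regressive' part."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return F.relu(x + self.body(x))
```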
Adaptive sampling and planning in robotic environmental monitoring are challenging when the target environmental process varies over space and time. The underlying environmental dynamics require the planning module to integrate future environmental changes so that action decisions made earlier do not quickly become outdated. We propose a Monte Carlo tree search method that not only balances exploration and exploitation of the environment in space, but also keeps up with the temporal environmental dynamics. This is achieved by incorporating multi-objective optimization and a look-ahead, model-predictive rewarding mechanism. We show that by allowing the robot to leverage the simulated and predicted spatiotemporal environmental process, the proposed informative planning approach outperforms baseline methods in terms of the root mean square error of the environment model and the distance to the ground truth.
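At the core of any Monte Carlo tree search planner is the selection rule trading off exploration and exploitation; a minimal UCT sketch is shown below (the node fields are assumptions, and the paper replaces the plain reward with its multi-objective, look-ahead model-predictive variant):

```python
import math

def uct_select(node, c=1.4):
    """Pick the child maximizing mean reward plus an exploration bonus."""
    def score(child):
        mean = child.value / (child.visits + 1e-9)
        bonus = c * math.sqrt(math.log(node.visits + 1) / (child.visits + 1e-9))
        return mean + bonus
    return max(node.children, key=score)
```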
We consider the problem of detecting multiple changes in multiple independent time series. The search for the best segmentation can be expressed as a minimization problem over a given cost function. We focus on dynamic programming algorithms that solve this problem exactly. When the number of changes is proportional to the data length, an inequality-based pruning rule encoded in the PELT algorithm leads to linear time complexity. Another type of pruning, called functional pruning, gives close-to-linear time complexity whatever the number of changes, but only for the analysis of univariate time series. We propose several extensions of functional pruning to multiple independent time series based on the use of simple geometric shapes (balls and hyperrectangles). We focus on the Gaussian case, but some of our rules can easily be extended to the exponential family. In a simulation study we compare the computational efficiency of the different geometric-based pruning rules. We show that for small dimensions (2, 3, 4) some of them run significantly faster than inequality-based approaches, in particular when the underlying number of changes is small compared to the data length.
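For reference, the inequality-based pruning that PELT applies in the univariate Gaussian (change-in-mean) case fits in a few lines; candidates whose partial cost can no longer be optimal are discarded at every step. A minimal sketch, not the paper's geometric functional pruning:

```python
import numpy as np

def pelt(y, beta):
    """PELT with Gaussian change-in-mean cost and penalty beta per change."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    cs = np.concatenate(([0.0], np.cumsum(y)))
    cs2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def cost(a, b):                       # cost of segment y[a:b]
        s, s2, m = cs[b] - cs[a], cs2[b] - cs2[a], b - a
        return s2 - s * s / m

    F = [-beta] + [0.0] * n               # optimal cost up to each index
    last, cands = [0] * (n + 1), [0]
    for t in range(1, n + 1):
        vals = [F[s] + cost(s, t) + beta for s in cands]
        i = int(np.argmin(vals))
        F[t], last[t] = vals[i], cands[i]
        # inequality-based pruning: drop s with F[s] + cost(s, t) > F[t]
        cands = [s for s, v in zip(cands, vals) if v <= F[t] + beta] + [t]
    cps, t = [], n                        # backtrack the changepoints
    while t > 0:
        t = last[t]
        if t:
            cps.append(t)
    return sorted(cps)
```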
Reliable probabilistic primality tests are fundamental in public-key cryptography. In adversarial scenarios, a composite with a high probability of passing a specific primality test could be chosen deliberately; in such cases, we need worst-case error estimates for the test. However, in many scenarios the numbers are chosen at random and thus have a significantly smaller error probability, so we are interested in average-case error estimates. In this paper, we establish such bounds for the strong Lucas primality test, for which only worst-case, but no average-case, error bounds are currently available. This allows us to use the test with more confidence. We examine an algorithm that draws odd $k$-bit integers uniformly and independently, runs $t$ independent iterations of the strong Lucas test with randomly chosen parameters, and outputs the first number that passes all $t$ consecutive rounds. We obtain numerical upper bounds on the probability that this algorithm returns a composite. Furthermore, we consider a modified version of the algorithm that excludes integers divisible by small primes, resulting in improved bounds. Additionally, we classify the numbers that contribute most to our estimate.
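The sampling procedure itself is easy to sketch. Below, sympy's strong Lucas test, which fixes the Lucas parameters via Selfridge's Method A, stands in for the paper's independently parameterized random rounds, and the small-prime filter implements the modified variant; all names are illustrative:

```python
import random
from sympy.ntheory.primetest import is_strong_lucas_prp

SMALL_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]

def sample_probable_prime(k, t, exclude_small=True):
    """Draw odd k-bit integers uniformly until one passes the test.
    The paper runs t independent strong Lucas rounds with random parameters;
    sympy's variant is deterministic (Selfridge), so it is only a stand-in."""
    while True:
        n = random.getrandbits(k - 1) | (1 << (k - 1)) | 1  # odd, exactly k bits
        if exclude_small and any(n % p == 0 for p in SMALL_PRIMES):
            continue                      # pre-filter of the modified algorithm
        if all(is_strong_lucas_prp(n) for _ in range(t)):   # t stand-in rounds
            return n
```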