Bayesian geoacoustic inversion problems are conventionally solved by Markov chain Monte Carlo methods or their variants, which are computationally expensive. This paper extends the classic Bayesian geoacoustic inversion framework by deriving important geoacoustic statistics from the multidimensional posterior probability density (PPD) using mixture density network (MDN) theory. These statistics make it convenient to train the network directly on the whole parameter space and obtain the multidimensional PPD of the model parameters. The present approach provides a much more efficient way to solve geoacoustic inversion problems in a Bayesian inference framework. The network is trained on a simulated dataset of surface-wave dispersion curves with shear-wave velocities as labels, and tested on both synthetic and real data. The results show that the network gives reliable predictions and generalizes well to unseen data. Once trained, the network can rapidly (within seconds) produce a fully probabilistic solution comparable to that of Monte Carlo methods, making it a promising approach for real-time inversion.
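A minimal numpy sketch of the idea behind an MDN-based PPD: the network outputs Gaussian-mixture parameters, from which the posterior density and its statistics follow in closed form. The component values below are illustrative placeholders, not outputs of the trained network described in the abstract.

```python
import numpy as np

def mixture_ppd(logits, means, sigmas, v):
    """Evaluate a 1-D Gaussian-mixture posterior density p(v | data)
    from MDN outputs: mixture logits, component means and std devs."""
    w = np.exp(logits - logits.max())
    w /= w.sum()  # softmax -> valid mixture weights
    comps = np.exp(-0.5 * ((v[:, None] - means) / sigmas) ** 2) \
            / (sigmas * np.sqrt(2 * np.pi))
    return comps @ w

def mixture_mean_var(logits, means, sigmas):
    """Posterior mean and variance in closed form from the mixture."""
    w = np.exp(logits - logits.max())
    w /= w.sum()
    mean = np.sum(w * means)
    var = np.sum(w * (sigmas ** 2 + means ** 2)) - mean ** 2
    return mean, var
```

Because the statistics are analytic in the mixture parameters, no Monte Carlo sampling is needed once the network has produced them, which is the source of the speed-up claimed above.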
MIMO (multiple input, multiple output) approaches are a recent trend in neural network architectures for video restoration problems, where each network evaluation produces multiple output frames. The video is split into non-overlapping stacks of frames that are processed independently, resulting in a very appealing trade-off between output quality and computational cost. In this work we focus on the low-latency setting by limiting the number of available future frames. We find that MIMO architectures suffer from problems that have received little attention so far, namely (1) the performance drops significantly due to the reduced temporal receptive field, particularly for frames at the borders of the stack, (2) there are strong temporal discontinuities at stack transitions which induce a step-wise motion artifact. We propose two simple solutions to alleviate these problems: recurrence across MIMO stacks to boost the output quality by implicitly increasing the temporal receptive field, and overlapping of the output stacks to smooth the temporal discontinuity at stack transitions. These modifications can be applied to any MIMO architecture. We test them on three state-of-the-art video denoising networks with different computational cost. The proposed contributions result in a new state-of-the-art for low-latency networks, both in terms of reconstruction error and temporal consistency. As an additional contribution, we introduce a new benchmark consisting of drone footage that highlights temporal consistency issues that are not apparent in the standard benchmarks.
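The stack-overlap idea can be sketched in a few lines of numpy: adjacent output stacks share frames, and a linear cross-fade in the overlap smooths the transition. This is an illustrative blending scheme under assumed weights, not the exact formulation used by the networks in the paper.

```python
import numpy as np

def blend_overlapping_stacks(stacks, stride):
    """Merge MIMO output stacks that overlap by (stack_len - stride)
    frames, cross-fading linearly in the overlap to smooth stack
    transitions. Each element of `stacks` is one network evaluation's
    output, shape (stack_len, ...)."""
    stack_len = stacks[0].shape[0]
    overlap = stack_len - stride
    n_frames = stride * (len(stacks) - 1) + stack_len
    out = np.zeros((n_frames,) + stacks[0].shape[1:])
    wsum = np.zeros((n_frames,) + (1,) * (stacks[0].ndim - 1))
    # per-frame weights: ramp up over the overlap, flat in the middle
    w = np.ones(stack_len)
    if overlap > 0:
        ramp = np.linspace(1.0 / (overlap + 1), 1.0, overlap,
                           endpoint=False)
        w[:overlap] = ramp
        w[-overlap:] = ramp[::-1]
    w = w.reshape((stack_len,) + (1,) * (stacks[0].ndim - 1))
    for i, s in enumerate(stacks):
        out[i * stride:i * stride + stack_len] += w * s
        wsum[i * stride:i * stride + stack_len] += w
    return out / wsum
```

With stride equal to the stack length the scheme reduces to the usual non-overlapping MIMO output, so the overlap is a tunable trade-off between smoothness and extra computation.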
We consider the use of multipreconditioning, which allows for multiple preconditioners to be applied in parallel, on high-frequency Helmholtz problems. Typical applications present challenging sparse linear systems which are complex non-Hermitian and, due to the pollution effect, either very large or else still large but under-resolved in terms of the physics. These factors make finding general purpose, efficient and scalable solvers difficult and no one approach has become the clear method of choice. In this work we take inspiration from domain decomposition strategies known as sweeping methods, which have gained notable interest for their ability to yield nearly-linear asymptotic complexity and which can also be favourable for high-frequency problems. While successful approaches exist, such as those based on higher-order interface conditions, perfectly matched layers (PMLs), or complex tracking of wave fronts, they can often be quite involved or tedious to implement. We investigate here the use of simple sweeping techniques applied in different directions which can then be incorporated in parallel into a multipreconditioned GMRES strategy. Preliminary numerical results on a two-dimensional benchmark problem will demonstrate the potential of this approach.
We propose a decoder for the correction of erasures with hypergraph product codes, which form one of the most popular families of quantum LDPC codes. Our numerical simulations show that this decoder provides a close approximation of the maximum likelihood decoder that can be implemented in $O(N^2)$ bit operations, where $N$ is the length of the quantum code. A probabilistic version of this decoder can be implemented in $O(N^{1.5})$ bit operations.
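The underlying principle is that maximum likelihood erasure correction for a linear code amounts to solving a linear system over GF(2) restricted to the erased positions. The toy classical decoder below illustrates this with generic Gaussian elimination; the paper's decoder exploits the hypergraph product structure to reach the complexities quoted above, which this sketch does not.

```python
import numpy as np

def erasure_decode(H, y, erased):
    """Correct erasures for a binary linear code with parity-check
    matrix H by solving H_E x_E = s over GF(2), where E is the set of
    erased positions and s is the syndrome of the known bits."""
    H = H.copy() % 2
    known = np.array([i for i in range(H.shape[1])
                      if i not in set(erased)])
    s = (H[:, known] @ y[known]) % 2  # syndrome from known bits
    A = np.concatenate([H[:, erased], s[:, None]], axis=1) % 2
    # Gaussian elimination over GF(2) to reduced row echelon form
    r = 0
    for c in range(len(erased)):
        piv = next((i for i in range(r, A.shape[0]) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] = (A[i] + A[r]) % 2
        r += 1
    # each pivot row now determines one erased bit
    x = y.copy()
    rr = 0
    for c in range(len(erased)):
        if rr < A.shape[0] and A[rr, c]:
            x[erased[c]] = A[rr, -1]
            rr += 1
    return x
```

Generic elimination costs cubic time in the number of erasures; the structured decoders proposed in the paper are what bring this down to $O(N^2)$ and $O(N^{1.5})$.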
The method of multivariable Mendelian randomization uses genetic variants to instrument multiple exposures, to estimate the effect that a given exposure has on an outcome conditional on all other exposures included in a linear model. Unfortunately, the inclusion of every additional exposure makes a weak instruments problem more likely, because we require conditionally strong genetic predictors of each exposure. This issue is well appreciated in practice, with different versions of F-statistics routinely reported as measures of instrument strength. Less transparently, however, these F-statistics are sometimes used to guide instrument selection, and even to decide whether to report empirical results. Rather than discarding findings with low F-statistics, weak instrument-robust methods can provide valid inference under weak instruments. For multivariable Mendelian randomization with two-sample summary data, we encourage use of the inference strategy of Andrews (2018) that reports both robust and non-robust confidence sets, along with a statistic that measures how reliable the non-robust confidence set is in terms of coverage. We also propose a novel adjusted-Kleibergen statistic that corrects for overdispersion heterogeneity in genetic associations with the outcome.
For studying intrusion detection data, we consider data points referring to individual IP addresses and their connections: we build networks from these data points, such that vertices in a graph correspond to the respective IP addresses, with the key property that attacked data points are part of the structure of the network. More precisely, we propose a novel approach using simplicial complexes to model the desired network and the respective intrusions in terms of simplicial attributes, thus generalizing previous graph-based approaches. Network centrality measures adapted to simplicial complexes yield so-called patterns associated with vertices, each of which contains a set of features. These are then used to describe the attacked or the attacker vertices, respectively. Comparing this new strategy with classical concepts demonstrates the advantages of using simplicial features for detecting and characterizing intrusions.
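As a concrete illustration of a simplicial per-vertex "pattern", one can count, for each vertex, how many simplices of each dimension contain it; this generalizes the graph degree (the edge count) to higher-order features. The function below is an illustrative stand-in for the adapted centrality measures described in the abstract, not the authors' exact feature set.

```python
from collections import defaultdict
from itertools import combinations

def simplicial_degree_pattern(simplices, max_dim=2):
    """Per-vertex feature pattern: for k = 0..max_dim, how many
    k-simplices contain each vertex, counting all faces of the
    given maximal simplices."""
    faces = set()
    for s in simplices:
        s = tuple(sorted(s))
        for k in range(1, len(s) + 1):
            faces.update(combinations(s, k))  # all faces, incl. s
    pattern = defaultdict(lambda: [0] * (max_dim + 1))
    for f in faces:
        dim = len(f) - 1
        if dim <= max_dim:
            for v in f:
                pattern[v][dim] += 1
    return dict(pattern)
```

For two triangles sharing an edge, the shared vertices receive a higher triangle count than the outer ones, so the pattern already distinguishes structural roles that a plain degree cannot.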
Due to their length and complexity, long regulatory texts are challenging to summarize. To address this, we propose a multi-step extractive-abstractive architecture to handle lengthy regulatory documents more effectively. In this paper, we show that the effectiveness of a two-step architecture for summarizing long regulatory texts varies significantly depending on the model used. Specifically, the two-step architecture improves the performance of decoder-only models. For abstractive encoder-decoder models with short context lengths, the benefit of an extractive step varies, whereas for long-context encoder-decoder models, the extractive step worsens their performance. This research also highlights the challenges of evaluating generated texts, as evidenced by the differing results from human and automated evaluations. Most notably, human evaluations favoured language models pretrained on legal text, while automated metrics ranked general-purpose language models higher. The results underscore the importance of selecting the appropriate summarization strategy based on model architecture and context length.
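The role of the extractive step is to shrink a long document to fit the abstractive model's context window. A toy version scores sentences by average term frequency and keeps the top scorers, in document order, under a token budget; real systems use trained extractors, so this only illustrates where the step sits in the pipeline.

```python
import re
from collections import Counter

def extractive_step(text, budget_tokens=512):
    """Toy extractive stage for a two-step pipeline: keep the
    highest-scoring sentences (by average term frequency) that fit
    in the downstream model's context budget, in document order."""
    sents = re.split(r"(?<=[.!?])\s+", text.strip())
    tf = Counter(w.lower() for w in re.findall(r"\w+", text))

    def score(s):
        words = re.findall(r"\w+", s)
        return sum(tf[w.lower()] for w in words) / max(len(words), 1)

    ranked = sorted(range(len(sents)),
                    key=lambda i: score(sents[i]), reverse=True)
    chosen, used = set(), 0
    for i in ranked:
        n = len(re.findall(r"\w+", sents[i]))
        if used + n <= budget_tokens:
            chosen.add(i)
            used += n
    return " ".join(sents[i] for i in sorted(chosen))
```

The budget is the knob that interacts with context length: for long-context encoder-decoder models the budget rarely binds, which is consistent with the finding above that the extractive step can then only discard useful content.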
Video Moment Retrieval, which aims to locate in-context video moments according to a natural language query, is an essential task for cross-modal grounding. Existing methods focus on enhancing the cross-modal interactions between all moments and the textual description for video understanding. However, constantly interacting with all locations is unreasonable because of uneven semantic distribution across the timeline and noisy visual backgrounds. This paper proposes a cross-modal Context Denoising Network (CDNet) for accurate moment retrieval by disentangling complex correlations and denoising irrelevant dynamics. Specifically, we propose a query-guided semantic disentanglement (QSD) to decouple video moments by estimating alignment levels according to the global and fine-grained correlations. A Context-aware Dynamic Denoisement (CDD) is proposed to enhance understanding of aligned spatial-temporal details by learning a group of query-relevant offsets. Extensive experiments on public benchmarks demonstrate that the proposed CDNet achieves state-of-the-art performance.
Given a family of pretrained models and a hold-out set, how can we construct a valid conformal prediction set while selecting a model that minimizes the width of the set? If we use the same hold-out data set both to select a model (the model that yields the smallest conformal prediction sets) and then to construct a conformal prediction set based on that selected model, we suffer a loss of coverage due to selection bias. Alternatively, we could further split the data to perform selection and calibration separately, but this comes at a steep cost if the size of the dataset is limited. In this paper, we address the challenge of constructing a valid prediction set after efficiency-oriented model selection. Our novel methods can be implemented efficiently and admit finite-sample validity guarantees without invoking additional sample-splitting. We show that our methods yield prediction sets with asymptotically optimal size under a certain notion of continuity for the model class. The improved efficiency of the prediction sets constructed by our methods is further demonstrated through applications to synthetic datasets in various settings and a real data example.
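The problematic baseline can be made concrete with a split-conformal sketch: compute each model's conformal interval half-width on the same hold-out residuals, then pick the narrowest. The double use of the hold-out set is exactly the selection bias described above; this snippet illustrates the problem, not the paper's proposed corrections.

```python
import numpy as np

def conformal_halfwidth(residuals, alpha):
    """Split-conformal half-width: the ceil((n+1)(1-alpha))-th order
    statistic of absolute hold-out residuals."""
    n = len(residuals)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(np.abs(residuals))[min(k, n) - 1]

def naive_select_then_calibrate(residuals_per_model, alpha=0.1):
    """Pick the model with the narrowest conformal interval, reusing
    the SAME hold-out residuals for selection and calibration.
    This reuse is what breaks the marginal coverage guarantee."""
    widths = [conformal_halfwidth(r, alpha) for r in residuals_per_model]
    best = int(np.argmin(widths))
    return best, widths[best]
```

With an independent calibration split, each half-width alone would yield valid coverage; taking the minimum over models makes the reported interval systematically too narrow, which is the gap the paper's finite-sample methods close without extra splitting.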
We develop a versatile framework which allows us to rigorously estimate the Hausdorff dimension of maximal conformal graph directed Markov systems in $\mathbb{R}^n$ for $n \geq 2$. Our method is based on piecewise linear approximations of the eigenfunctions of the Perron-Frobenius operator via a finite element framework for discretization and iterative mesh schemes. One key element in our approach is obtaining bounds for the derivatives of these eigenfunctions, which, besides being essential for the implementation of our method, are of independent interest.
In emergencies, the ability to quickly and accurately gather environmental data and command information, and to make timely decisions, is particularly critical. Traditional semantic communication frameworks, primarily based on a single modality, are susceptible to complex environments and lighting conditions, thereby limiting decision accuracy. To this end, this paper introduces a multimodal generative semantic communication framework named mm-GESCO. The framework ingests streams of visible and infrared modal image data, generates fused semantic segmentation maps, and transmits them using a combination of one-hot encoding and zlib compression techniques to enhance data transmission efficiency. At the receiving end, the framework can reconstruct the original multimodal images based on the semantic maps. Additionally, a latent diffusion model based on contrastive learning is designed to align different modal data within the latent space, allowing mm-GESCO to reconstruct latent features of any modality presented at the input. Experimental results demonstrate that mm-GESCO achieves a compression ratio of up to 200 times, surpassing the performance of existing semantic communication frameworks and exhibiting excellent performance in downstream tasks such as object classification and detection.
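The transmission step can be sketched directly: a class-label segmentation map is one-hot encoded, bit-packed, and deflated with zlib. The packing details below are assumptions for illustration; they mirror the described encoding but are not necessarily the authors' exact pipeline.

```python
import zlib
import numpy as np

def compress_segmap(seg, n_classes):
    """One-hot encode a semantic segmentation map, pack the bits,
    and compress with zlib for transmission."""
    onehot = (seg[..., None] == np.arange(n_classes)).astype(np.uint8)
    packed = np.packbits(onehot)  # 8 one-hot bits per byte
    return zlib.compress(packed.tobytes(), level=9)

def decompress_segmap(blob, shape, n_classes):
    """Invert compress_segmap: inflate, unpack bits, recover labels."""
    raw = np.frombuffer(zlib.decompress(blob), dtype=np.uint8)
    bits = np.unpackbits(raw)[: np.prod(shape) * n_classes]
    return bits.reshape(*shape, n_classes).argmax(-1)
```

One-hot maps are extremely sparse and repetitive, which is why generic deflate already compresses them well; the large end-to-end ratio reported above additionally reflects that only the semantic map, not the raw multimodal imagery, is transmitted.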