The remarkable generative capabilities of denoising diffusion models have raised new concerns regarding the authenticity of the images we see every day on the Internet. However, the vast majority of existing deepfake detection models are tested against earlier generative approaches (e.g., GANs) and usually provide only a "fake" or "real" label per image. We believe a more informative output would be to augment the per-image label with a localization map indicating which regions of the input have been manipulated. To this end, we frame this task as a weakly-supervised localization problem and identify three main categories of methods (based on either explanations, local scores or attention), which we compare on an equal footing by using the Xception network as the common backbone architecture. We provide a careful analysis of all the main factors that parameterize the design space: choice of method, type of supervision, dataset and generator used in the creation of manipulated images; our study is enabled by constructing datasets in which only one of the components is varied. Our results show that weakly-supervised localization is attainable, with the best-performing detection method (based on local scores) being less sensitive to the looser supervision than to a mismatch in dataset or generator.
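To make the "local scores" family concrete, the following is a minimal, hypothetical sketch of a weakly-supervised localization detector: a convolutional head assigns a manipulation score to each spatial cell of a feature map, the scores are pooled into an image-level prediction trained with only real/fake labels, and the unpooled map serves as the localization output. The tiny backbone, pooling choice, and layer sizes below are stand-ins (a PyTorch setup is assumed), not the configuration used in the study, which employs the Xception network.

```python
# Illustrative sketch only (not the paper's exact model): a "local scores"
# detector that outputs a per-patch manipulation map but is trained with
# image-level real/fake labels alone. A tiny CNN stands in for Xception.
import torch
import torch.nn as nn

class LocalScoreDetector(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # stand-in backbone; the study uses Xception features instead
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 conv turns each spatial cell into a local "fakeness" logit
        self.local_head = nn.Conv2d(feat_dim, 1, kernel_size=1)

    def forward(self, x):
        score_map = self.local_head(self.backbone(x))   # B x 1 x h x w local scores
        image_logit = score_map.mean(dim=(2, 3))        # pooled image-level prediction
        return image_logit, score_map                   # the map doubles as localization

# weak supervision: only the pooled logit receives a real/fake label
model = LocalScoreDetector()
images = torch.randn(2, 3, 128, 128)
labels = torch.tensor([[1.0], [0.0]])                   # 1 = fake, 0 = real
logit, localization = model(images)
loss = nn.functional.binary_cross_entropy_with_logits(logit, labels)
```

In such a setup, the `localization` map can be upsampled to the input resolution and thresholded to obtain a binary manipulation mask at test time.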
A wide variety of model explanation approaches have been proposed in recent years, all guided by very different rationales and heuristics. In this paper, we take a new route and cast interpretability as a statistical inference problem. We propose a general deep probabilistic model designed to produce interpretable predictions. The model parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture and any type of prediction problem. Our method is an instance of amortized interpretability, in which a neural network acts as a selector to allow for fast interpretation at inference time. Several popular interpretability methods are shown to be particular cases of regularised maximum likelihood for our general model. We propose new datasets with ground-truth selections, which allow for the evaluation of feature importance maps. Using these datasets, we show experimentally that using multiple imputation provides more reasonable interpretations.
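As a rough illustration of the amortized selector idea, the sketch below pairs a selector network (which outputs per-feature selection probabilities) with a predictor that receives the selected features, while the unselected features are filled in by multiple imputation and the resulting predictions are averaged. The Gaussian imputer, the straight-through mask sampling, and the layer sizes are assumptions made for a self-contained example, not the exact specification of the proposed model.

```python
# Hedged sketch of an amortized selector with multiple imputation (layer sizes,
# the Gaussian imputer and the straight-through sampling are assumptions, not
# the paper's exact specification).
import torch
import torch.nn as nn

class AmortizedInterpreter(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.selector = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                      nn.Linear(64, d_in))      # per-feature logits
        self.predictor = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                       nn.Linear(64, d_out))

    def forward(self, x, n_imputations=5):
        probs = torch.sigmoid(self.selector(x))                 # selection probabilities
        mask = torch.bernoulli(probs)                           # sampled binary mask
        mask = mask + probs - probs.detach()                    # straight-through gradients
        preds = []
        for _ in range(n_imputations):                          # multiple imputation:
            imputed = torch.randn_like(x)                       # resample unselected features
            preds.append(self.predictor(mask * x + (1 - mask) * imputed))
        return torch.stack(preds).mean(0), probs                # averaged prediction, importance map

interpreter = AmortizedInterpreter(d_in=20, d_out=2)
y_hat, importances = interpreter(torch.randn(8, 20))
```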
Multi-modal Magnetic Resonance Imaging (MRI) offers complementary diagnostic information, but some modalities are limited by the long scanning time. To accelerate the whole acquisition process, MRI reconstruction of one modality from highly undersampled k-space data with another fully-sampled reference modality is an efficient solution. However, the misalignment between modalities, which is common in clinical practice, can negatively affect reconstruction quality. Existing deep learning-based methods that account for inter-modality misalignment perform better, but still share two common limitations: (1) the spatial alignment task is not adaptively integrated with the reconstruction process, resulting in insufficient complementarity between the two tasks; (2) the entire framework has weak interpretability. In this paper, we construct a novel Deep Unfolding Network with Spatial Alignment, termed DUN-SA, to appropriately embed the spatial alignment task into the reconstruction process. Concretely, we derive a novel joint alignment-reconstruction model with a specially designed cross-modal spatial alignment term. By relaxing the model into cross-modal spatial alignment and multi-modal reconstruction tasks, we propose an effective algorithm that solves this model in an alternating fashion. Then, we unfold the iterative steps of the proposed algorithm and design corresponding network modules to build DUN-SA with interpretability. Through end-to-end training, we effectively compensate for spatial misalignment using only the reconstruction loss, and utilize the progressively aligned reference modality to provide an inter-modality prior that improves the reconstruction of the target modality. Comprehensive experiments on three real datasets demonstrate that our method exhibits superior reconstruction performance compared to state-of-the-art methods.
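A generic joint alignment-reconstruction model of the kind described here (the exact data-fidelity, prior, and regularization terms of DUN-SA may differ) can be written as

$$
\min_{x,\,\phi}\ \tfrac{1}{2}\big\|\mathcal{A}x - y\big\|_2^2 \;+\; \lambda\, R\big(x,\ T_\phi(x_{\mathrm{ref}})\big) \;+\; \mu\, Q(\phi),
$$

where $y$ is the undersampled k-space data of the target modality, $\mathcal{A}$ the undersampled Fourier encoding operator, $x_{\mathrm{ref}}$ the fully-sampled reference modality, $T_\phi$ a spatial transformation with deformation parameters $\phi$, $R$ a cross-modal prior coupling the two modalities, and $Q$ a regularizer on the deformation. Alternating minimization over $\phi$ (alignment) and $x$ (reconstruction) yields iterative steps that can then be unrolled into network stages.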
Resonance-based numerical schemes are those that exploit cancellations in the oscillatory components of the equation in order to reduce the regularity required of the initial data to achieve a particular order of error and convergence. We investigate the potential for the derivation of resonance-based schemes in the context of nonlinear stochastic PDEs. By comparing the regularity conditions required for the error analysis with those of traditional exponential schemes, we demonstrate that at orders below $\mathcal{O}(t^2)$ the techniques are successful and provide a significant gain in the regularity of the initial data, while at orders above $\mathcal{O}(t^2)$ the resonance-based techniques achieve no gain. This is due to limitations in the explicit path-wise analysis of stochastic integrals. As examples of applications of the method, we present schemes for the Schr\"odinger equation and Manakov system, accompanied by local error and stability analyses as well as proofs of global convergence in both the strong and path-wise sense.
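For orientation, consider the deterministic cubic Schr\"odinger equation $i\partial_t u = -\partial_x^2 u + |u|^2 u$ as a model case (the stochastic setting treated here additionally requires a path-wise handling of the noise). Duhamel's formula over one step of size $\tau$ reads

$$
u(t_n+\tau) \;=\; e^{i\tau\partial_x^2}\,u(t_n)\;-\; i\int_0^\tau e^{i(\tau-s)\partial_x^2}\,\big(|u|^2u\big)(t_n+s)\,\mathrm{d}s .
$$

Classical exponential integrators approximate the integral by freezing the nonlinearity at $s=0$, and the resulting remainder is controlled by extra smoothness of the solution; resonance-based schemes instead expand the integrand in Fourier space and integrate the dominant oscillatory phases exactly, which is what lowers the regularity required of the initial data for a given order.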
We propose and analyze discontinuous Galerkin (dG) approximations to 3D-1D coupled systems which model diffusion in a 3D domain containing a small inclusion reduced to its 1D centerline. Convergence to weak solutions of a steady-state problem is established by deriving a posteriori error estimates and bounds on residuals defined with suitable lift operators. For the time-dependent problem, a backward Euler dG formulation is also presented and analysed. Further, we propose a dG method for networks embedded in 3D domains, which is, up to jump terms, locally mass conservative at bifurcation points. Numerical examples in idealized geometries illustrate our theoretical findings, and simulations in realistic 1D networks show the robustness of our method.
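A representative steady 3D-1D model problem of this type (the notation is illustrative and the coefficients simplified, not necessarily the exact system analysed here) couples a line-source diffusion equation in the bulk with a one-dimensional equation on the centerline $\Lambda$:

$$
-\Delta u \;=\; \beta\,(\hat u - \bar u)\,\delta_\Lambda \quad\text{in }\Omega\subset\mathbb{R}^3,
\qquad
-\partial_{ss}\hat u \;+\; \beta\,(\hat u - \bar u) \;=\; \hat f \quad\text{on }\Lambda,
$$

where $\hat u$ is the 1D unknown, $\bar u$ denotes a cross-sectional average of the 3D solution $u$ around the inclusion, $\delta_\Lambda$ is a Dirac line measure, and $\beta$ is an exchange coefficient; the exchange term appears with opposite signs in the two equations, which is what makes local mass conservation a natural requirement for the discretization.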
This article explores the estimation of precision matrices in high-dimensional Gaussian graphical models. We address the challenge of improving the accuracy of maximum likelihood-based precision estimation through penalization. Specifically, we consider an elastic net penalty, which incorporates both L1 and Frobenius norm penalties while accounting for the target matrix during estimation. To enhance precision matrix estimation, we propose a novel two-step estimator that combines the strengths of ridge and graphical lasso estimators, with the aim of improving overall estimation performance. Our empirical analysis demonstrates the superior efficiency of the proposed method compared to alternative approaches. We validate the effectiveness of our proposal through numerical experiments and applications to three real datasets. These examples illustrate the practical applicability and usefulness of our proposed estimator.
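A representative target-informed elastic net objective for the precision matrix $\Theta$ (written here in a generic form; the exact parameterization used in the article may differ) is

$$
\widehat{\Theta} \;=\; \arg\max_{\Theta \succ 0}\;\Big\{\log\det\Theta \;-\; \operatorname{tr}(S\Theta)\;-\;\lambda_1\,\|\Theta - T\|_1\;-\;\tfrac{\lambda_2}{2}\,\|\Theta - T\|_F^2\Big\},
$$

where $S$ is the sample covariance matrix, $T$ the target matrix, and $\lambda_1,\lambda_2 \ge 0$ control the sparsity ($L_1$) and shrinkage (Frobenius) components, respectively; setting $\lambda_2 = 0$ and $T = 0$ recovers the standard graphical lasso.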
Knowledge tracing consists of predicting students' performance on new questions given their performance on previous questions, and can be a prior step to optimizing assessment and learning. Deep knowledge tracing (DKT) is a competitive model for knowledge tracing relying on recurrent neural networks, even if some simpler models may match its performance. However, little is known about why DKT works so well. In this paper, we frame deep knowledge tracing as an encoder-decoder architecture. This viewpoint not only allows us to propose better models in terms of performance, simplicity or expressivity, but also opens up promising avenues for future research. In particular, we show on several small and large datasets that a simpler decoder, with possibly fewer parameters than the one used by DKT, can predict student performance better.
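To illustrate the encoder-decoder reading of knowledge tracing, here is a minimal hypothetical sketch: a recurrent encoder summarizes the (question, correctness) history into a student state, and a deliberately simple decoder, a dot product with the next question's embedding, turns that state into a predicted probability of a correct answer. The embedding sizes, the GRU encoder, and the dot-product decoder are assumptions for the example, not the exact models compared in the paper.

```python
# Minimal encoder-decoder sketch for knowledge tracing (illustrative only).
import torch
import torch.nn as nn

class EncoderDecoderKT(nn.Module):
    def __init__(self, n_questions, d_hidden=64):
        super().__init__()
        self.n_questions = n_questions
        # encoder: summarize the (question, correctness) history into a student state
        self.interaction_emb = nn.Embedding(2 * n_questions, d_hidden)
        self.encoder = nn.GRU(d_hidden, d_hidden, batch_first=True)
        # decoder: a simple dot product between the state and the next question's embedding
        self.question_emb = nn.Embedding(n_questions, d_hidden)

    def forward(self, past_q, past_correct, next_q):
        # interaction id jointly encodes the question and whether it was answered correctly
        interactions = self.interaction_emb(past_q + self.n_questions * past_correct)
        _, state = self.encoder(interactions)                  # state: 1 x B x d_hidden
        logits = (state.squeeze(0) * self.question_emb(next_q)).sum(-1)
        return torch.sigmoid(logits)                           # P(next answer correct)

# usage: a batch of 2 students with history length 5
model = EncoderDecoderKT(n_questions=100)
p = model(torch.randint(0, 100, (2, 5)), torch.randint(0, 2, (2, 5)), torch.randint(0, 100, (2,)))
```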
The guesswork of a classical-quantum channel quantifies the cost incurred in guessing the state transmitted by the channel when only one state can be queried at a time, maximized over any classical pre-processing and minimized over any quantum post-processing. For arbitrary-dimensional covariant classical-quantum channels, we prove the invariance of the optimal pre-processing and the covariance of the optimal post-processing. In the qubit case, we compute the optimal guesswork for the class of so-called highly symmetric informationally complete classical-quantum channels.
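For intuition, recall the fully classical special case: if the guesser must identify a random message $X$ with distribution $p$ by querying one candidate at a time, the minimal expected number of guesses (the guesswork) is attained by guessing in order of decreasing probability,

$$
G(X) \;=\; \sum_{k=1}^{|\mathcal{X}|} k\, p_{(k)}, \qquad p_{(1)} \ge p_{(2)} \ge \cdots \ge p_{(|\mathcal{X}|)} .
$$

In the classical-quantum setting considered here, the guessing order is instead produced by quantum post-processing (a sequence of measurements) of the channel output, and the cost is maximized over classical pre-processing of the input and minimized over that post-processing, as stated above.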
We consider finite-dimensional Bayesian linear inverse problems with Gaussian priors and additive Gaussian noise models. The goal of this note is to present a simple derivation of the well-known fact that solving the Bayesian D-optimal experimental design problem, i.e., maximizing the expected information gain, is equivalent to minimizing the log-determinant of the posterior covariance operator. We focus on finite-dimensional inverse problems; however, the presentation is kept generic to facilitate extensions to infinite-dimensional inverse problems.
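In a generic linear-Gaussian notation (the symbols here are illustrative), the fact in question can be stated as follows. For data $y = Gm + \eta$ with $\eta \sim \mathcal{N}(0,\Gamma_{\mathrm{noise}})$ and prior $m \sim \mathcal{N}(m_0,\Gamma_{\mathrm{pr}})$, the posterior is Gaussian with covariance

$$
\Gamma_{\mathrm{post}} \;=\; \big(G^\top \Gamma_{\mathrm{noise}}^{-1} G + \Gamma_{\mathrm{pr}}^{-1}\big)^{-1},
$$

which does not depend on the observed data, and the expected information gain (the expected Kullback-Leibler divergence from the prior to the posterior) reduces to

$$
\Phi_{\mathrm{EIG}} \;=\; \tfrac{1}{2}\big(\log\det\Gamma_{\mathrm{pr}} \;-\; \log\det\Gamma_{\mathrm{post}}\big),
$$

so that maximizing the expected information gain is equivalent to minimizing $\log\det\Gamma_{\mathrm{post}}$, since $\Gamma_{\mathrm{pr}}$ is fixed.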
Marginal structural models (MSMs) are often used to estimate causal effects of treatments on survival time outcomes from observational data when time-dependent confounding may be present. They can be fitted using, e.g., inverse probability of treatment weighting (IPTW). It is important to evaluate the performance of statistical methods in different scenarios, and simulation studies are a key tool for such evaluations. In such simulation studies, it is common to generate data in such a way that the model of interest is correctly specified, but this is not always straightforward when the model of interest is for potential outcomes, as is an MSM. Methods have been proposed for simulating from MSMs for a survival outcome, but these methods impose restrictions on the data-generating mechanism. Here we propose a method that overcomes these restrictions. The MSM can be a marginal structural logistic model for a discrete survival time or a Cox or additive hazards MSM for a continuous survival time. The hazard of the potential survival time can be conditional on baseline covariates, and the treatment variable can be discrete or continuous. We illustrate the use of the proposed simulation algorithm by carrying out a brief simulation study. This study compares the coverage of confidence intervals calculated in two different ways for causal effect estimates obtained by fitting an MSM via IPTW.
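As one representative example of the models covered, a marginal structural Cox model for the potential survival time $T^{\bar a}$ under treatment history $\bar a$, conditional on baseline covariates $V$, has hazard

$$
\lambda_{T^{\bar a}}(t \mid V) \;=\; \lambda_0(t)\,\exp\!\big\{\psi\, a(t) + \gamma^\top V\big\},
$$

and is typically fitted by weighting each subject's risk-set contribution at time $t$ with (stabilized) inverse probability of treatment weights; this notation is generic and not specific to the proposed simulation algorithm.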
Scientific claims gain credibility through replicability, especially if replication under different circumstances and varying designs yields equivalent results. Aggregating results over multiple studies is, however, not straightforward, and when the heterogeneity between studies increases, conventional methods such as (Bayesian) meta-analysis and Bayesian sequential updating become infeasible. *Bayesian Evidence Synthesis*, built upon the foundations of the Bayes factor, allows researchers to aggregate support for conceptually similar hypotheses over studies, regardless of methodological differences. We assess the performance of Bayesian Evidence Synthesis over multiple effect sizes and sample sizes, with a broad set of (inequality-constrained) hypotheses, using Monte Carlo simulations, focusing explicitly on the complexity of the hypotheses under consideration. The simulations show that this method can evaluate complex (informative) hypotheses regardless of methodological differences between studies, and performs adequately if the set of studies considered has sufficient statistical power. Additionally, we pinpoint challenging conditions that can lead to unsatisfactory results, and provide suggestions for handling these situations. Ultimately, we show that Bayesian Evidence Synthesis is a promising tool that can be used when traditional research synthesis methods are not applicable due to insurmountable between-study heterogeneity.
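One common way to present the aggregation step, assuming independent studies $s = 1,\dots,S$ and an unconstrained model $H_u$ as the reference, is through the product of per-study Bayes factors for the informative hypothesis $H_i$,

$$
\mathrm{BF}_{iu}^{\mathrm{combined}} \;=\; \prod_{s=1}^{S} \mathrm{BF}_{iu,s},
$$

which can then be converted into posterior model probabilities under chosen prior model probabilities; the exact set of hypotheses and reference models used in the simulations is described in the study itself.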