This paper investigates the language of propaganda and its stylistic features. It presents the PPN dataset, standing for Propagandist Pseudo-News, a multisource, multilingual, multimodal dataset composed of news articles extracted from websites identified as propaganda sources by expert agencies. A limited sample from this set was randomly mixed with articles from the regular French press, with their URLs masked, to conduct a human annotation experiment using 11 distinct labels. The results show that human annotators were able to reliably discriminate between the two types of press across each of the labels. We propose different NLP techniques to identify the cues used by the annotators and to compare them with machine classification. These include the analyzer VAGO, which measures discourse vagueness and subjectivity, a TF-IDF representation serving as a baseline, and four different classifiers: two RoBERTa-based models, CATS, which uses syntactic features, and an XGBoost model combining syntactic and semantic features. Keywords: Propaganda, Fake News, Explainability, AI alignment, Vagueness, Subjectivity, Exaggeration, Stylistic analysis
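To make the kind of TF-IDF baseline mentioned above concrete, here is a minimal sketch using scikit-learn. The corpus loader, the field layout, and the choice of logistic regression as the downstream classifier are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of a TF-IDF baseline for propaganda vs. regular-press
# classification, as a point of comparison for the neural classifiers.
# load_corpus() is a hypothetical helper returning parallel lists of texts
# and binary labels; all pipeline choices here are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

texts, labels = load_corpus()  # hypothetical loader

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=0)

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),  # word unigrams and bigrams
    LogisticRegression(max_iter=1000),
)
baseline.fit(X_train, y_train)
print(classification_report(y_test, baseline.predict(X_test)))
```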
In this paper, we investigate the reconstruction error, $N_\epsilon^{\text{rec}}(x)$, when a linear, filtered back-projection (FBP) algorithm is applied to noisy, discrete Radon transform data with sampling step size $\epsilon$ in two dimensions. Specifically, we analyze $N_\epsilon^{\text{rec}}(x)$ for $x$ in small, $O(\epsilon)$-sized neighborhoods around a generic fixed point, $x_0$, in the plane, where the measurement noise values, $\eta_{k,j}$ (i.e., the errors in the sinogram space), are random variables. The latter are independent, but not necessarily identically distributed. We show, under suitable assumptions on the first three moments of the $\eta_{k,j}$, that the following limit exists: $N^{\text{rec}}(\check{x};x_0) = \lim_{\epsilon\to0} N_\epsilon^{\text{rec}}(x_0+\epsilon\check{x})$, for $\check{x}$ in a bounded domain. Here, $N_\epsilon^{\text{rec}}$ and $N^{\text{rec}}$ are viewed as continuous random variables, and the limit is understood in the sense of distributions. Once the limit is established, we prove that $N^{\text{rec}}$ is a zero-mean Gaussian random field and compute its covariance explicitly. In addition, we validate our theory using numerical simulations and pseudo-random noise.
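A small numerical sketch in the spirit of the simulations mentioned at the end of the abstract, with scikit-image's radon/iradon standing in for the FBP algorithm; the phantom, noise level, and evaluation point are illustrative assumptions.

```python
# Sketch: apply FBP to a noisy discrete sinogram and inspect the empirical
# reconstruction error in a small window around a generic fixed point x0.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

rng = np.random.default_rng(0)
image = shepp_logan_phantom()                       # 400 x 400 test image
theta = np.linspace(0.0, 180.0, 400, endpoint=False)

sinogram = radon(image, theta=theta)
noise = rng.normal(scale=0.5, size=sinogram.shape)  # independent errors eta_{k,j}

recon_clean = iradon(sinogram, theta=theta, filter_name="ramp")
recon_noisy = iradon(sinogram + noise, theta=theta, filter_name="ramp")

# Empirical reconstruction error field, examined near an interior point x0.
error = recon_noisy - recon_clean
x0 = (200, 200)
window = error[x0[0] - 5:x0[0] + 6, x0[1] - 5:x0[1] + 6]
print(window.mean(), window.std())
```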
A fundamental aspect of statistics is the integration of data from different sources. Classically, Fisher and others focused on how to integrate homogeneous (or only mildly heterogeneous) sets of data. More recently, as data have become more accessible, the question of whether data sets from different sources should be integrated is becoming more relevant. The current literature treats this as a question with only two answers: integrate or don't. Here we take a different approach, motivated by information-sharing principles coming from the shrinkage estimation literature. In particular, we deviate from the do/don't perspective and propose a dial parameter that controls the extent to which two data sources are integrated. How far this dial parameter should be turned is shown to depend, for example, on the informativeness of the different data sources as measured by Fisher information. In the context of generalized linear models, this more nuanced data integration framework leads to relatively simple parameter estimates and valid tests/confidence intervals. Moreover, we demonstrate both theoretically and empirically that setting the dial parameter according to our recommendation leads to more efficient estimation compared to other binary data integration schemes.
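A toy numerical illustration of what a dial parameter can look like, for two Gaussian samples sharing a common mean, with the secondary source down-weighted by the dial times its Fisher information. This is only meant to picture the idea of interpolating between "don't integrate" and "fully integrate"; it is not the paper's estimator.

```python
# Toy illustration (not the paper's estimator) of a dial parameter tau in [0, 1]
# that interpolates between ignoring (tau = 0) and fully integrating (tau = 1)
# a secondary data source, weighting it by tau times its Fisher information.
import numpy as np

def dialed_estimate(x_primary, x_secondary, tau, sigma=1.0):
    """Precision-weighted mean with the secondary source down-weighted by tau."""
    info1 = len(x_primary) / sigma**2    # Fisher information of the primary source
    info2 = len(x_secondary) / sigma**2  # Fisher information of the secondary source
    w1, w2 = info1, tau * info2
    return (w1 * np.mean(x_primary) + w2 * np.mean(x_secondary)) / (w1 + w2)

rng = np.random.default_rng(1)
x1 = rng.normal(0.0, 1.0, size=50)    # small primary sample, true mean 0
x2 = rng.normal(0.3, 1.0, size=500)   # large but slightly biased secondary sample
for tau in (0.0, 0.25, 1.0):
    print(tau, dialed_estimate(x1, x2, tau))
```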
In this paper, we study parameter identification for solutions to (possibly non-linear) SDEs driven by an additive Rosenblatt process, and the singularity of the induced laws on the path space. We propose a joint estimator of the drift parameter, the diffusion intensity, and the Hurst index that can be computed from discrete-time observations on a bounded time horizon, and we prove its strong consistency (as well as its speed of convergence) under in-fill asymptotics with a fixed time horizon. As a consequence of this strong consistency, the singularity of the measures generated by solutions with different drifts is shown. This implies, in particular, that a Girsanov-type theorem does not hold for Rosenblatt processes.
This paper proposes a systematic and novel component-level co-rotational (CR) framework for upgrading existing 3D continuum finite elements to flexible multibody analysis. Without using any model reduction techniques, high efficiency is achieved through sophisticated operations in both the modeling and the numerical implementation phases. In the modeling phase, as in conventional 3D nonlinear finite element analysis, the nodal absolute coordinates are used as the system generalized coordinates, so that simple formulations of the inertia force terms can be obtained. For the elastic force terms, inspired by the existing floating frame of reference formulation (FFRF) and the conventional element-level CR formulation, a component-level CR modeling strategy is developed. By combining Schur complement theory with a full exploitation of the structure of the component-level CR model, an extremely efficient procedure is developed, which transforms the linear equations arising at each Newton-Raphson iteration into linear systems with a constant coefficient matrix. The coefficient matrix can therefore be pre-computed and factorized only once, and at all subsequent time steps only back substitutions are needed, which avoids repeatedly updating the Jacobian matrix and directly solving the large-scale linearized equations at every iteration. Multiple examples are presented to demonstrate the performance of the proposed framework.
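The computational pattern behind the efficiency claim (factorize once, back-substitute many times) can be sketched as follows; the matrix, right-hand sides, and loop structure are placeholders, not the paper's actual system.

```python
# Sketch of "factorize once, back-substitute thereafter": when the Newton-Raphson
# linear systems share a constant coefficient matrix K, LU-factorize K once before
# time stepping and reuse the factorization at every iteration and time step.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n = 2000
rng = np.random.default_rng(0)
K = np.eye(n) + 1e-3 * rng.standard_normal((n, n))  # stand-in constant matrix

lu_piv = lu_factor(K)            # expensive: done once, before time stepping

for step in range(100):          # time steps
    for it in range(5):          # Newton-Raphson iterations
        rhs = rng.standard_normal(n)      # residual vector (placeholder)
        delta = lu_solve(lu_piv, rhs)     # cheap: forward/back substitution only
        # ... update the generalized coordinates with delta ...
```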
Fourth-order variational inequalities are encountered in various scientific and engineering disciplines, including elliptic optimal control problems and plate obstacle problems. In this paper, we consider additive Schwarz methods for solving fourth-order variational inequalities. Based on a unified framework of various finite element methods for fourth-order variational inequalities, we develop one- and two-level additive Schwarz methods. We prove that the two-level method is scalable in the sense that the convergence rate of the method depends on $H/h$ and $H/\delta$ only, where $h$ and $H$ are the typical diameters of an element and a subdomain, respectively, and $\delta$ measures the overlap among the subdomains. This proof relies on a new nonlinear positivity-preserving coarse interpolation operator, the construction of which was previously unknown. To the best of our knowledge, this analysis represents the first investigation into the scalability of the two-level additive Schwarz method for fourth-order variational inequalities. Our theoretical results are verified by numerical experiments.
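For readers unfamiliar with the subdomain-decomposition structure underlying these methods, the following sketch shows a generic damped one-level additive Schwarz iteration for a linear model problem. It illustrates only the sum-of-local-corrections structure; the methods analyzed in the paper additionally handle the obstacle constraint, the fourth-order discretization, and a coarse level, which are not reproduced here.

```python
# Generic damped one-level additive Schwarz iteration for A u = f on a 1D
# model problem; the overlapping index sets and damping factor are illustrative.
import numpy as np

n = 40
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
f = np.ones(n)

# Overlapping subdomains given as index sets.
subdomains = [np.arange(0, 25), np.arange(15, 40)]

def additive_schwarz_step(u, A, f, subdomains, theta=0.5):
    """One damped additive Schwarz step: sum of local subdomain corrections."""
    r = f - A @ u
    correction = np.zeros_like(u)
    for idx in subdomains:
        A_loc = A[np.ix_(idx, idx)]
        correction[idx] += np.linalg.solve(A_loc, r[idx])
    return u + theta * correction     # damping accounts for the overlap

u = np.zeros(n)
for _ in range(200):
    u = additive_schwarz_step(u, A, f, subdomains)
print(np.linalg.norm(f - A @ u))      # residual shrinks as iterations proceed
```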
Creating Computer Vision (CV) models remains a complex practice, despite their ubiquity. Access to data, the requirement for ML expertise, and model opacity are just a few points of complexity that limit the ability of end-users to build, inspect, and improve these models. Interactive ML perspectives have helped address some of these issues by considering a teacher in the loop, where planning, teaching, and evaluating tasks take place. We present and evaluate two interactive visualizations in the context of Sprite, a system for creating CV classification and detection models for images originating from videos. We study how these visualizations help Sprite's users identify (evaluate) and select (plan) images where a model is struggling, which can lead to improved performance, compared to a baseline condition in which users relied on a query language. We found that users of the visualizations identified more images across a wider set of potential types of model errors.
Large-scale applications of Visual Place Recognition (VPR) require computationally efficient approaches. Further, a well-balanced combination of data-based and training-free approaches can decrease the amount of training data and effort required and can reduce the influence of distribution shifts between the training and application phases. This paper proposes a runtime- and data-efficient hierarchical VPR pipeline that extends existing approaches and presents novel ideas. There are three main contributions: First, we propose Local Positional Graphs (LPG), a training-free and runtime-efficient approach to encode spatial context information of local image features. LPG can be combined with existing local feature detectors and descriptors and considerably improves the image-matching quality compared to existing techniques in our experiments. Second, we present Attentive Local SPED (ATLAS), an extension of our previous local features approach with an attention module that improves the feature quality while maintaining high data efficiency. The influence of the proposed modifications is evaluated in an extensive ablation study. Third, we present a hierarchical pipeline that exploits hyperdimensional computing to reuse the same local features both as holistic HDC descriptors for fast candidate selection and for candidate reranking. We combine all contributions in a runtime- and data-efficient VPR pipeline that shows benefits over the state-of-the-art method Patch-NetVLAD on a large collection of standard place recognition datasets, with 15$\%$ better VPR accuracy, 54$\times$ faster feature comparison speed, and 55$\times$ less descriptor storage occupancy, making our method promising for real-world high-performance large-scale VPR in changing environments. Code will be made available with publication of this paper.
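A schematic illustration of the hyperdimensional-computing idea behind the fast candidate selection step: project local descriptors into a common high-dimensional space, bind them to position codes, and bundle the result into a single holistic descriptor that can be compared with a dot product. The dimensions, random projection, and position codes are illustrative assumptions, not the paper's exact pipeline.

```python
# Schematic HDC-style aggregation of local features into a holistic descriptor
# for fast candidate selection (all numbers and encodings are illustrative).
import numpy as np

rng = np.random.default_rng(0)
D = 4096                                             # hypervector dimensionality
proj = rng.standard_normal((D, 256))                 # projection for 256-d local descriptors
pos_codes = np.sign(rng.standard_normal((64, D)))    # one bipolar code per spatial cell

def holistic_descriptor(local_descs, cell_ids):
    """Bundle position-bound local descriptors into one holistic HDC descriptor."""
    bound = np.sign(proj @ local_descs.T).T * pos_codes[cell_ids]  # bind: elementwise product
    h = bound.sum(axis=0)                                          # bundle: summation
    return h / (np.linalg.norm(h) + 1e-12)

# Candidate selection: cosine similarity between the query and database descriptors.
db = np.stack([holistic_descriptor(rng.standard_normal((100, 256)),
                                   rng.integers(0, 64, 100)) for _ in range(500)])
query = holistic_descriptor(rng.standard_normal((100, 256)), rng.integers(0, 64, 100))
candidates = np.argsort(db @ query)[::-1][:10]   # top-10 candidates passed to reranking
```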
This paper develops an in-depth treatment of the problem of approximating the Gaussian smoothing and Gaussian derivative computations in scale-space theory for application to discrete data. With close connections to previous axiomatic treatments of continuous and discrete scale-space theory, we consider three main ways of discretizing these scale-space operations in terms of explicit discrete convolutions, based on either (i) sampling the Gaussian kernels and the Gaussian derivative kernels, (ii) locally integrating the Gaussian kernels and the Gaussian derivative kernels over each pixel support region, or (iii) basing the scale-space analysis on the discrete analogue of the Gaussian kernel and then computing derivative approximations by applying small-support central difference operators to the spatially smoothed image data. We study the properties of these three main discretization methods both theoretically and experimentally, and characterize their performance by quantitative measures, including the results they give rise to with respect to the task of scale selection, investigated for four different use cases, with an emphasis on the behaviour at fine scales. The results show that the sampled Gaussian kernels and derivatives, as well as the integrated Gaussian kernels and derivatives, perform very poorly at very fine scales, where the discrete analogue of the Gaussian kernel with its corresponding discrete derivative approximations performs substantially better. The sampled Gaussian kernel and the sampled Gaussian derivatives do, on the other hand, lead to numerically very good approximations of the corresponding continuous results when the scale parameter is sufficiently large; in the experiments presented in the paper, this means scale parameters greater than about 1, in units of the grid spacing.
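A small sketch contrasting two of the discretizations discussed: the sampled Gaussian kernel and the discrete analogue of the Gaussian kernel, which is defined through modified Bessel functions of integer order, $T(n; t) = e^{-t} I_n(t)$. The scale values and truncation radius below are illustrative choices.

```python
# Compare the sampled Gaussian kernel with the discrete analogue of the
# Gaussian kernel, T(n; t) = exp(-t) I_n(t), over a truncated support.
import numpy as np
from scipy.special import ive   # exponentially scaled Bessel: ive(n, t) = exp(-t) I_n(t)

def sampled_gaussian(n, t):
    return np.exp(-n**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def discrete_gaussian(n, t):
    return ive(np.abs(n), t)

n = np.arange(-10, 11)
for t in (0.25, 1.0, 4.0):       # t = sigma^2, in units of the grid spacing squared
    s, d = sampled_gaussian(n, t), discrete_gaussian(n, t)
    # At fine scales the sampled kernel's coefficients do not sum to 1, whereas
    # the discrete analogue is normalized by construction (up to truncation).
    print(t, s.sum(), d.sum())
```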
The brains of all bilaterally symmetric animals on Earth are divided into left and right hemispheres. The anatomy and functionality of the hemispheres have a large degree of overlap, but there are asymmetries, and they specialise to possess different attributes. Other authors have used computational models to mimic hemispheric asymmetries with a focus on reproducing human data on semantic and visual processing tasks. We took a different approach and aimed to understand how dual hemispheres in a bilateral architecture interact to perform well in a given task. We propose a bilateral artificial neural network that imitates the lateralisation observed in nature: the left hemisphere specialises in specificity and the right in generality. We used different training objectives to achieve the desired specialisation and tested it on an image classification task with two different CNN backbones -- ResNet and VGG. Our analysis found that the hemispheres represent complementary features that are exploited by a network head which implements a type of weighted attention. The bilateral architecture outperformed a range of baselines of similar representational capacity that don't exploit differential specialisation, with the exception of a conventional ensemble of unilateral networks trained on a dual training objective for specifics and generalities. The results demonstrate the efficacy of bilateralism, contribute to the discussion of bilateralism in biological brains, and suggest that the principle may serve as an inductive bias for new AI systems.
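A schematic PyTorch sketch of this kind of bilateral architecture: two hemisphere backbones, intended to be trained toward specific and general objectives respectively, and a head that mixes their features with a learned, attention-like weighting. Layer sizes, the gating rule, and the use of two ResNet-18 backbones are assumptions for illustration, not the authors' exact model.

```python
# Schematic bilateral network: two backbones ("hemispheres") whose features are
# fused by a learned weighting before classification (illustrative assumptions).
import torch
import torch.nn as nn
from torchvision.models import resnet18

def make_backbone():
    m = resnet18()
    m.fc = nn.Identity()      # expose the 512-d feature vector
    return m

class BilateralNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.left = make_backbone()    # to be trained toward specific (fine) labels
        self.right = make_backbone()   # to be trained toward general (coarse) labels
        self.gate = nn.Sequential(nn.Linear(1024, 2), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, x):
        fl, fr = self.left(x), self.right(x)          # (B, 512) each
        w = self.gate(torch.cat([fl, fr], dim=-1))    # (B, 2) hemisphere weights
        fused = w[:, :1] * fl + w[:, 1:] * fr         # weighted mix of hemisphere features
        return self.classifier(fused)

model = BilateralNet(num_classes=100)
logits = model(torch.randn(4, 3, 224, 224))           # (4, 100)
```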
Confidence intervals based on the central limit theorem (CLT) are a cornerstone of classical statistics. Despite being only asymptotically valid, they are ubiquitous because they permit statistical inference under weak assumptions and can often be applied to problems even when nonasymptotic inference is impossible. This paper introduces time-uniform analogues of such asymptotic confidence intervals, adding to the literature on confidence sequences (CS) -- sequences of confidence intervals that are uniformly valid over time -- which provide valid inference at arbitrary stopping times and incur no penalties for "peeking" at the data, unlike classical confidence intervals which require the sample size to be fixed in advance. Existing CSs in the literature are nonasymptotic, enjoying finite-sample guarantees but not the aforementioned broad applicability of asymptotic confidence intervals. This work provides a definition for "asymptotic CSs" and a general recipe for deriving them. Asymptotic CSs forgo nonasymptotic validity for CLT-like versatility and (asymptotic) time-uniform guarantees. While the CLT approximates the distribution of a sample average by that of a Gaussian for a fixed sample size, we use strong invariance principles (stemming from the seminal 1960s work of Strassen) to uniformly approximate the entire sample average process by an implicit Gaussian process. As an illustration, we derive asymptotic CSs for the average treatment effect in observational studies (for which nonasymptotic bounds are essentially impossible to derive even in the fixed-time regime) as well as randomized experiments, enabling causal inference in sequential environments.
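To give a feel for the general recipe, here is a sketch of a confidence sequence for a running mean built from a Gaussian-mixture (Robbins-type) boundary applied to the running average and running standard deviation. The tuning constant rho and this particular boundary are illustrative of the style of construction, not a verbatim transcription of the paper's results.

```python
# Sketch of an asymptotic confidence sequence for a mean: running average plus
# or minus a Gaussian-mixture boundary scaled by the running standard deviation.
import numpy as np

def asymptotic_cs(x, alpha=0.05, rho=1.0):
    """Return running (lower, upper) bounds for the mean of x_1, ..., x_t."""
    x = np.asarray(x, dtype=float)
    t = np.arange(1, len(x) + 1)
    mean = np.cumsum(x) / t
    var = np.cumsum(x**2) / t - mean**2          # running (biased) variance estimate
    sd = np.sqrt(np.maximum(var, 1e-12))
    radius = sd * np.sqrt((t * rho**2 + 1) / (t**2 * rho**2)
                          * np.log((t * rho**2 + 1) / alpha**2))
    return mean - radius, mean + radius

rng = np.random.default_rng(0)
lo, hi = asymptotic_cs(rng.normal(loc=1.0, scale=2.0, size=10_000))
print(lo[-1], hi[-1])    # time-uniform interval after 10,000 observations
```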