In multisite trials, statistical goals often include obtaining individual site-specific treatment effects, determining their rankings, and examining their distribution across multiple sites. This paper explores two strategies for improving inferences related to site-specific effects: (a) semiparametric modeling of the prior distribution using Dirichlet process mixture (DPM) models to relax the normality assumption, and (b) using estimators other than the posterior mean, such as the constrained Bayes or triple-goal estimators, to summarize the posterior. We conduct a large-scale simulation study, calibrated to multisite trials common in education research. We then explore the conditions under which, and degrees to which, these strategies and their combinations succeed or falter in limited-data environments. We find that the average reliability of within-site effect estimates is crucial for determining effective estimation strategies. In settings with low-to-moderate data informativeness, flexible DPM models perform no better than the simple parametric Gaussian model coupled with a posterior summary method tailored to a specific inferential goal. DPM models outperform Gaussian models only in select high-information settings, indicating considerable sensitivity to the level of cross-site information available in the data. We discuss the implications of our findings for balancing trade-offs associated with shrinkage in the design and analysis of future multisite randomized experiments.
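As an illustration of the posterior summary methods mentioned above, the sketch below implements a simplified Ghosh-style constrained Bayes adjustment in NumPy: posterior means are rescaled away from their grand mean so that the ensemble of estimates is not under-dispersed relative to the posterior expectation of the true effect dispersion. The function name is ours, and the finite-sample correction term is omitted for brevity; this is a sketch of the general idea, not the paper's exact estimator.

```python
import numpy as np

def constrained_bayes(post_means, post_vars):
    """Rescale posterior means so their spread approximately matches the
    posterior expected spread of the true site effects (constrained Bayes).

    Simplified form: deviations from the grand mean are inflated by
    a = sqrt(1 + sum(v_j) / sum((m_j - m_bar)^2)), with a > 1.
    """
    m = np.asarray(post_means, dtype=float)
    v = np.asarray(post_vars, dtype=float)
    m_bar = m.mean()
    spread = ((m - m_bar) ** 2).sum()          # dispersion of posterior means
    a = np.sqrt(1.0 + v.sum() / spread)        # inflation factor
    return m_bar + a * (m - m_bar)
```

The adjustment counteracts the over-shrinkage of posterior means: the grand mean is preserved while the between-site spread of the estimates is inflated.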
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models. They comprise a neural ODE and a certified upper bound on the error between the abstract neural network and the concrete dynamical model. So far, neural abstractions have exclusively been obtained as neural networks consisting entirely of ReLU activation functions, resulting in neural ODE models that have piecewise affine dynamics, and which can be equivalently interpreted as linear hybrid automata. In this work, we observe that the utility of an abstraction depends on its use: some scenarios might require coarse abstractions that are easier to analyse, whereas others might require more complex, refined abstractions. We therefore consider neural abstractions of alternative shapes, namely either piecewise constant or nonlinear non-polynomial (specifically, obtained via sigmoidal activations). We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics. Empirically, we demonstrate the trade-off that these different neural abstraction templates exhibit vis-à-vis their precision and synthesis time, as well as the time required for their safety verification (done via reachability computation). We improve existing synthesis techniques to enable abstraction of higher-dimensional models, and additionally discuss the abstraction of complex neural ODEs to improve the efficiency of reachability analysis for these models.
Recently, Chatterjee (2021) introduced a new rank-based correlation coefficient which can be used to measure the strength of dependence between two random variables. This coefficient has already attracted much attention as it converges to the Dette-Siburg-Stoimenov measure (see Dette et al. (2013)), which equals $0$ if and only if the variables are independent and $1$ if and only if one variable is a function of the other. Further, Chatterjee's coefficient is computable in (near) linear time, which makes it appropriate for large-scale applications. In this paper, we expand the theoretical understanding of Chatterjee's coefficient in two directions: (a) First, we consider the problem of testing for independence using Chatterjee's correlation. We obtain its asymptotic distribution under any changing sequence of alternatives converging to the null hypothesis (of independence). We further obtain a general result that gives exact detection thresholds and limiting power for Chatterjee's test of independence under natural nonparametric alternatives converging to the null. As applications of this general result, we prove an $n^{-1/4}$ detection boundary for this test and compute explicitly the limiting local power on the detection boundary for popularly studied alternatives in the literature. (b) We then construct a test for non-trivial levels of dependence using Chatterjee's coefficient. In contrast to testing for independence, we prove that, in this case, Chatterjee's coefficient indeed yields a minimax optimal procedure with an $n^{-1/2}$ detection boundary. Our proof techniques rely on Stein's method of exchangeable pairs, a non-asymptotic projection result, and information-theoretic lower bounds.
Face recognition models embed a face image into a low-dimensional identity vector containing abstract encodings of identity-specific facial features that allow individuals to be distinguished from one another. We tackle the challenging task of inverting the latent space of pre-trained face recognition models without full model access (i.e., the black-box setting). A variety of methods have been proposed in the literature for this task, but they have serious shortcomings such as a lack of realistic outputs and strong requirements on the data set and on the accessibility of the face recognition model. By analyzing the black-box inversion problem, we show that the conditional diffusion model loss naturally emerges and that we can effectively sample from the inverse distribution even without an identity-specific loss. Our method, named identity denoising diffusion probabilistic model (ID3PM), leverages the stochastic nature of the denoising diffusion process to produce high-quality, identity-preserving face images with various backgrounds, lighting, poses, and expressions. We demonstrate state-of-the-art performance in terms of identity preservation and diversity both qualitatively and quantitatively, and our method is the first black-box face recognition model inversion method that offers intuitive control over the generation process.
Their optimal branch number has made MDS matrices prominent in the design of diffusion layers for various block ciphers and hash functions. Consequently, several matrix structures have been proposed for designing MDS matrices, including Hadamard and circulant matrices. In this paper, we first provide the count of Hadamard MDS matrices of order $4$ over the field $\mathbb{F}_{2^r}$. Subsequently, we present the counts of order $2$ MDS matrices and order $2$ involutory MDS matrices over the field $\mathbb{F}_{2^r}$. Finally, leveraging these counts of order $2$ matrices, we derive an upper bound on the number of all involutory MDS matrices of order $4$ over $\mathbb{F}_{2^r}$.
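The MDS property referenced above is equivalent to every square submatrix being nonsingular, which can be checked by brute force for small orders. The sketch below does this over $\mathbb{F}_{2^4}$ with the modulus $x^4 + x + 1$; the field, modulus, and function names are illustrative choices, not taken from the paper.

```python
from itertools import combinations

R, MOD = 4, 0b10011  # GF(2^4) with modulus x^4 + x + 1 (illustrative choice)

def gf_mul(a, b):
    """Multiply in GF(2^R): carry-less multiply, reducing by the modulus."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & (1 << R):      # overflow past degree R-1: reduce
            a ^= MOD
        b >>= 1
    return p

def gf_det(m):
    """Determinant over GF(2^R) by cofactor expansion (char 2: minus == plus)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    det = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        det ^= gf_mul(m[0][j], gf_det(minor))
    return det

def is_mds(m):
    """A square matrix is MDS iff every square submatrix is nonsingular."""
    n = len(m)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                sub = [[m[i][j] for j in cols] for i in rows]
                if gf_det(sub) == 0:
                    return False
    return True
```

For order $2$, this reduces to requiring all four entries and the determinant to be nonzero, which is what makes the exact counting of order $2$ MDS matrices tractable.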
Current synthetic speech detection (SSD) methods perform well on certain datasets but still face issues of robustness and interpretability. A possible reason is that these methods do not analyze the deficiencies of synthetic speech. In this paper, the flaws of the speaker features inherent in the text-to-speech (TTS) process are analyzed. Differences in the temporal consistency of intra-utterance speaker features arise due to the lack of fine-grained control over speaker features in TTS. Since the speaker representations in TTS are based on speaker embeddings extracted by encoders, the distribution of inter-utterance speaker features differs between synthetic and bonafide speech. Based on these analyses, an SSD method based on the temporal consistency and distribution of speaker features is proposed. On the one hand, modeling the temporal consistency of intra-utterance speaker features can aid speech anti-spoofing. On the other hand, distribution differences in inter-utterance speaker features can be utilized for SSD. The proposed method offers low computational complexity and performs well in both cross-dataset and silence-trimming scenarios.
Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks is typically represented in Euclidean domains. Nevertheless, there is an increasing number of applications in power systems where data are collected from non-Euclidean domains and represented as graph-structured data with high-dimensional features and interdependency among nodes. The complexity of graph-structured data has brought significant challenges to the existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks to graph-structured data in power systems have emerged. In this paper, a comprehensive overview of graph neural networks (GNNs) in power systems is presented. Specifically, several classical paradigms of GNN structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems such as fault diagnosis, power prediction, power flow calculation, and data generation are reviewed in detail. Furthermore, the main open issues and research trends in the application of GNNs to power systems are discussed.
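Among the classical paradigms listed above, the graph convolutional network propagates node features as $H' = \sigma(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2} H W)$ with $\hat{A} = A + I$ (self-loops added). A minimal NumPy sketch of one such layer, with a toy three-bus line topology as the usage example; all names are ours:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: ReLU(D^{-1/2} (A+I) D^{-1/2} H W).

    A: (n, n) symmetric adjacency matrix
    H: (n, d_in) node feature matrix
    W: (d_in, d_out) learned weight matrix
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                       # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy example: a 3-node path graph (e.g., three buses on one feeder line)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.ones((3, 2))          # 2 features per node (e.g., voltage, load)
W = np.ones((2, 4))          # project to 4 hidden features
out = gcn_layer(A, H, W)     # shape (3, 4)
```

Each output row mixes a node's own features with those of its graph neighbors, which is what lets GNNs exploit the grid topology that Euclidean architectures ignore.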
This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.
The recent proliferation of knowledge graphs (KGs), coupled with incomplete or partial information in the form of missing relations (links) between entities, has fueled a lot of research on knowledge base completion (also known as relation prediction). Several recent works suggest that convolutional neural network (CNN) based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction. However, we observe that these KG embeddings treat triples independently and thus fail to capture the complex and hidden information that is inherently implicit in the local neighborhood surrounding a triple. To this end, our paper proposes a novel attention-based feature embedding that captures both entity and relation features in any given entity's neighborhood. Additionally, we also encapsulate relation clusters and multi-hop relations in our model. Our empirical study offers insights into the efficacy of our attention-based model, and we show marked performance gains in comparison to state-of-the-art methods on all datasets.
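The attention-based neighborhood aggregation described above can be sketched generically: score each neighbor against the central entity through a shared projection, normalize the scores with a softmax, and take the weighted sum of projected neighbor embeddings. This is an illustrative sketch of the general mechanism, not the paper's exact architecture; all names and shapes are our assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def neighborhood_attention(h_entity, h_neighbors, W, a):
    """Aggregate neighbor embeddings with learned attention weights.

    h_entity:    (d,) embedding of the central entity
    h_neighbors: list of (d,) neighbor embeddings
    W:           (d_out, d) shared projection matrix
    a:           (2 * d_out,) attention vector scoring entity-neighbor pairs
    """
    scores = np.array([a @ np.concatenate([W @ h_entity, W @ h_n])
                       for h_n in h_neighbors])
    alpha = softmax(scores)                     # normalized attention weights
    return sum(w * (W @ h_n) for w, h_n in zip(alpha, h_neighbors))
```

The learned weights `alpha` let the model emphasize informative neighbors of a triple rather than treating all of them, or each triple in isolation, equally.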
We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch will lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model, and design two domain adaptation components, on the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory, and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.