This article explores the estimation of precision matrices in high-dimensional Gaussian graphical models. We address the challenge of improving the accuracy of maximum likelihood-based precision matrix estimation through penalization. Specifically, we consider an elastic net penalty, which combines L1 and Frobenius norm penalties while accounting for a target matrix during estimation. To further improve estimation, we propose a novel two-step estimator that combines the strengths of the ridge and graphical lasso estimators. Our empirical analysis demonstrates that the proposed method is more efficient than alternative approaches. We validate the effectiveness of our proposal through numerical experiments and applications to three real datasets, which illustrate the practical applicability and usefulness of the proposed estimator.
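For concreteness, a plausible form of such a penalized objective (notation ours, not necessarily the paper's), assuming $S$ is the sample covariance, $T$ the target matrix, and $\lambda_1, \lambda_2 \geq 0$ tuning parameters, is

\[
\widehat{\Theta} \;=\; \arg\max_{\Theta \succ 0} \; \log\det\Theta \;-\; \operatorname{tr}(S\Theta) \;-\; \lambda_1 \|\Theta - T\|_1 \;-\; \frac{\lambda_2}{2}\,\|\Theta - T\|_F^2 ,
\]

where the L1 term promotes sparsity of the off-target entries and the Frobenius term shrinks the estimate toward $T$.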
Temporal graph neural networks (TGNNs) have exhibited state-of-the-art performance in future-link prediction tasks. These TGNNs are trained with an unsupervised loss based on uniform random negative sampling. During training, for a given positive example, the loss is computed over uninformative negatives, which introduces redundancy and leads to sub-optimal performance. In this paper, we propose a modified unsupervised learning scheme for TGNNs that replaces uniform negative sampling with importance-based negative sampling. We theoretically motivate and define a dynamically computed distribution for sampling negative examples. Finally, through empirical evaluations on three real-world datasets, we show that TGNNs trained with the proposed negative-sampling loss consistently achieve superior performance.
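As a minimal sketch of the general idea (not the paper's exact distribution, which is computed dynamically during training), negatives can be drawn with probability increasing in their current model score, so informative, hard negatives are sampled more often than under the uniform scheme:

```python
import numpy as np

def sample_negatives(scores, num_samples, rng=None):
    """Draw negative examples with probability proportional to a softmax
    of their current model scores, favouring hard (high-scoring) negatives
    over the uniform baseline."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.exp(scores - scores.max())  # stabilized softmax weights
    probs /= probs.sum()
    return rng.choice(len(scores), size=num_samples, replace=False, p=probs)
```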
Fairness is one of the socio-technical concerns that must be addressed in AI-based systems. Unfair AI-based systems, particularly unfair AI-based mobile apps, can pose difficulties for a significant proportion of the global population. This paper analyzes fairness concerns in reviews of AI-based apps. We first manually constructed a ground-truth dataset comprising a statistical sample of fairness and non-fairness reviews. Leveraging this dataset, we developed and evaluated a set of machine learning and deep learning classifiers that distinguish fairness reviews from non-fairness reviews. Our experiments show that our best-performing classifier detects fairness reviews with a precision of 94%. We then applied the best-performing classifier to approximately 9.5M reviews collected from 108 AI-based apps and identified around 92K fairness reviews. Next, applying K-means clustering to the 92K fairness reviews, followed by manual analysis, led to the identification of six distinct types of fairness concerns (e.g., 'receiving different quality of features and services in different platforms and devices' and 'lack of transparency and fairness in dealing with user-generated content'). Finally, manual analysis of 2,248 app owners' responses to the fairness reviews identified six root causes (e.g., 'copyright issues') that app owners report to justify fairness concerns.
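A minimal sketch of this kind of pipeline, using scikit-learn with a TF-IDF + logistic-regression classifier as a stand-in for the paper's best-performing model (the specific classifier, features, and variable names here are our assumptions):

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Step 1: train a fairness / non-fairness classifier on the ground-truth sample.
clf = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)  # hypothetical ground-truth arrays

# Step 2: apply it to the full review corpus and keep predicted fairness reviews.
preds = clf.predict(all_reviews)
fairness_reviews = [r for r, p in zip(all_reviews, preds) if p == 1]

# Step 3: cluster the fairness reviews into candidate concern types.
X = TfidfVectorizer(min_df=2).fit_transform(fairness_reviews)
cluster_ids = KMeans(n_clusters=6, n_init=10).fit_predict(X)
```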
Remote Attestation (RA) enables the integrity and authenticity of applications in a Trusted Execution Environment (TEE) to be verified. Existing TEE RA designs employ a centralized trust model: they rely on a single provisioned secret key and a centralized verifier to establish trust for remote parties. This model, however, is brittle and can be compromised by today's advanced attacks. Moreover, most designs provide only fixed functionality once deployed, making them hard to adapt to differing requirements for availability, Quality of Service (QoS), etc. We therefore propose JANUS, an open and resilient TEE RA scheme. To decentralize trust, we, on the one hand, introduce a Physically Unclonable Function (PUF) as an intrinsic root of trust (RoT) in the TEE to provide additional measurements and cryptographic enhancements; on the other hand, we use blockchain and smart contracts to realize decentralized verification and result auditing. Furthermore, we design an automated turnout mechanism that allows JANUS to remain resilient and offer flexible RA services under various situations. We provide a UC-based security proof and demonstrate the scalability and generality of JANUS with an open-source prototype implementation.
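To give a flavour of PUF-keyed attestation in general (a toy sketch only; JANUS's actual protocol, key derivation, and on-chain verification are far more involved):

```python
import hashlib, hmac

def puf_response(challenge: bytes) -> bytes:
    # Stand-in for a hardware PUF: a real response derives from
    # device-unique physical variation and is never stored in software.
    return hashlib.sha256(b"device-unique-variation" + challenge).digest()

def attest(measurement: bytes, challenge: bytes) -> bytes:
    # Key the attestation report with a PUF-derived secret rather than
    # a single provisioned key, rooting trust in the device itself.
    return hmac.new(puf_response(challenge), measurement, hashlib.sha256).digest()

def verify(report: bytes, measurement: bytes, challenge: bytes) -> bool:
    # A verifier holding the enrolled challenge-response pair recomputes
    # the expected report; decentralized schemes mirror this check on-chain.
    return hmac.compare_digest(report, attest(measurement, challenge))
```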
This article studies structure-preserving discretizations of Hilbert complexes with nonconforming spaces that rely on projections onto an underlying conforming subcomplex. This approach follows the conforming/nonconforming Galerkin (CONGA) method introduced in [doi.org/10.1090/mcom/3079, doi.org/10.5802/smai-jcm.20, doi.org/10.5802/smai-jcm.21] to derive efficient structure-preserving finite element schemes for the time-dependent Maxwell and Maxwell-Vlasov systems by relaxing the curl-conforming constraint in finite element exterior calculus (FEEC) spaces. Here, it is extended to the discretization of full Hilbert complexes with possibly nontrivial harmonic fields, and the properties of the CONGA Hodge Laplacian operator are investigated. By using block-diagonal mass matrices which may be locally inverted, this framework possesses a canonical sequence of dual commuting projection operators which are local, and it naturally yields local discrete coderivative operators, in contrast to conforming FEEC discretizations. The resulting CONGA Hodge Laplacian operator is also local, and its kernel consists of the same discrete harmonic fields as the underlying conforming operator, provided that a symmetric stabilization term is added to handle the space nonconformities. Under the assumption that the underlying conforming subcomplex admits a bounded cochain projection, and that the conforming projections are stable with moment-preserving properties, a priori convergence results are established for both the CONGA Hodge Laplace source and eigenvalue problems. Our theory is finally illustrated with a spectral element method, and numerical experiments are performed which corroborate our results. Applications to spline finite elements on multi-patch mapped domains are described in a related article [arXiv:2208.05238] for which the present work provides a theoretical background.
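Schematically (notation ours, under the assumptions stated in the abstract), the CONGA construction replaces the differential $d$ on a nonconforming space $V_h$ by $d_h := d \circ P_h$, where $P_h$ is the conforming projection onto the subcomplex, and the stabilized Hodge Laplacian then takes a form such as

\[
L_h \;=\; d_h^{*} d_h \;+\; d_h d_h^{*} \;+\; \alpha\,(I - P_h)^{*}(I - P_h), \qquad \alpha > 0,
\]

where the last term is the symmetric stabilization handling the space nonconformity, under which the kernel of $L_h$ consists of the same discrete harmonic fields as the underlying conforming operator.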
Efficient inference in high-dimensional models remains a central challenge in machine learning. This paper introduces the Gaussian Ensemble Belief Propagation (GEnBP) algorithm, a fusion of the Ensemble Kalman filter and Gaussian belief propagation (GaBP) methods. GEnBP updates ensembles by passing low-rank local messages in a graphical model structure. This combination inherits favourable qualities from each method. Ensemble techniques allow GEnBP to handle high-dimensional states, parameters and intricate, noisy, black-box generation processes. The use of local messages in a graphical model structure ensures that the approach is suited to distributed computing and can efficiently handle complex dependence structures. GEnBP is particularly advantageous when the ensemble size is considerably smaller than the inference dimension. This scenario often arises in fields such as spatiotemporal modelling, image processing and physical model inversion. GEnBP can be applied to general problem structures, including jointly learning system parameters, observation parameters, and latent state variables.
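A minimal sketch of the ensemble-to-Gaussian step that makes such schemes tractable (illustrative only; the function and variable names are ours): an ensemble of $n$ members in dimension $d \gg n$ induces a Gaussian whose covariance is carried by a rank-$\le n$ factor, so messages can be propagated at a cost scaling with the ensemble size rather than the state dimension.

```python
import numpy as np

def ensemble_to_gaussian(E):
    """Moment-match an ensemble E (n_members x dim) to a Gaussian.
    The covariance A.T @ A has rank at most n_members, so it is never
    formed densely; only the mean and the factor A are stored."""
    mu = E.mean(axis=0)
    A = (E - mu) / np.sqrt(E.shape[0] - 1)  # low-rank covariance factor
    return mu, A

mu, A = ensemble_to_gaussian(np.random.randn(32, 10_000))
print(A.shape)  # (32, 10000): messages stay low-rank in the state dimension
```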
The article provides a comprehensive overview of using quadratic polynomials in Python for modeling and analyzing data. It starts by explaining the basic concept of a quadratic polynomial, its general form, and its significance in capturing curvature in data arising from natural phenomena. The paper highlights key features of quadratic polynomials, their applications in regression analysis, and the process of fitting these polynomials to data using Python's `numpy` and `matplotlib` libraries. It also discusses the calculation of the coefficient of determination (R-squared) to quantify how well the polynomial model fits the data. Practical examples, including Python scripts, are provided to demonstrate how to apply these concepts in data analysis. The document serves as a bridge between theoretical knowledge and applied analytics, aiding in understanding and communicating data patterns.
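A self-contained example in the spirit the abstract describes (the data here are synthetic): fit a quadratic with `numpy`, compute R-squared, and plot the result with `matplotlib`.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic data with quadratic structure plus noise.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = 2.0 * x**2 - 1.5 * x + 0.5 + rng.normal(0, 1.0, x.size)

# Fit y = ax^2 + bx + c and evaluate the fitted curve.
coeffs = np.polyfit(x, y, deg=2)
y_hat = np.polyval(coeffs, x)

# Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

plt.scatter(x, y, s=12, label="data")
plt.plot(x, y_hat, color="red", label=f"quadratic fit, $R^2$ = {r_squared:.3f}")
plt.legend()
plt.show()
```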
Graph Neural Networks (GNNs) have gained momentum in graph representation learning and boosted the state of the art in a variety of areas, such as data mining (\emph{e.g.,} social network analysis and recommender systems), computer vision (\emph{e.g.,} object detection and point cloud learning), and natural language processing (\emph{e.g.,} relation extraction and sequence learning), to name a few. With the emergence of Transformers in natural language processing and computer vision, graph Transformers embed a graph structure into the Transformer architecture to overcome the limitations of local neighborhood aggregation while avoiding strict structural inductive biases. In this paper, we present a comprehensive review of GNNs and graph Transformers in computer vision from a task-oriented perspective. Specifically, we divide their applications in computer vision into five categories according to the modality of input data, \emph{i.e.,} 2D natural images, videos, 3D data, vision + language, and medical images. In each category, we further divide the applications according to a set of vision tasks. Such a task-oriented taxonomy allows us to examine how each task is tackled by different GNN-based approaches and how well these approaches perform. Based on the necessary preliminaries, we provide the definitions and challenges of the tasks, in-depth coverage of the representative approaches, as well as discussions regarding insights, limitations, and future directions.
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation- and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirements, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques fall into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems of the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
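As a toy illustration of one technique from category (1), magnitude-based pruning zeroes the smallest weights to shrink the effective parameter count (a simplified sketch, not any surveyed method in particular):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries, keeping the top
    (1 - sparsity) fraction of weights by absolute value."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

W = np.random.randn(256, 256)
W_pruned = magnitude_prune(W, sparsity=0.9)
print(f"nonzero fraction: {np.count_nonzero(W_pruned) / W.size:.2f}")  # ~0.10
```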
We study the problem of embedding-based entity alignment between knowledge graphs (KGs). Previous works mainly focus on the relational structure of entities, and some further incorporate other types of features, such as attributes, for refinement. However, a vast number of entity features remain unexplored or are not treated equally, which impairs the accuracy and robustness of embedding-based entity alignment. In this paper, we propose a novel framework that unifies multiple views of entities to learn embeddings for entity alignment. Specifically, we embed entities based on views of entity names, relations, and attributes, with several combination strategies. Furthermore, we design cross-KG inference methods to enhance the alignment between two KGs. Our experiments on real-world datasets show that the proposed framework significantly outperforms state-of-the-art embedding-based entity alignment methods. The selected views, cross-KG inference, and combination strategies all contribute to the performance improvement.
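One simple combination strategy of the kind the abstract mentions (a sketch under our own assumptions, not the paper's exact method): average L2-normalized per-view embeddings, then match entities across KGs by similarity.

```python
import numpy as np

def combine_views(views, weights=None):
    """Combine per-view entity embeddings (each n x d) by weighted
    averaging after L2 normalization."""
    weights = [1.0 / len(views)] * len(views) if weights is None else weights
    normed = [v / np.linalg.norm(v, axis=1, keepdims=True) for v in views]
    return sum(w * v for w, v in zip(weights, normed))

def align(emb1, emb2):
    """Greedily match each KG1 entity to its most similar KG2 entity
    under dot-product similarity of the combined embeddings."""
    return (emb1 @ emb2.T).argmax(axis=1)
```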
The recent proliferation of knowledge graphs (KGs), coupled with incomplete or partial information in the form of missing relations (links) between entities, has fueled much research on knowledge base completion (also known as relation prediction). Several recent works suggest that convolutional neural network (CNN) based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction. However, we observe that these KG embeddings treat triples independently and thus fail to capture the complex and hidden information inherently implicit in the local neighborhood surrounding a triple. To this end, our paper proposes a novel attention-based feature embedding that captures both entity and relation features in any given entity's neighborhood. Additionally, we encapsulate relation clusters and multi-hop relations in our model. Our empirical study offers insights into the efficacy of our attention-based model, and we show marked performance gains in comparison to state-of-the-art methods on all datasets.
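A simplified sketch of attention over a triple neighborhood in this spirit (shapes and names are our assumptions, and real models learn W and a by backpropagation): each neighbor contributes a feature built from entity and relation embeddings, and a softmax over the neighborhood weights the aggregation.

```python
import numpy as np

def neighborhood_attention(h_e, neighbors, W, a):
    """Aggregate an entity's neighborhood with attention. Each neighbor is
    a (relation_embedding, entity_embedding) pair; W projects the
    concatenated triple features and `a` scores them."""
    feats = np.stack([W @ np.concatenate([h_e, h_r, h_n])
                      for h_r, h_n in neighbors])
    scores = feats @ a                     # unnormalized attention logits
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                   # softmax over the neighborhood
    return np.maximum(alpha @ feats, 0.0)  # ReLU of the weighted sum

d, k = 16, 5
rng = np.random.default_rng(0)
out = neighborhood_attention(rng.normal(size=d),
                             [(rng.normal(size=d), rng.normal(size=d))
                              for _ in range(k)],
                             rng.normal(size=(d, 3 * d)),
                             rng.normal(size=d))
```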