In this paper, we show that the constant-dimensional Weisfeiler-Leman algorithm for groups (Brachter & Schweitzer, LICS 2020) can be fruitfully used to improve parallel complexity upper bounds on isomorphism testing for several families of groups. In particular, we show:
- Groups with an Abelian normal Hall subgroup whose complement is $O(1)$-generated are identified by constant-dimensional Weisfeiler-Leman using only a constant number of rounds. This places isomorphism testing for this family of groups into $\textsf{L}$; the previous upper bound for isomorphism testing was $\textsf{P}$ (Qiao, Sarma, & Tang, STACS 2011).
- We use the individualize-and-refine paradigm to obtain a $\textsf{quasiSAC}^{1}$ isomorphism test for groups without Abelian normal subgroups, previously only known to be in $\textsf{P}$ (Babai, Codenotti, & Qiao, ICALP 2012).
- We extend a result of Brachter & Schweitzer (arXiv, 2021) on direct products of groups to the parallel setting: Weisfeiler-Leman can identify direct products in parallel, provided it can identify each of the indecomposable direct factors in parallel. They previously showed the analogous result for $\textsf{P}$.
Finally, we consider the count-free Weisfeiler-Leman algorithm, where we show that count-free WL cannot even distinguish Abelian groups in polynomial time. Nonetheless, we use count-free WL in tandem with bounded non-determinism and limited counting to obtain a new upper bound of $\beta_{1}\textsf{MAC}^{0}(\textsf{FOLL})$ for isomorphism testing of Abelian groups. This improves upon the previous $\textsf{TC}^{0}(\textsf{FOLL})$ upper bound due to Chattopadhyay, Tor\'an, & Wagner (ACM Trans. Comput. Theory, 2013).
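As a hedged, purely illustrative companion to the abstract above, the following toy Python sketch runs a 2-dimensional WL-style color refinement on pairs of group elements given by a multiplication table; it is our own simplification, not the $k$-dimensional algorithm of Brachter & Schweitzer that the results rely on.

```python
from collections import Counter

def wl2_colors(mult):
    """Toy 2-dimensional WL-style refinement on pairs of group elements.
    mult[g][h] gives the index of g*h; index 0 is assumed to be the identity."""
    n = len(mult)
    e = 0
    # Initial color: basic 'isomorphism type' facts about the pair (g, h).
    col = {(g, h): (g == h, g == e, h == e,
                    mult[g][h] == e, mult[g][h] == mult[h][g])
           for g in range(n) for h in range(n)}
    while True:
        # Refine: the new color of (g, h) aggregates, over every element x,
        # the colors of the 'neighbor' pairs (g, x) and (x, h).
        sigs = {(g, h): (col[(g, h)],
                         tuple(sorted((col[(g, x)], col[(x, h)])
                                      for x in range(n))))
                for g in range(n) for h in range(n)}
        relabel = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        new = {p: relabel[s] for p, s in sigs.items()}
        if new == col:                     # stable partition reached
            return Counter(new.values())   # color histogram (an invariant)
        col = new

# Z4 vs Z2 x Z2: different stable histograms, so the refinement distinguishes them.
z4 = [[(a + b) % 4 for b in range(4)] for a in range(4)]
z2xz2 = [[a ^ b for b in range(4)] for a in range(4)]
print(wl2_colors(z4) != wl2_colors(z2xz2))   # True
```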
Philosophical research in AI has hitherto largely focused on the ethics of AI. In this paper we, an ethicist of belief and a machine learning scientist, suggest that we need to pursue a novel area of philosophical research in AI: the epistemology of AI, and in particular an ethics of belief for AI. Here we take the ethics of belief, a field that has been defined in various ways, to refer to a sub-field within epistemology concerned with the study of possible moral, practical, and other non-alethic dimensions of belief. In this paper, we are primarily concerned with the normative question within the ethics of belief of what agents, both human and artificial, ought to believe, rather than with descriptive questions concerning whether certain beliefs meet various evaluative standards such as being true, being justified or warranted, constituting knowledge, and so on. We suggest four topics in extant work on the ethics of (human) belief that can be applied to an ethics of AI belief: doxastic wronging by AI; morally owed beliefs; pragmatic and moral encroachment on AI beliefs; and moral responsibility for AI beliefs. We also indicate two relatively nascent areas of philosophical research that have not yet been generally recognized as ethics of AI belief research, but that fall within this field in virtue of investigating various moral and practical dimensions of belief: the epistemic and ethical decolonization of AI; and epistemic injustice in AI.
The innovations algorithm is a classical recursive forecasting algorithm used in time series analysis. We develop the innovations algorithm for a class of nonnegative regularly varying time series models constructed via transformed-linear arithmetic. In addition to providing the best linear predictor, the algorithm also enables us to estimate the parameters of transformed-linear regularly varying moving average (MA) models, thus providing a tool for modeling. We first construct an inner product space of transformed-linear combinations of nonnegative regularly varying random variables and prove its link to a Hilbert space, which allows us to employ the projection theorem, from which we develop the transformed-linear innovations algorithm. Turning our attention to the class of transformed-linear MA($\infty$) models, we give results on parameter estimation and also show that this class of models is dense in the class of possible tail pairwise dependence functions (TPDFs). We also develop an extremes analogue of the classical Wold decomposition. A simulation study shows that our class of models captures tail dependence for the GARCH(1,1) model and for a Markov time series model, both of which are outside our class of models.
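For context, the classical recursion that this work extends is short enough to state in code. The following is a minimal sketch of the standard innovations algorithm (in the form given by Brockwell & Davis) for one-step-ahead linear prediction from an autocovariance function; it implements the classical version, not the transformed-linear one developed in the paper.

```python
import numpy as np

def innovations(kappa, n):
    """Classical innovations algorithm.
    kappa(i, j): covariance E[X_i X_j] (1-indexed, as in Brockwell & Davis).
    Returns the theta[m, j] coefficients and mean-squared errors v[0..n]."""
    v = np.zeros(n + 1)
    theta = np.zeros((n + 1, n + 1))
    v[0] = kappa(1, 1)
    for m in range(1, n + 1):
        for k in range(m):
            s = kappa(m + 1, k + 1)
            s -= sum(theta[k, k - j] * theta[m, m - j] * v[j] for j in range(k))
            theta[m, m - k] = s / v[k]
        v[m] = kappa(m + 1, m + 1) - sum(theta[m, m - j] ** 2 * v[j]
                                         for j in range(m))
    return theta, v

# Example: MA(1) process X_t = Z_t + 0.6 Z_{t-1} with Var(Z) = 1.
def kappa_ma1(i, j, th=0.6):
    lag = abs(i - j)
    return {0: 1 + th ** 2, 1: th}.get(lag, 0.0)

theta, v = innovations(kappa_ma1, 10)
# theta[m, 1] approaches the MA coefficient 0.6 and v[m] approaches 1.
print(theta[10, 1], v[10])
```

The one-step predictor is then $\hat{X}_{n+1}=\sum_{j=1}^{n}\theta_{nj}(X_{n+1-j}-\hat{X}_{n+1-j})$, with $v_n$ the corresponding mean-squared error.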
In the field of quantum machines, detecting two-dimensional (2D) materials on silicon chips is one of the most critical problems. Instance segmentation is a potential approach to this problem. However, like other deep learning methods, instance segmentation requires a large-scale training dataset with high-quality annotation to achieve considerable performance. In practice, preparing such a training dataset is a challenge, since annotators have to deal with large images, e.g., at 2K resolution, containing extremely dense objects. In this work, we present a novel method to tackle the problem of missing annotations in instance segmentation for 2D quantum material identification. We propose a new mechanism for automatically detecting false-negative objects, together with an attention-based loss strategy that reduces the negative impact of these objects on the overall loss function. We experiment on 2D material detection datasets, and the experiments show that our method outperforms previous works.
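The paper's attention-based loss is not reproduced here, but the underlying idea of limiting the influence of suspected false negatives can be sketched as follows; the threshold, weights, and function names are hypothetical choices of ours, not the authors' mechanism.

```python
import numpy as np

def masked_objectness_loss(scores, labels, fn_thresh=0.9):
    """Rough sketch only. scores: predicted objectness probabilities for
    proposals. labels: 1 for annotated objects, 0 for background (which may
    contain unannotated objects, i.e., false negatives in the annotation).
    Down-weights confident 'background' proposals that are likely
    unannotated objects, so they contribute less to the loss."""
    eps = 1e-7
    bce = -(labels * np.log(scores + eps)
            + (1 - labels) * np.log(1 - scores + eps))
    # Suspected false negatives: labeled background but scored as object.
    suspect = (labels == 0) & (scores > fn_thresh)
    weights = np.where(suspect, 0.1, 1.0)   # attenuate rather than remove
    return float(np.mean(weights * bce))

scores = np.array([0.95, 0.2, 0.97, 0.6])
labels = np.array([1, 0, 0, 1])             # proposal 3 may be unannotated
print(masked_objectness_loss(scores, labels))
```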
In this paper, we explore zero- and few-shot generalization for fact verification (FV), which aims to generalize the FV model trained on well-resourced domains (e.g., Wikipedia) to low-resourced domains that lack human annotations. To this end, we first construct a benchmark dataset collection which contains 11 FV datasets representing 6 domains. We conduct an empirical analysis of generalization across these FV datasets, finding that current models generalize poorly. Our analysis reveals that several factors affect generalization, including dataset size, length of evidence, and the type of claims. Finally, we show that two directions of work improve generalization: 1) incorporating domain knowledge via pretraining on specialized domains, and 2) automatically generating training data via claim generation.
In this paper, we combine the Smolyak technique for multi-dimensional interpolation with the Filon-Clenshaw-Curtis (FCC) rule for one-dimensional oscillatory integration, to obtain a new Filon-Clenshaw-Curtis-Smolyak (FCCS) rule for oscillatory integrals with linear phase over the $d$-dimensional cube $[-1,1]^d$. By combining stability and convergence estimates for the FCC rule with error estimates for the Smolyak interpolation operator, we obtain an error estimate for the FCCS rule, consisting of a Smolyak-type error estimate multiplied by a factor that decays like $\mathcal{O}(k^{-\tilde{d}})$, where $k$ is the wavenumber and $\tilde{d}$ is the number of oscillatory dimensions. If all dimensions are oscillatory, a higher negative power of $k$ appears in the estimate. As an application, we consider the forward problem of uncertainty quantification (UQ) for a one-space-dimensional Helmholtz problem with wavenumber $k$ and a random heterogeneous refractive index, depending in an affine way on $d$ i.i.d. uniform random variables. After applying a classical hybrid numerical-asymptotic approximation, expectations of functionals of the solution of this problem can be formulated as a sum of oscillatory integrals over $[-1,1]^d$, which we compute using the FCCS rule. We give numerical results for the FCCS rule and for the UQ algorithm, showing that accuracy improves when both $k$ and the order of the rule increase. We also give results for dimension-adaptive sparse grid FCCS quadrature, showing its efficiency as the dimension increases.
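As a hedged one-dimensional illustration of the Filon-Clenshaw-Curtis idea (interpolate the non-oscillatory factor at Clenshaw-Curtis points, then integrate the interpolant against the oscillatory kernel), the following Python sketch computes $\int_{-1}^{1} f(x)e^{ikx}\,dx$. For simplicity it evaluates the Chebyshev moments by adaptive quadrature, whereas practical FCC implementations use stable recurrences; it is not the authors' code.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.integrate import quad

def fcc(f, k, N):
    """Filon-Clenshaw-Curtis-type rule for int_{-1}^1 f(x) e^{ikx} dx:
    interpolate f at the N+1 Clenshaw-Curtis points, then integrate the
    interpolant exactly against e^{ikx} (moments done here by quadrature)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)     # Clenshaw-Curtis points
    a = C.chebfit(x, f(x), N)                    # Chebyshev coefficients of f
    I = 0.0 + 0.0j
    for n, an in enumerate(a):
        Tn = C.Chebyshev([0.0] * n + [1.0])      # basis polynomial T_n
        re = quad(lambda t: Tn(t) * np.cos(k * t), -1, 1, limit=200)[0]
        im = quad(lambda t: Tn(t) * np.sin(k * t), -1, 1, limit=200)[0]
        I += an * (re + 1j * im)                 # moment of T_n against e^{ikt}
    return I

# Sanity check against direct quadrature for f(x) = 1/(2+x), k = 50.
f = lambda x: 1.0 / (2.0 + x)
k = 50.0
re = quad(lambda t: f(t) * np.cos(k * t), -1, 1, limit=200)[0]
im = quad(lambda t: f(t) * np.sin(k * t), -1, 1, limit=200)[0]
print(abs(fcc(f, k, 16) - (re + 1j * im)))       # small error
```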
We present the algebra of assume-guarantee (AG) contracts. We define contracts, provide both new and known operations, and show how these operations are related. Contracts are functorial: any Boolean algebra has an associated contract algebra. We study monoid and semiring structures in contract algebra, and the mappings between such structures. We discuss the actions of a Boolean algebra on its contract algebra.
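As a small self-contained sketch (our own encoding, not the paper's formalism), one can realize a contract algebra over the Boolean algebra of subsets of a finite set of behaviors, with the standard saturation, refinement, conjunction, and parallel composition of assume-guarantee contracts:

```python
from dataclasses import dataclass
from typing import FrozenSet

# Boolean algebra = powerset of a finite universe of 'behaviors'.
U = frozenset(range(8))
neg = lambda s: U - s

@dataclass(frozen=True)
class Contract:
    A: FrozenSet[int]   # assumptions
    G: FrozenSet[int]   # guarantees

    def saturate(self):
        # Canonical (saturated) form: also guarantee whenever A fails.
        return Contract(self.A, self.G | neg(self.A))

    def refines(self, other):
        # C refines C': weaker assumptions, stronger guarantees.
        s, o = self.saturate(), other.saturate()
        return s.A >= o.A and s.G <= o.G

def conjoin(c1, c2):
    # Conjunction (meet): usable in either contract's context.
    c1, c2 = c1.saturate(), c2.saturate()
    return Contract(c1.A | c2.A, c1.G & c2.G)

def compose(c1, c2):
    # Parallel composition of saturated contracts.
    c1, c2 = c1.saturate(), c2.saturate()
    G = c1.G & c2.G
    return Contract((c1.A & c2.A) | neg(G), G)

c1 = Contract(frozenset({0, 1, 2, 3}), frozenset({0, 1}))
c2 = Contract(frozenset({0, 1}), frozenset({0}))
print(compose(c1, c2))
print(conjoin(c1, c2).refines(c1))   # True: the meet refines both arguments
```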
We study the problem of Out-of-Distribution (OOD) detection, that is, detecting whether a learning algorithm's output can be trusted at inference time. While a number of tests for OOD detection have been proposed in prior work, a formal framework for studying this problem is lacking. We propose a definition for the notion of OOD that includes both the input distribution and the learning algorithm, which provides insights for the construction of powerful tests for OOD detection. We propose a procedure, inspired by multiple hypothesis testing, that systematically combines any number of different statistics from the learning algorithm using conformal p-values. We further provide strong guarantees on the probability of incorrectly classifying an in-distribution sample as OOD. In our experiments, we find that threshold-based tests proposed in prior work perform well in specific settings, but not uniformly well across different types of OOD instances. In contrast, our proposed method, which combines multiple statistics, performs uniformly well across different datasets and neural networks.
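To make the conformal ingredient concrete, here is a minimal hedged sketch, a simplification of ours rather than the paper's full procedure: each statistic yields a conformal p-value against a held-out calibration set, and the p-values are combined here with a simple Bonferroni correction before thresholding.

```python
import numpy as np

def conformal_pvalue(cal_scores, test_score):
    """p = (1 + #{calibration scores >= test score}) / (n + 1).
    Super-uniform when the test point is in-distribution and exchangeable
    with the calibration set; larger scores mean 'more outlying'."""
    n = len(cal_scores)
    return (1.0 + np.sum(np.asarray(cal_scores) >= test_score)) / (n + 1.0)

def ood_flag(cal_stats, test_stats, alpha=0.05):
    """cal_stats: dict name -> calibration scores for that statistic.
    test_stats: dict name -> score of the test input.
    Combines per-statistic conformal p-values via Bonferroni: flag OOD
    if min_i p_i <= alpha / K."""
    pvals = [conformal_pvalue(cal_stats[k], test_stats[k]) for k in cal_stats]
    return min(pvals) <= alpha / len(pvals), pvals

rng = np.random.default_rng(0)
cal = {"energy": rng.normal(0, 1, 1000), "knn_dist": rng.normal(0, 1, 1000)}
test = {"energy": 4.2, "knn_dist": 0.1}     # outlying on one statistic only
print(ood_flag(cal, test))
```

Since each p-value is super-uniform under exchangeability, the Bonferroni-corrected minimum controls the probability of flagging an in-distribution sample at level $\alpha$.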
Retrieval augmentation, which enhances downstream models with a knowledge retriever and an external corpus rather than by merely increasing the number of model parameters, has been successfully applied to many natural language processing (NLP) tasks, such as text classification and question answering. However, existing methods train the retriever and the downstream model separately or asynchronously, mainly due to the non-differentiability between the two parts, which usually leads to degraded performance compared to end-to-end joint training. In this paper, we propose Differentiable Retrieval Augmentation via Generative lANguage modeling (Dragan) to address this problem through a novel differentiable reformulation. We demonstrate the effectiveness of our proposed method on a challenging NLP task in e-commerce search, namely query intent classification. Both the experimental results and an ablation study show that the proposed method significantly improves over state-of-the-art baselines in both offline evaluation and an online A/B test.
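The abstract does not spell out the reformulation, so the following Python sketch shows only one plausible, hypothetical instantiation of the general idea: use a generative language model's likelihoods of candidate documents as differentiable relevance scores, so the downstream loss can backpropagate into retrieval. All names and shapes here are ours, not the paper's.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def retrieval_augmented_logits(query_emb, doc_gen_logprobs, doc_features, W):
    """Hypothetical sketch: doc_gen_logprobs[i] is a generative language
    model's log-likelihood of candidate document i given the query, used as
    a differentiable relevance score. The downstream classifier consumes a
    relevance-weighted mixture of document features, so with autodiff the
    whole pipeline would be trainable end to end."""
    weights = softmax(np.asarray(doc_gen_logprobs))   # differentiable retrieval
    mixed = weights @ doc_features                    # mixture of doc features
    features = np.concatenate([query_emb, mixed])
    return W @ features                               # class logits

rng = np.random.default_rng(0)
q = rng.normal(size=16)                               # query embedding
logps = np.array([-3.1, -0.4, -5.2])                  # 3 candidate documents
docs = rng.normal(size=(3, 16))
W = rng.normal(size=(4, 32))
print(retrieval_augmented_logits(q, logps, docs, W).shape)  # (4,)
```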
In this work, we introduce a framework for cross-lingual speech synthesis, which involves an upstream Voice Conversion (VC) model and a downstream Text-To-Speech (TTS) model. The proposed framework consists of 4 stages. In the first two stages, we use a VC model to convert utterances in the target locale to the voice of the target speaker. In the third stage, the converted data is combined with the linguistic features and durations from recordings in the target language, which are then used to train a single-speaker acoustic model. Finally, the last stage entails the training of a locale-independent vocoder. Our evaluations show that the proposed paradigm outperforms state-of-the-art approaches which are based on training a large multilingual TTS model. In addition, our experiments demonstrate the robustness of our approach with different model architectures, languages, speakers and amounts of data. Moreover, our solution is especially beneficial in low-resource settings.
In this paper, we propose a conceptually simple and geometrically interpretable objective function, i.e., additive margin Softmax (AM-Softmax), for deep face verification. In general, the face verification task can be viewed as a metric learning problem, so learning large-margin face features whose intra-class variation is small and whose inter-class difference is large is of great importance for achieving good performance. Recently, Large-margin Softmax and Angular Softmax have been proposed to incorporate the angular margin in a multiplicative manner. In this work, we introduce a novel additive angular margin for the Softmax loss, which is intuitively appealing and more interpretable than the existing works. We also emphasize and discuss the importance of feature normalization. Most importantly, our experiments on LFW BLUFR and MegaFace show that our additive margin Softmax loss consistently performs better than the current state-of-the-art methods using the same network architecture and training dataset. Our code is available at https://github.com/happynear/AMSoftmax
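The additive margin Softmax loss has a standard closed form, $L=-\log\frac{e^{s(\cos\theta_{y}-m)}}{e^{s(\cos\theta_{y}-m)}+\sum_{j\neq y}e^{s\cos\theta_{j}}}$. The following numpy sketch is our own minimal reimplementation, with scale $s$ and margin $m$ as hyperparameters; it is not the released code at the link above.

```python
import numpy as np

def am_softmax_loss(features, weights, labels, s=30.0, m=0.35):
    """Additive margin Softmax: normalize features and class weights,
    subtract margin m from the target-class cosine, scale by s, then
    apply the usual cross-entropy. features: (N, d), weights: (C, d)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T                                   # (N, C) cosine similarities
    cos[np.arange(len(labels)), labels] -= m        # additive angular margin
    logits = s * cos
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 64))                        # 8 embeddings, 64-d
W = rng.normal(size=(10, 64))                       # 10 identities
y = rng.integers(0, 10, size=8)
print(am_softmax_loss(x, W, y))
```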