For $N \geq 2$, an $N$-qubit doily is a doily living in the $N$-qubit symplectic polar space. These doilies are related to operator-based proofs of quantum contextuality. Following and extending the strategy of Saniga et al. (Mathematics 9 (2021) 2272), which focused exclusively on three-qubit doilies, we first bring forth several formulas giving the number of both linear and quadratic doilies for any $N > 2$. Then we present an effective algorithm for generating all $N$-qubit doilies. Using this algorithm for $N=4$ and $N=5$, we provide a classification of $N$-qubit doilies in terms of the types of observables they feature and the number of negative lines they are endowed with. We also list several distinguished findings about $N$-qubit doilies that are absent in the three-qubit case, point out a couple of specific features exhibited by linear doilies, and outline some prospective extensions of our approach.
Unsupervised time series anomaly detection is instrumental in monitoring target systems and raising alarms on potential faults in various domains. Current state-of-the-art time series anomaly detectors mainly focus on devising advanced neural network structures and new reconstruction/prediction learning objectives to learn data normality (normal patterns and behaviors) as accurately as possible. However, these one-class learning methods can be deceived by unknown anomalies in the training data (i.e., anomaly contamination). Further, their normality learning lacks knowledge about the anomalies of interest. Consequently, they often learn a biased, inaccurate normality boundary. This paper proposes a novel one-class learning approach, named calibrated one-class classification, to tackle this problem. Our one-class classifier is calibrated in two ways: (1) by adaptively penalizing uncertain predictions, which helps eliminate the impact of anomaly contamination while accentuating the predictions that the one-class model is confident in, and (2) by discriminating normal samples from native anomaly examples that are generated to simulate genuine abnormal time series behaviors on the basis of the original data. These two calibrations result in contamination-tolerant, anomaly-informed one-class learning, yielding significantly improved normality modeling. Extensive experiments on six real-world datasets show that our model substantially outperforms twelve state-of-the-art competitors, obtaining 6%-31% F1-score improvements. The source code is available at \url{//github.com/xuhongzuo/couta}.
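The two calibrations described above can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: the segment-swapping scheme for generating "native" anomalies and the exponential confidence weighting are toy stand-ins, not the paper's actual mechanisms.

```python
import numpy as np

def native_anomalies(windows, rng):
    """Create simulated ('native') anomalies from normal windows by
    replacing a short segment with an amplified segment taken from
    another window -- an assumed, much-simplified scheme for
    illustration only."""
    out = windows.copy()
    n, w = out.shape
    for i in range(n):
        j = int(rng.integers(n))       # donor window
        s = int(rng.integers(w - 3))   # segment start
        out[i, s:s + 3] = windows[j, s:s + 3] * 3.0  # exaggerated amplitude
    return out

def calibrated_one_class_loss(scores_normal, scores_anomal):
    """Toy calibrated one-class objective: normal samples with high
    (uncertain) anomaly scores are down-weighted, damping the influence
    of contaminated training points, while generated anomalies are
    pushed above a unit margin."""
    weights = np.exp(-scores_normal)                 # confidence weights
    loss_normal = np.mean(weights * scores_normal)   # calibrated pull-in
    loss_anomal = np.mean(np.maximum(0.0, 1.0 - scores_anomal))  # hinge push-out
    return loss_normal + loss_anomal
```

The down-weighting term realizes calibration (1); the hinge term over generated anomalies realizes calibration (2).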
We establish optimal convergence rates, up to a log factor, for a class of deep neural networks in a classification setting under a restriction sometimes referred to as the Tsybakov noise condition. We construct classifiers in a general setting where the boundary of the Bayes rule can be approximated well by neural networks. Corresponding rates of convergence are proven with respect to the misclassification error. It is then shown that these rates are optimal in the minimax sense if the boundary satisfies a smoothness condition. Suboptimal convergence rates for this setting were previously known. Our main contribution lies in improving the existing rates and showing their optimality, which was an open problem. Furthermore, we show almost optimal rates under some additional restrictions which circumvent the curse of dimensionality. For our analysis we require a condition which gives new insight on the restriction used: in a sense, it acts as a requirement for the "correct noise exponent" for a class of functions.
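For reference, the noise condition alluded to above is usually stated as follows (a standard formulation, with $\eta(x) = P(Y = 1 \mid X = x)$ the conditional class probability and constants $C > 0$, $\alpha \geq 0$):

```latex
P\left( 0 < \left| \eta(X) - \tfrac{1}{2} \right| \le t \right) \;\le\; C\, t^{\alpha}
\qquad \text{for all } t > 0 .
```

Larger noise exponents $\alpha$ put less probability mass near the decision boundary $\{\eta = \tfrac{1}{2}\}$ and allow faster classification rates.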
We consider an analysis of variance type problem, where the sample observations are random elements in an infinite dimensional space. This scenario covers the case where the observations are random functions. For such a problem, we propose a test based on spatial signs. We develop an asymptotic implementation as well as a bootstrap implementation and a permutation implementation of this test and investigate their size and power properties. We compare the performance of our test with that of several mean-based tests of analysis of variance for functional data studied in the literature. Interestingly, our test outperforms the mean-based tests not only in several non-Gaussian models with heavy tails or skewed distributions, but also in some Gaussian models. We further compare the performance of our test with the mean-based tests in several models involving contaminated probability distributions. Finally, we demonstrate the performance of these tests on three real datasets: a Canadian weather dataset, a spectrometric dataset on chemical analysis of meat samples, and a dataset of orthotic measurements on volunteers.
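The spatial sign underlying such tests has a very short definition once functional observations are discretised onto a common grid. A minimal sketch (the actual test statistic, its asymptotics, and the resampling implementations are not reproduced here):

```python
import numpy as np

def spatial_sign(x, eps=1e-12):
    """Spatial sign of a discretised functional observation:
    x / ||x||, with the zero function mapped to zero."""
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > eps else np.zeros_like(x)

def group_sign_means(groups):
    """Mean spatial sign within each sample; a sign-based ANOVA-type
    statistic contrasts these group means, which should be close to
    one another when all groups share the same 'location'."""
    return [np.mean([spatial_sign(x) for x in group], axis=0) for group in groups]
```

Because each observation contributes only its direction, not its magnitude, statistics built from these signs are insensitive to heavy tails, which is consistent with the power gains reported above.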
We consider studies where multiple measures of an outcome variable are collected over time, but some subjects drop out before the end of follow-up. Analyses of such data often proceed under either a 'last observation carried forward' or a 'missing at random' assumption. We consider two alternative strategies for identification: the first is closely related to the difference-in-differences methodology in the causal inference literature; the second enables correction for violations of the parallel trend assumption, so long as one has access to a valid 'bespoke instrumental variable'. These strategies are compared with existing approaches, first conceptually and then in an analysis of data from the Framingham Heart Study.
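In a simple two-occasion version (an illustrative sketch, not the paper's full setup), with $Y_0, Y_1$ the outcomes at the two occasions and $R$ an indicator of dropout after occasion 0, the parallel trend assumption behind the difference-in-differences strategy reads

```latex
E[\,Y_1 - Y_0 \mid R = 1\,] \;=\; E[\,Y_1 - Y_0 \mid R = 0\,],
```

so that the unobserved mean among dropouts is identified as $E[Y_1 \mid R=1] = E[Y_0 \mid R=1] + E[Y_1 - Y_0 \mid R=0]$, i.e., the dropouts' baseline mean shifted by the completers' trend.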
Negabent functions were introduced as a generalization of bent functions, which have applications in coding theory and cryptography. In this paper, we extend the notion of negabent functions to functions defined from $\mathbb{Z}_q^n$ to $\mathbb{Z}_{2q}$ ($2q$-negabent), where $q \geq 2$ is a positive integer and $\mathbb{Z}_q$ is the ring of integers modulo $q$. For this, a new unitary transform (the nega-Hadamard transform) is introduced in the current setup, and some of its properties are discussed. Several results related to $2q$-negabent functions are derived, and two constructions of $2q$-negabent functions are presented. In the first construction, $2q$-negabent functions on $n$ variables are constructed when $q$ is an even positive integer. In the second construction, $2q$-negabent functions on two variables are constructed for an arbitrary positive integer $q \geq 2$. Some examples of $2q$-negabent functions for different values of $q$ and $n$ are also presented.
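For orientation, the classical nega-Hadamard transform of a Boolean function $f : \mathbb{F}_2^n \to \mathbb{F}_2$ (the $q = 2$ starting point that the transform above generalizes) is

```latex
\mathcal{N}_f(u) \;=\; 2^{-n/2} \sum_{x \in \mathbb{F}_2^n} (-1)^{f(x) \oplus u \cdot x}\, i^{\operatorname{wt}(x)},
```

where $\operatorname{wt}(x)$ denotes the Hamming weight of $x$ and $i = \sqrt{-1}$; $f$ is negabent precisely when $|\mathcal{N}_f(u)| = 1$ for all $u \in \mathbb{F}_2^n$, in analogy with flat Walsh spectra for bent functions.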
In order to be able to apply graph canonical labelling and isomorphism checking within interactive theorem provers, either these checking algorithms must be mechanically verified, or their results must be verifiable by independent checkers. We analyze a state-of-the-art graph canonical labelling algorithm (described by McKay and Piperno) and formulate it in the form of a formal proof system. We provide an implementation that can export a proof that the obtained graph is the canonical form of a given graph. Such proofs are then verified by our independent checker and can be used to certify that two given graphs are non-isomorphic.
Optimizing noisy functions online, when evaluating the objective requires experiments on a deployed system, is a crucial task arising in manufacturing, robotics, and many other domains. Often, constraints on safe inputs are unknown ahead of time, and we only obtain noisy information indicating how close we are to violating the constraints. Yet, safety must be guaranteed at all times, not only for the final output of the algorithm. We introduce a general approach for seeking a stationary point in high-dimensional non-linear stochastic optimization problems in which maintaining safety during learning is crucial. Our approach, called LB-SGD, is based on applying stochastic gradient descent (SGD) with a carefully chosen adaptive step size to a logarithmic barrier approximation of the original problem. We provide a complete convergence analysis for non-convex, convex, and strongly convex smooth constrained problems, with first-order and zeroth-order feedback. Our approach yields efficient updates and scales better with dimensionality than existing approaches. We empirically compare the sample complexity and the computational cost of our method with those of existing safe learning approaches. Beyond synthetic benchmarks, we demonstrate the effectiveness of our approach in minimizing constraint violation in policy search tasks in safe reinforcement learning (RL).
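The log-barrier idea can be sketched in a few lines. This is a deliberately simplified, deterministic illustration: the paper's LB-SGD uses stochastic gradient estimates and a smoothness-based adaptive step-size rule, which are replaced here by exact gradients and a crude cap that prevents any constraint slack from shrinking by more than half per step.

```python
import numpy as np

def lb_sgd_sketch(f_grad, gs, g_grads, x0, eta=0.1, base_lr=0.1, steps=200):
    """Simplified log-barrier descent (illustration, not the paper's rule).
    Minimise f(x) s.t. g_i(x) <= 0 by descending the barrier
    B(x) = f(x) - eta * sum_i log(-g_i(x)), capping the step so each
    linearised constraint slack shrinks by at most half, which keeps
    every iterate strictly feasible."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        slacks = np.array([-g(x) for g in gs])  # > 0 while strictly feasible
        d = f_grad(x) + eta * sum(gg(x) / s for gg, s in zip(g_grads, slacks))
        nd = np.linalg.norm(d)
        if nd < 1e-12:
            break  # (approximately) stationary for the barrier
        lr = base_lr
        for gg, s in zip(g_grads, slacks):
            dg = abs(gg(x) @ d)  # linearised slack change per unit step
            if dg > 1e-12:
                lr = min(lr, 0.5 * s / dg)
        x = x - lr * d
    return x
```

On a toy problem such as minimising $\|x-2\|^2$ subject to $x_1 \le 1$, the iterates approach the constrained optimum from inside the feasible set, with the barrier weight `eta` controlling how closely the boundary is approached.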
As sustainability becomes an increasing priority throughout global society, academic and research institutions are assessed on their contribution to relevant research publications. This study compares four methods of identifying research publications related to United Nations Sustainable Development Goal 13: climate action. The four methods (Elsevier, STRINGS, SIRIS, and Dimensions) have each developed search strings with the help of subject matter experts, which are then enhanced through distinct methods to produce a final set of publications. Our analysis showed that the methods produced comparable quantities of publications, but with little overlap between them. We visualised some differences in topic focus between the methods and drew links with the search strategies used. Differences between the publications retrieved are likely to stem from subjective interpretation of the goals, keyword selection, operationalisation of search strategies, AI enhancements, and the selection of bibliographic database. Each of these elements warrants deeper investigation to understand its role in identifying SDG-related research. Before choosing any method to assess research contributions to the SDGs, end users of SDG data should carefully consider their interpretation of the goal and determine which of the available methods produces the closest dataset. Meanwhile, data providers might customise their methods for varying interpretations of the SDGs.
In this paper, we propose jointly learned attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for the task of image captioning), training such existing network architectures typically requires pre-defined label sequences. For multi-label classification, it would be desirable to have a robust inference process, so that prediction errors do not propagate and thus affect the performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model.
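The beam-search decoding step mentioned above can be sketched generically. The `step_scores` callable below is an assumption standing in for the attention/LSTM decoder: it maps the labels emitted so far to per-label log-probabilities, and emitted labels are masked out so each label is predicted at most once.

```python
import numpy as np

def beam_search_labels(step_scores, num_labels, beam=3, max_len=4):
    """Beam search over label sequences for multi-label prediction.
    `step_scores(prefix)` returns log-probabilities over all labels
    given the labels predicted so far (here a user-supplied callable;
    in the model it would come from the recurrent decoder)."""
    beams = [((), 0.0)]  # (label prefix, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for prefix, lp in beams:
            scores = step_scores(prefix)
            for lab in range(num_labels):
                if lab in prefix:
                    continue  # each label appears at most once
                candidates.append((prefix + (lab,), lp + scores[lab]))
        beams = sorted(candidates, key=lambda t: t[1], reverse=True)[:beam]
    return beams[0][0]  # highest-scoring label sequence
```

Keeping `beam` hypotheses per step, rather than committing greedily to one label at a time, is what limits the propagation of early prediction errors; a stopping criterion (e.g., an end token or score threshold) is omitted here for brevity.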