Random walks are widely used for mining networks because they are computationally efficient to compute. For instance, graph representation learning learns a d-dimensional embedding space such that nodes that tend to co-occur on random walks (a proxy for being in the same network neighborhood) are close in the embedding space. Specific local network topology (i.e., structure) influences the co-occurrence of nodes on random walks, so random walks of limited length capture only partial topological information, diminishing the performance of downstream methods. We explicitly capture all topological neighborhood information and improve performance by introducing orbit adjacencies, which quantify the adjacency of two nodes as their co-occurrence on a given pair of graphlet orbits, i.e., symmetric positions on graphlets (small, connected, non-isomorphic, induced subgraphs of a large network). Importantly, we mathematically prove that random walks on up to k nodes capture only a subset of all possible orbit adjacencies for up-to-k-node graphlets. Furthermore, we enable orbit adjacency-based analysis of networks by developing an efficient GRaphlet-orbit ADjacency COunter (GRADCO), which exhaustively computes all 28 orbit adjacency matrices for up-to-four-node graphlets. Four-node graphlets suffice because real networks are usually small-world. On large networks of around 20,000 nodes, GRADCO computes the 28 matrices in minutes. On six real networks from various domains, we compare the performance of node-label predictors obtained from network embeddings based on our orbit adjacencies with those based on random walks. We find that orbit adjacencies, which include those unseen by random walks, outperform random walk-based adjacencies, demonstrating the importance of including the topological neighborhood information that random walks miss.
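To make orbit adjacencies concrete, below is a minimal, illustrative sketch (not GRADCO itself; networkx and the function name are our choices) of a single orbit adjacency matrix for the 3-node path graphlet: entry A[u][v] counts how often u and v co-occur on its two end-node orbits, i.e., the number of induced paths u-w-v.

    import itertools
    import networkx as nx
    import numpy as np

    def end_end_orbit_adjacency(G):
        # A[u][v] = number of induced 3-node paths u - w - v, i.e. how often
        # u and v co-occur on the (end, end) orbit pair of the path graphlet.
        nodes = sorted(G)
        idx = {n: i for i, n in enumerate(nodes)}
        A = np.zeros((len(nodes), len(nodes)), dtype=int)
        for w in G:                                  # w occupies the middle orbit
            for u, v in itertools.combinations(G[w], 2):
                if not G.has_edge(u, v):             # induced path, not a triangle
                    A[idx[u], idx[v]] += 1
                    A[idx[v], idx[u]] += 1
        return A

    G = nx.path_graph(4)                             # 0 - 1 - 2 - 3
    print(end_end_orbit_adjacency(G))                # e.g. 0 and 2 co-occur once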
Probabilistic forecasts comprehensively describe the uncertainty in an unknown future outcome, making them essential for decision making and risk management. While several methods have been introduced to evaluate probabilistic forecasts, existing evaluation techniques are ill-suited to assessing the tail properties of such forecasts. Yet these tail properties are often of particular interest to forecast users, owing to the severe impacts caused by extreme outcomes. In this work, we introduce a general notion of tail calibration for probabilistic forecasts, which allows forecasters to assess the reliability of their predictions for extreme outcomes. We study the relationships between tail calibration and standard notions of forecast calibration, and discuss connections to peaks-over-threshold models in extreme value theory. Diagnostic tools are introduced and applied in a case study on European precipitation forecasts.
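As an illustration of the kind of diagnostic this enables, the sketch below checks one natural tail analogue of probabilistic calibration: conditional on the outcome exceeding a threshold t, the conditional PIT values (F(y) - F(t)) / (1 - F(t)) should be approximately standard uniform. The Gaussian forecasts and the uniformity test are our illustrative choices, not the paper's exact tools.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    mu = rng.normal(size=5000)              # forecast means
    y = rng.normal(loc=mu)                  # outcomes; here perfectly calibrated
    t = 1.5                                 # tail threshold

    F = stats.norm(loc=mu)                  # per-case forecast distributions
    exceed = y > t
    cpit = (F.cdf(y) - F.cdf(t)) / (1 - F.cdf(t))
    cpit = cpit[exceed]
    # Under tail calibration the conditional PITs are ~ Uniform(0, 1):
    print(stats.kstest(cpit, "uniform"))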
To avoid poor empirical performance in Metropolis-Hastings and other accept-reject-based algorithms, practitioners often tune them by trial and error. Lower bounds on the convergence rate are developed in both total variation and Wasserstein distances in order to identify how the simulations will fail, so that these settings can be avoided, providing guidance on tuning. Particular attention is paid to using the lower bounds to study the convergence complexity of accept-reject-based Markov chains and to constrain the rate of convergence for geometrically ergodic Markov chains. The theory is applied in several settings. For example, if the target density concentrates with a parameter n (e.g., posterior concentration, Laplace approximations), it is demonstrated that the convergence rate of a Metropolis-Hastings chain can be arbitrarily slow if the tuning parameters do not depend carefully on n. This is demonstrated with Bayesian logistic regression under Zellner's g-prior when the dimension and sample size increase together, and with flat-prior Bayesian logistic regression as n tends to infinity.
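The phenomenon is easy to reproduce numerically. In the toy sketch below (our construction, not the paper's analysis), the target N(0, 1/n) concentrates as n grows: a random-walk Metropolis chain with a fixed proposal step size rejects nearly every move, whereas a step size scaled as 1/sqrt(n) keeps the acceptance rate stable.

    import numpy as np

    def rwm_accept_rate(n, h, iters=20000, seed=0):
        # Random-walk Metropolis targeting N(0, 1/n) with proposal step h.
        rng = np.random.default_rng(seed)
        logpi = lambda x: -0.5 * n * x**2
        x, accepts = 0.0, 0
        for _ in range(iters):
            y = x + h * rng.standard_normal()
            if np.log(rng.random()) < logpi(y) - logpi(x):
                x, accepts = y, accepts + 1
        return accepts / iters

    for n in (10, 1000, 100000):
        print(n, rwm_accept_rate(n, h=1.0), rwm_accept_rate(n, h=1 / np.sqrt(n)))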
One persistent obstacle in industrial quality inspection is the detection of anomalies. In real-world use cases, two problems must be addressed: anomalous data are sparse, and the same types of anomalies need to be detected on previously unseen objects. Current anomaly detection approaches can be trained with sparse nominal data, whereas domain generalization approaches enable detecting objects in previously unseen domains. Building on these two observations, we introduce the hybrid task of domain generalization on sparse classes. To provide an accompanying dataset for this task, we modify the well-established MVTec AD dataset by generating three new datasets. In addition to applying existing methods as benchmarks, we design two embedding-based approaches, Spatial Embedding MLP (SEMLP) and Labeled PatchCore. Overall, SEMLP achieves the best performance, with an average image-level AUROC of 87.2% versus 80.4% for MIRO. The new, openly available datasets allow further research to improve industrial anomaly detection.
Threshold selection is a fundamental problem in any threshold-based extreme value analysis. While models are asymptotically motivated, selecting an appropriate threshold for finite samples is difficult and highly subjective with standard methods. Inference for high quantiles can also be highly sensitive to the choice of threshold. Too low a threshold leads to bias in the fit of the extreme value model, while too high a threshold leads to unnecessary additional uncertainty in the estimation of model parameters. We develop a novel methodology for automated threshold selection that directly tackles this bias-variance trade-off. We also develop a method to account for the uncertainty in the threshold estimation and to propagate this uncertainty through to high quantile inference. Through a simulation study, we demonstrate the effectiveness of our method for threshold selection and subsequent extreme quantile estimation relative to the leading existing methods, and show that the method's effectiveness is not sensitive to its tuning parameters. We apply our method to the well-known, troublesome example of the River Nidd dataset.
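For intuition, the sketch below shows the general shape of such an automated procedure (a stand-in criterion, not the paper's exact method): at each candidate threshold, fit a generalized Pareto distribution to the exceedances, score the discrepancy between model and empirical quantiles, and select the threshold minimizing the score.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.pareto(3, size=2000) + 1        # heavy-tailed toy data

    def score(u):
        # Fit a GPD to exceedances of u and measure quantile discrepancy.
        exc = x[x > u] - u
        c, _, scale = stats.genpareto.fit(exc, floc=0)
        p = (np.arange(1, len(exc) + 1) - 0.5) / len(exc)
        return np.mean(np.abs(np.quantile(exc, p)
                              - stats.genpareto.ppf(p, c, scale=scale)))

    candidates = np.quantile(x, [0.5, 0.7, 0.8, 0.9, 0.95])
    print("selected threshold:", min(candidates, key=score))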
Today, hate speech classification from Arabic tweets has drawn the attention of several researchers, and many systems and techniques have been developed to address this classification task. Nevertheless, two major challenges in this context are limited performance and the problem of imbalanced data. In this study, we propose a novel approach that leverages ensemble learning and semi-supervised learning based on previously manually labeled data. We conducted experiments on a benchmark dataset by classifying Arabic tweets into five distinct classes: non-hate, general hate, racial, religious, or sexism. Experimental results show that (1) ensemble learning based on pre-trained language models outperforms existing related works, and (2) our proposed data augmentation improves the accuracy of hate speech detection from Arabic tweets and outperforms existing related works. Our main contribution is achieving encouraging results in Arabic hate speech detection.
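The semi-supervised step can be sketched generically as confidence-filtered pseudo-labelling by an ensemble (illustrative only: the paper uses pre-trained Arabic language models, not these toy TF-IDF classifiers, and the 0.6 confidence cutoff is our placeholder).

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import MultinomialNB

    X_lab = ["peace and love", "I hate that group",
             "have a nice day", "they are vermin"]
    y_lab = [0, 1, 0, 1]                    # toy labels: 0 = non-hate, 1 = hate
    X_unl = ["hate hate hate", "lovely nice day"]

    vec = TfidfVectorizer().fit(X_lab + X_unl)
    models = [LogisticRegression(max_iter=1000), MultinomialNB()]
    probs = [m.fit(vec.transform(X_lab), y_lab)
              .predict_proba(vec.transform(X_unl)) for m in models]
    avg = np.mean(probs, axis=0)            # soft-voting ensemble
    keep = avg.max(axis=1) > 0.6            # keep only confident pseudo-labels
    X_aug = X_lab + [t for t, k in zip(X_unl, keep) if k]
    y_aug = y_lab + list(avg.argmax(axis=1)[keep])
    print(X_aug, y_aug)                     # augmented training set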
We explore theoretical aspects of boundary conditions for lattice Boltzmann methods, focusing on a toy two-velocities scheme. By mapping lattice Boltzmann schemes to finite difference schemes, we facilitate rigorous consistency and stability analyses. We develop kinetic boundary conditions for inflows and outflows, highlighting the trade-off between accuracy and stability, which we successfully overcome. Stability is assessed using GKS (Gustafsson, Kreiss, and Sundström) analysis and -- when this approach fails on coarse meshes -- spectral and pseudo-spectral analyses of the scheme's matrix that explain effects germane to low resolutions.
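For reference, a minimal two-velocities (D1Q2) scheme has the shape sketched below; the inflow and outflow treatments here are deliberately crude placeholders, not the kinetic boundary conditions developed in the paper.

    import numpy as np

    # D1Q2 BGK scheme for the advection equation u_t + c u_x = 0.
    L, N, lam, c, omega = 1.0, 200, 1.0, 0.5, 1.5
    dx = L / N
    x = np.linspace(dx / 2, L - dx / 2, N)

    def feq(u):
        # Equilibria of the two populations moving at speeds +lam and -lam.
        return np.stack([u / 2 + c * u / (2 * lam),
                         u / 2 - c * u / (2 * lam)])

    f = feq(np.exp(-100 * (x - 0.3) ** 2))   # Gaussian initial datum
    for _ in range(150):
        u = f[0] + f[1]                      # conserved moment
        f = f + omega * (feq(u) - f)         # collision (relaxation)
        fp = np.roll(f[0], 1); fp[0] = 0.0   # stream right; inflow datum u = 0
        fm = np.roll(f[1], -1); fm[-1] = f[1][-1]  # stream left; copy outflow
        f = np.stack([fp, fm])
    print(u.max())                           # peak of the advected profile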
While undulatory swimming of elongate limbless robots has been extensively studied in open hydrodynamic environments, less research has focused on limbless locomotion in complex, cluttered aquatic environments. Motivated by the concept of mechanical intelligence, in which control for obstacle navigation can be offloaded to passive body mechanics in terrestrial limbless locomotion, we hypothesize that the principles of mechanical intelligence extend to cluttered hydrodynamic regimes. To test this, we developed an untethered limbless robot capable of undulatory swimming on water surfaces, using a bilateral cable-driven mechanism inspired by organismal muscle actuation morphology to achieve programmable anisotropic body compliance. We demonstrated through robophysical experiments that, as in terrestrial locomotion, an appropriate level of body compliance can facilitate emergent swimming through complex hydrodynamic environments under purely open-loop control. Moreover, we found that swimming performance depends on undulation frequency, with effective locomotion achieved only within a specific frequency range. This contrasts with highly damped terrestrial regimes, where inertial effects can often be neglected. Furthermore, to enhance performance and address the challenges posed by nondeterministic obstacle distributions, we incorporated computational intelligence by developing a real-time body compliance tuning controller based on cable tension feedback. This controller improves the robot's robustness and overall speed in heterogeneous hydrodynamic environments.
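Schematically, the compliance-tuning loop can be as simple as a proportional update driven by the cable-tension error (our illustration; the robot's actual control law is not specified here, and the names and gains below are hypothetical).

    def tune_compliance(stiffness, tension, target=2.0, gain=0.1,
                        lo=0.2, hi=5.0):
        # One control step: stiffen the body when measured cable tension
        # exceeds the target (sustained obstacle contact), soften otherwise,
        # clipped to the actuator's feasible stiffness range.
        stiffness += gain * (tension - target)
        return min(max(stiffness, lo), hi)

    k = 1.0
    for tension in [1.8, 2.5, 3.1, 2.0]:    # mock cable-tension readings
        k = tune_compliance(k, tension)
        print(round(k, 3))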
We study how abstract representations emerge in a Deep Belief Network (DBN) trained on benchmark datasets. Our analysis targets the principles of learning in the early stages of information processing, starting from the "primordial soup" of the under-sampling regime. As the data are processed by deeper and deeper layers, features are detected and removed, transferring more and more "context-invariant" information to deeper layers. We show that the representation approaches a universal model -- the Hierarchical Feature Model (HFM) -- determined by the principle of maximal relevance. Relevance quantifies the uncertainty about the model of the data, suggesting that "meaning" -- i.e., syntactic information -- is that part of the data which is not yet captured by a model. Our analysis shows that shallow layers are well described by pairwise Ising models, which provide a representation of the data in terms of generic, low-order features. We also show that plasticity increases with depth, much as it does in the brain. These findings suggest that DBNs can extract a hierarchy of features from the data that is consistent with the principle of maximal relevance.
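For concreteness, the relevance statistic invoked here is, in the maximal-relevance literature, the entropy of the empirical frequency of internal states, H[K] = -Σ_k (k m_k / M) log(k m_k / M), where m_k is the number of states observed exactly k times among M samples. A minimal sketch:

    from collections import Counter
    import numpy as np

    def relevance(samples):
        # H[K]: entropy of the frequency with which states are sampled.
        M = len(samples)
        counts = Counter(samples)           # state -> frequency k
        m = Counter(counts.values())        # k -> number of states m_k
        p = np.array([k * mk / M for k, mk in m.items()])
        return -np.sum(p * np.log(p))

    states = list(np.random.default_rng(0).integers(0, 50, size=1000))
    print(relevance(states))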
Neural networks that compute with algebras richer than the real numbers perform better in some applications. However, there is no general framework for constructing hypercomplex neural networks. We propose a library, integrated with Keras, that can run such computations on both TensorFlow and PyTorch. It provides Dense and 1D, 2D, and 3D Convolutional layer architectures.
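To illustrate the core idea behind such layers (our sketch, not the library's actual API), a quaternion-valued Dense layer shares four real kernels across the four input components via the Hamilton product:

    import tensorflow as tf

    class QuaternionDense(tf.keras.layers.Layer):
        # Illustrative quaternion Dense layer: inputs pack the four
        # quaternion parts (r, i, j, k) along the last axis.
        def __init__(self, units):
            super().__init__()
            self.units = units

        def build(self, input_shape):
            in_dim = input_shape[-1] // 4
            shape = (in_dim, self.units)
            self.r, self.i, self.j, self.k = (
                self.add_weight(shape=shape) for _ in range(4))

        def call(self, x):
            r, i, j, k = tf.split(x, 4, axis=-1)
            # Hamilton product of the quaternion input with the kernel:
            out_r = r @ self.r - i @ self.i - j @ self.j - k @ self.k
            out_i = r @ self.i + i @ self.r + j @ self.k - k @ self.j
            out_j = r @ self.j - i @ self.k + j @ self.r + k @ self.i
            out_k = r @ self.k + i @ self.j - j @ self.i + k @ self.r
            return tf.concat([out_r, out_i, out_j, out_k], axis=-1)

    print(QuaternionDense(8)(tf.random.normal((2, 16))).shape)  # (2, 32)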
Turnover is the movement of professional employees into and out of a company in a given period. This phenomenon significantly impacts the software industry, since it generates knowledge loss, schedule delays, and increased costs in the final project. Despite the efforts of researchers and practitioners to minimize turnover, more studies are needed to understand what motivates software engineers to leave their jobs and which strategies CEOs adopt to retain these professionals in software development companies. In this paper, we contribute a mixed-methods study involving semi-structured interviews with software engineers and CEOs, to obtain a broad view of these professionals' opinions about turnover, and a subsequent validation survey with additional software engineers to check and review the insights from the interviews. In studying these aspects, we identified 19 distinct reasons for software engineers' turnover and 18 effective strategies used in the software development industry to reduce it. Our findings provide several implications for industry and academia, which can drive future research.