We consider the statistical problem of estimating effects associated with the edges of a complete bipartite graph $K_{v_1, v_2}=(V_1, V_2 \, ; E)$. Each observation is obtained as the sum of the effects of a selected subset of $E$. To estimate these effects efficiently, we propose a design called a Spanning Bipartite Block Design (SBBD). For SBBDs in which the effects are estimable, we prove that the estimators all have the same variance (i.e., the design is variance balanced). If each block (a subgraph of $K_{v_1, v_2}$) of an SBBD is a semi-regular or regular bipartite graph, we show that the design is A-optimal. We also give a construction of SBBDs using an $(r,\lambda)$-design and an ordered design. A BIBD with prime-power blocks yields an A-optimal semi-regular or regular SBBD. Finally, we note that SBBDs can also be applied to deep learning.
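For context, one natural way to formalize the estimation problem sketched above (this reading and notation are our own illustration, not taken from the paper): writing $\beta_e$ for the effect of edge $e \in E$, each observation is the sum of the effects over its selected edge set plus noise, which is an ordinary linear model with a 0-1 incidence design matrix,
\[
y_i \;=\; \sum_{e \in S_i} \beta_e + \varepsilon_i, \quad S_i \subseteq E,
\qquad\text{equivalently}\qquad
y = X\beta + \varepsilon,\quad X_{ie} = \mathbf{1}\{e \in S_i\},
\]
so the design question is how to choose the subsets $S_i$ (the blocks) so that $\beta$ is estimable and the estimators are efficient.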
Behavioural metrics provide a quantitative refinement of classical two-valued behavioural equivalences on systems with quantitative data, such as metric or probabilistic transition systems. In analogy to the linear-time/branching-time spectrum of two-valued behavioural equivalences on transition systems, behavioural metrics vary in granularity. We provide a unifying treatment of spectra of behavioural metrics in the emerging framework of graded monads, working in coalgebraic generality, that is, parametrically in the system type. In the ensuing development of quantitative graded semantics, we introduce algebraic presentations of graded monads on the category of metric spaces. Moreover, we obtain a canonical generic notion of invariant real-valued modal logic, and provide criteria for such logics to be expressive in the sense that logical distance coincides with behavioural distance. We present positive examples based on this criterion, covering both known and new expressiveness results; in particular, we show that expressiveness holds essentially always for Eilenberg-Moore type trace semantics, and we obtain a new expressiveness result for trace semantics of fuzzy transition systems. As a negative result, we show that trace distance on probabilistic metric transition systems does not admit any characteristic real-valued modal logic, even in a more broadly understood sense.
Robotic manipulation can greatly benefit from the data efficiency, robustness, and predictability of model-based methods if robots can quickly generate models of novel objects they encounter. This is especially difficult when effects like complex joint friction lack clear first-principles models and are usually ignored by physics simulators. Further, numerically stiff contact dynamics can make common model-building approaches struggle. We propose a method to simultaneously learn the contact and continuous dynamics of a novel, possibly multi-link object by observing its motion through contact-rich trajectories. We formulate a system identification process whose loss infers unmeasured contact forces and penalizes their violation of physical constraints and the laws of motion under the current model parameters, in contrast to the prediction-based losses used in differentiable simulation. On a new dataset of real articulated-object trajectories and an existing cube-toss dataset, our method outperforms differentiable-simulation and end-to-end alternatives while being more data efficient. See our project page for code, datasets, and media: //sites.google.com/view/continuous-contact-nets/home
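To make the flavour of such an inferred-force loss concrete, here is a toy one-dimensional illustration (our own simplification, not the paper's formulation): given a candidate model, infer the non-negative contact impulse that best explains an observed velocity change, then penalize how far that impulse must stray from physical constraints such as no penetration and force only at contact.

```python
def toy_contact_loss(mass, gap, v, v_next, f_applied, dt,
                     w_pen=1.0, w_comp=1.0):
    """Toy 1D inferred-contact-impulse loss (illustrative only).

    mass, gap  : candidate model parameters / signed distance to the contact surface
    v, v_next  : observed velocities before and after the time step
    f_applied  : known non-contact force (e.g., gravity plus actuation)
    """
    # impulse that would exactly explain the observed velocity change
    lam_free = mass * (v_next - v) - f_applied * dt
    lam = max(lam_free, 0.0)                      # contact can only push, never pull
    dyn_residual = mass * (v_next - v) - f_applied * dt - lam
    penetration = min(gap, 0.0)                   # negative gap = penetration
    complementarity = lam * max(gap, 0.0)         # force applied while out of contact
    return dyn_residual**2 + w_pen * penetration**2 + w_comp * complementarity**2
```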
A classical problem of statistical inference is the valid specification of a model that can account for the statistical dependencies between observations when the true structure is dense, intractable, or unknown. To address this problem, a new variance identity is presented, which is closely related to the Moulton factor. This identity does not require the specification of an entire covariance structure and instead relies on the choice of two summary constants. Using this result, a weak law of large numbers is also established for additive statistics and common variance estimators under very general conditions of statistical dependence. Furthermore, this paper proves a sharper version of Hoeffding's inequality for symmetric and bounded random variables under these same conditions of statistical dependence. Put otherwise, it is shown that, under relatively mild conditions, finite sample inference is possible in common settings such as linear regression, even when every outcome variable is statistically dependent on all the others. All results are extended to estimating equations. Simulation experiments and an application to climate data are also provided.
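For context, the classical Moulton factor that the new identity is described as being closely related to: for $g$ equal-sized clusters of $m$ observations each, with common variance $\sigma^2$ and intra-cluster correlation $\rho$, the variance of the sample mean is inflated relative to the i.i.d. case by
\[
\operatorname{Var}(\bar{y}) \;=\; \frac{\sigma^2}{gm}\bigl[1 + (m-1)\rho\bigr],
\]
so ignoring the dependence understates the variance by the factor $1+(m-1)\rho$; the paper's identity is stated to achieve something analogous using only two summary constants rather than a full covariance specification.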
To benefit from the abundance of data and the insights it brings, data processing pipelines are used in many areas of research and development, both in industry and in academia. One approach to automating data processing pipelines is workflow technology, which also supports collaborative, trial-and-error experimentation with the pipeline architecture in different application domains. Beyond the flexibility that such pipelines need to possess, cross-organisational interactions in collaborative settings are plagued by a lack of trust. While capturing provenance information about pipeline execution and the processed data is a first step towards enabling trusted collaborations, current solutions do not capture the provenance of changes to the processing pipelines, where a change may affect any aspect of the workflow implementing the pipeline as well as the data used while the pipeline is being executed. In this work we therefore provide a solution architecture and a proof-of-concept implementation of a service, called Provenance Holder, which enables provenance of collaborative, adaptive data processing pipelines in a trusted manner. We also contribute a definition of a set of properties of such a service and identify future research directions.
Transparent object perception is a rapidly developing research problem in artificial intelligence. The ability to perceive transparent objects enables robots to achieve higher levels of autonomy, unlocking new applications in industries such as healthcare, services, and manufacturing. Despite the numerous datasets and perception methods proposed in recent years, there is still a lack of in-depth understanding of these methods and of the challenges in this field. To address this gap, this article provides a comprehensive survey of the platforms and recent advances in robotic perception of transparent objects. We highlight the main challenges and propose future directions for the various transparent object perception tasks, i.e., segmentation, reconstruction, and pose estimation. We also discuss the limitations of existing datasets in terms of diversity and complexity, and the benefits of employing multi-modal sensors, such as RGB-D cameras, thermal cameras, and polarised imaging, for transparent object perception. Furthermore, we identify perception challenges in complex and dynamic environments, as well as for objects with changeable geometries. Finally, we provide an interactive online platform to navigate each reference: \url{//sites.google.com/view/transperception}.
We study the expressivity of ReLU neural networks in the setting of binary classification problems from a topological perspective. Recent empirical studies have shown that neural networks operate by changing topology, transforming a topologically complicated data set into a topologically simpler one as it passes through the layers. This topological simplification has been measured by Betti numbers, which are algebraic invariants of a topological space. We use the same measure to establish lower and upper bounds on the topological simplification a ReLU neural network can achieve with a given architecture. We thereby contribute to a better understanding of the expressivity of ReLU neural networks in the context of binary classification problems by shedding light on their ability to capture the underlying topological structure of the data. In particular, the results show that deep ReLU neural networks are exponentially more powerful than shallow ones in terms of topological simplification. This provides a mathematically rigorous explanation of why deeper networks are better equipped to handle complex and topologically rich datasets.
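As a brief reminder of the measure used above (standard definitions, not results from the paper): the $k$-th Betti number $\beta_k$ of a space counts its independent $k$-dimensional holes, with $\beta_0$ the number of connected components and $\beta_1$ the number of independent loops. For instance,
\[
\beta_0(S^1) = 1,\ \beta_1(S^1) = 1; \qquad
\beta_0(D^2 \sqcup D^2) = 2,\ \beta_1(D^2 \sqcup D^2) = 0,
\]
so a class shaped like a circle carries a one-dimensional hole that a class shaped like two disjoint disks does not, while the latter has more connected components; "topological simplification" across layers means driving such Betti numbers down.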
We consider a simple setting in neuroevolution where an evolutionary algorithm optimizes the weights and activation functions of a simple artificial neural network. We then define simple example functions to be learned by the network and conduct rigorous runtime analyses for networks with a single neuron and for a more advanced structure with several neurons and two layers. Our results show that the proposed algorithm is generally efficient on two example problems designed for one neuron and efficient with at least constant probability on the example problem for a two-layer network. In particular, the so-called harmonic mutation operator, which chooses steps of size $j$ with probability proportional to $1/j$, turns out to be a good choice for the underlying search space. However, for the case of one neuron, we also identify situations with hard-to-overcome local optima. Experimental investigations of our neuroevolutionary algorithm and a state-of-the-art CMA-ES support the theoretical findings.
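As an illustration of the harmonic mutation operator mentioned above, here is a minimal sketch; the choice of mutating a single uniformly chosen weight and the cap on the step size are our illustrative assumptions, not details from the paper.

```python
import random

def harmonic_step(max_step: int) -> int:
    """Sample a signed step whose magnitude j has probability proportional to 1/j."""
    sizes = range(1, max_step + 1)
    j = random.choices(sizes, weights=[1.0 / s for s in sizes], k=1)[0]
    return j if random.random() < 0.5 else -j

def harmonic_mutation(weights, max_step=10):
    """Mutate one uniformly chosen weight by a harmonically distributed step."""
    child = list(weights)
    i = random.randrange(len(child))
    child[i] += harmonic_step(max_step)
    return child
```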
Inverse problems are characterized by their inherent non-uniqueness and sensitivity to data perturbations. Their stable solution requires the application of regularization methods, including variational and iterative regularization methods. Superiorization is a heuristic approach that steers a basic iterative algorithm towards a small value of a certain regularization functional while essentially retaining the algorithm's simplicity and computational cost, and is thereby able to account for additional prior information. In this note, we combine the superiorization methodology with iterative regularization methods and show that the superiorized version of the scheme again yields a regularization method, one which, however, accounts for different prior information.
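A schematic sketch of the superiorization pattern described above, using a Landweber iteration for $\|Ax-b\|^2$ as the basic algorithm and a generic regularization functional $\phi$ supplied via its gradient; both choices, and the geometric step sizes $\beta_k$, are illustrative assumptions rather than the specific scheme analyzed in the note.

```python
import numpy as np

def superiorized_landweber(A, b, phi_grad, x0, n_iter=200, beta0=1.0, gamma=0.99):
    """Superiorized basic iteration (sketch).

    Each sweep first perturbs the iterate along a nonascending direction of the
    regularization functional phi, with summable step sizes beta0 * gamma**k,
    and then applies the basic algorithmic step (here: a Landweber step).
    """
    omega = 1.0 / np.linalg.norm(A, 2) ** 2        # Landweber step size
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        g = phi_grad(x)
        norm_g = np.linalg.norm(g)
        if norm_g > 0:                             # perturbation step: decrease phi
            x = x - (beta0 * gamma**k) * g / norm_g
        x = x + omega * A.T @ (b - A @ x)          # basic step for ||Ax - b||^2
    return x

# e.g. phi(x) = 0.5 * ||x||^2  ->  phi_grad = lambda x: x
```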
We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise regarded as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, SubgraphX explains the model's predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose using Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computations, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs by explicitly and directly identifying subgraphs. Experimental results show that SubgraphX achieves significantly improved explanations while keeping computations at a reasonable level.
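A minimal sketch of a Monte Carlo Shapley estimate of the kind that can score a subgraph's importance; the masking interface `model_score` (the model's output when only a given node set is kept) and the restriction of the "players" to a local context set are our assumptions for illustration, not the exact SubgraphX procedure.

```python
import random

def mc_shapley_subgraph(model_score, subgraph_nodes, context_nodes, n_samples=200):
    """Monte Carlo estimate of the Shapley value of a subgraph treated as one player.

    model_score(kept_nodes) should return the model's prediction score when only
    `kept_nodes` are retained and all other nodes are masked out.
    """
    subgraph = set(subgraph_nodes)
    players = [v for v in context_nodes if v not in subgraph]
    total = 0.0
    for _ in range(n_samples):
        k = random.randint(0, len(players))          # coalition size, uniform
        coalition = set(random.sample(players, k))   # coalition of the other players
        # marginal contribution of the subgraph to this coalition
        total += model_score(coalition | subgraph) - model_score(coalition)
    return total / n_samples
```

Sampling the coalition size uniformly and then a uniform coalition of that size reproduces the Shapley weighting in expectation, so the average of marginal contributions is an unbiased estimate of the subgraph's Shapley value.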
Residual networks (ResNets) have achieved impressive results in pattern recognition and have recently garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
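A small sketch of the kind of diagnostic implied above: given trained networks of several depths $L$ and a per-network summary of the layer-weight magnitudes, fit a scaling exponent $\alpha$ in $\|W\| \approx C\,L^{\alpha}$. How the weight summaries are extracted from training is an assumption here; roughly, $\alpha \approx -1$ corresponds to the scaling assumed in the neural ODE literature, while $\alpha \approx -1/2$ corresponds to a diffusive, SDE-like regime.

```python
import numpy as np

def fit_depth_scaling(depths, weight_norms):
    """Least-squares fit of log ||W|| = alpha * log L + log C across network depths."""
    logL = np.log(np.asarray(depths, dtype=float))
    logW = np.log(np.asarray(weight_norms, dtype=float))
    alpha, logC = np.polyfit(logL, logW, 1)
    return alpha, np.exp(logC)

# Example (hypothetical numbers): depths [8, 16, 32, 64] with RMS layer-weight
# norms [0.30, 0.21, 0.15, 0.105] give alpha close to -0.5.
```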