Graph theory and enumerative combinatorics are two branches of the mathematical sciences that have developed astonishingly over the past one hundred years. It is especially important to point out that graph theory employs combinatorial techniques to solve key problems of characterization, construction, enumeration, and classification for an enormous variety of families of graphs. This paper describes the construction of two classes of bigeodetic blocks using balanced incomplete block designs (BIBDs). Although graph theory and combinatorics are closely related, the converse direction, namely using graph constructions to solve problems in combinatorics, is uncommon but possible. The construction of the second class of bigeodetic blocks described in this paper illustrates how graph theory can shed light on an existence problem in combinatorics, namely the problem of existence for biplanes. A connection between this construction, the Bruck-Ryser-Chowla theorem, and the problem of existence for biplanes is considered.
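For context, the Bruck-Ryser-Chowla theorem gives well-known necessary conditions for the existence of a symmetric $(v,k,\lambda)$-design; a biplane is the case $\lambda = 2$. A standard statement (quoted for orientation, not specific to the construction above): if $v$ is even, then $k-\lambda$ must be a perfect square; if $v$ is odd, then the Diophantine equation $z^2 = (k-\lambda)x^2 + (-1)^{(v-1)/2}\lambda y^2$ must admit a nontrivial integer solution $(x,y,z) \neq (0,0,0)$.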
We propose a local model-checking proof system for a fragment of CTL. The rules of the proof system are motivated by the well-known fixed-point characterisation of CTL based on unfolding of the temporal operators. To guarantee termination of proofs, we tag the sequents of our proof system with the set of states that have already been explored for the respective temporal formula. We define the semantics of tagged sequents, and then state and prove soundness and completeness of the proof system, as well as termination of proof search for finite-state models.
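To illustrate the tagging mechanism on the least-fixed-point operator EF (a minimal sketch under assumed state-space and labelling conventions, not the paper's proof system itself), one can record the states already explored for the formula and cut off the unfolding when a state repeats:

    # Minimal sketch: tag-based local model checking of the CTL formula EF phi.
    # The tag records the states already explored for this formula, so on a
    # finite model every unfolding branch terminates.
    def check_ef(state, phi_holds, successors, tag=frozenset()):
        if state in tag:        # state already unfolded: cut off this branch
            return False
        if phi_holds(state):    # phi holds now, hence EF phi holds
            return True
        tag = tag | {state}     # grow the tag before unfolding to EX EF phi
        return any(check_ef(t, phi_holds, successors, tag)
                   for t in successors(state))

    # Toy model: 0 -> 1 -> 2 with a self-loop on 2; phi holds only in state 2.
    succ = {0: [1], 1: [2], 2: [2]}
    print(check_ef(0, lambda s: s == 2, lambda s: succ[s]))  # True

The same tagging idea handles greatest fixed points dually: for an operator such as AG, revisiting a tagged state succeeds rather than fails.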
We study the complexity of the problem of verifying differential privacy for while-like programs working over boolean values and making probabilistic choices. Programs in this class can be interpreted into finite-state discrete-time Markov chains (DTMCs). We show that the problem of deciding whether a program is differentially private for specific values of the privacy parameters is PSPACE-complete. To show that this problem is in PSPACE, we adapt classical results about computing hitting probabilities for DTMCs. To show PSPACE-hardness, we use a reduction from the problem of checking whether a program almost surely terminates. We also show that the problem of approximating the privacy parameters that a program provides is PSPACE-hard. Moreover, we investigate the complexity of similar problems for several relaxations of differential privacy: Rényi differential privacy, concentrated differential privacy, and truncated concentrated differential privacy. For these notions, we consider gap versions of the problem of deciding whether a program is private, and we show that all of them are PSPACE-complete.
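As a reminder of the classical ingredient behind the PSPACE membership argument, hitting probabilities of a target set in a finite DTMC satisfy a linear system once the states that cannot reach the target are fixed to zero. A minimal numpy sketch on an illustrative three-state chain (in this example every state can reach the target):

    # Minimal sketch: hitting probabilities for a target set in a finite DTMC.
    # h(s) = 1 on the target and h(s) = sum_t P[s, t] * h(t) elsewhere; states
    # that cannot reach the target must first be set to 0 (here all can).
    import numpy as np

    P = np.array([[0.5, 0.3, 0.2],   # illustrative 3-state transition matrix
                  [0.0, 0.4, 0.6],
                  [0.0, 0.0, 1.0]])  # state 2 is absorbing and is the target
    target = [2]

    unknown = [s for s in range(len(P)) if s not in target]
    # Solve (I - P_UU) h = P_UT * 1 for the unknown states U.
    A = np.eye(len(unknown)) - P[np.ix_(unknown, unknown)]
    b = P[np.ix_(unknown, target)].sum(axis=1)
    h = np.linalg.solve(A, b)
    print(dict(zip(unknown, h)))  # {0: 1.0, 1: 1.0}; both states hit state 2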
Almost surely, the difference between the randomness deficiencies of two infinite sequences will be unbounded with respect to repeated iterations of the shift operator.
In this paper we consider a mathematical model which describes the equilibrium of two elastic rods attached to a nonlinear spring. We derive the variational formulation of the model, which takes the form of an elliptic quasivariational inequality for the displacement field. We prove the unique weak solvability of the problem, then state and prove some convergence results, for which we provide the corresponding mechanical interpretation. Next, we turn to the numerical approximation of the problem based on a finite element scheme, using a relaxation method to solve the discrete problems, which we implement on a computer. Using this method, we provide numerical simulations which validate our convergence results.
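For orientation, an elliptic quasivariational inequality for a displacement field $u$ typically takes the following abstract form (a generic template under standard assumptions, not the specific model derived in the paper): find $u \in K(u)$ such that $a(u, v - u) \ge \langle f, v - u \rangle$ for all $v \in K(u)$, where $a(\cdot,\cdot)$ is a coercive form on the underlying space, $f$ gathers the applied forces, and the convex constraint set $K(u)$ may itself depend on the solution; this dependence is what distinguishes a quasivariational inequality from a variational one.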
Spherical polygons used in practice are nice, but the spherical point-in-polygon problem (SPiP) has long eluded solutions based on the winding number (wn). That a punctured sphere is simply connected is to blame. As a workaround, we prove that requiring the boundary of a spherical polygon to never intersect its antipode is sufficient to reduce its SPiP problem to the planar point-in-polygon (PiP) problem, whose state-of-the-art solution uses wn and does not utilize known interior points (KIP). We refer to such spherical polygons as boundary antipode-excluding (BAE) and show that all spherical polygons fully contained within an open hemisphere are BAE. We document two successful reduction methods, one based on rotation and the other on shearing, and address a common concern. Both reduction algorithms, when combined with a wn-PiP algorithm, solve SPiP correctly and efficiently for BAE spherical polygons. The MATLAB code provided demonstrates scenarios that are problematic for previous work.
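For reference, the planar wn-PiP primitive that the reduction targets can be sketched as follows (a generic winding-number routine under assumed orientation conventions, not the MATLAB code accompanying the paper):

    # Generic winding-number point-in-polygon test in the plane (crossing
    # rules); `polygon` is a list of (x, y) vertices in order.
    def is_left(x1, y1, x2, y2, px, py):
        # > 0 iff (px, py) lies left of the directed line (x1, y1) -> (x2, y2)
        return (x2 - x1) * (py - y1) - (px - x1) * (y2 - y1)

    def winding_number(point, polygon):
        px, py = point
        wn = 0
        for i in range(len(polygon)):
            (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % len(polygon)]
            if y1 <= py:
                if y2 > py and is_left(x1, y1, x2, y2, px, py) > 0:
                    wn += 1      # upward crossing with the point to the left
            elif y2 <= py and is_left(x1, y1, x2, y2, px, py) < 0:
                wn -= 1          # downward crossing with the point to the right
        return wn                # nonzero means the point is inside

    square = [(0, 0), (2, 0), (2, 2), (0, 2)]
    print(winding_number((1, 1), square), winding_number((3, 1), square))  # 1 0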
The far-field channel model has historically been used in wireless communications due to its mathematical simplicity, its convenience for algorithm design, and its validity for relatively small array apertures. With the need for high data rates, low latency, and ubiquitous connectivity in the sixth generation (6G) of communication systems, new technology enablers such as extremely large antenna arrays (ELAA), reconfigurable intelligent surfaces (RISs), and distributed multiple-input-multiple-output (D-MIMO) systems will be adopted. These enablers not only aim to improve communication services but also have an impact on localization and sensing (L&S), which are expected to be integrated into future wireless systems. Despite appearing in different scenarios and supporting different frequency bands, these enablers share the so-called near-field (NF) features, which will provide extra geometric information. In this work, starting from a brief description of NF channel features, we highlight the opportunities and challenges for 6G NF L&S.
Current deep learning research is dominated by benchmark evaluation. A method is regarded as favorable if it empirically performs well on the dedicated test set. This mentality is seamlessly reflected in the resurfacing area of continual learning, where consecutively arriving sets of benchmark data are investigated. The core challenge is framed as protecting previously acquired representations from being catastrophically forgotten due to the iterative parameter updates. However, the comparison of individual methods is treated in isolation from real-world application and typically judged by monitoring accumulated test-set performance. The closed-world assumption remains predominant: it is assumed that during deployment a model is guaranteed to encounter data that stems from the same distribution as used for training. This poses a massive challenge, as neural networks are well known to provide overconfident false predictions on unknown instances and to break down in the face of corrupted data. In this work we argue that notable lessons from open set recognition, the identification of statistically deviating data outside of the observed dataset, and from the adjacent field of active learning, where data is incrementally queried such that the expected performance gain is maximized, are frequently overlooked in the deep learning era. Based on these forgotten lessons, we propose a consolidated view to bridge continual learning, active learning, and open set recognition in deep neural networks. Our results show that this not only benefits each individual paradigm but also highlights the natural synergies in a common framework. We empirically demonstrate improvements in alleviating catastrophic forgetting, querying data in active learning, and selecting task orders, while exhibiting robust open-world application where previously proposed methods fail.
The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, due to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments. In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic. In this paper, we provide an introduction to the topic of uncertainty in machine learning as well as an overview of attempts so far at handling uncertainty in general and formalizing this distinction in particular.
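One common way to make the distinction concrete (a standard entropy-based decomposition over an ensemble, shown purely as an illustration rather than as the paper's own formalization): the total predictive uncertainty splits into the expected entropy of the individual predictions (aleatoric) and the residual disagreement between ensemble members (epistemic).

    # Illustrative entropy-based decomposition of predictive uncertainty over
    # an ensemble: total = aleatoric + epistemic (mutual information).
    import numpy as np

    def entropy(p, axis=-1, eps=1e-12):
        return -np.sum(p * np.log(p + eps), axis=axis)

    # probs[m, c]: class probabilities predicted by ensemble member m.
    probs = np.array([[0.90, 0.10],   # members largely agree, so the
                      [0.80, 0.20],   # disagreement (epistemic) term is small
                      [0.85, 0.15]])

    total = entropy(probs.mean(axis=0))        # entropy of the mean prediction
    aleatoric = entropy(probs, axis=1).mean()  # mean entropy of the members
    epistemic = total - aleatoric              # disagreement between members
    print(f"total={total:.3f} aleatoric={aleatoric:.3f} epistemic={epistemic:.3f}")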
Over the last several years, the field of natural language processing has been propelled forward by an explosion in the use of deep learning models. This survey provides a brief introduction to the field and a quick overview of deep learning architectures and methods. It then sifts through the plethora of recent studies and summarizes a large assortment of relevant contributions. Analyzed research areas include several core linguistic processing issues in addition to a number of applications of computational linguistics. A discussion of the current state of the art is then provided along with recommendations for future research in the field.
We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of a single fact. Adopting a straightforward decomposition of the problem into entity detection, entity linking, relation prediction, and evidence combination, we explore simple yet strong baselines. On the popular SimpleQuestions dataset, we find that basic LSTMs and GRUs plus a few heuristics yield accuracies that approach the state of the art, and techniques that do not use neural networks also perform reasonably well. These results show that gains from sophisticated deep learning techniques proposed in the literature are quite modest and that some previous models exhibit unnecessary complexity.
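To make the decomposition concrete, here is a toy sketch of the four-stage pipeline with keyword heuristics standing in for the learned LSTM/GRU components; the miniature knowledge graph and all helper names are illustrative assumptions:

    # Toy sketch of the four-stage decomposition: entity detection, entity
    # linking, relation prediction, evidence combination.  The miniature KG
    # and the keyword heuristics below stand in for learned neural models.
    KG = {("barack_obama", "place_of_birth"): "honolulu",
          ("barack_obama", "spouse"): "michelle_obama"}

    def detect_entity(question):
        # stand-in tagger: longest question span matching a KG entity name
        tokens = question.lower().replace("?", "").split()
        names = {subj.replace("_", " ") for subj, _ in KG}
        spans = [" ".join(tokens[i:j]) for i in range(len(tokens))
                 for j in range(i + 1, len(tokens) + 1)]
        return max((s for s in spans if s in names), key=len, default=None)

    def link_entity(span):
        return span.replace(" ", "_") if span else None  # trivial linker

    def predict_relation(question):
        # stand-in classifier: keyword rules instead of a recurrent network
        if "born" in question:
            return "place_of_birth"
        if "married" in question or "wife" in question:
            return "spouse"
        return None

    def answer(question):
        entity = link_entity(detect_entity(question))
        relation = predict_relation(question)
        return KG.get((entity, relation))  # evidence combination: fact lookup

    print(answer("where was Barack Obama born?"))  # honolulu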