The problem of state estimation for partially observed discrete event systems subject to cyber attacks is considered. An operator observes a plant through a natural projection that hides the occurrence of certain events, with the objective of estimating the current state of the system. The observation is corrupted by an attacker who can tamper with the readings of a set of sensors, inserting fake events or erasing genuine observations, with the aim of altering the operator's state estimate. An automaton, called the attack structure, is defined to describe the set of all possible attacks. In more detail, an unbounded attack structure is obtained by the concurrent composition of two state observers, the attacker observer and the operator observer. The attack structure shows, for each possible corrupted observation, the joint state estimation, i.e., the set of states consistent with the uncorrupted observation and the set of states consistent with the corrupted observation. Such a structure can be used to establish whether an attack function is harmful w.r.t. a misleading relation. Our approach is also extended to the case in which the attacker may insert at most n events between two consecutive observations.
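As an illustration of the observer building block used above, the following minimal Python sketch computes the current-state estimate of a partially observed automaton from an observed string via the standard unobservable-reach construction; the toy plant, event names and encoding are purely illustrative.

# Sketch: current-state estimation of a partially observed automaton.
# The plant is a nondeterministic transition relation; events not listed in
# `observable` are hidden by the natural projection. All names are illustrative.
from itertools import chain

def unobservable_reach(states, delta, observable):
    # Close a set of states under transitions labelled by unobservable events.
    frontier, reach = list(states), set(states)
    while frontier:
        q = frontier.pop()
        for (src, ev), dsts in delta.items():
            if src == q and ev not in observable:
                for d in dsts:
                    if d not in reach:
                        reach.add(d)
                        frontier.append(d)
    return reach

def state_estimate(initial, delta, observable, observation):
    # Return the set of states consistent with an observed event sequence.
    est = unobservable_reach({initial}, delta, observable)
    for ev in observation:  # ev is an observable event
        nxt = set(chain.from_iterable(delta.get((q, ev), ()) for q in est))
        est = unobservable_reach(nxt, delta, observable)
    return est

# Toy plant: states 0..2; events 'a', 'b' observable, 'u' unobservable.
delta = {(0, 'a'): {1}, (1, 'u'): {2}, (2, 'b'): {0}}
print(state_estimate(0, delta, {'a', 'b'}, ['a']))  # -> {1, 2}

The same routine, run once on the uncorrupted observation and once on the corrupted one, yields the two estimates that the attack structure pairs up.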
Cities are changing constantly. All urban systems face different conditions from day to day. Even when averaged regularities can be found, urban systems will be more efficient if they can adapt to changes at the same temporal scales at which these occur. Still, the functionality of urban systems must be robust to changes, either caused by adaptation or by other factors. Technology can assist humans in designing and regulating this adaptation and robustness. To achieve this, we propose a description of cities as cybernetic systems. We identify three main components: information, algorithms, and agents, which we illustrate with current and future examples. The implications of cybernetic cities are manifold, with direct impacts on mobility, sustainability, resilience, governance, and society. Ultimately, however, the potential of a cybernetic perspective on cities will depend not so much on technology as on how we use it.
In granular computing, fuzzy sets can be approximated by granularly representable sets that are as close as possible to the original fuzzy set w.r.t. a given closeness measure. Such sets are called granular approximations. In this article, we introduce the concepts of disjoint and adjacent granules and examine how the new definitions affect granular approximations. First, we show that the new concepts are important for binary classification problems, since they help to keep decision regions separated (disjoint granules) while covering as much of the attribute space as possible (adjacent granules). Next, we consider granular approximations for multi-class classification problems, leading to the definition of a multi-class granular approximation. Finally, we show how to efficiently calculate multi-class granular approximations for {\L}ukasiewicz fuzzy connectives. We also provide graphical illustrations to aid understanding of the introduced concepts.
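For reference, the {\L}ukasiewicz connectives referred to above are the standard t-norm and its residual implication on $[0,1]$ (the article's granule definitions and closeness measure are not reproduced here):
\[ T_L(a,b) = \max(0,\, a+b-1), \qquad I_L(a,b) = \min(1,\, 1-a+b), \qquad a,b \in [0,1]. \]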
Congestion, that is, operational delays caused by a vicious circle of passenger crowding and train queuing, is an escalating problem for metro systems, with negative consequences ranging from passenger discomfort to eventual mode shifts. Congestion arises from large volumes of passenger boardings and alightings at bottleneck stations, which may increase stopping times at stations and cause trains to queue upstream, further reducing line throughput and leading to an even greater accumulation of passengers at stations. Alleviating congestion requires control strategies such as regulating the inflow of passengers entering bottleneck stations. The availability of large-scale smartcard and train movement data from day-to-day operations facilitates the development of models that can inform such strategies in a data-driven way. In this paper, we propose to model station-level passenger congestion via empirical relationships between passenger boarding-alightings and train flow, henceforth fundamental diagrams (FDs). We emphasise that estimating FDs from station-level data is empirically challenging due to confounding biases arising from the interdependence of operations at different stations, which obscures the true sources of congestion in the network. We thus adopt a causal statistical modelling approach to produce FDs that are robust to confounding and as such suitable to properly inform control strategies. The closest antecedent to the proposed model is the FD for road traffic networks, which informs traffic management strategies, for instance, by locating the optimum operating point. Our analysis of data from the Mass Transit Railway, Hong Kong, indicates the existence of concave FDs at identified bottleneck stations, and an associated critical level of boarding-alightings above which congestion sets in unless there is an intervention.
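As a purely illustrative example, and not the functional form estimated in the paper, a concave FD relating train flow $q$ to boarding-alightings $b$ could take the quadratic shape
\[ q(b) = \alpha b - \beta b^{2}, \qquad \alpha,\beta > 0, \qquad b^{*} = \frac{\alpha}{2\beta}, \]
so that throughput declines once $b$ exceeds the critical level $b^{*}$, which is where an inflow-regulating intervention would be triggered.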
Multiple-input multiple-output (MIMO) systems greatly increase the overall throughput of wireless systems, since they are capable of transmitting multiple streams over the same time-frequency resources. However, this gain requires an appropriate precoder design and a power allocation technique. In general, precoders and power allocation schemes are designed assuming perfect channel state information (CSI). Nonetheless, this is an optimistic assumption, since real systems only possess partial or imperfect CSI at the transmitter (CSIT). Imperfect CSIT gives rise to residual inter-user interference, which is detrimental to wireless systems. In this paper, two adaptive power allocation algorithms are proposed that are more robust against CSIT imperfections than conventional techniques. Both techniques employ the mean square error as the objective function. Simulation results show that the proposed techniques achieve higher sum-rate performance than conventional approaches.
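A generic form of the objective described above (not necessarily the exact formulation of the proposed algorithms) allocates per-stream powers $p_k$ to minimise the sum mean square error subject to a total power budget $P_T$:
\[ \min_{p_1,\dots,p_K \ge 0} \; \sum_{k=1}^{K} \mathbb{E}\!\left[\,\lvert s_k - \hat{s}_k(p_1,\dots,p_K)\rvert^{2}\right] \quad \text{s.t.} \quad \sum_{k=1}^{K} p_k \le P_T, \]
where $s_k$ is the symbol of stream $k$ and $\hat{s}_k$ its estimate at the receiver; under imperfect CSIT the expectation is also taken over the channel estimation error.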
In the Strip Packing problem (SP), we are given a vertical half-strip $[0,W]\times[0,\infty)$ and a set of $n$ axis-aligned rectangles of width at most $W$. The goal is to find a non-overlapping packing of all rectangles into the strip such that the height of the packing is minimized. A well-studied and frequently used practical constraint is to allow only packings that are guillotine separable, i.e., every rectangle in the packing can be obtained by recursively applying a sequence of edge-to-edge axis-parallel cuts (guillotine cuts) that do not intersect any item of the solution. In this paper, we study approximation algorithms for the Guillotine Strip Packing problem (GSP), i.e., the Strip Packing problem in which the packing is additionally required to be guillotine separable. This problem generalizes the classical Bin Packing problem as well as makespan minimization on identical machines, and is thus already strongly NP-hard. Moreover, due to a reduction from the Partition problem, it is NP-hard to obtain a polynomial-time $(3/2-\varepsilon)$-approximation algorithm for GSP for any $\varepsilon>0$, exactly as for Strip Packing. We provide a matching polynomial-time $(3/2+\varepsilon)$-approximation algorithm for GSP. Furthermore, we present a pseudo-polynomial-time $(1+\varepsilon)$-approximation algorithm for GSP. This is surprising, as it is NP-hard to obtain a $(5/4-\varepsilon)$-approximation algorithm for (general) Strip Packing in pseudo-polynomial time. Thus, our results essentially settle the approximability of GSP in both the polynomial and the pseudo-polynomial settings.
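For context, the lower bound follows from the classical Partition reduction, which carries over verbatim to the guillotine setting: given Partition items $a_1,\dots,a_n$, build rectangles of height $1$ and width $a_i$ and set
\[ W = \tfrac{1}{2}\sum_{i=1}^{n} a_i. \]
A perfect partition yields a two-row packing of height $2$, which is guillotine separable via one horizontal cut followed by vertical cuts, while any packing of height below $3$ induces a perfect partition; hence a polynomial-time $(3/2-\varepsilon)$-approximation would decide Partition.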
We develop a novel a posteriori error estimator for the $L^2$ error committed by the finite element discretization of the solution of the fractional Laplacian. Our a posteriori error estimator takes advantage of a semi-discretization scheme based on a rational approximation, which allows the fractional problem to be reformulated as a family of non-fractional parametric problems. The estimator is obtained by applying the implicit Bank-Weiser error estimation strategy to each non-fractional parametric problem and reconstructing the fractional error through the same rational approximation used to compute the solution of the original fractional problem. We provide several numerical examples in both two and three dimensions, demonstrating the effectivity of our estimator for varying fractional powers and its ability to drive an adaptive mesh refinement strategy.
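To fix ideas, one common form of such a rational approximation (the precise scheme and coefficients used in the paper may differ) writes the solution of $(-\Delta)^{s}u = f$, $0<s<1$, as a weighted sum of solutions of standard reaction-diffusion problems,
\[ u \;\approx\; \sum_{i=1}^{N} c_i\, u_i, \qquad (\mathrm{Id} - d_i \Delta)\, u_i = f, \]
so that each $u_i$ can be approximated by a standard finite element solve and, as described above, equipped with its own Bank-Weiser estimator before the contributions are recombined.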
Speaker verification systems have been widely used in smartphones and Internet of Things devices to identify legitimate users. Recent work has shown that adversarial attacks, such as FAKEBOB, can be highly effective against such systems. The goal of this paper is to design a detector that can distinguish an original audio signal from one contaminated by adversarial attacks. Specifically, our detector, called MEH-FEST, calculates the minimum energy in the high frequencies of the short-time Fourier transform of an audio signal and uses it as a detection metric. Through both analysis and experiments, we show that the proposed detector is easy to implement, fast at processing an input audio signal, and effective in determining whether an audio signal has been corrupted by FAKEBOB attacks. The experimental results indicate that the detector is extremely effective, with near-zero false positive and false negative rates for detecting FAKEBOB attacks against Gaussian mixture model (GMM) and i-vector speaker verification systems. Moreover, adaptive adversarial attacks against the proposed detector and their countermeasures are discussed and studied, illustrating the ongoing game between attackers and defenders.
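A minimal sketch of the detection metric described above, with an illustrative cutoff frequency, frame length and threshold that are not the paper's values:

# Sketch: minimum high-frequency frame energy from an STFT as a detection metric.
# `high_freq_cutoff_hz` and `threshold` are illustrative, not the paper's values.
import numpy as np
from scipy.signal import stft

def min_high_freq_energy(audio, sample_rate, high_freq_cutoff_hz=4000, nperseg=512):
    freqs, _, Zxx = stft(audio, fs=sample_rate, nperseg=nperseg)
    high = freqs >= high_freq_cutoff_hz                        # high-frequency bins
    frame_energy = np.sum(np.abs(Zxx[high, :]) ** 2, axis=0)   # energy per frame
    return float(frame_energy.min())                           # minimum over frames

def is_adversarial(audio, sample_rate, threshold=1e-6):
    # Assumption in this sketch: adversarial perturbations raise the energy floor
    # in high frequencies, so an unusually large minimum is flagged as suspicious.
    return min_high_freq_energy(audio, sample_rate) > threshold

In practice the threshold would be calibrated on clean audio so that the false positive rate stays near zero.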
The motivation for this paper comes from the ongoing SARS-CoV-2 pandemic. Its goal is to present a previously neglected approach to non-adaptive group testing and to describe it in terms of residuated pairs on partially ordered sets. This approach has the advantage of naturally yielding an efficient decision scheme (decoder) for any given testing scheme, and this decoder allows a large number of infection patterns to be detected. In addition, we devise a construction of good group testing schemes based on incidence matrices of finite partial linear spaces. The key idea is to exploit the structure of these matrices and make them available as test matrices for group testing. These matrices can generally be tailored to different estimated disease prevalence levels. As an example, we discuss group testing schemes based on generalized quadrangles. Here we state our results only for the error-free case; an extension to a noisy scenario is desirable and will be treated in a subsequent account.
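For orientation, in the error-free non-adaptive model a test matrix $A \in \{0,1\}^{m\times n}$ and an unknown infection pattern $x \in \{0,1\}^{n}$ produce the outcome vector $y$ with $y_i = \bigvee_{j} (A_{ij} \wedge x_j)$. A natural decoder of the residuated kind, not necessarily the exact scheme of the paper, declares an item healthy as soon as it appears in a negative test:
\[ \hat{x}_j \;=\; \bigwedge_{i\,:\,A_{ij}=1} y_i . \]
This $\hat{x}$ is the largest pattern consistent with the outcomes; whether it equals $x$ depends on the structure of $A$ and on the prevalence, which is where incidence matrices of partial linear spaces come into play.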
We introduce Monte-Carlo Attention (MCA), a randomized approximation method for reducing the computational cost of self-attention mechanisms in Transformer architectures. MCA exploits the fact that the importance of each token in an input sequence varies with its attention scores; thus, some degree of error can be tolerated when encoding tokens with low attention. Using approximate matrix multiplication, MCA applies different error bounds when encoding input tokens, so that those with low attention scores are computed with relaxed precision, whereas the errors of salient elements are minimized. MCA can operate alongside other attention optimization schemes and does not require model modification. We analyse the theoretical error bounds and demonstrate that MCA reduces attention complexity (in FLOPs) for various Transformer models by up to 11$\times$ on GLUE benchmarks without compromising model accuracy.
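A toy numpy sketch of the underlying idea, a simplification rather than the paper's algorithm: the attention-weighted sum is computed exactly for salient query tokens and by Monte-Carlo row sampling of $V$ for the rest.

# Toy sketch of Monte-Carlo attention: exact products for salient queries,
# sampled approximate matrix multiplication for low-attention ones.
# The salience proxy, sample count and quantile cutoff are illustrative choices.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mca_attention(Q, K, V, num_samples=8, salience_quantile=0.5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, d = Q.shape
    P = softmax(Q @ K.T / np.sqrt(d), axis=-1)   # attention weights, n x n
    salience = P.max(axis=-1)                    # illustrative importance proxy
    cutoff = np.quantile(salience, salience_quantile)
    out = np.empty((n, V.shape[1]))
    for i in range(n):
        p = P[i]
        if salience[i] >= cutoff:                # salient token: exact product
            out[i] = p @ V
        else:                                    # low attention: sample rows of V
            probs = p / p.sum()
            idx = rng.choice(n, size=num_samples, p=probs)
            out[i] = np.mean((p[idx] / probs[idx])[:, None] * V[idx], axis=0)
    return out

The sampled estimate is unbiased for the exact weighted sum, and its variance shrinks as more samples are drawn, which is how per-token error bounds can be traded against FLOPs.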
Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is poorly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear program formulation is required. Only with this novel formulation can convex combinations of valid mappings be computed, as the formulation must account for the structure of the request graphs. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notions of extraction orders and extraction width, and show that our algorithms have runtime exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show that (i) computing extraction orders of minimal width is NP-hard and (ii) computing decomposable LP solutions is in general NP-hard, even when request graphs are restricted to planar ones.
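In broad strokes, and leaving aside the details of the novel LP and of extraction orders, the randomized rounding template is: solve the LP relaxation, decompose the fractional solution of each request $r$ into a convex combination of valid mappings,
\[ x^{\mathrm{LP}}_{r} \;=\; \sum_{k} \lambda_{r,k}\, m_{r,k}, \qquad \lambda_{r,k} \ge 0, \quad \sum_{k} \lambda_{r,k} \le 1, \]
and select mapping $m_{r,k}$ with probability $\lambda_{r,k}$ (rejecting $r$ otherwise), so that the expected resource usage and profit match the LP values; the decomposition step is precisely what the novel formulation and the extraction-order machinery enable.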