Stochastic models are widely used to verify whether systems satisfy their reliability, performance and other nonfunctional requirements. However, the validity of the verification depends on how accurately the parameters of these models can be estimated using data from component unit testing, monitoring, system logs, etc. When insufficient data are available, the models are affected by epistemic parametric uncertainty, the verification results are inaccurate, and any engineering decisions based on them may be invalid. To address these problems, we introduce VERACITY, a tool-supported iterative approach for the efficient and accurate verification of nonfunctional requirements under epistemic parameter uncertainty. VERACITY integrates confidence-interval quantitative verification with a new adaptive uncertainty reduction heuristic that collects additional data about the parameters of the verified model by unit-testing specific system components over a series of verification iterations. VERACITY supports the quantitative verification of discrete-time Markov chains, deciding which components are to be tested in each iteration based on factors that include the sensitivity of the model to variations in the parameters of different components, and the overheads (e.g., time or cost) of unit-testing each of these components. We show the effectiveness and efficiency of VERACITY by using it for the verification of the nonfunctional requirements of a tele-assistance service-based system and an online shopping web application.
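As a rough illustration of the confidence-interval ingredient (standard binomial statistics only; VERACITY's exact interval construction and verification procedure are those defined in the paper), a DTMC transition probability p observed k times in n unit-test executions of the corresponding component gets the point estimate and approximate (1 - \alpha) confidence interval

\[
\hat{p} = \frac{k}{n}, \qquad \hat{p} \;\pm\; z_{1-\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}},
\]

and, roughly speaking, the requirement is then checked against the worst-case value of the verified property over all parameter values inside these intervals, so that additional unit tests for a component narrow its intervals and tighten the verification verdict.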
Two-player zero-sum games of infinite duration and their quantitative versions are used in verification to model the interaction between a controller (Eve) and its environment (Adam). The question usually addressed is that of the existence (and computability) of a strategy for Eve that can maximize her payoff against any strategy of Adam. In this work, we are interested in strategies of Eve that minimize her regret, i.e. strategies that minimize the difference between her actual payoff and the payoff she could have achieved if she had known the strategy of Adam in advance. We give algorithms to compute the strategies of Eve that ensure minimal regret against an adversary whose choice of strategy is (i) unrestricted, (ii) limited to positional strategies, or (iii) limited to word strategies. We also establish relations between the latter version and other problems studied in the literature.
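To make the regret objective concrete (the payoff function val depends on the quantitative game under consideration), the regret of a strategy \sigma of Eve and the minimal achievable regret are

\[
\mathrm{Reg}(\sigma) = \sup_{\tau}\Big(\sup_{\sigma'} \mathrm{val}(\sigma',\tau) - \mathrm{val}(\sigma,\tau)\Big), \qquad \mathrm{Reg} = \inf_{\sigma} \mathrm{Reg}(\sigma),
\]

where \sigma, \sigma' range over the strategies of Eve and \tau over the admissible strategies of Adam, i.e. all strategies, positional strategies only, or word strategies only in variants (i)-(iii), respectively.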
Finding the similarities and differences between groups of datasets is a fundamental analysis task. For high-dimensional data, dimensionality reduction (DR) methods are often used to find the characteristics of each group. However, existing DR methods provide limited capability and flexibility for such comparative analysis, as each method is designed for only a narrow analysis target, such as identifying the factors that most differentiate groups. This paper presents an interactive DR framework in which we integrate our new DR method, called ULCA (unified linear comparative analysis), with an interactive visual interface. ULCA unifies two DR schemes, discriminant analysis and contrastive learning, to support various comparative analysis tasks. To provide flexibility for comparative analysis, we develop an optimization algorithm that enables analysts to interactively refine ULCA results. Additionally, the interactive visualization interface facilitates the interpretation and refinement of the ULCA results. We evaluate ULCA and the optimization algorithm to show their efficiency, and present multiple case studies using real-world datasets to demonstrate the usefulness of this framework.
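Schematically, and leaving ULCA's exact parameterization to the paper, linear comparative DR methods of this kind look for a projection matrix W maximizing a ratio of weighted scatter matrices,

\[
\max_{W} \; \frac{\operatorname{tr}\big(W^{\top} A W\big)}{\operatorname{tr}\big(W^{\top} B W\big)},
\]

where A and B are weighted combinations of within-group, between-group, and background covariance matrices; one choice of weights yields a discriminant-analysis-style objective, another a contrastive-learning-style objective, and moving between weight settings corresponds to moving between analysis targets.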
District heating is a network of pipes through which heat is delivered from a centralised source. It is expected to play an important role in the decarbonisation of the energy sector in the coming years. In district heating, heat is traditionally generated from fossil fuels, often with combined heat and power (CHP) units. However, waste heat is increasingly being used as a low-carbon alternative, either directly or, for low-temperature sources, via a heat pump. The design of district heating often involves competing objectives: providing inexpensive energy and meeting low-carbon targets. In addition, the planning of district heating schemes is subject to multiple sources of uncertainty, such as variability in heat demand and energy prices. This paper proposes a decision support tool to analyse and compare system designs for district heating under uncertainty using stochastic ordering (dominance). In contrast to traditional uncertainty metrics, which provide statistical summaries and impose a total ordering, stochastic ordering is a partial ordering that operates on full probability distributions. In our analysis, we apply orderings in the mean and in dispersion to the waste heat recovery problem in Brunswick, Germany.
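For reference, two standard stochastic orders of the kind used here (the paper's exact definitions may differ in detail): for random variables X and Y with distribution functions F_X and F_Y,

\[
X \preceq_{\mathrm{st}} Y \iff F_X(t) \ge F_Y(t) \ \text{for all } t, \qquad
X \preceq_{\mathrm{disp}} Y \iff F_X^{-1}(\beta) - F_X^{-1}(\alpha) \le F_Y^{-1}(\beta) - F_Y^{-1}(\alpha) \ \text{for all } 0 < \alpha \le \beta < 1,
\]

where the first (the usual stochastic order) compares distributions in location and the second (the dispersive order) compares their spread; designs whose distributions satisfy neither relation simply remain incomparable, which is what makes the ordering partial.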
Over the past few years, various word-level textual attack approaches have been proposed to reveal the vulnerability of deep neural networks used in natural language processing. Typically, these approaches involve an important optimization step that determines which substitute to use for each word in the original input. However, current research on this step is still rather limited, from the perspectives of both problem-understanding and problem-solving. In this paper, we address these issues by uncovering the theoretical properties of the problem and proposing an efficient local search algorithm (LS) to solve it. We establish the first provable approximation guarantee on solving the problem in general cases. Extensive experiments involving 5 NLP tasks, 8 datasets and 26 NLP models show that LS can largely reduce the number of queries needed to achieve high attack success rates, usually by an order of magnitude. Further experiments show that the adversarial examples crafted by LS usually have higher quality, exhibit better transferability, and bring greater robustness improvements to victim models through adversarial training.
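As a rough sketch of the kind of search LS performs (illustrative only: score_fn, candidates, and the first-improvement acceptance rule below are assumptions rather than the paper's exact algorithm), a greedy local search over per-word substitutions can be written as:

import random

def local_search_attack(words, candidates, score_fn, max_queries=2000):
    # words: tokens of the original input
    # candidates[i]: allowed substitutes for words[i], including words[i] itself
    # score_fn: black-box attack objective; each call costs one model query
    current = list(words)
    best_score = score_fn(current)
    queries = 1
    improved = True
    while improved and queries < max_queries:
        improved = False
        for i in random.sample(range(len(words)), len(words)):  # visit positions in random order
            for sub in candidates[i]:
                if sub == current[i]:
                    continue
                trial = current[:i] + [sub] + current[i + 1:]
                score = score_fn(trial)
                queries += 1
                if score > best_score:  # accept the first improving single-word move
                    current, best_score = trial, score
                    improved = True
                    break
                if queries >= max_queries:
                    return current, best_score
    return current, best_score  # local optimum with respect to single-word changes

The loop stops either when the query budget is exhausted or at a local optimum, which is why the number of model queries stays small.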
The goal of many scientific experiments, including A/B testing, is to estimate the average treatment effect (ATE), which is defined as the difference between the expected outcomes of two or more treatments. In this paper, we consider a situation where an experimenter can assign a treatment to research subjects sequentially. In adaptive experimental design, the experimenter is allowed to change the probability of assigning a treatment based on past observations, in order to estimate the ATE efficiently. However, with this approach, it is difficult to apply a standard statistical method to construct an estimator because the observations are not independent and identically distributed. We thus propose an algorithm for efficient experiments with estimators constructed from dependent samples. We also introduce a sequential testing framework using the proposed estimator. To justify our proposed approach, we provide finite- and infinite-sample analyses. Finally, we experimentally show that the proposed algorithm exhibits favorable performance.
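For two treatments with potential outcomes Y(1) and Y(0), the ATE is the standard quantity

\[
\mathrm{ATE} = \mathbb{E}\big[Y(1) - Y(0)\big],
\]

and one classical estimator that remains well defined under sequentially adapted assignment probabilities \pi_t (an inverse-propensity illustration; the paper's estimator and its efficiency analysis may differ) is

\[
\widehat{\mathrm{ATE}}_T = \frac{1}{T}\sum_{t=1}^{T}\left(\frac{A_t Y_t}{\pi_t} - \frac{(1-A_t)\,Y_t}{1-\pi_t}\right),
\]

where A_t \in \{0,1\} is the treatment assigned to subject t, Y_t the observed outcome, and \pi_t the assignment probability chosen from past observations.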
One of the most common approaches to the analysis of dynamic networks is time-window aggregation. The resulting representation is a sequence of static networks, i.e., the snapshot graph. Despite this representation being widely used in the literature, a general framework to evaluate the soundness of snapshot graphs is still missing. In this article, we propose two scores that quantify conflicting objectives: Stability measures how stable the sequence of snapshots is, while Fidelity measures the loss of information compared to the original data. We also develop a targeted link-filtering technique to simplify the original temporal network. Our framework is tested on datasets of proximity and face-to-face interactions.
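The article defines the two scores precisely; purely to illustrate the flavour of a stability-type score (the Jaccard-based definition below is an assumption, not the paper's), one can compare the edge sets of consecutive snapshots:

def jaccard(a, b):
    # Jaccard similarity of two edge sets
    return len(a & b) / len(a | b) if (a | b) else 1.0

def stability(snapshots):
    # snapshots: list of sets of (u, v) edges, one per time window
    sims = [jaccard(g1, g2) for g1, g2 in zip(snapshots, snapshots[1:])]
    return sum(sims) / len(sims) if sims else 1.0

A fidelity-type score instead compares each snapshot against the raw interaction data it aggregates, so the two scores pull in opposite directions as the window length changes.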
As a mathematical model of high-speed flow and shock wave propagation in a complex multimaterial setting, Lagrangian hydrodynamics is characterized by moving meshes, advection-dominated solutions, and moving shock fronts with sharp gradients. These challenges hinder existing projection-based model reduction schemes from being practical. We develop several variations of projection-based reduced order model techniques for Lagrangian hydrodynamics by introducing three different reduced bases for the position, velocity, and energy fields. A time-windowing approach is also developed to address the challenge posed by the advection-dominated solutions. Because Lagrangian hydrodynamics is formulated as a nonlinear problem, a proper hyper-reduction technique is required; we therefore apply the over-sampling DEIM and SNS approaches to reduce the complexity due to the nonlinear terms. Finally, we present both a posteriori and a priori error bounds associated with our reduced order model. We compare the performance of the spatial and time-windowing reduced order modeling approaches in terms of accuracy and speed-up with respect to the corresponding full order model for several numerical examples, namely the Sedov blast, Gresho vortices, Taylor-Green vortices, and triple-point problems.
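Schematically, the three reduced bases yield approximations of the form (reference states, basis construction, and the projected equations are as defined in the paper)

\[
\mathbf{x}(t) \approx \mathbf{x}_{\mathrm{ref}} + \Phi_x\,\hat{\mathbf{x}}(t), \qquad
\mathbf{v}(t) \approx \Phi_v\,\hat{\mathbf{v}}(t), \qquad
\mathbf{e}(t) \approx \Phi_e\,\hat{\mathbf{e}}(t),
\]

with low-dimensional coordinates \hat{\mathbf{x}}, \hat{\mathbf{v}}, \hat{\mathbf{e}} and (possibly window-specific) bases \Phi_x, \Phi_v, \Phi_e; the semi-discrete conservation laws are projected onto these bases, and over-sampling DEIM or SNS is used so that the nonlinear force terms can be evaluated at a cost that does not scale with the full mesh.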
Depending on energy sources and demand, the carbon intensity of the public power grid fluctuates over time. Exploiting this variability is an important factor in reducing the emissions caused by data centers. However, regional differences in the availability of low-carbon energy sources make it hard to provide general best practices for when to consume electricity. Moreover, existing research in this domain focuses mostly on carbon-aware workload migration across geo-distributed data centers, or addresses demand response purely from the perspective of power grid stability and costs. In this paper, we examine the potential impact of shifting computational workloads towards times when the energy supply is expected to be less carbon-intensive. To this end, we identify characteristics of delay-tolerant workloads and analyze the potential for temporal workload shifting in Germany, Great Britain, France, and California over the year 2020. Furthermore, we experimentally evaluate two workload shifting scenarios in a simulation to investigate the influence of time constraints, scheduling strategies, and the accuracy of carbon intensity forecasts. To accelerate research in the domain of carbon-aware computing and to support the evaluation of novel scheduling algorithms, our simulation framework and datasets are publicly available.
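As a minimal sketch of temporal shifting (illustrative only; the simulation framework, its scheduling strategies, and the forecast handling are described in the paper), a delay-tolerant job with a deadline can simply be placed in the feasible window with the lowest average forecast carbon intensity:

def best_start(forecast, duration, deadline):
    # forecast: expected carbon intensity (e.g. gCO2/kWh) per time slot
    # duration: job length in slots; deadline: latest slot by which the job must finish
    last_start = min(deadline - duration, len(forecast) - duration)
    avg = {s: sum(forecast[s:s + duration]) / duration for s in range(last_start + 1)}
    return min(avg, key=avg.get)  # start of the lowest-carbon feasible window

Tighter deadlines shrink the set of feasible windows, which is the trade-off between time constraints and emission savings examined in the evaluated scenarios.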
This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded-norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of the properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking to find the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst-case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
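Schematically (with h_k the layer-k transformation, \mathcal{X}_0 the input specification set, \mathcal{X}_k simple bounding sets for the intermediate activations, and c encoding the output specification; the precise decomposition follows the paper), the verification problem and its Lagrangian relaxation read

\[
p^{*} = \max_{x_0 \in \mathcal{X}_0,\; x_k = h_k(x_{k-1})} c^{\top} x_K
\;\le\;
g(\lambda) = \max_{x_0 \in \mathcal{X}_0,\; x_k \in \mathcal{X}_k} \Big( c^{\top} x_K + \sum_{k=1}^{K} \lambda_k^{\top}\big(x_k - h_k(x_{k-1})\big) \Big)
\quad \text{for every } \lambda,
\]

so any choice of multipliers \lambda gives a valid upper bound on the worst-case violation p^{*}, and g(\lambda) \le 0 certifies the specification; optimizing over \lambda can be stopped at any point with the current bound still valid, which is what makes the approach anytime.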
We consider the task of learning the parameters of a single component of a mixture model, for the case when we are given side information about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than that of solving the overall problem, where one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and lower computational cost than existing moment-based mixture model algorithms (e.g. tensor methods). We also illustrate several natural ways one can obtain such side information, for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.