In this paper we extend the equal division and the equal surplus division values for transferable utility cooperative games to the more general setup of transferable utility cooperative games with level structures. For the equal surplus division value we propose three possible extensions, one of which has already been described in the literature. We provide axiomatic characterizations of the values considered, apply them to a particular cost sharing problem, and compare them within that application.
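For reference, the two values being extended are standard for a TU game $(N,v)$ (the level-structure extensions themselves are given in the paper): the equal division value splits the worth of the grand coalition equally, while the equal surplus division value first assigns each player her stand-alone worth and then splits the remaining surplus equally,

$$\mathrm{ED}_i(N,v) = \frac{v(N)}{|N|}, \qquad \mathrm{ESD}_i(N,v) = v(\{i\}) + \frac{v(N) - \sum_{j \in N} v(\{j\})}{|N|}, \qquad i \in N.$$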
We show that the known list-decoding algorithms for univariate multiplicity and folded Reed-Solomon (FRS) codes can be made to run in $\tilde{O}(n)$ time. Univariate multiplicity codes and FRS codes are natural variants of Reed-Solomon codes that were discovered and studied for their applications to list decoding. It is known that for every $\epsilon>0$ and rate $r \in (0,1)$, there exist explicit families of these codes that have rate $r$ and can be list decoded from a $(1-r-\epsilon)$ fraction of errors with constant list size in polynomial time (Guruswami & Wang (IEEE Trans. Inf. Theory 2013) and Kopparty, Ron-Zewi, Saraf & Wootters (SIAM J. Comput. 2023)). In this work, we present randomized algorithms that perform the above list-decoding tasks in $\tilde{O}(n)$ time, where $n$ is the block length of the code. Our algorithms have two main components. The first component builds upon the lattice-based approach of Alekhnovich (IEEE Trans. Inf. Theory 2005), who designed an $\tilde{O}(n)$-time list-decoding algorithm for Reed-Solomon codes approaching the Johnson radius. As part of the second component, we design $\tilde{O}(n)$ time algorithms for two natural algebraic problems: given an $(m+2)$-variate polynomial $Q(x,y_0,\dots,y_m) = \tilde{Q}(x) + \sum_{i=0}^m Q_i(x)\cdot y_i$, the first algorithm solves order-$m$ linear differential equations of the form $Q\left(x, f(x), \frac{df}{dx}, \dots,\frac{d^m f}{dx^m}\right) \equiv 0$, while the second solves functional equations of the form $Q\left(x, f(x), f(\gamma x), \dots,f(\gamma^m x)\right) \equiv 0$, where $m$ is an arbitrary constant and $\gamma$ is a field element of sufficiently high order. These algorithms can be viewed as generalizations of the classical $\tilde{O}(n)$ time algorithms of Sieveking (Computing 1972) and Kung (Numer. Math. 1974) for computing the modular inverse of a power series, and might be of independent interest.
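To illustrate the classical baseline these algebraic solvers generalize, here is a minimal sketch of the Sieveking–Kung Newton iteration for inverting a power series modulo $x^n$; the naive multiplication routine is only to keep the example self-contained, and an $\tilde{O}(n)$-time version would use FFT-based multiplication over the relevant field.

```python
# Minimal sketch (illustration only) of the Sieveking--Kung Newton iteration
# for inverting a power series f modulo x^n, assuming f[0] is invertible.
# With fast polynomial multiplication each step costs O~(k), giving O~(n)
# overall; the quadratic-time multiplication below is for readability.
from fractions import Fraction

def mul_mod(a, b, n):
    """Multiply coefficient lists a and b modulo x^n (naive, O(n^2))."""
    out = [0] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[: n - i]):
            out[i + j] += ai * bj
    return out

def series_inverse(f, n):
    """Return g with f * g = 1 (mod x^n)."""
    g = [Fraction(1) / f[0]]                 # inverse modulo x^1
    k = 1
    while k < n:
        k = min(2 * k, n)
        # Newton step: g <- g * (2 - f*g) (mod x^k), doubling the precision.
        e = mul_mod(f[:k], g, k)             # e = f*g = 1 + (higher-order error)
        correction = [-c for c in e]
        correction[0] += 2                   # correction = 2 - f*g
        g = mul_mod(g, correction, k)
    return g

f = [Fraction(1), Fraction(1)]               # f(x) = 1 + x
print(series_inverse(f, 8))                  # 1 - x + x^2 - x^3 + ... mod x^8
```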
We extend the classical Total Store Order (TSO) semantics to turn-based 2-player safety games. During her turn, a player selects any of the communicating processes and performs its next transition. We consider different formulations of the safety game problem depending on whether one player or both of them transfer messages from the process buffers to the shared memory. We give the complete decidability picture for all the possible alternatives.
In this paper, without requiring any constraint qualifications, we establish tight error bounds for the log-determinant cone, which is the closure of the hypograph of the perspective function of the log-determinant function. This error bound is obtained using the recently developed framework based on one-step facial residual functions.
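For concreteness, one common parametrization of this cone (the paper's exact formulation may differ) is

$$\mathcal{K}_{\log\det} \;=\; \operatorname{cl}\Big\{ (t, s, X) \in \mathbb{R} \times \mathbb{R}_{++} \times \mathbb{S}^d_{++} \;:\; t \le s \log\det(X/s) \Big\},$$

i.e. the closure of the hypograph of the perspective $(s,X) \mapsto s\log\det(X/s)$ of the log-determinant function.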
We propose a game-based formulation for learning dimensionality-reducing representations of feature vectors, when only prior knowledge of future prediction tasks is available. In this game, the first player chooses a representation, and then the second player adversarially chooses a prediction task from a given class, representing the prior knowledge. The first player aims to minimize, and the second player to maximize, the regret: the minimal prediction loss using the representation, compared to the same loss using the original features. For the canonical setting in which the representation, the response to predict, and the predictors are all linear functions, and under the mean squared error loss function, we derive the theoretically optimal representation in pure strategies, which shows the effectiveness of the prior knowledge, and the optimal regret in mixed strategies, which shows the usefulness of randomizing the representation. For general representations and loss functions, we propose an efficient algorithm to optimize a randomized representation. The algorithm requires only the gradients of the loss function, and is based on incrementally adding a representation rule to a mixture of such rules.
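As a purely illustrative example of the quantity being optimized (not the paper's algorithm), the sketch below computes the regret of a fixed linear representation $z = Ax$ against a single linear prediction task under the mean squared error loss; the names `A`, `w`, and `Sigma` are assumptions made for this example.

```python
# Illustrative sketch: regret of a fixed linear representation z = A x for
# predicting a linear response y = w^T x + noise under squared error.
import numpy as np

def mmse(cov_zz, cov_zy, var_y):
    """Minimal MSE of the best linear predictor of y from z."""
    return float(var_y - cov_zy @ np.linalg.solve(cov_zz, cov_zy))

def linear_regret(A, w, Sigma, noise_var=1.0):
    """Regret = MSE using the representation A x minus MSE using x itself."""
    var_y = float(w @ Sigma @ w) + noise_var
    mse_full = mmse(Sigma, Sigma @ w, var_y)              # predict from x
    mse_repr = mmse(A @ Sigma @ A.T, A @ Sigma @ w, var_y)  # predict from A x
    return mse_repr - mse_full

rng = np.random.default_rng(0)
p, d = 6, 2
M = rng.standard_normal((p, p))
Sigma = M @ M.T                       # feature covariance
A = rng.standard_normal((d, p))       # candidate representation
w = rng.standard_normal(p)            # adversary's prediction task
print(linear_regret(A, w, Sigma))     # non-negative by construction
```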
Charts, figures, and text derived from data play an important role in decision making, from data-driven policy development to day-to-day choices informed by online articles. Making sense of, or fact-checking, such outputs means understanding how they relate to the underlying data. Even for domain experts with access to the source code and data sets, this poses a significant challenge. In this paper we introduce a new program analysis framework which supports interactive exploration of fine-grained I/O relationships directly through computed outputs, making use of dynamic dependence graphs. Our main contribution is a novel notion in data provenance which we call related inputs, a relation of mutual relevance or "cognacy" which arises between inputs when they contribute to common features of the output. Such queries allow readers to ask questions like "What outputs use this data element, and what other data elements are used along with it?". We show how Jonsson and Tarski's concept of conjugate operators on Boolean algebras appropriately characterises the notion of cognacy in a dependence graph, and give a procedure for computing related inputs over such a graph.
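A minimal sketch of the idea behind related-inputs queries (an illustration only, not the paper's conjugate-operator formulation): over a mapping from output features to the inputs they depend on, the inputs related to a given input are those that co-contribute to at least one common output feature. All names below are assumptions made for the example.

```python
# Toy dependence graph: each output feature maps to the inputs it depends on.
def outputs_of(deps, x):
    """Output features that depend on input x."""
    return {o for o, ins in deps.items() if x in ins}

def related_inputs(deps, x):
    """Inputs that co-contribute with x to at least one output feature."""
    related = set()
    for o in outputs_of(deps, x):
        related |= deps[o]
    return related - {x}

deps = {
    "bar_2020": {"gdp_2020", "population_2020"},
    "bar_2021": {"gdp_2021", "population_2021"},
    "caption_total": {"gdp_2020", "gdp_2021"},
}
print(outputs_of(deps, "gdp_2020"))      # outputs that use this data element
print(related_inputs(deps, "gdp_2020"))  # data elements used along with it
```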
Off-policy evaluation (OPE) is the problem of estimating the value of a target policy using historical data collected under a different logging policy. OPE methods typically assume overlap between the target and logging policies, enabling solutions based on importance weighting and/or imputation. In this work, we approach OPE without assuming either overlap or a well-specified model, by considering a strategy based on partial identification under non-parametric assumptions on the conditional mean function, focusing especially on Lipschitz smoothness. Under such smoothness assumptions, we formulate a pair of linear programs whose optimal values upper- and lower-bound the contributions of the no-overlap region to the off-policy value. We show that these linear programs have a concise closed-form solution that can be computed efficiently, and that their solutions converge, under the Lipschitz assumption, to the sharp partial identification bounds on the off-policy value. Furthermore, we show that the rate of convergence is minimax optimal, up to log factors. We deploy our methods on two semi-synthetic examples, and obtain informative and valid bounds that are tighter than those possible without smoothness assumptions.
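To illustrate the flavour of what a Lipschitz assumption buys (a standard extremal-extension bound, not necessarily the paper's exact closed form): at any query point in the no-overlap region, the unknown conditional mean is sandwiched between the smallest and largest $L$-Lipschitz functions consistent with the observed values.

```python
# Illustrative sketch: pointwise bounds on an unknown L-Lipschitz function at
# query points, given its values at observed points.
import numpy as np

def lipschitz_envelopes(x_obs, y_obs, x_query, L):
    """Return (lower, upper) bounds on f(x_query) for any L-Lipschitz f
    consistent with f(x_obs) = y_obs (all arguments are 1-D arrays)."""
    d = np.abs(x_query[:, None] - x_obs[None, :])    # pairwise distances
    upper = np.min(y_obs[None, :] + L * d, axis=1)   # largest consistent value
    lower = np.max(y_obs[None, :] - L * d, axis=1)   # smallest consistent value
    return lower, upper

x_obs = np.array([0.0, 1.0, 2.0])
y_obs = np.array([0.0, 0.5, 0.0])
lo, hi = lipschitz_envelopes(x_obs, y_obs, np.array([3.0]), L=1.0)
print(lo, hi)   # f(3) lies between -1.0 and 1.0 for any 1-Lipschitz f
```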
The sparsity-ranked lasso (SRL) has been developed for model selection and estimation in the presence of interactions and polynomials. The main tenet of the SRL is that an algorithm should be more skeptical of higher-order polynomials and interactions *a priori* compared to main effects, and hence the inclusion of these more complex terms should require a higher level of evidence. In time series, the same idea of ranked prior skepticism can be applied to the possibly seasonal autoregressive (AR) structure of the series during the model fitting process, which is especially useful in settings with uncertain or multiple modes of seasonality. The SRL can naturally incorporate exogenous variables, with streamlined options for inference and/or feature selection. The fitting process is quick even for large series with a high-dimensional feature set. In this work, we discuss both the formulation of this procedure and the software we have developed for its implementation via the **fastTS** R package. We explore the performance of our SRL-based approach in a novel application involving the autoregressive modeling of hourly emergency room arrivals at the University of Iowa Hospitals and Clinics. We find that the SRL is considerably faster than its competitors, while producing more accurate predictions.
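Schematically (an illustrative form only; the exact penalty weights used by the SRL and **fastTS** are given in the paper and package documentation), the fit is a weighted lasso whose penalty weights grow with the complexity of the term:

$$\hat{\beta} \;=\; \arg\min_{\beta}\; \tfrac{1}{2}\sum_{t}\Big(y_t - \sum_{j} x_{tj}\,\beta_j\Big)^{2} \;+\; \lambda \sum_{j} w_j\,|\beta_j|,$$

where $w_j$ increases with the order of the polynomial, interaction, or seasonal-lag term indexed by $j$, so that more complex terms require stronger evidence to enter the model.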
This paper concerns an expansion of first-order Belnap-Dunn logic whose connectives and quantifiers are all familiar from classical logic. The language and logical consequence relation of the logic are defined, a proof system for the defined logic is presented, and the soundness and completeness of the presented proof system are established. The close relationship between the logical consequence relation of the defined logic and that of the version of classical logic with the same language is illustrated by the minor differences between the presented proof system and a sound and complete proof system for that version of classical logic. Moreover, fifteen classical laws of logical equivalence are given that distinguish the logical equivalence relation of the defined logic from that of many logics which appear closely related at first glance.
The problem of statistical inference for open chaotic systems measured with error is complicated by the interaction between the uncertainty introduced by chaos and the various sources of random or external variation. Here we describe a method that represents measured data from large open chaotic systems subject to error as collections of threads of plausible pseudo future histories, so as to enable statistical analysis. This representation provides asymptotically consistent predictive distributions, for use in developing predictive likelihood methods which: (1) provide a framework for variable selection; (2) provide a framework for Bayesian updating, so that, for example, four-season-ahead predictions learn naturally as the third season ahead is measured; (3) allow the examination of conditional scenarios along the future histories for planning purposes; and (4) allow variable and delay combinations to be ranked by signal-to-noise ratio. The method is tested for learning and variable selection by examining its behavior in predicting 9 years, across 4 seasons, of climate variables, including local temperature and rainfall measurements at two locations, predicting up to 4 seasons ahead.
Time Series Classification (TSC) is an important and challenging problem in data mining. With the increasing availability of time series data, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we present the most exhaustive study of DNNs for TSC to date.
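For orientation (an illustrative sketch, not one of the benchmarked architectures as published), a small convolutional classifier of the kind surveyed can be written in a few lines of PyTorch:

```python
# Minimal illustrative 1-D convolutional classifier for time series.
# Input shape: (batch, channels, series_length).
import torch
import torch.nn as nn

class SimpleConvTSC(nn.Module):
    def __init__(self, in_channels: int, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=7, padding="same"),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding="same"),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global average pooling over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = SimpleConvTSC(in_channels=1, n_classes=5)
x = torch.randn(16, 1, 128)             # 16 univariate series of length 128
print(model(x).shape)                   # torch.Size([16, 5])
```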