We prove axiomatic characterizations of several important multiwinner rules within the class of approval-based committee choice rules. These are voting rules that return a set of (fixed-size) committees. In particular, we provide axiomatic characterizations of Proportional Approval Voting, the Chamberlin--Courant rule, and other Thiele methods. These rules share the important property that they satisfy an axiom called consistency, which is crucial in our characterizations.
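For context, recall the standard Thiele-score definitions underlying these rules (standard definitions, stated here only to fix notation): for an approval profile $(A_1,\dots,A_n)$ and a committee $W$, a Thiele method with weight function $w$ selects the committees maximizing
\[
\mathrm{sc}_w(W) \;=\; \sum_{i=1}^{n} w\bigl(|A_i \cap W|\bigr),
\qquad
w_{\mathrm{PAV}}(x)=\sum_{j=1}^{x}\tfrac{1}{j},
\qquad
w_{\mathrm{CC}}(x)=\min(x,1),
\]
where $w_{\mathrm{PAV}}$ gives Proportional Approval Voting and $w_{\mathrm{CC}}$ gives the Chamberlin--Courant rule.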
Scale-free dynamics, formalized by self-similarity, provides a versatile paradigm that is widely and ubiquitously used to model temporal dynamics in real-world data. However, its practical use has so far remained mostly univariate. By contrast, modern applications often demand multivariate data analysis. Accordingly, models for multivariate self-similarity were recently proposed. Nevertheless, they have remained rarely used in practice because of a lack of robust estimation procedures for the vector of self-similarity parameters. Building on recent mathematical developments, the present work puts forth an efficient estimation procedure based on the theoretical study of the multiscale eigenstructure of the wavelet spectrum of multivariate self-similar processes. The estimation performance is studied theoretically in the asymptotic limit of large scales and sample sizes, and computationally for finite-size samples. As a practical outcome, a fully operational and documented multivariate signal processing estimation toolbox is made freely available and is ready for use on real-world data. Its potential benefits are illustrated on epileptic seizure prediction from multi-channel EEG data.
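As a rough illustration of the kind of wavelet-domain estimation described above (not the toolbox's actual implementation; the function names, wavelet choice, and the slope-to-parameter mapping below are assumptions based on the standard univariate convention $\mathbb{E}|d_X(2^j,k)|^2 \propto 2^{j(2H+1)}$), one can regress the log of per-scale wavelet-spectrum eigenvalues on the scale index:

# Minimal sketch (assumes NumPy and PyWavelets); illustrative only.
import numpy as np
import pywt

def wavelet_spectrum_eigenvalues(X, wavelet="db3", levels=6):
    """X: (n_components, n_samples) multivariate signal.
    Returns a (levels, n_components) array of eigenvalues of the
    per-scale empirical wavelet covariance (the 'wavelet spectrum')."""
    coeffs = [pywt.wavedec(x, wavelet, level=levels) for x in X]
    eigs = []
    for j in range(1, levels + 1):
        # detail coefficients at octave j (j=1 finest) for every component
        D = np.array([c[-j] for c in coeffs])      # (n_comp, n_j)
        S_j = D @ D.T / D.shape[1]                 # empirical wavelet spectrum at scale j
        eigs.append(np.linalg.eigvalsh(S_j))       # sorted ascending
    return np.array(eigs)

def estimate_selfsim(X, j1=2, j2=6, **kw):
    """Linear regression of log2 eigenvalues across octaves j1..j2."""
    eigs = wavelet_spectrum_eigenvalues(X, levels=j2, **kw)
    scales = np.arange(1, eigs.shape[0] + 1)
    sel = (scales >= j1) & (scales <= j2)
    H = []
    for m in range(eigs.shape[1]):
        slope = np.polyfit(scales[sel], np.log2(eigs[sel, m]), 1)[0]
        H.append((slope - 1) / 2)                  # slope ~ 2H + 1 (fBm convention)
    return np.array(H)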
Easily verifiable mathematical conditions for the existence, uniqueness, and explicit analytical computation of minimal adversarial paths (MAP) and minimal adversarial distances (MAD) are formulated and proven for (locally) uniquely-invertible classifiers, for generalized linear models (GLM), and for entropic AI (EAI). Practical computation of MAP and MAD, their comparison, and their interpretation for various classes of AI tools (neural networks, boosted random forests, GLM, and EAI) are demonstrated on common synthetic benchmarks (a double Swiss-roll spiral and its extensions) as well as on two biomedical data problems (health insurance claim prediction and heart attack lethality classification). For the biomedical applications, it is demonstrated how MAP provides unique, minimal, patient-specific risk-mitigating interventions within predefined subsets of accessible control variables.
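For the special case of a linear decision boundary $\{x : w^\top x + b = 0\}$ (the simplest GLM setting; recalled here only as an illustration of MAP and MAD, not as the paper's general conditions), both quantities have the familiar closed form
\[
\mathrm{MAD}(x_0) = \frac{|w^\top x_0 + b|}{\|w\|_2},
\qquad
x^\star = x_0 - \frac{w^\top x_0 + b}{\|w\|_2^{2}}\, w,
\]
where the straight segment from $x_0$ to its orthogonal projection $x^\star$ onto the boundary is the corresponding MAP.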
Due to the singularity of the solution of linear subdiffusion problems, most time-stepping methods on uniform meshes are limited to $O(\tau)$ accuracy, where $\tau$ denotes the time step. The present work aims to explain why a certain type of Crank-Nicolson scheme (the averaging Crank-Nicolson scheme) for subdiffusion can only yield $O(\tau^\alpha)$ ($\alpha<1$) accuracy, which is much lower than desired. The existing, well-developed error analysis for subdiffusion, which has been successfully applied to many time-stepping methods such as the fractional BDF-$p$ ($1\leq p\leq 6$), requires that singular points lie outside the path of the contour integrals involved. The averaging Crank-Nicolson scheme considered in this work is quite natural but fails to meet this requirement. By resorting to the residue theorem, a novel sharp error analysis is developed in this study, upon which correction methods are further designed to recover the optimal $O(\tau^2)$ accuracy. All results are verified by numerical tests.
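For concreteness (the exact setting of the paper may differ slightly; this is the standard model problem), the linear subdiffusion equation reads
\[
\partial_t^{\alpha} u(t) + A u(t) = f(t), \quad 0<\alpha<1, \qquad u(0)=u_0,
\]
with $\partial_t^{\alpha}$ the Caputo fractional derivative and $A$ an elliptic operator; the solution typically exhibits the initial layer $u(t)-u_0 = O(t^{\alpha})$, which is the singularity responsible for the reduced accuracy of uniform-mesh time stepping.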
We study the last fall degrees of {\em semi-local} polynomial systems, and the computational complexity of solving such systems for closed-point and rational-point solutions, where the systems are defined over a finite field. A semi-local polynomial system specifies an algebraic set which is the image under a global linear transformation of a direct product of local affine algebraic sets. As a special but interesting case, polynomial systems that arise from Weil restriction of algebraic sets in an affine space of low dimension are semi-local. Such systems have received considerable attention due to their applications in cryptography. Our main results bound the last fall degree of a semi-local polynomial system in terms of the number of closed-point solutions, and yield an efficient algorithm for finding all rational-point solutions when the prime characteristic of the finite field and the number of rational solutions are small. Our results on solving semi-local systems imply an improvement on a previously known polynomial-time attack on the HFE (Hidden Field Equations) cryptosystems. The attacks implied by our results extend to public key encryption functions based on semi-local systems in which either the number of closed-point solutions is small or the characteristic of the field is small. It remains plausible to construct public key cryptosystems based on semi-local systems over a finite field of large prime characteristic with an exponential number of closed-point solutions. Such a construction is presented in the paper, followed by further cryptanalysis involving the isomorphism of polynomials (IP) problem, as well as a concrete public key encryption scheme which is secure against all the attacks discussed in this paper.
Completely random measures (CRMs) and their normalizations (NCRMs) offer flexible models in Bayesian nonparametrics. But their infinite dimensionality presents challenges for inference. Two popular finite approximations are truncated finite approximations (TFAs) and independent finite approximations (IFAs). While the former have been well-studied, IFAs lack similarly general bounds on approximation error, and there has been no systematic comparison between the two options. In the present work, we propose a general recipe to construct practical finite-dimensional approximations for homogeneous CRMs and NCRMs, in the presence or absence of power laws. We call our construction the automated independent finite approximation (AIFA). Relative to TFAs, we show that AIFAs facilitate more straightforward derivations and use of parallel computing in approximate inference. We upper bound the approximation error of AIFAs for a wide class of common CRMs and NCRMs -- and thereby develop guidelines for choosing the approximation level. Our lower bounds in key cases suggest that our upper bounds are tight. We prove that, for worst-case choices of observation likelihoods, TFAs are more efficient than AIFAs. Conversely, we find that in real-data experiments with standard likelihoods, AIFAs and TFAs perform similarly. Moreover, we demonstrate that AIFAs can be used for hyperparameter estimation even when other potential IFA options struggle or do not apply.
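As a point of reference for what an independent finite approximation looks like (this is one standard IFA recipe for the gamma process, not necessarily the AIFA construction of the paper; the function and parameter names are illustrative):

# Minimal sketch: IFA of a gamma process GaP(gamma, lam) with base measure H.
# K i.i.d. atoms with Gamma(gamma/K, rate=lam) weights; as K grows this
# converges to the gamma process, and normalizing gives a finite
# approximation of the Dirichlet process DP(gamma, H).
import numpy as np

def ifa_gamma_process(K, gamma, lam, base_sampler, rng=None):
    rng = np.random.default_rng(rng)
    weights = rng.gamma(shape=gamma / K, scale=1.0 / lam, size=K)
    atoms = base_sampler(K, rng)
    return atoms, weights

# Example usage with a standard-normal base measure.
atoms, w = ifa_gamma_process(K=1000, gamma=2.0, lam=1.0,
                             base_sampler=lambda K, rng: rng.normal(size=K))
probs = w / w.sum()   # normalization yields the NCRM (here, approximately a DP)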
We determine the minimum possible column multiplicity of even, doubly-even, and triply-even codes of a given length. This refines a classification result for the possible lengths of $q^r$-divisible codes over $\mathbb{F}_q$. We also give a few computational results for field sizes $q>2$. Non-existence results for divisible codes with restricted column multiplicities at a given length have applications, e.g., in Galois geometry, and can be used to derive upper bounds on the maximum cardinality of subspace codes.
Using fault-tolerant constructions, computations performed with unreliable components can simulate their noiseless counterparts through the introduction of a modest amount of redundancy. Given this modest overhead, and the fact that increasing the reliability of basic components often comes at a cost, are there situations where fault-tolerance is the more economical option? We present a general framework that accounts for this overhead cost in order to effectively compare fault-tolerant with non-fault-tolerant approaches to computation, in the limit of small logical error rates. Using this detailed accounting, we determine explicit boundaries at which fault-tolerant designs become more efficient than designs that achieve comparable reliability through direct consumption of resources. We find that the fault-tolerant construction is always preferred in the limit of high reliability whenever the resources required to construct a basic unit grow faster than $\log(1 / \epsilon)$ asymptotically for small $\epsilon$.
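A toy numerical illustration of the comparison (the cost models and constants below are hypothetical, chosen only to exhibit a crossover; they are not the paper's accounting):

# Compare achieving logical error rate eps by (a) making each basic unit
# directly more reliable versus (b) fault-tolerant encoding with polylog overhead.
import numpy as np

eps = np.logspace(-3, -15, 200)

# (a) Direct approach: assume unit cost grows like (1/eps)**0.3,
#     i.e. faster than log(1/eps).
cost_direct = (1.0 / eps) ** 0.3

# (b) Fault-tolerant approach: fixed-reliability units, overhead ~ polylog(1/eps).
cost_ft = 5.0 * np.log(1.0 / eps) ** 2

crossover = eps[np.argmax(cost_ft < cost_direct)]
print(f"fault tolerance becomes cheaper below eps ~ {crossover:.1e}")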
We study stability properties of the expected utility function in Bayesian optimal experimental design. We provide a framework for this problem in a non-parametric setting and prove a convergence rate for the expected utility with respect to a likelihood perturbation. This rate is uniform over the design space, and its sharpness in the general setting is demonstrated by proving a lower bound in a special case. To make the problem more concrete, we then consider non-linear Bayesian inverse problems with Gaussian likelihood and prove that the assumptions set out for the general case are satisfied, thereby obtaining stability of the expected utility with respect to perturbations of the observation map. Theoretical convergence rates are demonstrated numerically in three different examples.
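For orientation, the most common choice of expected utility in this setting is the expected information gain (stated here as background; the framework above covers more general utilities): for a design $d$, prior $\pi(\theta)$ and likelihood $p(y\mid\theta,d)$,
\[
U(d) \;=\; \int\!\!\int \log\frac{p(y\mid \theta, d)}{p(y\mid d)}\; p(y\mid \theta, d)\,\pi(\theta)\,\mathrm{d}\theta\,\mathrm{d}y ,
\qquad
p(y\mid d)=\int p(y\mid\theta,d)\,\pi(\theta)\,\mathrm{d}\theta .
\]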
We show a deterministic constant-time local algorithm for constructing an approximately maximum flow and minimum fractional cut in multisource-multitarget networks with bounded degrees and bounded edge capacities. Locality means that the decision made about each edge depends only on its constant-radius neighborhood. We give two applications of the algorithm: one is related to the Aldous-Lyons Conjecture, and the other concerns approximating the neighborhood distribution of graphs by bounded-size graphs. The scope of our results extends to unimodular random graphs and networks. As a corollary, we generalize the Maximum Flow Minimum Cut Theorem to unimodular random flow networks.
We propose a new variable selection procedure for a functional linear model with multiple scalar responses and multiple functional predictors. The method is based on basis expansions of the functional predictors and coefficients involved, which lead to a multivariate linear regression model. We then introduce a criterion by means of which the variable selection problem reduces to that of estimating a suitable set. Estimation of this set is achieved through appropriate penalizations of estimates of this criterion, leading to our proposal. A simulation study that investigates the effectiveness of the proposed approach and compares it with existing methods is provided.
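A minimal sketch of the basis-expansion reduction (illustrative only; the paper's selection criterion and penalization are different, and the helper names below are made up): expand each functional predictor on a fixed basis, reduce to a multivariate linear regression, and score each predictor by the size of its coefficient block.

import numpy as np

def basis_scores(Xfun, t, degree=5):
    """Xfun: (n, p, len(t)) functional predictors observed on a uniform grid t.
    Returns an (n, p*(degree+1)) matrix of Legendre basis scores."""
    n, p, _ = Xfun.shape
    x = 2 * (t - t.min()) / (t.max() - t.min()) - 1        # map grid to [-1, 1]
    Phi = np.polynomial.legendre.legvander(x, degree)       # (len(t), degree+1)
    # numerical integration <X_ip, phi_k> on the grid
    scores = np.einsum('ipt,tk->ipk', Xfun, Phi) * (t[1] - t[0])
    return scores.reshape(n, p * (degree + 1)), p, degree + 1

def rank_predictors(Xfun, t, Y, degree=5):
    """Y: (n, n_responses). Returns one relevance score per functional predictor."""
    Z, p, K = basis_scores(Xfun, t, degree)
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)                # (p*K, n_responses)
    blocks = B.reshape(p, K, -1)
    return np.linalg.norm(blocks, axis=(1, 2))               # block Frobenius norms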