The subset sum problem is a well-known NP-hard problem in computer science, with the fastest known algorithm running in $O(2^{0.3113n})$ time. A modified version, known as the perfect sum problem, extends the subset sum idea further. This extension adds complexity, making the problem difficult to compute for large inputs. In this paper, we propose a probabilistic approach that approximates the solution to the perfect sum problem by approximating the distribution of potential sums. Since the perfect sum problem extends the subset sum problem, our approximation also grants some probabilistic insight into the solution of the subset sum problem. We harness distributional approximations to model the number of subsets that sum to a given value. These distributional approximations are formulated in two ways: using bounds to justify a normal approximation, and approximating the empirical distribution via density estimation. These approximations can be computed in $O(n)$ time, and their accuracy can increase with the size of the input data, making them useful for large-scale combinatorial problems. Code is available at https://github.com/KristofPusztai/PerfectSum.
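As a concrete illustration of the normal-approximation idea, the following minimal Python sketch estimates the number of subsets summing to a target in $O(n)$ time via a plain central-limit argument; it is not taken from the paper's code, so the exact estimator there may differ.

```python
import math

def approx_subset_count(xs, target):
    """Approximate the number of subsets of xs summing exactly to target.

    Over a uniformly random subset, each element is included independently
    with probability 1/2, so the subset sum has mean sum(xs)/2 and variance
    sum(x**2)/4.  A central-limit (normal) approximation then estimates the
    count as 2**n times the normal density at the target.  Runs in O(n).
    For very large n the 2**n factor overflows floats; work in logs then.
    """
    n = len(xs)
    mu = sum(xs) / 2.0
    sigma = math.sqrt(sum(x * x for x in xs) / 4.0)
    z = (target - mu) / sigma
    density = math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
    return (2.0 ** n) * density

# Example: xs = list(range(1, 31)); approx_subset_count(xs, sum(xs) // 2)
```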
Likelihood-free inference methods typically make use of a distance between simulated and real data. A common example is the maximum mean discrepancy (MMD), which has previously been used for approximate Bayesian computation, minimum distance estimation, generalised Bayesian inference, and within the nonparametric learning framework. The MMD is commonly estimated at a root-$m$ rate, where $m$ is the number of simulated samples. This can lead to significant computational challenges since a large $m$ is required to obtain an accurate estimate, which is crucial for parameter estimation. In this paper, we propose a novel estimator for the MMD with significantly improved sample complexity. The estimator is particularly well suited for computationally expensive smooth simulators with low- to mid-dimensional inputs. This claim is supported through both theoretical results and an extensive simulation study on benchmark simulators.
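For context, the sketch below shows the standard unbiased U-statistic estimator of the squared MMD with a Gaussian kernel, i.e., the root-$m$-rate baseline that the paper improves upon; the improved estimator itself is not reproduced here.

```python
import numpy as np

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Unbiased U-statistic estimate of the squared MMD with a Gaussian kernel.

    x: (n, d) array of real samples; y: (m, d) array of simulated samples.
    This is the standard estimator converging at a root-m rate.
    """
    def k(a, b):
        sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * bandwidth ** 2))

    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    n, m = len(x), len(y)
    np.fill_diagonal(kxx, 0.0)  # drop diagonal terms for unbiasedness
    np.fill_diagonal(kyy, 0.0)
    return kxx.sum() / (n * (n - 1)) + kyy.sum() / (m * (m - 1)) - 2.0 * kxy.mean()
```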
In this paper, we introduce a new causal framework capable of dealing with both probabilistic and non-probabilistic problems. Specifically, we provide a direct causal effect formula called the Probabilistic vAriational Causal Effect (PACE), together with variations of it that satisfy certain postulates. Our causal effect formula integrates the idea of the total variation of a function with probability theory. The probabilistic part captures the natural availability of changing the exposure values given some variables; these variables interfere with the effect of the exposure on a given outcome. PACE has a parameter $d$ that determines the degree to which the natural availability of changing the exposure values is taken into account. Lower values of $d$ correspond to scenarios in which rare cases are important; in contrast, with higher values of $d$, our framework deals with problems that are probabilistic in nature. Hence, instead of a single causal effect value, we provide a causal effect vector by discretizing $d$. Further, we introduce positive and negative PACE to measure the positive and negative causal changes in the outcome while changing the exposure values. Furthermore, we provide an identifiability criterion for PACE to deal with observational studies. We also address the problem of computing counterfactuals in causal reasoning. We compare our framework with those of Pearl, mutual information, conditional mutual information, and Janzing et al. by investigating several examples.
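To make the total-variation idea concrete, here is a toy Python sketch of a probability-weighted total-variation effect measure for a discrete exposure. The weighting scheme and the function names are illustrative assumptions, not the paper's actual PACE formula.

```python
def tv_causal_effect(xs, g, probs, d=1.0):
    """Toy total-variation-style effect measure, loosely in the spirit of PACE.

    xs:    sorted discrete exposure values x_0 < ... < x_k
    g:     outcome as a function of the exposure (other variables held fixed)
    probs: probs[i] is a stand-in for how naturally available the change
           x_i -> x_{i+1} is (a hypothetical weighting, not the paper's)
    d:     degree to which availability is taken into account; d = 0 ignores it
    """
    return sum(abs(g(xs[i + 1]) - g(xs[i])) * probs[i] ** d
               for i in range(len(xs) - 1))

# A "causal effect vector" is then obtained by sweeping d over a grid:
# [tv_causal_effect(xs, g, probs, d) for d in (0.0, 0.5, 1.0, 2.0)]
```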
Determining process-structure-property linkages is one of the key objectives in materials science, and uncertainty quantification plays a critical role in understanding both process-structure and structure-property linkages. In this work, we seek to learn a distribution of microstructure parameters that is consistent in the sense that the forward propagation of this distribution through a crystal plasticity finite element model (CPFEM) matches a target distribution on material properties. This stochastic inversion formulation infers a distribution of acceptable/consistent microstructures, as opposed to a deterministic solution, which expands the range of feasible designs in a probabilistic manner. To solve this stochastic inverse problem, we employ a recently developed uncertainty quantification (UQ) framework based on push-forward probability measures, which combines techniques from measure theory and Bayes' rule to define a unique and numerically stable solution. This approach requires making an initial prediction using an initial guess for the distribution on model inputs and solving a stochastic forward problem. To reduce the computational burden of solving both the stochastic forward and stochastic inverse problems, we combine this approach with a machine learning (ML) Bayesian regression model based on Gaussian processes and demonstrate the proposed methodology on two representative case studies in structure-property linkages.
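A minimal one-dimensional sketch of the push-forward rejection step is given below, with a cheap surrogate standing in for the CPFEM solver; the function names and the KDE-based density estimates are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

def consistent_inverse_samples(prior_draws, forward, target_draws, rng=None):
    """Rejection sampler for a push-forward (data-consistent) inverse problem.

    prior_draws:  samples from an initial guess on model inputs, shape (n, p)
    forward:      cheap surrogate of the forward map returning shape (n,),
                  e.g. a trained GP regression standing in for CPFEM
    target_draws: samples from the target distribution on properties, shape (m,)
    Accepted inputs are distributed so their push-forward matches the target.
    """
    rng = rng or np.random.default_rng(0)
    q = forward(prior_draws)               # predicted properties
    predicted = gaussian_kde(q)            # push-forward of the initial guess
    observed = gaussian_kde(target_draws)  # target density on properties
    ratio = observed(q) / predicted(q)     # re-weighting factor
    accept = rng.uniform(size=len(q)) < ratio / ratio.max()
    return prior_draws[accept]
```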
In this paper, we study the predict-then-optimize problem, where the output of a machine learning prediction task is used as the input of some downstream optimization problem, say, the objective coefficient vector of a linear program. The problem is also known as predictive analytics or contextual linear programming. The existing approaches largely suffer from either (i) optimization intractability (a non-convex objective function) or statistical inefficiency (a suboptimal generalization bound), or (ii) strong conditions such as the absence of constraints or loss calibration. We develop a new approach to the problem called \textit{maximum optimality margin}, which designs the machine learning loss function via the optimality condition of the downstream optimization. The max-margin formulation enjoys both computational efficiency and good theoretical properties for the learning procedure. More importantly, our new approach only needs observations of the optimal solution in the training data rather than the objective function, which makes it a new and natural approach to the inverse linear programming problem under both contextual and context-free settings; we also analyze the proposed method under both offline and online settings, and demonstrate its performance using numerical experiments.
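The following toy sketch conveys the max-margin flavor of such a loss for a maximization LP by enumerating polytope vertices; the paper instead works through the optimality condition of the LP, so treat this as an illustrative assumption rather than the proposed method.

```python
import numpy as np

def margin_loss(c_hat, w_star, vertices):
    """Hinge-style optimality-margin loss for one training example.

    c_hat:    predicted objective vector (maximization LP assumed)
    w_star:   observed optimal vertex of the feasible polytope
    vertices: other vertices as rows; a toy enumeration for illustration
    The loss is zero iff w_star beats every rival vertex by margin >= 1
    under the predicted objective, so minimizing it pushes the model to
    predict objectives that certify the observed solution as optimal.
    """
    gaps = vertices @ c_hat - w_star @ c_hat
    return np.maximum(0.0, 1.0 + gaps).max()
```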
We study the set of optimal solutions of the dual linear programming formulation of the linear assignment problem (LAP) and propose a method for computing a solution from the relative interior of this set. Assuming that an arbitrary dual-optimal solution and an optimal assignment are available (many efficient algorithms exist for both), our method computes a relative-interior solution in linear time. Since the LAP occurs as a subproblem in the linear programming relaxation of the quadratic assignment problem (QAP), we employ our method as a new component in the family of dual-ascent algorithms that provide bounds on the optimal value of the QAP. To make our results applicable to the incomplete QAP, which is of interest in practical use cases, we also provide a linear-time reduction from incomplete LAP to complete LAP, along with a mapping that preserves optimality and membership in the relative interior. Our experiments on publicly available benchmarks indicate that our approach with the relative-interior solution frequently provides superior bounds and is otherwise at least comparable.
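As an illustration, the sketch below recovers an arbitrary dual-optimal solution of the LAP with an off-the-shelf LP solver; the paper's contribution, moving from such a point into the relative interior of the optimal face in linear time, is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog, linear_sum_assignment

def lap_dual_optimal(cost):
    """One dual-optimal solution (u, v) of the LAP dual:
    max sum(u) + sum(v)  s.t.  u_i + v_j <= cost[i, j].
    linprog returns some vertex of the dual-optimal face; it is generally
    NOT in the relative interior of that face.
    """
    n = cost.shape[0]
    rows = []
    for i in range(n):              # constraint u_i + v_j <= c_ij
        for j in range(n):
            a = np.zeros(2 * n)
            a[i], a[n + j] = 1.0, 1.0
            rows.append(a)
    res = linprog(c=-np.ones(2 * n), A_ub=np.array(rows), b_ub=cost.ravel(),
                  bounds=[(None, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    row, col = linear_sum_assignment(cost)   # an optimal (primal) assignment
    return u, v, (row, col)
```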
Multivariate point processes are widely applied to model event-type data such as natural disasters, online message exchanges, financial transactions, or neuronal spike trains. One very popular point process model, in which the probability of occurrence of new events depends on the past of the process, is the Hawkes process. In this work we consider the nonlinear Hawkes process, which notably models excitation and inhibition phenomena between the dimensions of the process. Within a nonparametric Bayesian estimation framework, we obtain concentration rates of the posterior distribution on the parameters, under mild assumptions on the prior distribution and the model. These results also lead to convergence rates for Bayesian estimators. Another object of interest in event-data modelling is the graph of interaction, or Granger connectivity graph, of the phenomenon. We provide consistency guarantees for Bayesian methods estimating this quantity; in particular, we prove that the posterior distribution is consistent on the graph adjacency matrix of the process, as is a Bayesian estimator based on an adequate loss function.
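To make the model concrete, here is a minimal Ogata-thinning simulator for a univariate nonlinear Hawkes process with an exponential kernel and a ReLU link, where a negative kernel weight models inhibition; parameter names are illustrative, and a strictly positive baseline is assumed.

```python
import math, random

def simulate_nonlinear_hawkes(nu, alpha, beta, t_max, seed=0):
    """Ogata thinning for a univariate nonlinear Hawkes process with
    intensity  lambda(t) = max(0, nu + sum_k alpha * exp(-beta (t - t_k))).
    alpha < 0 gives inhibition; the ReLU link keeps the intensity valid.
    Assumes nu > 0 so the thinning bound stays positive.
    """
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < t_max:
        # Valid upper bound until the next event: only the positive part of
        # the kernel sum can exceed the baseline, and it only decays in time.
        pos = sum(max(alpha, 0.0) * math.exp(-beta * (t - tk)) for tk in events)
        bound = nu + pos
        t += rng.expovariate(bound)
        lam = max(0.0, nu + sum(alpha * math.exp(-beta * (t - tk))
                                for tk in events))
        if t < t_max and rng.random() < lam / bound:
            events.append(t)
    return events
```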
Conventional rule learning algorithms aim at finding a set of simple rules in which each rule covers as many examples as possible. In this paper, we argue that the rules found in this way may not be the optimal explanations for each of the examples they cover. Instead, we propose an efficient algorithm that aims at finding the best rule covering each training example, using a greedy optimization consisting of one specialization and one generalization loop. These locally optimal rules are collected and then filtered into a final rule set, which is much larger than the sets learned by conventional rule learning algorithms. A new example is classified by selecting the best among the rules that cover it. In our experiments on small to very large datasets, the approach's average classification accuracy is higher than that of state-of-the-art rule learning algorithms. Moreover, the algorithm is highly efficient and can inherently be processed in parallel without affecting the learned rule set, and hence the classification accuracy. We thus believe that it closes an important gap for large-scale classification rule induction.
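A minimal sketch of the prediction step described above, assuming a simple dictionary-based rule representation (an illustrative choice, not the paper's implementation):

```python
def classify(example, rules, default):
    """Predict with the best rule that covers the example.

    example: dict attribute -> value
    rules:   list of (conditions, label, quality) where conditions is a
             dict attribute -> required value and quality is, e.g., the
             rule's precision on the training data (assumed here)
    Falls back to a default label if no rule covers the example.
    """
    covering = [r for r in rules
                if all(example.get(a) == v for a, v in r[0].items())]
    if not covering:
        return default
    return max(covering, key=lambda r: r[2])[1]
```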
Maximum Inner Product Search, or top-k retrieval, on sparse vectors is well understood in information retrieval, with a number of mature algorithms that solve it exactly. However, all existing algorithms are tailored to text and frequency-based similarity measures. To achieve an optimal memory footprint and query latency, they rely on the near-stationarity of documents and on laws governing natural languages. We consider, instead, a setup in which collections are streaming, necessitating dynamic indexing, and where indexing and retrieval must work with arbitrarily distributed real-valued vectors. As we show, existing algorithms are no longer competitive in this setup, even against naive solutions. We investigate this gap and present a novel approximate solution, called Sinnamon, that can efficiently retrieve the top-k results for sparse real-valued vectors drawn from arbitrary distributions. Notably, Sinnamon offers levers to trade off memory consumption, latency, and accuracy, making the algorithm suitable for constrained applications and systems. We give theoretical results on the error introduced by the approximate nature of the algorithm, and present an empirical evaluation of its performance on two hardware platforms with synthetic and real-world datasets. We conclude by laying out concrete directions for future research on this general top-k retrieval problem over sparse vectors.
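For reference, a naive exact baseline for sparse top-k retrieval over an inverted index is sketched below; Sinnamon itself replaces exact scoring with a compact sketch, which is not reproduced here.

```python
from collections import defaultdict

def build_index(docs):
    """Inverted index over sparse vectors: docs[i] is a dict dim -> value."""
    index = defaultdict(list)
    for doc_id, vec in enumerate(docs):
        for dim, val in vec.items():
            index[dim].append((doc_id, val))
    return index

def top_k(index, query, k):
    """Exact top-k by inner product via coordinate-at-a-time traversal.
    This is the naive baseline; approximate methods trade exactness for
    memory and latency."""
    scores = defaultdict(float)
    for dim, q_val in query.items():
        for doc_id, val in index.get(dim, ()):
            scores[doc_id] += q_val * val
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]
```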
For a finite set $\cal F$ of polynomials over a fixed finite prime field of size $p$, containing all polynomials $x^2 - x$, a Nullstellensatz proof of the unsolvability of the system $$ f = 0\ , \quad \mbox{all } f \in {\cal F} $$ in the field is a linear combination $\sum_{f \in {\cal F}} h_f \cdot f$ that equals $1$ in the ring of polynomials. The measure of complexity of such a proof is its degree: $\max_f \deg(h_f f)$. We study the problem of establishing degree lower bounds for some {\em extended} NS proof systems: these systems prove the unsolvability of $\cal F$ by proving the unsolvability of a bigger set ${\cal F}\cup {\cal E}$, where the set $\cal E$ may use new variables $\overline r$, contains all polynomials $r^p - r$, and satisfies the following soundness condition: any $0,1$-assignment $\overline a$ to the variables $\overline x$ can be extended by an assignment $\overline b$ to the variables $\overline r$ such that $g(\overline a, \overline b) = 0$ for all $g \in {\cal E}$. We define a notion of pseudo-solutions of $\cal F$ and prove that the existence of pseudo-solutions with suitable parameters implies lower bounds for two extended NS proof systems, ENS and UENS, defined in Buss et al. (1996/97). Further, we give a combinatorial example of $\cal F$ and candidate pseudo-solutions based on the pigeonhole principle.
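A toy worked example may help fix the definition (it is not from the paper): the system $\{x = 0,\ x - 1 = 0\}$ is clearly unsolvable, and a degree-1 Nullstellensatz refutation witnesses this directly.

```latex
% Take F = {x, x - 1}.  Choosing h_x = 1 and h_{x-1} = -1 gives
\[
  1 \cdot x \; + \; (-1) \cdot (x - 1) \;=\; 1 ,
\]
% a valid NS refutation of degree \max_f \deg(h_f f) = 1.
```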
Current machine learning models achieve super-human performance in many real-world applications, yet they remain susceptible to imperceptible adversarial perturbations. The most effective defense against this problem is adversarial training, which trains the model on adversarially perturbed samples instead of the original ones. Various methods have been developed in recent years to improve adversarial training, such as data augmentation or modifying the training attacks. In this work, we examine the same problem from a new data-centric perspective. To this end, we first demonstrate that existing model-based methods can be equivalent to applying smaller perturbations or lower optimization weights to the hard training examples. Based on this finding, we propose detecting and removing these hard samples directly from the training procedure rather than applying complicated algorithms to mitigate their effects. For detection, we use the maximum softmax probability, an effective method for out-of-distribution detection, since the hard samples can be viewed as out-of-distribution with respect to the overall data distribution. Our results on the SVHN and CIFAR-10 datasets show the effectiveness of this method in improving adversarial training without adding too much computational cost.
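A minimal PyTorch sketch of the MSP-based detection step is given below; the threshold and helper names are illustrative assumptions, not the paper's code.

```python
import torch

def msp_scores(model, loader, device="cpu"):
    """Maximum softmax probability per training sample.

    Low MSP flags the 'hard' examples that this data-centric view removes
    before adversarial training.
    """
    model.eval()
    scores = []
    with torch.no_grad():
        for x, _ in loader:
            probs = torch.softmax(model(x.to(device)), dim=1)
            scores.append(probs.max(dim=1).values.cpu())
    return torch.cat(scores)

# keep = msp_scores(model, train_loader) > 0.5   # 0.5 is an assumed threshold
```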