We investigate a family of approximate multi-step proximal point methods, obtained by accelerating the proximal point method with implicit linear discretizations of gradient flow. The resulting multi-step methods have a per-update computational cost similar to that of the proximal point method. We examine several optimization methods for which applying an approximate multi-step proximal point method results in improved convergence behavior, and we argue that this improvement stems from the reduced truncation error in approximating gradient flow.
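For context: the proximal point method is implicit Euler applied to gradient flow $\dot{x} = -\nabla f(x)$, and a multi-step proximal update of the kind described above can be obtained from an implicit linear multi-step discretization such as BDF2. Below is a minimal sketch on a quadratic objective, where the proximal operator has a closed form; the choice of BDF2, the step size, and the test problem are illustrative assumptions, not the authors' specific scheme.

```python
import numpy as np

# Quadratic test objective f(x) = 0.5 x^T A x - b^T x, with grad f(x) = A x - b.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + np.eye(20)                 # symmetric positive definite
b = rng.standard_normal(20)
x_star = np.linalg.solve(A, b)           # exact minimizer, used only to monitor error

def prox(t, v):
    # prox_{t f}(v) = argmin_x f(x) + ||x - v||^2 / (2t); closed form for a quadratic.
    return np.linalg.solve(np.eye(20) + t * A, v + t * b)

h = 0.5                                  # illustrative step size

# One-step method: implicit Euler on x' = -grad f(x), i.e. the proximal point method.
x = np.zeros(20)
for _ in range(50):
    x = prox(h, x)

# Two-step method: BDF2 applied to gradient flow,
#   (3 x_{k+1} - 4 x_k + x_{k-1}) / (2h) = -grad f(x_{k+1}),
# which rearranges to x_{k+1} = prox_{(2h/3) f}((4 x_k - x_{k-1}) / 3):
# one proximal evaluation per step, the same cost as a proximal point step.
x_prev, x_curr = np.zeros(20), prox(h, np.zeros(20))
for _ in range(50):
    x_prev, x_curr = x_curr, prox(2 * h / 3, (4 * x_curr - x_prev) / 3)

print("proximal point error:", np.linalg.norm(x - x_star))
print("BDF2 proximal error: ", np.linalg.norm(x_curr - x_star))
```

Each BDF2 step costs a single proximal evaluation, which is the sense in which the per-update cost matches the proximal point method.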
We examine (directed) greybox fuzzing from a geometrical perspective, viewing dissimilarities on inputs and on control flow graphs (with dynamical statistics) as primitive objects of interest. We prototype and evaluate GoExploreFuzz, a greybox fuzzer for time-intensive programs that incorporates this perspective. The results indicate useful capabilities for greybox fuzzing that have hitherto been underutilized, notably quantifying the diversity of paths and autonomously tuning the "bandwidth" of mutations.
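The paper defines its own dissimilarities on inputs and on control-flow graphs; as a generic illustration of what "quantifying the diversity of paths" can look like (the edge-coverage representation and the mean pairwise distance summary below are expository assumptions, not GoExploreFuzz's actual metric), one can compare coverage sets with the Jaccard distance:

```python
from itertools import combinations

def jaccard_distance(cov_a: frozenset, cov_b: frozenset) -> float:
    """1 - |A ∩ B| / |A ∪ B| on sets of covered CFG edges."""
    union = cov_a | cov_b
    if not union:
        return 0.0
    return 1.0 - len(cov_a & cov_b) / len(union)

def path_diversity(coverages: list) -> float:
    """Mean pairwise Jaccard distance over a corpus of edge-coverage sets."""
    pairs = list(combinations(coverages, 2))
    if not pairs:
        return 0.0
    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)

# Toy corpus: each frozenset is the set of CFG edges covered by one execution.
corpus = [frozenset({"a->b", "b->c"}), frozenset({"a->b", "b->d"}), frozenset({"a->e"})]
print(path_diversity(corpus))   # higher values indicate more diverse paths
```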
In this paper, a new method based on TOPSIS and optimization models is proposed for multi-attribute group decision-making in the environment of interval-valued intuitionistic fuzzy sets. Firstly, by minimizing the sum of differences between individual evaluations and the overall consistent evaluations of all experts, a new optimization model is established for determining the expert weights. Secondly, based on the TOPSIS method, an improved closeness index for evaluating each alternative is obtained. Finally, the attribute weights are determined by establishing an optimization model that maximizes the closeness of each alternative, and these weights are substituted into the closeness index so that the alternatives can be ranked. Combining all of these steps, the complete fuzzy multi-attribute group decision-making algorithm is formulated, which exploits the advantages of both subjective and objective weighting methods. In the end, the feasibility and effectiveness of the proposed method are verified by a real case study.
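For reference, the closeness index at the core of classical crisp TOPSIS (which the interval-valued intuitionistic fuzzy version generalizes) ranks alternatives by their relative distance to the positive and negative ideal solutions. A minimal sketch for a crisp decision matrix; the weights and attribute types in the example are made up:

```python
import numpy as np

def topsis_closeness(X: np.ndarray, w: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Classic TOPSIS closeness index for a crisp decision matrix.

    X: (alternatives x attributes) matrix; w: attribute weights summing to 1;
    benefit: boolean mask, True for benefit attributes, False for cost attributes.
    """
    R = X / np.linalg.norm(X, axis=0)           # vector-normalize each attribute column
    V = R * w                                    # weighted normalized matrix
    ideal_pos = np.where(benefit, V.max(axis=0), V.min(axis=0))
    ideal_neg = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal_pos, axis=1)
    d_neg = np.linalg.norm(V - ideal_neg, axis=1)
    return d_neg / (d_pos + d_neg)               # closer to 1 means a better alternative

X = np.array([[250., 16., 12.], [200., 16., 8.], [300., 32., 16.]])
w = np.array([0.3, 0.4, 0.3])
benefit = np.array([False, True, True])          # first attribute is a cost
print(np.argsort(-topsis_closeness(X, w, benefit)))  # ranking, best first
```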
We construct an efficient class of increasingly high-order (up to 17th-order) essentially non-oscillatory schemes with multi-resolution (ENO-MR) for solving hyperbolic conservation laws. The candidate stencils for constructing ENO-MR schemes range from a first-order one-point stencil up to the designed very-high-order stencil. The proposed ENO-MR schemes adopt a very simple and efficient strategy that only requires computing the highest-order derivatives on part of the candidate stencils. Besides simplicity and high efficiency, ENO-MR schemes are completely parameter-free and essentially scale-invariant. Theoretical analysis and numerical computations show that ENO-MR schemes achieve the designed high-order convergence in smooth regions, which may contain high-order critical points (local extrema), and retain the ENO property for strong shocks. In addition, ENO-MR schemes capture complex flow structures very well.
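For context on the adaptive-stencil idea underlying ENO-type schemes, the sketch below shows the classic divided-difference stencil selection (not the ENO-MR multi-resolution rule of the paper): the stencil is grown one point at a time toward the side whose divided differences are smaller in magnitude, so it avoids crossing discontinuities. A minimal 1D interpolation sketch:

```python
import numpy as np

def eno_interpolate(x, f, xi, degree):
    """Adaptive-stencil ENO interpolation of point values (classic selection idea,
    shown for context; this is not the ENO-MR selection rule of the paper).
    x, f: uniform grid points and values; xi: evaluation point; degree: polynomial degree.
    Assumes xi lies far enough from the boundary that the stencil stays in range."""
    i = int(np.searchsorted(x, xi)) - 1        # cell [x_i, x_{i+1}] containing xi
    left, right = i, i + 1
    dd = {}

    def divdiff(a, b):                          # Newton divided difference f[x_a, ..., x_b]
        if a == b:
            return f[a]
        if (a, b) not in dd:
            dd[(a, b)] = (divdiff(a + 1, b) - divdiff(a, b - 1)) / (x[b] - x[a])
        return dd[(a, b)]

    # Grow the stencil one point at a time towards the "smoother" side, i.e. the
    # side whose candidate divided difference is smaller in magnitude.
    for _ in range(degree - 1):
        if abs(divdiff(left - 1, right)) < abs(divdiff(left, right + 1)):
            left -= 1
        else:
            right += 1

    # Evaluate the Newton-form interpolant on the chosen stencil.
    result, basis = 0.0, 1.0
    for k in range(left, right + 1):
        result += divdiff(left, k) * basis
        basis *= xi - x[k]
    return result

x = np.linspace(0.0, 1.0, 21)
f = np.where(x < 0.5, np.sin(2 * np.pi * x), 1.0 + x)  # smooth pieces with a jump at 0.5
print(eno_interpolate(x, f, 0.42, degree=3))           # stencil avoids crossing the jump
print(np.sin(2 * np.pi * 0.42))                        # reference value
```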
We propose a new framework called recursive lattice reduction for finding short non-zero vectors in a lattice or for finding dense sublattices of a lattice. At a high level, the framework works by recursively searching for dense sublattices of dense sublattices (or their duals). Eventually, the procedure encounters a recursive call on a lattice $\mathcal{L}$ with relatively low rank $k$, at which point we simply use a known algorithm to find a short non-zero vector in $\mathcal{L}$. We view our framework as complementary to basis reduction algorithms, which similarly work to reduce an $n$-dimensional lattice problem with some approximation factor $\gamma$ to an exact lattice problem in dimension $k < n$, with a tradeoff between $\gamma$, $n$, and $k$. Our framework provides an alternative and arguably simpler perspective, which in particular can be described without explicitly referencing any specific basis of the lattice, Gram-Schmidt vectors, or even projection (though implementations of algorithms in this framework will likely make use of such things). We present a number of specific instantiations of our framework. Our main concrete result is a reduction that matches the tradeoff between $\gamma$, $n$, and $k$ achieved by the best-known basis reduction algorithms (in terms of the Hermite factor, up to low-order terms) across all parameter regimes. In fact, this reduction also can be used to find dense sublattices with any rank $\ell$ satisfying $\min\{\ell,n-\ell\} \leq n-k+1$, using only an oracle for SVP (or even just Hermite SVP) in $k$ dimensions, which is itself a novel result (as far as the authors know). We also show a very simple reduction that achieves the same tradeoff in quasipolynomial time. Finally, we present an automated approach for searching for algorithms in this framework that (provably) achieve better approximations with fewer oracle calls.
We consider the problem of aggregating the judgements of a group of experts to form a single prior distribution representing the judgements of the group. We develop a Bayesian hierarchical model to reconcile the judgements of the group of experts based on elicited quantiles for continuous quantities and probabilities for one-off events. Previous Bayesian reconciliation methods have not been used widely, if at all, in contrast to pooling methods and consensus-based approaches. To address this, we embed Bayesian reconciliation within the probabilistic Delphi method. The result is to furnish the outcome of the probabilistic Delphi method with a direct probabilistic interpretation, with the resulting prior representing the judgements of the decision maker. We can use the rationales from the Delphi process to group the experts for the hierarchical modelling. We illustrate the approach with applications to studies evaluating erosion in embankment dams and pump failures in a water pumping station, and assess the properties of the approach using the TU Delft database of expert judgement studies. We see that, even using an off-the-shelf implementation of the approach, it outperforms individual experts, equal weighting of experts, and the classical method based on the log score.
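As a hedged illustration of the kind of hierarchical structure involved (the likelihood, shared-bias terms, and grouping below are expository assumptions, not the paper's exact specification), a Bayesian reconciliation model might treat expert $i$'s elicited median $m_i$ as a noisy, possibly biased observation of the unknown quantity $\theta$, with biases shared within rationale-based groups $g(i)$:
$$ m_i \mid \theta, b_{g(i)}, \sigma_i^2 \sim \mathcal{N}\big(\theta + b_{g(i)}, \sigma_i^2\big), \qquad b_g \sim \mathcal{N}(0, \tau^2), \qquad \theta \sim \pi_0(\theta). $$
The reconciled prior for the decision maker is then the posterior $p(\theta \mid m_1, \dots, m_n)$, which shrinks towards experts whose judgements are mutually consistent.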
We resurrect the infamous harmonic mean estimator for computing the marginal likelihood (Bayesian evidence) and solve its problematic large variance. The marginal likelihood is a key component of Bayesian model selection to evaluate model posterior probabilities; however, its computation is challenging. The original harmonic mean estimator, first proposed by Newton and Raftery in 1994, involves computing the harmonic mean of the likelihood given samples from the posterior. It was immediately realised that the original estimator can fail catastrophically since its variance can become very large (possibly not finite). A number of variants of the harmonic mean estimator have been proposed to address this issue, although none have proven fully satisfactory. We present the \emph{learnt harmonic mean estimator}, a variant of the original estimator that solves its large variance problem. This is achieved by interpreting the harmonic mean estimator as importance sampling and introducing a new target distribution. The new target distribution is learned to approximate the optimal but inaccessible target, while minimising the variance of the resulting estimator. Since the estimator requires samples of the posterior only, it is agnostic to the sampling strategy used. We validate the estimator on a variety of numerical experiments, including a number of pathological examples where the original harmonic mean estimator fails catastrophically. We also consider a cosmological application, where our approach leads to $\sim$ 3 to 6 times more samples than current state-of-the-art techniques in 1/3 of the time. In all cases our learnt harmonic mean estimator is shown to be highly accurate. The estimator is computationally scalable and can be applied to problems of dimension $O(10^3)$ and beyond. Code implementing the learnt harmonic mean estimator is made publicly available.
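For context, the importance-sampling reinterpretation described above can be written down explicitly (the notation here, $\mathcal{L}$ for the likelihood, $\pi$ for the prior, $z$ for the marginal likelihood and $\varphi$ for a normalised target density, is chosen for exposition). Given posterior samples $\theta_i$, the re-targeted harmonic mean estimator of the reciprocal evidence $\rho = 1/z$ is
$$ \hat{\rho} = \frac{1}{N}\sum_{i=1}^{N} \frac{\varphi(\theta_i)}{\mathcal{L}(\theta_i)\,\pi(\theta_i)}, \qquad \mathbb{E}[\hat{\rho}] = \int \frac{\varphi(\theta)}{\mathcal{L}(\theta)\,\pi(\theta)}\,\frac{\mathcal{L}(\theta)\,\pi(\theta)}{z}\,\mathrm{d}\theta = \frac{1}{z}. $$
Choosing $\varphi = \pi$ recovers the original estimator, while the zero-variance optimal target is the posterior itself, $\varphi^{\mathrm{opt}} = \mathcal{L}\pi/z$, which is inaccessible because it requires $z$; this is the target that the learnt estimator approximates while keeping the variance of $\hat{\rho}$ under control.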
We present a new Krylov subspace recycling method for solving a linear system of equations, or a sequence of slowly changing linear systems. Our new method, named GMRES-SDR, combines randomized sketching and deflated restarting in a way that avoids orthogonalizing a full Krylov basis. We provide new theory which characterizes sketched GMRES, with and without augmentation, as a projection method using a semi-inner product. We present results of numerical experiments demonstrating the effectiveness of GMRES-SDR over competitor methods such as GMRES-DR and GCRO-DR.
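As background on the sketching component, here is a generic sketch-and-solve illustration: build a cheaply generated, non-orthogonal Krylov basis via truncated Arnoldi and minimize the residual in a randomly sketched norm. It omits the deflation, recycling, and restarting that define GMRES-SDR, and the basis construction, sketch size, and test problem are assumptions for exposition.

```python
import numpy as np

def sketched_gmres(A, b, m=50, trunc=2, sketch_rows=200, rng=None):
    """Sketch-and-solve GMRES: k-truncated (non-orthogonal) Krylov basis plus a
    small least-squares problem in a randomly sketched norm. Generic illustration,
    without the deflated restarting / recycling components of GMRES-SDR."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    S = rng.standard_normal((sketch_rows, n)) / np.sqrt(sketch_rows)  # Gaussian sketch

    # Truncated Arnoldi: orthogonalize each new vector only against the last `trunc` ones.
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = A @ V[:, j - 1]
        for i in range(max(0, j - trunc), j):
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)

    # Solve min_y || S (b - A V y) ||_2 instead of minimizing the full-size residual norm.
    SAV = S @ (A @ V)
    Sb = S @ b
    y, *_ = np.linalg.lstsq(SAV, Sb, rcond=None)
    return V @ y

# Toy example: a well-conditioned nonsymmetric system.
rng = np.random.default_rng(1)
n = 1000
A = np.eye(n) + 0.05 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
x = sketched_gmres(A, b)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```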
We propose and compare methods for the analysis of extreme events in complex systems governed by PDEs that involve random parameters, in situations where we are interested in quantifying the probability that a scalar function of the system's solution is above a threshold. If the threshold is large, this probability is small and its accurate estimation is challenging. To tackle this difficulty, we blend theoretical results from large deviation theory (LDT) with numerical tools from PDE-constrained optimization. Our methods first compute parameters that minimize the LDT-rate function over the set of parameters leading to extreme events, using adjoint methods to compute the gradient of this rate function. The minimizers give information about the mechanism of the extreme events as well as estimates of their probability. We then propose a series of methods to refine these estimates, either via importance sampling or geometric approximation of the extreme event sets. Results are formulated for general parameter distributions, and detailed expressions are provided for the case of Gaussian distributions. We give theoretical and numerical arguments showing that the performance of our methods is insensitive to the extremeness of the events we are interested in. We illustrate the application of our approach to quantify the probability of extreme tsunami events on shore. Tsunamis are typically caused by a sudden, unpredictable change of the ocean floor elevation during an earthquake. We model this change as a random process, which takes into account the underlying physics. We use the one-dimensional shallow water equation to model tsunamis numerically. In the context of this example, we present a comparison of our methods for extreme event probability estimation, and find which type of ocean floor elevation change leads to the largest tsunamis on shore.
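For orientation (the notation is chosen here for exposition): write $\theta$ for the random parameters with rate function $I$, $u(\theta)$ for the corresponding PDE solution, $F$ for the scalar quantity of interest and $z$ for the threshold. The LDT step solves the PDE-constrained optimization problem
$$ \theta^{\star} = \operatorname*{arg\,min}_{\theta:\, F(u(\theta)) \ge z} I(\theta), \qquad P\big(F(u(\theta)) \ge z\big) \approx C(z)\, e^{-I(\theta^{\star})}, $$
where, for Gaussian parameters, $I(\theta) = \tfrac{1}{2}\langle \theta, \mathcal{C}^{-1}\theta \rangle$ with covariance $\mathcal{C}$. The minimizer $\theta^{\star}$ describes the most likely extreme-event mechanism, and the refinements mentioned above (importance sampling, geometric approximation of the extreme-event set) can be read as sharpening the estimate of the prefactor $C(z)$.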
Numerical methods for computing the solutions of Markov backward stochastic differential equations (BSDEs) driven by continuous-time Markov chains (CTMCs) are explored. The main contributions of this paper are as follows: (1) we observe that Euler-Maruyama temporal discretization methods for solving Markov BSDEs driven by CTMCs are equivalent to exponential integrators for solving the associated systems of ordinary differential equations (ODEs); (2) we introduce multi-stage Euler-Maruyama methods for effectively solving "stiff" Markov BSDEs driven by CTMCs; these BSDEs typically arise from the spatial discretization of Markov BSDEs driven by Brownian motion; (3) we propose a multilevel spatial discretization method on sparse grids that efficiently approximates high-dimensional Markov BSDEs driven by Brownian motion with a combination of multiple Markov BSDEs driven by CTMCs on grids with different resolutions. We also illustrate the effectiveness of the presented methods with a number of numerical experiments in which we treat nonlinear BSDEs arising from option pricing problems in finance.
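As background for the exponential-integrator connection mentioned in (1), here is a generic illustration (not the paper's derivation or scheme): for a stiff semilinear ODE system $y' = Ay + g(y)$, the exponential Euler step treats the linear part exactly. A minimal sketch, assuming a dense matrix small enough for `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

def exponential_euler(A, g, y0, h, steps):
    """Exponential Euler for y' = A y + g(y): the stiff linear part A is treated
    exactly via the matrix exponential, the nonlinearity g explicitly:
        y_{n+1} = e^{hA} y_n + h * phi1(hA) g(y_n),  phi1(z) = (e^z - 1)/z.
    Generic illustration of an exponential integrator, not the paper's method."""
    n = len(y0)
    E = expm(h * A)
    phi1 = np.linalg.solve(h * A, E - np.eye(n))   # phi1(hA) = (hA)^{-1} (e^{hA} - I)
    y = y0.copy()
    for _ in range(steps):
        y = E @ y + h * (phi1 @ g(y))
    return y

# Stiff toy problem: fast linear decay plus a mild nonlinearity.
A = np.diag([-1.0, -100.0, -10000.0])
g = lambda y: 0.1 * np.sin(y)
y = exponential_euler(A, g, y0=np.ones(3), h=0.1, steps=100)
print(y)
```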
Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, and endeavours to extend this knowledge without targeting the original task result in catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task-incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern 1) a taxonomy and extensive overview of the state of the art, 2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner, and 3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny Imagenet, the large-scale unbalanced iNaturalist dataset, and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and we qualitatively compare methods in terms of required memory, computation time, and storage.