Learning decompositions of expensive-to-evaluate black-box functions promises to scale Bayesian optimisation (BO) to high-dimensional problems. However, the success of these techniques depends on finding proper decompositions that accurately represent the black-box. While previous works learn those decompositions based on data, we investigate data-independent decomposition sampling rules in this paper. We find that data-driven learners of decompositions can easily be misled towards local decompositions that do not hold globally across the search space. We then formally show that a random tree-based decomposition sampler exhibits favourable theoretical guarantees that effectively trade off maximal information gain and functional mismatch between the actual black-box and the surrogate induced by the decomposition. These results motivate the development of the random decomposition upper-confidence bound algorithm (RDUCB), which is straightforward to implement (almost plug-and-play) and, surprisingly, yields significant empirical gains over the previous state-of-the-art on a comprehensive set of benchmarks. We also confirm the plug-and-play nature of our modelling component by integrating our method with HEBO, showing improved practical gains on the highest-dimensional tasks from Bayesmark.
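To illustrate the flavour of a data-independent, tree-based decomposition sampler, the sketch below draws a random tree over the input dimensions and uses its edges as two-dimensional additive components for an additive GP surrogate. The function name and construction are illustrative assumptions, not the authors' exact RDUCB implementation.

```python
import random

def sample_random_tree_decomposition(d, seed=0):
    """Sample a random tree over the d input dimensions.

    Each tree edge (i, j) becomes a two-dimensional additive component, so
    the GP surrogate can be modelled as a sum of low-dimensional kernels.
    Illustrative sketch only; not the authors' exact implementation.
    """
    rng = random.Random(seed)
    nodes = list(range(d))
    rng.shuffle(nodes)
    # Attach each node to a uniformly random earlier node: yields a tree.
    return [(nodes[rng.randrange(k)], nodes[k]) for k in range(1, d)]

# Resampling a fresh decomposition at each BO iteration keeps the sampler
# data-independent; the additive kernel is then a sum over these groups.
print(sample_random_tree_decomposition(d=6))
```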
Maximizing long-term rewards is the primary goal in sequential decision-making problems. The majority of existing methods assume that side information is freely available, enabling the learning agent to observe all features' states before making a decision. In real-world problems, however, collecting beneficial information is often costly. This implies that, besides the individual arms' rewards, the agent must also learn which features' states to observe in order to improve its decision-making strategy. The problem is aggravated in a non-stationary environment where the reward and cost distributions undergo abrupt changes over time. To address this dual learning problem, we extend the contextual bandit setting and allow the agent to observe subsets of features' states. The objective is to maximize the long-term average gain, defined as the difference between the accumulated rewards and the paid costs on average. The agent therefore faces a trade-off between minimizing the cost of information acquisition and potentially improving its decisions using the acquired information. To this end, we develop an algorithm that guarantees sublinear regret in time. Numerical results demonstrate the superiority of our proposed policy in a real-world scenario.
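To make the reward-minus-cost objective concrete, here is a deliberately simplified sketch: a vanilla UCB run over joint (arm, feature-subset) actions, where the gain of an action is its observed reward minus the cost of the observed features. The callback `draw_reward` and all names are assumptions; the paper's actual policy additionally handles contexts and non-stationarity, which this toy omits.

```python
import itertools
import math

def ucb_cost_aware(n_rounds, n_arms, feature_costs, draw_reward):
    """Toy UCB over joint (arm, feature-subset) actions.

    Gain of an action = observed reward minus the cost of the observed
    features. `draw_reward(arm, subset)` is a hypothetical environment
    callback returning a stochastic reward.
    """
    n_features = len(feature_costs)
    subsets = [frozenset(c) for r in range(n_features + 1)
               for c in itertools.combinations(range(n_features), r)]
    actions = [(a, s) for a in range(n_arms) for s in subsets]
    counts = {act: 0 for act in actions}
    means = {act: 0.0 for act in actions}
    for t in range(1, n_rounds + 1):
        # Optimistic index: empirical mean gain plus exploration bonus.
        def index(act):
            if counts[act] == 0:
                return math.inf
            return means[act] + math.sqrt(2.0 * math.log(t) / counts[act])
        act = max(actions, key=index)
        arm, subset = act
        gain = draw_reward(arm, subset) - sum(feature_costs[i] for i in subset)
        counts[act] += 1
        means[act] += (gain - means[act]) / counts[act]
    return means
```

For instance, `ucb_cost_aware(1000, 3, [0.1, 0.2], my_env)` would balance observing the two (hypothetical) costly features against their value for picking among three arms.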
Time-evolving surfaces can be modeled as two-dimensional functional time series, exploiting the tools of functional data analysis. Leveraging this approach, a forecasting framework for such complex data is developed. The main focus revolves around Conformal Prediction, a versatile nonparametric paradigm used to quantify uncertainty in prediction problems. Building upon recent variations of Conformal Prediction for functional time series, a probabilistic forecasting scheme for two-dimensional functional time series is presented, together with an extension of Functional Autoregressive Processes of order one to this setting. Estimation techniques for the latter process are introduced, and their performance is compared in terms of the resulting prediction regions. Finally, the proposed forecasting procedure and the uncertainty quantification technique are applied to a real dataset collecting daily observations of Sea Level Anomalies of the Black Sea.
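For concreteness, an order-one Functional Autoregressive (FAR(1)) model extended to surfaces can be written, under the conventional assumption of a bounded linear autoregressive operator, as

$$ \mathcal{X}_{t+1}(u,v) = \Psi(\mathcal{X}_t)(u,v) + \varepsilon_{t+1}(u,v), \qquad \mathcal{X}_t \in L^2([0,1]^2), $$

where $\Psi : L^2([0,1]^2) \to L^2([0,1]^2)$ is a bounded linear operator (with, e.g., $\|\Psi\| < 1$ to ensure a stationary solution) and $(\varepsilon_t)_t$ is a strong white noise in $L^2([0,1]^2)$. The notation here is illustrative of the standard formulation rather than taken verbatim from the paper.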
We initiate the study of Bayesian conversations, which model interactive communication between two strategic agents without a mediator. We compare this to communication through a mediator and investigate the settings in which mediation can expand the range of implementable outcomes. In the first part of the paper, we ask whether the distributions of posterior beliefs that can be induced by a mediator protocol can also be induced by an (unmediated) Bayesian conversation. We show this is not possible -- mediator protocols can ``correlate'' the posteriors in a way that unmediated conversations cannot. Additionally, we provide characterizations of which distributions over posteriors are achievable via mediator protocols and Bayesian conversations. In the second part of the paper, we delve deeper into the eventual outcome of two-player games after interactive communication. We focus on games where only one agent has a non-trivial action and examine the performance of communication protocols that are individually rational (IR) for both parties. We consider different levels of IR including ex-ante, interim, and ex-post; and we impose different restrictions on how Alice and Bob can deviate from the protocol: the players are committed/non-committed. Our key findings reveal that, in the cases of ex-ante and interim IR, the expected utilities achievable through a mediator are equivalent to those achievable through unmediated Bayesian conversations. However, in the models of ex-post IR and non-committed interim IR, we observe a separation in the achievable outcomes.
We consider hypergraph network design problems where the goal is to construct a hypergraph satisfying certain properties. In graph network design problems, the number of edges in an arbitrary solution is at most the square of the number of vertices. In contrast, in hypergraph network design problems, the number of hyperedges in an arbitrary solution could be exponential in the number of vertices and hence, additional care is necessary to design polynomial-time algorithms. The central theme of this work is to show that certain hypergraph network design problems admit solutions with a polynomial number of hyperedges and, moreover, can be solved in strongly polynomial time. Our work improves on the previous fastest pseudo-polynomial run-time for these problems. In addition, we develop algorithms that return (near-)uniform hypergraphs as solutions. The hypergraph network design problems that we focus upon are the splitting-off operation in hypergraphs, connectivity augmentation using hyperedges, and covering skew-supermodular functions using hyperedges. Our definition of the splitting-off operation in hypergraphs, together with our proof of its existence via a strongly polynomial-time algorithm that computes it, is likely to be of independent graph-theoretical interest.
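For reference, the standard notion underlying the covering results: a set function $p : 2^V \to \mathbb{Z}$ is skew-supermodular if, for all $X, Y \subseteq V$, at least one of the following holds:

$$ p(X) + p(Y) \le p(X \cap Y) + p(X \cup Y) \quad \text{or} \quad p(X) + p(Y) \le p(X \setminus Y) + p(Y \setminus X). $$

This is the commonly used definition in the connectivity-augmentation literature; the paper's precise variant may differ in details such as symmetry requirements.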
This paper focuses on spatial time-optimal motion planning, a generalization of the exact time-optimal path-following problem that allows the system to plan within a predefined space. In contrast to state-of-the-art methods, we drop the assumption that a collision-free geometric reference is given. Instead, we present a two-stage motion planning method that relies solely on a goal location and a geometric representation of the environment to compute a time-optimal trajectory that is compliant with system dynamics and constraints. To do so, the proposed scheme first computes an obstacle-free Pythagorean Hodograph parametric spline, and second solves a spatially reformulated minimum-time optimization problem. The spline obtained in the first stage is not a geometric reference but an extension of the environment representation, and thus time-optimality of the solution is guaranteed. The efficacy of the proposed approach is benchmarked on a known planar example and validated on a more complex spatial system, illustrating its versatility and applicability.
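For background, a planar polynomial curve $r(t) = (x(t), y(t))$ is a Pythagorean Hodograph (PH) curve if its hodograph satisfies

$$ x'(t)^2 + y'(t)^2 = \sigma(t)^2 $$

for some polynomial $\sigma(t)$, so the parametric speed $|r'(t)| = |\sigma(t)|$ is itself a polynomial and quantities such as arc length admit closed forms; the spatial case is defined analogously with $x'^2 + y'^2 + z'^2 = \sigma^2$. This closed-form speed is what makes PH splines attractive for the spatial minimum-time reformulation described above.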
Algorithms for online learning typically require one or more boundedness assumptions: that the domain is bounded, that the losses are Lipschitz, or both. In this paper, we develop a new setting for online learning with unbounded domains and non-Lipschitz losses. For this setting we provide an algorithm which guarantees $R_{T}(u)\le \tilde O(G\|u\|\sqrt{T}+L\|u\|^{2}\sqrt{T})$ regret on any problem where the subgradients satisfy $\|g_{t}\|\le G+L\|w_{t}\|$, and show that this bound is unimprovable without further assumptions. We leverage this algorithm to develop new saddle-point optimization algorithms that converge in duality gap in unbounded domains, even in the absence of meaningful curvature. Finally, we provide the first algorithm achieving non-trivial dynamic regret in an unbounded domain for non-Lipschitz losses, as well as a matching lower bound. The regret of our dynamic regret algorithm automatically improves to a novel $L^{*}$ bound when the losses are smooth.
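In standard online convex optimization notation, the guarantee above reads as follows: with $g_t \in \partial \ell_t(w_t)$ and the relaxed growth condition $\|g_t\| \le G + L\|w_t\|$ replacing the usual Lipschitz bound $\|g_t\| \le G$, the algorithm controls the linearized regret

$$ R_T(u) = \sum_{t=1}^{T} \langle g_t, w_t - u \rangle \;\ge\; \sum_{t=1}^{T} \big(\ell_t(w_t) - \ell_t(u)\big), $$

where the inequality follows from convexity, so bounding $R_T(u)$ by $\tilde O\big(G\|u\|\sqrt{T} + L\|u\|^2\sqrt{T}\big)$ also bounds the true regret against any fixed comparator $u$. The notation here follows convention and is not taken verbatim from the paper.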
We present a combination technique based on mixed differences of both spatial approximations and quadrature formulae for the stochastic variables to efficiently solve a class of Optimal Control Problems (OCPs) constrained by random partial differential equations. The method requires solving the OCP for several low-fidelity spatial grids and quadrature formulae for the objective functional. All the computed solutions are then linearly combined to obtain a final approximation which, under suitable regularity assumptions, preserves the same accuracy as fine tensor-product approximations while drastically reducing the computational cost. The combination technique involves only tensor-product quadrature formulae, so the discretized OCPs preserve the convexity of the continuous OCP. Hence, the combination technique avoids the inconveniences of Multilevel Monte Carlo and/or sparse grids approaches, but remains suitable for high-dimensional problems. The manuscript presents an a priori procedure to choose the most important mixed differences and an asymptotic complexity analysis, which shows that the complexity is exclusively determined by the spatial solver. Numerical experiments validate the results.
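Schematically, and with illustrative index notation not taken from the manuscript, such a combination technique assembles the final approximation from mixed differences of tensor-product solutions $u_{\alpha,\beta}$ (spatial discretization level $\alpha$, quadrature level $\beta$):

$$ u_{\mathrm{CT}} = \sum_{(\alpha,\beta) \in \mathcal{I}} \Delta^{\mathrm{s}}_{\alpha} \Delta^{\mathrm{q}}_{\beta}\, u_{\alpha,\beta}, \qquad \Delta^{\mathrm{s}}_{\alpha} u_{\alpha,\beta} = u_{\alpha,\beta} - u_{\alpha-1,\beta}, \quad \Delta^{\mathrm{q}}_{\beta} u_{\alpha,\beta} = u_{\alpha,\beta} - u_{\alpha,\beta-1}, $$

where the a priori procedure selects the index set $\mathcal{I}$ of the most important mixed differences.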
Hougardy and Schroeder (WG 2014) proposed a combinatorial technique for pruning the search space in the traveling salesman problem, establishing that, for a given instance, certain edges cannot be present in any optimal tour. We describe an implementation of their technique, employing an exact TSP solver to locate k-opt moves in the elimination process. In our computational study, we combine LP reduced-cost elimination with the new combinatorial algorithm. We report results on a set of geometric instances, with the number of points n ranging from 3,038 up to 115,475. The test set includes all TSPLIB instances having at least 3,000 points, together with 250 randomly generated instances, each with 10,000 points, and three currently unsolved instances having 100,000 or more points. In all but two of the test instances, the complete-graph edge sets were reduced to under 3n edges. For the three large unsolved instances, repeated runs of the elimination process reduced the graphs to under 2.5n edges.
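As context for the LP-based half of the study, the textbook reduced-cost fixing rule can be sketched as follows; this illustrates the standard LP test, not the Hougardy-Schroeder combinatorial test, and the names are illustrative.

```python
def reduced_cost_eliminate(edges, reduced_cost, lp_lower_bound, best_tour_length):
    """Standard LP reduced-cost fixing for a minimization TSP.

    At an optimal LP basis the reduced costs are nonnegative, and forcing a
    nonbasic edge e into the solution raises the LP bound by at least
    reduced_cost[e]. If that exceeds the length of a known tour, e cannot
    appear in any optimal tour, so it is dropped from the edge set.
    """
    return [e for e in edges
            if lp_lower_bound + reduced_cost[e] <= best_tour_length]
```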
The adaptive processing of structured data is a long-standing research topic in machine learning that investigates how to automatically learn a mapping from a structured input to outputs of varying nature. Recently, there has been increasing interest in the adaptive processing of graphs, which has led to the development of different neural network-based methodologies. In this thesis, we take a different route and develop a Bayesian Deep Learning framework for graph learning. The dissertation begins with a review of the principles on which most of the methods in the field are built, followed by a study on graph classification reproducibility issues. We then proceed to bridge the basic ideas of deep learning for graphs with the Bayesian world, by building our deep architectures in an incremental fashion. This framework allows us to consider graphs with discrete and continuous edge features, producing unsupervised embeddings rich enough to reach the state of the art on several classification tasks. Our approach is also amenable to a Bayesian nonparametric extension that automates the choice of almost all of the model's hyper-parameters. Two real-world applications demonstrate the efficacy of deep learning for graphs. The first concerns the prediction of information-theoretic quantities for molecular simulations with supervised neural models. After that, we exploit our Bayesian models to solve a malware-classification task while being robust to intra-procedural code obfuscation techniques. We conclude the dissertation with an attempt to blend the best of the neural and Bayesian worlds together. The resulting hybrid model is able to predict multimodal distributions conditioned on input graphs, and can thus model stochasticity and uncertainty better than most existing approaches. Overall, we aim to provide a Bayesian perspective on the rich research field of deep learning for graphs.
This manuscript portrays optimization as a process. In many practical applications, the environment is so complex that it is infeasible to lay out a comprehensive theoretical model and use classical algorithmic theory and mathematical optimization. It is necessary, as well as beneficial, to take a robust approach: applying an optimization method that learns as it goes along, drawing on experience as more aspects of the problem are observed. This view of optimization as a process has become prominent in varied fields and has led to spectacular successes in modeling and in systems that are now part of our daily lives.