In this paper, we construct a derivative-free multi-step iterative scheme based on Steffensen's method. To increase the order of convergence without excessively increasing the number of functional evaluations, we freeze the divided differences used from the second step onward and apply a weight function to already-evaluated operators. In this way, we define a derivative-free family of multi-step methods with convergence order 2m, where m is the number of steps, depending on several parameters and exhibiting dynamical behaviour that is, in some cases, similar to that of Steffensen's method. In addition, we study how to increase the convergence order of the proposed family by introducing memory in two different ways: using the usual divided differences and using the Kurchatov divided differences. We perform numerical experiments to examine the behaviour of the proposed family, suggest different weight functions, and visualize the dynamical behaviour in some cases by means of dynamical planes.
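For orientation, recall that Steffensen's method replaces the derivative in Newton's method by a first-order divided difference, $x_{n+1} = x_n - f(x_n)/f[x_n, w_n]$ with $w_n = x_n + f(x_n)$ and $f[x, y] = (f(y) - f(x))/(y - x)$. A two-step member of the kind of family described here (a hedged sketch with an assumed weight function $H$, not necessarily the authors' exact scheme) could read
\[
y_n = x_n - \frac{f(x_n)}{f[x_n, w_n]}, \qquad x_{n+1} = y_n - H(t_n)\,\frac{f(y_n)}{f[x_n, w_n]}, \qquad t_n = \frac{f(y_n)}{f(x_n)},
\]
where the divided difference $f[x_n, w_n]$ is frozen in the second step and the weight $H$ uses only operators that have already been evaluated.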
Grover's algorithm is a well-known contribution to quantum computing. It searches for one value within an unordered sequence faster than any classical algorithm. A fundamental part of this algorithm is the so-called oracle, a quantum circuit that marks the quantum state corresponding to the desired value. A generalization of it is the oracle for Amplitude Amplification, which marks multiple desired states. In this work we present a classical algorithm that builds a phase-marking oracle for Amplitude Amplification. This oracle performs a less-than operation, marking the states that represent natural numbers smaller than a given one. Results of both simulations and experiments are shown to demonstrate its functionality. This less-than oracle implementation works on any number of qubits and does not require any ancilla qubits. Regarding depth, the proposed implementation is compared with the one generated by Qiskit's automatic method, UnitaryGate. We show that the depth of our less-than oracle implementation is always lower. This difference is significant enough for our method to outperform UnitaryGate on real quantum hardware.
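To make the idea concrete, the following Qiskit sketch builds a less-than phase oracle by marking the binary-prefix blocks that partition $[0, k)$; this is an illustrative, ancilla-free construction consistent with the description above, not necessarily the exact circuit proposed in the work (the function name and decomposition choices are ours):

```python
from qiskit import QuantumCircuit

def less_than_oracle(n: int, k: int) -> QuantumCircuit:
    """Phase oracle on n qubits: |x> -> -|x> iff x < k (0 <= k <= 2**n).

    Qubit j holds bit j of x (little-endian, Qiskit convention).
    """
    qc = QuantumCircuit(n)
    for i in range(n - 1, -1, -1):           # bit positions, MSB first
        if not (k >> i) & 1:
            continue
        # Every x that agrees with k on bits n-1..i+1 and has bit i = 0
        # satisfies x < k; flip the phase of exactly those states.
        zeros = [i] + [j for j in range(i + 1, n) if not (k >> j) & 1]
        for q in zeros:                      # X-conjugate qubits that must be 0
            qc.x(q)
        qubits = list(range(i, n))           # bit i plus all prefix bits
        if len(qubits) == 1:
            qc.z(qubits[0])
        else:                                # multi-controlled Z via H-MCX-H
            qc.h(qubits[0])
            qc.mcx(qubits[1:], qubits[0])
            qc.h(qubits[0])
        for q in zeros:                      # undo the X conjugation
            qc.x(q)
    return qc
```

Each basis state $x < k$ lies in exactly one such prefix block (determined by the most significant bit where $x$ is 0 while $k$ is 1), so its phase is flipped exactly once.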
Combining machine learning and constrained optimization, Predict+Optimize tackles optimization problems containing parameters that are unknown at the time of solving. Prior works focus on cases with unknowns only in the objectives. A new framework was recently proposed to cater for unknowns also in constraints by introducing a loss function, called Post-hoc Regret, that takes into account the cost of correcting an unsatisfiable prediction. Since Post-hoc Regret is non-differentiable, the previous work computes only its approximation. While the notion of Post-hoc Regret is general, its specific implementation is applicable only to packing and covering linear programming problems. In this paper, we first show how to compute Post-hoc Regret exactly for any optimization problem solvable by a recursive algorithm satisfying simple conditions. Experimentation demonstrates substantial improvement in the quality of solutions compared to the earlier approximation approach. Furthermore, we study experimentally the empirical behavior of different combinations of correction and penalty functions used in the Post-hoc Regret on the same benchmarks. The results provide insights for defining the appropriate Post-hoc Regret in different application scenarios.
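In symbols (notation assumed here for illustration), writing $x^*(\theta)$ for an optimal solution under the true parameters $\theta$ and $x^{\mathrm{corr}}$ for the feasible solution obtained by applying a correction function to the possibly infeasible $x^*(\hat{\theta})$, the Post-hoc Regret has the general shape
\[
\mathrm{PReg}(\hat{\theta}, \theta) = \mathrm{obj}_{\theta}\bigl(x^{\mathrm{corr}}\bigr) - \mathrm{obj}_{\theta}\bigl(x^{*}(\theta)\bigr) + \mathrm{Pen}\bigl(x^{*}(\hat{\theta}) \rightarrow x^{\mathrm{corr}}\bigr),
\]
where the penalty term charges the cost of performing the correction; the choice of correction and penalty functions is exactly what the experiments above vary.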
Geometric high-order regularization methods, such as mean curvature and Gaussian curvature regularization, have been intensively studied over the last decades owing to their ability to preserve geometric properties, including image edges, corners, and contrast. However, the dilemma between restoration quality and computational efficiency is an essential roadblock for high-order methods. In this paper, we propose fast multi-grid algorithms for minimizing both mean curvature and Gaussian curvature energy functionals without sacrificing accuracy for efficiency. Unlike existing approaches based on operator splitting and the Augmented Lagrangian method (ALM), no artificial parameters are introduced in our formulation, which guarantees the robustness of the proposed algorithms. Meanwhile, we adopt a domain decomposition method to promote parallel computing and use a fine-to-coarse structure to accelerate convergence. Numerical experiments are presented on image denoising, CT, and MRI reconstruction problems to demonstrate the superiority of our method in preserving geometric structures and fine details. The proposed method is also shown to be effective in dealing with large-scale image processing problems, recovering an image of size $1024\times 1024$ within $40$s, whereas the ALM method requires around $200$s.
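As a representative instance of the energies involved (the paper's exact functionals may differ in details), the mean curvature model for an image $f$ on a domain $\Omega$ penalizes the curvature of the image surface $z = u(x)$,
\[
E_{\mathrm{MC}}(u) = \int_{\Omega} \left| \nabla \cdot \frac{\nabla u}{\sqrt{1 + |\nabla u|^{2}}} \right| \mathrm{d}x + \frac{\lambda}{2} \int_{\Omega} (u - f)^{2}\, \mathrm{d}x,
\]
while the Gaussian curvature model uses $K(u) = \det(\nabla^{2} u)/(1 + |\nabla u|^{2})^{2}$ in place of the mean curvature term.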
Many promising applications of supervised machine learning face hurdles in the acquisition of labeled data in sufficient quantity and quality, creating an expensive bottleneck. To overcome such limitations, techniques that do not depend on ground-truth labels have been studied, including weak supervision and generative modeling. While these techniques would seem to be usable in concert, each improving the other, how to build an interface between them is not well understood. In this work, we propose a model fusing programmatic weak supervision and generative adversarial networks and provide theoretical justification motivating this fusion. The proposed approach captures discrete latent variables in the data alongside the label estimate derived from weak supervision. Aligning the two allows for better modeling of the sample-dependent accuracies of the weak supervision sources, improving the estimate of the unobserved labels. It is the first approach to enable data augmentation through weakly supervised synthetic images and pseudolabels. Additionally, its learned latent variables can be inspected qualitatively. The model outperforms baseline weak supervision label models on a number of multiclass image classification datasets, improves the quality of generated images, and further improves end-model performance through data augmentation with synthetic samples.
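One plausible way to formalize the alignment (notation assumed here, not taken from the paper) is an InfoGAN-style objective in which the generator $G(z, c)$ carries a discrete code $c$ and an inference head $q_{\phi}(c \mid x)$ is tied to the label-model posterior $\hat{y}(x)$ produced by the weak supervision sources:
\[
\min_{G, \phi} \max_{D}\; \mathcal{L}_{\mathrm{GAN}}(G, D) + \lambda\, \mathbb{E}_{x}\bigl[\mathrm{CE}\bigl(\hat{y}(x), q_{\phi}(c \mid x)\bigr)\bigr],
\]
so that the discrete latent structure and the weak-supervision label estimate inform each other.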
Estimating a Gibbs density function from a sample is an important problem in computational statistics and statistical learning. Although the well-established maximum likelihood method is commonly used, it requires the computation of the partition function (i.e., the normalization constant of the density). This function can be calculated easily for simple low-dimensional problems, but its computation is difficult or even intractable for general densities and high-dimensional problems. In this paper, we propose an alternative approach based on Maximum A Posteriori (MAP) estimators, which we name Maximum Recovery MAP (MR-MAP), to derive estimators that do not require the computation of the partition function, and we reformulate the problem as an optimization problem. We further propose a least-action-type potential that allows us to solve the optimization problem quickly as a feed-forward hyperbolic neural network. We demonstrate the effectiveness of our methods on some standard data sets.
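Concretely, for a potential $U$ the Gibbs density is
\[
p_{U}(x) = \frac{e^{-U(x)}}{Z(U)}, \qquad Z(U) = \int e^{-U(x)}\, \mathrm{d}x,
\]
and the maximum likelihood estimate solves $\min_{U} \frac{1}{n}\sum_{i=1}^{n} U(x_i) + \log Z(U)$; it is precisely the $\log Z(U)$ term, intractable in high dimensions, that the proposed MR-MAP estimators are designed to avoid.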
We present a force-feedback controller for a dexterous robotic hand equipped with force sensors on its fingertips. Our controller uses the conditional postural synergies framework to generate the grasp postures, i.e., the finger configuration of the robot, at each time step, based on the forces measured at the robot's fingertips. Using this framework, we are able to control the hand during different grasp types using only one variable, the grasp size, which we define as the distance between the tips of the thumb and the index finger. Instead of controlling the finger limbs independently, our controller generates control signals for all the hand joints in a low-dimensional shared space (i.e., the synergy space). In addition, our approach is modular, which allows the execution of various types of precision grips by changing the synergy space according to the grasp type. We show that our controller is able to lift objects of various weights and materials, adjust the grasp configuration during changes in the object's weight, and perform object placements and object handovers.
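A minimal closed-loop sketch of this kind of controller is given below (gains, limits, and the decoder interface are hypothetical stand-ins, not the authors' implementation); the single controlled variable is the grasp size, and a synergy decoder conditioned on the grasp type maps it to all joint angles:

```python
import numpy as np

def control_step(grasp_size, f_measured, f_target, decode, grasp_type,
                 kp=1e-3, size_limits=(0.01, 0.12)):
    # Close the hand (reduce grasp size) while the fingertip force is
    # below the target, and open it when the force overshoots.
    grasp_size -= kp * (f_target - f_measured)
    grasp_size = float(np.clip(grasp_size, *size_limits))
    # The conditional synergy decoder turns the low-dimensional command
    # into control signals for all hand joints.
    joints = decode(grasp_size, grasp_type)
    return grasp_size, joints

# Dummy decoder standing in for the conditional postural synergies model:
decode = lambda size, grasp_type: np.full(16, size)  # 16 joint angles
size, joints = control_step(0.08, f_measured=0.5, f_target=2.0,
                            decode=decode, grasp_type="tripod")
```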
We consider entropy-conservative and entropy-dissipative discretizations of nonlinear conservation laws with implicit time discretizations and investigate the influence of the iterative methods used to solve the arising nonlinear equations. We show that Newton's method can turn an entropy-dissipative scheme into an anti-dissipative one, even when the iteration error is smaller than the time integration error. We explore several remedies, of which the most performant is a relaxation technique, originally designed to fix entropy errors in time integration methods. Thus, relaxation works well in concert with iterative solvers, provided that the iteration errors are on the order of the time integration error. To corroborate our findings, we consider Burgers' equation and nonlinear dispersive wave equations. We find that entropy conservation results in more accurate numerical solutions than non-conservative schemes, even when the solver tolerance is an order of magnitude larger.
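For a single step with update direction $d = u^{n+1} - u^n$, the relaxation technique replaces $u^{n+1}$ by $u^n + \gamma d$ with a scalar $\gamma \approx 1$ chosen so that the entropy behaves correctly; e.g., for an entropy-conservative scheme with quadratic entropy $\eta(u) = \frac{1}{2}\|u\|^2$, requiring $\|u^n + \gamma d\|^2 = \|u^n\|^2$ yields the closed form
\[
\gamma = -\frac{2 \langle u^n, d \rangle}{\|d\|^2},
\]
which absorbs both time integration and iteration errors in the entropy (a minimal illustration of the idea; general entropies require solving a scalar nonlinear equation for $\gamma$).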
Accurate and efficient uncertainty estimation is crucial for building reliable Machine Learning (ML) models capable of providing calibrated uncertainty estimates, generalizing, and detecting Out-Of-Distribution (OOD) datasets. To this end, Deterministic Uncertainty Methods (DUMs) are a promising model family capable of performing uncertainty estimation in a single forward pass. This work investigates important design choices in DUMs: (1) we show that training schemes that decouple the core architecture and the uncertainty head can significantly improve uncertainty performance; (2) we demonstrate that the expressiveness of the core architecture is crucial for uncertainty performance and that additional architectural constraints to avoid feature collapse can deteriorate the trade-off between OOD generalization and detection; (3) contrary to other Bayesian models, we show that the prior defined by DUMs does not have a strong effect on the final performance.
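As one concrete instance of decoupling, in the spirit of feature-density DUMs (an illustration under our own assumptions, not the exact schemes evaluated here), the backbone is trained as a standard classifier and a density model is then fitted post hoc on its frozen penultimate features:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stage 1 (assumed done elsewhere): train the backbone classifier as
# usual and extract penultimate-layer features for the training set.
feats_train = np.random.randn(1000, 64)  # stand-in for real features

# Stage 2: fit the uncertainty head on the frozen features, fully
# decoupled from backbone training.
gmm = GaussianMixture(n_components=10, covariance_type="full",
                      random_state=0).fit(feats_train)

def uncertainty(feats):
    # Low feature log-density = high uncertainty; one forward pass plus
    # one density evaluation, as in single-pass DUMs.
    return -gmm.score_samples(feats)

print(uncertainty(np.random.randn(5, 64)))
```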
Model selection is a ubiquitous problem that arises in the application of many statistical and machine learning methods. In the likelihood and related settings, it is typical to use the method of information criteria (IC) to choose the most parsimonious among competing models by penalizing the likelihood-based objective function. Theorems guaranteeing the consistency of IC can be difficult to verify and are often specific and bespoke. We present a set of results that guarantee consistency for a class of IC, which we call PanIC (from the Greek root 'pan', meaning 'of everything'), with easily verifiable regularity conditions. PanIC are applicable to any loss-based learning problem and are not exclusive to likelihood problems. We illustrate the verification of the regularity conditions for model selection problems regarding finite mixture models, least absolute deviation and support vector regression, and principal component analysis, and we demonstrate the effectiveness of PanIC for such problems via numerical simulations. Furthermore, we present new sufficient conditions for the consistency of BIC-like estimators and compare BIC with PanIC.
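For orientation, IC-based selection among candidate models $m \in \mathcal{M}$ picks $\hat{m} = \arg\min_{m} \{ \hat{L}_n(m) + \mathrm{pen}_n(m) \}$, where $\hat{L}_n(m)$ is the fitted objective of model $m$; the classical BIC instance, for likelihood problems, is $\mathrm{BIC}(m) = -2 \log \hat{L}(m) + k_m \log n$ with $k_m$ the number of parameters. The PanIC results replace the likelihood with a general loss and supply penalty conditions under which $\hat{m}$ is consistent (the precise conditions are given in the paper).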
We investigate discontinuous Galerkin methods for an elliptic optimal control problem with a general state equation and pointwise state constraints on general polygonal domains. We show that discontinuous Galerkin methods for general second-order elliptic boundary value problems can be used to solve elliptic optimal control problems with pointwise state constraints. We establish concrete error estimates, and numerical experiments are presented to support the theoretical results.
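A prototypical problem of this class (with the Laplacian standing in for the general second-order operator treated in the paper) is
\[
\min_{y, u}\; \frac{1}{2}\|y - y_d\|_{L^2(\Omega)}^{2} + \frac{\alpha}{2}\|u\|_{L^2(\Omega)}^{2} \quad \text{subject to} \quad -\Delta y = u \ \text{in } \Omega, \quad y = 0 \ \text{on } \partial\Omega, \quad y \le \psi \ \text{in } \Omega,
\]
with regularization parameter $\alpha > 0$ and the pointwise state constraint $y \le \psi$ enforced throughout the domain.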