This paper is concerned with function reconstruction from samples. The sampling points used in several approaches are (1) structured points connected with fast algorithms or (2) unstructured points coming from, e.g., an initial random draw to achieve an improved information complexity. We connect both approaches and propose a subsampling of structured points in an offline step. In particular, we start with structured quadrature points (QMC), which provide stable $L_2$ reconstruction properties. The subsampling procedure consists of a computationally inexpensive random step followed by a deterministic procedure that further reduces the number of points while preserving their information. Functions belonging to an RKHS of bounded functions are then sampled at these points and reconstructed from the samples, achieving state-of-the-art error decay. Our method is dimension-independent and is applicable as soon as some initial quadrature points are known. We apply our general findings on the $d$-dimensional torus to subsample rank-1 lattices, where it is known that full rank-1 lattices lose half the optimal order of convergence (expressed in terms of the size of the lattice). In contrast, our subsampled version regains the optimal rate, since many of the lattice points are not needed. Moreover, we utilize fast and memory-efficient Fourier algorithms to compute the approximation. Numerical experiments in several dimensions support our findings.
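For concreteness, a minimal sketch of how rank-1 lattice points on the $d$-dimensional torus are typically generated is given below; the lattice size $M$ and generating vector $z$ used here are illustrative choices, not values from the paper.

```python
import numpy as np

def rank1_lattice(M, z):
    """Return the M points of a rank-1 lattice on the torus [0,1)^d.

    The j-th point is the componentwise fractional part of j*z/M,
    where z is the generating vector.
    """
    j = np.arange(M).reshape(-1, 1)          # node indices 0, ..., M-1
    return np.mod(j * np.asarray(z) / M, 1)  # fractional part in each coordinate

# Example: a small 2D lattice with an illustrative generating vector.
X = rank1_lattice(M=13, z=[1, 5])
print(X.shape)  # (13, 2)
```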
Trajectory optimization is a powerful tool for robot motion planning and control. State-of-the-art general-purpose nonlinear programming solvers are versatile, handle constraints effectively, and provide high numerical robustness, but they are slow because they do not fully exploit the structure of the optimal control problem at hand. Existing structure-exploiting solvers are fast, but they often lack techniques to deal with nonlinearity or rely on penalty methods to enforce (equality or inequality) path constraints. This work presents Fatrop: a trajectory optimization solver that is fast and benefits from the salient features of general-purpose nonlinear optimization solvers. The speed-up is mainly achieved through the integration of a specialized linear solver, based on a Riccati recursion that is generalized to also support stagewise equality constraints. To demonstrate the algorithm's potential, it is benchmarked on a set of robot problems that are challenging from a numerical perspective, including problems with a minimum-time objective and no-collision constraints. The solver is shown to solve trajectory generation problems for a quadrotor, a robot manipulator, and a truck-trailer in a few tens of milliseconds. The algorithm's C++ implementation accompanies this work as open-source software, released under the GNU Lesser General Public License (LGPL). This software framework may encourage and enable the robotics community to use trajectory optimization in more challenging applications.
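To illustrate the structure being exploited, the sketch below shows the standard unconstrained discrete-time LQR backward Riccati recursion; Fatrop's specialized linear solver generalizes this recursion to also handle stagewise equality constraints, which is not reproduced here.

```python
import numpy as np

def lqr_backward_riccati(A, B, Q, R, N):
    """Standard discrete-time LQR backward Riccati sweep over N stages.

    Returns the stagewise feedback gains K_k such that u_k = -K_k x_k.
    (Plain LQR shown for illustration; Fatrop generalizes this recursion
    to also support stagewise equality constraints.)
    """
    P = Q.copy()          # terminal cost-to-go matrix
    gains = []
    for _ in range(N):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)   # feedback gain at this stage
        P = Q + A.T @ P @ (A - B @ K)         # updated cost-to-go matrix
        gains.append(K)
    return gains[::-1]    # reorder from stage 0 to stage N-1
```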
We explore algorithmic aspects of a simply transitive commutative group action coming from the class field theory of imaginary hyperelliptic function fields. Namely, the Jacobian of an imaginary hyperelliptic curve defined over $\mathbb F_q$ acts on a subset of isomorphism classes of Drinfeld modules. We describe an algorithm to compute the group action efficiently. This is a function field analog of the Couveignes-Rostovtsev-Stolbunov group action. We report on an explicit computation done with our proof-of-concept C++/NTL implementation; it took a fraction of a second on a standard computer. We prove that the problem of inverting the group action reduces to the problem of finding isogenies of fixed $\tau$-degree between Drinfeld $\mathbb F_q[X]$-modules, which is solvable in polynomial time thanks to an algorithm by Wesolowski. We give asymptotic complexity bounds for all algorithms presented in this paper.
Constraint programming is known for being an efficient approach to solving combinatorial problems. Important design choices in a solver are the branching heuristics, which are designed to lead the search to the best solutions in a minimum amount of time. However, developing these heuristics is a time-consuming process that requires problem-specific expertise. This observation has motivated many efforts to use machine learning to automatically learn efficient heuristics without expert intervention. To the best of our knowledge, this remains an open research question. Although several generic variable-selection heuristics are available in the literature, the options for a generic value-selection heuristic are much scarcer. In this paper, we propose to tackle this issue by introducing a generic learning procedure that can be used to obtain a value-selection heuristic inside a constraint programming solver. This is achieved through the combination of a deep Q-learning algorithm, a tailored reward signal, and a heterogeneous graph neural network architecture. Experiments on graph coloring, maximum independent set, and maximum cut problems show that our framework is able to find better solutions close to optimality without requiring a large number of backtracks, while remaining generic.
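As an illustration of the final decision step, the sketch below shows how a learned Q-function might drive epsilon-greedy value selection for the current branching variable; the function names and signature are hypothetical, not the paper's API.

```python
import random

def select_value(q_network, state_embedding, candidate_values, epsilon=0.05):
    """Epsilon-greedy value selection for the current branching variable.

    `q_network` is assumed to map a (state embedding, candidate value) pair
    to a scalar Q-estimate; all names here are illustrative.
    """
    if random.random() < epsilon:
        return random.choice(candidate_values)   # occasional exploration
    return max(candidate_values, key=lambda v: q_network(state_embedding, v))
```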
In this work, we consider the problem of distributed computing of functions of structured sources, focusing on the classical setting of two correlated sources and one user that seeks the outcome of the function while benefiting from low-rate side information provided by a helper node. Focusing on the case where the sources are jointly distributed according to a very general mixture model, we provide an achievable coding scheme that substantially reduces the communication cost of distributed computing by exploiting the nature of the joint distribution of the sources, the side information, and the symmetry enjoyed by the desired functions. Our scheme, which readily applies to a variety of real-life scenarios including learning, combinatorics, and graph neural network applications, is shown to provide substantial reductions in the communication costs, while simultaneously providing computational savings by reducing the exponential complexity of joint decoding techniques to a complexity that is merely linear.
The utilization of finite field multipliers is pervasive in contemporary digital systems, with hardware implementations for bit-parallel operation often necessitating millions of logic gates. However, various digital design issues, whether inherent or stemming from soft errors, can cause gates to malfunction, which in turn results in incorrect multiplier outputs. Thus, to prevent susceptibility to error, it is imperative to employ a reliable finite field multiplier implementation with robust fault detection capability. In order to achieve the best fault detection performance for finite field multipliers while maintaining a low-complexity implementation, this study proposes a novel fault detection scheme for a recent bit-parallel polynomial basis multiplier over GF(2^m). The primary concept behind the proposed approach is the implementation of an efficient BCH decoder that utilizes the Berlekamp-Rumsey-Solomon (BRS) algorithm and the Chien search method to effectively locate errors with minimal delay. The results of our synthesis indicate that the proposed error detection and correction architecture for a 45-bit multiplier with 5-bit errors achieves 37% and 49% reductions in critical path delay compared to existing designs. Furthermore, the hardware complexity for a 45-bit multiplicand with five errors is only 80% of that of the most advanced BCH-based fault recognition techniques, such as TMR, Hamming's single error correction, and LDPC-based methods for finite field multiplication, which is desirable in constrained applications such as smart cards, IoT devices, and implantable medical devices.
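For background, the sketch below illustrates plain polynomial-basis multiplication over GF(2^m) (here GF(2^8) with the AES irreducible polynomial, chosen only for illustration); it is not the bit-parallel multiplier or the BCH-based fault detection architecture studied in the paper.

```python
def gf2m_mul(a, b, m=8, irreducible=0x11B):
    """Polynomial-basis multiplication in GF(2^m).

    Elements are m-bit integers whose bits are polynomial coefficients;
    the product is reduced modulo the given irreducible polynomial
    (0x11B = x^8 + x^4 + x^3 + x + 1, the AES polynomial).
    """
    result = 0
    for _ in range(m):
        if b & 1:
            result ^= a            # add (XOR) the current shifted multiplicand
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= irreducible       # reduce when the degree reaches m
    return result

assert gf2m_mul(0x57, 0x83) == 0xC1   # known GF(2^8) test vector
```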
Many materials processes and properties depend on the anisotropy of the energy of grain boundaries, i.e. on the fact that this energy is a function of the five geometric degrees of freedom (DOF) of the interface. To access this parameter space in an efficient way and to discover energy cusps in unexplored regions, a method was recently established which combines atomistic simulations with statistical methods (DOI: 10.1002/adts.202100615). This sequential sampling technique is now extended in the spirit of an active learning algorithm by adding a criterion to decide when the sampling has advanced enough to stop. To this end, two parameters for analysing the sampling results on the fly are introduced: the number of cusps, which correspond to the most interesting and important regions of the energy landscape, and the maximum change of energy between two sequential iterations. Monitoring these two quantities provides valuable insight into how the subspaces are energetically structured. The combination of both parameters provides the necessary information to evaluate the sampling of the 2D subspaces of grain boundary plane inclinations, even for non-periodic, low-angle grain boundaries. With a reasonable number of data points in the initial design, only a few appropriately chosen sequential iterations already improve the accuracy of the sampling substantially, and unknown cusps can be found within a few additional sequential steps.
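A minimal, hypothetical sketch of such a stopping check, monitoring both quantities over a fixed window, might look as follows; the threshold and window size are illustrative choices, not the settings used in the paper.

```python
def sampling_converged(cusp_counts, max_energy_changes, tol, patience=3):
    """Illustrative stopping check for the sequential sampling loop.

    Stop when the number of detected cusps has not changed and the maximum
    energy change between iterations has stayed below `tol` for the last
    `patience` iterations.
    """
    if len(cusp_counts) < patience + 1:
        return False
    cusps_stable = len(set(cusp_counts[-(patience + 1):])) == 1
    changes_small = all(c < tol for c in max_energy_changes[-patience:])
    return cusps_stable and changes_small
```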
Theoretical guarantees in reinforcement learning (RL) are known to suffer multiplicative blow-up factors with respect to the misspecification error of function approximation. Yet, the nature of such \emph{approximation factors} -- especially their optimal form in a given learning problem -- is poorly understood. In this paper we study this question in linear off-policy value function estimation, where many open questions remain. We study the approximation factor across a broad spectrum of settings: the weighted $L_2$-norm (where the weighting is the offline state distribution), the $L_\infty$-norm, the presence vs. absence of state aliasing, and full vs. partial coverage of the state space. We establish the optimal asymptotic approximation factors (up to constants) for all of these settings. In particular, our bounds identify two instance-dependent factors for the $L_2(\mu)$-norm and only one for the $L_\infty$-norm, which are shown to dictate the hardness of off-policy evaluation under misspecification.
We present a novel method for reconstructing clothed humans from a sparse set of, e.g., 1 to 6 RGB images. Despite impressive results from recent works employing deep implicit representations, we revisit the volumetric approach and demonstrate that better performance can be achieved with proper system design. The volumetric representation offers significant advantages in leveraging 3D spatial context through 3D convolutions, and the notorious quantization error is largely negligible with a reasonably large yet affordable volume resolution, e.g., 512. To handle memory and computation costs, we propose a sophisticated coarse-to-fine strategy with voxel culling and subspace sparse convolution. Our method starts with a discretized visual hull to compute a coarse shape and then focuses on a narrow band near the coarse shape for refinement. Once the shape is reconstructed, we adopt an image-based rendering approach, which computes the colors of surface points by blending input images with learned weights. Extensive experimental results show that our method reduces the mean point-to-surface (P2S) error of state-of-the-art methods by more than 50%, achieving approximately 2mm accuracy with a 512 volume resolution. Additionally, images rendered from our textured model achieve a higher peak signal-to-noise ratio (PSNR) than state-of-the-art methods.
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
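As a concrete illustration of this weighting, the sketch below computes per-class weights from the effective-number formula stated above; the choice of $\beta$ and the normalization convention are illustrative.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class weights from the effective number of samples.

    The effective number for a class with n samples is (1 - beta**n) / (1 - beta);
    each class is weighted by its inverse, normalized here so the weights sum to
    the number of classes (one common convention).
    """
    n = np.asarray(samples_per_class, dtype=np.float64)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(n) / weights.sum()

# Example: a long-tailed setting with 5000, 500, and 50 samples per class.
print(class_balanced_weights([5000, 500, 50], beta=0.999))
```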
With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learning-based 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose occupancy networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.
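To make the idea concrete, the sketch below shows a minimal occupancy-style decoder that maps a 3D query point and a conditioning code to an occupancy probability (written with PyTorch, assuming it is available); the layer sizes and architecture are illustrative, not those of the paper.

```python
import torch
import torch.nn as nn

class TinyOccupancyNet(nn.Module):
    """Minimal sketch of an occupancy network: maps a 3D point plus a
    conditioning code (e.g., an image feature) to an occupancy probability.
    Layer sizes are illustrative only."""

    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, code):
        # points: (B, N, 3); code: (B, code_dim), broadcast to every query point
        code = code.unsqueeze(1).expand(-1, points.shape[1], -1)
        logits = self.mlp(torch.cat([points, code], dim=-1)).squeeze(-1)
        return torch.sigmoid(logits)   # occupancy probability in [0, 1]
```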