Adaptive mesh refinement is a key component of efficient unstructured space-time finite element methods. Underlying any adaptive mesh refinement scheme is, of course, a method for local refinement of simplices. However, simplex bisection algorithms in dimension greater than three have strict mesh preconditions which can be hard to satisfy. We prove that certain four-dimensional simplex space-time meshes can be handled with a relaxed precondition. Namely, we prove that if a tetrahedral mesh is 4-colorable, then we can produce a 4D simplex mesh which always satisfies the bisection precondition. We also briefly discuss strategies to handle tetrahedral meshes which are not 4-colorable.
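To make the colorability precondition concrete, here is a small sketch (our own illustration, not the paper's algorithm): since the four vertices of a tetrahedron are mutually adjacent, a proper vertex coloring of the mesh adjacency graph automatically gives every tetrahedron four distinct vertex colors, and a greedy heuristic can search for a coloring with at most four colors. Greedy coloring may use more than four colors even on a 4-colorable mesh, so failure here is not a proof of non-4-colorability.

```python
from collections import defaultdict

def greedy_color(tets):
    """Greedy vertex coloring of a tetrahedral mesh's adjacency graph.

    tets: iterable of 4-tuples of vertex indices.  Returns a dict
    vertex -> color.  A proper coloring gives each tetrahedron four
    distinct vertex colors; greedy is only a heuristic and may exceed
    4 colors even when the mesh is 4-colorable.
    """
    adj = defaultdict(set)
    for t in tets:
        for u in t:
            adj[u].update(v for v in t if v != u)
    colors = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):  # high degree first
        used = {colors[w] for w in adj[v] if w in colors}
        colors[v] = next(c for c in range(len(adj)) if c not in used)
    return colors

# Two tetrahedra sharing a face: 5 vertices, 4-colorable.
tets = [(0, 1, 2, 3), (1, 2, 3, 4)]
col = greedy_color(tets)
print(max(col.values()) + 1)  # number of colors the heuristic used
```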
We focus on finite element method computations for time-dependent problems. We prove that the computational cost of the space-time formulation is higher than the cost of time-marching schemes. This applies to both direct and iterative solvers, and to both uniform and adaptive grids. The only exception to this rule is the h-adaptive space-time simulation of a traveling point object, which results in refinements towards its trajectory in the space-time domain. However, if this object has wings and the mesh refinements capture the shape of the wing (that is, if the mesh refinements capture any two-dimensional manifold), the space-time formulation is again more expensive than time-marching schemes. We also show that the cost of static condensation for the higher-order finite element method with hierarchical basis functions is always higher for space-time formulations. Numerical experiments with Octave confirm our theoretical findings.
A core problem in statistical network analysis is to develop network analogues of classical techniques. The problem of bootstrapping network data stands out as especially challenging, since typically one observes only a single network, rather than a sample. Here we propose two methods for obtaining bootstrap samples for networks drawn from latent space models. The first method generates bootstrap replicates of network statistics that can be represented as U-statistics in the latent positions, and avoids actually constructing new bootstrapped networks. The second method generates bootstrap replicates of whole networks, and thus can be used for bootstrapping any network function. Commonly studied network quantities that can be represented as U-statistics include many popular summaries, such as average degree and subgraph counts, but other equally popular summaries, such as the clustering coefficient, are not expressible as U-statistics and thus require the second bootstrap method. Under the assumption of a random dot product graph, a type of latent space network model, we show consistency of the proposed bootstrap methods. We give motivating examples throughout and demonstrate the effectiveness of our methods on synthetic data.
The class of Gibbs point processes (GPPs) is a large class of spatial point processes, in the sense that it can model both clustered and repulsive point patterns. GPPs are specified by their conditional intensity, which, for a point pattern $\mathbf{x}$ and a location $u$, is roughly speaking the probability that an event occurs in an infinitesimal ball around $u$ given that the rest of the configuration is $\mathbf{x}$. The simplest, most natural and easiest-to-interpret class of models is the class of pairwise interaction point processes, where the conditional intensity depends on the number of points and the pairwise distances between them. Estimating the pairwise interaction function nonparametrically has almost never been considered in the literature. We tackle this question and propose an orthogonal series estimation procedure for the log pairwise interaction function. Under conditions on the spatial GPP and on the basis system, we show that this orthogonal series estimator is consistent and asymptotically normal. The estimation procedure is simple, fast and completely data-driven. We show its efficiency through simulation experiments and apply it to three datasets.
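For readers unfamiliar with pairwise interaction models, the classical Strauss process is the standard textbook example (our choice of illustration, with made-up parameters; the paper estimates the interaction function rather than fixing it): its log pairwise interaction function is a step, $\log g(d) = (\log \gamma)\,\mathbf{1}[d \le R]$, and its conditional intensity is as below.

```python
import numpy as np

# Strauss pairwise interaction model (illustrative parameters, not from
# the paper): log g(d) = (log gamma) * 1[d <= R], with gamma < 1 giving
# inhibition between nearby points.
beta, gamma, R = 100.0, 0.5, 0.05

def conditional_intensity(u, x):
    """lambda(u | x) = beta * prod_i g(||u - x_i||) = beta * gamma**t,
    where t counts points of x within distance R of u."""
    d = np.linalg.norm(x - u, axis=1)
    return beta * gamma ** np.sum(d <= R)

rng = np.random.default_rng(1)
x = rng.random((50, 2))               # a point pattern in [0, 1]^2
u = np.array([0.5, 0.5])
lam = conditional_intensity(u, x)
print(lam <= beta)                    # inhibition can only lower beta
```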
We give a number of approximation metatheorems for monotone maximization problems expressible in first-order logic, in substantially more general settings than previously known. We obtain
* a constant-factor approximation algorithm in any class of graphs with bounded expansion,
* a QPTAS in any class with strongly sublinear separators, and
* a PTAS in any fractionally treewidth-fragile class (which includes all common classes with strongly sublinear separators).
Moreover, our tools also give an exact subexponential-time algorithm in any class with strongly sublinear separators.
Conditional disclosure of secrets (CDS) allows multiple parties to reveal a secret to a third party if and only if some pre-decided condition is satisfied. In this work, we bolster the privacy guarantees of CDS by introducing function-private CDS wherein the pre-decided condition is never revealed to the third party. We also derive a function secret sharing scheme from our function-private CDS solution. The second problem that we consider concerns threshold distributed point functions, which allow one to split a point function such that at least a threshold number of shares are required to evaluate it at any given input. We consider a setting wherein a point function is split among a set of parties such that multiple evaluations do not leak non-negligible information about it. Finally, we present a provably optimal procedure to perform threshold function secret sharing of any polynomial in a finite field.
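The classical building block behind threshold sharing of a polynomial's secrets is Shamir's scheme over a prime field; the sketch below shows that standard construction (not the paper's optimized scheme): any $t$ of $n$ shares reconstruct the secret by Lagrange interpolation at zero, while fewer reveal nothing.

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is mod P

def share(secret, t, n, rng=random.Random(42)):
    """Shamir (t, n) threshold sharing over GF(P): the secret is the
    constant term of a random degree-(t-1) polynomial, shares are
    evaluations at x = 1..n."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):      # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    acc = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        acc = (acc + yi * num * pow(den, P - 2, P)) % P  # modular inverse
    return acc

shares = share(123456789, t=3, n=5)
print(reconstruct(shares[:3]) == 123456789)   # any 3 of 5 suffice
```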
We consider a generic and explicit tamed Euler--Maruyama scheme for multidimensional time-inhomogeneous stochastic differential equations with multiplicative Brownian noise. The diffusion coefficient is uniformly elliptic, H\"older continuous and weakly differentiable in the spatial variables while the drift satisfies the Ladyzhenskaya--Prodi--Serrin condition, as considered by Krylov and R\"ockner (2005). In the discrete scheme, the drift is tamed by replacing it by an approximation. A strong rate of convergence of the scheme is provided in terms of the approximation error of the drift in a suitable and possibly very weak topology. A few examples of approximating drifts are discussed in detail. The parameters of the approximating drifts can vary and be fine-tuned to achieve the standard $1/2$-strong convergence rate with a logarithmic factor.
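A minimal sketch of one common taming choice (our own illustration with a made-up singular drift; the paper allows a general approximating drift and multiplicative noise): replace $b$ by the bounded $b_N = b/(1 + |b|/N)$ before each explicit Euler step, so a single step can never blow up.

```python
import numpy as np

rng = np.random.default_rng(3)

def b(t, x):
    # An illustrative singular, non-Lipschitz drift (not from the paper).
    return np.sign(x) / np.sqrt(np.abs(x) + 1e-300)

def tamed_em(x0, T, n, N):
    """Tamed Euler--Maruyama: the drift b is replaced by the bounded
    approximation b_N = b / (1 + |b| / N), one standard taming choice.
    Here sigma = 1 (additive noise) to keep the sketch short."""
    h = T / n
    x, t = x0, 0.0
    for _ in range(n):
        bx = b(t, x)
        bN = bx / (1.0 + np.abs(bx) / N)          # |b_N| <= N
        x = x + bN * h + rng.normal(0.0, np.sqrt(h))
        t += h
    return x

paths = [tamed_em(x0=1.0, T=1.0, n=1000, N=100.0) for _ in range(20)]
print(all(np.isfinite(p) for p in paths))
```

Letting the taming level `N` grow with the step count `n` is what lets such schemes approach the standard $1/2$ strong rate.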
We study query and computationally efficient planning algorithms with linear function approximation and a simulator. We assume that the agent has only local access to the simulator, meaning that the agent can only query the simulator at states that have been visited before. This setting is more practical than the settings assumed in many prior works on reinforcement learning with a generative model. We propose an algorithm named confident Monte Carlo least square policy iteration (Confident MC-LSPI) for this setting. Under the assumption that the Q-functions of all deterministic policies are linear in known features of the state-action pairs, we show that our algorithm has polynomial query and computational complexities in the dimension of the features, the effective planning horizon and the inverse of the targeted sub-optimality, while these complexities are independent of the size of the state space. One technical contribution of our work is the introduction of a novel proof technique that makes use of a virtual policy iteration algorithm. We use this method to leverage existing results on $\ell_\infty$-bounded approximate policy iteration to show that our algorithm can learn the optimal policy for the given initial state even with only local access to the simulator. We believe that this technique can be extended to broader settings beyond this work.
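The Monte Carlo least-squares core of such a planner can be sketched on a toy problem (our own simplified illustration, without the confidence mechanism of Confident MC-LSPI): roll out the current policy through the simulator to estimate Q-values, fit a linear Q-function by least squares, and act greedily.

```python
import numpy as np

# A toy deterministic chain MDP serving as the simulator; the agent only
# ever queries states reachable from its rollouts (local access).
S, A, gamma = 5, 2, 0.9
def step(s, a):                        # a = 0: left, a = 1: right
    s2 = min(S - 1, s + 1) if a else max(0, s - 1)
    return s2, float(s2 == S - 1)      # reward for reaching the end

def phi(s, a):                         # known features: one-hot here
    v = np.zeros(S * A); v[s * A + a] = 1.0
    return v

def mc_return(s, a, policy, H=30):
    """Monte Carlo rollout estimate of Q^pi(s, a) via the simulator."""
    s, r = step(s, a)
    g, disc = r, gamma
    for _ in range(H):
        s, r = step(s, policy[s])
        g += disc * r
        disc *= gamma
    return g

policy = np.zeros(S, dtype=int)        # start with "always left"
for _ in range(5):                     # simplified MC-LSPI iterations
    Phi = np.array([phi(s, a) for s in range(S) for a in range(A)])
    q = np.array([mc_return(s, a, policy) for s in range(S) for a in range(A)])
    w, *_ = np.linalg.lstsq(Phi, q, rcond=None)   # linear Q-fit
    policy = np.array([np.argmax([phi(s, a) @ w for a in range(A)])
                       for s in range(S)])
print(policy)                          # greedy policy heads for the reward
```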
Semantic reconstruction of indoor scenes refers to both scene understanding and object reconstruction. Existing works either address one part of this problem or focus on independent objects. In this paper, we bridge the gap between understanding and reconstruction, and propose an end-to-end solution to jointly reconstruct room layout, object bounding boxes and meshes from a single image. Instead of separately resolving scene understanding and object reconstruction, our method builds upon a holistic scene context and proposes a coarse-to-fine hierarchy with three components: 1. room layout with camera pose; 2. 3D object bounding boxes; 3. object meshes. We argue that understanding the context of each component can assist the task of parsing the others, which enables joint understanding and reconstruction. The experiments on the SUN RGB-D and Pix3D datasets demonstrate that our method consistently outperforms existing methods in indoor layout estimation, 3D object detection and mesh reconstruction.
Single-image piece-wise planar 3D reconstruction aims to simultaneously segment plane instances and recover 3D plane parameters from an image. Most recent approaches leverage convolutional neural networks (CNNs) and achieve promising results. However, these methods are limited to detecting a fixed number of planes in a certain learned order. To tackle this problem, we propose a novel two-stage method based on associative embedding, inspired by its recent success in instance segmentation. In the first stage, we train a CNN to map each pixel to an embedding space where pixels from the same plane instance have similar embeddings. Then, the plane instances are obtained by grouping the embedding vectors in planar regions via an efficient mean shift clustering algorithm. In the second stage, we estimate the parameters for each plane instance by considering both pixel-level and instance-level consistencies. With the proposed method, we are able to detect an arbitrary number of planes. Extensive experiments on public datasets validate the effectiveness and efficiency of our method. Furthermore, our method runs at 30 fps at test time, and thus could facilitate many real-time applications such as visual SLAM and human-robot interaction. Code is available at //github.com/svip-lab/PlanarReconstruction.
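The grouping step can be sketched with a flat-kernel (blurring) mean shift on synthetic embeddings (our own minimal illustration, not the paper's efficient implementation): every vector repeatedly moves to the mean of its neighbours within a bandwidth, and vectors that converge to the same mode form one plane instance.

```python
import numpy as np

def mean_shift(X, bandwidth=0.5, iters=50):
    """Flat-kernel blurring mean shift: each point moves to the mean of
    its neighbours within `bandwidth`; coincident modes become labels."""
    Y = X.copy()
    for _ in range(iters):
        d = np.linalg.norm(Y[:, None] - Y[None, :], axis=2)
        W = (d < bandwidth).astype(float)
        Y = W @ Y / W.sum(axis=1, keepdims=True)
    labels, modes = np.empty(len(Y), int), []
    for i, y in enumerate(Y):                  # merge converged modes
        for k, m in enumerate(modes):
            if np.linalg.norm(y - m) < bandwidth / 2:
                labels[i] = k
                break
        else:
            modes.append(y)
            labels[i] = len(modes) - 1
    return labels

rng = np.random.default_rng(0)
# Synthetic "pixel embeddings": two well-separated plane instances.
X = np.vstack([rng.normal(0, 0.1, (40, 2)), rng.normal(3, 0.1, (40, 2))])
labels = mean_shift(X)
print(len(set(labels)))
```

The number of clusters is not fixed in advance, which is exactly what allows an arbitrary number of planes per image.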
Stochastic gradient Markov chain Monte Carlo (SGMCMC) has become a popular method for scalable Bayesian inference. These methods are based on sampling a discrete-time approximation to a continuous-time process, such as the Langevin diffusion. When applied to distributions defined on a constrained space, such as the simplex, the time-discretisation error can dominate when we are near the boundary of the space. We demonstrate that while current SGMCMC methods for the simplex perform well in certain cases, they struggle with sparse simplex spaces, i.e., when many of the components are close to zero. However, most popular large-scale applications of Bayesian inference on simplex spaces, such as network or topic models, are sparse. We argue that this poor performance is due to the biases of SGMCMC caused by the discretisation error. To get around this, we propose the stochastic CIR process, which removes all discretisation error, and we prove that samples from the stochastic CIR process are asymptotically unbiased. Use of the stochastic CIR process within a SGMCMC algorithm is shown to give substantially better performance for a topic model and a Dirichlet process mixture model than existing SGMCMC approaches.
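The reason discretisation error can be removed at all is that the Cox--Ingersoll--Ross (CIR) process has a known noncentral chi-square transition law, so it can be simulated exactly; the sketch below shows this exact transition (a standard fact about CIR, with illustrative parameters; it is not the full stochastic CIR SGMCMC algorithm).

```python
import numpy as np

rng = np.random.default_rng(0)

def cir_exact(x, a, b, sigma, h, rng):
    """Exact (discretisation-error-free) transition over time h of the
    CIR process  dX = a(b - X) dt + sigma sqrt(X) dW,  sampled from its
    noncentral chi-square transition law."""
    c = sigma**2 * (1 - np.exp(-a * h)) / (4 * a)
    df = 4 * a * b / sigma**2
    nonc = x * np.exp(-a * h) / c
    return c * rng.noncentral_chisquare(df, nonc)

a, b, sigma, h = 2.0, 0.3, 0.4, 0.1
x = np.full(10000, 0.01)              # many chains started near the boundary
for _ in range(200):
    x = cir_exact(x, a, b, sigma, h, rng)
print(bool(x.min() >= 0), bool(abs(x.mean() - b) < 0.02))
```

Note the samples are nonnegative by construction and the long-run mean relaxes to `b`, with no step-size bias near the boundary, which is precisely the failure mode of Euler-type discretisations on sparse simplices.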