Dye experimentation is a widely used method in experimental fluid mechanics for flow analysis or for the study of the transport of particles within a fluid. This technique is particularly useful in biomedical diagnostic applications ranging from hemodynamic analysis of cardiovascular systems to ocular circulation. However, simulating dyes governed by convection-diffusion partial differential equations (PDEs) can also be a useful post-processing analysis approach for computational fluid dynamics (CFD) applications. Such simulations can be used to identify the relative significance of different spatial subregions in particular time intervals of interest in an unsteady flow field. Additionally, dye evolution is closely related to non-discrete particle residence time (PRT) calculations that are governed by similar PDEs. PRT is a widely used metric for various fluid dynamics applications (e.g., environmental fluids, biological flows) and is a well-accepted biomarker for cardiovascular diseases since it is linked to thrombus formation. This contribution introduces a pseudo-spectral method based on Fourier continuation (FC) for conducting dye simulations and non-discrete particle residence time calculations without numerical diffusion errors. Convergence and error analyses are performed with both manufactured and analytical solutions. The methodology is applied to three distinct physical/physiological cases: 1) flow over a two-dimensional (2D) cavity; 2) pulsatile flow in a simplified partially-grafted aortic dissection model; and 3) non-Newtonian blood flow in a Fontan graft. Although the velocity data in this work are provided by numerical simulation, the proposed approach can also be applied to velocity fields obtained through experimental techniques such as particle image velocimetry.
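As a rough illustration of the convection-diffusion setting, the sketch below advances a passive dye concentration by one explicit pseudo-spectral step on a periodic 2D grid. The periodicity, the forward-Euler update, and the function names are simplifying assumptions for illustration; the FC machinery that lifts the periodicity restriction on general domains is not reproduced here.

```python
# A minimal sketch of one pseudo-spectral convection-diffusion step,
# dc/dt + u.grad(c) = D*laplacian(c), on a periodic 2D domain. The paper's
# Fourier-continuation (FC) approach extends this idea to non-periodic
# domains; periodicity here is a simplifying assumption.
import numpy as np

def advect_diffuse_step(c, u, v, D, dt, L=2 * np.pi):
    """Advance dye concentration c under velocity (u, v) by one Euler step."""
    n = c.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    c_hat = np.fft.fft2(c)
    dcdx = np.real(np.fft.ifft2(1j * kx * c_hat))   # spectral derivatives
    dcdy = np.real(np.fft.ifft2(1j * ky * c_hat))
    lap = np.real(np.fft.ifft2(-(kx**2 + ky**2) * c_hat))
    return c + dt * (-(u * dcdx + v * dcdy) + D * lap)
```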
In recent years, empirical Bayesian (EB) inference has become an attractive approach for estimation in parametric models arising in a variety of real-life problems, especially in complex and high-dimensional scientific applications. However, compared to the relative abundance of available general methods for computing point estimators in the EB framework, the construction of confidence sets and hypothesis tests with good theoretical properties remains difficult and problem specific. Motivated by the universal inference framework of Wasserman et al. (2020), we propose a general and universal method based on holdout likelihood ratios that utilizes the hierarchical structure of the specified Bayesian model to construct confidence sets and hypothesis tests that are finite-sample valid. We illustrate our method through a range of numerical studies and real data applications, which demonstrate that the approach is able to generate useful and meaningful inferential statements in the relevant contexts.
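For orientation, the following sketch implements a split (holdout) likelihood-ratio confidence set in the spirit of universal inference, for the toy case of a Gaussian mean with known unit variance. The model, the even data split, and the parameter grid are illustrative assumptions, not the paper's empirical Bayes construction.

```python
# A minimal sketch of a holdout likelihood-ratio confidence set in the
# universal-inference style: fit on one half, evaluate the likelihood ratio
# on the other, keep parameters with LR <= 1/alpha.
import numpy as np
from scipy.stats import norm

def split_lr_confidence_set(x, alpha=0.05, grid=None):
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(x))
    d1, d0 = x[idx[: len(x) // 2]], x[idx[len(x) // 2:]]
    theta_hat = d1.mean()                       # estimator fit on the holdout half
    if grid is None:
        grid = np.linspace(x.mean() - 3, x.mean() + 3, 601)
    loglik = lambda th: norm.logpdf(d0, loc=th).sum()
    t = loglik(theta_hat) - np.array([loglik(th) for th in grid])
    return grid[t <= np.log(1 / alpha)]         # {theta : LR <= 1/alpha}

x = np.random.default_rng(1).normal(0.3, 1.0, size=200)
print(split_lr_confidence_set(x)[[0, -1]])      # approximate interval endpoints
```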
Generating feasible robot motions in real-time requires achieving multiple tasks (i.e., kinematic requirements) simultaneously. These tasks can have a specific goal, a range of equally valid goals, or a range of acceptable goals with a preference toward a specific goal. To satisfy multiple and potentially competing tasks simultaneously, it is important to exploit the flexibility afforded by tasks with a range of goals. In this paper, we propose a real-time motion generation method that accommodates all three categories of tasks within a single, unified framework and leverages the flexibility of tasks with a range of goals to accommodate other tasks. Our method incorporates tasks in a weighted-sum multiple-objective optimization structure and uses barrier methods with novel loss functions to encode the valid range of a task. We demonstrate the effectiveness of our method through a simulation experiment comparing it to state-of-the-art alternative approaches, and through a demonstration on a physical camera-in-hand robot, which shows that our method enables smooth and feasible camera motions.
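As a rough sketch of the weighted-sum structure, the snippet below encodes the three task categories as scalar losses, using a simple log-barrier to represent a valid range. The particular loss shapes, weights, and goal values are illustrative assumptions, not the paper's novel loss functions.

```python
# The three task categories as scalar losses combined in a weighted sum;
# a capped log-barrier keeps iterates inside a task's valid range.
import numpy as np

def exact_goal_loss(x, goal):
    return (x - goal) ** 2                        # single valid goal

def range_goal_loss(x, lo, hi, eps=1e-6):
    # nearly flat in the interior of the range, steep near the bounds
    return -np.log(max(x - lo, eps)) - np.log(max(hi - x, eps))

def preferred_range_loss(x, lo, hi, pref, w_pref=0.1):
    # a valid range plus a pull toward a preferred value inside it
    return range_goal_loss(x, lo, hi) + w_pref * (x - pref) ** 2

def total_loss(x, weights=(1.0, 0.01, 0.01)):
    w1, w2, w3 = weights
    return (w1 * exact_goal_loss(x[0], goal=0.5)
            + w2 * range_goal_loss(x[1], lo=-1.0, hi=1.0)
            + w3 * preferred_range_loss(x[2], lo=0.0, hi=2.0, pref=1.5))
```

Any off-the-shelf local optimizer (e.g., scipy.optimize.minimize) can then be run on total_loss at each control cycle.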
When humans perform contact-rich manipulation tasks, customized tools are often necessary to simplify the task. For instance, we use various utensils for handling food, such as knives, forks, and spoons. Similarly, robots may benefit from specialized tools that enable them to more easily complete a variety of tasks. We present an end-to-end framework to automatically learn tool morphology for contact-rich manipulation tasks by leveraging differentiable physics simulators. Previous work relied on manually constructed priors requiring detailed specification of a 3D object model, grasp pose, and task description to facilitate the search or optimization process. Our approach only requires defining the objective with respect to task performance and enables learning a robust morphology by randomizing variations of the task. We make this optimization tractable by casting it as a continual learning problem. We demonstrate the effectiveness of our method for designing new tools in several scenarios, such as winding ropes, flipping a box, and pushing peas onto a scoop in simulation. Additionally, experiments with real robots show that the tool shapes discovered by our method help the robots succeed in these scenarios.
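To convey the basic optimization loop, the toy sketch below performs gradient descent on a single tool parameter through a stand-in differentiable "simulator" while randomizing a task variation at each iteration. The one-parameter tool, the dynamics, and the loss are placeholders for a real differentiable physics engine, not the paper's setup.

```python
# A toy illustration of learning tool morphology by gradient descent through
# a differentiable simulator, with task randomization for robustness.
import torch

theta = torch.tensor(0.2, requires_grad=True)    # tool shape parameter
opt = torch.optim.Adam([theta], lr=0.05)

def rollout(theta, box_mass):
    # stand-in differentiable dynamics: a longer tool moves the box further,
    # heavier boxes move less; the target displacement is 1.0
    return torch.tanh(theta) * 2.0 / box_mass

for step in range(200):
    box_mass = 0.5 + torch.rand(()) * 1.5        # randomized task variation
    loss = (rollout(theta, box_mass) - 1.0) ** 2
    opt.zero_grad()
    loss.backward()                              # gradients flow through the sim
    opt.step()

print(float(theta))  # a compromise parameter robust across task variations
```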
Kinetic equations model the position-velocity distribution of particles subject to transport and collision effects. Under a diffusive scaling, these combined effects converge to a diffusion equation for the position density in the limit of an infinite collision rate. Despite this well-defined limit, numerical simulation is expensive when the collision rate is high but finite, as small time steps are then required. In this work, we present an asymptotic-preserving multilevel Monte Carlo particle scheme that makes use of this diffusive limit to accelerate computations. In this scheme, we first sample the diffusive limiting model to compute a biased initial estimate of a quantity of interest, using large time steps. We then perform a limited number of finer simulations with transport and collision dynamics to correct the bias. The efficiency of the multilevel method depends on being able to perform correlated simulations of particles on a hierarchy of discretization levels. We present a method for correlating particle trajectories across levels and support it with both an analysis and numerical experiments. We demonstrate that our approach significantly reduces the cost of particle simulations in highly collisional regimes, compared with prior work, indicating significant potential for adopting these schemes in various areas of active research.
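The two-level idea can be conveyed in a few lines: a cheap, biased estimate from the limiting model plus a correction from a small number of coupled fine/coarse simulations. The toy "simulators" below and their coupling through shared noise are illustrative assumptions standing in for the kinetic and diffusive models.

```python
# A minimal sketch of the two-level multilevel Monte Carlo structure:
# E[F_fine] is estimated as E[F_coarse] + E[F_fine - F_coarse].
import numpy as np

def coarse_qoi(rng):        # stand-in diffusive-limit model, large time steps
    return rng.normal(1.0, 1.0)

def fine_minus_coarse(rng):
    z = rng.normal()        # shared noise correlates the two levels
    fine = 1.1 + 1.0 * z    # stand-in kinetic simulation, small time steps
    coarse = 1.0 + 1.0 * z
    return fine - coarse    # small variance because the levels are coupled

rng = np.random.default_rng(0)
level0 = np.mean([coarse_qoi(rng) for _ in range(10_000)])    # many cheap samples
corr = np.mean([fine_minus_coarse(rng) for _ in range(100)])  # few expensive ones
print(level0 + corr)        # matches the fine model in expectation
```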
Strict stationarity is a common assumption used in the time series literature in order to derive asymptotic distributional results for second-order statistics, like sample autocovariances and sample autocorrelations. Focusing on weak stationarity, this paper derives the asymptotic distribution of the maximum of sample autocovariances and sample autocorrelations under weak conditions by using Gaussian approximation techniques. The asymptotic theory for parameter estimation obtained by fitting a (linear) autoregressive model to a general weakly stationary time series is revisited, and a Gaussian approximation theorem for the maximum of the estimators of the autoregressive coefficients is derived. To perform statistical inference for the second-order parameters considered, a bootstrap algorithm, the so-called second-order wild bootstrap, is applied. Consistency of this bootstrap procedure is proven. In contrast to existing bootstrap alternatives, validity of the second-order wild bootstrap does not require the imposition of strict stationarity conditions or structural process assumptions, like linearity. The good finite sample performance of the second-order wild bootstrap is demonstrated by means of simulations.
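As a rough illustration of the wild-bootstrap idea for autocovariances, the sketch below perturbs the centered lag-h products with i.i.d. standard-normal multipliers. The precise form of the paper's second-order wild bootstrap may differ; this should be read as a generic wild-bootstrap replicate, not the paper's algorithm.

```python
# A wild-bootstrap replicate of a sample autocovariance: centered lag-h
# products are perturbed by i.i.d. auxiliary multipliers.
import numpy as np

def autocov(x, h):
    xc = x - x.mean()
    return np.sum(xc[: len(x) - h] * xc[h:]) / len(x)

def wild_bootstrap_autocov(x, h, rng):
    xc = x - x.mean()
    prods = xc[: len(x) - h] * xc[h:]
    w = rng.standard_normal(prods.shape)           # auxiliary multipliers
    g = autocov(x, h)
    return g + np.sum((prods - g) * w) / len(x)    # perturbed replicate

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
reps = [wild_bootstrap_autocov(x, 1, rng) for _ in range(1000)]
print(autocov(x, 1), np.std(reps))  # point estimate and bootstrap SE
```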
Diffusion generative models have recently been applied to domains where the available data can be seen as a discretization of an underlying function, such as audio signals or time series. However, these models operate directly on the discretized data, and there are no semantics in the modeling process that relate the observed data to the underlying functional forms. We generalize diffusion models to operate directly in function space by developing the foundational theory for such models in terms of Gaussian measures on Hilbert spaces. A significant benefit of our function space point of view is that it allows us to explicitly specify the space of functions we are working in, leading us to develop methods for diffusion generative modeling in Sobolev spaces. Our approach allows us to perform both unconditional and conditional generation of function-valued data. We demonstrate our methods on several synthetic and real-world benchmarks.
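For intuition about Gaussian measures on function spaces, the sketch below draws a sample via a truncated Karhunen-Loeve expansion whose eigenvalue decay rate controls the regularity (in the Sobolev sense) of the samples. The sine basis, decay rate, and truncation level are illustrative assumptions, not the paper's construction.

```python
# Sampling from a Gaussian measure on a function space: a truncated
# Karhunen-Loeve expansion with polynomially decaying covariance spectrum.
# Faster decay yields smoother (more regular) sample paths.
import numpy as np

def sample_gaussian_field(n_grid=256, n_modes=64, decay=2.0, rng=None):
    rng = rng or np.random.default_rng()
    t = np.linspace(0.0, 1.0, n_grid)
    f = np.zeros(n_grid)
    for j in range(1, n_modes + 1):
        lam = j ** (-decay)                 # eigenvalue of the covariance operator
        phi = np.sqrt(2) * np.sin(np.pi * j * t)   # orthonormal basis function
        f += np.sqrt(lam) * rng.standard_normal() * phi
    return t, f

t, f = sample_gaussian_field()              # one function-valued sample
```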
Time-dependent Maxwell's equations govern electromagnetics. Under certain conditions, we can rewrite these equations into a second-order partial differential equation, which in this case is the vectorial wave equation. For the vectorial wave equation, we investigate the numerical treatment and the challenges arising in its implementation. For this purpose, we consider a space-time variational setting, i.e., time is treated as just another spatial dimension. More specifically, we apply integration by parts in time as well as in space, leading to a space-time variational formulation with different trial and test spaces. Conforming discretizations of tensor-product type result in a Galerkin--Petrov finite element method that requires a CFL condition for stability. For this Galerkin--Petrov variational formulation, we study the CFL condition and its sharpness. To overcome the CFL condition, we use a Hilbert-type transformation that leads to a variational formulation with equal trial and test spaces. Conforming space-time discretizations result in a new Galerkin--Bubnov finite element method that is unconditionally stable. In numerical examples, we demonstrate the effectiveness of this Galerkin--Bubnov finite element method. Furthermore, we investigate different projections of the right-hand side and their influence on the convergence rates. This paper is the first step towards a more stable computation and a better understanding of vectorial wave equations in a conforming space-time approach.
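For orientation, the standard reduction from Maxwell's curl equations to the second-order vectorial wave equation reads as follows; the linear, non-conductive medium with permittivity $\varepsilon$ and permeability $\mu$ is a textbook simplifying assumption, shown here only to make the rewriting step concrete.

```latex
% Maxwell's curl equations:
%   \varepsilon \,\partial_t E = \nabla \times H - j, \qquad
%   \mu \,\partial_t H = -\nabla \times E.
% Differentiating the first equation in time and eliminating H gives the
% vectorial wave equation for the electric field E:
\varepsilon \,\partial_{tt} E
  + \nabla \times \bigl( \mu^{-1} \, \nabla \times E \bigr)
  = -\,\partial_t j .
```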
A Shared Nearest Neighbor (SNN) graph is a type of graph construction that uses shared nearest neighbor information, a secondary similarity measure based on the rankings induced by a primary $k$-nearest neighbor ($k$-NN) measure. SNN measures have been touted as being less prone to the curse of dimensionality than conventional distance measures, and thus methods using SNN graphs have been widely used in applications, particularly in clustering high-dimensional data sets and in finding outliers in subspaces of high-dimensional data. Despite this, the theoretical study of SNN graphs and their associated graph Laplacians remains largely unexplored. In this work, we make the first contribution in this direction. We show that the large-scale asymptotics of an SNN graph Laplacian reach a consistent continuum limit; this limit is the same as that of a $k$-NN graph Laplacian. Moreover, we show that the pointwise convergence rate of the graph Laplacian is linear with respect to $(k/n)^{1/m}$ with high probability.
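The construction itself is short: the sketch below computes SNN weights as counts of shared $k$-NN list members and forms the unnormalized graph Laplacian. The brute-force neighbor search is a simplifying assumption suitable only for small data sets.

```python
# Building an SNN graph from a primary k-NN measure: the SNN weight of a
# pair of points is the number of neighbors their k-NN lists share.
import numpy as np

def snn_weights(X, k):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn = np.argsort(d, axis=1)[:, :k]                  # primary k-NN rankings
    sets = [set(row) for row in knn]
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            W[i, j] = W[j, i] = len(sets[i] & sets[j])  # shared neighbors
    return W

X = np.random.default_rng(0).normal(size=(50, 10))
W = snn_weights(X, k=8)
L = np.diag(W.sum(axis=1)) - W    # unnormalized SNN graph Laplacian
```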
To exploit heterogeneous hardware, programmers need sufficient technical skill to use frameworks such as OpenMP, CUDA, and OpenCL. To lower this barrier, I have previously proposed environment-adaptive software that automatically converts, configures, and operates once-written code with high performance on the available hardware. However, while prior work has considered converting code to match the offload devices, no study has addressed where to place the offloaded applications so as to satisfy users' requirements on price and response time. In this paper, as a new element of environment-adaptive software, I examine a method that computes appropriate placement locations using linear programming. Simulation experiments confirm that applications are placed appropriately under varying conditions, such as application type and users' requirements.
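A minimal sketch of such a placement calculation, assuming hypothetical candidate locations with illustrative prices and response-time contributions, could use an off-the-shelf LP solver as follows.

```python
# Choosing an application placement by linear programming: minimize price
# subject to a response-time requirement, with a relaxed 0-1 placement
# variable per candidate location.
import numpy as np
from scipy.optimize import linprog

price = np.array([10.0, 4.0, 1.5])     # edge, regional, central cloud (per hour)
latency = np.array([5.0, 25.0, 80.0])  # response-time contribution (ms)
max_latency = 30.0                     # user's response-time requirement

res = linprog(c=price,                           # minimize total price
              A_ub=[latency], b_ub=[max_latency],
              A_eq=[[1, 1, 1]], b_eq=[1],        # place the app exactly once
              bounds=[(0, 1)] * 3)
print(res.x)  # fractional placement; round or branch-and-bound for a 0-1 choice
```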
As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
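As a concrete baseline for the surveyed methods, the sketch below implements textbook uniform affine (asymmetric) quantization to b bits, with a scale and zero point derived from the tensor's range; it stands in for no single method from the survey.

```python
# Uniform affine quantization: map a real tensor to b-bit integers via a
# scale and zero point, then dequantize for use in computation.
import numpy as np

def quantize(x, bits=4):
    qmin, qmax = 0, 2 ** bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)   # real-valued step size
    zero_point = round(-x.min() / scale)          # integer mapped to real 0
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, s, z = quantize(x, bits=4)
print(np.abs(x - dequantize(q, s, z)).max())      # worst-case rounding error
```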