In this paper we study the convergence rate of a finite volume approximation of the compressible Navier--Stokes--Fourier system. To this end we first show the local existence of a highly regular unique strong solution and analyse its global extension in time as long as the density and temperature remain bounded. We make the physically reasonable assumption that the numerical density and temperature are uniformly bounded from above and below. The relative energy then provides an elegant way to derive a priori error estimates between finite volume solutions and the strong solution.
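For orientation, one standard form of the relative energy (ballistic free energy) functional for the Navier--Stokes--Fourier system, following Feireisl and Novotný, is recalled below; the discrete functional actually used in the error analysis may differ in its details.

```latex
% Relative energy of a state (\varrho, \vartheta, \mathbf{u}) with respect to a
% smooth comparison state (r, \Theta, \mathbf{U}):
\mathcal{E}\big(\varrho,\vartheta,\mathbf{u}\,\big|\,r,\Theta,\mathbf{U}\big)
  = \int_{\Omega}\Big[\tfrac12\varrho|\mathbf{u}-\mathbf{U}|^{2}
  + H_{\Theta}(\varrho,\vartheta)
  - \partial_{\varrho}H_{\Theta}(r,\Theta)\,(\varrho - r)
  - H_{\Theta}(r,\Theta)\Big]\,\mathrm{d}x,
\qquad
H_{\Theta}(\varrho,\vartheta) = \varrho\big(e(\varrho,\vartheta)
  - \Theta\, s(\varrho,\vartheta)\big),
% where e and s denote the specific internal energy and entropy.
```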
Conformal prediction has received tremendous attention in recent years and has offered new solutions to problems in missing data and causal inference; yet these advances have not leveraged modern semiparametric efficiency theory for more robust and efficient uncertainty quantification. In this paper, we consider the problem of obtaining distribution-free prediction regions that account for a shift in the distribution of the covariates between the training and test data. Under an explainable covariate shift assumption analogous to the standard missing at random assumption, we propose three variants of a general framework to construct well-calibrated prediction regions for the unobserved outcome in the test sample. Our approach is based on the efficient influence function for the quantile of the unobserved outcome in the test population, combined with an arbitrary machine learning prediction algorithm, without compromising asymptotic coverage. Next, we extend our approach to account for departure from the explainable covariate shift assumption in a semiparametric sensitivity analysis for potential latent covariate shift. In all cases, we establish that the resulting prediction sets attain nominal average coverage in large samples. This guarantee is a consequence of the product bias form of our proposal, which implies correct coverage if either the propensity score or the conditional distribution of the response is estimated sufficiently well. Our results also provide a framework for constructing doubly robust prediction sets for individual treatment effects, both under unconfoundedness and when allowing for some degree of unmeasured confounding. Finally, we discuss aggregation of prediction sets from different machine learning algorithms for optimal prediction and illustrate the performance of our methods on both synthetic and real data.
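As a point of reference, a minimal likelihood-ratio-weighted split conformal baseline for covariate shift (in the spirit of weighted conformal prediction) is sketched below; it ignores the test-point weight for brevity, does not use the efficient influence function machinery described above, and the helpers `predict` and `density_ratio` are hypothetical fitted estimates of the regression function and of the covariate density ratio dP_test/dP_train.

```python
import numpy as np

def weighted_quantile(scores, weights, q):
    """Weighted q-quantile of conformity scores."""
    order = np.argsort(scores)
    scores, weights = scores[order], weights[order]
    cdf = np.cumsum(weights) / np.sum(weights)
    idx = np.searchsorted(cdf, q)
    return scores[min(idx, len(scores) - 1)]

def covariate_shift_conformal(X_cal, y_cal, X_test, predict, density_ratio, alpha=0.1):
    """Prediction intervals under covariate shift via likelihood-ratio weighting
    of the calibration scores (simplified sketch)."""
    scores = np.abs(y_cal - predict(X_cal))   # absolute residual conformity scores
    w = density_ratio(X_cal)                  # importance weights on calibration points
    qhat = weighted_quantile(scores, w, 1 - alpha)
    preds = predict(X_test)
    return preds - qhat, preds + qhat         # lower / upper interval bounds
```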
Navigation is one of the most heavily studied problems in robotics, and is conventionally approached as a geometric mapping and planning problem. However, real-world navigation presents a complex set of physical challenges that defies simple geometric abstractions. Machine learning offers a promising way to go beyond geometry and conventional planning, allowing for navigational systems that make decisions based on actual prior experience. Such systems can reason about traversability in ways that go beyond geometry, accounting for the physical outcomes of their actions and exploiting patterns in real-world environments. They can also improve as more data is collected, potentially providing a powerful network effect. In this article, we present a general toolkit for experiential learning of robotic navigation skills that unifies several recent approaches, describe the underlying design principles, summarize experimental results from several of our recent papers, and discuss open problems and directions for future work.
The quality of consequences in a decision-making problem under (severe) uncertainty must often be compared among different targets (goals, objectives) simultaneously. In addition, the evaluations of a consequence's performance under the various targets often differ in their scale of measurement, classically being either purely ordinal or perfectly cardinal. In this paper, we transfer recent developments from abstract decision theory with incomplete preferential and probabilistic information to this multi-target setting and show how -- by exploiting the (potentially) partial cardinal and partial probabilistic information -- orders for comparing decisions can be given that are more informative than the Pareto order. We discuss some interesting properties of the proposed orders between decision options and show how they can be computed concretely by linear optimization. We conclude the paper by demonstrating our framework on an artificial (yet realistic) example comparing algorithms under different performance measures.
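Purely as a toy illustration of checking a dominance relation by linear optimization, under strong simplifying assumptions not taken from the paper (fixed cardinal utilities over three states and a credal set given by box constraints), one could solve:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical utilities of two options a and b in three states of nature.
u_a = np.array([0.9, 0.4, 0.7])
u_b = np.array([0.6, 0.5, 0.8])

# Credal set: componentwise bounds on the probability vector p, with sum(p) = 1.
bounds = [(0.2, 0.6), (0.1, 0.6), (0.1, 0.6)]

# Minimize E_p[u_a - u_b] over the credal set; a dominates b in expectation
# (for this utility) iff the minimum is nonnegative.
res = linprog(c=u_a - u_b, A_eq=[[1.0, 1.0, 1.0]], b_eq=[1.0], bounds=bounds)
print("a dominates b:", res.fun >= 0.0)
```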
The performance of decision policies and prediction models often deteriorates when they are applied to environments different from the ones seen during training. To ensure reliable operation, we propose and analyze a measure of a system's stability under distribution shift, defined as the smallest change in the underlying environment that causes the system's performance to deteriorate beyond a permissible threshold. In contrast to standard tail risk measures and distributionally robust losses, which require the specification of a plausible magnitude of distribution shift, the stability measure is defined in terms of a more intuitive quantity: the level of acceptable performance degradation. We develop a minimax optimal estimator of stability and analyze its convergence rate, which exhibits a fundamental phase shift behavior. Our characterization of the minimax convergence rate shows that evaluating stability against large performance degradation incurs a statistical cost. Empirically, we demonstrate the practical utility of our stability framework by using it to compare system designs on problems where robustness to distribution shift is critical.
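A minimal plug-in sketch of this idea, assuming for concreteness that the size of a shift is measured by KL divergence from the empirical loss distribution (the paper's shift model and its minimax-optimal estimator are more refined), could look like:

```python
import numpy as np
import cvxpy as cp

def stability_plugin(losses, threshold):
    """Plug-in stability estimate: the smallest KL divergence of a reweighting of
    the empirical loss distribution that pushes the mean loss above `threshold`.
    Simplified sketch only."""
    n = len(losses)
    w = cp.Variable(n, nonneg=True)
    kl = cp.sum(cp.rel_entr(w, np.ones(n) / n))   # KL(w || uniform empirical weights)
    prob = cp.Problem(cp.Minimize(kl),
                      [cp.sum(w) == 1,
                       cp.sum(cp.multiply(losses, w)) >= threshold])
    prob.solve()
    return prob.value  # infinite if no reweighting can reach the threshold
```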
We derive minimax testing errors in a distributed framework where the data are split over multiple machines and their communication to a central machine is limited to $b$ bits. We investigate both the $d$- and infinite-dimensional signal detection problems under Gaussian white noise. We also derive distributed testing algorithms that attain the theoretical lower bounds. Our results show that distributed testing is subject to fundamentally different phenomena that are not observed in distributed estimation. Among our findings, we show that testing protocols with access to shared randomness can perform strictly better in some regimes than those without. We also observe that consistent nonparametric distributed testing is always possible, even with as little as $1$ bit of communication, and that the corresponding test outperforms the best local test using only the information available at a single machine. Furthermore, we derive adaptive nonparametric distributed testing strategies and the corresponding theoretical lower bounds.
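To make the 1-bit setting concrete, here is a generic protocol sketch (not the paper's minimax-optimal construction) in which each machine transmits a single bit from a local chi-square test and the center aggregates the bits against a binomial null:

```python
import numpy as np
from scipy.stats import chi2, binom

def one_bit_distributed_test(machines_data, p_local=0.05, alpha=0.05):
    """Each of the m machines observes a d-dimensional Gaussian vector and sends one
    bit: whether its local chi-square statistic exceeds the level-p_local critical
    value. The center rejects when the bit count is improbably large under the null."""
    d = machines_data[0].shape[0]
    thr = chi2.ppf(1 - p_local, df=d)              # local critical value under the null
    bits = [int(np.sum(x ** 2) > thr) for x in machines_data]
    m = len(bits)
    return sum(bits) > binom.ppf(1 - alpha, m, p_local)
```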
Purpose: This study aims to assess the robustness and accuracy of the face-centred finite volume (FCFV) method for the simulation of compressible laminar flows in different regimes, using numerical benchmarks. Design/methodology/approach: The work presents a detailed comparison with reference solutions published in the literature -- when available -- and with numerical results computed using a commercial cell-centred finite volume solver. Findings: The FCFV scheme provides first-order accurate approximations of the viscous stress tensor and the heat flux that are insensitive to cell distortion and stretching. The strategy demonstrates its efficiency for inviscid and viscous flows over a wide range of Mach numbers, including the incompressible limit. In purely inviscid flows, non-oscillatory approximations are obtained in the presence of shock waves. In the incompressible limit, accurate solutions are computed without pressure correction algorithms. The method shows superior performance for viscous high Mach number flows, achieving physically admissible solutions without the carbuncle effect and predicting quantities of interest with errors below 5%. Originality/value: The FCFV method accurately evaluates, for a wide range of compressible laminar flows, quantities of engineering interest, such as drag, lift and heat transfer coefficients, on unstructured meshes featuring distorted and highly stretched cells with aspect ratios up to ten thousand. The method is suitable for simulating industrial flows on complex geometries, relaxing the requirements on mesh quality imposed by existing finite volume solvers and alleviating the need for time-consuming manual mesh generation procedures performed by specialised technicians.
This paper presents the first systematic study of the fundamental problem of seeking the optimal cell average decomposition (OCAD), which arises in constructing efficient high-order bound-preserving (BP) numerical methods within the Zhang--Shu framework. Since it was proposed in 2010, the Zhang--Shu framework has attracted extensive attention and has been applied to developing many high-order BP discontinuous Galerkin and finite volume schemes for various hyperbolic equations. An essential ingredient of the framework is the decomposition of the cell averages of the numerical solution into a convex combination of the solution values at certain quadrature points. The classic CAD originally proposed by Zhang and Shu has been widely used over the past decade. However, feasible CADs are not unique, and different CADs affect the theoretical BP CFL condition and thus the computational cost. Zhang and Shu verified only for the 1D $\mathbb P^2$ and $\mathbb P^3$ spaces that their classic CAD, based on the Gauss--Lobatto quadrature, is optimal in the sense of achieving the mildest BP CFL conditions. In this paper, we establish a general theory for studying the OCAD problem on Cartesian meshes in 1D and 2D. We rigorously prove that the classic CAD is optimal for general 1D $\mathbb P^k$ spaces and general 2D $\mathbb Q^k$ spaces of arbitrary $k$. For the widely used 2D $\mathbb P^k$ spaces, the classic CAD is not optimal; we establish a general approach to find the genuine OCAD and propose a more practical quasi-optimal CAD, both of which provide much milder BP CFL conditions than the classic CAD. As a result, our OCAD and quasi-optimal CAD notably improve the efficiency of high-order BP schemes for a large class of hyperbolic and convection-dominated equations, at the small cost of a slight and local modification to the implementation code.
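As a schematic reminder of the classic 1D Gauss--Lobatto CAD in the standard Zhang--Shu setup (the precise BP CFL statement depends on the scheme and is only indicated here):

```latex
% For a degree-k polynomial u_i on cell I_i and an N-point Gauss--Lobatto rule
% with nodes \hat{x}_j and normalized weights \hat{\omega}_j (where 2N - 3 >= k),
\bar{u}_i \;=\; \sum_{j=1}^{N} \hat{\omega}_j\, u_i(\hat{x}_j),
\qquad \hat{\omega}_j > 0, \quad \sum_{j=1}^{N} \hat{\omega}_j = 1,
% so any bounds enforced on the point values u_i(\hat{x}_j) are inherited by the
% cell average. The associated BP CFL condition scales (schematically) as
a\,\frac{\Delta t}{\Delta x} \;\le\; \hat{\omega}_1 \;=\; \frac{1}{N(N-1)},
% which gives the familiar 1/6 for the 1D \mathbb{P}^2 and \mathbb{P}^3 cases.
```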
We propose an end-to-end inverse rendering pipeline called SupeRVol that recovers 3D shape and material parameters from a set of color images in a super-resolution manner. To this end, we represent both the bidirectional reflectance distribution function (BRDF) and the signed distance function (SDF) by multi-layer perceptrons. To obtain both the surface shape and its reflectance properties, we resort to a differentiable volume renderer with a physically based illumination model that allows us to decouple reflectance and lighting. This physical model takes into account the effect of the camera's point spread function, thereby enabling a reconstruction of shape and material at super-resolution quality. Experimental validation confirms that SupeRVol achieves state-of-the-art inverse rendering quality. It generates reconstructions that are sharper than the individual input images, making the method ideally suited for 3D modeling from low-resolution imagery.
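To illustrate the role of the point spread function, a toy forward model (not the paper's renderer; the Gaussian PSF, the averaging downsampler, and the single-channel input are assumptions made here for brevity) would blur a high-resolution rendering and downsample it before comparison with the observed low-resolution image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def psf_forward_model(rendered_hr, psf_sigma=1.5, factor=2):
    """Blur a high-resolution (single-channel) rendering with a Gaussian PSF and
    average-pool it down to the sensor resolution; comparing this output with the
    observed low-resolution photo is what enables super-resolved reconstruction."""
    blurred = gaussian_filter(rendered_hr, sigma=psf_sigma)
    h, w = (s - s % factor for s in blurred.shape)        # crop to a multiple of factor
    return blurred[:h, :w].reshape(h // factor, factor,
                                   w // factor, factor).mean(axis=(1, 3))
```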
Athletes routinely undergo fitness evaluations to monitor their training progress. Typically, these evaluations require a trained professional who uses specialized equipment such as force plates. For the assessment, athletes perform drop and squat jumps, and key variables are measured, e.g., velocity, flight time, and time to stabilization. However, amateur athletes may not have access to the professionals or equipment needed for these assessments. Here, we investigate the feasibility of estimating key variables from video recordings. We focus on jump velocity as a starting point because it is highly correlated with other key variables and is important for determining posture and lower-limb capacity. We find that velocity can be estimated with a high degree of precision across a range of athletes, with an average correlation of R = 0.71 (SD = 0.06).
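One simple, widely used baseline for obtaining take-off velocity from video is the flight-time formula v0 = g*T/2, sketched below under the assumption that the take-off and landing frames have already been detected (the detection step and the paper's actual estimator are not specified here):

```python
GRAVITY = 9.81  # m/s^2

def takeoff_velocity_from_flight_time(takeoff_frame, landing_frame, fps):
    """Flight-time estimate of vertical take-off velocity: a projectile launched at
    v0 is airborne for T = 2*v0/g, so v0 = g*T/2, with T measured from video frames."""
    flight_time = (landing_frame - takeoff_frame) / fps
    return GRAVITY * flight_time / 2.0
```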
Charge dynamics play an essential role in many practical applications such as semiconductors, electrochemical devices and transmembrane ion channels. A Maxwell-Amp\`{e}re Nernst-Planck (MANP) model, which describes charge dynamics via concentrations and the electric displacement, is able to account for effects beyond mean-field approximations. To obtain physically faithful numerical solutions, we develop a structure-preserving numerical method for the MANP model, whose solution possesses several physical properties of importance. By the Slotboom transform with entropic-mean approximations, a positivity-preserving scheme with Scharfetter-Gummel fluxes is derived for the generalized Nernst-Planck equations. To deal with the curl-free constraint, the electric displacement obtained from the Maxwell-Amp\`{e}re equation is further updated with a local relaxation algorithm of linear computational complexity. We prove that the proposed numerical method unconditionally preserves mass conservation and solution positivity at the discrete level, and satisfies the discrete energy dissipation law under a time-step restriction. Numerical experiments verify that our numerical method has the expected accuracy and structure-preserving properties. Applications to ion transport with large convection, arising from boundary-layer electric fields and Born solvation interactions, further demonstrate that the MANP formulation with the proposed numerical scheme performs well and can effectively describe charge dynamics with large convection at high numerical cell P\'{e}clet numbers.
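For context, the classical Scharfetter--Gummel flux for a generic drift-diffusion (convection-diffusion) equation reads as follows; the fluxes in the MANP scheme, derived via the Slotboom transform with entropic-mean approximations, may differ in their details:

```latex
% Scharfetter--Gummel numerical flux across the face x_{i+1/2}, with mesh size h,
% diffusion coefficient D, local drift velocity v_{i+1/2}, and the Bernoulli
% function B(z) = z / (e^z - 1):
F_{i+1/2} \;=\; \frac{D}{h}\Big[\,B\!\big(-\lambda_{i+1/2}\big)\, c_i
  \;-\; B\!\big(\lambda_{i+1/2}\big)\, c_{i+1}\Big],
\qquad
\lambda_{i+1/2} \;=\; \frac{v_{i+1/2}\, h}{D},
% where \lambda_{i+1/2} is the numerical cell P\'eclet number: the flux reduces to
% central differencing of the diffusion as \lambda \to 0 and to upwinding as
% |\lambda| \to \infty, which is why it remains robust at large P\'eclet numbers.
```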