Provably stable flux reconstruction (FR) schemes are derived for partial differential equations cast in curvilinear coordinates. Specifically, energy stable flux reconstruction (ESFR) schemes are considered, as they allow for design flexibility as well as stability proofs for the linear advection problem on affine elements. Additionally, split forms are examined, as they enable the development of energy stability proofs. The first critical step proves that, in curvilinear coordinates, the discontinuous Galerkin (DG) conservative and non-conservative forms are inherently different, even under exact integration and with analytically exact metric terms. This analysis demonstrates that the split form is essential to developing provably stable DG schemes in curvilinear coordinates and motivates the construction of metric-dependent ESFR correction functions in each element. Furthermore, the provably stable FR schemes differ from schemes in the literature that apply the ESFR correction functions only to surface terms or only to the conservative form; instead, they incorporate the ESFR correction functions on the full split form of the equations. It is demonstrated that the scheme is divergent when the correction functions are used only for surface reconstruction in curvilinear coordinates. We numerically verify the stability claims for our proposed FR split forms and compare them to ESFR schemes in the literature. Lastly, the newly proposed provably stable FR schemes are shown to obtain optimal orders of convergence. The schemes lose optimal accuracy at the same correction parameter value c as the one-dimensional ESFR scheme.
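For context, a one-dimensional illustration of what "split form" refers to is given below: the standard blended splitting of the linear-advection term with parameter $\alpha$. The two sides agree analytically by the product rule, but their discrete counterparts differ; this is only a textbook illustration, not the paper's curvilinear ESFR formulation.

```latex
% Blended (split) form of the linear-advection term; the common
% energy-stable choice is \alpha = 1/2:
\frac{\partial (a u)}{\partial x}
  = \alpha \, \frac{\partial (a u)}{\partial x}
  + (1 - \alpha) \left( a \frac{\partial u}{\partial x}
                        + u \frac{\partial a}{\partial x} \right).
```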
Kinetic theory provides a physical basis for developing multiscale methods for gas flows covering a wide range of flow regimes. A particular challenge for kinetic schemes is whether they can capture the correct hydrodynamic behaviors of the system in the continuum regime (i.e., as the Knudsen number $\epsilon\ll 1$) without enforcing kinetic-scale resolution. At the current stage, the main approach to analyzing such a property is the asymptotic preserving (AP) concept, which aims to show whether a kinetic scheme reduces to a solver for the hydrodynamic equations as $\epsilon \to 0$, such as a shock-capturing scheme for the Euler equations. However, under the AP framework the detailed asymptotic properties of kinetic schemes are indistinguishable when $\epsilon$ is small but finite. In order to distinguish different characteristics of kinetic schemes, in this paper we introduce the concept of unified preserving (UP), aimed at assessing the asymptotic orders of a kinetic scheme by employing the modified-equation approach and Chapman-Enskog analysis. It is shown that the UP properties of a kinetic scheme generally depend on the spatial/temporal accuracy and closely on the inter-connections among the three scales (the kinetic, numerical, and hydrodynamic scales) and their corresponding coupled dynamics. Specifically, the numerical resolution and the specific discretization of particle transport and collision determine the flow physics of the scheme in different regimes, especially in the near-continuum limit. As two examples, the UP methodology is applied to analyze the asymptotic behaviors of the discrete unified gas-kinetic scheme and a second-order implicit-explicit Runge-Kutta scheme in the continuum limit.
The method-of-moments implementation of the electric-field integral equation yields many code-verification challenges due to the various sources of numerical error and their possible interactions. Matters are further complicated by singular integrals, which arise from the presence of a Green's function. In this paper, we provide approaches to separately assess the numerical errors arising from the use of basis functions to approximate the solution and the use of quadrature to approximate the integration. Through these approaches, we are able to verify the code and compare the error from different quadrature options.
The asymptotic behaviour of Linear Spectral Statistics (LSS) of the smoothed periodogram estimator of the spectral coherency matrix of a complex Gaussian high-dimensional time series $(\mathbf{y}_n)_{n \in \mathbb{Z}}$ with independent components is studied under the asymptotic regime where the sample size $N$ converges towards $+\infty$ while the dimension $M$ of $\mathbf{y}$ and the smoothing span of the estimator grow to infinity at the same rate, in such a way that $\frac{M}{N} \rightarrow 0$. It is established that, at each frequency, the estimated spectral coherency matrix is close to the sample covariance matrix of an independent identically $\mathcal{N}_{\mathbb{C}}(0,\mathbf{I}_M)$ distributed sequence, and that its empirical eigenvalue distribution converges towards the Marchenko-Pastur distribution. This allows us to conclude that each LSS has a deterministic behaviour that can be evaluated explicitly. Using concentration inequalities, it is shown that the supremum over the frequencies of the deviation of each LSS from its deterministic approximation is of order $\frac{1}{M} + \frac{\sqrt{M}}{N}+ (\frac{M}{N})^{3}$. Numerical simulations support our results.
We propose a method for unsupervised reconstruction of a temporally-consistent sequence of surfaces from a sequence of time-evolving point clouds. It yields dense and semantically meaningful correspondences between frames. We represent the reconstructed surfaces as atlases computed by a neural network, which enables us to establish correspondences between them. The key to making these correspondences semantically meaningful is to guarantee that the metric tensors computed at corresponding points are as similar as possible. We have devised an optimization strategy that makes our method robust to noise and global motions, without a priori correspondences or pre-alignment steps. As a result, our approach outperforms state-of-the-art ones on several challenging datasets. The code is available at https://github.com/bednarikjan/temporally_coherent_surface_reconstruction.
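As a rough illustration of the metric-tensor consistency idea, here is a minimal PyTorch sketch that computes the first fundamental form $g = J^\top J$ of a per-frame atlas mapping via automatic differentiation and penalizes its change between two frames; `atlas_net`, `latent_t`, and `latent_t1` are hypothetical names, and this is not the authors' exact loss.

```python
import torch

def metric_tensor(atlas_net, uv, latent):
    """First fundamental form g = J^T J of the map (u, v) -> R^3.

    atlas_net(uv, latent) is a hypothetical per-frame surface decoder;
    uv has shape (P, 2), latent conditions the decoder on one frame.
    """
    uv = uv.requires_grad_(True)
    xyz = atlas_net(uv, latent)                      # (P, 3) surface points
    # Jacobian of the 3D embedding w.r.t. the 2D atlas coordinates: (P, 3, 2)
    cols = [torch.autograd.grad(xyz[:, k].sum(), uv, create_graph=True)[0]
            for k in range(3)]
    J = torch.stack(cols, dim=1)
    return J.transpose(1, 2) @ J                     # (P, 2, 2) metric tensors

def metric_consistency_loss(atlas_net, uv, latent_t, latent_t1):
    """Encourage metric tensors at corresponding atlas points to agree
    between two consecutive frames t and t+1."""
    g_t  = metric_tensor(atlas_net, uv, latent_t)
    g_t1 = metric_tensor(atlas_net, uv, latent_t1)
    return ((g_t - g_t1) ** 2).mean()
```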
We study a class of bilevel integer programs with second-order cone constraints at the upper level and a convex quadratic objective and linear constraints at the lower level. We develop disjunctive cuts to separate bilevel-infeasible points using a second-order-cone-based cut-generating procedure. To the best of our knowledge, this is the first time disjunctive cuts are studied in the context of discrete bilevel optimization. Using these disjunctive cuts, we establish a branch-and-cut algorithm for the problem class we study, and a cutting-plane method for the problem variant with only binary variables. Our computational study demonstrates that both of our approaches outperform a state-of-the-art generic solver for mixed-integer bilevel linear programs that is able to solve a linearized version of our test instances, in which the nonlinearities are linearized in a McCormick fashion.
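For context, the textbook McCormick envelope that such a linearization relies on replaces each bilinear term $w = xy$ by four linear inequalities built from the variable bounds; when one of the variables is binary the envelope is exact. This is the standard construction, not a detail taken from the paper.

```latex
% McCormick envelope for w = x y with x \in [x^L, x^U], y \in [y^L, y^U]:
w \ge x^L y + x y^L - x^L y^L, \qquad
w \ge x^U y + x y^U - x^U y^U,
w \le x^U y + x y^L - x^U y^L, \qquad
w \le x^L y + x y^U - x^L y^U.
```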
The Half-Space Matching (HSM) method has recently been developed as a new method for the solution of 2D scattering problems with complex backgrounds, providing an alternative to Perfectly Matched Layers (PML) or other artificial boundary conditions. Based on half-plane representations for the solution, the scattering problem is rewritten as a system coupling (1) a standard finite element discretisation localised around the scatterer and (2) integral equations whose unknowns are traces of the solution on the boundaries of a finite number of overlapping half-planes contained in the domain. While satisfactory numerical results have been obtained for real wavenumbers, well-posedness and equivalence of this HSM formulation to the original scattering problem have been established only for complex wavenumbers. In the present paper we show, in the case of a homogeneous background, that the HSM formulation is equivalent to the original scattering problem also for real wavenumbers, and so is well-posed, provided the traces satisfy radiation conditions at infinity analogous to the standard Sommerfeld radiation condition. As a key component of our argument we show that, if the trace on the boundary of a half-plane satisfies our new radiation condition, then the corresponding solution to the half-plane Dirichlet problem satisfies the Sommerfeld radiation condition in a slightly smaller half-plane. We expect that this last result will be of independent interest, in particular in studies of rough surface scattering.
Reinforcement learning algorithms often require finiteness of state and action spaces in Markov decision processes (MDPs), and various efforts have been made in the literature towards the applicability of such algorithms for continuous state and action spaces. In this paper, we show that under very mild regularity conditions (in particular, involving only weak continuity of the transition kernel of an MDP), Q-learning for standard Borel MDPs via quantization of states and actions converges to a limit, and furthermore this limit satisfies an optimality equation which leads to near optimality, either with explicit performance bounds or with guarantees of asymptotic optimality. Our approach builds on (i) viewing quantization as a measurement kernel and thus a quantized MDP as a POMDP, (ii) utilizing near-optimality and convergence results of Q-learning for POMDPs, and (iii) finally, near-optimality of finite-state model approximations for MDPs with weakly continuous kernels, which we show to correspond to the fixed point of the constructed POMDP. Thus, our paper presents a very general convergence and approximation result for the applicability of Q-learning for continuous MDPs.
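A minimal sketch of the quantization idea for a scalar state and action space is given below; `env`, the grids, and the hyperparameters are hypothetical, and the learning-rate and exploration conditions required by the convergence theory are omitted.

```python
import numpy as np

def quantize(x, grid):
    """Map a continuous value to the index of the nearest grid point."""
    return int(np.argmin(np.abs(grid - x)))

def quantized_q_learning(env, state_grid, action_grid,
                         episodes=500, horizon=200,
                         alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning on quantized states/actions (illustrative only;
    `env` is a hypothetical object exposing reset() and step(action))."""
    Q = np.zeros((len(state_grid), len(action_grid)))
    for _ in range(episodes):
        s = quantize(env.reset(), state_grid)
        for _ in range(horizon):
            a = (np.random.randint(len(action_grid))
                 if np.random.rand() < eps else int(Q[s].argmax()))
            x_next, r, done = env.step(action_grid[a])
            s_next = quantize(x_next, state_grid)
            # Standard Q-learning update applied to the quantized model
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
            if done:
                break
    return Q
```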
Self-energy recycling (sER), which allows transmit energy re-utilization, has emerged as a viable option for improving the energy efficiency (EE) of low-power Internet of Things networks. In this work, we investigate its benefits also in terms of reliability improvements and compare the performance of full-duplex (FD) and half-duplex (HD) schemes when using multi-antenna techniques in a communication system. We analyze the trade-offs when considering not only the energy spent on transmission but also the circuitry power consumption, making the analysis of much more practical interest. In addition to the well-known spectral efficiency improvements, results show that FD also outperforms HD in terms of reliability. We show that sER not only benefits EE but also modifies how maximum reliability fairness between uplink and downlink transmissions, which is the main goal of this work, is achieved. To achieve this objective, we propose the use of a dynamic FD scheme in which the small base station (SBS) determines the optimal allocation of antennas for transmission and reception. We show the significant gains of this strategy in terms of system outage probability when compared to the simple HD and FD schemes.
Computational fluctuating hydrodynamics aims at understanding the impact of thermal fluctuations on fluid motion at small scales through numerical exploration. These fluctuations are modeled as stochastic flux terms and incorporated into the classical Navier-Stokes equations, which then need to be solved numerically. In this paper, we present a novel projection-based method for solving the incompressible fluctuating hydrodynamics equations. By analyzing the equilibrium structure factor spectrum of the velocity field, we investigate how the inherent splitting errors affect the numerical solution of the stochastic partial differential equations in the presence of non-periodic boundary conditions, and how iterative corrections can reduce these errors. Our computational examples demonstrate both the capability of our approach to correctly reproduce the stochastic properties of fluids at small scales and its potential use in simulations of multi-physics problems.
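As a rough sketch of how an equilibrium structure factor can be estimated from velocity snapshots (normalization conventions vary, and this is not the paper's discretization):

```python
import numpy as np

def velocity_structure_factor(snapshots, dx):
    """Structure factor S(k) = < |u_hat(k)|^2 > of a 1D velocity field,
    averaged over equilibrated snapshots (illustrative sketch only)."""
    snapshots = np.asarray(snapshots)             # shape (n_samples, n_cells)
    n = snapshots.shape[1]
    u_hat = np.fft.rfft(snapshots, axis=1) * dx   # discrete Fourier modes
    S = np.mean(np.abs(u_hat) ** 2, axis=0)       # ensemble average per mode
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)    # corresponding wavenumbers
    return k, S
```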
In this paper, the problem of describing the visual content of a video sequence with natural language is addressed. Unlike previous video captioning work that mainly exploits cues from the video content to produce a language description, we propose a reconstruction network (RecNet) with a novel encoder-decoder-reconstructor architecture, which leverages both the forward (video to sentence) and backward (sentence to video) flows for video captioning. Specifically, the encoder-decoder makes use of the forward flow to produce the sentence description based on the encoded video semantic features. Two types of reconstructors are customized to employ the backward flow and reproduce the video features based on the hidden state sequence generated by the decoder. The generation loss yielded by the encoder-decoder and the reconstruction loss introduced by the reconstructor are jointly used to train the proposed RecNet in an end-to-end fashion. Experimental results on benchmark datasets demonstrate that the proposed reconstructor can boost the encoder-decoder models and lead to significant gains in video captioning accuracy.
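A minimal sketch of such a joint objective in PyTorch is given below; the mean-squared reconstruction term and the weighting factor `lam` are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def joint_caption_loss(logits, target_tokens, reconstructed_feats,
                       video_feats, lam=0.2):
    """Joint objective in the spirit of an encoder-decoder-reconstructor:
    cross-entropy caption loss plus a feature-reconstruction penalty."""
    # Generation loss: predict each caption token from the decoder logits
    gen_loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               target_tokens.reshape(-1))
    # Reconstruction loss: reproduce the encoded video features from the
    # decoder's hidden states (here a simple mean-squared error)
    rec_loss = F.mse_loss(reconstructed_feats, video_feats)
    return gen_loss + lam * rec_loss
```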