In this paper, we study two graph convexity parameters: the iteration time and the general position number. The iteration time was defined in 1981 in the geodesic convexity, but its computational complexity has remained open. The general position number was defined in the geodesic convexity and proved NP-hard in 2018. We extend these parameters to any graph convexity and prove that the iteration time is NP-hard in the P3 convexity. We use this result to prove that the iteration time is also NP-hard in the geodesic convexity even in graphs with diameter two, settling a long-standing open question. These results are also important since they are the last two missing NP-hardness results regarding the ten most studied graph convexity parameters in the geodesic and P3 convexities. We also prove that the general position number of the monophonic convexity is W[1]-hard (parameterized by the size of the solution) and $n^{1-\varepsilon}$-inapproximable in polynomial time for any $\varepsilon>0$ unless P=NP, even in graphs with diameter two. Finally, we obtain FPT results on the general position number in the P3 convexity and prove that it is W[1]-hard (parameterized by the size of the solution).
This work is concerned with solving high-dimensional Fokker-Planck equations from the novel perspective that solving the PDE can be reduced to independent density estimation tasks based on trajectories sampled from the associated particle dynamics. With this approach, one sidesteps the error accumulation that arises from integrating the PDE dynamics on a parameterized function class. This approach also significantly simplifies deployment, as one is free of the challenges of implementing loss terms based on the differential equation. In particular, we introduce a novel class of high-dimensional functions called the functional hierarchical tensor (FHT). The FHT ansatz leverages a hierarchical low-rank structure, offering the advantage of linearly scalable runtime and memory complexity relative to the dimension count. We introduce a sketching-based technique that performs density estimation over particles simulated from the particle dynamics associated with the equation, thereby obtaining a representation of the Fokker-Planck solution in terms of our ansatz. We apply the proposed approach successfully to three challenging time-dependent Ginzburg-Landau models with hundreds of variables.
We propose a new space-time variational formulation for wave equation initial-boundary value problems. The key property is that the formulation is coercive (sign-definite) and continuous in a norm stronger than $H^1(Q)$, $Q$ being the space-time cylinder. Coercivity holds for constant-coefficient impedance cavity problems posed in star-shaped domains, and for a class of impedance-Dirichlet problems. The formulation is defined using simple Morawetz multipliers and its coercivity is proved with elementary analytical tools, following earlier work on the Helmholtz equation. The formulation can be stably discretised with any $H^2(Q)$-conforming discrete space, leading to quasi-optimal space-time Galerkin schemes. Several numerical experiments show the excellent properties of the method.
In this work, we present a new approach for constructing models of correlation matrices with a user-defined graphical structure. The graphical structure makes correlation matrices interpretable and avoids the quadratic growth in the number of parameters as a function of the dimension. We suggest an automatic approach to defining a prior for the unknown parameters in these models, using a natural sequence of simpler models within the Penalized Complexity framework. We illustrate this approach with three applications: a multivariate linear regression of four biomarkers, a multivariate disease mapping, and a multivariate longitudinal joint model. Each application underscores our method's intuitive appeal and represents a substantial step toward more cohesive models that facilitate a meaningful interpretation of correlation matrices.
In this work we consider the two-dimensional instationary Navier-Stokes equations with homogeneous Dirichlet/no-slip boundary conditions. We show error estimates for the fully discrete problem, where a discontinuous Galerkin method in time and inf-sup stable finite elements in space are used. Recently, best approximation type error estimates for the Stokes problem in the $L^\infty(I;L^2(\Omega))$, $L^2(I;H^1(\Omega))$ and $L^2(I;L^2(\Omega))$ norms have been shown. The main result of the present work extends the error estimate in the $L^\infty(I;L^2(\Omega))$ norm to the Navier-Stokes equations, by pursuing an error splitting approach and an appropriate duality argument. In order to discuss the stability of solutions to the discrete primal and dual equations, a specially tailored discrete Gronwall lemma is presented. The techniques developed towards showing the $L^\infty(I;L^2(\Omega))$ error estimate also allow us to show best approximation type error estimates in the $L^2(I;H^1(\Omega))$ and $L^2(I;L^2(\Omega))$ norms, complementing the main result.
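For context, the classical discrete Gronwall lemma (a simpler statement than the specially tailored variant the abstract refers to) reads:

```latex
% Classical discrete Gronwall lemma (context only; not the paper's tailored variant):
% if $a_n \le B + C \sum_{k=0}^{n-1} a_k$ for all $n \ge 0$ with $B, C \ge 0$, then
\[
  a_n \;\le\; B\,(1+C)^n \;\le\; B\,e^{Cn} \qquad \text{for all } n \ge 0,
\]
% which is proved by induction on $n$.
```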
In this paper, we investigate the numerical solution of two-dimensional fractional Laplacian wave equations. After splitting out the Riesz fractional derivatives from the fractional Laplacian, we treat the Riesz fractional derivatives with an implicit scheme while treating the remaining part explicitly. Thanks to the tensor structure of the Riesz fractional derivatives, a splitting alternating direction implicit (S-ADI) scheme is proposed by incorporating an ADI remainder. Then the Gohberg-Semencul formula, combined with the fast Fourier transform, is applied to solve the resulting Toeplitz linear systems at each time step. Theoretically, we demonstrate that the S-ADI scheme is unconditionally stable and possesses second-order accuracy. Finally, numerical experiments are performed to demonstrate the accuracy and efficiency of the S-ADI scheme.
In this paper, we discuss some numerical realizations of Shannon's sampling theorem. First, we show the poor convergence of classical Shannon sampling sums by presenting sharp upper and lower bounds on the norm of the Shannon sampling operator. In addition, it is known that in the presence of noise in the samples of a bandlimited function, the convergence of the Shannon sampling series may even break down completely. To overcome these drawbacks, one can use oversampling and regularization with a convenient window function. Such a window function can be chosen either in the frequency domain or in the time domain. We place particular emphasis on comparing these two approaches in terms of error decay rates. It turns out that the best numerical results are obtained by oversampling and regularization in the time domain using a sinh-type window function or a continuous Kaiser-Bessel window function, which results in an interpolating approximation with localized sampling. Several numerical experiments illustrate the theoretical results.
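For reference, the classical Shannon sampling series discussed above reconstructs a function $f$ bandlimited to $[-\pi,\pi]$ from its integer samples:

```latex
\[
  f(t) \;=\; \sum_{n \in \mathbb{Z}} f(n)\,\operatorname{sinc}(t-n),
  \qquad \operatorname{sinc}(t) := \frac{\sin(\pi t)}{\pi t};
\]
% the slow $O(1/|t-n|)$ decay of the sinc kernel is what makes truncated
% sums converge poorly and amplifies sample noise.
```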
In this study, we present a precise anisotropic interpolation error estimate for the Morley finite element method (FEM) and apply it to fourth-order elliptic equations. We do not impose a shape-regularity condition on the meshes for the analysis; therefore, anisotropic meshes can be used. The main contribution of this study is a new proof for the consistency term, which enables us to obtain an anisotropic consistency error estimate. The core idea of the proof is to exploit the relationship between the Raviart--Thomas and Morley finite element spaces. Our results show optimal convergence rates and imply that the modified Morley FEM may be effective in reducing errors.
In this paper, we study power series with coefficients equal to a product of a generic sequence and an explicitly given function of a positive parameter expressible in terms of the Pochhammer symbols. Four types of such series are treated. We show that logarithmic concavity (convexity) of the generic sequence leads to logarithmic concavity (convexity) of the sum of the series with respect to the argument of the explicitly given function. The logarithmic concavity (convexity) is derived from a stronger property, namely, positivity (negativity) of the power series coefficients of the so-called generalized Tur\'{a}nian. Applications to special functions such as the generalized hypergeometric function and the Fox-Wright function are also discussed.
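For reference, a positive sequence $(a_k)_{k\ge 0}$ is logarithmically concave (respectively, logarithmically convex) when:

```latex
\[
  a_k^2 \;\ge\; a_{k-1}\,a_{k+1} \quad (\text{resp. } a_k^2 \le a_{k-1}\,a_{k+1}),
  \qquad k \ge 1;
\]
% equivalently, the ratio $a_{k+1}/a_k$ is nonincreasing (resp. nondecreasing).
```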
Control theory deals with the study of controlling dynamical systems. Robots today are growing increasingly complex and are moving out of factory floors into real-world environments. These robots must interact with real-world factors such as disturbances, which requires them to have robust control systems. Testing control algorithms on robots in real-world environments can pose critical safety issues and can be financially expensive. This has resulted in a heavy emphasis on using simulation to test control algorithms before deploying them in the real world. Designing control algorithms is an iterative process that starts with modelling the target system in simulation, designing a controller, testing the controller in simulation, and then adjusting the controller parameters to obtain a better controller. This report explores how an approximated system model of a target hardware system can be developed and then used to design an LQR controller for the target system. The controller is then tested under a disturbance, on hardware and in simulation, and the system response is recorded. The system responses from hardware and simulation are then compared to validate the use of approximated system models in simulation for designing and testing control algorithms.
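As an illustration of the LQR design step described above, here is a minimal discrete-time sketch in Python. It uses a hypothetical double-integrator model, not the report's actual hardware system, and computes the gain by fixed-point iteration of the Riccati equation rather than a library solver.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati equation.

    Iterates P <- Q + A^T P (A - B K) with K = (R + B^T P B)^{-1} B^T P A
    until (approximate) convergence; the feedback law is u = -K x.
    """
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P_next = Q + A.T @ P @ (A - B @ K)
        if np.allclose(P_next, P, atol=1e-10):
            P = P_next
            break
        P = P_next
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P

# Hypothetical double-integrator model (position, velocity) with dt = 0.1 s.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([1.0, 0.1])  # state cost: penalize position error more than velocity
R = np.array([[0.01]])   # input cost

K, P = dlqr(A, B, Q, R)
# The closed-loop matrix A - B K should be stable (spectral radius < 1).
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print("gain:", K, "closed-loop spectral radius:", rho)
```

In the iterative workflow the abstract describes, one would re-tune `Q` and `R`, re-simulate the closed loop under a disturbance, and compare against the hardware response.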
In this paper, we first introduce and define several new information divergences on the space of transition matrices of finite Markov chains which measure the discrepancy between two Markov chains. These divergences offer natural generalizations of classical information-theoretic divergences, such as the $f$-divergences and the R\'enyi divergence between probability measures, to the context of finite Markov chains. We begin by detailing and deriving fundamental properties of these divergences, notably giving Markov chain versions of Pinsker's inequality and the Chernoff information. We then utilize these notions in a few applications. First, we investigate the binary hypothesis testing problem for Markov chains, where the newly defined R\'enyi divergence between Markov chains and its geometric interpretation play an important role in the analysis. Second, we propose and analyze information-theoretic (Ces\`aro) mixing times and ergodicity coefficients, along with spectral bounds on these notions in the reversible setting. Examples of the random walk on the hypercube, as well as the connections between the critical height of the low-temperature Metropolis-Hastings chain and these proposed ergodicity coefficients, are highlighted.
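For context, the classical Pinsker inequality for probability measures, which the Markov chain version mentioned above generalizes, is:

```latex
\[
  \|P - Q\|_{\mathrm{TV}} \;\le\; \sqrt{\tfrac{1}{2}\, D_{\mathrm{KL}}(P \,\|\, Q)},
\]
% where $\|P - Q\|_{\mathrm{TV}} = \sup_{A} |P(A) - Q(A)|$ is the total
% variation distance and $D_{\mathrm{KL}}$ the Kullback--Leibler divergence.
```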