A computationally efficient high-order solver is developed to compute wall distances, which are typically used in turbulence modelling, peripheral flow simulations, and Computer Aided Design (CAD). The wall distances are computed by solving one of three differential equations: the Eikonal, Hamilton-Jacobi (H-J), or Poisson equation. The computational benefit of using high-order (explicit/compact) schemes for wall-distance solvers, in terms of both accuracy and computational time, has been demonstrated. A new H-J formulation based on the localized artificial diffusivity (LAD) approach has been proposed, which produces results with an accuracy comparable to that of the Eikonal formulation. Compared to the baseline H-J solver using upwind schemes, the solution accuracy has improved by an order of magnitude and the calculations are $\approx$ 5 times faster with the modified H-J formulation. A modified curvature correction has also been implemented in the H-J solver to account for near-wall errors due to concave/convex wall curvature. The performance of the solver with different schemes has been tested on both steady canonical test cases and unsteady test cases such as a 'piston-cylinder arrangement', a 'bouncing cube', and the 'burning of a star-grain propellant', where the wall distance evolves with time.
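To make the Eikonal formulation concrete, the sketch below solves the basic first-order wall-distance problem $|\nabla d| = 1$ with $d = 0$ on the wall, using a fast-sweeping Godunov update on a uniform 2D grid. It illustrates the governing equation only, not the paper's high-order LAD-based H-J solver, and the grid parameters are illustrative.

```python
# Minimal first-order Eikonal wall-distance sketch (fast sweeping).
import numpy as np

def eikonal_wall_distance(wall_mask, h=1.0, n_sweeps=8):
    """Solve |grad d| = 1 with d = 0 on wall cells (Godunov upwind update)."""
    d = np.where(wall_mask, 0.0, np.inf)
    ni, nj = d.shape
    orders = [(range(ni), range(nj)),
              (range(ni - 1, -1, -1), range(nj)),
              (range(ni), range(nj - 1, -1, -1)),
              (range(ni - 1, -1, -1), range(nj - 1, -1, -1))]
    for _ in range(n_sweeps):
        for I, J in orders:                      # alternate sweep directions
            for i in I:
                for j in J:
                    if wall_mask[i, j]:
                        continue
                    a = min(d[i - 1, j] if i > 0 else np.inf,
                            d[i + 1, j] if i < ni - 1 else np.inf)
                    b = min(d[i, j - 1] if j > 0 else np.inf,
                            d[i, j + 1] if j < nj - 1 else np.inf)
                    if not np.isfinite(min(a, b)):
                        continue                 # no upwind information yet
                    if abs(a - b) >= h:          # one-sided update
                        cand = min(a, b) + h
                    else:                        # two-sided update
                        cand = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
                    d[i, j] = min(d[i, j], cand)
    return d

# Distance from the bottom wall of the unit square: d(i, j) -> j * h.
mask = np.zeros((64, 64), dtype=bool)
mask[:, 0] = True
d = eikonal_wall_distance(mask, h=1.0 / 63)
```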
This work investigates the use of neural networks admitting high-order derivatives for modeling dynamic variations of smooth implicit surfaces. For this purpose, it extends the representation of differentiable neural implicit surfaces to higher dimensions, which opens up mechanisms for exploiting geometric transformations in many settings, from animation and surface evolution to shape morphing and design galleries. The problem is modeled by a $k$-parameter family of surfaces $S_c$, specified as a neural network function $f : \mathbb{R}^3 \times \mathbb{R}^k \rightarrow \mathbb{R}$, where $S_c$ is the zero-level set of the implicit function $f(\cdot, c) : \mathbb{R}^3 \rightarrow \mathbb{R}$ for $c \in \mathbb{R}^k$, with variations induced by the control variable $c$. In this context, when restricted to each coordinate of $\mathbb{R}^k$, the underlying representation is a neural homotopy that is the solution of a general partial differential equation.
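As a minimal sketch of this representation, the network below implements $f : \mathbb{R}^3 \times \mathbb{R}^k \rightarrow \mathbb{R}$ with smooth (sine) activations so that high-order derivatives in both $x$ and $c$ are available through automatic differentiation. The layer widths and activation choice are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of a k-parameter family of neural implicit surfaces in PyTorch.
import torch
import torch.nn as nn

class Sine(nn.Module):
    """Smooth periodic activation, so f has derivatives of all orders."""
    def forward(self, t):
        return torch.sin(t)

class FamilyOfSurfaces(nn.Module):
    """f : R^3 x R^k -> R; S_c is the zero-level set of f(., c)."""
    def __init__(self, k=1, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + k, hidden), Sine(),
            nn.Linear(hidden, hidden), Sine(),
            nn.Linear(hidden, 1))
    def forward(self, x, c):
        return self.net(torch.cat([x, c], dim=-1)).squeeze(-1)

f = FamilyOfSurfaces(k=1)
x = torch.randn(8, 3, requires_grad=True)   # query points
c = torch.zeros(8, 1)                       # control parameter per point
vals = f(x, c)
# Spatial gradient (surface normals, up to normalization) via autograd:
(grad_x,) = torch.autograd.grad(vals.sum(), x, create_graph=True)
```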
In this paper we study the eigenvalues of the angular spheroidal wave equation and its generalization, the Coulomb spheroidal wave equation. An associated differential system and a formula for the connection coefficients between the various Floquet solutions give rise to an entire function whose zeros are exactly the eigenvalues of the Coulomb spheroidal wave equation. This entire function can be calculated by means of a recurrence formula with arbitrary accuracy and low computational cost. Finally, one obtains an easy-to-use method for computing spheroidal eigenvalues and the corresponding eigenfunctions.
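The entire-function recurrence itself is the paper's contribution and is not reproduced here; as a reference point, SciPy already exposes characteristic values and angular functions for the ordinary (non-Coulomb) spheroidal case, against which such a method can be validated.

```python
# Reference spheroidal eigenvalues/eigenfunctions from SciPy (prolate case).
import numpy as np
from scipy.special import pro_cv, pro_ang1

m, n, c = 0, 2, 1.5                 # order m, degree n, spheroidal parameter c
lam = pro_cv(m, n, c)               # characteristic value (eigenvalue)
eta = np.linspace(-0.9, 0.9, 5)
S = [pro_ang1(m, n, c, e)[0] for e in eta]   # angular function values S_mn(c, eta)
print(lam, S)
```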
This paper provides a comprehensive framework to analyze the performance of non-orthogonal multiple access (NOMA) in the downlink transmission of single-carrier and multi-carrier terahertz (THz) networks. Specifically, we first develop a novel user-pairing scheme for the THz-NOMA network which ensures the performance gains of NOMA over orthogonal multiple access (OMA) for each individual user in the NOMA pair and adapts to the molecular absorption. Then, we characterize novel outage probability expressions for single-carrier and multi-carrier THz-NOMA networks in the presence of various user-pairing schemes, Nakagami-$m$ channel fading, and molecular absorption noise. We propose a moment-generating-function (MGF) based approach to analyze the outage probability of users in a multi-carrier THz network. Furthermore, for negligible thermal noise, we provide simplified single-integral expressions to compute the outage probability in a multi-carrier network. Numerical results demonstrate the performance of the proposed user-pairing scheme and validate the accuracy of the derived expressions.
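For intuition about the quantities being derived, the Monte Carlo sketch below estimates the outage probability of a single THz link with Nakagami-$m$ fading (channel power gain distributed as Gamma$(m, \Omega/m)$) and an exponential molecular-absorption path loss. The link-budget numbers are illustrative, and the paper's MGF-based and single-integral expressions are not reproduced.

```python
# Monte Carlo outage probability for one THz link under Nakagami-m fading.
import numpy as np

rng = np.random.default_rng(0)
m, Omega = 3.0, 1.0             # Nakagami fading parameter and mean power
k_a, dist = 0.05, 20.0          # absorption coefficient [1/m], distance [m]
tx_snr, rate = 1e6, 8.0         # transmit SNR and target rate [bit/s/Hz]

g = rng.gamma(shape=m, scale=Omega / m, size=10**6)   # fading power gain
snr = tx_snr * g * np.exp(-k_a * dist) / dist**2      # absorption + spreading
outage = np.mean(np.log2(1.0 + snr) < rate)           # rate below target
print(f"outage probability ~ {outage:.4f}")
```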
We develop a stable finite difference method for the elastic wave equation in bounded media, where the material properties can be discontinuous at curved interfaces. The governing equation is discretized in second-order form by a fourth- or sixth-order accurate summation-by-parts operator. The mesh size is determined by the velocity structure of the material, resulting in nonconforming grid interfaces with hanging nodes. We use order-preserving interpolation and the ghost-point technique to couple adjacent mesh blocks in an energy-conserving manner, which is supported by a fully discrete stability analysis. In our previous work for the wave equation, two pairs of order-preserving interpolation operators were needed when imposing the interface conditions weakly by a penalty technique. Here, we only use one pair in the ghost-point method. In numerical experiments, we demonstrate that the convergence rate is optimal, and is the same as when a globally uniform mesh is used in a single domain. In addition, with a predictor-corrector time integration method, we obtain time-stepping stability with a step size almost the same as that given by the usual Courant-Friedrichs-Lewy condition.
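The energy-conserving coupling rests on the summation-by-parts property $HD + (HD)^T = B$. The sketch below builds the classical second-order SBP first-derivative operator and verifies this identity numerically; the paper itself uses fourth- and sixth-order variants.

```python
# Verify the SBP identity H*D + (H*D)^T = B for the 2nd-order operator.
import numpy as np

def sbp_2nd_order(n, h):
    """Second-order SBP first-derivative operator D = H^{-1} Q on n nodes."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0          # diagonal norm (quadrature) matrix
    D = np.zeros((n, n))
    D[0, :2] = [-1.0, 1.0]                 # one-sided boundary stencils
    D[-1, -2:] = [-1.0, 1.0]
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5, 0.5   # central interior stencil
    return H, D / h

n, h = 21, 1.0 / 20
H, D = sbp_2nd_order(n, h)
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.allclose(H @ D + (H @ D).T, B))   # SBP identity -> True
```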
We consider situations where the applicability of sequential Monte Carlo particle filters is compromised by the expensive evaluation of the particle weights. To alleviate this problem, we propose a new particle filter algorithm based on the multilevel approach. We show that the resulting multilevel bootstrap particle filter (MLBPF) retains the strong law of large numbers as well as the central limit theorem of classical particle filters under mild conditions. Our numerical experiments demonstrate up to an 85\% reduction in computation time compared to the classical bootstrap particle filter in certain settings. Although this reduction is highly application dependent and a similar gain should not be expected across the board, we believe that this substantial improvement makes MLBPF an important addition to the family of sequential Monte Carlo methods.
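For context, the sketch below implements the classical bootstrap particle filter on a linear-Gaussian state-space model; MLBPF modifies the weighting step by evaluating cheap and expensive likelihood approximations on different particle subsets, which is not reproduced here.

```python
# Classical bootstrap particle filter on x_t = a*x_{t-1} + noise, y_t = x_t + noise.
import numpy as np

rng = np.random.default_rng(1)
T, N = 50, 1000
a, sx, sy = 0.9, 1.0, 0.5            # dynamics coeff., state / obs. noise std

# Simulate synthetic data.
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + sx * rng.standard_normal()
y = x + sy * rng.standard_normal(T)

# Bootstrap particle filter.
p = rng.standard_normal(N)
means = []
for t in range(T):
    p = a * p + sx * rng.standard_normal(N)     # propagate from the prior
    logw = -0.5 * ((y[t] - p) / sy) ** 2        # weight by the likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    means.append(np.sum(w * p))                 # filtering mean estimate
    p = rng.choice(p, size=N, p=w)              # multinomial resampling
print(means[-1], x[-1])
```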
This paper concerns time-domain Full Waveform Inversion (FWI) for dispersive and dissipative poroelastic materials. The forward problem is an initial boundary value problem (IBVP) for the poroelastic equations with a memory term; the FWI is formulated as a minimization problem for a least-squares misfit function with the IBVP as the constraint. In this paper, we derive the adjoint problem of this minimization problem, whose solution can be applied to compute the direction of steepest descent in the iterative minimization process. The adjoint problem has a numerical structure similar to that of the forward problem and hence can be solved by the same numerical solver. Because tracking the energy evolution plays an important role in FWI for dissipative and dispersive equations, an energy analysis of the forward system is also carried out in this paper.
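The adjoint-state structure can be illustrated on a toy discrete problem: a forward solve $A(m)u = f$, a least-squares misfit, and one extra adjoint solve with the same operator yielding the gradient. The elliptic toy system below only mirrors the forward/adjoint pairing of the paper's poroelastic IBVP.

```python
# Toy adjoint-state gradient: J(m) = 0.5*||u - d||^2 with A(m) u = f.
import numpy as np

n = 50
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness-like matrix
f = np.ones(n)
m_true = 1.0 + 0.5 * np.sin(np.linspace(0.0, np.pi, n))
d = np.linalg.solve(K + np.diag(m_true), f)            # synthetic data

def misfit_and_gradient(m):
    A = K + np.diag(m)
    u = np.linalg.solve(A, f)          # forward solve
    r = u - d                          # data residual
    lam = np.linalg.solve(A.T, r)      # adjoint solve, same operator/solver
    return 0.5 * r @ r, -lam * u       # dJ/dm_i = -lam_i * u_i

m0 = np.ones(n)
J, g = misfit_and_gradient(m0)

# Finite-difference check of the first gradient component.
eps = 1e-6
e0 = np.zeros(n); e0[0] = 1.0
Jp, _ = misfit_and_gradient(m0 + eps * e0)
print(g[0], (Jp - J) / eps)            # the two numbers should agree
```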
The Green's function approach of Giles and Pierce is used to build lift- and drag-based analytic adjoint solutions for the two-dimensional incompressible Euler equations around irrotational base flows. The drag-based adjoint solution turns out to have a very simple closed form in terms of the flow variables and is smooth throughout the flow domain, while the lift-based solution is singular at rear stagnation points and sharp trailing edges owing to the Kutta condition. This singularity is propagated to the whole dividing streamline (comprising the incoming stagnation streamline and the wall) upstream of the rear singularity (trailing edge or rear stagnation point) by the sensitivity of the Kutta condition to changes in the stagnation pressure.
Alternating Direction Method of Multipliers (ADMM) is a widely used tool for machine learning in distributed settings, where a machine learning model is trained over distributed data sources through an interactive process of local computation and message passing. Such an iterative process can raise privacy concerns for data owners. The goal of this paper is to provide differential privacy for ADMM-based distributed machine learning. Prior approaches to differentially private ADMM exhibit low utility under strong privacy guarantees and often assume the objective functions of the learning problems to be smooth and strongly convex. To address these concerns, we propose a novel differentially private ADMM-based distributed learning algorithm called DP-ADMM, which combines an approximate augmented Lagrangian function with time-varying Gaussian noise addition in the iterative process to achieve higher utility for general objective functions under the same differential privacy guarantee. We also apply the moments accountant method to bound the end-to-end privacy loss. The theoretical analysis shows that DP-ADMM can be applied to a wider class of distributed learning problems, is provably convergent, and offers an explicit utility-privacy tradeoff. To our knowledge, this is the first paper to provide explicit convergence and utility properties for differentially private ADMM-based distributed learning algorithms. The evaluation results demonstrate that our approach achieves good convergence and model accuracy under a strong end-to-end differential privacy guarantee.
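As a schematic of the noisy-iterate idea (not DP-ADMM itself), the sketch below runs consensus ADMM for distributed ridge regression with Gaussian noise added to each worker's shared primal update; the approximate augmented Lagrangian, time-varying noise schedule, and moments-accountant calibration of the noise scale are omitted.

```python
# Schematic noisy consensus ADMM for distributed least squares.
import numpy as np

rng = np.random.default_rng(2)
M, n, rho, sigma = 5, 10, 1.0, 0.05          # workers, dim, penalty, noise std
A = [rng.standard_normal((40, n)) for _ in range(M)]
xs = rng.standard_normal(n)                  # ground-truth model
b = [Ai @ xs + 0.1 * rng.standard_normal(40) for Ai in A]

z = np.zeros(n)
u = [np.zeros(n) for _ in range(M)]
for it in range(100):
    x = []
    for i in range(M):                       # local primal update per worker
        xi = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(n),
                             A[i].T @ b[i] + rho * (z - u[i]))
        x.append(xi + sigma * rng.standard_normal(n))   # privatize message
    z = np.mean([x[i] + u[i] for i in range(M)], axis=0)  # consensus update
    for i in range(M):
        u[i] += x[i] - z                     # dual update
print(np.linalg.norm(z - xs))
```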
We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can be trained by maximum likelihood without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
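A minimal continuous-depth block can be written in a few lines: the hidden state follows $dh/dt = f(h, t; \theta)$ and is integrated here with a fixed-step RK4 scheme, with gradients obtained by ordinary autograd through the solver steps. The paper's adjoint method avoids this memory cost and works with black-box adaptive solvers.

```python
# Minimal continuous-depth block with fixed-step RK4 (memory-hungry variant).
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Parameterizes the hidden-state derivative dh/dt = f(h, t; theta)."""
    def __init__(self, dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 32), nn.Tanh(),
                                 nn.Linear(32, dim))
    def forward(self, t, h):
        th = t.expand(h.shape[0], 1)          # broadcast time over the batch
        return self.net(torch.cat([h, th], dim=-1))

def odeint_rk4(f, h0, t0=0.0, t1=1.0, steps=20):
    """Fixed-step RK4 integration; autograd flows through every step."""
    h, t = h0, torch.tensor(t0)
    dt = (t1 - t0) / steps
    for _ in range(steps):
        k1 = f(t, h)
        k2 = f(t + dt / 2, h + dt / 2 * k1)
        k3 = f(t + dt / 2, h + dt / 2 * k2)
        k4 = f(t + dt, h + dt * k3)
        h = h + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + dt
    return h

func = ODEFunc()
h1 = odeint_rk4(func, torch.randn(8, 4))      # differentiable in func's params
h1.sum().backward()                           # gradients via plain autograd
```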
In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improved dependence on the condition numbers.
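The dual viewpoint is easy to see in the simplest instance, consensus least squares $f_i(x) = \frac{1}{2}\|x - \theta_i\|^2$ with the network encoded by a graph Laplacian $W$: the dual gradient $Wx(\lambda)$ requires only neighbor communication, and Nesterov's momentum is applied to the dual variable. The constants below are illustrative, not the paper's.

```python
# Accelerated dual ascent for consensus least squares over a path graph.
import numpy as np

m = 5
W = np.array([[ 1, -1,  0,  0,  0],        # path-graph Laplacian
              [-1,  2, -1,  0,  0],
              [ 0, -1,  2, -1,  0],
              [ 0,  0, -1,  2, -1],
              [ 0,  0,  0, -1,  1]], dtype=float)
theta = np.arange(m, dtype=float)          # local targets

def x_of(lam):
    """Primal minimizer of the Lagrangian for f_i(x) = 0.5*||x - theta_i||^2."""
    return theta - W @ lam

L = np.linalg.eigvalsh(W)[-1] ** 2         # smoothness constant of the dual
lam = np.zeros(m)
prev = lam.copy()
for k in range(300):
    yk = lam + (k / (k + 3)) * (lam - prev)   # Nesterov momentum
    prev = lam
    lam = yk + (1.0 / L) * (W @ x_of(yk))     # dual gradient: neighbor comms only
x = x_of(lam)
print(x, theta.mean())                     # -> consensus at the average
```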