In this paper, we develop a framework for constructing energy-preserving methods for multi-component Hamiltonian systems, combining the exponential integrator with the partitioned averaged vector field method. This leads to numerical schemes that enjoy both long-time stability and excellent behavior for highly oscillatory or stiff problems. Compared to existing energy-preserving exponential integrators (EP-EI), our proposed methods are much more efficient in practical implementation: they can at least be computed subsystem by subsystem instead of solving a fully coupled nonlinear system at once. Moreover, in most cases, such as the Klein-Gordon-Schr\"{o}dinger equations and the Klein-Gordon-Zakharov equations considered in this paper, the computational cost can be reduced further: one part of the derived schemes is fully explicit, and the other is linearly implicit. In addition, we present a rigorous proof that the schemes conserve the original energy of the Hamiltonian systems, using an alternative technique that requires no additional assumptions, in contrast to the proof strategies used for existing EP-EI. Numerical experiments demonstrate significant advantages in accuracy, computational efficiency, and the ability to capture highly oscillatory solutions.
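As a toy illustration of the energy-preservation mechanism (the plain averaged vector field method on a single pendulum, not one of the paper's exponential schemes), the AVF integral has a closed form for this separable Hamiltonian, giving a discrete-gradient update that conserves the energy exactly; all step sizes and iteration counts below are illustrative choices:

```python
import numpy as np

def avf_step(q, p, h, tol=1e-14, max_iter=200):
    """One averaged vector field (AVF) step for the pendulum
    H(q, p) = p**2 / 2 + (1 - cos q).  For this separable Hamiltonian the
    AVF integral has a closed form, so the momentum update uses the
    difference quotient (V(q_{n+1}) - V(q_n)) / (q_{n+1} - q_n), and the
    implicit equations are solved by fixed-point iteration."""
    q_new, p_new = q, p
    for _ in range(max_iter):
        dq = q_new - q
        if abs(dq) < 1e-12:
            force = -np.sin(q)  # limit of the difference quotient as dq -> 0
        else:
            force = (np.cos(q_new) - np.cos(q)) / dq
        q_next = q + 0.5 * h * (p + p_new)
        p_next = p + h * force
        converged = abs(q_next - q_new) + abs(p_next - p_new) < tol
        q_new, p_new = q_next, p_next
        if converged:
            break
    return q_new, p_new

def energy(q, p):
    return 0.5 * p * p + (1.0 - np.cos(q))

q, p = 1.0, 0.0
E0 = energy(q, p)
for _ in range(1000):
    q, p = avf_step(q, p, 0.1)
drift = abs(energy(q, p) - E0)  # stays at roundoff level over 1000 steps
```

The point of the sketch is that the discrete energy difference telescopes to zero by construction, so the drift is limited only by the fixed-point tolerance and roundoff.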
Power allocation is one of the fundamental problems in wireless networks, and a wide variety of algorithms address it from different perspectives. A common element among these algorithms is that they rely on an estimate of the channel state, which may be inaccurate on account of hardware defects, noisy feedback, and environmental or adversarial disturbances. It is therefore essential that the output power allocation of these algorithms be stable with respect to input perturbations, in the sense that bounded variations in the input produce bounded variations in the output. In this paper, we focus on UWMMSE, a modern algorithm leveraging graph neural networks, and illustrate its stability to additive input perturbations of bounded energy through both theoretical analysis and empirical validation.
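A minimal sketch of the stability notion, using a single linear graph filter as a stand-in for UWMMSE (the matrix, sizes, and perturbation scale are all illustrative assumptions, not the paper's model): the output variation is bounded by the filter's spectral norm times the input perturbation energy.

```python
import numpy as np

# Toy stand-in for a graph-based power allocator: a single linear graph
# filter y = A @ x.  UWMMSE stacks learned layers; this only illustrates
# the kind of Lipschitz bound the stability analysis provides:
#   ||f(x + d) - f(x)|| <= C * ||d||  for bounded-energy perturbations d.
rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
A = (A + A.T) / 2              # symmetric "channel" matrix (illustrative)
C = np.linalg.norm(A, 2)       # spectral norm = Lipschitz constant of x -> A @ x

x = rng.standard_normal(n)            # nominal channel-state input
d = 0.01 * rng.standard_normal(n)     # additive perturbation of bounded energy

out_gap = np.linalg.norm(A @ (x + d) - A @ x)
bound = C * np.linalg.norm(d)         # out_gap never exceeds this bound
```

For a linear filter the bound is immediate; the paper's contribution is establishing an analogous bound through the nonlinear, learned layers of UWMMSE.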
In this work, we design and analyze a Hybrid High-Order (HHO) discretization method for incompressible flows of non-Newtonian fluids with power-like convective behaviour. We work under general assumptions on the viscosity and convection laws, which are associated with possibly different Sobolev exponents r > 1 and s > 1. After providing a novel weak formulation of the continuous problem, we study its well-posedness, highlighting how a subtle interplay between the exponents r and s determines the existence and uniqueness of a solution. We next design an HHO scheme based on this weak formulation and perform a comprehensive stability and convergence analysis, including convergence for general data and error estimates for shear-thinning fluids and small data. The HHO scheme is validated on a complete panel of model problems.
In this paper, we design and analyze a Hybrid High-Order discretization method for the steady motion of non-Newtonian, incompressible fluids in the Stokes approximation of small velocities. The proposed method has several appealing features, including support for general meshes and arbitrarily high approximation orders, unconditional inf-sup stability, and orders of convergence that match those obtained for scalar Leray-Lions problems. A complete well-posedness and convergence analysis of the method is carried out under new, general assumptions on the strain rate-shear stress law, which encompass several common examples such as the power-law and Carreau-Yasuda models. Numerical examples complete the exposition.
A contiguous area cartogram is a geographic map in which the area of each region is rescaled to be proportional to numerical data (e.g., population size) while keeping neighboring regions connected. Few studies have investigated whether readers can make accurate quantitative assessments using contiguous area cartograms. Therefore, we conducted an experiment to determine the accuracy, speed, and confidence with which readers infer numerical data values for the mapped regions. We investigated whether including an area-to-value legend (in the form of a square symbol next to the value represented by the square's area) makes it easier for map readers to estimate magnitudes. We also evaluated the effectiveness of two additional features: grid lines and an interactive area-to-value legend that allows participants to select the value represented by the square. Without any legends and only informed about the total numerical value represented by the whole cartogram, the distribution of estimates for individual regions was centered near the true value with substantial spread. Selectable legends with grid lines significantly reduced the spread but led to a tendency to underestimate the values. When comparing differences between regions or between cartograms, legends and grid lines made estimation slower but not more accurate. However, legends and grid lines made it more likely that participants completed the tasks. We recommend considering the cartogram's use case and purpose before deciding whether to include grid lines or an interactive legend.
Non-orthogonal multiple access (NOMA) is considered a key technology for improving the spectral efficiency of fifth-generation (5G) and beyond-5G cellular networks. NOMA is beneficial when the channel vectors of the users are in the same direction, which is not always the case in conventional wireless systems. With the help of a reconfigurable intelligent surface (RIS), the base station can control the directions of the users' channel vectors. Thus, by combining both technologies, RIS-assisted NOMA systems are expected to achieve greater improvements in network throughput. However, ideal phase control at the RIS is unrealizable in practice because of imperfections in channel estimation and hardware limitations, and this imperfect phase control can have a significant impact on system performance. Motivated by this, we consider an RIS-assisted uplink NOMA system in the presence of imperfect phase compensation. We formulate a criterion for pairing users that achieves the minimum required data rates, and we propose adaptive user pairing algorithms that maximize spectral or energy efficiency. We then derive various bounds on the power allocation factors for the paired users. Through extensive simulations, we show that the proposed algorithms significantly outperform state-of-the-art algorithms in terms of spectral and energy efficiency.
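For context, a minimal sketch of the classical strong-weak pairing heuristic often used as a NOMA baseline (this is not the paper's adaptive algorithm, and the channel-gain values are made up): users are sorted by channel gain and the strongest is paired with the weakest, the second strongest with the second weakest, and so on, so that each pair has the gain disparity that successive interference cancellation exploits.

```python
# Strong-weak pairing heuristic for NOMA (illustrative baseline only).
def strong_weak_pairing(gains):
    """Pair user indices so each pair couples a high-gain and a low-gain
    user: strongest with weakest, second strongest with second weakest."""
    order = sorted(range(len(gains)), key=lambda i: gains[i], reverse=True)
    return [(order[i], order[-1 - i]) for i in range(len(order) // 2)]

# Hypothetical normalized channel gains for six users:
pairs = strong_weak_pairing([0.9, 0.1, 0.5, 0.3, 0.7, 0.2])
# -> [(0, 1), (4, 5), (2, 3)]
```

Each returned pair couples a near user and a far user; the paper's adaptive algorithms instead choose pairs to meet minimum-rate constraints while maximizing spectral or energy efficiency.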
We propose a novel numerical method for high-dimensional Hamilton--Jacobi--Bellman (HJB) type elliptic partial differential equations (PDEs). The HJB PDEs, reformulated as optimal control problems, are tackled by an actor-critic framework inspired by reinforcement learning, based on neural network parametrizations of the value and control functions. Within this framework, we employ a policy gradient approach to improve the control, while for the value function we derive a variance-reduced least-squares temporal difference method using stochastic calculus. To discretize the stochastic control problem numerically, we employ an adaptive step size scheme to improve the accuracy near the domain boundary. Numerical examples in up to $20$ spatial dimensions, including linear quadratic regulators, stochastic Van der Pol oscillators, diffusive Eikonal equations, and fully nonlinear elliptic PDEs derived from a regulator problem, are presented to validate the effectiveness of our proposed method.
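A bare-bones sketch of the adaptive-step idea (the specific shrinking rule `dt ~ dist**2` and all constants are illustrative choices, not the paper's scheme): when simulating the controlled diffusion, the Euler-Maruyama step is shrunk near the boundary so the exit position and time are resolved accurately.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_exit(x0, dt_max=1e-2, radius=1.0):
    """Euler-Maruyama simulation of dX_t = dW_t inside a ball, with the
    step size shrunk near the boundary: large steps in the interior,
    steps proportional to the squared distance to the boundary close to
    it (with a small floor so the walk terminates)."""
    x = np.array(x0, dtype=float)
    t = 0.0
    while np.linalg.norm(x) < radius:
        dist = radius - np.linalg.norm(x)
        dt = min(dt_max, max(0.5 * dist * dist, 1e-6))
        x = x + np.sqrt(dt) * rng.standard_normal(x.size)
        t += dt
    return t, x

t_exit, x_exit = simulate_exit([0.0, 0.0])
# x_exit lies just outside the unit circle; the overshoot is small
# because the steps shrink as the walk approaches the boundary.
```

With a fixed large step, the walk would typically overshoot the boundary by an amount comparable to the step size, biasing the exit statistics that the value-function estimate depends on.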
Optimal experimental design (OED) plays an important role in quantifying uncertainty from limited experimental data. In many applications, we seek to minimize the uncertainty of a predicted quantity of interest (QoI) based on the solution of the inverse problem, rather than that of the inverted model parameter itself. For these scenarios, we develop an efficient method for goal-oriented optimal experimental design (GOOED) for large-scale Bayesian linear inverse problems that finds sensor locations maximizing the expected information gain (EIG) for a predicted QoI. By deriving a new formula to compute the EIG and exploiting low-rank structures of two appropriate operators, we are able to employ an online-offline decomposition scheme and a swapping greedy algorithm to maximize the EIG at a cost, measured in model solutions, that is independent of the problem dimensions. We provide a detailed error analysis of the approximate EIG, and we demonstrate the efficiency, accuracy, and both data- and parameter-dimension independence of the proposed algorithm on a contaminant transport inverse problem with an infinite-dimensional parameter field.
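To illustrate the greedy flavor of such sensor selection (a toy log-determinant criterion over a dense covariance, not the paper's EIG formula or its swapping refinement; the matrix below is synthetic), each round adds the candidate sensor whose inclusion maximizes the information of the selected block:

```python
import numpy as np

def greedy_d_optimal(C, k):
    """Greedy D-optimal selection: grow the sensor set one index at a
    time, each time adding the candidate that maximizes log det of the
    selected block of the covariance matrix C."""
    chosen = []
    for _ in range(k):
        best_j, best_val = None, -np.inf
        for j in range(C.shape[0]):
            if j in chosen:
                continue
            idx = chosen + [j]
            sign, logdet = np.linalg.slogdet(C[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_val:
                best_j, best_val = j, logdet
        chosen.append(best_j)
    return chosen

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10))
C = A @ A.T + 1e-3 * np.eye(10)   # synthetic SPD covariance of 10 candidates
sensors = greedy_d_optimal(C, 3)  # indices of 3 selected sensor locations
```

The paper's method replaces this brute-force criterion with the low-rank EIG formula, so each candidate evaluation costs a number of model solutions independent of the parameter and data dimensions.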
The Navier-Stokes equations are well known for modelling incompressible Newtonian fluids such as air or water. This system of equations is very complex owing to the nonlinear term that characterizes it. After linearization and discretization, we obtain an index-2 descriptor system described by a set of differential algebraic equations (DAEs). This paper develops two main contributions. First, we construct an efficient algorithm based on a projection technique onto an extended block Krylov subspace, which allows us to build a reduced model of the original DAE system. Second, we solve a Linear Quadratic Regulator (LQR) problem via a Riccati feedback approach, which relies on numerical solutions of large-scale algebraic Riccati equations. To this end, we use the extended Krylov subspace method to project the initial large-scale problem onto a low-order one that is solved by direct methods. These numerical solutions yield a feedback matrix used to stabilize the original system. We conclude with numerical results confirming the performance of our proposed method compared to other known methods.
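A small dense sketch of the Riccati-feedback step (a stand-in for the reduced, projected problem; the matrices and the stabilizing shift are illustrative, and SciPy's dense ARE solver replaces the Krylov machinery used at large scale): solve the algebraic Riccati equation, form the feedback gain, and check that the closed loop is stable.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative reduced-order system matrices (in the paper these arise
# from projecting the large DAE system onto an extended Krylov subspace).
rng = np.random.default_rng(0)
n, m = 6, 2
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)  # shift keeps the example mild
B = rng.standard_normal((n, m))
Q = np.eye(n)
R = np.eye(m)

# Solve A'P + PA - P B R^{-1} B' P + Q = 0, then form the LQR feedback.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)    # Riccati feedback u = -K x

# All eigenvalues of the closed-loop matrix A - B K have negative real part.
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

At large scale the same three steps apply, except the ARE is solved on the low-order projected problem and the gain is lifted back to the original coordinates.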
Optimization under uncertainty and risk is indispensable in many practical situations. Our paper addresses the stability of optimization problems using composite risk functionals that are subject to measure perturbations. Our main focus is the asymptotic behavior of data-driven formulations with empirical or smoothing estimators, such as kernels or wavelets, applied to some or all functions of the compositions. We analyze the properties of the new estimators, and we establish a strong law of large numbers, consistency, and bias-reduction potential under fairly general assumptions. Our results are germane to risk-averse optimization and to data science in general.
Kendall transformation is a conversion of an ordered feature into a vector of pairwise order relations between individual values. This way, it preserves the ranking of observations and represents it in a categorical form. Such a transformation allows methods requiring strictly categorical input to be generalised, especially in the limit of a small number of observations, where discretisation becomes problematic. In particular, many information-theoretic approaches can be applied directly to Kendall-transformed continuous data without relying on differential entropy or any additional parameters. Moreover, by restricting information to that contained in the ranking, Kendall transformation leads to better robustness at the reasonable cost of discarding sophisticated interactions that are unlikely to be correctly estimated anyway. In bivariate analysis, Kendall transformation can be related to popular non-parametric methods, showing the soundness of the approach. The paper also demonstrates its efficiency in multivariate problems and provides an example analysis of real-world data.
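The transformation itself is straightforward to sketch (the function name and the `<`/`>`/`=` encoding are illustrative conventions): each ordered pair of observations is mapped to its order relation, so any monotone transformation of the input leaves the result unchanged.

```python
import numpy as np

def kendall_transform(x):
    """Kendall transformation: map a numeric vector to the vector of
    pairwise order relations ('<', '>' or '=') between its values,
    taken over all ordered pairs of distinct positions."""
    x = np.asarray(x)
    return np.array([
        '<' if x[i] < x[j] else ('>' if x[i] > x[j] else '=')
        for i in range(len(x)) for j in range(len(x)) if i != j
    ])

# Only the ranking survives: a monotone transformation such as log
# produces exactly the same categorical vector.
a = kendall_transform([0.1, 2.5, 1.3])
b = kendall_transform(np.log([0.1, 2.5, 1.3]))
# a and b are identical element by element
```

The resulting categorical vector can then be fed to any method expecting discrete input, e.g. plug-in entropy or mutual information estimators.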