The Volume-Averaged Navier-Stokes equations are used to study fluid flow in the presence of fixed or moving solids such as packed or fluidized beds. We develop a high-order finite element solver using both forms A and B of these equations. We introduce tailored stabilization techniques to prevent oscillations in regions of sharp gradients, to relax the Ladyzhenskaya-Babuška-Brezzi inf-sup condition, and to enhance the local mass conservation and the robustness of the formulation. We calculate the void fraction using the Particle Centroid Method. Using different drag models, we calculate the drag force exerted by the solids on the fluid. We implement the method of manufactured solutions to verify our solver. We demonstrate that the model preserves the order of convergence of the underlying finite element discretization. Finally, we simulate gas flow through a randomly packed bed and study the pressure drop and mass conservation properties to validate our model.
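The verification step mentioned above can be illustrated with a minimal sketch of how the method of manufactured solutions checks convergence order: run the solver on successively refined meshes against a known exact solution and compare the observed order with the expected one. The error values below are synthetic (err = C * h**2), standing in for actual solver output; the helper name `observed_order` is our own.

```python
# Illustrative sketch (not the authors' code) of the order-of-convergence
# check used in the method of manufactured solutions.
import math

def observed_order(h_coarse, err_coarse, h_fine, err_fine):
    """Estimate the convergence order from errors on two mesh sizes."""
    return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)

# Synthetic second-order errors on meshes h, h/2, h/4 (stand-ins for real runs)
hs = [0.1, 0.05, 0.025]
errs = [3.0 * h**2 for h in hs]
orders = [observed_order(hs[i], errs[i], hs[i + 1], errs[i + 1]) for i in range(2)]
# orders is close to [2.0, 2.0], i.e. the discretization order is preserved
```

A solver that preserves the order of its finite element discretization should reproduce the expected order within a small tolerance on each refinement step.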
In its simplest form, the chemostat consists of microorganisms or cells which grow continually in a specific phase of growth while competing for a single limiting nutrient. Under certain conditions on the cells' growth rate, substrate concentration, and dilution rate, the theory predicts and numerical experiments confirm that a periodically operated chemostat exhibits an "over-yielding" state in which the performance becomes higher than that at the steady-state operation. In this paper we show that an optimal control policy for maximizing the chemostat performance can be accurately and efficiently derived numerically using a novel class of integral-pseudospectral methods and adaptive h-integral-pseudospectral methods composed through a predictor-corrector algorithm. Some new formulas for the construction of Fourier pseudospectral integration matrices and barycentric shifted Gegenbauer quadratures are derived. A rigorous study of the errors and convergence rates of shifted Gegenbauer quadratures as well as the truncated Fourier series, interpolation operators, and integration operators for nonsmooth and generally T-periodic functions is presented. We also introduce a novel adaptive scheme for detecting jump discontinuities and reconstructing a discontinuous function from the pseudospectral data. An extensive set of numerical simulations is presented to support the derived theoretical foundations.
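The idea behind jump-discontinuity detection can be sketched very simply: on smooth regions, first differences of the sampled data shrink like O(h), while across a jump they stay O(1). The toy detector below is our own simplistic stand-in, not the paper's adaptive scheme; the function name `detect_jumps` and the threshold `tol` are hypothetical.

```python
# Toy jump detector (illustrative only, not the paper's adaptive scheme):
# flag first differences that are far larger than the typical O(h) variation.
def detect_jumps(samples, h, tol=10.0):
    """Return indices i where |f[i+1] - f[i]| greatly exceeds typical variation."""
    diffs = [abs(samples[i + 1] - samples[i]) for i in range(len(samples) - 1)]
    typical = sorted(diffs)[len(diffs) // 2]  # median difference ~ O(h) on smooth parts
    return [i for i, d in enumerate(diffs) if d > tol * max(typical, h)]

# Example: f(x) = x on [0, 1/2) and x + 1 on [1/2, 1], sampled on a uniform grid
N = 100
h = 1.0 / N
xs = [i / N for i in range(N + 1)]
f = [x if x < 0.5 else x + 1.0 for x in xs]
jumps = detect_jumps(f, h)  # flags the cell containing x = 1/2
```

Once the jump cells are located, a discontinuous function can be reconstructed piecewise on the smooth subintervals, which is the spirit of the reconstruction step described in the abstract.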
We study \textit{rescaled gradient dynamical systems} in a Hilbert space $\mathcal{H}$, where implicit discretization in a finite-dimensional Euclidean space leads to high-order methods for solving monotone equations (MEs). Our framework can be interpreted as a natural generalization of the celebrated dual extrapolation method~\citep{Nesterov-2007-Dual} from first order to high order via appeal to the regularization toolbox of optimization theory~\citep{Nesterov-2021-Implementable, Nesterov-2021-Inexact}. More specifically, we establish the existence and uniqueness of a global solution and analyze the convergence properties of solution trajectories. We also present discrete-time counterparts of our high-order continuous-time methods, and we show that the $p^{th}$-order method achieves an ergodic rate of $O(k^{-(p+1)/2})$ in terms of a restricted merit function and a pointwise rate of $O(k^{-p/2})$ in terms of a residue function. Under regularity conditions, the restarted version of $p^{th}$-order methods achieves local convergence with the order $p \geq 2$. Notably, our methods are \textit{optimal} since they match the lower bound established for solving the monotone equation problems under a standard linear span assumption~\citep{Lin-2022-Perseus}.
We study reliable communication over point-to-point adversarial channels in which the adversary can observe the transmitted codeword via some function that takes the $n$-bit codeword as input and computes an $rn$-bit output for some given $r \in [0,1]$. We consider the scenario where the $rn$-bit observation is computationally bounded -- the adversary is free to choose an arbitrary observation function as long as the function can be computed using a polynomial amount of computational resources. This observation-based restriction differs from conventional channel-based computational limitations, where, in the latter case, the resource limitation applies to the computation of the (adversarial) channel error. For all $r \in [0,1-H(p)]$ where $H(\cdot)$ is the binary entropy function and $p$ is the adversary's error budget, we characterize the capacity of the above channel. For this range of $r$, we find that the capacity is identical to that of the completely oblivious setting ($r=0$). This result can be viewed as a generalization of known results on myopic adversaries and channels with active eavesdroppers, for which the observation process depends on a fixed distribution and a fixed linear structure, respectively, and cannot be chosen arbitrarily by the adversary.
Many internet platforms that collect behavioral big data use it to predict user behavior for internal purposes and for their business customers (e.g., advertisers, insurers, security forces, governments, political consulting firms) who utilize the predictions for personalization, targeting, and other decision-making. Improving predictive accuracy is therefore extremely valuable. Data science researchers design algorithms, models, and approaches to improve prediction. Prediction is also improved with larger and richer data. Beyond improving algorithms and data, platforms can stealthily achieve better prediction accuracy by pushing users' behaviors towards their predicted values, using behavior modification techniques, thereby demonstrating more certain predictions. Such apparent "improved" prediction can result from employing reinforcement learning algorithms that combine prediction and behavior modification. This strategy is absent from the machine learning and statistics literature. Investigating its properties requires integrating causal with predictive notation. To this end, we incorporate Pearl's causal do(.) operator into the predictive vocabulary. We then decompose the expected prediction error given behavior modification, and identify the components impacting predictive power. Our derivation elucidates the implications of such behavior modification for data scientists, platforms, their customers, and the humans whose behavior is manipulated. Behavior modification can make users' behavior more predictable and even more homogeneous; yet this apparent predictability might not generalize when business customers use predictions in practice. Outcomes pushed towards their predictions can be at odds with customers' intentions, and harmful to manipulated users.
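The core mechanism can be illustrated with a toy simulation of our own construction (not the paper's derivation): if the realized behavior is pulled a fraction `lam` toward its predicted value, the measured prediction error shrinks by the factor $(1-\lambda)^2$ even though the predictive model itself is unchanged. All variable names and parameter values below are arbitrary choices for the demonstration.

```python
# Toy simulation of apparent accuracy gains from behavior modification:
# y is natural behavior, yhat an imperfect prediction, y_mod the behavior
# after being pushed a fraction lam toward the prediction (a do()-style
# intervention on the outcome, not on the model).
import random

random.seed(0)
n, lam = 10_000, 0.6  # lam = strength of behavior modification

y = [random.gauss(0.0, 1.0) for _ in range(n)]           # natural behavior
yhat = [0.5 * yi + random.gauss(0.0, 0.5) for yi in y]   # imperfect prediction
y_mod = [(1 - lam) * yi + lam * yh for yi, yh in zip(y, yhat)]

def mse(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / len(a)

mse_natural = mse(y, yhat)       # error against unmodified behavior
mse_modified = mse(y_mod, yhat)  # apparent error after modification
# mse_modified = (1 - lam)**2 * mse_natural: the prediction "improves"
# without any change to the predictive model.
```

The shrinkage is purely an artifact of manipulating the outcome, which is precisely why such gains may not generalize when customers deploy the predictions on unmanipulated behavior.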
In this paper, we introduce and analyse numerical schemes for the homogeneous and the kinetic L\'evy-Fokker-Planck equation. The discretizations are designed to preserve the main features of the continuous model such as conservation of mass, heavy-tailed equilibrium and (hypo)coercivity properties. We perform a thorough analysis of the numerical scheme and show its exponential stability and convergence. Along the way, we introduce new tools of discrete functional analysis, such as discrete nonlocal Poincar\'e and interpolation inequalities adapted to fractional diffusion. Our theoretical findings are illustrated and complemented with numerical simulations.
Given a partial differential equation (PDE), goal-oriented error estimation allows us to understand how errors in a diagnostic quantity of interest (QoI), or goal, occur and accumulate in a numerical approximation, for example using the finite element method. By decomposing the error estimates into contributions from individual elements, it is possible to formulate adaptation methods, which modify the mesh with the objective of minimising the resulting QoI error. However, the standard error estimate formulation involves the true adjoint solution, which is unknown in practice. As such, it is common practice to approximate it with an 'enriched' approximation (e.g. in a higher order space or on a refined mesh). Doing so generally results in a significant increase in computational cost, which can be a bottleneck compromising the competitiveness of (goal-oriented) adaptive simulations. The central idea of this paper is to develop a "data-driven" goal-oriented mesh adaptation approach through the selective replacement of the expensive error estimation step with an appropriately configured and trained neural network. In doing so, the error estimator may be obtained without even constructing the enriched spaces. An element-by-element construction is employed here, whereby local values of various parameters related to the mesh geometry and underlying problem physics are taken as inputs, and the corresponding contribution to the error estimator is taken as output. We demonstrate that this approach is able to achieve the same accuracy at a reduced computational cost, for adaptive mesh test cases related to flow around tidal turbines, which interact via their downstream wakes, and where the overall power output of the farm is taken as the QoI. Moreover, we demonstrate that the element-by-element approach incurs reasonably low training costs.
This paper generalizes the earlier work on the energy-based discontinuous Galerkin method for second-order wave equations to fourth-order semilinear wave equations. We first rewrite the problem into a system with a second-order spatial derivative, then apply the energy-based discontinuous Galerkin method to the system. The proposed scheme, on the one hand, is more computationally efficient than the local discontinuous Galerkin method because it requires fewer auxiliary variables. On the other hand, it is unconditionally stable without adding any penalty terms, and admits optimal convergence in the $L^2$ norm for both solution and auxiliary variables. In addition, the energy-dissipating or energy-conserving property of the scheme follows from simple, mesh-independent choices of the interelement fluxes. We also present a stability and convergence analysis along with numerical experiments to demonstrate optimal convergence for certain choices of the interelement fluxes.
We study the problem of nonparametric estimation of the density $\pi$ of the stationary distribution of the solution $(X_t)_{t \in [0, T]}$ of a $d$-dimensional stochastic differential equation with possibly unbounded drift. From the continuous observation of the sample path on $[0, T]$, we study the rate of estimation of $\pi(x)$ as $T$ goes to infinity. One finding is that, for $d \ge 3$, the rate of estimation depends on the smoothness $\beta = (\beta_1, ... , \beta_d)$ of $\pi$. In particular, having ordered the smoothness such that $\beta_1 \le ... \le \beta_d$, it depends on whether $\beta_2 < \beta_3$ or $\beta_2 = \beta_3$. We show that kernel density estimators achieve the rate $(\frac{\log T}{T})^\gamma$ in the first case and $(\frac{1}{T})^\gamma$ in the second, for an explicit exponent $\gamma$ depending on the dimension and on $\bar{\beta}_3$, the harmonic mean of the smoothness over the $d$ directions after having removed $\beta_1$ and $\beta_2$, the smallest ones. Moreover, we obtain a minimax lower bound on the $\mathbf{L}^2$-risk for the pointwise estimation with the same rates $(\frac{\log T}{T})^\gamma$ or $(\frac{1}{T})^\gamma$, depending on the values of $\beta_2$ and $\beta_3$.
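For readers unfamiliar with the estimator class whose rates are studied above, here is a minimal Gaussian kernel density estimator evaluated at a single point. This is a generic textbook sketch, not the paper's construction; the sample size and bandwidth are arbitrary demonstration choices.

```python
# Minimal pointwise kernel density estimator with a Gaussian kernel
# (generic illustration, not the paper's estimator or tuning).
import math
import random

def kde(x, samples, h):
    """Estimate the density at x from samples, using bandwidth h."""
    norm = 1.0 / (len(samples) * h * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

random.seed(1)
samples = [random.gauss(0.0, 1.0) for _ in range(5000)]
est = kde(0.0, samples, h=0.2)  # true N(0,1) density at 0 is about 0.3989
```

In the anisotropic setting of the paper, the bandwidth would be chosen direction by direction according to the smoothness vector $\beta$, which is what produces the dimension- and smoothness-dependent exponent $\gamma$.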
Recently it was shown that the so-called guided local Hamiltonian problem -- estimating the smallest eigenvalue of a $k$-local Hamiltonian when provided with a description of a quantum state ('guiding state') that is guaranteed to have substantial overlap with the true groundstate -- is BQP-complete for $k \geq 6$ when the required precision is inverse polynomial in the system size $n$, and remains hard even when the overlap of the guiding state with the groundstate is close to a constant $\left(\frac12 - \Omega\left(\frac{1}{\mathop{poly}(n)}\right)\right)$. We improve upon this result in three ways: by showing that it remains BQP-complete when i) the Hamiltonian is 2-local, ii) the overlap between the guiding state and target eigenstate is as large as $1 - \Omega\left(\frac{1}{\mathop{poly}(n)}\right)$, and iii) when one is interested in estimating energies of excited states, rather than just the groundstate. Interestingly, iii) is only made possible by first showing that ii) holds.
Advances in artificial intelligence often stem from the development of new environments that abstract real-world situations into a form where research can be done conveniently. This paper contributes such an environment based on ideas inspired by elementary Microeconomics. Agents learn to produce resources in a spatially complex world, trade them with one another, and consume those that they prefer. We show that the emergent production, consumption, and pricing behaviors respond to environmental conditions in the directions predicted by supply and demand shifts in Microeconomics. We also demonstrate settings where the agents' emergent prices for goods vary over space, reflecting the local abundance of goods. After the price disparities emerge, some agents then discover a niche of transporting goods between regions with different prevailing prices -- a profitable strategy because they can buy goods where they are cheap and sell them where they are expensive. Finally, in a series of ablation experiments, we investigate how choices in the environmental rewards, bartering actions, agent architecture, and ability to consume tradable goods can either aid or inhibit the emergence of this economic behavior. This work is part of the environment development branch of a research program that aims to build human-like artificial general intelligence through multi-agent interactions in simulated societies. By exploring which environment features are needed for the basic phenomena of elementary microeconomics to emerge automatically from learning, we arrive at an environment that differs from those studied in prior multi-agent reinforcement learning work along several dimensions. For example, the model incorporates heterogeneous tastes and physical abilities, and agents negotiate with one another as a grounded form of communication.