This paper concerns the numerical solution of the two-dimensional time-dependent partial integro-differential equation (PIDE) that holds for the values of European-style options under the two-asset Kou jump-diffusion model. A main feature of this equation is the presence of a nonlocal double integral term. For its numerical evaluation, we extend a highly efficient algorithm derived by Toivanen (2008) for the one-dimensional Kou integral. The resulting algorithm for the two-dimensional Kou integral has optimal computational cost: the number of basic arithmetic operations is directly proportional to the number of spatial grid points in the semidiscretization. For the effective discretization in time, we study seven contemporary operator splitting schemes of the implicit-explicit (IMEX) and the alternating direction implicit (ADI) kind. All these schemes allow for a convenient, explicit treatment of the integral term. We analyze their (von Neumann) stability. Through extensive numerical experiments for put-on-the-average option values, we investigate the actual convergence behavior as well as the relative performance of the seven operator splitting schemes. In addition, we consider the Greeks Delta and Gamma.
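As background, the one-dimensional Kou integral \( \int_{-\infty}^{\infty} f(y)\,v(x+y)\,dy \) with the double-exponential jump density \( f(y) = p\,\eta_1 e^{-\eta_1 y}\mathbf{1}_{y \ge 0} + (1-p)\,\eta_2 e^{\eta_2 y}\mathbf{1}_{y < 0} \) admits a linear-cost evaluation by one-sided recursions. The following Python sketch illustrates this idea under simplifying assumptions (uniform grid, piecewise-linear interpolation of the option values, and values taken to be zero outside the grid); it is an illustration of the Toivanen-type recursion, not the paper's two-dimensional algorithm.

```python
import numpy as np

def kou_integral_1d(v, h, eta1, eta2, p):
    """O(n) evaluation of J_i = int f(y) v(x_i + y) dy for the 1-D Kou
    double-exponential density, assuming v is piecewise linear on a
    uniform grid with spacing h and (hypothetically) zero outside it.
    A sketch in the spirit of Toivanen (2008)."""
    n = len(v)
    # Right tail (y >= 0): backward recursion along the grid.
    a1 = eta1 * h
    e1 = np.exp(-a1)
    b1 = (1.0 - (1.0 + a1) * e1) / a1     # weight of v[i+1] on one cell
    c1 = (1.0 - e1) - b1                  # weight of v[i]   on one cell
    Jp = np.zeros(n)
    for i in range(n - 2, -1, -1):
        Jp[i] = e1 * Jp[i + 1] + c1 * v[i] + b1 * v[i + 1]
    # Left tail (y < 0): forward recursion with mirrored weights.
    a2 = eta2 * h
    e2 = np.exp(-a2)
    b2 = (1.0 - (1.0 + a2) * e2) / a2
    c2 = (1.0 - e2) - b2
    Jm = np.zeros(n)
    for i in range(1, n):
        Jm[i] = e2 * Jm[i - 1] + c2 * v[i] + b2 * v[i - 1]
    return p * Jp + (1.0 - p) * Jm
```

Each grid point is visited once per tail, so the cost is proportional to the number of grid points, in line with the optimal complexity stated above.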
In this paper we present a new H(div)-conforming unfitted finite element method for the mixed Poisson problem which is robust with respect to the cut configuration and preserves the conservation properties of body-fitted finite element methods. The key idea is to formulate the divergence constraint on the active mesh, instead of the physical domain, in order to obtain robustness with respect to cut configurations without the need for a stabilization that pollutes the mass balance. This change in the formulation introduces a slight inconsistency, but does not affect the accuracy of the flux variable. By applying post-processings for the scalar variable, in the spirit of classical local post-processings for body-fitted methods, we retain optimal convergence rates for both variables and even superconvergence of the post-processed scalar variable. We present the method, perform a rigorous a priori error analysis, and discuss several variants and extensions. Numerical experiments confirm the theoretical results.
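For context, the classical body-fitted mixed formulation underlying the method reads (in standard notation, assumed here): for \( -\Delta u = f \) with \( \sigma = -\nabla u \) and homogeneous Dirichlet boundary conditions, find \( (\sigma_h, u_h) \in \Sigma_h \times Q_h \) with \( \Sigma_h \subset H(\mathrm{div},\Omega) \) such that
\[
(\sigma_h, \tau_h)_{\Omega} - (u_h, \operatorname{div} \tau_h)_{\Omega} = 0 \quad \forall\, \tau_h \in \Sigma_h,
\qquad
(\operatorname{div} \sigma_h, v_h)_{\Omega} = (f, v_h)_{\Omega} \quad \forall\, v_h \in Q_h.
\]
In the unfitted setting described above, the distinguishing choice is that the second (divergence) equation is posed on the active mesh rather than on the physical domain \( \Omega \).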
Microgrids incorporate distributed energy resources (DERs) and flexible loads, which can provide energy and reserve services for the main grid. However, due to uncertain renewable generation such as solar power, microgrids might under-deliver reserve services and breach day-ahead contracts in real time. If multiple microgrids breach their reserve contracts simultaneously, a severe grid contingency could result. This paper designs a distributionally robust joint chance-constrained (DRJCC) game-theoretical framework that accounts for uncertain real-time reserve provisions and the value of lost load (VoLL). Leveraging historical error samples, the reserve bidding strategy of each microgrid is formulated as a two-stage Wasserstein-metric distributionally robust optimization (DRO) model. A joint chance constraint (JCC) is employed to regulate the under-delivered reserve capacity of all microgrids in a non-cooperative game. In view of the unknown correlation among players, a novel Bayesian optimization method approximates the optimal individual violation rates of the microgrids and the market equilibrium. The proposed game framework with the optimal rates is simulated with up to 14 players in a 30-bus network, and case studies are conducted using California power market data. The proposed Bayesian method can effectively regulate the joint violation rate of the under-delivered reserve and secure the profit of microgrids in the reserve market.
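Schematically (with hypothetical notation, since the abstract does not fix one), each microgrid's constraint combines a Wasserstein ambiguity ball around the empirical distribution \( \hat{\mathbb{P}}_N \) of the \( N \) historical error samples with a joint chance constraint over all \( M \) players:
\[
\inf_{\mathbb{P}\,:\,W(\mathbb{P}, \hat{\mathbb{P}}_N) \le \theta} \ \mathbb{P}\bigl( \delta_i(\xi) \le r_i,\ i = 1, \dots, M \bigr) \ \ge\ 1 - \epsilon,
\]
where \( \delta_i(\xi) \) denotes the under-delivered reserve of microgrid \( i \), \( r_i \) its tolerance, and \( \epsilon \) the joint violation rate. A common way to handle such a JCC is to assign individual rates \( \epsilon_i \) with \( \sum_i \epsilon_i \le \epsilon \); the Bayesian optimization described above searches for good choices of these individual rates.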
Sampling schemes are fundamental tools in statistics, survey design, and algorithm design. A classical result in differential privacy is that a differentially private mechanism run on a simple random sample of a population provides stronger privacy guarantees than the same mechanism run on the entire population. In practice, however, sampling designs are often more complex than the simple, data-independent schemes addressed in prior work. In this work, we extend the study of privacy amplification to more complex, data-dependent sampling schemes. We find that not only do these sampling schemes often fail to amplify privacy, they can actually result in privacy degradation. We analyze the privacy implications of the pervasive cluster sampling and stratified sampling paradigms, and provide some insight into the study of more general sampling designs.
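For reference, the classical amplification bound for data-independent subsampling is as follows: if a mechanism is \( \varepsilon \)-differentially private, then running it on a sample in which each record is included independently with probability \( q \) yields
\[
\varepsilon' = \log\bigl( 1 + q\,(e^{\varepsilon} - 1) \bigr),
\]
which is approximately \( q\varepsilon \) for small \( \varepsilon \). The data-dependent designs studied in this work, such as cluster and stratified sampling, need not satisfy any bound of this form.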
In this paper, we propose an efficient quadratic interpolation formula utilizing solution gradients computed and stored at nodes, and demonstrate its application to a third-order cell-centered finite-volume discretization on tetrahedral grids. The proposed quadratic formula is constructed based on an efficient formula for computing a projected derivative. It is efficient in that it completely eliminates the need to compute and store second derivatives of solution variables or any other quantities, which are typically required in upgrading a second-order cell-centered unstructured-grid finite-volume discretization to third-order accuracy. Moreover, the high-order flux quadrature formula required for third-order accuracy can also be simplified by the projected-derivative formula, resulting in a numerical flux at a face centroid plus a curvature correction that does not involve second derivatives of the flux. Similarly, a source term can be integrated over a cell to high order in the form of the source term evaluated at the cell centroid plus a curvature correction, again without second derivatives of the source term. The discretization is defined as an approximation to an integral form of a conservation law, but the numerical solution is defined as a point value at a cell center; consequently, there is no need to compute and store geometric moments for a quadratic polynomial to preserve a cell average. Third-order accuracy and improved second-order accuracy are demonstrated and investigated for simple but illustrative test cases in three dimensions.
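To illustrate the flavor of such a construction (a generic sketch; the paper's exact formula may differ), the second directional derivative along the line from a cell center \( x_c \) to a node \( x_k \) can be approximated from the two stored gradients, yielding a quadratic interpolant that requires no Hessian:
\[
u(x_c + s\,\mathbf{e}) \approx u_c + s\,\nabla u_c \cdot \mathbf{e} + \frac{s^2}{2}\,\frac{(\nabla u_k - \nabla u_c)\cdot \mathbf{e}}{L},
\qquad
\mathbf{e} = \frac{x_k - x_c}{L},\quad L = \lVert x_k - x_c \rVert,
\]
where the projected-derivative difference \( (\nabla u_k - \nabla u_c)\cdot \mathbf{e} / L \) plays the role of the second derivative along the line.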
For population studies or for the training of complex machine learning models, it is often necessary to gather data from different actors. In these applications, summation is an important primitive: for computing means, counts, or mini-batch gradients. In many cases, the data is privacy-sensitive and therefore cannot be collected on a central server, so the summation needs to be performed in a distributed and privacy-preserving way. Existing solutions for distributed summation with computational privacy guarantees make trust or connection assumptions - e.g., the existence of a trusted server or peer-to-peer connections between clients - that might not be fulfilled in real-world settings. Motivated by these challenges, we propose Secure Summation via Subset Sums (S5), a method for distributed summation that works in the presence of a malicious server and as few as two honest clients, without the need for peer-to-peer connections between clients. S5 adds zero-sum noise to clients' messages and shuffles them before they are sent to the aggregating server. Our main contribution is a proof that this scheme yields a computational privacy guarantee based on the multidimensional subset sum problem. Our analysis of this problem may be of independent interest for other privacy and cryptography applications.
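The arithmetic core of the scheme, zero-sum noise plus shuffling, can be sketched in a few lines of Python (a toy illustration only: function names are hypothetical, the noise is generated centrally here for brevity, and the actual protocol's noise construction and security reduction are more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

def zero_sum_noise(num_clients, dim, scale=1.0):
    """Noise vectors that cancel exactly: sum_i n_i = 0."""
    noise = rng.normal(0.0, scale, size=(num_clients, dim))
    return noise - noise.mean(axis=0)   # project onto the zero-sum subspace

def s5_round(client_data):
    """Toy sketch: mask each client's vector, shuffle the masked
    messages, and let the server sum them. The zero-sum property
    makes the aggregate exact despite the per-message masking."""
    x = np.asarray(client_data, dtype=float)
    messages = x + zero_sum_noise(*x.shape)
    messages = messages[rng.permutation(len(messages))]  # shuffle step
    return messages.sum(axis=0)         # equals x.sum(axis=0) exactly

print(s5_round([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]))    # [ 9. 12.]
```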
In September 2022, Ethereum transitioned from Proof-of-Work (PoW) to Proof-of-Stake (PoS) during 'the merge', making it the largest PoS cryptocurrency in terms of market capitalization. In this work, we present a comprehensive measurement study of the current state of the Ethereum PoS consensus layer on the beacon chain. We perform a longitudinal study over the entire history of the beacon chain, from 1 December 2020 to 15 May 2023. We find that all dips in network participation not related to network upgrades are caused by issues with major consensus clients or with service operators controlling a large number of validators. Motivated by this, we analyze the decentralization of staking power over time by clustering validators into entities, and find that staking power is concentrated in the hands of a few large entities. Further, we analyze the consensus client landscape, given that bugs in a consensus client pose a security risk to the consensus layer. While the consensus client landscape exhibits significant concentration, with a single client accounting for one-third of the market share throughout the entire history of the beacon chain, we observe an improving trend.
User reporting is an essential component of content moderation on many online platforms -- in particular, on end-to-end encrypted (E2EE) messaging platforms, where platform operators cannot proactively inspect message contents. However, users' privacy concerns around reporting may impede the effectiveness of this strategy in regulating online harassment. In this paper, we conduct interviews with 16 users of E2EE platforms to understand their mental models of how reporting works and the resulting privacy concerns and considerations surrounding reporting. We find that users expect platforms to store rich longitudinal reporting datasets, recognizing both their promise for better abuse mitigation and the risk that platforms may exploit or fail to protect them. We also find that users have preconceptions about the respective capabilities and risks of moderators at the platform versus the community level -- for instance, users trust platform moderators more not to abuse their power, but think community moderators have more time to attend to reports. These considerations, along with the perceived effectiveness of reporting and the question of how to provide sufficient evidence while maintaining privacy, shape how users decide whether, to whom, and how much to report. We conclude with design implications for a more privacy-preserving reporting system on E2EE messaging platforms.
In randomized clinical trials, adjusting for baseline covariates has been advocated as a way to improve credibility and efficiency in demonstrating and quantifying treatment effects. This article studies the augmented inverse propensity weighted (AIPW) estimator, a general form of covariate adjustment that encompasses approaches based on linear models, generalized linear models, and machine learning models. Under covariate-adaptive randomization, we establish a general theorem that gives a complete picture of the asymptotic normality, efficiency gain, and applicability of AIPW estimators. Based on this theorem, we provide insights into the conditions for guaranteed efficiency gain and universal applicability under different randomization schemes, which also motivate a joint calibration strategy using certain constructed covariates after applying AIPW. We illustrate the application of the general theorem with two examples, the generalized linear model and the machine learning model, and provide the first theoretical justification for using machine learning methods with dependent data under covariate-adaptive randomization. Our methods are implemented in the R package RobinCar.
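Concretely, with treatment indicator \( A_i \), outcome \( Y_i \), assignment probability \( \pi \), and a working outcome model \( \hat{\mu}_1(\cdot) \) fitted by any of the above methods (notation assumed here), the AIPW estimator of the treated-arm mean is
\[
\hat{\mu}_1 = \frac{1}{n} \sum_{i=1}^{n} \left[ \frac{A_i Y_i}{\pi} - \frac{A_i - \pi}{\pi}\, \hat{\mu}_1(X_i) \right],
\]
with \( \hat{\mu}_0 \) defined analogously and the treatment effect estimated by \( \hat{\mu}_1 - \hat{\mu}_0 \); linear, generalized linear, and machine learning adjustments differ only in how the working model is constructed.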
Network interference, where the outcome of an individual is affected by the treatment assignment of those in their social network, is pervasive in real-world settings, yet it poses a challenge to estimating causal effects. We consider the task of estimating the total treatment effect (TTE), i.e., the difference between the average outcomes of the population when everyone is treated and when no one is, under network interference. Under a Bernoulli randomized design, we provide an unbiased estimator for the TTE when network interference effects are constrained to low-order interactions among neighbors of an individual. We make no assumptions on the graph other than bounded degree, allowing for well-connected networks that may not be easily clustered. We derive a bound on the variance of our estimator and show in simulated experiments that it performs well compared with standard estimators for the TTE. We also derive a minimax lower bound on the mean squared error of our estimator, which suggests that the difficulty of estimation can be characterized by the degree of interactions in the potential outcomes model. Moreover, we prove that our estimator is asymptotically normal under boundedness conditions on the network degree and the potential outcomes model. Central to our contribution is a new framework for balancing model flexibility and statistical complexity, as captured by this low-order interactions structure.
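In symbols (notation assumed here), writing \( Y_i(\mathbf{z}) \) for the potential outcome of individual \( i \) under treatment vector \( \mathbf{z} \in \{0,1\}^n \), the estimand is
\[
\mathrm{TTE} = \frac{1}{n} \sum_{i=1}^{n} \bigl( Y_i(\mathbf{1}) - Y_i(\mathbf{0}) \bigr),
\]
and one way to formalize the low-order interactions restriction is \( Y_i(\mathbf{z}) = \sum_{S \subseteq \mathcal{N}_i,\, |S| \le \beta} c_{i,S} \prod_{j \in S} z_j \), where \( \mathcal{N}_i \) is the neighborhood of \( i \) and \( \beta \) bounds the order of the interactions.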
Communication plays a vital role in multi-agent systems, fostering collaboration and coordination. However, in real-world scenarios where communication is bandwidth-limited, existing multi-agent reinforcement learning (MARL) algorithms often give agents a binary choice: transmit a fixed number of bytes, or no information at all. This limitation hinders the effective utilization of the available bandwidth. To overcome this challenge, we present the Dynamic Size Message Scheduling (DSMS) method, which introduces a finer-grained approach to scheduling by considering the actual size of the information to be exchanged. Our contribution lies in adaptively adjusting message sizes using Fourier transform-based compression techniques, enabling agents to tailor their messages to the allocated bandwidth while striking a balance between information loss and transmission efficiency. Receiving agents reliably decompress the messages using the inverse Fourier transform. Experimental results demonstrate that DSMS significantly improves performance in multi-agent cooperative tasks by optimizing bandwidth utilization and effectively balancing information value.
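The compression step can be illustrated with a short Python sketch (function names and the truncation rule are hypothetical; the abstract does not specify DSMS's exact coefficient selection): keep as many low-frequency Fourier coefficients as the allocated bandwidth permits, and invert the transform on the receiving side.

```python
import numpy as np

def compress_message(message, num_coeffs):
    """Keep only the num_coeffs lowest-frequency rFFT coefficients."""
    spectrum = np.fft.rfft(message)
    return spectrum[:num_coeffs]

def decompress_message(coeffs, length):
    """Zero-pad the truncated spectrum and invert the transform."""
    spectrum = np.zeros(length // 2 + 1, dtype=complex)
    spectrum[:len(coeffs)] = coeffs
    return np.fft.irfft(spectrum, n=length)

# A larger bandwidth budget (num_coeffs) gives lower reconstruction error.
msg = np.sin(np.linspace(0, 4 * np.pi, 64)) \
      + 0.1 * np.random.default_rng(0).normal(size=64)
for k in (4, 16, 33):   # 33 = 64 // 2 + 1 keeps the full spectrum
    approx = decompress_message(compress_message(msg, k), len(msg))
    print(k, np.abs(msg - approx).max())
```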