Classical results in general equilibrium theory assume divisible goods and convex preferences of market participants. In many real-world markets, participants have non-convex preferences, and the allocation problem must accommodate complex constraints. Electricity markets are a prime example. In such markets, Walrasian equilibrium prices do not exist in general, and heuristic pricing rules based on the dual of the relaxed allocation problem are used in practice. However, these rules have been criticized for high side-payments and inadequate congestion signals. We show that existing pricing heuristics optimize specific design goals that can be conflicting. The trade-offs can be substantial, and we establish that the design of pricing rules is fundamentally a multi-objective optimization problem addressing different incentives. In addition to traditional multi-objective optimization techniques that weight the individual objectives, we introduce a novel parameter-free pricing rule that minimizes incentives for market participants to deviate locally. Our findings show how the new pricing rule capitalizes on the upsides of the existing pricing rules under scrutiny today. It leads to prices that incur low make-whole payments while providing adequate congestion signals and low lost opportunity costs. Our suggested pricing rule does not require weighting of objectives, is computationally scalable, and balances trade-offs in a principled manner, addressing an important policy issue in electricity markets.
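To make the incentive measures concrete, the following is a minimal sketch of how make-whole payments and lost opportunity costs could be computed for a stylized single-node market with fixed-plus-linear generator costs; the cost model and all numbers are illustrative assumptions, not the pricing rules discussed above.

```python
# Illustrative generators: commitment (fixed) cost, marginal cost, capacity, dispatched quantity.
# These numbers are made up for demonstration only.
generators = [
    {"fixed": 100.0, "marginal": 20.0, "cap": 50.0, "q": 50.0},
    {"fixed": 300.0, "marginal": 40.0, "cap": 80.0, "q": 30.0},
]
price = 40.0  # a candidate uniform clearing price

def profit(gen, q, price):
    """As-bid profit of a generator producing q at the given price."""
    committed = q > 0
    return price * q - gen["marginal"] * q - (gen["fixed"] if committed else 0.0)

for gen in generators:
    # Make-whole payment: side-payment covering losses at the dispatched quantity.
    mwp = max(0.0, -profit(gen, gen["q"], price))
    # Lost opportunity cost: best self-scheduled profit minus profit at the dispatch.
    # With fixed-plus-linear costs the best response is all-or-nothing.
    best = max(profit(gen, 0.0, price), profit(gen, gen["cap"], price))
    loc = best - profit(gen, gen["q"], price)
    print(f"MWP = {mwp:7.2f}, LOC = {loc:7.2f}")
```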
To provide the best possible care to each patient, physicians need to customize treatments for patients with the same diagnosis, especially when treating diseases that can progress and require additional treatments, such as cancer. Making decisions at multiple stages as a disease progresses can be formalized as a dynamic treatment regime (DTR). Most existing optimization approaches for estimating dynamic treatment regimes, including the popular method of Q-learning, were developed in a frequentist context. Recently, a general Bayesian machine learning framework that facilitates using Bayesian regression modeling to optimize DTRs has been proposed. In this article, we adapt this approach to censored outcomes using Bayesian additive regression trees (BART) for each stage under the accelerated failure time modeling framework, and we present simulation studies and a real data example that compare the proposed approach with Q-learning. We also develop an R wrapper function that utilizes a standard BART survival model to optimize DTRs for censored outcomes. The wrapper function can easily be extended to accommodate any type of Bayesian machine learning model.
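For readers unfamiliar with the Q-learning comparator, here is a minimal two-stage backward-induction sketch on simulated, uncensored data, with a generic scikit-learn regressor standing in for the BART survival model; it is not the proposed Bayesian method, and the censoring/AFT machinery is omitted for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500

# Simulated two-stage data (illustrative only): covariates X1, X2, binary treatments A1, A2,
# and a final outcome Y (e.g., log survival time in an AFT-style model, censoring ignored here).
X1 = rng.normal(size=(n, 2))
A1 = rng.integers(0, 2, size=n)
X2 = X1[:, [0]] + rng.normal(size=(n, 1))
A2 = rng.integers(0, 2, size=n)
Y = X1[:, 0] + A1 * (X1[:, 1] > 0) + A2 * (X2[:, 0] > 0) + rng.normal(scale=0.5, size=n)

# Stage 2: regress Y on the stage-2 history and treatment, then form pseudo-outcomes
# by plugging in the best stage-2 treatment for each subject.
H2 = np.column_stack([X1, A1, X2, A2])
q2 = RandomForestRegressor(random_state=0).fit(H2, Y)
pred_a2 = [q2.predict(np.column_stack([X1, A1, X2, np.full(n, a)])) for a in (0, 1)]
pseudo = np.maximum(pred_a2[0], pred_a2[1])

# Stage 1: regress the pseudo-outcome on the stage-1 history and treatment.
H1 = np.column_stack([X1, A1])
q1 = RandomForestRegressor(random_state=0).fit(H1, pseudo)
pred_a1 = [q1.predict(np.column_stack([X1, np.full(n, a)])) for a in (0, 1)]
opt_a1 = np.argmax(np.vstack(pred_a1), axis=0)
print("Estimated optimal stage-1 treatment for first 5 subjects:", opt_a1[:5])
```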
Zero Trust is a novel cybersecurity model that focuses on continually evaluating trust to prevent the initiation and lateral spread of attacks. A cloud-native Service Mesh is an example of a Zero Trust Architecture that can filter out external threats. However, the Service Mesh does not shield the Application Owner from internal threats, such as a rogue administrator of the cluster where their application is deployed. In this work, we enhance the Service Mesh to allow the definition and enforcement of a Verifiable Configuration that is signed off by the Application Owner. Backed by automated digital signing solutions and confidential computing technologies, the Verifiable Configuration changes the trust model of the Service Mesh: instead of fully trusting the control plane, the data plane only partially trusts it. This lets the application benefit from all the functions provided by the Service Mesh (resource discovery, traffic management, mutual authentication, access control, observability), while ensuring that the Cluster Administrator cannot change the state of the application in a way that was not intended by the Application Owner.
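As a rough illustration of the signing step only (the confidential-computing and Service Mesh integration are out of scope here), the sketch below signs a canonicalized configuration with Ed25519 and rejects any tampered copy; the key handling, JSON canonicalization, and configuration fields are simplifying assumptions, not the paper's actual tooling.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Application Owner side: sign a canonical serialization of the intended configuration.
config = {"image": "shop-frontend:1.4.2", "replicas": 3, "mtls": True}
canonical = json.dumps(config, sort_keys=True, separators=(",", ":")).encode()
owner_key = Ed25519PrivateKey.generate()
signature = owner_key.sign(canonical)
public_key = owner_key.public_key()

# Data-plane side: verify that the configuration pushed by the control plane
# still matches what the Application Owner signed off.
def is_authorized(cfg: dict) -> bool:
    blob = json.dumps(cfg, sort_keys=True, separators=(",", ":")).encode()
    try:
        public_key.verify(signature, blob)
        return True
    except InvalidSignature:
        return False

print(is_authorized(config))                       # True: untouched configuration
print(is_authorized({**config, "replicas": 100}))  # False: modified by someone else
```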
In randomized experiments and observational studies, weighting methods are often used to generalize and transport treatment effect estimates to a target population. Traditional methods construct the weights by separately modeling the treatment assignment and study selection probabilities and then multiplying functions (e.g., inverses) of their estimates. However, these estimated multiplicative weights may not produce adequate covariate balance and can be highly variable, resulting in biased and unstable estimators, especially when there is limited covariate overlap across populations or treatment groups. To address these limitations, we propose a general weighting approach that weights each treatment group towards the target population in a single step. We present a framework and provide a justification for this one-step approach in terms of generic probability distributions. We show a formal connection between our method and inverse probability and inverse odds weighting. By construction, the proposed approach balances covariates and produces stable estimators. We show that our estimator for the target average treatment effect is consistent, asymptotically Normal, multiply robust, and semiparametrically efficient. We demonstrate the performance of this approach using a simulation study and a randomized case study on the effects of physician racial diversity on preventive healthcare utilization among Black men in California.
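For concreteness, here is a minimal sketch of the traditional multiplicative construction described above (inverse odds of study selection times inverse probability of treatment), using logistic regressions on simulated data; it illustrates the baseline that the proposed one-step approach is compared against, not the proposed method itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_study, n_target = 400, 1000

# Simulated covariates; the study sample is shifted relative to the target population.
X_study = rng.normal(loc=0.3, size=(n_study, 2))
X_target = rng.normal(loc=0.0, size=(n_target, 2))
A = rng.integers(0, 2, size=n_study)  # treatment indicator in the study

# Selection model: P(S=1 | X), fit on the pooled study + target sample.
X_pool = np.vstack([X_study, X_target])
S = np.concatenate([np.ones(n_study), np.zeros(n_target)])
sel = LogisticRegression().fit(X_pool, S)
p_sel = sel.predict_proba(X_study)[:, 1]

# Treatment model: P(A=1 | X, S=1), fit within the study.
trt = LogisticRegression().fit(X_study, A)
p_trt = trt.predict_proba(X_study)[:, 1]

# Multiplicative weights: inverse odds of selection times inverse probability of treatment.
w = ((1 - p_sel) / p_sel) * np.where(A == 1, 1 / p_trt, 1 / (1 - p_trt))
print("weight range:", w.min().round(2), "-", w.max().round(2))
```

Highly variable weights of this kind, especially under limited overlap, are exactly what motivates the one-step balancing approach described in the abstract.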
Wind power forecasting is essential to power system operation and electricity markets. As abundant data have become available thanks to the deployment of measurement infrastructures and the democratization of meteorological modelling, extensive data-driven approaches have been developed within both point and probabilistic forecasting frameworks. These models usually assume that the dataset at hand is complete and overlook the missing-value issues that often occur in practice. In contrast to this common approach, we rigorously consider the wind power forecasting problem in the presence of missing values by jointly accommodating the imputation and forecasting tasks. Our approach allows inferring the joint distribution of input features and target variables at the model estimation stage based on incomplete observations only. We place emphasis on a fully conditional specification method owing to its desirable properties, e.g., being assumption-free with respect to these joint distributions. Then, at the operational forecasting stage, with the available features at hand, one can issue forecasts by implicitly imputing all missing entries. The approach is applicable to both point and probabilistic forecasting and yields competitive forecast quality in both simulation and real-world case studies. The results confirm that, by using a powerful universal imputation method such as fully conditional specification, the proposed approach outperforms the common approach, especially in the context of probabilistic forecasting.
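As a rough sketch of the idea, the snippet below uses scikit-learn's IterativeImputer (a chained-equations stand-in for fully conditional specification, which is a simplifying assumption) to model features and the target jointly from incomplete training data, and then issues a forecast by treating the unknown target as a missing entry; the data are synthetic and illustrative only.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(2)
n = 1000

# Illustrative data: two meteorological features and wind power as the target.
wind_speed = rng.weibull(2.0, size=n) * 8.0
temperature = rng.normal(10.0, 5.0, size=n)
power = np.clip((wind_speed / 12.0) ** 3, 0.0, 1.0) + rng.normal(0.0, 0.05, size=n)
data = np.column_stack([wind_speed, temperature, power])

# Introduce missing feature values at random, as often happens in practice.
mask = rng.random((n, 2)) < 0.2
data[:, :2][mask] = np.nan

# Estimation stage: learn the joint structure of features and target
# from incomplete observations only.
imputer = IterativeImputer(max_iter=20, random_state=0).fit(data[:800])

# Operational stage: mark the target as missing and impute it, which simultaneously
# fills in any missing feature entries for the same observations.
test = data[800:].copy()
test[:, 2] = np.nan
forecast = imputer.transform(test)[:, 2]
print("first 5 point forecasts:", np.round(forecast[:5], 3))
```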
This work provides a theoretical analysis for optimally solving the pose estimation problem using total least squares for vector observations from landmark features, which is central to applications involving simultaneous localization and mapping. First, the optimization process is formulated with observation vectors extracted from point-cloud features. Then, error-covariance expressions are derived. The attitude and position estimates obtained via the derived optimization process are proven to reach the bounds defined by the Cram\'er-Rao lower bound under the small-angle approximation of attitude errors. A fully populated observation noise-covariance matrix is assumed as the weight in the cost function to cover the most general case of sensor uncertainty. This admits more general correlations in the errors than previous approaches that rely on an isotropic noise assumption. The proposed solution is verified using Monte Carlo simulations and an experiment with an actual LiDAR to validate the error-covariance analysis.
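For orientation, the snippet below recovers a pose from noisy point correspondences with the classical weighted least-squares (SVD/Kabsch) solution under an isotropic noise assumption; the total-least-squares treatment with a fully populated noise covariance and the error-covariance analysis of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth pose (illustrative): rotation about z by 30 degrees plus a translation.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])

# Landmark features observed in two frames, corrupted by isotropic noise.
P = rng.uniform(-10, 10, size=(50, 3))                    # landmarks in the map frame
Q = P @ R_true.T + t_true + rng.normal(0, 0.05, (50, 3))  # observations in the sensor frame

# Weighted least-squares pose (Kabsch): center, build the cross-covariance, take the SVD.
w = np.ones(len(P)) / len(P)
p_bar, q_bar = w @ P, w @ Q
H = (P - p_bar).T @ np.diag(w) @ (Q - q_bar)
U, _, Vt = np.linalg.svd(H)
D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])        # guard against reflections
R_est = Vt.T @ D @ U.T
t_est = q_bar - R_est @ p_bar

cos_err = np.clip((np.trace(R_est.T @ R_true) - 1) / 2, -1.0, 1.0)
print("rotation error (deg):", np.rad2deg(np.arccos(cos_err)))
print("translation error:", np.linalg.norm(t_est - t_true))
```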
Matching and pricing are two critical levers in two-sided marketplaces to connect demand and supply. The platform can produce more efficient matching and pricing decisions by batching the demand requests. We initiate the study of the two-stage stochastic matching problem, with or without pricing, to enable the platform to make improved decisions in a batch with an eye toward the imminent future demand requests. This problem is motivated in part by applications in online marketplaces such as ride-hailing platforms. We design online competitive algorithms for vertex-weighted (or unweighted) two-stage stochastic matching for maximizing supply efficiency, and for two-stage joint matching and pricing for maximizing market efficiency. In the former problem, using a randomized primal-dual algorithm applied to a family of ``balancing'' convex programs, we obtain the optimal $3/4$ competitive ratio against the optimum offline benchmark. Using a factor-revealing program and connections to submodular optimization, we improve this ratio against the optimum online benchmark to $(1-1/e+1/e^2)\approx 0.767$ for the unweighted and $0.761$ for the weighted case. In the latter problem, we design an optimal $1/2$-competitive joint pricing and matching algorithm by borrowing ideas from the ex-ante prophet inequality literature. We also show an improved $(1-1/e)$-competitive algorithm for the special case of the demand-efficiency objective using the correlation gap of submodular functions. Finally, we complement our theoretical study by using DiDi's ride-sharing dataset for the city of Chengdu and numerically evaluating the performance of our proposed algorithms in practical instances of this problem.
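To fix ideas about the two-stage setting only (this is not the primal-dual or factor-revealing machinery above), here is a tiny brute-force sketch: it enumerates first-stage matchings of a vertex-weighted supply, evaluates each by the expected weight of an optimal second-stage matching over sampled future requests, and keeps the best; the instance data and the demand distribution are illustrative assumptions.

```python
import itertools
import random

random.seed(4)

# Vertex-weighted supply and requests' compatible supply sets (illustrative instance).
supply_weight = {"s1": 3.0, "s2": 2.0, "s3": 1.0}
stage1_requests = [{"s1", "s2"}, {"s2", "s3"}]

def sample_stage2():
    """Random future demand: each request pattern appears independently."""
    patterns = [{"s1"}, {"s1", "s3"}, {"s2"}]
    return [p for p in patterns if random.random() < 0.5]

def best_matching(requests, available):
    """Maximum total supply weight over all feasible matchings (brute force)."""
    if not requests:
        return 0.0
    rest = requests[1:]
    best = best_matching(rest, available)  # leave the first request unmatched
    for s in requests[0] & available:
        best = max(best, supply_weight[s] + best_matching(rest, available - {s}))
    return best

scenarios = [sample_stage2() for _ in range(200)]

def evaluate(assignment):
    """Expected total weight of a feasible first-stage assignment (tuple of supply or None)."""
    used = {s for s in assignment if s is not None}
    gain1 = sum(supply_weight[s] for s in used)
    remaining = set(supply_weight) - used
    return gain1 + sum(best_matching(sc, remaining) for sc in scenarios) / len(scenarios)

# Enumerate feasible first-stage assignments and keep the best one.
choices = [list(req) + [None] for req in stage1_requests]
feasible = [a for a in itertools.product(*choices)
            if len({s for s in a if s is not None}) == sum(s is not None for s in a)]
best_assignment = max(feasible, key=evaluate)
print("best first-stage assignment:", best_assignment, "value:", round(evaluate(best_assignment), 2))
```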
In solving multi-modal, multi-objective optimization problems (MMOPs), the objective is not only to find a good representation of the Pareto-optimal front (PF) in the objective space but also to find all equivalent Pareto-optimal subsets (PSS) in the variable space. Such problems are practically relevant when a decision maker (DM) is interested in identifying alternative designs with similar performance. There has been significant research interest in recent years to develop efficient algorithms to deal with MMOPs. However, the existing algorithms still require a prohibitive number of function evaluations (often several thousand) to deal with problems involving as few as two objectives and two variables. The algorithms are typically embedded with sophisticated, customized mechanisms that require additional parameters to manage the diversity and convergence in the variable and objective spaces. In this letter, we introduce a steady-state evolutionary algorithm for solving MMOPs, with a simple design and no additional user-defined parameters that need tuning compared to a standard EA. We report its performance on 21 MMOPs from various test suites that are widely used for benchmarking, using a low computational budget of 1000 function evaluations. The performance of the proposed algorithm is compared with six state-of-the-art algorithms (MO_Ring_PSO_SCD, DN-NSGAII, TriMOEA-TA&R, CPDEA, MMOEA/DC and MMEA-WI). The proposed algorithm exhibits significantly better performance than the above algorithms based on established metrics, including IGDX, PSP and IGD. We hope this study will encourage the design of simple, efficient, and generalizable algorithms and improve their uptake in practical applications.
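To illustrate the steady-state template only (the niching and diversity-management mechanisms of the proposed algorithm and its competitors are not reproduced here), the following is a generic steady-state evolutionary loop on a toy bi-objective problem: one offspring per iteration, followed by replacement of a dominated population member.

```python
import numpy as np

rng = np.random.default_rng(5)

def evaluate(x):
    """Toy bi-objective problem with two variables (illustrative only)."""
    return np.array([x[0] ** 2 + x[1] ** 2, (x[0] - 1) ** 2 + x[1] ** 2])

def dominates(fa, fb):
    return np.all(fa <= fb) and np.any(fa < fb)

# Initialize a small population in [-1, 2]^2.
pop_size, budget = 20, 1000
X = rng.uniform(-1, 2, size=(pop_size, 2))
F = np.array([evaluate(x) for x in X])
evals = pop_size

while evals < budget:
    # Variation: blend two random parents and add Gaussian noise (a simple stand-in
    # for the SBX crossover and polynomial mutation used in standard EAs).
    i, j = rng.choice(pop_size, size=2, replace=False)
    child = np.clip(0.5 * (X[i] + X[j]) + rng.normal(0, 0.1, size=2), -1, 2)
    fc = evaluate(child)
    evals += 1
    # Steady-state replacement: swap out the first population member the child dominates.
    for k in range(pop_size):
        if dominates(fc, F[k]):
            X[k], F[k] = child, fc
            break

nd = [i for i in range(pop_size) if not any(dominates(F[j], F[i]) for j in range(pop_size))]
print(f"{len(nd)} non-dominated solutions after {evals} evaluations")
```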
We study a submodular maximization problem motivated by applications in online retail. A platform displays a list of products to a user in response to a search query. The user inspects the first $k$ items in the list for a $k$ chosen at random from a given distribution, and decides whether to purchase an item from that set based on a choice model. The goal of the platform is to maximize the engagement of the shopper, defined as the probability of purchase. This problem gives rise to a less-studied variation of submodular maximization in which we are asked to choose an $\textit{ordering}$ of a set of elements to maximize a linear combination of different submodular functions. First, using a reduction to maximizing submodular functions over matroids, we give an optimal $\left(1-1/e\right)$-approximation for this problem. We then consider a variant in which the platform cares not only about user engagement, but also about diversification across various groups of users, that is, guaranteeing a certain probability of purchase in each group. We characterize the polytope of feasible solutions and give a bi-criteria $((1-1/e)^2,(1-1/e)^2)$-approximation for this problem by rounding an approximate solution of a linear programming relaxation. For rounding, we rely on our reduction and on rounding techniques for matroid polytopes. For the special case in which the underlying submodular functions are coverage functions -- which is practically relevant in online retail -- we propose an alternative LP relaxation and a simpler randomized rounding for the problem. This approach yields an optimal bi-criteria $(1-1/e,1-1/e)$-approximation algorithm for the special case of the problem with coverage functions.
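As background for the coverage-function special case, the classic greedy rule below achieves the $(1-1/e)$ guarantee for cardinality-constrained monotone submodular maximization; it illustrates the kind of objective being optimized, not the ordering or rounding algorithms developed in the paper, and the item/user data are illustrative.

```python
# Greedy maximization of a coverage function under a cardinality constraint.
# Items (e.g., products) cover sets of users; the data below are made up.
coverage = {
    "item_a": {1, 2, 3},
    "item_b": {3, 4},
    "item_c": {4, 5, 6},
    "item_d": {1, 6},
}
k = 2

selected, covered = [], set()
for _ in range(k):
    # Pick the item with the largest marginal coverage gain.
    best = max(coverage, key=lambda it: len(coverage[it] - covered) if it not in selected else -1)
    selected.append(best)
    covered |= coverage[best]

print("selected:", selected, "users covered:", len(covered))
```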
In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments of matrix decomposition, which favored a (block) LU decomposition, i.e., the factorization of a matrix into the product of a lower and an upper triangular matrix. Matrix decomposition has since become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks. The sole aim of this survey is to give a self-contained introduction to the concepts and mathematical tools of numerical linear algebra and matrix analysis in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. However, we clearly realize our inability to cover all the useful and interesting results concerning matrix decomposition within the limited scope of this discussion, e.g., separate analyses of Euclidean spaces, Hermitian spaces, Hilbert spaces, and results over the complex domain. We refer the reader to the linear algebra literature for a more detailed introduction to these related topics.
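As a concrete starting point for the LU factorization mentioned above, the snippet below computes a pivoted LU decomposition with SciPy and checks the reconstruction; it is a routine illustration rather than material from the survey itself.

```python
import numpy as np
from scipy.linalg import lu

# A small example matrix; lu() returns a permutation P and triangular factors L, U
# such that A = P @ L @ U (partial pivoting makes the factorization numerically stable).
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
P, L, U = lu(A)

print("L =\n", L)
print("U =\n", U)
print("reconstruction error:", np.max(np.abs(A - P @ L @ U)))
```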
This paper presents a succinct review of attempts in the literature to use game theory to model decision-making scenarios relevant to defence applications. Game theory has proven to be a very effective tool for modelling the decision-making processes of intelligent agents, entities, and players. It has been used to model scenarios from diverse fields such as economics, evolutionary biology, and computer science. In defence applications, there is often a need to model and predict the actions of hostile actors and of players who try to evade or outsmart each other. Modelling how the actions of competitive players shape each other's decision making is the forte of game theory. In past decades, several studies have applied different branches of game theory to model a range of defence-related scenarios. This paper provides a structured review of such attempts and classifies the existing literature in terms of the kind of warfare modelled, the types of games used, and the players involved. The presented analysis provides a concise summary of the state of the art regarding the use of game theory in defence applications, and highlights the benefits and limitations of game theory in the considered scenarios.