We introduce a hull operator on Poisson point processes, the simplest example being the convex hull of the support of a point process in Euclidean space. Assuming that the intensity measure of the process is known on the set generated by the hull operator, we discuss estimation of expected linear statistics built on the Poisson process. In special cases, our general scheme yields an estimator of the volume of a convex body or an estimator of an integral of a H\"older function. We show that the estimation error is given by the Kabanov--Skorohod integral with respect to the underlying Poisson process. A crucial ingredient of our approach is a spatial Markov property of the underlying Poisson process with respect to the hull. We derive the rate of normal convergence for the estimation error and illustrate it with an application to estimators of integrals of a H\"older function. We also discuss estimation of higher-order symmetric statistics.
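For orientation, the linear statistics in question obey a classical identity (Campbell's formula; the notation here is ours, not the paper's): if $\eta$ is a Poisson process with intensity measure $\lambda$, then
\[
\mathbb{E}\Big[\int f(x)\,\eta(\mathrm{d}x)\Big] = \int f(x)\,\lambda(\mathrm{d}x),
\]
which is why knowing $\lambda$ on the set generated by the hull operator is the natural assumption for estimating such integrals from the observed points.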
In this work, a family of approximate implicit multiderivative Runge-Kutta (MDRK) time integrators for stiff initial value problems is presented. The approximation procedure is based on the recent Approximate Implicit Taylor method (Baeza et al. in Comput. Appl. Math. 39:304, 2020). Since a Taylor method can be written in MDRK format, the novel family constitutes a multistage generalization. Two alternatives are investigated for the computation of the higher-order derivatives: either directly as part of the stage equation, or as a separate formula for each derivative added on top of the stage equation itself. Upon linearizing through Newton's method, it turns out that the conditioning of the Newton matrix behaves significantly differently in the two cases. We show that direct computation results in a matrix whose conditioning depends strongly on the stiffness, increasing exponentially in the stiffness parameter with the number of derivatives. Adding separate formulas behaves more favorably, the matrix conditioning depending only linearly on the stiffness, regardless of the number of derivatives. Although this significantly increases the size of the Newton system, several numerical results demonstrate that doing so can be considerably beneficial.
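As a hedged illustration of the two alternatives (the notation below is ours, not the paper's): for $y' = f(y)$, a two-derivative MDRK stage equation reads
\[
Y_i = y_n + h \sum_{j=1}^{s} a_{ij}\, f(Y_j) + h^2 \sum_{j=1}^{s} \hat{a}_{ij}\, g(Y_j), \qquad g(y) = f'(y)\, f(y).
\]
The direct variant substitutes $g(Y_j)$ into the stage equations as written, so Newton's method differentiates through $g$; the separate-formula variant instead introduces extra unknowns $G_j$ together with the added equations $G_j = g(Y_j)$, enlarging the Newton system while improving its conditioning.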
Data scarcity and heterogeneity pose significant performance challenges for personalized federated learning; in existing methods, these challenges mainly manifest as overfitting and low precision. To overcome them, this paper proposes a multi-layer multi-fusion strategy framework in which the server treats the network-layer parameters of each client's uploaded model as the basic unit of fusion for the information-sharing computation. A new fusion strategy combining personalized and generic components is then proposed, and the layer-number fusion threshold of each fusion strategy is designed according to the function of the network layers. Under this mechanism, an L2-norm negative exponential similarity metric is employed to compute the fusion weights of the corresponding feature-extraction-layer parameters for each client, thus improving the efficiency of personalized collaboration on heterogeneous data. Meanwhile, a fusion strategy that approximates the federated global optimal model is adopted in the fully connected layers; this generic strategy alleviates the overfitting introduced by aggressive personalization. Finally, experimental results show that the proposed method is superior to state-of-the-art methods.
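A minimal sketch of a similarity-based fusion of this kind, assuming a negative-exponential L2 weighting normalized to sum to one (variable names and the normalization are our assumptions, not the paper's exact scheme):

import numpy as np

def fusion_weights(client_layers, i):
    """Fusion weights for client i's feature-extraction layer.

    client_layers: list of 1-D parameter vectors, one per client.
    Weights decay exponentially in the L2 distance to client i's
    parameters, then are normalized to sum to one (assumed)."""
    dists = np.array([np.linalg.norm(client_layers[i] - theta)
                      for theta in client_layers])
    w = np.exp(-dists)                # L2-norm negative exponential similarity
    return w / w.sum()

def fuse_layer(client_layers, i):
    """Personalized fused layer for client i as a weighted average."""
    w = fusion_weights(client_layers, i)
    return sum(wj * theta for wj, theta in zip(w, client_layers))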
We propose a new approach to portfolio optimization that combines synthetic data generation with a CVaR constraint. We formulate the portfolio optimization problem as an asset allocation problem in which each asset class is accessed through a passive (index) fund. The asset-class weights are determined by solving an optimization problem that includes a CVaR constraint. The optimization relies on a Modified CTGAN algorithm that incorporates features (contextual information) and is used to generate synthetic return scenarios, which, in turn, are fed into the optimization engine. For contextual information we rely on several points along the U.S. Treasury yield curve. The merits of this approach are demonstrated with an example based on ten asset classes (covering stocks, bonds, and commodities) over a fourteen-and-a-half-year period (January 2008-June 2022). We show that the synthetic generation process captures the key characteristics of the original data well, and that the optimization scheme results in portfolios exhibiting satisfactory out-of-sample performance. We also show that this approach outperforms the conventional equal-weights (1/N) asset allocation strategy and other optimization formulations based on historical data only.
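A minimal sketch of the scenario-based CVaR-constrained allocation step, using the standard Rockafellar-Uryasev linearization (parameter values and the use of scipy are our assumptions; the synthetic return scenarios R would come from the generator):

import numpy as np
from scipy.optimize import linprog

def cvar_constrained_weights(R, alpha=0.95, cvar_cap=0.02):
    """Maximize expected return subject to CVaR_alpha(loss) <= cvar_cap.

    R: (S, n) matrix of synthetic return scenarios (rows = scenarios).
    Variables: w (n asset weights), zeta (VaR level), u (S slacks),
    with CVaR = zeta + mean(u)/(1-alpha) and u_s >= -R_s w - zeta."""
    S, n = R.shape
    c = np.concatenate([-R.mean(axis=0), [0.0], np.zeros(S)])  # max mean return
    # u_s >= -R_s w - zeta  <=>  -R_s w - zeta - u_s <= 0
    A1 = np.hstack([-R, -np.ones((S, 1)), -np.eye(S)])
    b1 = np.zeros(S)
    # zeta + (1 / ((1 - alpha) S)) sum(u) <= cvar_cap
    A2 = np.concatenate([np.zeros(n), [1.0],
                         np.full(S, 1.0 / ((1 - alpha) * S))])[None, :]
    b2 = np.array([cvar_cap])
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)])[None, :]  # sum w = 1
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S
    res = linprog(c, A_ub=np.vstack([A1, A2]), b_ub=np.concatenate([b1, b2]),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds, method="highs")
    return res.x[:n]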
A variety of control tasks such as inverse kinematics (IK), trajectory optimization (TO), and model predictive control (MPC) are commonly formulated as energy minimization problems. Numerical solutions to such problems are well-established. However, these are often too slow to be used directly in real-time applications. The alternative is to learn solution manifolds for control problems in an offline stage. Although this distillation process can be trivially formulated as a behavioral cloning (BC) problem in an imitation learning setting, our experiments highlight a number of significant shortcomings arising from incompatible local minima, interpolation artifacts, and insufficient coverage of the state space. In this paper, we propose an alternative to BC that is efficient and numerically robust. We formulate the learning of solution manifolds as a minimization of the energy terms of a control objective integrated over the space of problems of interest. We minimize this energy integral with a novel method that combines Monte Carlo-inspired adaptive sampling strategies with the derivatives used to solve individual instances of the control task. We evaluate the performance of our formulation on a series of robotic control problems of increasing complexity, and we highlight its benefits through comparisons against traditional methods such as behavioral cloning and Dataset Aggregation (DAgger).
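A hedged sketch of the core training loop, reduced to its simplest form (the network, the toy energy, and the uniform sampling are placeholders for problem-specific energies and the paper's adaptive sampling):

import torch

# Learn theta(x) minimizing E(theta, x) over problem instances x, i.e.
# min_w int E(net(x; w), x) dx, estimated by Monte Carlo over sampled x.
net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def energy(theta, x):
    # Toy IK-like energy: reach target x with a small effort penalty (assumed form).
    return ((theta - x) ** 2).sum(dim=1) + 0.1 * (theta ** 2).sum(dim=1)

for step in range(1000):
    x = torch.rand(256, 2) * 2 - 1        # uniform sampling of problem instances
    loss = energy(net(x), x).mean()       # Monte Carlo estimate of the integral
    opt.zero_grad(); loss.backward(); opt.step()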
This paper addresses the following question: given a sample of i.i.d. random variables with finite variance, can one construct an estimator of the unknown mean that performs nearly as well as if the data were normally distributed? One of the most popular examples achieving this goal is the median of means estimator. However, it is inefficient in the sense that the constants in the resulting bounds are suboptimal. We show that a permutation-invariant modification of the median of means estimator admits deviation guarantees that are sharp up to a $1+o(1)$ factor if the underlying distribution possesses more than $\frac{3+\sqrt{5}}{2}\approx 2.62$ moments and is absolutely continuous with respect to the Lebesgue measure. This result yields potential improvements for a variety of algorithms that rely on the median of means estimator as a building block. At the core of our argument are new deviation inequalities for U-statistics whose order is allowed to grow with the sample size, a result that could be of independent interest.
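For reference, a minimal sketch of the classical median of means baseline (the paper's permutation-invariant modification, which symmetrizes over block assignments, is not reproduced here):

import numpy as np

def median_of_means(x, k):
    """Classical median of means: split x into k blocks, average each
    block, and return the median of the block means. The permutation-
    invariant variant studied in the paper averages over the block
    assignments instead of fixing a single one."""
    rng = np.random.default_rng(0)
    x = rng.permutation(x)               # one random block assignment
    blocks = np.array_split(x, k)
    return np.median([b.mean() for b in blocks])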
Early detection of many life-threatening diseases (e.g., prostate and breast cancer) within at-risk populations can improve clinical outcomes and reduce the cost of care. While numerous disease-specific "screening" tests that are closer to Point-of-Care (POC) are in use for this task, their low specificity results in unnecessary biopsies, leading to avoidable patient trauma and wasteful healthcare spending. On the other hand, despite the high accuracy of Magnetic Resonance (MR) imaging in disease diagnosis, it is not used as a POC disease identification tool because of poor accessibility. The poor accessibility of MR stems from the requirement to reconstruct high-fidelity images, which necessitates a lengthy and complex process of acquiring large quantities of high-quality k-space measurements. In this study we explore the feasibility of an ML-augmented MR pipeline that directly infers the disease, sidestepping the image reconstruction process. We hypothesise that the disease classification task can be solved using a very small, tailored subset of the k-space data required for image reconstruction. To that end, we propose a method that performs two tasks: 1) it identifies a subset of the k-space that maximizes disease identification accuracy, and 2) it infers the disease directly using the identified k-space subset, bypassing the image reconstruction step. We validate our hypothesis by measuring the performance of the proposed system across multiple diseases and anatomies. We show that performance comparable to image-based classifiers, trained on images reconstructed from full k-space data, can be achieved using small quantities of data: 8% of the data for detecting multiple abnormalities in prostate and brain scans, and 5% for knee abnormalities. To better understand the proposed approach and to foster future research, we provide an extensive analysis and release our code.
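A hedged sketch of the two tasks in their simplest joint form, assuming a relaxed learnable mask over k-space and a classifier acting directly on the masked measurements (the architecture and the sparsity handle are our assumptions, not the paper's design):

import torch

class KSpaceClassifier(torch.nn.Module):
    """Jointly learn (1) a soft k-space subsampling mask and
    (2) a classifier acting directly on the masked k-space."""
    def __init__(self, h=128, w=128, n_classes=2):
        super().__init__()
        self.mask_logits = torch.nn.Parameter(torch.zeros(h, w))
        self.head = torch.nn.Sequential(
            torch.nn.Flatten(), torch.nn.Linear(2 * h * w, 256),
            torch.nn.ReLU(), torch.nn.Linear(256, n_classes))

    def forward(self, kspace):                       # kspace: (B, h, w), complex
        mask = torch.sigmoid(self.mask_logits)       # relaxed subset selection
        masked = kspace * mask
        feats = torch.stack([masked.real, masked.imag], dim=1)
        return self.head(feats), mask.mean()         # logits, kept-fraction proxy

# Training would add a penalty on the kept fraction mask.mean() to drive
# the selected subset toward, e.g., 8% of k-space (assumed mechanism).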
The rapid rise in carbon dioxide emissions is a major cause of global warming and climate change, which pose a huge threat to human survival and exert far-reaching influence on the global ecosystem. It is therefore necessary to control carbon dioxide emissions effectively by accurately and promptly predicting and analyzing their trend, so as to provide a reference for emissions mitigation measures. This paper aims to select a suitable model for predicting near-real-time daily emissions, based on univariate daily time-series data from January 1st, 2020 to September 30th, 2022 covering all sectors (Power, Industry, Ground Transport, Residential, Domestic Aviation, International Aviation) in China. We consider six prediction models: three statistical models, namely Grey prediction (GM(1,1)), autoregressive integrated moving average (ARIMA), and seasonal autoregressive integrated moving average with exogenous factors (SARIMAX); and three machine learning models, namely artificial neural network (ANN), random forest (RF), and long short-term memory (LSTM). To evaluate the performance of these models, five criteria are employed and discussed in detail: Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and the Coefficient of Determination ($R^2$). In the results, the three machine learning models outperform the three statistical models, with the LSTM model performing best on all five criteria for daily emissions prediction: an MSE of 3.5179e-04, an RMSE of 0.0187, an MAE of 0.0140, a MAPE of 14.8291%, and an $R^2$ of 0.9844.
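For concreteness, the five criteria admit a direct implementation (a minimal numpy sketch; function and variable names are ours):

import numpy as np

def evaluate(y_true, y_pred):
    """MSE, RMSE, MAE, MAPE (%), and R^2 for a 1-D forecast."""
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))   # assumes y_true != 0
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "MAPE": mape, "R2": r2}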
We introduce and analyse the first order Enlarged Enhancement Virtual Element Method (E$^2$VEM) for the Poisson problem. The method allows the definition of bilinear forms that do not require a stabilization term, thanks to the exploitation of higher order polynomial projections that are made computable by suitably enlarging the enhancement property (from which comes the prefix of the name E$^2$) of local virtual spaces. The polynomial degree of the local projections is chosen based on the number of vertices of each polygon. We provide a proof of well-posedness and optimal-order a priori error estimates. Numerical tests on convex and non-convex polygonal meshes confirm the well-posedness criterion and the theoretical convergence rates.
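To indicate what is being avoided (standard virtual element notation, assumed rather than taken from the paper): on each polygon $E$, a conventional first order VEM discretizes $a^E(u,v) = \int_E \nabla u \cdot \nabla v$ as
\[
a_h^E(u_h, v_h) = a^E(\Pi^{\nabla} u_h, \Pi^{\nabla} v_h) + S^E\big((I - \Pi^{\nabla}) u_h, (I - \Pi^{\nabla}) v_h\big),
\]
where $S^E$ is the stabilization term. E$^2$VEM enlarges the enhancement constraints so that a higher order polynomial projection becomes computable and the consistency term alone yields a well-posed discrete problem, removing the need for $S^E$.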
Many models for point process data are defined through a thinning procedure where locations of a base process (often Poisson) are either kept (observed) or discarded (thinned). In this paper, we return to the fundamentals of the distribution theory for point processes and provide a colouring theorem that characterizes the joint density of thinned and observed locations in any such model. In practice, the marginal model of observed points is often intractable, but thinned locations can be instantiated from their conditional distribution, and typical data augmentation schemes can be employed to circumvent this problem. Such approaches have been employed in recent publications, but conceptual flaws have been introduced in this literature. We concentrate on one example: the so-called sigmoidal Gaussian Cox process. We apply our general theory to resolve contradictory viewpoints in the data augmentation step of the inference procedures therein. Finally, we provide a multitype extension to this process and conduct Bayesian inference on data consisting of the positions of two different species of trees in Lansing Woods, Michigan.
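A minimal sketch of the thinning construction behind the sigmoidal Gaussian Cox process on $[0,1]$, with the Gaussian process sample replaced by a fixed function g (this follows the standard construction, not necessarily the paper's exact augmentation scheme):

import numpy as np

def sample_sgcp(lam_star, g, rng=np.random.default_rng(0)):
    """Thin a rate-lam_star Poisson process on [0, 1]: keep x with
    probability sigmoid(g(x)), so the observed process has intensity
    lam_star * sigmoid(g(x)). Returns the two 'colours':
    (observed, thinned) locations."""
    n = rng.poisson(lam_star)            # base Poisson count on [0, 1]
    x = rng.uniform(0.0, 1.0, n)         # base locations
    keep = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-g(x)))
    return x[keep], x[~keep]

observed, thinned = sample_sgcp(100.0, lambda x: 3 * np.sin(6 * x))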
Data augmentation, the artificial creation of training data for machine learning through transformations, is a widely studied research field across machine learning disciplines. While it is useful for increasing the generalization capabilities of a model, it can also address many other challenges and problems, from overcoming a limited amount of training data, to regularizing the objective, to limiting the amount of data used in order to protect privacy. Based on a precise description of the goals and applications of data augmentation (C1) and a taxonomy of existing works (C2), this survey covers data augmentation methods for textual classification and aims to provide a concise and comprehensive overview for researchers and practitioners (C3). Derived from the taxonomy, we divide more than 100 methods into 12 different groupings and provide state-of-the-art references identifying which methods are highly promising (C4). Finally, research perspectives that may constitute a building block for future work are given (C5).