Underwriting is one of the most important stages of the insurance process, in which an insurance company uses a variety of factors to classify policyholders. In this study, we apply several machine learning models, such as nearest neighbours and logistic regression, to the Actuarial Challenge dataset used by Qazvini (2019) to classify liability insurance policies into two groups: (1) policies with claims and (2) policies without claims.
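As a rough illustration of this classification setup, the sketch below fits logistic regression and a k-nearest-neighbours classifier to a binary claim indicator with scikit-learn. The file name, feature columns, and hyperparameters are hypothetical placeholders, not the actual Actuarial Challenge variables.

```python
# Minimal sketch (not the study's pipeline): classifying policies into
# "claim" vs "no claim" with logistic regression and k-nearest neighbours.
# The CSV file and feature names are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("liability_policies.csv")           # hypothetical file
X = df[["exposure", "vehicle_age", "driver_age"]]     # hypothetical features
y = df["has_claim"]                                   # 1 = claim, 0 = no claim

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = [
    ("logistic regression",
     make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("k-nearest neighbours",
     make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=15))),
]
for name, model in models:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```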
A discrete spatial lattice can be cast as a network structure over which spatially-correlated outcomes are observed. A second network structure may also capture similarities among measured features, when such information is available. Incorporating the network structures when analyzing such doubly-structured data can improve predictive power, and lead to better identification of important features in the data-generating process. Motivated by applications in spatial disease mapping, we develop a new doubly regularized regression framework to incorporate these network structures for analyzing high-dimensional datasets. Our estimators can be easily implemented with standard convex optimization algorithms. In addition, we describe a procedure to obtain asymptotically valid confidence intervals and hypothesis tests for our model parameters. We show empirically that our framework provides improved predictive accuracy and inferential power compared to existing high-dimensional spatial methods. These advantages hold given fully accurate network information, and also with networks which are partially misspecified or uninformative. The application of the proposed method to modeling COVID-19 mortality data suggests that it can improve prediction of deaths beyond standard spatial models, and that it selects relevant covariates more often.
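One way to make the "doubly regularized" idea concrete is to penalize unit-level effects with a spatial-network Laplacian and the regression coefficients with a feature-network Laplacian. The sketch below implements such a quadratic formulation in closed form; it is an illustrative variant under these assumptions, not necessarily the authors' exact estimator.

```python
# Hedged sketch of one possible doubly network-regularized regression:
# spatial-network smoothing of unit effects u and feature-network smoothing
# of coefficients beta, solved as a single ridge-type least-squares problem.
import numpy as np

def graph_laplacian(A):
    """Unnormalized graph Laplacian L = D - A of an adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def fit_doubly_regularized(X, y, A_space, A_feat, lam_s=1.0, lam_f=1.0, eps=1e-6):
    """Minimize ||y - X beta - u||^2 + lam_f * beta' L_f beta + lam_s * u' L_s u,
    where L_s is the spatial-network Laplacian (n x n) and L_f the
    feature-network Laplacian (p x p). A small ridge eps keeps the system
    well conditioned when the Laplacians are only positive semidefinite."""
    n, p = X.shape
    L_s = graph_laplacian(A_space)
    L_f = graph_laplacian(A_feat)
    Z = np.hstack([X, np.eye(n)])                    # design for theta = [beta; u]
    P = np.block([[lam_f * L_f, np.zeros((p, n))],
                  [np.zeros((n, p)), lam_s * L_s]])
    theta = np.linalg.solve(Z.T @ Z + P + eps * np.eye(n + p), Z.T @ y)
    return theta[:p], theta[p:]                      # beta_hat, u_hat
```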
For several types of information relations, the induced rough sets system RS does not form a lattice but only a partially ordered set. However, by studying its Dedekind-MacNeille completion DM(RS), one may reveal new important properties of rough set structures. Building upon D. Umadevi's work on describing joins and meets in DM(RS), we previously investigated pseudo-Kleene algebras defined on DM(RS) for reflexive relations. This paper delves deeper into the order-theoretic properties of DM(RS) in the context of reflexive relations. We describe the completely join-irreducible elements of DM(RS) and characterize when DM(RS) is a spatial completely distributive lattice. We show that even in the case of a non-transitive reflexive relation, DM(RS) can form a Nelson algebra, a property generally associated with quasiorders. We introduce a novel concept, the core of a relational neighborhood, and use it to provide a necessary and sufficient condition for DM(RS) to determine a Nelson algebra.
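For reference, the rough set system RS induced by a binary relation R on a universe U is built from the standard lower and upper approximations; a sketch of the usual definitions (notation may differ from the paper's) is:

```latex
% Standard rough-approximation notation, with neighbourhood R(x) = { y : x R y }:
\[
  X_R = \{ x \in U : R(x) \subseteq X \}, \qquad
  X^R = \{ x \in U : R(x) \cap X \neq \emptyset \},
\]
\[
  RS = \{ (X_R, X^R) : X \subseteq U \}, \quad
  \text{ordered componentwise by set inclusion.}
\]
```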
We present a novel, model-free, and data-driven methodology for controlling complex dynamical systems into previously unseen target states, including those with significantly different and complex dynamics. Leveraging a parameter-aware realization of next-generation reservoir computing, our approach accurately predicts system behavior in unobserved parameter regimes, enabling control over transitions to arbitrary target states. Crucially, this includes states with dynamics that differ fundamentally from known regimes, such as shifts from periodic to intermittent or chaotic behavior. The method's parameter-awareness facilitates non-stationary control, ensuring smooth transitions between states. By extending the applicability of machine learning-based control mechanisms to previously inaccessible target dynamics, this methodology opens the door to transformative new applications while maintaining exceptional efficiency. Our results highlight reservoir computing as a powerful alternative to traditional methods for dynamic system control.
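A minimal sketch of the kind of parameter-aware next-generation reservoir computer alluded to above: features are time-delayed states plus quadratic monomials with the system parameter appended, and a linear readout trained by ridge regression predicts the next state. Delay depth, polynomial order, and regularization are illustrative choices, not the paper's exact configuration.

```python
# Parameter-aware NG-RC sketch: delayed states + quadratic monomials + the
# bifurcation/control parameter feed a ridge-regression readout that predicts
# the next state one step ahead.
import numpy as np
from itertools import combinations_with_replacement

def ngrc_features(x_delayed, param):
    """x_delayed: concatenated delayed state vector; param: scalar parameter."""
    lin = np.concatenate([[1.0, param], x_delayed])                 # bias, parameter, linear terms
    quad = [a * b for a, b in combinations_with_replacement(x_delayed, 2)]
    return np.concatenate([lin, quad])

def fit_readout(states, params, k=2, ridge=1e-6):
    """states: (T, d) trajectory; params: (T,) parameter value at each step."""
    X, Y = [], []
    for t in range(k, len(states) - 1):
        delayed = states[t - k + 1 : t + 1].ravel()                 # k delayed states
        X.append(ngrc_features(delayed, params[t]))
        Y.append(states[t + 1])                                     # one-step-ahead target
    X, Y = np.array(X), np.array(Y)
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
    return W   # predict with: ngrc_features(delayed, param) @ W
```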
Image classification from independent and identically distributed random variables is considered. Image classifiers are defined which are based on a linear combination of deep convolutional networks with max-pooling layers. Here all the weights are learned by stochastic gradient descent. A general result is presented which shows that these image classifiers are able to approximate the best possible deep convolutional network. In the case that the a posteriori probability satisfies a suitable hierarchical composition model, it is shown that the corresponding deep convolutional neural network image classifier achieves a rate of convergence that is independent of the dimension of the images.
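The sketch below (PyTorch) shows one way such a classifier could look: a weighted linear combination of several small convolutional networks with max-pooling, with all weights trained jointly by stochastic gradient descent. Architecture sizes are arbitrary and the classes are hypothetical, intended only to illustrate the estimator's structure.

```python
# Illustrative sketch: a linear combination of convolutional networks with
# max-pooling, trained end-to-end by SGD. Sizes are arbitrary choices.
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, in_channels=1, width=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, width, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool2d(1),            # global max-pooling
        )

    def forward(self, x):
        return self.features(x).flatten(1)      # (batch, width)

class LinearCombinationClassifier(nn.Module):
    """Linear combination of the outputs of several convolutional networks."""
    def __init__(self, num_nets=5, width=16, num_classes=2):
        super().__init__()
        self.nets = nn.ModuleList(SmallConvNet(width=width) for _ in range(num_nets))
        self.readout = nn.Linear(num_nets * width, num_classes)

    def forward(self, x):
        return self.readout(torch.cat([net(x) for net in self.nets], dim=1))

model = LinearCombinationClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()
# Training step: loss_fn(model(images), labels).backward(); optimizer.step()
```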
In many situations humans have to reason with inconsistent knowledge. These inconsistencies may occur because the sources of information are not fully reliable. To reason with inconsistent knowledge, one cannot view a set of premisses as absolute truths, as is done in predicate logic. By viewing the set of premisses as a set of assumptions, however, it is possible to deduce useful conclusions from an inconsistent set of premisses. In this paper a logic for reasoning with inconsistent knowledge is described. This logic is a generalization of the work of N. Rescher [15]. In the logic a reliability relation is used to choose between incompatible assumptions. These choices are only made when a contradiction is derived. As long as no contradiction is derived, the knowledge is assumed to be consistent. This makes it possible to define an argumentation-based deduction process for the logic. For the logic a semantics based on the ideas of Y. Shoham [22, 23] is defined. It turns out that the semantics for the logic is a preferential semantics according to the definition of S. Kraus, D. Lehmann and M. Magidor [12]. Therefore the logic is a logic of system P and possesses all the properties of an ideal non-monotonic logic.
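A toy sketch of the reliability-driven choice described above, not the paper's deduction calculus: assumptions carry reliability ranks, nothing is retracted while no contradiction is derived, and once a contradiction is derived the least reliable assumption involved in it is withdrawn.

```python
# Toy illustration only: resolving a derived contradiction by withdrawing the
# least reliable of the assumptions that jointly produce it.
def resolve_contradiction(assumptions, reliability, conflicting):
    """assumptions: set of formulas; reliability: formula -> rank (higher is
    more reliable); conflicting: subset of assumptions deriving a contradiction."""
    weakest = min(conflicting, key=lambda a: reliability[a])
    return assumptions - {weakest}

reliability = {"bird(tweety)": 3, "penguin(tweety)": 2, "flies(tweety)": 1}
kb = set(reliability)
# Suppose {penguin(tweety), flies(tweety)} is found to derive a contradiction:
kb = resolve_contradiction(kb, reliability, {"penguin(tweety)", "flies(tweety)"})
print(kb)   # flies(tweety) is withdrawn, being the least reliable assumption
```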
We study the problem of constructing an estimator of the average treatment effect (ATE) with observational data. The celebrated doubly-robust, augmented-IPW (AIPW) estimator generally requires consistent estimation of both nuisance functions for standard root-n inference, and moreover that the product of the errors of the nuisances should shrink at a rate faster than $n^{-1/2}$. A recent strand of research has aimed to understand the extent to which the AIPW estimator can be improved upon (in a minimax sense). Under structural assumptions on the nuisance functions, the AIPW estimator is typically not minimax-optimal, and improvements can be made using higher-order influence functions (Robins et al., 2017). Conversely, without any assumptions on the nuisances beyond the mean-square-error rates at which they can be estimated, the rate achieved by the AIPW estimator is already optimal (Balakrishnan et al., 2023; Jin and Syrgkanis, 2024). We make three main contributions. First, we propose a new hybrid class of distributions that combine structural agnosticism regarding the nuisance function space with additional smoothness constraints. Second, we calculate minimax lower bounds for estimating the ATE in the new class, as well as in the pure structure-agnostic one. Third, we propose a new estimator of the ATE that enjoys doubly-robust asymptotic linearity; it can yield asymptotically valid Wald-type confidence intervals even when the propensity score or the outcome model is inconsistently estimated, or estimated at a slow rate. Under certain conditions, we show that its rate of convergence in the new class can be much faster than that achieved by the AIPW estimator and, in particular, matches the minimax lower bound rate, thereby establishing its optimality. Finally, we complement our theoretical findings with simulations.
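For reference, the AIPW estimator discussed above takes the standard form (with treatment indicator $A$, outcome $Y$, covariates $X$, estimated propensity score $\hat\pi$, and estimated outcome regressions $\hat\mu_0,\hat\mu_1$):

```latex
\[
  \hat\psi_{\mathrm{AIPW}}
  = \frac{1}{n}\sum_{i=1}^{n}\Bigl[
      \hat\mu_1(X_i) - \hat\mu_0(X_i)
      + \frac{A_i\,\bigl(Y_i - \hat\mu_1(X_i)\bigr)}{\hat\pi(X_i)}
      - \frac{(1 - A_i)\,\bigl(Y_i - \hat\mu_0(X_i)\bigr)}{1 - \hat\pi(X_i)}
    \Bigr].
\]
```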
The digital twin approach has gained recognition as a promising solution to the challenges faced by the Architecture, Engineering, Construction, Operations, and Management (AECOM) industries. However, its broader application across AECOM sectors remains limited. One significant obstacle is that traditional digital twins rely on deterministic models, which require deterministic input parameters. This limits their accuracy, as they do not account for the substantial uncertainties inherent in AECOM projects. These uncertainties are particularly pronounced in geotechnical design and construction. To address this challenge, we propose a Probabilistic Digital Twin (PDT) framework that extends traditional digital twin methodologies by incorporating uncertainties, and is tailored to the requirements of geotechnical design and construction. The PDT framework provides a structured approach to integrating all sources of uncertainty, including aleatoric, data, model, and prediction uncertainties, and propagates them throughout the entire modeling process. To ensure that site-specific conditions are accurately reflected as additional information is obtained, the PDT leverages Bayesian methods for model updating. The effectiveness of the probabilistic digital twin framework is showcased through an application to a highway foundation construction project, demonstrating its potential to improve decision-making and project outcomes in the face of significant uncertainties.
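The Bayesian-updating step of such a framework can be illustrated with a deliberately simple example: a Normal prior on an uncertain geotechnical parameter updated with noisy site measurements via conjugate Normal-Normal updating. The parameter, numbers, and likelihood are hypothetical; an actual PDT would typically involve far richer models.

```python
# Minimal sketch of Bayesian model updating for one uncertain parameter
# (e.g. a soil friction angle). Conjugate Normal-Normal updating is used
# purely for illustration.
import numpy as np

def update_normal_prior(prior_mean, prior_var, measurements, noise_var):
    """Posterior of a Normal mean under Normal measurement noise."""
    n = len(measurements)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(measurements) / noise_var)
    return post_mean, post_var

# Prior from regional experience; site-investigation data are made up.
mean, var = 30.0, 4.0**2
mean, var = update_normal_prior(mean, var,
                                measurements=[27.5, 28.9, 29.4],
                                noise_var=2.0**2)
print(f"updated friction angle: {mean:.1f} deg (sd {np.sqrt(var):.2f})")
```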
Density dependence occurs at the individual level but is often evaluated at the population level, leading to difficulties or even controversies in detecting such a process. Bayesian individual-based models such as spatial capture-recapture (SCR) models provide opportunities to study density dependence at the individual level, but such an approach remains to be developed and evaluated. In this study, we developed an SCR model that links habitat use to apparent survival and recruitment through density-dependent processes at the individual level. Using simulations, we found that the model can properly inform habitat use, but tends to underestimate the effect of density dependence on apparent survival and recruitment. Such underestimation likely arises because SCR models have difficulty identifying the locations of unobserved individuals, which are assumed to be uniformly distributed. How to accurately estimate the locations of unobserved individuals, and thus density dependence, remains a challenging topic in spatial statistics and statistical ecology.
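Two standard SCR building blocks that a model of this kind combines are a half-normal detection function around an individual's activity centre and a (here logit-linear) density-dependent link for apparent survival. The sketch below is a simplified illustration with made-up parameter values, not the authors' full model.

```python
# Simplified SCR components: half-normal detection around an activity centre
# and logit-linear density dependence in apparent survival.
import numpy as np

def detection_prob(activity_centre, trap_locations, p0=0.3, sigma=1.0):
    """Half-normal detection: p_j = p0 * exp(-d_j^2 / (2 sigma^2))."""
    d2 = np.sum((trap_locations - activity_centre) ** 2, axis=1)
    return p0 * np.exp(-d2 / (2.0 * sigma**2))

def apparent_survival(local_density, beta0=1.5, beta_dd=-0.4):
    """Logit-linear density dependence in apparent survival."""
    return 1.0 / (1.0 + np.exp(-(beta0 + beta_dd * local_density)))

traps = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(detection_prob(np.array([0.5, 0.5]), traps))   # detection prob. per trap
print(apparent_survival(local_density=2.0))          # survival at a given density
```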
This dissertation studies a fundamental open challenge in deep learning theory: why do deep networks generalize well even while being overparameterized, unregularized and fitting the training data to zero error? In the first part of the thesis, we will empirically study how training deep networks via stochastic gradient descent implicitly controls the networks' capacity. Subsequently, to show how this leads to better generalization, we will derive {\em data-dependent} {\em uniform-convergence-based} generalization bounds with improved dependencies on the parameter count. Uniform convergence has in fact been the most widely used tool in the deep learning literature, thanks to its simplicity and generality. Given its popularity, in this thesis, we will also take a step back to identify the fundamental limits of uniform convergence as a tool to explain generalization. In particular, we will show that in some example overparameterized settings, {\em any} uniform convergence bound will provide only a vacuous generalization bound. With this realization in mind, in the last part of the thesis, we will change course and introduce an {\em empirical} technique to estimate generalization using unlabeled data. Our technique does not rely on any notion of uniform-convergence-based complexity and is remarkably precise. We will theoretically show why our technique enjoys such precision. We will conclude by discussing how future work could explore novel ways to incorporate distributional assumptions in generalization bounds (such as in the form of unlabeled data) and explore other tools to derive bounds, perhaps by modifying uniform convergence or by developing completely new tools altogether.
The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
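The minimum-norm interpolation mentioned above can be illustrated numerically: in an overparameterized linear regression, gradient descent started at zero converges to the minimum-$\ell_2$-norm solution of the interpolating equations, which the sketch below computes directly with the pseudoinverse. Dimensions, signal, and noise level are arbitrary, and the example only demonstrates perfect fitting of the training data, not benign overfitting per se.

```python
# Minimum-norm interpolation in an overparameterized linear regression (p > n).
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 500                                    # many more features than samples
theta_star = np.zeros(p); theta_star[:5] = 1.0    # low-dimensional "signal" component
X = rng.normal(size=(n, p))
y = X @ theta_star + 0.1 * rng.normal(size=n)

theta_hat = np.linalg.pinv(X) @ y                 # minimum-l2-norm interpolator
print("max train residual:", np.max(np.abs(X @ theta_hat - y)))   # ~0: perfect fit

X_test = rng.normal(size=(1000, p))
y_test = X_test @ theta_star
print("test MSE:", np.mean((X_test @ theta_hat - y_test) ** 2))
```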