The balanced incomplete block design (BIBD) problem is a difficult combinatorial problem with a large number of symmetries, which complicate its resolution. In this paper, we propose a dual (integer) problem representation that serves as an alternative to the classical binary formulation of the problem. We attack this problem incrementally: first, we propose basic algorithms (i.e. local search techniques and genetic algorithms) intended to work separately on the two different search spaces (i.e. binary and integer); second, we propose two hybrid schemes: an integrative approach (i.e. a memetic algorithm) and a collaborative model in which the previous methods work in parallel, occasionally exchanging information. Three distinct two-dimensional structures are proposed as communication topologies among the algorithms involved in the collaborative model, along with a number of migration and acceptance criteria for sending and receiving data. An empirical analysis comparing a large number of instances of our schemes (with algorithms possibly working on different search spaces and with/without symmetry-breaking methods) shows that some of these algorithms can be considered the state of the art among metaheuristic methods for finding BIBDs. Moreover, our cooperative proposal is a general scheme from which distinct algorithmic variants can be instantiated to handle symmetric optimisation problems. For this reason, we have also analysed its key parameters, thereby providing general guidelines for the design of efficient and robust cooperative algorithms based on our proposal.
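To make the search space concrete: a (v, b, r, k, lambda)-BIBD is naturally encoded as the binary incidence matrix underlying the classical formulation mentioned above. The sketch below (an illustration of the feasibility test only, not of the paper's algorithms; the names is_bibd and fano are ours) checks a candidate matrix against the BIBD constraints.

```python
import numpy as np

def is_bibd(M, v, b, r, k, lam):
    """Check whether a v x b binary incidence matrix M encodes a
    (v, b, r, k, lambda)-BIBD: every point lies in r blocks, every
    block contains k points, and every pair of distinct points
    co-occurs in exactly lambda blocks."""
    M = np.asarray(M)
    if M.shape != (v, b) or not np.isin(M, (0, 1)).all():
        return False
    if not (M.sum(axis=1) == r).all():   # each row (point) has r ones
        return False
    if not (M.sum(axis=0) == k).all():   # each column (block) has k ones
        return False
    G = M @ M.T                          # pairwise co-occurrence counts
    return bool((G[~np.eye(v, dtype=bool)] == lam).all())

# Example: the Fano plane, a (7, 7, 3, 3, 1)-BIBD, built from the
# perfect difference set {0, 1, 3} mod 7.
fano = np.array([[1 if (i - j) % 7 in (0, 1, 3) else 0
                  for j in range(7)] for i in range(7)])
print(is_bibd(fano, 7, 7, 3, 3, 1))  # True
```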
Tree-based models for probability distributions are usually specified using a predetermined, data-independent collection of candidate recursive partitions of the sample space. To characterize an unknown target density in detail over the entire sample space, candidate partitions must have the capacity to expand deeply into all areas of the sample space with potential non-zero sampling probability. Such an expansive system of partitions often incurs prohibitive computational costs and makes inference prone to overfitting, especially in regions with little probability mass. Existing models typically compromise and rely on relatively shallow trees. This hampers one of the most desirable features of trees, their ability to characterize local features, and results in reduced statistical efficiency. Conventional wisdom suggests that this compromise is inevitable to ensure coherent likelihood-based reasoning, as a data-dependent partition system that allows deeper expansion only in regions with more observations would induce double dipping of the data and thus lead to inconsistent inference. We propose a simple strategy to restore coherence while allowing the candidate partitions to be data-dependent, using Cox's partial likelihood. This strategy parametrizes the tree-based sampling model according to the allocation of probability mass based on the observed data, and yet, under appropriate specification, the resulting inference remains valid. Our partial likelihood approach is broadly applicable to existing likelihood-based methods and in particular to Bayesian inference on tree-based models. We give examples in density estimation in which the partial likelihood is endowed with existing priors on tree-based models, and we compare with the standard, full-likelihood approach. The results show substantial gains in estimation accuracy and computational efficiency from using the partial likelihood.
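To illustrate the kind of data-dependent partition at issue, the toy sketch below (ours, not the paper's method) deepens a recursive midpoint partition only where observations accumulate. The naive plug-in density it prints is precisely the double use of the data that the partial-likelihood construction is designed to correct.

```python
import numpy as np

def adaptive_partition(x, lo=0.0, hi=1.0, n_min=50, depth=0, max_depth=12):
    """Split [lo, hi) at its midpoint only while it holds at least
    n_min observations; return (interval, count) leaves."""
    x = x[(x >= lo) & (x < hi)]
    if len(x) < n_min or depth >= max_depth:
        return [((lo, hi), len(x))]
    mid = 0.5 * (lo + hi)
    return (adaptive_partition(x, lo, mid, n_min, depth + 1, max_depth)
            + adaptive_partition(x, mid, hi, n_min, depth + 1, max_depth))

rng = np.random.default_rng(0)
data = rng.beta(2, 8, size=2000)        # mass concentrated near 0
n = len(data)
for (lo, hi), count in adaptive_partition(data):
    # Naive histogram density on the data-dependent partition: the
    # partition is finer exactly where the data are dense.
    print(f"[{lo:.3f}, {hi:.3f})  density ~ {count / (n * (hi - lo)):.2f}")
```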
Most of the scientific literature on causal modeling considers the structural framework of Pearl and the potential-outcome framework of Rubin to be formally equivalent, and therefore uses do-interventions and the potential-outcome subscript notation interchangeably to write counterfactual outcomes. In this paper, we agnostically superimpose the two causal models to specify under which mathematical conditions structural counterfactual outcomes and potential outcomes need to, do not need to, can, or cannot be equal (almost surely or in law). Our comparison reminds us that a structural causal model and a Rubin causal model compatible with the same observations do not have to coincide, and highlights real-world problems where they cannot even correspond. We then examine common claims and practices from the causal-inference literature in the light of these results. In doing so, we aim to clarify the relationship between the two causal frameworks and the interpretation of their respective counterfactuals.
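A classical toy example of the distinction (ours, not drawn from the paper): two structural models that induce the same observational law yet disagree on the joint behaviour of their counterfactuals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.integers(0, 2, n)     # exogenous noise, Bernoulli(1/2)

# Model A: Y := X XOR U.   Model B: Y := U (X has no effect).
# Both yield the same observational law (Y ~ Bernoulli(1/2),
# independent of X), yet their cross-world counterfactuals differ.
yA_0, yA_1 = 0 ^ u, 1 ^ u     # counterfactual Y under do(X=0), do(X=1)
yB_0, yB_1 = u, u

print("P(Y_1 = Y_0) under model A:", np.mean(yA_1 == yA_0))  # 0.0
print("P(Y_1 = Y_0) under model B:", np.mean(yB_1 == yB_0))  # 1.0
```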
High-dimensional parabolic partial differential equations (PDEs) often involve large-scale Hessian matrices, which are computationally expensive for deep learning methods that rely on automatic differentiation to compute derivatives. This work aims to address this issue. In the proposed method, the PDE is recast in a martingale formulation, which allows the computation of loss functions to be derivative-free and parallelized over the time-space domain. The martingale formulation is then enforced using a Galerkin method via adversarial learning techniques, which eliminates the need to compute conditional expectations in the martingale property. The method is further extended to solve Hamilton-Jacobi-Bellman (HJB) equations and the associated stochastic optimal control problems, enabling the simultaneous solution of the value function and the optimal feedback control in a derivative-free manner. Numerical results demonstrate the effectiveness and efficiency of the proposed method, which can solve HJB equations accurately in dimensions up to 10,000.
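For orientation, the sketch below records a standard martingale characterization in the Feynman-Kac setting; the notation is generic and need not match the paper's exact formulation. The point is that the final Galerkin-style condition over test functions involves no derivatives of u, which is what makes a loss built on it derivative-free, and the test functions can play the role of the adversary in adversarial training.

```latex
% A generic martingale characterization (Feynman--Kac setting); the
% paper's precise formulation may differ.
\begin{align*}
&\text{PDE: } \partial_t u + \mu \cdot \nabla u
   + \tfrac{1}{2}\,\operatorname{Tr}\!\big(\sigma\sigma^{\top}\nabla^{2}u\big) = 0,
 \qquad dX_t = \mu\,dt + \sigma\,dW_t,\\
&u \text{ solves the PDE} \iff M_t := u(t, X_t) \text{ is a martingale}\\
&\phantom{u \text{ solves the PDE}} \iff
 \mathbb{E}\big[(M_{t+s} - M_t)\,\varphi(t, X_t)\big] = 0
 \text{ for all test functions } \varphi .
\end{align*}
```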
A new variant of the GMRES method is presented for solving sequences of linear systems that share the same matrix but have multiple right-hand sides arriving one after another. The new method retains the following properties of the classical GMRES algorithm: the bases of both the search space and its image are kept orthonormal, which increases the robustness of the method, and there is no need to store the two bases separately, since they are effectively represented within a common basis. In addition, our method is theoretically equivalent to the GCR method extended to the case of multiple right-hand sides, but it is more numerically robust and requires less memory. The main result of the paper is a mechanism for adding an arbitrary direction vector to the search space, which can easily be adapted to flexible GMRES or GMRES with deflated restarting.
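For reference, a minimal single right-hand-side GMRES without restarts is sketched below (a textbook baseline, not the new variant): it maintains one orthonormal Arnoldi basis Q of the Krylov space and minimizes the residual over it. The paper's method additionally keeps the image basis orthonormal and represents both bases within a common one.

```python
import numpy as np

def gmres(A, b, m=50):
    """Textbook GMRES: Arnoldi with modified Gram-Schmidt, then a small
    least-squares solve of the projected problem min ||beta*e1 - H y||."""
    n = b.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    k = m
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:             # happy breakdown
            k = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    return Q[:, :k] @ y

rng = np.random.default_rng(0)
A = np.eye(100) + 0.1 * rng.standard_normal((100, 100))
b = rng.standard_normal(100)
x = gmres(A, b, m=80)
print(np.linalg.norm(A @ x - b))  # small residual
```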
For several types of information relations, the induced rough sets system RS does not form a lattice but only a partially ordered set. However, by studying its Dedekind-MacNeille completion DM(RS), one may reveal important new properties of rough set structures. Building upon D. Umadevi's work describing joins and meets in DM(RS), we previously investigated pseudo-Kleene algebras defined on DM(RS) for reflexive relations. This paper delves deeper into the order-theoretic properties of DM(RS) in the context of reflexive relations. We describe the completely join-irreducible elements of DM(RS) and characterize when DM(RS) is a spatial completely distributive lattice. We show that even in the case of a non-transitive reflexive relation, DM(RS) can form a Nelson algebra, a property generally associated with quasiorders. We introduce a novel concept, the core of a relational neighborhood, and use it to give a necessary and sufficient condition for DM(RS) to form a Nelson algebra.
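For readers unfamiliar with the objects involved, the sketch below (illustrative only) computes the neighborhood-based lower and upper approximations whose pairs make up the system RS; the relation shown is reflexive but not transitive.

```python
def neighborhood(R, x):
    """R(x) = {y : (x, y) in R}, for a relation given as a set of pairs."""
    return {y for (a, y) in R if a == x}

def rough_pair(R, U, X):
    """Lower/upper approximations of X in universe U under relation R:
    lower = {x : R(x) subset of X}, upper = {x : R(x) meets X}."""
    lower = {x for x in U if neighborhood(R, x) <= X}
    upper = {x for x in U if neighborhood(R, x) & X}
    return lower, upper

# A reflexive, non-transitive relation on U = {1, 2, 3}.
U = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 3)}
print(rough_pair(R, U, {2}))   # (set(), {1, 2})
```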
The Cox proportional hazards model (Cox model) is a popular model for survival data analysis. When the sample size is small relative to the dimension of the model, the standard maximum partial likelihood inference is often problematic. In this work, we propose the Cox catalytic prior distributions for Bayesian inference on Cox models, which are an extension of a general class of prior distributions originally designed for stabilizing complex parametric models. The Cox catalytic prior is formulated as a weighted likelihood of the regression coefficients based on synthetic data and a surrogate baseline hazard constant. This surrogate hazard can be either provided by the user or estimated from the data, and the synthetic data are generated from the predictive distribution of a fitted simpler model. For point estimation, we derive an approximation of the marginal posterior mode, which can be computed conveniently by maximizing a regularized log partial likelihood. We prove that our prior distribution is proper and that the resulting estimator is consistent under mild conditions. In simulation studies, our proposed method outperforms standard maximum partial likelihood inference and is on par with existing shrinkage methods. We further illustrate the application of our method to a real dataset.
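A plausible shape of the resulting regularized objective is sketched below; all modeling details here (no ties in the event times, synthetic points treated as events under an exponential hazard with constant baseline lam0) are assumptions of this sketch rather than the paper's specification.

```python
import numpy as np

def cox_neg_log_partial_lik(beta, X, time, event):
    """Negative Cox log partial likelihood (Breslow-style, no ties)."""
    eta = X @ beta
    order = np.argsort(-time)                # descending time: prefix = risk set
    eta, event = eta[order], event[order]
    log_risk = np.logaddexp.accumulate(eta)  # log sum of exp(eta) over risk sets
    return -np.sum((eta - log_risk)[event == 1])

def synthetic_neg_log_lik(beta, Xs, ts, lam0):
    """Log likelihood of synthetic points under an exponential hazard
    lam0 * exp(x'beta); treating every synthetic point as an event is
    an assumption made for this illustration."""
    eta = Xs @ beta
    return -np.sum(np.log(lam0) + eta - lam0 * ts * np.exp(eta))

def catalytic_objective(beta, X, time, event, Xs, ts, lam0, tau):
    """Observed partial likelihood plus synthetic likelihood weighted by
    tau / (number of synthetic points)."""
    return (cox_neg_log_partial_lik(beta, X, time, event)
            + (tau / len(ts)) * synthetic_neg_log_lik(beta, Xs, ts, lam0))
```

Because this objective is smooth in beta, the point estimate can be obtained with any generic optimizer, e.g. scipy.optimize.minimize.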
Randomized iterative algorithms, such as the randomized Kaczmarz method, have gained considerable popularity due to their efficacy in solving matrix-vector and matrix-matrix regression problems. Our present work leverages the insights gained from studying such algorithms to develop regression methods for tensors, which are the natural setting for many application problems, e.g., image deblurring. In particular, we extend the randomized Kaczmarz method to solve a tensor system of the form $\mathcal{A}\mathcal{X} = \mathcal{B}$, where $\mathcal{X}$ can be factorized as $\mathcal{X} = \mathcal{U}\mathcal{V}$, and all products are calculated using the t-product. We develop variants of the matrix randomized factorized Kaczmarz method that approximately solve tensor systems in both the consistent and inconsistent regimes. We provide theoretical guarantees of the exponential convergence rate of our algorithms, accompanied by illustrative numerical simulations. Furthermore, we situate our method within a broader context by linking our novel approaches to earlier randomized Kaczmarz methods.
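As background for the extension, the classical randomized Kaczmarz update that the tensor method generalizes is sketched below (the matrix case, with Strohmer-Vershynin row sampling); the tensor variants replace rows and products with t-product analogues and a factorized unknown.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=50_000, seed=0):
    """Classic randomized Kaczmarz for a consistent system Ax = b:
    repeatedly project the iterate onto the solution hyperplane of one
    row, sampled with probability proportional to its squared norm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.einsum('ij,ij->i', A, A)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((300, 50))
x_true = rng.standard_normal(50)
x_hat = randomized_kaczmarz(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))  # near 0
```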
In reinsurance, Poisson and negative binomial distributions are employed to model claim frequency. However, the incomplete data regarding reported incurred claims above a priority level makes estimation challenging. This paper focuses on frequency estimation using Schnieper's framework for claim counts. We demonstrate that Schnieper's model is consistent with a Poisson distribution for the total number of claims above the priority in each development year, providing a robust basis for parameter estimation. Additionally, we explain how to build an alternative assumption based on a negative binomial distribution, which yields similar results. The study includes a bootstrap procedure to manage uncertainty in parameter estimation and a case study comparing the assumptions and evaluating the impact of the bootstrap approach.
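A minimal illustration of the bootstrap idea (ours; it ignores Schnieper's development-year structure for brevity): estimate the Poisson frequency from hypothetical yearly counts above the priority, then quantify parameter uncertainty by parametric resampling.

```python
import numpy as np

rng = np.random.default_rng(0)
counts = np.array([4, 7, 5, 6, 3, 8])   # hypothetical yearly claim counts above the priority
lam_hat = counts.mean()                 # Poisson MLE of the frequency

# Parametric bootstrap: resample counts from the fitted model and
# re-estimate, giving an interval for the frequency parameter.
B = 10_000
boot = rng.poisson(lam_hat, size=(B, counts.size)).mean(axis=1)
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"lambda_hat = {lam_hat:.2f}, 95% bootstrap interval = [{lo:.2f}, {hi:.2f}]")
```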
We consider the problem of causal inference based on observational data (or the related missing data problem) with a binary or discrete treatment variable. In that context, we study inference for the counterfactual density functions and contrasts thereof, which can provide more nuanced information than counterfactual means and the average treatment effect. We impose the shape constraint of log-concavity, a type of unimodality constraint, on the counterfactual densities, and then develop doubly robust estimators of the log-concave counterfactual density based on augmented inverse-probability weighted pseudo-outcomes. We provide conditions under which the estimator is consistent in various global metrics. We also develop asymptotically valid pointwise confidence intervals for the counterfactual density functions and for differences and ratios thereof, which serve as a building block for more comprehensive analyses of distributional differences. Finally, we present a method for using our estimator to construct density confidence bands.
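One standard form of such an augmented inverse-probability weighted pseudo-outcome, written here for the counterfactual CDF at a level y (our notation, which need not match the paper's):

```latex
% AIPW pseudo-outcome for the counterfactual CDF P(Y(a) <= y);
% pi_a is the propensity model and F the conditional outcome CDF model.
\[
\hat{\xi}_y(O) \;=\; \frac{\mathbf{1}\{A = a\}}{\hat{\pi}_a(X)}
\Big( \mathbf{1}\{Y \le y\} - \hat{F}(y \mid a, X) \Big)
\;+\; \hat{F}(y \mid a, X).
\]
```

Its mean recovers P(Y(a) <= y) when either the propensity model or the outcome model is correctly specified, which is the double robustness referred to above; the log-concave density estimator is then built on pseudo-outcomes of this kind.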
Two sequential estimators are proposed for the odds p/(1-p) and the log odds log(p/(1-p)), respectively, using independent Bernoulli random variables with parameter p as inputs. The estimators are unbiased, and they guarantee that the variance of the estimation error divided by the true value of the odds, and the variance of the estimation error of the log odds, are each less than a target value for any p in (0,1). The estimators are close to optimal in the sense of Wolfowitz's bound.
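The classical device behind such sequential schemes is inverse binomial sampling: stop at the k-th failure, and the number of observed successes divided by k is exactly unbiased for the odds. The Monte Carlo sketch below illustrates this textbook construction, which is not necessarily the paper's exact estimator.

```python
import numpy as np

def sequential_odds_estimate(draw, k):
    """Inverse binomial sampling: draw Bernoulli variables until the
    k-th failure; successes/k is unbiased for the odds p/(1-p), since
    the success count is negative binomial with mean k*p/(1-p)."""
    successes = failures = 0
    while failures < k:
        if draw():
            successes += 1
        else:
            failures += 1
    return successes / k

rng = np.random.default_rng(0)
p = 0.7
estimates = [sequential_odds_estimate(lambda: rng.random() < p, k=50)
             for _ in range(2000)]
print(np.mean(estimates), p / (1 - p))   # both close to 2.333
```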