Composite likelihood usually ignores dependencies among response components, while variational approximation to the likelihood ignores dependencies among parameter components. We derive a Gaussian variational approximation to the composite log-likelihood function for Poisson and Gamma regression models with crossed random effects. We show consistency and asymptotic normality of the estimates derived from this approximation and support this theory with simulation studies. The approach is computationally much faster than a Gaussian variational approximation to the full log-likelihood function.
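For intuition, here is a hedged sketch of the kind of bound involved, for a Poisson model with a single Gaussian random effect $u \sim N(0, \sigma^2)$ rather than the paper's crossed effects: with variational posterior $q(u) = N(\mu, \lambda)$, Jensen's inequality yields the tractable lower bound

\[
\log L(\beta, \sigma^2) \;\ge\; \sum_i \Bigl\{ y_i\,(x_i^\top\beta + \mu) - e^{\,x_i^\top\beta + \mu + \lambda/2} - \log y_i! \Bigr\} \;-\; \mathrm{KL}\bigl(N(\mu,\lambda)\,\big\|\,N(0,\sigma^2)\bigr),
\]

where $\mathrm{KL} = \tfrac12\{\lambda/\sigma^2 + \mu^2/\sigma^2 - 1 - \log(\lambda/\sigma^2)\}$ and the exponential term uses $E_q[e^u] = e^{\mu + \lambda/2}$. The paper's construction applies the same idea to the composite log-likelihood with crossed random effects.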
We present a rigorous and precise analysis of the maximum degree and the average degree in the dynamic duplication-divergence graph model introduced by Sol\'e, Pastor-Satorras et al., in which the graph grows according to a duplication-divergence mechanism, i.e., by iteratively creating a copy of some node and then randomly altering the neighborhood of the new node with probability $p$. This model captures the growth of some real-world processes, e.g., biological or social networks. In this paper, we prove that for $0 < p < 1$ the maximum degree and the average degree of a duplication-divergence graph on $t$ vertices are asymptotically concentrated with high probability around $t^p$ and $\max\{t^{2 p - 1}, 1\}$, respectively, i.e., they are within at most a polylogarithmic factor of these values with probability at least $1 - t^{-A}$ for any constant $A > 0$.
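A minimal simulation sketch of one common reading of the duplication-divergence mechanism (each copied edge is retained independently with probability $p$; seed graph and handling of isolated copies vary across papers, so treat the details as assumptions) lets one compare the empirical degrees with $t^p$ and $t^{2p-1}$:

```python
import random

def duplication_divergence(t, p, seed=0):
    """Grow a toy duplication-divergence graph on t vertices.

    Starts from a single edge; each new vertex copies the neighborhood of a
    uniformly chosen existing vertex, keeping each copied edge independently
    with probability p.
    """
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}          # seed graph: one edge
    for v in range(2, t):
        parent = rng.randrange(v)   # uniformly chosen vertex to duplicate
        kept = {u for u in adj[parent] if rng.random() < p}
        adj[v] = kept
        for u in kept:
            adj[u].add(v)
    return adj

adj = duplication_divergence(t=20000, p=0.7)
degrees = [len(nbrs) for nbrs in adj.values()]
# compare with t^p and max(t^(2p-1), 1) from the theorem
print(max(degrees), sum(degrees) / len(degrees))
```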
To understand high precision observations of exoplanets and brown dwarfs, we need detailed and complex general circulation models (GCMs) that incorporate hydrodynamics, chemistry, and radiation. For this study, we specifically examined the coupling between chemistry and radiation in GCMs and compared different methods for mixing the opacities of different chemical species under the correlated-k assumption, when equilibrium chemistry cannot be assumed. We propose a fast machine learning method based on DeepSets (DS), which effectively combines individual correlated-k opacities (k-tables). We evaluated the DS method alongside other published methods such as adaptive equivalent extinction (AEE) and random overlap with rebinning and resorting (RORR). We integrated these mixing methods into our GCM (expeRT/MITgcm) and assessed their accuracy and performance for the example of the hot Jupiter HD~209458 b. Our findings indicate that the DS method is both accurate and efficient for GCM usage, whereas RORR is too slow. Additionally, we observed that the accuracy of AEE depends on its specific implementation and may introduce numerical issues that impede convergence of the radiative transfer solution. We then applied the DS mixing method in a simplified chemical disequilibrium situation, where we modeled the rainout of TiO and VO, and confirmed that the rainout of TiO and VO would hinder the formation of a stratosphere. To further expedite the development of consistent disequilibrium chemistry calculations in GCMs, we provide documentation and code for coupling the DS mixing method with correlated-k radiative transfer solvers. Our extensive tests indicate that the DS method is accurate enough for GCMs; however, other methods might be needed for accelerating atmospheric retrievals.
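As a hedged sketch of the DeepSets idea (not the paper's exact architecture; layer sizes, inputs, and outputs below are illustrative assumptions), a permutation-invariant mixer encodes each species' k-table with a shared network, sum-pools over species, and decodes the pooled representation into mixed k-coefficients:

```python
import torch
import torch.nn as nn

class DeepSetKMixer(nn.Module):
    """Permutation-invariant mixer for per-species k-tables (a sketch).

    Each species contributes its k-coefficients in one spectral bin plus an
    abundance weight; features pass through a shared encoder phi, are
    sum-pooled over species (order-invariant), and decoded by rho into the
    mixed k-coefficients.
    """
    def __init__(self, n_g=16, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(n_g + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_g))

    def forward(self, k_tables, abundances):
        # k_tables: (batch, n_species, n_g); abundances: (batch, n_species, 1)
        x = torch.cat([k_tables, abundances], dim=-1)
        pooled = self.phi(x).sum(dim=1)   # invariant to species ordering
        return self.rho(pooled)           # mixed k-table, (batch, n_g)

mixer = DeepSetKMixer()
out = mixer(torch.rand(8, 5, 16), torch.rand(8, 5, 1))
print(out.shape)  # torch.Size([8, 16])
```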
Prediction models are popular in medical research and practice. By predicting an outcome of interest for specific patients, these models may help inform difficult treatment decisions, and they are often hailed as the poster children for personalized, data-driven healthcare. We show, however, that using prediction models for decision making can lead to harmful decisions, even when the predictions exhibit good discrimination after deployment. These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, yet the worse outcomes of these patients do not invalidate the predictive power of the model. Our main result is a formal characterization of a set of such prediction models. Next, we show that models that are well calibrated before and after deployment are useless for decision making, as their deployment did not change the data distribution. These results point to the need to revise standard practices for the validation, deployment, and evaluation of prediction models used in medical decisions.
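A toy simulation (ours, not the paper's formal characterization) illustrates the phenomenon: a policy that withholds a beneficial treatment from model-flagged low-risk patients harms that group relative to treating everyone, yet post-deployment discrimination remains good, so standard validation does not reveal the harm:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 100_000
risk = rng.uniform(0, 1, n)   # each patient's untreated risk of a bad outcome
pred = risk                   # a risk model with perfect pre-deployment accuracy

# Policy: give a treatment that halves risk only to patients flagged as
# high-risk; low-risk patients are denied it, and are thus harmed relative
# to a treat-everyone policy.
treated = pred >= 0.5
post_risk = np.where(treated, 0.5 * risk, risk)
outcome = rng.uniform(0, 1, n) < post_risk   # True = bad outcome

# Post-deployment discrimination still looks good, masking the harm.
print("post-deployment AUC:", round(roc_auc_score(outcome, pred), 3))
```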
A growing number of scholars and data scientists are conducting randomized experiments to analyze causal relationships in network settings where units influence one another. A dominant methodology for analyzing these network experiments has been design-based, leveraging the randomization of treatment assignment as the basis for inference. In this paper, we generalize this design-based approach so that it can be applied to more complex experiments with a variety of causal estimands and different target populations. An important special case of such generalized network experiments is the bipartite network experiment, in which treatment assignment is randomized among one set of units and the outcome is measured for a separate set of units. We propose a broad class of causal estimands based on stochastic interventions for generalized network experiments. Using a design-based approach, we show how to estimate the proposed causal quantities without bias, and we develop conservative variance estimators. We apply our methodology to a randomized experiment in education in which a group of selected middle-school students is eligible for an anti-conflict promotion program, and program participation is randomized within this group. In particular, our analysis estimates the causal effects of treating each student or their close friends, for different target populations in the network. We find that while the treatment raises overall anti-conflict awareness among students, it does not significantly reduce the total number of conflicts.
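A minimal design-based sketch of estimation under a stochastic intervention (our toy, not the paper's estimator; the network, outcome model, and policies are assumptions): with independent Bernoulli assignment, weighting each outcome by the ratio of the probability of the observed assignments on a unit and its friends under the target policy versus under the design gives an unbiased estimate of the mean outcome under that policy:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 4
p_design, p_target = 0.5, 0.8   # Bernoulli(0.5) design vs. counterfactual policy
friends = (np.arange(n)[:, None] + np.arange(1, d + 1)) % n  # toy ring network

A = rng.binomial(1, p_design, n)                   # realized assignments
exposed = np.maximum(A, A[friends].max(axis=1))    # self or any friend treated
Y = 1.0 + 0.7 * exposed + rng.normal(0, 1, n)      # toy outcomes

# Importance weight: probability of the observed assignments on {i} and
# friends(i) under the target policy, divided by that under the design
# (assignments are independent Bernoulli, so the ratio factorizes).
def ratio(a):
    return np.where(a == 1, p_target / p_design, (1 - p_target) / (1 - p_design))

w = ratio(A) * ratio(A[friends]).prod(axis=1)
print("estimated mean outcome under the target policy:", np.mean(w * Y))
```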
Quadratization of polynomial and nonpolynomial systems of ordinary differential equations is advantageous in a variety of disciplines, such as systems theory, fluid mechanics, chemical reaction modeling, and mathematical analysis. A quadratization reveals new variables and structures of a model that may be easier to analyze, simulate, and control, and it provides a convenient parametrization for learning. This paper presents novel theory, algorithms, and software capabilities for the quadratization of non-autonomous ODEs. We provide existence results, depending on the regularity of the input function, for cases when a quadratic-bilinear system can be obtained through quadratization. We further develop existence results and an algorithm that generalizes the process of quadratization to systems of arbitrary dimension whose nonlinear structure is retained as the dimension grows; for such systems, we provide a dimension-agnostic quadratization. An example is semi-discretized PDEs, where the nonlinear terms remain symbolically identical as the discretization size increases. As an important step toward practical adoption of this research, we extend the capabilities of the QBee software to both non-autonomous systems of ODEs and ODEs of arbitrary dimension. We present several examples of ODEs previously reported in the literature for which our new algorithms find quadratized ODE systems of lower dimension than the previously reported lifting transformations. We further highlight an important application area of quadratization: reduced-order model learning. This area can benefit significantly from working in the optimal lifting variables, where quadratic models provide a direct parametrization of the model and also avoid additional hyperreduction for the nonlinear terms. A solar wind example highlights these advantages.
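As a hedged, textbook-style illustration of quadratization (not an example from the paper): the cubic ODE $\dot{x} = x^3$ becomes a quadratic system after introducing the auxiliary lifting variable $w := x^2$,

\[
\dot{x} = x^3
\quad\longrightarrow\quad
\dot{x} = x\,w, \qquad
\dot{w} = 2x\dot{x} = 2x^4 = 2w^2,
\]

so every right-hand side is at most quadratic in the lifted variables $(x, w)$.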
Robust inferential methods based on divergence measures have shown an appealing trade-off between efficiency and robustness in many different statistical models. In this paper, minimum density power divergence estimators (MDPDEs) for the scale and shape parameters of the log-logistic distribution are considered. The log-logistic is a versatile distribution for modeling lifetime data, commonly adopted in survival analysis and reliability engineering studies when the hazard rate initially increases but then decreases after some point. Further, it is shown that the classical maximum likelihood estimator (MLE) is included as a particular case of the MDPDE family. Moreover, the corresponding influence function of the MDPDE is obtained and its boundedness is proved, thus leading to robust estimators. A simulation study is carried out to illustrate the slight loss in efficiency of the MDPDE with respect to the MLE and, at the same time, the considerable gain in robustness.
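A hedged numerical sketch (a generic Basu et al.-style density power divergence fit, not the paper's implementation): for tuning parameter $\alpha > 0$, the MDPDE minimizes $\int f_\theta^{1+\alpha}\,dx - (1 + 1/\alpha)\,n^{-1}\sum_i f_\theta(X_i)^\alpha$, which reduces to the MLE criterion as $\alpha \to 0$. SciPy's `fisk` distribution is the log-logistic:

```python
import numpy as np
from scipy import stats, optimize, integrate

def dpd_objective(params, x, alpha):
    """Density power divergence criterion for the log-logistic (Fisk) model."""
    shape, scale = np.exp(params)   # enforce positivity via log-parametrization
    f = lambda t: stats.fisk.pdf(t, shape, scale=scale)
    integral, _ = integrate.quad(lambda t: f(t) ** (1 + alpha), 0, np.inf)
    return integral - (1 + 1 / alpha) * np.mean(f(x) ** alpha)

rng = np.random.default_rng(0)
x = stats.fisk.rvs(2.5, scale=1.0, size=500, random_state=rng)
x[:10] = 50.0   # inject gross outliers; the MLE would be dragged by these

res = optimize.minimize(dpd_objective, x0=np.log([1.0, 1.0]),
                        args=(x, 0.5), method="Nelder-Mead")
print("MDPDE (shape, scale):", np.exp(res.x))
```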
Multistate Markov models are a canonical parametric approach for modeling observed or latent stochastic processes supported on a finite state space. Continuous-time Markov processes describe data that are observed irregularly over time, as is often the case in longitudinal medical data, for example. Assuming that a continuous-time Markov process is time-homogeneous, a closed-form likelihood function can be derived from the Kolmogorov forward equations -- a system of differential equations with a well-known matrix-exponential solution. Unfortunately, however, the forward equations do not admit an analytical solution for continuous-time, time-inhomogeneous Markov processes, and so researchers and practitioners often make the simplifying assumption that the process is piecewise time-homogeneous. In this paper, we provide intuitions and illustrations of the potential biases in parameter estimation that may ensue in the more realistic scenario that the piecewise-homogeneous assumption is violated, and we advocate for computing the likelihood in a truly time-inhomogeneous fashion. Particular focus is afforded to multistate Markov models that allow for state-label misclassification, a setting that applies more broadly to hidden Markov models (HMMs); there, Bayesian computation bypasses the need for the computationally demanding numerical gradient approximations otherwise required to obtain maximum likelihood estimates (MLEs). Supplemental materials are available online.
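A hedged sketch of the two computations contrasted above (toy generator matrix, not the paper's model): in the time-homogeneous case the transition probability matrix is the matrix exponential $P(t) = e^{Qt}$, while in the time-inhomogeneous case one integrates the forward equations $dP/dt = P\,Q(t)$ numerically:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Toy 3-state generator matrix Q (rows sum to zero; state 3 absorbing).
Q = np.array([[-0.30,  0.20, 0.10],
              [ 0.05, -0.15, 0.10],
              [ 0.00,  0.00, 0.00]])

# Time-homogeneous case: closed-form forward solution P(t) = expm(Q t).
P_homog = expm(Q * 2.0)

# Time-inhomogeneous case: integrate dP/dt = P Q(t) with P(0) = I.
def Qt(t):
    return Q * (1.0 + 0.5 * np.sin(t))   # toy time-varying intensities

def forward(t, P_flat):
    return (P_flat.reshape(3, 3) @ Qt(t)).ravel()

sol = solve_ivp(forward, (0.0, 2.0), np.eye(3).ravel(), rtol=1e-8)
P_inhomog = sol.y[:, -1].reshape(3, 3)
print(P_homog, P_inhomog, sep="\n")
```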
The Horvitz-Thompson (H-T) estimator is widely used for estimating various types of average treatment effects under network interference. We systematically investigate the optimality properties of the H-T estimator under network interference by embedding it in the class of all linear estimators. In particular, we show that, in the presence of any kind of network interference, the H-T estimator is inadmissible in the class of all linear estimators under both completely randomized and Bernoulli designs. We also show that the H-T estimator becomes admissible under certain restricted randomization schemes termed ``fixed exposure designs,'' of which we give examples. It is well known that the H-T estimator is unbiased when the correct weights are specified. Here, we derive the weights for unbiased estimation of various causal effects and illustrate how they depend not only on the design but, more importantly, on the assumed form of interference (which in many real-world situations is unknown at the design stage) and on the causal effect of interest.
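A hedged toy sketch of the H-T estimator under an assumed exposure mapping (Aronow-Samii style; the network, outcomes, and exposure definition below are our assumptions, not the paper's): under a Bernoulli($p$) design, the weights are the inverse probabilities of the exposure conditions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 3000, 0.5
ring = (np.arange(n)[:, None] + np.arange(1, 3)) % n   # two neighbors per unit

A = rng.binomial(1, p, n)
# Assumed exposure mapping: "fully treated" = unit and both neighbors treated;
# "fully control" = unit and both neighbors untreated.
e1 = (A == 1) & (A[ring].min(axis=1) == 1)
e0 = (A == 0) & (A[ring].max(axis=1) == 0)
Y = 1.0 + 1.5 * e1 + rng.normal(0, 1, n)

# Correct weights under Bernoulli(p): exposure probabilities p^3 and (1-p)^3.
pi1, pi0 = p ** 3, (1 - p) ** 3
ht = np.mean(Y * e1 / pi1) - np.mean(Y * e0 / pi0)
print("H-T estimate of the full-exposure contrast:", ht)
```

If the assumed form of interference is wrong (e.g. exposure actually depends on second-degree neighbors), these weights no longer yield an unbiased estimate, which is the sensitivity the abstract highlights.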
A prompt is a sequence of symbols or tokens, selected from a vocabulary according to some rule, which is prepended or concatenated to a textual query. A key problem is how to select this sequence of tokens; we formulate it as a combinatorial optimization problem. The high dimensionality of the token space, compounded by the length of the prompt sequence, calls for a very efficient solution. We propose a Bayesian optimization method executed in a continuous embedding of the combinatorial space. We focus on hard prompt tuning (HPT), which directly searches for discrete tokens to be added to the text input without requiring access to the internals of the large language model (LLM), and which can therefore be used even when the LLM is available only as a black box. This is critically important when LLMs are offered in a Model-as-a-Service (MaaS) manner, as with GPT-4. This manuscript focuses on the optimization of discrete prompts for classification tasks. Discrete prompts give rise to difficult combinatorial optimization problems that easily become intractable given the dimension of the token space in realistic applications. Bayesian optimization (BO) has become the dominant approach in black-box optimization owing to its sample efficiency, modular structure, and versatility. We use BoTorch, a library for Bayesian optimization research built on top of PyTorch. Albeit preliminary and obtained using a 'vanilla' version of BO, our experiments with RoBERTa on six benchmarks show good performance across a variety of tasks and enable an analysis of the trade-off between the size of the search space, accuracy, and wall-clock time.
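A minimal vanilla-BO sketch in BoTorch (our illustration, not the paper's pipeline): a synthetic objective stands in for the LLM query, and the decoding step from a continuous embedding back to discrete tokens, e.g. by nearest-neighbor projection onto token embeddings, is only indicated in a comment:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

d = 8  # dimension of the continuous prompt embedding (illustrative)

def score(z):
    # Stand-in for: decode z to discrete tokens (e.g. nearest token
    # embeddings), query the black-box LLM, return validation accuracy.
    return -(z - 0.3).pow(2).sum(dim=-1, keepdim=True)

bounds = torch.stack([torch.zeros(d), torch.ones(d)]).double()
Z = torch.rand(10, d, dtype=torch.double)   # initial design
Y = score(Z)

for _ in range(20):                          # vanilla BO loop
    gp = SingleTaskGP(Z, Y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))
    acqf = ExpectedImprovement(gp, best_f=Y.max())
    z_next, _ = optimize_acqf(acqf, bounds=bounds, q=1,
                              num_restarts=5, raw_samples=64)
    Z = torch.cat([Z, z_next])
    Y = torch.cat([Y, score(z_next)])

print("best score found:", Y.max().item())
```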
Inspired by the success of WaveNet in multi-subject speech synthesis, we propose a novel neural network based on causal convolutions for multi-subject motion modeling and generation. The network can capture the intrinsic characteristics of the motion of different subjects, such as the influence of skeleton scale variation on motion style. Moreover, after fine-tuning the network using a small motion dataset for a novel skeleton that is not included in the training dataset, it is able to synthesize high-quality motions with a personalized style for the novel skeleton. The experimental results demonstrate that our network can model the intrinsic characteristics of motions well and can be applied to various motion modeling and synthesis tasks.
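A generic sketch of the causal-convolution building block that such WaveNet-style models rest on (not the authors' exact network; channel counts and depth below are illustrative):

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """Dilated 1-D convolution that never looks at future frames.

    Left-pads the sequence so that output[t] depends only on input[<=t],
    the core building block of WaveNet-style sequence models.
    """
    def __init__(self, channels, kernel_size=2, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation)

    def forward(self, x):   # x: (batch, channels, time)
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

# Stacking blocks with doubling dilation grows the receptive field
# exponentially, capturing long-range context in a motion sequence.
net = nn.Sequential(*[CausalConv1d(64, dilation=2 ** i) for i in range(6)])
y = net(torch.randn(4, 64, 120))   # e.g. 120 frames of pose features
print(y.shape)                     # torch.Size([4, 64, 120])
```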