In Japan, the Housing and Land Survey (HLS) provides municipality-level grouped data on household incomes. Although these data can be used for effective local policymaking, their analysis is hindered by several challenges, such as the limited information retained after grouping, the presence of non-sampled areas, and the low frequency at which the survey is conducted. To address these challenges, we propose a novel grouped-data-based spatio-temporal finite mixture model for the income distributions of multiple spatial units at multiple time points. A unique feature of the proposed method is that all areas share common latent distributions, while the mixing proportions, which include spatial and temporal effects, capture potential area-wise heterogeneity. Incorporating these effects allows the quantities of interest to be smoothed over time and space, missing values to be imputed, and future values to be predicted. By applying the proposed method to the HLS data, we obtain complete maps of income and poverty measures at an arbitrary time point, which can facilitate rapid and efficient policymaking with fine granularity.
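For concreteness, a spatio-temporal finite mixture of the kind described above can be written as follows; the notation is purely illustrative (the component densities $g_k$, the logit link, and the additive effect decomposition are assumptions, not necessarily the authors' exact specification):
\[
f_{it}(y) = \sum_{k=1}^{K} \pi_{itk}\, g_k(y),
\qquad
\pi_{itk} = \frac{\exp(\alpha_k + u_{ik} + v_{tk})}{\sum_{l=1}^{K} \exp(\alpha_l + u_{il} + v_{tl})},
\]
where $g_1,\dots,g_K$ are latent income distributions shared by all areas, $u_{ik}$ is a spatially correlated effect for area $i$, and $v_{tk}$ is a temporally correlated effect for time $t$. Income and poverty measures for area $i$ at time $t$ are then functionals of $f_{it}$, so smoothing, imputation, and prediction reduce to inference on the effects entering $\pi_{itk}$.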
The choice to participate in a data-driven service, often made on the basis of the quality of that service, influences the ability of the service to learn and improve. We study the participation and retraining dynamics that arise when both the learners and the sub-populations of users are \emph{risk-reducing}, a property that covers a broad class of updates including gradient descent and multiplicative weights. Suppose, for example, that individuals choose to spend their time amongst social media platforms proportionally to how well each platform works for them. Each platform also gathers data about its active users, which it uses to update parameters with a gradient step. For this example, and for our general class of dynamics, we show that the only asymptotically stable equilibria are segmented, with sub-populations allocated to a single learner. Under mild assumptions, the utilitarian social optimum is a stable equilibrium. In contrast to previous work, which shows that repeated risk minimization can result in representation disparity and high overall loss for a single learner \citep{hashimoto2018fairness,miller2021outside}, we find that repeated myopic updates with multiple learners lead to better outcomes. We illustrate the phenomena via a simulated example initialized from real data.
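A minimal simulation of such dynamics might look like the sketch below; the squared-distance risk, the softmax-style proportional allocation, and the step sizes are illustrative assumptions rather than the paper's exact setup.

import numpy as np

rng = np.random.default_rng(0)
mu = np.array([-1.0, 1.0])      # "ideal" parameters of two user sub-populations (assumed)
theta = rng.normal(size=2)      # current parameters of two learners/platforms
lr, temp = 0.1, 2.0             # learning rate and allocation sharpness (assumed)

for _ in range(200):
    # risk of learner j for group g: squared distance (an illustrative risk-reducing setup)
    risk = (mu[:, None] - theta[None, :]) ** 2
    # each group splits its time across learners in proportion to exp(-risk)
    alloc = np.exp(-temp * risk)
    alloc /= alloc.sum(axis=1, keepdims=True)
    # each learner takes a gradient step on the average risk of its active users
    grad = (2 * alloc * (theta[None, :] - mu[:, None])).sum(axis=0) / alloc.sum(axis=0)
    theta -= lr * grad

print(np.round(theta, 3))        # learners typically segment: one ends up near each mu
print(np.round(alloc, 3))        # each sub-population concentrates on a single learner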
Replication studies are increasingly conducted to assess the credibility of scientific findings. Most of these replication attempts target studies with a superiority design, and there is a lack of methodology regarding the analysis of replication studies with alternative types of designs, such as equivalence. In order to fill this gap, we propose two approaches, the two-trials rule and the sceptical TOST procedure, adapted from methods used in superiority settings. Both methods have the same overall Type-I error rate, but the sceptical TOST procedure allows replication success even for non-significant original or replication studies. This leads to a larger project power and other differences in relevant operating characteristics. Both methods can be used for sample size calculation of the replication study, based on the results from the original one. The two methods are applied to data from the Reproducibility Project: Cancer Biology.
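As background, the classical TOST for equivalence, on which a sceptical TOST procedure would build, can be sketched as follows; the normal approximation, the margin, and the example numbers are illustrative assumptions, and the paper's sceptical TOST and two-trials-rule adaptations are not reproduced here.

from scipy.stats import norm

def tost_p(estimate, se, margin):
    """Two one-sided tests for equivalence, H0: |effect| >= margin.

    Returns the TOST p-value (the maximum of the two one-sided p-values),
    assuming an approximately normal estimate with standard error se.
    """
    p_lower = 1 - norm.cdf((estimate + margin) / se)  # test H0: effect <= -margin
    p_upper = norm.cdf((estimate - margin) / se)      # test H0: effect >= +margin
    return max(p_lower, p_upper)

# hypothetical original and replication estimates on the same effect scale
print(tost_p(0.05, 0.10, 0.30))   # original study
print(tost_p(0.02, 0.12, 0.30))   # replication study

Under a two-trials rule, replication success would require both p-values to fall below the chosen one-sided level, whereas the sceptical variant described above relaxes this strict significance requirement.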
Group sparsity in Machine Learning (ML) encourages simpler, more interpretable models with fewer active parameter groups. This work aims to incorporate structured group sparsity into the shared parameters of a Multi-Task Learning (MTL) framework, in order to develop parsimonious models that can effectively address multiple tasks with fewer parameters while maintaining performance comparable or superior to that of a dense model. Sparsifying the model during training helps decrease its memory footprint, computation requirements, and prediction time during inference. We use channel-wise $\ell_1/\ell_2$ group sparsity in the shared layers of a Convolutional Neural Network (CNN). This approach not only facilitates the elimination of extraneous groups (channels) but also imposes a penalty on the weights, thereby enhancing the learning of all tasks. We compare the outcomes of single-task and multi-task experiments under group sparsity on two publicly available MTL datasets, NYU-v2 and CelebAMask-HQ. We also investigate how changing the degree of sparsification impacts both the performance of the model and the sparsity of groups.
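A channel-wise $\ell_1/\ell_2$ (group-lasso) penalty on a shared convolutional layer can be sketched as follows; PyTorch is assumed, and the layer shapes and regularisation weight are illustrative rather than the paper's configuration.

import torch
import torch.nn as nn

shared_conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=1)

def group_sparsity_penalty(conv: nn.Conv2d) -> torch.Tensor:
    # One group per output channel: l2 norm within each group, l1 sum across groups,
    # which drives entire channels of the shared layer towards zero.
    w = conv.weight                          # shape (out_channels, in_channels, k, k)
    return w.flatten(1).norm(p=2, dim=1).sum()

task_losses = torch.tensor(0.0)              # placeholder for the summed per-task losses
lam = 1e-4                                   # sparsity strength (assumed)
total_loss = task_losses + lam * group_sparsity_penalty(shared_conv)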
We provide the first convergence guarantees for Consistency Models (CMs), a newly emerging type of one-step generative model that can generate samples comparable to those generated by Diffusion Models. Our main result is that, under basic assumptions on the score-matching error, the consistency error, and the smoothness of the data distribution, CMs can efficiently sample from any realistic data distribution in one step with small $W_2$ error. Our results (1) hold under $L^2$-accurate score and consistency assumptions (rather than $L^\infty$-accurate ones); (2) do not require strong assumptions on the data distribution such as a log-Sobolev inequality; (3) scale polynomially in all parameters; and (4) match the state-of-the-art convergence guarantees for score-based generative models (SGMs). We also show that the Multistep Consistency Sampling procedure can further reduce the error compared to one-step sampling, which supports the original claim in "Consistency Models" (Song et al., 2023). Our results further imply a TV error guarantee when Langevin-based modifications are applied to the output distributions.
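For reference, the multistep consistency sampling procedure analysed above can be sketched roughly as follows, following the structure of the sampling algorithm in Song et al. (2023); the consistency_fn, the time grid, and eps are placeholders/assumptions, and the toy function below is only exact for a standard-normal "data" distribution under a variance-exploding noise schedule.

import numpy as np

def multistep_consistency_sampling(consistency_fn, dim, times, T=80.0, eps=0.002, rng=None):
    """Rough sketch: one consistency step from pure noise, then repeated
    noise-injection / denoising at a decreasing sequence of times."""
    rng = rng or np.random.default_rng()
    x = consistency_fn(T * rng.standard_normal(dim), T)      # one-step sample
    for t in times:                                           # times: decreasing, all > eps
        z = rng.standard_normal(dim)
        x_t = x + np.sqrt(t**2 - eps**2) * z                  # re-noise to level t
        x = consistency_fn(x_t, t)                            # map back toward the data
    return x

# toy consistency function for N(0, I) data under a variance-exploding schedule
toy_fn = lambda x, t: x / np.sqrt(1.0 + t**2)
sample = multistep_consistency_sampling(toy_fn, dim=2, times=[20.0, 5.0, 1.0])
print(sample)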
We combine Kronecker products and quantitative information flow to give a novel formal analysis for the fine-grained verification of utility in complex privacy pipelines. The combination explains a surprising anomaly in the behaviour of the utility of privacy-preserving pipelines: that sometimes a reduction in privacy also results in a decrease in utility. We use the standard measure of utility for Bayesian analysis, introduced by Ghosh et al., to produce tractable and rigorous proofs of the fine-grained statistical behaviour leading to the anomaly. More generally, we offer the prospect of formal-analysis tools for utility that complement extant formal analyses of privacy. We demonstrate our results on a number of common privacy-preserving designs.
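A small numerical illustration of the two ingredients, with information-flow channels represented as row-stochastic matrices, composed side by side via a Kronecker product, and leakage read off as Bayes vulnerability, might look as follows. The channels and prior are invented for illustration and this is not the paper's construction, only a generic sketch of the standard quantitative-information-flow quantities involved.

import numpy as np

# Two channels: rows are secrets, columns are observations,
# each row is a conditional distribution p(obs | secret).
C1 = np.array([[0.75, 0.25],
               [0.25, 0.75]])
C2 = np.array([[0.9, 0.1],
               [0.4, 0.6]])

# Kronecker product = running both channels in parallel on a pair of secrets
C = np.kron(C1, C2)               # rows: joint secrets, columns: joint observations

pi = np.full(C.shape[0], 1.0 / C.shape[0])        # uniform prior over joint secrets

prior_vuln = pi.max()                              # Bayes vulnerability before observing
post_vuln = (pi[:, None] * C).max(axis=0).sum()    # posterior Bayes vulnerability
print(prior_vuln, post_vuln, post_vuln / prior_vuln)   # multiplicative Bayes leakage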
This paper investigates the temporal patterns of activity in the cryptocurrency market, with a focus on Bitcoin, Ethereum, Dogecoin, and WINkLink from January 2020 to December 2022. Market activity measures (logarithmic returns, volume, and transaction number), sampled every 10 seconds, were divided into intraday and intraweek periods and then further decomposed into recurring and noise components via the correlation matrix formalism. The key findings include market behavior distinct from that of traditional stock markets, owing to the absence of trading-session openings and closings. This was manifested in three enhanced-activity phases aligned with the Asian, European, and U.S. trading sessions. An intriguing pattern of activity surges at 15-minute intervals, particularly at full hours, was also noticed, implying a potential role of algorithmic trading. Most notably, recurring bursts of activity in bitcoin and ether were identified to coincide with the release times of significant U.S. macroeconomic reports such as Nonfarm Payrolls, Consumer Price Index data, and Federal Reserve statements. The most correlated daily patterns of activity occurred in 2022, possibly reflecting the documented correlations with U.S. stock indices over the same period. Factors external to the inner market dynamics are found to be responsible for the repeatable components of the market dynamics, while the internal factors appear to be substantially random; this manifests itself in the good agreement between the bulk of the empirical eigenvalue distributions and the random matrix theory prediction expressed by the Marchenko-Pastur distribution. The findings reported support the growing integration of cryptocurrencies into the global financial markets.
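The comparison of empirical correlation-matrix eigenvalues with the Marchenko-Pastur bulk can be sketched as follows; synthetic Gaussian data stand in for the sampled activity series, and the dimensions are illustrative.

import numpy as np

T, N = 2000, 100                       # observations x variables (illustrative sizes)
rng = np.random.default_rng(1)
X = rng.standard_normal((T, N))        # stand-in for standardized activity series

C = np.corrcoef(X, rowvar=False)       # N x N correlation matrix
eigvals = np.linalg.eigvalsh(C)

q = N / T
lam_minus, lam_plus = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

def marchenko_pastur_pdf(lam):
    """Marchenko-Pastur density for a correlation matrix with ratio q = N/T."""
    lam = np.asarray(lam, dtype=float)
    pdf = np.zeros_like(lam)
    inside = (lam > lam_minus) & (lam < lam_plus)
    pdf[inside] = np.sqrt((lam_plus - lam[inside]) * (lam[inside] - lam_minus)) \
        / (2 * np.pi * q * lam[inside])
    return pdf

# Eigenvalues above lam_plus signal recurring (non-random) structure;
# the bulk should follow the Marchenko-Pastur density.
print(lam_minus, lam_plus, eigvals.max())
print(marchenko_pastur_pdf(np.linspace(lam_minus, lam_plus, 7)[1:-1]))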
We consider a variation of Cops and Robber, introduced in [D. Cox and A. Sanaei, The damage number of a graph, Aust. J. of Comb. 75(1) (2019) 1-16], where vertices visited by a robber are considered damaged and a single cop aims to minimize the number of distinct vertices damaged by a robber. Motivated by the interesting relationships that often emerge between input graphs and their Cartesian product, we study the damage number of the Cartesian product of graphs. We provide a general upper bound and consider the damage number of the product of two trees or cycles. We also consider graphs with small damage number.
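As a reminder of the construction involved, the Cartesian product of two graphs can be generated directly; the sketch below (using networkx, with a small path and cycle as an assumed example) only illustrates the product, not the damage number itself.

import networkx as nx

G = nx.path_graph(3)                     # P3: 3 vertices, 2 edges
H = nx.cycle_graph(4)                    # C4: 4 vertices, 4 edges
P = nx.cartesian_product(G, H)           # vertices of P are pairs (g, h)

# (g1, h1) ~ (g2, h2) iff g1 = g2 and h1 ~ h2, or h1 = h2 and g1 ~ g2
print(P.number_of_nodes(), P.number_of_edges())   # 12 nodes, 3*4 + 4*2 = 20 edges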
Quantization summarizes continuous distributions by calculating a discrete approximation. Among the widely adopted methods for data quantization is Lloyd's algorithm, which partitions the space into Voronoi cells, which can be seen as clusters, and constructs a discrete distribution based on their centroids and probability masses. Lloyd's algorithm estimates the optimal centroids in a minimal-expected-distance sense, but this approach poses significant challenges in scenarios where data evaluation is costly and relates to rare events: the single cluster associated with the absence of an event then takes the majority of the probability mass. In this context, a metamodel is required, and adapted sampling methods are necessary to increase the precision of the computations on the rare clusters.
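A plain version of the Lloyd iteration described here, before any metamodel or adapted sampling enters the picture, can be sketched as follows; the data and dimensions are illustrative.

import numpy as np

def lloyd_quantization(X, k, n_iter=50, rng=None):
    """Lloyd's algorithm: alternate nearest-centroid assignment and centroid update,
    returning the discrete approximation (centroids, probability masses)."""
    rng = rng or np.random.default_rng()
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)                     # Voronoi cell of each point
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    masses = np.bincount(labels, minlength=k) / len(X)
    return centroids, masses

X = np.random.default_rng(0).normal(size=(1000, 2))
centroids, masses = lloyd_quantization(X, k=5)
print(masses)                                         # cell probabilities of the quantizer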
A population-averaged additive subdistribution hazards model is proposed to assess the marginal effects of covariates on the cumulative incidence function and to analyze correlated failure time data subject to competing risks. This approach extends the population-averaged additive hazards model by accommodating potentially dependent censoring due to competing events other than the event of interest. Assuming an independent working correlation structure, an estimating equations approach is outlined to estimate the regression coefficients and a new sandwich variance estimator is proposed. The proposed sandwich variance estimator accounts for both the correlations between failure times and between the censoring times, and is robust to misspecification of the unknown dependency structure within each cluster. We further develop goodness-of-fit tests to assess the adequacy of the additive structure of the subdistribution hazards for the overall model and each covariate. Simulation studies are conducted to investigate the performance of the proposed methods in finite samples. We illustrate our methods using data from the STrategies to Reduce Injuries and Develop confidence in Elders (STRIDE) trial.
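In illustrative notation (not necessarily the authors' exact formulation), the model links covariates additively to the subdistribution hazard of the event of interest:
\[
\lambda_1(t \mid Z_{ij}) = \lambda_0(t) + \beta^{\top} Z_{ij},
\qquad
F_1(t \mid Z_{ij}) = 1 - \exp\!\Big(-\int_0^t \lambda_1(s \mid Z_{ij})\, ds\Big),
\]
where $\lambda_1$ is the subdistribution hazard of the event of interest for subject $j$ in cluster $i$, $\lambda_0$ is an unspecified baseline hazard, and $F_1$ is the cumulative incidence function; $\beta$ is estimated from estimating equations under an independence working correlation, with its variance obtained from the proposed sandwich estimator.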
Knowledge graphs (KGs) of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge graphs are typically incomplete, it is useful to perform knowledge graph completion or link prediction, i.e. predict whether a relationship not in the knowledge graph is likely to be true. This paper serves as a comprehensive survey of embedding models of entities and relationships for knowledge graph completion, summarizing up-to-date experimental results on standard benchmark datasets and pointing out potential future research directions.
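As one canonical example of the embedding models covered by such a survey, a TransE-style scoring function treats a relationship as a translation in embedding space; the sketch below uses randomly initialized embeddings and is not tied to any specific benchmark or to a particular model from the survey.

import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 50, 1000, 20

entity_emb = rng.normal(size=(n_entities, dim))
relation_emb = rng.normal(size=(n_relations, dim))

def transe_score(h, r, t):
    # Higher (less negative) score means the triple (h, r, t) is more plausible:
    # a true fact should approximately satisfy head + relation = tail.
    return -np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

# Link prediction: rank all candidate tails for a query (head, relation, ?)
scores = [transe_score(0, 3, t) for t in range(n_entities)]
print(int(np.argmax(scores)))    # best-scoring candidate tail entity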