We introduce a Bayesian framework for mixed-type multivariate regression using shrinkage priors. Our method enables joint analysis of mixed continuous and discrete outcomes and facilitates variable selection when the number of covariates $p$ may be larger than the sample size $n$. Our model can be implemented with a Gibbs sampling algorithm in which all conditional distributions are tractable, leading to a simple one-step estimation procedure. We derive the posterior contraction rate for the one-step estimator when $p$ grows subexponentially with respect to $n$. We further establish that subexponential growth is both a necessary and a sufficient condition for the one-step estimator to achieve posterior consistency. We then introduce a two-step variable selection approach that is suitable for large $p$. We prove that our two-step algorithm possesses the sure screening property. Moreover, our two-step estimator can provably achieve posterior contraction even when $p$ grows exponentially in $n$, thus overcoming a limitation of the one-step estimator. Numerical experiments and analyses of real datasets demonstrate the ability of our joint modeling approach to improve predictive accuracy and identify significant variables in multivariate mixed response models. R code implementing our method is available at https://github.com/raybai07/MtMBSP.
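As a rough illustration of the screen-then-refit idea behind the two-step estimator above, the sketch below ranks covariates by a simple marginal utility and refits on the retained set; the utility, the cutoff `keep`, and the placeholder `fit_fn` are our own choices for illustration, not the authors' algorithm.

\begin{verbatim}
import numpy as np

def two_step_screen_then_fit(X, Y, keep, fit_fn):
    """Generic screen-then-refit sketch (illustrative only, not the paper's method).
    Step 1: rank the p covariates by a crude marginal utility and keep the top `keep`.
    Step 2: run the full fit (e.g., the one-step estimator) on the reduced design."""
    Xc = X - X.mean(axis=0)   # n x p, centered covariates
    Yc = Y - Y.mean(axis=0)   # n x q, centered responses (coded numerically)
    corr = np.abs(Xc.T @ Yc) / (
        np.linalg.norm(Xc, axis=0)[:, None] * np.linalg.norm(Yc, axis=0)[None, :] + 1e-12
    )                          # p x q absolute marginal correlations
    utility = corr.max(axis=1)               # strongest association with any response
    screened = np.argsort(utility)[::-1][:keep]
    return screened, fit_fn(X[:, screened], Y)
\end{verbatim}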
The privacy and security of face data on social media are facing unprecedented challenges, as such data are vulnerable to unauthorized access and identification. A common practice for addressing this problem is to modify the original data so that it cannot be recognized by malicious face recognition (FR) systems. However, the ``adversarial examples'' obtained by existing methods usually suffer from low transferability and poor image quality, which severely limits their application in real-world scenarios. In this paper, we propose a 3D-Aware Adversarial Makeup Generation GAN (3DAM-GAN), which aims to improve the quality and transferability of synthetic makeup for concealing identity information. Specifically, a UV-based generator consisting of a novel Makeup Adjustment Module (MAM) and a Makeup Transfer Module (MTM) is designed to render realistic and robust makeup with the aid of the symmetric characteristics of human faces. Moreover, a makeup attack mechanism with an ensemble training strategy is proposed to boost transferability to black-box FR models. Extensive experimental results on several benchmark datasets demonstrate that 3DAM-GAN can effectively protect faces against various FR models, including both publicly available state-of-the-art models and commercial face verification APIs, such as Face++, Baidu and Aliyun.
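A generic ensemble adversarial identity loss of the kind hinted at above might look like the following sketch; the averaging over white-box FR models is the only ingredient taken from the abstract, while the cosine-similarity objective and all function names are our own assumptions.

\begin{verbatim}
import torch
import torch.nn.functional as F

def ensemble_identity_loss(fr_models, protected_img, source_img):
    """Illustrative ensemble attack loss: push the protected image's identity
    embedding away from the source identity, averaged over several white-box
    face-recognition models to encourage transfer to unseen (black-box) models."""
    sims = []
    for model in fr_models:
        emb_p = F.normalize(model(protected_img), dim=-1)
        emb_s = F.normalize(model(source_img), dim=-1)
        sims.append((emb_p * emb_s).sum(dim=-1).mean())  # cosine similarity per model
    return torch.stack(sims).mean()  # minimized by the generator during training
\end{verbatim}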
The AI community is increasingly focused on merging logic with deep learning to create Neuro-Symbolic (NeSy) paradigms and assist neural approaches with symbolic knowledge. A significant trend in the literature involves integrating axioms and facts in loss functions by grounding logical symbols with neural networks and operators with fuzzy semantics. Logic Tensor Networks (LTN) is one of the main representatives in this category, known for its simplicity, efficiency, and versatility. However, it has been previously shown that not all fuzzy operators perform equally when applied in a differentiable setting. Researchers have proposed several configurations of operators, trading off between effectiveness, numerical stability, and generalization to different formulas. This paper presents a configuration of fuzzy operators for grounding formulas end-to-end in the logarithm space. Our goal is to develop a configuration that is more effective than previous proposals, able to handle any formula, and numerically stable. To achieve this, we propose semantics that are best suited for the logarithm space and introduce novel simplifications and improvements that are crucial for optimization via gradient descent. We use LTN as the framework for our experiments, but the conclusions of our work apply to any similar NeSy framework. Our findings, both formal and empirical, show that the proposed configuration outperforms the state-of-the-art and that each of our modifications is essential in achieving these results.
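To make the log-space idea concrete, the toy sketch below grounds conjunction, disjunction, and universal quantification directly on log-truth values; it illustrates why working in log space is numerically convenient, but it is not necessarily the exact operator configuration proposed in the paper.

\begin{verbatim}
import numpy as np

# Log-truth values l = log(t) with t in (0, 1], hence l <= 0.

def log_and(l_a, l_b):
    # Product t-norm t_a * t_b becomes a simple, stable sum in log space.
    return l_a + l_b

def log_forall(l_instances):
    # Universal quantifier as a geometric mean of the grounded instances
    # (a mean of log-truths), avoiding vanishing products over many instances.
    return np.mean(l_instances)

def log_or(l_a, l_b):
    # Probabilistic sum 1 - (1 - t_a)(1 - t_b), assembled from log(1 - t) terms.
    log_not_a = np.log(-np.expm1(l_a))   # log(1 - t_a)
    log_not_b = np.log(-np.expm1(l_b))
    return np.log(-np.expm1(log_not_a + log_not_b))
\end{verbatim}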
We consider the problem of testing the martingale difference hypothesis for univariate strictly stationary time series by implementing a novel test for conditional mean independence based on the concept of martingale difference divergence. The martingale difference divergence function allows us to measure the degree to which a variable is conditionally mean dependent upon its past values: in particular, it does so by computing the regularized norm of the covariance between the current value of the variable and the characteristic function of its past values. In this paper, we make use of this concept, along with the theoretical framework of generalized spectral density, to construct a Ljung-Box type test for the martingale difference hypothesis. In addition to the results obtained by implementing the test statistic, we derive asymptotic results for the martingale difference divergence in the time series framework.
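For reference, one common form of the martingale difference divergence between $Y_t$ and its lag vector $X_{t,j} = (Y_{t-1}, \dots, Y_{t-j})$ (notation ours) is
$$\mathrm{MDD}(Y_t \mid X_{t,j})^2 = \frac{1}{c_j} \int_{\mathbb{R}^{j}} \frac{\bigl| \operatorname{Cov}\bigl( Y_t, \, e^{i \langle s, X_{t,j} \rangle} \bigr) \bigr|^2}{\| s \|^{\, j+1}} \, ds, \qquad c_j = \frac{\pi^{(j+1)/2}}{\Gamma\bigl( (j+1)/2 \bigr)},$$
which equals zero if and only if $E[Y_t \mid X_{t,j}] = E[Y_t]$ almost surely (under suitable moment conditions).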
This paper proposes an adaptive penalized weighted mean regression approach for outlier detection in high-dimensional data. In comparison to existing approaches based on the mean shift model, the proposed estimators are robust against outliers present in the response variables and/or the covariates. By utilizing the adaptive Huber loss function, the proposed method is effective in high-dimensional linear models characterized by heavy-tailed and heteroscedastic error distributions. The proposed framework enables simultaneous and collaborative estimation of regression parameters and outlier detection. Under regularity conditions, outlier detection consistency and oracle inequalities for the robust estimates in high-dimensional settings are established. Additionally, theoretical robustness properties, such as the breakdown point and a smoothed limiting influence function, are ascertained. Extensive simulation studies and a breast cancer survival dataset are used to evaluate the numerical performance of the proposed method, demonstrating comparable or superior variable selection and outlier detection capabilities.
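A generic form of the penalized mean-shift criterion with an adaptive Huber loss (our own schematic notation, not necessarily the exact objective used in the paper) is
$$\min_{\beta, \, \gamma} \; \frac{1}{n} \sum_{i=1}^{n} \ell_{\tau}\bigl( y_i - x_i^{\top} \beta - \gamma_i \bigr) \; + \; \lambda_{\beta} \| \beta \|_1 \; + \; \lambda_{\gamma} \sum_{i=1}^{n} w_i | \gamma_i |,$$
where $\ell_{\tau}$ is the Huber loss with robustification parameter $\tau$, a nonzero mean-shift parameter $\gamma_i$ flags observation $i$ as an outlier, and adaptive weights $w_i$ allow data-driven penalization.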
We study the problem of learning with selectively labeled data, which arises when outcomes are only partially labeled due to historical decision-making. The labeled data distribution may substantially differ from the full population, especially when the historical decisions and the target outcome can be simultaneously affected by some unobserved factors. Consequently, learning with only the labeled data may lead to severely biased results when deployed to the full population. Our paper tackles this challenge by exploiting the fact that in many applications the historical decisions were made by a set of heterogeneous decision-makers. In particular, we analyze this setup in a principled instrumental variable (IV) framework. We establish conditions for the full-population risk of any given prediction rule to be point-identified from the observed data and provide sharp risk bounds when the point identification fails. We further propose a weighted learning approach that learns prediction rules robust to the label selection bias in both identification settings. Finally, we apply our proposed approach to a semi-synthetic financial dataset and demonstrate its superior performance in the presence of selection bias.
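Schematically (in our notation, not the paper's), the difficulty is that the full-population risk of a prediction rule $f$,
$$R(f) = \mathbb{E}\bigl[ \ell\bigl( f(X), Y \bigr) \bigr],$$
generally differs from the risk $\mathbb{E}\bigl[ \ell\bigl( f(X), Y \bigr) \mid D = 1 \bigr]$ computed on the labeled subpopulation $\{ D = 1 \}$ selected by the historical decisions; the identification conditions and the weighted learning approach described above are aimed at bridging this gap.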
Health disparity research often evaluates health outcomes across demographic subgroups. Multilevel regression and poststratification (MRP) is a popular approach for small subgroup estimation due to its ability to stabilize estimates by fitting multilevel models and to adjust for selection bias by poststratifying on auxiliary variables, which are population characteristics predictive of the analytic outcome. However, the granularity and quality of the estimates produced by MRP are limited by the availability of the auxiliary variables' joint distribution; data analysts often only have access to the marginal distributions. To overcome this limitation, we embed the estimation of the population cell counts needed for poststratification into the MRP workflow: embedded MRP (EMRP). Under EMRP, we generate synthetic populations of the auxiliary variables before implementing MRP, and all sources of estimation uncertainty are propagated within a fully Bayesian framework. Through simulation studies, we compare different methods and demonstrate EMRP's improvements over alternatives on the bias-variance tradeoff, yielding valid inferences for the subpopulations of interest. As an illustration, we apply EMRP to the Longitudinal Survey of Wellbeing and estimate food insecurity prevalence among vulnerable groups in New York City. We find that all EMRP estimators correct for the bias of classical MRP while maintaining lower standard errors and narrower confidence intervals than both direct imputation with the weighted finite population Bayesian bootstrap (WFPBB) and design-based estimates. The EMRP estimators perform similarly to one another, though we would generally recommend WFPBB-MRP for its consistently high coverage rates.
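For reference, the poststratification step combines multilevel-model cell estimates $\hat{\theta}_j$ with population cell counts $N_j$ as
$$\hat{\theta}^{\mathrm{MRP}} = \frac{\sum_{j} N_j \hat{\theta}_j}{\sum_{j} N_j};$$
under EMRP, the counts $N_j$ for the full cross-classification are not observed directly but are estimated from the synthetic populations, with their uncertainty propagated through the Bayesian workflow.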
We propose to use L\'evy $\alpha$-stable distributions for constructing priors for Bayesian inverse problems. The construction is based on Markov fields with stable-distributed increments. Special cases include the Cauchy and Gaussian distributions, with stability indices $\alpha = 1$ and $\alpha = 2$, respectively. Our aim is to show that these priors provide a rich class of priors for modelling rough features. The main technical issue is that $\alpha$-stable probability density functions do not, in general, have closed-form expressions, which limits their applicability. For practical purposes, the density functions must be approximated through numerical integration or series expansions, but currently available approximation methods are either too time-consuming or do not function within the range of stability and radius arguments needed in Bayesian inversion. To address this issue, we propose a new hybrid approximation method for symmetric univariate and bivariate $\alpha$-stable distributions that is both fast to evaluate and accurate enough for practical purposes. We then use this approximation method in the numerical implementation of $\alpha$-stable random field priors. We demonstrate the applicability of the constructed priors on selected Bayesian inverse problems, including a deconvolution problem and the inversion of a function governed by an elliptic partial differential equation. We also demonstrate hierarchical $\alpha$-stable priors in the one-dimensional deconvolution problem. In all numerical examples we employ maximum a posteriori estimation, computed with the limited-memory BFGS method and its bound-constrained variant.
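As a toy illustration of a hybrid density evaluator (not the approximation method proposed above; the grid size, cutoff, and spline choice are arbitrary assumptions), one can splice an interpolant of the density near the mode with the leading tail asymptotics of a standard symmetric $\alpha$-stable law:

\begin{verbatim}
import numpy as np
from scipy.stats import levy_stable
from scipy.special import gamma
from scipy.interpolate import CubicSpline

def make_hybrid_pdf(alpha, x_max=30.0, n_grid=2001):
    """Toy hybrid evaluator for a standard symmetric alpha-stable density:
    cubic-spline interpolation on a central grid, leading-order tail expansion
    beyond it. Illustrative only; not the hybrid method proposed in the paper."""
    xs = np.linspace(-x_max, x_max, n_grid)
    central = CubicSpline(xs, levy_stable.pdf(xs, alpha, 0.0))
    # Leading tail term: f(x) ~ Gamma(alpha+1) * sin(pi*alpha/2) / (pi * |x|^(alpha+1))
    tail_const = gamma(alpha + 1.0) * np.sin(np.pi * alpha / 2.0) / np.pi

    def pdf(x):
        x = np.asarray(x, dtype=float)
        tail = tail_const * np.maximum(np.abs(x), x_max) ** (-(alpha + 1.0))
        return np.where(np.abs(x) <= x_max, central(np.clip(x, -x_max, x_max)), tail)

    return pdf
\end{verbatim}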
Generative Adversarial Networks (GANs) are powerful models able to synthesize data samples closely resembling the distribution of real data, yet the diversity of those generated samples is limited due to the so-called mode collapse phenomenon observed in GANs. Conditional GANs are especially prone to mode collapse, as they tend to ignore the input noise vector and focus on the conditional information. Recent methods proposed to mitigate this limitation increase the diversity of generated samples, yet they reduce the performance of the models when similarity of samples is required. To address this shortcoming, we propose a novel method to selectively increase the diversity of GAN-generated samples. By adding a simple yet effective regularization to the training loss function, we encourage the generator to discover new data modes for inputs related to diverse outputs while generating consistent samples for the remaining ones. More precisely, we maximise the ratio of the distance between generated images to the distance between their input latent vectors, scaling the effect according to the diversity of samples for a given conditional input. We show the superiority of our method in a synthetic benchmark as well as a real-life scenario of simulating data from the Zero Degree Calorimeter of the ALICE experiment at the LHC, CERN.
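The sketch below shows one way such a selective diversity regularizer could be written; the generator call signature, the per-condition weight `gamma`, and the distance choices are our own assumptions, not the paper's exact formulation.

\begin{verbatim}
import torch

def selective_diversity_loss(generator, cond, z1, z2, gamma, eps=1e-8):
    """Illustrative mode-seeking style regularizer: make the ratio
    ||G(c, z1) - G(c, z2)|| / ||z1 - z2|| large, scaled by a per-condition
    diversity weight `gamma` (assumed to be precomputed from the training data)."""
    x1, x2 = generator(cond, z1), generator(cond, z2)
    img_dist = (x1 - x2).flatten(1).norm(dim=1)
    z_dist = (z1 - z2).flatten(1).norm(dim=1) + eps
    # Returned as a loss: minimizing it maximizes diversity where gamma is large.
    return -(gamma * img_dist / z_dist).mean()
\end{verbatim}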
Many multivariate data sets exhibit a form of positive dependence, which can either appear globally between all variables or only locally within particular subgroups. A popular notion of positive dependence that allows for localized positivity is positive association. In this work we introduce the notion of extremal positive association for multivariate extremes from threshold exceedances. Via a sufficient condition for extremal association, we show that extremal association generalizes extremal tree models. For H\"usler--Reiss distributions the sufficient condition permits a parametric description that we call the metric property. As the parameter of a H\"usler--Reiss distribution is a Euclidean distance matrix, the metric property relates to research in electrical network theory and Euclidean geometry. We show that the metric property can be localized with respect to a graph and study surrogate likelihood inference. This gives rise to a two-step estimation procedure for locally metrical H\"usler--Reiss graphical models. The second step allows for a simple dual problem, which is implemented via a gradient descent algorithm. Finally, we demonstrate our results on simulated and real data.
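As background for the Euclidean distance matrix statement above (stated here in our own notation): a H\"usler--Reiss distribution is parameterized by a variogram matrix $\Gamma$ with entries $\Gamma_{ij} = \operatorname{Var}(W_i - W_j)$ for an underlying Gaussian vector $W$; such a matrix is conditionally negative definite and can therefore be written as
$$\Gamma_{ij} = \| u_i - u_j \|^2$$
for some points $u_1, \dots, u_d$ in a Euclidean space, which is the sense in which $\Gamma$ is a Euclidean distance matrix.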
Diffusion models have achieved state-of-the-art performance in generating many different kinds of data, including images, text, and videos. Despite their success, there has been limited research on how the underlying diffusion process and the final convergent prior affect generative performance, and this research has been limited to continuous data types and a score-based diffusion framework. To fill this gap, we explore how different discrete diffusion kernels (which converge to different prior distributions) affect the performance of diffusion models for graphs. To this end, we develop a novel formulation of a family of discrete diffusion kernels that are easily adjustable to converge to different Bernoulli priors, and we study the effect of these kernels on generative performance. We show that the quality of generated graphs is sensitive to the prior used and that the optimal choice cannot be explained by obvious statistics or metrics, challenging the intuitions suggested by previous works.
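As a toy illustration (our own parameterization, not necessarily the family developed in the paper), a binary edge-state diffusion kernel whose repeated application converges to a Bernoulli($p$) prior can be written as follows.

\begin{verbatim}
import numpy as np

def bernoulli_kernel(beta_t, p):
    """One-step 2x2 transition matrix over a binary (no-edge / edge) state:
    keep the state with probability 1 - beta_t, otherwise resample it from
    Bernoulli(p). Iterating the kernel drives the marginal towards Bernoulli(p)."""
    resample = np.array([[1.0 - p, p],
                         [1.0 - p, p]])
    return (1.0 - beta_t) * np.eye(2) + beta_t * resample

def cumulative_kernel(betas, p):
    """Cumulative transition matrix after len(betas) noising steps."""
    Q_bar = np.eye(2)
    for beta_t in betas:
        Q_bar = Q_bar @ bernoulli_kernel(beta_t, p)
    return Q_bar  # rows: initial state 0/1, columns: probability of 0/1 at step t
\end{verbatim}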