The majority of internet traffic is video content, which drives the demand for video compression in order to deliver high-quality video at low target bitrates. Optimising the parameters of a video codec for a specific clip (per-clip optimisation) has been shown to yield significant bitrate savings. In previous work we showed that per-clip optimisation of the Lagrangian multiplier leads to up to 24% BD-Rate improvement. A key component of these algorithms is modelling the R-D characteristic across the appropriate bitrate range. This is computationally heavy, as it usually involves repeated encodes of the high-resolution material at different parameter settings. This work focuses on reducing that computational load by deploying a neural network operating on lower-bandwidth features. Our system achieves a BD-Rate improvement on approximately 90% of a large corpus, with results comparable to previous work based on direct optimisation.
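As context, here is a minimal sketch of the feature-to-multiplier idea; the feature set, network size, and training target are illustrative assumptions, not the paper's exact design. A small network maps low-bandwidth per-clip features (e.g. statistics from a cheap low-resolution proxy encode) to a positive correction factor k for the codec's default Lagrangian multiplier, so no repeated high-resolution encodes are needed at inference time.

```python
import torch
import torch.nn as nn

class LambdaPredictor(nn.Module):
    """Maps low-bandwidth clip features to a multiplicative correction k,
    so that the per-clip multiplier is lambda = k * lambda_default."""
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 1),
        )

    def forward(self, x):
        # Predict log k and exponentiate so the correction is always positive.
        return torch.exp(self.net(x))

# Hypothetical usage: x holds 8 per-clip features from a low-resolution proxy.
model = LambdaPredictor(n_features=8)
x = torch.randn(4, 8)   # a batch of 4 clips
k = model(x)            # per-clip multiplier corrections, shape (4, 1)
```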
Missing data is a common problem in clinical data collection and complicates the statistical analysis of such data. In this article, we consider the problem within the framework of a semiparametric partially linear model when observations are subject to missingness with complex patterns. When the correct structure of the additive partially linear model is available, we propose a new imputation method called Partial Replacement IMputation Estimation (PRIME), which overcomes the problems caused by incomplete data in the partially linear model. We also use PRIME in conjunction with model averaging (PRIME-MA) to tackle the problem of unknown model structure in the partially linear model. In simulation studies spanning various error distributions, sample sizes, missing-data rates, covariate correlations, and noise levels, PRIME outperforms other methods in almost all cases. When the correct model structure is unknown, PRIME-MA delivers satisfactory predictive performance, only slightly worse than PRIME's. Moreover, a study of influential factors in the Pima Indians Diabetes data shows that our method outperforms the competing models.
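PRIME's specific partial-replacement rule is not spelled out in this abstract, so the following is only a minimal sketch of the underlying estimation target: profile least squares for a partially linear model $y = X\beta + g(t) + \varepsilon$ on complete cases, with an illustrative Gaussian kernel and bandwidth.

```python
import numpy as np

def nw_smooth(t, v, h=0.1):
    # Nadaraya-Watson kernel smoother of v on t (Gaussian kernel).
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ v

def fit_partially_linear(X, t, y, h=0.1):
    # Profile least squares for y = X beta + g(t) + eps:
    # regress kernel-residualised y on kernel-residualised X,
    # then recover g by smoothing the parametric residuals.
    X_res = X - nw_smooth(t, X, h)   # remove E[X | t]
    y_res = y - nw_smooth(t, y, h)   # remove E[y | t]
    beta, *_ = np.linalg.lstsq(X_res, y_res, rcond=None)
    g_hat = nw_smooth(t, y - X @ beta, h)
    return beta, g_hat
```

An imputation method such as PRIME would replace the complete-case restriction with imputed values for the missing entries before a fit of this kind.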
Due to the curse of statistical heterogeneity across clients, adopting a personalized federated learning method has become an essential choice for the successful deployment of federated learning-based services. Among the diverse branches of personalization techniques, model mixture-based personalization is attractive because each client obtains its own personalized model as a result of federated learning. It usually requires both a local model and a federated model, but existing approaches are either limited to partial parameter exchange or require additional local updates, each of which is unhelpful for novel clients and burdensome for clients with limited computational capacity. Motivated by the discovery of connected subspaces containing diverse low-loss solutions between two or more independently trained deep networks, we combine this property with model mixture-based personalized federated learning to improve personalization performance. We propose SuPerFed, a personalized federated learning method that induces an explicit connection between the optima of the local and the federated model in weight space so that the two boost each other. Through extensive experiments on several benchmark datasets, we demonstrate that our method achieves consistent gains in both personalization performance and robustness to problematic scenarios that can arise in realistic services.
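A minimal sketch of the weight-space connection idea (not SuPerFed's exact objective): sample a random point on the segment between the federated and local weights and train on the loss there, so the whole connecting segment stays low-loss. torch.func.functional_call is used so gradients reach both endpoint parameter sets.

```python
import torch
import torch.nn as nn

def interpolate(fed, local, lam):
    # theta(lam) = (1 - lam) * theta_fed + lam * theta_local, per parameter.
    return {k: (1 - lam) * fed[k] + lam * local[k] for k in fed}

def subspace_loss(model, fed_params, local_params, x, y, loss_fn):
    # Evaluate the task loss at a random point on the segment between the
    # federated and local weights; minimising this keeps the whole segment
    # (a connected low-loss subspace) performant.
    lam = torch.rand(()).item()
    mixed = interpolate(fed_params, local_params, lam)
    out = torch.func.functional_call(model, mixed, (x,))
    return loss_fn(out, y)

# Hypothetical usage with copies of a model's parameters as the two endpoints:
model = nn.Linear(10, 2)
fed = {k: v.detach().clone().requires_grad_() for k, v in model.named_parameters()}
loc = {k: v.detach().clone().requires_grad_() for k, v in model.named_parameters()}
loss = subspace_loss(model, fed, loc, torch.randn(4, 10),
                     torch.tensor([0, 1, 0, 1]), nn.CrossEntropyLoss())
loss.backward()   # gradients reach both fed and loc
```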
Correlation clustering is a ubiquitous paradigm in unsupervised machine learning, and addressing unfairness within it is a major challenge. Motivated by this, we study Fair Correlation Clustering, where the data points may belong to different protected groups and the goal is to ensure fair representation of all groups across clusters. Our paper significantly generalizes and improves on the quality guarantees of the previous work of Ahmadi et al. and Ahmadian et al. as follows. - We allow the user to specify an arbitrary upper bound on the representation of each group in a cluster. - Our algorithm allows individuals to have multiple protected features and ensures fairness simultaneously across them all. - We prove guarantees for clustering quality and fairness in this general setting, which moreover improve on the results for the special cases studied in previous work. Our experiments on real-world data demonstrate that the clustering quality of our algorithm, relative to the optimal solution, is much better than our theoretical guarantees suggest.
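To make the fairness notion concrete, here is a small checker (illustrative, not the paper's algorithm) that verifies the user-specified upper bounds: for every cluster and every protected group, the group's share of the cluster must not exceed its cap, with points allowed to carry multiple protected features.

```python
from collections import Counter, defaultdict

def respects_caps(labels, groups, caps):
    # labels[i]: cluster id of point i.
    # groups[i]: the protected groups point i belongs to (possibly several).
    # caps[g]:   user-chosen upper bound on group g's fraction in any cluster.
    cluster_size = Counter(labels)
    group_count = defaultdict(int)
    for lab, gs in zip(labels, groups):
        for g in gs:
            group_count[(lab, g)] += 1
    return all(cnt <= caps[g] * cluster_size[lab]
               for (lab, g), cnt in group_count.items())

# Example: two clusters, two protected features (colour, age band).
labels = [0, 0, 0, 1, 1, 1]
groups = [("red", "young"), ("red", "old"), ("blue", "old"),
          ("blue", "young"), ("blue", "old"), ("red", "old")]
caps = {"red": 0.7, "blue": 0.7, "young": 0.5, "old": 0.7}
print(respects_caps(labels, groups, caps))   # True for this assignment
```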
Deep learning-based techniques achieve state-of-the-art results in a wide range of image reconstruction tasks such as compressed sensing. These methods almost always have hyperparameters, such as the weight coefficients that balance the different terms in the optimized loss function. The typical approach is to train the model for a hyperparameter setting determined with some empirical or theoretical justification; at inference time, the model can then only compute reconstructions corresponding to those pre-determined hyperparameter values. In this work, we present a hypernetwork-based approach, called HyperRecon, to train reconstruction models that are agnostic to hyperparameter settings. At inference time, HyperRecon can efficiently produce diverse reconstructions, each corresponding to different hyperparameter values. In this framework, the user is empowered to select the most useful output(s) based on their own judgement. We demonstrate our method on compressed sensing, super-resolution and denoising tasks, using two large-scale, publicly available MRI datasets. Our code is available at https://github.com/alanqrwang/hyperrecon.
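As a sketch of the hypernetwork idea (the real HyperRecon architecture is larger; the sizes and the single generated layer here are illustrative), an auxiliary MLP maps the loss-weighting hyperparameter to the weights of a reconstruction layer, so one trained model covers a continuum of hyperparameter values:

```python
import torch
import torch.nn as nn

class HyperLayer(nn.Module):
    """A hypernetwork generates the weights of one linear layer as a
    function of the hyperparameter alpha."""
    def __init__(self, d_in=16, d_out=16, hidden=64):
        super().__init__()
        self.d_in, self.d_out = d_in, d_out
        self.hyper = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, d_out * d_in + d_out),
        )

    def forward(self, alpha, x):
        theta = self.hyper(alpha.view(1, 1)).squeeze(0)
        W = theta[: self.d_out * self.d_in].view(self.d_out, self.d_in)
        b = theta[self.d_out * self.d_in:]
        return torch.relu(x @ W.T + b)

# At inference, sweeping alpha yields diverse reconstructions of one input:
layer = HyperLayer()
x = torch.randn(4, 16)
outs = [layer(torch.tensor(a), x) for a in (0.1, 0.5, 0.9)]
```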
The effects of a treatment may differ between patients with different characteristics. Addressing such treatment heterogeneity is crucial to identify which patients benefit from a treatment, but can be complex in the context of multiple correlated binary outcomes. The current paper presents a novel Bayesian method for estimation and inference for heterogeneous treatment effects in a multivariate binary setting. The framework is suitable for prediction of heterogeneous treatment effects and for superiority/inferiority decision-making within subpopulations, while taking advantage of the size of the entire study sample. We introduce a decision-making framework based on Bayesian multivariate logistic regression analysis with a Pólya-Gamma expansion. The obtained regression coefficients are transformed into differences between success probabilities of treatments to allow for treatment comparison in terms of point estimation and superiority and/or inferiority decisions for different (sub)populations. Procedures for a priori sample size estimation under a non-informative prior distribution are included in the framework. A numerical evaluation demonstrated that (a) average and conditional treatment effect parameters can be estimated without bias when the sample is large enough, and (b) decisions based on a priori sample size estimation result in the anticipated error rates. Application to the International Stroke Trial dataset revealed a heterogeneous treatment effect: the model showed conditional treatment effects in opposite directions for patients with different levels of blood pressure, while the average treatment effect in the trial population was close to zero.
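For concreteness, a single Gibbs sweep for the binary (univariate) case under Pólya-Gamma augmentation might look as follows; this is a standard augmented-logistic step, not the paper's full multivariate procedure, and it assumes the third-party polyagamma package for the PG draws.

```python
import numpy as np
from polyagamma import random_polyagamma  # assumed third-party PG sampler

def gibbs_step(X, y, beta, B0_inv, rng):
    # One Polya-Gamma augmented Gibbs sweep for Bayesian logistic regression
    # with prior beta ~ N(0, B0). The multivariate-outcome extension stacks
    # correlated outcomes on top of this scheme.
    psi = X @ beta                                        # linear predictor
    omega = random_polyagamma(z=psi, random_state=rng)    # omega_i ~ PG(1, psi_i)
    kappa = y - 0.5
    V = np.linalg.inv(X.T @ (omega[:, None] * X) + B0_inv)  # posterior cov
    m = V @ (X.T @ kappa)                                   # posterior mean
    return rng.multivariate_normal(m, V)                    # new draw of beta
```

Transforming posterior draws of the coefficients through the inverse-logit then yields the success-probability differences used for the superiority/inferiority decisions.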
In this paper, we provide a novel and simple algorithm, Clairvoyant Multiplicative Weights Updates (CMWU), for regret minimization in general games. CMWU corresponds to the standard MWU algorithm, except that all agents, when updating their mixed strategies, use the payoff profiles based on tomorrow's behavior, i.e., the agents are clairvoyant. CMWU achieves constant regret of $\ln(m)/\eta$ in all normal-form games with $m$ actions and fixed step-size $\eta$. Although CMWU encodes in its definition a fixed-point computation, which in principle could result in dynamics that are neither computationally efficient nor uncoupled, we show that both of these issues can be largely circumvented. Specifically, as long as the step-size $\eta$ is upper bounded by $\frac{1}{(n-1)V}$, where $n$ is the number of agents and $[0,V]$ is the payoff range, the CMWU updates can be computed linearly fast via a contraction map. This implementation results in an uncoupled online learning dynamic that admits an $o(\log T)$-sparse sub-sequence in which each agent experiences at most $O(nV\log m)$ regret. This implies that the CMWU dynamics converge at rate $O(nV \log m \cdot W(T) / T)$ to a Coarse Correlated Equilibrium, where $W(T)$ is the inverse of the function $g(t):=t\cdot 2^t$. The latter improves on the current state-of-the-art convergence rate of uncoupled online learning dynamics.
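A minimal sketch of the fixed-point computation described above (payoff conventions are illustrative): starting from today's profile, repeatedly apply the MWU step with payoffs evaluated at the current candidate for tomorrow's profile; for $\eta < \frac{1}{(n-1)V}$ this map is a contraction, so the iteration converges linearly fast.

```python
import numpy as np

def cmwu_step(strategies, payoff_fns, eta, iters=50):
    # strategies[i]: agent i's current mixed strategy x_t^i (a prob. vector).
    # payoff_fns[i]: maps a full strategy profile to agent i's per-action payoffs.
    # Fixed-point iteration for x_{t+1}: each agent takes an MWU step from x_t
    # using payoffs at the *candidate* next profile ("tomorrow's behavior").
    nxt = [s.copy() for s in strategies]
    for _ in range(iters):
        new = []
        for i, s in enumerate(strategies):
            u = payoff_fns[i](nxt)        # payoffs at the candidate profile
            w = s * np.exp(eta * u)       # multiplicative weights step from x_t
            new.append(w / w.sum())
        nxt = new
    return nxt
```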
We propose a covariance-corrected random batch method for interacting particle systems. By establishing a certain entropic central limit theorem, we provide entropic convergence guarantees for the law of the entire trajectories of all particles of the proposed method to the law of the trajectories of the discrete-time interacting particle system whenever the batch size $B \gg (\alpha n)^{\frac{1}{3}}$ (where $n$ is the number of particles and $\alpha$ is the time discretization parameter). This in turn implies that the outputs of these methods are nearly \emph{statistically indistinguishable} when $B$ is even moderately large. Previous works mainly considered convergence in Wasserstein distance and either required stringent assumptions on the potentials or obtained bounds with an exponential dependence on the time horizon. This work makes minimal assumptions on the interaction potentials and, in particular, establishes that even when the particle trajectories diverge to infinity, they do so in the same way for both methods. Such guarantees are very useful in light of the recent advances in interacting-particle-based algorithms for sampling.
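For orientation, here is the plain (uncorrected) random batch update that the method builds on; the covariance correction itself modifies the fluctuation term and is the paper's contribution, not reproduced here. The kernel convention grad_W(0) = 0 and divisibility of n by B are simplifying assumptions.

```python
import numpy as np

def random_batch_step(x, grad_V, grad_W, alpha, B, rng):
    # One Euler step of the random batch method: shuffle the n particles into
    # batches of size B and let each particle interact only within its batch,
    # cutting the O(n^2) interaction cost to O(nB). Assumes n % B == 0 and
    # that grad_W vanishes at 0, so self-interaction terms drop out.
    n = x.shape[0]
    for batch in rng.permutation(n).reshape(-1, B):
        xb = x[batch]                                # (B, d) positions
        diff = xb[:, None, :] - xb[None, :, :]       # pairwise differences
        inter = grad_W(diff).sum(axis=1) / (B - 1)   # mean within-batch force
        x[batch] = xb - alpha * (grad_V(xb) + inter)
    return x
```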
Population dynamics is the study of temporal and spatial variation in the size of populations of organisms and is a major part of population ecology. One of the main difficulties in analyzing population dynamics is that we can only obtain observation data with coarse time intervals from fixed-point observations due to experimental costs or measurement constraints. Recently, modeling population dynamics with continuous normalizing flows (CNFs) and dynamic optimal transport has been proposed to infer sample trajectories from a fixed-point observed population. While sample behavior in CNFs is deterministic, actual samples in biological systems move in an essentially random yet directional manner. Moreover, when a sample moves from point A to point B in a dynamical system, its trajectory typically obeys the principle of least action, whereby the corresponding action takes the smallest possible value. To satisfy these requirements on the sample trajectories, we formulate the Lagrangian Schrödinger bridge (LSB) problem and propose to solve it approximately with a neural SDE and regularization. We also develop a model architecture that enables faster computation. Experimental results show that the proposed method can efficiently approximate population-level dynamics even for high-dimensional data, and that using the prior knowledge introduced by the Lagrangian enables us to estimate the trajectories of individual samples with stochastic behavior.
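A minimal sketch of the neural-SDE component (the drift architecture, noise level, and the kinetic-energy Lagrangian below are illustrative choices): simulate the SDE with Euler-Maruyama while accumulating the action $\int \|b\|^2 \, dt$, which is then penalized alongside the data-fitting term so that trajectories follow a least-action principle.

```python
import torch
import torch.nn as nn

class DriftNet(nn.Module):
    # Drift field b(x, t) parameterised by a small MLP.
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.Tanh(),
                                 nn.Linear(64, dim))

    def forward(self, x, t):
        return self.net(torch.cat([x, t.expand(len(x), 1)], dim=1))

def simulate(drift, x0, sigma=0.1, steps=20, T=1.0):
    # Euler-Maruyama simulation of dX = b(X, t) dt + sigma dW, accumulating
    # the kinetic-energy action int |b|^2 dt as the Lagrangian regulariser
    # (the simplest Lagrangian; the framework admits more general ones).
    dt = T / steps
    x, action = x0, x0.new_zeros(())
    for k in range(steps):
        t = x0.new_full((1,), k * dt)
        b = drift(x, t)
        action = action + (b ** 2).sum(dim=1).mean() * dt
        x = x + b * dt + sigma * dt ** 0.5 * torch.randn_like(x)
    return x, action  # match x to the observed marginal; penalise action
```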
Much of the literature on optimal design of bandit algorithms is based on minimization of expected regret. It is well known that designs that are optimal over certain exponential families can achieve expected regret that grows logarithmically in the number of arm plays, at a rate governed by the Lai-Robbins lower bound. In this paper, we show that when one uses such optimized designs, the regret distribution of the associated algorithms necessarily has a very heavy tail, specifically that of a truncated Cauchy distribution. Furthermore, for $p>1$, the $p$-th moment of the regret distribution grows much faster than poly-logarithmically, in particular as a power of the total number of arm plays. We show that optimized UCB bandit designs are also fragile in an additional sense: when the problem is even slightly mis-specified, the regret can grow much faster than the conventional theory suggests. Our arguments are based on standard change-of-measure ideas, and indicate that the most likely way for regret to become larger than expected is for the optimal arm to return below-average rewards in the first few arm plays, causing the algorithm to believe that the arm is sub-optimal. To alleviate the fragility issues exposed, we show that UCB algorithms can be modified so as to ensure a desired degree of robustness to mis-specification. In doing so, we also provide a sharp trade-off between the amount of UCB exploration and the tail exponent of the resulting regret distribution.
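To make the exploration/tail trade-off tangible, here is a generic UCB loop with an explicit exploration scale c (an illustrative stand-in for the paper's modification, not its exact algorithm): increasing c strengthens exploration, which by the analysis above buys lighter regret tails and robustness to mis-specification at the price of higher expected regret.

```python
import numpy as np

def ucb_index(means, counts, t, c=2.0):
    # Classic UCB index; c scales the width of the confidence bonus.
    return means + np.sqrt(c * np.log(t) / counts)

def run_ucb(draw_reward, n_arms, horizon, c=2.0, rng=None):
    # draw_reward(arm, rng) -> one stochastic reward (hypothetical interface).
    rng = rng if rng is not None else np.random.default_rng()
    counts = np.zeros(n_arms)
    sums = np.zeros(n_arms)
    for a in range(n_arms):                  # play each arm once to initialise
        sums[a] += draw_reward(a, rng)
        counts[a] = 1
    for t in range(n_arms + 1, horizon + 1):
        a = int(np.argmax(ucb_index(sums / counts, counts, t, c)))
        sums[a] += draw_reward(a, rng)
        counts[a] += 1
    return counts                            # arm-play counts over the horizon
```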
Estimating the conditional quantile of a variable of interest with respect to changes in the covariates is frequent in many economic applications, as it can offer more comprehensive insight. In this paper, we propose a novel semiparametric model averaging approach to predict the conditional quantile even if all models under consideration are potentially misspecified. Specifically, we first build a series of non-nested partially linear sub-models, each with a different nonlinear component. A leave-one-out cross-validation criterion is then applied to choose the model weights. Under some regularity conditions, we prove that the resulting model averaging estimator is asymptotically optimal in terms of minimizing the out-of-sample average quantile prediction error. Our modelling strategy not only effectively avoids the problem of specifying which covariate should enter the model nonlinearly when one fits a partially linear model, but also yields more accurate predictions than traditional model-based procedures thanks to the optimality of the weights selected by the cross-validation criterion. Simulation experiments and an illustrative application show that our proposed model averaging method is superior to other commonly used alternatives.
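A minimal sketch of the weight-selection step (illustrative optimiser; the paper's criterion is the leave-one-out average check loss): given each candidate sub-model's leave-one-out quantile predictions, choose simplex weights that minimise the average pinball loss of the weighted prediction, here via exponentiated gradient descent.

```python
import numpy as np

def pinball(u, tau):
    # Check (pinball) loss at quantile level tau.
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def cv_weights(loo_preds, y, tau, steps=500, lr=0.1):
    # loo_preds: (n, M) leave-one-out predictions of the M partially linear
    # sub-models; returns simplex weights minimising the CV check loss.
    n, M = loo_preds.shape
    w = np.full(M, 1.0 / M)
    for _ in range(steps):
        u = y - loo_preds @ w
        grad = -loo_preds.T @ np.where(u >= 0, tau, tau - 1) / n   # subgradient
        w *= np.exp(-lr * grad)
        w /= w.sum()                       # stay on the probability simplex
    return w
```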