In this paper, we propose a novel model to analyze serially correlated two-dimensional functional data observed sparsely and irregularly on a domain that may not be rectangular. Our approach employs a mixed effects model that specifies the principal component functions as bivariate splines on triangulations and the principal component scores as random effects that follow an auto-regressive model. We apply the thin-plate penalty to regularize the bivariate function estimation and develop an effective EM algorithm, combined with a Kalman filter and smoother, for computing the penalized likelihood estimates of the parameters. We apply our approach to simulated datasets and to Texas monthly average temperature data from January 1915 to December 2014.
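To make the model structure concrete, here is a plausible specification consistent with the abstract (our notation, not necessarily the paper's exact formulation). The surface observed at time $t$ is expanded over principal component functions with AR(1) scores:
\[
Y_t(s) = \mu(s) + \sum_{k=1}^{K} \xi_{tk}\,\psi_k(s) + \varepsilon_t(s), \qquad \xi_{tk} = \rho_k\,\xi_{t-1,k} + \eta_{tk}, \qquad s=(x,y) \in \Omega,
\]
where each $\psi_k$ is a bivariate spline on a triangulation of the (possibly non-rectangular) domain $\Omega$, regularized by the thin-plate penalty
\[
\lambda \int_{\Omega} \left( \psi_{xx}^2 + 2\,\psi_{xy}^2 + \psi_{yy}^2 \right) dx\, dy.
\]
The AR structure on the scores $\xi_{tk}$ is what allows a Kalman filter and smoother to supply the conditional moments needed in the E-step of the EM algorithm.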
In this paper, we provide a theoretical analysis of a preconditioned steepest descent (PSD) iterative solver that reduces the computational cost of a finite difference numerical scheme for the Cahn-Hilliard equation with a Flory-Huggins energy potential. In the numerical design, a convex splitting approach is applied to the chemical potential, so that the logarithmic and surface diffusion terms are treated implicitly while the expansive concave term is treated with an explicit update. The nonlinear and singular nature of the logarithmic energy potential makes the numerical implementation very challenging. However, the positivity-preserving property for the logarithmic arguments, unconditional energy stability, and optimal rate error estimates have been established in a recent work, and it has been shown that successful solvers must preserve a similar positivity property at each iteration stage. Accordingly, in this work, we show that the PSD solver preserves positivity at each iteration stage. The PSD solver consists of first computing a search direction (which involves solving a Poisson-like equation) and then taking a one-parameter optimization step along that direction, for which a Newton iteration proves very effective. We analyze the PSD iteration and prove a geometric convergence rate. In particular, the strict separation property of the numerical solution, i.e., a uniform distance between the numerical solution and the singular limit values of $\pm 1$ for the phase variable, plays an essential role in the convergence analysis. A few numerical results are presented to demonstrate the robustness and efficiency of the PSD solver.
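For orientation, one standard convex splitting scheme of this type (written generically here; the paper's discretization may differ in detail) treats the convex logarithmic and surface diffusion parts implicitly and the concave expansive part explicitly:
\[
\frac{\phi^{n+1}-\phi^{n}}{\Delta t} = \Delta \mu^{n+1}, \qquad
\mu^{n+1} = \ln(1+\phi^{n+1}) - \ln(1-\phi^{n+1}) - \theta_0\,\phi^{n} - \varepsilon^2 \Delta \phi^{n+1}.
\]
Because the implicit logarithmic terms blow up as $\phi^{n+1} \to \pm 1$, any convergent iterate is forced to stay strictly inside $(-1,1)$; this is the positivity-preserving property that the PSD solver must maintain at every iteration stage.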
In this paper, we propose localized versions of Weisfeiler-Leman (WL) algorithms in an effort to both increase expressivity and decrease computational overhead. We focus on the specific problem of subgraph counting and give localized versions of $k$-WL for any $k$. We analyze the power of Local $k$-WL and prove that it is more expressive than $k$-WL and at most as expressive as $(k+1)$-WL. We give a characterization of patterns whose counts, as a subgraph and as an induced subgraph, are invariant if two graphs are Local $k$-WL equivalent. We also introduce two variants of $k$-WL: Layer $k$-WL and Recursive $k$-WL. These methods are more time- and space-efficient than applying $k$-WL to the whole graph. We also propose a fragmentation technique that guarantees the exact count of all induced subgraphs of size at most 4 using just $1$-WL; the same idea can be extended to larger patterns using $k>1$. Finally, we compare the expressive power of Local $k$-WL with other GNN hierarchies and show that, given a bound on the time complexity, our methods are more expressive than the ones mentioned in Papp and Wattenhofer [2022a].
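For readers unfamiliar with the base routine, here is a minimal sketch of standard (global) $1$-WL color refinement, the primitive that the Local, Layer, and Recursive variants restrict to neighborhoods; the code is illustrative, not the paper's implementation.

```python
from collections import Counter

def wl_refine(adj, rounds=3):
    """Standard 1-WL color refinement on a graph given as an adjacency dict."""
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(rounds):
        # New color = (own color, multiset of neighbor colors), re-indexed.
        signatures = {
            v: (colors[v], tuple(sorted(Counter(colors[u] for u in adj[v]).items())))
            for v in adj
        }
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return colors

# Two graphs are 1-WL distinguishable if their final color histograms differ.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}  # triangle with a pendant vertex
print(sorted(Counter(wl_refine(g).values()).items()))
```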
In this paper, we propose a deep generative time series approach using latent temporal processes for modeling and holistically analyzing complex disease trajectories. We aim to find meaningful temporal latent representations of an underlying generative process that explain the observed disease trajectories in an interpretable and comprehensive way. To enhance the interpretability of these latent temporal processes, we develop a semi-supervised approach for disentangling the latent space using established medical concepts. By combining the generative approach with medical knowledge, we gain the ability to discover novel aspects of the disease while integrating medical concepts into the model. We show that the learned temporal latent processes can be utilized for further data analysis and clinical hypothesis testing, including finding similar patients and clustering the disease into new sub-types. Moreover, our method enables personalized online monitoring and prediction of multivariate time series, including uncertainty quantification. We demonstrate the effectiveness of our approach in modeling systemic sclerosis, showcasing the potential of our machine learning model to capture complex disease trajectories and acquire new medical knowledge.
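A generic sketch of the generative backbone of such a model (the architecture and sizes below are illustrative assumptions, not the paper's exact design): a learned latent temporal process evolves $z_t$, and each observation is decoded from the current latent state.

```python
import torch
import torch.nn as nn

class LatentTemporalGenerator(nn.Module):
    """Toy deep generative time-series model: latent dynamics + decoder."""
    def __init__(self, latent_dim=8, obs_dim=16):
        super().__init__()
        self.transition = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.Tanh())
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, obs_dim))

    def forward(self, z0, steps):
        z, xs = z0, []
        for _ in range(steps):
            z = self.transition(z)       # latent temporal process
            xs.append(self.decoder(z))   # decode the observation at this step
        return torch.stack(xs, dim=1)    # (batch, steps, obs_dim)

model = LatentTemporalGenerator()
print(model(torch.randn(4, 8), steps=10).shape)  # torch.Size([4, 10, 16])
```

In the semi-supervised variant described above, some latent dimensions would additionally be tied to established medical concepts via supervision.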
In this paper, we introduce a novel explicit family of subcodes of Reed-Solomon (RS) codes that efficiently achieve list decoding capacity with a constant output list size. Our approach builds upon the idea of large linear subcodes of RS codes evaluated on a subfield, similar to the method employed by Guruswami and Xing (STOC 2013). However, our approach diverges by leveraging the idea of {\it permuted product codes}, thereby simplifying the construction by avoiding the need for {\it subspace designs}. Specifically, the codes are constructed by first forming the tensor product of two RS codes with carefully selected evaluation sets and then applying specific cyclic shifts to the codeword rows. This process results in each codeword column being treated as an individual coordinate, reminiscent of prior capacity-achieving codes such as folded RS codes and univariate multiplicity codes. This construction is easily shown to be a subcode of an interleaved RS code, equivalently, an RS code evaluated on a subfield. Alternatively, the codes can be constructed by evaluating bivariate polynomials over orbits generated by \emph{two} affine transformations with coprime orders, extending the earlier use of a single affine transformation in folded RS codes and the recent affine folded RS codes introduced by Bhandari {\it et al.} (IEEE T-IT, Feb.~2024). While our codes require a large, yet constant, characteristic, the two affine transformations facilitate achieving code length equal to the field size, without the restriction that the field be prime, in contrast with univariate multiplicity codes.
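To sketch the first construction (our rendering of the abstract's description; the indexing and exact shift pattern are illustrative): the tensor product of $\mathrm{RS}[\{\alpha_1,\dots,\alpha_n\}, k_1]$ and $\mathrm{RS}[\{\beta_1,\dots,\beta_m\}, k_2]$ consists of the evaluation matrices
\[
C_f = \big( f(\alpha_i, \beta_j) \big)_{1 \le i \le n,\ 1 \le j \le m}, \qquad \deg_X f < k_1,\quad \deg_Y f < k_2.
\]
A permuted product code then applies a prescribed cyclic shift to each row of $C_f$ and treats every resulting column as a single coordinate over the alphabet $\mathbb{F}_q^{\,n}$, in the same spirit as the folding used in folded RS codes.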
In this paper, we study zeroth-order algorithms for nonconvex-concave minimax problems, which have attracted wide attention in machine learning, signal processing, and many other fields in recent years. We propose a zeroth-order alternating randomized gradient projection (ZO-AGP) algorithm for smooth nonconvex-concave minimax problems; its iteration complexity to obtain an $\varepsilon$-stationary point is bounded by $\mathcal{O}(\varepsilon^{-4})$, and the number of function value estimations per iteration is bounded by $\mathcal{O}(d_{x}+d_{y})$. Moreover, we propose a zeroth-order block alternating randomized proximal gradient algorithm (ZO-BAPG) for solving block-wise nonsmooth nonconvex-concave minimax optimization problems; its iteration complexity to obtain an $\varepsilon$-stationary point is bounded by $\mathcal{O}(\varepsilon^{-4})$, and the number of function value estimations per iteration is bounded by $\mathcal{O}(K d_{x}+d_{y})$. To the best of our knowledge, this is the first time that zeroth-order algorithms with iteration complexity guarantees have been developed for solving both general smooth and block-wise nonsmooth nonconvex-concave minimax problems. Numerical results on the data poisoning attack problem and the distributed nonconvex sparse principal component analysis problem validate the efficiency of the proposed algorithms.
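The workhorse of such methods is the two-point random gradient estimator, sketched below on a toy minimization problem (a generic building block; ZO-AGP and ZO-BAPG add alternating projection/proximal steps on top of it).

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_grad(f, x, mu=1e-4):
    """Two-point zeroth-order gradient estimate: probe f along a random
    Gaussian direction u and form a finite-difference approximation."""
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) * u / mu

# Example: drive f(x) = ||x||^2 toward zero using only function values.
f = lambda x: np.sum(x ** 2)
x = np.ones(5)
for _ in range(500):
    x -= 0.05 * zo_grad(f, x)
print(f(x))  # close to 0
```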
In this paper, we extend our prior work DKIC and propose a perceptual-oriented learned image compression method, PO-DKIC. Specifically, DKIC adopts a dynamic kernel-based dynamic residual block group to enhance the transform coding and an asymmetric space-channel context entropy model to facilitate the estimation of Gaussian parameters. Building on DKIC, PO-DKIC introduces PatchGAN and LPIPS losses to enhance visual quality. Furthermore, to maximize the overall perceptual quality under a rate constraint, we formulate this challenge as a constrained programming problem and solve it with linear integer programming. Experiments demonstrate that our proposed method generates realistic images with richer textures and finer details than state-of-the-art image compression techniques.
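As a toy illustration of the kind of rate-constrained integer program the abstract alludes to (the decision variables, coefficients, and granularity here are invented for the example, not taken from the paper): pick one coding setting per image to maximize total perceptual quality under a total rate budget.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# 2 images x 3 candidate settings; made-up quality scores and rates.
quality = np.array([0.6, 0.8, 0.9, 0.5, 0.7, 0.95])
rate = np.array([0.2, 0.5, 0.9, 0.1, 0.4, 1.0])

pick_one = LinearConstraint(np.kron(np.eye(2), np.ones(3)), 1, 1)  # one setting per image
rate_cap = LinearConstraint(rate, 0, 1.0)                          # total rate budget
res = milp(c=-quality, constraints=[pick_one, rate_cap],
           integrality=np.ones(6), bounds=Bounds(0, 1))            # binary variables
print(res.x.reshape(2, 3), -res.fun)  # chosen settings and best total quality
```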
In this paper, we propose an online-matching-based model to tackle two fundamental issues, matching and pricing, that arise in a wide range of real-world gig platforms, including ride-hailing (matching riders and drivers), crowdsourcing markets (pairing workers and tasks), and online recommendations (offering items to customers). Our model assumes the arrival distributions of dynamic agents (e.g., riders, workers, and buyers) are accessible in advance and can change over time, a setting we refer to as \emph{Known Heterogeneous Distributions} (KHD). In this paper, we initiate variance analysis for online matching algorithms under KHD. Unlike the popular competitive-ratio (CR) metric, the variance of online algorithms' performance is rarely studied due to inherent technical challenges, though it is closely linked to robustness. We focus on two natural parameterized sampling policies, denoted by $\mathsf{ATT}(\gamma)$ and $\mathsf{SAMP}(\gamma)$, which serve as foundational building blocks in online algorithm design. We offer rigorous competitive ratio (CR) and variance analyses for both policies. Specifically, we show that $\mathsf{ATT}(\gamma)$ with $\gamma \in [0,1/2]$ achieves a CR of $\gamma$ and a variance of $\gamma \cdot (1-\gamma) \cdot B$ on the total number of matches, where $B$ is the total matching capacity. In contrast, $\mathsf{SAMP}(\gamma)$ with $\gamma \in [0,1]$ achieves a CR of $\gamma (1-\gamma)$ and a variance of $\bar{\gamma} (1-\bar{\gamma})\cdot B$ with $\bar{\gamma}=\min(\gamma,1/2)$. All CR and variance analyses are tight and independent of any benchmark. As a byproduct, we prove that $\mathsf{ATT}(\gamma=1/2)$ achieves an optimal CR of $1/2$.
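For intuition on why variances of the form $\gamma(1-\gamma)B$ arise (a heuristic reading, not the paper's argument): if each of the $B$ units of matching capacity were matched independently with probability $\gamma$, the total number of matches $M$ would be binomial,
\[
M \sim \mathrm{Bin}(B,\gamma) \quad\Longrightarrow\quad \mathbb{E}[M]=\gamma B, \qquad \mathrm{Var}(M)=\gamma(1-\gamma)B,
\]
which matches the stated variance for $\mathsf{ATT}(\gamma)$ and, with $\gamma$ replaced by $\bar{\gamma}$, for $\mathsf{SAMP}(\gamma)$.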
In this paper, we tackle two challenges in multimodal learning for visual recognition: 1) missing modalities that occur either during training or testing in real-world situations; and 2) insufficient computational resources to finetune heavy transformer models. To this end, we propose to utilize prompt learning to mitigate both challenges together. Specifically, our modality-missing-aware prompts can be plugged into multimodal transformers to handle general missing-modality cases, while requiring less than 1% learnable parameters compared to training the entire model. We further explore the effect of different prompt configurations and analyze the robustness to missing modalities. Extensive experiments show the effectiveness of our prompt learning framework, which improves performance under various missing-modality cases while alleviating the requirement of heavy model re-training. Code is available.
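A minimal sketch of what modality-missing-aware prompting can look like (sizes and the input-level insertion point are our assumptions; the abstract mentions exploring several prompt configurations): keep one learnable prompt bank per missing pattern and prepend the selected prompts to the frozen transformer's input tokens.

```python
import torch
import torch.nn as nn

class MissingAwarePrompts(nn.Module):
    """One prompt bank per missing pattern, e.g. both-present / image-only /
    text-only; only these prompts are trained, the backbone stays frozen."""
    def __init__(self, num_patterns=3, prompt_len=4, dim=768):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_patterns, prompt_len, dim) * 0.02)

    def forward(self, tokens, pattern_id):
        # tokens: (batch, seq_len, dim); select the prompts for this pattern
        p = self.prompts[pattern_id].expand(tokens.size(0), -1, -1)
        return torch.cat([p, tokens], dim=1)

x = torch.randn(2, 16, 768)
print(MissingAwarePrompts()(x, pattern_id=1).shape)  # torch.Size([2, 20, 768])
```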
In this paper, we propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition. We view the expression information as the combination of the shared information (expression similarities) across different expressions and the unique information (expression-specific variations) for each expression. More specifically, FDRL mainly consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN). In particular, FDN first decomposes the basic features extracted from a backbone network into a set of facial action-aware latent features to model expression similarities. Then, FRN captures the intra-feature and inter-feature relationships for latent features to characterize expression-specific variations, and reconstructs the expression feature. To this end, two modules, an intra-feature relation modeling module and an inter-feature relation modeling module, are developed in FRN. Experimental results on both the in-the-lab databases (including CK+, MMI, and Oulu-CASIA) and the in-the-wild databases (including RAF-DB and SFEW) show that the proposed FDRL method consistently achieves higher recognition accuracy than several state-of-the-art methods. This clearly highlights the benefit of feature decomposition and reconstruction for classifying expressions.
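A schematic of the decompose-then-reconstruct pipeline as we read it from the abstract (the linear decomposition and the single re-weighting layer below are deliberate simplifications of FDN and of FRN's intra-/inter-feature relation modules):

```python
import torch
import torch.nn as nn

class FDRLSketch(nn.Module):
    """Toy decompose-then-reconstruct head on top of a backbone feature."""
    def __init__(self, feat_dim=512, num_latent=8):
        super().__init__()
        # FDN stand-in: split the backbone feature into M latent features.
        self.decompose = nn.ModuleList([nn.Linear(feat_dim, feat_dim)
                                        for _ in range(num_latent)])
        # FRN stand-in: re-weight the latent features before summing them.
        self.weigh = nn.Linear(feat_dim, 1)

    def forward(self, x):
        latents = torch.stack([d(x) for d in self.decompose], dim=1)  # (B, M, D)
        w = torch.softmax(self.weigh(latents), dim=1)                 # (B, M, 1)
        return (w * latents).sum(dim=1)   # reconstructed expression feature

net = FDRLSketch()
print(net(torch.randn(2, 512)).shape)  # torch.Size([2, 512])
```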
In this paper, we propose jointly learned attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for the task of image captioning), training such existing network architectures typically requires pre-defined label sequences. For multi-label classification, it would be desirable to have a robust inference process so that prediction errors do not propagate and degrade performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by extending the technique of beam search, our proposed network model can efficiently predict multiple labels.
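A toy version of beam search over label sequences (the scorer below is a dummy stand-in for the attention-LSTM's per-step label distribution; the constraint against repeated labels is our illustrative choice):

```python
import numpy as np

def beam_search_labels(step_logprob, num_steps, beam=3):
    """Keep the `beam` highest-scoring label sequences at each step."""
    beams = [((), 0.0)]
    for _ in range(num_steps):
        candidates = []
        for seq, score in beams:
            for label, lp in enumerate(step_logprob(seq)):
                if label not in seq:  # each label predicted at most once
                    candidates.append((seq + (label,), score + lp))
        beams = sorted(candidates, key=lambda c: -c[1])[:beam]
    return beams

# Dummy per-step scorer over 5 labels, favoring smaller label ids.
probs = np.linspace(0.4, 0.05, 5)
scorer = lambda seq: np.log(probs / probs.sum())
print(beam_search_labels(scorer, num_steps=3))
```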