This paper deals with the problem of soft guessing under log-loss distortion (logarithmic loss), recently investigated by [Wu and Joudeh, IEEE ISIT, pp. 466--471, 2023]. We extend this problem to soft guessing that allows errors: at each step, the guesser decides, with some probability, whether to stop guessing, and upon stopping it declares an error. We show that the minimum expected guessing cost under a constraint on the error probability is characterized by the smooth R\'enyi entropy. Furthermore, we carry out an asymptotic analysis for a stationary and memoryless source.
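For reference, one common definition of the smooth R\'enyi entropy (the exact order and smoothing ball used in the paper may differ) is, for $\alpha \in (0,1)$ and smoothing parameter $\varepsilon \ge 0$,
\[
  H_\alpha^{\varepsilon}(P_X)
  \;=\; \frac{1}{1-\alpha}\,
    \log \min_{Q_X \,:\, \frac{1}{2}\|P_X - Q_X\|_1 \le \varepsilon}
    \sum_{x} Q_X(x)^{\alpha},
\]
which reduces to the ordinary R\'enyi entropy of order $\alpha$ at $\varepsilon = 0$.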
In this paper, a novel projection-based, time-segmented reduced-order model (ROM) is proposed for dynamic fluid-structure interaction (FSI) problems, built upon the arbitrary Lagrangian--Eulerian (ALE)-finite element method (FEM) in a monolithic frame. Spatially, each variable is separated from the others according to its attribution (fluid/structure), category (velocity/pressure), and component (horizontal/vertical); temporally, the proper orthogonal decomposition (POD) bases are constructed on deliberately partitioned time segments tailored through extensive numerical trials. By combining the spatial and temporal decompositions, the developed ROM approach enables prolonged simulations under prescribed accuracy thresholds. Numerical experiments compare the performance of the proposed ROM with that of the corresponding full-order model (FOM) on a two-dimensional FSI benchmark problem involving an elastic beam vibrating in a fluid, and the performance of the offline ROM under physical parameters perturbed in the online phase is investigated as well. Extensive numerical results demonstrate that the proposed ROM attains accuracy comparable to, and efficiency much higher than, the FOM. The developed ROM approach is dimension-independent and can be seamlessly extended to high-dimensional FSI problems.
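As a rough illustration of one ingredient described above, a per-segment POD basis can be extracted from FOM snapshots via the SVD; this is a minimal sketch with assumed variable names and snapshot layout, not the authors' implementation:

import numpy as np

def pod_basis(snapshots, energy_tol=1e-4):
    """Return a POD basis capturing all but `energy_tol` of the snapshot energy.

    `snapshots` is an (n_dof, n_snapshots) matrix whose columns are FOM
    solution snapshots of one separated variable (e.g., horizontal fluid
    velocity) collected within a single time segment.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - energy_tol)) + 1
    return U[:, :r]

# Hypothetical usage: one basis per (variable, time segment) pair.
# `segments` maps a segment index to its snapshot matrix for one variable.
segments = {0: np.random.rand(1000, 50), 1: np.random.rand(1000, 50)}
bases = {k: pod_basis(S) for k, S in segments.items()}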
Finetuning on task-specific datasets is a widely embraced paradigm for harnessing the powerful capabilities of pretrained LLMs for downstream tasks. Due to the popularity of LLM finetuning and its accompanying privacy concerns, differentially private (DP) finetuning of pretrained LLMs has garnered increasing attention as a way to safeguard the privacy of task-specific datasets. At the core of DP LLM finetuning methods lies the tradeoff between privacy, utility, and scalability. Most existing methods build upon the seminal work of DP-SGD. Despite pushing the scalability of DP-SGD to its limit, DP-SGD-based finetuning methods remain limited by the inherent inefficiency of SGD. In this paper, we investigate the potential of DP zeroth-order methods for LLM finetuning, which avoid the scalability bottleneck of SGD by approximating the gradient with the more efficient zeroth-order gradient. Rather than treating the zeroth-order method as a drop-in replacement for SGD, this paper presents a comprehensive theoretical and empirical study. First, we propose a stagewise DP zeroth-order method that dynamically schedules key hyperparameters. This design is grounded in the interplay between DP random perturbation and the gradient approximation error of the zeroth-order method, and their effect on the finetuning trajectory. Second, we further enhance scalability by reducing the set of trainable parameters, identified by repurposing a data-free pruning technique that requires no additional data or extra privacy budget. We provide theoretical analysis for both proposed methods and conduct extensive empirical analysis on both an encoder-only masked language model and a decoder-only autoregressive language model, achieving impressive results in terms of scalability and utility.
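A minimal sketch of the basic building block, a two-point zeroth-order gradient estimate combined with clipping and Gaussian noise (the stagewise hyperparameter schedule, per-example clipping, pruning, and privacy accounting of the paper are omitted; all names and constants are placeholders):

import numpy as np

def dp_zeroth_order_step(params, loss_fn, lr=1e-3, mu=1e-3,
                         clip=1.0, noise_mult=1.0, rng=None):
    """One simplified DP zeroth-order update on a flat parameter vector.

    loss_fn(w) returns the batch loss at parameters w. The two-point estimate
    (loss_fn(w + mu*z) - loss_fn(w - mu*z)) / (2*mu) * z approximates the
    gradient along a random direction z; clipping the scalar difference and
    adding Gaussian noise mimics the Gaussian mechanism.
    """
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(params.shape)
    delta = (loss_fn(params + mu * z) - loss_fn(params - mu * z)) / (2 * mu)
    delta = float(np.clip(delta, -clip, clip))      # bound sensitivity
    delta += rng.normal(0.0, noise_mult * clip)     # privatize the scalar
    return params - lr * delta * z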
Robust fine-tuning aims to ensure performance on out-of-distribution (OOD) samples, which can be compromised by pursuing adaptation to in-distribution (ID) samples. However, another criterion for reliable machine learning, confidence calibration, has been overlooked despite its increasing demand in real-world high-stakes applications, e.g., autonomous driving. We raise concerns about the calibration of fine-tuned vision-language models (VLMs) under distribution shift by showing that naive fine-tuning, and even state-of-the-art robust fine-tuning, hurts the calibration of pre-trained VLMs, especially on OOD datasets. We first show that the OOD calibration error is bounded from above by the ID calibration error and the domain discrepancy between ID and OOD. Based on this analysis, we propose CaRot, a calibrated robust fine-tuning method that incentivizes ID calibration and robust prediction across domains so as to reduce the upper bound on the OOD calibration error. Extensive experiments on three types of distribution shift (natural, synthetic, and adversarial) for ImageNet-1K classification demonstrate the effectiveness of CaRot across diverse environments. We further justify the empirical success of CaRot through our theoretical analysis.
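For concreteness, calibration error in such settings is commonly measured by the standard binned expected calibration error (ECE); a minimal sketch of that metric (not necessarily the exact estimator used in the paper):

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard binned ECE: weighted average of |accuracy - confidence| per bin.

    `confidences` is a float array of predicted top-class probabilities and
    `correct` is a boolean array indicating whether each prediction was right.
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece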
Given the significance of speech emotion recognition, numerous methods have been developed in recent years to build effective and efficient systems in this domain. One such method is to fine-tune pretrained transformers for this specific problem, which yields high accuracy. Despite extensive discussion and global-scale efforts to enhance these systems, this innovative and effective approach has received less attention in the context of Persian speech emotion recognition. In this article, we review the field of speech emotion recognition and its background, with an emphasis on the importance of employing transformers in this context. We present two models, one based on spectrograms and the other on the raw audio, fine-tuned on the shEMO dataset. These models substantially improve on the accuracy of previous systems, from approximately 65% to 80% on this dataset. Subsequently, to investigate the effect of multilinguality on the fine-tuning process, the same models are fine-tuned twice: first on the English IEMOCAP dataset and then on the Persian shEMO dataset. This yields an improved accuracy of 82% for the Persian emotion recognition system. Keywords: Persian Speech Emotion Recognition, shEMO, Self-Supervised Learning
The linear Fisher market (LFM) is a basic equilibrium model from economics, which also has applications in fair and efficient resource allocation. First-price pacing equilibrium (FPPE) is a model capturing budget-management mechanisms in first-price auctions. In certain practical settings, such as advertising auctions, there is interest in performing statistical inference over these models. A popular methodology for general statistical inference is the bootstrap procedure. Yet, for LFM and FPPE, there is no existing theory for the valid application of bootstrap procedures. In this paper, we devise several statistically valid bootstrap inference procedures for LFM and FPPE. The most challenging part is bootstrapping general FPPE, which reduces to bootstrapping constrained M-estimators, a largely unexplored problem. We devise a bootstrap procedure for FPPE under mild degeneracy conditions using the powerful tool of epi-convergence theory. Experiments with synthetic and semi-real data verify our theory.
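For intuition, the plain nonparametric bootstrap that underlies such procedures looks as follows; this is a generic sketch (the paper's procedures for FPPE, which involve constrained M-estimation and epi-convergence arguments, require additional care):

import numpy as np

def bootstrap_ci(data, estimator, n_boot=1000, alpha=0.05, rng=None):
    """Percentile bootstrap confidence interval for estimator(data).

    `data` is an (n, d) array of observed units (e.g., sampled buyer
    valuations); `estimator` maps a data array to a scalar quantity of
    interest (e.g., an equilibrium utility or revenue).
    """
    rng = rng or np.random.default_rng(0)
    n = data.shape[0]
    stats = np.array([estimator(data[rng.integers(0, n, size=n)])
                      for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])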
Personalized PageRank (PPR) is a fundamental tool in unsupervised learning of graph representations such as node ranking, labeling, and graph embedding. However, while data privacy is one of the most pressing recent concerns, existing PPR algorithms are not designed to protect user privacy. PPR is highly sensitive to the input graph edges: changing a single edge can cause a large change in the PPR vector, potentially leaking private user data. In this work, we propose an algorithm that outputs an approximate PPR vector and has provably bounded sensitivity to input edges. In addition, we prove that our algorithm achieves accuracy similar to non-private algorithms when the input graph has large degrees. Our sensitivity-bounded PPR directly yields private algorithms for several graph-learning tools, such as differentially private (DP) PPR ranking, DP node classification, and DP node embedding. To complement our theoretical analysis, we also empirically verify the practical performance of our algorithms.
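For reference, the non-private PPR vector for a seed node can be computed by power iteration, which also makes the edge sensitivity visible; a sketch with an assumed adjacency-list input (the paper's private algorithm adds further machinery):

import numpy as np

def ppr_power_iteration(adj, seed, alpha=0.15, n_iter=100):
    """Non-private personalized PageRank by power iteration.

    `adj` maps each node in {0, ..., n-1} to its list of out-neighbors and
    `alpha` is the restart probability to the seed node. Adding or removing
    a single edge changes one row of the random walk and can noticeably
    change the result, which is the sensitivity issue discussed above.
    """
    n = len(adj)
    restart = np.zeros(n)
    restart[seed] = 1.0
    p = restart.copy()
    for _ in range(n_iter):
        nxt = np.zeros(n)
        for u, nbrs in adj.items():
            targets = nbrs if nbrs else list(adj)   # dangling node: spread uniformly
            for v in targets:
                nxt[v] += p[u] / len(targets)
        p = alpha * restart + (1 - alpha) * nxt
    return p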
Evolutionary algorithms (EAs) have been widely and successfully applied to multi-objective optimization problems, owing to their population-based search. Population update, a key component of multi-objective EAs (MOEAs), is usually performed in a greedy, deterministic manner: the next-generation population is formed by selecting the best solutions from the current population and the newly generated solutions (irrespective of the selection criterion used, such as Pareto dominance, crowdedness, or indicators). In this paper, we question this practice. We show analytically that stochastic population update can benefit the search of MOEAs. Specifically, we prove that the expected running time of two well-established MOEAs, SMS-EMOA and NSGA-II, on two bi-objective problems, OneJumpZeroJump and bi-objective RealRoyalRoad, can be decreased exponentially when the deterministic population update mechanism is replaced by a stochastic one. Empirical studies also verify the effectiveness of the proposed population update method. This work challenges a common practice in the design of existing MOEAs, and its positive results, which might hold more generally, should encourage the exploration of stochastic mechanisms in developing new MOEAs.
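To make the contrast concrete, one simple way to replace a greedy survivor selection by a stochastic one is sketched below (the fitness function and sampling scheme are placeholders, not the exact mechanisms of SMS-EMOA or NSGA-II analyzed in the paper):

import random

def deterministic_update(parents, offspring, fitness, mu):
    """Greedy update: keep the mu best of parents + offspring."""
    return sorted(parents + offspring, key=fitness, reverse=True)[:mu]

def stochastic_update(parents, offspring, fitness, mu, rng=random):
    """Stochastic update: sample mu survivors (with replacement) with
    probability proportional to fitness, so that currently inferior
    solutions occasionally survive and may help escape local optima."""
    pool = parents + offspring
    weights = [fitness(x) for x in pool]
    return rng.choices(pool, weights=weights, k=mu)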
This paper addresses load imbalance caused by highly skewed in-degree distributions in graphs by applying the idea of a rhizome to vertex-centric, message-driven graph processing. Rhizome construction creates multiple named vertex addresses for any vertex with a large in-degree. Other vertices can then point to any of these named addresses, thereby sharing the in-degree load. The parts of the rhizome communicate internally and remain consistent, providing a unified and correct view of the vertex. Simulated experimental results show speedups for BFS graph traversal on large chip sizes for the tested input graph datasets, which contain highly skewed in-degree distributions. The improvements come from sharing the in-degree compute workload among memory-processing elements and from lowering contention on the network-on-chip.
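To illustrate the rhizome idea in a simplified form (class and method names are assumed for illustration; this is not the simulated system):

class RhizomeVertex:
    """A high in-degree vertex split across several named proxy addresses.

    In-edges are spread over the proxies so that incoming messages (and the
    associated compute) are shared among memory-processing elements; the
    proxies periodically exchange partial state to present one consistent
    view of the vertex.
    """
    def __init__(self, vid, n_proxies):
        self.vid = vid
        self.proxies = [f"{vid}#proxy{i}" for i in range(n_proxies)]
        self.partial = {p: 0 for p in self.proxies}  # e.g., partial in-degree counts

    def address_for(self, src):
        # Deterministically spread in-edges over the proxy addresses.
        return self.proxies[hash(src) % len(self.proxies)]

    def receive(self, src, value=1):
        self.partial[self.address_for(src)] += value

    def unified_view(self):
        # The proxies communicate/reduce to give one consistent value.
        return sum(self.partial.values())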
The key issue in few-shot learning is learning to generalize. In this paper, we propose a large margin principle to improve the generalization capacity of metric-based methods for few-shot learning. To realize it, we develop a unified framework that learns a more discriminative metric space by augmenting the softmax classification loss with a large margin distance loss during training. Extensive experiments on two state-of-the-art few-shot learning models, graph neural networks and prototypical networks, show that our method improves the performance of existing models substantially with very little computational overhead, demonstrating the effectiveness of the large margin principle and the potential of our method.
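A minimal sketch of the idea, softmax cross-entropy over negative distances augmented with a hinge-style large margin distance term (a simplification; the exact loss used in the paper may differ):

import numpy as np

def large_margin_prototypical_loss(embedding, prototypes, label, margin=1.0, lam=0.5):
    """Cross-entropy over negative distances to class prototypes, plus a
    margin term that pushes the query at least `margin` closer to its own
    prototype than to the nearest other prototype."""
    d = np.linalg.norm(prototypes - embedding, axis=1)   # distance to each class prototype
    logits = -d
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    ce = -log_probs[label]
    nearest_other = np.min(np.delete(d, label))
    margin_loss = max(0.0, d[label] - nearest_other + margin)
    return ce + lam * margin_loss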
Multi-relation question answering is a challenging task, as it requires elaborate analysis of the question and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model, the Interpretable Reasoning Network, that employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of the input question should be analyzed at each hop; predicts a relation corresponding to the currently parsed results; uses the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model offers traceable and observable intermediate predictions for reasoning analysis and failure diagnosis, thereby allowing manual intervention in predicting the final answer.
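Schematically, the hop-by-hop loop described above can be summarized as follows (a sketch with assumed component interfaces, not the released model):

def interpretable_reasoning(question_vec, state, components, n_hops=3):
    """Sketch of the hop-by-hop reasoning loop.

    `components` is assumed to provide:
      attend(question_vec, state)          -> part of the question to analyze now
      predict_relation(q_part, state)      -> relation predicted for the current hop
      update(question_vec, relation, state)-> (new question repr., new state)
      answer(state)                        -> final answer from the terminal state
    Each hop's (q_part, relation) pair is recorded, giving the traceable
    intermediate predictions mentioned above.
    """
    trace = []
    for _ in range(n_hops):
        q_part = components.attend(question_vec, state)
        relation = components.predict_relation(q_part, state)
        question_vec, state = components.update(question_vec, relation, state)
        trace.append((q_part, relation))
    return components.answer(state), trace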