We introduce a framework for solving a class of parabolic partial differential equations on triangle mesh surfaces, including the Hamilton-Jacobi equation and the Fokker-Planck equation. PDEs in this class often have nonlinear or stiff terms that cannot be resolved with standard methods on curved triangle meshes. To address this challenge, we leverage a splitting integrator combined with a convex optimization step to solve these PDEs. Our machinery can be used to compute entropic approximations of optimal transport distances on geometric domains, overcoming the numerical limitations of the state-of-the-art method. In addition, we demonstrate the versatility of our method on a number of linear and nonlinear PDEs that appear in diffusion and front-propagation tasks in geometry processing.
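As a minimal illustration of the splitting idea, the sketch below performs one Strang splitting step for a heat-type equation u_t = Δu + N(u) on a mesh; the mass matrix `M`, the cotangent Laplacian `L`, and the `nonlinear_step` hook (standing in for the convex optimization step mentioned in the abstract) are all assumptions made for illustration, not the paper's exact scheme.

```python
import scipy.sparse.linalg as spla

def strang_splitting_step(u, M, L, dt, nonlinear_step):
    """One Strang splitting step for u_t = Δu + N(u) on a triangle mesh.

    M, L: sparse mass matrix and (negative semi-definite) cotangent Laplacian.
    nonlinear_step: callable advancing the stiff/nonlinear term over a given dt
                    (a hypothetical hook standing in for the convex optimization
                    step; left abstract here).
    """
    # Half-step of the nonlinear/stiff part
    u = nonlinear_step(u, dt / 2)
    # Full implicit (backward Euler) diffusion step: (M - dt*L) u_new = M u
    u = spla.spsolve((M - dt * L).tocsc(), M @ u)
    # Second half-step of the nonlinear/stiff part
    u = nonlinear_step(u, dt / 2)
    return u
```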
Model specification searches and modifications are commonly employed in covariance structure analysis (CSA) or structural equation modeling (SEM) to improve the goodness-of-fit. However, these practices can be susceptible to capitalizing on chance, as a model that fits one sample may not generalize to another sample from the same population. This paper introduces the improved Lagrange Multipliers (LM) test, which provides a reliable method for conducting a thorough model specification search and effectively identifying missing parameters. By leveraging the stepwise bootstrap method in the standard LM and Wald tests, our data-driven approach enhances the accuracy of parameter identification. The results from Monte Carlo simulations and two empirical applications in political science demonstrate the effectiveness of the improved LM test, particularly when dealing with small sample sizes and models with large degrees of freedom. This approach contributes to better statistical fit and addresses the issue of capitalization on chance in model specification.
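As a rough illustration of the stepwise bootstrap idea, the sketch below resamples the data with replacement and records how often each candidate parameter attains the largest LM statistic; the `lm_statistics` hook is a hypothetical stand-in for the SEM software's LM test, and the selection rule is an assumption for illustration, not the paper's exact procedure.

```python
import numpy as np

def bootstrap_lm_search(data, lm_statistics, n_boot=500, seed=0):
    """Bootstrap sketch of an LM-based specification search.

    data: array of observations (rows resampled with replacement).
    lm_statistics: callable returning {parameter_name: LM statistic} for a
                   model fitted to one (re)sample -- a hypothetical hook.
    Returns the fraction of bootstrap samples in which each candidate
    parameter is the most promising addition.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    counts = {}
    for _ in range(n_boot):
        sample = data[rng.integers(0, n, size=n)]  # resample rows with replacement
        stats = lm_statistics(sample)
        best = max(stats, key=stats.get)           # largest LM statistic wins
        counts[best] = counts.get(best, 0) + 1
    return {name: c / n_boot for name, c in counts.items()}
```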
Recent trends in deep learning (DL) have established hardware accelerators as the most viable solution for several classes of high-performance computing (HPC) applications, such as image classification, computer vision, and speech recognition. This survey summarizes and classifies the most recent advances in designing DL accelerators capable of meeting the performance requirements of HPC applications. In particular, it highlights the most advanced approaches to deep learning acceleration, including not only GPU- and TPU-based accelerators but also design-specific hardware accelerators such as FPGA- and ASIC-based accelerators, Neural Processing Units, open-hardware RISC-V-based accelerators, and co-processors. The survey also describes accelerators based on emerging memory technologies and computing paradigms, such as 3D-stacked Processor-In-Memory, non-volatile memories (mainly Resistive RAM and Phase Change Memories) used for in-memory computing, Neuromorphic Processing Units, and accelerators based on Multi-Chip Modules. Among emerging technologies, we also include some insights into quantum-based and photonic accelerators. To conclude, the survey classifies the most influential architectures and technologies proposed in recent years, with the purpose of offering the reader a comprehensive perspective on the rapidly evolving field of deep learning.
This article introduces a quick and simple combinatorial approximation algorithm for the weighted correlation clustering problem. In this problem, we have a set of vertices and two weight values for each pair of vertices denoting their difference and similarity. The goal is to cluster the vertices with minimum total intra-cluster difference weights plus inter-cluster similarity weights. Our algorithm is a randomized approximation algorithm with $O(n^2)$ running time, where $n$ is the number of vertices. Its approximation factor is 3 when the instance satisfies probability constraints. If the instance satisfies the triangle inequality in addition to probability constraints, the approximation factor is 1.6. In both cases the algorithm improves on the best known results in terms of running time, and in the second case it also improves the approximation factor.
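For intuition, the sketch below implements a classic randomized pivot-style clustering (in the spirit of KwikCluster) under the probability constraint $w^+_{ij} + w^-_{ij} = 1$; it is only an illustration of this family of algorithms, not necessarily the rounding rule analyzed in the article.

```python
import random

def pivot_cluster(n, similarity, seed=None):
    """Randomized pivot clustering sketch for weighted correlation clustering.

    similarity[i][j] in [0, 1] is the similarity weight w+_ij; under the
    probability constraint the difference weight is 1 - similarity[i][j].
    Illustrative only: follows the classic KwikCluster recipe.
    """
    rng = random.Random(seed)
    unclustered = set(range(n))
    clusters = []
    while unclustered:
        pivot = rng.choice(tuple(unclustered))  # pick a random pivot vertex
        # Group with the pivot every remaining vertex that is more similar than different
        cluster = {j for j in unclustered
                   if j == pivot or similarity[pivot][j] > 0.5}
        clusters.append(cluster)
        unclustered -= cluster
    return clusters
```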
Ordinary differential equations (ODEs) are widely used to describe dynamical systems in science, but identifying parameters that explain experimental measurements is challenging. In particular, although ODEs are differentiable and would allow for gradient-based parameter optimization, the nonlinear dynamics of ODEs often lead to many local minima and extreme sensitivity to initial conditions. We therefore propose diffusion tempering, a novel regularization technique for probabilistic numerical methods which improves convergence of gradient-based parameter optimization in ODEs. By iteratively reducing a noise parameter of the probabilistic integrator, the proposed method converges more reliably to the true parameters. We demonstrate that our method is effective for dynamical systems of different complexity and show that it obtains reliable parameter estimates for a Hodgkin-Huxley model with a practically relevant number of parameters.
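A minimal sketch of such a tempering loop is given below; `make_loss` and `optimize` are hypothetical hooks for a probabilistic-integrator loss with a fixed noise level and a gradient-based optimizer, and the geometric noise schedule is an assumption for illustration.

```python
import numpy as np

def diffusion_tempering(make_loss, optimize, theta0,
                        sigma_start=1e1, sigma_end=1e-4, n_levels=8):
    """Tempering loop sketch: solve a sequence of progressively less smoothed problems.

    make_loss(sigma): returns a loss whose probabilistic-integrator noise
                      parameter is fixed to sigma (hypothetical hook).
    optimize(loss, theta): gradient-based optimizer returning refined theta
                           (hypothetical hook, e.g. L-BFGS or Adam).
    """
    theta = np.asarray(theta0, dtype=float)
    # Reduce the noise level geometrically; each solution warm-starts the next,
    # harder problem, following the idea described in the abstract.
    for sigma in np.geomspace(sigma_start, sigma_end, n_levels):
        theta = optimize(make_loss(sigma), theta)
    return theta
```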
Stochastic approximation is a class of algorithms that update a vector iteratively, incrementally, and stochastically, including, e.g., stochastic gradient descent and temporal difference learning. One fundamental challenge in analyzing a stochastic approximation algorithm is to establish its stability, i.e., to show that the stochastic vector iterates are bounded almost surely. In this paper, we extend the celebrated Borkar-Meyn theorem for stability from the martingale difference noise setting to the Markovian noise setting, which greatly improves its applicability in reinforcement learning, especially in off-policy reinforcement learning algorithms with linear function approximation and eligibility traces. Central to our analysis is the diminishing asymptotic rate of change of a few functions, which is implied both by a form of the strong law of large numbers and by a commonly used V4 Lyapunov drift condition, and which holds trivially if the Markov chain is finite and irreducible.
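For concreteness, a generic stochastic approximation iteration of the kind covered by this analysis can be sketched as follows; the update direction `H` and the `sample_noise` hook (producing the next state of an underlying Markov chain in the Markovian setting) are assumptions for illustration.

```python
import numpy as np

def stochastic_approximation(H, sample_noise, x0, n_steps=10_000):
    """Generic iteration x_{k+1} = x_k + a_k * H(x_k, Y_{k+1}).

    H: update direction evaluated at the current iterate and the next noise
       sample (e.g., a stochastic gradient or a TD update).
    sample_noise: callable producing Y_{k+1} given Y_k -- in the Markovian
                  setting, one transition of the underlying chain (hypothetical hook).
    """
    x = np.asarray(x0, dtype=float)
    y = sample_noise(None)                 # initial noise/state sample
    for k in range(n_steps):
        a_k = 1.0 / (k + 1)                # diminishing Robbins-Monro step size
        y = sample_noise(y)                # Markovian noise: next chain state
        x = x + a_k * H(x, y)
        # Stability (the subject of the Borkar-Meyn theorem) means
        # sup_k ||x_k|| < infinity almost surely.
    return x
```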
Structural equation models (SEMs) are commonly used to study the structural relationship between observed variables and latent constructs. Recently, Bayesian fitting procedures for SEMs have received more attention thanks to their potential to facilitate the adoption of more flexible model structures, and variational approximations have been shown to provide fast and accurate inference for Bayesian analysis of SEMs. However, the application of variational approximations is currently limited to very simple, elemental SEMs. We develop mean-field variational Bayes algorithms for two SEM formulations for data that present non-Gaussian features such as skewness and multimodality. The proposed models exploit mixtures of Gaussians, include covariates for the analysis of latent traits, and account for missing data. We also examine two variational information criteria for model selection that are straightforward to compute in our variational inference framework. The performance of the MFVB algorithms and information criteria is investigated in a simulated data study and a real data application.
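As a rough sketch of the general structure such algorithms share, the coordinate-ascent loop below cycles through closed-form factor updates and monitors the evidence lower bound; the `update_blocks` and `elbo` hooks are hypothetical placeholders, not the paper's derived updates.

```python
import numpy as np

def cavi(update_blocks, elbo, params, max_iter=200, tol=1e-6):
    """Generic coordinate-ascent (mean-field) variational Bayes loop.

    update_blocks: list of callables, each updating one factor of the
                   mean-field approximation given the others (left abstract).
    elbo: callable returning the evidence lower bound for the current params,
          used to monitor convergence and, afterwards, to form variational
          information criteria for model selection.
    """
    prev = -np.inf
    for _ in range(max_iter):
        for update in update_blocks:
            params = update(params)
        current = elbo(params)
        if current - prev < tol:   # ELBO is non-decreasing under exact updates
            break
        prev = current
    return params, current
```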
We develop a high order reconstructed discontinuous approximation (RDA) method for solving a mixed formulation of the quad-curl problem in two and three dimensions. This mixed formulation is established by adding an auxiliary variable to control the divergence of the field. The approximation space for the original variables is constructed by patch reconstruction with exactly one degree of freedom per element in each dimension, and the auxiliary variable is approximated by the piecewise constant space. We prove optimal convergence rates in the energy norm and a suboptimal $L^2$ convergence rate via a duality argument. Numerical results are provided to verify the theoretical analysis.
Retrieval-Augmented Generation (RAG) merges retrieval methods with deep learning advancements to address the static limitations of large language models (LLMs) by enabling the dynamic integration of up-to-date external information. This methodology, focusing primarily on the text domain, offers a cost-effective way to mitigate the generation of plausible but incorrect responses by LLMs, thereby enhancing the accuracy and reliability of their outputs through the use of real-world data. As RAG grows in complexity and incorporates multiple concepts that can influence its performance, this paper organizes the RAG paradigm into four categories: pre-retrieval, retrieval, post-retrieval, and generation, offering a detailed perspective from the retrieval viewpoint. It outlines RAG's evolution and discusses the field's progression through the analysis of significant studies. Additionally, the paper introduces evaluation methods for RAG, addressing the challenges faced and proposing future research directions. By offering an organized framework and categorization, the study aims to consolidate existing research on RAG, clarify its technological underpinnings, and highlight its potential to broaden the adaptability and applications of LLMs.
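A minimal sketch of the four-stage pipeline (pre-retrieval, retrieval, post-retrieval, generation) might look as follows; the `rewrite`, `retriever`, `rerank`, and `llm` hooks are hypothetical components, not a specific system surveyed in the paper.

```python
def rag_answer(query, rewrite, retriever, rerank, llm, top_k=5):
    """Minimal sketch of a four-stage RAG pipeline (illustrative hooks only)."""
    # Pre-retrieval: reformulate/expand the query before searching
    search_query = rewrite(query)
    # Retrieval: fetch candidate passages from an external, up-to-date corpus
    candidates = retriever(search_query, top_k=top_k)
    # Post-retrieval: rerank/filter candidates to keep the most relevant context
    context = rerank(query, candidates)[:top_k]
    # Generation: condition the LLM on the retrieved context to ground its answer
    prompt = ("Answer using only the context below.\n\n"
              + "\n\n".join(context)
              + f"\n\nQuestion: {query}")
    return llm(prompt)
```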
Data augmentation, the artificial creation of training data for machine learning by transformations, is a widely studied research field across machine learning disciplines. While it is useful for increasing the generalization capabilities of a model, it can also address many other challenges and problems, from overcoming a limited amount of training data, to regularizing the objective, to limiting the amount of data used in order to protect privacy. Based on a precise description of the goals and applications of data augmentation (C1) and a taxonomy of existing works (C2), this survey is concerned with data augmentation methods for textual classification and aims to provide a concise and comprehensive overview for researchers and practitioners (C3). Derived from the taxonomy, we divide more than 100 methods into 12 different groupings and provide state-of-the-art references indicating which methods are highly promising (C4). Finally, research perspectives that may constitute a building block for future work are given (C5).
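As a concrete example of one widely used textual transformation (in the spirit of EDA-style methods, and not representative of all 12 groupings), the sketch below randomly deletes tokens to create additional labeled examples.

```python
import random

def random_deletion(text, p=0.1, seed=None):
    """One simple textual augmentation: randomly drop tokens with probability p."""
    rng = random.Random(seed)
    tokens = text.split()
    kept = [t for t in tokens if rng.random() > p]
    # Never return an empty string: fall back to a single random token
    return " ".join(kept) if kept else rng.choice(tokens)

# Usage: generate extra labeled examples for a text classifier
examples = [("the movie was surprisingly good", "pos")]
augmented = [(random_deletion(x, seed=i), y) for i, (x, y) in enumerate(examples)]
```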
We introduce a generic framework that reduces the computational cost of object detection while retaining accuracy for scenarios where objects with varied sizes appear in high resolution images. Detection progresses in a coarse-to-fine manner, first on a down-sampled version of the image and then on a sequence of higher resolution regions identified as likely to improve the detection accuracy. Built upon reinforcement learning, our approach consists of a model (R-net) that uses coarse detection results to predict the potential accuracy gain for analyzing a region at a higher resolution and another model (Q-net) that sequentially selects regions to zoom in on. Experiments on the Caltech Pedestrians dataset show that our approach reduces the number of processed pixels by over 50% without a drop in detection accuracy. The merits of our approach become more significant on a high resolution test set collected from the YFCC100M dataset, where our approach maintains high detection performance while reducing the number of processed pixels by about 70% and the detection time by over 50%.
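A rough sketch of the coarse-to-fine loop might look as follows; the `detect`, `r_net`, and `q_net` hooks and their interfaces are assumptions for illustration, not the paper's exact models.

```python
def coarse_to_fine_detect(image, downsample, detect, r_net, q_net, budget=3):
    """Sketch of a coarse-to-fine detection loop with R-net/Q-net style hooks.

    detect: base detector run on an image or crop (hypothetical hook).
    r_net:  predicts, per candidate region, the expected accuracy gain of
            re-detecting it at full resolution, from the coarse detections.
    q_net:  sequentially picks the next region to zoom in on, or None to stop.
    """
    coarse = detect(downsample(image))            # cheap pass on the low-res image
    detections = list(coarse)
    gains = r_net(image, coarse)                  # region -> predicted accuracy gain
    for _ in range(budget):                       # only a few high-res crops allowed
        region = q_net(gains, detections)
        if region is None:                        # no region worth the extra cost
            break
        detections.extend(detect(image[region]))  # refine on the full-resolution crop
    return detections
```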