Physics-informed neural networks (PINNs) are a numerical simulation method that incorporates a loss function corresponding to the governing equations into a neural network. While PINNs have been explored for their utility in inverse analysis, their application to acoustic analysis remains limited. This study presents a method to identify loss parameters in acoustic tubes using PINNs. We categorized the loss parameters into two groups: one dependent on the tube's diameter and another constant, independent of it. The latter were set as trainable parameters of the neural network. The problem of identifying the loss parameters was formulated as an optimization problem, with the physical properties determined through this process. The neural network architecture was based on our previously proposed ResoNet, which is designed for analyzing acoustic resonance. The efficacy of the proposed method is assessed through both forward and inverse analysis, specifically through the identification of loss parameters. The findings demonstrate that it is feasible to accurately identify parameters that significantly affect the sound field under analysis. By merely altering the governing equations in the loss function, the method can be adapted to various sound fields, suggesting its potential for broad application.
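As a rough illustration of the inverse-analysis idea described above, the sketch below trains a small PINN on a generic 1D damped wave equation u_tt = c² u_xx − a u_t, with the damping coefficient a registered as a trainable parameter of the network. The equation, network size, synthetic data, and variable names are illustrative assumptions, not the paper's ResoNet architecture or its diameter-dependent acoustic-tube loss model.

```python
# Minimal PINN sketch: identify an unknown damping coefficient `a` in
# u_tt = c^2 u_xx - a u_t by making `a` a trainable parameter.
# Illustrative only; not the paper's ResoNet or its acoustic-tube loss model.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
c = 1.0  # assumed known wave speed

class PINN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, 1),
        )
        # Unknown physical (loss) parameter, learned jointly with the network.
        self.log_a = nn.Parameter(torch.tensor(0.0))

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def pde_residual(model, x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    u = model(x, t)
    grad = lambda y, v: torch.autograd.grad(y, v, torch.ones_like(y), create_graph=True)[0]
    u_t, u_x = grad(u, t), grad(u, x)
    u_tt, u_xx = grad(u_t, t), grad(u_x, x)
    a = model.log_a.exp()  # keep the damping coefficient positive
    return u_tt - c**2 * u_xx + a * u_t

model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder "measurements": in practice these come from simulation or experiment.
x_d = torch.rand(256, 1); t_d = torch.rand(256, 1)
u_d = torch.sin(math.pi * x_d) * torch.exp(-0.5 * t_d)  # synthetic stand-in data

for step in range(2000):
    opt.zero_grad()
    x_c = torch.rand(512, 1); t_c = torch.rand(512, 1)      # collocation points
    loss_pde = pde_residual(model, x_c, t_c).pow(2).mean()   # physics loss
    loss_data = (model(x_d, t_d) - u_d).pow(2).mean()        # data-fit loss
    loss = loss_pde + loss_data
    loss.backward()
    opt.step()

print("identified damping a =", model.log_a.exp().item())
```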
Kaplan et al. and Hoffmann et al. developed influential scaling laws for the optimal model size as a function of the compute budget, but these laws yield substantially different predictions. We explain the discrepancy by reproducing the Kaplan scaling law on two datasets (OpenWebText2 and RefinedWeb) and identifying three factors causing the difference: last layer computational cost, warmup duration, and scale-dependent optimizer tuning. With these factors corrected, we obtain excellent agreement with the Hoffmann et al. (i.e., "Chinchilla") scaling law. Counter to a hypothesis of Hoffmann et al., we find that careful learning rate decay is not essential for the validity of their scaling law. As a secondary result, we derive scaling laws for the optimal learning rate and batch size, finding that tuning the AdamW $\beta_2$ parameter is essential at lower batch sizes.
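For readers who want to see the mechanics of fitting such a law, the following sketch fits a compute-optimal model-size relation N*(C) = k·C^a by least squares in log-log space. The (compute, optimal size) pairs are synthetic placeholders generated inside the script; they are not measurements or fitted exponents from Kaplan et al., Hoffmann et al., or this paper.

```python
# Sketch: fit a compute-optimal model-size scaling law N*(C) = k * C^a
# in log-log space. The (C, N*) pairs below are synthetic placeholders,
# not results from any of the papers discussed.
import numpy as np

rng = np.random.default_rng(0)
C = np.logspace(17, 23, 12)                       # compute budgets (FLOPs), synthetic
true_a, true_k = 0.5, 0.1
N_opt = true_k * C**true_a * rng.lognormal(0.0, 0.05, C.size)  # noisy synthetic optima

# Linear regression on log N* = log k + a * log C.
a_hat, logk_hat = np.polyfit(np.log(C), np.log(N_opt), 1)
print(f"fitted exponent a = {a_hat:.3f}, prefactor k = {np.exp(logk_hat):.3g}")
```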
Language model (LM) distillation is a trending area that aims to distill the knowledge residing in a large teacher LM into a small student one. While various methods have been proposed to maximize the effectiveness of distillation, significant challenges persist, particularly when there is a substantial capacity gap between the teacher and student LMs. This issue, often referred to as the \textit{curse} of capacity gap, suggests that a larger teacher does not necessarily result in a superior student compared to one distilled from a smaller teacher. In other words, there is likely an optimal teacher that yields the best student along the scaling course of the teacher. However, as indicated in previous studies, the curse of capacity gap cannot be tackled without notable compute overhead. In the context of large LMs (LLMs), previously viable approaches become much less meaningful, since distilling an expected student from an optimal teacher with small compute overhead forms an impossible triangle. Fortunately, the impossible triangle can become possible given an induced \textit{law} of capacity gap. In this paper, we take the spirit of scaling laws and reveal that the optimal teacher scale almost consistently follows a linear scaling with the student scale across different model architectures and data scales. This law then guides us to distill a 3B student LM (termed \textsc{MiniMA}) from LLaMA2-7B. \textsc{MiniMA} is demonstrated to outperform a wide range of 3B competitors and can even compete with several 7B models.
We describe Bayes factors based on $z$, $t$, $\chi^2$, and $F$ statistics when non-local moment prior distributions are used to define alternative hypotheses. The non-local alternative prior distributions are centered on standardized effects. The prior densities include a dispersion parameter that can be used to model prior precision and the variation of effect sizes across replicated experiments. We examine the convergence rates of Bayes factors under true null and true alternative hypotheses and show how these Bayes factors can be used to construct Bayes factor functions. An example illustrates the application of the resulting Bayes factors to psychological experiments.
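As an illustration of the kind of Bayes factor described here, the sketch below computes, by numerical integration, a Bayes factor for a z statistic when the alternative is defined by a non-local normal moment prior on the noncentrality parameter. The choice of test statistic and the role of τ as the dispersion parameter are assumptions made for this example; no closed-form expression from the paper is used.

```python
# Sketch: Bayes factor for a z statistic with a non-local normal moment prior
# on the noncentrality parameter lambda: pi(lambda) = (lambda^2 / tau^2) * N(lambda | 0, tau^2).
# This density integrates to 1 and vanishes at lambda = 0 (the non-local property).
import numpy as np
from scipy import integrate, stats

def moment_prior(lam, tau):
    return (lam**2 / tau**2) * stats.norm.pdf(lam, loc=0.0, scale=tau)

def bf10_z(z, tau=1.0):
    # Marginal likelihood under H1: integrate N(z | lambda, 1) against the prior.
    integrand = lambda lam: stats.norm.pdf(z, loc=lam, scale=1.0) * moment_prior(lam, tau)
    m1, _ = integrate.quad(integrand, -np.inf, np.inf)
    m0 = stats.norm.pdf(z, loc=0.0, scale=1.0)     # likelihood under H0: lambda = 0
    return m1 / m0

for z in (0.0, 1.0, 2.5, 4.0):
    print(f"z = {z:>4}:  BF10 = {bf10_z(z, tau=1.0):.3f}")
```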
Based on the partition of the parameter space, two algorithms for computing the rational univariate representation of zero-dimensional ideals with parameters are presented in this paper. Unlike the rational univariate representation of zero-dimensional ideals without parameters, a zero-dimensional ideal with parameters has a different number of zeros under different specializations, which makes choosing and checking the separating element, the key to computing the rational univariate representation, difficult. In order to pick out the separating element, we first partition the parameter space so that under each branch the ideal has the same number of zeros. Subsequently, two ideas are given for choosing and checking the separating element. The first extends the subresultant theorem to the parametric case and uses it to choose the separating element with a further partition of the parameter space, and then computes rational univariate representations with the help of parametric greatest common divisor theory. The second chooses and checks the separating element directly by computing parametric greatest common divisors and then immediately obtains the rational univariate representations. Based on these ideas, we design two different algorithms for computing rational univariate representations of zero-dimensional ideals with parameters. Furthermore, the algorithms have been implemented in Singular and a performance comparison is presented.
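To make the role of the separating element concrete, the following non-parametric sketch uses SymPy to test a candidate separating element t = x + y for a zero-dimensional ideal by lex elimination: the eliminant in t should be squarefree of degree equal to the number of distinct zeros. The parametric partition of the parameter space, the extended subresultant theorem, and the Singular implementation from the paper are not reproduced here.

```python
# Sketch: check a candidate separating element for a zero-dimensional ideal
# (non-parametric case only). For I = <x^2 - 2, y^2 - 3> the four zeros are
# (+-sqrt(2), +-sqrt(3)); t = x + y takes four distinct values, so it separates.
import sympy as sp

x, y, t = sp.symbols("x y t")
I = [x**2 - 2, y**2 - 3]
candidate = x + y

# Eliminate x and y: with lex order and generators (x, y, t), the basis
# elements lying in k[t] generate the elimination ideal.
G = sp.groebner(I + [t - candidate], x, y, t, order="lex")
eliminant = [g for g in G.exprs if g.free_symbols <= {t}][0]
eliminant = sp.Poly(eliminant, t)

num_zeros = 4  # the ideal above is radical with exactly four distinct zeros
squarefree = sp.degree(sp.gcd(eliminant, eliminant.diff(t))) == 0
separating = squarefree and eliminant.degree() == num_zeros
print("eliminant:", eliminant.as_expr())
print("separating element?", separating)
```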
The classical Fokker-Planck equation (FPE) is a key tool in physics for describing systems influenced by drag forces and Gaussian noise, with applications spanning multiple fields. We consider the fractional Fokker-Planck equation (FFPE), which models the time evolution of probability densities for systems driven by L\'evy processes, relevant in scenarios where Gaussian assumptions fail. The paper presents an efficient and accurate numerical approach for the free-space FFPE with constant coefficients and Dirac-delta initial conditions. This method utilizes the integral representation of the solutions and enables the efficient handling of very high-dimensional problems using fast algorithms. Our work is the first to present a high-precision numerical solver for the free-space FFPE with Dirac-delta initial conditions. This opens the door for future research on more complex scenarios, including those with variable coefficients and other types of initial conditions.
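As a concrete (and much simpler) instance of the integral-representation idea, the sketch below evaluates the 1D free-space FFPE solution with constant drift, a constant fractional diffusion coefficient, and a Dirac-delta initial condition by numerically inverting its Fourier representation. The symmetric Riesz form of the fractional operator and the parameter values are assumptions for illustration; the fast high-dimensional algorithms of the paper are not reproduced.

```python
# Sketch: 1D free-space fractional Fokker-Planck solution with constant drift mu,
# fractional diffusion coefficient D of order alpha, and p(x, 0) = delta(x - x0).
# In Fourier space the solution is p_hat(k, t) = exp(-i k (x0 + mu t) - D |k|^alpha t),
# so p(x, t) = (1/pi) * Integral_0^inf cos(k (x - x0 - mu t)) exp(-D k^alpha t) dk.
# For alpha = 2 this reduces to a Gaussian with variance 2 D t (a useful sanity check).
import numpy as np
from scipy import integrate

def ffpe_density(x, t, alpha=1.5, D=1.0, mu=0.5, x0=0.0):
    shift = x - x0 - mu * t
    integrand = lambda k: np.cos(k * shift) * np.exp(-D * k**alpha * t)
    val, _ = integrate.quad(integrand, 0.0, np.inf, limit=200)
    return val / np.pi

t = 1.0
for x in np.linspace(-6.0, 6.0, 7):
    print(f"p({x:+.1f}, {t}) = {ffpe_density(x, t):.5f}")
```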
Active subspace (AS) methods are a valuable tool for understanding the relationship between the inputs and outputs of a physics simulation. In this paper, an elegant generalization of the traditional AS method is developed to assess the co-activity of two computer models. This generalization, which we refer to as a Co-Active Subspace (C-AS) Method, allows for the joint analysis of two or more computer models, enabling thorough exploration of the alignment (or non-alignment) of their respective gradient spaces. We define co-active directions, co-sensitivity indices, and a scalar ``concordance'' metric (and a complementary ``discordance'' pseudo-metric), and we demonstrate that these are powerful tools for understanding the behavior of a class of computer models, especially when used to supplement traditional AS analysis. Details for efficient estimation of the C-AS and an accompanying R package (github.com/knrumsey/concordance) are provided. Practical application is demonstrated through the analysis of a set of simulated rate stick experiments for PBX 9501, a high explosive, offering insights into complex model dynamics.
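For orientation, the sketch below estimates a traditional active subspace from Monte Carlo gradient samples and, as a plausible stand-in for the co-activity idea, also forms a cross-gradient matrix between two toy functions. The cross-matrix construction and the toy functions are assumptions of this example; they are not the definitions used by the C-AS method, the concordance metric, or the `concordance` R package.

```python
# Sketch: Monte Carlo estimate of a traditional active subspace for f, plus a
# cross-gradient matrix between f and g as a rough stand-in for "co-activity".
# The cross construction here is an assumption for illustration only.
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 2000

# Two toy models with analytic gradients (placeholders for computer models).
a = np.array([1.0, 0.5, 0.0, 0.0, 0.0])
b = np.array([0.9, 0.6, 0.1, 0.0, 0.0])
grad_f = lambda x: a * np.cos(a @ x)      # f(x) = sin(a . x)
grad_g = lambda x: b * np.exp(b @ x)      # g(x) = exp(b . x)

X = rng.uniform(-1.0, 1.0, size=(n, d))
Gf = np.array([grad_f(x) for x in X])
Gg = np.array([grad_g(x) for x in X])

# Traditional active subspace of f: eigendecomposition of E[grad f grad f^T].
Cf = Gf.T @ Gf / n
eigvals, eigvecs = np.linalg.eigh(Cf)
print("AS eigenvalues of f:", eigvals[::-1])
print("leading active direction of f:", eigvecs[:, -1])

# A simple cross matrix E[grad f grad g^T]; its singular directions indicate
# where the two gradient fields align (illustrative stand-in only).
Cfg = Gf.T @ Gg / n
U, s, Vt = np.linalg.svd(Cfg)
print("cross singular values:", s)
```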
Deep learning-based partial differential equation (PDE) solvers have received much attention in the past few years. Methods of this category can solve a wide range of PDEs with high accuracy, typically by transforming the problems into highly nonlinear optimization problems over neural network parameters. This work reviews several deep learning solvers proposed over the past few years, including PINN, WAN, DRM, and VPINN. Numerical results are provided to compare them and to highlight the importance of the loss formulation and the optimization method. A rigorous error analysis for PINN is also presented. Finally, we discuss the current limitations and bottlenecks of these methods.
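To complement the comparison, here is a minimal sketch of one of the reviewed solvers, the Deep Ritz Method (DRM), applied to the Poisson problem −Δu = f on the unit square with a quadratic boundary penalty. The network size, penalty weight, and sampling scheme are illustrative choices, not those used in the review's experiments.

```python
# Sketch: Deep Ritz Method (DRM) for -Laplace(u) = f on (0,1)^2, u = 0 on the boundary.
# The Dirichlet energy 0.5*|grad u|^2 - f*u is estimated by Monte Carlo sampling and the
# boundary condition is enforced with a quadratic penalty. Illustrative settings only.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
f = lambda x: 2 * math.pi**2 * torch.sin(math.pi * x[:, :1]) * torch.sin(math.pi * x[:, 1:])

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
beta = 500.0  # boundary penalty weight (assumed)

def boundary_points(m):
    # Sample points uniformly on the four edges of the unit square.
    s = torch.rand(m, 1)
    edges = [torch.cat([s, torch.zeros_like(s)], 1), torch.cat([s, torch.ones_like(s)], 1),
             torch.cat([torch.zeros_like(s), s], 1), torch.cat([torch.ones_like(s), s], 1)]
    return torch.cat(edges, 0)

for step in range(3000):
    opt.zero_grad()
    x = torch.rand(1024, 2, requires_grad=True)          # interior samples
    u = net(x)
    grad_u = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    energy = (0.5 * grad_u.pow(2).sum(dim=1, keepdim=True) - f(x) * u).mean()
    penalty = net(boundary_points(256)).pow(2).mean()
    loss = energy + beta * penalty
    loss.backward()
    opt.step()

# For this f the exact solution is u(x, y) = sin(pi x) sin(pi y); check at the center.
print("u(0.5, 0.5) =", net(torch.tensor([[0.5, 0.5]])).item(), "(exact: 1.0)")
```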
Test-Time Adaptation (TTA) has recently emerged as a promising strategy for tackling the problem of machine learning model robustness under distribution shifts by adapting the model during inference without access to any labels. Because of the difficulty of the task, hyperparameters strongly influence the effectiveness of adaptation. However, the literature has provided little exploration of optimal hyperparameter selection. In this work, we tackle this problem by evaluating existing TTA methods using surrogate-based hyperparameter selection strategies (which do not assume access to the test labels) to obtain a more realistic evaluation of their performance. We show that some of the recent state-of-the-art methods exhibit inferior performance compared to previous algorithms under this more realistic evaluation setup. Further, we show that forgetting is still a problem in TTA, as the only method that is robust to hyperparameter selection resets the model to its initial state at every step. We analyze different types of unsupervised selection strategies, and while they work reasonably well in most scenarios, the only strategies that work consistently well use some form of supervision (either a limited number of annotated test samples or the pretraining data). Our findings underscore the need for further research with more rigorous benchmarking that explicitly states the model selection strategy; to facilitate this, we open-source our code.
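As a small illustration of what a label-free surrogate selection strategy can look like, the sketch below adapts a toy classifier to unlabeled test data by entropy minimization (in the spirit of Tent) for several candidate learning rates and keeps the one with the lowest post-adaptation prediction entropy. The model, data, and the entropy criterion itself are illustrative assumptions, not the benchmark protocol or the selection strategies evaluated in the paper.

```python
# Sketch: surrogate (label-free) hyperparameter selection for test-time adaptation.
# Adapt a toy classifier by entropy minimization for several candidate learning rates
# and keep the one yielding the lowest post-adaptation prediction entropy.
# Entropy is just one example surrogate; model and data are synthetic placeholders.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
base_model = nn.Linear(16, 4)                      # placeholder "pretrained" model
x_test = torch.randn(512, 16) + 0.5                # unlabeled, shifted test data

def mean_entropy(logits):
    p = logits.softmax(dim=1)
    return -(p * p.log().clamp(min=-30)).sum(dim=1).mean()

def adapt(model, lr, steps=20):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = mean_entropy(model(x_test))         # Tent-style entropy objective
        loss.backward()
        opt.step()
    return model

scores = {}
for lr in (1e-4, 1e-3, 1e-2, 1e-1):
    adapted = adapt(copy.deepcopy(base_model), lr)
    with torch.no_grad():
        scores[lr] = mean_entropy(adapted(x_test)).item()

best_lr = min(scores, key=scores.get)
print("surrogate scores:", scores, "-> selected lr:", best_lr)
```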
We propose a new framework to design and analyze accelerated methods that solve general monotone equation (ME) problems $F(x)=0$. Traditional approaches include generalized steepest descent methods and inexact Newton-type methods. If $F$ is uniformly monotone and twice differentiable, these methods achieve local convergence rates, and the latter are globally convergent thanks to line search and hyperplane projection; however, a global rate is unknown for these methods. Variational inequality methods can be applied to yield a global rate expressed in terms of $\|F(x)\|$, but these results are restricted to first-order methods and a Lipschitz continuous operator. It has remained unclear how to obtain global acceleration using high-order Lipschitz continuity. This paper takes a continuous-time perspective in which accelerated methods are viewed as discretizations of dynamical systems. Our contribution is to propose accelerated rescaled gradient systems and prove that they are equivalent to closed-loop control systems. Based on this connection, we establish the properties of solution trajectories. Moreover, we provide a unified algorithmic framework obtained from the discretization of our system, which together with two approximation subroutines yields both existing high-order methods and new first-order methods. We prove that the $p^{th}$-order method achieves a global rate of $O(k^{-p/2})$ in terms of $\|F(x)\|$ if $F$ is $p^{th}$-order Lipschitz continuous, and that the first-order method achieves the same rate if $F$ is $p^{th}$-order strongly Lipschitz continuous. If $F$ is strongly monotone, the restarted versions achieve local convergence with order $p$ when $p \geq 2$. Our discrete-time analysis is largely motivated by the continuous-time analysis and demonstrates the fundamental role that rescaled gradients play in global acceleration for solving ME problems.
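For comparison with the first-order case, the sketch below runs the classical extragradient method (not the paper's rescaled-gradient scheme) on a monotone linear operator and tracks $\|F(x_k)\|$. The operator and step size are illustrative assumptions, and the rates discussed in the abstract are not verified here.

```python
# Sketch: classical extragradient method for a monotone equation F(x) = 0.
# This is the standard Korpelevich scheme, shown for contrast with the accelerated
# methods described in the abstract; it is not the paper's rescaled-gradient algorithm.
import numpy as np

rng = np.random.default_rng(0)
n = 20
S = rng.standard_normal((n, n))
A = (S - S.T) + 0.1 * np.eye(n)      # skew-symmetric + small identity => monotone
F = lambda x: A @ x                  # monotone, Lipschitz operator

L = np.linalg.norm(A, 2)             # Lipschitz constant of F
eta = 0.5 / L                        # step size below 1/L
x = rng.standard_normal(n)

for k in range(2000):
    y = x - eta * F(x)               # extrapolation step
    x = x - eta * F(y)               # update using the operator at the extrapolated point
    if k % 500 == 0:
        print(f"iter {k:4d}: ||F(x)|| = {np.linalg.norm(F(x)):.3e}")

print("final ||F(x)|| =", np.linalg.norm(F(x)))
```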
Neural machine translation (NMT) is a deep learning-based approach to machine translation that yields state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and monolingual corpora for in-domain translation, is therefore very important for domain-specific translation. In this paper, we give a comprehensive survey of state-of-the-art domain adaptation techniques for NMT.