Quadratic NURBS-based discretizations of the Galerkin method suffer from membrane locking when applied to Kirchhoff-Love shell formulations. Membrane locking causes not only smaller displacements than expected, but also large-amplitude spurious oscillations of the membrane forces. Continuous-assumed-strain (CAS) elements were recently introduced to remove membrane locking in quadratic NURBS-based discretizations of linear plane curved Kirchhoff rods (Casquero et al., CMAME, 2022). In this work, we generalize CAS elements to vanquish membrane locking in quadratic NURBS-based discretizations of linear Kirchhoff-Love shells. CAS elements bilinearly interpolate the membrane strains at the four corners of each element; thus, the assumed strains have $C^0$ continuity across element boundaries. To the best of the authors' knowledge, CAS elements are the first assumed-strain treatment to effectively overcome membrane locking in quadratic NURBS-based discretizations of Kirchhoff-Love shells while satisfying the following characteristics important for computational efficiency: (1) no additional degrees of freedom are added, (2) no additional systems of algebraic equations need to be solved, (3) no matrix multiplications or matrix inversions are needed to obtain the stiffness matrix, and (4) the nonzero pattern of the stiffness matrix is preserved. The benchmark problems show that CAS elements, using either $2 \times 2$ or $3 \times 3$ Gauss-Legendre quadrature points per element, are an effective locking treatment, since they yield more accurate displacements on coarse meshes and remove the spurious oscillations of the membrane forces. The benchmark problems also show that CAS elements outperform state-of-the-art element types based on Lagrange polynomials equipped with either assumed-strain or reduced-integration locking treatments.
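A minimal sketch of the corner interpolation on a unit parametric element (the data layout and the `assumed_strain` helper are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def assumed_strain(corner_strains, xi, eta):
    """Bilinearly interpolate membrane strains given at the four element
    corners to a point (xi, eta) in the unit parametric element [0, 1]^2.

    corner_strains: array of shape (4, 3) holding the membrane strains
    (eps_11, eps_22, eps_12) at the corners, ordered (0,0), (1,0), (0,1), (1,1).
    """
    N = np.array([(1 - xi) * (1 - eta),   # bilinear shape functions
                  xi * (1 - eta),
                  (1 - xi) * eta,
                  xi * eta])
    return N @ corner_strains

# Evaluate the assumed strains at one 2x2 Gauss-Legendre point of the element.
gp = 0.5 - 0.5 / np.sqrt(3.0)
eps = assumed_strain(np.random.rand(4, 3), gp, gp)
```

Because the corner values are shared between neighboring elements, the interpolated strain field is continuous across element boundaries, which is the $C^0$ property the abstract refers to.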
In spatial blind source separation, the observed multivariate random fields are assumed to be mixtures of latent, spatially dependent random fields. The objective is to recover the latent random fields by estimating the unmixing transformation. Current algorithms for spatial blind source separation can only estimate linear unmixing transformations, and nonlinear blind source separation methods for spatial data are scarce. In this paper, we extend an identifiable variational autoencoder that can estimate nonlinear unmixing transformations to spatially dependent data and demonstrate its performance for both stationary and nonstationary spatial data using simulations. In addition, we introduce scaled mean absolute Shapley additive explanations for interpreting the latent components through the nonlinear mixing transformation. The spatial identifiable variational autoencoder is applied to a geochemical dataset to find the latent random fields, which are then interpreted using the scaled mean absolute Shapley additive explanations. Finally, we illustrate how the proposed method can be used as a pre-processing step when making multivariate predictions.
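The abstract does not spell out the scaling; one plausible reading is the mean absolute SHAP value of each latent component, normalized to sum to one. A minimal sketch under that assumption (`scaled_mean_abs_shap` is a hypothetical helper):

```python
import numpy as np

def scaled_mean_abs_shap(shap_values):
    """shap_values: array of shape (n_samples, n_latent) holding the SHAP
    value of each latent component for one observed variable.
    Returns per-component importances scaled to sum to one."""
    mean_abs = np.abs(shap_values).mean(axis=0)
    return mean_abs / mean_abs.sum()

# Importance of 5 latent components estimated from 1000 SHAP evaluations.
importances = scaled_mean_abs_shap(np.random.randn(1000, 5))
```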
We derive and analyze a symmetric interior penalty discontinuous Galerkin scheme for the approximation of the second-order form of the radiative transfer equation in slab geometry. Using appropriate trace lemmas, the analysis can be carried out as for more standard elliptic problems. Numerical examples confirm the accuracy and stability of the method for different polynomial degrees. For discretization, we employ quad-tree grids, which allow for local refinement in phase space, and we show by example that adaptive methods can efficiently approximate discontinuous solutions. We also investigate the behavior of hierarchical error estimators and of error estimators based on local averaging.
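For orientation, the symmetric interior penalty bilinear form for a model second-order elliptic problem reads as follows (the paper treats the second-order form of the radiative transfer operator in slab geometry, so this is the generic template rather than the paper's exact form):

$$a_h(u,v) = \sum_{K \in \mathcal{T}_h} \int_K \nabla u \cdot \nabla v \, dx - \sum_{F \in \mathcal{F}_h} \int_F \left( \{\!\{\nabla u\}\!\} \cdot n_F \, [\![v]\!] + [\![u]\!] \, \{\!\{\nabla v\}\!\} \cdot n_F \right) ds + \sum_{F \in \mathcal{F}_h} \frac{\sigma}{h_F} \int_F [\![u]\!] \, [\![v]\!] \, ds,$$

where $[\![\cdot]\!]$ and $\{\!\{\cdot\}\!\}$ denote jumps and averages across faces and $\sigma > 0$ is the penalty parameter whose size governs stability.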
Machine Learning (ML) in low-data settings remains an underappreciated yet crucial problem. This challenge is pronounced in low-to-middle-income countries, where access to large datasets is often limited or even absent. Hence, data augmentation methods that increase the sample size of datasets needed for ML are key to unlocking the transformative potential of ML in data-deprived regions and domains. Unfortunately, the limited training set constrains traditional tabular synthetic data generators in their ability to generate the large and diverse augmented dataset needed for ML tasks. To address this technical challenge, we introduce CLLM, which leverages the prior knowledge of Large Language Models (LLMs) for data augmentation in the low-data regime. As with any generative model, however diverse its output, not all of the data generated by LLMs will increase utility for a downstream task. Consequently, we introduce a principled curation process, leveraging learning dynamics coupled with confidence and uncertainty metrics, to obtain a high-quality dataset. Empirically, on multiple real-world datasets, we demonstrate the superior performance of LLMs in the low-data regime compared to conventional generators. We further show that our curation mechanism improves the downstream performance of all generators, including LLMs. Additionally, we provide insights into the LLM generation and curation mechanisms, shedding light on the features that enable them to output high-quality augmented datasets. CLLM paves the way for wider usage of ML in data-scarce domains and regions by allying the strengths of LLMs with a robust data-centric approach.
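As one hedged illustration of curation via learning dynamics (the paper's exact confidence and uncertainty metrics are not specified here; the `curate` helper and its thresholds are hypothetical), a sketch:

```python
import numpy as np

def curate(per_epoch_probs, conf_min=0.7, unc_max=0.2):
    """per_epoch_probs: array of shape (n_epochs, n_samples) with the
    probability a proxy classifier assigns to each synthetic sample's
    label at each training epoch.
    Keeps samples the model learns confidently and consistently."""
    confidence = per_epoch_probs.mean(axis=0)    # average probability over training
    uncertainty = per_epoch_probs.std(axis=0)    # epoch-to-epoch variability
    return (confidence >= conf_min) & (uncertainty <= unc_max)

# Boolean mask selecting the curated subset of 500 synthetic samples.
mask = curate(np.random.rand(10, 500))
```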
We prove the linear orbital stability of spectrally stable stationary discrete shock profiles for conservative finite difference schemes applied to systems of conservation laws. The proof relies on an accurate description of the pointwise asymptotic behavior of the Green's function associated with those discrete shock profiles, improving on the result of Lafitte-Godillon [God03]. The main novelty of this stability result is that it applies to a fairly large family of schemes that introduce artificial, possibly high-order, viscosity. The result is obtained under a sharp spectral assumption rather than by imposing a smallness assumption on the shock amplitude.
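For context, a conservative finite difference scheme for a system of conservation laws $\partial_t u + \partial_x f(u) = 0$ takes the standard form

$$u_j^{n+1} = u_j^n - \frac{\Delta t}{\Delta x} \left( F\!\left(u_{j-p+1}^n, \dots, u_{j+q}^n\right) - F\!\left(u_{j-p}^n, \dots, u_{j+q-1}^n\right) \right),$$

with a numerical flux $F$ consistent with $f$, i.e. $F(u, \dots, u) = f(u)$; a stationary discrete shock profile is a steady solution of this recursion connecting the two end states of the shock.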
This paper presents a learnable solver tailored to discretized linear partial differential equations (PDEs). The solver requires only problem-specific training data, without requiring specialized expertise. Its development is anchored by three core principles: (1) a multilevel hierarchy to promote rapid convergence, (2) linearity with respect to the right-hand side of the equation, and (3) weight sharing across levels to facilitate adaptability to various problem sizes. Built on these principles, we introduce a network adept at solving PDEs discretized on structured grids, even when faced with heterogeneous coefficients. The cornerstone of our solver is the convolutional neural network (CNN), chosen for its capacity to learn from structured data and for its computation pattern, which resembles that of multigrid components. To evaluate its effectiveness, the solver was trained to solve convection-diffusion equations featuring heterogeneous diffusion coefficients. The solver exhibited swift convergence to high accuracy over a range of grid sizes, extending from $31 \times 31$ to $4095 \times 4095$. Remarkably, our method outperformed the classical Geometric Multigrid (GMG) solver, demonstrating a speedup of approximately 3 to 8 times. Furthermore, we explored the solver's generalizability to untrained coefficient distributions and found consistent reliability: when trained on a mixed coefficient distribution, the solver generalizes nearly as effectively to all types of coefficient distributions.
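A minimal structural sketch of the three principles in PyTorch: bias-free convolutions keep the map linear in the right-hand side, the same smoother weights are reused on every level, and a recursive cycle provides the multilevel hierarchy. This omits the residual corrections with the actual discretized operator that a real learned solver would use, and the architecture is illustrative, not the paper's network:

```python
import torch
import torch.nn as nn

class MiniMGSolver(nn.Module):
    def __init__(self):
        super().__init__()
        # bias=False keeps every layer linear in the right-hand side
        self.smoother = nn.Conv2d(1, 1, 3, padding=1, bias=False)
        self.restrict = nn.Conv2d(1, 1, 3, stride=2, padding=1, bias=False)
        self.prolong = nn.ConvTranspose2d(1, 1, 4, stride=2, padding=1, bias=False)

    def forward(self, rhs, levels=3):
        # The same smoother weights are applied on every level (weight sharing),
        # which lets one set of parameters serve different problem sizes.
        if levels == 1:
            return self.smoother(rhs)                # coarsest-level "solve"
        u = self.smoother(rhs)                       # pre-smoothing
        coarse = self.restrict(rhs)                  # restrict to the coarser grid
        u = u + self.prolong(self.forward(coarse, levels - 1))  # coarse correction
        return self.smoother(u)                      # post-smoothing

solver = MiniMGSolver()
u = solver(torch.randn(1, 1, 64, 64))  # power-of-two size keeps shapes aligned
```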
We present ParrotTTS, a modularized text-to-speech synthesis model leveraging disentangled self-supervised speech representations. It can train a multi-speaker variant effectively using transcripts from a single speaker. ParrotTTS adapts to a new language in a low-resource setup and generalizes to languages not seen while training the self-supervised backbone. Moreover, without training on bilingual or parallel examples, ParrotTTS can transfer voices across languages while preserving speaker-specific characteristics, e.g., synthesizing fluent Hindi speech using a French speaker's voice and accent. We present extensive results in monolingual and multilingual scenarios. ParrotTTS outperforms state-of-the-art multilingual TTS models while using only a fraction of the paired data required by the latter.
Advances in survival analysis have facilitated unprecedented flexibility in data modeling, yet there remains a lack of tools for graphically illustrating the influence of continuous covariates on predicted survival outcomes. We propose using a colored contour plot to depict the predicted survival probabilities over time, and we provide a Shiny app and an R package as implementations of this tool. Our approach supports conventional models, including the Cox and Fine-Gray models, but it shines when coupled with cutting-edge machine learning models such as random survival forests and deep neural networks.
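A minimal sketch of the plotting idea, with a toy Weibull proportional-hazards model standing in for a fitted survival model (the Shiny app and R package are the actual implementations; everything below is illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy proportional-hazards model: S(t | x) = exp(-H0(t) * exp(beta * x)),
# with a Weibull baseline cumulative hazard H0(t) = (t / 5) ** 1.5.
beta = 0.8
t = np.linspace(0.01, 10, 200)      # follow-up time
x = np.linspace(-2, 2, 200)         # standardized continuous covariate
T, X = np.meshgrid(t, x)
S = np.exp(-((T / 5.0) ** 1.5) * np.exp(beta * X))

# Colored contour plot: time on the x-axis, covariate on the y-axis,
# predicted survival probability as color.
fig, ax = plt.subplots()
cf = ax.contourf(T, X, S, levels=20, cmap="viridis")
fig.colorbar(cf, label="Predicted survival probability")
ax.set_xlabel("Time")
ax.set_ylabel("Covariate value")
plt.show()
```

Replacing the toy survival function with predictions from a fitted Cox model, random survival forest, or neural network yields the plot the abstract describes.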
In this paper, a new two-relaxation-time regularized (TRT-R) lattice Boltzmann (LB) model for the convection-diffusion equation (CDE) with variable coefficients is proposed. Within this framework, we first derive a TRT-R collision operator by constructing a new regularization procedure through a high-order Hermite expansion of the non-equilibrium distribution. Then, a first-order discrete-velocity form of the discrete source term is introduced to improve the accuracy of the source term. Finally, and most importantly, a new first-order space-derivative auxiliary term is proposed to recover the correct CDE with variable coefficients. To evaluate the model, we simulate the classic benchmark problem of a rotating Gaussian pulse. The results show that our model has better accuracy, stability, and convergence than other popular LB models, especially for large time steps.
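For reference, a common form of the target equation is the convection-diffusion equation with variable coefficients (the paper's precise form, e.g. of the source term, may differ):

$$\partial_t \phi + \nabla \cdot (\boldsymbol{u} \, \phi) = \nabla \cdot \big( D(\boldsymbol{x}, t) \, \nabla \phi \big) + S,$$

where $\phi$ is the transported scalar, $\boldsymbol{u}$ the velocity field, $D$ the space- and time-dependent diffusion coefficient, and $S$ a source term.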
Sentence embeddings induced with various transformer architectures encode much semantic and syntactic information in a distributed manner in a one-dimensional array. We investigate whether specific grammatical information can be accessed in these distributed representations. Using data from a task developed to test rule-like generalizations, our experiments on detecting subject-verb agreement yield several promising results. First, we show that while the usual sentence representations encoded as one-dimensional arrays do not easily support the extraction of rule-like regularities, a two-dimensional reshaping of these vectors allows various learning architectures to access such information. Next, we show that various architectures can detect patterns in these two-dimensional reshaped sentence embeddings and successfully learn a model from smaller amounts of simpler training data that performs well on more complex test data. This indicates that current sentence embeddings contain information that is regularly distributed and that can be captured when the embeddings are reshaped into higher-dimensional arrays. Our results shed light on the representations produced by language models and help move towards developing few-shot learning approaches.
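A minimal sketch of the reshaping idea: a 768-dimensional embedding viewed as a $24 \times 32$ grid fed to a small 2D-CNN probe (the factorization and probe architecture are illustrative, not the paper's exact setup):

```python
import torch
import torch.nn as nn

# Reshape 768-dimensional sentence embeddings into 24 x 32 "images"
# so a 2D CNN can look for locally distributed regularities.
emb = torch.randn(16, 768)          # batch of sentence embeddings
grid = emb.view(16, 1, 24, 32)      # add a channel dim, reshape to 2D

probe = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                # e.g., agreement vs. violation
)
logits = probe(grid)                # shape: (16, 2)
```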
In speech recognition applications, it is important to recognize context-specific rare words, such as proper nouns. The Tree-constrained Pointer Generator (TCPGen), which efficiently biases such words with a prefix tree, has shown promise for this purpose. While the original TCPGen relies on grapheme-based encoding, we propose extending it with phoneme-aware encoding to better recognize words with unusual pronunciations. As TCPGen handles biasing words as subword units, we obtain subword-level phoneme-aware encodings by using an alignment between phonemes and subwords. Furthermore, we propose injecting phoneme-level predictions from CTC into the queries of TCPGen so that the model better interprets the phoneme-aware encodings. We conducted ASR experiments with TCPGen for an RNN transducer and observed that the proposed phoneme-aware encoding outperformed ordinary grapheme-based encoding on both the English LibriSpeech and Japanese CSJ datasets, demonstrating the robustness of our approach across linguistically diverse languages.
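A minimal sketch of one way to pool phoneme embeddings into subword-level phoneme-aware encodings given a phoneme-to-subword alignment (mean pooling and all names below are assumptions, not necessarily the paper's aggregation):

```python
import numpy as np

def subword_phoneme_encoding(phoneme_embs, alignment, n_subwords):
    """Average phoneme embeddings over the phonemes aligned to each
    subword to obtain a subword-level phoneme-aware encoding.

    phoneme_embs: (n_phonemes, d) phoneme embedding matrix
    alignment: list mapping each phoneme index to a subword index"""
    d = phoneme_embs.shape[1]
    out = np.zeros((n_subwords, d))
    counts = np.zeros(n_subwords)
    for p, s in enumerate(alignment):
        out[s] += phoneme_embs[p]
        counts[s] += 1
    return out / np.maximum(counts, 1)[:, None]

# A word split into 3 subwords whose 10 phonemes align 3 / 4 / 3.
enc = subword_phoneme_encoding(np.random.randn(10, 64),
                               [0, 0, 0, 1, 1, 1, 1, 2, 2, 2], 3)
```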