We propose a second-order accurate, semi-implicit, well-balanced finite volume scheme for the equations of ideal magnetohydrodynamics (MHD) including gravitational source terms. The scheme treats all terms associated with the acoustic pressure implicitly while keeping the remaining terms in the explicit sub-system. This semi-implicit approach makes the method particularly well suited for problems in the low Mach regime. We combine the semi-implicit scheme with the deviation well-balancing technique and prove that it maintains magnetohydrostatic equilibrium solutions up to rounding errors. To preserve the divergence-free property of the magnetic field enforced by the solenoidal constraint, we incorporate a constrained transport method into the semi-implicit framework. Second-order accuracy is achieved by means of a standard total variation diminishing (TVD) spatial reconstruction and an asymptotic preserving (AP) time stepping algorithm built upon implicit-explicit (IMEX) Runge-Kutta integrators. Numerical tests in the low Mach regime and near magnetohydrostatic equilibria support the low Mach and well-balanced properties of the numerical method.
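The deviation well-balancing idea is easy to illustrate on a much simpler system than MHD. The following sketch is a toy, not the paper's scheme: it applies the technique to 1D isothermal Euler with gravity, using a first-order Rusanov flux and forward Euler time stepping in place of the TVD reconstruction and IMEX integrator. Because the discrete residual of the known equilibrium is subtracted from every update, the equilibrium is an exact fixed point of the scheme.

```python
import numpy as np

# Toy deviation well-balancing: 1D isothermal Euler with gravity (p = rho, c = 1).
# Hydrostatic equilibrium: u = 0, rho_eq(x) = exp(-g x).
n, g = 100, 1.0
dx, dt = 1.0 / n, 2e-3
x = (np.arange(n) + 0.5) * dx
q_eq = np.stack([np.exp(-g * x), np.zeros(n)])   # conserved variables (rho, rho*u)

def residual(q):
    """Rusanov flux differences plus gravity source, with outflow ghost cells."""
    rho, m = q
    u = m / rho
    f = np.stack([m, m * u + rho])               # isothermal flux, pressure p = rho
    a = np.abs(u) + 1.0                          # local max wave speed (c = 1)
    qe = np.pad(q, ((0, 0), (1, 1)), mode='edge')
    fe = np.pad(f, ((0, 0), (1, 1)), mode='edge')
    ae = np.pad(a, 1, mode='edge')
    amax = np.maximum(ae[:-1], ae[1:])
    F = 0.5 * (fe[:, :-1] + fe[:, 1:]) - 0.5 * amax * (qe[:, 1:] - qe[:, :-1])
    src = np.stack([np.zeros(n), -g * rho])
    return -(F[:, 1:] - F[:, :-1]) / dx + src

r_eq = residual(q_eq)                  # discrete residual of the equilibrium state
q = q_eq.copy()
for _ in range(500):
    q = q + dt * (residual(q) - r_eq)  # deviation form: q_eq is an exact fixed point
print("max |rho - rho_eq| =", np.abs(q[0] - q_eq[0]).max())  # zero to machine precision
```

Without the subtraction of `r_eq`, the first-order scheme drifts off the equilibrium at truncation-error level; with it, perturbation studies near equilibrium remain meaningful even on coarse grids.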
We introduce an efficient numerical implementation of a Markov Chain Monte Carlo method, introduced theoretically by Zappa, Holmes-Cerfon, and Goodman (2018), to sample a probability distribution on a manifold, where the manifold is defined as the level set of constraint functions and the probability distribution may involve the pseudodeterminant of the Jacobian of the constraints, as arises in physical sampling problems. The algorithm is easy to implement and scales well to problems with thousands of dimensions and with complex sets of constraints, provided their Jacobian retains sparsity. It uses direct linear algebra and requires a single matrix factorization per proposal point, which makes it more efficient than previously proposed methods, though this factorization becomes the computational bottleneck of the algorithm in high dimensions. We test the algorithm on several examples inspired by soft-matter physics and materials science to study its complexity and properties.
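As a structural illustration (not the paper's optimized implementation), the sketch below runs the tangent-step/project/reverse-check move of Zappa, Holmes-Cerfon, and Goodman on a toy manifold, the unit sphere in R^3, with a uniform target and without the pseudodeterminant factor. The small linear systems built from the constraint Jacobian are where, in high dimensions, the single sparse factorization per proposal would enter.

```python
import numpy as np

rng = np.random.default_rng(0)

def q(x):                                     # constraint: unit sphere in R^3
    return np.array([x @ x - 1.0])

def Jq(x):                                    # Jacobian of q, shape (m, d)
    return 2.0 * x[None, :]

def tangent_part(u, Q):
    """Project u onto the tangent space {v : Q v = 0}."""
    return u - Q.T @ np.linalg.solve(Q @ Q.T, Q @ u)

def project(y, Q):
    """Newton iteration for a with q(y + Q^T a) = 0 (projection along normals)."""
    a = np.zeros(Q.shape[0])
    for _ in range(30):
        z = y + Q.T @ a
        r = q(z)
        if np.linalg.norm(r) < 1e-12:
            return z, True
        a -= np.linalg.solve(Jq(z) @ Q.T, r)
    return y, False

def step(x, sigma=0.4):
    Qx = Jq(x)
    v = tangent_part(rng.normal(size=x.size) * sigma, Qx)  # tangential proposal
    y, ok = project(x + v, Qx)
    if not ok:
        return x                                           # projection failed: reject
    Qy = Jq(y)
    vrev = tangent_part(x - y, Qy)                         # reverse tangent move
    xr, ok = project(y + vrev, Qy)
    if not ok or np.linalg.norm(xr - x) > 1e-8:            # reverse check (reversibility)
        return x
    log_acc = (v @ v - vrev @ vrev) / (2 * sigma**2)       # uniform target on the sphere
    return y if np.log(rng.uniform()) < log_acc else x

x = np.array([1.0, 0.0, 0.0])
samples = []
for _ in range(5000):
    x = step(x)
    samples.append(x)
print("mean |q(x)| over chain:", np.mean([abs(q(s)[0]) for s in samples]))
```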
In the logic synthesis stage, the structural transformations offered by a synthesis tool must be combined into optimization sequences and applied to the circuit to meet the specified area and delay targets. However, running logic synthesis optimization sequences is time-consuming, and predicting the quality of results (QoR) of a synthesis optimization sequence for a given circuit can help engineers find a better sequence faster. In this work, we propose a deep learning method to predict the QoR of unseen circuit-optimization-sequence pairs. Specifically, the structural transformations are translated into vectors by embedding methods, and a Transformer, adopted from natural language processing (NLP), is used to extract the features of the optimization sequences. In addition, to allow the model's predictions to generalize from circuit to circuit, each circuit is represented as an adjacency matrix and a feature matrix, and graph neural networks (GNNs) are used to extract its structural features. We evaluate the Transformer together with three typical GNNs, adopting the Transformer and each GNN as a joint learning policy for QoR prediction on unseen circuit-optimization-sequence pairs, and benchmark the resulting combinations. The experimental results show that the joint learning of Transformer and GraphSage gives the best results, with a Mean Absolute Error (MAE) of 0.412 on the predictions.
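A minimal sketch of such a joint architecture is given below. All names (SeqEncoder, MeanSage, QoRNet) and hyperparameters are hypothetical, and the GraphSAGE-style layer is reduced to mean-neighbor aggregation; the paper's exact architecture may differ.

```python
import torch
import torch.nn as nn

class SeqEncoder(nn.Module):
    """Embed optimization-sequence tokens and encode them with a Transformer."""
    def __init__(self, n_ops, d=64):
        super().__init__()
        self.emb = nn.Embedding(n_ops, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, num_layers=2)
    def forward(self, seq):                          # seq: (B, L) int64 token IDs
        return self.enc(self.emb(seq)).mean(dim=1)   # (B, d) pooled sequence feature

class MeanSage(nn.Module):
    """GraphSAGE-style layer: concat(self, mean of neighbors) -> linear -> ReLU."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(2 * d_in, d_out)
    def forward(self, X, A):                         # A: (N, N) row-normalized adjacency
        return torch.relu(self.lin(torch.cat([X, A @ X], dim=-1)))

class QoRNet(nn.Module):
    """Joint model: sequence features + circuit-graph features -> QoR estimate."""
    def __init__(self, n_ops, d_node, d=64):
        super().__init__()
        self.seq = SeqEncoder(n_ops, d)
        self.g1, self.g2 = MeanSage(d_node, d), MeanSage(d, d)
        self.head = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))
    def forward(self, seq, X, A):
        hs = self.seq(seq)                                        # (B, d)
        hg = self.g2(self.g1(X, A), A).mean(dim=0, keepdim=True)  # (1, d) graph readout
        return self.head(torch.cat([hs, hg.expand_as(hs)], dim=-1)).squeeze(-1)

# usage with random stand-in data
seq = torch.randint(0, 30, (5, 12))             # 5 sequences of 12 transformation IDs
X, A = torch.rand(40, 8), torch.rand(40, 40)    # 40-node circuit graph
A = A / A.sum(1, keepdim=True)                  # row-normalize the adjacency
print(QoRNet(n_ops=30, d_node=8)(seq, X, A).shape)   # torch.Size([5])
```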
We consider the solution of the biharmonic equation in mixed form discretized by Hybrid High-Order (HHO) methods. The two resulting second-order elliptic problems can be decoupled via the introduction of a new unknown corresponding to the boundary value of the solution of the first Laplacian problem. This technique yields a global linear problem that can be solved iteratively via a Krylov-type method: at each iteration of the scheme, two second-order elliptic problems have to be solved and a normal derivative on the boundary has to be computed. In this work, we specialize this scheme to the HHO discretization. To this end, we propose an explicit technique to compute the discrete normal derivative of an HHO solution of a Laplacian problem. Moreover, we show that the resulting discrete scheme is well-posed. Finally, a new preconditioner is designed to speed up the convergence of the Krylov method. Numerical experiments assess the performance of the proposed iterative algorithm on both two- and three-dimensional test cases.
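The decoupling strategy itself can be demonstrated in one dimension, where both Laplacian solves reduce to tridiagonal systems. The sketch below is a finite-difference toy, not an HHO discretization: the unknown boundary values of w = u'' are determined by GMRES so that the clamped condition u' = 0 holds at the boundary, and each Krylov iteration indeed performs two second-order solves plus a normal-derivative evaluation.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve, LinearOperator, gmres

# 1D toy: u'''' = f on (0,1) with clamped BCs u(0)=u(1)=0, u'(0)=u'(1)=0.
# Mixed form: u'' = w, w'' = f; lam = (w(0), w(1)) is the interface unknown.
n = 200
h = 1.0 / n
A = (diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n - 1, n - 1)) / h**2).tocsc()
f = np.ones(n - 1)                               # interior load, f = 1

def poisson(rhs, g0, g1):
    """Solve v'' = rhs with v(0)=g0, v(1)=g1; return v at all n+1 nodes."""
    b = rhs.copy()
    b[0] -= g0 / h**2
    b[-1] -= g1 / h**2
    return np.concatenate([[g0], spsolve(A, b), [g1]])

def normal_derivative(lam):
    """Boundary guess lam -> one-sided u' at both ends (two nested solves)."""
    lam = np.asarray(lam).ravel()
    w = poisson(f, lam[0], lam[1])
    u = poisson(w[1:-1], 0.0, 0.0)
    return np.array([(u[1] - u[0]) / h, (u[-2] - u[-1]) / h])

r0 = normal_derivative(np.zeros(2))              # affine offset of the boundary map
S = LinearOperator((2, 2), matvec=lambda lam: normal_derivative(lam) - r0)
lam, info = gmres(S, -r0)                        # Krylov solve of the interface problem
print("lam:", lam, "(exact value of w at both ends is 1/12)")
print("residual u' at boundary:", normal_derivative(lam))   # ~ 0
```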
Human infants acquire their verbal lexicon with minimal prior knowledge of language, relying on the statistical properties of phonological distributions and the co-occurrence of other sensory stimuli. This study proposes a novel fully unsupervised learning method for discovering speech units that uses phonological information as a distributional cue and object information as a co-occurrence cue. The proposed method acquires words and phonemes from speech signals and can utilize object information from multiple modalities (vision, tactile, and auditory) simultaneously. It combines the nonparametric Bayesian double articulation analyzer (NPB-DAA), which discovers phonemes and words from phonological features, with multimodal latent Dirichlet allocation (MLDA), which categorizes the multimodal information obtained from objects. In an experiment, the proposed method showed higher word discovery performance than the baseline methods. Words that expressed the characteristics of objects (i.e., words corresponding to nouns and adjectives) were segmented accurately. Furthermore, we examined how learning performance is affected by differences in the weighting of linguistic information: increasing the weight of the word modality further improved performance relative to the fixed-weight condition.
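The categorization half of this pipeline can be sketched with ordinary LDA as a stand-in for MLDA. In the toy below every quantity is synthetic: objects emit bag-of-features counts in three modalities, and the word modality is up-weighted simply by scaling its counts, which is one crude way to mimic modality weighting.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Synthetic multimodal bag-of-features counts per object over three modalities
# (vision: 8 features, tactile: 4, words: 6), drawn from 3 latent categories.
protos = rng.dirichlet(np.ones(18), size=3)
labels = rng.integers(0, 3, size=30)
X = np.array([rng.multinomial(80, protos[c]) for c in labels])

w_word = 3                   # weight on the word modality, mimicked by count scaling
X[:, 12:] *= w_word

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
theta = lda.transform(X)     # per-object category proportions
pred = theta.argmax(axis=1)
print("objects per predicted category:", np.bincount(pred, minlength=3))
```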
Statistical depth functions provide measures of the outlyingness, or centrality, of the elements of a space with respect to a distribution. Depth is a nonparametric concept applicable to spaces of any dimension, for instance multivariate or functional. Liu and Singh (1993) presented a multivariate two-sample test based on depth ranks. In this paper we improve the power of the associated test statistic and extend its applicability to functional data. In doing so, we obtain a more natural test statistic that is symmetric in both samples. We derive the asymptotic null distribution of the proposed test statistic and prove the validity of the testing procedure for functional data. Finally, the finite-sample performance of the test for functional data is illustrated by means of a simulation study and a real-data analysis of annual temperature curves from ocean drifters.
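To make the depth-rank idea concrete, here is a toy permutation test built from Mahalanobis depth and a symmetrized version of the Liu-Singh quality index Q(F,G) = P(D(X;F) <= D(Y;F)). This is an illustration under assumed choices (depth function, symmetrization, permutation calibration), not the statistic or asymptotic calibration derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def mahalanobis_depth(pts, sample):
    """Simple depth: 1 / (1 + squared Mahalanobis distance to the sample mean)."""
    mu = sample.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(sample.T))
    d2 = np.einsum('ij,jk,ik->i', pts - mu, Sinv, pts - mu)
    return 1.0 / (1.0 + d2)

def quality_index(X, Y):
    """Estimate Q(F, G) = P(D(X;F) <= D(Y;F)) by comparing all depth pairs."""
    dX = mahalanobis_depth(X, X)
    dY = mahalanobis_depth(Y, X)
    return (dX[:, None] <= dY[None, :]).mean()

def symmetric_stat(X, Y):
    # symmetric in both samples: deviations of both quality indices from 1/2
    return abs(quality_index(X, Y) - 0.5) + abs(quality_index(Y, X) - 0.5)

X = rng.normal(0.0, 1.0, (80, 2))
Y = rng.normal(0.7, 1.0, (80, 2))        # location shift: Y is outlying w.r.t. F
obs = symmetric_stat(X, Y)
Z = np.vstack([X, Y])
perm = [symmetric_stat(*np.split(rng.permutation(Z), [80])) for _ in range(500)]
print("permutation p-value:", np.mean([p >= obs for p in perm]))
```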
We propose a numerically efficient method for evaluating the random-coding union bound with parameter $s$ on the error probability achievable in the finite-blocklength regime by a pilot-assisted transmission scheme employing Gaussian codebooks and operating over a memoryless block-fading channel. Our method relies on the saddlepoint approximation, which, differently from previous results reported for similar scenarios, is performed with respect to the number of fading blocks (also known as diversity branches) spanned by each codeword, instead of the number of channel uses per block. This approach avoids a costly numerical averaging of the error probability over the realizations of the fading process and of its pilot-based estimate at the receiver, and it significantly reduces the number of channel realizations required to estimate the error probability accurately. Our numerical experiments for both single-antenna communication links and massive multiple-input multiple-output (MIMO) networks show that, when two or more diversity branches are available, the saddlepoint approximation with respect to the number of fading blocks estimates the error probability accurately while requiring about two orders of magnitude fewer Monte Carlo samples than the saddlepoint approximation with respect to the number of channel uses per block.
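The saddlepoint machinery itself is standard and easy to demonstrate on a textbook case. The sketch below applies the Lugannani-Rice formula to the tail of a sum of exponential random variables, where the cumulant generating function and the exact (Gamma) tail are both available in closed form; it illustrates the technique only, not the paper's bound.

```python
import numpy as np
from scipy.stats import norm, gamma

# Lugannani-Rice saddlepoint approximation of P(mean of n Exp(1) samples >= x).
# CGF of Exp(1): K(s) = -log(1 - s), K'(s) = 1/(1-s), K''(s) = 1/(1-s)^2.
def saddlepoint_tail(x, n):
    s = 1.0 - 1.0 / x                        # solves K'(s) = x in closed form
    K = -np.log1p(-s)
    w = np.sign(s) * np.sqrt(2 * n * (s * x - K))
    u = s * np.sqrt(n) / (1 - s)             # s * sqrt(n * K''(s))
    return norm.sf(w) + norm.pdf(w) * (1 / u - 1 / w)

n, x = 4, 1.8
print("saddlepoint:", saddlepoint_tail(x, n))
print("exact      :", gamma.sf(n * x, a=n))   # sum of n Exp(1) is Gamma(n, 1)
```

The two printed values closely agree even for n = 4, which is the kind of accuracy that makes saddlepoint expansions attractive when exact tails or plain Monte Carlo estimation would be too costly.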
The probe and singular sources methods are two well-known classical direct reconstruction methods in inverse obstacle problems governed by partial differential equations. Both methods share the notion of indicator functions, which are defined outside an unknown obstacle and blow up on its surface; however, their formulations appear completely different. In this paper, considering an inverse obstacle problem governed by the Laplace equation in a bounded domain as a prototype case, we introduce an integrated version of the probe and singular sources methods that fills the gap between their indicator functions. The main result consists of three parts. First, the singular sources method is formulated in combination with the probe method and the notion of the Carleman function. Second, the indicator functions of both methods are obtained by decomposing a third indicator function in two different ways; this third indicator function blows up on both the outer surface and the obstacle surface. Third, the probe and singular sources methods are reformulated, and it is shown that the indicator functions on which the reformulated methods are based coincide completely with each other. As a byproduct, it turns out that the reformulated singular sources method also has the Side B of the probe method, that is, a characterization of the unknown obstacle by means of the blow-up of an indicator sequence.
We analyze the dynamics of streaming stochastic gradient descent (SGD) in the high-dimensional limit when applied to generalized linear models and multi-index models (e.g., logistic regression, phase retrieval) with general data covariance. In particular, we demonstrate a deterministic equivalent of SGD in the form of a system of ordinary differential equations that describes a wide class of statistics, such as the risk and other measures of sub-optimality. This equivalence holds with overwhelming probability when the model parameter count grows proportionally to the number of data samples. The framework allows us to obtain learning-rate thresholds for the stability of SGD as well as convergence guarantees. In addition to the deterministic equivalent, we introduce an SDE with a simplified diffusion coefficient (homogenized SGD), which allows us to analyze the dynamics of general statistics of the SGD iterates. Finally, we illustrate the theory on some standard examples and present numerical simulations that match it closely.
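The flavor of such a deterministic equivalent can be seen in the simplest possible setting. For streaming SGD on noiseless least squares with isotropic Gaussian data and stepsize gamma/d, a direct computation gives E[R(t + 1/d)] ~ R(t)(1 - (2*gamma - gamma^2)/d), so the risk concentrates on the ODE solution R(t) = R(0) exp(-(2*gamma - gamma^2) t) as the dimension d grows, with stability for gamma < 2. The sketch below checks this numerically; it is a toy consistency check, not the paper's general result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Streaming SGD on noiseless least squares with isotropic Gaussian data.
# Risk R = 0.5 * ||theta - theta*||^2; one fresh sample per step, time t = steps / d.
def risk_curve(d, gamma=0.5, t_max=4.0):
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)                # theta - theta*, so R(0) = 0.5
    ts, risks = [], []
    for k in range(int(t_max * d)):
        a = rng.normal(size=d)
        v -= (gamma / d) * (a @ v) * a    # SGD step on a fresh sample
        if k % (d // 10) == 0:
            ts.append(k / d)
            risks.append(0.5 * (v @ v))
    return np.array(ts), np.array(risks)

gamma = 0.5
for d in (100, 1000):
    ts, risks = risk_curve(d, gamma)
    theory = 0.5 * np.exp(-(2 * gamma - gamma**2) * ts)   # deterministic equivalent
    print(f"d={d}: max |risk - ODE| = {np.abs(risks - theory).max():.4f}")
```

The maximal deviation from the ODE curve shrinks as d increases, which is the concentration behavior the deterministic equivalent formalizes.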
As an alternative to classical numerical solvers for partial differential equations (PDEs) subject to boundary value constraints, there has been a surge of interest in neural networks that can solve such problems efficiently. In this work, we design a general solution operator for two different time-independent PDEs using graph neural networks (GNNs) and spectral graph convolutions. We train the networks on data simulated with a finite element solver on a variety of shapes and inhomogeneities. In contrast to previous works, we focus on the ability of the trained operator to generalize to previously unseen scenarios. Specifically, we test generalization to meshes with different shapes and to superpositions of solutions for varying numbers of inhomogeneities. We find that training on a diverse dataset with substantial variation in the finite element meshes is a key ingredient for achieving good generalization in all cases. With this, we believe that GNNs can be used to learn solution operators that generalize over a range of properties and produce solutions much faster than a generic solver. Our dataset, which we make publicly available, can be used and extended to verify the robustness of these models under varying conditions.
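A spectral graph convolution of the kind mentioned here can be written as a learned polynomial in the graph Laplacian, g(L) = sum_k L^k W_k, which filters node signals in the Laplacian eigenbasis without an explicit eigendecomposition. The sketch below is a minimal hypothetical layer on a toy path graph standing in for a finite element mesh graph; it is not the paper's architecture.

```python
import torch
import torch.nn as nn

class SpectralConv(nn.Module):
    """Polynomial spectral filter g(L) = sum_k L^k W_k over node features."""
    def __init__(self, d_in, d_out, K=3):
        super().__init__()
        self.W = nn.Parameter(0.1 * torch.randn(K, d_in, d_out))
    def forward(self, X, L):                   # X: (N, d_in), L: (N, N) norm. Laplacian
        out, P = torch.zeros(X.shape[0], self.W.shape[2]), X
        for k in range(self.W.shape[0]):
            out = out + P @ self.W[k]          # contribution of L^k X
            P = L @ P
        return torch.relu(out)

def normalized_laplacian(A):
    d = A.sum(1).clamp(min=1e-9).pow(-0.5)
    return torch.eye(A.shape[0]) - d[:, None] * A * d[None, :]

# toy "mesh": a path graph standing in for a finite-element mesh graph
N = 16
A = torch.zeros(N, N)
i = torch.arange(N - 1)
A[i, i + 1] = A[i + 1, i] = 1.0
L = normalized_laplacian(A)

conv1, conv2 = SpectralConv(2, 32), SpectralConv(32, 1)
X = torch.rand(N, 2)             # per-node inputs, e.g. coordinates and source term
u = conv2(conv1(X, L), L)        # predicted per-node solution values
print(u.shape)                   # torch.Size([16, 1])
```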
Due to their inherent capability to semantically align aspects with their context words, attention mechanisms and Convolutional Neural Networks (CNNs) are widely applied to aspect-based sentiment classification. However, these models lack a mechanism to account for relevant syntactic constraints and long-range word dependencies, and hence may mistakenly treat syntactically irrelevant context words as clues for judging aspect sentiment. To tackle this problem, we propose to build a Graph Convolutional Network (GCN) over the dependency tree of a sentence to exploit syntactic information and word dependencies, and on this basis we propose a novel aspect-specific sentiment classification framework. Experiments on three benchmark collections show that our model is comparable in effectiveness to a range of state-of-the-art models, and further demonstrate that both syntactic information and long-range word dependencies are properly captured by the graph convolution structure.
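Concretely, such a model builds an adjacency matrix from the dependency parse, stacks GCN layers over token embeddings, and pools the representations of the aspect tokens for classification. The sketch below is a hypothetical minimal version: the parse, dimensions, and pooling are illustrative assumptions, and a real system would obtain embeddings from a trained encoder.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
    def forward(self, H, A_hat):               # A_hat: normalized adjacency + self-loops
        return torch.relu(A_hat @ self.lin(H))

def normalize(A):
    A = A + torch.eye(A.shape[0])              # add self-loops
    return A / A.sum(1, keepdim=True)          # row normalization

# "the battery life is great" -- hypothetical dependency heads (-1 marks the root)
heads = [2, 2, 4, 4, -1]
n = len(heads)
A = torch.zeros(n, n)
for i, h in enumerate(heads):
    if h >= 0:
        A[i, h] = A[h, i] = 1.0                # undirected edges along the parse tree
A_hat = normalize(A)

H = torch.randn(n, 50)                         # token embeddings (e.g. from a BiLSTM)
g1, g2 = GCNLayer(50, 50), GCNLayer(50, 50)
H2 = g2(g1(H, A_hat), A_hat)

mask = torch.tensor([0.0, 1.0, 1.0, 0.0, 0.0]) # aspect term: "battery life"
h_aspect = (H2 * mask[:, None]).sum(0) / mask.sum()   # aspect-specific pooling
logits = nn.Linear(50, 3)(h_aspect)            # negative / neutral / positive
print(logits.shape)                            # torch.Size([3])
```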