Electroencephalogram (EEG) signals reflect brain activity across different brain states, characterized by distinct frequency distributions. Using multifractal analysis tools, we investigate the scaling behaviour of different classes of EEG signals and artifacts. We show that brain states associated with sleep and general anaesthesia are not, in general, characterized by scale invariance. The lack of scale invariance motivates the development of artifact removal algorithms capable of operating independently at each scale. We examine here the properties of the wavelet quantile normalization (WQN) algorithm, a recently introduced adaptive method for real-time correction of transient artifacts in EEG signals. We establish general results regarding the regularization properties of the WQN algorithm, showing how it can eliminate singularities introduced by artifacts, and we compare it to traditional thresholding algorithms. Furthermore, we show that the algorithm's performance is independent of the wavelet basis. We finally examine its continuity and boundedness properties and illustrate its distinctive non-local action on the wavelet coefficients through pathological examples.
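As a rough illustration of the per-scale mechanism, the following sketch remaps the magnitudes of wavelet coefficients from an artifacted epoch onto the empirical quantiles of a clean reference epoch at the same scale. The function name and the mid-rank quantile grid are ours, not the paper's; this is only the normalization idea, not the full algorithm.

```python
import numpy as np

def wqn_remap(c_art, c_ref):
    """Remap the magnitudes of wavelet coefficients of an artifacted
    epoch (c_art) onto the empirical quantiles of a clean reference
    epoch (c_ref) at the same scale, preserving signs (sketch)."""
    n = len(c_art)
    ranks = np.empty(n, dtype=int)
    ranks[np.argsort(np.abs(c_art))] = np.arange(n)
    q = (ranks + 0.5) / n                      # mid-rank quantile levels
    new_abs = np.quantile(np.abs(c_ref), q)    # reference magnitudes
    return np.sign(c_art) * new_abs
```

Because the corrected magnitudes are drawn from the reference distribution, they are bounded by the largest clean coefficient, which reflects the boundedness property discussed above.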
We define some Schnyder-type combinatorial structures on a class of planar triangulations of the pentagon which are closely related to 5-connected triangulations. The combinatorial structures have three incarnations, defined in terms of orientations, corner-labelings, and woods respectively. The wood incarnation consists of 5 spanning trees crossing each other in an orderly fashion. As for Schnyder woods on triangulations, it induces, for each vertex, a partition of the inner triangles into face-connected regions (5~regions here). We show that the induced barycentric vertex placement, where each vertex is at the barycenter of the 5 outer vertices with weights given by the number of faces in each region, yields a planar straight-line drawing.
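The vertex placement itself is a simple weighted average, which the following sketch makes concrete. The outer pentagon coordinates and the face counts are illustrative values, not taken from the paper.

```python
import numpy as np

# Sketch of the barycentric vertex placement described above: an inner
# vertex is placed at the weighted barycenter of the 5 outer pentagon
# vertices, with weights given by the number of faces in each of its
# 5 regions.  Positions and face counts below are hypothetical.
outer = np.array([[np.cos(2 * np.pi * k / 5 + np.pi / 2),
                   np.sin(2 * np.pi * k / 5 + np.pi / 2)] for k in range(5)])
face_counts = np.array([3.0, 1.0, 2.0, 2.0, 1.0])  # one weight per region
position = face_counts @ outer / face_counts.sum()
```

Since the weights are positive and not concentrated on a single outer vertex, the computed position lies strictly inside the pentagon.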
Recently, several algorithms have been proposed for decomposing reactive synthesis specifications into independent and simpler sub-specifications. Inspired by the approach of Antonio Iannopollo (2018), who designed the so-called DC algorithm, we present here a solution that takes his ideas further and provides a mathematical formalisation of the strategy behind DC. We rigorously define the main notions involved in the algorithm, explain the technique, and demonstrate its application on examples. The core technique of DC is the detection of independent variables in linear temporal logic formulae, exploiting the power and efficiency of a model checker. Although the DC algorithm is sound, it is not complete, as its author already pointed out. In this paper, we provide a counterexample demonstrating this fact and propose changes that adapt the original DC strategy to ensure its correctness. The modification of DC and the detailed proof of its soundness and completeness are the main contributions of this work.
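To give a feel for variable-based decomposition, here is a purely syntactic sketch that groups specification variables co-occurring in some conjunct. This is only a coarse over-approximation of the semantic independence check that DC performs with a model checker; the function and its input format are ours.

```python
def partition_variables(conjuncts):
    """Group variables that co-occur in any conjunct of a specification,
    using union-find.  A syntactic stand-in for DC's model-checker-based
    independence detection (illustrative sketch only)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for conj_vars in conjuncts:
        vs = sorted(conj_vars)
        find(vs[0])                 # register singleton conjuncts too
        for v in vs[1:]:
            union(vs[0], v)

    groups = {}
    for v in parent:
        groups.setdefault(find(v), set()).add(v)
    return list(groups.values())
```

Each resulting group of variables would induce one candidate sub-specification; semantic checks are still needed to confirm true independence.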
Non-autoregressive approaches aim to improve the inference speed of translation models by generating the output in a single forward pass. However, these approaches often suffer from a significant drop in translation quality compared to autoregressive models. This paper introduces a series of techniques to enhance the translation quality of Non-Autoregressive Translation (NAT) models while maintaining a substantial acceleration in inference speed. We propose fine-tuning Pretrained Multilingual Language Models (PMLMs) with the CTC loss to train NAT models effectively. Furthermore, we adopt a MASK-insertion scheme for up-sampling instead of token duplication, and we present an embedding distillation method to further enhance performance. In our experiments, our model outperforms the baseline autoregressive model (Transformer \textit{base}) on multiple datasets, including WMT'14 DE$\leftrightarrow$EN, WMT'16 RO$\leftrightarrow$EN, and IWSLT'14 DE$\leftrightarrow$EN. Notably, our model achieves better performance than the baseline autoregressive model on the IWSLT'14 DE$\leftrightarrow$EN and WMT'16 EN$\leftrightarrow$RO datasets, even without using distillation data during training. On the IWSLT'14 DE$\rightarrow$EN dataset, our model achieves a BLEU score of 39.59, setting a new state of the art. Additionally, our model exhibits a speed-up of 16.35 times compared to the autoregressive model.
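The up-sampling contrast can be sketched in a few lines: instead of duplicating each source token to lengthen the CTC decoder input, placeholder mask tokens are inserted between tokens. The function name, the mask string, and the fixed ratio are illustrative assumptions, not the paper's exact scheme.

```python
def upsample_with_mask(tokens, ratio=2, mask="[MASK]"):
    """Up-sample a source sequence for CTC-based NAT decoding by
    inserting mask placeholders after each token, rather than
    duplicating the token itself (illustrative sketch)."""
    out = []
    for t in tokens:
        out.append(t)
        out.extend([mask] * (ratio - 1))
    return out
```

With token duplication the model must disambiguate identical copies, whereas mask slots give the decoder neutral positions to fill, which is one plausible reading of why the scheme helps.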
In spatial regression models, spatial heterogeneity may be considered with either continuous or discrete specifications. The latter involves the delineation of spatially connected regions with homogeneous relationships between variables (spatial regimes). Although various regionalization algorithms have been proposed and studied in the field of spatial analytics, methods to optimize spatial regimes have been largely unexplored. In this paper, we propose two new algorithms for spatial regime delineation, two-stage K-Models and Regional-K-Models. We also extend the classic Automatic Zoning Procedure to the spatial regression context. The proposed algorithms are applied to a series of synthetic datasets and two real-world datasets. Results indicate that all three algorithms achieve superior or comparable performance to existing approaches, and that the two-stage K-Models algorithm largely outperforms existing approaches in model fitting, region reconstruction, and coefficient estimation. Our work enriches the spatial analytics toolbox for exploring spatially heterogeneous processes.
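The core K-Models iteration can be sketched without the spatial contiguity machinery: alternate between fitting one regression model per group and reassigning each observation to the model with the smallest squared residual. This minimal version (with our own function name and no spatial constraint) is only meant to convey the idea.

```python
import numpy as np

def k_models(X, y, k=2, iters=20, seed=0):
    """Minimal K-Models sketch, without the spatial contiguity
    constraint: alternate OLS fits per group with residual-based
    reassignment of observations (illustrative only)."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(y))
    betas = [np.zeros(X.shape[1]) for _ in range(k)]
    for _ in range(iters):
        for j in range(k):
            m = labels == j
            if m.sum() >= X.shape[1]:      # skip degenerate groups
                betas[j] = np.linalg.lstsq(X[m], y[m], rcond=None)[0]
        sq_resid = np.stack([(y - X @ b) ** 2 for b in betas])
        labels = sq_resid.argmin(axis=0)   # best-fitting model wins
    return labels, betas
```

A spatial-regime version would additionally enforce that each label class forms a spatially connected region, which is where the two-stage and regional variants differ.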
Gaussian elimination (GE) is the most widely used dense linear solver. Error analysis of GE with selected pivoting strategies on well-conditioned systems can focus on the behavior of growth factors. Although exponential growth is possible with GE with partial pivoting (GEPP), growth tends to stay much smaller in practice. Support for this behavior was provided last year by Huang and Tikhomirov's average-case analysis of GEPP, which showed that GEPP growth factors stay at most polynomial with very high probability under small Gaussian perturbations. GE with complete pivoting (GECP) has also seen much recent interest, with improved lower bounds on worst-case GECP growth provided by Edelman and Urschel earlier this year. We are interested in how GEPP and GECP behave on the same linear systems, as well as in large growth on particular subclasses of matrices, including orthogonal matrices. We also study systems for which GECP leads to larger growth than GEPP, yielding new empirical lower bounds on how much worse GECP can behave than GEPP in terms of growth. Finally, we present an empirical study of a family of exponential GEPP growth matrices whose polynomial behavior in small neighborhoods limits to the initial GECP growth factor.
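For concreteness, the GEPP growth factor can be computed directly by tracking the largest entry magnitude across all intermediate Schur complements. The sketch below (our own helper, not the paper's code) reproduces the classical worst case: Wilkinson's matrix attains growth $2^{n-1}$.

```python
import numpy as np

def gepp_growth(A):
    """Growth factor of GE with partial pivoting: the largest entry
    magnitude appearing in any intermediate matrix (including U),
    divided by the largest magnitude in the original matrix."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    denom = np.abs(A).max()
    g = denom
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))    # partial pivoting
        A[[k, p]] = A[[p, k]]
        m = A[k + 1:, k] / A[k, k]             # multipliers (|m| <= 1)
        A[k + 1:, k:] -= np.outer(m, A[k, k:]) # update Schur complement
        g = max(g, np.abs(A[k:, k:]).max())
    return g / denom
```

Running this on the $4 \times 4$ Wilkinson matrix (ones on the diagonal and in the last column, $-1$ below the diagonal) gives growth $8 = 2^{3}$.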
The joint retrieval of surface reflectances and atmospheric parameters in VSWIR imaging spectroscopy is a computationally challenging, high-dimensional problem. In the context of NASA's Surface Biology and Geology mission, quantifying the uncertainty associated with the retrievals is crucial for the further use of the retrieved results in environmental applications. Although Markov chain Monte Carlo (MCMC) is a Bayesian method well suited to uncertainty quantification, a full-dimensional implementation of MCMC for the retrieval is computationally intractable. In this work, we develop a block Metropolis MCMC algorithm for the high-dimensional VSWIR surface reflectance retrieval that leverages the structure of the forward radiative transfer model to make fully Bayesian computation tractable. We use the posterior distribution from this MCMC algorithm to assess the limitations of optimal estimation, the state-of-the-art Bayesian algorithm in operational retrievals, which is more computationally efficient but uses a Gaussian approximation to characterize the posterior. Analyzing the differences between the posteriors computed by each method, we find that the MCMC algorithm gives more physically sensible results and reveals the non-Gaussian structure of the posterior, specifically in the atmospheric aerosol optical depth parameter and the low-wavelength surface reflectances.
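To make the block-update idea concrete, here is a minimal, generic block Metropolis sketch: the high-dimensional state is updated one index block at a time, with the usual accept/reject ratio evaluated on the full posterior. This is illustrative only; the paper's blocks, proposals, and posterior follow the radiative transfer model, none of which is reproduced here.

```python
import math, random

def block_metropolis(logpost, x0, blocks, n=500, step=0.3, seed=0):
    """Generic block Metropolis sketch: cycle over index blocks,
    proposing a joint Gaussian move for one block at a time and
    accepting or rejecting with the Metropolis ratio."""
    random.seed(seed)
    x = list(x0)
    lp = logpost(x)
    for _ in range(n):
        for blk in blocks:
            prop = list(x)
            for i in blk:                       # perturb one block only
                prop[i] += random.gauss(0.0, step)
            lpp = logpost(prop)
            if math.log(random.random()) < lpp - lp:
                x, lp = prop, lpp               # accept block move
    return x
```

Block updates keep each proposal low-dimensional, so acceptance rates stay reasonable even when the full state has hundreds of components.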
The Metropolis algorithm is a Markov chain Monte Carlo (MCMC) algorithm used to simulate from parameter distributions of interest, such as generalized linear model parameters. The "Metropolis step" is a keystone concept that underlies classical and modern MCMC methods and facilitates simple analysis of complex statistical models. Beyond Bayesian analysis, MCMC is useful for generating uncertainty intervals, even under the common scenario in causal inference in which the target parameter is not directly estimated by a single, fitted statistical model. We demonstrate, with a worked example, pseudo-code, and R code, the basic mechanics of the Metropolis algorithm. We use the Metropolis algorithm to estimate the odds ratio and risk difference contrasting the risk of childhood leukemia among those exposed to high- versus low-level magnetic fields. This approach can be used for inference from Bayesian and frequentist paradigms and, in small samples, offers advantages over large-sample methods like the bootstrap.
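The basic mechanics described above fit in a few lines. The following random-walk Metropolis sketch (in Python rather than the paper's R, with a standard-normal log-posterior standing in for a real model's) shows the propose/accept/reject loop; in an application, `logpost` would be the log-posterior of the parameter of interest, such as a log odds ratio.

```python
import math, random

def metropolis(logpost, x0, n=4000, step=1.0, seed=1):
    """Random-walk Metropolis: propose x' ~ Normal(x, step^2) and
    accept with probability min(1, p(x') / p(x)); otherwise keep
    the current value (the "Metropolis step")."""
    random.seed(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(n):
        xp = x + random.gauss(0.0, step)
        lpp = logpost(xp)
        if math.log(random.random()) < lpp - lp:  # Metropolis step
            x, lp = xp, lpp
        samples.append(x)
    return samples
```

Quantiles of the retained samples (after discarding burn-in) then serve as uncertainty intervals for the target parameter.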
Whether or not the Kronecker coefficients of the symmetric group count some set of combinatorial objects is a longstanding open question. In this work we show that a given Kronecker coefficient is proportional to the rank of a projector that can be measured efficiently using a quantum computer. In other words, a Kronecker coefficient counts the dimension of the vector space spanned by the accepting witnesses of a QMA verifier, where QMA is the quantum analogue of NP. This implies that approximating the Kronecker coefficients to within a given relative error is not harder than a certain natural class of quantum approximate counting problems that captures the complexity of estimating thermal properties of quantum many-body systems. A second consequence is that deciding positivity of Kronecker coefficients is contained in QMA, complementing a recent NP-hardness result of Ikenmeyer, Mulmuley and Walter. We obtain similar results for the related problem of approximating row sums of the character table of the symmetric group. Finally, we discuss an efficient quantum algorithm that approximates normalized Kronecker coefficients to inverse-polynomial additive error.
The QZ algorithm computes the Schur form of a matrix pencil. It is an iterative algorithm and, at some point, it must decide that an eigenvalue has converged and move on to the next one. Choosing a criterion that makes this decision is nontrivial. If it is too strict, the algorithm might waste iterations on already converged eigenvalues. If it is not strict enough, the computed eigenvalues might be inaccurate. Additionally, the criterion should not be computationally expensive to evaluate. This paper introduces a new criterion based on the size of and the gap between the eigenvalues. This is similar to the work of Ahues and Tisseur for the QR algorithm. Theoretical arguments and numerical experiments suggest that it outperforms the most popular criteria in terms of accuracy. Additionally, this paper evaluates some commonly used criteria for infinite eigenvalues.
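As a baseline for comparison, here is the classical neighbor-based deflation test used in QR/QZ-type iterations: a subdiagonal entry is declared negligible when it is tiny relative to the two adjacent diagonal entries. Gap-weighted criteria like the one discussed above refine tests of this kind; this sketch (with our own function name) is only the standard baseline.

```python
def negligible(h_sub, d1, d2, eps=2.0 ** -52):
    """Classical deflation test: treat the subdiagonal entry h_sub as
    zero when it is small relative to its two neighboring diagonal
    entries d1 and d2 (eps defaults to double-precision unit roundoff)."""
    return abs(h_sub) <= eps * (abs(d1) + abs(d2))
```

Deflating too eagerly (a larger `eps`) trades accuracy for fewer iterations, which is exactly the tension the new criterion addresses.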
By a semi-Lagrangian change of coordinates, the hydrostatic Euler equations describing free-surface sheared flows are rewritten as a system of quasilinear equations, for which stability conditions can be determined by analysing its hyperbolic structure. The new system is quasilinear in time and in the horizontal variables and involves no vertical derivatives. However, the coefficients in front of the horizontal derivatives include an integral operator acting on the new vertical variable. The spectrum of these operators is studied in detail; in particular, it contains a continuous part. Riemann invariants are then determined as quantities conserved along the characteristic curves. Examples of solutions are provided, in particular stationary solutions and solutions blowing up in finite time. Finally, we propose an exact multi-layer $\mathbb{P}_0$-discretization, which could be used to solve this semi-Lagrangian system numerically, and analyze the eigenvalues of the corresponding discretized operator to investigate the hyperbolic nature of the approximated system.