In the literature, finite mixture models are described as linear combinations of probability density functions having the form $\displaystyle f(x) = \Lambda \sum_{i=1}^n w_i f_i(x)$, $x \in \mathbb{R}$, where the $w_i$ are positive weights, $\Lambda$ is a suitable normalising constant and the $f_i(x)$ are given probability density functions. That $f(x)$ is itself a probability density function follows naturally in this setting. Our question is: what happens when we remove the sign condition on the coefficients $w_i$? The answer is that the sign pattern of the function $f(x)$ can be determined by an algorithm based on a finite sequence that we call a generalized Budan-Fourier sequence. In this paper we provide the theoretical motivation for the algorithm, and we illustrate its strengths and possible applications through various examples.
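As a concrete illustration (not the paper's Budan-Fourier machinery), the following Python sketch evaluates a signed mixture of Gaussian densities on a grid and reads off where it changes sign; the components and weights are arbitrary choices for illustration.

```python
# Minimal sketch: a signed mixture f(x) = sum_i w_i f_i(x) (normalisation
# omitted) and its sign pattern on a grid. Weights/components illustrative.
import numpy as np
from scipy.stats import norm

weights = np.array([1.5, -0.8, 0.3])              # signed coefficients w_i
components = [norm(-2, 1), norm(0, 0.5), norm(3, 1)]

def f(x):
    return sum(w * c.pdf(x) for w, c in zip(weights, components))

x = np.linspace(-6, 6, 2001)
signs = np.sign(f(x))
# Grid locations where f changes sign:
crossings = x[np.where(np.diff(signs) != 0)]
print("approximate sign changes near:", crossings)
```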
We investigate online classification with paid stochastic experts. Here, before making their prediction, each expert must be paid. The amount that we pay each expert directly influences the accuracy of their prediction through some unknown Lipschitz "productivity" function. In each round, the learner must decide how much to pay each expert and then make a prediction, incurring a cost equal to a weighted sum of the prediction error and the upfront payments for all experts. We introduce an online learning algorithm whose total cost after $T$ rounds exceeds that of a predictor which knows the productivity of all experts in advance by at most $\mathcal{O}(K^2(\log T)\sqrt{T})$, where $K$ is the number of experts. To achieve this result, we combine Lipschitz bandits and online classification with surrogate losses. These tools allow us to improve upon the bound of order $T^{2/3}$ one would obtain in the standard Lipschitz bandit setting. Our algorithm is empirically evaluated on synthetic data.
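For concreteness, a minimal simulation of the interaction protocol might look as follows; the productivity functions, placeholder payment strategy, and unit cost weights are all illustrative assumptions, and the paper's learning algorithm itself is not reproduced here.

```python
# Sketch of the paid-experts protocol (not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(0)
K, T = 3, 1000
productivity = [lambda p: min(1.0, 0.5 + p),        # unknown to the learner
                lambda p: min(1.0, 0.6 + 0.5 * p),
                lambda p: 0.55]                     # pay-insensitive expert

total_cost = 0.0
for t in range(T):
    y = rng.integers(0, 2)                          # true binary label
    payments = rng.uniform(0, 0.5, size=K)          # placeholder strategy
    # Each paid expert is correct w.p. given by its productivity function.
    votes = [y if rng.random() < productivity[i](payments[i]) else 1 - y
             for i in range(K)]
    y_hat = int(np.mean(votes) >= 0.5)              # majority-vote prediction
    # Unit weights for illustration: prediction error plus total payments.
    total_cost += float(y_hat != y) + payments.sum()
print("average per-round cost:", total_cost / T)
```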
In this study, we develop and implement a comprehensive ensemble of numerical time series forecasting models, collectively referred to as the Group of Numerical Time Series Prediction Model (G-NM). This set comprises traditional models such as Autoregressive Integrated Moving Average (ARIMA), Holt-Winters' method, and Support Vector Regression (SVR), alongside modern neural network models including the Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM). G-NM is explicitly constructed to augment our predictive capabilities for the patterns and trends inherent in complex natural phenomena. By utilizing time series data relevant to these events, G-NM facilitates their prediction over extended periods. The primary objective of this research is both to advance our understanding of such occurrences and to significantly enhance the accuracy of our forecasts. G-NM captures the linear and non-linear dependencies, seasonalities, and trends present in time series data. Each model contributes distinct strengths, from ARIMA's resilience in handling linear trends and seasonality and SVR's proficiency in capturing non-linear patterns, to LSTM's adaptability in modeling various components of time series data. By exploiting the potential of G-NM, we strive to advance the state of the art in large-scale time series forecasting. We anticipate that this research will be a significant stepping stone in the ongoing endeavor to comprehend and forecast the complex events that constitute the natural world.
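As a rough illustration of the ensemble idea (the exact G-NM weighting scheme and full model roster are not reproduced here), the following sketch averages an ARIMA forecast with a recursive SVR forecast built on lag features, using statsmodels and scikit-learn; the data, orders, and equal weights are illustrative.

```python
# Toy two-member forecast ensemble: ARIMA + SVR on lagged features.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.arange(200)
y = 0.05 * t + np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.2, t.size)

# ARIMA component (order chosen arbitrarily for illustration).
arima_fc = ARIMA(y, order=(2, 1, 1)).fit().forecast(steps=12)

# SVR component trained on the previous `lags` values of the series.
lags = 12
X = np.column_stack([y[i:i + len(y) - lags] for i in range(lags)])
svr = SVR(C=10.0).fit(X, y[lags:])
history, svr_fc = list(y[-lags:]), []
for _ in range(12):                          # recursive multi-step forecast
    nxt = svr.predict(np.array(history[-lags:])[None, :])[0]
    svr_fc.append(nxt)
    history.append(nxt)

# Equal-weight combination of the two forecasts.
ensemble_fc = 0.5 * np.asarray(arima_fc) + 0.5 * np.asarray(svr_fc)
print(ensemble_fc)
```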
This paper is a collection of results on combinatorial properties of codes for the Z-channel. A Z-channel with error fraction $\tau$ takes as input a length-$n$ binary codeword and injects, in an adversarial manner, up to $n\tau$ asymmetric errors, i.e., errors that only zero out bits but do not flip $0$'s to $1$'s. It is known that the largest $(L-1)$-list-decodable code for the Z-channel with error fraction $\tau$ has exponential size (in $n$) if $\tau$ is less than a critical value that we call the $(L-1)$-list-decoding Plotkin point, and has constant size if $\tau$ is larger than this threshold. The $(L-1)$-list-decoding Plotkin point is known to be $L^{-\frac{1}{L-1}} - L^{-\frac{L}{L-1}}$, which equals $1/4$ for unique decoding with $L-1=1$. In this paper, we derive various results on the size of the largest codes above and below the list-decoding Plotkin point. In particular, we show that the largest $(L-1)$-list-decodable code $\epsilon$-above the Plotkin point, for any sufficiently small constant $\epsilon>0$, has size $\Theta_L(\epsilon^{-3/2})$ for any $L-1\ge1$. We also devise upper and lower bounds on the exponential size of codes below the list-decoding Plotkin point.
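The Plotkin-point formula is simple to evaluate numerically; this short check confirms the value $1/4$ for unique decoding and shows how the threshold grows with the list size.

```python
# Evaluate the (L-1)-list-decoding Plotkin point
# tau_L = L^{-1/(L-1)} - L^{-L/(L-1)}.
def plotkin_point(L: int) -> float:
    return L ** (-1.0 / (L - 1)) - L ** (-L / (L - 1))

for L in (2, 3, 4, 10):
    print(f"L-1 = {L - 1}: tau = {plotkin_point(L):.4f}")
# L-1 = 1 gives 0.2500; the threshold increases toward 1 as L grows.
```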
The Immersed Boundary (IB) method of Peskin (J. Comput. Phys., 1977) is useful for problems involving fluid-structure interactions or complex geometries. By making use of a regular Cartesian grid that is independent of the geometry, the IB framework yields a robust numerical scheme that can efficiently handle immersed deformable structures. Additionally, the IB method has been adapted to problems with prescribed motion and other PDEs with given boundary data. IB methods for these problems traditionally involve penalty forces which only approximately satisfy boundary conditions, or they are formulated as constraint problems. In the latter approach, one must find the unknown forces by solving an equation that corresponds to a poorly conditioned first-kind integral equation. This operation can require a large number of iterations of a Krylov method, and since a time-dependent problem requires this solve at each time step, this method can be prohibitively inefficient without preconditioning. In this work, we introduce a new, well-conditioned IB formulation for boundary value problems, which we call the Immersed Boundary Double Layer (IBDL) method. We present the method as it applies to Poisson and Helmholtz problems to demonstrate its efficiency over the original constraint method. In this double layer formulation, the equation for the unknown boundary distribution corresponds to a well-conditioned second-kind integral equation that can be solved efficiently with a small number of iterations of a Krylov method. Furthermore, the iteration count is independent of both the mesh size and immersed boundary point spacing. The method converges away from the boundary, and when combined with a local interpolation, it converges in the entire PDE domain. Additionally, while the original constraint method applies only to Dirichlet problems, the IBDL formulation can also be used for Neumann conditions.
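The conditioning advantage of second-kind formulations can be seen in miniature. The sketch below is a generic smooth-kernel model problem, not the IBDL discretization itself: it solves a second-kind system $(\tfrac{1}{2}I + K)\sigma = g$ with GMRES and reports the iteration count, which stays small because the spectrum clusters at $1/2$.

```python
# Second-kind integral equation model problem solved with GMRES.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 400
theta = 2 * np.pi * np.arange(n) / n
# Smooth periodic kernel -> compact operator with rapidly decaying
# eigenvalues, so I/2 + K is well conditioned.
K = np.exp(np.cos(theta[:, None] - theta[None, :])) * (2 * np.pi / n)
A = 0.5 * np.eye(n) + K
g = np.sin(theta)

residuals = []
sigma, info = gmres(LinearOperator((n, n), matvec=lambda v: A @ v), g,
                    rtol=1e-10,                     # SciPy >= 1.12 keyword
                    callback=lambda r: residuals.append(r),
                    callback_type='pr_norm')
print("GMRES iterations:", len(residuals), "converged:", info == 0)
```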
In the problem of quantum channel certification, we have black-box access to a quantum process and would like to decide if this process matches some predefined specification or is $\varepsilon$-far from it. The objective is to achieve this task while minimizing the number of times the black box is used. Here, we focus on optimal incoherent strategies for two relevant extreme cases of channel certification. The first one is when the predefined specification is a unitary channel, e.g., a gate in a quantum circuit. In this case, we show that testing whether the black box is described by a fixed unitary operator in dimension $d$ or $\varepsilon$-far from it in the trace norm requires $\Theta(d/\varepsilon^2)$ uses of the black box. The second setting we consider is when the predefined specification is a completely depolarizing channel with input dimension $d_{\text{in}}$ and output dimension $d_{\text{out}}$. In this case, we prove that, in the non-adaptive setting, $\tilde{\Theta}(d_{\text{in}}^2d_{\text{out}}^{1.5}/\varepsilon^2)$ uses of the channel are necessary and sufficient to verify whether it is equal to the depolarizing channel or $\varepsilon$-far from it in the diamond norm. Finally, we prove a lower bound of $\Omega(d_{\text{in}}^2d_{\text{out}}/\varepsilon^2)$ for this problem in the adaptive setting. Note that the special case $d_{\text{in}} = 1$ corresponds to the well-studied quantum state certification problem.
Noise-shaping quantization techniques are widely used for converting bandlimited signals from the analog to the digital domain. They work by "shaping" the quantization noise so that it falls close to the reconstruction operator's null space. We investigate the compatibility of two such schemes, specifically $\Sigma\Delta$ quantization and distributed noise-shaping quantization, with random samples of bandlimited functions. Let $f$ be a real-valued $\pi$-bandlimited function. Suppose $R>1$ is a real number and assume that $\{x_i\}_{i=1}^m$ is a sequence of i.i.d. random variables uniformly distributed on $[-\tilde{R},\tilde{R}]$, where $\tilde{R}>R$ is appropriately chosen. We show that by using a noise-shaping quantizer to quantize the values of $f$ at $\{x_i\}_{i=1}^m$, a function $f^{\sharp}$ can be reconstructed from these quantized values such that $\|f-f^{\sharp}\|_{L^2[-R, R]}$ decays with high probability as $m$ and $\tilde{R}$ increase. We emphasize that the sample points $\{x_i\}_{i=1}^m$ are completely random, i.e., they have no predefined structure, which makes our findings the first of their kind.
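For reference, a standard first-order $\Sigma\Delta$ quantizer looks as follows (a minimal sketch of the classical scheme, not the distributed noise-shaping variant studied here): an internal state accumulates the quantization error, which is what "shapes" the noise.

```python
# First-order Sigma-Delta quantization with a 1-bit alphabet {-1, +1}.
import numpy as np

def sigma_delta(y):
    """State recursion u_i = u_{i-1} + y_i - q_i with q_i = sign(u_{i-1} + y_i)."""
    q = np.empty_like(y)
    u = 0.0
    for i, yi in enumerate(y):
        v = u + yi
        q[i] = 1.0 if v >= 0 else -1.0
        u = v - q[i]                    # error accumulates in the state
    return q

y = 0.5 * np.sin(np.linspace(0, 4 * np.pi, 64))
q = sigma_delta(y)
# The error sum telescopes to the final state, so mean(y - q) is O(1/m).
print("sample mean of y - q:", np.mean(y - q))
```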
This paper focuses on developing a reduction-based algebraic multigrid method that is suitable for solving general (non)symmetric linear systems and is naturally robust from pure advection to pure diffusion. Initial motivation comes from a new reduction-based algebraic multigrid (AMG) approach, $\ell$AIR (local approximate ideal restriction), that was developed for solving advection-dominated problems. Though this new solver is very effective in the advection-dominated regime, its performance degrades in cases where diffusion becomes dominant. This is consistent with the fact that, in general, reduction-based AMG methods tend to suffer from growth in complexity and/or convergence rates as the problem size is increased, especially for diffusion-dominated problems in two or three dimensions. Motivated by the success of $\ell$AIR in the advective regime, our aim in this paper is to generalize the AIR framework with the goal of improving the performance of the solver in diffusion-dominated regimes. To do so, we propose a novel way to combine the mode constraints commonly used in energy-minimization AMG methods with the local approximation of ideal operators used in $\ell$AIR. The resulting constrained $\ell$AIR (C$\ell$AIR) algorithm achieves fast, scalable convergence on both advective and diffusive problems. In addition, it achieves standard low-complexity hierarchies in the diffusive regime through aggressive coarsening, something that has previously been difficult for reduction-based methods.
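The ideal restriction operator at the heart of AIR can be illustrated with dense linear algebra on a toy matrix ($\ell$AIR itself builds sparse local approximations of these operators); with exact ideal restriction and injection interpolation, the coarse operator is the Schur complement.

```python
# Ideal restriction on a toy matrix, ordering F-points first, then C-points.
import numpy as np

n, nc = 8, 4                                     # toy problem and coarse sizes
rng = np.random.default_rng(0)
A = np.eye(n) * 4 + rng.uniform(-1, 1, (n, n))   # assume an F/C split is given
Aff, Afc = A[:n - nc, :n - nc], A[:n - nc, n - nc:]
Acf, Acc = A[n - nc:, :n - nc], A[n - nc:, n - nc:]

# Ideal restriction R = [-Acf Aff^{-1}  I] eliminates coupling to F-points.
R = np.hstack([-Acf @ np.linalg.inv(Aff), np.eye(nc)])
P = np.vstack([np.zeros((n - nc, nc)), np.eye(nc)])  # injection interpolation
A_coarse = R @ A @ P
# With exact ideal R, the coarse operator is the Schur complement.
schur = Acc - Acf @ np.linalg.inv(Aff) @ Afc
print(np.allclose(A_coarse, schur))              # True
```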
The univariate generalized extreme value (GEV) distribution is the most commonly used tool for analyzing the properties of rare events. The ever greater utilization of Bayesian methods for extreme value analysis warrants detailed theoretical investigation, which has thus far been underdeveloped. Even the most basic asymptotic results are difficult to obtain because the GEV fails to satisfy standard regularity conditions. Here, we prove that the posterior distribution of the GEV parameter vector, given $n$ independent and identically distributed samples, converges in distribution to a trivariate normal distribution. The proof necessitates analyzing integrals of the GEV likelihood function over the entire parameter space, which requires considerable care because the support of the GEV density depends on the parameters in complicated ways.
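The theorem can be illustrated numerically: since the posterior contracts around the truth at a $\sqrt{n}$ rate, the spread of repeated maximum-likelihood fits (used here as a rough stand-in for posterior draws under Bernstein-von Mises behaviour) should shrink like $1/\sqrt{n}$. The sketch below uses SciPy's `genextreme`, whose shape parameter follows the opposite sign convention to the usual GEV $\xi$.

```python
# Illustration only (not part of the proof): MLE spread shrinks ~ 1/sqrt(n).
import numpy as np
from scipy.stats import genextreme

true_c = -0.2                                   # scipy shape c corresponds to -xi
rng = np.random.default_rng(3)
for n in (250, 1000, 4000):
    fits = [genextreme.fit(genextreme.rvs(true_c, size=n, random_state=rng))
            for _ in range(10)]
    shape_sd = np.std([c for c, loc, scale in fits])
    print(f"n={n:5d}  sd of fitted shape: {shape_sd:.4f}")
# The spread decays roughly like 1/sqrt(n), consistent with the
# trivariate normal limit for the posterior.
```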
Neuro-symbolic artificial intelligence is an emerging area that combines traditional symbolic techniques with neural networks. In this paper, we consider its application to sequential decision making under uncertainty. We introduce neuro-symbolic partially observable Markov decision processes (NS-POMDPs), which model an agent that perceives a continuous-state environment using a neural network and makes decisions symbolically, and study the problem of optimising discounted cumulative rewards. This requires functions over continuous-state beliefs, for which we propose a novel piecewise linear and convex representation (P-PWLC) in terms of polyhedra covering the continuous-state space and value vectors, and extend Bellman backups to this representation. We prove the convexity and continuity of value functions and present two value iteration algorithms that ensure finite representability by exploiting the underlying structure of the continuous-state model and the neural perception mechanism. The first is a classical (exact) value iteration algorithm extending the $\alpha$-functions of Porta et al. (2006) to the P-PWLC representation for continuous-state spaces. The second is a point-based (approximate) method called NS-HSVI, which uses the P-PWLC representation and belief-value induced functions to approximate value functions from below and above for two types of beliefs, particle-based and region-based. Using a prototype implementation, we demonstrate the practical applicability of our approach by synthesising (approximately) optimal strategies for two case studies that employ (trained) ReLU neural networks as perception functions: dynamic car parking and an aircraft collision avoidance system. An experimental comparison with the finite-state POMDP solver SARSOP demonstrates that NS-HSVI is more robust to particle disturbances.
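For intuition, a bare-bones PWLC value representation over particle-based beliefs might look as follows (an illustrative sketch only; the paper's P-PWLC additionally ties each value vector to a polyhedron covering the continuous-state space).

```python
# Value lower bound as a max over alpha-functions at belief particles.
import numpy as np

class PWLCValue:
    """V(b) = max_alpha sum_k w_k * alpha(x_k) for a particle belief b."""
    def __init__(self, alpha_fns):
        self.alpha_fns = alpha_fns        # each maps a state to a float

    def value(self, particles, weights):
        return max(np.dot(weights, [a(x) for x in particles])
                   for a in self.alpha_fns)

# Two illustrative alpha-functions over a 1-D continuous state.
V = PWLCValue([lambda x: 1.0 - abs(x), lambda x: 0.5 * x])
particles = np.array([-0.5, 0.1, 0.7])    # belief support points
weights = np.array([0.2, 0.5, 0.3])       # belief weights (sum to 1)
print(V.value(particles, weights))        # 0.64
```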
This paper introduces a novel low-latency online beamforming (BF) algorithm, named the Modified Parametric Multichannel Wiener Filter (Mod-PMWF), for enhancing speech mixtures with an unknown and varying number of speakers. Although conventional BFs such as the linearly constrained minimum variance BF (LCMV BF) can enhance a speech mixture, they typically require attributes of the mixture such as the number of speakers and the acoustic transfer functions (ATFs) from the speakers to the microphones. When these attributes are unavailable, estimating them with low-latency processing is challenging, hindering the application of such BFs to the problem. In this paper, we overcome this problem by modifying the conventional Parametric Multichannel Wiener Filter (PMWF). The proposed Mod-PMWF adaptively forms a directivity pattern that enhances all the speakers in the mixture without explicitly estimating these attributes. Our experiments show the proposed BF's effectiveness in terms of interference reduction ratios and subjective listening tests.
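For context, a common parameterization of the conventional PMWF that Mod-PMWF builds on (as in Souden et al., 2010) is sketched below with oracle covariances; an online system would instead estimate these quantities recursively, and Mod-PMWF avoids explicit attribute estimation altogether.

```python
# Conventional PMWF with oracle speech/noise covariances (illustrative).
import numpy as np

def pmwf(Phi_s, Phi_n, beta=1.0, ref=0):
    """w = (Phi_n^{-1} Phi_s e_ref) / (beta + tr(Phi_n^{-1} Phi_s))."""
    B = np.linalg.solve(Phi_n, Phi_s)        # Phi_n^{-1} Phi_s
    return B[:, ref] / (beta + np.real(np.trace(B)))

M = 4                                        # number of microphones
rng = np.random.default_rng(7)
d = rng.normal(size=M) + 1j * rng.normal(size=M)   # toy steering vector
Phi_s = np.outer(d, d.conj())                # rank-1 speech covariance
Phi_n = np.eye(M) + 0.1 * np.ones((M, M))    # toy noise covariance
w = pmwf(Phi_s, Phi_n, beta=1.0)
snr_out = np.real(w.conj() @ Phi_s @ w) / np.real(w.conj() @ Phi_n @ w)
print("output speech-to-noise power ratio:", snr_out)
```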