Matrix factorizations in dual number algebra, a hypercomplex system, have recently been applied to kinematics, mechanisms, and other fields. In this paper, we develop an approach to identifying spatiotemporal patterns in the brain, such as traveling waves, using the singular value decomposition of dual matrices. Theoretically, we propose the compact dual singular value decomposition (CDSVD) of dual complex matrices with explicit expressions, together with a necessary and sufficient condition for its existence. Furthermore, based on the CDSVD, we derive the optimal solution to the best rank-$k$ approximation under a newly defined quasi-metric in the dual complex number system. The CDSVD is also related to the dual Moore-Penrose generalized inverse. Numerically, comparisons with other available algorithms show that the proposed CDSVD has lower computational cost. In addition, the infinitesimal part of the CDSVD can recover the true rank of the original matrix from its noise-corrupted version, which the classical SVD cannot. Next, we conduct experiments on simulated time-series data and a road monitoring video to demonstrate the benefit of the infinitesimal parts of dual matrices in spatiotemporal pattern identification. Finally, we apply this approach to large-scale brain fMRI data, identify three kinds of traveling waves, and further validate the consistency between our analytical results and current knowledge of cerebral cortex function.
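As a minimal illustration of the underlying algebra (not the paper's CDSVD construction), the sketch below represents a dual matrix $A = A_s + \epsilon A_i$ with $\epsilon^2 = 0$ as a pair of NumPy arrays and shows how matrix products propagate the infinitesimal part; the sizes and entries are arbitrary placeholders.

    import numpy as np

    class DualMatrix:
        """A dual matrix A = A_s + eps * A_i with eps^2 = 0."""
        def __init__(self, standard, infinitesimal):
            self.s = np.asarray(standard, dtype=complex)        # standard part A_s
            self.i = np.asarray(infinitesimal, dtype=complex)   # infinitesimal part A_i

        def __matmul__(self, other):
            # (A_s + eps A_i)(B_s + eps B_i) = A_s B_s + eps (A_s B_i + A_i B_s),
            # because eps^2 = 0 in dual number algebra.
            return DualMatrix(self.s @ other.s, self.s @ other.i + self.i @ other.s)

    # Placeholder data: a standard part plus a small infinitesimal perturbation.
    rng = np.random.default_rng(0)
    A = DualMatrix(rng.standard_normal((6, 4)), 1e-3 * rng.standard_normal((6, 4)))
    B = DualMatrix(rng.standard_normal((4, 5)), 1e-3 * rng.standard_normal((4, 5)))
    C = A @ B
    print(C.s.shape, C.i.shape)   # both (6, 5); the infinitesimal part is carried along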
The quantum density matrix represents all the information of a quantum system, and models of meaning built on density matrices naturally capture linguistic phenomena such as hyponymy and linguistic ambiguity, among others, in quantum question answering tasks. We therefore argue that applying the quantum density matrix to classical Question Answering (QA) tasks can yield more effective performance. Specifically, we (i) design a new mechanism based on Long Short-Term Memory (LSTM) to accommodate the case where the inputs are matrices; and (ii) apply this mechanism, together with a Convolutional Neural Network (CNN), to QA problems, obtaining an LSTM-based QA model with the quantum density matrix. Experiments with the new model on the TREC-QA and WIKI-QA datasets show encouraging results. Similarly, we argue that the quantum density matrix can also enrich image feature information and the relationships between features in classical image classification. Thus, we (i) combine density matrices with a CNN to design a new mechanism; and (ii) apply this mechanism to several representative classical image classification tasks. A series of experiments shows that the quantum density matrix generalizes well and is efficient across different datasets in image classification. In both classical question answering and classical image classification, applying the quantum density matrix yields more effective performance.
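One common way to build a density matrix from token embeddings in quantum-inspired NLP (a generic construction, not necessarily the exact mechanism of this work) is to mix the outer products of normalized word vectors into a unit-trace, positive semidefinite matrix; the sketch below uses random placeholder embeddings.

    import numpy as np

    def density_matrix(embeddings, weights=None):
        """Mix outer products of L2-normalized vectors into a unit-trace density matrix."""
        V = np.asarray(embeddings, dtype=float)
        V = V / np.linalg.norm(V, axis=1, keepdims=True)           # normalize each word vector
        w = np.ones(len(V)) / len(V) if weights is None else np.asarray(weights, dtype=float)
        w = w / w.sum()                                             # mixing weights sum to 1
        rho = sum(wi * np.outer(v, v) for wi, v in zip(w, V))      # rho = sum_i w_i v_i v_i^T
        return rho

    rho = density_matrix(np.random.randn(5, 8))   # 5 placeholder word vectors of dimension 8
    print(np.trace(rho))                          # ~1.0: unit trace, positive semidefinite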
This paper investigates the performance of a single-user fluid antenna system (FAS) by exploiting a class of elliptical copulas to describe the structure of dependency among the fluid antenna ports. By expressing Jakes' model in terms of the Gaussian copula, we consider two cases: (i) the general case, i.e., any arbitrary correlated fading distribution; and (ii) the specific case, i.e., correlated Nakagami-m fading. For both scenarios, we first derive analytical expressions for the cumulative distribution function (CDF) and probability density function (PDF) of the equivalent channel in terms of the multivariate normal distribution. Then, we obtain the outage probability (OP) and the delay outage rate (DOR) to analyze the performance of the FAS. By employing popular rank correlation coefficients such as Spearman's $\rho$ and Kendall's $\tau$, we measure the degree of dependency in correlated arbitrary fading channels and illustrate how the Gaussian copula can be accurately connected to Jakes' model in FAS without complicated mathematical analysis. Numerical results show that increasing the fluid antenna size lowers the OP and DOR, but the system performance saturates as the number of antenna ports increases. In addition, our results indicate that FAS outperforms conventional single-fixed-antenna systems even when the size of the fluid antenna is small.
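The Gaussian-copula construction itself is standard and can be sketched as follows: draw correlated Gaussians, push them through the normal CDF to obtain correlated uniforms, and then apply the inverse CDF of the desired marginal (here Nakagami-m). The Jakes-type correlation $J_0(2\pi d/\lambda)$ between port positions, the port spacing, and the m parameter are illustrative placeholders, not the paper's exact setup.

    import numpy as np
    from scipy.stats import norm, nakagami
    from scipy.special import j0

    n_ports, m, spacing = 8, 2.0, 0.1          # placeholders: ports, Nakagami-m shape, spacing in wavelengths
    pos = np.arange(n_ports) * spacing
    R = j0(2 * np.pi * np.abs(pos[:, None] - pos[None, :]))   # Jakes-type spatial correlation matrix

    rng = np.random.default_rng(1)
    Z = rng.multivariate_normal(np.zeros(n_ports), R, size=20000)   # correlated Gaussians
    U = norm.cdf(Z)                                                  # Gaussian copula: correlated uniforms
    G = nakagami(m).ppf(U)                                           # correlated Nakagami-m envelopes per port

    best = G.max(axis=1)                 # FAS selects the best port in each realization
    print((best < 0.5).mean())           # empirical outage probability at a placeholder threshold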
In this paper, we develop a multi-grade deep learning method for solving nonlinear partial differential equations (PDEs). Deep neural networks (DNNs) have achieved remarkable performance in solving PDEs, in addition to their outstanding success in areas such as natural language processing, computer vision, and robotics. However, training a very deep network is often a challenging task. As the number of layers of a DNN increases, solving the large-scale non-convex optimization problem that yields the DNN solution of a PDE becomes more and more difficult, which may lead to a decrease rather than an increase in predictive accuracy. To overcome this challenge, we propose a two-stage multi-grade deep learning (TS-MGDL) method that breaks down the task of learning a DNN into several neural networks stacked on top of each other in a staircase-like manner. This approach allows us to mitigate the complexity of solving a non-convex optimization problem with a large number of parameters and to efficiently learn the residual components left over from previous grades. We prove that each grade/stage of the proposed TS-MGDL method can reduce the value of the loss function and further validate this fact through numerical experiments. Although the proposed method is applicable to general PDEs, the implementation in this paper focuses only on the 1D, 2D, and 3D viscous Burgers equations. Experimental results show that the proposed two-stage multi-grade deep learning method enables efficient learning of solutions of these equations and outperforms existing single-grade deep learning methods in predictive accuracy. Specifically, the predictive errors of single-grade deep learning are 26-60, 4-31, and 3-12 times larger than those of the TS-MGDL method for the 1D, 2D, and 3D equations, respectively.
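To convey the staircase idea only, the toy sketch below performs grade-by-grade residual learning on a 1D function-fitting proxy rather than on a PDE loss: each "grade" fits a small model to the residual left by earlier grades, and the reported loss cannot increase from grade to grade. The feature map, target function, and number of grades are placeholders, not the paper's TS-MGDL implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 400)[:, None]
    target = np.sin(4 * np.pi * x[:, 0]) + 0.3 * x[:, 0] ** 2     # placeholder target

    prediction = np.zeros_like(target)
    for grade in range(3):                                        # three grades stacked in a staircase
        W, b = rng.standard_normal((1, 32)), rng.standard_normal(32)
        features = np.tanh(x @ W + b)                             # small fixed (frozen) feature map
        residual = target - prediction                            # what earlier grades left over
        coef, *_ = np.linalg.lstsq(features, residual, rcond=None)
        prediction = prediction + features @ coef                 # add this grade's correction
        print(f"grade {grade}: loss = {np.mean((target - prediction) ** 2):.2e}")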
Kuiper's statistic is a good measure of the discrepancy between an ideal distribution and an empirical distribution in goodness-of-fit testing. However, computing the critical value and upper tail quantile, or simply the Kuiper pair, of Kuiper's statistic is challenging because of the difficulty of solving the nonlinear equation involved and of reasonably approximating the infinite series. Kuiper's pioneering work provided only the key ideas and a few numerical tables indexed by the upper tail probability $\alpha$ and sample size $n$, which limited its dissemination and possible applications in various fields, since there are infinitely many configurations of the parameters $\alpha$ and $n$. The contributions of this work are threefold: firstly, a second-order approximation of the infinite series in the cumulative distribution of the critical value is used to achieve higher precision; secondly, the principles and fixed-point algorithms for solving the Kuiper pair are presented in detail; finally, an error in Kuiper's table of critical values is discovered and corrected. The algorithms are verified and validated by comparison with the table provided by Kuiper. The proposed methods and algorithms are worth introducing to college students, computer programmers, engineers, experimental psychologists, and others.
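For orientation, the widely used first-order asymptotic tail series for Kuiper's statistic, together with a standard small-sample correction, can be inverted numerically to obtain a critical value; the sketch below does this by simple bisection rather than the fixed-point iteration and second-order approximation developed in this work, and the chosen $\alpha$ and $n$ are placeholders.

    import numpy as np

    def kuiper_tail_prob(v, n, terms=100):
        """Pr(V_n >= v) from the standard first-order asymptotic series with a
        small-sample correction factor (not this paper's second-order refinement)."""
        lam = (np.sqrt(n) + 0.155 + 0.24 / np.sqrt(n)) * v
        if lam < 0.4:
            return 1.0            # the series is unreliable for very small lam; the tail prob is ~1 there
        j = np.arange(1, terms + 1)
        return min(1.0, 2 * np.sum((4 * j**2 * lam**2 - 1) * np.exp(-2 * j**2 * lam**2)))

    def kuiper_critical_value(alpha, n, lo=1e-6, hi=2.0, iters=60):
        """Solve Pr(V_n >= v) = alpha for v by bisection (a simple alternative to fixed-point iteration)."""
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if kuiper_tail_prob(mid, n) > alpha else (lo, mid)
        return 0.5 * (lo + hi)

    print(kuiper_critical_value(0.05, 100))   # critical value of Kuiper's V for alpha = 0.05, n = 100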
Agent-based simulation is a versatile and powerful computational modeling technique for analyzing intricate systems and phenomena across diverse fields. However, because of their computational intensity, agent-based models become even more resource-demanding when geographic considerations are introduced. This study explores diverse strategies for crafting a series of agent-based models, named "agent-in-the-cell," which emulate a city. These models incorporate geographical attributes of the city, employ real-world open-source mobility data from SafeGraph's publicly available dataset, and simulate the dynamics of COVID spread under varying scenarios. The "agent-in-the-cell" concept designates that our representative agents, called meta-agents, are linked to specific home cells in the city's tessellation. We scrutinize tessellations of the mobility map with varying complexities and experiment with the agent density, ranging from matching the actual population to reducing the number of (meta-)agents for computational efficiency. Our findings demonstrate that tessellations constructed from the Voronoi diagram of specific location types on the street network preserve the dynamics better than Census Block Group tessellations and Euclidean-based tessellations. Furthermore, the Voronoi diagram tessellation, as well as a hybrid tessellation based on the Voronoi diagram and Census Block Groups, requires fewer meta-agents to adequately approximate full-scale dynamics. Our analysis spans a range of city sizes in the United States, encompassing small (Santa Fe, NM), medium (Seattle, WA), and large (Chicago, IL) urban areas. This examination also provides valuable insights into the effects of agent-count reduction, varying sensitivity metrics, and the influence of city-specific factors.
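To illustrate only the cell-assignment step, the sketch below assigns placeholder home locations to Voronoi cells by nearest-seed lookup and then coarsens residents into meta-agents; in the Euclidean setting, nearest-seed assignment is exactly Voronoi-cell membership, whereas the study's tessellations are built from location types on the street network, so this is a simplified stand-in.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(2)
    seeds = rng.uniform(0, 10, size=(50, 2))        # placeholder seed locations (e.g., key location types)
    homes = rng.uniform(0, 10, size=(5000, 2))      # placeholder home locations of the population

    # Nearest-seed assignment = Voronoi-cell membership under the Euclidean metric.
    cell_of_home = cKDTree(seeds).query(homes)[1]

    # One meta-agent per group of k residents within a cell: a crude way to reduce agent count.
    k = 10
    meta_agents = [(c, i) for c in range(len(seeds))
                   for i in range(int(np.ceil((cell_of_home == c).sum() / k)))]
    print(len(homes), "residents ->", len(meta_agents), "meta-agents")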
Markov processes are widely used mathematical models for describing dynamic systems in various fields. However, accurately simulating large-scale systems at long time scales is computationally expensive due to the short time steps required for accurate integration. In this paper, we introduce an inference process that maps complex systems into a simplified representational space and models large jumps in time. To achieve this, we propose Time-lagged Information Bottleneck (T-IB), a principled objective rooted in information theory, which aims to capture relevant temporal features while discarding high-frequency information to simplify the simulation task and minimize the inference error. Our experiments demonstrate that T-IB learns information-optimal representations for accurately modeling the statistical properties and dynamics of the original process at a selected time lag, outperforming existing time-lagged dimensionality reduction methods.
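For intuition, a time-lagged variant of the generic information bottleneck Lagrangian can be written as

$$ \min_{p(z_t \mid x_t)} \; I(Z_t; X_t) \;-\; \beta \, I(Z_t; X_{t+\tau}), $$

where the first term discards high-frequency detail about the current state $X_t$ and the second term retains information predictive of the state a lag $\tau$ later. This is shown only as the standard IB-style formulation with a relevance variable set to the lagged state, and is not necessarily the exact T-IB objective proposed in the paper.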
Generalized metric spaces are obtained by weakening the requirements (e.g., symmetry) on the distance function and by allowing it to take values in structures (e.g., quantales) that are more general than the set of non-negative real numbers. Quantale-valued metric spaces have gained prominence due to their use in quantitative reasoning on programs/systems, and for defining various notions of behavioral metrics. We investigate imprecision and robustness in the framework of quantale-valued metric spaces, when the quantale is continuous. In particular, we study the relation between the robust topology, which captures robustness of analyses, and the Hausdorff-Smyth hemi-metric. To this end, we define a preorder-enriched monad $\mathsf{P}_S$, called the Hausdorff-Smyth monad, and when $Q$ is a continuous quantale and $X$ is a $Q$-metric space, we relate the topology induced by the metric on $\mathsf{P}_S(X)$ with the robust topology on the powerset $\mathsf{P}(X)$ defined in terms of the metric on $X$.
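For orientation, over an ordinary real-valued metric space the one-sided (directed) Hausdorff construction underlying such hemi-metrics takes the familiar form

$$ d_H(A, B) \;=\; \sup_{a \in A} \, \inf_{b \in B} \, d(a, b), $$

where the asymmetry is what makes it a hemi-metric rather than a metric. In the quantale-valued setting, the supremum and infimum are replaced by the corresponding joins and meets in $Q$; note that conventions differ as to which directed variant is attached to the Smyth (upper) construction, so this display is a generic sketch rather than the paper's precise definition.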
We propose a new framework for the sampling, compression, and analysis of distributions of point sets and other geometric objects embedded in Euclidean spaces. Our approach involves constructing a tensor called the RaySense sketch, which records the nearest neighbors in the underlying geometry of points sampled along a set of rays. We explore various operations that can be performed on the RaySense sketch, leading to different properties and potential applications. Statistical information about the data set can be extracted from the sketch, independently of the ray set. Line integrals on point sets can be efficiently computed using the sketch. We also present several examples illustrating applications of the proposed strategy in practical scenarios.
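A minimal version of the sketching step can be written as follows: sample points along each ray and record, for each sample, its nearest neighbor in the point cloud (here via a k-d tree). The ray count, sample count, and point cloud are placeholders, and the full method supports richer per-sample features than the bare nearest-neighbor coordinates used in this sketch.

    import numpy as np
    from scipy.spatial import cKDTree

    def raysense_sketch(points, n_rays=16, n_samples=32, seed=0):
        """Return an (n_rays, n_samples, dim) tensor of nearest neighbors sampled along random rays."""
        rng = np.random.default_rng(seed)
        points = np.asarray(points, dtype=float)
        dim = points.shape[1]
        tree = cKDTree(points)

        origins = rng.uniform(-1, 1, size=(n_rays, dim))
        directions = rng.standard_normal((n_rays, dim))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)

        t = np.linspace(0.0, 2.0, n_samples)                        # sample parameter along each ray
        ray_points = origins[:, None, :] + t[None, :, None] * directions[:, None, :]
        _, idx = tree.query(ray_points.reshape(-1, dim))             # nearest data point for each ray sample
        return points[idx].reshape(n_rays, n_samples, dim)

    cloud = np.random.randn(1000, 3)                                 # placeholder point cloud in R^3
    print(raysense_sketch(cloud).shape)                              # (16, 32, 3)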
Identifiability of discrete statistical models with latent variables is known to be challenging to study, yet crucial to a model's interpretability and reliability. This work presents a general algebraic technique to investigate identifiability of complicated discrete models with latent and graphical components. Specifically, motivated by diagnostic tests collecting multivariate categorical data, we focus on discrete models with multiple binary latent variables. In the considered model, the latent variables can have arbitrary dependencies among themselves while the latent-to-observed measurement graph takes a "star-forest" shape. We establish necessary and sufficient graphical criteria for identifiability, and reveal an interesting and perhaps surprising phenomenon of blessing-of-dependence geometry: under the minimal conditions for generic identifiability, the parameters are identifiable if and only if the latent variables are not statistically independent. Thanks to this theory, we can perform formal hypothesis tests of identifiability in the boundary case by testing certain marginal independence of the observed variables. Our results give new understanding of statistical properties of graphical models with latent variables. They also entail useful implications for designing diagnostic tests or surveys that measure binary latent traits.
Adversarial attacks expose vulnerabilities of deep learning models by introducing minor perturbations to the input that lead to substantial alterations in the output. Our research focuses on the impact of such adversarial attacks on sequence-to-sequence (seq2seq) models, specifically machine translation models. We introduce algorithms that incorporate basic text perturbation heuristics as well as more advanced strategies, such as a gradient-based attack that utilizes a differentiable approximation of the inherently non-differentiable translation metric. Through our investigation, we provide evidence that machine translation models display robustness against the best-performing known adversarial attacks, as the degree of perturbation in the output is directly proportional to the perturbation in the input. Nevertheless, among the weaker attacks, ours outperform the alternatives, providing the best relative performance. Another strong candidate is an attack based on mixing individual characters.
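As an illustration of the simplest class of such heuristics (not a specific attack from this work), the sketch below swaps adjacent characters inside randomly chosen words of a source sentence; the sentence and perturbation rate are placeholders.

    import random

    def swap_adjacent_chars(sentence, rate=0.2, seed=0):
        """Perturb a sentence by swapping two adjacent characters in a fraction of its words."""
        rng = random.Random(seed)
        words = sentence.split()
        for i, w in enumerate(words):
            if len(w) > 3 and rng.random() < rate:
                j = rng.randrange(len(w) - 1)
                words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]   # swap characters j and j+1
        return " ".join(words)

    print(swap_adjacent_chars("the quick brown fox jumps over the lazy dog", rate=0.5))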