Over the past two decades, some scholars have noticed the correlation between quantum mechanics and finance/economics and have made novel attempts to introduce the theoretical framework of quantum mechanics into financial and economic research; subsequently, a new research domain called quantum finance or quantum economics was established. In particular, some studies have focused on the stock market, utilizing the quantum mechanical paradigm to describe the movement of stock prices. Nevertheless, the majority of this research has been devoted to describing the motion of a single stock, drawing an analogy between the motion of a single stock and a one-dimensional infinite well or a one-dimensional harmonic oscillator, whose form resembles the one-electron Schr\"odinger equation and can, in most cases, be solved analytically. Hitherto, the stock market system as a whole, composed of all stocks and stock indexes, has not been discussed. In this paper, the concept of a stock molecular system is proposed for the first time. The form of the stock molecular system resembles the multi-electron Schr\"odinger equation under the Born-Oppenheimer approximation. Similar to the interactions among the nuclei and electrons in a molecule, interactions exist among all stock indexes and stocks. This paper also establishes the stock-index Coulomb potential, the stock-stock Coulomb potential, and the stock Coulomb correlation terms by means of statistical theory. Finally, the concept and feasibility of drawing upon density functional theory (DFT) to solve the Schr\"odinger equation of the stock molecular system are put forward together with a proof, ending with experiments on the CSI 300 index system.
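To make the borrowed quantum-chemical template concrete, the display below shows the standard electronic Hamiltonian under the Born-Oppenheimer approximation (in atomic units); the mapping of nuclei to stock indexes and electrons to stocks follows the analogy described above, while any concrete functional form for the stock potentials is an illustrative assumption, not taken from the paper.
\[
\hat{H}_{\mathrm{el}} \;=\; -\frac{1}{2}\sum_{i}\nabla_i^{2}
\;-\;\sum_{i}\sum_{A}\frac{Z_A}{\lvert \mathbf{r}_i-\mathbf{R}_A\rvert}
\;+\;\sum_{i<j}\frac{1}{\lvert \mathbf{r}_i-\mathbf{r}_j\rvert},
\]
where, in the analogy, the fixed coordinates \(\mathbf{R}_A\) play the role of the stock indexes, the coordinates \(\mathbf{r}_i\) the role of individual stocks, the second term corresponds to the stock-index Coulomb potential, and the third to the stock-stock Coulomb potential.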
We consider the application of the generalized Convolution Quadrature (gCQ) to approximate the solution of an important class of sectorial problems. The gCQ is a generalization of Lubich's Convolution Quadrature (CQ) that allows for variable steps. The available stability and convergence theory for the gCQ requires unrealistic regularity assumptions on the data, which do not hold in many applications of interest, such as the approximation of subdiffusion equations. It is well known that for insufficiently smooth data the original CQ, with uniform steps, exhibits an order reduction close to the singularity. We generalize the analysis of the gCQ to data satisfying realistic regularity assumptions and provide sufficient conditions for stability and convergence on arbitrary sequences of time points. We consider the particular case of graded meshes and show how to choose them optimally, according to the behaviour of the data. An important advantage of the gCQ method is that it allows for a fast, memory-reduced implementation. We describe how the fast and oblivious gCQ can be implemented and illustrate our theoretical results with several numerical experiments.
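As a concrete illustration of the graded meshes mentioned above, a standard construction (assumed here for illustration; the paper's optimal grading may differ) is
\[
t_j \;=\; T\left(\frac{j}{N}\right)^{\gamma}, \qquad j=0,1,\dots,N, \qquad \gamma \ge 1,
\]
where \(T\) is the final time and the grading exponent \(\gamma\) is tuned to the regularity of the data at \(t=0\): \(\gamma=1\) recovers the uniform mesh of the classical CQ, while larger \(\gamma\) clusters the time points near the initial singularity.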
Modelling noisy data in a network context remains an unavoidable obstacle; fortunately, random matrix theory can describe such network environments effectively. This necessitates the probabilistic characterisation of these networks (and the accompanying noisy data) using matrix variate models. Denoising network data with a Bayesian approach is not common in the surveyed literature. This paper adopts the Bayesian viewpoint and introduces a new matrix variate t-model, arising from a prior construction that relies on the matrix variate gamma distribution for the noise process and builds on the Gaussian graphical network to handle cases where the normality assumption is violated. From a statistical learning viewpoint, such a theoretical consideration undoubtedly benefits the real-world comprehension of structures generating noisy data with network-based attributes, as part of machine learning in data science. A full structural learning procedure is provided for calculating and approximating the resulting posterior of interest in order to assess the considered model's network centrality measures. Experiments with synthetic and real-world stock price data are performed not only to validate the proposed algorithm's capabilities but also to show that this model has wider flexibility than originally implied in Billio et al. (2021).
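For orientation, a familiar scale-mixture construction of a matrix variate t-distribution is sketched below; the exact parameterisation used in the paper (via the matrix variate gamma distribution) may differ, so this is an illustrative assumption rather than the authors' model. If the data matrix is matrix normal given its row covariance, and that covariance follows an inverse Wishart law (a special case of the inverse matrix variate gamma), then marginally
\[
X \mid \Sigma \sim \mathcal{MN}_{n\times p}\!\left(M,\ \Sigma,\ \Psi\right), \qquad
\Sigma \sim \mathcal{IW}_{n}\!\left(\nu,\ \Omega\right)
\;\;\Longrightarrow\;\;
X \sim t_{n\times p}\!\left(\nu,\ M,\ \Omega,\ \Psi\right),
\]
so the Gaussian layer is retained conditionally while the marginal model acquires the heavier tails needed when normality is violated.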
Ecosystems are ubiquitous, but trust within them is not guaranteed. Trust is paramount because stakeholders within an ecosystem must collaborate to achieve their objectives. With the twin transitions, digital transformation proceeding in parallel with the green transition, accelerating the deployment of autonomous systems, trust has become even more critical to ensure that the deployed technology creates value. To address this need, we propose an ecosystem-of-trust approach to support the deployment of technology by enabling trust among and between stakeholders, technologies and infrastructures, institutions and governance, and the artificial and natural environments in an ecosystem. The approach can help the stakeholders in the ecosystem to create, deliver, and receive value by addressing their concerns and aligning their objectives. We present an autonomous, zero-emission ferry as a real-world use case to demonstrate the approach from a stakeholder perspective. We argue that assurance, defined as grounds for justified confidence derived from evidence and knowledge, is a prerequisite for enabling the approach. Assurance provides evidence and knowledge that are collected, analysed, and communicated in a systematic, targeted, and meaningful way. Assurance can thus enable the approach to support successful technology deployment by ensuring that risk is managed, trust is shared, and value is created.
We propose an innovative and generic methodology for analysing individual and collective behaviour from individual trajectory data. The work is motivated by the analysis of GPS trajectories of fishing vessels, collected from regulatory tracking data in the context of marine biodiversity conservation and ecosystem-based fisheries management. We build a low-dimensional latent representation of trajectories using convolutional neural networks as a non-linear mapping. This is done by training a conditional variational auto-encoder that takes covariates into account. The posterior distributions of the latent representations can be linked to the characteristics of the actual trajectories. The latent distributions of the trajectories are compared with the Bhattacharyya coefficient, which is well suited for comparing distributions. Using this coefficient, we analyse how the individual behaviour of each vessel varies over time. For the collective behaviour analysis, we build proximity graphs and use an extension of the stochastic block model for multiple networks, which results in a clustering of the individuals based on their sets of trajectories. The application to French fishing vessels enables us to obtain groups of vessels whose individual and collective behaviours exhibit spatio-temporal patterns over the period 2014-2018.
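Since the comparison step hinges on the Bhattacharyya coefficient, the sketch below shows how it can be computed in closed form for two Gaussian latent posteriors (the closed form for Gaussians is standard; the dimensions and example means/covariances are hypothetical and not drawn from the paper).

```python
import numpy as np

def bhattacharyya_coefficient(mu1, cov1, mu2, cov2):
    """Bhattacharyya coefficient between two multivariate Gaussians.

    Returns a value in (0, 1]; 1 means the distributions coincide.
    """
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    # Bhattacharyya distance for Gaussians, then map to the coefficient.
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    _, logdet = np.linalg.slogdet(cov)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))
    return float(np.exp(-(term1 + term2)))

# Hypothetical latent posteriors of two trajectories (e.g. from a CVAE encoder)
mu_a, cov_a = np.zeros(2), np.eye(2)
mu_b, cov_b = np.array([1.0, 0.5]), 0.5 * np.eye(2)
print(bhattacharyya_coefficient(mu_a, cov_a, mu_b, cov_b))
```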
Inspired by the success of WaveNet in multi-subject speech synthesis, we propose a novel neural network based on causal convolutions for multi-subject motion modeling and generation. The network can capture the intrinsic characteristics of the motion of different subjects, such as the influence of skeleton scale variation on motion style. Moreover, after the network is fine-tuned on a small motion dataset for a novel skeleton that is not included in the training data, it can synthesize high-quality motions with a personalized style for that skeleton. The experimental results demonstrate that our network models the intrinsic characteristics of motions well and can be applied to various motion modeling and synthesis tasks.
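The abstract does not give architectural details, so the sketch below only illustrates the causal-convolution building block it refers to, with hypothetical channel sizes and motion-feature dimensions; it is not the authors' network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """1-D convolution that only looks at past frames (no future leakage)."""

    def __init__(self, in_channels, out_channels, kernel_size, dilation=1):
        super().__init__()
        # Left-pad so the output at time t depends only on inputs at times <= t.
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size,
                              dilation=dilation)

    def forward(self, x):              # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))    # pad only on the left
        return self.conv(x)

# Hypothetical motion features: 63 joint channels, 120 frames, batch of 8
motion = torch.randn(8, 63, 120)
block = CausalConv1d(63, 128, kernel_size=3, dilation=2)
print(block(motion).shape)             # torch.Size([8, 128, 120])
```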
Decentralized finance (DeFi) has the potential to disrupt centralized finance by validating peer-to-peer transactions through tamper-proof smart contracts, thus significantly lowering the transaction cost charged by financial intermediaries. However, the actual realization of peer-to-peer transactions and the levels and effects of decentralization are largely unknown. Our research pioneers a blockchain network study that applies social network analysis to measure the level, dynamics, and impacts of decentralization in DeFi token transactions on the Ethereum blockchain. First, we find a significant core-periphery structure in the AAVE token transaction network, whose core includes the two largest centralized crypto exchanges. Second, we provide evidence that multiple network features consistently characterize decentralization dynamics. Finally, we document that a more decentralized network significantly predicts a higher return and lower volatility of the decentralized market of AAVE tokens on the Ethereum blockchain. We point out that our approach can serve as a starting point for future extensions in application scenarios, research questions, and methodologies concerning the mechanics of blockchain decentralization.
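The abstract does not list the specific network features used, so the sketch below only illustrates, on a made-up token transfer graph, two generic proxies one might compute for decentralization and core-periphery structure; the addresses, edges, and choice of measures are assumptions for illustration.

```python
import networkx as nx
import numpy as np

# Hypothetical daily AAVE transfer network: edges are token transfers
# between addresses (all names are invented for illustration).
edges = [("exchangeA", "walletB"), ("walletB", "walletC"),
         ("exchangeA", "walletD"), ("walletD", "exchangeB"),
         ("walletC", "exchangeA"), ("exchangeB", "walletE")]
G = nx.DiGraph(edges)

# 1) Gini coefficient of node degrees: lower values suggest activity is
#    spread more evenly across addresses (more decentralized).
degrees = np.sort([d for _, d in G.degree()]).astype(float)
n = len(degrees)
gini = (2 * np.sum(np.arange(1, n + 1) * degrees) / (n * degrees.sum())
        - (n + 1) / n)

# 2) k-core decomposition as a rough core-periphery indicator:
#    nodes with high core numbers form the densely connected core.
core_numbers = nx.core_number(G.to_undirected())

print("degree Gini:", round(gini, 3))
print("core numbers:", core_numbers)
```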
Experimental and observational studies often yield spurious associations between the outcome and the independent variables describing the intervention because of confounding by third-party factors. Even in randomized clinical trials, confounding might be unavoidable due to small sample sizes. Practically, this poses a problem, because it is either expensive to re-design and conduct a new study or outright impossible to remove the contribution of some confounders, e.g. due to ethical concerns. Here, we propose a method to consistently derive hypothetical studies that retain as many of the dependencies in the original study as mathematically possible while removing any association of observed confounders with the independent variables. Using historic studies, we illustrate how the confounding-free scenario re-estimates the effect size of the intervention. The new effect size estimate represents a concise prediction in the hypothetical scenario and paves the way from the original data towards the design of future studies.
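To illustrate the general idea of removing the confounder-treatment association while keeping the rest of the data intact, the sketch below uses inverse probability weighting on simulated data. This is a familiar baseline chosen for illustration only, not the method proposed in the paper, and all variables and parameters are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)

# Simulated study: confounder z drives both treatment x and outcome y.
n = 5000
z = rng.normal(size=n)
x = rng.binomial(1, 1 / (1 + np.exp(-1.5 * z)))   # treatment assignment
y = 1.0 * x + 2.0 * z + rng.normal(size=n)        # true effect of x is 1.0

# Naive estimate is biased because z is associated with both x and y.
naive = LinearRegression().fit(x.reshape(-1, 1), y).coef_[0]

# Inverse probability weighting: weight each unit by 1 / P(x | z), which
# breaks the z-x association in the weighted pseudo-sample.
ps = LogisticRegression().fit(z.reshape(-1, 1), x).predict_proba(z.reshape(-1, 1))[:, 1]
w = np.where(x == 1, 1 / ps, 1 / (1 - ps))
weighted = LinearRegression().fit(x.reshape(-1, 1), y, sample_weight=w).coef_[0]

print(f"naive effect estimate:    {naive:.2f}")
print(f"weighted effect estimate: {weighted:.2f}  (true value 1.0)")
```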
In semi-supervised learning, the prevailing understanding suggests that observing additional unlabeled samples improves estimation accuracy for linear parameters only in the case of model misspecification. This paper challenges this notion, demonstrating its inaccuracy in high dimensions. Initially focusing on a dense scenario, we introduce robust semi-supervised estimators for the regression coefficient without relying on sparse structures in the population slope. Even when the true underlying model is linear, we show that leveraging information from large-scale unlabeled data improves both estimation accuracy and inference robustness. Moreover, we propose semi-supervised methods with further enhanced efficiency in scenarios with a sparse linear slope. Diverging from the standard semi-supervised literature, we also allow for covariate shift. The performance of the proposed methods is illustrated through extensive numerical studies, including simulations and a real-data application to the AIDS Clinical Trials Group Protocol 175 (ACTG175).
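The paper targets regression coefficients in high dimensions; the toy sketch below only illustrates the general mechanism by which unlabeled covariates can sharpen an estimate, using the simpler problem of estimating the mean of the response. It is not the authors' estimator, and the simulated model and sample sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: small labeled sample, large unlabeled covariate pool.
n_lab, n_unlab, p = 100, 100_000, 5
beta = np.array([1.0, -0.5, 0.3, 0.0, 2.0])
X_lab = rng.normal(size=(n_lab, p))
y_lab = X_lab @ beta + rng.normal(size=n_lab)
X_unlab = rng.normal(size=(n_unlab, p))

# Supervised estimate of E[y] uses the labels only.
theta_sup = y_lab.mean()

# Semi-supervised estimate: fit a working linear model on the labeled data,
# then re-centre using the covariate mean from the combined labeled +
# unlabeled pool, which is estimated far more precisely.
Xc = X_lab - X_lab.mean(axis=0)
bhat = np.linalg.lstsq(Xc, y_lab - y_lab.mean(), rcond=None)[0]
mu_all = np.vstack([X_lab, X_unlab]).mean(axis=0)
theta_ssl = theta_sup + (mu_all - X_lab.mean(axis=0)) @ bhat

print("supervised:", round(theta_sup, 3), " semi-supervised:", round(theta_ssl, 3))
```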
AI research is increasingly industry-driven, making it crucial to understand company contributions to this field. We compare leading AI companies by research publications, citations, size of training runs, and contributions to algorithmic innovations. Our analysis reveals the substantial role played by Google, OpenAI and Meta. We find that these three companies have been responsible for some of the largest training runs, developed a large fraction of the algorithmic innovations that underpin large language models, and led in various metrics of citation impact. In contrast, leading Chinese companies such as Tencent and Baidu had a lower impact on many of these metrics compared to their US counterparts. We observe that many industry labs are pursuing large training runs, and that training runs from relative newcomers -- such as OpenAI and Anthropic -- have matched or surpassed those of long-standing incumbents such as Google. The data reveals a diverse ecosystem of companies steering AI progress, though US labs such as Google, OpenAI and Meta lead across critical metrics.
Although measuring held-out accuracy has been the primary approach to evaluate generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors. Inspired by principles of behavioral testing in software engineering, we introduce CheckList, a task-agnostic methodology for testing NLP models. CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly. We illustrate the utility of CheckList with tests for three tasks, identifying critical failures in both commercial and state-of-the-art models. In a user study, a team responsible for a commercial sentiment analysis model found new and actionable bugs in an extensively tested model. In another user study, NLP practitioners with CheckList created twice as many tests and found almost three times as many bugs as users without it.
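The sketch below shows, in plain Python, what two CheckList-style behavioural tests look like: a Minimum Functionality Test with known labels and an invariance test under a neutral edit. The predict() stand-in, the templates, and the perturbation are hypothetical; the actual CheckList tool provides much richer templating and perturbation utilities than this sketch.

```python
def predict(texts):
    """Stand-in for a sentiment model: returns 'pos' or 'neg' per text."""
    return ["neg" if "not" in t or "terrible" in t else "pos" for t in texts]

# Minimum Functionality Test (MFT): simple negation cases with known labels.
mft_cases = [("The food was not good.", "neg"),
             ("This is not a terrible movie.", "pos")]
mft_failures = [t for t, gold in mft_cases if predict([t])[0] != gold]

# Invariance test (INV): predictions should not change under a neutral edit,
# here swapping a person's name.
pairs = [("Mark loved the service.", "Anna loved the service."),
         ("Mark said the app crashes constantly.", "Anna said the app crashes constantly.")]
inv_failures = [p for p in pairs if predict([p[0]])[0] != predict([p[1]])[0]]

print(f"MFT failures: {len(mft_failures)} / {len(mft_cases)}")
print(f"INV failures: {len(inv_failures)} / {len(pairs)}")
```

Running this sketch flags the double-negation MFT case as a failure, which is exactly the kind of capability-specific bug that held-out accuracy alone would not surface.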