We propose a set of dependence measures that are non-linear, local, invariant to a wide range of transformations on the marginals, able to capture tail and risk asymmetries, always well-defined, easy to estimate, and applicable to any dataset. We propose a nonparametric estimator and prove its consistency and asymptotic normality. We thereby significantly improve on existing (extreme) dependence measures used in asset pricing and statistics. To show practical utility, we apply these measures to high-frequency stock return data around market distress events such as the 2010 Flash Crash and during the 2008 global financial crisis (GFC). Contrary to ubiquitously used correlations, our measures clearly show the tail asymmetry, non-linearity, lack of diversification, and endogenous buildup of risk present during these distress events. Additionally, our measures anticipate large (joint) losses during the Flash Crash while also anticipating the bounce-back and flagging the subsequent market fragility. Our findings have implications for risk management, portfolio construction, and hedging at any frequency.
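For context, the classical lower- and upper-tail dependence coefficients, which the proposed measures are designed to improve upon, have the textbook definitions below; this is standard notation, not the paper's new measure:
\[
\lambda_L = \lim_{q \downarrow 0} \mathbb{P}\!\left(Y \le F_Y^{-1}(q) \mid X \le F_X^{-1}(q)\right),
\qquad
\lambda_U = \lim_{q \uparrow 1} \mathbb{P}\!\left(Y > F_Y^{-1}(q) \mid X > F_X^{-1}(q)\right).
\]
An asymmetry $\lambda_L \neq \lambda_U$ indicates that joint crashes and joint booms are not equally likely, which is the kind of effect the proposed measures aim to capture.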
Friction drag from a turbulent fluid moving past or inside an object plays a crucial role in domains as diverse as transportation, public utility infrastructure, energy technology, and human health. As a direct measure of the shear-induced friction forces, an accurate prediction of the wall-shear stress can contribute to sustainability, conservation of resources, and carbon neutrality in civil aviation, as well as to enhanced medical treatment of vascular diseases and cancer. Despite such importance for our modern society, we still lack adequate experimental methods to capture the instantaneous wall-shear stress dynamics. In this contribution, we present a holistic approach that derives velocity and wall-shear stress fields with high spatial and temporal resolution from flow measurements using a deep optical flow estimator informed by physical knowledge. The validity and physical correctness of the derived flow quantities are demonstrated with synthetic and real-world experimental data covering a range of relevant fluid flows.
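For reference, the central quantity is the wall-shear stress, defined in standard notation (not specific to this paper) as
\[
\tau_w = \mu \left.\frac{\partial u}{\partial y}\right|_{y=0},
\]
where $\mu$ is the dynamic viscosity, $u$ the streamwise velocity, and $y$ the wall-normal coordinate; estimating it therefore requires velocity fields resolved all the way down to the wall.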
Over the past two decades, some scholars have noticed the correspondence between quantum mechanics and finance/economics and have made novel attempts to introduce the theoretical framework of quantum mechanics into financial and economic research; as a result, a new research domain called quantum finance or quantum economics was established. In particular, some studies have focused on the stock market, utilizing the quantum mechanical paradigm to describe the movement of stock prices. Nevertheless, the majority of these studies have concentrated on describing the motion of a single stock, drawing an analogy between the motion of a single stock and a one-dimensional infinite well or a one-dimensional harmonic oscillator, whose form resembles the one-electron Schr\"odinger equation, which can be solved analytically in most cases. Hitherto, the stock market system as a whole, composed of all stocks and stock indexes, has not been discussed. In this paper, the concept of a stock molecular system is proposed for the first time. The form of the stock molecular system resembles the multi-electron Schr\"odinger equation under the Born-Oppenheimer approximation. Similar to the interactions among all nuclei and electrons in a molecule, interactions exist among all stock indexes and stocks. This paper also establishes the index-index Coulomb potential, stock-index Coulomb potential, stock-stock Coulomb potential, and stock Coulomb correlation terms by means of statistical theory. Finally, the concept and feasibility of drawing upon density functional theory (DFT) to solve the Schr\"odinger equation of the stock molecular system are put forward together with a proof, and experiments are carried out on the CSI 300 index system.
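For concreteness, the quantum-chemical template the paper draws on is the clamped-nuclei electronic Schr\"odinger equation in atomic units (standard quantum-chemistry notation, not the paper's own symbols):
\[
\hat{H}_{\mathrm{el}} = -\sum_{i} \tfrac{1}{2}\nabla_i^2 - \sum_{i,A} \frac{Z_A}{r_{iA}} + \sum_{i<j} \frac{1}{r_{ij}} + \sum_{A<B} \frac{Z_A Z_B}{R_{AB}},
\qquad
\hat{H}_{\mathrm{el}}\,\Psi(\mathbf{r};\mathbf{R}) = E(\mathbf{R})\,\Psi(\mathbf{r};\mathbf{R}),
\]
where $i,j$ index electrons and $A,B$ index nuclei with charges $Z_A$, held fixed under the Born-Oppenheimer approximation. In the paper's analogy, stock indexes play the role of nuclei and individual stocks the role of electrons.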
Over the last decade, approximating functions in infinite dimensions from samples has gained increasing attention in computational science and engineering, especially in computational uncertainty quantification. This is primarily due to the relevance of functions that are solutions to parametric differential equations in various fields, e.g. chemistry, economics, engineering, and physics. While acquiring accurate and reliable approximations of such functions is inherently difficult, current benchmark methods exploit the fact that such functions often belong to certain classes of holomorphic functions to obtain algebraic convergence rates in infinite dimensions with respect to the number of (potentially adaptive) samples $m$. Our work focuses on providing theoretical approximation guarantees for the class of $(\boldsymbol{b},\varepsilon)$-holomorphic functions, demonstrating that these algebraic rates are the best possible for Banach-valued functions in infinite dimensions. We establish lower bounds using a reduction to a discrete problem in combination with the theory of $m$-widths, Gelfand widths and Kolmogorov widths. We study two cases, known and unknown anisotropy, in which the relative importance of the variables is known and unknown, respectively. A key conclusion of our paper is that in the latter setting, approximation from finite samples is impossible without some inherent ordering of the variables, even if the samples are chosen adaptively. Finally, in both cases, we demonstrate near-optimal, non-adaptive (random) sampling and recovery strategies which achieve rates close to those given by the lower bounds.
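For reference, the Kolmogorov and Gelfand widths invoked here have the standard definitions (for a subset $K$ of a Banach space $X$; this is textbook notation, not necessarily the paper's):
\[
d_m(K)_X = \inf_{\substack{X_m \subseteq X \\ \dim X_m \le m}} \; \sup_{f \in K} \; \inf_{g \in X_m} \|f - g\|_X,
\qquad
d^m(K)_X = \inf_{\substack{L \subseteq X \\ \operatorname{codim} L \le m}} \; \sup_{f \in K \cap L} \|f\|_X,
\]
quantifying, respectively, the best achievable error of $m$-dimensional linear approximation and of recovery from $m$ linear measurements.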
Engineers are often faced with the decision of selecting the most appropriate model for simulating the behavior of engineered systems from a candidate set of models. Experimental monitoring data can generate significant value by supporting engineers in such decisions. Such data can be leveraged within a Bayesian model updating process, enabling the uncertainty-aware calibration of any candidate model. The model selection task can subsequently be cast as a problem of decision-making under uncertainty, where one seeks the model that yields an optimal balance between the reward associated with model precision, in terms of recovering target Quantities of Interest (QoI), and the cost of each model, in terms of complexity and compute time. In this work, we examine the model selection task by means of Bayesian decision theory, through the prism of the availability of models of varying refinement, and thus varying levels of fidelity. In doing so, we offer an exemplary application of this framework to the IMAC-MVUQ Round-Robin Challenge. Numerical investigations show that the outcome of model selection depends on the target QoI.
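The decision-theoretic selection rule can be illustrated with a minimal sketch. The model names, posterior-predictive QoI samples, cost values, and the quadratic reward-minus-cost utility below are illustrative assumptions, not the challenge's actual models or scoring rule:

```python
import numpy as np

# Bayesian decision-theoretic model selection, minimal sketch.
rng = np.random.default_rng(0)
qoi_target = 1.0  # target Quantity of Interest

# hypothetical candidates: posterior-predictive QoI samples and run costs
models = {
    "low_fidelity":  {"qoi_samples": rng.normal(1.20, 0.40, 1000), "cost": 1.0},
    "mid_fidelity":  {"qoi_samples": rng.normal(1.03, 0.12, 1000), "cost": 10.0},
    "high_fidelity": {"qoi_samples": rng.normal(1.01, 0.05, 1000), "cost": 200.0},
}

def expected_utility(qoi_samples, cost, lam=0.01):
    # reward for precision: negative posterior-predictive squared error on
    # the QoI, penalised by compute cost through the trade-off weight lam
    mse = np.mean((qoi_samples - qoi_target) ** 2)
    return -mse - lam * cost

best = max(models, key=lambda name: expected_utility(**models[name]))
print(best)  # "mid_fidelity" under these illustrative numbers
```

Under these numbers the mid-fidelity model wins: the high-fidelity model is more precise but its cost outweighs the gain, which is exactly the trade-off the framework formalizes.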
The accuracy of solving partial differential equations (PDEs) on coarse grids is greatly affected by the choice of discretization schemes. In this work, we propose to learn time integration schemes based on neural networks which satisfy three distinct sets of mathematical constraints, i.e., unconstrained, semi-constrained with the root condition, and fully-constrained with both root and consistency conditions. We focus on learning 3-step linear multistep methods, which we subsequently apply to solve three model PDEs, i.e., the one-dimensional heat equation, the one-dimensional wave equation, and the one-dimensional Burgers' equation. The results show that the prediction error of the learned fully-constrained scheme is close to that of the Runge-Kutta and Adams-Bashforth methods. Compared to the traditional methods, the learned unconstrained and semi-constrained schemes significantly reduce the prediction error on coarse grids. On a grid 4 times coarser than the reference grid, the mean square error is reduced by up to an order of magnitude for some of the heat equation cases, and the phase prediction for the wave equation improves substantially. On a 32 times coarser grid, the mean square error for the Burgers' equation is reduced by 35% to 40%.
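As an illustration of the constraints involved, the following sketch checks the root condition and the consistency conditions for the classical 3-step Adams-Bashforth method; a learned scheme's coefficients could be checked the same way. The helper is illustrative, not the paper's code:

```python
import numpy as np

# Zero-stability (root condition) and consistency checks for a 3-step
# linear multistep method  sum_j a_j u_{n+j} = dt * sum_j b_j f_{n+j}.

def satisfies_root_condition(a, tol=1e-10):
    # roots of the characteristic polynomial rho(z) = sum_j a_j z^j must
    # lie in |z| <= 1, and roots on |z| = 1 must be simple
    roots = np.roots(a[::-1])  # np.roots expects highest degree first
    if not (np.abs(roots) < 1.0 + tol).all():
        return False
    circle = roots[np.abs(np.abs(roots) - 1.0) < tol]
    for r in circle:
        if np.sum(np.abs(circle - r) < tol) > 1:  # repeated unimodular root
            return False
    return True

# Adams-Bashforth 3: u_{n+3} = u_{n+2} + dt*(23 f_{n+2} - 16 f_{n+1} + 5 f_n)/12
a = np.array([0.0, 0.0, -1.0, 1.0])
b = np.array([5.0, -16.0, 23.0, 0.0]) / 12.0
print(satisfies_root_condition(a))  # True

# consistency: rho(1) = 0 and rho'(1) = sigma(1)
j = np.arange(4)
print(np.isclose(a.sum(), 0.0), np.isclose((j * a).sum(), b.sum()))  # True True
```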
Generative AI has seen remarkable growth over the past few years, with diffusion models being state-of-the-art for image generation. This study investigates the use of diffusion models for generating artificial data for electronic circuits, in order to enhance the accuracy of subsequent machine learning models in tasks such as performance assessment, design, and testing, where training data is typically very limited. We use simulations in the HSPICE design environment with a 22 nm CMOS technology node to obtain representative real training data for our proposed diffusion model. Our results demonstrate that the synthetic data generated by the diffusion model closely resembles the real data. We validate the quality of the generated data and demonstrate that data augmentation is effective in the predictive analysis of VLSI design for digital circuits.
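A minimal sketch of the underlying mechanism, training a noise-prediction network on tabular stand-in data and then sampling from it, is given below. The toy data, MLP architecture, and noise schedule are assumptions for illustration; the paper's actual HSPICE data and model are not reproduced:

```python
import torch
import torch.nn as nn

# Minimal DDPM-style sketch: an MLP learns to predict the noise added to
# tabular samples (stand-ins for circuit performance vectors).
torch.manual_seed(0)
T, dim = 100, 4
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

real_data = torch.randn(256, dim) * 0.5 + 1.0  # placeholder "measurements"

model = nn.Sequential(nn.Linear(dim + 1, 64), nn.ReLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    x0 = real_data[torch.randint(0, 256, (64,))]
    t = torch.randint(0, T, (64,))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].unsqueeze(1)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps      # forward noising
    pred = model(torch.cat([xt, t.unsqueeze(1) / T], dim=1))
    loss = ((pred - eps) ** 2).mean()                # noise-prediction loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# ancestral sampling: run the learned reverse process from pure noise
with torch.no_grad():
    x = torch.randn(8, dim)
    for t in reversed(range(T)):
        tt = torch.full((8, 1), t / T)
        eps_pred = model(torch.cat([x, tt], dim=1))
        alpha_t = 1.0 - betas[t]
        x = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps_pred) / alpha_t.sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
print(x.mean(0))  # synthetic samples; mean should approach ~1.0
```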
This article proposes entropy stable discontinuous Galerkin (DG) schemes for two-fluid relativistic plasma flow equations. These equations couple the flow of relativistic fluids via electromagnetic quantities evolved using Maxwell's equations. The proposed schemes are based on the Gauss-Lobatto quadrature rule, which has the summation-by-parts (SBP) property. We exploit the structure of the equations, whose flux has three independent parts coupled via nonlinear source terms. We design entropy stable DG schemes for each flux part; combined with the fact that the source terms do not affect entropy, this results in an entropy stable scheme for the complete system. The proposed schemes are then tested on various test problems in one and two dimensions to demonstrate their accuracy and stability.
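The SBP property underlying such schemes can be verified directly for the lowest nontrivial Gauss-Lobatto operator; the following check is illustrative and independent of the paper's implementation:

```python
import numpy as np

# Verify the SBP property  M @ D + (M @ D).T == B  for the 3-point
# Gauss-Lobatto collocation operator on [-1, 1] (polynomial degree N = 2).
# Nodes {-1, 0, 1}, quadrature weights {1/3, 4/3, 1/3}; D is the Lagrange
# differentiation matrix on these nodes, B the boundary matrix.
D = np.array([[-1.5,  2.0, -0.5],
              [-0.5,  0.0,  0.5],
              [ 0.5, -2.0,  1.5]])
M = np.diag([1/3, 4/3, 1/3])
B = np.diag([-1.0, 0.0, 1.0])

Q = M @ D
print(np.allclose(Q + Q.T, B))  # True: discrete integration by parts
```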
We numerically investigate the possibility of defining stabilization-free Virtual Element Method (VEM) discretizations of advection-diffusion problems in the advection-dominated regime. To this end, we consider a SUPG-stabilized formulation of the scheme. Numerical tests comparing the proposed method with standard VEM show that removing the additional arbitrary stabilization term typical of VEM schemes, which adds artificial diffusion to the discrete solution, leads to a better approximation of boundary layers, in particular for low-order schemes.
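For reference, a generic SUPG-stabilized weak form of the advection-diffusion problem $-\varepsilon \Delta u + \boldsymbol{\beta} \cdot \nabla u = f$ reads as follows (textbook form; the VEM-specific polynomial projections are omitted here):
\[
\varepsilon(\nabla u_h, \nabla v_h) + (\boldsymbol{\beta}\cdot\nabla u_h, v_h)
+ \sum_{E} \tau_E \left( -\varepsilon \Delta u_h + \boldsymbol{\beta}\cdot\nabla u_h - f,\; \boldsymbol{\beta}\cdot\nabla v_h \right)_E = (f, v_h),
\]
where the sum runs over the elements $E$ of the mesh and $\tau_E$ is an element-wise stabilization parameter.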
Spectral independence is a recently developed framework for obtaining sharp bounds on the convergence time of the classical Glauber dynamics. This new framework has yielded optimal $O(n \log n)$ sampling algorithms on bounded-degree graphs for a large class of problems throughout the so-called uniqueness regime, including, for example, the problems of sampling independent sets, matchings, and Ising-model configurations. Our main contribution is to relax the bounded-degree assumption that has so far been important in establishing and applying spectral independence. Previous methods for avoiding degree bounds rely on using $L^p$-norms to analyse contraction on graphs with bounded connective constant (Sinclair, Srivastava, Yin; FOCS'13). The non-linearity of $L^p$-norms is an obstacle to applying these results to bound spectral independence. Our solution is to capture the $L^p$-analysis recursively by amortising over the subtrees of the recurrence used to analyse contraction. Our method generalises previous analyses that applied only to bounded-degree graphs. As a main application of our techniques, we consider the random graph $G(n,d/n)$, for which the previously known algorithms run in time $n^{O(\log d)}$ or apply only to large $d$. We refine these algorithmic bounds significantly, and develop fast $n^{1+o(1)}$ algorithms based on Glauber dynamics that apply to all $d$, throughout the uniqueness regime.
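A toy illustration of the chain in question, Glauber dynamics for the hardcore (weighted independent set) model on $G(n,d/n)$, is sketched below; the parameter values are arbitrary and the sketch does not reproduce the paper's analysis:

```python
import random

# Glauber dynamics for the hardcore model with fugacity lam on an
# Erdos-Renyi random graph G(n, d/n).
random.seed(0)
n, d, lam = 200, 2.0, 0.5
adj = [set() for _ in range(n)]
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < d / n:
            adj[u].add(v)
            adj[v].add(u)

in_set = [False] * n
for step in range(50 * n):  # crude stand-in for the O(n log n) mixing time
    v = random.randrange(n)
    if any(in_set[u] for u in adj[v]):
        in_set[v] = False   # an occupied neighbour forces v out
    else:
        in_set[v] = random.random() < lam / (1 + lam)  # heat-bath update

print(sum(in_set), "vertices in the sampled independent set")
```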
The aim of this paper is to show the relationship between a person's being right- or left-handed and their skateboarding stance. Starting from the null hypothesis that there is no relationship, Pearson's $\chi^2$ test with Yates' correction, together with its p-value, will be used to test the hypothesis. The residuals, Cram\'er's V, and the risk and odds ratios, with their respective confidence intervals, will also be calculated and analyzed to assess the strength of the association.
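A minimal sketch of the planned analysis, using hypothetical counts (the paper's real survey data is not reproduced here), could look as follows:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: handedness (rows) vs stance (cols: regular, goofy).
table = np.array([[48, 22],    # right-handed
                  [ 9, 21]])   # left-handed

chi2, p, dof, expected = chi2_contingency(table, correction=True)  # Yates
n = table.sum()
cramers_v = np.sqrt(chi2 / n)  # for a 2x2 table, min(r-1, c-1) = 1

# odds ratio with a 95% CI via the log-OR normal approximation
a, b, c, d = table.ravel().astype(float)
odds_ratio = (a * d) / (b * c)
se = np.sqrt(1/a + 1/b + 1/c + 1/d)
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se)

print(f"chi2={chi2:.2f}, p={p:.4f}, V={cramers_v:.2f}, "
      f"OR={odds_ratio:.2f} [{lo:.2f}, {hi:.2f}]")
```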