Developing a system of indicators that reflects the degree to which the securities market fulfils its key functions is essential for assessing its level of development. Under conditions of asymmetric information, such a system can also inform effective policies for securities market development. This paper aims to develop a set of indicators to assess securities market performance, especially in the asymmetric-information context. To this end, we selected the Russian securities market as a case of an asymmetric-information context and, in comparison with other post-Soviet countries, investigated its successes and failures in fulfilling its key functions. Accordingly, we developed research hypotheses and applied a normative research method, based on an ideal model of market functioning that is used as a criterion for testing the hypotheses. The results offer an original scale for assessing how well the securities market performs its functions. The findings also support practitioners in effective policy making for securities market regulation and its development toward the ideal state. The key contribution of this research is a new scale for determining the performance and efficiency of the securities market under conditions of information asymmetry.
This paper presents the error analysis of numerical methods on graded meshes for stochastic Volterra equations with weakly singular kernels. We first prove a novel regularity estimate for the exact solution by analyzing the associated convolution structure. This reveals that the exact solution exhibits an initial singularity, in the sense that its H\"older continuity exponent on any neighborhood of $t=0$ is lower than that on every compact subset of $(0,T]$. Motivated by this initial singularity, we then construct the Euler--Maruyama method, the fast Euler--Maruyama method, and the Milstein method on graded meshes. By establishing their pointwise-in-time error estimates, we give the grading exponents of the meshes that attain the optimal uniform-in-time convergence orders, which improve on those obtained with uniform meshes. Numerical experiments are finally reported to confirm the sharpness of the theoretical findings.
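As a concrete illustration of the graded-mesh idea, here is a minimal Python sketch of a left-point Euler--Maruyama scheme for a scalar stochastic Volterra equation with kernel $(t-s)^{\alpha-1}$, $0<\alpha<1$, on the mesh $t_j = T(j/N)^r$. The coefficient functions, the kernel exponent, and the discretization details are illustrative assumptions, not necessarily the exact schemes analyzed in the paper.

```python
import numpy as np

def graded_mesh(T, N, r):
    """Graded mesh t_j = T * (j / N)**r; r > 1 clusters nodes near t = 0."""
    return T * (np.arange(N + 1) / N) ** r

def euler_maruyama_volterra(x0, b, sigma, alpha, T, N, r, rng):
    """Left-point Euler--Maruyama scheme on a graded mesh for the scalar equation
    X(t) = x0 + int_0^t (t - s)**(alpha - 1) * b(X(s)) ds
              + int_0^t (t - s)**(alpha - 1) * sigma(X(s)) dW(s),  0 < alpha < 1."""
    t = graded_mesh(T, N, r)
    dW = rng.normal(0.0, np.sqrt(np.diff(t)))   # Brownian increments on the mesh
    X = np.empty(N + 1)
    X[0] = x0
    for n in range(1, N + 1):
        drift, diffusion = 0.0, 0.0
        for j in range(n):
            # integrate the singular kernel exactly over [t_j, t_{j+1}] for the drift
            w = ((t[n] - t[j]) ** alpha - (t[n] - t[j + 1]) ** alpha) / alpha
            drift += w * b(X[j])
            # freeze the kernel at the left endpoint for the stochastic integral
            diffusion += (t[n] - t[j]) ** (alpha - 1) * sigma(X[j]) * dW[j]
        X[n] = x0 + drift + diffusion
    return t, X

# usage sketch: linear coefficients, alpha = 0.6, grading exponent r = 2
rng = np.random.default_rng(0)
t, X = euler_maruyama_volterra(1.0, lambda x: -x, lambda x: 0.5 * x,
                               alpha=0.6, T=1.0, N=200, r=2.0, rng=rng)
```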
As the development of formal proofs is a time-consuming task, it is important to devise ways of sharing already written proofs to prevent wasting time redoing them. One of the challenges in this domain is to translate proofs written in proof assistants based on impredicative logics to proof assistants based on predicative logics, whenever impredicativity is not used in an essential way. In this paper we present a transformation for sharing proofs with a core predicative system supporting prenex universe polymorphism (as in Agda). It consists in trying to elaborate each term into a predicative universe-polymorphic term that is as general as possible. The use of universe polymorphism is justified by the fact that mapping each universe to a fixed one in the target theory is not sufficient in most cases. During the elaboration, we need to solve unification problems in the equational theory of universe levels. In order to do this, we give a complete characterization of when a single equation admits a most general unifier. This characterization is then employed in an algorithm which uses a constraint-postponement strategy to solve unification problems. The proposed translation is of course partial, but in practice it allows one to translate many proofs that do not use impredicativity in an essential way. Indeed, it was implemented in the tool Predicativize and then used to translate semi-automatically many non-trivial developments from Matita's library to Agda, including proofs of Bertrand's Postulate and Fermat's Little Theorem, which, to the best of our knowledge, were not previously available in Agda.
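To give a flavour of the level-unification problems involved, the following LaTeX snippet sketches the usual universe-level grammar (zero, successor, binary max, variables) together with one equation that has a most general unifier and one that, under the standard semantics of levels as natural numbers, admits only incomparable solutions. The examples are ours for illustration and are not taken from the paper's characterization.

```latex
% Universe levels in Agda-style prenex level polymorphism:
\[
  \ell \;::=\; 0 \;\mid\; \mathsf{s}\,\ell \;\mid\; \ell_1 \sqcup \ell_2 \;\mid\; x
  \qquad (x \text{ a level variable}).
\]
% An equation with a most general unifier:
\[
  x \;=\; y \sqcup \mathsf{s}\,0
  \qquad\leadsto\qquad
  \{\, x \mapsto y \sqcup \mathsf{s}\,0 \,\}.
\]
% An equation with no most general unifier: $x \sqcup y = \mathsf{s}\,0$ forces both $x$ and
% $y$ to be mapped to variable-free levels, leaving only the incomparable solutions
\[
  \{x \mapsto \mathsf{s}\,0,\ y \mapsto 0\}, \quad
  \{x \mapsto 0,\ y \mapsto \mathsf{s}\,0\}, \quad
  \{x \mapsto \mathsf{s}\,0,\ y \mapsto \mathsf{s}\,0\}.
\]
```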
We consider sensitivity analysis to assess the worst-case possible values of counterfactual outcome means and average treatment effects under sequential unmeasured confounding in a longitudinal study with time-varying treatments and covariates. We formulate several multi-period sensitivity models to relax the corresponding versions of the assumption of sequential unconfounding. The primary sensitivity model involves only counterfactual outcomes, whereas the joint and product sensitivity models involve both counterfactual covariates and outcomes. We establish and compare explicit representations for the sharp and conservative bounds at the population level through convex optimization, depending only on the observed data. These results provide, for the first time, a satisfactory generalization of the marginal sensitivity model from the cross-sectional setting to the longitudinal setting.
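For reference, the cross-sectional marginal sensitivity model that the paper generalizes can be sketched in LaTeX as below; the notation ($e(x)$ for the observed propensity score, $e_a(x,y)$ for the propensity score conditional on the counterfactual outcome, $\Lambda$ for the sensitivity parameter) is assumed here and may differ from the paper's.

```latex
% Cross-sectional marginal sensitivity model (notation assumed): with observed propensity
% score $e(x) = P(A = 1 \mid X = x)$ and counterfactual-outcome-conditional propensity
% $e_a(x, y) = P(A = 1 \mid X = x, Y(a) = y)$, the parameter $\Lambda \ge 1$ bounds the odds ratio
\[
  \Lambda^{-1}
  \;\le\;
  \frac{e_a(x, y)\,\{1 - e(x)\}}{e(x)\,\{1 - e_a(x, y)\}}
  \;\le\; \Lambda
  \qquad \text{for all } x,\ y,\ a \in \{0, 1\}.
\]
% Unconfoundedness corresponds to $\Lambda = 1$; the multi-period models in the paper relax
% its longitudinal analogues one treatment period at a time.
```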
The modeling of high-frequency data that characterize financial asset transactions has been an area of considerable interest among statisticians and econometricians, above all the analysis of time series of financial durations. Autoregressive conditional duration (ACD) models have been the main tool for modeling financial transaction data, where duration is usually defined as the time interval between two successive events. These models are usually specified in terms of a time-varying conditional mean (or median) duration. In this paper, a new extension of ACD models is proposed, built on log-symmetric distributions reparametrized by their quantile. The proposed quantile log-symmetric conditional duration autoregressive model allows us to model different percentiles instead of the traditionally used conditional mean (or median) duration. We carry out an in-depth study of theoretical properties and practical issues, such as parameter estimation by the maximum likelihood method and diagnostic analysis based on residuals. A detailed Monte Carlo simulation study is also carried out to evaluate the performance of the proposed models and estimation method in retrieving the true parameter values, as well as to evaluate a proposed form of residuals. Finally, the proposed class of models is applied to a price duration data set and then used to derive a semi-parametric intraday value-at-risk (IVaR) model.
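The following Python sketch illustrates the quantile-reparametrization idea using the log-normal distribution, a member of the log-symmetric family, and a simple logarithmic recursion for the conditional quantile of durations. The dynamics, parameter names, and starting value are illustrative assumptions rather than the paper's exact specification.

```python
import numpy as np
from scipy.stats import norm

def simulate_quantile_acd(n, q=0.5, omega=0.1, alpha=0.2, beta=0.7,
                          sigma=0.8, rng=None):
    """Simulate durations x_t from a log-normal ACD-type model whose
    q-th conditional quantile Q_t follows
        log Q_t = omega + alpha * log x_{t-1} + beta * log Q_{t-1}.
    For a log-normal with location mu and shape sigma, the q-th quantile is
    exp(mu + sigma * z_q), so we set mu_t = log Q_t - sigma * z_q."""
    rng = rng or np.random.default_rng()
    z_q = norm.ppf(q)
    x = np.empty(n)
    log_Q = np.empty(n)
    log_Q[0] = omega / (1 - beta)          # rough stationary-type starting value
    for t in range(n):
        if t > 0:
            log_Q[t] = omega + alpha * np.log(x[t - 1]) + beta * log_Q[t - 1]
        mu_t = log_Q[t] - sigma * z_q      # re-centre so Q_t is the q-th quantile
        x[t] = np.exp(mu_t + sigma * rng.standard_normal())
    return x

# usage: simulate 1000 durations whose 90th conditional percentile is modelled
durations = simulate_quantile_acd(1000, q=0.9)
```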
Probabilistic price forecasting has recently gained attention in power trading because decisions based on such predictions can yield significantly higher profits than those made with point forecasts alone. At the same time, methods are being developed to combine predictive distributions, since no model is perfect and averaging generally improves forecasting performance. In this article we address the question of whether using CRPS learning, a novel weighting technique minimizing the continuous ranked probability score (CRPS), leads to optimal decisions in day-ahead bidding. To this end, we conduct an empirical study using hourly day-ahead electricity prices from the German EPEX market. We find that increasing the diversity of an ensemble can have a positive impact on accuracy. At the same time, the higher computational cost of using CRPS learning compared to an equal-weighted aggregation of distributions is not offset by higher profits, despite significantly more accurate predictions.
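As a toy illustration of CRPS-based weight learning, the sketch below computes the sample-based CRPS and selects, by grid search, the weight of a quantile-averaged combination of two predictive samples that minimizes average CRPS over a history of observations. This is a simplified stand-in for the online CRPS-learning algorithm studied in the article, and all function names are illustrative.

```python
import numpy as np

def crps_sample(samples, y):
    """Sample-based CRPS: E|X - y| - 0.5 * E|X - X'| for draws X, X' from forecast F."""
    s = np.asarray(samples, dtype=float)
    return np.mean(np.abs(s - y)) - 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))

def quantile_average(samples_a, samples_b, w):
    """Combine two forecasts by averaging their quantile functions (sorted samples)."""
    return w * np.sort(samples_a) + (1.0 - w) * np.sort(samples_b)

def learn_weight(hist_a, hist_b, y_hist, grid=np.linspace(0.0, 1.0, 21)):
    """Grid-search the combination weight minimising average CRPS over past observations."""
    scores = [np.mean([crps_sample(quantile_average(sa, sb, w), y)
                       for sa, sb, y in zip(hist_a, hist_b, y_hist)])
              for w in grid]
    return float(grid[int(np.argmin(scores))])
```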
Statistical techniques are needed to analyse data structures with complex dependencies so that clinically useful information can be extracted. Individual-specific networks, which capture dependencies in complex biological systems, are often summarized by graph-theoretical features. These features, which lend themselves to outcome modelling, can be subject to high variability due to arbitrary decisions in network inference and noise. Correlation-based adjacency matrices often need to be sparsified before meaningful graph-theoretical features can be extracted, requiring the data analyst to determine an optimal threshold. To address this issue, we propose to incorporate a flexible weighting function over the full range of possible thresholds to capture the variability of graph-theoretical features over the threshold domain. The potential of this approach, which extends concepts from functional data analysis to a graph-theoretical setting, is explored in a plasmode simulation study using real functional magnetic resonance imaging (fMRI) data from the Autism Brain Imaging Data Exchange (ABIDE) Preprocessed initiative. The simulations show that our modelling approach yields accurate estimates of the functional form of the weight function, improves inference efficiency, and achieves a comparable or reduced root mean square prediction error compared with competing modelling approaches. This holds both when complex functional forms underlie the outcome-generating process and when a universal threshold value is employed. We demonstrate the practical utility of our approach by using resting-state fMRI data to predict biological age in children. Our study establishes the flexible modelling approach as a statistically principled alternative to ad-hoc methods, with superior performance.
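A minimal sketch of the threshold-varying feature construction: threshold the absolute correlation matrix on a grid, compute a graph-theoretical feature (here the average clustering coefficient via networkx) at each threshold, and summarize the resulting curve with a weight function. The choice of feature, the integration rule, and all names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
import networkx as nx

def feature_over_thresholds(corr, thresholds):
    """Average clustering coefficient of the graph obtained by thresholding |corr|
    at each value in `thresholds` (self-loops removed)."""
    feats = []
    for tau in thresholds:
        adj = (np.abs(corr) > tau).astype(float)
        np.fill_diagonal(adj, 0.0)
        G = nx.from_numpy_array(adj)
        feats.append(nx.average_clustering(G))
    return np.array(feats)

def weighted_summary(feature_curve, weights, thresholds):
    """Integrate the feature curve against a weight function over the threshold domain;
    with a point-mass weight this reduces to picking a single threshold."""
    dt = np.gradient(thresholds)
    return float(np.sum(weights * feature_curve * dt))

# usage sketch: uniform weighting of the clustering curve over thresholds 0.1, ..., 0.9
thresholds = np.linspace(0.1, 0.9, 17)
corr = np.corrcoef(np.random.default_rng(1).normal(size=(20, 200)))
curve = feature_over_thresholds(corr, thresholds)
summary = weighted_summary(curve, np.ones_like(thresholds) / thresholds.ptp(), thresholds)
```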
There has been growing interest in high-order tensor methods with adaptive regularization for nonconvex optimization, as they possess better or optimal worst-case evaluation complexity globally and faster convergence asymptotically. These algorithms crucially rely on repeatedly minimizing nonconvex multivariate Taylor-based polynomial sub-problems, at least locally. Finding efficient techniques for the solution of these sub-problems, beyond the second-order case, has been an open question. This paper proposes a second-order method, Quadratic Quartic Regularisation (QQR), for efficiently minimizing nonconvex quartically-regularized cubic polynomials, such as the AR$p$ sub-problem [3] with $p=3$. Inspired by [35], QQR approximates the third-order tensor term by a linear combination of quadratic and quartic terms, yielding (possibly nonconvex) local models that are solvable to global optimality. In order to achieve accuracy $\epsilon$ in the first-order criticality of the sub-problem, we show that the error in the QQR method decreases either linearly or by at least $\mathcal{O}(\epsilon^{4/3})$ for locally convex iterations, while in the sufficiently nonconvex case, by at least $\mathcal{O}(\epsilon)$; thus improving, on these types of iterations, the general cubic-regularization bound. Preliminary numerical experiments indicate that two QQR variants perform competitively with state-of-the-art approaches such as ARC (also known as AR$p$ with $p=2$), achieving either a lower objective value or fewer iterations.
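In LaTeX, the sub-problem and the kind of surrogate described above can be sketched as follows; the precise form of the quadratic term and the per-iteration coefficients are our shorthand, not the paper's exact construction.

```latex
% The AR$p$ ($p=3$) sub-problem: minimise over $s \in \mathbb{R}^n$
\[
  m(s) \;=\; f_0 + g^{\top} s + \tfrac{1}{2}\, s^{\top} H s
        + \tfrac{1}{6}\, T[s]^3 + \tfrac{\sigma}{4}\, \|s\|^4 ,
\]
% with $g$, $H$, $T$ the first three derivatives of the objective at the current point.
% QQR replaces the third-order term by a quadratic-plus-quartic surrogate, schematically
\[
  \tfrac{1}{6}\, T[s]^3 \;\approx\; \tfrac{1}{2}\, s^{\top} M_k\, s + \tfrac{b_k}{4}\, \|s\|^4 ,
\]
% giving a (possibly nonconvex) quadratic-quartic local model that can be minimised to
% global optimality at each iteration.
```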
Over the past decades, cognitive neuroscientists and behavioral economists have recognized the value of describing the process of decision making in detail and modeling the emergence of decisions over time. For example, the time it takes to decide can reveal more about an agent's true hidden preferences than the decision itself. Similarly, data that track the ongoing decision process, such as eye movements or neural recordings, contain critical information that can be exploited, even if no decision is made. Here, we argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time and from incorporating related process data to improve AI predictions in general and human-AI interactions in particular. First, we introduce a well-established computational framework that assumes decisions emerge from the noisy accumulation of evidence, and we present related empirical work in psychology, neuroscience, and economics. Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making. Finally, we outline how a more principled inclusion of the evidence-accumulation framework into the training and use of AI can help to improve human-AI interactions in the future.
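A minimal Python sketch of the evidence-accumulation (drift-diffusion) framework referred to above: noisy evidence drifts toward one of two bounds, and both the choice and the response time are read off the first crossing. All parameter values are illustrative.

```python
import numpy as np

def drift_diffusion_trial(drift=0.3, bound=1.0, noise=1.0, dt=0.001,
                          non_decision=0.2, max_time=5.0, rng=None):
    """Simulate one trial of a drift-diffusion process: evidence x starts at 0,
    accumulates with mean rate `drift` plus Gaussian noise, and a decision is
    made when |x| reaches `bound`. Returns (choice, response_time)."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_time:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = 1 if x >= bound else (0 if x <= -bound else None)  # None: no decision yet
    return choice, non_decision + t

# stronger drift yields faster and more often "upper-bound" (choice = 1) responses
trials = [drift_diffusion_trial(drift=0.5) for _ in range(100)]
```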
When testing a statistical hypothesis, is it legitimate to deliberate on the basis of initial data about whether and how to collect further data? Game-theoretic probability's fundamental principle for testing by betting says yes, provided that you are testing by betting and do not risk more capital than initially committed. Standard statistical theory uses Cournot's principle, which does not allow such optional continuation. Cournot's principle can be extended to allow optional continuation when testing is carried out by multiplying likelihood ratios, but the extension lacks the simplicity and generality of testing by betting. Game-theoretic probability can also help us with descriptive data analysis. To obtain a purely and honestly descriptive analysis using competing probability distributions, we have them bet against each other using the Kelly principle. The place of confidence intervals is then taken by sets of distributions that do relatively well in the competition. In the simplest implementation, these sets coincide with R. A. Fisher's likelihood intervals.
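A toy Python sketch of testing by betting: starting from unit capital, the bettor multiplies their wealth at each observation by the likelihood ratio of an alternative $q$ to the null $p$, which is the Kelly bet against $p$ for someone who believes $q$. The Gaussian choices of $p$ and $q$ are illustrative.

```python
import numpy as np
from scipy.stats import norm

def betting_wealth(xs, p=norm(0.0, 1.0), q=norm(0.5, 1.0)):
    """Testing by betting: start with capital 1 and, at each observation, multiply
    wealth by the likelihood ratio q(x)/p(x). The running wealth is a nonnegative
    martingale under p, so a large final value discredits p."""
    return np.cumprod(q.pdf(xs) / p.pdf(xs))

rng = np.random.default_rng(0)
xs = rng.normal(0.5, 1.0, size=200)   # data actually drawn from the alternative
print(betting_wealth(xs)[-1])         # typically large: strong evidence against p
```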
In epidemiology and social sciences, propensity score methods are popular for estimating treatment effects using observational data, and multiple imputation is popular for handling covariate missingness. However, how to appropriately use multiple imputation for propensity score analysis is not completely clear. This paper aims to bring clarity to the consistency (or lack thereof) of methods that have been proposed, focusing on the within approach (where the effect is estimated separately in each imputed dataset and then the multiple estimates are combined) and the across approach (where typically propensity scores are averaged across imputed datasets before being used for effect estimation). We show that the within method is valid and can be used with any causal effect estimator that is consistent in the full-data setting. Existing across methods are inconsistent, but a different across method that averages the inverse probability weights across imputed datasets is consistent for propensity score weighting. We also comment on methods that rely on imputing a function of the missing covariate rather than the covariate itself, including imputation of the propensity score and of the probability weight. Based on consistency results and practical flexibility, we recommend generally using the standard within method. Throughout, we provide intuition to make the results meaningful to the broad audience of applied researchers.
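To make the two strategies concrete, here is a hedged Python sketch, assuming fully observed treatment and outcome with multiply imputed covariates and a logistic propensity model fitted via scikit-learn, of the within approach and of the across variant that averages inverse probability weights. Function names and modelling choices are illustrative, not the paper's notation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ip_weights(X, a):
    """Inverse-probability-of-treatment weights from a logistic propensity model."""
    ps = LogisticRegression(max_iter=1000).fit(X, a).predict_proba(X)[:, 1]
    return a / ps + (1 - a) / (1 - ps)

def ipw_contrast(w, a, y):
    """Weighted difference in mean outcomes between treated and untreated units."""
    return np.average(y, weights=a * w) - np.average(y, weights=(1 - a) * w)

def within_approach(imputed_Xs, a, y):
    """'Within': estimate the effect in each imputed dataset, then average the estimates."""
    return np.mean([ipw_contrast(ip_weights(X, a), a, y) for X in imputed_Xs])

def across_weights_approach(imputed_Xs, a, y):
    """Consistent 'across' variant: average the inverse probability weights over
    imputations first, then form a single weighted contrast."""
    w_bar = np.mean([ip_weights(X, a) for X in imputed_Xs], axis=0)
    return ipw_contrast(w_bar, a, y)
```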