This paper investigates the dynamics of heterogeneous Cournot duopoly games in which the first player adopts the same gradient adjustment mechanism across all models while the second player is endowed with a different rationality level in each. Using tools of symbolic computation, we introduce a new approach for establishing rigorous local stability conditions for these models. We analytically investigate the bifurcations and prove that the period-doubling bifurcation is the only bifurcation that may occur in all the considered models. The most important finding of our study concerns the influence of players' rationality levels on the stability of heterogeneous duopolistic competition. We show that the stability region of the model in which the second firm is rational is the smallest, while that of the model in which the second firm is boundedly rational is the largest. This finding is counterintuitive and contrasts with related conclusions in the existing literature. Furthermore, we provide numerical simulations to demonstrate the emergence of complex dynamics such as periodic orbits of various orders and strange attractors.
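As a minimal sketch of the kind of map studied here (the linear demand, cost parameters, and adjustment speed k below are illustrative assumptions, not the paper's specification), the following iterates a duopoly in which firm 1 follows a gradient adjustment rule while firm 2 plays a naive best reply; for these parameter values the Nash output loses stability through a flip (period-doubling) at k* = 12/35, consistent with the bifurcation result stated above.

```python
import numpy as np

# Illustrative heterogeneous Cournot map: inverse demand p = a - b*(q1+q2),
# constant marginal costs c1, c2 (assumed values, not from the paper).
a, b, c1, c2 = 10.0, 1.0, 3.0, 3.0

def step(q1, q2, k, second="best_reply"):
    """One round: firm 1 uses gradient adjustment; firm 2's rule varies."""
    dpi1 = a - c1 - 2*b*q1 - b*q2              # firm 1's marginal profit
    q1n = q1 + k*q1*dpi1                       # boundedly rational update
    if second == "best_reply":                 # naive best reply for firm 2
        q2n = max((a - c2 - b*q1) / (2*b), 0.0)
    else:                                      # firm 2 boundedly rational too
        q2n = q2 + k*q2*(a - c2 - 2*b*q2 - b*q1)
    return q1n, q2n

# For these parameters the Nash output (7/3, 7/3) undergoes a flip
# bifurcation at k* = 12/35 ~ 0.343 in the best-reply model.
for k in (0.30, 0.36):                         # stable vs. just past the flip
    q = (1.0, 1.0)
    for _ in range(1000):
        q = step(*q, k)
    print(f"k={k}: q(t)={np.round(q, 4)}, q(t+1)={np.round(step(*q, k), 4)}")
```

Below the threshold both iterates coincide at the Nash output; just above it they alternate between two values, the period-2 orbit born at the flip.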
We study statistical inference for nonlinear stochastic approximation algorithms run on a single trajectory of Markovian data. The methodology covers practical scenarios such as Stochastic Gradient Descent (SGD) on autoregressive data and asynchronous Q-Learning. Using the standard stochastic approximation (SA) framework to estimate the target parameter, we establish a functional central limit theorem for its partial-sum process, $\boldsymbol{\phi}_T$. To support this theory, we provide a matching semiparametric efficient lower bound and a non-asymptotic upper bound on the rate of weak convergence, measured in the L\'evy-Prokhorov metric. This functional central limit theorem underpins our inference method: for any continuous scale-invariant functional $f$, the statistic $f(\boldsymbol{\phi}_T)$ is asymptotically pivotal, allowing us to construct an asymptotically valid confidence interval. We analyze the rejection probability of a family of functionals $f_m$, indexed by $m \in \mathbb{N}$, through theoretical and numerical means. Simulation results demonstrate the validity and efficiency of our method.
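A scalar sketch of the resulting recipe, under assumptions of my own choosing: an AR(1) trajectory stands in for the Markovian data, and the random-scaling t-statistic serves as one concrete continuous scale-invariant functional (its 97.5% quantile, 6.747, is the tabulated value for that pivot).

```python
import numpy as np

rng = np.random.default_rng(0)
T, rho, theta0 = 20_000, 0.5, 1.0

# Markovian data: an AR(1) trajectory with stationary mean theta0.
x = np.empty(T); x[0] = theta0
for t in range(1, T):
    x[t] = theta0 + rho * (x[t-1] - theta0) + rng.standard_normal()

# Robbins-Monro iterates targeting the mean, then Polyak partial averages.
theta = np.empty(T); th = 0.0
for t in range(T):
    th -= (t + 1) ** -0.75 * (th - x[t])       # SA step with decaying gain
    theta[t] = th
bar = np.cumsum(theta) / np.arange(1, T + 1)   # partial averages theta_bar_s

# Random-scaling t-statistic: a continuous scale-invariant functional of
# the partial-sum process; 6.747 is its known 97.5% quantile.
s = np.arange(1, T + 1)
V = np.sum(s**2 * (bar - bar[-1])**2) / T**2
half = 6.747 * np.sqrt(V / T)
print(f"95% CI for the target mean: [{bar[-1]-half:.3f}, {bar[-1]+half:.3f}]")
```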
A practical challenge for structural estimation is the requirement to accurately minimize a sample objective function that is often non-smooth, non-convex, or both. This paper proposes a simple algorithm designed to find accurate solutions without performing an exhaustive search. It augments each iteration of a new Gauss-Newton algorithm with a grid search step. A finite-sample analysis derives its optimization and statistical properties simultaneously using only econometric assumptions. After a finite number of iterations, the algorithm automatically transitions from global to fast local convergence, producing accurate estimates with high probability. Simulated examples and an empirical application illustrate the results.
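A schematic version of such a hybrid iteration (an assumed form for illustration, not the paper's exact rule): each step proposes both a Gauss-Newton candidate and the best point from a coarse random grid, and keeps whichever lowers the sample objective most, so the grid handles the global search while Gauss-Newton delivers the fast local refinement.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200)
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(200)   # true parameter 3.0

def resid(b): return y - np.sin(b * x)
def obj(b):   return 0.5 * np.mean(resid(b) ** 2)      # sample objective

b = 0.5                                                # poor starting value
for _ in range(20):
    r = resid(b)
    J = -(x * np.cos(b * x))[:, None]                  # Jacobian of resid
    gn = b - np.linalg.lstsq(J, r, rcond=None)[0][0]   # Gauss-Newton step
    grid = rng.uniform(0, 6, 15)                       # coarse grid step
    b = min([gn, *grid], key=obj)                      # keep the best point
print(f"estimate after hybrid iterations: {b:.4f}")
```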
A Peskun ordering between two samplers, implying a dominance of one over the other, is known in the Markov chain Monte Carlo community as a remarkably strong result, but also as one that is notoriously difficult to establish. Indeed, one has to prove that the probability of reaching a state $\mathbf{y}$ from a state $\mathbf{x}$ under one sampler is greater than or equal to the corresponding probability under the other sampler, and this must hold for all pairs $(\mathbf{x}, \mathbf{y})$ such that $\mathbf{x} \neq \mathbf{y}$. We provide in this paper a weaker version that does not require the inequality to hold for all such pairs: essentially, the dominance holds asymptotically, as a varying parameter grows without bound, provided the states for which the inequality holds belong to a mass-concentrating set. This weak ordering turns out to be useful for comparing lifted samplers on partially-ordered discrete state spaces with their Metropolis--Hastings counterparts. An analysis in great generality yields a qualitative conclusion: the lifted samplers asymptotically perform better in certain situations (which we are able to identify), but not necessarily in others (and the reasons why are made clear). A thorough study in the specific context of graphical-model simulation is also conducted.
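For intuition on what the strong ordering buys, here is a toy numerical check of the classical finite-state version (uniform stationary law, a lazy chain as the dominated sampler; the paper's weak asymptotic variant is not reproduced): a chain that dominates another off-diagonal has no larger asymptotic variance for any test function.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
M = rng.random((n, n)); M = (M + M.T) / 2; np.fill_diagonal(M, 0)
c = M.sum(1).max()
P1 = M / c + np.diag(1 - M.sum(1) / c)   # symmetric => reversible, pi uniform
P2 = 0.5 * P1 + 0.5 * np.eye(n)          # lazy chain: dominated off-diagonal

pi = np.full(n, 1 / n)
def avar(P, f):
    """Asymptotic variance of empirical averages of f under chain P."""
    f = f - pi @ f                        # center f under pi
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    return 2 * pi @ (f * (Z @ f)) - pi @ (f * f)

f = rng.standard_normal(n)
print(f"avar(P1) = {avar(P1, f):.4f} <= avar(P2) = {avar(P2, f):.4f}")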
Consider a graph where each of the $n$ nodes is in state $\mathcal{R}$ or $\mathcal{B}$. We analyze the \emph{synchronous $k$-Majority dynamics}, where in each discrete-time round every node simultaneously samples $k$ neighbors uniformly at random with replacement and adopts the majority state among the nodes in its sample (breaking ties uniformly at random). In contrast to previous work, we study the robustness of $k$-Majority in \emph{maintaining an $\mathcal{R}$ majority} when the dynamics is subject to two forms of \emph{bias} toward state $\mathcal{B}$. The bias models an external agent that attempts to subvert the initial majority by altering the communication between nodes, succeeding with probability $p$ in each round: in the first form of bias, the agent tries to alter communication links so that they transmit state $\mathcal{B}$; in the second, the agent tries to corrupt nodes directly, making them update to $\mathcal{B}$. Our main result shows a \emph{sharp phase transition} under both forms of bias. Considering initial configurations in which every node has probability $q \in (\frac{1}{2},1]$ of being in state $\mathcal{R}$, we prove that for every $k\geq3$ there exists a critical value $p_{k,q}^*$ such that, with high probability, the external agent subverts the initial majority either in $n^{\omega(1)}$ rounds, if $p<p_{k,q}^*$, or in $O(1)$ rounds, if $p>p_{k,q}^*$. When $k<3$, by contrast, no phase transition occurs and the disruption happens in $O(1)$ rounds for every $p>0$.
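A mean-field Monte Carlo sketch of the first form of bias (assuming the complete graph, $k=3$, and illustrative values of $n$, $q$, $p$): each sampled message is replaced by $\mathcal{B}$ with probability $p$, and we track how long the $\mathcal{R}$ majority survives on either side of the transition.

```python
import numpy as np

rng = np.random.default_rng(3)

def rounds_until_B_majority(n, k, q, p, max_rounds=2000):
    """Biased k-Majority on the complete graph (message-corruption bias)."""
    state = rng.random(n) < q                 # True = R; initial R-majority
    for t in range(1, max_rounds + 1):
        r = state.mean()
        # a sampled message reads R iff it is not corrupted (prob 1-p)
        # and the sampled node (uniform, with replacement) is in state R
        samples = rng.random((n, k)) < (1 - p) * r
        state = samples.sum(axis=1) * 2 > k   # majority of k; k odd => no tie
        if state.mean() < 0.5:                # B has subverted the majority
            return t
    return None                               # not subverted within horizon

for p in (0.05, 0.40):
    t = rounds_until_B_majority(n=10_000, k=3, q=0.8, p=p)
    print(f"p={p}: subverted in {t} rounds" if t else
          f"p={p}: R majority still holds after 2000 rounds")
```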
We present a continuous-time probabilistic approach for estimating the chirp signal and its instantaneous frequency function when the true forms of these functions are not accessible. Our model represents these functions by non-linearly cascaded Gaussian processes, formulated as non-linear stochastic differential equations. The posterior distribution of the functions is then estimated with stochastic filters and smoothers. We compute a (posterior) Cram\'er--Rao lower bound for the Gaussian process model and derive a theoretical upper bound for the estimation error in the mean-squared sense. Experiments show that the proposed method outperforms a number of state-of-the-art methods on synthetic data. We also show that the method works out-of-the-box on two real-world datasets.
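To make the generative assumption concrete, here is a forward-simulation sketch of one possible instantiation (an Ornstein-Uhlenbeck process drives the log instantaneous frequency so that the frequency stays positive; the paper's filters and smoothers for the inverse problem are not reproduced, and all parameter values are assumptions).

```python
import numpy as np

rng = np.random.default_rng(4)
dt, T = 1e-3, 5000
lam, sigma, f0 = 1.0, 0.5, 10.0       # OU rate, diffusion, base frequency (Hz)

z = np.zeros(T)                       # log-frequency deviation, an OU process
phase = np.zeros(T)                   # phase integrates 2*pi*frequency
for t in range(1, T):
    # Euler-Maruyama discretization of dz = -lam*z dt + sigma dW
    z[t] = z[t-1] - lam*z[t-1]*dt + sigma*np.sqrt(dt)*rng.standard_normal()
    phase[t] = phase[t-1] + 2*np.pi * f0*np.exp(z[t]) * dt
y = np.sin(phase) + 0.1*rng.standard_normal(T)   # noisy chirp observations
print("instantaneous frequency range (Hz):",
      np.round([f0*np.exp(z.min()), f0*np.exp(z.max())], 2))
```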
Addressing one of the myths of polynomial interpolation and quadrature, Trefethen [30] revealed that the Chebyshev interpolation of $|x-a|$ (with $|a|<1$) at the Clenshaw-Curtis points exhibits a much smaller error than the best polynomial approximation (in the maximum norm) on about $95\%$ of $[-1,1]$, except in a small neighbourhood of the singular point $x=a.$ In this paper, we rigorously show that the Jacobi expansion of a more general class of $\Phi$-functions enjoys the same local convergence behaviour. Our argument draws on a pointwise error estimate using the reproducing kernel of Jacobi polynomials and a Hilb-type formula for the asymptotics of Bessel transforms. We also study local superconvergence, quantifying the gain in order and identifying the subregions where it occurs. As a by-product of this new argument, the undesired $\log n$-factor in the pointwise error estimate for the Legendre expansion recently stated by Babu\v{s}ka and Hakula [5] can be removed. Finally, all these estimates are extended to functions with boundary singularities. We provide ample numerical evidence to demonstrate the optimality and sharpness of the estimates.
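The localization phenomenon is easy to reproduce numerically; a sketch (interpolating at the $n+1$ Clenshaw-Curtis points $\cos(j\pi/n)$ and comparing the pointwise error near and away from $x=a$; the best approximation itself is not computed here):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Chebyshev interpolation of |x - a| at the Clenshaw-Curtis points: the
# pointwise error is far smaller away from the singularity x = a.
a, n = 0.3, 200
xk = np.cos(np.arange(n + 1) * np.pi / n)        # Clenshaw-Curtis points
coef = C.chebfit(xk, np.abs(xk - a), n)          # degree-n interpolant

xx = np.linspace(-1, 1, 4001)
err = np.abs(C.chebval(xx, coef) - np.abs(xx - a))
near = np.abs(xx - a) < 0.05                     # small window around x = a
print(f"max error near x=a : {err[near].max():.2e}")
print(f"max error elsewhere: {err[~near].max():.2e}")
```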
Estimation of heterogeneous causal effects, i.e., how the effects of policies and treatments vary across subjects, is a fundamental task in causal inference, playing a crucial role in optimal treatment allocation, generalizability, subgroup effects, and more. Many flexible methods for estimating conditional average treatment effects (CATEs) have been proposed in recent years, but questions surrounding optimality have remained largely unanswered. In particular, a minimax theory of optimality has yet to be developed, with the minimax rate of convergence and the construction of rate-optimal estimators remaining open problems. In this paper we derive the minimax rate for CATE estimation in a nonparametric model where the distributional components are H\"older-smooth, and we present a new local polynomial estimator, giving high-level conditions under which it is minimax optimal. More specifically, our minimax lower bound is derived via a localized version of the method of fuzzy hypotheses, combining lower bound constructions for nonparametric regression and functional estimation. Our proposed estimator can be viewed as a local polynomial R-Learner, based on a localized modification of higher-order influence function methods; it is shown to be minimax optimal under a condition on how accurately the covariate distribution is estimated. The minimax rate we find exhibits several interesting features, including a non-standard elbow phenomenon and an unusual interpolation between nonparametric regression and functional estimation rates; the latter quantifies how the CATE, as an estimand, can be viewed as a regression/functional hybrid. We conclude with a discussion of a few remaining open problems.
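To fix ideas about the estimator's second stage, here is a schematic local polynomial R-Learner (a simplified sketch with oracle nuisance functions, a boxcar kernel, and a local-linear fit; the paper's higher-order influence function construction and its optimality conditions are not reproduced).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
X = rng.uniform(-1, 1, n)
pi = 1 / (1 + np.exp(-X))                 # true propensity E[A|X]
A = rng.random(n) < pi
tau = np.sin(2 * X)                       # true CATE
m = X**2 + pi * tau                       # true E[Y|X] = mu0(X) + pi(X)*tau(X)
Y = X**2 + A * tau + rng.standard_normal(n)

def cate_hat(x0, h=0.15):
    """Local-linear R-Learner at x0 (oracle nuisances, boxcar kernel)."""
    w = np.abs(X - x0) < h                          # kernel window
    R, S = A[w] - pi[w], Y[w] - m[w]                # residualized A and Y
    B = np.column_stack([R, R * (X[w] - x0)])       # tau(x) ~ b0 + b1*(x-x0)
    b, *_ = np.linalg.lstsq(B, S, rcond=None)
    return b[0]

for x0 in (-0.5, 0.0, 0.5):
    print(f"tau({x0:+.1f}): true {np.sin(2*x0):+.3f}, est {cate_hat(x0):+.3f}")
```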
In this paper, we study nonparametric estimation of instrumental variable (IV) regressions. Recently, many flexible machine learning methods have been developed for instrumental variable estimation. However, these methods have at least one of the following limitations: (1) restricting the IV regression to be uniquely identified; (2) only obtaining estimation error rates in terms of pseudometrics (\emph{e.g.,} projected norm) rather than valid metrics (\emph{e.g.,} $L_2$ norm); or (3) imposing the so-called closedness condition, which requires a certain conditional expectation operator to be sufficiently smooth. In this paper, we present the first method and analysis that can avoid all three limitations while still permitting general function approximation. Specifically, we propose a new penalized minimax estimator that converges to a fixed IV solution even when multiple solutions exist, and we derive a strong $L_2$ error rate for our estimator under mild conditions. Notably, this guarantee requires only a widely used source condition and realizability assumptions, not the closedness condition. We argue that the source condition and the closedness condition are inherently in conflict, so relaxing the latter significantly improves upon the existing literature, which requires both. Our estimator achieves this improvement because it builds on a novel formulation of the IV estimation problem as a constrained optimization problem.
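For intuition, consider the linear-sieve reduction of a penalized minimax IV criterion, where the inner maximization over test functions has a closed form and a ridge term pins down one solution even when many exist (an illustrative sketch, not the paper's estimator; the dictionaries, penalty level, and data-generating design are assumptions).

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4000
Z = rng.standard_normal(n)                        # instrument
U = rng.standard_normal(n)                        # unobserved confounder
X = 0.9 * Z + U + 0.3 * rng.standard_normal(n)    # endogenous regressor
Y = np.sin(X) + U + 0.3 * rng.standard_normal(n)  # structural h0(x) = sin(x)

Phi = np.column_stack([X**j for j in range(5)])   # sieve for h
Psi = np.column_stack([Z**j for j in range(5)])   # sieve for test functions f
# Maximizing the moment criterion over f in span(Psi) in closed form leaves
# a ridge-penalized projected least-squares problem in the coefficients.
W = np.linalg.pinv(Psi.T @ Psi)
G = Psi.T @ Phi
mu = 1.0                                          # penalty level (assumed)
theta = np.linalg.solve(G.T @ W @ G + mu * np.eye(5), G.T @ W @ (Psi.T @ Y))

xg = np.array([-1.5, 0.0, 1.5])
hg = np.column_stack([xg**j for j in range(5)]) @ theta
print("h_hat:", np.round(hg, 2), "| h0:", np.round(np.sin(xg), 2))
```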
This paper proposes a flexible framework for inferring large-scale time-varying and time-lagged correlation networks from multivariate or high-dimensional non-stationary time series with piecewise smooth trends. Built on a novel and unified multiple-testing procedure for time-lagged cross-correlation functions with a fixed or diverging number of lags, our method can accurately recover flexible time-varying network structures associated with complex functional structures at all time points. We broaden the applicability of the method to settings with structural breaks by developing difference-based nonparametric estimators of cross-correlations, achieve accurate family-wise error control via a bootstrap-assisted procedure that adapts to the complex temporal dynamics, and enhance the probability of recovering the time-varying network structures using a new uniform variance reduction technique. We prove the asymptotic validity of the proposed method and demonstrate its effectiveness in finite samples through simulation studies and empirical applications.
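A stationary toy version of the multiple-testing step (sketch only: max-statistic calibration by permutation on a single pair of series, valid here because the simulated series are serially independent; the paper's difference-based estimators, nonstationarity-adaptive bootstrap, and variance reduction are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(7)
T, L = 2000, 10
x = rng.standard_normal(T)
y = 0.5 * np.roll(x, 3) + rng.standard_normal(T)   # dependence at lag 3 only

lags = np.arange(L + 1)
def xcorr(a, b):
    """Sample cross-correlations corr(a_{t-l}, b_t) for l = 0..L."""
    a = (a - a.mean()) / a.std(); b = (b - b.mean()) / b.std()
    return np.array([np.mean(a[:T - l] * b[l:]) for l in lags])

r = xcorr(x, y)
# Family-wise threshold for max_l |r(l)| under the no-dependence null,
# calibrated by permuting one series.
null_max = np.array([np.abs(xcorr(rng.permutation(x), y)).max()
                     for _ in range(500)])
thr = np.quantile(null_max, 0.95)
print("lags flagged:", lags[np.abs(r) > thr].tolist(),
      "| threshold:", round(float(thr), 3))
```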
This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing estimators such that the error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and the proposed estimators in estimating the causal quantities. The comparison is conducted across a wide range of models, including linear regression, tree-based, and neural network-based models, on simulated datasets exhibiting different levels of causal effects, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial once the causal effects are correctly accounted for.
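A simulated illustration of the confounding problem the paper targets (schematic: a single observed quality score plays the confounder, and a standard inverse-propensity-weighted estimator stands in for the proposed ones; neither the paper's estimators nor its data are reproduced).

```python
import numpy as np

rng = np.random.default_rng(8)
n = 50_000
U = rng.standard_normal(n)                        # borrower quality score
p = 1 / (1 + np.exp(-1.5 * U))                    # lender approves good risks
D = rng.random(n) < p                             # credit decision (treatment)
Y = 1.0 * D + 2.0 * U + rng.standard_normal(n)    # repayment; true effect 1.0

naive = Y[D].mean() - Y[~D].mean()                # ignores confounding by U
ipw = np.mean(D * Y / p) - np.mean(~D * Y / (1 - p))
print(f"true effect 1.0 | naive {naive:.2f} | IPW-adjusted {ipw:.2f}")
```

The naive contrast is badly inflated because approved borrowers are systematically better risks; reweighting by the decision propensity removes most of that bias.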