
In the context of the Cobb-Douglas productivity model we consider the $N \times N$ input-output linkage matrix $W$ for a network of $N$ firms $f_1, f_2, \ldots, f_N$. The associated influence vector $v_W$ of $W$ is defined in terms of the Leontief inverse $L_W$ of $W$ as $v_W = \frac{\alpha}{N} L_W \vec{\mathbf{1}}$, where $L_W = (I - (1-\alpha) W')^{-1}$, $W'$ denotes the transpose of $W$ and $I$ is the identity matrix. Here $\vec{\mathbf{1}}$ is the $N \times 1$ vector whose entries are all one. The influence vector is a measure of the importance of the firms in the production network. Under the realistic assumption that the data needed to compute the influence vector is incomplete, we prove bounds on the worst-case error for the influence vector that are sharp up to a constant factor. We also consider the situation where the missing data is binomially distributed and contextualize the bound on the influence vector accordingly. We further investigate how far off the influence vector can be when we only have data on nodes and connections that are within distance $k$ of some source node. We compare our results with their PageRank analogues. We close with a discussion of a possible extension beyond Cobb-Douglas to the Constant Elasticity of Substitution model, as well as the possibility of considering other probability distributions for missing data.
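For concreteness, the following minimal sketch (Python with NumPy) computes $v_W$ directly from the definition above; the $3 \times 3$ linkage matrix and the value of $\alpha$ are illustrative, not taken from the paper.

```python
import numpy as np

def influence_vector(W: np.ndarray, alpha: float) -> np.ndarray:
    """v_W = (alpha / N) * L_W @ 1, with L_W = (I - (1 - alpha) W')^{-1}
    the Leontief inverse and W' the transpose of W."""
    N = W.shape[0]
    L_W = np.linalg.inv(np.eye(N) - (1.0 - alpha) * W.T)  # Leontief inverse
    return (alpha / N) * (L_W @ np.ones(N))

# Illustrative 3-firm network with row-stochastic linkages (made-up data).
W = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.3, 0.7, 0.0]])
print(influence_vector(W, alpha=0.3))  # for row-stochastic W the entries sum to 1
```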

Related Content

We prove absolute regularity ($\beta$-mixing) for nonstationary and multivariate versions of two popular classes of integer-valued processes. We show how this result can be used to prove asymptotic normality of a least squares estimator of an involved model parameter.

Given an integer partition of $n$, we consider the impartial combinatorial game LCTR in which moves consist of removing either the left column or the top row of its Young diagram. We show that for both normal and mis\`ere play, the optimal strategy can consist mostly of mirroring the opponent's moves. We also establish that both LCTR and Downright are domestic as well as returnable, and on the other hand neither tame nor forced. For both games, these structural observations allow for computing the Sprague-Grundy value of any position in $O(\log(n))$ time, assuming that a time unit allows for reading an integer or performing a basic arithmetic operation. This improves on the previously known bound of $O(n)$ due to Ili\'c (2019). We also cover some other complexity measures of both games, such as state-space complexity and the number of leaves and nodes in the corresponding game tree.
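As an illustration of the game itself (not of the $O(\log(n))$ algorithm from the paper), the following naive memoized Python sketch computes normal-play Sprague-Grundy values of LCTR positions directly from the two move rules stated above.

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: smallest non-negative integer not in the set."""
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy(partition):
    """Normal-play Sprague-Grundy value of an LCTR position, given as a
    non-increasing tuple of row lengths (the Young diagram of a partition)."""
    if not partition:
        return 0  # empty diagram: no moves available
    top_row_removed = partition[1:]                           # delete the top row
    left_col_removed = tuple(p - 1 for p in partition if p > 1)  # delete the left column
    return mex({grundy(top_row_removed), grundy(left_col_removed)})

# Example: the partition (4, 2, 1) of n = 7.
print(grundy((4, 2, 1)))
```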

We offer an alternative proof, using the Stein-Chen method, of Bollob\'{a}s' theorem concerning the distribution of the extreme degrees of a random graph. Our proof also provides a rate of convergence of the extreme degree to its asymptotic distribution. The same method also applies in a more general setting where the probability of each pair of vertices being connected by an edge depends on the number of vertices.

We derive novel and sharp high-dimensional Berry--Esseen bounds for the sum of $m$-dependent random vectors over the class of hyper-rectangles exhibiting only a poly-logarithmic dependence in the dimension. Our results hold under minimal assumptions, such as non-degenerate covariances and finite third moments, and yield a sample complexity of order $\sqrt{m/n}$, aside from logarithmic terms, matching the optimal rates established in the univariate case. When specialized to the sums of independent non-degenerate random vectors, we obtain sharp rates under the weakest possible conditions. On the technical side, we develop an inductive relationship between anti-concentration inequalities and Berry--Esseen bounds, inspired by the classical Lindeberg swapping method and the concentration inequality approach for dependent data, that may be of independent interest.

A common way to approximate $F(A)b$ -- the action of a matrix function on a vector -- is to use the Arnoldi approximation. Since a new vector needs to be generated and stored in every iteration, one is often forced to rely on restart algorithms which are either not efficient, not stable or only applicable to restricted classes of functions. We present a new representation of the error of the Arnoldi iterates if the function $F$ is given as a Laplace transform. Based on this representation we build an efficient and stable restart algorithm. In doing so we extend earlier work for the class of Stieltjes functions which are special Laplace transforms. We report several numerical experiments including comparisons with the restart method for Stieltjes functions.
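For context, the sketch below implements the standard unrestarted Arnoldi approximation $F(A)b \approx \|b\|\, V_m F(H_m) e_1$ for $F = \exp$ on an illustrative matrix; it highlights the memory issue that motivates restarting (the full Krylov basis $V_m$ must be stored) and is not the restart algorithm of the paper.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_fab(A, b, m, f=expm):
    """Unrestarted Arnoldi approximation of f(A) b from the Krylov space
    K_m(A, b): f(A) b ~= ||b|| * V_m f(H_m) e_1."""
    n = b.shape[0]
    V = np.zeros((n, m + 1))        # the whole basis must be kept in memory
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):      # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:     # lucky breakdown: invariant subspace found
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (f(H[:m, :m]) @ e1)

# Illustrative test: f = exp applied to a 1-D discrete Laplacian (made-up example).
n = 200
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
b = np.ones(n)
print(np.linalg.norm(arnoldi_fab(A, b, m=30) - expm(A) @ b))  # small error, no restart
```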

Consider the family of power divergence statistics based on $n$ trials, each leading to one of $r$ possible outcomes. This includes the log-likelihood ratio and Pearson's statistic as important special cases. It is known that in certain regimes (e.g., when $r$ is of order $n^2$ and the allocation is asymptotically uniform as $n\to\infty$) the power divergence statistic converges in distribution to a linear transformation of a Poisson random variable. We establish explicit error bounds in the Kolmogorov (or uniform) metric to complement this convergence result, which may be applied for any values of $n$, $r$ and the index parameter $\lambda$ for which such a finite-sample bound is meaningful. We further use this Poisson approximation result to derive error bounds in Gaussian approximation of the power divergence statistics.
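For reference, the sketch below evaluates one common parametrization of the power divergence family (the Cressie-Read form, with $\lambda \to 0$ recovering the log-likelihood ratio and $\lambda = 1$ Pearson's statistic) on illustrative data with $r$ of order $n^2$; the data and parameter choices are ours, not the paper's.

```python
import numpy as np

def power_divergence(counts, probs, lam):
    """Power divergence statistic in the Cressie-Read parametrization:
    2/(lam*(lam+1)) * sum_j N_j * ((N_j / (n p_j))^lam - 1).
    lam -> 0 gives the log-likelihood ratio, lam = 1 Pearson's statistic."""
    counts = np.asarray(counts, dtype=float)
    expected = counts.sum() * np.asarray(probs, dtype=float)
    if abs(lam) < 1e-12:            # limiting case: log-likelihood ratio statistic
        nz = counts > 0
        return 2.0 * np.sum(counts[nz] * np.log(counts[nz] / expected[nz]))
    return (2.0 / (lam * (lam + 1.0))) * np.sum(
        counts * ((counts / expected) ** lam - 1.0))

# Illustrative sparse regime with r of order n^2 and a uniform null (made-up data).
rng = np.random.default_rng(1)
n, r = 20, 400
counts = np.bincount(rng.integers(0, r, size=n), minlength=r)
probs = np.full(r, 1.0 / r)
print(power_divergence(counts, probs, lam=1.0))   # Pearson's chi-squared statistic
print(power_divergence(counts, probs, lam=2/3))   # Cressie-Read's recommended index
```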

The spread of infections, contagions, memes, emotions, messages, and various other spreadable objects has been discussed in several works. Burning and firefighting, in particular, have been studied on static graphs. Graph burning simulates the spread of "fire" throughout a graph (with, in addition, one unburned node set on fire at each time step); graph firefighting simulates the defense of nodes by placing firefighters on nodes that have not yet burned while the fire, started from a single source, spreads. This article studies a combination of firefighting and burning on a graph class that is a variation (generalization) of temporal graphs. Nodes can also be infected from outside the network. We introduce notions of both upgrading (of unburned nodes, similar to firefighting) and repairing (of infected nodes). The nodes that are burned, firefighted, or repaired are chosen probabilistically, so a variable number of nodes can be infected, upgraded, or repaired in each time step. In the model presented in this article, burning and firefighting proceed concurrently; we introduce this system to enable the community to study the spread of an infection and the notion of upgrade/repair against each other. The graph class we study (on which these processes are simulated) is a variation of temporal graphs in which, at each time step, a communication takes place probabilistically (iff an edge exists in that time step). In addition, a node can be "worn out" and removed from the network, and a new healthy node can be added. This class of graphs makes it possible to simulate and study highly complex systems.
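The toy simulation below is our own minimal sketch of such a process, not the authors' model: every probability, the uniform per-step edge model, and the omission of node wear-out and node addition are simplifying assumptions made purely for illustration.

```python
import random

def simulate(n=50, steps=20, p_edge=0.1, p_spread=0.5, p_external=0.02,
             p_upgrade=0.05, p_repair=0.05, seed=0):
    """Toy concurrent burning / upgrading / repairing process on a temporal
    graph whose edges are re-sampled independently at every time step.
    All parameters and rules here are illustrative assumptions."""
    rng = random.Random(seed)
    burned, upgraded = set(), set()
    for t in range(steps):
        # Temporal layer: each potential edge is present this step with prob. p_edge.
        edges = [(u, v) for u in range(n) for v in range(u + 1, n)
                 if rng.random() < p_edge]
        # Burning: spread across present edges, plus external infection.
        newly_burned = set()
        for u, v in edges:
            for src, dst in ((u, v), (v, u)):
                if (src in burned and dst not in burned
                        and dst not in upgraded and rng.random() < p_spread):
                    newly_burned.add(dst)
        for v in range(n):
            if v not in burned and v not in upgraded and rng.random() < p_external:
                newly_burned.add(v)
        # Upgrading (firefighting) of untouched nodes, concurrent with burning.
        for v in range(n):
            if (v not in burned and v not in upgraded
                    and v not in newly_burned and rng.random() < p_upgrade):
                upgraded.add(v)
        burned |= newly_burned
        # Repair: each burned node recovers independently with prob. p_repair.
        burned -= {v for v in burned if rng.random() < p_repair}
        print(f"step {t}: burned={len(burned)}, upgraded={len(upgraded)}")

simulate()
```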

In this work we investigate the problem of producing iso-dual algebraic geometry (AG) codes over a finite field $\mathbb{F}_q$ with $q$ elements. Given a finite separable extension $\mathcal{M}/\mathcal{F}$ of function fields and an iso-dual AG-code $\mathcal{C}$ defined over $\mathcal{F}$, we provide a general method to lift the code $\mathcal{C}$ to another iso-dual AG-code $\tilde{\mathcal{C}}$ defined over $\mathcal{M}$, under some assumptions on the parity of the involved different exponents. We apply this method to lift iso-dual AG-codes over the rational function field to elementary abelian $p$-extensions, such as the Hermitian and Suzuki maximal function fields and one covered by the $GGS$ function field. We also obtain long binary and ternary iso-dual AG-codes defined over cyclotomic extensions.

Gaussian processes (GPs) are a popular class of Bayesian nonparametric models, but their training can be computationally burdensome for massive training datasets. While there has been notable work on scaling up these models for big data, existing methods typically rely on a stationary GP assumption for approximation, and can thus perform poorly when the underlying response surface is non-stationary, i.e., it has some regions of rapid change and other regions with little change. Such non-stationarity is, however, ubiquitous in real-world problems, including our motivating application for surrogate modeling of computer experiments. We thus propose a new Product of Sparse GP (ProSpar-GP) method for scalable GP modeling with massive non-stationary data. The ProSpar-GP makes use of a carefully-constructed product-of-experts formulation of sparse GP experts, where different experts are placed within local regions of non-stationarity. These GP experts are fit via a novel variational inference approach, which capitalizes on mini-batching and GPU acceleration for efficient optimization of inducing points and length-scale parameters for each expert. We further show that the ProSpar-GP is Kolmogorov-consistent, in that its generative distribution defines a valid stochastic process over the prediction space; such a property provides essential stability for variational inference, particularly in the presence of non-stationarity. We then demonstrate the improved performance of the ProSpar-GP over the state-of-the-art in a suite of numerical experiments and an application for surrogate modeling of a satellite drag simulator.
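For intuition only, the snippet below shows the generic product-of-experts combination of Gaussian expert predictions at a single test point (precision-weighted aggregation); it is not the ProSpar-GP formulation, its sparse variational training, or its consistency construction.

```python
import numpy as np

def poe_combine(means, variances):
    """Generic product-of-experts rule for Gaussian expert predictions at one
    test point: the product of the expert densities is again Gaussian, with
    precision equal to the sum of the expert precisions."""
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    var = 1.0 / precisions.sum()
    mean = var * (precisions * means).sum()
    return mean, var

# Two illustrative experts: a confident one near the test point and a vague one.
print(poe_combine(means=[1.2, 0.4], variances=[0.05, 1.0]))
```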

Let $k \geq 1$. A graph $G$ is $\mathbf{W_k}$ if for any $k$ pairwise disjoint independent vertex subsets $A_1, \dots, A_k$ in $G$, there exist $k$ pairwise disjoint maximum independent sets $S_1, \dots, S_k$ in $G$ such that $A_i \subseteq S_i$ for $i \in [k]$. Recognizing $\mathbf{W_1}$ graphs is co-NP-hard, as shown by Chv\'atal and Slater (1993) and, independently, by Sankaranarayana and Stewart (1992). Extending this result and answering a recent question of Levit and Tankus, we show that recognizing $\mathbf{W_k}$ graphs is co-NP-hard for $k \geq 2$. On the positive side, we show that recognizing $\mathbf{W_k}$ graphs is, for each $k\geq 2$, FPT parameterized by clique-width and by tree-width. Finally, we construct graphs $G$ that are not $\mathbf{W_2}$ such that, for every vertex $v$ in $G$ and every maximal independent set $S$ in $G - N[v]$, the largest independent set in $N(v) \setminus S$ consists of a single vertex, thereby refuting a conjecture of Levit and Tankus.
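To make the $\mathbf{W_k}$ definition concrete, the following brute-force checker tests the property directly from the definition; it runs in exponential time, is suitable only for tiny graphs, and is unrelated to the FPT algorithms of the paper.

```python
from itertools import combinations, product

def is_independent(vs, adj):
    """True if no two vertices in vs are adjacent."""
    return all(v not in adj[u] for u, v in combinations(vs, 2))

def independent_sets(n, adj):
    """All independent vertex subsets (as frozensets) of a graph on {0,...,n-1}."""
    return [frozenset(vs) for r in range(n + 1)
            for vs in combinations(range(n), r) if is_independent(vs, adj)]

def is_Wk(n, edges, k):
    """Brute-force W_k check: every k pairwise disjoint independent sets
    A_1..A_k extend to pairwise disjoint maximum independent sets S_1..S_k
    with A_i a subset of S_i."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    ind = independent_sets(n, adj)
    alpha = max(len(s) for s in ind)
    maximum = [s for s in ind if len(s) == alpha]
    for As in product(ind, repeat=k):
        if any(As[i] & As[j] for i in range(k) for j in range(i + 1, k)):
            continue  # the A_i must be pairwise disjoint
        def extend(i, chosen):
            # Try to pick disjoint maximum independent sets S_i containing A_i.
            if i == k:
                return True
            for S in maximum:
                if As[i] <= S and all(not (S & C) for C in chosen):
                    if extend(i + 1, chosen + [S]):
                        return True
            return False
        if not extend(0, []):
            return False
    return True

# The 4-cycle C_4 is W_1 but not W_2: prints True, then False.
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_Wk(4, C4, k=1))
print(is_Wk(4, C4, k=2))
```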
