In this paper, we study the severity of cascading failures in supply chain networks defined by a node percolation process corresponding to product suppliers failing independently due to systemic shocks. We first show that the size of the cascades follows a power law in random directed acyclic graphs, whose topology encodes the natural ordering of products from simple raw materials to complex products. This motivates the need for a supply chain resilience metric, which we define as the maximum-magnitude shock that the production network can withstand such that at least a $(1 - \varepsilon)$-fraction of the products are produced with high probability as the size of the production network grows to infinity. Next, we study the resilience of many network architectures and classify them either as resilient, where large cascading failures can be avoided almost surely, or as fragile, where large cascades are inevitable. We then give bounds on the expected size of cascading failures in a given production network graph as the solution to a linear program, and show that extending the node percolation process to a joint percolation process affecting both the nodes and the links of the production network yields a special instance of the well-studied financial contagion model of Eisenberg and Noe. We show that under certain assumptions, the Katz centrality of each node can be used as a measure of its vulnerability, and we give general lower bounds as well as optimal interventions for improving resilience as a function of Katz centralities. Finally, to validate our theoretical results, we empirically calculate the resilience metric and study interventions in a variety of real-world networks.
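
Since the abstract proposes Katz centrality as a node-level vulnerability measure, a minimal sketch of computing Katz centralities on a toy production DAG may help fix ideas; the graph, the edge orientation, and the attenuation factor alpha below are illustrative assumptions, and networkx is used rather than any code from the paper.

```python
# Minimal sketch (not the paper's code): Katz centrality on a toy
# directed acyclic production network. Graph, edge orientation, and
# alpha are illustrative assumptions.
import networkx as nx

# Edges point from inputs (raw materials) to the products that use them.
G = nx.DiGraph([
    ("ore", "steel"), ("steel", "car"),
    ("rubber", "tire"), ("tire", "car"),
])

# Katz centrality solves x = alpha * A^T x + beta; here it serves as a
# rough proxy for how exposed each product is to upstream failures.
katz = nx.katz_centrality(G, alpha=0.1, beta=1.0)
for node, score in sorted(katz.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")
```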

Related content

Several kernel-based testing procedures are proposed to solve the problem of model selection in the presence of parameter estimation in a family of candidate models. Extending the two-sample test of Gretton et al. (2006), we first provide a way of testing whether some data are drawn from a given parametric model (model specification). Second, we provide a test statistic to decide whether two parametric models are equally valid to describe the data (model comparison), in the spirit of Vuong (1989). All our tests are asymptotically standard normal under the null, even when the true underlying distribution belongs to the competing parametric families. Some simulations illustrate the performance of our tests in terms of power and level.
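
As background for the kernel machinery involved, the following is a minimal sketch of the unbiased MMD$^2$ statistic with a Gaussian kernel, the building block of the Gretton et al. (2006) two-sample test that the paper extends; the simulated data and bandwidth are illustrative assumptions, and this is not the paper's estimation-adjusted statistic.

```python
# Minimal sketch of the unbiased MMD^2 statistic with a Gaussian kernel;
# data and bandwidth are illustrative, not the paper's specification test.
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    kxx = gaussian_kernel(x, x, bandwidth)
    kyy = gaussian_kernel(y, y, bandwidth)
    kxy = gaussian_kernel(x, y, bandwidth)
    n, m = len(x), len(y)
    # Drop diagonal terms for the unbiased estimate.
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 1))   # sample from P
y = rng.normal(0.5, 1.0, size=(200, 1))   # sample from Q
print(mmd2_unbiased(x, y))
```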

We investigate two types of query games played on a graph: pair queries and edge queries. We concentrate on the two associated graph parameters for binomial random graphs, and on showing that determining either of the two parameters is NP-hard for bounded-degree graphs.

The paper considers the distribution of a general linear combination of central and non-central chi-square random variables by exploring the branch-cut regions that appear in the standard Laplace inversion process. Owing to the original motivation from directional statistics, the focus of this paper is on the density function of such distributions rather than on their cumulative distribution function; in fact, our results confirm that the latter is a special case of the former. Our approach provides new insight by generating alternative characterizations of the probability density function in terms of a finite number of feasible univariate integrals. In particular, the central cases seem to allow an interesting representation in terms of the branch cuts, while general degrees of freedom and non-centrality can be easily accommodated using recursive differentiation. Numerical results confirm that the proposed approach works well while ensuring more transparency and therefore easier control of the accuracy.
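
For readers who want a quick numerical reference point, the target distribution is easy to simulate; the sketch below draws from a linear combination of central and non-central chi-square variables and estimates its density with a histogram. The weights, degrees of freedom, and non-centrality parameters are illustrative assumptions, and this Monte Carlo check is unrelated to the paper's analytic branch-cut representation.

```python
# Monte Carlo reference (not the paper's analytic approach): simulate a
# linear combination of central and non-central chi-square variables and
# estimate its density with a histogram. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
weights = np.array([2.0, -1.0, 0.5])
dofs = np.array([3, 5, 1])
noncentrality = np.array([0.0, 2.0, 0.0])   # 0 gives the central case

samples = sum(
    w * rng.noncentral_chisquare(df, nc, size=100_000) if nc > 0
    else w * rng.chisquare(df, size=100_000)
    for w, df, nc in zip(weights, dofs, noncentrality)
)
hist, edges = np.histogram(samples, bins=200, density=True)
print("approximate mode near", edges[np.argmax(hist)])
```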

The query model has generated considerable interest in both the classical and quantum computing communities. Typically, quantum advantages are demonstrated by showcasing a quantum algorithm with a better query complexity than its classical counterpart. Exact quantum query algorithms play a pivotal role in developing quantum algorithms; for example, the Deutsch-Jozsa algorithm demonstrated exponential quantum advantages over classical deterministic algorithms. As an important complexity measure, the exact quantum query complexity describes the minimum number of queries required to solve a specific problem exactly using a quantum algorithm. In this paper, we consider the exact quantum query complexity of the following two $n$-bit symmetric functions $\text{MOD}_m^n:\{0,1\}^n \rightarrow \{0,...,m-1\}$ and $\text{EXACT}_{k,l}^n:\{0,1\}^n \rightarrow \{0,1\}$, defined as $\text{MOD}_m^n(x) = |x| \bmod m$ and $\text{EXACT}_{k,l}^n(x) = 1$ iff $|x| \in \{k,l\}$, where $|x|$ is the number of $1$'s in $x$. Our results are as follows: i) We present an optimal quantum algorithm for computing $\text{MOD}_m^n$, achieving a query complexity of $\lceil n(1-\frac{1}{m}) \rceil$ for $1 < m \le n$. This settles a conjecture proposed by Cornelissen, Mande, Ozols and de Wolf (2021). Based on this algorithm, we show that the exact quantum query complexity of a broad class of symmetric functions that map $\{0,1\}^n$ to a finite set $X$ is less than $n$. ii) When $l-k \ge 2$, we give an optimal exact quantum query algorithm to compute $\text{EXACT}_{k,l}^n$ for the case $k=0$ or the case $k=1, l=n-1$. This partially resolves a conjecture proposed by Ambainis, Iraids and Nagaj (2017).
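
The two symmetric functions and the stated query bound can be written down concretely; the sketch below evaluates them classically and prints the claimed optimal query count for an example instance. No quantum algorithm is implemented here, and the inputs are arbitrary.

```python
# Classical evaluation of the two symmetric functions from the abstract,
# plus the stated exact quantum query complexity of MOD_m^n; the inputs
# are arbitrary examples, and no quantum algorithm is implemented.
from math import ceil

def mod_m_n(x, m):
    """MOD_m^n(x) = |x| mod m, where |x| is the Hamming weight of x."""
    return sum(x) % m

def exact_k_l_n(x, k, l):
    """EXACT_{k,l}^n(x) = 1 iff |x| is k or l."""
    return 1 if sum(x) in (k, l) else 0

n, m = 8, 3
x = [1, 0, 1, 1, 0, 0, 1, 0]          # |x| = 4
print(mod_m_n(x, m))                  # 4 mod 3 = 1
print(exact_k_l_n(x, k=0, l=4))       # 1, since |x| = 4
print(ceil(n * (1 - 1 / m)))          # stated optimal query count: 6
```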

Combining predictions from different models is a central problem in Bayesian inference and machine learning more broadly. Currently, these predictive distributions are almost exclusively combined using linear mixtures such as Bayesian model averaging, Bayesian stacking, and mixture of experts. Such linear mixtures impose idiosyncrasies, such as multi-modality, that might be undesirable for some applications. While there exist alternative strategies (e.g. geometric bridge or superposition), optimising their parameters usually involves computing an intractable normalising constant repeatedly. We present two novel Bayesian model combination tools. These are generalisations of model stacking, but combine posterior densities by log-linear pooling (locking) and quantum superposition (quacking). To optimise model weights while avoiding the burden of normalising constants, we investigate the Hyvarinen score of the combined posterior predictions. We demonstrate locking with an illustrative example and discuss its practical application with importance sampling.
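
Log-linear pooling (the "locking" combination) has a simple form once the weights are fixed; the sketch below pools two Gaussian predictive densities on a grid and normalises numerically. The component densities, the weights, and the grid-based normalisation are illustrative assumptions, not the paper's Hyvarinen-score-based procedure.

```python
# Minimal sketch of log-linear pooling ("locking") of two predictive
# densities on a 1D grid; the Gaussians, weights, and grid normalisation
# are illustrative assumptions, not the paper's procedure.
import numpy as np
from scipy.stats import norm

grid = np.linspace(-6, 6, 2001)
log_p1 = norm.logpdf(grid, loc=-1.0, scale=1.0)
log_p2 = norm.logpdf(grid, loc=2.0, scale=0.5)
weights = np.array([0.7, 0.3])

# Log-linear pool: log q(y) = w1*log p1(y) + w2*log p2(y) - log Z.
log_unnorm = weights[0] * log_p1 + weights[1] * log_p2
unnorm = np.exp(log_unnorm - log_unnorm.max())   # stabilise before exponentiating
dx = grid[1] - grid[0]
pooled = unnorm / (unnorm.sum() * dx)            # normalise on the grid
print("pooled density integrates to", pooled.sum() * dx)
```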

Observational studies are the primary source of data for causal inference, but inference is challenging in the presence of unmeasured confounding. Missing data problems are also common in observational studies, and obtaining causal effects from nonignorably missing data with unmeasured confounding is a challenge. In this paper, we consider how to obtain the complier average causal effect under unmeasured confounding from nonignorably missing outcomes. We propose an auxiliary variable that plays two roles simultaneously: it serves as a shadow variable for identification and as an instrumental variable for inference. We also illustrate the differences between the missing-outcome mechanisms considered in previous work and the shadow variable assumption, and we give a causal diagram to illustrate this setting. Under such a setting, we present a general condition for nonparametric identification of the full data law from nonignorably missing outcomes with this auxiliary variable. For inference, we first recover the mean value of the outcome based on the generalized method of moments. Second, we propose an estimator that adjusts for the unmeasured confounding to obtain the complier average causal effect. We also establish asymptotic results for the estimated parameters. We evaluate the estimator's performance via simulations and apply it to a real-life dataset from a political analysis.
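
As a generic illustration of the GMM step only (not the paper's estimator, whose moment conditions involve the shadow variable and the missingness model), the sketch below estimates a mean and variance by minimising a quadratic form in sample moment conditions.

```python
# Generic GMM sketch (not the paper's estimator): estimate a mean and
# variance by minimising a quadratic form in sample moment conditions.
# The illustrative moments below stand in for the paper's shadow-variable
# and missingness-model conditions, which are not reproduced here.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
y = rng.normal(loc=1.5, scale=2.0, size=500)

def moment_conditions(theta, y):
    mu, sigma2 = theta
    g1 = y - mu                      # E[y - mu] = 0
    g2 = (y - mu) ** 2 - sigma2      # E[(y - mu)^2 - sigma^2] = 0
    return np.column_stack([g1, g2])

def gmm_objective(theta, y, weight=np.eye(2)):
    gbar = moment_conditions(theta, y).mean(axis=0)
    return gbar @ weight @ gbar

result = minimize(gmm_objective, x0=[0.0, 1.0], args=(y,))
print("estimated mean, variance:", result.x)
```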

The Straightness is a measure designed to characterize a pair of vertices in a spatial graph. It is defined as the ratio of the Euclidean distance to the graph distance between these vertices. It is often used as an average, for instance to describe the accessibility of a single vertex relative to all the other vertices in the graph, or even to summarize the graph as a whole. In some cases, one needs to compute the Straightness not only between vertices, but also between any other points constituting the graph of interest. Suppose for instance that our graph represents a road network and we do not want to limit ourselves to crossroad-to-crossroad itineraries, but allow any street number to be a starting point or destination. In this situation, the standard approach consists in: 1) discretizing the graph edges, 2) computing the vertex-to-vertex Straightness considering the additional vertices resulting from this discretization, and 3) performing the appropriate average over the obtained values. However, this discrete approximation can be computationally expensive on large graphs, and its precision has not been clearly assessed. In this article, we adopt a continuous approach to average the Straightness over the edges of spatial graphs. This allows us to derive five distinct measures able to characterize precisely the accessibility of the whole graph, as well as of individual vertices and edges. Our method is generic and could be applied to other measures designed for spatial graphs. We perform an experimental evaluation of our continuous average Straightness measures and show how they behave differently from the traditional vertex-to-vertex ones. Moreover, we also study their discrete approximations and show that our approach is globally less demanding in terms of both processing time and memory usage. Our R source code is publicly available under an open source license.
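
The vertex-to-vertex Straightness itself follows directly from the definition above; a minimal sketch is given below (in Python with networkx rather than the paper's R code), with the toy coordinates and edges being illustrative assumptions.

```python
# Minimal sketch of vertex-to-vertex Straightness (Euclidean distance
# divided by graph distance) on a toy spatial graph; the coordinates and
# edges are illustrative assumptions, and this is not the paper's R code.
import math
import networkx as nx

G = nx.Graph()
coords = {"a": (0, 0), "b": (1, 0), "c": (1, 1), "d": (0, 1)}
for node, xy in coords.items():
    G.add_node(node, pos=xy)
for u, v in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]:
    G.add_edge(u, v, weight=math.dist(coords[u], coords[v]))

def straightness(G, u, v):
    """Euclidean distance over shortest-path (graph) distance."""
    euclid = math.dist(G.nodes[u]["pos"], G.nodes[v]["pos"])
    graph_dist = nx.shortest_path_length(G, u, v, weight="weight")
    return euclid / graph_dist

print(straightness(G, "a", "c"))   # sqrt(2) / 2 = 0.707...
```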

In digital images, optical aberration manifests as a multivariate degradation, in which the spectrum of the scene, the lens imperfections, and the field of view together contribute to the result. Besides eliminating it at the hardware level, a post-processing system that utilizes various prior information is significant for correction. However, due to the content differences among the priors, a pipeline that simply aligns these factors shows limited efficiency and suboptimal restoration. Here, we propose a prior quantization model to correct the optical aberrations in image processing systems. To integrate these messages, we encode the various priors into a latent space and quantize them with learnable codebooks. After quantization, the prior codes are fused with the image restoration branch to realize targeted optical aberration correction. Comprehensive experiments demonstrate the flexibility of the proposed method and validate its potential to accomplish targeted restoration for a specific camera. Furthermore, our model promises to analyze the correlation between the various priors and the optical aberration of devices, which is helpful for joint software-hardware design.
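
One reading of "encode the various priors into a latent space and quantize them with learnable codebooks" is a nearest-neighbour codebook lookup; the sketch below illustrates that step only, with the codebook size, embedding dimension, and lookup rule being assumptions rather than the paper's architecture.

```python
# Rough sketch of nearest-neighbour codebook quantisation of prior
# embeddings; codebook size, embedding dimension, and the lookup rule are
# illustrative assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(3)
codebook = rng.normal(size=(64, 16))         # 64 learnable codes, dim 16
prior_embedding = rng.normal(size=(10, 16))  # 10 encoded prior vectors

# Replace each embedding with its closest code (Euclidean distance).
d2 = ((prior_embedding[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
codes = d2.argmin(axis=1)
quantised = codebook[codes]
print(codes)            # indices of the selected codebook entries
print(quantised.shape)  # (10, 16)
```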

Given a matrix $A$ and a vector $b$ with polynomial entries in $d$ real variables $\delta=(\delta_1,\ldots,\delta_d)$, we consider the following notion of feasibility: the pair $(A,b)$ is locally feasible if there exists an open neighborhood $U$ of $0$ such that for every $\delta\in U$ there exists $x$ satisfying $A(\delta)x\ge b(\delta)$ entry-wise. For $d=1$ we construct a polynomial-time algorithm for deciding local feasibility; this also gives the first polynomial-time algorithm for the asymptotic linear programming problem introduced by Jeroslow in 1973. For $d \ge 2$ we show that local feasibility is NP-hard. As an application (which was the primary motivation for this work), we give a computer-assisted proof of ergodicity of the following elementary 1D cellular automaton: given the current state $\eta_t \in \{0,1\}^{\mathbb{Z}}$, the next state $\eta_{t+1}(n)$ at each vertex $n\in \mathbb{Z}$ is obtained by $\eta_{t+1}(n)= \text{NAND}\big(\text{BSC}_\delta(\eta_t(n-1)), \text{BSC}_\delta(\eta_t(n))\big)$. Here the binary symmetric channel $\text{BSC}_\delta$ takes a bit as input and flips it with probability $\delta$ (and leaves it unchanged with probability $1-\delta$). It is shown that there exists $\delta_0>0$ such that for all $0<\delta<\delta_0$ the distribution of $\eta_t$ converges to a unique stationary measure irrespective of the initial condition $\eta_0$. We also consider the problem of broadcasting information on the 2D grid of noisy binary-symmetric channels $\text{BSC}_\delta$, where each node may apply an arbitrary processing function to its input bits. We prove that there exists $\delta_0'>0$ such that for all noise levels $0<\delta<\delta_0'$ it is impossible to broadcast information for any choice of processing functions, as conjectured by Makur, Mossel and Polyanskiy.
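
The NAND-BSC automaton is fully specified by the update rule in the abstract, so it is easy to simulate; the sketch below runs it on a finite ring (the paper works on all of $\mathbb{Z}$), with the ring size, noise level, and number of steps chosen arbitrarily.

```python
# Simulation sketch of the NAND-BSC automaton from the abstract on a
# finite ring (the paper works on all of Z); ring size, noise level, and
# number of steps are illustrative assumptions.
import numpy as np

def step(state, delta, rng):
    left = np.roll(state, 1)                       # eta_t(n - 1)
    noisy_left = left ^ (rng.random(state.size) < delta)   # BSC_delta
    noisy_self = state ^ (rng.random(state.size) < delta)  # BSC_delta
    return 1 - (noisy_left & noisy_self)           # NAND of the two noisy bits

rng = np.random.default_rng(4)
delta, n_sites, n_steps = 0.01, 1000, 500
state = rng.integers(0, 2, size=n_sites)           # arbitrary initial condition
for _ in range(n_steps):
    state = step(state, delta, rng)
print("fraction of ones after", n_steps, "steps:", state.mean())
```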

This paper combines modern numerical computation with theoretical results to improve our understanding of the growth factor problem for Gaussian elimination. On the computational side we obtain lower bounds for the maximum growth for complete pivoting for $n=1:75$ and $n=100$ using the Julia JuMP optimization package. At $n=100$ we obtain a growth factor bigger than $3n$. The numerical evidence suggests that the maximum growth factor is bigger than $n$ if and only if $n \ge 11$. We also present a number of theoretical results. We show that the maximum growth factor over matrices with entries restricted to a subset of the reals is nearly equal to the maximum growth factor over all real matrices. We also show that the growth factors under floating point arithmetic and exact arithmetic are nearly identical. Finally, through numerical search, and stability and extrapolation results, we provide improved lower bounds for the maximum growth factor. Specifically, we find that the largest growth factor is bigger than $1.0045n$, and the lim sup of the ratio with $n$ is greater than or equal to $3.317$. In contrast to the old conjecture that growth might never be bigger than $n$, it seems likely that the maximum growth divided by $n$ goes to infinity as $n \rightarrow \infty$.
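
For concreteness, the growth factor discussed here is the largest intermediate entry produced during elimination divided by the largest entry of the original matrix; the sketch below computes it under complete pivoting for a random test matrix. The random input is an illustrative assumption, whereas the paper searches for extremal matrices via JuMP optimization.

```python
# Sketch of the growth factor under Gaussian elimination with complete
# pivoting: the maximum absolute entry appearing during elimination divided
# by the maximum absolute entry of the original matrix. The random test
# matrix is an illustrative assumption (the paper searches for extremal ones).
import numpy as np

def growth_factor_complete_pivoting(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    max_original = np.abs(A).max()
    max_seen = max_original
    for k in range(n - 1):
        # Complete pivoting: bring the largest entry of the trailing
        # submatrix to the pivot position via a row and a column swap.
        sub = np.abs(A[k:, k:])
        i, j = np.unravel_index(sub.argmax(), sub.shape)
        A[[k, k + i], :] = A[[k + i, k], :]
        A[:, [k, k + j]] = A[:, [k + j, k]]
        pivot = A[k, k]
        A[k + 1:, k:] -= np.outer(A[k + 1:, k] / pivot, A[k, k:])
        max_seen = max(max_seen, np.abs(A[k + 1:, k + 1:]).max())
    return max_seen / max_original

rng = np.random.default_rng(5)
print(growth_factor_complete_pivoting(rng.normal(size=(50, 50))))
```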
