
The eigenvalue method, suggested by the developer of the widely used Analytic Hierarchy Process methodology, exhibits right-left asymmetry: the priorities derived from the right eigenvector do not necessarily coincide with the priorities derived from the reciprocal left eigenvector. This paper offers a comprehensive numerical experiment comparing the two eigenvector-based weighting procedures, together with a reasonable alternative, the row geometric mean, with respect to four measures. The underlying pairwise comparison matrices are constructed randomly with different dimensions and levels of inconsistency. The disagreement between the two eigenvectors turns out not to be a monotonic function of these characteristics of the matrix. Ranking contradictions can affect alternatives with relatively distant priorities. The row geometric mean is found to lie almost at the midpoint between the right and inverse left eigenvectors, making it a straightforward compromise between them.
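As a hedged illustration of the three weighting procedures compared above, the following NumPy sketch builds a random reciprocal pairwise comparison matrix (using a made-up log-normal perturbation model for inconsistency, not necessarily the one in the paper's experiment) and derives priorities from the right eigenvector, the reciprocal left eigenvector, and the row geometric mean:

```python
import numpy as np

def random_pcm(n, noise=0.3, seed=None):
    """Random reciprocal pairwise comparison matrix: a_ij ~ w_i/w_j,
    perturbed by log-normal noise to control inconsistency (toy model)."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(1, 9, size=n)
    A = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = w[i] / w[j] * np.exp(rng.normal(0, noise))
            A[j, i] = 1.0 / A[i, j]
    return A

def principal_eigvec(M):
    """Normalized Perron (principal) eigenvector of a positive matrix."""
    vals, vecs = np.linalg.eig(M)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

A = random_pcm(6, seed=0)
right = principal_eigvec(A)              # right-eigenvector priorities
left_inv = 1.0 / principal_eigvec(A.T)   # reciprocal left eigenvector ...
left_inv /= left_inv.sum()               # ... renormalized
rgm = np.exp(np.log(A).mean(axis=1))     # row geometric mean
rgm /= rgm.sum()
print(np.round(np.vstack([right, left_inv, rgm]), 4))
```

With `noise=0` the matrix is consistent and all three priority vectors coincide; increasing `noise` makes the right-left disagreement, and the intermediate position of the row geometric mean, visible.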

Related content

We consider the challenges associated with causal inference in settings where data from a randomized trial are augmented with control data from an external source to improve efficiency in estimating the average treatment effect (ATE). Through the development of a formal causal inference framework, we outline sufficient causal assumptions about the exchangeability between the internal and external controls to identify the ATE and establish the connection to a novel graphical criterion. We propose estimators, review efficiency bounds, develop an approach for efficient doubly-robust estimation even when unknown nuisance models are estimated with flexible machine learning methods, and demonstrate finite-sample performance through a simulation study. To illustrate the ideas and methods, we apply the framework to a trial investigating the effect of risdiplam on motor function in patients with spinal muscular atrophy, for which there exists an external set of control patients from a previous trial.
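The paper's fused trial-plus-external-controls estimator is not spelled out in the abstract, but the doubly-robust (AIPW) form it builds on can be sketched. The toy below pools the two arms into one dataset (exchangeable external controls would simply enter with a=0) and fits nuisance models with gradient boosting; an efficient version along the paper's lines would additionally use cross-fitting. All data and model choices here are illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

def aipw_ate(X, a, y):
    """Doubly-robust (AIPW) point estimate of E[Y(1)] - E[Y(0)]."""
    ps = GradientBoostingClassifier().fit(X, a).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.01, 0.99)               # guard against extreme weights
    mu1 = GradientBoostingRegressor().fit(X[a == 1], y[a == 1]).predict(X)
    mu0 = GradientBoostingRegressor().fit(X[a == 0], y[a == 0]).predict(X)
    psi = mu1 - mu0 + a * (y - mu1) / ps - (1 - a) * (y - mu0) / (1 - ps)
    return psi.mean()

# toy data; exchangeable external controls would be appended with a = 0
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
a = rng.binomial(1, 0.5, size=n)
y = X[:, 0] + 2.0 * a + rng.normal(size=n)      # true ATE = 2
print(aipw_ate(X, a, y))
```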

We give the first numerical calculation of the spectrum of the Laplacian acting on bundle-valued forms on a Calabi-Yau three-fold. Specifically, we show how to compute the approximate eigenvalues and eigenmodes of the Dolbeault Laplacian acting on bundle-valued $(p,q)$-forms on K\"ahler manifolds. We restrict our attention to line bundles over complex projective space and Calabi-Yau hypersurfaces therein. We give three examples. For two of these, $\mathbb{P}^3$ and a Calabi-Yau one-fold (a torus), we compare our numerics with exact results available in the literature and find complete agreement. For the third example, the Fermat quintic three-fold, there are no known analytic results, so our numerical calculations are the first of their kind. The resulting spectra pass a number of non-trivial checks that arise from Serre duality and the Hodge decomposition. The outputs of our algorithm include all the ingredients one needs to compute physical Yukawa couplings in string compactifications.
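The geometry in this paper is well beyond a short snippet, but the one-fold check it mentions has a classical toy analogue: on a flat torus with the trivial bundle, the Dolbeault Laplacian is proportional to the ordinary Laplacian (on a Kähler manifold $\Delta_d = 2\Delta_{\bar\partial}$), and the ordinary Laplacian on the unit flat 2-torus has exact eigenvalues $4\pi^2(m^2+n^2)$. The sketch below (scalar Laplacian, plain finite differences, not the paper's method) reproduces that spectrum numerically, in the spirit of "compare numerics with exact results":

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

N = 64                                       # grid points per side
h = 1.0 / N                                  # spacing on the unit flat torus
D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N), format="lil")
D[0, -1] = D[-1, 0] = 1.0                    # periodic boundary conditions
D = (D / h**2).tocsr()
I = sp.identity(N, format="csr")
L = -(sp.kron(D, I) + sp.kron(I, D))         # minus the 2D Laplacian (>= 0)

# ten smallest eigenvalues via shift-invert around -1 (L itself is singular)
vals = np.sort(eigsh(L, k=10, sigma=-1.0, which="LM")[0])
exact = sorted(4 * np.pi**2 * (m**2 + n**2)
               for m in range(-3, 4) for n in range(-3, 4))[:10]
print(np.round(vals, 2))                     # ~ [0, 39.45 x4, 78.90 x4, ...]
print(np.round(exact, 2))                    # [0, 39.48 x4, 78.96 x4, ...]
```

The finite-difference eigenvalues converge to the exact ones as the grid is refined, mirroring (in a vastly simpler setting) the degeneracy checks the paper performs via Serre duality and the Hodge decomposition.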

Optimal control problems driven by evolutionary partial differential equations arise in many industrial applications, and their numerical solution is known to be a challenging problem. One approach to obtaining an optimal feedback control is via the Dynamic Programming principle. Nevertheless, despite many theoretical results, this method has been applied only to very special cases, since it suffers from the curse of dimensionality. Our goal is to mitigate this crucial obstruction by developing a new version of dynamic programming algorithms based on a tree structure and exploiting the compact representation of the dynamical systems based on tensor notation via a model reduction approach. Here, we show how this algorithm can be constructed for general nonlinear control problems and illustrate its performance on a number of challenging numerical tests. Our numerical results indicate a large decrease in memory requirements, as well as in computational time, for the proposed problems. Moreover, we prove the convergence of the algorithm and give some hints on its implementation.
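A minimal sketch of the tree-structure idea, without the node pruning and tensor-based model reduction that make the paper's algorithm scale: branch a scalar dynamics over a small discrete control set, then run backward dynamic programming over the resulting tree. The dynamics, costs, and parameters below are invented for illustration.

```python
import numpy as np

# Toy tree-structure dynamic programming for the scalar problem
#   dx/dt = u,  running cost x^2 + 0.1 u^2,  u in {-1, 0, 1}  (all invented)
U = np.array([-1.0, 0.0, 1.0])             # discrete control set
dt, steps = 0.1, 5                          # time step and horizon

levels = [np.array([1.0])]                  # tree level 0: initial state x0 = 1
for _ in range(steps):                      # grow: each node branches over U
    levels.append((levels[-1][:, None] + dt * U[None, :]).ravel())

V = np.zeros_like(levels[-1])               # zero terminal cost at the leaves
for k in range(steps - 1, -1, -1):          # backward sweep over tree levels
    x = levels[k]
    stage = dt * (x[:, None] ** 2 + 0.1 * U[None, :] ** 2)
    V = (stage + V.reshape(len(x), len(U))).min(axis=1)

print("approximate value at x0 = 1:", float(V[0]))
```

Without pruning the tree has $3^5 = 243$ leaves here and grows exponentially in the horizon; merging nearby nodes and compressing the reduced dynamics in tensor format is precisely what keeps the paper's memory footprint small.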

We propose a self-stabilizing leader election (SS-LE) protocol on ring networks in the population protocol model. Given rough knowledge $\psi = \lceil \log n \rceil + O(1)$ of the population size $n$, the proposed protocol lets the population reach a safe configuration within $O(n^2 \log n)$ steps with high probability, starting from any configuration. Thereafter, the population keeps the unique leader forever. Since no protocol solves SS-LE in $o(n^2)$ steps with high probability, the convergence time is near-optimal: the gap is only an $O(\log n)$ multiplicative factor. This protocol uses only $\mathrm{polylog}(n)$ states. There are two state-of-the-art algorithms in the current literature that solve SS-LE on ring networks. The first uses a polynomial number of states and solves SS-LE in $O(n^2)$ steps, whereas the second requires exponential time but uses only a constant number of states. Our proposed algorithm provides an excellent middle ground between these two.
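For context, the folklore (non-self-stabilizing) leader election protocol is easy to simulate and exhibits the $\Theta(n^2)$ interaction count that the lower bound above refers to; the paper's SS-LE protocol, which must additionally recover from arbitrary initial configurations, is not reproduced here.

```python
import random

def leader_election_steps(n, seed=None):
    """Folklore leader election in the population protocol model:
    every agent starts as a leader; when the scheduler picks two
    leaders, one survives. Returns the number of interactions used."""
    rng = random.Random(seed)
    leader = [True] * n
    steps = 0
    while sum(leader) > 1:
        i, j = rng.sample(range(n), 2)     # uniformly random scheduler
        if leader[i] and leader[j]:
            leader[j] = False              # j yields to i
        steps += 1
    return steps

n = 100
trials = [leader_election_steps(n, s) for s in range(20)]
print(f"n={n}: mean interactions ~ {sum(trials) / len(trials):.0f} (Theta(n^2))")
```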

We study fair mechanisms for the (asymmetric) one-sided allocation problem with m items and n multi-unit demand agents with additive, unit-sum valuations. The symmetric case (m=n), the one-sided matching problem, has been studied extensively for the class of unit demand agents, in particular with respect to the folklore Random Priority mechanism and the Probabilistic Serial mechanism, introduced by Bogomolnaia and Moulin. Under the assumption of unit-sum valuation functions, Christodoulou et al. proved that the price of anarchy is $\Theta(\sqrt{n})$ in the one-sided matching problem for both the Random Priority and Probabilistic Serial mechanisms. Whilst both Random Priority and Probabilistic Serial are ordinal mechanisms, these approximation guarantees are the best possible even for the broader class of cardinal mechanisms. To extend these results to the general setting, there are two technical obstacles. One, asymmetry ($m\neq n$) is problematic, especially when the number of items is much greater than the number of agents. Two, it is necessary to study multi-unit demand agents rather than simply unit demand agents. Our approach is to study a cardinal mechanism variant of Probabilistic Serial, which we call Cardinal Probabilistic Serial. We present structural theorems for this mechanism and use them to obtain bounds on the price of anarchy. Our first main result is an upper bound of $O(\sqrt{n}\cdot \log m)$ on the price of anarchy for the asymmetric one-sided allocation problem with multi-unit demand agents. This upper bound applies to Probabilistic Serial as well, and there is a complementary lower bound of $\Omega(\sqrt{n})$ for any fair mechanism. Our second main result is that the price of anarchy degrades with the number of items: specifically, a logarithmic dependence on the number of items is necessary for both mechanisms.
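The baseline Probabilistic Serial mechanism for the symmetric unit demand case is the simultaneous eating algorithm, sketched below for intuition (the paper's Cardinal Probabilistic Serial variant differs; this is only the classic ordinal version):

```python
import numpy as np

def probabilistic_serial(prefs):
    """Simultaneous-eating algorithm (symmetric, unit demand case).
    prefs[i]: agent i's strict preference order over items, best first.
    Returns P with P[i, j] = probability that agent i receives item j."""
    n, m = len(prefs), len(prefs[0])
    remaining = np.ones(m)                  # fraction of each item left
    P = np.zeros((n, m))
    while remaining.sum() > 1e-9:
        # every agent eats its favourite item that is not yet exhausted
        targets = [next(j for j in prefs[i] if remaining[j] > 1e-9)
                   for i in range(n)]
        eaters = np.bincount(targets, minlength=m)
        # eat at unit speed until the first targeted item runs out
        dt = min(remaining[j] / eaters[j] for j in set(targets))
        for i, j in enumerate(targets):
            P[i, j] += dt
        remaining -= eaters * dt
    return P

# three agents, three items; agents 0 and 1 share a top choice
print(probabilistic_serial([[0, 1, 2], [0, 2, 1], [1, 0, 2]]))
```

In the example, agents 0 and 1 split their common favourite item 0 equally while agent 2 eats item 1 alone, which is exactly the envy-freeness-by-construction property that motivates the mechanism.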

As social issues related to gender bias attract closer scrutiny, accurate tools to determine the gender profile of large groups become essential. When explicit data are unavailable, gender is often inferred from names. Current methods follow a strategy whereby individuals of the group, one by one, are assigned a gender label or probability based on gender-name correlations observed in the population at large. We show that this strategy is logically inconsistent and has practical shortcomings, the most notable of which is the systematic underestimation of gender bias. We introduce a global inference strategy that estimates gender composition according to the context of the full list of names. The tool suffers from no intrinsic methodological biases, is robust against errors, is easily implemented, and is computationally light.
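A toy contrast between the two strategies, with an invented lexicon (all frequencies hypothetical, and not the paper's actual estimator): the one-by-one approach averages per-name posteriors, while a global approach maximizes the joint likelihood of the whole list over the group-level fraction.

```python
import numpy as np

# invented lexicon: P(name | woman) and P(name | man) from population data
f = {"alex": 0.020, "maria": 0.100, "john": 0.001, "sam": 0.015}
m = {"alex": 0.030, "maria": 0.001, "john": 0.120, "sam": 0.020}

def per_name_fraction(names, prior=0.5):
    """One-by-one strategy: average the per-name posterior P(woman | name)."""
    post = [prior * f[n] / (prior * f[n] + (1 - prior) * m[n]) for n in names]
    return float(np.mean(post))

def global_fraction(names):
    """Global strategy: MLE of the group fraction theta from the full list,
    maximizing sum_i log(theta * f_i + (1 - theta) * m_i)."""
    grid = np.linspace(1e-3, 1 - 1e-3, 999)
    loglik = np.zeros_like(grid)
    for name in names:
        loglik += np.log(grid * f[name] + (1 - grid) * m[name])
    return float(grid[np.argmax(loglik)])

group = ["maria", "alex", "sam", "john", "maria", "alex"]
print("one-by-one:", round(per_name_fraction(group), 3))
print("global MLE:", round(global_fraction(group), 3))
```

The one-by-one estimate depends on the arbitrary prior baked into each posterior, whereas the global estimate lets the list itself determine the composition, which is the inconsistency the abstract points at.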

Inverse problems are in many cases solved with optimization techniques. When the underlying model is linear, first-order gradient methods are usually sufficient. With nonlinear models, due to nonconvexity, one must often resort to second-order methods that are computationally more expensive. In this work we aim to approximate a nonlinear model with a linear one and correct the resulting approximation error. We develop a sequential method that iteratively solves a linear inverse problem and updates the approximation error by evaluating it at the new solution. This treatment convexifies the problem and allows us to benefit from established convex optimization methods. We separately consider cases where the approximation is fixed over iterations and where the approximation is adaptive. In the fixed case we show theoretically under what assumptions the sequence converges. In the adaptive case, particularly considering the special case of approximation by first-order Taylor expansion, we show that under certain assumptions the sequence converges to a critical point of the original nonconvex functional. Furthermore, we show that with quadratic objective functions the sequence corresponds to the Gauss-Newton method. Finally, we present numerical results superior to those of the conventional model correction method. We also show that a fixed approximation can provide competitive results with considerable computational speed-up.
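A minimal sketch of the fixed-approximation variant, under invented choices (a random linear part plus a small tanh nonlinearity, Tikhonov regularization): each iteration solves a linear least-squares problem with the approximation error frozen at the previous iterate, so the expensive normal-equations matrix is factored once.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = rng.normal(size=(n, n)) / np.sqrt(n)     # linear approximation of the model

def forward(x):
    """Nonlinear forward model: linear part plus a small tanh perturbation."""
    return A @ x + 0.1 * np.tanh(x)

x_true = rng.normal(size=n)
y = forward(x_true) + 0.01 * rng.normal(size=n)

alpha = 1e-3                                  # Tikhonov regularization weight
M = A.T @ A + alpha * np.eye(n)               # fixed over all iterations
x = np.zeros(n)
for _ in range(50):
    e = forward(x) - A @ x                    # approximation error at x_k
    # convex subproblem: min_x ||A x + e - y||^2 + alpha ||x||^2
    x = np.linalg.solve(M, A.T @ (y - e))

print("data misfit:", np.linalg.norm(forward(x) - y))
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The adaptive variant would instead re-linearize around the current iterate (first-order Taylor expansion), which with a quadratic objective reduces to Gauss-Newton, as the abstract notes.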

Observational studies are the primary source of data for causal inference, but such inference is challenging in the presence of unmeasured confounding. Missing data problems are also common in observational studies, and obtaining causal effects from nonignorably missing data with unmeasured confounding is a particular challenge. In this paper, we consider how to obtain the complier average causal effect under unmeasured confounding when outcomes are nonignorably missing. We propose an auxiliary variable that plays two roles simultaneously: it is a shadow variable for identification and an instrumental variable for inference. We also contrast the missing-outcome mechanisms assumed in previous work with the shadow variable assumption, and give a causal diagram to illustrate the distinction. Under this setting, we present a general condition for nonparametric identification of the full data law from nonignorably missing outcomes with this auxiliary variable. For inference, we first recover the mean value of the outcome based on the generalized method of moments. Second, we propose an estimator that adjusts for the unmeasured confounding to obtain the complier average causal effect. We establish the asymptotic properties of the estimated parameters, evaluate the performance of the estimator via simulations, and apply it to a real-life dataset from political analysis.

While quantitative methods have been used to examine changes in word usage in books, studies have focused on overall trends, such as the shapes of narratives, which are independent of book length. We instead look at how words change over the course of a book as a function of the number of words, rather than the fraction of the book, completed at any given point; we define this measure as "cumulative word-time". Using ousiometrics, a reinterpretation of the valence-arousal-dominance framework of meaning obtained from semantic differentials, we convert text into time series of power and danger scores in cumulative word-time. Each time series is then decomposed using empirical mode decomposition into a sum of constituent oscillatory modes and a non-oscillatory trend. By comparing the decomposition of the original power and danger time series with those derived from shuffled text, we find that shorter books exhibit only a general trend, while longer books have fluctuations in addition to the general trend. These fluctuations typically have a period of a few thousand words regardless of the book length or library classification code, but vary depending on the content and structure of the book. Our findings suggest that, in the ousiometric sense, longer books are not expanded versions of shorter books, but are more similar in structure to a concatenation of shorter texts. Further, they are consistent with editorial practices that require longer texts to be broken down into sections, such as chapters. Our method also provides a data-driven denoising approach that works for texts of various lengths, in contrast to the more traditional approach of using large window sizes that may inadvertently smooth out relevant information, especially for shorter texts. These results open up avenues for future work in computational literary analysis, particularly the measurement of a basic unit of narrative.
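A sketch of the decomposition step on synthetic data (the power scores, the few-thousand-word period, and the noise level below are all made up to mimic the description; PyEMD is one common EMD implementation, not necessarily the one used in the paper):

```python
import numpy as np
from PyEMD import EMD          # pip install EMD-signal

# synthetic stand-in for a book's power scores in cumulative word-time:
# slow narrative trend + a few-thousand-word oscillation + noise (invented)
rng = np.random.default_rng(0)
t = np.arange(20_000)                            # cumulative word-time
series = (0.3 * t / t[-1]
          + 0.1 * np.sin(2 * np.pi * t / 4_000)
          + 0.05 * rng.normal(size=t.size))

emd = EMD()
emd.emd(series)
imfs, residue = emd.get_imfs_and_residue()       # oscillatory modes + trend
print(f"{len(imfs)} oscillatory modes; the residue carries the overall trend")
```

Comparing the modes recovered from the real series against those from shuffled text, as the abstract describes, is what separates genuine structural fluctuations from sifting artifacts.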

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
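Among the many schemes such a survey covers, the most basic is uniform affine (asymmetric) quantization to $b$ bits, which can be sketched in a few lines; the 4-bit choice and names below are illustrative.

```python
import numpy as np

def quantize_uniform(x, num_bits=4):
    """Uniform affine (asymmetric) quantization of a float tensor."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)        # real units per step
    zero_point = round(qmin - x.min() / scale)         # integer mapping of 0.0
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, s, z = quantize_uniform(w, num_bits=4)
w_hat = dequantize(q, s, z)
print("4-bit RMSE:", np.sqrt(np.mean((w - w_hat) ** 2)))
```

The round-trip error this prints is the basic accuracy/footprint trade-off the survey organizes: fewer bits shrink memory and latency but enlarge the quantization error that the surveyed methods try to contain.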
