
The paper studies a probabilistic notion of causes in Markov chains that relies on the counterfactuality principle and the probability-raising property. This notion is motivated by the use of causes for monitoring purposes where the aim is to detect faulty or undesired behaviours before they actually occur. A cause is a set of finite executions of the system after which the probability of the effect exceeds a given threshold. We introduce multiple types of costs that capture the consumption of resources from different perspectives, and study the complexity of computing cost-minimal causes.
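
To make the probability-raising condition concrete, the following sketch (an illustration only, not the paper's cost-minimal cause computation; in the paper a cause is a set of finite executions, whereas here only single states are inspected) computes, for each state of a toy Markov chain, the probability of eventually reaching an effect state, and flags the states from which that probability exceeds a chosen threshold as well as the probability from the initial state. The transition matrix and the threshold are made-up assumptions.

```python
import numpy as np

# Toy Markov chain: state 3 is the undesired "effect", state 4 is a safe sink.
P = np.array([
    [0.2, 0.4, 0.2, 0.0, 0.2],
    [0.0, 0.1, 0.5, 0.3, 0.1],
    [0.0, 0.0, 0.2, 0.7, 0.1],
    [0.0, 0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])
effect, safe = [3], [4]
threshold = 0.8                      # made-up probability threshold

# Reachability probabilities x[s] = Pr(eventually reach the effect | start in s),
# obtained from the standard linear system over the transient states.
trans = [s for s in range(len(P)) if s not in effect + safe]
A = np.eye(len(trans)) - P[np.ix_(trans, trans)]
b = P[np.ix_(trans, effect)].sum(axis=1)
x = np.zeros(len(P))
x[effect] = 1.0
x[trans] = np.linalg.solve(A, b)

# States whose visit raises the effect probability above the threshold
# (and above the a priori probability from the initial state 0).
candidates = [s for s in trans if x[s] >= threshold and x[s] > x[0]]
print("reach probabilities:", np.round(x, 3))
print("probability-raising states:", candidates)
```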

Related content

A Markov chain, named after Andrey Markov (A. A. Markov, 1856-1922), is a discrete-event stochastic process with the Markov property: given the present knowledge or information, the past (the history of states before the present) is irrelevant for predicting the future (the states after the present). At each step of a Markov chain, the system may move from one state to another, or remain in its current state, according to a probability distribution. A change of state is called a transition, and the probabilities associated with the different state changes are called transition probabilities. A random walk is an example of a Markov chain: the state at each step is a node of a graph, and at each step the walk may move to any adjacent node, each with the same probability, regardless of the path taken so far.
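
As a minimal illustration of the random-walk example above, the sketch below simulates a walk on a small undirected graph: the next node depends only on the current node (the Markov property), and each neighbour is chosen with equal probability. The graph and the helper function are made up for this example.

```python
import random

# Undirected graph as an adjacency list (a made-up example).
graph = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def random_walk(graph, start, steps, seed=0):
    """Simulate a simple random walk: at every step, move to a uniformly
    chosen neighbour of the current node, independent of the past path."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(graph[path[-1]]))
    return path

print(random_walk(graph, "A", 10))
```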

An irreducible stochastic matrix with rational entries has a stationary distribution given by a vector of rational numbers. We give an upper bound on the lowest common denominator of the entries of this vector. Bounds of this kind are used to study the complexity of algorithms for solving stochastic mean payoff games. They are usually derived using the Hadamard inequality, but this leads to suboptimal results. We replace the Hadamard inequality with the Markov chain tree formula in order to obtain optimal bounds. We also adapt our approach to obtain bounds on the absorption probabilities of finite Markov chains and on the gains and bias vectors of Markov chains with rewards.
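
As a toy illustration of why the stationary vector is rational and how the Markov chain tree formula enters (this is not the paper's bound derivation), the sketch below computes the stationary distribution of a small rational stochastic matrix exactly with fractions: by the directed matrix-tree theorem, the total weight of the spanning arborescences directed toward state $i$ equals the principal minor of $I - P$ with row and column $i$ removed, and the stationary probabilities are proportional to these minors. The matrix is a made-up example.

```python
from fractions import Fraction
from math import lcm

def det(M):
    """Exact determinant by fraction-based Gaussian elimination."""
    M = [row[:] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for c in range(n):
        pivot = next((r for r in range(c, n) if M[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return sign * d

# Irreducible stochastic matrix with rational entries (a made-up example).
P = [[Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)],
     [Fraction(1, 4), Fraction(1, 2), Fraction(1, 4)],
     [Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)]]
n = len(P)

# Markov chain tree formula: pi_i is proportional to the total weight of the
# spanning arborescences directed toward state i, which equals the principal
# minor of the Laplacian L = I - P with row and column i deleted.
L = [[(1 if i == j else 0) - P[i][j] for j in range(n)] for i in range(n)]
tree_weight = [det([[L[r][c] for c in range(n) if c != i]
                    for r in range(n) if r != i]) for i in range(n)]
total = sum(tree_weight)
pi = [w / total for w in tree_weight]
print("stationary distribution:", pi)                       # 9/25, 2/5, 6/25
print("lowest common denominator:", lcm(*(p.denominator for p in pi)))
```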

We consider the problem of entanglement-assisted one-shot classical communication. In the zero-error regime, entanglement can increase the one-shot zero-error capacity of a family of classical channels following the strategy of Cubitt et al., Phys. Rev. Lett. 104, 230503 (2010). This strategy uses the Kochen-Specker theorem which is applicable only to projective measurements. As such, in the regime of noisy states and/or measurements, this strategy cannot increase the capacity. To accommodate generically noisy situations, we examine the one-shot success probability of sending a fixed number of classical messages. We show that preparation contextuality powers the quantum advantage in this task, increasing the one-shot success probability beyond its classical maximum. Our treatment extends beyond Cubitt et al. and includes, for example, the experimentally implemented protocol of Prevedel et al., Phys. Rev. Lett. 106, 110505 (2011). We then show a mapping between this communication task and a corresponding nonlocal game. This mapping generalizes the connection with pseudotelepathy games previously noted in the zero-error case. Finally, after motivating a constraint we term context-independent guessing, we show that contextuality witnessed by noise-robust noncontextuality inequalities obtained in R. Kunjwal, Quantum 4, 219 (2020), is sufficient for enhancing the one-shot success probability. This provides an operational meaning to these inequalities and the associated hypergraph invariant, the weighted max-predictability, introduced in R. Kunjwal, Quantum 3, 184 (2019). Our results show that the task of entanglement-assisted one-shot classical communication provides a fertile ground to study the interplay of the Kochen-Specker theorem, Spekkens contextuality, and Bell nonlocality.
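
For reference, the classical benchmark in this task is easy to state: to send one of a fixed number of equiprobable messages over a classical channel without assistance, the best strategy picks one channel input per message and decodes by maximum likelihood (shared randomness cannot improve the average success probability, so deterministic strategies suffice). The brute-force sketch below, with a made-up channel, computes this classical one-shot success probability; it is a toy baseline, not any of the protocols in the cited works.

```python
from itertools import product

def classical_one_shot_success(P, m):
    """Best deterministic classical success probability for sending one of m
    equiprobable messages through a channel P[x][y] = Pr(output y | input x):
    choose a codeword for each message and decode by maximum likelihood."""
    n_inputs, n_outputs = len(P), len(P[0])
    best = 0.0
    for codewords in product(range(n_inputs), repeat=m):
        # Optimal decoder for fixed codewords: for each output y, guess the
        # message whose codeword makes y most likely.
        succ = sum(max(P[codewords[i]][y] for i in range(m))
                   for y in range(n_outputs)) / m
        best = max(best, succ)
    return best

# Toy noisy channel with 3 inputs and 3 outputs (made-up numbers).
P = [[0.8, 0.1, 0.1],
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]]
print(classical_one_shot_success(P, m=2))
```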

In verified generic programming, one cannot exploit the structure of concrete data types but has to rely on well-chosen sets of specifications or abstract data types (ADTs). Functors and monads are at the core of many applications of functional programming. This raises the question of what useful ADTs for verified functors and monads could look like. The functorial map of many important monads preserves extensional equality. For instance, if $f, g : A \rightarrow B$ are extensionally equal, that is, $\forall x \in A, \ f \ x = g \ x$, then $map \ f : List \ A \rightarrow List \ B$ and $map \ g$ are also extensionally equal. This suggests that preservation of extensional equality could be a useful principle in verified generic programming. We explore this possibility with a minimalist approach: we deal with (the lack of) extensional equality in Martin-L\"of's intensional type theories without extending the theories or using full-fledged setoids. Perhaps surprisingly, this minimal approach turns out to be extremely useful. It allows one to derive not only simple generic proofs of monadic laws but also verified, generic results in dynamical systems and control theory. In turn, these results avoid tedious code duplication and ad-hoc proofs. Thus, our work is a contribution towards pragmatic, verified generic programming.

In this paper, we propose the first optimum process scheduling algorithm for an increasingly prevalent type of heterogeneous multicore (HEMC) system that combines high-performance big cores and energy-efficient small cores with the same instruction-set architecture (ISA). Existing algorithms are all heuristics-based, and the well-known IPC-driven approach essentially tries to schedule high scaling factor processes on big cores. Our analysis shows that, for optimum solutions, it is also critical to consider placing long-running processes on big cores. Tests of SPEC 2006 cases on various big-small core combinations show that our proposed optimum approach is up to 34% faster than the IPC-driven heuristic approach in terms of total workload completion time. The complexity of our algorithm is $O(N \log N)$, where $N$ is the number of processes, so the proposed optimum algorithm is practical to use.
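
For context, here is a toy sketch of the IPC-driven baseline mentioned above (an assumption about how such a heuristic ranks processes, not the paper's optimum algorithm): processes with the highest big-core scaling factor are placed on the big cores, ignoring how long each process still has to run, which is exactly the information the paper argues also matters.

```python
def ipc_driven_assign(processes, n_big):
    """Toy IPC-driven heuristic: rank processes by their big-core scaling
    factor (speed-up on a big core relative to a small core) and place the
    top n_big of them on big cores; everything else runs on small cores."""
    ranked = sorted(processes, key=lambda p: p[1], reverse=True)
    return {"big": ranked[:n_big], "small": ranked[n_big:]}

# Made-up workload: (process name, assumed big-core scaling factor).
workload = [("gcc", 1.9), ("mcf", 1.2), ("bzip2", 1.6), ("lbm", 1.1)]
print(ipc_driven_assign(workload, n_big=2))
```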

The modeling of emergent swarm intelligence constitutes a major challenge, and it has been tackled in a number of different ways. However, existing approaches fail to capture the nature of swarm intelligence: they are either too abstract for practical application or not generic enough to describe the various types of emergence phenomena. In this paper, a contradiction-centric model for swarm intelligence is proposed, in which individuals determine their behaviors based on their internal contradictions whilst they associate and interact to update their contradictions. The model hypothesizes that 1) the emergence of swarm intelligence is rooted in the development of individuals' internal contradictions and in the interactions taking place between individuals and the environment, and 2) swarm intelligence is essentially a combinative reflection of the configurations of individuals' internal contradictions and the distributions of these contradictions across individuals. The model is formally described, and five swarm intelligence systems are studied to illustrate its broad applicability. The studies confirm the generic character of the model and its effectiveness for describing the emergence of various kinds of swarm intelligence; they also demonstrate that the model is straightforward to apply, without the need for complicated computations.

This paper addresses the following fundamental maximum throughput routing problem: Given an arbitrary edge-capacitated $n$-node directed network and a set of $k$ commodities, with source-destination pairs $(s_i,t_i)$ and demands $d_i> 0$, admit and route the largest possible number of commodities -- i.e., the maximum {\em throughput} -- to satisfy their demands. The main contributions of this paper are two-fold: First, we present a bi-criteria approximation algorithm for this all-or-nothing multicommodity flow (ANF) problem. Our algorithm is the first to achieve a {\em constant approximation of the maximum throughput} with an {\em edge capacity violation ratio that is at most logarithmic in $n$}, with high probability. Our approach is based on a version of randomized rounding that keeps splittable flows, rather than approximating those via a non-splittable path for each commodity: This allows our approach to work for {\em arbitrary directed edge-capacitated graphs}, unlike most of the prior work on the ANF problem. Our algorithm also works if we consider the weighted throughput, where the benefit gained by fully satisfying the demand for commodity $i$ is determined by a given weight $w_i>0$. Second, we present a derandomization of our algorithm that maintains the same approximation bounds, using novel pessimistic estimators for Bernstein's inequality. In addition, we show how our framework can be adapted to achieve a polylogarithmic fraction of the maximum throughput while maintaining a constant edge capacity violation, if the network capacity is large enough. One important aspect of our randomized and derandomized algorithms is their {\em simplicity}, which lends to efficient implementations in practice.
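
The core rounding step can be sketched as follows (a simplified illustration under assumed inputs, not the paper's full algorithm or its derandomization): given a fractional LP solution that admits commodity $i$ to extent $x_i \in [0,1]$, admit each commodity independently with probability proportional to $x_i$, scaled down by a constant, and route an admitted commodity's LP flow scaled up to satisfy its full demand while keeping it splittable; the scaling constant trades admitted throughput against the logarithmic capacity violation.

```python
import random

def round_admissions(fractional_x, scale=0.25, seed=0):
    """Simplified randomized-rounding step: fractional_x[i] in [0, 1] is the
    extent to which the LP admits commodity i. Each commodity is admitted
    independently with probability scale * x_i; an admitted commodity keeps a
    splittable flow (its LP flow scaled up by 1 / x_i to meet the full demand).
    The constant `scale` trades admitted throughput against capacity violation."""
    rng = random.Random(seed)
    return [i for i, x in enumerate(fractional_x)
            if x > 0 and rng.random() < scale * x]

# Made-up fractional admission values for six commodities.
x = [1.0, 0.8, 0.6, 0.4, 0.3, 0.1]
print("admitted commodities:", round_admissions(x))
```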

The Multi-valued Action Reasoning System (MARS) is an automated value-based ethical decision-making model for artificial agents (AI). Given a set of available actions and an underlying moral paradigm, by employing MARS one can identify the ethically preferred action. It can be used to implement and model different ethical theories, different moral paradigms, as well as combinations of such, in the context of automated practical reasoning and normative decision analysis. It can also be used to model moral dilemmas and discover the moral paradigms that result in the desired outcomes therein. In this paper, we give a condensed description of MARS, explain its uses, and situate it within the existing literature.

The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, due to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments. In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic. In this paper, we provide an introduction to the topic of uncertainty in machine learning as well as an overview of attempts made so far at handling uncertainty in general and formalizing this distinction in particular.
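
One common way to formalize the distinction (one of several approaches such an overview covers, used here purely as a generic illustration rather than the paper's own formalization) is the entropy decomposition of an ensemble's predictive distribution: the entropy of the averaged prediction measures total uncertainty, the average entropy of the individual predictions measures the aleatoric part, and their difference, a mutual information, measures the epistemic part.

```python
import numpy as np

def uncertainty_decomposition(ensemble_probs):
    """Entropy decomposition for an ensemble's class probabilities, with
    shape (n_models, n_classes): total = entropy of the mean prediction,
    aleatoric = mean entropy of the individual predictions, and
    epistemic = total - aleatoric (the mutual information)."""
    probs = np.asarray(ensemble_probs, dtype=float)
    eps = 1e-12
    mean = probs.mean(axis=0)
    total = -(mean * np.log(mean + eps)).sum()
    aleatoric = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    return total, aleatoric, total - aleatoric

# Two members that confidently disagree: mostly epistemic uncertainty.
print(uncertainty_decomposition([[0.99, 0.01], [0.01, 0.99]]))
# Two members that agree on a 50/50 prediction: mostly aleatoric uncertainty.
print(uncertainty_decomposition([[0.5, 0.5], [0.5, 0.5]]))
```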

We consider the task of learning the parameters of a {\em single} component of a mixture model, for the case when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than those of solving the overall original problem, where one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and improved computational complexity than existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.

In this paper we introduce a covariance framework for the analysis of EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. We perform a simulation study to assess the performance of the estimator and to investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. We also illustrate our method on real EEG and MEG data sets. The proposed covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as a source of noise and realistic noise covariance estimates are needed for accurate dipole localization, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, such as in combined EEG/fMRI experiments in which the correlation between EEG and fMRI signals is investigated.
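
To make the structure of the model concrete, the sketch below (with made-up dimensions and random factors, not the paper's estimation algorithm) builds a covariance matrix as a Kronecker product of trial, time and space factors and compares its number of free parameters with that of an unstructured covariance of the same size, which is the main practical appeal of the Kronecker structure.

```python
import numpy as np

# Toy dimensions (made up): 4 sensors (space), 3 time samples, 2 trials.
n_space, n_time, n_trials = 4, 3, 2
rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive-definite matrix, for illustration only."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

sigma_space = random_spd(n_space)
sigma_time = random_spd(n_time)
sigma_trials = random_spd(n_trials)

# Covariance of the vectorized data as a Kronecker product of the trial,
# time and space factors (the ordering is a convention; here trials vary
# slowest and space fastest).
sigma = np.kron(sigma_trials, np.kron(sigma_time, sigma_space))
print("full covariance shape:", sigma.shape)            # (24, 24)

n_full = sigma.shape[0] * (sigma.shape[0] + 1) // 2
n_factored = sum(n * (n + 1) // 2 for n in (n_space, n_time, n_trials))
print("free parameters: unstructured =", n_full, "vs Kronecker factors =", n_factored)
```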
