Distributed certification, whether it be proof-labeling schemes, locally checkable proofs, etc., deals with the issue of certifying the legality of a distributed system with respect to a given boolean predicate. A certificate is assigned to each process in the system by a non-trustable oracle, and the processes are in charge of verifying these certificates, so that two properties are satisfied: completeness, i.e., for every legal instance, there is a certificate assignment leading all processes to accept, and soundness, i.e., for every illegal instance, and for every certificate assignment, at least one process rejects. The verification of the certificates must be fast, and the certificates themselves must be small. A large number of results have been produced in this framework, each aiming at designing a distributed certification mechanism for a specific boolean predicate. This paper presents a "meta-theorem" applying to many boolean predicates at once. Specifically, we prove that, for every boolean predicate on graphs definable in the monadic second-order (MSO) logic of graphs, there exists a distributed certification mechanism using certificates on $O(\log^2 n)$ bits in $n$-node graphs of bounded treewidth, with a verification protocol involving a single round of communication between neighbors.
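
As background for the completeness/soundness requirements above, here is a minimal sketch of a one-round certification scheme for a toy predicate (bipartiteness), simulated centrally in Python. The predicate, the one-bit certificates, and the helper names are illustrative choices only, not the MSO construction of the paper.

# A one-round proof-labeling-style scheme for the toy predicate "the graph is
# bipartite": the certificate of a node is a single bit (its side), and each
# node accepts iff all its neighbors carry the other bit.
from itertools import product

def verify(adj, cert):
    """One communication round: every node sees its own certificate and its
    neighbors' certificates, then outputs accept (True) or reject (False).
    The global verdict is the conjunction of the local outputs."""
    return all(cert[v] != cert[u] for v in adj for u in adj[v])

# Completeness: on a legal (bipartite) instance there is a certificate
# assignment -- a proper 2-coloring -- that makes every node accept.
path = {0: [1], 1: [0, 2], 2: [1]}                # a path, which is bipartite
good_cert = {0: 0, 1: 1, 2: 0}
assert verify(path, good_cert)

# Soundness: on an illegal instance (an odd cycle is not bipartite) every
# possible certificate assignment makes at least one node reject.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
assert all(not verify(triangle, dict(zip(triangle, bits)))
           for bits in product([0, 1], repeat=3))
print("completeness and soundness hold on these toy instances")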

Related content

Processing is the name of an open-source programming language and its accompanying integrated development environment (IDE). Processing is used in the electronic arts and visual design communities to teach the fundamentals of programming, and it has been applied in a large number of new-media and interactive art works.

In the context of estimating stochastically ordered distribution functions, the pool-adjacent-violators algorithm (PAVA) can be modified such that the computation times are reduced substantially. This is achieved by studying the dependence of antitonic weighted least squares fits on the response vector to be approximated.
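
For readers unfamiliar with the algorithm, here is a minimal sketch of the standard (unmodified) pool-adjacent-violators algorithm for weighted monotone least squares; the paper's speed-up for stochastically ordered distribution functions is not reproduced here, and the example input is arbitrary.

import numpy as np

def pava(y, w=None):
    """Weighted isotonic (non-decreasing) least-squares fit via the
    pool-adjacent-violators algorithm.  An antitonic fit, as used above,
    is obtained by applying the same routine to -y and negating the result."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    # Each block stores (weighted mean, total weight, number of points pooled).
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Pool adjacent blocks while they violate monotonicity.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            blocks.append([(w1 * m1 + w2 * m2) / (w1 + w2), w1 + w2, n1 + n2])
    return np.concatenate([np.full(n, m) for m, _, n in blocks])

print(pava([3.0, 1.0, 2.0, 5.0, 4.0]))   # -> [2. 2. 2. 4.5 4.5]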

Platform trials evaluate multiple experimental treatments under a single master protocol, where new treatment arms are added to the trial over time. Given the multiple treatment comparisons, there is the potential for inflation of the overall type I error rate, which is complicated by the fact that the hypotheses are tested at different times and are not all necessarily pre-specified. Online error control methodology provides a possible solution to the problem of multiplicity for platform trials where a relatively large number of hypotheses are expected to be tested over time. In the online testing framework, hypotheses are tested in a sequential manner, where at each time-step an analyst decides whether to reject the current null hypothesis without knowledge of future tests but based solely on past decisions. Methodology has recently been developed for online control of the false discovery rate as well as the familywise error rate (FWER). In this paper, we describe how to apply online error control to the platform trial setting, present extensive simulation results, and give some recommendations for the use of this new methodology in practice. We show that the algorithms for online error rate control can have a substantially lower FWER than uncorrected testing, while still achieving noticeable gains in power when compared with the use of a Bonferroni procedure. We also illustrate how online error control would have impacted a currently ongoing platform trial.
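
As a concrete, if simplistic, member of this family, the sketch below implements plain online alpha-spending ("online Bonferroni"): each hypothesis is tested at a level drawn from a pre-specified spending sequence, so every decision depends only on the index of the test and the current p-value. The spending sequence and significance level are illustrative choices, not the procedures recommended in the paper.

def online_bonferroni(p_values, alpha=0.05):
    """Reject H_i iff p_i <= alpha_i with alpha_i = alpha / 2**i, so that
    sum_i alpha_i <= alpha and the FWER is controlled at level alpha by a
    union bound.  Each decision uses only the index i and the current
    p-value, so this is a valid online rule."""
    decisions = []
    for i, p in enumerate(p_values, start=1):
        decisions.append(p <= alpha / 2 ** i)
    return decisions

# p-values in the order the treatment arms read out (illustrative numbers).
print(online_bonferroni([0.001, 0.20, 0.004, 0.03]))  # [True, False, True, False]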

Ranking aggregation is commonly adopted in cooperative decision-making to assist in combining multiple rankings into a single representative. To protect the actual ranking of each individual, some privacy-preserving strategies, such as differential privacy, are often used. This, however, does not consider the scenario where the curator, who collects all rankings from individuals, is untrustworthy. This paper proposes a mechanism to address the above situation using the distributed differential privacy framework. The proposed mechanism collects locally differentially private rankings from individuals, then randomly permutes pairwise rankings using a shuffle model to further amplify the privacy protection. The final representative is produced by hierarchical rank aggregation. The mechanism is theoretically analysed and experimentally compared against existing methods, and demonstrates competitive results in both output accuracy and privacy protection.
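
The following rough sketch illustrates the shape of such a pipeline (local randomized response on pairwise preferences, a shuffle step, and a simple score-based aggregation). The flip probability, the Borda-style scoring, and all function names are assumptions for the example; the paper's hierarchical aggregation is not reproduced here.

# Local DP reports + shuffle + debiased pairwise aggregation (illustrative only).
import math, random
from itertools import combinations

def local_report(ranking, eps):
    """Encode a ranking as pairwise bits (True if a precedes b) and flip each
    bit with probability 1/(1+e^eps), i.e. eps-locally differentially private
    randomized response."""
    pos = {item: r for r, item in enumerate(ranking)}
    flip = 1.0 / (1.0 + math.exp(eps))
    return {(a, b): (pos[a] < pos[b]) ^ (random.random() < flip)
            for a, b in combinations(sorted(pos), 2)}

def aggregate(reports, eps, items):
    """Shuffle the reports (amplifies privacy, does not change the counts),
    debias the randomized-response counts and rank items by how often they win."""
    random.shuffle(reports)
    flip = 1.0 / (1.0 + math.exp(eps))
    score = {x: 0.0 for x in items}
    for a, b in combinations(sorted(items), 2):
        raw = sum(r[(a, b)] for r in reports) / len(reports)
        p_hat = (raw - flip) / (1.0 - 2.0 * flip)   # debiased Pr[a precedes b]
        score[a] += p_hat
        score[b] += 1.0 - p_hat
    return sorted(items, key=score.get, reverse=True)

items = ["A", "B", "C"]
reports = [local_report(["A", "B", "C"], eps=1.0) for _ in range(2000)]
print(aggregate(reports, eps=1.0, items=items))   # most likely ['A', 'B', 'C']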

The main problem in the area of property testing is to understand which graph properties are \emph{testable}, which means that with constantly many queries to any input graph $G$, a tester can decide with good probability whether $G$ satisfies the property, or is far from satisfying the property. Testable properties are well understood in the dense model and in the bounded degree model, but little is known in sparse graph classes when graphs are allowed to have unbounded degree. This is the setting of the \emph{sparse model}. We prove that for any proper minor-closed class $\mathcal{G}$, any monotone property (i.e., any property that is closed under taking subgraphs) is testable for graphs from $\mathcal{G}$ in the sparse model. This extends a result of Czumaj and Sohler (FOCS'19), who proved it for monotone properties with finitely many obstructions. Our result implies for instance that for any integers $k$ and $t$, $k$-colorability of $K_t$-minor free graphs is testable in the sparse model. Elek recently proved that monotone properties of bounded degree graphs from minor-closed classes that are closed under disjoint union can be verified by an approximate proof labeling scheme in constant time. We show that the assumption of bounded degree can also be omitted in his result.

A triangle in a hypergraph $\mathcal{H}$ is a set of three distinct edges $e, f, g\in\mathcal{H}$ and three distinct vertices $u, v, w\in V(\mathcal{H})$ such that $\{u, v\}\subseteq e$, $\{v, w\}\subseteq f$, $\{w, u\}\subseteq g$ and $\{u, v, w\}\cap e\cap f\cap g=\emptyset$. Johansson proved in 1996 that $\chi(G)=\mathcal{O}(\Delta/\log\Delta)$ for any triangle-free graph $G$ with maximum degree $\Delta$. Cooper and Mubayi later generalized Johansson's theorem to all rank $3$ hypergraphs. In this paper we provide a common generalization of both these results to all hypergraphs, showing that if $\mathcal{H}$ is a rank $k$, triangle-free hypergraph, then the list chromatic number \[ \chi_{\ell}(\mathcal{H})\leq \mathcal{O}\left(\max_{2\leq \ell \leq k} \left\{\left( \frac{\Delta_{\ell}}{\log \Delta_{\ell}} \right)^{\frac{1}{\ell-1}} \right\}\right), \] where $\Delta_{\ell}$ is the maximum $\ell$-degree of $\mathcal{H}$. The result is sharp apart from the constant. Moreover, our result implies, generalizes, and improves several earlier results on the chromatic number and also the independence number of hypergraphs, while its proof is based on a different approach than that of prior works on hypergraphs (and therefore provides alternative proofs for them). In particular, as an application, we establish a bound on the chromatic number of sparse hypergraphs in which each vertex is contained in few triangles, and thus extend results of Alon, Krivelevich and Sudakov, and Cooper and Mubayi from hypergraphs of rank 2 and 3, respectively, to all hypergraphs.
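
To make the specializations explicit: for $k=2$ the maximum runs over $\ell=2$ only and $\Delta_2=\Delta$, so the displayed bound reads \[ \chi_{\ell}(\mathcal{H})\leq \mathcal{O}\left(\frac{\Delta}{\log\Delta}\right), \] which is Johansson's bound for triangle-free graphs, while for $k=3$ it reads $\mathcal{O}\left(\max\left\{\frac{\Delta_{2}}{\log \Delta_{2}}, \left(\frac{\Delta_{3}}{\log \Delta_{3}}\right)^{1/2}\right\}\right)$, the rank-$3$ specialization corresponding to the Cooper-Mubayi setting.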

We consider information-theoretic bounds on expected generalization error for statistical learning problems in a networked setting. In this setting, there are $K$ nodes, each with its own independent dataset, and the models from each node have to be aggregated into a final centralized model. We consider both simple averaging of the models as well as more complicated multi-round algorithms. We give upper bounds on the expected generalization error for a variety of problems, such as those with Bregman divergence or Lipschitz continuous losses, that demonstrate an improved dependence of $1/K$ on the number of nodes. These "per node" bounds are in terms of the mutual information between the training dataset and the trained weights at each node, and are therefore useful in describing the generalization properties inherent to having communication or privacy constraints at each node.
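
For context (and not as the paper's exact statement), the classical single-dataset information-theoretic bound of Xu and Raginsky for a $\sigma$-sub-Gaussian loss and $n$ training samples is \[ \left|\mathbb{E}[\mathrm{gen}(S,W)]\right| \leq \sqrt{\frac{2\sigma^2\, I(S;W)}{n}}. \] The bounds described above have a similar shape but are stated per node, with $I(S;W)$ replaced by the mutual information $I(S_k;W_k)$ between each node's dataset and its locally trained weights, and with the additional $1/K$ improvement coming from the aggregation step.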

We study distributed binary hypothesis testing with a single sensor and two remote decision centers that are also equipped with local sensors. The communication between the sensor and the two decision centers takes place over three links: a shared link to both centers and an individual link to each of the two centers. All communication links are subject to expected rate constraints. This paper characterizes the optimal exponents region of the type-II error for given type-I error thresholds at the two decision centers and further simplifies the expressions in the special case of having only the single shared link. The exponents region illustrates a gain under expected rate constraints compared to equivalent maximum rate constraints. Moreover, it exhibits a tradeoff between the exponents achieved at the two centers.

Presently, the practice of distributed computing is such that problems exist in a mathematical realm different from their solutions: a problem is presented as a set of requirements on possible process or system behaviors, and its solution is presented as algorithmic pseudocode satisfying the requirements. Here, we present a novel mathematical realm, termed \emph{multiagent transition systems with faults}, that aims to accommodate both distributed computing problems and their solutions. A problem is presented as a specification -- a multiagent transition system -- and a solution as an implementation of the specification by another, lower-level multiagent transition system, which may be proven to be resilient to a given set of faults. This duality of roles of a multiagent transition system can be exploited all the way from a high-level distributed computing problem description down to an agreed-upon base layer, say TCP/IP, resulting in a mathematical protocol stack where each protocol in the stack both implements the protocol above it and serves as a specification for the protocol below it. Correct implementations are compositional and thus also provide an implementation of the protocol stack as a whole. The framework also offers formal -- yet natural and expressive -- notions of faults, fault-resilient implementations, and their composition.
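
The toy sketch below gives one concrete reading of the specification/implementation duality: a transition system is a set of states with successor sets, and a lower-level system implements a higher-level one if, under an abstraction map, every low-level transition is a high-level transition or a stutter. This is a generic refinement check under assumptions of our own, not the paper's exact definition, and the two-counter example is purely illustrative.

# Generic refinement check between two transition systems (illustrative only).
def implements(low, high, abstract):
    """low/high: dicts {state: set of successor states}; abstract: state map.
    Returns True if every low-level step maps to a high-level step or a stutter."""
    return all(
        abstract(t) in high.get(abstract(s), set()) or abstract(t) == abstract(s)
        for s, succs in low.items() for t in succs
    )

# Specification: a counter that goes 0 -> 1 -> 2.
spec = {0: {1}, 1: {2}, 2: set()}
# Implementation: the same counter refined into half-steps 0 -> 0.5 -> 1 -> ...
impl = {0.0: {0.5}, 0.5: {1.0}, 1.0: {1.5}, 1.5: {2.0}, 2.0: set()}
print(implements(impl, spec, abstract=lambda s: int(s)))   # True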

Attributed network embedding has received much interest from the research community, as most networks come with some content in each node, also known as node attributes. Existing attributed network approaches work well when the network is consistent in structure and attributes, and nodes behave as expected. But real-world networks often have anomalous nodes. Typically these outliers, being relatively unexplainable, affect the embeddings of other nodes in the network. Thus all the downstream network mining tasks fail miserably in the presence of such outliers. Hence an integrated approach to detect anomalies and reduce their overall effect on the network embedding is required. Towards this end, we propose an unsupervised outlier aware network embedding algorithm (ONE) for attributed networks, which minimizes the effect of the outlier nodes and hence generates robust network embeddings. We align and jointly optimize the loss functions coming from the structure and the attributes of the network. To the best of our knowledge, this is the first generic network embedding approach which incorporates the effect of outliers for an attributed network without any supervision. We experimented on publicly available real networks and manually planted different types of outliers to evaluate the performance of the proposed algorithm. Results demonstrate the superiority of our approach in detecting network outliers compared to the state-of-the-art approaches. We also consider different downstream machine learning applications on networks to show the efficiency of ONE as a generic network embedding technique. The source code is made available at //github.com/sambaranban/ONE.
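
The sketch below illustrates the general idea of down-weighting outliers while jointly fitting structure and attributes: nodes with large reconstruction residuals in either view receive small weights, so they contribute little to the embedding objective. This is an illustrative alternating least-squares toy, not the paper's exact ONE formulation; the synthetic A, X, the embedding dimension, and the reweighting rule are assumptions for the example.

import numpy as np

rng = np.random.default_rng(0)
n, d, f = 100, 8, 20
A = (rng.random((n, n)) < 0.05).astype(float)      # structure (adjacency)
X = rng.random((n, f))                              # node attributes

G = rng.normal(size=(n, d))                         # shared node embeddings
Hs = rng.normal(size=(d, n))                        # structure decoder
Ha = rng.normal(size=(d, f))                        # attribute decoder

for _ in range(20):
    rs = np.linalg.norm(A - G @ Hs, axis=1) ** 2    # per-node structure residual
    ra = np.linalg.norm(X - G @ Ha, axis=1) ** 2    # per-node attribute residual
    w = 1.0 / (1e-8 + rs / rs.sum() + ra / ra.sum())  # outliers get small weight
    w = w / w.sum()
    W = np.diag(w)
    # Weighted least-squares updates of the decoders, then of the embeddings.
    Hs = np.linalg.lstsq(np.sqrt(W) @ G, np.sqrt(W) @ A, rcond=None)[0]
    Ha = np.linalg.lstsq(np.sqrt(W) @ G, np.sqrt(W) @ X, rcond=None)[0]
    M = np.concatenate([A, X], axis=1)              # joint target
    H = np.concatenate([Hs, Ha], axis=1)
    G = np.linalg.lstsq(H.T, M.T, rcond=None)[0].T  # solves G @ H ~= M

outlier_scores = rs / rs.sum() + ra / ra.sum()       # larger = more anomalous
print(np.argsort(-outlier_scores)[:5])               # five most anomalous nodes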

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(x) \triangleq \sum_{i=1}^{m} f_i(x)$ is: strongly convex and smooth, strongly convex, smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions to the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
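
A small numerical sketch of why accelerated gradient ascent on the dual of a consensus-constrained problem only needs neighbor communication: every dual-gradient evaluation reduces to a multiplication by the graph Laplacian. The quadratic local objectives $f_i(x) = (x - a_i)^2/2$, the 5-node ring, and the simple momentum schedule are illustrative choices, not the paper's general setup.

import numpy as np

n = 5
a = np.array([1.0, 4.0, 2.0, 8.0, 5.0])        # local data; optimum is mean(a)
# Laplacian of a ring on 5 nodes: one multiplication by L = one exchange
# of values with neighbors.
L = 2 * np.eye(n)
for i in range(n):
    L[i, (i - 1) % n] -= 1
    L[i, (i + 1) % n] -= 1

eta = 1.0 / np.linalg.eigvalsh(L).max() ** 2    # step size: 1 / lambda_max(L)^2
lam = lam_prev = np.zeros(n)                    # dual variables, one per node
for k in range(1, 400):
    y = lam + (k - 1) / (k + 2) * (lam - lam_prev)   # Nesterov momentum
    x = a - L @ y                               # local primal minimizers
    lam_prev, lam = lam, y + eta * (L @ x)      # dual ascent step (uses L only)

print(x)                 # all entries close to mean(a) = 4.0
print(a.mean())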
