
Given a graph $G=(V,E)$ with arboricity $\alpha$, we study the problem of decomposing the edges of $G$ into $(1+\epsilon)\alpha$ disjoint forests in the distributed LOCAL model. Barenboim and Elkin [PODC `08] gave a LOCAL algorithm that computes a $(2+\epsilon)\alpha$-forest decomposition using $O(\frac{\log n}{\epsilon})$ rounds. Ghaffari and Su [SODA `17] made further progress by computing a $(1+\epsilon)\alpha$-forest decomposition in $O(\frac{\log^3 n}{\epsilon^4})$ rounds when $\epsilon \alpha = \Omega(\sqrt{\alpha \log n})$, i.e., the limit of their algorithm is an $(\alpha + \Omega(\sqrt{\alpha \log n}))$-forest decomposition. This algorithm, based on a combinatorial construction of Alon, McDiarmid \& Reed [Combinatorica `92], in fact provides a decomposition of the graph into \emph{star-forests}, i.e., each forest is a collection of stars. Our main result in this paper is to reduce the threshold of $\epsilon \alpha$ required for $(1+\epsilon)\alpha$-forest decomposition and star-forest decomposition. This answers the $10^{\text{th}}$ open question from Barenboim and Elkin's "Distributed Graph Algorithms" book. Moreover, it gives the first $(1+\epsilon)\alpha$-orientation algorithms with {\it linear dependence} on $\epsilon^{-1}$. At a high level, our results for forest decomposition are based on a combination of network decomposition, load balancing, and a new structural result on local augmenting sequences. Our result for star-forest decomposition uses a more careful probabilistic analysis of the construction of Alon, McDiarmid \& Reed; the bounds on star-arboricity here were not previously known, even non-constructively.
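For intuition, the following is a minimal sequential (non-distributed) Python sketch of the peeling idea behind the Barenboim-Elkin style $(2+\epsilon)\alpha$-forest decomposition, not the paper's LOCAL algorithm: repeatedly peel vertices of degree at most $(2+\epsilon)\alpha$, orient every edge away from its earlier-peeled endpoint, and give each vertex's out-edges distinct colors. Each color class then has out-degree at most one under an acyclic orientation, hence forms a forest. The `networkx` usage and tie-breaking by node ID are our illustrative choices.

```python
import networkx as nx

def forest_decomposition(G, alpha, eps=0.5):
    # Peeling always makes progress because a graph of arboricity
    # alpha has a vertex of degree < 2*alpha <= d (assumes alpha is
    # a valid arboricity upper bound for G).
    d = int((2 + eps) * alpha)
    H = G.copy()
    peel_round, r = {}, 0
    while H.number_of_nodes() > 0:
        low = [v for v in H if H.degree(v) <= d]
        for v in low:
            peel_round[v] = r
        H.remove_nodes_from(low)
        r += 1
    forests = [[] for _ in range(d)]
    next_color = {v: 0 for v in G}
    for u, v in G.edges():
        # orient each edge away from its earlier-peeled endpoint
        # (ties broken by node ID); each vertex keeps out-degree <= d
        tail, head = (u, v) if (peel_round[u], u) < (peel_round[v], v) else (v, u)
        forests[next_color[tail]].append((tail, head))
        next_color[tail] += 1
    return forests

print(forest_decomposition(nx.cycle_graph(8), alpha=2))
```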

Related content

Statistical wisdom suggests that very complex models, interpolating training data, will be poor at predicting unseen examples. Yet, this aphorism has recently been challenged by the identification of benign overfitting regimes, especially studied in the case of parametric models: generalization capabilities may be preserved despite high model complexity. While it is widely known that fully-grown decision trees interpolate and, in turn, have poor predictive performance, the same behavior has yet to be analyzed for Random Forests (RF). In this paper, we study the trade-off between interpolation and consistency for several types of RF algorithms. Theoretically, we prove that interpolation regimes and consistency cannot be achieved simultaneously for several non-adaptive RF. Since adaptivity seems to be the cornerstone for bringing together interpolation and consistency, we study interpolating Median RF, which we prove to be consistent in the interpolating regime. This is the first result reconciling interpolation and consistency for RF, highlighting that the averaging effect introduced by feature randomization is a key mechanism, sufficient to ensure consistency in the interpolation regime and beyond. Numerical experiments show that Breiman's RF are consistent while exactly interpolating when no bootstrap step is involved. We theoretically control the size of the interpolation area, which converges to zero fast enough, giving a necessary condition for exact interpolation and consistency to occur in conjunction.
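The empirical claim about Breiman's RF is easy to probe; here is a small hedged sketch with scikit-learn (the synthetic data and hyperparameters are ours, not the paper's experimental setup). With `bootstrap=False`, every fully grown tree sees every sample and fits it exactly, so the forest interpolates:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 5))
y = np.sin(4 * X[:, 0]) + 0.3 * rng.standard_normal(500)

# No bootstrap: each fully grown tree (pure leaves, distinct inputs)
# reproduces the training targets exactly, and so does their average.
rf = RandomForestRegressor(n_estimators=100, bootstrap=False, random_state=0)
rf.fit(X, y)
print("training MSE:", np.mean((rf.predict(X) - y) ** 2))  # ~0
```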

The majority of Americans fail to achieve recommended levels of physical activity, which leads to numerous preventable health problems such as diabetes, hypertension, and heart disease. This has generated substantial interest in monitoring human activity to gear interventions toward environmental features that may relate to higher physical activity. Wearable devices, such as wrist-worn sensors that monitor gross motor activity (actigraph units), continuously record the activity levels of a subject, producing massive amounts of high-resolution measurements. Analyzing actigraph data requires accounting for spatial and temporal information on the trajectories or paths traversed by subjects wearing such devices. Inferential objectives include estimating a subject's physical activity levels along a given trajectory; identifying trajectories that are more likely to produce higher levels of physical activity for a given subject; and predicting expected levels of physical activity along any proposed new trajectory for a given set of health attributes. Here, we devise a Bayesian hierarchical modeling framework for spatial-temporal actigraphy data to deliver fully model-based inference on trajectories while accounting for subject-level health attributes and spatial-temporal dependencies. We undertake a comprehensive analysis of an original dataset from the Physical Activity through Sustainable Transport Approaches in Los Angeles (PASTA-LA) study to ascertain spatial zones and trajectories exhibiting significantly higher levels of physical activity while accounting for various sources of heterogeneity.
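The abstract does not spell out the dependence structure; as a hedged illustration of the kind of spatial-temporal dependence such a model must encode, here is a generic separable exponential space-time covariance evaluated along a hypothetical trajectory (all parameter names and values are illustrative, not the paper's specification):

```python
import numpy as np

def sep_cov(coords, times, phi_s=1.0, phi_t=1.0, sigma2=1.0, tau2=0.1):
    """Separable exponential space-time covariance with a nugget:
    C[i, j] = sigma2 * exp(-phi_s*||s_i - s_j|| - phi_t*|t_i - t_j|)."""
    ds = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    dt = np.abs(times[:, None] - times[None, :])
    return sigma2 * np.exp(-phi_s * ds - phi_t * dt) + tau2 * np.eye(len(times))

rng = np.random.default_rng(1)
coords = np.cumsum(rng.normal(scale=0.01, size=(50, 2)), axis=0)  # 50 GPS fixes
times = np.arange(50.0)
K = sep_cov(coords, times)
activity = rng.multivariate_normal(np.zeros(50), K)  # latent activity draw
```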

We consider the differentially private estimation of multiple quantiles (MQ) of a distribution from a dataset, a key building block in modern data analysis. We apply the recent non-smoothed Inverse Sensitivity (IS) mechanism to this specific problem. We establish that the resulting method is closely related to the recently published ad hoc algorithm JointExp. In particular, they share the same computational complexity and a similar efficiency. We prove the statistical consistency of these two algorithms for continuous distributions. Furthermore, we demonstrate both theoretically and empirically that this method suffers from an important lack of performance in the case of peaked distributions, which can degrade to a potentially catastrophic extent in the presence of atoms. Its smoothed version (i.e., applying a max kernel to its output density) would solve this problem, but implementing it remains an open challenge. As a proxy, we propose a simple and numerically efficient method called Heuristically Smoothed JointExp (HSJointExp), which is endowed with performance guarantees for a broad class of distributions and achieves results that are orders of magnitude better on problematic datasets.
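As a point of reference, here is a minimal sketch of the classical exponential mechanism for a single quantile, the building block that JointExp and the IS mechanism generalize to multiple quantiles; the implementation details are ours, not the paper's. The floor on interval lengths hints at exactly the pathology described above: atoms produce zero-length intervals whose weight collapses, which smoothing is meant to repair.

```python
import numpy as np

def dp_quantile(x, q, eps, lo, hi, rng=None):
    # Exponential mechanism with utility -|rank error| (sensitivity 1):
    # sample an inter-point interval, then a uniform point inside it.
    rng = rng or np.random.default_rng()
    x = np.clip(np.sort(np.asarray(x, dtype=float)), lo, hi)
    z = np.concatenate(([lo], x, [hi]))            # interval endpoints
    n = len(x)
    lengths = np.maximum(np.diff(z), 1e-12)        # atoms -> near-zero length
    ranks = np.arange(n + 1)                       # data points below interval i
    logw = np.log(lengths) - (eps / 2) * np.abs(ranks - q * n)
    p = np.exp(logw - logw.max())
    p /= p.sum()
    i = rng.choice(n + 1, p=p)
    return rng.uniform(z[i], z[i + 1])

data = np.random.default_rng(0).normal(size=1000)
print(dp_quantile(data, q=0.5, eps=1.0, lo=-10, hi=10))
```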

The complexity of free games with two or more classical players was essentially settled by Aaronson, Impagliazzo, and Moshkovitz (CCC'14). There are two complexity classes that can be considered quantum analogues of classical free games: (1) AM*, the multiprover interactive proof class corresponding to free games with entangled players, and, somewhat less obviously, (2) BellQMA(2), the class of quantum Merlin-Arthur proof systems with two unentangled Merlins, whose proof states are separately measured by Arthur. In this work, we make significant progress towards a tight characterization of both of these classes. 1. We show a BellQMA(2) protocol for 3SAT on $n$ variables, where the total amount of communication is $\tilde{O}(\sqrt{n})$. This answers an open question of Chen and Drucker (2010) and also shows, conditional on ETH, that the algorithm of Brand\~{a}o, Christandl and Yard (STOC'11) is tight up to logarithmic factors. 2. We show that $\mathsf{AM}^*[n_{\text{provers}} = 2, q = O(1), a = \mathrm{poly}\log(n)] = \mathsf{RE}$, i.e., that free entangled games with constant-sized questions are as powerful as general entangled games. Our result is a significant improvement over the headline result of Ji et al. (2020), whose MIP* protocol for the halting problem has $\mathrm{poly}(n)$-sized questions and answers. 3. We obtain a zero-gap AM* protocol for a $\Pi_2$-complete language with constant-sized questions and almost logarithmically large answers, improving on the headline result of Mousavi, Nezhadi and Yuen (STOC'22). 4. Using a connection to the nonuniform complexity of the halting problem, we show that any MIP* protocol for RE requires $\Omega(\log n)$ bits of communication. It follows that our results in item 3 are optimal up to an $O(\log^* n)$ factor, and that the gapless compression theorems of MNY'22 are asymptotically optimal.

This paper investigates the Age of Incorrect Information (AoII) in a communication system whose channel suffers a random delay. We consider a slotted-time system where a transmitter observes a dynamic source and decides when to send updates to a remote receiver through the communication channel. The transmitter's decisions are governed by a threshold policy, under which a transmission is initiated only when the AoII exceeds the threshold. In this paper, we analyze and calculate the performance of the threshold policy in terms of the achieved AoII. Using a Markov chain to characterize the system evolution, the expected AoII can be obtained exactly by solving a system of linear equations whose size is finite and depends on the threshold. We also give closed-form expressions for the expected AoII under two particular thresholds. Finally, the calculated results show that constantly transmitting new updates is not always the best strategy.
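To make the finite linear system concrete, here is a hedged toy instance rather than the paper's model: a symmetric binary Markov source with flip probability `p` and a channel that delivers a sent update within the slot with probability `ps` (the paper's channel has a random, possibly multi-slot delay). The chain over AoII values is truncated at `N`, and the expected AoII is read off the stationary distribution:

```python
import numpy as np

def expected_aoii(tau, p=0.2, ps=0.7, N=400):
    P = np.zeros((N + 1, N + 1))
    P[0, 0], P[0, 1] = 1 - p, p          # synced: mismatch appears w.p. p
    for s in range(1, N + 1):
        up = min(s + 1, N)               # truncate AoII growth at N
        if s < tau:                      # below threshold: stay silent
            P[s, 0] += p                 # source flips back by itself
            P[s, up] += 1 - p
        else:                            # at/above threshold: transmit
            P[s, 0] += ps * (1 - p) + (1 - ps) * p
            P[s, 1] += ps * p            # delivered, but source flipped again
            P[s, up] += (1 - ps) * (1 - p)
    # stationary distribution: solve pi P = pi with sum(pi) = 1
    A = np.vstack([P.T - np.eye(N + 1), np.ones(N + 1)])
    b = np.zeros(N + 2); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return float(pi @ np.arange(N + 1))

for tau in (0, 1, 2, 5):
    print(tau, round(expected_aoii(tau), 3))
```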

Multiple Instance Learning (MIL) is a weakly supervised learning paradigm that is becoming increasingly popular because it requires less labeling effort than fully supervised methods. This is especially interesting for areas where the creation of large annotated datasets remains challenging, as in medicine. Although recent deep learning MIL approaches have obtained state-of-the-art results, they are fully deterministic and do not provide uncertainty estimates for the predictions. In this work, we introduce the Attention Gaussian Process (AGP) model, a novel probabilistic attention mechanism based on Gaussian Processes for deep MIL. AGP provides accurate bag-level predictions as well as instance-level explainability, and can be trained end-to-end. Moreover, its probabilistic nature guarantees robustness to overfitting on small datasets and provides uncertainty estimates for the predictions. The latter is especially important in medical applications, where decisions have a direct impact on the patient's health. The proposed model is validated experimentally as follows. First, its behavior is illustrated in two synthetic MIL experiments based on the well-known MNIST and CIFAR-10 datasets, respectively. Then, it is evaluated in three different real-world cancer detection experiments. AGP outperforms state-of-the-art MIL approaches, including deterministic deep learning ones. It shows strong performance even on a small dataset with fewer than 100 labels and generalizes better than competing methods on an external test set. Moreover, we show experimentally that predictive uncertainty correlates with the risk of wrong predictions, and is therefore a good indicator of reliability in practice. Our code is publicly available.
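AGP's probabilistic attention is not reproduced here; as context, the following PyTorch sketch shows only the deterministic attention-based MIL wiring (in the style of Ilse et al., 2018) that a GP-based attention mechanism would replace. Dimensions and layer sizes are hypothetical.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Deterministic attention MIL: score each instance, softmax over
    the bag, pool, classify.  AGP would replace the deterministic
    scores with a Gaussian-process posterior over attention."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))
        self.clf = nn.Linear(dim, 1)

    def forward(self, bag):                        # bag: (n_instances, dim)
        a = torch.softmax(self.score(bag), dim=0)  # instance attention
        z = (a * bag).sum(dim=0)                   # attention-pooled embedding
        return self.clf(z), a.squeeze(-1)          # bag logit, explanations

bag = torch.randn(50, 32)                          # one bag of 50 instances
logit, attn = AttentionMILPooling(32)(bag)
```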

Learning causal relationships between variables is a fundamental task in causal inference, and directed acyclic graphs (DAGs) are a popular choice for representing the causal relationships. As one can recover a causal graph only up to its Markov equivalence class from observations, interventions are often used for the recovery task. Interventions are costly in general, and it is important to design algorithms that minimize the number of interventions performed. In this work, we study the problem of identifying the smallest set of interventions required to learn the causal relationships between a subset of edges (target edges). Under the assumptions of faithfulness, causal sufficiency, and ideal interventions, we study this problem in two settings: when the underlying ground-truth causal graph is known (subset verification) and when it is unknown (subset search). For the subset verification problem, we provide an efficient algorithm to compute a minimum-sized intervention set; we further extend these results to bounded-size non-atomic interventions and node-dependent interventional costs. For the subset search problem, in the worst case, we show that no algorithm (even with adaptivity or randomization) can achieve an approximation ratio that is asymptotically better than the vertex cover of the target edges when compared with the subset verification number. This result is surprising, as there exists a logarithmic approximation algorithm for the search problem when we wish to recover the whole causal graph. To obtain our results, we prove several interesting structural properties of interventional causal graphs that we believe have applications beyond the subset verification/search problems studied here.
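The benchmark in the search lower bound, the vertex cover of the target edges, is easy to approximate. As a small illustration (not an intervention-design algorithm), the standard greedy maximal-matching heuristic gives a 2-approximation:

```python
def target_vertex_cover(target_edges):
    # Take both endpoints of each still-uncovered edge; the chosen
    # edges form a maximal matching, so the cover is a 2-approximation.
    cover = set()
    for u, v in target_edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

print(target_vertex_cover([(1, 2), (2, 3), (4, 5)]))  # {1, 2, 4, 5}
```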

Horizontal (automatic) and vertical (control) processes have been observed and reported for a long time in translation production. Schaeffer and Carl's Monitor Model integrates these two processes into one framework, assuming that priming mechanisms underlie horizontal/automatic processes, while vertical/monitoring processes implement consciously accessible control mechanisms. The Monitor Model has been criticized in various ways and several misconceptions have accumulated over the past years. In this chapter, I update the Monitor Model with additional evidence and argue that it is compatible with an enactivist approach to cognition. I address several misconceptions related to the Monitor Model.

Applying artificial intelligence techniques in medical imaging is one of the most promising areas in medicine. However, most of the recent success in this area relies heavily on large amounts of carefully annotated data, whereas annotating medical images is a costly process. In this paper, we propose a novel method, called FocalMix, which, to the best of our knowledge, is the first to leverage recent advances in semi-supervised learning (SSL) for 3D medical image detection. We conducted extensive experiments on two widely used datasets for lung nodule detection, LUNA16 and NLST. The results show that our proposed SSL method can achieve a substantial improvement of up to 17.3% over state-of-the-art supervised learning approaches when 400 unlabeled CT scans are used.
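FocalMix itself is not reproduced here; as a hedged illustration of one core SSL ingredient it builds on, here is generic MixUp augmentation with soft targets (array shapes, the Beta parameter, and the dominance trick are illustrative choices of ours):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=np.random.default_rng()):
    # Convex combination of two samples and their (soft) targets.
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1 - lam)       # keep the first sample dominant
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x_mix, y_mix = mixup(np.ones((8, 8)), 1.0, np.zeros((8, 8)), 0.0)
print(y_mix)  # soft label between 0 and 1
```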

Graph-based semi-supervised learning (SSL) is an important learning problem where the goal is to assign labels to initially unlabeled nodes in a graph. Graph Convolutional Networks (GCNs) have recently been shown to be effective for graph-based SSL problems. GCNs inherently assume the existence of pairwise relationships in the graph-structured data. However, in many real-world problems, relationships go beyond pairwise connections and hence are more complex. Hypergraphs provide a natural modeling tool to capture such complex relationships. In this work, we explore the use of GCNs for hypergraph-based SSL. In particular, we propose HyperGCN, an SSL method which uses a layer-wise propagation rule for convolutional neural networks operating directly on hypergraphs. To the best of our knowledge, this is the first principled adaptation of GCNs to hypergraphs. HyperGCN is able to encode both the hypergraph structure and hypernode features in an effective manner. Through detailed experimentation, we demonstrate HyperGCN's effectiveness at hypergraph-based SSL.
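As a rough sketch of HyperGCN's key step, the following reduces each hyperedge to the single edge between its two most discrepant hypernodes under the current features; the full method also adds mediator edges and learns layer weights, both omitted here:

```python
import numpy as np

def hypergcn_edges(X, hyperedges):
    # For each hyperedge e, keep only the pair (i, j) in e whose
    # feature vectors differ most; graph convolution then runs on
    # the resulting ordinary graph.
    edges = []
    for e in hyperedges:
        i, j = max(((i, j) for i in e for j in e if i < j),
                   key=lambda p: np.linalg.norm(X[p[0]] - X[p[1]]))
        edges.append((i, j))
    return edges

X = np.random.default_rng(0).normal(size=(6, 4))   # 6 hypernodes, 4-dim features
print(hypergcn_edges(X, [{0, 1, 2}, {2, 3, 4, 5}]))
```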
