
We study the approximability of an existing framework for clustering edge-colored hypergraphs, which is closely related to chromatic correlation clustering and is motivated by machine learning and data mining applications where the goal is to cluster a set of objects based on multiway interactions of different categories or types. We present improved approximation guarantees based on linear programming, and show they are tight by proving a matching integrality gap. Our results also include new approximation hardness results, a combinatorial 2-approximation whose runtime is linear in the hypergraph size, and several new connections to well-studied objectives such as vertex cover and hypergraph multiway cut.
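To make the framework concrete, here is a minimal sketch, under one plausible reading of the objective described above: every hyperedge carries a color, each node is assigned a single color, a hyperedge is "satisfied" only when all of its nodes receive that edge's color, and the goal is to minimize the number of unsatisfied edges. The data layout and names are illustrative, not the paper's.

```python
# Hedged sketch: count edge-color "mistakes" for a candidate node coloring,
# under one plausible reading of the edge-colored clustering objective above
# (a hyperedge is satisfied only if every one of its nodes gets its color).

def count_mistakes(hyperedges, node_color):
    """hyperedges: list of (frozenset_of_nodes, color) pairs.
    node_color: dict mapping node -> assigned color.
    Returns the number of hyperedges whose color is not shared by all members."""
    mistakes = 0
    for nodes, color in hyperedges:
        if any(node_color[v] != color for v in nodes):
            mistakes += 1
    return mistakes

# Toy usage: two red hyperedges and one blue hyperedge sharing node 2.
edges = [(frozenset({1, 2}), "red"),
         (frozenset({2, 3, 4}), "red"),
         (frozenset({2, 5}), "blue")]
assignment = {1: "red", 2: "red", 3: "red", 4: "red", 5: "blue"}
print(count_mistakes(edges, assignment))  # -> 1 (the blue edge loses node 2)
```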

Related Content

Let $G=(V,E)$ be an $n$-vertex connected graph of maximum degree $\Delta$. Given access to $V$ and an oracle that, given two vertices $u,v\in V$, returns the shortest path distance between $u$ and $v$, how many queries are needed to reconstruct $E$? We give a simple deterministic algorithm to reconstruct trees using $\Delta n\log_\Delta n+(\Delta+2)n$ distance queries and show that even randomised algorithms need at least $\frac1{100} \Delta n\log_\Delta n$ queries in expectation. The best previous lower bound was an information-theoretic bound of $\Omega(n\log n/\log \log n)$. Our lower bound also extends to related query models, including distance queries for phylogenetic trees, membership queries for learning partitions, and path queries in directed trees. We extend our deterministic algorithm to reconstruct graphs without induced cycles of length at least $k$ using $O_{\Delta,k}(n\log n)$ queries, which covers various graph classes of interest such as chordal graphs, permutation graphs and AT-free graphs. Since the previously best known randomised algorithm for chordal graphs uses $O_{\Delta}(n\log^2 n)$ queries in expectation, we both remove the randomness and obtain the optimal dependence on $n$ for chordal graphs and various other graph classes. Finally, we build on an algorithm of Kannan, Mathieu, and Zhou [ICALP, 2015] to give a randomised algorithm for reconstructing graphs of treelength $k$ using $O_{\Delta,k}(n\log^2n)$ queries in expectation.
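To make the query model concrete, here is a naive baseline sketch for trees only; it is not the paper's algorithm and uses far more queries (up to $\Theta(n^2)$) than the $\Delta n\log_\Delta n$ bound above. The oracle is assumed to be available as a callable `dist`.

```python
# Hedged sketch: naive tree reconstruction from distance queries.
# Root the tree anywhere, compute every vertex's depth with n-1 queries,
# then find each vertex's parent by probing the previous level.

def reconstruct_tree(vertices, dist):
    """vertices: list of vertex ids; dist(u, v): shortest-path distance oracle.
    Returns the edge set of the tree as a set of frozensets."""
    root = vertices[0]
    depth = {v: dist(root, v) for v in vertices}      # n - 1 queries
    edges = set()
    for v in vertices:
        if v == root:
            continue
        # The parent of v is the unique vertex one level closer to the root
        # that is adjacent to v (distance exactly 1).
        for u in vertices:
            if depth[u] == depth[v] - 1 and dist(u, v) == 1:
                edges.add(frozenset((u, v)))
                break
    return edges

# Toy usage: the path 0 - 1 - 2 - 3, with the oracle computed from known distances.
true_dist = lambda u, v: abs(u - v)
print(reconstruct_tree([0, 1, 2, 3], true_dist))
```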

Inverse protein folding is challenging due to its inherent one-to-many mapping characteristic, where numerous possible amino acid sequences can fold into a single, identical protein backbone. This task involves not only identifying viable sequences but also representing the sheer diversity of potential solutions. However, existing discriminative models, such as transformer-based auto-regressive models, struggle to encapsulate the diverse range of plausible solutions. In contrast, diffusion probabilistic models, an emerging class of generative approaches, offer the potential to generate a diverse set of sequence candidates for determined protein backbones. We propose a novel graph denoising diffusion model for inverse protein folding, where a given protein backbone guides the diffusion process on the corresponding amino acid residue types. The model infers the joint distribution of amino acids conditioned on the nodes' physicochemical properties and local environment. Moreover, we utilize amino acid replacement matrices for the forward diffusion process, encoding biologically meaningful prior knowledge about amino acids from their spatial and sequential neighbors as well as themselves, which reduces the sampling space of the generative process. Our model achieves state-of-the-art performance over a set of popular baseline methods in sequence recovery and exhibits great potential in generating diverse protein sequences for a determined protein backbone structure.
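The role of a replacement matrix in the forward process can be illustrated with a generic discrete-diffusion step. The sketch below is not the paper's formulation: the transition matrix `Q` is a made-up toy over four "residue types", whereas the paper derives its matrices from biological replacement statistics (e.g. BLOSUM-like data) over the 20 amino acids.

```python
# Hedged sketch of a discrete-diffusion forward step driven by a
# substitution-style transition matrix, showing how a replacement matrix
# can act as a prior that biases corruption toward similar residue types.
import numpy as np

def make_transition(stay_prob, sim):
    """Row-stochastic one-step matrix: keep the current type with prob stay_prob,
    otherwise jump to another type proportionally to a similarity score."""
    k = sim.shape[0]
    Q = np.array(sim, dtype=float)
    np.fill_diagonal(Q, 0.0)
    Q = Q / Q.sum(axis=1, keepdims=True) * (1.0 - stay_prob)
    Q += np.eye(k) * stay_prob
    return Q

def forward_t_steps(x0_onehot, Q, t):
    """Marginal q(x_t | x_0) = x_0 Q^t for a single residue."""
    return x0_onehot @ np.linalg.matrix_power(Q, t)

sim = np.ones((4, 4))                      # toy similarity: uniform off-diagonal
Q = make_transition(stay_prob=0.9, sim=sim)
x0 = np.eye(4)[2]                          # a residue of type 2, one-hot encoded
print(forward_t_steps(x0, Q, t=10))        # the distribution drifts toward uniform
```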

In this paper, we study the maximum clique problem on hyperbolic random graphs. A hyperbolic random graph is a mathematical model for analyzing scale-free networks, since it effectively explains the power-law degree distribution of scale-free networks. We propose a simple algorithm for finding a maximum clique in a hyperbolic random graph. We first analyze the running time of our algorithm theoretically. We can compute a maximum clique on a hyperbolic random graph $G$ in $O(m + n^{4.5(1-\alpha)})$ expected time if a geometric representation is given, or in $O(m + n^{6(1-\alpha)})$ expected time if a geometric representation is not given, where $n$ and $m$ denote the numbers of vertices and edges of $G$, respectively, and $\alpha$ denotes a parameter controlling the power-law exponent of the degree distribution of $G$. We also implemented and evaluated our algorithm empirically; it outperforms the previous algorithm [BFK18] both practically and theoretically. Beyond hyperbolic random graphs, we also experimented on real-world networks. For most instances, we efficiently find large cliques close to the optimal solutions.

DBSCAN is a well-known density-based clustering algorithm to discover arbitrary shape clusters. While conceptually simple in serial, the algorithm is challenging to efficiently parallelize on manycore GPU architectures. Common pitfalls, such as asynchronous range query calls, result in high thread execution divergence in many implementations. In this paper, we propose a new framework for GPU-accelerated DBSCAN, and describe two tree-based algorithms within that framework. Both algorithms fuse the search for neighbors with updating cluster information, but differ in their treatment of dense regions of the data. We show that the time taken to compute clusters is at most twice that of determining the neighbors. We compare the proposed algorithms with existing CPU and GPU implementations, and demonstrate their competitiveness and performance using a fast traversal structure (bounding volume hierarchy) for low dimensional data. We also show that the memory usage can be reduced by processing object neighbors dynamically without storing them.
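The "fuse neighbor search with cluster updates" idea can be illustrated serially with union-find: as soon as core points are identified, density-connected core points are merged rather than recording cluster membership in a separate pass. This is only a plain CPU sketch of the concept, not the GPU framework, and it still materializes neighbor lists for simplicity.

```python
# Hedged serial sketch: DBSCAN clusters as connected components of core points
# under eps-reachability, built with union-find; border points adopt a core
# neighbor's cluster, everything else is noise (-1).
import numpy as np

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def dbscan_union_find(points, eps, min_pts):
    n = len(points)
    uf = UnionFind(n)
    neighborhoods, core = [], np.zeros(n, dtype=bool)
    for i in range(n):
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = np.flatnonzero(d <= eps)
        neighborhoods.append(nbrs)
        core[i] = len(nbrs) >= min_pts
    for i in range(n):                      # merge core points as they are seen
        if core[i]:
            for j in neighborhoods[i]:
                if core[j]:
                    uf.union(i, int(j))
    labels = []
    for i in range(n):
        if core[i]:
            labels.append(uf.find(i))
        else:
            cores = [j for j in neighborhoods[i] if core[j]]
            labels.append(uf.find(int(cores[0])) if cores else -1)
    return labels

pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 5.0]])
print(dbscan_union_find(pts, eps=0.3, min_pts=2))   # two clusters
```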

For many decades, advances in static verification have focused on linear integer arithmetic (LIA) programs. Many real-world programs are, however, written with non-linear integer arithmetic (NLA) expressions, such as programs that model physical events, control systems, or nonlinear activation functions in neural networks. While there are some approaches to reasoning about such NLA programs, many verification tools still fall short when trying to analyze them. To expand the scope of existing tools, we introduce a new method of converting programs with NLA expressions into semantically equivalent LIA programs via a technique we call dual rewriting. Dual rewriting discovers a linear replacement for an NLA Boolean expression (e.g. as found in conditional branching), simultaneously exploring both the positive and negative side of the condition, and using a combination of static validation and dynamic generalization of counterexamples. While perhaps surprising at first, this is often possible because the truth value of a Boolean NLA expression can be characterized in terms of a Boolean combination of linearly-described regions/intervals where the expression is true and those where it is false. The upshot is that rewriting NLA expressions to LIA expressions beforehand enables off-the-shelf LIA tools to be applied to the wider class of NLA programs. We built a new tool, DrNLA, and show that it can discover LIA replacements for a variety of NLA programs. We then applied our work to branching-time verification of NLA programs, creating the first set of such benchmarks (92 in total) and showing that DrNLA's rewriting enables tools such as FuncTion and T2 to verify CTL properties of 42 programs that previously could not be verified. We also show a potential use of DrNLA in assisting Frama-C with program slicing, and report that execution speed is not impacted much by rewriting.
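A toy instance of the kind of equivalence dual rewriting looks for: the nonlinear guard $x\cdot y > 0$ is exactly characterized by a Boolean combination of linear regions. The brute-force check below only illustrates the idea over a small integer box; it is not DrNLA's procedure, which validates candidates statically and generalizes counterexamples dynamically.

```python
# Hedged toy illustration of an NLA-to-LIA equivalence of the kind dual
# rewriting discovers: a nonlinear Boolean guard replaced by a Boolean
# combination of linear constraints that agrees with it everywhere.

def nla_guard(x, y):
    return x * y > 0                                    # nonlinear condition

def lia_replacement(x, y):
    return (x > 0 and y > 0) or (x < 0 and y < 0)       # linear characterization

assert all(nla_guard(x, y) == lia_replacement(x, y)
           for x in range(-50, 51) for y in range(-50, 51))
print("guards agree on the sampled box")
```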

We consider the problem of constructing distribution-free prediction sets with finite-sample conditional guarantees. Prior work has shown that it is impossible to provide exact conditional coverage universally in finite samples. Thus, most popular methods only provide marginal coverage over the covariates. This paper bridges this gap by defining a spectrum of problems that interpolate between marginal and conditional validity. We motivate these problems by reformulating conditional coverage as coverage over a class of covariate shifts. When the target class of shifts is finite dimensional, we show how to simultaneously obtain exact finite sample coverage over all possible shifts. For example, given a collection of protected subgroups, our algorithm outputs intervals with exact coverage over each group. For more flexible, infinite dimensional classes where exact coverage is impossible, we provide a simple procedure for quantifying the gap between the coverage of our algorithm and the target level. Moreover, by tuning a single hyperparameter, we allow the practitioner to control the size of this gap across shifts of interest. Our methods can be easily incorporated into existing split conformal inference pipelines, and thus can be used to quantify the uncertainty of modern black-box algorithms without distributional assumptions.
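For intuition about the subgroup example, here is a minimal per-group split conformal sketch: it fits one conformal quantile per (disjoint) group and so gives finite-sample coverage within each group. This naive baseline is not the authors' algorithm, which handles general, possibly overlapping classes of covariate shifts; the data and point predictor below are toy assumptions.

```python
# Hedged sketch: group-wise split conformal intervals with absolute-residual
# scores and a finite-sample-corrected quantile per group.
import numpy as np

def groupwise_conformal(cal_pred, cal_y, cal_group, test_pred, test_group, alpha=0.1):
    scores = np.abs(cal_y - cal_pred)
    intervals = []
    for pred, g in zip(test_pred, test_group):
        s = scores[cal_group == g]
        n = len(s)
        # ceil((n+1)(1-alpha))/n quantile, clipped to 1 for small groups.
        q = np.quantile(s, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
        intervals.append((pred - q, pred + q))
    return intervals

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=500)
y = group * 2.0 + rng.normal(0, 1 + group, size=500)    # group 1 is noisier
pred = group * 2.0                                       # a crude point predictor
print(groupwise_conformal(pred[:400], y[:400], group[:400],
                          pred[400:405], group[400:405]))
```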

We study the problem of locally private mean estimation of high-dimensional vectors in the Euclidean ball. Existing algorithms for this problem either incur sub-optimal error or have high communication and/or run-time complexity. We propose a new algorithmic framework, ProjUnit, for private mean estimation that yields algorithms that are computationally efficient, have low communication complexity, and incur optimal error up to a $1+o(1)$-factor. Our framework is deceptively simple: each randomizer projects its input to a random low-dimensional subspace, normalizes the result, and then runs an optimal algorithm such as PrivUnitG in the lower-dimensional space. In addition, we show that, by appropriately correlating the random projection matrices across devices, we can achieve fast server run-time. We mathematically analyze the error of the algorithm in terms of properties of the random projections, and study two instantiations. Lastly, our experiments for private mean estimation and private federated learning demonstrate that our algorithms empirically obtain nearly the same utility as optimal ones while having significantly lower communication and computational cost.
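The recipe described above (project, renormalize, randomize in the low-dimensional space, lift back on the server) can be sketched as follows. The `toy_randomizer` is a stand-in for an optimal unit-vector mechanism such as PrivUnitG, and the scaling is illustrative rather than the calibrated, debiased estimator, so this is only meant to show the data flow.

```python
# Hedged sketch of a ProjUnit-style pipeline: random projection, renormalization,
# a placeholder low-dimensional randomizer, and a server-side lift-and-average.
import numpy as np

def toy_randomizer(z, rng, noise):
    """Placeholder for PrivUnitG: perturb and return a unit vector."""
    y = z + rng.normal(scale=noise, size=z.shape)
    return y / np.linalg.norm(y)

def proj_unit_encode(x, k, rng, noise=1.0):
    d = len(x)
    W = rng.normal(size=(k, d)) / np.sqrt(k)   # random projection, roughly norm-preserving
    z = W @ x
    z = z / np.linalg.norm(z)                  # back onto the unit sphere in R^k
    return W, toy_randomizer(z, rng, noise)    # report (projection, private vector)

def server_decode(reports):
    # Lift each report back to R^d with the transpose of its projection, then average.
    return np.mean([W.T @ y for W, y in reports], axis=0)

rng = np.random.default_rng(1)
x = np.ones(1024) / np.sqrt(1024)              # each client holds the same unit vector
reports = [proj_unit_encode(x, k=64, rng=rng) for _ in range(200)]
print(np.linalg.norm(server_decode(reports) - x))   # rough (uncalibrated) recovery error
```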

In many real-world network datasets such as co-authorship, co-citation, email communication, etc., relationships are complex and go beyond pairwise. Hypergraphs provide a flexible and natural modeling tool to model such complex relationships. The obvious existence of such complex relationships in many real-world networks naturally motivates the problem of learning with hypergraphs. A popular learning paradigm is hypergraph-based semi-supervised learning (SSL) where the goal is to assign labels to initially unlabeled vertices in a hypergraph. Motivated by the fact that a graph convolutional network (GCN) has been effective for graph-based SSL, we propose HyperGCN, a novel GCN for SSL on attributed hypergraphs. Additionally, we show how HyperGCN can be used as a learning-based approach for combinatorial optimisation on NP-hard hypergraph problems. We demonstrate HyperGCN's effectiveness through detailed experimentation on real-world hypergraphs.

Graph-based semi-supervised learning (SSL) is an important learning problem where the goal is to assign labels to initially unlabeled nodes in a graph. Graph Convolutional Networks (GCNs) have recently been shown to be effective for graph-based SSL problems. GCNs inherently assume existence of pairwise relationships in the graph-structured data. However, in many real-world problems, relationships go beyond pairwise connections and hence are more complex. Hypergraphs provide a natural modeling tool to capture such complex relationships. In this work, we explore the use of GCNs for hypergraph-based SSL. In particular, we propose HyperGCN, an SSL method which uses a layer-wise propagation rule for convolutional neural networks operating directly on hypergraphs. To the best of our knowledge, this is the first principled adaptation of GCNs to hypergraphs. HyperGCN is able to encode both the hypergraph structure and hypernode features in an effective manner. Through detailed experimentation, we demonstrate HyperGCN's effectiveness at hypergraph-based SSL.
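For readers unfamiliar with graph convolutions, the sketch below shows a plain GCN layer applied to the clique expansion of a hypergraph. This is a simplified stand-in to make "convolution over a hypergraph" concrete; it is not HyperGCN's propagation rule, which uses a nonlinear hypergraph Laplacian rather than the naive clique expansion.

```python
# Hedged sketch: one GCN layer, X' = ReLU(D^{-1/2} (A + I) D^{-1/2} X W),
# run on the clique expansion of a small hypergraph.
import numpy as np

def clique_expansion(num_nodes, hyperedges):
    A = np.zeros((num_nodes, num_nodes))
    for e in hyperedges:
        for u in e:
            for v in e:
                if u != v:
                    A[u, v] = 1.0
    return A

def gcn_layer(A, X, W):
    A_hat = A + np.eye(len(A))                       # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)  # ReLU

hyperedges = [(0, 1, 2), (2, 3), (3, 4, 5)]
X = np.eye(6)                                        # one-hot hypernode features
W = np.random.default_rng(0).normal(size=(6, 4))
print(gcn_layer(clique_expansion(6, hyperedges), X, W).shape)   # (6, 4)
```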

Traditional methods for link prediction can be categorized into three main types: graph structure feature-based, latent feature-based, and explicit feature-based. Graph structure feature methods leverage some handcrafted node proximity scores, e.g., common neighbors, to estimate the likelihood of links. Latent feature methods rely on factorizing networks' matrix representations to learn an embedding for each node. Explicit feature methods train a machine learning model on two nodes' explicit attributes. Each of the three types of methods has its unique merits. In this paper, we propose SEAL (learning from Subgraphs, Embeddings, and Attributes for Link prediction), a new framework for link prediction which combines the power of all the three types into a single graph neural network (GNN). GNN is a new type of neural network which directly accepts graphs as input and outputs their labels. In SEAL, the input to the GNN is a local subgraph around each target link. We prove theoretically that our local subgraphs also preserve a great deal of high-order graph structure features related to link existence. Another key feature is that our GNN can naturally incorporate latent features and explicit features. This is achieved by concatenating node embeddings (latent features) and node attributes (explicit features) in the node information matrix for each subgraph, thus combining the three types of features to enhance GNN learning. Through extensive experiments, SEAL shows unprecedentedly strong performance against a wide range of baseline methods, including various link prediction heuristics and network embedding methods.
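The input construction can be sketched as follows: extract the h-hop enclosing subgraph around a candidate link and build a node information matrix by concatenating a structural label with latent and explicit features. For simplicity the structural label here is just the pair of hop distances to the two endpoints rather than SEAL's actual node-labeling scheme, and the embeddings and attributes are toy arrays.

```python
# Hedged sketch of SEAL-style input construction for one candidate link (u, v).
import networkx as nx
import numpy as np

def enclosing_subgraph(G, u, v, h=1):
    nodes = set(nx.single_source_shortest_path_length(G, u, cutoff=h)) | \
            set(nx.single_source_shortest_path_length(G, v, cutoff=h))
    return G.subgraph(nodes).copy()

def node_information_matrix(G, sub, u, v, embeddings, attributes):
    rows = []
    for x in sub.nodes():
        du = nx.shortest_path_length(G, x, u)        # structural label: distances
        dv = nx.shortest_path_length(G, x, v)        # to both link endpoints
        rows.append(np.concatenate(([du, dv], embeddings[x], attributes[x])))
    return np.vstack(rows)

G = nx.karate_club_graph()
emb = {n: np.random.default_rng(n).normal(size=4) for n in G}   # toy latent features
attr = {n: np.array([G.degree[n]]) for n in G}                  # toy explicit feature
sub = enclosing_subgraph(G, 0, 33, h=1)
print(node_information_matrix(G, sub, 0, 33, emb, attr).shape)
```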
