
In the 1990s, Clark, Colbourn, and Johnson wrote a seminal paper proving that maximum clique can be solved in polynomial time in unit disk graphs. Since then, the complexity of maximum clique in intersection graphs of d-dimensional (unit) balls has been investigated. For ball graphs, the problem is NP-hard, as shown by Bonamy et al. (FOCS '18), who also gave an efficient polynomial-time approximation scheme (EPTAS) for disk graphs. However, the complexity of maximum clique in disk graphs remains unknown. In this paper, we show the existence of a polynomial-time algorithm for a geometric superclass of unit disk graphs. Moreover, we give partial results toward obtaining an EPTAS for intersection graphs of convex pseudo-disks.
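
As a point of comparison for the polynomial-time result mentioned above, the sketch below follows the classical Clark-Colbourn-Johnson idea (not the algorithm of this paper): if $(u,v)$ is a diametral pair of a maximum clique, every clique vertex lies within distance $d(u,v) \le 1$ of both $u$ and $v$, and the complement of the subgraph induced by these candidates is bipartite, so the clique size can be recovered from a maximum matching via König's theorem. The helper functions and the use of networkx are illustrative assumptions.

```python
# Hedged sketch of the Clark-Colbourn-Johnson approach to maximum clique
# in unit disk graphs (centers adjacent iff their distance is <= 1).
# This is an illustration, not the algorithm proposed in the paper above.
import itertools
import math
import networkx as nx

def unit_disk_graph(points):
    """Build the unit disk graph on the given 2D points."""
    g = nx.Graph()
    g.add_nodes_from(range(len(points)))
    for i, j in itertools.combinations(range(len(points)), 2):
        if math.dist(points[i], points[j]) <= 1.0:
            g.add_edge(i, j)
    return g

def max_clique_unit_disk(points):
    g = unit_disk_graph(points)
    best = 1 if points else 0
    for u, v in g.edges():
        d = math.dist(points[u], points[v])
        # Candidates: vertices within distance d of both u and v,
        # hence adjacent to both endpoints.
        cand = [w for w in g.nodes()
                if math.dist(points[w], points[u]) <= d
                and math.dist(points[w], points[v]) <= d]
        sub = g.subgraph(cand)
        comp = nx.complement(sub)   # the complement of this "lens" subgraph is bipartite
        matching = nx.max_weight_matching(comp, maxcardinality=True)
        # Koenig: max independent set of a bipartite graph = n - max matching,
        # and an independent set of the complement is a clique of sub.
        best = max(best, comp.number_of_nodes() - len(matching))
    return best

print(max_clique_unit_disk([(0, 0), (0.5, 0.2), (0.9, 0.0), (3.0, 3.0)]))  # prints 3
```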

Related content

In this paper, we study the non-monotone adaptive submodular maximization problem subject to a knapsack constraint and a $k$-system constraint. The input of our problem is a set of items, where each item has a particular state drawn from a known prior distribution. The state of an item is initially unknown, and one must select an item in order to reveal its state. There is a utility function defined over items and states. Our objective is to sequentially select a group of items to maximize the expected utility. Although cardinality-constrained non-monotone adaptive submodular maximization has been well studied in the literature, whether there exists a constant-factor approximation for the knapsack-constrained or $k$-system-constrained adaptive submodular maximization problem remains an open problem. In fact, it has only been settled under the additional assumption of pointwise submodularity. In this paper, we remove this common assumption and propose the first constant-factor approximation solutions for both cases. Inspired by two recent studies on non-monotone adaptive submodular maximization, we develop a sampling-based randomized algorithm that achieves a $\frac{1}{10}$ approximation ratio under a knapsack constraint and a $\frac{1}{2k+4}$ approximation ratio under a $k$-system constraint.
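
The sketch below only illustrates the general shape of a sampling-based adaptive policy under a knapsack constraint (random pre-sampling of the ground set, then adaptively picking the item with the best marginal-utility-to-cost ratio that still fits the budget). It is an assumption-laden stand-in for intuition; the paper's $\frac{1}{10}$-approximation algorithm differs in its details.

```python
# Hedged sketch: a sampling-based adaptive greedy policy under a knapsack
# constraint. Illustrative only; not the paper's 1/10-approximation algorithm.
import random

def adaptive_sample_greedy(items, costs, budget, reveal_state, marginal_gain,
                           sample_prob=0.5, seed=0):
    """items: list of item ids; costs: dict id -> cost; budget: knapsack capacity.
    reveal_state(i): reveals the realized state of item i (only after selection).
    marginal_gain(i, observed): expected marginal utility of i given observations."""
    rng = random.Random(seed)
    # Step 1: keep each item independently with probability sample_prob.
    pool = [i for i in items if rng.random() < sample_prob]
    observed = {}   # realized states of the items selected so far
    spent = 0.0
    while True:
        feasible = [i for i in pool if i not in observed and spent + costs[i] <= budget]
        if not feasible:
            break
        # Step 2: adaptively pick the best marginal-gain-per-cost item.
        best = max(feasible, key=lambda i: marginal_gain(i, observed) / costs[i])
        if marginal_gain(best, observed) <= 0:
            break
        observed[best] = reveal_state(best)   # selecting an item reveals its state
        spent += costs[best]
    return observed
```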

This paper considers the problem of Byzantine dispersion and extends previous work along several parameters. The problem of Byzantine dispersion asks: given $n$ robots, up to $f$ of which are Byzantine, initially placed arbitrarily on an $n$-node anonymous graph, design a terminating algorithm to be run by the robots such that they eventually reach a configuration where each node has at most one non-Byzantine robot on it. Previous work solved this problem for rings and tolerated up to $n-1$ Byzantine robots. In this paper, we investigate the problem on more general graphs. We first develop an algorithm that tolerates up to $n-1$ Byzantine robots and works for a more general class of graphs. We then develop an algorithm that works for any graph but tolerates fewer Byzantine robots. We subsequently turn our focus to the strength of the Byzantine robots. Previous work considers only ``weak'' Byzantine robots that cannot fake their IDs. We develop an algorithm that solves the problem when Byzantine robots are not weak and can fake IDs. Finally, we study the situation where the number of robots is not $n$ but some $k$. We show that in such a scenario, the number of Byzantine robots that can be tolerated is severely restricted. Specifically, we show that it is impossible to deterministically solve Byzantine dispersion when $\lceil k/n \rceil > \lceil (k-f)/n \rceil$.
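
As a purely numerical illustration of this threshold (with hypothetical values): for $n=5$ nodes and $k=10$ robots, $\lceil k/n \rceil = 2$; with $f=5$ Byzantine robots, $\lceil (k-f)/n \rceil = \lceil 5/5 \rceil = 1 < 2$, so the impossibility condition holds and deterministic Byzantine dispersion is ruled out, whereas with $f=1$ we get $\lceil 9/5 \rceil = 2$ and the condition does not apply.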

A recent UK Biobank study clustered 156 parameterised models associating risk factors with common diseases, to identify shared causes of disease. Parametric models are often more familiar and interpretable than clustered data, can build in prior knowledge, adjust for known confounders, and use marginalisation to emphasise parameters of interest. Estimates include a Maximum Likelihood Estimate (MLE) that is (approximately) normally distributed, and its covariance. Clustering models rarely consider the covariances of data points, which are usually unavailable. Here a clustering model is formulated that accounts for the covariances of the data and assumes that all MLEs in a cluster are the same. The log-likelihood is exactly calculated in terms of the fitted parameters, with the unknown cluster means removed by marginalisation. The procedure is equivalent to calculating the Bayesian Information Criterion (BIC) without approximation, and can be used to assess the optimum number of clusters for a given clustering algorithm. The log-likelihood has terms that penalise poor fits and model complexity, and can be maximised to determine the number and composition of clusters. Results can be similar to using the ad hoc "elbow criterion", but are less subjective. The model is also formulated as a Dirichlet process mixture model (DPMM). The overall approach is equivalent to a multi-layer algorithm that characterises features through the normally distributed MLEs of a fitted model, and then clusters the normal distributions. Examples include simulated data, and clustering of diseases in UK Biobank data using estimated associations with risk factors. The results can be applied directly to measured data and their estimated covariances, or to the output from clustering models, or the DPMM implementation can be used to cluster fitted models directly.
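
As a hedged sketch of how such an exact, covariance-aware cluster log-likelihood can look, assume (for illustration only) independent MLEs $\hat\theta_i \sim N(\mu_c, \Sigma_i)$ within cluster $c$ and a flat prior on the unknown cluster mean $\mu_c$; the paper's own marginalisation may use a different prior.

```latex
% Hedged illustration: log-likelihood of one cluster after marginalising the
% unknown cluster mean, assuming \hat\theta_i ~ N(\mu_c, \Sigma_i) and a flat prior on \mu_c.
\[
\bar\mu_c \;=\; \Big(\textstyle\sum_{i \in c} \Sigma_i^{-1}\Big)^{-1} \sum_{i \in c} \Sigma_i^{-1}\hat\theta_i ,
\]
\[
\log L_c \;=\; -\tfrac{1}{2} \sum_{i \in c} \Big[ (\hat\theta_i - \bar\mu_c)^\top \Sigma_i^{-1} (\hat\theta_i - \bar\mu_c)
 \;+\; \log\big|2\pi\Sigma_i\big| \Big]
 \;+\; \tfrac{1}{2}\log\Big| 2\pi \Big(\textstyle\sum_{i \in c}\Sigma_i^{-1}\Big)^{-1} \Big| .
\]
```

In this illustrative form, the quadratic sum penalises scatter of the MLEs about the precision-weighted cluster mean (poor fits), while the log-determinant terms act as a complexity penalty, mirroring the BIC-style trade-off described above.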

We study distributionally robust optimization (DRO) with the Sinkhorn distance -- a variant of the Wasserstein distance based on entropic regularization. We derive convex programming dual reformulations when the nominal distribution is an empirical distribution and a general distribution, respectively. Compared with Wasserstein DRO, the formulation is computationally tractable for a larger class of loss functions, and its worst-case distribution is more reasonable. To solve the dual reformulation, we propose an efficient batch gradient descent with a bisection search algorithm. Finally, we provide various numerical examples using both synthetic and real data to demonstrate its competitive performance.
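
For readers unfamiliar with the terminology, one common way to write the entropic-regularised optimal transport (Sinkhorn) distance and the induced DRO problem is sketched below; the exact regularisation (e.g., entropy of the coupling versus KL divergence to a reference coupling) and the constraint form used in the paper may differ.

```latex
% Hedged sketch of the Sinkhorn distance and the induced DRO problem.
\[
W_\epsilon(P, Q) \;=\; \min_{\pi \in \Pi(P, Q)}
  \; \mathbb{E}_{(x,y)\sim\pi}\big[c(x,y)\big] \;+\; \epsilon\, D_{\mathrm{KL}}\!\big(\pi \,\|\, P \otimes Q\big),
\]
\[
\min_{\theta} \;\; \sup_{Q \,:\, W_\epsilon(P_0, Q) \le \rho} \; \mathbb{E}_{x \sim Q}\big[\ell_\theta(x)\big],
\]
% where P_0 is the nominal (e.g., empirical) distribution, c is a transport cost,
% \epsilon > 0 is the entropic regularisation strength, and \rho the ambiguity radius.
```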

Rapid technological advances in the domain of Wireless Power Transfer (WPT) pave the way for novel methods of power management in systems of wireless devices, and recent research has already started considering algorithmic solutions for the emerging problems. However, many of those works are limited by their system modelling, and more specifically by the one-dimensional abstraction suggested by Friis' formula for the power received by one antenna under idealized conditions from another antenna some distance away. In contrast to those works, we use a model which arises naturally from fundamental properties of the superposition of energy fields. This model has been shown to be more realistic than the one-dimensional models used in the past and can capture superadditive and cancellation effects. Under this model, we define two new problems for configuring wireless power transmitters so as to maximize the total power in the system, and we prove that the first problem can be solved in polynomial time. We present a distributed solution that runs in pseudo-polynomial time and uses various knowledge levels, and we provide theoretical performance guarantees. Finally, we design three heuristics for the second problem and evaluate them via simulations.
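
A minimal sketch of the kind of superposition model alluded to above, under the assumption that each transmitter contributes a complex field amplitude at the receiver and the received power is the squared magnitude of the summed contributions; the specific attenuation and phase model below is illustrative, not the one used in the paper.

```python
# Hedged sketch: received power as the superposition of complex field
# contributions, which exhibits superadditive and cancellation effects.
# The attenuation/phase model below is illustrative only.
import cmath
import math

WAVELENGTH = 0.33  # metres, hypothetical value

def field_at(receiver, transmitter, amplitude=1.0):
    """Complex field contribution of one transmitter at the receiver."""
    d = math.dist(receiver, transmitter)
    phase = 2 * math.pi * d / WAVELENGTH
    return (amplitude / max(d, 1e-9)) * cmath.exp(1j * phase)

def received_power(receiver, transmitters):
    """Power of the superposed field: |sum of contributions|^2."""
    total = sum(field_at(receiver, t) for t in transmitters)
    return abs(total) ** 2

rx = (0.0, 0.0)
tx_a, tx_b = (1.0, 0.0), (0.0, 1.02)
p_a = received_power(rx, [tx_a])
p_b = received_power(rx, [tx_b])
p_both = received_power(rx, [tx_a, tx_b])
# Depending on the relative phases, p_both can exceed p_a + p_b (superadditive)
# or fall well below it (cancellation) -- something a one-dimensional
# Friis-style abstraction cannot capture.
print(p_a, p_b, p_both)
```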

Coloring unit-disk graphs efficiently is an important problem in the global and distributed settings, with applications to radio channel assignment when communication relies on omni-directional antennas of the same power. In this context it is important to bound not only the complexity of the coloring algorithms, but also the number of colors used. In this paper, we consider two natural distributed settings. In the location-aware setting (when nodes know their coordinates in the plane), we give a constant-time distributed algorithm coloring any unit-disk graph $G$ with at most $(3+\epsilon)\omega(G)+6$ colors, for any constant $\epsilon>0$, where $\omega(G)$ is the clique number of $G$. This improves upon the classical 3-approximation algorithm for this problem for all unit-disk graphs whose chromatic number significantly exceeds their clique number. When nodes do not know their coordinates in the plane, we give a distributed algorithm in the LOCAL model that colors every unit-disk graph $G$ with at most $5.68\omega(G)$ colors in (see \cref{sec:comput}) rounds. Moreover, when $\omega(G)=O(1)$, the algorithm runs in $O(\log^* n)$ rounds. This algorithm is based on a study of the local structure of unit-disk graphs, which is of independent interest. We conjecture that every unit-disk graph $G$ has average degree at most $4\omega(G)$, which would imply the existence of an $O(\log n)$ round algorithm coloring any unit-disk graph $G$ with (approximately) $4\omega(G)$ colors.
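
As background for the location-aware setting, the sketch below shows the classical grid-based idea (not the $(3+\epsilon)\omega(G)+6$ algorithm of this paper): cells of side $1/\sqrt{2}$ induce cliques, and cells whose indices agree modulo 3 in both coordinates are more than distance 1 apart and can reuse colors, so each vertex can compute its color essentially locally from its coordinates.

```python
# Hedged sketch of a classical location-aware coloring for unit disk graphs
# (centers adjacent iff within distance 1). Not the (3+eps)*omega(G)+6
# algorithm of the paper; just an illustration of how coordinates enable
# constant-time, purely local color choices.
import math
from collections import defaultdict

SIDE = 1 / math.sqrt(2)   # cell diameter is 1, so each cell induces a clique

def color_location_aware(points):
    """Assign each point a color (cell_class, rank) computable from coordinates alone."""
    cells = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        cells[(math.floor(x / SIDE), math.floor(y / SIDE))].append(idx)
    colors = {}
    for (i, j), members in cells.items():
        # Distinct cells with equal (i mod 3, j mod 3) are > 1 apart,
        # so their palettes can safely coincide.
        cell_class = (i % 3, j % 3)
        for rank, idx in enumerate(sorted(members)):
            colors[idx] = (cell_class, rank)
    return colors

# Uses at most 9 * (max cell occupancy) <= 9 * omega(G) colors.
print(color_location_aware([(0.1, 0.1), (0.3, 0.2), (2.5, 2.5)]))
```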

Feature attribution is often loosely presented as the process of selecting a subset of relevant features as a rationale for a prediction. This lack of clarity stems from the fact that we usually do not have access to any notion of ground-truth attribution, and from a more general debate on what good interpretations are. In this paper we propose to formalise feature selection/attribution based on the concept of relaxed functional dependence. In particular, we extend our notions to the instance-wise setting and derive necessary properties for candidate selection solutions, while leaving room for task-dependence. By computing ground-truth attributions on synthetic datasets, we evaluate many state-of-the-art attribution methods and show that, even when optimised, some fail to verify the proposed properties and provide wrong solutions.
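
A minimal sketch of the kind of evaluation described above, under illustrative assumptions: a synthetic dataset whose label depends only on a known subset of features (the ground-truth attribution), and a simple score of how well a method's top-ranked features recover that subset. The attribution method plugged in here (permutation importance) is a stand-in, not one of the methods evaluated in the paper.

```python
# Hedged sketch: evaluating an attribution method against a known ground truth
# on synthetic data. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 10
relevant = {0, 3}                       # ground-truth attribution: only these features matter
X = rng.normal(size=(n, d))
y = (X[:, 0] + 2.0 * X[:, 3] > 0).astype(int)

def permutation_importance(predict, X, y):
    """Drop in accuracy when each feature is shuffled (a stand-in attribution method)."""
    base = (predict(X) == y).mean()
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores[j] = base - (predict(Xp) == y).mean()
    return scores

# A hand-made "model" that genuinely uses only the relevant features.
predict = lambda X: (X[:, 0] + 2.0 * X[:, 3] > 0).astype(int)

scores = permutation_importance(predict, X, y)
top2 = set(np.argsort(scores)[-2:])
print("recovered:", top2, "precision vs ground truth:", len(top2 & relevant) / 2)
```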

Domain shift is a fundamental problem in visual recognition, typically arising when the source and target data follow different distributions. Existing domain adaptation approaches that tackle this problem work in the closed-set setting, with the assumption that the source and the target data share exactly the same classes of objects. In this paper, we tackle a more realistic problem of open-set domain shift, where the target data contains additional classes that are not present in the source data. More specifically, we introduce an end-to-end Progressive Graph Learning (PGL) framework in which a graph neural network with episodic training is integrated to suppress the underlying conditional shift, and adversarial learning is adopted to close the gap between the source and target distributions. Compared to existing open-set adaptation approaches, our approach is guaranteed to achieve a tighter upper bound on the target error. Extensive experiments on three standard open-set benchmarks show that our approach significantly outperforms the state of the art in open-set domain adaptation.

Deep learning is the mainstream technique for many machine learning tasks, including image recognition, machine translation, and speech recognition. It has outperformed conventional methods in various fields and achieved great success. Unfortunately, our understanding of how it works remains unclear, and it is of central importance to lay down the theoretical foundation for deep learning. In this work, we give a geometric view to understand deep learning: we show that the fundamental principle underlying its success is the manifold structure in data, namely that natural high-dimensional data concentrates close to a low-dimensional manifold, and deep learning learns the manifold and the probability distribution on it. We further introduce the concept of the rectified linear complexity of a deep neural network, which measures its learning capability, and the rectified linear complexity of an embedding manifold, which describes the difficulty of learning it. We then show that for any deep neural network with a fixed architecture, there exists a manifold that cannot be learned by the network. Finally, we propose to apply optimal mass transportation theory to control the probability distribution in the latent space.
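
To make one of the underlying notions concrete, the snippet below illustrates, under toy assumptions, that a ReLU network computes a piecewise-linear map, so the number of linear pieces it can realise is finite and bounded by its architecture; this is the kind of capacity a "rectified linear complexity" measure quantifies, although the code is not the paper's definition.

```python
# Toy illustration: a one-hidden-layer ReLU network on a 1-D input is piecewise
# linear, with at most (hidden_width + 1) linear pieces. Not the paper's
# definition of rectified linear complexity; just an empirical sanity check.
import numpy as np

rng = np.random.default_rng(1)
hidden = 8
W1, b1 = rng.normal(size=hidden), rng.normal(size=hidden)
W2, b2 = rng.normal(size=hidden), rng.normal()

def f(x):
    """f(x) = W2 . relu(W1 * x + b1) + b2 for scalar input x (piecewise linear)."""
    return W2 @ np.maximum(W1 * x + b1, 0.0) + b2

def activation_pattern(x):
    """Which hidden units are active; the pattern is constant on each linear piece."""
    return tuple((W1 * x + b1 > 0).astype(int))

xs = np.linspace(-10.0, 10.0, 20001)
patterns = [activation_pattern(x) for x in xs]
pieces = 1 + sum(patterns[i] != patterns[i - 1] for i in range(1, len(patterns)))
print(f"linear pieces observed: {pieces} (architectural bound: {hidden + 1})")
```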

Metric learning learns a metric function from training data to calculate the similarity or distance between samples. From the perspective of feature learning, metric learning essentially learns a new feature space via a feature transformation (e.g., a Mahalanobis distance metric). However, traditional metric learning algorithms are shallow: they learn just one metric space (feature transformation). Can we learn a better metric space from the learnt metric space? In other words, can we learn a metric progressively and nonlinearly, as in deep learning, by using only existing metric learning algorithms? To this end, we present a hierarchical metric learning scheme and implement an online deep metric learning framework, namely ODML. Specifically, we take one online metric learning algorithm as a metric layer, follow it with a nonlinear layer (i.e., ReLU), and then stack these layers in the manner of deep learning. The proposed ODML enjoys some nice properties: it can indeed learn a metric progressively and achieves superior performance on some datasets. Various experiments with different settings have been conducted to verify these properties of the proposed ODML.
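
A minimal sketch of the layer-stacking idea described above, under simplifying assumptions: each "metric layer" is a linear transform trained online with a pairwise hinge loss (a stand-in for the online metric learning algorithm used in the paper), and a ReLU is applied between layers; the exact update rule and layer interface in ODML may differ.

```python
# Hedged sketch of stacking online metric-learning layers with ReLU
# nonlinearities, in the spirit of ODML. The per-layer update below is a
# simple online pairwise hinge-loss gradient step, not the paper's algorithm.
import numpy as np

class MetricLayer:
    """One 'metric layer': x -> L x, trained online so that same-class pairs
    end up closer than different-class pairs relative to a margin."""

    def __init__(self, dim_in, dim_out, lr=0.01, margin=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.L = rng.normal(scale=0.1, size=(dim_out, dim_in))
        self.lr, self.margin = lr, margin

    def transform(self, x):
        return self.L @ x

    def update(self, x1, x2, same_class):
        d = self.transform(x1) - self.transform(x2)
        dist2 = float(d @ d)
        # Hinge: pull same-class pairs inside the margin, push others outside it.
        if same_class and dist2 > self.margin:
            grad = 2.0 * np.outer(d, x1 - x2)
        elif (not same_class) and dist2 < self.margin:
            grad = -2.0 * np.outer(d, x1 - x2)
        else:
            return
        self.L -= self.lr * grad

def forward(layers, x):
    """Pass x through the metric layers with ReLU in between (none after the last)."""
    for k, layer in enumerate(layers):
        x = layer.transform(x)
        if k < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

def train_pair(layers, x1, x2, same_class):
    """Online training: each incoming pair updates every layer, layer by layer."""
    a, b = x1, x2
    for k, layer in enumerate(layers):
        layer.update(a, b, same_class)
        a, b = layer.transform(a), layer.transform(b)
        if k < len(layers) - 1:
            a, b = np.maximum(a, 0.0), np.maximum(b, 0.0)
```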
