
In 1982, Papadimitriou and Yannakakis introduced the Exact Matching (EM) problem: given a graph whose edges are colored red or blue and an integer $k$, decide whether the graph contains a perfect matching with exactly $k$ red edges. Although they conjectured it to be $\textbf{NP}$-complete, it was shown soon after, in the seminal work of Mulmuley et al., to be solvable in randomized polynomial time, placing it in the complexity class $\textbf{RP}$. Since then, all attempts at finding a deterministic algorithm for EM have failed, leaving it as one of the few natural combinatorial problems in $\textbf{RP}$ not known to be contained in $\textbf{P}$, and making it an interesting test case for the hypothesis $\textbf{RP}=\textbf{P}$. Progress has been lacking even on very restricted classes of graphs, despite the problem being well known, as evidenced by the number of works citing it. In this paper we aim to gain more insight into EM by studying a new optimization problem we call Top-k Perfect Matching (TkPM), which we show to be polynomially equivalent to EM. Being an optimization problem, TkPM is more natural to approximate, so we provide approximation algorithms for it. Some of the approximation algorithms rely on a relaxation of EM on bipartite graphs where the output is required to be a perfect matching whose number of red edges differs from $k$ by at most $k/2$; this relaxation is of independent interest and generalizes to the Exact Weight Perfect Matching (EWPM) problem. We also consider parameterized algorithms and show that TkPM can be solved in FPT time parameterized by $k$ and the independence number of the graph. This result again relies on new tools developed for EM, which are also of independent interest.
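To make the problem statement concrete, here is a minimal brute-force sketch of the EM decision problem on a toy graph. It enumerates all candidate edge subsets, so it is purely illustrative (exponential time) and not one of the algorithms discussed above; the graph encoding is an assumption of this sketch.

```python
# Brute-force check for Exact Matching (EM), for illustration only.
# Vertices are 0..n-1; each edge is (u, v, color) with color "red" or "blue".
from itertools import combinations

def has_exact_matching(n, edges, k):
    """True iff some perfect matching has exactly k red edges."""
    if n % 2:
        return False                      # odd vertex count: no perfect matching
    for subset in combinations(edges, n // 2):
        covered = {w for u, v, _ in subset for w in (u, v)}
        if len(covered) == n:             # all endpoints distinct: perfect matching
            if sum(c == "red" for _, _, c in subset) == k:
                return True
    return False

# 4-cycle with alternating colors: its two perfect matchings are monochromatic,
# so k = 0 and k = 2 are feasible while k = 1 is not.
edges = [(0, 1, "red"), (1, 2, "blue"), (2, 3, "red"), (3, 0, "blue")]
print(has_exact_matching(4, edges, 2))   # True
print(has_exact_matching(4, edges, 1))   # False
```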

Related Content

This article presents a general approximation-theoretic framework to analyze measure-transport algorithms for sampling and characterizing probability measures. Sampling is a task that frequently arises in data science and uncertainty quantification. We provide error estimates in the continuum limit, i.e., when the measures (or their densities) are given, but the transport map is discretized or approximated using a finite-dimensional function space. Our analysis relies on the regularity theory of transport maps, as well as on classical approximation theory for high-dimensional functions. A third element of our analysis, which is of independent interest, is the development of new stability estimates that relate the normed distance between two maps to the divergence between the pushforward measures they define. We further present a series of applications where quantitative convergence rates are obtained for practical problems using Wasserstein metrics, maximum mean discrepancy, and Kullback-Leibler divergence. Specialized rates for approximations of the popular triangular Knothe-Rosenblatt maps are obtained, followed by numerical experiments that demonstrate and extend our theory.
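As a concrete illustration of this setup, consider one dimension, where the Knothe-Rosenblatt map reduces to the monotone inverse-CDF transport $T = F_{\text{target}}^{-1} \circ F_{\text{source}}$. In the hedged sketch below, a low-degree polynomial fit stands in for the finite-dimensional function spaces of the analysis; the Gaussian distributions, grid, and degree are illustrative choices, not from the article.

```python
# 1D transport-map approximation and its pushforward error (illustrative).
import numpy as np
from scipy import stats

source = stats.norm(0.0, 1.0)            # reference measure
target = stats.norm(2.0, 0.5)            # measure we want to sample from

def transport(x):
    # exact 1D Knothe-Rosenblatt (monotone) map between the two measures
    return target.ppf(source.cdf(x))

# Approximate T in a finite-dimensional space (here: cubic polynomials).
grid = np.linspace(-3.0, 3.0, 200)
T_hat = np.poly1d(np.polyfit(grid, transport(grid), 3))

# Empirical 1-Wasserstein distance between the pushforward and the target.
x = source.rvs(size=20_000, random_state=0)
y = target.rvs(size=20_000, random_state=1)
w1 = np.mean(np.abs(np.sort(T_hat(x)) - np.sort(y)))
print(f"empirical W1(pushforward, target) ~ {w1:.4f}")
```

Since the true map between two Gaussians is affine, the cubic fit is essentially exact here and the residual W1 reflects sampling noise; richer targets would expose genuine approximation error of the function class.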

We describe a formal correctness proof of RANKING, an algorithm for online bipartite matching. An outcome of our formalisation is that it reveals a gap in all combinatorial proofs of the algorithm. Filling that gap constituted the majority of the effort that went into this work. This is despite the algorithm being one of the most studied algorithms and a central result in theoretical computer science. This gap is an example of the difficulties of formalising graphical arguments, which are ubiquitous in the theory of computing.
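For readers unfamiliar with the algorithm itself (as opposed to its formalisation), here is a hedged sketch of RANKING: offline vertices receive a uniformly random rank, and each arriving online vertex is matched to its highest-ranked free neighbor. The input encoding is an assumption of this sketch.

```python
# Minimal sketch of RANKING for online bipartite matching.
import random

def ranking(offline, arrivals, rng):
    rank = {v: rng.random() for v in offline}   # random permutation via random keys
    matched = {}                                # offline vertex -> online vertex
    for u, neighbors in arrivals:               # online vertices arrive one by one
        free = [v for v in neighbors if v not in matched]
        if free:
            best = min(free, key=rank.__getitem__)  # highest-ranked free neighbor
            matched[best] = u
    return matched

# Online vertices a, b, c arrive in order, each with its offline neighborhood.
arrivals = [("a", ["x", "y"]), ("b", ["x"]), ("c", ["y", "z"])]
print(ranking({"x", "y", "z"}, arrivals, random.Random(0)))
```

Over the randomness of the ranks, the algorithm is $(1-1/e)$-competitive; it is the combinatorial proofs of this guarantee in which the formalisation uncovered the gap.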

The topology-aware Massively Parallel Computation (MPC) model, which enhances the classical MPC model with awareness of the network topology, has been proposed and studied recently. The work of Hu et al. on the topology-aware MPC model considers only tree topologies. In this paper, we consider a more general case in which the underlying network is a weighted complete graph. We call this model the Weighted Massively Parallel Computation (WMPC) model and study the problem of minimizing communication cost under it. We define three communication cost minimization problems based on different communication patterns: the Data Redistribution Problem, the Data Allocation Problem on continuous data, and the Data Allocation Problem on categorized data. We also define four objective functions for communication cost, which consider the total cost, the bottleneck cost, the maximum of send and receive cost, and the sum of send and receive cost, respectively. Combining the three problems with the four objective functions yields 12 problems, whose hardness results and algorithms make up the content of this paper. We rigorously prove that some of the 12 problems are in P, some are FPT, some are NP-complete, and some are W[1]-complete. We also propose approximation algorithms for several selected problems.
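The four objective functions admit a compact description for a single communication round. The sketch below is one plausible reading: `cost[i][j]` is the link weight between machines $i$ and $j$ and `vol[i][j]` the volume $i$ sends to $j$; these names and the per-machine aggregations are illustrative assumptions, not definitions from the paper.

```python
# Four communication-cost objectives on a weighted complete graph (illustrative).
import numpy as np

def objectives(cost, vol):
    pair = cost * vol                    # cost incurred on each directed link
    send = pair.sum(axis=1)              # per-machine total send cost
    recv = pair.sum(axis=0)              # per-machine total receive cost
    return {
        "total": pair.sum(),                            # total cost
        "bottleneck": pair.max(),                       # costliest single link
        "max_send_recv": np.maximum(send, recv).max(),  # max of send and receive
        "sum_send_recv": (send + recv).max(),           # sum of send and receive
    }

cost = np.array([[0, 1, 4], [1, 0, 2], [4, 2, 0]], dtype=float)
vol = np.array([[0, 3, 1], [0, 0, 2], [5, 0, 0]], dtype=float)
print(objectives(cost, vol))
```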

This work initiates the systematic study of explicit distributions that are indistinguishable from a single exponential-size combinatorial object. In this, we extend the work of Goldreich, Goldwasser, and Nussboim (SICOMP 2010), which focused on the implementation of huge objects that are indistinguishable from the uniform distribution while satisfying some global properties (a requirement they coined truthfulness). Indistinguishability from a single object is motivated by the study of generative models in learning theory and regularity lemmas in graph theory. Problems that are well understood in the setting of pseudorandomness present significant challenges and at times become impossible when considering generative models of huge objects. We demonstrate the versatility of this study by providing a learning algorithm for huge indistinguishable objects in several natural settings, including: dense functions and graphs with a truthfulness requirement on the number of ones in the function or edges in the graphs, and a version of the weak regularity lemma for sparse graphs that satisfy some global properties. These and other results generalize basic pseudorandom objects as well as notions introduced in algorithmic fairness. The results rely on notions and techniques from a variety of areas, including learning theory, complexity theory, cryptography, and game theory.

Dynamic Algorithm Configuration (DAC) tackles the question of how to automatically learn policies for controlling the parameters of algorithms in a data-driven fashion. This question has received considerable attention from the evolutionary community in recent years. A good benchmark collection for gaining structural understanding of the effectiveness and limitations of different solution methods for DAC is therefore strongly desirable. Following recent work on proposing DAC benchmarks with well-understood theoretical properties and ground truth information, in this work we suggest, as a new DAC benchmark, controlling the key parameter $\lambda$ in the $(1+(\lambda,\lambda))$~Genetic Algorithm for solving OneMax problems. We study how to solve this DAC problem via the use of (static) automated algorithm configuration on the benchmark and propose techniques to significantly improve the performance of the approach. On sufficiently large problem sizes, our approach consistently outperforms the benchmark's default parameter control policy, which was derived from previous theoretical work. We also present new findings on the landscape of parameter-control policies and propose methods to compute stronger baselines for the benchmark via numerical approximations of the true optimal policies.
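For concreteness, here is a hedged sketch of one generation of the $(1+(\lambda,\lambda))$~GA on OneMax with $\lambda$ supplied externally, which is exactly the quantity a DAC policy controls. It uses the standard couplings of mutation rate $p=\lambda/n$ and crossover bias $c=1/\lambda$; the fitness-dependent baseline policy $\lambda=\sqrt{n/(n-f(x))}$ follows previous theoretical work, while details such as offspring-count rounding are illustrative.

```python
# One generation of the (1+(lambda,lambda)) GA on OneMax (illustrative sketch).
import random

def onemax(x):
    return sum(x)

def one_generation(x, lam, rng):
    n, p, c = len(x), lam / len(x), 1.0 / lam
    ell = sum(rng.random() < p for _ in range(n))      # ell ~ Bin(n, p)
    mutants = []
    for _ in range(round(lam)):                        # mutation phase
        y = x[:]
        for i in rng.sample(range(n), ell):            # flip exactly ell bits
            y[i] ^= 1
        mutants.append(y)
    xp = max(mutants, key=onemax)                      # best mutant
    children = [[b if rng.random() < c else a for a, b in zip(x, xp)]
                for _ in range(round(lam))]            # biased crossover phase
    y = max(children, key=onemax)
    return y if onemax(y) >= onemax(x) else x          # elitist selection

rng, n = random.Random(0), 50
x, gens = [0] * n, 0
while onemax(x) < n and gens < 10_000:
    lam = max(1.0, (n / (n - onemax(x))) ** 0.5)       # theory-derived policy
    x, gens = one_generation(x, lam, rng), gens + 1
print("fitness:", onemax(x), "generations:", gens)
```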

Our goal is to produce methods for observational causal inference that are auditable, easy to troubleshoot, yield accurate treatment effect estimates, and scale to high-dimensional data. We describe an almost-exact matching approach that achieves these goals by (i) learning a distance metric via outcome modeling, (ii) creating matched groups using the distance metric, and (iii) using the matched groups to estimate treatment effects. Our proposed method uses variable-importance measurements to construct the distance metric, making it a flexible method that can be adapted to various applications. Concentrating on scalability in the number of potential confounders, we operationalize our approach with LASSO. We derive performance guarantees for settings where LASSO outcome modeling consistently identifies all confounders (importantly, without requiring the linear model to be correctly specified). We also provide experimental results demonstrating the auditability of matches, as well as extensions to more general nonparametric outcome modeling.
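A hedged sketch of the three-step pipeline on synthetic data: LASSO coefficients learned on the control outcomes supply variable-importance weights for the distance metric, each treated unit is matched to its nearest control in that metric, and matched differences estimate the effect. The data-generating process, the regularization strength, and the 1-nearest-neighbor ATT estimator are illustrative choices, not the paper's exact procedure.

```python
# Outcome-model-weighted matching for treatment effect estimation (illustrative).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, tau = 500, 20, 2.0                          # tau: true effect, for the demo
X = rng.normal(size=(n, d))
T = rng.integers(0, 2, size=n)                    # binary treatment
Y = X[:, 0] + 0.5 * X[:, 1] + tau * T + rng.normal(scale=0.1, size=n)

# (i) variable-importance weights from a LASSO outcome model on controls
w = np.abs(Lasso(alpha=0.05).fit(X[T == 0], Y[T == 0]).coef_)

# (ii) match each treated unit to its nearest control in the weighted metric
controls = np.flatnonzero(T == 0)
def nearest_control(i):
    return controls[np.argmin(((X[controls] - X[i]) ** 2 * w).sum(axis=1))]

# (iii) average treatment effect on the treated from matched differences
att = np.mean([Y[i] - Y[nearest_control(i)] for i in np.flatnonzero(T == 1)])
print(f"estimated ATT: {att:.2f} (truth {tau})")
```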

A perfect matching cut is a perfect matching that is also a cutset, or, equivalently, a perfect matching that contains an even number of edges of every cycle. The corresponding algorithmic problem, Perfect Matching Cut, is known to be NP-complete in subcubic bipartite graphs [Le & Telle, TCS '22], but its complexity was open in planar graphs and in cubic graphs. We settle both questions at once by showing that Perfect Matching Cut is NP-complete in 3-connected cubic bipartite planar graphs, also known as Barnette graphs. Prior to our work, among problems whose input is solely an undirected graph, only Distance-2 4-Coloring was known to be NP-complete in Barnette graphs. Notably, Hamiltonian Cycle would only join this private club if Barnette's conjecture were refuted.
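The cut condition can be verified in linear time: $M$ is an edge cut exactly when the vertices admit a 2-coloring whose bichromatic edges are precisely the edges of $M$ (non-matching edges force equal colors; matching edges force opposite colors). The sketch below propagates these constraints by BFS; it is a verifier for a given matching, not a solver for the NP-complete problem, and it assumes $M \subseteq E$.

```python
# Verify that M is a perfect matching cut of the graph (V = 0..n-1, E = edges).
from collections import deque

def is_perfect_matching_cut(n, edges, M):
    deg = [0] * n
    for u, v in M:
        deg[u] += 1
        deg[v] += 1
    if any(d != 1 for d in deg):            # M must cover every vertex exactly once
        return False
    Mset = {frozenset(e) for e in M}
    adj = [[] for _ in range(n)]
    for u, v in edges:
        flip = frozenset((u, v)) in Mset    # matching edge: endpoints differ in color
        adj[u].append((v, flip))
        adj[v].append((u, flip))
    color = [None] * n
    for s in range(n):                      # BFS-propagate the coloring constraints
        if color[s] is not None:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v, flip in adj[u]:
                if color[v] is None:
                    color[v] = color[u] ^ flip
                    queue.append(v)
                elif color[v] != color[u] ^ flip:
                    return False            # inconsistent: M is not a cut
    return True

# In the 4-cycle, either pair of opposite edges is a perfect matching cut.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_perfect_matching_cut(4, edges, [(0, 1), (2, 3)]))  # True
print(is_perfect_matching_cut(4, edges, [(0, 1), (1, 2)]))  # False (not a matching)
```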

Classic algorithms and machine learning systems like neural networks are both abundant in everyday life. While classic computer science algorithms are suitable for precise execution of exactly defined tasks such as finding the shortest path in a large graph, neural networks allow learning from data to predict the most likely answer in more complex tasks such as image classification, which cannot be reduced to an exact algorithm. To get the best of both worlds, this thesis explores combining both concepts, leading to more robust, better-performing, more interpretable, more computationally efficient, and more data-efficient architectures. The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm. When integrating an algorithm into a neural architecture, it is important that the algorithm be differentiable, so that the architecture can be trained end-to-end and gradients can be propagated back through the algorithm in a meaningful way. To make algorithms differentiable, this thesis proposes a general method for continuously relaxing algorithms by perturbing variables and approximating the expectation value in closed form, i.e., without sampling. In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable renderers, and differentiable logic gate networks. Finally, this thesis presents alternative training strategies for learning with algorithms.
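As a taste of these relaxations, here is a hedged sketch of the differentiable sorting network idea: each hard comparator (a min/max swap) is replaced by a sigmoid-weighted soft swap, so gradients flow through an entire odd-even transposition network. The inverse temperature `beta` is an assumed knob; the thesis's actual relaxations are more refined.

```python
# Differentiable sorting via sigmoid-relaxed comparators (illustrative sketch).
import torch

def soft_compare_swap(a, b, beta=4.0):
    s = torch.sigmoid(beta * (a - b))        # ~1 if a > b, ~0 if a < b
    return (1 - s) * a + s * b, s * a + (1 - s) * b   # soft min, soft max

def soft_sort(x, beta=4.0):
    vals, n = list(x), len(x)
    for rnd in range(n):                     # odd-even transposition network
        for i in range(rnd % 2, n - 1, 2):
            vals[i], vals[i + 1] = soft_compare_swap(vals[i], vals[i + 1], beta)
    return torch.stack(vals)

x = torch.tensor([3.0, 1.0, 2.0], requires_grad=True)
y = soft_sort(x)                             # approximately [1, 2, 3]
(y * torch.tensor([1.0, 2.0, 3.0])).sum().backward()
print(y.detach(), x.grad)                    # gradients flow through the "sort"
```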

Identifying equivalent entities across multi-source Knowledge Graphs (KGs) is the pivotal step in KG integration, also known as \emph{entity alignment} (EA). However, most existing EA methods are inefficient and scale poorly. A recent summary points out that some of them even require several days to process a dataset containing 200,000 nodes (DWY100K). We believe over-complex graph encoders and inefficient negative sampling strategies are the two main reasons. In this paper, we propose a novel KG encoder, the Dual Attention Matching Network (Dual-AMN), which not only models both intra-graph and cross-graph information effectively but also greatly reduces computational complexity. Furthermore, we propose the Normalized Hard Sample Mining Loss to smoothly select hard negative samples with reduced loss shift. Experimental results on widely used public datasets indicate that our method achieves both high accuracy and high efficiency. On DWY100K, the whole running process of our method finishes in 1,100 seconds, at least 10× faster than previous work. Our method also outperforms previous works across all datasets, with improvements in Hits@1 and MRR ranging from 6% to 13%.
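To convey the flavor of smooth hard-sample mining without reproducing Dual-AMN's exact normalization, the hedged sketch below replaces the hard maximum over negatives with a temperature-scaled LogSumExp, so every negative contributes with a difficulty-dependent weight and the loss varies smoothly between batches (reducing the loss shift mentioned above). The margin and temperature values are illustrative assumptions.

```python
# Generic smooth hard-negative mining via LogSumExp (illustrative stand-in).
import torch

def smooth_hard_mining_loss(pos_sim, neg_sim, margin=0.5, tau=10.0):
    """pos_sim: (B,) similarity to the aligned entity; neg_sim: (B, K)."""
    logits = tau * (neg_sim - pos_sim.unsqueeze(1) + margin)   # margin violations
    # soft maximum over negatives; recovers hard mining as tau -> infinity
    return torch.nn.functional.softplus(torch.logsumexp(logits, dim=1)).mean() / tau

pos = torch.tensor([0.9, 0.8])                     # two anchor-positive pairs
neg = torch.tensor([[0.2, 0.7], [0.1, 0.3]])       # two negatives each
print(smooth_hard_mining_loss(pos, neg))
```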

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion using a strongly convex regularizer. This allows us to relax both the optimal value and the optimal solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators and relate them to inference in graphical models. We derive two particular instantiations of our framework: a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
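Concretely, with a negative-entropy regularizer the smoothed max becomes the log-sum-exp $\max_{\Omega}(x) = \gamma \log \sum_i e^{x_i/\gamma}$, whose gradient is the softmax. The hedged numpy sketch below plugs it into the Viterbi recursion; the toy scores are illustrative.

```python
# Smoothed max and a smoothed Viterbi recursion (illustrative numpy sketch).
import numpy as np

def smoothed_max(x, gamma):
    m = x.max()
    val = gamma * np.log(np.exp((x - m) / gamma).sum()) + m   # stable log-sum-exp
    return val, np.exp((x - val) / gamma)                     # value, softmax gradient

def smoothed_viterbi(theta, trans, gamma):
    """theta: (T, S) emission scores; trans: (S, S) transition scores."""
    v = theta[0].copy()
    for t in range(1, len(theta)):
        v = np.array([theta[t, s] + smoothed_max(v + trans[:, s], gamma)[0]
                      for s in range(theta.shape[1])])
    return smoothed_max(v, gamma)[0]

theta = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
trans = np.zeros((2, 2))
print(smoothed_viterbi(theta, trans, 0.1))   # ~4.0, close to the hard Viterbi score
print(smoothed_viterbi(theta, trans, 1.0))   # smoother, strictly larger value
```

Backpropagating the softmax gradients through this recursion yields expected alignments rather than a single argmax path, which is what makes the operator usable as a neural network layer.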
