
This paper presents an $O^{*}(1.42^{n})$ time algorithm for the Maximum Cut problem on split graphs, along with a subexponential time algorithm for its decision variant.
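As a point of reference for the problem itself (not the paper's $O^{*}(1.42^{n})$ algorithm, which is not reproduced here), the sketch below brute-forces Max Cut on a small split graph in $O^{*}(2^{n})$ time; the function name and example graph are illustrative choices.

```python
from itertools import product

def max_cut_value(n, edges):
    """Exhaustive O*(2^n) Max Cut on a small graph given as an edge list.

    Illustrates the problem statement only; the paper's O*(1.42^n)
    algorithm for split graphs is not reproduced here.
    """
    best = 0
    for side in product((0, 1), repeat=n):          # all bipartitions
        cut = sum(1 for u, v in edges if side[u] != side[v])
        best = max(best, cut)
    return best

# A tiny split graph: clique {0, 1, 2} plus independent vertex 3 attached to 0.
edges = [(0, 1), (0, 2), (1, 2), (0, 3)]
print(max_cut_value(4, edges))  # -> 3, e.g. {0} vs {1, 2, 3}
```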

Related Content

Flow sparsification is a classic graph compression technique which, given a capacitated graph $G$ on $k$ terminals, aims to construct another capacitated graph $H$, called a \emph{flow sparsifier}, that preserves, either exactly or approximately, every \emph{multicommodity flow} between terminals (ideally with size bounded by a small function of $k$). Cut sparsifiers are a restricted variant of flow sparsifiers which are only required to preserve maximum flows between bipartitions of the terminal set. It is known that exact cut sparsifiers require $2^{\Omega(k)}$ many vertices [Krauthgamer and Rika, SODA 2013], with the hard instances being \emph{quasi-bipartite} graphs, where there are no edges between non-terminals. On the other hand, it has been shown recently that exact (or even $(1+\varepsilon)$-approximate) flow sparsifiers on networks with just 6 terminals require unbounded size [Krauthgamer and Mosenzon, SODA 2023; Chen and Tan, SODA 2024]. In this paper, we construct exact flow sparsifiers of size $3^{k^{3}}$ and exact cut sparsifiers of size $2^{k^2}$ for quasi-bipartite graphs. In particular, the flow sparsifiers are contraction-based, that is, they are obtained from the input graph by (vertex) contraction operations. Our main contribution is a new technique to construct sparsifiers that exploits connections to polyhedral geometry, and that can be generalized to graphs with a small separator that separates the graph into small components. We also give an improved reduction theorem for graphs of bounded treewidth [Andoni et al., SODA 2011], implying a flow sparsifier of size $O(k\cdot w)$ and quality $O\bigl(\frac{\log w}{\log \log w}\bigr)$, where $w$ is the treewidth.
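To make the cut-sparsifier guarantee concrete, here is a minimal sketch that verifies it on toy instances, assuming networkx with a `capacity` edge attribute; the helper names are our own. It checks that every bipartition of the terminal set induces the same minimum terminal cut (computed as a max flow via a super-source/super-sink) in $G$ and in the candidate sparsifier.

```python
from itertools import combinations
import networkx as nx

def terminal_cut(G, S, T):
    """Max flow (= min cut) separating terminal subset S from T \\ S,
    via a super-source/super-sink construction. Assumes the node names
    'src' and 'snk' do not occur in G."""
    H = G.copy()
    for t in T:
        if t in S:
            H.add_edge("src", t, capacity=float("inf"))
        else:
            H.add_edge(t, "snk", capacity=float("inf"))
    value, _ = nx.maximum_flow(H, "src", "snk")
    return value

def is_exact_cut_sparsifier(G, Gp, T):
    """Check that Gp preserves every terminal min cut of G exactly.
    (Each bipartition is visited twice, once per side; harmless here.)"""
    for k in range(1, len(T)):
        for S in combinations(T, k):
            if terminal_cut(G, set(S), T) != terminal_cut(Gp, set(S), T):
                return False
    return True
```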

We consider intersection graphs of disks of radius $r$ in the hyperbolic plane. Unlike the Euclidean setting, these graph classes are different for different values of $r$, where very small $r$ corresponds to an almost-Euclidean setting and $r \in \Omega(\log n)$ corresponds to a firmly hyperbolic setting. We observe that larger values of $r$ create simpler graph classes, at least in terms of separators and the computational complexity of the \textsc{Independent Set} problem. First, we show that intersection graphs of disks of radius $r$ in the hyperbolic plane can be separated with $\mathcal{O}((1+1/r)\log n)$ cliques in a balanced manner. Our second structural insight concerns Delaunay complexes in the hyperbolic plane and may be of independent interest. We show that for any set $S$ of $n$ points with pairwise distance at least $2r$ in the hyperbolic plane, the corresponding Delaunay complex has outerplanarity $1+\mathcal{O}(\frac{\log n}{r})$, which implies a similar bound on the balanced separators and treewidth of such Delaunay complexes. Using this outerplanarity (and treewidth) bound, we prove that \textsc{Independent Set} can be solved in $n^{\mathcal{O}(1+\frac{\log n}{r})}$ time. The algorithm performs dynamic programming over a sphere cut decomposition that is not known in advance, as it depends on the solution itself. The resulting algorithm is a far-reaching generalization of a result of Kisfaludi-Bak (SODA 2020), and it is tight under the Exponential Time Hypothesis. In particular, \textsc{Independent Set} is polynomial-time solvable in the firmly hyperbolic setting of $r\in \Omega(\log n)$. Finally, in the case when the disks have ply (depth) at most $\ell$, we give a PTAS for \textsc{Maximum Independent Set} that has only quasi-polynomial dependence on $1/\varepsilon$ and $\ell$. Our PTAS is a further generalization of our exact algorithm.
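As a small illustration of the setting (not of the paper's algorithms), the sketch below builds a hyperbolic disk intersection graph in the Poincaré disk model, where two radius-$r$ disks intersect exactly when their centers are at hyperbolic distance at most $2r$; the function names are our own.

```python
import math

def hyperbolic_distance(p, q):
    """Distance between points of the open unit disk in the Poincare model:
    d(p, q) = arccosh(1 + 2|p-q|^2 / ((1-|p|^2)(1-|q|^2)))."""
    dpq2 = (p[0] - q[0])**2 + (p[1] - q[1])**2
    denom = (1 - p[0]**2 - p[1]**2) * (1 - q[0]**2 - q[1]**2)
    return math.acosh(1 + 2 * dpq2 / denom)

def disk_intersection_graph(centers, r):
    """Edge list of the intersection graph of radius-r hyperbolic disks:
    two disks intersect iff their centers are at distance <= 2r."""
    n = len(centers)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if hyperbolic_distance(centers[i], centers[j]) <= 2 * r]
```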

We study a general factor analysis framework where the $n$-by-$p$ data matrix is assumed to follow a general exponential family distribution entry-wise. While this model framework has been proposed before, we further relax its distributional assumption by using a quasi-likelihood setup. By parameterizing the mean-variance relationship on data entries, we additionally introduce a dispersion parameter and entry-wise weights to model large variations and missing values. The resulting model is thus not only robust to distribution misspecification but also more flexible and able to capture non-Gaussian covariance structures of the data matrix. Our main focus is on efficient computational approaches for performing the factor analysis. Previous modeling frameworks rely on simulated maximum likelihood (SML) to find the factorization solution, but this method was shown to lead to asymptotic bias when the simulated sample size grows slower than the square root of the sample size $n$, ruling out its practical use for data matrices with large $n$. Borrowing from expectation-maximization (EM) and stochastic gradient descent (SGD), we investigate three estimation procedures based on iterative factorization updates. Our proposed solutions do not suffer from this asymptotic bias and scale even better to large matrix factorizations, with error $O(1/p)$. To support our findings, we conduct simulation experiments and discuss the method's application in three case studies.
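The following is a minimal sketch of the iterative-factorization idea with entry-wise weights and missing values, using a Gaussian working likelihood in place of the paper's general quasi-likelihood; all names and step sizes are illustrative, and this is not the authors' estimator.

```python
import numpy as np

def weighted_factorization(Y, W, rank, lr=0.01, iters=500, seed=0):
    """Minimize sum_ij W_ij * (Y_ij - (L @ F.T)_ij)^2 by gradient descent.

    W_ij = 0 marks a missing entry; unequal weights can absorb entry-wise
    variance differences. A Gaussian working likelihood stands in for the
    paper's general quasi-likelihood setup.
    """
    rng = np.random.default_rng(seed)
    n, p = Y.shape
    L = 0.1 * rng.standard_normal((n, rank))     # factor scores
    F = 0.1 * rng.standard_normal((p, rank))     # loadings
    Y0 = np.where(W > 0, Y, 0.0)                 # neutralize missing entries
    for _ in range(iters):
        R = W * (L @ F.T - Y0)                   # weighted residuals
        L, F = L - lr * R @ F, F - lr * R.T @ L  # simultaneous updates
    return L, F
```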

This paper addresses the problem of finding a minimum-cost $m$-state Markov chain $(S_0,\ldots,S_{m-1})$ from a large set of chains. The chains studied have a reward associated with each state. The cost of a chain is its "gain", i.e., its average reward under its stationary distribution. Specifically, for each $k=0,\ldots,m-1$ there is a known set ${\mathbb S}_k$ of type-$k$ states. A permissible Markov chain contains exactly one state of each type; the problem is to find a minimum-cost permissible chain. The original motivation was to find a cheapest binary AIFV-$m$ lossless code on a source alphabet of size $n$. Such a code is an $m$-tuple of trees, in which each tree can be viewed as a Markov chain state. This formulation was later used to address other problems in lossless compression. The known solution techniques for finding minimum-cost Markov chains were iterative and ran in exponential time. This paper shows how to map every possible type-$k$ state into a type-$k$ hyperplane and then define a "Markov Chain Polytope" as the lower envelope of all such hyperplanes. Finding a minimum-cost Markov chain is then shown to be equivalent to finding a "highest" point on this polytope. The local optimization procedures used in the previous iterative algorithms are shown to be separation oracles for this polytope. Since these oracles often run in polynomial time, an application of the Ellipsoid method immediately yields polynomial-time algorithms for these problems.
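To fix ideas, the sketch below computes the "gain" of a single fixed chain, i.e., the objective that a permissible chain is chosen to minimize; it does not implement the polytope construction. NumPy is assumed and the function name is ours.

```python
import numpy as np

def gain(P, rewards):
    """Average reward of an ergodic Markov chain under its stationary
    distribution: solve pi P = pi with sum(pi) = 1, return pi . rewards."""
    m = P.shape[0]
    A = np.vstack([P.T - np.eye(m), np.ones(m)])  # stationarity + normalization
    b = np.zeros(m + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(pi @ rewards)

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])             # a 2-state chain
print(gain(P, np.array([0.0, 1.0])))   # pi = (5/6, 1/6) -> gain 1/6
```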

We develop a high-order reconstructed discontinuous approximation (RDA) method for solving a mixed formulation of the quad-curl problem in two and three dimensions. This mixed formulation is established by adding an auxiliary variable to control the divergence of the field. The approximation space for the original variables is constructed by patch reconstruction with exactly one degree of freedom per element in each dimension, and the auxiliary variable is approximated by the piecewise constant space. We prove the optimal convergence rate in the energy norm and a suboptimal convergence rate in the $L^2$ norm via a duality argument. Numerical results are provided to verify the theoretical analysis.
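For orientation, one common strong form of the quad-curl model problem with a divergence constraint is sketched below; the paper's precise mixed formulation, including the role of its auxiliary variable, may differ in detail.

```latex
% A sketch of the standard quad-curl model problem; the paper's mixed
% formulation adds an auxiliary variable to control the divergence.
\begin{aligned}
(\nabla\times)^4 \mathbf{u} &= \mathbf{f}, &
\nabla\cdot\mathbf{u} &= 0 && \text{in } \Omega,\\
\mathbf{u}\times\mathbf{n} &= \mathbf{0}, &
(\nabla\times\mathbf{u})\times\mathbf{n} &= \mathbf{0}
  && \text{on } \partial\Omega.
\end{aligned}
```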

We study the amount of maximal extractable value (MEV) captured by validators, as a function of searcher competition, in blockchains with competitive block building markets such as Ethereum. We argue that the core is a suitable solution concept in this context, as it makes robust predictions that are independent of implementation details or specific mechanisms chosen. We characterize how much value validators extract in the core and quantify the surplus share of validators as a function of searcher competition. Searchers can obtain at most the marginal value increase of the winning block relative to the best block that can be built without their bundles. Dually, this gives a lower bound on the value extracted by the validator. If arbitrages are easy to find and many searchers find similar bundles, the validator gets paid all value almost surely, while searchers can capture most value if there is little searcher competition per arbitrage. For the case of passive block proposers, we moreover study mechanisms that implement core allocations in dominant strategies, and find that for submodular value there is a unique dominant-strategy incentive compatible core-selecting mechanism, which gives each searcher exactly their marginal value contribution to the winning block. We validate our theoretical predictions empirically with aggregate bundle data and find a significant positive relation between the number of backruns submitted for the same opportunity and the median value the proposer captures from that opportunity.
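A toy computation of the core bounds, under a deliberately simple model (one bundle per searcher, additive values, blocks of at most $k$ bundles; all names are illustrative): each searcher's payoff is capped by the marginal value of the winning block over the best block built without them, and the validator keeps at least the remainder.

```python
from itertools import combinations

def best_block(bundles, exclude=frozenset(), k=2):
    """Value of the best block of at most k bundles, ignoring `exclude`.
    `bundles` maps searcher id -> bundle value (a simple additive toy model)."""
    avail = [v for s, v in bundles.items() if s not in exclude]
    return max(sum(c) for r in range(k + 1) for c in combinations(avail, r))

bundles = {"A": 10.0, "B": 9.0, "C": 4.0}   # three competing searchers
v_star = best_block(bundles)                 # winning block {A, B} -> 19
for s in bundles:
    marginal = v_star - best_block(bundles, exclude={s})
    print(s, "can capture at most", marginal)
# Validator keeps at least v_star minus the sum of all marginals.
```

Here searcher C is never needed for the winning block, so its cap is zero, while the validator is guaranteed at least $19 - 6 - 5 = 8$.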

Retrieval-Augmented Generation (RAG) merges retrieval methods with deep learning advancements to address the static limitations of large language models (LLMs) by enabling the dynamic integration of up-to-date external information. This methodology, which focuses primarily on the text domain, offers a cost-effective way to counter the generation of plausible but incorrect responses by LLMs, thereby enhancing the accuracy and reliability of their outputs through the use of real-world data. As RAG grows in complexity and incorporates multiple concepts that can influence its performance, this paper organizes the RAG paradigm into four categories: pre-retrieval, retrieval, post-retrieval, and generation, offering a detailed perspective from the retrieval viewpoint. It outlines RAG's evolution and discusses the field's progression through the analysis of significant studies. Additionally, the paper introduces evaluation methods for RAG, addressing the challenges faced and proposing future research directions. By offering an organized framework and categorization, the study aims to consolidate existing research on RAG, clarify its technological underpinnings, and highlight its potential to broaden the adaptability and applications of LLMs.
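A schematic of the four-stage decomposition the survey uses, with placeholder helpers; `index.search` and `llm.generate` are assumed interfaces, not any specific library's API.

```python
def expand_query(q):          # placeholder pre-retrieval query rewrite
    return q

def rerank(q, docs):          # placeholder post-retrieval reranker
    return sorted(docs, key=lambda d: d["score"], reverse=True)

def rag_answer(query, index, llm, top_k=20, keep=5):
    """Skeleton of the survey's four-stage RAG decomposition.
    `index` and `llm` are assumed interfaces, not a real library API."""
    q = expand_query(query)                      # 1. pre-retrieval
    candidates = index.search(q, top_k=top_k)    # 2. retrieval
    best = rerank(q, candidates)[:keep]          # 3. post-retrieval
    context = "\n".join(d["text"] for d in best)
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return llm.generate(prompt)                  # 4. generation
```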

Graph Neural Networks (GNNs) have recently become increasingly popular due to their ability to learn complex systems of relations or interactions arising in a broad spectrum of problems ranging from biology and particle physics to social networks and recommendation systems. Despite the plethora of different models for deep learning on graphs, few approaches have been proposed thus far for dealing with graphs that present some sort of dynamic nature (e.g. evolving features or connectivity over time). In this paper, we present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events. Thanks to a novel combination of memory modules and graph-based operators, TGNs significantly outperform previous approaches while also being more computationally efficient. We furthermore show that several previous models for learning on dynamic graphs can be cast as specific instances of our framework. We perform a detailed ablation study of the different components of our framework and devise the best configuration, which achieves state-of-the-art performance on several transductive and inductive prediction tasks for dynamic graphs.
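A minimal sketch of the memory component only, assuming PyTorch: per-node memory vectors are updated with a GRU cell on each timed event. It uses the raw timestamp rather than TGN's learned time encoding and omits the message aggregation and graph attention embedding modules, so it is an illustration of the idea, not the authors' model.

```python
import torch
import torch.nn as nn

class NodeMemory(nn.Module):
    """Sketch of TGN-style memory: each interaction event
    (src, dst, t, feat) updates both endpoints' memory vectors."""
    def __init__(self, num_nodes, mem_dim, feat_dim):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_nodes, mem_dim))
        self.cell = nn.GRUCell(feat_dim + mem_dim + 1, mem_dim)

    @torch.no_grad()  # sketch only: updates in place, outside autograd
    def update(self, src: int, dst: int, t: float, feat: torch.Tensor):
        t_feat = torch.tensor([t])
        # Messages are built from the *other* endpoint's current memory.
        msg_src = torch.cat([feat, self.memory[dst], t_feat])[None]
        msg_dst = torch.cat([feat, self.memory[src], t_feat])[None]
        self.memory[src] = self.cell(msg_src, self.memory[src][None])[0]
        self.memory[dst] = self.cell(msg_dst, self.memory[dst][None])[0]
```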

Named entity recognition (NER) is the task of identifying text spans that mention named entities and classifying them into predefined categories such as person, location, and organization. NER serves as the basis for a variety of natural language applications such as question answering, text summarization, and machine translation. Although early NER systems were successful in producing decent recognition accuracy, they often required considerable human effort to carefully design rules or features. In recent years, deep learning, empowered by continuous real-valued vector representations and semantic composition through nonlinear processing, has been employed in NER systems, yielding state-of-the-art performance. In this paper, we provide a comprehensive review of existing deep learning techniques for NER. We first introduce NER resources, including tagged NER corpora and off-the-shelf NER tools. Then, we systematically categorize existing works based on a taxonomy along three axes: distributed representations for input, context encoder, and tag decoder. Next, we survey the most representative recent applications of deep learning techniques in new NER problem settings and applications. Finally, we present readers with the challenges faced by NER systems and outline future directions in this area.
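To make the three-axis taxonomy concrete, here is one minimal point in that design space, assuming PyTorch: word embeddings as the distributed input representation, a BiLSTM as the context encoder, and a per-token softmax as the tag decoder (a CRF decoder being the common stronger choice). The class is illustrative, not drawn from the survey.

```python
import torch.nn as nn

class SimpleNER(nn.Module):
    """Embeddings (input axis) -> BiLSTM (encoder axis) -> per-token
    linear classifier (decoder axis)."""
    def __init__(self, vocab, dim, hidden, num_tags):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.enc = nn.LSTM(dim, hidden, bidirectional=True,
                           batch_first=True)
        self.dec = nn.Linear(2 * hidden, num_tags)

    def forward(self, token_ids):            # (batch, seq_len)
        h, _ = self.enc(self.emb(token_ids))
        return self.dec(h)                   # (batch, seq_len, num_tags)
```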

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This makes it possible to relax both the optimal value and the solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
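With the negative-entropy regularizer, the smoothed max is the log-sum-exp, whose gradient is a softmax; the sketch below applies it inside the Viterbi recursion, so the resulting value is differentiable in the unary and transition scores. NumPy is assumed and the function names are ours.

```python
import numpy as np

def smooth_max(z, gamma):
    """Entropy-regularized max: gamma * logsumexp(z / gamma).
    Recovers the hard max as gamma -> 0; its gradient is a softmax."""
    m = z.max()
    return m + gamma * np.log(np.exp((z - m) / gamma).sum())

def smoothed_viterbi_value(theta, A, gamma=1.0):
    """Smoothed Viterbi score: theta[t, j] are unary scores, A[i, j]
    transition scores. Replacing max by smooth_max makes the DP value
    differentiable in (theta, A); gamma -> 0 gives the classic score."""
    v = theta[0].copy()
    for t in range(1, len(theta)):
        v = theta[t] + np.array([smooth_max(v + A[:, j], gamma)
                                 for j in range(theta.shape[1])])
    return smooth_max(v, gamma)
```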
