
We present deterministic algorithms for maintaining a $(3/2 + \epsilon)$- and a $(2 + \epsilon)$-approximate maximum matching in a fully dynamic graph with worst-case update times $\hat{O}(\sqrt{n})$ and $\tilde{O}(1)$, respectively. The fastest previously known deterministic worst-case update times for approximation ratios $(2 - \delta)$ (for any $\delta > 0$) and $(2 + \epsilon)$ were both shown by Roghani et al. [2021], namely $O(n^{3/4})$ and $O_\epsilon(\sqrt{n})$, respectively. We close the gap between worst-case and amortized algorithms for these two approximation ratios, as the best deterministic amortized update times for the problem are $O_\epsilon(\sqrt{n})$ and $\tilde{O}(1)$, shown by Bernstein and Stein [SODA'2021] and Bhattacharya and Kiss [ICALP'2021], respectively. To achieve both results, we explicitly state a method used implicitly in Nanongkai and Saranurak [STOC'2017] and Bernstein et al. [arXiv'2020] that transforms dynamic algorithms capable of processing the input in batches into dynamic algorithms with worst-case update time. \textbf{Independent work:} Independently and concurrently with our work, Grandoni et al. [arXiv'2021] presented a fully dynamic algorithm for maintaining a $(3/2 + \epsilon)$-approximate maximum matching with deterministic worst-case update time $O_\epsilon(\sqrt{n})$.
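The batch-to-worst-case transformation invoked above can be illustrated with a toy work schedule (our own sketch, not the paper's construction): rather than executing a periodic rebuild of cost $B$ within a single update, the rebuild is enqueued as background work and drained one unit per update, which replaces the amortized spike with a constant worst-case bound.

```python
def per_update_work(n_updates, batch, spread):
    """Work performed at each update when a rebuild job of cost `batch` arrives
    every `batch` updates. spread=False runs the whole job immediately (an
    amortized spike of size `batch`); spread=True enqueues it and drains one
    unit per update, so worst-case work per update is bounded by 2."""
    pending, work = 0, []
    for t in range(1, n_updates + 1):
        w = 1                      # cost of handling the update itself
        if t % batch == 0:
            if spread:
                pending += batch   # defer the rebuild as background work
            else:
                w += batch         # do the whole rebuild now
        if pending:
            pending -= 1           # one unit of deferred rebuild work
            w += 1
        work.append(w)
    return work
```

Both schedules perform the same rebuilds, but the spread variant caps every individual update at 2 units of work instead of exhibiting $B + 1$ spikes.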


Motivated by the serious problem that hospitals in rural areas suffer from a shortage of residents, we study the Hospitals/Residents model in which hospitals are associated with lower quotas, and the objective is to satisfy these quotas as much as possible. When preference lists are strict, the number of residents assigned to each hospital is the same in every stable matching because of the well-known rural hospitals theorem; thus there is no room for algorithmic intervention. However, when ties are introduced to preference lists, this no longer applies, because the number of assigned residents may vary across stable matchings. In this paper, we formulate an optimization problem to find a stable matching with the maximum total satisfaction ratio for lower quotas. We first investigate how the total satisfaction ratio varies over choices of stable matchings in four natural scenarios and provide the exact values of these maximum gaps. Subsequently, we propose a strategy-proof approximation algorithm for our problem; in one scenario it solves the problem optimally, and in the other three scenarios, which are NP-hard, it yields a better approximation factor than that of a naive tie-breaking method. Finally, we show inapproximability results for the three NP-hard scenarios.
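For readers unfamiliar with the model, the classical resident-proposing deferred acceptance mechanism with upper quotas (the baseline mechanism underlying the rural hospitals theorem) can be sketched as follows. This is a generic illustration on a made-up instance, not the paper's strategy-proof approximation algorithm, and it assumes every proposing resident appears on each listed hospital's preference list.

```python
from collections import deque

def deferred_acceptance(res_pref, hosp_pref, quota):
    """Resident-proposing deferred acceptance with hospital (upper) quotas."""
    rank = {h: {r: i for i, r in enumerate(p)} for h, p in hosp_pref.items()}
    nxt = {r: 0 for r in res_pref}     # index of the next hospital r proposes to
    held = {h: [] for h in hosp_pref}  # residents tentatively held, best first
    free = deque(res_pref)
    while free:
        r = free.popleft()
        if nxt[r] == len(res_pref[r]):
            continue                   # r exhausted their list; stays unassigned
        h = res_pref[r][nxt[r]]
        nxt[r] += 1
        held[h].append(r)
        held[h].sort(key=lambda x: rank[h][x])
        if len(held[h]) > quota[h]:
            free.append(held[h].pop())  # reject the least-preferred holder
    return {r: h for h, rs in held.items() for r in rs}

# Toy instance: h1 has one seat, h2 has two; h1 prefers r2, so r1 ends up at h2.
match = deferred_acceptance(
    {'r1': ['h1', 'h2'], 'r2': ['h1', 'h2'], 'r3': ['h2']},
    {'h1': ['r2', 'r1'], 'h2': ['r1', 'r2', 'r3']},
    {'h1': 1, 'h2': 2},
)
```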

In this paper, we introduce a new class of codes, called weighted parity-check codes, in which each parity-check bit has a weight indicating its likelihood of being one (instead of fixing every parity-check bit to zero). The construction applies to a wide range of settings, e.g., asymmetric channels, channels with state and/or cost constraints, and the Wyner-Ziv problem, and provably achieves capacity. For the channel-with-state (Gelfand-Pinsker) setting, the proposed coding scheme has two advantages over the nested linear code. First, it achieves the capacity of any channel with state (e.g., asymmetric channels). Second, simulation results show that it attains a smaller error rate than the nested linear code.

In this paper, we propose a new scalar linear coding scheme for the index coding problem called the update-based maximum column distance (UMCD) coding scheme. The central idea is that, in each transmission, messages are coded so that one of the unsatisfied receivers with the minimum size of side information is instantaneously eliminated from the set of unsatisfied receivers. One main contribution of the paper is to prove that the other receivers satisfied by each transmission can be identified with a polynomial-time algorithm for the well-known maximum-cardinality matching problem in graph theory. This determines the total number of transmissions without knowing the coding coefficients. Once this number and the messages to transmit in each round are determined, we propose a method to choose all coding coefficients from a sufficiently large finite field. We provide concrete instances in which the proposed UMCD coding scheme achieves better broadcast performance than the most efficient existing coding schemes, including the recursive scheme (Arbabjolfaei and Kim, 2014) and the interlinked-cycle cover (ICC) scheme (Thapa et al., 2017). We prove that the UMCD coding scheme performs at least as well as the MDS coding scheme in terms of broadcast rate. By characterizing two classes of index coding instances, we show that the gap between the broadcast rates of the recursive and ICC schemes and that of the UMCD scheme grows linearly with the number of messages. Finally, we extend the UMCD coding scheme to a vector version by applying it as a basic coding block to solve subinstances.
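The maximum-cardinality bipartite matching subroutine that the identification step relies on is solvable in polynomial time via augmenting paths. A minimal sketch of Kuhn's algorithm on a made-up bipartite instance (the paper's specific reduction from receivers to a matching instance is not reproduced here):

```python
def max_bipartite_matching(adj, n_left):
    """Kuhn's augmenting-path algorithm for maximum-cardinality bipartite
    matching. adj[v] lists the right-side neighbours of left vertex v."""
    match_right = {}  # right vertex -> left vertex currently matched to it

    def try_augment(v, visited):
        for u in adj[v]:
            if u not in visited:
                visited.add(u)
                # u is free, or its current partner can be re-matched elsewhere
                if u not in match_right or try_augment(match_right[u], visited):
                    match_right[u] = v
                    return True
        return False

    return sum(try_augment(v, set()) for v in range(n_left))

# 3 left vertices share only 2 right vertices: the maximum matching has size 2.
size = max_bipartite_matching({0: [0, 1], 1: [0], 2: [1]}, 3)
```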

The Partitioning Min-Max Weighted Matching (PMMWM) problem is an NP-hard problem that combines partitioning a group of vertices of a bipartite graph into disjoint subsets of limited size with the classical Min-Max Weighted Matching (MMWM) problem. Kress et al. proposed this problem in 2015 and provided several algorithms, among which MP$_{\text{LS}}$ is the state of the art. In this work, we identify a time bottleneck in the matching phase of MP$_{\text{LS}}$. By eliminating redundant operations across the matching iterations, we propose an efficient algorithm called MP$_{\text{KM-M}}$ that greatly speeds up MP$_{\text{LS}}$, reducing the bottleneck time complexity from $O(n^3)$ to $O(n^2)$. We also prove the correctness of MP$_{\text{KM-M}}$ via the primal-dual method. To test performance on diverse instances, we generated benchmarks of various types and sizes and carried out an extensive computational study of MP$_{\text{KM-M}}$ and MP$_{\text{LS}}$. The results show that MP$_{\text{KM-M}}$ greatly shortens the runtime compared with MP$_{\text{LS}}$ while yielding the same solution quality.

We study the shape reconstruction of an inclusion from faraway measurements of the associated electric field. This inverse problem is of practical importance in biomedical imaging and is known to be notoriously ill-posed. By incorporating the Drude model for the permittivity parameter, we propose a novel reconstruction scheme that exploits plasmon resonance and its significantly enhanced resonant field. We conduct a delicate sensitivity analysis to establish a sharp relationship between the sensitivity of the reconstruction and the plasmon resonance, showing that when plasmon resonance occurs, the sensitivity functional blows up, which ensures a more robust and effective reconstruction. We then combine Tikhonov regularization with the Laplace approximation to solve the inverse problem; this organic hybridization of deterministic and stochastic methods quickly computes the minimizer while capturing the uncertainty of the solution. Extensive numerical experiments illustrate the promising features of the proposed reconstruction scheme.
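The deterministic half of the proposed hybrid, Tikhonov regularization, can be sketched on a toy linear inverse problem: minimize $\|Ax - b\|^2 + \lambda\|x\|^2$ via the normal equations $(A^\top A + \lambda I)x = A^\top b$. The operator $A$ and data $b$ below are made up for illustration; the paper applies the idea to a nonlinear shape-reconstruction functional.

```python
def tikhonov_2d(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam ||x||^2 for x in R^2 via the normal
    equations (A^T A + lam I) x = A^T b, using a closed-form 2x2 inverse."""
    m00 = sum(r[0] * r[0] for r in A) + lam
    m01 = sum(r[0] * r[1] for r in A)
    m11 = sum(r[1] * r[1] for r in A) + lam
    v0 = sum(r[0] * y for r, y in zip(A, b))
    v1 = sum(r[1] * y for r, y in zip(A, b))
    det = m00 * m11 - m01 * m01
    return ((v0 * m11 - v1 * m01) / det, (m00 * v1 - m01 * v0) / det)

# Made-up forward operator and data.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x_ls = tikhonov_2d(A, b, 0.0)    # plain least squares: (1, 2)
x_reg = tikhonov_2d(A, b, 10.0)  # heavily regularized: shrunk toward 0
```

Increasing $\lambda$ trades data fidelity for stability, which is what tames the ill-posedness in practice.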

We study how good a lexicographically maximal solution is in the weighted matching and matroid intersection problems. A solution is lexicographically maximal if it takes as many heaviest elements as possible, and subject to this, it takes as many second heaviest elements as possible, and so on. If the distinct weight values are sufficiently dispersed, e.g., the minimum ratio of two distinct weight values is at least the ground set size, then the lexicographical maximality and the usual weighted optimality are equivalent. We show that the threshold of the ratio for this equivalence to hold is exactly $2$. Furthermore, we prove that if the ratio is less than $2$, say $\alpha$, then a lexicographically maximal solution achieves $(\alpha/2)$-approximation, and this bound is tight.
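The tightness of the $(\alpha/2)$ bound can be checked by brute force on a four-vertex path (our own illustrative instance, not taken from the paper): the lexicographically maximal matching grabs the unique heaviest edge of weight $\alpha < 2$, which blocks two disjoint unit-weight edges totalling $2$.

```python
from itertools import combinations

alpha = 1.5  # any value in (1, 2) exhibits the alpha/2 ratio
edges = [('a', 'b', alpha), ('a', 'c', 1.0), ('b', 'd', 1.0)]

def is_matching(subset):
    seen = set()
    for u, v, _ in subset:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

all_matchings = [s for k in range(len(edges) + 1)
                 for s in combinations(edges, k) if is_matching(s)]

def lex_key(m):
    # weights sorted descending; tuple comparison realizes lexicographic order
    return tuple(sorted((w for *_, w in m), reverse=True))

lex_best = max(all_matchings, key=lex_key)                     # {(a,b)}: weight 1.5
opt = max(all_matchings, key=lambda m: sum(w for *_, w in m))  # {(a,c),(b,d)}: weight 2
```

Here the lexicographically maximal matching achieves weight $\alpha$ against an optimum of $2$, matching the claimed ratio exactly.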

The decision time of an infinite time algorithm is the supremum of its halting times over all real inputs. The decision time of a set of reals is the least decision time of an algorithm that decides the set; semidecision times of semidecidable sets are defined similarly. It is not hard to see that $\omega_1$ is the maximal decision time of sets of reals. Our main results determine the supremum of countable decision times as $\sigma$ and that of countable semidecision times as $\tau$, where $\sigma$ and $\tau$ denote the suprema of $\Sigma_1$- and $\Sigma_2$-definable ordinals, respectively, over $L_{\omega_1}$. We further compute analogous suprema for singletons.

We study the framework of universal dynamic regret minimization with strongly convex losses. We answer an open problem of Baby and Wang [2021] by showing that in a proper learning setup, Strongly Adaptive algorithms can achieve the near-optimal dynamic regret of $\tilde O(d^{1/3} n^{1/3}\text{TV}[u_{1:n}]^{2/3} \vee d)$ against any comparator sequence $u_1,\ldots,u_n$ simultaneously, where $n$ is the time horizon and $\text{TV}[u_{1:n}]$ is the total variation of the comparator sequence. These results are facilitated by exploiting a number of new structures imposed by the KKT conditions that were not considered in Baby and Wang [2021]; these structures also lead to further improvements over their results, such as (a) handling non-smooth losses and (b) improving the dimension dependence of the regret. Further, we derive near-optimal dynamic regret rates for the special case of proper online learning with exp-concave losses and an $L_\infty$-constrained decision set.

\emph{$K$-best enumeration}, which asks to output the $k$ best solutions without duplication, plays an important role in data analysis in many fields. In such fields, data can typically be represented by graphs, and thus subgraph enumeration has received much attention. However, $k$-best enumeration tends to be intractable since, in many cases, even finding one optimal solution is \NP-hard. To overcome this difficulty, we combine $k$-best enumeration with a recently proposed concept called \emph{approximation enumeration algorithms}. As a main result, we propose an $\alpha$-approximation algorithm for minimal connected edge dominating sets that outputs $k$ minimal solutions of cardinality at most $\alpha\cdot\overline{\rm OPT}$, where $\overline{\rm OPT}$ is the cardinality of a mini\emph{mum} solution that is \emph{not} output by the algorithm, and $\alpha$ is a constant. Moreover, our algorithm runs with $O(nm^2\Delta)$ delay, where $n$, $m$, and $\Delta$ are the number of vertices, the number of edges, and the maximum degree of the input graph, respectively.

Implicit probabilistic models are defined naturally in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that requires no knowledge of the form of the likelihood function or any derived quantities, yet can be shown to be equivalent to maximum-likelihood estimation under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also present encouraging experimental results.
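A minimal sketch of likelihood-free estimation for an implicit model, assuming a simple location model and plain moment matching (this is a generic illustration of the problem setting, not the estimator proposed in the paper):

```python
import random
import statistics

def simulate(theta, n, rng):
    """Implicit model: easy to sample from; no closed-form likelihood is used."""
    return [theta + rng.gauss(0.0, 1.0) for _ in range(n)]

def estimate(observed, grid, n_sim=2000, seed=1):
    """Grid search minimizing the gap between simulated and observed means.
    Reusing one seed across grid points keeps the comparison low-variance."""
    target = statistics.fmean(observed)
    return min(grid, key=lambda t:
               abs(statistics.fmean(simulate(t, n_sim, random.Random(seed))) - target))

observed = simulate(2.0, 2000, random.Random(0))  # data from true theta = 2.0
theta_hat = estimate(observed, [i / 10 for i in range(41)])
```

The estimator only ever calls the sampler, which is the defining constraint of the implicit-model setting.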
