Assuming the Unique Games Conjecture (UGC), the best approximation ratio that can be obtained in polynomial time for the MAX CUT problem is $\alpha_{\text{CUT}}\simeq 0.87856$, obtained by the celebrated SDP-based approximation algorithm of Goemans and Williamson. The currently best approximation algorithm for MAX DI-CUT, i.e., the MAX CUT problem in directed graphs, achieves a ratio of about $0.87401$, leaving open the question of whether MAX DI-CUT can be approximated as well as MAX CUT. We obtain a slightly improved algorithm for MAX DI-CUT and a new UGC-hardness result for it, showing that $0.87446\le \alpha_{\text{DI-CUT}}\le 0.87461$, where $\alpha_{\text{DI-CUT}}$ is the best approximation ratio that can be obtained in polynomial time for MAX DI-CUT under UGC. The new upper bound separates MAX DI-CUT from MAX CUT, resolving a question raised by Feige and Goemans. A natural generalization of MAX DI-CUT is the MAX 2-AND problem, in which each constraint is of the form $z_1\land z_2$, where $z_1$ and $z_2$ are literals, i.e., variables or their negations. (In MAX DI-CUT each constraint is of the form $\bar{x}_1\land x_2$, where $x_1$ and $x_2$ are variables.) Austrin separated MAX 2-AND from MAX CUT by showing that $\alpha_{\text{2AND}} < 0.87435$ and conjectured that MAX 2-AND and MAX DI-CUT have the same approximation ratio. Our new lower bound on MAX DI-CUT refutes this conjecture, completing the separation of the three problems MAX 2-AND, MAX DI-CUT and MAX CUT. We also obtain a new lower bound for MAX 2-AND, showing that $0.87414\le \alpha_{\text{2AND}}\le 0.87435$. Our upper bound on MAX DI-CUT is achieved via a simple, analytical proof. The lower bounds on MAX DI-CUT and MAX 2-AND (i.e., the new approximation algorithms) use experimentally discovered distributions of rounding functions, which are then verified via computer-assisted proofs.
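
The Goemans--Williamson constant quoted above can be checked numerically without any SDP machinery: an edge whose unit vectors meet at angle $\theta$ is cut by a random hyperplane with probability $\theta/\pi$, while its SDP contribution is $(1-\cos\theta)/2$, so $\alpha_{\text{CUT}} = \min_{0<\theta\le\pi} \frac{\theta/\pi}{(1-\cos\theta)/2}$. A minimal sketch in Python (NumPy only):

```python
# Numerically minimize the Goemans-Williamson ratio over the angle theta.
import numpy as np

theta = np.linspace(1e-6, np.pi, 2_000_000)
ratio = (theta / np.pi) / ((1.0 - np.cos(theta)) / 2.0)
print(f"alpha_CUT ~= {ratio.min():.5f}")  # ~= 0.87856, matching the abstract
```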

Related Content

We consider the pebble game on DAGs with bounded fan-in introduced in [Paterson and Hewitt '70] and the reversible version of this game in [Bennett '89], and study the question of how hard it is to decide exactly or approximately the number of pebbles needed for a given DAG in these games. We prove that the problem of deciding whether $s$~pebbles suffice to reversibly pebble a DAG $G$ is PSPACE-complete, as was previously shown for the standard pebble game in [Gilbert, Lengauer and Tarjan '80]. Via two different graph product constructions we then strengthen these results to establish that both standard and reversible pebbling space are PSPACE-hard to approximate to within any additive constant. To the best of our knowledge, these are the first hardness of approximation results for pebble games in an unrestricted setting (even for polynomial time). Also, since [Chan '13] proved that reversible pebbling is equivalent to the games in [Dymond and Tompa '85] and [Raz and McKenzie '99], our results apply to the Dymond--Tompa and Raz--McKenzie games as well, and from the same paper it follows that resolution depth is PSPACE-hard to determine up to any additive constant. We also obtain a multiplicative logarithmic separation between reversible and standard pebbling space. This improves on the additive logarithmic separation previously known and could plausibly be tight, although we are not able to prove this. We leave as an interesting open problem whether our additive hardness of approximation result could be strengthened to a multiplicative bound if the computational resources are decreased from polynomial space to the more common setting of polynomial time.
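
To make the decision problem concrete, here is a brute-force sketch of the standard (black) pebble game: a pebble may be placed on a vertex whose predecessors are all pebbled, and any pebble may be removed at any time; the question is whether a target vertex can be pebbled without ever exceeding $s$ pebbles on the DAG. The search over configurations is exponential, as one would expect for a PSPACE-complete problem, but it runs fine on toy DAGs. The `preds` input format is our own choice, and this variant has no "sliding" move:

```python
# Brute-force decision: do s pebbles suffice to pebble `target` in the DAG
# given by `preds` (vertex -> list of predecessors)? BFS over configurations.
from collections import deque

def pebbling_possible(preds, target, s):
    start = frozenset()
    seen = {start}
    queue = deque([start])
    while queue:
        conf = queue.popleft()
        if target in conf:
            return True
        moves = []
        # place a pebble: allowed when all predecessors carry pebbles
        for v in preds:
            if v not in conf and all(p in conf for p in preds[v]):
                moves.append(conf | {v})
        # remove any pebble (in Bennett's reversible game, removal would
        # additionally require all predecessors to be pebbled)
        for v in conf:
            moves.append(conf - {v})
        for nxt in moves:
            if len(nxt) <= s and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# tiny pyramid DAG: c depends on a and b
print(pebbling_possible({'a': [], 'b': [], 'c': ['a', 'b']}, 'c', 3))  # True
print(pebbling_possible({'a': [], 'b': [], 'c': ['a', 'b']}, 'c', 2))  # False
```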

A longstanding open problem asks for an aperiodic monotile, also known as an "einstein": a shape that admits tilings of the plane, but never periodic tilings. We answer this problem for topological disk tiles by exhibiting a continuum of combinatorially equivalent aperiodic polygons. We first show that a representative example, the "hat" polykite, can form clusters called "metatiles", for which substitution rules can be defined. Because the metatiles admit tilings of the plane, so too does the hat. We then prove that generic members of our continuum of polygons are aperiodic, through a new kind of geometric incommensurability argument. Separately, we give a combinatorial, computer-assisted proof that the hat must form hierarchical -- and hence aperiodic -- tilings.
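
For intuition on why substitution rules yield tilings of the whole plane, recall that any primitive substitution system has a Perron eigenvector giving the limiting relative frequencies of its tile types. The sketch below iterates a substitution matrix to that limit; the matrix entries here are hypothetical placeholders, not the actual H/T/P/F metatile rules of the paper:

```python
# Iterate a (hypothetical) metatile substitution matrix: column j counts
# the tiles of each type produced when a type-j metatile is substituted
# once. Relative tile counts converge to the Perron eigenvector.
import numpy as np

M = np.array([[3, 1, 0, 0],   # placeholder values only; the true
              [1, 2, 1, 0],   # substitution rules are in the source paper
              [0, 1, 2, 1],
              [0, 0, 1, 3]], dtype=float)

counts = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(30):
    counts = M @ counts
print(counts / counts.sum())  # limiting metatile frequencies
```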

A longstanding open question is whether the distinguisher of high-rate alternant codes or Goppa codes \cite{FGOPT11} can be turned into an algorithm that recovers the algebraic structure of such a code from the mere knowledge of an arbitrary generator matrix for it. This would make it possible to break the McEliece scheme as soon as the code rate is large enough, and would break all instances of the CFS signature scheme. We give for the first time a positive answer to this problem when the code is {\em a generic alternant code} and the field size $q$ is small, $q \in \{2,3\}$, for {\em all} regimes of the other parameters in which the aforementioned distinguisher works. This breakthrough is obtained via two different ingredients: (i) a way of using code shortening and the component-wise product of codes to derive from the original alternant code a sequence of alternant codes of decreasing degree, down to an alternant code of degree $3$ (with a multiplier and support related to those of the original alternant code); (ii) an original Gr\"obner basis approach which takes into account the non-standard constraints on the multiplier and support of an alternant code, and which recovers in polynomial time the relevant algebraic structure of an alternant code of degree $3$ from the mere knowledge of a basis for it.
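
Ingredient (i) rests on the component-wise (Schur) product of codes: the code spanned by all component-wise products of codewords. The distinguisher of \cite{FGOPT11} exploits the fact that for high-rate alternant and Goppa codes this product has abnormally low dimension. A self-contained sketch over GF(2), with generator matrices as 0/1 integer NumPy arrays and helper names of our own choosing:

```python
# Dimension of the component-wise (Schur) product of two binary codes,
# computed by Gaussian elimination over GF(2).
import numpy as np

def rank_gf2(M):
    M = M.copy() % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # swap pivot row into place
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]               # eliminate column c
        rank += 1
    return rank

def schur_product_dim(G1, G2):
    # generating set of the product code: all pairwise component-wise
    # products of rows of G1 and G2
    prods = np.array([g1 * g2 for g1 in G1 for g2 in G2]) % 2
    return rank_gf2(prods)

G = np.array([[1, 0, 1, 1], [0, 1, 1, 0]])
print(schur_product_dim(G, G))  # dimension of the square code, here 3
```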

Regression analysis under the assumption of monotonicity is a well-studied statistical problem and has been used in a wide range of applications. However, there remains a lack of a broadly applicable methodology that permits information borrowing, for efficiency gains, when jointly estimating multiple monotonic regression functions. We introduce such a methodology by extending the isotonic regression problem presented in the article "The isotonic regression problem and its dual" (Barlow and Brunk, 1972). The presented approach can be applied to both fixed and random designs and any number of explanatory variables (regressors). Our framework penalizes pairwise differences in the values (levels) of the monotonic function estimates, with the penalty weight determined by a statistical test, which results in information being shared across data sets whenever similarities in the regression functions exist. Function estimates are subsequently derived using an iterative optimization routine that uses existing solution algorithms for the isotonic regression problem. Simulation studies for normally and binomially distributed response data illustrate that function estimates are consistently improved if similarities between functions exist, and are not oversmoothed otherwise. We further apply our methodology to analyse two public health data sets: neonatal mortality data for Porto Alegre, Brazil, and stroke patient data for North West England.
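
The "existing solution algorithms" referenced above include the classic pool-adjacent-violators algorithm (PAVA), which solves the (weighted) isotonic least-squares subproblem exactly by merging adjacent blocks that violate monotonicity. A minimal sketch:

```python
# Pool-adjacent-violators algorithm for weighted isotonic least squares.
def pava(y, w=None):
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    blocks = []  # each block: [fitted level, total weight, count]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge backwards while adjacent blocks violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            l2, w2, c2 = blocks.pop()
            l1, w1, c1 = blocks.pop()
            blocks.append([(l1 * w1 + l2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    fit = []
    for level, _, count in blocks:
        fit.extend([level] * count)
    return fit

print(pava([1, 3, 2, 4, 3, 5]))  # [1, 2.5, 2.5, 3.5, 3.5, 5]
```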

State-of-the-art parallel sorting algorithms for distributed-memory architectures are based on computing a balanced partitioning via sampling and histogramming. By finding samples that partition the sorted keys into evenly-sized chunks, these algorithms minimize the number of communication rounds required. Histogramming (computing positions of samples) guides sampling, enabling a decrease in the overall number of samples collected. We derive lower and upper bounds on the number of sampling/histogramming rounds required to compute a balanced partitioning. We improve on prior results to demonstrate that when using $p$ processors, $O(\log^* p)$ rounds with $O(p/\log^* p)$ samples per round suffice. We match that with a lower bound showing that any algorithm with $O(p)$ samples per round requires at least $\Omega(\log^* p)$ rounds. Additionally, we prove an $\Omega(p \log p)$ lower bound on the number of samples for a single round, thus proving that the existing one-round algorithms sample sort, AMS sort and HSS have optimal sample-size complexity. To derive the lower bound, we propose a hard randomized input distribution and apply classical results from the distribution theory of runs.
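
One sampling/histogramming round can be sketched as follows: draw a sample of candidate splitters, compute each candidate's global rank (the histogram), and keep, for each target rank $i \cdot n/p$, the nearest candidate. The serial simulation below stands in for the distributed histogram collective by sorting locally; the function name and interface are ours:

```python
# One round of sample-based splitter selection (serial simulation).
import bisect
import random

def one_round_splitters(keys, p, sample_size):
    sample = sorted(random.sample(keys, sample_size))
    keys_sorted = sorted(keys)  # stand-in for the distributed histogram
    n = len(keys)
    splitters = []
    for i in range(1, p):
        target = i * n // p
        # candidate whose global rank is closest to the ideal rank
        best = min(sample,
                   key=lambda c: abs(bisect.bisect_left(keys_sorted, c) - target))
        splitters.append(best)
    return splitters

print(one_round_splitters(list(range(1000)), p=4, sample_size=64))
```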

In the \emph{$k$-Diameter-Optimally Augmenting Tree Problem} we are given a tree $T$ of $n$ vertices as input. The tree is embedded in an unknown \emph{metric} space, and we have unlimited access to an oracle that, given two distinct vertices $u$ and $v$ of $T$, reports the cost of the edge $(u,v)$ in constant time. We want to augment $T$ with $k$ shortcuts in order to minimize the diameter of the resulting graph. For $k=1$, $O(n \log n)$ time algorithms are known both for paths [Wang, CG 2018] and trees [Bil\`o, TCS 2022]. In this paper we investigate the case of multiple shortcuts. We show that no algorithm performing $o(n^2)$ queries can provide a better than $10/9$-approximate solution for trees when $k\geq 3$. For any constant $\varepsilon > 0$, we instead design a linear-time $(1+\varepsilon)$-approximation algorithm for paths and $k = o(\sqrt{\log n})$, thus establishing a dichotomy between paths and trees for $k\geq 3$. We achieve the claimed running time by designing an ad-hoc data structure, which also serves as a key component of a linear-time $4$-approximation algorithm for trees, and of an algorithm that computes the diameter of graphs with $n + k - 1$ edges in $O(n k \log n)$ time, even for non-metric graphs. Our data structure and the latter result are of independent interest.
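
As a concrete baseline for $k=1$ on a path, the problem can be solved by brute force: query the oracle for every candidate shortcut $(u,v)$, compute the diameter of the augmented path, and keep the best. This uses $\Theta(n^2)$ queries plus cubic all-pairs time per candidate, far from the $O(n \log n)$ algorithms cited above; the sketch is only meant to pin down the objective, with `cost` playing the role of the oracle:

```python
# Brute force for k = 1 on a path: try every shortcut and minimize the
# diameter of path + shortcut.
import itertools

def best_single_shortcut(cost, n):
    def diameter_with(u, v):
        INF = float("inf")
        d = [[INF] * n for _ in range(n)]
        for i in range(n):
            d[i][i] = 0.0
        for i in range(n - 1):                        # path edges
            d[i][i + 1] = d[i + 1][i] = cost(i, i + 1)
        d[u][v] = d[v][u] = min(d[u][v], cost(u, v))  # the shortcut
        for m in range(n):                            # Floyd-Warshall
            for i in range(n):
                for j in range(n):
                    if d[i][m] + d[m][j] < d[i][j]:
                        d[i][j] = d[i][m] + d[m][j]
        return max(max(row) for row in d)
    return min((diameter_with(u, v), (u, v))
               for u, v in itertools.combinations(range(n), 2))
```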

We present a mass lumping approach based on an isogeometric Petrov-Galerkin method that preserves higher-order spatial accuracy in explicit dynamics calculations irrespective of the polynomial degree of the spline approximation. To discretize the test function space, our method uses an approximate dual basis, whose functions are smooth, have local support and satisfy approximate bi-orthogonality with respect to a trial space of B-splines. The resulting mass matrix is ``close'' to the identity matrix. Specifically, a lumped version of this mass matrix preserves all relevant polynomials when utilized in a Galerkin projection. Consequently, the mass matrix can be lumped (via row-sum lumping) without compromising spatial accuracy in explicit dynamics calculations. We address the imposition of Dirichlet boundary conditions and the preservation of approximate bi-orthogonality under geometric mappings. In addition, we establish a link between the exact dual and approximate dual basis functions via an iterative algorithm that improves the approximate dual basis towards exact bi-orthogonality. We demonstrate the performance of our higher-order accurate mass lumping approach via convergence studies and spectral analyses of discretized beam, plate and shell models.
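
Row-sum lumping itself is a one-liner: replace the consistent mass matrix $M$ by the diagonal matrix of its row sums, $M^{\text{lump}} = \operatorname{diag}(M\mathbf{1})$. The abstract's point is that with the approximate dual test basis this step preserves spatial accuracy, whereas for an ordinary B-spline Galerkin mass matrix it generally does not. A minimal illustration:

```python
# Row-sum lumping of a consistent mass matrix.
import numpy as np

def row_sum_lump(M):
    return np.diag(M @ np.ones(M.shape[0]))

# e.g. a single linear 1D element of length h has consistent mass
# (h/6) * [[2, 1], [1, 2]]; row-sum lumping gives (h/2) * I.
h = 1.0
M_e = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
print(row_sum_lump(M_e))  # diag(h/2, h/2)
```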

We consider the problem of online interval scheduling on a single machine, where intervals arrive online in an order chosen by an adversary, and the algorithm must output a set of non-conflicting intervals. Traditionally in scheduling theory, it is assumed that intervals arrive in order of increasing start times. We drop that assumption and allow intervals to arrive in any possible order. We call this variant any-order interval selection (AOIS). We assume that some online acceptances can be revoked, but a feasible solution must be maintained at all times. For unweighted intervals and deterministic algorithms, the competitive ratio of this problem is unbounded. Under the assumption that there are at most $k$ distinct interval lengths, we give a simple algorithm that achieves a competitive ratio of $2k$ and show that it is optimal among deterministic algorithms and a restricted class of randomized algorithms we call memoryless, contributing to an open question of Adler and Azar (2003): namely, whether a randomized algorithm without access to history can achieve a constant competitive ratio. We connect our model to the problem of call control on the line, and show how the algorithms of Garay et al. (1997) can be applied to our setting, resulting in an optimal algorithm for the case of proportional weights. We also discuss the case of intervals with arbitrary weights, and show how to convert the single-length algorithm of Fung et al. (2014) into a classify-and-randomly-select algorithm that achieves a competitive ratio of $2k$; see the sketch below. Finally, we consider the case of intervals arriving in a random order, and show that for single-length instances, a one-directional algorithm (i.e., one that replaces intervals in only one direction) is the only deterministic memoryless algorithm that can possibly benefit from random arrivals.
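
The classify-and-randomly-select conversion is simple enough to sketch: with at most $k$ distinct lengths, choose one length class uniformly at random up front and run the single-length algorithm on that class alone, ignoring everything else. If the single-length algorithm is $c$-competitive, the expected value obtained is at least $\mathrm{OPT}/(ck)$, since the optimum restricted to a uniformly chosen class has expected value $\mathrm{OPT}/k$; with $c=2$ this gives the $2k$ ratio above. A sketch under our own interval and algorithm interfaces:

```python
# Classify and randomly select: commit to one random length class and
# delegate intervals of that class to a single-length algorithm `alg`.
import random

def classify_and_randomly_select(lengths, alg):
    chosen = random.choice(lengths)       # one of at most k distinct lengths
    def online(interval):
        lo, hi = interval
        if hi - lo == chosen:
            return alg(interval)          # accept/revoke decided by alg
        return False                      # intervals of other classes ignored
    return online
```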

Partitioning a set of elements into an unknown number of mutually exclusive subsets is essential in many machine learning problems. However, assigning elements, such as samples in a dataset or neurons in a network layer, to an unknown and discrete number of subsets is inherently non-differentiable, prohibiting end-to-end gradient-based optimization of parameters. We overcome this limitation by proposing a novel two-step method for inferring partitions, which allows its usage in variational inference tasks. This new approach enables reparameterized gradients with respect to the parameters of the new random partition model. Our method works by first inferring the number of elements per subset and then filling these subsets in a learned order. We highlight the versatility of our general-purpose approach in three different challenging experiments: variational clustering, inference of shared and independent generative factors under weak supervision, and multitask learning.
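
In generative terms, the two steps read as follows. The toy sketch below is non-differentiable and the subset-size distribution is a hypothetical placeholder; the paper's contribution is precisely a reparameterized relaxation of these two steps, and none of the names below are the authors' API:

```python
# Toy two-step partition sampler: (1) draw subset sizes, (2) fill the
# subsets in a sampled element order.
import numpy as np

def sample_partition(n, rng, alpha=1.0):
    # step 1: sequential subset-size draws (hypothetical choice of law)
    sizes, remaining = [], n
    while remaining > 0:
        s = 1 + rng.binomial(remaining - 1, 1.0 / (1.0 + alpha))
        sizes.append(s)
        remaining -= s
    # step 2: fill the subsets in a sampled order of elements
    order = rng.permutation(n)
    subsets, start = [], 0
    for s in sizes:
        subsets.append(sorted(order[start:start + s].tolist()))
        start += s
    return subsets

rng = np.random.default_rng(0)
print(sample_partition(6, rng))  # e.g. [[0, 2, 3, 5], [1, 4]]
```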

Modal parameter estimation of operational structures is often a challenging task when confronted with unwanted distortions (outliers) in field measurements. Atypical observations present a problem to operational modal analysis (OMA) algorithms, such as stochastic subspace identification (SSI), severely biasing parameter estimates and resulting in misidentification of the system. Despite this predicament, no simple mechanism currently exists in SSI that is capable of dealing with such anomalies. Addressing this problem, this paper first introduces a novel probabilistic formulation of stochastic subspace identification (Prob-SSI), realised using probabilistic projections. Mathematically, the equivalence between this model and the classic algorithm is demonstrated. This fresh perspective, viewing SSI as a problem in probabilistic inference, lays the necessary mathematical foundation to enable a plethora of new, more sophisticated OMA approaches. To this end, a statistically robust SSI algorithm (robust Prob-SSI) is developed, capable of providing a principled and automatic way of handling outlying or anomalous data in the measured time series, such as may occur in field recordings, e.g. intermittent sensor dropout. Robust Prob-SSI is shown to outperform conventional SSI when confronted with 'corrupted' data, exhibiting improved identification performance and higher levels of confidence in the found poles when viewing consistency (stabilisation) diagrams. Similar benefits are also demonstrated on the Z24 Bridge benchmark dataset, highlighting enhanced performance on measured systems.
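
For reference, the classical covariance-driven SSI pipeline that Prob-SSI reformulates can be sketched in a few lines: estimate output covariances, stack them in a block Hankel matrix, factor by SVD into an observability matrix, and recover the state matrix from its shift invariance; the eigenvalues of the identified $A$ give the poles. This sketch is single-channel only, with no model-order selection or stabilisation diagram, and the function name is ours:

```python
# Bare-bones covariance-driven stochastic subspace identification (SSI-cov).
import numpy as np

def ssi_cov_frequencies(y, order, blocks, dt):
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    N = len(y)
    # output covariances R_lag ~ E[y_{t+lag} * y_t]
    R = [y[lag:] @ y[:N - lag] / (N - lag) for lag in range(2 * blocks + 1)]
    # Hankel matrix H[i, j] = R_{i+j+1} (scalar blocks: one channel)
    H = np.array([[R[i + j + 1] for j in range(blocks)] for i in range(blocks)])
    U, s, Vt = np.linalg.svd(H)
    O = U[:, :order] * np.sqrt(s[:order])        # observability, up to similarity
    A = np.linalg.pinv(O[:-1]) @ O[1:]           # shift-invariance least squares
    lam = np.linalg.eigvals(A).astype(complex)
    mu = np.log(lam) / dt                        # continuous-time poles
    # natural frequencies in Hz (conjugate poles yield duplicate entries)
    return np.sort(np.abs(mu)) / (2 * np.pi)
```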
