
In this paper we present an invariance proof of three properties of Simpson's 4-slot algorithm, namely data-race freedom, data coherence and data freshness, which together imply linearisability of the algorithm. It extends previous work, whose proofs focus mostly on data-race freedom. In addition, our proof uses only simple inductive invariants and transition invariants, whereas previous work uses more sophisticated machinery such as separation logic, rely-guarantee reasoning or ownership transfer.
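
For concreteness, here is a minimal Python sketch of the four-slot mechanism that the proved invariants govern, following one standard presentation (the variable names are ours; a real implementation needs the control bits to be atomic single-bit registers with appropriate memory ordering, which plain Python does not guarantee):

```python
class FourSlot:
    """Simpson's 4-slot mechanism: one writer, one reader, no locks."""

    def __init__(self):
        self.data = [[None, None], [None, None]]  # the four data slots
        self.slot = [0, 0]    # freshest slot index within each pair
        self.latest = 0       # pair written most recently
        self.reading = 0      # pair announced by the reader

    def write(self, item):             # called by the single writer only
        pair = 1 - self.reading        # avoid the pair the reader announced
        index = 1 - self.slot[pair]    # avoid the freshest slot of that pair
        self.data[pair][index] = item  # race-free: no one else touches this slot
        self.slot[pair] = index        # publish within the pair
        self.latest = pair             # publish the pair

    def read(self):                    # called by the single reader only
        pair = self.latest             # take the freshest pair
        self.reading = pair            # announce it to the writer
        index = self.slot[pair]
        return self.data[pair][index]  # a coherent, fresh value
```

The three proved properties map directly onto this code: data-race freedom says the writer and reader never access the same slot concurrently, coherence says `read` returns a value deposited by a single complete `write`, and freshness says that value is no older than the last write that completed before the read began.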

Related content

This paper studies the sensitivity (or insensitivity) of a class of load balancing algorithms, named LB-zero, that achieve asymptotic zero-waiting in the sub-Halfin-Whitt regime. Most existing results on zero-waiting load balancing algorithms assume the service time distribution is exponential. This paper establishes the {\em large-system insensitivity} of LB-zero for jobs whose service time follows a Coxian distribution with a finite number of phases. This result suggests that LB-zero achieves asymptotic zero-waiting for a large class of service time distributions, which is confirmed in our simulations. To prove this result, this paper develops a new technique, called "Iterative State-Space Peeling" (or ISSP for short). ISSP first identifies an iterative relation between the upper and lower bounds on the queue states and then proves that the system lives near the fixed point of the iterative bounds with high probability. Based on ISSP, the steady-state distribution of the system is further analyzed by applying Stein's method in the neighborhood of the fixed point. ISSP, like state-space collapse in heavy-traffic analysis, is a general approach that may be used to study other complex stochastic systems.
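
For readers unfamiliar with the service-time class: a Coxian distribution traverses a chain of exponential phases, finishing early with some probability after each phase. A small illustrative sampler (the function and parameter names are ours, not the paper's):

```python
import random

def coxian_sample(rates, cont_probs):
    """Draw one Coxian service time: rates[k] is the exponential rate of
    phase k; cont_probs[k] is the probability of continuing to phase k+1."""
    t = 0.0
    for k, rate in enumerate(rates):
        t += random.expovariate(rate)
        if k < len(rates) - 1 and random.random() >= cont_probs[k]:
            break  # service completes without entering the next phase
    return t

# Two phases with rates 1.0 and 2.0, continuing after phase 1 w.p. 0.3:
# E[T] = 1/1.0 + 0.3 * (1/2.0) = 1.15
mean_est = sum(coxian_sample([1.0, 2.0], [0.3]) for _ in range(100_000)) / 100_000
```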

We show $\textsf{EOPL}=\textsf{PLS}\cap\textsf{PPAD}$. Here the class $\textsf{EOPL}$ consists of all total search problems that reduce to the End-of-Potential-Line problem, which was introduced in the works by Hubacek and Yogev (SICOMP 2020) and Fearnley et al. (JCSS 2020). In particular, our result yields a new simpler proof of the breakthrough collapse $\textsf{CLS}=\textsf{PLS}\cap\textsf{PPAD}$ by Fearnley et al. (STOC 2021). We also prove a companion result $\textsf{SOPL}=\textsf{PLS}\cap\textsf{PPADS}$, where $\textsf{SOPL}$ is the class associated with the Sink-of-Potential-Line problem.

$\newcommand{\NP}{\mathsf{NP}}\newcommand{\GapSVP}{\textrm{GapSVP}}$We give a simple proof that the (approximate, decisional) Shortest Vector Problem is $\NP$-hard under a randomized reduction. Specifically, we show that for any $p \geq 1$ and any constant $\gamma < 2^{1/p}$, the $\gamma$-approximate problem in the $\ell_p$ norm ($\gamma$-$\GapSVP_p$) is not in $\mathsf{RP}$ unless $\NP \subseteq \mathsf{RP}$. Our proof follows an approach pioneered by Ajtai (STOC 1998), and strengthened by Micciancio (FOCS 1998 and SICOMP 2000), for showing hardness of $\gamma$-$\GapSVP_p$ using locally dense lattices. We construct such lattices simply by applying "Construction A" to Reed-Solomon codes with suitable parameters, and prove their local density via an elementary argument originally used in the context of Craig lattices. As in all known $\NP$-hardness results for $\GapSVP_p$ with $p < \infty$, our reduction uses randomness. Indeed, it is a notorious open problem to prove $\NP$-hardness via a deterministic reduction. To this end, we additionally discuss potential directions and associated challenges for derandomizing our reduction. In particular, we show that a close deterministic analogue of our local density construction would improve on the state-of-the-art explicit Reed-Solomon list-decoding lower bounds of Guruswami and Rudra (STOC 2005 and IEEE Trans. Inf. Theory 2006). As a related contribution of independent interest, we also give a polynomial-time algorithm for decoding $n$-dimensional "Construction A Reed-Solomon lattices" (with different parameters than those used in our hardness proof) to a distance within an $O(\sqrt{\log n})$ factor of Minkowski's bound. This asymptotically matches the best known distance for decoding near Minkowski's bound, due to Mook and Peikert (IEEE Trans. Inf. Theory 2022), whose work we build on with a somewhat simpler construction and analysis.
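
For intuition, "Construction A" lifts a linear code $C \subseteq \mathbb{F}_p^n$ to the lattice $L = C + p\mathbb{Z}^n$. A toy sketch for a Reed-Solomon code, with tiny parameters of our own choosing (not those of the hardness proof):

```python
import itertools

p, k = 7, 2                      # toy prime field size and RS dimension
points = range(p)                # evaluate at all of F_p, so n = p

def rs_codeword(coeffs):
    """Evaluation vector of the polynomial with these coefficients over F_p."""
    return tuple(sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
                 for x in points)

codewords = {rs_codeword(c) for c in itertools.product(range(p), repeat=k)}

def in_lattice(v):
    """Construction A membership: v lies in L = C + p*Z^n iff v mod p is in C."""
    return tuple(x % p for x in v) in codewords

assert in_lattice([0] * p)                                    # the origin
assert in_lattice([(1 + 2 * x) % p + 3 * p for x in points])  # shifted codeword
```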

We present a novel energy-based numerical analysis of semilinear diffusion-reaction boundary value problems. Based on a suitable variational setting, the proposed computational scheme can be seen as an energy minimisation approach. More specifically, this procedure aims to generate a sequence of numerical approximations, which results from the iterative solution of related (stabilised) linearised discrete problems, and tends to a local minimum of the underlying energy functional. Simultaneously, the finite-dimensional approximation spaces are adaptively refined; this is implemented in terms of a new mesh refinement strategy in the context of finite element discretisations, which again relies on the energy structure of the problem under consideration, and does not involve any a posteriori error indicators. In combination, the resulting adaptive algorithm consists of an iterative linearisation procedure on a sequence of hierarchically refined discrete spaces, which we prove to converge towards a solution of the continuous problem in an appropriate sense. Numerical experiments demonstrate the robustness and reliability of our approach for a series of examples.
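
To illustrate the flavour of such an energy-decreasing linearised iteration (a toy 1D finite-difference analogue of our own, not the authors' scheme), consider $-u'' + u^3 = f$ on $(0,1)$ with homogeneous Dirichlet conditions and energy $E(u) = \int \frac{1}{2}|u'|^2 + \frac{1}{4}u^4 - fu \, \mathrm{d}x$:

```python
import numpy as np

n, h = 127, 1.0 / 128            # interior grid points and mesh size
f = np.ones(n)                   # right-hand side
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2  # FD Laplacian

def energy(u):                   # discrete E(u)
    grad = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
    return h * (0.5 * grad @ grad + np.sum(0.25 * u**4 - f * u))

u, delta = np.zeros(n), 1.0
for _ in range(200):
    residual = A @ u + u**3 - f            # gradient of the discrete energy (up to h)
    if np.linalg.norm(residual) < 1e-8:
        break                              # stationary point of the energy reached
    v = np.linalg.solve(A, -residual)      # linearised (preconditioned) correction
    while energy(u + delta * v) >= energy(u) and delta > 1e-12:
        delta *= 0.5                       # damp the step until the energy decreases
    u = u + delta * v
```

The adaptive scheme in the paper additionally refines the mesh between such iterations, again driven by the energy rather than by a posteriori error indicators.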

Recent studies revealed that convolutional neural networks do not generalize well to small image transformations, e.g. rotations by a few degrees or translations of a few pixels. To improve the robustness to such transformations, we propose to introduce data augmentation at intermediate layers of the neural architecture, in addition to the common data augmentation applied on the input images. By introducing small perturbations to activation maps (features) at various levels, we develop the capacity of the neural network to cope with such transformations. We conduct experiments on three image classification benchmarks (Tiny ImageNet, Caltech-256 and Food-101), considering two different convolutional architectures (ResNet-18 and DenseNet-121). When compared with two state-of-the-art stabilization methods, the empirical results show that our approach consistently attains the best trade-off between accuracy and mean flip rate.
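
A minimal PyTorch sketch of the idea, with a module that perturbs activation maps only during training (our own illustrative wiring, not the authors' code):

```python
import torch
import torch.nn as nn

class FeatureJitter(nn.Module):
    """Randomly shift and perturb activation maps during training only."""

    def __init__(self, max_shift=1, noise_std=0.01):
        super().__init__()
        self.max_shift = max_shift
        self.noise_std = noise_std

    def forward(self, feats):                 # feats: (N, C, H, W)
        if not self.training:
            return feats                      # identity at evaluation time
        dy = int(torch.randint(-self.max_shift, self.max_shift + 1, (1,)))
        dx = int(torch.randint(-self.max_shift, self.max_shift + 1, (1,)))
        feats = torch.roll(feats, shifts=(dy, dx), dims=(2, 3))  # small translation
        return feats + self.noise_std * torch.randn_like(feats)  # small noise

# Hypothetical wiring: perturb the features between two convolutional stages.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    FeatureJitter(max_shift=1),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
```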

A common explanation for the failure of deep networks to generalize out-of-distribution is that they fail to recover the "correct" features. Focusing on the domain generalization setting, we challenge this notion with a simple experiment which suggests that ERM already learns sufficient features and that the current bottleneck is not feature learning, but robust regression. We therefore argue that devising simpler methods for learning predictors on existing features is a promising direction for future research. Towards this end, we introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift. Rather than learning one function, DARE performs a domain-specific adjustment to unify the domains in a canonical latent space and learns to predict in this space. Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions. Further, we provide the first finite-environment convergence guarantee to the minimax risk, improving over existing results which show a "threshold effect". Evaluated on finetuned features, we find that DARE compares favorably to prior methods, consistently achieving equal or better performance.
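
A rough sketch of the adjustment-then-regression recipe as described above (illustrative only; the exact DARE objective and its guarantees are in the paper): whiten each training domain's features with its own covariance to map them into a shared canonical space, then fit one linear predictor on the pooled, adjusted features.

```python
import numpy as np

def whitener(X, eps=1e-6):
    """Inverse square root of the feature covariance of one domain."""
    cov = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])
    vals, vecs = np.linalg.eigh(cov)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def fit_pooled_predictor(domains, labels, ridge=1e-3):
    """Adjust each domain into a shared space, then fit one ridge regressor."""
    Z = np.vstack([(X - X.mean(0)) @ whitener(X) for X in domains])
    y = np.concatenate(labels)
    d = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + ridge * np.eye(d), Z.T @ y)

# At test time, a new domain's features are adjusted the same way
# before applying the learned weights.
```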

We show that some natural problems that are XNLP-hard (which implies W[t]-hardness for all t) when parameterized by pathwidth or treewidth become FPT when parameterized by stable gonality, a novel graph parameter based on optimal maps from graphs to trees. The problems we consider are classical flow and orientation problems, such as Undirected Flow with Lower Bounds (which is strongly NP-complete, as shown by Itai), Minimum Maximum Outdegree (for which W[1]-hardness for treewidth was proven by Szeider), and capacitated optimization problems such as Capacitated (Red-Blue) Dominating Set (for which W[1]-hardness was proven by Dom, Lokshtanov, Saurabh and Villanger). Our hardness proofs (which strengthen existing results) use reductions from a recent XNLP-complete problem (Accepting Non-deterministic Checking Counter Machine). The new parameterized algorithms use a novel notion of weighted tree partition with an associated parameter that we call treebreadth, inspired by Seese's notion of tree-partite graphs, as well as techniques from dynamic programming and integer linear programming.

For centuries, it has been widely believed that the influence of a small coalition of voters is negligible in a large election. Consequently, there is a large body of literature on characterizing the asymptotic likelihood for an election to be influenced, especially by the manipulation of a single voter, establishing an $O(\frac{1}{\sqrt n})$ upper bound and an $\Omega(\frac{1}{n^{67}})$ lower bound for many commonly studied voting rules under the i.i.d.~uniform distribution, known as Impartial Culture (IC) in social choice, where $n$ is the number of voters. In this paper, we extend previous studies in three aspects: (1) we consider a more general and realistic semi-random model that resembles the model in smoothed analysis, (2) we consider many coalitional influence problems, including coalitional manipulation, margin of victory, and various vote controls and bribery, and (3) we consider arbitrary and variable coalition size $B$. Our main theorem provides asymptotically tight bounds on the semi-random likelihood of the existence of a size-$B$ coalition that can successfully influence the election under a wide range of voting rules. Applications of the main theorem and its proof techniques resolve long-standing open questions about the likelihood of coalitional manipulability under IC, by showing that the likelihood is $\Theta\left(\min\left\{\frac{B}{\sqrt n}, 1\right\}\right)$ for many commonly studied voting rules. The main technical contribution is a characterization of the semi-random likelihood for a Poisson multinomial variable (PMV) to be unstable, which we believe to be a general and useful technique with independent interest.
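
The $\Theta\left(\min\left\{\frac{B}{\sqrt n}, 1\right\}\right)$ behaviour is easy to observe empirically. A crude Monte Carlo sketch for plurality under IC (our own simplified proxy: the most effective size-$B$ coalition takes $B$ of the current winner's supporters and re-votes for the runner-up):

```python
import random
from collections import Counter

def plurality_manipulable(n, m, B):
    """One IC election: can B of the winner's supporters flip the outcome?"""
    votes = Counter(random.randrange(m) for _ in range(n))
    scores = sorted(votes.values(), reverse=True) + [0, 0]
    top, second = scores[0], scores[1]
    moved = min(B, top)                  # coalition re-votes for the runner-up
    return second + moved > top - moved

n, m, trials = 10_000, 3, 2_000
for B in (1, 10, 100):
    rate = sum(plurality_manipulable(n, m, B) for _ in range(trials)) / trials
    print(f"B={B:>3}: manipulable fraction ≈ {rate:.3f}")  # grows like B/sqrt(n)
```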

We study the proximal sampler of Lee, Shen, and Tian (2021) and obtain new convergence guarantees under weaker assumptions than strong log-concavity: namely, our results hold for (1) weakly log-concave targets, and (2) targets satisfying isoperimetric assumptions which allow for non-log-concavity. We demonstrate our results by obtaining new state-of-the-art sampling guarantees for several classes of target distributions. We also strengthen the connection between the proximal sampler and the proximal method in optimization by interpreting the proximal sampler as an entropically regularized Wasserstein proximal method, and the proximal point method as the limit of the proximal sampler with vanishing noise.
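
For reference, the proximal sampler alternates a forward Gaussian step $y_k \sim \mathcal{N}(x_k, \eta I)$ with a restricted Gaussian oracle (RGO) step $x_{k+1} \sim \pi(x)\exp(-\|x - y_k\|^2 / 2\eta)$, suitably normalised. A minimal sketch for a 1-D Gaussian target, where the RGO is available in closed form (the parameter values are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, eta, steps = 4.0, 0.5, 50_000     # target variance, step size, iterations
x, samples = 0.0, []
for _ in range(steps):
    y = rng.normal(x, np.sqrt(eta))              # forward step: y | x
    post_mean = y * sigma2 / (sigma2 + eta)      # RGO step: x | y is Gaussian
    post_var = sigma2 * eta / (sigma2 + eta)
    x = rng.normal(post_mean, np.sqrt(post_var))
    samples.append(x)

print(np.var(samples))    # ≈ 4.0: the chain leaves N(0, sigma2) invariant
```

As the abstract notes, letting the noise $\eta \to 0$ recovers the proximal point method of optimization.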

We give an efficient perfect sampling algorithm for weighted, connected induced subgraphs (or graphlets) of rooted, bounded degree graphs under a vertex-percolation subcriticality condition. We show that this subcriticality condition is optimal in the sense that the problem of (approximately) sampling weighted rooted graphlets becomes impossible for infinite graphs and intractable for finite graphs if the condition does not hold. We apply our rooted graphlet sampling algorithm as a subroutine to give a fast perfect sampling algorithm for polymer models and a fast perfect sampling algorithm for weighted non-rooted graphlets in finite graphs, two widely studied yet very different problems. We apply this polymer model algorithm to give improved sampling algorithms for spin systems at low temperatures on expander graphs and other structured families of graphs: under the least restrictive conditions known, we give near-linear-time algorithms, while previous algorithms in these regimes required large polynomial running times.
