
Given a set system $(E, \mathcal{P})$ with $\rho \in [0, 1]^E$ and $\pi \in [0,1]^{ \mathcal{P}}$, our goal is to find a probability distribution for a random set $S \subseteq E$ such that $\operatorname{Pr}[e \in S] = \rho_e$ for all $e \in E$ and $\operatorname{Pr}[P \cap S \neq \emptyset] \geq \pi_P$ for all $P \in \mathcal{P}$. We extend the results of Dahan, Amin, and Jaillet (MOR 2022) who studied this problem motivated by a security game in a directed acyclic graph (DAG). We focus on the setting where $\pi$ is of the affine form $\pi_P = 1 - \sum_{e \in P} \mu_e$ for $\mu \in [0, 1]^E$. A necessary condition for the existence of the desired distribution is that $\sum_{e \in P} \rho_e \geq \pi_P$ for all $P \in \mathcal{P}$. We show that this condition is sufficient if and only if $\mathcal{P}$ has the weak max-flow/min-cut property. We further provide an efficient combinatorial algorithm for computing the corresponding distribution in the special case where $(E, \mathcal{P})$ is an abstract network. As a consequence, equilibria for the security game by Dahan et al. can be efficiently computed in a wide variety of settings (including arbitrary digraphs). As a subroutine of our algorithm, we provide a combinatorial algorithm for computing shortest paths in abstract networks, partially answering an open question by McCormick (SODA 1996). We further show that a conservation law proposed by Dahan et al. for the requirement vector $\pi$ in DAGs can be reduced to the setting of affine requirements described above.
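
The necessary condition above is easy to test computationally. The following minimal Python sketch (illustrative only; the toy set system, $\rho$, and $\mu$ are placeholders, and this is not the paper's algorithm) checks whether $\sum_{e \in P} \rho_e \geq \pi_P$ holds for every $P \in \mathcal{P}$ under affine requirements.

```python
# Illustrative sketch only (not the paper's algorithm): check the necessary
# condition sum_{e in P} rho_e >= pi_P for affine requirements
# pi_P = 1 - sum_{e in P} mu_e.  The toy set system below is a placeholder.

def necessary_condition_holds(paths, rho, mu):
    for P in paths:
        pi_P = max(0.0, 1.0 - sum(mu[e] for e in P))  # affine requirement, clipped to [0, 1]
        if sum(rho[e] for e in P) < pi_P:
            return False
    return True

# Two edge-disjoint paths over four elements.
paths = [("a", "b"), ("c", "d")]
rho = {"a": 0.5, "b": 0.5, "c": 0.3, "d": 0.3}
mu = {"a": 0.1, "b": 0.2, "c": 0.2, "d": 0.2}
print(necessary_condition_holds(paths, rho, mu))  # True for this toy instance
```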

Related Content

In this paper, we consider the limiting distribution of the maximum interpoint Euclidean distance $M_n=\max_{1 \leq i<j \leq n}\left\|\boldsymbol{X}_i-\boldsymbol{X}_j\right\|$, where $\boldsymbol{X}_1, \boldsymbol{X}_2, \ldots, \boldsymbol{X}_n$ is a random sample from a $p$-dimensional population with dependent sub-Gaussian components. When the dimension tends to infinity with the sample size, we prove that $M_n^2$, under a suitable normalization, asymptotically obeys a Gumbel-type distribution. The proofs mainly rely on the Stein-Chen Poisson approximation method and high-dimensional Gaussian approximation.
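
As a concrete illustration of the statistic (not of the limit theorem itself), the snippet below computes $M_n$ for a simulated high-dimensional sample; the i.i.d. Gaussian entries are a stand-in for the paper's dependent sub-Gaussian setting.

```python
# Computes the maximum interpoint distance M_n for a simulated sample.
# The i.i.d. Gaussian entries are a placeholder; they neither reproduce the
# dependent sub-Gaussian setting nor verify the Gumbel limit.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 1000                       # sample size and dimension
X = rng.standard_normal((n, p))

# ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 <x_i, x_j>, via the Gram matrix.
sq_norms = (X ** 2).sum(axis=1)
sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
np.fill_diagonal(sq_dists, -np.inf)    # exclude the diagonal i == j
M_n = np.sqrt(sq_dists.max())
print(M_n)
```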

Complexity classes such as $\#\mathbf{P}$, $\oplus\mathbf{P}$, $\mathbf{GapP}$, $\mathbf{OptP}$, $\mathbf{NPMV}$, or the class of fuzzy languages realised by polynomial-time fuzzy nondeterministic Turing machines, can all be described in terms of a class $\mathbf{NP}[S]$ for a suitable semiring $S$, defined via weighted Turing machines over $S$ similarly as $\mathbf{NP}$ is defined via the classical nondeterministic Turing machines. Other complexity classes of decision problems can be lifted to the quantitative world using the same recipe as well, and the resulting classes relate to the original ones in the same way as weighted automata or logics relate to their unweighted counterparts. The article surveys these too-little-known connexions between weighted automata theory and computational complexity theory implicit in the existing literature, suggests a systematic approach to the study of weighted complexity classes, and presents several new observations strengthening the relation between both fields. In particular, it is proved that a natural extension of the Boolean satisfiability problem to weighted propositional logic is complete for the class $\mathbf{NP}[S]$ when $S$ is a finitely generated semiring. Moreover, a class of semiring-valued functions $\mathbf{FP}[S]$ is introduced for each semiring $S$ as a counterpart to the class $\mathbf{P}$, and the relations between $\mathbf{FP}[S]$ and $\mathbf{NP}[S]$ are considered.
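
To make the weighted-SAT completeness result concrete, here is a hedged brute-force sketch of one natural semiring-weighted SAT variant (the paper's precise weighting scheme may differ): each assignment is weighted by a product of per-variable weights in $S$, and the value of the formula is the semiring sum over satisfying assignments. Over the Boolean semiring this specialises to SAT; over $(\mathbb{N}, +, \cdot)$ with unit weights it specialises to #SAT.

```python
# Hedged sketch of a semiring-weighted SAT: sum, over satisfying assignments,
# of the product of per-variable weights.  Not the paper's exact definition.
from itertools import product

def weighted_sat(clauses, num_vars, zero, one, add, mul, weight):
    total = zero
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[v] == sign for v, sign in clause) for clause in clauses):
            w = one
            for v, value in enumerate(assignment):
                w = mul(w, weight(v, value))
            total = add(total, w)
    return total

# (x1 or not x2) and (x2 or x3); sign True denotes a positive literal.
clauses = [[(0, True), (1, False)], [(1, True), (2, True)]]
# Counting semiring with unit weights -> number of satisfying assignments (4).
print(weighted_sat(clauses, 3, 0, 1, lambda a, b: a + b, lambda a, b: a * b,
                   lambda v, val: 1))
```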

Existing out-of-distribution (OOD) methods have shown great success on balanced datasets but become ineffective in long-tailed recognition (LTR) scenarios where 1) OOD samples are often wrongly classified into head classes and/or 2) tail-class samples are treated as OOD samples. To address these issues, current studies fit a prior distribution of auxiliary/pseudo OOD data to the long-tailed in-distribution (ID) data. However, it is difficult to obtain such an accurate prior distribution given the unknowingness of real OOD samples and heavy class imbalance in LTR. A straightforward solution to avoid the requirement of this prior is to learn an outlier class to encapsulate the OOD samples. The main challenge is then to tackle the aforementioned confusion between OOD samples and head/tail-class samples when learning the outlier class. To this end, we introduce a novel calibrated outlier class learning (COCL) approach, in which 1) a debiased large margin learning method is introduced in the outlier class learning to distinguish OOD samples from both head and tail classes in the representation space and 2) an outlier-class-aware logit calibration method is defined to enhance the long-tailed classification confidence. Extensive empirical results on three popular benchmarks CIFAR10-LT, CIFAR100-LT, and ImageNet-LT demonstrate that COCL substantially outperforms state-of-the-art OOD detection methods in LTR while being able to improve the classification accuracy on ID data. Code is available at //github.com/mala-lab/COCL.
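
The PyTorch fragment below is only a hedged sketch of the two general ingredients named in the abstract, namely an appended outlier class and a prior-aware logit calibration in the spirit of logit adjustment; the tensor shapes, the adjustment rule, and the temperature parameter are illustrative assumptions, not the authors' exact COCL formulation.

```python
# Hedged sketch, not the exact COCL method: (i) the classifier has K ID classes
# plus one appended outlier class, and (ii) ID logits are debiased with the
# long-tailed class priors before prediction, in the spirit of logit adjustment.
import torch
import torch.nn.functional as F

def calibrated_predict(logits, class_priors, tau=1.0):
    """logits: (B, K+1), last column = outlier class; class_priors: (K,)."""
    id_logits = logits[:, :-1] - tau * torch.log(class_priors)  # counteract head-class bias
    ood_score = F.softmax(logits, dim=1)[:, -1]                 # outlier-class probability as OOD score
    return id_logits.argmax(dim=1), ood_score

logits = torch.randn(4, 11)              # toy batch, K = 10 ID classes + 1 outlier class
priors = torch.full((10,), 0.1)          # uniform priors, just for the demo
preds, ood_scores = calibrated_predict(logits, priors)
print(preds, ood_scores)
```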

In this paper, we describe an algorithm for approximating functions of the form $f(x) = \langle \sigma(\mu), x^\mu \rangle$ over $[0,1] \subset \mathbb{R}$, where $\sigma(\mu)$ is some distribution supported on $[a,b]$, with $0 <a < b < \infty$. One example from this class of functions is $x^c (\log{x})^m=(-1)^m \langle \delta^{(m)}(\mu-c), x^\mu \rangle$, where $a\leq c \leq b$ and $m \geq 0$ is an integer. Given the desired accuracy $\epsilon$ and the values of $a$ and $b$, our method determines a priori a collection of non-integer powers $t_1$, $t_2$, $\ldots$, $t_N$, so that the functions are approximated by a series of the form $f(x)\approx \sum_{j=1}^N c_j x^{t_j}$, and a set of collocation points $x_1$, $x_2$, $\ldots$, $x_N$, such that the expansion coefficients can be found by collocating the function at these points. We prove that our method has a small uniform approximation error, proportional to $\epsilon$ multiplied by some small constants. We demonstrate the performance of our algorithm with several numerical experiments, and show that the number of singular powers and collocation points grows as $N=O(\log{\frac{1}{\epsilon}})$.
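
The collocation step itself can be sketched in a few lines. In the snippet below the powers $t_j$ and collocation points $x_j$ are chosen ad hoc rather than by the paper's a priori construction, so it only illustrates how the coefficients $c_j$ are recovered by collocation.

```python
# Sketch of the collocation step only: with candidate powers t_1..t_N and
# collocation points x_1..x_N (chosen ad hoc here, not by the paper's a priori
# construction), solve A c = f(x) with A_ij = x_i^{t_j} for the coefficients c.
import numpy as np

def fit_singular_powers(f, powers, points):
    A = points[:, None] ** powers[None, :]             # collocation matrix A_ij = x_i^{t_j}
    return np.linalg.lstsq(A, f(points), rcond=None)[0]

f = lambda x: np.sqrt(x) * np.log(x)                   # example target with a singular power
powers = np.linspace(0.3, 0.8, 8)                      # illustrative non-integer powers
points = np.linspace(1e-3, 1.0, 8)                     # illustrative collocation points
c = fit_singular_powers(f, powers, points)
x = np.linspace(1e-3, 1.0, 1000)
error = np.abs((x[:, None] ** powers[None, :]) @ c - f(x)).max()
print(error)   # crude check; the ad hoc choices above carry no accuracy guarantee
```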

In the Multiagent Path Finding problem (MAPF for short), we focus on efficiently finding non-colliding paths for a set of $k$ agents on a given graph $G$, where each agent seeks a path from its source vertex to a target. An important measure of the quality of the solution is the length of the proposed schedule $\ell$, that is, the length of a longest path (including the waiting time). In this work, we propose a systematic study under the parameterized complexity framework. The hardness results we provide align with many heuristics used for this problem, whose running time could potentially be improved based on our fixed-parameter tractability results. We show that MAPF is W[1]-hard with respect to $k$ (even if $k$ is combined with the maximum degree of the input graph). The problem remains NP-hard in planar graphs even if the maximum degree and the makespan $\ell$ are fixed constants. On the positive side, we show an FPT algorithm for $k+\ell$. As we delve further, the structure of $G$ comes into play. We give an FPT algorithm for parameter $k$ plus the diameter of the graph $G$. The MAPF problem is W[1]-hard for cliquewidth of $G$ plus $\ell$ while it is FPT for treewidth of $G$ plus $\ell$.
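
For concreteness, the helper below (not from the paper) checks whether a proposed schedule is feasible: every agent starts at its source and ends at its target, moves only along edges of $G$ or waits, and no two agents collide at a vertex or swap along an edge; the makespan $\ell$ is then simply the schedule length minus one.

```python
# Illustrative MAPF schedule verifier (not from the paper).  adj maps each
# vertex to its neighbour set; schedules[i] is the vertex sequence of agent i,
# all padded to the same length (waiting = repeating the current vertex).
def schedule_feasible(adj, schedules, sources, targets):
    T = len(schedules[0])
    if any(len(s) != T for s in schedules):
        return False
    for s, src, tgt in zip(schedules, sources, targets):
        if s[0] != src or s[-1] != tgt:
            return False
        if any(u != v and v not in adj[u] for u, v in zip(s, s[1:])):
            return False                                  # move along a non-edge
    for t in range(T):
        if len({s[t] for s in schedules}) < len(schedules):
            return False                                  # vertex conflict
    for t in range(T - 1):
        for i in range(len(schedules)):
            for j in range(i + 1, len(schedules)):
                if schedules[i][t] == schedules[j][t + 1] and schedules[i][t + 1] == schedules[j][t]:
                    return False                          # edge (swap) conflict
    return True

# Path graph 1-2-3-4: agent A goes 1 -> 3, agent B waits at 4; makespan = 2.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(schedule_feasible(adj, [[1, 2, 3], [4, 4, 4]], [1, 4], [3, 4]))  # True
```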

For which unary predicates $P_1, \ldots, P_m$ is the MSO theory of the structure $\langle \mathbb{N}; <, P_1, \ldots, P_m \rangle$ decidable? We survey the state of the art, leading us to investigate combinatorial properties of almost-periodic, morphic, and toric words. In doing so, we show that if each $P_i$ can be generated by a toric dynamical system of a certain kind, then the attendant MSO theory is decidable.
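
As a concrete instance of the morphic words mentioned above, the Thue-Morse word is the fixed point of the morphism $0 \mapsto 01$, $1 \mapsto 10$; a predicate $P$ can then be read off as the set of positions carrying a $1$. The snippet is purely illustrative and is not tied to the paper's decidability arguments.

```python
# Generates a prefix of a morphic word by iterating its morphism from a seed.
# Example: the Thue-Morse word, fixed point of 0 -> 01, 1 -> 10.
def morphic_prefix(morphism, seed, length):
    word = seed
    while len(word) < length:
        word = "".join(morphism[c] for c in word)
    return word[:length]

print(morphic_prefix({"0": "01", "1": "10"}, "0", 32))
# 01101001100101101001011001101001
```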

Given a graph $G$, an integer $k\geq 0$, and a non-negative integral function $f:V(G) \rightarrow \mathbb{N}$, the {\sc Vector Domination} problem asks whether a set $S$ of vertices, of cardinality $k$ or less, exists in $G$ so that every vertex $v \in V(G)-S$ has at least $f(v)$ neighbors in $S$. The problem generalizes several domination problems and it has also been shown to generalize Bounded-Degree Vertex Deletion. In this paper, the parameterized version of Vector Domination is studied when the input graph is planar. A linear problem kernel is presented.
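
The definition is easy to check by brute force on small instances. The snippet below (exponential in $|V(G)|$ and unrelated to the paper's linear kernel) simply tests whether some set $S$ with $|S| \leq k$ satisfies $|N(v) \cap S| \geq f(v)$ for every $v \notin S$.

```python
# Brute-force illustration of the Vector Domination definition (exponential
# time; not the paper's kernelization).
from itertools import combinations

def has_vector_dominating_set(adj, f, k):
    vertices = list(adj)
    for size in range(k + 1):
        for S in combinations(vertices, size):
            S = set(S)
            if all(len(adj[v] & S) >= f[v] for v in vertices if v not in S):
                return True
    return False

# Toy instance: a star with centre c and leaves l1..l3, f == 1 everywhere.
adj = {"c": {"l1", "l2", "l3"}, "l1": {"c"}, "l2": {"c"}, "l3": {"c"}}
f = {v: 1 for v in adj}
print(has_vector_dominating_set(adj, f, 1))   # True: S = {c} dominates every leaf
```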

Dimensional analysis (DA) pays attention to fundamental physical dimensions such as length and mass when modelling scientific and engineering systems. It goes back at least a century to Buckingham's Pi theorem, which characterizes a scientifically meaningful model in terms of a limited number of dimensionless variables. The methodology has only been exploited relatively recently by statisticians for design and analysis of experiments, however, and computer experiments in particular. The basic idea is to build models in terms of new dimensionless quantities derived from the original input and output variables. A scientifically valid formulation has the potential for improved prediction accuracy in principle, but the implementation of DA is far from straightforward. There can be a combinatorial number of possible models satisfying the conditions of the theory. Empirical approaches for finding effective derived variables will be described, and improvements in prediction accuracy will be demonstrated. As DA's dimensionless quantities for a statistical model typically compare the original variables rather than use their absolute magnitudes, DA is less dependent on the choice of experimental ranges in the training data. Hence, we are also able to illustrate sustained accuracy gains even when extrapolating substantially outside the training data.
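
The core of Buckingham's Pi theorem can be made concrete with a few lines of linear algebra: dimensionless groups correspond to vectors in the null space of the dimension matrix. The pendulum example below (period $T$, length $L$, mass $m$, gravity $g$) is a textbook illustration, not one of the paper's case studies.

```python
# Dimensionless groups from the null space of the dimension matrix.
# Columns: T, L, m, g; rows: exponents of the base dimensions [time, length, mass].
import sympy as sp

D = sp.Matrix([[1, 0, 0, -2],
               [0, 1, 0,  1],
               [0, 0, 1,  0]])
for v in D.nullspace():
    print(v.T)   # [2, -1, 0, 1]: the single dimensionless group T^2 g / L
```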

We provide an algorithm that maintains, against an adaptive adversary, a $(1-\varepsilon)$-approximate maximum matching in an $n$-node, $m$-edge general (not necessarily bipartite) undirected graph undergoing edge deletions, with high probability, in (amortized) $O(\mathrm{poly}(\varepsilon^{-1}, \log n))$ time per update. We also obtain the same update time for maintaining a fractional approximate weighted matching (and hence an approximation to the value of the maximum weight matching) and an integral approximate weighted matching in dense graphs. Our unweighted result improves upon the prior state-of-the-art which includes a $\mathrm{poly}(\log{n}) \cdot 2^{O(1/\varepsilon^2)}$ update time [Assadi-Bernstein-Dudeja 2022] and an $O(\sqrt{m} \varepsilon^{-2})$ update time [Gupta-Peng 2013], and our weighted result improves upon the $O(\sqrt{m}\varepsilon^{-O(1/\varepsilon)}\log{n})$ update time due to [Gupta-Peng 2013]. To obtain our results, we generalize a recent optimization approach to dynamic algorithms from [Jambulapati-Jin-Sidford-Tian 2022]. We show that repeatedly solving entropy-regularized optimization problems yields a lazy updating scheme for fractional decremental problems with a near-optimal number of updates. To apply this framework we develop optimization methods compatible with it and new dynamic rounding algorithms for the matching polytope.
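
As a point of reference only, the folklore lazy-rebuild scheme sketched below conveys what "lazy updating" buys in the decremental setting; it uses exact rebuilds via networkx, is not robust against an adaptive adversary, and is not the entropy-regularised method of the paper.

```python
# Folklore lazy-rebuild sketch for decremental approximate matching (not the
# paper's entropy-regularised framework): keep a maximum matching and rebuild
# only after an eps-fraction of its edges has been deleted, so each rebuild is
# amortised over eps * |M| deletions.
import networkx as nx

class LazyDecrementalMatching:
    def __init__(self, G, eps):
        self.G, self.eps = G.copy(), eps
        self._rebuild()

    def _rebuild(self):
        matching = nx.max_weight_matching(self.G, maxcardinality=True)
        self.matching = {frozenset(e) for e in matching}
        self.budget = int(self.eps * len(self.matching))

    def delete_edge(self, u, v):
        self.G.remove_edge(u, v)
        if frozenset((u, v)) in self.matching:
            self.matching.discard(frozenset((u, v)))
            self.budget -= 1
            if self.budget < 0:
                self._rebuild()

dm = LazyDecrementalMatching(nx.cycle_graph(8), eps=0.25)
dm.delete_edge(0, 1)
print(len(dm.matching))   # 3 or 4, depending on whether the deleted edge was matched
```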

Neural machine translation (NMT) is a deep learning based approach for machine translation, which yields the state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and monolingual corpora for in-domain translation, is therefore very important for domain-specific translation. In this paper, we give a comprehensive survey of the state-of-the-art domain adaptation techniques for NMT.
