
A {\em bipartite tournament} is a directed graph $T:=(A \cup B, E)$ such that every pair of vertices $(a,b)$, $a\in A$, $b\in B$, is connected by an arc, and no arc connects two vertices of $A$ or two vertices of $B$. A {\em feedback vertex set} is a set $S$ of vertices in $T$ such that $T - S$ is acyclic. In this article we consider the {\sc Feedback Vertex Set} problem in bipartite tournaments. Here the input is a bipartite tournament $T$ on $n$ vertices together with an integer $k$, and the task is to determine whether $T$ has a feedback vertex set of size at most $k$. We give a new algorithm for {\sc Feedback Vertex Set in Bipartite Tournaments}. The running time of our algorithm is upper-bounded by $O(1.6181^k + n^{O(1)})$, improving over the previously best known algorithm with running time $2^kk^{O(1)} + n^{O(1)}$ [Hsiao, ISAAC 2011]. As a by-product, we also obtain the fastest currently known exact exponential-time algorithm for the problem, with running time $O(1.3820^n)$.
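
To make the problem statement concrete, here is a minimal sketch (not the paper's branching algorithm) that verifies a candidate feedback vertex set of a bipartite tournament by deleting it and checking acyclicity of the remaining digraph via Kahn's algorithm. The graph encoding and helper names are illustrative.

```python
from collections import deque

def is_acyclic(vertices, arcs):
    """Check acyclicity of a digraph via Kahn's algorithm (topological sort)."""
    indeg = {v: 0 for v in vertices}
    out = {v: [] for v in vertices}
    for (u, v) in arcs:
        out[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in vertices if indeg[v] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for w in out[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(vertices)

def is_feedback_vertex_set(A, B, arcs, S):
    """S is a feedback vertex set of the bipartite tournament T = (A ∪ B, arcs)
    exactly when deleting S leaves an acyclic digraph."""
    remaining = (set(A) | set(B)) - set(S)
    kept = [(u, v) for (u, v) in arcs if u in remaining and v in remaining]
    return is_acyclic(remaining, kept)

# Toy bipartite tournament on A = {a1, a2}, B = {b1, b2} containing a 4-cycle.
A, B = ["a1", "a2"], ["b1", "b2"]
arcs = [("a1", "b1"), ("b1", "a2"), ("a2", "b2"), ("b2", "a1")]
print(is_feedback_vertex_set(A, B, arcs, set()))    # False: the 4-cycle remains
print(is_feedback_vertex_set(A, B, arcs, {"a1"}))   # True: deleting a1 breaks it
```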

Related content

We study the discrete quantum walk on a regular graph $X$ that assigns negative identity coins to marked vertices $S$ and Grover coins to the unmarked ones. We find combinatorial bases for the eigenspaces of the transition matrix, and derive a formula for the average vertex mixing matrix $\AMM$. We then find bounds for entries in $\AMM$, and study when these bounds are tight. In particular, the average probabilities between marked vertices are lower bounded by a matrix determined by the induced subgraph $X[S]$, the vertex-deleted subgraph $X\backslash S$, and the edge-deleted subgraph $X-E(S)$. We show this bound is achieved if and only if the marked vertices have walk-equitable neighborhoods in the vertex-deleted subgraph. Finally, for quantum walks attaining this bound, we determine when $\AMM[S,S]$ is symmetric, positive semidefinite, or uniform.
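
For intuition, the sketch below builds one common arc-space version of such a walk on a small regular graph (the complete graph $K_4$ with one marked vertex) and numerically approximates the average vertex mixing matrix by time-averaging. The conventions chosen here (start state uniform over outgoing arcs, vertex probability summed over outgoing arcs, coin applied before the arc reversal) are assumptions and may differ from the paper's.

```python
import numpy as np

# Small 3-regular example: the complete graph K4, with vertex 0 marked.
n, d = 4, 3
adj = [[v for v in range(n) if v != u] for u in range(n)]
arcs = [(u, v) for u in range(n) for v in adj[u]]            # arc-space basis
idx = {a: i for i, a in enumerate(arcs)}
marked = {0}

# Coin operator: block-diagonal over tails; Grover coin (2/d)J - I on unmarked
# vertices, negative identity on marked ones.
C = np.zeros((len(arcs), len(arcs)))
for u in range(n):
    rows = [idx[(u, v)] for v in adj[u]]
    block = -np.eye(d) if u in marked else (2.0 / d) * np.ones((d, d)) - np.eye(d)
    C[np.ix_(rows, rows)] = block

# Arc-reversal (shift) operator: sends arc (u, v) to (v, u).
R = np.zeros_like(C)
for (u, v), i in idx.items():
    R[idx[(v, u)], i] = 1.0

U = R @ C                                                     # one step of the walk

# Approximate the average vertex mixing matrix by averaging the vertex-to-vertex
# probabilities over many steps.
T = 2000
M = np.zeros((n, n))
for u in range(n):
    state = np.zeros(len(arcs))
    state[[idx[(u, v)] for v in adj[u]]] = 1.0 / np.sqrt(d)   # start at vertex u
    for _ in range(T):
        state = U @ state
        for v in range(n):
            M[u, v] += sum(abs(state[idx[(v, w)]]) ** 2 for w in adj[v])
M /= T
print(np.round(M, 3))    # rows sum to ~1
```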

Schr\"{o}dinger Bridges (SB) are diffusion processes that steer, in finite time, a given initial distribution to another final one while minimizing a suitable cost functional. Although various methods for computing SBs have recently been proposed in the literature, most of these approaches require computationally expensive training schemes, even for solving low-dimensional problems. In this work, we propose an analytic parametrization of a set of feasible policies for steering the distribution of a dynamical system from one Gaussian Mixture Model (GMM) to another. Instead of relying on standard non-convex optimization techniques, the optimal policy within the set can be approximated as the solution of a low-dimensional linear program whose dimension scales linearly with the number of components in each mixture. Furthermore, our method generalizes naturally to more general classes of dynamical systems such as controllable Linear Time-Varying systems that cannot currently be solved using traditional neural SB approaches. We showcase the potential of this approach in low-to-moderate dimensional problems such as image-to-image translation in the latent space of an autoencoder, and various other examples. We also benchmark our approach on an Entropic Optimal Transport (EOT) problem and show that it outperforms state-of-the-art methods in cases where the boundary distributions are mixture models while requiring virtually no training.

We consider temporal numeric planning problems $\Pi$ expressed in PDDL2.1 level 3, and show how to produce SMT formulas $(i)$ whose models correspond to valid plans of $\Pi$, and $(ii)$ that extend the recently proposed planning with patterns approach from the numeric to the temporal case. We prove the correctness and completeness of the approach and show that it performs very well on 10 domains with required concurrency.

A matrix $\Phi \in \mathbb{R}^{Q \times N}$ satisfies the restricted isometry property if $\|\Phi x\|_2^2$ is approximately equal to $\|x\|_2^2$ for all $k$-sparse vectors $x$. We give a construction of RIP matrices with the optimal $Q = O(k \log(N/k))$ rows using $O(k\log(N/k)\log(k))$ bits of randomness. The main technical ingredient is an extension of the Hanson-Wright inequality to $\epsilon$-biased distributions.
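
For reference, the standard quantitative form of the restricted isometry property with constant $\delta_k$ reads
\[
(1 - \delta_k)\,\|x\|_2^2 \;\le\; \|\Phi x\|_2^2 \;\le\; (1 + \delta_k)\,\|x\|_2^2
\qquad \text{for all $k$-sparse } x \in \mathbb{R}^N .
\]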

When rows of an $n \times d$ matrix $A$ are given in a stream, we study algorithms for approximating the top eigenvector of the matrix ${A}^TA$ (equivalently, the top right singular vector of $A$). We consider worst case inputs $A$ but assume that the rows are presented to the streaming algorithm in a uniformly random order. We show that when the gap parameter $R = \sigma_1(A)^2/\sigma_2(A)^2 = \Omega(1)$, then there is a randomized algorithm that uses $O(h \cdot d \cdot \operatorname{polylog}(d))$ bits of space and outputs a unit vector $v$ that has a correlation $1 - O(1/\sqrt{R})$ with the top eigenvector $v_1$. Here $h$ denotes the number of \emph{heavy rows} in the matrix, defined as the rows with Euclidean norm at least $\|{A}\|_F/\sqrt{d \cdot \operatorname{polylog}(d)}$. We also provide a lower bound showing that any algorithm using $O(hd/R)$ bits of space can obtain at most $1 - \Omega(1/R^2)$ correlation with the top eigenvector. Thus, parameterizing the space complexity in terms of the number of heavy rows is necessary for high accuracy solutions. Our results improve upon the $R = \Omega(\log n \cdot \log d)$ requirement in a recent work of Price and Xun (FOCS 2024). We note that the algorithm of Price and Xun works for arbitrary order streams whereas our algorithm requires a stronger assumption that the rows are presented in a uniformly random order. We additionally show that the gap requirements in their analysis can be brought down to $R = \Omega(\log^2 d)$ for arbitrary order streams and $R = \Omega(\log d)$ for random order streams. The requirement of $R = \Omega(\log d)$ for random order streams is nearly tight for their analysis as we obtain a simple instance with $R = \Omega(\log d/\log\log d)$ for which their algorithm, with any fixed learning rate, cannot output a vector approximating the top eigenvector $v_1$.
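
For context, streaming algorithms of this kind are often built around Oja-style updates with a fixed learning rate; below is a minimal sketch of such an update processing rows one at a time (not necessarily the algorithm of Price and Xun or the one analyzed here), together with a toy random-order stream with a planted spectral gap.

```python
import numpy as np

def oja_top_eigenvector(row_stream, d, lr=0.01, rng=None):
    """One-pass Oja update: maintain a unit vector v and nudge it toward each
    incoming row's direction; v approximates the top eigenvector of A^T A."""
    rng = np.random.default_rng(rng)
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    for a in row_stream:                  # a is one row of A
        v = v + lr * a * (a @ v)          # ascent step on v^T A^T A v
        v /= np.linalg.norm(v)
    return v

# Toy check: rows with a planted top direction v1, presented in random order.
rng = np.random.default_rng(0)
n, d = 5000, 50
v1 = rng.standard_normal(d); v1 /= np.linalg.norm(v1)
A = rng.standard_normal((n, d)) + 4.0 * rng.standard_normal((n, 1)) * v1
rows = A[rng.permutation(n)]              # uniformly random row order
v = oja_top_eigenvector(rows, d, lr=1e-3)
print(abs(v @ v1))                        # correlation with the planted direction
```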

We consider the dunking problem: a solid body at uniform temperature $T_{\text i}$ is placed in an environment characterized by far-field temperature $T_\infty$ and spatially uniform time-independent heat transfer coefficient. We permit heterogeneous material composition: spatially dependent density, specific heat, and thermal conductivity. Mathematically, the problem is described by a heat equation with Robin boundary conditions. The crucial parameter is the Biot number -- a nondimensional heat transfer (Robin) coefficient; we consider the limit of small Biot number. We introduce first-order and second-order asymptotic approximations (in Biot number) for several quantities of interest, notably the spatial domain average temperature as a function of time; the first-order approximation is simply the standard engineering `lumped' model. We then provide asymptotic error estimates for the first-order and second-order approximations for small Biot number, and also, for the first-order approximation, alternative strict bounds valid for all Biot numbers. Companion numerical solutions of the heat equation confirm the effectiveness of the error estimates for small Biot number. The second-order approximation and the first-order and second-order error estimates depend on several functional outputs associated with an elliptic partial differential equation; the latter is derived from Biot-sensitivity analysis of the heat equation eigenproblem in the limit of small Biot number. Most important is $\phi$, the only functional output required for the first-order error estimates; $\phi$ admits a simple physical interpretation in terms of conduction length scale. We investigate the domain and property dependence of $\phi$: most notably, we characterize spatial domains for which the standard lumped-model error criterion -- Biot number (based on volume-to-area length scale) small -- is deficient.
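
For orientation, the governing problem and the first-order (lumped) approximation take the following standard forms, written here in homogeneous-property notation for brevity (the paper itself permits heterogeneous properties): with density $\rho$, specific heat $c$, conductivity $\kappa$, heat transfer coefficient $h$, and body volume $V$ and surface area $A$,
\[
\rho c \,\frac{\partial T}{\partial t} = \nabla \cdot (\kappa \nabla T) \ \ \text{in } \Omega, \qquad
-\kappa \,\frac{\partial T}{\partial n} = h\,(T - T_\infty) \ \ \text{on } \partial\Omega, \qquad
T(\cdot,0) = T_{\text i},
\]
with Biot number $\mathrm{Bi} = h L / \kappa$ based on the length scale $L = V/A$. The lumped model for the domain-average temperature $\bar T(t)$ is then
\[
\rho c V \,\frac{d \bar T}{d t} = -\,h A\,(\bar T - T_\infty), \qquad
\bar T(t) = T_\infty + (T_{\text i} - T_\infty)\, e^{-\,h A\, t/(\rho c V)} .
\]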

We study frequency domain electromagnetic scattering at a bounded, penetrable, and inhomogeneous obstacle $ \Omega \subset \mathbb{R}^3 $. From the Stratton-Chu integral representation, we derive a new representation formula when constant reference coefficients are given for the interior domain. The resulting integral representation contains the usual layer potentials, but also volume potentials on $\Omega$. Then it is possible to follow a single-trace approach to obtain boundary integral equations perturbed by traces of compact volume integral operators with weakly singular kernels. The coupled boundary and volume integral equations are discretized with a Galerkin approach using the usual curl-conforming and div-conforming finite elements on the boundary and in the volume. Compression techniques and special quadrature rules for singular integrands are required for an efficient and accurate method. Numerical experiments provide evidence that our new formulation enjoys promising properties.

Is there a fixed dimension $n$ such that translational tiling of $\mathbb{Z}^n$ with a monotile is undecidable? Several recent results support a positive answer to this question. Greenfeld and Tao disprove the periodic tiling conjecture by showing that an aperiodic monotile exists in sufficiently high dimension $n$ [Ann. Math. 200(2024), 301-363]. In another paper [to appear in J. Eur. Math. Soc.], they also show that if the dimension $n$ is part of the input, then the translational tiling for subsets of $\mathbb{Z}^n$ with one tile is undecidable. These two results are very strong pieces of evidence for the conjecture that translational tiling of $\mathbb{Z}^n$ with a monotile is undecidable, for some fixed $n$. This paper gives another supportive result for this conjecture by showing that translational tiling of the $4$-dimensional space with a set of three connected tiles is undecidable.

We study the problem of privately releasing an approximate minimum spanning tree (MST). Given a graph $G = (V, E, \vec{W})$ where $V$ is a set of $n$ vertices, $E$ is a set of $m$ undirected edges, and $ \vec{W} \in \mathbb{R}^{|E|} $ is an edge-weight vector, our goal is to publish an approximate MST under edge-weight differential privacy, as introduced by Sealfon in PODS 2016, where $V$ and $E$ are considered public and the weight vector is private. Our neighboring relation is $\ell_\infty$-distance on weights: for a sensitivity parameter $\Delta_\infty$, graphs $ G = (V, E, \vec{W}) $ and $ G' = (V, E, \vec{W}') $ are neighboring if $\|\vec{W}-\vec{W}'\|_\infty \leq \Delta_\infty$. Existing private MST algorithms face a trade-off, sacrificing either computational efficiency or accuracy. We show that it is possible to get the best of both worlds: With a suitable random perturbation of the input that does not suffice to make the weight vector private, the result of any non-private MST algorithm will be private and achieves a state-of-the-art error guarantee. Furthermore, by establishing a connection to Private Top-k Selection [Steinke and Ullman, FOCS '17], we give the first privacy-utility trade-off lower bound for MST under approximate differential privacy, demonstrating that the error magnitude, $\tilde{O}(n^{3/2})$, is optimal up to logarithmic factors. That is, our approach matches the time complexity of any non-private MST algorithm and at the same time achieves optimal error. We complement our theoretical treatment with experiments that confirm the practicality of our approach.
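
A minimal sketch of the overall recipe described above: perturb the private edge weights with random noise and then run any off-the-shelf, non-private MST algorithm (plain Kruskal below) on the noisy weights. The Laplace noise and its scale are placeholders, not the paper's calibrated mechanism, and the helper names are illustrative.

```python
import numpy as np

class DSU:
    """Union-find structure for Kruskal's algorithm."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False
        self.parent[rx] = ry
        return True

def private_mst(n, edges, weights, noise_scale, rng=None):
    """edges: list of (u, v); weights: private weight vector (one per edge).
    Perturb the weights, then run plain Kruskal on the perturbed weights."""
    rng = np.random.default_rng(rng)
    noisy = np.asarray(weights, float) + rng.laplace(scale=noise_scale, size=len(edges))
    order = np.argsort(noisy)
    dsu, tree = DSU(n), []
    for i in order:
        u, v = edges[i]
        if dsu.union(u, v):
            tree.append(edges[i])
    return tree                       # approximate MST computed on noisy weights

# Toy example on 4 vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
weights = [1.0, 4.0, 2.0, 3.0, 5.0]
print(private_mst(4, edges, weights, noise_scale=0.5, rng=0))
```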

While existing work in robust deep learning has focused on small pixel-level $\ell_p$ norm-based perturbations, this may not account for perturbations encountered in several real world settings. In many such cases, although test data might not be available, broad specifications about the types of perturbations (such as an unknown degree of rotation) may be known. We consider a setup where robustness is expected over an unseen test domain that is not i.i.d. but deviates from the training domain. While this deviation may not be exactly known, its broad characterization is specified a priori, in terms of attributes. We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attribute space, without having access to the data from the test domain. Our adversarial training solves a min-max optimization problem, with the inner maximization generating adversarial perturbations, and the outer minimization finding model parameters by optimizing the loss on the adversarial perturbations generated by the inner maximization. We demonstrate the applicability of our approach on three types of naturally occurring perturbations -- object-related shifts, geometric transformations, and common image corruptions. Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations. We demonstrate the usefulness of the proposed approach by showing the robustness gains of deep neural networks trained using our adversarial training on MNIST, CIFAR-10, and a new variant of the CLEVR dataset.
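
To illustrate the min-max structure for a single attribute (rotation angle), here is a minimal worst-of-k sketch: the inner maximization is approximated by a simple random search over angles rather than the learned generator described above, and the model, angle range, and data shapes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def worst_of_k_rotation(model, x, y, max_deg=30.0, k=8):
    """Approximate inner maximization: try k random rotation angles within the
    specified attribute range and keep the one (per batch) with the highest loss."""
    worst_x, worst_loss = x, -float("inf")
    with torch.no_grad():
        for _ in range(k):
            angle = float(torch.empty(1).uniform_(-max_deg, max_deg))
            xr = TF.rotate(x, angle)
            loss = F.cross_entropy(model(xr), y)
            if loss.item() > worst_loss:
                worst_x, worst_loss = xr, loss.item()
    return worst_x

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: a standard training step on the adversarially rotated batch."""
    x_adv = worst_of_k_rotation(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Tiny placeholder classifier for 28x28 grayscale inputs (MNIST-like shapes).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
print(adversarial_training_step(model, optimizer, x, y))
```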
