
It was conjectured by Gupta et al. [Combinatorica04] that every planar graph can be embedded into $\ell_1$ with constant distortion. However, given an $n$-vertex weighted planar graph, the best known upper bound on the distortion is only $O(\sqrt{\log n})$, due to Rao [SoCG99]. In this paper we study the case where there is a set $K$ of terminals, and the goal is to embed only the terminals into $\ell_1$ with low distortion. In a seminal paper, Okamura and Seymour [J.Comb.Theory81] showed that if all the terminals lie on a single face, they can be embedded isometrically into $\ell_1$. The more general case, where the set of terminals can be covered by $\gamma$ faces, was studied by Lee and Sidiropoulos [STOC09] and Chekuri et al. [J.Comb.Theory13]. The state of the art is an upper bound of $O(\log \gamma)$ by Krauthgamer, Lee and Rika [SODA19]. Our contribution is a further improvement of the upper bound to $O(\sqrt{\log\gamma})$. Since every planar graph has at most $O(n)$ faces, any further improvement of this result would be a major breakthrough, directly improving upon Rao's long-standing upper bound. Moreover, it is well known that the flow-cut gap equals the distortion of the best embedding into $\ell_1$. Therefore, our result provides a polynomial-time $O(\sqrt{\log \gamma})$-approximation to the sparsest cut problem on planar graphs for the case where all the demand pairs can be covered by $\gamma$ faces.
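For context, recall the standard notion being bounded here: an embedding $f\colon K \to \ell_1$ has distortion at most $c$ if (after rescaling)
\[
d_G(u,v) \;\le\; \lVert f(u) - f(v) \rVert_1 \;\le\; c \cdot d_G(u,v) \qquad \text{for all } u, v \in K,
\]
where $d_G$ is the shortest-path metric of the graph. The flow-cut connection invoked in the last claim says that the worst-case ratio between the sparsest cut and the maximum concurrent flow of an instance with demands supported on $K$ equals the least such $c$ over all embeddings of $K$ into $\ell_1$.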

Related content

Sample selection models are a common methodology for correcting bias induced by data missing not at random. It is well known that these models are not empirically identifiable without exclusion restrictions; that is, there must be variables that predict missingness but do not affect the outcome of interest. The drive to establish this requirement often leads to the inclusion of irrelevant variables in the model. A recent proposal uses the adaptive LASSO to circumvent this problem, but its performance depends on the so-called covariance assumption, which can be violated in small to moderate samples. Additionally, there are as yet no tools for post-selection inference in this model. To address these challenges, we propose two families of spike-and-slab priors for Bayesian variable selection in sample selection models. These prior structures allow for the construction of a Gibbs sampler with tractable conditionals, which is scalable to dimensions of practical interest. We illustrate the performance of the proposed methodology through a simulation study and a comparison against the adaptive LASSO and stepwise selection. We also provide two applications using publicly available real data. An implementation and code to reproduce the results in this paper can be found at //github.com/adam-iqbal/selection-spike-slab
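To make the spike-and-slab mechanism concrete, here is a minimal Gibbs sampler for a point-mass spike-and-slab prior in a plain linear regression, not the authors' sample selection model; all hyperparameter names (`pi`, `tau2`, `a0`, `b0`) are illustrative choices, not taken from the paper.

```python
import numpy as np

def spike_slab_gibbs(X, y, n_iter=2000, pi=0.5, tau2=10.0,
                     a0=2.0, b0=2.0, seed=0):
    """Gibbs sampler for y = X @ beta + eps, eps ~ N(0, sigma2),
    with beta_j = 0 w.p. 1 - pi and beta_j ~ N(0, tau2) w.p. pi.
    A sketch of the generic technique only; the paper's samplers
    target sample selection (outcome + selection) models."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    z = np.zeros(p, dtype=bool)
    sigma2 = 1.0
    keep_z = np.zeros((n_iter, p))
    for it in range(n_iter):
        for j in range(p):
            xj = X[:, j]
            # Residual with coordinate j's contribution removed.
            r = y - X @ beta + xj * beta[j]
            vj = 1.0 / (xj @ xj / sigma2 + 1.0 / tau2)
            muj = vj * (xj @ r) / sigma2
            # Marginal-likelihood ratio of slab vs. spike.
            bf = np.sqrt(vj / tau2) * np.exp(0.5 * muj**2 / vj)
            odds = pi / (1.0 - pi) * bf
            z[j] = rng.random() < odds / (1.0 + odds)
            beta[j] = rng.normal(muj, np.sqrt(vj)) if z[j] else 0.0
        resid = y - X @ beta
        # Conjugate inverse-gamma update for the noise variance.
        sigma2 = 1.0 / rng.gamma(a0 + n / 2.0,
                                 1.0 / (b0 + 0.5 * resid @ resid))
        keep_z[it] = z
    return keep_z.mean(axis=0)  # posterior inclusion probabilities
```

The tractable conditionals mentioned in the abstract are exactly of this flavor: each inclusion indicator and coefficient can be sampled in closed form, which is what keeps the sampler scalable.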

Shape is a powerful tool for understanding point sets. A formal notion of shape is given by $\alpha$-shapes, which generalize the convex hull and provide an adjustable level of detail. Many real-world point sets have an inherent temporal property, as natural processes often happen over time, like lightning strikes during thunderstorms or moving animal swarms. To explore such point sets, where each point is associated with one timestamp, interactive applications may utilize $\alpha$-shapes and allow the user to specify different time windows and $\alpha$-values. We show how to compute the temporal $\alpha$-shape $\alpha_T$, a minimal description of all $\alpha$-shapes over all time windows, in output-sensitive linear time. We also give complexity bounds on $|\alpha_T|$. We use $\alpha_T$ to interactively visualize $\alpha$-shapes of user-specified time windows without having to compute each requested $\alpha$-shape from scratch. Experimental results suggest that our approach outperforms an existing approach by a factor of at least $\sim$52 and that the description we compute has reasonable size in practice. The basis for our algorithm is an existing algorithm that computes all Delaunay triangles over all time windows using $\mathcal{O}(1)$ time per triangle. Our approach generalizes to higher dimensions with the same runtime for fixed $d$.
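For readers unfamiliar with $\alpha$-shapes, the following sketch computes the triangles of a static 2D $\alpha$-complex by filtering a Delaunay triangulation on circumradius (one common convention; some texts parameterize by $1/\alpha$). It covers only the static case, not the paper's temporal description $\alpha_T$.

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_triangles(points, alpha):
    """Return the Delaunay triangles of a 2D point set whose
    circumradius is at most alpha. Union of these triangles is
    (one standard definition of) the alpha-shape's interior."""
    tri = Delaunay(points)
    keep = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        # Circumradius R = (|ab| * |bc| * |ca|) / (4 * area).
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        ab, ac = b - a, c - a
        area = 0.5 * abs(ab[0] * ac[1] - ab[1] * ac[0])
        if area > 0 and la * lb * lc / (4.0 * area) <= alpha:
            keep.append(simplex)
    return np.array(keep)
```

The paper's approach avoids rerunning such a computation per query by precomputing $\alpha_T$ once over all time windows.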

We design a deterministic particle method for the solution of the spatially homogeneous Landau equation with uncertainty. The deterministic particle approximation is based on the reformulation of the Landau equation as a formal gradient flow on the set of probability measures, whereas the propagation of uncertain quantities is computed by means of a stochastic Galerkin (sG) representation of each particle. This approach guarantees spectral accuracy in the uncertainty space while preserving the fundamental structural properties of the model: the positivity of the solution, the conservation of invariant quantities, and the entropy production. We provide regularity results for the particle method in the random space and perform a numerical validation of the particle method on a wide range of test cases.
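In standard stochastic Galerkin notation (ours, not necessarily the paper's), each particle's state is expanded in a generalized polynomial chaos basis in the random input $z$:
\[
v_i(t, z) \;\approx\; \sum_{k=0}^{M} \hat{v}_i^{\,k}(t)\, \Phi_k(z),
\qquad
\hat{v}_i^{\,k}(t) = \mathbb{E}\bigl[\, v_i(t, \cdot)\, \Phi_k \,\bigr],
\]
where $\{\Phi_k\}$ are polynomials orthonormal with respect to the law of $z$. Spectral accuracy in the random space then follows from the smoothness of the particle dynamics in the random parameter.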

A Young diagram $Y$ is called wide if every sub-diagram $Z$ formed by a subset of the rows of $Y$ dominates $Z'$, the conjugate of $Z$. A Young diagram $Y$ is called Latin if its squares can be assigned numbers so that for each $i$, the $i$th row is filled injectively with the numbers $1, \ldots, a_i$, where $a_i$ is the length of the $i$th row of $Y$, and every column is also filled injectively. A conjecture of Chow and Taylor, publicized by Chow, Fan, Goemans, and Vondrak, is that every wide Young diagram is Latin. We prove a dual version of the conjecture.
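For completeness, the dominance order implicit in the definition of wide is the usual one on partitions:
\[
\lambda \trianglerighteq \mu
\quad\Longleftrightarrow\quad
\sum_{i=1}^{j} \lambda_i \;\ge\; \sum_{i=1}^{j} \mu_i \quad \text{for all } j.
\]
For example, $Z = (3,2)$ has conjugate $Z' = (2,2,1)$, and $Z \trianglerighteq Z'$ since $3 \ge 2$ and $3+2 \ge 2+2$.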

If $G$ is a graph, $A$ and $B$ its induced subgraphs, and $f\colon A\to B$ an isomorphism, we say that $f$ is a partial automorphism of $G$. In 1992, Hrushovski proved that graphs have the extension property for partial automorphisms (EPPA, also called the Hrushovski property), that is, for every finite graph $G$ there is a finite graph $H$, its EPPA-witness, such that $G$ is an induced subgraph of $H$ and every partial automorphism of $G$ extends to an automorphism of $H$. The EPPA number of a graph $G$, denoted by $\mathop{\mathrm{eppa}}\nolimits(G)$, is the smallest number of vertices of an EPPA-witness for $G$, and we put $\mathop{\mathrm{eppa}}\nolimits(n) = \max\{\mathop{\mathrm{eppa}}\nolimits(G) : \lvert G\rvert = n\}$. In this note we review the state of the area, prove several lower bounds (in particular, we show that $\mathop{\mathrm{eppa}}\nolimits(n)\geq \frac{2^n}{\sqrt{n}}$, thereby identifying the correct base of the exponential) and pose many open questions. We also briefly discuss EPPA numbers of hypergraphs, directed graphs, and $K_k$-free graphs.

Normal modal logics extending the logic K4.3 of linear transitive frames are known to lack the Craig interpolation property, except for some logics of bounded depth such as S5. We turn this `negative' fact into a research question and pursue a non-uniform approach to Craig interpolation by investigating the following interpolant existence problem: decide whether there exists a Craig interpolant between two given formulas in any fixed logic above K4.3. Using a bisimulation-based characterisation of interpolant existence for descriptive frames, we show that this problem is decidable and coNP-complete for all finitely axiomatisable normal modal logics containing K4.3. It is thus not harder than entailment in these logics, which is in sharp contrast to other recent non-uniform interpolation results. We also extend our approach to Priorean temporal logics (with both past and future modalities) over the standard time flows (the integers, rationals, reals, and finite strict linear orders), none of which is blessed with the Craig interpolation property.
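Recall the property in question (standard definition, added for context): a logic $L$ has the Craig interpolation property if
\[
\models_L \varphi \rightarrow \psi
\;\Longrightarrow\;
\exists \chi \colon\;
\models_L \varphi \rightarrow \chi,\quad
\models_L \chi \rightarrow \psi,\quad
\mathrm{var}(\chi) \subseteq \mathrm{var}(\varphi) \cap \mathrm{var}(\psi).
\]
The interpolant existence problem asks, for a fixed $L$ and given $\varphi, \psi$, whether such a $\chi$ exists, even when $L$ itself does not guarantee one for every valid implication.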

The sub-packetization $\ell$ and the field size $q$ are of paramount importance in constructions of MSR array codes. For optimal-access MSR codes, Balaji et al. proved that $\ell\geq s^{\left\lceil n/s \right\rceil}$, where $s = d-k+1$. Rawat et al. showed that this lower bound is attainable for all admissible values of $d$ when the field size is exponential in $n$. Since then, tremendous effort has been devoted to reducing the field size. However, until now, a reduction to linear field size was only available for $d\in\{k+1,k+2,k+3\}$ and $d=n-1$. In this paper, we construct the first class of explicit optimal-access MSR codes with the smallest sub-packetization $\ell = s^{\left\lceil n/s \right\rceil}$ for all $d$ between $k+1$ and $n-1$, resolving an open problem in the survey (Ramkumar et al., Foundations and Trends in Communications and Information Theory: Vol. 19: No. 4). We further propose another class of explicit MSR code constructions (not optimal-access) with the even smaller sub-packetization $s^{\left\lceil n/(s+1)\right\rceil}$ for all admissible values of $d$, making significant progress on another open problem in the survey. Previously, MSR codes with $\ell=s^{\left\lceil n/(s+1)\right\rceil}$ and $q=O(n)$ were only known for $d=k+1$ and $d=n-1$. The key insight that enables a linear field size in our construction is to reduce the $\binom{n}{r}$ global constraints of non-vanishing determinants to $O_s(n)$ local ones, which is achieved by carefully designing the parity-check matrices.
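For background (standard MSR facts, in our notation): an $(n,k)$ MSR code with repair degree $d$ and sub-packetization $\ell$ repairs a failed node by downloading $\beta = \ell/s$ symbols from each of $d$ helper nodes, where $s = d-k+1$, so the total repair bandwidth meets the cut-set bound
\[
d\beta \;=\; \frac{d\,\ell}{d-k+1},
\]
as opposed to the $k\ell$ symbols needed for naive repair of a generic MDS code. This is why small $\ell$ (and small $q$) at the cut-set bound is the central trade-off in the constructions above.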

The integration of data and knowledge from several sources is known as data fusion. When data are only available in a distributed fashion, or when different sensors are used to infer a quantity of interest, data fusion becomes essential. In Bayesian settings, a priori information about the unknown quantities is available and is possibly shared among the different distributed estimators. When the local estimates are fused, the prior knowledge used to construct the several local posteriors may be overused unless the fusion node accounts for this and corrects it. In this paper, we analyze the effects of shared priors in Bayesian data fusion contexts. For several common fusion rules, our analysis helps to understand the performance behavior as a function of the number of collaborating agents and of the different types of priors. The analysis is performed using two divergences that are common in Bayesian inference, and the generality of the results allows us to analyze very generic distributions. These theoretical results are corroborated through experiments in a variety of estimation and classification problems, including linear and nonlinear models and federated learning schemes.
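A standard illustration of the overused-prior effect (our notation, not necessarily the paper's setup): if $N$ agents observe conditionally independent data $D_1,\dots,D_N$ and each builds its local posterior from the same prior $p(\theta)$, then naive product fusion gives
\[
\prod_{i=1}^{N} p(\theta \mid D_i)
\;\propto\;
p(\theta)^{N} \prod_{i=1}^{N} p(D_i \mid \theta)
\;\propto\;
p(\theta)^{N-1}\; p(\theta \mid D_1, \dots, D_N),
\]
so the shared prior is counted $N$ times instead of once; dividing by $p(\theta)^{N-1}$ restores the correct centralized posterior.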

With rising concern about model interpretability, the application of eXplainable AI (XAI) tools to deepfake detection models has recently become a topic of interest. In image classification tasks, XAI tools highlight the pixels that influence a model's decision. This helps in troubleshooting the model and determining areas that may require further tuning of parameters. With a wide range of tools available, choosing the right tool for a model becomes necessary, as each one may highlight different sets of pixels for a given image. There is therefore a need to evaluate different tools and identify the best-performing ones. Generic XAI evaluation methods, such as insertion or removal of salient pixels/segments, are applicable to general image classification tasks but may produce less meaningful results when applied to deepfake detection models, owing to how these models function. In this paper, we perform experiments to show that generic removal/insertion XAI evaluation methods are not suitable for deepfake detection models, and we propose and implement an XAI evaluation approach specifically suited to them.
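For concreteness, the generic removal ("deletion") metric that the paper argues against looks roughly like the sketch below; `model` and `saliency` are placeholders for any classifier returning a class probability and any XAI attribution map, and the image is assumed grayscale $(h, w)$ for simplicity.

```python
import numpy as np

def deletion_score(model, image, saliency, steps=20, fill=0.0):
    """Generic deletion metric: progressively blank out the most
    salient pixels and record the drop in the model's score.
    Returns the (normalized) area under the score curve; lower
    means the saliency map better identifies decisive pixels."""
    h, w = saliency.shape
    order = np.argsort(saliency, axis=None)[::-1]  # most salient first
    scores = [float(model(image))]
    perturbed = image.copy()
    chunk = max(1, len(order) // steps)
    for start in range(0, len(order), chunk):
        ys, xs = np.unravel_index(order[start:start + chunk], (h, w))
        perturbed[ys, xs] = fill  # remove the next batch of pixels
        scores.append(float(model(perturbed)))
    return np.trapz(scores, dx=1.0 / (len(scores) - 1))
```

The paper's point is that this style of perturbation interacts badly with how deepfake detectors reach their decisions, motivating a detector-specific evaluation instead.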

Graph representation learning for hypergraphs can be used to extract patterns among the higher-order interactions that are critically important in many real-world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic across learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention-based graph neural network called Hyper-SAGNN that is applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms state-of-the-art methods on traditional tasks while also achieving strong performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
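A minimal sketch of the core idea, scoring a variable-size node tuple with self-attention; the layer sizes, pooling, and contrast function here are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class HyperedgeScorer(nn.Module):
    """Scores a candidate hyperedge from its node embeddings.
    Sketch of the self-attention idea: contrast each node's
    'dynamic' (attention-based) embedding with a 'static' one,
    then pool per-node scores into a hyperedge probability."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.static = nn.Linear(dim, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, nodes):          # nodes: (batch, set_size, dim)
        dynamic, _ = self.attn(nodes, nodes, nodes)
        diff = (dynamic - self.static(nodes)) ** 2   # per-node contrast
        per_node = torch.sigmoid(self.head(diff))    # (batch, set, 1)
        return per_node.mean(dim=1).squeeze(-1)      # hyperedge prob

# Usage: score a batch of two candidate 3-node hyperedges.
x = torch.randn(2, 3, 64)
print(HyperedgeScorer()(x))
```

Because attention operates over a set of arbitrary size, the same module handles hyperedges of different cardinalities, which is what makes variable-sized hyperedge prediction possible.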
