
Twin-width is a width parameter introduced by Bonnet, Kim, Thomassé and Watrigant [FOCS'20, JACM'22], which has many structural and algorithmic applications. We prove that the twin-width of every graph embeddable in a surface of Euler genus $g$ is $18\sqrt{47g}+O(1)$, which is asymptotically best possible: it differs from the lower bound only by a constant multiplicative factor. Our proof also yields a quadratic-time algorithm to find a corresponding contraction sequence. To prove the upper bound on the twin-width of graphs embeddable in surfaces, we provide a stronger version of the Product Structure Theorem for graphs of Euler genus $g$, asserting that every such graph is a subgraph of the strong product of a path and a graph that has a tree-decomposition with all bags of size at most eight, except for a single bag of size $\max\{8,32g-27\}$.
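
A contraction sequence repeatedly merges two vertices of a trigraph into one, keeping an edge to a common neighbour black only if both merged vertices saw it via black edges and marking it red otherwise; the twin-width is the smallest achievable maximum red degree over such a sequence. The following is a minimal illustrative sketch of a single contraction step under this rule, not the paper's quadratic-time algorithm for surface-embedded graphs; the representation and names are my own.

```python
# Minimal sketch of one trigraph contraction step (illustrative only).
# A trigraph is stored as two symmetric adjacency dictionaries:
# black[v] and red[v] are the sets of black/red neighbours of v.

def contract(black, red, u, v, w):
    """Merge vertices u and v into a new vertex w, following the twin-width rule:
    w--x stays black iff both u--x and v--x were black; any other adjacency
    of u or v to x becomes a red edge w--x."""
    nb_u, nb_v = black[u] - {v}, black[v] - {u}
    nr_u, nr_v = red[u] - {v}, red[v] - {u}
    black_w = nb_u & nb_v                          # black to both -> stays black
    red_w = (nb_u | nb_v | nr_u | nr_v) - black_w  # everything else -> red
    for d in (black, red):                         # remove u and v everywhere
        for x in d:
            d[x].discard(u); d[x].discard(v)
        del d[u], d[v]
    black[w], red[w] = black_w, red_w
    for x in black_w: black[x].add(w)
    for x in red_w:   red[x].add(w)
    return max(len(red[x]) for x in red)           # current maximum red degree

# Example: contract two non-twin vertices of the 4-cycle 1-2-3-4.
black = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
red = {x: set() for x in black}
print(contract(black, red, 1, 2, "12"))   # red degree 2 created by merging 1 and 2
```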

Related content

We say that a Hamilton cycle $C=(x_1,\ldots,x_n)$ in a graph $G$ is $k$-symmetric, if the mapping $x_i\mapsto x_{i+n/k}$ for all $i=1,\ldots,n$, where indices are considered modulo $n$, is an automorphism of $G$. In other words, if we lay out the vertices $x_1,\ldots,x_n$ equidistantly on a circle and draw the edges of $G$ as straight lines, then the drawing of $G$ has $k$-fold rotational symmetry, i.e., all information about the graph is compressed into a $360^\circ/k$ wedge of the drawing. The maximum $k$ for which there exists a $k$-symmetric Hamilton cycle in $G$ is referred to as the Hamilton compression of $G$. We investigate the Hamilton compression of four different families of vertex-transitive graphs, namely hypercubes, Johnson graphs, permutahedra and Cayley graphs of abelian groups. In several cases we determine their Hamilton compression exactly, and in other cases we provide close lower and upper bounds. The constructed cycles have a much higher compression than several classical Gray codes known from the literature. Our constructions also yield Gray codes for bitstrings, combinations and permutations that have few tracks and/or that are balanced.
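
As a concrete reading of the definition, the sketch below checks whether a given Hamilton cycle of a graph is $k$-symmetric, i.e., whether the rotation $x_i\mapsto x_{i+n/k}$ is an automorphism. The representation and the example are my own and are not taken from the paper.

```python
# Sketch: test whether a Hamilton cycle (x_1, ..., x_n) of a graph is k-symmetric,
# i.e. whether rotating the cycle by n/k positions is a graph automorphism.

def is_k_symmetric(edges, cycle, k):
    n = len(cycle)
    if n % k != 0:
        return False
    shift = n // k
    pos = {v: i for i, v in enumerate(cycle)}
    sigma = {v: cycle[(pos[v] + shift) % n] for v in cycle}   # the rotation map
    edge_set = {frozenset(e) for e in edges}
    # A bijection on the vertices is an automorphism iff it maps edges onto edges.
    return all(frozenset((sigma[u], sigma[v])) in edge_set for u, v in edges)

# Example: the 4-cycle with its unique Hamilton cycle has 4-fold symmetry.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_k_symmetric(edges, [0, 1, 2, 3], 4))   # True
print(is_k_symmetric(edges, [0, 1, 2, 3], 3))   # False: 3 does not divide 4
```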

Positive spanning sets span a given vector space by nonnegative linear combinations of their elements. These have attracted significant attention in recent years, owing to their extensive use in derivative-free optimization. In this setting, the quality of a positive spanning set is assessed through its cosine measure, a geometric quantity that expresses how well such a set covers the space of interest. In this paper, we investigate the construction of positive $k$-spanning sets with geometrical guarantees. Our results build on recently identified positive spanning sets, called orthogonally structured positive bases. We first describe how to identify such sets and compute their cosine measures efficiently. We then focus our study on positive $k$-spanning sets, for which we provide a complete description, as well as a new notion of cosine measure that accounts for the resilient nature of such sets. By combining our results, we are able to use orthogonally structured positive bases to create positive $k$-spanning sets with guarantees on the value of their cosine measures.
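
For orientation, the cosine measure of a finite set $D\subset\mathbb{R}^n$ is $\mathrm{cm}(D)=\min_{\|u\|=1}\max_{d\in D} u\cdot d/\|d\|$, and $D$ positively spans $\mathbb{R}^n$ iff $\mathrm{cm}(D)>0$. Below is a crude Monte-Carlo sketch that estimates this quantity by sampling unit directions; it only yields an upper bound on the true minimum and is not the efficient computation developed in the paper for orthogonally structured positive bases.

```python
import numpy as np

def cosine_measure_estimate(D, n_samples=200_000, seed=0):
    """Crude upper-bound estimate of cm(D) = min_{|u|=1} max_d u.d/|d|,
    obtained by minimising over randomly sampled unit directions u."""
    rng = np.random.default_rng(seed)
    D = np.asarray(D, dtype=float)
    D = D / np.linalg.norm(D, axis=1, keepdims=True)    # normalise the directions
    U = rng.normal(size=(n_samples, D.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)       # random unit vectors
    return np.min(np.max(U @ D.T, axis=1))

# Maximal positive basis {+/- e_i} of R^3: its cosine measure is 1/sqrt(3) ~ 0.577.
I = np.eye(3)
print(cosine_measure_estimate(np.vstack([I, -I])))      # close to (and >=) 0.577
```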

Untargeted metabolomic profiling through liquid chromatography-mass spectrometry (LC-MS) measures a vast array of metabolites within biospecimens, advancing drug development, disease diagnosis, and risk prediction. However, the low throughput of LC-MS poses a major challenge for biomarker discovery, annotation, and experimental comparison, necessitating the merging of multiple datasets. Current data pooling methods encounter practical limitations due to their vulnerability to data variations and hyperparameter dependence. Here we introduce GromovMatcher, a flexible and user-friendly algorithm that automatically combines LC-MS datasets using optimal transport. By capitalizing on feature intensity correlation structures, GromovMatcher delivers superior alignment accuracy and robustness compared to existing approaches. This algorithm scales to thousands of features, requiring minimal hyperparameter tuning. Applying our method to experimental patient studies of liver and pancreatic cancer, we discover shared metabolic features related to patient alcohol intake, demonstrating how GromovMatcher facilitates the search for biomarkers associated with lifestyle risk factors linked to several cancer types.
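
GromovMatcher itself is the authors' algorithm; the following is only a minimal sketch of the underlying idea of matching two feature sets through their intra-dataset correlation structures via Gromov-Wasserstein optimal transport, using the POT library. The toy data and the hard matching read-off are invented for illustration.

```python
import numpy as np
import ot   # POT: Python Optimal Transport

rng = np.random.default_rng(0)

# Toy stand-ins for two LC-MS studies: rows = samples, columns = features.
# Study B measures a permuted subset of study A's features, with extra noise.
Z = rng.normal(size=(80, 4))                        # shared latent structure
X_a = Z @ rng.normal(size=(4, 12)) + 0.1 * rng.normal(size=(80, 12))
perm = rng.permutation(12)[:9]
X_b = X_a[:, perm] + 0.1 * rng.normal(size=(80, 9))

# Intra-dataset structure: absolute feature-feature correlation matrices.
C_a = np.abs(np.corrcoef(X_a, rowvar=False))
C_b = np.abs(np.corrcoef(X_b, rowvar=False))

# Uniform weights over features and a Gromov-Wasserstein coupling between them.
p = np.full(C_a.shape[0], 1.0 / C_a.shape[0])
q = np.full(C_b.shape[0], 1.0 / C_b.shape[0])
T = ot.gromov.gromov_wasserstein(C_a, C_b, p, q, 'square_loss')

# Read off a hard matching: for each B-feature, the A-feature carrying most mass.
matches = T.argmax(axis=0)
print(matches)   # with enough shared structure this should largely recover:
print(perm)
```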

We establish tight bi-Lipschitz bounds certifying quasi-universality (universality up to a constant factor) for various distances between Reeb graphs: the interleaving distance, the functional distortion distance, and the functional contortion distance. The definition of the latter distance is a novel contribution, and for the special case of contour trees we also prove strict universality of this distance. Furthermore, we prove that for the special case of merge trees the functional contortion distance coincides with the interleaving distance, yielding universality of all four distances in this case.
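
For readers unfamiliar with the terminology: a bi-Lipschitz bound between two of these distances, say $d$ and $d'$, certifies that each bounds the other up to multiplicative constants, which is what "quasi-universality" refers to in the abstract above. The constants shown here are generic placeholders, not the tight constants established in the paper.

```latex
% Bi-Lipschitz comparison of two Reeb-graph distances d and d':
% there exist constants 0 < c \le C such that, for all Reeb graphs R_f and R_g,
c \, d(R_f, R_g) \;\le\; d'(R_f, R_g) \;\le\; C \, d(R_f, R_g).
```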

Many low-Mach or all-Mach number codes are based on space discretizations which, in combination with the first-order explicit Euler method as time integrator, would lead to an unstable scheme. In this paper, we investigate how the choice of a suitable explicit time integration method can stabilize these schemes. We restrict ourselves to a few classical prototypical examples in order to identify directions for further research in this field.
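
As a minimal illustration of the phenomenon (a toy example of my own, not one of the schemes analysed in the paper): central differencing of linear advection yields a spatial operator with purely imaginary eigenvalues, so forward Euler amplifies every mode, while a higher-order explicit method such as classical RK4, whose stability region covers part of the imaginary axis, remains stable for small enough time steps.

```python
import numpy as np

# Periodic central-difference discretisation of u_t + a u_x = 0 on m points.
m, a, dx, dt = 64, 1.0, 1.0 / 64, 0.5 / 64
A = np.zeros((m, m))
for i in range(m):
    A[i, (i + 1) % m] = -a / (2 * dx)
    A[i, (i - 1) % m] = +a / (2 * dx)

z = np.linalg.eigvals(A) * dt               # purely imaginary (up to round-off)

R_euler = 1 + z                                       # stability function of forward Euler
R_rk4 = 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24       # stability function of classical RK4

print(np.max(np.abs(R_euler)))   # > 1: every nonzero mode is amplified -> unstable
print(np.max(np.abs(R_rk4)))     # <= 1 at this CFL number              -> stable
```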

An adjacency-crossing graph is a graph that can be drawn such that every two edges that cross the same edge share a common endpoint. We show that the number of edges in an $n$-vertex adjacency-crossing graph is at most $5n-10$. If we require the edges to be drawn as straight-line segments, then this upper bound becomes $5n-11$. Both of these bounds are tight. The former result also follows from very recent and independent work of Cheong et al.\cite{cheong2023weakly}, who showed that the maximum sizes of weakly and strongly fan-planar graphs coincide. Combining their result with the bound of Kaufmann and Ueckerdt\cite{KU22} on the size of strongly fan-planar graphs, and with results of Brandenburg\cite{Br20} showing that the maximum size of adjacency-crossing graphs equals that of fan-crossing graphs, which in turn equals that of weakly fan-planar graphs, yields the same bound on the size of adjacency-crossing graphs. However, the proof presented here is different, simpler and direct.

We propose a Small Area Estimation model based on Generalized Additive Models for Location, Scale and Shape (SAE-GAMLSS) for the estimation of household economic indicators. SAE-GAMLSS relaxes the exponential-family distributional assumption and allows each distributional parameter to depend on covariates. A bootstrap approach to estimate the MSE is proposed. The SAE-GAMLSS estimator shows a largely better performance than the well-known EBLUP under various simulated scenarios. Using SAE-GAMLSS, the per-capita consumption of Italian and foreign households in Italian regions, in urban and rural areas, is estimated. The results show that the well-known Italian North-South divide does not hold for foreigners.
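
GAMLSS fitting itself is usually done with the R package gamlss and is not reproduced here; the following is only a toy sketch, with an invented shrinkage estimator in place of the SAE-GAMLSS fit and known variance components, of the parametric-bootstrap idea for estimating the MSE of small-area predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy area-level setting (invented; NOT the SAE-GAMLSS model of the paper):
# true area means theta_d and direct survey estimates y_d = theta_d + e_d.
D, mu, sigma_u, sigma_e = 20, 10.0, 2.0, 1.5
theta = mu + rng.normal(0.0, sigma_u, size=D)       # true small-area means
y = theta + rng.normal(0.0, sigma_e, size=D)        # observed direct estimates

def fit_and_predict(y_obs):
    """Stand-in for a model fit: shrink direct estimates toward the grand mean."""
    gamma = sigma_u**2 / (sigma_u**2 + sigma_e**2)
    return gamma * y_obs + (1.0 - gamma) * y_obs.mean()

# Parametric bootstrap MSE: simulate B bootstrap populations from the fitted
# model, re-estimate each, and average the squared prediction errors per area.
B, sq_err = 1000, np.zeros(D)
for _ in range(B):
    theta_b = y.mean() + rng.normal(0.0, sigma_u, size=D)   # bootstrap "truth"
    y_b = theta_b + rng.normal(0.0, sigma_e, size=D)        # bootstrap data
    sq_err += (fit_and_predict(y_b) - theta_b) ** 2
print((sq_err / B).round(3))      # bootstrap MSE estimate for each area
```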

Confounder selection, namely choosing a set of covariates to control for confounding between a treatment and an outcome, is arguably the most important step in the design of observational studies. Previous methods, such as Pearl's celebrated back-door criterion, typically require pre-specifying a causal graph, which can often be difficult in practice. We propose an interactive procedure for confounder selection that does not require pre-specifying the graph or the set of observed variables. This procedure iteratively expands the causal graph by finding what we call "primary adjustment sets" for a pair of possibly confounded variables. This can be viewed as inverting a sequence of latent projections of the underlying causal graph. Structural information in the form of primary adjustment sets is elicited from the user, bit by bit, until either a set of covariates is found to control for confounding or it can be determined that no such set exists. We show that if the user correctly specifies the primary adjustment sets in every step, our procedure is both sound and complete.
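
For context, the classical (non-interactive) setting that Pearl's back-door criterion assumes is a fully pre-specified causal DAG. The sketch below checks, for such a DAG, whether a candidate covariate set is a valid back-door adjustment set; the graph, variable names and the networkx-based test are my own illustration, not the paper's interactive procedure.

```python
import networkx as nx   # assumes networkx >= 2.8 for the d-separation utilities

def is_backdoor_set(G, x, y, Z):
    """Pearl's back-door criterion on a DAG G: Z contains no descendant of x,
    and Z blocks every back-door path, i.e. x and y are d-separated by Z in the
    graph obtained from G by deleting all edges leaving x."""
    if set(Z) & nx.descendants(G, x):
        return False
    G_back = G.copy()
    G_back.remove_edges_from(list(G.out_edges(x)))
    return nx.d_separated(G_back, {x}, {y}, set(Z))

# Classic confounding triangle plus a mediator: U -> X, U -> Y, X -> M -> Y.
G = nx.DiGraph([("U", "X"), ("U", "Y"), ("X", "M"), ("M", "Y")])
print(is_backdoor_set(G, "X", "Y", {"U"}))   # True: conditioning on U suffices
print(is_backdoor_set(G, "X", "Y", set()))   # False: the path X <- U -> Y is open
print(is_backdoor_set(G, "X", "Y", {"M"}))   # False: M is a descendant of X
```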

Zero-shot Learning (ZSL), which aims to make predictions for classes that have never appeared in the training data, has attracted considerable research interest. The key to implementing ZSL is to leverage prior knowledge of classes, which captures the semantic relationships between classes and enables the transfer of learned models (e.g., features) from training classes (i.e., seen classes) to unseen classes. However, the priors adopted by existing methods are relatively limited, with incomplete semantics. In this paper, we explore richer and more competitive prior knowledge to model the inter-class relationship for ZSL via ontology-based knowledge representation and semantic embedding. Meanwhile, to address the data imbalance between seen and unseen classes, we develop a generative ZSL framework with Generative Adversarial Networks (GANs). Our main contributions include: (i) an ontology-enhanced ZSL framework that can be applied to different domains, such as image classification (IMGC) and knowledge graph completion (KGC); (ii) a comprehensive evaluation with multiple zero-shot datasets from different domains, where our method often achieves better performance than the state-of-the-art models. In particular, on four representative ZSL baselines for IMGC, the ontology-based class semantics outperform previous priors, e.g., class word embeddings, by an average of 12.4 accuracy points in standard ZSL across two example datasets (see Figure 4).
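
The ontology construction and its embedding are the paper's contribution and are not reproduced here; the sketch below only illustrates the generic generative-ZSL pattern such a framework builds on, namely a GAN whose generator synthesises visual features conditioned on class semantic vectors. All dimensions, names and the training-free forward pass are invented for illustration.

```python
import torch
import torch.nn as nn

SEM_DIM, NOISE_DIM, FEAT_DIM = 300, 64, 2048   # invented dimensions

class Generator(nn.Module):
    """Maps (class semantic vector, noise) to a synthetic visual feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SEM_DIM + NOISE_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, FEAT_DIM), nn.ReLU())
    def forward(self, sem, z):
        return self.net(torch.cat([sem, z], dim=1))

class Critic(nn.Module):
    """Scores a (visual feature, class semantic vector) pair as real or fake."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + SEM_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, 1))
    def forward(self, feat, sem):
        return self.net(torch.cat([feat, sem], dim=1))

# After adversarial training on seen classes, the generator can synthesise
# features for *unseen* classes from their (e.g. ontology-based) embeddings,
# and a standard classifier can then be trained on these synthetic features.
gen = Generator()
unseen_sem = torch.randn(5, SEM_DIM)            # 5 unseen-class embeddings (dummy)
z = torch.randn(5, NOISE_DIM)
print(gen(unseen_sem, z).shape)                 # torch.Size([5, 2048])
```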

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval owing to its computational and storage efficiency. Deep hashing, which uses convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function to encourage similar images to be projected close together. To this end, I propose the concept of a shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
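
The "shadow of the CNN output" is the author's own construction and is not reproduced here; the sketch below only shows the common ingredients such a method rests on: a small CNN that emits $k$ relaxed hash bits via tanh, and a pairwise loss that pulls the codes of similar images together and pushes dissimilar ones apart. The architecture, margin and names are invented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K_BITS = 16   # length of the binary code (invented)

class HashNet(nn.Module):
    """Tiny CNN for 32x32 RGB images (e.g. CIFAR-10) emitting K relaxed hash bits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(64 * 8 * 8, K_BITS)
    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.tanh(self.head(h))          # values in (-1, 1), binarised by sign

def pairwise_hash_loss(codes, labels, margin=2.0 * K_BITS):
    """Pull codes of same-label pairs together, push different-label pairs apart."""
    d = torch.cdist(codes, codes, p=2) ** 2                  # squared code distances
    same = (labels[:, None] == labels[None, :]).float()
    loss = same * d + (1.0 - same) * F.relu(margin - d)
    return loss.mean()

net = HashNet()
images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
codes = net(images)
print(pairwise_hash_loss(codes, labels).item())
binary_codes = torch.sign(codes)                 # final binary codes for retrieval
```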
