
A strong edge-coloring of a graph is a proper edge-coloring in which the edges of every path of length 3 receive distinct colors; in other words, every pair of edges at distance at most 2 must be colored differently. The least number of colors needed for a strong edge-coloring of a graph is the strong chromatic index. We consider the list version of the coloring and prove that the list strong chromatic index of graphs with maximum degree 3 is at most 10. This bound is tight and improves the previous bound of 11 colors. We also consider the question of whether the strong chromatic index and the list strong chromatic index always coincide. We answer it in the negative by presenting an infinite family of graphs for which the two invariants differ. For the special case of the Petersen graph, we show that its list strong chromatic index equals 7, while its strong chromatic index is 5. To the best of our knowledge, this is the first type of edge-coloring for which graphs with distinct values of the chromatic index and its list version are known. In relation to the above, we also initiate the study of the list version of the normal edge-coloring. A normal edge-coloring of a cubic graph is a proper edge-coloring in which, for every edge, the adjacent edges are colored with either 4 distinct colors or 2 distinct colors. It is conjectured that 5 colors suffice for a normal edge-coloring of any bridgeless cubic graph; this conjecture is equivalent to the Petersen Coloring Conjecture. Similarly to the strong edge-coloring, the list normal edge-coloring is much more restrictive, and consequently, for many graphs, the list normal chromatic index is greater than the normal chromatic index. In particular, we show that there are cubic graphs with list normal chromatic index at least 9, bridgeless cubic graphs with list normal chromatic index at least 8, and cyclically 4-edge-connected cubic graphs with list normal chromatic index at least 7.
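To make the distance-2 condition concrete, the following sketch (illustrative only, not taken from the paper) checks whether a given edge-coloring of a graph is strong; on the 5-cycle every pair of edges lies within distance 2, so all five edges need distinct colors.

```python
from itertools import combinations

def is_strong_edge_coloring(edges, color):
    """Check that every two edges at distance <= 2 (sharing a vertex, or
    joined by a third edge) receive distinct colors."""
    def dist_le_2(e, f):
        if set(e) & set(f):                      # adjacent edges (distance 1)
            return True
        # distance 2: some edge of the graph meets both e and f
        return any(set(g) & set(e) and set(g) & set(f) for g in edges)
    return all(color[e] != color[f]
               for e, f in combinations(edges, 2) if dist_le_2(e, f))

# In the 5-cycle, any two edges are within distance 2 of each other,
# so a strong edge-coloring must use five distinct colors.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(is_strong_edge_coloring(c5, {e: i for i, e in enumerate(c5)}))  # True
print(is_strong_edge_coloring(c5, dict(zip(c5, [0, 1, 2, 0, 1]))))    # False
```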

Related Content

Uncertainty Quantification (UQ) is crucial for reliable image segmentation. Yet, while the field sees continual development of novel methods, a lack of agreed-upon benchmarks limits their systematic comparison and evaluation: current UQ methods are typically tested either on overly simplistic toy datasets or on complex real-world datasets that do not allow the true uncertainty to be discerned. To unify both controllability and complexity, we introduce Arctique, a procedurally generated dataset modeled after histopathological colon images. We chose histopathological images for two reasons: 1) their complexity in terms of intricate object structures and highly variable appearance, which yields challenging segmentation problems, and 2) their broad prevalence in medical diagnosis and the corresponding relevance of high-quality UQ. To generate Arctique, we established a Blender-based framework for 3D scene creation with intrinsic noise manipulation. Arctique contains 50,000 rendered images with precise masks as well as noisy label simulations. We show that by independently controlling the uncertainty in both images and labels, we can effectively study the performance of several commonly used UQ methods. Hence, Arctique serves as a critical resource for benchmarking and advancing UQ techniques and other methodologies in complex, multi-object environments, bridging the gap between realism and controllability. All code is publicly available, allowing re-creation and controlled manipulation of our shipped images as well as creation and rendering of new scenes.

We develop a powerful and general method to provide arbitrarily accurate rigorous upper and lower bounds for Lyapunov exponents of stochastic flows. Our approach is based on computer-assisted tools, the adjoint method, and established results on the ergodicity of diffusion processes. We do not require any structural assumptions on the stochastic system and work under mild hypoellipticity conditions outside of perturbative regimes. Therefore, our method allows for the treatment of systems that were so far inaccessible by existing mathematical tools. We demonstrate our method by exhibiting the chaotic nature of three non-Hamiltonian systems. Finally, we show that our approach can be combined robustly with continuation methods to produce bounds on Lyapunov exponents for large parameter regions.
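For orientation only (and in contrast to the rigorous, computer-assisted bounds described above), a Lyapunov exponent of a stochastic flow can be estimated non-rigorously by simulating the flow together with its linearized (tangent) dynamics. The sketch below does this for a hypothetical double-well SDE; the model, step size, and parameters are assumptions chosen purely for illustration.

```python
import numpy as np

def top_lyapunov_estimate(sigma=0.5, dt=1e-3, steps=200_000, seed=0):
    """Crude Monte Carlo estimate of the top Lyapunov exponent of
    dX = (X - X**3) dt + sigma dW via Euler-Maruyama, propagating the
    tangent dynamics dV = (1 - 3 X**2) V dt and renormalizing each step."""
    rng = np.random.default_rng(seed)
    x, v, log_growth = 0.1, 1.0, 0.0
    for _ in range(steps):
        x += (x - x**3) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        v += (1.0 - 3.0 * x**2) * v * dt
        log_growth += np.log(abs(v))   # accumulate one-step stretching
        v = 1.0 if v >= 0 else -1.0    # renormalize the tangent vector
    return log_growth / (steps * dt)   # time-averaged exponential growth rate

print(top_lyapunov_estimate())
```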

Understanding animal behaviour is central to predicting, understanding, and mitigating the impacts of natural and anthropogenic changes on animal populations and ecosystems. However, the challenges of acquiring and processing long-term, ecologically relevant data in wild settings have constrained the scope of behavioural research. The increasing availability of Unmanned Aerial Vehicles (UAVs), coupled with advances in machine learning, has opened new opportunities for wildlife monitoring using aerial tracking. However, the limited availability of datasets with wild animals in natural habitats has hindered progress in automated computer vision solutions for long-term animal tracking. Here we introduce BuckTales, the first large-scale UAV dataset designed for the multi-object tracking (MOT) and re-identification (Re-ID) problems in wild animals, specifically the mating behaviour (or lekking) of blackbuck antelopes. Collected in collaboration with biologists, the MOT dataset includes over 1.2 million annotations, including 680 tracks across 12 high-resolution (5.4K) videos, each averaging 66 seconds and featuring 30 to 130 individuals. The Re-ID dataset includes 730 individuals captured with two UAVs simultaneously. The dataset is designed to drive scalable, long-term animal behaviour tracking using multiple camera sensors. By providing baseline performance with two detectors and benchmarking several state-of-the-art tracking methods, our dataset reflects the real-world challenges of tracking wild animals in socially and ecologically relevant contexts. In making these data widely available, we hope to catalyze progress in MOT and Re-ID for wild animals, fostering insights into animal behaviour, conservation efforts, and ecosystem dynamics through automated, long-term monitoring.

We propose nodal auxiliary space preconditioners for facet and edge virtual elements of lowest order by deriving discrete regular decompositions on polytopal grids and generalizing the Hiptmair-Xu preconditioner to the virtual element framework. The preconditioner consists of solving a sequence of elliptic problems on the nodal virtual element space, combined with appropriate smoother steps. Under assumed regularity of the mesh, the preconditioned system is proven to have a spectral condition number bounded independently of the mesh size, and this is verified by numerical experiments on a sequence of polygonal meshes. Moreover, we observe numerically that the preconditioner is robust on meshes containing elements with high aspect ratios.

In data assimilation, an ensemble provides a way to propagate the probability density of a system described by a nonlinear prediction model. Although a large ensemble size is required for statistical accuracy, the ensemble size is typically limited to a small number due to the computational cost of running the prediction model, which leads to a sampling error. Several methods, such as localization and inflation, exist to mitigate the sampling error, but they often require problem-dependent fine-tuning and design. This work introduces a nonintrusive sampling error mitigation method that modifies the ensemble to ensure a smooth turbulent spectrum. It turns out that modifying the ensemble to satisfy the smooth spectrum leads to inhomogeneous localization and inflation, that is, spatially varying localization and inflation levels. The efficacy of the new idea is validated through a suite of stringent test regimes of the Lorenz 96 turbulent model.
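For context, standard (homogeneous) localization and inflation act on the ensemble sample covariance roughly as in the following sketch; the Gaussian taper, the periodic distance, and the parameter values are illustrative placeholders and not the spectrum-based modification proposed here.

```python
import numpy as np

def localized_inflated_cov(ensemble, L=2.0, inflation=1.05):
    """Sample covariance of an ensemble (members x state variables), scaled by
    multiplicative inflation and tapered element-wise by a distance-based
    localization function (a Gaussian stands in for, e.g., Gaspari-Cohn)."""
    X = ensemble - ensemble.mean(axis=0)                # ensemble anomalies
    P = (inflation ** 2) * (X.T @ X) / (ensemble.shape[0] - 1)
    n = P.shape[0]
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    d = np.minimum(d, n - d)                            # periodic domain, as in Lorenz 96
    return P * np.exp(-0.5 * (d / L) ** 2)              # Schur (element-wise) taper

rng = np.random.default_rng(0)
ens = rng.standard_normal((20, 40))                     # 20 members, 40 grid points
print(localized_inflated_cov(ens).shape)                # (40, 40)
```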

A numerical scheme is presented for solving the Helmholtz equation with Dirichlet or Neumann boundary conditions on piecewise smooth open curves, where the curves may have corners and multiple junctions. Existing integral equation methods for smooth open curves rely on analyzing the exact singularities of the density at endpoints for associated integral operators, explicitly extracting these singularities from the densities in the formulation, and using global quadrature to discretize the boundary integral equation. Extending these methods to handle curves with corners and multiple junctions is challenging because the singularity analysis becomes much more complex, and constructing high-order quadrature for discretizing layer potentials with singular and hypersingular kernels and singular densities is nontrivial. The proposed scheme is built upon the following two observations. First, the single-layer potential operator and the normal derivative of the double-layer potential operator serve as effective preconditioners for each other locally. Second, the recursively compressed inverse preconditioning (RCIP) method can be extended to address "implicit" second-kind integral equations. The scheme is high-order, adaptive, and capable of handling corners and multiple junctions without prior knowledge of the density singularity. It is also compatible with fast algorithms, such as the fast multipole method. The performance of the scheme is illustrated with several numerical examples.

The accurate alignment of 3D woodblock geometrical models with 2D orthographic projection images presents a significant challenge in the digital preservation of Vietnamese cultural heritage. This paper proposes a unified image processing algorithm to address this issue, enhancing the registration quality between 3D woodblock models and their 2D representations. The method includes determining the plane of the 3D character model, establishing a transformation matrix to align this plane with the 2D printed image plane, and creating a parallel-projected depth map for precise alignment. This process minimizes disocclusions and ensures that character shapes and strokes are correctly positioned. Experimental results highlight the importance of structure-based comparisons to optimize alignment for large-scale Han-Nom character datasets. The proposed approach, combining density-based and structure-based methods, demonstrates improved registration performance, offering an effective normalization scheme for digital heritage preservation.
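A generic version of the described pipeline (fitting the character plane, aligning it with the image plane, and building a parallel-projected depth map) might look as follows; the PCA-based plane fit, the grid resolution, and the rasterization rule are assumptions made for illustration and are not the authors' implementation.

```python
import numpy as np

def plane_aligned_depth_map(points, resolution=256):
    """Fit the dominant plane of a 3D point cloud (via PCA/SVD), rotate it
    into the xy-plane, and rasterize a parallel-projected depth map."""
    centered = points - points.mean(axis=0)
    # principal axes: the last right-singular vector is the plane normal
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    aligned = centered @ vt.T                 # rotate so the normal maps to the z-axis
    xy, z = aligned[:, :2], aligned[:, 2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    ij = ((xy - lo) / (hi - lo + 1e-9) * (resolution - 1)).astype(int)
    depth = np.full((resolution, resolution), np.nan)
    for (i, j), zi in zip(ij, z):
        if np.isnan(depth[j, i]) or zi > depth[j, i]:
            depth[j, i] = zi                  # keep the surface point closest to the camera
    return depth

pts = np.random.default_rng(1).uniform(size=(5000, 3)) * [10.0, 10.0, 0.5]
print(plane_aligned_depth_map(pts).shape)     # (256, 256)
```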

This work deals with the numerical approximation of plasmas which are confined by the effect of a fast oscillating magnetic field (see \cite{Bostan2012}) in the Vlasov model. The presence of this magnetic field induces oscillations (in time) in the solution of the characteristic equations. Due to this multiscale character, a standard time discretization would lead to an inefficient solver. In this work, time integrators are derived and analyzed for a class of highly oscillatory differential systems. We prove the uniform accuracy property of these time integrators, meaning that the accuracy does not depend on the small parameter $\varepsilon$. Moreover, we construct an extension of the scheme which, as $\varepsilon\to 0$, degenerates to an energy-preserving numerical scheme for the averaged model. Several numerical results illustrate the capabilities of the method.

In the present work, strong approximation errors are analyzed for both the spatial semi-discretization and the spatio-temporal full discretization of stochastic wave equations (SWEs) with cubic polynomial nonlinearities and additive noises. The full discretization is achieved by the standard Galerkin finite element method in space and a novel exponential time integrator combined with the averaged vector field approach. The newly proposed scheme is proved to exactly satisfy a trace formula based on an energy functional. Recovering the convergence rates of the scheme, however, meets essential difficulties due to the lack of a global monotonicity condition. To overcome this issue, we derive an exponential integrability property of the considered numerical approximations via the energy functional. Armed with these properties, we obtain the strong convergence rates of the approximations in both the spatial and temporal directions. Finally, numerical results are presented to verify the theoretical findings.

We establish a refined version of a graph container lemma due to Galvin and discuss several applications related to the hard-core model on bipartite expander graphs. Given a graph $G$ and $\lambda>0$, the hard-core model on $G$ at activity $\lambda$ is the probability distribution $\mu_{G,\lambda}$ on independent sets in $G$ given by $\mu_{G,\lambda}(I)\propto \lambda^{|I|}$. As one of our main applications, we show that the hard-core model at activity $\lambda$ on the hypercube $Q_d$ exhibits a `structured phase' for $\lambda= \Omega( \log^2 d/d^{1/2})$ in the following sense: in a typical sample from $\mu_{Q_d,\lambda}$, most vertices are contained in one side of the bipartition of $Q_d$. This improves upon a result of Galvin which establishes the same for $\lambda=\Omega(\log d/ d^{1/3})$. As another application, we establish a fully polynomial-time approximation scheme (FPTAS) for the hard-core model on a $d$-regular bipartite $\alpha$-expander, with $\alpha>0$ fixed, when $\lambda= \Omega( \log^2 d/d^{1/2})$. This improves upon the bound $\lambda=\Omega(\log d/ d^{1/4})$ due to the first author, Perkins and Potukuchi. We discuss similar improvements to results of Galvin-Tetali, Balogh-Garcia-Li and Kronenberg-Spinka.
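As a toy illustration of the measure $\mu_{G,\lambda}(I)\propto \lambda^{|I|}$ (unrelated to the container-method arguments above), the hard-core model can be evaluated exactly on a very small graph by enumerating its independent sets; the sketch below does so for the 3-dimensional cube.

```python
from itertools import combinations, product

def hard_core_mean_size(n, edges, lam):
    """Exact hard-core model on a small graph: weight each independent set I
    by lam**|I| and return the expected size E[|I|] under that distribution."""
    edge_sets = [set(e) for e in edges]
    Z = mean = 0.0
    for r in range(n + 1):
        for I in combinations(range(n), r):
            S = set(I)
            if all(not e <= S for e in edge_sets):   # no edge lies inside I
                w = lam ** r
                Z += w
                mean += r * w
    return mean / Z

# The 3-dimensional hypercube: vertices are bit strings of length 3,
# edges join strings differing in exactly one bit.
verts = list(product([0, 1], repeat=3))
idx = {v: i for i, v in enumerate(verts)}
edges = [(idx[u], idx[v]) for u in verts for v in verts
         if idx[u] < idx[v] and sum(a != b for a, b in zip(u, v)) == 1]
print(hard_core_mean_size(8, edges, lam=1.0))
```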
