
For a graph whose vertices are points in $\mathbb R^d$, consider the closed balls with diameters induced by its edges. The graph is called a Tverberg graph if these closed balls share a common point. A max-sum tree of a finite point set $X \subset \mathbb R^d$ is a tree with vertex set $X$ that maximizes the sum of Euclidean distances of its edges among all trees with vertex set $X$. Similarly, a max-sum matching of an even set $X \subset \mathbb R^d$ is a perfect matching of $X$ maximizing the sum of Euclidean distances between the matched points among all perfect matchings of $X$. We prove that a max-sum tree of any finite point set in $\mathbb R^d$ is a Tverberg graph, which generalizes a recent result of Abu-Affash et al., who established this claim in the plane. Additionally, we provide a new proof of a theorem by Bereg et al., which states that a max-sum matching of any even point set in the plane is a Tverberg graph, and we prove a slightly stronger version of this theorem.
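To make the statement concrete, write $B(c,r)$ for the closed ball with center $c$ and radius $r$ (notation added here for exposition). A tree $T$ with vertex set $X$ is then a Tverberg graph exactly when

\[
  \bigcap_{\{x,y\} \in E(T)} B\!\left( \frac{x+y}{2},\; \frac{\lVert x - y \rVert}{2} \right) \;\neq\; \emptyset,
\]

and the theorem asserts that this holds for every $T$ attaining $\max \sum_{\{x,y\} \in E(T)} \lVert x - y \rVert$ over all trees on $X$.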

Related content

We consider scalar semilinear elliptic PDEs, where the nonlinearity is strongly monotone, but only locally Lipschitz continuous. To linearize the arising discrete nonlinear problem, we employ a damped Zarantonello iteration, which leads to a linear Poisson-type equation that is symmetric and positive definite. The resulting system is solved by a contractive algebraic solver such as a multigrid method with local smoothing. We formulate a fully adaptive algorithm that equibalances the various error components coming from mesh refinement, iterative linearization, and algebraic solver. We prove that the proposed adaptive iteratively linearized finite element method (AILFEM) guarantees convergence with optimal complexity, where the rates are understood with respect to the overall computational cost (i.e., the computational time). Numerical experiments investigate the involved adaptivity parameters.
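For orientation, a damped Zarantonello step in its standard abstract form reads as follows (a generic sketch; the paper's precise damping and inner products may differ): given the current iterate $u_k$, the next iterate $u_{k+1}$ solves the linear, symmetric positive definite Poisson-type problem

\[
  \langle \nabla u_{k+1}, \nabla v \rangle_{L^2(\Omega)}
  \;=\;
  \langle \nabla u_k, \nabla v \rangle_{L^2(\Omega)}
  + \delta \,\bigl[ F(v) - \langle A(u_k), v \rangle \bigr]
  \quad \text{for all } v \in H^1_0(\Omega),
\]

with a damping parameter $\delta > 0$ chosen small enough that the iteration is a contraction.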

We show that the graph property of having a (very) large $k$-th Betti number $\beta_k$ for constant $k$ is testable with a constant number of queries in the dense graph model. More specifically, we consider a clique complex defined by an underlying graph and prove that for any $\varepsilon>0$, there exists $\delta(\varepsilon,k)>0$ such that testing whether $\beta_k \geq (1-\delta) d_k$ for $\delta \leq \delta(\varepsilon,k)$ reduces to tolerantly testing $(k+2)$-clique-freeness, which is known to be testable. This complements a result by Elek (2010) showing that Betti numbers are testable in the bounded-degree model. Our result combines the Euler characteristic, matroid theory and the graph removal lemma.
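The role of the Euler characteristic can be seen from the standard Euler-Poincaré identity for a finite simplicial complex, which ties the face counts $f_k$ of the clique complex to its Betti numbers:

\[
  \chi \;=\; \sum_{k \ge 0} (-1)^k f_k \;=\; \sum_{k \ge 0} (-1)^k \beta_k .
\]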

Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data, e.g., images, text, and audio. Accordingly, their promising performance has led to GAN-based adversarial attack methods in the white-box and black-box attack scenarios. The importance of transferable black-box attacks lies in their ability to be effective across different models and settings, which aligns more closely with real-world applications. However, it remains challenging for such methods to produce adversarial examples that retain their transferability. Meanwhile, we observe that some enhanced gradient-based transferable adversarial attack algorithms require prolonged time for adversarial sample generation. Thus, in this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples whilst improving the algorithm's efficiency. The main approach is to optimise the training process of the generator parameters. Through functional and characteristic similarity analysis, we introduce a novel gradient editing (GE) mechanism and verify its feasibility in generating transferable samples on various models. Moreover, by exploring the frequency domain information to determine the gradient editing direction, GE-AdvGAN can generate highly transferable adversarial samples while minimizing the execution time compared to state-of-the-art transferable adversarial attack algorithms. The performance of GE-AdvGAN is comprehensively evaluated by large-scale experiments on different datasets, and the results demonstrate the superiority of our algorithm. The code for our algorithm is available at: //github.com/LMBTough/GE-advGAN
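As a rough illustration of the frequency-domain idea, the sketch below low-pass filters a gradient tensor to obtain an editing direction. The function name, the rectangular mask, and the keep_ratio parameter are all hypothetical stand-ins; the actual GE mechanism in GE-AdvGAN may differ.

```python
import numpy as np

def frequency_guided_edit(grad, keep_ratio=0.25):
    """Hypothetical sketch: keep only the low-frequency components of a
    2-D gradient tensor, a simple stand-in for a frequency-domain
    gradient-editing direction."""
    f = np.fft.fftshift(np.fft.fft2(grad))
    h, w = grad.shape
    mask = np.zeros_like(f)
    rh, rw = int(h * keep_ratio / 2), int(w * keep_ratio / 2)
    mask[h // 2 - rh:h // 2 + rh, w // 2 - rw:w // 2 + rw] = 1.0
    return np.fft.ifft2(np.fft.ifftshift(f * mask)).real

edited = frequency_guided_edit(np.random.randn(32, 32))
```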

In this paper, we introduce a novel non-linear uniform subdivision scheme for the generation of curves in $\mathbb{R}^n$, $n\geq2$. This scheme is distinguished by its capacity to reproduce second-degree polynomial data on non-uniform grids without necessitating prior knowledge of the grid specificities. Our approach exploits the potential of annihilation operators to infer the underlying grid, thereby obviating the need for end-users to specify such information. We define the scheme in a non-stationary manner, ensuring that it progressively approaches a classical linear scheme as the iteration number increases, all while preserving its polynomial reproduction capability. The convergence is established through two distinct theoretical methods. Firstly, we propose a new class of schemes, including ours, for which we establish $\mathcal{C}^1$ convergence by combining results from the analysis of quasilinear schemes and asymptotically equivalent linear non-uniform non-stationary schemes. Secondly, we adapt conventional analytical tools for non-linear schemes to the non-stationary case, allowing us to again conclude the convergence of the proposed class of schemes. We demonstrate its practical usefulness through numerical examples, which show that the generated curves are curvature continuous.
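For reference, the classical linear schemes that such non-stationary rules approach can be as simple as the interpolatory four-point Dubuc-Deslauriers scheme; the sketch below performs one refinement round on a closed polygon (this is only the classical linear case, not the paper's non-linear scheme).

```python
import numpy as np

def four_point_step(p):
    """One round of the classical 4-point Dubuc-Deslauriers subdivision on
    a closed polygon p of shape (m, n): old points are kept, and each new
    odd-indexed point is interpolated with the (-1, 9, 9, -1)/16 stencil."""
    m = len(p)
    q = np.empty((2 * m, p.shape[1]))
    for i in range(m):
        q[2 * i] = p[i]
        q[2 * i + 1] = (-p[i - 1] + 9 * p[i] + 9 * p[(i + 1) % m]
                        - p[(i + 2) % m]) / 16.0
    return q

# Refining a square a few times approaches a smooth closed C^1 curve.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
for _ in range(4):
    pts = four_point_step(pts)
```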

Aboulker et al. proved that a digraph with large enough dichromatic number contains any fixed digraph as a subdivision. The dichromatic number of a digraph is the smallest order of a partition of its vertex set into acyclic induced subdigraphs. A digraph is dicritical if the removal of any arc or vertex decreases its dichromatic number. In this paper we give sufficient conditions on a dicritical digraph of large order or large directed girth to contain a given digraph as a subdivision. In particular, we prove that (i) for all integers $k,\ell$, large enough dicritical digraphs with dichromatic number $k$ contain an orientation of a cycle with at least $\ell$ vertices; (ii) there are functions $f,g$ such that for every subdivision $F^*$ of a digraph $F$, digraphs with directed girth at least $f(F^*)$ and dichromatic number at least $g(F)$ contain a subdivision of $F^*$, and if $F$ is a tree, then $g(F)=|V(F)|$; (iii) there is a function $f$ such that for every subdivision $F^*$ of $TT_3$ (the transitive tournament on three vertices), digraphs with directed girth at least $f(F^*)$ and minimum out-degree at least $2$ contain $F^*$ as a subdivision.
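To make the central definition concrete, the dichromatic number of a small digraph can be computed by brute force (an illustrative sketch only; the search is exponential in the number of vertices):

```python
from itertools import product
import networkx as nx

def dichromatic_number(D):
    """Smallest k such that V(D) partitions into k parts, each inducing
    an acyclic subdigraph; brute force over all k-colourings."""
    nodes = list(D.nodes)
    for k in range(1, len(nodes) + 1):
        for colouring in product(range(k), repeat=len(nodes)):
            colour = dict(zip(nodes, colouring))
            if all(nx.is_directed_acyclic_graph(
                       D.subgraph([v for v in nodes if colour[v] == c]))
                   for c in range(k)):
                return k

# A directed triangle needs two colour classes, since a single class
# would contain the whole directed cycle.
print(dichromatic_number(nx.DiGraph([(0, 1), (1, 2), (2, 0)])))  # 2
```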

Spatial regression models are central to the field of spatial statistics. Nevertheless, their estimation for large and irregularly gridded spatial datasets presents considerable computational challenges. To tackle these computational problems, Arbia \citep{arbia_2014_pairwise} introduced a pseudo-likelihood approach (called pairwise likelihood, PL for short) which requires the identification of pairs of observations that are internally correlated but mutually conditionally uncorrelated. However, while the PL estimators enjoy optimal theoretical properties, their practical implementation on data observed on irregular grids suffers from severe computational issues (connected with the identification of the pairs of observations) that, in most empirical cases, negatively counterbalance its advantages. In this paper we introduce an algorithm specifically designed to streamline the computation of the PL for large and irregularly gridded spatial datasets, dramatically simplifying the estimation phase. In particular, we focus on the estimation of Spatial Error Models (SEM). Our proposed approach efficiently pairs spatial observations by exploiting the KD-tree data structure and uses these pairs to derive closed-form expressions for fast parameter approximation. To showcase the efficiency of our method, we provide an illustrative example using simulated data, demonstrating that the computational advantages over full-likelihood inference do not come at the expense of accuracy.
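The pairing step itself can be prototyped in a few lines with a KD-tree; the greedy nearest-neighbour pairing below is a simplified stand-in for the paper's actual pairing criterion (the function and its parameters are illustrative assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def greedy_pairs(coords):
    """Greedily pair each unmatched point with its nearest unmatched
    neighbour, using a KD-tree for fast nearest-neighbour queries."""
    tree = cKDTree(coords)
    unmatched = set(range(len(coords)))
    pairs = []
    for i in range(len(coords)):
        if i not in unmatched:
            continue
        unmatched.discard(i)
        k = 2
        while True:
            # Query progressively more neighbours until an unmatched one appears.
            _, idx = tree.query(coords[i], k=k)
            candidates = [j for j in np.atleast_1d(idx) if j in unmatched]
            if candidates:
                j = candidates[0]           # nearest unmatched neighbour
                unmatched.discard(j)
                pairs.append((i, j))
                break
            if k >= len(coords):
                break                       # odd leftover point stays unmatched
            k = min(2 * k, len(coords))
    return pairs

pairs = greedy_pairs(np.random.rand(1000, 2))
```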

The DTW Barycenter Averaging (DBA) algorithm is a widely used algorithm for estimating the mean of a given set of point sequences. In this context, the mean is defined as a point sequence that minimises the sum of dynamic time warping (DTW) distances. The algorithm is similar to the $k$-means algorithm in the sense that it alternately repeats two steps: (1) computing an optimal assignment to the points of the current mean, and (2) computing an optimal mean under the current assignment. The popularity of DBA can be attributed to the fact that it works well in practice, despite the absence of known theoretical guarantees. In this paper, we initiate a theoretical study of the number of iterations that DBA performs until convergence. We assume the algorithm is given $n$ sequences of $m$ points in $\mathbb{R}^d$ and a parameter $k$ that specifies the length of the mean sequence to be computed. We show that, in contrast to its fast running time in practice, the number of iterations can be exponential in $k$ in the worst case - even if the number of input sequences is $n=2$. We complement these findings with experiments on real-world data that suggest this worst-case behaviour is likely degenerate. To better understand the performance of the algorithm on non-degenerate input, we study DBA in the model of smoothed analysis, upper-bounding the expected number of iterations in the worst case under random perturbations of the input. Our smoothed upper bound is polynomial in $k$, $n$ and $d$, and for constant $n$ it is also polynomial in $m$. For our analysis, we adapt the set of techniques that were developed for analysing $k$-means and observe that this set of techniques is not sufficient to obtain tight bounds for general $n$.
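A minimal sketch of the two alternating steps (standard DBA, simplified here with a naive quadratic-time DTW) might look as follows:

```python
import numpy as np

def dtw_path(a, b):
    """Optimal DTW warping path between point sequences a and b,
    computed by standard dynamic programming plus backtracking."""
    k, m = len(a), len(b)
    D = np.full((k + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, k + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    path, i, j = [], k, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]

def dba(seqs, k, iters=50, seed=0):
    """DBA sketch: alternate (1) DTW assignment of all input points to the
    current mean and (2) recomputing each mean point as the centroid of
    the points assigned to it."""
    rng = np.random.default_rng(seed)
    s0 = seqs[rng.integers(len(seqs))]
    mean = s0[np.linspace(0, len(s0) - 1, k).astype(int)].copy()
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for s in seqs:
            for i, j in dtw_path(mean, s):
                buckets[i].append(s[j])
        new_mean = np.array([np.mean(b, axis=0) if b else mean[i]
                             for i, b in enumerate(buckets)])
        if np.allclose(new_mean, mean):
            break  # converged: the assignment no longer changes the mean
        mean = new_mean
    return mean

mean = dba([np.random.rand(30, 2) for _ in range(5)], k=10)
```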

We are interested in generating surfaces with arbitrary roughness and forming patterns on those surfaces. Two methods are applied to construct rough surfaces. In the first method, a superposition of wave functions with random frequencies and propagation angles is used to obtain periodic rough surfaces with analytic parametric equations. The amplitude of such surfaces is also an important variable in the provided eigenvalue analysis for the Laplace-Beltrami operator and in the generation of pattern formation. Numerical experiments show that the patterns become irregular as the amplitude and frequency of the rough surface increase. For the sake of easy generalization to closed manifolds, we propose a second construction method for rough surfaces, which uses random nodal values and discretized heat filters. We provide numerical evidence that both surface construction methods yield patterns comparable to those observed in real-life animals.
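The first construction method can be sketched in a few lines of numpy; integer wavevectors are used here so that the surface stays $2\pi$-periodic, and the sampling distributions are illustrative assumptions:

```python
import numpy as np

def rough_surface(n=256, n_waves=50, amplitude=1.0, max_freq=10, seed=0):
    """Periodic rough surface as a superposition of cosine waves with
    random integer wavevectors and random phases; the wavevector
    direction plays the role of the random propagation angle."""
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.linspace(0, 2 * np.pi, n),
                       np.linspace(0, 2 * np.pi, n))
    z = np.zeros_like(x)
    for _ in range(n_waves):
        kx, ky = rng.integers(-max_freq, max_freq + 1, size=2)
        phase = rng.uniform(0, 2 * np.pi)
        z += np.cos(kx * x + ky * y + phase)
    # Scale by 1/sqrt(n_waves) so the overall height is set by `amplitude`.
    return amplitude * z / np.sqrt(n_waves)

z = rough_surface()  # an (n, n) height field z(x, y)
```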

We consider the on-line coloring problem restricted to proper interval graphs with known interval representation. Chrobak and \'{S}lusarek (1981) showed that the greedy $\textrm{First-Fit}$ algorithm has a strict competitive ratio of $2$. It remains open whether there is an on-line algorithm that performs better than $\textrm{First-Fit}$. Piotr (2008) showed that if the representation is not known, there is no better on-line algorithm. Epstein and Levy (2005) showed that no on-line algorithm has a strict competitive ratio less than $1.5$ when a unit-interval representation is known, which was later improved to $1.\overline{3}$. In this paper, we show that there is no on-line algorithm with strict competitive ratio less than $1.75$ by presenting a strategy that can force any on-line algorithm to use $7$ colors on a proper interval graph $G$ with chromatic number $\chi(G)\leq 4$ and known interval representation.
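For concreteness, the greedy First-Fit strategy on intervals can be written as follows (a straightforward sketch; intervals are closed and presented in arrival order):

```python
def first_fit_coloring(intervals):
    """On-line First-Fit: each interval (l, r) receives the smallest
    colour not used by an earlier, overlapping interval; closed
    intervals (l1, r1), (l2, r2) overlap iff l1 <= r2 and l2 <= r1."""
    colored = []  # (l, r, colour) triples in arrival order
    for l, r in intervals:
        used = {c for (l2, r2, c) in colored if l <= r2 and l2 <= r}
        colour = next(c for c in range(len(colored) + 1) if c not in used)
        colored.append((l, r, colour))
    return [c for (_, _, c) in colored]

# The third interval overlaps both earlier ones, so it gets colour 1.
print(first_fit_coloring([(0, 1), (2, 3), (1, 2)]))  # [0, 0, 1]
```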

Zero-shot Learning (ZSL), which aims to make predictions for classes that never appear in the training data, has attracted considerable research interest. The key to implementing ZSL is to leverage prior knowledge of classes, which builds the semantic relationship between classes and enables the transfer of learned models (e.g., features) from training classes (i.e., seen classes) to unseen classes. However, the priors adopted by existing methods are relatively limited, with incomplete semantics. In this paper, we explore richer and more competitive prior knowledge to model the inter-class relationship for ZSL via ontology-based knowledge representation and semantic embedding. Meanwhile, to address the data imbalance between seen and unseen classes, we develop a generative ZSL framework based on Generative Adversarial Networks (GANs). Our main contributions include: (i) an ontology-enhanced ZSL framework that can be applied to different domains, such as image classification (IMGC) and knowledge graph completion (KGC); (ii) a comprehensive evaluation with multiple zero-shot datasets from different domains, where our method often achieves better performance than state-of-the-art models. In particular, on four representative ZSL baselines for IMGC, the ontology-based class semantics outperform previous priors, e.g., word embeddings of classes, by an average of 12.4 accuracy points in standard ZSL across two example datasets (see Figure 4).
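The generative core can be illustrated with a small conditional generator that maps a class's semantic embedding plus noise to a synthetic visual feature (a hypothetical sketch; the architecture, dimensions, and layer choices are assumptions, not the paper's exact model):

```python
import torch
import torch.nn as nn

class ConditionalFeatureGenerator(nn.Module):
    """Toy generator for generative ZSL: concatenate a class semantic
    embedding with noise and map it to a synthetic visual feature."""
    def __init__(self, sem_dim=300, noise_dim=128, feat_dim=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sem_dim + noise_dim, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, feat_dim),
            nn.ReLU(),  # the CNN features being mimicked are non-negative
        )

    def forward(self, sem, noise):
        return self.net(torch.cat([sem, noise], dim=1))

# Synthesise features for an unseen class from its (ontology-based)
# class embedding, then train an ordinary classifier on them.
gen = ConditionalFeatureGenerator()
sem = torch.randn(16, 300)       # placeholder class embeddings
noise = torch.randn(16, 128)
fake_features = gen(sem, noise)  # shape (16, 2048)
```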
