
Graph auto-encoders are widely used to construct graph representations in Euclidean vector spaces. However, it has already been pointed out empirically that linear models on many tasks can outperform graph auto-encoders. In our work, we prove that the solution space induced by graph auto-encoders is a subset of the solution space of a linear map. This demonstrates that linear embedding models have at least the representational power of graph auto-encoders based on graph convolutional networks. So why are we still using nonlinear graph auto-encoders? One reason could be that actively restricting the linear solution space might introduce an inductive bias that helps improve learning and generalization. While many researchers believe that the nonlinearity of the encoder is the critical ingredient towards this end, we instead identify the node features of the graph as a more powerful inductive bias. We give theoretical insights by introducing a corresponding bias in a linear model and analyzing the change in the solution space. Our experiments are aligned with other empirical work on this question and show that the linear encoder can outperform the nonlinear encoder when using feature information.
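
For intuition, here is a minimal numpy sketch (not the authors' code) contrasting a one-layer GCN-style encoder with a linear encoder on the same propagated features, both feeding the usual inner-product decoder; the toy graph, features, and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: adjacency A and node features X.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))

# Symmetrically normalized adjacency with self-loops, as in GCNs.
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))

W = rng.normal(size=(3, 2))

# One-layer GCN encoder: Z = ReLU(A_norm X W).
Z_gcn = np.maximum(A_norm @ X @ W, 0.0)

# Linear encoder on the same propagated features: Z = (A_norm X) W'.
# The abstract's containment result says every embedding reachable by the
# nonlinear encoder is also reachable by some linear map of this form.
Z_lin = (A_norm @ X) @ W

# Shared inner-product decoder for link prediction.
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

print(sigmoid(Z_gcn @ Z_gcn.T).round(2))
print(sigmoid(Z_lin @ Z_lin.T).round(2))
```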

Related content

In this paper, we characterize the class of {\em contraction perfect} graphs which are the graphs that remain perfect after the contraction of any edge set. We prove that a graph is contraction perfect if and only if it is perfect and the contraction of any single edge preserves its perfection. This yields a characterization of contraction perfect graphs in terms of forbidden induced subgraphs, and a polynomial algorithm to recognize them. We also define the utter graph $u(G)$ which is the graph whose stable sets are in bijection with the co-2-plexes of $G$, and prove that $u(G)$ is perfect if and only if $G$ is contraction perfect.
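
As a toy illustration of the characterization (not the paper's algorithm), the following brute-force sketch tests a small graph by checking perfection of the graph itself and of every single-edge contraction; `is_perfect` is an exponential helper suitable only for tiny graphs.

```python
from itertools import combinations, product
import networkx as nx

def chromatic_number(G):
    nodes = list(G)
    n = len(nodes)
    for k in range(1, n + 1):
        for colors in product(range(k), repeat=n):
            coloring = dict(zip(nodes, colors))
            if all(coloring[u] != coloring[v] for u, v in G.edges()):
                return k
    return n

def clique_number(G):
    return max((len(c) for c in nx.find_cliques(G)), default=0)

def is_perfect(G):
    # Perfect: omega(H) == chi(H) for every induced subgraph H.
    for r in range(1, G.number_of_nodes() + 1):
        for S in combinations(G, r):
            H = G.subgraph(S)
            if clique_number(H) != chromatic_number(H):
                return False
    return True

def is_contraction_perfect(G):
    # Per the paper's characterization, it suffices to test G itself
    # together with every single-edge contraction.
    if not is_perfect(G):
        return False
    return all(is_perfect(nx.contracted_edge(G, e, self_loops=False))
               for e in G.edges())

print(is_contraction_perfect(nx.cycle_graph(4)))  # C4: True
```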

Polynomial approximations of functions are widely used in scientific computing. In certain applications, it is often desirable to require the polynomial approximation to be non-negative (resp. non-positive), or bounded within a given range, due to constraints posed by the underlying physical problems. Efficient numerical methods are thus needed to enforce such conditions. In this paper, we discuss effective numerical algorithms for polynomial approximation under non-negativity constraints. We first formulate the constrained optimization problem and its primal and dual forms, and then discuss efficient first-order convex optimization methods, with a particular focus on high-dimensional problems. Numerical examples are provided, for up to $200$ dimensions, to demonstrate the effectiveness and scalability of the methods.
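
As a rough illustration of the first-order flavor of such methods, here is a one-dimensional quadratic-penalty sketch, not necessarily the paper's algorithm: non-negativity is enforced only approximately on a constraint grid, with residual violations on the order of `1/mu`.

```python
import numpy as np

def fit_nonneg_poly(f, degree=8, mu=100.0, lr=1e-3, iters=20000):
    x_fit = np.linspace(-1, 1, 50)       # least-squares sample points
    x_con = np.linspace(-1, 1, 400)      # non-negativity constraint grid
    # Vandermonde matrices in the Chebyshev basis (well conditioned).
    V = np.polynomial.chebyshev.chebvander(x_fit, degree)
    G = np.polynomial.chebyshev.chebvander(x_con, degree)
    y = f(x_fit)
    c = np.zeros(degree + 1)
    for _ in range(iters):
        r = V @ c - y                     # fit residual
        viol = np.minimum(G @ c, 0.0)     # negative part = constraint violation
        grad = (2 / len(x_fit)) * (V.T @ r) \
             + (2 * mu / len(x_con)) * (G.T @ viol)
        c -= lr * grad
    return c

# Example: a non-negative bump whose unconstrained degree-8 fit dips
# below zero; the penalty pushes those dips (approximately) back up.
f = lambda x: np.maximum(0.0, 1.0 - 25.0 * x**2)
c = fit_nonneg_poly(f)
x = np.linspace(-1, 1, 7)
print(np.polynomial.chebyshev.chebval(x, c).round(4))
```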

Diffusion generative models have achieved remarkable success in generating images at a fixed resolution. However, existing models have limited ability to generalize to different resolutions when training data at those resolutions are not available. Leveraging techniques from operator learning, we present a novel deep-learning architecture, Dual-FNO UNet (DFU), which approximates the score operator by combining both spatial and spectral information at multiple resolutions. Comparisons of DFU to baselines demonstrate its scalability: 1) simultaneously training on multiple resolutions improves FID over training at any single fixed resolution; 2) DFU generalizes beyond its training resolutions, allowing for coherent, high-fidelity generation at higher resolutions with the same model, i.e. zero-shot super-resolution image generation; 3) we propose a fine-tuning strategy to further enhance the zero-shot super-resolution image generation capability of our model, leading to an FID of 11.3 at 1.66 times the maximum training resolution on FFHQ, which no other method comes close to achieving.
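
The spectral half of such an architecture can be sketched as an FNO-style layer whose learned weights live in frequency space, which is what lets a single model be evaluated at resolutions other than the training one; the channel and mode counts below are illustrative assumptions, not the paper's configuration, and only the low-frequency block is kept for brevity.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        # Learned complex weights on the retained low-frequency modes.
        self.w = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes,
                                dtype=torch.cfloat))

    def forward(self, x):                      # x: (B, C, H, W)
        B, C, H, W = x.shape
        xf = torch.fft.rfft2(x)                # (B, C, H, W//2 + 1)
        out = torch.zeros_like(xf)
        m = self.modes
        # Mix channels mode-by-mode on the low-frequency block only;
        # since the weights live in frequency space, the same layer can
        # run at resolutions other than the training one.
        out[:, :, :m, :m] = torch.einsum(
            "bixy,ioxy->boxy", xf[:, :, :m, :m], self.w)
        return torch.fft.irfft2(out, s=(H, W))

layer = SpectralConv2d(channels=8, modes=12)
print(layer(torch.randn(2, 8, 32, 32)).shape)   # works at 32x32...
print(layer(torch.randn(2, 8, 64, 64)).shape)   # ...and zero-shot at 64x64
```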

Consider large signal-plus-noise data matrices of the form $S + \Sigma^{1/2} X$, where $S$ is a low-rank deterministic signal matrix and the noise covariance matrix $\Sigma$ can be anisotropic. We establish the asymptotic joint distribution of its spiked singular values when the dimensionality and sample size are comparably large and the signals are supercritical under general assumptions concerning the structure of $(S, \Sigma)$ and the distribution of the random noise $X$. It turns out that the asymptotic distributions exhibit nonuniversality in the sense of dependence on the distributions of the entries of $X$, which contrasts with what has previously been established for the spiked sample eigenvalues in the context of spiked population models. Such a result yields the asymptotic distribution of the sample spiked eigenvalues associated with mixture models. We also explore the application of these findings in detecting mean heterogeneity of data matrices.
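
A small Monte Carlo sketch of the model (illustrative dimensions and signal strength, not the paper's experiments): comparing the top singular value of $S + \Sigma^{1/2} X$ under Gaussian and Rademacher noise hints at the dependence on the entry distribution that the nonuniversality result formalizes.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, theta, reps = 200, 400, 5.0, 100

# Rank-one supercritical signal S = theta * u v^T.
u = rng.normal(size=p)
u /= np.linalg.norm(u)
v = rng.normal(size=n)
v /= np.linalg.norm(v)
S = theta * np.outer(u, v)
# Anisotropic noise covariance: Sigma^{1/2} is diagonal here for simplicity.
sigma_sqrt = np.diag(np.sqrt(np.linspace(0.5, 2.0, p)))

for dist in ("gaussian", "rademacher"):
    tops = []
    for _ in range(reps):
        if dist == "gaussian":
            X = rng.normal(size=(p, n))
        else:
            X = rng.choice([-1.0, 1.0], size=(p, n))
        X /= np.sqrt(n)                    # standard normalization
        tops.append(np.linalg.svd(S + sigma_sqrt @ X, compute_uv=False)[0])
    tops = np.asarray(tops)
    # Per the abstract, the fluctuations of the spiked singular value may
    # depend on the noise law (nonuniversality), unlike for spiked sample
    # eigenvalues in the classical spiked population setting.
    print(f"{dist:11s} mean={tops.mean():.4f}  std={tops.std():.5f}")
```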

The Concordance Index (C-index) is a commonly used metric in Survival Analysis for evaluating the performance of a prediction model. In this paper, we propose a decomposition of the C-index into a weighted harmonic mean of two quantities: one for ranking observed events versus other observed events, and the other for ranking observed events versus censored cases. This decomposition enables a finer-grained analysis of the relative strengths and weaknesses of different survival prediction methods. Its usefulness is demonstrated through benchmark comparisons against classical models and state-of-the-art methods, together with the new variational generative neural-network-based method (SurVED) proposed in this paper. The performance of the models is assessed using four publicly available datasets with varying levels of censoring. Using the C-index decomposition and synthetic censoring, the analysis shows that deep learning models utilize the observed events more effectively than other models, which allows them to maintain a stable C-index across different censoring levels. In contrast, classical machine learning models deteriorate as the censoring level decreases, owing to their inability to improve on ranking events versus other events.
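
The decomposition is easy to compute directly: split the comparable pairs of Harrell's C-index by whether the later observation is an event or a censored case. The sketch below, on synthetic data, also verifies the harmonic-mean identity with weights proportional to each part's concordant-pair count (our reading of the decomposition; helper names are illustrative).

```python
import numpy as np

def c_index_parts(time, event, risk):
    conc = {"ee": 0.0, "ec": 0.0}    # concordant pairs per pair type
    comp = {"ee": 0.0, "ec": 0.0}    # comparable pairs per pair type
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                 # comparable pairs are anchored at events
        for j in range(n):
            if i == j or time[j] <= time[i]:
                continue
            kind = "ee" if event[j] else "ec"
            comp[kind] += 1
            if risk[i] > risk[j]:
                conc[kind] += 1
            elif risk[i] == risk[j]:
                conc[kind] += 0.5
    c_ee = conc["ee"] / comp["ee"]   # events vs other events
    c_ec = conc["ec"] / comp["ec"]   # events vs censored cases
    c_all = (conc["ee"] + conc["ec"]) / (comp["ee"] + comp["ec"])
    # Weighted harmonic mean, weights given by concordant-pair shares.
    w = conc["ee"] / (conc["ee"] + conc["ec"])
    c_harm = 1.0 / (w / c_ee + (1.0 - w) / c_ec)
    return c_ee, c_ec, c_all, c_harm

rng = np.random.default_rng(0)
t = rng.exponential(1.0, 300)
e = rng.random(300) < 0.6                  # roughly 40% censoring
r = -t + rng.normal(0.0, 0.5, 300)         # noisy risk score (higher = riskier)
c_ee, c_ec, c_all, c_harm = c_index_parts(t, e, r)
print(f"C_ee={c_ee:.3f}  C_ec={c_ec:.3f}  C={c_all:.3f}  harmonic={c_harm:.3f}")
```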

Among semiparametric regression models, partially linear additive models provide a useful tool for including additive nonparametric components as well as a parametric component when explaining the relationship between the response and a set of explanatory variables. This paper concerns such models under sparsity assumptions for the covariates included in the linear component. Sparse covariates are frequent in regression problems where the task of variable selection is usually of interest. As in other settings, outliers either in the residuals or in the covariates involved in the linear component have a harmful effect. To simultaneously achieve model selection for the parametric component of the model and resistance to outliers, we combine preliminary robust estimators of the additive component with robust linear $MM$-regression estimators penalized, for instance with SCAD, on the coefficients of the parametric part. Under mild assumptions, consistency results and rates of convergence for the proposed estimators are derived. A Monte Carlo study is carried out to compare, under different models and contamination schemes, the performance of the robust proposal with its classical counterpart. The obtained results show the advantage of using the robust approach. Through the analysis of a real data set, we also illustrate the benefits of the proposed procedure.
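
For reference, the SCAD penalty mentioned above (Fan and Li, 2001) is straightforward to write down; `lam` and `a` are the usual tuning constants, with `a = 3.7` the customary default.

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    b = np.abs(beta)
    # Three regimes: linear near zero, a quadratic blend, then constant,
    # so large coefficients are not shrunk (unlike the lasso).
    small = lam * b
    mid = (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1))
    large = lam**2 * (a + 1) / 2
    return np.where(b <= lam, small,
                    np.where(b <= a * lam, mid, large))

beta = np.linspace(-3, 3, 7)
print(scad_penalty(beta, lam=1.0).round(3))
```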

Deep learning models have exhibited remarkable performance across various domains. Nevertheless, burgeoning model sizes compel edge devices to offload a significant portion of the inference process to the cloud. While this practice offers numerous advantages, it also raises critical concerns regarding user data privacy. In scenarios where the cloud server's trustworthiness is in question, a practical and adaptable method to safeguard data privacy becomes imperative. In this paper, we introduce Ensembler, an extensible framework designed to substantially increase the difficulty of conducting model inversion attacks for adversarial parties. Ensembler leverages model ensembling on the adversarial server, running in parallel with existing approaches that introduce perturbations to sensitive data during collaborative inference. Our experiments demonstrate that, when combined with even basic Gaussian noise, Ensembler can effectively shield images from reconstruction attacks, achieving recognition levels below human performance in some strict settings and significantly outperforming baseline methods that lack the Ensembler framework.
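
A minimal sketch of the perturbation-based collaborative inference that Ensembler runs alongside (the server-side ensembling itself is not reproduced here): the client computes the first layers locally and ships only noised activations to the untrusted server. The split point, architecture, and noise scale are illustrative assumptions.

```python
import torch
import torch.nn as nn

client = nn.Sequential(                     # runs on the edge device
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
server = nn.Sequential(                     # runs on the untrusted cloud
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10))

def private_inference(x, sigma=0.5):
    with torch.no_grad():
        z = client(x)
        # Gaussian perturbation of the intermediate features: the raw
        # activations never leave the device, only the noisy ones do.
        z_noisy = z + sigma * torch.randn_like(z)
        return server(z_noisy)

print(private_inference(torch.randn(1, 3, 32, 32)).shape)  # (1, 10)
```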

Many standard graph classes are known to be characterized by means of layouts (a permutation of the vertices) excluding some patterns. Important such graph classes include, among others, proper interval graphs, interval graphs, chordal graphs, permutation graphs, and (co-)comparability graphs. For example, a graph $G=(V,E)$ is a proper interval graph if and only if $G$ has a layout $L$ such that for every triple of vertices with $x\prec_L y\prec_L z$, if $xz\in E$, then $xy\in E$ and $yz\in E$. Such a triple $x$, $y$, $z$ with $xz\in E$, $xy\in E$ and $yz\in E$ is called an indifference triple, and layouts excluding non-indifference triples (triples with $xz\in E$ but $xy\notin E$ or $yz\notin E$) are known as indifference layouts. In this paper, we investigate the concept of tree-layouts. A tree-layout $T_G=(T,r,\rho_G)$ of a graph $G=(V,E)$ is a tree $T$ rooted at some node $r$ and equipped with a one-to-one mapping $\rho_G$ between $V$ and the nodes of $T$ such that for every edge $xy\in E$, one of $\rho_G(x)$ and $\rho_G(y)$ is an ancestor of the other. Clearly, layouts are tree-layouts. Excluding a pattern in a tree-layout is defined similarly to excluding a pattern in a layout, but now using the ancestor relation. Unexplored graph classes can be defined by means of tree-layouts excluding some patterns. As a proof of concept, we show that excluding non-indifference triples in tree-layouts yields a natural notion of proper chordal graphs. We characterize proper chordal graphs and position them in the hierarchy of known subclasses of chordal graphs. We also provide a canonical representation of proper chordal graphs that encodes all the indifference tree-layouts rooted at some vertex. Based on this result, we first design a polynomial-time recognition algorithm for proper chordal graphs. We then show that the problem of testing isomorphism between two proper chordal graphs is in P, whereas this problem is known to be GI-complete on chordal graphs.
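
The layout pattern for proper interval graphs is easy to state in code. The brute-force sketch below (illustrative helper names; exponential, toy graphs only) searches for a layout excluding non-indifference triples, correctly accepting the path $P_4$ and rejecting the claw $K_{1,3}$.

```python
from itertools import combinations, permutations

def excludes_non_indifference_triples(layout, edges):
    E = {frozenset(e) for e in edges}
    for x, y, z in combinations(layout, 3):    # triples in layout order
        if frozenset((x, z)) in E and (frozenset((x, y)) not in E
                                       or frozenset((y, z)) not in E):
            return False                        # forbidden pattern found
    return True

def is_proper_interval(vertices, edges):
    # Brute force over all layouts (exponential; toy graphs only).
    return any(excludes_non_indifference_triples(p, edges)
               for p in permutations(vertices))

path = [(0, 1), (1, 2), (2, 3)]    # P4 is a proper interval graph
claw = [(0, 1), (0, 2), (0, 3)]    # K_{1,3} is interval but not proper
print(is_proper_interval(range(4), path))   # True
print(is_proper_interval(range(4), claw))   # False
```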

Large-scale language-vision pre-training models, such as CLIP, have achieved remarkable text-guided image morphing results by leveraging several unconditional generative models. However, existing CLIP-guided image morphing methods encounter difficulties when morphing photorealistic images. Specifically, existing guidance fails to provide detailed explanations of the morphing regions within the image, leading to misguidance. In this paper, we observe that such misguidance can be effectively mitigated by simply using a proper regularization loss. Our approach comprises two key components: 1) a geodesic cosine similarity loss that minimizes inter-modality features (i.e., image and text) on a projected subspace of the CLIP space, and 2) a latent regularization loss that minimizes intra-modality features (i.e., image and image) on the image manifold. By replacing the na\"ive directional CLIP loss in a drop-in manner, our method achieves superior morphing results on both images and videos for various benchmarks, including CLIP-inversion.
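
The flavor of the two compared losses can be sketched as follows; the random tensors stand in for CLIP embeddings, and the paper's projection onto a subspace of the CLIP space is not reproduced, so this is a simplified stand-in rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def directional_clip_loss(img_src, img_out, txt_src, txt_out):
    # The naive baseline: align the change in image features with the
    # change in text features via plain cosine similarity.
    d_img = F.normalize(img_out - img_src, dim=-1)
    d_txt = F.normalize(txt_out - txt_src, dim=-1)
    return 1.0 - (d_img * d_txt).sum(-1).mean()

def geodesic_cosine_loss(img_out, txt_out, eps=1e-7):
    # Penalize the spherical (arc-length) distance between the two
    # modalities rather than the raw cosine: a simplified stand-in for
    # the paper's geodesic similarity on a projected subspace.
    cos = (F.normalize(img_out, dim=-1)
           * F.normalize(txt_out, dim=-1)).sum(-1)
    return torch.arccos(cos.clamp(-1 + eps, 1 - eps)).mean()

f = lambda: torch.randn(4, 512)              # stand-ins for CLIP features
print(directional_clip_loss(f(), f(), f(), f()).item())
print(geodesic_cosine_loss(f(), f()).item())
```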

We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data augmentation and, optionally, distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We will share our code, based on the Timm library, together with pre-trained models.
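
A single ResMLP block as described, in a compact PyTorch sketch; the paper's affine normalization and per-layer scaling are simplified away, so this is a structural illustration rather than the reference implementation.

```python
import torch
import torch.nn as nn

class ResMLPBlock(nn.Module):
    def __init__(self, n_patches, dim, hidden):
        super().__init__()
        self.cross_patch = nn.Linear(n_patches, n_patches)  # (i) patch mixing
        self.per_patch = nn.Sequential(                     # (ii) channel MLP
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):                    # x: (B, n_patches, dim)
        # Patch interaction acts identically on every channel.
        x = x + self.cross_patch(x.transpose(1, 2)).transpose(1, 2)
        # Channel interaction acts independently on every patch.
        return x + self.per_patch(x)

block = ResMLPBlock(n_patches=196, dim=384, hidden=1536)
print(block(torch.randn(2, 196, 384)).shape)   # (2, 196, 384)
```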
