
The Gromov-Wasserstein (GW) distance, rooted in optimal transport (OT) theory, provides a natural framework for aligning heterogeneous datasets. Alas, statistical estimation of the GW distance suffers from the curse of dimensionality and its exact computation is NP-hard. To circumvent these issues, entropic regularization has emerged as a remedy that enables parametric estimation rates via plug-in and efficient computation using Sinkhorn iterations. Motivated by further scaling up entropic GW (EGW) alignment methods to data dimensions and sample sizes that appear in modern machine learning applications, we propose a novel neural estimation approach. Our estimator parametrizes a minimax semi-dual representation of the EGW distance by a neural network, approximates expectations by sample means, and optimizes the resulting empirical objective over parameter space. We establish non-asymptotic error bounds on the EGW neural estimator of the alignment cost and optimal plan. Our bounds characterize the effective error in terms of neural network (NN) size and the number of samples, revealing optimal scaling laws that guarantee parametric convergence. The bounds hold for compactly supported distributions, and imply that the proposed estimator is minimax-rate optimal over that class. Numerical experiments validating our theory are also provided.
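For intuition, the Sinkhorn iterations mentioned above can be sketched in a few lines. The snippet below is the classical alternating-scaling loop for entropic OT between two discrete distributions — the computational primitive that entropic GW solvers build on, not the paper's neural semi-dual estimator — written in plain Python with a toy cost matrix for illustration:

```python
import math

def sinkhorn(C, a, b, eps=0.1, n_iter=100):
    """Classical Sinkhorn iterations for entropic OT with cost matrix C,
    marginals a and b, and regularization parameter eps."""
    n, m = len(a), len(b)
    # Gibbs kernel K_ij = exp(-C_ij / eps)
    K = [[math.exp(-C[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(n_iter):
        # alternating scalings that match the column and row marginals
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
    # entropic OT plan P_ij = u_i * K_ij * v_j
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# toy cost: matching identical points is cheap, so mass concentrates on the diagonal
C = [[0.0, 1.0], [1.0, 0.0]]
a, b = [0.5, 0.5], [0.5, 0.5]
P = sinkhorn(C, a, b)
```

After convergence the plan's row and column sums recover the prescribed marginals, and for small eps the plan approaches the unregularized OT plan.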

Related content

Primal-dual methods have a natural application in Safe Reinforcement Learning (SRL), posed as a constrained policy optimization problem. In practice, however, applying primal-dual methods to SRL is challenging, due to the inter-dependency of the learning rate (LR) and the Lagrangian multipliers (dual variables) each time an embedded unconstrained RL problem is solved. In this paper, we propose, analyze and evaluate adaptive primal-dual (APD) methods for SRL, where two adaptive LRs are adjusted to the Lagrangian multipliers so as to optimize the policy in each iteration. We theoretically establish the convergence, optimality and feasibility of the APD algorithm. Finally, we conduct numerical evaluations of the practical APD algorithm in four well-known environments in Bullet-Safety-Gym, employing two state-of-the-art SRL algorithms: PPO-Lagrangian and DDPG-Lagrangian. All experiments show that the practical APD algorithm outperforms, or performs comparably to, the constant-LR baselines while attaining more stable training. Additionally, we substantiate the robustness of selecting the two adaptive LRs with empirical evidence.
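The primal-dual structure underlying this line of work can be illustrated on a toy scalar problem. The sketch below alternates projected dual ascent with primal descent on the Lagrangian, and shrinks the primal LR as the multiplier grows; the specific adaptation rule and the toy objective are assumptions for illustration only, not the paper's APD schedule:

```python
def adaptive_primal_dual(f_grad, g, g_grad, x0, base_lr=0.1, dual_lr=0.05, n_iter=5000):
    """Toy adaptive primal-dual loop for: minimize f(x) s.t. g(x) <= 0.
    The primal LR is adapted to the current Lagrange multiplier
    (one simple rule, hypothetical here)."""
    x, lam = x0, 0.0
    for _ in range(n_iter):
        lr = base_lr / (1.0 + lam)               # adapt primal LR to the multiplier
        x -= lr * (f_grad(x) + lam * g_grad(x))  # primal descent on the Lagrangian
        lam = max(0.0, lam + dual_lr * g(x))     # projected dual ascent
    return x, lam

# minimize x^2 subject to 1 - x <= 0; the optimum is x = 1 with multiplier 2
x, lam = adaptive_primal_dual(lambda x: 2 * x, lambda x: 1 - x, lambda x: -1.0, x0=0.0)
```

In SRL the primal variable is a policy updated by an embedded RL algorithm (e.g., PPO or DDPG) rather than a scalar, but the multiplier/LR interplay is the same.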

We propose semantic communication over wireless channels for various modalities, e.g., text and images, in a task-oriented communications setup where the task is classification. We present two approaches based on memory and learning. Both approaches rely on a pre-trained neural network to extract semantic information but differ in codebook construction. In the memory-based approach, we use semantic quantization and compression models, leveraging past source realizations as a codebook to eliminate the need for further training. In the learning-based approach, we use a semantic vector quantized autoencoder model that learns a codebook from scratch. Both are followed by a channel coder in order to reliably convey semantic information to the receiver (classifier) through the wireless medium. In addition to classification accuracy, we define system time efficiency as a new performance metric. Our results demonstrate that the proposed memory-based approach outperforms its learning-based counterpart with respect to system time efficiency while offering comparable accuracy to semantic-agnostic conventional baselines.
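The codebook step common to both approaches amounts to nearest-neighbor quantization: a semantic feature vector is mapped to the index of its closest codeword, and only that index is channel-coded and transmitted. A minimal sketch, with a hypothetical two-dimensional codebook standing in for stored past realizations or learned codewords:

```python
def quantize(feature, codebook):
    """Return the index of the codeword nearest to `feature`
    (squared Euclidean distance); only this index is transmitted."""
    def sq_dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(range(len(codebook)), key=lambda i: sq_dist(feature, codebook[i]))

# hypothetical codebook, e.g. features of past source realizations
codebook = [[0.0, 0.0], [1.0, 1.0], [0.0, 2.0]]
idx = quantize([0.9, 1.2], codebook)
```

The receiver looks up `codebook[idx]` and feeds the reconstructed semantic vector to the classifier.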

Subsumption resolution is an expensive but highly effective simplifying inference for first-order saturation theorem provers. We present a new SAT-based reasoning technique for subsumption resolution, without requiring radical changes to the underlying saturation algorithm. We implemented our work in the theorem prover Vampire, and show that it is noticeably faster than the state of the art.

Independent parallel q-ary symmetric channels are a suitable transmission model for several applications. The proposed weighted-Hamming metric is tailored to this setting and enables optimal decoding performance. We show that some weighted-Hamming-metric codes exhibit the unusual property that all errors beyond half the minimum distance can be corrected. Nevertheless, a tight relation between the error-correction capability of a code and its minimum distance can be established. Generalizing their Hamming-metric counterparts, upper and lower bounds on the cardinality of a code with a given weighted-Hamming distance are obtained. Finally, we propose a simple code construction with optimal minimum distance for specific parameters.
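The metric itself is easy to state: a position where two words differ contributes a channel-dependent weight rather than a unit cost, so reliable sub-channels can be penalized differently from noisy ones. A minimal sketch with placeholder weights:

```python
def weighted_hamming(x, y, w):
    """Weighted Hamming distance between q-ary words x and y:
    each differing position i contributes weight w[i] instead of 1."""
    return sum(wi for xi, yi, wi in zip(x, y, w) if xi != yi)

# hypothetical weights, e.g. derived from per-channel error probabilities
d = weighted_hamming([0, 1, 2, 0], [0, 2, 2, 1], [1.0, 2.0, 2.0, 3.0])
```

Here the words differ in positions 1 and 3, giving distance 2.0 + 3.0 = 5.0; with all weights equal to 1 the classical Hamming distance is recovered.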

Conformal prediction (CP) is a method for constructing a prediction interval around the output of a fitted model, whose validity does not rely on the model being correct: the CP interval offers a coverage guarantee that is distribution-free, but relies on the training data being drawn from the same distribution as the test data. A recent variant, weighted conformal prediction (WCP), reweights the method to allow for covariate shift between the training and test distributions. However, WCP requires knowledge of the nature of the covariate shift; specifically, the likelihood ratio between the test and training covariate distributions. In practice, since this likelihood ratio is estimated rather than known exactly, the coverage guarantee may degrade due to the estimation error. In this paper, we consider a special scenario where observations belong to a finite number of groups, and these groups determine the covariate shift between the training and test distributions; for instance, this may arise if the training set is collected via stratified sampling. Our results demonstrate that in this special case, the predictive coverage guarantees of WCP can be drastically improved beyond what existing estimation-error bounds suggest.
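The reweighting in WCP boils down to replacing the empirical quantile of the calibration scores with a weighted quantile, where each score carries the (estimated) test/train likelihood ratio of its covariates — constant within each group in the setting above. The sketch below is simplified: the exact WCP threshold also assigns a weight to the test point itself, which is omitted here:

```python
def weighted_quantile(scores, weights, alpha):
    """Simplified WCP-style threshold: smallest score whose cumulative
    weight reaches a (1 - alpha) fraction of the total weight.
    `weights` are likelihood ratios (group-based in the paper's setting)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    total = sum(weights)
    cum = 0.0
    for i in order:
        cum += weights[i]
        if cum >= (1 - alpha) * total:
            return scores[i]
    return scores[order[-1]]

# with uniform weights this reduces to the ordinary empirical quantile
q = weighted_quantile([1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 1.0, 1.0], alpha=0.25)
```

Upweighting the scores of over-represented test groups enlarges the threshold exactly where the shift demands wider intervals.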

Despite advances in generative methods, accurately modeling the distribution of graphs remains a challenging task, primarily because of the absence of a predefined or inherent unique graph representation. Two main strategies have emerged to tackle this issue: 1) restricting the number of possible representations by sorting the nodes, or 2) using permutation-invariant/equivariant functions, specifically Graph Neural Networks (GNNs). In this paper, we introduce a new framework named Discrete Graph Auto-Encoder (DGAE), which leverages the strengths of both strategies and mitigates their respective limitations. In essence, we propose a two-step strategy. We first use a permutation-equivariant auto-encoder to convert graphs into sets of discrete latent node representations, each node being represented by a sequence of quantized vectors. In the second step, we sort the sets of discrete latent representations and learn their distribution with a specifically designed auto-regressive model based on the Transformer architecture. Through multiple experimental evaluations, we demonstrate performance competitive with the existing state of the art across various datasets. Several ablation studies further support the merits of our method.

Movable antenna (MA) provides an innovative way to arrange antennas that can contribute to improved signal quality and more effective interference management. This method is especially beneficial for full-duplex (FD) wireless, which struggles with self-interference (SI) that usually overpowers the desired incoming signals. By dynamically repositioning transmit/receive antennas, we can mitigate the SI and enhance the reception of incoming signals. Thus, this paper proposes a novel MA-enabled point-to-point FD wireless system and formulates the minimum achievable rate of two FD terminals. To maximize the minimum achievable rate and determine the near-optimal positions of the MAs, we introduce a solution based on projected particle swarm optimization (PPSO), which can circumvent common suboptimal positioning issues. Moreover, numerical results reveal that the PPSO method leads to a better performance compared to the conventional alternating position optimization (APO). The results also demonstrate that an MA-enabled FD system outperforms the one using fixed-position antennas (FPAs).
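Projected particle swarm optimization follows the standard PSO velocity/position updates and then projects each candidate back onto the feasible antenna region. The sketch below uses box clipping as the projection and a sphere objective as a stand-in for the (non-convex) achievable-rate objective; both are illustrative assumptions, not the paper's exact PPSO formulation:

```python
import random

def ppso(obj, lo, hi, dim=2, n_particles=20, n_iter=100,
         w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal projected PSO minimizing `obj` over the box [lo, hi]^dim.
    Each position update is followed by a projection (here: clipping)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [obj(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # projection onto the feasible region
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = obj(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy objective: sphere function with minimum at the origin
best, val = ppso(lambda p: sum(x * x for x in p), lo=-1.0, hi=1.0)
```

In the MA setting `obj` would evaluate the negative minimum achievable rate of the two FD terminals at candidate antenna positions, with the projection enforcing the movement region (and, if needed, antenna spacing constraints).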

For turbulent problems of industrial scale, computational cost may become prohibitive due to the stability constraints associated with explicit time discretization of the underlying conservation laws. On the other hand, implicit methods allow for larger time-step sizes but require exorbitant computational resources. Implicit-explicit (IMEX) formulations combine both temporal approaches, using an explicit method in nonstiff portions of the domain and an implicit method in stiff portions. While these methods can be shown to be orders of magnitude faster than typical explicit discretizations, they are still limited in terms of cost by their implicit discretization. Hybridization reduces the scaling of these systems to an effective lower dimension, which allows the system to be solved at significant speedup factors compared to standard implicit methods. This work proposes an IMEX scheme that combines hybridized and standard flux reconstruction (FR) methods to tackle geometry-induced stiffness. By using the so-called transmission conditions, an overall conservative formulation can be obtained after combining both explicit FR and hybridized implicit FR methods. We verify and apply our approach to a series of numerical examples, including a multi-element airfoil at Reynolds number 1.7 million. Results demonstrate speedup factors of four against standard IMEX formulations and at least 15 against standard explicit formulations for the same problem.
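The core IMEX idea — implicit treatment of the stiff term only, so the step size is not throttled by stiffness — can be shown on a scalar model problem. The sketch below is first-order IMEX Euler for y' = lam*y + f(t) with a stiff lam; it is a minimal analogue of the splitting, not the hybridized FR scheme of the paper:

```python
import math

def imex_euler(y0, t0, t_end, h, lam, f_explicit):
    """IMEX Euler for y' = lam*y + f(t): the stiff linear term lam*y is
    treated implicitly, the forcing f(t) explicitly. The implicit update
    y_new = y + h*(lam*y_new + f(t)) is solved in closed form."""
    y, t = y0, t0
    while t < t_end - 1e-12:
        y = (y + h * f_explicit(t)) / (1.0 - h * lam)
        t += h
    return y

# stiff test problem y' = -1000*(y - cos t) - sin t, exact solution y = cos t
lam = -1000.0
y = imex_euler(1.0, 0.0, 1.0, h=0.01,
               lam=lam, f_explicit=lambda t: 1000.0 * math.cos(t) - math.sin(t))
```

Note that fully explicit Euler at the same step size would be unstable here (the amplification factor |1 + h*lam| = 9 exceeds 1), while the IMEX update damps the stiff mode and tracks the slow solution.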

We propose GAN-Supervised Learning, a framework for learning discriminative models and their GAN-generated training data jointly end-to-end. We apply our framework to the dense visual alignment problem. Inspired by the classic Congealing method, our GANgealing algorithm trains a Spatial Transformer to map random samples from a GAN trained on unaligned data to a common, jointly-learned target mode. We show results on eight datasets, all of which demonstrate our method successfully aligns complex data and discovers dense correspondences. GANgealing significantly outperforms past self-supervised correspondence algorithms and performs on-par with (and sometimes exceeds) state-of-the-art supervised correspondence algorithms on several datasets -- without making use of any correspondence supervision or data augmentation and despite being trained exclusively on GAN-generated data. For precise correspondence, we improve upon state-of-the-art supervised methods by as much as $3\times$. We show applications of our method for augmented reality, image editing and automated pre-processing of image datasets for downstream GAN training.

Knowledge graphs (KGs) serve as useful resources for various natural language processing applications. Previous KG completion approaches require a large number of training instances (i.e., head-tail entity pairs) for every relation. In reality, however, very few entity pairs are available for most relations. Existing one-shot learning work generalizes poorly to few-shot scenarios and does not fully exploit the supervisory information; moreover, few-shot KG completion has not been well studied yet. In this work, we propose a novel few-shot relation learning model (FSRL) that aims at discovering facts of new relations with few-shot references. FSRL can effectively capture knowledge from heterogeneous graph structure, aggregate representations of few-shot references, and match similar entity pairs to the reference set for every relation. Extensive experiments on two public datasets demonstrate that FSRL outperforms the state-of-the-art.
