
This paper deals with the recoverable robust shortest path problem under the interval uncertainty representation. The problem is known to be strongly NP-hard and not approximable in general digraphs. Polynomial-time algorithms for the problem in directed acyclic graphs (DAGs) are proposed.
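
As background for the DAG setting, here is a minimal sketch of the standard building block such algorithms rely on, single-source shortest paths computed along a topological order; this is not the paper's recoverable robust algorithm, and the graph representation is an assumption.

```python
# Minimal sketch: single-source shortest paths in a DAG via a topological order.
# Only the classical building block for DAG path problems, not the recoverable
# robust algorithm of the paper; the adjacency-list format is assumed.
from collections import defaultdict, deque

def dag_shortest_paths(n, edges, source):
    """n nodes labelled 0..n-1, edges = [(u, v, cost), ...], costs finite."""
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v, c in edges:
        adj[u].append((v, c))
        indeg[v] += 1

    # Kahn's algorithm for a topological order.
    order, queue = [], deque(i for i in range(n) if indeg[i] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)

    dist = [float("inf")] * n
    dist[source] = 0.0
    for u in order:                      # relax edges in topological order
        if dist[u] < float("inf"):
            for v, c in adj[u]:
                dist[v] = min(dist[v], dist[u] + c)
    return dist

# Example: dag_shortest_paths(4, [(0, 1, 2.0), (0, 2, 5.0), (1, 2, 1.0), (2, 3, 1.0)], 0)
```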

Related content

Most of the literature on causality considers the structural framework of Pearl and the potential-outcome framework of Neyman and Rubin to be formally equivalent, and therefore interchangeably uses the do-notation and the potential-outcome subscript notation to write counterfactual outcomes. In this paper, we superimpose the two causal frameworks to prove that structural counterfactual outcomes and potential outcomes do not coincide in general -- not even in law. More precisely, we express the law of the potential outcomes in terms of the latent structural causal model under the fundamental assumptions of causal inference. This enables us to precisely identify when counterfactual inference is or is not equivalent between approaches, and to clarify the meaning of each kind of counterfactuals.

The problem of finding two link-disjoint paths of minimal total cost is intractable when the cost of each path is limited, since the partition problem is a special case of it. In optical networks, such a limit can be imposed by the signal modulation reach. Even without this limit, the existing literature considered the problem intractable because of the spectrum continuity and contiguity constraints, but we show that it can be solved exactly with the recently proposed generic Dijkstra algorithm over a polynomially-bounded search space, thus proving the problem tractable.
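
To illustrate the search-space idea, the following is a simplified label-setting sketch in the spirit of generic Dijkstra for a single path: each label pairs a node with the set of spectrum slots still available along the path, continuity is enforced by intersecting slot sets, and contiguity by requiring a run of d consecutive slots. The data structures and the single-path restriction are simplifying assumptions, not the authors' implementation (the paper addresses the two-disjoint-paths case).

```python
# Illustrative sketch of a label-setting search over (node, available-slot-set)
# states, in the spirit of the generic Dijkstra algorithm. Slot-set encoding,
# the contiguity check and the demand size d are simplifying assumptions.
import heapq
from itertools import count

def has_contiguous_run(slots, d):
    """True if the slot indices contain a run of d consecutive slots."""
    run, prev = 0, None
    for s in sorted(slots):
        run = run + 1 if prev is not None and s == prev + 1 else 1
        if run >= d:
            return True
        prev = s
    return False

def cheapest_feasible_path(adj, src, dst, all_slots, d):
    """adj[u] = [(v, cost, edge_slots), ...]. Returns the minimal cost of a
    src-dst path whose commonly available slots contain d contiguous slots,
    or infinity if no such path exists."""
    start_slots = frozenset(all_slots)
    if not has_contiguous_run(start_slots, d):
        return float("inf")
    tie = count()                                      # heap tie-breaker
    pq, best = [(0.0, next(tie), src, start_slots)], {}
    while pq:
        cost, _, u, slots = heapq.heappop(pq)
        if u == dst:
            return cost
        if best.get((u, slots), float("inf")) < cost:
            continue
        for v, c, edge_slots in adj.get(u, []):
            nslots = slots & frozenset(edge_slots)     # spectrum continuity
            if not has_contiguous_run(nslots, d):
                continue                               # contiguity pruning
            key, ncost = (v, nslots), cost + c
            if ncost < best.get(key, float("inf")):
                best[key] = ncost
                heapq.heappush(pq, (ncost, next(tie), v, nslots))
    return float("inf")
```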

We investigate the link between regularised self-transport problems and maximum likelihood estimation in Gaussian mixture models (GMM). This link suggests that self-transport followed by a clustering technique leads to principled estimators at a reasonable computational cost. Moreover, the robustness, sparsity and stability properties of the optimal transport plan arguably make regularised self-transport a statistical tool of choice for GMM estimation.
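
A minimal sketch of the pipeline described here: entropically regularised self-transport of a sample onto itself via Sinkhorn iterations, followed by a simple clustering of the resulting plan. The threshold-and-connected-components step, the regularisation strength, and the threshold heuristic are assumptions; any clustering of the plan could be substituted.

```python
# Minimal sketch: entropic self-transport of a sample onto itself (Sinkhorn),
# followed by clustering the transport plan. The thresholding step and the
# choice of connected components as the clustering rule are assumptions.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def sinkhorn_self_transport(X, eps=0.5, n_iter=200):
    """X: (n, d) sample. Returns the (n, n) entropically regularised plan."""
    n = X.shape[0]
    C = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    K = np.exp(-C / eps)
    a = b = np.full(n, 1.0 / n)                           # uniform marginals
    u = np.ones(n)
    for _ in range(n_iter):                               # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def cluster_from_plan(P, tau=None):
    """Cluster points via connected components of the thresholded plan."""
    if tau is None:
        tau = 0.1 / P.shape[0] ** 2       # heuristic threshold (assumption)
    A = csr_matrix((P + P.T) > tau)
    _, labels = connected_components(A, directed=False)
    return labels

# Example on a two-component Gaussian mixture:
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
labels = cluster_from_plan(sinkhorn_self_transport(X))
```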

In this work, we introduce Regularity Structures B-series, which are used to describe solutions of singular stochastic partial differential equations (SPDEs). We define composition and substitution of these B-series and, as in the context of B-series for ordinary differential equations, these operations can be rewritten via products and Hopf algebras that have been used for building up renormalised models. These models provide a suitable topology for solving singular SPDEs. This new construction sheds new light on these products and opens interesting perspectives for the study of singular SPDEs in connection with B-series.
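
For readers unfamiliar with the ODE case, the classical B-series indexed by rooted trees reads as follows; this is standard background notation, not the regularity-structures extension introduced in the paper.

```latex
% Classical B-series over the set T of rooted trees (ODE background).
% a : T \cup \{\emptyset\} \to \mathbb{R} are the coefficients, |\tau| the
% number of nodes, \sigma(\tau) the symmetry factor, F(\tau) the elementary
% differential of the vector field f.
\[
  B(a, hf)(y_0) \;=\; a(\emptyset)\, y_0
  \;+\; \sum_{\tau \in T} \frac{h^{|\tau|}}{\sigma(\tau)}\, a(\tau)\, F(\tau)(y_0).
\]
```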

We study the problem of reconstructing solutions of inverse problems when only noisy measurements are available. We assume that the problem can be modeled with an infinite-dimensional forward operator that is not continuously invertible. Then, we restrict this forward operator to finite-dimensional spaces so that the inverse is Lipschitz continuous. For the inverse operator, we demonstrate that there exists a neural network which is a robust-to-noise approximation of the operator. In addition, we show that these neural networks can be learned from appropriately perturbed training data. We demonstrate the admissibility of this approach to a wide range of inverse problems of practical interest. Numerical examples are given that support the theoretical findings.
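
A minimal PyTorch sketch of the learned-inverse idea on a finite-dimensional discretisation: an ill-conditioned linear forward operator, training pairs built from noisy measurements, and a small network trained to approximate the (Lipschitz) inverse on that subspace. The operator, dimensions, noise level and architecture are illustrative assumptions, not the paper's construction.

```python
# Sketch: learn an approximate inverse of a finite-dimensional (restricted)
# forward operator from noisy training data. Operator, dimensions, noise level
# and architecture are illustrative assumptions.
import torch
from torch import nn

torch.manual_seed(0)
n, m, sigma = 20, 40, 0.01                       # signal dim, measurement dim, noise
A = torch.randn(m, n) @ torch.diag(1.0 / torch.arange(1, n + 1).float())  # ill-conditioned

def sample_batch(batch_size=256):
    x = torch.randn(batch_size, n)                      # ground-truth signals
    y = x @ A.T + sigma * torch.randn(batch_size, m)    # noisy measurements
    return y, x

net = nn.Sequential(nn.Linear(m, 128), nn.ReLU(), nn.Linear(128, 128),
                    nn.ReLU(), nn.Linear(128, n))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):                         # train the approximate inverse
    y, x = sample_batch()
    loss = ((net(y) - x) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

y_test, x_test = sample_batch(1)
print(torch.norm(net(y_test) - x_test) / torch.norm(x_test))   # relative error
```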

In two and three dimensions, we design and analyze a posteriori error estimators for the mixed Stokes eigenvalue problem. The unknowns of this mixed formulation are the pseudostress, velocity, and pressure. With a lowest-order mixed finite element scheme, together with a postprocessing technique, we prove that the proposed estimator is reliable and efficient. We illustrate the results with several numerical tests in two and three dimensions to assess the performance of the estimator.

We formulate and test a technique to use Emergent Communication (EC) with a pre-trained multilingual model to improve on modern Unsupervised NMT systems, especially for low-resource languages. It has been argued that the current dominant paradigm in NLP of pre-training on text-only corpora will not yield robust natural language understanding systems, and the need for grounded, goal-oriented, and interactive language learning has been highlighted. In our approach, we embed a multilingual model (mBART, Liu et al., 2020) into an EC image-reference game, in which the model is incentivized to use multilingual generations to accomplish a vision-grounded task. The hypothesis is that this will align multiple languages to a shared task space. We present two variants of EC Fine-Tuning (Steinert-Threlkeld et al., 2022), one of which outperforms a backtranslation-only baseline in all four languages investigated, including the low-resource language Nepali.
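
A schematic sketch of one training step of an image-reference game objective: the sender describes a target image, the receiver must pick it out of a set of candidates from the message alone. The sender and receiver encoders below are random-projection placeholders standing in for mBART and a vision encoder, and the continuous message is a simplification; the actual EC Fine-Tuning setup uses discrete multilingual generations and is more involved.

```python
# Schematic sketch of a referential-game training step. The encoders are
# placeholders (linear layers), not mBART or the paper's vision encoder;
# only the loss structure of the game is the illustrative part.
import torch
from torch import nn
import torch.nn.functional as F

dim_img, dim_msg, batch, n_candidates = 512, 256, 8, 4

sender = nn.Linear(dim_img, dim_msg)     # placeholder for an mBART-based sender
receiver = nn.Linear(dim_img, dim_msg)   # placeholder candidate-image encoder
opt = torch.optim.Adam([*sender.parameters(), *receiver.parameters()], lr=1e-4)

images = torch.randn(batch, n_candidates, dim_img)   # dummy candidate images
targets = torch.randint(n_candidates, (batch,))      # index of the true image

target_imgs = images[torch.arange(batch), targets]   # (batch, dim_img)
msg = sender(target_imgs)                            # "message" embedding
cand = receiver(images)                              # (batch, n_candidates, dim_msg)
logits = torch.einsum("bd,bcd->bc", msg, cand)       # message/candidate similarity
loss = F.cross_entropy(logits, targets)              # receiver must pick the target

opt.zero_grad(); loss.backward(); opt.step()
```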

The reliance of text classifiers on spurious correlations can lead to poor generalization at deployment, raising concerns about their use in safety-critical domains such as healthcare. In this work, we propose to use counterfactual data augmentation, guided by knowledge of the causal structure of the data, to simulate interventions on spurious features and to learn more robust text classifiers. We show that this strategy is appropriate in prediction problems where the label is spuriously correlated with an attribute. Under the assumptions of such problems, we discuss the favorable sample complexity of counterfactual data augmentation, compared to importance re-weighting. Pragmatically, we match examples using auxiliary data, based on diff-in-diff methodology, and use a large language model (LLM) to represent a conditional probability of text. Through extensive experimentation on learning caregiver-invariant predictors of clinical diagnoses from medical narratives and on semi-synthetic data, we demonstrate that our method for simulating interventions improves out-of-distribution (OOD) accuracy compared to baseline invariant learning algorithms.
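
A schematic sketch of the augmentation loop: each training text is rewritten so that the spurious attribute (here, the caregiver/author) is changed while the label is kept, and the classifier is then trained on the union of original and counterfactual examples. The `rewrite_flipping_attribute` LLM call is a hypothetical placeholder, not an API from the paper.

```python
# Schematic sketch of counterfactual data augmentation for a text classifier.
# `rewrite_flipping_attribute` is a hypothetical LLM-backed rewriter that
# changes only the spurious attribute (e.g., caregiver style), not the label.
from dataclasses import dataclass

@dataclass
class Example:
    text: str
    label: int
    spurious_attr: str   # e.g., which caregiver wrote the note

def rewrite_flipping_attribute(text: str, new_attr: str) -> str:
    """Placeholder: an LLM would rewrite `text` as if written under `new_attr`,
    preserving the label-relevant clinical content."""
    return f"[rewritten as {new_attr}] {text}"

def augment(dataset, attributes):
    augmented = list(dataset)
    for ex in dataset:
        for attr in attributes:
            if attr != ex.spurious_attr:    # simulate an intervention on the attribute
                augmented.append(Example(
                    text=rewrite_flipping_attribute(ex.text, attr),
                    label=ex.label,          # label unchanged by the intervention
                    spurious_attr=attr,
                ))
    return augmented   # train any standard text classifier on this set
```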

This paper does not describe a working system. Instead, it presents a single idea about representation which allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation and capsules. GLOM answers the question: How can a neural network with a fixed architecture parse an image into a part-whole hierarchy which has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language.

Recently pre-trained language representation models such as BERT have shown great success when fine-tuned on downstream tasks including information retrieval (IR). However, pre-training objectives tailored for ad-hoc retrieval have not been well explored. In this paper, we propose Pre-training with Representative wOrds Prediction (PROP) for ad-hoc retrieval. PROP is inspired by the classical statistical language model for IR, specifically the query likelihood model, which assumes that the query is generated as the piece of text representative of the "ideal" document. Based on this idea, we construct the representative words prediction (ROP) task for pre-training. Given an input document, we sample a pair of word sets according to the document language model, where the set with higher likelihood is deemed as more representative of the document. We then pre-train the Transformer model to predict the pairwise preference between the two word sets, jointly with the Masked Language Model (MLM) objective. By further fine-tuning on a variety of representative downstream ad-hoc retrieval tasks, PROP achieves significant improvements over baselines without pre-training or with other pre-training methods. We also show that PROP can achieve exciting performance under both the zero- and low-resource IR settings. The code and pre-trained models are available at //github.com/Albert-Ma/PROP.
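
A minimal sketch of how one ROP training pair could be constructed from a single document: fit a smoothed unigram document language model, sample two word sets from it, and label the higher-likelihood set as the more representative one. The set size, the Dirichlet smoothing constant, and the corpus-frequency format are illustrative assumptions; the paper's sampling procedure is more careful.

```python
# Minimal sketch of building one ROP pre-training pair from a document:
# sample two word sets from a smoothed unigram document LM and mark the set
# with the higher likelihood as the more representative one.
import numpy as np
from collections import Counter

def doc_language_model(doc_tokens, corpus_freqs, mu=100.0):
    """Dirichlet-smoothed unigram LM: P(w|d) proportional to tf(w,d) + mu*P(w|corpus)."""
    tf = Counter(doc_tokens)
    vocab = list(corpus_freqs)
    total_corpus = sum(corpus_freqs.values())
    probs = np.array([tf[w] + mu * corpus_freqs[w] / total_corpus for w in vocab])
    return vocab, probs / probs.sum()

def rop_pair(doc_tokens, corpus_freqs, set_size=5, rng=np.random.default_rng(0)):
    vocab, p = doc_language_model(doc_tokens, corpus_freqs)
    sets = [rng.choice(vocab, size=set_size, replace=False, p=p) for _ in range(2)]
    loglik = [sum(np.log(p[vocab.index(w)]) for w in s) for s in sets]
    pos, neg = (sets[0], sets[1]) if loglik[0] >= loglik[1] else (sets[1], sets[0])
    return list(pos), list(neg)   # (more representative, less representative)

# The Transformer is then pre-trained to prefer `pos` over `neg` for the
# document, jointly with the MLM objective, as described in the abstract.
```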
