
Since the work of Abbe and Rayleigh, the difficulty of super resolution, in which one wants to recover a collection of point sources from low-resolution microscopy measurements, has been thought to depend on whether the distance between the sources lies below or above a certain resolution or diffraction limit. Even though there have been a number of approaches to defining this limit more rigorously, there is still a gap between situations where the task is known to be hard and scenarios where the task is provably simpler. For instance, an interesting approach for the univariate case, based on the size of the Cramér-Rao lower bound, was introduced in a recent work by Ferreira Da Costa and Mitra. In this paper, we prove their conjecture on the transition point between good and poor tractability of super resolution and extend it to higher dimensions. Specifically, the bivariate statistical analysis allows us to link the findings based on the Cramér-Rao lower bound to the classical Rayleigh limit.
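
For readers unfamiliar with it, the Cramér-Rao lower bound underlying this analysis can be recalled in its generic form (a standard statement, not the paper's specific parametrization):

```latex
% Cramér-Rao lower bound, generic form: any unbiased estimator \hat{\theta} of the
% source parameters (positions and amplitudes) from data y with likelihood p(y;\theta)
% satisfies
\[
  \operatorname{Cov}\bigl(\hat{\theta}\bigr) \succeq I(\theta)^{-1},
  \qquad
  I(\theta)_{jk} = \mathbb{E}\!\left[
      \partial_{\theta_j} \log p(y;\theta)\,
      \partial_{\theta_k} \log p(y;\theta)
  \right].
\]
```

The question studied in the paper is, roughly, how the size of $I(\theta)^{-1}$ behaves as the separation between the sources crosses the conjectured transition point.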

Related content

Error bounds are derived for sampling and estimation using a discretization of an intrinsically defined Langevin diffusion with invariant measure $d\mu_\phi \propto e^{-\phi}\,\mathrm{dvol}_g$ on a compact Riemannian manifold. Two estimators of linear functionals of $\mu_\phi$ based on the discretized Markov process are considered: a time-averaging estimator based on a single trajectory and an ensemble-averaging estimator based on multiple independent trajectories. Imposing no restrictions beyond a nominal level of smoothness on $\phi$, first-order error bounds, in the discretization step size, on the bias and variances of both estimators are derived. The order of error matches the optimal rate in Euclidean and flat spaces and leads to a first-order bound on the distance between the invariant measure $\mu_\phi$ and a stationary measure of the discretized Markov process. The generality of the proof techniques, which exploit links between two partial differential equations and the semigroup of operators corresponding to the Langevin diffusion, renders them amenable to the study of a more general class of sampling algorithms related to the Langevin diffusion. Conditions for extending the analysis to the case of non-compact manifolds are discussed. Numerical illustrations with distributions, log-concave and otherwise, on manifolds of positive and negative curvature elucidate the derived bounds and demonstrate the practical utility of the sampling algorithm.
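
To make the two estimators concrete, here is a minimal sketch in the flat Euclidean special case (the paper's setting is a compact Riemannian manifold, which this toy example does not cover), with a standard Gaussian target so the answer is known:

```python
import numpy as np

def grad_phi(x):
    # Gradient of phi(x) = |x|^2 / 2, so the invariant measure is a standard Gaussian.
    return x

def ula_step(x, step, rng):
    # One Euler-Maruyama step of the Langevin diffusion dX = -grad(phi) dt + sqrt(2) dW.
    return x - step * grad_phi(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)

def time_average(f, x0, step, n_steps, rng):
    # Estimator 1: average f along a single discretized trajectory.
    x, total = np.array(x0, dtype=float), 0.0
    for _ in range(n_steps):
        x = ula_step(x, step, rng)
        total += f(x)
    return total / n_steps

def ensemble_average(f, x0, step, n_steps, n_traj, rng):
    # Estimator 2: average f over the terminal states of independent trajectories.
    vals = []
    for _ in range(n_traj):
        x = np.array(x0, dtype=float)
        for _ in range(n_steps):
            x = ula_step(x, step, rng)
        vals.append(f(x))
    return float(np.mean(vals))

rng = np.random.default_rng(0)
f = lambda x: float(np.sum(x**2))  # linear functional of mu: E[|X|^2] = 2 for a 2-d standard Gaussian
print(time_average(f, np.zeros(2), 0.01, 50_000, rng))          # roughly 2, up to O(step) bias and Monte Carlo error
print(ensemble_average(f, np.zeros(2), 0.01, 2_000, 500, rng))  # roughly 2 as well
```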

The complexity of a well-quasi-order (wqo) can be measured through three classical ordinal invariants: the width as a measure of antichains, the height as a measure of chains, and the maximal order type as a measure of bad sequences. This article considers the "finitary powerset" construction: the collection Pf(X) of finite subsets of a wqo X, ordered with the Hoare embedding relation, remains a wqo. The width, height and maximal order type of Pf(X) cannot be expressed as functions of the invariants of X, and we provide tight upper and lower bounds for the three invariants. The article also identifies an algebra of well-behaved wqos, which includes finitary powersets as well as other more classical constructions, and for which the ordinal invariants can be computed compositionally. This relies on a new ordinal invariant called the approximated maximal order type.
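
For reference, the Hoare (domination) ordering used on the finitary powerset can be written as follows; the notation is the standard one and may differ slightly from the article's.

```latex
% Hoare ordering on finite subsets of a wqo (X, \le_X):
\[
  S \le_{\mathrm{Hoare}} T
  \quad\Longleftrightarrow\quad
  \forall s \in S \;\exists t \in T :\ s \le_X t,
  \qquad S, T \in \mathcal{P}_f(X).
\]
```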

A sock ordering is a sequence of socks of various colors. A sock ordering is foot-sortable if the sequence of socks can be sorted by a stack so that socks of the same color form a contiguous block. The problem of deciding whether a given sock ordering is foot-sortable was first considered by Defant and Kravitz, who resolved the case of alignment-free 2-uniform sock orderings. In this paper, we resolve the problem in a more general setting, where each color appears in the sock ordering at most twice. A key component of the argument is a fast algorithm that determines the foot-sortability of a sock ordering of length $N$ in time $O(N\log N)$, which is of independent interest.
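
Since the decision problem is defined purely in terms of stack moves, a brute-force checker follows directly from the definition. The sketch below (plain Python, exponential time, emphatically not the paper's $O(N\log N)$ algorithm) tries every interleaving of push and pop moves and accepts if some sequence leaves each color contiguous in the output.

```python
from functools import lru_cache

def foot_sortable(socks):
    """Brute-force test from the definition: at each step either push the next sock
    onto the stack or pop the stack to the output; accept if some move sequence
    outputs every sock with each color forming a contiguous block."""
    socks = tuple(socks)
    n = len(socks)

    @lru_cache(maxsize=None)
    def search(i, stack, open_color, closed):
        # i: next input index; stack: tuple of colors (top = last entry);
        # open_color: color of the output block currently being written;
        # closed: frozenset of colors whose output blocks have already ended.
        if i == n and not stack:
            return True
        # Move 1: push the next incoming sock onto the stack.
        if i < n and search(i + 1, stack + (socks[i],), open_color, closed):
            return True
        # Move 2: pop the top of the stack to the output, if it keeps colors contiguous.
        if stack:
            c = stack[-1]
            if c == open_color:
                return search(i, stack[:-1], open_color, closed)
            if c not in closed:
                newly_closed = closed if open_color is None else closed | {open_color}
                return search(i, stack[:-1], c, newly_closed)
        return False

    return search(0, (), None, frozenset())

print(foot_sortable("AABB"))  # True: already sorted
print(foot_sortable("ABAB"))  # True: the output A A B B is reachable
```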

Query-focused summarization (QFS) aims to provide a summary of a single document or multiple documents that satisfies the information needs of a given query. It is useful for various real-world applications, such as abstractive snippet generation or the more recent retrieval-augmented generation (RAG). A prototypical QFS pipeline consists of a retriever (sparse or dense retrieval) and a generator (usually a large language model). However, applying large language models (LLMs) potentially leads to hallucinations, especially when the evidence contradicts the prior belief of the LLM. There has been growing interest in developing new decoding methods to improve generation quality and reduce hallucination. In this work, we conduct a large-scale reproducibility study of one recently proposed decoding method -- Context-aware Decoding (CAD). In addition to replicating CAD's experiments on news summarization datasets, we include experiments on QFS datasets and conduct a more rigorous analysis of computational complexity and hyperparameter sensitivity. Experiments with eight different language models show that, performance-wise, CAD improves QFS quality by (1) reducing factuality errors/hallucinations while (2) mostly retaining the match of lexical patterns, as measured by ROUGE scores, at the cost of increased inference-time FLOPs and reduced decoding speed. The code implementation, based on the Huggingface library, is made available at //github.com/zhichaoxu-shufe/context-aware-decoding-qfs
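
For context, Context-aware Decoding adjusts the next-token distribution by contrasting logits computed with and without the retrieved evidence, roughly p(y | c, x)^(1+α) / p(y | x)^α. Below is a minimal sketch of that adjustment on toy logits; it is not the full generation loop used in the reproduced experiments.

```python
import torch
import torch.nn.functional as F

def cad_adjusted_logits(logits_with_context, logits_without_context, alpha=0.5):
    # Contrastive adjustment: (1 + alpha) * logit(y | c, x) - alpha * logit(y | x),
    # which after softmax is proportional to p(y | c, x)^(1 + alpha) / p(y | x)^alpha.
    return (1.0 + alpha) * logits_with_context - alpha * logits_without_context

# Toy example over a 10-token vocabulary; in practice these logits come from two forward
# passes of the same language model, with and without the retrieved context in the prompt.
vocab_size = 10
with_ctx = torch.randn(vocab_size)
without_ctx = torch.randn(vocab_size)
probs = F.softmax(cad_adjusted_logits(with_ctx, without_ctx, alpha=0.5), dim=-1)
next_token = int(torch.argmax(probs))
```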

Time series forecasting is a challenging task due to the existence of complex and dynamic temporal dependencies, which can lead to incorrect predictions even by the best forecasting models. Using more training data is one way to improve accuracy, but this resource is often limited. In contrast, we build on successful denoising approaches for image generation and advocate an end-to-end forecasting and denoising paradigm. We propose an end-to-end forecast-blur-denoise framework that encourages a division of labor between the forecasting and denoising models. The initial forecasting model is directed to focus on accurately predicting the coarse-grained behavior, while the denoiser model focuses on capturing the fine-grained behavior, which is locally blurred by integrating a Gaussian process model. All three parts interact to achieve the best end-to-end performance. Our extensive experiments demonstrate that the proposed approach improves the forecasting accuracy of several state-of-the-art forecasting models, as well as several other denoising approaches.
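
As an illustrative sketch of the division of labor (my reading of the abstract, not the authors' implementation), the forecaster can be trained against a smoothed version of the series while a separate denoiser is responsible for the fine-grained residual; a plain Gaussian blur stands in here for the Gaussian-process-based blurring.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def blur(series, sigma=2.0):
    # Gaussian smoothing used here as a stand-in for the Gaussian-process blur.
    return gaussian_filter1d(series, sigma=sigma)

# Toy series: smooth trend plus high-frequency detail.
t = np.linspace(0, 10, 500)
y = np.sin(t) + 0.2 * np.sin(8 * t)

coarse_target = blur(y)             # what the forecasting model is asked to predict
fine_residual = y - coarse_target   # what the denoising model is asked to recover

# An end-to-end version would train forecaster(history) against coarse_target and
# denoiser(coarse forecast) against fine_residual, with gradients flowing through both.
```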

May's Theorem [K. O. May, Econometrica 20 (1952) 680-684] characterizes majority voting on two alternatives as the unique preferential voting method satisfying several simple axioms. Here we show that by adding some desirable axioms to May's axioms, we can uniquely determine how to vote on three alternatives. In particular, we add two axioms stating that the voting method should mitigate spoiler effects and avoid the so-called strong no show paradox. We prove a theorem stating that any preferential voting method satisfying our enlarged set of axioms, which includes some weak homogeneity and preservation axioms, agrees with Minimax voting in all three-alternative elections, except perhaps in some improbable knife-edged elections in which ties may arise and be broken in different ways.
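
For concreteness, here is a small sketch of Minimax voting, the method the theorem singles out: every candidate is scored by its worst pairwise defeat (measured here by margins; other Minimax variants use winning votes), and a candidate whose worst defeat is smallest wins.

```python
def minimax_winner(ballots, candidates=("a", "b", "c")):
    # ballots: list of (ranking, count) pairs, where ranking is a tuple of candidates
    # ordered from most to least preferred.
    margin = {(x, y): 0 for x in candidates for y in candidates if x != y}
    for ranking, count in ballots:
        for i, x in enumerate(ranking):
            for y in ranking[i + 1:]:
                margin[(x, y)] += count
                margin[(y, x)] -= count
    # Each candidate's worst pairwise defeat, measured by margins; ties are not broken here.
    worst_defeat = {x: max(margin[(y, x)] for y in candidates if y != x) for x in candidates}
    best = min(worst_defeat.values())
    return [x for x in candidates if worst_defeat[x] == best]

# Majority-cycle profile: 4 voters a>b>c, 3 voters b>c>a, 2 voters c>a>b.
ballots = [(("a", "b", "c"), 4), (("b", "c", "a"), 3), (("c", "a", "b"), 2)]
print(minimax_winner(ballots))   # ['a']: a's worst defeat (to c, by margin 1) is the smallest
```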

The new era of technology has brought us to the point where it is convenient for people to share their opinions over an abundance of platforms. These platforms have a provision for the users to express themselves in multiple forms of representations, including text, images, videos, and audio. This, however, makes it difficult for users to obtain all the key information about a topic, making the task of automatic multi-modal summarization (MMS) essential. In this paper, we present a comprehensive survey of the existing research in the area of MMS.

Recent work on pre-training Transformers with self-supervised objectives on large text corpora has shown great success when the models are fine-tuned on downstream NLP tasks, including text summarization. However, pre-training objectives tailored for abstractive text summarization have not been explored. Furthermore, there is a lack of systematic evaluation across diverse domains. In this work, we propose pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective. In PEGASUS, important sentences are removed/masked from an input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. We evaluated our best PEGASUS model on 12 downstream summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills. Experiments demonstrate that it achieves state-of-the-art performance on all 12 downstream datasets measured by ROUGE scores. Our model also shows surprising performance on low-resource summarization, surpassing previous state-of-the-art results on 6 datasets with only 1000 examples. Finally, we validated our results using human evaluation and show that our model's summaries achieve human performance on multiple datasets.
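
A brief usage sketch with the publicly released checkpoints via the standard Hugging Face Transformers API (the XSum fine-tune is used here purely as an example):

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-xsum"  # one of the released fine-tuned PEGASUS checkpoints
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

document = ("PEGASUS is pre-trained by masking whole sentences that are important to the "
            "document and asking the model to regenerate them, which resembles producing "
            "an extractive summary as the pre-training target.")
inputs = tokenizer(document, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```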

We propose a novel single-shot object detection network named Detection with Enriched Semantics (DES). Our motivation is to enrich the semantics of object detection features within a typical deep detector by means of a semantic segmentation branch and a global activation module. The segmentation branch is supervised by weak segmentation ground truth, i.e., no extra annotation is required. In conjunction with that, we employ a global activation module which learns the relationship between channels and object classes in a self-supervised manner. Comprehensive experimental results on both the PASCAL VOC and MS COCO detection datasets demonstrate the effectiveness of the proposed method. In particular, with a VGG16-based DES, we achieve an mAP of 81.7 on VOC2007 test and an mAP of 32.8 on COCO test-dev with an inference speed of 31.5 milliseconds per image on a Titan Xp GPU. With a lower-resolution version, we achieve an mAP of 79.7 on VOC2007 with an inference speed of 13.0 milliseconds per image.
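
The sketch below is a loose PyTorch approximation of a global activation block as described (global pooling plus a learned channel-to-class mapping used to re-weight the detection features); the exact architecture in the paper may differ.

```python
import torch
import torch.nn as nn

class GlobalActivation(nn.Module):
    def __init__(self, channels, num_classes, hidden=256):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # global spatial pooling
        self.fc = nn.Sequential(                       # pooled channels -> image-level class scores
            nn.Linear(channels, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_classes),
        )
        self.reweight = nn.Sequential(                 # channel-wise gates used to enrich features
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, feats):
        pooled = self.pool(feats).flatten(1)           # (N, C)
        class_logits = self.fc(pooled)                 # auxiliary, self-supervised classification head
        gates = self.reweight(pooled).unsqueeze(-1).unsqueeze(-1)
        return feats * gates, class_logits             # re-weighted features + auxiliary logits

x = torch.randn(2, 512, 38, 38)                        # e.g. a conv4_3-sized feature map
module = GlobalActivation(channels=512, num_classes=20)
enriched, logits = module(x)
```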

This paper reports the Deep LOGISMOS approach to 3D tumor segmentation, which incorporates boundary information derived from deep contextual learning into LOGISMOS - layered optimal graph image segmentation of multiple objects and surfaces. Accurate and reliable tumor segmentation is essential to tumor growth analysis and treatment selection. A fully convolutional network (FCN), UNet, is first trained using three adjacent 2D patches centered at the tumor, providing a contextual UNet segmentation and probability map for each 2D patch. The UNet segmentation is then refined by a Gaussian Mixture Model (GMM) and morphological operations. The refined UNet segmentation is used to provide the initial shape boundary to build a segmentation graph. The cost for each node of the graph is determined by the UNet probability maps. Finally, a max-flow algorithm is employed to find the globally optimal solution, thus obtaining the final segmentation. For evaluation, we applied the method to pancreatic tumor segmentation on a dataset of 51 CT scans, among which 30 scans were used for training and 21 for testing. With Deep LOGISMOS, the Dice Similarity Coefficient (DSC) and Relative Volume Difference (RVD) reached 83.2±7.8% and 18.6±17.4%, respectively, both significantly improved (p<0.05) compared with contextual UNet and/or LOGISMOS alone.
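
As a toy illustration of the cost-assignment step (not the authors' code, and collapsed to a single graph column), node costs can be derived as 1 - p from the UNet probability map; the full layered graph and max-flow computation then select the globally optimal boundary over all columns simultaneously.

```python
import numpy as np

def column_costs(prob_along_column):
    # Low cost where the network is confident the boundary passes through the node.
    return 1.0 - np.asarray(prob_along_column, dtype=float)

# Hypothetical probabilities that the tumor boundary crosses each of 8 positions
# along one graph column, taken from a UNet probability map.
p = [0.05, 0.10, 0.20, 0.85, 0.90, 0.40, 0.15, 0.05]
costs = column_costs(p)
best_position = int(np.argmin(costs))   # the graph search would favor this node on the column
print(best_position, costs[best_position])
```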
