
Wildfire detection using satellite images is a widely studied task in remote sensing, with many applications to fire delineation and mapping. Recently, deep learning methods have become a scalable solution for automating this task, especially unsupervised methods, which require no labeled training data. This is particularly important in the context of emergency risk monitoring, where fast and effective detection is needed, generally based on high-resolution satellite data. Among various approaches, Anomaly Detection (AD) appears especially promising, given its broad applications in computer vision, medical imaging, and remote sensing. In this work, we build upon the framework of the Vector Quantized Variational Autoencoder (VQ-VAE), a popular reconstruction-based AD method with discrete latent spaces, to perform unsupervised burnt area extraction. We integrate VQ-VAE into an end-to-end framework with an intensive post-processing step using dedicated vegetation, water, and brightness indices. Our experiments on high-resolution SPOT-6/7 images yield promising results and demonstrate the technique's potential for future research on unsupervised burnt area extraction.
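
To make the post-processing step concrete, the sketch below refines a VQ-VAE reconstruction-error map with standard spectral indices. The band ordering, thresholds, and the specific NDVI/NDWI/brightness formulas are illustrative assumptions, not the paper's exact pipeline.

```python
# Hypothetical post-processing sketch: threshold the VQ-VAE reconstruction
# error, then suppress vegetation, water, and bright surfaces with standard
# spectral indices. Band order (blue, green, red, nir) and all thresholds
# are assumptions for illustration.
import numpy as np

def refine_burnt_mask(image, recon, err_thresh=0.05,
                      ndvi_max=0.2, ndwi_max=0.0):
    """image, recon: (H, W, 4) reflectance arrays. Returns a boolean mask."""
    eps = 1e-8
    _, g, r, nir = (image[..., i] for i in range(4))
    ndvi = (nir - r) / (nir + r + eps)      # high over healthy vegetation
    ndwi = (g - nir) / (g + nir + eps)      # high over water: exclude
    brightness = image.mean(axis=-1)        # exclude bright soil/clouds
    err = np.mean((image - recon) ** 2, axis=-1)   # per-pixel anomaly map
    return ((err > err_thresh) & (ndvi < ndvi_max) & (ndwi < ndwi_max)
            & (brightness < np.quantile(brightness, 0.9)))
```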

Related Content

The development of successful artificial intelligence models for chest X-ray analysis relies on large, diverse datasets with high-quality annotations. While several databases of chest X-ray images have been released, most include disease diagnosis labels but lack detailed pixel-level anatomical segmentation labels. To address this gap, we introduce an extensive chest X-ray multi-center segmentation dataset with uniform and fine-grained anatomical annotations for images coming from six well-known publicly available databases: CANDID-PTX, ChestX-ray8, Chexpert, MIMIC-CXR-JPG, Padchest, and VinDr-CXR, resulting in 676,803 segmentation masks. Our methodology utilizes the HybridGNet model to ensure consistent and high-quality segmentations across all datasets. Rigorous validation of the resulting masks, including expert physician evaluation and automatic quality control, was conducted. Additionally, we provide individualized quality indices per mask and an overall quality estimation per dataset. This dataset serves as a valuable resource for the broader scientific community, streamlining the development and assessment of innovative methodologies in chest X-ray analysis. The CheXmask dataset is publicly available at: //physionet.org/content/chexmask-cxr-segmentation-data/
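
As a usage illustration only, the snippet below filters masks by the per-mask quality index before training. The column names and CSV layout are assumptions, not the released schema; consult the PhysioNet documentation for the actual format.

```python
# Hypothetical loading sketch for a CheXmask-style table. The column names
# ("image_id", "quality") are assumed for illustration only.
import pandas as pd

def load_high_quality_masks(csv_path, min_quality=0.9):
    """Keep only masks whose quality index clears a chosen threshold."""
    df = pd.read_csv(csv_path)
    keep = df[df["quality"] >= min_quality]
    print(f"kept {len(keep)} of {len(df)} masks")
    return keep
```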

Recurrent neural networks (RNNs) have yielded promising results both for recognizing objects in challenging conditions and for modeling aspects of primate vision. However, the representational dynamics of recurrent computations remain poorly understood, especially in large-scale visual models. Here, we studied such dynamics in RNNs trained for object classification on MiniEcoset, a novel subset of ecoset. We report two main insights. First, during inference, representations continued to evolve after correct classification, suggesting a lack of any notion of being ``done with classification''. Second, using ``readout zones'' to characterize the activation trajectories, we observed that misclassified representations exhibit activation patterns with lower L2 norm and are positioned more peripherally in the readout zones. Such arrangements help the misclassified representations move into the correct zones as time progresses. Our findings generalize to networks with lateral and top-down connections, including both additive and multiplicative interactions with the bottom-up sweep. The results therefore contribute to a general understanding of RNN dynamics in naturalistic tasks. We hope that the analysis framework will aid future investigations of other types of RNNs, including the understanding of representational dynamics in primate vision.
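
A minimal analysis sketch in the spirit of the paper (not its code): define the readout zone as the argmax of a linear readout, then compare the L2 norms of representations over time for correctly versus incorrectly classified inputs.

```python
# Illustrative analysis sketch, assuming hidden states h_t of shape
# (time, batch, hidden) and a linear readout (W, b). Not the paper's code.
import numpy as np

def readout_zone(h, W, b):
    """Class region each hidden state falls in under the linear readout."""
    return (h @ W.T + b).argmax(axis=-1)

def norms_by_correctness(h_t, W, b, labels):
    """Mean L2 norm per time step, split by final-readout correctness."""
    correct = readout_zone(h_t[-1], W, b) == labels
    norms = np.linalg.norm(h_t, axis=-1)            # (time, batch)
    return norms[:, correct].mean(axis=1), norms[:, ~correct].mean(axis=1)
```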

We consider multivariate signals spanned by the integer shifts of a set of generating functions with distinct frequency profiles, and the problem of reconstructing them from samples taken on a random periodic set. We show that such a sampling strategy succeeds with high probability provided that the density of the sampling pattern exceeds the number of frequency profiles by a logarithmic factor. The signal model includes bandlimited functions with multi-band spectra. While in this well-studied setting delicate constructions provide sampling strategies that meet the information-theoretic benchmark of Shannon and Landau, the sampling pattern that we consider provides, at the price of a logarithmic oversampling factor, a simple alternative that is accompanied by favorable a priori stability margins (snug frames). More generally, we also treat bandlimited functions with arbitrary compact spectra, together with different measures of their complexity and rates of approximation by integer tiles. At the technical level, we elaborate on recent work on relevant sampling, with the key difference that the reconstruction guarantees we provide hold uniformly for all signals, rather than for a subset of well-concentrated ones. This is achieved by methods of concentration of measure formulated on the Zak domain.
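
For orientation, a schematic of the setting (a paraphrase, not the paper's precise statement): signals lie in the span of integer shifts of $n$ generators $g_j$ with distinct frequency profiles, and the guarantee is a uniform sampling inequality with explicit frame bounds, holding with high probability once the density of the random periodic set $\Lambda$ exceeds $n$ by a logarithmic factor.

```latex
% Schematic signal model and sampling inequality (paraphrase).
f(x) = \sum_{j=1}^{n} \sum_{k \in \mathbb{Z}^d} c_{j,k}\, g_j(x - k),
\qquad
A \|f\|_2^2 \;\le\; \sum_{\lambda \in \Lambda} |f(\lambda)|^2 \;\le\; B \|f\|_2^2 .
```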

In this work, we consider the problem of building distribution-free prediction intervals with finite-sample conditional coverage guarantees. Conformal prediction (CP) is an increasingly popular framework for building prediction intervals with distribution-free guarantees, but these guarantees only ensure marginal coverage: the probability of coverage is averaged over a random draw of both the training and test data, meaning that there might be substantial undercoverage within certain subpopulations. Ideally, we would instead want local coverage guarantees that hold for each possible value of the test point's features. While the impossibility of achieving pointwise local coverage is well established in the literature, many variants of the conformal prediction algorithm show favorable local coverage properties empirically. Relaxing the definition of local coverage allows for a theoretical understanding of this empirical phenomenon. We aim to bridge this gap between theoretical validation and empirical performance by proving achievable and interpretable guarantees for a relaxed notion of local coverage. Building on the localized CP method of Guan (2023) and the weighted CP framework of Tibshirani et al. (2019), we propose a new method, randomly-localized conformal prediction (RLCP), which returns prediction intervals that are not only marginally valid but also satisfy a relaxed local coverage guarantee and guarantees under covariate shift. Through a series of simulations and real-data experiments, we validate these coverage guarantees of RLCP and compare it with other local conformal prediction methods.
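
A minimal sketch of the randomly-localized idea, assuming split conformal with residual scores and a Gaussian localization kernel (the bandwidth and kernel choice are illustrative, not the paper's prescriptions): the kernel is centred at a random perturbation of the test point, and a weighted quantile of the calibration residuals sets the interval width.

```python
# Sketch of a randomly-localized split-conformal interval in the spirit of
# RLCP. The Gaussian kernel and bandwidth h are assumptions for illustration.
import numpy as np

def weighted_quantile(scores, weights, q):
    """Smallest score whose weighted CDF reaches q."""
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    cdf = np.cumsum(w) / w.sum()
    return s[np.searchsorted(cdf, q)]

def rlcp_interval(pred, x_test, X_cal, resid_cal, h=1.0, alpha=0.1, seed=None):
    """pred: point prediction at x_test; X_cal: (n, d); resid_cal: (n,)."""
    rng = np.random.default_rng(seed)
    x_tilde = x_test + h * rng.standard_normal(x_test.shape)  # random centre
    w = np.exp(-((X_cal - x_tilde) ** 2).sum(axis=1) / (2 * h**2))
    w_test = np.exp(-((x_test - x_tilde) ** 2).sum() / (2 * h**2))
    scores = np.append(resid_cal, np.inf)     # conformal augmentation
    weights = np.append(w, w_test)
    qhat = weighted_quantile(scores, weights, 1 - alpha)
    return pred - qhat, pred + qhat
```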

Suitable discretizations through tensor product formulas of popular multidimensional operators (diffusion--advection, for instance) lead to matrices with $d$-dimensional Kronecker sum structure. For evolutionary PDEs containing such operators and integrated in time with exponential integrators, it is of paramount importance to efficiently approximate the actions of $\varphi$-functions of this kind of matrices. In this work, we show how to produce directional split approximations of third order with respect to the time step size. They conveniently employ tensor-matrix products (realized with high-performance level-3 BLAS) and allow for the effective practical use of exponential integrators up to order three. The approach has been successfully tested against state-of-the-art techniques on two well-known physical models, namely FitzHugh--Nagumo and Schnakenberg.
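
The computational building block can be illustrated for $d=2$ and the plain exponential: since $\exp(A \oplus B) = \exp(A) \otimes \exp(B)$, its action on a vectorized grid function reduces to two tensor-matrix products, with no Kronecker matrix ever formed. The third-order directional splittings for the $\varphi$-functions themselves are the paper's contribution and are not reproduced here.

```python
# Sketch of the exact identity exp(A (+) B) vec(V) = vec(exp(B) V exp(A)^T),
# realized with dense matrix products (level-3 BLAS). V has shape
# (n_B, n_A) so that vec(V) matches A (x) I + I (x) B.
import numpy as np
from scipy.linalg import expm

def expm_kronsum_action(A, B, V, tau):
    """Return exp(tau * (A (+) B)) applied to vec(V), reshaped like V."""
    return expm(tau * B) @ V @ expm(tau * A).T
```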

Projected distributions have proved to be useful in the study of circular and directional data. Although any multivariate distribution can be used to produce a projected model, these distributions are typically parametric. In this article, we consider a multivariate P\'olya tree on $R^k$ and project it to the unit hypersphere $S^k$ to define a new Bayesian nonparametric model for directional data. We study the properties of the proposed model and, in particular, concentrate on the implied conditional distributions of some directions given the others to define a directional-directional regression model. We also define a multivariate linear regression model with P\'olya tree errors and project it to define a linear-directional regression model. We obtain the posterior characterisation of all models and show their performance with simulated and real datasets.
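
The projection step itself is elementary, as the toy sketch below shows; a multivariate normal stands in for the P\'olya tree draw, which the sketch does not implement.

```python
# Projection step only: map samples on R^k to directions on the unit
# hypersphere by normalisation. The multivariate normal is a stand-in
# for a Polya tree draw, not the paper's model.
import numpy as np

def project_to_sphere(samples):
    """Map points in R^k to unit directions."""
    return samples / np.linalg.norm(samples, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
y = rng.multivariate_normal(mean=[2.0, 1.0], cov=np.eye(2), size=1000)
directions = project_to_sphere(y)                 # points on the unit circle
angles = np.arctan2(directions[:, 1], directions[:, 0])
```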

In this paper, we tackle the problem of video alignment: matching the frames of a pair of videos containing similar actions. The main challenge in video alignment is establishing accurate correspondences despite differences in execution and appearance between the two videos. We introduce an unsupervised alignment method that uses global and local features of the frames. In particular, we construct effective features for each video frame using three machine vision tools: person detection, pose estimation, and a VGG network. The features are then processed and combined to construct a multidimensional time series that represents the video. The resulting time series are used to align videos of the same actions using a novel version of dynamic time warping named Diagonalized Dynamic Time Warping (DDTW). The main advantage of our approach is that no training is required, which makes it applicable to any new type of action without the need to collect training samples. For evaluation, we consider video synchronization and phase classification tasks on the Penn Action dataset. In addition, for an effective evaluation of the video synchronization task, we present a new metric called Enclosed Area Error (EAE). The results show that our method outperforms previous state-of-the-art methods, such as TCC, as well as other self-supervised and weakly supervised methods.
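
One plausible reading of a diagonally constrained DTW is standard DTW restricted to a band around the main diagonal (Sakoe-Chiba style); the paper's exact DDTW variant may differ, and the band width is an assumption.

```python
# Sketch of diagonally constrained DTW over multivariate frame features.
# The band width w is an illustrative assumption.
import numpy as np

def diag_dtw(X, Y, w=10):
    """X: (n, d), Y: (m, d) feature time series. Returns the alignment cost."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        lo = max(1, int(i * m / n) - w)      # band around the diagonal
        hi = min(m, int(i * m / n) + w)
        for j in range(lo, hi + 1):
            cost = np.linalg.norm(X[i - 1] - Y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```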

Private synthetic data sharing is preferable to releasing summary statistics, as it preserves the distribution and nuances of the original data. State-of-the-art methods adopt a select-measure-generate paradigm, but measuring large-domain marginals still incurs substantial error, and allocating the privacy budget iteratively remains difficult. To address these issues, our method employs a partition-based approach that effectively reduces errors and improves the quality of synthetic data, even with a limited privacy budget. Results from our experiments demonstrate the superiority of our method over existing approaches. The synthetic data produced using our approach exhibits improved quality and utility, making it a preferable choice for private synthetic data sharing.
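
For context, the "measure" step of a select-measure-generate pipeline typically looks like the sketch below: a marginal released under $\epsilon$-differential privacy with Laplace noise. The budget allocation across marginals, which the paper targets, is left as an explicit argument here.

```python
# Sketch of measuring one k-way marginal under epsilon-DP. Sensitivity is 1
# because adding or removing one record changes exactly one cell by one.
import numpy as np
import pandas as pd

def noisy_marginal(df, cols, epsilon):
    """Counts over the joint domain of `cols`, plus Laplace(1/eps) noise."""
    counts = df.groupby(list(cols)).size()
    noise = np.random.laplace(scale=1.0 / epsilon, size=len(counts))
    return (counts + noise).clip(lower=0)
```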

Recovering images corrupted by multiplicative noise is a well-known challenging task. Motivated by the success of multiscale hierarchical decomposition methods (MHDM) in image processing, we adapt a variety of both classical and new models for multiplicative noise removal to the MHDM form. Building on previous work, we further present a tight and a refined version of the corresponding multiplicative MHDM. We discuss the existence and uniqueness of solutions for the proposed models and, additionally, provide convergence properties. Moreover, we present a discrepancy-principle stopping criterion that prevents recovering excess noise in the multiscale reconstruction. Through comprehensive numerical experiments and comparisons, we qualitatively and quantitatively evaluate the validity of all proposed models for denoising and deblurring images degraded by multiplicative noise. By construction, these multiplicative multiscale hierarchical decomposition methods have the added benefit of recovering many scales of an image, which can provide features of interest beyond image denoising.
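
For orientation, a schematic of the general MHDM iteration adapted multiplicatively (a paraphrase; the fidelity terms $\mathcal{D}$, regularizers $J$, and parameter schedule vary across the paper's models): starting from $v_0 = f$, each scale solves a variational problem on the current residual with a geometrically increasing parameter.

```latex
% Schematic multiplicative MHDM iteration (paraphrase), k = 1, ..., K:
u_k \in \arg\min_u \; \lambda_k\, \mathcal{D}(v_{k-1}, u) + J(u),
\qquad
v_k = \frac{v_{k-1}}{u_k},
\qquad
\lambda_k = 2^{k} \lambda_0,
\qquad
f \approx \prod_{k=1}^{K} u_k .
```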

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods, with several comments made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function to encourage similar images to be projected close together. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfying performance of SRH.
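
Since the abstract does not specify the shadow mechanism in enough detail to reproduce, the sketch below shows only a standard pairwise similarity-preserving hashing loss of the kind described, with a quantization penalty; all hyperparameters are assumptions.

```python
# Baseline pairwise hashing loss: pull same-label codes together, push
# different-label codes at least `margin` apart, and penalise non-binary
# outputs. The "shadow" mechanism of SRH is not reproduced here.
import torch

def pairwise_hash_loss(codes, labels, margin=2.0):
    """codes: (batch, bits) real-valued CNN outputs; labels: (batch,)."""
    d = torch.cdist(codes, codes)                        # pairwise distances
    same = (labels[:, None] == labels[None, :]).float()
    pull = same * d.pow(2)
    push = (1 - same) * torch.clamp(margin - d, min=0).pow(2)
    quant = (codes.abs() - 1).pow(2).mean()              # encourage +/-1 codes
    return (pull + push).mean() + 0.1 * quant
```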
