
Similarity index is an important scientific tool frequently used to determine whether different pairs of entities are similar with respect to some prefixed characteristics. Standard measures of similarity include the Jaccard index, the Sørensen-Dice index, and Simpson's index. Recently, a better index ($\hat{\alpha}$) for co-occurrence and/or similarity has been developed; this measure outperforms the standard ones and gives theoretically supported, reasonable predictions. However, the measure $\hat{\alpha}$ is not data dependent. In this article we propose a new measure of similarity that depends strongly on the data before randomness is introduced in prevalence. We then propose a new method of randomization that changes the whole pattern of results: before randomization our measure behaves like the Jaccard index, while after randomization it is close to $\hat{\alpha}$. We consider the popular ecological dataset from the Tuscan Archipelago, Italy, and compare the performance of the proposed index with other measures. Since our proposed index is data dependent, it has some interesting properties, which we illustrate in this article through numerical studies.
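
For concreteness, the three standard indices mentioned above have simple closed forms for presence/absence data. Below is a minimal Python sketch of these baselines with a toy species example; the proposed data-dependent index and $\hat{\alpha}$ are specific to the article and are not reproduced here.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard index: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def sorensen_dice(a: set, b: set) -> float:
    """Sørensen-Dice index: 2|A ∩ B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

def simpson(a: set, b: set) -> float:
    """Simpson's index: |A ∩ B| / min(|A|, |B|)."""
    return len(a & b) / min(len(a), len(b)) if (a and b) else 0.0

# Toy example: species presence on two islands
island1 = {"gull", "falcon", "sparrow", "swift"}
island2 = {"gull", "sparrow", "heron"}
print(jaccard(island1, island2))        # 0.4
print(sorensen_dice(island1, island2))  # ~0.571
print(simpson(island1, island2))        # ~0.667
```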

Related Content

Many industrial and engineering processes monitored as time series have smooth trends that indicate normal behavior and occasional anomalous patterns that can indicate a problem. This kind of behavior can be modeled by a smooth trend, such as a spline or Gaussian process, plus a disruption based on a sparser representation. Our approach is to expand the process signal into two sets of basis functions: one set uses $L_2$ penalties on the coefficients and the other set uses $L_1$ penalties to control sparsity. From a frequentist perspective, this results in a hybrid smoother that combines cubic smoothing splines and the LASSO; as a Bayesian hierarchical model (BHM), it is equivalent to priors giving a Gaussian process for the trend and a Laplace distribution for the anomaly coefficients. For the hybrid smoother we propose two new ways of determining the penalty parameters based on effective degrees of freedom, and contrast this with the BHM, which uses loosely informative inverse gamma priors. Several reformulations are used to make sampling the BHM posterior more efficient, including some novel features in orthogonalizing and regularizing the model basis functions. This methodology is motivated by a substantive application: monitoring the water treatment process for the Denver metropolitan area. We also test the methods with a Monte Carlo study designed around the kind of anomalies expected in this application. Both the hybrid smoother and the full BHM give comparable results, with small false positive and false negative rates. Besides being successful in the water treatment application, this work can easily be extended to other Gaussian process models and other features that represent process disruptions.
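
The frequentist version of this decomposition can be sketched in a few lines: a discrete roughness ($L_2$) penalty on second differences stands in for the cubic smoothing spline, and soft thresholding implements the LASSO ($L_1$) step, alternating block by block. This is an illustrative simplification under assumed penalty values, not the authors' implementation; the function name and all parameters are hypothetical.

```python
import numpy as np

def hybrid_smoother(y, lam2=10.0, lam1=1.0, n_iter=50):
    """Toy hybrid smoother: smooth trend via an L2 roughness penalty on
    second differences (a discrete analogue of a cubic smoothing spline)
    plus a sparse anomaly component via an L1 penalty, fit by
    block-coordinate descent."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)        # second-difference operator
    A = np.eye(n) + lam2 * (D.T @ D)           # normal equations for the trend
    anomaly = np.zeros(n)
    for _ in range(n_iter):
        trend = np.linalg.solve(A, y - anomaly)            # ridge/spline step
        resid = y - trend
        anomaly = np.sign(resid) * np.maximum(np.abs(resid) - lam1, 0.0)  # soft threshold
    return trend, anomaly

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(200)
y[120:125] += 2.0                              # injected anomalous segment
trend, anomaly = hybrid_smoother(y)
print(np.flatnonzero(np.abs(anomaly) > 0.5))   # should roughly recover indices 120-124
```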

While individual robots are becoming increasingly capable, with new sensors and actuators, the complexity of expected missions has increased considerably in comparison. To cope with this complexity, heterogeneous teams of robots have become a significant research interest in recent years. Making effective use of the robots and their unique skills in a team is challenging. Dynamic runtime conditions often make static task allocations infeasible, requiring a dynamic, capability-aware allocation of tasks to team members. To this end, we propose and implement a system that allows a user to specify missions using Behavior Trees (BTs), which can then, at runtime, be dynamically allocated to the current robot team. The system allows an individual robot's capabilities to be statically modeled within our ros_bt_py BT framework. It offers a runtime auction system to dynamically allocate tasks to the most capable robot in the current team. The system leverages utility values and pre-conditions to ensure that the allocation improves the overall mission execution quality while preventing faulty assignments. To evaluate the system, we simulated a find-and-decontaminate mission with a team of three heterogeneous robots and analyzed the utilization and overall mission times as metrics. Our results show that our system can improve the overall effectiveness of a team while allowing for intuitive mission specification and flexibility in the team composition.
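
A minimal sketch of such a capability-aware auction is given below. The Robot class, task names, and utility numbers are hypothetical stand-ins, not the ros_bt_py API; the point is that pre-conditions filter out incapable bidders before the highest utility bid wins.

```python
from dataclasses import dataclass, field

@dataclass
class Robot:
    name: str
    capabilities: set                                 # statically modeled skills
    utility: dict = field(default_factory=dict)       # task name -> utility bid

def allocate(task, required_caps, robots):
    """Single-round auction: only robots whose capabilities satisfy the
    task's pre-conditions may bid; the highest utility bid wins."""
    bidders = [r for r in robots if required_caps <= r.capabilities]
    if not bidders:
        return None          # no capable robot: faulty assignment prevented
    return max(bidders, key=lambda r: r.utility.get(task, 0.0))

team = [
    Robot("ugv", {"drive", "decontaminate"}, {"decontaminate": 0.9}),
    Robot("uav", {"fly", "camera"}, {"find": 0.8}),
    Robot("ugv2", {"drive", "camera"}, {"find": 0.6}),
]
print(allocate("find", {"camera"}, team).name)                  # -> uav
print(allocate("decontaminate", {"decontaminate"}, team).name)  # -> ugv
```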

Dimensionality reduction methods, such as principal component analysis (PCA) and factor analysis, are central to many problems in data science. There are, however, serious and well-understood challenges to finding robust low-dimensional approximations for data with significant heteroskedastic noise. This paper introduces a relaxed version of Minimum Trace Factor Analysis (MTFA), a convex optimization method with roots dating back to the work of Ledermann in 1940. This relaxation is particularly effective at not overfitting to heteroskedastic perturbations and addresses the commonly cited Heywood cases in factor analysis and the recently identified "curse of ill-conditioning" for existing spectral methods. We provide theoretical guarantees on the accuracy of the resulting low-rank subspace and on the convergence rate of the proposed algorithm for computing it. We develop a number of interesting connections to existing methods, including HeteroPCA, Lasso, and Soft-Impute, to fill an important gap in the already large literature on low-rank matrix estimation. Numerical experiments benchmark our results against several recent proposals for dealing with heteroskedastic noise.
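
To make the decomposition concrete, the sketch below splits a sample covariance into a PSD low-rank part plus a nonnegative diagonal (heteroskedastic) part by simple alternation: an eigenvalue projection onto the PSD cone for the low-rank factor, and a clipped diagonal update that rules out negative noise variances (the Heywood cases mentioned above). This is an illustrative scheme in the spirit of MTFA, not the relaxed algorithm analyzed in the paper.

```python
import numpy as np

def lowrank_plus_diag(S, n_iter=100):
    """Toy alternating split of a sample covariance S into a PSD low-rank
    part L and a nonnegative diagonal noise part D."""
    D = np.zeros(S.shape[0])
    for _ in range(n_iter):
        w, V = np.linalg.eigh(S - np.diag(D))
        L = (V * np.maximum(w, 0.0)) @ V.T      # PSD projection of S - diag(D)
        D = np.clip(np.diag(S - L), 0.0, None)  # residual diagonal variances, >= 0
    return L, D

rng = np.random.default_rng(1)
Lam = rng.standard_normal((50, 3))              # loadings: 50 variables, 3 factors
Z = rng.standard_normal((500, 3))               # 500 samples of factor scores
sd = rng.uniform(0.1, 2.0, 50)                  # heteroskedastic noise levels
X = Z @ Lam.T + sd * rng.standard_normal((500, 50))
L, D = lowrank_plus_diag(np.cov(X, rowvar=False))
print(np.round(np.linalg.eigvalsh(L)[-6:], 1))  # top eigenvalues: 3 factor directions dominate
```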

Extremely large aperture arrays can enable unprecedented spatial multiplexing in beyond 5G systems due to their extremely narrow beamfocusing capabilities. However, acquiring the spatial correlation matrix to enable efficient channel estimation is a complex task due to the vast number of antenna dimensions. Recently, a new estimation method called the "reduced-subspace least squares (RS-LS) estimator" has been proposed for densely packed arrays. This method relies solely on the geometry of the array to limit the estimation resources. In this paper, we address a gap in the existing literature by deriving the average spectral efficiency for a certain distribution of user equipments (UEs) and a lower bound on it when using the RS-LS estimator. This bound is determined by the channel gain and the statistics of the normalized spatial correlation matrices of potential UEs but, importantly, does not require knowledge of a specific UE's spatial correlation matrix. We establish that there exists a pilot length that maximizes this expression. Additionally, we derive an approximate expression for the optimal pilot length under low signal-to-noise ratio (SNR) conditions. Simulation results validate the tightness of the derived lower bound and the effectiveness of using the optimized pilot length.
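
The core idea of a reduced-subspace estimator can be sketched as follows: the eigenvectors of a correlation matrix built from the array geometry alone span a subspace of dimension far below the number of antennas, and projecting a conventional LS estimate onto it suppresses the noise outside that subspace. The sinc-shaped correlation model and all parameter values below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def rs_ls(y_ls, R, rank):
    """Project a conventional LS channel estimate onto the dominant
    eigen-subspace of a geometry-based correlation matrix R."""
    w, V = np.linalg.eigh(R)
    U = V[:, -rank:]                     # eigenvectors of the largest eigenvalues
    return U @ (U.conj().T @ y_ls)       # orthogonal projection onto the subspace

M, spacing = 64, 0.25                    # antennas; quarter-wavelength (dense) spacing
n = np.arange(M)
R = np.sinc(2 * spacing * (n[:, None] - n[None, :]))   # isotropic-scattering correlation
rank = int(np.sum(np.linalg.eigvalsh(R) > 1e-5))       # effective subspace dimension
rng = np.random.default_rng(0)
C = np.linalg.cholesky(R + 1e-9 * np.eye(M))
h = C @ (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
y_ls = h + 0.3 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
h_hat = rs_ls(y_ls, R, rank)
print(rank, np.linalg.norm(y_ls - h) / np.linalg.norm(h))   # raw LS relative error
print(np.linalg.norm(h_hat - h) / np.linalg.norm(h))        # reduced after projection
```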

Information Bottleneck (IB) is a widely used framework that enables the extraction of information related to a target random variable from a source random variable. In the objective function, IB controls the trade-off between data compression and predictiveness through the Lagrange multiplier $\beta$. Traditionally, to find the desired trade-off, IB requires a search over $\beta$ through multiple training cycles, which is computationally expensive. In this study, we introduce Flexible Variational Information Bottleneck (FVIB), an innovative framework for classification tasks that can obtain optimal models for all values of $\beta$ with a single, computationally efficient training run. We theoretically demonstrate that, across all reasonable values of $\beta$, FVIB can simultaneously maximize an approximation of the objective function of Variational Information Bottleneck (VIB), the conventional IB method. We then show empirically that FVIB can learn the VIB objective as effectively as VIB. Furthermore, in terms of calibration performance, FVIB outperforms other IB and calibration methods by enabling continuous optimization of $\beta$. Our code is available at https://github.com/sotakudo/fvib.
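
For context, the conventional VIB objective that FVIB approximates can be written down compactly: a cross-entropy prediction term plus a $\beta$-weighted KL term between the Gaussian encoder and a standard normal prior. The PyTorch sketch below shows this baseline loss with stand-in tensors; FVIB's single-training construction is specific to the paper and not reproduced.

```python
import torch
import torch.nn.functional as F

def vib_loss(logits, mu, logvar, targets, beta):
    """VIB objective: cross-entropy prediction term plus beta-weighted KL
    divergence KL( N(mu, diag(exp(logvar))) || N(0, I) )."""
    ce = F.cross_entropy(logits, targets)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=1).mean()
    return ce + beta * kl

# Stand-in tensors in place of a real encoder/decoder pair.
B, K, Z = 32, 10, 16                                     # batch, classes, latent dim
mu = torch.randn(B, Z, requires_grad=True)
logvar = torch.randn(B, Z, requires_grad=True)
z = mu + (0.5 * logvar).exp() * torch.randn(B, Z)        # reparameterization trick
logits = z @ torch.randn(Z, K)                           # stand-in linear decoder
loss = vib_loss(logits, mu, logvar, torch.randint(0, K, (B,)), beta=1e-3)
loss.backward()                                          # gradients flow to mu, logvar
print(loss.item())
```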

The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, is a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches, and eventually to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.

Incompleteness is a common problem for existing knowledge graphs (KGs), and KG completion, which aims to predict missing links between entities, is challenging. Most existing KG completion methods only consider the direct relation between nodes and ignore relation paths, which contain useful information for link prediction. Recently, a few methods have taken relation paths into consideration, but they pay little attention to the order of relations in a path, which is important for reasoning. In addition, these path-based models typically ignore nonlinear contributions of path features to link prediction. To solve these problems, we propose a novel KG completion method named OPTransE. Instead of embedding both entities of a relation into the same latent space as in previous methods, we project the head entity and the tail entity of each relation into different spaces to guarantee the order of relations in the path. Meanwhile, we adopt a pooling strategy to extract nonlinear and complex features of different paths to further improve the performance of link prediction. Experimental results on two benchmark datasets show that the proposed model OPTransE outperforms state-of-the-art methods.
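
The order-preserving projection idea can be illustrated with a TransE-style score in which head and tail entities pass through different projection matrices, so that swapping the entity order changes the score. This is a hedged sketch of the idea only; the matrices and dimensions below are hypothetical, and the full OPTransE model (including its path pooling) is not reproduced.

```python
import numpy as np

def order_aware_score(h, r, t, M_head, M_tail):
    """TransE-style translation score, but head and tail are projected into
    different spaces, so the order of entities in a path matters."""
    return -np.linalg.norm(M_head @ h + r - M_tail @ t)

rng = np.random.default_rng(0)
d = 8
M_head, M_tail = rng.standard_normal((d, d)), rng.standard_normal((d, d))
h, r, t = rng.standard_normal(d), rng.standard_normal(d), rng.standard_normal(d)
print(order_aware_score(h, r, t, M_head, M_tail))   # score of (h, r, t)
print(order_aware_score(t, r, h, M_head, M_tail))   # differs: entity order matters
```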

We study how to generate captions that are not only accurate in describing an image but also discriminative across different images. The problem is both fundamental and interesting, as most machine-generated captions, despite phenomenal research progress in the past several years, are expressed in a very monotonic and featureless format. While such captions are normally accurate, they often lack important characteristics of human language: distinctiveness for each caption and diversity across different images. To address this problem, we propose a novel conditional generative adversarial network for generating diverse captions across images. Instead of estimating the quality of a caption solely on one image, the proposed comparative adversarial learning framework better assesses the quality of captions by comparing a set of captions within the image-caption joint space. By contrasting with human-written captions and image-mismatched captions, the caption generator effectively exploits the inherent characteristics of human language and generates more discriminative captions. We show that our proposed network is capable of producing accurate and diverse captions across images.
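
The comparative idea can be sketched as scoring a set of candidate captions jointly rather than one at a time: similarities between an image embedding and several caption embeddings are normalized by a softmax, so each caption's quality is judged relative to the others (e.g., human-written and image-mismatched ones). The embeddings and temperature below are illustrative placeholders, not the paper's discriminator.

```python
import torch
import torch.nn.functional as F

def comparative_scores(img_emb, cap_embs, tau=1.0):
    """Score a set of captions for one image jointly: cosine similarities
    in a joint embedding space, normalized by a softmax so each caption is
    judged relative to the others in the set."""
    sims = F.cosine_similarity(img_emb.unsqueeze(0), cap_embs, dim=1)
    return F.softmax(sims / tau, dim=0)

img = torch.randn(64)                  # image embedding (placeholder)
caps = torch.randn(3, 64)              # e.g. [generated, human-written, mismatched]
print(comparative_scores(img, caps))   # relative quality scores summing to 1
```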

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch can lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style and illumination, and 2) the instance-level shift, such as object appearance and size. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, at the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers at different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our approach for robust object detection in various domain shift scenarios.
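
Adversarial domain classifiers of this kind are commonly implemented with a gradient reversal layer: the forward pass is the identity, while the backward pass negates the gradient so the features are pushed to become domain-invariant. The sketch below shows this standard mechanism on stand-in features; the classifier architecture and loss weighting are illustrative assumptions, not the paper's exact components.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated (scaled) gradient in the
    backward pass, so the feature extractor is trained adversarially
    against the domain classifier."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

domain_clf = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
feats = torch.randn(8, 256, requires_grad=True)     # image- or instance-level features
logits = domain_clf(GradReverse.apply(feats, 1.0)).squeeze(1)
loss = nn.functional.binary_cross_entropy_with_logits(logits, torch.zeros(8))  # 0 = source domain
loss.backward()                                     # feats.grad is reversed on purpose
print(feats.grad.shape)
```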

While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
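
The basic recipe for such a deep perceptual distance is easy to state: compare unit-normalized activations of a pretrained network at a few layers and average the squared differences. The sketch below follows that recipe with an off-the-shelf VGG16 (pretrained weights are downloaded on first use); the layer choice is an assumption for illustration, and the learned per-channel weighting used by the paper's calibrated metric is omitted.

```python
import torch
import torchvision.models as models

def deep_feature_distance(x, y, layer_ids=(3, 8, 15)):
    """Sum of mean squared differences between unit-normalized VGG16
    activations at a few early layers (relu1_2, relu2_2, relu3_3)."""
    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
    dist = 0.0
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x, y = layer(x), layer(y)
            if i in layer_ids:
                fx = x / (x.norm(dim=1, keepdim=True) + 1e-10)    # unit-normalize channels
                fy = y / (y.norm(dim=1, keepdim=True) + 1e-10)
                dist = dist + ((fx - fy) ** 2).sum(dim=1).mean()  # average over space
            if i == max(layer_ids):
                break                                             # skip the deeper layers
    return dist

a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(deep_feature_distance(a, b).item())
```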
