
Precision and Recall are two prominent metrics of generative performance, proposed to separately measure the fidelity and diversity of generative models. Given their central role in comparing and improving generative models, understanding their limitations is crucially important. To that end, in this work we identify a critical flaw in the common approximation of these metrics using k-nearest-neighbors: the very interpretations of fidelity and diversity that are assigned to Precision and Recall can fail in high dimensions, leading to very misleading conclusions. Specifically, we show empirically and theoretically that as the number of dimensions grows, two model distributions whose supports lie at equal point-wise distance from the support of the real distribution can have vastly different Precision and Recall regardless of their respective distributions, hence an emergent asymmetry in high dimensions. Based on our theoretical insights, we then provide simple yet effective modifications to these metrics that make them symmetric regardless of the number of dimensions. Finally, we present experiments on real-world datasets to illustrate that the identified flaw is not merely a pathological case, and that our proposed metrics are effective in alleviating its impact.
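For concreteness, the k-nearest-neighbors approximation referred to above is commonly computed along the lines of the sketch below (a minimal illustration, not the authors' implementation): each sample set induces an estimated manifold as the union of k-NN balls around its points, precision is the fraction of generated samples falling inside the estimated real manifold, and recall is the symmetric quantity. Function names and the choice k=3 are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_radii(points, k=3):
    # Distance from each point to its k-th nearest neighbor (excluding itself).
    d = cdist(points, points)
    d.sort(axis=1)
    return d[:, k]  # column 0 is the zero self-distance

def coverage(queries, support, radii):
    # Fraction of queries landing inside at least one k-NN ball of the support set.
    d = cdist(queries, support)
    return float(np.mean((d <= radii[None, :]).any(axis=1)))

def precision_recall(real, fake, k=3):
    precision = coverage(fake, real, knn_radii(real, k))  # fidelity proxy
    recall = coverage(real, fake, knn_radii(fake, k))      # diversity proxy
    return precision, recall
```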

Related Content

Line attributes such as width and dashing are commonly used to encode information. However, many questions on the perception of line attributes remain, such as how many levels of attribute variation can be distinguished or which line attributes are the preferred choices for which tasks. We conducted three studies to develop guidelines for using stylized lines to encode scalar data. In our first study, participants drew stylized lines to encode uncertainty information. Uncertainty is usually visualized alongside other data. Therefore, alternative visual channels are important for the visualization of uncertainty. Additionally, uncertainty -- e.g., in weather forecasts -- is a familiar topic to most people. Thus, we picked it for our visualization scenarios in study 1. We used the results of our study to determine the most common line attributes for drawing uncertainty: Dashing, luminance, wave amplitude, and width. While those line attributes were especially common for drawing uncertainty, they are also commonly used in other areas. In studies 2 and 3, we investigated the discriminability of the line attributes determined in study 1. Studies 2 and 3 did not require specific application areas; thus, their results apply to visualizing any scalar data in line attributes. We evaluated the just-noticeable differences (JND) and derived recommendations for perceptually distinct line levels. We found that participants could discriminate considerably more levels for the line attribute width than for wave amplitude, dashing, or luminance.

We consider a causal inference model in which individuals interact in a social network and may not comply with their assigned treatments. In particular, we suppose that the form of network interference is unknown to researchers. To estimate meaningful causal parameters in this situation, we introduce a new concept of exposure mapping, which summarizes potentially complicated spillover effects into a fixed-dimensional statistic of instrumental variables. We investigate identification conditions for the intention-to-treat effects and the average treatment effects for compliers, while explicitly allowing for possible misspecification of the exposure mapping. Based on our identification results, we develop nonparametric estimation procedures via inverse probability weighting. Their asymptotic properties, including consistency and asymptotic normality, are investigated using an approximate neighborhood interference framework. For an empirical illustration, we apply our method to experimental data on the anti-conflict intervention school program. The proposed methods are readily available in the companion R package latenetwork.
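The estimators themselves are implemented in the companion R package latenetwork; purely as a generic illustration of the inverse probability weighting step (not the package's interface), a Horvitz-Thompson style estimate of an intention-to-treat effect with a binary instrument might look as follows, with the paper's exposure-mapping machinery abstracted into a pre-computed assignment probability.

```python
import numpy as np

def ipw_itt(y, z, pscore):
    """Illustrative Horvitz-Thompson IPW estimate of an intention-to-treat effect.

    y      : observed outcomes
    z      : binary instrument / treatment assignment (0 or 1)
    pscore : P(Z = 1 | exposure summary), assumed known or previously estimated
    """
    y, z, pscore = map(np.asarray, (y, z, pscore))
    return float(np.mean(z * y / pscore - (1 - z) * y / (1 - pscore)))
```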

Much research explores how well computers can recognize emotions displayed by humans and use that data to perform different tasks. However, very little research has evaluated the computer's ability to generate emotion classification information to help the user make decisions or perform tasks. This is a crucial area to explore, as it is paramount to two-way communication between humans and computers. In this research, we conducted an experiment to investigate the impact of different uncertainty-information displays of emotion classification on the human decision-making process. The results show that displaying more uncertainty information can help users be more confident when making decisions.

A sparse polynomial (also called a lacunary polynomial) is a polynomial that has relatively few terms compared to its degree. The sparse representation of a polynomial lists its non-zero terms as coefficient-degree pairs; in particular, the degree of a sparse polynomial can be exponential in the size of its sparse representation. We prove that for monic polynomials $f, g \in \mathbb{C}[x]$ such that $g$ divides $f$, the $\ell_2$-norm of the quotient polynomial $f/g$ is bounded by $\lVert f \rVert_1 \cdot \tilde{O}(\lVert g \rVert_0^3 \deg^2 f)^{\lVert g \rVert_0 - 1}$. This improves upon the exponential (in $\deg f$) bounds for general polynomials and implies that the trivial long division algorithm runs in time quasi-linear in the input size and the number of terms of the quotient polynomial $f/g$, thus solving a long-standing problem on exact divisibility of sparse polynomials. We also study the problem of bounding the number of terms of $f/g$ in some special cases. When $f, g \in \mathbb{Z}[x]$ and $g$ is a cyclotomic-free (i.e., having no cyclotomic factors) trinomial, we prove that $\lVert f/g \rVert_0 \leq O(\lVert f \rVert_0 \, \mathrm{size}(f)^2 \cdot \log^6 \deg g)$. When $g$ is a binomial with $g(\pm 1) \neq 0$, we prove that the sparsity is at most $O(\lVert f \rVert_0 (\log \lVert f \rVert_0 + \log \lVert f \rVert_{\infty}))$. Both upper bounds are polynomial in the input size. We leverage these results to give a polynomial-time algorithm for deciding whether a cyclotomic-free trinomial divides a sparse polynomial over the integers. As our last result, we present a polynomial-time algorithm for testing divisibility by pentanomials over small finite fields when $\deg f = \tilde{O}(\deg g)$.
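To make the "trivial long division algorithm" concrete, the following is a minimal sketch operating directly on sparse representations, with a polynomial stored as a dictionary mapping degree to coefficient; it assumes, as in the setting above, that g is monic and divides f exactly.

```python
def sparse_divide(f, g):
    """Long division of sparse polynomials given as {degree: coefficient} dicts.

    Assumes g is monic and divides f exactly; returns the quotient f/g.
    """
    f = dict(f)                    # working copy of the dividend / remainder
    dg = max(g)                    # degree of g (leading coefficient is 1)
    q = {}
    while f:
        df = max(f)
        if df < dg:
            raise ValueError("g does not divide f exactly")
        c = f[df]                  # leading coefficient of the current remainder
        q[df - dg] = c
        # Subtract c * x^(df - dg) * g from the remainder, term by term.
        for e, a in g.items():
            k = e + df - dg
            f[k] = f.get(k, 0) - c * a
            if f[k] == 0:
                del f[k]
    return q

# Example: (x^2 - 1) / (x - 1) == x + 1
assert sparse_divide({2: 1, 0: -1}, {1: 1, 0: -1}) == {1: 1, 0: 1}
```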

There are a number of measures of direct and indirect effects in the literature. They are suitable in some cases and unsuitable in others. We describe a case where the existing measures are unsuitable and propose new suitable ones. We also show that the new measures can partially handle unmeasured treatment-outcome confounding, and bound long-term effects by combining experimental and observational data.

Before-and-after study frameworks are widely adopted to evaluate the effectiveness of transportation policies and emerging technologies. However, many factors, such as seasonal effects, holidays, and lane closures, can interfere with the evaluation by inducing variation in traffic volume between the before and after periods. In practice, limited effort has been made to eliminate the effects of these factors. In this study, an extreme gradient boosting (XGBoost)-based propensity score matching method is proposed to reduce the biases caused by traffic volume variation between the before and after periods. To evaluate the effectiveness of the proposed method, a corridor in the City of Chandler, Arizona, where an advanced traffic signal control system was recently implemented, was selected. The results indicate that the proposed method can effectively eliminate the variation in traffic volume caused by the global COVID-19 pandemic during the evaluation process. In addition, the results of the t-test and Kolmogorov-Smirnov (KS) test demonstrate that the proposed method outperforms other conventional propensity score matching methods. The proposed method is also transferable to other before-and-after evaluation studies and can significantly assist transportation engineers in eliminating the impacts of traffic volume variation on the evaluation process.
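A minimal sketch of the idea, assuming the xgboost and scikit-learn packages and illustrative inputs (the study's exact covariates and matching rules may differ): a gradient-boosted classifier estimates the probability that an observation belongs to the "after" period, and each "after" observation is then matched to the "before" observation with the nearest propensity score.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.neighbors import NearestNeighbors

def xgb_propensity_match(X_before, X_after):
    """Match 'after' samples to 'before' samples on XGBoost propensity scores."""
    X = np.vstack([X_before, X_after])
    period = np.r_[np.zeros(len(X_before)), np.ones(len(X_after))]

    model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
    model.fit(X, period)
    scores = model.predict_proba(X)[:, 1]        # estimated P(after | covariates)

    before_scores = scores[:len(X_before)].reshape(-1, 1)
    after_scores = scores[len(X_before):].reshape(-1, 1)

    # One-nearest-neighbor matching on the propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(before_scores)
    _, idx = nn.kneighbors(after_scores)
    return idx.ravel()                            # indices into X_before
```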

We consider the problem of estimating the false-positive rate and true-positive rate (FPR/TPR) of a binary classification model when the validation set contains incorrect labels (label noise). Our motivating application is fraud prevention, where accurate estimates of the FPR are critical to preserving the experience of good customers, and where label noise is highly asymmetric. Existing methods seek to minimize the total error in the cleaning process: to avoid cleaning examples that are not noise, and to ensure cleaning of examples that are. This is an important measure of accuracy but is insufficient to guarantee good estimates of the true FPR or TPR of a model, and we show that using the model to directly clean its own validation data leads to underestimates even if the total error is low. This indicates a need for researchers to pursue methods that not only reduce total error but also seek to de-correlate cleaning error with model scores.
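The underestimation effect is easy to reproduce in a toy simulation (illustrative assumptions only: synthetic scores, a simple score-based cleaning rule): when the label noise is concentrated on high-scoring negatives and the cleaning rule trusts the model's own scores, the falsely flagged negatives stay labeled positive and drop out of the FPR computation, pulling the estimate below the true value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Ground truth: 95% good customers (negative), 5% fraud (positive).
y_true = rng.random(n) < 0.05
scores = np.where(y_true, rng.beta(8, 2, n), rng.beta(2, 8, n))

# Asymmetric label noise correlated with the score: high-scoring good customers
# are the ones most likely to have been mislabeled as fraud.
y_noisy = y_true | ((~y_true) & (rng.random(n) < 0.3 * scores))

# "Self-cleaning": trust the model and keep only high-score examples as positives.
y_cleaned = y_noisy & (scores >= 0.5)

pred = scores >= 0.5
fpr = lambda labels: (pred & ~labels).sum() / (~labels).sum()

print(f"FPR on true labels         : {fpr(y_true):.4f}")
print(f"FPR on self-cleaned labels : {fpr(y_cleaned):.4f}")  # systematically lower
```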

Convolutional networks are often considered shift invariant, but it has been demonstrated that their response can vary with the exact location of objects. In this paper we demonstrate that most commonly investigated datasets have a bias: objects are over-represented at the center of the image during training. This bias, together with the boundary conditions of these networks, can have a significant effect on the performance of these architectures, and their accuracy drops significantly as an object approaches the boundary. We also demonstrate how this effect can be mitigated with data augmentation techniques.
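One simple instance of such augmentation is random translation during training, which keeps objects from being over-represented at the exact image center; a minimal sketch assuming torchvision:

```python
from torchvision import transforms

# Randomly shift each training image by up to 25% of its size in each direction,
# so that object locations are spread away from the image center.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=0, translate=(0.25, 0.25)),
    transforms.ToTensor(),
])
```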

Fractional (hyper-)graph theory is concerned with the specific problems that arise when fractional analogues of otherwise integer-valued (hyper-)graph invariants are considered. The focus of this paper is on fractional edge covers of hypergraphs. Our main technical result generalizes and unifies previous conditions under which the size of the support of fractional edge covers is bounded independently of the size of the hypergraph itself. This allows us to extend previous tractability results for checking if the fractional hypertree width of a given hypergraph is $\leq k$ for some constant $k$. We also show how our results translate to fractional vertex covers.
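For reference, the fractional edge cover number of a hypergraph $H = (V, E)$ is the optimum of the standard linear program below (a textbook definition, not specific to this paper's new results); the support referred to above is the set of edges $e$ with $x_e > 0$ in an optimal solution.

```latex
\rho^*(H) \;=\; \min \sum_{e \in E} x_e
\quad \text{subject to} \quad
\sum_{e \,\ni\, v} x_e \;\geq\; 1 \;\;\text{for all } v \in V,
\qquad x_e \geq 0 \;\;\text{for all } e \in E .
```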

While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
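A hedged sketch of the general recipe described above (not the paper's released implementation): both images are passed through an ImageNet-pretrained VGG network, intermediate activations are unit-normalized across channels, and the squared differences are averaged over a few layers. The layer indices and constants here are illustrative, and torchvision is assumed.

```python
import torch
from torchvision import models

# ImageNet-pretrained VGG16 feature extractor; the indices pick a few conv layers.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
LAYERS = {3, 8, 15, 22, 29}

def deep_perceptual_distance(x, y):
    """Average squared distance between unit-normalized VGG activations of x and y."""
    dist, fx, fy = 0.0, x, y
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            fx, fy = layer(fx), layer(fy)
            if i in LAYERS:
                nx = fx / (fx.norm(dim=1, keepdim=True) + 1e-8)  # channel-normalize
                ny = fy / (fy.norm(dim=1, keepdim=True) + 1e-8)
                dist += (nx - ny).pow(2).mean().item()
    return dist
```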
