
Iris recognition technology has attracted increasing interest in recent decades, during which we have witnessed a migration from research laboratories to real-world applications. The deployment of this technology raises questions about the main vulnerabilities and security threats related to these systems. Among these threats, presentation attacks stand out as some of the most relevant and most studied. Presentation attacks can be defined as the presentation of human characteristics or artifacts directly to the capture device of a biometric system in an attempt to interfere with its normal operation. In the case of the iris, these attacks include the use of real irises as well as artifacts of varying sophistication, such as photographs or videos. This chapter introduces iris Presentation Attack Detection (PAD) methods that have been developed to reduce the risk posed by presentation attacks. First, we summarise the most popular types of attacks, including the main challenges to address. Second, we present a taxonomy of Presentation Attack Detection methods as a brief introduction to this very active research area. Finally, we discuss the integration of these methods into Iris Recognition Systems according to the most important scenarios of practical application.
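For orientation only, one classical family of PAD methods in the literature is texture-based detection: a hand-crafted descriptor such as local binary patterns (LBP) feeds a binary bona-fide/attack classifier. The sketch below is a minimal illustration of that generic baseline, not any specific method from this chapter; the placeholder data and all names are assumptions.

```python
# Minimal sketch of a texture-based iris PAD baseline: LBP histogram + SVM.
# Illustrative only; real PAD pipelines operate on segmented, normalized iris images.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_image, points=8, radius=1):
    """Uniform LBP histogram as a simple texture descriptor."""
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Hypothetical data: grayscale iris crops labeled 1 = bona fide, 0 = attack.
rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))          # placeholder images
labels = np.array([0, 1] * 20)             # placeholder labels

features = np.stack([lbp_histogram(img) for img in images])
clf = SVC(kernel="rbf").fit(features, labels)
print("bona fide" if clf.predict(features[:1])[0] else "attack")
```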

Related Content

The Iris dataset is a commonly used classification benchmark, collected and organized by Fisher in 1936. Also known as the iris flower dataset, it is a classic multivariate dataset. It contains 150 samples divided into 3 classes of 50 samples each, and each sample has 4 attributes. From the four attributes sepal length, sepal width, petal length, and petal width, the task is to predict which of the three species (Setosa, Versicolour, Virginica) an iris flower belongs to.
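For concreteness, the dataset described above ships with scikit-learn; here is a minimal sketch of the four-attribute, three-class prediction task:

```python
# Load Fisher's Iris dataset and predict the species from its 4 attributes.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()  # 150 samples, 3 classes, 4 features
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0, stratify=iris.target
)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("features:", iris.feature_names)      # sepal/petal length and width
print("classes:", list(iris.target_names))  # setosa, versicolor, virginica
print("test accuracy:", clf.score(X_test, y_test))
```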

In spite of its considerable practical importance, the current algorithmic fairness literature lacks technical methods to account for underlying geographic dependency when evaluating or mitigating bias in spatial data. We initiate the study of bias in spatial applications in this paper, taking the first step towards formalizing this line of quantitative methods. Bias in spatial data applications is often confounded by underlying spatial autocorrelation. We propose hypothesis testing methodology to detect the presence and strength of this effect, and then account for it using a spatial filtering-based approach, in order to enable the application of existing bias detection metrics. We evaluate our proposed methodology through numerical experiments on real and synthetic datasets, demonstrating that, in the presence of several types of confounding effects due to the underlying spatial structure, our testing methods perform well in maintaining low type-II errors and nominal type-I errors.
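The paper's exact tests are not reproduced here, but a standard building block for detecting spatial autocorrelation is Moran's I combined with a permutation test. Below is a minimal numpy sketch under an assumed binary-adjacency weight matrix W; the toy data and all names are illustrative, not the authors' methodology.

```python
# Moran's I with a permutation test: a common way to detect the spatial
# autocorrelation that can confound bias metrics.
import numpy as np

def morans_i(x, W):
    """Moran's I for values x and spatial weight matrix W (n x n, zero diagonal)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

def permutation_pvalue(x, W, n_perm=999, seed=0):
    """Two-sided p-value for H0: no spatial autocorrelation."""
    rng = np.random.default_rng(seed)
    observed = morans_i(x, W)
    null = np.array([morans_i(rng.permutation(x), W) for _ in range(n_perm)])
    return observed, (1 + np.sum(np.abs(null) >= abs(observed))) / (n_perm + 1)

# Toy example: 4 locations on a line; neighbors share an edge.
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
x = np.array([1.0, 1.2, 3.8, 4.1])  # spatially clustered values
I, p = permutation_pvalue(x, W)
print(f"Moran's I = {I:.3f}, permutation p = {p:.3f}")
```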

In recent years, the increasing deployment of face recognition technology in security-critical settings, such as border control or law enforcement, has led to considerable interest in the vulnerability of face recognition systems to attacks utilising legitimate documents that were issued on the basis of digitally manipulated face images. As automated manipulation and attack detection remains a challenging task, conventional processes in which human inspectors perform identity verification remain indispensable. These circumstances merit a closer investigation of human capabilities in detecting manipulated face images, as previous work in this field is sparse and often concentrated only on specific scenarios and biometric characteristics. This work introduces a web-based, remote visual discrimination experiment built on principles adopted from the field of psychophysics, and subsequently discusses interdisciplinary opportunities, with the aim of examining human proficiency in detecting different types of digitally manipulated face images, specifically face swapping, morphing, and retouching. In addition to analysing appropriate performance measures, a possible metric of detectability is explored. Experimental data from 306 participants indicate that detection performance varies widely across the population and that certain types of face image manipulations are much more challenging to detect than others.
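The abstract mentions exploring a possible metric of detectability; one classic candidate from psychophysics and signal detection theory is the sensitivity index d′, sketched below. This is an illustration of that standard measure, not necessarily the metric used in the study, and the response counts are hypothetical.

```python
# Sensitivity index d' from signal detection theory: a standard psychophysics
# measure of how well observers discriminate manipulated from genuine images.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction
    so rates of exactly 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical participant: 40 manipulated images shown, 32 flagged correctly;
# 40 genuine images shown, 6 wrongly flagged as manipulated.
print(f"d' = {d_prime(32, 8, 6, 34):.2f}")
```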

The growing proliferation of FPGAs and High-Level Synthesis (HLS) tools has generated great interest in designing hardware accelerators for complex operations and algorithms. However, existing HLS toolflows typically require a significant amount of user knowledge or training to be effective in both industrial and research applications. In this paper, we propose using the Julia language as the basis for an HLS tool. The Julia HLS tool aims to lower the barrier to entry for hardware acceleration by taking advantage of the readability of the Julia language and by allowing the use of the existing large library of standard mathematical functions written in Julia. We present a prototype Julia HLS tool, written in Julia, that transforms Julia code to VHDL. We highlight how features of Julia and its compiler simplified the creation of this tool, and we discuss potential directions for future work.

Objective. Insecure Direct Object Reference (IDOR), also known as Broken Object Level Authorization (BOLA), is one of the most critical types of access control vulnerabilities in modern applications: an attacker can bypass authorization checks, leading to information leakage or account takeover. Our main research goal was to help application security architects optimize security design and testing by providing an algorithm and tool that automatically analyze system API specifications and generate a list of possible vulnerabilities and attack vectors ready to be used as security non-functional requirements. Method. We conducted a multivocal review of research and conference papers, bug bounty program reports and other grey literature to outline patterns of attacks against IDOR vulnerabilities. These attacks were collected into groups; we then analyzed the attributes the groups share and the features that define each group, where each group of attacks is characterized by endpoint properties and attack techniques. A mapping between group features and existing OpenAPI specifications was performed to implement a tool for automatic discovery of potentially vulnerable endpoints. Results and practical relevance. In this work, we systematize IDOR/BOLA attack techniques based on a literature review and the analysis of real cases, and derive IDOR/BOLA attack groups. We propose an approach to describing IDOR/BOLA attacks based on the properties of OpenAPI specifications. We develop an algorithm that detects potential IDOR/BOLA vulnerabilities by processing OpenAPI specifications, implement it in Python, and evaluate it. The results show that the algorithm is resilient and can be used in practice to detect potential IDOR/BOLA vulnerabilities.
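As a rough illustration of the kind of specification processing described (not the authors' implementation), the sketch below scans an OpenAPI document for endpoints whose path parameters look like direct object identifiers. The heuristic, the spec fragment, and all names are assumptions.

```python
# Toy scanner: flag OpenAPI endpoints whose path parameters look like direct
# object identifiers, a rough sketch of spec-driven IDOR/BOLA triage.
ID_HINTS = ("id", "uuid", "guid", "number", "key")

def find_idor_candidates(spec: dict) -> list[str]:
    findings = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            if not isinstance(op, dict):  # skip path-level keys like "parameters"
                continue
            for p in op.get("parameters", []):
                name = p.get("name", "").lower()
                if p.get("in") == "path" and any(h in name for h in ID_HINTS):
                    findings.append(f"{method.upper()} {path} "
                                    f"(object-id parameter: {p['name']})")
    return findings

# Hypothetical specification fragment.
spec = {
    "paths": {
        "/users/{userId}/orders/{orderId}": {
            "get": {"parameters": [
                {"name": "userId", "in": "path"},
                {"name": "orderId", "in": "path"},
            ]},
        },
    },
}
for finding in find_idor_candidates(spec):
    print("potential IDOR/BOLA:", finding)
```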

The rapid changes in the finance industry due to the increasing amount of data have revolutionized techniques for data processing and data analysis and brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches for solving financial decision-making problems, which heavily rely on model assumptions, new developments in reinforcement learning (RL) are able to make full use of large amounts of financial data with fewer model assumptions and to improve decisions in complex financial environments. This survey paper aims to review recent developments and uses of RL approaches in finance. We give an introduction to Markov decision processes, which provide the setting for many of the commonly used RL approaches. Various algorithms are then introduced, with a focus on value- and policy-based methods that do not require any model assumptions. Connections are made with neural networks to extend the framework to encompass deep RL algorithms. Our survey concludes by discussing the application of these RL algorithms to a variety of decision-making problems in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo-advising.
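To make the MDP and value-based setting concrete, here is a minimal tabular Q-learning sketch on a toy two-state problem. It is a generic illustration of the model-free template the survey builds on, not a financial algorithm from the paper; the dynamics and parameters are invented.

```python
# Tabular Q-learning on a toy MDP: the value-based, model-free template that
# many of the surveyed financial RL methods extend.
import numpy as np

n_states, n_actions = 2, 2
alpha, gamma, eps = 0.1, 0.95, 0.1  # step size, discount, exploration rate

def step(state, action, rng):
    """Hypothetical dynamics: action 1 tends to lead to state 1, which pays 1."""
    next_state = action if rng.random() < 0.9 else 1 - action
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))
state = 0
for _ in range(5000):
    # Epsilon-greedy action selection.
    action = rng.integers(n_actions) if rng.random() < eps else Q[state].argmax()
    next_state, reward = step(state, action, rng)
    # Q-learning update: bootstrap from the greedy value of the next state.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("learned Q-values:\n", Q.round(2))  # action 1 should dominate in both states
```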

Deep generative models have gained much attention given their ability to generate data for applications ranging from healthcare to financial technology to surveillance, and many more; the most popular models are generative adversarial networks (GANs) and variational auto-encoders (VAEs). Yet, as with all machine learning models, concerns over security breaches and privacy leaks persist, and deep generative models are no exception. These models have advanced so rapidly in recent years that work on their security is still in its infancy. In an attempt to audit the current and future threats against these models, and to provide a roadmap for defense preparations in the short term, we prepared this comprehensive and specialized survey on the security and privacy preservation of GANs and VAEs. Our focus is on the inner connection between attacks and model architectures and, more specifically, on five components of deep generative models: the training data, the latent code, the generators/decoders of GANs/VAEs, the discriminators/encoders of GANs/VAEs, and the generated data. For each model, component and attack, we review the current research progress and identify the key challenges. The paper concludes with a discussion of possible future attacks and research directions in the field.
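One concrete attack family in this literature targets the generated-data component: Monte Carlo membership inference scores a candidate record by its distance to generated samples, with small distances suggesting the record was in the training set. Below is a minimal, hedged numpy sketch; the data, threshold logic, and names are assumptions, and real attacks are considerably subtler.

```python
# Toy membership-inference score against a generative model: a candidate
# record that lies close to many generated samples is more likely a
# training-set member. Illustrative only.
import numpy as np

def membership_score(candidate, generated, k=10):
    """Negative mean distance to the k nearest generated samples."""
    dists = np.linalg.norm(generated - candidate, axis=1)
    return -np.sort(dists)[:k].mean()

rng = np.random.default_rng(0)
generated = rng.normal(0.0, 1.0, size=(1000, 8))  # stand-in for model output
member = rng.normal(0.0, 1.0, size=8)             # lies on the data manifold
non_member = rng.normal(4.0, 1.0, size=8)         # lies far from it

print("member score:    ", membership_score(member, generated))
print("non-member score:", membership_score(non_member, generated))
```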

Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security-critical applications. However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013 [1], it has attracted significant attention from researchers across multiple sub-fields of machine intelligence. In [2], we reviewed the contributions made by the computer vision community to adversarial attacks on deep learning (and their defenses) up to 2018. Many of those contributions have inspired new directions in this area, which has matured significantly since the first-generation methods. Hence, as a sequel to [2], this literature review focuses on the advances in this area since 2018. To ensure authenticity, we mainly consider peer-reviewed contributions published in the prestigious sources of computer vision and machine learning research. Besides a comprehensive literature review, the article also provides concise definitions of technical terminology for non-experts in this domain. Finally, this article discusses challenges and the future outlook of this direction based on the literature reviewed herein and in [2].
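The canonical first-generation attack covered by such reviews is the fast gradient sign method (FGSM): perturb the input by ε times the sign of the loss gradient. A minimal PyTorch sketch with a stand-in, untrained model (so the flipped prediction is only illustrative):

```python
# FGSM, the canonical one-step adversarial attack: x_adv = x + eps * sign(dL/dx).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(3 * 32 * 32, 10)             # stand-in for a trained model
x = torch.rand(1, 3 * 32 * 32, requires_grad=True)   # flattened "image" in [0, 1]
y = torch.tensor([3])                                # true label

loss = F.cross_entropy(model(x), y)
loss.backward()                                      # fills x.grad

eps = 8 / 255
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```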

As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks against robustness and their defenses; 3) inference attacks against privacy and their defenses, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
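For orientation, the baseline protocol that most of the surveyed attacks target is federated averaging (FedAvg): the server replaces the global model with a data-weighted mean of client updates. A minimal sketch with invented toy data, showing where a model-poisoning client would inject a malicious update:

```python
# FedAvg aggregation: the server averages client weights, weighted by local
# data size. Poisoning attacks in this survey manipulate the client updates.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (one flat vector per client)."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)  # shape: (clients, params)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=4) for _ in range(9)]
malicious = [np.full(4, 10.0)]          # crude model-poisoning update
sizes = [100] * 10

print("honest-only global:  ", fedavg(honest, sizes[:9]).round(3))
print("with poisoned client:", fedavg(honest + malicious, sizes).round(3))
```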

Object detection, as one of the most fundamental and challenging problems in computer vision, has received great attention in recent years. Its development over the past two decades can be regarded as an epitome of computer vision history. If we think of today's object detection as a technical aesthetic under the power of deep learning, then turning back the clock 20 years we would witness the wisdom of the cold-weapon era. This paper extensively reviews 400+ papers on object detection in the light of its technical evolution, spanning over a quarter-century (from the 1990s to 2019). A number of topics are covered in this paper, including milestone detectors in history, detection datasets, metrics, fundamental building blocks of the detection system, speed-up techniques, and recent state-of-the-art detection methods. This paper also reviews some important detection applications, such as pedestrian detection, face detection, and text detection, and makes an in-depth analysis of their challenges as well as technical improvements in recent years.
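Among the fundamental building blocks and metrics such a survey covers, intersection over union (IoU) underpins both evaluation (e.g., mAP) and training-time box matching; a minimal sketch:

```python
# Intersection over union (IoU) for axis-aligned boxes in (x1, y1, x2, y2)
# format: the core overlap metric behind detection evaluation and matching.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```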

The question addressed in this paper is: if we present to a user an AI system that explains how it works, how do we know whether the explanation works and the user has achieved a pragmatic understanding of the AI? In other words, how do we know that an explainable AI system (XAI) is any good? Our focus is on the key concepts of measurement. We discuss specific methods for evaluating: (1) the goodness of explanations, (2) whether users are satisfied by explanations, (3) how well users understand the AI systems, (4) how curiosity motivates the search for explanations, (5) whether the user's trust and reliance on the AI are appropriate, and finally, (6) how the human-XAI work system performs. The recommendations we present derive from our integration of extensive research literature and our own psychometric evaluations.
