
Non-Fungible Tokens (NFTs) have emerged as a way to collect digital art as well as an investment vehicle. Despite having been popularized only recently, over the last year, NFT markets have witnessed several high-profile (and high-value) asset sales and tremendous growth in trading volumes. However, these marketplaces have not yet received much scrutiny. Most academic researchers have analyzed decentralized finance (DeFi) protocols, studied attacks on those protocols, and developed automated techniques to detect smart contract vulnerabilities. To the best of our knowledge, we are the first to study the market dynamics and security issues of the multi-billion dollar NFT ecosystem. In this paper, we first present a systematic overview of how the NFT ecosystem works, and we identify three major actors: marketplaces, external entities, and users. We study the design of the underlying protocols of the top 8 marketplaces (ranked by transaction volume) and discover security, privacy, and usability issues. Many of these issues can lead to substantial financial losses. During our analysis, we reported 5 security bugs in 3 top marketplaces; all of them have been confirmed by the affected parties. Moreover, we provide insights on how entities external to the blockchain are able to interfere with NFT markets, leading to serious consequences. We also collect a large amount of asset and event data pertaining to the NFTs being traded in the examined marketplaces, and we quantify malicious trading behaviors carried out by users under the cloak of anonymity. Finally, we study the 15 most expensive NFT sales to date and discover discrepancies in at least half of these transactions.
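
To make the kind of trading analysis concrete, consider wash trading, in which colluding wallets cycle an asset among themselves to inflate its price and volume. The following minimal sketch flags such cycles; the event schema and addresses are illustrative assumptions, not the paper's actual pipeline.

```python
from collections import defaultdict

def flag_wash_trades(sales):
    """Flag sales whose token later returns to a previous owner.

    `sales` uses an assumed schema: a time-ordered list of
    (token_id, seller, buyer, price_eth) tuples.
    """
    owners_seen = defaultdict(set)  # token_id -> every address that held it
    suspicious = []
    for token_id, seller, buyer, price in sales:
        # A buyer who already held this token closes a trading cycle,
        # a common heuristic for wash trading between colluding wallets.
        if buyer in owners_seen[token_id]:
            suspicious.append((token_id, seller, buyer, price))
        owners_seen[token_id].update((seller, buyer))
    return suspicious

sales = [
    ("42", "0xAAA", "0xBBB", 1.0),
    ("42", "0xBBB", "0xCCC", 5.0),
    ("42", "0xCCC", "0xAAA", 25.0),  # token returns to the original owner
]
print(flag_wash_trades(sales))  # [('42', '0xCCC', '0xAAA', 25.0)]
```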

Related Content

There is a growing need for authentication methodology in virtual reality applications. Current systems assume that the immersive experience technology is a collection of peripheral devices connected to a personal computer or mobile device; hence, they rely entirely on the computing device and its traditional authentication mechanisms to handle authentication and authorization decisions. Using the virtual reality controllers and headset poses a different set of challenges: input is subject to unauthorized observation that goes unannounced to the user, because the headset completely covers the field of vision in order to provide an immersive experience. As the demand for virtual reality experiences in the commercial world increases, alternative mechanisms for secure authentication are needed. In this paper, we analyze several proposed authentication systems and conclude that a multidimensional approach to authentication is needed to address the granular authentication and authorization requirements of commercial virtual reality applications.
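
As one way to picture a multidimensional approach, the sketch below fuses a knowledge factor (a PIN) with a behavioral factor (similarity of a controller trajectory to an enrolled template) into a single score. The factor functions, weights, and threshold are all assumptions for illustration, not a proposal from the surveyed systems.

```python
import hashlib
import math

def pin_factor(entered_pin: str, stored_hash: str) -> float:
    """Knowledge factor: 1.0 on an exact PIN match, else 0.0."""
    digest = hashlib.sha256(entered_pin.encode()).hexdigest()
    return 1.0 if digest == stored_hash else 0.0

def trajectory_factor(sample, template) -> float:
    """Behavioral factor: average distance between a controller trajectory
    and an enrolled template (lists of (x, y, z) points), mapped to [0, 1]."""
    dist = sum(math.dist(a, b) for a, b in zip(sample, template)) / len(template)
    return max(0.0, 1.0 - dist)

def authenticate(entered_pin, stored_hash, sample, template,
                 weights=(0.6, 0.4), threshold=0.75) -> bool:
    """Fuse the factors with assumed weights; grant access above a threshold."""
    score = (weights[0] * pin_factor(entered_pin, stored_hash)
             + weights[1] * trajectory_factor(sample, template))
    return score >= threshold

stored = hashlib.sha256(b"4321").hexdigest()
template = [(0.0, 0.0, 0.0), (0.1, 0.2, 0.0), (0.2, 0.4, 0.1)]
print(authenticate("4321", stored, template, template))  # True: both factors match
```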

As quantum programming evolves, more and more quantum programming languages are being developed. As a result, debugging and testing quantum programs have become increasingly important. While bug fixing in classical programs has come a long way, there is a lack of research on quantum programs. To this end, this paper presents a comprehensive study on bug fixing in quantum programs. We collect and investigate 96 real-world bugs and their fixes from four popular quantum programming languages (Qiskit, Cirq, Q#, and ProjectQ). Our study shows that a high proportion (over 80%) of bugs in quantum programs are quantum-specific, which calls for further research in the bug-fixing domain. We also summarize and extend the bug patterns in quantum programs and subdivide the most critical category, math-related bugs, to make the patterns more applicable to the study of quantum programs. Our findings summarize the characteristics of bugs in quantum programs and provide a basis for studying the testing and debugging of quantum programs.
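
A constructed example of the math-related pattern (not drawn from the studied bug set): passing an angle in degrees to a rotation gate that expects radians, shown here in Qiskit.

```python
import math
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Buggy: rx() expects radians, but 90 is passed as if it were degrees,
# so the qubit is rotated far past the intended quarter turn.
buggy = QuantumCircuit(1)
buggy.rx(90, 0)

# Fixed: convert to radians so the gate performs the intended rotation.
fixed = QuantumCircuit(1)
fixed.rx(math.pi / 2, 0)

# Comparing the resulting states makes the silent math bug visible.
print(Statevector.from_instruction(buggy))
print(Statevector.from_instruction(fixed))
```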

Federated Learning (FL) provides privacy preservation by allowing model training at edge devices without the need to send data from the edge to a centralized server; it thus distributes the implementation of ML. Another variant of FL, well suited for the Internet of Things (IoT), is known as Collaborated Federated Learning (CFL); it does not require an edge device to have a direct link to the model aggregator. Instead, devices can connect to the central model aggregator via other devices, using them as relays. Although FL and CFL protect the privacy of edge devices, they raise security challenges for the centralized server that performs model aggregation. The centralized server is prone to malfunction, backdoor attacks, model corruption, adversarial attacks, and external attacks. Moreover, while edge-device-to-server data exchange is not required in FL and CFL, model parameters are still sent from the model aggregator (global model) to the edge devices (local models), which remains prone to cyber-attacks. These security and privacy concerns can potentially be addressed by blockchain technology. A blockchain is a decentralized, consensus-based chain in which devices can share consensus ledgers with increased reliability and security, significantly reducing cyberattacks on the exchange of information. In this work, we investigate the efficacy of a blockchain-based decentralized exchange of model parameters and relevant information among edge devices and from a centralized server to edge devices. Moreover, we conduct a feasibility analysis of blockchain-based CFL models for different application scenarios, such as the Internet of Vehicles and the Internet of Things. The proposed study aims to improve security, reliability, and privacy preservation through the use of blockchain-powered CFL.
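
The sketch below illustrates, under simplifying assumptions, what a blockchain-backed exchange of model parameters could look like: plain federated averaging whose per-round global model is appended to a hash-chained ledger. A real deployment would replace the list-based chain with an actual consensus protocol.

```python
import hashlib
import json

def fed_avg(client_params):
    """Average equally weighted client parameter vectors (plain FedAvg)."""
    n = len(client_params)
    return [sum(vals) / n for vals in zip(*client_params)]

def append_block(chain, round_no, params):
    """Append the round's global model to a hash-chained ledger.

    The chain only illustrates tamper-evident parameter exchange;
    any alteration of an earlier block breaks every later hash.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"round": round_no, "params": params,
                          "prev": prev_hash}, sort_keys=True)
    chain.append({"round": round_no, "params": params, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

chain = []
updates = [[0.1, 0.2], [0.3, 0.4], [0.2, 0.6]]  # toy local models
append_block(chain, 1, fed_avg(updates))
print(chain[-1]["hash"][:16], chain[-1]["params"])  # hash prefix, ≈ [0.2, 0.4]
```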

Advancements in Web technology provide an opportunity for the Internet of Things (IoT) to take steps towards the Web of Things (WoT). With the increasing trend of reusing Web techniques to create a monolithic environment to control, monitor, and compose smart objects, a mature WoT architecture has finally emerged in four layers as a solution for IoT middleware. Although the WoT architecture facilitates addressing the requirements of the IoT in architectural and service aspects, the effectiveness of this solution in meeting IoT-middleware objectives remains indeterminate. Most surveys and related reviews in this field investigate IoT middleware and the WoT separately, and thereby report new technologies or protocols for various middlewares or WoT models. In this paper, a comprehensive survey of the common ground between the IoT and WoT disciplines is presented, using a Systematic Literature Review (SLR) as the research methodology. The survey classifies the various types of IoT middleware and WoT architectures to specify their requirements and characteristics, respectively. WoT requirements are then categorized by comparing and analyzing IoT-middleware requirements and WoT characteristics. This research extensively reviews existing academic and industrial contributions to select potential platforms (or frameworks) and assess them against the proposed WoT requirements. As a result of this survey, the strengths and weaknesses of the WoT architecture as an IoT middleware are presented. Finally, this research attempts to open new horizons for the WoT architecture, enabling researchers to explore the role of WoT technologies in the IoT.
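
As a concrete taste of reusing Web techniques for smart objects, here is a minimal W3C WoT Thing Description for a hypothetical sensor; the host name and affordance names are illustrative assumptions.

```python
import json

# A minimal W3C WoT Thing Description exposing one readable property and
# one invokable action over plain HTTP(S) forms.
thing_description = {
    "@context": "https://www.w3.org/2019/wot/td/v1",
    "title": "GreenhouseSensor",
    "securityDefinitions": {"basic_sc": {"scheme": "basic", "in": "header"}},
    "security": ["basic_sc"],
    "properties": {
        "temperature": {
            "type": "number",
            "unit": "celsius",
            "forms": [{"href": "https://sensor.example.com/temp"}],
        }
    },
    "actions": {
        "calibrate": {
            "forms": [{"href": "https://sensor.example.com/calibrate"}]
        }
    },
}
print(json.dumps(thing_description, indent=2))
```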

Predicting forced displacement is an important undertaking of many humanitarian aid agencies, which must anticipate flows in advance in order to provide vulnerable refugees and Internally Displaced Persons (IDPs) with shelter, food, and medical care. While there is a growing interest in using machine learning to better anticipate future arrivals, there is little standardized knowledge on how to predict refugee and IDP flows in practice. Researchers and humanitarian officers are confronted with the need to make decisions about how to structure their datasets and how to fit their problem to predictive analytics approaches, and they must choose from a variety of modeling options. Most of the time, these decisions are made without an understanding of the full range of options that could be considered, and using methodologies that have primarily been applied in different contexts - and with different goals - as opportunistic references. In this work, we attempt to facilitate a more comprehensive understanding of this emerging field of research by providing a systematic model-agnostic framework, adapted to the use of big data sources, for structuring the prediction problem. As we do so, we highlight existing work on predicting refugee and IDP flows. We also draw on our own experience building models to predict forced displacement in Somalia, in order to illustrate the choices facing modelers and point to open research questions that may be used to guide future work.
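
One structuring decision such a framework must address is how to turn a raw arrivals series into supervised examples. A minimal sketch follows, with the lag window and forecasting horizon as assumed modeling choices:

```python
def make_supervised(series, n_lags=3, horizon=1):
    """Turn a monthly arrivals series into (features, target) pairs.

    Each example uses the previous `n_lags` observations to predict
    arrivals `horizon` months ahead; both values are modeling choices.
    """
    examples = []
    for t in range(n_lags, len(series) - horizon + 1):
        features = series[t - n_lags:t]
        target = series[t + horizon - 1]
        examples.append((features, target))
    return examples

arrivals = [120, 150, 90, 200, 310, 280, 170]  # toy monthly counts
for x, y in make_supervised(arrivals):
    print(x, "->", y)  # e.g. [120, 150, 90] -> 200
```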

Internet of Things devices are widely adopted by the general population, and people today are more connected than ever before. The widespread use and low-cost-driven construction of these devices in a competitive marketplace make Internet-connected devices an easy and attractive target for malicious actors. This paper demonstrates non-invasive physical attacks against IoT devices in two case studies presented in a tutorial-style format. The study focuses on demonstrating: i) the exploitation of debug interfaces, often left open after manufacture; and ii) the exploitation of exposed memory buses. We illustrate how a person could carry out such attacks with entry-level knowledge, inexpensive equipment, and limited time (8 to 25 minutes).
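
To illustrate how little is needed to probe an exposed debug interface, the sketch below cycles through common baud rates on a UART header and reports any readable console banner. It assumes the pyserial package and an adapter at /dev/ttyUSB0; it is an illustration, not the paper's exact procedure.

```python
import serial  # pyserial

COMMON_BAUDS = [9600, 38400, 57600, 115200]

def probe_uart(port="/dev/ttyUSB0"):
    """Try common baud rates on an exposed UART header and return the
    first rate that yields printable console output, plus the banner."""
    for baud in COMMON_BAUDS:
        with serial.Serial(port, baud, timeout=2) as conn:
            conn.write(b"\r\n")      # nudge the console into responding
            banner = conn.read(256)  # grab whatever the device prints
            if banner and any(32 <= b < 127 for b in banner):
                return baud, banner.decode(errors="replace")
    return None

print(probe_uart())
```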

Detecting and understanding reasons for defects and inadvertent behavior in software is challenging due to their increasing complexity. In configurable software systems, the combinatorics that arises from the multitude of features a user might select from adds a further layer of complexity. We introduce the notion of feature causality, which is based on counterfactual reasoning and inspired by the seminal definition of actual causality by Halpern and Pearl. Feature causality operates at the level of system configurations and is capable of identifying features and their interactions that are the reason for emerging functional and non-functional properties. We present various methods to explicate these reasons, in particular well-established notions of responsibility and blame that we extend to the feature-oriented setting. Establishing a close connection of feature causality to prime implicants, we provide algorithms to effectively compute feature causes and causal explications. By means of an evaluation on a wide range of configurable software systems, including community benchmarks and real-world systems, we demonstrate the feasibility of our approach: we illustrate how our notion of causality facilitates identifying root causes, estimating the effects of features, and detecting feature interactions.
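
A drastically simplified counterfactual check in the spirit of Halpern and Pearl conveys the core idea: a minimal set of features whose flipping changes an observed property is reported as a cause. This brute-force sketch is illustrative only, not the paper's prime-implicant-based algorithm.

```python
from itertools import combinations

def feature_causes(config, property_holds, max_size=2):
    """Find minimal feature sets whose flipping changes the property.

    `config` maps feature names to booleans; `property_holds` evaluates
    a configuration to True or False.
    """
    baseline = property_holds(config)
    causes = []
    for size in range(1, max_size + 1):
        for subset in combinations(config, size):
            # Skip supersets of already-found causes to keep minimality.
            if any(set(c) <= set(subset) for c in causes):
                continue
            flipped = dict(config)
            for f in subset:
                flipped[f] = not flipped[f]
            if property_holds(flipped) != baseline:
                causes.append(subset)
    return causes

# Toy example: the system is "slow" iff encryption and logging are both on.
slow = lambda c: c["encryption"] and c["logging"]
print(feature_causes({"encryption": True, "logging": True, "gui": False}, slow))
# [('encryption',), ('logging',)]
```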

Software technology has a high impact on the global economy, as it does on many sectors of contemporary society. As a product that enables the most varied daily activities, software has to be produced with high quality. Software quality depends on a development effort that rests on a large set of software development processes. However, the implementation and continuous improvement of software processes aimed at the software product must be carefully institutionalized by software development organizations such as software factories, testing factories, and V&V organizations, among others. The institutionalization of programs such as a Software Process Improvement (SPI) Program requires strategic planning, which this article addresses from the perspective of specific models and frameworks, as well as through reflections based on software process engineering models and standards. In addition, a set of strategic drivers is proposed to assist in the implementation of a Strategic Plan for an SPI Program, which organizations can consider before starting this kind of program.

The world population is anticipated to increase by close to 2 billion by 2050, causing a rapid escalation of food demand. A recent projection shows that, in spite of some advancements, the world is lagging behind in accomplishing the "Zero Hunger" goal. Socio-economic and well-being fallout will affect food security, and vulnerable groups of people will suffer malnutrition. To cater to the needs of the increasing population, the agricultural industry needs to be modernized, become smart, and be automated. Traditional agriculture can be remade into efficient, sustainable, eco-friendly smart agriculture by adopting existing technologies. In this survey paper we present the applications, technological trends, available datasets, networking options, and challenges in smart agriculture. We discuss how Agro Cyber Physical Systems are built upon the Internet-of-Agro-Things through various application fields, and we also discuss Agriculture 4.0 as a whole. We focus on the technologies, such as Artificial Intelligence (AI) and Machine Learning (ML), which support automation, along with Distributed Ledger Technology (DLT), which provides data integrity and security. After an in-depth study of different architectures, we also present a smart agriculture framework that relies on the location of data processing. We divide the open research problems of smart agriculture, as future research work, into two groups - a technological perspective and a networking perspective. AI, ML, blockchain as a DLT, and Physical Unclonable Function (PUF) based hardware security fall under the technology group, whereas network-related attacks, fake data injection, and similar threats fall under the networking group.
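
As a rough illustration of PUF-based hardware security, the sketch below mimics the enrollment and challenge-response protocol; a real PUF derives responses from physical silicon variation, which is emulated here with a stored secret.

```python
import hashlib
import secrets

def simulated_puf(device_secret: bytes, challenge: bytes) -> bytes:
    """Stand-in for a hardware PUF: a real PUF computes the response from
    physical variation, not a stored secret; this only mimics the protocol."""
    return hashlib.sha256(device_secret + challenge).digest()

def enroll(device_secret: bytes, n_pairs: int = 4) -> dict:
    """Server-side enrollment: record challenge-response pairs (CRPs)."""
    table = {}
    for _ in range(n_pairs):
        challenge = secrets.token_bytes(8)
        table[challenge] = simulated_puf(device_secret, challenge)
    return table

def authenticate(crp_table: dict, respond) -> bool:
    """Issue one unused challenge; accept if the response matches."""
    challenge, expected = crp_table.popitem()  # use each CRP only once
    return respond(challenge) == expected

secret = secrets.token_bytes(16)  # stands in for the device's silicon variation
table = enroll(secret)
print(authenticate(table, lambda c: simulated_puf(secret, c)))  # True
```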

Steve Jobs, one of the greatest visionaries of our time, was quoted in 1996 as saying "a lot of times, people do not know what they want until you show it to them" [38], indicating that he advocated developing products based on human intuition rather than research. With the advancement of mobile devices, social networks, and the Internet of Things, enormous amounts of complex data, both structured and unstructured, are being captured in the hope of allowing organizations to make better business decisions, as data is now vital for an organization's success. These enormous amounts of data are referred to as Big Data, which, when processed and analyzed appropriately, enables a competitive advantage over rivals. However, Big Data Analytics raises several concerns, including the management of the data lifecycle, privacy and security, and data representation. This paper reviews the fundamental concepts of Big Data, the data storage domain, and the MapReduce programming paradigm used in processing these large datasets, and focuses on two case studies showing the effectiveness of Big Data Analytics; it also discusses how Big Data could be of greater benefit in the future if handled appropriately.
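
The word-count example is the canonical illustration of the MapReduce paradigm; the following single-process sketch mimics the map, shuffle, and reduce phases that a framework such as Hadoop would distribute across a cluster.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit (word, 1) for every word in one input split."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(mapped):
    """Shuffle: group intermediate pairs by key across all mappers."""
    groups = defaultdict(list)
    for pairs in mapped:
        for word, count in pairs:
            groups[word].append(count)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

splits = ["big data enables insight", "big data needs processing"]
print(reduce_phase(shuffle([map_phase(s) for s in splits])))
# {'big': 2, 'data': 2, 'enables': 1, 'insight': 1, 'needs': 1, 'processing': 1}
```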
