Mobile banking applications have gained popularity and have significantly revolutionised the banking industry. Despite the convenience offered by M-Banking Apps, users are often distrustful of the security of the applications due to an increasing trend of cyber security compromises, cyber-attacks, and data breaches. Considering the upsurge in cyber security vulnerabilities of M-Banking Apps and the paucity of research in this domain, this study was conducted to empirically measure user-perceived security of M-Banking Apps. A total of 315 responses from study participants were analysed using covariance-based structural equation modelling (CB-SEM). The results indicated that most constructs of the baseline Extended Unified Theory of Acceptance and Use of Technology (UTAUT2) structure were validated. Perceived security, institutional trust and technology trust were confirmed as factors that affect users' intention to adopt and use M-Banking Apps; perceived risk, however, was not confirmed as a significant predictor. The study further revealed that in the context of M-Banking Apps, the effects of security and trust are complex. The impact of perceived security and institutional trust on behavioural intention was moderated by age, gender, experience, income, and education, while the impact of perceived security on use behaviour was moderated by age, gender, and experience. The effect of technology trust on behavioural intention was moderated by age, education, and experience. Overall, the proposed conceptual model achieved acceptable fit and explained 79% of the variance in behavioural intention and 54.7% of the variance in use behaviour of M-Banking Apps, higher than that obtained in the original UTAUT2. Guarantees of enhanced security, advanced privacy mechanisms and trust should be considered paramount in future strategies aimed at promoting M-Banking App adoption and use.
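
For readers who want to see what CB-SEM estimation looks like in practice, below is a minimal sketch using the Python package semopy. The construct names, item names, file name, and the two-path structural model are illustrative assumptions for this sketch, not the study's actual instrument or model.

```python
# Minimal CB-SEM sketch with semopy (pip install semopy). All variable and
# file names are hypothetical placeholders, not the study's real data.
import pandas as pd
from semopy import Model

# Hypothetical survey data: one column per Likert-scale item (ps1, it1, ...).
data = pd.read_csv("survey_responses.csv")

# Measurement model (=~): latent constructs measured by observed items.
# Structural model (~): hypothesised paths between latent constructs.
desc = """
PerceivedSecurity =~ ps1 + ps2 + ps3
InstitutionalTrust =~ it1 + it2 + it3
BehaviouralIntention =~ bi1 + bi2 + bi3
BehaviouralIntention ~ PerceivedSecurity + InstitutionalTrust
"""

model = Model(desc)
model.fit(data)          # covariance-based maximum-likelihood estimation
print(model.inspect())   # path coefficients, standard errors, p-values
```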

Related content

Structural Equation Modeling (SEM) is a method for building, estimating, and testing models of causal relationships. A model may contain both observable manifest variables and latent variables that cannot be observed directly. SEM can stand in for methods such as multiple regression, path analysis, factor analysis, and analysis of covariance, giving a clear analysis of how individual indicators act on the whole and of the interrelations among the individual indicators.
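
In the standard LISREL-style notation (a generic textbook formulation, not specific to any study above), an SEM pairs a measurement model for the observed indicators with a structural model for the latent variables:

```latex
% Measurement model: observed indicators x, y load on latent variables \xi, \eta.
x = \Lambda_x \xi + \delta, \qquad y = \Lambda_y \eta + \varepsilon
% Structural model: relations among the latent variables themselves.
\eta = B\eta + \Gamma\xi + \zeta
```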

The Internet of Things (IoT) is one of the emerging technologies that has grabbed the attention of researchers from academia and industry. The idea behind the Internet of Things is the interconnection of internet-enabled things or devices with each other and with humans to achieve some common goals. In the near future, IoT is expected to be seamlessly integrated into our environment, with humans becoming wholly dependent on this technology for comfort and an easy lifestyle. Any security compromise of the system will directly affect human life, so the security and privacy of this technology are issues of foremost importance. In this paper we present a thorough study of security problems in IoT and classify possible cyber-attacks on each layer of the IoT architecture. We also discuss the challenges that IoT poses to traditional security solutions such as cryptographic schemes, authentication mechanisms, and key management. Device authentication and access control is an essential area of IoT security that has not been surveyed so far; we have therefore made an effort to bring together the state-of-the-art device authentication and access control techniques in a single paper.

Smart contracts have recently been adopted by many security protocols. However, existing studies lack satisfactory theoretical support for how contracts benefit security protocols. This paper aims to give a systematic analysis of smart contract (SC)-based security protocols to fill this gap of unclear arguments and statements. We first investigate state-of-the-art studies and establish a formalized model of smart contract protocols with well-defined syntax and assumptions. Then, we apply our formal framework to two concrete instances to explore the corresponding advantages and desirable properties. Through our analysis, we abstract three generic properties (non-repudiation, non-equivocation, and non-frameability) and accordingly identify two patterns: (1) a smart contract can act as an autonomous subscriber to assist the trusted third party (TTP); (2) a smart contract can replace a traditional TTP. To the best of our knowledge, this is the first study to provide in-depth discussions of SC-based security protocols from a strictly theoretical perspective.
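
As a loose illustration of pattern (2) and the non-equivocation property, the toy Python model below (not a real contract language; the class and method names are invented for this sketch) shows how an append-only, deterministic ledger can reject conflicting statements without an arbitrating TTP.

```python
# Toy model of non-equivocation: once a party has bound a statement to a
# slot, the "contract" deterministically rejects any conflicting statement,
# so no trusted third party is needed to arbitrate. Names are illustrative.
import hashlib

class EquivocationGuard:
    def __init__(self):
        self._ledger = {}  # (party, key) -> statement hash, append-only

    def publish(self, party: str, key: str, statement: str) -> bool:
        digest = hashlib.sha256(statement.encode()).hexdigest()
        slot = (party, key)
        if slot in self._ledger:
            # A second, different statement for the same slot is equivocation:
            # publicly detectable and rejected.
            return self._ledger[slot] == digest
        self._ledger[slot] = digest  # immutable once written
        return True

guard = EquivocationGuard()
assert guard.publish("alice", "round-1", "commit to v=42")
assert not guard.publish("alice", "round-1", "commit to v=7")  # caught
```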

Forensic firearms identification, the determination by a trained firearms examiner as to whether or not bullets or cartridges came from a common weapon, has long been a mainstay in the criminal courts. The reliability of forensic firearms identification has been challenged in the general scientific community, and, in response, several studies have been carried out aimed at showing that firearms examination is accurate, that is, has low error rates. Less studied has been the question of consistency: whether two examinations of the same bullets or cartridge cases come to the same conclusion, whether carried out by one examiner on separate occasions (intrarater reliability, or repeatability) or by two examiners (interrater reliability, or reproducibility). One important study, described in a 2020 Report by the Ames Laboratory-USDOE to the Federal Bureau of Investigation, went beyond considerations of accuracy to investigate firearms examination repeatability and reproducibility. The Report's conclusions were paradoxical. The observed agreement of examiners with themselves or with other examiners appears mediocre. However, the study concluded that repeatability and reproducibility are satisfactory, on the grounds that the observed agreement exceeds a quantity called the expected agreement. We find that appropriately employing expected agreement as it was intended does not suggest satisfactory repeatability and reproducibility, but the opposite.
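
The contrast between observed and expected agreement is the core of chance-corrected agreement statistics such as Cohen's kappa; the toy computation below (with made-up counts, not the Ames study's data) shows how observed agreement can exceed expected agreement while the chance-corrected agreement remains mediocre.

```python
# Worked toy example of observed vs. expected agreement in the style of
# Cohen's kappa. Counts are hypothetical. Categories: identification,
# inconclusive, elimination; rows = first rating, columns = second rating.
import numpy as np

counts = np.array([[40,  8,  2],
                   [10, 20, 10],
                   [ 2,  6, 12]], dtype=float)
n = counts.sum()

p_observed = np.trace(counts) / n              # fraction of exact agreement
row_marg = counts.sum(axis=1) / n
col_marg = counts.sum(axis=0) / n
p_expected = float(row_marg @ col_marg)        # agreement expected by chance

kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"observed={p_observed:.3f} expected={p_expected:.3f} kappa={kappa:.3f}")
# Here observed (~0.65) exceeds expected (~0.37), yet kappa is only ~0.45:
# exceeding expected agreement does not by itself imply satisfactory agreement.
```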

Data collection and research methodology represent a critical part of the research pipeline. On the one hand, it is important that we collect data in a way that maximises the validity of what we are measuring, which may involve the use of long scales with many items. On the other hand, collecting a large number of items across multiple scales results in participant fatigue and in expensive, time-consuming data collection. It is therefore important that we use the available resources optimally. In this work, we consider how attention to theory and the associated causal/structural model can help us streamline data collection procedures by not wasting time collecting data for variables which are not causally critical for the subsequent analysis. This not only saves time and enables us to redirect resources to other variables which are more important, but also increases research transparency and the reliability of theory testing. To achieve this streamlined data collection, we leverage structural models and the Markov conditional independence structures implicit in these models to identify the substructures which are critical for answering a particular research question. We review the relevant concepts and present a number of didactic examples in the hope that psychologists can use these techniques to streamline their data collection process without invalidating the subsequent analysis, and we provide simulation results to demonstrate the limited analytical impact of this streamlining.
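
As a sketch of the core mechanic, the snippet below tests d-separation in a hypothesised causal DAG via the standard moralization criterion: if a variable is conditionally independent of the outcome given variables already being collected, it is not critical for that research question. The DAG and variable names are invented for illustration.

```python
# Sketch: using d-separation in a hypothesised causal DAG to decide which
# variables need not be collected. DAG and names are illustrative only.
import networkx as nx

def d_separated(G: nx.DiGraph, x: str, y: str, given: set) -> bool:
    """Test X _||_ Y | Z via the moralization criterion."""
    # 1. Restrict to the ancestral graph of {x, y} and the conditioning set.
    keep = {x, y} | given
    for node in list(keep):
        keep |= nx.ancestors(G, node)
    sub = G.subgraph(keep)
    # 2. Moralize: marry parents of each common child, drop edge directions.
    moral = nx.Graph(sub.to_undirected())
    for child in sub.nodes:
        parents = list(sub.predecessors(child))
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                moral.add_edge(parents[i], parents[j])
    # 3. Remove the conditioning set; d-separated iff x and y are disconnected.
    moral.remove_nodes_from(given)
    return not nx.has_path(moral, x, y)

# Hypothesised model: SES -> Stress -> Wellbeing, SES -> Sleep -> Wellbeing.
G = nx.DiGraph([("SES", "Stress"), ("Stress", "Wellbeing"),
                ("SES", "Sleep"), ("Sleep", "Wellbeing")])
# Given Stress and Sleep, SES carries no extra information about Wellbeing,
# so an SES scale need not be collected for this particular question.
print(d_separated(G, "SES", "Wellbeing", {"Stress", "Sleep"}))  # True
```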

Recruitment in large organisations often involves interviewing a large number of candidates. The process is resource-intensive and complex, so it is important to carry it out efficiently and effectively. Planning the selection process consists of several problems, each of which maps to one or another well-known computing problem. Research that looks at each of these problems in isolation is rich and mature; research that takes an integrated view of the problem is not common. In this paper, we take up two of the most important aspects of the application-processing problem, namely review/interview panel creation and interview scheduling. We have explored various algorithmic options and customised them to solve the panel creation and interview scheduling problems, implemented our approach as a prototype system, and used it to automatically plan the interview process for a real-life data set. Our system provides a distinctly better plan than the existing, predominantly manual practice. We have evaluated the design options experimentally on a real data set and present our observations. Our prototype, experimental process, and results may be a very good starting point for a full-fledged development project to automate application processing.
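
As one illustration of the algorithmic options such work draws on, the sketch below casts a simplified candidate-to-slot scheduling subproblem as a linear assignment problem and solves it with SciPy's Hungarian-algorithm implementation; the candidates, slots, and costs are hypothetical placeholders, not the paper's data or method.

```python
# Sketch of one scheduling option: assign each candidate to an interview
# slot by minimising a cost matrix (Hungarian algorithm). Data is made up.
import numpy as np
from scipy.optimize import linear_sum_assignment

candidates = ["cand_A", "cand_B", "cand_C"]
slots = ["Mon 09:00", "Mon 10:00", "Tue 09:00"]

# cost[i, j]: penalty for putting candidate i in slot j, e.g. derived from
# panel-member availability clashes or candidate preferences.
cost = np.array([[1.0, 4.0, 2.0],
                 [3.0, 1.0, 5.0],
                 [2.0, 3.0, 1.0]])

rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
for i, j in zip(rows, cols):
    print(f"{candidates[i]} -> {slots[j]} (cost {cost[i, j]})")
# The total cost is minimal over all one-to-one candidate/slot assignments.
```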

Human-AI co-creativity involves humans and AI collaborating as partners on a shared creative product. In a creative collaboration, interaction dynamics, such as turn-taking, contribution type, and communication, are the driving forces of the co-creative process. The interaction model is therefore a critical and essential component of effective co-creative systems. There is relatively little research on interaction design in the co-creativity field, which is reflected in a lack of focus on interaction design in many existing co-creative systems; the primary focus of co-creativity research has been on the abilities of the AI. This paper focuses on the importance of interaction design in co-creative systems through the development of the Co-Creative Framework for Interaction design (COFI), which describes the broad scope of possibilities for interaction design in co-creative systems. Researchers can use COFI to model interaction in co-creative systems by exploring alternatives within this design space of interaction. COFI can also be beneficial when investigating and interpreting the interaction design of existing co-creative systems. We coded a dataset of 92 existing co-creative systems using COFI and analyzed the data to show how COFI provides a basis for categorizing the interaction models of existing co-creative systems. We identify opportunities to shift the focus of interaction models in co-creativity to enable more communication between the user and AI, leading to human-AI partnerships.

Although text style transfer has witnessed rapid development in recent years, there is as yet no established standard for evaluation: it is performed using several automatic metrics, and resorting to human judgement is not always possible. We focus on the task of formality transfer and on the three aspects that are usually evaluated: style strength, content preservation, and fluency. To cast light on how these aspects are assessed by common and new metrics, we run a human-based evaluation and perform a rich correlation analysis. We are then able to offer some recommendations on the use of such metrics in formality transfer, also with an eye to their generalisability (or not) to related tasks.
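
A minimal sketch of this kind of correlation analysis, with made-up scores standing in for real metric outputs and human ratings:

```python
# Compare an automatic metric's scores against human judgements for the
# same system outputs. All scores below are hypothetical placeholders.
from scipy.stats import pearsonr, spearmanr

human_scores  = [4.5, 3.0, 2.0, 5.0, 3.5, 1.5]       # e.g. mean human ratings
metric_scores = [0.82, 0.55, 0.40, 0.90, 0.60, 0.35]  # e.g. a classifier score

r, r_p = pearsonr(human_scores, metric_scores)
rho, rho_p = spearmanr(human_scores, metric_scores)
print(f"Pearson r={r:.3f} (p={r_p:.3g}), Spearman rho={rho:.3f} (p={rho_p:.3g})")
# A metric is only a safe proxy for human judgement where this correlation
# holds up consistently across systems and datasets.
```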

Recommender systems, a pivotal tool for alleviating the information overload problem, aim to predict users' preferred items from millions of candidates by analyzing observed user-item relations. To tackle the sparsity and cold-start problems encountered by recommender systems, uncovering hidden (indirect) user-item relations by employing side information and knowledge to enrich the observed information has recently proven promising; the performance of this approach is largely determined by the scalability of recommendation models in the face of high-complexity, large-scale side information and knowledge. Research into graph embedding techniques, which have made great strides towards efficiently utilizing complex and large-scale data, is therefore a major topic. Equipping recommender systems with graph embedding techniques can outperform conventional recommendation methods that work directly on graph topology analysis, and this combination has been widely studied in recent years. This article systematically reviews graph embedding-based recommendation, covering embedding techniques for bipartite graphs, general graphs, and knowledge graphs, and proposes a general design pipeline for such systems. In addition, a simulation-based comparison of several representative graph embedding-based recommendation models with the most commonly used conventional recommendation models shows that the conventional models overall outperform the graph embedding-based ones in predicting implicit user-item interactions, revealing a relative weakness of graph embedding-based recommendation in these tasks. To foster future research, this article offers constructive suggestions on trading off graph embedding-based and conventional recommendation in different tasks, along with some open questions.
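
To make the embedding idea concrete, the sketch below embeds a bipartite user-item graph in its simplest form, matrix factorization trained by SGD, so that inner products of user and item vectors reconstruct observed interactions; the data and hyperparameters are toy placeholders.

```python
# Minimal bipartite-graph embedding: learn one vector per user and per item
# so inner products reconstruct observed ratings. Toy data, toy settings.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 5, 3
# Observed (user, item, rating) interactions.
triples = [(0, 1, 5.0), (0, 3, 3.0), (1, 0, 4.0), (2, 2, 2.0), (3, 4, 5.0)]

U = rng.normal(scale=0.1, size=(n_users, dim))   # user embeddings
V = rng.normal(scale=0.1, size=(n_items, dim))   # item embeddings

lr, reg = 0.05, 0.01
for epoch in range(200):
    for u, i, r in triples:
        err = r - U[u] @ V[i]                     # prediction error
        U[u] += lr * (err * V[i] - reg * U[u])    # SGD step, L2-regularised
        V[i] += lr * (err * U[u] - reg * V[i])

# Score unseen items for user 0 by inner product in the embedding space.
print(U[0] @ V.T)
```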

This paper proposes a recommender system that alleviates the cold-start problem by estimating user preferences from only a small number of items. To identify a user's preference in the cold state, existing recommender systems, such as Netflix, initially provide items to a user; we call those items evidence candidates. Recommendations are then made based on the items selected by the user. Previous recommendation studies have two limitations: (1) users who have consumed only a few items receive poor recommendations, and (2) inadequate evidence candidates are used to identify user preferences. We propose a meta-learning-based recommender system called MeLU to overcome these two limitations. Drawing on meta-learning, which can rapidly adapt to a new task with a few examples, MeLU can estimate a new user's preferences from a few consumed items. In addition, we provide an evidence candidate selection strategy that determines distinguishing items for customized preference estimation. We validate MeLU on two benchmark datasets, where the proposed model reduces mean absolute error by at least 5.92% compared with two competing models. We also conduct a user study to verify the evidence selection strategy.
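
MeLU builds on optimization-based meta-learning in the spirit of MAML; the toy sketch below shows the inner/outer loop idea on synthetic 1-D regression "users", using a first-order approximation. It is meant to convey the mechanism only, not the paper's actual architecture or training procedure.

```python
# Toy first-order MAML loop: meta-train a global initialisation so that one
# gradient step on a handful of examples (a "new user") already fits well.
# Linear 1-D tasks stand in for users; this is not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                  # global init: slope, intercept
inner_lr, outer_lr = 0.1, 0.01

def loss_grad(params, x, y):
    err = params[0] * x + params[1] - y
    return np.array([np.mean(2 * err * x), np.mean(2 * err)])

for step in range(2000):
    # Sample a task: a "user" with their own slope/intercept and few ratings.
    a, b = rng.uniform(-2, 2), rng.uniform(-1, 1)
    x_sup, x_qry = rng.uniform(-1, 1, 5), rng.uniform(-1, 1, 5)
    y_sup, y_qry = a * x_sup + b, a * x_qry + b
    # Inner loop: adapt to the user with one gradient step on support data.
    adapted = theta - inner_lr * loss_grad(theta, x_sup, y_sup)
    # Outer loop (first-order approximation): update the initialisation so
    # the adapted parameters do well on the user's query data.
    theta -= outer_lr * loss_grad(adapted, x_qry, y_qry)

print("meta-learned initialisation:", theta)
```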

To address the sparsity and cold-start problems of collaborative filtering, researchers usually make use of side information, such as social networks or item attributes, to improve recommendation performance. This paper considers the knowledge graph as the source of side information. To address the limitations of existing embedding-based and path-based methods for knowledge-graph-aware recommendation, we propose Ripple Network, an end-to-end framework that naturally incorporates the knowledge graph into recommender systems. Similar to actual ripples propagating on the surface of water, Ripple Network stimulates the propagation of user preferences over the set of knowledge entities by automatically and iteratively extending a user's potential interests along links in the knowledge graph. The multiple "ripples" activated by a user's historically clicked items are thus superposed to form the preference distribution of the user with respect to a candidate item, which could be used for predicting the final clicking probability. Through extensive experiments on real-world datasets, we demonstrate that Ripple Network achieves substantial gains in a variety of scenarios, including movie, book and news recommendation, over several state-of-the-art baselines.
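
A toy rendering of the ripple intuition follows; it is heavily simplified (the actual model learns entity embeddings and attention weights rather than spreading fixed mass), and the tiny graph is made up.

```python
# Toy "ripple" propagation: preference mass starts at the user's clicked
# items and spreads hop by hop along knowledge-graph links with decay;
# a candidate item is scored by the mass reaching it. Graph is made up.
from collections import defaultdict

# Knowledge graph as adjacency lists over entities (relations omitted).
kg = {
    "Inception":    ["C. Nolan", "sci-fi"],
    "Interstellar": ["C. Nolan", "sci-fi", "space"],
    "The Martian":  ["space", "R. Scott"],
    "C. Nolan":     ["Inception", "Interstellar"],
    "sci-fi":       ["Inception", "Interstellar"],
    "space":        ["Interstellar", "The Martian"],
    "R. Scott":     ["The Martian"],
}

def ripple_score(clicked, candidate, hops=2, decay=0.5):
    mass = defaultdict(float)
    for item in clicked:                    # hop 0: the user's history
        mass[item] = 1.0
    score = 0.0
    for hop in range(1, hops + 1):
        nxt = defaultdict(float)
        for entity, m in mass.items():
            for nb in kg.get(entity, []):
                nxt[nb] += m * decay        # preference decays per hop
        mass = nxt
        score += mass.get(candidate, 0.0)   # mass arriving at the candidate
    return score

print(ripple_score({"Inception"}, "Interstellar"))  # reachable: 0.5
print(ripple_score({"Inception"}, "The Martian"))   # no 2-hop path: 0.0
```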
