
Introductory programming courses in higher education institutions commonly have hundreds of participating students eager to learn to program. At this scale, the manual effort required to review the submitted source code and provide feedback is no longer manageable. Manually reviewing the submitted homework can also be subjective and unfair, particularly if many tutors are responsible for grading. Autograders can help in this situation; however, little is known about how autograders affect students' overall perception of programming classes and teaching. This is relevant for course organizers and institutions that want to keep their programming courses attractive while coping with increasing student numbers. This paper studies the answers to the standardized university evaluation questionnaires of multiple large-scale foundational computer science courses that recently introduced autograding, and analyzes the differences before and after this intervention. By incorporating additional observations, we hypothesize how the autograder might have contributed to the significant changes in the data, such as improved interactions between tutors and students, improved overall course quality, improved learning success, increased time spent, and reduced difficulty. This qualitative study aims to provide hypotheses for future research that defines and conducts quantitative surveys and data analysis, which could validate autograding as a teaching method that improves student satisfaction with programming courses.

Related content

The use of IoT in society is perhaps already ubiquitous, with a vast attack surface offering multiple opportunities for malicious actors. This short paper first presents an introduction to IoT and its security issues, including an overview of IoT layer models and topologies, IoT standardisation efforts, and protocols. The focus then moves to IoT vulnerabilities and specific suggestions for mitigations. The intended audience is readers who are relatively new to IoT but have existing network-related knowledge. It is concluded that device resource constraints and a lack of IoT standards are significant issues. Research opportunities exist to develop efficient IoT intrusion detection systems (IDS) and energy-saving cryptographic techniques lightweight enough to deploy in practice. The need for standardised protocols and channel-based security solutions is clear, underpinned by legislative directives to ensure high standards that prevent cost-cutting on the device manufacturing side.

The burgeoning of AI has prompted recommendations that AI techniques should be "human-centered". However, there is no clear definition of what is meant by Human Centered Artificial Intelligence, or HCAI for short. This paper aims to improve this situation by addressing some foundational aspects of HCAI. To do so, we introduce the term HCAI agent to refer to any physical or software computational agent that is equipped with AI components and interacts and/or collaborates with humans. This article identifies five main conceptual components that participate in an HCAI agent: Observations, Requirements, Actions, Explanations and Models. We see the notion of an HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI. In this paper, we focus our analysis on scenarios consisting of a single agent operating in dynamic environments in the presence of humans.
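A minimal sketch of how the five components could be organized in code, assuming hypothetical field and method names (the paper itself does not prescribe an implementation):

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class HCAIAgent:
    """Illustrative container for the five conceptual components named in the abstract.

    Field and method names are assumptions for illustration, not definitions from the paper.
    """
    observations: List[Dict[str, Any]] = field(default_factory=list)  # what the agent perceives
    requirements: List[str] = field(default_factory=list)             # constraints/goals set by humans
    actions: List[str] = field(default_factory=list)                  # what the agent has done
    explanations: List[str] = field(default_factory=list)             # rationales offered to humans
    models: Dict[str, Callable] = field(default_factory=dict)         # learned or engineered models

    def step(self, observation: Dict[str, Any], model_name: str) -> str:
        """One interaction step: observe, decide with a model, record an explanation."""
        self.observations.append(observation)
        action = self.models[model_name](observation, self.requirements)
        self.actions.append(action)
        self.explanations.append(
            f"Chose '{action}' using model '{model_name}' under requirements {self.requirements}."
        )
        return action

# Hypothetical usage: a trivial rule-based "model" standing in for an AI component.
agent = HCAIAgent(requirements=["avoid endangering pedestrians"])
agent.models["cautious_policy"] = lambda obs, reqs: "slow_down" if obs.get("pedestrian_nearby") else "proceed"
print(agent.step({"pedestrian_nearby": True}, "cautious_policy"))  # -> slow_down
print(agent.explanations[-1])
```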

Search engines play an essential role in our daily lives. They are also crucial in the enterprise domain for accessing documents from various information sources. Since traditional search systems index documents mainly by the frequency of the words occurring in them, they support keyword search rather than natural language search. Keyword-based search is unlikely to be sufficient for enterprise data, which is growing extremely fast; thus, enterprise search becomes increasingly critical in the corporate domain. In this report, we present an overview of the state-of-the-art technologies in the literature for three main purposes: i) to increase the retrieval performance of a search engine, ii) to deploy a search platform in a cloud environment, and iii) to select the best terms for expanding queries, both to achieve even higher retrieval performance and to provide good query suggestions to users for a better user experience.
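As an illustration of purpose iii), here is a minimal sketch of pseudo-relevance-feedback-style query expansion that scores candidate terms by their raw frequency in the top-ranked documents; the scoring scheme and helper names are assumptions for illustration, not the specific techniques surveyed in the report:

```python
from collections import Counter
import re

def expand_query(query: str, top_docs: list[str], k: int = 3) -> str:
    """Naive pseudo-relevance feedback: append the k most frequent terms
    from the top-ranked documents that are not already in the query.

    Raw term frequency is a simplification; real systems use measures
    such as TF-IDF or relevance models.
    """
    query_terms = set(query.lower().split())
    counts = Counter()
    for doc in top_docs:
        for term in re.findall(r"[a-z]+", doc.lower()):
            if term not in query_terms and len(term) > 2:
                counts[term] += 1
    expansion = [term for term, _ in counts.most_common(k)]
    return query + " " + " ".join(expansion)

# Hypothetical usage on toy documents.
docs = ["quarterly revenue report for the finance department",
        "finance department revenue and budget figures"]
print(expand_query("revenue report", docs))
```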

Privacy concerns have long been expressed around smart devices, and the concerns around Android apps have been studied by many past works. Over the past 10 years, we have crawled and scraped data for almost 1.9 million apps, and also stored the APKs for 135,536 of them. In this paper, we examine how Android apps have changed over time with respect to privacy, from two perspectives: (1) how privacy behavior in apps has changed as they are updated over time, and (2) how these changes can be attributed to third-party libraries versus the apps' own code. To study this, we examine the adoption of HTTPS, whether apps scan the device for other installed apps, the use of permissions for privacy-sensitive data, and the use of unique identifiers. We find that privacy-related behavior has improved with time as apps continue to receive updates, and that the third-party libraries used by apps are responsible for more privacy issues. However, we observe that, in the current state of Android apps, the improvement in terms of privacy has not been sufficient and many issues still need to be addressed.
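A minimal sketch of one kind of measurement described above, checking a decoded AndroidManifest.xml for privacy-sensitive permission declarations; the permission subset and file layout are assumptions, not the study's actual methodology:

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Small, illustrative subset of permissions guarding privacy-sensitive data;
# the actual study may track a different or larger set.
SENSITIVE_PERMISSIONS = {
    "android.permission.READ_CONTACTS",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_PHONE_STATE",
    "android.permission.QUERY_ALL_PACKAGES",  # related to scanning for other installed apps
}

def sensitive_permissions(manifest_path: str) -> set[str]:
    """Return the privacy-sensitive permissions declared in a decoded AndroidManifest.xml."""
    root = ET.parse(manifest_path).getroot()
    declared = {elem.get(f"{ANDROID_NS}name") for elem in root.iter("uses-permission")}
    return declared & SENSITIVE_PERMISSIONS

# Hypothetical usage (requires a decoded manifest, e.g. produced by apktool):
# print(sensitive_permissions("decoded_app/AndroidManifest.xml"))
```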

This thesis proposes an advanced, generic, and high-level code rewriting and analysis system in the Julia programming language, providing applied equality saturation in the presence of multiple dispatch and metaprogramming. We show how our system can practically solve some challenging problems: Can programmers implement their own high-level compiler optimizations for their domain-specific scientific programs, without being required to be compiler experts at all? Can users implement these optimizers in the same language, and inside the same programs, that they want to optimize, solving the two-language problem? Can these compiler optimizers be written in a high-level fashion, as equations, without the need to worry about rewrite ordering? Thus, can symbolic mathematics do high-level compiler optimizations, or vice versa?
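Since this page keeps a single sketch language, the following Python fragment only illustrates the underlying idea of expressing optimizations as equational rewrite rules over an expression tree; it is an assumption-laden stand-in, not the thesis's Julia system, and it deliberately omits e-graphs and equality saturation (it applies rules in a fixed order, which is exactly what equality saturation avoids):

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Op:
    name: str
    args: tuple

Expr = Union[Op, int, str]

def rewrite(e: Expr) -> Expr:
    """Apply two hand-written 'equations' bottom-up over an expression tree."""
    if not isinstance(e, Op):
        return e
    args = tuple(rewrite(a) for a in e.args)
    # Rule: x * 1 == x
    if e.name == "*" and args[1] == 1:
        return args[0]
    # Rule: x * 2 == x + x  (strength reduction, purely illustrative)
    if e.name == "*" and args[1] == 2:
        return Op("+", (args[0], args[0]))
    return Op(e.name, args)

expr = Op("*", (Op("*", ("x", 1)), 2))
print(rewrite(expr))  # Op(name='+', args=('x', 'x'))
```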

Autonomous Vehicles (AVs), i.e., self-driving cars, operate in a safety-critical domain, since errors in the autonomous driving software can lead to huge losses. Statistically, road intersections, which are part of the AVs' operational design domain (ODD), have some of the highest accident rates. Hence, testing AVs to their limits on road intersections and assuring their safety there is pertinent, and thus the focus of this paper. We present a situation-coverage-based (SitCov) AV-testing framework for the verification and validation (V&V) and safety assurance of AVs, developed in the open-source AV simulator CARLA. The SitCov AV-testing framework focuses on vehicle-to-vehicle interaction at a road intersection under different environmental and intersection configuration situations, using situation coverage criteria for automatic test suite generation for safety assurance of AVs. We have developed an ontology for intersection situations and used it to generate a situation hyperspace, i.e., the space of all possible situations arising from that ontology. To evaluate our SitCov AV-testing framework, we have seeded multiple faults in our ego AV and compared situation-coverage-based and random situation generation. We have found that both generation methodologies trigger around the same number of seeded faults, but situation-coverage-based generation reveals much more about the weaknesses of the autonomous driving algorithm of our ego AV, especially in edge cases. Our code is publicly available online; anyone can use our SitCov AV-testing framework or build further on top of it. This paper aims to contribute to the domain of V&V and development of AVs, not only from a theoretical point of view, but also through an open-source software contribution, releasing a flexible and effective tool for V&V and development of AVs.
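A minimal sketch of the two generation strategies being compared, assuming a toy ontology (the dimension names and values are hypothetical, not those of the paper's intersection ontology): the situation hyperspace is the Cartesian product of the ontology dimensions, and situation coverage measures how much of it a test suite visits.

```python
import itertools
import random

# Hypothetical ontology dimensions; the paper's intersection ontology is richer.
ONTOLOGY = {
    "weather": ["clear", "rain", "fog"],
    "time_of_day": ["day", "night"],
    "other_vehicle_maneuver": ["left_turn", "right_turn", "straight"],
    "intersection_type": ["four_way", "t_junction"],
}

# The situation hyperspace: every combination of ontology values.
hyperspace = [dict(zip(ONTOLOGY, values))
              for values in itertools.product(*ONTOLOGY.values())]

def random_generation(n: int, seed: int = 0) -> list[dict]:
    """Baseline: sample situations uniformly at random (combinations may repeat)."""
    rng = random.Random(seed)
    return [rng.choice(hyperspace) for _ in range(n)]

def coverage_based_generation(n: int) -> list[dict]:
    """Coverage-driven: always pick a not-yet-covered situation first."""
    return hyperspace[:n]

def coverage(tests: list[dict]) -> float:
    """Fraction of the hyperspace exercised by a test suite."""
    covered = {tuple(sorted(t.items())) for t in tests}
    return len(covered) / len(hyperspace)

n_tests = 20
print("random generation coverage:  ", coverage(random_generation(n_tests)))
print("coverage-based coverage:     ", coverage(coverage_based_generation(n_tests)))
```

For the same test budget, the coverage-driven strategy exercises more distinct situations, which is one intuition behind why it exposes more about where the driving algorithm is weak.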

We consider the question: how can you sample good negative examples for contrastive learning? We argue that, as with metric learning, learning contrastive representations benefits from hard negative samples (i.e., points that are difficult to distinguish from an anchor point). The key challenge in using hard negatives is that contrastive methods must remain unsupervised, making it infeasible to adopt existing negative sampling strategies that use label information. In response, we develop a new class of unsupervised methods for selecting hard negative samples in which the user can control the amount of hardness. A limiting case of this sampling results in a representation that tightly clusters each class and pushes different classes as far apart as possible. The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead.
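A minimal sketch of the general idea, assuming a hardness parameter `beta` that reweights negatives by their similarity to the anchor inside an InfoNCE-style loss; this is a simplified illustration, not claimed to be the paper's exact estimator:

```python
import torch
import torch.nn.functional as F

def hardness_weighted_contrastive_loss(anchor, positive, negatives, tau=0.5, beta=1.0):
    """InfoNCE-style loss that up-weights hard negatives.

    `beta` controls how aggressively negatives similar to the anchor are
    emphasized (beta = 0 recovers uniform weighting over negatives).

    anchor:    (d,)   embedding
    positive:  (d,)   embedding
    negatives: (n, d) embeddings
    """
    anchor = F.normalize(anchor, dim=0)
    positive = F.normalize(positive, dim=0)
    negatives = F.normalize(negatives, dim=1)

    pos_sim = torch.exp(anchor @ positive / tau)   # scalar
    neg_sim = torch.exp(negatives @ anchor / tau)  # (n,)

    # Hardness weights: negatives most similar to the anchor count more.
    weights = torch.softmax(beta * (negatives @ anchor), dim=0) * negatives.shape[0]
    neg_term = (weights * neg_sim).mean()

    return -torch.log(pos_sim / (pos_sim + neg_term))

# Hypothetical usage with random embeddings.
torch.manual_seed(0)
loss = hardness_weighted_contrastive_loss(torch.randn(128), torch.randn(128), torch.randn(64, 128))
print(loss.item())
```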

There has been considerable growth and interest in industrial applications of machine learning (ML) in recent years. ML engineers are consequently in high demand across industry, yet improving their efficiency remains a fundamental challenge. Automated machine learning (AutoML) has emerged as a way to save time and effort on repetitive tasks in ML pipelines, such as data pre-processing, feature engineering, model selection, hyperparameter optimization, and prediction result analysis. In this paper, we investigate the current state of AutoML tools aiming to automate these tasks. We evaluate the tools on many datasets, across different data segments, to examine their performance and to compare their advantages and disadvantages on different test cases.
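As a concrete illustration of one pipeline stage the surveyed tools automate, hyperparameter optimization, the following sketch uses scikit-learn's RandomizedSearchCV; it is a stand-in for what AutoML tools do internally, not one of the tools evaluated in the paper:

```python
from scipy.stats import randint
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# One of the repetitive tasks AutoML tools automate: hyperparameter search.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 300),
        "max_depth": randint(3, 20),
    },
    n_iter=10,   # small budget for illustration only
    cv=3,
    random_state=0,
)
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```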

Deep reinforcement learning is the combination of reinforcement learning (RL) and deep learning. This field of research has been able to solve a wide range of complex decision-making tasks that were previously out of reach for a machine. Thus, deep RL opens up many new applications in domains such as healthcare, robotics, smart grids, and finance, among many others. This manuscript provides an introduction to deep reinforcement learning models, algorithms and techniques. Particular focus is on the aspects related to generalization and on how deep RL can be used for practical applications. We assume the reader is familiar with basic machine learning concepts.
