
This research sheds light on the present and future landscape of Engineering Entrepreneurship Education (EEE) by exploring varied approaches and models adopted in Australian universities, evaluating program effectiveness, and offering recommendations for curriculum enhancement. While EEE programs have been in existence for over two decades, their efficacy remains underexplored. Using a multi-method approach encompassing self-reflection, scoping review, surveys, and interviews, this study addresses key research questions regarding the state, challenges, trends, and effectiveness of EEE. Findings reveal challenges like resource limitations and propose solutions such as experiential learning and industry partnerships. These insights underscore the importance of tailored EEE and inform teaching strategies and curriculum development, benefiting educators and policymakers worldwide.

Related content

Engineering is an international open-access journal launched by the Chinese Academy of Engineering (CAE) in 2015. Its aim is to provide a high-level platform for disseminating and sharing cutting-edge advances in engineering research and development, current major research outputs, and key achievements; to report progress in engineering science and discuss hot topics, areas of interest, challenges, and prospects in engineering development, while considering human and environmental well-being and ethics in engineering; and to encourage engineering breakthroughs and innovations of far-reaching economic and social significance, enabling them to reach advanced international levels, become a new productive force, change the world, benefit humanity, and create a new future. Journal link: Engineering
October 4, 2023

Cybersecurity concerns about Internet of Things (IoT) devices and infrastructure are growing each year. In response, organizations worldwide have published IoT cybersecurity guidelines to protect their citizens and customers. These guidelines constrain the development of IoT systems, which include substantial software components both on-device and in the Cloud. While these guidelines are being widely adopted, e.g., by US federal contractors, their content and merits have not been critically examined. Two notable gaps are: (1) We do not know how these guidelines differ in the topics and details of their recommendations; and (2) We do not know how effective they are at mitigating real-world IoT failures. In this paper, we address these questions through an exploratory sequential mixed-method study of IoT cybersecurity guidelines. We collected a corpus of 142 general IoT cybersecurity guidelines, sampling them for recommendations until saturation was reached. From the resulting 958 unique recommendations, we iteratively developed a hierarchical taxonomy following grounded theory coding principles. We measured the guidelines' usefulness by asking novice engineers about the actionability of each recommendation, and by matching cybersecurity recommendations to the root causes of failures (CVEs and news stories). We report that: (1) Comparing guidelines to one another, each guideline has gaps in its topic coverage and comprehensiveness; and (2) Although 87.2% of recommendations are actionable and the union of the guidelines mitigates all 17 of the failures from news stories, 21% of the CVEs apparently evade the guidelines. In summary, we report shortcomings in every guideline's depth and breadth, but as a whole they are capable of preventing security issues. Our results will help software engineers determine which and how many guidelines to study as they implement IoT systems.
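The guideline-to-failure matching at the heart of this study can be illustrated with a minimal sketch: given the set of recommendation topics each guideline covers and the root-cause topic of each observed failure, compute per-guideline coverage and the failures that evade the union. All guideline names, topic labels, and CVE identifiers below are hypothetical, not drawn from the paper's corpus.

```python
# Minimal sketch: measuring how well a set of guidelines covers observed
# failure root causes. Guideline names and topic labels are hypothetical.

guidelines = {
    "guideline_A": {"update-mechanism", "default-credentials", "tls"},
    "guideline_B": {"tls", "input-validation", "logging"},
}

# Root-cause topic assigned to each observed failure (e.g., a CVE).
failures = {
    "CVE-hypothetical-1": "default-credentials",
    "CVE-hypothetical-2": "input-validation",
    "CVE-hypothetical-3": "insecure-debug-port",  # covered by no guideline
}

# Per-guideline coverage: fraction of failures whose root cause the
# guideline addresses.
for name, topics in guidelines.items():
    covered = sum(cause in topics for cause in failures.values())
    print(f"{name}: {covered}/{len(failures)} failures covered")

# Union coverage: failures that evade every guideline, analogous to the
# paper's finding that some CVEs are mitigated by no recommendation.
union = set().union(*guidelines.values())
evading = [cve for cve, cause in failures.items() if cause not in union]
print("Failures evading all guidelines:", evading)
```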

We present a threat modelling approach that represents changes to the attack paths through an Internet of Things (IoT) environment when the environment changes dynamically, i.e., when new devices are added to or removed from the system, or when whole sub-systems join or leave. The proposed approach investigates the propagation of threats using attack graphs. Traditional attack graph approaches, however, have been applied in static environments that do not continuously change, such as enterprise networks, leading to static and usually very large attack graphs. In contrast, IoT environments are often characterised by dynamic change and interconnection: different topologies for different systems may interconnect with each other dynamically and outside the operator's control. Such new interconnections change the reachability amongst devices, and the corresponding attack graphs change with them. This requires dynamic topology and attack graphs for threat and risk analysis. In this paper, we develop a threat modelling approach that copes with the dynamic system changes that may occur in IoT environments and enables identifying attack paths whilst allowing for system dynamics. We develop dynamic topology and attack graphs that cope rapidly with changes in the IoT environment by maintaining their associated graphs. To motivate the work and illustrate our approach, we introduce an example scenario based on healthcare systems. Our approach is implemented using a graph database management tool -- Neo4j -- a popular tool for mapping, visualising, and querying graphs of highly connected data. It provides a rapid threat modelling mechanism, which makes it suitable for capturing security changes in the dynamic IoT environment.
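As a rough illustration of the core idea (not the paper's Neo4j implementation), the following sketch maintains a reachability-based attack graph as devices join and leave, recomputing attack paths after each topology change. The device names and the "edge means attacker pivot" vulnerability model are hypothetical simplifications.

```python
# Minimal sketch of a dynamic attack graph, assuming a simple model where an
# edge means "attacker can pivot from u to v". The paper uses Neo4j; this
# illustration uses networkx purely for brevity.
import networkx as nx

g = nx.DiGraph()
# Initial topology: a hypothetical healthcare-style IoT deployment.
g.add_edges_from([
    ("internet", "gateway"),
    ("gateway", "infusion_pump"),
    ("gateway", "monitor"),
])

def attack_paths(graph, source="internet", target="infusion_pump"):
    """All simple attack paths from an entry point to a critical asset."""
    return list(nx.all_simple_paths(graph, source, target))

print(attack_paths(g))  # [['internet', 'gateway', 'infusion_pump']]

# A new device joins and interconnects, changing reachability; the attack
# graph is updated incrementally rather than rebuilt from scratch.
g.add_edges_from([("internet", "visitor_tablet"),
                  ("visitor_tablet", "infusion_pump")])
print(attack_paths(g))  # now includes the path via the tablet

# A device is removed (e.g., a sub-system leaves); its paths disappear.
g.remove_node("visitor_tablet")
print(attack_paths(g))
```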

To maintain a reliable grid, we need fast decision-making algorithms for complex problems like Dynamic Reconfiguration (DyR). DyR optimizes distribution grid switch settings in real time to minimize grid losses and dispatch resources to supply loads with available generation. DyR is a mixed-integer problem and can be computationally intractable to solve for large grids and at fast timescales. We propose GraPhyR, a physics-informed Graph Neural Network (GNN) framework tailored for DyR. We incorporate essential operational and connectivity constraints directly within the GNN framework and train it end-to-end. Our results show that GraPhyR is able to learn to optimize the DyR task.
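To make the general approach concrete, here is a minimal sketch of a GNN that scores switch states over a toy grid graph, relaxing binary on/off decisions to the interval (0, 1). This is an illustration of the message-passing-plus-relaxation idea only; it is not the GraPhyR architecture, and it does not enforce radiality or power-flow physics, which the paper embeds as hard constraints.

```python
# Minimal sketch of a GNN for switch selection, assuming a toy setting:
# node features are net injections, each edge gets a learnable switch score,
# and binary switch states are relaxed to (0, 1) via a sigmoid.
import torch
import torch.nn as nn

class SwitchGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)   # message from neighbor pairs
        self.upd = nn.GRUCell(dim, dim)      # node state update

    def forward(self, x, edges):
        # edges: LongTensor of shape (num_edges, 2) with (src, dst) indices.
        src, dst = edges[:, 0], edges[:, 1]
        m = torch.relu(self.msg(torch.cat([x[src], x[dst]], dim=-1)))
        agg = torch.zeros_like(x).index_add_(0, dst, m)  # sum incoming msgs
        return self.upd(agg, x)

class SwitchPredictor(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.embed = nn.Linear(1, dim)
        self.layer = SwitchGNNLayer(dim)
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, injections, edges):
        x = torch.relu(self.embed(injections))
        x = self.layer(x, edges)
        # Relaxed switch state per edge in (0, 1); a hard on/off decision
        # can be recovered by rounding at inference time.
        pair = torch.cat([x[edges[:, 0]], x[edges[:, 1]]], dim=-1)
        return torch.sigmoid(self.score(pair)).squeeze(-1)

# Toy usage: 4 buses, 4 candidate lines.
inj = torch.tensor([[1.0], [-0.5], [-0.3], [-0.2]])
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [0, 3]])
print(SwitchPredictor()(inj, edges))  # one relaxed switch state per edge
```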

We study universal traits that emerge both in real-world complex datasets and in artificially generated ones. Our approach is to analogize data to a physical system and employ tools from statistical physics and Random Matrix Theory (RMT) to reveal their underlying structure. We focus on the feature-feature covariance matrix, analyzing both its local and global eigenvalue statistics. Our main observations are: (i) the power-law scalings that the bulk of its eigenvalues exhibit are vastly different for uncorrelated normally distributed data compared to real-world data, (ii) this scaling behavior can be completely modeled by generating Gaussian data with long-range correlations, (iii) both generated and real-world datasets lie in the same universality class from the RMT perspective, as chaotic rather than integrable systems, (iv) the expected RMT statistical behavior already manifests for empirical covariance matrices at dataset sizes significantly smaller than those conventionally used for real-world training, and can be related to the number of samples required to approximate the population power-law scaling behavior, (v) the Shannon entropy is correlated with the local RMT structure and eigenvalue scaling, is substantially smaller in strongly correlated datasets than in uncorrelated synthetic data, and requires fewer samples to reach the distribution entropy. These findings show that with sufficient sample size, the Gram matrix of natural image datasets can be well approximated by a Wishart random matrix with a simple covariance structure, opening the door to rigorous studies of neural network dynamics and generalization which rely on the data Gram matrix.
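The contrast in observation (i)-(ii) can be reproduced in a few lines: compare the bulk eigenvalue decay of the feature-feature covariance for uncorrelated Gaussian data against Gaussian data with an imposed long-range correlation structure. This is a minimal sketch; the power-law correlation kernel and its exponent below are illustrative choices, not the ones used in the paper.

```python
# Minimal sketch: contrasting the eigenvalue decay of the feature-feature
# covariance for uncorrelated vs. long-range-correlated Gaussian data.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 4096, 256

def bulk_eigenvalues(data):
    cov = np.cov(data, rowvar=False)           # feature-feature covariance
    eig = np.linalg.eigvalsh(cov)[::-1]        # descending eigenvalues
    return eig[eig > 0]

# (i) Uncorrelated Gaussian features.
white = rng.standard_normal((n_samples, n_features))

# (ii) Long-range correlated features: impose a power-law population
# covariance C_ij ~ (1 + |i - j|)^(-alpha) via its Cholesky factor.
idx = np.arange(n_features)
C = (1.0 + np.abs(idx[:, None] - idx[None, :])) ** -0.5
L = np.linalg.cholesky(C + 1e-9 * np.eye(n_features))
colored = white @ L.T

for name, data in [("uncorrelated", white), ("long-range", colored)]:
    eig = bulk_eigenvalues(data)
    # Fit log(eig_k) ~ -beta * log(k) over the bulk (drop extreme edges).
    k = np.arange(1, len(eig) + 1)
    sl = slice(5, len(eig) - 5)
    beta = -np.polyfit(np.log(k[sl]), np.log(eig[sl]), 1)[0]
    print(f"{name}: bulk power-law exponent ~ {beta:.2f}")
```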

This paper performs the first study to understand the prevalence, challenges, and effectiveness of using Static Application Security Testing (SAST) tools on Open-Source Embedded Software (EMBOSS) repositories. We collect a corpus of 258 of the most popular EMBOSS projects, representing 13 distinct categories such as real-time operating systems, network stacks, and applications. To understand the current use of SAST tools on EMBOSS, we measured this corpus and surveyed developers. To understand the challenges and effectiveness of using SAST tools on EMBOSS projects, we applied these tools to the projects in our corpus. We report that almost none of these projects (just 3%) use SAST tools beyond those baked into the compiler, and developers give rationales such as ineffectiveness and false positives. In applying SAST tools ourselves, we show that minimal engineering effort and project expertise are needed to apply many tools to a given EMBOSS project. GitHub's CodeQL was the most effective SAST tool -- using its built-in security checks we found a total of 540 defects (with a false positive rate of 23%) across the 258 projects, with 399 (74%) likely security vulnerabilities, including in projects maintained by Microsoft, Amazon, and the Apache Foundation. EMBOSS engineers have confirmed 273 (51%) of these defects, mainly by accepting our pull requests. Two CVEs were issued. In summary, we urge EMBOSS engineers to adopt the current generation of SAST tools, which offer low false positive rates and are effective at finding security-relevant defects.
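The paper's central point, that little engineering effort is needed to apply a SAST tool to an EMBOSS project, can be sketched as a small batch pipeline. The repository paths below are hypothetical; the two `codeql` subcommands shown are standard CLI usage, but the language flag and query suite should be adapted per project, and SARIF output is parsed here in its standard `runs[].results[]` shape.

```python
# Minimal sketch of applying CodeQL to a batch of EMBOSS-style repositories
# and tallying SARIF alerts. Repo paths are hypothetical placeholders.
import json
import subprocess
from pathlib import Path

repos = [Path("repos/example-rtos"), Path("repos/example-net-stack")]

for repo in repos:
    db = repo.with_suffix(".codeql-db")
    sarif = repo.with_suffix(".sarif")
    # Build a CodeQL database, then analyze it with built-in security checks.
    subprocess.run(["codeql", "database", "create", str(db),
                    "--language=cpp", f"--source-root={repo}"], check=True)
    subprocess.run(["codeql", "database", "analyze", str(db),
                    "--format=sarif-latest", f"--output={sarif}"], check=True)
    # Count alerts from the SARIF report (one "result" per finding).
    results = json.loads(sarif.read_text())["runs"][0]["results"]
    print(f"{repo.name}: {len(results)} findings to triage")
```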

This work investigates the potential of Federated Learning (FL) for official statistics and shows how well the performance of FL models can keep up with centralized learning methods. FL is particularly interesting for official statistics because its utilization can safeguard the privacy of data holders, thus facilitating access to a broader range of data. By simulating three different use cases, important insights on the applicability of the technology are gained. The use cases are based on a medical insurance data set, a fine dust pollution data set, and a mobile radio coverage data set, all from domains close to official statistics. We provide a detailed analysis of the results, including a comparison of centralized and FL algorithm performances for each simulation. In all three use cases, we were able to train models via FL that reach a performance very close to the centralized model benchmarks. Our key observations and their implications for transferring the simulations into practice are summarized. We arrive at the conclusion that FL has the potential to emerge as a pivotal technology in future use cases of official statistics.
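For readers unfamiliar with the FL training loop being compared against centralized baselines, here is a minimal FedAvg sketch on a synthetic linear-regression task: each client trains locally on its private data, and a server aggregates the client models by a weighted average. The data, client count, and hyperparameters are illustrative only and unrelated to the paper's three use cases.

```python
# Minimal FedAvg sketch: local training on private client data, followed by
# server-side weighted averaging of the client models.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0, 0.5])

def make_client(n=200):
    X = rng.standard_normal((n, 3))
    y = X @ true_w + 0.1 * rng.standard_normal(n)
    return X, y

clients = [make_client() for _ in range(5)]
w = np.zeros(3)                       # global model held by the server

for rnd in range(20):                 # communication rounds
    updates, sizes = [], []
    for X, y in clients:              # each client trains locally
        w_local = w.copy()
        for _ in range(5):            # local gradient steps
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        updates.append(w_local)
        sizes.append(len(y))
    # Server aggregates: weighted average of client models (FedAvg).
    w = np.average(updates, axis=0, weights=sizes)

print("federated estimate:", np.round(w, 3))  # close to true_w
```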

Requirements engineering (RE) literature acknowledges the importance of early stakeholder identification. The sources of requirements are many and constantly changing as the market and business evolve. Identifying and consulting all stakeholders on the market is impractical; thus many companies utilize indirect data sources, e.g., documents and representatives of larger groups of stakeholders. However, companies often collect irrelevant data or develop their products based on sub-optimal information sources, which may lead to missed market opportunities. We propose a collaborative method for the identification and selection of data sources. The method consists of four steps and aims to build consensus between different perspectives in an organization. We demonstrate the use of the method with three industrial case studies. We have presented and statically validated the method to support the prioritization of stakeholders for market-driven requirements engineering (MDRE). Our results show that the method can support the identification and selection of data sources in three ways: (1) by providing systematic steps to identify and prioritize data sources for RE, (2) by highlighting and resolving discrepancies between different perspectives in an organization, and (3) by analyzing the underlying rationale for using certain data sources.

The dominating NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications (e.g., sentiment classification, span-prediction-based question answering, or machine translation). However, it builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time. This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information. Moreover, it is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime. The first goal of this thesis is to characterize the different forms this shift can take in the context of natural language processing, and propose benchmarks and evaluation metrics to measure its effect on current deep learning architectures. We then proceed to take steps to mitigate the effect of distributional shift on NLP models. To this end, we develop methods based on parametric reformulations of the distributionally robust optimization framework. Empirically, we demonstrate that these approaches yield more robust models on a selection of realistic problems. In the third and final part of this thesis, we explore ways of efficiently adapting existing models to new domains or tasks. Our contribution to this topic takes inspiration from information geometry to derive a new gradient update rule which alleviates catastrophic forgetting during adaptation.
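To give a concrete sense of what a distributionally robust objective looks like, here is a minimal group-DRO-style loss: instead of minimizing the average loss, groups (e.g., domains) with higher current loss are upweighted. This is a simple, well-known instance of the DRO idea for illustration; it is not the parametric reformulation developed in the thesis.

```python
# Minimal sketch of a distributionally robust training objective: upweight
# groups (e.g., domains) with the highest loss rather than averaging.
import torch

def group_dro_loss(per_example_loss, group_ids, num_groups, temperature=1.0):
    """Softmax-weighted average of per-group mean losses.

    temperature -> 0 recovers the worst-group loss; a large temperature
    recovers the ordinary average over groups.
    """
    group_losses = torch.stack([
        per_example_loss[group_ids == g].mean() for g in range(num_groups)
    ])
    weights = torch.softmax(group_losses / temperature, dim=0)
    return (weights.detach() * group_losses).sum()

# Toy usage: two domains, the second one currently harder.
losses = torch.tensor([0.2, 0.3, 1.5, 1.8])
groups = torch.tensor([0, 0, 1, 1])
print(group_dro_loss(losses, groups, num_groups=2))  # dominated by group 1
```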

Deep Convolutional Neural Networks (CNNs) are a special type of neural network that has shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNNs is largely achieved through the use of multiple non-linear feature extraction stages that can automatically learn hierarchical representations from the data. The availability of large amounts of data and improvements in hardware processing units have accelerated research in CNNs, and recently very interesting deep CNN architectures have been reported. The recent race in deep CNN architectures for achieving high performance on challenging benchmarks has shown that innovative architectural ideas, as well as parameter optimization, can improve CNN performance on various vision-related tasks. In this regard, different ideas in CNN design have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and restructuring of processing units. However, the major improvement in representational capacity has been achieved by restructuring the processing units. In particular, the idea of using a block as a structural unit instead of a layer is gaining substantial appreciation. This survey thus focuses on the intrinsic taxonomy present in recently reported CNN architectures and, consequently, classifies the recent innovations in CNN architectures into seven different categories. These seven categories are based on spatial exploitation, depth, multi-path, width, feature map exploitation, channel boosting, and attention. Additionally, it covers the elementary understanding of CNN components and sheds light on the current challenges and applications of CNNs.
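The "block as a structural unit" idea the survey highlights is easiest to see in code. Below is a minimal residual block, the canonical example of such a unit: several layers are grouped behind a skip connection, and the block is then stacked exactly as individual layers used to be. Channel sizes and depth here are arbitrary illustrations.

```python
# Minimal sketch of a block as a structural unit: a residual block groups
# several layers behind a skip connection and is stacked like a layer.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # Skip connection: the block learns a residual correction to x.
        return torch.relu(x + self.body(x))

# Blocks are stacked as units, exactly as layers used to be.
net = nn.Sequential(*[ResidualBlock(32) for _ in range(4)])
print(net(torch.randn(1, 32, 16, 16)).shape)  # torch.Size([1, 32, 16, 16])
```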

We present a monocular Simultaneous Localization and Mapping (SLAM) system that uses high-level object and plane landmarks in addition to points. The resulting map is denser, more compact, and more meaningful compared to point-only SLAM. We first propose a high-order graphical model to jointly infer 3D objects and layout planes from a single image, considering occlusions and semantic constraints. The extracted cuboid objects and layout planes are further optimized in a unified SLAM framework. Objects and planes can provide more semantic constraints, such as Manhattan and object-supporting relationships, than points can. Experiments on various public and collected datasets, including ICL-NUIM and TUM mono, show that our algorithm can improve camera localization accuracy compared to state-of-the-art SLAM and also generate dense maps in many structured environments.
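To illustrate the kind of extra constraints a plane landmark contributes to the optimization, here is a minimal sketch: a plane (unit normal n, offset d) is fit to noisy mapped points while a soft prior encourages Manhattan-world alignment of the normal with a dominant axis. The data, weights, and residual design are illustrative simplifications, not the paper's unified SLAM framework.

```python
# Minimal sketch of plane-landmark constraints: point-on-plane residuals
# plus a soft Manhattan alignment prior, solved by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
# Noisy points roughly on the plane z = 2 (normal ~ [0, 0, 1], d = -2).
pts = np.c_[rng.uniform(-1, 1, (50, 2)), 2 + 0.02 * rng.standard_normal(50)]

def residuals(params, manhattan_axis=np.array([0.0, 0.0, 1.0]), w=5.0):
    n, d = params[:3], params[3]
    n = n / np.linalg.norm(n)          # keep the normal on the unit sphere
    point_res = pts @ n + d            # point-on-plane distances
    manhattan_res = w * np.cross(n, manhattan_axis)  # alignment prior
    return np.concatenate([point_res, manhattan_res])

fit = least_squares(residuals, x0=[0.1, 0.1, 1.0, -1.0])
n = fit.x[:3] / np.linalg.norm(fit.x[:3])
print("normal:", np.round(n, 3), "offset:", round(fit.x[3], 3))
```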
