
The software supply chain involves a multitude of tools and processes that enable software developers to write, build, and ship applications. Recently, security compromises of these tools or processes have led to a surge in proposals to address the resulting issues. However, these proposals commonly overemphasize specific solutions or conflate goals, resulting in unexpected consequences or unclear positioning and usage. In this paper, we make the case that developing practical solutions is not possible until the community has a holistic view of the security problem; this view must include both the technical and procedural aspects. To this end, we examine three use cases to identify common security goals, and we present a goal-oriented taxonomy of existing solutions that provides a holistic overview of software supply chain security.

Related Content

This new edition of the TOOLS conference series revives a tradition of 50 conferences held from 1989 to 2012. TOOLS began as "Technology of Object-Oriented Languages and Systems" and later expanded to cover all innovative aspects of software technology; many of today's most important software concepts were first introduced there. TOOLS 50+1, held in 2019 near Kazan, Russia, continued the series with the same spirit of innovation, passion for all things software, combination of scientific rigor and industry applicability, and openness to all trends and communities in the field.
October 21, 2022

Constraint monitoring aims to detect violations of constraints in business processes (e.g., that an invoice should be cleared within 48 hours of the corresponding goods receipt) by analyzing event data. Existing techniques for constraint monitoring assume that a single case notion exists in a business process, e.g., a patient in a healthcare process, and that each event is associated with that case notion. However, in reality, business processes are object-centric, i.e., multiple case notions (objects) exist, and an event may be associated with multiple objects. For instance, an Order-To-Cash (O2C) process involves orders, items, deliveries, etc., and they interact when executing an event, e.g., packing multiple items together for a delivery. The existing techniques produce misleading insights when applied to such object-centric business processes. In this work, we propose an approach to monitoring constraints in object-centric business processes. To this end, we introduce Object-Centric Constraint Graphs (OCCGs) to represent constraints that consider the interaction of objects. Next, we evaluate the constraints represented by OCCGs by analyzing Object-Centric Event Logs (OCELs), which store the interaction of different objects in events. We have implemented a web application to support the proposed approach and conducted two case studies using a real-life SAP ERP system.
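To make the idea concrete, here is a minimal sketch of checking the 48-hour invoice-clearing constraint over object-centric events. The toy event log and the `violations` helper are invented for illustration; the paper's actual OCCG formalism and OCEL format are considerably richer.

```python
from datetime import datetime, timedelta

# Invented mini event log: each event has an activity, a timestamp, and
# the objects it touches -- object-centric, so one event may reference
# several object types (here: invoices and items).
events = [
    {"activity": "goods receipt", "time": datetime(2022, 10, 1, 9, 0),
     "objects": {"invoice": ["i1"], "item": ["a", "b"]}},
    {"activity": "clear invoice", "time": datetime(2022, 10, 2, 10, 0),
     "objects": {"invoice": ["i1"]}},
    {"activity": "goods receipt", "time": datetime(2022, 10, 3, 9, 0),
     "objects": {"invoice": ["i2"], "item": ["c"]}},
    {"activity": "clear invoice", "time": datetime(2022, 10, 6, 9, 0),
     "objects": {"invoice": ["i2"]}},
]

def violations(events, limit=timedelta(hours=48)):
    """Return invoice ids cleared later than `limit` after goods receipt."""
    receipt, cleared = {}, {}
    for e in events:
        for inv in e["objects"].get("invoice", []):
            if e["activity"] == "goods receipt":
                receipt[inv] = e["time"]
            elif e["activity"] == "clear invoice":
                cleared[inv] = e["time"]
    return [inv for inv in receipt
            if inv in cleared and cleared[inv] - receipt[inv] > limit]

print(violations(events))  # i1 cleared in 25h (ok); i2 took 72h -> ['i2']
```

Note how the constraint is evaluated per invoice object rather than per "case", which is what breaks down when a single-case-notion technique is applied to such logs.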

Context: Developing software-intensive products or services usually involves a plethora of software artefacts. Assets are artefacts intended to be used more than once that have value for organisations; examples include test cases, code, requirements, and documentation. During the development process, assets might degrade, affecting the effectiveness and efficiency of the development process. Assets are therefore an investment that requires continuous management. Identifying assets is the first step towards managing them effectively. However, there is a lack of awareness of which assets and types of assets are common in software-developing organisations. Most types of assets are understudied, and their state of quality and how they degrade over time are not well understood. Method: We perform a systematic literature review and a field study at five companies to study and identify assets, filling this gap in research. The results were analysed qualitatively and summarised in a taxonomy. Results: We create the first comprehensive, structured, yet extendable taxonomy of assets, containing 57 types of assets. Conclusions: The taxonomy serves as a foundation for identifying assets that are relevant for an organisation and enables the study of asset management and asset degradation concepts.

Supplier selection and order allocation (SSOA) are key strategic decisions in supply chain management that greatly impact the performance of the supply chain. The SSOA problem has been studied extensively, but the lack of attention paid to scalability presents a significant gap that prevents the adoption of SSOA algorithms by industrial practitioners. This paper presents a novel real-time, large-scale industrial SSOA problem, which involves a multi-item, multi-supplier environment with dual-sourcing and penalty constraints across two tiers of a manufacturing company's supply chain. The problem supports suppliers' preferences to work with other suppliers through bidding. This is the largest scale studied so far in the literature, and the problem needs to be solved in a real-time auction environment, making computational complexity a key issue. Furthermore, order allocation needs to be undertaken on both supply tiers, with dynamically presented constraints under which a non-preferred allocation may result in penalties imposed by the suppliers. We propose Mixed Integer Programming (MIP) models for the individual tiers as well as for the integrated problem, which are complex due to their NP-hard nature. The use case allows us to highlight how problem formulation and the choice of modelling approach can help reduce complexity, using Mathematical Programming (MP) and Genetic Algorithm (GA) approaches. The results show the interesting observation that MP outperforms GA on the individual-tier problems as well as on the integrated problem. A sensitivity analysis is presented for the sourcing strategy, penalty threshold, and penalty factor. The developed model was successfully deployed in a supplier conference, which helped achieve significant procurement cost reductions for the manufacturing company.
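As a toy illustration of the cost/penalty trade-off (an invented three-item, two-supplier instance, not the paper's industrial formulation), the sketch below brute-forces a tiny order allocation with a flat penalty for any supplier left without an allocation. The search space grows exponentially with items and suppliers, which is why realistic instances require the MIP and GA formulations discussed above.

```python
from itertools import product

# Invented unit bid prices per (supplier, item).
price = {("s1", "bolt"): 2.0, ("s2", "bolt"): 2.5,
         ("s1", "nut"): 1.0, ("s2", "nut"): 0.8,
         ("s1", "washer"): 0.5, ("s2", "washer"): 0.6}
items = ["bolt", "nut", "washer"]
suppliers = ["s1", "s2"]
penalty = 1.5  # flat charge if a supplier ends up with no allocation

def total_cost(assignment):
    """Bid cost of an (item -> supplier) assignment plus penalties."""
    cost = sum(price[(s, it)] for it, s in zip(items, assignment))
    for s in suppliers:
        if assignment.count(s) == 0:   # non-preferred outcome for s
            cost += penalty
    return cost

# Exhaustive search over all supplier choices per item -- fine at toy
# scale, hopeless at industrial scale.
best = min(product(suppliers, repeat=len(items)), key=total_cost)
print(best)  # cheapest per-item picks already use both suppliers
```

Here the penalty is a deliberately simplified stand-in for the paper's dynamically presented, supplier-imposed penalties.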

As smart buildings move towards open communication technologies, providing access to the Building Automation System (BAS) through the intranet, or even remotely through the Internet, has become common practice. However, BASs were historically developed as closed environments and designed with limited cyber-security considerations. With this increased accessibility, smart buildings have thus become vulnerable to cyber-attacks. This study introduces the development and capabilities of a Hardware-in-the-Loop (HIL) testbed for testing and evaluating the cyber-physical security of typical BASs in smart buildings. The testbed consists of three subsystems: (1) a real-time HIL emulator simulating the behavior of a virtual building as well as its Heating, Ventilation, and Air Conditioning (HVAC) equipment via a dynamic simulation in Modelica; (2) a set of real HVAC controllers monitoring the virtual building's operation and providing local control signals to the HVAC equipment in the HIL emulator; and (3) a BAS server, along with a web-based service, for users to fully access the schedules, setpoints, trends, alarms, and other control functions of the HVAC controllers remotely through the BACnet network. The server generates rule-based setpoints for the local HVAC controllers. Based on these three subsystems, the HIL testbed supports both attack/fault-free and attack/fault-injection experiments at various levels of the building system. The resulting test data can be used to inform the building community and support the transfer of cyber-physical security technology to the building industry.
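To give a flavor of the rule-based setpoint generation mentioned above, here is a hypothetical schedule rule (invented for illustration; it is not the testbed's actual logic, which runs on a BAS server over BACnet):

```python
# Hypothetical BAS-server rule: pick a zone heating setpoint from the
# time of day and an occupancy flag.
def setpoint_c(hour, occupied):
    """Return a zone heating setpoint in degrees Celsius."""
    if not occupied:
        return 16.0                              # unoccupied setback
    return 21.0 if 8 <= hour < 18 else 18.0      # comfort vs. off-hours

print(setpoint_c(10, True), setpoint_c(22, True), setpoint_c(10, False))
# → 21.0 18.0 16.0
```

In the testbed, tampering with exactly this kind of setpoint command is one of the attack-injection experiments the HIL setup makes safe to run.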

Linguistic disparity in the NLP world is a problem that has been widely acknowledged recently. However, the different facets of this problem, and the reasons behind this disparity, are seldom discussed within the NLP community. This paper provides a comprehensive analysis of the disparity that exists among the languages of the world. We show that simply categorising languages by data availability may not always be correct. Using an existing language categorisation based on speaker population and vitality, we analyse the distribution of language data resources, the amount of NLP/CL research, inclusion in multilingual web-based platforms, and inclusion in pre-trained multilingual models. We show that many languages are not covered by these resources or platforms, and that even among languages belonging to the same language group there is wide disparity. We analyse the impact of language family, geographical location, GDP, and speaker population, provide possible reasons for this disparity, and offer some suggestions to overcome it.

In the past two decades, autonomous driving has been catalyzed into reality by the growing capabilities of machine learning. This paradigm shift possesses significant potential to transform the future of mobility and reshape our society as a whole. With the recent advances in perception, planning, and control capabilities, autonomous driving technologies are being rolled out for public trials, yet we remain far from being able to rigorously ensure the resilient operation of these systems across the long-tailed nature of the driving environment. Given the limitations of real-world testing, autonomous vehicle simulation stands as the critical component in exploring the edge of autonomous driving capabilities, developing the robust behaviors required for successful real-world operation, and enabling the extraction of hidden risks from these complex systems prior to deployment. This paper presents the current state-of-the-art simulation frameworks and methodologies used in the development of autonomous driving systems, with a focus on outlining how simulation is used to build the resiliency required for real-world operation and on the methods developed to bridge the gap between simulation and reality. A synthesis of the key challenges surrounding autonomous driving simulation is presented, specifically highlighting the opportunities to further advance the ability to continuously learn in simulation and to effectively transfer that learning into the real world, enabling autonomous vehicles to exit the guardrails of simulation and deliver robust and resilient operation at scale.

Blockchain is an emerging decentralized data collection, sharing, and storage technology that has provided transparent, secure, tamper-proof, and robust ledger services for various real-world use cases. Recent years have witnessed notable developments both in blockchain technology itself and in blockchain-adopting applications. Most existing surveys limit their scope to a few particular issues of blockchain or its applications, which makes it hard to depict the general picture of the current, vast blockchain ecosystem. In this paper, we investigate recent advances in both blockchain technology and its most active research topics in real-world applications. We first review recent developments in the consensus mechanisms and storage mechanisms of general blockchain systems. Then an extensive literature review is conducted on blockchain-enabled IoT, edge computing, federated learning, and several emerging applications including healthcare, the COVID-19 pandemic, social networks, and supply chains, with detailed specific research topics discussed in each. Finally, we discuss future directions, challenges, and opportunities in both academia and industry.

Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm that has drawn tremendous attention in both academia and industry. A unique characteristic of FL is heterogeneity, which resides in the varying hardware specifications and dynamic states of the participating devices. Theoretically, heterogeneity can exert a huge influence on the FL training process, e.g., rendering a device unavailable for training or unable to upload its model updates. Unfortunately, these impacts have never been systematically studied and quantified in the existing FL literature. In this paper, we carry out the first empirical study to characterize the impacts of heterogeneity in FL. We collect large-scale data from 136k smartphones that faithfully reflect heterogeneity in real-world settings. We also build a heterogeneity-aware FL platform that complies with the standard FL protocol while taking heterogeneity into consideration. Based on the data and the platform, we conduct extensive experiments to compare the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings. Results show that heterogeneity causes non-trivial performance degradation in FL, including up to a 9.2% accuracy drop, 2.32x longer training time, and undermined fairness. Furthermore, we analyze potential impact factors and find that device failure and participant bias are two such factors contributing to performance degradation. Our study provides insightful implications for FL practitioners. On the one hand, our findings suggest that FL algorithm designers should account for heterogeneity during evaluation. On the other hand, our findings urge system providers to design specific mechanisms to mitigate its impacts.
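The device-unavailability effect can be sketched in a few lines. The following is an invented toy setup (scalar "models", an `avail_prob` dropout knob), not the paper's 136k-device platform; it only illustrates how missing participants shift the aggregate.

```python
import random

# Toy federated averaging: each available device pushes an update
# pulling the global model toward its local target; devices drop out of
# a round with probability 1 - avail_prob.
def fed_avg(targets, rounds=20, avail_prob=1.0, lr=0.5, seed=0):
    rng = random.Random(seed)
    model = 0.0
    for _ in range(rounds):
        updates = [t - model for t in targets if rng.random() < avail_prob]
        if updates:                    # a round may lose every device
            model += lr * sum(updates) / len(updates)
    return model

full = fed_avg([1.0, 2.0, 3.0, 4.0], avail_prob=1.0)   # all devices report
flaky = fed_avg([1.0, 2.0, 3.0, 4.0], avail_prob=0.5)  # half drop out
print(full, flaky)  # with full participation the model converges to 2.5
```

With full participation the model converges to the population mean; under dropout, whichever devices happen to report dominate the aggregate, a scalar analogue of the participant bias identified above.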

Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, robust principal component analysis, phase synchronization, and joint alignment. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated consideration of optimization and statistics leads to fruitful research findings.
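The "simple iterative methods" point can be made concrete on a toy problem. Below is a minimal sketch (invented data; plain gradient descent from a simple positive initialization rather than the tailored spectral initializations the overview discusses) of rank-1 matrix completion:

```python
import random

# Toy rank-1 matrix completion by nonconvex gradient descent: recover
# M = x y^T from a few observed entries by minimizing
#   f(u, v) = sum over observed (i, j) of (u_i * v_j - M_ij)^2.
x_true = [1.0, 2.0, 3.0]
y_true = [2.0, 1.0]
omega = [(0, 0), (0, 1), (1, 0), (2, 1)]        # observed positions
M = {(i, j): x_true[i] * y_true[j] for (i, j) in omega}

rng = random.Random(1)
u = [rng.uniform(0.5, 1.5) for _ in x_true]     # simple positive init,
v = [rng.uniform(0.5, 1.5) for _ in y_true]     # not a spectral one
lr = 0.05
for _ in range(3000):
    gu = [0.0] * len(u)
    gv = [0.0] * len(v)
    for (i, j) in omega:
        r = u[i] * v[j] - M[(i, j)]             # residual at (i, j)
        gu[i] += 2 * r * v[j]
        gv[j] += 2 * r * u[i]
    u = [a - lr * g for a, g in zip(u, gu)]
    v = [b - lr * g for b, g in zip(v, gv)]

# Entry (2, 0) was never observed; at any zero-loss factorization the
# rank-1 structure forces u[2] * v[0] = x_true[2] * y_true[0] = 6.
print(round(u[2] * v[0], 2))
```

Despite the nonconvex objective and the factorization's scaling ambiguity (u can grow while v shrinks), the product at any global minimum is determined, which is the kind of benign-landscape phenomenon the global-landscape line of work formalizes.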

Deep learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect of this progress is novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize it according to three dimensions: search space, search strategy, and performance estimation strategy.
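The three dimensions can be sketched in miniature. In this invented toy (random search over a three-knob space, with a made-up proxy score standing in for actual training and validation), each piece maps to one dimension from the abstract:

```python
import random

# Search space: a few discrete architecture knobs.
search_space = {
    "depth": [2, 4, 6],
    "width": [16, 32, 64],
    "activation": ["relu", "tanh"],
}

def sample(rng):
    """Search strategy step: draw one candidate architecture."""
    return {k: rng.choice(v) for k, v in search_space.items()}

def estimate(arch):
    """Performance estimation strategy: an invented proxy score
    (real NAS would train and validate the candidate network)."""
    return arch["depth"] * arch["width"] - 0.5 * arch["depth"] ** 2

def random_search(budget=20, seed=0):
    rng = random.Random(seed)
    return max((sample(rng) for _ in range(budget)), key=estimate)

best = random_search()
print(best)
```

Swapping `random_search` for evolutionary search, reinforcement learning, or gradient-based relaxation changes only the search strategy; the space and the estimator vary independently, which is exactly why the three-dimension categorization is useful.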
