
Collaborative Cyber-Physical Systems (CCPS) are systems of tightly coupled physical and cyber components and massively interconnected subsystems that collaborate to achieve a common goal. The safety of a single Cyber-Physical System (CPS) can be achieved by following safety standards such as ISO 26262 and IEC 61508 or by applying hazard analysis techniques. However, due to the complex, highly interconnected, heterogeneous, and collaborative nature of CCPS, a fault in one CPS's components can trigger many other faults in collaborating CPSs. Therefore, a safety assurance technique based on fault criticality analysis is required to ensure safety in CCPS. This paper presents a Fault Criticality Matrix (FCM), implemented in our tool CPSTracer, which records data such as the identified faults, their criticality, and the corresponding safety guards. The proposed FCM is based on composite hazard analysis and content-based relationships among the hazard analysis artifacts, and it ensures that the safety guards control the identified faults at design time; thus, we can effectively manage and control faults in the design phase to ensure the safe development of CPSs. To validate our approach, we introduce a case study on the Platooning system (a collaborative CPS). We perform a criticality analysis of the Platooning system using the FCM in our tool. After the detailed fault criticality analysis, we investigate the results against two research questions to check the appropriateness and effectiveness of the approach. Moreover, through simulation of the Platooning system, we show that its collision rate without the FCM is considerably higher than its collision rate after the fault criticality is analyzed using the FCM.
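
To make the idea concrete, here is a minimal sketch of what one FCM entry could hold. The field names and criticality scale are our own illustrative assumptions, not CPSTracer's actual schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of one Fault Criticality Matrix (FCM) entry.
# Field names and the 1..5 criticality scale are illustrative
# assumptions, not CPSTracer's actual schema.
@dataclass
class FCMEntry:
    fault_id: str             # identified fault
    source_component: str     # component where the fault originates
    criticality: int          # assumed scale: 1 (low) .. 5 (high)
    propagates_to: list[str]  # collaborating CPSs the fault can trigger
    safety_guard: str         # design-time control for the fault

entry = FCMEntry(
    fault_id="F-07",
    source_component="lead_vehicle.radar",
    criticality=4,
    propagates_to=["follower_1", "follower_2"],
    safety_guard="fall back to increased headway on sensor dropout",
)
print(entry)
```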

Related Content

Network Error Logging helps web server operators detect operational problems in real time to provide fast and reliable services. The HTTP Archive provides detailed historical data on HTTP requests. This paper leverages that data to provide a long-term analysis of Network Error Logging deployment. Deployment has risen from 0 % to 11.73 % (almost 2,250,000 unique domains) since 2019. Current deployment is dominated by Cloudflare. Although we observed different policies, the default settings prevail. Third-party collectors are emerging, raising the diversity needed to gather sound data; even so, many services deploy self-hosted collectors. Moreover, we identify potentially malicious adversaries deploying collectors on randomly generated domains and shortened URLs.
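
For context, a site opts into Network Error Logging by sending the standard NEL and Report-To response headers. A minimal sketch of constructing them; the collector URL is a placeholder:

```python
import json

# Minimal sketch of the two response headers that enable Network Error
# Logging. The collector URL below is a placeholder, not a real endpoint.
report_to = json.dumps({
    "group": "default",
    "max_age": 31536000,
    "endpoints": [{"url": "https://collector.example/reports"}],
})
nel = json.dumps({
    "report_to": "default",
    "max_age": 31536000,
    # "failure_fraction": 0.1,  # optional sampling policy (default 1.0)
})

headers = {"Report-To": report_to, "NEL": nel}
print(headers)
```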

Reasoning over knowledge graphs (KGs) is a challenging task that requires a deep understanding of the complex relationships between entities and the underlying logic of their relations. Current approaches rely on learning geometries to embed entities in vector space for logical query operations, but they suffer from subpar performance on complex queries and dataset-specific representations. In this paper, we propose a novel decoupled approach, Language-guided Abstract Reasoning over Knowledge graphs (LARK), that formulates complex KG reasoning as a combination of contextual KG search and abstract logical query reasoning, to leverage the strengths of graph extraction algorithms and large language models (LLMs), respectively. Our experiments demonstrate that the proposed approach outperforms state-of-the-art KG reasoning methods on standard benchmark datasets across several logical query constructs, with significant performance gains for queries of higher complexity. Furthermore, we show that the performance of our approach improves proportionally to the increase in the size of the underlying LLM, enabling the integration of the latest advancements in LLMs for logical reasoning over KGs. Our work presents a new direction for addressing the challenges of complex KG reasoning and paves the way for future research in this area.
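
A highly simplified sketch of the decoupled pattern described above, on a toy knowledge graph; the function boundaries and prompt wording are our own illustration, not LARK's implementation:

```python
# Illustrative sketch of a decoupled KG-reasoning pipeline: contextual
# subgraph search followed by abstract logical reasoning with an LLM.
# Function boundaries and prompt wording are assumptions, not LARK's code.

def extract_context(kg: dict[str, list[tuple[str, str]]],
                    anchors: list[str], hops: int = 2) -> list[str]:
    """Collect triples within `hops` of the anchor entities (BFS)."""
    frontier, triples = set(anchors), []
    for _ in range(hops):
        nxt = set()
        for head in frontier:
            for rel, tail in kg.get(head, []):
                triples.append(f"({head}, {rel}, {tail})")
                nxt.add(tail)
        frontier = nxt
    return triples

def build_prompt(triples: list[str], query: str) -> str:
    """Abstract the logical query over the retrieved context for an LLM."""
    context = "\n".join(triples)
    return f"Context triples:\n{context}\n\nAnswer the logical query: {query}"

kg = {"Curie": [("field", "Physics"), ("award", "Nobel_Prize")]}
print(build_prompt(extract_context(kg, ["Curie"]), "award(Curie, ?x)"))
```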

Variance-based global sensitivity analysis (GSA) can provide a wealth of information when applied to complex models. A well-known Achilles' heel of this approach is its computational cost, which often renders it infeasible in practice. An appealing alternative is to analyze instead the sensitivity of a surrogate model, with the goal of lowering computational costs while maintaining sufficient accuracy. Should a surrogate be "simple" enough to be amenable to analytical calculation of its Sobol' indices, the cost of GSA is essentially reduced to the construction of the surrogate. We propose a new class of sparse weight Extreme Learning Machines (SW-ELMs) which, when considered as surrogates in the context of GSA, admit analytical formulas for their Sobol' indices and, unlike standard ELMs, yield accurate approximations of these indices. The effectiveness of this approach is illustrated both on traditional benchmarks in the field and on a chemical reaction network.
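
For reference, the first-order Sobol' index of input $x_i$ for a model $Y = f(x_1, \ldots, x_d)$ with independent inputs is

$$ S_i = \frac{\operatorname{Var}_{x_i}\!\left( \mathbb{E}_{\mathbf{x}_{\sim i}}\left[ Y \mid x_i \right] \right)}{\operatorname{Var}(Y)}, $$

so a surrogate for which these ratios admit closed forms reduces the cost of GSA to the cost of fitting the surrogate itself.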

Artificial intelligence (AI) methods have great potential to revolutionize many areas of medical care by enhancing the experience of medical experts and patients. AI-based computer-assisted diagnosis tools can be of tremendous benefit if they can match or outperform clinical experts. As a result, advanced healthcare services could become affordable in developing nations, and the problem of a shortage of expert medical practitioners could be addressed. AI-based tools can save time, resources, and overall cost of patient treatment. Furthermore, in contrast to humans, AI can uncover complex relations in data from a large set of inputs and even lead to new evidence-based knowledge in medicine. However, integrating AI in healthcare raises several ethical and philosophical concerns, such as bias, transparency, autonomy, responsibility, and accountability, which must be addressed before such tools are integrated into clinical settings. In this article, we highlight recent advances in AI-assisted medical image analysis, existing standards, and the significance of understanding ethical issues and best practices for applying AI in clinical settings. We cover the technical and ethical challenges of AI and the implications of deploying it in hospitals and public organizations. We also discuss promising key measures and techniques for addressing the ethical challenges, data scarcity, racial bias, lack of transparency, and algorithmic bias. Finally, we provide our recommendations and future directions for addressing the ethical challenges associated with AI in healthcare applications, with the goal of deploying AI in clinical settings to make workflows more efficient, accurate, accessible, transparent, and reliable for patients worldwide.

We present algorithms based on satisfiability (SAT) solving, as well as answer set programming (ASP), for the problem of determining inconsistency degrees in propositional knowledge bases. We consider six inconsistency measures whose respective decision problems lie on the first level of the polynomial hierarchy: the contension inconsistency measure, the forgetting-based inconsistency measure, the hitting set inconsistency measure, the max-distance inconsistency measure, the sum-distance inconsistency measure, and the hit-distance inconsistency measure. In an extensive experimental analysis, we compare the SAT-based and ASP-based approaches with each other, as well as with a set of naive baseline algorithms. Our results demonstrate that, overall, both the SAT-based and the ASP-based approaches clearly outperform the naive baselines in terms of runtime. The results further show that the proposed ASP-based approaches outperform the SAT-based ones on all six inconsistency measures considered in this work. Moreover, we conduct additional experiments to explain these results in greater detail.
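
To make one of these measures concrete: the contension measure evaluates the knowledge base in Priest's three-valued logic (truth values ordered F < B < T, with T and B designated) and returns the minimal number of atoms that must take the paradoxical value B. Below is a brute-force sketch over clause-form knowledge bases, for illustration only; the paper's contribution is precisely to avoid such enumeration via SAT and ASP encodings:

```python
from itertools import product

# Brute-force sketch of the contension inconsistency measure under
# Priest's three-valued logic (values F < B < T; T and B designated).
# Illustration only -- the paper uses SAT/ASP encodings, not enumeration.

ORDER = {"F": 0, "B": 1, "T": 2}

def lit_value(lit: str, assign: dict[str, str]) -> str:
    """Value of a literal; '-' marks negation, which swaps T and F."""
    if lit.startswith("-"):
        return {"T": "F", "F": "T", "B": "B"}[assign[lit[1:]]]
    return assign[lit]

def clause_value(clause: list[str], assign: dict[str, str]) -> str:
    # Disjunction = maximum in the order F < B < T.
    return max((lit_value(l, assign) for l in clause), key=ORDER.get)

def contension(kb: list[list[str]]) -> int:
    atoms = sorted({l.lstrip("-") for c in kb for l in c})
    best = len(atoms) + 1
    for values in product("TFB", repeat=len(atoms)):
        assign = dict(zip(atoms, values))
        # A model must give every clause a designated value (T or B).
        if all(clause_value(c, assign) != "F" for c in kb):
            best = min(best, values.count("B"))
    return best

# K = {a, -a, b}: one paradoxical atom (a) suffices, so the degree is 1.
print(contension([["a"], ["-a"], ["b"]]))  # -> 1
```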

We investigate autonomous mobile robots in the Euclidean plane. Each robot has a function, called a target function, that decides its destination from the robots' positions. Robots may have different target functions. If the robots whose target functions are chosen from a set $\Phi$ of target functions always solve a problem $\Pi$, we say that $\Phi$ is compatible with respect to $\Pi$. If $\Phi$ is compatible with respect to $\Pi$, every target function $\phi \in \Phi$ is an algorithm for $\Pi$. However, even if both $\phi$ and $\phi'$ are algorithms for $\Pi$, $\{ \phi, \phi' \}$ may not be compatible with respect to $\Pi$. From the viewpoint of compatibility, we investigate the convergence, fault tolerant ($n,f$)-convergence (FC($f$)), fault tolerant ($n,f$)-convergence to $f$ points (FC($f$)-PO), fault tolerant ($n,f$)-convergence to a convex $f$-gon (FC($f$)-CP), and gathering problems, assuming crash failures. The obtained results classify these problems into three groups. The convergence, FC(1), FC(1)-PO, and FC($f$)-CP problems compose the first group: every set of target functions that always shrinks the convex hull of a configuration is compatible. The second group is composed of gathering and FC($f$)-PO for $f \geq 2$: no set of target functions that always shrinks the convex hull of a configuration is compatible. The third group, FC($f$) for $f \geq 2$, is placed in between. Thus, FC(1) and FC(2), FC(1)-PO and FC(2)-PO, and FC(2) and FC(2)-PO, respectively, lie in different groups, even though FC(1) and FC(1)-PO are both in the first group.
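
As a concrete example of a target function in this sense, the center-of-gravity function from the classic convergence literature maps the observed positions to their centroid; whether a set of such functions is compatible is exactly the question studied above. A minimal sketch under our own representation assumptions:

```python
# Illustrative target function in the sense of the abstract: given the
# observed robot positions, return a destination. Moving to the center
# of gravity is a classic convergence algorithm; the representation
# (list of (x, y) tuples) is our own assumption for illustration.

def center_of_gravity(positions: list[tuple[float, float]]) -> tuple[float, float]:
    n = len(positions)
    return (sum(x for x, _ in positions) / n,
            sum(y for _, y in positions) / n)

print(center_of_gravity([(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]))  # (1.0, 1.0)
```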

Graph machine learning has been extensively studied in both academia and industry. However, as the literature on graph learning booms with a vast number of emerging methods and techniques, it becomes increasingly difficult to manually design the optimal machine learning algorithm for different graph-related tasks. To tackle this challenge, automated graph machine learning, which aims to discover the best hyper-parameter and neural architecture configuration for different graph tasks/data without manual design, is attracting increasing attention from the research community. In this paper, we extensively discuss automated graph machine learning approaches, covering hyper-parameter optimization (HPO) and neural architecture search (NAS) for graph machine learning. We briefly overview existing libraries designed for either graph machine learning or automated machine learning, and then introduce in depth AutoGL, our dedicated, world-first open-source library for automated graph machine learning. Last but not least, we share our insights on future research directions for automated graph machine learning. This paper is the first systematic and comprehensive discussion of approaches, libraries, and directions for automated graph machine learning.
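
To ground the HPO half of the discussion, here is a minimal random-search loop of the kind such libraries automate. The search space, training stub, and metric are placeholders for illustration, not AutoGL's actual API:

```python
import random

# Minimal random-search HPO loop of the kind automated graph machine
# learning libraries automate. The search space, train_and_eval stub,
# and metric are illustrative placeholders, not AutoGL's API.

SEARCH_SPACE = {
    "hidden_dim": [16, 64, 256],
    "num_layers": [1, 2, 3],
    "lr": [1e-3, 5e-3, 1e-2],
    "dropout": [0.0, 0.3, 0.5],
}

def train_and_eval(config: dict) -> float:
    """Stand-in for training a GNN and returning validation accuracy."""
    rng = random.Random(str(sorted(config.items())))
    return rng.uniform(0.6, 0.9)  # placeholder score

def random_search(trials: int = 20) -> tuple[dict, float]:
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        score = train_and_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

print(random_search())
```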

Machine learning plays a role in many deployed decision systems, often in ways that are difficult or impossible for human stakeholders to understand. Explaining, in a human-understandable way, the relationship between the input and output of machine learning models is essential to the development of trustworthy machine-learning-based systems. A burgeoning body of research seeks to define the goals and methods of explainability in machine learning. In this paper, we review and categorize research on counterfactual explanations, a specific class of explanation that describes how the output of a model would have changed had its input been altered in a particular way. Modern approaches to counterfactual explainability in machine learning draw connections to established legal doctrine in many countries, making them appealing for fielded systems in high-impact areas such as finance and healthcare. We design a rubric capturing desirable properties of counterfactual explanation algorithms and comprehensively evaluate all currently proposed algorithms against it. Our rubric allows easy comparison and comprehension of the advantages and disadvantages of different approaches and serves as an introduction to the major research themes in this field. We also identify gaps and discuss promising research directions in the space of counterfactual explainability.
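
As a minimal illustration of the object being surveyed (not of any particular algorithm from the rubric), a Wachter-style search finds a counterfactual x' near x by minimizing a prediction loss plus a distance penalty; the toy model and hyper-parameters below are our own assumptions:

```python
import numpy as np

# Minimal Wachter-style counterfactual sketch for a toy logistic model:
# find x' near x whose predicted class flips, by gradient descent on
# loss = (f(x') - target)^2 + lam * ||x' - x||^2. Illustration only;
# the model parameters and hyper-parameters are assumptions.

w, b = np.array([1.5, -2.0]), 0.5          # toy model parameters

def f(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # P(class = 1)

def counterfactual(x, target=1.0, lam=0.1, lr=0.5, steps=500):
    xp = x.copy()
    for _ in range(steps):
        p = f(xp)
        # Gradient of (p - target)^2 plus gradient of lam * ||xp - x||^2.
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (xp - x)
        xp -= lr * grad
    return xp

x = np.array([-1.0, 1.0])                  # f(x) ~ 0.05 -> class 0
x_cf = counterfactual(x)
print(x, f(x), "->", x_cf, f(x_cf))        # x_cf should flip toward class 1
```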

Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans can offer in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academic and industrial circles. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure and discusses their importance and methods for defining their value ranges. The paper then focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. Next, it reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility for user-designed modules. The paper concludes with the problems that arise when HPO is applied to deep learning, a comparison of optimization algorithms, and prominent approaches for model evaluation under limited computational resources.
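
One budget-aware strategy such reviews typically cover is successive halving: evaluate many configurations cheaply, discard the worse half, and double the budget of the survivors. A minimal sketch with a stubbed objective standing in for partial training:

```python
import random

# Minimal successive-halving sketch: evaluate many configurations on a
# small budget, keep the top half, double the budget, repeat. The
# objective below is a stub standing in for partial model training.

def objective(config: float, budget: int) -> float:
    """Stand-in for validation score after `budget` epochs of training."""
    return -(config - 0.3) ** 2 + 0.01 * budget + random.gauss(0, 0.01)

def successive_halving(n: int = 16, budget: int = 1) -> float:
    configs = [random.random() for _ in range(n)]      # e.g., learning rates
    while len(configs) > 1:
        scored = sorted(configs, key=lambda c: objective(c, budget),
                        reverse=True)
        configs = scored[: len(configs) // 2]          # keep the best half
        budget *= 2                                    # double their budget
    return configs[0]

random.seed(0)
print(successive_halving())  # best surviving configuration
```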

Graphical causal inference as pioneered by Judea Pearl arose from research on artificial intelligence (AI), and for a long time had little connection to the field of machine learning. This article discusses where links have been and should be established, introducing key concepts along the way. It argues that the hard open problems of machine learning and AI are intrinsically related to causality, and explains how the field is beginning to understand them.
