The Institute of Materials and Processes (IMP) of the University of Applied Sciences in Karlsruhe, Germany, in cooperation with VDI Verein Deutscher Ingenieure e.V., the AEN Automotive Engineering Network, and their cooperation partners, presented their competences in AI-based solution approaches for the production engineering field. The online congress KI 4 Industry, held on November 12 and 13, 2020, showed what opportunities the use of artificial intelligence offers for medium-sized manufacturing companies (SMEs) and where potential fields of application lie. The main purpose of KI 4 Industry is to increase the transfer of knowledge, research, and technology from universities to small and medium-sized enterprises, to demystify the term AI, and to encourage companies to use AI-based solutions in their own value chains or in their products.
Ensuring safe driving for connected and autonomous vehicles (CAVs) remains a widespread concern, despite the various sophisticated functions that artificial intelligence delivers for in-vehicle devices. Moreover, diverse malicious network attacks are becoming ubiquitous with the worldwide rollout of the Internet of Vehicles, exposing a range of reliability and privacy threats for managing data in CAV networks. Combined with the limited capability of existing CAVs to handle intensive computation tasks, this implies a need for an efficient assessment system that guarantees autonomous driving safety without compromising data security. Motivated by this, in this article we propose a novel framework, Blockchain-enabled intElligent Safe-driving assessmenT (BEST), which offers a smart and reliable approach for conducting safe-driving supervision while protecting vehicular information. Specifically, we introduce a solution that exploits a long short-term memory (LSTM) model to assess the safety level of moving CAVs. We then investigate how a distributed blockchain achieves adequate trustworthiness and robustness for CAV data by adopting a Byzantine fault tolerance-based delegated proof-of-stake consensus mechanism. Simulation results demonstrate that BEST attains better data credibility and higher prediction accuracy for vehicular safety assessment than existing schemes. Finally, we discuss several open challenges that need to be addressed in future CAV networks.
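To make the assessment step concrete, here is a minimal sketch, assuming PyTorch, of how an LSTM could map a window of CAV telemetry to a safety-level prediction; the feature count, layer sizes, and three safety classes are illustrative assumptions, not the configuration of the BEST model.

```python
# Hypothetical sketch of an LSTM-based safety-level classifier for CAV
# telemetry; feature names, dimensions, and class count are assumptions.
import torch
import torch.nn as nn

class SafetyLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, time, n_features)
        out, _ = self.lstm(x)         # hidden state at every time step
        return self.head(out[:, -1])  # classify from the final step

# e.g. 20 time steps of speed, acceleration, heading, spacing, ...
scores = SafetyLSTM()(torch.randn(8, 20, 6))  # (8, 3) class logits
```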
In recent years, the graph model has become increasingly popular, especially in the application domain of social networks. The model has been semantically augmented with properties and labels attached to the graph elements. Because the model does not require a schema, however, it is difficult to ensure data quality for the properties and the data structure. In this paper, we propose a schema-bound Typed Graph Model with properties and labels. These enhancements improve not only data quality but also the quality of graph analysis. The power of this model comes from hyper-nodes and hyper-edges, which allow data structures to be presented at different abstraction levels. We prove that the model is at least equal in expressive power to the most popular data models. Therefore, it can be used as a supermodel for model management and data integration. We illustrate by example the superiority of this model over the property graph data model of Hidders and other prevalent data models, namely the relational, object-oriented, and XML models and RDF Schema.
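As a rough illustration of the idea, the following sketch, with hypothetical names and not the paper's formal model, shows how a schema-bound typed graph might validate node properties against a declared type, and how a hyper-node can wrap a group of nodes so it appears as a single node at a higher abstraction level.

```python
# Illustrative sketch (not from the paper): node types carry a property
# schema, nodes are checked against it, and a hyper-node nests sub-nodes.
from dataclasses import dataclass, field

@dataclass
class NodeType:
    name: str
    properties: dict  # property name -> Python type, e.g. {"age": int}

@dataclass
class Node:
    type: NodeType
    props: dict

    def __post_init__(self):  # schema check: reject ill-typed properties
        for k, v in self.props.items():
            expected = self.type.properties.get(k)
            if expected is None or not isinstance(v, expected):
                raise TypeError(f"property {k!r} violates type {self.type.name}")

@dataclass
class HyperNode(Node):
    members: list = field(default_factory=list)  # nested sub-nodes

person = NodeType("Person", {"name": str, "age": int})
team = NodeType("Team", {"name": str})
alice = Node(person, {"name": "Alice", "age": 30})
dev_team = HyperNode(team, {"name": "Dev"}, members=[alice])
```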
This paper introduces ALADIN-$\alpha$, an open-source software package for distributed and decentralized non-convex optimization. ALADIN-$\alpha$ is a MATLAB implementation of tailored variants of the Augmented Lagrangian Alternating Direction Inexact Newton (ALADIN) algorithm. Its user interface is convenient for rapid prototyping of non-convex distributed optimization algorithms. An improved version of the recently proposed bi-level variant of ALADIN is included, enabling decentralized non-convex optimization with reduced information exchange. A collection of examples from different application fields, including chemical engineering, robotics, and power systems, underpins the potential of ALADIN-$\alpha$.
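For intuition about the kind of iteration ALADIN implements, below is a toy Python sketch of an ALADIN-style loop for two scalar agents coupled by a consensus constraint: parallel augmented-Lagrangian subproblems followed by a coordination step. The cost functions, penalty parameter, and closed-form scalar KKT solve are illustrative assumptions; this is not the toolbox's MATLAB interface.

```python
# Toy ALADIN-style iteration for min f1(x) + f2(x) via two agents with
# consensus constraint x1 - x2 = 0 (A = [1, -1]); all values illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

f = [lambda v: (v - 1.0) ** 2, lambda v: 2.0 * (v + 2.0) ** 2]
grad = [lambda v: 2.0 * (v - 1.0), lambda v: 4.0 * (v + 2.0)]
hess = np.array([2.0, 4.0])
A = np.array([1.0, -1.0])
x, lam, rho = np.zeros(2), 0.0, 10.0

for _ in range(5):
    # 1) parallel local augmented-Lagrangian subproblems
    y = np.array([
        minimize_scalar(lambda v, i=i: f[i](v) + lam * A[i] * v
                        + 0.5 * rho * (v - x[i]) ** 2).x
        for i in range(2)
    ])
    g = np.array([grad[i](y[i]) for i in range(2)])
    # 2) coordination QP, solved in closed form for this scalar case
    lam = (A @ y - np.sum(A * g / hess)) / np.sum(A ** 2 / hess)
    x = y - (g + A * lam) / hess

print(x)  # both entries approach the minimizer of f1 + f2, i.e. -1.0
```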
Fast-developing artificial intelligence (AI) technology has enabled various applied systems deployed in the real world, impacting people's everyday lives. However, many current AI systems have been found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, and so on, which not only degrades the user experience but also erodes society's trust in all AI systems. In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems. We first introduce a theoretical framework of important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, alignment with human values, and accountability. We then survey leading industry approaches in these aspects. To unify the currently fragmented approaches to trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems, ranging from data acquisition, through model development and deployment, to continuous monitoring and governance. Within this framework, we offer concrete action items for practitioners and societal stakeholders (e.g., researchers and regulators) to improve AI trustworthiness. Finally, we identify key opportunities and challenges in the future development of trustworthy AI systems, where we see the need for a paradigm shift towards comprehensively trustworthy AI systems.
Human-Centered AI (HCAI) refers to the research effort that aims to design and implement AI techniques to support various human tasks, while taking human needs into consideration and preserving human control. In this short position paper, we illustrate how we approach HCAI using a series of research projects around Data Science (DS) work as a case study. The AI techniques built to support DS work are collectively referred to as AutoML systems, and their goal is to automate parts of the DS workflow. We illustrate a three-step systematic research approach (i.e., explore, build, and integrate) and four practical ways of implementing HCAI systems. We argue that our work is a cornerstone towards the ultimate future of Human-AI Collaboration for DS and beyond, in which AI and humans can take complementary and indispensable roles to achieve better outcomes and experiences.
Data management, which encompasses activities and strategies related to the storage, organization, and description of data and other research materials, helps ensure the usability of datasets -- both for the original research team and for others. When contextualized as part of a research workflow, data management practices can provide an avenue for promoting other practices, including those related to reproducibility and those that fall under the umbrella of open science. Not all research data needs to be shared, but all should be well managed to establish a record of the research process.
Citation indexes are by now part of the research infrastructure in use by most scientists: a necessary tool in order to cope with the increasing amounts of scientific literature being published. Commercial citation indexes are designed for the sciences and have uneven coverage and unsatisfactory characteristics for humanities scholars, while no comprehensive citation index is published by a public organization. We argue that an open citation index for the humanities is desirable, for four reasons: it would greatly improve and accelerate the retrieval of sources, it would offer a way to interlink collections across repositories (such as archives and libraries), it would foster the adoption of metadata standards and best practices by all stakeholders (including publishers) and it would contribute research data to fields such as bibliometrics and science studies. We also suggest that the citation index should be informed by a set of requirements relevant to the humanities. We discuss four: source coverage must be comprehensive, including books and citations to primary sources; there needs to be chronological depth, as scholarship in the humanities remains relevant over time; the index should be collection-driven, leveraging the accumulated thematic collections of specialized research libraries; and it should be rich in context in order to allow for the qualification of each citation, for example by providing citation excerpts. We detail the fit-for-purpose research infrastructure which can make the humanities citation index a reality. Ultimately, we argue that a citation index for the humanities can be created by humanists, via a collaborative, distributed and open effort.
The use of digital resources has been increasing in every instance of today's society, be it for business or even ludic purposes. Despite this ever-increasing use of technologies as interfaces in all fields, users' perception seems to receive too little attention in this context. This work presents a case study on the evaluation of Embodied Conversational Agents (ECAs). We propose a Long-Term Interaction (LTI) to evaluate our conversational agent's effectiveness through user perception, and we compare it with Short-Term Interactions (STIs) performed by three users. Results show that many different aspects of users' perception of the chosen ECA (i.e., Arthur) could be evaluated in our case study, and in particular that LTI and STI are both important for a better understanding of an ECA's impact on UX.
To address the sparsity and cold start problem of collaborative filtering, researchers usually make use of side information, such as social networks or item attributes, to improve recommendation performance. This paper considers the knowledge graph as the source of side information. To address the limitations of existing embedding-based and path-based methods for knowledge-graph-aware recommendation, we propose Ripple Network, an end-to-end framework that naturally incorporates the knowledge graph into recommender systems. Similar to actual ripples propagating on the surface of water, Ripple Network stimulates the propagation of user preferences over the set of knowledge entities by automatically and iteratively extending a user's potential interests along links in the knowledge graph. The multiple "ripples" activated by a user's historically clicked items are thus superposed to form the preference distribution of the user with respect to a candidate item, which could be used for predicting the final clicking probability. Through extensive experiments on real-world datasets, we demonstrate that Ripple Network achieves substantial gains in a variety of scenarios, including movie, book and news recommendation, over several state-of-the-art baselines.
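A rough numpy sketch of the propagation idea follows: starting from a user's clicked items, each "ripple" attends over knowledge-graph triples whose heads lie in the current seed set and aggregates tail embeddings into a preference vector. The toy triples, single shared relation matrix, and two-hop depth are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of RippleNet-style preference propagation; all
# entities, dimensions, and the single relation matrix R are assumptions.
import numpy as np

d = 16
rng = np.random.default_rng(0)
emb = {e: rng.normal(size=d) for e in ["movieA", "director", "movieB"]}
R = rng.normal(size=(d, d))  # one relation matrix, for simplicity

def ripple_hop(seeds, triples, item_v):
    """One 'ripple': attend over (head, tail) triples whose head is a seed."""
    heads = [h for h, t in triples if h in seeds]
    tails = [t for h, t in triples if h in seeds]
    # relevance of the candidate item to each seed head under relation R
    logits = np.array([item_v @ R @ emb[h] for h in heads])
    p = np.exp(logits - logits.max()); p /= p.sum()
    # preference vector: attention-weighted sum of tail embeddings
    return sum(pi * emb[t] for pi, t in zip(p, tails)), set(tails)

triples = [("movieA", "director"), ("director", "movieB")]
item_v = emb["movieB"]                                # candidate item
o1, seeds = ripple_hop({"movieA"}, triples, item_v)   # hop 1 from clicks
o2, _ = ripple_hop(seeds, triples, item_v)            # hop 2
user_v = o1 + o2                                      # superposed ripples
score = 1 / (1 + np.exp(-user_v @ item_v))            # click probability
```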
Machine learning is a widely used method for generating predictions. These predictions are more accurate when the model is trained on a larger dataset. On the other hand, the data is usually divided amongst different entities. For privacy reasons, the training can be done locally and the models can then be safely aggregated amongst the participants. However, if there are only two participants in \textit{Collaborative Learning}, safe aggregation loses its power, since the output of the training already contains much information about the participants. To resolve this issue, the participants must employ privacy-preserving mechanisms, which inevitably affect the accuracy of the model. In this paper, we model the training process as a two-player game in which each player aims to achieve higher accuracy while preserving its privacy. We introduce the notion of \textit{Price of Privacy}, a novel approach to measuring the effect of privacy protection on the accuracy of the model. We develop a theoretical model for different player types, and under some assumptions we either find a Nash Equilibrium or prove that one exists. Moreover, we validate these assumptions via a Recommendation Systems use case: for a specific learning algorithm, we apply three privacy-preserving mechanisms to two real-world datasets. Finally, as complementary work for the designed game, we interpolate the relationship between privacy and accuracy for this use case and present three other methods to approximate it in a real-world scenario.
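As a sketch of how such a game and a Price-of-Privacy-style metric might fit together, the following Python toy lets each player choose a privacy-mechanism strength, iterates best responses toward a fixed point, and computes the relative accuracy sacrificed for privacy. The payoff shape and weights are illustrative assumptions, not the paper's model.

```python
# Toy two-player privacy game; payoff trades joint-model accuracy (hurt by
# both players' noise) against one's own privacy loss. Weights are assumed.
import numpy as np

levels = np.linspace(0.0, 1.0, 11)  # privacy-mechanism strength choices
c = 0.6                             # assumed weight on privacy loss

def payoff(my, other):
    accuracy = 1.0 - 0.5 * (my + other)  # more noise, worse shared model
    privacy_loss = 1.0 - my              # less noise, more leakage
    return accuracy - c * privacy_loss

def best_response(other):
    return levels[np.argmax([payoff(m, other) for m in levels])]

# iterate best responses; a fixed point is a Nash Equilibrium of this grid game
a = b = 0.0
for _ in range(20):
    a, b = best_response(b), best_response(a)
print(a, b)

def price_of_privacy(acc_free, acc_private):
    """Relative accuracy sacrificed when privacy protection is enabled."""
    return (acc_free - acc_private) / acc_free
```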