The digitalization of railway systems should increase the efficiency of train operation to meet future mobility challenges and climate goals. However, this digitalization also brings several new challenges for providing secure and reliable train operation. The work resulting in this paper tackles two major challenges. First, there is no single university curriculum combining computer science, railway operation, and certification processes. Second, many railway processes are still manual, carried out without digital tools, and result in static implementations and configurations of railway infrastructure devices. This case study took place as part of the Digital Rail Summer School 2021, a university course combining the three mentioned aspects as a cooperation of several German universities with partners from the railway industry. It passes through all steps from a digital Control-Command and Signalling (CCS) planning in ProSig 7.3, through the transfer and validation of the planning in the PlanPro data format and toolbox, to the generation of interlocking code for the digital CCS planning, contributing to the vision of test automation. This paper contributes the experiences of the case study and a proof of concept of the whole lifecycle for the Digital Testfield of Deutsche Bahn in Scheibenberg. This proof of concept will be continued in ongoing and follow-up projects to fulfill the vision of test automation and the automated commissioning of new devices.
Onboard autonomy technologies such as planning and scheduling, identification of scientific targets, and content-based data summarization will lead to exciting new space science missions. However, the challenge of operating missions with such onboard autonomous capabilities has not been studied in sufficient detail for consideration in mission concepts. These autonomy capabilities will require changes to current operations processes, practices, and tools. We have developed a case study to assess the changes needed to enable operators and scientists to operate an autonomous spacecraft by facilitating a common model between the ground personnel and the onboard algorithms. We assess the new operations tools and workflows necessary to enable operators and scientists to convey their desired intent to the spacecraft, and to reconstruct and explain the decisions made onboard and the state of the spacecraft. Mock-ups of these tools were used in a user study to understand the effectiveness of the processes and tools in enabling a shared framework of understanding, and the ability of the operators and scientists to effectively achieve mission science objectives.
Examinations of any experiment involving living organisms require justification of the need for and moral defensibility of the study. Statistical planning, design, and sample size calculation of the experiment are no less important review criteria than general medical and ethical points to consider. Errors made in the statistical planning and data evaluation phase can have severe consequences for both results and conclusions. They might proliferate and thus impact future trials, an unintended outcome of fundamental research with profound ethical consequences. Therefore, to be considered approvable, any trial must be efficient in both a medical and a statistical way in answering the questions of interest. Unified statistical standards are currently missing for animal review boards in Germany. To accompany this process, we developed a biometric form to be filled in and handed in with the proposal at the local authority on animal welfare. It addresses relevant points to consider for the biostatistical planning of animal experiments and can help both the applicants and the reviewers in overseeing the entire experiment(s) planned. Furthermore, the form might also aid in meeting the current standards set by the 3+3R's principle of animal experimentation (Replacement, Reduction, Refinement, as well as Robustness, Registration, and Reporting). The form is already in use by the local authority on animal welfare in Berlin, Germany. In addition, we provide a reference to our user guide, which gives more detailed explanations and examples for each section of the biometric form. Unifying the set of biostatistical aspects will hold both the applicants and the reviewers to equal standards and increase the quality of preclinical research projects, including translational, multicenter, and international studies.
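Since sample size calculation is among the points such a biometric form addresses, the following is a minimal sketch of an a priori power analysis for a two-group comparison using the statsmodels library; the effect size, significance level, and power values are illustrative assumptions, not values prescribed by the form.

```python
# A minimal sketch of an a priori sample size calculation for a
# two-group animal experiment; all numbers are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=1.0,           # assumed standardized effect (Cohen's d)
    alpha=0.05,                # two-sided significance level
    power=0.8,                 # desired statistical power
    alternative="two-sided",
)
print(f"Animals required per group: {n_per_group:.1f}")  # round up in practice
```

In practice, the computed group size is rounded up, and the assumed effect size should be justified from pilot data or the literature.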
The global pandemic of COVID-19 has fundamentally changed how people interact, especially with the introduction of technology-based measures that aim at curbing the spread of the virus. As the country that currently implements one of the tightest technology-based COVID prevention policies, China has protected its citizens with a prolonged peaceful period of zero cases as well as fast reactions to potential upsurges of the disease. However, such mobile-based technology does come with sacrifices, especially for senior citizens who find it difficult to adapt to modern technologies. In this study, we demonstrate that most senior citizens find it difficult to use the health code app called "JKM", to which they respond by cutting down on travel and reducing local commuting to locations where verification of the JKM is required. Such compromises have physical and mental consequences and lead to inequalities in infrastructure, social isolation, and reduced self-sufficiency. As we illustrate in the paper, this decrease in the life quality of senior citizens could be greatly reduced if improvements to the user interaction of the JKM were implemented. To the best of our knowledge, this is the first systematic study of digital inequality caused by mobile-based COVID prevention technologies for senior citizens in China. As similar technologies become widely adopted around the world, we wish to shed light on how widened digital inequality increasingly affects the life quality of senior citizens in the pandemic era.
The next revolution of industry will reshape industries, as well as the entire society, into a human-centric form. The presence and impact of human beings in industrial systems and processes will be magnified more than ever before. To cope with the emerging challenges raised by this revolution, 6G will massively deploy digital twins to merge the cyber-physical-human worlds; novel solutions for multi-sensory human-machine interfaces will play a key role in this strategy.
This paper deals with a general class of algorithms for the solution of fixed-point problems that we refer to as \emph{Anderson--Pulay acceleration}. This family includes the DIIS technique and its variant sometimes called commutator-DIIS, both introduced by Pulay in the 1980s to accelerate the convergence of self-consistent field procedures in quantum chemistry, as well as the related Anderson acceleration which dates back to the 1960s, and the wealth of techniques they have inspired. Such methods aim at accelerating the convergence of any fixed-point iteration method by combining several iterates in order to generate the next one at each step. This extrapolation process is characterised by its \emph{depth}, i.e. the number of previous iterates stored, which is a crucial parameter for the efficiency of the method. It is generally fixed to an empirical value. In the present work, we consider two parameter-driven mechanisms to let the depth vary along the iterations. In the first one, the depth grows until a certain nondegeneracy condition is no longer satisfied; then the stored iterates (save for the last one) are discarded and the method "restarts". In the second one, we adapt the depth continuously by eliminating at each step some of the oldest, less relevant, iterates. In an abstract and general setting, we prove under natural assumptions the local convergence and acceleration of these two adaptive Anderson--Pulay methods, and we show that one can theoretically achieve a superlinear convergence rate with each of them. We then investigate their behaviour in quantum chemistry calculations. These numerical experiments show that both adaptive variants exhibit a faster convergence than a standard fixed-depth scheme, and require on average less computational effort per iteration. This study is complemented by a review of known facts on the DIIS, in particular its link with the Anderson acceleration and some multisecant-type quasi-Newton methods.
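To make the extrapolation step concrete, below is a minimal sketch of Anderson acceleration with a restart rule in the spirit of the first adaptive variant: the depth grows until a conditioning test (standing in here for the nondegeneracy condition) fails, at which point all stored iterates except the last are discarded. The fixed-point map g, the tolerances, and the conditioning threshold are illustrative assumptions, not the paper's exact criteria.

```python
# A minimal sketch of (type-II) Anderson acceleration with a "restart" rule.
# The conditioning test stands in for the nondegeneracy condition; g is any
# fixed-point map on NumPy arrays. All thresholds are illustrative.
import numpy as np

def anderson_restart(g, x0, max_iter=100, tol=1e-10, cond_max=1e8):
    xs, fs = [x0], [g(x0) - x0]               # stored iterates and residuals
    for _ in range(max_iter):
        if np.linalg.norm(fs[-1]) < tol:
            break
        m = len(xs) - 1                        # current depth
        if m == 0:
            x_new = xs[-1] + fs[-1]            # plain fixed-point step
        else:
            # Least-squares combination of stored residuals,
            # written in residual-difference form.
            dF = np.column_stack([fs[-1] - fs[j] for j in range(m)])
            if np.linalg.cond(dF) > cond_max:  # nondegeneracy test fails:
                xs, fs = [xs[-1]], [fs[-1]]    # discard all but the last iterate
                continue                       # and "restart"
            gamma, *_ = np.linalg.lstsq(dF, fs[-1], rcond=None)
            alpha = np.zeros(m + 1)
            alpha[:m], alpha[m] = gamma, 1.0 - gamma.sum()
            # Next iterate: combination of the g-images of stored iterates.
            x_new = sum(a * (x + f) for a, x, f in zip(alpha, xs, fs))
        xs.append(x_new)
        fs.append(g(x_new) - x_new)
    return xs[-1]
```

In a self-consistent field setting, g would be one SCF cycle; the second adaptive variant would instead prune the oldest stored pairs at every step rather than discarding the whole history at once.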
Natural language inference (NLI) is a fundamental NLP task, investigating the entailment relationship between two texts. Popular NLI datasets present the task at the sentence level. While adequate for testing semantic representations, they fall short of testing contextual reasoning over long texts, which is a natural part of the human inference process. We introduce ConTRoL, a new dataset for ConTextual Reasoning over Long texts. Consisting of 8,325 expert-designed "context-hypothesis" pairs with gold labels, ConTRoL is a passage-level NLI dataset with a focus on complex contextual reasoning types such as logical reasoning. It is derived from competitive selection and recruitment tests (verbal reasoning tests) for police recruitment and is of expert-level quality. Compared with previous NLI benchmarks, the materials in ConTRoL are much more challenging, involving a range of reasoning types. Empirical results show that state-of-the-art language models perform far worse than educated humans. Our dataset can also serve as a test set for downstream tasks like checking the factual correctness of summaries.
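As a minimal sketch of how a passage-level "context-hypothesis" pair can be scored with an off-the-shelf NLI model, the snippet below uses a publicly available MNLI checkpoint via the transformers pipeline; the checkpoint and the example texts are illustrative assumptions, not an official ConTRoL baseline.

```python
# A minimal sketch of scoring one passage-level "context-hypothesis" pair
# with an off-the-shelf NLI model; the checkpoint and the texts are
# placeholders, not an official ConTRoL baseline.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

# Placeholder passage and hypothesis, standing in for a ConTRoL pair.
context = "Officers on patrol duty must report any incident within one hour."
hypothesis = "An officer who witnesses an incident may wait a day to report it."

result = nli({"text": context, "text_pair": hypothesis})
print(result)  # e.g. a label such as CONTRADICTION with a confidence score
```

Sentence-level checkpoints like this one are exactly what the benchmark stresses: long, multi-sentence contexts tend to degrade their predictions, which is the gap ConTRoL is designed to measure.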
Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks are still challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academic and industrial areas. This paper provides a review of the most essential topics on HPO. The first section introduces the key hyper-parameters related to model training and structure and discusses their importance and methods for defining their value ranges. Then, the research focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. This study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, feasibility with major deep learning frameworks, and extensibility for new modules designed by users. The paper concludes with problems that exist when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
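As a minimal illustration of the kind of search such algorithms perform, the sketch below implements plain random search over a small hyper-parameter space; the search space and the stand-in objective are illustrative assumptions, not a recommendation from the survey.

```python
# A minimal sketch of random-search hyper-parameter optimization over a
# small search space; the objective is a stand-in for an actual
# train-and-validate run of a model.
import math
import random

SEARCH_SPACE = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -1),   # log-uniform
    "batch_size": lambda: random.choice([16, 32, 64, 128]),
    "num_layers": lambda: random.randint(1, 6),
}

def validation_score(config):
    # Placeholder objective: in practice, train a model with `config`
    # and return its validation metric.
    return -abs(math.log10(config["learning_rate"]) + 3) - 0.01 * config["num_layers"]

best_config, best_score = None, float("-inf")
for _ in range(50):                                          # trial budget
    config = {name: sample() for name, sample in SEARCH_SPACE.items()}
    score = validation_score(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config, best_score)
```

More sophisticated methods covered by such reviews (Bayesian optimization, Hyperband, evolutionary search) replace the uniform sampling above with a model of which configurations look promising, but the evaluate-and-compare loop is the same.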
The Everyday Sexism Project documents everyday examples of sexism reported by volunteer contributors from all around the world. It collected 100,000 entries in 13+ languages within the first 3 years of its existence. The content of reports in various languages submitted to Everyday Sexism is a valuable source of crowdsourced information with great potential for feminist and gender studies. In this paper, we take a computational approach to analyze the content of reports. We use topic-modelling techniques to extract emerging topics and concepts from the reports, and to map the semantic relations between those topics. The resulting picture closely resembles and adds to that arrived at through qualitative analysis, showing that this form of topic modeling could be useful for sifting through datasets that had not previously been subject to any analysis. More precisely, we derive a map of topics at two different resolutions of our topic model and discuss the connections between the identified topics. In the low-resolution picture, for instance, we found Public space/Street, Online, Work related/Office, Transport, School, Media harassment, and Domestic abuse. Among these, the strongest connection is between Public space/Street harassment and Domestic abuse and sexism in personal relationships. The strength of the relationships between topics illustrates the fluid and ubiquitous nature of sexism, with no single experience being unrelated to another.
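As a minimal sketch of the kind of topic extraction described here, the snippet below fits an LDA model over a handful of placeholder reports with scikit-learn; the documents, the number of topics (playing the role of the "resolution"), and the pipeline details are illustrative assumptions, not the study's actual setup.

```python
# A minimal sketch of extracting topics from a collection of reports with
# LDA; the example documents are placeholders, not Everyday Sexism data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reports = [
    "A man followed me down the street shouting comments.",
    "My manager keeps making remarks about my appearance at work.",
    "Someone sent me harassing messages online after I posted a photo.",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(reports)

# n_components controls the topic "resolution": fewer topics give the
# coarse picture, more topics the fine-grained one.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
```

Relations between topics, as in the map described above, can then be estimated, for example, from how often topics co-occur within the same reports.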
Conversation interfaces (CIs), or chatbots, are a popular form of intelligent agents that engage humans in task-oriented or informal conversation. In this position paper and demonstration, we argue that chatbots working in dynamic environments, such as with sensor data, can not only serve as a promising platform for researching issues at the intersection of learning, reasoning, representation, and execution for goal-directed autonomy, but can also handle non-trivial business applications. We explore the underlying issues in the context of Water Advisor, a preliminary multi-modal conversation system that can access and explain water quality data.
Querying graph-structured data is a fundamental operation that enables important applications including knowledge graph search, social network analysis, and cyber-network security. However, the growing size of real-world data graphs poses severe challenges for graph databases in meeting the response-time requirements of these applications. Planning the computational steps of query processing, known as query planning, is central to addressing these challenges. In this paper, we study the problem of learning to speed up query planning in graph databases, with the goal of improving the computational efficiency of query processing via training queries. We present a Learning to Plan (L2P) framework that is applicable to a large class of query reasoners that follow the Threshold Algorithm (TA) approach. First, we define a generic search space over candidate query plans and identify target search trajectories (query plans) corresponding to the training queries by performing an expensive search. Subsequently, we learn greedy search control knowledge to imitate the search behavior of the target query plans. We provide a concrete instantiation of our L2P framework for STAR, a state-of-the-art graph query reasoner. Our experiments on benchmark knowledge graphs including DBpedia, YAGO, and Freebase show that, using the query plans generated by the learned search control knowledge, we can significantly improve the speed of STAR with negligible loss in accuracy.
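As a minimal sketch of the imitation-learning idea behind such a framework, the snippet below trains a scorer on features of candidate plan steps, labeled by whether the expensive search chose them, and then builds plans greedily; all data structures, features, and helper callables are illustrative assumptions, not the L2P/STAR implementation.

```python
# A minimal sketch of learning greedy search control by imitation:
# a classifier scores candidate next steps of a query plan, trained on
# steps chosen by an expensive search on training queries. All details
# are illustrative assumptions, not the L2P/STAR implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder training data: feature vectors of candidate steps, labeled 1
# if the expensive search chose that step at that state, else 0.
X_train = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.6]])
y_train = np.array([1, 0, 1, 0])

scorer = LogisticRegression().fit(X_train, y_train)

def greedy_plan(initial_state, candidates, features, apply_step, is_complete):
    """Build a query plan by repeatedly taking the highest-scoring step.

    `candidates`, `features`, `apply_step`, and `is_complete` are
    reasoner-specific callables supplied by the caller.
    """
    state, plan = initial_state, []
    while not is_complete(state):
        steps = candidates(state)
        scores = scorer.predict_proba(
            [features(state, s) for s in steps])[:, 1]
        best = steps[int(np.argmax(scores))]
        plan.append(best)
        state = apply_step(state, best)
    return plan
```

The key trade-off this captures is that the expensive search is run only offline on training queries, while at query time only the cheap learned scorer is evaluated.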