Blockchain uses cryptographic proof in place of trusted third parties to ensure the correctness of information, allowing any two willing parties to transact directly with each other. Smart contracts are pieces of code that reside on the blockchain and can be triggered to execute transactions when predefined conditions are satisfied. Because smart contracts are commonly used for commercial transactions on blockchains, their security is particularly important. Over the last few years, we have seen a great deal of academic and practical interest in detecting and repairing vulnerabilities in smart contracts developed for the Ethereum blockchain. In this paper, we conduct an empirical study on historical bug-fixing versions of 46 real-world smart contract projects from GitHub, providing a multi-faceted discussion. We mainly explore four questions: file type and amount, fix complexity, bug distribution, and fix patches. By analyzing file type, amount, and fix complexity, we find that about 80% of the bug-related commits modified no more than one Solidity source file to fix bugs, and up to 80% of bugs in Solidity source files can be fixed with fewer than three fix actions. Modification is the most frequently used fix action, involving three lines of code on average. By using the analysis tool Mythril to detect vulnerabilities, we find that nearly 20% of the Solidity files in our dataset have or have had vulnerabilities. Finally, we find that developers may not pay enough attention to completely fixing the vulnerabilities reported by Mythril or to avoiding their reintroduction, since vulnerability types with a high repair percentage also tend to be reintroduced at a high rate.
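As an illustration of the vulnerability-detection step described above, the sketch below scans the Solidity files of a checked-out project with Mythril and reports the fraction of files with at least one finding. It assumes Mythril's `myth analyze` command-line interface with JSON output is installed; it is a minimal sketch, not the study's actual pipeline.

# Sketch: scan Solidity files in a repository with Mythril's CLI and report
# the fraction that have at least one reported issue. Assumes the `myth`
# tool is installed and accepts `analyze <file> -o json`.
import json
import pathlib
import subprocess

def scan_repo(repo_path: str) -> float:
    sol_files = list(pathlib.Path(repo_path).rglob("*.sol"))
    flagged = 0
    for sol in sol_files:
        result = subprocess.run(
            ["myth", "analyze", str(sol), "-o", "json"],
            capture_output=True, text=True,
        )
        try:
            report = json.loads(result.stdout)
        except json.JSONDecodeError:
            continue  # skip files Mythril could not analyze
        if report.get("issues"):  # any detected vulnerability
            flagged += 1
    return flagged / len(sol_files) if sol_files else 0.0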
The widespread use of information and communication technology (ICT) over the course of the last decades has been a primary catalyst behind the digitalization of power systems. Meanwhile, as the utilization rate of the Internet of Things (IoT) continues to rise along with recent advancements in ICT, the need for secure and computationally efficient monitoring of critical infrastructures like the electrical grid and the agents that participate in it is growing. A cyber-physical system such as the electrical grid may experience anomalies for a number of different reasons, including physical defects, errors in measurement and communication, cyberattacks, and other similar occurrences. The goal of this study is to highlight the most common incidents affecting power systems and to give an overview and classification of the most common anomaly detection approaches, starting at the consumer/prosumer end and working up to the primary power producers. In addition, this article discusses the methods and techniques, such as artificial intelligence (AI), that are used to identify anomalies in power systems and markets.
Voice-controlled smart speaker devices have gained a foothold in many modern households. Their prevalence, combined with their intrusion into core private spheres of life, has motivated research on security and privacy intrusions, especially those performed by third-party applications used on such devices. In this work, we take a closer look at such third-party applications from a less pessimistic angle: we consider their potential to provide personalized and secure capabilities and investigate measures to authenticate users (``PIN'', ``Voice authentication'', ``Notification'', and presence of ``Nearby devices''). To this end, we asked 100 participants to evaluate 15 application categories and 51 apps with a wide range of functions. The central questions we explored focused on: users' preferences for security and personalization for different categories of apps; the preferred security and personalization measures for different apps; and the preferred frequency of the respective measure. After an initial pilot study, we focused primarily on 7 categories of apps for which security and personalization are reported to be important; these include the three crucial categories of finance, bills, and shopping. We found that ``Voice authentication'', while not currently employed by the apps we studied, is a highly popular measure for achieving security and personalization. Many participants were open to exploring combinations of security measures to increase the protection of highly relevant apps. Here, the combination of ``PIN'' and ``Voice authentication'' was clearly the most desired one. This finding indicates that systems which seamlessly combine ``Voice authentication'' with other measures might be good candidates for future work.
The trade-off of secrecy is the difficulty of verification: contracts must be kept private, yet their compliance needs to be verified, a tension we call the secrecy-verifiability paradox. However, existing smart contracts are not designed to provide secrecy in this context without sacrificing verifiability. Without a trusted third party for notarization, a protocol for the verification of smart contracts has to be built on cryptographic primitives. We propose a blockchain-based solution that overcomes this challenge by storing verifiable evidence as accessible data on a blockchain in an appropriate manner. This solution allows the data to be verified cryptographically without revealing the data itself. In addition, with our proposal it is possible to verify contracts whose original form of existence has been destroyed, as long as the contract is genuine and the people involved remember it.
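The abstract does not spell out the exact protocol, so the following is only a minimal commitment-style sketch of the general idea: a salted hash of the contract is published on-chain as verifiable evidence, and anyone holding the contract text and the salt can later reproduce the digest without the chain ever revealing the contract itself.

# Minimal commitment-style sketch (not the paper's exact protocol): publish a
# salted hash of the contract text on-chain, then verify a privately retained
# copy against that hash without revealing the contract itself.
import hashlib
import os

def commit(contract_text: str) -> tuple[bytes, str]:
    salt = os.urandom(16)                       # kept private by the parties
    digest = hashlib.sha256(salt + contract_text.encode()).hexdigest()
    return salt, digest                         # `digest` is what goes on-chain

def verify(contract_text: str, salt: bytes, on_chain_digest: str) -> bool:
    # Anyone holding the contract and salt can reproduce the on-chain digest.
    return hashlib.sha256(salt + contract_text.encode()).hexdigest() == on_chain_digest

salt, digest = commit("Alice pays Bob 10 ETH upon delivery.")   # illustrative text
assert verify("Alice pays Bob 10 ETH upon delivery.", salt, digest)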
Emerging connected vehicle (CV) data sets have recently become commercially available, enabling analysts to develop a variety of powerful performance measures without the need to deploy field infrastructure. This paper presents several tools that use CV data to evaluate the quality of signal progression. These include both performance measures for high-level analysis and visualizations for examining the details of coordinated operation. With CV data it is possible to assess not only the movement of traffic on the corridor but also its origin-destination (O-D) path through the corridor, and the tools can be applied to selected O-D paths or to all O-D paths in the corridor. Results for real-world operation of an eight-intersection signalized arterial are presented. A series of high-level performance measures is used to evaluate overall performance by time of day and direction, with differing results by metric. Next, the details of the operation are examined with two visualization tools: a cyclic time-space diagram and an empirical platoon progression diagram. Comparing visualizations of only end-to-end journeys on the corridor with those of all journeys on the corridor reveals several features that are only visible with the latter. The study demonstrates the utility of CV trajectory data for obtaining high-level performance measures as well as for drilling down into the details of operation.
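As a rough illustration of how a cyclic time-space diagram can be derived from CV trajectories, the sketch below folds each waypoint's timestamp into the signal cycle; the column names and the assumed common cycle length are placeholders, not the authors' actual data schema.

# Illustrative sketch (not the authors' code): build the coordinates of a
# cyclic time-space diagram from CV trajectory points by folding each
# timestamp into the signal cycle.
import pandas as pd

CYCLE_LENGTH_S = 120   # assumed common cycle length of the coordinated plan

def cyclic_time_space(points: pd.DataFrame) -> pd.DataFrame:
    """points: one row per GPS waypoint, with columns
    ['trip_id', 'timestamp_s', 'distance_along_corridor_m']."""
    out = points.copy()
    # Fold absolute time into position within the cycle (0 .. CYCLE_LENGTH_S).
    out["time_in_cycle_s"] = out["timestamp_s"] % CYCLE_LENGTH_S
    return out[["trip_id", "time_in_cycle_s", "distance_along_corridor_m"]]

# Plotting distance_along_corridor_m against time_in_cycle_s, one line per
# trip_id, yields the cyclic time-space diagram described above.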
Background: The COVID-19 pandemic has had a profound impact on health, everyday life and economics around the world. An important complication that can arise in connection with a COVID-19 infection is acute kidney injury. A recent observational cohort study of COVID-19 patients treated at multiple sites of a tertiary care center in Berlin, Germany identified risk factors for the development of (severe) acute kidney injury. Since inferring results from a single study can be problematic, we validate these findings and potentially adjust the results by including external information from other studies on acute kidney injury and COVID-19. Methods: We synthesize the results of the main study with those of other trials via a Bayesian meta-analysis. The external information is used to construct a predictive distribution and to derive posterior estimates for the study of interest. We focus on several important potential risk factors for the development of acute kidney injury, such as mechanical ventilation, use of vasopressors, hypertension, obesity, diabetes, gender and smoking. Results: Our results show that, depending on the degree of heterogeneity in the data, the estimated effect sizes may be refined considerably by the inclusion of external data. Our findings confirm that mechanical ventilation and use of vasopressors are important risk factors for the development of acute kidney injury in COVID-19 patients. Hypertension also appears to be a risk factor that should not be ignored. Shrinkage weights depended to a large extent on the estimated heterogeneity in the model. Conclusions: Our work shows how external information can be used to adjust the results of a primary study using a Bayesian meta-analytic approach. How much information is borrowed from external studies depends on the degree of heterogeneity present in the model.
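The following is a simplified numerical sketch of the shrinkage idea described in the Methods: external studies are pooled into a predictive prior, which is then combined with the primary study's estimate under a normal-normal model. The effect sizes, standard errors, and heterogeneity value are placeholders, not results from the study.

# Simplified normal-normal shrinkage sketch (not the paper's full Bayesian
# meta-analysis). All numbers below are placeholders.
import numpy as np

def pooled_prior(ext_means, ext_ses, tau):
    """Precision-weighted pool of external estimates; the predictive variance
    adds the between-study heterogeneity tau^2 on top of the pooling variance."""
    w = 1.0 / (np.asarray(ext_ses) ** 2 + tau ** 2)
    mu = np.sum(w * np.asarray(ext_means)) / np.sum(w)
    var = 1.0 / np.sum(w) + tau ** 2
    return mu, var

def shrink(primary_mean, primary_se, prior_mean, prior_var):
    """Posterior mean/variance for the primary study under a normal prior."""
    w_prior, w_data = 1.0 / prior_var, 1.0 / primary_se ** 2
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * primary_mean)
    return post_mean, post_var

prior_mu, prior_var = pooled_prior([0.9, 1.3, 1.1], [0.4, 0.5, 0.3], tau=0.2)
print(shrink(primary_mean=1.6, primary_se=0.5,
             prior_mean=prior_mu, prior_var=prior_var))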
The introductory programming sequence has been the focus of much research in computing education. The recent advent of several viable and freely available AI-driven code generation tools presents several immediate opportunities and challenges in this domain. In this position paper we argue that the community needs to act quickly in deciding which opportunities can and should be leveraged and how, while also working out how to overcome or otherwise mitigate the accompanying challenges. Assuming that the effectiveness and proliferation of these tools will continue to progress rapidly, without quick, deliberate, and concerted efforts, educators will lose the ability to help shape which opportunities come to be and which challenges endure. With this paper we aim to seed this discussion within the computing education community.
Quantum machine learning has become an area of growing interest but has certain theoretical and hardware-specific limitations. Notably, the problem of vanishing gradients, or barren plateaus, renders training impossible for circuits with high qubit counts, imposing a limit on the number of qubits that data scientists can use to solve problems. Independently, angle-embedded supervised quantum neural networks have been shown to produce truncated Fourier series with a degree that depends directly on two factors: the depth of the encoding and the number of parallel qubits the encoding is applied to. The degree of the Fourier series limits model expressivity. This work introduces two new architectures whose Fourier degrees grow exponentially: the sequential and parallel exponential quantum machine learning architectures. This is achieved by efficiently using the available Hilbert space during encoding, increasing the expressivity of the quantum encoding. The exponential growth therefore makes it possible to stay in the low-qubit limit and still create highly expressive circuits that avoid barren plateaus. In practice, the parallel exponential architecture was shown to outperform existing linear architectures, reducing their final mean square error by up to 44.7% on a one-dimensional test problem. Furthermore, the feasibility of this technique was demonstrated on a trapped-ion quantum processing unit.
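A small classical illustration of why the achievable Fourier degree can grow exponentially with the number of encoding layers: if the k-th encoding rotation scales the input by w_k, the accessible frequencies are sums of the w_k with coefficients in {-1, 0, +1}. The specific scaling factors below (powers of three) are illustrative and not necessarily the paper's exact construction.

# Toy illustration (not the paper's circuits): maximum Fourier degree reachable
# when each of L single-qubit encoding layers rotates by the input x scaled
# by w_k. Accessible frequencies are sums sum_k c_k * w_k with c_k in {-1, 0, +1}.
from itertools import product

def max_degree(weights):
    return max(abs(sum(c * w for c, w in zip(coeffs, weights)))
               for coeffs in product((-1, 0, 1), repeat=len(weights)))

L = 5
linear = [1] * L                           # plain data re-uploading: degree grows as L
exponential = [3 ** k for k in range(L)]   # scaled encoding: degree grows as (3^L - 1) / 2
print(max_degree(linear), max_degree(exponential))   # 5 121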
In NLP, reusing pre-trained models instead of training from scratch has gained popularity; however, NLP models are mostly black boxes, very large, and often require significant resources. To ease reuse, models trained on large corpora are made available, and developers adapt them to different problems. In contrast, for traditional DL problems developers mostly build their models from scratch and thus have control over the choice of algorithms, data processing, model structure, hyperparameter tuning, etc. In NLP, because of the reuse of pre-trained models, developers have little to no control over such design decisions; they either apply tuning or transfer learning to pre-trained models to meet their requirements. Moreover, NLP models and their corresponding datasets are significantly larger than traditional DL models and require heavy computation. These factors often lead to bugs in systems that reuse pre-trained models. While bugs in traditional DL software have been studied intensively, the extensive reuse and black-box structure of NLP models motivate us to ask: what types of bugs occur while reusing NLP models? What are the root causes of those bugs? How do these bugs affect the system? To answer these questions, we studied the bugs reported while reusing 11 popular NLP models. We mined 9,214 issues from GitHub repositories and identified 984 bugs. We created a taxonomy of bug types, root causes, and impacts. Our observations led to several findings, including that limited access to model internals results in a lack of robustness, that a lack of input validation leads to the propagation of algorithmic and data bias, and that high resource consumption causes more crashes. Our observations suggest several bug patterns, which would greatly facilitate further efforts to reduce bugs in pre-trained models and code reuse.
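As an illustration of the mining step, the sketch below collects bug-labeled issues from a model's GitHub repository through the public REST API; the repository name and the label filter are illustrative assumptions rather than the study's exact selection criteria.

# Sketch of issue mining via the GitHub REST API: fetch all issues labeled
# "bug" from a repository, excluding pull requests. Repository and label
# choices are illustrative only.
import requests

def fetch_bug_issues(repo: str, token: str | None = None):
    issues, page = [], 1
    headers = {"Authorization": f"token {token}"} if token else {}
    while True:
        resp = requests.get(
            f"https://api.github.com/repos/{repo}/issues",
            params={"state": "all", "labels": "bug", "per_page": 100, "page": page},
            headers=headers,
        )
        if resp.status_code != 200:
            break                      # stop on rate limiting or other errors
        batch = resp.json()
        if not batch:
            break
        # Pull requests also appear in the issues endpoint; filter them out.
        issues += [i for i in batch if "pull_request" not in i]
        page += 1
    return issues

bugs = fetch_bug_issues("huggingface/transformers")   # example repository
print(len(bugs))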
Standard solvers exist for tackling discrete optimization problems. In practice, however, it is uncommon to apply them directly to the large input space typical of this class of problems. Rather, the input is preprocessed to look for simplifications and to extract the core subset of the problem space, called the kernel. This preprocessing procedure is known in parameterized complexity theory as kernelization. In this thesis, I implement parallel versions of some kernelization algorithms and evaluate their performance. The performance of a kernelization algorithm is measured either by the size of the output kernel or by the time it takes to compute the kernel. Sometimes the kernel is the same as the original input, so it is desirable to know this as soon as possible. The problem scope is limited to a particular discrete optimization problem: a version of the k-clique problem in which the nodes of the given graph are legally pre-colored using k colors. The final evaluation shows that my parallel implementations achieve over 50% improvement in efficiency for at least one of these algorithms, not only in terms of speed but also in producing a smaller kernel.
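A small sequential example of a kernelization rule for this problem (illustrative only, not necessarily one of the algorithms parallelized in the thesis): in a properly k-colored graph, every vertex of a k-clique must have a neighbor in each of the other k-1 color classes, so vertices lacking one can be removed repeatedly until a fixed point, and the surviving vertices form the kernel.

# Illustrative kernelization rule for k-clique on a properly k-colored graph:
# repeatedly delete any vertex that lacks a neighbor in some other color class.
def kernelize(adj, color, k):
    """adj: dict vertex -> set of neighbors; color: dict vertex -> 0..k-1."""
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            neighbor_colors = {color[u] for u in adj[v] if u in alive}
            needed = set(range(k)) - {color[v]}
            if not needed <= neighbor_colors:
                alive.remove(v)       # v cannot belong to any k-clique
                changed = True
    return alive                      # vertices of the kernel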
The concept of the smart grid has been introduced as a new vision of the conventional power grid that enables efficient integration of green and renewable energy technologies. Along these lines, the Internet-connected smart grid, also called the energy Internet, is emerging as an innovative approach to providing energy from anywhere at any time. The ultimate goal of these developments is to build a sustainable society. However, integrating and coordinating a large and growing number of connections can be a challenging issue for the traditional centralized grid system. Consequently, the smart grid is transforming from its centralized form to a decentralized topology. At the same time, blockchain has several excellent features that make it a promising technology for the smart grid paradigm. In this paper, we aim to provide a comprehensive survey of the application of blockchain in the smart grid. We identify the significant security challenges of smart grid scenarios that can be addressed by blockchain, and then present a number of recent blockchain-based research works in the literature that address security issues in the smart grid. We also summarize several related practical projects, trials, and products that have emerged recently. Finally, we discuss essential research challenges and future directions for applying blockchain to smart grid security issues.