The 6G standard aims to be an integral part of the future economy by providing both high-performance communication and sensing services. At terahertz (THz) frequencies, indoor campus networks can offer the highest sensing quality. Health monitoring in hospitals is expected to be one application of such networks. This work outlines a 6G-enabled monostatic phase-based system for breathing rate monitoring. Our feasibility study demonstrates motion measurement accuracy down to the micrometer level. However, we also find that the patient's pose needs to be considered for generalized applicability. Thus, a solution that leverages multiple propagation paths and beam orientations is proposed.
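For intuition on the phase-based measurement principle, the following sketch converts the phase of a monostatic round-trip signal into chest-wall displacement and extracts a breathing rate from it. The carrier frequency, sampling setup, and synthetic motion are illustrative assumptions, not parameters of the system described above.

```python
import numpy as np

C = 3e8               # speed of light (m/s)
FC = 150e9            # assumed carrier frequency (sub-THz placeholder, not from the paper)
WAVELENGTH = C / FC   # carrier wavelength (m)

def phase_to_displacement(phase_rad):
    """Convert unwrapped monostatic phase change to radial displacement.

    In a monostatic setup the wave travels to the target and back, so a
    displacement d changes the path length by 2*d and the phase by
    4*pi*d / lambda; small phase changes therefore map to micrometer-scale
    displacements at these wavelengths.
    """
    return np.unwrap(phase_rad) * WAVELENGTH / (4 * np.pi)

# Synthetic example: 0.5 mm chest motion at 0.25 Hz (15 breaths per minute).
t = np.arange(0, 20, 0.01)
true_motion = 0.5e-3 * np.sin(2 * np.pi * 0.25 * t)
phase = 4 * np.pi * true_motion / WAVELENGTH
displacement = phase_to_displacement(phase)

# Breathing rate from the dominant spectral peak of the recovered motion.
spectrum = np.abs(np.fft.rfft(displacement - displacement.mean()))
freqs = np.fft.rfftfreq(len(displacement), d=0.01)
print(f"Estimated breathing rate: {freqs[spectrum.argmax()] * 60:.1f} breaths/min")
```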
Analysis of high-dimensional data, where the number of covariates is larger than the sample size, is a topic of current interest. In such settings, an important goal is to estimate the signal level $\tau^2$ and noise level $\sigma^2$, i.e., to quantify how much of the variation in the response variable can be explained by the covariates and how much is left unexplained. This thesis considers the estimation of these quantities in a semi-supervised setting, where for many observations only the vector of covariates $X$ is given, with no response $Y$. Our main research question is: how can one use the unlabeled data to better estimate $\tau^2$ and $\sigma^2$? We consider two frameworks: a linear regression model and a linear projection model in which linearity is not assumed. In the first framework, while linear regression is used, no sparsity assumptions on the coefficients are made. In the second framework, the linearity assumption is also relaxed and we aim to estimate the signal and noise levels defined by the linear projection. We first propose a naive estimator which, under some assumptions, is unbiased and consistent in both frameworks. We then show how the naive estimator can be improved by using zero-estimators, where a zero-estimator is a statistic arising from the unlabeled data whose expected value is zero. In the first framework, we calculate the optimal zero-estimator improvement and discuss ways to approximate it. In the second framework, such optimality no longer holds, and we suggest two zero-estimators that improve the naive estimator, although not necessarily optimally. Furthermore, we show that our approach reduces the variance of general initial estimators, and we present an algorithm that can potentially improve any initial estimator. Lastly, we study the performance of the suggested methods on four datasets.
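As a rough illustration of the zero-estimator mechanism (a control-variate-style correction, not the thesis's concrete $\tau^2$ and $\sigma^2$ estimators), the toy simulation below subtracts the best multiple of a near mean-zero statistic, built with the help of abundant unlabeled covariates, from a naive labeled-data estimator and reports the variance reduction. The target quantity ($E[Y]$), the linear data-generating model, and the sample sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def zero_estimator_improvement(T, Z):
    """Subtract the best multiple of a (near) mean-zero statistic Z from T.

    c = Cov(T, Z) / Var(Z) minimizes Var(T - c*Z); because E[Z] is (close to)
    zero, the adjusted estimator keeps (approximately) the same expectation.
    """
    c = np.cov(T, Z)[0, 1] / np.var(Z)
    return T - c * Z, c

# Toy illustration: estimate E[Y] from a small labeled sample, using abundant
# unlabeled covariates to center a zero-estimator built from the labeled X.
n_reps, n_lab, n_unlab, beta = 5000, 30, 10_000, 2.0
T = np.empty(n_reps)
Z = np.empty(n_reps)
for r in range(n_reps):
    x_lab = rng.normal(loc=1.0, size=n_lab)
    y_lab = beta * x_lab + rng.normal(size=n_lab)
    x_unlab = rng.normal(loc=1.0, size=n_unlab)   # unlabeled covariates
    T[r] = y_lab.mean()                           # naive estimator of E[Y]
    Z[r] = x_lab.mean() - x_unlab.mean()          # ~mean-zero, correlated with T

T_improved, c = zero_estimator_improvement(T, Z)
print(f"variance: naive {T.var():.4f} vs improved {T_improved.var():.4f} (c = {c:.2f})")
```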
Blockchain (BC) and Computer Vision (CV) are two emerging fields with the potential to transform various sectors. BC offers decentralized and secure data storage, while CV allows machines to learn from and understand visual data. The integration of the two technologies holds massive promise for developing innovative applications that can address challenges in sectors such as supply chain management, healthcare, smart cities, and defense. This review presents a comprehensive analysis of the integration of BC and CV, examining their combination and potential applications. It also provides a detailed analysis of the fundamental concepts of both technologies, highlighting their strengths and limitations. The paper further explores current research efforts that make use of the benefits offered by this combination, including how BC can serve as an added layer of security in CV systems, ensure data integrity, and enable decentralized image and video analytics. The challenges and open issues associated with this integration are identified, and potential future directions are proposed.
Human Activity Recognition (HAR) has become one of the leading research topics of the last decade. As sensing technologies have matured and their economic costs have declined, a host of novel applications, e.g., in healthcare, industry, sports, and daily life, have become popular. The design of HAR systems requires several time-consuming processing steps, such as data collection, annotation, and model training and optimization. In particular, data annotation represents the most labor-intensive and cumbersome step in HAR, since it requires extensive and detailed manual work from human annotators. Therefore, different methodologies for automating the annotation procedure in HAR have been proposed. The annotation problem occurs in different forms and scenarios, each of which requires an individual solution. In this paper, we provide the first systematic review of data annotation techniques for HAR. By grouping existing approaches into classes and providing a taxonomy, our goal is to support the decision on which techniques can be beneficially used in a given scenario.
Prescriptive business process monitoring provides decision support to process managers on when and how to adapt an ongoing business process to prevent or mitigate an undesired process outcome. We focus on the problem of automatically reconciling the trade-off between prediction accuracy and prediction earliness in determining when to adapt. Adaptations should happen sufficiently early to provide enough lead time for the adaptation to become effective. However, earlier predictions are typically less accurate than later predictions, so acting on less accurate predictions may lead to unnecessary or missed adaptations. Different approaches to reconciling the trade-off between prediction accuracy and earliness have been presented in the literature. So far, these approaches have been compared against different baselines and evaluated on different, sometimes confidential, data sets. This limits the comparability and replicability of the approaches and makes it difficult to choose a concrete approach in practice. We perform a comparative evaluation of the main alternative approaches for reconciling the trade-off between prediction accuracy and earliness. Using four public real-world event log data sets and two types of prediction models, we assess and compare the cost savings of these approaches. The experimental results indicate which criteria affect the effectiveness of an approach and allow us to state initial recommendations for selecting a concrete approach in practice.
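To make the accuracy-earliness trade-off concrete, the sketch below evaluates a simple empirical-threshold adaptation policy under an explicit cost model. The policy, the cost parameters, and the prediction sequence are illustrative assumptions and do not correspond to any specific approach or data set from the evaluation.

```python
from dataclasses import dataclass

@dataclass
class CostModel:
    """Simplified cost parameters (illustrative values, not from any data set)."""
    undesired_outcome: float = 100.0   # cost if the case ends undesirably
    adaptation: float = 20.0           # cost of performing an adaptation
    mitigation_effect: float = 0.7     # probability the adaptation prevents the outcome

def case_cost(predictions, actual_undesired, threshold, cost=CostModel()):
    """Expected cost of one case under a simple threshold policy.

    `predictions` is the sequence of predicted probabilities of an undesired
    outcome after each executed event (earliest first). We adapt at the first
    prefix where the prediction exceeds `threshold`; earlier adaptations leave
    more lead time but rely on less accurate predictions.
    """
    for p in predictions:
        if p >= threshold:                                   # adapt now
            if not actual_undesired:
                return cost.adaptation                       # unnecessary adaptation
            # Adaptation prevents the outcome only with probability mitigation_effect.
            expected_penalty = (1 - cost.mitigation_effect) * cost.undesired_outcome
            return cost.adaptation + expected_penalty
    return cost.undesired_outcome if actual_undesired else 0.0   # never adapted

# Example: a case that ends undesirably, with predictions sharpening over time.
print(case_cost([0.35, 0.55, 0.8, 0.95], actual_undesired=True, threshold=0.6))
```

Lowering the threshold triggers earlier but less reliable adaptations; comparing the total cost across thresholds and event logs is the kind of comparison such an evaluation performs.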
We propose a method to incorporate the intensity information of a target lesion on CT scans into the training of segmentation and detection networks. We first build an intensity-based lesion probability (ILP) function from an intensity histogram of the target lesion. It is used to compute, for each voxel, the probability of belonging to the lesion based on its intensity. The computed ILP map of each input CT scan is then provided as additional supervision during network training, which aims to inform the network about possible lesion locations in terms of intensity values at no additional labeling cost. The method was applied to improve the segmentation of three different lesion types, namely, small bowel carcinoid tumor, kidney tumor, and lung nodule. The effectiveness of the proposed method on a detection task was also investigated. We observed improvements of 41.3% -> 47.8%, 74.2% -> 76.0%, and 26.4% -> 32.7% in segmenting small bowel carcinoid tumors, kidney tumors, and lung nodules, respectively, in terms of per-case Dice scores. An improvement of 64.6% -> 75.5% was achieved in detecting kidney tumors in terms of average precision. Results for different usages of the ILP map and the effect of varying amounts of training data are also presented.
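The following sketch shows one plausible way to construct such an intensity-to-probability lookup and map it onto a CT volume; the binning, value range, and normalization choices are assumptions of this illustration rather than the exact procedure used in the paper.

```python
import numpy as np

def build_ilp_function(lesion_intensities, bins=256, value_range=(-1000, 1000)):
    """Estimate an intensity-based lesion probability (ILP) lookup table.

    A histogram of lesion-voxel intensities (e.g., HU values collected from the
    training labels) is rescaled to [0, 1]; smoothing and the exact
    normalization are left out of this sketch.
    """
    hist, edges = np.histogram(lesion_intensities, bins=bins, range=value_range)
    ilp = hist.astype(float)
    ilp /= ilp.max() if ilp.max() > 0 else 1.0
    return ilp, edges

def ilp_map(ct_volume, ilp, edges):
    """Map each voxel's intensity to its ILP value via histogram-bin lookup."""
    idx = np.clip(np.digitize(ct_volume, edges) - 1, 0, len(ilp) - 1)
    return ilp[idx]

# Example: the ILP map has the same shape as the CT volume and can be supplied
# to the network as auxiliary supervision at no additional labeling cost.
rng = np.random.default_rng(0)
lesion_hu = rng.normal(60, 15, size=10_000)      # synthetic lesion intensities (HU)
ct = rng.normal(0, 200, size=(8, 64, 64))        # synthetic CT volume (HU)
ilp, edges = build_ilp_function(lesion_hu)
print(ilp_map(ct, ilp, edges).shape)             # (8, 64, 64)
```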
Secure data deletion enables data owners to fully control the erasure of their data stored on local or cloud data centers and is essential for preventing data leakage, especially in cloud storage. However, traditional data deletion based on unlinking, overwriting, or cryptographic key management is either ineffective in cloud storage or relies on impractical assumptions. In this paper, we present SevDel, a secure and verifiable data deletion scheme that leverages zero-knowledge proofs to verify the encryption of the outsourced data without retrieving the ciphertexts, while the deletion of the encryption keys is guaranteed based on Intel SGX. SevDel implements secure interfaces to perform data encryption and decryption for secure cloud storage. It also utilizes smart contracts to enforce that the operations of the cloud service provider follow the service level agreements with data owners and to penalize a service provider that discloses the cloud data on its servers. Evaluation on real-world workloads demonstrates that SevDel achieves efficient data deletion verification and maintains high bandwidth savings.
The central difficulty of drug delivery to human lungs is that the delivered dose at the local site of action is unpredictable and very difficult to measure, even a posteriori. It is highly subject-specific, as it depends on lung morphology, disease, breathing, and aerosol characteristics. Given these challenges, computational approaches have shown potential, but have so far failed due to fundamental methodological limitations. We present and validate a novel in silico model that enables subject-specific prediction of local aerosol deposition throughout the entire lung. Its unprecedented spatiotemporal resolution allows tracking of each aerosol particle at any time during the breathing cycle, anywhere in the complete system of conducting airways and the alveolar region. Predictions are shown to be in excellent agreement with in vivo SPECT/CT data for a healthy human cohort. We further showcase the model's capability to represent strong heterogeneities in diseased lungs by studying a patient with idiopathic pulmonary fibrosis (IPF). Finally, high computational efficiency and automated model generation and calibration ensure readiness for application at scale. We envision our method not only improving inhalation therapies by informing and accelerating all stages of (pre-)clinical drug and device development, but also serving as a more-than-equivalent alternative to nuclear imaging of the lungs.
Blockchain empowers a decentralized economy by enabling distributed trust in a peer-to-peer network. Decentralization is a core property of blockchain that not only ensures system security but also enables the democratization of systems and processes through the promise of a distributed peer-to-peer network. Surprisingly, however, a widely accepted definition or measurement of decentralization is still lacking. We present an SoK on blockchain decentralization focusing on measurement, quantification, and methodology. First, we establish a taxonomy for analyzing blockchain decentralization along five facets, namely consensus, network, governance, wealth, and transaction, based on a qualitative analysis of existing research. Second, we propose an index that measures and quantifies the decentralization level of blockchain across the different facets by transforming Shannon entropy. We verify the explainability of the index via comparative static simulations. We also define and discuss alternative indices, including the Gini coefficient, the Nakamoto coefficient, and the Herfindahl-Hirschman Index (HHI). We provide open-source Python code on GitHub for calculating the decentralization index and comparing the various alternatives. Third, by applying our proposed index in empirical research on top DeFi applications, we demonstrate how the three scientific methods of descriptive, predictive, and causal inference are useful in studying blockchain decentralization. Finally, we identify future research directions: 1) to explore the interactions between different facets of blockchain decentralization, 2) to design blockchain mechanisms that achieve sustainable decentralization, and 3) to study the interplay between decentralization levels and the goals of security, privacy, and efficiency. We also raise challenges in addressing the controversies in blockchain decentralization.
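For reference, the sketch below computes the Shannon entropy of a resource distribution together with the alternative indices mentioned above (Gini coefficient, Nakamoto coefficient, HHI). The entropy-to-index transform used in the paper is not reproduced here, and the stake values are hypothetical.

```python
import numpy as np

def shares(values):
    """Normalize resource amounts (stake, mining power, votes) into shares."""
    v = np.asarray(values, dtype=float)
    return v / v.sum()

def shannon_entropy(values, base=2):
    """Entropy of the distribution (higher = more decentralized); raw entropy
    is shown as a stand-in for the paper's transformed index."""
    p = shares(values)
    p = p[p > 0]
    return float(-(p * np.log(p) / np.log(base)).sum())

def nakamoto_coefficient(values, threshold=0.51):
    """Smallest number of entities whose combined share reaches `threshold`."""
    p = np.sort(shares(values))[::-1]
    return int(np.searchsorted(np.cumsum(p), threshold) + 1)

def hhi(values):
    """Herfindahl-Hirschman Index: sum of squared shares (higher = more concentrated)."""
    return float((shares(values) ** 2).sum())

def gini(values):
    """Gini coefficient of the distribution (0 = perfectly equal)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = len(v)
    return float((2 * np.arange(1, n + 1) - n - 1) @ v / (n * v.sum()))

stakes = [40, 25, 15, 10, 5, 5]   # hypothetical per-entity stake or mining power
print(shannon_entropy(stakes), nakamoto_coefficient(stakes), hhi(stakes), gini(stakes))
```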
Car pooling is expected to help significantly in reducing traffic congestion and pollution in cities by enabling drivers to share their cars with travellers who have similar itineraries and time schedules. A number of car pooling matching services have been designed to efficiently find successful ride matches in a given pool of drivers and potential passengers. However, it is now recognised that, besides simple mobility needs, many non-monetary aspects and social considerations may influence an individual's willingness to share a ride, and these are difficult to predict. To address this problem, in this study we propose GoTogether, a recommender system for car pooling services that leverages learning-to-rank techniques to automatically derive the personalised ranking model of each user from the history of her choices (i.e., the type of accepted or rejected shared rides). GoTogether then builds the list of recommended rides so as to maximise the success rate of the offered matches. To test the performance of our scheme, we use real data from Twitter and Foursquare to generate a dataset of plausible mobility patterns and ride requests in a metropolitan area. The results show that the proposed solution quickly obtains an accurate prediction of the personalised user's choice model, both in static and dynamic conditions.
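A minimal sketch of the learning-to-rank idea is shown below: a linear pairwise ranker is fitted to a user's accepted versus rejected rides and then used to order new candidates. The feature representation, the logistic pairwise loss, and the toy data are assumptions of this sketch, not the GoTogether implementation.

```python
import numpy as np

def fit_pairwise_ranker(accepted, rejected, lr=0.1, epochs=200):
    """Fit a linear pairwise ranking model from a user's past choices.

    Each row of `accepted` / `rejected` is a feature vector describing an
    offered ride (e.g., detour time, schedule overlap, social-link strength).
    The model is trained so that accepted rides outrank rejected ones.
    """
    diffs = np.array([a - r for a in accepted for r in rejected])
    w = np.zeros(diffs.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-diffs @ w))        # P(accepted ranked above rejected)
        w += lr * diffs.T @ (1.0 - p) / len(diffs)  # gradient ascent on log-likelihood
    return w

def rank_rides(w, candidates):
    """Order candidate rides by the learned score, best first."""
    return np.argsort(-(candidates @ w))

rng = np.random.default_rng(0)
accepted = rng.normal(0.5, 1.0, size=(20, 3))    # toy history of accepted rides
rejected = rng.normal(-0.5, 1.0, size=(20, 3))   # toy history of rejected rides
w = fit_pairwise_ranker(accepted, rejected)
print(rank_rides(w, rng.normal(size=(5, 3))))
```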
Data transmission between two or more digital devices in industry and government demands secure and agile technology. Digital information distribution often requires the deployment of Internet of Things (IoT) devices and data fusion techniques, which have gained popularity in both civilian and military environments, for example with the emergence of Smart Cities and the Internet of Battlefield Things (IoBT). This usually requires capturing and consolidating data from multiple sources. Because datasets do not necessarily originate from identical sensors, fused data typically results in a complex Big Data problem. Due to the potentially sensitive nature of IoT datasets, Blockchain technology is used to facilitate secure sharing of IoT datasets, allowing digital information to be distributed but not copied. However, blockchain has several limitations related to complexity, scalability, and excessive energy consumption. We propose an approach to hide information (a sensor signal) by transforming it into an image or an audio signal. As part of recent military modernization efforts, we investigate a sensor fusion approach, examine the challenges of enabling intelligent identification and detection operations, and demonstrate the feasibility of the proposed Deep Learning and Anomaly Detection models, which can support future applications such as a hand-gesture alert system based on wearable devices.
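As a minimal sketch of the signal-to-image transformation mentioned above, the snippet below packs a 1-D sensor trace into an 8-bit grayscale image; the min-max scaling, the row width, and the absence of any metadata or encryption layer are simplifying assumptions of this illustration.

```python
import numpy as np

def signal_to_image(signal, width=64):
    """Pack a 1-D sensor signal into an 8-bit grayscale image.

    Values are min-max scaled to [0, 255] and reshaped row-wise; padding
    length and scaling parameters would need to be stored separately to
    recover the exact signal.
    """
    s = np.asarray(signal, dtype=float)
    s = np.pad(s, (0, (-len(s)) % width), mode="edge")   # pad to a full row
    lo, hi = s.min(), s.max()
    if hi == lo:
        img = np.zeros(s.shape, dtype=np.uint8)
    else:
        img = np.round(255 * (s - lo) / (hi - lo)).astype(np.uint8)
    return img.reshape(-1, width)

# Example: a synthetic wearable-sensor trace becomes a small grayscale image.
t = np.linspace(0, 4, 1024)
trace = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(signal_to_image(trace).shape)   # (16, 64)
```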