Data uncertainty is inherent in many applications, and its importance is growing rapidly with the development of modern technologies. Consequently, researchers have paid increasing attention to mining patterns in uncertain databases. A few recent works attempt to mine frequent uncertain sequential patterns, but despite their success, they fail to reduce the number of false-positive patterns generated during mining and cannot maintain the discovered patterns efficiently. In this paper, we propose multiple theoretically tightened pruning upper bounds that remarkably reduce the mining space. A novel hierarchical structure is introduced to maintain the patterns in a space-efficient way. We then develop a versatile framework for mining uncertain sequential patterns that can also handle weight constraints effectively. Moreover, existing works do not scale to incremental uncertain databases; several incremental sequential pattern mining algorithms exist, but they are limited to precise databases. We therefore propose a new technique that adapts our framework to mine patterns when the database grows incrementally. Finally, we conduct extensive experiments on several real-life datasets and demonstrate the efficacy of our framework in different applications.
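To make the mining objective concrete, the sketch below computes the expected support of a sequential pattern under the standard independent-existence model of uncertain sequences; it illustrates the quantity that pruning upper bounds over-approximate, not the paper's own bounds, and all data in it is invented for the example.

```python
def prob_pattern_occurs(pattern, useq):
    """P(pattern is a subsequence of the realized sequence).

    useq: list of (item, existence_probability) pairs; events are assumed
    to exist independently (the standard uncertain-sequence model)."""
    m = len(pattern)
    f = [0.0] * (m + 1)   # f[j] = P(first j pattern items matched so far)
    f[0] = 1.0
    for item, p in useq:
        for j in range(m - 1, -1, -1):   # descending so each event is used once
            if item == pattern[j]:
                f[j + 1] += f[j] * p     # event exists and extends the match
                f[j] *= (1.0 - p)        # event absent: match length unchanged
    return f[m]

def expected_support(pattern, db):
    return sum(prob_pattern_occurs(pattern, s) for s in db)

db = [[('a', 0.9), ('b', 0.7), ('c', 0.4)],
      [('a', 0.5), ('c', 0.8)]]
print(expected_support(['a', 'c'], db))  # 0.9*0.4 + 0.5*0.8 = 0.76
```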
Noisy correspondence, which refers to mismatches in cross-modal data pairs, is prevalent in human-annotated and web-crawled datasets. Prior approaches to leveraging such data mainly apply uni-modal noisy-label learning without accounting for the impact of noise on the cross-modal and intra-modal geometrical structures in multimodal learning. In fact, we find that when well established, both structures can effectively discriminate noisy correspondence through structural differences. Inspired by this observation, we introduce a Geometrical Structure Consistency (GSC) method to infer the true correspondence. Specifically, GSC ensures the preservation of geometrical structures within and between modalities, allowing noisy samples to be accurately discriminated based on structural differences. Using the inferred true-correspondence labels, GSC then refines the learning of geometrical structures by filtering out the noisy samples. Experiments on four cross-modal datasets confirm that GSC effectively identifies noisy samples and significantly outperforms current leading methods.
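As a rough illustration of the underlying idea (not GSC's actual objective), the following sketch scores each pair by the mismatch between its intra-modal similarity structures; the embeddings and the mismatched index are invented for the example.

```python
import numpy as np

def structure_discrepancy(img_emb, txt_emb):
    """Score each (image, text) pair by the difference between its
    intra-modal neighbourhood structures; a generic illustration of
    discriminating noisy correspondence, not GSC's exact formulation."""
    I = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    T = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    S_img = I @ I.T   # image-image similarity structure
    S_txt = T @ T.T   # text-text similarity structure
    # A clean pair's image should relate to the other images the way its
    # text relates to the other texts; noisy pairs break this symmetry.
    return np.linalg.norm(S_img - S_txt, axis=1)

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 16))
txt = img + 0.05 * rng.normal(size=(8, 16))      # mostly matched pairs
txt[3] = rng.normal(size=16)                     # pair 3 is mismatched
print(structure_discrepancy(img, txt).argmax())  # likely 3
```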
Generating data with properties of interest specified by external users, while respecting the true causal relations among its intrinsic factors, is important yet has not been well addressed jointly. The difficulty lies in the long-standing challenge of jointly identifying the key latent variables, their causal relations, and their correlation with the properties of interest, as well as in leveraging these discoveries for causally controlled data generation. To address these challenges, we propose a novel deep generative framework called the Correlation-aware Causal Variational Auto-encoder (C2VAE), which simultaneously recovers the correlation and causal relationships between properties using disentangled latent vectors. Specifically, causality is captured by learning a causal graph on the latent variables through a structural causal model, while correlation is learned via a novel correlation pooling algorithm. Extensive experiments demonstrate C2VAE's ability to accurately recover the true causality and correlation, as well as its superiority in controllable data generation over baseline models.
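For readers unfamiliar with causal latent models, the sketch below shows the standard linear structural-causal-model layer commonly used to impose a causal graph on latent variables in causal VAEs generally; C2VAE's correlation pooling algorithm is not reproduced here, and the adjacency values are invented.

```python
import numpy as np

def scm_latents(eps, A):
    """Map exogenous noise eps to causal latents via z = (I - A^T)^{-1} eps,
    the standard linear-SCM layer; A is a learned DAG adjacency where
    A[i, j] != 0 means z_i -> z_j."""
    d = A.shape[0]
    return np.linalg.solve(np.eye(d) - A.T, eps)

A = np.array([[0.0, 0.8],   # z_0 causes z_1 with weight 0.8
              [0.0, 0.0]])
eps = np.array([1.0, 0.5])
z = scm_latents(eps, A)     # z_0 = 1.0, z_1 = 0.5 + 0.8 * z_0 = 1.3
print(z)
```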
The popularity of data science as a discipline and its importance in the emerging economy and industrial progress dictate that machine learning be democratized for the masses. This also means that the current practice of workforce training in machine learning tools, which demands low-level statistical and algorithmic detail, is a barrier that needs to be addressed. As with data management languages such as SQL, machine learning needs to be practiced at a conceptual level to help make it a staple tool for general users. In particular, the technical sophistication demanded by existing machine learning frameworks is prohibitive for many scientists who are not computationally savvy or well versed in machine learning techniques, and the learning curve of the needed tools is too high for them to take advantage of these powerful platforms to rapidly advance science. In this paper, we introduce a new declarative machine learning query language, called {\em MQL}, for naive users. We discuss its merits and possible ways of implementing it over a traditional relational database system, and we describe two materials science experiments implemented using MQL on MatFlow, a materials science workflow system.
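Since the abstract fixes no concrete syntax, the following is a purely hypothetical MQL statement (embedded as a Python string) suggesting how a declarative, SQL-like training query might look; the statement, table, and column names are all invented for illustration.

```python
# A purely hypothetical MQL statement; neither this syntax nor the table
# and column names appear in the abstract -- they are illustrative only.
mql_query = """
GENERATE MODEL bandgap_predictor
PREDICT band_gap
FROM materials_samples
USING CLASSIFICATION
"""
```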
Quantum computing holds the potential to solve problems that are practically unsolvable by classical computers, owing to its ability to dramatically reduce the time complexity of certain computations. We aim to harness this potential to enhance ray casting, a pivotal technique in computer graphics for simplifying the rendering of 3D objects. To perform ray casting on a quantum computer, we must encode the defining parameters of primitives into qubits. However, in the current noisy intermediate-scale quantum (NISQ) era, challenges arise from the limited number of qubits and from the noise incurred when executing many gates. Through logic optimization, we reduced the depth of our quantum circuits as well as the number of gates and qubits. As a result, the count of correct measurements obtained from an IBM quantum computer significantly exceeded that of incorrect measurements.
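As a minimal, generic illustration of the kind of circuit-level reduction described (using Qiskit's transpiler rather than the paper's own logic optimization, and a toy circuit rather than a ray-casting encoding), consider:

```python
from qiskit import QuantumCircuit, transpile

# A toy circuit with redundancies that optimization can remove.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.cx(0, 1)  # CX followed by itself cancels to identity
qc.h(0)
qc.h(0)      # H followed by H also cancels

opt = transpile(qc, optimization_level=3)
print(qc.depth(), sum(qc.count_ops().values()))    # original depth, gate count
print(opt.depth(), sum(opt.count_ops().values()))  # reduced after optimization
```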
Federated learning is highly valued for enabling high-performance computing in distributed environments while safeguarding data privacy. To address resource heterogeneity, researchers have proposed semi-asynchronous federated learning (SAFL) architectures. However, the performance gap between different aggregation targets in SAFL remains unexplored. In this paper, we systematically compare the performance of two algorithmic modes, FedSGD and FedAvg, which aggregate gradients and models, respectively. Our results across various task scenarios indicate that the two modes exhibit a substantial performance gap. Specifically, FedSGD achieves higher accuracy and faster convergence but suffers more severe fluctuations in accuracy, whereas FedAvg handles stragglers better but converges more slowly with reduced accuracy.
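The two aggregation targets can be contrasted in a few lines; this is a minimal synchronous sketch, omitting the staleness weighting and scheduling of a real SAFL system, with all values invented.

```python
import numpy as np

def fedsgd_step(global_w, client_grads, lr=0.1):
    """FedSGD: aggregate client GRADIENTS, then take one global step."""
    avg_grad = np.mean(client_grads, axis=0)
    return global_w - lr * avg_grad

def fedavg_step(client_weights, client_sizes):
    """FedAvg: aggregate locally trained MODELS, weighted by data size."""
    sizes = np.asarray(client_sizes, dtype=float)
    return np.average(client_weights, axis=0, weights=sizes / sizes.sum())

w = np.zeros(4)
grads = [np.ones(4), 2 * np.ones(4)]
print(fedsgd_step(w, grads))                       # one step on averaged gradient
print(fedavg_step([w + 0.1, w + 0.3], [100, 50]))  # size-weighted model average
```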
Spatial point process models are widely applied to point pattern data from various fields in the social and environmental sciences. However, a serious hurdle in fitting point process models is the presence of duplicated points, wherein multiple observations share identical spatial coordinates. This often occurs because of decisions made in the geo-coding process, such as assigning representative locations (e.g., aggregate-level centroids) to observations when data producers lack exact location information. Because spatial point process models like the Log-Gaussian Cox Process (LGCP) assume unique locations, researchers often employ {\it ad hoc} solutions (e.g., jittering) to address duplicated data before analysis. As an alternative, this study proposes a Modified Minimum Contrast (MMC) method that adapts the inference procedure to account for the effect of duplicates without needing to alter the data. The proposed MMC method is applied to LGCP models, with simulation results demonstrating the gains of our method relative to existing approaches in terms of parameter estimation. Interestingly, simulation results also show the effect of the geo-coding process on parameter estimates, which can be utilized in the implementation of the MMC method. The MMC approach is then used to infer the spatial clustering characteristics of conflict events in Afghanistan (2008-2009).
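For concreteness, the ad hoc jittering fix that the MMC method is designed to avoid looks like the sketch below; the displacement scale is an arbitrary analyst choice, which is precisely what motivates adjusting the inference procedure instead of altering the data.

```python
import numpy as np

def jitter_duplicates(coords, scale=1e-3, seed=0):
    """Displace duplicated coordinates by small Gaussian noise so the point
    pattern becomes simple; `scale` is an arbitrary analyst choice."""
    rng = np.random.default_rng(seed)
    coords = np.asarray(coords, dtype=float).copy()
    _, inverse, counts = np.unique(coords, axis=0,
                                   return_inverse=True, return_counts=True)
    dup = counts[np.ravel(inverse)] > 1   # rows whose coordinates repeat
    coords[dup] += rng.normal(0.0, scale, size=(dup.sum(), coords.shape[1]))
    return coords

pts = np.array([[1.0, 2.0], [1.0, 2.0], [3.0, 4.0]])
print(jitter_duplicates(pts))  # the two duplicated points are now distinct
```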
Multilayer networks are used to represent the interdependence between the relational data of individuals interacting with each other via different types of relationships. To study information-theoretic phase transitions in detecting a planted partition among the nodes of a multi-layer network with additional node-wise covariate information and diverging average degree, Ma and Nandy (2023) introduced the Multi-Layer Contextual Stochastic Block Model. In this paper, we consider the problem of detecting planted partitions in the Multi-Layer Contextual Stochastic Block Model when the average node degree of each network is greater than $1$. We establish the sharp phase-transition threshold for detecting the planted bi-partition: above the threshold, testing for the presence of a bi-partition is possible, whereas below it, no procedure for identifying the planted bi-partition can perform better than random guessing. We further establish that the derived detection threshold coincides with the threshold for weak recovery of the partition, and we provide a quasi-polynomial-time algorithm to estimate the partition.
The integration of technology into exercise regimens has emerged as a strategy to enhance normal human capabilities and to restore motor function after injury or illness by improving motor learning and retention. Much research has focused on how active devices, whether confined to a lab or built in a wearable format, can apply forces at set times and under set conditions to optimize learning. However, the focus on active force production often confines such devices to simple movements or interventions. In this paper, we therefore investigate how passive device behaviors can, by themselves, contribute to motor learning. Our approach uses a wearable resistance (WR) device outfitted with elastic bands to apply a force field that changes in response to a person's movements during exercise. We develop a method to measure the forces produced by the device without impeding its function, and we characterize the device's force-generation abilities. We then present a study assessing the impact of the WR device on motor learning of proper squat form compared with visual or no feedback. Biometrics such as knee and hip angles were used to monitor and assess subject performance. Our findings indicate that the force fields produced while training with the WR device can improve performance in full-body exercises similarly to a more direct visual feedback mechanism, though the improvement is not consistent across all performance metrics. Through this research, we contribute important insights into the application of passive wearable resistance technology in practical exercise settings.
The total energy cost of computing activities is steadily increasing, and projections indicate that computing will be one of the dominant global energy consumers in the coming decades. However, perhaps due to its relative youth, the video game sector has not yet developed the same level of environmental awareness as other computing technologies, despite an estimated three billion regular video game players in the world. This work evaluates the energy consumption of the most widely used industry-scale video game engines: Unity and Unreal Engine. Specifically, we use three scenarios representing relevant aspects of video games (Physics, Static Meshes, and Dynamic Meshes) to compare the energy consumption of the engines and to determine the influence of each engine on it. Our research has confirmed significant differences in the energy consumption of video game engines: 351% in Physics in favor of Unity, 17% in Static Meshes in favor of Unity, and 26% in Dynamic Meshes in favor of Unreal Engine. These results represent an opportunity for worldwide potential savings of at least 51 TWh per year, equivalent to the annual consumption of nearly 13 million European households, and might encourage a new branch of research on energy-efficient video game engines.
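A back-of-the-envelope check of the household equivalence, assuming an average European household consumes roughly 3.9 MWh per year (an assumption for this check, not a figure from the study):

```python
# Sanity check: 51 TWh per year expressed in European households.
savings_twh = 51
household_mwh = 3.9                              # assumed annual consumption
households = savings_twh * 1e6 / household_mwh   # TWh -> MWh, then divide
print(f"{households / 1e6:.1f} million households")  # ~13.1 million
```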
Analyzing observational data from multiple sources can be useful for increasing statistical power to detect a treatment effect; however, practical constraints such as privacy considerations may restrict individual-level information sharing across data sets. This paper develops federated methods that only utilize summary-level information from heterogeneous data sets. Our federated methods provide doubly-robust point estimates of treatment effects as well as variance estimates. We derive the asymptotic distributions of our federated estimators, which are shown to be asymptotically equivalent to the corresponding estimators from the combined, individual-level data. We show that to achieve these properties, federated methods should be adjusted based on conditions such as whether models are correctly specified and stable across heterogeneous data sets.
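As a simplified sketch of the summary-level workflow (inverse-variance pooling of site-level doubly robust estimates), consider the following; the paper's adjustments for model misspecification and cross-site heterogeneity are not reproduced here, and the numeric summaries are invented.

```python
import numpy as np

def site_aipw(y, a, ps, m1, m0):
    """Site-level AIPW (doubly robust) estimate of the ATE and its variance.

    ps: estimated propensity scores; m1, m0: outcome-model predictions."""
    inf = (m1 - m0
           + a * (y - m1) / ps
           - (1 - a) * (y - m0) / (1 - ps))  # influence-function values
    n = len(y)
    return inf.mean(), inf.var(ddof=1) / n   # the only summaries shared

def federated_ate(estimates, variances):
    """Combine summary-level site results by inverse-variance weighting."""
    w = 1.0 / np.asarray(variances)
    est = np.sum(w * np.asarray(estimates)) / w.sum()
    return est, 1.0 / w.sum()                # pooled estimate and variance

est, var = federated_ate([0.42, 0.55], [0.01, 0.02])  # two sites' summaries
print(est, var)
```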