Security and energy consumption are the two central challenges in wireless sensor networks. Sensing deployments may contain large numbers of malicious nodes, and researchers have proposed several methods to detect these rogue nodes. To prevent attacks on these networks and their data transmission, the data must be secured. Data aggregation reduces the number of messages transmitted within the network, which in turn lowers total network energy consumption. Additionally, when decrypting the aggregated data, the base station can distinguish the individual encrypted contributions from the consolidated result on the basis of the cryptographic keys. This research examines the effectiveness of data aggregation in this setting. To solve the above problem, the system provides a method in which an efficient cluster agent is selected based on its location relative to the access point and its available energy. Selecting an effective cluster agent reduces the sensor network's energy consumption, extending the network's lifespan. The cluster agent is in charge of compiling the data from each member node: it validates the data, discards any erroneous readings before aggregation, and aggregates only the verified data. To provide end-to-end confidentiality, elliptic curve ElGamal (ECE) encryption is used to secure the client data, and the encrypted information is forwarded to the cluster agent; only the base station (BS) can decrypt the data. Furthermore, an ID-based signature scheme is used to provide authenticity. This research also presents a technique for recovering lost data: the access point employs a cache-based backup system to retrieve lost data.
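To make the selection criterion concrete, here is a minimal sketch of cluster-agent election driven by residual energy and distance to the access point. The scoring formula and the 0.5/0.5 weighting are illustrative assumptions, not the paper's specification.

```python
# Minimal sketch: pick the cluster agent with the best combination of
# residual energy and proximity to the access point. Weights are assumed.
import math

def select_cluster_agent(nodes, access_point, w_energy=0.5, w_dist=0.5):
    """nodes: list of dicts with 'id', 'pos' (x, y), 'energy' in [0, 1]."""
    def score(node):
        dx = node["pos"][0] - access_point[0]
        dy = node["pos"][1] - access_point[1]
        dist = math.hypot(dx, dy)
        # Higher residual energy and smaller distance give a higher score.
        return w_energy * node["energy"] - w_dist * dist
    return max(nodes, key=score)

nodes = [
    {"id": "n1", "pos": (2.0, 3.0), "energy": 0.9},
    {"id": "n2", "pos": (0.5, 0.5), "energy": 0.4},
    {"id": "n3", "pos": (1.0, 1.0), "energy": 0.8},
]
print(select_cluster_agent(nodes, access_point=(0.0, 0.0))["id"])  # n3
```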
Aggregating data in a database could also be called "integrating along fibers": given functions $\pi\colon E\to D$ and $s\colon E\to R$, where $(R,\circledast)$ is a commutative monoid, we want a new function $(\circledast s)_\pi$ that sends each $d\in D$ to the "sum" of all $s(e)$ for which $\pi(e)=d$. The operation lives alongside querying -- or more generally data migration -- in typical database usage: one wants to know how much Canadians spent on cell phones in 2021, for example, and such requests typically require both aggregation and querying. But whereas querying has an elegant category-theoretic treatment in terms of parametric right adjoints between copresheaf categories, a categorical formulation of aggregation -- especially one that lives alongside that for querying -- appears to be completely absent from the literature. In this paper we show how both querying and aggregation fit into the "polynomial ecosystem". Starting with the category $\mathbf{Poly}$ of polynomial functors in one variable, we review the relatively recent results of Ahman-Uustalu and Garner, which showed that the framed bicategory $\mathbb{C}\mathbf{at}^\sharp$ of comonads in $\mathbf{Poly}$ is precisely the right setting for data migration: its objects are categories and its bicomodules are parametric right adjoints between their copresheaf categories. We then develop a great deal of theory, compressed for space reasons, including local monoidal closed structures, a coclosure to bicomodule composition, and an understanding of adjoints in $\mathbb{C}\mathbf{at}^\sharp$. Doing so allows us to derive interesting mathematical results, e.g.\ that the ordinary operation of transposing a span can be decomposed into the composite of two more primitive operations, and then finally to explain how aggregation arises, alongside querying, in $\mathbb{C}\mathbf{at}^\sharp$.
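As a concrete set-level rendering of the definition, the following sketch computes $(\circledast s)_\pi$ for a finite $E$; the function name `aggregate_along` and the dict representation are our own choices, not notation from the paper.

```python
# "Integrating along fibers": given pi: E -> D and s: E -> R for a
# commutative monoid (R, op, unit), aggregate s over each fiber of pi.
from collections import defaultdict

def aggregate_along(pi, s, E, op, unit):
    """Return (op s)_pi : D -> R as a dict over the image of pi."""
    out = defaultdict(lambda: unit)
    for e in E:
        out[pi(e)] = op(out[pi(e)], s(e))
    return dict(out)

# Example in the spirit of the text: total cell-phone spending per country,
# with monoid (float, +, 0.0). Data values are made up for illustration.
purchases = [("alice", "CA", 120.0), ("bob", "CA", 80.0), ("carol", "US", 50.0)]
totals = aggregate_along(pi=lambda p: p[1], s=lambda p: p[2],
                         E=purchases, op=lambda x, y: x + y, unit=0.0)
print(totals)  # {'CA': 200.0, 'US': 50.0}
```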
We study the problem of model selection in causal inference, specifically for the case of conditional average treatment effect (CATE) estimation under binary treatments. Unlike model selection in machine learning, there is no perfect analogue of cross-validation, since we never observe the counterfactual potential outcome for any data point. To address this, a variety of proxy metrics have been proposed in the literature that depend on auxiliary nuisance models estimated from the observed data (a propensity score model and an outcome regression model). However, the effectiveness of these metrics has so far been studied only on synthetic datasets, where the counterfactual data are accessible. We conduct an extensive empirical analysis to judge the performance of these metrics, along with novel ones introduced in this work, utilizing the latest advances in generative modeling to incorporate multiple realistic datasets. Our analysis suggests novel model selection strategies based on careful hyperparameter tuning of CATE estimators and causal ensembling.
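For illustration, here is a hedged sketch of one widely used family of proxy metrics: scoring candidate CATE predictions against doubly robust pseudo-outcomes built from the estimated nuisances. This is a generic example of the kind of metric studied, not the paper's exact benchmark.

```python
# Doubly robust proxy metric: lower score = better candidate CATE estimator.
import numpy as np

def dr_pseudo_outcomes(y, t, e_hat, mu0_hat, mu1_hat):
    """AIPW-style pseudo-outcome per validation point, from estimated
    propensity e_hat and outcome regressions mu0_hat, mu1_hat."""
    return (mu1_hat - mu0_hat
            + t * (y - mu1_hat) / e_hat
            - (1 - t) * (y - mu0_hat) / (1 - e_hat))

def dr_score(cate_pred, y, t, e_hat, mu0_hat, mu1_hat):
    """MSE of predicted CATE against the pseudo-outcomes."""
    phi = dr_pseudo_outcomes(y, t, e_hat, mu0_hat, mu1_hat)
    return float(np.mean((phi - cate_pred) ** 2))
```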
A numerical method is developed for recovering both the source locations and the obstacle from the scattered Cauchy data of the time-harmonic acoustic field. First, the incident and scattered components are decomposed from the coupled Cauchy data by representing them as single-layer potentials and solving the resulting linear integral system. As a consequence of this decomposition, the original joint inversion problem is reformulated into two decoupled subproblems: an inverse source problem and an inverse obstacle scattering problem. Then, two sampling-type schemes are proposed to recover the shape of the obstacle and the source locations, respectively. The sampling methods rely on specific indicator functions defined on target-oriented probing domains of circular shape. Error estimates for the decoupling procedure are established and the asymptotic behaviors of the indicator functions are analyzed. Extensive numerical experiments are conducted to verify the performance of the sampling schemes.
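To convey the flavor of a sampling-type scheme, the sketch below implements a generic direct-sampling indicator that correlates the measured scattered data with the 2D Helmholtz fundamental solution at each probe point; the paper's actual indicator functions on circular probing domains may differ.

```python
# Generic direct-sampling indicator for 2D Helmholtz problems (a sketch).
import numpy as np
from scipy.special import hankel1

def indicator(z, x_meas, u_scat, k):
    """z: (2,) probe point; x_meas: (m, 2) measurement points;
    u_scat: (m,) complex scattered data; k: wavenumber."""
    r = np.linalg.norm(x_meas - z, axis=1)
    phi = 0.25j * hankel1(0, k * r)   # fundamental solution (i/4) H_0^(1)(k|x-z|)
    return abs(np.vdot(phi, u_scat))  # large near the obstacle / sources

# Illustrative use: evaluate indicator(z, ...) over a grid of probe points z;
# the peaks of the resulting map suggest the target locations.
```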
Deep reinforcement learning (DRL) requires the collection of interventional data, which is sometimes expensive and even unethical in the real world, such as in autonomous driving and medicine. Offline reinforcement learning promises to alleviate this issue by exploiting the vast amount of observational data available in the real world. However, observational data may mislead the learning agent to undesirable outcomes if the behavior policy that generated the data depends on unobserved random variables (i.e., confounders). In this paper, we propose two deconfounding methods in DRL to address this problem. The methods first calculate the importance degree of each sample using causal inference techniques, and then adjust the impact of different samples on the loss function by reweighting or resampling the offline dataset to ensure its unbiasedness. These deconfounding methods can be flexibly combined with existing model-free DRL algorithms such as soft actor-critic and deep Q-learning, provided that their loss functions satisfy a weak condition. We prove the effectiveness of our deconfounding methods and validate them experimentally.
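The reweighting variant can be phrased as a one-line change to any per-sample loss. Below is a minimal PyTorch-style sketch, assuming the importance weights have already been computed by some causal-inference estimate; the normalization choice is ours.

```python
# Reweighting sketch: scale each sample's loss by a precomputed importance
# weight so the aggregate loss is unbiased under the assumed confounding model.
import torch

def reweighted_loss(per_sample_loss, importance_weights):
    """per_sample_loss, importance_weights: 1-D tensors of equal length."""
    w = importance_weights / importance_weights.sum()  # normalize (our choice)
    return (w * per_sample_loss).sum()

# Usage with any model-free DRL loss, e.g. squared TD errors from SAC/DQN:
td_error = torch.tensor([0.5, 1.2, 0.1])
weights = torch.tensor([2.0, 0.5, 1.0])  # assumed given by the deconfounder
loss = reweighted_loss(td_error ** 2, weights)
```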
Foundation models--such as GPT, CLIP, and DINO--have achieved revolutionary progress in the past several years and are commonly believed to be a promising approach for general-purpose AI. In particular, self-supervised learning is adopted to pre-train a foundation model using a large amount of unlabeled data. A pre-trained foundation model is like an ``operating system'' of the AI ecosystem. Specifically, a foundation model can be used as a feature extractor for many downstream tasks with little or no labeled training data. Existing studies on foundation models have mainly focused on pre-training a better foundation model to improve its performance on downstream tasks in non-adversarial settings, leaving its security and privacy in adversarial settings largely unexplored. A security or privacy issue of a pre-trained foundation model leads to a single point of failure for the AI ecosystem. In this book chapter, we discuss 10 basic security and privacy problems of pre-trained foundation models, including six confidentiality problems, three integrity problems, and one availability problem. For each problem, we discuss potential opportunities and challenges. We hope our book chapter will inspire future research on the security and privacy of foundation models.
The digitalization of the reproductive body has engaged myriad cutting-edge technologies to support people in understanding and managing their intimate health. Generally understood as female technologies (aka female-oriented technologies or 'FemTech'), these products and systems collect a wide range of intimate data which are processed, transferred, saved and shared with other parties. In this paper, we explore how the "data-hungry" nature of this industry and the lack of proper safeguarding mechanisms, standards, and regulations for vulnerable data can lead to complex harms or diminished user agency. We adopted mixed methods to explore users' understanding of the security and privacy (SP) of these technologies. Our findings show that while users can speculate about the range of harms and risks associated with these technologies, they are not equipped with the technological skills to protect themselves against such risks. We discuss a number of approaches, including participatory threat modelling and SP by design, in the context of this work and conclude that such approaches are critical to protect users of these sensitive systems.
Existing recommender systems extract user preferences by learning correlations in data, such as behavioral correlation in collaborative filtering, and feature-feature or feature-behavior correlation in click-through rate prediction. However, the real world is driven by causality rather than correlation, and correlation does not imply causation. For example, a recommender system may recommend a battery charger to a user after the purchase of a phone: the phone purchase is the cause of the charger purchase, and such a causal relation cannot be reversed. Recently, to address this, researchers in recommender systems have begun to utilize causal inference to extract causality and thereby enhance recommendation. In this survey, we comprehensively review the literature on causal inference-based recommendation. We first present the fundamental concepts of both recommendation and causal inference as the basis for later content, and raise the typical issues faced by non-causal recommendation. Afterward, we comprehensively review the existing work on causal inference-based recommendation, organized by a taxonomy of what kind of problem causal inference addresses. Last, we discuss the open problems in this important research area, along with interesting future directions.
Federated Learning (FL) is a decentralized machine-learning paradigm in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges on FL, as it can incur drifted global models that are slow to converge. Knowledge distillation has recently emerged to tackle this issue by refining the server model using aggregated knowledge from heterogeneous users, rather than directly averaging their model parameters. This approach, however, depends on a proxy dataset, making it impractical unless such a prerequisite is satisfied. Moreover, the ensemble knowledge is not fully utilized to guide local model learning, which may in turn affect the quality of the aggregated model. Inspired by the prior art, we propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner, which is then broadcast to users, regulating local training using the learned knowledge as an inductive bias. Empirical studies powered by theoretical implications show that our approach facilitates FL with better generalization performance using fewer communication rounds, compared with the state of the art.
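As a hedged illustration of the server-side step, the sketch below trains a lightweight generator whose synthetic features are scored by an ensemble of (frozen) user classifier heads; all shapes, module names, and the averaging rule are illustrative assumptions rather than the paper's specification.

```python
# Data-free ensembling sketch: the generator learns to produce features that
# the averaged user classifier heads confidently assign to the sampled labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Maps (noise, label) to a synthetic feature vector (shapes assumed)."""
    def __init__(self, z_dim=16, n_classes=10, feat_dim=32):
        super().__init__()
        self.emb = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))
    def forward(self, z, y):
        return self.net(z + self.emb(y))

def generator_step(gen, user_heads, opt, batch=32, n_classes=10, z_dim=16):
    y = torch.randint(n_classes, (batch,))      # sample target labels
    z = torch.randn(batch, z_dim)
    # Average the logits of the uploaded user heads on synthetic features.
    logits = torch.stack([h(gen(z, y)) for h in user_heads]).mean(0)
    loss = F.cross_entropy(logits, y)           # generator fits the ensemble
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

gen = TinyGenerator()
heads = [nn.Linear(32, 10) for _ in range(3)]   # stand-ins for uploaded heads
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
print(generator_step(gen, heads, opt))
```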
As data are increasingly being stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models is facing efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks on robustness and their defenses; 3) inference attacks on privacy and their defenses, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
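The formula translates directly into per-class loss weights: the sketch below inverts the effective number per class and normalizes so the weights sum to the number of classes, a common convention assumed here rather than quoted from the abstract.

```python
# Class-balanced weights from the effective number of samples
# E_n = (1 - beta**n) / (1 - beta); weight per class is proportional to 1/E_n.
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.9999):
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - beta ** n) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(n) / weights.sum()  # normalize to sum to #classes

# Rare classes receive larger weights in the re-balanced loss:
print(class_balanced_weights([5000, 500, 50]))
```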