In this paper, we propose a scheme for the problem of cache-aided multi-user private information retrieval with small caches, in which $K$ users are connected to $S$ non-colluding databases via shared links. Each database stores a set of $N$ files, and each user has a dedicated cache of size equivalent to $M$ files. Each user wants to retrieve a file without revealing its demand to the databases. During off-peak hours, the users fill their caches; when required, they request their desired files by cooperatively generating query sets for each database. After receiving the transmissions from the databases, every user must be able to recover its desired file from the transmitted data and its cache contents. This problem was studied in [X. Zhang, K. Wan, H. Sun, M. Ji and G. Caire, \tqt{Fundamental limits of cache-aided multiuser private information retrieval}, IEEE Trans. Commun., 2021], in which the authors proposed a product design scheme. In this paper, we propose a scheme that achieves a better rate than the product design scheme for a particular value of $M$. We take a slightly different approach in the placement phase: instead of a database filling the caches of all users directly, a database broadcasts cache content to all users on a shared link, and the users then jointly decide which part of the broadcast content each user stores in its cache. This variation makes it possible to satisfy the privacy constraint at a reduced rate.
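To make the broadcast-then-select placement idea concrete, here is a toy sketch in the style of generic coded-caching placement; the subfile labelling, the parameter values, and the assumption that $KM/N$ is an integer are ours for illustration, not the paper's actual scheme.

```python
from itertools import combinations

# Toy broadcast-then-select placement (illustrative parameters; assumes K*M/N
# is an integer). The database broadcasts all subfiles on the shared link and
# the users jointly agree on which broadcast pieces each user keeps.
N, K, M = 2, 4, 1                  # files, users, cache size (in files)
t = K * M // N                     # subfile-labelling parameter, here t = 2

# Split every file into C(K, t) subfiles, one per size-t subset of users.
subsets = list(combinations(range(K), t))
broadcast = [(n, S) for n in range(N) for S in subsets]   # what the DB sends

# Each user k keeps exactly the subfiles whose subset label contains k,
# which fills its cache with M files' worth of content.
caches = {k: [(n, S) for (n, S) in broadcast if k in S] for k in range(K)}

for k, cache in caches.items():
    frac = len(cache) / len(subsets)   # stored content measured in files
    print(f"user {k}: {len(cache)} subfiles = {frac} files")
```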
Security and privacy are important concerns in machine learning. End-user devices often contain a wealth of data, and this information is sensitive and should not be shared with servers or enterprises. As a result, federated learning was introduced to enable machine learning over large decentralized datasets while promising privacy by eliminating the need for data sharing. However, prior work has shown that shared gradients often contain private information, and attackers can gain knowledge either through malicious modification of the architecture and parameters or by using optimization to approximate user data from the shared gradients. Despite this, most attacks have so far been limited in the number of clients they scale to, especially failing when client gradients are aggregated using secure model aggregation. The attacks that still function are strongly limited in the number of clients they can attack, the amount of training samples they leak, or the number of iterations they take to be trained. In this work, we introduce MANDRAKE, an attack that overcomes these limitations to directly leak large amounts of client data even under secure aggregation across large numbers of clients. Furthermore, we break the anonymity of aggregation, as the leaked data is identifiable and can be tied directly back to the clients it comes from. We show that by sending clients customized convolutional parameters, the weight gradients of data points from different clients remain separate through aggregation. When aggregating across many clients, prior work could leak less than 1% of images. With the same number of non-zero parameters and using only a single training iteration, MANDRAKE leaks 70-80% of data samples.
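As a rough illustration of why client-specific parameters can defeat aggregation-based anonymity, the toy sketch below (our construction, not MANDRAKE's actual convolutional design) gives each client a disjoint non-zero block in its gradient, so the blocks pass through summation untouched and remain attributable.

```python
import numpy as np

# If each client's data produces non-zero gradients only in a client-specific
# block of the weight tensor, the sum revealed by secure aggregation keeps
# every client's block intact and identifiable.
rng = np.random.default_rng(0)
num_clients, block = 4, 5
grads = []
for c in range(num_clients):
    g = np.zeros(num_clients * block)
    g[c * block:(c + 1) * block] = rng.normal(size=block)  # client-specific block
    grads.append(g)

aggregated = np.sum(grads, axis=0)        # what secure aggregation reveals
for c in range(num_clients):
    recovered = aggregated[c * block:(c + 1) * block]
    assert np.allclose(recovered, grads[c][c * block:(c + 1) * block])
print("each client's gradient block survives aggregation intact")
```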
This paper considers sparse recovery with shuffled labels, i.e., $\by = \bPitrue \bX \bbetatrue + \bw$, where $\by \in \RR^n$, $\bPitrue \in \RR^{n\times n}$, $\bX\in \RR^{n\times p}$, $\bbetatrue\in \RR^p$, and $\bw \in \RR^n$ denote the sensing result, the unknown permutation matrix, the design matrix, the sparse signal, and the additive noise, respectively. Our goal is to reconstruct both the permutation matrix $\bPitrue$ and the sparse signal $\bbetatrue$. We investigate this problem from both the statistical and computational aspects. From the statistical aspect, we first establish the minimax lower bounds on the sample number $n$ and the \emph{signal-to-noise ratio} ($\snr$) required for the correct recovery of the permutation matrix $\bPitrue$ and the support set $\supp(\bbetatrue)$; to be more specific, $n \gtrsim k\log p$ and $\log\snr \gtrsim \log n + \frac{k\log p}{n}$. Then, we confirm the tightness of these minimax lower bounds by presenting an exhaustive-search-based estimator whose performance matches them up to some multiplicative constants. From the computational aspect, we impose a parsimonious assumption on the number of permuted rows and propose a computationally efficient estimator accordingly. Moreover, we show that our proposed estimator can obtain the ground truth $(\bPitrue, \supp(\bbetatrue))$ under mild conditions. Furthermore, we provide numerical experiments to corroborate our claims.
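For concreteness, a small simulation of the observation model $\by = \bPitrue \bX \bbetatrue + \bw$ with illustrative dimensions (all parameter values below are our choices, not the paper's experimental settings):

```python
import numpy as np

# Generate data from the shuffled-label sparse regression model
# y = Pi* X beta* + w with a k-sparse signal and Gaussian noise.
rng = np.random.default_rng(1)
n, p, k = 100, 50, 3                      # samples, dimension, sparsity
X = rng.normal(size=(n, p))               # design matrix
beta = np.zeros(p)
beta[rng.choice(p, size=k, replace=False)] = rng.normal(size=k)  # sparse signal

perm = rng.permutation(n)                 # unknown row shuffle Pi*
Pi = np.eye(n)[perm]                      # permutation matrix
w = 0.1 * rng.normal(size=n)              # additive noise
y = Pi @ X @ beta + w                     # shuffled, noisy observations

print("true support:", np.flatnonzero(beta))
```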
Purpose: Lung disease assessment in precapillary pulmonary hypertension (PH) is essential for appropriate patient management. This study aims to develop an artificial intelligence (AI) deep learning model for lung texture classification in CT Pulmonary Angiography (CTPA) and to evaluate its correlation with clinical assessment methods. Materials and Methods: In this retrospective study with external validation, 122 patients with precapillary PH were used to train (n=83), validate (n=17), and test (n=10 internal test, n=12 external test) a patch-based DenseNet-121 classification model. "Normal", "Ground glass", "Ground glass with reticulation", "Honeycombing", and "Emphysema" were classified as per the Fleischner Society glossary of terms. Ground-truth classes were segmented by two radiologists, with patches extracted from the labelled regions. The proportion of lung volume for each texture was calculated by classifying patches throughout the entire lung volume, generating a coarse texture classification map of the lung parenchyma. AI output was assessed against diffusing capacity of carbon monoxide (DLCO) and disease severity reported by specialist radiologists. Results: Micro-averaged AUCs for the validation, internal test, and external test sets were 0.92, 0.95, and 0.94, respectively. The model had consistent performance across parenchymal textures, demonstrated strong correlation with DLCO, and showed good correspondence with radiologist-reported disease severity. Conclusion: The classification model demonstrates excellent performance on external validation, and the clinical utility of its output has been demonstrated. This objective, repeatable measure of disease severity can aid patient management as an adjunct to radiological reporting.
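A minimal sketch of what such a patch-based texture classifier could look like, using the torchvision DenseNet-121 backbone with a five-way head for the Fleischner texture classes; the patch size, channels, and head are our assumptions, not the paper's exact configuration.

```python
import torch
from torchvision.models import densenet121

# Five texture classes from the Fleischner Society glossary of terms.
CLASSES = ["Normal", "Ground glass", "Ground glass with reticulation",
           "Honeycombing", "Emphysema"]

# DenseNet-121 backbone with the classifier swapped for a 5-way linear head.
model = densenet121(weights=None)
model.classifier = torch.nn.Linear(model.classifier.in_features, len(CLASSES))

patches = torch.randn(8, 3, 64, 64)       # a batch of CT patches (dummy data)
logits = model(patches)                   # per-patch class scores
print(logits.shape)                       # torch.Size([8, 5])
```

Classifying every patch across the lung volume and tallying the per-class counts then yields the proportion of lung volume assigned to each texture.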
We propose a method for channel training and precoding in FDD massive MIMO based on deep neural networks (DNNs), exploiting downlink (DL) channel covariance knowledge. The DNN is optimized to maximize the DL multi-user sum-rate by producing a pre-beamforming matrix, based on the user channel covariances, that maps the original channel vectors to effective channels. Measurements of these effective channels are received at the users via common pilot transmission and sent back to the base station (BS) through analog feedback without further processing. The BS estimates the effective channels from the received feedback and constructs a linear precoder by concatenating the optimized pre-beamforming matrix with a zero-forcing precoder over the effective channels. We show that the proposed method yields significantly higher sum-rates than the state-of-the-art DNN-based channel training and precoding scheme, especially in scenarios where the pilot and feedback sizes are small relative to the channel coherence block length. Unlike many works in the literature, our proposal does not involve deploying a DNN at the user side, which typically imposes a high computational cost and parameter-transmission overhead on the system; it is therefore considerably more practical.
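The following toy example shows the concatenated precoder structure described above, with random matrices standing in for the covariance-based DNN output and the channel realizations (all dimensions are illustrative).

```python
import numpy as np

# Pre-beamforming B maps M antennas to d effective dimensions; users feed
# back their effective channels and the BS zero-forces over them.
rng = np.random.default_rng(2)
M_ant, K_users, d = 64, 4, 8              # BS antennas, users, effective dims

H = (rng.normal(size=(K_users, M_ant))
     + 1j * rng.normal(size=(K_users, M_ant))) / np.sqrt(2)   # user channels
B = rng.normal(size=(M_ant, d)) + 1j * rng.normal(size=(M_ant, d))  # stand-in
                                          # for the DNN's pre-beamforming output

H_eff = H @ B                             # effective channels seen via pilots
ZF = H_eff.conj().T @ np.linalg.inv(H_eff @ H_eff.conj().T)  # zero-forcing
precoder = B @ ZF                         # concatenated linear precoder

print(np.round(np.abs(H @ precoder), 3)) # ~identity: interference nulled
```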
Many generative foundation models (GFMs) are trained on publicly available data and use public infrastructure, but 1) may degrade the "digital commons" that they depend on, and 2) do not have processes in place to return the value they capture to data producers and stakeholders. Existing conceptions of data rights and protection (focusing largely on individually owned data and associated privacy concerns) and copyright- or licensing-based models offer some instructive priors, but are ill-suited to the issues that may arise from models trained on commons-based data. We outline the risks posed by GFMs and why they are relevant to the digital commons, and propose several governance-based solutions, including: investments in standardized dataset/model disclosure and other kinds of transparency regarding generative models' training and capabilities; consortia-based funding for monitoring/standards/auditing organizations; requirements or norms for GFM companies to contribute high-quality data to the commons; and structures for shared ownership based on individual or community provision of fine-tuning data.
In shilling attacks, an adversarial party injects a few fake user profiles into a Recommender System (RS) so that a target item can be promoted or demoted. Although much effort has been devoted to developing shilling attack methods, we find that existing approaches are still far from practical. In this paper, we analyze the properties a practical shilling attack method should have and propose a new concept of Cross-system Attack. Building on this idea, we design a Practical Cross-system Shilling Attack (PC-Attack) framework that requires little information about the victim RS model and the target RS data to conduct attacks. PC-Attack is trained to capture graph topology knowledge from public RS data in a self-supervised manner. It is then fine-tuned on a small, easily accessible portion of target data to construct fake profiles. Extensive experiments demonstrate the superiority of PC-Attack over state-of-the-art baselines. Our implementation of PC-Attack is available at //github.com/KDEGroup/PC-Attack.
Event extraction (EE) plays an important role in many industrial application scenarios, and high-quality EE methods require a large amount of manually annotated data to train supervised learning models. However, the cost of obtaining annotated data is very high, especially for domain events, whose annotation requires the participation of experts from the corresponding domain. We therefore introduce active learning (AL) to reduce the cost of event annotation. Existing AL methods, however, have two main problems that prevent them from being used effectively for event extraction. First, existing pool-based selection strategies have limitations in terms of computational cost and sample validity. Second, existing evaluations of sample importance make no use of local sample information. In this paper, we present a novel deep AL method for EE. We propose a batch-based selection strategy and a Memory-Based Loss Prediction model (MBLP) to select unlabeled samples efficiently. During the selection process, we use an internal-external sample loss ranking method to evaluate sample importance using local information. Finally, we propose a delayed training strategy to train the MBLP model. Extensive experiments on three domain datasets show that our method outperforms other state-of-the-art methods.
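As a schematic of loss-prediction-driven batch selection, the sketch below ranks unlabeled samples by a predicted loss and selects the top batch for annotation; the scoring model is a random stand-in, and neither MBLP nor the internal-external ranking is reproduced here.

```python
import numpy as np

# Batch-based active learning selection: annotate the samples whose
# predicted loss (a proxy for informativeness) is highest.
rng = np.random.default_rng(3)
num_unlabeled, batch_size = 1000, 32

predicted_loss = rng.gamma(shape=2.0, size=num_unlabeled)  # stand-in for a
                                                           # loss-prediction model
batch = np.argsort(predicted_loss)[-batch_size:]           # top-k by predicted loss

print(f"selected {batch_size} samples, "
      f"mean predicted loss {predicted_loss[batch].mean():.3f} "
      f"vs pool mean {predicted_loss.mean():.3f}")
```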
Safety is critical in robotic tasks, and energy-function-based methods have been introduced to address it. To ensure safety in the presence of control limits, we need to design an energy function that yields persistently feasible safe control at all system states. However, designing such an energy function for high-dimensional nonlinear systems remains challenging. Exploiting the fact that high-dimensional systems contain dynamics that are redundant with respect to the safety specifications, this paper proposes a novel approach called abstract safe control. We introduce a system abstraction method that enables the design of energy functions on a low-dimensional model, and we synthesize the energy function with respect to this low-dimensional model to ensure persistent feasibility. The resulting safe controller can be directly transferred to other systems with the same abstraction, e.g., when a robot arm holds different tools. The proposed approach is demonstrated on a 7-DoF robot arm (14 states), both in simulation and in the real world. Our method always finds feasible control and achieves zero safety violations in 500 trials on 5 different systems.
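A toy instance of energy-function-based safe control on a one-dimensional abstraction (position and velocity with a position limit); the abstraction map from the full robot state to this low-dimensional model is assumed and not shown, and the small chatter at the boundary is an artifact of the Euler discretization.

```python
import numpy as np

# 1-D abstraction: position x, velocity v, control |u| <= u_max, and the
# safety specification x <= x_limit.
def safe_control(x, v, u_nominal, x_limit=1.0, u_max=2.0):
    # Energy (safety index): positive when braking at u_max can no longer
    # stop the point before the position limit.
    phi = x + 0.5 * v * abs(v) / u_max - x_limit
    if phi < 0:
        return float(np.clip(u_nominal, -u_max, u_max))  # safe: keep nominal
    return -u_max  # on/over the boundary: maximal braking decreases phi

x, v, dt = 0.0, 1.5, 0.01
for _ in range(300):                      # roll out against a pushy nominal input
    u = safe_control(x, v, u_nominal=2.0)
    x, v = x + v * dt, v + u * dt
print(f"final position {x:.3f} (rides the safety boundary x_limit = 1.0)")
```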
Conversational interfaces provide a flexible and easy way for users to seek information that may otherwise be difficult or inconvenient to obtain. However, existing interfaces generally fall into one of two categories: FAQs, where users must have a concrete question in order to retrieve a general answer, or dialogs, where users must follow a predefined path but may receive a personalized answer. In this paper, we introduce Conversational Tree Search (CTS), a new task that bridges the gap between FAQ-style information retrieval and task-oriented dialog, allowing domain experts to define dialog trees which can then be converted into an efficient dialog policy that learns to ask only the questions necessary to navigate a user to their goal. We collect a dataset for the travel reimbursement domain and demonstrate a baseline as well as a novel deep reinforcement learning architecture for this task. Our results show that the new architecture combines the positive aspects of both the FAQ and dialog baselines and achieves higher goal completion while skipping unnecessary questions.
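To illustrate the dialog-tree idea, here is a toy tree and a policy that skips a question whenever the user's utterance already determines the branch; the tree content and the string-matching rule are invented for illustration and are far simpler than a learned policy.

```python
# Nodes hold a question; edges are answers leading to the next node.
TREE = {
    "root": {"question": "Is the trip domestic or international?",
             "answers": {"domestic": "per_diem", "international": "currency"}},
    "currency": {"question": "Which currency were the expenses paid in?",
                 "answers": {"EUR": "per_diem", "USD": "per_diem"}},
    "per_diem": {"question": None, "answers": {}},   # leaf: goal reached
}

def navigate(user_utterance: str, node: str = "root") -> list[str]:
    asked = []
    while TREE[node]["question"] is not None:
        answers = TREE[node]["answers"]
        # Skip the question if the utterance already names an answer.
        hit = next((a for a in answers
                    if a.lower() in user_utterance.lower()), None)
        if hit is None:
            asked.append(TREE[node]["question"])  # must ask this question
            hit = next(iter(answers))             # stand-in for a real reply
        node = answers[hit]
    return asked

print(navigate("I need reimbursement for an international trip paid in EUR"))
# -> []  (both questions were skippable)
```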
With the rapid development of facial forgery techniques, forgery detection has attracted increasing attention due to security concerns. Existing approaches attempt to use frequency information to mine subtle artifacts in high-quality forged faces. However, their exploitation of frequency information is coarse-grained and, more importantly, their vanilla learning process struggles to extract fine-grained forgery traces. To address this issue, we propose a progressive enhancement learning framework that exploits both RGB and fine-grained frequency clues. Specifically, we perform a fine-grained decomposition of RGB images to completely decouple the real and fake traces in the frequency space. Subsequently, we build the progressive enhancement learning framework on a two-branch network, combined with self-enhancement and mutual-enhancement modules. The self-enhancement module captures traces in different input spaces based on spatial noise enhancement and channel attention. The mutual-enhancement module concurrently enhances the RGB and frequency features by communicating in the shared spatial dimension. The progressive enhancement process facilitates the learning of discriminative features with fine-grained face forgery clues. Extensive experiments on several datasets show that our method outperforms state-of-the-art face forgery detection methods.
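A minimal sketch of the kind of fine-grained frequency decomposition described above, splitting an image into DCT bands that a two-branch network could consume alongside the RGB input; the band cut-offs and the frequency-radius proxy are arbitrary choices, not the paper's decomposition.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Split a (grayscale) image into low/mid/high DCT frequency bands.
def frequency_bands(img: np.ndarray, cuts=(0.1, 0.4)):
    h, w = img.shape
    coeffs = dctn(img, norm="ortho")
    yy, xx = np.meshgrid(np.arange(h) / h, np.arange(w) / w, indexing="ij")
    radius = yy + xx                      # cheap proxy for frequency radius
    bands = []
    for lo, hi in zip((0, *cuts), (*cuts, 2.0)):
        mask = (radius >= lo) & (radius < hi)
        bands.append(idctn(coeffs * mask, norm="ortho"))
    return bands                          # low, mid, high frequency images

img = np.random.rand(64, 64)
low, mid, high = frequency_bands(img)
print(np.allclose(low + mid + high, img))  # True: decomposition is lossless
```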