Considerable attention has been paid to dynamic searchable symmetric encryption (DSSE), which allows users to search over dynamically updated encrypted databases. To improve performance in real-world applications, recent non-interactive multi-client DSSE schemes aim to avoid per-query interaction between data owners and data users. However, existing non-interactive multi-client DSSE schemes do not consider forward or backward privacy, leaving them exposed to leakage-abuse attacks. Moreover, most existing DSSE schemes with forward and backward privacy rely on keeping a keyword operation counter or an inverted index, resulting in a heavy storage burden on the data owner side. To address these issues, we propose a non-interactive multi-client DSSE scheme with small client storage that provides both forward and backward privacy. Specifically, we first design a lightweight storage chain structure that binds all keywords to a single state to reduce the storage cost. Then, we present a Hidden Key technique, which preserves non-interactive forward privacy through time-range queries, ensuring that data with newer timestamps cannot match earlier time ranges. We conduct extensive experiments to validate our methods and demonstrate their computational efficiency. Moreover, a security analysis proves the privacy-preserving properties of our methods.
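As a rough illustration of the chained update state described above, the following sketch (not the paper's exact construction; the key sizes, the 16-byte document identifiers, and all names are assumptions) shows a client that keeps a single evolving state, derives a fresh server-side address from the new state at every update, and hides the previous state inside the stored value, so that a later search token can walk the chain backwards but never forwards, which is the essence of forward privacy.

```python
# Generic hash-chained DSSE update sketch (illustrative only, not the paper's scheme).
import hmac, hashlib, os

def prf(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class Client:
    def __init__(self):
        self.kw_key = os.urandom(32)   # key shared with authorized searchers
        self.state = os.urandom(32)    # single chained state for all keywords

    def add_token(self, keyword: str, doc_id: bytes):
        prev, self.state = self.state, os.urandom(32)           # advance the chain
        addr = prf(self.state, keyword.encode())                 # new, unlinkable address
        pad = prf(self.kw_key, addr) + prf(self.kw_key, addr + b"\x01")
        val = xor(doc_id.ljust(16, b"\x00") + prev, pad[:48])    # hides doc id and back-pointer
        return addr, val   # server stores {addr: val}

    def search_token(self, keyword: str):
        # reveals only the latest state; a real scheme would derive a per-keyword key
        return keyword.encode(), self.state, self.kw_key
```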
Research on debiased recommendation has shown promising results. However, several issues still need to be addressed before it can be applied in industrial recommendation; for example, most existing methods require specific data, architectures, and training procedures. In this paper, we first argue through an online study that arbitrarily removing all biases in industrial recommendation may not consistently yield the desired performance improvement. For the case where a randomized dataset is not available, we propose a novel self-sampling training and evaluation (SSTE) framework to achieve an accuracy-bias tradeoff in recommendation, i.e., to eliminate the harmful biases and preserve the beneficial ones. Specifically, SSTE uses a self-sampling module to generate subsets with different degrees of bias from the original training and validation data. A self-training module infers the beneficial biases and learns a better tradeoff based on these subsets, and a self-evaluation module uses these subsets to construct more plausible references for assessing the optimized model. Finally, we conduct extensive offline experiments on two datasets to verify the effectiveness of SSTE. Moreover, we deploy SSTE in the homepage recommendation of a popular financial management product, Tencent Licaitong, and observe very promising results in an online A/B test.
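To make the self-sampling idea concrete, here is a hedged sketch of one plausible instantiation (the popularity proxy, the exponent gamma, and the resampling rule are assumptions, not the paper's exact module): subsets with different degrees of bias are produced by resampling the interaction log with weights that progressively flatten the item-popularity distribution.

```python
# Hypothetical self-sampling step: build biased/less-biased subsets of a log.
import numpy as np

def self_sample(item_ids: np.ndarray, gammas=(0.0, 0.5, 1.0), size=None, seed=0):
    rng = np.random.default_rng(seed)
    size = size or len(item_ids)
    counts = np.bincount(item_ids)                    # empirical item popularity
    subsets = []
    for g in gammas:                                  # g = 0 keeps the original bias,
        w = counts[item_ids].astype(float) ** (-g)    # larger g down-weights popular items
        w /= w.sum()
        idx = rng.choice(len(item_ids), size=size, replace=True, p=w)
        subsets.append(idx)                           # indices into the original log
    return subsets
```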
This article tackles the old problem of prediction via a nonparametric transformation model (NTM) in a new Bayesian way. Estimating NTMs is known to be challenging due to model unidentifiability, yet appealing because of their robust prediction capability in survival analysis. Inspired by the uniqueness of the posterior predictive distribution, we achieve efficient prediction via the aforementioned NTM under the Bayesian paradigm. Our strategy is to assign weakly informative priors to the nonparametric components rather than identify the model by adding the complicated constraints used in the existing literature. This Bayesian success rests on i) a subtle recasting of NTMs via an exponential transformation that compresses the spaces of infinite-dimensional parameters to positive quadrants, reflecting the non-negativity of the failure time; and ii) a newly constructed weakly informative quantile-knots I-splines prior for the recast transformation function, together with a Dirichlet process mixture model assigned to the error distribution. In addition, we provide a convenient and precise estimator for the identified parameter component, subject to the general unit-norm restriction, through posterior modification, enabling effective estimation of relative risks. Simulations and applications to real datasets show that our method is robust and outperforms competing methods. An R package, BuLTM, is available to predict survival curves, estimate relative risks, and facilitate posterior checking.
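For readers unfamiliar with the model class, a generic NTM as commonly written in the survival literature is shown below; the paper's exact parametrization and its exponential recasting may differ, and the unit-norm restriction on the regression coefficients is the usual identifiability device referred to in the abstract.

```latex
% Generic nonparametric transformation model (illustrative form, not necessarily
% the paper's exact parametrization):
%   H is an unknown monotone increasing transformation, F an unspecified error
%   distribution, and the unit-norm constraint is the standard identifiability restriction.
\[
  H(T) \;=\; -\boldsymbol{\beta}^{\top}\mathbf{Z} + \varepsilon,
  \qquad \varepsilon \sim F \ (\text{unspecified}),
  \qquad \|\boldsymbol{\beta}\|_2 = 1 .
\]
```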
We present a method for solving the coverage problem with the objective of autonomously exploring an unknown environment under mission time constraints. Here, the robot is tasked with planning a path over a horizon such that the accumulated area swept out by its sensor footprint is maximized. Because this problem exhibits a diminishing-returns property known as submodularity, we choose to formulate it as a tree-based sequential decision-making process. This formulation allows us to evaluate the effects of the robot's actions on future world coverage states, while simultaneously accounting for traversability risk and the dynamic constraints of the robot. To quickly find near-optimal solutions, we propose an effective approximation to the coverage sensor model which adapts to the local environment. Our method was extensively tested across various complex environments and served as the local exploration algorithm for a competing entry in the DARPA Subterranean Challenge.
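Below is a minimal sketch of the diminishing-returns structure that the tree-based formulation exploits: the value of a candidate sensor pose is its marginal coverage gain, i.e., the number of not-yet-covered cells inside its footprint. The circular footprint, the grid world, and the greedy selection are illustrative assumptions, not the paper's planner.

```python
# Greedy marginal-coverage scoring on an occupancy grid (illustrative only).
import numpy as np

def footprint_cells(pose, radius, grid_shape):
    """Cells within a circular sensor footprint around `pose` = (row, col)."""
    r, c = pose
    rr, cc = np.ogrid[:grid_shape[0], :grid_shape[1]]
    return set(zip(*np.where((rr - r) ** 2 + (cc - c) ** 2 <= radius ** 2)))

def greedy_next_pose(candidates, covered, radius, grid_shape):
    """Pick the candidate pose with the largest marginal coverage gain (submodular)."""
    return max(candidates, key=lambda p: len(footprint_cells(p, radius, grid_shape) - covered))

covered = set()
pose = greedy_next_pose([(5, 5), (10, 20), (30, 30)], covered, radius=4, grid_shape=(64, 64))
covered |= footprint_cells(pose, 4, (64, 64))   # update the world coverage state
```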
We study the problem of designing mechanisms for \emph{information acquisition} scenarios. This setting models strategic interactions between an uninformed \emph{receiver} and a set of informed \emph{senders}. In our model, the senders receive information about the underlying state of nature and communicate their observations (either truthfully or not) to the receiver, who, based on this information, selects an action. Our goal is to design mechanisms that maximize the receiver's utility while incentivizing the senders to truthfully report their information. First, we provide an algorithm that efficiently computes an optimal \emph{incentive compatible} (IC) mechanism. Then, we focus on the \emph{online} problem in which the receiver sequentially interacts in an unknown game, with the objective of minimizing the \emph{cumulative regret} w.r.t. the optimal IC mechanism and the \emph{cumulative violation} of the incentive compatibility constraints. We investigate two online scenarios, \emph{i.e.,} the \emph{full} and \emph{bandit feedback} settings. For the full feedback problem, we propose an algorithm that guarantees $\tilde{\mathcal O}(\sqrt T)$ regret and violation, while for the bandit feedback setting we present an algorithm that attains $\tilde{\mathcal O}(T^{\alpha})$ regret and $\tilde{\mathcal O}(T^{1-\alpha/2})$ violation for any $\alpha\in[1/2, 1]$. Finally, we complement our results with a tight lower bound.
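As a purely illustrative companion (this is a generic primal-dual online-learning skeleton, not the paper's algorithm, and the step sizes and the mechanism parametrization are assumptions), the sketch below shows how a full-feedback learner can trade off reward against cumulative violation of a constraint such as incentive compatibility by maintaining a Lagrange multiplier.

```python
# Generic primal-dual online learning with a long-term constraint (illustrative only).
import numpy as np

def primal_dual_full_feedback(rewards, violations, eta=0.1, eta_dual=0.1):
    """rewards, violations: arrays of shape (T, K); K candidate mechanisms."""
    T, K = rewards.shape
    w = np.ones(K) / K            # primal: distribution over mechanisms
    lam = 0.0                     # dual variable for the IC constraint
    total_reward = total_violation = 0.0
    for t in range(T):
        total_reward += w @ rewards[t]
        total_violation += max(w @ violations[t], 0.0)
        # exponentiated-gradient step on the Lagrangian  reward - lam * violation
        w *= np.exp(eta * (rewards[t] - lam * violations[t]))
        w /= w.sum()
        lam = max(0.0, lam + eta_dual * (w @ violations[t]))   # dual ascent on violation
    return total_reward, total_violation
```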
This paper presents an approach to guarantee conditions on Variable Impedance Controllers (VICs) through the off-line tuning of the parameters involved in their description. In particular, we demonstrate its application to term modulations defined by a Learning from Demonstration technique. This is performed through the assessment of safety and performance conditions, which encompass heuristics and constraints in the form of Linear Matrix Inequalities. The latter allow us to define a convex optimisation problem to analyse their fulfilment, and require a polytopic description of the VIC, in this case obtained from its formulation as a discrete-time Linear Parameter Varying system. With respect to the current state of the art, this approach only requires the term definitions obtained by the Learning from Demonstration technique to be continuous and a function of exogenous signals, i.e., variables external to the robot. Therefore, using a solution-search method, the most suitable set of parameters according to the assessment criteria can be obtained. Using a 7-DoF Kinova Gen3 manipulator, validation and comparison against solutions with relaxed conditions are performed. The method is applied to generate Variable Impedance Controllers for a pulley-belt looping task, inspired by the Assembly Challenge for Industrial Robotics at the World Robot Summit 2018, to reduce the exerted force with respect to a standard (constant) Impedance Controller. Additionally, the agility of the method is evaluated by generating controllers for one-off modifications of the nominal belt looping task setup without new demonstrations.
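To make the LMI machinery concrete, here is a hedged sketch (using cvxpy, which is an assumption; the paper's safety and performance conditions are richer) of the basic feasibility test for a discrete-time polytopic LPV model: quadratic stability holds if a common P > 0 satisfies A_i^T P A_i - P < 0 at every vertex A_i of the polytope.

```python
# LMI feasibility test for quadratic stability of a discrete-time polytopic system.
import numpy as np
import cvxpy as cp

def quadratically_stable(vertices, eps=1e-6):
    n = vertices[0].shape[0]
    P = cp.Variable((n, n), symmetric=True)
    cons = [P >> eps * np.eye(n)]
    for A in vertices:
        M = A.T @ P @ A - P
        cons.append((M + M.T) / 2 << -eps * np.eye(n))   # symmetrised, mathematically identical
    prob = cp.Problem(cp.Minimize(0), cons)              # pure feasibility problem
    prob.solve()
    return prob.status == cp.OPTIMAL

# Two vertices of a (stable) polytopic model, for illustration only.
A1 = np.array([[0.9, 0.1], [0.0, 0.8]])
A2 = np.array([[0.85, 0.2], [0.05, 0.7]])
print(quadratically_stable([A1, A2]))
```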
Dealing with distribution shifts is one of the central challenges for modern machine learning. One fundamental situation is \emph{covariate shift}, where the input distribution changes from the training to the testing stage while the input-conditional output distribution remains unchanged. In this paper, we initiate the study of a more challenging scenario -- \emph{continuous} covariate shift -- in which the test data arrive sequentially and their distributions can shift continuously. Our goal is to adaptively train the predictor such that its prediction risk accumulated over time is minimized. Starting with importance-weighted learning, we show that the method works effectively if the time-varying density ratios between test and training inputs can be accurately estimated. However, existing density ratio estimation methods would fail due to data scarcity at each time step. To address this, we propose an online density ratio estimation method that can appropriately reuse historical information. Our method is proven to perform well by enjoying a dynamic regret bound, which ultimately leads to an excess risk guarantee for the predictor. Empirical results also validate its effectiveness.
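The sketch below illustrates the importance-weighted learning baseline the abstract starts from: density ratios are estimated with a standard probabilistic classifier (logistic regression separating test inputs from training inputs) and then used as sample weights in weighted empirical risk minimization. This is not the paper's online estimator, which additionally reuses historical information across time steps; scikit-learn and the specific estimator choice are assumptions.

```python
# Importance-weighted learning with classifier-based density ratio estimation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_density_ratio(x_train, x_test):
    """r(x) ~ p_test(x) / p_train(x) via P(test|x) / P(train|x), corrected for sample sizes."""
    X = np.vstack([x_train, x_test])
    y = np.r_[np.zeros(len(x_train)), np.ones(len(x_test))]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p = clf.predict_proba(x_train)[:, 1]
    return (p / (1 - p)) * (len(x_train) / len(x_test))

def importance_weighted_fit(model, x_train, y_train, x_test):
    """Weighted ERM; `model` is any estimator accepting sample_weight (e.g., sklearn)."""
    w = estimate_density_ratio(x_train, x_test)
    return model.fit(x_train, y_train, sample_weight=w)
```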
Most existing signcryption schemes generate pseudonyms via a key generation center (KGC) and usually rely on bilinear pairings to construct authentication schemes. The drawback is that these schemes not only incur heavy computation and communication costs during information exchange, but also cannot eliminate the security risks caused by never updating pseudonyms, which makes them ill-suited for resource-constrained smart terminals in cyber-physical power systems (CPPSs). The main objective of this paper is to propose a novel efficient signcryption scheme for resource-constrained smart terminals. First, a dynamical pseudonym self-generation mechanism (DPSGM) is explored to achieve privacy preservation and prevent the source from being linked. Second, combined with DPSGM, an efficient signcryption scheme based on certificateless cryptography (CLC) and elliptic curve cryptography (ECC) is designed, which significantly reduces the computation and communication burden. Furthermore, under the random oracle model (ROM), the confidentiality and non-repudiation of the proposed signcryption scheme are reduced to the elliptic curve discrete logarithm and computational Diffie-Hellman problems, which cannot be solved in polynomial time, thereby guaranteeing security. Finally, the effectiveness and feasibility of the proposed signcryption scheme are confirmed by experimental analyses.
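As a purely conceptual sketch of per-session pseudonym self-generation (this is not the paper's DPSGM construction; the hash-based derivation, the ephemeral key, and the Python cryptography package are assumptions), a terminal can derive a fresh, unlinkable pseudonym on its own, without contacting the KGC, from its identity, a local secret, an ephemeral EC public key, and a timestamp.

```python
# Conceptual per-session pseudonym self-generation (illustrative only).
import hashlib, os, time
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def new_pseudonym(identity: bytes, secret: bytes):
    eph = ec.generate_private_key(ec.SECP256R1())            # ephemeral EC key per session
    pub = eph.public_key().public_bytes(Encoding.X962, PublicFormat.CompressedPoint)
    ts = str(time.time_ns()).encode()
    pid = hashlib.sha256(identity + secret + pub + ts).digest()  # fresh pseudonym
    return pid, eph                                           # pid sent out; eph kept for signcryption

pid, eph_key = new_pseudonym(b"terminal-42", os.urandom(32))
```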
Deploying federated learning (FL) over wireless networks with resource-constrained devices requires balancing accuracy, energy efficiency, and precision. Prior art on FL often requires devices to train deep neural networks (DNNs) using 32-bit precision for data representation to improve accuracy. However, such algorithms are impractical for resource-constrained devices, since DNNs can require the execution of millions of operations. Thus, training DNNs at a high precision level incurs a high energy cost for FL. In this paper, a quantized FL framework that represents data with a finite level of precision in both local training and uplink transmission is proposed. Here, the finite level of precision is captured through the use of quantized neural networks (QNNs) that quantize weights and activations in a fixed-precision format. In the considered FL model, each device trains its QNN and transmits a quantized training result to the base station. Energy models for the local training and for the transmission with quantization are rigorously derived. An energy minimization problem is formulated with respect to the level of precision while ensuring convergence. To solve the problem, we first analytically derive the FL convergence rate and then use a line search method. Simulation results show that our FL framework can reduce energy consumption by up to 53% compared to a standard FL model. The results also shed light on the tradeoff between precision, energy, and accuracy in FL over wireless networks.
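For intuition on the precision knob, the sketch below shows a standard uniform symmetric quantizer applied to a weight vector at several bit widths; the paper's exact QNN quantizer, energy model, and optimization are not reproduced, and the printed numbers only illustrate the precision-distortion tradeoff.

```python
# Uniform symmetric fixed-precision quantization of weights (illustrative only).
import numpy as np

def quantize(w: np.ndarray, n_bits: int) -> np.ndarray:
    """Quantize weights to n_bits with a symmetric uniform grid."""
    levels = 2 ** (n_bits - 1) - 1                  # e.g. 127 representable magnitudes for 8 bits
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale).clip(-levels, levels) * scale

w = np.random.randn(1000).astype(np.float32)
for n in (2, 4, 8):
    err = np.mean((w - quantize(w, n)) ** 2)
    print(f"{n}-bit quantization MSE: {err:.5f}")    # lower precision, higher distortion
```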
Extremely large-scale multiple-input multiple-output (XL-MIMO) is a promising technology for future sixth-generation (6G) networks to achieve higher performance. In practice, various linear precoding schemes, such as zero-forcing (ZF) and regularized zero-forcing (RZF) precoding, are capable of achieving both large spectral efficiency (SE) and low bit error rate (BER) in traditional massive MIMO (mMIMO) systems. However, these methods are not efficient in extremely large-scale regimes due to the inherent spatial non-stationarity and high computational complexity. To address this problem, we investigate a low-complexity precoding algorithm, namely randomized Kaczmarz (rKA), taking into account the spatial non-stationary properties of XL-MIMO systems. Furthermore, we propose a novel mode of randomization, i.e., sampling-without-replacement rKA (SwoR-rKA), which enjoys a faster convergence speed than the rKA algorithm. Besides, a closed-form expression for the SE, considering the interference between subarrays in downlink XL-MIMO systems, is derived. Numerical results show that both the rKA and SwoR-rKA algorithms reduce the computational complexity by 51.3% compared to the traditional RZF algorithm, while achieving similar SE performance. More importantly, our algorithms can effectively reduce the BER when the transmitter has imperfect channel estimation.
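To clarify the iteration the precoder builds on, here is a minimal real-valued randomized Kaczmarz sketch for a linear system Ax = b, together with an epoch-wise sampling-without-replacement variant in the spirit of SwoR-rKA; the RZF-type regularization, complex-valued channels (which would require conjugation in the projection step), and the subarray handling of the actual scheme are omitted.

```python
# Randomized Kaczmarz and a sampling-without-replacement variant (illustrative, real-valued).
import numpy as np

def rka(A, b, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    p = np.linalg.norm(A, axis=1) ** 2
    p /= p.sum()                                     # rows sampled prop. to squared norm
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=p)
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a            # project onto the i-th hyperplane
    return x

def swor_rka(A, b, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(epochs):
        for i in rng.permutation(m):                 # each row once per epoch (no replacement)
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x
```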
Background and purpose: Radiation-induced erectile dysfunction (RiED) is commonly seen in prostate cancer patients. Clinical trials have been developed at multiple institutions to investigate whether dose sparing of the internal pudendal arteries (IPA) will improve retention of sexual potency. The IPA is usually not considered a conventional organ-at-risk (OAR) due to segmentation difficulty. In this work, we propose a deep learning (DL)-based auto-segmentation model for the IPA that takes CT and MRI, or CT alone, as the input image modality to accommodate variation in clinical practice. Materials and methods: 86 patients with CT and MRI images and noisy IPA labels were recruited for this study. We split the data into 42/14/30 for model training, testing, and a clinical observer study, respectively. There were three major innovations in this model: 1) we designed an architecture with squeeze-and-excite blocks and modality attention for effective feature extraction and accurate segmentation; 2) a novel loss function was used to train the model effectively with noisy labels; and 3) a modality dropout strategy was used so that the model can segment in the absence of MRI. Results: The DSC, ASD, and HD95 values on the test dataset were 62.2%, 2.54 mm, and 7 mm, respectively. AI-segmented contours were dosimetrically equivalent to the expert physician's contours. The observer study showed that expert physicians scored the AI contours (mean=3.7) higher than the inexperienced physicians' contours (mean=3.1). When inexperienced physicians started from the AI contours, their scores improved to 3.7. Conclusion: The proposed model achieved good-quality IPA contours to improve the uniformity of segmentation and to facilitate the introduction of standardized IPA segmentation into clinical trials and practice.
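As a reference for the channel-attention building block named in innovation 1), here is a standard 3D squeeze-and-excitation block in PyTorch; the modality attention, the noise-robust loss, and the modality dropout of the actual model are not shown, and the tensor shapes are assumptions.

```python
# Standard 3D squeeze-and-excitation (SE) block (illustrative only).
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)                  # squeeze: global spatial context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                    # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:     # x: (B, C, D, H, W)
        b, c = x.shape[:2]
        s = self.pool(x).view(b, c)
        w = self.fc(s).view(b, c, 1, 1, 1)
        return x * w                                         # excite: reweight channels

x = torch.randn(2, 32, 8, 64, 64)
print(SEBlock(32)(x).shape)                                  # torch.Size([2, 32, 8, 64, 64])
```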