Protecting the parameters, states, and input/output signals of a dynamic controller is essential for securely outsourcing its computation to an untrusted third party. Although a fully homomorphic encryption scheme allows controller operations to be evaluated over encrypted data, a dynamic controller implemented under such a scheme can destabilize the closed-loop system or degrade control performance due to overflow. This paper presents a novel controller representation based on input-output history data that yields an encrypted dynamic controller free of destabilization and performance degradation. The implementation of this representation can be optimized via batching techniques to reduce time and space complexity. Furthermore, this study analyzes the stability and performance degradation of the closed-loop system caused by controller encryption. A numerical simulation demonstrates the feasibility of the proposed encrypted control scheme, which preserves the control performance of the original controller to a sufficient degree.
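To make this concrete, here is a minimal plaintext sketch of a controller whose output is an affine function of a finite window of past input-output data, so that no internal state is iterated (and hence no ciphertext is recursively updated) under encryption. The coefficient vectors `H_y`, `H_u` and the window length `n` are illustrative assumptions rather than the paper's notation, and the homomorphic evaluation itself is omitted.

```python
# Plaintext sketch of the input-output history representation of a
# dynamic controller (coefficients and window length are illustrative).
import numpy as np

n = 4                        # history depth (assumed)
H_y = np.random.randn(n)     # weights on past plant outputs (assumed)
H_u = np.random.randn(n)     # weights on past controller outputs (assumed)

y_hist = np.zeros(n)         # last n plant outputs
u_hist = np.zeros(n)         # last n controller outputs

def controller_step(y_new):
    """u[k] depends only on the I/O history, so an encrypted version
    evaluates a fixed-depth affine map each step instead of iterating
    an encrypted internal state."""
    global y_hist, u_hist
    y_hist = np.roll(y_hist, 1)
    y_hist[0] = y_new
    u_new = H_y @ y_hist + H_u @ u_hist
    u_hist = np.roll(u_hist, 1)
    u_hist[0] = u_new
    return u_new
```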
In this paper, we first show that the block error event of polar codes under successive cancellation list (SCL) decoding is composed of a path loss (PL) error event and a path selection (PS) error event: the PL error event occurs when the correct codeword is dropped from the list during SCL decoding, and the PS error event occurs when the correct codeword remains in the decoded list but is not selected as the decoded codeword. We then simplify the PL error event by assuming that the all-zero codeword is transmitted, and derive a lower bound on its probability via the joint probability density of the log-likelihood ratios of the information bits. Meanwhile, the union bound calculated from the minimum weight distribution is used to evaluate the probability of the PS error event. Building on this performance analysis, we design a greedy bit-swapping (BS) algorithm that constructs polar codes by iteratively swapping information and frozen bits to reduce the performance lower bound of SCL decoding. Simulation results show that the BLER performance of SCL decoding is close to the lower bound in the medium-to-high signal-to-noise ratio region, and that optimizing the lower bound with the BS algorithm improves the BLER performance of SCL decoding.
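As a hedged illustration of the construction step, the sketch below greedily swaps one information bit with one frozen bit whenever the swap decreases the objective; `lower_bound` stands in for the paper's SCL performance lower bound (the PL and PS terms) and is treated as a black-box callable.

```python
# Greedy bit-swapping (BS) sketch; `lower_bound` maps an information
# set to an estimated BLER lower bound under SCL decoding.
def bit_swap_construct(N, info_set, lower_bound, max_iters=100):
    info = set(info_set)
    frozen = set(range(N)) - info
    best = lower_bound(info)
    for _ in range(max_iters):
        improved = False
        for i in sorted(info):
            for j in sorted(frozen):
                cand = (info - {i}) | {j}
                b = lower_bound(cand)
                if b < best:                    # accept improving swap
                    info = cand
                    frozen = (frozen - {j}) | {i}
                    best, improved = b, True
                    break
            if improved:
                break
        if not improved:                        # local optimum reached
            break
    return sorted(info), best
```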
In this paper, we construct an efficient, linear, and fully decoupled finite difference scheme for wormhole propagation with a heat transmission process on staggered grids, which requires only the solution of a sequence of linear elliptic equations at each time step. We first derive positivity-preserving properties for the discrete porosity and its difference quotient in time, and then rigorously obtain optimal error estimates for the velocity, pressure, concentration, porosity, and temperature in different norms by establishing several auxiliary lemmas for the highly coupled nonlinear system. Numerical experiments in two- and three-dimensional cases verify our theoretical results and illustrate the capabilities of the constructed method.
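As a rough illustration of the decoupled structure only (not the paper's scheme), the 1D caricature below first updates the porosity explicitly, which keeps it positive and bounded by one for nonnegative data and a suitable time step, and then solves a single linear elliptic problem with the lagged porosity; the model, coefficients, and boundary data are invented for illustration.

```python
# 1D caricature of one decoupled time step: explicit porosity update,
# then a linear elliptic solve with the lagged porosity (illustrative).
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

nx, dt, k_s = 64, 1e-3, 0.5
h = 1.0 / (nx - 1)
phi = np.full(nx, 0.2)                 # porosity
c = np.linspace(0.0, 1.0, nx)          # acid concentration (frozen here)

for step in range(10):
    phi = phi + dt * k_s * c * (1.0 - phi)     # explicit, stays in (0, 1)
    # linear elliptic solve: -(phi p')' = 1 with homogeneous Dirichlet BCs
    main = (phi[:-2] + 2.0 * phi[1:-1] + phi[2:]) / (2.0 * h * h)
    off = -(phi[1:-2] + phi[2:-1]) / (2.0 * h * h)
    A = diags([off, main, off], [-1, 0, 1], format="csc")
    p = spsolve(A, np.ones(nx - 2))            # interior pressure values
```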
In many real-world multi-attribute decision-making (MADM) problems, mining the inter-relationships and possible hierarchical structures among the factors is considered one of the primary tasks. Beyond that, another major task is to determine an optimal strategy for working on the factors so as to enhance their effect on the goal attribute. This paper proposes two such strategies, namely parallel and hierarchical effort assignment and propagation strategies. The concept of effort propagation through a strategy is formally defined and described in the paper. Both the parallel and hierarchical strategies are divided into sub-strategies based on whether the assignment of efforts to the factors is uniform or driven by appropriate heuristics related to the factors in the system; the heuristics considered are the relative significance and the effort propagability of the factors. The strategies are analyzed for a real-life case study of Indian high school administrative factors that play an important role in enhancing students' performance. Across the proposed strategies, a total effort propagation of around 7%-15% to the goal is observed when a total of 1 unit of effort is given to the directly accessible factors of the system. A comparative analysis is conducted to determine which of the proposed strategies enhances student performance most effectively; the highest effort propagation achieved is approximately 14.4348%. The analysis establishes the need for research on effort propagation analysis in decision-making problems.
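To illustrate the notion of effort propagation, the toy sketch below pushes a uniform (parallel) assignment of 1 unit of effort through a small invented influence graph, where each edge weight is the fraction of effort that propagates onward; the graph, the weights, and the "goal" label are all hypothetical.

```python
# Toy effort propagation on an invented influence graph; edge weights
# give the fraction of effort that propagates from source to target.
edges = {("f1", "goal"): 0.10, ("f2", "f1"): 0.6, ("f2", "goal"): 0.05,
         ("f3", "f1"): 0.4}
accessible = ["f2", "f3"]          # directly accessible factors

def propagate(effort):
    """Push assigned effort through the graph; return the total
    effort that reaches the goal attribute."""
    effort = dict(effort)
    reached = 0.0
    frontier = list(effort)
    while frontier:
        node = frontier.pop()
        e = effort.pop(node, 0.0)
        for (src, dst), w in edges.items():
            if src != node:
                continue
            if dst == "goal":
                reached += w * e
            else:
                effort[dst] = effort.get(dst, 0.0) + w * e
                frontier.append(dst)
    return reached

uniform = {f: 1.0 / len(accessible) for f in accessible}  # parallel-uniform
print(f"effort reaching goal: {propagate(uniform):.4f}")  # 0.0750, i.e. 7.5%
```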
Topological Data Analysis (TDA) offers a suite of computational tools that provide quantified shape features of high-dimensional data for use by modern statistical and predictive machine learning (ML) models. In particular, persistent homology (PH) takes in data (e.g., point clouds, images, time series) and derives compact representations of latent topological structures known as persistence diagrams (PDs). Because PDs enjoy inherent noise tolerance, are interpretable, provide a solid basis for data analysis, and can be made compatible with the expansive set of well-established ML model architectures, PH has been widely adopted for model development, including on sensitive data such as genomic, cancer, sensor network, and financial data. TDA should therefore be incorporated into secure end-to-end data analysis pipelines. In this paper, we take the first step toward this goal and develop a version of the fundamental algorithm for computing PH on encrypted data using homomorphic encryption (HE).
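As a point of reference, the plaintext baseline below computes a persistence diagram from a point cloud with the GUDHI library; the encrypted variant developed in the paper is not shown, since HE would replace each arithmetic and comparison step of this computation with homomorphic operations.

```python
# Plaintext persistent homology baseline using GUDHI (the encrypted
# computation replaces these plaintext operations with HE circuits).
import numpy as np
import gudhi

points = np.random.rand(50, 2)                    # toy point cloud
rips = gudhi.RipsComplex(points=points, max_edge_length=0.5)
st = rips.create_simplex_tree(max_dimension=2)
diagram = st.persistence()                        # list of (dim, (birth, death))
for dim, (birth, death) in diagram[:5]:
    print(f"H{dim}: born {birth:.3f}, dies {death:.3f}")
```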
The hardware computing landscape is changing. What used to be distributed systems can now be found on a chip with highly configurable, diverse, specialized, and general-purpose units. Such Systems-on-a-Chip (SoCs) are used to control today's cyber-physical systems, serving as the building blocks of critical infrastructures. They are deployed in harsh environments and connected to cyberspace, which exposes them to both accidental faults and targeted cyberattacks. This comes on top of the changing fault landscape that continued technology scaling, emerging devices, and novel application scenarios will bring. In this paper, we discuss how the very features (distribution, parallelization, reconfigurability, heterogeneity) that cause many of the imminent and emerging security and resilience challenges also open avenues for their cure through SoC replication, diversity, rejuvenation, adaptation, and hybridization. We show how to leverage these techniques at different levels across the entire SoC hardware/software stack, and we call for more research on the topic.
Ensuring the long-term reproducibility of data analyses requires results stability tests that verify that analysis results remain within acceptable variation bounds despite inevitable software updates and hardware evolution. This paper introduces a numerical variability approach to results stability tests, which determines acceptable variation bounds using random rounding of floating-point calculations. By applying the resulting stability test to \fmriprep, a widely used neuroimaging tool, we show that the test is sensitive enough to detect subtle updates in image processing methods while remaining specific enough to accept numerical variations within a reference version of the application. This result contributes to the reliability and reproducibility of data analyses by providing a robust and flexible method for stability testing.
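A minimal sketch of such a test, assuming the reference pipeline can be rerun under random rounding (e.g., with a tool such as Verificarlo) and that the hypothetical `run_pipeline` returns a numeric result array, might look as follows; the min/max envelope is one simple choice of acceptable bounds.

```python
# Sketch of a results stability test based on random rounding.
import numpy as np

def build_bounds(run_pipeline, n_runs=10):
    """Run the reference version n_runs times under random rounding
    and return the per-element envelope of the observed results."""
    samples = np.stack([run_pipeline(seed=s) for s in range(n_runs)])
    return samples.min(axis=0), samples.max(axis=0)

def is_stable(result, lo, hi):
    """Accept a new version iff all its results stay inside the
    random-rounding envelope of the reference version."""
    return bool(np.all((result >= lo) & (result <= hi)))
```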
This study focuses on (traditional and unsourced) multiple-access communication over a single transmit antenna and multiple ($M$) receive antennas. We assume full or partial channel state information (CSI) at the receiver. It is known that to fully achieve the fundamental limits (even asymptotically) the decoder needs to jointly estimate all user codewords, which is computationally infeasible to do directly. We propose a low-complexity solution, termed coded orthogonal modulation multiple-access (COMMA), in which users first encode their messages via a long (multi-user-interference-aware) outer code operating over a $q$-ary alphabet. These symbols are modulated onto $q$ orthogonal waveforms. At the decoder, a multiple-measurement-vector approximate message passing (MMV-AMP) algorithm estimates several candidates (out of $q$) for each user, with the remaining uncertainty resolved by single-user outer decoders. Numerically, we show that COMMA outperforms a standard solution based on linear multiuser detection (MUD) with Gaussian signaling. Theoretically, we derive bounds and scaling laws for $M$, the number of users $K_a$, the SNR, and $q$ that quantify the trade-off between receive antennas and spectral efficiency. The orthogonal signaling scheme is applicable to unsourced random access and, with chirp sequences as the basis, allows for low-complexity fast Fourier transform (FFT) based receivers that are resilient to frequency and timing offsets.
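As a caricature of the modulation front end only (MMV-AMP and the outer code are not shown), the sketch below maps a $q$-ary symbol onto one of $q$ orthogonal waveforms and detects it noncoherently by combining received energy across the $M$ antennas; all parameters are illustrative.

```python
# Toy q-ary orthogonal modulation with noncoherent energy detection
# across M receive antennas (illustrative parameters).
import numpy as np

q, M, snr = 16, 8, 2.0
symbol = 5
x = np.zeros(q)
x[symbol] = 1.0                                   # orthogonal waveform = basis vector
h = (np.random.randn(M) + 1j * np.random.randn(M)) / np.sqrt(2)  # fading
noise = (np.random.randn(M, q) + 1j * np.random.randn(M, q)) / np.sqrt(2)
y = np.sqrt(snr) * np.outer(h, x) + noise         # M x q received samples
energy = np.sum(np.abs(y) ** 2, axis=0)           # antenna combining
print("detected symbol:", int(np.argmax(energy)))
```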
Multi-armed bandit (MAB) algorithms have been increasingly used to complement or integrate with A/B tests and randomized clinical trials in e-commerce, healthcare, and policymaking, and recent developments incorporate possibly delayed feedback. While the existing MAB literature often focuses on maximizing the expected cumulative reward (or, equivalently, on regret minimization), few efforts have been devoted to establishing valid statistical inference approaches that quantify the uncertainty of learned policies. We attempt to fill this gap by providing a unified statistical inference framework for policy evaluation in which the target policy is allowed to differ from the data-collecting policy and delays may be associated with the treatment arms. We present an adaptively weighted estimator that, on the one hand, incorporates the arm-dependent delay mechanism to achieve consistency and, on the other hand, mitigates the variance inflation across stages caused by vanishing sampling probabilities. In particular, our estimator does not critically depend on the ability to estimate the unknown delay mechanism. Under appropriate conditions, we prove that our estimator converges to a normal distribution as the number of time points goes to infinity, which provides guarantees for large-sample statistical inference. We illustrate the finite-sample performance of our approach through Monte Carlo experiments.
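A hedged sketch of an adaptively weighted, inverse-propensity-style value estimate is given below; the stabilizing weight $h_t = \sqrt{e_t}$ is one common variance-controlling choice rather than necessarily the paper's construction, and the delay mechanism is omitted for brevity.

```python
# Adaptively weighted IPW-style estimate of a target policy's value
# (the sqrt-propensity weight is an assumed, illustrative choice).
import numpy as np

def aw_estimate(arms, rewards, probs, pi):
    """arms[t]: pulled arm index; probs[t]: its sampling probability
    at time t under the adaptive data-collecting policy; pi[a]:
    target-policy probability of arm a."""
    arms = np.asarray(arms)
    rewards, probs, pi = map(np.asarray, (rewards, probs, pi))
    ipw = pi[arms] / probs * rewards   # unbiased but variance-inflated term
    h = np.sqrt(probs)                 # damps terms with vanishing probs
    return float(np.sum(h * ipw) / np.sum(h))
```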
Multi-flagellated bacteria exploit the hydrodynamic interaction between their filamentary tails, known as flagella, to swim and change swimming direction in low-Reynolds-number flow. This interaction, which gives rise to bundling and tumbling, is often neglected in simplified hydrodynamic models such as Resistive Force Theory (RFT); for the development of efficient and steerable bacteria-inspired robots, however, exploiting it is crucial. In this paper, we present the construction of a macroscopic bio-inspired robot featuring two rigid flagella arranged as right-handed helices, along with a cylindrical head. By rotating the flagella in opposite directions, the robot's body can reorient itself through repeatable and controllable tumbling. To accurately model this bi-flagellated mechanism in low-Reynolds-number flow, we couple rigid body dynamics with the method of Regularized Stokeslet Segments (RSS); unlike RFT, RSS accounts for the hydrodynamic interaction between distant filamentary structures. We further explore the parameter space to optimize the propulsion and torque of the system. To achieve a desired reorientation of the robot, we propose a tumble control scheme that modulates the rotation direction and speed of the two flagella; with this scheme, the robot can effectively reorient itself to attain the desired attitude. Notably, the overall scheme features a simple design and control, requiring only two control inputs. With our macroscopic framework serving as a foundation, we envision the eventual miniaturization of this technology to construct mobile and controllable micro-scale bacterial robots.
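For reference, the kernel underlying RSS-type methods is the regularized Stokeslet of Cortez (2001); the sketch below evaluates the velocity induced at a point by a set of regularized point forces (the segment quadrature that distinguishes RSS proper is omitted).

```python
# Velocity induced by regularized Stokeslets (Cortez 2001):
# u(x) = (1 / 8*pi*mu) * sum_j [ f_j (r^2 + 2 eps^2) + (f_j . r) r ]
#        / (r^2 + eps^2)^(3/2),  with r = x - x_j.
import numpy as np

def reg_stokeslet_velocity(x, src, f, eps, mu=1.0):
    """x: (3,) evaluation point; src: (N, 3) force locations;
    f: (N, 3) point forces; eps: regularization parameter."""
    r = x - src
    r2 = np.sum(r * r, axis=1)
    denom = (r2 + eps**2) ** 1.5
    term1 = f * ((r2 + 2.0 * eps**2) / denom)[:, None]
    term2 = r * (np.sum(f * r, axis=1) / denom)[:, None]
    return np.sum(term1 + term2, axis=0) / (8.0 * np.pi * mu)
```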
To address the sparsity and cold-start problems of collaborative filtering, researchers usually exploit side information, such as social networks or item attributes, to improve recommendation performance. This paper considers the knowledge graph as the source of side information. To address the limitations of existing embedding-based and path-based methods for knowledge-graph-aware recommendation, we propose Ripple Network, an end-to-end framework that naturally incorporates the knowledge graph into recommender systems. Like actual ripples propagating on the surface of water, Ripple Network stimulates the propagation of user preferences over the set of knowledge entities by automatically and iteratively extending a user's potential interests along links in the knowledge graph. The multiple "ripples" activated by a user's historically clicked items are thus superposed to form the user's preference distribution with respect to a candidate item, which can then be used to predict the final click probability. Through extensive experiments on real-world datasets, we demonstrate that Ripple Network achieves substantial gains over several state-of-the-art baselines in a variety of scenarios, including movie, book, and news recommendation.
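A minimal numpy sketch of one such propagation hop, simplified from the mechanism described above, is given below: triples (h, r, t) from a user's ripple set are scored against the candidate item embedding, and the resulting relevance weights combine the tail embeddings into the hop's response; the embedding shapes and the single-hop restriction are illustrative.

```python
# One RippleNet-style propagation hop (simplified, single hop).
import numpy as np

def ripple_hop(v, H, R, T):
    """v: (d,) candidate item embedding; H, T: (n, d) head/tail
    embeddings of the user's ripple-set triples; R: (n, d, d)
    relation matrices. Returns the hop response o of shape (d,)."""
    scores = np.einsum("d,nde,ne->n", v, R, H)   # relevance v^T R_i h_i
    p = np.exp(scores - scores.max())
    p = p / p.sum()                              # softmax over triples
    return p @ T                                 # weighted sum of tails

# The click probability is then predicted from the summed hop
# responses, e.g. sigmoid(v @ o) in this single-hop simplification.
```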