Due to large reflection and diffraction losses in the THz band, it is challenging to achieve reliable links in non-line-of-sight (NLoS) cases. Intelligent reflecting surfaces, although expected to solve the blockage problem and enhance system connectivity, suffer from high power consumption and operational complexity. In this work, a non-intelligent reflecting surface (NIRS), which is simply made of inexpensive metal foil and has no signal configuration capability, is adopted to enhance signal strength and coverage in the THz band. Channel measurements are conducted in typical indoor scenarios in the 300 GHz band to validate the effectiveness of the NIRS. Based on the measurement results, the positive influences of the NIRS are studied, including improvements in path power and coverage. Numerical results show that by invoking the NIRS, the power of reflected/scattered paths can be increased by more than 10 dB. Moreover, with the NIRS, over half of the measured area has doubled received power, and the coverage ratio for a 10 dB signal-to-noise ratio threshold is increased by up to 39%.
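For concreteness, the coverage-ratio metric above can be computed directly from an SNR map. The sketch below is a minimal illustration only: the SNR grid values and the uniform 3 dB NIRS gain are made-up assumptions, not the paper's measured data.

```python
# Minimal sketch of the coverage-ratio metric (illustrative only: the SNR
# grid and the uniform 3 dB NIRS gain below are assumptions, not measurements).
import numpy as np

rng = np.random.default_rng(0)
snr_db = rng.uniform(-5, 25, size=(20, 30))   # stand-in SNR map over a grid of
                                              # receiver positions, in dB

def coverage_ratio(snr_db, threshold_db=10.0):
    """Fraction of positions whose SNR clears the threshold."""
    return float(np.mean(snr_db >= threshold_db))

base = coverage_ratio(snr_db)
with_nirs = coverage_ratio(snr_db + 3.0)      # hypothetical uniform NIRS gain
print(f"coverage ratio: {base:.2f} -> {with_nirs:.2f}")
```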
People who are blind face unique challenges in performing instrumental activities of daily living (iADLs), which require them to rely on their senses as well as assistive technology. Existing research on the strategies used by people who are blind to conduct different iADLs has focused largely on outdoor activities such as wayfinding and navigation. However, less emphasis has been placed on information needs for indoor activities around the home. We present a mixed-methods approach that combines 16 semi-structured interviews with a follow-up behavioral study to understand current and potential future use of technologies for daily activities around the home, especially for cooking. We identify common practices, challenges, and strategies that exemplify user-specific and task-specific needs for effectively performing iADLs at home. Despite this heterogeneity in user needs, we reveal a near-universal preference for tactile over digital aids, which has important implications for the design of future assistive technologies. Our work extends existing research on iADLs at home and identifies barriers to technology adoption. Addressing these barriers will be critical to increasing adoption rates of assistive technologies and improving the overall quality of life for individuals who are blind.
Applications with low data reuse and frequent irregular memory accesses, such as graph or sparse linear algebra workloads, fail to scale well due to memory bottlenecks and poor core utilization. While prior work on prefetching, decoupling, or pipelining can mitigate memory latency and improve core utilization, memory bottlenecks persist due to limited off-chip bandwidth. Processing-in-memory (PIM) approaches based on the Hybrid Memory Cube (HMC) overcome bandwidth limitations but fail to achieve high core utilization due to poor task scheduling and synchronization overheads. Moreover, the high memory-per-core ratio of HMC limits strong scaling. We introduce Dalorex, a hardware-software co-design that achieves high parallelism and energy efficiency, demonstrating strong scaling with >16,000 cores when processing graph and sparse linear algebra workloads. Compared with prior PIM work, with both designs using 256 cores, Dalorex improves performance and energy consumption by two orders of magnitude through (1) a tile-based distributed-memory architecture where each processing tile holds an equal amount of data, and all memory operations are local; (2) a task-based parallel programming model where tasks are executed by the processing unit co-located with the target data; (3) a network design optimized for irregular traffic, where all communication is one-way and messages do not contain routing metadata; (4) novel traffic-aware task-scheduling hardware that maintains high core utilization; and (5) a data placement strategy that improves work balance. This work proposes architectural and software innovations that provide the greatest scalability to date for running graph algorithms while remaining programmable for other domains.
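To make points (1) and (2) concrete, here is a toy software simulation of the execution model: tiles own disjoint shards of the data, and a task always runs on the tile that owns its target vertex, so every read/write is tile-local. This is our illustration of the idea, not Dalorex's ISA or hardware; the tile count, sharding rule, and BFS-style task are assumptions.

```python
# Toy sketch of a Dalorex-style task-based model: each tile owns a shard of
# the vertex array; a task executes only on the owner tile, so all memory
# accesses are local; tasks for other vertices are sent as one-way messages.
from collections import deque

NUM_TILES = 4
N = 16                                       # vertices, sharded round-robin
owner = lambda v: v % NUM_TILES              # data placement: owner tile of v
dist = [None] * N                            # conceptually partitioned by tile
adj = {v: [(v + 1) % N, (v + 5) % N] for v in range(N)}  # toy graph

queues = [deque() for _ in range(NUM_TILES)] # one-way per-tile message queues

def visit_task(tile, v, d):
    """Runs only on the tile owning v, so dist[v] is a local access."""
    assert tile == owner(v)
    if dist[v] is None or d < dist[v]:
        dist[v] = d
        for u in adj[v]:                     # spawn tasks at neighbors' owners
            queues[owner(u)].append((u, d + 1))

queues[owner(0)].append((0, 0))              # root task
while any(queues):
    for tile in range(NUM_TILES):            # round-robin "scheduler"
        if queues[tile]:
            v, d = queues[tile].popleft()
            visit_task(tile, v, d)

print(dist)  # BFS distances computed with only tile-local reads/writes
```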
Fractional programming (FP) plays a crucial role in wireless network design because many relevant problems involve maximizing or minimizing ratio terms. Notice that the maximization case and the minimization case of FP cannot be converted to each other in general, so they have been dealt with separately in most previous studies. Thus, an existing FP method for maximizing ratios typically does not work for the minimization case, and vice versa. However, the FP objective can be mixed max-and-min, e.g., one may wish to maximize the signal-to-interference-plus-noise ratio (SINR) of the legitimate receiver while minimizing that of the eavesdropper. We aim to fill the gap between max-FP and min-FP by devising a unified optimization framework. The main results are threefold. First, we extend the existing max-FP technique called the quadratic transform to min-FP, and further develop a full generalization for the mixed case. Second, we provide a minorization-maximization (MM) interpretation of the proposed unified approach, thereby establishing its convergence and also obtaining a matrix extension; another result we obtain is a generalized Lagrangian dual transform that facilitates solving logarithmic FP. Finally, we present three typical applications: age-of-information (AoI) minimization, Cramér-Rao bound minimization for sensing, and secure data rate maximization, none of which can be efficiently addressed by previous FP methods.
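For background, the max-FP quadratic transform that the abstract extends is a known result from the FP literature, restated below in our own notation (assuming $A_n(x) \ge 0$ and $B_n(x) > 0$); the min-FP and mixed-case generalizations are the paper's contribution and are not reproduced here.

```latex
% Known quadratic transform for max-FP (our notation); alternating between
% the closed-form y-update and the x-subproblem gives the MM-style iteration.
\max_{x \in \mathcal{X}} \ \sum_{n=1}^{N} \frac{A_n(x)}{B_n(x)}
\quad\Longleftrightarrow\quad
\max_{x \in \mathcal{X},\, y} \ \sum_{n=1}^{N}
  \left( 2 y_n \sqrt{A_n(x)} - y_n^2\, B_n(x) \right),
\qquad
y_n^{\star} = \frac{\sqrt{A_n(x)}}{B_n(x)} .
```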
Given the ever-increasing data rate demands of beyond-5G networks and the wide use of the Orthogonal Frequency Division Multiplexing (OFDM) technique in cellular systems, it is critical to reduce the pilot overhead of OFDM systems in order to increase their data rate. Owing to the sparsity of multipath channels, sparse recovery methods can be exploited to reduce pilot overhead: OFDM pilots serve as random samples for channel impulse response estimation. We propose a three-step sparse recovery algorithm based on sparsity-domain smoothing; its three steps are time-domain residue computation, sparsity-domain smoothing, and adaptive-threshold sparsification. To the best of our knowledge, the proposed sparsity-domain-smoothing-based thresholding recovery method, termed SDS-IMAT, has not previously been used for OFDM sparse channel estimation. Pilot locations are also derived by minimizing the coherence of the measurement matrix. Numerical results verify that the proposed scheme outperforms other existing thresholding and greedy recovery methods and achieves near-optimal performance; its effectiveness is shown in terms of mean square error and bit error rate.
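The sketch below illustrates the generic iterative-hard-thresholding skeleton that IMAT-style methods build on: compute a residue, step back to the sparse (time) domain, and keep only taps above an exponentially decaying threshold. It is our hedged reconstruction with assumed sizes; the paper's SDS-IMAT adds a sparsity-domain smoothing step that we do not attempt to reproduce.

```python
# Hedged sketch of IMAT-style recovery for a sparse channel (generic iterative
# hard thresholding with a decaying threshold; sizes and schedule are assumed).
import numpy as np

rng = np.random.default_rng(0)
N, P, K = 256, 64, 8                    # CIR length, pilot count, channel taps
h = np.zeros(N, complex)
h[rng.choice(N, K, replace=False)] = rng.standard_normal(K) + 1j * rng.standard_normal(K)

F = np.fft.fft(np.eye(N)) / np.sqrt(N)  # unitary DFT
A = F[rng.choice(N, P, replace=False)]  # rows = randomly placed pilot subcarriers
y = A @ h + 0.01 * (rng.standard_normal(P) + 1j * rng.standard_normal(P))

h_hat = np.zeros(N, complex)
for k in range(40):
    g = h_hat + A.conj().T @ (y - A @ h_hat)  # residue step back to time domain
    thr = np.exp(-0.2 * k)                    # adaptive, exponentially decaying
    h_hat = g * (np.abs(g) > thr)             # keep only the dominant taps

print(np.linalg.norm(h - h_hat) / np.linalg.norm(h))  # relative estimation error
```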
Blockchain enables peer-to-peer transactions in cyberspace without a trusted third party. The rapid growth of Ethereum, and of smart-contract blockchains in general, calls for well-designed Transaction Fee Mechanisms (TFMs) to allocate limited storage and computation resources. However, existing research on TFMs has largely failed to consider the waiting time of transactions, which is essential for computer security and economic efficiency. Integrating data from the Ethereum blockchain and the memory pool (mempool), we explore how two types of events affect transaction latency. First, we apply a regression discontinuity design (RDD) to study the causal effect of the Merge, the most recent major upgrade of Ethereum. Our results show that the Merge significantly reduces long waiting times, network load, and market congestion. In addition, we verify the robustness of our results by inspecting confounding factors such as censorship and unobserved delays of transactions submitted via private channels. Second, examining three major protocol changes during the Merge, we identify block interval shortening as the most plausible cause of our empirical results. Furthermore, in a mathematical model, we show that the block interval is a unique mechanism-design choice for the EIP-1559 TFM to achieve better security and efficiency, applicable in general to market congestion caused by demand surges. Third, we apply time series analysis to study the interaction between non-fungible token (NFT) drops and market congestion using Facebook Prophet, an open-source algorithm for generating time-series models. Our study identifies NFT drops as a unique source of market congestion, captured as holiday effects beyond trend and seasonality. Finally, we envision three future research directions for TFMs.
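The core RDD idea is a local linear regression on each side of the event cutoff, with the jump at the cutoff estimating the causal effect. The sketch below uses synthetic data; the dataset, column names, bandwidth, and effect sizes are our assumptions, not the paper's specification.

```python
# Hedged sketch of a regression discontinuity design around the Merge
# (synthetic data; columns, bandwidth, and magnitudes are illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per transaction, with mempool waiting time (seconds) and block
# time relative to the Merge (hours; negative = pre-Merge).
rng = np.random.default_rng(1)
t = rng.uniform(-72, 72, 20_000)
wait = 30 - 8 * (t >= 0) + 0.05 * t + rng.exponential(10, t.size)
df = pd.DataFrame({"rel_hours": t, "wait_s": wait, "post": (t >= 0).astype(int)})

bw = 48  # local bandwidth around the cutoff, in hours (a tuning choice)
local = df[df.rel_hours.abs() <= bw]
# Local linear RDD: separate slopes on each side; the jump is the `post` term.
fit = smf.ols("wait_s ~ post + rel_hours + post:rel_hours", data=local).fit()
print(fit.params["post"])  # estimated discontinuity in waiting time at the Merge
```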
Interference is a ubiquitous problem in experiments conducted on two-sided content marketplaces, such as Douyin (China's analog of TikTok). In many cases, creators are the natural unit of experimentation, but creators interfere with each other through competition for viewers' limited time and attention. "Naive" estimators currently used in practice simply ignore the interference, but in doing so incur bias on the order of the treatment effect. We formalize the problem of inference in such experiments as one of policy evaluation. Off-policy estimators, while unbiased, have impractically high variance. We introduce a novel Monte Carlo estimator, based on "Differences-in-Qs" (DQ) techniques, which achieves bias that is second-order in the treatment effect while remaining sample-efficient to estimate. On the theoretical side, our contribution is a generalized theory of Taylor expansions for policy evaluation, which extends DQ theory to all major MDP formulations. On the practical side, we implement our estimator on Douyin's experimentation platform and, in the process, develop DQ into a truly "plug-and-play" estimator for interference in real-world settings: one that provides robust, low-bias, low-variance treatment effect estimates; admits computationally cheap, asymptotically exact uncertainty quantification; and reduces MSE by 99\% compared to the best existing alternatives in our applications.
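To illustrate the Differences-in-Qs idea, here is a toy tabular sketch in a 2-state MDP of our own design (not Douyin's production estimator): instead of differencing raw rewards, estimate Q-values under the 50/50 experiment policy and average Q(s,1) - Q(s,0) over visited states. In this toy, the action also shifts the next state, so the naive estimator misses the spillover while the DQ-style estimate recovers the full effect.

```python
# Toy sketch of a Differences-in-Qs (DQ) style estimator; the MDP, rewards,
# and step sizes are assumptions chosen to make the interference bias visible.
import numpy as np

rng = np.random.default_rng(0)
p_next1 = {0: 0.3, 1: 0.7}               # action -> P(next state = 1): interference
reward = lambda s, a: s + 0.3 * a        # treatment lifts reward; state carries spillover

Q = np.zeros((2, 2))                     # differential Q-values: states x actions
rho, alpha, beta = 0.0, 0.05, 0.005      # average-reward estimate and step sizes
visits = np.zeros(2)
treated, control = [], []
s = 0
for _ in range(300_000):
    a = int(rng.random() < 0.5)          # Bernoulli(1/2) A/B randomization
    r = reward(s, a)
    s2 = int(rng.random() < p_next1[a])
    a2 = int(rng.random() < 0.5)
    td = r - rho + Q[s2, a2] - Q[s, a]   # average-reward SARSA under the mixture
    Q[s, a] += alpha * td
    rho += beta * td
    visits[s] += 1
    (treated if a else control).append(r)
    s = s2

w = visits / visits.sum()
print("naive:", np.mean(treated) - np.mean(control))  # ~0.3, misses the spillover
print("DQ   :", w @ (Q[:, 1] - Q[:, 0]))              # ~0.7, the true ATE here
```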
We present a novel, first-of-its-kind information-theoretic framework for the design and implementation of a ground-to-UAV (G2U) communication network that minimizes end-to-end transmission delay in the presence of interference. The proposed framework is useful as it characterizes the minimum transmission latency that an uplink G2U communication must incur while achieving a given level of reliability. To characterize the transmission delay, we utilize Fano's inequality and derive a tight upper bound on the capacity of the G2U uplink channel in the presence of interference, noise, and potential jamming. Subsequently, given the reliability constraint, the error exponent is obtained for the given channel. Furthermore, a relay UAV operating in dual-hop relay mode with the amplify-and-forward (AF) protocol is considered, for which we jointly obtain the optimal positions of the relay and receiver UAVs in the presence of interference. Interestingly, we find that for both point-to-point and relayed links, increasing the transmit power is not always optimal for delay minimization. Moreover, we prove that there exists an optimal height that minimizes the end-to-end transmission delay in the presence of interference. The proposed framework can be used in practice by a network controller as a system-parameter selection criterion: among a set of candidate parameters, those leading to the lowest transmission latency can be adopted for transmission. The presented analysis further sets a baseline assessment for applying Command and Control (C2) standards to mission-critical G2U and UAV-to-UAV (U2U) services.
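As a rough illustration of how a reliability target pins down latency through the error exponent (a textbook-style reconstruction under our own assumptions, not the paper's exact bound): if the coding error probability decays as $e^{-nE(R)}$ at blocklength $n$ and rate $R$, then an error target $\varepsilon$ dictates a minimum blocklength and, at $B$ channel uses per second, a minimum delay.

```latex
% Sketch under stated assumptions: reliability e^{-n E(R)}, target error
% probability eps, and B channel uses per second.
P_e \approx e^{-n E(R)} \le \varepsilon
\;\Longrightarrow\;
n \ge \frac{\ln(1/\varepsilon)}{E(R)}
\;\Longrightarrow\;
T_{\min} \approx \frac{\ln(1/\varepsilon)}{B\, E(R)} .
```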
We show that in bipartite graphs a large expansion factor implies very fast dynamic matching. Coupled with known constructions of lossless expanders, this gives a solution to the main open problem in a classical paper of Feldman, Friedman, and Pippenger (SIAM J. Discret. Math., 1(2):158-173, 1988). Application 1: storing sets. We construct 1-query bitprobes that store a dynamic subset $S$ of an $N$-element set. A membership query reads a single bit, whose location is computed in poly$(\log N, \log(1/\varepsilon))$ time, and is correct with probability $1-\varepsilon$. Elements can be inserted and removed efficiently in quasipoly$(\log N)$ time. Previous constructions were static: membership queries have the same parameters, but each update requires recomputing the whole data structure, which takes poly$(\#S \log N)$ time. Moreover, the size of our scheme is smaller than that of the best known constructions for static sets. Application 2: switching networks. We construct explicit constant-depth $N$-connectors of essentially minimum size in which the path-finding algorithm runs in quasipoly$(\log N)$ time. In the non-explicit construction of Feldman, Friedman, and Pippenger (SIAM J. Discret. Math., 1(2):158-173, 1988) and in the explicit construction of Wigderson and Zuckerman (Combinatorica, 19(1):125-138, 1999), the runtime is exponential in $N$.
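To convey only the 1-query bitprobe *interface*, here is a Bloom-filter-like toy: each query reads a single bit from a randomly chosen table, members are never rejected, and non-members are wrongly accepted with small probability. This is emphatically not the expander-based construction above (it supports insertions but not deletions, and its size is far from optimal); all parameters are ad hoc.

```python
# Toy illustration of the 1-query bitprobe interface: R hash tables, a query
# probes ONE bit of a random table; one-sided error over the query randomness.
import random

N, R, M = 10_000, 64, 4_096                 # universe, #tables, bits per table
random.seed(0)
salts = [random.getrandbits(64) for _ in range(R)]
tables = [bytearray(M) for _ in range(R)]
h = lambda j, x: hash((salts[j], x)) % M    # hash for table j

def insert(x):                               # updates touch R bits
    for j in range(R):
        tables[j][h(j, x)] = 1

def query(x):                                # reads a SINGLE bit
    j = random.randrange(R)
    return tables[j][h(j, x)] == 1

S = set(random.sample(range(N), 50))
for x in S:
    insert(x)
assert all(query(x) for x in S)              # no false negatives
fp = sum(query(x) for x in range(N) if x not in S) / (N - len(S))
print(f"false-positive rate ~ {fp:.4f}")     # roughly |S| / M per probe
```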
This work explores the correlation between channels in reconfigurable intelligent surface (RIS)-aided communication systems. In this type of system, an RIS made up of many passive elements with adjustable phases reflects the transmitter's signal to the receiver. Since the transmitter-RIS link may be shared by multiple receivers, the cascade channels of two receivers may experience correlated fading, which can negatively impact system performance. Using the mean correlation coefficient as a metric, we analyze the correlation between two cascade channels and derive an accurate closed-form approximation. We also consider the extreme case of an infinitely large number of RIS elements and obtain a convergence result. The accuracy of our analysis is validated by simulation results, which offer insights into the correlation characteristics of RIS-aided fading channels.
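A quick Monte Carlo makes the shared-link effect tangible: two cascade channels built from the same transmitter-RIS link but independent RIS-receiver links exhibit correlated magnitudes. The i.i.d. Rayleigh model, the magnitude-correlation metric, and the element/trial counts below are our illustrative choices, not the paper's exact setup.

```python
# Monte-Carlo sketch of correlation between two RIS cascade channels that
# share the transmitter-RIS link (illustrative fading model and parameters).
import numpy as np

rng = np.random.default_rng(0)
N, trials = 64, 20_000                        # RIS elements, fading realizations

cg = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
g = cg(trials, N)                             # shared Tx-RIS link
h1, h2 = cg(trials, N), cg(trials, N)         # independent RIS-Rx links
theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # common RIS phase shifts

c1 = (g * theta * h1).sum(axis=1)             # cascade channel of receiver 1
c2 = (g * theta * h2).sum(axis=1)             # cascade channel of receiver 2

# Correlation of channel magnitudes across realizations (induced by shared g).
print(np.corrcoef(np.abs(c1), np.abs(c2))[0, 1])
```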
Deep neural networks have achieved remarkable success in computer vision tasks. Existing neural networks mainly operate in the spatial domain with fixed input sizes. For practical applications, images are usually large and must be downsampled to the predetermined input size of the network. Even though the downsampling operations reduce computation and the required communication bandwidth, they obliviously remove both redundant and salient information, which results in accuracy degradation. Inspired by digital signal processing theory, we analyze the spectral bias from the frequency perspective and propose a learning-based frequency selection method to identify the trivial frequency components that can be removed without accuracy loss. The proposed method of learning in the frequency domain leverages identical structures of well-known neural networks, such as ResNet-50, MobileNetV2, and Mask R-CNN, while accepting frequency-domain information as the input. Experimental results show that learning in the frequency domain with static channel selection can achieve higher accuracy than the conventional spatial downsampling approach while further reducing the input data size. Specifically, for ImageNet classification with the same input size, the proposed method achieves 1.41% and 0.66% top-1 accuracy improvements on ResNet-50 and MobileNetV2, respectively. Even with half the input size, the proposed method still improves top-1 accuracy on ResNet-50 by 1%. In addition, we observe a 0.8% average precision improvement on Mask R-CNN for instance segmentation on the COCO dataset.
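The sketch below shows one plausible frequency-domain input pipeline: an 8x8 block DCT turns each image channel into 64 frequency channels, and a static mask keeps only a subset before the tensor is fed to a CNN. The block size, the low-frequency mask, and the shapes are our assumptions for illustration, not the paper's exact configuration.

```python
# Hedged sketch of a block-DCT frequency-channel input with static selection
# (our reconstruction; block size, mask, and shapes are assumptions).
import numpy as np
from scipy.fft import dctn

def to_frequency_channels(img, block=8):
    """Per-channel 8x8 block DCT -> (H/8, W/8, C*64) frequency tensor."""
    H, W, C = img.shape
    out = np.empty((H // block, W // block, C * block * block))
    for c in range(C):
        for i in range(0, H, block):
            for j in range(0, W, block):
                coeffs = dctn(img[i:i+block, j:j+block, c], norm="ortho")
                out[i // block, j // block, c*64:(c+1)*64] = coeffs.ravel()
    return out

img = np.random.rand(224, 224, 3)            # stand-in for a YCbCr image
freq = to_frequency_channels(img)            # (28, 28, 192) frequency tensor

# Static channel selection: keep only low-frequency channels per component,
# shrinking the input fed to the CNN (e.g., a stem adapted to 28x28 inputs).
keep = np.zeros(192, bool)
low = [u * 8 + v for u in range(4) for v in range(4)]  # top-left 4x4 of each 8x8
for c in range(3):
    keep[[c * 64 + k for k in low]] = True
print(freq[..., keep].shape)                 # (28, 28, 48) selected channels
```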