In this work, we study massive multiple-input multiple-output (MIMO) precoders that minimize power consumption while meeting the users' rate requirements. We first characterize analytically the solutions that minimize the power consumption of the power amplifiers (PAs) in narrowband and wideband systems at low system load, where the per-antenna power constraints are not binding. We then focus on the asymptotic wideband regime. The power consumed by the whole base station (BS) and the high-load scenario are also investigated. We obtain simple solutions, and in the asymptotic case the optimal strategy reduces to finding the optimal number of active antennas while relying on known precoders over those active antennas. Numerical results show that large savings in power consumption are achievable in the narrowband system by employing antenna selection, whereas in the wideband system all antennas need to be activated when considering only the PA consumption, which implies lower savings. When considering the overall BS power consumption and a large number of subcarriers, we show that significant savings are achievable in the low-load regime by using a subset of the BS antennas. While optimization based on transmit power pushes toward activating all antennas, optimization based on consumed power activates a number of antennas proportional to the load.
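The pull between the two objectives can be illustrated with a toy consumed-power model. The sketch below assumes a hypothetical per-antenna static cost, a fixed PA efficiency, and an idealized array-gain law in which the transmit power needed to meet the rate requirement scales as load/M; none of these numbers or laws are taken from the paper.

```python
# Toy model: activating M antennas costs a fixed static overhead per antenna,
# while the transmit power needed to meet the rate target shrinks with the
# array gain (idealized here as load / M). All values are illustrative.
P_FIX = 1.0    # hypothetical static consumption per active antenna (W)
ETA = 0.3      # hypothetical PA efficiency
M_MAX = 64     # antennas available at the BS

def consumed_power(M, load):
    p_tx = load / M                    # idealized required transmit power
    return M * P_FIX + p_tx / ETA      # static part + PA consumption

for load in (1.0, 10.0, 100.0):
    M_opt = min(range(1, M_MAX + 1), key=lambda M: consumed_power(M, load))
    print(f"load={load:6.1f} -> activate {M_opt} of {M_MAX} antennas")
```

In this toy model, minimizing transmit power alone always selects all M_MAX antennas, whereas minimizing consumed power activates more antennas as the load grows, mirroring the qualitative behaviour described above.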
The superconducting linear accelerator is a highly flexible facility for modern scientific discoveries, necessitating weekly reconfiguration and tuning. Accordingly, minimizing setup time is essential to afford users ample experimental time. We propose a trend-based soft actor-critic (TBSAC) beam control method with strong robustness, allowing agents to be trained in a simulated environment and applied directly to the real accelerator in a zero-shot manner. To validate the effectiveness of our method, two typical beam control tasks were performed on the China Accelerator Facility for Superheavy Elements (CAFe II) and a light particle injector (LPI), respectively. The orbit correction tasks were performed separately in three cryomodules of CAFe II; the tuning time was reduced to one-tenth of that needed by human experts, and the RMS values of the corrected orbits were all less than 1 mm. The transmission efficiency optimization task was conducted in the LPI, where our agent optimized the transmission efficiency of the radio-frequency quadrupole (RFQ) to over $85\%$ within 2 minutes. The outcomes of these two experiments substantiate that our proposed TBSAC approach can efficiently and effectively accomplish beam commissioning tasks while upholding the same standard as skilled human experts. As such, our method exhibits potential for future application in other accelerator commissioning fields.
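The train-in-simulation, deploy-zero-shot recipe can be sketched with an off-the-shelf soft actor-critic implementation. In the sketch below, `ToyOrbitEnv`, its linear response-matrix dynamics, and all numbers are illustrative stand-ins, and the paper's trend-based state design is not reproduced; only the workflow is shown.

```python
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import SAC

class ToyOrbitEnv(gym.Env):
    """Hypothetical stand-in for a beamline simulator: the agent sets two
    corrector strengths to drive four BPM orbit readings toward zero."""
    def __init__(self):
        self.observation_space = spaces.Box(-10, 10, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Box(-1, 1, shape=(2,), dtype=np.float32)
        self.R = np.random.default_rng(0).normal(size=(4, 2))  # toy response matrix

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.orbit = self.np_random.uniform(-5, 5, size=4).astype(np.float32)
        return self.orbit, {}

    def step(self, action):
        self.orbit = np.clip(self.orbit + self.R @ action, -10, 10).astype(np.float32)
        rms = float(np.sqrt(np.mean(self.orbit ** 2)))
        return self.orbit, -rms, rms < 0.1, False, {}

agent = SAC("MlpPolicy", ToyOrbitEnv())
agent.learn(total_timesteps=10_000)   # training happens entirely in simulation
# Zero-shot deployment then calls agent.predict(obs, deterministic=True)
# on observations from the real machine, with no fine-tuning.
```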
In image compression, recent advances in generative modeling have brought to light the existence of a trade-off between the rate and the perceptual quality, where perception is measured by the closeness of the output distribution to that of the source. This leads to the question: how does a perception constraint impact the trade-off between the rate and traditional distortion constraints, typically quantified by a single-letter distortion measure? We consider the compression of a memoryless source $X$ in the presence of memoryless side information $Z,$ as studied by Wyner and Ziv, but elucidate the impact of a perfect realism constraint, which requires the output distribution to match the source distribution. We consider two cases: when $Z$ is available only at the decoder, or at both the encoder and the decoder. The rate-distortion trade-off with perfect realism is characterized for sources on general alphabets when infinite common randomness is available between the encoder and the decoder. We show that, similarly to traditional source coding with side information, the two cases are equivalent when $X$ and $Z$ are jointly Gaussian under the squared error distortion measure. We also provide a general inner bound in the case of limited common randomness.
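For context, the classical Wyner-Ziv function for decoder-only side information, which this work augments, is (standard background, not a result of this paper)

$$R_{\mathrm{WZ}}(D) = \min_{\substack{P_{U|X}:\; U - X - Z \\ \exists g:\; \mathbb{E}[d(X,\, g(U,Z))] \le D}} I(X;U) - I(U;Z),$$

and the perfect realism constraint adds the distributional requirement $P_{\hat{X}} = P_X$ on the reconstruction $\hat{X}$.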
In this letter, we address blockage detection and precoder design for multiple-input multiple-output (MIMO) links without requiring any communication overhead. Blockage detection is achieved by classifying light detection and ranging (LIDAR) data through a physics-based graph neural network (GNN). For precoder design, a preliminary channel estimate is obtained by running ray tracing on a 3D surface reconstructed from LIDAR data. This estimate is successively refined, and the precoder is designed accordingly. Numerical simulations show that blockage detection succeeds with 95% accuracy. Our digital precoding achieves 90% of the capacity, and our analog precoding outperforms previous works exploiting LIDAR for precoder design.
We propose a method of optimizing monotone Boolean circuits by rewriting them in a simpler, equivalent form. We use six heuristics in total: Hill Climbing, Simulated Annealing, and variations thereof, all of which operate on the representation of the circuit as a logical formula. Our main motivation is to improve performance in Attribute-Based Encryption (ABE) schemes for Boolean circuits, and we therefore show how our heuristics improve ABE systems for such circuits. We also run tests to evaluate the performance of our heuristics, both as a standalone optimization for Boolean circuits and inside ABE systems.
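The flavour of formula-level rewriting can be conveyed with a minimal greedy (hill-climbing-style) pass. The AST encoding and the two rewrite rules below (idempotence and absorption) are illustrative only; the paper's six heuristics are richer than this sketch.

```python
# Monotone formulas as nested tuples ('and'|'or', left, right) over variable
# names; the cost is the formula size, and each pass applies local rewrites.
def size(f):
    return 1 if isinstance(f, str) else 1 + size(f[1]) + size(f[2])

def simplify_once(f):
    if isinstance(f, str):
        return f
    op, a, b = f[0], simplify_once(f[1]), simplify_once(f[2])
    if a == b:                        # idempotence: x op x -> x
        return a
    dual = 'and' if op == 'or' else 'or'
    for x, y in ((a, b), (b, a)):     # absorption: x or (x and y) -> x
        if isinstance(y, tuple) and y[0] == dual and x in (y[1], y[2]):
            return x
    return (op, a, b)

def hill_climb(f, steps=100):
    for _ in range(steps):            # keep rewriting while the size shrinks
        g = simplify_once(f)
        if size(g) >= size(f):        # local optimum reached
            return f
        f = g
    return f

f = ('or', 'x', ('and', 'x', ('or', 'y', 'y')))   # x OR (x AND (y OR y))
print(hill_climb(f))                               # -> 'x'
```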
We present a mass lumping approach based on an isogeometric Petrov-Galerkin method that preserves higher-order spatial accuracy in explicit dynamics calculations irrespective of the polynomial degree of the spline approximation. To discretize the test function space, our method uses an approximate dual basis, whose functions are smooth, have local support and satisfy approximate bi-orthogonality with respect to a trial space of B-splines. The resulting mass matrix is ``close'' to the identity matrix. Specifically, a lumped version of this mass matrix preserves all relevant polynomials when utilized in a Galerkin projection. Consequently, the mass matrix can be lumped (via row-sum lumping) without compromising spatial accuracy in explicit dynamics calculations. We address the imposition of Dirichlet boundary conditions and the preservation of approximate bi-orthogonality under geometric mappings. In addition, we establish a link between the exact dual and approximate dual basis functions via an iterative algorithm that improves the approximate dual basis towards exact bi-orthogonality. We demonstrate the performance of our higher-order accurate mass lumping approach via convergence studies and spectral analyses of discretized beam, plate and shell models.
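Row-sum lumping itself is a one-line operation; the toy example below lumps the consistent mass matrix of a single linear 1D element, as a generic illustration of the lumping step rather than the paper's dual-basis construction. The paper's contribution is that, with the approximate dual test space, this step no longer degrades the spatial convergence order for higher-degree splines.

```python
import numpy as np

# Consistent mass matrix of one linear 1D element of length h is
# M = (h/6) * [[2, 1], [1, 2]]; row-sum lumping moves each row sum onto
# the diagonal, yielding the familiar diag(h/2, h/2).
h = 1.0
M = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
M_lumped = np.diag(M.sum(axis=1))
print(M_lumped)        # [[0.5 0. ]
                       #  [0.  0.5]]
```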
Full-duplex (FD) communication, in which transmission and reception occur simultaneously, is a new method for increasing capacity and spectral efficiency. However, severe interference leaks from the transmitter to the receiver and can disrupt the system's operation completely. To reduce this interference, the transceiver tries to estimate the interfering symbols and remove their effects. A typical approach is the Hammerstein model, in which the nonlinear power amplifier (PA) and the multipath channel are modeled as a static nonlinearity in series with a finite impulse response (FIR) filter. The model parameters are then adjusted, and the interference symbols are estimated from the transmitted symbols. In the Hammerstein method, the interference symbols are thus estimated directly from the transmitted symbols. In practice, however, the transmitted symbols first pass through the pulse-shaping filter and become a signal; this signal then passes through the nonlinear PA and the communication channel, and the received signal is finally filtered by the matched filter (MF) at the receiver and converted back to symbols. In this procedure, the amplifier and the communication channel affect the transmitted signal directly and distort the transmitted symbols indirectly. Therefore, in a practical setting that accounts for the transmitter's pulse-shaping filter and the receiver's MF, the symbols estimated with the Hammerstein method are erroneous. To solve this problem, we propose a new MF at the receiver, adjusted according to the interfering signal. We show that this method performs far better than the Hammerstein method.
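The symbol-level Hammerstein structure criticized above is easy to state in code: a memoryless polynomial nonlinearity followed by an FIR filter. The coefficient values below are illustrative placeholders; the point of the paper is precisely that this model omits the pulse-shaping and matched filters acting on the continuous-time signal.

```python
import numpy as np

# Minimal Hammerstein self-interference model: memoryless polynomial PA
# nonlinearity, then an FIR multipath channel. All coefficients are made up.
rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=1000)           # transmitted symbols (BPSK here)

a = [1.0, 0.0, -0.1]                             # PA polynomial: a1*x + a2*x^2 + a3*x^3
pa_out = sum(ak * x ** (k + 1) for k, ak in enumerate(a))

h = np.array([0.8, 0.3, 0.1])                    # FIR channel taps
y = np.convolve(pa_out, h)[:len(x)]              # Hammerstein interference estimate
```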
Page placement is a critical problem for memory-intensive applications running on a shared-memory multiprocessor with a non-uniform memory access (NUMA) architecture. State-of-the-art page placement mechanisms interleave pages evenly across NUMA nodes. However, this approach fails to maximize memory throughput in modern NUMA systems, which are characterised by asymmetric bandwidths and latencies and are sensitive to memory contention and interconnect congestion. We propose BWAP, a novel page placement mechanism based on asymmetric weighted page interleaving. BWAP combines an analytical performance model of the target NUMA system with on-line iterative tuning of the page distribution for a given memory-intensive application. Our experimental evaluation with representative memory-intensive workloads shows that BWAP performs up to 66% better than state-of-the-art techniques. These gains are particularly relevant when multiple co-located applications run in disjoint partitions of a large NUMA machine or when applications do not scale up to the total number of cores.
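Weighted interleaving can be sketched as a weighted round-robin over pages: consecutive pages go to different nodes, with each node receiving a share of pages proportional to its weight. BWAP derives such weights from its performance model and on-line tuning; the values below are made up for illustration.

```python
# Sketch of weighted page interleaving across NUMA nodes.
weights = [0.4, 0.3, 0.2, 0.1]        # hypothetical per-node weights
placed = [0] * len(weights)
placement = []
for page in range(20):
    # pick the node currently furthest below its target share
    node = max(range(len(weights)),
               key=lambda n: (page + 1) * weights[n] - placed[n])
    placed[node] += 1
    placement.append(node)
print(placement)   # ~40/30/20/10% of pages per node, interleaved page by page
```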
This paper studies the open problem of conformalized entry prediction in a row/column-exchangeable matrix. The matrix setting presents novel and unique challenges, yet little work exists on this topic. We carefully define the problem, differentiate it from closely related problems, and rigorously delineate the boundary between achievable and impossible goals. We then propose two practical algorithms: the first provides a fast emulation of full conformal prediction, while the second leverages the technique of algorithmic stability for acceleration. Both methods are computationally efficient and effectively safeguard coverage validity in the presence of arbitrary missingness patterns. Further, we quantify the impact of missingness on prediction accuracy and establish fundamental limit results. Empirical evidence from synthetic and real-world data sets corroborates the superior performance of our proposed methods.
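For readers unfamiliar with conformal prediction in this setting, the generic split-conformal recipe applied to matrix entries looks as follows. This is a baseline illustration with synthetic data, not the paper's full-conformal emulation or its stability-based algorithm.

```python
import numpy as np

# Split-conformal intervals for matrix entries: calibrate residual quantiles
# on held-out observed entries, then use them for all unobserved entries.
rng = np.random.default_rng(1)
M_hat = rng.normal(size=(50, 50))                     # any point estimate of the matrix
M_true = M_hat + rng.normal(scale=0.5, size=(50, 50)) # synthetic ground truth

cal_idx = (rng.integers(0, 50, 200), rng.integers(0, 50, 200))  # calibration entries
scores = np.abs(M_true[cal_idx] - M_hat[cal_idx])     # nonconformity scores
n, alpha = len(scores), 0.1
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)     # conformal quantile

coverage = np.mean(np.abs(M_true - M_hat) <= q)       # intervals M_hat +/- q
print(f"empirical coverage: {coverage:.2f}")          # about 0.90
```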
Artificial Intelligence (AI) is making a profound impact in almost every domain. One of the crucial factors contributing to this success has been access to an abundance of high-quality data for constructing machine learning models. Lately, as the role of data in artificial intelligence has been significantly magnified, concerns have arisen regarding its secure utilization, particularly in the context of unauthorized data usage. To mitigate data exploitation, unlearnable examples have been introduced to render data unexploitable. However, current unlearnable examples lack the generalization required for wide applicability. In this paper, we present a novel, generalizable data protection method that generates transferable unlearnable examples. To the best of our knowledge, this is the first solution that examines data privacy from the perspective of the data distribution. Through extensive experimentation, we substantiate the enhanced, generalizable protection capabilities of our proposed method.
Image segmentation remains an open problem, especially when the intensities of the objects of interest overlap due to intensity inhomogeneity (also known as bias field). To segment images with intensity inhomogeneities, we propose a bias correction embedded level set model in which Inhomogeneities are Estimated by Orthogonal Primary Functions (IEOPF). In the proposed model, the smoothly varying bias is estimated by a linear combination of a given set of orthogonal primary functions. An inhomogeneous intensity clustering energy is then defined, and membership functions of the clusters described by the level set function are introduced to rewrite the energy as the data term of the proposed model. As in popular level set methods, a regularization term and an arc length term are also included to regularize and smooth the level set function, respectively. The model is then extended to multichannel and multiphase patterns to segment colour images and images with multiple objects, respectively. It has been extensively tested on both synthetic and real images widely used in the literature, as well as on the public BrainWeb and IBSR datasets. Experimental results and comparisons with state-of-the-art methods demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy.
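The core representation, a bias field expanded in orthogonal basis functions, can be sketched in one dimension. The snippet below uses Legendre polynomials as one possible orthogonal family and fits the coefficients by least squares; the clustering and level-set energies of the model are not reproduced, and all data are synthetic.

```python
import numpy as np
from numpy.polynomial import legendre

# 1D illustration: the smoothly varying bias b(x) = sum_k w_k g_k(x) is a
# linear combination of orthogonal primary functions g_k (Legendre here).
x = np.linspace(-1, 1, 200)
true_bias = 1.0 + 0.3 * x + 0.2 * (3 * x**2 - 1) / 2    # smooth ground-truth bias
image = true_bias * 2.0 + np.random.default_rng(0).normal(scale=0.05, size=x.size)

G = legendre.legvander(x, 3)                  # basis matrix, columns g_0..g_3
w, *_ = np.linalg.lstsq(G, image, rcond=None) # coefficients w_k by least squares
bias_estimate = G @ w                         # recovered smooth bias profile
```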