This paper considers reconfigurable intelligent surface (RIS)-assisted multi-user communications, where an RIS is used to assist the base station (BS) in serving multiple users. The RIS, consisting of passive reflecting elements, can manipulate the reflected direction of incoming electromagnetic waves by adjusting the phase shifts of its elements. An alternating optimization (AO)-based approach is commonly used to determine the phase shifts of the RIS elements. While AO-based approaches have demonstrated significant gains from the RIS, their complexity is quite high due to the coupled structure of the cascaded channel from the BS through the RIS to the user. In addition, the sub-wavelength structure of the RIS introduces spatial correlation that may cause strong interference among users. To handle severe multi-user interference over correlated channels, we consider adaptive user grouping, previously proposed for massive multi-input multi-output (MIMO) systems, and propose two low-complexity beamforming design methods, depending on whether the grouping result is taken into account. Simulation results demonstrate that the proposed methods achieve a higher sum rate than beamforming without user grouping. Moreover, the proposed methods perform similarly to the AO-based approach but with much lower complexity.
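To see why the cascaded channel couples the optimization variables, consider a toy single-user example (everything below, including the element count N and the i.i.d. channels g and h, is an illustrative assumption, not the paper's system model): the effective channel is a sum of per-element products, so each phase shift enters multiplicatively, and for a single user the optimum simply co-phases all terms. With multiple users, the same phases appear in every user's effective channel, which is what makes AO iterations expensive.

```python
# Toy illustration (hypothetical channels) of the cascaded BS-RIS-user channel:
# the effective channel is sum_n g_n * e^{j phi_n} * h_n, so every phase shift
# phi_n enters multiplicatively.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                    # number of RIS elements (assumed)
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # BS-RIS
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # RIS-user

def effective_channel(phi):
    # Cascaded channel for RIS phase shifts phi.
    return np.sum(g * np.exp(1j * phi) * h)

# For a single user, the optimal phases simply co-phase every term.
phi_opt = -np.angle(g * h)
print("random phases :", abs(effective_channel(rng.uniform(0, 2 * np.pi, N))) ** 2)
print("aligned phases:", abs(effective_channel(phi_opt)) ** 2)
```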
Extremely large-scale multiple-input-multiple-output (XL-MIMO) is a promising technology for future sixth-generation (6G) networks to achieve higher performance. In practice, various linear precoding schemes, such as zero-forcing (ZF) and regularized zero-forcing (RZF) precoding, are capable of achieving both large spectral efficiency (SE) and low bit error rate (BER) in traditional massive MIMO (mMIMO) systems. However, these methods are not efficient in extremely large-scale regimes due to inherent spatial non-stationarity and high computational complexity. To address this problem, we investigate a low-complexity precoding algorithm, namely randomized Kaczmarz (rKA), that takes the spatial non-stationary properties of XL-MIMO systems into account. Furthermore, we propose a novel mode of randomization, sampling-without-replacement rKA (SwoR-rKA), which enjoys a faster convergence speed than the rKA algorithm. We also derive a closed-form expression for the SE in downlink XL-MIMO systems that accounts for the interference between subarrays. Numerical results show that both the rKA and SwoR-rKA algorithms reduce complexity by 51.3% compared with the traditional RZF algorithm while achieving similar SE performance. More importantly, our algorithms effectively reduce the BER when the transmitter has imperfect channel estimation.
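As a rough illustration of the algorithmic idea (a sketch under our own assumptions, not the paper's exact algorithm or system model): randomized Kaczmarz solves the regularized Gram system behind RZF one equation at a time, and a sampling-without-replacement variant reshuffles the equation order each epoch instead of drawing rows i.i.d.

```python
# Minimal sketch of randomized Kaczmarz (rKA) applied to the RZF system
# (H H^H + lambda I) x = b, with an optional sampling-without-replacement
# mode that reshuffles the equations each epoch. Dimensions and lambda are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
K, M, lam = 8, 64, 0.1                     # users, BS antennas, regularization (assumed)
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
A = H @ H.conj().T + lam * np.eye(K)       # K x K regularized Gram matrix
b = np.zeros(K, complex); b[0] = 1.0       # right-hand side for one precoder column

def kaczmarz(A, b, sweeps=200, swor=False):
    x = np.zeros_like(b)
    rows = np.arange(len(b))
    for _ in range(sweeps):
        # SwoR: visit every row once per epoch; otherwise draw i.i.d.
        order = rng.permutation(rows) if swor else rng.choice(rows, size=len(b))
        for i in order:                    # project onto the hyperplane of row i
            a = A[i]
            x += (b[i] - a @ x) / np.linalg.norm(a) ** 2 * a.conj()
    return x

x_exact = np.linalg.solve(A, b)
for swor in (False, True):
    err = np.linalg.norm(kaczmarz(A, b, swor=swor) - x_exact)
    print("SwoR" if swor else "i.i.d.", "residual:", err)
```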
In the Colored Clustering problem, one is asked to cluster edge-colored (hyper-)graphs whose colors represent interaction types. More specifically, the goal is to select as many edges as possible without choosing two edges that share an endpoint and are colored differently. Equivalently, the goal can be described as assigning colors to the vertices in a way that fits the edge coloring as well as possible. As this problem is NP-hard, we build on previous work by studying its parameterized complexity. We give a $2^{\mathcal O(k)} \cdot n^{\mathcal O(1)}$-time algorithm, where $k$ is the number of edges to be selected and $n$ the number of vertices. We also prove the existence of a problem kernel of size $\mathcal O(k^{5/2})$, resolving an open problem posed in the literature. We further consider parameters smaller than $k$ and smaller than $r$, the number of edges that can be deleted; such parameters are obtained by taking the difference between $k$ or $r$ and some lower bound on these values. We give both algorithms and lower bounds for Colored Clustering under these parameterizations. Finally, we settle the parameterized complexity of Colored Clustering with respect to structural graph parameters by showing that it is $W[1]$-hard with respect to both vertex cover number and tree-cut width, but fixed-parameter tractable with respect to slim tree-cut width.
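For intuition about the problem itself (not about the FPT algorithms above), the equivalent vertex-coloring view can be checked with an exponential brute force on a tiny hypothetical instance; the value it computes is the parameter $k$:

```python
# Brute-force reference (exponential, for intuition only) of the
# vertex-coloring view of Colored Clustering: assign each vertex a color and
# count the edges whose color matches all of its endpoints.
from itertools import product

# Hypothetical edge-colored graph: (endpoints, color).
edges = [((0, 1), "red"), ((1, 2), "blue"), ((0, 2), "red"), ((2, 3), "blue")]
vertices = sorted({v for (e, _) in edges for v in e})
colors = {c for (_, c) in edges}

best = 0
for assignment in product(colors, repeat=len(vertices)):
    coloring = dict(zip(vertices, assignment))
    satisfied = sum(all(coloring[v] == c for v in e) for (e, c) in edges)
    best = max(best, satisfied)
print("max compatible edges:", best)   # corresponds to the parameter k
```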
We consider the lossy quantum source coding problem, where the task is to compress a given quantum source below its von Neumann entropy. Inspired by the duality connections between the rate-distortion and channel coding problems in the classical setting, we propose a new formulation for the lossy quantum source coding problem. This formulation differs from the existing quantum rate-distortion theory in two aspects. First, we require that the reconstruction of the compressed quantum source fulfill a global error constraint, as opposed to the sample-wise local error criterion used in the standard rate-distortion setting. Second, instead of a distortion observable, we employ the notion of a backward quantum channel, which we refer to as a "posterior reference map", to measure the reconstruction error. Using these, we characterize the asymptotic performance limit of the lossy quantum source coding problem in terms of the single-letter coherent information of the given posterior reference map. We demonstrate a protocol that encodes (at the specified rate) and decodes such that the reconstruction satisfies the global error criterion, thereby achieving the asymptotic performance limit. The protocol is constructed by decomposing coherent information as a difference of two Holevo information quantities, inspired by prior work on quantum communication problems. To further support the findings, we develop analogous formulations for the quantum-classical and classical variants and express their asymptotic performance limits in terms of single-letter mutual information quantities with respect to appropriately defined channels analogous to posterior reference maps. We also provide various examples for the three formulations and shed light on their connection to the standard rate-distortion formulation wherever possible.
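For reference, the standard definitions behind the decomposition mentioned above (our notation, not necessarily the paper's): the coherent information of a channel, and its well-known expression as a difference of two Holevo quantities of the channel and its complementary channel over any pure-state ensemble decomposition of the input.

```latex
% Coherent information of a channel N with input rho purified by |psi_rho>:
\[
  I_c(\rho, \mathcal N) \;=\; S\bigl(\mathcal N(\rho)\bigr)
    - S\bigl((\mathrm{id}\otimes\mathcal N)(\psi_\rho)\bigr).
\]
% For any pure-state ensemble \rho = \sum_x p_x \psi_x, with \mathcal N^c the
% complementary channel, it equals a difference of two Holevo quantities:
\[
  I_c(\rho, \mathcal N)
  \;=\; \chi\bigl(\{p_x, \mathcal N(\psi_x)\}\bigr)
    - \chi\bigl(\{p_x, \mathcal N^c(\psi_x)\}\bigr).
\]
```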
Probabilistic sensitivity analysis identifies the influential uncertain inputs to guide decision-making. We propose a general sensitivity framework with respect to the input distribution parameters that unifies a wide range of sensitivity measures, including information-theoretic metrics such as the Fisher information. The framework is derived analytically via a constrained maximization, and the sensitivity analysis is reformulated as an eigenvalue problem. Implementing the framework requires only two main steps, utilising the likelihood ratio/score function method: Monte Carlo sampling followed by the solution of an eigenvalue equation. The resulting eigenvectors provide the directions for simultaneous variation of the input parameters and indicate where perturbations influence the uncertainty the most. Not only is the framework conceptually simple, but numerical examples demonstrate that it also provides new sensitivity insights, such as the combined sensitivity of multiple correlated uncertainty metrics, robust sensitivity analysis with an entropic constraint, and approximation of deterministic sensitivities. Three different examples, ranging from a simple cantilever beam to an offshore marine riser, demonstrate the potential applications of the proposed sensitivity framework to applied mechanics problems.
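A minimal sketch of the two-step recipe, under our own toy assumptions (a scalar Gaussian input, an arbitrary response, and a Fisher-information constraint) rather than the paper's general framework: estimate the likelihood-ratio gradient and the Fisher matrix by Monte Carlo, then solve a generalized eigenvalue problem for the most influential parameter direction.

```python
# Toy score-function sensitivity: find the direction in (mu, sigma) that
# changes E[f(X)] the most per unit Fisher information. All choices below
# (distribution, response f, constraint) are illustrative assumptions.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
mu, sigma = 1.0, 0.5                      # input distribution parameters (assumed)
x = rng.normal(mu, sigma, 100_000)
f = x ** 3 + np.sin(x)                    # hypothetical model response

# Score functions of N(mu, sigma) with respect to (mu, sigma).
s = np.stack([(x - mu) / sigma ** 2,
              ((x - mu) ** 2 - sigma ** 2) / sigma ** 3])

grad = (f * s).mean(axis=1)               # likelihood-ratio gradient of E[f]
fisher = (s[:, None, :] * s[None, :, :]).mean(axis=2)

# Generalized eigenproblem: maximize (grad . d)^2 subject to d^T F d = 1.
vals, vecs = eigh(np.outer(grad, grad), fisher)
print("most influential direction (mu, sigma):", vecs[:, -1])
```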
The timely transportation of goods to customers is an essential component of economic activity. However, the heavy-duty diesel trucks that deliver goods contribute significantly to greenhouse gas emissions in many large metropolitan areas, including Los Angeles, New York, and San Francisco. To facilitate freight electrification, this paper proposes joint routing and charging (JRC) scheduling for electric trucks. The objective of the associated optimization problem is to minimize the cost of transportation, charging, and tardiness. Because each truck can traverse a large number of combinations of road segments, it also faces a large number of possible combinations of charging decisions and charging durations. The resulting mixed-integer linear programming (MILP) problem is extremely challenging because of its combinatorial complexity, even in the deterministic case. Therefore, a Level-Based Surrogate Lagrangian Relaxation method is employed to decompose the overall problem into truck subproblems that are significantly less complex and to coordinate them. In the coordination step, each truck subproblem is solved independently of the others based on charging cost, tardiness, and the values of the Lagrangian multipliers. Beyond guiding and coordinating the trucks, the multipliers also provide a basis for transparent and explainable decision-making by the trucks. Testing results demonstrate that even small instances cannot be solved by the off-the-shelf solver CPLEX within several days. The new method, on the other hand, obtains near-optimal solutions within a few minutes for small cases and within 30 minutes for large ones. Furthermore, the results show that as battery capacity increases, the total cost decreases significantly, and as the charging power increases, the number of trucks required decreases as well.
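The decompose-and-coordinate loop can be sketched on a toy problem (ours, not the paper's JRC model; the level-based stepsize rule is replaced here by a simple diminishing step): relax the coupling constraint with a Lagrangian multiplier, let each truck minimize its own priced cost independently, and update the multiplier from the constraint violation.

```python
# Stylized Lagrangian-relaxation coordination on a toy instance. Each truck
# picks one plan (cost, charger_slots); a shared charger capacity couples
# the trucks. All data below are hypothetical.
plans = {
    "truck1": [(4.0, 2), (6.0, 1), (9.0, 0)],
    "truck2": [(5.0, 2), (7.0, 1), (10.0, 0)],
}
capacity = 2          # shared charger slots (coupling constraint, assumed)

lam = 0.0
for it in range(50):
    # Each subproblem minimizes its own cost plus the priced resource usage,
    # independently of the other trucks.
    chosen = {t: min(p, key=lambda cs: cs[0] + lam * cs[1]) for t, p in plans.items()}
    violation = sum(c[1] for c in chosen.values()) - capacity
    lam = max(0.0, lam + 1.0 / (1 + it) * violation)   # simple diminishing step
print("multiplier:", lam, "plans:", chosen)
```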
Non-orthogonal multiple access (NOMA) has become a promising technology for next-generation wireless communication systems due to its capability to provide access for multiple users on the same resource. In this paper, we consider an uplink power-domain NOMA system aided by a reconfigurable intelligent surface (RIS) in the presence of a jammer that aims to maximize its interference at the base station (BS) uplink receiver. We consider two kinds of RISs: a regular RIS whose elements can only change the phase of the incoming wave, and an RIS whose elements can also attenuate the incoming wave. Our aim is to minimize the total power transmitted by the user terminals under quality-of-service constraints by controlling the propagation from both the users and the jammer to the BS with the help of the RIS. The resulting objective function and constraints are both non-linear and non-convex, so we address the problem using numerical optimization. Our numerical results show that the RIS can dramatically reduce the required per-user transmit power in an interference-limited scenario.
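A small numerical sketch of this kind of problem (toy dimensions, idealized RIS-only channels, and a fixed jammer treated as extra noise; not the paper's exact model or solver): jointly optimize user powers and RIS phases to minimize total uplink power under SINR constraints with a general-purpose solver.

```python
# Toy power minimization with SINR constraints via SLSQP. All dimensions,
# channels, and targets below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
K, N = 2, 16                              # users, RIS elements (assumed)
gamma, noise = 2.0, 1.0                   # SINR target and noise power (assumed)

cn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
g = cn(N)                                 # RIS -> BS
r = cn(N, K)                              # users -> RIS
rj = cn(N)                                # jammer -> RIS
pj = 5.0                                  # jammer transmit power (assumed)

def sinr(z, k):
    p, phi = z[:K], z[K:]
    h = (g * np.exp(1j * phi)) @ r        # effective user channels (length K)
    hj = (g * np.exp(1j * phi)) @ rj      # effective jammer channel
    interf = noise + pj * abs(hj) ** 2 + sum(p[i] * abs(h[i]) ** 2
                                             for i in range(K) if i != k)
    return p[k] * abs(h[k]) ** 2 / interf

z0 = np.concatenate([np.ones(K), np.zeros(N)])
cons = [{"type": "ineq", "fun": (lambda z, k=k: sinr(z, k) - gamma)} for k in range(K)]
res = minimize(lambda z: z[:K].sum(), z0, constraints=cons,
               bounds=[(1e-6, None)] * K + [(None, None)] * N, method="SLSQP")
print("total power:", res.fun, "SINRs:", [sinr(res.x, k) for k in range(K)])
```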
The automotive industry is experiencing a transition from assisted to highly automated driving. New concepts for the validation of Automated Driving Systems (ADS) include, among others, a shift from a "technology-based" approach to a "scenario-based" assessment. The safety validation and type approval of ADS are seen as the biggest challenges for the automotive industry today. Despite a variety of existing white papers, standardization activities, and regulatory approaches, manufacturers still struggle to select best practices that remain aligned with their Safety Management System and Safety Culture. A step forward would be to implement a harmonized global safety assurance scheme that is compliant with relevant regulations, laws, and standards, and that reflects local rules. Today, many communities (regulatory bodies, local authorities, industrial stakeholders) are working on proof-of-concept frameworks for Safety Argumentation as an answer to this problem. Unfortunately, there is still no consensus on one definitive methodology or a set of safety metrics to measure ADS safety. The objective of this summary report is to facilitate a comprehensive review and analysis of the literature concerning available methods and approaches for vehicle safety, engineering frameworks, processes of scenario-based evaluation, and vendor- and technology-neutral Safety Argumentation approaches and tools.
Computing the distance between two time series is time-consuming for any elastic distance function that accounts for misalignments. Among these functions, DTW is the most prominent. However, a recent extensive evaluation has shown that the move-split-merge (MSM) metric is superior to DTW regarding the accuracy of the 1-NN classifier. Unfortunately, the running time of the standard dynamic programming algorithm for the MSM distance is $\Omega(n^2)$, where $n$ is the length of the longest time series. In this paper, we provide approaches to reducing the cost of MSM distance computations by using lower and upper bounds to prune paths early in the underlying dynamic programming table. For the case where one time series is constant, we present a linear-time algorithm. In addition, we propose new linear-time heuristics and adapt heuristics known from DTW to the computation of the MSM distance. One heuristic employs the metric property of MSM together with the aforementioned linear-time algorithm. Our experimental studies demonstrate substantial speed-ups of our approaches compared to previous MSM algorithms. In particular, the running time for MSM is faster than a state-of-the-art DTW distance computation for a majority of the popular UCR data sets.
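For concreteness, the $\Omega(n^2)$ baseline that the pruning techniques accelerate is the standard dynamic program for MSM (following Stefan, Athitsos, and Das; the move cost $c$ and the example series below are arbitrary choices):

```python
# Plain O(n*m) dynamic-programming computation of the move-split-merge (MSM)
# distance, shown as the unpruned baseline.
def msm_cost(new, prev, other, c):
    # Split/merge cost: c if `new` lies between its two neighbors, otherwise
    # c plus the distance to the nearer neighbor.
    if prev <= new <= other or prev >= new >= other:
        return c
    return c + min(abs(new - prev), abs(new - other))

def msm(x, y, c=1.0):
    n, m = len(x), len(y)
    D = [[0.0] * m for _ in range(n)]
    D[0][0] = abs(x[0] - y[0])
    for i in range(1, n):                  # first column: edits within x
        D[i][0] = D[i - 1][0] + msm_cost(x[i], x[i - 1], y[0], c)
    for j in range(1, m):                  # first row: edits within y
        D[0][j] = D[0][j - 1] + msm_cost(y[j], x[0], y[j - 1], c)
    for i in range(1, n):
        for j in range(1, m):
            D[i][j] = min(D[i - 1][j - 1] + abs(x[i] - y[j]),              # move
                          D[i - 1][j] + msm_cost(x[i], x[i - 1], y[j], c), # split/merge in x
                          D[i][j - 1] + msm_cost(y[j], x[i], y[j - 1], c)) # split/merge in y
    return D[n - 1][m - 1]

print(msm([1, 2, 3, 4], [1, 3, 3, 5]))
```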
We present a novel solver technique for the anisotropic heat flux equation, aimed at the high levels of anisotropy seen in magnetic confinement fusion plasmas. Such problems pose two major challenges: (i) discretization accuracy and (ii) efficient implicit linear solvers. We address both challenges simultaneously by constructing a new finite element discretization with excellent accuracy properties, tailored to a novel solver approach based on algebraic multigrid (AMG) methods designed for advective operators. We pose the problem in a mixed formulation, introducing the heat flux as an auxiliary variable and discretizing both the temperature and the auxiliary field in a discontinuous Galerkin space. The resulting block matrix system is then reordered and solved using an approach in which two advection operators are inverted using AMG solvers based on approximate ideal restriction (AIR), which is particularly efficient for upwind discontinuous Galerkin discretizations of advection. To ensure that the advection operators are non-singular, in this paper we restrict ourselves to open (acyclic) magnetic field lines. We demonstrate the proposed discretization's superior accuracy over other discretizations of anisotropic heat flux, achieving errors $1000\times$ smaller for an anisotropy ratio of $10^9$, while also demonstrating fast convergence of the proposed iterative solver in highly anisotropic regimes where other diffusion-based AMG methods fail.
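One plausible reading of the mixed formulation (our notation, simplified to the steady, purely parallel case; not necessarily the paper's exact system) is that introducing the parallel heat flux as an auxiliary variable splits the second-order anisotropic diffusion operator into two first-order advective operators along the field direction, which is what makes AIR-based AMG applicable:

```latex
% Steady anisotropic diffusion along the unit field direction b:
%   -\nabla\cdot(\kappa_\parallel\,\mathbf b\mathbf b^{\mathsf T}\nabla T) = f.
% Introducing the parallel flux q as an auxiliary variable gives the
% first-order system
\[
  q \;=\; -\kappa_\parallel\,\mathbf b\cdot\nabla T,
  \qquad
  \nabla\cdot(q\,\mathbf b) \;=\; f,
\]
% in which each equation involves only the advective operator
% \mathbf b\cdot\nabla (plus lower-order terms), so both blocks can be
% inverted with AMG based on approximate ideal restriction (AIR).
```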
In this paper, we propose a scheme for the problem of cache-aided multi-user private information retrieval with small caches, in which $K$ users are connected to $S$ non-colluding databases via shared links. Each database contains a set of $N$ files, and each user has a dedicated cache with capacity equivalent to $M$ files. All users want to retrieve a file without revealing their demands to the databases. During off-peak hours, all users fill their caches; when required, users demand their desired files by cooperatively generating query sets for each database. After receiving the transmissions from the databases, every user should be able to recover its desired file from the transmitted data and its cache contents. This problem was studied in [X. Zhang, K. Wan, H. Sun, M. Ji and G. Caire, "Fundamental limits of cache-aided multiuser private information retrieval," IEEE Trans. Commun., 2021], in which the authors proposed a product design scheme. In this paper, we propose a scheme that achieves a better rate than the product design scheme for a particular value of $M$. We take a slightly different approach in the placement phase: instead of a database filling the caches of all users directly, a database broadcasts cache content to all users on a shared link, and the users then jointly decide which part of the broadcast content each user stores in its cache. This variation makes it possible to maintain the privacy constraint at a reduced rate.
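The placement twist can be visualized with a trivial toy (the numbers and the slicing rule are ours; the actual scheme uses coded content and privacy-preserving queries): the database broadcasts the cache content once on the shared link, and the users jointly agree on which disjoint part each of them stores.

```python
# Toy illustration of broadcast-based placement: one shared-link broadcast,
# after which the users jointly partition the content among their caches.
# The content symbols and slicing rule are hypothetical.
content = list(range(12))        # broadcast cache-content symbols (hypothetical)
K = 4                            # number of users

# Users "decide unitedly": user k keeps the k-th slice of the broadcast.
caches = {k: content[k * len(content) // K:(k + 1) * len(content) // K]
          for k in range(K)}
for k, cache in caches.items():
    print(f"user {k} stores {cache}")
```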