We present efficient computational solutions to the problems of checking equality, performing multiplication, and computing minimal representatives of elements of free bands. A band is any semigroup satisfying the identity $x^2 \approx x$, and the free band $\operatorname{FB}(k)$ is the free object in the variety of $k$-generated bands. Radoszewski and Rytter developed a linear time algorithm for checking whether two words represent the same element of a free band. In this paper we describe an alternative linear time algorithm for the same problem. The algorithm we present utilises a representation of words as synchronous deterministic transducers that lend themselves to efficient (quadratic in the size of the alphabet) multiplication in the free band. This representation also provides a means of finding the short-lex least word representing a given free band element with quadratic complexity.
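As a point of reference for the equality problem, the classical recursive characterization states that two words are equal in the free band if and only if they have the same content, the same first and last "new" letters, and recursively equivalent prefix and suffix parts. The following naive Python check is a minimal sketch of this characterization (the function names are ours, and its running time is exponential in the alphabet size, unlike the linear time algorithms discussed above):

```python
def left_split(w):
    """Longest prefix of w whose content misses exactly one letter,
    plus the letter whose first occurrence completes the content."""
    seen, k = set(), len(set(w))
    for i, a in enumerate(w):
        seen.add(a)
        if len(seen) == k:
            return w[:i], a

def right_split(w):
    p, a = left_split(w[::-1])
    return p[::-1], a

def equal_in_free_band(u, v):
    """Naive recursive equality test in the free band."""
    if set(u) != set(v):
        return False
    if len(set(u)) <= 1:
        return True  # x^2 = x collapses every power of a single letter
    pu, au = left_split(u)
    pv, av = left_split(v)
    su, bu = right_split(u)
    sv, bv = right_split(v)
    return (au == av and bu == bv
            and equal_in_free_band(pu, pv)
            and equal_in_free_band(su, sv))
```

For instance, `equal_in_free_band("ab", "abab")` returns `True`, since $abab = (ab)^2 \approx ab$ in any band.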
The broad objective of this paper is to propose a mathematical model for studying the causes of wage inequality and to relate it to choices of consumption, the technologies of production, and the composition of labor in an economy. The paper constructs a Simple Closed Model, or SCM for short, for closed economies, in which the consumption and production parts are clearly separated and yet coupled. The model is established as a specialization of the Arrow-Debreu model, and its equilibria correspond directly to those of the general Arrow-Debreu model. The formulation allows us to identify the combinatorial data which link parameters of the economic system with its equilibria, in particular, the impact of consumer preferences on wages. The SCM also allows the formulation and explicit construction of the consumer choice game, where expressed utilities of various labor classes serve as strategies with total or relative wages as the pay-offs. We illustrate, through examples, the mathematical details of the consumer choice game. We show that consumer preferences, expressed through modified utility functions, do indeed percolate through the economy and influence not only prices but also production and wages. Thus, consumer choice may serve as an effective tool for wage redistribution.
Intelligent vehicles (IVs) have attracted wide attention thanks to their augmented convenience, safety advantages, and potential commercial value. Although a few autonomous driving unicorns assert that IVs will be commercially deployable by 2025, their deployment is still restricted to small-scale validation due to various issues, among which the safety, reliability, and generalization of planning methods are prominent concerns. Precise computation of control commands or trajectories by planning methods remains a prerequisite for IVs because perception is imperfect in complex environments, and these issues pose an obstacle to the successful commercialization of IVs. This paper aims to review state-of-the-art planning methods, including pipeline planning and end-to-end planning methods. For pipeline methods, a survey of algorithm selection is provided, along with a discussion of expansion and optimization mechanisms, whereas for end-to-end methods, the training approaches and verification scenarios of driving tasks are the points of concern. Experimental platforms are reviewed to help readers select suitable training and validation methods. Finally, the current challenges and future directions are discussed. The side-by-side comparison presented in this survey offers insights into the strengths and limitations of the reviewed methods and assists with system-level design choices.
We study linear contextual bandits in the misspecified setting, where the expected reward function can be approximated by a linear function class up to a bounded misspecification level $\zeta>0$. We propose an algorithm based on a novel data selection scheme, which only selects the contextual vectors with large uncertainty for online regression. We show that, when the misspecification level $\zeta$ is dominated by $\tilde O (\Delta / \sqrt{d})$, with $\Delta$ being the minimal sub-optimality gap and $d$ being the dimension of the contextual vectors, our algorithm enjoys the same gap-dependent regret bound $\tilde O (d^2/\Delta)$ as in the well-specified setting, up to logarithmic factors. In addition, we show that the existing algorithm SupLinUCB (Chu et al., 2011) can also achieve a gap-dependent constant regret bound without knowledge of the sub-optimality gap $\Delta$. Together with a lower bound adapted from Lattimore et al. (2020), our results suggest an interplay between the misspecification level and the sub-optimality gap: (1) the linear contextual bandit model is efficiently learnable when $\zeta \leq \tilde O(\Delta / \sqrt{d})$; and (2) it is not efficiently learnable when $\zeta \geq \tilde \Omega(\Delta / \sqrt{d})$. Experiments on both synthetic and real-world datasets corroborate our theoretical results.
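To convey the flavor of the data selection scheme, the sketch below feeds a context into the online ridge regression only when its uncertainty width under the Gram matrix of previously selected data exceeds a threshold. All names, the threshold rule, and the optimistic arm choice are illustrative simplifications, not the paper's exact algorithm:

```python
import numpy as np

def select_and_regress(contexts, reward_fn, T, lam=1.0, gamma=0.1, beta=1.0):
    """Toy sketch of uncertainty-based data selection for misspecified
    linear bandits.  contexts(t) returns the round-t arms as a (K, d) array;
    reward_fn(t, arm) returns the observed reward."""
    d = contexts(0).shape[1]
    A = lam * np.eye(d)                     # Gram matrix of *selected* data only
    b = np.zeros(d)
    for t in range(T):
        X = contexts(t)
        A_inv = np.linalg.inv(A)
        theta = A_inv @ b                   # ridge estimate on selected data
        width = np.sqrt(np.einsum('kd,de,ke->k', X, A_inv, X))
        arm = int(np.argmax(X @ theta + beta * width))   # optimistic choice
        r = reward_fn(t, arm)
        if width[arm] >= gamma:             # keep only high-uncertainty points,
            A += np.outer(X[arm], X[arm])   # so misspecification error cannot
            b += r * X[arm]                 # accumulate in the regression
    return np.linalg.solve(A, b)
```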
The weak maximum principle of the isoparametric finite element method is proved for the Poisson equation under the Dirichlet boundary condition in a (possibly concave) curvilinear polyhedral domain with edge openings smaller than $\pi$, a class which includes smooth domains and smooth deformations of convex polyhedra. The proof relies on the analysis of a dual elliptic problem with a discontinuous coefficient matrix arising from the isoparametric finite elements; consequently, the standard $H^2$ elliptic regularity required by the existing proofs of the weak maximum principle in the literature does not hold for this dual problem. To overcome this difficulty, we decompose the solution into a smooth part and a nonsmooth part, and estimate the two parts by $H^2$ and $W^{1,p}$ estimates, respectively. As an application of the weak maximum principle, we prove a maximum-norm best approximation property of the isoparametric finite element method for the Poisson equation in a curvilinear polyhedron. The proof contains non-trivial modifications of Schatz's argument due to the non-conformity of the isoparametric finite elements, which requires us to construct a globally smooth flow map from the curvilinear polyhedron to a perturbed larger domain on which the $W^{1,\infty}$ regularity estimate of the Poisson equation can be established uniformly with respect to the perturbation.
The matrix sensing problem is an important low-rank optimization problem that has found a wide range of applications, such as matrix completion, phase synchronization/retrieval, robust PCA, and power system state estimation. In this work, we focus on the general matrix sensing problem with linear measurements that are corrupted by random noise. We investigate the scenario where the search rank $r$ is equal to the true rank $r^*$ of the unknown ground truth (the exactly parametrized case), as well as the scenario where $r$ is greater than $r^*$ (the overparametrized case). We quantify the role of the restricted isometry property (RIP) in shaping the landscape of the non-convex factorized formulation and assisting with the success of local search algorithms. First, we develop a global guarantee on the maximum distance between an arbitrary local minimizer of the non-convex problem and the ground truth, under the assumption that the RIP constant is smaller than $1/(1+\sqrt{r^*/r})$. We then present a local guarantee for problems with an arbitrary RIP constant, which states that any local minimizer is either close to the ground truth or far away from it. More importantly, we prove that this noisy, overparametrized problem exhibits the strict saddle property, which leads to the global convergence of the perturbed gradient descent algorithm in polynomial time. The results of this work provide a comprehensive understanding of the geometric landscape of the matrix sensing problem in the noisy and overparametrized regime.
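For concreteness, the non-convex factorized formulation referred to above is $\min_U \frac{1}{2}\sum_i (\langle A_i, UU^\top \rangle - b_i)^2$ with $U \in \mathbb{R}^{n \times r}$. The sketch below runs plain gradient descent on this objective; the analyzed algorithm is perturbed gradient descent, whose saddle-escaping noise injections are omitted here for brevity:

```python
import numpy as np

def matrix_sensing_gd(As, b, n, r, steps=500, lr=1e-3, seed=0):
    """Gradient descent on f(U) = 1/2 * sum_i (<A_i, U U^T> - b_i)^2.
    As: list of (n, n) sensing matrices, b: noisy measurements,
    r: search rank (overparametrized whenever r > true rank)."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n, r))
    for _ in range(steps):
        M = U @ U.T
        residuals = np.array([np.sum(A * M) for A in As]) - b
        # gradient of f: sum_i residual_i * (A_i + A_i^T) U
        grad = sum(res * (A + A.T) for res, A in zip(residuals, As)) @ U
        U -= lr * grad
    return U
```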
Intelligent reflecting surface (IRS) is envisioned to become a key technology for the upcoming sixth-generation (6G) wireless system due to its potential for reaping high performance in a power-efficient and cost-efficient way. Given its disruptive capability and hardware constraints, the integration of IRS imposes some fundamental particularities on the coordination of multi-user signal transmission. Consequently, the conventional orthogonal and non-orthogonal multiple-access schemes are hard to apply directly because of the need to jointly optimize active beamforming at the base station and passive reflection at the IRS. In this paper, relying on an alternating optimization method, we develop novel schemes for efficient multiple access in IRS-aided multi-user multi-antenna systems. Achievable performance in terms of the sum spectral efficiency is theoretically analyzed. A comprehensive comparison of different schemes and configurations is conducted through Monte-Carlo simulations to clarify which scheme is favorable for this emerging 6G paradigm.
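To give a flavor of the alternating optimization, the toy single-user sketch below alternates between maximum-ratio transmission at the base station and closed-form phase alignment at the IRS; the multi-user multiple-access schemes developed in the paper are substantially more involved, and all names and the channel model here are illustrative:

```python
import numpy as np

def alternating_irs_beamforming(h_d, H, h_r, iters=20):
    """Alternate between active MRT beamforming and passive phase alignment.
    h_d: (M,) BS-user channel, H: (N, M) BS-IRS channel, h_r: (N,) IRS-user
    channel, so the received gain is (h_d^H + h_r^H diag(phi) H) w."""
    N = h_r.shape[0]
    phi = np.ones(N, dtype=complex)          # unit-modulus reflection coefficients
    for _ in range(iters):
        h_eff = h_d + H.conj().T @ (phi.conj() * h_r)   # effective channel
        w = h_eff / np.linalg.norm(h_eff)               # MRT beamformer
        direct = h_d.conj() @ w                         # direct-link term
        cascade = h_r.conj() * (H @ w)                  # per-element cascaded gain
        # align every reflected term with the direct term's phase
        phi = np.exp(1j * (np.angle(direct) - np.angle(cascade)))
    return w, phi
```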
We propose a new randomized method for solving systems of nonlinear equations, which can find sparse solutions or solutions under certain simple constraints. The scheme only takes gradients of component functions and uses Bregman projections onto the solution space of a Newton equation. In the special case of Euclidean projections, the method is known as the nonlinear Kaczmarz method. Furthermore, if the component functions are nonnegative, we are in the setting of optimization under the interpolation assumption and the method reduces to SGD with the recently proposed stochastic Polyak step size. For general Bregman projections, our method is a stochastic mirror descent with a novel adaptive step size. We prove that in the convex setting each iteration of our method results in a smaller Bregman distance to exact solutions than the standard Polyak step. Our generalization to Bregman projections comes at the price that a convex one-dimensional optimization problem needs to be solved in each iteration. This can typically be done with globalized Newton iterations. Convergence is proved in two classical settings of nonlinearity: for convex nonnegative functions and locally for functions which fulfill the tangential cone condition. Finally, we show examples in which the proposed method outperforms similar methods with the same memory requirements.
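In the Euclidean special case, one iteration of the scheme admits a closed form: sample a component $i$, then project onto the solution space of the linearized equation $f_i(x) + \langle \nabla f_i(x), y - x \rangle = 0$. The sketch below is a minimal illustration of this nonlinear Kaczmarz update (names are ours):

```python
import numpy as np

def nonlinear_kaczmarz(fs, grads, x0, steps=2000, seed=0):
    """At each step, project the iterate onto the solution space of the
    Newton equation of one randomly sampled component function."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        i = rng.integers(len(fs))
        g = grads[i](x)
        gn = g @ g
        if gn > 1e-15:                      # skip (near-)vanishing gradients
            x -= (fs[i](x) / gn) * g        # Polyak-type step length
    return x

# Example: intersect the unit circle with the line x0 = x1.
fs = [lambda x: x[0]**2 + x[1]**2 - 1, lambda x: x[0] - x[1]]
grads = [lambda x: np.array([2*x[0], 2*x[1]]), lambda x: np.array([1.0, -1.0])]
print(nonlinear_kaczmarz(fs, grads, [2.0, 0.5]))  # ~ (0.7071, 0.7071)
```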
Reconfigurable Intelligent Surfaces (RISs) constitute the key enabler for programmable electromagnetic propagation environments, and have lately been considered as a candidate physical-layer technology for the demanding connectivity, reliability, localization, and sustainability requirements of next-generation wireless networks. In this paper, we first present the deployment scenarios for RIS-enabled smart wireless environments that have been recently designed within the ongoing European Union Horizon 2020 RISE-6G project, as well as a network architecture integrating RISs with existing standardized interfaces. We identify various RIS deployment strategies and sketch the core architectural requirements in terms of RIS control and signaling, depending on the RIS hardware architectures and their respective capabilities. Furthermore, we introduce and discuss, with the aid of simulations and reflectarray measurements, two novel metrics that emerge in the context of RIS-empowered wireless systems: the RIS bandwidth of influence and area of influence. Their extensive investigation corroborates the need for careful deployment and planning of the RIS technology in future networks.
In this article, we propose a class of $L_q$-norm based U-statistics for a family of global testing problems related to high-dimensional data. This includes testing of the mean vector and its spatial sign, simultaneous testing of linear model coefficients, and testing of component-wise independence for high-dimensional observations, among others. Under the null hypothesis, we derive asymptotic normality and independence between $L_q$-norm based U-statistics for several values of $q$ under mild moment and cumulant conditions. A simple combination of two studentized $L_q$-based test statistics via their $p$-values is proposed and is shown to attain high power against alternatives of different sparsity. Our work is a substantial extension of He et al. (2021), which is mostly focused on mean and covariance testing, and we manage to provide a general treatment of asymptotic independence of $L_q$-norm based U-statistics for a wide class of kernels. To alleviate the computational burden, we introduce a variant of the proposed U-statistics by using monotone indices in the summation, resulting in a U-statistic with an asymmetric kernel. A dynamic programming method is introduced to reduce the computational cost from $O(n^{qr})$, which is required for the calculation of the full U-statistic, to $O(n^r)$, where $r$ is the order of the kernel. Numerical studies further corroborate the advantage of the proposed adaptive test as compared to some existing competitors.
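The dynamic programming idea is easiest to see for a product kernel with monotone indices, where prefix sums collapse the $r$-fold summation $\sum_{i_1 < \cdots < i_r} \prod_t g_t(X_{i_t})$ from brute-force enumeration to $O(nr)$ time. The sketch below is a toy instance of this recursion, not the paper's general procedure:

```python
import numpy as np

def monotone_product_sum(G):
    """Compute sum over i_1 < ... < i_r of prod_t G[t, i_t] in O(n*r) time.
    G is an (r, n) array whose row t holds g_t evaluated at each sample."""
    r, n = G.shape
    dp = G[0].astype(float).copy()          # tuples of length 1 ending at j
    for t in range(1, r):
        # prefix[j] = sum of dp[i] over all i < j
        prefix = np.concatenate(([0.0], np.cumsum(dp)[:-1]))
        dp = G[t] * prefix                  # extend each tuple by index j
    return dp.sum()
```

For $r = 2$ this returns $\sum_{i<j} g_1(X_i) g_2(X_j)$ using a single pass of cumulative sums instead of a double loop.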
The determinant lower bound of Lov\'{a}sz, Spencer, and Vesztergombi [European Journal of Combinatorics, 1986] is a powerful general way to prove lower bounds on the hereditary discrepancy of a set system. In their paper, Lov\'{a}sz, Spencer, and Vesztergombi asked if the hereditary discrepancy can also be bounded from above by a function of the determinant lower bound. This was answered in the negative by Hoffman, and the largest known multiplicative gap between the two quantities for a set system of $m$ subsets of a universe of size $n$ is on the order of $\max\{\log n, \sqrt{\log m}\}$. On the other hand, building on work of Matou\v{s}ek [Proceedings of the AMS, 2013], recently Jiang and Reis [SOSA, 2022] showed that this gap is always bounded up to constants by $\sqrt{\log(m)\log(n)}$. This is tight when $m$ is polynomial in $n$, but leaves open what happens for large $m$. We show that the bound of Jiang and Reis is tight for nearly the entire range of $m$. Our proof relies on a technique of amplifying discrepancy via taking Kronecker products, and on discrepancy lower bounds for a set system derived from the discrete Haar basis.
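For concreteness, the determinant lower bound of a matrix $A$ is $\mathrm{detlb}(A) = \max_k \max_B |\det B|^{1/k}$, where $B$ ranges over $k \times k$ submatrices of $A$, and Lov\'{a}sz, Spencer, and Vesztergombi proved $\mathrm{herdisc}(A) \geq \mathrm{detlb}(A)/2$. The brute-force sketch below evaluates it directly (exponential time, purely illustrative); the Kronecker-product amplification mentioned above corresponds to passing from $A$ to $A \otimes A$, i.e. `np.kron(A, A)`:

```python
import itertools
import numpy as np

def det_lower_bound(A):
    """Brute-force evaluation of the Lovasz-Spencer-Vesztergombi
    determinant lower bound: max over k and k x k submatrices B of
    |det B|^(1/k).  Applied to a set system's 0/1 incidence matrix."""
    m, n = A.shape
    best = 0.0
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = abs(np.linalg.det(A[np.ix_(rows, cols)]))
                best = max(best, d ** (1.0 / k))
    return best
```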