ISAC is recognized as a promising technology for next-generation wireless networks, providing significant performance gains over individual S&C systems via the shared use of wireless resources. The characterization of the S&C performance tradeoff is at the core of the theoretical foundation of ISAC. In this paper, we consider a point-to-point ISAC model under vector Gaussian channels, and propose to use the CRB-rate region as a basic tool for depicting the fundamental S&C tradeoff. In particular, we consider the scenario where a unified ISAC waveform is emitted from a dual-functional ISAC Tx, which simultaneously performs S&C tasks with a communication Rx and a sensing Rx. To perform both S&C tasks, the ISAC waveform is required to be random in order to convey communication information, with realizations perfectly known at both the ISAC Tx and the sensing Rx, where they serve as a reference sensing signal as in typical radar systems. As the main contribution of this paper, we characterize the S&C performance at the two corner points of the CRB-rate region: $P_{SC}$, the maximum achievable rate under the minimum-CRB constraint, and $P_{CS}$, the minimum achievable CRB under the maximum-rate constraint. In particular, we derive the high-SNR capacity at $P_{SC}$, and provide lower and upper bounds for the sensing CRB at $P_{CS}$. We show that these two points can be achieved by conventional Gaussian signaling and by a novel strategy relying on the uniform distribution over the Stiefel manifold, respectively. Building on this analysis, we provide an outer bound and various inner bounds for the achievable CRB-rate region. Our main results reveal a two-fold tradeoff in ISAC systems, consisting of the subspace tradeoff (ST) and the deterministic-random tradeoff (DRT), which depend on the resource allocation and the data modulation scheme employed for S&C, respectively.
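In shorthand (ours, not necessarily the paper's notation), with $\mathrm{CRB}(p_X)$ and $R(p_X)$ denoting the sensing CRB and the communication rate induced by an input distribution $p_X$, and with $\epsilon_{\min}=\min_{p_X}\mathrm{CRB}(p_X)$ and $R_{\max}=\max_{p_X}R(p_X)$, the region and its two corner points can be sketched as

$$
\mathcal{C} \;=\; \bigl\{(\epsilon, R) \,:\, \exists\, p_X \ \text{s.t.}\ \epsilon \ge \mathrm{CRB}(p_X),\ R \le R(p_X)\bigr\},
$$

$$
P_{SC} \;=\; \Bigl(\epsilon_{\min},\ \max_{p_X:\, \mathrm{CRB}(p_X)=\epsilon_{\min}} R(p_X)\Bigr), \qquad
P_{CS} \;=\; \Bigl(\min_{p_X:\, R(p_X)=R_{\max}} \mathrm{CRB}(p_X),\ R_{\max}\Bigr).
$$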
Let $F$ be a finite field, let $f$ be a function from $F$ to $F$, and let $a$ be a nonzero element of $F$. The discrete derivative of $f$ in direction $a$ is $\Delta_a f \colon F \to F$ with $(\Delta_a f)(x)=f(x+a)-f(x)$. The differential spectrum of $f$ is the multiset of cardinalities of all the fibers of all the derivatives $\Delta_a f$ as $a$ runs through $F^*$. The function $f$ is almost perfect nonlinear (APN) if the largest cardinality in the differential spectrum is $2$. Almost perfect nonlinear functions are of interest as cryptographic primitives. If $d$ is a positive integer, the power function over $F$ with exponent $d$ is the function $f \colon F \to F$ with $f(x)=x^d$ for every $x \in F$. There is a small number of known infinite families of APN power functions. In this paper, we re-express the exponents for one such family in a more convenient form. This enables us to give the differential spectrum and, even more, to determine the sizes of individual fibers of derivatives.
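To make the definitions concrete, here is a small self-contained sketch (ours, not from the paper) that computes the differential spectrum of a power function over a prime field $GF(p)$; general finite fields would additionally require extension-field arithmetic:

```python
from collections import Counter

def differential_spectrum(p, d):
    """Differential spectrum of the power function f(x) = x^d over the prime
    field GF(p) (illustrative sketch; the paper treats general finite fields)."""
    f = [pow(x, d, p) for x in range(p)]
    spectrum = Counter()
    for a in range(1, p):                    # a runs through F*
        # Fiber sizes of the derivative (Delta_a f)(x) = f(x+a) - f(x).
        fibers = Counter((f[(x + a) % p] - f[x]) % p for x in range(p))
        spectrum.update(fibers.values())     # sizes of nonempty fibers
        spectrum[0] += p - len(fibers)       # values never attained by Delta_a f
    return spectrum

# f is APN iff the largest fiber cardinality is 2; e.g. x^3 over GF(7):
spec = differential_spectrum(7, 3)
print(dict(spec), "APN:", max(spec) == 2)
```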
Motivated by Carbon Emissions Trading Schemes, Treasury Auctions, and Procurement Auctions, which all involve the auctioning of multiple homogeneous units, we consider the problem of learning how to bid in repeated multi-unit pay-as-bid auctions. In each of these auctions, a large number of (identical) items are allocated to the largest submitted bids, and each winning bid pays a price equal to the bid itself. The problem of learning how to bid in pay-as-bid auctions is challenging due to the combinatorial nature of the action space. We overcome this challenge by first focusing on the offline setting, where the bidder optimizes their vector of bids while only having access to the bids previously submitted by other bidders. We show that the optimal solution to the offline problem can be obtained using a polynomial-time dynamic programming (DP) scheme. We leverage the structure of the DP scheme to design online learning algorithms with polynomial time and space complexity under the full information and bandit feedback settings, achieving regret upper bounds of $O(M\sqrt{T\log |\mathcal{B}|})$ and $O(M\sqrt{|\mathcal{B}|T\log |\mathcal{B}|})$, respectively, where $M$ is the number of units demanded by the bidder, $T$ is the total number of auctions, and $|\mathcal{B}|$ is the size of the discretized bid space. We accompany these results with a regret lower bound that matches the linear dependence on $M$. Our numerical results suggest that when all agents behave according to our proposed no-regret learning algorithms, the resulting market dynamics mostly converge to a welfare-maximizing equilibrium in which bidders submit uniform bids. Lastly, our experiments demonstrate that the pay-as-bid auction consistently generates significantly higher revenue than its popular alternative, the uniform price auction.
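The abstract does not spell out the DP, but under a standard pay-as-bid model it can be reconstructed as follows: with $K$ units sold and the top $K$ pooled bids winning, an optimal bid vector can be taken non-increasing, the $m$-th bid then wins exactly when it beats the $(K-m+1)$-th highest competing bid, so utility separates across units and a DP over (unit, grid level) enforces monotonicity. The sketch below is our illustrative reconstruction, not necessarily the paper's exact scheme; `value_per_unit`, the synthetic data, and strict-inequality tie-breaking are assumptions.

```python
import numpy as np

def offline_best_bids(comp_bids, bid_grid, M, K, value_per_unit):
    """Offline DP for repeated pay-as-bid auctions (illustrative reconstruction).

    comp_bids:  (T, n) array of competing bids in T past auctions (n >= K >= M)
    bid_grid:   sorted 1-D array, the discretized bid space B
    Returns the maximal average per-auction utility of a non-increasing bid
    vector b_1 >= ... >= b_M, where each winning bid b pays b itself.
    """
    sorted_desc = -np.sort(-comp_bids, axis=1)
    B = len(bid_grid)
    util = np.empty((M, B))
    for m in range(M):
        # Bid m+1 wins iff it beats the (K-m)-th highest competing bid.
        thresh = sorted_desc[:, K - m - 1]
        wins = bid_grid[None, :] > thresh[:, None]            # (T, B)
        util[m] = (wins * (value_per_unit - bid_grid[None, :])).mean(axis=0)

    # DP over units, enforcing b_{m+1} <= b_m via a running prefix maximum.
    dp = util[M - 1].copy()                                   # last unit
    for m in range(M - 2, -1, -1):
        dp = util[m] + np.maximum.accumulate(dp)
    return dp.max()

rng = np.random.default_rng(0)
comp = rng.uniform(0.0, 1.0, size=(500, 10))                  # synthetic rivals
print(offline_best_bids(comp, np.linspace(0, 1, 101), M=3, K=5,
                        value_per_unit=1.0))
```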
Decentralized architecture offers a robust and flexible structure for online platforms, since centralized moderation and computation can be easily disrupted by targeted attacks. However, a platform offering a decentralized architecture does not guarantee that users will use it in a decentralized way, and measuring the centralization of socio-technical networks is not an easy task. In this paper we introduce a method of characterizing community influence in terms of how many edges between communities would be disrupted by a community's removal. Our approach provides a careful definition of "centralization" appropriate to bipartite user-community socio-technical networks, and demonstrates the inadequacy of more simplistic methods for interrogating centralization, such as examining the distribution of community sizes. We use this method to compare the structure of multiple socio-technical platforms -- Mastodon, git code hosting servers, BitChute, Usenet, and Voat -- and find a range of structures, from interconnected but decentralized git servers to an effectively centralized use of Mastodon servers, as well as the multiscale hybrid network structure of disconnected Voat subverses. As the ecosystem of socio-technical platforms diversifies, it becomes critical not to focus solely on the underlying technologies but also to consider the structure of how users interact through the technical infrastructure.
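As a minimal sketch of this disruption measure (our illustrative reading of the definition above, not the paper's exact metric), one can project the bipartite user-community graph onto communities and count the inter-community edges incident to each community; networkx makes this a few lines:

```python
import networkx as nx

def community_disruption(memberships):
    """memberships: iterable of (user, community) pairs from a bipartite
    socio-technical network. Returns, per community, how many inter-community
    edges (communities linked by a shared user) its removal would disrupt."""
    B = nx.Graph()
    users = {u for u, _ in memberships}
    comms = {c for _, c in memberships}
    B.add_nodes_from(users, bipartite=0)
    B.add_nodes_from(comms, bipartite=1)
    B.add_edges_from(memberships)

    # One-mode projection: communities are adjacent iff they share a user.
    P = nx.bipartite.projected_graph(B, comms)
    # Removing community c disrupts every projected edge incident to c.
    return {c: P.degree(c) for c in comms}

edges = [("alice", "srv1"), ("alice", "srv2"), ("bob", "srv2"),
         ("bob", "srv3"), ("carol", "srv1"), ("carol", "srv3")]
print(community_disruption(edges))
```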
Splines over triangulations and splines over quadrangulations (tensor product splines) are two common ways to extend bivariate polynomials to splines. Combining the two approaches leads to splines defined over mixed triangle and quadrilateral meshes via the isogeometric approach. Mixed meshes are especially useful for representing complicated geometries obtained, e.g., from trimming. As (bi-)linearly parameterized mesh elements are not flexible enough to cover smooth domains, we focus in this work on the case of planar mixed meshes parameterized by (bi-)quadratic geometry mappings. In particular, we study in detail the space of $C^1$-smooth isogeometric spline functions of general polynomial degree over two such mixed mesh elements. We present the theoretical framework to analyze the smoothness conditions over the common interface for all possible configurations of mesh elements. This comprises the investigation of the dimension as well as the construction of a basis of the corresponding $C^1$-smooth isogeometric spline space over the domain described by two elements. Several examples of interest are presented in detail.
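To fix ideas, the $C^1$ condition across a common interface can be written as follows (our paraphrase in generic isogeometric notation; the paper's precise conditions for (bi-)quadratic geometry mappings are more detailed). Let $F^{(1)}, F^{(2)}$ be the geometry mappings of the two elements sharing an interface $\Gamma$, and let $f^{(i)}$ be the parametric spline on element $i$. The resulting function is $C^1$-smooth exactly when the pushed-forward functions agree up to first order on $\Gamma$:

$$
f^{(1)} \circ \bigl(F^{(1)}\bigr)^{-1} = f^{(2)} \circ \bigl(F^{(2)}\bigr)^{-1}
\quad \text{and} \quad
\nabla \bigl( f^{(1)} \circ (F^{(1)})^{-1} \bigr) = \nabla \bigl( f^{(2)} \circ (F^{(2)})^{-1} \bigr)
\quad \text{on } \Gamma .
$$

By the chain rule, these become linear relations between the parametric derivatives of $f^{(1)}$ and $f^{(2)}$ along the interface, with coefficient functions determined by the geometry mappings; analyzing these relations for all element configurations is what yields the dimension counts and basis constructions.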
We consider the scalar conservation law in one space dimension with a genuinely nonlinear flux. We assume that an appropriate velocity function, depending on the entropy solution of the conservation law, is given for the constituent particles, and study their corresponding trajectories under the flow. The differential equation that each of these trajectories satisfies depends on the entropy solution of the conservation law, which is typically discontinuous in both the time and space variables. The existence and uniqueness of these trajectories are guaranteed by the Filippov theory of differential equations. We show that such a Filippov solution is compatible with the front tracking and vanishing viscosity approximations, in the sense that the approximate trajectories given by either of these methods converge uniformly to the trajectories corresponding to the entropy solution of the scalar conservation law. For certain classes of flux functions, illustrated by traffic flow, we prove the H\"older continuity of the particle trajectories with respect to the initial field or the flux function. We then consider the inverse problem of recovering the initial field or the flux function of the scalar conservation law from discrete pointwise measurements of the particle trajectories. We show that the above continuity properties translate to the stability of the Bayesian regularised solutions of these inverse problems with respect to appropriate approximations of the forward map. We also discuss the limitations of the situation where the same inverse problems are considered with pointwise observations made from the entropy solution itself.
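As a purely numerical illustration of the setup (not the paper's front-tracking or Filippov construction), one can approximate the entropy solution of the LWR traffic model $\rho_t + (\rho(1-\rho))_x = 0$ with a Godunov scheme and integrate a car's trajectory $\dot X = v(\rho(X,t))$, $v(\rho) = 1-\rho$, by sampling the cell containing $X$:

```python
import numpy as np

# LWR traffic model: rho_t + (rho*(1-rho))_x = 0, particle speed v = 1 - rho.
f = lambda r: r * (1.0 - r)

def godunov_flux(rl, rr):
    # Exact Riemann flux for the concave flux f(r) = r(1-r), maximal at r = 1/2.
    if rl <= rr:                        # minimize f over [rl, rr]
        return min(f(rl), f(rr))
    return f(0.5) if rl > 0.5 > rr else max(f(rl), f(rr))

def simulate(rho0, L=1.0, N=400, T=0.6, x0=0.1):
    dx = L / N
    x = (np.arange(N) + 0.5) * dx
    rho = rho0(x)
    dt = 0.4 * dx                       # CFL: |f'(rho)| <= 1 on [0, 1]
    X, t, path = x0, 0.0, [(0.0, x0)]
    while t < T:
        flux = np.array([godunov_flux(rho[i], rho[i + 1]) for i in range(N - 1)])
        rho[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
        # Trajectory step: sample the density in the cell containing X.
        i = min(int(X / dx), N - 1)
        X += dt * (1.0 - rho[i])        # dX/dt = v(rho(X, t))
        t += dt
        path.append((t, X))
    return rho, np.array(path)

# Riemann-type initial data: a stationary queue (shock) ahead of the car.
rho, path = simulate(lambda x: np.where(x < 0.5, 0.2, 0.8))
print(path[-1])                         # the car slows down after the shock
```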
A growing number of central authorities use assignment mechanisms to allocate students to schools in a way that reflects student preferences and school priorities. However, most real-world mechanisms give students an incentive to be strategic and misreport their preferences. In this paper, we provide an identification approach for causal effects of school assignment on future outcomes that accounts for strategic misreporting. Misreporting may invalidate existing point-identification approaches, so we derive sharp bounds for causal effects that are robust to strategic behavior. Our approach applies to any mechanism as long as there exist placement scores and cutoffs that characterize that mechanism's allocation rule. We use data from a deferred acceptance mechanism that assigns students to more than 1,000 university-major combinations in Chile. Students behave strategically because the mechanism in Chile constrains the preference lists that students submit to at most eight options. Our methodology takes this into account and partially identifies the effect of changes in school assignment on various graduation outcomes.
In this paper, we consider an intelligent reflecting surface (IRS) in a non-orthogonal multiple access (NOMA)-aided Integrated Sensing and Multicast-Unicast Communication (ISMUC) system, where the multicast signal is used for both sensing and communications while the unicast signal is used only for communications. Our goal is to determine whether the IRS improves the performance of the NOMA-ISMUC system under the imperfect/perfect successive interference cancellation (SIC) scenario. Towards this end, we formulate a non-convex problem to maximize the unicast rate while ensuring the minimum target illumination power and multicast rate. To tackle this problem, we employ the Dinkelbach method to transform the original problem into an equivalent one, which is then solved via an alternating optimization algorithm and semidefinite relaxation (SDR) with sequential rank-one constraint relaxation (SROCR). Based on this, an iterative algorithm is devised to obtain a near-optimal solution. Computer simulations verify the quick convergence of the devised iterative algorithm and provide insightful results. Compared to NOMA-ISMUC without an IRS, IRS-aided NOMA-ISMUC achieves a higher rate with perfect SIC, and almost the same rate in the case of imperfect SIC.
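The Dinkelbach method itself is standard for fractional programs: to maximize $N(x)/D(x)$ with $D(x) > 0$, repeatedly solve $\max_x N(x) - \lambda D(x)$ and update $\lambda$ to the attained ratio. Below is a generic sketch; the paper's subproblem is the non-convex IRS/beamforming design handled via alternating optimization, SDR, and SROCR, for which a toy scalar objective and scipy's bounded search stand in here.

```python
import math
from scipy.optimize import minimize_scalar

def dinkelbach(N, D, solve_sub, lam=0.0, tol=1e-8, max_iter=50):
    """Generic Dinkelbach iteration for max_x N(x)/D(x) with D > 0.
    solve_sub(lam) returns an (approximate) argmax of N(x) - lam*D(x)."""
    x = solve_sub(lam)
    for _ in range(max_iter):
        x = solve_sub(lam)
        if N(x) - lam * D(x) < tol:     # F(lam) = 0 at the optimal ratio
            break
        lam = N(x) / D(x)
    return x, lam

# Toy stand-in for the beamforming subproblem: a rate-like ratio on [0, 10].
N = lambda x: math.log1p(x)             # "rate" numerator
D = lambda x: 1.0 + 0.5 * x             # "cost" denominator
solve_sub = lambda lam: minimize_scalar(
    lambda x: -(N(x) - lam * D(x)), bounds=(0.0, 10.0), method="bounded").x
print(dinkelbach(N, D, solve_sub))
```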
The field of trajectory forecasting has grown significantly in recent years, partially owing to the release of numerous large-scale, real-world human trajectory datasets for autonomous vehicles (AVs) and pedestrian motion tracking. While such datasets have been a boon for the community, they each use custom and unique data formats and APIs, making it cumbersome for researchers to train and evaluate methods across multiple datasets. To remedy this, we present trajdata: a unified interface to multiple human trajectory datasets. At its core, trajdata provides a simple, uniform, and efficient representation and API for trajectory and map data. As a demonstration of its capabilities, in this work we conduct a comprehensive empirical evaluation of existing trajectory datasets, providing users with a rich understanding of the data underpinning much of current pedestrian and AV motion forecasting research, and proposing suggestions for future datasets from these insights. trajdata is permissively licensed (Apache 2.0) and can be accessed online at https://github.com/NVlabs/trajdata.
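For flavor, typical usage looks roughly like the following (a sketch in the style of the project's README; the dataset tag "nusc_mini", the path, and the exact argument names are assumptions that may differ across trajdata versions):

```python
from torch.utils.data import DataLoader
from trajdata import UnifiedDataset

# Assumed configuration: "nusc_mini" and the data_dirs path are placeholders;
# consult the trajdata README for the supported dataset tags and layout.
dataset = UnifiedDataset(
    desired_data=["nusc_mini"],
    data_dirs={"nusc_mini": "~/datasets/nuScenes"},
)

loader = DataLoader(
    dataset,
    batch_size=64,
    collate_fn=dataset.get_collate_fn(),  # batches trajdata's agent-centric samples
)

for batch in loader:
    ...  # uniform access to trajectories (and maps) across source datasets
```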
Blockchain is an emerging decentralized data collection, sharing, and storage technology, which has provided transparent, secure, tamper-proof, and robust ledger services for various real-world use cases. Recent years have witnessed notable developments in blockchain technology itself as well as in blockchain-adopting applications. Most existing surveys limit their scope to particular issues of blockchain or its applications, and thus hardly depict the general picture of the current blockchain ecosystem. In this paper, we investigate recent advances in both blockchain technology and its most active research topics in real-world applications. We first review recent developments of consensus mechanisms and storage mechanisms in general blockchain systems. Then an extensive literature review is conducted on blockchain-enabled IoT, edge computing, federated learning, and several emerging applications including healthcare, the COVID-19 pandemic, social networks, and supply chains, with detailed specific research topics discussed in each. Finally, we discuss future directions, challenges, and opportunities in both academia and industry.
Image segmentation remains an open problem, especially when the intensities of the objects of interest overlap due to the presence of intensity inhomogeneity (also known as bias field). To segment images with intensity inhomogeneities, we propose a bias correction embedded level set model, in which Inhomogeneities are Estimated by Orthogonal Primary Functions (IEOPF). In the proposed model, the smoothly varying bias is estimated by a linear combination of a given set of orthogonal primary functions. An inhomogeneous intensity clustering energy is then defined, and membership functions of the clusters described by the level set function are introduced to rewrite the energy as the data term of the proposed model. Similar to popular level set methods, a regularization term and an arc length term are also included to regularize and smooth the level set function, respectively. The proposed model is then extended to multichannel and multiphase patterns to segment colour images and images with multiple objects, respectively. It has been extensively tested on both synthetic and real images that are widely used in the literature, as well as on the public BrainWeb and IBSR datasets. Experimental results and comparison with state-of-the-art methods demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy.
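A minimal sketch of the bias-modeling ingredient (ours, decoupled from the level set machinery): represent the bias field as a linear combination of tensor-product Legendre polynomials, an assumed concrete choice of orthogonal primary functions, and fit the weights by linear least squares. In the full model this estimate alternates with the clustering/level set updates; here we simply fit the smooth trend of a synthetic image to show the basis expansion.

```python
import numpy as np
from numpy.polynomial import legendre

def estimate_bias(image, order=3):
    """Fit a smooth bias field b(x, y) = sum_k w_k * g_k(x, y), where the g_k
    are tensor-product Legendre polynomials (one concrete choice of orthogonal
    primary functions; the paper's basis may differ), via least squares."""
    H, W = image.shape
    y, x = np.mgrid[0:H, 0:W]
    # Map pixel coordinates to [-1, 1], where Legendre polynomials are orthogonal.
    x = 2 * x / (W - 1) - 1
    y = 2 * y / (H - 1) - 1
    basis = [legendre.legval(x, [0] * i + [1]) * legendre.legval(y, [0] * j + [1])
             for i in range(order + 1) for j in range(order + 1)]
    A = np.stack([g.ravel() for g in basis], axis=1)       # (H*W, n_basis)
    w, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)  # least-squares weights
    return (A @ w).reshape(H, W)

# Synthetic check: a smooth multiplicative bias over a two-class image.
rng = np.random.default_rng(0)
img = rng.choice([1.0, 2.0], size=(64, 64))
yy, xx = np.mgrid[0:64, 0:64] / 63.0
biased = img * (0.8 + 0.4 * xx * yy)
print(np.abs(estimate_bias(biased, order=2) - biased).mean())
```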