Reconfigurable intelligent surfaces (RIS) and orthogonal time-frequency space (OTFS) modulation have gained attention in recent wireless research. RIS technology aids communication by reflecting incident electromagnetic waves towards the receiver, while OTFS modulation is effective in high-Doppler channels. This paper presents an early investigation of RIS-aided OTFS in high-Doppler channels. We derive the end-to-end delay-Doppler (DD) domain input-output relation of a RIS-aided OTFS system, considering rectangular pulses and fractional delay-Doppler values. We also consider a Zak receiver for RIS-aided OTFS that converts the received time-domain signal to the DD domain in one step using the Zak transform, and we derive its end-to-end input-output relation. Our simulation results show that $i)$ RIS-aided OTFS performs better than OTFS without RIS, $ii)$ the Zak receiver performs better than a two-step receiver, and $iii)$ RIS-aided OTFS achieves superior performance compared to RIS-aided OFDM.
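A one-step time-to-DD conversion of this kind can be sketched with the discrete Zak transform. The numpy snippet below is an illustrative sketch only: the function name and the (M delay bins, N Doppler bins) grid convention are assumptions, not taken from the paper. It reshapes the received samples so that each column collects the N repetitions of a delay tap, then applies a unitary DFT along that axis to obtain the Doppler dimension.

```python
import numpy as np

def zak_transform(x, M, N):
    """One-step time-domain -> delay-Doppler conversion via the discrete
    Zak transform (illustrative convention: M delay bins, N Doppler bins).

    x : length M*N array of time-domain samples.
    Returns an (M, N) delay-Doppler array Z[l, k].
    """
    assert x.size == M * N
    # Row n collects the samples x[l + n*M] for every delay tap l.
    X = x.reshape(N, M)
    # Unitary DFT across the N repetitions yields the Doppler axis.
    Z = np.fft.fft(X, axis=0, norm="ortho")
    return Z.T  # (delay, Doppler)
```

Because the DFT is applied with orthonormal scaling, the map is unitary and preserves signal energy, which is a quick sanity check for any implementation.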
Even though query evaluation is a fundamental task in databases, known classifications of conjunctive queries by their fine-grained complexity only apply to queries without self-joins. We study how self-joins affect enumeration complexity, with the aim of building upon the known results to achieve general classifications. We do this by examining the extension of two known dichotomies: one with respect to linear delay, and one with respect to constant delay after linear preprocessing. As this turns out to be an intricate investigation, this paper is structured as an example-driven discussion that initiates this analysis. We show enumeration algorithms that rely on self-joins to efficiently evaluate queries that otherwise cannot be answered with the same guarantees. Due to these additional tractable cases, the hardness proofs are more complex than in the self-join-free case. We show how to harness a known tagging technique to prove hardness of queries with self-joins. Our study offers sufficient conditions and necessary conditions for tractability and settles the cases of queries of low arity and queries with cyclic cores. Nevertheless, many cases remain open.
Electricity prices in liberalized markets are determined by the supply and demand for electric power, which are in turn driven by various external influences that vary strongly in time. In perfect competition, the merit order principle describes that dispatchable power plants enter the market in the order of their marginal costs to meet the residual load, i.e., the difference between load and renewable generation. Many market models implement this principle to predict electricity prices but typically require certain assumptions and simplifications. In this article, we present an explainable machine learning model for the prices on the German day-ahead market, which substantially outperforms a benchmark model based on the merit order principle. Our model is designed for the ex-post analysis of prices and thus builds on various external features. Using SHapley Additive exPlanations (SHAP) values, we can disentangle the role of the different features and quantify their importance from empirical data. Load, wind and solar generation are most important, as expected, but wind power appears to affect prices more strongly than solar power does. Fuel prices also rank highly and show nontrivial dependencies, including strong interactions with other features revealed by a SHAP interaction analysis. Large generation ramps are correlated with high prices, again with strong feature interactions, due to the limited flexibility of nuclear and lignite plants. Our results further contribute to model development by providing quantitative insights directly from data.
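The SHAP attribution idea can be illustrated on a toy model: for a linear price model with independent features, the exact Shapley value of feature i at input x reduces to w_i (x_i - E[x_i]). The sketch below is purely illustrative (the weights and feature values are hypothetical, not from the study); it computes Shapley values by brute-force coalition enumeration and can be checked against that closed form.

```python
import itertools
import math
import numpy as np

def shapley_linear(w, x, mu):
    """Exact Shapley values for the linear model f(x) = w @ x with
    independent features, by brute-force coalition enumeration.
    Features outside a coalition S are replaced by their means mu."""
    d = len(w)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for size in range(d):
            for S in itertools.combinations(others, size):
                # Classic Shapley weight |S|! (d - |S| - 1)! / d!
                weight = (math.factorial(size) * math.factorial(d - size - 1)
                          / math.factorial(d))
                # v(S u {i}) - v(S): marginal contribution of feature i.
                with_i = [x[j] if (j in S or j == i) else mu[j] for j in range(d)]
                without_i = [x[j] if j in S else mu[j] for j in range(d)]
                phi[i] += weight * (np.dot(w, with_i) - np.dot(w, without_i))
    return phi
```

For tree-based models, as used in the article, libraries compute these values efficiently without enumerating coalitions, but the definition being computed is the same.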
The goal of the group testing problem is to identify a set of defective items within a larger set of items, using suitably designed tests whose outcomes indicate whether any defective item is present. In this paper, we study how the number of tests can be significantly decreased by leveraging the structural dependencies between the items, i.e., by incorporating prior information. To do so, we pursue two different perspectives: (i) As a generalization of the uniform combinatorial prior, we consider the case where the defective set is uniformly distributed over a \emph{subset} of all possible sets of a given size, and study how this impacts the information-theoretic limits on the number of tests for approximate recovery; (ii) As a generalization of the i.i.d.~prior, we introduce a new class of priors based on the Ising model, where the associated graph represents interactions between items. We show that this naturally leads to an Integer Quadratic Program decoder, which can be converted to an Integer Linear Program and/or relaxed to a non-integer variant to reduce computational cost, while maintaining strong empirical recovery performance.
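For small instances, an Ising-prior MAP rule of this flavor can be written down directly as an Integer Quadratic Program and solved by exhaustive search. The numpy sketch below is a hedged illustration, not the paper's exact formulation: the mismatch penalty weight `lam` and the noiseless OR test model are assumed choices.

```python
import itertools
import numpy as np

def map_decode(A, y, J, h, lam=10.0):
    """Brute-force MAP decoder for group testing with an Ising prior.

    A : (m, n) 0/1 pooling matrix; y : (m,) observed OR test outcomes.
    J : (n, n) symmetric couplings and h : (n,) fields of the Ising prior.
    Minimizes  lam * (# mismatched tests) - u' J u - h' u  over u in {0,1}^n,
    i.e. the Integer Quadratic Program, by exhaustive search (small n only).
    """
    n = A.shape[1]
    best_u, best_cost = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        u = np.array(bits)
        yhat = (A @ u > 0).astype(int)  # noiseless OR channel (assumption)
        cost = lam * np.sum(yhat != y) - u @ J @ u - h @ u
        if cost < best_cost:
            best_cost, best_u = cost, u
    return best_u
```

The exhaustive loop is exactly what the ILP conversion and the relaxed variants mentioned in the abstract avoid at larger problem sizes.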
Deep operator network (DeepONet) has demonstrated great success in various learning tasks, including learning solution operators of partial differential equations. In particular, it provides an efficient approach to predicting evolution equations over a finite time horizon. Nevertheless, the vanilla DeepONet suffers from stability degradation in long-time prediction. This paper proposes a {\em transfer-learning} aided DeepONet to enhance the stability. Our idea is to use transfer learning to sequentially update the DeepONets as surrogates for the propagators learned in different time frames. The evolving DeepONets can better track the varying complexities of the evolution equations, while requiring only the efficient retraining of a tiny fraction of the operator networks. Through systematic experiments, we show that the proposed method not only improves the long-time accuracy of DeepONet at a similar computational cost, but also substantially reduces the required training sample size.
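The sequential-update idea can be caricatured in a few lines: keep a large frozen feature map (standing in for the trained branch/trunk networks) and refit only a small linear head when the dynamics drift into a new time frame. Everything below is an illustrative stand-in, not the paper's architecture: a random-feature surrogate and toy scalar "propagators" replace the actual DeepONet and PDE operators.

```python
import numpy as np

rng = np.random.default_rng(0)
P = 100                                    # frozen random features ("large" network part)
Wr, br = rng.normal(size=P), rng.normal(size=P)
feats = lambda u: np.tanh(np.outer(u, Wr) + br)   # (n, P); never retrained

def fit_head(propagator, n=300):
    """Transfer-learning step: refit only the small linear head on data
    drawn from the current time frame; the feature map stays frozen."""
    u = rng.uniform(-1.0, 1.0, n)
    H, _, _, _ = np.linalg.lstsq(feats(u), propagator(u), rcond=None)
    return H

frame1 = lambda u: 0.9 * u + 0.1 * np.sin(3 * u)  # toy propagator, early frame
frame2 = lambda u: 0.8 * u + 0.2 * np.sin(3 * u)  # drifted propagator, later frame
H1, H2 = fit_head(frame1), fit_head(frame2)       # two cheap sequential updates
```

The point of the sketch is the cost asymmetry: each update solves a small least-squares problem over P head weights instead of retraining the full feature map, mirroring the abstract's claim that only a tiny fraction of the network is updated per frame.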
This paper investigates the mean square error (MSE)-optimal conditional mean estimator (CME) in one-bit quantized systems in the context of channel estimation with jointly Gaussian inputs. We analyze the relationship of the generally nonlinear CME to the linear Bussgang estimator, a well-known method based on Bussgang's theorem. We highlight a novel observation that the Bussgang estimator is equal to the CME in several special cases, including the case of univariate Gaussian inputs and the case of multiple observations in the absence of additive noise prior to the quantization. For the general case, we conduct numerical simulations to quantify the gap between the Bussgang estimator and the CME; this gap increases for higher dimensions and longer pilot sequences. We propose an optimal pilot sequence, motivated by insights from the CME, and derive a novel closed-form expression of the MSE for that case. Afterwards, we derive a closed-form limit of the MSE in the regime of asymptotically many pilots that also holds for the Bussgang estimator. Lastly, we present numerical experiments for various system parameters and for different performance metrics which illuminate the behavior of the optimal channel estimator in the quantized regime. In this context, the well-known stochastic resonance effect that appears in quantized systems can be quantified.
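The univariate equality is easy to check numerically: for x ~ N(0, sx2), n ~ N(0, sn2) and the one-bit observation r = sign(x + n), both the CME E[x | r] and the linear Bussgang (LMMSE) estimator equal c * r with c = sqrt(2/pi) * sx2 / sqrt(sx2 + sn2), since E[r^2] = 1. A quick Monte Carlo sketch (the variance values are illustrative):

```python
import numpy as np

# Monte Carlo check of the scalar Bussgang gain against its closed form.
rng = np.random.default_rng(0)
sx2, sn2 = 1.0, 0.5
x = rng.normal(0.0, np.sqrt(sx2), 1_000_000)
r = np.sign(x + rng.normal(0.0, np.sqrt(sn2), x.size))

c_mc = np.mean(x * r)  # empirical Bussgang gain E[x r] (since E[r^2] = 1)
c_th = np.sqrt(2 / np.pi) * sx2 / np.sqrt(sx2 + sn2)  # closed form
```

The closed form follows from Price's theorem (or directly from E[x | y] = (sx2/sy2) y and the half-normal mean of y given its sign); in higher dimensions this coincidence breaks down, which is the gap the paper quantifies.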
In this paper, the paradigm of thermal noise communication (TherCom) is put forward for future wired/wireless networks with extremely low power consumption. Taking backscatter communication (BackCom) and reconfigurable intelligent surface (RIS)-based radio frequency chain-free transmitters one step further, a thermal noise-driven transmitter might enable zero-signal-power transmission by simply indexing resistors or other noise sources according to information bits. This preliminary paper aims to shed light on the theoretical foundations, transceiver designs, and error performance derivations and optimizations of two emerging TherCom solutions: Kirchhoff-law-Johnson-noise (KLJN) secure bit exchange and wireless thermal noise modulation (TherMod) schemes. Our theoretical and computer simulation findings reveal that noise variance detection, supported by sample variance estimation with carefully optimized decision thresholds, is a reliable way of extracting the embedded information from noise-modulated signals, even with a limited number of noise samples.
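The detection principle can be sketched in a few lines: each bit selects one of two noise variances, and the receiver thresholds the sample variance of K received samples. All numbers below (variances, K, the midpoint threshold) are illustrative assumptions; the paper optimizes the decision threshold rather than using a simple midpoint.

```python
import numpy as np

def thermod_link(bits, var0=1.0, var1=4.0, K=200, seed=0):
    """Toy noise-modulation link: bit b selects noise variance var_b, and the
    receiver compares the sample variance of K samples against a threshold.
    Midpoint threshold is a simplification; the paper optimizes this choice."""
    rng = np.random.default_rng(seed)
    tau = 0.5 * (var0 + var1)
    decided = []
    for b in bits:
        samples = rng.normal(0.0, np.sqrt(var1 if b else var0), K)
        decided.append(int(np.var(samples) > tau))
    return np.array(decided)
```

With well-separated variances and a few hundred samples per bit, the sample-variance statistic concentrates tightly enough that detection errors become rare, which is the qualitative behavior the abstract describes.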
Power analyses are an important aspect of experimental design, because they help determine how experiments are implemented in practice. It is common to specify a desired level of power and compute the sample size necessary to obtain that power. Such calculations are well-known for completely randomized experiments, but there can be many benefits to using other experimental designs. For example, it has recently been established that rerandomization, where subjects are randomized until covariate balance is obtained, increases the precision of causal effect estimators. This work establishes the power of rerandomized treatment-control experiments, thereby enabling sample size calculations. We find the surprising result that, while power is often greater under rerandomization than complete randomization, the opposite can occur for very small treatment effects. The reason is that inference under rerandomization can be relatively more conservative, in the sense that it can have a lower type I error rate at the same nominal significance level, and this additional conservativeness adversely affects power. This surprising result is due to treatment effect heterogeneity, a quantity often ignored in power analyses. We find that heterogeneity increases power for large effect sizes but decreases power for small effect sizes.
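The rerandomization procedure itself is simple to state: redraw the treatment assignment until the Mahalanobis distance between the covariate means of the two groups falls below an acceptance threshold. A minimal numpy sketch (the threshold value and the covariate data are illustrative, not from the paper):

```python
import numpy as np

def rerandomize(X, a=0.1, seed=0):
    """Redraw a balanced treatment assignment until the covariate-mean
    Mahalanobis distance M falls below the acceptance threshold a.

    X : (n, p) covariate matrix; returns the accepted 0/1 assignment and M.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Sinv = np.linalg.inv(np.cov(X, rowvar=False))
    base = np.array([1] * (n // 2) + [0] * (n - n // 2))
    while True:
        w = rng.permutation(base)  # n/2 treated, n/2 control
        d = X[w == 1].mean(axis=0) - X[w == 0].mean(axis=0)
        M = (n / 4) * d @ Sinv @ d  # Mahalanobis balance criterion
        if M <= a:
            return w, M
```

Tightening `a` accepts only better-balanced assignments, which is the mechanism behind both the precision gain and the added conservativeness discussed in the abstract.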
Learning the fine-grained interplay between vision and language enables a more accurate understanding for vision-language tasks. However, it remains challenging to extract key image regions according to the texts for semantic alignment. Most existing works are either limited by text-agnostic and redundant regions obtained with frozen detectors, or fail to scale further due to their heavy reliance on scarce grounding (gold) data to pre-train detectors. To solve these problems, we propose the Self-Locator Aided Network (SLAN) for cross-modal understanding tasks without any extra gold data. SLAN consists of a region filter and a region adaptor to localize regions of interest conditioned on different texts. By aggregating cross-modal information, the region filter selects key regions and the region adaptor updates their coordinates with text guidance. With detailed region-word alignments, SLAN can be easily generalized to many downstream tasks. It achieves fairly competitive results on five cross-modal understanding tasks (e.g., 85.7% and 69.2% on COCO image-to-text and text-to-image retrieval, surpassing previous SOTA methods). SLAN also demonstrates strong zero-shot and fine-tuned transferability to two localization tasks.
Continual Test-Time Adaptation (CTTA) aims to adapt the source model to continually changing unlabeled target domains without access to the source data. Existing methods mainly focus on model-based adaptation in a self-training manner, such as predicting pseudo labels for new domain datasets. Since pseudo labels are noisy and unreliable, these methods suffer from catastrophic forgetting and error accumulation when dealing with dynamic data distributions. Motivated by prompt learning in NLP, in this paper, we propose to learn an image-level visual domain prompt for target domains while keeping the source model parameters frozen. During testing, the changing target datasets can be adapted to the source model by reformulating the input data with the learned visual prompts. Specifically, we devise two types of prompts, i.e., domain-specific prompts and domain-agnostic prompts, to extract current domain knowledge and maintain the domain-shared knowledge in the continual adaptation. Furthermore, we design a homeostasis-based prompt adaptation strategy to suppress domain-sensitive parameters in domain-invariant prompts to learn domain-shared knowledge more effectively. This transition from the model-dependent paradigm to the model-free one enables us to bypass the catastrophic forgetting and error accumulation problems. Experiments show that our proposed method achieves significant performance gains over state-of-the-art methods on four widely-used benchmarks, including the CIFAR-10C, CIFAR-100C, ImageNet-C, and VLCS datasets.
An implicit variable-step BDF2 scheme is established for solving the space fractional Cahn-Hilliard equation, involving the fractional Laplacian, derived from a gradient flow in the negative order Sobolev space $H^{-\alpha}$, $\alpha\in(0,1)$. The Fourier pseudo-spectral method is applied for the spatial approximation. The proposed scheme inherits the energy dissipation law in the form of the modified discrete energy under a sufficient restriction on the time-step ratios. The convergence of the fully discrete scheme is rigorously established using a newly proved discrete embedding-type convolution inequality for the fractional Laplacian. Besides, the mass conservation and the unique solvability are also theoretically guaranteed. Numerical experiments are carried out to demonstrate both the accuracy and the energy dissipation for various interface widths. In particular, the multiple-time-scale evolution of the solution is captured by an adaptive time-stepping strategy in the short-to-long time simulation.
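The spatial ingredient, the fractional Laplacian under a Fourier pseudo-spectral discretization, becomes a diagonal multiplier in frequency space. A minimal periodic 1D sketch is given below (the scheme itself, including the variable-step BDF2 time discretization, is not reproduced; the symbol convention $(-\Delta)^{\alpha} \leftrightarrow |k|^{2\alpha}$ is the standard one).

```python
import numpy as np

def frac_laplacian_1d(u, L, alpha):
    """Fourier pseudo-spectral fractional Laplacian (-Delta)^alpha of a
    real field u on a periodic 1D grid of length L (spatial operator only)."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
    return np.fft.ifft((np.abs(k) ** (2 * alpha)) * np.fft.fft(u)).real
```

A convenient check: on $[0, 2\pi)$ the mode $\sin(x)$ has wavenumbers $\pm 1$, so $(-\Delta)^{\alpha}\sin(x) = \sin(x)$ for every $\alpha$, which pins down the scaling of the multiplier.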