Mutual Coupling (MC) is an unavoidable feature of Reconfigurable Intelligent Surfaces (RISs) with sub-wavelength inter-element spacing. Its inherent presence naturally leads to non-local RIS structures, which can be efficiently described via non-diagonal phase shift matrices. In this paper, we focus on optimizing MC in RIS-assisted multi-user MIMO wireless communication systems. In particular, we formulate a novel problem that jointly optimizes active and passive beamforming as well as MC in a physically consistent manner. To characterize MC, we employ scattering parameters and propose a novel approach that optimizes them offline, rather than on the fly. Our numerical results show that the proposed MC optimization improves system performance, and that this improvement is achievable without on-the-fly MC optimization, which can be rather cumbersome.
Deep Neural Network (DNN) accelerators are extensively used to improve the computational efficiency of DNNs, but are prone to faults caused by Single-Event Upsets (SEUs). In this work, we present an in-depth analysis of the impact of SEUs on a Systolic Array (SA)-based DNN accelerator. A fault injection campaign is performed in a Register-Transfer Level (RTL) simulation environment to improve the observability of each hardware block, including the SA itself as well as the post-processing pipeline. From this analysis, we report the sensitivity of various flip-flop groups, independent of the DNN model architecture, in terms of both fault propagation probability and fault magnitude. This allows us to draw detailed conclusions and determine optimal mitigation strategies.
A nonlinear Helmholtz (NLH) equation with high frequencies and corner singularities is discretized by the linear finite element method (FEM). After deriving some wave-number-explicit stability estimates and the singularity decomposition for the NLH problem, a priori stability and error estimates are established for the FEM on shape-regular meshes, including the case of locally refined meshes. Then a posteriori upper and lower bounds using a new residual-type error estimator, which is equivalent to the standard one, are derived for the FE solutions to the NLH problem. These a posteriori estimates confirm that a significant fact, first observed by Babu\v{s}ka et al. [Int J Numer Methods Eng 40 (1997)] for a one-dimensional linear problem, is also valid for the NLH problem: the residual-type estimator seriously underestimates the error of the FE solution in the preasymptotic regime. Based on the new a posteriori error estimator, both the convergence and the quasi-optimality of the resulting adaptive finite element algorithm are proved for the first time for the NLH problem when the initial mesh size lies in the preasymptotic regime. Finally, numerical examples are presented to validate the theoretical findings and demonstrate that applying the continuous interior penalty (CIP) technique with appropriate penalty parameters can reduce the pollution errors efficiently. In particular, the nonlinear phenomenon of optical bistability with Gaussian incident waves is successfully simulated by the adaptive CIPFEM.
In fMRI, capturing brain activation during a task depends on how quickly k-space arrays are acquired. Acquiring the full k-space arrays that make up volume images, which are reconstructed into images using the inverse Fourier transform (IFT), can take a considerable amount of scan time. Under-sampling k-space reduces the acquisition time, but results in aliased, or "folded," images. GeneRalized Autocalibrating Partial Parallel Acquisition (GRAPPA) is a parallel imaging technique that yields full images from subsampled k-space arrays. GRAPPA uses localized interpolation weights, which are estimated per scan and fixed over time, to fill in the missing spatial frequencies of the subsampled k-space. Hence, we propose a Bayesian approach to GRAPPA (BGRAPPA) in which k-space measurement uncertainty is assessed from the a priori calibration k-space arrays. This prior information is utilized to estimate the missing spatial frequency values from the posterior distribution and reconstruct full field-of-view images. Our BGRAPPA technique successfully reconstructed both simulated and experimental single-slice images with fewer artifacts, reduced noise leading to an increased signal-to-noise ratio (SNR), and stronger power of task detection.
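The weight-fitting step behind standard GRAPPA (the deterministic baseline that BGRAPPA replaces with a posterior estimate) can be sketched as follows. This is a toy 1-D, real-valued example with a single two-neighbor kernel; actual GRAPPA operates on complex, 2-D multi-coil k-space with larger kernels, and all sizes and variable names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
nc, n = 4, 64                                  # coils, k-space lines (1-D toy)
W_true = rng.standard_normal((nc, 2 * nc))     # hypothetical "true" kernel

# Synthetic coil k-space in which each odd (skipped) line is an exact
# linear combination of the two neighboring acquired lines over all coils.
k = rng.standard_normal((nc, n))
for j in range(1, n - 1, 2):
    src = np.concatenate([k[:, j - 1], k[:, j + 1]])   # stacked neighbors
    k[:, j] = W_true @ src

# Calibration: estimate interpolation weights from a fully sampled
# autocalibration region (here lines 1..15) by least squares.
cal = range(1, 16, 2)
S = np.stack([np.concatenate([k[:, j - 1], k[:, j + 1]]) for j in cal])
T = np.stack([k[:, j] for j in cal])
W_hat, *_ = np.linalg.lstsq(S, T, rcond=None)          # (2*nc, nc)

# Apply the weights to fill a skipped line outside the calibration region.
j = 41
filled = W_hat.T @ np.concatenate([k[:, j - 1], k[:, j + 1]])
assert np.allclose(filled, k[:, j], atol=1e-6)
```

Because the synthetic missing lines are exact linear combinations of their neighbors, least squares recovers the kernel exactly; in real data the fit is approximate, which is precisely the measurement uncertainty the Bayesian formulation aims to quantify.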
The Tweedie generalized linear models are commonly applied in the insurance industry to analyze semicontinuous claim data. For better prediction of the aggregated claim size, the mean and dispersion of the Tweedie model are often estimated together using double generalized linear models. In some actuarial applications, it is common to observe an excessive percentage of zeros, which often results in a decline in the performance of the Tweedie model. The zero-inflated Tweedie model, which draws inspiration from the zero-inflated Poisson model, has recently been considered in the literature. In this article, we consider the problem of dispersion modeling of the Tweedie state in the zero-inflated Tweedie model, in addition to mean modeling. We also model the probability of the zero state based on the generalized expectation-maximization algorithm. To potentially incorporate nonlinear and interaction effects of the covariates, we estimate the mean, dispersion, and zero-state probability using decision-tree-based gradient boosting. We conduct extensive numerical studies to demonstrate the improved performance of our method over existing ones.
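For intuition, the E-step of a zero-inflation EM scheme of the kind described above can be sketched as follows. The closed-form Tweedie zero mass exp(-mu^(2-p)/(phi(2-p))) for 1 < p < 2 is standard for the compound Poisson-gamma family, but the function names and parameter values below are illustrative, not the authors' implementation:

```python
import math

def tweedie_zero_prob(mu, phi, p):
    """P(Y = 0) under a Tweedie(mu, phi, p) with 1 < p < 2, which is a
    compound Poisson-gamma: the zero mass is exp(-lam) with
    lam = mu**(2 - p) / (phi * (2 - p))."""
    lam = mu ** (2.0 - p) / (phi * (2.0 - p))
    return math.exp(-lam)

def zero_state_responsibility(pi, mu, phi, p):
    """E-step weight: posterior probability that an observed zero came
    from the degenerate zero state rather than the Tweedie state."""
    p0 = tweedie_zero_prob(mu, phi, p)
    return pi / (pi + (1.0 - pi) * p0)

# A larger Tweedie mean makes the Tweedie state a less plausible source
# of an observed zero, so the zero-state responsibility increases.
r_small_mu = zero_state_responsibility(pi=0.3, mu=0.5, phi=1.0, p=1.5)
r_large_mu = zero_state_responsibility(pi=0.3, mu=5.0, phi=1.0, p=1.5)
assert 0.0 < r_small_mu < r_large_mu < 1.0
```

In the M-step, these responsibilities would weight the boosting fits for the mean, dispersion, and zero-state probability.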
Direct Preference Optimization (DPO) is a widely used offline preference optimization algorithm that reparameterizes reward functions in reinforcement learning from human feedback (RLHF) to enhance simplicity and training stability. In this work, we propose SimPO, a simpler yet more effective approach. The effectiveness of SimPO is attributed to a key design choice: using the average log probability of a sequence as the implicit reward. This reward formulation better aligns with model generation and eliminates the need for a reference model, making the algorithm more compute- and memory-efficient. Additionally, we introduce a target reward margin to the Bradley-Terry objective to encourage a larger margin between the winning and losing responses, further enhancing the algorithm's performance. We compare SimPO to DPO and its latest variants across various state-of-the-art training setups, including both base and instruction-tuned models such as Mistral and Llama3. We evaluate on a range of instruction-following benchmarks, including AlpacaEval 2, MT-Bench, and the recent challenging Arena-Hard benchmark. Our results demonstrate that SimPO consistently and significantly outperforms existing approaches without substantially increasing response length. Specifically, SimPO outperforms DPO by up to 6.4 points on AlpacaEval 2 and by up to 7.5 points on Arena-Hard. Our top-performing model, built on Llama3-8B-Instruct, achieves a remarkable 44.7 length-controlled win rate on AlpacaEval 2 -- surpassing Claude 3 Opus on the leaderboard -- and a 33.8 win rate on Arena-Hard, making it the strongest 8B open-source model.
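The length-normalized implicit reward and the margin-augmented Bradley-Terry loss described above can be sketched in a few lines; the beta and gamma values here are illustrative hyper-parameters, not the paper's tuned settings:

```python
import math

def simpo_loss(logp_w, len_w, logp_l, len_l, beta=2.0, gamma=0.5):
    """SimPO loss for one preference pair.

    logp_w / logp_l: summed token log-probabilities of the winning /
    losing response under the policy; len_w / len_l: response lengths.
    The implicit reward is the length-normalized (average) log
    probability scaled by beta; gamma is the target reward margin.
    No reference model appears anywhere in the computation.
    """
    reward_w = beta * logp_w / len_w
    reward_l = beta * logp_l / len_l
    margin = reward_w - reward_l - gamma
    # Bradley-Terry style logistic loss: -log sigmoid(margin)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the winning response's average log-probability
# pulls further ahead of the losing one's.
loss_small_gap = simpo_loss(logp_w=-40.0, len_w=20, logp_l=-45.0, len_l=20)
loss_large_gap = simpo_loss(logp_w=-30.0, len_w=20, logp_l=-45.0, len_l=20)
assert loss_large_gap < loss_small_gap
```

Dividing by the response length is what discourages the length inflation that raw summed log-probability rewards tend to induce.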
We develop a statistical inference method applicable to a broad range of generalized linear models (GLMs) in high-dimensional settings, where the number of unknown coefficients scales proportionally with the sample size. Although a pioneering inference method has been developed for logistic regression, a specific instance of GLMs, it cannot be applied directly to other GLMs because of unknown hyper-parameters. In this study, we address this limitation by developing a new inference method designed for a certain class of GLMs. Our method is based on an adjustment of asymptotic normality in high dimensions and remains feasible even when the hyper-parameters are unknown. Specifically, we introduce a novel convex loss-based estimator and its associated system, which are essential components of the inference. Next, we devise a moment-based method for estimating the system parameters required by our approach. Consequently, we construct confidence intervals for GLMs in the high-dimensional regime. We prove that our proposed method has desirable theoretical properties, such as strong consistency and exact coverage probability. Finally, we experimentally confirm its validity.
We present EGN, a stochastic second-order optimization algorithm that combines the generalized Gauss-Newton (GN) Hessian approximation with low-rank linear algebra to compute the descent direction. Leveraging the Duncan-Guttman matrix identity, the parameter update is obtained by factorizing a matrix whose size is that of the mini-batch. This is particularly advantageous for large-scale machine learning problems where the dimension of the neural network parameter vector is several orders of magnitude larger than the batch size. Additionally, we show how improvements such as line search, adaptive regularization, and momentum can be seamlessly added to EGN to further accelerate the algorithm. Moreover, under mild assumptions, we prove that our algorithm converges to an $\epsilon$-stationary point at a linear rate. Finally, our numerical experiments demonstrate that EGN consistently exceeds, or at least matches, the generalization performance of well-tuned SGD, Adam, and SGN optimizers across various supervised and reinforcement learning tasks.
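The batch-sized factorization at the heart of this approach can be illustrated with the push-through identity for a plain squared loss (the Duncan-Guttman identity covers the full generalized GN case); the dimensions below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, b, lam = 800, 32, 0.1            # parameter dim >> batch size
J = rng.standard_normal((b, d))     # per-example Jacobian (batch x params)
r = rng.standard_normal(b)          # residuals for a squared loss

# Naive regularized Gauss-Newton step: solve a d x d system.
step_naive = np.linalg.solve(J.T @ J + lam * np.eye(d), J.T @ r)

# Batch-sized reformulation via the push-through identity
# (J^T J + lam I_d)^{-1} J^T = J^T (J J^T + lam I_b)^{-1}:
# only a b x b system needs to be solved.
step_small = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(b), r)

assert np.allclose(step_naive, step_small, atol=1e-6)
```

The two steps agree to numerical precision, but the second solve costs O(b^3) instead of O(d^3), which is the source of EGN's efficiency when d is millions and b is tens.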
There is a growing trend among statistical agencies to explore non-probability data sources for producing more timely and detailed statistics, while reducing costs and respondent burden. Coverage and measurement error are two issues that may be present in such data. These imperfections may be corrected using available information relating to the population of interest, such as a census or a reference probability sample. In this paper, we compare a wide range of existing methods for producing population estimates from a non-probability dataset through a simulation study based on a realistic business population. The study was conducted to examine the performance of the methods under different missingness and data quality assumptions. The results confirm the ability of the methods examined to address selection bias. When no measurement error is present in the non-probability dataset, a screening dual-frame approach for the probability sample tends to yield smaller sample sizes and lower mean squared errors. The presence of measurement error and/or nonignorable missingness increases the mean squared errors of estimators that depend heavily on the non-probability data. In this case, the best approach tends to be to fall back to a model-assisted estimator based on the probability sample.
Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices continues to grow, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey on the use of XAI for IoT. To bridge this gap, in this paper we review XAI frameworks with a focus on their characteristics and support for IoT. We illustrate the widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest implementation choices of XAI models for IoT systems in these applications with appropriate examples and summarize the key inferences for future work. Moreover, we present cutting-edge developments in edge XAI structures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored for the demands of future IoT use cases.
Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes of the same class are prone to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes of different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information into the result. These methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs, making it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism that can automatically adjust the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of the homophily degree between node pairs, which are learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end fashion, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms the state-of-the-art methods under heterophily or low homophily, and gains competitive performance under homophily.
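A minimal sketch of a propagation step gated by a per-edge homophily degree, in the spirit of the mechanism above; the gating formula here is illustrative, not the paper's exact parameterization:

```python
import numpy as np

def adaptive_propagate(H, A, G):
    """One illustrative propagation step in which a per-edge homophily
    degree G[i, j] in [-1, 1] gates how neighbor j's features are mixed
    into node i: positive degrees aggregate (homophily), negative
    degrees repel (heterophily). H: node features, A: adjacency."""
    W = A * G                                    # signed, gated edges
    deg = np.abs(W).sum(axis=1, keepdims=True)   # normalization per node
    deg[deg == 0] = 1.0
    return H + (W / deg) @ H

# Two-node toy graph: a homophilic edge pulls features together,
# while a heterophilic edge pushes them apart.
H = np.array([[1.0, 0.0], [0.0, 1.0]])
A = np.array([[0.0, 1.0], [1.0, 0.0]])
close = adaptive_propagate(H, A, G=np.full((2, 2), 1.0))
apart = adaptive_propagate(H, A, G=np.full((2, 2), -1.0))
d0 = np.linalg.norm(H[0] - H[1])
assert np.linalg.norm(close[0] - close[1]) < d0 < np.linalg.norm(apart[0] - apart[1])
```

In the actual model, G would be produced by learnable functions of topology and attributes and trained end-to-end; the sketch only shows how a signed degree changes whether propagation smooths or separates representations.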