Out-of-distribution (OOD) generalization is a complicated problem due to the idiosyncrasies of possible distribution shifts between training and test domains. Most benchmarks employ diverse datasets to address this issue; however, the degree of the distribution shift between the training domains and the test domains of each dataset remains largely fixed. This may lead to biased conclusions that either underestimate or overestimate the actual OOD performance of a model. Our study delves into a more nuanced evaluation setting that covers a broad range of shift degrees. We show that the robustness of models can be quite brittle and inconsistent under different degrees of distribution shifts, and therefore one should be more cautious when drawing conclusions from evaluations under a limited range of degrees. In addition, we observe that large-scale pre-trained models, such as CLIP, are sensitive to even minute distribution shifts of novel downstream tasks. This indicates that while pre-trained representations may help improve downstream in-distribution performance, they could have minimal or even adverse effects on generalization in certain OOD scenarios of the downstream task if not used properly. In light of these findings, we encourage future research to conduct evaluations across a broader range of shift degrees whenever possible.
The graduated optimization approach is a heuristic method for finding globally optimal solutions of nonconvex functions and has been theoretically analyzed in several studies. This paper defines a new family of nonconvex functions for graduated optimization, discusses sufficient conditions for belonging to it, and provides a convergence analysis of the graduated optimization algorithm for this family. It shows that stochastic gradient descent (SGD) with mini-batch stochastic gradients has the effect of smoothing the objective function, with the degree of smoothing determined by the learning rate and the batch size. This finding offers theoretical insight into why SGD with large batch sizes tends to converge to sharp local minima, why decaying learning rates and increasing batch sizes are superior to fixed learning rates and batch sizes, and what the optimal learning rate schedule is. To the best of our knowledge, this is the first paper to provide a theoretical explanation for these phenomena. Moreover, a new graduated optimization framework that uses a decaying learning rate and an increasing batch size is analyzed, and experimental results on image classification that support our theoretical findings are reported.
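For illustration, the following minimal sketch mimics a graduated-optimization-style loop in which the implicit smoothing is driven by jointly decaying the learning rate and growing the batch size; the toy objective `grad_f`, the noise model, and all schedule constants are assumptions made for this sketch, not the algorithm analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f(x):
    # Gradient of a toy nonconvex objective f(x) = x**2 + 2*sin(5*x).
    return 2 * x + 10 * np.cos(5 * x)

def stochastic_grad(x, batch_size, noise_std=4.0):
    # Mini-batch gradient noise shrinks like 1/sqrt(batch_size), so small
    # batches correspond to a more strongly smoothed surrogate objective.
    return grad_f(x) + rng.normal(0.0, noise_std / np.sqrt(batch_size))

def graduated_sgd(x0, stages=5, steps_per_stage=200):
    x, lr, batch = x0, 0.1, 8          # start with strong implicit smoothing
    for _ in range(stages):
        for _ in range(steps_per_stage):
            x -= lr * stochastic_grad(x, batch)
        lr *= 0.5                      # decay the learning rate ...
        batch *= 2                     # ... and grow the batch size, gradually
                                       # reducing the smoothing between stages
    return x

print(graduated_sgd(x0=3.0))           # ends near a minimizer of the toy objective
```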
Active reconfigurable intelligent surfaces (ARIS) are a promising way to compensate for multiplicative fading attenuation by amplifying and reflecting incident signals toward selected users. This paper investigates the performance of ARIS-assisted non-orthogonal multiple access (NOMA) networks over cascaded Nakagami-m fading channels. The effects of hardware impairments (HIS) and reflection coefficients on ARIS-NOMA networks with imperfect successive interference cancellation (ipSIC) and perfect successive interference cancellation (pSIC) are considered. More specifically, we derive new exact and asymptotic expressions for the outage probability and ergodic data rate with ipSIC/pSIC for ARIS-NOMA-HIS networks. Based on the asymptotic analyses, the diversity orders and multiplexing gains of a pair of non-orthogonal users are obtained in detail. Additionally, the energy efficiency of ARIS-NOMA-HIS networks is investigated under delay-limited and delay-tolerant transmission schemes. Simulation results demonstrate that: i) the outage behaviors and ergodic data rates of ARIS-NOMA-HIS networks outperform those of ARIS-aided orthogonal multiple access (OMA) and passive reconfigurable intelligent surface (PRIS) aided OMA networks; ii) as the reflection coefficient of the ARIS increases, ARIS-NOMA-HIS networks achieve improved outage performance; and iii) ARIS-NOMA-HIS networks are more energy efficient than ARIS/PRIS-OMA networks and conventional cooperative schemes.
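As a companion to the analytical expressions, the sketch below estimates the far user's outage probability by Monte Carlo for a heavily simplified two-user ARIS-NOMA downlink; the coherent cascaded Nakagami-m model, the power-allocation coefficients, the amplification gain, and the omission of hardware impairments and of the noise amplified by the ARIS are all simplifying assumptions, not the system model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def nakagami(m, omega, size):
    # Nakagami-m amplitude = square root of a Gamma(m, omega/m) variate.
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=size))

def outage_prob(n_elements=4, m=2.0, snr_db=-10.0, a_far=0.8, a_near=0.2,
                rate_far=1.0, amp_gain=1.5, trials=200_000):
    rho = 10 ** (snr_db / 10)                     # transmit SNR
    h = nakagami(m, 1.0, (trials, n_elements))    # BS -> ARIS amplitudes
    g = nakagami(m, 1.0, (trials, n_elements))    # ARIS -> user amplitudes
    # Coherent combining across elements, amplified by the active RIS.
    gain2 = (amp_gain * np.sum(h * g, axis=1)) ** 2
    # Far user decodes its own signal, treating the near user's as interference.
    sinr_far = a_far * rho * gain2 / (a_near * rho * gain2 + 1.0)
    return np.mean(np.log2(1.0 + sinr_far) < rate_far)

print(outage_prob())
```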
We are interested in testing properties of distributions with systematically mislabeled samples. Our goal is to make decisions about unknown probability distributions, using a sample that has been collected by a confused collector, such as a machine-learning classifier that has not learned to distinguish all elements of the domain. The confused collector holds an unknown clustering of the domain and an input distribution $\mu$, and provides two oracles: a sample oracle which produces a sample from $\mu$ that has been labeled according to the clustering; and a label-query oracle which returns the label of a query point $x$ according to the clustering. Our first set of results shows that identity, uniformity, and equivalence of distributions can be tested efficiently, under the earth-mover distance, with remarkably weak conditions on the confused collector, even when the unknown clustering is adversarial. This requires defining a variant of the distribution testing task (inspired by the recent testable learning framework of Rubinfeld & Vasilyan), where the algorithm should test a joint property of the distribution and its clustering. As an example, we get efficient testers when the distribution tester is allowed to reject if it detects that the confused collector clustering is "far" from being a decision tree. The second set of results shows that we can sometimes do significantly better when the clustering is random instead of adversarial. For certain one-dimensional random clusterings, we show that uniformity can be tested under the TV distance using $\widetilde O\left(\frac{\sqrt n}{\rho^{3/2} \epsilon^2}\right)$ samples and zero queries, where $\rho \in (0,1]$ controls the "resolution" of the clustering. We improve this to $O\left(\frac{\sqrt n}{\rho \epsilon^2}\right)$ when queries are allowed.
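For intuition, the sketch below is the textbook collision-based uniformity tester applied directly to the observed labels; it is not the confused-collector tester from the paper, and the acceptance threshold constant is a choice made for this illustration.

```python
from itertools import combinations
import random

def collision_uniformity_test(samples, domain_size, eps):
    # Under the uniform distribution the collision probability is 1/n, while any
    # distribution that is eps-far from uniform in TV distance has collision
    # probability at least (1 + 4*eps**2)/n, so the empirical collision rate
    # separates the two cases once enough samples are drawn.
    collisions = sum(1 for a, b in combinations(samples, 2) if a == b)
    total_pairs = len(samples) * (len(samples) - 1) / 2
    return collisions / total_pairs <= (1 + 2 * eps ** 2) / domain_size

n = 1000
uniform_samples = [random.randrange(n) for _ in range(2000)]
print(collision_uniformity_test(uniform_samples, n, eps=0.25))   # expect True
```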
Mutation validation (MV) is a recently proposed approach for model selection, garnering significant interest due to its unique characteristics and potential benefits compared with the widely used cross-validation (CV) method. In this study, we empirically compared MV and $k$-fold CV on benchmark and real-world datasets. Employing Bayesian tests, we compared their generalization estimates, yielding three posterior probabilities: practical equivalence, CV superiority, and MV superiority. We also evaluated the differences in the capacity of the selected models and in computational efficiency. We found that MV and CV select models with practically equivalent generalization performance across various machine learning algorithms and the majority of benchmark datasets. MV exhibited advantages in terms of selecting simpler models and lower computational cost. However, in some cases MV selected overly simplistic models, leading to underfitting, and showed instability in hyperparameter selection. These limitations of MV became more evident in the evaluation of a real-world neuroscientific task of predicting sex at birth using brain functional connectivity.
The integration of experimental data into mathematical and computational models is crucial for enhancing their predictive power in real-world scenarios. However, the performance of data assimilation algorithms can be significantly degraded when measurements are corrupted by biased noise that alters the signal magnitude, or when the system dynamics lack smoothness, for example in the presence of fast oscillations or discontinuities. This paper focuses on variational state estimation using the so-called Parameterized Background Data Weak (PBDW) method, which relies on a parameterized background supplemented by a set of constraints: the state is estimated by solving a minimization problem on a reduced-order background model, subject to constraints imposed by the input measurements. To address biased noise in the observations, a modified formulation incorporating a correction mechanism is proposed, while rapid oscillations are handled by treating them as slowly decaying modes, based on a two-scale splitting of the classical reconstruction algorithm. The effectiveness of the proposed algorithms is demonstrated through various examples, including discontinuous signals and simulated Doppler ultrasound data.
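For context, the classical (noise-free) PBDW statement that the modified formulation builds on can be written as follows; the notation ($\mathcal{U}$, $Z_N$, $U_M$, $\ell_m$, $y_m$) is chosen here for illustration and may differ from the paper's.

```latex
% Classical PBDW state estimate: Z_N is the reduced-order background space,
% U_M the update space spanned by the Riesz representers of the observation
% functionals \ell_1,\dots,\ell_M, and y_m the measured data.
u^{*} \;=\; \operatorname*{arg\,min}_{u \in \mathcal{U}}
  \bigl\| u - P_{Z_N} u \bigr\|_{\mathcal{U}}
\quad \text{subject to} \quad
  \ell_m(u) = y_m, \qquad m = 1, \dots, M,
```

and the minimizer decomposes as $u^{*} = z^{*} + \eta^{*}$ with a background component $z^{*} \in Z_N$ and an update $\eta^{*} \in U_M$.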
The problems of determining the permutation-representation number (prn) and the representation number of bipartite graphs are open in the literature. Moreover, the decision problem corresponding to determining the prn of a bipartite graph is NP-complete. However, these numbers have been established for certain subclasses of bipartite graphs, e.g., crown graphs. Further, it has been conjectured that crown graphs have the highest representation number among bipartite graphs. In this work, we first reconcile the relation between the prn of a comparability graph and the dimension of its induced poset, and review the upper bounds on the prn of bipartite graphs. Then, we study the prn of bipartite graphs using the notion of neighborhood graphs. This approach substantiates the aforesaid conjecture and provides theoretical evidence for it. In this connection, we devise a polynomial-time procedure to construct a word that represents a given bipartite graph permutationally. Accordingly, we obtain an improved upper bound on the prn of bipartite graphs. Further, we construct a class of bipartite graphs, viz., extended crown graphs, defined over posets, and investigate their prn using neighborhood graphs.
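To make the underlying notion concrete: a word $w$ over the vertex set represents a graph if two vertices alternate in $w$ exactly when they are adjacent, and the representation is permutational when $w$ is a concatenation of permutations of the vertex set. The sketch below checks this definition on a small example; it is a definitional illustration, not the polynomial-time construction devised in the paper.

```python
def alternate(word, x, y):
    # x and y alternate in word if the subsequence over {x, y} looks like xyxy...
    sub = [c for c in word if c in (x, y)]
    return all(a != b for a, b in zip(sub, sub[1:]))

def represents(word, vertices, edges):
    # word represents the graph iff each pair of vertices alternates in word
    # exactly when it forms an edge (and every vertex occurs in word).
    edge_set = {frozenset(e) for e in edges}
    return all(
        alternate(word, x, y) == (frozenset((x, y)) in edge_set)
        for i, x in enumerate(vertices) for y in vertices[i + 1:]
    )

# K_{2,2} (the 4-cycle a-b-c-d) is represented permutationally by the
# concatenation of the two permutations "acbd" and "cadb".
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(represents("acbd" + "cadb", list("abcd"), edges))   # expect True
```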
With the aim of reducing the computational cost of solving parameter-dependent eigenvalue problems, a model order reduction (MOR) procedure is proposed. We focus on the case of non-self-adjoint generalized eigenvalue problems, such as the stationary multigroup neutron diffusion equations. The method relies on an approximation of the manifold of solutions using a Proper Orthogonal Decomposition (POD) approach. The numerical method is composed of two stages. In the offline stage, we build a reduced space that approximates the manifold. In the online stage, for any given new set of parameters, we solve a reduced problem on the reduced space in a much shorter computational time than that required to solve the high-fidelity problem. This method is applied to core computations in the APOLLO3 code.
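A minimal offline/online sketch of this idea is given below, with a generic parameterized generalized eigenvalue problem $A(\mu)x = \lambda B(\mu)x$ standing in for the multigroup diffusion operators; the matrices, the parameter range, the number of snapshots and POD modes, and the choice of the dominant eigenpair are placeholders, not the setting used in APOLLO3.

```python
import numpy as np
from scipy.linalg import eig, svd

def A(mu):   # placeholder non-symmetric parameterized operator (60 x 60)
    n, main = 60, 2.0 + mu * np.linspace(1.0, 2.0, 60)
    return np.diag(main) + np.diag(-np.ones(n - 1), 1) + np.diag(-0.5 * np.ones(n - 1), -1)

def B(mu):   # placeholder right-hand-side operator
    return np.diag(1.0 + 0.1 * mu * np.ones(60))

# Offline stage: collect dominant eigenvectors over a parameter sample and
# compress them into a reduced basis via POD (truncated SVD of the snapshots).
snapshots = []
for mu in np.linspace(0.0, 1.0, 20):
    w, v = eig(A(mu), B(mu))
    snapshots.append(np.real(v[:, np.argmax(np.abs(w))]))
U, s, _ = svd(np.column_stack(snapshots), full_matrices=False)
V = U[:, :5]                                   # reduced space of dimension 5

# Online stage: for a new parameter, solve only a 5 x 5 reduced problem.
mu_new = 0.37
w_r, v_r = eig(V.T @ A(mu_new) @ V, V.T @ B(mu_new) @ V)
idx = np.argmax(np.abs(w_r))
print(np.real(w_r[idx]))                       # reduced approximation of lambda
x_approx = V @ v_r[:, idx]                     # lifted approximate eigenvector
```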
This paper considers the Cauchy problem for a nonlinear dynamic string equation of Kirchhoff type with time-varying coefficients. The objective of this work is to develop a temporal discretization algorithm capable of approximating the solution of this initial-boundary value problem. To this end, a symmetric three-layer semi-discrete scheme is employed with respect to the temporal variable, wherein the nonlinear term is evaluated at the middle node point. This approach enables the numerical solution at each temporal step to be obtained by inverting linear operators, yielding a system of second-order linear ordinary differential equations. Local convergence of the proposed scheme is established; it converges quadratically in the time step size on the local temporal interval. We have conducted several numerical experiments with the proposed algorithm on various test problems to validate its performance. The obtained numerical results are in accordance with the theoretical findings.
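As an illustration of the type of scheme being described (the exact coefficients and operator treatment in the paper may differ), take the prototypical Kirchhoff string equation $u_{tt} - \bigl(a + b\int_0^L u_x^2\,dx\bigr)u_{xx} = f$ with time step $\tau$ and approximations $u^k \approx u(\cdot, t_k)$; a symmetric three-layer scheme that evaluates the nonlinear coefficient at the middle layer reads:

```latex
% Assumed illustrative form of a symmetric three-layer semi-discrete scheme:
\frac{u^{k+1} - 2u^{k} + u^{k-1}}{\tau^{2}}
  \;-\;
\Bigl( a + b \int_{0}^{L} \bigl( u_{x}^{k} \bigr)^{2} \, dx \Bigr)
\frac{u_{xx}^{k+1} + u_{xx}^{k-1}}{2}
  \;=\; f^{k}, \qquad k = 1, 2, \dots,
```

so that, with $u^{k}$ and $u^{k-1}$ known, each step only requires solving a linear second-order ordinary differential equation in the spatial variable for $u^{k+1}$.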
We propose a new matrix factor model, named RaDFaM, whose latent structure is strictly derived from a hierarchical rank decomposition of a matrix. The hierarchy lies in the assumption that the basis vectors of the column space of each multiplier matrix themselves follow a vector factor model. Compared with the most commonly used matrix factor model, which takes the latent structure of a bilinear form, RaDFaM involves additional row-wise and column-wise matrix latent factors. This yields modest dimension reduction and stronger signal intensity from the perspective of tensor subspace learning, though it poses challenges for the estimation procedure and the concomitant inferential theory for a collection of matrix-valued observations. We develop a class of estimation procedures that exploits the separable covariance structure under RaDFaM and approximate least squares, and establish its superiority in terms of the peak signal-to-noise ratio. We also establish the asymptotic theory when the matrix-valued observations are uncorrelated or weakly correlated. Numerically, in terms of image/matrix reconstruction, supervised learning, and so forth, we demonstrate the excellent performance of RaDFaM on two matrix-valued sequence datasets: independent 2D images and a multinational macroeconomic indices time series.
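For reference, the commonly used bilinear matrix factor model that RaDFaM is contrasted with can be written as follows; the symbols are generic, and RaDFaM's hierarchical latent structure, which replaces the single bilinear term, is not reproduced here.

```latex
% Standard bilinear matrix factor model for p x q observations X_t:
X_t \;=\; R \, F_t \, C^{\top} \;+\; E_t, \qquad t = 1, \dots, T,
```

where $R \in \mathbb{R}^{p \times k_1}$ and $C \in \mathbb{R}^{q \times k_2}$ are row and column loading matrices, $F_t \in \mathbb{R}^{k_1 \times k_2}$ is the latent matrix factor, and $E_t$ is a noise matrix.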
The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, is a huge challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches and, ultimately, to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions, even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.