In this study, we propose a new index for measuring excellence in science that is based on collaborations (co-authorship distances) between scientists. The index builds on the Erd\H{o}s number, a measure introduced decades ago. We propose to focus the new index on laureates of prestigious prizes in a given field and to measure co-authorship distances between these laureates and other scientists. To exemplify and explain our proposal, we computed the proposed index (PWIPM) in the field of quantitative science studies, where the Derek de Solla Price Memorial Award (Price Medal, PM) is awarded to outstanding scientists. We tested the convergent validity of the PWIPM by examining whether the indicator is related to an established bibliometric indicator, P(top 10%). The results show that the correlation coefficients between PWIPM and P(top 10%) are high (in cases where a sufficient number of papers is available for a reliable assessment of performance). Therefore, measured against an established indicator of research excellence, the new PWI indicator appears to be convergently valid and might thus be a possible alternative to established (bibliometric) indicators, with a focus on prizes.
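As an illustration of the underlying idea, the following minimal sketch computes an Erd\H{o}s-style co-authorship distance from an author to a set of prize laureates with networkx; the exact aggregation used for the PWIPM may differ, and the toy graph and names are hypothetical.

```python
# Minimal sketch of a prize-winner co-authorship distance (illustrative only;
# the aggregation actually used for the PWIPM may differ from this assumption).
import networkx as nx

def prize_winner_distance(coauthor_graph, laureates, author):
    """Shortest co-authorship distance from `author` to any laureate,
    analogous to an Erdos number rooted at the laureate set."""
    distances = [
        nx.shortest_path_length(coauthor_graph, author, laureate)
        for laureate in laureates
        if nx.has_path(coauthor_graph, author, laureate)
    ]
    return min(distances) if distances else float("inf")

# Toy example: an edge connects two authors who share at least one paper.
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "PM_laureate")])
print(prize_winner_distance(G, {"PM_laureate"}, "A"))  # -> 3
```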
Democratization of AI means not only that people can freely use AI, but also that people can collectively decide how AI is to be used. In particular, collective decision-making power is required to redress the negative externalities of the development of increasingly advanced AI systems, including degradation of the digital commons and unemployment from automation. The rapid pace of AI development and deployment currently leaves little room for this power. Monopolized in the hands of private corporations, the development of the most capable foundation models has proceeded largely without public input. There is currently no implemented mechanism for ensuring that the economic value generated by such models is redistributed to account for their negative externalities. The citizens who have generated the data necessary to train these models have no say in how their data are used. In this work, we propose that a public data trust assert control over training data for foundation models. In particular, this trust should scrape the internet as a digital commons and license the resulting data to commercial model developers in exchange for a percentage of deployment revenues. First, we argue in detail for the establishment of such a trust, and we discuss its feasibility and potential risks. Second, we detail a number of ways for a data trust to incentivize model developers to use training data only from the trust, proposing a mix of verification mechanisms, potential regulatory action, and positive incentives. We conclude by highlighting other potential benefits of our proposed data trust and connecting our work to ongoing efforts in data and compute governance.
We are interested in solving the Asymmetric Eigenvalue Complementarity Problem (AEiCP) by accelerated Difference-of-Convex (DC) algorithms. Two novel hybrid accelerated DCA variants, the Hybrid DCA with Line search and Inertial force (HDCA-LI) and the Hybrid DCA with Nesterov's extrapolation and Inertial force (HDCA-NI), are established. We propose three DC programming formulations of the AEiCP based on Difference-of-Convex-Sums-of-Squares (DC-SOS) decomposition techniques and apply the classical DCA and six accelerated variants (BDCA with exact and inexact line search, ADCA, InDCA, HDCA-LI, and HDCA-NI) to the three DC formulations. Numerical comparisons of the seven DCA-type methods against the state-of-the-art optimization solvers IPOPT, KNITRO, and FILTERSD are reported.
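To make the DCA template concrete, here is a minimal sketch of the classical DCA iteration on a toy DC program (minimize $f = g - h$ with $g, h$ convex); it is purely illustrative and does not reproduce the paper's DC-SOS formulations of the AEiCP or any of the accelerated variants.

```python
# Classical DCA sketch for a toy DC program f = g - h (illustrative only).
import numpy as np
from scipy.optimize import minimize

def dca(g, grad_h, x0, max_iter=100, tol=1e-8):
    """At each iteration, linearize -h at x_k and solve the convex subproblem
    min_x g(x) - <grad_h(x_k), x>."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = grad_h(x)  # (sub)gradient of h at the current iterate
        x_new = minimize(lambda z: g(z) - y @ z, x).x
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy example: f(x) = ||x||^4 - ||x||^2 with g(x) = ||x||^4 and h(x) = ||x||^2.
g = lambda x: np.sum(x**2) ** 2
grad_h = lambda x: 2 * x
print(dca(g, grad_h, x0=[0.3, -0.2]))  # converges to a critical point with ||x||^2 = 1/2
```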
We study the complexity of computing equilibria in binary public goods games on undirected graphs. In such a game, players correspond to vertices in a graph and face a binary choice: to perform an action or not. Each player's decision depends only on the number of neighbors in the graph who perform the action and is encoded by a per-player binary pattern. We show that games with decreasing patterns (where players only want to act if at most a threshold number of adjacent players do so) always have a pure Nash equilibrium and that one is reached from any starting profile by following a polynomially bounded sequence of best responses. For non-monotonic patterns of the form $10^k10^*$ (where players want to act alone or alongside $k + 1$ neighbors), we show that it is $\mathsf{NP}$-hard to decide whether a pure Nash equilibrium exists. We further investigate a generalization of the model that permits ties of varying strength: an edge with integral weight $w$ behaves as $w$ parallel edges. While, in this model, a pure Nash equilibrium still exists for decreasing patterns, we show that the task of computing one is $\mathsf{PLS}$-complete.
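As a small illustration of the first result, the following sketch runs sequential best-response dynamics for decreasing (threshold) patterns on a toy graph; the graph and thresholds are assumptions chosen for demonstration and are not taken from the paper.

```python
# Sequential best-response dynamics for decreasing patterns 1^t 0^* (toy example).
import networkx as nx

def best_response_dynamics(G, threshold, max_rounds=1000):
    """Each player acts iff fewer than `threshold[v]` of its neighbors act.
    For decreasing patterns this converges to a pure Nash equilibrium."""
    action = {v: False for v in G}
    for _ in range(max_rounds):
        changed = False
        for v in G:
            best = sum(action[u] for u in G.neighbors(v)) < threshold[v]
            if best != action[v]:
                action[v] = best
                changed = True
        if not changed:
            return action  # no player wants to deviate: pure Nash equilibrium
    return action

G = nx.cycle_graph(5)
print(best_response_dynamics(G, threshold={v: 1 for v in G}))
```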
Estimating the rigid transformation between two LiDAR scans through putative 3D correspondences is a typical point cloud registration paradigm. Current 3D feature matching approaches commonly produce numerous outlier correspondences, making outlier-robust registration techniques indispensable. Many recent studies have adopted the branch and bound (BnB) optimization framework to solve the correspondence-based point cloud registration problem globally and deterministically. Nonetheless, BnB-based methods are time-consuming when searching the entire 6-dimensional parameter space, since their computational complexity is exponential in the dimension of the solution domain. To enhance efficiency, existing works attempt to decouple the original problem with 6 degrees of freedom (DOF) into two 3-DOF sub-problems, thereby reducing the dimension of the parameter space. In contrast, our proposed approach introduces a novel pose decoupling strategy based on residual projections, effectively decomposing the original problem into three 2-DOF rotation search sub-problems. Subsequently, we employ a novel BnB-based search method to solve these sub-problems, achieving efficient and deterministic registration. Furthermore, our method can be adapted to address the challenging problem of simultaneous pose and correspondence registration (SPCR). Through extensive experiments conducted on synthetic and real-world datasets, we demonstrate that our proposed method outperforms state-of-the-art methods in terms of efficiency while simultaneously ensuring robustness.
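For readers unfamiliar with the BnB pattern, the following schematic sketch maximizes a Lipschitz objective over a 2D box by best-first branch and bound; it only illustrates the branch-prune-refine loop and does not implement the paper's residual-projection bounds or rotation parameterization.

```python
# Schematic best-first branch and bound over a 2D box (illustrative only; this
# uses a generic Lipschitz upper bound, not the paper's residual-projection bounds).
import heapq
import math

def bnb_maximize(f, L, box=((0.0, math.pi), (0.0, math.pi)), eps=1e-3):
    def radius(b):
        return 0.5 * math.hypot(b[0][1] - b[0][0], b[1][1] - b[1][0])
    def center(b):
        return ((b[0][0] + b[0][1]) / 2, (b[1][0] + b[1][1]) / 2)

    best_arg = center(box)
    best_val = f(*best_arg)
    heap = [(-(best_val + L * radius(box)), box)]  # max-heap on the upper bound
    while heap:
        neg_ub, b = heapq.heappop(heap)
        if -neg_ub <= best_val + eps:
            break  # no remaining box can improve the incumbent by more than eps
        c = center(b)
        if f(*c) > best_val:
            best_val, best_arg = f(*c), c
        (x0, x1), (y0, y1) = b
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        for child in (((x0, xm), (y0, ym)), ((xm, x1), (y0, ym)),
                      ((x0, xm), (ym, y1)), ((xm, x1), (ym, y1))):
            ub = f(*center(child)) + L * radius(child)
            heapq.heappush(heap, (-ub, child))
    return best_arg, best_val

# Toy 2-DOF objective with Lipschitz constant at most 2; maximum is 2 at (pi/2, 0).
print(bnb_maximize(lambda a, b: math.sin(a) + math.cos(b), L=2.0))
```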
Evaluating the importance of a network node is a crucial task in network science and graph data mining. The h-index is a popular centrality measure for this task; however, it still lacks an interpretation from a rigorous statistical perspective. Here we reveal the statistical nature of the h-index from the perspective of order statistics and obtain a new family of centrality indices by generalizing the h-index along this direction. Theoretical and empirical evidence shows that this statistical interpretation yields a general and versatile framework for quantifying the importance of a network node. Under this framework, many new centrality indices can be derived, some of which are more accurate and robust than the h-index. We believe that this research opens up new avenues for developing more effective indices for node importance quantification from a previously unexplored viewpoint.
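For reference, the sketch below computes the standard node-level h-index (the largest h such that a node has at least h neighbors of degree at least h); the generalized indices proposed in the paper replace this rule with other order statistics of the neighbor degrees, which is not reproduced here.

```python
# Node-level h-index: the largest h such that the node has at least h neighbors
# whose degree is at least h.
import networkx as nx

def node_h_index(G, v):
    degrees = sorted((G.degree(u) for u in G.neighbors(v)), reverse=True)
    h = 0
    for rank, deg in enumerate(degrees, start=1):
        if deg >= rank:
            h = rank
        else:
            break
    return h

G = nx.karate_club_graph()
print({v: node_h_index(G, v) for v in list(G)[:5]})
```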
In federated learning (FL), a set of local clients is coordinated by a global server and cooperatively trains a single model under privacy protection. Due to the multiple local updates and the isolated non-IID datasets, clients are prone to overfitting to their own local optima, which deviate strongly from the global objective and significantly undermine performance. Most previous works focus only on enhancing the consistency between the local and global objectives to alleviate this harmful client drift from the optimization perspective, and their performance deteriorates markedly under high heterogeneity. In this work, we propose a novel and general algorithm {\ttfamily FedSMOO}, which jointly considers the optimization and generalization targets to efficiently improve performance in FL. Concretely, {\ttfamily FedSMOO} adopts a dynamic regularizer to steer the local optima towards the global objective, which is meanwhile revised by the global Sharpness-Aware Minimization (SAM) optimizer to search for consistent flat minima. Our theoretical analysis indicates that {\ttfamily FedSMOO} achieves a fast $\mathcal{O}(1/T)$ convergence rate with a low generalization bound. Extensive numerical studies are conducted on real-world datasets to verify its superior efficiency and excellent generality.
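To make the SAM component concrete, here is a minimal sketch of one generic SAM step on a local client in PyTorch; FedSMOO's dynamic regularizer and the global revision of the perturbation are not reproduced, and the hyperparameter rho is illustrative.

```python
# Minimal sketch of one generic Sharpness-Aware Minimization (SAM) step
# (generic SAM only; FedSMOO's dynamic regularizer is omitted).
import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    # 1) Ascent step: perturb weights towards higher loss within an L2 ball of radius rho.
    loss = loss_fn(model(x), y)
    loss.backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    grad_norm = torch.sqrt(sum((g**2).sum() for g in grads)) + 1e-12
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(rho * g / grad_norm)
    # 2) Descent step: compute the gradient at the perturbed point, restore weights, update.
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.sub_(rho * g / grad_norm)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```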
Depression is a highly prevalent and serious mental illness that induces grave financial and societal ramifications. Depression detection is key for early intervention to mitigate these consequences. Such a high-stakes decision inherently necessitates interpretability, which most existing methods fall short of. To incorporate human expertise into this decision-making, safeguard trust from end users, and ensure algorithmic transparency, we develop an interpretable deep learning model: the Multi-Scale Temporal Prototype Network (MSTPNet). MSTPNet builds upon emerging prototype learning methods. In line with the medical practice of depression diagnosis, MSTPNet differs from existing prototype learning models in its capability to capture depressive symptoms and their temporal distribution, such as the frequency and persistence of their appearance. Extensive empirical analyses using real-world social media data show that MSTPNet outperforms state-of-the-art benchmarks in depression detection, with an F1-score of 0.851. Moreover, MSTPNet interprets its predictions by identifying which depressive symptoms a user presents and how long these symptoms last. We further conduct a user study to demonstrate its superiority over the benchmarks in interpretability. Methodologically, this study contributes to the extant literature a novel interpretable deep learning model for depression detection in social media. Our proposed method can be implemented in social media platforms to detect depression and its symptoms. Platforms can subsequently provide personalized online resources, such as educational and supportive videos and articles, or sources of treatment and social support for depressed patients.
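As a rough illustration of the prototype-matching idea (not MSTPNet's actual architecture), the sketch below scores per-post embeddings against learned prototype vectors with a ProtoPNet-style log activation; all shapes and tensors are hypothetical.

```python
# Generic prototype-similarity sketch in the spirit of prototype learning;
# MSTPNet's multi-scale temporal design and symptom prototypes are not reproduced.
import torch

def prototype_activations(embeddings, prototypes, eps=1e-4):
    """embeddings: (T, d) per-post features over time; prototypes: (P, d) learned vectors.
    Returns (T, P) similarities; the max over T indicates which prototype (symptom)
    appears, and the number of high activations hints at how persistently."""
    dists = torch.cdist(embeddings, prototypes)          # (T, P) L2 distances
    sims = torch.log((dists**2 + 1) / (dists**2 + eps))  # large when distance is small
    return sims

emb = torch.randn(30, 64)     # e.g., 30 posts embedded into 64 dimensions
protos = torch.randn(10, 64)  # e.g., 10 learned symptom prototypes
sims = prototype_activations(emb, protos)
print(sims.max(dim=0).values)  # strongest evidence for each prototype
```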
In this paper, we study the statistical efficiency of reinforcement learning in Mean-Field Control (MFC) and Mean-Field Games (MFG) with general function approximation. We introduce a new concept called the Mean-Field Model-Based Eluder Dimension (MBED), which subsumes a rich family of Mean-Field RL problems. Additionally, we propose algorithms based on Optimistic Maximum Likelihood Estimation, which can return an $\epsilon$-optimal policy for MFC or an $\epsilon$-Nash Equilibrium policy for MFG, with sample complexity polynomial in the relevant parameters and independent of the number of states, actions, and agents. Notably, our results only require a mild Lipschitz continuity assumption on the transition dynamics and avoid the strong structural assumptions made in previous work. Finally, in the tabular setting, given access to a generative model, we establish an exponential lower bound for the MFC setting, while providing a novel sample-efficient model elimination algorithm to approximate equilibria in the MFG setting. Our results reveal a fundamental separation between single-agent RL, MFC, and MFG from the perspective of sample efficiency.
In oncology, phase II studies are crucial to clinical development plans, as they identify potent agents with sufficient activity to continue development in subsequent phase III trials. Traditionally, phase II studies are single-arm studies with a short-term treatment efficacy endpoint. However, drug safety is also an important consideration. Thus, in the context of such multiple-outcome designs, Bayesian monitoring strategies based on predictive probabilities have been developed to assess whether a clinical trial will show a conclusive result at the planned end of the study. In this paper, we propose a new, simple index vector for summarizing results that cannot be captured by existing strategies. Specifically, at each interim monitoring time point, we calculate the Bayesian predictive probability using our new index and use it to make a go/no-go decision. Finally, simulation studies are performed to evaluate the operating characteristics of the design. This analysis demonstrates that the proposed method makes appropriate interim go/no-go decisions.
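For context, the following sketch computes the posterior predictive probability of final-analysis success for a single binary efficacy endpoint under a Beta prior and applies a go/no-go threshold; the paper's index vector for jointly handling efficacy and safety is not reproduced, and all thresholds below are illustrative.

```python
# Predictive-probability interim go/no-go sketch for one binary efficacy endpoint
# with a Beta(a, b) prior (thresholds and design parameters are illustrative).
from scipy.stats import betabinom, beta

def predictive_probability(responses, n_interim, n_max, p0=0.3, a=1, b=1, conf=0.95):
    """Probability that the final analysis will conclude that the response rate
    exceeds p0 with posterior probability at least `conf`."""
    n_remaining = n_max - n_interim
    pp = 0.0
    for future in range(n_remaining + 1):
        # Posterior predictive probability of observing `future` additional responses.
        w = betabinom.pmf(future, n_remaining, a + responses, b + n_interim - responses)
        # Would the final-analysis posterior declare success?
        success = beta.sf(p0, a + responses + future, b + n_max - responses - future) >= conf
        pp += w * success
    return pp

pp = predictive_probability(responses=10, n_interim=20, n_max=40)
print("go" if pp >= 0.8 else "no-go", round(pp, 3))
```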
We focus on the task of learning a single-index model $\sigma(w^\star \cdot x)$ with respect to the isotropic Gaussian distribution in $d$ dimensions. Prior work has shown that the sample complexity of learning $w^\star$ is governed by the information exponent $k^\star$ of the link function $\sigma$, defined as the index of the first nonzero Hermite coefficient of $\sigma$. Ben Arous et al. (2021) showed that $n \gtrsim d^{k^\star-1}$ samples suffice for learning $w^\star$ and that this is tight for online SGD. However, the correlational statistical query (CSQ) lower bound for gradient-based methods only shows that $n \gtrsim d^{k^\star/2}$ samples are necessary. In this work, we close the gap between the upper and lower bounds by showing that online SGD on a smoothed loss learns $w^\star$ with $n \gtrsim d^{k^\star/2}$ samples. We also draw connections to statistical analyses of tensor PCA and to the implicit regularization effects of minibatch SGD on empirical losses.
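Since the information exponent drives all of the rates above, the following sketch estimates $k^\star$ numerically as the index of the first nonzero probabilists' Hermite coefficient of $\sigma$; the constant term is excluded by convention, and the tolerance and quadrature degree are assumptions.

```python
# Numerical sketch of the information exponent k*: the index of the first nonzero
# probabilists' Hermite coefficient of the link function sigma, estimated by
# Gauss-Hermite quadrature (constant term excluded; tolerance is an assumption).
import numpy as np

def information_exponent(sigma, max_k=10, tol=1e-8, quad_deg=100):
    # Nodes/weights for the weight exp(-x^2/2); divide by sqrt(2*pi) to get E[.] under N(0,1).
    x, w = np.polynomial.hermite_e.hermegauss(quad_deg)
    for k in range(1, max_k + 1):
        he_k = np.polynomial.hermite_e.hermeval(x, [0] * k + [1])  # He_k(x)
        coeff = np.sum(w * sigma(x) * he_k) / np.sqrt(2 * np.pi)   # E[sigma(Z) He_k(Z)]
        if abs(coeff) > tol:
            return k
    return None

print(information_exponent(np.tanh))                 # odd link: k* = 1
print(information_exponent(lambda z: z**3 - 3 * z))  # He_3 link: k* = 3
```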