In this note, we investigate the robustness of Nash equilibria (NE) in multi-player aggregative games with coupling constraints. Many algorithms exist for computing an NE of an aggregative game when the aggregator is known. When the coupling parameters are affected by uncertainty, robust NE need to be computed. We consider a scenario where the players' weights in the aggregator are unknown, making the aggregator effectively a black box. We pursue a suitable learning approach to estimate the unknown aggregator via an inverse variational inequality-based relationship. We then use the estimated aggregator to reconstruct the game and obtain first-order conditions for robust NE in the worst case. Furthermore, we characterize the generalization property of the learning methodology via an upper bound on the violation probability. Simulation experiments show the effectiveness of the proposed inverse learning approach.
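For context, the standard variational-inequality characterization of an NE that such inverse approaches build on can be written as follows; the weighted-sum aggregator σ and the weights a_i are illustrative notation, not taken from the paper:

```latex
% NE as a solution of a variational inequality over the coupled feasible set X:
\text{find } x^{\star} \in \mathcal{X} \ \text{ s.t. } \ F(x^{\star})^{\top}(x - x^{\star}) \ge 0 \quad \forall x \in \mathcal{X},
\qquad
F(x) = \bigl(\nabla_{x_i} J_i\bigl(x_i, \sigma(x)\bigr)\bigr)_{i=1}^{N},
\qquad
\sigma(x) = \textstyle\sum_{i=1}^{N} a_i x_i .
```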
In this paper, we introduce an approach for improving the early exploration of grey-box fuzzing campaigns, allowing the fuzzer to reach interesting coverage earlier. To do this, the approach leverages information from the system under test's (SUT's) control flow graph to decide which inputs are likely to discover the most coverage when mutated.
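One way to make this concrete (a minimal sketch, not the paper's implementation): score each queued input by how many not-yet-covered CFG edges are reachable from the basic blocks it already executes, and mutate the highest-scoring input first. The names `cfg`, `covered_edges`, and the `path` attribute of a seed are illustrative assumptions.

```python
from collections import deque

def uncovered_reachable_edges(cfg, start_blocks, covered_edges):
    """Count CFG edges reachable from an input's execution path
    that have not been exercised yet (a rough 'potential' score)."""
    seen, frontier, score = set(start_blocks), deque(start_blocks), 0
    while frontier:
        block = frontier.popleft()
        for succ in cfg.get(block, ()):
            if (block, succ) not in covered_edges:
                score += 1
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    return score

def pick_next_seed(cfg, seeds, covered_edges):
    """Choose the queued input whose path borders the most unexplored CFG edges."""
    return max(seeds, key=lambda s: uncovered_reachable_edges(cfg, s.path, covered_edges))
```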
When they occur, azimuthal thermoacoustic oscillations can detrimentally affect the safe operation of gas turbines and aeroengines. We develop a real-time digital twin of azimuthal thermoacoustics of a hydrogen-based annular combustor. The digital twin seamlessly combines two sources of information about the system: (i) a physics-based low-order model; and (ii) raw and sparse experimental data from microphones, which contain both aleatoric noise and turbulent fluctuations. First, we derive a low-order thermoacoustic model for azimuthal instabilities, which is deterministic. Second, we propose a real-time data assimilation framework to infer the acoustic pressure, the physical parameters, and the model and measurement biases simultaneously. This is the bias-regularized ensemble Kalman filter (r-EnKF), for which we find an analytical solution that solves the optimization problem. Third, we propose a reservoir computer, which infers both the model bias and the measurement bias to close the assimilation equations. Fourth, we build a real-time digital twin of the azimuthal thermoacoustic dynamics of a laboratory hydrogen-based annular combustor for a variety of equivalence ratios. We find that the real-time digital twin (i) autonomously predicts azimuthal dynamics, in contrast to bias-unregularized methods; (ii) uncovers the physical acoustic pressure from the raw data, i.e., it acts as a physics-based filter; (iii) is a time-varying parameter system, which generalizes existing models that have constant parameters and capture only slowly varying variables. The digital twin generalizes across all equivalence ratios considered, bridging a gap left by existing models. This work opens new opportunities for real-time digital twinning of multi-physics problems.
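The abstract does not give the r-EnKF formulas, so the following is only a minimal sketch of the plain stochastic ensemble Kalman filter analysis step on which such a framework is built; the bias-regularization term and the reservoir-computer bias estimates would enter on top of this update, and names such as `obs_operator` are illustrative. Joint state-and-parameter inference is obtained by augmenting the state vector with the physical parameters.

```python
import numpy as np

def enkf_analysis(ensemble, observations, obs_operator, obs_noise_cov, rng=None):
    """One stochastic EnKF analysis step (plain EnKF; the paper's r-EnKF adds a
    bias-regularization term on top of this). `ensemble` is (state_dim, members);
    `obs_operator` maps the ensemble matrix to (obs_dim, members)."""
    rng = np.random.default_rng() if rng is None else rng
    m = ensemble.shape[1]                              # ensemble size
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    Y = obs_operator(ensemble)
    Yp = Y - Y.mean(axis=1, keepdims=True)
    # Kalman gain from ensemble covariances
    Pyy = Yp @ Yp.T / (m - 1) + obs_noise_cov
    Pxy = X @ Yp.T / (m - 1)
    K = Pxy @ np.linalg.solve(Pyy, np.eye(Pyy.shape[0]))
    # Perturbed observations, one realization per ensemble member
    d = observations[:, None] + rng.multivariate_normal(
        np.zeros(len(observations)), obs_noise_cov, size=m).T
    return ensemble + K @ (d - Y)
```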
We present a new approach to stabilizing high-order Runge-Kutta discontinuous Galerkin (RKDG) schemes using weighted essentially non-oscillatory (WENO) reconstructions in the context of hyperbolic conservation laws. In contrast to RKDG schemes that overwrite finite element solutions with WENO reconstructions, our approach employs the reconstruction-based smoothness sensor presented by Kuzmin and Vedral (J. Comput. Phys. 487:112153, 2023) to control the amount of added numerical dissipation. Incorporating a dissipation-based WENO stabilization term into a discontinuous Galerkin (DG) discretization, the proposed methodology achieves high-order accuracy while effectively capturing discontinuities in the solution. As such, our approach offers an attractive alternative to WENO-based slope limiters for DG schemes. The reconstruction procedure that we use performs Hermite interpolation on stencils composed of a mesh cell and its neighboring cells. The amount of numerical dissipation is determined by the relative differences between the partial derivatives of reconstructed candidate polynomials and those of the underlying finite element approximation. The employed smoothness sensor takes all derivatives into account to properly assess the local smoothness of a high-order DG solution. Numerical experiments demonstrate the ability of our scheme to capture discontinuities sharply. Optimal convergence rates are obtained for all polynomial degrees.
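As an illustrative and deliberately simplified 1D example of a derivative-based smoothness sensor of this kind, the sketch below compares all derivatives of the DG polynomial with those of a reconstructed candidate via relative differences; it is not the exact formula of Kuzmin and Vedral, and the evaluation point and scaling are assumptions for illustration.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def smoothness_sensor_1d(dg_coeffs, rec_coeffs, x_center, eps=1e-12):
    """Illustrative 1D sensor: relative differences between derivatives of the
    DG polynomial and a reconstructed candidate, evaluated at the cell center.
    Returns a value in [0, 1]; larger values indicate rougher (less smooth) data,
    and hence call for more numerical dissipation."""
    degree = max(len(dg_coeffs), len(rec_coeffs)) - 1
    sensor = 0.0
    for k in range(1, degree + 1):
        d_dg = P.polyval(x_center, P.polyder(dg_coeffs, k)) if k < len(dg_coeffs) else 0.0
        d_rec = P.polyval(x_center, P.polyder(rec_coeffs, k)) if k < len(rec_coeffs) else 0.0
        sensor = max(sensor, abs(d_rec - d_dg) / (abs(d_rec) + abs(d_dg) + eps))
    return sensor
```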
Purpose: Paranasal anomalies, frequently identified in routine radiological screenings, exhibit diverse morphological characteristics. Due to this diversity, supervised learning methods require large labelled datasets exhibiting diverse anomaly morphologies. Self-supervised learning (SSL) can be used to learn representations from unlabelled data. However, there are no SSL methods designed for the downstream task of classifying paranasal anomalies in the maxillary sinus (MS). Methods: Our approach uses a 3D Convolutional Autoencoder (CAE) trained in an unsupervised anomaly detection (UAD) framework. Initially, we train the 3D CAE to reduce reconstruction errors when reconstructing normal MS images. Then, this CAE is applied to an unlabelled dataset to generate coarse anomaly locations by creating residual MS images. Following this, a 3D Convolutional Neural Network (CNN) reconstructs these residual images, which forms our SSL task. Lastly, we fine-tune the encoder part of the 3D CNN on a labelled dataset of normal and anomalous MS images. Results: The proposed SSL technique exhibits superior performance compared to existing generic self-supervised methods, especially in scenarios with limited annotated data. When trained on just 10% of the annotated dataset, our method achieves an Area Under the Precision-Recall Curve (AUPRC) of 0.79 for the downstream classification task. This performance surpasses other methods, with BYOL attaining an AUPRC of 0.75, SimSiam at 0.74, SimCLR at 0.73 and Masked Autoencoding using SparK at 0.75. Conclusion: A self-supervised learning approach that inherently focuses on localizing paranasal anomalies proves advantageous, particularly when the downstream task involves differentiating normal from anomalous maxillary sinuses. Our code is available at //github.com/mtec-tuhh/self-supervised-paranasal-anomaly
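A minimal PyTorch sketch of the first two stages, assuming a generic 3D convolutional autoencoder; the layer sizes and strides are illustrative, not the paper's architecture. The resulting residual volumes would then serve as reconstruction targets for a second 3D CNN in the SSL stage.

```python
import torch
import torch.nn as nn

class CAE3D(nn.Module):
    """Minimal 3D convolutional autoencoder (illustrative, not the paper's exact model)."""
    def __init__(self, channels=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def residual_volume(cae, volume):
    """Coarse anomaly localization: difference between an MS volume and its
    reconstruction by a CAE trained only on normal MS volumes."""
    with torch.no_grad():
        return (volume - cae(volume)).abs()
```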
Logistic regression is widely used in many areas of knowledge. Several works compare the performance of lasso and maximum likelihood estimation in logistic regression. However, some of these works do not perform simulation studies, and the remaining ones do not consider scenarios in which the ratio of the number of covariates to the sample size is high. In this work, we compare the discrimination performance of lasso and maximum likelihood estimation in logistic regression using simulation studies and applications. Variable selection is done by lasso and, when maximum likelihood estimation is used, by stepwise selection. We consider a wide range of values for the ratio of the number of covariates to the sample size. The main conclusion of the work is that lasso has better discrimination performance than maximum likelihood estimation when this ratio is high.
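A minimal simulation of one such high-ratio scenario using scikit-learn (the penalty strength C and the data-generation settings are arbitrary illustrative choices, not the paper's design; `penalty=None` requires a recent scikit-learn, older versions use `penalty='none'`):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# One simulated scenario with a high covariates-to-sample-size ratio (p/n = 80/100).
X, y = make_classification(n_samples=200, n_features=80, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=100, random_state=0)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)
mle = LogisticRegression(penalty=None, solver="lbfgs", max_iter=5000).fit(X_tr, y_tr)

# Discrimination measured by the area under the ROC curve on held-out data.
for name, model in [("lasso", lasso), ("MLE", mle)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```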
In this report, we present the latest model of the Gemini family, Gemini 1.5 Pro, a highly compute-efficient multimodal mixture-of-experts model capable of recalling and reasoning over fine-grained information from millions of tokens of context, including multiple long documents and hours of video and audio. Gemini 1.5 Pro achieves near-perfect recall on long-context retrieval tasks across modalities, improves the state-of-the-art in long-document QA, long-video QA and long-context ASR, and matches or surpasses Gemini 1.0 Ultra's state-of-the-art performance across a broad set of benchmarks. Studying the limits of Gemini 1.5 Pro's long-context ability, we find continued improvement in next-token prediction and near-perfect retrieval (>99%) up to at least 10M tokens, a generational leap over existing models such as Claude 2.1 (200k) and GPT-4 Turbo (128k). Finally, we highlight surprising new capabilities of large language models at the frontier; when given a grammar manual for Kalamang, a language with fewer than 200 speakers worldwide, the model learns to translate English to Kalamang at a similar level to a person who learned from the same content.
In this work, we present an efficient way to decouple multicontinuum problems. To construct decoupled schemes, we propose Implicit-Explicit time approximations in general form and study them for fine-scale and coarse-scale space approximations. We use a finite-volume method for the fine-scale approximation, and the nonlocal multicontinuum (NLMC) method to construct an accurate and physically meaningful coarse-scale approximation based on defined macroscale variables. The multiscale basis functions are constructed in local domains by solving constrained energy minimization problems and projecting the system onto the coarse grid. The resulting basis functions have exponential decay properties and lead to an accurate approximation on the coarse grid. We construct a fully Implicit time approximation for the semi-discrete systems arising after fine-scale and coarse-scale space approximations. We investigate the stability of two- and three-level schemes for fully Implicit and Implicit-Explicit time approximations for multicontinuum problems in fractured porous media. We show that combining the decoupling technique with the multiscale approximation leads to an accurate and efficient solver for multicontinuum problems.
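To illustrate the decoupling idea on a generic two-continuum system (not the paper's fine-scale finite-volume or NLMC discretization), one Implicit-Explicit step can treat each continuum's own operator implicitly and the exchange term explicitly, so the continua are advanced by separate, smaller linear solves:

```python
import numpy as np

def imex_step(u1, u2, A1, A2, Q, dt):
    """One decoupled Implicit-Explicit step for a generic two-continuum system
        du1/dt = -A1 u1 + Q (u2 - u1),   du2/dt = -A2 u2 + Q (u1 - u2).
    The within-continuum operators A1, A2 are treated implicitly; the exchange
    (coupling) term is taken from the previous time level, so each continuum
    is advanced independently."""
    n = len(u1)
    I = np.eye(n)
    rhs1 = u1 + dt * Q @ (u2 - u1)
    rhs2 = u2 + dt * Q @ (u1 - u2)
    u1_new = np.linalg.solve(I + dt * A1, rhs1)
    u2_new = np.linalg.solve(I + dt * A2, rhs2)
    return u1_new, u2_new
```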
We consider the regularized Poisson Non-negative Matrix Factorization (NMF) problem, encompassing various regularization terms such as Lipschitz and relatively smooth functions, alongside linear constraints. This problem is highly relevant in numerous machine learning applications, particularly within the domain of physical linear unmixing problems. A notable challenge arises from the main loss term in the Poisson NMF problem being a KL divergence, which is non-Lipschitz, rendering traditional gradient descent-based approaches inefficient. In this contribution, we explore the use of Block Successive Upper-bound Minimization (BSUM) to overcome this challenge. We build appropriate majorizing functions for Lipschitz and relatively smooth functions, and show how to introduce linear constraints into the problem. This results in two novel algorithms for regularized Poisson NMF. We conduct numerical simulations to showcase the effectiveness of our approach.
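For reference, the classical multiplicative majorize-minimize updates for the unregularized Poisson (KL) NMF subproblem look as follows; they illustrate the block-wise surrogate-minimization idea that BSUM generalizes, whereas the paper's algorithms additionally handle the regularizers and linear constraints.

```python
import numpy as np

def kl_nmf(V, rank, n_iter=200, eps=1e-12, rng=None):
    """Multiplicative majorize-minimize updates for unregularized Poisson (KL) NMF,
    V ≈ W H with W, H >= 0. Each block update minimizes a separable majorizer of
    the KL divergence with the other block held fixed."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ np.ones((m, n)) + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (np.ones((m, n)) @ H.T + eps)
    return W, H
```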
Deep neural network based recommendation systems have achieved great success as information filtering techniques in recent years. However, since model training from scratch requires sufficient data, deep learning-based recommendation methods still face the bottlenecks of insufficient data and computational inefficiency. Meta-learning, as an emerging paradigm that learns to improve the learning efficiency and generalization ability of algorithms, has shown its strength in tackling the data sparsity issue. Recently, a growing number of studies on deep meta-learning based recommendation systems have emerged, aiming to improve performance in recommendation scenarios where available data is limited, e.g., user cold-start and item cold-start. Therefore, this survey provides a timely and comprehensive overview of current deep meta-learning based recommendation methods. Specifically, we propose a taxonomy that discusses existing methods according to recommendation scenarios, meta-learning techniques, and meta-knowledge representations, which together delineate the design space for meta-learning based recommendation methods. For each recommendation scenario, we further discuss technical details of how existing methods apply meta-learning to improve the generalization ability of recommendation models. Finally, we point out several limitations of current research and highlight promising directions for future work in this area.
Time Series Classification (TSC) is an important and challenging problem in data mining. With the increase in time series data availability, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open-source deep learning framework to the TSC community, in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we present the most exhaustive study of DNNs for TSC to date.
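One widely used baseline in this literature is the Fully Convolutional Network (FCN) for univariate TSC; the sketch below is an illustrative PyTorch version (not the study's released implementation), using the commonly cited 128-256-128 filter configuration with global average pooling.

```python
import torch
import torch.nn as nn

class FCN(nn.Module):
    """Fully Convolutional Network for univariate time series classification
    (illustrative sketch of a common baseline architecture)."""
    def __init__(self, n_classes, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 128, kernel_size=8, padding="same"),
            nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding="same"),
            nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=3, padding="same"),
            nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                    # x: (batch, channels, series_length)
        h = self.features(x).mean(dim=-1)    # global average pooling over time
        return self.head(h)
```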