In this work we present two new numerical schemes to approximate the Navier-Stokes-Cahn-Hilliard system with degenerate mobility, using finite differences in time and finite elements in space. The proposed schemes are conservative, energy-stable, and approximately preserve the maximum principle (the amount of the phase variable lying outside the interval [0,1] goes to zero in terms of a truncation parameter). Additionally, we present several numerical results to illustrate the accuracy and good behavior of the proposed schemes, as well as a comparison with the behavior of the Navier-Stokes-Cahn-Hilliard model with constant mobility.
Consider an open quantum system with (discrete-time) Markovian dynamics. Our task is to store information in the system in such a way that it can be retrieved perfectly, even after the system is left to evolve for an arbitrarily long time. We show that this is impossible for classical (resp. quantum) information precisely when the dynamics is mixing (resp. asymptotically entanglement breaking). Furthermore, we provide tight universal upper bounds on the minimum time after which any such dynamics `scrambles' the encoded information beyond the point of perfect retrieval. On the other hand, for dynamics that are not of this kind, we show that information must be encoded inside the peripheral space associated with the dynamics in order for it to be perfectly recoverable at any time in the future. This allows us to derive explicit formulas for the maximum amount of information that can be protected from noise in terms of the structure of the peripheral space of the dynamics.
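As a toy illustration of how mixing dynamics scrambles classical information (not the general construction or bounds of the text), the sketch below encodes one classical bit in two orthogonal qubit states and applies a depolarizing channel repeatedly; all parameters are hypothetical. The trace distance between the two encodings, which controls how well the bit can be retrieved, decays to zero:

```python
import numpy as np

def depolarize(rho, p):
    # Depolarizing channel: with probability p, replace by the maximally mixed state
    return (1 - p) * rho + p * np.eye(2) / 2

def trace_distance(a, b):
    # Trace distance = half the sum of |eigenvalues| of the (Hermitian) difference
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

# Encode one classical bit in two orthogonal states
rho0 = np.diag([1.0, 0.0]).astype(complex)
rho1 = np.diag([0.0, 1.0]).astype(complex)

for _ in range(50):                 # let the mixing dynamics act
    rho0 = depolarize(rho0, 0.2)
    rho1 = depolarize(rho1, 0.2)

dist = trace_distance(rho0, rho1)   # distinguishability of the two encodings
```

Here the channel has a unique fixed point (the maximally mixed state) and a trivial peripheral space, so no encoding survives; the distance shrinks by a factor of 0.8 per step.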
In this work, we present a model order reduction technique for nonlinear structures assembled from components. The reduced order model is constructed by reducing the substructures with proper orthogonal decomposition (POD) and connecting them by a mortar-tied contact formulation. The snapshots for the substructure projection matrices are computed on the substructure level using a random sampling procedure based on a parametrization of the boundary conditions. To reduce the computational effort of the snapshot computation, full-order simulations of the substructures are only computed when the error of the reduced solution exceeds a threshold. In numerical examples, we show the accuracy and efficiency of the method for nonlinear problems involving material and geometric nonlinearity as well as non-matching meshes. We are able to predict solutions of systems that were not included in our snapshot computations.
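The POD step can be sketched as follows; the snapshot data, mode count, and energy threshold are illustrative assumptions, not the paper's setup. Snapshots are collected column-wise, an SVD extracts the dominant modes, and the basis size is chosen by an energy criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snapshot matrix: substructure solutions spanned by a few modes,
# sampled under randomly parametrized boundary conditions (illustrative only)
x = np.linspace(0, 1, 200)
modes = np.column_stack([np.sin((k + 1) * np.pi * x) for k in range(3)])
coeffs = rng.standard_normal((3, 20))
snapshots = modes @ coeffs + 1e-3 * rng.standard_normal((200, 20))

# POD: left singular vectors, ordered by energy content
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # smallest rank with 99.99% energy
V = U[:, :r]                                   # substructure projection matrix

# Relative projection error of one snapshot onto the reduced basis
u = snapshots[:, 0]
err = np.linalg.norm(u - V @ (V.T @ u)) / np.linalg.norm(u)
```

In the assembled method each substructure gets its own basis `V`, and the mortar-tied coupling then connects the reduced substructures.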
We present a new hybrid semi-implicit finite volume / finite element numerical scheme for the solution of incompressible and weakly compressible media. From the continuum mechanics model proposed by Godunov, Peshkov and Romenski (GPR), we derive the incompressible GPR formulation as well as a weakly compressible GPR system. As for the original GPR model, the new formulations are able to describe different media, from elastoplastic solids to viscous fluids, depending on the values set for the model's relaxation parameters. Then, we propose a new numerical method for the solution of both models based on the splitting of the original systems into three subsystems: one containing the convective part and non-conservative products, a second subsystem for the source terms of the distortion tensor and heat flux equations and, finally, a pressure subsystem. In the first stage of the algorithm, the transport subsystem is solved by employing an explicit finite volume method, while the source terms are solved implicitly. Next, the pressure subsystem is implicitly discretised using finite elements. Within this methodology, unstructured grids are employed, with the pressure defined on the primal grid and the remaining variables computed on the dual grid. To evaluate the performance of the proposed scheme, a numerical convergence analysis is carried out, which confirms the second order of accuracy in space. A wide range of benchmarks is reproduced for the incompressible and weakly compressible cases, considering both solid and fluid media. These results demonstrate the good behaviour and robustness of the proposed scheme in a variety of scenarios and conditions.
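The flavour of such a splitting (an explicit finite-volume transport stage followed by an implicit stage for the stiff part) can be illustrated on a much simpler 1D advection-diffusion model; the grid, parameters, and upwind flux below are illustrative assumptions, not the GPR scheme itself:

```python
import numpy as np

# 1D advection-diffusion u_t + a u_x = nu u_xx, solved by operator splitting:
# an explicit upwind (finite-volume) transport stage + an implicit diffusion stage.
N, L = 100, 1.0
dx = L / N
a, nu = 1.0, 0.05
dt = 0.5 * dx / a                       # CFL restriction from the explicit stage only
x = (np.arange(N) + 0.5) * dx
u = np.exp(-100 * (x - 0.3)**2)         # initial pulse

# Implicit stage matrix (periodic): (I - dt*nu*D2) u_new = u_star
D2 = (np.roll(np.eye(N), 1, 0) - 2 * np.eye(N) + np.roll(np.eye(N), -1, 0)) / dx**2
A = np.eye(N) - dt * nu * D2

mass0 = u.sum() * dx
for _ in range(200):
    u = u - dt * a * (u - np.roll(u, 1)) / dx   # stage 1: explicit upwind transport
    u = np.linalg.solve(A, u)                   # stage 2: implicit diffusion
mass = u.sum() * dx
```

Both stages are conservative (column sums of `D2` vanish), so the total mass is preserved to solver precision, and the implicit treatment of diffusion removes its stability restriction on `dt`.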
In this work, we propose a balanced multi-component and multi-layer neural network (MMNN) structure to approximate functions with complex features, achieving both accuracy and efficiency in terms of degrees of freedom and computational cost. The main idea is a "divide-and-conquer" strategy: a complex function is handled through a multi-layer decomposition into multiple components, each of which can be approximated effectively by a single-layer network. Although only a simple modification of fully connected neural networks (FCNNs) or multi-layer perceptrons (MLPs) through the introduction of balanced multi-component structures, MMNNs achieve a significant reduction in training parameters, a much more efficient training process, and much improved accuracy compared to FCNNs or MLPs. Extensive numerical experiments are presented to illustrate the effectiveness of MMNNs in approximating highly oscillatory functions and their automatic adaptivity in capturing localized features.
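A loose sketch of the multi-component idea, not the MMNN architecture itself, is to fit an oscillatory target as a sum of cheap single-layer components, each trained on the remaining residual; the widths, scales, and random-feature training below are all hypothetical simplifications:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 400)[:, None]
y = np.sin(4 * np.pi * x[:, 0])            # oscillatory target function

def single_layer_component(x, residual, width=30, scale=10.0):
    # One small single-layer (random-feature) network fitted by least squares
    W = scale * rng.standard_normal((1, width))
    b = rng.standard_normal(width)
    H = np.tanh(x @ W + b)
    c, *_ = np.linalg.lstsq(H, residual, rcond=None)
    return H @ c

# Multi-component decomposition: each component handles the remaining residual
approx = np.zeros_like(y)
for _ in range(6):
    approx += single_layer_component(x, y - approx)

rel_err = np.linalg.norm(y - approx) / np.linalg.norm(y)
```

Each component is individually cheap; their sum captures the oscillation that a single small network would struggle with, which is the "divide-and-conquer" intuition.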
Machine Learning (ML) in low-data settings remains an underappreciated yet crucial problem. Data augmentation methods that increase the sample size of datasets needed for ML are therefore key to unlocking the transformative potential of ML in data-deprived regions and domains. Unfortunately, a limited training set constrains traditional tabular synthetic data generators in their ability to generate the large and diverse augmented datasets needed for ML tasks. To address this challenge, we introduce CLLM, which leverages the prior knowledge of Large Language Models (LLMs) for data augmentation in the low-data regime. However, as with any generative model, not all the data generated by LLMs will improve downstream utility. Consequently, we introduce a principled curation mechanism that leverages learning dynamics, coupled with confidence and uncertainty metrics, to obtain a high-quality dataset. Empirically, on multiple real-world datasets, we demonstrate the superior performance of CLLM in the low-data regime compared to conventional generators. Additionally, we provide insights into the LLM generation and curation mechanism, shedding light on the features that enable them to output high-quality augmented datasets.
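A minimal sketch of confidence/uncertainty-based curation, assuming a toy dataset and a cheap stand-in learner rather than CLLM's actual pipeline: an ensemble trained on bootstrap resamples of the small real dataset scores each synthetic candidate, and only confident, low-disagreement samples are kept:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny real dataset: two Gaussian classes (stand-in for the low-data regime)
X_real = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y_real = np.array([0] * 20 + [1] * 20)

# Hypothetical generated candidates: the first half on-manifold, the rest noisy
X_syn = np.vstack([rng.normal(-2, 1, (50, 2)), rng.uniform(-6, 6, (50, 2))])
y_syn = np.zeros(100, dtype=int)          # all candidates claim label 0

def centroid_predict_proba(Xtr, ytr, X):
    # Soft nearest-centroid classifier used as a cheap learner for curation
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return np.exp(-d1) / (np.exp(-d0) + np.exp(-d1))   # P(class 1)

# Ensemble over bootstrap resamples -> confidence and disagreement (uncertainty)
probs = []
for _ in range(20):
    idx = rng.integers(0, len(X_real), len(X_real))
    probs.append(centroid_predict_proba(X_real[idx], y_real[idx], X_syn))
probs = np.array(probs)
p_label = np.where(y_syn == 1, probs.mean(0), 1 - probs.mean(0))  # label confidence
uncertainty = probs.std(0)

keep = (p_label > 0.8) & (uncertainty < 0.1)   # curated subset
```

The on-manifold candidates survive curation at a much higher rate than the noisy ones, which is the effect the curation mechanism targets.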
In this paper, we propose a new algorithm, the irrational-window-filter projection method (IWFPM), for solving arbitrary-dimensional global quasiperiodic systems. Building on the projection method (PM), IWFPM exploits the concentrated distribution of Fourier coefficients to filter out the relevant spectral points using an irrational window. Moreover, a corresponding index-shift transform is designed to make the Fast Fourier Transform applicable. A corresponding error analysis at the function approximation level is also given. We apply IWFPM to 1D, 2D, and 3D quasiperiodic Schr\"odinger eigenproblems to demonstrate its accuracy and efficiency. IWFPM exhibits a significant computational advantage over PM for both extended and localized quantum states. Furthermore, the widespread occurrence of such concentrated spectral distributions endows IWFPM with significant potential for broader applications to quasiperiodic systems.
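The underlying observation, that a small set of dominant Fourier coefficients suffices to represent such signals accurately, can be illustrated generically; the magnitude-based filter below is a stand-in for, not a reproduction of, the paper's irrational window:

```python
import numpy as np

N = 1024
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
# Signal with incommensurate frequencies (a crude 1D stand-in for quasiperiodicity)
f = np.cos(x) + 0.5 * np.cos(8 * np.sqrt(2) * x)

F = np.fft.fft(f) / N
# Filter: keep only the k largest-magnitude spectral points, discard the rest
k = 100
order = np.argsort(np.abs(F))[::-1]
F_filt = np.zeros_like(F)
F_filt[order[:k]] = F[order[:k]]
f_rec = np.real(np.fft.ifft(F_filt) * N)

rel_err = np.linalg.norm(f - f_rec) / np.linalg.norm(f)
```

Fewer than 10% of the spectral points reproduce the signal with small relative error, which is the kind of concentration the irrational window is designed to exploit.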
With the wide application of Artificial Intelligence (AI), it has become particularly important to make the decisions of AI systems explainable and transparent. In this paper, we propose a new Explainable Artificial Intelligence (XAI) method called ShapG (Explanations based on Shapley value for Graphs) for measuring feature importance. ShapG is a model-agnostic global explanation method. In the first stage, it defines an undirected graph based on the dataset, where nodes represent features and edges are added based on the correlation coefficients between features. In the second stage, it calculates an approximated Shapley value by sampling the data while taking this graph structure into account. The sampling approach of ShapG allows feature importance to be calculated efficiently, i.e., with reduced computational complexity. A comparison of ShapG with other existing XAI methods shows that it provides more accurate explanations on the two examined datasets. We also compared ShapG with other XAI methods based on cooperative game theory in terms of running time, and the results show that ShapG exhibits a clear advantage, further demonstrating its efficiency. In addition, extensive experiments demonstrate the wide applicability of the ShapG method for explaining complex models. We find ShapG an important tool for improving the explainability and transparency of AI systems and believe it can be widely used in various fields.
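A minimal sketch of sampled Shapley values for feature importance, using the R^2 of a least-squares fit as the coalition value function; this plain permutation-sampling version omits ShapG's graph-restricted sampling and runs on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 300, 3
X = rng.standard_normal((n, d))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(n)  # X[:, 2] irrelevant

def value(S):
    # Value of a feature coalition S: R^2 of a least-squares fit using only S
    if not S:
        return 0.0
    A = X[:, sorted(S)]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return 1.0 - (y - A @ coef).var() / y.var()

# Monte Carlo Shapley values via sampled permutations of the features
phi = np.zeros(d)
n_perm = 200
for _ in range(n_perm):
    S, prev = set(), 0.0
    for j in rng.permutation(d):
        S.add(j)
        val = value(S)
        phi[j] += val - prev      # marginal contribution of feature j
        prev = val
phi /= n_perm
```

The estimated importances correctly rank the strong feature above the weak one and the weak one above the irrelevant one; ShapG's graph structure is a device for reducing how many such coalition evaluations are needed.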
User experience on mobile devices is constrained by limited battery capacity and processing power, even as 6G technology advancements rapidly accelerate the evolution of mobile technology. Mobile edge computing (MEC) offers a solution by offloading computationally intensive tasks to edge cloud servers, reducing battery drain compared to local processing. The integrated sensing and communication capabilities of upcoming mobile networks may further improve trajectory prediction and processing delays. This study proposes a greedy resource allocation optimization strategy for multi-user networks to minimize aggregate energy usage. Numerical results show a potential improvement of 33\% per 1000 iterations. Addressing prediction model division and velocity accuracy issues is crucial for better results. A plan for further improvement and for achieving these objectives is outlined for the upcoming work phase.
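A minimal sketch of greedy allocation of discrete resource blocks to minimize total transmission energy, under a hypothetical rate/energy model (all constants and the per-block rate model are assumptions, not the study's system model):

```python
import numpy as np

rng = np.random.default_rng(2)
K, N = 5, 40                               # users, resource blocks
bits = rng.uniform(1e6, 5e6, K)            # offload sizes per user (hypothetical)
rate_per_block = rng.uniform(1e6, 3e6, K)  # achievable rate per block (hypothetical)
power = 0.5                                # transmit power in watts (assumed)

def energy(k, n):
    # Transmission energy = power * time; time = bits / (n blocks * per-block rate)
    return np.inf if n == 0 else power * bits[k] / (n * rate_per_block[k])

# Greedy: repeatedly give the next block to the user with the largest marginal saving
alloc = np.zeros(K, dtype=int)
for _ in range(N):
    gains = [energy(k, alloc[k]) - energy(k, alloc[k] + 1) for k in range(K)]
    alloc[int(np.argmax(gains))] += 1

greedy_total = sum(energy(k, alloc[k]) for k in range(K))
equal_total = sum(energy(k, N // K) for k in range(K))
```

Because each user's energy is convex and decreasing in its block count, the marginal savings shrink monotonically, so this greedy allocation does at least as well as an equal split.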
In recent years, there has been a growing demand to discern clusters of subjects in datasets characterized by a large set of features. Often, these clusters may be highly variable in size and present partial hierarchical structures. In this context, model-based clustering approaches with nonparametric priors are gaining attention in the literature due to their flexibility and adaptability to new data. However, current approaches still face challenges in recognizing hierarchical cluster structures and in managing tiny clusters or singletons. To address these limitations, we propose a novel infinite mixture model with kernels organized within a multiscale structure. Leveraging a careful specification of the kernel parameters, our method allows the inclusion of additional information guiding possible hierarchies among clusters while maintaining flexibility. We provide theoretical support and an elegant, parsimonious formulation based on infinite factorization that allows efficient inference via a Gibbs sampler.
In this work we propose and analyse a structure-preserving approximation of the non-isothermal Cahn-Hilliard-Navier-Stokes system using conforming finite elements in space and an implicit time discretisation with convex-concave splitting. The system is first reformulated into a variational form which reveals the structure of the equations and is then used in the subsequent approximation.
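The energy stability conferred by convex-concave splitting can be illustrated on a scalar toy gradient flow for the double-well potential (not the full non-isothermal system): the convex part of the potential is treated implicitly and the concave part explicitly, and the discrete energy then decreases even for a large time step:

```python
import numpy as np

# Gradient flow u' = -(u^3 - u) for the double-well energy E(u) = (u^2 - 1)^2 / 4.
# Convex-concave splitting: the convex term u^3 is implicit, the concave term -u
# explicit, which gives unconditional energy stability.
E = lambda u: (u**2 - 1)**2 / 4

def step(u, tau):
    # Implicit update v solves tau*v^3 + v = (1 + tau)*u (a monotone cubic,
    # hence exactly one real root)
    roots = np.roots([tau, 0.0, 1.0, -(1 + tau) * u])
    real = roots[np.abs(roots.imag) < 1e-10].real
    return float(real[np.argmin(np.abs(real - u))])

u, tau = 0.3, 10.0                  # deliberately large time step
energies = [E(u)]
for _ in range(40):
    u = step(u, tau)
    energies.append(E(u))
```

The energy sequence is monotone decreasing and the state converges to the well at u = 1, with no time-step restriction; the same splitting mechanism underlies the stability of the full scheme.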