Games with environmental feedback have become a crucial area of study across various scientific domains, modelling the dynamic interplay between human decisions and environmental changes and highlighting the consequences of our choices for natural resources and biodiversity. In this work, we propose a co-evolutionary model for human-environment systems that incorporates the effects of knowledge feedback and social interaction on the sustainability of common pool resources. The model represents consumers as agents who adjust their resource extraction based on the resource's state. These agents are connected through social networks, where links symbolize either affinity or aversion among them. The interplay between social dynamics and resource dynamics is explored, with the system's evolution analyzed across various network topologies and initial conditions. We find that knowledge feedback can independently sustain common pool resources. However, the impact of social interactions on sustainability is two-fold: they can either support or impede sustainability, depending on the network's connectivity and heterogeneity. A notable finding is the identification of a critical network mean degree, beyond which a depletion/repletion transition parallels an absorbing/active state transition in the social dynamics, i.e., whether individual agents and their connections are prone to being frozen in their social states. Furthermore, the study examines the evolution of the social network, revealing the emergence of two polarized groups in which agents within each community share the same affinity. Comparative analyses using Monte Carlo simulations and rate equations, along with analytical arguments, reinforce the study's findings. The model successfully captures how information spread and social dynamics may impact the sustainability of common pool resources.
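As a rough illustration of the feedback loop sketched above, the snippet below simulates a common pool resource that grows logistically and is harvested by agents whose extraction effort adapts to the resource state. The functional forms, parameter values, and the omission of the social network are illustrative assumptions for a minimal sketch, not the paper's actual equations.

```python
import numpy as np

# Minimal sketch: logistic resource growth, extraction by adaptive agents.
# All functional forms and parameters below are illustrative assumptions.
rng = np.random.default_rng(1)
n_agents, steps, dt = 100, 2000, 0.01
r_growth, capacity = 1.0, 1.0
R = 0.8                                     # resource level (fraction of capacity)
effort = rng.uniform(0.0, 0.02, n_agents)   # per-agent extraction effort

for _ in range(steps):
    harvest = effort * R
    R += dt * (r_growth * R * (1.0 - R / capacity) - harvest.sum())
    R = max(R, 0.0)
    # "Knowledge feedback": agents scale effort down when the resource looks scarce.
    effort += dt * 0.5 * (R - 0.5) * effort
    effort = np.clip(effort, 0.0, 0.05)

print(f"final resource level: {R:.3f}")
```

Coupling the effort update to neighbours' states on an affinity/aversion network would turn this into the kind of co-evolutionary dynamics the abstract studies.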
Physics-informed neural networks (PINNs) are an extremely powerful paradigm for solving equations encountered in scientific computing applications. An important part of the procedure is the minimization of the equation residual, which, when the equation is time-dependent, involves a sampling in time. It has been argued in the literature that the sampling need not be uniform but should overweight initial time instants, yet no rigorous explanation has been provided for this choice. In this paper we take some prototypical examples and, under standard hypotheses concerning neural network convergence, we show that the optimal time sampling follows a truncated exponential distribution. In particular, we explain when the time sampling is best taken uniform and when it is not. The findings are illustrated with numerical examples on a linear equation, Burgers' equation, and the Lorenz system.
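The snippet below shows how collocation times drawn from a truncated exponential distribution on [0, T] overweight early instants, via inverse-CDF sampling; the rate parameter is left free here, since its optimal value depends on the convergence hypotheses analysed in the paper. A minimal sketch, not the authors' implementation.

```python
import numpy as np

def sample_truncated_exponential(n, rate, t_max, seed=None):
    """Draw n collocation times from an exponential law with the given rate,
    truncated to [0, t_max], by inverting its CDF. rate -> 0 gives the uniform law."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    if abs(rate) < 1e-12:
        return u * t_max
    # CDF: F(t) = (1 - exp(-rate * t)) / (1 - exp(-rate * t_max))
    return -np.log1p(-u * (1.0 - np.exp(-rate * t_max))) / rate

# Overweight early times (rate > 0) when sampling points for the residual loss.
times = sample_truncated_exponential(10_000, rate=3.0, t_max=1.0, seed=0)
print(times.mean())  # well below 0.5, i.e. early instants are favoured
```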
Physics-Informed Neural Networks (PINNs) have emerged as a highly active research topic across multiple disciplines in science and engineering, including computational geomechanics. PINNs offer a promising approach in applications where faster, near real-time or real-time numerical prediction is required. Examples of such areas in geomechanics include geotechnical design optimization, digital twins of geo-structures, and stability prediction of monitored slopes. However, challenges remain in the training of PINNs, especially for problems with high spatial and temporal complexity. In this paper, we study how the training of PINNs can be improved, using an idealized poroelasticity problem as a demonstration example. A curriculum training strategy is employed in which the PINN model is trained gradually by dividing the training data into intervals along the temporal dimension. We find that the PINN model with curriculum training requires nearly half the training time of conventional training over the whole solution domain. For the particular example here, the quality of the predicted solution was found to be good with both training approaches, but it is anticipated that the curriculum training approach has the potential to offer better prediction capability for more complex problems, a subject for further research.
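A minimal sketch of the curriculum idea follows: the time window grows one interval at a time and the network keeps training on the enlarged window. The toy loss (fitting sin(t)) stands in for the poroelasticity residual and boundary/initial terms, and all names and parameters are illustrative assumptions rather than the paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                      nn.Linear(32, 32), nn.Tanh(),
                      nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

t_max, n_intervals, epochs_per_interval = 6.0, 4, 500

for k in range(1, n_intervals + 1):
    # Curriculum step: extend the training window to [0, k * t_max / n_intervals].
    t_end = k * t_max / n_intervals
    t = torch.rand(256, 1) * t_end
    target = torch.sin(t)                 # placeholder for the reference physics
    for _ in range(epochs_per_interval):
        opt.zero_grad()
        loss = ((model(t) - target) ** 2).mean()   # stand-in for the PINN loss
        loss.backward()
        opt.step()
    print(f"interval {k}: t in [0, {t_end:.2f}], loss = {loss.item():.2e}")
```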
High-dimensional, higher-order tensor data are gaining prominence in a variety of fields, including but not limited to computer vision and network analysis. Tensor factor models, induced from noisy versions of tensor decompositions or factorizations, are natural and potent instruments for studying collections of tensor-variate objects that may be dependent or independent. However, statistical inferential theory for estimating the various low-rank structures that customarily play the role of signals in tensor factor models is still at an early stage of development. In this paper, we attempt to "decode" the estimation of a higher-order tensor factor model by leveraging tensor matricization. Specifically, we recast it into mode-wise traditional high-dimensional vector/fiber factor models, enabling the deployment of conventional principal component analysis (PCA) for estimation. Using the Tucker tensor factor model (TuTFaM), which is induced from the noisy version of the widely used Tucker decomposition, we show that estimation of the signal components is essentially a mode-wise PCA technique, and that incorporating projection and iteration enhances the signal-to-noise ratio to varying extents. We establish the inferential theory of the proposed estimators, conduct rich simulation experiments, and illustrate how the proposed estimators can be used for tensor reconstruction and for clustering, on independent video and dependent economic datasets, respectively.
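The core matricization step can be sketched as follows: unfold the tensor along each mode and run PCA on the unfolding to estimate that mode's loading matrix. This is only the basic mode-wise PCA idea; the paper's estimators additionally use projection and iteration, which are omitted here, and all names are illustrative.

```python
import numpy as np

def mode_unfold(tensor, mode):
    """Matricize along `mode`: rows index that mode, columns the remaining ones."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def modewise_pca(tensor, ranks):
    """Estimate one loading matrix per mode from the leading eigenvectors of the
    mode-wise unfolding's Gram matrix (vanilla PCA, no projection/iteration)."""
    loadings = []
    for mode, r in enumerate(ranks):
        unfolding = mode_unfold(tensor, mode)
        gram = unfolding @ unfolding.T / unfolding.shape[1]
        _, eigvecs = np.linalg.eigh(gram)          # eigenvalues in ascending order
        loadings.append(eigvecs[:, ::-1][:, :r])   # keep the r leading eigenvectors
    return loadings

# Toy example: noisy rank-(2, 2, 2) Tucker signal.
rng = np.random.default_rng(0)
core = rng.normal(size=(2, 2, 2))
A, B, C = (rng.normal(size=(d, 2)) for d in (20, 30, 40))
X = np.einsum('abc,ia,jb,kc->ijk', core, A, B, C) + 0.1 * rng.normal(size=(20, 30, 40))
print([U.shape for U in modewise_pca(X, ranks=(2, 2, 2))])  # [(20, 2), (30, 2), (40, 2)]
```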
The main challenge of large-scale numerical simulation of radiation transport is the high memory and computation time requirements of discretization methods for kinetic equations. In this work, we derive and investigate a neural network-based approximation to the entropy closure method that accurately computes the solution of the multi-dimensional moment system with a low memory footprint and competitive computational time. We extend methods developed for the standard entropy-based closure to the context of regularized entropy-based closures. The main idea is to interpret structure-preserving neural network approximations of the regularized entropy closure as a two-stage approximation to the original entropy closure. We conduct a numerical analysis of this approximation and investigate optimal parameter choices. Our numerical experiments demonstrate that the method has a much lower memory footprint than traditional methods, with competitive computation times and simulation accuracy. The code and all trained networks are provided at https://github.com/ScSteffen/neuralEntropyClosures and https://github.com/CSMMLab/KiT-RT.
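For background, the standard (non-regularized) minimal-entropy closure that the two-stage interpretation refers to can be stated as below; the notation (entropy density, its Legendre dual, moment basis, velocity average) is the conventional one and is given only as context, not as the paper's regularized formulation.

```latex
% Standard minimal-entropy closure and its convex dual (background sketch):
\begin{align*}
f_u &= \operatorname*{arg\,min}_{g}\ \langle \eta(g)\rangle
      \quad\text{subject to}\quad \langle \mathbf{m}\,g\rangle = u,\\
\alpha_u &= \operatorname*{arg\,min}_{\alpha}\ \langle \eta_*(\alpha\cdot\mathbf{m})\rangle
      - \alpha\cdot u, \qquad f_u = \eta_*'(\alpha_u\cdot\mathbf{m}).
\end{align*}
```

A structure-preserving network can then approximate the convex map from the moments u to the minimal entropy value h(u) and recover the multipliers as its gradient; the regularized closure studied in the paper modifies the dual problem above.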
In this paper, we develop a general framework for multicontinuum homogenization in perforated domains. Simulations of problems in perforated domains are expensive, and in many applications coarse-grid macroscopic models are developed. Previous approaches include homogenization, multiscale finite element methods, and others. In our paper, we design multicontinuum homogenization based on our recently proposed framework. In this setting, we distinguish different spatial regions in perforations based on their sizes. For example, very thin perforations are treated as one continuum, while larger perforations are treated as another continuum. By differentiating perforations in this way, we are able to predict flows in each of them more accurately. We present a framework that formulates cell problems for each continuum using appropriate constraints on the solution averages and their gradients. These cell problem solutions are used in a multiscale expansion and in deriving novel macroscopic systems for multicontinuum homogenization. Our proposed approaches are designed for problems without scale separation. We present numerical results for two-continuum problems and demonstrate the accuracy of the proposed methods.
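As background on the general shape of such constructions (the paper's specific cell problems and macroscopic system are not reproduced here), the multicontinuum expansion typically represents the fine-scale solution through macroscopic continuum variables and local auxiliary functions, roughly as sketched below.

```latex
% Schematic multicontinuum expansion (background sketch, not the paper's exact form):
\[
u_\varepsilon(x) \;\approx\; \sum_i \varphi_i(x)\, U_i(x)
   \;+\; \sum_{i,m} \varphi_i^{(m)}(x)\, \partial_{x_m} U_i(x).
\]
```

Here the U_i are the macroscopic continuum variables, and the auxiliary functions solve local cell problems constrained so that their averages (and those associated with the gradients) over each continuum, e.g. the thin versus the larger perforations, take prescribed values; substituting the expansion into the fine-scale equations yields the coupled macroscopic system.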
Flow interaction between a plain-fluid region and an adjacent porous layer has attracted significant attention from both the modelling and analysis perspectives due to numerous applications in biology, the environment, and industry. In the most widely used coupled model, fluid flow is described by the Stokes equations in the free-flow domain and by Darcy's law in the porous medium, complemented by appropriate interface conditions. However, traditional coupling concepts are restricted, with a few exceptions, to one-dimensional flows parallel to the fluid-porous interface. In this work, we use an alternative approach to model the interaction between the plain-fluid domain and the porous medium by considering a transition zone, and we propose full- and hybrid-dimensional Stokes-Brinkman-Darcy models. In the first case, the equi-dimensional Brinkman equations are considered in the transition region, and appropriate interface conditions are set on the top and bottom of the transition zone. In the second case, we perform a dimensional model reduction by averaging the Brinkman equations in the normal direction and using the proposed transmission conditions. The well-posedness of both coupled problems is proved, and numerical simulations are carried out in order to validate the concepts.
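For reference, the three subproblem equations named above have the following standard forms (free-flow region, transition zone, porous medium); the interface and transmission conditions that couple them are the subject of the paper and are not reproduced here, and the notation is the conventional one rather than the paper's.

```latex
% Standard forms of the subproblem equations (background sketch):
\begin{gather*}
\text{Stokes (free flow):}\quad -\nabla\cdot(\mu\nabla\mathbf{v}) + \nabla p = \mathbf{f},
  \qquad \nabla\cdot\mathbf{v} = 0,\\
\text{Brinkman (transition zone):}\quad -\nabla\cdot(\mu_{\mathrm{eff}}\nabla\mathbf{v})
  + \mu\,\mathsf{K}^{-1}\mathbf{v} + \nabla p = \mathbf{f}, \qquad \nabla\cdot\mathbf{v} = 0,\\
\text{Darcy (porous medium):}\quad \mathbf{v} = -\tfrac{1}{\mu}\,\mathsf{K}\,\nabla p,
  \qquad \nabla\cdot\mathbf{v} = q.
\end{gather*}
```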
Compositional data arise in many areas of research in the natural and biomedical sciences. One prominent example is in the study of the human gut microbiome, where one can measure the relative abundance of many distinct microorganisms in a subject's gut. Often, practitioners are interested in learning how the dependencies between microbes vary across distinct populations or experimental conditions. In statistical terms, the goal is to estimate a covariance matrix for the (latent) log-abundances of the microbes in each of the populations. However, the compositional nature of the data prevents the use of standard estimators for these covariance matrices. In this article, we propose an estimator of multiple covariance matrices which allows for information sharing across distinct populations of samples. Compared to some existing estimators, which estimate the covariance matrices of interest indirectly, our estimator is direct, ensures positive definiteness, and is the solution to a convex optimization problem. We compute our estimator using a proximal-proximal gradient descent algorithm. Asymptotic properties of our estimator reveal that it can perform well in high-dimensional settings. Through simulation studies, we demonstrate that our estimator can outperform existing estimators. We show that our method provides more reliable estimates than competitors in an analysis of microbiome data from subjects with chronic fatigue syndrome.
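The abstract does not spell out the objective, so the snippet below is only a generic sketch of a proximal-gradient scheme of the kind mentioned: each population's covariance is fit to its sample covariance, a simple penalty shrinking the estimates toward their common average stands in for the information-sharing term, and an eigenvalue-clipping step keeps every iterate positive definite. The penalty, step sizes, and names are assumptions, not the paper's estimator.

```python
import numpy as np

def prox_psd(S, eps=1e-6):
    """Project a symmetric matrix onto {A : A >= eps * I} by eigenvalue clipping."""
    S = (S + S.T) / 2
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.maximum(w, eps)) @ V.T

def prox_grad_covariances(samples, lam=0.1, step=0.01, n_iter=500):
    """Generic sketch: squared-Frobenius fit to each sample covariance plus a
    shrink-toward-the-average penalty (stand-in for information sharing),
    with a positive-definiteness projection after every gradient step."""
    Sighat = [np.cov(X, rowvar=False) for X in samples]
    Sig = [S.copy() for S in Sighat]
    for _ in range(n_iter):
        Sbar = sum(Sig) / len(Sig)
        for k in range(len(Sig)):
            grad = (Sig[k] - Sighat[k]) + lam * (Sig[k] - Sbar)
            Sig[k] = prox_psd(Sig[k] - step * grad)
    return Sig

rng = np.random.default_rng(0)
data = [rng.normal(size=(200, 5)) for _ in range(3)]   # three toy populations
print([np.linalg.eigvalsh(S).min() > 0 for S in prox_grad_covariances(data)])
```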
In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, which is itself difficult to scale. In this paper we present four algorithms to solve these problems. In combination, these algorithms enable each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
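As a toy illustration of the learning-plus-adaptive-exploration idea (not the paper's four algorithms), the agent below learns action values for allocating a subtask to each peer and explores more when its value estimates are poorly separated, i.e. when it has little reason to trust its current strategy. The uncertainty measure, reward model, and names are assumptions.

```python
import random
from collections import defaultdict

class AllocatorAgent:
    """Learns which peer to allocate a subtask to; exploration scales with how
    unsettled its current strategy looks (spread of its value estimates)."""

    def __init__(self, peers, lr=0.2):
        self.q = defaultdict(float)   # estimated value of allocating to each peer
        self.peers = list(peers)
        self.lr = lr

    def exploration_rate(self):
        values = [self.q[p] for p in self.peers]
        spread = max(values) - min(values)
        return max(0.05, 1.0 / (1.0 + spread))   # little separation -> explore more

    def choose_peer(self):
        if random.random() < self.exploration_rate():
            return random.choice(self.peers)
        return max(self.peers, key=lambda p: self.q[p])

    def update(self, peer, reward):
        self.q[peer] += self.lr * (reward - self.q[peer])

# Usage: peers return noisy quality scores; the agent converges on the best one.
random.seed(0)
quality = {"a": 0.9, "b": 0.5, "c": 0.2}
agent = AllocatorAgent(quality)
for _ in range(500):
    peer = agent.choose_peer()
    agent.update(peer, quality[peer] + random.gauss(0, 0.1))
print(max(agent.q, key=agent.q.get))   # expected: "a"
```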
We hypothesize that, due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate the model's dependence on each modality, we compute the gain in accuracy when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
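Reading the abstract's definition literally, the conditional utilization rate of a modality is the accuracy gained by adding it on top of another modality; the snippet below computes it for hypothetical accuracy numbers (the exact definition and any normalization used in the paper may differ).

```python
def conditional_utilization_rate(acc_both, acc_other_only):
    """u(m1 | m2) = accuracy({m1, m2}) - accuracy({m2}): gain from adding m1."""
    return acc_both - acc_other_only

# Hypothetical accuracies for a two-modality (RGB + depth) model:
acc_both, acc_rgb_only, acc_depth_only = 0.91, 0.88, 0.62
print(conditional_utilization_rate(acc_both, acc_rgb_only))    # 0.03: depth adds little
print(conditional_utilization_rate(acc_both, acc_depth_only))  # 0.29: RGB dominates
```

The imbalance between the two rates (0.03 vs. 0.29) is the kind of signature of greedy, single-modality reliance the abstract describes.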
Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function to encourage similar images to be projected close to each other. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
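Since the abstract does not detail the shadow mechanism, the snippet below only sketches the common deep-supervised-hashing setup it builds on: a small CNN maps images to relaxed binary codes, and a pairwise loss pulls codes of same-class images together while pushing different-class codes apart. The architecture, margin, and names are illustrative assumptions, not the SRH method itself.

```python
import torch
import torch.nn as nn

class HashNet(nn.Module):
    """Tiny CNN producing relaxed binary codes in (-1, 1)."""
    def __init__(self, n_bits=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_bits),
        )

    def forward(self, x):
        return torch.tanh(self.backbone(x))

def pairwise_hash_loss(codes, labels, margin=8.0):
    """Pull same-class codes together, push different-class codes beyond a margin."""
    d = torch.cdist(codes, codes) ** 2
    same = (labels[:, None] == labels[None, :]).float()
    return (same * d + (1 - same) * torch.clamp(margin - d, min=0)).mean()

# Toy usage on random CIFAR-10-sized inputs.
net = HashNet()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = pairwise_hash_loss(net(x), y)
loss.backward()
print(loss.item())
```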