In this paper we consider the numerical approximation of infinite horizon problems via the dynamic programming approach. The value function of the problem solves a Hamilton-Jacobi-Bellman (HJB) equation that is approximated by a fully discrete method. It is well known that the numerical problem is difficult to handle due to the so-called curse of dimensionality. To mitigate this issue we apply model order reduction by means of a new proper orthogonal decomposition (POD) method based on time derivatives. We carry out the error analysis of the method using recently proved optimal bounds for the fully discrete approximations. Moreover, the use of snapshots based on time derivatives allows us to bound some error terms that could not be bounded in a standard POD approach. Numerical experiments show the good performance of the method in practice.
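To make the reduction step concrete, the following is a minimal sketch of a POD basis computation in which the snapshot set is augmented with first-order difference quotients (approximate time derivatives); the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def pod_basis_with_derivatives(snapshots, dt, r):
    """Rank-r POD basis from value-function snapshots, with the snapshot
    set augmented by first-order difference quotients in time, which is
    the core idea of a time-derivative-based POD method."""
    V = np.asarray(snapshots)           # shape (n_times, n_dofs)
    dV = (V[1:] - V[:-1]) / dt          # difference quotients in time
    S = np.vstack([V, dV]).T            # columns are augmented snapshots
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :r], s                  # POD modes and singular values
```

The reduced HJB problem is then posed on the span of the returned modes; including the difference quotients is the ingredient the analysis credits with bounding the additional error terms.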
In recent years, human mobility research has discovered universal patterns capable of describing how people move. These regularities have been shown to depend in part on individual and environmental characteristics (e.g., gender, rural/urban setting, country). In this work, we show that life-course events, such as job loss, can disrupt individual mobility patterns. Job loss adversely affects individuals' well-being and can increase the risk of social and economic inequalities; we show that it drives a significant change in the exploratory behaviour of individuals, with changes that intensify over time since the job loss. Our findings shed light on the dynamics of employment-related behaviour at scale, providing a deeper understanding of key components of human mobility regularities. These drivers can facilitate targeted social interventions to support the most vulnerable populations.
In this paper, we report our experiments with various strategies to improve code-mixed humour and sarcasm detection. All of our experiments address the Hindi-English code-mixed scenario, for which we have the necessary linguistic expertise. We experimented with three approaches, namely (i) native sample mixing, (ii) multi-task learning (MTL), and (iii) prompting very large multilingual language models (VMLMs). In native sample mixing, we added monolingual task samples to code-mixed training sets. In MTL, we relied on native and code-mixed samples of a semantically related task (hate detection in our case). Finally, in our third approach, we evaluated the efficacy of VMLMs via few-shot in-context prompting. Our key findings are that (i) adding native samples improved humour (raising the F1-score by up to 6.76%) and sarcasm (raising the F1-score by up to 8.64%) detection, (ii) training MLMs in an MTL framework boosted performance for both humour (raising the F1-score by up to 10.67%) and sarcasm (an increment of up to 12.35% in F1-score) detection, and (iii) prompting VMLMs could not outperform the other approaches. Finally, our ablation studies and error analysis identified the cases where our models still fall short. We provide our code for reproducibility.
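As an illustration of the first strategy, here is a minimal sketch of native sample mixing, assuming both training sets are plain lists of labelled examples; the mixing ratio is a free hyperparameter, not a value taken from the paper.

```python
import random

def mix_native_samples(code_mixed_train, native_train, ratio=0.5, seed=0):
    """Augment a code-mixed training set with monolingual (native) task
    samples; `ratio` sets how many native samples are added relative to
    the size of the code-mixed set."""
    rng = random.Random(seed)
    k = min(int(len(code_mixed_train) * ratio), len(native_train))
    mixed = list(code_mixed_train) + rng.sample(native_train, k)
    rng.shuffle(mixed)
    return mixed
```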
Powerful deep neural networks are vulnerable to adversarial attacks. To obtain adversarially robust models, researchers have separately developed adversarial training and Jacobian regularization techniques. Theoretical and empirical studies of adversarial training are abundant, but theoretical foundations for Jacobian regularization are still lacking. In this study, we show that Jacobian regularization is closely related to adversarial training in that the $\ell_{2}$ or $\ell_{1}$ Jacobian regularized loss serves as an approximate upper bound on the adversarially robust loss under an $\ell_{2}$ or $\ell_{\infty}$ adversarial attack, respectively. Further, we establish the robust generalization gap for the Jacobian regularized risk minimizer by bounding the Rademacher complexity of both the standard loss function class and the Jacobian regularization function class. Our theoretical results indicate that the norms of the Jacobian are related to both standard and robust generalization. We also perform experiments on MNIST classification to demonstrate that Jacobian regularized risk minimization indeed serves as a surrogate for adversarially robust risk minimization, and that reducing the norms of the Jacobian can improve both standard and robust generalization. This study advances both the theoretical and empirical understanding of adversarially robust generalization via Jacobian regularization.
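The stated connection can be motivated by a first-order Taylor expansion; the display below is a standard heuristic consistent with the abstract's claim, not the paper's exact theorem. For an $\ell_2$-bounded attack,
\[
\max_{\|\delta\|_2 \le \epsilon} \ell\bigl(f(x+\delta), y\bigr)
\;\approx\; \ell\bigl(f(x), y\bigr) + \max_{\|\delta\|_2 \le \epsilon} \nabla_f \ell^{\top} J_f(x)\,\delta
\;\le\; \ell\bigl(f(x), y\bigr) + \epsilon\,\|\nabla_f \ell\|_2\,\|J_f(x)\|_2,
\]
where $J_f(x)$ is the input-output Jacobian, so penalizing a norm of $J_f$ approximately controls the robust loss; the $\ell_1$/$\ell_\infty$ pairing follows from the analogous Hölder inequality.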
Climate models struggle to accurately simulate precipitation, particularly extremes and the diurnal cycle. Here, we present a hybrid model that is trained directly on satellite-based precipitation observations. Our model runs at 2.8$^\circ$ resolution and is built on the differentiable NeuralGCM framework. The model demonstrates significant improvements over existing general circulation models, the ERA5 reanalysis, and a global cloud-resolving model in simulating precipitation. Our approach yields reduced biases, a more realistic precipitation distribution, improved representation of extremes, and a more accurate diurnal cycle. Furthermore, it outperforms the mid-range precipitation forecast of the ECMWF ensemble. This advance paves the way for more reliable simulations of current climate and demonstrates how training on observations can be used to directly improve GCMs.
This paper reviews work published between 2002 and 2022 in the fields of Android malware, clone, and similarity detection. It examines the data sources, tools, and features used in existing research and identifies the need for a comprehensive, cross-domain dataset to facilitate interdisciplinary collaboration and the exploitation of synergies between different research areas. Furthermore, it shows that many research papers do not publish the dataset or a description of how it was created, making it difficult to reproduce or compare the results. The paper highlights the necessity for a dataset that is accessible, well-documented, and suitable for a range of applications. Guidelines are provided for this purpose, along with a schematic method for creating the dataset.
In this paper, we focus on efficiently and flexibly simulating the Fokker-Planck equation associated with the Nonlinear Noisy Leaky Integrate-and-Fire (NNLIF) model, which describes the dynamic behavior of neuronal networks. We apply the Galerkin spectral method to discretize the spatial domain, constructing a variational formulation that satisfies the complex boundary conditions. Moreover, the boundary conditions in the variational formulation involve only zeroth-order terms, the first-order conditions being incorporated naturally. This allows the numerical scheme to be extended further to an excitatory-inhibitory population model with synaptic delays and refractory states. Additionally, we establish the consistency of the numerical scheme. Numerical experiments, including accuracy tests and simulations of blow-up events and periodic oscillations, validate the properties of our proposed method.
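For concreteness, one common variational formulation of the NNLIF Fokker-Planck equation, written in the standard notation of the literature (which may differ from the paper's), is: find $p(\cdot,t)$ with $p(V_F,t) = 0$ such that, for all test functions $\varphi$,
\[
\frac{d}{dt}\int_{-\infty}^{V_F} p\,\varphi\,dv
= \int_{-\infty}^{V_F} \bigl(-v + bN(t)\bigr)\,p\,\partial_v\varphi\,dv
- a\int_{-\infty}^{V_F} \partial_v p\,\partial_v\varphi\,dv
+ N(t)\bigl[\varphi(V_R) - \varphi(V_F)\bigr],
\]
with firing rate $N(t) = -a\,\partial_v p(V_F,t)$. After integration by parts only zeroth-order evaluations of $\varphi$ remain at the boundary, which matches the remark that the first-order (flux) conditions are incorporated naturally.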
In randomized trials and observational studies, it is often necessary to evaluate the extent to which an intervention affects a time-to-event outcome that is only partially observed due to right censoring. For instance, in infectious disease studies, it is frequently of interest to characterize the relationship between the risk of acquiring infection with a pathogen and a previously measured biomarker of an immune response against that pathogen induced by prior infection and/or vaccination. It is common to conduct inference within a causal framework, wherein we wish to make inferences about the counterfactual probability of survival through a given time point at any given exposure level. To determine whether a causal effect is present, one can assess whether this quantity differs by exposure level. Recent work shows that, under typical causal assumptions, summaries of the counterfactual survival distribution are identifiable. Moreover, when the treatment is multi-level, these summaries are also pathwise differentiable in a nonparametric probability model, making it possible to construct estimators thereof that are unbiased and approximately normal. When the treatment is continuous, however, the target estimand is no longer pathwise differentiable, rendering it difficult to construct well-behaved estimators without strong parametric assumptions. In this work, we extend beyond the traditional setting of multilevel interventions and develop approaches to nonparametric inference with a continuous exposure. We introduce methods for testing whether the counterfactual probability of survival through a given time point remains constant across the range of continuous exposure levels. The performance of our proposed methods is evaluated via numerical studies, and we apply our method to data from a recent pair of efficacy trials of an HIV monoclonal antibody.
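For orientation, under standard causal assumptions (consistency, no unmeasured confounding given covariates $W$, positivity, and censoring that is non-informative given exposure $A$ and $W$), the counterfactual survival probability at exposure level $a$ is identified by a G-computation formula of the form
\[
\theta(a, t) \;=\; \Pr\{T(a) > t\} \;=\; \mathbb{E}_W\bigl[\Pr(T > t \mid A = a, W)\bigr],
\]
and the null hypothesis targeted by the proposed tests is that $a \mapsto \theta(a, t)$ is constant over the range of the continuous exposure. This display is a standard identification sketch; the paper's precise conditions and estimands may differ.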
In this paper we obtain the Wedderburn-Artin decomposition of a semisimple group algebra associated to a direct product of finite groups. We also provide formulae for the number of all possible group codes that can be constructed in such a group algebra, together with their dimensions. As particular cases, we present the complete algebraic description of the group algebra of any direct product of groups whose direct factors are cyclic, dihedral, or generalised quaternion groups. Finally, in the specific case of semisimple dihedral group algebras, we give a method, based on the CSS construction, for building quantum error-correcting codes.
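As a small worked example in the spirit of these results (our own illustration, not taken from the paper): since $\operatorname{char}\,\mathbb{F}_2 = 2$ does not divide $|C_3| = 3$, Maschke's theorem gives semisimplicity, and
\[
\mathbb{F}_2 C_3 \;\cong\; \mathbb{F}_2[x]/(x^3 - 1) \;\cong\; \mathbb{F}_2 \oplus \mathbb{F}_4,
\]
because $x^3 - 1 = (x+1)(x^2+x+1)$ over $\mathbb{F}_2$. The group codes are exactly the ideals, i.e. the $2^2 = 4$ subsums of the two Wedderburn components, with dimensions $0$, $1$, $2$, and $3$.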
In this paper we develop a novel neural network model for predicting the implied volatility surface. Prior financial domain knowledge is taken into account. A new activation function that incorporates the volatility smile is proposed and used for the hidden nodes that process the underlying asset price. In addition, financial conditions, such as the absence of arbitrage, the boundaries, and the asymptotic slope, are embedded into the loss function. This is one of the first studies to propose a methodological framework that incorporates prior financial domain knowledge into neural network architecture design and model training. The proposed model outperforms the benchmark models on option data for the S&P 500 index spanning 20 years. More importantly, the domain knowledge is satisfied empirically, showing that the model is consistent with existing financial theories and conditions related to the implied volatility surface.
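To illustrate how such conditions can enter a loss function, here is a minimal sketch combining a data-fit term with one soft no-arbitrage-style penalty (the calendar-spread condition that total implied variance be non-decreasing in maturity); the model interface, penalty choice, and weight are illustrative assumptions, not the paper's specification.

```python
import torch

def iv_loss(model, k, t, iv_obs, lam=1.0):
    """Data-fit MSE plus a soft penalty encouraging total implied
    variance w = iv**2 * t to be non-decreasing in maturity t (a
    standard calendar-spread no-arbitrage condition)."""
    t = t.clone().requires_grad_(True)
    iv = model(k, t)                          # predicted implied vols
    fit = torch.mean((iv - iv_obs) ** 2)      # data-fit term
    w = iv ** 2 * t                           # total implied variance
    dw_dt = torch.autograd.grad(w.sum(), t, create_graph=True)[0]
    penalty = torch.mean(torch.relu(-dw_dt))  # punish dw/dt < 0
    return fit + lam * penalty
```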
Convolutional Neural Networks (CNNs) have achieved impressive performance on many computer vision tasks, such as object detection, image recognition, and image retrieval. These achievements stem from the CNN's outstanding capability to learn input features through deep layers of neurons and an iterative training process. However, the learned features are hard to identify and interpret from a human vision perspective, leaving the CNN's internal working mechanism poorly understood. To improve CNN interpretability, CNN visualization is widely used as a qualitative analysis method that translates the internal features into visually perceptible patterns, and many CNN visualization works have been proposed in the literature to interpret CNNs in terms of network structure, operation, and semantic concept. In this paper, we provide a comprehensive survey of several representative CNN visualization methods, including Activation Maximization, Network Inversion, Deconvolutional Neural Networks (DeconvNet), and Network Dissection based visualization. These methods are presented in terms of motivations, algorithms, and experimental results. Based on these visualization methods, we also discuss their practical applications to demonstrate the significance of CNN interpretability in areas such as network design, optimization, and security enhancement.
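As a concrete example of the first family of methods, here is a minimal sketch of Activation Maximization: gradient ascent on a randomly initialized input to find a pattern that maximally excites a chosen unit, with L2 regularization (via weight decay on the input) to keep the result interpretable; the input shape and hyperparameters are illustrative.

```python
import torch

def activation_maximization(model, unit, steps=200, lr=0.1, wd=1e-4):
    """Optimize an input image so that a chosen output unit of a
    frozen CNN is maximally activated."""
    model.eval()
    x = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr, weight_decay=wd)
    for _ in range(steps):
        opt.zero_grad()
        activation = model(x)[0, unit]   # target unit's activation
        (-activation).backward()         # ascend: minimize the negative
        opt.step()
    return x.detach()
```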