Noise is a part of any data, whether the data comes from measurement, experiment, or ... In recent years, several techniques have been suggested for noise reduction to improve data quality, some of them based on wavelets, orthogonalization, and neural networks. The computational cost of existing methods is higher than expected, which makes their application in some cases impractical. In this paper, we suggest a low-cost technique based on a special linear-algebra structure (tridiagonal systems) to improve signal quality. In this method, we build a tridiagonal model of the noise around the noisiest elements. To update the predicted noise, the algorithm is equipped with a learning/feedback approach. The details are described below; based on the presented numerical results, the algorithm succeeds in computing the noise with lower MSE (mean squared error) in less computation time, especially when the data size is below 5000. Our algorithm is intended for low-range noise; for high-range noise, it is sufficient to use the presented algorithm in a hybrid with a moving average. The algorithm is implemented in MATLAB 2019b on a Windows 11 computer with 8 GB of RAM and is tested over many randomly generated experiments. The numerical results confirm the efficiency of the presented algorithm in most cases in comparison with existing methods.
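The abstract does not spell out the paper's tridiagonal noise model, so the following is only a minimal sketch of how a tridiagonal system can denoise a signal cheaply: a Whittaker-style smoother with a first-difference penalty, which yields the tridiagonal system (I + λDᵀD)x = y solvable in O(n) with a banded solver. The signal, the penalty weight `lam`, and the function name are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch (not the paper's algorithm): denoise by solving the
# tridiagonal system (I + lam * D^T D) x = y, where D is the first-difference
# operator. D^T D is tridiagonal, so the solve costs O(n).
import numpy as np
from scipy.linalg import solve_banded

def tridiagonal_smooth(y, lam=10.0):
    n = len(y)
    main = 1.0 + 2.0 * lam * np.ones(n)
    main[0] = main[-1] = 1.0 + lam        # boundary rows of D^T D have one neighbor
    upper = np.full(n, -lam); upper[0] = 0.0   # solve_banded ignores upper[0]
    lower = np.full(n, -lam); lower[-1] = 0.0  # and lower[-1]
    ab = np.vstack([upper, main, lower])       # banded storage: (upper, main, lower)
    return solve_banded((1, 1), ab, y)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(t.size)
denoised = tridiagonal_smooth(noisy, lam=50.0)
```

Because the system is tridiagonal, the cost grows linearly with the data size, which is consistent with the low-cost claim for moderate signal lengths.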
A key theme in the past decade has been that when large neural networks and large datasets combine, they can produce remarkable results. In deep reinforcement learning (RL), this paradigm is commonly made possible through experience replay, whereby a dataset of past experiences is used to train a policy or value function. However, unlike in supervised or self-supervised learning, an RL agent has to collect its own data, which is often limited. Thus, it is challenging to reap the benefits of deep learning, and even small neural networks can overfit at the start of training. In this work, we leverage the tremendous recent progress in generative modeling and propose Synthetic Experience Replay (SynthER), a diffusion-based approach to flexibly upsample an agent's collected experience. We show that SynthER is an effective method for training RL agents across offline and online settings, in both proprioceptive and pixel-based environments. In offline settings, we observe drastic improvements when upsampling small offline datasets and see that additional synthetic data also allows us to effectively train larger networks. Furthermore, SynthER enables online agents to train with a much higher update-to-data ratio than before, leading to a significant increase in sample efficiency, without any algorithmic changes. We believe that synthetic training data could open the door to realizing the full potential of deep learning for replay-based RL algorithms from limited data. Finally, we open-source our code at //github.com/conglu1997/SynthER.
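Since the abstract only names the mechanism, the sketch below illustrates the surrounding replay logic rather than SynthER itself: a small buffer of transitions is upsampled with a stand-in generative sampler, and each training batch mixes real and synthetic data. The transition layout, the `sample_synthetic` stub, and the mixing ratio are all illustrative assumptions, not the paper's diffusion model.

```python
# Illustrative sketch of generative replay upsampling (stand-in sampler,
# not SynthER's diffusion model). Transitions are rows of
# (state, action, reward, next_state) features flattened into one vector.
import numpy as np

rng = np.random.default_rng(0)
real = rng.standard_normal((512, 10))  # small collected dataset of transitions

def sample_synthetic(n, real_data):
    """Stand-in for sampling a trained generative model over transitions.
    Here: a crude Gaussian fit to the real data, purely for illustration."""
    mu, sigma = real_data.mean(axis=0), real_data.std(axis=0)
    return mu + sigma * rng.standard_normal((n, real_data.shape[1]))

synthetic = sample_synthetic(10 * len(real), real)  # upsample the experience

def sample_batch(batch_size=256, synthetic_ratio=0.5):
    """Mix real and synthetic transitions in each training batch."""
    n_syn = int(batch_size * synthetic_ratio)
    idx_real = rng.integers(0, len(real), batch_size - n_syn)
    idx_syn = rng.integers(0, len(synthetic), n_syn)
    return np.vstack([real[idx_real], synthetic[idx_syn]])

batch = sample_batch()  # feed to any off-policy RL update
```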
Open data is frequently released spatially and temporally aggregated, usually to comply with privacy policies. Varying aggregation levels (e.g., zip code, census tract, city block) complicate the integration across variables needed to provide multi-variate training sets for downstream AI/ML systems. In this work, we consider models to disaggregate spatial data, learning a function from a low-resolution irregular partition (e.g., zip code) to a high-resolution irregular partition (e.g., city block). We propose a hierarchical architecture that aligns each geographic aggregation level with a layer in the network such that all aggregation levels can be learned simultaneously by including loss terms for all intermediate levels as well as the final output. We then consider additional loss terms that compare the re-aggregated output against ground truth to further improve performance. To balance the tradeoff between training time and accuracy, we consider three training regimes, including a layer-by-layer process that achieves competitive predictions with significantly reduced training time. For situations where limited historical training data is available, we study transfer learning scenarios and show that a model pre-trained on one city variable can be fine-tuned for another city variable using only a few hundred samples, highlighting the common dynamics among variables from the same built environment and underlying population. Evaluating these techniques on four datasets across two cities, three variables, and two application domains, we find that geographically coherent architectures provide a significant improvement over baseline models as well as typical heuristic methods, advancing our long-term goal of synthesizing any variable, at any location, at any resolution.
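The loss structure described above can be sketched compactly: supervise every aggregation level, then add a re-aggregation consistency term comparing the summed fine-level output against the coarse-level truth. The layer sizes, the two-layer model, and the membership matrix below are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a hierarchical disaggregation loss: supervise every aggregation
# level plus a re-aggregation consistency term (illustrative shapes/model).
import torch
import torch.nn as nn

n_zip, n_tract, n_block = 20, 80, 320  # coarse -> fine partition sizes (assumed)

class HierarchicalDisagg(nn.Module):
    """One layer per aggregation level; every level's output is supervised."""
    def __init__(self):
        super().__init__()
        self.to_tract = nn.Linear(n_zip, n_tract)
        self.to_block = nn.Linear(n_tract, n_block)
    def forward(self, x_zip):
        tract = torch.relu(self.to_tract(x_zip))
        block = self.to_block(tract)
        return tract, block

model = HierarchicalDisagg()
x_zip = torch.randn(8, n_zip)                       # low-resolution input
y_tract, y_block = torch.randn(8, n_tract), torch.randn(8, n_block)
A = torch.rand(n_block, n_zip)                      # block -> zip membership
A /= A.sum(0, keepdim=True)                         # weights (assumed known)

tract_hat, block_hat = model(x_zip)
mse = nn.functional.mse_loss
loss = (mse(tract_hat, y_tract)        # intermediate-level loss
        + mse(block_hat, y_block)      # final output loss
        + mse(block_hat @ A, x_zip))   # re-aggregated output vs. coarse truth
loss.backward()
```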
Recent advances in Neural Radiance Fields (NeRFs) have made it possible to reconstruct and reanimate dynamic portrait scenes with control over head-pose, facial expressions and viewing direction. However, training such models assumes photometric consistency over the deformed region, e.g., the face must be evenly lit as it deforms with changing head-pose and facial expression. Such photometric consistency across frames of a video is hard to maintain, even in studio environments, thus making the created reanimatable neural portraits prone to artifacts during reanimation. In this work, we propose CoDyNeRF, a system that enables the creation of fully controllable 3D portraits in real-world capture conditions. CoDyNeRF learns to approximate illumination-dependent effects via a dynamic appearance model in the canonical space that is conditioned on predicted surface normals and the facial expression and head-pose deformations. Surface normal prediction is guided by 3DMM normals, which act as a coarse prior for the normals of the human head; direct prediction of normals is hard due to the rigid and non-rigid deformations induced by head-pose and facial expression changes. Using only a smartphone-captured short video of a subject for training, we demonstrate the effectiveness of our method on free view synthesis of a portrait scene with explicit head pose and expression controls, and realistic lighting effects. The project page can be found here: //shahrukhathar.github.io/2023/08/22/CoDyNeRF.html
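The conditioning pattern described above, a canonical-space appearance model that takes predicted normals, expression, and head pose as inputs, can be sketched generically as below. All dimensions, names, and the plain MLP are assumptions for illustration, not CoDyNeRF's actual network.

```python
# Generic sketch of a dynamic appearance MLP conditioned on surface normals,
# facial expression, and head pose (dimensions are illustrative assumptions).
import torch
import torch.nn as nn

class DynamicAppearance(nn.Module):
    def __init__(self, pos_dim=63, normal_dim=3, expr_dim=50, pose_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pos_dim + normal_dim + expr_dim + pose_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 3),  # RGB radiance at the queried canonical point
        )
    def forward(self, x_canon, normals, expression, head_pose):
        h = torch.cat([x_canon, normals, expression, head_pose], dim=-1)
        return torch.sigmoid(self.net(h))

model = DynamicAppearance()
rgb = model(torch.randn(1024, 63), torch.randn(1024, 3),
            torch.randn(1024, 50), torch.randn(1024, 6))
```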
Similarity caching allows requests for an item to be served by a similar item. Applications include recommendation systems, multimedia retrieval, and machine learning. Recently, many similarity caching policies have been proposed, such as SIM-LRU and RND-LRU, but the analysis of their hit rates is still lacking. In this paper, we show how to extend the popular time-to-live approximation from classic caching to similarity caching. In particular, we propose a method to estimate the hit rate of the similarity caching policy RND-LRU. Our method, the RND-TTL approximation, introduces the RND-TTL cache model and then tunes its parameters so as to mimic the behavior of RND-LRU. The parameter tuning involves solving a fixed-point system of equations, for which we provide a numerical resolution algorithm and sufficient conditions for its convergence. Our approach for approximating the hit rate of RND-LRU is evaluated on both synthetic and real-world traces.
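The RND-TTL fixed-point equations themselves are not given in the abstract, so the snippet below only sketches the generic numerical recipe such a system could be solved with: damped fixed-point iteration on a parameter vector until convergence. The map `F` is a placeholder contraction, not the cache model's characteristic equations.

```python
# Generic damped fixed-point iteration x = F(x), the kind of numerical
# resolution used for TTL-style cache models (F below is a placeholder,
# not the RND-TTL equations).
import numpy as np

def fixed_point(F, x0, damping=0.5, tol=1e-10, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (1 - damping) * x + damping * F(x)  # damped update step
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy contraction standing in for the cache model's equations.
F = lambda x: 0.5 * np.cos(x)
x_star = fixed_point(F, np.zeros(3))
```

Damping of this kind is a common way to obtain convergence when the undamped map is not itself a contraction, which is presumably why sufficient conditions for convergence matter in the paper's setting.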
Selfish Mining is strategic rule-breaking to maximize rewards in proof-of-work protocols [3], and Markov Decision Processes (MDPs) are the preferred tool for finding optimal strategies in Bitcoin [4, 10] and similar linear chain protocols [12]. Protocols increasingly adopt non-sequential chain structures [11], for which MDP analysis is more involved [2]. To date, researchers have tailored specific attack spaces for each protocol [2, 4, 5, 7, 10, 12]. Assumptions differ, and validating and comparing results is difficult. To overcome this, we propose a generic attack space that supports the wide class of DAG protocols that provide a total ordering of blocks [11], e.g., Ethereum, Fruitchains, and Parallel Proof-of-Work. Our approach is modular: we specify each protocol as one program, and then derive the Selfish Mining MDPs automatically.
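As a toy illustration of "specify each protocol as one program, then derive the MDP automatically", the sketch below enumerates the reachable states of a protocol given only its transition function, via breadth-first search. The interface and the miniature example protocol are hypothetical, not the authors' specification language.

```python
# Hypothetical sketch: derive an MDP's state space automatically from a
# protocol given as one program (a transition function), via BFS.
from collections import deque

def derive_mdp(initial_state, actions, transition):
    """transition(state, action) -> list of (probability, next_state, reward)."""
    states, frontier, mdp = {initial_state}, deque([initial_state]), {}
    while frontier:
        s = frontier.popleft()
        for a in actions:
            mdp[(s, a)] = transition(s, a)
            for _, s_next, _ in mdp[(s, a)]:
                if s_next not in states:
                    states.add(s_next)
                    frontier.append(s_next)
    return states, mdp

# Toy "protocol": the attacker's private lead over the public chain, capped at 3.
def lead_transition(lead, action):
    if action == "mine":   # attacker finds the next block w.p. alpha = 0.3
        return [(0.3, min(lead + 1, 3), 0.0), (0.7, lead, 0.0)]
    else:                  # "release": publish the private chain, collect reward
        return [(1.0, 0, float(lead))]

states, mdp = derive_mdp(0, ["mine", "release"], lead_transition)
```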
Information compression techniques are widely employed to reduce communication costs over peer-to-peer links. In this paper, we investigate distributed Nash equilibrium (NE) seeking problems in a class of non-cooperative games over directed graphs with information compression. To improve communication efficiency, a compressed distributed NE seeking (C-DNES) algorithm is proposed to obtain an NE for such games, where the differences between decision vectors and their estimates are compressed. The proposed algorithm is compatible with a general class of compression operators, including both unbiased and biased compressors. Moreover, our approach only requires the adjacency matrix of the directed graph to be row-stochastic, in contrast to past works that relied on balancedness or specific global network parameters. It is shown that C-DNES not only inherits the advantages of conventional distributed NE seeking algorithms, achieving a linear convergence rate for games with restricted strongly monotone mappings, but also saves communication costs in terms of transmitted bits. Finally, numerical simulations illustrate that C-DNES saves communication cost by an order of magnitude under different compressors.
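The claim that C-DNES works with both unbiased and biased compressors can be made concrete with two textbook operators: unbiased random sparsification (rescaled so that the expectation is preserved) and biased top-k selection. These are standard examples of the two classes, not necessarily the compressors used in the paper's simulations.

```python
# Two standard compression operators of the kind C-DNES supports:
# an unbiased random sparsifier and a biased top-k selector.
import numpy as np

rng = np.random.default_rng(0)

def rand_k(x, k):
    """Unbiased: keep k random coordinates, rescale by n/k so E[C(x)] = x."""
    n = len(x)
    out = np.zeros(n)
    idx = rng.choice(n, size=k, replace=False)
    out[idx] = x[idx] * (n / k)
    return out

def top_k(x, k):
    """Biased: keep the k largest-magnitude coordinates (no rescaling)."""
    out = np.zeros(len(x))
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

diff = rng.standard_normal(100)   # difference between decision and its estimate
sent_unbiased = rand_k(diff, 10)  # a node transmits only 10% of the entries
sent_biased = top_k(diff, 10)
```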
Graph Neural Networks (GNNs) have achieved tremendous success in a variety of real-world applications by relying on fixed graph data as input. However, the initial input graph might not be optimal for specific downstream tasks because of information scarcity, noise, adversarial attacks, or discrepancies among the distributions of the graph topology, features, and ground-truth labels. In this paper, we propose a bi-level optimization approach that learns the optimal graph structure by directly learning the Personalized PageRank propagation matrix and the downstream semi-supervised node classification task simultaneously. We also explore a low-rank approximation model to further reduce the time complexity. Empirical evaluations show the superior efficacy and robustness of the proposed model over all baseline methods.
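For context, the Personalized PageRank propagation matrix referenced above has the closed form Π = α(I − (1 − α)Â)⁻¹ for a normalized adjacency Â; the snippet computes it for a toy graph. How the paper parameterizes and learns this matrix inside its bi-level objective is not specified in the abstract, so only the standard definition is shown.

```python
# Personalized PageRank propagation matrix for a toy graph:
# Pi = alpha * (I - (1 - alpha) * A_hat)^{-1}, with A_hat the
# symmetrically normalized adjacency (self-loops added).
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
A_tilde = A + np.eye(len(A))                 # add self-loops
d = A_tilde.sum(1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))    # D^{-1/2} (A + I) D^{-1/2}

alpha = 0.1                                  # teleport probability
Pi = alpha * np.linalg.inv(np.eye(len(A)) - (1 - alpha) * A_hat)

X = np.random.default_rng(0).standard_normal((4, 8))  # node features
propagated = Pi @ X                                    # PPR-smoothed features
```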
Vector graphics are an industry-standard way to represent and share visual designs. Designers frequently source and incorporate styles from existing designs into their own work. Unfortunately, popular design tools are not well suited for this task. We present VST, Vector Style Transfer, a novel design tool for flexibly transferring visual styles between vector graphics. The core of VST lies in leveraging automation while respecting designers' tastes and the subjectivity inherent to style transfer. In VST, designers tune a cross-design element correspondence and customize which style attributes to change. We report results from a user study in which designers used VST to control style transfer between several designs, including designs participants had created with external tools beforehand. VST shows that enabling design correspondence tuning and customization is one way to support interactive, flexible style transfer. We also find that VST users can complete style transfer in significantly less time and with less effort than experienced designers using industry-standard tools.
Graph neural networks (GNNs) are widely used to learn powerful representations of graph-structured data. Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks can further improve graph representations. However, there is an inherent gap between self-supervised tasks and downstream tasks in terms of optimization objectives and training data. Conventional pre-training methods may not be effective enough for knowledge transfer, since they make no adaptation to downstream tasks. To solve these problems, we propose a new transfer learning paradigm for GNNs that effectively leverages self-supervised tasks as auxiliary tasks to help the target task. Our method adaptively selects and combines different auxiliary tasks with the target task in the fine-tuning stage. We design an adaptive auxiliary loss weighting model that learns the weights of auxiliary tasks by quantifying the consistency between each auxiliary task and the target task, and we train this weighting model through meta-learning. Our method can be applied to various transfer learning approaches; it performs well not only in multi-task learning but also in pre-training and fine-tuning. Comprehensive experiments on multiple downstream tasks demonstrate that the proposed method effectively combines auxiliary tasks with the target task and significantly improves performance compared to state-of-the-art methods.
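One common way to quantify "consistency between an auxiliary task and the target task" is the cosine similarity of their gradients; the sketch below weights the auxiliary loss by that similarity, clipped at zero so conflicting auxiliaries are dropped. This is a plausible stand-in for intuition only, not the paper's learned, meta-trained weighting model.

```python
# Sketch: weight an auxiliary loss by gradient cosine similarity with the
# target task (a stand-in for the paper's learned weighting model).
import torch
import torch.nn as nn

model = nn.Linear(16, 4)
x = torch.randn(32, 16)
y_target = torch.randint(0, 4, (32,))
y_aux = torch.randn(32, 4)  # e.g., a self-supervised regression target

def flat_grad(loss):
    grads = torch.autograd.grad(loss, model.parameters(), retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

target_loss = nn.functional.cross_entropy(model(x), y_target)
aux_loss = nn.functional.mse_loss(model(x), y_aux)

g_t, g_a = flat_grad(target_loss), flat_grad(aux_loss)
w = torch.clamp(torch.cosine_similarity(g_t, g_a, dim=0), min=0.0)  # consistency
total = target_loss + w.detach() * aux_loss  # down-weight conflicting auxiliaries
total.backward()
```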
Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph structure is available. In practice, however, real-world graphs are often noisy and incomplete or might not be available at all. With this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph. This allows one to apply GCNs not only in scenarios where the given graph is incomplete or corrupted but also in those where a graph is not available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
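The core idea, a learnable probability per candidate edge from which discrete graphs are sampled, can be sketched minimally as below. A Bernoulli straight-through estimator stands in here for the paper's actual bilevel, hypergradient-based procedure; the graph size and the placeholder loss are assumptions.

```python
# Sketch: learnable edge probabilities with straight-through Bernoulli
# sampling (stand-in for the paper's bilevel optimization).
import torch

n = 6
edge_logits = torch.zeros(n, n, requires_grad=True)  # one logit per candidate edge

def sample_graph(logits):
    probs = torch.sigmoid(logits)
    hard = torch.bernoulli(probs.detach())   # discrete adjacency sample
    return hard + probs - probs.detach()     # straight-through gradient path

adj = sample_graph(edge_logits)
X = torch.randn(n, 8)
out = adj @ X                                # one GCN-style propagation step
loss = out.pow(2).mean()                     # placeholder downstream loss
loss.backward()                              # gradients flow to edge_logits
```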