This work investigates a modified Schelling model within the scope and aims of Social Physics. The main purpose is to examine how the concepts of potential and kinetic energy can be represented within a computational sociological system. In the Monetary Schelling model, a monetary value is assigned to every agent, and a set of dynamics governs how money is spent when agents change position and how it gradually decays. Introducing potential and kinetic energy allows the entropy to be calculated from the distribution of agent energies, as well as the internal energy of the system at each time point. The results show how the movements of the agents produce identity satisfaction with their neighbors, decreasing the internal energy of the system alongside the decay of the monetary holdings. Simulations in which agents are provided monetary values at fixed intervals show that such injections cause a subset of the agents to mobilize and explore new positions in search of satisfaction, increasing the entropy and the internal energy and moving the system away from its fixed point.
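To make these dynamics concrete, the following is a minimal Python sketch of one plausible Monetary Schelling step. The grid size, empty fraction, satisfaction threshold, move cost, decay rate, and entropy binning are illustrative assumptions, not the paper's exact parameters or update rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: grid size, empty fraction, satisfaction
# threshold, cost of one move, and per-step monetary decay (assumed).
SIZE, EMPTY, THRESH, MOVE_COST, DECAY = 30, 0.1, 0.5, 1.0, 0.99

# 0 = empty cell, 1/2 = the two agent identities.
grid = rng.choice([0, 1, 2], size=(SIZE, SIZE),
                  p=[EMPTY, (1 - EMPTY) / 2, (1 - EMPTY) / 2])
money = rng.uniform(5.0, 10.0, size=(SIZE, SIZE)) * (grid > 0)

def satisfaction(g, i, j):
    """Fraction of occupied Moore neighbours sharing the agent's identity."""
    nbrs = g[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel()
    same = np.sum(nbrs == g[i, j]) - 1          # exclude the agent itself
    occupied = np.sum(nbrs > 0) - 1
    return same / occupied if occupied > 0 else 1.0

def step(g, m):
    empties = list(zip(*np.where(g == 0)))
    for i, j in zip(*np.where(g > 0)):
        if g[i, j] == 0:                        # vacated earlier this sweep
            continue
        # Unsatisfied agents that can afford the move relocate randomly.
        if satisfaction(g, i, j) < THRESH and m[i, j] >= MOVE_COST and empties:
            ni, nj = empties.pop(rng.integers(len(empties)))
            g[ni, nj], g[i, j] = g[i, j], 0
            m[ni, nj], m[i, j] = m[i, j] - MOVE_COST, 0.0
            empties.append((i, j))
    m *= DECAY                                  # gradual monetary loss
    return g, m

for t in range(100):
    grid, money = step(grid, money)

# Shannon entropy of the binned monetary (energy-like) distribution.
hist, _ = np.histogram(money[grid > 0], bins=20)
p = hist / hist.sum()
entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
```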
In practice, the use of rounding is ubiquitous. Although researchers have studied the implications of rounding continuous random variables, rounding may be applied to functions of discrete random variables as well. For example, to infer excess suicide deaths after a national emergency, authorities may provide a rounded average of deaths before and after the emergency started. Suicide rates tend to be relatively low around the world, and such rounding may seriously affect inference on the change in suicide rate. In this paper, we study the scenario in which an average rounded to the nearest integer is used as a proxy for a non-negative discrete random variable. Specifically, our interest is in drawing inference on a parameter of the pmf of Y when we observe U = n[Y/n] as a proxy for Y. The probability generating function of U, E(U), and Var(U) capture the effect of the coarsening of the support of Y. Moments and estimators of distribution parameters are also explored for some special cases. We show that, under certain conditions, rounding has little impact. However, we also find scenarios where rounding can significantly affect statistical inference, as demonstrated in three examples. The simple methods we propose are able to partially counter the effects of rounding error. While for some probability distributions it may be difficult to derive maximum likelihood estimators as a function of U, we provide a framework for obtaining an estimator numerically.
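As a quick numerical illustration of the coarsening U = n[Y/n], the sketch below compares the first two moments of Y and U when Y is Poisson, by direct enumeration of the pmf. The choices of lambda and n are arbitrary, and np.rint's half-to-even tie-breaking stands in for whatever rounding convention the reporting authority uses.

```python
import numpy as np
from scipy.stats import poisson

lam, n = 3.0, 5
y = np.arange(0, 200)                  # support truncated where pmf is negligible
p = poisson.pmf(y, lam)
u = n * np.rint(y / n)                 # coarsened proxy U = n[Y/n]

EY, EU = np.sum(y * p), np.sum(u * p)
VY = np.sum((y - EY) ** 2 * p)
VU = np.sum((u - EU) ** 2 * p)
print(f"E(Y)={EY:.3f}  E(U)={EU:.3f}  Var(Y)={VY:.3f}  Var(U)={VU:.3f}")
```

Running the comparison for several (lambda, n) pairs makes the qualitative claim visible: when n is small relative to the spread of Y the moments barely move, while coarse n distorts both the mean and the variance.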
We present Consistent Assignment of Views over Random Partitions (CARP), a self-supervised clustering method for representation learning of visual features. CARP learns prototypes in an end-to-end online fashion using gradient descent, without additional non-differentiable modules to solve the cluster assignment problem. CARP optimizes a new pretext task based on random partitions of prototypes that regularizes the model and enforces consistency between the views' assignments. Additionally, our method improves training stability and prevents collapsed solutions in joint-embedding training. Through an extensive evaluation, we demonstrate that CARP's representations are suitable for learning downstream tasks. We evaluate CARP's representation capabilities on 17 datasets across many standard protocols, including linear evaluation, few-shot classification, k-NN, k-means, image retrieval, and copy detection. We compare CARP's performance to that of 11 existing self-supervised methods. We extensively ablate our method and demonstrate that our proposed random partition pretext task improves the quality of the learned representations by devising multiple random classification tasks. In transfer learning tasks, CARP achieves the best average performance against many SSL methods trained for longer.
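A minimal sketch of what a random-partition consistency objective could look like is given below. The block size, temperature, softmax form, and stop-gradient placement are assumptions made for illustration, not CARP's exact formulation.

```python
import torch
import torch.nn.functional as F

def random_partition_loss(z1, z2, prototypes, block_size=256, tau=0.1):
    """z1, z2: L2-normalised embeddings of two views, shape (B, D).
    prototypes: (K, D) learnable matrix. Each random block of prototypes
    defines one random classification task; the loss enforces that the
    two views agree on their assignments within every block."""
    K = prototypes.shape[0]
    perm = torch.randperm(K, device=z1.device)     # random partition
    logits1 = z1 @ prototypes.t() / tau            # (B, K) similarities
    logits2 = z2 @ prototypes.t() / tau
    loss, blocks = 0.0, perm.split(block_size)
    for idx in blocks:
        p = F.softmax(logits1[:, idx], dim=1)      # view-1 assignment
        q = F.softmax(logits2[:, idx], dim=1)      # view-2 assignment
        # Cross-view consistency: view 2 predicts view 1's assignment.
        loss = loss - (p.detach() * q.clamp_min(1e-8).log()).sum(1).mean()
    return loss / len(blocks)
```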
An important prerequisite for autonomous robots is their ability to reliably grasp a wide variety of objects. Most state-of-the-art systems employ specialized or simple end-effectors, such as two-jaw grippers, which severely limit the range of manipulable objects. Additionally, they conventionally require a structured and fully predictable environment, while the vast majority of our world is complex, unstructured, and dynamic. This paper presents an implementation that overcomes both issues. Firstly, the integration of a five-finger hand enhances the variety of possible grasps and manipulable objects. This kinematically complex end-effector is controlled by a deep-learning-based generative grasping network. The required virtual model of the unknown target object is iteratively completed by processing visual sensor data. Secondly, this visual feedback is employed to realize closed-loop servo control, which compensates for external disturbances. Our experiments on real hardware confirm the system's ability to reliably grasp unknown dynamic target objects without a priori knowledge of their trajectories. To the best of our knowledge, this is the first method to achieve dynamic multi-fingered grasping of unknown objects. A video of the experiments is available at //youtu.be/Ut28yM1gnvI.
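The following is hedged pseudocode of the described perception-to-control loop. All callables are injected, hypothetical stand-ins (complete_shape for the shape-completion module, plan_grasp for the generative grasping network, servo_step for the controller); no real robot API is implied.

```python
def grasp_loop(camera, hand, complete_shape, plan_grasp, servo_step,
               max_steps=200):
    """Closed-loop grasping sketch with injected placeholder components."""
    model = None
    for _ in range(max_steps):
        cloud = camera.capture()                 # visual sensor data
        model = complete_shape(model, cloud)     # iterative shape completion
        grasp = plan_grasp(model)                # five-finger grasp proposal
        if servo_step(hand, grasp, cloud):       # visual servoing converged?
            return hand.close_grasp()            # execute the final grasp
    return False
```

The point of the structure is that perception, planning, and control all run inside the loop, so a moving target or an external disturbance is re-observed and compensated for on every iteration.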
Despite the success of diffusion models (DMs), we still lack a thorough understanding of their latent space. To understand the latent space $\mathbf{x}_t \in \mathcal{X}$, we analyze it from a geometrical perspective. Our approach derives the local latent basis within $\mathcal{X}$ by leveraging the pullback metric associated with the models' encoding feature maps. Remarkably, the discovered local latent basis enables image editing: moving the latent $\mathbf{x}_t$ along a basis vector at specific timesteps edits the generated image. We further analyze how the geometric structure of DMs evolves over diffusion timesteps and differs across text conditions. This confirms the known coarse-to-fine generation phenomenon and reveals novel insights, such as the discrepancy between $\mathbf{x}_t$ across timesteps, the effect of dataset complexity, and the time-varying influence of text prompts. To the best of our knowledge, this paper is the first to present image editing through $\mathbf{x}$-space traversal, editing only once at a specific timestep $t$ without any additional training, and to provide thorough analyses of the latent structure of DMs. The code to reproduce our experiments can be found at //github.com/enkeejunior1/Diffusion-Pullback.
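A small sketch of how a local latent basis can be extracted from a pullback metric: take the right singular vectors of the Jacobian of an encoding feature map $f$ at $\mathbf{x}_t$. The full-Jacobian SVD below is only tractable at toy dimensionality, and the name `f` and the editing rule at the end are assumptions for illustration rather than the paper's exact computation (which would need low-rank/iterative methods).

```python
import torch

def local_basis(f, x_t, n_vectors=4):
    """Right singular vectors of J = df/dx at x_t, i.e. the directions in
    latent space that the pullback metric of f ranks as most significant."""
    x = x_t.detach()
    J = torch.autograd.functional.jacobian(lambda v: f(v).flatten(), x)
    J = J.reshape(-1, x.numel())                 # (feat_dim, latent_dim)
    _, S, Vh = torch.linalg.svd(J, full_matrices=False)
    basis = Vh[:n_vectors].reshape(n_vectors, *x_t.shape)
    return basis, S[:n_vectors]

# Editing sketch: perturb the latent once, at one chosen timestep t.
# x_edit = x_t + alpha * basis[0]
```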
The agriculture sector requires substantial labor and resources, so farmers are under constant pressure to adopt technology and automation to remain cost-effective. In this context, autonomous robots can play a very important role in carrying out agricultural tasks such as spraying, sowing, inspection, and even harvesting. This paper presents one such autonomous robot, which is able to identify plants and spray agro-chemicals precisely. The robot uses machine vision to find plants and RTK-GPS to navigate along a predetermined path. Experiments conducted in a field of potted plants yielded successful results.
In a spoken dialogue system, an NLU model is preceded by a speech recognition system whose errors can degrade the performance of natural language understanding. This paper proposes a method for investigating the impact of speech recognition errors on the performance of natural language understanding models. The proposed method combines the back transcription procedure with a fine-grained technique for categorizing the errors that affect the performance of NLU models. The method relies on the use of synthesized speech for NLU evaluation. We show that using synthesized speech in place of audio recordings does not significantly change the outcomes of the presented technique.
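A sketch of the back transcription loop under stated assumptions: `synthesize`, `transcribe`, and `nlu` are hypothetical, injected stand-ins for a TTS engine, an ASR system, and the NLU model under test, and the flat mismatch report stands in for the paper's fine-grained error categorization.

```python
def back_transcribe(utterances, synthesize, transcribe, nlu):
    """Run each written utterance through TTS then ASR, and compare the
    NLU output on the original text vs. the back-transcribed text."""
    report = []
    for text in utterances:
        audio = synthesize(text)        # synthesized speech, no recordings
        hyp = transcribe(audio)         # ASR hypothesis, with errors
        ref_out, hyp_out = nlu(text), nlu(hyp)
        if ref_out != hyp_out:          # ASR error changed the NLU result
            report.append({"ref": text, "hyp": hyp,
                           "ref_nlu": ref_out, "hyp_nlu": hyp_out})
    return report
```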
This study enhances option pricing by presenting a novel pricing model, fractional-order Black-Scholes-Merton (FOBSM), which builds on the Black-Scholes-Merton (BSM) model. The main goal is to improve the precision and realism of option prices, matching them more closely with the financial landscape. The approach integrates the strengths of both the BSM model and neural networks (NNs) to handle complex diffusion dynamics. This study emphasizes the need to take fractional derivatives into account when analyzing financial market dynamics: because FOBSM captures memory effects in sequential data, it simulates real-world systems better than integer-order models do. Findings reveal that, under complex diffusion dynamics, this hybridization approach improves the accuracy of option price predictions. The key contribution of this work lies in the development of a novel option pricing model (FOBSM) that leverages fractional calculus and neural networks to better capture complex diffusion dynamics and memory effects in financial data.
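Since the memory effect enters through the fractional derivative, a Grünwald-Letnikov discretization is one standard way to approximate it numerically. The sketch below is a generic approximation of $D^\alpha f$, not the paper's specific FOBSM scheme.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k C(alpha, k), via recurrence."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_fractional_derivative(f_vals, alpha, h):
    """Order-alpha derivative of f at the last grid point, from uniformly
    spaced samples f_vals. The sum runs over the whole history, which is
    exactly where the memory property of fractional models comes from."""
    w = gl_weights(alpha, len(f_vals))
    return np.dot(w, f_vals[::-1]) / h ** alpha

# Check: D^0.5 of f(t) = t at t = 1 should approach 2/sqrt(pi) ~ 1.128.
t = np.linspace(0.0, 1.0, 1001)
approx = gl_fractional_derivative(t, 0.5, t[1] - t[0])
```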
Human-in-the-loop aims to train an accurate prediction model at minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing works on human-in-the-loop from a data perspective and classify them into three categories with a progressive relationship: (1) work on improving model performance through data processing, (2) work on improving model performance through interventional model training, and (3) the design of independent human-in-the-loop systems. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and we briefly classify and discuss applications in natural language processing, computer vision, and other domains. We also highlight open challenges and opportunities. This survey intends to provide a high-level summary of human-in-the-loop research and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.
Seeking the equivalent entities among multi-source Knowledge Graphs (KGs) is the pivotal step in KG integration, also known as \emph{entity alignment} (EA). However, most existing EA methods are inefficient and scale poorly. A recent summary points out that some of them even require several days to process a dataset containing 200,000 nodes (DWY100K). We believe over-complex graph encoders and inefficient negative sampling strategies are the two main reasons. In this paper, we propose a novel KG encoder, the Dual Attention Matching Network (Dual-AMN), which not only models both intra-graph and cross-graph information smartly but also greatly reduces computational complexity. Furthermore, we propose the Normalized Hard Sample Mining Loss to smoothly select hard negative samples with reduced loss shift. Experimental results on widely used public datasets indicate that our method achieves both high accuracy and high efficiency. On DWY100K, the whole running process of our method finishes in 1,100 seconds, at least 10x faster than previous work. Our method also outperforms previous works across all datasets, improving Hits@1 and MRR by 6% to 13%.
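The paper's exact loss is not reproduced here; the following is a hedged sketch of a normalized hard-sample-mining objective in the same spirit, where a temperature-scaled LogSumExp over normalized similarities smoothly emphasizes hard negatives without an explicit sampling step (the tau and margin values are illustrative).

```python
import torch
import torch.nn.functional as F

def normalized_hard_mining_loss(h_src, h_tgt, tau=10.0, margin=0.1):
    """h_src, h_tgt: embeddings of pre-aligned entity pairs, shape (B, D).
    In-batch entities act as negatives; LogSumExp is a smooth max, so the
    hardest negatives dominate the gradient without discrete mining."""
    h_src, h_tgt = F.normalize(h_src, dim=1), F.normalize(h_tgt, dim=1)
    sim = h_src @ h_tgt.t()                        # (B, B) similarity matrix
    pos = sim.diagonal().unsqueeze(1)              # similarity of true pairs
    logits = tau * (sim - pos + margin)            # normalized gap to the match
    mask = torch.eye(sim.size(0), device=sim.device).bool()
    logits = logits.masked_fill(mask, float("-inf"))  # drop the positive term
    return torch.logsumexp(logits, dim=1).mean()
```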
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs on low-power devices with limited compute resources. Recent research improves DNN models by reducing their memory requirements, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys progress in low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. For the techniques in each category, we analyze their accuracy, advantages, and disadvantages, and discuss potential solutions to their problems. We also discuss new evaluation metrics as a guideline for future research.
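To ground category (1), here is a minimal, self-contained example of global magnitude pruning and symmetric post-training int8 weight quantization; the sparsity level and per-tensor scaling scheme are illustrative choices, not recommendations from the survey.

```python
import torch

def magnitude_prune(weight, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(weight.numel() * sparsity)
    thresh = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > thresh)

def quantize_int8(weight):
    """Symmetric per-tensor quantization to int8, then dequantization,
    showing the approximation error a real int8 kernel would incur."""
    scale = weight.abs().max() / 127.0
    q = torch.clamp((weight / scale).round(), -127, 127)
    return q * scale

w = torch.randn(256, 256)
w_sparse = magnitude_prune(w, sparsity=0.9)   # 90% of weights set to zero
w_quant = quantize_int8(w)                    # 8-bit approximation of w
```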