In today's data centers, the performance of interconnects plays a pivotal role. However, many of the underlying interconnect technologies are several decades old and predate data centers themselves. To better meet the requirements of data center networks, particularly intra-rack communication, we have developed a new interconnect based on a lossless link-layer protocol named RIFL. In this work, we designed and implemented RIFL Layer 2, a scalable network that supports multi-hundred-Gbps communication. RIFL Layer 2 comprises the RIFL switch and the RIFL NIC. By utilizing a simple Batcher-Banyan and iSLIP-based RIFL switch, we keep the typical intra-rack latency under 400 nanoseconds. Moreover, for a 32-port 100 Gbps network, under both Bernoulli and bursty arrival traffic patterns, the 99\% tail latency does not exceed 12 microseconds.
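For context on the scheduling side, the following is a minimal sketch of a single iSLIP request-grant-accept iteration for an N-port input-queued crossbar, the class of scheduler named in the abstract. The data structures and the stand-alone example are illustrative, not the RIFL switch implementation.

```python
# Minimal sketch of one iSLIP scheduling iteration for an N-port crossbar.
# Hypothetical data structures; not the authors' RIFL switch implementation.

def islip_iteration(requests, grant_ptr, accept_ptr):
    """One request-grant-accept round of single-iteration iSLIP.

    requests[i][j] is True if input i has a cell queued for output j (VOQ non-empty).
    grant_ptr[j]  is output j's round-robin pointer over inputs.
    accept_ptr[i] is input i's round-robin pointer over outputs.
    Returns a dict {input: output} of matched port pairs and updates the pointers.
    """
    n = len(requests)

    # Grant phase: each output picks the first requesting input at or after its pointer.
    grants = {}  # output -> input
    for j in range(n):
        for k in range(n):
            i = (grant_ptr[j] + k) % n
            if requests[i][j]:
                grants[j] = i
                break

    # Accept phase: each input picks the first granting output at or after its pointer.
    matches = {}  # input -> output
    for i in range(n):
        granting_outputs = [j for j, gi in grants.items() if gi == i]
        for k in range(n):
            j = (accept_ptr[i] + k) % n
            if j in granting_outputs:
                matches[i] = j
                # Pointers advance one position beyond the matched port, which gives
                # iSLIP its pointer desynchronization under heavy uniform traffic.
                grant_ptr[j] = (i + 1) % n
                accept_ptr[i] = (j + 1) % n
                break

    return matches


if __name__ == "__main__":
    # 4-port example: every input has a cell queued for every output.
    N = 4
    requests = [[True] * N for _ in range(N)]
    grant_ptr, accept_ptr = [0] * N, [0] * N
    print(islip_iteration(requests, grant_ptr, accept_ptr))
```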
This research delves into the landscape of Musculoskeletal Disorder (MSD) risk factors, employing a novel fusion of Natural Language Processing (NLP) techniques and mode-based ranking methodologies. The primary objective is to advance the comprehension of MSD risk factors, their classification, and their relative severity, facilitating more targeted preventive and management interventions. The study utilizes eight diverse models, integrating pre-trained transformers, cosine similarity, and various distance metrics to classify risk factors into personal, biomechanical, workplace, psychological, and organizational classes. Key findings reveal that the BERT model with cosine similarity attains an overall accuracy of 28%, while the sentence transformer, coupled with Euclidean, Bray-Curtis, and Minkowski distances, achieves a perfect accuracy score of 100%. In tandem with the classification effort, the research employs a mode-based ranking approach on survey data to discern the severity hierarchy of MSD risk factors. Intriguingly, the rankings align precisely with the previous literature, reaffirming the consistency and reliability of the approach. "Working posture" emerges as the most severe risk factor, emphasizing the critical role of proper posture in preventing MSDs. The collective perceptions of survey participants underscore the significance of factors like "Job insecurity," "Effort reward imbalance," and "Poor employee facility" in contributing to MSD risks. The convergence of rankings provides actionable insights for organizations aiming to reduce the prevalence of MSDs. The study concludes with implications for targeted interventions, recommendations for improving workplace conditions, and avenues for future research.
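As a rough illustration of the distance-based classification scheme described above, the sketch below embeds each risk factor and each class description with a pre-trained sentence transformer and assigns the nearest class. The model name and the class description strings are assumptions for the sake of a runnable example; only the five class labels come from the abstract.

```python
# Minimal sketch of distance-based risk-factor classification, assuming the
# sentence-transformers and scipy packages. Model name and class descriptions
# are illustrative placeholders, not the study's exact setup.
from sentence_transformers import SentenceTransformer
from scipy.spatial import distance

CLASS_DESCRIPTIONS = {
    "personal":       "age, gender, body mass index, physical fitness of the worker",
    "biomechanical":  "working posture, repetitive motion, forceful exertion, vibration",
    "workplace":      "poor employee facility, lighting, temperature, workstation design",
    "psychological":  "job insecurity, effort reward imbalance, mental stress",
    "organizational": "work schedule, shift length, lack of breaks, management support",
}

model = SentenceTransformer("all-MiniLM-L6-v2")  # any pre-trained sentence transformer
class_names = list(CLASS_DESCRIPTIONS)
class_vecs = model.encode([CLASS_DESCRIPTIONS[c] for c in class_names])

def classify(risk_factor: str, metric=distance.euclidean) -> str:
    """Assign a risk factor to the class whose description embedding is nearest."""
    vec = model.encode([risk_factor])[0]
    dists = [metric(vec, cv) for cv in class_vecs]
    return class_names[dists.index(min(dists))]

# Euclidean, Bray-Curtis, and Minkowski distances can be swapped in directly:
print(classify("working posture", metric=distance.euclidean))
print(classify("job insecurity", metric=distance.braycurtis))
print(classify("effort reward imbalance", metric=lambda u, v: distance.minkowski(u, v, p=3)))
```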
To meet the grand challenges of agricultural production, including climate change impacts on crop production, a tight integration of social science, technology, and agricultural expertise, including that of farmers, is needed. Rapid advances in information and communication technology, precision agriculture, and data analytics are creating fertile ground for smart connected farms (SCF) and networked farmers. A connected and coordinated farmer network provides unique advantages to farmers, enhancing farm production and profitability while helping them tackle adverse climate events. The aim of this article is to provide a comprehensive overview of the state of the art in SCF, covering advances in engineering, computer science, data science, social science, and economics, including data privacy, data sharing, and technology adoption.
Decision-makers often observe the occurrence of events through a reporting process. City governments, for example, rely on resident reports to find and then resolve urban infrastructural problems such as fallen street trees, flooded basements, or rat infestations. Without additional assumptions, there is no way to distinguish events that occur but are not reported from events that truly did not occur--a fundamental problem in settings with positive-unlabeled data. Because disparities in reporting rates correlate with resident demographics, addressing incidents only on the basis of reports leads to systematic neglect in neighborhoods that are less likely to report events. We show how to overcome this challenge by leveraging the fact that events are spatially correlated. Our framework uses a Bayesian spatial latent variable model to infer event occurrence probabilities and applies it to storm-induced flooding reports in New York City, further pooling results across multiple storms. We show that a model accounting for under-reporting and spatial correlation predicts future reports more accurately than other models, and further induces a more equitable set of inspections: its allocations better reflect the population and provide equitable service to non-white, less traditionally educated, and lower-income residents. This finding reflects heterogeneous reporting behavior learned by the model: reporting rates are higher in Census tracts with higher populations, proportions of white residents, and proportions of owner-occupied households. Our work lays the groundwork for more equitable proactive government services, even with disparate reporting behavior.
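To make the positive-unlabeled structure concrete, the following sketch simulates the generative process the abstract describes: a spatially correlated latent event field observed only through reports whose rates vary with demographics. All data here are synthetic and the simple AR(1) chain is only a stand-in for a full spatial prior; this is not the authors' model or the NYC data.

```python
# Minimal sketch of the under-reporting structure a spatial latent variable model
# must disentangle. Purely illustrative synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_tracts = 200

# Latent spatial field: neighboring tracts share flooding risk (a simple AR(1)
# chain stands in for a full spatial prior such as a CAR or GP model).
risk = np.zeros(n_tracts)
for i in range(1, n_tracts):
    risk[i] = 0.8 * risk[i - 1] + rng.normal(scale=0.6)
p_event = 1 / (1 + np.exp(-risk))          # P(flooding occurs in tract i)
event = rng.random(n_tracts) < p_event     # latent ground truth (never observed)

# Reporting rate depends on demographics (here a single synthetic covariate).
demographic = rng.random(n_tracts)         # e.g. proportion of owner-occupied households
p_report = 1 / (1 + np.exp(-(-1.0 + 2.5 * demographic)))

# Positive-unlabeled observation: a report requires BOTH an event and a resident report.
report = event & (rng.random(n_tracts) < p_report)

# Naively treating "no report" as "no event" under-counts events exactly where
# reporting rates are low, which is the bias the latent variable model corrects.
print("true event rate:    ", event.mean())
print("naive (report) rate:", report.mean())
```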
With the development of fifth-generation (5G) networks, the number of user equipments (UEs) is increasing dramatically, and the potential health risks from electromagnetic fields (EMF) have become a public concern. EMF exposure analysis generally considers the passive exposure from base stations (BSs) and the active exposure caused by the user's own device while communicating; the passive radiation generated by the nearby devices of other users is typically ignored. However, as UE density increases, this passive exposure of human bodies can no longer be neglected. In this work, we propose a stochastic geometry framework to analyze the EMF exposure from both active and passive radiation sources. In particular, considering a typical user, we account for their exposure to EMF from BSs, their own UE, and other users' UE. We derive the distribution of the Exposure Index (EI) and the coverage probability for two typical models of the spatial distribution of UE: \textit{i)} a Poisson point process (PPP) and \textit{ii)} a Matérn cluster process. We also show the trade-off between EMF exposure and coverage probability. Our numerical results suggest that the passive exposure from other users is non-negligible compared to the exposure from BSs when the user density is $10^2$ times the BS density, and non-negligible compared to the active exposure from the user's own UE when the user density is $10^5$ times the BS density.
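As a rough illustration of the stochastic geometry setting (the PPP case), the Monte Carlo sketch below draws BSs and other users' UE as Poisson point processes on a disc around a typical user and sums their path-loss-weighted received powers. Densities, transmit powers, the path-loss exponent, and the exclusion radius are illustrative placeholders, not the paper's parameters.

```python
# Monte Carlo sketch of passive exposure at a typical user from BSs and other
# users' UE, both drawn as Poisson point processes on a disc.
import numpy as np

rng = np.random.default_rng(1)
R = 1000.0              # disc radius (m)
lambda_bs = 1e-5        # BS density (per m^2)
lambda_ue = 1e-3        # UE density (per m^2), i.e. 100x the BS density
p_bs, p_ue = 10.0, 0.1  # transmit powers (W)
alpha = 3.5             # path-loss exponent
r_min = 1.0             # exclusion radius around the typical user (m)

def ppp_distances(lam):
    """Distances from the origin to points of a PPP of intensity lam on the disc."""
    n = rng.poisson(lam * np.pi * R**2)
    # radii of points uniform on a disc have density 2r / R^2
    return R * np.sqrt(rng.random(n))

def exposure_sample():
    d_bs = np.maximum(ppp_distances(lambda_bs), r_min)
    d_ue = np.maximum(ppp_distances(lambda_ue), r_min)
    # received power ~ P * d^-alpha (antenna gains, fading, and shadowing omitted)
    return p_bs * np.sum(d_bs**-alpha), p_ue * np.sum(d_ue**-alpha)

samples = np.array([exposure_sample() for _ in range(2000)])
print("mean exposure from BSs:      ", samples[:, 0].mean())
print("mean exposure from other UE: ", samples[:, 1].mean())
```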
Currently, there are over 14 billion IoT devices [7], and with many devices come many protocols, the main ones being MQTT and CoAP. We are interested in connecting these diverse IoT devices to the cloud. To do so, we use the middleware architecture proposed in [8], in which a device, called the middleware, acts as an intermediary between the various IoT networks and the cloud. Since IoT devices typically operate in real time, performance is of great concern. We therefore conducted a simulation to measure the data latency introduced by the middleware and the overall fairness between different IoT networks. Our simulation had an MQTT network and a CoAP network interacting with the middleware. The results showed that CoAP always had a lower travel time than MQTT, mainly because CoAP is a more lightweight protocol. However, we also found that MQTT had slightly higher throughput, which was unexpected, since we initially thought that CoAP would have higher throughput. We have shown that analyzing data via a middleware device is possible and that there are promising directions to explore, such as evaluating different Quality of Service algorithms in the context of a middleware device.
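The sketch below shows the shape of such an experiment as a small discrete-event simulation: two device networks push Poisson message arrivals through a single middleware queue toward the cloud, and per-protocol latencies are collected. The per-message overheads, arrival rates, and forwarding delay are made-up placeholders, not measured protocol costs or the authors' simulator.

```python
# Toy discrete-event sketch of a middleware latency experiment with an MQTT-like
# and a CoAP-like network sharing one middleware. Parameters are illustrative.
import heapq, random

random.seed(0)
SIM_TIME = 60.0                            # seconds of simulated traffic
RATES = {"MQTT": 50.0, "CoAP": 50.0}       # messages per second per network
OVERHEAD = {"MQTT": 0.004, "CoAP": 0.002}  # per-message protocol/processing cost (s)
FORWARD = 0.001                            # middleware-to-cloud forwarding time (s)

events = []  # (arrival_time, protocol)
for proto, rate in RATES.items():
    t = 0.0
    while t < SIM_TIME:
        t += random.expovariate(rate)      # Poisson arrivals
        heapq.heappush(events, (t, proto))

latencies = {p: [] for p in RATES}
busy_until = 0.0                           # middleware serves one message at a time
while events:
    arrival, proto = heapq.heappop(events)
    start = max(arrival, busy_until)
    busy_until = start + OVERHEAD[proto] + FORWARD
    latencies[proto].append(busy_until - arrival)

for proto, lat in latencies.items():
    lat.sort()
    print(f"{proto}: n={len(lat)}, mean={sum(lat)/len(lat)*1e3:.2f} ms, "
          f"p99={lat[int(0.99 * len(lat))]*1e3:.2f} ms")
```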
Neural implicit scene representations have recently shown encouraging results in dense visual SLAM. However, existing methods produce low-quality scene reconstructions and inaccurate localization when scaled up to large indoor scenes and long sequences. These limitations stem mainly from their single, global radiance field of finite capacity, which does not adapt to large scenes, and from end-to-end pose networks that are not robust to the accumulation of error over large scenes. To this end, we present PLGSLAM, a neural visual SLAM system that performs high-fidelity surface reconstruction and robust camera tracking in real time. To handle large-scale indoor scenes, PLGSLAM proposes a progressive scene representation method that dynamically allocates new local scene representations, each trained on frames within a local sliding window. This allows us to scale up to larger indoor scenes and improves robustness, even under pose drift. Within each local scene representation, PLGSLAM utilizes tri-planes for local high-frequency features and multi-layer perceptron (MLP) networks for low-frequency features, smoothness, and scene completion in unobserved areas. Moreover, we propose a local-to-global bundle adjustment method with a global keyframe database to address the increased pose drift on long sequences. Experimental results demonstrate that PLGSLAM achieves state-of-the-art scene reconstruction and tracking performance across various datasets and scenarios, in both small and large-scale indoor environments. The code will be open-sourced upon paper acceptance.
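To illustrate the tri-plane idea mentioned above, the sketch below projects 3D query points onto XY, XZ, and YZ feature planes, bilinearly samples and sums the plane features, and decodes them with a small MLP. The shapes, the decoder, and the signed-distance output are assumptions for a self-contained example, not PLGSLAM's exact architecture.

```python
# Minimal sketch of querying a tri-plane local scene representation with PyTorch.
import torch
import torch.nn.functional as F

C, RES = 32, 128                           # feature channels, plane resolution
planes = torch.nn.ParameterList(
    [torch.nn.Parameter(torch.randn(1, C, RES, RES) * 0.01) for _ in range(3)]
)
decoder = torch.nn.Sequential(             # small MLP decoder for low-frequency structure
    torch.nn.Linear(C, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
)

def triplane_features(xyz):
    """xyz: (N, 3) points normalized to [-1, 1] inside the local sliding window."""
    feats = 0.0
    for plane, dims in zip(planes, [(0, 1), (0, 2), (1, 2)]):   # XY, XZ, YZ planes
        uv = xyz[:, dims].view(1, -1, 1, 2)                     # (1, N, 1, 2) sample grid
        sampled = F.grid_sample(plane, uv, align_corners=True)  # (1, C, N, 1)
        feats = feats + sampled.squeeze(0).squeeze(-1).t()      # accumulate (N, C)
    return feats

points = torch.rand(1024, 3) * 2 - 1       # random query points in the local volume
sdf = decoder(triplane_features(points))   # (1024, 1) predicted distance values
print(sdf.shape)
```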
Diffusion models (DMs) have shown great potential for high-quality image synthesis. However, when producing images with complex scenes, properly capturing both the global image structure and object details remains challenging. In this paper, we present Frido, a Feature Pyramid Diffusion model that performs a multi-scale, coarse-to-fine denoising process for image synthesis. Our model decomposes an input image into scale-dependent vector-quantized features, followed by coarse-to-fine gating to produce the image output. During this multi-scale representation learning stage, additional input conditions such as text, scene graphs, or image layouts can be further exploited, so Frido can also be applied to conditional or cross-modality image synthesis. We conduct extensive experiments over various unconditional and conditional image generation tasks, ranging from text-to-image synthesis and layout-to-image to scene-graph-to-image and label-to-image. In particular, we achieve state-of-the-art FID scores on five benchmarks: layout-to-image on COCO and OpenImages, scene-graph-to-image on COCO and Visual Genome, and label-to-image on COCO. Code is available at //github.com/davidhalladay/Frido.
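The following schematic sketch conveys only the coarse-to-fine control flow of a multi-scale denoising pass: the coarsest latent is denoised first, and each finer scale is denoised conditioned on the upsampled result of the previous scale. The networks, update rule, and schedules are toy stand-ins, not Frido's trained components or sampler.

```python
# Schematic sketch of a coarse-to-fine, multi-scale denoising pass in PyTorch.
import torch
import torch.nn.functional as F

SCALES = [16, 32, 64]     # latent resolutions, coarse to fine
C, STEPS = 4, 50          # latent channels, denoising steps per scale

# One tiny denoiser per scale; a real model would be a conditioned U-Net.
denoisers = [torch.nn.Conv2d(2 * C, C, 3, padding=1) for _ in SCALES]

def denoise_pyramid():
    prev = torch.zeros(1, C, SCALES[0], SCALES[0])     # no coarser context at first
    for res, net in zip(SCALES, denoisers):
        x = torch.randn(1, C, res, res)                # start from noise at this scale
        cond = F.interpolate(prev, size=(res, res), mode="bilinear", align_corners=False)
        for _ in range(STEPS):
            eps = net(torch.cat([x, cond], dim=1))     # predict noise given coarse context
            x = x - eps / STEPS                        # toy update in place of a real sampler
        prev = x                                       # becomes context for the next scale
    return prev                                        # finest-scale output latent

print(denoise_pyramid().shape)   # torch.Size([1, 4, 64, 64])
```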
Images can convey rich semantics and induce various emotions in viewers. Recently, with the rapid advancement of emotional intelligence and the explosive growth of visual data, extensive research efforts have been dedicated to affective image content analysis (AICA). In this survey, we comprehensively review the development of AICA over the last two decades, focusing especially on state-of-the-art methods with respect to three main challenges -- the affective gap, perception subjectivity, and label noise and absence. We begin with an introduction to the key emotion representation models widely employed in AICA and a description of the available datasets for evaluation, together with a quantitative comparison of label noise and dataset bias. We then summarize and compare representative approaches to (1) emotion feature extraction, including both handcrafted and deep features, (2) learning methods for dominant emotion recognition, personalized emotion prediction, emotion distribution learning, and learning from noisy data or few labels, and (3) AICA-based applications. Finally, we discuss remaining challenges and promising future research directions, such as image content and context understanding, group emotion clustering, and viewer-image interaction.
The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, which must often remain local because of edge-computation requirements and licensing (data access) issues. While federated learning (FL) has emerged as a framework for model training without direct data sharing and exchange, effectively modeling complex spatio-temporal dependencies to improve forecasting remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model -- Cross-Node Federated Graph Neural Network (CNFGNN) -- which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes be generated locally on each node and remain decentralized. CNFGNN operates by disentangling temporal dynamics modeling on the devices from spatial dynamics modeling on the server, using alternating optimization to reduce communication cost and facilitate computation on edge devices. Experiments on a traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings, with no extra computation cost on edge devices and only a modest communication cost.
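The sketch below illustrates the cross-node split in the abstract: each node runs a local temporal encoder (here a GRU) on its own series and shares only the resulting embedding, while the server models spatial dependencies with a graph operation over the traffic graph. Dimensions, the dense toy adjacency, and the single forward pass are assumptions; the full alternating-optimization training loop is omitted.

```python
# Minimal sketch of a cross-node federated spatio-temporal forward pass in PyTorch.
import torch

N_NODES, T, F_IN, HID = 8, 12, 2, 16

# Local (on-device) temporal models: one GRU per node; raw series never leave the node.
local_grus = [torch.nn.GRU(F_IN, HID, batch_first=True) for _ in range(N_NODES)]

# Server-side spatial model: a simple normalized graph convolution A_hat @ H @ W.
adj = torch.eye(N_NODES) + torch.rand(N_NODES, N_NODES).round()   # toy adjacency
a_hat = adj / adj.sum(dim=1, keepdim=True)                        # row-normalized
server_gnn = torch.nn.Linear(HID, HID)
head = torch.nn.Linear(2 * HID, 1)                                # forecast from local + spatial

def forward(node_series):
    """node_series: list of (1, T, F_IN) tensors, one per node, kept on-device."""
    h_local = torch.cat([gru(x)[1][-1] for gru, x in zip(local_grus, node_series)])  # (N, HID)
    h_spatial = torch.relu(server_gnn(a_hat @ h_local))                              # (N, HID)
    return head(torch.cat([h_local, h_spatial], dim=1))                              # (N, 1)

series = [torch.randn(1, T, F_IN) for _ in range(N_NODES)]
print(forward(series).shape)   # torch.Size([8, 1])
```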
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs on low-power devices with limited compute resources. Recent research improves DNN models by reducing memory requirements, energy consumption, and the number of operations without significantly decreasing accuracy. This paper surveys progress in low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques fall into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, and disadvantages of the techniques in each category, along with potential solutions to their problems. We also discuss new evaluation metrics as a guideline for future research.
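As a concrete instance of category (1), the sketch below applies symmetric 8-bit post-training quantization to a weight tensor, cutting storage 4x relative to float32 at a small reconstruction error. It is a toy stand-alone example, not tied to any specific model or framework discussed in the survey.

```python
# Minimal sketch of symmetric per-tensor int8 post-training quantization.
import numpy as np

def quantize_int8(w):
    """Map float32 weights to int8 with a single per-tensor scale."""
    scale = np.abs(w).max() / 127.0          # largest magnitude maps to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)   # a toy weight matrix
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).mean()
print(f"storage: {w.nbytes} -> {q.nbytes} bytes, mean abs error: {err:.5f}")
```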