Arctic sea ice has steadily diminished as atmospheric greenhouse gas concentrations have increased. Using observed data from 1979 to 2019, we estimate a close contemporaneous linear relationship between Arctic sea ice area and cumulative carbon dioxide emissions. For comparison, we provide analogous regression estimates using simulated data from global climate models (drawn from the CMIP5 and CMIP6 model comparison exercises). The carbon sensitivity of Arctic sea ice area is considerably stronger in the observed data than in the climate models. Thus, for a given future emissions path, an ice-free Arctic is likely to occur much earlier than the climate models project. Furthermore, recent global climate modeling (from CMIP5 to CMIP6) has made little progress toward matching the observed carbon-climate response of Arctic sea ice.
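As a minimal sketch of the kind of contemporaneous regression described above (with synthetic placeholder series standing in for the observed records):

```python
# Minimal sketch: OLS fit of Arctic sea ice area on cumulative CO2
# emissions over 1979-2019. The arrays are hypothetical placeholders;
# the real series would come from observed ice-area and emissions records.
import numpy as np

cum_co2 = np.linspace(550, 1650, 41)   # GtCO2, cumulative (illustrative)
rng = np.random.default_rng(0)
ice_area = 6.2 - 0.003 * cum_co2 + rng.normal(0, 0.3, 41)  # million km^2

# Contemporaneous linear model: area_t = alpha + beta * E_t + eps_t
X = np.column_stack([np.ones_like(cum_co2), cum_co2])
alpha, beta = np.linalg.lstsq(X, ice_area, rcond=None)[0]
print(f"carbon sensitivity: {beta:.4f} million km^2 per GtCO2")

# Emissions level at which the fitted line implies an ice-free Arctic
print(f"implied ice-free threshold: {-alpha / beta:.0f} GtCO2")
```

The slope is the carbon sensitivity, and the zero-area intercept of the fitted line gives the implied ice-free emissions threshold for a given future emissions path.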
Global oil demand is rapidly increasing and is expected to reach 106.3 million barrels per day by 2040. It is therefore vital for hydrocarbon extraction industries to forecast their production in order to optimize operations and avoid losses. Big companies have realized that exploiting the power of deep learning (DL) and the massive amounts of data from various oil wells for this purpose can save substantial operational costs and reduce unwanted environmental impacts. In this direction, researchers have proposed models using conventional machine learning (ML) techniques for oil production forecasting. However, these techniques are ill-suited to the problem because they cannot capture the historical patterns found in time series data, resulting in inaccurate predictions. This research aims to overcome these issues by developing advanced data-driven regression models using sequential convolutions and long short-term memory (LSTM) units. Exhaustive analyses are conducted to select the optimal sequence length, model hyperparameters, and cross-well dataset formation to build highly generalized, robust models. A comprehensive experimental study on Volve oilfield data validates the proposed models. It reveals that the LSTM-based sequence learning model predicts oil production better than the 1-D convolutional neural network (CNN), with a mean absolute error (MAE) of 111.16 and an R$^2$ score of 0.98. The LSTM-based model also outperforms all existing state-of-the-art solutions and achieves a 37% improvement over a standard linear regression, which serves as the baseline model in this work.
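For concreteness, here is a minimal Keras sketch of an LSTM sequence regressor of the kind the paper describes; the window length, layer sizes, and feature count are illustrative assumptions rather than the paper's tuned hyperparameters:

```python
# Minimal sketch of an LSTM sequence regressor for oil-production
# forecasting: windows of `seq_len` past multivariate well measurements
# predict the next day's production. Layer sizes, window length, and
# feature count are illustrative assumptions, not the paper's tuned values.
import numpy as np
import tensorflow as tf

seq_len, n_features = 30, 8
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, n_features)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),   # next-step oil production
])
model.compile(optimizer="adam", loss="mae")

# Dummy stand-in for windowed, cross-well training data.
X = np.random.rand(256, seq_len, n_features).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```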
The laborious and costly nature of affect annotation is a key obstacle to obtaining large-scale corpora with valid and reliable affect labels. Motivated by the lack of tools that can effectively determine an annotator's reliability, this paper proposes general quality assurance (QA) tests for real-time continuous annotation tasks. Assuming that the annotation tasks rely on stimuli with audiovisual components, such as videos, we propose and evaluate two QA tests: a visual and an auditory one. We validate the QA tool with 20 annotators who are asked to go through the tests followed by a lengthy task of annotating the engagement of gameplay videos. Our findings suggest, unsurprisingly, that trained annotators are more reliable than the best untrained crowdworkers we could employ. Importantly, the introduced QA tool can predict the reliability of an affect annotator with 80% accuracy, thereby saving resources, effort, and cost, and maximizing the reliability of labels solicited in affective corpora. The QA tool is available through the PAGAN annotation platform.
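As a hedged illustration of how such a QA test could be scored (the correlation measure and pass threshold below are our assumptions, not the paper's exact criterion):

```python
# Illustrative sketch of scoring one QA test: compare an annotator's
# continuous trace against the known intensity signal of the QA stimulus.
# The correlation measure and pass threshold are assumptions for this sketch.
import numpy as np

def qa_reliability(annotation: np.ndarray, reference: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """Flag an annotator as reliable if their trace tracks the stimulus."""
    rho = np.corrcoef(annotation, reference)[0, 1]   # Pearson correlation
    return rho >= threshold

# Example: a sinusoidal reference stimulus and a noisy but faithful trace.
t = np.linspace(0, 1, 200)
reference = np.sin(2 * np.pi * t)
annotation = reference + np.random.default_rng(1).normal(0, 0.2, t.size)
print(qa_reliability(annotation, reference))   # True for a faithful trace
```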
Cloud-native architecture is becoming increasingly crucial for today's cloud computing environments due to the need for speed and flexibility in developing applications. It uses microservice technology to break traditional monolithic applications into lightweight, self-contained microservice components. However, as microservices grow in scale and develop dynamic inter-dependencies, they pose new resource-provisioning challenges that traditional resource-scheduling approaches cannot fully address. Microservices with different resource needs and latency requirements can form complex calling chains, making it difficult to provide fine-grained, accurate resource allocation to each component while maintaining the overall quality of service of the chain. In this work, we address the research problem of how to efficiently provision resources for a microservice platform of growing scale while ensuring the performance of latency-critical microservices. To this end, we present in-depth analyses of Alibaba's microservice cluster and propose optimized resource provisioning algorithms that enhance resource utilization while meeting latency requirements. First, we analyze the distinct features of microservices in Alibaba's cluster compared to traditional applications. Then we present Alibaba's resource capacity provisioning workflow and framework for large-scale, latency-critical microservice clusters. Finally, we propose resource provisioning algorithms that improve on Alibaba's current practice by making both proactive and reactive scheduling decisions based on different workload patterns, improving resource usage by 10-15% in Alibaba's clusters while maintaining the required latency for microservices.
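As an illustrative sketch of combining proactive and reactive decisions (the constants, names, and policy below are our assumptions, not Alibaba's production algorithm):

```python
# Hedged sketch of a hybrid provisioning decision in the spirit of combined
# proactive/reactive scheduling; constants, names, and the policy itself are
# illustrative, not Alibaba's production algorithm.
import math

def target_replicas(predicted_rps: float, observed_p99_ms: float,
                    current_replicas: int, rps_per_replica: float = 100.0,
                    latency_slo_ms: float = 50.0) -> int:
    # Proactive: size the service for the forecast workload pattern.
    proactive = max(1, math.ceil(predicted_rps / rps_per_replica))
    # Reactive: if the latency SLO is already breached, scale out
    # immediately regardless of the forecast.
    if observed_p99_ms > latency_slo_ms:
        return max(proactive, current_replicas + 1)
    return proactive

print(target_replicas(predicted_rps=950, observed_p99_ms=72,
                      current_replicas=9))   # -> 10
```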
Fisheye cameras are widely employed in automatic parking, and video stream object detection (VSOD) from fisheye cameras is a fundamental perception function for ensuring the safe operation of vehicles. Past research has generally ignored the discrepancy, caused by perception-system latency, between the deep learning model's output and the actual scene at the current moment. Yet the environment inevitably changes within this delay, which can create a safety hazard. In this paper, we propose a real-time detection framework equipped with a dual-flow perception module (dynamic and static flows) that can predict the future and alleviate the time-lag problem. We also use a new scheme to evaluate latency and accuracy jointly. Because the strong radial distortion of the fisheye camera makes the standard bounding box unsuitable, and the primary detection objects in parking perception are vehicles and pedestrians, we adopt a rotated bounding box and propose a new periodic angle loss function to regress the box angle, yielding a simple and accurate object representation. Instance segmentation ground truth is used to supervise the training. Experiments demonstrate the effectiveness of our approach. Code is released at: //gitee.com/hiyanyx/fisheye-streaming-perception.
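One plausible form of such a periodic angle loss, hedged as our illustrative variant rather than the paper's exact formulation, makes the penalty invariant to the pi-periodicity of a rectangle's orientation:

```python
# Illustrative periodic angle loss for rotated boxes: sin^2(dtheta) is
# smooth, bounded, and pi-periodic, so predicting theta or theta + pi for
# a rectangle incurs (near-)zero penalty. This is our variant, not
# necessarily the paper's exact formulation.
import torch

def periodic_angle_loss(theta_pred: torch.Tensor,
                        theta_gt: torch.Tensor) -> torch.Tensor:
    return torch.sin(theta_pred - theta_gt).pow(2).mean()

pred = torch.tensor([0.1, 1.5])
gt = torch.tensor([0.1 + torch.pi, 1.4])  # first box differs by exactly pi
print(periodic_angle_loss(pred, gt))      # ~0.005: only the 0.1 rad error counts
```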
Wind energy has been increasingly adopted to mitigate climate change. However, the variability of wind energy causes wind curtailment, resulting in considerable economic losses for wind farm owners. Wind curtailment can be reduced using battery energy storage systems (BESS) as onsite backup sources. Yet this auxiliary role may significantly weaken the economic potential of BESS in energy trading. Ideal BESS scheduling should balance onsite wind curtailment reduction against market bidding, but practical implementation is challenging due to coordination complexity and the stochastic nature of energy prices and wind generation. We investigate the joint-market bidding strategy of a co-located wind-battery system in the spot and Regulation Frequency Control Ancillary Service markets. We propose a novel deep reinforcement learning-based approach that decouples the system's market participation into two related Markov decision processes, one for each facility, enabling the BESS to absorb onsite wind curtailment while performing joint-market bidding to maximize overall operational revenues. Using realistic wind farm data, we validated the coordinated bidding strategy, which surpasses the optimization-based benchmark with approximately 25\% higher revenue and 2.3 times more wind curtailment reduction. Our results show that joint-market bidding can significantly improve the financial performance of wind-battery systems compared to participating in each market separately. Simulations also show that using curtailed wind generation to charge the BESS can yield additional financial gains. Successful implementation of our algorithm would encourage the co-location of generation and storage assets, unlocking wider system benefits.
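The following skeletal sketch conveys the decoupled two-MDP structure with greedy rules standing in for the learned DRL policies; names, units (1-hour intervals), and the market interface are illustrative assumptions, not the paper's implementation:

```python
# Skeletal sketch of the decoupled two-MDP dispatch loop: the wind farm bids
# into the spot market while the BESS first absorbs curtailed wind and then
# offers its remaining power capability as regulation (FCAS) capacity.
# Greedy rules stand in for the learned policies; units assume 1-hour steps.
def dispatch_interval(wind_mw, spot_price, fcas_price,
                      soc_mwh, bess_power_mw, bess_cap_mwh, grid_limit_mw):
    # Wind agent (MDP 1): bid available output into spot, capped by the grid.
    wind_bid = min(wind_mw, grid_limit_mw)
    curtailed = max(0.0, wind_mw - wind_bid)

    # BESS agent (MDP 2): charge from curtailed wind first, subject to power
    # and energy headroom, then bid leftover power as regulation capacity.
    charge = min(curtailed, bess_power_mw, bess_cap_mwh - soc_mwh)
    fcas_bid = bess_power_mw - charge

    revenue = wind_bid * spot_price + fcas_bid * fcas_price
    return revenue, soc_mwh + charge  # revenue and updated state of charge

rev, soc = dispatch_interval(wind_mw=120, spot_price=55, fcas_price=12,
                             soc_mwh=10, bess_power_mw=30, bess_cap_mwh=60,
                             grid_limit_mw=100)
print(rev, soc)   # 100*55 + 10*12 = 5620, soc 30
```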
We prove the first unconditional consistency result for superpolynomial circuit lower bounds with a relatively strong theory of bounded arithmetic. Namely, we show that the theory V$^0_2$ is consistent with the conjecture that NEXP $\not\subseteq$ P/poly, i.e., that some problem solvable in non-deterministic exponential time does not have polynomial-size circuits. We suggest this is the best currently available evidence for the truth of the conjecture. The same techniques establish the same results with NEXP replaced by the class of problems decidable in non-deterministic barely superpolynomial time, such as NTIME$(n^{O(\log\log\log n)})$. Additionally, we establish a magnification result on the hardness of proving circuit lower bounds.
Supervised deep neural network approaches have been investigated for solving inverse problems in many domains, especially radiology, where imaging technologies are at the heart of diagnostics. In deployment, however, these models are exposed to input distributions that are widely shifted from the training data, due in part to data biases or drifts. It therefore becomes crucial to know whether a given input lies outside the training data distribution before relying on the reconstruction for diagnosis. The goal of this work is three-fold: (i) demonstrate the use of the local Lipschitz value as an uncertainty-estimation threshold for determining suitable performance, (ii) provide a method for identifying out-of-distribution (OOD) images on which the model may not have generalized, and (iii) use local Lipschitz values to guide proper data augmentation by identifying false positives and decreasing epistemic uncertainty. We provide results for both MRI reconstruction and CT sparse-view-to-full-view reconstruction using the AUTOMAP and UNET architectures, since it is pertinent in the medical domain that reconstructed images remain diagnostically accurate.
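As one concrete reading of goal (i), here is a hedged sketch that lower-bounds the local Lipschitz value around an input by random probing; the paper's exact computation may differ, and `model` stands for any trained reconstruction network:

```python
# Hedged sketch: lower-bound the local Lipschitz value of a reconstruction
# network around an input x by probing random small perturbations and taking
# the largest output/input change ratio. The probing scheme is an assumption;
# the paper's exact computation may differ.
import torch

@torch.no_grad()
def local_lipschitz(model, x, eps=1e-2, n_probes=32):
    y = model(x)
    worst = 0.0
    for _ in range(n_probes):
        delta = torch.randn_like(x)
        delta = eps * delta / delta.norm()
        ratio = (model(x + delta) - y).norm() / delta.norm()
        worst = max(worst, ratio.item())
    return worst  # large values flag inputs where the model may not generalize

# Usage: flag an input as OOD when local_lipschitz(model, x) exceeds a
# threshold calibrated on in-distribution validation data.
```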
The Internet of Things (IoT) is defined as the connection of places and physical objects (i.e., things) over the internet/network via smart computing devices. We observed that IoT software developers share solutions to programming questions as code examples on three Stack Exchange Q&A sites: Stack Overflow (SO), Arduino, and Raspberry Pi. Previous research found vulnerabilities/weaknesses in C/C++ code examples shared on Stack Overflow; however, those studies neither investigated C/C++ code examples related to IoT nor looked beyond SO. In this paper, we conduct a large-scale empirical study of all IoT C/C++ code examples shared on the three Stack Exchange sites, i.e., SO, Arduino, and Raspberry Pi. From the 11,329 code snippets obtained from the three sites, we identify 29 distinct CWE (Common Weakness Enumeration) types in 609 snippets. These CWE types fall into 8 general weakness categories, and we observe that evaluation-, memory-, and initialization-related weaknesses are the most commonly introduced by users when posting programming solutions. Furthermore, we find that 39.58% of the vulnerable code snippets contain instances of CWE types that can be mapped to real-world occurrences of those CWE types (i.e., CVE instances). The largest number of vulnerable IoT code examples was found in Arduino, followed by SO and Raspberry Pi. Memory-related vulnerabilities are on the rise across the sites. For example, of the 3,595 mapped CVE instances, we find that 28.99% result in Denial of Service (DoS) errors, which is particularly harmful for network-reliant IoT devices such as smart cars. Our results can help various IoT stakeholders become aware of such vulnerable IoT code examples and can inform IoT researchers developing tools that help prevent developers from sharing such vulnerable code examples on these sites. [Abridged].
Graph neural networks (GNNs) have demonstrated a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models through methods such as GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs has been unavailable. In this survey, we present a comprehensive review of explainability techniques developed for GNNs, categorizing them by the type of explanation method used. We further summarize the common performance metrics for GNN explanations and point out several future research directions.
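To make the edge-masking idea behind methods like GNNExplainer concrete, here is a minimal, self-contained sketch (not the official implementation): a soft mask over edges is optimized to preserve the frozen model's prediction while a sparsity penalty shrinks the mask, so the surviving high-weight edges form the explanation. The toy dense-adjacency GNN is an illustrative stand-in for any trained model.

```python
# Minimal, self-contained sketch of the edge-masking idea behind
# GNNExplainer (not the official implementation): optimize a soft edge mask
# to preserve a frozen model's prediction while a sparsity term shrinks it.
import torch

class TinyGNN(torch.nn.Module):  # toy one-layer, dense-adjacency GNN
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = torch.nn.Linear(d_in, d_out)
    def forward(self, x, adj):
        return (adj @ self.lin(x)).mean(dim=0)  # graph-level logits

n, d = 6, 4
x, adj = torch.rand(n, d), (torch.rand(n, n) > 0.5).float()
model = TinyGNN(d, 2)
for p in model.parameters():     # the model to be explained stays frozen
    p.requires_grad_(False)
with torch.no_grad():
    target = model(x, adj).argmax()

mask_logits = torch.zeros_like(adj, requires_grad=True)
opt = torch.optim.Adam([mask_logits], lr=0.1)
for _ in range(100):
    mask = torch.sigmoid(mask_logits)
    logits = model(x, adj * mask)
    loss = (torch.nn.functional.cross_entropy(logits[None], target[None])
            + 0.05 * mask.mean())         # prediction fidelity + sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()

# Edges with high mask weight constitute the explanation.
print(torch.sigmoid(mask_logits).round())
```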
The world population is anticipated to increase by close to 2 billion by 2050, causing a rapid escalation of food demand. A recent projection shows that, despite some advancements, the world is lagging behind the "Zero Hunger" goal. Socio-economic and well-being fallout will affect food security, and vulnerable groups of people will suffer malnutrition. To cater to the needs of the increasing population, the agricultural industry needs to be modernized, become smart, and be automated. Traditional agriculture can be remade into efficient, sustainable, eco-friendly smart agriculture by adopting existing technologies. In this survey paper, we present the applications, technological trends, available datasets, networking options, and challenges in smart agriculture. We discuss, through various application fields, how Agro Cyber Physical Systems are built upon the Internet-of-Agro-Things, and we also discuss Agriculture 4.0 as a whole. We focus on technologies such as Artificial Intelligence (AI) and Machine Learning (ML), which support automation, along with Distributed Ledger Technology (DLT), which provides data integrity and security. After an in-depth study of different architectures, we also present a smart agriculture framework that relies on the location of data processing. We divide the open research problems of smart agriculture, as future research work, into two groups: a technological perspective and a networking perspective. AI, ML, the blockchain as a DLT, and Physical Unclonable Function (PUF)-based hardware security fall under the technology group, whereas network-related attacks, fake data injection, and similar threats fall under the networking group.