This paper explores the application of automated machine learning (AutoML) techniques to the construction industry, a sector vital to the global economy. Traditional ML model construction is complex, time-consuming, reliant on data science expertise, and expensive. AutoML shows the potential to automate many of these tasks and to produce ML models that outperform manually built ones. This paper aims to verify the feasibility of applying AutoML to industrial datasets in the smart construction domain, with a specific case study demonstrating its effectiveness. Beyond the usual steps of dataset preparation, model training, and evaluation, we focus on two data challenges that are unique to industrial construction datasets. A real-world application case of construction project type prediction is provided to illustrate the accessibility of AutoML. By leveraging AutoML, construction professionals without data science expertise can use software to turn industrial data into ML models that assist in project management. The findings in this paper may bridge the gap between data-intensive smart construction practices and the emerging field of AutoML, encouraging its adoption for improved decision-making, project outcomes, and efficiency.
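The abstract does not name a specific AutoML tool, so the following is only a minimal sketch of the kind of workflow described, using the open-source auto-sklearn library as an illustration; the file name, column names, and time budget are assumptions.

```python
# Illustrative sketch only: the paper does not specify an AutoML framework;
# auto-sklearn is used here purely as an example of the described workflow.
import pandas as pd
from sklearn.model_selection import train_test_split
import autosklearn.classification

# Hypothetical tabular dataset of construction project records
df = pd.read_csv("construction_projects.csv")   # assumed file and schema
X = df.drop(columns=["project_type"])           # assumed feature columns (categorical
y = df["project_type"]                          # columns may need encoding first)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# AutoML searches model families and hyperparameters automatically within a time budget
automl = autosklearn.classification.AutoSklearnClassifier(time_left_for_this_task=600)
automl.fit(X_train, y_train)
print("Held-out accuracy:", automl.score(X_test, y_test))
```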
Reinforcement learning (RL) algorithms face the challenge of limited data efficiency, particularly when dealing with high-dimensional state spaces and large-scale problems. Most RL methods rely solely on state transition information within the same episode when updating the agent's Critic, which can lead to low data efficiency and sub-optimal training time consumption. Inspired by human-like analogical reasoning abilities, we introduce a novel mesh information propagation mechanism, termed the 'Imagination Mechanism (IM)', designed to significantly enhance the data efficiency of RL algorithms. Specifically, IM enables information generated by a single sample to be effectively broadcast to different states, instead of being transmitted only within the same episode; this allows the model to better capture the interdependencies between states and learn from scarce samples more efficiently. To promote versatility, we extend the imagination mechanism to function as a plug-and-play module that can be seamlessly integrated into other widely adopted RL models. Our experiments demonstrate that the imagination mechanism consistently boosts four mainstream RL algorithms, namely SAC, PPO, DDPG, and DQN, by a considerable margin, ultimately leading to superior performance across various tasks. For access to our code and data, please visit //github.com/Zero-coder/FECAM.
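The abstract leaves the internal details of IM to the paper; the sketch below is only a speculative illustration of the broadcasting idea, in which a similarity measure over state embeddings is used to propagate one sampled transition's value target to analogous states from a replay buffer. All names (critic, encoder, buffer_states) are hypothetical, and the actual IM formulation may differ.

```python
import torch
import torch.nn.functional as F

def broadcast_value_targets(critic, encoder, sampled_state, sampled_target,
                            buffer_states, top_k=32):
    """Speculative sketch of an 'imagination'-style broadcast: reuse one
    transition's TD target for the most similar states in the replay buffer,
    weighted by similarity. Not the authors' actual IM module."""
    with torch.no_grad():
        z = encoder(sampled_state.unsqueeze(0))        # (1, d) embedding of the sample
        zb = encoder(buffer_states)                    # (N, d) embeddings of buffer states
        sim = F.cosine_similarity(z, zb, dim=-1)       # (N,) state-to-state similarity
        w, idx = sim.topk(min(top_k, sim.numel()))     # most "analogous" states
        w = torch.softmax(w, dim=0)                    # similarity weights
    # Auxiliary loss pulling critic values of similar states toward the broadcast target
    v = critic(buffer_states[idx]).squeeze(-1)
    return (w * (v - sampled_target) ** 2).sum()
```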
This chapter explores the complex realm of autonomous cars, analyzing their fundamental components and operational characteristics. The discussion begins by elucidating the internal mechanics of these vehicles, encompassing the crucial roles of sensors, artificial intelligence (AI) identification systems, control mechanisms, and their integration with cloud-based servers within the framework of the Internet of Things (IoT). It then delves into practical implementations of autonomous cars, emphasizing their use in forecasting traffic patterns and transforming the dynamics of transportation. The text also explores Robotic Process Automation (RPA), illustrating how autonomous cars affect different businesses through the automation of tasks. The primary focus of this investigation is cybersecurity in the context of autonomous vehicles. A comprehensive analysis is conducted of risk management solutions aimed at protecting these vehicles from potential threats, alongside the ethical, environmental, legal, professional, and social dimensions, offering a comprehensive perspective on their societal implications. The chapter closes with a strategic plan for addressing these challenges and strategies for effectively traversing the complex terrain of autonomous car systems, cybersecurity, hazards, and related concerns, supported by a comprehensive compilation of resources for further investigation. Keywords: RPA, Cyber Security, AV, Risk, Smart Cars
The present study proposes clustering techniques for designing demand response (DR) programs for commercial and residential prosumers. The goal is to alter the consumption behavior of the prosumers within a distributed energy community in Italy. This aggregation aims to: a) minimize the reverse power flow at the primary substation, occurring when generation from solar panels in the local grid exceeds consumption, and b) shift the system-wide peak demand, which typically occurs during late afternoon. In the clustering stage, we consider daily prosumer load profiles and divide them across the extracted clusters. Three popular machine learning algorithms are employed, namely k-means, k-medoids, and agglomerative clustering. We evaluate the methods using multiple metrics, including a novel metric proposed within this study, the peak performance score (PPS). The k-means algorithm with dynamic time warping distance and 14 clusters exhibits the highest performance, with a PPS of 0.689. Subsequently, we analyze each extracted cluster with respect to load shape, entropy, and load types. These characteristics are used to identify the clusters that can serve the optimization objectives and to match them to appropriate DR schemes, including time-of-use, critical peak pricing, and real-time pricing. Our results confirm the effectiveness of the proposed clustering approach in generating meaningful flexibility clusters, while the derived DR pricing policy encourages consumption during off-peak hours. The developed methodology is robust to low availability and quality of training data and can be used by aggregator companies for segmenting energy communities and developing personalized DR policies.
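For concreteness, below is a minimal sketch of the best-performing configuration reported above (k-means with dynamic time warping distance and 14 clusters), using the tslearn library; the data source, sampling resolution, and variable names are assumptions, and the PPS metric is not reproduced here since its definition is given in the paper itself.

```python
import numpy as np
from tslearn.preprocessing import TimeSeriesScalerMeanVariance
from tslearn.clustering import TimeSeriesKMeans

# Assumed input: one daily load profile per row, e.g. 96 quarter-hourly readings
profiles = np.load("daily_load_profiles.npy")            # assumed file, shape (n_profiles, 96)
X = TimeSeriesScalerMeanVariance().fit_transform(profiles)

# k-means with DTW distance and 14 clusters, as in the reported best setup
km = TimeSeriesKMeans(n_clusters=14, metric="dtw", random_state=0)
labels = km.fit_predict(X)

# Cluster centroids can then be inspected for load shape, entropy, and DR suitability
centroids = km.cluster_centers_.squeeze(-1)
```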
This paper briefly deals with the more or less misused practice of normalization in self-modeling/multivariate curve resolution (S/MCR). The importance of correctly using ODE solvers, together with apt kinetic illustrations, is elucidated. The new terms external and internal normalization are defined and interpreted. The problem of the reducibility of a matrix is touched upon. Improper generalizations/developments of normalization-based methods are cited as examples. The position of the extreme values of the signal contribution function is clarified. An executable notebook using the MATLAB Live Editor was created for algorithmic explanations and depictions.
As Basu (1977) writes, "Eliminating nuisance parameters from a model is universally recognized as a major problem of statistics," but after more than 50 years since Basu wrote these words, the two mainstream schools of thought in statistics have yet to solve the problem. Fortunately, the two mainstream frameworks aren't the only options. This series of papers rigorously develops a new and very general inferential model (IM) framework for imprecise-probabilistic statistical inference that is provably valid and efficient, while simultaneously accommodating incomplete or partial prior information about the relevant unknowns when it's available. The present paper, Part III in the series, tackles the marginal inference problem. Part II showed that, for parametric models, the likelihood function naturally plays a central role and, here, when nuisance parameters are present, the same principles suggest that the profile likelihood is the key player. When the likelihood factors nicely, so that the interest and nuisance parameters are perfectly separated, the valid and efficient profile-based marginal IM solution is immediate. But even when the likelihood doesn't factor nicely, the same profile-based solution remains valid and leads to efficiency gains. This is demonstrated in several examples, including the famous Behrens--Fisher and gamma mean problems, where I claim the proposed IM solution is the best solution available. Remarkably, the same profiling-based construction offers validity guarantees in the prediction and non-parametric inference problems. Finally, I show how a broader view of this new IM construction can handle non-parametric inference on risk minimizers and makes a connection between non-parametric IMs and conformal prediction.
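For reference, the profile likelihood mentioned above is the standard construction in which the nuisance parameter is eliminated by maximization; the IM-specific use of this object is developed in the paper itself. Writing $\theta$ for the interest parameter and $\eta$ for the nuisance parameter,

$$ L_p(\theta) \;=\; \sup_{\eta} L(\theta, \eta), \qquad R_p(\theta) \;=\; \frac{L_p(\theta)}{\sup_{\theta'} L_p(\theta')}, $$

where $R_p$ is the relative profile likelihood on which the marginal IM output is based.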
Reconfigurable intelligent surfaces (RIS) can be crucial in next-generation communication systems. However, designing the RIS phases according to the instantaneous channel state information (CSI) can be challenging in practice due to the short coherence time of the channel. In this regard, we propose a novel algorithm based on the channel statistics of massive multiple-input multiple-output systems rather than the instantaneous CSI. The beamforming at the base station (BS), the power allocation of the users, and the phase shifts at the RIS elements are optimized to maximize the minimum signal-to-interference-plus-noise ratio (SINR), guaranteeing fair operation among the users. In particular, we design the RIS phases by leveraging the asymptotic deterministic equivalent of the minimum SINR, which depends only on the channel statistics. This significantly reduces the computational complexity and the amount of control data exchanged between the BS and the RIS for updating the phases. This setup is also useful for electromagnetic field (EMF)-aware systems with constraints on the maximum user exposure to EMF. The numerical results show that the proposed algorithms achieve more than a $100\%$ gain in terms of minimum SINR, compared to a system with random RIS phase shifts, when $40$ RIS elements, $20$ antennas at the BS, and $10$ users are considered.
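A generic statement of the max-min SINR design problem described above (the notation here is illustrative, not the paper's) is

$$ \max_{\mathbf{W},\, \mathbf{p},\, \boldsymbol{\theta}} \;\; \min_{k} \; \mathrm{SINR}_k(\mathbf{W}, \mathbf{p}, \boldsymbol{\theta}) \quad \text{s.t.} \quad \sum_{k} p_k \le P_{\max}, \qquad |\theta_n| = 1, \; n = 1, \dots, N, $$

where $\mathbf{W}$ collects the BS beamforming vectors, $\mathbf{p}$ the user power allocations, and $\boldsymbol{\theta}$ the unit-modulus RIS phase shifts; the proposed design replaces $\mathrm{SINR}_k$ by its asymptotic deterministic equivalent, which depends only on the channel statistics.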
Surface defect inspection is of great importance for industrial manufacturing and production. Although defect inspection methods based on deep learning have made significant progress, challenges remain, such as indistinguishable weak defects and defect-like interference in the background. To address these issues, we propose a transformer network with multi-stage CNN (Convolutional Neural Network) feature injection for surface defect segmentation, a UNet-like structure named CINFormer. CINFormer presents a simple yet effective feature integration mechanism that injects multi-level CNN features of the input image into different stages of the transformer encoder. This retains the merit of the CNN in capturing detailed features and that of the transformer in suppressing background noise, which facilitates accurate defect detection. In addition, CINFormer presents a Top-K self-attention module that focuses on the tokens carrying the most defect-relevant information, further reducing the impact of the redundant background. Extensive experiments conducted on the surface defect datasets DAGM 2007, Magnetic Tile, and NEU show that the proposed CINFormer achieves state-of-the-art performance in defect detection.
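To illustrate the Top-K idea, here is a simplified single-head sketch in PyTorch that keeps only the K largest attention scores per query token and masks out the rest before the softmax; the actual module in CINFormer (its placement, head count, and K schedule) may differ.

```python
import torch
import torch.nn.functional as F

def topk_self_attention(q, k, v, top_k):
    """Single-head self-attention that attends only to the top-k scoring keys
    per query, suppressing attention to redundant background tokens.
    q, k, v: tensors of shape (batch, num_tokens, dim)."""
    d = q.size(-1)
    scores = torch.matmul(q, k.transpose(-2, -1)) / d ** 0.5     # (B, N, N)
    top_k = min(top_k, scores.size(-1))
    topk_vals, _ = scores.topk(top_k, dim=-1)                    # (B, N, top_k)
    threshold = topk_vals[..., -1, None]                         # k-th largest score per query
    scores = scores.masked_fill(scores < threshold, float("-inf"))
    attn = F.softmax(scores, dim=-1)                             # attention over top-k keys only
    return torch.matmul(attn, v)
```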
Data-driven personalization is a key practice in fashion e-commerce, improving the way businesses serve their consumers' needs with more relevant content. While hyper-personalization offers highly targeted experiences to each consumer, it requires a significant amount of private data to create an individualized journey. To alleviate this, group-based personalization provides a moderate level of personalization built on the broader common preferences of a consumer segment, while still being able to personalize the results. We introduce UNICON, a unified deep learning consumer segmentation framework that leverages rich consumer behavior data to learn long-term latent representations and utilizes them to extract two pivotal types of segmentation catering to various personalization use cases: lookalike, expanding a predefined target seed segment with consumers of similar behavior, and data-driven, revealing non-obvious consumer segments with similar affinities. We demonstrate through extensive experimentation the effectiveness of our framework in fashion, identifying lookalike Designer audiences and data-driven style segments. Furthermore, we present experiments that showcase how segment information can be incorporated into a hybrid recommender system that combines hyper- and group-based personalization, exploiting the advantages of both alternatives and improving the consumer experience.
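As a minimal illustration of the lookalike use case, the sketch below expands a seed segment by ranking all other consumers by cosine similarity of their behavior embeddings to the seed centroid; this is a generic nearest-neighbor sketch, not the UNICON model, and the variable names and similarity choice are assumptions.

```python
import numpy as np

def expand_lookalike(embeddings, seed_ids, k=1000):
    """Illustrative lookalike expansion: rank non-seed consumers by cosine
    similarity of their long-term behavior embeddings to the seed centroid.
    embeddings: (n_consumers, d) array; seed_ids: indices of the seed segment."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)     # unit-normalized embeddings
    centroid = unit[seed_ids].mean(axis=0)
    centroid /= max(np.linalg.norm(centroid), 1e-12)
    scores = unit @ centroid                            # cosine similarity to seed profile
    scores[seed_ids] = -np.inf                          # exclude consumers already in the seed
    return np.argsort(-scores)[:k]                      # top-k most similar consumers
```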
Artificial neural networks thrive at solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, and endeavours to extend this knowledge without targeting the original task result in catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task-incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern 1) a taxonomy and extensive overview of the state of the art, 2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner, and 3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny ImageNet, the large-scale unbalanced iNaturalist, and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare methods in terms of required memory, computation time, and storage.
Deep learning constitutes a recent, modern technique for image processing and data analysis, with promising results and large potential. As deep learning has been successfully applied in various domains, it has also recently entered the domain of agriculture. In this paper, we survey 40 research efforts that employ deep learning techniques applied to various agricultural and food production challenges. We examine the particular agricultural problems under study, the specific models and frameworks employed, the sources, nature, and pre-processing of the data used, and the overall performance achieved according to the metrics reported in each work under study. Moreover, we study comparisons of deep learning with other existing popular techniques with respect to differences in classification or regression performance. Our findings indicate that deep learning provides high accuracy, outperforming existing commonly used image processing techniques.