Long-term operation of nuclear steam generators can result in clogging, a deposition phenomenon that may increase mechanical and vibration loadings on tube bundles and internal structures and potentially affect their response to hypothetical accidental transients. Managing and preventing this issue requires a robust maintenance program built on a fine understanding of the underlying physics. This study focuses on a clogging simulation code developed by EDF R&D. This numerical tool employs specific physical models to simulate the kinetics of clogging and generates time-dependent clogging rate profiles for particular steam generators. However, certain parameters in this code are subject to uncertainties. To address these uncertainties, Monte Carlo simulations are conducted to assess the distribution of the clogging rate. Subsequently, polynomial chaos expansions are used to build a metamodel, while time-dependent Sobol' indices are computed to understand the impact of the random input parameters throughout the whole operating time. Comparisons are made with a previously published study, and additional Hilbert-Schmidt independence criterion sensitivity indices are computed. Key input-output dependencies are exhibited under the different chemical conditionings, and the sensitivity analysis uncovers new behavior patterns in high-pH regimes. These findings contribute to a better understanding of the clogging phenomenon, open future lines of modeling research, and help make maintenance planning more robust.
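For readers unfamiliar with the estimator behind such an analysis, here is a minimal pick-freeze Monte Carlo sketch of first-order Sobol' indices. The three-input `clogging_rate` function is a toy placeholder, not the EDF R&D simulator, and a time-dependent analysis would simply repeat this computation at each time step.

```python
# Minimal sketch of pick-freeze Monte Carlo estimation of first-order
# Sobol' indices; the model below is an illustrative stand-in.
import numpy as np

rng = np.random.default_rng(0)

def clogging_rate(x):
    # Hypothetical scalar response for one time point of the simulator.
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

n, d = 200_000, 3
A = rng.uniform(size=(n, d))   # two independent input designs
B = rng.uniform(size=(n, d))
yA, yB = clogging_rate(A), clogging_rate(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]        # "freeze" all inputs except x_i
    S_i = np.mean(yB * (clogging_rate(ABi) - yA)) / var_y
    print(f"first-order Sobol' index S_{i} ≈ {S_i:.3f}")
```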
Traditional rigid endoscopes struggle to treat tumors located deep in the brain flexibly, and their limited maneuverability and fixed viewing angles constrain their development. This study introduces MicroNeuro, a novel dual-segment flexible robotic endoscope designed to perform biopsies deep in the brain with dexterous surgical manipulation. To account for uncertainty in the control model, image-based visual servoing with online robot Jacobian estimation is implemented to enhance motion accuracy. Furthermore, model predictive control with constraints significantly bolsters the flexible robot's ability to adaptively track moving targets and resist external interference. Experimental results underscore that the proposed control system enhances motion stability and precision. Phantom testing substantiates its considerable potential for deployment in neurosurgery.
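As a rough illustration of the two ingredients named above, the sketch below pairs a classical IBVS control law with a Broyden-style rank-one Jacobian update, a common way to estimate the image Jacobian online. The dimensions, gains, and function names are hypothetical placeholders, not the MicroNeuro interface or the authors' exact scheme.

```python
# Sketch: image-based visual servoing with an online Jacobian estimate.
import numpy as np

def broyden_update(J, ds, dq, lam=0.2):
    # Rank-one correction so that J @ dq better explains the observed
    # feature change ds; lam damps the update against image noise.
    denom = float(dq @ dq)
    if denom < 1e-12:
        return J
    return J + lam * np.outer(ds - J @ dq, dq) / denom

def ibvs_step(J, s, s_star, gain=0.5):
    # Classical IBVS law: joint velocity along -J^+ (s - s_star).
    return -gain * np.linalg.pinv(J) @ (s - s_star)
```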
Autonomous aerial harvesting is a highly complex problem because it requires numerous interdisciplinary algorithms to run on mini, low-powered computing devices. Object detection is one such compute-hungry algorithm. In this context, we make the following contributions: (i) Fast Fruit Detector (FFD), a resource-efficient, single-stage, and post-processing-free object detector based on our novel latent object representation (LOR) module, query assignment, and prediction strategy. FFD achieves 100FPS@FP32 precision on the latest 10W NVIDIA Jetson-NX embedded device while co-existing with other time-critical sub-systems such as control, grasping, and SLAM, a major achievement of this work. (ii) A method to generate vast amounts of training data without exhaustive manual labelling of fruit images, which contain large numbers of instances that drive up labelling cost and time. (iii) An open-source fruit detection dataset containing many very small instances that are difficult to detect. Our exhaustive evaluations on our dataset and the MinneApple dataset show that FFD, despite being only a single-scale detector, is more accurate than many representative detectors: FFD outperforms single-scale Faster-RCNN by 10.7AP, multi-scale Faster-RCNN by 2.3AP, the latest single-scale YOLO-v8 by 8AP, and multi-scale YOLO-v8 by 0.3AP, while being considerably faster.
Designing distributed filtering circuits (DFCs) is complex and time-consuming, with circuit performance relying heavily on the expertise and experience of electronics engineers; manual design methods tend to have exceedingly low efficiency. This study proposes a novel end-to-end automated method for DFC design. The proposed method harnesses reinforcement learning (RL) algorithms, eliminating the dependence on the design experience of engineers and thereby significantly reducing the subjectivity and constraints associated with circuit design. The experimental findings demonstrate clear improvements in both design efficiency and quality when comparing the proposed method with traditional engineer-driven methods. In particular, the proposed method achieves superior performance when designing complex or rapidly evolving DFCs. Furthermore, compared with existing circuit automation design techniques, the proposed method demonstrates superior design efficiency, highlighting the substantial potential of RL in circuit design automation.
The temperature-dependent behavior of defect densities within a crystalline structure is intricately linked to vibrational entropy. Traditional methods for evaluating vibrational entropy are computationally intensive, limiting their practical utility. We show that the total entropy can be decomposed into atomic site contributions and rigorously estimate the locality of the site entropy. This analysis suggests that vibrational entropy can be effectively predicted using a surrogate model for site entropy. We employ machine learning to develop such a surrogate model based on the Atomic Cluster Expansion model. We supplement our rigorous analysis with an empirical convergence study. In addition, we demonstrate the performance of our method in predicting the vibrational formation entropy and the attempt frequency of transition rates for point defects such as vacancies and interstitials.
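For context, a standard harmonic expression that such a surrogate would target is $S_{\mathrm{vib}} = k_B \sum_i \left[ x_i/(e^{x_i}-1) - \ln(1-e^{-x_i}) \right]$ with $x_i = \hbar\omega_i/(k_B T)$, summed over phonon modes; the site decomposition alluded to in the abstract then takes the form $S_{\mathrm{vib}} = \sum_\ell S_\ell$, with each $S_\ell$ collecting the contribution attributable to atomic site $\ell$ (the precise site projection used by the authors may differ).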
Relaxing the sequential specification of a shared object is a way to obtain an implementation with better performance compared to implementing the original specification. We apply this approach to the Counter object, under the assumption that the number of times the Counter is incremented in any execution is at most a known bound $m$. We consider the $k$-multiplicative-accurate Counter object, where each read operation returns an approximate value that is within a multiplicative factor $k$ of the accurate value. More specifically, a read is allowed to return an approximate value $x$ of the number $v$ of increments previously applied to the counter such that $v/k \le x \le vk$. We present three algorithms to implement this object in a wait-free linearizable manner in the shared memory model using read-write registers. All the algorithms have read operations whose worst-case step complexity improves exponentially on that for an exact $m$-bounded counter (which in turn improves exponentially on that for an exact unbounded counter). Two of the algorithms have read step complexity that is asymptotically optimal. The algorithms differ in their requirements on $k$, step complexity of the increment operation, and space complexity.
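To make the accuracy guarantee concrete, here is a minimal sequential sketch in which reads return a power of $k$; the class name and the design are illustrative only, and the paper's actual contribution, the wait-free linearizable construction from read-write registers, is not captured here.

```python
# Sequential illustration of k-multiplicative accuracy: any read returns
# x with v/k <= x <= v*k, where v is the true number of increments.
class ApproxCounter:
    def __init__(self, k):
        assert k > 1
        self.k = k
        self.v = 0       # exact count (kept only to drive the threshold)
        self.level = 0   # smallest integer with k**level > v

    def increment(self):
        self.v += 1
        while self.v >= self.k ** self.level:
            self.level += 1

    def read(self):
        if self.v == 0:
            return 0
        # k**(level-1) is the largest power of k that is <= v, so
        # v/k < k**(level-1) <= v <= k * k**(level-1).
        return self.k ** (self.level - 1)
```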
As an anode material for lithium-ion batteries, amorphous silicon offers a significantly higher energy density than the graphite anodes currently used. Alloying reactions of lithium and silicon, however, induce large deformations and volume changes of up to 300%. We formulate a thermodynamically consistent continuum model, based on finite deformations, for the chemo-elasto-plastic diffusion-deformation behavior of amorphous silicon and its alloy with lithium. In this paper, two plasticity theories, i.e., a rate-independent theory with linear isotropic hardening and a rate-dependent one, are formulated to allow the evolution of plastic deformations and to reduce the resulting stresses. Using modern numerical techniques, such as higher-order finite element methods together with efficient space- and time-adaptive solution algorithms, the diffusion-deformation behavior resulting from both theories is compared. To further increase the computational efficiency, an automatic differentiation scheme is used, yielding a significant speed-up in assembly time compared with an algorithmic linearization for the global finite element Newton scheme. Both plastic approaches lead to a more heterogeneous concentration distribution and to a change to tensile tangential Cauchy stresses at the particle surface at the end of one charging cycle. Parameter studies show which factors amplify the plastic deformation. Interestingly, an elliptical particle exhibits plastic deformation only at its smaller semi-axis. With the demonstrated efficiency of the applied methods, results after five charging cycles are also discussed and can provide indications of the performance of lithium-ion batteries in long-term use.
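As a point of reference, a common kinematic ansatz in such finite-deformation chemo-elasto-plastic models (not necessarily the authors' exact formulation) is the multiplicative split $\mathbf{F} = \mathbf{F}_{\mathrm{el}}\,\mathbf{F}_{\mathrm{pl}}\,\mathbf{F}_{\mathrm{ch}}$ of the deformation gradient, where the chemical part is often taken as isotropic swelling, $\mathbf{F}_{\mathrm{ch}} = (1 + \Omega c)^{1/3}\,\mathbf{I}$, with $c$ the lithium concentration and $\Omega$ a partial molar volume.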
Scene text recognition is a rapidly developing field that faces numerous challenges due to the complexity and diversity of scene text, including complex backgrounds, diverse fonts, flexible arrangements, and accidental occlusions. In this paper, we propose a novel approach called Class-Aware Mask-guided feature refinement (CAM) to address these challenges. Our approach introduces canonical class-aware glyph masks, generated from a standard font, to effectively suppress background and text-style noise, thereby enhancing feature discrimination. Additionally, we design a feature alignment and fusion module that incorporates the canonical mask guidance for further feature refinement in text recognition. By enhancing the alignment between the canonical mask feature and the text feature, the module ensures more effective fusion, ultimately leading to improved recognition performance. We first evaluate CAM on six standard text recognition benchmarks to demonstrate its effectiveness. Furthermore, CAM outperforms the state-of-the-art method by an average of 4.1% across six more challenging datasets, despite utilizing a smaller model. Our study highlights the importance of incorporating canonical mask guidance and aligned feature refinement techniques for robust scene text recognition. The code is available at //github.com/MelosY/CAM.
Free-space optics (FSO)-based satellite communication systems have recently received considerable attention due to their enhanced capacity compared with their radio frequency (RF) counterparts. This paper analyzes the physical layer security performance of space-to-ground intensity modulation/direct detection FSO satellite links under the effects of atmospheric loss, misalignment, cloud attenuation, and atmospheric turbulence-induced fading. Specifically, we consider a wiretap channel consisting of a legitimate transmitter Alice (i.e., the satellite), a legitimate user Bob, and an eavesdropper Eve over turbulence channels modeled by the Fisher-Snedecor $\mathcal{F}$ distribution. Closed-form expressions are derived for the average secrecy capacity, the secrecy outage probability, and the probability of strictly positive secrecy capacity. Simulation results reveal significant impacts of satellite altitude, zenith angle, and turbulence strength on the secrecy performance.
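As a toy numerical cross-check of the secrecy outage probability (the quantity $\Pr[C_s < R_s]$ with $C_s = [\log_2(1+\gamma_B) - \log_2(1+\gamma_E)]^+$), one can Monte Carlo sample F-distributed fading gains. The shape parameters, mean SNRs, and target rate below are illustrative placeholders, and this sketch does not reproduce the paper's closed-form analysis.

```python
# Toy Monte Carlo estimate of secrecy outage probability under
# F-distributed fading (illustrative parameters only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1_000_000
# Unit-mean F-distributed fading gains for Bob and Eve.
fade_b = stats.f(dfn=4, dfd=8).rvs(size=n, random_state=rng)
fade_e = stats.f(dfn=4, dfd=8).rvs(size=n, random_state=rng)
snr_b = 10.0 * fade_b / fade_b.mean()   # Bob's average SNR (linear)
snr_e = 1.0 * fade_e / fade_e.mean()    # Eve's average SNR (linear)

cs = np.maximum(0.0, np.log2(1 + snr_b) - np.log2(1 + snr_e))
rs = 0.5                                 # target secrecy rate (bits/s/Hz)
print("SOP ≈", np.mean(cs < rs))
```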
When constructing parametric models to predict the cost of future claims, several important details have to be taken into account: (i) models should be designed to accommodate deductibles, policy limits, and coinsurance factors; (ii) parameters should be estimated robustly to control the influence of outliers on model predictions; and (iii) all point predictions should be augmented with estimates of their uncertainty. The methodology proposed in this paper provides a framework for addressing all these aspects simultaneously. Using payment-per-payment and payment-per-loss variables, we construct adaptive versions of method of winsorized moments (MWM) estimators for the parameters of the truncated and censored lognormal distribution. Further, the asymptotic distributional properties of this approach are derived and compared with those of the maximum likelihood estimator (MLE) and method of trimmed moments (MTM) estimators, the latter being a primary competitor to MWM. Moreover, the theoretical results are validated with extensive simulation studies and a risk measure sensitivity analysis. Finally, the practical performance of these methods is illustrated using the well-studied data set of 1500 U.S. indemnity losses. With this real data set, it is also demonstrated that composite models do not provide much improvement in the quality of predictive models compared with a stand-alone fitted distribution, especially for truncated and censored sample data.
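For intuition, the building block of MWM is winsorization: extreme observations are clipped to empirical quantiles rather than discarded (as trimming would do), so moments computed from the winsorized sample are robust to outliers. The sketch below shows this step only; the adaptive threshold selection and the truncated/censored lognormal fitting from the paper are not shown, and the thresholds and simulated losses are illustrative.

```python
# Minimal sketch of winsorization, the robustification step behind MWM.
import numpy as np

def winsorize(x, lower=0.05, upper=0.95):
    # Clip observations to the empirical lower/upper quantiles.
    lo, hi = np.quantile(x, [lower, upper])
    return np.clip(x, lo, hi)

losses = np.random.default_rng(2).lognormal(mean=8.0, sigma=1.5, size=1500)
w = winsorize(losses)
print("raw mean:", losses.mean(), " winsorized mean:", w.mean())
```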
Face recognition technology has advanced significantly in recent years, due largely to the availability of large and increasingly complex training datasets for use in deep learning models. These datasets, however, typically comprise images scraped from news sites or social media platforms and therefore have limited utility in more advanced security, forensics, and military applications, which require lower resolutions, longer ranges, and elevated viewpoints. To meet these critical needs, we collected and curated the first and second subsets of a large multi-modal biometric dataset designed for use in the research and development (R&D) of biometric recognition technologies under extremely challenging conditions. Thus far, the dataset includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects. To collect this data, we used Nikon DSLR cameras, a variety of commercial surveillance cameras, specialized long-range R&D cameras, and Group 1 and Group 2 UAV platforms. The goal is to support the development of algorithms capable of accurately recognizing people at ranges up to 1,000 m and from high angles of elevation. These advances will include improvements to the state of the art in face recognition and will support new research in the area of whole-body recognition using methods based on gait and anthropometry. This paper describes the methods used to collect and curate the dataset, and the dataset's characteristics at the current stage.