This paper addresses the problem of robotic cutting during disassembly of products for materials separation and recycling. Waste handling applications differ from milling in manufacturing processes in that they involve considerable variety and uncertainty in the parameters (e.g. hardness) of the materials which the robot must cut. To address this challenge, we propose a learning-based approach incorporating elements of interaction control, in which the robot can adapt key parameters, such as feed rate, depth of cut, and mechanical compliance, during task execution. We show how a mathematical model of cutting mechanics, embedded in a simulation environment, can be used to rapidly train the system without needing large amounts of data from physical cutting trials. The simulation approach was validated on a real robot setup based on four case study materials with varying structural and mechanical properties. We demonstrate that the proposed method minimises process force and path deviations to a level similar to offline optimal planning methods, while the average time to complete a cutting task is within 25% of the optimum, at the expense of a reduced volume of material removed per pass. A key advantage of our approach over similar works is that no prior knowledge about the material is required.
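As a toy illustration of the kind of on-line parameter adaptation described above, the sketch below nudges the feed rate so that the measured cutting force tracks a target value; it is a simple proportional rule with assumed names, gains and limits, not the learned policy of the paper.

```python
def adapt_feed_rate(feed_rate, measured_force, target_force,
                    gain=0.1, min_rate=1.0, max_rate=50.0):
    """Proportionally adjust the feed rate so the process force tracks a
    target: slow down when the measured force is too high, speed up when
    it is low.  Gains, units and limits are illustrative placeholders."""
    error = (target_force - measured_force) / target_force
    new_rate = feed_rate * (1.0 + gain * error)
    return max(min_rate, min(max_rate, new_rate))
```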
In this paper we consider the numerical approximation of infinite horizon problems via the dynamic programming approach. The value function of the problem solves a Hamilton-Jacobi-Bellman (HJB) equation that is approximated by a fully discrete method. It is well known that such numerical problems suffer from the so-called curse of dimensionality. To mitigate this issue we apply model order reduction by means of a new proper orthogonal decomposition (POD) method based on time derivatives. We carry out the error analysis of the method using recently proved optimal bounds for the fully discrete approximations. Moreover, the use of snapshots based on time derivatives allows us to bound some terms of the error that could not be bounded in a standard POD approach. Numerical experiments show the good performance of the method in practice.
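As a rough illustration of snapshot-based order reduction using time derivatives (not the authors' exact algorithm), the sketch below augments the snapshot matrix with finite-difference approximations of the time derivatives before extracting the POD modes; all names are illustrative.

```python
import numpy as np

def pod_basis(snapshots, dt, rank):
    """Build a POD basis from state snapshots augmented with their
    (finite-difference) time derivatives.

    snapshots : (n_dofs, n_times) array, columns are states at successive times
    dt        : time-step size between consecutive snapshots
    rank      : number of POD modes to retain
    """
    # First-order finite-difference approximation of the time derivatives.
    derivs = (snapshots[:, 1:] - snapshots[:, :-1]) / dt
    # Augmented snapshot matrix: states together with their derivatives.
    augmented = np.hstack([snapshots, derivs])
    # POD modes are the leading left singular vectors.
    modes, _, _ = np.linalg.svd(augmented, full_matrices=False)
    return modes[:, :rank]

# Usage: U = pod_basis(S, dt=1e-2, rank=10); y_reduced = U.T @ y_full
```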
Engineers are often faced with the decision of selecting the most appropriate model for simulating the behavior of engineered systems from among a candidate set of models. Experimental monitoring data can generate significant value by supporting engineers in making such decisions. Such data can be leveraged within a Bayesian model updating process, enabling the uncertainty-aware calibration of any candidate model. The model selection task can subsequently be cast into a problem of decision-making under uncertainty, where one seeks to select the model that yields an optimal balance between the reward associated with model precision, in terms of recovering target Quantities of Interest (QoI), and the cost of each model, in terms of complexity and compute time. In this work, we examine the model selection task by means of Bayesian decision theory, under the prism of the availability of models of varying refinement, and thus varying levels of fidelity. In doing so, we offer an exemplary application of this framework to the IMAC-MVUQ Round-Robin Challenge. Numerical investigations show various outcomes of model selection depending on the target QoI.
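The decision-theoretic selection rule described above can be made concrete as in the following sketch; the per-model posterior-predictive errors and cost weights are hypothetical inputs, assumed to come from a preceding Bayesian updating step, and this is not the Round-Robin implementation itself.

```python
import numpy as np

def select_model(qoi_errors, compute_costs, cost_weight=0.1):
    """Select the candidate model with the highest expected utility.

    qoi_errors    : dict model_name -> array of posterior-predictive errors
                    on the target Quantity of Interest (QoI)
    compute_costs : dict model_name -> scalar cost (e.g. normalised run time)
    cost_weight   : trade-off between the precision reward and the model cost
    """
    utilities = {}
    for name, errors in qoi_errors.items():
        reward = -np.mean(np.square(errors))          # precision term
        penalty = cost_weight * compute_costs[name]   # complexity/time term
        utilities[name] = reward - penalty
    best = max(utilities, key=utilities.get)
    return best, utilities
```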
Identifying replicable signals across different studies provides stronger scientific evidence and more powerful inference. Existing literature on high-dimensional replicability analysis either imposes strong modeling assumptions or has low power. We develop a powerful and robust empirical Bayes approach for high-dimensional replicability analysis. Our method effectively borrows information from different features and studies while accounting for heterogeneity. We show that the proposed method has better power than competing methods while controlling the false discovery rate, both empirically and theoretically. Analyzing datasets from genome-wide association studies reveals new biological insights that otherwise cannot be obtained by using existing methods.
This article is concerned with solving the time-fractional Vakhnenko-Parkes equation using reproducing kernels. Reproducing kernel theory, the normal basis, some important Hilbert spaces, homogenization of constraints, and the orthogonalization process are the main tools of this technique. The main advantage of the reproducing kernel method is that it is truly meshless. The solution obtained by applying the reproducing kernel Hilbert space method to the time-fractional Vakhnenko-Parkes equation is in the form of a series, and this series converges to the unique exact solution. It is observed that the implemented method is highly effective. The effectiveness of the reproducing kernel Hilbert space method is presented through tables and graphs. The accuracy of the method is assessed using different error norms and the order of convergence of the errors.
Microring resonators (MRRs) are promising devices for time-delay photonic reservoir computing, but the impact of the different physical effects taking place in MRRs on reservoir computing performance is yet to be fully understood. We numerically analyze the impact of linear losses, as well as of the relaxation times of the thermo-optic and free-carrier effects, on the prediction error for the time-series task NARMA-10. We demonstrate the existence of three regions, defined by the input power and the frequency detuning between the optical source and the microring resonance, that reveal the cavity's transition from the linear to the nonlinear regime. One of these regions offers very low error in time-series prediction at relatively low input power and number of nodes, while the other regions either lack nonlinearity or become unstable. This study provides insight into the design of MRRs and the optimization of their physical properties for improving the prediction performance of time-delay reservoir computing.
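For reference, the NARMA-10 series used as the prediction target above follows a standard recursive definition, which can be generated as in the sketch below (standard task definition, not code from the paper).

```python
import numpy as np

def narma10(n_steps, seed=0):
    """Generate the input u and target y of the NARMA-10 time-series task."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, n_steps)        # i.i.d. input drawn from [0, 0.5]
    y = np.zeros(n_steps)
    for k in range(9, n_steps - 1):
        y[k + 1] = (0.3 * y[k]
                    + 0.05 * y[k] * np.sum(y[k - 9:k + 1])   # last 10 outputs
                    + 1.5 * u[k] * u[k - 9]
                    + 0.1)
    return u, y
```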
Stiff systems of ordinary differential equations (ODEs) and sparse training data are common in scientific problems. This paper describes efficient, implicit, vectorized methods for integrating stiff systems of ODEs through time and for calculating parameter gradients with the adjoint method. The main innovation is to vectorize the problem both over the number of independent time series and over a batch or "chunk" of sequential time steps, effectively vectorizing the assembly of the implicit system of ODEs. The block-bidiagonal structure of the linearized implicit system for the backward Euler method allows for further vectorization using parallel cyclic reduction (PCR). Vectorizing over both axes of the input data provides a higher bandwidth of calculations to the computing device, allowing even problems with comparatively sparse data to fully utilize modern GPUs and achieve speed-ups of greater than 100x compared to standard, sequential time integration. We demonstrate the advantages of implicit, vectorized time integration on several example problems, drawn from both analytical stiff and non-stiff ODE models as well as neural ODE models. We also describe and provide a freely available open-source implementation of the methods developed here.
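A minimal sketch of the first ingredient, backward Euler vectorized over a batch of independent time series with a batched Newton solve at every step, is given below; the chunking over sequential steps, the PCR solver and the adjoint gradients are omitted, and all names are illustrative.

```python
import numpy as np

def backward_euler_batched(f, jac, y0, dt, n_steps, newton_iters=5):
    """Integrate dy/dt = f(y) with backward Euler, vectorized over a batch.

    f   : f(y) -> (batch, dim) right-hand side
    jac : jac(y) -> (batch, dim, dim) Jacobian of f
    y0  : (batch, dim) initial conditions
    """
    batch, dim = y0.shape
    eye = np.eye(dim)[None, :, :]
    ys, y = [y0], y0
    for _ in range(n_steps):
        y_next = y.copy()                       # initial Newton guess
        for _ in range(newton_iters):
            # Residual of the implicit step: y_next - y - dt * f(y_next) = 0
            r = y_next - y - dt * f(y_next)
            # Jacobian of the residual, assembled for the whole batch at once.
            J = eye - dt * jac(y_next)
            # One batched linear solve per Newton iteration.
            y_next = y_next - np.linalg.solve(J, r[..., None])[..., 0]
        y = y_next
        ys.append(y)
    return np.stack(ys, axis=1)                 # (batch, n_steps + 1, dim)
```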
Since tension instability was discovered in updated Lagrangian smoothed particle hydrodynamics (ULSPH) at the end of the 20th century, researchers have made considerable efforts to suppress its occurrence. However, up to the present day, this problem has not been fundamentally resolved. In this paper, the concept of hourglass modes is first introduced into ULSPH, and the inherent causes of tension instability in elastic dynamics are clarified from this new perspective. Specifically, we present an essentially non-hourglass formulation by decomposing the shear acceleration with the Laplacian operator, and a comprehensive set of challenging benchmark cases for elastic dynamics is used to show that our method can completely eliminate tensile instability by resolving hourglass modes. The present results reveal the true origin of tension instability and challenge the traditional understanding of its sources: hourglass modes, rather than the tension itself, are the real culprit behind inducing this instability in tension zones. Furthermore, a time integration scheme known as dual-criteria time stepping is adopted for the simulation of solids for the first time, significantly enhancing computational efficiency.
In this paper, we propose a fuzzy adaptive loss function for enhancing deep learning performance in classification tasks. Specifically, we redefine the cross-entropy loss to effectively address class-level noise conditions, including the challenging problem of class imbalance. Our approach introduces aggregation operators, leveraging the power of fuzzy logic to improve classification accuracy. The rationale behind our proposed method lies in the iterative up-weighting of class-level components within the loss function, focusing on those with larger errors. To achieve this, we employ the ordered weighted average (OWA) operator and combine it with an adaptive scheme for gradient-based learning. In extensive experiments, our method outperforms other commonly used loss functions, such as the standard cross-entropy or focal loss, across various binary and multiclass classification tasks. Furthermore, we explore the influence of the hyperparameters associated with the OWA operators and present a default configuration that performs well across different experimental settings.
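The sketch below illustrates the general idea of aggregating class-level loss components with an OWA operator whose weights emphasise the classes with the largest errors; it is a simplified, static illustration and does not reproduce the adaptive scheme of the paper.

```python
import numpy as np

def owa_cross_entropy(probs, targets, owa_weights):
    """OWA-aggregated cross-entropy over class-level loss components.

    probs       : (n_samples, n_classes) predicted class probabilities
    targets     : (n_samples, n_classes) one-hot labels
    owa_weights : (n_classes,) non-negative weights summing to one, in
                  decreasing order so the largest class errors weigh most
    """
    eps = 1e-12
    # Per-class cross-entropy, averaged over the samples of each class.
    class_counts = targets.sum(axis=0) + eps
    class_losses = -(targets * np.log(probs + eps)).sum(axis=0) / class_counts
    # OWA aggregation: sort the components in descending order, then take
    # the weighted sum with the (fixed) OWA weights.
    sorted_losses = np.sort(class_losses)[::-1]
    return float(np.dot(owa_weights, sorted_losses))
```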
In this paper, we propose nonlocal diffusion models with Dirichlet boundary conditions. These nonlocal diffusion models preserve the maximum principle and also have a corresponding variational form. With these good properties, it is relatively easy to prove well-posedness and vanishing-nonlocality convergence. Furthermore, with a specifically designed weight function, we obtain a nonlocal diffusion model with second-order convergence, which is optimal for nonlocal diffusion models.
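As a simple illustration of the kind of operator involved, a 1D discrete nonlocal diffusion operator with a generic indicator kernel might look as follows; the specially designed weight function and the Dirichlet boundary treatment of the paper are not reproduced here.

```python
import numpy as np

def nonlocal_laplacian_1d(u, x, delta):
    """Discrete nonlocal diffusion operator on a uniform 1D grid,
        L_delta u(x_i) ~ (3/delta^3) * sum_j w_ij * (u_j - u_i) * h,
    with the indicator kernel w_ij = 1 for |x_i - x_j| <= delta.  The
    3/delta^3 scaling makes L_delta u converge to u'' away from the boundary."""
    h = x[1] - x[0]
    diff = x[None, :] - x[:, None]                # pairwise offsets x_j - x_i
    w = (np.abs(diff) <= delta).astype(float)
    np.fill_diagonal(w, 0.0)                      # exclude the self-interaction
    return 3.0 / delta**3 * (w * (u[None, :] - u[:, None])).sum(axis=1) * h
```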
In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communications and synchronisation, and remains difficult to scale. In this paper we present four algorithms to solve these problems. The combination of these algorithms enables each agent to improve its task allocation strategy through reinforcement learning, while changing how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
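A toy sketch of one idea above, scaling how much an agent explores by its confidence that its current strategy is near-optimal, is given below; the names are hypothetical and this is not one of the four algorithms of the paper.

```python
import random

def choose_agent(q_values, confidence):
    """Select an agent to delegate a subtask to.

    q_values   : dict agent_id -> learned estimate of how well that agent
                 handles this kind of subtask
    confidence : value in [0, 1]; the agent's belief that its current
                 strategy is near-optimal (higher -> less exploration)
    """
    epsilon = 1.0 - confidence                    # explore more when less confident
    if random.random() < epsilon:
        return random.choice(list(q_values))      # explore: random agent
    return max(q_values, key=q_values.get)        # exploit: best-known agent
```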