The well-conditioned multi-product formula (MPF), proposed by [Low, Kliuchnikov, and Wiebe, 2019], is a simple high-order time-independent Hamiltonian simulation algorithm that implements a linear combination of standard product formulas of low order. While the MPF aims to simultaneously exploit commutator scaling among Hamiltonians and achieve near-optimal time and precision dependence, its lack of a rigorous error bound on the nested commutators renders its practical advantage ambiguous. In this work, we conduct a rigorous complexity analysis of the well-conditioned MPF, demonstrating explicit commutator scaling together with near-optimal time and precision dependence. Using our improved complexity analysis, we present several applications of practical interest where the MPF based on a second-order product formula can achieve a polynomial speedup in both system size and evolution time, as well as an exponential speedup in precision, compared to second-order and even higher-order product formulas. Compared to post-Trotter methods, the MPF based on a second-order product formula can achieve polynomially better scaling in system size, with only poly-logarithmic overhead in evolution time and precision.
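For orientation, the general structure of such a multi-product formula can be written as follows (the symbols S_2, k_j, a_j, and M are notation introduced here for illustration, not necessarily that of the paper):

    U_{\mathrm{MPF}}(t) \;=\; \sum_{j=1}^{M} a_j \,\bigl[\, S_2(t/k_j) \,\bigr]^{k_j}, \qquad \sum_{j=1}^{M} a_j = 1,

where S_2 is a second-order product formula, the k_j are distinct positive integers, and the coefficients a_j are chosen to cancel the leading Trotter error terms, which amounts to solving a Vandermonde-type linear system in the powers 1/k_j^2. The "well-conditioned" construction selects the k_j so that \sum_j |a_j| stays small, keeping the linear-combination-of-unitaries implementation efficient.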
Accurate prediction of battery temperature rise is essential for designing an efficient thermal management scheme. In this paper, machine learning (ML) based prediction of Vanadium Redox Flow Battery (VRFB) thermal behavior during charge-discharge operation has been demonstrated for the first time. Considering different currents with a specified electrolyte flow rate, the temperature of a kW-scale VRFB system is studied through experiments. Three ML algorithms, Linear Regression (LR), Support Vector Regression (SVR), and Extreme Gradient Boosting (XGBoost), have been used for the prediction work. The training and validation of the ML algorithms have been carried out on the experimental dataset of a 1 kW 6 kWh VRFB storage system under 40 A, 45 A, 50 A, and 60 A charge-discharge currents and a flow rate of 10 L min^-1. A comparative analysis of the ML algorithms is carried out in terms of performance metrics such as the correlation coefficient (R^2), mean absolute error (MAE), and root mean square error (RMSE). It is observed that XGBoost shows the highest prediction accuracy, of around 99%. The ML-based prediction results obtained in this work can be very useful for controlling the VRFB temperature rise during operation and can act as an indicator for the further development of an optimized thermal management system.
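A minimal sketch of this kind of model comparison, using scikit-learn and xgboost; the feature choice (current and time), the synthetic data, and the hyperparameters below are illustrative assumptions, not the authors' experimental dataset:

    # Compare LR, SVR, and XGBoost on a toy current/time -> temperature regression.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform([40, 0], [60, 3600], size=(500, 2))                   # [current (A), time (s)]
    y = 25 + 0.05 * X[:, 0] + 0.002 * X[:, 1] + rng.normal(0, 0.5, 500)   # temperature (C)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    models = {"LR": LinearRegression(), "SVR": SVR(C=10.0),
              "XGBoost": XGBRegressor(n_estimators=200)}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        print(name, r2_score(y_te, pred), mean_absolute_error(y_te, pred),
              mean_squared_error(y_te, pred) ** 0.5)                      # R^2, MAE, RMSE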
In this work, a family of symmetric interpolation points is generated on the four-dimensional simplex (i.e., the pentatope). These points are optimized in order to minimize the Lebesgue constant. The process of generating these points closely follows that outlined by Warburton in "An explicit construction of interpolation nodes on the simplex," Journal of Engineering Mathematics, 2006. There, Warburton generated optimal interpolation points on the triangle and tetrahedron by formulating explicit geometric warping and blending functions and applying these functions to equidistant nodal distributions; the locations of the resulting points were then Lebesgue-optimized. In our work, we extend this procedure to four dimensions and construct interpolation points on the pentatope up to order ten. The Lebesgue constants of our nodal sets are calculated and are shown to outperform those of equidistant nodal distributions.
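For intuition, the Lebesgue constant of a nodal set can be evaluated numerically by maximizing the sum of the absolute values of the Lagrange cardinal functions over the domain. The sketch below does this in one dimension for brevity; the pentatope case replaces the interval with the 4-simplex and the monomial basis with a multivariate polynomial basis:

    import numpy as np

    def lebesgue_constant(nodes, n_eval=2000):
        # Lagrange cardinal functions via a Vandermonde solve in the monomial basis.
        n = len(nodes)
        V = np.vander(nodes, n, increasing=True)     # V[i, j] = nodes[i]**j
        x = np.linspace(nodes.min(), nodes.max(), n_eval)
        P = np.vander(x, n, increasing=True)
        L = P @ np.linalg.inv(V)                     # L[k, i] = l_i(x_k)
        return np.abs(L).sum(axis=1).max()           # max_x sum_i |l_i(x)|

    equi = np.linspace(-1, 1, 11)                    # equidistant nodes, order 10
    cheb = np.cos(np.pi * np.arange(11) / 10)        # Chebyshev-Gauss-Lobatto nodes
    print(lebesgue_constant(equi), lebesgue_constant(cheb))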
Tow steering technologies, such as automated fiber placement, enable the fabrication of composite laminates with curvilinear fiber, tow, or tape paths. Designers may therefore tailor tow orientations locally according to the expected local stress state within a structure, such that strong and stiff orientations of the tow are (for example) aligned to provide maximal mechanical benefit. Tow path optimization can be an effective tool for automating this design process, yet it has a tendency to create complex designs that may be challenging to manufacture. In the context of tow steering, these complexities can manifest as defects such as tow wrinkling, gaps, and overlaps. In this work, we implement manufacturing constraints within the tow path optimization formulation to restrict the minimum tow turning radius and the maximum density of gaps between and overlaps of tows. This is achieved by bounding the local values of the curl and divergence of the vector field associated with the tow orientations. The resulting local constraints are effectively enforced in the optimization framework through the Augmented Lagrangian method. The resulting optimization methodology is demonstrated by designing 2D and 3D structures with optimized tow orientation paths that maximize stiffness (minimize compliance) under various levels of manufacturing restriction. The optimized tow paths are shown to be structurally efficient and to respect the imposed manufacturing constraints. As expected, the greater the geometric complexity that the feedstock tow and placement technology can accommodate, the higher the stiffness of the resulting optimized design.
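The Augmented Lagrangian treatment of many local constraints can be sketched generically as follows; the toy objective and constraint below are placeholders (in the actual problem they would be the compliance and the local curl/divergence bounds on the orientation field), so this illustrates only the constraint-handling strategy:

    import numpy as np
    from scipy.optimize import minimize

    def f(x):                       # stand-in for the compliance objective
        return np.sum((x - 2.0) ** 2)

    def g(x):                       # stand-in for local manufacturing constraints, g(x) <= 0
        return x - 1.5

    x, lam, mu = np.zeros(5), np.zeros(5), 10.0
    for outer in range(20):
        def augmented(z):
            viol = np.maximum(g(z) + lam / mu, 0.0)
            return f(z) + 0.5 * mu * np.sum(viol ** 2) - np.sum(lam ** 2) / (2 * mu)
        x = minimize(augmented, x, method="L-BFGS-B").x   # inner unconstrained solve
        lam = np.maximum(lam + mu * g(x), 0.0)            # multiplier update (inequalities)
        mu *= 1.5                                         # mildly increase the penalty
    print(x, np.maximum(g(x), 0.0))                       # x -> 1.5, violations -> 0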
Blind single image super-resolution (SISR) is a challenging task in image processing due to the ill-posed nature of the inverse problem. Complex degradations present in real-life images make it difficult to solve this problem using naïve deep learning approaches, where models are often trained on synthetically generated image pairs. Most of the effort so far has focused on solving the inverse problem under constraints, such as a limited space of blur kernels and/or the assumption of noise-free input images. Yet, there is a gap in the literature to provide a well-generalized deep learning-based solution that performs well on images with unknown and highly complex degradations. In this paper, we propose IKR-Net (Iterative Kernel Reconstruction Network) for blind SISR. In the proposed approach, kernel and noise estimation and high-resolution image reconstruction are carried out iteratively using dedicated deep models. The iterative refinement provides significant improvement in both the reconstructed image and the estimated blur kernel, even for noisy inputs. IKR-Net provides a generalized solution that can handle any type of blur and level of noise in the input low-resolution image. IKR-Net achieves state-of-the-art results in blind SISR, especially for noisy images with motion blur.
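A minimal PyTorch sketch of the alternating estimate-kernel / reconstruct-image loop that such an iterative scheme implies; the placeholder modules below are not the actual IKR-Net architecture and only show the structure of the iteration:

    import torch
    import torch.nn as nn

    class KernelEstimator(nn.Module):      # predicts a blur kernel from the current estimate
        def __init__(self, ksize=15):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(ksize), nn.Conv2d(16, 1, 1))
        def forward(self, img):
            k = self.net(img)
            return torch.softmax(k.flatten(1), dim=1).view_as(k)    # normalized kernel

    class Reconstructor(nn.Module):        # refines the HR image given the estimated kernel
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(32, 1, 3, padding=1))
        def forward(self, hr, kernel):
            k_map = nn.functional.interpolate(kernel, size=hr.shape[-2:])
            return hr + self.net(torch.cat([hr, k_map], dim=1))      # residual update

    lr = torch.rand(1, 1, 32, 32)                                    # low-resolution input
    hr = nn.functional.interpolate(lr, scale_factor=4)               # initial upsampled guess
    estimate_kernel, reconstruct = KernelEstimator(), Reconstructor()
    for _ in range(4):                                               # iterative refinement
        kernel = estimate_kernel(nn.functional.interpolate(hr, size=lr.shape[-2:]))
        hr = reconstruct(hr, kernel)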
In this paper we consider an orthonormal basis generated by a tensor product of Fourier basis functions, half-period cosine basis functions, and Chebyshev basis functions. We deal with the approximation problem in high dimensions related to this basis and design a fast algorithm to multiply with the underlying matrix, which consists of rows of the non-equidistant Fourier matrix, the non-equidistant cosine matrix, and the non-equidistant Chebyshev matrix, as well as with its transpose. This leads us to an ANOVA (analysis of variance) decomposition for functions with partially periodic boundary conditions, obtained by using the Fourier basis in some dimensions and the half-period cosine basis or the Chebyshev basis in others. We consider sensitivity analysis in this setting in order to find an adapted basis for the underlying approximation problem; more precisely, we determine the underlying index set of the multidimensional series expansion. Additionally, we test this mixed-basis ANOVA approximation in numerical experiments and highlight the advantage of interpretable results.
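A sketch of what such a mixed tensor-product expansion looks like when evaluated naively (the fast, non-equidistant-transform-based multiplication developed in the paper is not reproduced here); the index set, coefficients, and normalizations below are illustrative assumptions:

    import numpy as np

    def fourier(k, x):      return np.exp(2j * np.pi * k * x)                      # periodic dimension
    def half_cosine(k, x):  return np.cos(np.pi * k * x) * (np.sqrt(2) if k else 1.0)
    def chebyshev(k, x):    return np.cos(k * np.arccos(2 * x - 1)) * (np.sqrt(2) if k else 1.0)

    index_set = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 2), (1, 1, 0)]            # placeholder indices
    coeffs = np.array([1.0, 0.5, -0.3, 0.2, 0.1])                                  # placeholder coefficients

    def evaluate(x):        # x has shape (n, 3) with entries in [0, 1)
        out = np.zeros(len(x), dtype=complex)
        for c, (k1, k2, k3) in zip(coeffs, index_set):
            out += c * fourier(k1, x[:, 0]) * half_cosine(k2, x[:, 1]) * chebyshev(k3, x[:, 2])
        return out

    print(evaluate(np.random.rand(4, 3)))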
Splitting methods are a widely used class of numerical schemes for solving convection-diffusion problems. However, they may lose stability in some situations, particularly in the presence of an unbounded convective term. In this paper, we propose a new splitting method, called the "Adapted Lie splitting method", which successfully overcomes the observed instability in certain cases. Assuming that the unbounded coefficient belongs to a suitable Lorentz space, we show that the adapted Lie splitting is first-order convergent within the analytic semigroup framework. Furthermore, we provide numerical experiments to illustrate our newly proposed splitting approach.
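For reference, standard Lie splitting for a 1D convection-diffusion model problem u_t = u_xx - a(x) u_x can be sketched as below, advancing diffusion and convection in separate substeps via matrix exponentials of finite-difference operators (with implicit homogeneous Dirichlet boundaries); the adapted splitting proposed in the paper modifies how the possibly unbounded convective part is treated and is not reproduced here:

    import numpy as np
    from scipy.linalg import expm

    n, tau = 100, 1e-3
    x = np.linspace(0, 1, n)
    h = x[1] - x[0]
    D = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2                 # diffusion operator
    a = 50.0 * x                                               # convective coefficient
    C = -np.diag(a) @ (np.diag(np.ones(n - 1), 1)
                       - np.diag(np.ones(n - 1), -1)) / (2 * h)  # convection operator

    u = np.exp(-100 * (x - 0.5) ** 2)                          # initial condition
    E_D, E_C = expm(tau * D), expm(tau * C)
    for _ in range(10):
        u = E_C @ (E_D @ u)                                    # diffusion substep, then convection (Lie splitting)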
The implication problem for conditional independence (CI) asks whether the fact that a probability distribution obeys a given finite set of CI relations implies that a further CI statement also holds in this distribution. This problem has a long and fascinating history, culminating in positive results about implications now known as the semigraphoid axioms, as well as impossibility results about a general finite characterization of CI implications. Motivated by the violation of faithfulness assumptions in causal discovery, we study the implication problem in the special setting where the CI relations are obtained from a directed acyclic graph (DAG) model along with one additional CI statement. Focusing on the Gaussian case, we give a complete characterization of when such an implication is graphical by using algebraic techniques. Moreover, prompted by the relevance of strong faithfulness to statistical guarantees for causal discovery algorithms, we give a graphical solution for an approximate CI implication problem, in which we ask whether small values of one additional partial correlation entail small values of yet a further partial correlation.
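In the Gaussian case, a CI statement corresponds to a vanishing partial correlation, which can be checked directly from the covariance matrix; the sketch below does this for a toy linear Gaussian DAG 1 -> 2 -> 3, where X1 is independent of X3 given X2 but X1 and X3 are marginally dependent:

    import numpy as np

    def partial_corr(Sigma, i, j, S):
        idx = [i, j] + list(S)
        P = np.linalg.inv(Sigma[np.ix_(idx, idx)])        # precision of the marginal over {i, j} u S
        return -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])

    # Linear SEM: X1 = e1, X2 = 0.8 X1 + e2, X3 = 0.5 X2 + e3, unit noise variances.
    B = np.array([[0, 0, 0], [0.8, 0, 0], [0, 0.5, 0]])
    A = np.linalg.inv(np.eye(3) - B)
    Sigma = A @ A.T

    print(partial_corr(Sigma, 0, 2, [1]))   # ~ 0: X1 independent of X3 given X2
    print(partial_corr(Sigma, 0, 2, []))    # marginal correlation, nonzero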
Reinforcement learning (RL) with continuous state and action spaces remains one of the most challenging problems within the field. Most current learning methods focus on integral identities, such as value functions, to derive an optimal strategy for the learning agent. In this paper, we instead study the dual form of the original RL formulation and propose the first differential RL framework that can handle settings with limited training samples and short-length episodes. Our approach introduces Differential Policy Optimization (DPO), a pointwise and stage-wise iteration method that optimizes policies encoded by local-movement operators. We prove a pointwise convergence estimate for DPO and provide a regret bound comparable with those in current theoretical works. Such a pointwise estimate ensures that the learned policy matches the optimal path uniformly across different steps. We then apply DPO to a class of practical RL problems that search for optimal configurations with Lagrangian rewards. DPO is easy to implement and scalable, and it shows competitive results in benchmarking experiments against several popular RL methods.
In contemporary problems involving genetic or neuroimaging data, thousands of hypotheses need to be tested. Due to their high power and finite-sample guarantees on type I error under weak assumptions, Monte Carlo permutation tests are often considered the gold standard in these settings. However, the enormous computational effort required for (thousands of) permutation tests is a major burden. Recently, Fischer and Ramdas (2024) constructed a permutation test for a single hypothesis in which the permutations are drawn sequentially one by one and the testing process can be stopped at any point without inflating the type I error. They showed that the number of permutations can be substantially reduced (under the null and the alternative) while the power remains similar. We show how their approach can be modified to make it suitable for a broad class of multiple testing procedures. In particular, we discuss its use with the Benjamini-Hochberg procedure and illustrate the application on a large dataset.
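For context, the standard (non-sequential) pipeline that such an approach accelerates can be sketched as follows: a fixed number of Monte Carlo permutations per hypothesis, followed by the Benjamini-Hochberg procedure. The sequential, anytime-stoppable variant of Fischer and Ramdas is not reproduced here, and the simulated two-group data are purely illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, B = 200, 40, 999                       # hypotheses, samples per group, permutations
    X = rng.normal(0, 1, (m, 2 * n))
    X[:20, :n] += 1.0                            # 20 true signals

    labels = np.r_[np.ones(n), np.zeros(n)].astype(bool)
    obs = X[:, labels].mean(1) - X[:, ~labels].mean(1)
    exceed = np.zeros(m)
    for _ in range(B):
        perm = rng.permutation(2 * n)
        stat = X[:, perm[:n]].mean(1) - X[:, perm[n:]].mean(1)
        exceed += np.abs(stat) >= np.abs(obs)
    pvals = (1 + exceed) / (1 + B)               # valid Monte Carlo permutation p-values

    # Benjamini-Hochberg at level q: reject the k smallest p-values, where k is the
    # largest index with p_(k) <= q * k / m.
    q = 0.1
    order = np.argsort(pvals)
    passed = pvals[order] <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    print(len(order[:k]), "rejections")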
In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as those on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and makes further scaling difficult. In this paper we present four algorithms to address these problems. In combination, these algorithms enable each agent to improve its task allocation strategy through reinforcement learning, while changing how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, limiting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and it is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
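The core idea of adapting exploration to an agent's belief in its current strategy can be illustrated with a toy single-agent sketch; the reward model, the "providers", and the confidence heuristic below are illustrative assumptions and do not reproduce the four algorithms in the paper:

    import random

    random.seed(0)
    n_providers = 5
    quality = [random.random() for _ in range(n_providers)]    # hidden capability of each provider
    Q = [0.0] * n_providers                                    # learned value of allocating to each provider
    counts = [0] * n_providers

    for step in range(2000):
        best = max(Q)
        gap = best - sum(Q) / n_providers                      # how much the best option stands out
        epsilon = max(0.05, 1.0 - gap)                         # explore less as confidence grows
        if random.random() < epsilon:
            choice = random.randrange(n_providers)             # explore the system
        else:
            choice = Q.index(best)                             # exploit the current strategy
        reward = quality[choice] + random.gauss(0, 0.1)        # noisy outcome of the allocated task
        counts[choice] += 1
        Q[choice] += (reward - Q[choice]) / counts[choice]     # incremental mean update

    print("learned best:", Q.index(max(Q)), "true best:", quality.index(max(quality)))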