Use of explicit integration methods for power electronic circuits with ideal switch models significantly improves simulation speed. The PLECS package [1] has used this idea effectively; however, the implementation details of PLECS are not available in the public domain. Recently, a basic framework, called the ``ELEX'' scheme, for implementing explicit methods has been described [2]. A few modifications of the ELEX scheme for the efficient handling of inductors and switches have been presented in [3]. In this paper, the approach of [3] is further augmented with robust schemes that enable systematic equation formulation for circuits involving switches, inductors, and transformers. Several examples are presented to illustrate the proposed schemes.
We introduce a novel deep operator network (DeepONet) framework that incorporates generalised variational inference (GVI) using R\'enyi's $\alpha$-divergence to learn complex operators while quantifying uncertainty. By using Bayesian neural networks as the building blocks for the branch and trunk networks, our framework endows DeepONet with uncertainty quantification. Using R\'enyi's $\alpha$-divergence instead of the Kullback-Leibler divergence (KLD) commonly employed in standard variational inference mitigates the prior-misspecification issues that are prevalent in variational Bayesian DeepONets, offering enhanced flexibility and robustness. We demonstrate that modifying the variational objective function yields superior results, reducing both the mean squared error and the negative log-likelihood on the test set. Our framework's efficacy is validated across various mechanical systems, where it outperforms both deterministic and standard KLD-based VI DeepONets in predictive accuracy and uncertainty quantification. The hyperparameter $\alpha$, which controls the degree of robustness, can be tuned to optimise performance for specific problems. We apply this approach to a range of mechanics problems, including the gravity pendulum, advection-diffusion, and diffusion-reaction systems. Our findings underscore the potential of the $\alpha$-VI DeepONet to advance data-driven operator learning and its applications in engineering and scientific domains.
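As a concrete illustration of the divergence being swapped in, the sketch below evaluates R\'enyi's $\alpha$-divergence between two univariate Gaussians in closed form and checks that it recovers the KLD as $\alpha \to 1$. The function names and test values are illustrative only and not taken from the paper.

```python
import math

def renyi_alpha_gaussian(mu1, s1, mu2, s2, alpha):
    """Renyi alpha-divergence D_alpha(N(mu1, s1^2) || N(mu2, s2^2)).

    Closed form, valid when s_a^2 = (1-alpha)*s1^2 + alpha*s2^2 > 0
    and alpha != 1.
    """
    sa2 = (1 - alpha) * s1**2 + alpha * s2**2
    assert sa2 > 0 and alpha != 1
    mean_term = alpha * (mu1 - mu2)**2 / (2 * sa2)
    var_term = (1 / (1 - alpha)) * math.log(
        math.sqrt(sa2) / (s1**(1 - alpha) * s2**alpha))
    return mean_term + var_term

def kl_gaussian(mu1, s1, mu2, s2):
    """KL divergence between the same Gaussians (the alpha -> 1 limit)."""
    return (math.log(s2 / s1)
            + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5)
```

For $\alpha$ close to 1, `renyi_alpha_gaussian` agrees with `kl_gaussian` to within numerical tolerance, which is the limiting behaviour the GVI objective relies on.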
We explore the use of quantum generative adversarial networks (QGANs) for modeling eye movement velocity data. We assess whether the advanced computational capabilities of QGANs can enhance the modeling of complex stochastic distributions beyond traditional mathematical models, particularly the Markov model. The findings indicate that while QGANs demonstrate potential in approximating complex distributions, the Markov model consistently outperforms them in accurately replicating the real data distribution. This comparison highlights both the challenges and the avenues for refinement in time series data generation using quantum computing techniques, and it emphasizes the need for further optimization of quantum models to better align with real-world data characteristics.
We propose a two-step strategy for estimating one-dimensional dynamical parameters of a quantum Markov chain, which involves quantum post-processing of the output using a coherent quantum absorber and a ``pattern counting'' estimator computed as a simple additive functional of the outcome trajectory produced by sequential, identical measurements on the output units. We provide strong theoretical and numerical evidence that the estimator achieves the quantum Cram\'{e}r-Rao bound in the limit of large output size. Our estimation method is underpinned by an asymptotic theory of translationally invariant modes (TIMs), built as averages of shifted tensor products of output operators labelled by binary patterns. For large times, the TIMs form a bosonic algebra and the output state approaches a joint coherent state of the TIMs whose amplitude depends linearly on the mismatch between the system and absorber parameters. Moreover, in the asymptotic regime the TIMs capture the full quantum Fisher information of the output state. While directly probing the TIMs' quadratures seems impractical, we show that the standard sequential measurement is an effective joint measurement of all the TIMs' number operators; indeed, we show that the counts of different binary patterns extracted from the measurement trajectory have the expected joint Poisson distribution. Together with the displaced-null methodology of J. Phys. A: Math. Theor. 57, 245304 (2024), this provides a computationally efficient estimator which depends only on the total number of patterns. This opens the way for similar estimation strategies in continuous-time dynamics, expanding the results of Phys. Rev. X 13, 031012 (2023).
We consider a hypothesis test for the coefficients in change-plane regression models to detect the existence of a change plane. The test belongs to the class of test problems in which some parameters are not identifiable under the null hypothesis, for which the classic exponential average tests do not work well in practice. To overcome this drawback, a novel test statistic is proposed by taking the weighted average of the squared score test statistic (WAST) over the grouping parameter's space; it has a closed form from the perspective of conjugate priors when an appropriate weight is chosen. The WAST significantly improves the power in practice, particularly when the number of grouping variables is large. The asymptotic distributions of the WAST are derived under the null and alternative hypotheses. A bootstrap approximation of the critical value is investigated and theoretically guaranteed. Furthermore, the proposed test is naturally extended to the generalized estimating equation (GEE) framework and to multiple change planes, testing whether there are three or more subgroups. The WAST performs well in simulation studies, and its performance is further validated on real datasets.
In this work we develop a new numerical methodology to solve a PDE model recently proposed in the literature for pricing interest rate derivatives. More precisely, we use high-order-in-time AMFR-W methods, which belong to a class of W-methods based on Approximate Matrix Factorization (AMF) and are especially suitable in the presence of mixed spatial derivatives. High-order convergence in time allows larger time steps, which, combined with the splitting of the involved operators, greatly reduces the computational time for a given accuracy. Moreover, the consideration of a large number of underlying forward rates makes the PDE problem high dimensional in space, so the use of AMFR-W methods with a sparse grids combination technique represents another innovative aspect, making AMFR-W methods more efficient than with full grids and opening the possibility of parallelization. The consideration of new homogeneous Neumann boundary conditions provides another original feature, avoiding the difficulties associated with boundary layers that arise with Dirichlet conditions, especially in advection-dominated regimes. These Neumann boundary conditions motivate the introduction of a modified combination technique to overcome the loss of accuracy of the standard combination technique.
We investigate the potential of bio-inspired evolutionary algorithms for designing quantum circuits with specific goals, focusing on two particular tasks. The first one is motivated by the ideas of Artificial Life that are used to reproduce stochastic cellular automata with given rules. We test the robustness of quantum implementations of the cellular automata for different numbers of quantum gates. The second task deals with the sampling of quantum circuits that generate highly entangled quantum states, which constitute an important resource for quantum computing. In particular, an evolutionary algorithm is employed to optimize circuits with respect to a fitness function defined via the Meyer-Wallach entanglement measure. We demonstrate that, by balancing the mutation rate between exploration and exploitation, we can find entangling quantum circuits for up to five qubits. We also discuss the trade-off between the number of gates in quantum circuits and the computational cost of finding gate arrangements leading to a strongly entangled state. Our findings provide additional insight into the trade-off between the complexity of a circuit and its performance, which is an important factor in the design of quantum circuits.
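The entanglement-based fitness function named above can be sketched as follows: a minimal NumPy implementation of the Meyer-Wallach measure $Q = 2\bigl(1 - \frac{1}{n}\sum_k \mathrm{Tr}\,\rho_k^2\bigr)$ for a pure $n$-qubit state vector. The function name is chosen here for illustration; the paper's encoding of circuits into states is not reproduced.

```python
import numpy as np

def meyer_wallach_q(psi):
    """Meyer-Wallach global entanglement Q of a pure n-qubit state vector.

    Q = 2 * (1 - (1/n) * sum_k Tr[rho_k^2]), where rho_k is the reduced
    density matrix of qubit k.  Q = 0 for product states and Q = 1 for,
    e.g., GHZ states.
    """
    n = int(np.log2(psi.size))
    psi = psi.reshape([2] * n)
    purities = []
    for k in range(n):
        # Group qubit k against all remaining qubits: 2 x 2^(n-1) matrix.
        m = np.moveaxis(psi, k, 0).reshape(2, -1)
        rho = m @ m.conj().T            # reduced density matrix of qubit k
        purities.append(np.trace(rho @ rho).real)
    return 2.0 * (1.0 - np.mean(purities))
```

In an evolutionary loop, `meyer_wallach_q` applied to the state produced by a candidate circuit serves directly as the (to-be-maximised) fitness value.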
By combining traditional frequency hopping ideas with the concepts of subcarriers and sampling points in OFDM baseband systems, this paper proposes a frequency hopping technique within the baseband, called micro frequency hopping. Based on this concept, the paper proposes a micro frequency hopping spread spectrum modulation method based on cyclic frequency shifts and cyclic time shifts, as well as a micro frequency hopping encryption method based on phase scrambling of baseband signals. Specifically, the paper presents a linear micro frequency hopping symbol with good auto-correlation and cross-correlation features in both the time domain and the frequency domain. Linear micro frequency hopping symbols with different roots $R$ have good cross-correlation features, which allows multi-user communication at the same time and on the same frequency. Moreover, there is a linear relationship between the time delay and the frequency offset of this linear micro frequency hopping symbol, making it suitable for time delay and frequency offset estimation, as well as for ranging and speed measurement. Finally, the paper verifies the advantages of micro frequency hopping technology through an example of a linear micro frequency hopping spread spectrum multiple-access communication system. The author believes that micro frequency hopping technology will be widely used in fields such as the Internet of Things, military communication, satellite communication, satellite positioning, and radar.
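The correlation properties claimed for the linear symbol with root $R$ resemble those of a Zadoff-Chu chirp. Assuming that form (an assumption for illustration; the paper's exact symbol definition may differ), the following sketch checks the ideal periodic auto-correlation and the constant-magnitude cross-correlation between two roots for a prime symbol length.

```python
import numpy as np

def zc_symbol(root, n):
    """Zadoff-Chu-style linear chirp of odd length n with the given root.

    A stand-in for the linear micro frequency hopping symbol described in
    the paper (an assumption; the actual symbol may differ).
    """
    k = np.arange(n)
    return np.exp(-1j * np.pi * root * k * (k + 1) / n)

def periodic_xcorr(a, b):
    """Magnitudes of the periodic cross-correlation at all lags,
    computed via the circular correlation theorem."""
    return np.abs(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))
```

For prime length $N$ and roots whose difference is coprime to $N$, the auto-correlation is a single peak of height $N$ and the cross-correlation has constant magnitude $\sqrt{N}$, which is what makes same-time, same-frequency multi-user operation plausible.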
This paper considers continuous data assimilation (CDA) in partial differential equation (PDE) discretizations where nudging parameters can be taken arbitrarily large. We prove that long-time optimally accurate solutions are obtained for such parameters for the heat and Navier-Stokes equations (using implicit time stepping methods), with error bounds that do not grow as the nudging parameter becomes large. Existing theoretical results either prove optimal accuracy with the error scaled by the nudging parameter, or suboptimal accuracy that is independent of it. The key idea behind the improved analysis is to decompose the error using a weighted inner product that incorporates the (symmetric by construction) nudging term, and to prove that the projection error in this weighted inner product is optimal and independent of the nudging parameter. We apply this idea to BDF2 finite element discretizations of the heat and Navier-Stokes equations to show that, with CDA, they admit optimal long-time accurate solutions independent of the nudging parameter, for nudging parameters large enough. Several numerical tests for the heat equation, a fluid transport equation, Navier-Stokes, and Cahn-Hilliard illustrate the theory.
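For reference, in standard CDA (Azouani-Olson-Titi) notation, the nudged heat equation takes the following form; this is a sketch of the continuous problem, while the paper works with its BDF2 finite element discretization.

```latex
% v: nudged approximation, u: true solution, I_H: coarse-mesh
% interpolation operator, mu > 0: nudging parameter.
v_t - \nu \Delta v + \mu \, I_H(v - u) = f, \qquad v(0) = v_0 .
```

The nudging term $\mu\, I_H(v-u)$ is what the weighted inner product in the analysis is built around, allowing $\mu$ to be taken arbitrarily large without degrading the error bound.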
In this work we propose a highly optimized version of a simulated annealing (SA) algorithm adapted to Graphics Processing Units (GPUs). The programming has been carried out with the CUDA toolkit, specially designed for Nvidia GPUs. For this purpose, efficient versions of SA were first analyzed and adapted to GPUs; thus, an appropriate sequential SA algorithm was developed as a starting point. Next, a straightforward asynchronous parallel version was implemented, followed by a specific and more efficient synchronous version. A suitably wide benchmark was considered to illustrate the performance properties of the implementation. Among all the tests, a classical sample problem, the minimization of the normalized Schwefel function, was selected to compare the behavior of the sequential, asynchronous, and synchronous versions, the last one being the most advantageous in terms of the balance between convergence, accuracy, and computational cost. A hybrid method combining SA with a local minimizer has also been implemented. Note that the generic nature of the SA algorithm allows its application to a wide range of real problems arising in fields such as biology, physics, engineering, finance, and industrial processes.
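The sequential starting point can be sketched as a minimal SA loop with geometric cooling on the 2-D Schwefel function. All parameter values (initial temperature, cooling rate, step size) are illustrative choices, not the paper's tuned values, and the GPU versions are not reproduced here.

```python
import math
import random

def schwefel(x):
    """Schwefel benchmark; global minimum near 0 at x_i ~ 420.9687."""
    d = len(x)
    return 418.9829 * d - sum(xi * math.sin(math.sqrt(abs(xi))) for xi in x)

def simulated_annealing(f, dim, lo=-500.0, hi=500.0,
                        t0=100.0, cooling=0.999, steps=20000, seed=0):
    """Minimal sequential SA with geometric cooling (a sketch only)."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(dim)]
    fx = f(x)
    best, fbest = x[:], fx
    t = t0
    for _ in range(steps):
        # Propose a Gaussian perturbation, clipped to the box [lo, hi]^dim.
        cand = [min(hi, max(lo, xi + rng.gauss(0.0, 50.0))) for xi in x]
        fc = f(cand)
        # Metropolis acceptance rule: always accept improvements,
        # accept uphill moves with probability exp(-(fc - fx) / t).
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        t *= cooling                     # geometric cooling schedule
    return best, fbest
```

A hybrid variant as described in the text would pass `best` to a local minimizer after the loop to polish the SA result.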
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and address two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely track the generalization gap in practical deep learning scenarios.