Some aspects of the ELectrical EXplicit (ELEX) scheme for using explicit integration methods in circuit simulation are discussed. It is pointed out that the parallel resistor approach, presented earlier to address the singular matrix issues arising in the ELEX scheme, is not robust enough for incorporation in a general-purpose simulator for power electronic circuits. New topology-aware approaches, which are more robust and efficient than the parallel resistor approach, are presented. Several circuit examples are considered to illustrate the new approaches.
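As a concrete illustration of the parallel resistor idea (a minimal sketch of the general technique, not the paper's implementation), consider a floating node whose nodal conductance matrix is singular: adding a large fictitious parallel resistor, i.e. a small conductance $g_p$, restores invertibility, but the resulting node voltage scales like $1/g_p$, hinting at the robustness issues that topology-aware approaches are designed to avoid.

```python
# Minimal sketch of the parallel resistor workaround (illustrative, not the
# paper's implementation): a floating node yields a singular nodal matrix;
# a small fictitious parallel conductance g_p makes the system solvable.
import numpy as np

G = np.array([[0.0]])        # nodal conductance matrix of a floating node: singular
i_src = np.array([1e-3])     # 1 mA injected into the node

g_p = 1e-9                   # fictitious 1 G-ohm resistor in parallel
G_reg = G + g_p * np.eye(1)  # regularized matrix is now invertible

v = np.linalg.solve(G_reg, i_src)
print(v)                     # ~1e6 V: finite, but sensitive to the choice of g_p
```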
Local modifications of a computational domain are often performed in order to simplify the meshing process and to reduce computational costs and memory requirements. However, removing geometrical features of a domain often introduces a non-negligible error in the solution of the differential problem posed on it. In this work, we extend the results from [1] by studying the case of domains containing an arbitrary number of distinct Neumann features, and by performing an analysis on Poisson's, linear elasticity, and Stokes' equations. We introduce a simple, computationally cheap, reliable, and efficient a posteriori estimator of the geometrical defeaturing error. Moreover, we also introduce a geometric refinement strategy that accounts for the defeaturing error: starting from a fully defeatured geometry, the algorithm determines at each iteration which features need to be added to the geometrical model to reduce the defeaturing error. These important features are then added to the (partially) defeatured geometrical model at the next iteration, until the solution attains a prescribed accuracy. A wide range of two- and three-dimensional numerical experiments is finally reported to illustrate the proposed approach.
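The refinement strategy can be summarized as a greedy feature-inclusion loop. The sketch below is illustrative pseudocode under stated assumptions: the function and variable names are hypothetical, and while the real algorithm re-solves the PDE and recomputes the estimator at every iteration, here the per-feature contributions are kept static for brevity.

```python
# Illustrative sketch of the feature-inclusion loop (names and the marking rule
# are assumptions; in the real algorithm the PDE is re-solved and the estimator
# recomputed at every iteration, whereas here the contributions are static).
def adaptive_defeaturing(eta_sq, tol, mark_fraction=0.5):
    """Greedily add the features with the largest estimated defeaturing error.

    eta_sq : dict mapping feature id -> squared error contribution estimate
    tol    : stop once the combined estimator falls below this tolerance
    """
    included = set()                                 # fully defeatured start
    while True:
        missing = {f: e for f, e in eta_sq.items() if f not in included}
        total = sum(missing.values()) ** 0.5         # l2 combination of contributions
        if not missing or total <= tol:
            return included, total
        # mark the largest contributors for inclusion at the next iteration
        ranked = sorted(missing, key=missing.get, reverse=True)
        included |= set(ranked[:max(1, int(mark_fraction * len(ranked)))])

# toy example: four candidate features with synthetic error contributions
included, est = adaptive_defeaturing(
    {"hole1": 4.0, "hole2": 0.09, "fillet": 0.01, "slot": 1.0}, tol=0.2)
print(included, est)    # the dominant features end up in the model
```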
Confidence intervals based on the central limit theorem (CLT) are a cornerstone of classical statistics. Despite being only asymptotically valid, they are ubiquitous because they permit statistical inference under very weak assumptions, and can often be applied to problems even when nonasymptotic inference is impossible. This paper introduces time-uniform analogues of such asymptotic confidence intervals. To elaborate, our methods take the form of confidence sequences (CSs) -- sequences of confidence intervals that are uniformly valid over time. CSs provide valid inference at arbitrary stopping times, incurring no penalties for "peeking" at the data, unlike classical confidence intervals, which require the sample size to be fixed in advance. Existing CSs in the literature are nonasymptotic and hence do not enjoy the aforementioned broad applicability of asymptotic confidence intervals. Our work bridges the gap by defining "asymptotic CSs" and deriving a universal asymptotic CS that requires only weak CLT-like assumptions. While the CLT approximates the distribution of a sample average by that of a Gaussian at a fixed sample size, we use strong invariance principles (stemming from the seminal 1960s work of Strassen and improvements by Koml\'os, Major, and Tusn\'ady) to uniformly approximate the entire sample average process by an implicit Gaussian process. As an illustration of our theory, we derive asymptotic CSs for the average treatment effect using efficient estimators in observational studies (for which no nonasymptotic bounds can exist even in the fixed-time regime) as well as randomized experiments, enabling causal inference that can be continuously monitored and adaptively stopped.
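For concreteness, asymptotic CSs of this kind take a Gaussian-mixture form; a representative two-sided instance (written schematically here, and not a quotation of the paper's theorem) is
\[
\bar X_t \;\pm\; \hat\sigma_t \sqrt{\frac{2\,(t\rho^2+1)}{t^2\rho^2}\,\log\!\left(\frac{\sqrt{t\rho^2+1}}{\alpha}\right)},
\]
where $\bar X_t$ and $\hat\sigma_t$ are the running sample mean and standard deviation, $\alpha$ is the error level, and the user-chosen constant $\rho>0$ controls the sample size at which the sequence is tightest. The boundary shrinks to zero as $t \to \infty$, yet remains valid uniformly over all times.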
Collaborative filtering (CF) has become a popular method for developing recommender systems (RSs), where the ratings of a user for new items are predicted based on her past preferences and the available preference information of other users. Despite the popularity of CF-based methods, their performance is often greatly limited by the sparsity of observed entries. In this study, we explore the data augmentation and refinement aspects of Maximum Margin Matrix Factorization (MMMF), a widely accepted CF technique for rating prediction; these aspects have not been investigated before. We exploit the inherent characteristics of CF algorithms to assess the confidence level of individual ratings and propose a semi-supervised approach for rating augmentation based on self-training. We hypothesize that any CF algorithm's predictions with low confidence are due to some deficiency in the training data and that, hence, the performance of the algorithm can be improved by adopting a systematic data augmentation strategy. We iteratively use some of the ratings predicted with high confidence to augment the training data and remove low-confidence entries through a refinement process. By repeating this process, the system progressively improves its prediction accuracy. Our method is experimentally evaluated on several state-of-the-art CF algorithms and leads to informative rating augmentation, improving the performance of the baseline approaches.
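The following sketch illustrates the general self-training loop on a toy rating matrix. It is a minimal assumption-laden stand-in: the rank-2 SVD predictor and the round-to-nearest-rating confidence heuristic are placeholders, not the MMMF machinery or the paper's confidence measure.

```python
# Sketch of the self-training augmentation/refinement loop (the predictor and
# the confidence heuristic are stand-ins, not the paper's MMMF machinery).
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(20, 15)).astype(float)  # ground-truth ratings 1..5
observed = rng.random(R.shape) < 0.3                 # ~30% of entries observed
train = np.where(observed, R, np.nan)

def predict(train):
    """Stand-in CF predictor: rank-2 SVD of the mean-filled rating matrix."""
    filled = np.where(np.isnan(train), np.nanmean(train), train)
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    return (U[:, :2] * s[:2]) @ Vt[:2, :]

def confidence(pred):
    """Stand-in confidence: proximity of a prediction to a discrete rating."""
    return 1.0 - 2.0 * np.abs(pred - np.round(pred))

for it in range(5):
    pred = predict(train)
    conf = confidence(pred)
    unknown = np.isnan(train)
    augment = unknown & (conf > 0.8)          # adopt high-confidence predictions
    train[augment] = np.clip(np.round(pred[augment]), 1, 5)
    # refinement (dropping pseudo-ratings whose confidence has decayed) would
    # require tracking which entries are pseudo-labeled; omitted for brevity
    print(f"iter {it}: added {augment.sum()} pseudo-ratings")
```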
We provide a non-unit-disk framework to solve combinatorial optimization problems such as Maximum Cut (Max-Cut) and Maximum Independent Set (MIS) on a Rydberg quantum annealer. Our setup consists of a many-body interacting Rydberg system where locally controllable light shifts are applied to individual qubits in order to map the graph problem onto the Ising spin model. Exploiting the flexibility that optical tweezers offer in terms of spatial arrangement, our numerical simulations implement the local-detuning protocol while globally driving the Rydberg annealer to the desired many-body ground state, which is also the solution to the optimization problem. Using optimal control methods, these solutions are obtained for prototype graphs of varying sizes at time scales well within the system lifetime and with approximation ratios close to one. The non-blockade approach facilitates the encoding of graph problems with specific topologies that can be realized in two-dimensional Rydberg configurations, and it is applicable to both unweighted and weighted graphs. A comparative analysis with fast simulated annealing is provided, which highlights the advantages of our scheme in terms of system size, hardness of the graph, and the number of iterations required to converge to the solution.
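Schematically, such a mapping rests on the standard driven Rydberg Hamiltonian with site-resolved detunings (a textbook form, stated here for orientation rather than as the paper's exact setup):
\[
H(t) \;=\; \frac{\Omega(t)}{2}\sum_i \sigma_x^{(i)} \;-\; \sum_i \Delta_i(t)\, n_i \;+\; \sum_{i<j}\frac{C_6}{r_{ij}^6}\, n_i n_j,
\]
where $n_i = \tfrac12\bigl(1+\sigma_z^{(i)}\bigr)$ projects onto the Rydberg state, the global Rabi drive $\Omega(t)$ is shared by all qubits, the locally controllable light shifts enter through the detunings $\Delta_i(t)$ that can encode vertex weights, and the van der Waals interaction $C_6/r_{ij}^6$, set by the tweezer geometry, supplies the edge couplings of the Ising model.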
Matrices are built and designed by applying procedures to lower order matrices. Matrix tensor products, direct sums, and matrix multiplication are such procedures, and a matrix built from them is said to be a {\em separable} matrix. A {\em non-separable} matrix is a matrix which is not separable and is often referred to as {\em an entangled matrix}. The matrices built may retain properties of the lower order matrices or may acquire new desired properties not inherent in the constituents. Here, design methods for non-separable matrices of required types are derived. These can retain properties of lower order matrices or have new desirable properties. Infinite series of the required non-separable matrices are constructible by the general methods. Non-separable matrices are required for applications and other uses; they can capture structure that separable matrices cannot and thus perform much better than separable matrices. General new methods are developed with which to construct {\em multidimensional entangled paraunitary matrices}; these have applications for wavelet and filter bank design. The constructions are additionally used to design new systems of non-separable unitary matrices; these have applications in quantum information theory. Some consequences include the design of full diversity constellations of unitary matrices, which are used in MIMO systems, and methods to design infinite series of special types of Hadamard matrices.
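A small numerical illustration of these notions (not taken from the paper): Kronecker products and direct sums of $2\times 2$ unitaries are separable by construction, whereas the CNOT gate is a classic non-separable (entangled) unitary; separability of a $4\times 4$ matrix can be probed via the rank of its block realignment.

```python
# Numerical illustration of separability (not from the paper): tensor products
# and direct sums give separable matrices; CNOT is a classic non-separable
# (entangled) unitary, detected here via the rank of its realignment.
import numpy as np
from scipy.linalg import block_diag

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 2x2 unitary building block
A = np.kron(H, H)                              # tensor product: separable
B = block_diag(H, H)                           # direct sum: also separable

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)   # entangled two-qubit unitary

def realign_rank(M):
    """Rank of the realigned 4x4 matrix; equals 1 iff M = X kron Y."""
    R = M.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
    return np.linalg.matrix_rank(R)

print(realign_rank(A), realign_rank(CNOT))     # -> 1 2
```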
This paper addresses the problem of providing robust estimators under a functional logistic regression model. Logistic regression is a popular tool in classification problems with two populations. As in functional linear regression, regularization tools are needed to compute estimators for the functional slope. The traditional methods are based on dimension reduction or penalization combined with maximum likelihood or quasi-likelihood techniques and, for that reason, they may be affected by misclassified points, especially when these are associated with functional covariates exhibiting atypical behaviour. The proposal given in this paper adapts some of the best practices used when the covariates are finite-dimensional to provide reliable estimators. Under regularity conditions, consistency of the resulting estimators and rates of convergence for the predictions are derived. A numerical study illustrates the finite sample performance of the proposed method and reveals its stability under different contamination scenarios. A real data example is also presented.
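One way to picture the two ingredients, dimension reduction plus a robustified fit, is the sketch below. It is an assumption-laden stand-in, not the paper's estimator: the covariate is reduced to a few principal component scores, and the logistic deviance is capped so that misclassified atypical curves have bounded influence.

```python
# Schematic sketch (an assumption, not the paper's estimator): reduce the
# functional covariate to a few principal component scores, then fit a logistic
# model with a bounded deviance so misclassified atypical curves have capped
# influence on the fit.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100)
X = rng.normal(size=(200, 100))                # 200 discretized curves X_i(t)
beta_true = np.sin(2 * np.pi * t)
p = 1 / (1 + np.exp(-(X @ beta_true) / 100))   # integral approximated on the grid
y = rng.binomial(1, p)

# dimension reduction: first 4 functional principal component scores
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
S = Xc @ Vt[:4].T

def bounded_deviance(b, c=3.0):
    z = (S @ b) * (2 * y - 1)                  # signed margin per observation
    dev = np.log1p(np.exp(-np.abs(z))) + np.maximum(-z, 0)   # stable deviance
    return np.sum(np.minimum(dev, c))          # the cap bounds outlier influence

b_hat = minimize(bounded_deviance, np.zeros(4)).x
print(b_hat)                                   # robust slope scores
```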
The identification of primal variables and adjoint variables is usually done via indices in operator overloading algorithmic differentiation tools. One approach is a linear management scheme, which is easy to implement and supports memory optimization for copy statements. An alternative approach reuses indices, which requires more implementation effort but results in much smaller adjoint vectors; the vector mode of algorithmic differentiation therefore scales better with the reuse management scheme. In this paper, we present a novel approach that reuses indices and also allows the copy optimization, thus combining the advantages of the two aforementioned schemes. The new approach is compared to the known approaches on a simple synthetic test case and a real-world example using the computational fluid dynamics solver SU2.
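A toy model of the combined idea is sketched below. It is illustrative only, not the paper's scheme or the SU2 integration: indices are recycled through a free list (keeping the adjoint vector small), while a copy statement aliases the right-hand side's index instead of consuming a new one; reference counting is one simple assumption for making the two mechanisms coexist safely.

```python
# Toy index manager sketching the combined scheme (illustrative assumption,
# not the paper's implementation): reuse via a free list, copy optimization
# via aliasing, with reference counts guarding premature reuse.
class IndexManager:
    def __init__(self):
        self.next_index = 1
        self.free = []                    # indices released for reuse
        self.refs = {}                    # index -> number of live variables

    def assign(self):
        """Index for the result of a general statement (reused if possible)."""
        i = self.free.pop() if self.free else self._fresh()
        self.refs[i] = 1
        return i

    def copy(self, rhs):
        """Copy optimization: the copy aliases rhs's index; no new adjoint slot."""
        self.refs[rhs] += 1
        return rhs

    def release(self, i):
        self.refs[i] -= 1
        if self.refs[i] == 0:
            self.free.append(i)           # reuse only once no variable holds it

    def _fresh(self):
        i = self.next_index
        self.next_index += 1
        return i

mgr = IndexManager()
a = mgr.assign()          # a = f(x): fresh index 1
b = mgr.copy(a)           # b = a: shares index 1, adjoint vector stays small
mgr.release(a); mgr.release(b)
c = mgr.assign()          # reuses index 1
print(c)                  # -> 1
```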
Making inference with spatial extremal dependence models can be computationally burdensome since they involve intractable and/or censored likelihoods. Building on recent advances in likelihood-free inference with neural Bayes estimators, that is, neural networks that approximate Bayes estimators, we develop highly efficient estimators for censored peaks-over-threshold models that encode censoring information in the neural network architecture. Our new method provides a paradigm shift that challenges traditional censored likelihood-based inference methods for spatial extremal dependence models. Our simulation studies highlight significant gains in both computational and statistical efficiency, relative to competing likelihood-based approaches, when applying our novel estimators to make inference with popular extremal dependence models, such as max-stable, $r$-Pareto, and random scale mixture process models. We also show that it is possible to train a single neural Bayes estimator for a general censoring level, removing the need to retrain the network when the censoring level is changed. We illustrate the efficacy of our estimators by making fast inference on hundreds of thousands of high-dimensional spatial extremal dependence models to assess extreme concentrations of particulate matter with diameter 2.5 microns or less (PM2.5) over the whole of Saudi Arabia.
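One simple way to encode censoring in the inputs of such a network is sketched below; the architecture and names are assumptions rather than the paper's design. Each site contributes the censored datum and a censoring indicator, and feeding the censoring level $u$ itself as an extra feature is one route to a single estimator that covers arbitrary censoring levels. In practice the network would be trained by minimizing a Monte Carlo approximation of the Bayes risk over simulated (parameter, data) pairs.

```python
# Minimal sketch of encoding censoring information in the network inputs
# (illustrative; architecture and names are assumptions, not the paper's design).
import torch
import torch.nn as nn

class CensoredNBE(nn.Module):
    """Toy neural Bayes estimator whose inputs encode the censoring pattern."""
    def __init__(self, n_sites, n_params):
        super().__init__()
        # inputs: censored data, per-site censoring indicators, censoring level
        self.net = nn.Sequential(
            nn.Linear(2 * n_sites + 1, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_params),
        )

    def forward(self, z, u):
        ind = (z <= u).float()                 # 1 = censored (non-exceedance)
        zc = z.clamp(min=u)                    # censor the data at the threshold
        uu = torch.full((z.shape[0], 1), u)    # pass u as a feature, so a single
        return self.net(torch.cat([zc, ind, uu], dim=-1))  # net covers all levels

model = CensoredNBE(n_sites=64, n_params=2)
theta_hat = model(torch.randn(8, 64), u=0.5)   # parameter estimates, batch of 8
print(theta_hat.shape)                         # torch.Size([8, 2])
```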
The simulation of supersonic or hypersonic flows often suffers from numerical shock instabilities if the flow field contains strong shocks, limiting the further application of shock-capturing schemes. In this paper, we develop a unified matrix stability analysis method for schemes with three-point stencils and present MSAT, an open-source tool for quantitatively analyzing the shock instability problem. Based on the finite-volume approach on structured grids, MSAT can be employed to investigate the mechanism of the shock instability problem, evaluate the robustness of numerical schemes, and thereby help to develop robust schemes. MSAT can also analyze practical simulations of supersonic or hypersonic flows, evaluate whether they will suffer from shock instabilities, and thus assist in selecting appropriate numerical schemes. As a result, MSAT is a helpful tool for investigating the shock instability problem and helping to cure it.
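The conceptual core of matrix stability analysis can be sketched generically (this is a textbook-style illustration under stated assumptions, not MSAT itself): linearize the semi-discrete scheme $\mathrm{d}U/\mathrm{d}t = R(U)$ about a steady base flow containing the shock, and flag instability when the resulting stability matrix has an eigenvalue with positive real part.

```python
# Conceptual core of matrix stability analysis (a generic sketch, not MSAT):
# linearize dU/dt = R(U) about a steady base state and flag instability when
# the stability matrix has an eigenvalue with positive real part.
import numpy as np

def stability_matrix(residual, U0, eps=1e-7):
    """Finite-difference Jacobian S = dR/dU at the base state U0."""
    n = U0.size
    R0 = residual(U0)
    S = np.empty((n, n))
    for j in range(n):
        dU = np.zeros(n); dU[j] = eps
        S[:, j] = (residual(U0 + dU) - R0) / eps
    return S

def is_unstable(residual, U0):
    eig = np.linalg.eigvals(stability_matrix(residual, U0))
    return eig.real.max() > 0, eig

# toy residual: linear damping is stable; a sign flip would be unstable
unstable, eig = is_unstable(lambda U: -0.5 * U, np.ones(4))
print(unstable, eig.real.max())    # False, -0.5
```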
"Non-Malleable Randomness Encoder"(NMRE) was introduced by Kanukurthi, Obbattu, and Sekar~[KOS18] as a useful cryptographic primitive helpful in the construction of non-malleable codes. To the best of our knowledge, their construction is not known to be quantum secure. We provide a construction of a first rate-$1/2$, $2$-split, quantum secure NMRE and use this in a black-box manner, to construct for the first time the following: 1) rate $1/11$, $3$-split, quantum non-malleable code, 2) rate $1/3$, $3$-split, quantum secure non-malleable code, 3) rate $1/5$, $2$-split, average case quantum secure non-malleable code.