We give an adequate, concrete categorical model for Lambda-S, a typed version of a linear-algebraic lambda calculus extended with measurements. Lambda-S is an extension of first-order lambda calculus unifying two approaches to non-cloning in quantum lambda calculi: forbidding the duplication of variables, and considering all lambda-terms as algebraic linear functions. The type system of Lambda-S has a superposition constructor $S$ such that a type $A$ is considered as the basis of a vector space while $SA$ is its span. Our model interprets $S$ as the composition of two functors in an adjunction between the category of sets and the category of vector spaces over $\mathbb{C}$. The right adjoint is a forgetful functor $U$, which is hidden in the language and plays a central role in the computational reasoning.
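One way to read this construction (our gloss, using the standard free-forgetful adjunction, which the abstract alludes to but does not spell out): writing $F \colon \mathbf{Set} \to \mathbf{Vec}_{\mathbb{C}}$ for the free vector space functor and $U \colon \mathbf{Vec}_{\mathbb{C}} \to \mathbf{Set}$ for the forgetful functor,
\[
F \dashv U, \qquad S\,A \;\cong\; U(F A),
\]
so $S$ is the monad induced by the adjunction: $A$ supplies the basis vectors, and $U(FA)$ is the set of their formal $\mathbb{C}$-linear combinations, i.e. the span.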
We present and analyze a discontinuous Galerkin method for the numerical modeling of the non-linear fully-coupled thermo-hydro-mechanical problem. We propose a high-order symmetric weighted interior penalty scheme that supports general polytopal grids and is robust with respect to strong heterogeneities in the model coefficients. We focus on the treatment of the non-linear convective transport term in the energy conservation equation and propose suitable stabilization techniques that make the scheme robust for advection-dominated regimes. The stability analysis of the problem and the convergence of the fixed-point linearization strategy are addressed theoretically under mild requirements on the problem's data. A complete set of numerical simulations is presented in order to assess the convergence and robustness properties of the proposed method.
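As background for the "symmetric weighted interior penalty" label, here is the standard form of such a scheme for a model diffusion operator with coefficient $\kappa$ (a generic fragment in the style of Di Pietro and Ern; the paper's fully-coupled scheme combines several such terms and is not reproduced here):
\[
a_h(u,v) = \sum_{T}\int_T \kappa\,\nabla u\cdot\nabla v
\;-\; \sum_{F}\int_F \big( \{\kappa\nabla u\}_\omega\cdot n_F\,[u\mapsto v] + \{\kappa\nabla v\}_\omega\cdot n_F\,[u] \big)
\;+\; \sum_{F}\int_F \frac{\gamma_F}{h_F}\,[u]\,[v],
\]
where $[\cdot]$ denotes the jump across a face $F$, $\{\cdot\}_\omega$ is the weighted average with weights $\omega_\pm = \kappa_\mp/(\kappa_+ + \kappa_-)$, and the penalty $\gamma_F$ scales with the harmonic mean of $\kappa$ across $F$; this coefficient-dependent weighting is what yields robustness to strong heterogeneities.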
In this paper, we propose a robust low-order stabilization-free virtual element method on quadrilateral meshes for linear elasticity that is based on the stress-hybrid principle. We refer to this approach as the Stress-Hybrid Virtual Element Method (SH-VEM). In this method, the Hellinger–Reissner variational principle is adopted, wherein both the equilibrium equations and the strain-displacement relations are variationally enforced. We consider small-strain deformations of linear elastic solids in the compressible and near-incompressible regimes over quadrilateral (convex and nonconvex) meshes. Within an element, the displacement field is approximated as a linear combination of canonical shape functions that are \textit{virtual}. The stress field, similar to the stress-hybrid finite element method of Pian and Sumihara, is represented using a linear combination of symmetric tensor polynomials. A 5-parameter expansion of the stress field is used in each element, with stress transformation equations applied on distorted quadrilaterals. In the variational statement of the strain-displacement relations, the divergence theorem is invoked to express the stress coefficients in terms of the nodal displacements. This results in a formulation with solely the nodal displacements as unknowns. Numerical results are presented for several benchmark problems from linear elasticity. We show that SH-VEM is free of volumetric and shear locking, and it converges optimally in the $L^2$ norm and energy seminorm of the displacement field, and in the $L^2$ norm of the hydrostatic stress.
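For reference, the classical Pian–Sumihara 5-parameter interpolation invoked here takes, in the natural coordinates $(\xi,\eta)$ of the quadrilateral (our display of the standard basis, not the paper's notation),
\[
\begin{pmatrix}\sigma_{11}\\ \sigma_{22}\\ \sigma_{12}\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 & \eta & 0\\
0 & 1 & 0 & 0 & \xi\\
0 & 0 & 1 & 0 & 0
\end{pmatrix}
\begin{pmatrix}\beta_1\\ \beta_2\\ \beta_3\\ \beta_4\\ \beta_5\end{pmatrix},
\]
with the two linear modes pushed forward through the element geometry on distorted quadrilaterals; these are the "stress transformation equations" mentioned above.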
We resurrect the infamous harmonic mean estimator for computing the marginal likelihood (Bayesian evidence) and solve its problematic large variance. The marginal likelihood is a key component of Bayesian model selection to evaluate model posterior probabilities; however, its computation is challenging. The original harmonic mean estimator, first proposed by Newton and Raftery in 1994, involves computing the harmonic mean of the likelihood given samples from the posterior. It was immediately realised that the original estimator can fail catastrophically since its variance can become very large (possibly not finite). A number of variants of the harmonic mean estimator have been proposed to address this issue although none have proven fully satisfactory. We present the \emph{learnt harmonic mean estimator}, a variant of the original estimator that solves its large variance problem. This is achieved by interpreting the harmonic mean estimator as importance sampling and introducing a new target distribution. The new target distribution is learned to approximate the optimal but inaccessible target, while minimising the variance of the resulting estimator. Since the estimator requires samples of the posterior only, it is agnostic to the sampling strategy used. We validate the estimator on a variety of numerical experiments, including a number of pathological examples where the original harmonic mean estimator fails catastrophically. We also consider a cosmological application, where our approach leads to $\sim$ 3 to 6 times more samples than current state-of-the-art techniques in 1/3 of the time. In all cases our learnt harmonic mean estimator is shown to be highly accurate. The estimator is computationally scalable and can be applied to problems of dimension $O(10^3)$ and beyond. Code implementing the learnt harmonic mean estimator is made publicly available.
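To make the importance-sampling reading concrete (notation ours, not taken from the abstract): with posterior samples $\theta_i$, likelihood $L$, prior $\pi$, and evidence $z$, the original estimator and its re-targeted variant are
\[
\hat{z}^{-1} = \frac{1}{N}\sum_{i=1}^N \frac{1}{L(\theta_i)}, \qquad
\hat{z}^{-1} = \frac{1}{N}\sum_{i=1}^N \frac{\varphi(\theta_i)}{L(\theta_i)\,\pi(\theta_i)},
\]
both unbiased for $z^{-1}$ for any normalized target $\varphi$, since the posterior expectation of $\varphi/(L\pi)$ is $\int \varphi/z = z^{-1}$; the original estimator corresponds to the choice $\varphi = \pi$, and the "learnt" estimator instead fits $\varphi$ to approximate the optimal but inaccessible target (the normalized posterior itself) while keeping the variance finite.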
This document defines a method for FIR system modelling that is deliberately simple: it depends only on the introduction and removal of phase (allpass filtering). Since the magnitude is never altered, the processing is numerically stable. The method is limited to phase alteration, which preserves the time-domain magnitude and thereby keeps a system within its linear limits.
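A minimal numerical sketch of the phase-only idea (our illustration of the principle, not the document's algorithm): apply a pure phase ramp to a signal's spectrum, which delays the signal while leaving the magnitude spectrum untouched.

```python
import numpy as np

def allpass_delay(x, d):
    """Delay signal x by d samples via a pure (allpass) phase ramp.

    The multiplier has unit modulus at every bin, so the magnitude
    spectrum is untouched and the operation is numerically stable.
    """
    N = len(x)
    phase = np.exp(-2j * np.pi * np.fft.fftfreq(N) * d)
    return np.fft.ifft(np.fft.fft(x) * phase).real

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
y = allpass_delay(x, 3)               # circularly delayed copy of x
assert np.allclose(y, np.roll(x, 3))  # integer delay == circular shift
# Magnitude spectrum is preserved to machine precision.
assert np.allclose(np.abs(np.fft.fft(y)), np.abs(np.fft.fft(x)))
```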
Refinement calculus provides a structured framework for the progressive and modular development of programs, ensuring their correctness throughout the refinement process. This paper introduces a refinement calculus tailored for quantum programs. To this end, we first study the partial correctness of nondeterministic programs within a quantum while-language featuring prescription statements. Orthogonal projectors, which are in one-to-one correspondence with the (closed) subspaces of the state Hilbert space, are taken as assertions for quantum states. In addition to the denotational semantics, where a nondeterministic program is associated with a set of trace-nonincreasing super-operators, we also present their semantics in transforming a postcondition to the weakest liberal precondition and, conversely, transforming a precondition to the strongest postcondition. Subsequently, refinement rules are introduced based on these dual semantics, offering a systematic approach to the incremental development of quantum programs applicable in various contexts. To illustrate the practical application of the refinement calculus, we examine examples such as the implementation of a $Z$-rotation gate, the repetition code, and the quantum-to-quantum Bernoulli factory. Furthermore, we present Quire, a Python-based interactive prototype tool that provides practical support to programmers engaged in the stepwise development of correct quantum programs.
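For orientation, the classical partial-correctness pattern that such dual transformers follow is the Galois connection (our display, with $\sqsubseteq$ read here as inclusion of the subspaces onto which the projectors project; whether the quantum setting preserves it verbatim is established in the paper, not assumed here):
\[
P \sqsubseteq wlp.S.Q \iff sp.S.P \sqsubseteq Q,
\]
which is what allows a prescription to be refined by reasoning forwards from the precondition or backwards from the postcondition interchangeably.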
The characterization of the solution set for a class of algebraic Riccati inequalities is studied. This class arises in the passivity analysis of linear time-invariant control systems. Eigenvalue perturbation theory for the Hamiltonian matrix associated with the Riccati inequality is used to analyze the extremal points of the solution set.
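For concreteness (a standard positive-real form; the state-space notation $(A,B,C,D)$ and the assumption $D + D^\top \succ 0$ are ours, not taken from the abstract), a Riccati inequality of this class reads
\[
A^\top X + X A + (X B - C^\top)(D + D^\top)^{-1}(B^\top X - C) \preceq 0, \qquad X = X^\top \succ 0.
\]
The set of all such $X$ is the solution set in question; the associated Hamiltonian matrix packages the coefficients of the quadratic left-hand side, and perturbation theory for its eigenvalues describes the boundary, i.e. the extremal points, of that set.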
Common regularization algorithms for linear regression, such as LASSO and Ridge regression, rely on a regularization hyperparameter that balances the tradeoff between minimizing the fitting error and the norm of the learned model coefficients. As this hyperparameter is scalar, it can be easily selected via random or grid search optimizing a cross-validation criterion. However, using a scalar hyperparameter limits the algorithm's flexibility and potential for better generalization. In this paper, we address the problem of linear regression with $\ell_2$-regularization, where a different regularization hyperparameter is associated with each input variable. We optimize these hyperparameters using a gradient-based approach, wherein the gradient of a cross-validation criterion with respect to the regularization hyperparameters is computed analytically through matrix differential calculus. Additionally, we introduce two strategies tailored to sparse model learning problems, aimed at reducing the risk of overfitting to the validation data. Numerical examples demonstrate that our multi-hyperparameter regularization approach outperforms LASSO, Ridge, and Elastic Net regression. Moreover, the analytical computation of the gradient proves more efficient in terms of computational time than automatic differentiation, especially when handling a large number of input variables. An application to the identification of over-parameterized Linear Parameter-Varying models is also presented.
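A minimal sketch of the analytic gradient (our simplification: a single hold-out split rather than full cross-validation, with illustrative variable names): for $w = A^{-1} X^\top y$ with $A = X^\top X + \mathrm{diag}(\lambda)$, matrix calculus gives $\partial w/\partial\lambda_j = -A^{-1} e_j w_j$, so the gradient of the validation loss costs only one extra linear solve.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge with a separate penalty lam[j] per input variable."""
    A = X.T @ X + np.diag(lam)
    return np.linalg.solve(A, X.T @ y), A

def val_loss_and_grad(lam, X_tr, y_tr, X_val, y_val):
    """Validation SSE and its analytic gradient w.r.t. the penalties."""
    w, A = ridge_fit(X_tr, y_tr, lam)
    r = X_val @ w - y_val
    # dw/dlam_j = -A^{-1} e_j w_j  =>  dL/dlam_j = -2 w_j (A^{-1} X_val^T r)_j
    g = -2.0 * w * np.linalg.solve(A, X_val.T @ r)
    return r @ r, g

# Toy usage: gradient descent in log-space keeps every penalty positive.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 10))
y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(120)
X_tr, X_val, y_tr, y_val = X[:80], X[80:], y[:80], y[80:]
theta = np.zeros(10)                    # lam = exp(theta)
for _ in range(200):
    loss, g = val_loss_and_grad(np.exp(theta), X_tr, y_tr, X_val, y_val)
    theta -= 0.05 * g * np.exp(theta)   # chain rule through lam = exp(theta)
```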
Improving the resolution of fluorescence microscopy beyond the diffraction limit can be achieved by acquiring and processing multiple images of the sample under different illumination conditions. One of the simplest techniques, Random Illumination Microscopy (RIM), forms the super-resolved image from the variance of images obtained with random speckled illuminations. However, the validity of this process has not been fully theorized. In this work, we characterize mathematically the sample information contained in the variance of diffraction-limited speckled images as a function of the statistical properties of the illuminations. We show that an unambiguous two-fold resolution gain is obtained when the speckle correlation length coincides with the width of the observation point spread function. Last, we analyze the difference between the variance-based techniques using random speckled illuminations (as in RIM) and those obtained using random fluorophore activation (as in Super-resolution Optical Fluctuation Imaging, SOFI).
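The quantity being characterized can be written out explicitly (our notation, assuming statistically homogeneous speckle): for fluorescence density $\rho$, speckle intensity $S$, and detection point spread function $h$, each recorded image is $M(\mathbf r) = \int h(\mathbf r-\mathbf a)\,\rho(\mathbf a)\,S(\mathbf a)\,d\mathbf a$, and over speckle realizations
\[
\mathrm{Var}[M](\mathbf r) = \iint h(\mathbf r-\mathbf a)\,h(\mathbf r-\mathbf b)\,\rho(\mathbf a)\,\rho(\mathbf b)\,\Gamma_S(\mathbf a-\mathbf b)\,d\mathbf a\,d\mathbf b,
\]
where $\Gamma_S$ is the speckle covariance; the interplay between the supports of $\Gamma_S$ and $h$ in Fourier space is what sets the attainable resolution gain.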
We provide numerical bounds on the Crouzeix ratio for KLS matrices $A$ which have a line segment on the boundary of the numerical range. The Crouzeix ratio is the supremum over all polynomials $p$ of the spectral norm of $p(A)$ divided by the maximum absolute value of $p$ on the numerical range of $A$. Our bounds confirm the conjecture that this ratio is less than or equal to $2$. We also give a precise description of these numerical ranges.
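In symbols (the shorthand $\psi$ is ours, not the paper's): with the numerical range $W(A) = \{x^* A x : \|x\|_2 = 1\}$, the Crouzeix ratio is
\[
\psi(A) \;=\; \sup_{p}\; \frac{\|p(A)\|_2}{\max_{z \in W(A)} |p(z)|},
\]
the supremum running over all polynomials $p$; Crouzeix's conjecture asserts $\psi(A) \le 2$ for every square matrix $A$.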
We propose and compare methods for the analysis of extreme events in complex systems governed by PDEs that involve random parameters, in situations where we are interested in quantifying the probability that a scalar function of the system's solution is above a threshold. If the threshold is large, this probability is small and its accurate estimation is challenging. To tackle this difficulty, we blend theoretical results from large deviation theory (LDT) with numerical tools from PDE-constrained optimization. Our methods first compute parameters that minimize the LDT-rate function over the set of parameters leading to extreme events, using adjoint methods to compute the gradient of this rate function. The minimizers give information about the mechanism of the extreme events as well as estimates of their probability. We then propose a series of methods to refine these estimates, either via importance sampling or geometric approximation of the extreme event sets. Results are formulated for general parameter distributions and detailed expressions are provided for Gaussian distributions. We give theoretical and numerical arguments showing that the performance of our methods is insensitive to the extremeness of the events we are interested in. We illustrate the application of our approach to quantify the probability of extreme tsunami events on shore. Tsunamis are typically caused by a sudden, unpredictable change of the ocean floor elevation during an earthquake. We model this change as a random process, which takes into account the underlying physics. We use the one-dimensional shallow water equation to model tsunamis numerically. In the context of this example, we present a comparison of our methods for extreme event probability estimation, and find which type of ocean floor elevation change leads to the largest tsunamis on shore.
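The LDT step admits a compact statement (our notation; the Gaussian case is the one the abstract specializes to): writing $f(u(\theta))$ for the scalar quantity of interest, $\Omega(z) = \{\theta : f(u(\theta)) \ge z\}$ for the extreme-event set, and $I$ for the rate function,
\[
P\big(f(u(\theta)) \ge z\big) \;\approx\; \exp\Big(-\min_{\theta \in \Omega(z)} I(\theta)\Big),
\qquad I(\theta) = \tfrac12\,\theta^\top C^{-1}\theta \ \text{ for } \theta \sim \mathcal N(0, C),
\]
so the adjoint-based, PDE-constrained optimization computes the minimizer $\theta^\star$, which both identifies the most likely extreme-event mechanism and anchors the subsequent importance-sampling and geometric refinements of the probability estimate.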