Here we present two new schemes for quantum key distribution (QKD) which require neither entanglement nor an ideal single-photon source. Thus, the proposed protocols can be implemented using realistic single-photon sources that are commercially available. The schemes are shown to be secure against multiple attacks (e.g., the intercept-resend attack and a class of collective attacks). Bounds on the key rate are obtained, and it is shown that by applying a certain type of classical pre-processing, the tolerable error limit can be increased. A trade-off between the quantum resources used and the information revealed to Eve is observed, and it is shown that by using slightly more quantum resources it is possible to design protocols with higher efficiency than a protocol of the same family that uses a relatively smaller amount of quantum resources. Specifically, in our case, the SARG04 protocol belongs to the same family, and it is clearly shown that the proposed protocols can provide higher efficiency than SARG04 at the cost of consuming more quantum resources. Further, it is shown that the critical distances for the proposed protocols under photon number splitting (PNS) type attacks are higher than the critical distances obtained for the BB84 and SARG04 protocols implemented under similar conditions.
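The abstract does not spell out the new protocols, but the intercept-resend attack it refers to has a well-known quantitative signature on the baseline BB84 protocol against which the schemes are compared. The following Monte Carlo sketch (the protocol choice and all names are ours, for illustration only) reproduces the textbook result that a full intercept-resend attack on BB84 induces a quantum bit error rate of about 25% in the sifted key.

```python
import random

def bb84_intercept_resend(n_bits=100_000, seed=0):
    """Monte Carlo estimate of the QBER that a full intercept-resend
    attack induces on a BB84-style prepare-and-measure protocol."""
    rng = random.Random(seed)
    errors = sifted = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)          # Alice's raw bit
        a_basis = rng.randint(0, 1)      # 0 = rectilinear, 1 = diagonal
        # Eve measures in a random basis and resends her outcome.
        e_basis = rng.randint(0, 1)
        e_bit = bit if e_basis == a_basis else rng.randint(0, 1)
        # Bob measures in a random basis; only matching bases are sifted.
        b_basis = rng.randint(0, 1)
        if b_basis != a_basis:
            continue
        b_bit = e_bit if b_basis == e_basis else rng.randint(0, 1)
        sifted += 1
        errors += (b_bit != bit)
    return errors / sifted

print(f"estimated QBER under intercept-resend: {bb84_intercept_resend():.3f}")
# ~0.25: Eve guesses the wrong basis half the time, and each wrong guess
# randomizes Bob's sifted outcome with probability 1/2.
```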
We extend our formulation of Merge and Minimalism in terms of Hopf algebras to an algebraic model of a syntactic-semantic interface. We show that methods adopted in the formulation of renormalization (the extraction of meaningful physical values) in theoretical physics are relevant to describing the extraction of meaning from syntactic expressions. We show how this formulation relates to computational models of semantics, and we address some recent controversies about the implications for generative linguistics of the current functioning of large language models.
Nowadays, deep-learning image coding solutions show compression efficiency similar to or better than that of conventional solutions based on hand-crafted transforms and spatial prediction techniques. These deep-learning codecs require a large training set of images and a training methodology to obtain a suitable model (set of parameters) for efficient compression. The training is performed with an optimization algorithm that minimizes the loss function. Therefore, the loss function plays a key role in the overall performance; it includes a differentiable quality metric that attempts to mimic human perception. The main objective of this paper is to study the perceptual impact of several image quality metrics that can be used in the loss function of the training process, through a crowdsourcing subjective image quality assessment study. From this study, it is possible to conclude that the choice of the quality metric is critical for the perceptual performance of the deep-learning codec and that this performance can vary depending on the image content.
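As a minimal illustration of where such a quality metric enters, the sketch below shows a generic rate-distortion training loss for a learned codec, with the distortion term as a pluggable differentiable metric. The function and parameter names are ours, MSE is only a placeholder for the perceptual metrics compared in the paper, and the trade-off weight is arbitrary.

```python
import torch.nn.functional as F

def rate_distortion_loss(x, x_hat, bits, quality_metric=F.mse_loss, lam=0.01):
    """Generic rate-distortion loss L = R + lambda * D used to train
    learned image codecs.  `quality_metric` is the pluggable,
    differentiable distortion term studied in the paper; MSE here is a
    placeholder, and perceptually oriented choices (e.g. 1 - MS-SSIM)
    slot in unchanged as long as they are differentiable.

    x, x_hat : original and reconstructed image batches (N, C, H, W)
    bits     : estimated rate from the codec's entropy model, in bits
    lam      : rate-distortion trade-off weight (arbitrary value here)
    """
    num_pixels = x.shape[0] * x.shape[2] * x.shape[3]
    rate = bits / num_pixels               # bits per pixel
    distortion = quality_metric(x_hat, x)  # smaller = better quality
    return rate + lam * distortion
```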
We consider the general problem of Bayesian binary regression and introduce a new class of distributions, the Perturbed Unified Skew Normal (henceforth pSUN), which generalizes the Unified Skew-Normal (SUN) class. We show that the new class is conjugate to any binary regression model, provided that the link function can be expressed as a scale mixture of Gaussian densities. We discuss in detail the popular logit case, and we show that, when a logistic regression model is combined with a Gaussian prior, posterior summaries such as cumulants and normalizing constants can easily be obtained through an importance sampling approach, opening the way to straightforward variable selection procedures. For more general priors, the proposed methodology is based on a simple Gibbs sampler algorithm. We also claim that, in the p > n case, the proposed methodology performs better, in terms of both mixing and accuracy, than existing methods. We illustrate the performance through several simulation studies and two data analyses.
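A minimal sketch of the importance sampling idea for the logit case, using the Gaussian prior itself as the proposal: the average likelihood weight estimates the normalizing constant (marginal likelihood), and weighted averages give posterior summaries. This is a generic illustration with names and settings of our choosing, not the paper's pSUN-based estimator.

```python
import numpy as np

def logistic_posterior_is(X, y, prior_sd=2.0, n_draws=200_000, seed=0):
    """Importance sampling for Bayesian logistic regression with a
    N(0, prior_sd^2 I) prior: draw beta from the prior, weight by the
    likelihood.  The mean weight estimates the normalizing constant;
    weighted averages give posterior summaries."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = rng.normal(0.0, prior_sd, size=(n_draws, p))  # prior draws
    eta = beta @ X.T                                     # (n_draws, n)
    # logistic log-likelihood: sum_i [ y_i*eta_i - log(1 + exp(eta_i)) ]
    loglik = (y * eta - np.logaddexp(0.0, eta)).sum(axis=1)
    w = np.exp(loglik - loglik.max())                    # stabilized weights
    Z = w.mean() * np.exp(loglik.max())                  # normalizing constant
    post_mean = (w[:, None] * beta).sum(axis=0) / w.sum()
    return Z, post_mean

# tiny demo on synthetic data
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = (X @ np.array([1.0, -1.0]) + rng.logistic(size=50) > 0).astype(float)
Z, m = logistic_posterior_is(X, y)
print("marginal likelihood:", Z, "posterior mean:", m)
```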
We consider an unknown multivariate function representing a system, such as a complex numerical simulator, that takes both deterministic and uncertain inputs. Our objective is to estimate the set of deterministic inputs leading to outputs whose probability (with respect to the distribution of the uncertain inputs) of belonging to a given set is less than a given threshold. This problem, which we call Quantile Set Inversion (QSI), occurs for instance in the context of robust (reliability-based) optimization, when looking for the set of solutions that satisfy the constraints with sufficiently high probability. To solve the QSI problem, we propose a Bayesian strategy based on Gaussian process modeling and the Stepwise Uncertainty Reduction (SUR) principle, which sequentially chooses the points at which the function should be evaluated so as to efficiently approximate the set of interest. We illustrate the performance and practical interest of the proposed SUR strategy through several numerical experiments.
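To make the sequential logic concrete, here is a toy sketch of a QSI loop with a one-dimensional deterministic input and a Gaussian uncertain input. A simple misclassification-type acquisition stands in for the paper's actual SUR criterion, and the simulator, thresholds, and settings are all illustrative assumptions of ours.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy QSI problem: deterministic input x in [0, 1], uncertain input u ~ N(0, 1).
# Target: the set {x : P(f(x, U) > T) <= p_max}, using few evaluations of f.
def f(x, u):                      # stand-in for an expensive simulator
    return np.sin(6 * x) + 0.5 * u

T, p_max = 0.8, 0.2
rng = np.random.default_rng(0)

XU = rng.uniform([0, -3], [1, 3], size=(10, 2))   # initial joint design
z = f(XU[:, 0], XU[:, 1])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(30):
    gp.fit(XU, z)
    # A misclassification-type criterion stands in for the SUR criterion:
    # evaluate where the excursion of f above T is most uncertain.
    cand = rng.uniform([0, -3], [1, 3], size=(500, 2))
    mu, sd = gp.predict(cand, return_std=True)
    p_exc = norm.cdf((mu - T) / np.maximum(sd, 1e-9))
    xu_new = cand[np.argmax(p_exc * (1 - p_exc))]
    XU = np.vstack([XU, xu_new])
    z = np.append(z, f(*xu_new))

# Plug-in estimate of alpha(x) = P(f(x, U) > T) by Monte Carlo over u.
xs, us = np.linspace(0, 1, 101), rng.normal(size=200)
grid = np.array([[x, u] for x in xs for u in us])
alpha = (gp.predict(grid).reshape(len(xs), len(us)) > T).mean(axis=1)
print("estimated QSI set:", xs[alpha <= p_max])
```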
In this paper, we investigate the functional central limit theorem and the Marcinkiewicz strong law of large numbers for $U$-statistics of absolutely regular data taking values in a separable Hilbert space. The novelty of our approach consists in using coupling to derive a deviation inequality for the original $U$-statistic, where the upper bound involves the mixing coefficients and the tails of several $U$-statistics of i.i.d. data. The presented results improve on known results in several directions: metric-space-valued data are considered as well as Hilbert-space-valued data, and the mixing rates are less restrictive over a wide range of parameters.
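For reference, the objects involved are the standard ones: for a kernel $h$ of order $m$ and a stationary sequence $(X_i)$,
\[
U_n = \binom{n}{m}^{-1} \sum_{1 \le i_1 < \cdots < i_m \le n} h\bigl(X_{i_1},\dots,X_{i_m}\bigr),
\]
and absolute regularity ($\beta$-mixing) is quantified by
\[
\beta(k) = \tfrac{1}{2}\, \sup \sum_{i,j} \bigl| \mathbb{P}(A_i \cap B_j) - \mathbb{P}(A_i)\,\mathbb{P}(B_j) \bigr|,
\]
the supremum being taken over finite partitions $\{A_i\}$ and $\{B_j\}$ measurable with respect to $\sigma(X_s,\, s \le 0)$ and $\sigma(X_s,\, s \ge k)$, respectively.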
Throughout the life sciences we routinely seek to interpret measurements and observations using parameterised mechanistic mathematical models. A fundamental and often overlooked choice in this approach involves relating the solution of a mathematical model to noisy and incomplete measurement data. This is often achieved by assuming that the data are noisy measurements of the solution of a deterministic mathematical model and that the measurement errors are additive and normally distributed. While this assumption of additive Gaussian noise is extremely common and simple to implement and interpret, it is often unjustified and can lead to poor parameter estimates and non-physical predictions. One way to overcome this challenge is to implement a different measurement error model. In this review, we demonstrate how to implement a range of measurement error models in a likelihood-based framework for estimation, identifiability analysis, and prediction, called Profile-Wise Analysis. This frequentist approach to uncertainty quantification for mechanistic models leverages the profile likelihood for targeting parameters and understanding their influence on predictions. Case studies, motivated by simple caricature models routinely used in the systems biology and mathematical biology literature, illustrate how the same ideas apply to different types of mathematical models. Open-source Julia code to reproduce our results is available on GitHub.
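The paper's open-source code is in Julia; as a language-agnostic illustration of the profile likelihood machinery at the heart of Profile-Wise Analysis, the Python sketch below profiles one parameter of a toy exponential-decay model with additive Gaussian noise (the model, data, and settings are ours, not the paper's case studies). Nuisance parameters are re-optimized at each fixed value of the target parameter, and the familiar chi-square calibration gives an approximate 95% confidence interval.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data from y = a*exp(-b*t) + Gaussian noise; we profile b.
rng = np.random.default_rng(0)
t = np.linspace(0, 5, 40)
y = 10.0 * np.exp(-0.8 * t) + rng.normal(0, 0.5, t.size)

def negloglik(params):             # Gaussian log-likelihood, up to a constant
    a, b, log_s = params           # log_s keeps the noise scale positive
    resid = y - a * np.exp(-b * t)
    return 0.5 * np.sum(resid**2) / np.exp(2 * log_s) + t.size * log_s

mle = minimize(negloglik, x0=[5.0, 1.0, 0.0], method="Nelder-Mead")

# Profile log-likelihood for b: fix b, re-optimize the nuisances (a, sigma).
bs = np.linspace(0.5, 1.2, 50)
profile = np.array([
    -minimize(lambda q: negloglik([q[0], b, q[1]]),
              x0=[mle.x[0], mle.x[2]], method="Nelder-Mead").fun
    for b in bs
]) + mle.fun                       # normalize so the maximum is ~0

# chi-square(1) calibration: points with profile > -1.92 form a ~95% CI.
inside = bs[profile > -1.921]
print(f"b_hat = {mle.x[1]:.3f}, 95% CI ~ [{inside.min():.3f}, {inside.max():.3f}]")
```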
Causal representation learning algorithms discover lower-dimensional representations of data that admit a decipherable interpretation of cause and effect; as achieving such interpretable representations is challenging, many causal learning algorithms rely on elements that encode prior information, such as (linear) structural causal models, interventional data, or weak supervision. Unfortunately, in exploratory causal representation learning, such elements and prior information may not be available or warranted. Alternatively, scientific datasets often have multiple modalities or physics-based constraints, and the use of such scientific, multimodal data has been shown to improve disentanglement in fully unsupervised settings. Consequently, we introduce a causal representation learning algorithm (causalPIMA) that can use multimodal data and known physics to discover important features with causal relationships. Our algorithm uses a new differentiable parametrization to learn a directed acyclic graph (DAG) together with the latent space of a variational autoencoder in an end-to-end differentiable framework via a single, tractable evidence lower bound loss function. We place a Gaussian mixture prior on the latent space and identify each of the mixture components with an outcome of the DAG nodes; this novel identification enables feature discovery with causal relationships. Tested on a synthetic dataset and a scientific dataset, our results demonstrate the capability of learning an interpretable causal structure while simultaneously discovering key features in a fully unsupervised setting.
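The differentiable DAG parametrization above is new to the paper; to illustrate how a DAG constraint can be made differentiable at all, here is the widely used NOTEARS acyclicity function of Zheng et al. (2018), which is zero exactly when the weighted adjacency matrix encodes a DAG. This is a stand-in for, not a description of, causalPIMA's parametrization.

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """NOTEARS acyclicity measure h(W) = tr(exp(W * W)) - d, where W * W
    is the elementwise (Hadamard) square of the weighted adjacency
    matrix.  h(W) = 0 if and only if the weighted graph is a DAG, and
    h is differentiable, so it can be driven to zero inside a single
    end-to-end loss such as an ELBO."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

cycle = np.array([[0.0, 1.0], [1.0, 0.0]])   # 2-cycle: not a DAG
chain = np.array([[0.0, 1.0], [0.0, 0.0]])   # 1 -> 2: a DAG
print(acyclicity(cycle))   # > 0 (about 1.086)
print(acyclicity(chain))   # == 0
```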
Weights are geometrical degrees of freedom that allow one to generalise Lagrangian finite elements. They are defined through integrals over specific supports, are well understood in terms of differential forms and integration, and lie within the framework of finite element exterior calculus. In this work we exploit this formalism with the aim of identifying supports that are appealing for finite element approximation. To do so, we study the related parametric matrix-sequences, with the matrix order tending to infinity as the mesh size tends to zero. We describe the conditioning and the global spectral behavior in terms of the standard Toeplitz machinery and GLT theory, leading to the identification of optimal choices of weights. Moreover, we propose and test ad hoc preconditioners, depending on the discretization parameters, in connection with the conjugate gradient method. The model problem we consider is a one-dimensional Laplacian, both with constant and non-constant coefficients. Numerical visualizations and experimental tests are reported and critically discussed, demonstrating the advantages of weight-induced bases over standard Lagrangian ones. Open problems and future steps are listed in the concluding section, especially regarding the multidimensional case.
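For the standard Lagrangian baseline that the weight-induced bases are measured against, the spectral picture is classical and easy to reproduce: the P1 discretization of the one-dimensional constant-coefficient Laplacian yields the tridiagonal Toeplitz matrix with symbol $2 - 2\cos\theta$, whose conditioning grows quadratically with the matrix order. The check below (our own illustration, not the paper's code) confirms the closed-form eigenvalues.

```python
import numpy as np

# T_n = tridiag(-1, 2, -1) has eigenvalues
#   lambda_k = 2 - 2*cos(k*pi/(n+1)) = 4*sin^2(k*pi/(2(n+1))),  k = 1..n,
# i.e. samples of the symbol f(theta) = 2 - 2*cos(theta).  The condition
# number grows like n^2, which motivates preconditioning for CG.
for n in (50, 100, 200, 400):
    T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    k = np.arange(1, n + 1)
    lam = 4 * np.sin(k * np.pi / (2 * (n + 1))) ** 2   # ascending in k
    assert np.allclose(np.sort(np.linalg.eigvalsh(T)), lam)
    print(f"n={n:4d}  cond(T_n) ~ {lam.max() / lam.min():.1f}")
```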
The optimization of open-loop shallow geothermal systems, covering both design and operational aspects, is an important research area aimed at improving their efficiency and sustainability and at the effective management of groundwater as a shallow geothermal resource. This paper investigates various approaches to the optimization problems arising from these research and implementation questions about groundwater heat pump (GWHP) systems. The identified optimization approaches are thoroughly analyzed based on criteria such as computational cost and applicability. Moreover, a novel classification scheme is introduced that categorizes the approaches according to the type of groundwater simulation model and the type of optimization algorithm used. Simulation models are divided into two types, numerical and simplified (analytical or data-driven) models, while optimization algorithms are divided into gradient-based and derivative-free algorithms. Finally, a comprehensive review of existing approaches in the literature is provided, highlighting their strengths and limitations and offering recommendations both for the use of existing approaches and for the development of new, improved ones in this field.
When modeling a vector of risk variables, extreme scenarios are often of special interest. The peaks-over-thresholds method hinges on the notion that, asymptotically, the excesses over a vector of high thresholds follow a multivariate generalized Pareto distribution. However, the existing literature has primarily concentrated on the setting in which all risk variables are always large simultaneously. In reality, this assumption is often not met, especially in high dimensions. In response to this limitation, we study scenarios where distinct groups of risk variables may exhibit joint extremes while others do not. These discernible groups are derived from the angular measure inherent in the corresponding max-stable distribution, whence the term extreme direction. We explore such extreme directions within the framework of multivariate generalized Pareto distributions, with a focus on their probability density functions with respect to an appropriate dominating measure. Furthermore, we provide a stochastic construction that allows any prespecified set of risk groups to constitute the distribution's extreme directions. This construction takes the form of a smoothed max-linear model and accommodates the full spectrum of conceivable max-stable dependence structures. Additionally, we introduce a generic simulation algorithm tailored to multivariate generalized Pareto distributions, offering specific implementations for extensions of the logistic and H\"usler-Reiss families capable of carrying arbitrary extreme directions.
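The paper's simulation algorithm is not reproduced in the abstract, but a well-known generic construction gives the flavor: in standard form, a multivariate generalized Pareto vector can be represented as $X = E + T - \max_j T_j$, with $E$ a unit exponential independent of a generator vector $T$ (Rootz\'en, Segers and Wadsworth, 2018). The sketch below uses a Gaussian generator, giving H\"usler-Reiss-type dependence; unlike the paper's algorithm, it does not control which groups of coordinates form extreme directions.

```python
import numpy as np

def sample_mgp(n, mu, Sigma, seed=0):
    """Generic sampler for a standard-form multivariate generalized
    Pareto distribution via the representation X = E + T - max_j T_j,
    with E ~ Exp(1) independent of the generator vector T.  A Gaussian
    T yields Husler-Reiss-type dependence; other choices of T yield
    other max-stable dependence structures."""
    rng = np.random.default_rng(seed)
    T = rng.multivariate_normal(mu, Sigma, size=n)
    E = rng.exponential(size=(n, 1))
    return E + T - T.max(axis=1, keepdims=True)

X = sample_mgp(5, mu=np.zeros(3), Sigma=np.eye(3))
print(X)
# By construction max_j X_j = E > 0: every sampled vector has at least
# one component exceeding the (standardized) threshold 0.
print((X.max(axis=1) > 0).all())
```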