We explore a simple approach to quantum logic based on hybrid and dynamic modal logic, where the set of states is given by some Hilbert space. In this setting, a notion of quantum clause is proposed in much the same way that the notion of Horn clause is introduced in first-order logic, that is, to capture logical properties suitable for logic programming and formal specification. We propose proof rules for reasoning about quantum clauses and investigate the soundness and compactness properties of this proof calculus. We then prove a Birkhoff completeness result for the fragment of hybrid-dynamic quantum logic determined by quantum clauses.
Machine learning techniques have recently been of great interest for solving differential equations. Training these models is classically a data-fitting task, but knowledge of the expression of the differential equation can be used to supplement the training objective, leading to the development of physics-informed scientific machine learning. In this article, we focus on one class of models called nonlinear vector autoregression (NVAR) to solve ordinary differential equations (ODEs). Motivated by connections to numerical integration and physics-informed neural networks, we explicitly derive the physics-informed NVAR (piNVAR), which enforces the right-hand side of the underlying differential equation regardless of the NVAR construction. Because NVAR and piNVAR completely share their learned parameters, we propose an augmented procedure to jointly train the two models. Then, using both data-driven and ODE-driven metrics, we evaluate the ability of the piNVAR model to predict solutions to various ODE systems, such as the undamped spring, a nonlinear Lotka-Volterra predator-prey model, and the chaotic Lorenz system.
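To make the idea concrete, the following is a minimal Python sketch of an NVAR-style model with an added physics-informed target; it assumes a quadratic feature map, a forward-Euler reading of "enforcing the right-hand side", and illustrative names (features, train_pinvar, lam_phys, lam_ridge) that are not taken from the paper, whose actual piNVAR construction and joint training procedure may differ.

import numpy as np

def features(x, x_lag):
    """NVAR-style feature vector: constant, linear (current and one lagged
    state), and quadratic monomials of the linear part."""
    lin = np.concatenate([x, x_lag])
    quad = np.outer(lin, lin)[np.triu_indices(lin.size)]
    return np.concatenate([[1.0], lin, quad])

def train_pinvar(X, f, dt, lam_phys=1.0, lam_ridge=1e-6):
    """Fit a linear readout W so that features(x_k, x_{k-1}) @ W approximates
    the next state x_{k+1}, while the implied increment stays close to a
    forward-Euler step of the ODE right-hand side f(x_k)."""
    Phi, Y_data, Y_phys = [], [], []
    for k in range(1, X.shape[0] - 1):
        Phi.append(features(X[k], X[k - 1]))
        Y_data.append(X[k + 1])             # data-driven target
        Y_phys.append(X[k] + dt * f(X[k]))  # physics-informed target
    Phi = np.asarray(Phi)
    A = np.vstack([Phi, np.sqrt(lam_phys) * Phi])
    B = np.vstack([np.asarray(Y_data), np.sqrt(lam_phys) * np.asarray(Y_phys)])
    # Ridge regression: W appears linearly in both objectives, so one solve suffices.
    W = np.linalg.solve(A.T @ A + lam_ridge * np.eye(A.shape[1]), A.T @ B)
    return W  # one-step prediction: x_{k+1} is approximated by features(x_k, x_{k-1}) @ W

Prediction then proceeds autoregressively by feeding each predicted state back into the feature map.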
We consider the following problem in computational geometry: given, in d-dimensional real space, a set of points marked as positive and a set of points marked as negative, such that the convex hull of the positive set does not intersect the negative set, find K hyperplanes that separate, if possible, all the positive points from the negative ones. That is, we search for a convex polyhedron with at most K faces, containing all the positive points and no negative point. The problem is known in the literature on pure convex polyhedral approximation; our interest stems from its possible applications in constraint learning, where points are feasible or infeasible solutions of a Mixed Integer Program, and the K hyperplanes are linear constraints to be found. We cast the problem as an optimization one, minimizing the number of negative points inside the convex polyhedron whenever exact separation cannot be achieved. We introduce models inspired by support vector machines and design two mathematical programming formulations with binary variables. We exploit Dantzig-Wolfe decomposition to obtain extended formulations, and we devise column generation algorithms with ad hoc pricing routines. We compare the computing times and separation errors obtained by all our approaches on synthetic datasets, with numbers of points ranging from hundreds to a few thousand, showing that our approaches perform better than existing ones from the literature. Furthermore, we observe that key computational differences arise depending on whether the budget K is sufficient to completely separate the positive points from the negative ones. On instances of dimension 8 and above, existing convex hull algorithms become computationally inapplicable, while our algorithms identify good convex hull approximations within minutes of computation.
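For illustration, one compact mixed-integer formulation of the soft separation problem can be written in generic big-M form (a sketch only; the SVM-inspired formulations and the Dantzig-Wolfe reformulations studied here differ in their details):
\[
\begin{aligned}
\min_{w,b,y,z}\ \ & \sum_{j} z_j \\
\text{s.t.}\ \ & w_k^{\top} p_i \le b_k && \forall\, i,\ k=1,\dots,K, \\
& w_k^{\top} q_j \ge b_k + 1 - M\,(1-y_{jk}) && \forall\, j,\ k, \\
& \sum_{k=1}^{K} y_{jk} \ge 1 - z_j && \forall\, j, \\
& y_{jk},\, z_j \in \{0,1\}, \qquad \|w_k\|_\infty \le 1,
\end{aligned}
\]
where $p_i$ are the positive points, $q_j$ the negative points, $M$ a sufficiently large constant, $y_{jk}=1$ indicates that hyperplane $k$ cuts off $q_j$, and $z_j=1$ that $q_j$ is counted as a separation error; the objective thus minimizes the number of negative points left inside the polyhedron $\{x:\, w_k^{\top}x \le b_k,\ k=1,\dots,K\}$.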
We propose a new method to construct stationary processes and random fields with a given decreasing covariance function and an arbitrary one-dimensional marginal distribution. The result is a new class of stationary processes and random fields. The construction utilizes a correlated binary sequence, and it offers a simple and practical way to model dependence structures in stationary processes and random fields, since the dependence is induced by the correlation structure of a few disjoint sets in the support of the marginal distribution. Simulation results for the proposed models are provided, illustrating the empirical behavior of sample paths.
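As a rough illustration of the flavor of such a construction (a sketch under our own simplifying assumptions, not necessarily the construction proposed here): partition the support of the target marginal into a set of probability $p$ and its complement, drive the choice between the two pieces with a stationary correlated binary Markov chain, and draw each value from the marginal restricted to the selected piece. Every draw then has the prescribed marginal, while serial dependence is inherited from the binary chain. The function names and the Exp(1) example marginal below are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def stationary_binary_chain(n, p, rho):
    """Stationary two-state Markov chain with P(B_t = 1) = p and lag-one
    autocorrelation rho (assumed to lie in a feasible range)."""
    p11 = p + rho * (1 - p)   # P(B_{t+1}=1 | B_t=1)
    p01 = p * (1 - rho)       # P(B_{t+1}=1 | B_t=0)
    b = np.empty(n, dtype=int)
    b[0] = rng.random() < p
    for t in range(1, n):
        b[t] = rng.random() < (p11 if b[t - 1] == 1 else p01)
    return b

def sample_with_marginal(b, p, quantile):
    """Inverse-CDF sampling from the prescribed marginal, restricted to
    {F(x) <= p} when b_t = 1 and to its complement when b_t = 0."""
    u = np.where(b == 1,
                 rng.uniform(0.0, p, size=b.size),
                 rng.uniform(p, 1.0, size=b.size))
    return quantile(u)

n, p, rho = 10_000, 0.3, 0.8
b = stationary_binary_chain(n, p, rho)
x = sample_with_marginal(b, p, quantile=lambda u: -np.log1p(-u))  # Exp(1) marginal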
Sharpness is an almost generic assumption in continuous optimization that bounds the distance from minimizers by objective function suboptimality. It facilitates the acceleration of first-order methods through restarts. However, sharpness involves problem-specific constants that are typically unknown, and restart schemes that do not know these constants typically reduce convergence rates. Moreover, these schemes are challenging to apply in the presence of noise or with approximate model classes (e.g., in compressive imaging or learning problems), and they generally assume that the first-order method used produces feasible iterates. We consider the assumption of approximate sharpness, a generalization of sharpness that incorporates an unknown constant perturbation to the objective function error. This constant offers greater robustness (e.g., with respect to noise or relaxation of model classes) for finding approximate minimizers. By employing a new type of search over the unknown constants, we design a restart scheme that applies to general first-order methods and does not require the first-order method to produce feasible iterates. Our scheme maintains the same convergence rate as when the constants are known. The convergence rates we achieve for various first-order methods either match the optimal rates or improve on previously established rates for a wide range of problems. We showcase our restart scheme in several examples and highlight potential future applications and developments of our framework and theory.
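Schematically, in one common notation (our own shorthand; constants and exponent conventions vary across the literature), sharpness and its approximate version take the form
\[
\mu\, d\bigl(x,\widehat X\bigr)^{\beta} \;\le\; f(x)-\widehat f
\qquad\text{versus}\qquad
\mu\, d\bigl(x,\widehat X\bigr)^{\beta} \;\le\; f(x)-\widehat f+\varepsilon,
\]
where $\widehat X$ is the set of minimizers, $\widehat f$ the optimal value, $d$ the distance to $\widehat X$, and $\mu>0$, $\beta\ge 1$, $\varepsilon\ge 0$ are the (typically unknown) constants over which the restart scheme searches.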
Deep learning still struggles with certain kinds of scientific data. Notably, pretraining data may not provide coverage of relevant distribution shifts (e.g., shifts induced by the use of different measurement instruments). We consider deep learning models trained to classify the synthesis conditions of uranium ore concentrates (UOCs) and show that model editing is particularly effective for improving generalization to distribution shifts common in this domain. In particular, model editing outperforms finetuning on two curated datasets comprising micrographs of U$_{3}$O$_{8}$ aged in humidity chambers and micrographs acquired with different scanning electron microscopes, respectively.
We propose and study a one-dimensional model which consists of two cross-diffusion systems coupled via a moving interface. The motivation stems from the modelling of complex diffusion processes in the context of the vapor deposition of thin films. In our model, cross-diffusion of the various chemical species is modelled by a size-exclusion system in the solid phase and by the Stefan-Maxwell system in the gaseous phase. The coupling between the two phases is modelled by linear phase transition laws of Butler-Volmer type, resulting in an interface evolution. The properties of the continuous model are investigated, in particular its entropy variational structure and its stationary states. We introduce a two-point flux approximation finite volume scheme. The moving interface is addressed with a moving-mesh approach, where the mesh is locally deformed around the interface. The resulting discrete nonlinear system is shown to admit a solution that preserves the main properties of the continuous system, namely mass conservation, nonnegativity, volume-filling constraints, decay of the free energy, and asymptotics. In particular, the moving-mesh approach is compatible with the entropy structure of the continuous model. Numerical results illustrate these properties and the dynamics of the model.
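For orientation, a standard two-point flux approximation for a single scalar diffusion equation $\partial_t u = \partial_x\,(D\,\partial_x u)$ on a one-dimensional mesh reads (a generic sketch; the scheme proposed here treats the coupled cross-diffusion systems and locally deforms the mesh around the interface)
\[
|K|\,\frac{u_K^{n+1}-u_K^{n}}{\Delta t} \;+\; \sum_{\sigma=K|L} F_{K,\sigma}^{n+1} \;=\; 0,
\qquad
F_{K,\sigma}^{n+1} \;=\; D\,\frac{u_K^{n+1}-u_L^{n+1}}{d_{KL}},
\]
where the sum runs over the faces $\sigma$ shared by the cell $K$ and its neighbours $L$, and $d_{KL}$ is the distance between the cell centres.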
We discuss a connection between a generative model, called the diffusion model, and nonequilibrium thermodynamics for the Fokker-Planck equation, called stochastic thermodynamics. Based on the techniques of stochastic thermodynamics, we derive the speed-accuracy trade-off for diffusion models, which is a trade-off relationship between the speed and accuracy of data generation in diffusion models. Our result implies that the entropy production rate in the forward process affects the errors in data generation. From a stochastic thermodynamic perspective, our results provide quantitative insight into how best to generate data in diffusion models. The optimal learning protocol is given by the conservative force in stochastic thermodynamics and by the geodesic of the space of probability distributions with respect to the 2-Wasserstein distance in optimal transport theory. We numerically illustrate the validity of the speed-accuracy trade-off for diffusion models with different noise schedules, such as the cosine schedule, conditional optimal transport, and optimal transport.
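In standard stochastic-thermodynamic notation for a one-dimensional forward process with drift $a(x,t)$ and constant diffusion coefficient $D$ (a generic sketch; the trade-off derived here may be stated in different conventions), the Fokker-Planck equation, the local mean velocity, and the Wasserstein speed limit read
\[
\partial_t p_t(x) = -\partial_x\!\bigl[\nu_t(x)\,p_t(x)\bigr],
\qquad
\nu_t(x) = a(x,t) - D\,\partial_x \ln p_t(x),
\qquad
\int_0^{\tau}\sigma_t\,dt \;\ge\; \frac{W_2(p_0,p_\tau)^2}{D\,\tau},
\]
where $\sigma_t = \tfrac{1}{D}\int \nu_t(x)^2\,p_t(x)\,dx \ge 0$ is the entropy production rate and $W_2$ the 2-Wasserstein distance; equality in the last bound is attained when the evolution follows a constant-speed Wasserstein geodesic, which is the sense in which optimal transport enters.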
The analysis of multivariate functional curves has the potential to yield important scientific discoveries in domains such as healthcare, medicine, economics and social sciences. However, it is common for real-world settings to present longitudinal data that are both irregularly and sparsely observed, which introduces important challenges for the current functional data methodology. A Bayesian hierarchical framework for multivariate functional principal component analysis is proposed, which accommodates the intricacies of such irregular observation settings by flexibly pooling information across subjects and correlated curves. The model represents common latent dynamics via shared functional principal component scores, thereby effectively borrowing strength across curves while circumventing the computationally challenging task of estimating covariance matrices. These scores also provide a parsimonious representation of the major modes of joint variation of the curves and constitute interpretable scalar summaries that can be employed in follow-up analyses. Estimation is carried out using variational inference, which combines efficiency, modularity and approximate posterior density estimation, enabling the joint analysis of large datasets with parameter uncertainty quantification. Detailed simulations assess the effectiveness of the approach in sharing information from sparse and irregularly sampled multivariate curves. The methodology is also exploited to estimate the molecular disease courses of individual patients with SARS-CoV-2 infection and characterise patient heterogeneity in recovery outcomes; this study reveals key coordinated dynamics across the immune, inflammatory and metabolic systems, which are associated with survival and long-COVID symptoms up to one year post disease onset. The approach is implemented in the R package bayesFPCA.
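A generic shared-score representation of this kind (a sketch of the standard multivariate functional principal component decomposition; the hierarchical Bayesian specification here adds further levels and priors) is
\[
Y_i^{(p)}(t) \;=\; \mu^{(p)}(t) \;+\; \sum_{k=1}^{K} \zeta_{ik}\,\phi_k^{(p)}(t) \;+\; \varepsilon_i^{(p)}(t),
\]
where $i$ indexes subjects, $p$ the correlated curves, $\mu^{(p)}$ is the variable-specific mean, $\phi_k^{(p)}$ the $p$-th component of the $k$-th multivariate eigenfunction, and $\varepsilon_i^{(p)}$ measurement noise; the scores $\zeta_{ik}$ are shared across all curves of subject $i$, which is what allows strength to be borrowed across variables and yields the scalar summaries used in follow-up analyses.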
In binary regression, the use of symmetric link functions is not appropriate when there is evidence that the probability of success increases at a different rate than it decreases. In these cases, link functions based on the cumulative distribution function of a skewed and heavy-tailed distribution can be useful. The most popular choices are the scale mixtures of skew-normal distributions. This family of distributions can have identifiability problems caused by the so-called direct parameterization. In addition, binary models with skewed link functions can suffer from a further identifiability problem caused by the joint presence of the intercept and the skewness parameter. To circumvent these issues, in this work we propose link functions based on the scale mixtures of skew-normal distributions under the centered parameterization. Furthermore, we propose to fix the sign of the skewness parameter, which is a new perspective in the literature for dealing with the identifiability problem in skewed link functions. Bayesian inference using MCMC algorithms and residual analysis are developed. Simulation studies are performed to evaluate the performance of the model, and the methodology is applied to a heart disease dataset.
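In formula form, the proposed models take the generic shape (a schematic statement; the precise centered parameterization follows the paper)
\[
P(Y_i = 1 \mid x_i) \;=\; F_{\gamma}\!\bigl(x_i^{\top}\beta\bigr),
\]
where $F_\gamma$ is the cumulative distribution function of a scale mixture of skew-normal distributions under the centered parameterization and the skewness parameter $\gamma$ is restricted to a fixed sign, which removes the confounding between skewness and the intercept.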
Consider a convex function that is invariant under a group of transformations. If it has a minimizer, does it also have an invariant minimizer? Variants of this problem appear in nonparametric statistics and in a number of adjacent fields. The answer depends on the choice of function, and on what one may loosely call the geometry of the problem -- the interplay between convexity, the group, and the underlying vector space, which is typically infinite-dimensional. We observe that this geometry is completely encoded in the smallest closed convex invariant subsets of the space, and we proceed to study these sets, for groups that are amenable but not necessarily compact. We then apply this toolkit to the invariant optimality problem. It yields new results on invariant kernel mean embeddings and risk-optimal invariant couplings, and clarifies relations between seemingly distinct ideas, such as the summation trick used in machine learning to construct equivariant neural networks and the classic Hunt-Stein theorem of statistics.
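For intuition, the classical finite-group case (which the amenable, infinite-dimensional setting studied here generalizes) is settled by the summation trick itself: if $f$ is convex and invariant under a finite group $G$ acting linearly, and $x^{*}$ is a minimizer, then
\[
\bar x \;=\; \frac{1}{|G|}\sum_{g\in G} g\,x^{*}
\qquad\text{satisfies}\qquad
f(\bar x)\;\le\;\frac{1}{|G|}\sum_{g\in G} f\bigl(g\,x^{*}\bigr)\;=\;f(x^{*}),
\]
by convexity and invariance, so $\bar x$ is an invariant minimizer. The difficulty addressed in this work is what replaces this averaging when the group is merely amenable and the underlying space is infinite-dimensional.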