This paper is devoted to the theoretical and numerical investigation of the initial boundary value problem for a system of equations used to describe waves in coastal areas, namely the Boussinesq-Abbott system in the presence of topography. We propose a procedure that allows one to handle very general linear or nonlinear boundary conditions. It consists of reducing the problem to a system of conservation laws with nonlocal fluxes, coupled to an ODE. This reformulation is used to propose two hybrid finite volume/finite difference schemes, of first and second order respectively. The ability to impose many kinds of boundary conditions is then used to investigate numerically their asymptotic stability, an issue of practical relevance in coastal oceanography: asymptotically stable boundary conditions would allow one to reconstruct a wave field from the knowledge of the boundary data alone, even if the initial data is not known.
We consider a geometric programming problem consisting in minimizing a function given by the supremum of finitely many log-Laplace transforms of discrete nonnegative measures on a Euclidean space. Under a coercivity assumption, we show that an $\varepsilon$-minimizer can be computed in a time that is polynomial in the input size and in $|\log\varepsilon|$. This is obtained by establishing bit-size estimates on approximate minimizers and by applying the ellipsoid method. We also derive polynomial iteration-complexity bounds for the interior point method applied to the same class of problems. We deduce that the spectral radius of a partially symmetric, weakly irreducible nonnegative tensor can be approximated within $\varepsilon$ error in poly-time. For strongly irreducible tensors, we also show that the logarithm of the positive eigenvector is poly-time computable. Our results also yield that the maximum of a nonnegative homogeneous $d$-form over the unit ball of the $d$-H\"older norm can be approximated in poly-time. In particular, the spectral radius of uniform weighted hypergraphs and some known upper bounds for the clique number of uniform hypergraphs are poly-time computable.
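The abstract's complexity guarantees come from the ellipsoid and interior-point methods; as a lightweight illustration of the object being approximated, the sketch below estimates the spectral radius of a nonnegative tensor with the classical Ng-Qi-Zhou (NQZ) power-type iteration. This is not the paper's algorithm, converges only under irreducibility-type assumptions, and all names here are our own.

```python
import numpy as np

def tensor_apply(A, x):
    # (A x^{m-1})_i for an order-3 tensor: sum_{j,k} A[i,j,k] x_j x_k
    return np.einsum('ijk,j,k->i', A, x, x)

def nqz_spectral_radius(A, tol=1e-10, max_iter=1000):
    """NQZ power iteration for the spectral radius of a nonnegative
    order-3 tensor (assumed weakly irreducible, so iterates stay positive)."""
    n = A.shape[0]
    x = np.ones(n) / n
    for _ in range(max_iter):
        y = tensor_apply(A, x)
        x_new = y ** 0.5            # componentwise (m-1)-th root, m-1 = 2
        x_new /= x_new.sum()
        z = tensor_apply(A, x_new)
        ratios = z / x_new**2       # Collatz-Wielandt ratios
        lo, hi = ratios.min(), ratios.max()
        if hi - lo < tol:
            break
        x = x_new
    return 0.5 * (lo + hi), x_new
```

For the all-ones $2\times2\times2$ tensor the positive H-eigenvector is uniform and the spectral radius is $n^{m-1}=4$, which the iteration recovers immediately.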
This paper studies the convergence of a spatial semidiscretization of a three-dimensional stochastic Allen-Cahn equation with multiplicative noise. For non-smooth initial values, the regularity of the mild solution is investigated, and an error estimate is derived in the spatial $L^2$-norm. For smooth initial values, two error estimates in general spatial $L^q$-norms are established.
Conformal inference is a fundamental and versatile tool that provides distribution-free guarantees for many machine learning tasks. We consider the transductive setting, where decisions are made on a test sample of $m$ new points, giving rise to $m$ conformal $p$-values. While classical results only concern their marginal distribution, we show that their joint distribution follows a P\'olya urn model, and we establish a concentration inequality for their empirical distribution function. The results hold for arbitrary exchangeable scores, including adaptive ones that can use the covariates of the test and calibration samples at the training stage for increased accuracy. We demonstrate the usefulness of these theoretical results through uniform, in-probability guarantees for two machine learning tasks of current interest: interval prediction for transductive transfer learning and novelty detection based on two-class classification.
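For concreteness, the $m$ conformal $p$-values of a test batch can be computed as below in the standard split-conformal construction (function names are ours, not code from the paper). Under exchangeability each $p_j$ is marginally super-uniform; the abstract concerns their joint law.

```python
import numpy as np

def conformal_pvalues(cal_scores, test_scores):
    """Split-conformal p-values p_j = (1 + #{i : S_i >= S_j^test}) / (n + 1),
    where S_1..S_n are calibration nonconformity scores and larger scores
    indicate greater nonconformity."""
    cal = np.sort(np.asarray(cal_scores, dtype=float))
    n = cal.size
    # #{S_i >= s} = n - (number of calibration scores strictly below s)
    counts = n - np.searchsorted(cal, np.asarray(test_scores, dtype=float),
                                 side="left")
    return (1.0 + counts) / (n + 1.0)
```

For example, with calibration scores $\{1,2,3,4\}$, a test score of $2.5$ yields $p = 3/5$, and a test score larger than all calibration scores yields the minimal value $1/(n+1)$.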
In this paper, we introduce a framework for the discretization of a class of constrained Hamilton-Jacobi equations, a system coupling a Hamilton-Jacobi equation with a Lagrange multiplier determined by the constraint. The equation is nonlocal, and the constraint has bounded variation. We show that, under a set of general hypotheses, the approximation obtained with a monotone finite-difference scheme converges towards the viscosity solution of the constrained Hamilton-Jacobi equation. Constrained Hamilton-Jacobi equations often arise as the long-time and small-mutation asymptotics of population models in quantitative genetics. As an example, we detail the construction of a scheme for the limit of an integral Lotka-Volterra equation. We also construct and analyze an Asymptotic-Preserving (AP) scheme for the model outside of the asymptotic regime. We prove that it is stable along the transition towards the asymptotics. The theoretical analysis of the schemes is illustrated and discussed with numerical simulations. The AP scheme is also used to conjecture the asymptotic behavior of the integral Lotka-Volterra equation when the environment varies in time.
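As a minimal illustration of the monotone finite-difference machinery invoked above (without the nonlocal constraint or the Lagrange multiplier), the sketch below advances the model 1D equation $u_t + |u_x| = 0$ with the Godunov upwind scheme. This toy example is ours, not the paper's scheme.

```python
import numpy as np

def godunov_step(u, dx, dt):
    """One step of a monotone (Godunov upwind) scheme for u_t + |u_x| = 0.

    Monotonicity, which drives convergence to the viscosity solution,
    requires the CFL condition dt <= dx.  Periodic boundaries via roll.
    """
    dm = (u - np.roll(u, 1)) / dx     # backward difference D^- u
    dp = (np.roll(u, -1) - u) / dx    # forward difference  D^+ u
    # Godunov numerical Hamiltonian for H(p) = |p|
    H = np.maximum(np.maximum(dm, 0.0), np.maximum(-dp, 0.0))
    return u - dt * H
```

By the Hopf-Lax formula, the exact viscosity solution starting from the cone $u_0(x) = \max(0, 1-|x|)$ is $u(t,x) = \max(0, 1-|x|-t)$, which the scheme tracks to first order.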
One of the fundamental steps toward understanding a complex system is identifying variation at the scale of the system's components that is most relevant to behavior on a macroscopic scale. Mutual information provides a natural means of linking variation across scales of a system because it is agnostic to the functional relationship between observables. However, characterizing the manner in which information is distributed across a set of observables is computationally challenging and generally infeasible beyond a handful of measurements. Here we propose a practical and general methodology that uses machine learning to decompose the information contained in a set of measurements by jointly optimizing a lossy compression of each measurement. Guided by the distributed information bottleneck as a learning objective, the information decomposition identifies the variation in the measurements of the system state most relevant to specified macroscale behavior. We focus our analysis on two paradigmatic complex systems: a Boolean circuit and an amorphous material undergoing plastic deformation. In both examples, the large entropy of the system state is decomposed, bit by bit, in terms of what is most related to macroscale behavior. The identification of meaningful variation in data, with the full generality brought by information theory, is made practical for studying the connection between micro- and macroscale structure in complex systems.
The acoustic wave equation is a partial differential equation (PDE) that describes the propagation of acoustic waves through a material. In general, the solution to this PDE is nonunique; therefore, initial conditions in the form of Cauchy conditions are imposed to obtain a unique solution. Theoretically, solving the wave equation is equivalent to representing the wavefield in terms of a radiation source that possesses finite energy over space and time. In practice, the source may be represented in terms of pressure, the normal derivative of pressure, or the normal velocity over a surface. The pressure wavefield is then calculated by solving an associated boundary value problem via imposing conditions on the boundary of a chosen solution space. From an analytic point of view, this manuscript aims to review typical approaches for obtaining a unique solution to the acoustic wave equation in terms of either a volumetric radiation source $s$, or a singlet surface source in terms of the normal derivative of pressure $(\partial/\partial \boldsymbol{n})p$ or its equivalent $\rho_0 u^{\boldsymbol{n}}$, with $\rho_0$ the ambient density, where $u^{\boldsymbol{n}} = \boldsymbol{u} \cdot \boldsymbol{n}$ is the normal velocity and $\boldsymbol{n}$ a unit vector outwardly normal to the surface. For some cases, including time-reversal propagation, the surface source is defined as a doublet source in terms of pressure $p$. A numerical approximation of the derived formulae will then be explained. The key step in numerically approximating the derived analytic formulae is the inclusion of the source, and it will be studied carefully in this manuscript. It will be shown that, compared to analytical or ray-based solutions using Green's functions, a numerical approximation of the acoustic wave equation for a doublet source has a limitation regarding how to account for solid angles efficiently.
The use of discretized variables in the development of prediction models is a common practice, in part because the decision-making process is more natural when it is based on rules created from segmented models. Although this practice is perhaps most common in medicine, it extends to any area of knowledge where a predictive model helps in decision-making. Therefore, providing researchers with a useful and valid categorization method is a relevant issue when developing prediction models. In this paper, we propose a new general methodology that can be applied to categorize a predictor variable in any regression model where the response variable belongs to the exponential family of distributions. Furthermore, it can be applied in any multivariate context, allowing one to categorize more than one continuous covariate simultaneously. In addition, a computationally very efficient method is proposed to obtain the optimal number of categories, based on a pseudo-BIC proposal. Several simulation studies have been conducted that show the efficiency of the method with respect to both the location and the number of estimated cut-off points. Finally, the categorization proposal has been applied to a real data set of 543 patients with chronic obstructive pulmonary disease from Galdakao Hospital's five outpatient respiratory clinics, who were followed up for 10 years. We applied the proposed methodology to jointly categorize the continuous variables six-minute walking test and forced expiratory volume in one second in a multiple Poisson generalized additive model for the response variable rate of the number of hospital admissions by years of follow-up. The location and number of cut-off points obtained were clinically validated as being in line with the categorizations used in the literature.
Time-fractional parabolic equations with a Caputo time derivative of order $\alpha\in(0,1)$ are discretized in time using continuous collocation methods. For such discretizations, we give sufficient conditions for the existence and uniqueness of their solutions. Two approaches are explored: the Lax-Milgram Theorem and the eigenfunction expansion. The resulting sufficient conditions, which involve certain $m\times m$ matrices (where $m$ is the order of the collocation scheme), are verified both analytically, for all $m\ge 1$ and all sets of collocation points, and computationally, for all $m\le 20$. The semilinear case is also addressed.
Regression models that incorporate smooth functions of predictor variables to explain the relationships with a response variable have gained widespread usage and proved successful in various applications: they can capture complex relationships between the response and predictors while still allowing for interpretation of the results. When the relationships between a response variable and predictors are explored, it is not uncommon to assume that these relationships adhere to certain shape constraints, such as monotonicity or convexity. The scam package for R has become a popular package for the full fitting of exponential-family generalized additive models with shape restrictions on smooths. This paper extends the existing framework of shape-constrained generalized additive models (SCAM) to accommodate smooth interactions of covariates, linear functionals of shape-constrained smooths, and the incorporation of residual autocorrelation. The methods described in this paper are implemented in the recent version of the package scam, available on the Comprehensive R Archive Network (CRAN).
This paper focuses on numerical schemes for multiple-delay stochastic differential equations with partially H\"older continuous drifts and locally H\"older continuous diffusion coefficients. To handle the superlinear terms in the coefficients, the truncated Euler-Maruyama scheme is employed. Under the given conditions, the convergence rates at time $T$ in both the $\mathcal{L}^{1}$ and $\mathcal{L}^{2}$ senses are shown by virtue of the Yamada-Watanabe approximation technique. Moreover, the convergence rates over a finite time interval $[0,T]$ are also obtained. Additionally, it should be noted that the convergence rates are not affected by the number of delay variables. Finally, we perform numerical experiments on a stochastic volatility model to verify the reliability of the theoretical results.
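A scalar, delay-free sketch of the truncated Euler-Maruyama idea is given below (our own simplification: a fixed truncation radius, whereas in the convergence analysis the radius grows as the step size shrinks):

```python
import numpy as np

def truncated_em(f, g, x0, T, n_steps, radius, rng):
    """Truncated Euler-Maruyama for dX = f(X) dt + g(X) dW (scalar sketch).

    Superlinear coefficients are tamed by evaluating f and g at the
    truncated state pi(x) = clip(x, -radius, radius) before each step.
    """
    dt = T / n_steps
    x = float(x0)
    for _ in range(n_steps):
        xt = min(max(x, -radius), radius)      # truncation map pi
        dW = rng.normal(0.0, np.sqrt(dt))      # Brownian increment
        x += f(xt) * dt + g(xt) * dW
    return x
```

With zero noise and $f(x) = -x$ the scheme reduces to explicit Euler and recovers $e^{-T}$; with a superlinear drift such as $f(x) = x - x^3$, the truncation keeps the iterates from exploding.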