Motivated by the corrected form of the entropy-area law, and with the help of the von Neumann entropy of quantum matter, we construct an emergent spacetime by virtue of the geometric language of statistical information manifolds. We discuss the link between the Wald and Jacobson approaches to the thermodynamic/gravity correspondence and the Fisher pseudo-Riemannian metric of the information manifold. We derive in detail Einstein's field equations in statistical-information-geometric form. This leads to a quantum origin of a positive cosmological constant, founded on the Fisher metric. This cosmological constant resembles those found in Lovelock's theories in a de Sitter background, obtained here through the complex extension of spacetime and the Gaussian exponential families of probability distributions. We also find a time-varying dynamical gravitational constant as a function of the Fisher metric, together with the corresponding Ryu-Takayanagi formula for such a system. Consequently, we obtain a dynamical equation for the entropy in the information manifold using the Liouville-von Neumann equation from the Hamiltonian of the system. This Hamiltonian is suggested to be non-Hermitian, which corroborates the approaches that relate non-unitary conformal field theories to information manifolds. This provides some insight into resolving "the problem of time".
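For reference, the construction builds on the Fisher information metric; in its standard form (the paper's pseudo-Riemannian extension may differ in conventions), for a family of distributions $p(x\,|\,\theta)$ it reads
\[
g_{ij}(\theta) = \int p(x\,|\,\theta)\,\frac{\partial \log p(x\,|\,\theta)}{\partial \theta^{i}}\,\frac{\partial \log p(x\,|\,\theta)}{\partial \theta^{j}}\,dx ,
\]
equipping the parameter space $\{\theta\}$ with a metric structure; for Gaussian exponential families this metric is available in closed form, which is what makes them a natural choice above.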
In this paper, we develop a domain-decomposition method for the generalized Poisson-Boltzmann equation based on the solvent-excluded surface widely used in computational chemistry. The solver requires solving a generalized screened Poisson (GSP) equation defined in $\mathbb{R}^3$ with a space-dependent dielectric permittivity and an ion-exclusion function that accounts for steric effects. Potential theory arguments transform the GSP equation into two coupled equations defined in a bounded domain. Then, the Schwarz decomposition method is used to formulate local problems by decomposing the cavity into overlapping balls and solving only a set of coupled sub-equations in each ball, in which spherical harmonics and Legendre polynomials are used as basis functions in the angular and radial directions, respectively. A series of numerical experiments is presented to test the method.
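Schematically, and hedging on the paper's exact conventions and scalings, a GSP equation of this type has the form
\[
-\nabla\cdot\bigl(\varepsilon(\mathbf{x})\,\nabla\psi(\mathbf{x})\bigr) + \kappa^{2}(\mathbf{x})\,\lambda(\mathbf{x})\,\psi(\mathbf{x}) = \rho(\mathbf{x}) \quad \text{in } \mathbb{R}^{3},
\]
where $\varepsilon$ is the space-dependent dielectric permittivity, $\lambda$ the ion-exclusion function accounting for steric effects, $\kappa$ a screening parameter, and $\rho$ the solute charge density.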
In this paper, we derive explicit second-order necessary and sufficient optimality conditions for a local minimizer of an optimal control problem governed by a quasilinear second-order partial differential equation with a piecewise smooth but non-differentiable nonlinearity in the leading term. The key argument rests on the analysis of level sets of the state. Specifically, we show that if a function vanishes on the boundary and its gradient is different from zero on a level set, then this set decomposes into finitely many closed simple curves. Moreover, the level sets depend continuously on the functions defining them. We also prove the continuity of integrals over the level sets. In particular, Green's first identity is shown to be applicable on an open set determined by two functions with nonvanishing gradients. In the second part of this paper, the explicit sufficient second-order conditions will be used to derive error estimates for a finite-element discretization of the control problem.
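For orientation, the classical form of Green's first identity on a sufficiently regular open set $\Omega$ is
\[
\int_{\Omega} \bigl( v\,\Delta u + \nabla u \cdot \nabla v \bigr)\,dx = \int_{\partial\Omega} v\,\frac{\partial u}{\partial n}\,ds ;
\]
the point here is that it remains applicable when $\Omega$ is an open set determined by level sets of two functions with nonvanishing gradients, whose boundaries decompose into finitely many closed simple curves.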
This paper presents a novel approach to constructing regularizing operators for severely ill-posed Fredholm integral equations of the first kind by introducing a parametrized discretization. The optimal values of the discretization and regularization parameters are computed simultaneously by solving a minimization problem based on a regularization-parameter search criterion. The effectiveness of the proposed approach is demonstrated on examples of noisy Laplace transform inversion and the deconvolution of nuclear magnetic resonance relaxation data.
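As a reminder of the setting (notation ours, not the paper's), a Fredholm integral equation of the first kind reads
\[
\int_{a}^{b} K(s,t)\,f(t)\,dt = g(s),
\]
and a regularized discretization replaces it by a finite-dimensional problem such as $\min_{\mathbf{f}} \|A_{h}\mathbf{f} - \mathbf{g}\|^{2} + \alpha\,\|L\mathbf{f}\|^{2}$; the approach above treats the discretization parameter $h$ and the regularization parameter $\alpha$ as joint unknowns of a single search criterion rather than fixing the discretization first.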
We consider {\it local} balances of momentum and angular momentum for the incompressible Navier-Stokes equations. First, we formulate new weak forms of the physical balances (conservation laws) of these quantities and prove that they are equivalent to the usual conservation-law formulations. We then show that continuous Galerkin discretizations of the Navier-Stokes equations using the EMAC form of the nonlinearity preserve discrete analogues of the weak-form conservation laws, in both the Eulerian and the Lagrangian formulations (which are not equivalent after discretization). Numerical tests illustrate the new theory.
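For context, the EMAC (energy, momentum and angular momentum conserving) formulation replaces the convective term as follows (conventions may vary slightly across references):
\[
\mathbf{u}_{t} + 2\,D(\mathbf{u})\,\mathbf{u} + (\nabla\cdot\mathbf{u})\,\mathbf{u} + \nabla P = \nu\,\Delta\mathbf{u} + \mathbf{f}, \qquad D(\mathbf{u}) = \tfrac{1}{2}\bigl(\nabla\mathbf{u} + (\nabla\mathbf{u})^{T}\bigr),
\]
with the modified pressure $P = p - \tfrac{1}{2}|\mathbf{u}|^{2}$; at the continuous level this is equivalent to the usual convective form, but the discrete nonlinearity inherits the conservation properties.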
In this paper, we present a discontinuity- and cusp-capturing physics-informed neural network (PINN) to solve Stokes equations with a piecewise-constant viscosity and a singular force along an interface. We first reformulate the governing equations in each fluid domain separately and replace the singular force effect with a traction balance equation between the solutions on the two sides of the interface. Since the pressure is discontinuous and the velocity has discontinuous derivatives across the interface, we use a network consisting of two fully connected sub-networks that approximate the pressure and velocity, respectively. The two sub-networks share the same primary coordinate inputs but take different augmented feature inputs. These augmented inputs encode the interface information, so we assume that a level set function is given and that its zero level set indicates the position of the interface. The pressure sub-network uses an indicator function as an augmented input to capture the function discontinuity, while the velocity sub-network uses a cusp-enforced level set function to capture the derivative discontinuities via the traction balance equation. We perform a series of numerical experiments on two- and three-dimensional Stokes interface problems and compare the accuracy with augmented immersed interface methods from the literature. Our results indicate that even a shallow network with a moderate number of neurons and sufficient training data points can achieve prediction accuracy comparable to that of immersed interface methods.
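A minimal sketch of this two-sub-network architecture (our own illustrative code, not the authors'; all names are hypothetical, and the level set values phi are assumed to be supplied by the user):

```python
import torch
import torch.nn as nn

class StokesInterfacePINN(nn.Module):
    # Illustrative sketch: two shallow sub-networks share the coordinate input x
    # but receive different interface-aware features built from a user-supplied
    # level set value phi(x).
    def __init__(self, dim=2, width=64):
        super().__init__()
        # Velocity sub-network: input (x, |phi|); the cusp in |phi| lets it
        # represent discontinuous velocity derivatives across the interface.
        self.vel_net = nn.Sequential(
            nn.Linear(dim + 1, width), nn.Tanh(), nn.Linear(width, dim))
        # Pressure sub-network: input (x, H(phi)); the jump in the indicator
        # lets it represent a discontinuous pressure.
        self.prs_net = nn.Sequential(
            nn.Linear(dim + 1, width), nn.Tanh(), nn.Linear(width, 1))

    def forward(self, x, phi):
        cusp = phi.abs()                # cusp-enforced level set feature
        side = (phi > 0).float()        # indicator of the interface side
        u = self.vel_net(torch.cat([x, cusp], dim=-1))
        p = self.prs_net(torch.cat([x, side], dim=-1))
        return u, p
```

The Stokes residuals in each subdomain and the traction balance condition on the interface would then enter as terms of the training loss.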
The (modern) arbitrary derivative (ADER) approach is a popular technique for the numerical solution of differential problems, based on iteratively solving an implicit discretization of their weak formulation. In this work, focusing on the ODE context, we investigate several strategies to improve this approach. Our initial emphasis is on the order of accuracy of the method in connection with the polynomial discretization of the weak formulation. We demonstrate that precise choices lead to higher orders of convergence than reported in the existing literature. Then, we cast ADER methods into a Deferred Correction (DeC) formalism. This allows us to determine the optimal number of iterations, which equals the formal order of accuracy of the method, and to introduce efficient $p$-adaptive modifications. These are defined by matching, at each iteration, the order of accuracy achieved and the degree of the polynomial reconstruction. We provide analytical and numerical results, including a stability analysis of the new modified methods, an investigation of computational efficiency, an application to adaptivity, and an application to hyperbolic PDEs with a Spectral Difference (SD) space discretization.
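A minimal NumPy sketch of the underlying fixed-point (Picard/DeC-type) iteration for one ODE step, under our own simplifying choices (function names and node handling are ours, not the paper's):

```python
import numpy as np

def lagrange_integration_matrix(nodes):
    # theta[m, j] = integral over [0, nodes[m]] of the j-th Lagrange basis
    # polynomial on the sub-nodes (assumed to lie in [0, 1]).
    M = len(nodes)
    theta = np.zeros((M, M))
    for j in range(M):
        y = np.zeros(M)
        y[j] = 1.0                             # j-th Lagrange basis: 1 at node j, 0 elsewhere
        coeffs = np.polyfit(nodes, y, M - 1)   # its coefficients, highest degree first
        antider = np.polyint(coeffs)           # antiderivative with zero constant
        theta[:, j] = np.polyval(antider, nodes) - np.polyval(antider, 0.0)
    return theta

def ader_dec_step(f, u0, dt, nodes, iterations):
    # One step of u' = f(u): iterate u_m <- u0 + dt * sum_j theta[m, j] * f(u_j).
    theta = lagrange_integration_matrix(nodes)
    u = np.tile(u0, (len(nodes), 1))           # initial guess: constant in time
    for _ in range(iterations):
        F = np.array([f(um) for um in u])
        u = u0 + dt * theta @ F                # broadcasting adds u0 to each row
    return u[-1]                               # value at the step's right endpoint

# Example: u' = -u, exact solution exp(-t); 3 sub-nodes, 3 iterations -> order 3.
u_end = ader_dec_step(lambda u: -u, np.array([1.0]), 0.1, np.array([0.0, 0.5, 1.0]), 3)
```

With $M$ sub-nodes ending at $1.0$, each iteration formally gains one order of accuracy, so iterating exactly as many times as the target order, as advocated above, avoids wasted work.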
We consider the linear lambda-calculus extended with the sup type constructor, which provides an additive conjunction along with a non-deterministic destructor. The sup type constructor was introduced in the context of quantum computing. In this paper, we study this type constructor within a simple linear logic categorical model, employing the category of semimodules over a commutative semiring. We demonstrate that the non-deterministic destructor finds a suitable model in a "weighted" codiagonal map. This approach offers a valid and insightful alternative for interpreting non-determinism, especially in instances where the conventional powerset monad interpretation does not align with the category's structure, as is the case for the category of semimodules. The validity of this alternative relies on the presence of biproducts in the category.
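Concretely, on one plausible reading (ours, hedged): in a category of semimodules over a commutative semiring $S$, the biproduct $A \oplus A$ carries, besides the usual codiagonal $(x,y) \mapsto x + y$, weighted variants
\[
\nabla_{\alpha,\beta} \colon A \oplus A \to A, \qquad (x,y) \mapsto \alpha\cdot x + \beta\cdot y, \qquad \alpha, \beta \in S,
\]
and it is a map of this kind that can interpret the non-deterministic destructor, with the weights recording how the two branches are combined.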
We exhibit a 5-uniform hypergraph that has no polychromatic 3-coloring, yet all of its restricted subhypergraphs with edges of size at least 3 are 2-colorable. This disproves a bold conjecture of Keszegh and the author, and can be considered a first step toward a better understanding of polychromatic colorings of hereditary hypergraph families since the seminal work of Berge. We also show that our method cannot produce hypergraphs of arbitrarily high uniformity, and we mention some connections to panchromatic colorings.
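For the record, the standard definitions in play (not restated in the abstract): a hypergraph is $m$-uniform if every edge has exactly $m$ vertices; a vertex coloring with $k$ colors is {\it polychromatic} if every edge contains all $k$ colors; and a hypergraph is $2$-colorable if it admits a $2$-coloring with no monochromatic edge.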
For a singular integral equation on an interval of the real line, we study the behavior of the error of a delta-delta discretization. We show that the convergence is non-uniform: it is of order $O(h^{2})$ in the interior of the interval, while in a boundary layer the consistency error does not tend to zero.
We hypothesize that, due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the others. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate a model's dependence on each modality, we compute the gain in accuracy when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm that balances the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
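A back-of-the-envelope sketch of the conditional utilization rate (our own code and made-up accuracy numbers, purely for illustration):

```python
def conditional_utilization_rate(acc_both, acc_single):
    # u(m1 | m2): accuracy gained by adding modality m1 on top of m2 alone.
    return acc_both - acc_single

# Hypothetical accuracies for a two-modality (audio + video) classifier:
acc = {"audio+video": 0.91, "audio": 0.62, "video": 0.88}
u_audio_given_video = conditional_utilization_rate(acc["audio+video"], acc["video"])
u_video_given_audio = conditional_utilization_rate(acc["audio+video"], acc["audio"])

# A large gap between the two rates (here roughly 0.03 vs 0.29) signals
# greedy reliance on the video modality.
print(u_audio_given_video, u_video_given_audio)
```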