This paper focuses on the inverse elastic impedance and geometry problem with a Cauchy data pair on the accessible part of the boundary in the two-dimensional case. Through a decomposition of the displacement, the problem is transformed into a coupled boundary value problem involving two scalar Helmholtz equations. First, a uniqueness result is given, and a non-iterative algorithm is proposed to solve the data completion problem using a Cauchy data pair on a known part of the solution domain's boundary. Next, we introduce a Newton-type iterative method for reconstructing the boundary and the impedance function from the completed data on the unknown boundary, which is governed by a specific type of boundary condition. Finally, we provide several examples to demonstrate the effectiveness and accuracy of the proposed method.
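As background (a standard decomposition, assumed here for illustration; the paper may use a variant): in two dimensions the displacement field $u$ can be split into compressional and shear potentials,
\[
u = \nabla \varphi + \operatorname{curl} \psi,
\qquad
\Delta \varphi + k_p^2 \varphi = 0,
\qquad
\Delta \psi + k_s^2 \psi = 0,
\]
where $k_p$ and $k_s$ denote the compressional and shear wavenumbers; the boundary conditions on $u$ are what couple the two scalar Helmholtz problems.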
We present and compare two different optimal control approaches applied to SEIR models in epidemiology, which allow us to derive policies for controlling the spread of an epidemic. The first approach uses Dynamic Programming to characterise the value function of the problem as the solution of a partial differential equation, the Hamilton-Jacobi-Bellman equation, and derives the optimal policy in feedback form. The second is based on Pontryagin's maximum principle and directly gives open-loop controls, via the solution of an optimality system of ordinary differential equations. This method, however, may not converge to the optimal solution. We propose a combination of the two methods in order to obtain high-quality and reliable solutions. Several simulations are presented and discussed, also checking first- and second-order necessary optimality conditions for the corresponding numerical solutions.
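To fix ideas (the controlled model below is an illustrative assumption, not necessarily the paper's exact formulation): with a control $u(t)$ reducing the transmission rate $\beta$, a controlled SEIR model reads
\[
\dot S = -(1-u)\beta S I, \qquad
\dot E = (1-u)\beta S I - \sigma E, \qquad
\dot I = \sigma E - \gamma I, \qquad
\dot R = \gamma I.
\]
Pontryagin's principle couples these forward state equations with backward adjoint equations and a pointwise minimization of the Hamiltonian, while Dynamic Programming characterises the value function $V(t,x)$ through the Hamilton-Jacobi-Bellman equation $-\partial_t V = \min_u \{\, \ell(x,u) + \nabla V \cdot f(x,u) \,\}$, the minimizer giving the feedback control.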
In this paper, a novel multigrid method based on Newton iteration is proposed to solve nonlinear eigenvalue problems. Instead of handling the eigenvalue $\lambda$ and eigenfunction $u$ separately, we treat the eigenpair $(\lambda, u)$ as one element in a product space $\mathbb R \times H_0^1(\Omega)$. Then in the presented multigrid method, only one discrete linear boundary value problem needs to be solved for each level of the multigrid sequence. Because we avoid solving large-scale nonlinear eigenvalue problems directly, the overall efficiency is significantly improved. The optimal error estimate and linear computational complexity can be derived simultaneously. In addition, we provide an improved multigrid method coupled with a mixing scheme to further guarantee the convergence and stability of the iteration scheme. More importantly, we prove convergence for the residuals after each iteration step. For nonlinear eigenvalue problems, such theoretical analysis is missing from the existing literature on the mixing iteration scheme.
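A minimal sketch of the product-space Newton step (the notation is assumed, not taken from the paper): writing the nonlinear eigenvalue problem and the normalization constraint together as
\[
F(\lambda, u) = \Big( A(u)\,u - \lambda u,\ \tfrac12\big(\|u\|_{L^2}^2 - 1\big) \Big) = 0
\]
for the eigenpair $(\lambda, u) \in \mathbb R \times H_0^1(\Omega)$, one iteration solves the single linearized system
\[
F'(\lambda_k, u_k)\,(\delta\lambda, \delta u) = -F(\lambda_k, u_k), \qquad
(\lambda_{k+1}, u_{k+1}) = (\lambda_k, u_k) + (\delta\lambda, \delta u),
\]
i.e., one discrete linear boundary value problem per level of the multigrid sequence.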
We study the data-driven selection of causal graphical models using constraint-based algorithms, which determine the existence or non-existence of edges (causal connections) in a graph based on testing a series of conditional independence hypotheses. In settings where the ultimate scientific goal is to use the selected graph to inform estimation of some causal effect of interest (e.g., by selecting a valid and sufficient set of adjustment variables), we argue that a "cautious" approach to graph selection should control the probability of falsely removing edges and prefer dense, rather than sparse, graphs. We propose a simple inversion of the usual conditional independence testing procedure: to remove an edge, test the null hypothesis of conditional association greater than some user-specified threshold, rather than the null of independence. This equivalence-testing formulation of the independence constraints leads to a procedure with desirable statistical properties and behaviors that better match the inferential goals of certain scientific studies, for example observational epidemiological studies that aim to estimate causal effects in the face of causal model uncertainty. We illustrate our approach on a data example from environmental epidemiology.
We introduce a nonconforming hybrid finite element method for the two-dimensional vector Laplacian, based on a primal variational principle for which conforming methods are known to be inconsistent. Consistency is ensured using penalty terms similar to those used to stabilize hybridizable discontinuous Galerkin (HDG) methods, with a carefully chosen penalty parameter due to Brenner, Li, and Sung [Math. Comp., 76 (2007), pp. 573-595]. Our method accommodates elements of arbitrarily high order and, like HDG methods, it may be implemented efficiently using static condensation. The lowest-order case recovers the $P_1$-nonconforming method of Brenner, Cui, Li, and Sung [Numer. Math., 109 (2008), pp. 509-533], and we show that higher-order convergence is achieved under appropriate regularity assumptions. The analysis makes novel use of a family of weighted Sobolev spaces, due to Kondrat'ev, for domains admitting corner singularities.
We study half-space separation in the convexity of chordless paths of a graph, i.e., monophonic convexity. In this problem, one is given a graph and two (disjoint) subsets of vertices and asks whether these two sets can be separated by complementary convex sets, called half-spaces. While this problem is known to be $\mathbf{NP}$-complete for geodesic convexity -- the convexity of shortest paths -- we show that it can be solved in polynomial time for monophonic convexity.
The starting point of this paper is a collection of properties of an algorithm that have been distilled from the informal descriptions of what an algorithm is given in standard works from the mathematical and computer science literature. Based on these, the notion of a proto-algorithm is introduced. The idea is that algorithms are equivalence classes of proto-algorithms under some equivalence relation. Three equivalence relations are defined. Two of them give bounds between which an appropriate equivalence relation must lie. The third lies in between these two and is likely an appropriate equivalence relation. A sound method is presented to prove, using an imperative process algebra based on ACP, that this equivalence relation holds between two proto-algorithms.
This work is concerned with implementing the hybridizable discontinuous Galerkin (HDG) method to solve the linear anisotropic elastic equation in the frequency domain. A first-order formulation with the compliance tensor and Voigt notation is employed to provide a compact description of the discretized problem and flexibility for highly heterogeneous media. We further focus on the question of optimal choices of stabilization in the definition of HDG numerical traces. For this purpose, we construct a hybridized Godunov-upwind flux for anisotropic elastic media possessing three distinct wavespeeds. This stabilization removes the need to choose a scaling factor, contrary to the identity and Kelvin-Christoffel based stabilizations which are popular choices in the literature. We carry out comparisons among these families for isotropic and anisotropic materials, with constant and highly heterogeneous backgrounds, in two and three dimensions. These experiments establish the optimality of the Godunov stabilization, which can be used as a reference choice for a generic material in which different types of waves propagate.
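For orientation (generic HDG notation, assumed here rather than quoted from this work): the numerical trace of an HDG method for elasticity typically takes the form
\[
\widehat{\sigma}\, n \;=\; \sigma n \;-\; \tau\,(u - \widehat{u})
\quad \text{on element boundaries},
\]
where the stabilization operator $\tau$ is the object being chosen. Identity-type choices $\tau = \gamma\,\mathrm{Id}$ require tuning the scaling $\gamma$, whereas a Godunov-upwind construction builds $\tau$ from the wavespeeds of the medium and thereby removes that free factor.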
We show the consistency of the maximum likelihood estimator for mixtures of elliptically symmetric distributions as an estimator of its population version, where the underlying distribution $P$ is nonparametric and does not necessarily belong to the class of mixtures on which the estimator is based. In the situation where $P$ is a mixture of sufficiently well-separated but nonparametric distributions, we show that the components of the population version of the estimator correspond to the well-separated components of $P$. This provides some theoretical justification for the use of such estimators for cluster analysis when $P$ has well-separated subpopulations, even if these subpopulations differ from what the mixture model assumes.
Many interesting physical problems described by systems of hyperbolic conservation laws are stiff, and thus impose a very small time-step because of the restrictive CFL stability condition. In this case, one can exploit the superior stability properties of implicit time integration, which allows one to choose the time-step from accuracy requirements alone, and thus avoid the use of small time-steps. We discuss an efficient framework to devise high order implicit schemes for stiff hyperbolic systems without tailoring it to a specific problem. The nonlinearity of high order schemes, due to the space- and time-limiting procedures which control nonphysical oscillations, makes implicit time integration difficult, e.g.~because the discrete system is nonlinear even for linear problems. This nonlinearity of the scheme is circumvented as proposed in (Puppo et al., Comm.~Appl.~Math.~\& Comput., 2023) for scalar conservation laws, where a first order implicit predictor is computed to freeze the nonlinear coefficients of the essentially non-oscillatory space reconstruction, and also to assist limiting in time. In addition, we propose a novel conservative flux-centered a-posteriori time-limiting procedure using numerical entropy indicators to detect troubled cells. The numerical tests involve classical and artificially devised stiff problems using the Euler equations of gas dynamics.
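Schematically (the notation below is assumed for illustration): a first-order implicit predictor $u^{*}$, e.g.\ backward Euler,
\[
u^{*} = u^{n} + \Delta t\, L_1(u^{*}),
\]
is computed first, and the high-order implicit update then freezes the nonlinear reconstruction weights at $u^{*}$,
\[
u^{n+1} = u^{n} + \Delta t\, L_h\big(u^{n+1};\, w(u^{*})\big),
\]
so that the implicit system to be solved is linear whenever the flux is linear.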
Splitting methods are a widely used numerical scheme for solving convection-diffusion problems. However, they may lose stability in some situations, particularly in the presence of an unbounded convective term. In this paper, we propose a new splitting method, called the "Adapted Lie splitting method", which successfully overcomes the observed instability in certain cases. Assuming that the unbounded coefficient belongs to a suitable Lorentz space, we show that the adapted Lie splitting converges with first order within the analytic semigroup framework. Furthermore, we provide numerical experiments to illustrate our newly proposed splitting approach.
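For context (this is classical Lie splitting, not the adapted variant itself): for $\partial_t u = Du + Cu$ with diffusion part $D$ and convective part $C$, one step of size $\tau$ composes the two subflows,
\[
u_{n+1} = e^{\tau D}\, e^{\tau C}\, u_n,
\]
i.e., first solve the convection subproblem over the step, then the diffusion subproblem starting from its result; the adapted method modifies this construction so as to remain stable when the convective coefficient is unbounded.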