It is known that B\'ezier curves and surfaces may have multiple representations by different control polygons. The polygons may have different numbers of control points and may even be disjoint. To our knowledge, Pekerman et al. (2005) were the first to address the problem of testing two parametric polynomial curves for coincidence. Their approach is based on reducing the input curves to a canonical irreducible form. They claimed that their approach can be extended to testing tensor product surfaces but gave no further details. In this paper we develop a new technique and provide a comprehensive solution to the problem of testing tensor product B\'ezier surfaces for coincidence. In (Vlachkova, 2017) an algorithm for testing B\'ezier curves based on subdivision was proposed, together with a partial solution to the problem of testing tensor product B\'ezier surfaces. Namely, the case where the irreducible surfaces are of the same degree $(n,m)$, $n,m \in \mathbb{N}$, was resolved under a certain additional condition. The remaining cases, where one of the surfaces is of degree $(n,m)$ and the other is of degree either $(n,n+m)$, $(n+m,m)$, or $(n+m,n+m)$, remained open. We have implemented our algorithm for testing tensor product B\'ezier surfaces for coincidence in the Mathematica package. Experimental results and their analysis are presented.
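The phenomenon that motivates coincidence testing can be illustrated with a minimal sketch (not the paper's algorithm): degree elevation produces a different control polygon, with a different number of control points, that represents exactly the same B\'ezier curve.

```python
# Illustration only: a Bezier curve admits multiple control polygons.
# Degree-elevating a quadratic to a cubic changes the polygon (3 -> 4
# control points) but leaves the curve pointwise unchanged.

def de_casteljau(ctrl, t):
    """Evaluate the Bezier curve with control points ctrl at parameter t."""
    pts = [p[:] for p in ctrl]
    while len(pts) > 1:
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def elevate(ctrl):
    """Degree elevation: n+1 control points -> n+2, same curve."""
    n = len(ctrl) - 1
    out = [ctrl[0]]
    for i in range(1, n + 1):
        a = i / (n + 1)
        out.append([a * u + (1 - a) * v
                    for u, v in zip(ctrl[i - 1], ctrl[i])])
    out.append(ctrl[-1])
    return out

quad = [[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]]
cubic = elevate(quad)          # a different polygon ...
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    p, q = de_casteljau(quad, t), de_casteljau(cubic, t)
    assert all(abs(a - b) < 1e-12 for a, b in zip(p, q))  # ... same curve
```

Since elevation can be repeated indefinitely, a single curve has infinitely many polynomial representations, which is why a coincidence test must first reduce to irreducible (non-degree-elevated) form.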
We introduce an extension of first-order logic that comes equipped with additional predicates for reasoning about an abstract state. Sequents in the logic comprise a main formula together with pre- and postconditions in the style of Hoare logic, and the axioms and rules of the logic ensure that the assertions about the state compose in the correct way. The main result of the paper is a realizability interpretation of our logic that extracts programs into a mixed functional/imperative language. All programs expressible in this language act on the state in a sequential manner, and we make this intuition precise by interpreting them in a semantic metatheory using the state monad. Our basic framework is very general, and our intention is that it can be instantiated and extended in a variety of different ways. We outline in detail one such extension: a monadic version of Heyting arithmetic with a well-founded while rule, and we conclude by sketching several other directions for future work.
The objective of this article is to address the discretisation of fractured/faulted poromechanical models using 3D polyhedral meshes in order to cope with the geometrical complexity of faulted geological models. A polytopal scheme is proposed for contact-mechanics, based on a mixed formulation combining a fully discrete space and suitable reconstruction operators for the displacement field with a face-wise constant approximation of the Lagrange multiplier accounting for the surface tractions along the fracture/fault network. To ensure the inf--sup stability of the mixed formulation, a bubble-like degree of freedom is included in the discrete space of displacements (and taken into account in the reconstruction operators). It is proved that this fully discrete scheme for the displacement is equivalent to a low-order Virtual Element scheme with a bubble enrichment of the VEM space. This $\mathbb{P}^1$-bubble VEM--$\mathbb{P}^0$ mixed discretisation is combined with a Hybrid Finite Volume scheme for the Darcy flow. Altogether, the proposed approach is adapted to complex geometries accounting for networks of planar faults/fractures, including corners, tips and intersections; it leads to efficient semi-smooth Newton solvers for the contact-mechanics and preserves the dissipative properties of the fully coupled model. Our approach is investigated in terms of convergence and robustness on several 2D and 3D test cases, using either analytical or numerical reference solutions, both for the stand-alone static contact-mechanics model and the fully coupled poromechanical model.
Compressed Sensing (CS) encompasses a broad array of theoretical and applied techniques for recovering signals, given partial knowledge of their coefficients. Its applications span various fields, including mathematics, physics, engineering, and several medical sciences. Motivated by our interest in the mathematics behind Magnetic Resonance Imaging (MRI) and CS, we employ convex analysis techniques to analytically determine equivalents of Lagrange multipliers for optimization problems with inequality constraints, specifically a weighted LASSO with voxel-wise weighting. We investigate this problem under assumptions on the fidelity term $\Vert{Ax-b}\Vert_2^2$, concerning either the sign of its gradient or orthogonality-like conditions on its matrix. To be more precise, we either require the sign of each coordinate of $2(Ax-b)^TA$ to be fixed within a rectangular neighborhood of the origin, with the side lengths of the rectangle dependent on the constraints, or we assume $A^TA$ to be diagonal. The objective of this work is to explore the relationship between Lagrange multipliers and the constraints of a weighted variant of LASSO, specifically in the mentioned cases where this relationship can be computed explicitly. Since they scale the regularization terms of the weighted LASSO, Lagrange multipliers serve as tuning parameters for the weighted LASSO, prompting the question of whether they can be used effectively as tuning parameters in applications like MR image reconstruction and denoising. This work represents an initial step in this direction.
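The diagonal case admits a concrete closed form that a small sketch can make tangible (our toy notation, not the paper's derivation): when $A^TA$ is diagonal, the weighted LASSO decouples coordinate-wise and is solved by soft-thresholding, which makes the role of the multipliers as per-coordinate tuning knobs explicit.

```python
# Sketch under the assumption A^T A = diag(d): the weighted LASSO
#     min_x ||A x - b||_2^2 + sum_i lam_i * w_i * |x_i|
# decouples into 1D problems, each solved by soft-thresholding the
# coordinate c_i = (A^T b)_i at level lam_i * w_i / 2.

def soft(c, tau):
    """Soft-thresholding (shrinkage) operator."""
    return (c - tau) if c > tau else (c + tau) if c < -tau else 0.0

def weighted_lasso_diagonal(diag_AtA, Atb, lam, w):
    """Closed-form minimizer when A^T A = diag(diag_AtA)."""
    return [soft(c, l * wi / 2.0) / d
            for d, c, l, wi in zip(diag_AtA, Atb, lam, w)]

# Toy data: A = diag(2, 1), b = (4, 1), so A^T A = diag(4, 1), A^T b = (8, 1).
x = weighted_lasso_diagonal([4.0, 1.0], [8.0, 1.0], [2.0, 2.0], [1.0, 1.0])
# First coordinate: soft(8, 1) / 4 = 1.75; second: soft(1, 1) / 1 = 0.
```

Increasing a single `lam[i]` only shrinks coordinate $i$, which is the per-voxel tuning behavior the abstract alludes to.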
Modeling excesses remains an important topic in insurance data modeling. Among the approaches to modeling excesses, the Peaks Over Threshold (POT) framework with the Generalized Pareto distribution (GPD) is regarded as an efficient one due to its flexibility. However, the selection of an appropriate threshold for this framework is a major difficulty. To address this difficulty, we apply several accumulation tests, along with the Anderson-Darling test, to determine an optimal threshold. Based on the selected thresholds, we fit the GPD and estimate the corresponding quantiles. We apply the procedure to the well-known Norwegian Fire Insurance data and construct confidence intervals for the Value-at-Risk (VaR). The accumulation test approach provides satisfactory performance in modeling the high quantiles of the Norwegian Fire Insurance data compared to previous graphical methods.
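The POT pipeline can be sketched on synthetic data (a minimal method-of-moments illustration with made-up parameters; the paper selects the threshold via accumulation and Anderson-Darling tests and uses different estimators): excesses over a threshold $u$ are fitted by a GPD, and high quantiles (VaR) follow from the standard tail formula.

```python
# Toy POT sketch: simulate GPD(xi, sigma) excesses over a threshold u by
# inversion, fit shape and scale by the method of moments, and read off
# VaR_p = u + (sigma/xi) * (((n / n_exc) * (1 - p))**(-xi) - 1).
# All numbers here are illustrative, not from the Norwegian Fire data.
import random
from statistics import mean, variance

random.seed(0)
xi_true, sigma_true, u = 0.1, 1.0, 10.0
n, n_exc = 10000, 2000            # total observations / exceedances of u
exc = [sigma_true / xi_true * ((1 - random.random()) ** -xi_true - 1)
       for _ in range(n_exc)]     # GPD excesses by inverse transform

m, s2 = mean(exc), variance(exc)
xi_hat = 0.5 * (1 - m * m / s2)          # moment estimator of the shape
sigma_hat = 0.5 * m * (1 + m * m / s2)   # moment estimator of the scale

p = 0.99                                  # VaR level
var_p = u + sigma_hat / xi_hat * ((n / n_exc * (1 - p)) ** -xi_hat - 1)
```

The threshold-selection question the abstract addresses is precisely the choice of `u`: too low and the GPD approximation fails, too high and `n_exc` is too small for stable estimates.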
Inverse problems related to Maxwell's equations in the presence of nonlinear materials are a quite new topic in the literature. The lack of contributions in this area can be ascribed to the significant challenges that such problems pose. Retrieving the spatial behaviour of some unknown physical property from boundary measurements is a nonlinear and highly ill-posed problem even in the presence of linear materials, and this complexity grows exponentially in the presence of nonlinear materials. In the tomography of linear materials, the Monotonicity Principle (MP) is the foundation of a class of non-iterative algorithms able to guarantee excellent performance and compatibility with real-time applications. Recently, the MP has been extended to nonlinear materials under very general assumptions. Starting from the theoretical background for this extension, we develop a first real-time inversion method for the inverse obstacle problem in the presence of nonlinear materials. The proposed method is intended for all problems governed by the quasilinear Laplace equation, i.e. static problems involving nonlinear materials. In this paper, we provide some preliminary results which lay the foundation of our method, together with some extended numerical examples.
We solve several of the first open questions in the table of small parameters of completely regular (CR) codes in Hamming graphs $H(n,q)$. The most notable result is the existence of a $\{13,6,1;1,6,9\}$-CR code in $H(n,2)$, $n\ge 13$. We also establish the non-existence of an $\{11,4;3,6\}$-code in $H(12,2)$ and of a $\{10,3;4,7\}$-code in $H(13,2)$. A partition of the complement of the quaternary Hamming code of length~$5$ into $4$-cliques is found, which can be used to construct completely regular codes with covering radius $1$ by known constructions. Additionally, we discuss the parameters $\{24,21,10;1,4,12\}$ of a putative completely regular code in $H(24,2)$ and show the non-existence of such a code in $H(8,4)$. Keywords: Hamming graph, equitable partition, completely regular code
Generalized Additive Runge-Kutta (GARK) schemes have been shown to be a suitable tool for solving ordinary differential equations with additively partitioned right-hand sides. This work develops symplectic GARK schemes for additively partitioned Hamiltonian systems. In a general setting, we derive conditions for symplecticness, as well as symmetry and time-reversibility. We show how symplectic and symmetric schemes can be constructed based on schemes which are only symplectic, or only symmetric. Particular attention is given to the special case of partitioned schemes for Hamiltonians split into multiple potential and kinetic energies. Finally, we show how symplectic GARK schemes can leverage different time scales and evaluation costs for different potentials, providing efficient numerical solutions by using different orders for these parts.
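The idea of splitting a Hamiltonian into a kinetic energy plus several potentials can be illustrated with a classical (non-GARK) symplectic composition, sketched below with made-up potentials: a leapfrog scheme that applies each potential's force as a separate kick. This is not the paper's construction, only a minimal example of why per-potential partitioning composes into a symplectic step.

```python
# Toy split Hamiltonian H(q, p) = p^2/2 + V1(q) + V2(q) with
# V1 = q^2/2 and V2 = q^2/4 (illustrative choices). A leapfrog step
# kicks with each potential separately, then drifts; multirate GARK-type
# variants could sub-cycle the cheap force, which is the efficiency gain
# the abstract alludes to.

def dV1(q):
    return q          # gradient of V1 = q^2 / 2

def dV2(q):
    return 0.5 * q    # gradient of V2 = q^2 / 4

def leapfrog_step(q, p, h):
    p -= 0.5 * h * dV1(q)   # half kick, first potential
    p -= 0.5 * h * dV2(q)   # half kick, second potential
    q += h * p              # drift
    p -= 0.5 * h * dV2(q)   # half kick, second potential
    p -= 0.5 * h * dV1(q)   # half kick, first potential
    return q, p

def energy(q, p):
    return 0.5 * p * p + 0.5 * q * q + 0.25 * q * q

q, p, h = 1.0, 0.0, 0.01
e0 = energy(q, p)
for _ in range(10000):
    q, p = leapfrog_step(q, p, h)
drift = abs(energy(q, p) - e0)  # symplecticness: bounded energy error
```

Over long integrations the energy error of such a symplectic composition stays bounded rather than drifting, in contrast to non-symplectic schemes of the same order.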
This work is motivated by the need for efficient numerical simulations of gas flows in the serpentine channels used in proton-exchange membrane fuel cells. In particular, we consider the Poisson problem in a 2D domain composed of several long straight rectangular sections and several bends. In order to speed up the resolution, we propose a 0D model in the rectangular parts of the channel and a Finite Element resolution in the bends. To find a good compromise between precision and computational cost, the challenge is twofold: how to choose a suitable position for the interface between the 0D and 2D models, and how to control the discretization error in the bends. We present an \textit{a posteriori} error estimator based on an equilibrated flux reconstruction in the subdomains where the Finite Element method is applied. The estimates give a global upper bound on the error, measured in the energy norm of the difference between the exact and approximate solutions on the whole domain. They are guaranteed, meaning that they feature no undetermined constants. Global lower bounds for the error are also derived. An adaptive algorithm is proposed to use the estimator smartly for the aforementioned twofold challenge. A numerical validation of the estimator and the algorithm completes the work.
Considered herein is a Jacobian-free Newton method for the numerical solution of nonlinear equations in which the Jacobian is approximated using the complex-step derivative approximation. We demonstrate that this method converges for complex-step values that are sufficiently small but not necessarily tiny. Notably, in the case of scalar equations the convergence rate becomes quadratic as the complex step tends to zero. On the other hand, in the case of systems of equations the rate is quadratic for any appropriately small value of the complex step, not just in the limit. This assertion is substantiated through numerical experiments. Furthermore, we demonstrate the method's seamless applicability to solving nonlinear systems that arise in the context of differential equations, employing it as a Jacobian-free Newton-Krylov method.
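The scalar case can be sketched in a few lines (our example equation, not one from the paper): Newton's derivative is replaced by the complex-step approximation $f'(x) \approx \operatorname{Im} f(x + ih)/h$, which, unlike a finite difference, involves no subtractive cancellation and so tolerates small $h$.

```python
# Minimal sketch of complex-step Newton for a scalar equation f(x) = 0.
# The derivative is approximated by Im(f(x + i*h)) / h; f must accept
# complex arguments (here it is a polynomial, so it does).

def newton_complex_step(f, x0, h=1e-6, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = f(complex(x, h)).imag / h  # complex-step derivative
        x -= fx / dfx                    # standard Newton update
    return x

# Solve x^2 - 2 = 0 starting from x0 = 1.
root = newton_complex_step(lambda x: x * x - 2.0, 1.0)
```

For this polynomial the complex-step derivative is exact up to $O(h^2)$ with no cancellation error, so the iteration behaves like exact Newton even for moderately sized `h`, in line with the abstract's claim that the step need not be tiny.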
The dichromatic number of a digraph is the minimum size of a partition of its vertices into acyclic induced subgraphs. Given a class of digraphs $\mathcal C$, a digraph $H$ is a hero in $\mathcal C$ if $H$-free digraphs of $\mathcal C$ have bounded dichromatic number. In a seminal paper, Berger et al. gave a simple characterization of all heroes in tournaments. In this paper, we give a simple proof that heroes in quasi-transitive oriented graphs are the same as heroes in tournaments. We also prove that this is not the case in the class of oriented multipartite graphs, disproving a conjecture of Aboulker, Charbit and Naserasr. Finally, we give a full characterisation of heroes in oriented complete multipartite graphs, up to the status of a single tournament on $6$ vertices.