In dual weighted residual methods based on the finite element framework, Galerkin orthogonality prevents solving the dual equation in the same space as the primal equation. Two approaches to constructing a new space for the dual problem are popular in the literature: refining the mesh ($h$-approach) and raising the polynomial order of the approximation ($p$-approach). In this paper, a novel approach based on the multiple-precision technique is proposed: the new finite element space uses the same configuration as the one for the primal equation, except for the precision of the calculations. The feasibility of this new approach is discussed in detail. In numerical experiments, the proposed approach can be realized conveniently with C++ \textit{templates}. Moreover, the new approach shows remarkable improvements in both efficiency and storage compared with the $h$- and $p$-approaches, and its performance is comparable with that of the higher-order interpolation ($i$-approach) in the literature. Combining these two approaches is believed to further enhance the efficiency of the dual weighted residual method.
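As a minimal sketch of the template mechanism alluded to above (not the paper's actual implementation), the following C++ program instantiates one and the same discrete solver at two precisions; a Jacobi iteration for a 1D Poisson problem stands in for the finite element solve, and all names are illustrative only.

\begin{verbatim}
#include <iostream>
#include <vector>

// The same discrete solver, templated on the working precision "Float":
// the primal solve may use double while the dual solve reuses the
// identical code (same mesh, same degree) with long double.
template <typename Float>
std::vector<Float> jacobi_poisson_1d(std::size_t n, int iters) {
  // Solve -u'' = 1 on (0,1), u(0) = u(1) = 0, with n interior nodes.
  Float h = Float(1) / Float(n + 1);
  std::vector<Float> u(n, Float(0)), v(n);
  for (int k = 0; k < iters; ++k) {
    for (std::size_t i = 0; i < n; ++i) {
      Float left  = (i > 0)     ? u[i - 1] : Float(0);
      Float right = (i + 1 < n) ? u[i + 1] : Float(0);
      v[i] = (h * h + left + right) / Float(2);  // Jacobi sweep
    }
    u.swap(v);
  }
  return u;
}

int main() {
  auto primal = jacobi_poisson_1d<double>(15, 2000);       // primal precision
  auto dual   = jacobi_poisson_1d<long double>(15, 2000);  // same space, higher precision
  std::cout << "midpoint (double):      " << primal[7] << "\n"
            << "midpoint (long double): " << (double)dual[7] << "\n";
}
\end{verbatim}

The point of the sketch is that no second discretization is built: the dual space differs from the primal one only through the template argument.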
We investigate online convex optimization in non-stationary environments and choose the dynamic regret as the performance measure, defined as the difference between the cumulative loss incurred by the online algorithm and that of any feasible comparator sequence. Let $T$ be the time horizon and $P_T$ the path length, which essentially reflects the non-stationarity of the environment; the state-of-the-art dynamic regret bound is $\mathcal{O}(\sqrt{T(1+P_T)})$. Although this bound is proved to be minimax optimal for convex functions, in this paper we demonstrate that the guarantee can be further enhanced for some easy problem instances, particularly when the online functions are smooth. Specifically, we introduce novel online algorithms that exploit smoothness and replace the dependence on $T$ in the dynamic regret with problem-dependent quantities: the variation in gradients of the loss functions, the cumulative loss of the comparator sequence, and the minimum of these two terms. These quantities are at most $\mathcal{O}(T)$ but can be much smaller in benign environments. Therefore, our results are adaptive to the intrinsic difficulty of the problem: the bounds are tighter than existing results for easy problems while guaranteeing the same rate in the worst case. Notably, our proposed algorithms achieve favorable dynamic regret with only one gradient query per iteration, the same gradient query complexity as static regret minimization methods. To accomplish this, we introduce the framework of collaborative online ensemble, which employs a two-layer online ensemble to handle non-stationarity and uses optimistic online learning, together with crucial correction terms, to facilitate effective collaboration between the meta and base layers, thereby attaining adaptivity. We believe the framework can be useful for broader problems.
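For concreteness, the dynamic regret and path length discussed above are the standard notions, written here with assumed notation ($x_t$ the algorithm's decision, $u_1,\dots,u_T$ an arbitrary comparator sequence):

\[
\text{D-Regret}_T(u_1,\ldots,u_T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t),
\qquad
P_T \;=\; \sum_{t=2}^{T} \lVert u_t - u_{t-1} \rVert_2 .
\]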
Since their introduction in Abadie and Gardeazabal (2003), Synthetic Control (SC) methods have quickly become one of the leading methods for estimating causal effects in observational studies in settings with panel data. Formal discussions often motivate SC methods by the assumption that the potential outcomes were generated by a factor model. Here we study SC methods from a design-based perspective, assuming a model for the selection of the treated unit(s) and period(s). We show that the standard SC estimator is generally biased under random assignment. We propose a Modified Unbiased Synthetic Control (MUSC) estimator that guarantees unbiasedness under random assignment and derive its exact, randomization-based, finite-sample variance. We also propose an unbiased estimator for this variance. We document, in settings with real data, that under random assignment SC-type estimators can have root mean-squared errors that are substantially lower than those of other common estimators. We show that such an improvement is weakly guaranteed if the treated period is similar to the other periods, for example, if the treated period was randomly selected. While our results only directly apply in settings where treatment is assigned randomly, we believe that they can complement model-based approaches even for observational studies.
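As a reminder of the object under study (the standard form in the SC literature, with notation assumed here), the SC estimator compares the treated unit's outcome in the treated period $T$ with a weighted average of the $J$ control units, where the non-negative weights sum to one and are fit on pre-treatment periods:

\[
\hat{\tau}_{\mathrm{SC}} \;=\; Y_{1T} \;-\; \sum_{j=2}^{J+1} \hat{w}_j\, Y_{jT},
\qquad
\hat{w}_j \ge 0, \quad \sum_{j=2}^{J+1} \hat{w}_j = 1 .
\]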
This work presents a strongly coupled partitioned method for fluid-structure interaction (FSI) problems based on a monolithic formulation of the system which employs a Lagrange multiplier. We prove that both the semi-discrete and fully discrete formulations are well-posed. To derive a partitioned scheme, a Schur complement equation, which implicitly expresses the Lagrange multiplier and the fluid pressure in terms of the fluid velocity and structural displacement, is constructed based on the monolithic FSI system. Solving the Schur complement system at each time step allows for the decoupling of the fluid and structure subproblems, making the method non-iterative between subdomains. We investigate bounds for the condition number of the Schur complement matrix and present initial numerical results to demonstrate the performance of our approach, which attains the expected convergence rates.
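To illustrate the structure of the Schur complement step in a generic saddle-point setting (a sketch only; the paper's actual blocks couple fluid velocity, pressure, structural displacement, and the multiplier), eliminating the primary variables from

\[
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ \lambda \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix}
\quad\Longrightarrow\quad
\underbrace{B A^{-1} B^{T}}_{S}\,\lambda \;=\; B A^{-1} f - g ,
\]

after which $u$ is recovered from $A u = f - B^{T}\lambda$. Solving one system with $S$ per time step is what allows the fluid and structure subproblems to be treated separately without subiterations.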
Finite element discretizations of time-dependent problems also require effective time-stepping schemes. While implicit Runge-Kutta methods provide favorable accuracy and stability properties, they give rise to large and complicated systems of equations to solve at each time step. These algebraic systems couple all Runge-Kutta stages together, giving a much larger system than for single-stage methods. We consider an approach to these systems based on monolithic smoothing. If the stage-coupled smoothers possess a certain kind of structure, then the question of convergence of a two-grid or multigrid iteration reduces to the convergence of a related strategy for a single-stage system with a complex-valued time step. In addition to providing a general theoretical approach to the convergence of monolithic multigrid methods, we give several numerical examples that illustrate the theory and show how higher-order Runge-Kutta methods can be made effective in practice.
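The reduction to a complex-valued time step can be seen in the linear model problem $u' = Lu$ (a standard argument, sketched here with assumed notation): an $s$-stage implicit Runge-Kutta method with Butcher matrix $A$ yields the stage system

\[
\bigl(I_s \otimes I - \Delta t\, A \otimes L\bigr)\, k \;=\; \mathbf{1} \otimes L u^{n},
\]

and if $A = V \Lambda V^{-1}$ is diagonalizable, the transformed system decouples into $s$ single-stage problems $(I - \lambda_i \Delta t\, L)\,\tilde{k}_i = \tilde{b}_i$, each with the complex-valued time step $\lambda_i \Delta t$, since the eigenvalues $\lambda_i$ of $A$ are in general complex.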
The use of orthonormal polynomial bases has been found to be efficient in preventing ill-conditioning of the system matrix in the primal formulation of Virtual Element Methods (VEM) for high polynomial degrees and in the presence of badly-shaped polygons. However, we show that using the natural extension of an orthogonal polynomial basis built for the primal formulation is not sufficient to cure ill-conditioning in the mixed case. Thus, in the present work, we introduce an orthogonal vector-polynomial basis built ad hoc for the mixed formulation of VEM, which leads to very high-quality solutions in all tested cases. Furthermore, a numerical experiment related to simulations in Discrete Fracture Networks (DFN), which are often characterized by very badly-shaped elements, is proposed to validate our procedure.
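For reference, the primal-case construction referred to above is typically a Gram-Schmidt orthonormalization of a monomial basis $\{m_i\}$ with respect to the elemental $L^2$ inner product (a generic sketch with assumed notation; the vector-valued basis for the mixed case is the paper's contribution and is not reproduced here):

\[
\tilde{q}_i \;=\; m_i \;-\; \sum_{j<i} \bigl(m_i,\, q_j\bigr)_{0,E}\, q_j,
\qquad
q_i \;=\; \tilde{q}_i \big/ \lVert \tilde{q}_i \rVert_{0,E}.
\]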
The well-known discrete Fourier transform (DFT) can easily be generalized to arbitrary nodes in the spatial domain. The fast procedure for this generalization is referred to as the nonequispaced fast Fourier transform (NFFT). Various applications, such as MRI and the solution of PDEs, require the inverse problem, i.e., computing Fourier coefficients from given nonequispaced data. In this paper, we survey different approaches to this problem. In contrast to iterative procedures, which need multiple iteration steps to compute a solution, we focus especially on so-called direct inversion methods. We review density compensation techniques and introduce a new scheme that leads to an exact reconstruction for trigonometric polynomials. In addition, we consider a matrix optimization approach using Frobenius norm minimization to obtain an inverse NFFT.
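In their simplest form, density compensation methods approximate each Fourier coefficient by a weighted adjoint NFFT (a sketch with assumed notation: $M$ nonequispaced nodes $x_j$ and $N$ sought coefficients $\hat{f}_k$); the role of a new scheme is then to choose the weights $w_j$ so that this reconstruction is exact for trigonometric polynomials:

\[
\hat{f}_k \;\approx\; \sum_{j=1}^{M} w_j\, f(x_j)\, \mathrm{e}^{-2\pi \mathrm{i}\, k x_j},
\qquad
k = -\tfrac{N}{2}, \ldots, \tfrac{N}{2}-1 .
\]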
Understanding the structure, quantity, and type of snow in mountain landscapes is crucial for assessing avalanche safety, interpreting satellite imagery, building accurate hydrology models, and choosing the right pair of skis for your weekend trip. Currently, such characteristics of the snowpack are measured using a combination of remote satellite imagery, weather stations, and laborious point measurements and descriptions provided by local forecasters, guides, and backcountry users. Here, we explore how characteristics of the top layer of the snowpack could be estimated while skiing, using strain sensors mounted to the top surface of an alpine ski. We show that with two strain gauges and an inertial measurement unit, it is feasible to correctly assign one of three qualitative labels (powder, slushy, or icy/groomed snow) to each 10-second segment of a trajectory with 97% accuracy, independent of skiing style. Our algorithm uses a combination of a data-driven linear model of the ski-snow interaction, dimensionality reduction, and a Naive Bayes classifier. Comparisons of classifier performance between strain gauges suggest that the optimal placement is halfway between the binding and the tip/tail of the ski, in the cambered section just before the point where the unweighted ski would touch the snow surface. The ability to classify snow using skis, potentially in real time, opens the door to applications ranging from citizen-science efforts to map snow surface characteristics in the backcountry to the development of skis with automated stiffness tuning based on snow type.
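For concreteness, a Gaussian Naive Bayes decision rule of the kind used in such a pipeline assigns to the reduced feature vector $z \in \mathbb{R}^{d}$ of a 10-second segment the label (a generic sketch; the paper's exact feature set and model parameters are not reproduced here):

\[
\hat{c} \;=\; \operatorname*{arg\,max}_{c \,\in\, \{\text{powder},\,\text{slushy},\,\text{icy}\}}
\Bigl[\, \log \pi_c \;+\; \sum_{d'=1}^{d} \log \mathcal{N}\bigl(z_{d'};\, \mu_{c,d'},\, \sigma_{c,d'}^{2}\bigr) \Bigr],
\]

where $\pi_c$ is the class prior and $(\mu_{c,d'}, \sigma_{c,d'}^{2})$ are the per-class feature means and variances estimated from training data.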
We develop two unfitted finite element methods for the Stokes equations based on $H(\mathrm{div})$-conforming finite elements. The first method is a cut finite element discretization of the Stokes equations based on the Brezzi-Douglas-Marini elements and involves interior penalty terms to enforce tangential continuity of the velocity at interior edges in the mesh. The second method is a cut finite element discretization of a three-field formulation of the Stokes problem involving the vorticity, velocity, and pressure, and uses the Raviart-Thomas space for the velocity. We present mixed ghost penalty stabilization terms for both methods so that the resulting discrete problems are stable and the divergence-free property of the $H(\mathrm{div})$-conforming elements is preserved also on unfitted meshes. We compare the two methods numerically. Both methods exhibit robust discrete problems, optimal convergence order for the velocity, and pointwise divergence-free velocity fields, independently of the position of the boundary relative to the computational mesh.
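One common form of the ghost penalty referred to above adds, over the faces $\mathcal{F}_G$ in the vicinity of cut elements, jumps of normal derivatives up to the polynomial degree $p$ (a sketch of the scalar prototype with assumed scaling; the mixed-form terms used for the two methods here differ in detail):

\[
g_h(u, v) \;=\; \sum_{F \in \mathcal{F}_G} \sum_{j=1}^{p} \gamma_j\, h^{2j-1}
\bigl( [\![ \partial_n^{j} u ]\!],\, [\![ \partial_n^{j} v ]\!] \bigr)_F ,
\]

which extends coercivity control from the physical domain to the full computational mesh, independently of how the boundary cuts the elements.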
The $hp$-adaptive finite element method (FEM) - where one independently chooses the mesh size ($h$) and polynomial degree ($p$) to be used on each cell - has long been known to have better theoretical convergence properties than either $h$- or $p$-adaptive methods alone. However, it is not widely used, owing at least in part to the difficulty of the underlying algorithms and the lack of widely usable implementations. This is particularly true for continuous finite elements. Herein, we discuss the algorithms necessary for a comprehensive and generic implementation of $hp$-adaptive finite element methods on distributed-memory, parallel machines. In particular, we present a multi-stage algorithm for the unique enumeration of degrees of freedom (DoFs) suitable for continuous finite element spaces, describe considerations for weighted load balancing, and discuss the transfer of variable-size data between processes. We illustrate the performance of our algorithms with numerical examples and demonstrate that they scale reasonably up to at least 16,384 Message Passing Interface (MPI) processes. We provide a reference implementation of our algorithms as part of the open-source library deal.II.
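As a minimal serial illustration of the $hp$ machinery in deal.II (a sketch only, assuming a recent deal.II release; the paper's contribution concerns the distributed-memory case, which additionally involves parallel triangulations and weighted partitioning):

\begin{verbatim}
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/hp/fe_collection.h>
#include <deal.II/dofs/dof_handler.h>
#include <iostream>

int main()
{
  using namespace dealii;

  Triangulation<2> tria;
  GridGenerator::hyper_cube(tria);
  tria.refine_global(3);

  // One continuous element per candidate polynomial degree p = 1..3.
  hp::FECollection<2> fe_collection;
  for (unsigned int p = 1; p <= 3; ++p)
    fe_collection.push_back(FE_Q<2>(p));

  // Assign a degree to each cell (here: an arbitrary pattern), then
  // enumerate the DoFs; the library resolves the continuity constraints
  // between neighboring cells of unequal degree.
  DoFHandler<2> dof_handler(tria);
  for (const auto &cell : dof_handler.active_cell_iterators())
    cell->set_active_fe_index(cell->active_cell_index() % 3);
  dof_handler.distribute_dofs(fe_collection);

  std::cout << "n_dofs = " << dof_handler.n_dofs() << std::endl;
}
\end{verbatim}

In the parallel setting, the call to distribute_dofs is precisely where the multi-stage enumeration algorithm described in the paper takes over.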
ChatGPT is a large language model recently released by OpenAI. In this technical report, we explore for the first time the capability of ChatGPT for programming numerical algorithms. Specifically, we examine the capability of ChatGPT to generate codes for numerical algorithms in different programming languages, to debug and improve codes written by users, to complete missing parts of numerical codes, to rewrite existing codes in other programming languages, and to parallelize serial codes. Additionally, we assess whether ChatGPT can recognize whether given codes were written by humans or machines. To reach this goal, we consider a variety of mathematical problems such as the Poisson equation, the diffusion equation, the incompressible Navier-Stokes equations, compressible inviscid flow, eigenvalue problems, solving linear systems of equations, and storing sparse matrices. Furthermore, we exemplify scientific machine learning, such as physics-informed neural networks and convolutional neural networks, with applications to computational physics. Through these examples, we investigate the successes, failures, and challenges of ChatGPT. Examples of failures include producing singular matrices, performing operations on arrays with incompatible sizes, and interrupting the generation of relatively long codes. Our results suggest that ChatGPT can successfully program numerical algorithms in different programming languages, but certain limitations and challenges exist that require further improvement of this machine learning model.
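To make the kind of task concrete, the following is a hypothetical example in the spirit of the "storing sparse matrices" problems mentioned above (our own illustration, not code generated by ChatGPT or taken from the report): compressed sparse row (CSR) storage together with a matrix-vector product.

\begin{verbatim}
#include <iostream>
#include <vector>

// Compressed sparse row (CSR) storage: values, column indices, and
// per-row offsets into the value array.
struct Csr {
  std::vector<double> val;  // nonzero values
  std::vector<int>    col;  // column index of each nonzero
  std::vector<int>    row;  // row[i]..row[i+1] indexes row i's nonzeros
};

// y = A * x for a CSR matrix A.
std::vector<double> spmv(const Csr& a, const std::vector<double>& x) {
  std::vector<double> y(a.row.size() - 1, 0.0);
  for (std::size_t i = 0; i + 1 < a.row.size(); ++i)
    for (int k = a.row[i]; k < a.row[i + 1]; ++k)
      y[i] += a.val[k] * x[a.col[k]];
  return y;
}

int main() {
  // 1D Poisson stencil [-1 2 -1] on 4 nodes, stored in CSR form.
  Csr a{{2, -1,  -1, 2, -1,  -1, 2, -1,  -1, 2},
        {0, 1,   0, 1, 2,    1, 2, 3,    2, 3},
        {0, 2, 5, 8, 10}};
  for (double yi : spmv(a, {1, 1, 1, 1})) std::cout << yi << " ";
  std::cout << "\n";  // prints: 1 0 0 1
}
\end{verbatim}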