In this work, we consider space-time goal-oriented a posteriori error estimation for parabolic problems. Temporal and spatial discretizations are based on Galerkin finite elements of continuous and discontinuous type. The main objectives are the development and analysis of space-time estimators in which the localization is based on a weak form employing a partition of unity. The resulting error indicators are used for temporal and spatial adaptivity. Our developments are substantiated with several numerical examples.
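As a schematic illustration (one common partition-of-unity localization of a dual-weighted residual estimator; the paper's precise construction may differ), the goal error is split via a partition of unity $\{\phi_i\}$ with $\sum_i \phi_i \equiv 1$:
\[
J(u) - J(u_h) \approx \eta = \sum_{i} \eta_i, \qquad \eta_i := \rho(u_h)\big((z - \tilde z_h)\,\phi_i\big),
\]
where $\rho(u_h)(\cdot)$ denotes the space-time residual of the discrete primal solution $u_h$, $z$ is the adjoint solution (approximated in practice by $\tilde z_h$), and $|\eta_i|$ serves as the temporal or spatial refinement indicator.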
This work explores the community structure of Santiago de Chile by analyzing the movement patterns of its residents. We use a dataset containing the approximate home and work locations of a subset of anonymized residents to construct a network that represents movement patterns within the city. By analyzing this network, we aim to identify the communities, or sub-cities, that exist within Santiago de Chile and to gain insight into the factors that drive the spatial organization of the city. We employ modularity optimization algorithms and clustering techniques to identify the communities within the network. Our results show that combining community detection algorithms with segregation tools provides new insights into the complex geography of segregation during working hours.
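To make the modularity-optimization step concrete, here is a minimal sketch using networkx; the zone names and commuter counts are placeholders, and the paper's actual pipeline and algorithm choices may differ.

```python
# Minimal sketch of the community-detection step (placeholder data).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Build a weighted home-work commuting network: nodes are city zones,
# edge weights count residents commuting between a pair of zones.
G = nx.Graph()
flows = [("zone_a", "zone_b", 120), ("zone_a", "zone_c", 35),
         ("zone_b", "zone_c", 80)]  # placeholder flows
for home, work, n_commuters in flows:
    G.add_edge(home, work, weight=n_commuters)

# Modularity optimization yields the candidate "sub-cities".
communities = greedy_modularity_communities(G, weight="weight")
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```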
Although applications of non-homogeneous Poisson processes to model and study threshold overshoots in various time series of measurements have proven to give good results, they must be complemented with an efficient and automatic diagnostic technique for locating the change-points which, when not taken into account, make the estimated model fit the information contained in the data poorly. For this reason, we propose a new method for resolving the segmentation uncertainty of such time series, focusing on the distribution of exceedances of a specific threshold. A key contribution of the present algorithm is that every exceedance day is a candidate change-point, so every possible configuration of exceedance days is a possible chromosome, and chromosomes combine to produce offspring. Under the heuristics of a genetic algorithm, the search for these change-points avoids purely local solutions and converges toward the best attainable configuration, while reducing the machine time wasted on evaluating chromosomes that are unlikely to solve the problem. The objective function is the Minimum Description Length (\textit{MDL}), built from the joint posterior distribution of the parameters of each regime and of the change-points that determine them, which also accounts for the influence of the presence of those times.
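A toy sketch of the genetic search over exceedance-day subsets follows; the mdl() and neg_log_lik() functions are schematic stand-ins for the paper's posterior-based MDL, and all numbers are made up.

```python
# Toy sketch of the genetic search over exceedance-day subsets.
import math
import random

N_DAYS = 100
exceedance_days = [5, 12, 33, 47, 61, 88]   # candidate change-points

def neg_log_lik(chromosome):
    # Placeholder for the NHPP fit of the regimes cut at the change-points.
    return sum(0.01 * abs(d - N_DAYS / 2) for d in chromosome)

def mdl(chromosome):
    # MDL = model description cost + data-fit cost (schematic).
    return len(chromosome) * math.log(N_DAYS) + neg_log_lik(chromosome)

def crossover(a, b):
    # Uniform crossover: keep each candidate day with the frequency
    # it appears in the two parents.
    return [d for d in exceedance_days
            if random.random() < ((d in a) + (d in b)) / 2]

population = [random.sample(exceedance_days, k=random.randint(1, 3))
              for _ in range(20)]
for _ in range(50):                          # generations
    population.sort(key=mdl)                 # lower MDL is better
    parents = population[:10]
    population = parents + [crossover(random.choice(parents),
                                      random.choice(parents))
                            for _ in range(10)]
print("estimated change-points:", sorted(min(population, key=mdl)))
```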
Inverse problems are inherently ill-posed and therefore require regularization techniques to achieve a stable solution. While traditional variational methods have well-established theoretical foundations, recent advances in machine-learning-based approaches have shown remarkable practical performance. However, the theoretical foundations of learning-based methods in the context of regularization are still underexplored. In this paper, we propose a general framework that addresses the current gap between learning-based methods and regularization strategies. In particular, our approach emphasizes the crucial role of data consistency in the solution of inverse problems and introduces the concept of data-proximal null-space networks as a key component of their solution. We provide a complete convergence analysis by extending the concept of regularizing null-space networks with data proximity in the visible part. We present numerical results for limited-view computed tomography to illustrate the validity of our framework.
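For orientation, a schematic form of the underlying null-space network from the prior literature (the paper's data-proximal construction refines it) is
\[
R_\theta(y) \;=\; A^\dagger y + (\operatorname{Id} - A^\dagger A)\, U_\theta\!\big(A^\dagger y\big),
\]
where $A$ is the forward operator, $A^\dagger$ its pseudoinverse, and $U_\theta$ a trained network. Since the learned correction is projected onto the null space of $A$, data consistency $A R_\theta(y) = A A^\dagger y$ holds by construction; the data-proximal variant additionally permits a controlled, data-dependent perturbation of the visible component.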
In this article we extend and strengthen the seminal work by Niyogi, Smale, and Weinberger on learning the homotopy type of an underlying space from a sample. In their work, Niyogi, Smale, and Weinberger studied samples of $C^2$ manifolds with positive reach embedded in $\mathbb{R}^d$. We extend their results in the following ways: In the first part of our paper we consider both manifolds of positive reach -- a more general setting than $C^2$ manifolds -- and sets of positive reach embedded in $\mathbb{R}^d$. The sample $P$ of such a set $\mathcal{S}$ does not have to lie directly on it. Instead, we assume that the two one-sided Hausdorff distances -- $\varepsilon$ and $\delta$ -- between $P$ and $\mathcal{S}$ are bounded. We provide explicit bounds in terms of $\varepsilon$ and $\delta$ that guarantee the existence of a parameter $r$ such that the union of balls of radius $r$ centred at the sample $P$ deformation-retracts to $\mathcal{S}$. In the second part of our paper we study homotopy learning in a significantly more general setting -- we investigate sets of positive reach and submanifolds of positive reach embedded in a \emph{Riemannian manifold with bounded sectional curvature}. To this end we introduce a new version of the reach in the Riemannian setting, inspired by the cut locus. Again, we provide tight bounds on $\varepsilon$ and $\delta$ for both cases (submanifolds as well as sets of positive reach), exhibiting the tightness by an explicit construction.
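Schematically (with all constants omitted; the paper provides the explicit bounds), the Euclidean result has the following shape: if every point of $P$ lies within distance $\delta$ of $\mathcal{S}$, every point of $\mathcal{S}$ lies within distance $\varepsilon$ of $P$, and $\varepsilon$ and $\delta$ are sufficiently small relative to the reach of $\mathcal{S}$, then there exists a radius $r$ such that
\[
\bigcup_{p \in P} B(p, r) \ \text{deformation-retracts to} \ \mathcal{S},
\]
so the union of balls recovers the homotopy type of $\mathcal{S}$.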
Based on data from the Magnetospheric Multiscale (MMS) mission, we examine magnetic field fluctuations in the Earth's magnetosheath. We apply a statistical analysis based on a Fokker-Planck equation to investigate the processes responsible for stochastic fluctuations in space plasmas. As is already known, turbulence in the inertial range of hydromagnetic scales exhibits Markovian features. We extend this statistical approach to much smaller scales, where kinetic theory should be applied. Here we study in detail and compare the characteristics of magnetic fluctuations behind the bow shock, inside the magnetosheath, and near the magnetopause. It appears that the first Kramers-Moyal coefficient is a linear and the second a quadratic function of the magnetic increments; these describe drift and diffusion, respectively, throughout the entire magnetosheath. This should correspond to a generalization of the Ornstein-Uhlenbeck process. We demonstrate that the second-order approximation of the Fokker-Planck equation leads to non-Gaussian kappa distributions for the probability density functions. In all cases in the magnetosheath, approximate power-law distributions are recovered. For some moderate scales we obtain kappa distributions with variously peaked shapes and heavy tails. In particular, for large values of the kappa parameter this shape reduces to the normal Gaussian distribution. It is worth noting that for smaller kinetic scales the rescaled distributions exhibit a universal global scale invariance, consistent with the stationary solution of the Fokker-Planck equation. These results, especially on kinetic scales, could be important for a better understanding of the physical mechanisms governing turbulent systems in space and astrophysical plasmas.
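For illustration, here is a minimal numpy sketch of the conditional-moment estimation of the first two Kramers-Moyal coefficients from increment data; the actual MMS data processing is considerably more involved.

```python
# Conditional-moment estimates of the first two Kramers-Moyal
# coefficients from magnetic-field increments (schematic).
import numpy as np

def increments(B, tau):
    return B[tau:] - B[:-tau]

def km_coefficients(B, tau, dtau, nbins=40):
    """Drift D1(x) and diffusion D2(x) versus the increment value x."""
    x = increments(B, tau)                     # increment at scale tau
    y = increments(B, tau - dtau)              # increment at the smaller scale
    n = min(len(x), len(y))
    x, dy = x[:n], y[:n] - x[:n]               # change across scales
    bins = np.linspace(x.min(), x.max(), nbins + 1)
    idx = np.digitize(x, bins) - 1
    rows = []
    for i in range(nbins):
        m = idx == i
        if m.sum() < 10:                       # skip poorly populated bins
            continue
        center = 0.5 * (bins[i] + bins[i + 1])
        d1 = dy[m].mean() / dtau               # expected ~ linear in x
        d2 = (dy[m] ** 2).mean() / (2 * dtau)  # expected ~ quadratic in x
        rows.append((center, d1, d2))
    return np.array(rows)                      # columns: x, D1(x), D2(x)
```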
In this paper, we derive explicit second-order necessary and sufficient optimality conditions for a local minimizer of an optimal control problem governed by a quasilinear second-order partial differential equation with a piecewise smooth but not differentiable nonlinearity in the leading term. The key argument rests on the analysis of level sets of the state. Specifically, we show that if a function vanishes on the boundary and its gradient differs from zero on a level set, then this set decomposes into finitely many closed simple curves. Moreover, the level sets depend continuously on the functions defining them. We also prove the continuity of integrals over the level sets. In particular, Green's first identity is shown to be applicable on an open set determined by two functions with nonvanishing gradients. In the second part of this paper, the explicit sufficient second-order conditions will be used to derive error estimates for a finite-element discretization of the control problem.
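For reference, Green's first identity, which the analysis applies on the open sets bounded by the finitely many closed simple curves described above, reads
\[
\int_\Omega \nabla u \cdot \nabla v \, dx = \int_{\partial\Omega} v \, \partial_n u \, ds - \int_\Omega v \, \Delta u \, dx ,
\]
where $\partial_n u$ denotes the outward normal derivative of $u$ on $\partial\Omega$.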
The joint design of an optical system and its downstream algorithm is a challenging and promising task. Because of the need to balance the global optimality of the imaging system against the computational cost of physical simulation, existing methods cannot achieve efficient joint design of complex systems such as smartphones and drones. In this work, starting from the perspective of optical design, we characterize the optics by separated aberrations. Additionally, to bridge hardware and software without gradients, an image simulation system is presented that reproduces the genuine imaging procedure of lenses with large fields of view. For aberration correction, we propose a network that perceives and corrects spatially varying aberrations, and we validate its superiority over state-of-the-art methods. Comprehensive experiments reveal that the preferred order for correcting separated aberrations in joint design is: longitudinal chromatic aberration, lateral chromatic aberration, spherical aberration, field curvature, and coma, with astigmatism last. Drawing on this preference, a 10% reduction in the total track length of a consumer-level mobile-phone lens module is accomplished. Moreover, the procedure leaves more room for manufacturing deviations, realizing extreme-quality enhancement of computational photography. This optimization paradigm provides innovative insight into the practical joint design of sophisticated optical systems and post-processing algorithms.
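As a schematic of one simple way to emulate field-dependent blur (the paper's simulation system reproduces real lens behavior), here is a patch-wise simulation of spatially varying aberrations with a hypothetical psf_for_field function:

```python
# Patch-wise simulation of spatially varying blur (schematic).
# psf_for_field(...) is a hypothetical stand-in for a real lens model.
import numpy as np
from scipy.signal import fftconvolve

def simulate_aberrations(image, psf_for_field, patch=64):
    """Blur each patch with the PSF at the patch center (float image)."""
    out = np.zeros(image.shape)
    h, w = image.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            psf = psf_for_field(i + patch // 2, j + patch // 2)
            blurred = fftconvolve(image, psf, mode="same")
            out[i:i + patch, j:j + patch] = blurred[i:i + patch, j:j + patch]
    return out

def psf_for_field(ci, cj, size=9):
    # Toy Gaussian PSF whose width grows toward the image corner.
    yy, xx = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    sigma = 0.5 + 2.0 * np.hypot(ci, cj) / 512.0
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()
```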
We propose a novel estimation approach for a general class of semi-parametric time series models in which the conditional expectation is modeled through a parametric function. The proposed class of estimators is based on a Gaussian quasi-likelihood function and relies on the specification of a parametric pseudo-variance that can contain parametric restrictions with respect to the conditional expectation. The specification of the pseudo-variance and the parametric restrictions follow naturally for observation-driven models with bounds on the support of the observable process, such as count processes and double-bounded time series. We derive the asymptotic properties of the estimators and a validity test for the parameter restrictions. We show that the results remain valid irrespective of the correct specification of the pseudo-variance. The key advantage of the restricted estimators is that they can achieve higher efficiency than alternative quasi-likelihood methods available in the literature. Furthermore, the testing approach can be used to build specification tests for parametric time series models. We illustrate the practical use of the methodology in a simulation study and in two empirical applications: integer-valued autoregressive processes, where assumptions on the dispersion of the thinning operator are formally tested, and autoregressions for double-bounded data, with an application to a realized-correlation time series.
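A minimal sketch of such a restricted Gaussian quasi-likelihood estimator is given below; the conditional-mean and pseudo-variance specifications are illustrative stand-ins, not the paper's models.

```python
# Restricted Gaussian quasi-likelihood estimation (schematic).
import numpy as np
from scipy.optimize import minimize

def qmle(y, mean_fn, pvar_fn, theta0):
    """Maximize the Gaussian quasi-likelihood with a parametric pseudo-variance."""
    def neg_qll(theta):
        m = mean_fn(y, theta)                    # conditional mean m_t(theta)
        v = np.maximum(pvar_fn(m, theta), 1e-8)  # pseudo-variance v_t(theta)
        return 0.5 * np.sum(np.log(v) + (y - m) ** 2 / v)
    return minimize(neg_qll, theta0, method="Nelder-Mead")

# Example: AR(1)-type conditional mean for counts with a Poisson-style
# restriction v_t = m_t linking the pseudo-variance to the mean.
mean_fn = lambda y, th: th[0] + th[1] * np.concatenate(([y.mean()], y[:-1]))
pvar_fn = lambda m, th: m
y = np.random.poisson(5.0, size=500).astype(float)
print(qmle(y, mean_fn, pvar_fn, theta0=np.array([1.0, 0.5])).x)
```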
In this paper we present an abstract nonsmooth optimization problem for which we recall existence and uniqueness results. We present a numerical scheme to approximate its solution. The theory is then applied to a sample static contact problem describing an elastic body in frictional contact with a foundation. This problem leads to a hemivariational inequality, which we solve numerically. Finally, we compare three computational methods for solving contact problems in mechanics: the direct optimization method, the augmented Lagrangian method, and the primal-dual active set strategy.
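As an illustration of the last of these methods, a toy primal-dual active set iteration for a 1D obstacle problem is sketched below; it is a much-simplified stand-in for the frictional contact problem, with made-up load and obstacle values.

```python
# Toy primal-dual active set (PDAS) iteration for a 1D obstacle problem.
import numpy as np

n, c = 50, 100.0
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2  # 1D Laplacian
f = np.full(n, -10.0)          # downward load
psi = np.full(n, -0.1)         # obstacle ("foundation")
u = np.zeros(n)
lam = np.zeros(n)

for it in range(30):
    active = lam + c * (psi - u) > 0          # predicted contact set
    # Solve A u - lam = f with u = psi on active, lam = 0 on inactive:
    K, rhs = A.copy(), f.copy()
    K[active] = 0.0
    K[active, active] = 1.0
    rhs[active] = psi[active]
    u = np.linalg.solve(K, rhs)
    lam = np.zeros(n)
    lam[active] = (A @ u - f)[active]
    if np.array_equal(active, lam + c * (psi - u) > 0):
        break                                  # active set settled

print("contact nodes:", np.where(active)[0])
```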
It is crucial to detect when an instance lies too far from the training samples for a machine learning model to be trusted, a challenge known as out-of-distribution (OOD) detection. For neural networks, one approach to this task consists of learning a diversity of predictors that all explain the training data. This information can be used to estimate the epistemic uncertainty at a newly observed instance through a measure of the disagreement of the predictions. Evaluating and certifying a method's ability to detect OOD samples requires specifying instances that are likely to occur in deployment but for which no prediction is available. Focusing on regression tasks, we choose a simple yet insightful model for this OOD distribution and conduct an empirical evaluation of the ability of various methods to discriminate OOD samples from the data. Moreover, we exhibit evidence that a diversity of parameters may fail to translate into a diversity of predictors. Based on the chosen OOD distribution, we propose a new way of estimating the entropy of a distribution over predictors using nearest neighbors in function space. This leads to a variational objective which, combined with a family of distributions given by a generative neural network, systematically produces a diversity of predictors that provides a robust way to detect OOD samples.
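To illustrate the function-space entropy idea, here is a sketch of a Kozachenko-Leonenko-style nearest-neighbor entropy estimate over predictors represented by their outputs on probe inputs; the probe construction and predictor family are illustrative, not the paper's.

```python
# Nearest-neighbor entropy estimate in function space (schematic).
import numpy as np
from scipy.special import digamma, gammaln

def nn_entropy(samples):
    """Kozachenko-Leonenko entropy estimate from 1-NN distances in R^d."""
    n, d = samples.shape
    dists = np.linalg.norm(samples[:, None] - samples[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    rho = dists.min(axis=1)                                # 1-NN distances
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log unit-ball volume
    return d * np.mean(np.log(rho)) + log_vd + digamma(n) - digamma(1)

# Each sampled predictor f_i is identified with its outputs on probe
# inputs, so distances between rows act as function-space distances.
probes = np.linspace(-3.0, 3.0, 16)
outputs = np.stack([np.sin(w * probes) for w in np.random.rand(64)])
print("entropy estimate:", nn_entropy(outputs))
```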