In this paper, we study the problem of reconstructing dynamical systems from data without time labels. Data without time labels arise in many applications, such as molecular dynamics and single-cell RNA sequencing. Reconstruction of dynamical systems from time-sequence data has been studied extensively; however, these methods do not apply when time labels are unknown. Without time labels, sequence data become distribution data. Based on this observation, we propose to treat the data as samples from a probability distribution and to reconstruct the underlying dynamical system by minimizing a distribution loss, specifically the sliced Wasserstein distance. Extensive experimental results demonstrate the effectiveness of the proposed method.
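As a sketch of the loss in question (our notation; the paper's exact estimator may differ), the sliced Wasserstein distance averages one-dimensional Wasserstein distances between projections of the two distributions:
\[
\mathrm{SW}_2^2(\mu,\nu) \;=\; \int_{\mathbb{S}^{d-1}} W_2^2\big(\theta_\#\mu,\,\theta_\#\nu\big)\, d\theta ,
\]
where \(\theta_\#\mu\) denotes the law of \(\langle\theta, X\rangle\) for \(X\sim\mu\). Since one-dimensional \(W_2\) reduces to comparing sorted samples, the loss can be estimated cheaply from the unlabeled data and from trajectories simulated under a candidate system.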
Fog computing arises as a complement to cloud computing in which computing and storage are provided in a decentralized way, rather than through the centralized approach of the cloud paradigm. In addition, blockchain provides a decentralized and immutable ledger that can support the execution of arbitrary logic thanks to smart contracts. These facts suggest harnessing smart contracts on a blockchain as the basis for a decentralized, autonomous, and resilient orchestrator for the resources in the fog. However, the potentially vast number of geographically distributed fog nodes may threaten the feasibility of the orchestration. Moreover, fog nodes can exhibit highly dynamic workloads, which may lead the orchestrator to redistribute services among them. Thus, there is also a need to dynamically maintain the network connections to those services independently of their location. Software Defined Networking (SDN) can be integrated within the orchestrator to carry out seamless service management. To tackle both of these issues, the S-HIDRA architecture is proposed. It integrates SDN support within a blockchain-based orchestrator of container-based services for fog environments, in order to provide low network latency and high service availability. In addition, a domain-based architecture is outlined as a potential scenario to address the geographically distributed nature of fog environments. Results obtained from a proof-of-concept implementation validate the required functionality of S-HIDRA.
In this paper, we introduce an implicit staggered algorithm for the crystal plasticity finite element method (CPFEM) that makes use of dynamic relaxation at the constitutive integration level. An uncoupled version of the constitutive system consists of a multi-surface flow law complemented by an evolution law for the hardening variables. Since a saturation law is adopted for hardening, a sequence of nonlinear iterations followed by the solution of a linear system is feasible. To tie the constitutive unknowns together, the dynamic relaxation method is adopted. A Green-Naghdi plasticity model is adopted, based on the Hencky strain calculated using a [2/2] Pad\'e approximation. For the incompressible case, the approximation error is calculated exactly. An enhanced-assumed-strain (EAS) element technology is adopted, which was found to be especially suited to localization problems such as those resulting from crystal plasticity plane slipping. Analysis of the results shows a significant reduction of drift and well-defined localization without spurious modes or hourglassing.
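For reference, the [2/2] Pad\'e approximant of the matrix logarithm that underlies such a Hencky strain computation takes the standard form (our notation; the paper's exact variant may differ)
\[
\mathbf{E} \;=\; \tfrac{1}{2}\ln\mathbf{C} \;\approx\; \tfrac{3}{2}\,\big(\mathbf{C}^2-\mathbf{I}\big)\big(\mathbf{C}^2+4\,\mathbf{C}+\mathbf{I}\big)^{-1},
\]
obtained from the scalar [2/2] Pad\'e approximant \(\ln x \approx 3(x^2-1)/(x^2+4x+1)\) around \(x=1\), with \(\mathbf{C}=\mathbf{F}^{\mathsf{T}}\mathbf{F}\) the right Cauchy-Green tensor.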
We propose a novel algorithm for the support estimation of partially known Gaussian graphical models that incorporates prior information about the underlying graph. In contrast to classical approaches that provide a point estimate based on a maximum likelihood or a maximum a posteriori criterion using (simple) priors on the precision matrix, we consider a prior on the graph and rely on annealed Langevin diffusion to generate samples from the posterior distribution. Since the Langevin sampler requires access to the score function of the underlying graph prior, we use graph neural networks to effectively estimate the score from a graph dataset (either available beforehand or generated from a known distribution). Numerical experiments demonstrate the benefits of our approach.
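A generic annealed Langevin step of the kind referred to here reads (our notation; step sizes and the annealing schedule are assumptions)
\[
G_{k+1} \;=\; G_k \;+\; \frac{\epsilon_t}{2}\, s_\theta(G_k) \;+\; \sqrt{\epsilon_t}\; z_k, \qquad z_k \sim \mathcal{N}(0, I),
\]
where \(s_\theta\) approximates the relevant score (for posterior sampling, the likelihood score plus the graph-prior score learned by the graph neural network) and the step size \(\epsilon_t\) is decreased across noise levels, so that samples from the posterior over supports are obtained rather than a single point estimate.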
In this paper, we consider a discrete-time stochastic SIR model in which the transmission rate and the true number of infectious individuals are random and unobservable. An advantage of this model is that it permits us to account for random fluctuations in infectiousness and for non-detected infections. However, a difficulty arises because statistical inference has to be carried out in a partial information setting. We adopt a nested particle filtering approach to estimate the reproduction rate and the model parameters. As a case study, we apply our methodology to Austrian COVID-19 infection data. Moreover, we discuss forecasts and model tests.
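One common form of such a discrete-time stochastic SIR update is the following sketch (our notation; the paper's exact specification, in particular the law of the random transmission rate, may differ):
\[
S_{t+1} = S_t - B_t, \qquad I_{t+1} = I_t + B_t - C_t, \qquad R_{t+1} = R_t + C_t,
\]
\[
B_t \sim \mathrm{Binomial}\big(S_t,\, 1-e^{-\beta_t I_t/N}\big), \qquad C_t \sim \mathrm{Binomial}\big(I_t,\, \gamma\big),
\]
where the transmission rate \(\beta_t\) is a latent stochastic process and \(I_t\) is observed only indirectly through detected cases, which is what places inference in a partial information setting.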
We study the problem of parameter estimation for large exchangeable interacting particle systems when a sample of discrete observations from a single particle is available. We propose a novel method based on martingale estimating functions constructed by employing the eigenvalues and eigenfunctions of the generator of the mean field limit, where the law of the process is replaced by the (unique) invariant measure of the mean field dynamics. We then prove that our estimator is asymptotically unbiased and asymptotically normal as the number of observations and the number of particles tend to infinity, and we provide a rate of convergence towards the exact value of the parameters. Finally, we present several numerical experiments which show the accuracy of our estimator and corroborate our theoretical findings, even when the mean field dynamics exhibit more than one steady state.
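Such eigenfunction-based martingale estimating functions typically take the form (a standard construction, stated in our notation; details may differ from the paper's)
\[
G_n(\theta) \;=\; \sum_{i=0}^{n-1} \sum_{j=1}^{J} \beta_j\big(\theta, X_{t_i}\big) \Big[ \phi_j\big(X_{t_{i+1}};\theta\big) - e^{-\lambda_j(\theta)\Delta}\, \phi_j\big(X_{t_i};\theta\big) \Big],
\]
where \((\lambda_j,\phi_j)\) are eigenvalue-eigenfunction pairs of the generator, \(\Delta\) is the observation step, and the estimator is defined as a root of \(G_n(\theta)=0\); the martingale property of the bracketed increments is what drives the asymptotic normality.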
Motivated by a real failure dataset in a two-dimensional context, this paper presents an extension of the Markov modulated Poisson process (MMPP) to two dimensions. The one-dimensional MMPP has been proposed for modeling dependent and non-exponential inter-failure times (in contexts such as queuing, risk, or reliability, among others). The novel two-dimensional MMPP allows for dependence between the two sequences of inter-failure times, while marginally preserving the MMPP properties. The generalization is based on the Marshall-Olkin exponential distribution. Inference for the new model is undertaken through a method combining a matching-moments approach with an Approximate Bayesian Computation (ABC) algorithm. The performance of the method is shown on simulated and real datasets representing the times and distances covered between consecutive failures in a public transport company. For the real dataset, quantities of importance associated with the reliability of the system are estimated, such as the probabilities and expected number of failures at different times and distances covered by trains until the occurrence of a failure.
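For reference, the Marshall-Olkin bivariate exponential distribution on which the generalization is based has joint survival function
\[
\Pr(X_1 > x_1,\, X_2 > x_2) \;=\; \exp\big( -\lambda_1 x_1 - \lambda_2 x_2 - \lambda_{12}\max(x_1, x_2) \big), \qquad x_1, x_2 \ge 0,
\]
with exponential marginals of rates \(\lambda_1+\lambda_{12}\) and \(\lambda_2+\lambda_{12}\); the parameter \(\lambda_{12}\) induces the dependence between the two sequences of inter-failure times while keeping each marginal exponential.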
This paper is a direct follow-up of the author's recent paper. Here we continue to analyze approximation and recovery properties of systems satisfying a universal sampling discretization property and a special unconditionality property. In addition, we assume that the subspace spanned by our system satisfies some Nikol'skii-type inequalities. We concentrate on recovery with the error measured in the $L_p$ norm for $2\le p<\infty$. We apply a powerful nonlinear approximation method -- the Weak Orthogonal Matching Pursuit (WOMP), also known as the Weak Orthogonal Greedy Algorithm (WOGA). We establish that the WOMP based on good points for the $L_2$-universal discretization provides good recovery in the $L_p$ norm for $2\le p<\infty$. For our recovery algorithms we obtain both Lebesgue-type inequalities for individual functions and error bounds for special classes of multivariate functions. We thus combine two deep and powerful techniques -- Lebesgue-type inequalities for the WOMP and the theory of universal sampling discretization -- to obtain new results in sampling recovery.
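For orientation, the WOMP with weakness parameter $t \in (0,1]$ selects at step $m$ a dictionary element $g_m \in \mathcal{D}$ satisfying (standard formulation, in our notation)
\[
\big|\langle f_{m-1}, g_m \rangle\big| \;\ge\; t \sup_{g \in \mathcal{D}} \big|\langle f_{m-1}, g \rangle\big|,
\]
and then sets $f_m = f - P_m f$, where $P_m$ is the orthogonal projection onto $\mathrm{span}\{g_1,\dots,g_m\}$; in a sampling-recovery setting these inner products would be the discrete ones induced by the sampling points.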
In this paper, we outline a novel data-driven method for estimating functions in a multivariate nonparametric regression model based on adaptive knot selection for B-splines. The underlying idea of our approach to selecting knots is to apply the generalized lasso, since the knots of the B-spline basis can be seen as changes in the derivatives of the function to be estimated. The method is extended to functions of several variables by processing each dimension independently, thus reducing the problem to a univariate setting. The regularization parameters are chosen by means of a criterion based on the extended Bayesian information criterion (EBIC). The nonparametric estimator is obtained using a multivariate B-spline regression with the selected knots. Our procedure is validated through numerical experiments in which we vary the number of observations and the level of noise to investigate its robustness. The influence of observation sampling is also assessed, and our method is applied to a chemical system commonly used in geoscience. In each framework considered in this paper, our approach performs better than state-of-the-art methods. Our completely data-driven method is implemented in the glober R package, which is available on the Comprehensive R Archive Network (CRAN).
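The knot-selection step can be summarized as a generalized lasso problem of the form (a sketch in our notation)
\[
\hat{\beta} \;=\; \operatorname*{arg\,min}_{\beta} \; \tfrac{1}{2}\, \|y - X\beta\|_2^2 \;+\; \lambda\, \|D\beta\|_1,
\]
where $D$ is a difference-type penalty matrix acting on the coefficients; the nonzero entries of $D\hat{\beta}$ flag changes in the derivatives of the estimated function and hence the candidate knot locations, one dimension at a time.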
In this paper, we develop a novel neural network model for predicting the implied volatility surface. Prior financial domain knowledge is taken into account. A new activation function that incorporates the volatility smile is proposed and used for the hidden nodes that process the underlying asset price. In addition, financial conditions, such as the absence of arbitrage, the boundaries, and the asymptotic slope, are embedded into the loss function. This is one of the very first studies to propose a methodological framework that incorporates prior financial domain knowledge into both neural network architecture design and model training. The proposed model outperforms the benchmark models on option data on the S&P 500 index over 20 years. More importantly, the domain knowledge is satisfied empirically, showing that the model is consistent with existing financial theories and conditions related to the implied volatility surface.
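Schematically, the resulting training objective can be written as (a sketch; the decomposition into penalty terms and their weights are our assumptions, not the paper's exact formulation)
\[
\mathcal{L} \;=\; \frac{1}{n}\sum_{i=1}^{n} \big(\hat{\sigma}_i - \sigma_i\big)^2 \;+\; \lambda_1\, \mathcal{P}_{\mathrm{no\text{-}arb}} \;+\; \lambda_2\, \mathcal{P}_{\mathrm{bound}} \;+\; \lambda_3\, \mathcal{P}_{\mathrm{slope}},
\]
where $\hat{\sigma}_i$ is the predicted implied volatility and each $\mathcal{P}$ penalizes violations of the corresponding financial condition (absence of arbitrage, boundary behavior, and asymptotic slope of the surface).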
This paper does not describe a working system. Instead, it presents a single idea about representation which allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation, and capsules. GLOM answers the question: how can a neural network with a fixed architecture parse an image into a part-whole hierarchy that has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language.