We investigate the joint statistical estimation of several parameters for a stochastic differential equation driven by an additive fractional Brownian motion. Based on discrete-time observations of the model, we construct an estimator of the Hurst parameter, the diffusion parameter and the drift, the last of which lies in a parametrised family of coercive drift coefficients. Our procedure rests on the assumption that the stationary distributions of the SDE and of its increments identify the parameters of the model. Under this assumption, we prove consistency results and derive a rate of convergence for the estimator. Finally, we show that the identifiability assumption is satisfied for a family of fractional Ornstein-Uhlenbeck processes and illustrate our results with numerical experiments.
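As a rough illustration of how the Hurst and diffusion parameters can be read off discrete observations, the following Python sketch implements a classical quadratic-variation estimator based on second-order increments at two scales. This is a textbook device, not the estimator constructed in the paper, and the drift estimation step is omitted entirely.

```python
import numpy as np

def estimate_hurst_sigma(x, dt):
    """Estimate (H, sigma) from a discretely observed path using
    second-order increments at two scales (generalised quadratic
    variations). Illustrative only; not the paper's estimator."""
    # Second-order increments at step dt and step 2*dt.
    d1 = x[2:] - 2 * x[1:-1] + x[:-2]
    d2 = x[4:] - 2 * x[2:-2] + x[:-4]
    v1, v2 = np.mean(d1**2), np.mean(d2**2)
    # Var(second-order increment at step h) = sigma^2 * h^{2H} * (4 - 2^{2H}),
    # so the ratio v2 / v1 is approximately 2^{2H}.
    H = 0.5 * np.log2(v2 / v1)
    sigma2 = v1 / (dt**(2 * H) * (4.0 - 2.0**(2 * H)))
    return H, np.sqrt(sigma2)

# Usage on a plain Brownian path, for which (H, sigma) = (0.5, 1.0):
rng = np.random.default_rng(0)
dt = 1e-3
x = np.cumsum(rng.normal(scale=np.sqrt(dt), size=100_000))
print(estimate_hurst_sigma(x, dt))  # approximately (0.5, 1.0)
```

For an SDE with drift, the drift contributes only O(dt^2) to these increments, which is why such estimators remain usable at small step sizes.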
Stochastic multi-scale modeling and simulation of nonlinear thermo-mechanical problems for composite materials with complicated random microstructures remains a challenging issue. In this paper, we develop a novel statistical higher-order multi-scale (SHOMS) method for the nonlinear thermo-mechanical simulation of random composite materials, designed to overcome the prohibitive cost of computations that resolve both the macro-scale and the micro-scale. By virtue of statistical multi-scale asymptotic analysis and the Taylor series method, the SHOMS computational model is rigorously derived for accurately analyzing nonlinear thermo-mechanical responses of random composite materials at both the macro-scale and the micro-scale. Moreover, a local error analysis of SHOMS solutions in the point-wise sense shows that the higher-order asymptotic correction terms in the SHOMS computational model are indispensable for preserving local conservation of energy and momentum. A space-time multi-scale numerical algorithm with off-line and on-line stages is then designed to efficiently simulate the nonlinear thermo-mechanical behavior of random composite materials. Finally, extensive numerical experiments are presented to gauge the efficiency and accuracy of the proposed SHOMS approach.
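To convey the flavor of the off-line/on-line split, here is a deliberately simplified Python sketch for linear 1D heat conduction: random two-phase cells are homogenized off-line (in 1D, the effective conductivity of layers in series is exactly the harmonic mean), and the macro-scale problem is solved on-line with the averaged coefficient. This is far from the nonlinear thermo-mechanical SHOMS model with higher-order correctors; all names and data here are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Off-line stage: sample random two-phase microstructures and
# homogenize each sample cell. In 1D (layers in series) the effective
# conductivity is the harmonic mean of the layer conductivities.
def effective_conductivity(k_phase=(1.0, 50.0), n_layers=200, p=0.3):
    phases = rng.random(n_layers) < p            # random phase layout
    k = np.where(phases, k_phase[1], k_phase[0])
    return 1.0 / np.mean(1.0 / k)                # harmonic mean

k_eff = np.mean([effective_conductivity() for _ in range(500)])

# --- On-line stage: solve the macro-scale problem -(k_eff u')' = f
# on (0, 1) with u(0) = u(1) = 0 by central finite differences.
n = 100
h = 1.0 / n
f = np.ones(n - 1)                               # unit heat source
A = (k_eff / h**2) * (2 * np.eye(n - 1)
                      - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1))
u = np.linalg.solve(A, f)
print(f"k_eff = {k_eff:.3f}, max temperature = {u.max():.4f}")
```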
This paper addresses the overwhelming computational resources needed by standard numerical approaches to simulate architected materials. These multiscale heterogeneous lattice structures have attracted intense interest alongside advances in additive manufacturing, as they offer, among many other advantages, excellent stiffness-to-weight ratios. We develop here a dedicated HPC solver that exploits the specific nature of the underlying problem in order to drastically reduce the computational costs (memory and time) of the full fine-scale analysis of lattice structures. Our purpose is to take advantage of the natural domain decomposition into cells and, even more importantly, of the geometrical and mechanical similarities among cells. Our solver consists of a so-called inexact FETI-DP method in which the local, cell-wise operators and solutions are approximated with reduced-order modeling techniques. Instead of treating every cell independently, we end up with only a few principal local problems to solve and use the corresponding principal cell-wise operators to approximate all the others. The result is a scalable algorithm that saves numerous local factorizations. Our solver is applied to the isogeometric analysis of lattices built by spline composition, which offers the opportunity to compute the reduced basis with macro-scale data, thereby making our method multiscale and matrix-free as well. The solver is tested on various 2D and 3D analyses. It shows major gains with respect to black-box solvers; in particular, problems with several million degrees of freedom can be solved on a simple computer within a few minutes.
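The reuse of local factorizations across similar cells can be illustrated with a toy Python sketch: cell operators are grouped by similarity, one representative per group is factorized, and that factorization is reused for every cell in the group. This captures only the basic idea; the actual solver embeds reduced-order local models inside an inexact FETI-DP iteration.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def solve_cells_with_shared_factorizations(cell_matrices, cell_rhs, tol=1e-8):
    """Solve K_c u_c = f_c for many cells, factorizing only one
    representative SPD matrix per group of (nearly) identical cells."""
    reps, factors, group_of = [], [], []
    for K in cell_matrices:
        # Assign the cell to an existing group if its operator matches
        # a representative; otherwise open a new group and factorize.
        for g, R in enumerate(reps):
            if np.linalg.norm(K - R) <= tol * np.linalg.norm(R):
                group_of.append(g)
                break
        else:
            reps.append(K)
            factors.append(cho_factor(K))   # one factorization per group
            group_of.append(len(reps) - 1)
    return [cho_solve(factors[g], f) for g, f in zip(group_of, cell_rhs)]

# Usage: 1000 cells but only 3 distinct local stiffness operators.
base = [np.eye(8) * s + 0.1 for s in (1.0, 2.0, 5.0)]
cells = [base[i % 3] for i in range(1000)]
rhs = [np.ones(8) for _ in cells]
sols = solve_cells_with_shared_factorizations(cells, rhs)
```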
In the logic synthesis stage, structural transformations in the synthesis tool must be combined into optimization sequences that act on the circuit to meet specified area and delay targets. However, running logic synthesis optimization sequences is time-consuming, and predicting the quality of results (QoR) of an optimization sequence for a circuit can help engineers find a better sequence faster. In this work, we propose a deep learning method to predict the QoR of unseen circuit/optimization-sequence pairs. Specifically, the structural transformations are translated into vectors by embedding methods, and a Transformer, an advanced natural language processing (NLP) architecture, is used to extract features of the optimization sequences. In addition, to allow the model's predictions to generalize from circuit to circuit, each circuit is represented as a graph encoded by an adjacency matrix and a feature matrix, and graph neural networks (GNNs) are used to extract its structural features. We combine the Transformer with three typical GNNs in a joint learning scheme for QoR prediction on unseen circuit/optimization-sequence pairs and benchmark the resulting methods. The experimental results show that the joint learning of the Transformer and GraphSAGE gives the best results, with a mean absolute error (MAE) of 0.412 on the predictions.
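A minimal PyTorch sketch of such a joint architecture is given below: a Transformer encoder embeds the optimization sequence, a hand-rolled GraphSAGE-style layer (mean aggregation over neighbours) encodes the circuit graph, and the pooled features are concatenated and regressed to a scalar QoR. All dimensions and module choices here are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SageLayer(nn.Module):
    """Minimal GraphSAGE-style layer: concatenate each node's feature
    with the mean of its neighbours, then apply a linear map."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(2 * d_in, d_out)
    def forward(self, x, adj):                 # adj: dense adjacency matrix
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        neigh = adj @ x / deg                  # mean over neighbours
        return torch.relu(self.lin(torch.cat([x, neigh], dim=-1)))

class QoRPredictor(nn.Module):
    """Joint model: a Transformer encodes the optimization sequence,
    a two-layer GNN encodes the circuit graph; the pooled features
    are concatenated and regressed to a scalar QoR value."""
    def __init__(self, n_transforms, d=64):
        super().__init__()
        self.embed = nn.Embedding(n_transforms, d)
        enc = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.seq_enc = nn.TransformerEncoder(enc, num_layers=2)
        self.g1, self.g2 = SageLayer(d, d), SageLayer(d, d)
        self.head = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))
    def forward(self, seq, node_feats, adj):
        s = self.seq_enc(self.embed(seq)).mean(1)            # sequence feature
        h = self.g2(self.g1(node_feats, adj), adj).mean(0, keepdim=True)
        return self.head(torch.cat([s, h], dim=-1)).squeeze(-1)

# Usage with toy shapes: one sequence of 15 transforms, a 10-node circuit.
model = QoRPredictor(n_transforms=20)
seq = torch.randint(0, 20, (1, 15))
node_feats, adj = torch.randn(10, 64), (torch.rand(10, 10) > 0.7).float()
print(model(seq, node_feats, adj))
```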
We consider a general linear parabolic problem with extended time boundary conditions (including initial-value and periodic problems), and approximate it by the implicit Euler scheme in time and the gradient discretisation method in space; the latter is in fact a class of methods that includes conforming and nonconforming finite elements, discontinuous Galerkin methods, and several others. The main result is an error estimate that holds without any supplementary regularity hypothesis on the solution. It states that the approximation error has the same order as the sum of the interpolation error and the conformity error. The proof relies on an inf-sup inequality in Hilbert spaces that can be used in both the continuous and the discrete frameworks. The error estimate is illustrated by numerical examples with low-regularity solutions.
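For concreteness, a minimal Python instance of the time discretisation is sketched below: implicit Euler combined with conforming P1 finite elements (one member of the gradient discretisation family) for the 1D heat equation with homogeneous Dirichlet conditions. Mesh size, time step, and data are arbitrary choices of ours, not taken from the paper's examples.

```python
import numpy as np

# Implicit Euler in time, P1 finite elements in space, for the model
# problem u_t - u_xx = f on (0, 1) with u(0, t) = u(1, t) = 0.
n, steps = 50, 100
h, dt = 1.0 / n, 1.0 / 100
x = np.linspace(0, 1, n + 1)[1:-1]             # interior nodes

# P1 stiffness and mass matrices on a uniform mesh (tridiagonal).
K = (1 / h) * (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1))
M = (h / 6) * (4 * np.eye(n - 1) + np.eye(n - 1, k=1) + np.eye(n - 1, k=-1))

u = np.sin(np.pi * x)                          # initial condition
f = np.ones(n - 1)                             # constant source term
A = M + dt * K                                 # implicit Euler system matrix
for _ in range(steps):
    u = np.linalg.solve(A, M @ (u + dt * f))   # (M + dt K) u^{m+1} = M (u^m + dt f)
print(u.max())
```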
Human infants acquire their verbal lexicon with minimal prior knowledge of language, relying on the statistical properties of phonological distributions and the co-occurrence of other sensory stimuli. This study proposes a novel, fully unsupervised learning method for discovering speech units that uses phonological information as a distributional cue and object information as a co-occurrence cue. The proposed method acquires words and phonemes from speech signals by unsupervised learning and can utilize object information from multiple modalities (vision, touch, and audition) simultaneously. It builds on the nonparametric Bayesian double articulation analyzer (NPB-DAA), which discovers phonemes and words from phonological features, and on multimodal latent Dirichlet allocation (MLDA), which categorizes multimodal information obtained from objects. In experiments, the proposed method showed higher word discovery performance than baseline methods. Words that expressed the characteristics of objects (i.e., words corresponding to nouns and adjectives) were segmented accurately. Furthermore, we examined how learning performance is affected by the weight given to linguistic information: increasing the weight of the word modality further improved performance relative to the fixed-weight condition.
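The role of the co-occurrence cue can be mimicked with a much simpler model. The sketch below concatenates bag-of-feature histograms from three toy modalities with word counts and feeds them to a standard LDA (scikit-learn), up-weighting the word modality by replicating its counts. This is a drastic simplification of the NPB-DAA + MLDA pipeline, intended only to illustrate modality weighting; all data are synthetic.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Toy multimodal data: each object yields bag-of-feature counts from
# three modalities (vision, tactile, audio) plus co-occurring word counts.
# Concatenating the histograms lets a single LDA capture cross-modal
# co-occurrence, a crude stand-in for the full MLDA model.
n_objects = 60
dims = {"vision": 20, "tactile": 10, "audio": 10, "words": 15}
true_cat = rng.integers(0, 3, n_objects)        # 3 hidden object categories
profiles = rng.dirichlet(np.ones(sum(dims.values())), size=3)
counts = np.vstack([rng.multinomial(200, profiles[c]) for c in true_cat])

# Up-weighting the word modality (integer replication of its counts)
# mimics increasing the importance of linguistic information.
w = 3
counts[:, -dims["words"]:] *= w

lda = LatentDirichletAllocation(n_components=3, random_state=0)
categories = lda.fit_transform(counts).argmax(axis=1)
print(np.bincount(categories))                  # discovered category sizes
```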
The distributed task allocation problem, one of the most interesting challenges in distributed optimization, has recently received considerable research attention. Previous works mainly focused on task allocation in a population of individuals with no constraints on the amount of task each individual can afford; this condition, however, does not always hold. In this paper, we study the task allocation problem with allocation constraints in a game-theoretic framework. We assume that individuals can afford different amounts of task and that the cost functions are convex. To investigate the problem in the framework of population games, we construct a potential game and calculate the fitness function of each individual. We prove that when the Nash equilibrium of the potential game lies in the feasible set of the limited task allocation problem, it is the unique globally optimal solution; otherwise, we derive the unique globally optimal solution analytically. In addition, to confirm our theoretical results, we consider exponential and quadratic cost functions for each agent. Two algorithms based on these representative cost functions are proposed to numerically seek the optimal solution of the limited task allocation problem. We further perform Monte Carlo simulations, whose results agree with our analytical calculations.
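For convex costs with box constraints, the optimal allocation equalizes marginal costs up to clipping at the capacity bounds (a standard KKT / water-filling argument). The sketch below implements this for a quadratic cost family by bisecting on the common marginal cost; it illustrates the structure of the optimum, not the paper's population-game dynamics or its two algorithms.

```python
import numpy as np

def allocate(total, caps, a):
    """Minimize sum_i a_i * x_i^2 subject to sum_i x_i = total and
    0 <= x_i <= caps_i. At the optimum the marginal costs 2*a_i*x_i
    are equal (KKT) except where clipped at the capacity bounds, so
    we bisect on the common marginal cost lam."""
    assert total <= caps.sum(), "infeasible: total exceeds capacity"
    def assigned(lam):                       # x_i(lam) = clip(lam / (2 a_i))
        return np.clip(lam / (2 * a), 0.0, caps)
    lo, hi = 0.0, 2 * a.max() * caps.max()   # assigned(hi) saturates all caps
    for _ in range(100):                     # bisection on lam
        mid = 0.5 * (lo + hi)
        if assigned(mid).sum() < total:
            lo = mid
        else:
            hi = mid
    return assigned(0.5 * (lo + hi))

caps = np.array([1.0, 2.0, 3.0])             # individual capacities
a = np.array([1.0, 2.0, 4.0])                # quadratic cost coefficients
x = allocate(4.0, caps, a)
print(x, x.sum())                            # allocation sums to the total
```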
The paper analyses the properties of a large class of "path-based" Data Envelopment Analysis (DEA) models through a unifying general scheme. The scheme includes the well-known oriented radial models, the hyperbolic distance function model, and the directional distance function models, and it even permits generalisations of these. The modelling is not constrained to non-negative data and is flexible enough to accommodate variants of standard models over arbitrary data. Mathematical tools developed in the paper allow a systematic analysis of the models from the point of view of ten desirable properties. It is shown that some of the properties are satisfied (resp., fail) for all models in the general scheme, while others exhibit more nuanced behaviour and must be assessed individually for each model. Our results can help researchers and practitioners navigate among the different models and apply them to mixed data.
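As a concrete instance of one model in this family, the sketch below evaluates a basic constant-returns-to-scale directional distance function score as a linear program with scipy. The data and the choice of direction are illustrative, and the paper's general scheme covers far more than this textbook model.

```python
import numpy as np
from scipy.optimize import linprog

def ddf_score(X, Y, j, gx, gy):
    """Directional distance function score of DMU j under constant
    returns to scale: max beta s.t. X @ lam <= x_j - beta * gx,
    Y @ lam >= y_j + beta * gy, lam >= 0."""
    m, n = X.shape                  # m inputs, n DMUs (columns)
    # Decision variables z = [beta, lam_1, ..., lam_n]; maximize beta.
    c = np.concatenate(([-1.0], np.zeros(n)))
    A_ub = np.block([[gx.reshape(-1, 1), X],
                     [gy.reshape(-1, 1), -Y]])
    b_ub = np.concatenate((X[:, j], -Y[:, j]))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Toy data: 2 inputs, 1 output, 4 DMUs; direction g = (x_j, y_j).
X = np.array([[2.0, 3.0, 4.0, 5.0],
              [3.0, 2.0, 5.0, 4.0]])
Y = np.array([[1.0, 1.0, 2.0, 1.5]])
for j in range(4):
    print(j, round(ddf_score(X, Y, j, gx=X[:, j], gy=Y[:, j]), 3))
```

Efficient units receive a score of zero; positive scores measure the feasible simultaneous input contraction and output expansion along the chosen direction.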
Challenging combinatorial optimization problems are ubiquitous in science and engineering. Several quantum methods for optimization have recently been developed, in different settings that include both exact and approximate solvers. Addressing this field of research, this manuscript has three distinct purposes. First, we present an intuitive method for synthesizing and analyzing discrete (i.e., integer-based) optimization problems, wherein the problem and the corresponding algorithmic primitives are expressed in a discrete quantum intermediate representation (DQIR) that is encoding-independent. This compact representation often allows for more efficient problem compilation, automated analyses of different encoding choices, easier interpretability, more complex runtime procedures, and richer programmability than previous approaches, which we demonstrate with a number of examples. Second, we perform numerical studies comparing several qubit encodings; the results exhibit a number of preliminary trends that help guide the choice of encoding for a particular hardware platform and a particular problem and algorithm. Our study includes problems related to graph coloring, the traveling salesperson problem, factory/machine scheduling, financial portfolio rebalancing, and integer linear programming. Third, we design low-depth graph-derived partial mixers (GDPMs) for quantum variables with up to 16 levels, demonstrating that compact (binary) encodings are more amenable to QAOA than previously understood. We expect this toolkit of programming abstractions and low-level building blocks to aid in designing quantum algorithms for discrete combinatorial problems.
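To make the algorithmic primitives concrete, here is a generic statevector QAOA for MaxCut on a toy graph in plain numpy (a cost layer followed by single-qubit RX mixers on each qubit). It illustrates the kind of circuit being optimized, not the DQIR toolchain or the graph-derived partial mixers designed in the paper.

```python
import numpy as np
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

# Diagonal cost: number of cut edges for each computational basis state.
bits = np.array(list(product([0, 1], repeat=n)))
cost = np.array([sum(b[i] != b[j] for i, j in edges) for b in bits], float)

def qaoa_expectation(gammas, betas):
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)     # |+>^n start state
    for g, b in zip(gammas, betas):
        psi *= np.exp(-1j * g * cost)                   # cost unitary
        for q in range(n):                              # RX(2b) on qubit q
            psi = psi.reshape(2**q, 2, -1)
            top, bot = psi[:, 0, :].copy(), psi[:, 1, :].copy()
            psi[:, 0, :] = np.cos(b) * top - 1j * np.sin(b) * bot
            psi[:, 1, :] = np.cos(b) * bot - 1j * np.sin(b) * top
            psi = psi.reshape(-1)
    return float(np.real(np.conj(psi) @ (cost * psi)))

# One QAOA layer with a coarse grid search over (gamma, beta).
grid = np.linspace(0, np.pi, 40)
best = max(((g, b) for g in grid for b in grid),
           key=lambda p: qaoa_expectation([p[0]], [p[1]]))
print(best, qaoa_expectation([best[0]], [best[1]]))
```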
Because of their computational complexity, graph cuts for cluster detection and identification are mostly used in the form of convex relaxations. We propose instead to use the original graph cuts, such as the Ratio, Normalized, or Cheeger Cut, to detect clusters in weighted undirected graphs by restricting the graph cut minimization to the subset of $st$-MinCut partitions. By incorporating a vertex selection technique and restricting the optimization to tightly connected clusters, we combine the efficient computability of $st$-MinCuts and the intrinsic properties of Gomory-Hu trees with the cut quality of the original graph cuts, leading to a runtime that is linear in the number of vertices and quadratic in the number of edges. Already in simple scenarios, the resulting algorithm, Xist, empirically approximates graph cut values better than spectral clustering and comparable algorithms, even on large network datasets. We showcase its applicability by segmenting images from cell biology and provide empirical studies of runtime and classification rate.
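A simplified illustration of the restricted search space in Python with networkx: minimize the Normalized Cut over $st$-MinCut partitions only, here by fixing s and sweeping t. The actual Xist algorithm instead exploits Gomory-Hu trees and a vertex selection technique; this sketch conveys only the core restriction.

```python
import networkx as nx

def ncut_value(G, A):
    """Normalized Cut value of the bipartition (A, V minus A)."""
    A = set(A)
    cut = sum(d["weight"] for u, v, d in G.edges(data=True)
              if (u in A) != (v in A))
    vol_A = sum(d for _, d in G.degree(A, weight="weight"))
    vol_B = 2 * G.size(weight="weight") - vol_A
    if vol_A == 0 or vol_B == 0:
        return float("inf")
    return cut * (1.0 / vol_A + 1.0 / vol_B)

def best_st_mincut_partition(G):
    """Minimize the Normalized Cut over st-MinCut partitions only."""
    nodes = list(G)
    s, best_val, best_part = nodes[0], float("inf"), None
    for t in nodes[1:]:                       # n - 1 candidate terminals
        _, (A, _) = nx.minimum_cut(G, s, t, capacity="weight")
        val = ncut_value(G, A)
        if val < best_val:
            best_val, best_part = val, set(A)
    return best_val, best_part

# Usage: two dense triangles joined by one light edge.
G = nx.Graph()
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    G.add_edge(i, j, weight=1.0)
G.add_edge(2, 3, weight=0.1)
print(best_st_mincut_partition(G))  # expected split: {0, 1, 2} vs {3, 4, 5}
```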
We propose a numerically efficient method for evaluating the random-coding union bound with parameter $s$ on the error probability achievable in the finite-blocklength regime by a pilot-assisted transmission scheme that employs Gaussian codebooks and operates over a memoryless block-fading channel. Our method relies on the saddlepoint approximation, which, unlike in previous results reported for similar scenarios, is performed with respect to the number of fading blocks (i.e., diversity branches) spanned by each codeword rather than the number of channel uses per block. This approach avoids a costly numerical averaging of the error probability over the realizations of the fading process and of its pilot-based estimate at the receiver, and it significantly reduces the number of channel realizations required to estimate the error probability accurately. Our numerical experiments, for both single-antenna communication links and massive multiple-input multiple-output (MIMO) networks, show that when two or more diversity branches are available, the saddlepoint approximation with respect to the number of fading blocks estimates the error probability accurately while requiring about two orders of magnitude fewer Monte Carlo samples than the saddlepoint approximation with respect to the number of channel uses per block.
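The saddlepoint machinery itself is standard and can be illustrated on a toy i.i.d. sum where the exact tail is known. The Python sketch below applies the Lugannani-Rice formula to Exponential(1) summands; in the paper, the analogous expansion is taken across fading blocks rather than channel uses, within the random-coding union bound.

```python
import numpy as np
from scipy import stats

# Lugannani-Rice saddlepoint approximation of the tail probability
# P(S_n >= n*y) for a sum of n i.i.d. variables with cumulant
# generating function K. Toy example: Exponential(1) summands, for
# which the exact answer is a Gamma tail.
def K(s):   return -np.log(1.0 - s)          # CGF of Exp(1), valid for s < 1
def dK(s):  return 1.0 / (1.0 - s)
def d2K(s): return 1.0 / (1.0 - s) ** 2

def saddlepoint_tail(n, y):
    s_hat = 1.0 - 1.0 / y                    # solves K'(s) = y in closed form
    w = np.sign(s_hat) * np.sqrt(2 * n * (s_hat * y - K(s_hat)))
    u = s_hat * np.sqrt(n * d2K(s_hat))
    return stats.norm.sf(w) + stats.norm.pdf(w) * (1.0 / u - 1.0 / w)

n, y = 10, 2.0
exact = stats.gamma.sf(n * y, a=n)           # S_n ~ Gamma(n, 1)
print(saddlepoint_tail(n, y), exact)         # close agreement even at n = 10
```

A single saddlepoint evaluation replaces a Monte Carlo average over the summands, which mirrors the sample-count savings reported above.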