
Model-free data-driven computational mechanics, first proposed by Kirchdoerfer and Ortiz, replaces phenomenological models with numerical simulations based on sample data sets in strain-stress space. Recent literature has extended the approach to inelastic problems using structured data sets, tangent-space information, and transition rules. From an application perspective, covering the qualified data states and calculating the corresponding tangent space are crucial. In this respect, material symmetry significantly helps to reduce the amount of necessary data. This study applies the data-driven paradigm to elasto-plasticity with isotropic hardening. We formulate our approach in Haigh-Westergaard coordinates, which provide information on the underlying material yield surface. Based on this, we use a combined tension-torsion test to cover the yield surface and a single tensile test to calculate the corresponding tangent space. The resulting data-driven method minimizes a distance over the Haigh-Westergaard space, augmented with directions in the tangent space, subject to compatibility and equilibrium constraints.
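As a rough illustration of the coordinate system used above (a minimal sketch, not the authors' implementation), the following Python converts a Cauchy stress tensor to Haigh-Westergaard coordinates via the standard stress invariants; the function name and the uniaxial-tension example are ours.

```python
import numpy as np

def haigh_westergaard(sigma):
    """Map a 3x3 Cauchy stress tensor to Haigh-Westergaard coordinates.

    xi    : hydrostatic coordinate, I1 / sqrt(3)
    rho   : deviatoric radius, sqrt(2 * J2)
    theta : Lode angle, from cos(3*theta) = (3*sqrt(3)/2) * J3 / J2^(3/2)
    """
    I1 = np.trace(sigma)
    s = sigma - I1 / 3.0 * np.eye(3)       # deviatoric stress
    J2 = 0.5 * np.tensordot(s, s)          # second deviatoric invariant, s:s/2
    J3 = np.linalg.det(s)                  # third deviatoric invariant
    xi = I1 / np.sqrt(3.0)
    rho = np.sqrt(2.0 * J2)
    # Clip for numerical safety before taking the arccos.
    cos3t = np.clip(1.5 * np.sqrt(3.0) * J3 / J2**1.5, -1.0, 1.0)
    theta = np.arccos(cos3t) / 3.0
    return xi, rho, theta

# Example: uniaxial tension of 100 MPa (theta should come out as 0).
sigma = np.diag([100.0, 0.0, 0.0])
print(haigh_westergaard(sigma))
```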

Related Content

Data-driven practice means collecting data (the data here must be large in volume, comprehensive, fine-grained, and timely), organizing the data into information flows, and, when making decisions or optimizing products, operations, and the like, distilling and summarizing these information flows according to different needs, so that scientific action is taken with the support, or under the guidance, of the data.

Time series often reflect variation associated with other related variables. Controlling for the effect of these variables is useful when modeling or analyzing the time series. We introduce a novel approach to normalizing time series data conditional on a set of covariates. We do this by modeling the conditional mean and the conditional variance of the time series with generalized additive models using a set of covariates. The conditional mean and variance are then used to normalize the time series. We illustrate the use of conditionally normalized series in two applications involving river network data. First, we show how these normalized time series can be used to impute missing values in the data. Second, we show how the normalized series can be used to estimate the conditional autocorrelation function and conditional cross-correlation functions via additive models. Finally, we use the conditional cross-correlations to estimate the time it takes water to flow between two locations in a river network.
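A minimal sketch of the normalization idea, on simulated data rather than the river network application, using the pygam library for the additive models; modeling the conditional variance through the log of the squared residuals is one simple choice, not necessarily the authors' exact formulation.

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)

# Toy data: y depends on two covariates, with covariate-dependent noise.
n = 2000
X = rng.uniform(0.0, 1.0, size=(n, 2))
mu_true = np.sin(2 * np.pi * X[:, 0]) + X[:, 1]
sd_true = 0.2 + 0.5 * X[:, 1]
y = mu_true + sd_true * rng.standard_normal(n)

# Stage 1: conditional mean via an additive model in the covariates.
gam_mu = LinearGAM(s(0) + s(1)).fit(X, y)
mu_hat = gam_mu.predict(X)

# Stage 2: conditional variance, modeled through the log of the squared
# residuals so the back-transformed variance estimate stays positive.
log_r2 = np.log((y - mu_hat) ** 2 + 1e-12)
gam_var = LinearGAM(s(0) + s(1)).fit(X, log_r2)
sd_hat = np.sqrt(np.exp(gam_var.predict(X)))

# Conditionally normalized series: roughly mean zero, unit variance given X.
z = (y - mu_hat) / sd_hat
print(z.mean(), z.std())
```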

Wearable devices permit the continuous monitoring of biological processes, such as blood glucose metabolism, and of behavior, such as sleep quality and physical activity. The monitoring often occurs in epochs of 60 seconds over multiple days, resulting in high-dimensional longitudinal curves that are best described and analyzed as functional data. From this perspective, the functional data are smooth latent functions observed at discrete time intervals and corrupted by homoscedastic white noise. However, the assumption of homoscedastic errors may not be appropriate in this setting because the devices collect the data serially. While researchers have previously addressed measurement error in error-prone scalar covariates, less work has been done on correcting measurement error in high-dimensional longitudinal curves prone to heteroscedastic errors. We present two new methods for correcting measurement error in longitudinal functional curves with complex measurement error structures in multi-level generalized functional linear regression models. These methods are based on two-stage scalable regression calibration. We assume that the distributions of the scalar responses and of the surrogate measures prone to heteroscedastic errors both belong to the exponential family, and that the measurement errors follow Gaussian processes. In simulations and sensitivity analyses, we establish finite-sample properties of these methods. In our simulations, both regression calibration methods performed better than estimators based on averaging the longitudinal functional data or on using observations from a single day. We also apply the methods to assess the relationship between physical activity and type 2 diabetes in community-dwelling adults in the United States who participated in the National Health and Nutrition Examination Survey.
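The following is a deliberately simplified, hypothetical sketch of two-stage regression calibration on simulated curves: stage one shrinks per-subject day averages toward the population mean in proportion to the estimated noise share, and stage two plugs the calibrated curves into a (flat-coefficient) functional logistic regression. It is not the authors' estimator, which handles full multi-level generalized functional models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, T, J = 300, 100, 4              # subjects, grid points, repeated days
t = np.linspace(0.0, 1.0, T)

# Latent smooth activity curves and a binary health outcome.
a = rng.uniform(-1.0, 1.0, n)
X = a[:, None] * np.sin(2 * np.pi * t) + a[:, None]
eta = 3.0 * X.mean(axis=1)         # functional linear predictor (flat beta)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

# Surrogates: repeated curves with heteroscedastic (time-varying) noise.
sd_t = 0.3 + 0.7 * t
W = X[:, None, :] + rng.standard_normal((n, J, T)) * sd_t

# Stage 1 (calibration): average across days, then shrink toward the
# population mean in proportion to the estimated noise-to-total variance.
Wbar = W.mean(axis=1)
noise_var = W.var(axis=1, ddof=1).mean(axis=0) / J   # var of the day-mean
total_var = Wbar.var(axis=0, ddof=1)
shrink = np.clip(1.0 - noise_var / total_var, 0.0, 1.0)
X_cal = Wbar.mean(axis=0) + shrink * (Wbar - Wbar.mean(axis=0))

# Stage 2: plug calibrated curves into a flat-coefficient functional
# logistic regression via numerical integration over t.
Z = X_cal.mean(axis=1, keepdims=True)
fit = LogisticRegression().fit(Z, y)
print(fit.coef_)
```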

Visuotactile sensing technology has received much attention in recent years. This article proposes a feature detection method, applicable to visuotactile sensors based on continuous marker patterns (CMP), for measuring 3-D deformation. First, we construct a feature model of checkerboard-like corners under contact deformation and design a novel double-layer circular sampler. Then, we derive judging criteria and a response function for corner features by analyzing the amplitude-frequency characteristics and circular cross-correlation behavior of the sampled signals. The proposed feature detection algorithm fully exploits the boundary characteristics retained by corners under geometric distortion, enabling reliable detection at low computational cost. Experimental results show that the proposed method has significant advantages in real-time performance and robustness. Finally, we achieve high-density 3-D contact deformation visualization based on this detection method. The technique clearly records the process of contact deformation, enabling inverse sensing of dynamic contact processes.
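One plausible reading of the double-layer circular sampler and the two criteria (a minimal sketch, not the paper's algorithm; the radii, sample counts, and response formula are our assumptions): a checkerboard corner concentrates angular-signal energy in the second harmonic, and the two rings should agree under circular cross-correlation.

```python
import numpy as np

def ring_signal(img, cx, cy, r, n=64):
    """Sample image intensities on a circle of radius r (bilinear)."""
    ang = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    x, y = cx + r * np.cos(ang), cy + r * np.sin(ang)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x0 + 1]
            + (1 - fx) * fy * img[y0 + 1, x0] + fx * fy * img[y0 + 1, x0 + 1])

def corner_response(img, cx, cy, r1=3.0, r2=6.0):
    """Checkerboard-corner response from a double-layer circular sampler.

    A corner shows two dark and two bright sectors, i.e. a dominant
    second harmonic in the angular signal, and the two rings agree up to
    rotation, i.e. a high circular cross-correlation peak.
    """
    s1 = ring_signal(img, cx, cy, r1); s1 = s1 - s1.mean()
    s2 = ring_signal(img, cx, cy, r2); s2 = s2 - s2.mean()
    F1, F2 = np.fft.rfft(s1), np.fft.rfft(s2)
    amp = np.abs(F1)
    harm2 = amp[2] / (amp[1:].sum() + 1e-9)      # second-harmonic dominance
    xcorr = np.fft.irfft(F1 * np.conj(F2))       # circular cross-correlation
    agree = xcorr.max() / (np.linalg.norm(s1) * np.linalg.norm(s2) + 1e-9)
    return harm2 * agree

# Synthetic checkerboard: strong response at the corner, none inside a block.
yy, xx = np.mgrid[0:32, 0:32]
img = ((xx < 16) ^ (yy < 16)).astype(float)
print(corner_response(img, 16.0, 16.0), corner_response(img, 8.0, 8.0))
```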

Inversion for parameters of a physical process typically requires taking expensive measurements, and the task of finding an optimal set of measurements is known as the optimal design problem. Surprisingly, measurement locations in optimal designs are sometimes extremely clustered, and researchers often avoid measurement clusterization by modifying the optimal design problem. We consider a certain flavor of the optimal design problem, based on the Bayesian D-optimality criterion, and suggest an analytically tractable model for D-optimal designs in Bayesian linear inverse problems over Hilbert spaces. We demonstrate that measurement clusterization is a generic property of D-optimal designs, and prove that correlated noise between measurements mitigates clusterization. We also give a full characterization of D-optimal designs under our model: we prove that D-optimal designs uniformly reduce uncertainty in a select subset of prior covariance eigenvectors. Finally, we show how measurement clusterization follows from this characterization and the pigeonhole principle.
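To make the criterion concrete, here is a small greedy sketch of Bayesian D-optimal measurement selection for a linear Gaussian inverse problem; the greedy heuristic and the toy smooth prior are our assumptions, not the paper's analytical model, but clustered selections often emerge in exactly this setting.

```python
import numpy as np

def greedy_d_optimal(G, prior_cov, noise_var, k):
    """Greedily pick k measurements (rows of G) maximizing the Bayesian
    D-optimality criterion log det(posterior precision) for the linear
    Gaussian model y = G m + eps, eps ~ N(0, noise_var * I).
    """
    chosen = []
    prec = np.linalg.inv(prior_cov)              # prior precision
    for _ in range(k):
        base = np.linalg.slogdet(prec)[1]
        best, best_gain = None, -np.inf
        for i in range(G.shape[0]):
            if i in chosen:
                continue
            g = G[i:i + 1].T                     # candidate measurement row
            gain = np.linalg.slogdet(prec + g @ g.T / noise_var)[1] - base
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        g = G[best:best + 1].T
        prec = prec + g @ g.T / noise_var        # rank-one information update
    return chosen

# Smooth (squared-exponential) prior on a 1-D grid, pointwise observations.
x = np.linspace(0.0, 1.0, 40)
prior_cov = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1) + 1e-8 * np.eye(40)
print(greedy_d_optimal(np.eye(40), prior_cov, 0.01, 5))
```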

In this paper, we tackle a critical issue in nonparametric inference for systems of interacting particles on Riemannian manifolds: the identifiability of the interaction functions. Specifically, we define the function spaces on which the interaction kernels can be identified given infinitely many i.i.d. observations of derivative data sampled from a distribution. Our methodology involves casting the learning problem as a linear statistical inverse problem within an operator-theoretic framework. We prove the well-posedness of the inverse problem by establishing the strict positivity of a related integral operator, and our analysis allows us to refine the results on specific manifolds such as the sphere and hyperbolic space. Our findings indicate that a numerically stable procedure exists to recover the interaction kernel from finite (noisy) data and that the estimator converges to the ground truth. This also answers an open question in [MMQZ21] and demonstrates that least squares estimators can be statistically optimal in certain scenarios. Finally, our theoretical analysis can be extended to the mean-field case, revealing that the corresponding nonparametric inverse problem is ill-posed in general and necessitates effective regularization techniques.
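A Euclidean stand-in for the manifold setting (our sketch, not the paper's estimator): simulate a first-order interacting particle system, observe i.i.d. configurations with exact derivative data, and recover the interaction kernel by least squares over a piecewise-constant basis in pairwise distance.

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, M = 20, 2, 200                  # particles, dimension, snapshots
phi_true = lambda r: np.exp(-r)       # ground-truth interaction kernel

def rhs(X, phi):
    """Velocity field of x_i' = (1/N) sum_j phi(|x_j - x_i|)(x_j - x_i)."""
    diff = X[None, :, :] - X[:, None, :]          # diff[i, j] = x_j - x_i
    r = np.linalg.norm(diff, axis=2)
    w = np.where(r > 0, phi(r), 0.0)
    return (w[:, :, None] * diff).mean(axis=1)

# i.i.d. configurations with exact ("derivative") velocity observations.
Xs = rng.standard_normal((M, N, d))
Vs = np.stack([rhs(X, phi_true) for X in Xs])

# Piecewise-constant basis on [0, rmax]; least squares for coefficients.
B, rmax = 30, 6.0
edges = np.linspace(0.0, rmax, B + 1)
rows, targets = [], []
for X, V in zip(Xs, Vs):
    diff = X[None, :, :] - X[:, None, :]
    r = np.linalg.norm(diff, axis=2)
    bins = np.clip(np.digitize(r, edges) - 1, 0, B - 1)
    for i in range(N):
        row = np.zeros((d, B))
        for j in range(N):
            if j != i:
                row[:, bins[i, j]] += diff[i, j] / N
        rows.append(row)
        targets.append(V[i])
A = np.vstack(rows)
b = np.concatenate(targets)
coef, *_ = np.linalg.lstsq(A, b, rcond=None)      # estimated phi per bin
print(coef[:5], phi_true(0.5 * (edges[:5] + edges[1:6])))
```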

High-dimension, low-sample-size (HDLSS) data are prevalent in many fields of study. There has been an increased focus recently on using machine learning and statistical methods to mine valuable information from these data sets, and hence an increased interest in efficient learning in high dimensions. Naturally, as the dimension of the input data increases, the learning task becomes more difficult, owing to growing computational and statistical complexity. This makes it crucial to overcome the curse of dimensionality in a given dataset, within a reasonable time frame, in a bid to obtain the insights required to keep a competitive edge. To solve HDLSS problems, classical methods such as support vector machines can be utilised, although they are prone to data piling at the margin in high dimensions. Questioning geometric domains and their assumptions on the input data naturally leads to convex optimisation problems, and this gives rise to solutions such as distance weighted discrimination (DWD), which can be modelled as a second-order cone programming problem and solved by interior-point methods when the sample size and feature dimension of the data are moderate. In this paper, our focus is on designing an even more scalable and robust algorithm for solving large-scale generalized DWD problems.
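For orientation, a small-scale sketch of generalized DWD posed directly as a convex (second-order-cone-representable) program with cvxpy; the paper's contribution is a far more scalable specialized algorithm, so this toy formulation and its parameters are ours.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)

# HDLSS toy data: 40 samples, 500 features, small mean shift between classes.
n, p = 40, 500
X = rng.standard_normal((n, p))
y = np.where(np.arange(n) < n // 2, 1.0, -1.0)
X[y > 0] += 0.3

# DWD (q = 1): minimize sum_i 1/r_i + C * sum_i xi_i
# subject to r = diag(y)(Xw + b) + xi, r > 0, xi >= 0, ||w||_2 <= 1.
w, b = cp.Variable(p), cp.Variable()
xi = cp.Variable(n, nonneg=True)
r = cp.multiply(y, X @ w + b) + xi
C = 1.0
prob = cp.Problem(cp.Minimize(cp.sum(cp.inv_pos(r)) + C * cp.sum(xi)),
                  [cp.norm(w, 2) <= 1])
prob.solve()

pred = np.sign(X @ w.value + b.value)
print("train accuracy:", (pred == y).mean())
```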

When estimating quantities and fields that are difficult to measure directly, such as the fluidity of ice, from point data sources, such as satellite altimetry, it is important to solve a numerical inverse problem that is formulated with Bayesian consistency. Otherwise, the resultant probability density function for the difficult-to-measure quantity or field will not be appropriately clustered around the truth. In particular, the inverse problem should be formulated by evaluating the numerical solution at the true point locations for direct comparison with the point data source. If the data are first fitted to a gridded or meshed field on the computational grid or mesh, and the inverse problem is formulated by comparing the numerical solution to the fitted field, the benefits of additional point data values below the grid density will be lost. We demonstrate, with examples from groundwater hydrology and glaciology, that a consistent formulation can increase the accuracy of results and aid discourse between modellers and observationalists. To do this, we bring point data into the finite element method ecosystem as discontinuous fields on meshes of disconnected vertices. Point evaluation can then be formulated as a finite element interpolation operation (dual evaluation). This new abstraction is well suited to automation, including automatic differentiation. We demonstrate this through an implementation in Firedrake, which generates highly optimised code for solving partial differential equations (PDEs) with the finite element method. Our solution integrates with dolfin-adjoint/pyadjoint, allowing PDE-constrained optimisation problems, such as data assimilation, to be solved through forward and adjoint mode automatic differentiation.
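A minimal sketch of the point-evaluation abstraction, assuming Firedrake's VertexOnlyMesh API (the full data-assimilation pipeline is omitted): point data live on a mesh of disconnected vertices, and evaluating a field at those points becomes interpolation into a P0DG space on that mesh.

```python
from firedrake import (UnitSquareMesh, FunctionSpace, Function,
                       SpatialCoordinate, VertexOnlyMesh, sin, cos, pi)
import numpy as np

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)
x, y = SpatialCoordinate(mesh)
u = Function(V).interpolate(sin(pi * x) * cos(pi * y))  # stand-in PDE solution

# Observation locations become a mesh of disconnected vertices; point
# evaluation is interpolation into P0DG on that mesh, an operation that
# composes with dolfin-adjoint/pyadjoint for automatic differentiation.
points = np.array([[0.2, 0.3], [0.5, 0.5], [0.9, 0.1]])
vom = VertexOnlyMesh(mesh, points)
P0DG = FunctionSpace(vom, "DG", 0)
u_at_points = Function(P0DG).interpolate(u)
# Note: the stored ordering of values may differ from the input point order.
print(u_at_points.dat.data_ro)
```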

In this work, we propose a novel framework for estimating the dimension of the data manifold using a trained diffusion model. A diffusion model approximates the score function, i.e., the gradient of the log-density of a noise-corrupted version of the target distribution, for varying levels of corruption. We prove that, if the data concentrate around a manifold embedded in a high-dimensional ambient space, then as the level of corruption decreases, the score function points towards the manifold, because this becomes the direction of maximal increase in likelihood. Therefore, at small levels of corruption, the diffusion model gives us access to an approximation of the normal bundle of the data manifold. This allows us to estimate the dimension of the tangent space and hence the intrinsic dimension of the data manifold. To the best of our knowledge, our method is the first estimator of data manifold dimension based on diffusion models, and it outperforms well-established statistical estimators in controlled experiments on both Euclidean and image data.
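Schematically (our sketch; a closed-form stand-in score replaces a trained diffusion model): at small noise levels, scores sampled near a point span approximately the normal space, so the number of dominant singular values estimates the codimension.

```python
import numpy as np

rng = np.random.default_rng(4)

# Data manifold: unit circle embedded in R^3 (intrinsic dimension 1).
def project(x):
    """Nearest point on the circle {x1^2 + x2^2 = 1, x3 = 0}."""
    p = x.copy()
    p[2] = 0.0
    p[:2] /= np.linalg.norm(p[:2])
    return p

sigma = 1e-2
def score(x):
    # Stand-in for a trained diffusion model's score at noise level sigma:
    # for small sigma it points toward the manifold, s(x) ~ (proj(x) - x)/sigma^2.
    return (project(x) - x) / sigma**2

# Scores at perturbed copies of a point span roughly the normal space,
# so the numerical rank of the score matrix estimates the codimension.
x0 = np.array([1.0, 0.0, 0.0])
S = np.stack([score(x0 + sigma * rng.standard_normal(3)) for _ in range(500)])
sv = np.linalg.svd(S - S.mean(axis=0), compute_uv=False)
codim = int((sv / sv[0] > 0.1).sum())   # count dominant singular values
print("estimated intrinsic dimension:", 3 - codim)
```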

A parametric adaptive physics-informed greedy Latent Space Dynamics Identification (gLaSDI) method is proposed for accurate, efficient, and robust data-driven reduced-order modeling of high-dimensional nonlinear dynamical systems. In the proposed gLaSDI framework, an autoencoder discovers intrinsic nonlinear latent representations of high-dimensional data, while dynamics identification (DI) models capture local latent-space dynamics. An interactive training algorithm is adopted for the autoencoder and local DI models, which enables identification of simple latent-space dynamics and enhances the accuracy and efficiency of data-driven reduced-order modeling. To maximize and accelerate the exploration of the parameter space for optimal model performance, an adaptive greedy sampling algorithm integrated with a physics-informed residual-based error indicator and random-subset evaluation is introduced to search for the optimal training samples on the fly. Further, to exploit the local latent-space dynamics captured by the local DI models for improved modeling accuracy with a minimal number of local DI models in the parameter space, a k-nearest-neighbor convex interpolation scheme is employed. The effectiveness of the proposed framework is demonstrated by modeling various nonlinear dynamical problems, including Burgers equations, nonlinear heat conduction, and radial advection. The proposed adaptive greedy sampling outperforms conventional predefined uniform sampling in terms of accuracy. Compared with the high-fidelity models, gLaSDI achieves 17x to 2,658x speed-ups with 1 to 5% relative errors.
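A minimal sketch of the dynamics identification (DI) stage alone, with the autoencoder omitted and a stand-in linear latent system; the candidate-term library, finite-difference velocities, and plain least squares are our simplifications of a SINDy-style regression.

```python
import numpy as np

# Stand-in latent trajectory (the autoencoder stage is omitted):
# a damped oscillator in a 2-D latent space, z' = A z.
A = np.array([[-0.05, 2.0], [-2.0, -0.05]])
dt, steps = 1e-3, 5000
Z = np.empty((steps, 2))
Z[0] = [1.0, 0.0]
for k in range(steps - 1):
    Z[k + 1] = Z[k] + dt * (A @ Z[k])          # explicit Euler

# Dynamics identification: regress finite-difference velocities on a
# library of candidate terms (constant + linear + quadratic).
dZ = (Z[1:] - Z[:-1]) / dt
z1, z2 = Z[:-1, 0], Z[:-1, 1]
theta = np.column_stack([np.ones_like(z1), z1, z2, z1 * z1, z1 * z2, z2 * z2])
coef, *_ = np.linalg.lstsq(theta, dZ, rcond=None)
print(np.round(coef.T, 3))   # the linear columns should recover A
```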

When the data used for reinforcement learning (RL) are collected by multiple agents in a distributed manner, federated versions of RL algorithms allow collaborative learning without the need to share local data. In this paper, we consider federated Q-learning, which aims to learn an optimal Q-function by periodically aggregating local Q-estimates trained on local data alone. Focusing on infinite-horizon tabular Markov decision processes, we provide sample complexity guarantees for both the synchronous and asynchronous variants of federated Q-learning. In both cases, our bounds exhibit a linear speedup with respect to the number of agents and sharper dependencies on other salient problem parameters. Moreover, existing approaches to federated Q-learning adopt an equally-weighted average of local Q-estimates, which can be highly sub-optimal in the asynchronous setting, since local trajectories can be highly heterogeneous due to differing local behavior policies. The existing sample complexity scales inversely with the minimum entry of the stationary state-action occupancy distributions over all agents, requiring every agent to cover the entire state-action space. Instead, we propose a novel importance averaging algorithm that gives larger weights to more frequently visited state-action pairs. The improved sample complexity scales inversely with the minimum entry of the average stationary state-action occupancy distribution over all agents, thus requiring only that the agents collectively cover the entire state-action space, unveiling the blessing of heterogeneity.
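The aggregation rule itself is simple to state; below is a schematic numpy sketch of importance averaging (our toy shapes and counts, not the full federated algorithm): each agent's estimate of Q(s, a) is weighted by its relative visit count for that pair.

```python
import numpy as np

def importance_average(Q_local, counts):
    """Aggregate local Q-estimates with per-(s, a) importance weights.

    Q_local : (K, S, A) local Q-tables from K agents
    counts  : (K, S, A) visit counts of each (s, a) pair at each agent

    Instead of the equally-weighted average Q = mean_k Q_k, weight each
    agent's estimate of Q(s, a) by how often it visited (s, a), so only
    the agents' *collective* coverage of the state-action space matters.
    """
    w = counts / np.maximum(counts.sum(axis=0, keepdims=True), 1.0)
    return (w * Q_local).sum(axis=0)

# Two agents with complementary coverage of a 3-state, 2-action MDP.
Q = np.stack([np.full((3, 2), 1.0), np.full((3, 2), 3.0)])
n = np.asarray([[[9, 9], [9, 0], [0, 0]],
                [[1, 1], [1, 9], [9, 9]]], dtype=float)
print(importance_average(Q, n))
```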
