
Within the framework of computational plasticity, recent advances show that the quasi-static response of an elasto-plastic structure under cyclic loading may exhibit a time-multiscale behaviour. In particular, the system response can be computed in terms of time microscale and macroscale modes using a weakly intrusive multi-time Proper Generalized Decomposition (MT-PGD). In this work, such a micro-macro characterization of the time response is exploited to build a data-driven model of the elasto-plastic constitutive relation. This can be viewed as a predictor-corrector scheme in which the prediction is driven by the macro-time evolution and the correction is performed via sparse sampling in space. Once the nonlinear term is forecast, the multi-time PGD algorithm allows the fast computation of the total strain. The algorithm yields considerable gains in computational time, opening new perspectives for the numerical simulation of history-dependent problems defined over very large time intervals.
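As a hedged illustration of the underlying time micro-macro separation (not of the MT-PGD solver itself), the following Python sketch reshapes a slowly modulated cyclic signal into a (macro time) x (micro time) array and extracts separated modes with an SVD; the signal definition and the rank are assumptions made purely for illustration.

```python
# Minimal sketch of the time micro-macro separation behind MT-PGD.
# The signal and the rank-1 SVD illustration are assumptions; the actual
# MT-PGD solves the mechanical problem with a greedy separated representation.
import numpy as np

n_cycles, n_per_cycle = 200, 64           # macro steps x micro steps
T = np.linspace(0.0, 1.0, n_cycles)       # macro (slow) time
tau = np.linspace(0.0, 1.0, n_per_cycle)  # micro (intra-cycle) time

# Cyclic response whose amplitude drifts slowly over the cycles.
u = (1.0 - 0.5 * T)[:, None] * np.sin(2 * np.pi * 3 * tau)[None, :]

# Separated representation u(T, tau) ~ sum_k A_k(T) B_k(tau):
# an SVD of the reshaped signal exposes the macro/micro modes.
U, s, Vt = np.linalg.svd(u, full_matrices=False)
rank = 1
u_sep = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

print("relative error of rank-1 micro-macro approximation:",
      np.linalg.norm(u - u_sep) / np.linalg.norm(u))
```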

Related content

Signal detection is one of the main challenges of data science. As often happens in data analysis, the signal in the data may be corrupted by noise. A wide range of techniques aims at extracting the relevant degrees of freedom from data, yet some problems remain difficult; this is notably the case for signal detection in nearly continuous spectra when the signal-to-noise ratio is small. This paper follows a recent line of work that tackles the issue with field-theoretical methods. Previous analyses focused on equilibrium Boltzmann distributions for an effective field representing the degrees of freedom of the data, and established a relation between signal detection and $\mathbb{Z}_2$-symmetry breaking. In this paper, we consider a stochastic field framework inspired by the so-called "Model A" and show that whether or not an equilibrium state can be reached is correlated with the shape of the dataset. In particular, by studying the renormalization group of the model, we show that the weak ergodicity prescription is always broken for sufficiently small signals when the data distribution is close to the Marchenko-Pastur (MP) law. This, in particular, enables the definition of a detection threshold in the regime of small signal-to-noise ratio.
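For intuition about this regime, the sketch below compares the empirical covariance spectrum of noisy data carrying a weak rank-one signal with the upper edge of the Marchenko-Pastur bulk; the data model and signal strength are illustrative assumptions, and this classical random-matrix check is only the starting point of the field-theoretical analysis described above.

```python
# Minimal sketch: compare the empirical covariance spectrum of noisy data
# with the Marchenko-Pastur (MP) bulk edge and flag escaping eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 500                 # samples x features
q = p / n                        # aspect ratio of the data matrix

# Pure-noise data plus a weak rank-1 signal along a random direction.
beta = 1.5                       # assumed signal strength (illustrative)
v = rng.standard_normal(p); v /= np.linalg.norm(v)
X = rng.standard_normal((n, p)) + np.sqrt(beta) * rng.standard_normal((n, 1)) * v

evals = np.linalg.eigvalsh(X.T @ X / n)
mp_edge = (1 + np.sqrt(q)) ** 2  # upper edge of the MP bulk (unit noise variance)
print("MP bulk edge:", mp_edge)
print("eigenvalues above the edge:", evals[evals > mp_edge])
```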

Private synthetic data sharing is often preferred over releasing summary statistics because it preserves the distribution and nuances of the original data. State-of-the-art methods adopt a select-measure-generate paradigm, but measuring marginals over large domains still incurs substantial error, and allocating the privacy budget across iterations remains difficult. To address these issues, our method employs a partition-based approach that effectively reduces errors and improves the quality of synthetic data, even with a limited privacy budget. Results from our experiments demonstrate the superiority of our method over existing approaches. The synthetic data produced using our approach exhibits improved quality and utility, making it a preferable choice for private synthetic data sharing.
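The sketch below illustrates only the generic "measure" step of a select-measure-generate pipeline, adding Gaussian noise to a low-dimensional marginal; the column names and noise scale are assumptions, and the paper's partition-based budget allocation is not reproduced here.

```python
# Minimal sketch of the "measure" step: a noisy marginal over two attributes.
# Column names and the noise scale are illustrative placeholders; sigma would
# normally be calibrated from the (epsilon, delta) budget via the Gaussian mechanism.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age_group": rng.integers(0, 5, size=10_000),
    "income_band": rng.integers(0, 4, size=10_000),
})

def noisy_marginal(data, cols, sigma):
    """Histogram over `cols` with i.i.d. Gaussian noise of scale `sigma`."""
    counts = data.groupby(cols).size().unstack(fill_value=0).to_numpy(float)
    return counts + rng.normal(0.0, sigma, size=counts.shape)

m = noisy_marginal(df, ["age_group", "income_band"], sigma=10.0)
print(m.round(1))
```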

In the analyses of cluster-randomized trials, mixed-model analysis of covariance (ANCOVA) is a standard approach for covariate adjustment and handling within-cluster correlations. However, when the normality, linearity, or the random-intercept assumption is violated, the validity and efficiency of the mixed-model ANCOVA estimators for estimating the average treatment effect remain unclear. Under the potential outcomes framework, we prove that the mixed-model ANCOVA estimators for the average treatment effect are consistent and asymptotically normal under arbitrary misspecification of its working model. If the probability of receiving treatment is 0.5 for each cluster, we further show that the model-based variance estimator under mixed-model ANCOVA1 (ANCOVA without treatment-covariate interactions) remains consistent, clarifying that the confidence interval given by standard software is asymptotically valid even under model misspecification. Beyond robustness, we discuss several insights on precision among classical methods for analyzing cluster-randomized trials, including the mixed-model ANCOVA, individual-level ANCOVA, and cluster-level ANCOVA estimators. These insights may inform the choice of methods in practice. Our analytical results and insights are illustrated via simulation studies and analyses of three cluster-randomized trials.
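As a minimal illustration of the estimator being analysed, the sketch below fits a mixed-model ANCOVA1 (random cluster intercept, no treatment-covariate interaction) to simulated data with statsmodels; the variable names and data-generating process are assumptions, not the trials analysed in the paper.

```python
# Minimal sketch of a mixed-model ANCOVA1 fit for a cluster-randomized trial.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_clusters, m = 40, 25
cluster = np.repeat(np.arange(n_clusters), m)
treat = np.repeat(rng.binomial(1, 0.5, n_clusters), m)   # cluster-level assignment
x = rng.standard_normal(n_clusters * m)                  # individual-level covariate
b = np.repeat(rng.normal(0, 0.5, n_clusters), m)         # cluster random effect
y = 1.0 + 2.0 * treat + 0.8 * x + b + rng.standard_normal(n_clusters * m)

df = pd.DataFrame({"y": y, "treat": treat, "x": x, "cluster": cluster})
fit = smf.mixedlm("y ~ treat + x", df, groups=df["cluster"]).fit()
print(fit.params["treat"], fit.bse["treat"])  # treatment-effect estimate and model-based SE
```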

A finite element-based computational scheme is developed and employed to assess a duality-based variational approach to the solution of the linear heat and transport PDEs in one space dimension and time, and of Euler's nonlinear system of ODEs for the rotation of a rigid body about a fixed point. The formulation turns initial-(boundary) value problems into degenerate elliptic boundary value problems in (space)-time domains representing the Euler-Lagrange equations of suitably designed dual functionals in each of the above problems. We demonstrate reasonable success in approximating solutions of this range of parabolic, hyperbolic, and ODE primal problems, encompassing energy dissipation as well as conservation, by a unified dual strategy that lends itself to a variational formulation. The scheme naturally associates a family of dual solutions with a unique primal solution; such `gauge invariance' is demonstrated in our computed solutions of the heat and transport equations, including the case of a transient dual solution corresponding to a steady primal solution of the heat equation. Primal evolution problems with causality are shown to be correctly approximated by non-causal dual problems.

We propose a novel family of test statistics to detect the presence of changepoints in a sequence of dependent, possibly multivariate, functional-valued observations. Our approach allows testing for a very general class of changepoints, including the "classical" case of changes in the mean and even changes in the whole distribution. Our statistics are based on a generalisation of the empirical energy distance; we propose weighted functionals of the energy distance process, designed to enhance the ability to detect breaks occurring near the sample endpoints. The limiting distribution of the maximally selected version of our statistics requires only the computation of the eigenvalues of the covariance function, and is therefore readily implementable in the most commonly employed packages, e.g. R. We show that, under the alternative, our statistics are able to detect changepoints occurring even very close to the beginning or end of the sample. In the presence of multiple changepoints, we propose a binary segmentation algorithm to estimate their number and locations. Simulations show that our procedures work very well in finite samples. We complement our theory with applications to financial and temperature data.
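The sketch below implements only the unweighted building block of such statistics: an empirical energy distance scanned over candidate split points and maximised over the sample. The endpoint weighting and the functional-data machinery of the paper are omitted.

```python
# Minimal sketch of a maximally selected energy-distance scan for a
# single changepoint in a multivariate sequence.
import numpy as np
from scipy.spatial.distance import cdist

def energy_distance(x, y):
    """Empirical energy distance between two samples (rows = observations)."""
    return (2.0 * cdist(x, y).mean()
            - cdist(x, x).mean()
            - cdist(y, y).mean())

def scan_changepoint(z, trim=10):
    """Return the split maximising k*(n-k)/n * energy_distance(pre, post)."""
    n = len(z)
    stats = {}
    for k in range(trim, n - trim):
        stats[k] = k * (n - k) / n * energy_distance(z[:k], z[k:])
    k_hat = max(stats, key=stats.get)
    return k_hat, stats[k_hat]

rng = np.random.default_rng(0)
z = np.vstack([rng.normal(0, 1, (150, 3)), rng.normal(0.7, 1, (150, 3))])
print(scan_changepoint(z))   # should locate a break near observation 150
```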

Electrical circuits are present in a variety of technologies, making their design an important part of computer-aided engineering. The growing number of tunable parameters that affect the final design leads to a need for new approaches to quantifying their impact. Machine learning may play a key role in this regard; however, current approaches often make suboptimal use of existing knowledge about the system at hand. For circuits, the description via modified nodal analysis is well understood. This particular formulation leads to systems of differential-algebraic equations (DAEs), which bring with them a number of peculiarities, e.g. hidden constraints that the solution needs to fulfill. We aim to use the recently introduced dissection concept for DAEs, which decouples a given system into ordinary differential equations depending only on the differential variables, and purely algebraic equations describing the relations between the differential and algebraic variables. The idea is then to learn only the differential variables and to reconstruct the algebraic ones using the relations obtained from the decoupling. This approach guarantees that the algebraic constraints are fulfilled up to the accuracy of the nonlinear system solver, which represents the main benefit highlighted in this article.
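A minimal sketch of the reconstruction step is given below, assuming the differential variables have already been learned: the algebraic variables are recovered at each time step by solving the coupling relation with a nonlinear solver. The toy relation g and the "learned" trajectory are placeholders, not a modified-nodal-analysis circuit model.

```python
# Minimal sketch: given (learned) differential variables x_d(t), recover the
# algebraic variables x_a(t) by solving the coupling relation g(x_d, x_a) = 0.
import numpy as np
from scipy.optimize import root

def g(x_a, x_d):
    """Algebraic relation from the decoupling; here a toy nonlinear diode-like law."""
    v, = x_a
    return [np.exp(v) - 1.0 + v - x_d]      # g(x_d, v) = 0

t = np.linspace(0.0, 1.0, 50)
x_d_traj = np.sin(2 * np.pi * t)            # stand-in for the learned differential variable

x_a_traj = []
guess = np.array([0.0])
for x_d in x_d_traj:
    sol = root(g, guess, args=(x_d,))
    x_a_traj.append(sol.x[0])
    guess = sol.x                           # warm start at the next time step
print(np.round(x_a_traj[:5], 4))
```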

We present a machine learning framework capable of consistently inferring mathematical expressions of hyperelastic energy functionals for incompressible materials from sparse experimental data and physical laws. To achieve this goal, we propose a polyconvex neural additive model (PNAM) that enables us to express the hyperelasticity model in a learnable feature space while enforcing polyconvexity. An upshot of the feature space obtained via the PNAM is that (1) it is spanned by a set of univariate basis functions that can be re-parametrized with a more complex mathematical form, and (2) the resultant elasticity model is guaranteed to fulfill polyconvexity, which ensures that the acoustic tensor remains elliptic for any deformation. To further improve interpretability, we use genetic programming to convert each univariate basis into a compact mathematical expression. The resultant multivariate mathematical models obtained from this framework are not only more interpretable but are also proven to fulfill physical laws. By controlling the compactness of the learned symbolic form, the machine-learning-generated mathematical model also requires fewer arithmetic operations than its deep neural network counterparts during deployment. This latter attribute is crucial for scaling up large-scale simulations, where the constitutive responses of every integration point must be updated within each incremental time step. We compare our proposed model discovery framework against other state-of-the-art alternatives to assess the robustness and efficiency of the training algorithms, and we examine the trade-off between interpretability, accuracy, and precision of the learned symbolic hyperelasticity models obtained from different approaches. Our numerical results suggest that our approach extrapolates well outside the training data regime thanks to the precise incorporation of physics-based knowledge.
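The simplified sketch below shows how convexity of an additive model can be enforced at the architecture level, using convex, nondecreasing activations and nonnegative output weights for each univariate basis; it illustrates the idea only and does not reproduce the PNAM's polyconvex construction on the invariants of the deformation gradient.

```python
# Simplified sketch of an additive model with convex univariate bases:
# softplus(affine(x)) is convex in x, and a nonnegative combination of
# convex functions stays convex, so each basis -- and their sum -- is convex.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvexUnivariateBasis(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.w1 = nn.Linear(1, hidden)             # affine layer: preserves convexity
        self.w2 = nn.Parameter(torch.randn(hidden) * 0.1)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):                          # x: (batch, 1)
        h = F.softplus(self.w1(x))                 # convex, nondecreasing activation
        return h @ F.softplus(self.w2) + self.bias # nonnegative combination stays convex

class ConvexAdditiveModel(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.bases = nn.ModuleList(ConvexUnivariateBasis() for _ in range(n_features))

    def forward(self, x):                          # x: (batch, n_features)
        return sum(b(x[:, i:i + 1]) for i, b in enumerate(self.bases))

model = ConvexAdditiveModel(n_features=3)
print(model(torch.randn(4, 3)).shape)              # torch.Size([4])
```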

Research in high energy physics (HEP) requires huge amounts of computing and storage, putting strong constraints on code speed and resource usage. To meet these requirements, a compiled high-performance language is typically used, while for physicists, who focus on the application when developing code, research productivity argues for a high-level programming language. A popular approach consists of combining Python, used for the high-level interface, and C++, used for the computing-intensive part of the code. A more convenient and efficient approach would be to use a single language that provides both high-level programming and high performance. The Julia programming language, developed at MIT especially to allow the use of a single language in research activities, has followed this path. In this paper the applicability of the Julia language to HEP research is explored, covering the different aspects that are important for HEP code development: runtime performance, handling of large projects, interfacing with legacy code, distributed computing, training, and ease of programming. The study shows that the HEP community would benefit from a large-scale adoption of this programming language. The HEP-specific foundation libraries that would need to be consolidated are identified.

Acceleration of gradient-based optimization methods is an issue of significant practical and theoretical interest, particularly in machine learning applications. Most research has focused on optimization over Euclidean spaces, but given the need to optimize over spaces of probability measures in many machine learning problems, it is of interest to investigate accelerated gradient methods in this setting as well. To this end, we introduce a Hamiltonian-flow approach that is analogous to momentum-based approaches in Euclidean space. We demonstrate that algorithms based on this approach can achieve convergence rates of arbitrarily high order. Numerical examples illustrate our claims.
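A minimal particle-level sketch of such a damped Hamiltonian (momentum) flow is given below, minimising the expectation of a simple potential over an empirical measure; the functional, the damping schedule, and the potential are assumptions, and the example is only a stand-in for the constructions developed in the paper.

```python
# Minimal particle sketch of a damped Hamiltonian (momentum) flow minimising
# F(mu) = E_mu[V(x)] over an empirical measure represented by particles.
import numpy as np

def grad_V(x):                       # V(x) = 0.5 * ||x||^2, so grad V = x
    return x

rng = np.random.default_rng(0)
X = rng.normal(2.0, 1.0, size=(500, 2))   # particle positions (empirical measure)
P = np.zeros_like(X)                      # conjugate momenta
dt, gamma = 0.05, 1.5                     # step size and friction (assumed values)

for _ in range(400):
    P += -dt * (grad_V(X) + gamma * P)    # momentum update with damping
    X += dt * P                           # position update

print("value of F = E[V]:", 0.5 * (X ** 2).sum(axis=1).mean())  # ~0 at the minimiser
```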

The emergence of complex structures in systems governed by a simple set of rules is among the most fascinating aspects of Nature. A particularly powerful and versatile model for investigating this phenomenon is provided by cellular automata, with the Game of Life being one of the most prominent examples. However, this simplified model can be too limiting as a tool for modelling real systems. To address this, we introduce and study an extended version of the Game of Life, with a dynamical process governing the rule selection at each step. We show that the introduced modification significantly alters the behaviour of the game. We also demonstrate that the choice of the synchronization policy can be used to control the trade-off between stability and growth in the system.
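A minimal sketch of the kind of model considered is given below: a life-like cellular automaton whose (birth, survival) rule is re-selected at each step by a simple stochastic process. The rule pool and the switching probability are illustrative assumptions rather than the policies studied in the paper.

```python
# Minimal sketch of a life-like cellular automaton with stochastic rule switching.
import numpy as np

rng = np.random.default_rng(0)
RULES = [({3}, {2, 3}),        # B3/S23: Conway's Game of Life
         ({3, 6}, {2, 3})]     # B36/S23: HighLife

def step(grid, birth, survive):
    """One synchronous update of a life-like rule on a toroidal grid."""
    nbrs = sum(np.roll(np.roll(grid, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    born = (grid == 0) & np.isin(nbrs, list(birth))
    stay = (grid == 1) & np.isin(nbrs, list(survive))
    return (born | stay).astype(np.uint8)

grid = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)
rule = 0
for t in range(200):
    if rng.random() < 0.1:                # dynamical process selecting the rule
        rule = rng.integers(len(RULES))
    grid = step(grid, *RULES[rule])
print("alive cells after 200 steps:", int(grid.sum()))
```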
