
In this thesis, a computational framework for the microstructural modelling of the transverse behaviour of heterogeneous materials is presented. This research falls within the broad and active field of Computational Micromechanics, which has emerged as an effective tool both to understand the influence of complex microstructures on the macro-mechanical response of engineering materials and to tailor-design innovative materials for specific applications through proper modification of their microstructure. The computational framework presented in this thesis is based on the Virtual Element Method (VEM), a recently developed numerical technique able to provide robust numerical results even with highly distorted meshes. The peculiar features of VEM have been exploited to analyse two-dimensional representations of heterogeneous material microstructures. Ad hoc polygonal multi-domain meshing strategies have been developed and tested to exploit the discretisation freedom that VEM allows. To further simplify the preprocessing stage of the analysis and reduce the total computational cost, a novel hybrid formulation for analysing multi-domain problems has been developed by combining the Virtual Element Method with the well-known Boundary Element Method (BEM). The hybrid approach has been used to study both the transverse behaviour of composite materials in the presence of inclusions with complex geometries and damage and crack propagation in the matrix phase. Numerical results are presented that demonstrate the potential of the developed framework.

Related content

We propose and analyze volume-preserving parametric finite element methods for surface diffusion, conserved mean curvature flow, and an intermediate evolution law in an axisymmetric setting. The weak formulations are presented in terms of the generating curves of the axisymmetric surfaces. The proposed numerical methods are based on piecewise linear parametric finite elements. The constructed schemes are fully practical and conserve the enclosed volume. In addition, we prove unconditional stability and consider the distribution of vertices for the discretized schemes. The introduced methods are implicit, and the resulting nonlinear systems of equations can be solved very efficiently and accurately via Newton's method. Numerical results are presented to show the accuracy and efficiency of the introduced schemes for computing the considered axisymmetric geometric flows.

Real-time point cloud processing is fundamental for many computer vision tasks, yet it remains challenging on resource-limited edge devices due to its computational cost. To address this issue, we implement XNOR-Net-based binary neural networks (BNNs) for efficient point cloud processing, but their performance suffers severely from two main drawbacks: Gaussian-distributed weights and non-learnable scale factors. In this paper, we introduce point-wise operations based on Expectation-Maximization (POEM) into BNNs for efficient point cloud processing. The EM algorithm efficiently constrains the weights to a robust bi-modal distribution. We introduce a well-designed reconstruction loss to calculate learnable scale factors, which enhance the representation capacity of 1-bit fully-connected (Bi-FC) layers. Extensive experiments demonstrate that our POEM surpasses existing state-of-the-art binary point cloud networks by a significant margin of up to 6.7%.
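As a rough illustration of the binarization this abstract refers to, the sketch below implements an XNOR-Net-style 1-bit fully-connected forward pass in NumPy. The per-row scale factor here is the mean absolute weight, as in the original XNOR-Net; POEM instead *learns* the scale factors through its reconstruction loss. The function name and shapes are our own assumptions, not the authors' code.

```python
import numpy as np

def binary_fc(W, x):
    """Illustrative XNOR-Net-style 1-bit fully-connected forward pass.

    W : (out_features, in_features) real-valued weights
    x : (in_features, batch) input activations

    Weights are binarized to {-1, +1} (np.sign maps exact zeros to 0,
    which real implementations typically round to +1), and a per-output
    scale factor alpha restores dynamic range after the binary matmul.
    """
    Wb = np.sign(W)                     # 1-bit weights
    alpha = np.abs(W).mean(axis=1)      # analytic scale factor (XNOR-Net)
    return alpha[:, None] * (Wb @ x)    # scaled binary matrix product

# Tiny usage example with hypothetical weights.
out = binary_fc(np.array([[0.5, -0.5], [1.0, 1.0]]), np.ones((2, 1)))
```

The key design point is that the expensive real-valued matmul is replaced by a sign-only product plus one cheap per-row rescaling, which is what makes 1-bit layers attractive on edge devices.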

Object-oriented programming (OOP) is one of the most popular paradigms for building software systems. However, despite its industrial and academic popularity, OOP still lacks a formal apparatus similar to the lambda calculus on which functional programming is based. There have been a number of attempts to formalize OOP, but none of them managed to cover all the features available in modern OO programming languages such as C++ or Java. We have made yet another attempt and created phi-calculus. We also created EOLANG (also called EO), an experimental programming language based on phi-calculus.

Just-in-time adaptive interventions (JITAIs) are time-varying adaptive interventions that use frequent opportunities for the intervention to be adapted (weekly, daily, or even many times a day). The micro-randomized trial (MRT) has emerged as an experimental design for informing the construction of JITAIs. MRTs can be used to address research questions about whether and under what circumstances JITAI components are effective, with the ultimate objective of developing effective and efficient JITAIs. The purpose of this article is to clarify why, when, and how to use MRTs; to highlight elements that must be considered when designing and implementing an MRT; and to review methods for primary and secondary analyses of MRT data. We briefly review key elements of JITAIs and discuss a variety of considerations that go into planning and designing an MRT. We provide a definition of causal excursion effects suitable for use in primary and secondary analyses of MRT data to inform JITAI development. We review the weighted and centered least-squares (WCLS) estimator, which provides consistent causal excursion effect estimates from MRT data, and describe how the WCLS estimator and its associated test statistics can be obtained using standard statistical software such as R (R Core Team, 2019). Throughout, we illustrate the MRT design and analyses using HeartSteps, an MRT for developing a JITAI to increase physical activity among sedentary individuals. We supplement the HeartSteps MRT with two other MRTs, SARA and BariFit, each of which highlights different research questions that can be addressed using the MRT design and the experimental considerations that might arise.
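The centering idea behind the WCLS estimator can be illustrated on simulated data. The sketch below is a minimal toy example with a constant, known randomization probability (so the WCLS weights reduce to one), not the authors' full estimator: regressing the proximal outcome on the *centered* treatment indicator recovers the assumed causal excursion effect. All numbers here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MRT data: T decision points, constant randomization probability p.
T, p = 10_000, 0.5
A = rng.binomial(1, p, T)            # treatment indicator at each decision point
beta_true = 0.3                      # assumed causal excursion effect
Y = 1.0 + beta_true * A + rng.normal(0.0, 1.0, T)   # proximal outcome

# Centered least squares: regress Y on an intercept and the centered
# treatment (A - p).  Centering makes the effect estimate robust to
# misspecification of the control part of the model.
X = np.column_stack([np.ones(T), A - p])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
beta_hat = coef[1]                   # estimate of the excursion effect
```

With time-varying randomization probabilities, the full WCLS estimator additionally reweights each decision point, which this constant-probability toy omits.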

Model-free data-driven computational mechanics replaces phenomenological constitutive functions with numerical simulations based on data sets of representative samples in stress-strain space. The distance of strain and stress pairs from the data set is minimized, subject to equilibrium and compatibility constraints. Although this method performs well for non-linear elastic problems, it faces challenges with history-dependent materials, since one and the same point in stress-strain space might correspond to different material behaviour. In recent literature, this issue has been treated by including local histories in the data set. However, it is still necessary to include models for the evolution of specific internal variables, resulting in a mixed formulation of classical and data-driven modelling. In the presented approach, the data set is instead augmented with directions in the tangent space of points in stress-strain space. Moreover, the data set is divided into subsets corresponding to different material behaviour. Based on this classification, transition rules map the modelling points to the various subsets. The approach is applied to non-linear elasticity and to elasto-plasticity with isotropic hardening.
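The core distance-minimization step of model-free data-driven mechanics can be sketched as a nearest-neighbour search in stress-strain space. The one-dimensional setting below and the metric constant `C` (an assumed numerical stiffness weighting the strain and stress axes) are purely illustrative; the actual method alternates this search with an equilibrium projection.

```python
import numpy as np

def closest_state(data, eps, sig, C=1.0):
    """Return the data point nearest to the trial pair (eps, sig) in the
    energy-like norm d^2 = C*(d_eps)^2 + (d_sig)^2 / C commonly used in
    model-free data-driven mechanics.

    data : (N, 2) array of sampled (strain, stress) pairs
    C    : assumed numerical stiffness weighting the two axes
    """
    d2 = C * (data[:, 0] - eps) ** 2 + (data[:, 1] - sig) ** 2 / C
    return data[np.argmin(d2)]

# Hypothetical linear-elastic data set with stiffness 2; a trial state near
# (1, 2) snaps to the sampled point (1, 2).
samples = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]])
state = closest_state(samples, 0.9, 2.1)
```

For history-dependent materials this search is exactly where ambiguity arises: the same (strain, stress) pair can belong to loading or unloading branches, which is what the tangent-direction augmentation and subset transition rules in the abstract address.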

We introduce and analyse the first order Enlarged Enhancement Virtual Element Method (E$^2$VEM) for the Poisson problem. The method has the interesting property of allowing the definition of bilinear forms that do not require a stabilization term. We provide a proof of well-posedness and optimal order a priori error estimates. Numerical tests on convex and non-convex polygonal meshes confirm the theoretical convergence rates.

We aim to reconstruct the latent-space dynamics of high-dimensional systems using model order reduction via the spectral proper orthogonal decomposition (SPOD). The proposed method is based on three fundamental steps: first, we compress the data from a high-dimensional representation to a lower-dimensional one by constructing the SPOD latent space; second, we build the time-dependent coefficients by projecting the realizations (also referred to as snapshots) onto the reduced SPOD basis and learn their evolution in time with the aid of recurrent neural networks; third, we reconstruct the high-dimensional data from the learnt lower-dimensional representation. The proposed method is demonstrated on two different test cases, namely a compressible jet flow and a geophysical problem known as the Madden-Julian Oscillation. An extensive comparison between SPOD and the equivalent POD-based counterpart is provided, and the differences between the two approaches are highlighted. The numerical results suggest that the proposed model is able to provide low-rank predictions of complex, statistically stationary data and insights into the evolution of phenomena characterized by a specific range of frequencies. The comparison between the POD and SPOD surrogate strategies highlights the need for further work on characterizing the interplay of errors between data-reduction techniques and neural-network forecasts.
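The compress/project/reconstruct pipeline described above can be sketched in a few lines. The sketch uses plain POD via the SVD in place of SPOD (which additionally resolves frequency content through blockwise spectral estimation) and omits the recurrent-network forecast of the latent coefficients; the synthetic rank-3 snapshot matrix is an assumption for illustration.

```python
import numpy as np

# Snapshot matrix: each column is one realization of the high-dim state.
rng = np.random.default_rng(1)
n, m, r = 200, 50, 3                  # state dim, snapshots, retained modes
X = rng.normal(size=(n, r)) @ rng.normal(size=(r, m))   # exactly rank-r data

# Step 1: compress -- build the reduced basis (plain POD via the SVD here;
# SPOD would instead operate on frequency-resolved blocks of the data).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]                        # reduced orthonormal basis

# Step 2: project snapshots onto the basis to get latent coefficients
# (in the paper these coefficients are then forecast with an RNN).
a = Phi.T @ X

# Step 3: reconstruct the high-dimensional data from the latent space.
X_rec = Phi @ a
err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
```

Because the synthetic data is exactly rank 3, three modes reconstruct it to machine precision; on real turbulent data the retained rank controls the compression/accuracy trade-off.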

We propose a numerical method based on physics-informed Random Projection Neural Networks for the solution of Initial Value Problems (IVPs) of Ordinary Differential Equations (ODEs), with a focus on stiff problems. We address an Extreme Learning Machine with a single hidden layer of radial basis functions whose widths are uniformly distributed random variables, while the weights between the input and the hidden layer are set equal to one. The numerical solution of the IVPs is obtained by constructing a system of nonlinear algebraic equations, which is solved with respect to the output weights by the Gauss-Newton method, using a simple adaptive scheme for adjusting the time interval of integration. To assess its performance, we apply the proposed method to four benchmark stiff IVPs, namely the Prothero-Robinson, van der Pol, ROBER and HIRES problems. Our method is compared with an adaptive Runge-Kutta method based on the Dormand-Prince pair and with a variable-step, variable-order multistep solver based on numerical differentiation formulas, as implemented in the \texttt{ode45} and \texttt{ode15s} MATLAB functions, respectively. We show that the proposed scheme yields good approximation accuracy, outperforming \texttt{ode45} and \texttt{ode15s} especially in cases where steep gradients arise. Furthermore, the computational times of our approach are comparable with those of the two MATLAB solvers for practical purposes.
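A stripped-down sketch of the random-projection idea follows. For a *linear* test IVP the collocation system is linear, so ordinary least squares stands in for the Gauss-Newton loop used in the paper for nonlinear problems; the test equation, the width distribution, and the collocation grid are assumptions for illustration, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random-projection network: y(t) ~ sum_j w_j * phi_j(t), with Gaussian
# RBFs whose centers and widths are random; only the output weights w
# are trained (the defining feature of an Extreme Learning Machine).
N = 100
centers = rng.uniform(0.0, 1.0, N)
widths = rng.uniform(1.0, 10.0, N)    # assumed width distribution

def phi(t):
    t = np.atleast_1d(t)[:, None]
    return np.exp(-widths * (t - centers) ** 2)

def dphi(t):
    t = np.atleast_1d(t)[:, None]
    return -2.0 * widths * (t - centers) * np.exp(-widths * (t - centers) ** 2)

# Collocate the linear test problem y' = -5 y, y(0) = 1 on [0, 1]:
# the ODE residual at each collocation point plus one initial-condition
# row form an overdetermined linear system for the output weights.
t = np.linspace(0.0, 1.0, 200)
A = np.vstack([dphi(t) + 5.0 * phi(t),   # ODE residual rows
               phi(0.0)])                # initial-condition row
b = np.concatenate([np.zeros(t.size), [1.0]])
w, *_ = np.linalg.lstsq(A, b, rcond=None)

y_hat = phi(t) @ w
err = np.max(np.abs(y_hat - np.exp(-5.0 * t)))   # exact solution e^{-5t}
```

For a nonlinear right-hand side the residual rows become nonlinear in `w`, and the paper solves the resulting system with Gauss-Newton iterations plus adaptive subdivision of the integration interval.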

File-based encryption (FBE) schemes have been developed by software vendors to address security concerns related to data storage. While methods of encrypting data at rest may seem relatively straightforward, the main proponents of these technologies in mobile devices have nonetheless created seemingly different FBE solutions. As most of the underlying design decisions are described either at a high level in whitepapers or are accessible at a low level by examining the corresponding source code (Android) or through reverse engineering (iOS), comparisons between schemes and discussions of their relative strengths are scarce. In this paper, we propose a formal framework for the study of file-based encryption systems, focusing on two prominent implementations: the FBE scheme used in the Android and Linux operating systems, and the FBE scheme used in iOS. Our formal model and our detailed description of the existing algorithms are based on documentation of a diverse nature, such as whitepapers, technical reports, presentations, and blog posts. Using our framework, we validate the security of the existing key-derivation chains, as well as the security of the overall designs, under widely known security assumptions for symmetric ciphers, such as IND-CPA or INT-CTXT security, in the random-oracle model.
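The chained structure of such key derivations (a master key deriving class keys, which in turn derive per-file keys) can be sketched with a generic HMAC-based step. The labels, the all-zero placeholder master key, and the choice of HMAC-SHA256 below are illustrative only; they are not the actual Android or iOS primitives, which the paper models precisely.

```python
import hmac
import hashlib

def derive(parent_key: bytes, label: bytes) -> bytes:
    """One link of a key-derivation chain: child = HMAC-SHA256(parent, label).

    This mirrors only the *structure* of FBE key hierarchies (each key is
    derived deterministically from its parent and a context label); real
    schemes use vendor-specific KDFs and hardware-bound keys.
    """
    return hmac.new(parent_key, label, hashlib.sha256).digest()

master = b"\x00" * 32                  # placeholder master key (illustrative)
class_key = derive(master, b"class:after-first-unlock")
file_key = derive(class_key, b"file:0001")
```

A property such chains aim for, and which the paper proves under standard assumptions, is that compromising one per-file key reveals nothing about sibling keys or the parent, since the derivation is one-way.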

The introduction of Behavior Trees (BTs) impacted the field of Artificial Intelligence (AI) in games by providing a flexible and natural representation of non-player character (NPC) logic that is manageable by game designers. Nevertheless, the increasing demand for ever better NPC AI agents has pushed the complexity of handcrafted BTs to the point of becoming barely tractable and error-prone. On the other hand, while many just-launched online games suffer from player shortage, the existence of AI agents with a broad range of capabilities could increase player retention. To handle these challenges, recent trends in the field have focused on the automatic creation of AI agents, from deep- and reinforcement-learning techniques to combinatorial (constrained) optimization and the evolution of BTs. In this paper, we present a novel approach to the semi-automatic construction of AI agents that mimic and generalize given human gameplays by adapting and tuning an expert-created BT under a developed similarity metric between the source and BT gameplays. To this end, we formulate a mixed discrete-continuous optimization problem, in which topological and functional changes of the BT are reflected in numerical variables, and construct a dedicated hybrid metaheuristic. The performance of the presented approach was verified experimentally in a prototype real-time strategy game. The experiments confirmed the efficiency and promise of the presented approach, which is going to be applied in a commercial game.
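A toy sketch of the kind of BT being tuned: in the mixed discrete-continuous view, the child ordering of a selector is a discrete (topological) variable, while a condition threshold is a continuous (functional) one. All node names, the threshold, and the state keys below are hypothetical, chosen only to make the two kinds of optimization variable concrete.

```python
from dataclasses import dataclass, field
from typing import Callable, List

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

@dataclass
class Condition:
    """Leaf node; its threshold-style parameters are continuous variables."""
    check: Callable[[dict], bool]

    def tick(self, state: dict) -> str:
        return SUCCESS if self.check(state) else FAILURE

@dataclass
class Selector:
    """Composite node; the *ordering* of children is a discrete variable."""
    children: List = field(default_factory=list)

    def tick(self, state: dict) -> str:
        for child in self.children:          # try children left to right
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

hp_threshold = 0.3                           # hypothetical tunable parameter
tree = Selector([
    Condition(lambda s: s["hp"] < hp_threshold),   # e.g. retreat when hurt
    Condition(lambda s: s["enemy_visible"]),       # otherwise engage
])
```

An optimizer in the spirit of the paper would encode both the child permutation and `hp_threshold` as numerical variables and search for the assignment whose resulting gameplay best matches the recorded human gameplay under the similarity metric.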
