We consider phase-field models with and without lateral flow for the numerical simulation of lateral phase separation and coarsening in lipid membranes. For the numerical solution of these models, we apply an unfitted finite element method that is flexible in handling complex and possibly evolving shapes in the absence of an explicit surface parametrization. Through several numerical tests, we investigate the effect of the presence of lateral flow on the evolution of phases. In particular, we focus on understanding how variable line tension, viscosity, membrane composition, and surface shape affect pattern formation.

Keywords: Lateral phase separation, surface Cahn-Hilliard equation, lateral flow, surface Navier-Stokes-Cahn-Hilliard system, TraceFEM
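For orientation, a common generic form of the conserved surface Cahn-Hilliard system underlying such phase-field models is sketched below; the paper's exact scaling, mobility, and coupling to lateral (surface Navier-Stokes) flow may differ.

```latex
\partial_t c = \nabla_\Gamma \cdot \bigl( M \, \nabla_\Gamma \mu \bigr), \qquad
\mu = f'(c) - \epsilon^2 \Delta_\Gamma c, \qquad
f(c) = \tfrac{1}{4}\, c^2 (1 - c)^2 ,
```

where Γ is the membrane surface, c the local phase fraction, M a mobility coefficient, and ε the interface-thickness parameter.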

Related content

A two-dimensional inviscid incompressible fluid is governed by simple rules. Yet, characterising its long-time behaviour is a knotty problem. The fluid evolves according to Euler's equations: a non-linear Hamiltonian system with infinitely many conservation laws. In both experiments and numerical simulations, coherent vortex structures, or blobs, emerge after an initial stage. These formations dominate the large-scale dynamics, but small scales also persist. Kraichnan describes in his classical work a forward cascade of enstrophy into smaller scales and a backward cascade of energy into larger scales. Previous attempts to model Kraichnan's double cascade use filtering techniques that enforce separation from the outset. Here we show that Euler's equations possess an intrinsic, canonical splitting of the vorticity function. The splitting is remarkable in four ways: (i) it is defined solely via the Poisson bracket and the Hamiltonian, (ii) it characterises steady flows, (iii) without imposition it yields a separation of scales, enabling the dynamics behind Kraichnan's qualitative description, and (iv) it accounts for the "broken line" in the power law for the energy spectrum, observed in both experiments and numerical simulations. The splitting originates from Zeitlin's truncated model of Euler's equations in combination with a standard quantum tool: the spectral decomposition of Hermitian matrices. In addition to theoretical insight, the scale-separation dynamics could be used for stochastic model reduction, where small scales are modelled by multiplicative noise.
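The "standard quantum tool" mentioned above is the eigendecomposition of a Hermitian matrix. The minimal sketch below only illustrates that step on a synthetic vorticity-like matrix; the paper's actual canonical splitting of the Zeitlin-truncated vorticity is not reproduced here.

```python
import numpy as np

# Illustration only: spectral decomposition of a Hermitian matrix.
# This shows the eigendecomposition step, not the paper's specific splitting.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
W = (A - A.conj().T) / 2          # skew-Hermitian, as in Zeitlin's matrix model
H = 1j * W                        # Hermitian matrix with a real spectrum

eigvals, eigvecs = np.linalg.eigh(H)                 # H = V diag(lambda) V^H
H_reconstructed = (eigvecs * eigvals) @ eigvecs.conj().T
assert np.allclose(H, H_reconstructed)
```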

A high-order finite element method is proposed to solve the nonlinear convection-diffusion equation on a time-varying domain whose boundary is implicitly driven by the solution of the equation. The method is semi-implicit in the sense that the boundary is traced explicitly with a high-order surface-tracking algorithm, while the convection-diffusion equation is solved implicitly with high-order backward differentiation formulas and fictitious-domain finite element methods. Through two numerical experiments on severely deforming domains, we show that optimal convergence orders are obtained in the energy norm for the third-order and fourth-order methods.
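For reference, the k-step backward differentiation formula (BDF) applied to a semi-discrete problem du/dt = F(u) reads as follows; BDF3 is written out as an example of one of the orders used (the coupling with the fictitious-domain discretization and boundary tracking is not shown).

```latex
\frac{1}{\Delta t} \sum_{j=0}^{k} \alpha_j \, u^{\,n+1-j} = F\!\bigl(u^{\,n+1}\bigr),
\qquad \text{e.g. BDF3:}\quad
\frac{11\,u^{\,n+1} - 18\,u^{\,n} + 9\,u^{\,n-1} - 2\,u^{\,n-2}}{6\,\Delta t}
 = F\!\bigl(u^{\,n+1}\bigr).
```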

An improved Singleton-type upper bound is presented for the list decoding radius of linear codes, in terms of the code parameters [n,k,d] and the list size L. L-MDS codes are then defined as codes that attain this bound (under a slightly stronger notion of list decodability), with 1-MDS codes corresponding to ordinary linear MDS codes. Several properties of such codes are presented; in particular, it is shown that the 2-MDS property is preserved under duality. Finally, explicit constructions for 2-MDS codes are presented through generalized Reed-Solomon (GRS) codes.
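For reference, the L = 1 case is the classical Singleton bound for a linear [n, k, d] code, attained with equality by ordinary MDS codes; the paper's improved bound generalizes this to list size L, and its exact form is not reproduced here.

```latex
d \;\le\; n - k + 1 .
```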

This paper deals with a special type of Lyapunov function, namely the solution of Zubov's equation. Such a function can be used to characterize the domain of attraction for systems of ordinary differential equations. We derive and prove an integral-form solution to Zubov's equation. For numerical computation, we develop two data-driven methods: one based on the integration of an augmented system of differential equations, and the other based on deep learning. The former is effective for systems with a relatively low state-space dimension, while the latter is developed for high-dimensional problems. The deep learning method is applied to a New England 10-generator power system model. We prove that a neural network approximation exists for the Lyapunov function of power systems such that the approximation error is a cubic polynomial of the number of generators. We also prove the error convergence rate as a function of n, the number of neurons.
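For context, one common form of Zubov's equation for a system ẋ = f(x) with an asymptotically stable equilibrium at the origin is shown below, where h is a suitable positive definite function; some variants include a normalization factor √(1 + ‖f(x)‖²), and the paper's integral-form solution and data-driven solvers are not reproduced here.

```latex
\nabla V(x) \cdot f(x) = -\, h(x) \bigl( 1 - V(x) \bigr), \qquad
V(0) = 0, \quad 0 < V(x) < 1 \ \text{on the domain of attraction}, \quad
V(x) \to 1 \ \text{at its boundary.}
```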

Cognition in midlife is an important predictor of age-related mental decline, and statistical models that predict cognitive performance can be useful for anticipating decline. However, existing models struggle to capture complex relationships between physical, sociodemographic, psychological, and mental health factors that affect cognition. Using data from an observational cohort study, Midlife in the United States (MIDUS), we modeled a large number of variables to predict executive function and episodic memory measures. We used cross-sectional and longitudinal outcomes with varying sparsity, or amount of missing data. Deep neural network (DNN) models consistently ranked highest in all of the cognitive performance prediction tasks, as assessed with root mean squared error (RMSE) on out-of-sample data. RMSE differences between DNN and other model types were statistically significant (T(8) = -3.70; p < 0.05). The interaction effect between model type and sparsity was significant (F(9) = 59.20; p < 0.01), indicating that the success of DNNs can partly be attributed to their robustness and ability to model hierarchical relationships between health-related factors. Our findings underscore the potential of neural networks to model clinical datasets and allow better understanding of factors that lead to cognitive decline.
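As a minimal sketch of the evaluation protocol described (out-of-sample RMSE comparison across model types), the snippet below uses synthetic data and a small scikit-learn network; the MIDUS variables, preprocessing, sparsity conditions, and the authors' DNN architecture are assumptions not reproduced here.

```python
# Illustration only: out-of-sample RMSE for a small neural network vs a
# linear baseline on synthetic regression data (not the MIDUS study data).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=1000, n_features=50, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("linear", LinearRegression()),
                    ("dnn", MLPRegressor(hidden_layer_sizes=(64, 32),
                                         max_iter=2000, random_state=0))]:
    model.fit(X_train, y_train)
    rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
    print(f"{name}: out-of-sample RMSE = {rmse:.2f}")
```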

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
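As a toy illustration, not the book's formalism, the snippet below shows the exploding/vanishing signal behavior at initialization that tuning to criticality addresses: for deep ReLU layers, a weight variance of 2/width (He scaling) keeps the propagated signal roughly stable, while smaller or larger variances make it vanish or explode.

```python
# Toy numpy experiment: norm of a signal after many ReLU layers at
# initialization, for sub-critical, critical, and super-critical weight variance.
import numpy as np

def signal_norm_after(depth, width, weight_var, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(width)
    for _ in range(depth):
        W = rng.normal(0.0, np.sqrt(weight_var / width), size=(width, width))
        x = np.maximum(W @ x, 0.0)        # ReLU layer
    return np.linalg.norm(x)

for var, label in [(1.0, "sub-critical"), (2.0, "critical (He)"), (4.0, "super-critical")]:
    print(label, signal_norm_after(depth=50, width=512, weight_var=var))
```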

This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence the estimation error can be large. We therefore propose an alternative approach to constructing the estimators so that this error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and the proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural-network-based models, under different simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction of estimation error is strikingly substantial if the causal effects are accounted for correctly.
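As a sketch of why confounding matters here, the snippet below contrasts a naive difference in means with inverse propensity weighting (a standard adjustment, not the paper's proposed estimator) on synthetic lending-style data with a known treatment effect; all variable names and the data-generating process are illustrative assumptions.

```python
# Synthetic example: the lender's decision t depends on confounders x,
# so a naive mean difference is biased; IPW recovers the true effect (2.0).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000
x = rng.standard_normal((n, 3))                        # confounders (e.g. credit features)
p = 1 / (1 + np.exp(-(x @ [1.0, -0.5, 0.8])))          # decision probability depends on x
t = rng.binomial(1, p)                                 # lender's credit decision
y = 2.0 * t + x @ [1.5, 1.0, -1.0] + rng.standard_normal(n)   # repayment outcome

naive = y[t == 1].mean() - y[t == 0].mean()

e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]     # estimated propensity
ipw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

print(f"naive: {naive:.2f}, IPW: {ipw:.2f}, true effect: 2.00")
```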

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.

There has been appreciable progress in unsupervised network representation learning (UNRL) approaches over graphs recently, with flexible random-walk approaches, new optimization objectives, and deep architectures. However, there is no common ground for systematic comparison of embeddings to understand their behavior for different graphs and tasks. In this paper we theoretically group different approaches under a unifying framework and empirically investigate the effectiveness of different network representation methods. In particular, we argue that most of the UNRL approaches either explicitly or implicitly model and exploit the context information of a node. Consequently, we propose a framework that casts a variety of approaches -- random-walk-based, matrix-factorization-based, and deep-learning-based -- into a unified context-based optimization function. We systematically group the methods based on their similarities and differences. We study the differences among these methods in detail, which we later use to explain their performance differences on downstream tasks. We conduct a large-scale empirical study considering 9 popular and recent UNRL techniques and 11 real-world datasets with varying structural properties and two common tasks -- node classification and link prediction. We find that there is no single method that is a clear winner and that the choice of a suitable method is dictated by certain properties of the embedding methods, the task, and the structural properties of the underlying graph. In addition, we report common pitfalls in the evaluation of UNRL methods and offer suggestions for experimental design and interpretation of results.
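A minimal sketch of a context-based objective is given below: a generic skip-gram-style loss with negative sampling over node-context pairs on a toy graph. It does not correspond to any particular method from the study; the graph, hyperparameters, and sampling scheme are illustrative assumptions.

```python
# Generic context-based embedding objective: score context nodes above
# random negatives (skip-gram with negative sampling), on a toy ring graph.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, lr = 100, 16, 0.05
edges = [(i, (i + 1) % n_nodes) for i in range(n_nodes)]   # contexts = graph neighbors
Z = 0.1 * rng.standard_normal((n_nodes, dim))              # node embeddings

def sigmoid(s):
    return 1 / (1 + np.exp(-s))

for _ in range(200):
    for u, v in edges:
        neg = rng.integers(n_nodes)                         # one random negative sample
        for ctx, label in [(v, 1.0), (neg, 0.0)]:
            zu = Z[u].copy()
            grad = label - sigmoid(zu @ Z[ctx])             # d/ds of log-likelihood
            Z[u] += lr * grad * Z[ctx]
            Z[ctx] += lr * grad * zu
```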

Using the 6,638 case descriptions of societal impact submitted for evaluation in the Research Excellence Framework (REF 2014), we replicate the topic model (Latent Dirichlet Allocation or LDA) made in this context and compare the results with factor-analytic results using a traditional word-document matrix (Principal Component Analysis or PCA). Removing a small fraction of documents from the sample, for example, has on average a much larger impact on LDA-based models than on PCA-based models, to the extent that the largest distortion in the case of PCA has less effect than the smallest distortion of the LDA-based models. In terms of semantic coherence, however, LDA models outperform PCA-based models. The topic models inform us about the statistical properties of the document sets under study, but the results are statistical and should not be given a semantic interpretation (for example, in grant selection, micro-decision making, or scholarly work) without follow-up using domain-specific semantic maps.
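The snippet below sketches the two modelling routes being compared, LDA on term counts versus a PCA-style factor decomposition of the word-document matrix, on a toy corpus; the REF 2014 case descriptions, the preprocessing, and the exact factor-analytic setup (here approximated with truncated SVD on sparse counts) are assumptions.

```python
# Toy comparison of LDA topics vs a PCA-style factor model on the same
# word-document count matrix (not the REF 2014 corpus or pipeline).
from sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "impact of research on health policy",
    "engineering research improves manufacturing",
    "health outcomes and clinical policy impact",
    "manufacturing and engineering case studies",
]
X = CountVectorizer().fit_transform(docs)                  # word-document counts

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
svd = TruncatedSVD(n_components=2, random_state=0).fit(X)  # PCA-like factors on sparse counts

print(lda.transform(X))   # document-topic proportions
print(svd.transform(X))   # document scores on the leading components
```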
