
In this paper, we study the topological asymptotic expansion of a topology optimisation problem constrained by the Poisson equation, with the design/shape variable entering through the right-hand side. Using an averaged adjoint approach, we give explicit formulas for topological derivatives of arbitrary order for both an $L_2$ and an $H^1$ tracking-type cost function in dimensions two and three, and thereby derive the complete asymptotic expansion. Since the asymptotic behaviour of the fundamental solution of the Laplacian differs between dimensions two and three, the derivation of the topological expansion also differs significantly between the two cases. The complete expansion for the $H^1$ cost functional follows directly from the analysis of the variation of the state equation. The proof of the asymptotics of the $L_2$ tracking-type cost functional, however, is significantly more involved and, surprisingly, the asymptotic behaviour of the bi-harmonic equation plays a crucial role in our proof.

Related Content

In mathematical optimization, statistics, econometrics, decision theory, machine learning, and computational neuroscience, a cost function, also called a loss function, maps an event or the values of one or more variables onto a real number that intuitively represents a cost associated with that event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its negative, in which case it is to be maximized.
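As a minimal numeric illustration of this definition (the squared-error loss and the data below are illustrative choices, not from any of the papers above), minimizing the mean squared loss over a scalar parameter recovers the sample mean:

```python
import numpy as np

# Squared-error loss: maps a prediction/target pair to a non-negative cost.
def squared_loss(y_pred, y_true):
    return (y_pred - y_true) ** 2

# An optimization problem seeks the parameter minimizing the average loss.
y = np.array([1.0, 2.0, 3.0])
candidates = np.linspace(0.0, 4.0, 401)          # grid search over parameters
costs = [np.mean(squared_loss(c, y)) for c in candidates]
best = candidates[int(np.argmin(costs))]
print(best)  # the minimizer of the mean squared loss is the sample mean, 2.0
```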

Generalization of time series prediction remains an important open issue in machine learning, wherein earlier methods have either large generalization error or local minima. We develop an analytically solvable, unsupervised learning scheme that extracts the most informative components for predicting future inputs, termed predictive principal component analysis (PredPCA). Our scheme can effectively remove unpredictable noise and minimize test prediction error through convex optimization. Mathematical analyses demonstrate that, provided with sufficient training samples and sufficiently high-dimensional observations, PredPCA can asymptotically identify hidden states, system parameters, and dimensionalities of canonical nonlinear generative processes, with a global convergence guarantee. We demonstrate the performance of PredPCA using sequential visual inputs comprising handwritten digits, rotating 3D objects, and natural scenes. It reliably estimates distinct hidden states and predicts future outcomes of previously unseen test input data, based exclusively on noisy observations. The simple architecture and low computational cost of PredPCA are highly desirable for neuromorphic hardware.
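The core idea, least-squares prediction of future inputs followed by PCA on the predictions, can be sketched in a few lines of numpy. This is a toy illustration under assumed data (a noisy 2-D rotation observed in 10 dimensions), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical toy data: a 2-D latent rotation observed in 10 dims with noise.
T, d = 2000, 10
theta = 0.1 * np.arange(T)
latent = np.stack([np.sin(theta), np.cos(theta)])      # hidden states
W = rng.standard_normal((d, 2))
X = W @ latent + 0.5 * rng.standard_normal((d, T))     # noisy observations

# Step 1: optimal linear least-squares prediction of the next input.
Xp, Xf = X[:, :-1], X[:, 1:]
Q = Xf @ Xp.T @ np.linalg.inv(Xp @ Xp.T)
Xhat = Q @ Xp                                          # predicted future inputs

# Step 2: PCA on the *predicted* inputs extracts the predictive components;
# unpredictable noise is suppressed because it does not survive prediction.
cov = Xhat @ Xhat.T / Xhat.shape[1]
eigval, eigvec = np.linalg.eigh(cov)                   # ascending eigenvalues
components = eigvec[:, ::-1][:, :2]                    # top-2 predictive PCs
print(eigval[::-1][:3])
```

Because the predicted signal lies (up to estimation error) in the 2-D signal subspace, the spectrum of `cov` collapses after the second eigenvalue, which is how the method identifies the hidden-state dimensionality.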

In the present paper, non-convex multi-objective parameter optimization problems are considered which are governed by elliptic parametrized partial differential equations (PDEs). To solve these problems numerically, the Pascoletti-Serafini scalarization is applied and the resulting scalar optimization problems are solved by an augmented Lagrangian method. Due to the PDE constraints, however, the numerical solution is very expensive, so a model reduction based on the reduced basis (RB) method is utilized, where the RB functions are computed by a greedy algorithm. The quality of the RB approximation is ensured by a trust-region strategy which does not require any offline procedure. Moreover, convergence of the proposed method is guaranteed. Numerical examples illustrate the efficiency of the proposed solution technique.
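The Pascoletti-Serafini scalarization replaces a multi-objective problem by a single-objective one: given a reference point $a$ and direction $r$, minimize $t$ over $(t,x)$ subject to $f(x) \le a + t\,r$ componentwise. A minimal sketch on a hypothetical bi-objective problem without PDE constraints (the objectives, reference point, and direction are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical bi-objective problem: f1(x) = x^2, f2(x) = (x - 2)^2.
def f(x):
    return np.array([x[0] ** 2, (x[0] - 2.0) ** 2])

a = np.array([0.0, 0.0])   # reference point
r = np.array([1.0, 1.0])   # search direction

# Pascoletti-Serafini: min t  s.t.  a + t*r - f(x) >= 0 componentwise,
# with the unknown z = (t, x).
cons = {"type": "ineq", "fun": lambda z: a + z[0] * r - f(z[1:])}
res = minimize(lambda z: z[0], x0=np.array([5.0, 0.5]), constraints=cons)
t_opt, x_opt = res.x[0], res.x[1]
print(x_opt)  # a Pareto point of the bi-objective problem
```

For this choice of $a$ and $r$ the solution balances both objectives at $x = 1$; sweeping $a$ (or $r$) traces out the Pareto front, which is why the scalarization is well suited to non-convex fronts.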

We prove upper and lower bounds on the minimal spherical dispersion, improving upon previous estimates obtained by Rote and Tichy [Spherical dispersion with an application to polygonal approximation of curves, Anz. \"Osterreich. Akad. Wiss. Math.-Natur. Kl. 132 (1995), 3--10]. In particular, we see that the inverse $N(\varepsilon,d)$ of the minimal spherical dispersion is, for fixed $\varepsilon>0$, linear in the dimension $d$ of the ambient space. We also derive upper and lower bounds on the expected dispersion for points chosen independently and uniformly at random from the Euclidean unit sphere. In terms of the corresponding inverse $\widetilde{N}(\varepsilon,d)$, our bounds are optimal with respect to the dependence on $\varepsilon$.

We couple the L1 discretization of the Caputo derivative in time with a spectral Galerkin method in space to devise a scheme that solves quasilinear subdiffusion equations. Both the diffusivity and the source are allowed to be nonlinear functions of the solution. We prove the method's stability and convergence with spectral accuracy in space; the temporal order depends on the solution's regularity in time. Further, we support our results with numerical simulations that utilize parallelism for the spatial discretization. Moreover, as a side result, we find asymptotically exact values of the error constants, along with their remainders, for discretizations of the Caputo derivative and fractional integrals. These constants are the smallest possible, which improves the previously established results from the literature.
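The L1 discretization mentioned here is the standard piecewise-linear quadrature for the Caputo derivative of order $\alpha \in (0,1)$ on a uniform grid, with weights $b_k = (k+1)^{1-\alpha} - k^{1-\alpha}$. A self-contained sketch (the test function $u(t) = t^2$ and grid are illustrative choices, not the paper's experiments), checked against the exact value $D^\alpha t^2 = \frac{2}{\Gamma(3-\alpha)} t^{2-\alpha}$:

```python
import numpy as np
from math import gamma

def l1_caputo(u, tau, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0,1)
    at the final grid point, given samples u[0..n] on a uniform grid."""
    n = len(u) - 1
    k = np.arange(n)
    b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)   # L1 weights
    diffs = u[n - k] - u[n - k - 1]                 # backward differences
    return (tau ** (-alpha) / gamma(2 - alpha)) * np.sum(b * diffs)

alpha, T, n = 0.5, 1.0, 1000
t = np.linspace(0.0, T, n + 1)
u = t ** 2
approx = l1_caputo(u, T / n, alpha)
exact = 2.0 * T ** (2 - alpha) / gamma(3 - alpha)   # Caputo D^0.5 of t^2 at t=1
print(approx, exact)
```

For smooth $u$ the L1 scheme converges at rate $O(\tau^{2-\alpha})$, which is the sense in which "the temporal order depends on the solution's regularity in time."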

We give a formulation of the Nielsen-Schreier theorem (subgroups of free groups are free) in homotopy type theory, using the presentation of groups as pointed connected 1-truncated types. We show that the special case of finite-index subgroups holds constructively and that the full theorem follows from the axiom of choice. We give an example of a boolean infinity topos where our formulation of the theorem does not hold, and show that a stronger "untruncated" version of the theorem is provably false in homotopy type theory.

Topological semantics for modal logic based on the Cantor derivative operator gives rise to derivative logics, also referred to as $d$-logics. Unlike logics based on the topological closure operator, $d$-logics have not previously been studied in the framework of dynamical systems, which are pairs $(X,f)$ consisting of a topological space $X$ equipped with a continuous function $f\colon X\to X$. We introduce the logics $\bf{wK4C}$, $\bf{K4C}$ and $\bf{GLC}$ and show that they all have the finite Kripke model property and are sound and complete with respect to the $d$-semantics in this dynamical setting. In particular, we prove that $\bf{wK4C}$ is the $d$-logic of all dynamic topological systems, $\bf{K4C}$ is the $d$-logic of all $T_D$ dynamic topological systems, and $\bf{GLC}$ is the $d$-logic of all dynamic topological systems based on a scattered space. We also prove a general result for the case where $f$ is a homeomorphism, which in particular yields soundness and completeness for the corresponding systems $\bf{wK4H}$, $\bf{K4H}$ and $\bf{GLH}$. The main contribution of this work is the foundation of a general proof method for finite model property and completeness of dynamic topological $d$-logics. Furthermore, our result for $\bf{GLC}$ constitutes the first step towards a proof of completeness for the trimodal topo-temporal language with respect to a finite axiomatisation, something known to be impossible over the class of all spaces.
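The finite Kripke model property means that validity can be checked by evaluating formulas over finite frames. A minimal evaluator for the basic modal connectives on a hypothetical finite frame (this illustrates plain relational Kripke semantics only, not the derivative semantics or the dynamic modality studied in the paper):

```python
# Hypothetical finite Kripke frame: three worlds, an accessibility relation,
# and a valuation assigning to each proposition letter the worlds where it holds.
W = {0, 1, 2}
R = {(0, 1), (1, 2), (2, 2)}
V = {"p": {2}}

def holds(formula, w):
    """Evaluate a modal formula (a nested tuple) at world w."""
    op = formula[0]
    if op == "var":
        return w in V[formula[1]]
    if op == "not":
        return not holds(formula[1], w)
    if op == "and":
        return holds(formula[1], w) and holds(formula[2], w)
    if op == "dia":   # <>A: A holds at some R-successor of w
        return any(holds(formula[1], v) for (u, v) in R if u == w)
    if op == "box":   # []A: A holds at all R-successors of w
        return all(holds(formula[1], v) for (u, v) in R if u == w)
    raise ValueError(op)

# <>p holds at world 1 (its successor 2 satisfies p) but not at world 0.
print(holds(("dia", ("var", "p")), 1), holds(("dia", ("var", "p")), 0))
```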

We propose a new unfitted finite element method for the simulation of two-phase flows in the presence of insoluble surfactant. The key features of the method are 1) discrete conservation of surfactant mass; 2) the possibility of having meshes that do not conform to the evolving interface separating the immiscible fluids; 3) accurate approximation of quantities with weak or strong discontinuities across evolving geometries, such as the velocity field and the pressure. The new discretization of the incompressible Navier--Stokes equations, coupled to the convection-diffusion equation modeling the surfactant transport on evolving surfaces, is based on a space-time cut finite element formulation with quadrature in time and a stabilization term in the weak formulation that provides function extension. The proposed strategy utilizes the same computational mesh for the discretization of the surface partial differential equation (PDE) and the bulk PDEs, and can be combined with different techniques for representing and evolving the interface; here, the level set method is used. Numerical simulations in both two and three space dimensions are presented, including simulations showing the role of the surfactant in the interaction between two drops.

Learning a graph topology to reveal the underlying relationship between data entities plays an important role in various machine learning and data analysis tasks. Under the assumption that structured data vary smoothly over a graph, the problem can be formulated as a regularised convex optimisation over a positive semidefinite cone and solved by iterative algorithms. Classic methods require an explicit convex function to reflect generic topological priors, e.g. the $\ell_1$ penalty for enforcing sparsity, which limits the flexibility and expressiveness in learning rich topological structures. We propose to learn a mapping from node data to the graph structure based on the idea of learning to optimise (L2O). Specifically, our model first unrolls an iterative primal-dual splitting algorithm into a neural network. The key structural proximal projection is replaced with a variational autoencoder that refines the estimated graph with enhanced topological properties. The model is trained in an end-to-end fashion with pairs of node data and graph samples. Experiments on both synthetic and real-world data demonstrate that our model is more efficient than classic iterative algorithms in learning a graph with specific topological properties.

Reinforcement learning is one of the core components in designing an artificial intelligence system emphasizing real-time response. Reinforcement learning drives the system to take actions within an arbitrary environment, with or without prior knowledge of the environment model. In this paper, we present a comprehensive study of reinforcement learning covering various dimensions, including challenges, recent developments in state-of-the-art techniques, and future directions. The fundamental objective of this paper is to provide a framework for presenting the available reinforcement learning methods that is informative enough and simple to follow for new researchers and academics in this domain, taking the latest concerns into account. First, we illustrate the core techniques of reinforcement learning in an easily understandable and comparable way. We then analyze and discuss recent developments in reinforcement learning approaches. Our analysis points out that most models focus on tuning policy values rather than tuning other aspects of a particular reasoning state.
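The "tuning policy values" that most models focus on can be illustrated with tabular Q-learning, the canonical value-tuning method. A minimal sketch on a hypothetical 5-state chain environment (the environment, rewards, and hyperparameters are illustrative assumptions, not from the survey):

```python
import numpy as np

# Hypothetical environment: a 5-state chain; the agent starts at state 0
# and receives reward 1 on reaching the terminal state 4.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.3   # learning rate, discount, exploration

for episode in range(300):
    s = 0
    while s != 4:
        # Epsilon-greedy action selection.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == 4 else 0.0
        # Core update: tune the value of (state, action) toward the TD target.
        target = r + gamma * np.max(Q[s_next]) * (s_next != 4)
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

greedy = np.argmax(Q, axis=1)
print(greedy[:4])  # the learned greedy policy moves right toward the goal
```

The policy here is only implicit (greedy with respect to Q); policy-gradient methods, by contrast, parameterize and tune the policy directly.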

Matter evolved under the influence of gravity from minuscule density fluctuations. Non-perturbative structure formed hierarchically over all scales and developed the non-Gaussian features of the Universe known as the Cosmic Web. Fully understanding the structure formation of the Universe is one of the holy grails of modern astrophysics. Astrophysicists survey large volumes of the Universe and employ a large ensemble of computer simulations, comparing them with the observed data in order to extract the full information about our own Universe. However, evolving trillions of galaxies over billions of years, even with the simplest physics, is a daunting task. We build a deep neural network, the Deep Density Displacement Model (hereafter D$^3$M), to predict the non-linear structure formation of the Universe from simple linear perturbation theory. Our extensive analysis demonstrates that D$^3$M outperforms second-order perturbation theory (hereafter 2LPT), the commonly used fast approximate simulation method, in point-wise comparison, the 2-point correlation, and the 3-point correlation. We also show that D$^3$M is able to accurately extrapolate far beyond its training data and predict structure formation for significantly different cosmological parameters. Our study shows, for the first time, that deep learning is a practical and accurate alternative to approximate simulations of the gravitational structure formation of the Universe.
