
In the context of flow visualization, a triple decomposition of the velocity gradient into irrotational straining flow, shear flow, and rigid-body rotational flow was proposed by Kolar in 2007 [V. Kolar, International Journal of Heat and Fluid Flow, 28, 638 (2007)] and has recently received renewed interest. The triple decomposition opens the way for a refined energy stability analysis of the Navier-Stokes equations, with implications for the mathematical analysis of the structure, computability, and regularity of turbulent flow. Here we perform an energy stability analysis of turbulent incompressible flow, which suggests a scenario in which, at macroscopic scales, any exponentially unstable irrotational straining flow structures rapidly evolve towards linearly unstable shear flow and stable rigid-body rotational flow. This scenario does not rule out irrotational straining flow close to the Kolmogorov microscales, since there viscous dissipation stabilizes the unstable flow structures. In contrast to worst-case energy stability estimates, this refined stability analysis reflects the existence of stable flow structures in turbulence over extended time.
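As background for this decomposition, the velocity gradient is classically split into a symmetric strain-rate tensor and an antisymmetric rotation tensor; Kolar's triple decomposition further extracts a shear component, which requires an optimization over reference frames not shown here. A minimal numpy sketch of the classical double decomposition only, with a hypothetical velocity gradient as input:

```python
import numpy as np

# Hypothetical velocity gradient tensor G_ij = du_i/dx_j (illustrative values)
G = np.array([[0.5, 1.0, 0.0],
              [0.2, -0.3, 0.4],
              [0.1, 0.0, -0.2]])

# Symmetric part: strain-rate tensor S (irrotational straining)
S = 0.5 * (G + G.T)
# Antisymmetric part: rotation tensor W (rigid-body rotation)
W = 0.5 * (G - G.T)

# The two parts recover G exactly; the triple decomposition refines this
# split by additionally isolating a shear contribution.
assert np.allclose(S + W, G)
```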


We introduce three new generative models for time series that are based on Euler discretization of Stochastic Differential Equations (SDEs) and Wasserstein metrics. Two of these methods rely on the adaptation of generative adversarial networks (GANs) to time series. The third algorithm, called Conditional Euler Generator (CEGEN), minimizes a dedicated distance between the transition probability distributions over all time steps. In the context of Ito processes, we provide theoretical guarantees that minimizing this criterion implies accurate estimation of the drift and volatility parameters. We demonstrate empirically that CEGEN outperforms state-of-the-art GAN generators on both marginal and temporal dynamics metrics. Moreover, it identifies accurate correlation structures in high dimension. When few data points are available, we verify the effectiveness of CEGEN when combined with transfer learning methods on Monte Carlo simulations. Finally, we illustrate the robustness of our method on various real-world datasets.
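The Euler (Euler-Maruyama) discretization underlying these generators can be sketched as follows; the drift and volatility functions here are illustrative stand-ins (a geometric Brownian motion), not the learned networks of CEGEN:

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T, n_steps, rng):
    """Simulate one path of dX_t = mu(t, X_t) dt + sigma(t, X_t) dW_t
    on [0, T] with n_steps uniform Euler steps."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        t = k * dt
        dW = rng.normal(scale=np.sqrt(dt))   # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + mu(t, x[k]) * dt + sigma(t, x[k]) * dW
    return x

# Example: geometric Brownian motion with drift 0.05 and volatility 0.2
rng = np.random.default_rng(0)
path = euler_maruyama(lambda t, x: 0.05 * x,
                      lambda t, x: 0.2 * x,
                      x0=1.0, T=1.0, n_steps=252, rng=rng)
```

A generator in this family replaces the hand-written drift and volatility with neural networks and trains them so that the simulated transition distributions match the data under a Wasserstein-type criterion.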

On sparse graphs, Roditty and Williams [2013] proved that no $O(n^{2-\varepsilon})$-time algorithm achieves an approximation factor smaller than $\frac{3}{2}$ for the diameter problem unless SETH fails. In this article, we solve a longstanding question: can we use the structural properties of median graphs to break this global quadratic barrier? We propose the first combinatorial algorithm computing exactly all eccentricities of a median graph in truly subquadratic time. Median graphs constitute the most-studied family of graphs in metric graph theory because their structure represents many other discrete and geometric concepts, such as CAT(0) cube complexes. Our result generalizes a recent one stating that there is a linear-time algorithm for all eccentricities in median graphs of bounded dimension $d$, i.e. the dimension of the largest induced hypercube. This prerequisite on $d$ is no longer necessary to determine all eccentricities in subquadratic time. The execution time of our algorithm is $O(n^{1.6456}\log^{O(1)} n)$. We also provide some satellite outcomes related to this general result. In particular, restricted to simplex graphs, this algorithm enumerates all eccentricities with a quasilinear running time. Moreover, an algorithm is proposed to compute exactly all reach centralities in time $O(2^{3d}n\log^{O(1)}n)$.
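For reference, the naive quadratic-time baseline that this result improves on computes every eccentricity by running one BFS per vertex; a minimal sketch on an unweighted graph, tried here on the 3-cube $Q_3$ (a median graph) as an illustrative example:

```python
from collections import deque

def all_eccentricities(adj):
    """adj: dict mapping vertex -> list of neighbours (connected graph).
    Returns {vertex: eccentricity} via one BFS per vertex: O(n(n+m)) time."""
    ecc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc[s] = max(dist.values())          # farthest distance from s
    return ecc

# The 3-cube Q3: vertices are 3-bit labels, edges flip one bit.
# Every vertex of Q_d has eccentricity d (distance to its complement).
cube = {v: [v ^ (1 << b) for b in range(3)] for v in range(8)}
ecc = all_eccentricities(cube)               # every value equals 3
```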

Given functional data from a survival process with time-dependent covariates, we derive a smooth convex representation for its nonparametric log-likelihood functional and obtain its functional gradient. From this, we devise a generic gradient boosting procedure for estimating the hazard function nonparametrically. An illustrative implementation of the procedure using regression trees is described to show how to recover the unknown hazard. The generic estimator is consistent if the model is correctly specified; alternatively, an oracle inequality can be demonstrated for tree-based models. To avoid overfitting, boosting employs several regularization devices. One of them is step-size restriction, but the rationale for this is somewhat mysterious from the viewpoint of consistency. Our work brings some clarity to this issue by revealing that step-size restriction is a mechanism for preventing the curvature of the risk from derailing convergence.
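The generic functional-gradient boosting loop described above can be sketched in the abstract, with a toy squared-error risk standing in for the survival log-likelihood and single-split stumps standing in for regression trees (both are illustrative assumptions, not the paper's estimator); the shrinkage factor `nu` is the step-size restriction discussed in the text:

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split regression stump for residuals r (squared error)."""
    best_err, best_stump = np.inf, None
    for s in np.unique(x):
        left, right = r[x <= s], r[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if err < best_err:
            best_err, best_stump = err, (s, left.mean(), right.mean())
    return best_stump

def boost(x, y, n_rounds=50, nu=0.1):
    """Functional gradient boosting for the loss 1/2 (y - F)^2, whose
    negative functional gradient is simply the residual y - F(x)."""
    F = np.zeros_like(y, dtype=float)
    stumps = []
    for _ in range(n_rounds):
        resid = y - F                        # negative functional gradient
        s, lv, rv = fit_stump(x, resid)
        h = np.where(x <= s, lv, rv)         # base learner fitted to gradient
        F += nu * h                          # shrunken step (step-size restriction)
        stumps.append((s, lv, rv))
    return F, stumps

# Toy data: a step function, which a single stump can represent exactly
x = np.arange(10.0)
y = (x >= 5).astype(float)
F, stumps = boost(x, y)
```

With `nu = 0.1` the residual shrinks geometrically here, illustrating why a restricted step still converges while damping the influence of any single base learner.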

In this paper, we report an important discovery about nonconforming immersed finite element (IFE) methods that use the integral values on edges as degrees of freedom for solving elliptic interface problems. We show that those IFE methods without penalties are not guaranteed to converge optimally if the tangential derivative of the exact solution and the jump of the coefficient are not zero on the interface. A nontrivial counterexample is also provided to support our theoretical analysis. To recover the optimal convergence rates, we develop a new nonconforming IFE method with additional terms locally on interface edges. The new method is parameter-free, which removes the limitation of the conventional partially penalized IFE method. We show that the IFE basis functions are unisolvent on arbitrary triangles, which has not been considered in the literature. Furthermore, different from multipoint Taylor expansions, we derive the optimal approximation capabilities of both the Crouzeix-Raviart and the rotated-$Q_1$ IFE spaces via a unified approach which can easily handle the case of variable coefficients. Finally, optimal error estimates in both the $H^1$- and $L^2$-norms are proved and confirmed with numerical experiments.

We consider a model of energy minimization arising in the study of the mechanical behavior caused by cell contraction within a fibrous biological medium. The macroscopic model is based on the theory of non-rank-one convex nonlinear elasticity for phase transitions. We study appropriate numerical approximations based on the discontinuous Galerkin treatment of higher gradients, used successfully in numerical simulations of experiments. We show that the discrete minimizers converge in the limit to minimizers of the continuous problem. This is achieved by employing the theory of $\Gamma$-convergence of the approximate energy functionals to the continuous model when the discretization parameter tends to zero. The analysis is involved due to the structure of the numerical approximations, which are defined in spaces with lower regularity than the space where the minimizers of the continuous variational problem are sought. This fact leads to the development of a new approach to $\Gamma$-convergence, appropriate for discontinuous finite element discretizations, which can be applied to quite general energy minimization problems. Furthermore, the adoption of exponential terms penalizing the interpenetration of matter requires a new framework based on Orlicz spaces for discontinuous Galerkin methods, which is developed in this paper as well.

This paper presents a general, nonlinear finite element formulation for rotation-free shells with embedded fibers that captures anisotropy in stretching, shearing, twisting and bending -- both in-plane and out-of-plane. These capabilities allow for the simulation of large sheets of heterogeneous and fibrous materials either with or without matrix, such as textiles, composites, and pantographic structures. The work is a computational extension of our earlier theoretical work (Duong et al., 2021) that extends existing Kirchhoff-Love shell theory to incorporate the in-plane bending resistance of initially straight or curved fibers. The formulation requires only displacement degrees-of-freedom to capture all mentioned modes of deformation. To this end, isogeometric shape functions are used in order to satisfy the required $C^1$-continuity for bending across element boundaries. The proposed formulation can admit a wide range of material models, such as surface hyperelasticity that does not require any explicit thickness integration. To deal with possible material instability due to fiber compression, a stabilization scheme is added. Several benchmark examples are used to demonstrate the robustness and accuracy of the proposed computational formulation.

This paper addresses the energy management of a grid-connected renewable generation plant coupled with a battery energy storage device in the capacity firming market, designed to promote renewable power generation facilities in small non-interconnected grids. The core contribution is to propose a probabilistic forecast-driven strategy, modeled as a min-max-min robust optimization problem with recourse. It is solved using a Benders-dual cutting plane algorithm and a column and constraints generation algorithm in a tractable manner. A dynamic risk-averse parameter-selection strategy based on the quantile forecast distribution is proposed to improve the results. A secondary contribution is to use a recently developed deep learning model known as normalizing flows to generate quantile forecasts of renewable generation for the robust optimization problem. This technique provides a general mechanism for defining expressive probability distributions, requiring only the specification of a base distribution and a series of bijective transformations. Overall, the robust approach improves the results over a deterministic approach with nominal point forecasts by finding a trade-off between conservative and risk-seeking policies. The case study uses the photovoltaic generation monitored on-site at the University of Li\`ege (ULi\`ege), Belgium.
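The normalizing-flow recipe mentioned above (a base distribution pushed through invertible transformations, with densities obtained by the change-of-variables formula) can be sketched with a single affine bijection; the constants `mu` and `sigma` are illustrative, not fitted values, and real flows stack many learned bijections:

```python
import numpy as np

# Base distribution: standard normal z ~ N(0, 1).
# One invertible transformation: x = f(z) = mu + sigma * z.
mu, sigma = 2.0, 0.5

def forward(z):                  # base space -> data space
    return mu + sigma * z

def inverse(x):                  # data space -> base space
    return (x - mu) / sigma

def log_density(x):
    """Change of variables: log p_X(x) = log p_Z(f^{-1}(x)) + log|d f^{-1}/dx|."""
    z = inverse(x)
    log_pz = -0.5 * (z**2 + np.log(2 * np.pi))
    return log_pz - np.log(sigma)

# Quantile forecasts fall out of the induced distribution via sampling
rng = np.random.default_rng(0)
samples = forward(rng.standard_normal(10_000))
q10, q90 = np.quantile(samples, [0.1, 0.9])
```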

The radio access network (RAN) part of the next-generation wireless networks will require efficient solutions for satisfying low-latency and high-throughput services. The open RAN (O-RAN) is one of the candidates to achieve this goal, in addition to increasing vendor diversity and promoting openness. In the O-RAN architecture, network functions are executed in central units (CU), distributed units (DU), and radio units (RU). These entities are virtualized on general-purpose CPUs and form processing pools. These processing pools can be located in different geographical places and have limited capacity, affecting the energy consumption and the performance of networks. Additionally, since user demand is not deterministic, special attention should be paid to allocating resource blocks to users while ensuring their expected quality of service for latency-sensitive traffic flows. In this paper, we propose a joint optimization solution to enhance energy efficiency and provide delay guarantees to the users in the O-RAN architecture. We formulate this novel problem and linearize it to provide a solution with a mixed-integer linear programming (MILP) solver. We compare this with a baseline that addresses the same optimization problem using a disjoint approach. The results show that our approach outperforms the baseline method in terms of energy efficiency.

Physics-informed neural networks (PINNs) have been proposed to learn the solution of partial differential equations (PDE). In PINNs, the residual form of the PDE of interest and its boundary conditions are lumped into a composite objective function as an unconstrained optimization problem, which is then used to train a deep feed-forward neural network. Here, we show that this specific way of formulating the objective function is the source of severe limitations in the PINN approach when applied to different kinds of PDEs. To address these limitations, we propose a versatile framework that can tackle both inverse and forward problems. The framework is adept at multi-fidelity data fusion and can seamlessly constrain the governing physics equations with proper initial and boundary conditions. The backbone of the proposed framework is a nonlinear, equality-constrained optimization problem formulation aimed at minimizing a loss functional, where an augmented Lagrangian method (ALM) is used to formally convert a constrained-optimization problem into an unconstrained-optimization problem. We implement the ALM within a stochastic, gradient-descent type training algorithm in a way that scrupulously focuses on meeting the constraints without sacrificing other loss terms. Additionally, as a modification of the original residual layers, we propose lean residual layers in our neural network architecture to address the so-called vanishing-gradient problem. We demonstrate the efficacy and versatility of our physics- and equality-constrained deep-learning framework by applying it to learn the solutions of various multi-dimensional PDEs, including a nonlinear inverse problem from the hydrology field with multi-fidelity data fusion. The results produced with our proposed model match exact solutions very closely for all the cases considered.
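The augmented Lagrangian conversion at the heart of the framework can be sketched on a toy problem: minimize $f(x)$ subject to $h(x)=0$ by descending $L_\rho(x,\lambda) = f(x) + \lambda h(x) + \tfrac{\rho}{2} h(x)^2$ and updating the multiplier as $\lambda \leftarrow \lambda + \rho\, h(x)$. The quadratic objective below is an assumption for illustration, not the PINN loss:

```python
import numpy as np

# Toy problem: minimize f(x, y) = x^2 + y^2 subject to h(x, y) = x + y - 1 = 0.
# The analytic solution is x = y = 0.5 with multiplier lambda = -1.
def f_grad(v):
    return 2 * v

def h(v):
    return v[0] + v[1] - 1.0

def h_grad(v):
    return np.array([1.0, 1.0])

v = np.zeros(2)
lam, rho, lr = 0.0, 10.0, 0.01
for outer in range(50):
    # Inner loop: gradient descent on the augmented Lagrangian L_rho(v, lam)
    for _ in range(200):
        g = f_grad(v) + (lam + rho * h(v)) * h_grad(v)
        v -= lr * g
    # Outer loop: first-order multiplier (dual) update
    lam += rho * h(v)
```

In the paper's setting, the inner descent is played by stochastic gradient training of the network and the equality constraints encode the governing physics; this toy version only illustrates the primal-dual mechanics.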

Co-evolving time series appear in a multitude of applications such as environmental monitoring, financial analysis, and smart transportation. This paper aims to address the following challenges: (C1) how to incorporate explicit relationship networks of the time series; and (C2) how to model the implicit relationships of the temporal dynamics. We propose a novel model called Network of Tensor Time Series, which comprises two modules: a Tensor Graph Convolutional Network (TGCN) and a Tensor Recurrent Neural Network (TRNN). TGCN tackles the first challenge by generalizing the Graph Convolutional Network (GCN) for flat graphs to tensor graphs, capturing the synergy between multiple graphs associated with the tensors. TRNN leverages tensor decomposition to model the implicit relationships among co-evolving time series. Experimental results on five real-world datasets demonstrate the efficacy of the proposed method.
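The flat-graph GCN propagation rule that TGCN generalizes multiplies a symmetrically normalized adjacency matrix by the node features and a weight matrix; a minimal numpy sketch with illustrative (untrained) random weights:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN step: ReLU(D^{-1/2} (A + I) D^{-1/2} X W) for adjacency A,
    node features X, and weight matrix W."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)    # ReLU activation

# Tiny example: a path graph on 3 nodes, 2 input features, 4 hidden units
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 2))
W = rng.standard_normal((2, 4))
H = gcn_layer(A, X, W)                        # hidden representation, shape (3, 4)
```

TGCN lifts this rule from a single graph to tensor-valued features with one graph per mode; the sketch above covers only the flat-graph starting point.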
