
The resolution of the incompressible Navier-Stokes equations is tricky, and it is well known that one of the major issues is to compute a divergence-free velocity. The non-conforming Crouzeix-Raviart finite elements are convenient since they induce local mass conservation. Moreover, the stability constant of their Fortin operator is equal to 1, which implies that they can easily handle anisotropic meshes [1, 2]. However, spurious velocities may appear and degrade the approximation. We propose here a scheme that reduces these spurious velocities. It is based on a new discretisation of the pressure gradient, built on the symmetric MPFA (finite volume MultiPoint Flux Approximation) scheme [3, 4, 5].


The intersection of ground reaction forces near a point above the center of mass has been observed in computer simulation models and human walking experiments. Because it is observed so ubiquitously, the intersection point (IP) is commonly assumed to provide postural stability for bipedal walking. In this study, we challenge this assumption by asking whether walking without an IP is possible. Deriving gaits with a neuromuscular reflex model through multi-stage optimization, we found stable walking patterns that show no signs of the IP-typical intersection of ground reaction forces. The non-IP gaits found are stable and successfully reject step-down perturbations, which indicates that an IP is not necessary for locomotion robustness or postural stability. A collision-based analysis shows that non-IP gaits feature center of mass (CoM) dynamics in which the CoM velocity and ground reaction force vectors increasingly oppose each other, indicating an increased mechanical cost of transport. Although our computer simulation results have yet to be confirmed through experimental studies, they already indicate that the role of the IP in postural stability should be further investigated. Moreover, our observations on CoM dynamics and gait efficiency suggest that the IP may have an alternative or additional function that should be considered.

A growing body of literature in fairness-aware ML (fairML) aspires to mitigate machine learning (ML)-related unfairness in automated decision making (ADM) by defining metrics that measure fairness of an ML model and by proposing methods that ensure that trained ML models achieve low values in those measures. However, the underlying concept of fairness, i.e., the question of what fairness is, is rarely discussed, leaving a considerable gap between centuries of philosophical discussion and recent adoption of the concept in the ML community. In this work, we try to bridge this gap by formalizing a consistent concept of fairness and by translating the philosophical considerations into a formal framework for the training and evaluation of ML models in ADM systems. We derive that fairness problems can already arise without the presence of protected attributes, pointing out that fairness and predictive performance are not irreconcilable counterparts, but rather that the latter is necessary to achieve the former. Moreover, we argue why and how causal considerations are necessary when assessing fairness in the presence of protected attributes. We achieve greater linguistic clarity for the discussion of fairML and propose general algorithms for practical applications.

In reinforcement learning, unsupervised skill discovery aims to learn diverse skills without extrinsic rewards. Previous methods discover skills by maximizing the mutual information (MI) between states and skills. However, such an MI objective tends to learn simple and static skills and may hinder exploration. In this paper, we propose a novel unsupervised skill discovery method through contrastive learning among behaviors, which makes the agent produce similar behaviors for the same skill and diverse behaviors for different skills. Under mild assumptions, our objective maximizes the MI between different behaviors based on the same skill, which serves as an upper bound of the previous MI objective. Meanwhile, our method implicitly increases the state entropy to obtain better state coverage. We evaluate our method on challenging mazes and continuous control tasks. The results show that our method generates diverse and far-reaching skills, and also obtains competitive performance in downstream tasks compared to the state-of-the-art methods.
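The core idea of contrasting behaviors can be illustrated with a small numpy sketch, assuming an InfoNCE-style loss over behavior embeddings where rollouts of the same skill are positives and rollouts of other skills are negatives. This is a simplified stand-in, not the paper's exact objective; all names are illustrative, and each skill is assumed to have at least two sampled behaviors.

```python
import numpy as np

def contrastive_skill_loss(z, skills, tau=0.5):
    """InfoNCE-style contrastive loss over behavior embeddings z (n x d):
    same-skill pairs act as positives, different-skill pairs as negatives.
    Assumes every skill label appears at least twice."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarities
    sim = z @ z.T / tau
    n = len(z)
    total = 0.0
    for i in range(n):
        others = np.arange(n) != i
        pos = (skills == skills[i]) & others
        logits = sim[i][others]
        m = logits.max()
        log_denom = m + np.log(np.exp(logits - m).sum())  # stable log-sum-exp
        total += log_denom - sim[i][pos].mean()
    return total / n
```

Minimizing this loss pulls embeddings of behaviors produced by the same skill together and pushes different skills apart, which is the mechanism the abstract describes.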

In this paper, we investigate the impact of fading channel correlation on the performance of dual-hop decode-and-forward (DF) simultaneous wireless information and power transfer (SWIPT) relay networks. More specifically, by considering the power splitting-based relaying (PSR) protocol for the energy harvesting (EH) process, we quantify the effect of positive and negative dependency between the source-to-relay (SR) and relay-to-destination (RD) links on key performance metrics such as ergodic capacity and outage probability. To this end, we first present general formulations for the cumulative distribution function (CDF) of the product of two arbitrary random variables, exploiting copula theory. This is used to derive closed-form expressions for the ergodic capacity and outage probability of a SWIPT relay network under correlated Nakagami-m fading channels. Monte-Carlo (MC) simulation results are provided throughout to validate the correctness of the developed analytical results, showing that the system performance significantly improves under positive dependence between the SR and RD links, compared to the cases of negative dependence and independent links. Results further demonstrate that the ergodic capacity and outage probability improve as the fading severity decreases under the PSR protocol.
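The role of the copula here can be sketched empirically: a copula couples two uniform marginals with a chosen dependence structure, which are then mapped to channel-gain distributions, so that quantities like the outage probability of the product can be estimated under positive, negative, or zero dependence. The sketch below uses the FGM copula (which admits simple conditional sampling) as an illustrative stand-in for the copulas analyzed in the paper; the exponential marginals and all function names are assumptions.

```python
import numpy as np

def fgm_pair(theta, n, rng):
    """Sample (U, V) with uniform marginals from the FGM copula
    C(u, v) = u v (1 + theta (1-u)(1-v)), -1 <= theta <= 1.
    Conditional CDF F_{V|U=u}(v) = v (1 + a (1 - v)) with a = theta (1 - 2u)
    is inverted in closed form (a quadratic in v)."""
    u = rng.random(n)
    w = rng.random(n)                      # conditional quantile level
    a = theta * (1.0 - 2.0 * u)
    with np.errstate(divide="ignore", invalid="ignore"):
        v = np.where(
            np.abs(a) < 1e-12,
            w,                              # a ~ 0: independent case
            ((1 + a) - np.sqrt((1 + a) ** 2 - 4 * a * w)) / (2 * a),
        )
    return u, v

def outage_probability(theta, threshold, n=200_000, seed=0):
    """Empirical P(X * Y < threshold) for Exp(1) 'channel gains' X, Y
    coupled through the FGM copula with dependence parameter theta."""
    rng = np.random.default_rng(seed)
    u, v = fgm_pair(theta, n, rng)
    x, y = -np.log1p(-u), -np.log1p(-v)    # inverse-CDF Exp(1) marginals
    return np.mean(x * y < threshold)
```

Comparing `outage_probability` across positive, negative, and zero `theta` mirrors the dependence comparison the analytical results formalize.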

Motivated by the mathematical modeling of tumor invasion in healthy tissues, we propose a generalized compressible diphasic Navier-Stokes Cahn-Hilliard model that we name G-NSCH. We assume that the two phases of the fluid represent two different populations of cells: cancer cells and healthy tissue. We include in our model possible friction and proliferation effects. The model is kept as general as possible in order to study the mechanical effects that may play a role in the invasive growth of a tumor. In the present work, we focus on the analysis and numerical simulation of the G-NSCH model. Our G-NSCH system is derived rigorously and satisfies the basic mechanics of fluids and the thermodynamics of particles. Under simplifying assumptions, we prove the existence of global weak solutions. We also propose a structure-preserving numerical scheme based on the scalar auxiliary variable method to simulate our system, and present numerical simulations validating the properties of the scheme and illustrating the solutions of the G-NSCH model.

Docker allows for the packaging of applications and dependencies, and its instructions are described in Dockerfiles. Nowadays, version pinning is recommended to avoid unexpected changes in the latest version of a package. However, version pinning in Dockerfiles is not yet widely adopted (only 17k of the 141k Dockerfiles we analyzed use it), because of the maintenance difficulties it introduces. To maintain Dockerfiles with version-pinned packages, it is important to update package versions, not only for improved functionality, but also for software supply chain security, as packages are changed to address vulnerabilities and bug fixes. However, when updating multiple version-pinned packages, it is necessary to understand the dependencies between packages and ensure version compatibility, which is not easy. To address this issue, we explore the applicability of the meta-maintenance approach, which aims to propagate successful updates made in one part of a group of projects that independently maintain a common artifact. We conduct an exploratory analysis of 7,914 repositories on GitHub whose Dockerfiles retrieve packages from GitHub by URL. There were 385 repository groups with the same multiple package combinations, and 208 groups had Dockerfiles with newer version combinations compared to others, which we consider meta-maintenance applicable. Our findings support the potential of meta-maintenance for updating multiple version-pinned packages and also reveal future challenges.
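Detecting URL-based version pins of the kind analyzed above can be sketched with a small regular expression over Dockerfile text. This is a minimal illustration, not the study's actual extraction pipeline; the regex, function name, and the `example-org` repositories in the sample Dockerfile are all hypothetical.

```python
import re

# Match GitHub release/archive URLs that pin a semantic-looking version,
# e.g. .../releases/download/v1.2.3/... or .../archive/2.0.tar.gz
PINNED_GH = re.compile(
    r"github\.com/(?P<owner>[\w.-]+)/(?P<repo>[\w.-]+)/"
    r"(?:releases/download|archive)/v?(?P<version>\d+\.\d+(?:\.\d+)?)"
)

def pinned_packages(dockerfile_text):
    """Return {(owner, repo): version} for GitHub packages pinned by URL."""
    pins = {}
    for m in PINNED_GH.finditer(dockerfile_text):
        pins[(m.group("owner"), m.group("repo"))] = m.group("version")
    return pins

example = """
FROM ubuntu:22.04
RUN curl -LO https://github.com/example-org/toolA/releases/download/v1.2.3/toolA-linux-amd64
ADD https://github.com/example-org/toolB/archive/2.0.tar.gz /opt/
"""
```

Comparing such pin maps across repositories in the same group is the basic operation behind checking whether one repository holds a newer version combination than another.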

We study the computational scalability of a Gaussian process (GP) framework for solving general nonlinear partial differential equations (PDEs). This framework transforms solving PDEs into solving a quadratic optimization problem with nonlinear constraints. Its complexity bottleneck lies in computing with dense kernel matrices obtained from pointwise evaluations of the covariance kernel of the GP and its partial derivatives at collocation points. We present a sparse Cholesky factorization algorithm for such kernel matrices based on the near-sparsity of the Cholesky factor under a new ordering of Diracs and derivative measurements. We rigorously identify the sparsity pattern and quantify the exponentially convergent accuracy of the corresponding Vecchia approximation of the GP, which is optimal in the Kullback-Leibler divergence. This enables us to compute $\epsilon$-approximate inverse Cholesky factors of the kernel matrices with complexity $O(N\log^d(N/\epsilon))$ in space and $O(N\log^{2d}(N/\epsilon))$ in time. With the sparse factors, gradient-based optimization methods become scalable. Furthermore, we can use the oftentimes more efficient Gauss-Newton method, for which we apply the conjugate gradient algorithm with the sparse factor of a reduced kernel matrix as a preconditioner to solve the linear system. We numerically illustrate our algorithm's near-linear space/time complexity for a broad class of nonlinear PDEs such as the nonlinear elliptic, Burgers, and Monge-Amp\`ere equations. In summary, we provide a fast, scalable, and accurate method for solving general PDEs with GPs.
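The Vecchia-type sparse inverse Cholesky idea can be sketched in a few lines: each column of the factor is computed from a small kernel submatrix over the current point and its nearest predecessors in the ordering. The sketch below uses 1-D points, a plain squared-exponential kernel, and no derivative measurements, so it only illustrates the factorization mechanism, not the paper's ordering of Diracs and derivative measurements; all names are illustrative.

```python
import numpy as np

def rbf_kernel(x, y, ell=0.2):
    """Squared-exponential kernel on 1-D point sets."""
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2.0 * ell ** 2))

def vecchia_inverse_cholesky(x, k, ell=0.2, jitter=1e-10):
    """Column-by-column sparse upper-triangular factor U with
    Theta^{-1} ~= U U^T, conditioning each point only on its k nearest
    predecessors in the given ordering (a Vecchia approximation).
    For k >= n - 1 the factorization is exact."""
    n = len(x)
    U = np.zeros((n, n))
    for i in range(n):
        prev = np.arange(i)
        nbrs = prev[np.argsort(np.abs(x[prev] - x[i]))[:k]]
        idx = np.append(nbrs, i).astype(int)          # conditioning set + self
        K = rbf_kernel(x[idx], x[idx], ell) + jitter * np.eye(len(idx))
        e = np.zeros(len(idx)); e[-1] = 1.0
        col = np.linalg.solve(K, e)                   # small dense solve
        U[idx, i] = col / np.sqrt(col[-1])
    return U
```

Each column costs an $O(k^3)$ dense solve, so the whole factor is built from $n$ small local problems rather than one global $O(n^3)$ factorization, which is the source of the near-linear scaling described above.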

The aim of this paper is to study the shape optimization method for solving the Bernoulli free boundary problem, a well-known ill-posed problem that seeks the unknown free boundary through Cauchy data. Different formulations have been proposed in the literature that differ in the choice of the objective functional. Specifically, it was shown respectively in [14] and [16] that tracking Neumann data is well-posed but tracking Dirichlet data is not. In this paper we propose a new well-posed objective functional that tracks Dirichlet data at the free boundary. By calculating the Euler derivative and the shape Hessian of the objective functional, we show that the new formulation is well-posed, i.e., the shape Hessian is coercive at the minimizers. The coercivity of the shape Hessian may ensure the existence of optimal solutions for the nonlinear Ritz-Galerkin approximation method and its convergence, and is thus crucial for the formulation. In summary, we conclude that tracking Dirichlet or Neumann data in its energy norm is not sufficient, but tracking it in a norm half an order higher is well-posed. To support our theoretical results we carry out extensive numerical experiments.

The inductive biases of graph representation learning algorithms are often encoded in the background geometry of their embedding space. In this paper, we show that general directed graphs can be effectively represented by an embedding model that combines three components: a pseudo-Riemannian metric structure, a non-trivial global topology, and a unique likelihood function that explicitly incorporates a preferred direction in embedding space. We demonstrate the representational capabilities of this method by applying it to the task of link prediction on a series of synthetic and real directed graphs from natural language applications and biology. In particular, we show that low-dimensional cylindrical Minkowski and anti-de Sitter spacetimes can produce equal or better graph representations than curved Riemannian manifolds of higher dimensions.
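The way a Minkowski-signature metric can encode edge direction is easy to illustrate: the time axis singles out a preferred direction, and an edge $u \to v$ is plausible when $v$ lies in the future light cone of $u$. The sketch below uses a hard cone-membership predicate for clarity; the paper's likelihood is a smooth function of the embedding, and both function names here are illustrative.

```python
import numpy as np

def minkowski_interval(u, v):
    """Squared interval between events u, v in (1+d)-dimensional Minkowski
    space with signature (-, +, ..., +): ds^2 = -(dt)^2 + |dx|^2.
    Negative values mean timelike (causally connectable) separation."""
    d = np.asarray(v, float) - np.asarray(u, float)
    return -d[0] ** 2 + np.dot(d[1:], d[1:])

def in_future_cone(u, v):
    """Illustrative directed-edge predicate: v lies in u's future light cone,
    i.e., the separation is timelike and the time difference is positive."""
    return bool(minkowski_interval(u, v) < 0) and v[0] > u[0]
```

Because the predicate is asymmetric in the time coordinate, it naturally distinguishes $u \to v$ from $v \to u$, which a symmetric Riemannian distance cannot do on its own.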

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This allows us to relax both the optimal value and solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
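The smoothing step can be sketched concretely with the entropic (log-sum-exp) regularizer, one of the strongly convex choices this line of work considers: replacing `max` with a temperature-controlled soft maximum makes every value in the DP recursion a smooth function of the input scores. The sketch below applies this to a Viterbi-style value recursion with zero transition scores to keep it minimal; function names are illustrative.

```python
import numpy as np

def softmax_max(x, gamma=1.0):
    """Smoothed max operator: gamma * log(sum(exp(x / gamma))),
    computed stably. Recovers the hard max as gamma -> 0, and always
    upper-bounds it."""
    x = np.asarray(x, dtype=float)
    m = x.max()
    return m + gamma * np.log(np.exp((x - m) / gamma).sum())

def smoothed_viterbi_value(theta, gamma=1.0):
    """Viterbi-style DP value with the hard max replaced by softmax_max.
    theta is a (T, S) array of per-step state scores; transition scores
    are taken to be zero to keep the sketch minimal."""
    v = theta[0].astype(float).copy()
    for t in range(1, len(theta)):
        v = softmax_max(v, gamma) + theta[t]   # smoothed Bellman update
    return softmax_max(v, gamma)

theta = np.array([[1.0, 0.0], [0.0, 2.0], [0.5, 0.5]])
```

Because `softmax_max` is differentiable everywhere, the resulting value is differentiable in `theta`, and its gradient yields the expected (relaxed) path rather than a single argmax path.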
