
This work is motivated by the need for efficient numerical simulations of gas flows in the serpentine channels used in proton-exchange membrane fuel cells. In particular, we consider the Poisson problem in a 2D domain composed of several long straight rectangular sections and several bends. To speed up the resolution, we propose a 0D model in the rectangular parts of the channel and a Finite Element resolution in the bends. To find a good compromise between precision and computational cost, the challenge is twofold: how to choose a suitable position for the interface between the 0D and 2D models, and how to control the discretization error in the bends. We present an \textit{a posteriori} error estimator based on an equilibrated flux reconstruction in the subdomains where the Finite Element method is applied. The estimates give a global upper bound on the error measured in the energy norm of the difference between the exact and approximate solutions on the whole domain. They are guaranteed, meaning that they feature no undetermined constants. Global lower bounds for the error are also derived. An adaptive algorithm is proposed to use the estimator smartly for the aforementioned twofold challenge. A numerical validation of the estimator and the algorithm completes the work.
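For concreteness, a guaranteed bound of the equilibrated-flux type described above typically takes the following schematic form (a Prager-Synge-style sketch under the assumption of exact equilibration; the paper's estimator also accounts for the 0D/2D interface, which is not reflected here):

```latex
% Schematic guaranteed upper bound (Prager-Synge type): if the
% reconstructed flux \sigma_h \in H(\mathrm{div},\Omega) satisfies the
% equilibration condition \nabla\cdot\sigma_h = f, then
\|\nabla(u - u_h)\|_{\Omega}
  \;\le\; \eta := \Big( \sum_{K \in \mathcal{T}_h}
  \|\sigma_h + \nabla u_h\|_{K}^{2} \Big)^{1/2}.
```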

Related content

A rectangulation is a decomposition of a rectangle into finitely many rectangles. Via natural equivalence relations, rectangulations can be seen as combinatorial objects with a rich structure, with links to lattice congruences, flip graphs, polytopes, lattice paths, Hopf algebras, etc. In this paper, we first revisit the structure of the respective equivalence classes: weak rectangulations, which preserve rectangle-segment adjacencies, and strong rectangulations, which preserve rectangle-rectangle adjacencies. We thoroughly investigate posets defined by adjacency in rectangulations of both kinds, and unify and simplify known bijections between rectangulations and permutation classes. This yields a uniform treatment of mappings between permutations and rectangulations that consolidates results from earlier contributions and emphasizes the parallelism and differences between the weak and the strong cases. Then, we consider the special case of guillotine rectangulations, and prove that, under all known mappings between permutations and rectangulations, they can be characterized by the avoidance of two mesh patterns that correspond to "windmills" in rectangulations. This yields new permutation classes in bijection with weak guillotine rectangulations, and the first known permutation class in bijection with strong guillotine rectangulations. Finally, we address enumerative questions and prove asymptotic bounds for several families of strong rectangulations.
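To make the pattern-avoidance statement concrete, the sketch below implements the simpler classical notion of pattern containment that mesh patterns refine (mesh patterns additionally forbid entries in shaded cells, which this toy code does not model; all function names are ours):

```python
from itertools import combinations

def standardize(vals):
    """Rank the values of a subsequence into a 1-based permutation pattern."""
    order = sorted(range(len(vals)), key=lambda i: vals[i])
    patt = [0] * len(vals)
    for rank, i in enumerate(order):
        patt[i] = rank + 1
    return tuple(patt)

def contains(perm, patt):
    """Classical containment: some subsequence of perm is order-isomorphic
    to patt. Mesh patterns, as used in the paper, further constrain the
    occurrence; that refinement is not implemented here."""
    return any(standardize([perm[i] for i in idx]) == tuple(patt)
               for idx in combinations(range(len(perm)), len(patt)))

def avoids(perm, *patterns):
    return not any(contains(perm, p) for p in patterns)

print(avoids((2, 4, 1, 3), (1, 2, 3)))  # True: no increasing subsequence of length 3
```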

The growth pattern of an invasive cell-to-cell propagation (the successive coronas) on the square grid is a tilted square. On the triangular and hexagonal grids, it is a hexagon. Remarkably, on the aperiodic structure of Penrose tilings, this cell-to-cell diffusion process tends, in the limit, to a regular decagon. In this article we generalize this result to any regular multigrid dual tiling, by defining the characteristic polygon of a multigrid and its dual tiling. Exploiting this elegant duality allows us to fully understand why such a surprising phenomenon, highly regular polygonal shapes emerging from aperiodic underlying structures, occurs.
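As a concrete instance of the corona process, here is a minimal sketch of the diffusion on the square grid, where the k-th corona is exactly the tilted square |x| + |y| = k (the multigrid and Penrose cases require the dual-tiling machinery developed in the article and are not attempted here):

```python
def corona(k):
    """k-th corona of cell-to-cell propagation from the origin on the
    square grid with 4-neighbour adjacency: the cells first reached at
    step k, which form the tilted square |x| + |y| = k."""
    frontier, seen = {(0, 0)}, {(0, 0)}
    for _ in range(k):
        nxt = set()
        for x, y in frontier:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                cell = (x + dx, y + dy)
                if cell not in seen:
                    seen.add(cell)
                    nxt.add(cell)
        frontier = nxt
    return frontier

assert corona(5) == {(x, y) for x in range(-5, 6)
                     for y in range(-5, 6) if abs(x) + abs(y) == 5}
```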

Several physical problems modeled by second-order elliptic equations can be efficiently solved using mixed finite elements of the Raviart-Thomas family RTk for N-simplexes, introduced in the seventies. When Neumann conditions are prescribed on a curvilinear boundary, the normal component of the flux variable should preferably not take up values at nodes shifted to the boundary of the approximating polytope in the corresponding normal direction, because the method's accuracy degrades, as shown in previous papers by the first author et al. In those works an order-preserving technique was studied, based on a parametric version of these elements with curved simplexes. In this article an alternative with straight-edged triangles for two-dimensional problems is proposed. The key point of this method is a Petrov-Galerkin formulation of the mixed problem, in which the test-flux space is slightly different from the shape-flux space. After describing the underlying variant of RTk, we show that it gives rise to a uniformly stable and optimally convergent method, taking the Poisson equation as a model problem.
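For orientation, a Petrov-Galerkin mixed formulation for the Poisson model problem can be sketched as follows (a schematic statement assuming homogeneous boundary data; V_h denotes the shape-flux space, W_h the slightly different test-flux space, and Q_h the scalar multiplier space):

```latex
% Schematic Petrov-Galerkin mixed formulation of the Poisson model
% problem -\Delta u = f, with flux \sigma = -\nabla u:
% find (\sigma_h, u_h) \in V_h \times Q_h such that
\begin{aligned}
(\sigma_h, \tau)_{\Omega} - (u_h, \nabla\cdot\tau)_{\Omega} &= 0
  && \forall\, \tau \in W_h,\\
(\nabla\cdot\sigma_h, q)_{\Omega} &= (f, q)_{\Omega}
  && \forall\, q \in Q_h.
\end{aligned}
```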

A central challenge in the verification of quantum computers is benchmarking their performance as a whole and demonstrating their computational capabilities. In this work, we find a universal model of quantum computation, Bell sampling, that can be used for both of those tasks and thus provides an ideal stepping stone towards fault tolerance. In Bell sampling, we measure two copies of a state prepared by a quantum circuit in the transversal Bell basis. We show that the Bell samples are classically intractable to produce and at the same time constitute what we call a circuit shadow: from the Bell samples we can efficiently extract information about the quantum circuit preparing the state, as well as diagnose circuit errors. In addition to known properties that can be efficiently extracted from Bell samples, we give two new and efficient protocols, a test for the depth of the circuit and an algorithm to estimate a lower bound on the number of T gates in the circuit. With some additional measurements, our algorithm learns a full description of states prepared by circuits with low T-count.
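A minimal state-vector sketch of the transversal Bell measurement (our own toy implementation, not the authors' code): qubit i of the first copy is paired with qubit i of the second, each pair is rotated into the Bell basis by a CNOT followed by a Hadamard on the control, and the computational basis is sampled.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def apply_gate(state, gate, qubits):
    """Apply a k-qubit gate (2^k x 2^k matrix) to the given tensor axes."""
    k = len(qubits)
    s = np.moveaxis(state, qubits, list(range(k)))
    shape = s.shape
    s = (gate @ s.reshape(2**k, -1)).reshape(shape)
    return np.moveaxis(s, list(range(k)), qubits)

def bell_samples(psi, n, shots, rng):
    """Sample two copies of the n-qubit state psi in the transversal Bell
    basis: pair (i, n+i) is mapped to the computational basis by
    CNOT(i -> n+i) followed by H on qubit i."""
    state = np.kron(psi, psi).reshape([2] * (2 * n))  # |psi> tensor |psi>
    for i in range(n):
        state = apply_gate(state, CNOT, [i, n + i])
        state = apply_gate(state, H, [i])
    probs = np.abs(state.reshape(-1)) ** 2
    return rng.choice(len(probs), size=shots, p=probs / probs.sum())

# Two copies of the single-qubit magic state T|+>:
psi = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)
print(bell_samples(psi, n=1, shots=10, rng=np.random.default_rng(1)))
```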

LiNGAM determines the variable order from cause to effect using additive noise models, but it faces challenges with confounding. Previous methods maintained LiNGAM's fundamental structure while trying to identify and address variables affected by confounding. As a result, these methods required significant computational resources regardless of the presence of confounding, and they did not ensure the detection of all confounding types. In contrast, this paper enhances LiNGAM by introducing LiNGAM-MMI, a method that quantifies the magnitude of confounding using KL divergence and arranges the variables to minimize its impact. This method efficiently achieves a globally optimal variable order through a shortest-path problem formulation. LiNGAM-MMI processes data as efficiently as traditional LiNGAM in scenarios without confounding, while effectively addressing confounding situations. Our experimental results suggest that LiNGAM-MMI more accurately determines the correct variable order, both in the presence and absence of confounding.
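The shortest-path formulation can be sketched as a Held-Karp-style dynamic program over subsets of variables; in the sketch below, step_cost is a hypothetical stand-in for the paper's KL-divergence-based confounding measure, which we do not reproduce:

```python
def optimal_order(n, step_cost):
    """Globally optimal variable order via dynamic programming over
    subsets, equivalent to a shortest path in the subset lattice.
    step_cost(placed, v) is a hypothetical cost of placing variable v
    next, given the frozenset of variables already ordered."""
    best = {frozenset(): (0.0, [])}
    for size in range(n):
        for subset in [s for s in best if len(s) == size]:
            cost, order = best[subset]
            for v in range(n):
                if v in subset:
                    continue
                nxt = subset | {v}
                cand = (cost + step_cost(subset, v), order + [v])
                if nxt not in best or cand[0] < best[nxt][0]:
                    best[nxt] = cand
    return best[frozenset(range(n))]

# Toy usage with an arbitrary cost; prints (total_cost, order).
print(optimal_order(4, lambda placed, v: v * (len(placed) + 1)))
```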

Monitoring the distribution and size structure of long-living shrubs, such as Juniperus communis, can be used to estimate the long-term effects of climate change on high-mountain and high-latitude ecosystems. Historical aerial very-high-resolution imagery offers a retrospective tool to monitor shrub growth and distribution at high precision. Currently, deep learning models provide impressive results for detecting and delineating the contour of objects with defined shapes. However, adapting these models to detect natural objects that express complex growth patterns, such as junipers, is still a challenging task. This research presents a novel approach that leverages remotely sensed RGB imagery in conjunction with Mask R-CNN-based instance segmentation models to individually delineate Juniperus shrubs above the treeline in Sierra Nevada (Spain). In this study, we propose a new data construction design that consists of using photo-interpreted (PI) and field work (FW) data to respectively develop and externally validate the model. We also propose a new shrub-tailored evaluation algorithm based on a new metric called Multiple Intersections over Ground Truth Area (MIoGTA) to assess and optimize the model's shrub delineation performance. Finally, we deploy the developed model for the first time to generate a wall-to-wall map of Juniperus individuals. The experimental results demonstrate the efficiency of our dual data construction approach in overcoming the limitations associated with traditional field survey methods. They also highlight the robustness of the MIoGTA metric in evaluating instance segmentation models on species with complex growth patterns, showing greater resilience against data annotation uncertainty. Furthermore, they show the effectiveness of employing Mask R-CNN with a ResNet101-C4 backbone in delineating PI and FW shrubs, achieving F1-scores of 87.87% and 76.86%, respectively.
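One plausible reading of the MIoGTA metric (our interpretation; the paper's exact definition may differ) is the fraction of each ground-truth shrub covered by the union of predicted masks, which avoids penalizing a shrub that the model splits across several instances:

```python
import numpy as np

def miogta(gt_mask, pred_masks):
    """Multiple Intersections over Ground Truth Area, as we read it:
    fraction of the ground-truth shrub covered by the union of all
    predicted masks, so a shrub split across several predictions is not
    penalized as it would be by one-to-one IoU."""
    gt = gt_mask.astype(bool)
    covered = np.zeros_like(gt)
    for m in pred_masks:
        covered |= m.astype(bool)
    return np.logical_and(gt, covered).sum() / max(gt.sum(), 1)

# Toy check: two predictions that together cover one shrub exactly.
gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True
a = np.zeros_like(gt); a[2:6, 2:4] = True
b = np.zeros_like(gt); b[2:6, 4:6] = True
print(miogta(gt, [a, b]))  # 1.0
```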

We address the task of deriving fixpoint equations from modal logics characterizing behavioural equivalences and metrics (summarized under the term conformances). We rely on earlier work that obtains Hennessy-Milner theorems as corollaries to a fixpoint preservation property along Galois connections between suitable lattices. We instantiate this to the setting of coalgebras, in which we spell out the compatibility property ensuring that we can derive a behaviour function whose greatest fixpoint coincides with the logical conformance. We then concentrate on the linear-time case, for which we study coalgebras based on the machine functor living in Eilenberg-Moore categories, a scenario for which we obtain a particularly simple logic and fixpoint equation. The theory is instantiated to concrete examples, both in the branching-time case (bisimilarity and behavioural metrics) and in the linear-time case (trace equivalences and trace distances).
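As a branching-time illustration, bisimilarity on a finite labelled transition system arises as the greatest fixpoint of a refinement step, computable by Kleene iteration from the full relation (a toy sketch with our own naming, not the coalgebraic machinery of the paper):

```python
def gfp(step, top):
    """Greatest fixpoint by Kleene iteration from the top element,
    assuming step is monotone and the lattice is finite."""
    x = top
    while (y := step(x)) != x:
        x = y
    return x

# Tiny labelled transition system: state -> set of (label, successor).
lts = {0: {('a', 1)}, 1: {('b', 1)}, 2: {('a', 3)}, 3: {('b', 3)}}

def refine(rel):
    """One step of the bisimulation functional on relations."""
    def simulates(s, t):
        return all(any(a == b and (p, q) in rel for b, q in lts[t])
                   for a, p in lts[s])
    return frozenset((s, t) for s, t in rel
                     if simulates(s, t) and simulates(t, s))

top = frozenset((s, t) for s in lts for t in lts)
bisim = gfp(refine, top)  # greatest fixpoint = bisimilarity
print((0, 2) in bisim)    # True: states 0 and 2 are bisimilar
```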

Previous approaches to modelling interval-censored data have often relied on homogeneity assumptions, in the sense that the censoring mechanism, the underlying distribution of occurrence times, or both are taken to be time-invariant. In this work, we introduce a model which allows for non-homogeneous behaviour in both cases. In particular, we outline a censoring mechanism based on semi-Markov processes in which interval generation is assumed to be time-dependent, and we propose a Markov point process model for the underlying occurrence time distribution. We prove the existence of this process and derive the conditional distribution of the occurrence times given the intervals. We provide a framework within which the process can be accurately modelled, and subsequently compare our model to homogeneous approaches by way of a parametric example.
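As a toy illustration of time-dependent interval generation (a drastic simplification of the semi-Markov mechanism; width_at is a hypothetical inspection-width function of our own), occurrence times can be reduced to counts over inspection windows whose widths vary with time:

```python
import numpy as np

def interval_censor(events, horizon, width_at):
    """Reduce exact occurrence times to counts over inspection intervals
    whose widths depend on time, so the censoring mechanism itself is
    non-homogeneous. Toy stand-in for the semi-Markov mechanism."""
    t, records = 0.0, []
    while t < horizon:
        w = width_at(t)
        count = int(np.sum((events >= t) & (events < t + w)))
        records.append((t, t + w, count))
        t += w
    return records

rng = np.random.default_rng(7)
events = np.sort(rng.uniform(0.0, 10.0, size=20))
print(interval_censor(events, 10.0, lambda t: 0.5 + 0.1 * t))  # widening windows
```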

We propose an adaptive iteratively linearized finite element method (AILFEM) in the context of strongly monotone nonlinear operators in Hilbert spaces. The approach combines adaptive mesh-refinement with an energy-contractive linearization scheme (e.g., the Ka\v{c}anov method) and a norm-contractive algebraic solver (e.g., an optimal geometric multigrid method). Crucially, a novel parameter-free algebraic stopping criterion is designed and we prove that it leads to a uniformly bounded number of algebraic solver steps. Unlike available results requiring sufficiently small adaptivity parameters to ensure even plain convergence, the new AILFEM algorithm guarantees full R-linear convergence for arbitrary adaptivity parameters. Thus, parameter-robust convergence is guaranteed. Moreover, for sufficiently small adaptivity parameters, the new adaptive algorithm guarantees optimal complexity, i.e., optimal convergence rates with respect to the overall computational cost and, hence, time.
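Schematically, full R-linear convergence asserts the following for a quasi-error combining energy error and estimator at each step of the adaptive loop (a sketch of the statement's shape; the paper's precise quasi-error differs in detail):

```latex
% Schematic shape of full R-linear convergence: with \Delta_\ell the
% quasi-error at step \ell, there exist C > 0 and 0 < q < 1, valid for
% arbitrary adaptivity parameters, such that
\Delta_{\ell'} \;\le\; C\, q^{\,\ell' - \ell}\, \Delta_{\ell}
\qquad \text{for all } \ell' \ge \ell \ge 0.
```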

For the Crouzeix-Raviart and enriched Crouzeix-Raviart elements, asymptotic expansions of eigenvalues of the Stokes operator are derived by establishing two pseudostress interpolations, which admit a full one-order supercloseness with respect to the numerical velocity and the pressure, respectively. The design of these interpolations overcomes the difficulty caused by the lack of supercloseness of the canonical interpolations for the two nonconforming elements, and leads to an intrinsic and concise asymptotic analysis of numerical eigenvalues, which proves an optimal superconvergence of eigenvalues by the extrapolation algorithm. Meanwhile, an optimal superconvergence of postprocessed approximations for the Stokes equation is proved using this supercloseness. Finally, numerical experiments are presented to verify the theoretical results.
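Schematically, extrapolation exploits an asymptotic expansion of the numerical eigenvalues; assuming an expansion with a single leading term of order h^s (the paper derives the precise expansions for the two elements), one Richardson step cancels it:

```latex
% Schematic Richardson step: if a numerical eigenvalue admits the
% expansion \lambda_h = \lambda + C h^{s} + o(h^{s}), then
\lambda^{\mathrm{ex}}
  := \frac{2^{s}\,\lambda_{h/2} - \lambda_h}{2^{s} - 1}
  \;=\; \lambda + o(h^{s}),
% so the extrapolated value is superconvergent compared with \lambda_h.
```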
