
Devising a suitable discretization of the Keller-Segel equations for chemotaxis is a challenging problem due to their inherently convective nature. This paper introduces a new upwind, mass-conservative, positive and energy-dissipative discontinuous Galerkin scheme for the Keller-Segel model, based on the gradient-flow structure of the equations. In addition, we present numerical experiments that confirm the aforementioned properties of the discretization. The numerical results highlight the robust behaviour of the approximation in the case of chemotactic collapse, where very steep gradients appear.
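
For reference, one common minimal parabolic-parabolic form of the Keller-Segel system (the variant actually treated in the paper may differ in scaling, boundary conditions, or the parabolic/elliptic choice for the chemoattractant equation) reads
\[
\partial_t u = \nabla\cdot\bigl(\nabla u - \chi\, u\,\nabla v\bigr), \qquad \partial_t v = \Delta v - v + u,
\]
where $u$ is the cell density, $v$ the chemoattractant concentration and $\chi>0$ the chemotactic sensitivity; the convective term $\chi\, u\,\nabla v$ is responsible for the steep gradients that develop at chemotactic collapse.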

Related content

Many problems in science and engineering can be rigorously recast into minimizing a suitable energy functional. We have been developing efficient and flexible solution strategies to tackle various minimization problems by employing finite element discretization with P1 triangular elements [1,2]. An extension to rectangular hp-finite elements in 2D is introduced in this contribution.
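
As an illustration of the general strategy (not the solvers of [1,2]), the minimal 1D Python/NumPy sketch below minimises the Dirichlet energy $J(u)=\tfrac12\int_0^1 |u'|^2\,dx - \int_0^1 f\,u\,dx$ over P1 functions on a uniform mesh; the mesh size, right-hand side and optimizer are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    # Minimal 1D sketch: minimise the discrete Dirichlet energy
    #   J(u) = 1/2 * int |u'|^2 dx - int f*u dx,  with u(0) = u(1) = 0,
    # over piecewise-linear (P1) functions on a uniform mesh of [0, 1].
    n = 32                                    # number of elements (illustrative)
    h = 1.0 / n
    f = np.ones(n + 1)                        # illustrative right-hand side f = 1

    def energy(u_inner):
        u = np.concatenate(([0.0], u_inner, [0.0]))   # homogeneous Dirichlet BCs
        grad = np.diff(u) / h                         # elementwise derivative u'
        dirichlet = 0.5 * h * np.sum(grad ** 2)       # 1/2 * int |u'|^2 dx
        load = h * (np.sum(f * u) - 0.5 * (f[0] * u[0] + f[-1] * u[-1]))  # trapezoid rule for int f*u dx
        return dirichlet - load

    res = minimize(energy, np.zeros(n - 1), method="L-BFGS-B")
    u_h = np.concatenate(([0.0], res.x, [0.0]))       # discrete minimiser

For $f\equiv 1$ the discrete minimiser approximates $u(x)=x(1-x)/2$, which serves as a simple sanity check.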

Trace finite element methods have become a popular option for solving surface partial differential equations, especially in problems where surface and bulk effects are coupled. In such methods a surface mesh is formed by approximately intersecting the continuous surface on which the PDE is posed with a three-dimensional (bulk) tetrahedral mesh. In classical $H^1$-conforming trace methods, the surface finite element space is obtained by restricting a bulk finite element space to the surface mesh. It is not clear how to carry out a similar procedure in order to obtain other important types of finite element spaces such as $H({\rm div})$-conforming spaces. Following previous work of Olshanskii, Reusken, and Xu on $H^1$-conforming methods, we develop a ``quasi-trace'' mixed method for the Laplace-Beltrami problem. The finite element mesh is taken to be the intersection of the surface with a regular tetrahedral bulk mesh as previously described, resulting in a surface triangulation that is highly unstructured and anisotropic but satisfies a classical maximum angle condition. The mixed method is then employed on this mesh. Optimal error estimates with respect to the bulk mesh size are proved along with superconvergent estimates for the projection of the scalar error and a postprocessed scalar approximation.
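
For orientation, a standard mixed formulation of the Laplace-Beltrami problem $-\Delta_\Gamma u = f$ on a closed surface $\Gamma$ (stated generically here; the quasi-trace construction of the paper replaces $\Gamma$ by the unstructured surface triangulation described above) seeks $\boldsymbol{\sigma}\in H({\rm div}_\Gamma)$ and $u\in L^2(\Gamma)$, with $u$ and $f$ of zero mean, such that
\[
\int_\Gamma \boldsymbol{\sigma}\cdot\boldsymbol{\tau}\,ds + \int_\Gamma u\,{\rm div}_\Gamma\,\boldsymbol{\tau}\,ds = 0
\quad\text{and}\quad
\int_\Gamma v\,{\rm div}_\Gamma\,\boldsymbol{\sigma}\,ds = -\int_\Gamma f\,v\,ds
\]
for all admissible $\boldsymbol{\tau}$ and $v$, so that $\boldsymbol{\sigma}=\nabla_\Gamma u$ plays the role of the surface flux.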

We present a novel stabilized isogeometric formulation for the Stokes problem, where the geometry of interest is obtained via overlapping NURBS (non-uniform rational B-spline) patches, i.e., one patch on top of another in an arbitrary but predefined hierarchical order. All the visible regions constitute the computational domain, whereas independent patches are coupled through visible interfaces using Nitsche's formulation. Such a geometric representation inevitably involves trimming, which may yield trimmed elements of extremely small measure (referred to as bad elements) and thus lead to numerical instability. Motivated by the minimal stabilization method that rigorously guarantees stability for trimmed geometries [1], in this work we generalize it to the Stokes problem on overlapping patches. Central to our method is the distinct treatment of the pressure and velocity spaces: the velocity is stabilized through the flux terms on interfaces, whereas the pressure is stabilized in all the bad elements. We provide a priori error estimates with a comprehensive theoretical study. Through a suite of numerical tests, we first show that optimal convergence rates are achieved, consistent with our theoretical findings. Second, we show that the proposed stabilization improves the accuracy of the pressure by several orders of magnitude compared to the results without stabilization. Finally, we demonstrate the flexibility and efficiency of the proposed method in capturing local features in the solution field.
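
Schematically, on the visible computational domain $\Omega$ the Stokes problem reads
\[
-\mu\,\Delta\mathbf{u} + \nabla p = \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0 \quad \text{in } \Omega,
\]
and a Nitsche-type coupling on a visible interface $\gamma$ between two patches adds consistency, symmetry and penalty terms of the generic form
\[
-\int_\gamma \{\mu\,\nabla\mathbf{u}\,\mathbf{n} - p\,\mathbf{n}\}\cdot[\mathbf{v}]\,ds
-\int_\gamma \{\mu\,\nabla\mathbf{v}\,\mathbf{n} - q\,\mathbf{n}\}\cdot[\mathbf{u}]\,ds
+\frac{\beta\mu}{h}\int_\gamma [\mathbf{u}]\cdot[\mathbf{v}]\,ds
\]
to the weak form, where $\{\cdot\}$ and $[\cdot]$ denote suitable averages and jumps and $\beta$ is a penalty parameter; the precise weighting of the averages and the minimal stabilization of the bad trimmed elements follow the paper and [1], not this generic sketch.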

We make two contributions to the Isolation Forest method for anomaly and outlier detection. The first contribution is an information-theoretically motivated generalisation of the score function used to aggregate the scores across random tree estimators. This generalisation allows one to take into account not just the ensemble average across trees but the whole distribution. The second contribution is an alternative scoring function at the level of the individual tree estimator, in which we replace the depth-based scoring of the Isolation Forest with one based on hyper-volumes associated with an isolation tree's leaf nodes. We motivate the use of both methods on generated data and also evaluate them on 34 datasets from the recent and exhaustive ``ADBench'' benchmark, finding significant improvement over the standard Isolation Forest for both variants on some datasets and improvement on average across all datasets for one of the two variants. The code to reproduce our results is made available as part of the submission.
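
To make the aggregation point concrete, the Python sketch below contrasts the classic mean-path-length score of the standard Isolation Forest with an aggregation that looks at the whole per-tree score distribution; `distributional_score` is a hypothetical stand-in for illustration only, not the paper's information-theoretic score.

    import numpy as np

    EULER_GAMMA = 0.5772156649

    def c(n):
        # expected path length of an unsuccessful BST search; the usual normaliser
        if n <= 1:
            return 0.0
        return 2.0 * (np.log(n - 1) + EULER_GAMMA) - 2.0 * (n - 1) / n

    def iforest_score(path_lengths, n_samples):
        # classic Isolation Forest score: aggregate by the mean path length
        return 2.0 ** (-np.mean(path_lengths) / c(n_samples))

    def distributional_score(path_lengths, n_samples, q=0.9):
        # hypothetical variant: aggregate via a high quantile of the per-tree
        # scores, so trees that isolate the point unusually quickly dominate
        per_tree = 2.0 ** (-np.asarray(path_lengths) / c(n_samples))
        return np.quantile(per_tree, q)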

The Geographical and Temporal Weighted Regression (GTWR) model is an important local technique for exploring spatial heterogeneity and temporal dependence in data relationships, owing to its high fitting capacity on real data. In this article, we consider a GTWR model driven by a spatio-temporal noise that is colored in space and fractional in time. Concerning the covariates, we assume that they are correlated and consider two types of interaction between them, weak and strong. Under these assumptions, the weighted least squares (WLS) estimator is obtained, together with its rate of convergence. To demonstrate the good performance of the estimator, we provide a simulation study of four different scenarios, in which the residuals oscillate with small variation around zero. The STARMA package of the R software allows us to obtain a variant of the $R^{2}$ coefficient, with values very close to 1, which means that most of the variability is explained by the model.
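
For illustration, the pointwise WLS estimate in a GTWR model takes the form $\hat{\beta}(u_0,v_0,t_0) = (X^{\top} W X)^{-1} X^{\top} W y$, where $W$ is a diagonal matrix of spatio-temporal kernel weights; the Gaussian kernel and the bandwidths in the Python sketch below are illustrative choices, not the specification used in the article.

    import numpy as np

    def gtwr_wls(X, y, coords, times, target_xy, target_t, bw_s, bw_t):
        # Gaussian spatio-temporal kernel weights (illustrative choice)
        ds = np.linalg.norm(coords - target_xy, axis=1)   # spatial distances
        dt = np.abs(times - target_t)                     # temporal distances
        w = np.exp(-(ds / bw_s) ** 2 - (dt / bw_t) ** 2)
        W = np.diag(w)
        # local weighted least squares estimate at (target_xy, target_t)
        return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)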

Many protocols in distributed computing rely on a source of randomness, usually called a random beacon, both for their applicability and security. This is especially true for proof-of-stake blockchain protocols, in which the next miner or set of miners has to be chosen randomly and each party's likelihood of being selected is proportional to their stake in the cryptocurrency. Current random beacons used in proof-of-stake protocols, such as Ouroboros and Algorand, have two fundamental limitations: Either (i)~they rely on pseudorandomness, e.g.~assuming that the output of a hash function is uniform, which is a widely-used but unproven assumption, or (ii)~they generate their randomness using a distributed protocol in which several participants are required to submit random numbers, which are then used in the generation of a final random result. However, in this case, there is no guarantee that the numbers provided by the parties are uniformly random and there is no incentive for the parties to honestly generate uniform randomness. Most random beacons have both limitations. In this thesis, we provide a protocol for distributed generation of randomness. Our protocol does not rely on pseudorandomness at all. Similar to some of the previous approaches, it uses random inputs by different participants to generate a final random result. However, the crucial difference is that we provide a game-theoretic guarantee showing that it is in everyone's best interest to submit uniformly random numbers. Hence, our approach is the first to incentivize honest behavior instead of just assuming it. Moreover, the approach is trustless and generates unbiased random numbers. It is also tamper-proof, and no party can change the output or affect its distribution. Finally, it is designed with modularity in mind and can be easily plugged into existing distributed protocols such as proof-of-stake blockchains.
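
To fix ideas about "several participants submit random numbers that are then combined", here is a textbook commit-reveal XOR combiner in Python; it is only a baseline illustration, not the thesis's protocol, which adds the game-theoretic incentive layer and avoids any uniformity assumption on hash outputs.

    import hashlib
    import secrets
    from functools import reduce

    def commit(contribution, salt):
        # binding/hiding commitment published in the commit phase
        return hashlib.sha256(salt + contribution).digest()

    def combine(revealed):
        # reveal phase: XOR all revealed 32-byte contributions into one output
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), revealed)

    contributions = [secrets.token_bytes(32) for _ in range(5)]   # each party's input
    salts = [secrets.token_bytes(16) for _ in range(5)]
    commitments = [commit(c, s) for c, s in zip(contributions, salts)]  # published first
    beacon = combine(contributions)   # computed only after every party reveals (c, s)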

The adoption of deep neural networks (DNNs) in safety-critical contexts is often prevented by the lack of effective means to explain their results, especially when they are erroneous. In our previous work, we proposed a white-box approach (HUDD) and a black-box approach (SAFE) to automatically characterize DNN failures. They both identify clusters of similar images from a potentially large set of images leading to DNN failures. However, the analysis pipelines for HUDD and SAFE were instantiated in specific ways according to common practices, deferring the analysis of other pipelines to future work. In this paper, we report on an empirical evaluation of 99 different pipelines for root cause analysis of DNN failures. They combine transfer learning, autoencoders, heatmaps of neuron relevance, dimensionality reduction techniques, and different clustering algorithms. Our results show that the best pipeline combines transfer learning, DBSCAN, and UMAP. It leads to clusters almost exclusively capturing images of the same failure scenario, thus facilitating root cause analysis. Further, it generates distinct clusters for each root cause of failure, thus enabling engineers to detect all the unsafe scenarios. Interestingly, these results hold even for failure scenarios that are only observed in a small percentage of the failing images.
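
The best-performing pipeline can be sketched in a few lines of Python; the feature file name and the UMAP/DBSCAN hyperparameters below are illustrative assumptions, and the embeddings are assumed to come from a transfer-learned backbone applied to the failing images.

    import numpy as np
    import umap                       # umap-learn package
    from sklearn.cluster import DBSCAN

    # (n_images, d) embeddings of failure-inducing images extracted with a
    # pretrained, fine-tuned backbone (hypothetical file name)
    features = np.load("failure_features.npy")

    embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(features)
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(embedding)
    # each non-noise label is a candidate root-cause cluster of similar failures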

Because of physical assumptions and numerical approximations, low-order models are affected by uncertainties in the state and parameters, and by model biases. Model biases, also known as model errors or systematic errors, are difficult to infer because they are `unknown unknowns', i.e., we do not necessarily know their functional form a priori. With biased models, data assimilation methods may be ill-posed because either (i) they are `bias-unaware', i.e., the estimators are assumed unbiased, (ii) they rely on an a priori parametric model for the bias, or (iii) they infer model biases that are not unique for the same model and data. First, we design a data assimilation framework to perform combined state, parameter, and bias estimation. Second, we propose a mathematical solution with a sequential method, i.e., the regularized bias-aware ensemble Kalman filter (r-EnKF), which requires a model of the bias and its gradient (i.e., the Jacobian). Third, we propose an echo state network as the model bias estimator. We derive the Jacobian of the network and design a robust training strategy with data augmentation to accurately infer the bias in different scenarios. Fourth, we apply the r-EnKF to nonlinearly coupled oscillators (with and without time delay) affected by different forms of bias. The r-EnKF infers the parameters, the states, and a unique bias in real time. The applications that we showcase are relevant to acoustics, thermoacoustics, and vibrations; more broadly, the r-EnKF opens new opportunities for combined state, parameter, and bias estimation for real-time and on-the-fly prediction in nonlinear systems.
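
For context, the analysis step of a standard stochastic ensemble Kalman filter is sketched below in Python; the regularized bias-aware r-EnKF augments this update with the bias model and its Jacobian, which are not shown here.

    import numpy as np

    def enkf_update(Xf, y, H, R, rng):
        # Xf: (n_state, m) forecast ensemble; y: (n_obs,) observation vector
        m = Xf.shape[1]
        A = Xf - Xf.mean(axis=1, keepdims=True)             # ensemble anomalies
        Pf = A @ A.T / (m - 1)                              # sample forecast covariance
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)      # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T  # perturbed observations
        return Xf + K @ (Y - H @ Xf)                        # analysis ensemble

    # usage: Xa = enkf_update(Xf, y, H, R, np.random.default_rng(0))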

In the realm of finance and statistics, many problems boil down to computing expectations, predominantly integrals with respect to a Gaussian measure. This study explores randomized quasi-Monte Carlo (RQMC), which maintains the QMC convergence rate and facilitates computational efficiency analysis. Emphasis is laid on integrating randomly shifted lattice rules, a distinct RQMC quadrature, with importance sampling (IS), a classic variance reduction technique. The study underscores the intricacies of establishing a theoretical convergence rate for IS in QMC compared to Monte Carlo (MC), given the influence of problem dimension and smoothness on QMC. It also examines the significance of the choice of IS density and its potential implications; optimal drift importance sampling (ODIS) and Laplace importance sampling (LapIS) are considered as common importance densities. The study culminates in examining the error bound of IS combined with a randomly shifted lattice rule, drawing on reproducing kernel Hilbert space (RKHS) theory. Conclusively, the paper establishes that, under certain conditions, the IS-randomly shifted lattice rule can achieve a near $O(N^{-1})$ error bound.
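
A minimal Python sketch of a randomly shifted rank-1 lattice rule for a Gaussian expectation is given below; the generating vector `z` would normally come from a component-by-component construction, the number of shifts is purely illustrative, and the importance sampling reweighting of the integrand is omitted.

    import numpy as np
    from scipy.stats import norm

    def shifted_lattice_points(N, z, shift):
        # rank-1 lattice rule with a random shift; points in [0, 1)^d
        i = np.arange(N)[:, None]
        return np.mod(i * z[None, :] / N + shift[None, :], 1.0)

    def rqmc_estimate(f, N, z, n_shifts=16, seed=0):
        # average f over n_shifts independently shifted lattice rules and
        # report the estimate with its standard error across shifts
        rng = np.random.default_rng(seed)
        z = np.asarray(z)
        vals = []
        for _ in range(n_shifts):
            u = shifted_lattice_points(N, z, rng.random(len(z)))
            x = norm.ppf(u)                  # map to the standard Gaussian measure
            vals.append(np.mean(f(x)))
        return np.mean(vals), np.std(vals, ddof=1) / np.sqrt(n_shifts)

    # any integer vector z works syntactically, but the theoretical rate
    # requires a well-constructed (e.g. CBC) generating vector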

Although measuring held-out accuracy has been the primary approach to evaluating generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors. Inspired by principles of behavioral testing in software engineering, we introduce CheckList, a task-agnostic methodology for testing NLP models. CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation, as well as a software tool to generate a large and diverse set of test cases quickly. We illustrate the utility of CheckList with tests for three tasks, identifying critical failures in both commercial and state-of-the-art models. In a user study, a team responsible for a commercial sentiment analysis model found new and actionable bugs in an extensively tested model. In another user study, NLP practitioners with CheckList created twice as many tests and found almost three times as many bugs as users without it.
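
A hand-rolled sketch of one CheckList-style test type, a minimum functionality test (MFT) for negation in sentiment analysis, is shown below; it does not use the authors' CheckList tool, and `my_sentiment_model` is a hypothetical predictor.

    def mft(predict, cases, expected_label):
        # Minimum Functionality Test: simple cases the model should always get right
        return [text for text in cases if predict(text) != expected_label]

    negation_cases = [f"I {neg} {verb} the {noun}."
                      for neg in ["don't", "never"]
                      for verb in ["like", "recommend"]
                      for noun in ["food", "service"]]
    # failures = mft(my_sentiment_model, negation_cases, expected_label="negative")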
