
Facing the worldwide coronavirus disease 2019 (COVID-19) pandemic, a new fitting method, quasi-distribution fitting (QDF), which can be used to analyze COVID-19 data, is developed based on piecewise quasi-uniform B-spline curves. For any given country or district, it fits piecewise quasi-uniform B-spline curves to the distribution histogram made from the daily confirmed cases (or other data, including daily recovery cases and daily fatality cases) of COVID-19. After area normalization, the fitting curve can be regarded as a probability density function (PDF), and its mathematical expectation and variance can be used to analyze the situation of the coronavirus pandemic. Numerical experiments based on the data of several countries indicate that the QDF method captures the intrinsic characteristics of the COVID-19 data of a given country or district, and, because the data interval used in this paper spans more than one year (500 days), it reveals that after multiple waves of coronavirus transmission the case fatality rate has declined markedly. The results show that, as an appraisal method, QDF is effective and feasible.
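
The pipeline the abstract describes (fit a smooth B-spline curve to a daily-case histogram, area-normalize it into a density, then read off its moments) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the synthetic two-wave series and the smoothing parameter are made up, and SciPy's smoothing spline stands in for the paper's piecewise quasi-uniform B-spline construction.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.interpolate import UnivariateSpline

# Synthetic two-wave daily-case series standing in for real data.
rng = np.random.default_rng(0)
days = np.arange(500)
cases = rng.poisson(2000 * (np.exp(-((days - 120) / 40.0) ** 2)
                            + np.exp(-((days - 350) / 60.0) ** 2)))

# Smoothing cubic B-spline fit to the daily histogram (a stand-in for the
# paper's piecewise quasi-uniform B-spline construction).
spline = UnivariateSpline(days, cases, k=3, s=1e6)

# Area normalization: the fitted curve becomes a probability density.
grid = np.linspace(days[0], days[-1], 5001)
f = np.clip(spline(grid), 0.0, None)   # clamp small negative overshoots
f /= trapezoid(f, grid)

# Moments of the normalized curve, used to summarize the pandemic waves.
mean = trapezoid(grid * f, grid)
var = trapezoid((grid - mean) ** 2 * f, grid)
print(f"expected day: {mean:.1f}, std: {np.sqrt(var):.1f}")
```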

Related content

In their classical 1993 paper [CV93], Chaudhuri and Vardi noticed that some fundamental database theory results and techniques fail to survive when query answers are treated as bags (multisets) of tuples rather than as sets of tuples. Disappointingly, almost 30 years after [CV93], bag-semantics-based database theory is still in its infancy: we do not even know whether conjunctive query containment is decidable. This is not due to a lack of interest, but because, in the multiset world, everything suddenly becomes discouragingly complicated. In this paper, we re-examine, in the bag semantics scenario, the query determinacy problem, which has recently been intensively studied in the set semantics scenario. We show that query determinacy (under bag semantics) is decidable for boolean conjunctive queries and undecidable for unions of such queries (in contrast to the set semantics scenario, where the UCQ case remains decidable even for unary queries). We also show that -- surprisingly -- for path queries, determinacy under bag semantics coincides with determinacy under set semantics (and is thus decidable).
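
The set/bag distinction the paper builds on can be seen on a toy instance. The sketch below only illustrates how bag semantics counts multiplicities for a boolean conjunctive query (the relations and multiplicities are invented); it has no bearing on the determinacy results themselves.

```python
from collections import Counter

# Toy bag instance: relations with tuple multiplicities.
R = Counter({("a", "b"): 2, ("a", "c"): 1})
S = Counter({("b",): 3})

# Boolean conjunctive query  Q() :- R(x, y), S(y).

# Set semantics: does any valuation satisfy the body?
set_answer = any(y == sy for (x, y) in R for (sy,) in S)

# Bag semantics: the answer's multiplicity is the sum, over satisfying
# valuations, of the product of the matched tuples' multiplicities.
bag_answer = sum(R[(x, y)] * S[(sy,)]
                 for (x, y) in R for (sy,) in S if y == sy)

print(set_answer)  # True
print(bag_answer)  # 2 * 3 = 6
```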

This paper introduces a novel method for the efficient, second-order accurate computation of normal fields from volume fractions on unstructured polyhedral meshes. Locally, i.e. in each mesh cell, an averaged normal is reconstructed by fitting a plane in a least-squares sense to the volume fraction data of neighboring cells, while implicitly accounting for volume conservation in the cell at hand. The resulting minimization problem is solved approximately with a Newton-type method. Moreover, applying the Reynolds transport theorem allows us to assess the regularity of the derivatives. Since the divergence theorem implies that the volume fraction can be cast as a sum of face-based quantities, our method considerably simplifies the numerical procedure for applications in three spatial dimensions while demonstrating an inherent ability to deal robustly with unstructured meshes. We discuss the theoretical foundations, regularity, and appropriate error measures, along with the details of the numerical algorithm. Finally, numerical results for convex and non-convex hypersurfaces embedded in cuboidal and tetrahedral meshes are presented, for which we obtain second-order convergence of the normal alignment and the symmetric volume difference. The findings are further substantiated by new insights into the minimization procedure.
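
A minimal version of "fit a plane in a least-squares sense to neighboring volume fractions" is the classic Youngs-type gradient reconstruction sketched below. It is only a first-order baseline under invented inputs: the paper's method additionally enforces volume conservation in the central cell and solves the resulting constrained problem with a Newton-type method, which this sketch omits.

```python
import numpy as np

def normal_from_volume_fractions(xc, centers, alphas, alpha_c):
    """Least-squares plane fit to neighbor volume fractions.

    xc      : (3,) centroid of the cell at hand
    centers : (n, 3) centroids of neighboring cells
    alphas  : (n,) their volume fractions
    alpha_c : volume fraction of the cell at hand
    """
    A = centers - xc               # relative positions of the neighbors
    b = alphas - alpha_c           # volume-fraction differences
    g, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares gradient
    return -g / np.linalg.norm(g)  # normal points out of the fluid

# Example on a unit stencil with a planar interface normal to the x axis.
xc = np.zeros(3)
centers = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                    [0, -1, 0], [0, 0, 1], [0, 0, -1.0]])
alphas = np.clip(0.5 - centers @ np.array([1.0, 0.0, 0.0]), 0, 1)
print(normal_from_volume_fractions(xc, centers, alphas, 0.5))  # ~[1, 0, 0]
```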

The emerging public awareness and government regulation of data privacy motivate new paradigms of collecting and analyzing data that are transparent and acceptable to data owners. We present a new concept of privacy, together with corresponding data formats, mechanisms, and theories, for privatizing data during data collection. The privacy notion, named interval privacy, requires the conditional distribution of the raw data given the privatized data to equal its unconditional distribution over a nontrivial support set. Correspondingly, the proposed privacy mechanism records each data value as a random interval (or, more generally, a range) containing it. The proposed interval privacy mechanisms can be easily deployed through survey-based data collection interfaces, e.g., by asking a respondent whether its data value is within a randomly generated range. Another unique feature of interval mechanisms is that they obfuscate the truth but do not perturb it. Using a narrowed range to convey information is complementary to the popular paradigm of perturbing data. Also, the interval mechanisms can generate progressively refined information at the discretion of individuals, naturally leading to privacy-adaptive data collection. We develop several aspects of the theory, such as composition, robustness, distribution estimation, and regression learning from interval-valued data. Interval privacy provides a new perspective on human-centric data privacy in which individuals have a perceptible, transparent, and simple way of sharing sensitive data.
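
A toy version of the survey-style mechanism is easy to write down; this sketch is an illustration of the idea (random ranges, yes/no answers, progressive refinement), not the paper's exact construction, and the function name and example values are invented.

```python
import random

def interval_response(value, lo, hi, rounds=3, seed=None):
    """Each round asks one survey-style question, "is your value below
    this randomly generated cut?", and narrows the range accordingly.
    Only a range containing the value is ever released, so the truth is
    obfuscated but never perturbed; more rounds give progressively
    refined information at the respondent's discretion."""
    rng = random.Random(seed)
    for _ in range(rounds):
        cut = rng.uniform(lo, hi)
        if value < cut:
            hi = cut
        else:
            lo = cut
    return lo, hi

# A respondent with (hypothetical) age 37 releases only an interval.
print(interval_response(37, 0, 100, rounds=2, seed=1))
```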

The paper presents one possible approach to modeling epidemic propagation. The proposed model is based on mean-field control within separate population groups, namely susceptible (S), infected (I), removed (R), and cross-immune (C). The paper presents a numerical algorithm for solving such a problem that ensures conservation of the total mass of the population over time. Numerical experiments demonstrate the results of modeling the propagation of the COVID-19 virus during two 100-day periods in Novosibirsk (Russia).
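
The mass-conservation property the abstract emphasizes can be illustrated with a toy S-I-R-C time stepper: if every outflow from one compartment is an inflow to another, the total is preserved by construction. The rates below are invented for illustration and this is plain explicit Euler, not the paper's mean-field control formulation.

```python
def sirc_step(s, i, r, c, beta=0.3, gamma=0.1, delta=0.05, dt=0.1):
    """One explicit Euler step of a toy S-I-R-C model. Cross-immune (C)
    individuals lose immunity and return to S; every term appears once
    with a plus sign and once with a minus sign, so total mass is exact."""
    new_inf = beta * s * i   # S -> I
    rec = gamma * i          # I -> R
    cross = delta * r        # R -> C (partial / cross immunity)
    wane = delta * c         # C -> S
    s += dt * (wane - new_inf)
    i += dt * (new_inf - rec)
    r += dt * (rec - cross)
    c += dt * (cross - wane)
    return s, i, r, c

state = (0.99, 0.01, 0.0, 0.0)
for _ in range(1000):        # 100 "days" with dt = 0.1
    state = sirc_step(*state)
print(state, "total mass:", sum(state))  # total mass stays 1
```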

The cumulative distribution or probability density of a random variable that is itself a function of a large number of independent real-valued random variables can be formulated as a high-dimensional integral of an indicator or a Dirac $\delta$ function, respectively. To approximate the distribution or density at a point, we carry out preintegration with respect to one suitably chosen variable and then apply a quasi-Monte Carlo method to compute the integral of the resulting smoother function. Interpolation is then used to reconstruct the distribution or density on an interval. We provide a rigorous regularity and error analysis of the preintegrated function to show that our estimators achieve nearly first-order convergence. Numerical results support the theory.
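
For a linear function of Gaussian inputs the preintegration step can be done in closed form, which makes the idea concrete: integrating the indicator over one variable replaces a discontinuous integrand with a smooth normal CDF, to which QMC is then applied. The linear test function below is a stand-in chosen so the exact answer is known; the paper treats a much wider class of functions.

```python
import numpy as np
from scipy.stats import norm, qmc

# Estimate F(t) = P(a . X <= t) for X ~ N(0, I_d).
a = np.array([1.0, 0.7, 0.5, 0.3])
t = 0.8
d = len(a)

# QMC points for the remaining d-1 variables, mapped to Gaussians.
sob = qmc.Sobol(d - 1, scramble=True, seed=1)
x_rest = norm.ppf(sob.random(2 ** 14))

# Preintegrated (smooth) integrand: the conditional CDF in the x_1
# direction, obtained by integrating the indicator over x_1 exactly.
vals = norm.cdf((t - x_rest @ a[1:]) / a[0])
est = vals.mean()

exact = norm.cdf(t / np.linalg.norm(a))  # since a . X ~ N(0, |a|^2)
print(f"QMC estimate {est:.6f}  vs exact {exact:.6f}")
```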

We present a comprehensive workflow to simulate single-phase flow and transport in fractured porous media using the discrete fracture matrix approach. The workflow has three primary parts: (1) a method for generating a mesh that conforms to and surrounds a three-dimensional fracture network, (2) the discretization of the governing equations using a second-order mimetic finite difference method, and (3) an implementation of the numerical methods for high-performance computing environments. We also provide a method to create a conforming Delaunay tetrahedralization of the volume surrounding the fracture network, in which the triangular cells of the fracture mesh are faces of the volume mesh, and which addresses pathological cases that commonly arise and degrade mesh quality. Our open-source subsurface simulator uses a hierarchy of process kernels (one kernel per physical process) that allows for both strong and weak coupling of the fracture and matrix domains. We provide verification tests based on analytic solutions for flow and transport, as well as numerical convergence studies. We also demonstrate the method on complex fracture networks. In the first example, we show that the method is robust by considering three scenarios in which the fracture network acts as a barrier to flow, as the primary flow pathway, or offers the same resistance as the surrounding matrix. In the second test, flow and transport through a three-dimensional stochastically generated network containing 257 fractures are presented.
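
The "hierarchy of process kernels" design can be conveyed with a toy sketch: one kernel per physical process, composed either weakly (each advances once per time step, in order) or strongly (the loop is iterated to joint convergence). All class and method names here are hypothetical, not the simulator's actual API; real kernels would solve PDEs on the mesh.

```python
class FlowKernel:
    def advance(self, state, dt):
        state["velocity"] = 1.0e-5  # m/s; frozen Darcy flow for the toy
        return state

class TransportKernel:
    def advance(self, state, dt):
        # Advect a tracer front with the velocity from the flow kernel.
        state["front"] = state.get("front", 0.0) + state["velocity"] * dt
        return state

def weak_coupling_step(kernels, state, dt):
    """Weak coupling: each kernel advances once per time step, in order.
    Strong coupling would iterate this loop to joint convergence."""
    for kernel in kernels:
        state = kernel.advance(state, dt)
    return state

state = {}
for _ in range(24):  # one day, hourly steps
    state = weak_coupling_step([FlowKernel(), TransportKernel()], state, 3600.0)
print(state["front"], "m")  # tracer front after one day
```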

Approaches based on deep neural networks achieve striking performance when testing data and training data share a similar distribution, but can fail significantly otherwise. Eliminating the impact of distribution shifts between training and testing data is therefore crucial for building performance-promising deep models. Conventional methods assume either known heterogeneity of the training data (e.g., domain labels) or approximately equal capacities of the different domains. In this paper, we consider a more challenging case in which neither of these assumptions holds. We propose to address this problem by removing the dependencies between features via learned weights for training samples, which helps deep models get rid of spurious correlations and, in turn, concentrate more on the true connection between discriminative features and labels. Through extensive experiments on distribution generalization benchmarks including PACS, VLCS, MNIST-M, and NICO, we demonstrate the effectiveness of our method compared with state-of-the-art counterparts.
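
A minimal sketch of the reweighting idea: learn nonnegative sample weights that shrink the statistical dependence between feature dimensions. The objective below penalizes only the weighted linear covariance for simplicity; the paper targets more general dependence, and the optimizer, step counts, and toy data are all invented.

```python
import torch

def decorrelating_weights(features, steps=500, lr=0.05):
    """Learn sample weights (nonnegative, summing to n) that minimize the
    off-diagonal entries of the weighted feature covariance matrix."""
    n, d = features.shape
    theta = torch.zeros(n, requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        w = torch.softmax(theta, dim=0) * n       # weights with mean 1
        mu = (w[:, None] * features).mean(dim=0)  # weighted feature means
        xc = features - mu
        cov = (w[:, None] * xc).T @ xc / n        # weighted covariance
        off_diag = cov - torch.diag(torch.diag(cov))
        loss = (off_diag ** 2).sum()              # penalize dependence
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (torch.softmax(theta, dim=0) * n).detach()

# Correlated toy features: dimension 2 is a noisy copy of dimension 1.
x = torch.randn(256, 1)
feats = torch.cat([x, x + 0.3 * torch.randn(256, 1)], dim=1)
weights = decorrelating_weights(feats)  # would reweight training samples
```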

Seam-cutting and seam-driven techniques have proven effective for handling imperfect image series in image stitching. Generally, seam-driven methods use seam-cutting to find the best seam among one or finitely many alignment hypotheses, based on a predefined seam quality metric. However, the quality metrics in most methods measure the average performance of the pixels on the seam without considering the relevance and variance among them. As a result, the seam with the minimal measure may not be optimal (i.e., perception-inconsistent) in human perception. In this paper, we propose a novel coarse-to-fine seam estimation method that applies the evaluation in a different way. For pixels on the seam, we develop a patch-point evaluation algorithm that concentrates on the correlation and variation among them. The evaluations are then used to recalculate the difference map of the overlapping region and to re-estimate the stitching seam. This evaluation/re-estimation procedure iterates until the current seam changes negligibly compared with the previous seam. Experiments show that our proposed method finds a nearly perception-consistent seam after several iterations, outperforming conventional seam-cutting and other seam-driven methods.
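The inner step of such a pipeline, finding a minimal-cost seam through a difference map, can be sketched with the classic dynamic-programming recurrence. Note that much seam-cutting work uses graph cuts instead; the DP seam below is a simpler stand-in, and the paper's patch-point evaluation and the outer evaluation/re-estimation loop are not reproduced here.

```python
import numpy as np

def min_cost_seam(diff):
    """Vertical seam of minimal accumulated cost through a difference map.
    Returns the seam's column index for each row. In the paper's pipeline,
    `diff` would be recalculated from patch-point evaluations and this
    step rerun until the seam stabilizes."""
    h, w = diff.shape
    cost = diff.astype(float).copy()
    for i in range(1, h):
        left = np.r_[np.inf, cost[i - 1, :-1]]   # predecessor (i-1, j-1)
        right = np.r_[cost[i - 1, 1:], np.inf]   # predecessor (i-1, j+1)
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    # Backtrack from the cheapest bottom pixel.
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(cost[i, lo:hi])))
    return seam[::-1]

diff = np.abs(np.random.default_rng(0).normal(size=(6, 8)))
print(min_cost_seam(diff))
```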

We propose a method for preserving a geometric convexity shape prior in variational level set based image segmentation. Our method is built upon the fact that the level set of a convex signed distance function must be convex. This property enables us to transform a complicated geometric convexity prior into a simple inequality constraint on the level set function. An active-set based Gauss-Seidel iteration is used to handle the resulting constrained minimization problem, yielding an efficient algorithm. We apply our method to region- and edge-based level set segmentation models, including the Chan-Vese (CV) model, with the guarantee that the segmented region is convex. Experimental results show the effectiveness and quality of the proposed model and algorithm.
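
The flavor of "convexity as an inequality constraint handled by Gauss-Seidel sweeps" can be shown in one dimension, where discrete convexity is the constraint phi[i-1] - 2*phi[i] + phi[i+1] >= 0. This is only a toy analogue: the paper works with 2-D/3-D level set functions and an active-set strategy, neither of which is reproduced here.

```python
import numpy as np

def enforce_discrete_convexity(phi, sweeps=200):
    """Gauss-Seidel-style sweeps pushing a 1-D profile toward discrete
    convexity. Each violated point is projected onto the feasible
    half-space of its local inequality constraint; since a convex
    function has convex sublevel sets, constraints of this kind keep
    the segmented (sublevel) region convex."""
    phi = phi.astype(float).copy()
    for _ in range(sweeps):
        for i in range(1, len(phi) - 1):
            bound = 0.5 * (phi[i - 1] + phi[i + 1])
            if phi[i] > bound:       # second difference negative: violated
                phi[i] = bound       # project onto the constraint
    return phi

phi = np.array([3.0, 1.0, 2.5, -1.0, 0.5, 2.0, 4.0])
print(enforce_discrete_convexity(phi))  # a convex profile
```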

We introduce a new multi-dimensional nonlinear embedding -- Piecewise Flat Embedding (PFE) -- for image segmentation. Based on the theory of sparse signal recovery, piecewise flat embedding with diverse channels attempts to recover a piecewise constant image representation with sparse region boundaries and sparse cluster value scattering. The resulting piecewise flat embedding exhibits interesting properties, such as suppressing slowly varying signals, and offers an image representation with higher region identifiability, which is desirable for image segmentation and high-level semantic analysis tasks. We formulate our embedding as a variant of the Laplacian Eigenmap embedding with an $L_{1,p}$ ($0<p\leq1$) regularization term to promote sparse solutions. First, we devise a two-stage numerical algorithm based on Bregman iterations to compute $L_{1,1}$-regularized piecewise flat embeddings. We then generalize this algorithm through iterative reweighting to solve the general $L_{1,p}$-regularized problem. To demonstrate its efficacy, we integrate PFE into two existing image segmentation frameworks: segmentation based on clustering and hierarchical segmentation based on contour detection. Experiments on four major benchmark datasets, BSDS500, MSRC, Stanford Background Dataset, and PASCAL Context, show that segmentation algorithms incorporating our embedding achieve significantly improved results.
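
The starting point the abstract builds on, the standard Laplacian Eigenmap embedding, is easy to sketch; PFE replaces its quadratic objective with an $L_{1,p}$-regularized one solved by Bregman iterations, which is what produces the piecewise flat channels and is not shown here. Affinities and bandwidth below are invented toy values.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.csgraph import laplacian
from scipy.spatial.distance import cdist

# Two noisy point clusters standing in for pixel features.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                 rng.normal(1.0, 0.1, (50, 2))])

W = np.exp(-cdist(pts, pts) ** 2 / 0.05)  # Gaussian affinity matrix
np.fill_diagonal(W, 0.0)                  # no self-loops
L = laplacian(W, normed=True)             # normalized graph Laplacian
vals, vecs = eigh(L)                      # eigenpairs, ascending order
embedding = vecs[:, 1:3]                  # two nontrivial channels
print(embedding.shape)                    # (100, 2)
```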
