
Numerical methods for random parametric PDEs can greatly benefit from adaptive refinement schemes, in particular when functional approximations are computed as in stochastic Galerkin and stochastic collocation methods. This work is concerned with a non-intrusive generalization of the adaptive Galerkin FEM with residual-based error estimation. It combines the non-intrusive character of a randomized least-squares method with the a posteriori error analysis of stochastic Galerkin methods. The proposed approach uses the Variational Monte Carlo method to obtain a quasi-optimal low-rank approximation of the Galerkin projection in a highly efficient hierarchical tensor format. We derive an adaptive refinement algorithm which is steered by a reliable error estimator. In contrast to intrusive stochastic Galerkin methods, the approach is easily applicable to a wide range of problems, enabling a fully automated adjustment of all discretization parameters. Benchmark examples with affine and (unbounded) lognormal coefficient fields demonstrate the competitive performance of the non-intrusive adaptive algorithm.
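The non-intrusive ingredient can be illustrated with a much simpler least-squares surrogate: sample the random parameter, evaluate a solution functional through black-box solver calls, and fit polynomial chaos coefficients by regression. The sketch below is only a minimal illustration of this sampling-based projection idea; the stand-in solver `solve_pde`, the scalar quantity of interest, and the Hermite basis are assumptions and do not reflect the paper's low-rank tensor machinery.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Hypothetical stand-in for an expensive PDE solve: a scalar quantity of
# interest depending smoothly on a single Gaussian parameter y.
def solve_pde(y):
    return np.exp(0.3 * y) / (1.0 + 0.1 * y**2)

rng = np.random.default_rng(0)
n_samples, degree = 200, 6

y = rng.standard_normal(n_samples)          # samples of the random parameter
u = np.array([solve_pde(yi) for yi in y])   # non-intrusive: only solver calls

V = hermevander(y, degree)                  # probabilists' Hermite design matrix
coeffs, *_ = np.linalg.lstsq(V, u, rcond=None)  # randomized least-squares fit

# The surrogate can now be evaluated cheaply at new parameter values.
y_test = rng.standard_normal(1000)
error = np.abs(hermevander(y_test, degree) @ coeffs - solve_pde(y_test))
print("max surrogate error:", error.max())
```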

Related Content

Automator is an application developed by Apple for its Mac OS X operating system. With simple point-and-click and drag-and-drop operations, a series of actions can be combined into a workflow that helps you automate (and repeat) complex tasks. Automator also works across many different kinds of applications, including Finder, the Safari web browser, iCal, Address Book, and others. It can also work with third-party applications such as Microsoft Office, Adobe Photoshop, or Pixelmator.

An incremental approach for computing the convex hull of data points in two dimensions is presented. The algorithm is not output-sensitive and runs in time linear in the number of input points. Graham's scan is applied only to a subset of the data points, namely those located at the extremes of the dataset. Points are classified as extremal according to their radial distance, measured in polar coordinates about an imaginary point interior to the region bounded by the convex hull of the dataset, which is taken as the origin. This subset is obtained by iteratively and exponentially decreasing the angular bin width and terminating once the set of maximal points per bin no longer changes.
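The filtering idea can be sketched as follows: bin the points by polar angle about their centroid, keep only the point of maximal radius in each bin, refine the bins until this candidate set stabilizes, and run a standard hull routine on the survivors. This is a loose illustration under several assumptions (centroid as the interior reference point, `scipy.spatial.ConvexHull` standing in for Graham's scan), not a faithful reimplementation of the paper's algorithm.

```python
import numpy as np
from scipy.spatial import ConvexHull

def extremal_subset(points, start_bins=8, max_rounds=10):
    """Keep, per angular bin, the point farthest from the centroid; refine
    the binning until the selected index set stops changing."""
    center = points.mean(axis=0)
    theta = np.arctan2(points[:, 1] - center[1], points[:, 0] - center[0])
    radius = np.linalg.norm(points - center, axis=1)
    selected, bins = None, start_bins
    for _ in range(max_rounds):
        labels = np.floor((theta + np.pi) / (2 * np.pi) * bins).astype(int)
        labels = np.clip(labels, 0, bins - 1)
        current = set()
        for b in np.unique(labels):
            idx = np.flatnonzero(labels == b)
            current.add(idx[np.argmax(radius[idx])])
        if current == selected:             # no change in maximal points per bin
            break
        selected, bins = current, bins * 2  # exponentially finer intervals
    return points[sorted(selected)]

rng = np.random.default_rng(1)
pts = rng.normal(size=(1000, 2))
hull = ConvexHull(extremal_subset(pts))     # scan only the extremal candidates
print(len(hull.vertices), "hull vertices from the filtered subset")
```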

In this paper we introduce a new nonlinear reconstruction operator over two-dimensional domains triangulated with equilateral triangles. We focus on the local definition of the operator. The ideas behind this definition come from basic properties of the harmonic mean of three positive values. We prove results on the approximation properties of the operator and carry out numerical tests that give evidence of the avoidance of Gibbs oscillations.
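The property that makes the harmonic mean attractive here is easy to check numerically: for three positive values it never exceeds three times the smallest of them, so a single large value cannot dominate the average. The snippet below only verifies this elementary inequality; it is not the reconstruction operator itself, and the connection to Gibbs suppression is stated only informally.

```python
import numpy as np

def harmonic_mean(a, b, c):
    """Harmonic mean of three positive values."""
    return 3.0 / (1.0 / a + 1.0 / b + 1.0 / c)

rng = np.random.default_rng(0)
for _ in range(10_000):
    a, b, c = rng.uniform(1e-6, 1e3, size=3)
    h = harmonic_mean(a, b, c)
    # Bounded by three times the minimum: one large value (e.g. across a
    # discontinuity) cannot blow up the mean, which is the kind of damping
    # that helps avoid Gibbs-type oscillations.
    assert h <= 3.0 * min(a, b, c) + 1e-9
print("H(a, b, c) <= 3 * min(a, b, c) verified on random samples")
```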

In this paper, we propose new geometrically unfitted space-time finite element methods of higher-order accuracy in space and time for partial differential equations posed on moving domains. As a model problem, the convection-diffusion problem on a moving domain is studied. For geometrically higher-order accuracy, we apply a parametric mapping on a background space-time tensor-product mesh. Concerning discretisation in time, we consider discontinuous Galerkin, as well as related continuous (Petrov-)Galerkin and Galerkin collocation methods. For stabilisation with respect to bad cut configurations, and as an extension mechanism that is required for the latter two schemes, a ghost penalty stabilisation is employed. The article puts an emphasis on the techniques that allow us to achieve a robust but higher-order geometry handling for smooth domains. We investigate the computational properties of the respective methods in a series of numerical experiments. These include studies in different dimensions for different polynomial degrees in space and time, validating the higher-order accuracy in both variables.

In clinical trials, the identification of prognostic and predictive biomarkers is essential to precision medicine. Prognostic biomarkers can be useful for preventing the occurrence of the disease, and predictive biomarkers can be used to identify patients who may benefit from the treatment. Previous research has mainly focused on clinical characteristics, and the use of genomic data in this area has hardly been studied. A new method is required to simultaneously select prognostic and predictive biomarkers in high-dimensional genomic data where biomarkers are highly correlated. We propose a novel approach called PPLasso (Prognostic Predictive Lasso) that integrates prognostic and predictive effects into one statistical model. PPLasso also takes into account the correlations between biomarkers, which can alter the biomarker selection accuracy. Our method consists of transforming the design matrix to remove the correlations between the biomarkers before applying the generalized Lasso. In a comprehensive numerical evaluation, we show that PPLasso outperforms the traditional Lasso approach for both prognostic and predictive biomarker identification in various scenarios. Finally, our method is applied to publicly available transcriptomic data from the clinical trial RV144. Our method is implemented in the PPLasso R package, which will soon be available from the Comprehensive R Archive Network (CRAN).
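The core preprocessing step can be sketched in a few lines: whiten the design matrix so that the transformed predictors are empirically uncorrelated, fit a Lasso in the whitened coordinates, and map the coefficients back. This is a minimal Python illustration of the decorrelate-then-penalize idea with invented helper names and a synthetic dataset; it is not the PPLasso package, which additionally models prognostic and predictive effects jointly and uses the generalized Lasso.

```python
import numpy as np
from sklearn.linear_model import Lasso

def whiten(X):
    """Remove empirical correlations between columns via a Cholesky factor."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / X.shape[0]
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(X.shape[1]))  # jitter for stability
    return Xc @ np.linalg.inv(L).T, L

rng = np.random.default_rng(0)
n, p = 200, 20
# Correlated predictors: a shared latent factor plus noise (synthetic example).
latent = rng.normal(size=(n, 1))
X = 0.8 * latent + 0.6 * rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.normal(scale=0.5, size=n)

X_w, L = whiten(X)
lasso = Lasso(alpha=0.05).fit(X_w, y - y.mean())
beta_hat = np.linalg.solve(L.T, lasso.coef_)   # map back to original predictors
print("active whitened components:", np.flatnonzero(lasso.coef_ != 0))
print("back-transformed coefficient estimates:", np.round(beta_hat, 2))
```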

We present a new procedure for enhanced variable selection in component-wise gradient boosting. Statistical boosting is a computational approach that emerged from machine learning and allows regression models to be fitted in the presence of high-dimensional data. Furthermore, the algorithm can lead to data-driven variable selection. In practice, however, the final models often include too many variables. This occurs particularly for low-dimensional data (p<n), where boosting overfits slowly. As a result, more variables get included in the final model without altering the prediction accuracy. Many of these false positives are incorporated with a small coefficient and therefore have little impact, but they lead to a larger model. We try to overcome this issue by giving the algorithm the chance to deselect base-learners with minor importance. We analyze the impact of the new approach on variable selection and prediction performance in comparison to alternative methods, including boosting with earlier stopping as well as twin boosting. We illustrate our approach with data from an ongoing cohort study of chronic kidney disease patients, where the most influential predictors for the health-related quality of life measure are selected in a distributional regression approach based on beta regression.
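A minimal sketch of the deselection idea, with assumed names and thresholds: run component-wise L2 boosting with simple linear base-learners, record how much each selected variable contributed to the reduction of the empirical risk, then deselect variables whose cumulative contribution falls below a fraction of the total and refit on the survivors. This follows only the general description above, not any particular package implementation.

```python
import numpy as np

def componentwise_l2_boost(X, y, n_steps=300, nu=0.1):
    """Component-wise L2 boosting with univariate linear base-learners.
    Returns coefficients and the per-variable cumulative risk reduction."""
    X = X - X.mean(axis=0)
    n, p = X.shape
    coef = np.zeros(p)
    risk_reduction = np.zeros(p)
    resid = y - y.mean()
    for _ in range(n_steps):
        # Fit every univariate base-learner to the current residuals.
        slopes = X.T @ resid / (X**2).sum(axis=0)
        sse = ((resid[:, None] - X * slopes) ** 2).sum(axis=0)
        j = np.argmin(sse)                       # best-fitting component
        old_risk = (resid**2).mean()
        coef[j] += nu * slopes[j]
        resid = resid - nu * slopes[j] * X[:, j]
        risk_reduction[j] += old_risk - (resid**2).mean()
    return coef, risk_reduction

rng = np.random.default_rng(0)
n, p = 150, 10
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

coef, contrib = componentwise_l2_boost(X, y)
keep = contrib > 0.01 * contrib.sum()            # deselect minor contributors
print("initially selected:", np.flatnonzero(coef != 0))
print("kept after deselection:", np.flatnonzero(keep))
coef_refit, _ = componentwise_l2_boost(X[:, keep], y)  # refit on survivors
```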

Recently, a machine learning approach to Monte Carlo simulations called Neural Markov Chain Monte Carlo (NMCMC) has been gaining traction. In its most popular form it uses neural networks to construct normalizing flows which are then trained to approximate the desired target distribution. As this distribution is usually defined via a Hamiltonian or action, the standard learning algorithm requires estimation of the action gradient with respect to the fields. In this contribution we present another gradient estimator (and the corresponding PyTorch implementation) that avoids this calculation, thus potentially speeding up training for models with more complicated actions. We also study the statistical properties of several gradient estimators and show that our formulation leads to better training results.
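The kind of estimator in question can be illustrated on a toy model: for the reverse Kullback-Leibler objective, a score-function (REINFORCE-style) estimator only needs the value of the action at the sampled configurations, never its gradient with respect to the fields. Below is a minimal PyTorch sketch with a Gaussian stand-in for the flow and an invented single-site action; it conveys the generic idea, not the specific estimator proposed in the paper.

```python
import math
import torch

def action(x):
    # Invented phi^4-like single-site action, purely for illustration.
    return (0.5 * x**2 + 0.1 * x**4).sum(dim=-1)

# Gaussian stand-in for a normalizing flow with trainable mean and scale.
mu = torch.zeros(2, requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

def log_q(x):
    sigma = log_sigma.exp()
    return (-0.5 * ((x - mu) / sigma) ** 2 - log_sigma
            - 0.5 * math.log(2 * math.pi)).sum(dim=-1)

for step in range(2000):
    with torch.no_grad():                      # sample configurations
        x = mu + log_sigma.exp() * torch.randn(256, 2)
    logq = log_q(x)                            # differentiable in the parameters
    with torch.no_grad():
        # "Reward" log q(x) + S(x); only the value of the action is needed,
        # its gradient with respect to x is never computed.
        signal = logq + action(x)
        signal = signal - signal.mean()        # baseline for variance reduction
    loss = (signal * logq).mean()              # score-function / REINFORCE form
    opt.zero_grad()
    loss.backward()
    opt.step()
```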

The estimation of the intrinsic dimension of a dataset is a fundamental step in most dimensionality reduction techniques. This article illustrates intRinsic, an R package that implements novel state-of-the-art likelihood-based estimators of the intrinsic dimension of a dataset. In order to make these novel estimators easily accessible, the package contains a small number of high-level functions that rely on a broader set of efficient, low-level routines. Generally speaking, intRinsic encompasses models that fall into two categories: homogeneous and heterogeneous intrinsic dimension estimators. The first category contains the TWO-NN model, an estimator derived from the distributional properties of the ratios of the distances between each data point and its first two nearest neighbors. The functions dedicated to this method carry out inference under both the frequentist and Bayesian frameworks. In the second category, we find Hidalgo, a Bayesian mixture model, for which an efficient Gibbs sampler is implemented. After presenting the theoretical background, we demonstrate the performance of the models on simulated datasets. This facilitates the exposition, since the validity of the results can be assessed immediately. Then, we employ the package to study the intrinsic dimension of the Alon dataset, obtained from a famous microarray experiment. We show how the estimation of homogeneous and heterogeneous intrinsic dimensions allows us to gain valuable insights into the topological structure of a dataset.
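The TWO-NN estimator itself is compact enough to sketch: under the model, the ratio of the distances to the second and first nearest neighbours is Pareto distributed with shape parameter equal to the intrinsic dimension, so its maximum-likelihood estimate is the number of points divided by the sum of log-ratios. The snippet below is a plain NumPy/SciPy illustration of that estimator on synthetic data, not the intRinsic package API.

```python
import numpy as np
from scipy.spatial import cKDTree

def two_nn_mle(X):
    """Maximum-likelihood TWO-NN intrinsic dimension estimate.
    Uses the ratios of second- to first-nearest-neighbour distances."""
    tree = cKDTree(X)
    dist, _ = tree.query(X, k=3)          # self, first and second neighbour
    mu = dist[:, 2] / dist[:, 1]          # ratios r2 / r1
    return len(mu) / np.log(mu).sum()

rng = np.random.default_rng(0)
# 3-dimensional (linear) manifold embedded in 10 ambient dimensions.
latent = rng.normal(size=(2000, 3))
embedding = rng.normal(size=(3, 10))
X = latent @ embedding
print("estimated intrinsic dimension:", round(two_nn_mle(X), 2))
```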

We present a method to extend the finite element library FEniCS to solve problems with domains in dimensions above three by constructing tensor product finite elements. This methodology only requires that the high-dimensional domain is structured as a Cartesian product of two lower-dimensional subdomains. In this study we consider linear partial differential equations, though the methodology can be extended to non-linear problems. The use of tensor product finite elements allows us to construct a global system of linear algebraic equations that only relies on the finite element infrastructure of the lower-dimensional subdomains contained in FEniCS. We demonstrate the effectiveness of our methodology in three distinct test cases. The first test case is a Poisson equation posed in a four-dimensional domain which is a Cartesian product of two unit squares, solved using the classical Galerkin finite element method. The second test case is the wave equation in space-time, where the computational domain is a Cartesian product of a two-dimensional space grid and a one-dimensional time interval. In this second case we also employ the Galerkin method. Finally, the third test case is an advection-dominated advection-diffusion equation where the global domain is a Cartesian product of two one-dimensional intervals. The streamline upwind Petrov-Galerkin method is applied to ensure discrete stability. In all three cases, optimal convergence rates are achieved with respect to h-refinement.
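The tensor-product construction can be illustrated independently of FEniCS: assemble 1D P1 mass and stiffness matrices on each factor interval and combine them with Kronecker products to obtain the global operator, here for a 2D Poisson problem built from two 1D meshes with homogeneous Dirichlet conditions. This is a minimal NumPy/SciPy sketch of the algebraic structure being exploited, not the FEniCS extension itself.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def p1_matrices(n):
    """1D P1 stiffness and mass matrices on [0, 1] with n interior nodes."""
    h = 1.0 / (n + 1)
    e = np.ones(n)
    K = sp.diags([-e[:-1], 2 * e, -e[:-1]], [-1, 0, 1]) / h
    M = sp.diags([e[:-1], 4 * e, e[:-1]], [-1, 0, 1]) * h / 6.0
    return K.tocsc(), M.tocsc()

n = 50
Kx, Mx = p1_matrices(n)
Ky, My = p1_matrices(n)

# Galerkin matrix for -Laplace u = f on the product domain (0,1) x (0,1):
# the tensor-product basis gives A = Kx (x) My + Mx (x) Ky.
A = sp.kron(Kx, My) + sp.kron(Mx, Ky)

# Right-hand side f = 1, assembled with the tensor-product mass matrix.
f = sp.kron(Mx, My) @ np.ones(n * n)
u = spsolve(A.tocsc(), f)
print("max of discrete solution:", u.max())
```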

With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learning-based 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose occupancy networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.
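Conceptually, an occupancy network is a classifier from a 3D point (plus an input-conditioned code) to an occupancy probability; the reconstructed surface is the decision boundary at probability 0.5. The following is a schematic PyTorch module with invented layer sizes, meant only to convey the shape of the mapping, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class OccupancyNetwork(nn.Module):
    """Maps (3D point, conditioning code) -> probability that the point lies
    inside the object. The surface is the 0.5 level set of this function."""
    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, code):
        # points: (batch, n_points, 3); code: (batch, code_dim)
        code = code.unsqueeze(1).expand(-1, points.shape[1], -1)
        logits = self.mlp(torch.cat([points, code], dim=-1)).squeeze(-1)
        return torch.sigmoid(logits)        # continuous occupancy in [0, 1]

net = OccupancyNetwork()
points = torch.rand(4, 1024, 3) * 2 - 1     # query points in [-1, 1]^3
code = torch.randn(4, 128)                  # e.g. produced by an image encoder
occupancy = net(points, code)               # (4, 1024) probabilities
```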

We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
