
Engineering problems are often characterized by significant uncertainty in their material parameters. A typical example from geotechnical engineering is the slope stability problem, where the soil's cohesion is modeled as a random field. An efficient way to account for this uncertainty is the novel p-refined Multilevel Quasi-Monte Carlo (p-MLQMC) sampling method. The p-MLQMC method uses a hierarchy of p-refined Finite Element meshes combined with a deterministic Quasi-Monte Carlo sampling rule. This combination yields a significant computational cost reduction with respect to classic Multilevel Monte Carlo. However, in previous work, not enough consideration was given to how the uncertainty, modeled as a random field, should be incorporated into the Finite Element model within the p-MLQMC method. In the present work we investigate how this can be adequately achieved by means of the integration point method. We therefore investigate how the evaluation points of the random field are to be selected in order to obtain a variance reduction over the levels. We consider three different approaches, which are benchmarked on a slope stability problem in terms of computational runtime. We find that for a given tolerance the Local Nested Approach yields a speedup of up to a factor of five with respect to the Non-Nested Approach.
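
The following is a minimal, hedged sketch of a multilevel Quasi-Monte Carlo estimator of the kind the abstract builds on, not the authors' p-MLQMC code: the toy quantity of interest `q_level` merely stands in for a p-refined Finite Element solve, the randomized Sobol' points come from SciPy, and the spatial selection of random-field evaluation points (the nested versus non-nested question studied in the paper) is not modeled.

```python
# Minimal multilevel QMC sketch: each level adds the mean of a fine-minus-coarse
# difference evaluated on the same randomized QMC points (telescoping sum).
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
DIM = 8  # stochastic dimension of the truncated random field

def q_level(level, u):
    # Hypothetical quantity of interest standing in for a p-refined FE solve:
    # higher levels include more terms, so Q_l - Q_{l-1} shrinks with the level.
    n_terms = min(2 * (level + 1), u.shape[1])
    k = np.arange(1, n_terms + 1)
    return np.sum(np.sin(np.pi * u[:, :n_terms]) / k**2, axis=1)

def mlqmc_estimate(max_level=3, n_points=256):
    estimate = 0.0
    for level in range(max_level + 1):
        sobol = qmc.Sobol(d=DIM, scramble=True, seed=rng)
        u = sobol.random(n_points)                    # randomized QMC points
        fine = q_level(level, u)
        coarse = q_level(level - 1, u) if level > 0 else 0.0
        estimate += np.mean(fine - coarse)            # telescoping level correction
    return estimate

print(mlqmc_estimate())
```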

Related Content

We consider the problem of approximating the arboricity of a graph $G= (V,E)$, which we denote by $\mathsf{arb}(G)$, in sublinear time, where the arboricity of a graph is the minimal number of forests required to cover its edges. An algorithm for this problem may perform degree and neighbor queries, and is allowed a small error probability. We design an algorithm that outputs an estimate $\hat{\alpha}$, such that with probability $1-1/\textrm{poly}(n)$, $\mathsf{arb}(G)/c\log^2 n \leq \hat{\alpha} \leq \mathsf{arb}(G)$, where $n=|V|$ and $c$ is a constant. The expected query complexity and running time of the algorithm are $O(n/\mathsf{arb}(G))\cdot \textrm{poly}(\log n)$, and this upper bound also holds with high probability. This bound is optimal for such an approximation up to a $\textrm{poly}(\log n)$ factor.
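
For context (a classical fact not stated in the abstract), the Nash-Williams formula characterizes the quantity being estimated as a maximum subgraph density, which is what makes a multiplicative approximation guarantee of the following form meaningful:

```latex
% Nash--Williams (1964): arboricity equals the maximum edge density over
% subgraphs, rounded up; below it is paired with the estimator's guarantee.
\[
  \mathsf{arb}(G) \;=\; \max_{\substack{H \subseteq G \\ |V(H)| \ge 2}}
  \left\lceil \frac{|E(H)|}{|V(H)| - 1} \right\rceil ,
  \qquad
  \frac{\mathsf{arb}(G)}{c \log^2 n} \;\le\; \hat{\alpha} \;\le\; \mathsf{arb}(G)
  \quad \text{with probability } 1 - 1/\mathrm{poly}(n).
\]
```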

We develop a new primitive for stochastic optimization: a low-bias, low-cost estimator of the minimizer $x_\star$ of any Lipschitz strongly-convex function. In particular, we use a multilevel Monte-Carlo approach due to Blanchet and Glynn to turn any optimal stochastic gradient method into an estimator of $x_\star$ with bias $\delta$, variance $O(\log(1/\delta))$, and an expected sampling cost of $O(\log(1/\delta))$ stochastic gradient evaluations. As an immediate consequence, we obtain cheap and nearly unbiased gradient estimators for the Moreau-Yoshida envelope of any Lipschitz convex function, allowing us to perform dimension-free randomized smoothing. We demonstrate the potential of our estimator through four applications. First, we develop a method for minimizing the maximum of $N$ functions, improving on recent results and matching a lower bound up to logarithmic factors. Second and third, we recover state-of-the-art rates for projection-efficient and gradient-efficient optimization using simple algorithms with a transparent analysis. Finally, we show that an improved version of our estimator would yield a nearly linear-time, optimal-utility, differentially-private non-smooth stochastic optimization method.
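
Below is a hedged, toy sketch of the multilevel construction the abstract refers to, not the paper's algorithm or constants: a geometric level $J$ is drawn, a single SGD path is checkpointed at $2^J$ and $2^{J+1}$ steps (so the fine and coarse outputs are coupled), and the reweighted difference is added to a cheap base estimate. The quadratic objective, step size, and truncation level are illustrative assumptions.

```python
# Toy multilevel (Blanchet-Glynn style) estimator of the minimizer of a
# strongly convex function, built from an SGD routine.
import numpy as np

rng = np.random.default_rng(1)
DIM = 5
x_star_true = rng.normal(size=DIM)   # toy objective: f(x) = 0.5 * ||x - x_star||^2

def sgd_checkpoints(n_steps, checkpoint):
    # One SGD trajectory; return the averaged iterate after `checkpoint` steps
    # (coarse) and after `n_steps` steps (fine) on the SAME noise path, so the
    # fine-minus-coarse difference has small variance.
    x = np.zeros(DIM)
    avg = np.zeros(DIM)
    coarse = avg.copy()
    for t in range(1, n_steps + 1):
        grad = (x - x_star_true) + rng.normal(size=DIM)   # stochastic gradient
        x -= grad / t                                     # ~ 1/(mu * t) step size
        avg += (x - avg) / t
        if t == checkpoint:
            coarse = avg.copy()
    return coarse, avg

def mlmc_minimizer_estimate(j_max=12):
    # Draw a geometric level J and reweight the coupled fine-minus-coarse
    # difference; truncation at j_max leaves only a small residual bias.
    j = min(int(rng.geometric(p=0.5)), j_max)             # P(J = j) = 2^{-j}
    coarse, fine = sgd_checkpoints(2 ** (j + 1), 2 ** j)
    _, base = sgd_checkpoints(2, 1)                       # independent base term
    return base + (fine - coarse) * 2.0 ** j              # 1 / P(J = j) = 2^j

est = np.mean([mlmc_minimizer_estimate() for _ in range(200)], axis=0)
print(np.linalg.norm(est - x_star_true))
```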

Measuring the quality of cancer care delivered by US health providers is challenging. Patients receiving oncology care vary greatly in disease presentation, among other key characteristics. In this paper we discuss a framework for institutional quality measurement which addresses the heterogeneity of patient populations. For this, we follow recent statistical developments in health outcomes research and conceptualize the task of quality measurement as a causal inference problem, helping to target flexible covariate profiles that can represent specific populations of interest. To our knowledge, such covariate profiles have not been used in the quality measurement literature. We use different clinically relevant covariate profiles and evaluate methods for layered case-mix adjustments that combine weighting and regression modeling approaches in a sequential manner in order to reduce model extrapolation and allow for provider effect modification. We appraise these methods in an extensive simulation study and highlight the practical utility of weighting methods that warn the investigator when case-mix adjustments are infeasible without some form of extrapolation that goes beyond the support of the data. In a study of cancer-care outcomes, we assess the performance of oncology practices for different profiles that correspond to the types of patients who may receive cancer care. We describe how the methods examined may be particularly important for high-stakes quality measurement, such as public reporting or performance-based payments. These methods may also be applied to support the health care decisions of individual patients and provide a path to personalized quality measurement.
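
As a loose illustration of the general idea of combining weighting with regression adjustment at a chosen covariate profile (this is not the paper's estimator; the kernel weights, simulated data, and provider contrast below are hypothetical):

```python
# Heavily hedged illustration: weight patients toward a target covariate
# profile, then fit a weighted outcome regression so the provider contrast
# is evaluated at that profile rather than at the observed case mix.
import numpy as np

rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=(n, 3))                    # patient covariates (case mix)
provider = rng.integers(0, 2, size=n)          # two providers, 0 and 1
y = X @ np.array([1.0, -0.5, 0.3]) + 0.4 * provider + rng.normal(scale=0.5, size=n)

profile = np.array([0.5, 0.0, -0.5])           # hypothetical covariate profile of interest

def profile_weights(Xp, target, bandwidth=1.0):
    # Kernel weights that up-weight patients whose covariates resemble the profile.
    d2 = np.sum((Xp - target) ** 2, axis=1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def weighted_provider_effect(X, y, provider, profile):
    design = np.column_stack([np.ones(len(y)), X, provider])
    w = profile_weights(X, profile)
    sw = np.sqrt(w)[:, None]                   # square-root weights for weighted least squares
    beta, *_ = np.linalg.lstsq(design * sw, y * sw.ravel(), rcond=None)
    return beta[-1]                            # provider contrast near the profile

print(weighted_provider_effect(X, y, provider, profile))
```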

Feature importance (FI) estimates are a popular form of explanation, and they are commonly created and evaluated by computing the change in model confidence caused by removing certain input features at test time. For example, in the standard Sufficiency metric, only the top-k most important tokens are kept. In this paper, we study several under-explored dimensions of FI explanations, providing conceptual and empirical improvements for this form of explanation. First, we advance a new argument for why it can be problematic to remove features from an input when creating or evaluating explanations: the fact that these counterfactual inputs are out-of-distribution (OOD) to models implies that the resulting explanations are socially misaligned. The crux of the problem is that the model prior and random weight initialization influence the explanations (and explanation metrics) in unintended ways. To resolve this issue, we propose a simple alteration to the model training process, which results in more socially aligned explanations and metrics. Second, we compare five approaches for removing features from model inputs. We find that some methods produce more OOD counterfactuals than others, and we make recommendations for selecting a feature-replacement function. Finally, we introduce four search-based methods for identifying FI explanations and compare them to strong baselines, including LIME, Anchors, and Integrated Gradients. Through experiments with six diverse text classification datasets, we find that the only method that consistently outperforms random search is a Parallel Local Search (PLS) that we introduce. Improvements over the second-best method are as large as 5.4 points for Sufficiency and 17 points for Comprehensiveness. All supporting code for experiments in this paper is publicly available at https://github.com/peterbhase/ExplanationSearch.
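
A small, hedged sketch of the Sufficiency-style evaluation described above (the `toy_model`, the `[MASK]` replacement token, and the importance scores are placeholders, not the paper's models or data): keep only the top-k most important tokens and measure how much the model's confidence in its original prediction drops.

```python
# Sufficiency sketch: drop in confidence after keeping only the top-k tokens.
import numpy as np

MASK = "[MASK]"

def toy_model(tokens):
    # Dummy sentiment "classifier": probability of the positive class
    # grows with the count of the word "good".
    score = sum(tok == "good" for tok in tokens)
    p_pos = 1.0 / (1.0 + np.exp(-(score - 0.5)))
    return np.array([1.0 - p_pos, p_pos])

def sufficiency(tokens, fi_scores, k, replace=MASK):
    probs = toy_model(tokens)
    label = int(np.argmax(probs))
    keep = set(np.argsort(fi_scores)[-k:])            # indices of the top-k tokens
    reduced = [t if i in keep else replace for i, t in enumerate(tokens)]
    # Lower values mean the kept tokens are more "sufficient" for the prediction.
    return probs[label] - toy_model(reduced)[label]

tokens = "the movie was really good".split()
fi = np.array([0.01, 0.05, 0.02, 0.10, 0.90])         # hypothetical importance scores
print(sufficiency(tokens, fi, k=2))
```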

The support vector machine (SVM) and minimum Euclidean norm least squares regression are two fundamentally different approaches to fitting linear models, but they have recently been connected in models for very high-dimensional data through a phenomenon of support vector proliferation, where every training example used to fit an SVM becomes a support vector. In this paper, we explore the generality of this phenomenon and make the following contributions. First, we prove a super-linear lower bound on the dimension (in terms of sample size) required for support vector proliferation in independent feature models, matching the upper bounds from previous works. We further identify a sharp phase transition in Gaussian feature models, bound the width of this transition, and give experimental support for its universality. Finally, we hypothesize that this phase transition occurs only in much higher-dimensional settings in the $\ell_1$ variant of the SVM, and we present a new geometric characterization of the problem that may elucidate this phenomenon for the general $\ell_p$ case.
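
A hedged numerical illustration of the phenomenon (a toy experiment in the spirit of the setting above, not the paper's proofs or code): for Gaussian features with dimension $d$ much larger than the sample size $n$, a nearly hard-margin linear SVM tends to make every training point a support vector, which is the regime in which the SVM and minimum-norm least squares solutions are connected.

```python
# Count support vectors of a (nearly) hard-margin linear SVM as the
# dimension grows with the sample size held fixed.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 40
for d in (50, 200, 2000):
    X = rng.normal(size=(n, d))                       # independent Gaussian features
    y = rng.choice([-1, 1], size=n)                   # random labels
    svm = SVC(kernel="linear", C=1e6).fit(X, y)       # large C approximates hard margin
    frac_sv = len(svm.support_) / n
    print(f"d = {d:5d}: fraction of training points that are support vectors = {frac_sv:.2f}")
```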

Many real-world optimization problems, such as engineering design, are ultimately modeled as multiobjective optimization problems (MOPs) which must be solved to obtain a set of trade-off solutions. The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been regarded as a very promising approach for solving MOPs, as it offers a general algorithmic framework for evolutionary multiobjective optimization. Recent studies have shown that MOEA/D with uniformly distributed weight vectors is well-suited to MOPs with regular Pareto optimal fronts, but its performance in terms of diversity deteriorates on MOPs with irregular Pareto optimal fronts, such as highly nonlinear or convex ones. In such cases, the solution set obtained by the algorithm cannot provide reasonable choices for decision makers. To efficiently overcome this shortcoming, in this paper we propose an improved MOEA/D algorithm based on the well-known Pascoletti-Serafini scalarization method and a new multi-reference-point strategy. Specifically, this strategy consists of the setting and adaptation of reference points generated by the techniques of equidistant partition and projection. For performance assessment, the proposed algorithm is compared with four existing state-of-the-art multiobjective evolutionary algorithms on benchmark test problems with various types of Pareto optimal fronts and on two real-world MOPs from engineering optimization, the hatch cover design and the rocket injector design. Experimental results reveal that the proposed algorithm outperforms the compared algorithms in terms of diversity.
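
For reference, a hedged statement of the Pascoletti-Serafini scalarization invoked above, written for the componentwise ordering cone $\mathbb{R}^m_+$ (the exact cone, reference points $a$, and directions $r$ used by the proposed algorithm are specified in the paper, not here); each reference point generated by the equidistant partition and projection yields one such subproblem:

```latex
% Pascoletti--Serafini scalarization of the MOP min F(x) over x in Omega,
% with reference point a and direction r, for the cone R^m_+.
\[
  \min_{t \in \mathbb{R},\; x \in \Omega} \; t
  \quad \text{subject to} \quad
  a + t\,r - F(x) \in \mathbb{R}^m_{+},
\]
```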

We consider the use of extreme learning machines (ELM) for computational partial differential equations (PDE). In ELM the hidden-layer coefficients in the neural network are assigned random values generated on $[-R_m,R_m]$ and fixed, where $R_m$ is a user-provided constant, and the output-layer coefficients are trained by a linear or nonlinear least squares computation. We present a method for computing the optimal value of $R_m$ based on the differential evolution algorithm. The presented method enables us to illuminate the characteristics of the optimal $R_m$ for two types of ELM configurations: (i) Single-Rm-ELM, in which a single $R_m$ is used for generating the random coefficients in all the hidden layers, and (ii) Multi-Rm-ELM, in which multiple $R_m$ constants are involved, each used for generating the random coefficients of a different hidden layer. We adopt the optimal $R_m$ from this method and also incorporate other improvements into the ELM implementation. In particular, here we compute all the differential operators involving the output fields of the last hidden layer by forward-mode auto-differentiation, as opposed to the reverse-mode auto-differentiation in a previous work. These improvements significantly reduce the network training time and enhance the ELM performance. We systematically compare the computational performance of the current improved ELM with that of the finite element method (FEM), both the classical second-order FEM and the high-order FEM with Lagrange elements of higher degrees, for solving a number of linear and nonlinear PDEs. It is shown that the current improved ELM far outperforms the classical FEM. Its computational performance is comparable to that of the high-order FEM for smaller problem sizes, and for larger problem sizes the ELM markedly outperforms the high-order FEM.
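
A minimal, hedged ELM sketch in the spirit of the setup described above (assumptions: a single hidden layer with tanh activation, a 1D Poisson problem $u'' = f$ with homogeneous Dirichlet boundary conditions, hand-written derivatives instead of auto-differentiation, and an untuned $R_m$): the hidden weights are drawn uniformly from $[-R_m, R_m]$ and frozen, and only the linear output layer is obtained by least squares on the collocation equations.

```python
# Single-hidden-layer ELM for u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.
import numpy as np

rng = np.random.default_rng(4)
Rm = 5.0                 # the constant whose optimal value the paper tunes
M = 200                  # number of hidden neurons
W = rng.uniform(-Rm, Rm, size=M)
b = rng.uniform(-Rm, Rm, size=M)

x = np.linspace(0.0, 1.0, 101)[:, None]          # collocation points
f = -np.pi ** 2 * np.sin(np.pi * x)              # right-hand side; exact u = sin(pi x)

T = np.tanh(W * x + b)                           # hidden outputs, shape (101, M)
T_xx = (W ** 2) * (-2.0 * T * (1.0 - T ** 2))    # analytic d^2/dx^2 of tanh(W x + b)

# Stack interior PDE residual rows and the two boundary rows, then solve
# for the output-layer coefficients in the least squares sense.
A = np.vstack([T_xx[1:-1], T[[0, -1]]])
rhs = np.concatenate([f[1:-1, 0], [0.0, 0.0]])
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u = T @ beta
print(np.max(np.abs(u - np.sin(np.pi * x[:, 0]))))   # max error vs exact solution
```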

We propose a machine learning-based method to build a system of differential equations that approximates the dynamics of 3D electromechanical models for the human heart, accounting for the dependence on a set of parameters. Specifically, our method allows us to create a reduced-order model (ROM), written as a system of Ordinary Differential Equations (ODEs), wherein the forcing term, given by the right-hand side, consists of an Artificial Neural Network (ANN) that possibly depends on a set of parameters associated with the electromechanical model to be surrogated. This method is non-intrusive, as it only requires a collection of pressure and volume transients obtained from the full-order model (FOM) of cardiac electromechanics. Once trained, the ANN-based ROM can be coupled with hemodynamic models for the blood circulation external to the heart, in the same manner as the original electromechanical model, but at a dramatically lower computational cost. Indeed, our method allows for real-time numerical simulations of the cardiac function. We demonstrate the effectiveness of the proposed method in two relevant contexts in cardiac modeling. First, we employ the ANN-based ROM to perform a global sensitivity analysis on both the electromechanical and hemodynamic models. Second, we perform a Bayesian estimation of two parameters starting from noisy measurements of two scalar outputs. In both cases, replacing the FOM of cardiac electromechanics with the ANN-based ROM makes it possible to perform, in a few hours of computational time, all the numerical simulations that would otherwise be unaffordable, because of their overwhelming computational cost, if carried out with the FOM. In fact, our ANN-based ROM is able to speed up the numerical simulations by more than three orders of magnitude.
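
A structural sketch of such an ANN-based ROM (not the authors' trained model; the state dimension, network size, and random weights below are placeholders): a small fully connected network serves as the right-hand side of an ODE system for a reduced state, with the electromechanical parameters passed as extra inputs. In the actual method the weights would be trained on FOM pressure and volume transients before coupling the ROM with a circulation model.

```python
# Structural sketch: dz/dt = ANN(z, t, params), integrated with forward Euler.
import numpy as np

rng = np.random.default_rng(5)
STATE_DIM, PARAM_DIM, HIDDEN = 2, 3, 16
W1 = rng.normal(scale=0.3, size=(HIDDEN, STATE_DIM + PARAM_DIM + 1))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.3, size=(STATE_DIM, HIDDEN))   # random placeholder weights

def ann_rhs(t, z, params):
    # Right-hand side of the ROM: a one-hidden-layer network of (z, params, t).
    inp = np.concatenate([z, params, [t]])
    return W2 @ np.tanh(W1 @ inp + b1)

def integrate(z0, params, dt=1e-3, t_end=1.0):
    # Simple explicit time stepping standing in for the coupling with an
    # external hemodynamic circulation model.
    z, traj = np.array(z0, dtype=float), []
    for step in range(int(t_end / dt)):
        z = z + dt * ann_rhs(step * dt, z, params)
        traj.append(z.copy())
    return np.array(traj)

traj = integrate(z0=[0.1, 0.0], params=np.array([1.0, 0.5, -0.2]))
print(traj.shape)
```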

This review paper discusses how context has been used in neural machine translation (NMT) in the past two years (2017-2018). Starting with a brief retrospective of the rapid evolution of NMT models, the paper reviews studies that evaluate NMT output from various perspectives, with emphasis on those analyzing limitations of the translation of contextual phenomena. In a subsequent version, the paper will present the main methods that were proposed to leverage context for improving translation quality, and will distinguish methods that aim to improve the translation of specific phenomena from those that consider a wider unstructured context.

Many problems in signal processing reduce to nonparametric function estimation. We propose a new methodology, piecewise convex fitting (PCF), and give a two-stage adaptive estimate. In the first stage, the number and locations of the change points are estimated using strong smoothing. In the second stage, a constrained smoothing spline fit is performed with the smoothing level chosen to minimize the MSE. The imposed constraint is that a single change point occurs in a region about each empirical change point of the first-stage estimate. This constraint is equivalent to requiring that the third derivative of the second-stage estimate has a single sign in a small neighborhood about each first-stage change point. We sketch how PCF may be applied to signal recovery, instantaneous frequency estimation, surface reconstruction, image segmentation, spectral estimation and multivariate adaptive regression.
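
A hedged sketch of the first stage, with an unconstrained stand-in for the second stage (assumptions: quintic smoothing splines from SciPy, hand-chosen smoothing factors, and change points taken where the second derivative of the heavily smoothed fit changes sign); the sign constraint on the third derivative near each stage-1 change point is noted but not enforced here.

```python
# Toy two-stage sketch: stage 1 uses strong smoothing to locate convexity
# change points; stage 2 refits with lighter smoothing (without the paper's
# third-derivative sign constraint, which is omitted in this sketch).
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(6)
x = np.linspace(0.0, 1.0, 400)
signal = np.sin(3 * np.pi * x)                       # convexity changes at x = 1/3, 2/3
y = signal + rng.normal(scale=0.1, size=x.size)

# Stage 1: heavy smoothing, then change points where the second derivative flips sign.
stage1 = UnivariateSpline(x, y, k=5, s=16.0)
d2 = stage1.derivative(2)(x)
change_points = x[:-1][np.sign(d2[:-1]) * np.sign(d2[1:]) < 0]
print("estimated change points:", np.round(change_points, 3))

# Stage 2 (simplified): lighter smoothing chosen by hand instead of by MSE, and
# no constraint tying the fit to the stage-1 change points.
stage2 = UnivariateSpline(x, y, k=5, s=4.0)
print("stage-2 residual RMS vs true signal:", np.sqrt(np.mean((stage2(x) - signal) ** 2)))
```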
