
Linear kinetic transport equations play a critical role in optical tomography, radiative transfer and neutron transport. The fundamental difficulty hampering their efficient and accurate numerical resolution lies in the high dimensionality of the physical and velocity/angular variables and the fact that the problem is multiscale in nature. Leveraging the existence of a hidden low-rank structure hinted at by the diffusive limit, in this work we design and test an angular-space reduced order model for the linear radiative transfer equation, the first such effort based on the celebrated reduced basis method (RBM). Our method is built upon a high-fidelity solver employing the discrete ordinates method in the angular space, an asymptotic preserving upwind discontinuous Galerkin method for the physical space, and an efficient synthetic accelerated source iteration for the resulting linear system. Addressing the challenge that the parameter values (i.e., the angular directions) are coupled through an integration operator, the first novel ingredient of our method is an iterative procedure in which the macroscopic density is constructed from the RBM snapshots, treated explicitly to allow a transport sweep, and then updated afterwards. A greedy algorithm can then proceed to adaptively select representative samples in the angular space and form a surrogate solution space. The second novelty is a least-squares density reconstruction strategy, applied at each relevant physical location, that enables robust and accurate integration over an arbitrarily unstructured set of angular samples toward the macroscopic density. Numerical experiments indicate that our method is effective for computational cost reduction in a variety of regimes.
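
Since the abstract leans on source iteration and transport sweeps, a minimal sketch may help make the "freeze the density, sweep, update" structure concrete. This is a 1D slab-geometry toy with plain upwind finite differences and no acceleration, not the paper's asymptotic-preserving DG solver or its synthetic acceleration; all parameter values are hypothetical.

```python
import numpy as np

nx, n_mu = 100, 16                         # spatial cells, angular directions
L, sigma_t, sigma_s = 1.0, 2.0, 1.9        # slab length, cross sections
dx = L / nx
q = 0.5 * np.ones(nx)                      # per-direction external source
mu, w = np.polynomial.legendre.leggauss(n_mu)
w = w / w.sum()                            # normalize: rho = angular average of psi

psi, rho = np.zeros((n_mu, nx)), np.zeros(nx)
for it in range(500):                      # source iteration
    src = sigma_s * rho + q                # scattering term frozen (explicit)
    for k in range(n_mu):                  # transport sweep, one direction at a time
        if mu[k] > 0:                      # left-to-right upwind sweep
            inflow = 0.0                   # vacuum boundary
            for i in range(nx):
                psi[k, i] = (mu[k] * inflow / dx + src[i]) / (mu[k] / dx + sigma_t)
                inflow = psi[k, i]
        else:                              # right-to-left upwind sweep
            inflow = 0.0
            for i in reversed(range(nx)):
                psi[k, i] = (-mu[k] * inflow / dx + src[i]) / (-mu[k] / dx + sigma_t)
                inflow = psi[k, i]
    rho_new = w @ psi                      # macroscopic density by angular quadrature
    if np.linalg.norm(rho_new - rho) < 1e-10:
        break
    rho = rho_new
```

With the scattering ratio close to one, as here, plain source iteration converges slowly, which is exactly why the paper pairs it with synthetic acceleration.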

Related content

Learning data representations under uncertainty is an important task that emerges in numerous machine learning applications. However, uncertainty quantification (UQ) techniques are computationally intensive and become prohibitively expensive for high-dimensional data. In this paper, we present a novel surrogate model for representation learning and uncertainty quantification, which aims to deal with data of moderate to high dimension. The proposed model combines a neural network approach for dimensionality reduction of the (potentially high-dimensional) data with a surrogate model method for learning the data distribution. We first employ a variational autoencoder (VAE) to learn a low-dimensional representation of the data distribution. We then propose to harness the polynomial chaos expansion (PCE) formulation to map this distribution to the output target. The PCE coefficients are learned from the distribution representation of the training data using a maximum mean discrepancy (MMD) approach. Our model enables us to (a) learn a representation of the data, (b) estimate uncertainty in the high-dimensional data system, and (c) match high-order moments of the output distribution, all without any prior statistical assumptions on the data. Numerical results are presented to illustrate the performance of the proposed method.
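
A rough sketch of the PCE-plus-MMD step may clarify the pipeline, assuming a trained VAE encoder has already produced latent samples (replaced here by synthetic standard normals). A 1D probabilists'-Hermite PCE maps latents to the target, and its coefficients are fit by crude finite-difference descent on a Gaussian-kernel MMD loss, which matches output distributions rather than pointwise values. All sizes, bandwidths, and step sizes are hypothetical.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(0)
z = rng.standard_normal(300)                      # stand-in for VAE latents
y = np.sin(z) + 0.1 * rng.standard_normal(300)    # stand-in output targets

P = 4                                             # PCE truncation order
# design matrix: column p holds He_p(z), the p-th Hermite polynomial
Phi = np.stack([hermeval(z, np.eye(P + 1)[p]) for p in range(P + 1)], axis=1)

def mmd2(a, b, h=1.0):
    """Biased squared MMD between sample sets a, b with a Gaussian kernel."""
    k = lambda u, v: np.exp(-(u[:, None] - v[None, :]) ** 2 / (2 * h ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

c = np.zeros(P + 1)                               # PCE coefficients
eps, lr = 1e-4, 0.2
for _ in range(200):                              # finite-difference gradient descent
    grad = np.array([(mmd2(Phi @ (c + eps * e), y) -
                      mmd2(Phi @ (c - eps * e), y)) / (2 * eps)
                     for e in np.eye(P + 1)])
    c -= lr * grad
print("fitted PCE coefficients:", np.round(c, 3))
```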

Monitoring the evolution of the Covid-19 pandemic constitutes a critical step in sanitary policy design. Yet, assessing the intensity of the pandemic within the pandemic period remains challenging because of the limited quality of the data made available by public health authorities (missing data, outliers and pseudo-seasonalities, notably), which calls for cumbersome and ad hoc preprocessing (denoising) prior to estimation. Recently, the estimation of the reproduction number, a measure of the pandemic intensity, was formulated as an inverse problem combining data-model fidelity and space-time regularity constraints, solved by nonsmooth convex proximal minimization. Though promising, that formulation lacks robustness against the limited quality of the Covid-19 data as well as confidence assessment. The present work aims to address both limitations. First, it discusses solutions to produce a robust assessment of the pandemic intensity by accounting for the low quality of the data directly within the inverse problem formulation. Second, exploiting a Bayesian interpretation of the inverse problem formulation, it devises a Monte Carlo sampling strategy, tailored to a nonsmooth log-concave a posteriori distribution, to produce relevant credibility-interval-based estimates for the Covid-19 reproduction number. Clinical relevance: Applied to daily counts of new infections made publicly available by the health authorities of around 200 countries, the proposed procedures permit robust assessments of the time evolution of the Covid-19 pandemic intensity, updated automatically and on a daily basis.
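
To make the inverse-problem view concrete, here is a heavily simplified sketch: new infections satisfy Z_t ≈ R_t Φ_t, where Φ_t convolves past counts with a serial-interval profile. The paper uses a Poisson fidelity with nonsmooth (l1-type) penalties solved by proximal algorithms; the quadratic fidelity and squared second-difference penalty below merely illustrate the data-fidelity-plus-temporal-regularity structure, since they reduce the estimate to one linear solve. All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 120
R_true = 1.0 + 0.5 * np.sin(np.arange(T) / 15.0)   # ground-truth reproduction number
phi = np.exp(-0.5 * ((np.arange(1, 15) - 6) / 2.5) ** 2)
phi /= phi.sum()                                    # serial-interval distribution

Z = np.zeros(T); Z[0] = 50.0                        # simulate daily counts
for t in range(1, T):
    past = Z[max(0, t - 14):t][::-1]                # most recent day first
    lam = R_true[t] * (past * phi[:len(past)]).sum()
    Z[t] = rng.poisson(max(lam, 1e-6))

Phi = np.array([(Z[max(0, t - 14):t][::-1] * phi[:min(t, 14)]).sum()
                for t in range(T)])
D2 = np.diff(np.eye(T), n=2, axis=0)                # second-difference operator
lam_reg = 50.0                                      # temporal-regularity weight
A = np.diag(Phi ** 2) + lam_reg * D2.T @ D2
R_hat = np.linalg.solve(A, Phi * Z)                 # penalized least-squares estimate
print("max |R_hat - R_true| =", np.abs(R_hat - R_true).max())
```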

Numerical stabilization is often used to eliminate, or at least alleviate, the spurious oscillations generally produced by full order models (FOMs) in under-resolved or marginally-resolved simulations of convection-dominated flows. In this paper, we investigate the role of numerical stabilization in reduced order models (ROMs) of convection-dominated, marginally-resolved flows. Specifically, we investigate FOM-ROM consistency, i.e., whether numerical stabilization is beneficial both at the FOM and at the ROM level. As a numerical stabilization strategy, we focus on the evolve-filter-relax (EFR) regularization algorithm, which centers around spatial filtering. To investigate FOM-ROM consistency, we consider two ROM strategies: (i) the EFR-ROM, in which the EFR stabilization is used at the FOM level but not at the ROM level; and (ii) the EFR-EFRROM, in which the EFR stabilization is used both at the FOM and at the ROM level. We compare the EFR-ROM with the EFR-EFRROM in the numerical simulation of a 2D flow past a circular cylinder in the convection-dominated, marginally-resolved regime. We also perform model reduction with respect to both time and Reynolds number. Our numerical investigation shows that the EFR-EFRROM is more accurate than the EFR-ROM, which suggests that FOM-ROM consistency is beneficial in convection-dominated, marginally-resolved flows.
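
A minimal sketch of one evolve-filter-relax step may help readers new to EFR. The toy below applies EFR to a 1D periodic viscous Burgers problem with a differential filter; the paper works with incompressible Navier-Stokes at both FOM and ROM levels, and the time stepper, filter radius `delta`, and relaxation parameter `chi` here are hypothetical choices for illustration only.

```python
import numpy as np

n, dx, dt, nu = 256, 1.0 / 256, 1e-3, 1e-4
x = np.arange(n) * dx
u = np.sin(2 * np.pi * x)                  # initial condition

def rhs(u):                                # viscous Burgers right-hand side
    um, up = np.roll(u, 1), np.roll(u, -1)
    return -u * (up - um) / (2 * dx) + nu * (up - 2 * u + um) / dx ** 2

# differential filter: solve (I - delta^2 d^2/dx^2) u_bar = u  (periodic)
delta = 2 * dx                             # filter radius
lap = (np.roll(np.eye(n), 1, 0) - 2 * np.eye(n) + np.roll(np.eye(n), -1, 0)) / dx ** 2
F = np.linalg.inv(np.eye(n) - delta ** 2 * lap)

chi = 0.1                                  # relaxation parameter
for step in range(100):
    u_evolved = u + dt * rhs(u)            # E: evolve (forward Euler, toy only)
    u_filtered = F @ u_evolved             # F: spatial filter
    u = (1 - chi) * u_evolved + chi * u_filtered   # R: relax (convex combination)
```

The relaxation step blends the evolved and filtered fields, so `chi` controls how much numerical dissipation the filter injects per step.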

Kernel methods are ubiquitous in statistical modeling due to their theoretical guarantees as well as their competitive empirical performance. Polynomial kernels are of particular importance, as their feature maps model the interactions between the dimensions of the input data. However, the construction time of explicit feature maps scales exponentially with the polynomial degree, and a naive application of the kernel trick does not scale to large datasets. In this work, we propose Complex-to-Real (CtR) random features for polynomial kernels that leverage intermediate complex random projections and can yield kernel estimates with much lower variances than their real-valued analogs. The resulting features are real-valued, simple to construct, and offer the following advantages over the state-of-the-art: 1) shorter construction times, 2) lower kernel approximation errors for commonly used degrees, and 3) a closed-form expression for their variance.
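
A small sketch can show the complex-to-real idea for the homogeneous polynomial kernel k(x, y) = (x^T y)^p. Each complex feature is a product of p independent complex Rademacher projections (entries uniform on {1, -1, i, -i}); stacking real and imaginary parts gives real-valued features whose inner product is an unbiased kernel estimate. The dimensions D, d, and degree p below are illustrative, and this is a generic construction in the spirit of the abstract, not necessarily the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
d, p, D = 50, 3, 2000                      # input dim, degree, feature count

def complex_rademacher(shape):
    return rng.choice([1, -1, 1j, -1j], size=shape)

W = complex_rademacher((p, D, d))          # p independent projections per feature

def ctr_features(X):
    # phi_i(x) = prod_j <w_ij, x>;  z(x)^T z(y) then averages
    # Re(phi_i(x) * conj(phi_i(y))), whose expectation is (x^T y)^p
    phi = np.prod(np.einsum('pDd,nd->npD', W, X), axis=1)   # (n, D), complex
    return np.concatenate([phi.real, phi.imag], axis=1) / np.sqrt(D)

X = rng.standard_normal((5, d)) / np.sqrt(d)
K_exact = (X @ X.T) ** p
Z = ctr_features(X)
print("max abs error:", np.abs(K_exact - Z @ Z.T).max())
```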

We devise coresets for kernel $k$-Means with a general kernel and use them to obtain new, more efficient, algorithms. Kernel $k$-Means has superior clustering capability compared to classical $k$-Means, particularly when clusters are non-linearly separable, but it also introduces significant computational challenges. We address this computational issue by constructing a coreset, which is a reduced dataset that accurately preserves the clustering costs. Our main result is a coreset for kernel $k$-Means that works for a general kernel and has size $\mathrm{poly}(k\epsilon^{-1})$. Our new coreset both generalizes and greatly improves all previous results; moreover, it can be constructed in time near-linear in the dataset size $n$. This result immediately implies new algorithms for kernel $k$-Means, such as a $(1+\epsilon)$-approximation in time near-linear in $n$, and a streaming algorithm using space and update time $\mathrm{poly}(k \epsilon^{-1} \log n)$. We validate our coreset on various datasets with different kernels. Our coreset performs consistently well, achieving small errors while using very few points. We show that our coresets can speed up kernel $k$-Means++ (the kernelized version of the widely used $k$-Means++ algorithm), and we further use this faster kernel $k$-Means++ for spectral clustering. In both applications, we achieve up to 1000x speedup while keeping the error comparable to baselines that do not use coresets.
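
A toy demonstration of the coreset idea: the weighted kernel $k$-Means cost evaluated on a small subset approximates the cost on all $n$ points. Uniform sampling with uniform weights below is only a stand-in for the paper's sensitivity-based construction of size $\mathrm{poly}(k\epsilon^{-1})$, and the fixed clustering rule and Gaussian kernel are likewise just for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 1500, 100                           # full size, coreset size
X = np.concatenate([rng.normal(c, 0.3, (n // 3, 2)) for c in (0.0, 2.0, 4.0)])

def rbf(A, B, g=2.0):                      # Gaussian kernel matrix
    return np.exp(-g * ((A[:, None] - B[None]) ** 2).sum(-1))

def kernel_cost(K, labels, w):
    """Weighted kernel k-Means cost via the kernel trick."""
    cost = 0.0
    for c in np.unique(labels):
        sel = labels == c
        Kc, wc = K[np.ix_(sel, sel)], w[sel]
        cost += wc @ np.diag(Kc) - (wc @ Kc @ wc) / wc.sum()
    return cost

labels = np.digitize(X[:, 0], [1.0, 3.0])  # a fixed clustering rule, for comparison
cost_full = kernel_cost(rbf(X, X), labels, np.ones(n))

idx = rng.choice(n, size=m, replace=False) # uniform "coreset" (stand-in)
w = np.full(m, n / m)                      # each point represents n/m originals
cost_core = kernel_cost(rbf(X[idx], X[idx]), labels[idx], w)
print(f"full cost {cost_full:.1f} vs coreset estimate {cost_core:.1f}")
```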

A fortification game (FG) is a three-level, two-player Stackelberg game, also known as a defender-attacker-defender game, in which at the uppermost level the defender selects some assets to be protected from potential malicious attacks. At the middle level, the attacker solves an interdiction game by depreciating unprotected assets, i.e., reducing the values of such assets for the defender, while at the innermost level the defender solves a recourse problem over the surviving or partially damaged assets. Fortification games have applications in various important areas, such as military operations, design of survivable networks, protection of facilities, and power grid protection. In this work, we present an exact solution algorithm for FGs in which the recourse problems correspond to (possibly NP-hard) combinatorial optimization problems. The algorithm is based on a new generic mixed-integer linear programming reformulation in the natural space of fortification variables. Our new model makes use of fortification cuts that measure the contribution of a given fortification strategy to the objective function value. These cuts are generated on the fly by solving separation problems, which correspond to (modified) middle-level interdiction games. We design a branch-and-cut solution algorithm based on fortification cuts, their lifted versions, and other speed-up techniques. We present a computational study using the knapsack fortification game and the shortest path fortification game. For the latter, we include a comparison with a state-of-the-art solution method from the literature. Our algorithm outperforms this method and allows us to solve previously unsolved instances to optimality.
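
To make the three-level structure concrete, the toy below solves a tiny fortification game by brute force: the defender protects assets, the attacker destroys unprotected ones, and the recourse value is simply the sum of surviving asset values. The values, budgets, and sum-recourse are all hypothetical; the paper replaces this enumeration with a branch-and-cut over fortification cuts.

```python
from itertools import combinations

values = [8, 6, 5, 4, 3]                 # asset values for the defender
n, def_budget, atk_budget = len(values), 2, 2

def recourse(surviving):                 # innermost level: defender's recourse value
    return sum(values[i] for i in surviving)

def attack(protected):                   # middle level: attacker minimizes recourse
    free = [i for i in range(n) if i not in protected]
    best = float('inf')
    for hit in combinations(free, min(atk_budget, len(free))):
        best = min(best, recourse(set(range(n)) - set(hit)))
    return best

best_plan, best_val = None, -float('inf')
for plan in combinations(range(n), def_budget):   # uppermost level: maximize
    val = attack(set(plan))
    if val > best_val:
        best_plan, best_val = plan, val
print("fortify assets", best_plan, "guaranteed value", best_val)
```

Even at this scale the nested min-max structure is visible; a fortification cut essentially records, for one candidate protection plan, a bound obtained from such an attacker best response.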

Getting good performance out of numerical equation solvers requires that the user provide stable and efficient functions representing their model. However, users should not be trusted to write good code. In this manuscript we describe ModelingToolkit (MTK), a symbolic equation-based modeling system that allows for composable transformations to generate stable, efficient, and parallelized model implementations. MTK blurs the lines of traditional symbolic computing by acting directly on a user's numerical code. We show the ability to apply graph algorithms for automatically parallelizing and performing index reduction on code written for differential-algebraic equation (DAE) solvers, "fixing" the performance and stability of the model without requiring any changes on the user's part. We demonstrate how composable model transformations can be combined with automated data-driven surrogate generation techniques, allowing machine learning methods to generate accelerated approximate models within an acausal modeling framework. These reduced models are shown to outperform the Dymola Modelica compiler on an HVAC model by 590x at 3\% error. Together, this demonstrates MTK as a system for bringing the latest research in graph transformations directly to modeling applications.
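
MTK itself is a Julia package, but its symbolic-numeric workflow can be mimicked in Python with SymPy to convey the flavor: take equations as the user wrote them, symbolically eliminate a redundant algebraic variable (a crude analogue of MTK's structural simplification), and then generate a fast numerical right-hand side. This is an analogy only, not MTK's actual API.

```python
import sympy as sp

t = sp.symbols('t')
x, y, v = sp.Function('x')(t), sp.Function('y')(t), sp.Function('v')(t)

# user model: a redundant algebraic variable v = 2*x couples the equations,
# turning what could be 2 ODEs into a 3-equation DAE
eqs = [sp.Eq(x.diff(t), v - y), sp.Eq(y.diff(t), x * y), sp.Eq(v, 2 * x)]

v_def = sp.solve(eqs[2], v)[0]            # alias elimination: v -> 2*x
reduced = [eq.subs(v, v_def) for eq in eqs[:2]]
print(reduced)                            # 2 ODEs instead of a 3-equation DAE

# generate a numerical RHS suitable for an ODE integrator
X, Y = sp.symbols('X Y')
rhs = sp.lambdify((X, Y), [reduced[0].rhs.subs({x: X, y: Y}),
                           reduced[1].rhs.subs({x: X, y: Y})])
print(rhs(1.0, 0.5))                      # -> [1.5, 0.5]
```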

A stabilized finite element method is introduced for the simulation of time-periodic creeping flows, such as those found in the cardiorespiratory system. The new technique, which is formulated in the frequency rather than the time domain, strictly uses real arithmetic and permits the use of the same shape functions for pressure and velocity for ease of implementation. It involves the addition of the Laplacian of pressure to the continuity equation, with a complex-valued stabilization parameter that is derived systematically from the momentum equation. The numerical experiments show the excellent accuracy and robustness of the proposed method in simulating flows in complex and canonical geometries over a wide range of conditions. The present method significantly outperforms a traditional solver in terms of both computational cost and scalability, lowering the overall solution turnover time by several orders of magnitude.
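
Frequency-domain formulations naturally produce complex linear systems, and one standard way to work "strictly in real arithmetic" is the equivalent 2x2 real block form: (A + iB)(x + iy) = f + ig is the same as [[A, -B], [B, A]] [x; y] = [f; g]. Whether the paper uses exactly this device is not stated in the abstract; the snippet below simply verifies the identity on a small random system.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
f, g = rng.standard_normal(n), rng.standard_normal(n)

block = np.block([[A, -B], [B, A]])       # real-arithmetic equivalent system
xy = np.linalg.solve(block, np.concatenate([f, g]))
x, y = xy[:n], xy[n:]

z = np.linalg.solve(A + 1j * B, f + 1j * g)   # complex reference solution
print(np.allclose(z, x + 1j * y))         # True: the two solves agree
```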

The Residual Networks of Residual Networks (RoR) architecture exhibits excellent performance on the image classification task, but sharply increasing the number of feature-map channels makes the transmission of feature information incoherent, which loses some information relevant to the classification prediction and limits classification performance. In this paper, a Pyramidal RoR network model is proposed by analysing the performance characteristics of RoR and combining it with PyramidNet. First, based on RoR, we design a Pyramidal RoR network model in which the number of channels increases gradually. Second, we analyse the effect of different residual block structures on performance and choose the residual block structure that best favours classification performance. Finally, we further optimize the Pyramidal RoR network with drop-path, which avoids over-fitting and saves training time. Image classification experiments were performed on the CIFAR-10, CIFAR-100 and SVHN datasets, achieving what were then the lowest classification error rates of 2.96%, 16.40% and 1.59%, respectively. Experiments show that the Pyramidal RoR optimization method improves network performance on different datasets and effectively suppresses the vanishing-gradient problem in DCNN training.
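
A hypothetical PyTorch sketch of the two ingredients named in the abstract: residual blocks whose channel widths grow gradually from block to block (the "pyramid"), and drop-path, which randomly skips entire residual branches during training. This is not the paper's exact architecture, and a production drop-path would also rescale the branch at test time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidalBlock(nn.Module):
    def __init__(self, c_in, c_out, drop_prob=0.1):
        super().__init__()
        self.branch = nn.Sequential(       # BN-Conv-BN-ReLU-Conv-BN, PyramidNet-style
            nn.BatchNorm2d(c_in),
            nn.Conv2d(c_in, c_out, 3, padding=1, bias=False),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1, bias=False),
            nn.BatchNorm2d(c_out))
        self.drop_prob = drop_prob
        self.pad = c_out - c_in            # zero-pad identity up to c_out channels

    def forward(self, x):
        identity = F.pad(x, (0, 0, 0, 0, 0, self.pad))
        if self.training and torch.rand(()) < self.drop_prob:
            return identity                # drop-path: skip the residual branch
        return identity + self.branch(x)

# channel widths increase linearly across blocks, e.g. 16 -> 64 over 9 blocks
widths = [16 + round(i * (64 - 16) / 9) for i in range(10)]
blocks = nn.Sequential(*[PyramidalBlock(widths[i], widths[i + 1])
                         for i in range(9)])
out = blocks(torch.randn(2, 16, 32, 32))
print(out.shape)                           # torch.Size([2, 64, 32, 32])
```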
