
We focus on the numerical modelling of water waves by means of depth-averaged models. We consider in particular PDE systems consisting of a nonlinear hyperbolic model plus a linear dispersive perturbation involving an elliptic operator. We propose two strategies to construct reduced order models for these problems, with the main focus being the control of the overhead related to the inversion of the elliptic operators, as well as robustness with respect to variations of the flow parameters. In the first approach, a linear reduction strategy is applied only to the elliptic component, while the nonlinear fluxes are still computed explicitly. This hybrid approach, referred to as pdROM, is compared to a hyper-reduction strategy based on the empirical interpolation method (EIM), which also reduces the nonlinear fluxes. We evaluate the two approaches on a variety of benchmarks involving a generalized variant of the BBM-KdV model with a variable bottom and a one-dimensional enhanced weakly dispersive shallow water system. The results show the potential of both approaches in terms of cost reduction, with a clear advantage for the pdROM in terms of robustness and for the EIMROM in terms of cost reduction.

Related content

This paper considers the surrogate modeling of a complex numerical code in a multifidelity framework when the code output is a time series. Using an experimental design of the low- and high-fidelity code levels, an original Gaussian process regression method is proposed. The code output is expanded on a basis built from the experimental design. The first coefficients of the expansion are processed by a co-kriging approach; the remaining coefficients are collectively processed by a kriging approach with covariance tensorization. The resulting surrogate model, which accounts for the uncertainty in the basis construction, is shown to outperform standard dimension reduction techniques in terms of prediction error and uncertainty quantification.
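The kriging building block that acts on each expansion coefficient can be sketched as follows (a generic single-fidelity Gaussian process regression with an assumed squared-exponential kernel and made-up data, not the paper's co-kriging or tensorized covariance):

```python
import numpy as np

def rbf(a, b, ell=0.3):
    # Squared-exponential covariance between two sets of 1D inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

X = np.linspace(0.0, 1.0, 12)          # experimental design
y = np.sin(2.0 * np.pi * X)            # stand-in for one expansion coefficient
Xs = np.array([0.33, 0.71])            # prediction points

K = rbf(X, X) + 1e-8 * np.eye(len(X))  # jitter for numerical stability
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)                             # posterior mean
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)   # posterior variance
print(mean, var)
```

The posterior variance is what a co-kriging extension would propagate across fidelity levels; here it simply shrinks toward zero near the design points.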

Model order reduction through the POD-Galerkin method can lead to dramatic gains in computational efficiency when solving physical problems. However, the applicability of the method to nonlinear high-dimensional dynamical systems such as the Navier-Stokes equations has been shown to be limited, producing inaccurate and sometimes unstable models. This paper proposes a closure modeling approach for classical POD-Galerkin reduced order models (ROMs). We use multilayer perceptrons (MLPs) to learn a continuous-in-time closure model through the recently proposed Neural ODE method. Inspired by Takens' theorem as well as the Mori-Zwanzig formalism, we augment ROMs with a delay differential equation architecture to model non-Markovian effects in reduced models. The proposed model, called CD-ROM (Complementary Deep-Reduced Order Model), is able to retain information from past states of the system and use it to correct the imperfect reduced dynamics. The model can be integrated in time as a system of ordinary differential equations using any classical time marching scheme. We demonstrate the ability of our CD-ROM approach to improve the accuracy of POD-Galerkin models on two CFD examples, even in configurations unseen during training.
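The closure principle reduces to a toy scalar example (entirely illustrative: the paper's closure is a neural network with delays, replaced here by a linear term fitted on residuals between the true and reduced dynamics):

```python
import math

def true_rhs(u):  return -1.0 * u
def rom_rhs(u):   return -0.8 * u            # imperfect reduced dynamics

# Fit a linear closure c(u) = w * u on residuals of the reduced model
us = [0.1 + 0.038 * k for k in range(50)]
resid = [true_rhs(u) - rom_rhs(u) for u in us]
w = sum(r * u for r, u in zip(resid, us)) / sum(u * u for u in us)  # expect -0.2

def corrected_rhs(u): return rom_rhs(u) + w * u

# Integrate with explicit Euler and compare to the exact solution e^{-t} at t = 1
dt, steps = 1e-3, 1000
u_rom, u_cd = 1.0, 1.0
for _ in range(steps):
    u_rom += dt * rom_rhs(u_rom)
    u_cd  += dt * corrected_rhs(u_cd)
exact = math.exp(-1.0)
print(abs(u_rom - exact), abs(u_cd - exact))   # closure removes most of the error
```

The corrected model recovers the true decay rate; in CD-ROM the same role is played by a learned, history-dependent correction.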

Semantic segmentation of point clouds usually relies on dense annotation, which is exhausting and costly, so solutions for the weakly supervised scheme with only sparse points annotated have attracted wide attention. Existing works start from the given labels and propagate them to highly related but unlabeled points, guided by the data, e.g., intra-point relations. However, this paradigm suffers from (i) inefficient exploitation of data information and (ii) strong reliance on labels, and is thus easily suppressed when far fewer annotations are given. Therefore, we propose a novel framework, PointMatch, that stands on both data and labels, applying consistency regularization to sufficiently probe information from the data itself while leveraging weak labels as assistance. By doing so, meaningful information can be learned from both data and labels for better representation learning, which also makes the model more robust to label sparsity. Simple yet effective, the proposed PointMatch achieves state-of-the-art performance under various weakly supervised schemes on both the ScanNet-v2 and S3DIS datasets, especially in settings with extremely sparse labels, e.g., surpassing SQN by 21.2% and 17.2% on the 0.01% and 0.1% settings of ScanNet-v2, respectively.
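The combination of a sparse supervised term with a consistency term can be sketched in a few lines (a schematic with invented features, model, and weighting, not PointMatch's actual loss): the supervised loss touches only the handful of labeled points, while the consistency loss enforces agreement between predictions on two perturbed views of every point.

```python
import numpy as np

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 3))                 # toy point features
labels = np.full(100, -1)                          # -1 marks unlabeled points
labels[:3], labels[3:6] = 1, 0                     # only 6 of 100 points labeled

def predict(x, w):
    return 1.0 / (1.0 + np.exp(-(x @ w)))          # logistic scores in (0, 1)

w = rng.normal(size=3)
view1 = feats + 0.01 * rng.normal(size=feats.shape)   # two light augmentations
view2 = feats + 0.01 * rng.normal(size=feats.shape)
p1, p2 = predict(view1, w), predict(view2, w)

mask = labels >= 0
sup = -np.mean(labels[mask] * np.log(p1[mask] + 1e-9)
               + (1 - labels[mask]) * np.log(1 - p1[mask] + 1e-9))
cons = np.mean((p1 - p2) ** 2)                     # agreement on all points
loss = sup + cons
print(sup, cons, loss)
```

The consistency term supplies a training signal on the 94 unlabeled points that the supervised term never sees, which is what makes the scheme robust to extreme label sparsity.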

We consider generalized Nash equilibrium problems (GNEPs) with non-convex strategy spaces and non-convex cost functions. This general class of games includes the important case of games with mixed-integer variables, for which only a few results are known in the literature. We present a new approach to characterize equilibria via a convexification technique using the Nikaido-Isoda function. For any given instance of the GNEP, we construct a set of convexified instances and show that a feasible strategy profile is an equilibrium for the original instance if and only if it is an equilibrium for any convexified instance and the convexified cost functions coincide with the initial ones. We further develop this approach along three dimensions. We first show that for quasi-linear models, where a convexified instance exists in which, for fixed strategies of the opponent players, the cost function of every player is linear and the respective strategy space is polyhedral, the convexification reduces the GNEP to a standard (non-linear) optimization problem. Second, we derive two complete characterizations of those GNEPs for which the convexification leads to a jointly constrained or a jointly convex GNEP, respectively. These characterizations require new concepts related to the interplay of the convex hull operator applied to restricted subsets of feasible strategies and may be interesting on their own. Finally, we demonstrate the applicability of our results by presenting a numerical study regarding the computation of equilibria for a class of integral network flow GNEPs.
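The equilibrium characterization via the Nikaido-Isoda function can be made concrete on a toy game (an assumed two-player example, not from the paper): with costs $c_i(x) = (x_i - 0.5\,x_{-i})^2$ on $[0,1]^2$, a profile $x$ is an equilibrium exactly when no unilateral deviation yields a positive gain, i.e. $\max_y \Psi(x, y) = 0$.

```python
import numpy as np

def cost(xi, x_other):
    # Symmetric toy cost for both players
    return (xi - 0.5 * x_other) ** 2

def nikaido_isoda(x, y):
    # Total gain when each player unilaterally deviates from x_i to y_i
    return sum(cost(x[i], x[1 - i]) - cost(y[i], x[1 - i]) for i in range(2))

grid = np.linspace(0.0, 1.0, 101)
def best_gain(x):
    # Approximate max_y Psi(x, y) by a grid search over deviations
    return max(nikaido_isoda(x, (a, b)) for a in grid for b in grid)

g_eq = best_gain((0.0, 0.0))    # equilibrium: no profitable deviation
g_no = best_gain((1.0, 1.0))    # not an equilibrium: positive gain exists
print(g_eq, g_no)
```

Here $(0,0)$ attains a maximal gain of zero, certifying the equilibrium, while $(1,1)$ admits deviations worth 0.25 per player.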

This paper proposes a simple unified approach to testing transformations on cumulative distribution functions (CDFs) in the presence of nuisance parameters. We consider testing general parametric transformations on two CDFs and then generalize the test to multiple CDFs. The test is constructed using a numerical bootstrap method that is easy to implement. The proposed test is shown to be asymptotically size-controlled and consistent. Monte Carlo simulations and an empirical application show that the test performs well in finite samples.
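A minimal numerical-bootstrap sketch for comparing two CDFs (an assumed illustration using the Kolmogorov-Smirnov distance and pooled resampling, not the paper's transformation test): the null distribution of the statistic is approximated by resampling from the pooled sample.

```python
import random

def ks_stat(a, b):
    # Kolmogorov-Smirnov distance between two empirical CDFs
    grid = sorted(set(a) | set(b))
    cdf = lambda t, s: sum(v <= t for v in s) / len(s)
    return max(abs(cdf(t, a) - cdf(t, b)) for t in grid)

random.seed(0)
x = [random.gauss(0, 1) for _ in range(100)]
y = [random.gauss(0, 1) for _ in range(100)]   # H0 (equal CDFs) is true here
obs = ks_stat(x, y)

# Bootstrap the null distribution by resampling from the pooled sample
pool = x + y
boot = []
for _ in range(100):
    rs = [random.choice(pool) for _ in range(200)]
    boot.append(ks_stat(rs[:100], rs[100:]))
crit = sorted(boot)[95]                         # approximate 5%-level critical value
print(obs, crit, obs > crit)
```

The observed statistic is compared to the bootstrap critical value; with both samples drawn from the same distribution, a rejection occurs only at roughly the nominal 5% rate.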

We consider the Cauchy problem for a second-order nonlinear evolution equation in a Hilbert space. This equation represents the abstract generalization of the Ball integro-differential equation. The general nonlinear case is considered, in which the terms of the equation involve the square of the norm of a gradient. A three-layer semi-discrete scheme is proposed in order to find an approximate solution. In this scheme, the nonlinear terms that depend on the gradient are approximated by an integral mean. We show that the solution of the nonlinear discrete problem and the corresponding difference analogue of its first-order derivative are uniformly bounded. For the solution of the corresponding linear discrete problem, high-order a priori estimates are obtained by using two-variable Chebyshev polynomials. Based on these estimates we prove the stability of the nonlinear discrete problem. For smooth solutions, we provide error estimates for the approximate solution. An iteration method is applied in order to find an approximate solution at each temporal step. The convergence of the iteration process is proved.
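To fix ideas, a three-layer scheme of this kind might take the following schematic form (an illustrative sketch with generic notation for an abstract equation $u'' + a(\|\nabla u\|^2)Au = f$; this is not the exact scheme of the text): with time step $\tau$ and $u^k \approx u(t_k)$,

```latex
\frac{u^{k+1} - 2u^{k} + u^{k-1}}{\tau^{2}}
  + a\!\left(\int_{0}^{1}\bigl\|\nabla\bigl(s\,u^{k+1} + (1-s)\,u^{k-1}\bigr)\bigr\|^{2}\,ds\right)
    A\,\frac{u^{k+1} + u^{k-1}}{2} = f^{k},
```

where the integral mean over the two outer layers replaces a pointwise evaluation of the nonlinear coefficient, which is the device that makes the uniform boundedness and stability estimates tractable.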

In order to characterize the fluctuation between the ergodic limit and the time-averaging estimator of a full discretization in a quantitative way, we establish a central limit theorem for the full discretization of the parabolic stochastic partial differential equation. The theorem shows that the normalized time-averaging estimator converges to a normal distribution with the variance being the same as that of the continuous case, where the scale used for the normalization corresponds to the temporal strong convergence order of the considered full discretization. A key ingredient in the proof is to extract an appropriate martingale difference series sum from the normalized time-averaging estimator so that the convergence to the normal distribution of such a sum and the convergence to zero in probability of the remainder are well balanced. The main novelty of our method to balance the convergence lies in proposing an appropriately modified Poisson equation so that it possesses space-independent regularity estimates. As a byproduct, the full discretization is shown to fulfill the weak law of large numbers, namely, the time-averaging estimator converges to the ergodic limit in probability.
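The behavior of the time-averaging estimator is easy to observe on a toy ergodic SDE (an Ornstein-Uhlenbeck process under Euler-Maruyama, an assumed stand-in for the parabolic SPDE of the text): for $dX = -X\,dt + dW$ and $f(x) = x^2$, the ergodic limit is $\pi(f) = 1/2$.

```python
import math, random

random.seed(0)
dt, N = 0.01, 200_000
x, acc = 0.0, 0.0
for _ in range(N):
    # Euler-Maruyama step for dX = -X dt + dW
    x += -x * dt + math.sqrt(dt) * random.gauss(0.0, 1.0)
    acc += x * x
est = acc / N          # time-averaging estimator of pi(f) for f(x) = x^2
print(est)             # close to the ergodic limit 0.5
```

The law of large numbers mentioned as a byproduct is exactly this convergence; the central limit theorem then quantifies the size and shape of the remaining fluctuation around 1/2.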

Sampling methods (e.g., node-wise, layer-wise, or subgraph sampling) have become an indispensable strategy for speeding up the training of large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on graph structural information and ignore the dynamicity of optimization, which leads to high variance in estimating the stochastic gradients. The high-variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method can be decomposed into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage, and that mitigating both types of variance is necessary to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance and explicitly reduces the variance introduced by embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and entails better generalization compared to existing methods.
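The core variance-reduction idea can be sketched with plain importance sampling (a hedged illustration with invented per-node gradient norms, not the paper's algorithm): sampling nodes with probabilities proportional to their gradient magnitudes keeps the estimator unbiased while collapsing its variance.

```python
import random

random.seed(0)
grads = [0.1] * 90 + [5.0] * 10      # per-node gradient norms; a few nodes dominate
total = sum(grads)                    # target quantity: the full-batch sum
n, m, trials = len(grads), 10, 2000

def estimate(probs):
    # Unbiased importance-sampling estimator of `total` from m sampled nodes
    idx = random.choices(range(n), weights=probs, k=m)
    return sum(grads[i] / (probs[i] * m) for i in idx)

def mean_var(probs):
    ests = [estimate(probs) for _ in range(trials)]
    mu = sum(ests) / trials
    return mu, sum((e - mu) ** 2 for e in ests) / trials

mu_u, var_u = mean_var([1.0 / n] * n)               # uniform sampling
mu_i, var_i = mean_var([g / total for g in grads])  # gradient-proportional sampling
print(mu_u, var_u, mu_i, var_i)
```

Both schemes are unbiased, but gradient-proportional sampling drives the variance to (here exactly) zero, which is the adaptive behavior the decoupled strategy exploits.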

Training large deep neural networks on massive datasets is computationally very challenging. There has been a recent surge of interest in using large-batch stochastic optimization methods to tackle this issue. The most prominent algorithm in this line of research is LARS, which, by employing layerwise adaptive learning rates, trains ResNet on ImageNet in a few minutes. However, LARS performs poorly for attention models like BERT, indicating that its performance gains are not consistent across tasks. In this paper, we first study a principled layerwise adaptation strategy to accelerate training of deep neural networks using large mini-batches. Using this strategy, we develop a new layerwise adaptive large-batch optimization technique called LAMB; we then provide convergence analysis of LAMB as well as LARS, showing convergence to a stationary point in general nonconvex settings. Our empirical results demonstrate the superior performance of LAMB across various tasks such as BERT and ResNet-50 training with very little hyperparameter tuning. In particular, for BERT training, our optimizer enables the use of very large batch sizes of 32868 without any degradation of performance. By increasing the batch size to the memory limit of a TPUv3 Pod, BERT training time can be reduced from 3 days to just 76 minutes (Table 1).
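The layerwise adaptation at the heart of LAMB can be sketched in a few lines (a simplified single-layer step; the published algorithm additionally includes bias correction, weight decay, and a clipping function on the trust ratio): an Adam-style direction is rescaled per layer by the trust ratio $\|w\|/\|\text{update}\|$.

```python
import math

def lamb_step(w, g, m, v, lr=0.01, b1=0.9, b2=0.999, eps=1e-6):
    # Adam-style moment updates (bias correction omitted for brevity)
    m = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, g)]
    v = [b2 * vi + (1 - b2) * gi * gi for vi, gi in zip(v, g)]
    update = [mi / (math.sqrt(vi) + eps) for mi, vi in zip(m, v)]
    # Layerwise trust ratio: rescale the step by ||w|| / ||update||
    w_norm = math.sqrt(sum(c * c for c in w))
    u_norm = math.sqrt(sum(c * c for c in update))
    trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    w = [wi - lr * trust * ui for wi, ui in zip(w, update)]
    return w, m, v

# One step on a toy two-parameter "layer"
w, m, v = [1.0, -2.0], [0.0, 0.0], [0.0, 0.0]
w2, m2, v2 = lamb_step(w, [0.5, 0.1], m, v)
step_norm = math.sqrt(sum((a - b) ** 2 for a, b in zip(w2, w)))
print(step_norm)   # equals lr * ||w||: the step size is set by the layer's scale
```

Because the trust ratio cancels the norm of the Adam direction, each layer moves by a fraction `lr` of its own weight norm, which is what keeps very large batch sizes stable across layers of different scales.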

We introduce a new multi-dimensional nonlinear embedding -- Piecewise Flat Embedding (PFE) -- for image segmentation. Based on the theory of sparse signal recovery, piecewise flat embedding with diverse channels attempts to recover a piecewise constant image representation with sparse region boundaries and sparse cluster value scattering. The resultant piecewise flat embedding exhibits interesting properties such as suppressing slowly varying signals, and offers an image representation with higher region identifiability which is desirable for image segmentation or high-level semantic analysis tasks. We formulate our embedding as a variant of the Laplacian Eigenmap embedding with an $L_{1,p} (0<p\leq1)$ regularization term to promote sparse solutions. First, we devise a two-stage numerical algorithm based on Bregman iterations to compute $L_{1,1}$-regularized piecewise flat embeddings. We further generalize this algorithm through iterative reweighting to solve the general $L_{1,p}$-regularized problem. To demonstrate its efficacy, we integrate PFE into two existing image segmentation frameworks, segmentation based on clustering and hierarchical segmentation based on contour detection. Experiments on four major benchmark datasets, BSDS500, MSRC, Stanford Background Dataset, and PASCAL Context, show that segmentation algorithms incorporating our embedding achieve significantly improved results.
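The shrinkage operator driving Bregman- and ISTA-type iterations for such $L_1$-regularized problems is simple to state (a sketch of the standard building block, not the paper's full two-stage algorithm):

```python
def soft_threshold(x, lam):
    # Proximal operator of lam * |.|: shrinks x toward zero by lam, clipping at 0
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# For min_x 0.5*(x - b)^2 + lam*|x| the minimizer is exactly soft_threshold(b, lam)
vals = [soft_threshold(b, 0.5) for b in (-2.0, -0.2, 0.0, 0.3, 1.5)]
print(vals)   # [-1.5, 0.0, 0.0, 0.0, 1.0]
```

Applied coordinate-wise inside a Bregman loop, this operator is what produces the sparse region boundaries of the piecewise flat embedding; iterative reweighting then extends it to the $L_{1,p}$ case.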
