
When constructing high-order schemes for solving hyperbolic conservation laws, the corresponding high-order reconstructions are commonly performed in characteristic spaces to eliminate spurious oscillations as much as possible. For multi-dimensional finite volume (FV) schemes, the characteristic decomposition must be performed several times, in the different normal directions of the target cell, which is very time-consuming. In this paper, we propose a rotated characteristic decomposition technique which requires only a single decomposition for multi-dimensional reconstructions. The rotated direction depends only on the gradient of a specific physical quantity, which is cheap to compute. This technique not only reduces the computational cost remarkably, but also controls spurious oscillations effectively. We take a third-order weighted essentially non-oscillatory finite volume (WENO-FV) scheme for solving the Euler equations as an example to demonstrate the efficiency of the proposed technique.
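To make the idea concrete, below is a minimal Python sketch of a rotated characteristic projection on a toy 2x2 linear hyperbolic system; the choice of density as the gradient quantity, the system matrices, and the stand-in reconstruction are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def rotated_direction(grad_q, fallback=(1.0, 0.0)):
    """Unit vector along the gradient of a chosen physical quantity
    (density is a plausible choice; the paper's quantity may differ)."""
    g = np.asarray(grad_q, float)
    norm = np.linalg.norm(g)
    return g / norm if norm > 1e-12 else np.asarray(fallback)

def char_projectors(Ax, Ay, n):
    """Left/right eigenvector matrices of the directional Jacobian
    A(n) = n_x*Ax + n_y*Ay, with L = R^{-1}."""
    A = n[0] * Ax + n[1] * Ay
    _, R = np.linalg.eig(A)
    return np.linalg.inv(R), R

# The rotated technique: decompose ONCE per cell along the gradient
# direction and reuse (L, R) for every face, instead of decomposing
# once per face normal.
Ax = np.array([[0.0, 1.0], [1.0, 0.0]])    # toy 2x2 hyperbolic system
Ay = np.array([[0.0, 0.5], [0.5, 0.0]])
n = rotated_direction([0.3, 0.9])          # e.g. a local density gradient
L, R = char_projectors(Ax, Ay, n)

stencil = np.random.rand(5, 2)             # cell averages on a stencil
W = stencil @ L.T                          # project to characteristic variables
W_face = W.mean(axis=0)                    # stand-in for a 1D WENO3 reconstruction
U_face = R @ W_face                        # project back to conservative variables
```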

Related content

This work deals with a number of questions relative to the discrete and continuous adjoint fields associated with the compressible Euler equations and classical aerodynamic functions. The consistency of the discrete adjoint equations with the corresponding continuous adjoint partial differential equation is one of them. It has been established or at least discussed only for a handful of numerical schemes, and a contribution of this article is to give the adjoint consistency conditions for the 2D Jameson-Schmidt-Turkel scheme in cell-centred finite-volume formulation. The consistency issue is also studied here from a new heuristic point of view, by discretizing the continuous adjoint equation for the discrete flow and adjoint fields. Both points of view prove to provide useful information. Besides, it has often been noted that discrete or continuous inviscid lift and drag adjoints exhibit numerical divergence close to the wall and stagnation streamline for a wide range of subsonic and transonic flow conditions. This is analyzed here using the physical source term perturbation method introduced in reference [Giles and Pierce, AIAA Paper 97-1850, 1997]. From this point of view, the fourth physical source term appears to be the only one responsible for this behavior. It is also demonstrated that the numerical divergence of the adjoint variables corresponds to the response of the flow to the convected increment of stagnation pressure and diminution of entropy created at the source, and the resulting change in lift and drag.

Numerical stabilization is often used to eliminate (alleviate) the spurious oscillations generally produced by full order models (FOMs) in under-resolved or marginally-resolved simulations of convection-dominated flows. In this paper, we investigate the role of numerical stabilization in reduced order models (ROMs) of marginally-resolved convection-dominated flows. Specifically, we investigate FOM-ROM consistency, i.e., whether the numerical stabilization is beneficial both at the FOM and the ROM level. As a numerical stabilization strategy, we focus on the evolve-filter-relax (EFR) regularization algorithm, which centers around spatial filtering. To investigate FOM-ROM consistency, we consider two ROM strategies: (i) the EFR-ROM, in which the EFR stabilization is used at the FOM level but not at the ROM level; and (ii) the EFR-EFRROM, in which the EFR stabilization is used both at the FOM and the ROM level. We compare the EFR-ROM with the EFR-EFRROM in the numerical simulation of a 2D flow past a circular cylinder in the convection-dominated, marginally-resolved regime. We also perform model reduction with respect to both time and Reynolds number. Our numerical investigation shows that the EFR-EFRROM is more accurate than the EFR-ROM, which suggests that FOM-ROM consistency is beneficial in convection-dominated, marginally-resolved flows.
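For reference, here is a minimal 1D sketch of one EFR step, using a common Fourier-space differential filter; the advection test problem, filter radius delta, and relaxation parameter chi are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

def evolve_filter_relax(u, evolve, delta, chi, dx):
    """One EFR step: (1) evolve, (2) apply the differential filter
    (I - delta^2 d^2/dx^2) u_bar = u in Fourier space (1D, periodic),
    (3) relax via a convex combination. Parameter names are illustrative."""
    v = evolve(u)                                                     # evolve
    k = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)
    v_bar = np.fft.ifft(np.fft.fft(v) / (1 + (delta * k) ** 2)).real  # filter
    return (1 - chi) * v + chi * v_bar                                # relax

# Marginally-resolved linear advection with a first-order upwind evolve step.
N, c, dt = 256, 1.0, 1e-3
dx = 1.0 / N
x = np.arange(N) * dx
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)        # square wave initial data
evolve = lambda w: w - c * dt / dx * (w - np.roll(w, 1))
for _ in range(200):
    u = evolve_filter_relax(u, evolve, delta=2 * dx, chi=0.1, dx=dx)
```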

The method of constructing trigonometric Hermite splines, which interpolate the values of some periodic function and of its derivatives at the nodes of a uniform grid, is considered. The proposed method is based on the periodicity properties of trigonometric functions and reduces to solving only second-order systems of linear algebraic equations, whose solutions can be obtained in advance. When implementing this method, it is necessary to compute the coefficients of the interpolating trigonometric polynomials that match the values of the function itself and the values of its derivatives at the nodes of the uniform grid; known fast Fourier transform algorithms can be used for this purpose. Examples of the construction of trigonometric Hermite splines of the first and second orders are given. The proposed method can be recommended for practical use.
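The FFT building block of this construction, interpolating uniform-grid samples by a trigonometric polynomial with a single FFT, can be sketched as follows; the Hermite step (small per-frequency systems coupling function and derivative data) is only indicated in a comment, and all names are illustrative.

```python
import numpy as np

def trig_interp_coeffs(f_vals):
    """Coefficients c_k of the trigonometric polynomial interpolating
    f at the uniform nodes x_j = 2*pi*j/N, computed with one FFT."""
    return np.fft.fft(f_vals) / len(f_vals)

def trig_interp_eval(c, x):
    """Evaluate sum_k c_k * exp(i*k*x), with k in FFT frequency ordering."""
    k = np.fft.fftfreq(len(c), d=1.0 / len(c))   # 0, 1, ..., -1 ordering
    return np.real(sum(ck * np.exp(1j * kk * x) for ck, kk in zip(c, k)))

# The Hermite construction would repeat this for the derivative samples
# and solve a small 2x2 system per frequency; here we only show the
# function-value interpolation that underlies it.
x_nodes = 2 * np.pi * np.arange(8) / 8
c = trig_interp_coeffs(np.sin(x_nodes) + 0.5 * np.cos(2 * x_nodes))
print(trig_interp_eval(c, 0.3))                  # ~ sin(0.3) + 0.5*cos(0.6)
```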

We study a numerical approximation for a nonlinear variable-order fractional differential equation via an integral equation method. Because the discretization coefficients of the variable-order fractional derivative in standard approximation schemes lack monotonicity, existing numerical analysis techniques do not apply directly. By an approximate inversion technique, the proposed model is transformed into a Volterra integral equation of the second kind, based on which a collocation method under a uniform or graded mesh is developed and analyzed. In particular, the error estimates improve the existing results by deriving a consistent and sharper mesh-grading parameter and by characterizing the convergence rates in terms of the initial value of the variable order, which demonstrates its critical role in determining the smoothness of the solutions and thus the numerical accuracy.
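As a concrete illustration of the integral-equation route, the following Python sketch solves a generic second-kind Volterra equation on a graded mesh with the product trapezoidal rule; the scheme, grading exponent r, and test kernel are generic assumptions, not the paper's collocation method.

```python
import numpy as np

def volterra2_trapezoid(f, K, T=1.0, N=64, r=2.0):
    """Solve u(t) = f(t) + int_0^t K(t,s) u(s) ds on the graded mesh
    t_j = T*(j/N)^r, stepping forward with trapezoidal weights.
    The grading exponent r plays the role of a mesh-grading parameter."""
    t = T * (np.arange(N + 1) / N) ** r
    u = np.zeros(N + 1)
    u[0] = f(t[0])
    for n in range(1, N + 1):
        h = np.diff(t[:n + 1])                 # past step sizes
        w = np.zeros(n + 1)                    # trapezoidal weights
        w[:-1] += h / 2
        w[1:] += h / 2
        rhs = f(t[n]) + np.dot(w[:n], K(t[n], t[:n]) * u[:n])
        u[n] = rhs / (1.0 - w[n] * K(t[n], t[n]))
    return t, u

# Example: K(t,s) = -1 and f(t) = 1 give the exact solution u(t) = exp(-t).
t, u = volterra2_trapezoid(lambda s: 1.0, lambda t, s: -np.ones_like(s))
print(np.max(np.abs(u - np.exp(-t))))          # small discretization error
```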

We present SSAG, an efficient and scalable method for computing a lossy graph summary that retains the essential structure of the original graph. SSAG computes a sparse representation (summary) of the input graph and also caters for graphs with node attributes. The summary of a graph $G$ is stored as a graph on supernodes (subsets of vertices of $G$), where two supernodes are connected by a weighted superedge. The proposed method constructs a summary graph on $k$ supernodes that minimizes the reconstruction error (the difference between the original graph and the graph reconstructed from the summary) while maximizing homogeneity with respect to attribute values. We construct the summary by iteratively merging pairs of nodes. We derive a closed-form expression to efficiently compute the reconstruction error after merging a pair, and approximate this score in constant time. To reduce the search space for selecting the best pair to merge, we assign a weight to each supernode that closely quantifies its contribution to the scores of the pairs containing it. We choose the best pair for merging from a random sample made up of supernodes selected with probability proportional to their weights. With weighted sampling, a logarithmic-sized sample yields a comparable summary based on various quality measures. We propose a sparsification step for the constructed summary to reduce the storage cost to a given target size with a marginal increase in reconstruction error. Empirical evaluation on several real-world graphs and comparison with state-of-the-art methods shows that SSAG is up to $5\times$ faster and generates summaries of comparable quality. We further demonstrate the goodness of SSAG by accurately and efficiently answering queries about the graph structure and attribute information using the summary only.
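A minimal sketch of the weighted-sampling step might look as follows; the sample size, pairing scheme, and function names are illustrative assumptions, and `approx_score` stands in for the paper's constant-time score approximation.

```python
import math
import random

def sample_merge_candidates(supernodes, weights):
    """Draw a logarithmic-size weighted sample of supernodes and return
    all candidate pairs within it. `weights` approximate each supernode's
    contribution to the scores of pairs containing it."""
    n = len(supernodes)
    s = max(2, math.ceil(math.log2(n + 1)))        # log-sized sample
    picked = random.choices(supernodes, weights=weights, k=s)
    picked = list(dict.fromkeys(picked))           # drop duplicates
    return [(picked[i], picked[j])
            for i in range(len(picked)) for j in range(i + 1, len(picked))]

def best_merge(pairs, approx_score):
    """Pick the pair whose merge (approximately) increases the
    reconstruction error the least."""
    return min(pairs, key=approx_score)
```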

This paper considers the reconstruction of a defect in a two-dimensional waveguide during non-destructive ultrasonic inspection using a derivative-based optimization approach. The propagation of the mechanical waves is simulated by the Scaled Boundary Finite Element Method (SBFEM), which builds on a semi-analytical approach. The simulated data are then fitted to a given set of data describing the reflection from a defect to be reconstructed. For this purpose, we apply an iteratively regularized Gauss-Newton method in combination with algorithmic differentiation to provide the required derivative information accurately and efficiently. We present numerical results for three different kinds of defects, namely a crack, a delamination, and corrosion. These examples show that the parameterization of the defect can be reconstructed efficiently and robustly in the presence of noise.
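A generic sketch of the iteratively regularized Gauss-Newton iteration is shown below; in the paper the forward map and Jacobian would come from the SBFEM solver and algorithmic differentiation, whereas here they are user-supplied callables with assumed names, and the regularization schedule is a standard textbook choice.

```python
import numpy as np

def irgn(forward, jacobian, p0, data, alpha0=1.0, q=0.7, iters=10):
    """Iteratively regularized Gauss-Newton: at step k solve
        (J^T J + alpha_k I) dp = J^T (data - F(p_k)) + alpha_k (p0 - p_k)
    with a geometrically decaying regularization alpha_k = alpha0 * q**k."""
    p0 = np.asarray(p0, float)
    p = p0.copy()
    for k in range(iters):
        alpha = alpha0 * q ** k
        J = jacobian(p)                    # shape (n_data, n_params)
        r = data - forward(p)              # current residual
        lhs = J.T @ J + alpha * np.eye(p.size)
        rhs = J.T @ r + alpha * (p0 - p)
        p = p + np.linalg.solve(lhs, rhs)
    return p
```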

In this work we estimate the number of randomly selected elements of a tensor that, with high probability, guarantees local convergence of Riemannian gradient descent for tensor train completion. We derive a new bound for the orthogonal projections onto the tangent spaces based on the harmonic mean of the unfoldings' singular values, and introduce a notion of core coherence for tensor trains. We also extend the results to tensor train completion with side information and obtain the corresponding local convergence guarantees.

A polynomial Turing kernel for some parameterized problem $P$ is a polynomial-time algorithm that solves $P$ using queries to an oracle of $P$ whose sizes are upper-bounded by some polynomial in the parameter. Here the term "polynomial" refers to the bound on the query sizes, as the running time of any kernel is required to be polynomial. One of the most important open goals in parameterized complexity is to understand the applicability and limitations of polynomial Turing kernels. As any fixed-parameter tractable problem admits a Turing kernel of some size, the focus has mostly been on determining which problems admit such kernels whose query sizes can indeed be bounded by some polynomial. In this paper we take a different approach, and instead focus on the number of queries that a Turing kernel uses, assuming it is restricted to using only polynomial-sized queries. Our study focuses on one of the main problems studied in parameterized complexity, the Clique problem: given a graph $G$ and an integer $k$, determine whether there are $k$ pairwise adjacent vertices in $G$. We show that Clique parameterized by several structural parameters exhibits the following phenomena:

- It admits polynomial Turing kernels which use a sublinear number of queries, namely $O(n/\log^c n)$ queries, where $n$ is the total size of the graph and $c$ is any constant. This holds even for a very restrictive type of Turing kernels which we call OR-kernels.
- It does not admit polynomial Turing kernels which use $O(n^{1-\epsilon})$ queries, unless NP$\subseteq$coNP/poly.

For proving the second item above, we develop a new framework for bounding the number of queries needed by polynomial Turing kernels. This framework is inspired by the standard lower bounds framework for Karp kernels, and while it is quite similar, it still requires some novel ideas to allow its extension to the Turing setting.

Logistic regression is one of the most fundamental methods for modeling the probability of a binary outcome based on a collection of covariates. However, the classical formulation of logistic regression relies on the independent sampling assumption, which is often violated when the outcomes interact through an underlying network structure. This necessitates the development of models that can simultaneously handle both the network peer-effect (arising from neighborhood interactions) and the effect of high-dimensional covariates. In this paper, we develop a framework for incorporating such dependencies in a high-dimensional logistic regression model by introducing a quadratic interaction term, as in the Ising model, designed to capture pairwise interactions from the underlying network. The resulting model can also be viewed as an Ising model, where the node-dependent external fields linearly encode the high-dimensional covariates. We propose a penalized maximum pseudo-likelihood method for estimating the network peer-effect and the effect of the covariates, which, in addition to handling the high-dimensionality of the parameters, conveniently avoids the computational intractability of the maximum likelihood approach. Consequently, our method is computationally efficient and, under various standard regularity conditions, our estimate attains the classical high-dimensional rate of consistency. In particular, our results imply that even under network dependence it is possible to consistently estimate the model parameters at the same rate as in classical logistic regression, when the true parameter is sparse and the underlying network is not too dense. As a consequence of the general results, we derive the rates of consistency of our estimator for various natural graph ensembles, such as bounded degree graphs, sparse Erd\H{o}s-R\'{e}nyi random graphs, and stochastic block models.
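For concreteness, a minimal sketch of the penalized pseudo-likelihood objective is given below, assuming outcomes coded in {-1,+1}, an adjacency matrix A, and an l1 penalty; the paper's exact parameterization, scaling, and penalty may differ.

```python
import numpy as np

def penalized_neg_log_pl(beta, theta, y, A, X, lam):
    """Penalized negative log pseudo-likelihood for an Ising-type model
    with covariate-driven external fields. The conditional law of
    y_i in {-1,+1} given the rest is sigmoid(2 * y_i * m_i), where
    m_i = beta * (A @ y)_i + (X @ theta)_i, so the negative conditional
    log-likelihood is log(1 + exp(-2 * y_i * m_i))."""
    m = beta * (A @ y) + X @ theta
    return np.sum(np.log1p(np.exp(-2.0 * y * m))) + lam * np.sum(np.abs(theta))

# Estimation would minimize this jointly in (beta, theta), e.g. with a
# proximal-gradient method to handle the non-smooth l1 term.
```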

Data augmentation is an effective technique to improve the generalization of deep neural networks. Recently, AutoAugment proposed a well-designed search space and a search algorithm that automatically finds augmentation policies in a data-driven manner. However, AutoAugment is computationally intensive. In this paper, we propose an efficient gradient-based search algorithm, called Hypernetwork-Based Augmentation (HBA), which simultaneously learns model parameters and augmentation hyperparameters in a single training run. Our HBA uses a hypernetwork to approximate a population-based training algorithm, which enables us to tune augmentation hyperparameters by gradient descent. Besides, we introduce a weight-sharing strategy that simplifies our hypernetwork architecture and speeds up our search algorithm. We conduct experiments on CIFAR-10, CIFAR-100, SVHN, and ImageNet. Our results show that HBA is competitive with state-of-the-art methods in terms of both search speed and accuracy.
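The core mechanism, that a differentiable map from hyperparameters to weights lets gradients flow back to the hyperparameters, can be illustrated with a toy PyTorch layer; the architecture and names below are illustrative assumptions, not HBA's actual design.

```python
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    """A linear layer whose weights are produced by a tiny hypernetwork
    from the augmentation hyperparameters `lam`."""
    def __init__(self, d_in, d_out, d_hyper):
        super().__init__()
        self.gen_w = nn.Linear(d_hyper, d_in * d_out)   # weight generator
        self.gen_b = nn.Linear(d_hyper, d_out)          # bias generator
        self.d_in, self.d_out = d_in, d_out

    def forward(self, x, lam):
        W = self.gen_w(lam).view(self.d_out, self.d_in)
        b = self.gen_b(lam)
        return x @ W.t() + b

# Because the output is differentiable in lam, the augmentation
# hyperparameters can be tuned by gradient descent alongside the model.
layer = HyperLinear(8, 4, d_hyper=2)
lam = torch.tensor([0.3, 0.7], requires_grad=True)
out = layer(torch.randn(5, 8), lam)
out.sum().backward()
print(lam.grad)            # gradients flow to the hyperparameters too
```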
