
Hypergeometric structures in single- and multi-scale Feynman integrals emerge in a wide class of topologies. Using integration-by-parts relations, the associated master or scalar integrals have to be calculated. For this purpose it appears useful to devise an automated method which recognizes the respective (partial) differential equations related to the corresponding higher transcendental functions. We solve these equations through associated recursions for the expansion coefficients of the multivariate formal Taylor series. The expansion coefficients can be determined using either the package {\tt Sigma} in the case of linear difference equations or by applying heuristic methods in the case of partial linear difference equations. In the present context a new type of sums occurs, the Hurwitz harmonic sums, and generalized versions of them. The code {\tt HypSeries}, which transforms classes of differential equations into analytic series expansions, is described. Partial difference equations having rational solutions and rational function solutions of Pochhammer symbols are also considered, for which the code {\tt solvePartialLDE} is designed. Generalized hypergeometric functions, Appell-,~Kamp\'e de F\'eriet-, Horn-, Lauricella-Saran-, Srivastava-, and Exton-type functions are considered. We illustrate the algorithms by examples.
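The recursion-based viewpoint can be made concrete for the simplest case: the terms of a hypergeometric series satisfy a first-order difference equation in the summation index. A minimal Python sketch (the function name `hyp2f1` and the truncation are ours; this illustrates the series mechanism, not the {\tt HypSeries} code):

```python
def hyp2f1(a, b, c, x, terms=80):
    """Evaluate 2F1(a, b; c; x) by summing its Taylor series.

    Uses the term recursion t_{n+1} = t_n * (a+n)(b+n) / ((c+n)(n+1)) * x,
    the first-order difference equation satisfied by the expansion
    coefficients of the hypergeometric series.
    """
    total, t = 0.0, 1.0  # t is the current term, starting from t_0 = 1
    for n in range(terms):
        total += t
        t *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return total

# Classical closed form: 2F1(1, 1; 2; x) = -ln(1 - x) / x
print(hyp2f1(1, 1, 2, 0.5))  # ≈ 2 ln 2 ≈ 1.3862944
```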

Related content


A long line of research on fixed-parameter tractability of integer programming culminated in showing that integer programs with n variables and a constraint matrix with dual tree-depth d and largest entry D are solvable in time g(d,D)poly(n) for some function g. However, the dual tree-depth of a constraint matrix is not preserved by row operations, i.e., a given integer program can be equivalent to another whose constraint matrix has smaller dual tree-depth, and so the parameter does not reflect the geometric structure of the program. We prove that the minimum dual tree-depth over all row-equivalent matrices is equal to the branch-depth of the matroid defined by the columns of the matrix. We design a fixed-parameter algorithm for computing the branch-depth of matroids represented over a finite field and a fixed-parameter algorithm for computing a row-equivalent matrix with minimum dual tree-depth. Finally, we use these results to obtain an algorithm for integer programming running in time g(d*,D)poly(n), where d* is the branch-depth of the constraint matrix; the branch-depth cannot be replaced by the more permissive notion of branch-width.
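The definitions can be made concrete on tiny instances. The sketch below builds the dual graph of a constraint matrix (one vertex per row, rows adjacent when some column is nonzero in both) and computes tree-depth by the textbook recursion td(G) = 1 + min_v td(G - v) on connected graphs. This brute force is exponential and is only meant to illustrate the parameter; the fixed-parameter algorithms of the paper are far more sophisticated:

```python
def dual_graph(A):
    """Dual graph of a constraint matrix: one vertex per row; two rows
    are adjacent iff some column has nonzero entries in both rows."""
    m, n = len(A), len(A[0])
    adj = {i: set() for i in range(m)}
    for j in range(n):
        rows = [i for i in range(m) if A[i][j] != 0]
        for a in rows:
            for b in rows:
                if a != b:
                    adj[a].add(b)
    return adj

def components(vs, adj):
    """Connected components of the subgraph induced on vertex set vs."""
    seen, comps = set(), []
    for v in vs:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(w for w in adj[u] if w in vs)
        seen |= comp
        comps.append(frozenset(comp))
    return comps

def treedepth(vs, adj, memo=None):
    """Exact tree-depth: td(G) = 1 + min_v td(G - v) for connected G,
    and the maximum over components otherwise (memoized brute force)."""
    memo = {} if memo is None else memo
    vs = frozenset(vs)
    if vs in memo:
        return memo[vs]
    if len(vs) == 1:
        return 1
    comps = components(vs, adj)
    if len(comps) > 1:
        ans = max(treedepth(c, adj, memo) for c in comps)
    else:
        ans = 1 + min(treedepth(vs - {v}, adj, memo) for v in vs)
    memo[vs] = ans
    return ans

# A 4x3 matrix whose dual graph is the path 0-1-2-3; its tree-depth is 3.
A = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
```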

We consider symmetric positive definite preconditioners for multiple saddle-point systems of block tridiagonal form, which can be applied within the MINRES algorithm. We describe such a preconditioner for which the preconditioned matrix has only two distinct eigenvalues, 1 and -1, when the preconditioner is applied exactly. We discuss the relative merits of such an approach compared to a more widely studied block diagonal preconditioner, specify the computational work associated with applying the new preconditioner inexactly, and survey a number of theoretical results for the block diagonal case. Numerical results validate our theoretical findings.
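The contrast with the block diagonal approach can be checked numerically for the classical two-block saddle-point system. By a well-known result of Murphy, Golub and Wathen, preconditioning K = [[A, B^T], [B, 0]] with the ideal block diagonal P = diag(A, B A^{-1} B^T) yields exactly three eigenvalues, 1 and (1 ± sqrt(5))/2, rather than the two eigenvalues ±1 discussed above. A numpy sketch with random data (this is the standard block diagonal preconditioner, not the new one described in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4

# Random SPD block A and full-row-rank B
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((m, n))

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
S = B @ np.linalg.solve(A, B.T)  # Schur complement B A^{-1} B^T
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

# Eigenvalues of the preconditioned matrix cluster at exactly three values
eig = np.linalg.eigvals(np.linalg.solve(P, K))
print(np.sort(eig.real))  # clusters at (1-sqrt(5))/2, 1, (1+sqrt(5))/2
```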

Clustering of mixed-type datasets can be a particularly challenging task as it requires taking into account the associations between variables with different levels of measurement, i.e., nominal, ordinal and/or interval. In some cases, hierarchical clustering is considered a suitable approach, as it makes few assumptions about the data and its solution can be easily visualized. Since most hierarchical clustering approaches assume variables are measured on the same scale, a simple strategy for clustering mixed-type data is to homogenize the variables before clustering. This would mean either recoding the continuous variables as categorical ones or vice versa. However, typical discretization of continuous variables implies loss of information. In this work, an agglomerative hierarchical clustering approach for mixed-type data is proposed, which relies on a barycentric coding of continuous variables. The proposed approach minimizes information loss and is compatible with the framework of correspondence analysis. The utility of the method is demonstrated on real and simulated data.
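For comparison, a common homogenization-free baseline (not the barycentric coding proposed in the paper) is Gower's distance, which mixes range-normalized numeric differences with categorical mismatches and feeds directly into agglomerative clustering. A sketch on synthetic data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def gower(num, cat):
    """Gower distance for mixed data: range-normalized absolute difference
    for numeric columns, simple mismatch for categorical columns."""
    n = num.shape[0]
    ranges = num.max(axis=0) - num.min(axis=0)
    ranges[ranges == 0] = 1.0
    D = np.zeros((n, n))
    for i in range(n):
        d_num = np.abs(num - num[i]) / ranges      # (n, p_num)
        d_cat = (cat != cat[i]).astype(float)      # (n, p_cat)
        D[i] = np.hstack([d_num, d_cat]).mean(axis=1)
    return D

# Two well-separated mixed-type groups (synthetic toy data)
num = np.array([[0.1], [0.2], [0.15], [5.0], [5.1], [4.9]])
cat = np.array([['a'], ['a'], ['a'], ['b'], ['b'], ['b']])

D = gower(num, cat)
Z = linkage(squareform(D, checks=False), method='average')
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)  # the two groups receive distinct cluster labels
```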

We study the problem of computing the vitality with respect to max flow of edges and vertices in undirected planar graphs, where the vitality of an edge/vertex in a graph with respect to max flow between two fixed vertices $s,t$ is defined as the max flow decrease when the edge/vertex is removed from the graph. We show that the vitality of any $k$ selected edges can be computed in $O(kn + n\log\log n)$ worst-case time, and that a $\delta$ additive approximation of the vitality of all edges with capacity at most $c$ can be computed in $O(\frac{c}{\delta}n +n \log \log n)$ worst-case time, where $n$ is the size of the graph. Similar results are given for the vitality of vertices. All our algorithms work in $O(n)$ space.

We propose a method for quantifying uncertainty in high-dimensional PDE systems with random parameters, where the number of solution evaluations is small. Parametric PDE solutions are often approximated using a spectral decomposition based on polynomial chaos expansions. For the class of systems we consider (i.e., high dimensional with limited solution evaluations) the coefficients are given by an underdetermined linear system in a regression formulation. This implies additional assumptions, such as sparsity of the coefficient vector, are needed to approximate the solution. Here, we present an approach where we assume the coefficients are close to the range of a generative model that maps from a low to a high dimensional space of coefficients. Our approach is inspired by recent work examining how generative models can be used for compressed sensing in systems with random Gaussian measurement matrices. Using results from PDE theory on coefficient decay rates, we construct an explicit generative model that predicts the polynomial chaos coefficient magnitudes. The algorithm we developed to find the coefficients, which we call GenMod, is composed of two main steps. First, we predict the coefficient signs using Orthogonal Matching Pursuit. Then, we assume the coefficients are within a sparse deviation from the range of a sign-adjusted generative model. This allows us to find the coefficients by solving a nonconvex optimization problem over the input space of the generative model and the space of sparse vectors. We obtain theoretical recovery results for a Lipschitz continuous generative model and for a more specific generative model, based on coefficient decay rate bounds. We examine three high-dimensional problems and show that, for all three examples, the generative model approach outperforms sparsity promoting methods at small sample sizes.
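The sign-prediction step uses Orthogonal Matching Pursuit, a standard greedy solver for sparse regression: repeatedly pick the column most correlated with the residual, then re-fit by least squares on the selected support. A compact numpy version (generic OMP, not the GenMod pipeline):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit for y ≈ A x with k-sparse x."""
    m, n = A.shape
    support, r = [], y.copy()
    x = np.zeros(n)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))  # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef                    # re-fit on current support
        r = y - A @ x
    return x

# With an orthonormal A, OMP recovers a k-sparse vector exactly.
A = np.eye(6)
x_true = np.array([0.0, 2.0, 0.0, -1.0, 0.0, 0.0])
x_hat = omp(A, A @ x_true, k=2)
print(x_hat)  # → [ 0.  2.  0. -1.  0.  0.]
```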

In this paper, a high-order moment-based multi-resolution Hermite weighted essentially non-oscillatory (HWENO) scheme is designed for hyperbolic conservation laws. The main idea of this scheme is derived from our previous work [J. Comput. Phys., 446 (2021) 110653], in which the integral averages of the function and its first order derivative are used to reconstruct both the function and its first order derivative values at the boundaries. However, in this paper, only the function values at the Gauss-Lobatto points in the one- and two-dimensional cases need to be reconstructed by using the information of the zeroth and first order moments. In addition, an extra modification procedure is used to modify those first order moments in the troubled cells, which leads to an improvement of stability and an enhancement of resolution near discontinuities. To obtain the same order of accuracy, the size of the stencil required by this moment-based multi-resolution HWENO scheme is the same as for the general HWENO scheme and is more compact than for the general WENO scheme. Moreover, the linear weights can be any positive numbers as long as their sum equals one, and the CFL number can still be 0.6 in both the one- and two-dimensional cases. Extensive numerical examples are given to demonstrate the stability and resolution of the moment-based multi-resolution HWENO scheme.
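The HWENO reconstruction itself is involved, so as context only, here is the simplest conservative finite-volume scheme for a 1D hyperbolic conservation law: a first-order Lax-Friedrichs flux for linear advection, run at the CFL number 0.6 mentioned above. A high-order HWENO scheme replaces the piecewise-constant reconstruction while keeping the same flux-differencing form:

```python
import numpy as np

def lax_friedrichs_advection(u0, n_steps, cfl=0.6, a=1.0, dx=1.0):
    """First-order conservative update for u_t + a u_x = 0, periodic BCs.

    Numerical flux: F_{i+1/2} = 0.5*a*(u_i + u_{i+1}) - 0.5*|a|*(u_{i+1} - u_i).
    The update u_i -= (dt/dx)*(F_{i+1/2} - F_{i-1/2}) is conservative:
    fluxes telescope, so sum(u)*dx is constant in time.
    """
    u = u0.astype(float).copy()
    dt = cfl * dx / abs(a)
    for _ in range(n_steps):
        up = np.roll(u, -1)                               # u_{i+1}
        F = 0.5 * a * (u + up) - 0.5 * abs(a) * (up - u)  # F_{i+1/2}
        u = u - dt / dx * (F - np.roll(F, 1))             # roll gives F_{i-1/2}
    return u

u0 = np.where(np.arange(50) < 25, 1.0, 0.0)  # step profile
u = lax_friedrichs_advection(u0, 40)
```

This monotone baseline smears the discontinuity heavily; that diffusion is what the high-order non-oscillatory reconstruction is designed to remove.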

Determining the matrix multiplication exponent $\omega$ is one of the greatest open problems in theoretical computer science. We show that it is impossible to prove $\omega = 2$ by starting with structure tensors of modules of fixed degree and using arbitrary restrictions. This implies that the same is impossible when starting with $1_A$-generic non-diagonal tensors of fixed size with minimal border rank. This generalizes the work of Bl\"aser and Lysikov [3]. Our methods come from both commutative algebra and complexity theory.

Learning mappings between two function spaces has attracted considerable research attention. However, learning the solution operator of partial differential equations (PDEs) remains a challenge in scientific computing. Therefore, in this study, we propose a novel pseudo-differential integral operator (PDIO) inspired by a pseudo-differential operator, which is a generalization of a differential operator and characterized by a certain symbol. We parameterize the symbol by using a neural network and show that the neural-network-based symbol is contained in a smooth symbol class. Subsequently, we prove that the PDIO is a bounded linear operator, and thus is continuous in the Sobolev space. We combine the PDIO with the neural operator to develop a pseudo-differential neural operator (PDNO) to learn the nonlinear solution operator of PDEs. We experimentally validate the effectiveness of the proposed model by using Burgers' equation, Darcy flow, and the Navier-Stokes equation. The results reveal that the proposed PDNO outperforms the existing neural operator approaches in most experiments.
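A pseudo-differential operator with symbol a(ξ) acts as u ↦ IFFT(a(ξ) · FFT(u)). The PDNO's novelty is parameterizing a with a neural network, but the mechanism can be shown with a fixed symbol: choosing a(ξ) = (iξ)² = -ξ² makes the operator exactly the second derivative on a periodic grid (numpy sketch, no learned component):

```python
import numpy as np

def apply_symbol(u, symbol, dx):
    """Apply the Fourier multiplier u -> IFFT(symbol(xi) * FFT(u))."""
    xi = 2 * np.pi * np.fft.fftfreq(len(u), d=dx)  # angular frequencies
    return np.fft.ifft(symbol(xi) * np.fft.fft(u)).real

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x)

# symbol of d^2/dx^2 is (i*xi)^2 = -xi^2
d2u = apply_symbol(u, lambda xi: -xi ** 2, dx=x[1] - x[0])
# d2u ≈ -sin(x) to spectral accuracy
```

In the PDNO the fixed lambda above is replaced by a neural network evaluated on the frequencies, which is what makes the operator learnable.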

The Normalized Cut (NCut) objective function, widely used in data clustering and image segmentation, quantifies the cost of graph partitioning in a way that assigns lower values to balanced clusters or segments than to unbalanced partitionings. However, this bias is so strong that NCut avoids any singleton partitions, even when vertices are only very weakly connected to the rest of the graph. Motivated by the B\"uhler-Hein family of balanced cut costs, we propose the family of Compassionately Conservative Balanced (CCB) Cut costs, which are indexed by a parameter that can be used to strike a compromise between the desire to avoid too many singleton partitions and the notion that all partitions should be balanced. We show that CCB-Cut minimization can be relaxed into an orthogonally constrained $\ell_{\tau}$-minimization problem that coincides with the problem of computing Piecewise Flat Embeddings (PFE) for one particular index value, and we present an algorithm for solving the relaxed problem by iteratively minimizing a sequence of reweighted Rayleigh quotients (IRRQ). Using images from the BSDS500 database, we show that image segmentation based on CCB-Cut minimization provides better accuracy with respect to ground truth and greater variability in region size than NCut-based image segmentation.
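For the NCut baseline, the standard spectral relaxation thresholds the Fiedler vector of the normalized Laplacian; the CCB relaxation replaces this eigenproblem with the orthogonally constrained $\ell_\tau$-minimization described above. A numpy sketch of the baseline on a toy graph (weights are made up):

```python
import numpy as np

def ncut_bipartition(W):
    """Two-way spectral partition: sign of the Fiedler vector of the
    symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}."""
    d = W.sum(axis=1)
    Dinv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - Dinv_sqrt @ W @ Dinv_sqrt
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]  # eigenvector of the 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)

# Two triangles joined by one weak edge (2-3): the weak edge is cut
W = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[u, v] = W[v, u] = 1.0
W[2, 3] = W[3, 2] = 0.1

labels = ncut_bipartition(W)
print(labels)
```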

High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristic of hyperspectral data. In addition, a large amount of unlabeled data remains an unexploited gold mine for efficient data use. Therefore, we propose an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we use a spectral-spatial generator and a discriminator to identify land cover categories of hyperspectral cubes. Moreover, to take advantage of the large amount of unlabeled data, we adopt a conditional random field to refine the preliminary classification results generated by the GANs. Experimental results obtained using two commonly studied datasets demonstrate that the proposed framework achieves encouraging classification accuracy using a small amount of training data.
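The refinement step couples unary classification scores with pairwise spatial potentials; as a very rough stand-in for a real CRF, a majority filter over each pixel's 3x3 neighborhood already shows how spatial smoothing cleans up isolated misclassifications in a preliminary label map (this sketch is our simplification, not the paper's CRF):

```python
import numpy as np

def majority_filter(labels):
    """Replace each pixel's label with the most frequent label in its
    3x3 neighborhood (clipped at image borders) -- a crude spatial
    smoothing of a preliminary classification map."""
    h, w = labels.shape
    out = np.empty_like(labels)
    for r in range(h):
        for c in range(w):
            patch = labels[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            out[r, c] = np.bincount(patch.ravel()).argmax()
    return out

# Preliminary map: two clean regions plus one isolated misclassification
pred = np.zeros((6, 6), dtype=int)
pred[:, 3:] = 1
pred[2, 1] = 1  # isolated noisy pixel inside the 0-region
refined = majority_filter(pred)  # the noisy pixel is corrected
```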
