Feynman integrals are solutions to linear partial differential equations with polynomial coefficients. Using a triangle integral with general exponents as a case in point, we compare $D$-module methods to dedicated methods developed for solving differential equations appearing in the context of Feynman integrals, and provide a dictionary of the relevant concepts. In particular, we implement an algorithm due to Saito, Sturmfels, and Takayama to derive canonical series solutions of regular holonomic $D$-ideals, and compare them to asymptotic series derived from the respective Fuchsian systems.
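
For orientation, a standard one-variable instance of these objects (a textbook example, not taken from the paper) is the Gauss hypergeometric operator: it generates a regular holonomic $D$-ideal, and its canonical (Frobenius) series solutions at $z=0$, for $c\notin\mathbb{Z}$, are
\[
L = z(1-z)\,\partial_z^{2}+\bigl(c-(a+b+1)z\bigr)\,\partial_z-ab,
\qquad
y_1={}_2F_1(a,b;c;z),
\qquad
y_2=z^{1-c}\,{}_2F_1(a-c+1,\,b-c+1;\,2-c;\,z).
\]
Canonical series solutions of general regular holonomic $D$-ideals have the same shape: a monomial in the exponents (possibly multiplied by logarithms) times a multivariate power series.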

In this paper, the problem of correcting a single error in the $q$-ary symmetric channel with noiseless feedback is considered. We propose an algorithm to construct codes with feedback inductively. For every prime power $q$ we prove that two instances of feedback are sufficient to transmit over the $q$-ary symmetric channel the same number of messages as in the case of complete feedback. Our other contribution is the construction of codes with one-time feedback having the same parameters as Hamming codes for $q$ that is not a prime power. We also construct single-error-correcting codes with one-time feedback of size $q^{n-2}$ for arbitrary $q$ and $n\leq q+1$, which can be seen as an analog of Reed-Solomon codes.

In this work, we systematically investigate linear multi-step methods for differential equations with memory. In particular, we focus on the numerical stability of multi-step methods. Based on this investigation, we give sufficient conditions for the stability and convergence of several common multi-step methods, and accordingly introduce a notion of A-stability for differential equations with memory. Finally, we demonstrate the computational performance of our theory through numerical examples.
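
As a hedged illustration of the class of schemes discussed here (the concrete equation, scheme, and quadrature below are assumptions for the example, not the paper's), a minimal Python sketch of a two-step Adams-Bashforth method for a scalar equation with a convolution memory term might look as follows:

```python
# Illustrative sketch (assumptions, not the paper's scheme): a two-step
# Adams-Bashforth method for  y'(t) = f(t, y) + \int_0^t K(t-s) y(s) ds,
# with the memory integral approximated by the composite trapezoidal rule.
import numpy as np

def ab2_with_memory(f, K, y0, T, N):
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    y = np.zeros(N + 1)
    y[0] = y0

    def memory(n):
        # Trapezoidal approximation of \int_0^{t_n} K(t_n - s) y(s) ds.
        if n == 0:
            return 0.0
        w = np.full(n + 1, h)
        w[0] = w[-1] = h / 2
        return np.dot(w, K(t[n] - t[:n + 1]) * y[:n + 1])

    # Bootstrap the first step with forward Euler.
    g_prev = f(t[0], y[0]) + memory(0)
    y[1] = y[0] + h * g_prev
    for n in range(1, N):
        g_curr = f(t[n], y[n]) + memory(n)
        y[n + 1] = y[n] + h * (1.5 * g_curr - 0.5 * g_prev)  # AB2 update
        g_prev = g_curr
    return t, y

# Example: y' = -y - \int_0^t e^{-(t-s)} y(s) ds,  y(0) = 1.
t, y = ab2_with_memory(lambda t, y: -y, lambda r: -np.exp(-r), 1.0, 5.0, 500)
```

The memory integral is recomputed from scratch at each step for clarity, which costs $O(N^2)$ overall.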

Accurate evaluation of nearly singular integrals plays an important role in many numerical methods based on boundary integral equations. In this paper, we propose a variant of the singularity swapping method to accurately evaluate layer potentials for arbitrarily close targets. Our method is based on the global trapezoidal rule and trigonometric interpolation, resulting in an explicit quadrature formula. The method achieves spectral accuracy for nearly singular integrals on closed analytic curves. In order to extract the singularity from the complexified distance function, an efficient root-finding method based on contour integration is proposed. Through a change of variables, we also extend the quadrature method to integrals on piecewise analytic curves. Numerical examples for Laplace's and Helmholtz equations show that high-order accuracy can be achieved for arbitrarily close field evaluation.
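
For context, the baseline that such nearly singular quadratures improve upon is the plain periodic trapezoidal rule, which is spectrally accurate only when the target is well separated from the curve. The sketch below (an assumed setup, not the paper's method) evaluates a Laplace single-layer potential on the unit circle in this way:

```python
# Baseline illustration (assumed setup): global trapezoidal rule for a Laplace
# single-layer potential  u(x) = -(1/2pi) \int_Gamma log|x - y| sigma(y) ds_y
# on a closed analytic curve; spectrally accurate for well-separated targets.
import numpy as np

def single_layer_trapezoid(curve, dcurve, density, target, n=256):
    theta = 2 * np.pi * np.arange(n) / n           # equispaced nodes on [0, 2pi)
    y = curve(theta)                               # curve points, shape (2, n)
    speed = np.linalg.norm(dcurve(theta), axis=0)  # |y'(theta)|
    r = np.linalg.norm(target[:, None] - y, axis=0)
    kernel = -np.log(r) / (2 * np.pi)
    return (2 * np.pi / n) * np.sum(kernel * density(theta) * speed)

# Example: unit circle, smooth density, target well away from the curve.
curve = lambda t: np.array([np.cos(t), np.sin(t)])
dcurve = lambda t: np.array([-np.sin(t), np.cos(t)])
density = lambda t: 1.0 + 0.3 * np.cos(3 * t)
u = single_layer_trapezoid(curve, dcurve, density, np.array([1.5, 0.0]))
```

As the target approaches the curve the integrand becomes nearly singular and this rule loses accuracy, which is the regime addressed by the singularity swapping quadrature.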

We provide a new approach for establishing hardness of approximation results, based on the theory recently introduced by the author. It allows one to directly show that approximating a problem beyond a certain threshold requires super-polynomial time. To exhibit the framework, we revisit two famous problems in this paper. The particular results we prove are: MAX-3-SAT$(1,\frac{7}{8}+\epsilon)$ requires exponential time for any constant $\epsilon$ satisfying $\frac{1}{8} \geq \epsilon > 0$. In particular, the gap exponential time hypothesis (Gap-ETH) holds. MAX-3-LIN-2$(1-\epsilon, \frac{1}{2}+\epsilon)$ requires exponential time for any constant $\epsilon$ satisfying $\frac{1}{4} \geq \epsilon > 0$.

A random algebraic graph is defined by a group $G$ with a uniform distribution over it and a connection $\sigma:G\longrightarrow[0,1]$ with expectation $p,$ satisfying $\sigma(g)=\sigma(g^{-1}).$ The random graph $\mathsf{RAG}(n,G,p,\sigma)$ with vertex set $[n]$ is formed as follows. First, $n$ independent vectors $x_1,\ldots,x_n$ are sampled uniformly from $G.$ Then, vertices $i,j$ are connected with probability $\sigma(x_ix_j^{-1}).$ This model captures random geometric graphs over the sphere and the hypercube, certain regimes of the stochastic block model, and random subgraphs of Cayley graphs. The main question of interest to the current paper is: when is a random algebraic graph statistically and/or computationally distinguishable from $\mathsf{G}(n,p)$? Our results fall into two categories. 1) Geometric. We focus on the case $G =\{\pm1\}^d$ and use Fourier-analytic tools. For hard threshold connections, we match [LMSY22b] for $p = \omega(1/n)$ and for $1/(r\sqrt{d})$-Lipschitz connections we extend the results of [LR21b] when $d = \Omega(n\log n)$ to the non-monotone setting. We study other connections such as indicators of interval unions and low-degree polynomials. 2) Algebraic. We provide evidence for an exponential statistical-computational gap. Consider any finite group $G$ and let $A\subseteq G$ be a set of elements formed by including each set of the form $\{g, g^{-1}\}$ independently with probability $1/2.$ Let $\Gamma_n(G,A)$ be the distribution of random graphs formed by taking a uniformly random induced subgraph of size $n$ of the Cayley graph $\Gamma(G,A).$ Then, $\Gamma_n(G,A)$ and $\mathsf{G}(n,1/2)$ are statistically indistinguishable with high probability over $A$ if and only if $\log|G|\gtrsim n.$ However, low-degree polynomial tests fail to distinguish $\Gamma_n(G,A)$ and $\mathsf{G}(n,1/2)$ with high probability over $A$ when $\log |G|=\log^{\Omega(1)}n.$
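
The sampling procedure itself is easy to make concrete. The sketch below draws from $\mathsf{RAG}(n,G,p,\sigma)$ for $G=\{\pm1\}^d$ with a hard threshold connection; the particular threshold $\tau$ is an assumption chosen only for illustration:

```python
# Minimal sketch of the sampling procedure for G = {+-1}^d with a hard
# threshold connection sigma(g) = 1{ sum_k g_k >= tau * d }; the threshold
# tau is an assumption chosen for illustration.
import numpy as np

def sample_rag_hypercube(n, d, tau, seed=None):
    rng = np.random.default_rng(seed)
    x = rng.choice([-1, 1], size=(n, d))      # latent vectors x_1, ..., x_n
    inner = x @ x.T                           # <x_i, x_j>
    adj = (inner >= tau * d).astype(int)      # connect iff sigma(x_i x_j^{-1}) = 1
    np.fill_diagonal(adj, 0)
    return adj

A = sample_rag_hypercube(n=200, d=400, tau=0.1, seed=0)
```

In this group every element is its own inverse, so $x_ix_j^{-1}$ is the coordinatewise product, and a threshold connection depends on it only through the inner product $\langle x_i,x_j\rangle$; since $\sigma$ takes values in $\{0,1\}$ here, "connected with probability $\sigma$" reduces to an indicator.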

Let a polytope $P$ be defined by a system $A x \leq b$. We consider the problem of counting the number of integer points inside $P$, assuming that $P$ is $\Delta$-modular; the polytope $P$ is called $\Delta$-modular if all rank-order sub-determinants of $A$ are bounded by $\Delta$ in absolute value. We present a new FPT-algorithm, parameterized by $\Delta$ and by the maximal number of vertices of $P$, where the maximum is taken over all right-hand-side vectors $b$. We show that our algorithm is more efficient for $\Delta$-modular problems than the approach of A. Barvinok et al. Unlike that approach, we do not directly compute the short rational generating function for $P \cap \mathbb{Z}^n$, which is commonly used for the considered problem. Instead, we use the dynamic programming principle to compute a particular representation of it in the form of exponential series depending on a single variable. We do not rely on Barvinok's unimodular sign decomposition technique at all. Using our new complexity bound, we consider different special cases that may be of independent interest. For example, we give FPT-algorithms for counting the number of integer points in $\Delta$-modular simplices and similar polytopes that have $n + O(1)$ facets. As a special case, for any fixed $m$, we give an FPT-algorithm to count solutions of the unbounded $m$-dimensional $\Delta$-modular subset-sum problem.
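
As a small worked illustration of $\Delta$-modularity (our own example, not from the paper): the matrix
\[
A=\begin{pmatrix}1 & 1\\ 1 & -1\\ 0 & 1\end{pmatrix}
\]
has rank $2$, and its $2\times 2$ sub-determinants are $-2$, $1$, and $1$, so every polytope $\{x : Ax\le b\}$ defined by this $A$ is $\Delta$-modular with $\Delta=2$.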

Profile likelihoods are rarely used in geostatistical models due to the computational burden imposed by repeated decompositions of large variance matrices. Accounting for uncertainty in covariance parameters can be highly consequential in geostatistical models, as some covariance parameters are poorly identified; the problem is severe enough that the differentiability parameter of the Mat\'ern correlation function is typically treated as fixed. The problem is compounded in anisotropic spatial models, as there are two additional parameters to consider. In this paper, we make the following contributions: (1) we develop a methodology for profile likelihoods in Gaussian spatial models with the Mat\'ern family of correlation functions, including anisotropic models; the methodology adopts a novel reparametrization for generating representative points, and its software implementation uses GPUs to compute profile likelihoods in parallel; (2) we show that the profile likelihood of the Mat\'ern shape parameter is often quite flat but still identifiable, and can usually rule out very small values; (3) simulation studies and applications to real data show that profile-based confidence intervals for covariance and regression parameters have superior coverage to traditional Wald-type confidence intervals.
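
A hedged sketch of the basic profiling computation follows (our own illustration; the parametrization of the Mat\'ern correlation, the fixed range, and the grid of shape values are assumptions, and the paper's reparametrization and GPU parallelism are not reproduced):

```python
# Sketch: profile log-likelihood of the Matern shape (smoothness) parameter nu
# for a Gaussian spatial model y ~ N(X beta, sigma^2 R(nu, ell)), with beta
# and sigma^2 maximized out in closed form at each nu.
import numpy as np
from scipy.special import gamma, kv
from scipy.spatial.distance import cdist

def matern_corr(D, nu, ell):
    """Matern correlation matrix for a distance matrix D."""
    R = np.ones_like(D)
    scaled = np.sqrt(2 * nu) * D[D > 0] / ell
    R[D > 0] = (2 ** (1 - nu) / gamma(nu)) * scaled ** nu * kv(nu, scaled)
    return R

def profile_loglik_nu(y, X, coords, nu, ell):
    """Log-likelihood at (nu, ell) with beta and sigma^2 profiled out."""
    n = len(y)
    R = matern_corr(cdist(coords, coords), nu, ell)
    L = np.linalg.cholesky(R + 1e-8 * np.eye(n))    # small jitter for stability
    Xw = np.linalg.solve(L, X)                      # whitened design
    yw = np.linalg.solve(L, y)                      # whitened response
    beta = np.linalg.lstsq(Xw, yw, rcond=None)[0]   # GLS estimate of beta
    resid = yw - Xw @ beta
    sigma2 = resid @ resid / n                      # profiled variance
    logdet = 2 * np.sum(np.log(np.diag(L)))
    return -0.5 * (n * np.log(2 * np.pi * sigma2) + logdet + n)

# Profile over nu on a grid (ell fixed here only to keep the sketch short;
# a full profile would also maximize over the remaining parameters at each nu).
rng = np.random.default_rng(0)
coords = rng.uniform(size=(100, 2))
X = np.column_stack([np.ones(100), coords[:, 0]])
y = rng.normal(size=100)
nus = np.linspace(0.2, 2.5, 15)
prof = [profile_loglik_nu(y, X, coords, nu, ell=0.3) for nu in nus]
```

For each value of the shape parameter, the regression coefficients and the variance are maximized out in closed form, so only the correlation matrix factorization has to be repeated across the grid.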

Boolean MaxSAT, as well as generalized formulations such as Min-MaxSAT and Max-hybrid-SAT, are fundamental optimization problems in Boolean reasoning. Existing methods for MaxSAT have been successful in solving benchmarks in CNF format. They lack, however, the ability to handle 1) (non-CNF) hybrid constraints, such as XORs, and 2) generalized MaxSAT problems, natively. To address this issue, we propose a novel dynamic-programming approach for solving generalized MaxSAT problems with hybrid constraints -- called \emph{Dynamic-Programming-MaxSAT} or DPMS for short -- based on Algebraic Decision Diagrams (ADDs). With the power of ADDs and the (graded) project-join-tree builder, our versatile framework admits many generalizations of CNF-MaxSAT, such as MaxSAT, Min-MaxSAT, and MinSAT with hybrid constraints. Moreover, DPMS scales provably well on instances with low width. Empirical results indicate that DPMS is able to quickly solve certain problems on which other algorithms based on various techniques all fail. Hence, DPMS is a promising framework that opens a new line of research and invites further investigation.

Time-continuous Volterra equations valued in $\mathbb{R}$ with completely positive kernels have two basic monotonicity properties. The first is that any two solution curves do not intersect for suitably given signals. The second is that solutions to the autonomous equations are monotone. Due to the fading memory principle, we also desire the kernels to be nonincreasing. In this work, through a generalization of the convolution to nonuniform meshes, we introduce the concept of ``right complementary monotone'' (R-CMM) kernels at the discrete level for nonuniform meshes, which inherits both the nonincreasing property and the complete positivity from the continuous level. We prove that the discrete solutions preserve these two monotonicity properties if the discretized kernel satisfies the R-CMM property. Technically, we rely heavily on resolvent kernels to achieve this.

We propose an easy-to-implement iterative method for solving the implicit (or semi-implicit) schemes arising in reaction-diffusion (RD) type equations. We formulate the nonlinear time-implicit scheme as a min-max saddle point problem and then apply the primal-dual hybrid gradient (PDHG) method. Suitable preconditioning matrices are applied to the PDHG method to accelerate the convergence of the algorithm under different circumstances. Furthermore, our method is applicable to various discrete numerical schemes with high flexibility. In the numerical examples tested in this paper, the proposed method converges reliably and efficiently produces numerical solutions with sufficient accuracy.
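
To make the primal-dual iteration concrete, here is a minimal sketch under simplifying assumptions: a single backward-Euler step of a 1D linear diffusion equation written as a saddle point problem and solved with plain (unpreconditioned) PDHG. The paper targets nonlinear reaction-diffusion schemes and adds preconditioning; none of that is reproduced here.

```python
# Illustrative sketch under simplifying assumptions: one backward-Euler step,
#   u^{n+1} = argmin_u  ||u - u^n||^2/(2*dt) + (kappa/2)||D u||^2,
# recast as  min_u max_p  <D u, p> - ||p||^2/(2*kappa) + ||u - u^n||^2/(2*dt)
# and solved with the basic PDHG iteration.
import numpy as np

def implicit_diffusion_step_pdhg(u_n, dt, kappa, h, iters=500):
    m = len(u_n)
    # Forward-difference operator; one-sided last row (boundary kept crude).
    D = (np.eye(m, k=1) - np.eye(m)) / h
    L = np.linalg.norm(D, 2)                 # operator norm ||D||
    tau = sigma = 0.9 / L                    # step sizes with tau*sigma*||D||^2 < 1
    u, u_bar, p = u_n.copy(), u_n.copy(), np.zeros(m)
    for _ in range(iters):
        # Dual step: prox of f*(p) = ||p||^2 / (2*kappa).
        p = (p + sigma * D @ u_bar) / (1.0 + sigma / kappa)
        # Primal step: prox of g(u) = ||u - u_n||^2 / (2*dt).
        u_new = (u - tau * D.T @ p + (tau / dt) * u_n) / (1.0 + tau / dt)
        u_bar = 2 * u_new - u                # extrapolation
        u = u_new
    return u

# One implicit step for a smooth initial profile.
x = np.linspace(0, 1, 101)
u_next = implicit_diffusion_step_pdhg(np.sin(np.pi * x), dt=1e-2, kappa=1.0, h=x[1] - x[0])
```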
