
We consider injective first-order interpretations that input and output trees of bounded height. The corresponding functions have polynomial output size, since a first-order interpretation can use a k-tuple of input nodes to represent a single output node. We prove that the equivalence problem for such functions is decidable, i.e. given two such interpretations, one can decide whether, for every input tree, the two output trees are isomorphic. We also give a calculus of typed functions and combinators which derives exactly injective first-order interpretations for unordered trees of bounded height. The calculus is based on a type system, where the type constructors are products, coproducts and a monad of multisets. Thanks to our results about tree-to-tree interpretations, the equivalence problem is decidable for this calculus. As an application, we show that the equivalence problem is decidable for first-order interpretations between classes of graphs that have bounded tree-depth. In all cases studied in this paper, first-order logic and MSO have the same expressive power, and hence all results apply also to MSO interpretations.
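To make the multiset monad in the type system concrete, here is a minimal Python sketch of its unit and bind operations, with multisets represented as collections.Counter; the names and encoding are illustrative only and do not reproduce the paper's combinator calculus.

```python
from collections import Counter

# Multisets as Counters: unit wraps a value in a singleton multiset;
# bind maps each element through f and sums the resulting multisets,
# weighted by multiplicities. These are the monad operations.
def unit(x):
    return Counter([x])

def bind(m, f):
    out = Counter()
    for x, n in m.items():
        for y, k in f(x).items():
            out[y] += n * k
    return out

# Example: bind(Counter({1: 2, 2: 1}), lambda x: Counter([x, x + 1]))
# yields Counter({2: 3, 1: 2, 3: 1}).
```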

Related Content

The Gerber-Shiu function is a classical research topic in actuarial science. However, exact solutions are only available in the literature for very specific cases where the claim amounts follow distributions such as the exponential distribution. This presents a longstanding challenge, particularly from a computational perspective. For the classical risk process in continuous time, the Gerber-Shiu discounted penalty function satisfies a class of Volterra integral equations. In this paper, we use the collocation method to compute the Gerber-Shiu function for a risk model with interest. Our methodology demonstrates that the function can be computed by solving a linear algebraic system, which is straightforward to implement. One major advantage of our approach is that it does not require any specific distributional assumptions on the claim amounts, beyond mild differentiability and continuity conditions that can be easily verified. We also examine the convergence orders of the collocation method. Finally, we present several numerical examples to illustrate the desirable performance of our proposed method.
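To illustrate why such Volterra equations reduce to a linear algebraic system, the sketch below solves a generic second-kind Volterra equation $u(t) = g(t) + \int_0^t K(t,s)\,u(s)\,ds$ with a product trapezoidal rule, one of the simplest collocation-type schemes; the forcing g and kernel K of the actual Gerber-Shiu equation (and the interest term) are not reproduced here.

```python
import numpy as np

def volterra_second_kind(g, K, T, n):
    """Solve u(t) = g(t) + int_0^t K(t, s) u(s) ds on [0, T] with the
    product trapezoidal rule; the discrete system is lower triangular,
    so it is solved by forward substitution."""
    t = np.linspace(0.0, T, n + 1)
    h = T / n
    u = np.empty(n + 1)
    u[0] = g(t[0])                      # the integral vanishes at t = 0
    for i in range(1, n + 1):
        acc = 0.5 * K(t[i], t[0]) * u[0]
        acc += sum(K(t[i], t[j]) * u[j] for j in range(1, i))
        # move the implicit K(t_i, t_i) u_i term to the left-hand side
        u[i] = (g(t[i]) + h * acc) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return t, u
```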

This work develops two fast randomized algorithms for computing the generalized tensor singular value decomposition (GTSVD) based on the tubal product (t-product). The random projection method is utilized to capture the important actions of the underlying data tensors and use them to obtain small sketches of the original data tensors, which are easier to handle. Due to the small size of the sketch tensors, deterministic approaches are applied to compute their GTSVDs. Then, from the GTSVDs of the small sketch tensors, the GTSVD of the original large-scale data tensors is recovered. Some experiments are conducted to show the effectiveness of the proposed approach.
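A minimal numpy sketch of the two core building blocks, the t-product and a randomized range finder under it, is given below; the function names and the oversampling parameter p are illustrative choices, and the final GTSVD recovery step is omitted.

```python
import numpy as np

def t_product(A, B):
    """Tubal product of third-order tensors: FFT along the tube axis,
    facewise matrix products, inverse FFT."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
    return np.fft.ifft(Ch, axis=2).real

def randomized_range(A, r, p=5):
    """Randomized range finder under the t-product: sketch A with a
    Gaussian test tensor, then orthonormalize each frontal slice of
    the sketch in the Fourier domain."""
    n1, n2, n3 = A.shape
    G = np.random.randn(n2, r + p, n3)
    Yh = np.fft.fft(t_product(A, G), axis=2)
    Qh = np.empty_like(Yh)
    for k in range(n3):
        Qh[:, :, k], _ = np.linalg.qr(Yh[:, :, k])
    return np.fft.ifft(Qh, axis=2).real
```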

Exact travelling wave solutions to the two-dimensional stochastic Allen-Cahn equation with multiplicative noise are obtained through the hyperbolic tangent (tanh) method. This technique restricts the solutions to travelling wave profiles by representing them with a finite tanh power series. This study focuses on how multiplicative noise affects the dynamics of these travelling waves, in particular the occurrence of wave propagation failure at high noise levels.
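As a sanity check of the tanh ansatz in the deterministic limit: the kink profile $u(\xi)=\tanh(\xi/\sqrt{2})$ satisfies $u'' + u - u^3 = 0$, the travelling-wave reduction of the noiseless Allen-Cahn equation. The sympy snippet below verifies this symbolically; the paper's stochastic analysis with multiplicative noise goes beyond this illustration.

```python
import sympy as sp

xi = sp.symbols('xi')
u = sp.tanh(xi / sp.sqrt(2))            # finite tanh series with one term
residual = sp.diff(u, xi, 2) + u - u**3  # travelling-wave ODE residual
print(sp.simplify(residual))             # prints 0
```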

We propose a method for computing the Lyapunov exponents of renewal equations (delay equations of Volterra type) and of coupled systems of renewal and delay differential equations. The method consists in the reformulation of the delay equation as an abstract differential equation, the reduction of the latter to a system of ordinary differential equations via pseudospectral collocation, and the application of the standard discrete QR method. The effectiveness of the method is shown experimentally and a MATLAB implementation is provided.
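The last step of this pipeline, the discrete QR method, is standard for ODE systems; a schematic numpy/scipy sketch is below. The pseudospectral reduction of the renewal equation to ODEs, and the paper's MATLAB implementation, are not reproduced; A(t) here stands for whatever linearized right-hand side the reduction produces.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lyapunov_qr(A, dim, T, steps):
    """Discrete QR method: propagate orthonormal directions through the
    variational equation Y' = A(t) Y, re-orthonormalize with QR after
    each window, and average the logs of diag(R)."""
    Q = np.eye(dim)
    logs = np.zeros(dim)
    dt = T / steps
    for k in range(steps):
        rhs = lambda t, y: (A(t) @ y.reshape(dim, dim)).ravel()
        sol = solve_ivp(rhs, (k * dt, (k + 1) * dt), Q.ravel(), rtol=1e-9)
        Q, R = np.linalg.qr(sol.y[:, -1].reshape(dim, dim))
        s = np.where(np.diag(R) < 0, -1.0, 1.0)  # keep diag(R) positive
        Q, R = Q * s, (R.T * s).T
        logs += np.log(np.diag(R))
    return logs / T  # approximate Lyapunov exponents

# e.g. lyapunov_qr(lambda t: np.array([[0., 1.], [-1., -0.1]]), 2, 200., 2000)
# approaches (-0.05, -0.05) for this damped oscillator.
```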

This paper proposes a new Helmholtz decomposition based windowed Green function (HD-WGF) method for solving time-harmonic elastic scattering problems on a half-space with Dirichlet boundary conditions in both 2D and 3D. The Helmholtz decomposition is applied to separate the pressure and shear waves, which satisfy the Helmholtz and Helmholtz/Maxwell equations, respectively, and the corresponding boundary integral equations of type $(\mathbb{I}+\mathbb{T})\boldsymbol{\phi}=\boldsymbol{f}$, which couple these two waves on the unbounded surface, are derived based on the free-space fundamental solution of the Helmholtz equation. This approach avoids the treatment of the complex elastic displacement tensor and traction operator involved in the classical integral equation method for elastic problems. A smooth "slow-rise" windowing function is then introduced to truncate the boundary integral equations, and a "correction" strategy is proposed to ensure uniformly fast convergence for all angles of incidence of the incident plane wave. Numerical experiments for both two- and three-dimensional problems are presented to demonstrate the accuracy and efficiency of the proposed method.
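For concreteness, one common choice of smooth "slow-rise" window in the windowed Green function literature equals 1 near the origin and decays to 0 with all derivatives vanishing at the cutoff; a sketch is below, where the rise interval parameters a < b are assumptions of this illustration rather than values from the paper.

```python
import numpy as np

def slow_rise_window(x, a, b):
    """C-infinity window: w = 1 for |x| <= a, w = 0 for |x| >= b,
    with a smooth transition on a < |x| < b (requires 0 < a < b)."""
    x = np.asarray(x, dtype=float)
    t = (np.abs(x) - a) / (b - a)        # normalized position in the rise
    w = np.ones_like(x)
    mid = (t > 0) & (t < 1)
    u = t[mid]
    w[mid] = np.exp(2.0 * np.exp(-1.0 / u) / (u - 1.0))
    w[t >= 1.0] = 0.0
    return w
```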

A posteriori reduced-order models, e.g. proper orthogonal decomposition, are essential to affordably tackle realistic parametric problems. They rely on a reliable training set, that is, a family of full-order solutions (snapshots) representative of all possible outcomes of the parametric problem. Having such a rich collection of snapshots is, in many cases, not computationally viable. A strategy for data augmentation, designed for parametric laminar incompressible flows, is proposed to enrich poorly populated training sets. The goal is to include in the new, artificial snapshots emerging features, not present in the original basis, that enhance the quality of the reduced-order solution. The methodologies devised are based on exploiting basic physical principles, such as mass and momentum conservation, to construct physically relevant, artificial snapshots at a fraction of the cost of additional full-order solutions. Interestingly, the numerical results show that the ideas exploiting only mass conservation (i.e., incompressibility) do not produce significant added value with respect to standard linear combinations of snapshots. Conversely, accounting for the linearized momentum balance via the Oseen equation does improve the quality of the resulting approximation and is therefore an effective data augmentation strategy in the framework of viscous incompressible laminar flows.
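For reference, the reduced-basis construction itself is a plain SVD of the snapshot matrix, and augmented snapshots simply enter as extra columns; a minimal sketch (function name and tolerance are illustrative):

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """POD: SVD of the snapshot matrix, keeping the modes that capture
    the singular-value energy up to the given tolerance. Each column of
    `snapshots` is a full-order solution (original or artificial)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r], s[:r]

# Usage: Phi, s = pod_basis(np.hstack([S_original, S_augmented]))
```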

Most categorical models for dependent types have traditionally been heavily set-based: contexts form a category, for each context we have a set of types in said context, and for each type a set of terms of said type. This is the case for categories with families, categories with attributes, and natural models; in particular, all of them can be traced back to certain discrete Grothendieck fibrations. We extend this intuition to the case of general, not necessarily discrete, fibrations, so that over a given context one has not only a set but a category of types. We argue that the added structure can be attributed to a notion of subtyping that shares many features with that of coercive subtyping, in the sense that it is the product of thinking about subtyping as an abbreviation mechanism: we say that a given type $A'$ is a subtype of $A$ if there is a unique coercion from $A'$ to $A$. Whenever we need a term of type $A$, then, it suffices to have a term of type $A'$, which we can `plug in' to $A$. For this version of subtyping we provide rules, coherences, and explicit models, and we compare and contrast it with coercive subtyping as introduced by Z. Luo and others. We conclude by suggesting how the tools we present can be employed in finding appropriate rules relating subtyping and certain type constructors.
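As a toy illustration of subtyping-as-unique-coercion viewed as an abbreviation mechanism (and nothing more; the paper's fibrational semantics is not captured here), the sketch below registers coercions between hypothetical type names and silently applies them, including one step of composition:

```python
# A' <: A iff there is exactly one registered coercion A' -> A; using a
# term of type A' where A is expected means applying that coercion.
coercions = {("Nat", "Int"): lambda n: n, ("Int", "Real"): float}

def plug_in(term, src, tgt):
    if src == tgt:
        return term
    if (src, tgt) in coercions:
        return coercions[(src, tgt)](term)
    for (a, b), f in coercions.items():      # one step of composition
        if a == src and (b, tgt) in coercions:
            return coercions[(b, tgt)](f(term))
    raise TypeError(f"no coercion from {src} to {tgt}")

# plug_in(3, "Nat", "Real") returns 3.0 via the composite Nat -> Int -> Real.
```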

Partitioned neural network functions are used to approximate the solution of partial differential equations. The problem domain is partitioned into non-overlapping subdomains, the partitioned neural network functions are defined on these subdomains, and each neural network function approximates the solution in its own subdomain. To obtain a convergent neural network solution, certain continuity conditions on the partitioned neural network functions across the subdomain interfaces need to be included in the loss function that is used to train the parameters of the neural network functions. In our work, by introducing suitable interface values, the loss function is reformulated into a sum of localized loss functions, and each localized loss function is used to train the corresponding local neural network parameters. In addition, to accelerate the convergence of the neural network solution, each localized loss function is enriched with an augmented Lagrangian term, where the interface condition and the boundary condition are enforced as constraints on the local solutions by using Lagrange multipliers. The local neural network parameters and Lagrange multipliers are then found by optimizing the localized loss function. To take advantage of the localized loss functions for parallel computation, an iterative algorithm is also proposed. The training performance and convergence of the proposed algorithms are studied numerically for various test examples.
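A schematic PyTorch sketch of one such localized loss with an augmented Lagrangian interface term follows; the names pde_residual, u_if, lam, and mu are placeholders, and the boundary condition would be enforced analogously.

```python
import torch

def localized_loss(net, x_int, pde_residual, x_if, u_if, lam, mu):
    """Localized loss for one subdomain network: interior PDE residual
    plus an augmented Lagrangian term enforcing agreement with the
    prescribed interface values u_if, with multipliers lam and
    penalty parameter mu."""
    r = pde_residual(net, x_int)            # local PDE residual
    c = net(x_if).squeeze(-1) - u_if        # interface constraint c = 0
    return (r**2).mean() + (lam * c).mean() + 0.5 * mu * (c**2).mean()

# After each round of local training, the multipliers are updated as
#   lam <- lam + mu * c.detach()
# and interface values are exchanged between neighboring subdomains,
# which is what makes the iteration parallel across subdomains.
```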

Tensorial neural networks (TNNs) combine the successes of multilinear algebra with those of deep learning to enable extremely efficient reduced-order models of high-dimensional problems. Here, I describe a deep neural network architecture that fuses multiple TNNs into a larger network, intended to solve a broader class of problems than a single TNN. I evaluate this architecture, referred to as a "stacked tensorial neural network" (STNN), on a parametric PDE with three independent variables and three parameters. The three parameters correspond to one PDE coefficient and two quantities describing the domain geometry. The STNN provides an accurate reduced-order description of the solution manifold over a wide range of parameters. There is also evidence of meaningful generalization to parameter values outside its training data. Finally, while the STNN architecture is relatively simple and problem agnostic, it can be regularized to incorporate problem-specific features like symmetries and physical modeling assumptions.
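The abstract does not spell out the fusion architecture, so the following is only a speculative toy sketch of the general idea: several tensorized (factorized) blocks whose outputs are mixed by a gate conditioned on the PDE coefficient and geometry parameters.

```python
import torch
import torch.nn as nn

class TensorialLayer(nn.Module):
    """Toy tensorial layer: a factorized linear map on a 2-D field,
    with separate learned transforms along each axis."""
    def __init__(self, n1, n2):
        super().__init__()
        self.A = nn.Parameter(torch.randn(n1, n1) / n1**0.5)
        self.B = nn.Parameter(torch.randn(n2, n2) / n2**0.5)

    def forward(self, X):                   # X: (batch, n1, n2)
        return torch.tanh(self.A @ X @ self.B.T)

class StackedTNN(nn.Module):
    """Several tensorial blocks fused by a parameter-conditioned gate;
    a schematic, not the paper's architecture."""
    def __init__(self, n1, n2, n_blocks, n_params=3):
        super().__init__()
        self.blocks = nn.ModuleList(TensorialLayer(n1, n2)
                                    for _ in range(n_blocks))
        self.gate = nn.Linear(n_params, n_blocks)

    def forward(self, X, params):           # params: (batch, n_params)
        w = torch.softmax(self.gate(params), dim=-1)
        ys = torch.stack([blk(X) for blk in self.blocks], dim=1)
        return (w[:, :, None, None] * ys).sum(dim=1)
```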

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which uses convolutional neural network architectures to extract the semantic features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods, with several comments made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close to each other. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfying performance of SRH.
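For background, a generic similarity-preserving pairwise loss used across deep supervised hashing is sketched below in PyTorch; this is a standard contrastive formulation, not the SRH shadow mechanism, and the margin value is an illustrative choice.

```python
import torch

def pairwise_hash_loss(codes, sim, margin=2.0):
    """Similarity-preserving loss on relaxed binary codes (e.g. tanh
    outputs of the CNN): pull codes of similar pairs together, push
    dissimilar pairs at least `margin` apart. sim[i, j] is 1 for
    similar image pairs and 0 otherwise."""
    d = torch.cdist(codes, codes)                    # pairwise distances
    pull = sim * d**2
    push = (1 - sim) * torch.clamp(margin - d, min=0)**2
    return (pull + push).mean()

# At retrieval time the relaxed codes are binarized, e.g. torch.sign(codes).
```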
