Invariant finite-difference schemes are considered for one-dimensional magnetohydrodynamics (MHD) equations in mass Lagrangian coordinates for the cases of finite and infinite conductivity. To construct these schemes, previously obtained results of the group classification of the MHD equations are used. On the basis of the classical Samarskiy-Popov scheme, new schemes are constructed for the case of finite conductivity. These schemes admit all symmetries of the original differential model and have difference analogues of all of its local differential conservation laws; among these conservation laws are some previously unknown ones. In the case of infinite conductivity, conservative invariant schemes are constructed as well. For isentropic flows of a polytropic gas, the proposed schemes possess the energy conservation law and preserve entropy on two time layers. This is achieved by means of specially selected approximations for the equation of state of a polytropic gas. Invariant difference schemes with additional conservation laws are also proposed.
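
The following is a minimal toy sketch, not one of the schemes from the paper: an explicit finite-difference step for 1D gas dynamics in mass Lagrangian coordinates (the non-magnetic core of the MHD system) on a periodic mesh, together with a numerical check that the simplest discrete conservation laws hold; all variable names and parameters here are illustrative choices.

```python
# Toy sketch only (not the invariant schemes of the paper): an explicit
# finite-difference step for 1D gas dynamics in mass Lagrangian coordinates
# on a periodic mesh, plus a check that the simplest discrete conservation
# laws (total volume and total momentum) hold to round-off.  Exact discrete
# energy conservation requires the specially weighted (completely
# conservative) Samarskiy-Popov approximations and is not attempted here.
import numpy as np

gamma, N, dm, tau = 5.0 / 3.0, 64, 1.0 / 64, 1e-4
s = np.arange(N) * dm                       # mass coordinate
V = 1.0 + 0.1 * np.sin(2 * np.pi * s)       # specific volume
u = 0.1 * np.cos(2 * np.pi * s)             # velocity
e = np.ones(N)                              # specific internal energy
p = (gamma - 1.0) * e / V                   # polytropic equation of state

def d0(f):
    """Central difference in the mass coordinate on a periodic mesh."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dm)

mom0, vol0 = u.sum() * dm, V.sum() * dm
for _ in range(100):
    V = V + tau * d0(u)          # V_t = u_s
    u = u - tau * d0(p)          # u_t = -p_s
    e = e - tau * p * d0(u)      # e_t = -p u_s
    p = (gamma - 1.0) * e / V

print("momentum drift:", abs(u.sum() * dm - mom0))   # round-off only
print("volume drift:  ", abs(V.sum() * dm - vol0))   # round-off only
```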

Related content

The problem of finding a nonzero solution of a linear recurrence $Ly = 0$ with polynomial coefficients where $y$ has the form of a definite hypergeometric sum, related to the Inverse Creative Telescoping Problem of [14, Sec. 8], has now been open for three decades. Here we present an algorithm (implemented in a SageMath package) which, given such a recurrence and a quasi-triangular, shift-compatible factorial basis $\mathcal{B} = \langle P_k(n)\rangle_{k=0}^\infty$ of the polynomial space $\mathbb{K}[n]$ over a field $\mathbb{K}$ of characteristic zero, computes a recurrence satisfied by the coefficient sequence $c = \langle c_k\rangle_{k=0}^\infty$ of the solution $y_n = \sum_{k=0}^\infty c_kP_k(n)$ (where, thanks to the quasi-triangularity of $\mathcal{B}$, the sum on the right terminates for each $n \in \mathbb{N}$). More generally, if $\mathcal{B}$ is $m$-sieved for some $m \in \mathbb{N}$, our algorithm computes a system of $m$ recurrences satisfied by the $m$-sections of the coefficient sequence $c$. If an explicit nonzero solution of this system can be found, we obtain an explicit nonzero solution of $Ly = 0$.
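
As a hedged illustration of the setting (not of the algorithm or its SageMath interface), the binomial basis $P_k(n) = \binom{n}{k}$ is a quasi-triangular, shift-compatible factorial basis; the sketch below takes the recurrence $Ly = y(n+1) - 2y(n) = 0$ and verifies that the constant coefficient sequence $c_k = 1$ satisfies the induced recurrence $c_{k+1} - c_k = 0$ while reproducing $y_n = 2^n$.

```python
# Illustration of the setting only: y_n = sum_k c_k * C(n, k) with the
# binomial basis, which is quasi-triangular (C(n, k) = 0 for k > n), so the
# sum terminates for every n.  We check a candidate coefficient recurrence
# and the original recurrence L y = y(n+1) - 2 y(n) = 0 numerically.
from math import comb

def y(n, c, K=50):
    # Quasi-triangularity makes the sum finite for each n.
    return sum(c(k) * comb(n, k) for k in range(min(n, K) + 1))

c = lambda k: 1                                                  # coefficient sequence
assert all(c(k + 1) - c(k) == 0 for k in range(20))              # recurrence for c
assert all(y(n + 1, c) - 2 * y(n, c) == 0 for n in range(20))    # L y = 0, i.e. y_n = 2^n
print("y_0..y_5 =", [y(n, c) for n in range(6)])
```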

We study full Bayesian procedures for high-dimensional linear regression. We adopt the data-dependent empirical priors introduced in [1], which were shown there to have nice posterior contraction properties and to be easy to compute. Our paper extends these theoretical results to the case of unknown error variance. Under suitable sparsity assumptions, we achieve model selection consistency, posterior contraction rates, as well as a Bernstein-von Mises theorem, by analyzing the multivariate t-distribution.
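
For orientation, a standard conjugate computation (not the empirical-prior construction of [1]) shows why a multivariate t-distribution appears once the error variance is unknown and integrated out:
\[
  y \mid \beta,\sigma^2 \sim N(X\beta,\sigma^2 I_n), \qquad
  \beta \mid \sigma^2 \sim N(\mu_0,\sigma^2\Sigma_0), \qquad
  \sigma^2 \sim \mathrm{InvGamma}(a,b),
\]
\[
  \Sigma_n = (X^\top X + \Sigma_0^{-1})^{-1},\qquad
  \mu_n = \Sigma_n (X^\top y + \Sigma_0^{-1}\mu_0),
\]
\[
  a_n = a + \tfrac{n}{2},\qquad
  b_n = b + \tfrac12\bigl(y^\top y + \mu_0^\top\Sigma_0^{-1}\mu_0 - \mu_n^\top\Sigma_n^{-1}\mu_n\bigr),
\]
\[
  \beta \mid y \;\sim\; t_{2a_n}\!\bigl(\mu_n,\ \tfrac{b_n}{a_n}\,\Sigma_n\bigr)
  \quad\text{after integrating out } \sigma^2 .
\]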

We prove that the stack-number of the strong product of three $n$-vertex paths is $\Theta(n^{1/3})$. The best previously known upper bound was $O(n)$. No non-trivial lower bound was known. This is the first explicit example of a graph family with bounded maximum degree and unbounded stack-number. The main tool used in our proof of the lower bound is the topological overlap theorem of Gromov. We actually prove a stronger result in terms of so-called triangulations of Cartesian products. We conclude that triangulations of three-dimensional Cartesian products of any sufficiently large connected graphs have large stack-number. The upper bound is a special case of a more general construction based on families of permutations derived from Hadamard matrices. The strong product of three paths is also the first example of a bounded degree graph with bounded queue-number and unbounded stack-number. A natural question that follows from our result is to determine the smallest $\Delta_0$ such that there exists a graph family with unbounded stack-number, bounded queue-number and maximum degree $\Delta_0$. We show that $\Delta_0\in \{6,7\}$.
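
As a hedged illustration of the graph family itself (not of the proof), one can build the strong product of three $n$-vertex paths with networkx and check that its maximum degree is $3^3 - 1 = 26$ independently of $n$:

```python
# Build P_n x P_n x P_n (strong product) and check its bounded maximum degree.
import networkx as nx

n = 6
P = nx.path_graph(n)
G = nx.strong_product(nx.strong_product(P, P), P)
print(G.number_of_nodes())             # n**3 = 216
print(max(d for _, d in G.degree()))   # 26 for every n >= 3
```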

We develop a class of mixed-integer formulations for disjunctive constraints intermediate to the big-M and convex hull formulations in terms of relaxation strength. The main idea is to capture the best of both the big-M and convex hull formulations: a computationally light formulation with a tight relaxation. The "$P$-split" formulations are based on a lifted transformation that splits convex additively separable constraints into $P$ partitions and forms the convex hull of the linearized and partitioned disjunction. We analyze the continuous relaxation of the $P$-split formulations and show that, under certain assumptions, the formulations form a hierarchy starting from a big-M equivalent and converging to the convex hull. The goal of the $P$-split formulations is to form a strong approximation of the convex hull through a computationally simpler formulation. We computationally compare the $P$-split formulations against big-M and convex hull formulations on 320 test instances. The test problems include K-means clustering, P_ball problems, and optimization over trained ReLU neural networks. The computational results show promising potential of the $P$-split formulations. For many of the test problems, $P$-split formulations are solved with a similar number of explored nodes as the convex hull formulation, while reducing the solution time by an order of magnitude and outperforming big-M both in time and number of explored nodes.
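
For reference, these are the two endpoint formulations the abstract refers to, written for a polyhedral disjunction $\bigvee_{d \in D} (A_d x \le b_d)$ with $x \in [x^L, x^U]$; the $P$-split construction itself interpolates between their continuous relaxations and is not reproduced here. The big-M formulation is
\[
  A_d x \le b_d + M_d (1 - z_d)\ \ \forall d \in D,\qquad
  \sum_{d\in D} z_d = 1,\qquad z \in \{0,1\}^{|D|},
\]
while the (extended) convex hull formulation is
\[
  x = \sum_{d\in D} \nu_d,\qquad
  A_d \nu_d \le b_d z_d,\qquad
  x^L z_d \le \nu_d \le x^U z_d\ \ \forall d \in D,\qquad
  \sum_{d\in D} z_d = 1,\qquad z \in \{0,1\}^{|D|}.
\]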

Learning data representations under uncertainty is an important task that emerges in numerous machine learning applications. However, uncertainty quantification (UQ) techniques are computationally intensive and become prohibitively expensive for high-dimensional data. In this paper, we present a novel surrogate model for representation learning and uncertainty quantification, which aims to deal with data of moderate to high dimensions. The proposed model combines a neural network approach for dimensionality reduction of the (potentially high-dimensional) data with a surrogate model method for learning the data distribution. We first employ a variational autoencoder (VAE) to learn a low-dimensional representation of the data distribution. We then propose to harness a polynomial chaos expansion (PCE) formulation to map this distribution to the output target. The coefficients of the PCE are learned from the distribution representation of the training data using a maximum mean discrepancy (MMD) approach. Our model enables us to (a) learn a representation of the data, (b) estimate uncertainty in the high-dimensional data system, and (c) match high-order moments of the output distribution, all without any prior statistical assumptions on the data. Numerical experimental results are presented to illustrate the performance of the proposed method.
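
A hedged sketch of the maximum mean discrepancy criterion mentioned above, with a Gaussian kernel; the full VAE + PCE pipeline is not reproduced, and the sample sizes and bandwidth below are arbitrary choices.

```python
# Biased estimate of MMD^2 between two samples with an RBF kernel.
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of MMD^2 between samples X and Y."""
    return rbf(X, X, sigma).mean() + rbf(Y, Y, sigma).mean() - 2.0 * rbf(X, Y, sigma).mean()

rng = np.random.default_rng(0)
X1 = rng.normal(0.0, 1.0, size=(500, 2))
X2 = rng.normal(0.0, 1.0, size=(500, 2))
Y = rng.normal(0.5, 1.0, size=(500, 2))
print(mmd2(X1, X2), mmd2(X1, Y))   # near zero vs. clearly positive
```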

Heged\H{u}s's lemma is the following combinatorial statement regarding polynomials over finite fields. Over a field $\mathbb{F}$ of characteristic $p > 0$ and for $q$ a power of $p$, the lemma says that any multilinear polynomial $P\in \mathbb{F}[x_1,\ldots,x_n]$ of degree less than $q$ that vanishes at all points in $\{0,1\}^n$ of Hamming weight $k\in [q,n-q]$ must also vanish at all points in $\{0,1\}^n$ of weight $k + q$. This lemma was used by Heged\H{u}s (2009) to give a solution to \emph{Galvin's problem}, an extremal problem about set systems; by Alon, Kumar and Volk (2018) to improve the best-known multilinear circuit lower bounds; and by Hrube\v{s}, Ramamoorthy, Rao and Yehudayoff (2019) to prove optimal lower bounds against depth-$2$ threshold circuits for computing some symmetric functions. In this paper, we formulate a robust version of Heged\H{u}s's lemma. Informally, this version says that if a polynomial of degree $o(q)$ vanishes at most points of weight $k$, then it vanishes at many points of weight $k+q$. We prove this lemma and give three different applications.
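
A hedged brute-force check of the statement in the smallest case we could set up ($p = q = 2$, $n = 4$, $k = 2$, so $k \in [q, n-q]$): every affine (degree $< q$) multilinear polynomial over $\mathbb{F}_2$ that vanishes on all weight-$2$ points of $\{0,1\}^4$ also vanishes on all weight-$4$ points.

```python
# Exhaustive check of the lemma for affine polynomials over F_2, n = 4, k = 2.
from itertools import product

n, q, k = 4, 2, 2
points = lambda w: [x for x in product((0, 1), repeat=n) if sum(x) == w]

def val(coeffs, x):          # coeffs = (a0, a1, ..., an), all arithmetic mod 2
    return (coeffs[0] + sum(a * xi for a, xi in zip(coeffs[1:], x))) % 2

for coeffs in product((0, 1), repeat=n + 1):
    if all(val(coeffs, x) == 0 for x in points(k)):
        assert all(val(coeffs, x) == 0 for x in points(k + q)), coeffs
print("lemma verified for all affine polynomials over F_2 with n = 4, k = 2")
```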

We investigate time complexities of finite difference methods for solving the high-dimensional linear heat equation, the high-dimensional linear hyperbolic equation and the multiscale hyperbolic heat system with quantum algorithms (henceforth referred to as the "quantum difference methods"). Our detailed analyses show that for the heat and linear hyperbolic equations the quantum difference methods provide exponential speedup over the classical difference method with respect to the spatial dimension. For the multiscale problem, the time complexity of both the classical treatment and the quantum treatment for the explicit scheme scales as $O(1/\varepsilon)$, where $\varepsilon$ is the scaling parameter, while the scaling for the Asymptotic-Preserving (AP) schemes does not depend on $\varepsilon$. This indicates that it is still of great importance to develop AP schemes for multiscale problems in quantum computing.
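
For context, here is the classical baseline only (the quantum algorithms themselves are not sketched): one explicit central-difference step for the heat equation on a $d$-dimensional grid with $M$ points per dimension costs $O(M^d)$ classically, which is the dependence on the spatial dimension that the quantum difference methods are shown to improve exponentially.

```python
# One explicit Euler / central-difference step for u_t = Laplacian(u) on a
# periodic d-dimensional grid; the classical cost per step is O(M**d).
import numpy as np

d, M, h, tau = 3, 16, 1.0 / 16, 1e-4
u = np.random.default_rng(0).random((M,) * d)    # M**d degrees of freedom

def laplacian(u, h):
    out = np.zeros_like(u)
    for axis in range(u.ndim):                   # periodic central differences
        out += (np.roll(u, 1, axis) - 2 * u + np.roll(u, -1, axis)) / h ** 2
    return out

u = u + tau * laplacian(u, h)
print("grid size:", u.size)                      # M**d = 4096
```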

We propose Characteristic Neural Ordinary Differential Equations (C-NODEs), a framework for extending Neural Ordinary Differential Equations (NODEs) beyond ODEs. While NODEs model the evolution of the latent state as the solution to an ODE, the proposed C-NODE models the evolution of the latent state as the solution of a family of first-order quasi-linear partial differential equations (PDEs) along their characteristics, defined as curves along which the PDEs reduce to ODEs. The reduction, in turn, allows the application of the standard frameworks for solving ODEs to PDE settings. Additionally, the proposed framework can be cast as an extension of existing NODE architectures, thereby allowing the use of existing black-box ODE solvers. We prove that the C-NODE framework extends the classical NODE by exhibiting functions that cannot be represented by NODEs but are representable by C-NODEs. We further investigate the efficacy of the C-NODE framework by demonstrating its performance in many synthetic and real data scenarios. Empirical results demonstrate the improvements provided by the proposed method for CIFAR-10, SVHN, and MNIST datasets under a similar computational budget as the existing NODE methods.
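
In the scalar one-dimensional case, the classical reduction the abstract refers to reads as follows: along a characteristic curve $t \mapsto x(t)$, the quasi-linear PDE becomes a pair of ODEs, to which a standard (neural) ODE solver can be applied,
\[
  u_t + a(x,t,u)\,u_x = b(x,t,u)
  \quad\Longrightarrow\quad
  \frac{dx}{dt} = a\bigl(x(t),t,u(t)\bigr),\qquad
  \frac{du}{dt} = b\bigl(x(t),t,u(t)\bigr),
\]
where $u(t)$ denotes the value of the solution along the characteristic.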

Learning rate schedules are ubiquitously used to speed up and improve optimisation. Many different policies have been introduced on an empirical basis, and theoretical analyses have been developed for convex settings. However, in many realistic problems the loss-landscape is high-dimensional and non-convex -- a case for which results are scarce. In this paper we present a first analytical study of the role of learning rate scheduling in this setting, focusing on Langevin optimization with a learning rate decaying as $\eta(t)=t^{-\beta}$. We begin by considering models where the loss is a Gaussian random function on the $N$-dimensional sphere ($N\rightarrow \infty$), featuring an extensive number of critical points. We find that to speed up optimization without getting stuck in saddles, one must choose a decay rate $\beta<1$, contrary to convex setups where $\beta=1$ is generally optimal. We then add to the problem a signal to be recovered. In this setting, the dynamics decompose into two phases: an \emph{exploration} phase where the dynamics navigate through rough parts of the landscape, followed by a \emph{convergence} phase where the signal is detected and the dynamics enter a convex basin. In this case, it is optimal to keep a large learning rate during the exploration phase to escape the non-convex region as quickly as possible, then use the convex criterion $\beta=1$ to converge rapidly to the solution. Finally, we demonstrate that our conclusions hold in a common regression task involving neural networks.
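
A hedged toy example of Langevin optimization with the decaying schedule $\eta(t)=t^{-\beta}$, run on a simple convex quadratic loss rather than the high-dimensional random landscapes analysed in the paper; the temperature, dimension, and step count below are arbitrary choices.

```python
# Langevin dynamics with step size eta(t) = t**(-beta) on L(theta) = ||theta||^2 / 2.
import numpy as np

def langevin(beta, T=1e-3, steps=20000, dim=50, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=dim)
    for t in range(1, steps + 1):
        eta = t ** (-beta)
        grad = theta                                        # gradient of the quadratic loss
        theta = theta - eta * grad + np.sqrt(2 * eta * T) * rng.normal(size=dim)
    return 0.5 * np.dot(theta, theta)

for beta in (0.5, 1.0):
    print(f"beta = {beta}: final loss = {langevin(beta):.4f}")
```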

Imposing orthogonality on the layers of neural networks is known to facilitate learning by limiting exploding/vanishing gradients, decorrelating the features, and improving robustness. This paper studies theoretical properties of orthogonal convolutional layers. We establish necessary and sufficient conditions on the layer architecture guaranteeing the existence of an orthogonal convolutional transform. The conditions prove that orthogonal convolutional transforms exist for almost all architectures used in practice with 'circular' padding. We also exhibit limitations with the 'valid' boundary condition and the 'same' boundary condition with zero padding. Recently, a regularization term imposing the orthogonality of convolutional layers has been proposed, and impressive empirical results have been obtained in different applications (Wang et al. 2020). The second motivation of the present paper is to specify the theory behind this. We make the link between this regularization term and orthogonality measures. In doing so, we show that this regularization strategy is stable with respect to numerical and optimization errors and that, in the presence of small errors and when the size of the signal/image is large, the convolutional layers remain close to isometric. The theoretical results are confirmed with experiments, the landscape of the regularization term is studied, and the regularization strategy is validated on real datasets. Altogether, the study guarantees that regularization with $L_{orth}$ (Wang et al. 2020) is an efficient, flexible and stable numerical strategy to learn orthogonal convolutional layers.
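
A hedged illustration of what orthogonality means for a convolutional layer in the simplest setting (1D, single channel, stride 1, circular padding), not of the $L_{orth}$ regularizer of Wang et al. (2020): build the matrix of the circular convolution and measure its distance from an isometry.

```python
# Check orthogonality of a 1D circular convolution by forming its matrix.
import numpy as np

def circ_conv_matrix(kernel, n):
    """Matrix of the circular convolution with `kernel` on signals of length n."""
    col = np.zeros(n)
    col[: len(kernel)] = kernel
    return np.stack([np.roll(col, i) for i in range(n)], axis=1)

def orth_defect(kernel, n=32):
    M = circ_conv_matrix(np.asarray(kernel, float), n)
    return np.linalg.norm(M.T @ M - np.eye(n))

print(orth_defect([0.0, 1.0, 0.0]))   # a pure shift: exactly orthogonal, defect ~ 0
print(orth_defect([0.5, 0.5, 0.0]))   # an averaging kernel: clearly not orthogonal
```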
