

We present a data-driven approach to constructing entropy-based closures for moment systems derived from kinetic equations. The proposed closure learns the entropy function by fitting the map between the moments and the entropy of the moment system, and thus does not depend on the space-time discretization of the moment system or on specific problem configurations such as initial and boundary conditions. With convex and $C^2$ approximations, this data-driven closure inherits several structural properties of entropy-based closures, such as entropy dissipation, hyperbolicity, and the H-theorem. We construct convex approximations to the Maxwell-Boltzmann entropy using convex splines and neural networks, test them on the plane-source benchmark problem for linear transport in slab geometry, and compare the results to the standard, optimization-based M$_N$ closures. Numerical results indicate that these data-driven closures provide accurate solutions in much less computation time than the M$_N$ closures.
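
As a rough illustration of the fitting step described above, the sketch below trains an input-convex neural network to map moments to an entropy value; convexity of the learned map is what the structural properties mentioned in the abstract rely on. PyTorch, the ICNN architecture, and all hyperparameters are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed PyTorch): an input-convex neural network (ICNN)
# mapping moments u to an entropy value h(u).  Convexity in u is enforced by
# clamping the z-path weights to be non-negative and using a convex,
# non-decreasing activation (softplus).  Sizes and training setup are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    def __init__(self, n_moments, width=64, depth=3):
        super().__init__()
        self.Wz = nn.ModuleList([nn.Linear(width, width, bias=False)
                                 for _ in range(depth)])
        self.Wx = nn.ModuleList([nn.Linear(n_moments, width)
                                 for _ in range(depth + 1)])
        self.out_z = nn.Linear(width, 1, bias=False)
        self.out_x = nn.Linear(n_moments, 1)

    def forward(self, u):
        z = F.softplus(self.Wx[0](u))
        for Wz, Wx in zip(self.Wz, self.Wx[1:]):
            # non-negative z-path weights keep the output convex in u
            z = F.softplus(F.linear(z, Wz.weight.clamp(min=0.0)) + Wx(u))
        return F.linear(z, self.out_z.weight.clamp(min=0.0)) + self.out_x(u)

def fit(model, u_data, h_data, epochs=2000, lr=1e-3):
    """Fit (moments, entropy) pairs with a mean-squared-error loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.mse_loss(model(u_data), h_data)
        loss.backward()
        opt.step()
    return model
```

A trained convex surrogate of this kind can then stand in for the exact entropy, which is presumably why the abstract reports much lower computation time than the optimization-based M$_N$ closures.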

Related Content

Physics-informed neural networks (PINNs) show great advantages in solving partial differential equations. In this paper, we propose, for the first time, to study conformable time-fractional diffusion equations using PINNs. By solving a supervised learning task, we design a new spatio-temporal function approximator with high data efficiency. The L-BFGS algorithm is used to optimize the loss function, and the back-propagation algorithm is used to update the parameters that yield our numerical solutions. For the forward problem, we take the initial/boundary conditions (IC/BCs) as data and use the PINN to solve the corresponding partial differential equation. Three numerical examples are carried out to demonstrate the effectiveness of our method. In particular, when the order of the conformable fractional derivative $\alpha$ tends to $1$, a class of weighted PINNs is introduced to overcome the accuracy degradation caused by the singularity of solutions. For the inverse problem, we use the available data to train the neural network and elaborate the estimation of the parameter $\lambda$ in the equation. Again, we give three numerical examples to show that our method can accurately identify the parameters, even when the training data is corrupted with 1\% uncorrelated noise.
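
For concreteness, a minimal sketch of the PDE-residual term such a PINN could use is given below, relying on the fact that the conformable derivative of order $\alpha$ of a differentiable function equals $t^{1-\alpha}$ times its ordinary derivative. PyTorch, the network interface, and the parameter names are illustrative assumptions, not the paper's code.

```python
# Minimal sketch (assumed PyTorch): PDE residual of a conformable
# time-fractional diffusion equation  D_t^alpha u = lam * u_xx,
# using D_t^alpha u = t^(1 - alpha) * du/dt for differentiable u.
import torch

def pde_residual(net, x, t, alpha, lam):
    x = x.requires_grad_(True)          # x, t: (N, 1) tensors of collocation points
    t = t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return t.pow(1.0 - alpha) * u_t - lam * u_xx

# The full loss adds the mean-squared residual to the IC/BC data misfit and is
# minimized with L-BFGS, as described in the abstract.
```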

In this paper, we propose a general meta-learning approach to computing approximate Nash equilibria for finite $n$-player normal-form games. Unlike existing solutions that approximate or learn a Nash equilibrium from scratch for each game, our meta-solver directly constructs a mapping from a game utility matrix to a joint strategy profile. The mapping is parameterized and learned in a self-supervised fashion via a proposed Nash equilibrium approximation metric, without any ground-truth equilibrium data. As such, it can immediately predict the joint strategy profile that approximates a Nash equilibrium for any unseen game drawn from the same game distribution. Moreover, the meta-solver can be further fine-tuned and adapted to a new game if iterative updates are allowed. We theoretically prove that our meta-solver is not affected by the non-smoothness of exact Nash equilibrium solutions, and we derive a sample complexity bound demonstrating its generalization ability across normal-form games. Experimental results demonstrate its substantial approximation power against other strong baselines in both adaptive and non-adaptive cases.
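
Such a self-supervised signal can be an exploitability-style metric, which vanishes exactly at a Nash equilibrium and therefore needs no ground-truth labels. A two-player sketch of such a metric is shown below; the function name and NumPy implementation are illustrative assumptions and not necessarily the exact metric used in the paper.

```python
# Minimal sketch (assumed NumPy): an exploitability ("NashConv"-style) metric
# for a two-player normal-form game.  It is zero exactly when (p, q) is a
# Nash equilibrium, so it can train a meta-solver without labeled equilibria.
import numpy as np

def exploitability(U1, U2, p, q):
    """U1, U2: payoff matrices (m x n); p, q: mixed strategies of the two players."""
    v1, v2 = p @ U1 @ q, p @ U2 @ q       # current expected payoffs
    br1 = np.max(U1 @ q)                  # player 1's best-response value
    br2 = np.max(p @ U2)                  # player 2's best-response value
    return (br1 - v1) + (br2 - v2)
```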

One of the main reasons topological persistence is useful in data analysis is that it is backed by a stability (isometry) property: persistence diagrams of $1$-parameter persistence modules are stable in the sense that the bottleneck distance between two diagrams equals the interleaving distance between their generating modules. In the multi-parameter setting, however, this property breaks down in general. A simple special case of persistence modules, called rectangle decomposable modules, is known to admit a weaker stability property. Using this fact, we derive a stability-like property for $2$-parameter persistence modules. To this end, we first consider interval decomposable modules and their optimal approximations by rectangle decomposable modules with respect to the bottleneck distance. We provide a polynomial-time algorithm to compute this optimal approximation exactly, which, together with the polynomial-time computable bottleneck distance among interval decomposable modules, provides a lower bound on the interleaving distance. We then leverage this result to derive a polynomial-time computable distance for general multi-parameter persistence modules which enjoys a similar stability-like property. This distance can be viewed as a generalization of the matching distance defined in the literature.
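
For orientation, the two relations invoked above can be written as follows, where $d_b$ is the bottleneck distance, $d_I$ the interleaving distance, and $\hat d$ is a placeholder symbol (ours, not the paper's notation) for the polynomial-time computable distance constructed in the paper:
\[
d_b\bigl(\mathrm{Dgm}(M),\mathrm{Dgm}(N)\bigr) \;=\; d_I(M,N)
\quad\text{for $1$-parameter persistence modules,}
\]
\[
\hat d(M,N) \;\le\; d_I(M,N)
\quad\text{for $2$-parameter (and, by extension, multi-parameter) modules,}
\]
so $\hat d$ lower-bounds the interleaving distance while remaining computable in polynomial time.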

The muzzle blast caused by the discharge of a firearm generates a loud, impulsive sound that propagates away from the shooter in all directions. The location of the source can be computed from time-of-arrival measurements of the muzzle blast at multiple acoustic sensors at known locations, a technique known as multilateration. The multilateration problem is considerably simplified by assuming straight-line propagation in a homogeneous medium, a model for which there are multiple published solutions. Live-fire tests of the ShotSpotter gunshot location system in Pittsburgh, PA were analyzed off-line under several algorithms and geometric constraints to evaluate the accuracy of acoustic multilateration in a forensic context. Best results were obtained using the algorithm due to Mathias, Leonari and Galati under a two-dimensional geometric constraint. Multilateration on random subsets of the participating sensor array shows that 96% of shots can be located to an accuracy of 15 m or better when six or more sensors participate in the solution.
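
Under the simplified straight-line, homogeneous-medium model, source location reduces to a small nonlinear least-squares problem. The sketch below is a generic two-dimensional time-of-arrival solver, not the Mathias-Leonari-Galati algorithm evaluated in the study; SciPy, the assumed speed of sound, and the initial guess are illustrative choices.

```python
# Minimal sketch (assumed NumPy/SciPy): 2-D multilateration from muzzle-blast
# arrival times, assuming straight-line propagation at a constant sound speed.
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # nominal speed of sound in m/s (illustrative)

def locate(sensors, toa):
    """sensors: (n, 2) known positions; toa: (n,) measured arrival times."""
    def residual(p):
        x, y, t0 = p                       # unknown source position and firing time
        d = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
        return (t0 + d / C) - toa          # predicted minus measured arrival times
    x0 = np.array([*sensors.mean(axis=0), toa.min() - 0.1])
    return least_squares(residual, x0).x   # (x, y, t0)
```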

Deep neural networks are a state-of-the-art method in modern science and technology. Much of the statistical literature has been devoted to understanding their performance in nonparametric estimation, but the existing results are suboptimal due to a redundant logarithmic factor. In this paper, we show that such log-factors are not necessary. We derive upper bounds for the $L^2$ minimax risk in nonparametric estimation. Sufficient conditions on network architectures are provided under which the upper bounds become optimal (without logarithmic sacrifice). Our proof relies on an explicitly constructed network estimator based on tensor-product B-splines. We also derive asymptotic distributions for the constructed network and a related hypothesis-testing procedure. The testing procedure is further proven to be minimax optimal under suitable network architectures.
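
As background for the proof device mentioned above, the classical tensor-product B-spline regression that the constructed network estimator emulates can be written in a few lines. The sketch below is only that classical estimator, with SciPy's `BSpline.design_matrix` (assumed available, SciPy >= 1.8), the knot count, and the spline degree all being illustrative assumptions.

```python
# Minimal sketch (assumed NumPy/SciPy >= 1.8): least-squares regression on a
# tensor-product cubic B-spline basis over [0, 1]^2.
import numpy as np
from scipy.interpolate import BSpline

def tensor_bspline_fit(X, y, n_knots=8, k=3):
    """X: (N, 2) covariates in [0, 1]^2; y: (N,) responses."""
    t = np.r_[[0.0] * k, np.linspace(0.0, 1.0, n_knots), [1.0] * k]  # clamped knots
    B1 = BSpline.design_matrix(X[:, 0], t, k).toarray()
    B2 = BSpline.design_matrix(X[:, 1], t, k).toarray()
    # row-wise Kronecker (Khatri-Rao) product = tensor-product design matrix
    Phi = np.einsum('ni,nj->nij', B1, B2).reshape(len(X), -1)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return t, k, coef                     # knots, degree, fitted coefficients
```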

We explore an application of Physics-Informed Neural Networks (PINNs), in conjunction with Airy stress functions and Fourier series, to find optimal solutions to a few reference biharmonic problems of elasticity and elastic plate theory. Biharmonic relations are fourth-order partial differential equations (PDEs) that are challenging to solve using classical numerical methods and have not previously been addressed using PINNs. Our work highlights a novel application of classical analytical methods to guide the construction of efficient neural networks with a minimal number of parameters that are highly accurate and fast to evaluate. In particular, we find that enriching the feature space using Airy stress functions can significantly improve the accuracy of PINN solutions for biharmonic PDEs.
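
A simple way to realize the feature-space enrichment described above is to feed the network a few known biharmonic terms alongside the raw coordinates. The sketch below uses Michell-type terms in polar form; PyTorch, the particular terms, and the network size are assumptions made for illustration rather than the paper's architecture.

```python
# Minimal sketch (assumed PyTorch): a PINN whose input features are enriched
# with a few biharmonic (Michell-type) terms; x and y are (N, 1) tensors.
import torch
import torch.nn as nn

class EnrichedPINN(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(6, width), nn.Tanh(),
                                 nn.Linear(width, width), nn.Tanh(),
                                 nn.Linear(width, 1))

    def features(self, x, y):
        r2 = x ** 2 + y ** 2
        theta = torch.atan2(y, x)
        # each extra feature is a biharmonic function of (x, y)
        # (the log term is regularized near the origin)
        return torch.cat([x, y, r2, r2 * torch.log(r2 + 1e-12),
                          r2 * torch.cos(2 * theta), r2 * torch.sin(2 * theta)],
                         dim=1)

    def forward(self, x, y):
        return self.net(self.features(x, y))
```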

We propose an Extended Hybrid High-Order scheme for the Poisson problem whose solution possesses weak singularities. Some general assumptions are stated on the nature of this singularity and on the remaining part of the solution. The method is formulated by enriching the local polynomial spaces with appropriate singular functions. Via a detailed error analysis, the method is shown to converge optimally in both the discrete and continuous energy norms. Tests are conducted in two dimensions for singularities arising from irregular geometries of the domain. The numerical simulations illustrate the established error estimates and show the method to be a significant improvement over a standard Hybrid High-Order method.
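
As a schematic illustration of the enrichment (our notation, not the paper's): for a re-entrant corner of opening angle $\omega$ with homogeneous Dirichlet data on its edges, the leading singular component of the Poisson solution behaves like
\[
\psi(r,\theta) = r^{\pi/\omega}\sin\!\Bigl(\tfrac{\pi\theta}{\omega}\Bigr)
\]
in local polar coordinates, and the enrichment replaces the local polynomial space $\mathbb{P}^k(T)$ by $\mathbb{P}^k(T)\oplus\operatorname{span}\{\psi\}$, with the usual Hybrid High-Order reconstruction and stabilization then defined on the enriched space.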

This paper establishes the optimal approximation error characterization of deep rectified linear unit (ReLU) networks for smooth functions in terms of both width and depth simultaneously. To that end, we first prove that multivariate polynomials can be approximated by deep ReLU networks of width $\mathcal{O}(N)$ and depth $\mathcal{O}(L)$ with an approximation error $\mathcal{O}(N^{-L})$. Through local Taylor expansions and their deep ReLU network approximations, we show that deep ReLU networks of width $\mathcal{O}(N\ln N)$ and depth $\mathcal{O}(L\ln L)$ can approximate $f\in C^s([0,1]^d)$ with a nearly optimal approximation error $\mathcal{O}(\|f\|_{C^s([0,1]^d)}N^{-2s/d}L^{-2s/d})$. Our estimate is non-asymptotic in the sense that it is valid for arbitrary width and depth specified by $N\in\mathbb{N}^+$ and $L\in\mathbb{N}^+$, respectively.
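
A concrete instance of the polynomial-approximation step is the classical "sawtooth" construction approximating $x^2$, in which each additional composition roughly quarters the error; the code below is that textbook construction (not the specific networks of the abstract), written in plain NumPy for illustration.

```python
# Minimal sketch (assumed NumPy): the classical ReLU "sawtooth" approximation
# of x^2 on [0, 1]; L compositions of a hat function give error 2^(-2L-2),
# illustrating how depth buys accuracy for polynomials.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # piecewise-linear hat: 2x on [0, 1/2], 2 - 2x on [1/2, 1]
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

def approx_square(x, L=6):
    out, g = x.copy(), x.copy()
    for s in range(1, L + 1):
        g = hat(g)                     # s-fold composition: sawtooth with 2^(s-1) teeth
        out = out - g / 4.0 ** s
    return out

x = np.linspace(0.0, 1.0, 1001)
print(np.max(np.abs(approx_square(x) - x ** 2)))   # about 6e-5 for L = 6
```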

The combination of numerical integration and deep learning, i.e., ODE-net, has been successfully employed in a variety of applications. In this work, we introduce inverse modified differential equations (IMDE) to contribute to the behaviour and error analysis of the discovery of dynamics using ODE-net. It is shown that the difference between the learned ODE and the truncated IMDE is bounded by the sum of the learning loss and a discrepancy which can be made sub-exponentially small. In addition, we deduce that the total error of ODE-net is bounded by the sum of the discretization error and the learning loss. Furthermore, with the help of IMDE, theoretical results on learning Hamiltonian systems are derived. Several experiments are performed to numerically verify our theoretical results.
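
For readers unfamiliar with the setup, an ODE-net in the sense above is simply a neural vector field composed with a fixed numerical integrator and trained on consecutive snapshots; the one-step RK4 sketch below is illustrative (PyTorch, the architecture, and the step size are assumptions), not the paper's code.

```python
# Minimal sketch (assumed PyTorch): an ODE-net = learned vector field f_theta
# composed with one classical RK4 step of size h, trained on pairs (y_n, y_{n+1}).
import torch
import torch.nn as nn

class ODENet(nn.Module):
    def __init__(self, dim, width=64, h=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, width), nn.Tanh(),
                               nn.Linear(width, dim))
        self.h = h

    def forward(self, y):
        h, f = self.h, self.f
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Learning loss: mean of ||ODENet(y_n) - y_{n+1}||^2 over observed snapshots.
# The IMDE analysis above compares the learned field with the (truncated)
# inverse modified equation rather than with the true field directly.
```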

This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
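
To fix ideas, the basic sketch-then-factor idea behind such algorithms can be written in a few lines. The randomized range-finder below is only a generic illustration (the function name, oversampling, and plain Gaussian test matrix are assumptions), whereas the abstract's methods add structure preservation, fixed-rank truncation, and a priori error bounds.

```python
# Minimal sketch (assumed NumPy): low-rank approximation from a random linear
# image (sketch) of A, via a randomized range finder and a small SVD.
import numpy as np

def sketch_low_rank(A, r, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, r + oversample))   # random test matrix
    Y = A @ Omega                                      # sketch of the range of A
    Q, _ = np.linalg.qr(Y)                             # orthonormal range basis
    B = Q.T @ A                                        # small projected matrix
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U_hat[:, :r], s[:r], Vt[:r]             # rank-r factors

# usage: U, s, Vt = sketch_low_rank(A, 20); A is approximated by U @ np.diag(s) @ Vt
```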
