We construct a new nonlinear finite volume (FV) scheme for highly anisotropic diffusion equations that satisfies the discrete minimum-maximum principle. The construction relies on the linearized scheme satisfying less restrictive monotonicity conditions than those of an M-matrix, based on a weakly regular matrix splitting and on the Cartesian structure of the mesh (an extension to quadrilateral meshes is also possible). The resulting scheme, obtained by expressing fluxes as nonlinear combinations of linear fluxes, has a larger stencil than other nonlinear positivity-preserving or minimum-maximum-principle-preserving schemes. Its larger "linearized" stencil, closer to the actual complete stencil (which includes the unknowns appearing in the convex combination coefficients), enables faster convergence of the Picard iterations used to compute the solution of the scheme. Steady-state dimensionless numerical tests, as well as simulations of the highly anisotropic diffusion in electron radiation belts, show second-order convergence of the new scheme and confirm its computational efficiency compared to usual nonlinear FV schemes.
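
The Picard fixed-point iterations mentioned above solve the nonlinear algebraic system produced by the discretization by repeatedly freezing the solution-dependent coefficients and solving the resulting linear system. The following is a minimal, generic sketch of such a Picard loop on a 1D nonlinear diffusion problem; it is not the authors' anisotropic FV scheme, only an illustration of the linearize-solve-update cycle.

```python
# Generic Picard iteration for a nonlinear diffusion discretization A(u) u = f.
# Illustrative 1D problem (an assumption, not the paper's scheme):
#   -(k(u) u')' = 1 on (0,1), u(0) = u(1) = 0, with k(u) = 1 + u**2.
import numpy as np

def picard_solve(n=99, tol=1e-10, max_iter=200):
    h = 1.0 / (n + 1)
    f = np.ones(n)                                   # source term at interior nodes
    k = lambda u: 1.0 + u ** 2                       # solution-dependent diffusivity
    u = np.zeros(n)                                  # initial guess
    for it in range(max_iter):
        u_ext = np.concatenate(([0.0], u, [0.0]))    # homogeneous Dirichlet BCs
        kf = k(0.5 * (u_ext[:-1] + u_ext[1:]))       # n+1 face diffusivities (frozen)
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = (kf[i] + kf[i + 1]) / h ** 2
            if i > 0:
                A[i, i - 1] = -kf[i] / h ** 2
            if i < n - 1:
                A[i, i + 1] = -kf[i + 1] / h ** 2
        u_new = np.linalg.solve(A, f)                # solve the linearized scheme
        if np.linalg.norm(u_new - u) < tol * (1.0 + np.linalg.norm(u_new)):
            return u_new, it + 1                     # converged
        u = u_new
    return u, max_iter

u, iters = picard_solve()
print(f"converged in {iters} Picard iterations, max(u) = {u.max():.6f}")
```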

The backward induction method due to Bellman~\cite{bellman1952theory} is a popular approach to solving problems in optimization, optimal control, and many other areas of applied mathematics. In this paper we analyze the backward induction approach under min/max conditions. We show that if the value function has strictly positive derivatives of orders 1 through 4, then the optimal strategy for the adversary is Brownian motion. Using that fact, we analyze different potential functions and show that the Normal-Hedge potential is optimal.
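
As context for the recursion analyzed here, a minimal sketch of finite-horizon backward induction under min/max (adversarial) conditions is given below on a toy integer-state problem; the dynamics, action sets, and terminal cost are assumptions for illustration, not the paper's setting.

```python
# Backward induction V_t(x) = min_a max_b V_{t+1}(f(x, a, b)) on a toy problem:
# the controller picks a, the adversary picks b, the terminal cost is x**2.
from functools import lru_cache

T = 10                       # horizon
A = (-1, 0, 1)               # controller actions (assumed)
B = (-1, 1)                  # adversary actions (assumed)

def f(x, a, b):              # assumed state transition
    return x + a + b

@lru_cache(maxsize=None)
def V(t, x):
    if t == T:
        return x * x         # terminal cost
    # Bellman recursion with an adversarial inner maximization
    return min(max(V(t + 1, f(x, a, b)) for b in B) for a in A)

print(V(0, 0))               # game value from the origin
```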

This paper considers a novel multi-agent linear stochastic approximation algorithm driven by Markovian noise and general consensus-type interaction, in which each agent evolves according to its local stochastic approximation process which depends on the information from its neighbors. The interconnection structure among the agents is described by a time-varying directed graph. While the convergence of consensus-based stochastic approximation algorithms when the interconnection among the agents is described by doubly stochastic matrices (at least in expectation) has been studied, less is known about the case when the interconnection matrix is simply stochastic. For any uniformly strongly connected graph sequences whose associated interaction matrices are stochastic, the paper derives finite-time bounds on the mean-square error, defined as the deviation of the output of the algorithm from the unique equilibrium point of the associated ordinary differential equation. For the case of interconnection matrices being stochastic, the equilibrium point can be any unspecified convex combination of the local equilibria of all the agents in the absence of communication. Both the cases with constant and time-varying step-sizes are considered. In the case when the convex combination is required to be a straight average and interaction between any pair of neighboring agents may be uni-directional, so that doubly stochastic matrices cannot be implemented in a distributed manner, the paper proposes a push-sum-type distributed stochastic approximation algorithm and provides its finite-time bound for the time-varying step-size case by leveraging the analysis for the consensus-type algorithm with stochastic matrices and developing novel properties of the push-sum algorithm.
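
A minimal sketch of the class of recursions discussed above is given below: each agent mixes its neighbors' iterates through a row-stochastic (not doubly stochastic) weight matrix and then takes a local stochastic-approximation step. The local affine maps, noise model (i.i.d. here rather than Markovian), and step-size schedule are assumptions for illustration.

```python
# Consensus-type linear stochastic approximation with row-stochastic weights:
#   x_i(t+1) = sum_j W_ij(t) x_j(t) + alpha_t * (A_i x_i(t) + b_i + noise_i(t))
import numpy as np

rng = np.random.default_rng(0)
N, d, T = 5, 3, 20000                          # agents, dimension, iterations

A = [-np.eye(d) for _ in range(N)]             # Hurwitz local maps (assumed)
b = [rng.normal(size=d) for _ in range(N)]     # local offsets; local equilibria are b_i here

x = np.zeros((N, d))
for t in range(1, T + 1):
    W = rng.random((N, N))
    W /= W.sum(axis=1, keepdims=True)          # row-stochastic, generally not doubly stochastic
    alpha = 1.0 / t                            # time-varying step size
    noise = 0.1 * rng.normal(size=(N, d))      # i.i.d. stand-in for Markovian noise
    drift = np.stack([A[i] @ x[i] + b[i] for i in range(N)])
    x = W @ x + alpha * (drift + noise)

print(x)  # agents end up near a common convex combination of the local equilibria
```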

We propose a numerical method based on physics-informed Random Projection Neural Networks for the solution of Initial Value Problems (IVPs) of Ordinary Differential Equations (ODEs), with a focus on stiff problems. We employ an Extreme Learning Machine with a single hidden layer of radial basis functions whose widths are uniformly distributed random variables, while the weights between the input and the hidden layer are set equal to one. The numerical solution of the IVPs is obtained by constructing a system of nonlinear algebraic equations, which is solved with respect to the output weights by the Gauss-Newton method, using a simple adaptive scheme for adjusting the time interval of integration. To assess its performance, we apply the proposed method to four benchmark stiff IVPs, namely the Prothero-Robinson, van der Pol, ROBER and HIRES problems. Our method is compared with an adaptive Runge-Kutta method based on the Dormand-Prince pair and with a variable-step, variable-order multistep solver based on numerical differentiation formulas, as implemented in the \texttt{ode45} and \texttt{ode15s} MATLAB functions, respectively. We show that the proposed scheme yields good approximation accuracy, outperforming \texttt{ode45} and \texttt{ode15s}, especially in cases where steep gradients arise. Furthermore, the computational times of our approach are comparable with those of the two MATLAB solvers for practical purposes.
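
To make the construction concrete, the sketch below sets up a random-projection RBF network and solves the resulting collocation system for the output weights on a linear test IVP, $u' = -u$, $u(0) = 1$, so plain least squares suffices in place of Gauss-Newton; the Gaussian basis, random centers, and the test problem are assumptions for illustration only.

```python
# Physics-informed random-projection network for an IVP (linear test case):
# u(t) ~ sum_j beta_j * exp(-((t - c_j)/w_j)**2), residual u' + u = 0, u(0) = 1.
import numpy as np

rng = np.random.default_rng(1)
M, K = 40, 80                                           # hidden neurons, collocation points
t = np.linspace(0.0, 5.0, K)
c = rng.uniform(0.0, 5.0, M)                            # random centers (assumed)
w = rng.uniform(0.2, 1.0, M)                            # random widths, uniformly distributed

phi  = np.exp(-((t[:, None] - c) / w) ** 2)             # basis values, K x M
dphi = phi * (-2.0 * (t[:, None] - c) / w ** 2)         # analytic time derivatives

# collocation residuals of u' + u = 0, plus one row enforcing the initial condition
A = np.vstack([dphi + phi, np.exp(-((0.0 - c) / w) ** 2)[None, :]])
rhs = np.concatenate([np.zeros(K), [1.0]])
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)          # output weights

u = phi @ beta
print("max error vs exp(-t):", np.max(np.abs(u - np.exp(-t))))
```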

In this work we present a novel bulk-surface virtual element method (BSVEM) for the numerical approximation of elliptic bulk-surface partial differential equations (BSPDEs) in three space dimensions. The BSVEM is based on the discretisation of the bulk domain into polyhedral elements with arbitrarily many faces. The polyhedral approximation of the bulk induces a polygonal approximation of the surface. First, we present a geometric error analysis of bulk-surface polyhedral meshes that is independent of the numerical method. Then, we show that the BSVEM has optimal second-order convergence in space, provided the exact solution is $H^{2+3/4}$ in the bulk and $H^2$ on the surface, where the additional $\frac{3}{4}$ is due to the combined effect of surface curvature and polyhedral elements close to the boundary. We also show that general polyhedra can be exploited to reduce the computational time of the matrix assembly. To support our convergence results, a numerical example is presented which demonstrates optimal convergence of an elliptic BSPDE in three space dimensions.
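
For readers unfamiliar with the setting, a representative coupled elliptic bulk-surface system of the type considered is written below; the particular coupling and the coefficients $\alpha,\beta>0$ are generic assumptions rather than the paper's exact model:
\[
  -\Delta u + u = f \ \text{in } \Omega \subset \mathbb{R}^3, \qquad
  -\Delta_\Gamma v + v + \nabla u\cdot\nu = g \ \text{on } \Gamma = \partial\Omega, \qquad
  \nabla u\cdot\nu = \beta v - \alpha u \ \text{on } \Gamma,
\]
where $\Delta_\Gamma$ is the Laplace-Beltrami operator and $\nu$ the outward unit normal.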

A matroid $\mathcal{M}$ on a set $E$ of elements has the $\alpha$-partition property, for some $\alpha>0$, if it is possible to (randomly) construct a partition matroid $\mathcal{P}$ on (a subset of) the elements of $\mathcal{M}$ such that every independent set of $\mathcal{P}$ is independent in $\mathcal{M}$ and, for any weight function $w:E\to\mathbb{R}_{\geq 0}$, the expected value of the optimum of the matroid secretary problem on $\mathcal{P}$ is at least an $\alpha$-fraction of the optimum on $\mathcal{M}$. We show that the complete binary matroid $\mathcal{B}_d$ on $\mathbb{F}_2^d$ does not satisfy the $\alpha$-partition property for any constant $\alpha>0$ (independent of $d$). Furthermore, we refute a recent conjecture of B\'erczi, Schwarcz, and Yamaguchi by showing that the same matroid is $2^d/d$-colorable but cannot be reduced to an $\alpha 2^d/d$-colorable partition matroid for any $\alpha$ that is sublinear in $d$.
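
For concreteness, independence in the complete binary matroid $\mathcal{B}_d$ on $\mathbb{F}_2^d$ is simply linear independence over GF(2); the short sketch below checks it by Gaussian elimination mod 2 (illustrative helper code, not part of the paper).

```python
# Check independence of a set of 0/1 vectors in the complete binary matroid B_d,
# i.e. linear independence over GF(2), via Gaussian elimination mod 2.
import numpy as np

def is_independent_gf2(vectors):
    A = np.array(vectors, dtype=np.uint8) % 2
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue                          # no pivot in this column
        A[[rank, pivot]] = A[[pivot, rank]]   # swap pivot row into place
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]               # eliminate over GF(2)
        rank += 1
    return rank == A.shape[0]

# dependent example: the three vectors sum to zero mod 2
print(is_independent_gf2([[1, 0, 1], [0, 1, 1], [1, 1, 0]]))  # False
```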

Polynomial expansions are important in the analysis of neural network nonlinearities. They have been applied to address well-known difficulties in verification, explainability, and security. Existing approaches span classical Taylor and Chebyshev methods, asymptotics, and many numerical approaches. We find that while these individually have useful properties such as exact error formulas, adjustable domains, and robustness to undefined derivatives, no existing approach provides a consistent method yielding an expansion with all of these properties. To address this, we develop an analytically modified integral transform expansion (AMITE), a novel expansion via integral transforms modified using derived criteria for convergence. We present the general expansion and then demonstrate its application to two popular activation functions, the hyperbolic tangent and the rectified linear unit. Compared with existing expansions (i.e., Chebyshev, Taylor, and numerical) employed to this end, AMITE is the first to provide six previously mutually exclusive desired expansion properties, such as exact formulas for the coefficients and exact expansion errors (Table II). We demonstrate the effectiveness of AMITE in two case studies. First, a multivariate polynomial form is efficiently extracted from a single-hidden-layer black-box Multi-Layer Perceptron (MLP) to facilitate equivalence testing from noisy stimulus-response pairs. Second, a variety of Feed-Forward Neural Network (FFNN) architectures having between 3 and 7 layers are range-bounded using Taylor models improved by the AMITE polynomials and error formulas. AMITE presents a new dimension of expansion methods suitable for the analysis and approximation of nonlinearities in neural networks, opening new directions and opportunities for the theoretical analysis and systematic testing of neural networks.
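
For context on the baselines compared against, the sketch below computes a Chebyshev approximation of $\tanh$ on a chosen domain; it is one of the existing expansions referenced above, not the AMITE expansion itself, and the domain and degree are assumptions.

```python
# Chebyshev (baseline) polynomial approximation of tanh on [a, b].
import numpy as np
from numpy.polynomial import chebyshev as C

a, b, deg = -4.0, 4.0, 11                                # assumed domain and degree
x = np.cos(np.pi * (np.arange(200) + 0.5) / 200)         # Chebyshev nodes on [-1, 1]
t = 0.5 * (b - a) * x + 0.5 * (b + a)                    # nodes mapped to [a, b]
coef = C.chebfit(x, np.tanh(t), deg)                     # Chebyshev coefficients

tt = np.linspace(a, b, 1001)
approx = C.chebval(2.0 * (tt - a) / (b - a) - 1.0, coef)
print("max |error| on [a, b]:", np.max(np.abs(approx - np.tanh(tt))))
```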

Intelligent reflecting surfaces (IRSs) are emerging as promising enablers for the next generation of wireless communication systems because of their ability to customize favorable radio propagation environments. However, with the conventional passive architecture, IRSs can only adjust the phase of the incident signals, which limits the achievable beamforming gain. To fully unleash the potential of IRSs, in this paper we consider a more general IRS architecture, i.e., active IRSs, which can adapt the phase and amplify the magnitude of the reflected incident signal simultaneously with the support of an additional power source. To realize green communication in active IRS-assisted multiuser systems, we jointly optimize the reflection matrix at the IRS and the beamforming vector at the base station (BS) to minimize the BS transmit power. The resource allocation algorithm design is formulated as an optimization problem taking into account the maximum power budget of the active IRS and the quality-of-service (QoS) requirements of the users. To handle the non-convex design problem, we develop a novel and computationally efficient algorithm based on bilinear transformation and inner approximation methods. The proposed algorithm is guaranteed to converge to a locally optimal solution of the considered problem. Simulation results illustrate the effectiveness of the proposed scheme compared to two baseline schemes. Moreover, the results reveal that deploying active IRSs is a promising approach to enhancing system performance compared to conventional passive IRSs, especially when strong direct links exist.
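
As background, a commonly used active-IRS received-signal model for user $k$ takes the form below; this is an assumed standard formulation, not necessarily the exact system model adopted in the paper:
\[
  y_k = \bigl(\mathbf{h}_{d,k}^{H} + \mathbf{h}_{r,k}^{H}\boldsymbol{\Phi}\mathbf{G}\bigr)\mathbf{x}
        + \mathbf{h}_{r,k}^{H}\boldsymbol{\Phi}\mathbf{z} + n_k,
  \qquad
  \boldsymbol{\Phi} = \mathrm{diag}\bigl(a_1 e^{j\theta_1},\dots,a_M e^{j\theta_M}\bigr),
\]
where the amplification factors $a_m \ge 1$ are enabled by the additional power source and $\mathbf{z}$ is the thermal noise introduced and amplified at the active IRS.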

Owing to their asymptotic oracle property, non-convex penalties such as the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD) have attracted much attention in high-dimensional data analysis and have been widely used in signal processing, image restoration, matrix estimation, etc. However, in view of their non-convex and non-smooth characteristics, they are computationally challenging. Almost all existing algorithms converge only locally, and the proper selection of initial values is crucial. Therefore, in practice, they are often combined with a warm-starting technique to meet the rigid requirement that the initial value be sufficiently close to the optimal solution of the corresponding problem. In this paper, based on the DC (difference of convex functions) property of the MCP and SCAD penalties, we design a global two-stage algorithm for high-dimensional least squares linear regression problems. A key idea for making the proposed algorithm efficient is to use the primal dual active set with continuation (PDASC) method, which is equivalent to the semi-smooth Newton (SSN) method, to solve the corresponding sub-problems. Theoretically, we not only prove the global convergence of the proposed algorithm but also verify that the generated iterative sequence converges to a d-stationary point. In terms of computational performance, extensive experiments on simulated and real data show that our algorithm is superior to the latest SSN method and the classic coordinate descent (CD) algorithm for solving non-convex penalized high-dimensional linear regression problems.
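
For reference, the MCP penalty and the standard DC (difference-of-convex) splitting that such two-stage algorithms build on are
\[
  p_{\lambda,\gamma}(t) =
  \begin{cases}
    \lambda|t| - \dfrac{t^2}{2\gamma}, & |t| \le \gamma\lambda,\\[6pt]
    \dfrac{\gamma\lambda^2}{2},        & |t| > \gamma\lambda,
  \end{cases}
  \qquad
  p_{\lambda,\gamma}(t) = \lambda|t| - \bigl(\lambda|t| - p_{\lambda,\gamma}(t)\bigr),
\]
where both $\lambda|t|$ and the Huber-type function $\lambda|t| - p_{\lambda,\gamma}(t)$ are convex; the SCAD penalty admits an analogous decomposition.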

The Gaussian stochastic process (GaSP) has been widely used as a prior over functions due to its flexibility and tractability in modeling. However, the computational cost of evaluating the likelihood is $O(n^3)$, where $n$ is the number of observed points in the process, as it requires inverting the covariance matrix. This bottleneck prevents the GaSP from being widely used on large-scale data. We propose a general class of nonseparable GaSP models for multiple functional observations with a fast and exact algorithm, in which the computation is linear ($O(n)$) and exact, requiring no approximation to compute the likelihood. We show that the commonly used linear regression and separable models are special cases of the proposed nonseparable GaSP model. In the study of an epigenetic application, the proposed nonseparable GaSP model accurately predicts genome-wide DNA methylation levels and compares favorably to alternative methods such as linear regression, random forests, and localized kriging. The algorithm for fast computation is implemented in the ${\tt FastGaSP}$ R package on CRAN.
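
For reference, the $O(n^3)$ bottleneck mentioned above comes from the standard Gaussian-process log-likelihood (written here for a zero-mean process with covariance matrix $\Sigma$; the notation is generic rather than specific to this paper):
\[
  \log L(\boldsymbol{\theta})
  = -\tfrac{1}{2}\,\mathbf{y}^{T}\Sigma^{-1}\mathbf{y}
    - \tfrac{1}{2}\log\det\Sigma
    - \tfrac{n}{2}\log 2\pi,
\]
where forming $\Sigma^{-1}\mathbf{y}$ and $\log\det\Sigma$ by direct factorization of the $n\times n$ matrix $\Sigma$ costs $O(n^3)$ operations.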

This is a full-length article (draft version) discussing the problem of choosing the number of topics in topic modeling. We propose the idea that Renyi and Tsallis entropy can be used to identify the optimal number of topics in large textual collections. We also report the results of numerical experiments on semantic stability for four topic models, which show that semantic stability plays a very important role in the topic-number problem. The calculation of Renyi and Tsallis entropy is based on a thermodynamic approach.
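
As an illustration of the entropy-based heuristic, the sketch below computes the Renyi entropy of order $\alpha$ for the word distribution induced by a fitted topic model; this is a generic formulation for illustration, and the paper's thermodynamics-based calculation may differ in detail.

```python
# Generic Renyi entropy of a fitted topic model's word distribution; the
# heuristic is to fit models for several topic counts and compare the
# resulting entropy curves to locate an optimal number of topics.
import numpy as np

def renyi_entropy(p, alpha=2.0):
    """Renyi entropy of order alpha for a probability vector p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))               # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# toy example: phi is a (topics x vocabulary) matrix of word probabilities
phi = np.random.default_rng(0).dirichlet(np.full(1000, 0.1), size=20)
joint = phi.flatten() / phi.shape[0]                # uniform topic weights assumed
print(renyi_entropy(joint, alpha=2.0))
```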
