
We explore the mathematical foundations of Recurrent Neural Networks (RNNs) and three fundamental procedures: temporal rescaling, discretization, and linearization. These techniques provide essential tools for characterizing RNN behaviour, enabling insights into temporal dynamics, practical computational implementation, and linear approximations for analysis. We discuss the flexibility in the order in which these procedures are applied, emphasizing their significance in modelling and analyzing RNNs for computational neuroscience and machine learning applications, and we explicitly describe the conditions under which the procedures can be interchanged.
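As a concrete illustration of the three procedures, here is a minimal sketch (not code from the paper) applied to a standard continuous-time RNN of the form $\tau\,\dot h = -h + W\phi(h) + x$; the network size, weights, and step size are placeholder choices.

import numpy as np

rng = np.random.default_rng(0)
N = 50                                        # number of units (illustrative)
W = rng.normal(0, 1 / np.sqrt(N), (N, N))     # recurrent weight matrix
x = np.zeros(N)                               # constant external input
phi = np.tanh                                 # pointwise nonlinearity
tau = 10.0                                    # time constant

# 1) Temporal rescaling: substituting t -> t / tau absorbs the time
#    constant, so one may simulate with tau = 1 and rescale time afterwards.

# 2) Discretization: a forward-Euler step with step size dt gives
#    h[k+1] = h[k] + (dt / tau) * (-h[k] + W @ phi(h[k]) + x).
def euler_step(h, dt):
    return h + (dt / tau) * (-h + W @ phi(h) + x)

h = rng.normal(0, 0.1, N)
for _ in range(5000):
    h = euler_step(h, dt=0.1)                 # relax toward a fixed point

# 3) Linearization: the Jacobian of the vector field at the current state,
#    J = (-I + W @ diag(phi'(h))) / tau, governs the local linear dynamics;
#    its spectrum characterizes stability.
J = (-np.eye(N) + W * (1 - np.tanh(h) ** 2)) / tau
print("max Re(eigenvalue):", np.linalg.eigvals(J).real.max())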

Related Content

Rescaling is a fundamental strategy in class-imbalance learning. When the numbers of positive and negative examples in the training set are unequal, let $m^+$ denote the number of positive examples and $m^-$ the number of negative examples; the classifier's predicted values must then be rescaled before thresholding.
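The passage appears to refer to the standard rescaling (threshold-moving) rule; a hedged reconstruction in the usual notation, where $y \in (0,1)$ is the classifier's predicted probability of the positive class:

\[
  \frac{y'}{1-y'} \;=\; \frac{y}{1-y}\times\frac{m^-}{m^+},
  \qquad \text{predict positive iff}\quad \frac{y'}{1-y'} > 1,
\]

which is equivalent to comparing the raw odds $y/(1-y)$ against the empirical class ratio $m^+/m^-$.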

In the realm of cost-sharing mechanisms, the vulnerability to Sybil strategies -- also known as false-name strategies, where agents create fake identities to manipulate outcomes -- has not yet been studied. In this paper, we delve into the details of different cost-sharing mechanisms proposed in the literature, highlighting their non-Sybil-resistant nature. Furthermore, we prove that a Sybil-proof cost-sharing mechanism for public excludable goods under mild conditions is at least $(n+1)/2$-approximate. This finding reveals an exponential increase in the worst-case social cost compared to environments in which agents are restricted from using Sybil strategies. To circumvent these negative results, we introduce the concept of \textit{Sybil Welfare Invariant} mechanisms, whose welfare does not decrease under Sybil strategies when agents choose weakly dominant strategies and hold subjective prior beliefs over other players' actions. Finally, we prove that the Shapley value mechanism for symmetric and submodular cost functions satisfies this property, and deduce that the worst-case social cost of this mechanism under equilibria with Sybil strategies is the $n$th harmonic number $\mathcal{H}_n$, matching the worst-case social cost bound for cost-sharing mechanisms. This suggests that any group of agents, each with private valuations, can fund public excludable goods both permissionlessly and anonymously, achieving efficiency comparable to that of permissioned and non-anonymous domains, even when the total number of participants is unknown.
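For intuition, a minimal sketch (under the assumption, standard for public excludable goods, that the Shapley cost share of each of $k$ served agents is $C/k$) of the iterative equal-share mechanism, together with the classic valuation profile that realizes the $\mathcal{H}_n$ worst case; the function names are ours, not the paper's:

def shapley_value_mechanism(valuations, C=1.0):
    """Iteratively drop agents unwilling to pay an equal share C/|S|."""
    served = list(range(len(valuations)))
    while served:
        share = C / len(served)
        staying = [i for i in served if valuations[i] >= share]
        if staying == served:          # stable: every remaining agent accepts
            return served, share
        served = staying
    return [], 0.0                     # nobody is served

# Worst case: v_i slightly below 1/i makes the mechanism shed agents one by
# one and serve no one, while serving everyone would yield welfare ~ H_n.
eps = 1e-9
vals = [1.0 / (i + 1) - eps for i in range(5)]
print(shapley_value_mechanism(vals))   # -> ([], 0.0)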

We adopt the integral definition of the fractional Laplace operator and study an optimal control problem on Lipschitz domains that involves a fractional elliptic partial differential equation (PDE) as the state equation and a control variable that enters the state equation as a coefficient; pointwise constraints on the control variable are considered as well. We establish the existence of optimal solutions and analyze first-order optimality conditions as well as necessary and sufficient second-order optimality conditions. Regularity estimates for optimal variables are also derived. We develop two finite element discretization strategies: a semidiscrete scheme in which the control variable is not discretized, and a fully discrete scheme in which the control variable is discretized with piecewise constant functions. For both schemes, we analyze the convergence properties of the discretizations and derive error estimates.
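A representative formulation consistent with this description (a reconstruction, not quoted from the paper; $u_d$, $\mu$, $a$, $b$ are generic data) is

\[
  \min_{z\in Z_{\mathrm{ad}}}\; \frac12\,\|u - u_d\|_{L^2(\Omega)}^2
  + \frac{\mu}{2}\,\|z\|_{L^2(\Omega)}^2
  \quad\text{subject to}\quad
  (-\Delta)^s u + z\,u = f \ \text{in } \Omega,
  \qquad u = 0 \ \text{in } \Omega^c,
\]

with the admissible set $Z_{\mathrm{ad}} = \{\, z : 0 < a \le z(x) \le b \ \text{a.e. in } \Omega \,\}$ encoding the pointwise control constraints, and $(-\Delta)^s$ understood in the integral sense.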

At STOC 2002, Eiter, Gottlob, and Makino presented a technique called ordered generation that yields an $n^{O(d)}$-delay algorithm listing all minimal transversals of an $n$-vertex hypergraph of degeneracy $d$. Recently, at IWOCA 2019, Conte, Kant\'e, Marino, and Uno asked whether this XP-delay algorithm parameterized by $d$ could be made FPT-delay for a weaker notion of degeneracy, or even parameterized by the maximum degree $\Delta$, i.e., whether it can be turned into an algorithm with delay $f(\Delta)\cdot n^{O(1)}$ for some computable function $f$. As a first step toward answering that question, they noted that they could not achieve these time bounds even for the particular case of minimal dominating set enumeration. In this paper, using ordered generation, we show that an FPT-delay algorithm can be devised for minimal transversal enumeration parameterized by the maximum degree and dimension, giving a positive and more general answer to the latter question.
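To fix the objects being enumerated, here is a naive brute-force enumerator of minimal transversals (minimal hitting sets) of a small hypergraph; it is only a definition in code and makes no attempt at the delay guarantees of ordered generation:

from itertools import combinations

def is_transversal(S, hyperedges):
    return all(S & e for e in hyperedges)     # S meets every hyperedge

def minimal_transversals(vertices, hyperedges):
    found = []
    for r in range(len(vertices) + 1):        # by increasing size, so any
        for S in map(set, combinations(vertices, r)):   # superset of a found
            if is_transversal(S, hyperedges) and \
               not any(t <= S for t in found):          # set is non-minimal
                found.append(S)
    return found

H = [{1, 2}, {2, 3}, {3, 1}]                  # a triangle, as hyperedges
print(minimal_transversals([1, 2, 3], H))     # [{1, 2}, {1, 3}, {2, 3}]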

The linear decomposition attack provides a serious obstacle to direct applications of noncommutative groups and monoids (or semigroups) in cryptography. To overcome this issue, we propose to look at monoids with only big representations, in a sense made precise in the paper, and we undertake a systematic study of such monoids. One of our main tools is Green's theory of cells (Green's relations). A large supply of such monoids is delivered by monoidal categories. We consider simple examples of monoidal categories of diagrammatic origin, including the Temperley-Lieb, Brauer, and partition categories, and discuss lower bounds for their representations.

The Crank-Nicolson (CN) method is a well-known time integrator for evolutionary partial differential equations (PDEs) arising in many real-world applications. Since the solution at any time step depends on the solution at previous time steps, the CN method is inherently difficult to parallelize. In this paper, we consider a parallel method for the solution of evolutionary PDEs with the CN scheme. Using an all-at-once approach, we solve for all time steps simultaneously, using a parallel-in-time preconditioner within a standard iterative method. Exploiting the diagonalizability of the proposed preconditioner, we prove that most eigenvalues of the preconditioned matrix are equal to 1 and that the others lie in the set $\left\{z\in\mathbb{C}: 1/(1 + \alpha) < |z| < 1/(1 - \alpha)~\text{and}~\mathrm{Re}(z) > 0\right\}$, where $0 < \alpha < 1$ is a free parameter. We also describe an efficient implementation of the proposed preconditioner. Under certain conditions, we prove that the preconditioned GMRES method exhibits a mesh-independent convergence rate. Finally, we verify both the theoretical findings and the efficacy of the proposed preconditioner via numerical experiments on financial option pricing PDEs.
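To illustrate the all-at-once structure, here is a small sketch (our construction of the standard block $\alpha$-circulant preconditioner, not necessarily the paper's exact variant) for CN applied to $u' = Au$ with a 1-d Laplacian; the eigenvalue clustering can then be checked numerically:

import numpy as np

m, n, dt, alpha = 8, 16, 0.01, 0.1
A = -2 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)   # 1-d Laplacian stencil
B1 = np.eye(m) - 0.5 * dt * A        # CN block on the diagonal
B2 = -(np.eye(m) + 0.5 * dt * A)     # CN block on the subdiagonal

# All-at-once matrix: block lower-bidiagonal Toeplitz over n time steps.
T = np.kron(np.eye(n), B1) + np.kron(np.eye(n, k=-1), B2)
# Block alpha-circulant preconditioner: copy the subdiagonal block into the
# top-right corner, scaled by alpha.
P = T + alpha * np.kron(np.eye(n, k=n - 1), B2)

eigs = np.linalg.eigvals(np.linalg.solve(P, T))
ones = np.isclose(eigs, 1, atol=1e-6)
others = eigs[~ones]
print("eigenvalues equal to 1:", ones.sum(), "of", m * n)
if others.size:
    print("|z| of the rest in:", abs(others).min(), "..", abs(others).max())
print("abstract's annulus:", 1 / (1 + alpha), "< |z| <", 1 / (1 - alpha))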

We propose a hybrid iterative method for PDEs based on MIONet, which combines a traditional numerical iterative solver with the recent powerful machine learning method of neural operators, and we systematically analyze its theoretical properties, including the convergence condition, the spectral behavior, and the convergence rate, in terms of the errors of the discretization and of the model inference. We establish theoretical results for the frequently used smoothers, i.e., Richardson (damped Jacobi) and Gauss-Seidel. We give an upper bound on the convergence rate of the hybrid method with respect to the model-correction period; this bound exhibits a minimum, indicating the period at which the hybrid iteration converges fastest. Several numerical examples, including the hybrid Richardson (Gauss-Seidel) iteration for the 1-d (2-d) Poisson equation, are presented to verify our theoretical results and demonstrate an excellent acceleration effect. As a meshless acceleration method, it holds enormous potential for practical applications.
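The iteration pattern described here can be sketched as follows (a hedged illustration: model_correct is a hypothetical stand-in for a trained MIONet inference call, replaced below by a crude inexact solve so the snippet runs):

import numpy as np

def hybrid_solve(A, b, model_correct, period=10, iters=200, omega=0.5):
    """Damped-Jacobi smoothing with a model correction every `period` steps."""
    D_inv = 1.0 / np.diag(A)
    u = np.zeros_like(b)
    for k in range(1, iters + 1):
        r = b - A @ u
        if k % period == 0:
            u = u + model_correct(r)        # learned correction step
        else:
            u = u + omega * D_inv * r       # classical smoothing step
    return u

# 1-d Poisson test problem; the "model" here only imitates an inexact
# operator-inference step.
n = 64
A = (n + 1) ** 2 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
b = np.ones(n)
fake_model = lambda r: 0.9 * np.linalg.solve(A, r)
u = hybrid_solve(A, b, fake_model)
print("final residual norm:", np.linalg.norm(b - A @ u))

The trade-off the paper quantifies is visible in this pattern: correcting too rarely wastes smoothing steps on low-frequency error, while correcting too often lets the model's inference error dominate, so an optimal period exists.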

In recent years, dynamical systems (of deterministic and stochastic nature) describing many models in mathematics, physics, engineering, and finance have become more and more complex. Numerical analysis narrowed only to deterministic algorithms seems insufficient for such systems since, for example, the curse of dimensionality affects deterministic methods. We can therefore observe the increasing popularity of Monte Carlo algorithms and, closely related to them, stochastic simulations based on stochastic differential equations. In these lecture notes we present the main ideas of Monte Carlo methods and their theoretical properties. We apply them to problems such as integration and the approximation of solutions of deterministic/stochastic differential equations. We also discuss the implementation of exemplary algorithms in the Python programming language and their application to option pricing. Part of these notes was used during lectures for PhD students at AGH University of Science and Technology, Krakow, Poland, in the summer semesters of 2020, 2021, and 2023.
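In the spirit of the notes' Python examples, here is a self-contained Monte Carlo pricer for a European call under geometric Brownian motion, checked against the Black-Scholes closed form (the parameter values are illustrative, not taken from the notes):

import numpy as np
from math import log, sqrt, exp, erf

S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0
n_paths = 1_000_000

rng = np.random.default_rng(42)
Z = rng.standard_normal(n_paths)
# Exact simulation of GBM at maturity:
#   S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
payoff = np.maximum(ST - K, 0.0)
mc_price = exp(-r * T) * payoff.mean()
mc_stderr = exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)

def bs_call(S0, K, r, sigma, T):
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))    # standard normal CDF
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S0 * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

print(f"MC: {mc_price:.4f} +/- {1.96 * mc_stderr:.4f} (95% CI)")
print(f"Black-Scholes: {bs_call(S0, K, r, sigma, T):.4f}")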

The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of the PDFs are difficult to specify in advance. When the shapes of the PDFs are known, experimental data can be combined with various optimization schemes to estimate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where the experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of the optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least $\approx 10^4$). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, which makes the method useful for various applications.
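For readers unfamiliar with the SSA core that the paper proposes to replace, here is a minimal Gillespie-type direct-method sketch for a birth-death process (rates are illustrative): exponential interevent times drawn from the total propensity, and events selected in proportion to their rates.

import numpy as np

rng = np.random.default_rng(1)
birth, death = 1.0, 0.1          # birth rate; per-capita death rate
x, t, t_end = 10, 0.0, 100.0
history = [(t, x)]

while t < t_end:
    rates = np.array([birth, death * x])     # propensities of the two events
    total = rates.sum()
    if total == 0:
        break
    t += rng.exponential(1.0 / total)        # interevent time ~ Exp(total)
    event = rng.choice(2, p=rates / total)   # choose event proportionally
    x += 1 if event == 0 else -1
    history.append((t, x))

print("final population:", x, " (stationary mean = birth/death =", birth / death, ")")

Estimating quantities such as branching fractions from many such runs is exactly the expensive step that the analytical event-frequency approach avoids.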

This article presents a new algorithm to compute all the roots of two families of polynomials that are of interest for the Mandelbrot set $\mathcal{M}$: the roots of those polynomials are, respectively, the parameters $c\in\mathcal{M}$ associated with periodic critical dynamics for $f_c(z)=z^2+c$ (hyperbolic centers) or with pre-periodic dynamics (Misiurewicz-Thurston parameters). The algorithm is based on the computation of discrete level lines that provide excellent starting points for the Newton method. In practice, we observe that these polynomials can be split in time linear in the degree. This article is paired with a code library \citelib{MLib} that implements this algorithm. Using this library and about 723 000 core-hours on the HPC center \textit{Rom\'eo} (Reims), we have successfully found all hyperbolic centers of period $\leq 41$ and all Misiurewicz-Thurston parameters whose period and pre-period sum to $\leq 35$. Concretely, this task involves splitting a tera-polynomial, i.e. a polynomial of degree $\sim10^{12}$, which is orders of magnitude beyond the previous state of the art. It also involves dealing with the certifiability of our numerical results, an issue that we address in detail, both mathematically and along the production chain. The certified database is available to the scientific community. For the smaller periods that can be represented using only hardware arithmetic (floating point FP80), the implementation of our algorithm can split the corresponding polynomials of degree $\sim10^{9}$ in less than one core-day. We complement these benchmarks with a statistical analysis of the separation of the roots, which confirms that no other polynomial in these families can be split without using higher-precision arithmetic.
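The Newton step itself is cheap because $p_n$ and $p_n'$ satisfy simple recursions; a small sketch follows (the level-line machinery for choosing starting points, which is the heart of the article, is not reproduced here):

# Hyperbolic centers of period n are roots of p_n(c), where
#   p_1(c) = c,  p_{k+1}(c) = p_k(c)^2 + c,  and  p'_{k+1} = 2 p_k p'_k + 1.
def newton_step(c, n):
    p, dp = c, 1.0
    for _ in range(n - 1):
        p, dp = p * p + c, 2 * p * dp + 1
    return c - p / dp

def find_center(c0, n, tol=1e-14, max_iter=100):
    c = c0
    for _ in range(max_iter):
        c_next = newton_step(c, n)
        if abs(c_next - c) < tol:
            return c_next
        c = c_next
    return c

# From a starting point near -1.8, Newton converges to the real period-3
# center c ~ -1.754878.
print(find_center(-1.8 + 0j, 3))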

Theoretical background is provided towards the mathematical foundation of the minimum enclosing ball problem. This problem concerns the determination of the unique spherical surface of smallest radius enclosing a given bounded set in d-dimensional Euclidean space. The study of several problems that are similar or related to the minimum enclosing ball problem has received considerable impetus from the large number of applications of these problems in various fields of science and technology. The proposed theoretical framework is based on several enclosing (covering) and partitioning (clustering) theorems and provides, among other results, bounds and relations between the circumradius, inradius, diameter, and width of a set. These enclosing and partitioning theorems are cornerstones in the field that strongly influence developments and generalizations to other spaces and non-Euclidean geometries.
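As a computational companion to the theory, here is a sketch of the classical Badoiu-Clarkson iteration, a simple first-order scheme that approximates the minimum enclosing ball in any dimension (a standard algorithm from the literature, not a construction from the article):

import numpy as np

def approx_miniball(points, n_iter=1000):
    """Badoiu-Clarkson: repeatedly step toward the current farthest point."""
    c = points.mean(axis=0)                        # initialize at the centroid
    for k in range(1, n_iter + 1):
        far = points[np.argmax(np.linalg.norm(points - c, axis=1))]
        c = c + (far - c) / (k + 1)                # shrinking steps drive c
    r = np.linalg.norm(points - c, axis=1).max()   # toward the true center
    return c, r

rng = np.random.default_rng(7)
pts = rng.normal(size=(200, 3))
c, r = approx_miniball(pts)
print("center:", np.round(c, 3), " radius:", round(r, 3))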
