
In this manuscript, we study the stability of the origin for the multivariate geometric Brownian motion. More precisely, under suitable sufficient conditions, we construct a Lyapunov function such that the origin of the multivariate geometric Brownian motion is globally asymptotically stable in probability. Moreover, we show that these conditions can be recast as a Bilinear Matrix Inequality (BMI) feasibility problem. We stress that no commutativity relations between the drift matrix and the noise dispersion matrices are assumed; consequently, the so-called Magnus representation of the solution of the multivariate geometric Brownian motion is complicated. In addition, we illustrate our method on numerous specific models from the literature, such as random linear oscillators, satellite dynamics, inertia systems, diagonal-noise systems, cancer self-remission, and smoking.
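For orientation, a minimal version of the Lyapunov argument can be sketched as follows (the notation and the quadratic ansatz are ours, not necessarily the construction used in the manuscript). Writing the multivariate geometric Brownian motion as
\[
\mathrm{d}X_t = A X_t\,\mathrm{d}t + \sum_{k=1}^{m} B_k X_t\,\mathrm{d}W_t^{(k)},
\]
the quadratic candidate $V(x) = x^\top P x$ with $P \succ 0$ has generator
\[
\mathcal{L}V(x) = x^\top\Big(A^\top P + P A + \sum_{k=1}^{m} B_k^\top P B_k\Big)x,
\]
so negative definiteness of the bracketed matrix already yields asymptotic stability in probability of the origin; the sufficient conditions of the manuscript are more general and lead to a BMI rather than this linear matrix inequality.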

Related content

In this work, we analyze the convergence rate of randomized quasi-Monte Carlo (RQMC) methods under Owen's boundary growth condition [Owen, 2006] via spectral analysis. Specifically, we examine the RQMC estimator variance for the two commonly studied sequences: the lattice rule and the Sobol' sequence, applying the Fourier transform and Walsh--Fourier transform, respectively, for this analysis. Assuming certain regularity conditions, our findings reveal that the asymptotic convergence rate of the RQMC estimator's variance closely aligns with the exponent specified in Owen's boundary growth condition for both sequence types. We also provide guidance on choosing the importance sampling density to minimize RQMC estimator variance.
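As a purely illustrative companion to the analysis, the following sketch estimates the RQMC variance empirically with independently scrambled Sobol' points from scipy.stats.qmc; the integrand, sample sizes, and function names are our own and are not taken from the paper.

```python
# Hedged sketch: empirical RQMC variance via independently scrambled Sobol' points.
import numpy as np
from scipy.stats import qmc

def rqmc_variance(g, dim, m=10, n_reps=30, seed=0):
    """Estimate the variance of the RQMC estimator built from 2^m scrambled
    Sobol' points, using n_reps independent randomizations."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_reps):
        sampler = qmc.Sobol(d=dim, scramble=True, seed=rng)
        u = sampler.random_base2(m=m)      # 2^m points in [0, 1)^dim
        estimates.append(np.mean(g(u)))    # one RQMC estimate per randomization
    return np.var(estimates, ddof=1)

# Example: an integrand with a mild boundary singularity, g(u) = u_1^{-0.3}.
print(rqmc_variance(lambda u: u[:, 0] ** -0.3, dim=2))
```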

In the realm of cost-sharing mechanisms, the vulnerability to Sybil strategies -- also known as false-name strategies, where agents create fake identities to manipulate outcomes -- has not yet been studied. In this paper, we delve into the details of different cost-sharing mechanisms proposed in the literature, highlighting their non-Sybil-resistant nature. Furthermore, we prove that no deterministic, anonymous, truthful, Sybil-proof, upper semicontinuous, and individually rational cost-sharing mechanism for public excludable goods is better than $\Omega(n)$-approximate. This finding reveals an exponential increase in the worst-case social cost relative to environments where agents are restricted from using Sybil strategies. To circumvent these negative results, we introduce the concept of \textit{Sybil Welfare Invariant} mechanisms, in which a mechanism does not decrease its welfare under Sybil strategies when agents choose weakly dominant strategies and have subjective prior beliefs over other players' actions. Finally, we prove that the Shapley value mechanism for symmetric and submodular cost functions satisfies this property, and we deduce that the worst-case social cost of this mechanism under equilibrium with Sybil strategies is the $n$th harmonic number $\mathcal H_n$, matching the worst-case social cost bound for cost-sharing mechanisms. This suggests that any group of agents, each with private valuations, can fund public excludable goods both permissionlessly and anonymously, achieving efficiency comparable to that of permissioned and non-anonymous domains, even when the total number of participants is unknown.
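For concreteness, here is a minimal sketch of one standard instantiation of the Shapley value (Moulin-style) mechanism for a public excludable good with unit cost, where the Shapley cost share of a served coalition of size $k$ is $1/k$; the code, the names, and the toy valuations are illustrative and assume reported valuations rather than the paper's exact setting.

```python
# Hedged sketch of the Shapley-value (Moulin) mechanism for a public excludable
# good with unit cost: cost(S) = 1 for every nonempty S, equal shares 1/|S|.
def shapley_mechanism(valuations, cost=1.0):
    """Iteratively drop agents whose reported value is below the equal cost
    share cost/|S|; return the served set and the per-agent payment."""
    served = set(valuations)
    while served:
        share = cost / len(served)
        drop = {i for i in served if valuations[i] < share}
        if not drop:
            return served, share
        served -= drop
    return set(), 0.0

# Example with three agents: the surviving coalition covers the cost equally.
print(shapley_mechanism({"a": 0.6, "b": 0.55, "c": 0.1}))
# -> ({'a', 'b'}, 0.5)
```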

We revisit the problem of the existence of the maximum likelihood estimate for multi-class logistic regression. We show that one method of ensuring its existence is by assigning positive probability to every class in the sample dataset. The notion of data separability is not needed, which is in contrast to the classical setup of multi-class logistic regression in which each data sample belongs to one class. We also provide a general and constructive estimate of the convergence rate to the maximum likelihood estimate when gradient descent is used as the optimizer. Our estimate involves bounding the condition number of the Hessian of the maximum likelihood function. The approaches used in this article rely on a simple operator-theoretic framework.
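A minimal sketch of the setting, assuming soft targets that assign positive probability to every class and plain gradient descent on the cross-entropy; this is our own toy code, not the article's framework or rate analysis.

```python
# Hedged sketch: multi-class logistic regression with soft labels (every class
# receives positive probability), trained by gradient descent on cross-entropy.
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)   # shift for numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def fit_logreg(X, Y, lr=0.1, iters=5000):
    """X: (n, d) features; Y: (n, K) strictly positive row-stochastic targets."""
    n, d = X.shape
    K = Y.shape[1]
    W = np.zeros((d, K))
    for _ in range(iters):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / n        # gradient of the mean cross-entropy
    return W

# Toy data with soft labels bounded away from 0 and 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Y = softmax(X @ rng.normal(size=(3, 4)) + 0.5)
W = fit_logreg(X, Y)
```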

In this paper, we study the Boltzmann equation with uncertainties and prove that the spectral convergence of the semi-discretized numerical system holds in a combined velocity and random space, where the Fourier-spectral method is applied for approximation in the velocity space whereas the generalized polynomial chaos (gPC)-based stochastic Galerkin (SG) method is employed to discretize the random variable. Our proof is based on a delicate energy estimate for showing the well-posedness of the numerical solution as well as a rigorous control of its negative part in our well-designed functional space that involves high-order derivatives of both the velocity and random variables. This paper rigorously justifies the statement proposed in [Remark 4.4, J. Hu and S. Jin, J. Comput. Phys., 315 (2016), pp. 150-168].
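Schematically, the combined semi-discretization takes the form (notation ours, not the paper's)
\[
f(t,v,z)\;\approx\;\sum_{|\mathbf{k}|\le N}\ \sum_{m=0}^{M}\widehat{f}_{\mathbf{k},m}(t)\,e^{\,i\,\mathbf{k}\cdot v}\,\Phi_m(z),
\]
where $\{\Phi_m\}$ are gPC basis polynomials orthonormal with respect to the distribution of the random variable $z$; inserting this ansatz into the equation and projecting onto each basis function in the Galerkin sense yields a coupled system of ODEs for the coefficients $\widehat{f}_{\mathbf{k},m}$, whose spectral accuracy in both the velocity and random variables is the object of the proof.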

The subject of graph convexity is well explored in the literature, above all the so-called interval convexities. In this work, we explore the cycle convexity, an interval convexity whose interval function is $I(S) = S \cup \{u \mid G[S \cup \{u\}]$ has a cycle containing $u\}$. In this convexity, we prove that determining whether the convexity number of a graph $G$ is at least $k$ is \NP-complete and \W[1]-hard when parameterized by the size of the solution when $G$ is a thick spider, but polynomial when $G$ is an extended $P_4$-laden graph. We also prove that determining whether the percolation time of a graph is at least $k$ is \NP-complete even for fixed $k \geq 9$, but polynomial for cacti or for fixed $k\leq2$.
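A small illustrative sketch of the interval function of the cycle convexity, using networkx and the standard fact that a vertex lies on a cycle if and only if it belongs to a biconnected block with at least three vertices; the function names and the example are ours.

```python
# Hedged sketch of the cycle-convexity interval function I(S): a vertex u is
# added when the subgraph induced by S + {u} has a cycle passing through u.
import networkx as nx

def on_a_cycle(H, u):
    """u lies on a cycle of H iff it is in a biconnected block with >= 3 vertices."""
    return any(u in block and len(block) >= 3
               for block in nx.biconnected_components(H))

def interval(G, S):
    S = set(S)
    return S | {u for u in G if u not in S
                and on_a_cycle(G.subgraph(S | {u}), u)}

# Example on a 4-cycle: S = {0, 1, 2} pulls in vertex 3, which closes the cycle.
C4 = nx.cycle_graph(4)
print(interval(C4, {0, 1, 2}))   # {0, 1, 2, 3}
```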

We continue our investigation of viscoelasticity by extending the Holzapfel-Simo approach discussed in Part I to the fully nonlinear regime. By scrutinizing the relaxation property for the non-equilibrium stresses, it is revealed that a kinematic assumption akin to the Green-Naghdi type is necessary in the design of the potential. This insight underscores a link between the so-called additive plasticity and the viscoelasticity model under consideration, further inspiring our development of a nonlinear viscoelasticity theory. Our strategy is based on Hill's hyperelasticity framework and leverages the concept of generalized strains. Notably, the adopted kinematic assumption makes the proposed theory fundamentally different from the existing models rooted in the notion of the intermediate configuration. The computational aspects, including the consistent linearization, constitutive integration, and modular implementation, are addressed in detail. A suite of numerical examples is provided to demonstrate the capability of the proposed model in characterizing viscoelastic material behaviors at large strains.
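For orientation, Hill's framework is built on the Seth--Hill family of generalized strains (notation ours),
\[
\mathbf{E}^{(m)} \;=\; \frac{1}{2m}\big(\mathbf{U}^{2m}-\mathbf{I}\big)\quad (m\neq 0),\qquad \mathbf{E}^{(0)}=\ln\mathbf{U},
\]
with $\mathbf{U}$ the right stretch tensor; the choices $m=1$ and $m=1/2$ recover the Green--Lagrange and Biot strains, respectively. The specific generalized strains and kinematic assumptions adopted in the proposed theory are those detailed in the paper itself.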

In the context of the stream calculus, we present an Implicit Function Theorem (IFT) for polynomial systems, and discuss its relations with the classical IFT from calculus. In particular, we demonstrate the advantages of the stream IFT from a computational point of view, and provide a few example applications where its use turns out to be valuable.

In this work, we study and extend a class of semi-Lagrangian exponential methods, which combine exponential time integration techniques, suitable for integrating stiff linear terms, with a semi-Lagrangian treatment of nonlinear advection terms. Partial differential equations involving both processes arise, for instance, in atmospheric circulation models. Through a truncation error analysis, we first show that previously formulated semi-Lagrangian exponential schemes are limited to first-order accuracy due to the discretization of the linear term; we then formulate a new discretization leading to a second-order accurate method. Also, a detailed stability study, considering both a linear stability analysis and an empirical, simulation-based one, is conducted to compare several Eulerian and semi-Lagrangian exponential schemes, as well as a well-established semi-Lagrangian semi-implicit method, which is used in operational atmospheric models. Numerical simulations of the shallow-water equations on the rotating sphere, considering standard and challenging benchmark test cases, are performed to assess the orders of convergence, stability properties, and computational cost of each method. The proposed second-order semi-Lagrangian exponential method is shown to be more stable and accurate than the previously formulated schemes of the same class at the expense of larger wall-clock times; however, the method is more stable and has a similar cost compared to the well-established semi-Lagrangian semi-implicit method; therefore, it is a competitive candidate for potential operational applications in atmospheric circulation modeling.
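As a reference point for the exponential treatment of the stiff linear term, recall the classical first-order exponential (ETD1) step for $\partial_t u = \mathsf{L}u + \mathsf{N}(u)$,
\[
u^{n+1} \;=\; e^{\Delta t\,\mathsf{L}}\,u^{n} \;+\; \Delta t\,\varphi_1(\Delta t\,\mathsf{L})\,\mathsf{N}(u^{n}),\qquad \varphi_1(z)=\frac{e^{z}-1}{z};
\]
the semi-Lagrangian variants studied here instead evaluate the advected quantities at departure points, and the proposed second-order scheme modifies how the linear term is discretized along the trajectories. The formula above is only the standard Eulerian building block, not the authors' scheme.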

We present an approach for the efficient implementation of self-adjusting multi-rate Runge-Kutta methods and we extend the previously available stability analyses of these methods to the case of an arbitrary number of sub-steps for the active components. We propose a physically motivated model problem that can be used to assess the stability of different multi-rate versions of standard Runge-Kutta methods and the impact of different interpolation methods for the latent variables. Finally, we present the results of several numerical experiments, performed with implementations of the proposed methods in the framework of the \textit{OpenModelica} open-source modelling and simulation software, which demonstrate the efficiency gains deriving from the use of the proposed multi-rate approach for physical modelling problems with multiple time scales.
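To make the multi-rate idea concrete, here is a toy, fixed two-rate forward-Euler sketch (not the self-adjusting multi-rate Runge-Kutta methods of the paper): the fast, "active" variable takes several micro-steps per macro step, while the slow, "latent" variable is advanced once per macro step and linearly interpolated in between. The model, step sizes, and names are illustrative.

```python
# Hedged sketch of a fixed two-rate forward-Euler scheme on a toy fast/slow system:
#   slow (latent):  s' = -s + f     advanced with macro step H
#   fast (active):  f' = -50 (f - s)  advanced with M micro steps of size H/M,
#                                      using linear interpolation of the latent s.
def multirate_euler(T=2.0, H=0.05, M=10, s0=1.0, f0=0.0):
    s, f = s0, f0
    for _ in range(round(T / H)):
        s_new = s + H * (-s + f)                       # latent: one macro step
        h = H / M
        for m in range(M):                             # active: M micro steps
            s_interp = s + (m * h / H) * (s_new - s)   # interpolated latent value
            f = f + h * (-50.0 * (f - s_interp))
        s = s_new
    return s, f

print(multirate_euler())
```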

Adversarial robustness and generalization are both crucial properties of reliable machine learning models. In this letter, we study these properties in the context of quantum machine learning based on Lipschitz bounds. We derive parameter-dependent Lipschitz bounds for quantum models with trainable encoding, showing that the norm of the data encoding has a crucial impact on the robustness against data perturbations. Further, we derive a bound on the generalization error which explicitly involves the parameters of the data encoding. Our theoretical findings give rise to a practical strategy for training robust and generalizable quantum models by regularizing the Lipschitz bound in the cost. Further, we show that, for fixed and non-trainable encodings, such as those frequently employed in quantum machine learning, the Lipschitz bound cannot be influenced by tuning the parameters. Thus, trainable encodings are crucial for systematically adapting robustness and generalization during training. The practical implications of our theoretical findings are illustrated with numerical results.
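Schematically, the proposed training strategy can be read as a regularized problem of the form (notation ours; the precise parameter-dependent bound $\hat{L}(\theta)$ is derived in the letter)
\[
\min_{\theta}\;\; \frac{1}{N}\sum_{i=1}^{N}\ell\big(f_{\theta}(x_i),y_i\big)\;+\;\lambda\,\hat{L}(\theta),
\]
where $\hat{L}(\theta)$ is a Lipschitz bound of the quantum model $f_{\theta}$ with respect to its input, which for trainable encodings depends on the norms of the encoding parameters; for fixed encodings $\hat{L}$ is constant in the trainable parameters, which is why regularization cannot influence it.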
