
In this work, we demonstrate that the Bochner integral representation of the Algebraic Riccati Equation (ARE) is well-posed without any compactness assumptions on the coefficient and semigroup operators. From this result, we then determine that, under some assumptions, the solutions of the Galerkin approximations to these equations converge to the infinite-dimensional solution. Going further, we apply this general result to demonstrate that the finite element approximation to the ARE is optimal for weakly damped wave semigroup processes in the $H^1(\Omega) \times L^2(\Omega)$ norm. Optimal convergence rates of the functional gain for a weakly damped wave optimal control system in both the $H^1(\Omega) \times L^2(\Omega)$ and $L^2(\Omega)\times L^2(\Omega)$ norms are demonstrated in the numerical examples.
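The infinite-dimensional well-posedness is the paper's contribution, but the finite-dimensional AREs that a Galerkin discretization produces can be solved by the classical Hamiltonian invariant-subspace method. The sketch below illustrates this on a hypothetical double-integrator example with assumed weights $Q = I$, $R = 1$ (an illustration only, not the paper's wave problem or method):

```python
import numpy as np

# Assumed toy system: double integrator with unit LQR weights.
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
Q = np.eye(2)
R = np.array([[1.]])

# Hamiltonian matrix whose stable invariant subspace encodes the ARE solution.
Rinv = np.linalg.inv(R)
H = np.block([[A, -B @ Rinv @ B.T],
              [-Q, -A.T]])
w, V = np.linalg.eig(H)
stable = V[:, w.real < 0]              # eigenvectors of the stable eigenvalues
U1, U2 = stable[:2, :], stable[2:, :]
X = np.real(U2 @ np.linalg.inv(U1))    # ARE solution X = U2 U1^{-1}

# Residual of A^T X + X A - X B R^{-1} B^T X + Q = 0 should vanish.
residual = A.T @ X + X @ A - X @ B @ Rinv @ B.T @ X + Q
```

For this example the exact solution is $X = \begin{pmatrix}\sqrt{3} & 1\\ 1 & \sqrt{3}\end{pmatrix}$, which the residual check confirms.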


We propose and analyze an augmented mixed finite element method for the pseudostress-velocity formulation of the stationary convective Brinkman-Forchheimer problem in $\mathrm{R}^d$, $d\in \{2,3\}$. Since the convective and Forchheimer terms force the velocity to live in a smaller space than usual, we augment the variational formulation with suitable Galerkin-type terms. The resulting augmented scheme is written equivalently as a fixed point equation, so that the well-known Schauder and Banach theorems, combined with the Lax-Milgram theorem, allow us to prove the unique solvability of the continuous problem. The finite element discretization involves Raviart-Thomas spaces of order $k\geq 0$ for the pseudostress tensor and continuous piecewise polynomials of degree $\le k + 1$ for the velocity. Stability, convergence, and a priori error estimates for the associated Galerkin scheme are obtained. In addition, we derive two reliable and efficient residual-based a posteriori error estimators for this problem on arbitrary polygonal and polyhedral regions. The reliability of the proposed estimators draws mainly upon the uniform ellipticity of the form involved, a suitable assumption on the data, a stable Helmholtz decomposition, and the local approximation properties of the Clément and Raviart-Thomas operators. In turn, inverse inequalities, the localization technique based on bubble functions, and known results from previous works are the main tools yielding the efficiency estimate. Finally, some numerical examples illustrating the performance of the mixed finite element method, confirming the theoretical rate of convergence and the properties of the estimators, and showing the behaviour of the associated adaptive algorithms, are reported. In particular, the case of flow through a 2D porous medium with fracture networks is considered.

Hyperuniformity is the study of stationary point processes with a sub-Poisson variance in a large window. In other words, counting the points of a hyperuniform point process that fall in a given large region yields a small-variance Monte Carlo estimation of the volume. Hyperuniform point processes have received a lot of attention in statistical physics, both for the investigation of natural organized structures and the synthesis of materials. Unfortunately, rigorously proving that a point process is hyperuniform is usually difficult. A common practice in statistical physics and chemistry is to use a few samples to estimate a spectral measure called the structure factor, whose decay around zero provides a diagnostic of hyperuniformity. However, different applied fields use different estimators, and important algorithmic choices proceed from each field's lore. This paper provides a systematic survey and derivation of known or otherwise natural estimators of the structure factor. We also leverage the consistency of these estimators to contribute the first asymptotically valid statistical test of hyperuniformity. We benchmark all estimators and hyperuniformity diagnostics on a set of examples. In an effort to make investigations of the structure factor and hyperuniformity systematic and reproducible, we further provide the Python toolbox structure_factor, containing all the estimators and tools that we discuss.
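As a rough illustration of the simplest estimator in this family, the scattering-intensity estimator evaluates $S(k) \approx |\sum_j e^{\mathrm{i} k\cdot x_j}|^2 / N$ at wavevectors compatible with the observation box; for a (non-hyperuniform) Poisson-like sample it fluctuates around 1, while hyperuniform processes have $S(k)\to 0$ as $k\to 0$. A minimal sketch follows (the box side, wavevector grid, and binomial point sample are all assumptions for the toy run; the authors' structure_factor toolbox provides the vetted implementations):

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 10.0, 1000
points = rng.uniform(0, L, size=(N, 2))   # Poisson-like binomial sample in a box

# Wavevectors allowed by a periodic box of side L: k = 2*pi*n / L
ns = np.array([(n1, n2) for n1 in range(1, 11) for n2 in range(1, 11)])
ks = 2 * np.pi * ns / L

def scattering_intensity(ks, pts):
    """Empirical structure factor |sum_j exp(i k.x_j)|^2 / N at each k."""
    phases = ks @ pts.T                    # (n_k, N) array of k . x_j
    return np.abs(np.exp(1j * phases).sum(axis=1))**2 / len(pts)

S = scattering_intensity(ks, points)       # fluctuates around 1 for this sample
```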

Stochastic algebraic Riccati equations, also known as rational algebraic Riccati equations, arise in linear-quadratic optimal control for stochastic linear time-invariant systems and have been considered difficult to solve. State-of-the-art numerical methods mostly rely on differentiability or continuity, such as Newton-type, LMI, or homotopy methods. In this paper, we build a novel theoretical framework and reveal the intrinsic algebraic structure of this kind of algebraic Riccati equation. This structure guarantees that solving them is almost as easy as solving their deterministic/classical counterparts, which sheds light on both the theoretical analysis and the design of numerical algorithms for this topic.

The Dean-Kawasaki equation - one of the most fundamental SPDEs of fluctuating hydrodynamics - has been proposed as a model for density fluctuations in weakly interacting particle systems. In its original form it is highly singular and fails to be renormalizable even by approaches such as regularity structures and paracontrolled distributions, hindering mathematical approaches to its rigorous justification. It has been understood recently that it is natural to introduce a suitable regularization, e.g., by applying a formal spatial discretization or by truncating high-frequency noise. In the present work, we prove that a regularization in the form of a formal discretization of the Dean-Kawasaki equation indeed accurately describes density fluctuations in systems of weakly interacting diffusing particles: We show that in suitable weak metrics, the law of fluctuations as predicted by the discretized Dean-Kawasaki SPDE approximates the law of fluctuations of the original particle system, up to an error that is of arbitrarily high order in the inverse particle number and a discretization error. In particular, the Dean-Kawasaki equation provides a means for efficient and accurate simulations of density fluctuations in weakly interacting particle systems.
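To convey the flavor of such a regularization, the following minimal 1D sketch evolves a formally discretized Dean-Kawasaki equation on a periodic grid. The grid size, time step, placement of the noise on cell edges, and the clamping inside the square root are all assumptions for this toy run, not the paper's scheme; by construction, the divergence-form update conserves total mass exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 64, 1e4                 # M grid cells; N particles sets noise strength 1/sqrt(N)
h, dt, steps = 1.0 / 64, 1e-5, 200
rho = np.ones(M)               # uniform initial density profile

for _ in range(steps):
    # deterministic part: discrete Laplacian of the density (periodic)
    lap = (np.roll(rho, -1) - 2 * rho + np.roll(rho, 1)) / h**2
    # stochastic part: edge fluxes ~ sqrt(rho / N) * white noise (sqrt clamped at 0)
    xi = rng.standard_normal(M)
    flux = np.sqrt(np.maximum(rho, 0.0) / N) * xi
    div = (flux - np.roll(flux, 1)) / h
    rho = rho + dt * 0.5 * lap + np.sqrt(dt) * div
```

Because both the Laplacian and the noise enter in divergence form on a periodic grid, the total mass `rho.sum()` is preserved up to floating-point rounding throughout the run.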

We study the problem of planning restless multi-armed bandits (RMABs) with multiple actions. This is a popular model for multi-agent systems with applications like multi-channel communication, monitoring and machine maintenance tasks, and healthcare. Whittle index policies, which are based on Lagrangian relaxations, are widely used in these settings due to their simplicity and near-optimality under certain conditions. In this work, we first show that Whittle index policies can fail in simple and practically relevant RMAB settings, even when the RMABs are indexable. We discuss why the optimality guarantees fail and why asymptotic optimality may not translate well to practically relevant planning horizons. We then propose an alternate planning algorithm based on the mean-field method, which can provably and efficiently obtain near-optimal policies with a large number of arms, without the stringent structural assumptions required by the Whittle index policies. This borrows ideas from existing research with some improvements: our approach is hyper-parameter free, and we provide an improved non-asymptotic analysis which has: (a) no requirement for exogenous hyper-parameters and tighter polynomial dependence on known problem parameters; (b) high probability bounds which show that the reward of the policy is reliable; and (c) matching sub-optimality lower bounds for this algorithm with respect to the number of arms, thus demonstrating the tightness of our bounds. Our extensive experimental analysis shows that the mean-field approach matches or outperforms other baselines.

We adopt an information-theoretic framework to analyze the generalization behavior of the class of iterative, noisy learning algorithms. This class is particularly suitable for study under information-theoretic metrics as the algorithms are inherently randomized, and it includes commonly used algorithms such as Stochastic Gradient Langevin Dynamics (SGLD). Herein, we use the maximal leakage (equivalently, the Sibson mutual information of order infinity) metric, as it is simple to analyze and implies bounds both on the probability of a large generalization error and on its expected value. We show that, if the update function (e.g., gradient) is bounded in $L_2$-norm, then adding isotropic Gaussian noise leads to optimal generalization bounds: indeed, the input and output of the learning algorithm in this case are asymptotically statistically independent. Furthermore, we demonstrate how the assumptions on the update function affect the optimal (in the sense of minimizing the induced maximal leakage) choice of the noise. Finally, we compute explicit tight upper bounds on the induced maximal leakage for several scenarios of interest.
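For reference, for finite alphabets the maximal leakage from $X$ to $Y$ admits the standard closed form

\[ \mathcal{L}(X \to Y) \;=\; \log \sum_{y} \max_{x \,:\, P_X(x) > 0} P_{Y|X}(y \mid x), \]

which coincides with the Sibson mutual information of order infinity and depends on the marginal $P_X$ only through its support.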

Hyperbolic curvature flow is a geometric evolution equation that in the plane can be viewed as the natural hyperbolic analogue of curve shortening flow. It was proposed by Gurtin and Podio-Guidugli (1991) to model certain wave phenomena in solid-liquid interfaces. We introduce a semidiscrete finite difference method for the approximation of hyperbolic curvature flow and prove error bounds for natural discrete norms. We also present numerical simulations, including the onset of singularities starting from smooth strictly convex initial data.

Research on estimation-of-distribution algorithms (EDAs) has apparently concentrated on pseudo-Boolean optimization and permutation problems; we undertake the first steps towards using EDAs for problems in which the decision variables can take more than two values but which are not permutation problems. To this aim, we propose a natural way to extend the known univariate EDAs to such variables. Different from a naive reduction to the binary case, it avoids additional constraints. Since understanding genetic drift is crucial for an optimal parameter choice, we extend the known quantitative analysis of genetic drift to EDAs for multi-valued variables. Roughly speaking, when the variables take $r$ different values, the time for genetic drift to become significant is $r$ times shorter than in the binary case. Consequently, the update strength of the probabilistic model has to be chosen $r$ times lower. To investigate how desired model updates take place in this framework, we undertake a mathematical runtime analysis on the $r$-valued LeadingOnes problem. We prove that with the right parameters, the multi-valued UMDA solves this problem efficiently in $O(r\log(r)^2 n^2 \log(n))$ function evaluations. Overall, our work shows that EDAs can be adjusted to multi-valued problems, and it gives advice on how to set the main parameters.
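As an illustration of the model just described, the sketch below runs a UMDA-style univariate EDA with one categorical distribution per variable on a small $r$-valued LeadingOnes instance. The population sizes, the border value $1/(rn)$, and the choice of 0 as the target value are assumptions for this toy run, not the parameter settings analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 10, 3                     # n variables, each taking values in {0, ..., r-1}
lam, mu = 60, 15                 # offspring population size and selection size
border = 1.0 / (r * n)           # margin keeping every value sampleable (assumed)

p = np.full((n, r), 1.0 / r)     # univariate model: one categorical per variable
best = 0
for _ in range(500):
    # sample lam individuals from the product of categoricals via inverse CDF
    u = rng.random((lam, n, 1))
    pop = (u > np.cumsum(p, axis=1)[None, :, :]).sum(axis=2)
    pop = np.minimum(pop, r - 1)            # guard against cumsum round-off
    # r-valued LeadingOnes: number of leading entries equal to the target value 0
    fits = np.cumprod(pop == 0, axis=1).sum(axis=1)
    best = max(best, int(fits.max()))
    if best == n:
        break
    elite = pop[np.argsort(fits)[-mu:]]     # keep the mu best individuals
    # model update: empirical marginals of the elites, clamped to the borders
    for i in range(n):
        freq = np.bincount(elite[:, i], minlength=r) / mu
        p[i] = np.clip(freq, border, 1 - border)
        p[i] /= p[i].sum()                  # renormalize after clamping
```

With these settings the toy instance is solved well within the evaluation budget suggested by the paper's $O(r\log(r)^2 n^2 \log(n))$ bound.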

Many problems arising in control require the determination of a mathematical model of the application. This often has to be performed starting from input-output data, a task known as system identification in the engineering literature. One emerging topic in this field is the estimation of networks consisting of several interconnected dynamic systems. We consider the linear setting, assuming that system outputs are the result of many correlated inputs, which makes system identification severely ill-conditioned. This scenario is often encountered when modeling complex cybernetic systems composed of many sub-units with feedback and algebraic loops. We develop a strategy cast in a Bayesian regularization framework where each impulse response is seen as the realization of a zero-mean Gaussian process. Each covariance is defined by the so-called stable spline kernel, which encodes information on smooth exponential decay. We design a novel Markov chain Monte Carlo scheme able to reconstruct the posterior of the impulse responses by efficiently dealing with collinearity. Our scheme relies on a variation of the Gibbs sampling technique: beyond considering blocks forming a partition of the parameter space, some other (overlapping) blocks are also updated on the basis of the level of collinearity of the system inputs. Theoretical properties of the algorithm are studied and its convergence rate is obtained. Numerical experiments are included, using systems containing hundreds of impulse responses and highly correlated inputs.
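The prior itself is simple to write down: in its first-order (TC) form the stable spline kernel is $K(s,t) = c\,\alpha^{\max(s,t)}$ with $0<\alpha<1$, so impulse-response realizations can be drawn directly from the zero-mean GP. A minimal sketch (the hyperparameter values are assumptions for illustration, not estimated as in the paper):

```python
import numpy as np

def stable_spline_kernel(T, alpha=0.8, scale=1.0):
    """First-order stable spline (TC) kernel on a discrete time grid:
    K(s, t) = scale * alpha**max(s, t), encoding smooth exponential decay."""
    s, t = np.meshgrid(np.arange(1, T + 1), np.arange(1, T + 1), indexing="ij")
    return scale * alpha ** np.maximum(s, t)

K = stable_spline_kernel(50)

# Draw a few impulse-response realizations from the zero-mean GP prior.
rng = np.random.default_rng(3)
L = np.linalg.cholesky(K + 1e-10 * np.eye(50))   # tiny jitter for stability
samples = L @ rng.standard_normal((50, 3))        # three decaying trajectories
```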

Tumor shape is a key factor that affects tumor growth and metastasis. This paper proposes a topological feature computed by persistent homology to characterize tumor progression from digital pathology and radiology images and examines its effect on time-to-event data. The proposed topological features are invariant to scale-preserving transformations and can summarize various tumor shape patterns. The topological features are represented in functional space and used as functional predictors in a functional Cox proportional hazards model. The proposed model enables interpretable inference about the association between topological shape features and survival risks. Two case studies are conducted using 133 consecutive lung cancer patients and 77 consecutive brain tumor patients. The results of both studies show that the topological features predict survival prognosis after adjusting for clinical variables, and that the predicted high-risk groups have worse survival outcomes than the low-risk groups. Moreover, the topological shape features found to be positively associated with survival hazards are irregular and heterogeneous shape patterns, which are known to be related to tumor progression.
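Persistent homology in dimension 0 already illustrates the idea: each connected component of a point cloud is born at filtration value 0 and dies when it merges with another, and the death times are exactly the minimum-spanning-tree edge lengths. The stdlib sketch below computes them with Kruskal's algorithm (the paper's features come from richer image-based filtrations, so this is only a toy illustration of the machinery):

```python
import math
from itertools import combinations

def h0_persistence(points):
    """0-dimensional persistent homology of a point cloud under the
    Vietoris-Rips filtration: returns the sorted finite death times,
    i.e. the edge lengths of a minimum spanning tree (Kruskal)."""
    n = len(points)
    parent = list(range(n))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                  # two components merge: one class dies
            parent[ri] = rj
            deaths.append(d)
    return deaths                     # n-1 deaths; one essential class lives forever
```

For three collinear points at 0, 1, and 3 on a line, `h0_persistence([(0, 0), (1, 0), (3, 0)])` returns `[1.0, 2.0]`: the components merge at the two MST edge lengths.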
