
In this paper, we investigate an optimal control problem governed by parabolic equations with measure-valued controls over time. We establish the well-posedness of the optimal control problem and derive the first-order optimality condition using Clarke's subgradients, revealing a sparsity structure in time for the optimal control. Consequently, these optimal control problems generalize impulse control for evolution equations. To discretize the optimal control problem, we employ a space-time finite element method: the state equation is approximated by piecewise linear and continuous finite elements in space, combined with a Petrov-Galerkin method in time that uses piecewise constant trial functions and piecewise linear and continuous test functions. The control variable is discretized using the variational discretization concept. For the error analysis, we first derive a priori error estimates and stability results for the finite element discretizations of the state and adjoint equations. We then establish weak-* convergence of the control in the space $\mathcal{M}(\bar I_c;L^2(\omega))$, together with a convergence order of $O(h^\frac{1}{2}+\tau^\frac{1}{4})$ for the state.
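For orientation, a prototypical problem of this type reads as follows; the concrete state equation and data below are illustrative rather than the exact setting of the paper:
$$
\min_{u \in \mathcal{M}(\bar I_c;L^2(\omega))}\; \frac{1}{2}\,\|y_u - y_d\|_{L^2(\Omega\times I)}^2 + \alpha\,\|u\|_{\mathcal{M}(\bar I_c;L^2(\omega))},
\qquad
\partial_t y_u - \Delta y_u = \chi_\omega u \ \text{in } \Omega\times I,\quad y_u(\cdot,0) = 0,
$$
where the control $u$ acts as a measure in time with values in $L^2(\omega)$. The measure-norm penalty is what induces the temporal sparsity structure mentioned above: an optimal control concentrates on finitely many time instants, which is precisely why such problems generalize impulse control.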

Related content

In this work, we offer a theoretical analysis of two modern optimization techniques for training large and complex models: (i) adaptive optimization algorithms, such as Adam, and (ii) the model exponential moving average (EMA). Specifically, we demonstrate that a clipped version of Adam with model EMA achieves the optimal convergence rates in various nonconvex optimization settings, both smooth and nonsmooth. Moreover, when the scale varies significantly across different coordinates, we demonstrate that the coordinate-wise adaptivity of Adam is provably advantageous. Notably, unlike previous analyses of Adam, our analysis crucially relies on its core elements -- momentum and discounting factors -- as well as model EMA, motivating their wide application in practice.
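To make the two ingredients concrete, here is a minimal NumPy sketch of Adam with a clipped update and an exponential moving average of the iterates. The clipping rule, hyper-parameters, and function names are our own illustrative choices, not the precise scheme analysed in the paper.

```python
import numpy as np

def clipped_adam_with_ema(grad_fn, x0, steps=1000, lr=1e-3,
                          beta1=0.9, beta2=0.999, eps=1e-8,
                          clip=1.0, ema_decay=0.999):
    """Toy Adam with update clipping plus a model EMA of the iterates."""
    x, x_ema = x0.astype(float).copy(), x0.astype(float).copy()
    m = np.zeros_like(x)   # first moment (momentum)
    v = np.zeros_like(x)   # second moment (per-coordinate scale)
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)                 # bias correction
        v_hat = v / (1 - beta2 ** t)
        step = lr * m_hat / (np.sqrt(v_hat) + eps)   # coordinate-wise adaptive step
        norm = np.linalg.norm(step)
        if norm > clip:                              # one possible clipping rule
            step *= clip / norm
        x = x - step
        x_ema = ema_decay * x_ema + (1 - ema_decay) * x  # model EMA
    return x_ema

# Example: a badly scaled quadratic with noisy gradients, the regime where
# coordinate-wise adaptivity is expected to help.
rng = np.random.default_rng(0)
scales = np.array([1.0, 100.0])
grad = lambda x: scales * x + 0.01 * rng.normal(size=2)
print(clipped_adam_with_ema(grad, np.array([5.0, 5.0]), steps=5000, lr=1e-2))
```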

Regression methods dominate the practice of biostatistical analysis, but biostatistical training emphasises the details of regression models and methods ahead of the purposes for which such modelling might be useful. More broadly, statistics is widely understood to provide a body of techniques for "modelling data", underpinned by what we describe as the "true model myth": that the task of the statistician/data analyst is to build a model that closely approximates the true data generating process. By way of our own historical examples and a brief review of mainstream clinical research journals, we describe how this perspective has led to a range of problems in the application of regression methods, including misguided "adjustment" for covariates, misinterpretation of regression coefficients and the widespread fitting of regression models without a clear purpose. We then outline a new approach to the teaching and application of biostatistical methods, which situates them within a framework that first requires clear definition of the substantive research question at hand within one of three categories: descriptive, predictive, or causal. Within this approach, the simple univariable regression model may be introduced as a tool for description, while the development and application of multivariable regression models as well as other advanced biostatistical methods should proceed differently according to the type of question. Regression methods will no doubt remain central to statistical practice as they provide a powerful tool for representing variation in a response or outcome variable as a function of "input" variables, but their conceptualisation and usage should follow from the purpose at hand.

Through an uncertainty quantification (UQ) perspective, we show that score-based generative models (SGMs) are provably robust to the multiple sources of error in practical implementation. Our primary tool is the Wasserstein uncertainty propagation (WUP) theorem, a model-form UQ bound that describes how the $L^2$ error from learning the score function propagates to a Wasserstein-1 ($\mathbf{d}_1$) ball around the true data distribution under the evolution of the Fokker-Planck equation. We show how errors due to (a) finite sample approximation, (b) early stopping, (c) score-matching objective choice, (d) score function parametrization expressiveness, and (e) reference distribution choice, impact the quality of the generative model in terms of a $\mathbf{d}_1$ bound of computable quantities. The WUP theorem relies on Bernstein estimates for Hamilton-Jacobi-Bellman partial differential equations (PDE) and the regularizing properties of diffusion processes. Specifically, PDE regularity theory shows that stochasticity is the key mechanism ensuring SGM algorithms are provably robust. The WUP theorem applies to integral probability metrics beyond $\mathbf{d}_1$, such as the total variation distance and the maximum mean discrepancy. Sample complexity and generalization bounds in $\mathbf{d}_1$ follow directly from the WUP theorem. Our approach requires minimal assumptions, is agnostic to the manifold hypothesis and avoids absolute continuity assumptions for the target distribution. Additionally, our results clarify the trade-offs among multiple error sources in SGMs.
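For readers unfamiliar with model-form UQ bounds of this kind, the following is only a schematic of their general shape, with generic constants and our own notation rather than the theorem as stated in the paper: if the learned score satisfies
$$
\int_0^T \big\| s_\theta(\cdot,t) - \nabla \log p_t \big\|_{L^2(p_t)}^2 \, dt \;\le\; \varepsilon^2 ,
$$
then the law $\hat p$ of the generated samples obeys a bound of the form
$$
\mathbf{d}_1\big(\hat p,\, p_{\mathrm{data}}\big) \;\le\; C(T)\,\varepsilon \;+\; \big(\text{terms accounting for early stopping and the choice of reference distribution}\big).
$$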

This research conducts a thorough reevaluation of seismic fragility curves by utilizing ordinal regression models, moving away from the commonly used log-normal distribution function known for its simplicity. It explores the nuanced differences and interrelations among various ordinal regression approaches, including Cumulative, Sequential, and Adjacent Category models, alongside their enhanced versions that incorporate category-specific effects and variance heterogeneity. The study applies these methodologies to empirical bridge damage data from the 2008 Wenchuan earthquake, using both frequentist and Bayesian inference methods, and conducts model diagnostics using surrogate residuals. The analysis covers eleven models, from basic to those with heteroscedastic extensions and category-specific effects. Through rigorous leave-one-out cross-validation, the Sequential model with category-specific effects emerges as the most effective. The findings underscore a notable divergence in damage probability predictions between this model and conventional Cumulative probit models, advocating for a substantial transition towards more adaptable fragility curve modeling techniques that enhance the precision of seismic risk assessments. In conclusion, this research not only readdresses the challenge of fitting seismic fragility curves but also advances methodological standards and expands the scope of seismic fragility analysis. It advocates for ongoing innovation and critical reevaluation of conventional methods to advance the predictive accuracy and applicability of seismic fragility models within the performance-based earthquake engineering domain.
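For concreteness, with a probit link $\Phi$, thresholds $\tau_k$, and covariates $x$, the three families can be written in their textbook parameterizations (the exact sign and scale conventions used in the paper may differ):
$$
\text{Cumulative: } P(Y \le k \mid x) = \Phi(\tau_k - x^\top\beta), \qquad
\text{Sequential: } P(Y = k \mid Y \ge k, x) = \Phi(\tau_k - x^\top\beta), \qquad
\text{Adjacent: } P(Y = k \mid Y \in \{k, k+1\}, x) = \Phi(\tau_k - x^\top\beta).
$$
Category-specific effects replace the common coefficient vector $\beta$ by class-dependent vectors $\beta_k$, while the heteroscedastic (variance-heterogeneity) extensions divide the linear predictor by a modelled scale, e.g. $\Phi\big((\tau_k - x^\top\beta)/\exp(z^\top\gamma)\big)$.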

In this paper, we develop a new and effective approach to nonparametric quantile regression that accommodates ultrahigh-dimensional data arising from spatio-temporal processes. This approach proves advantageous in staving off the computational challenges that hinder existing nonparametric quantile regression methods when the number of predictors is much larger than the available sample size. We investigate conditions under which estimation is feasible and of good overall quality, and obtain sharp approximations that we employ in devising statistical inference methodology. The latter includes simultaneous confidence intervals and tests of hypotheses, whose asymptotic justification rests on a non-trivial functional central limit theorem tailored to martingale differences. Additionally, we provide finite-sample results through various simulations which, accompanied by an illustrative application to realistic (real-world-like) data on electricity demand, offer guarantees on the performance of the proposed methodology.
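For reference, the target of nonparametric quantile regression at level $\tau \in (0,1)$ is the minimizer of the expected check loss,
$$
f_\tau = \arg\min_{f} \, \mathbb{E}\,\rho_\tau\big(Y - f(X)\big), \qquad \rho_\tau(u) = u\,\big(\tau - \mathbf{1}\{u < 0\}\big),
$$
so that $f_\tau(x)$ is the conditional $\tau$-quantile of $Y$ given $X = x$; the methodological challenge addressed here is estimating $f_\tau$ when $X$ is an ultrahigh-dimensional spatio-temporal predictor.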

In this paper, we introduce a discretization scheme for the Yang-Mills equations in the two-dimensional case using a framework based on discrete exterior calculus. Within this framework, we define discrete versions of the exterior covariant derivative operator and its adjoint, which capture essential geometric features similar to their continuous counterparts. Our focus is on discrete models defined on a combinatorial torus, where the discrete Yang-Mills equations are presented in the form of both a system of difference equations and a matrix form.
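For context, the continuous equations that the discrete operators are built to mimic are, for a connection 1-form $A$ with curvature $F_A = dA + A \wedge A$ (in matrix notation),
$$
d_A F_A = 0 \quad (\text{Bianchi identity, automatic}), \qquad d_A^{*} F_A = 0 \quad (\text{Yang-Mills equation}),
$$
where $d_A$ is the exterior covariant derivative and $d_A^{*}$ its formal adjoint; the discrete scheme replaces these two operators by their counterparts on the combinatorial torus.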

In this paper, we present a set of private and secure delegated quantum computing protocols and techniques tailored to user-level and industry-level use cases, depending on the computational resources available to the client, the specific privacy requirements, and the type of algorithm. Our protocols are presented at a high level, as they are independent of the particular algorithms used for the underlying encryption and decryption processes. Additionally, we propose a method to verify the correct execution of operations by the external server.
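Since the protocols are stated independently of the concrete cipher, a standard primitive worth keeping in mind is the quantum one-time pad, which underlies many delegated and blind quantum computing schemes; the NumPy sketch below (single-qubit, with our own function names) illustrates only that primitive, not the protocols of the paper.

```python
import numpy as np

# Pauli matrices; the random bits (a, b) act as the classical one-time-pad key.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def encrypt(psi, a, b):
    """Client side: mask the state with X^a Z^b before sending it out."""
    return np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b) @ psi

def decrypt(psi_enc, a, b):
    """Client side: X and Z are self-inverse, so Z^b X^a undoes X^a Z^b."""
    return np.linalg.matrix_power(Z, b) @ np.linalg.matrix_power(X, a) @ psi_enc

rng = np.random.default_rng(1)
a, b = rng.integers(0, 2, size=2)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # the |+> state
assert np.allclose(decrypt(encrypt(psi, a, b), a, b), psi)
```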

In this paper, we describe and analyze the spectral properties of a symmetric positive definite inexact block preconditioner for a class of symmetric, double saddle-point linear systems. We develop a spectral analysis of the preconditioned matrix, showing that its eigenvalues can be described in terms of the roots of a cubic polynomial with real coefficients. We illustrate the efficiency of the proposed preconditioners, and verify the theoretical bounds, in solving large-scale PDE-constrained optimization problems.
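A representative instance of this class (the specific blocks and the inexact variants analysed in the paper may differ) is
$$
\mathcal{A} = \begin{bmatrix} A & B^\top & 0 \\ B & 0 & C^\top \\ 0 & C & 0 \end{bmatrix},
\qquad
\mathcal{P} = \begin{bmatrix} A & 0 & 0 \\ 0 & S_1 & 0 \\ 0 & 0 & S_2 \end{bmatrix},
\qquad
S_1 = B A^{-1} B^\top, \quad S_2 = C S_1^{-1} C^\top,
$$
with $A$ symmetric positive definite and the Schur complements $S_1, S_2$ (or spectrally equivalent approximations, giving the inexact preconditioner) on the diagonal; the spectral analysis then characterizes the eigenvalues of $\mathcal{P}^{-1}\mathcal{A}$ through the roots of a cubic polynomial with real coefficients.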

In this paper, we present a logic for conditional strong historical necessity in branching time and apply it to analyze a nontheological version of Lavenham's argument for future determinism. Strong historical necessity is motivated from a linguistic perspective; an example is ``If I had not gotten away, I must have been dead''. The approach of the logic is as follows. The agent accepts ontic rules concerning how the world evolves over time. She takes some rules as indefeasible, and these determine the acceptable timelines. When evaluating a sentence with conditional strong historical necessity, we introduce its antecedent as an indefeasible ontic rule and then check whether its consequent holds on all acceptable timelines. Under this logic, the argument turns out not to be sound.
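Restated schematically in our own notation (not the paper's), the truth clause just described reads: $M, t \models (\varphi \Rightarrow \Box_{\mathrm{s}}\psi)$ iff $M, h, t \models \psi$ for every timeline $h$ that remains acceptable once $\varphi$ is added to the agent's indefeasible ontic rules.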

In this paper, we investigate nonlinear optimization problems whose constraints are defined as fuzzy relational equations (FRE) with max-min composition. Since the feasible solution set of an FRE is often non-convex and the resolution of FREs is an NP-hard problem, conventional nonlinear approaches may involve high computational complexity. Based on the theoretical aspects of the problem, an algorithm (called the FRE-ACO algorithm) is presented which benefits from the structural properties of FREs, the ability of the discrete ant colony optimization algorithm (ACO) to tackle combinatorial problems, and that of the continuous ant colony optimization algorithm (ACOR) to solve continuous optimization problems. In the current method, the fundamental ideas underlying ACO and ACOR are combined to form an efficient approach for solving nonlinear optimization problems constrained by such non-convex regions. Moreover, the FRE-ACO algorithm preserves the feasibility of newly generated solutions without having to find the minimal solutions of the feasible region in advance or check feasibility after generating new solutions. The FRE-ACO algorithm has been compared with related methods proposed for solving nonlinear optimization problems subject to max-min FREs. The obtained results demonstrate that the proposed algorithm has a higher convergence rate and requires fewer function evaluations than the other algorithms considered.
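For concreteness, a max-min FRE constrains $x \in [0,1]^n$ through
$$
A \circ x = b, \qquad (A \circ x)_i = \max_{1 \le j \le n} \min(a_{ij}, x_j) = b_i, \quad i = 1, \dots, m,
$$
with $a_{ij}, b_i \in [0,1]$. When nonempty, the solution set is a union of lattice intervals determined by a unique maximum solution and finitely many minimal solutions; this union-of-intervals geometry is the non-convex feasible region referred to above.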
