
The scope of this paper is the analysis and approximation of an optimal control problem related to the Allen-Cahn equation. A tracking functional is minimized subject to the Allen-Cahn equation using distributed controls that satisfy pointwise control constraints. First- and second-order necessary and sufficient conditions are proved. The lowest-order discontinuous Galerkin (in time) scheme is considered for the approximation of the control-to-state and adjoint-state mappings. Under a suitable restriction on the maximum size of the temporal and spatial discretization parameters $k$ and $h$, respectively, in terms of the parameter $\epsilon$ that describes the thickness of the interface layer, a priori estimates are proved with constants depending polynomially on $1/\epsilon$. Unlike previous works for the uncontrolled Allen-Cahn problem, our approach does not rely on the construction of an approximation of the spectral estimate, and as a consequence our estimates are valid under the low regularity assumptions imposed by the optimal control setting. These estimates are also valid in cases where the solution and its discrete approximation do not satisfy uniform space-time bounds independent of $\epsilon$. These estimates, together with a suitable localization technique via the second-order condition (see \cite{Arada-Casas-Troltzsch_2002,Casas-Mateos-Troltzsch_2005,Casas-Raymond_2006,Casas-Mateos-Raymond_2007}), allow us to prove error estimates for the difference between local optimal controls and their discrete approximations, as well as between the associated state and adjoint-state variables and their discrete approximations.
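
A prototypical tracking-type formulation of such a problem (a schematic sketch; the target state $y_d$, the regularization weight $\alpha$, the control bounds $u_a \le u \le u_b$, and the homogeneous Neumann boundary condition are illustrative assumptions, not details taken from the paper) reads
$$\min_{u_a \le u \le u_b} \; J(y,u) \;=\; \frac{1}{2}\int_0^T\!\!\int_\Omega |y-y_d|^2\,dx\,dt \;+\; \frac{\alpha}{2}\int_0^T\!\!\int_\Omega |u|^2\,dx\,dt,$$
subject to the controlled Allen-Cahn equation
$$\partial_t y - \Delta y + \frac{1}{\epsilon^2}\,(y^3 - y) = u \quad \text{in } (0,T)\times\Omega, \qquad \partial_n y = 0 \ \text{on } (0,T)\times\partial\Omega, \qquad y(\cdot,0) = y_0,$$
where $\epsilon$ controls the interface thickness and the discretization parameters $k$, $h$ must be restricted relative to $\epsilon$ as described above.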

Related content

Consider the approximation of stochastic Allen-Cahn-type equations (i.e., $1+1$-dimensional space-time white noise-driven stochastic PDEs with polynomial nonlinearities $F$ such that $F(\pm \infty)=\mp \infty$) by a fully discrete space-time explicit finite difference scheme. The consensus in the literature, supported by rigorous lower bounds, is that strong convergence rate $1/2$ with respect to the parabolic grid meshsize is expected to be optimal. We show that one can reach almost sure convergence rate $1$ (and no better) when measuring the error in appropriate negative Besov norms, by temporarily `pretending' that the SPDE is singular.
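
For concreteness, a minimal explicit finite-difference sketch of a stochastic Allen-Cahn-type equation $\partial_t u = \partial_{xx} u + u - u^3 + \dot{W}$ driven by space-time white noise could look as follows; the grid sizes, periodic boundary conditions, and the specific cubic nonlinearity are illustrative assumptions, not the exact scheme analyzed in the paper.

```python
import numpy as np

# Illustrative explicit scheme for  du = (u_xx + u - u^3) dt + dW  on [0, 1)
# with periodic boundary conditions; space-time white noise is approximated
# by i.i.d. Gaussians scaled by sqrt(dt / dx).
nx = 256                      # spatial grid points
dx = 1.0 / nx
dt = 0.25 * dx**2             # parabolic time step, dt ~ dx^2
nt = 2000                     # number of time steps
rng = np.random.default_rng(0)

u = np.cos(2 * np.pi * np.linspace(0.0, 1.0, nx, endpoint=False))  # initial data

for _ in range(nt):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2       # discrete Laplacian
    noise = rng.standard_normal(nx) * np.sqrt(dt / dx)             # white-noise increment
    u = u + dt * (lap + u - u**3) + noise
```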

In this paper, we propose a new covering technique localized for the trajectories of SGD. This localization provides an algorithm-specific complexity measured by the covering number, which can have dimension-independent cardinality, in contrast to standard uniform covering arguments that result in exponential dimension dependence. Based on this localized construction, we show that if the objective function is a finite perturbation of a piecewise strongly convex and smooth function with $P$ pieces, i.e., non-convex and non-smooth in general, the generalization error can be upper bounded by $O(\sqrt{(\log n\log(nP))/n})$, where $n$ is the number of data samples. In particular, this rate is independent of dimension and requires neither early stopping nor a decaying step size. Finally, we employ these results in various contexts and derive generalization bounds for multi-index linear models, multi-class support vector machines, and $K$-means clustering for both hard and soft label setups, improving the known state-of-the-art rates.

This work considers Gaussian process interpolation with a periodized version of the Matérn covariance function (Stein, 1999, Section 6.7) with Fourier coefficients $\phi(\alpha^2 + j^2)^{-\nu - 1/2}$. Convergence rates are studied for the joint maximum likelihood estimation of $\nu$ and $\phi$ when the data is sampled according to the model. The mean integrated squared error is also analyzed with fixed and estimated parameters, showing that maximum likelihood estimation yields asymptotically the same error as if the ground truth was known. Finally, the case where the observed function is a "deterministic" element of a continuous Sobolev space is also considered, suggesting that bounding assumptions on some parameters can lead to different estimates.
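
As an illustration, a periodic covariance with these Fourier coefficients can be evaluated by a truncated series, $k(x,x') \approx \sum_{|j|\le J} \phi(\alpha^2+j^2)^{-\nu-1/2} e^{2\pi i j (x-x')}$, and used for standard GP interpolation. The sketch below assumes this series form together with illustrative parameter values and truncation level; these are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

# Illustrative periodic Matérn-type covariance on [0, 1) built from Fourier
# coefficients c_j = phi * (alpha^2 + j^2)^(-nu - 1/2), truncated at |j| <= J.
# Parameter values and truncation level are assumptions for illustration.
def periodic_matern_cov(d, phi=1.0, alpha=1.0, nu=1.5, J=200):
    j = np.arange(1, J + 1)
    c = phi * (alpha**2 + j**2) ** (-nu - 0.5)
    c0 = phi * alpha ** (-2 * nu - 1)                       # j = 0 term
    terms = c[:, None] * np.cos(2 * np.pi * np.outer(j, np.ravel(d)))
    return (c0 + 2.0 * terms.sum(axis=0)).reshape(np.shape(d))

# Simple GP interpolation of noiseless observations at a few design points.
x_obs = np.array([0.05, 0.3, 0.55, 0.8])
y_obs = np.sin(2 * np.pi * x_obs)
x_new = np.linspace(0.0, 1.0, 101)

K = periodic_matern_cov(x_obs[:, None] - x_obs[None, :])
k_star = periodic_matern_cov(x_new[:, None] - x_obs[None, :])
mean = k_star @ np.linalg.solve(K + 1e-10 * np.eye(len(x_obs)), y_obs)  # posterior mean
```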

We establish the notion of limit consistency as a modular part in proving the consistency of lattice Boltzmann equations (LBE) with respect to a given partial differential equation (PDE) system. The incompressible Navier-Stokes equations (NSE) are used as the paragon. Based upon the diffusion limit [L. Saint-Raymond (2003), doi: 10.1016/S0012-9593(03)00010-7] of the Bhatnagar-Gross-Krook (BGK) Boltzmann equation towards the NSE, we provide a successive discretization by nesting conventional Taylor expansions and finite differences. Elaborating on the work in [M. J. Krause (2010), doi: 10.5445/IR/1000019768], we track the discretization state of the domain for the particle distribution functions and measure truncation errors at all levels within the derivation procedure. Via parametrizing equations and proving the limit consistency of the respective sequences, we retain the path towards the targeted PDE at each step of discretization, i.e., for the discrete-velocity BGK Boltzmann equation and the space-time discretized LBE. As a direct result, we unfold the discretization technique of lattice Boltzmann methods as chaining finite differences and provide a generic top-down derivation of the numerical scheme which upholds the continuous limit.
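
As a point of reference for the scheme whose consistency is being tracked, a generic BGK lattice Boltzmann update can be sketched as a collide-and-stream step. The D2Q9 lattice, the relaxation time, and the periodic, force-free setting below are illustrative assumptions and not the derivation of the paper.

```python
import numpy as np

# Generic D2Q9 BGK lattice Boltzmann step: collide, then stream.
w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])   # lattice weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])                # discrete velocities
tau = 0.8                                                          # BGK relaxation time (assumption)
nx, ny = 64, 64
rng = np.random.default_rng(0)

def equilibrium(rho, u):
    cu = np.einsum('qd,xyd->qxy', c, u)                            # c_q . u
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = 1.0 + 0.01 * rng.standard_normal((nx, ny))                   # small density perturbation
u = np.zeros((nx, ny, 2))
f = equilibrium(rho, u)                                            # start at equilibrium

for _ in range(100):
    rho = f.sum(axis=0)                                            # zeroth moment: density
    u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]            # first moment: velocity
    f += -(f - equilibrium(rho, u)) / tau                          # BGK collision
    for q in range(9):                                             # periodic streaming
        f[q] = np.roll(f[q], shift=c[q], axis=(0, 1))
```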

Optimal binning is the optimal discretization of a variable into bins given a discrete or continuous numeric target. We present a rigorous and extensible mathematical programming formulation for solving the optimal binning problem for binary, continuous, and multi-class target types, incorporating constraints not previously addressed. For all three target types, we introduce a convex mixed-integer programming formulation. Several algorithmic enhancements, such as automatic determination of the most suitable monotonic trend via a machine-learning-based classifier, as well as implementation aspects, are discussed. The new mathematical programming formulations are implemented in the open-source Python library OptBinning.
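
A minimal usage sketch with the OptBinning library for a binary target might look as follows; the synthetic data, column name, and parameter choices are assumptions for illustration, and the library's documentation remains the authoritative reference for its API.

```python
import numpy as np
from optbinning import OptimalBinning

# Illustrative data: one numeric variable and a binary target.
rng = np.random.default_rng(0)
x = rng.normal(loc=50.0, scale=10.0, size=1000)
y = (x + rng.normal(scale=15.0, size=1000) > 55.0).astype(int)

# Solve the optimal binning problem with the constraint programming solver
# and automatic selection of the monotonic trend.
optb = OptimalBinning(name="variable", dtype="numerical",
                      solver="cp", monotonic_trend="auto")
optb.fit(x, y)

print(optb.splits)                        # optimal split points
print(optb.binning_table.build())         # summary table (counts, event rates, WoE, IV)
x_woe = optb.transform(x, metric="woe")   # map values to weight of evidence
```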

We study differentially private (DP) stochastic optimization (SO) with data containing outliers and loss functions that are not Lipschitz continuous. To date, the vast majority of work on DP SO assumes that the loss is Lipschitz (i.e., stochastic gradients are uniformly bounded), and the resulting error bounds scale with the Lipschitz parameter of the loss. While this assumption is convenient, it is often unrealistic: in many practical problems where privacy is required, data may contain outliers or be unbounded, causing some stochastic gradients to have large norm. In such cases, the Lipschitz parameter may be prohibitively large, leading to vacuous excess risk bounds. Thus, building on a recent line of work [WXDX20, KLZ22], we make the weaker assumption that stochastic gradients have bounded $k$-th moments for some $k \geq 2$. Compared with works on DP Lipschitz SO, our excess risk scales with the $k$-th moment bound instead of the Lipschitz parameter of the loss, allowing for significantly faster rates in the presence of outliers. For convex and strongly convex loss functions, we provide the first asymptotically optimal excess risk bounds (up to a logarithmic factor). Moreover, in contrast to the prior works [WXDX20, KLZ22], our bounds do not require the loss function to be differentiable/smooth. We also devise an accelerated algorithm that runs in linear time and yields improved (compared to prior works) and nearly optimal excess risk for smooth losses. Additionally, our work is the first to address non-convex non-Lipschitz loss functions satisfying the Proximal-PL inequality; this covers some classes of neural nets, among other practical models. Our Proximal-PL algorithm has nearly optimal excess risk that almost matches the strongly convex lower bound. Lastly, we provide shuffle DP variations of our algorithms, which do not require a trusted curator (e.g. for distributed learning).
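
For context, the standard template for coping with large or unbounded stochastic gradients under differential privacy is per-sample gradient clipping followed by calibrated Gaussian noise. The sketch below shows this generic clipped noisy SGD template; the clipping threshold, noise scale, and quadratic loss are assumptions for illustration and this is not one of the algorithms proposed in the paper.

```python
import numpy as np

# Generic clipped, noisy SGD template: per-sample gradients are clipped to norm C
# and perturbed with Gaussian noise whose scale sigma would be calibrated to the
# desired (epsilon, delta) privacy budget by a privacy accountant.
rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.standard_normal((n, d))
theta_true = rng.standard_normal(d)
y = X @ theta_true + rng.standard_t(df=3, size=n)       # heavy-tailed residuals / outliers

C = 1.0          # clipping threshold (assumption)
sigma = 0.5      # noise multiplier (assumption)
lr = 0.05
theta = np.zeros(d)

for t in range(200):
    i = rng.integers(n)
    g = (X[i] @ theta - y[i]) * X[i]                     # per-sample gradient of squared loss
    g = g * min(1.0, C / (np.linalg.norm(g) + 1e-12))    # clip to norm at most C
    g = g + sigma * C * rng.standard_normal(d)           # add calibrated Gaussian noise
    theta -= lr * g
```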

Much of the literature on optimal design of bandit algorithms is based on minimization of expected regret. It is well known that designs that are optimal over certain exponential families can achieve expected regret that grows logarithmically in the number of arm plays, at a rate governed by the Lai-Robbins lower bound. In this paper, we show that when one uses such optimized designs, the regret distribution of the associated algorithms necessarily has a very heavy tail, specifically, that of a truncated Cauchy distribution. Furthermore, for $p>1$, the $p$-th moment of the regret distribution grows much faster than poly-logarithmically, in particular as a power of the total number of arm plays. We show that optimized UCB bandit designs are also fragile in an additional sense, namely that when the problem is even slightly mis-specified, the regret can grow much faster than the conventional theory suggests. Our arguments are based on standard change-of-measure ideas, and indicate that the most likely way for regret to become larger than expected is when the optimal arm returns below-average rewards in the first few arm plays, thereby causing the algorithm to believe that the arm is sub-optimal. To alleviate the fragility issues exposed, we show that UCB algorithms can be modified so as to ensure a desired degree of robustness to mis-specification. In doing so, we also provide a sharp trade-off between the amount of UCB exploration and the tail exponent of the resulting regret distribution.
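
For reference, a standard UCB index policy of the kind whose regret tails are discussed here can be sketched as follows. The Gaussian rewards, arm means, and exploration weight `beta` are assumptions for illustration; the optimized designs in the paper tune the confidence width to match the Lai-Robbins constant.

```python
import numpy as np

# Basic UCB sketch on Gaussian-reward arms; the exploration weight `beta`
# controls the exploration / regret-tail trade-off mentioned in the abstract.
rng = np.random.default_rng(0)
means = np.array([0.5, 0.45, 0.3])                        # unknown arm means (assumption)
K, T, beta = len(means), 10_000, 2.0

counts = np.zeros(K)
sums = np.zeros(K)

for t in range(T):
    if t < K:
        a = t                                             # play each arm once
    else:
        ucb = sums / counts + np.sqrt(beta * np.log(t) / counts)
        a = int(np.argmax(ucb))
    r = rng.normal(means[a], 1.0)                         # observe reward
    counts[a] += 1
    sums[a] += r

regret = T * means.max() - (counts * means).sum()         # pseudo-regret
```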

The parameters of the log-logistic distribution are generally estimated by classical methods such as maximum likelihood estimation, but these methods usually yield severely biased estimates when the data contain outliers. In this paper, we consider several alternative estimators, which not only have closed-form expressions but are also quite robust to a certain level of data contamination. We investigate the robustness of each estimator in terms of the breakdown point. The finite-sample performance and effectiveness of these estimators are evaluated through Monte Carlo simulations and a real-data application. Numerical results demonstrate that the proposed estimators perform favorably: they are comparable with the maximum likelihood estimator for data without contamination, and they provide superior performance in the presence of data contamination.
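
One example of a closed-form, quantile-based estimator for the log-logistic$(\alpha, \beta)$ distribution (an illustrative choice, not necessarily among the estimators studied in the paper) uses the quantile function $Q(p) = \alpha\,(p/(1-p))^{1/\beta}$: the scale $\alpha$ is estimated by the sample median, and since $Q(p)/Q(1-p) = (p/(1-p))^{2/\beta}$, the shape follows from a symmetric pair of sample quantiles.

```python
import numpy as np

# Closed-form quantile estimators for the log-logistic(alpha, beta) distribution,
# using Q(p) = alpha * (p / (1 - p))**(1 / beta).  The choice p = 0.75 is an
# illustrative assumption; quantile-based estimators resist outliers because
# they ignore the extreme tails of the sample.
def loglogistic_quantile_fit(x, p=0.75):
    q_lo, med, q_hi = np.quantile(x, [1 - p, 0.5, p])
    alpha_hat = med                                            # Q(0.5) = alpha
    beta_hat = 2 * np.log(p / (1 - p)) / np.log(q_hi / q_lo)   # from Q(p)/Q(1-p)
    return alpha_hat, beta_hat

# Example: simulate log-logistic data via the inverse CDF and contaminate it.
rng = np.random.default_rng(0)
u = rng.uniform(size=1000)
x = 2.0 * (u / (1 - u)) ** (1 / 3.0)                           # alpha = 2, beta = 3
x[:20] = 1e4                                                   # inject a few gross outliers
print(loglogistic_quantile_fit(x))                             # remains close to (2, 3)
```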

A formalized and quantifiable responsibility score is a crucial component in many aspects of the development and application of multi-agent systems and autonomous agents. We can employ it to inform decision-making processes based on ethical considerations, as a measure to ensure redundancy that helps us avoid system failure, as well as for verifying that autonomous systems remain trustworthy by testing for unwanted responsibility voids in advance. We follow recent proposals to use probabilities as the basis for responsibility ascription in uncertain environments, rather than the deterministic causal views employed in much of the previous formal philosophical literature. Using an axiomatic approach, we formally evaluate the qualities of (classes of) proposed responsibility functions. To this end, we decompose the computation of the responsibility a group carries for an outcome into the computation of values that we assign to its members for individual decisions leading to that outcome, paired with an appropriate aggregation function. Next, we discuss a number of intuitively desirable properties for each of these contributing functions. We find an incompatibility between axioms determining upper and lower bounds for the values assigned at the member level. Regarding the aggregation from member-level values to group-level responsibility, we are able to axiomatically characterize one promising aggregation function. Finally, we present two maximally axiom-compliant group-level responsibility measures -- one respecting the lower bound axioms at the member level and one respecting the corresponding upper bound axioms.

When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.
