
A unified framework for fourth-order semilinear problems with trilinear nonlinearity and general source allows for quasi-best approximation with lowest-order finite element methods. This paper establishes stability and a priori error control in the piecewise energy and weaker Sobolev norms under minimal hypotheses. Applications include the stream function vorticity formulation of the incompressible 2D Navier-Stokes equations and the von K\'{a}rm\'{a}n equations with Morley, discontinuous Galerkin, $C^0$ interior penalty, and weakly over-penalized symmetric interior penalty schemes. The newly proposed discretizations employ quasi-optimal smoothers for the source term and smoother-type modifications inside the nonlinear terms.
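As a hedged point of reference for the class of problems covered, the von Kármán equations (in one common normalization) are a fourth-order semilinear system whose bracket supplies the trilinear nonlinearity:
\[
\Delta^2 u = [u,v] + f, \qquad \Delta^2 v = -\tfrac{1}{2}[u,u], \qquad [u,v] := u_{xx} v_{yy} + u_{yy} v_{xx} - 2 u_{xy} v_{xy},
\]
so that $b(u,v,w) := \int_\Omega [u,v]\, w \,\mathrm{d}x$ defines the associated trilinear form.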

Related Content

Bilevel optimization has various applications such as hyper-parameter optimization and meta-learning. Designing theoretically efficient algorithms for bilevel optimization is more challenging than for standard optimization because the lower-level problem defines the feasible set implicitly via another optimization problem. One tractable case is when the lower-level problem is strongly convex. Recent works show that second-order methods can provably converge to an $\epsilon$-first-order stationary point of the problem at a rate of $\tilde{\mathcal{O}}(\epsilon^{-2})$, yet these algorithms require a Hessian-vector product oracle. Kwon et al. (2023) addressed this by proposing a first-order method that achieves the same goal at the slower rate of $\tilde{\mathcal{O}}(\epsilon^{-3})$. In this work, we provide an improved analysis demonstrating that this first-order method can also find an $\epsilon$-first-order stationary point within $\tilde{\mathcal{O}}(\epsilon^{-2})$ oracle complexity, which matches the upper bounds for second-order methods in the dependency on $\epsilon$. Our analysis further leads to simple first-order algorithms that achieve similar near-optimal rates in finding second-order stationary points and in distributed bilevel problems.
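As a rough illustration of the fully first-order idea, here is a minimal sketch in the spirit of the penalty reformulation used by Kwon et al. (2023), not their actual algorithm; all names, step sizes, and the fixed penalty below are illustrative assumptions:

```python
import numpy as np

# Value-function penalty: L_lam(x, y, z) = f(x, y) + lam * (g(x, y) - g(x, z)).
# y tracks argmin_y [f/lam + g], z tracks argmin_z g; the hypergradient of
# F(x) = f(x, y*(x)) is then approximated with gradients only, i.e. without
# Hessian-vector products.
def bilevel_step(x, y, z, grad_f, grad_g, lam, eta_in=1e-2, eta_out=1e-2, inner_iters=10):
    gfx, gfy = grad_f  # partial gradients of the upper-level objective f(x, y)
    ggx, ggy = grad_g  # partial gradients of the lower-level objective g(x, y)
    for _ in range(inner_iters):
        y = y - eta_in * (gfy(x, y) / lam + ggy(x, y))  # descent on f/lam + g
        z = z - eta_in * ggy(x, z)                      # descent on g alone
    hypergrad = gfx(x, y) + lam * (ggx(x, y) - ggx(x, z))  # first-order surrogate
    return x - eta_out * hypergrad, y, z
```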

Implementing accurate Distribution System State Estimation (DSSE) faces several challenges, among which are the lack of observability and the high density of the distribution system. While data-driven alternatives based on machine learning models are a possible choice, they struggle in DSSE because of the scarcity of labeled data; measurements in the distribution system are often noisy, corrupted, or unavailable. To address these issues, we propose the Deep Statistical Solver for Distribution System State Estimation (DSS$^2$), a deep learning model based on graph neural networks (GNNs) that accounts for the network structure of the distribution system and for the physical governing power flow equations. DSS$^2$ leverages hypergraphs to represent the heterogeneous components of the distribution system and updates their latent representations via a node-centric message-passing scheme. A weakly supervised learning approach is put forth to train the DSS$^2$ in a learning-to-optimize fashion with respect to the Weighted Least Squares loss over noisy measurements and pseudomeasurements. By enforcing the GNN output into the power flow equations and the latter into the loss function, we force the DSS$^2$ to respect the physics of the distribution system. This strategy enables learning from noisy measurements, acts as an implicit denoiser, and alleviates the need for ideal labeled data. Extensive experiments with case studies on the IEEE 14-bus, 70-bus, and 179-bus networks show that the DSS$^2$ clearly outperforms the conventional Weighted Least Squares algorithm in accuracy, convergence, and computational time, while being more robust to noisy, erroneous, and missing measurements. The DSS$^2$ achieves competitive, though lower, performance compared with supervised models that rely on the unrealistic assumption of having all the true labels.
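A minimal sketch of the two ingredients described above, with toy placeholder names (W_self, W_msg, and h are assumptions, not the DSS$^2$ architecture): a node-centric message-passing update and a weighted least-squares loss coupling the estimated state to noisy measurements through a measurement model:

```python
import numpy as np

def message_passing_step(H, A, W_self, W_msg):
    """One node-centric update: aggregate neighbor latents along A, then mix."""
    msgs = A @ H @ W_msg               # messages gathered over graph edges
    return np.tanh(H @ W_self + msgs)  # updated latent node states

def wls_loss(v_est, z, sigma, h):
    """Weighted least squares on measurement residuals; h encodes the physics
    (here, the power flow equations mapping states to measurements)."""
    r = z - h(v_est)                   # residuals of noisy measurements z
    return np.sum((r / sigma) ** 2)    # inverse-variance weighting
```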

The gradient discretisation method (GDM) -- a generic framework encompassing many numerical methods -- is studied for a general stochastic Stefan problem with multiplicative noise. The convergence of the numerical solutions is proved by a compactness method using discrete functional analysis tools, the Skorohod theorem, and the martingale representation theorem. The generic convergence results established in the GDM framework are applicable to a range of different numerical methods, including, for example, mass-lumped finite elements, some finite volume methods, mimetic methods, and lowest-order virtual element methods. The theoretical results are complemented by numerical tests based on two methods that fit in the GDM framework.
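For concreteness, one common formulation of such a problem (the paper's precise assumptions may differ) reads
\[
\mathrm{d}\bar{u} - \Delta \zeta(\bar{u})\,\mathrm{d}t = f(\bar{u})\,\mathrm{d}W(t) \quad \text{in } \Omega \times (0,T),
\]
where $\zeta$ is nondecreasing and Lipschitz continuous, constant on an interval corresponding to the phase change, and $W$ is a Wiener process driving the multiplicative noise.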

In this article, we present a method for increasing the adaptivity of an existing robust estimation algorithm by learning two parameters to better fit the residual distribution. The analyzed method uses these two parameters to calculate weights for Iterative Re-weighted Least Squares (IRLS). This adaptive nature of the weights can be helpful in situations where the noise level varies across the measurements. We test our algorithm first on the point cloud registration problem with synthetic data sets and then on LiDAR odometry with open-source real-world data sets. We show that the existing approach requires additional manual tuning of a residual scale parameter, which our method instead learns directly from the data, while achieving similar or better performance. We further present the idea of decoupling the scale and shape parameters to improve the performance of the algorithm. We give a detailed analysis of our algorithm, along with a comparison to similar well-known algorithms from the literature, to show the benefits of the proposed approach.
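A minimal IRLS sketch with a two-parameter (shape alpha, scale c) robust weight in the spirit of Barron's general adaptive loss; in the method above both parameters would be learned from the residual distribution rather than fixed as here:

```python
import numpy as np

def robust_weight(r, alpha, c):
    """IRLS weight w(r) = rho'(r)/r of the general robust loss (alpha not in {0, 2})."""
    return (1.0 / c**2) * ((r / c) ** 2 / abs(alpha - 2.0) + 1.0) ** (alpha / 2.0 - 1.0)

def irls(A, b, alpha, c, iters=20):
    """Solve min_x sum_i rho(b_i - A_i x; alpha, c) by re-weighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]         # ordinary LS initialization
    for _ in range(iters):
        w = robust_weight(b - A @ x, alpha, c)       # per-residual weights
        Aw = A * w[:, None]                          # row-scaled design matrix
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)      # weighted normal equations
    return x
```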

Penalized regression methods such as ridge regression rely heavily on the choice of a tuning or penalty parameter, which is often computed via cross-validation. Discrepancies in the value of the penalty parameter may lead to substantial differences in regression coefficient estimates and predictions. In this paper, we investigate the effect of single observations on the optimal choice of the tuning parameter, showing how the presence of influential points can change it dramatically. We classify points as ``expanders'' or ``shrinkers'', based on their effect on the model complexity. Our approach supplies a visual exploratory tool to identify influential points, and is naturally implementable for high-dimensional data where traditional approaches usually fail. Applications to simulated and real data examples, both low- and high-dimensional, are presented. The visual tool is implemented in the R package influridge.
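A sketch of the mechanics of such a diagnostic (the influridge package implements the authors' actual procedure; this illustration uses the closed-form leave-one-out error of ridge regression): drop one observation, re-select the penalty, and compare with the full-data choice. Whether removal increases or decreases the optimal penalty reflects the point's effect on model complexity.

```python
import numpy as np

def loocv_ridge_error(X, y, lam):
    """Closed-form leave-one-out error of ridge regression at penalty lam."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # ridge hat matrix
    resid = y - H @ y
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

def optimal_lambda(X, y, grid):
    """Penalty in the grid minimizing the leave-one-out error."""
    return grid[np.argmin([loocv_ridge_error(X, y, lam) for lam in grid])]

def penalty_influence(X, y, grid):
    """Optimal penalty with each observation left out, versus the full fit."""
    lam_full = optimal_lambda(X, y, grid)
    mask = np.ones(len(y), dtype=bool)
    lams = []
    for i in range(len(y)):
        mask[i] = False
        lams.append(optimal_lambda(X[mask], y[mask], grid))
        mask[i] = True
    return lam_full, np.array(lams)
```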

This paper presents a novel approach to Bayesian nonparametric spectral analysis of stationary multivariate time series. Starting with a parametric vector-autoregressive model, the parametric likelihood is nonparametrically adjusted in the frequency domain to account for potential deviations from the parametric assumptions. We show mutual contiguity of the nonparametrically corrected likelihood, the multivariate Whittle likelihood approximation, and the exact likelihood for Gaussian time series. A multivariate extension of the nonparametric Bernstein-Dirichlet process prior for univariate spectral densities to the space of Hermitian positive definite spectral density matrices is specified directly on the correction matrices. An infinite series representation of this prior is then used to develop a Markov chain Monte Carlo algorithm to sample from the posterior distribution. The code is made publicly available for ease of use and reproducibility. With this novel approach we provide a generalization of the multivariate Whittle-likelihood-based method of Meier et al. (2020) as well as an extension of the nonparametrically corrected likelihood for univariate stationary time series of Kirch et al. (2019) to the multivariate case. We demonstrate that the nonparametrically corrected likelihood combines the efficiency of a parametric model with the robustness of a nonparametric one. Its numerical accuracy is illustrated in a comprehensive simulation study. We illustrate its practical advantages with a spectral analysis of two environmental data sets: a bivariate time series of the Southern Oscillation Index and fish recruitment, and wind speed time series at six locations in California.
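For orientation, a minimal sketch of the multivariate Whittle likelihood approximation that the correction builds on (conventions for constants and frequency ranges vary across references; this follows one common choice):

```python
import numpy as np

def periodogram_matrices(X):
    """X: (n, d) time series; returns the (n, d, d) periodogram matrices I(omega_k)."""
    n = X.shape[0]
    F = np.fft.fft(X, axis=0) / np.sqrt(2.0 * np.pi * n)
    return np.einsum('ki,kj->kij', F, F.conj())     # outer products F_k F_k^H

def whittle_loglik(I, S):
    """Whittle log-likelihood: -sum_k [log det S(omega_k) + tr(S(omega_k)^{-1} I(omega_k))]."""
    ll = 0.0
    for I_k, S_k in zip(I, S):
        _, logdet = np.linalg.slogdet(S_k)
        ll -= logdet + np.real(np.trace(np.linalg.solve(S_k, I_k)))
    return ll
```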

This paper describes a class of shape optimization problems for optical metamaterials consisting of periodic microscale inclusions of a dielectric, low-dimensional material suspended in a non-magnetic bulk dielectric. The shape optimization approach is based on a homogenization theory for the time-harmonic Maxwell equations that describes effective material parameters for the propagation of electromagnetic waves through the metamaterial. The control parameter of the optimization is a deformation field representing the deviation of the microscale geometry from a reference configuration of the cell problem. This allows the homogenized effective permittivity tensor to be described as a function of the deformation field. We show that the underlying deformed cell problem is well-posed and regular, which in turn proves that the shape optimization problem is well-posed. In addition, a numerical scheme is formulated that utilizes an adjoint formulation with either gradient descent or BFGS as the optimization algorithm. The developed algorithm is tested numerically on a number of prototypical shape optimization problems with a prescribed effective permittivity tensor as the target.
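As a hedged point of reference, the classical periodic-homogenization formula for an effective permittivity tensor (which the theory above adapts to low-dimensional inclusions) reads
\[
\varepsilon^{\mathrm{eff}}_{ij} = \int_Y \varepsilon(y)\bigl(\delta_{ij} + \partial_{y_i}\chi_j(y)\bigr)\,\mathrm{d}y, \qquad \nabla_y \cdot \bigl(\varepsilon(y)\,(e_j + \nabla_y \chi_j)\bigr) = 0 \ \text{in } Y,
\]
with $Y$-periodic correctors $\chi_j$; the deformation field then enters through the geometry of the unit cell $Y$.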

In stochastic zeroth-order optimization, a problem of practical relevance is understanding how to fully exploit the local geometry of the underlying objective function. We consider a fundamental setting in which the objective function is quadratic and provide the first tight characterization of the optimal Hessian-dependent sample complexity. Our contribution is twofold. First, from an information-theoretic point of view, we prove tight lower bounds on Hessian-dependent complexities by introducing a concept called energy allocation, which captures the interaction between the search algorithm and the geometry of the objective function. A matching upper bound is obtained by solving for the optimal energy spectrum. Second, algorithmically, we show the existence of a Hessian-independent algorithm that universally achieves the asymptotically optimal sample complexity for all Hessian instances. The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions, which we handle via a truncation method.
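A minimal sketch of a truncated two-point zeroth-order gradient estimator (the clipping threshold tau is a placeholder, not the paper's schedule):

```python
import numpy as np

def zo_gradient(f, x, delta, tau, rng):
    """Estimate grad f(x) from two noisy evaluations along a random direction;
    clipping the observed difference at tau guards against heavy-tailed noise."""
    v = rng.standard_normal(x.shape)
    v /= np.linalg.norm(v)                       # uniform direction on the sphere
    diff = f(x + delta * v) - f(x - delta * v)   # noisy finite difference
    diff = np.clip(diff, -tau, tau)              # truncation step
    return (x.size * diff / (2.0 * delta)) * v   # dimension-scaled estimate
```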

We consider the Biot model with block preconditioners and generalized eigenvalue problems for scalability and robustness to parameters. A discontinuous Galerkin discretization is employed, with the displacement and Darcy flux discretized as piecewise continuous $P_1$ elements and the pore pressure as piecewise constant $P_0$ elements with a stabilizing term. Parallel algorithms are designed to solve the resulting linear system. Specifically, the GMRES method is employed as the outer iteration and block-triangular preconditioners are designed to accelerate the convergence. In the preconditioners, the elliptic operators are further approximated by incomplete Cholesky factorization or by a two-level additive overlapping Schwarz method whose coarse grids are constructed from generalized eigenvalue problems in the overlaps (GenEO). Extensive numerical experiments demonstrate the scalability and parametric robustness of the resulting parallel algorithms.
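A toy sketch of this solver structure, assuming a generic $2 \times 2$ block system $[[A, B^T], [B, -C]]$ and an approximate Schur complement $S$; here the blocks are inverted exactly with sparse factorizations as placeholders for the incomplete Cholesky or two-level Schwarz/GenEO approximations:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_solver(A, B, C, S):
    """GMRES outer iteration with a block upper-triangular preconditioner."""
    n = A.shape[0]
    K = sp.bmat([[A, B.T], [B, -C]]).tocsc()   # monolithic Biot-type system
    solve_A = spla.factorized(A.tocsc())       # placeholder for IC / Schwarz-GenEO
    solve_S = spla.factorized(S.tocsc())       # approximate Schur complement solve

    def apply_prec(r):                         # apply P^{-1}, P = [[A, B^T], [0, -S]]
        r1, r2 = r[:n], r[n:]
        y2 = -solve_S(r2)
        y1 = solve_A(r1 - B.T @ y2)
        return np.concatenate([y1, y2])

    P = spla.LinearOperator(K.shape, matvec=apply_prec)

    def solve(b):
        x, info = spla.gmres(K, b, M=P)        # preconditioned outer iteration
        return x

    return solve
```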

In this paper we present a new H(div)-conforming unfitted finite element method for the mixed Poisson problem that is robust with respect to the cut configuration and preserves the conservation properties of body-fitted finite element methods. The key is to formulate the divergence constraint on the active mesh, instead of the physical domain, in order to obtain robustness with respect to cut configurations without the need for a stabilization that pollutes the mass balance. This change in the formulation results in a slight inconsistency but does not affect the accuracy of the flux variable. By applying post-processings for the scalar variable, in the spirit of classical local post-processings in body-fitted methods, we retain optimal convergence rates for both variables and even superconvergence of the post-processed scalar variable. We present the method, perform a rigorous a priori error analysis, and discuss several variants and extensions. Numerical experiments confirm the theoretical results.
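For reference, the mixed Poisson problem being discretized reads (with homogeneous Dirichlet data; in the proposed method the divergence constraint is posed on the active mesh rather than on $\Omega$):
\[
\sigma + \nabla u = 0, \quad \operatorname{div} \sigma = f \ \text{in } \Omega; \qquad (\sigma, \tau)_\Omega - (u, \operatorname{div} \tau)_\Omega = 0, \quad (\operatorname{div} \sigma, v)_\Omega = (f, v)_\Omega,
\]
for all test functions $\tau \in H(\operatorname{div}, \Omega)$ and $v \in L^2(\Omega)$.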
