
Motivated by the discrete dipole approximation (DDA) for the scattering of electromagnetic waves by a dielectric obstacle, which can be viewed as a simple discretization of a Lippmann-Schwinger-style volume integral equation for the time-harmonic Maxwell equations, we analyze an analogous discretization of convolution operators with strongly singular kernels. For a class of kernel functions that includes the finite Hilbert transform in 1D and the principal part of the Maxwell volume integral operator used for DDA in dimensions 2 and 3, we show that the method, which does not fit into known frameworks of projection methods, can nevertheless be regarded as a finite section method for an infinite block Toeplitz matrix. The symbol of this matrix is given by a Fourier series that does not converge absolutely. We use Ewald's method to obtain an exponentially fast convergent series representation of this symbol and show that it is a bounded function, which allows us to describe the spectrum and the numerical range of the matrix. It turns out that this numerical range contains the numerical range of the integral operator, but that it is in some cases strictly larger. In those cases the discretization method does not provide a spectrally correct approximation, and while it is stable for a large range of the spectral parameter $\lambda$, there are values of $\lambda$ for which the singular integral equation is well posed but the discretization method is unstable.
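
As a toy illustration of the finite section viewpoint (our own sketch, not the paper's construction), the snippet below builds finite sections of the Toeplitz matrix obtained by a naive discretization of the finite Hilbert transform and samples their numerical ranges; the normalization is an assumption.

```python
# Toy illustration (not the paper's construction): finite sections of the
# Toeplitz matrix obtained by discretizing the finite Hilbert transform
# (Hu)(x) = p.v. (1/pi) * integral of u(y)/(x-y) dy on a uniform grid.
# The principal value makes the diagonal entry vanish; the off-diagonal
# entries 1/(pi*(j-k)) are independent of the mesh size.
import numpy as np

def finite_section(n):
    j = np.arange(n)
    diff = j[:, None] - j[None, :]
    A = np.zeros((n, n))
    mask = diff != 0
    A[mask] = 1.0 / (np.pi * diff[mask])
    return A

# Sample the numerical range W(A) = {x* A x : |x| = 1}: its extreme point in
# direction e^{it} comes from the Hermitian part of e^{-it} A.
def numerical_range_boundary(A, num_angles=64):
    pts = []
    for t in np.linspace(0, 2 * np.pi, num_angles, endpoint=False):
        B = np.exp(-1j * t) * A
        H = (B + B.conj().T) / 2
        w, V = np.linalg.eigh(H)
        x = V[:, -1]                      # eigenvector of the largest eigenvalue
        pts.append(x.conj() @ A @ x)      # boundary point of W(A)
    return np.array(pts)

for n in (32, 128, 512):
    b = numerical_range_boundary(finite_section(n))
    print(n, np.abs(b).max())   # watch the numerical range stabilize as n grows
```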

Related content

This paper presents a new method for reconstructing regions of interest (ROI) from a limited number of computed tomography (CT) measurements. Classical model-based iterative reconstruction methods lead to images with predictable features, but they often suffer from tedious parameterization and slow convergence. In contrast, deep learning methods are fast and can reach high reconstruction quality by leveraging information from large datasets, but they lack interpretability. At the crossroads of both approaches, deep unfolding networks have recently been proposed. Their design includes the physics of the imaging system and the steps of an iterative optimization algorithm. Motivated by the success of these networks for various applications, we introduce an unfolding neural network called U-RDBFB, designed for ROI CT reconstruction from limited data. Few-view truncated data are effectively handled thanks to a robust non-convex data fidelity term combined with a sparsity-inducing regularization function. We unfold the Dual Block coordinate Forward-Backward (DBFB) algorithm, embedded in an iterative reweighted scheme, allowing key parameters to be learned in a supervised manner. Our experiments show an improvement over several state-of-the-art methods, including a model-based iterative scheme, a multi-scale deep learning architecture, and other deep unfolding methods.
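
The following sketch shows the generic structure of an unfolded forward-backward network with layer-wise learned step sizes and thresholds; it is not the authors' U-RDBFB architecture (no reweighting, no dual block coordinate steps), and the operators H, W and all shapes are illustrative assumptions.

```python
# Generic unfolding sketch (not the authors' U-RDBFB): each layer performs one
# forward-backward step on min_x D(Hx - y) + theta * ||Wx||_1, with the step
# size and regularization weight learned per layer.
import torch
import torch.nn as nn

class UnfoldedFB(nn.Module):
    def __init__(self, H, W, num_layers=10):
        super().__init__()
        self.H, self.W = H, W   # fixed forward operator and sparsifying transform
        self.step = nn.Parameter(torch.full((num_layers,), 1e-2))
        self.theta = nn.Parameter(torch.full((num_layers,), 1e-3))

    def forward(self, y, x0):
        x = x0
        for s, th in zip(self.step, self.theta):
            grad = self.H.T @ (self.H @ x - y)          # gradient of the data term
            z = self.W @ (x - s * grad)
            z = torch.sign(z) * torch.clamp(z.abs() - s * th, min=0.0)  # soft threshold
            x = self.W.T @ z                            # W assumed orthogonal here
        return x

# Usage sketch: learn self.step / self.theta by backpropagating a supervised
# loss ||x_hat - x_true||^2 over a dataset of (y, x_true) pairs.
```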

Solving PDEs with machine learning techniques has become a popular alternative to conventional methods. In this context, neural networks (NNs) are among the most commonly used machine learning tools, and in these models the choice of an appropriate loss function is critical. In general, the main goal is to guarantee that minimizing the loss during training translates into minimizing the error in the solution at the same rate. In this work, we focus on the time-harmonic Maxwell equations, whose weak formulation takes H(curl) as the space of test functions. We propose an NN in which the loss function is a computable approximation of the dual norm of the weak-form PDE residual. To that end, we employ the Helmholtz decomposition of the space H(curl) and construct an orthonormal basis for this space in two and three spatial dimensions. We use the Discrete Sine/Cosine Transform to compute the discrete version of our proposed loss function accurately and efficiently. Moreover, the numerical examples show a high correlation between the proposed loss function and the H(curl)-norm of the error, even in problems with low-regularity solutions.
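
A one-dimensional analogue (our simplification; the paper's H(curl) construction in 2D/3D is more involved) shows how such a dual-norm loss can be evaluated with a fast sine transform:

```python
# 1D analogue (our simplification, not the paper's H(curl) construction):
# approximate the dual norm ||r||_{H^{-1}} of a residual r on (0, pi) with
# zero boundary values. In the orthonormal sine basis e_k = sqrt(2/pi)*sin(kx),
# ||r||_{H^{-1}}^2 = sum_k |<r, e_k>|^2 / (1 + k^2), and the coefficients are
# computed with a fast discrete sine transform.
import numpy as np
from scipy.fft import dst

def dual_norm_loss(r_grid):
    """r_grid: residual sampled at x_n = pi*(n+1)/(N+1), n = 0..N-1."""
    N = r_grid.size
    h = np.pi / (N + 1)
    # DST-I with 'ortho' normalization matches the sine basis on this grid;
    # the factor sqrt(h) turns grid sums into approximate L2 inner products.
    coeffs = dst(r_grid, type=1, norm='ortho') * np.sqrt(h)
    k = np.arange(1, N + 1)
    return np.sum(coeffs**2 / (1.0 + k**2))

# Sanity check: a smoother residual has a smaller dual norm than a rough one
x = np.pi * np.arange(1, 200) / 200.0
print(dual_norm_loss(np.sin(x)), dual_norm_loss(np.sin(40 * x)))
```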

The aim of this article is to analyze numerical schemes using two-layer neural networks of infinite width for the solution of high-dimensional Poisson partial differential equations (PDEs) with Neumann boundary conditions. Using Barron's representation of the solution in terms of a probability measure, the energy is minimized via a gradient-curve dynamic on the $2$-Wasserstein space of the parameters defining the neural network. Inspired by the work of Bach and Chizat, we prove that if the gradient curve converges, then the represented function is the solution of the elliptic equation under consideration. Numerical experiments are given to show the potential of the method.
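
A minimal particle sketch of this idea under our own assumptions (unit-cube domain, tanh features, Monte-Carlo quadrature, a mean-zero penalty): gradient descent on the particles $(a_i, w_i, b_i)$ is a discretization of the gradient curve on the Wasserstein space.

```python
# Minimal particle sketch (our toy, not the paper's scheme): a two-layer
# network u(x) = (1/m) * sum_i a_i * tanh(w_i . x + b_i) trained by gradient
# descent on the Neumann energy
#   J(u) = integral of (|grad u|^2 / 2 - f*u) + (integral of u)^2,
# estimated by Monte Carlo on the unit cube.
import torch

d, m, n_mc = 5, 256, 4096
a = torch.randn(m, requires_grad=True)
w = torch.randn(m, d, requires_grad=True)
b = torch.randn(m, requires_grad=True)
f = lambda x: torch.cos(torch.pi * x[:, 0])      # compatible: f integrates to 0

def u(x):
    return torch.tanh(x @ w.T + b) @ a / m

opt = torch.optim.SGD([a, w, b], lr=1e-2)
for it in range(2000):
    x = torch.rand(n_mc, d, requires_grad=True)  # uniform samples in [0,1]^d
    ux = u(x)
    (grad_u,) = torch.autograd.grad(ux.sum(), x, create_graph=True)
    energy = (0.5 * (grad_u ** 2).sum(1) - f(x) * ux).mean() + ux.mean() ** 2
    opt.zero_grad(); energy.backward(); opt.step()
```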

This article proposes and analyzes the generalized weak Galerkin ({\rm g}WG) finite element method for second-order elliptic problems. A generalized discrete weak gradient operator is introduced in the weak Galerkin framework so that the {\rm g}WG method not only allows arbitrary combinations of piecewise polynomials defined in the interior and on the boundary of each local finite element, but also works on general polytopal partitions. Error estimates are established for the corresponding numerical functions in the energy norm and the usual $L^2$ norm. A series of numerical experiments is presented to demonstrate the performance of the newly proposed {\rm g}WG method.
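
To make the notion concrete, here is a one-dimensional toy computation of a discrete weak gradient (our illustration; the paper works with general polytopal elements in higher dimensions):

```python
# 1D toy of the discrete weak gradient (our illustration). On T = [0,1], a weak
# function v = {v0, (vb0, vb1)} has an interior polynomial v0 and endpoint
# values vb0, vb1. Its weak gradient g in P_r is defined by integration by parts:
#   (g, q)_T = -(v0, q')_T + vb1*q(1) - vb0*q(0)   for all q in P_r.
import numpy as np
from numpy.polynomial import polynomial as P

def weak_gradient(v0, vb0, vb1, r):
    # Monomial test basis q_i(x) = x^i; solve the mass-matrix system M c = rhs.
    M = np.array([[1.0 / (i + j + 1) for j in range(r + 1)] for i in range(r + 1)])
    rhs = np.empty(r + 1)
    for i in range(r + 1):
        qi = np.zeros(i + 1); qi[i] = 1.0            # q_i = x^i
        anti = P.polyint(P.polymul(v0, P.polyder(qi)))   # antiderivative of v0*q_i'
        rhs[i] = -(P.polyval(1.0, anti) - P.polyval(0.0, anti)) \
                 + vb1 * 1.0 - vb0 * (1.0 if i == 0 else 0.0)  # q_i(1)=1, q_i(0)=[i==0]
    return np.linalg.solve(M, rhs)                   # coefficients of g in P_r

# With v0 = x^2 and matching traces vb0 = 0, vb1 = 1, the weak gradient
# recovers the classical gradient 2x:
print(weak_gradient(np.array([0.0, 0.0, 1.0]), 0.0, 1.0, r=1))  # ~ [0, 2]
```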

We study connections between differential equations and optimization algorithms for $m$-strongly convex and $L$-smooth functions through the use of Lyapunov functions, generalizing the Linear Matrix Inequality framework developed by Fazlyab et al. in 2018. Using the new framework, we analytically derive a new (discrete) Lyapunov function for a two-parameter family of Nesterov optimization methods and characterize their convergence rate. This allows us to prove a convergence rate that improves substantially on the previously proven rate of Nesterov's method for the standard choice of coefficients, as well as to characterize the choice of coefficients that yields the optimal rate. We obtain a new Lyapunov function for the Polyak ODE and revisit the connection between this ODE and Nesterov's algorithms. In addition, we discuss a new interpretation of Nesterov's method as an additive Runge-Kutta discretization and explain the structural conditions that discretizations of the Polyak equation should satisfy in order to lead to accelerated optimization algorithms.
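
For concreteness, here is a sketch of the two-parameter iteration on a strongly convex quadratic; the parameter names and the standard choices below are assumptions, not the paper's optimized coefficients:

```python
# Sketch of the two-parameter Nesterov family on a strongly convex quadratic:
#   y_k     = x_k + beta * (x_k - x_{k-1})
#   x_{k+1} = y_k - alpha * grad_f(y_k)
import numpy as np

L, m = 100.0, 1.0                       # smoothness and strong-convexity moduli
A = np.diag(np.linspace(m, L, 50))      # f(x) = x^T A x / 2, so grad f(x) = A x
grad = lambda x: A @ x

alpha = 1.0 / L                                       # standard step size
kappa = L / m
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)    # standard momentum choice

x_prev = x = np.ones(50)
for k in range(200):
    y = x + beta * (x - x_prev)
    x_prev, x = x, y - alpha * grad(y)
print(np.linalg.norm(x))   # linear convergence at a rate governed by (alpha, beta)
```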

Convolutional codes with a maximum distance profile attain the largest possible column distances for the maximum number of time instants and thus have outstanding error-correcting capability, especially for streaming applications. Explicit constructions of such codes are scarce in the literature. In particular, known constructions of convolutional codes with rate $k/n$ and a maximum distance profile require a field of size at least exponential in $n$ for general code parameters. At the same time, the only known lower bound on the field size is the trivial bound, which is linear in $n$. In this paper, we show that a finite field of size $\Omega_L(n^{L-1})$ is necessary for constructing convolutional codes with rate $k/n$ and a maximum distance profile of length $L$. As a direct consequence, this rules out the possibility of constructing convolutional codes with a maximum distance profile of length $L \geq 3$ over a finite field of size $O(n)$. Additionally, we present an explicit construction of a convolutional code with rate $k/n$ and a maximum distance profile of length $L = 1$ over a finite field of size $O(n^{\min\{k,n-k\}})$, achieving a smaller field size than known constructions with the same profile length.
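
A brute-force check of column distances for a tiny code over GF(2) illustrates the quantities involved (our example; such enumeration is feasible only for very small parameters):

```python
# Brute-force column distances: for generator matrices G_0,...,G_m over GF(2),
# the j-th column distance d_j is the minimum weight of the first j+1 output
# blocks over all inputs with u_0 != 0. A maximum distance profile of length L
# attains the bound d_j = (n-k)*(j+1) + 1 for j = 0,...,L.
import itertools
import numpy as np

def column_distance(Gs, k, n, j):
    m = len(Gs) - 1
    best = None
    for bits in itertools.product([0, 1], repeat=k * (j + 1)):
        u = np.array(bits, dtype=int).reshape(j + 1, k)
        if not u[0].any():
            continue                       # require u_0 != 0
        weight = 0
        for t in range(j + 1):
            v_t = np.zeros(n, dtype=int)
            for i in range(min(t, m) + 1):
                v_t = (v_t + u[t - i] @ Gs[i]) % 2   # truncated convolution
            weight += int(v_t.sum())
        best = weight if best is None else min(best, weight)
    return best

# Rate-1/2, memory-2 code with G(D) = [1 + D^2, 1 + D + D^2]:
Gs = [np.array([[1, 1]]), np.array([[0, 1]]), np.array([[1, 1]])]
print([column_distance(Gs, k=1, n=2, j=j) for j in range(4)])  # the column distance profile
```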

The Kolmogorov-Arnold representation of a continuous multivariate function is a decomposition of the function into a structure of inner and outer functions of a single variable. It can be a convenient tool for tasks where a predictive model mapping the vector input of a black-box system to a scalar output is required. However, constructing such a representation from recorded input-output data is a challenging task. In the present paper, we propose decomposing the underlying functions of the representation into continuous basis functions and parameters, and we introduce a novel lightweight algorithm for parameter identification. The algorithm is based on the Newton-Kaczmarz method for solving non-linear systems of equations and is locally convergent. Numerical examples show that it is more robust with respect to the selection of the initial guess for the parameters than the straightforward application of the Gauss-Newton method.
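
Here is a sketch of one common form of the Newton-Kaczmarz iteration on a toy system (the paper's variant for the Kolmogorov-Arnold parameters may differ in detail):

```python
# Sketch of a Newton-Kaczmarz sweep: cycle through the equations F_i(x) = 0,
# each time taking a Newton-style projection step onto the linearization of a
# single equation.
import numpy as np

def newton_kaczmarz(F, J_row, x0, sweeps=50):
    """F(x) -> residual vector; J_row(x, i) -> gradient of F_i; x0: initial guess."""
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(len(F(x))):
            g = J_row(x, i)
            x -= (F(x)[i] / (g @ g)) * g   # Newton step on equation i alone
    return x

# Toy system: x0^2 + x1 = 3, x0 + x1^2 = 5, with a root at [1, 2]
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x, i: np.array([2*x[0], 1.0]) if i == 0 else np.array([1.0, 2*x[1]])
print(newton_kaczmarz(F, J, np.array([1.0, 1.0])))
```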

Robust inference based on the minimization of statistical divergences has proved to be a useful alternative to classical techniques based on maximum likelihood and related methods. Basu et al. (1998) introduced the density power divergence (DPD) family as a measure of discrepancy between two probability density functions and used this family for robust estimation of the parameter for independent and identically distributed data. Ghosh et al. (2017) proposed a more general class of divergence measures, the S-divergence family, and discussed its usefulness in robust parametric estimation through several asymptotic properties and some numerical illustrations. In this paper, we develop results concerning the asymptotic breakdown point of the minimum S-divergence estimators (in particular the minimum DPD estimator) under general model setups. The primary result of this paper provides lower bounds on the asymptotic breakdown point of these estimators that are independent of the dimension of the data, in turn corroborating their usefulness in robust inference for high-dimensional data.
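
For illustration, a minimum DPD estimate of a normal location parameter (known unit scale, an assumed toy setup) shows the robustness that the breakdown-point theory quantifies:

```python
# Sketch of a minimum density power divergence (DPD) estimate following the
# Basu et al. (1998) objective. For density f_mu and tuning alpha > 0:
#   H_n(mu) = integral of f_mu(x)^(1+alpha) dx
#             - (1 + 1/alpha) * mean_i f_mu(X_i)^alpha.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def dpd_objective(mu, data, alpha):
    integral = quad(lambda x: norm.pdf(x, loc=mu) ** (1 + alpha), -np.inf, np.inf)[0]
    return integral - (1 + 1 / alpha) * np.mean(norm.pdf(data, loc=mu) ** alpha)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 95), np.full(5, 20.0)])  # 5% outliers

for alpha in (0.1, 0.5, 1.0):
    mu_hat = minimize_scalar(dpd_objective, bounds=(-5, 25), args=(data, alpha),
                             method='bounded').x
    print(alpha, mu_hat)    # stays near 0 despite the outliers
print(data.mean())          # the non-robust sample mean is dragged toward 1.0
```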

Inspired by certain regularization techniques for linear inverse problems, in this work we investigate the convergence properties of the Levenberg-Marquardt method using singular scaling matrices. Under a completeness condition, we show that the method is well defined and establish its local quadratic convergence under an error bound assumption. We also prove that the search directions are gradient-related, which allows us to show that limit points of the sequence generated by a line-search version of the method are stationary for the sum-of-squares function. The usefulness of the method is illustrated with examples of parameter identification in heat conduction problems, for which specific singular scaling matrices can be used to improve the quality of approximate solutions.
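
A minimal sketch of a Levenberg-Marquardt iteration with a singular scaling matrix, assuming a toy exponential-fitting problem and a fixed regularization parameter (the paper's line search and convergence safeguards are omitted):

```python
# Sketch of Levenberg-Marquardt with a (possibly singular) scaling matrix S:
# solve  (J^T J + lam * S^T S) d = -J^T r  at each iterate. A completeness-type
# requirement, ker(J) and ker(S) intersecting only at 0, keeps this linear
# system nonsingular even though S itself is singular.
import numpy as np

def lm_singular_scaling(residual, jac, S, x0, lam=1e-2, iters=50):
    x = x0.astype(float).copy()
    for _ in range(iters):
        r, J = residual(x), jac(x)
        d = np.linalg.solve(J.T @ J + lam * S.T @ S, -J.T @ r)
        x += d
    return x

# Toy data-fitting problem y ~ c1 * exp(c2 * t) with a rank-deficient scaling
# that regularizes only the first parameter (an assumed example):
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-1.5 * t)
residual = lambda c: c[0] * np.exp(c[1] * t) - y
jac = lambda c: np.column_stack([np.exp(c[1] * t), c[0] * t * np.exp(c[1] * t)])
S = np.diag([1.0, 0.0])                      # singular scaling matrix
print(lm_singular_scaling(residual, jac, S, np.array([1.0, -1.0])))  # ~ [2, -1.5]
```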

Under-approximations of reachable sets and tubes have been receiving growing research attention due to their important roles in control synthesis and verification. Available under-approximation methods for continuous-time linear systems typically assume that transition matrices and their integrals can be computed exactly, which is not feasible in general, and/or suffer from high computational costs. In this note, we attempt to overcome these drawbacks for a class of linear time-invariant (LTI) systems by proposing a novel method to under-approximate finite-time forward reachable sets and tubes, utilizing approximations of the matrix exponential and its integral. In particular, we consider continuous-time LTI systems with an identity input matrix and with initial and input values belonging to full-dimensional sets that are affine transformations of closed unit balls. The proposed method yields computationally efficient under-approximations of reachable sets and tubes when implemented using zonotopes, with first-order convergence guarantees in the sense of the Hausdorff distance. To illustrate its performance, we apply our approach to three numerical examples involving linear systems of dimension between 2 and 200.
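
The zonotope operations such methods build on can be sketched as follows (our illustration; the paper's specific under-approximation correction terms and error directions are omitted):

```python
# Zonotope arithmetic sketch. A zonotope is Z = {c + G*xi : ||xi||_inf <= 1};
# reachable-set recursions over a time step tau combine a linear map by
# expm(A*tau) with Minkowski sums accounting for the input set.
import numpy as np
from scipy.linalg import expm

class Zonotope:
    def __init__(self, c, G):
        self.c, self.G = np.asarray(c, float), np.asarray(G, float)

    def linear_map(self, M):
        return Zonotope(M @ self.c, M @ self.G)

    def minkowski_sum(self, other):
        return Zonotope(self.c + other.c, np.hstack([self.G, other.G]))

# x' = A x + u with an identity input matrix, propagated over steps of size tau:
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
tau = 0.05
X0 = Zonotope([1.0, 0.0], 0.1 * np.eye(2))          # initial set
U = Zonotope([0.0, 0.0], 0.02 * np.eye(2))          # input set

Phi = expm(A * tau)
X = X0
for _ in range(100):                                 # propagate over [0, 5]
    # tau * U is a crude stand-in for the exact input integral; the careful
    # approximation of that integral (and its error direction, which decides
    # whether the result is an under- or over-approximation) is the hard part.
    X = X.linear_map(Phi).minkowski_sum(Zonotope(tau * U.c, tau * U.G))
print(X.c, X.G.shape)                                # generator count grows each step
```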
