
We present an $\ell_2^2+\ell_1$-regularized discrete least squares approximation over general regions under the assumptions of hyperinterpolation, named hybrid hyperinterpolation. Hybrid hyperinterpolation uses a soft thresholding operator and a filter function to shrink the Fourier coefficients of a given continuous function, approximated by a high-order quadrature rule with respect to some orthonormal basis; it is thus a combination of Lasso and filtered hyperinterpolations, and it inherits the features of both for dealing with noisy data, provided the regularization parameter and the filter function are chosen well. We not only derive theoretical $L_2$ error bounds for hybrid hyperinterpolation of continuous functions, with and without noise, but also decompose the $L_2$ error into three exactly computed terms with the aid of an a priori regularization-parameter choice rule. This rule, which makes full use of the hyperinterpolation coefficients to choose the regularization parameter, reveals that the $L_2$ error of hybrid hyperinterpolation first decreases sharply and then increases slowly as the sparsity of the coefficients grows from one to large values. Numerical examples show the enhanced performance of hybrid hyperinterpolation as the regularization parameter and the noise vary. The theoretical $L_2$ error bounds are verified in numerical examples on the interval, the unit disk, the unit sphere, the unit cube, and the union of disks.
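As a rough illustration of the coefficient shrinkage at the heart of the method, the sketch below applies a soft-thresholding operator and a filter to a vector of quadrature-approximated Fourier coefficients. The function names and the exact composition of filtering and thresholding are assumptions of this sketch, not a reproduction of the paper's operator.

```python
import numpy as np

def soft_threshold(c, lam):
    # Soft-thresholding operator: shrinks each coefficient toward zero
    # by lam and sets coefficients of magnitude <= lam exactly to zero.
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def hybrid_shrink(coeffs, lam, filter_vals):
    # Hypothetical hybrid shrinkage: soft-threshold the approximated
    # Fourier coefficients, then damp them with filter values in [0, 1].
    return filter_vals * soft_threshold(coeffs, lam)
```

The soft threshold produces the sparsity associated with Lasso hyperinterpolation, while the filter values supply the smooth damping associated with filtered hyperinterpolation.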

Related content

Block majorization-minimization (BMM) is a simple iterative algorithm for nonconvex constrained optimization that sequentially minimizes majorizing surrogates of the objective function in each block coordinate while the other coordinates are held fixed. BMM encompasses a large class of optimization algorithms such as block coordinate descent and its proximal-point variant, expectation-maximization, and block projected gradient descent. We establish that for general constrained nonconvex optimization, BMM with strongly convex surrogates can produce an $\epsilon$-stationary point within $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$ iterations and asymptotically converges to the set of stationary points. Furthermore, we propose a trust-region variant of BMM that can handle surrogates that are only convex and still obtain the same iteration complexity and asymptotic stationarity. These results hold robustly even when the convex sub-problems are inexactly solved, as long as the optimality gaps are summable. As an application, we show that a regularized version of the celebrated multiplicative update algorithm for nonnegative matrix factorization by Lee and Seung has iteration complexity $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$. The same result holds for a wide class of regularized nonnegative tensor decomposition algorithms as well as the classical block projected gradient descent algorithm. These theoretical results are validated through various numerical experiments.
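For concreteness, here is a minimal sketch of the classical (unregularized) Lee-Seung multiplicative updates for nonnegative matrix factorization, the algorithm whose regularized variant the complexity result covers. The initialization, iteration count, and small stabilizing constant are choices of this sketch, not the paper's.

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=500, eps=1e-10, seed=0):
    # Lee-Seung multiplicative updates for min ||X - W H||_F^2
    # over entrywise-nonnegative W (m x r) and H (r x n).
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update block H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update block W with H fixed
    return W, H
```

Each update minimizes a majorizing surrogate of the objective in one block while the other is held fixed, which is exactly the BMM structure the abstract analyzes.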

We present new Dirichlet-Neumann and Neumann-Dirichlet algorithms with a time domain decomposition applied to unconstrained parabolic optimal control problems. After a spatial semi-discretization, we use the Lagrange multiplier approach to derive a coupled forward-backward optimality system, which can then be solved using a time domain decomposition. Due to the forward-backward structure of the optimality system, three variants can be found for the Dirichlet-Neumann and Neumann-Dirichlet algorithms. We analyze their convergence behavior and determine the optimal relaxation parameter for each algorithm. Our analysis reveals that the most natural algorithms are actually only good smoothers, and there are better choices which lead to efficient solvers. We illustrate our analysis with numerical experiments.

Given a square pencil $A+ \lambda B$, where $A$ and $B$ are complex matrices, we consider the problem of finding the singular pencil nearest to it in the Frobenius distance. This problem is known to be very difficult, and the few algorithms available in the literature can only deal efficiently with pencils of very small size. We show that the problem is equivalent to minimizing a certain objective function over the Riemannian manifold $SU(n) \times SU(n)$, where $SU(n)$ denotes the special unitary group. With minor modifications, the same approach extends to the case of finding a nearest singular pencil with a specified minimal index. This novel perspective is based on the generalized Schur form of pencils and yields a competitive numerical method when paired with an algorithm capable of performing optimization on a Riemannian manifold. We provide numerical experiments showing that the resulting method allows us to deal with pencils of much larger size than alternative techniques, yielding candidate minimizers of comparable or better quality. In the course of our analysis, we also obtain a number of new theoretical results related to the generalized Schur form of a (regular or singular) square pencil and to the minimal index of a singular square pencil whose nullity is $1$.

Geometric quantiles are location parameters which extend classical univariate quantiles to normed spaces (possibly infinite-dimensional) and which include the geometric median as a special case. The infinite-dimensional setting is highly relevant in the modeling and analysis of functional data, as well as for kernel methods. We begin by providing new results on the existence and uniqueness of geometric quantiles. Estimation is then performed with an approximate M-estimator and we investigate its large-sample properties in infinite dimension. When the population quantile is not uniquely defined, we leverage the theory of variational convergence to obtain asymptotic statements on subsequences in the weak topology. When there is a unique population quantile, we show that the estimator is consistent in the norm topology for a wide range of Banach spaces including every separable uniformly convex space. In separable Hilbert spaces, we establish weak Bahadur-Kiefer representations of the estimator, from which $\sqrt n$-asymptotic normality follows.
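In finite dimension, the geometric median (the geometric quantile at the origin) can be computed by the classical Weiszfeld fixed-point iteration, sketched below. This is a standard textbook algorithm, not the approximate M-estimator whose large-sample properties the abstract analyzes.

```python
import numpy as np

def geometric_median(points, n_iter=100, tol=1e-9):
    # Weiszfeld iteration: minimizes y -> sum_i ||x_i - y|| by repeated
    # distance-weighted averaging of the data points (rows of `points`).
    y = points.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(points - y, axis=1)
        d = np.where(d < tol, tol, d)  # guard against division by zero
        w = 1.0 / d
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y
```

Unlike the componentwise median, this minimizer is equivariant under rotations of the norm, which is the property that makes geometric quantiles natural in general normed spaces.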

The eigenvalue density generated by the embedded Gaussian unitary ensemble with $k$-body interactions for two-species (say $\mathbf{\pi}$ and $\mathbf{\nu}$) fermion systems is investigated by deriving formulas for the lowest six moments. In constructing this ensemble, called EGUE($k:\mathbf{\pi} \mathbf{\nu}$), it is assumed that the $\mathbf{\pi}$ fermions ($m_1$ in number) occupy $N_1$ degenerate single-particle (sp) states and, similarly, the $\mathbf{\nu}$ fermions ($m_2$ in number) occupy $N_2$ degenerate sp states. The Hamiltonian is assumed to be $k$-body and to preserve $(m_1,m_2)$. Both the formulas with finite-$(N_1,N_2)$ corrections and the asymptotic-limit formulas show that the eigenvalue density takes the $q$-normal form, with the $q$ parameter defined by the fourth moment. The EGUE($k:\mathbf{\pi} \mathbf{\nu}$) formalism and results are extended to two-species boson systems. The results in this work show that the $q$-normal form of the eigenvalue density, established only recently for identical fermion and boson systems, extends to two-species fermion and boson systems.

We introduce a new stochastic algorithm to locate the index-1 saddle points of a function $V:\mathbb R^d \to \mathbb R$, with $d$ possibly large. This algorithm can be seen as an equivalent of the stochastic gradient descent which is a natural stochastic process to locate local minima. It relies on two ingredients: (i) the concentration properties on index-1 saddle points of the first eigenmodes of the Witten Laplacian (associated with $V$) on $1$-forms and (ii) a probabilistic representation of a partial differential equation involving this differential operator. Numerical examples on simple molecular systems illustrate the efficacy of the proposed approach.

A superdirective antenna array has the potential to achieve an array gain proportional to the square of the number of antennas, making it of great value for future wireless communications. However, designing the superdirective beamformer while accounting for the complicated mutual-coupling effect is a practical challenge. Moreover, the superdirective antenna array is highly sensitive to excitation errors, especially when the number of antennas is large or the antenna spacing is very small, necessitating demanding and precise control over excitations. To address these problems, we first propose a novel superdirective beamforming approach based on the embedded element pattern (EEP), which contains the coupling information. The closed-form solution for the beamforming vector and the corresponding directivity factor are derived. This method relies on the beam coupling factors (BCFs) between the antennas, which are provided in closed form. To address the high-sensitivity problem, we formulate a constrained optimization problem and propose an EEP-aided orthogonal complement-based robust beamforming (EEP-OCRB) algorithm. Full-wave simulation results validate our proposed methods. Finally, we build a prototype of a 5-dipole superdirective antenna array and conduct real-world experiments. The measurement results demonstrate the realization of superdirectivity with our EEP-based method, as well as the robustness of the proposed EEP-OCRB algorithm to excitation errors.
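A common closed-form structure behind directivity-maximizing beamformers is to write the directivity as a generalized Rayleigh quotient $|\mathbf{w}^H\mathbf{e}|^2/(\mathbf{w}^H\mathbf{A}\mathbf{w})$, maximized by $\mathbf{w}\propto\mathbf{A}^{-1}\mathbf{e}$ for a Hermitian positive-definite coupling matrix $\mathbf{A}$. The sketch below computes this generic solution; the paper's EEP-based solution is more specific, so the matrix and vector names here are assumptions.

```python
import numpy as np

def max_directivity_weights(A, e):
    # Generic max-directivity beamformer: directivity is the generalized
    # Rayleigh quotient |w^H e|^2 / (w^H A w), with A the (Hermitian
    # positive-definite) beam-coupling matrix and e the steering vector.
    # The maximizer is w = A^{-1} e, giving directivity D = e^H A^{-1} e.
    w = np.linalg.solve(A, e)
    D = abs(np.vdot(w, e)) ** 2 / float(np.real(np.vdot(w, A @ w)))
    return w, D
```

For uncoupled isotropic elements ($\mathbf{A}=\mathbf{I}$, $\mathbf{e}$ all-ones), this recovers the familiar directivity $D=N$; superdirectivity arises when strong coupling makes $\mathbf{A}$ ill-conditioned, which is also why the weights become so sensitive to excitation errors.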

In sampling-based Bayesian models of brain function, neural activities are assumed to be samples from probability distributions that the brain uses for probabilistic computation. However, a comprehensive understanding of how mechanistic models of neural dynamics can sample from arbitrary distributions is still lacking. We use tools from functional analysis and stochastic differential equations to explore the minimum architectural requirements for $\textit{recurrent}$ neural circuits to sample from complex distributions. We first consider the traditional sampling model consisting of a network of neurons whose outputs directly represent the samples (sampler-only network). We argue that synaptic current and firing-rate dynamics in the traditional model have limited capacity to sample from a complex probability distribution. We show that the firing rate dynamics of a recurrent neural circuit with a separate set of output units can sample from an arbitrary probability distribution. We call such circuits reservoir-sampler networks (RSNs). We propose an efficient training procedure based on denoising score matching that finds recurrent and output weights such that the RSN implements Langevin sampling. We empirically demonstrate our model's ability to sample from several complex data distributions using the proposed neural dynamics and discuss its applicability to developing the next generation of sampling-based brain models.
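The Langevin sampling that a trained RSN is meant to implement can be sketched in a few lines. Here the score function is supplied analytically, whereas in the paper it is realized by the recurrent and output weights learned via denoising score matching; the step size and step count are choices of this sketch.

```python
import numpy as np

def langevin_sample(score, x0, step=1e-2, n_steps=2000, seed=0):
    # Euler-Maruyama discretization of overdamped Langevin dynamics:
    #   dx = score(x) dt + sqrt(2 dt) dW,
    # where score(x) = grad log p(x). The stationary law is (close to) p.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x += step * score(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x
```

With the standard-normal score `score = lambda x: -x`, long runs of this chain produce samples whose mean and variance match $\mathcal{N}(0,1)$ up to discretization bias.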

Differential geometric approaches are ubiquitous in several fields of mathematics, physics and engineering, and their discretizations enable the development of network-based mathematical and computational frameworks, which are essential for large-scale data science. The Forman-Ricci curvature (FRC) - a statistical measure based on Riemannian geometry and designed for networks - is known for its high capacity for extracting geometric information from complex networks. However, extracting information from dense networks is still challenging due to the combinatorial explosion of high-order network structures. Motivated by this challenge, we develop a set-theoretic representation theory for high-order network cells and FRC, together with their associated concepts and properties, which provides an alternative and efficient formulation for computing high-order FRC in complex networks. We provide pseudo-code, a software implementation coined FastForman, and a benchmark comparison with alternative implementations. Crucially, our representation theory reveals previous computational bottlenecks and also accelerates the computation of FRC. As a consequence, our findings open new research possibilities in complex systems where higher-order geometric computations are required.
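For the simplest case (an unweighted graph viewed as a 1-complex, with no higher-order cells), the edge-level Forman-Ricci curvature reduces to $F(u,v)=4-\deg u-\deg v$, computed below from an adjacency dictionary. The cell-augmented, higher-order FRC that FastForman targets is considerably more involved, so this is only a baseline illustration.

```python
def forman_ricci(adj):
    # Edge-level Forman-Ricci curvature of an unweighted graph:
    #   F(u, v) = 4 - deg(u) - deg(v)   (1-complex, no higher cells).
    # `adj` maps each node to the set of its neighbors; edges are
    # returned once each, keyed by the ordered pair (u, v) with u < v.
    deg = {u: len(nbrs) for u, nbrs in adj.items()}
    return {(u, v): 4 - deg[u] - deg[v]
            for u in adj for v in adj[u] if u < v}
```

On a triangle every edge has curvature $0$, while high-degree hubs produce strongly negative edge curvatures, which is the qualitative signal FRC extracts from complex networks.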

For a state $\rho_{A_1^n B}$, we call a sequence of states $(\sigma_{A_1^k B}^{(k)})_{k=1}^n$ an approximation chain if for every $1 \leq k \leq n$, $\rho_{A_1^k B} \approx_\epsilon \sigma_{A_1^k B}^{(k)}$. In general, it is not possible to lower bound the smooth min-entropy of such a $\rho_{A_1^n B}$, in terms of the entropies of $\sigma_{A_1^k B}^{(k)}$ without incurring very large penalty factors. In this paper, we study such approximation chains under additional assumptions. We begin by proving a simple entropic triangle inequality, which allows us to bound the smooth min-entropy of a state in terms of the R\'enyi entropy of an arbitrary auxiliary state while taking into account the smooth max-relative entropy between the two. Using this triangle inequality, we create lower bounds for the smooth min-entropy of a state in terms of the entropies of its approximation chain in various scenarios. In particular, utilising this approach, we prove an approximate version of entropy accumulation and also provide a solution to the source correlation problem in quantum key distribution.
