
We consider a class of eigenvector-dependent nonlinear eigenvalue problems (NEPv) without the unitary invariance property. Such NEPv commonly arise as the first-order optimality conditions of a particular type of optimization problem over the Stiefel manifold, and special cases have previously been studied in the literature. We reveal two necessary conditions, a definiteness condition and a rank-preserving condition, on an eigenbasis matrix of the NEPv that is a global optimizer of the associated optimization problem; the definiteness condition was already known for the previously investigated special cases. We show that, in a neighborhood of an eigenbasis matrix satisfying both necessary conditions, the NEPv can be reformulated as a unitarily invariant NEPv, the so-called aligned NEPv, through a basis alignment operation -- in other words, the NEPv is locally unitarily invariantizable. Numerically, the NEPv is naturally solved by an SCF-type iteration. By exploiting the differentiability of the coefficient matrix of the aligned NEPv, we establish a closed-form local convergence rate for the SCF-type iteration and analyze its level-shifted variant. Numerical experiments confirm our theoretical results.
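
The abstract does not spell out the iteration, but a plain SCF step for an NEPv $H(V)V = V\Lambda$ with orthonormal $V$ replaces $V$ by an eigenbasis of $H(V)$. Below is a minimal NumPy sketch under illustrative assumptions (a toy coefficient matrix $H(V)$ that depends on $V$ itself, and the eigenbasis of the $k$ smallest eigenvalues being the one of interest); the basis alignment and level-shifting discussed above are not shown:

```python
import numpy as np

def scf(H, V0, maxit=200, tol=1e-10):
    """Plain SCF iteration for an NEPv H(V) V = V Lambda with V having orthonormal columns."""
    V = V0
    for _ in range(maxit):
        HV = H(V)
        R = HV @ V - V @ (V.T @ HV @ V)      # NEPv residual
        if np.linalg.norm(R) < tol:
            break
        _, U = np.linalg.eigh(HV)
        V = U[:, :V.shape[1]]                # eigenbasis of the k smallest eigenvalues
    return V

# Toy coefficient matrix depending on V itself, not only on V V^T (illustrative, not from the paper)
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)); A = (A + A.T) / 2
C = rng.standard_normal((2, 8))
H = lambda V: A + 0.05 * (V @ C + (V @ C).T)
V0, _ = np.linalg.qr(rng.standard_normal((8, 2)))
V = scf(H, V0)
```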

Related content

The unit selection problem aims to identify objects, called units, that are most likely to exhibit a desired mode of behavior when subjected to stimuli (e.g., customers who are about to churn but would change their mind if encouraged). Unit selection with counterfactual objective functions was introduced relatively recently, with existing work focusing on bounding a specific class of objective functions, called benefit functions, based on observational and interventional data -- assuming a fully specified model is not available to evaluate these functions. We complement this line of work by proposing the first exact algorithm for finding optimal units given a broad class of causal objective functions and a fully specified structural causal model (SCM). We show that unit selection under this class of objective functions is $\text{NP}^\text{PP}$-complete, but is $\text{NP}$-complete when unit variables correspond to all exogenous variables in the SCM. We also provide treewidth-based complexity bounds on our proposed algorithm while relating it to a well-known algorithm for Maximum a Posteriori (MAP) inference.
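
With a fully specified SCM, a counterfactual objective can in principle be evaluated exactly by enumerating the exogenous states; the algorithm above is far more refined (treewidth-based, related to MAP inference), but a brute-force toy illustrates the setting. All model names, probabilities, and the benefit-style objective below are illustrative assumptions:

```python
# Toy fully specified SCM (all names and numbers are illustrative assumptions):
#   exogenous U_C in {0,1}, U_Y in {never, complier, always, defier};
#   observed covariate C := f_C(U_C, U_Y); response Y := f_Y(x, U_Y) under do(X=x).
P_UC = {0: 0.6, 1: 0.4}
P_UY = {"never": 0.3, "complier": 0.4, "always": 0.2, "defier": 0.1}

def f_C(u_c, u_y):
    return 1 if (u_c == 1 or u_y == "always") else 0

def f_Y(x, u_y):
    return {"never": 0, "complier": x, "always": 1, "defier": 1 - x}[u_y]

def objective(c):
    """A benefit-style counterfactual objective, P(Y_{X=1}=1, Y_{X=0}=0 | C=c),
    evaluated exactly by enumerating exogenous states (brute force, not the paper's algorithm)."""
    num = den = 0.0
    for u_c, p_c in P_UC.items():
        for u_y, p_y in P_UY.items():
            w = p_c * p_y
            if f_C(u_c, u_y) != c:
                continue
            den += w
            if f_Y(1, u_y) == 1 and f_Y(0, u_y) == 0:
                num += w
    return num / den

scores = {c: objective(c) for c in (0, 1)}
print(scores, "-> select units with C =", max(scores, key=scores.get))
```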

Gradients have been exploited in proposal distributions to accelerate the convergence of Markov chain Monte Carlo algorithms on discrete distributions. However, these methods require a natural differentiable extension of the target discrete distribution, which often does not exist or does not provide effective gradient guidance. In this paper, we develop a gradient-like proposal for any discrete distribution without this strong requirement. Built upon a locally-balanced proposal, our method efficiently approximates the discrete likelihood ratio via Newton's series expansion, enabling large and efficient exploration of discrete spaces. We show that our method can also be viewed as a multilinear extension, thus inheriting its desirable properties. We prove that our method has a guaranteed convergence rate with or without the Metropolis-Hastings step. Furthermore, our method outperforms a number of popular alternatives in several experiments, including the facility location problem, extractive text summarization, and image retrieval.
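
For intuition, a locally-balanced proposal over single-bit flips weights each neighbor by $g(\pi(x')/\pi(x))$ with $g(t)=\sqrt{t}$ and then applies a Metropolis-Hastings correction. The sketch below uses exact log-probability differences on a toy Ising-like target; the method described above replaces these differences with a cheap approximation (not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
J = rng.standard_normal((d, d)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)

def log_prob(x):                        # toy Ising-like target on {0,1}^d (illustrative assumption)
    s = 2.0 * x - 1.0
    return 0.5 * s @ J @ s

def flip(x, i):
    y = x.copy(); y[i] = 1 - y[i]; return y

def lb_weights(x):
    """Locally-balanced weights g(pi(x')/pi(x)) with g(t)=sqrt(t) over single-bit flips.
    Here the log-ratios are exact; the point of the paper is to approximate them cheaply."""
    lp = log_prob(x)
    diffs = np.array([log_prob(flip(x, i)) - lp for i in range(len(x))])
    return np.exp(0.5 * diffs)

def lb_mh_step(x):
    w_x = lb_weights(x)
    i = rng.choice(len(x), p=w_x / w_x.sum())
    y = flip(x, i)
    w_y = lb_weights(y)
    # Metropolis-Hastings correction for the state-dependent proposal
    log_acc = (log_prob(y) - log_prob(x)
               + np.log(w_y[i] / w_y.sum()) - np.log(w_x[i] / w_x.sum()))
    return y if np.log(rng.uniform()) < log_acc else x

x = rng.integers(0, 2, size=d).astype(float)
for _ in range(200):
    x = lb_mh_step(x)
```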

We design an adaptive virtual element method (AVEM) of lowest order over triangular meshes with hanging nodes in 2d, which are treated as polygons. AVEM hinges on the stabilization-free a posteriori error estimators recently derived in [8]. The crucial property, which also plays a central role in this paper, is that the stabilization term can be made arbitrarily small relative to the a posteriori error estimators upon increasing the stabilization parameter. Our AVEM concatenates two modules, GALERKIN and DATA. The former deals with piecewise constant data and is shown in [8] to be a contraction between consecutive iterates. The latter approximates general data by piecewise constants to a desired accuracy. AVEM is shown to be convergent and quasi-optimal, in terms of error decay versus degrees of freedom, for solutions and data belonging to appropriate approximation classes. Numerical experiments illustrate the interplay between these two modules and provide computational evidence of optimality.
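
As a rough illustration of the DATA module's task (approximating general data by piecewise constants to a desired accuracy), the following one-dimensional sketch greedily bisects the subinterval with the largest local $L^2$ deviation from its mean; this is only an illustrative analogue, not the 2d polygonal-mesh AVEM algorithm itself:

```python
import numpy as np

def approximate_data(f, a, b, tol, nquad=64):
    """Greedily bisect the interval where f deviates most from its local mean
    until the total L2 deviation is below tol (piecewise-constant data approximation)."""
    intervals = [(a, b)]
    while True:
        errs = []
        for (l, r) in intervals:
            x = np.linspace(l, r, nquad)
            fx = f(x)
            errs.append(np.sqrt((r - l) * np.mean((fx - fx.mean()) ** 2)))   # local L2 deviation
        if np.sqrt(np.sum(np.square(errs))) <= tol:
            return intervals
        l, r = intervals.pop(int(np.argmax(errs)))     # mark the worst interval
        m = 0.5 * (l + r)
        intervals += [(l, m), (m, r)]                  # refine by bisection

parts = approximate_data(np.sin, 0.0, np.pi, tol=1e-2)
print(len(parts), "subintervals")
```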

This paper presents the Residual QPAS Subspace (ResQPASS) method, which solves large-scale least-squares problems with bound constraints on the variables. The problem is solved through a series of small problems of increasing size, obtained by projecting onto the basis of residuals. Each projected problem is solved by the QPAS method, warm-started with the working set and solution of the previous problem. The method coincides with conjugate gradients (CG) applied to the normal equations when none of the constraints is active. When only a few constraints are active, the method converges, after a few initial iterations, like the CG method. We develop a convergence theory that links the convergence to Krylov subspaces. We also present an efficient implementation in which the QR factorizations are updated over the inner and outer iterations.
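
Below is a rough sketch of the outer loop under stated assumptions: the subspace is grown with normalized residuals of the normal equations, and each projected bound-constrained problem is solved with SciPy's SLSQP as a stand-in for the warm-started QPAS solver (the QR-updating implementation mentioned above is not reproduced):

```python
import numpy as np
from scipy.optimize import minimize

def resqpass_sketch(A, b, lo, hi, max_outer=20, tol=1e-8):
    """Growing-subspace sketch: basis of normal-equation residuals, projected
    bound-constrained least squares solved by SLSQP as a stand-in for QPAS."""
    m, n = A.shape
    x = np.zeros(n)
    V = np.zeros((n, 0))
    y = np.zeros(0)
    for _ in range(max_outer):
        r = A.T @ (b - A @ x)                     # residual of the normal equations
        if np.linalg.norm(r) < tol:
            break
        V = np.column_stack([V, r / np.linalg.norm(r)])
        y0 = np.append(y, 0.0)                    # warm start from the previous projected solution
        obj = lambda y_: 0.5 * np.sum((A @ (V @ y_) - b) ** 2)
        cons = ({"type": "ineq", "fun": lambda y_: V @ y_ - lo},
                {"type": "ineq", "fun": lambda y_: hi - V @ y_})
        y = minimize(obj, y0, constraints=cons, method="SLSQP").x
        x = V @ y
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 30)); b = rng.standard_normal(200)
x = resqpass_sketch(A, b, lo=-0.1 * np.ones(30), hi=0.1 * np.ones(30))
```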

We study the problem of designing adaptive multi-armed bandit algorithms that perform optimally in both the stochastic setting and the adversarial setting simultaneously (often known as a best-of-both-worlds guarantee). A line of recent work shows that when configured and analyzed properly, the Follow-the-Regularized-Leader (FTRL) algorithm, originally designed for the adversarial setting, can in fact optimally adapt to the stochastic setting as well. Such results, however, critically rely on the assumption that there exists a unique optimal arm. Recently, Ito (2021) took the first step toward removing this undesirable uniqueness assumption for one particular FTRL algorithm with the $\frac{1}{2}$-Tsallis entropy regularizer. In this work, we significantly improve and generalize this result, showing that uniqueness is unnecessary for FTRL with a broad family of regularizers and a new learning rate schedule. For some regularizers, our regret bounds also improve upon prior results even when uniqueness holds. We further provide an application of our results to the decoupled exploration and exploitation problem, demonstrating that our techniques are broadly applicable.
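
For concreteness, the FTRL update with the $\frac{1}{2}$-Tsallis entropy regularizer has the well-known closed form $p_i = 4/(\eta_t(\hat L_{t,i} - z))^2$, where the normalizer $z$ is chosen so that the probabilities sum to one. The sketch below uses this standard update with an illustrative learning rate, not the new schedule proposed in the work above:

```python
import numpy as np

def tsallis_probs(L, eta, iters=60):
    """FTRL with the 1/2-Tsallis entropy: p_i = 4 / (eta * (L_i - z))^2, with z found
    by bisection on z < min_i L_i so that the probabilities sum to one."""
    K = len(L)
    z_hi = L.min() - 2.0 / eta                 # here the smallest-loss arm alone has mass 1
    z_lo = L.min() - 2.0 * np.sqrt(K) / eta    # here the total mass is at most 1
    for _ in range(iters):
        z = 0.5 * (z_lo + z_hi)
        if np.sum(4.0 / (eta * (L - z)) ** 2) > 1.0:
            z_hi = z
        else:
            z_lo = z
    p = 4.0 / (eta * (L - z)) ** 2
    return p / p.sum()

# One bandit loop with importance-weighted loss estimates (losses and schedule are illustrative)
rng = np.random.default_rng(0)
K, T = 5, 2000
L_hat = np.zeros(K)
for t in range(1, T + 1):
    p = tsallis_probs(L_hat, eta=2.0 / np.sqrt(t))
    arm = rng.choice(K, p=p)
    loss = rng.uniform(0.0, 0.4) if arm == 0 else rng.uniform(0.2, 0.8)
    L_hat[arm] += loss / p[arm]                # unbiased importance-weighted estimator
```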

The research in this article aims to find conditions of an algorithmic nature that are necessary and sufficient to transform any Boolean function in conjunctive normal form into a specific form that guarantees the satisfiability of this function. To find such conditions, we use the concept of a special covering of a set introduced in [13], and we investigate the connection between this concept and the notion of satisfiability of Boolean functions. As shown, the problem of the existence of a special covering for a set is equivalent to the Boolean satisfiability problem. Thus, an important result is the proof of the existence of necessary and sufficient conditions that make it possible to determine whether there is a special covering for the set under the special decomposition. This result allows us to formulate necessary and sufficient algorithmic conditions for Boolean satisfiability, considering the function in conjunctive normal form as a set of clauses. In parallel, as a result of the aforementioned algorithmic procedure, we obtain the values of the variables that ensure the satisfiability of this function. The terminology used, relating to graph theory, set theory, Boolean functions, and complexity theory, is consistent with that in [1], [2], [3], [4]. The newly introduced terms are not used by other authors and do not contradict other terms.
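
The clause-set view of a CNF formula is easy to make concrete: a formula is a set of clauses, each clause a set of literals, and satisfiability asks for variable values under which every clause contains a true literal. The toy below finds such values by exhaustive search; it only illustrates the representation (the formula is an arbitrary example) and does not implement the special-covering machinery:

```python
from itertools import product

# CNF as a set of clauses; a literal is a positive integer v for x_v, or -v for its negation.
clauses = [{1, -2}, {-1, 3}, {2, -3}, {1, 2, 3}]
variables = sorted({abs(l) for c in clauses for l in c})

def satisfies(assignment, clauses):
    return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)

# Exhaustive search: returns variable values witnessing satisfiability, if any exist.
witness = next((dict(zip(variables, bits))
                for bits in product([False, True], repeat=len(variables))
                if satisfies(dict(zip(variables, bits)), clauses)), None)
print(witness)
```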

We consider the problem of iteratively solving large and sparse double saddle-point systems arising from the stationary Stokes-Darcy equations in two dimensions, discretized by the Marker-and-Cell (MAC) finite difference method. We analyze the eigenvalue distribution of a few ideal block preconditioners. We then derive practical preconditioners that are based on approximations of Schur complements that arise in a block decomposition of the double saddle-point matrix. We show that including the interface conditions in the preconditioners is key in the pursuit of scalability. Numerical results show good convergence behavior of our preconditioned GMRES solver and demonstrate robustness of the proposed preconditioner with respect to the physical parameters of the problem.
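
To illustrate the general idea of Schur-complement-based block preconditioning for saddle-point systems, the sketch below builds a generic 2x2-block system [A, B^T; B, 0] and preconditions GMRES with diag(A, B A^{-1} B^T); this is an illustrative analogue, not the MAC-discretized Stokes-Darcy double saddle-point system or the interface-aware preconditioners of the paper:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy 2x2-block saddle-point system K = [[A, B^T], [B, 0]] (generic analogue, sizes are arbitrary)
n, m = 80, 30
A = sp.random(n, n, density=0.1, random_state=0)
A = A @ A.T + n * sp.identity(n)                 # symmetric positive definite (1,1) block
B = sp.random(m, n, density=0.2, random_state=1)
K = sp.bmat([[A, B.T], [B, None]], format="csr")
rhs = np.random.default_rng(2).standard_normal(n + m)

# "Ideal" block-diagonal preconditioner diag(A, S) with S = B A^{-1} B^T (minus the Schur complement);
# practical preconditioners replace A^{-1} and S by cheap approximations.
A_lu = spla.splu(A.tocsc())
S_inv = np.linalg.inv(B @ A_lu.solve(B.T.toarray()))

prec = spla.LinearOperator(
    (n + m, n + m),
    matvec=lambda v: np.concatenate([A_lu.solve(v[:n]), S_inv @ v[n:]]),
)
x, info = spla.gmres(K, rhs, M=prec)
print("converged" if info == 0 else f"gmres info = {info}")
```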

The non-greedy algorithm for $L_1$-norm PCA proposed in \cite{nie2011robust} is revisited and its convergence properties are studied. The algorithm is first interpreted as a conditional subgradient method or an alternating maximization method. Treated as a conditional subgradient method, the iterative points generated by the algorithm are shown not to change after finitely many steps under a certain full-rank assumption; this assumption can be removed when the projection dimension is one. Treated as an alternating maximization method, the algorithm is proved to leave the objective value unchanged after at most $\left\lceil \frac{F^{\max}}{\tau_0} \right\rceil$ steps, and the stopping point satisfies certain optimality conditions. Then, a variant algorithm with improved convergence properties is studied. The iterative points generated by this variant do not change after at most $\left\lceil \frac{2F^{\max}}{\tau} \right\rceil$ steps, and the stopping point also satisfies certain optimality conditions, provided $\tau$ is small enough. Similar finite-step convergence is also established, under a full-rank assumption, for a slight modification of the PAMe method proposed very recently in \cite{wang2021linear}; this assumption can likewise be removed when the projection dimension is one.
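
In the alternating-maximization reading, each iteration fixes the sign matrix $S = \mathrm{sign}(X^T W)$ and then maximizes $\mathrm{trace}(W^T X S)$ over orthonormal $W$ via a polar decomposition. A minimal NumPy sketch of this reading (the variants involving $\tau_0$, $\tau$, and the modified PAMe are not shown, and the stopping rule is a naive one):

```python
import numpy as np

def l1_pca_nongreedy(X, k, maxit=200):
    """Non-greedy L1-norm PCA: alternate between fixing S = sign(X^T W) and
    maximizing trace(W^T X S) over W^T W = I via the polar factor of X S."""
    d, n = X.shape
    rng = np.random.default_rng(0)
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))
    obj = -np.inf
    for _ in range(maxit):
        S = np.sign(X.T @ W)                 # n x k sign matrix (a subgradient of the L1 objective)
        S[S == 0] = 1.0
        U, _, Vt = np.linalg.svd(X @ S, full_matrices=False)
        W = U @ Vt                           # polar factor: argmax of trace(W^T X S) over orthonormal W
        new_obj = np.abs(X.T @ W).sum()      # the objective is monotonically non-decreasing
        if new_obj <= obj + 1e-12:
            break
        obj = new_obj
    return W

X = np.random.default_rng(1).standard_normal((10, 100))
W = l1_pca_nongreedy(X - X.mean(axis=1, keepdims=True), k=2)
```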

The time-dependent Maxwell's equations govern electromagnetics. Under certain conditions, these equations can be rewritten as a second-order partial differential equation, in this case the vectorial wave equation. For the vectorial wave equation, we investigate its numerical treatment and the challenges arising in the implementation. For this purpose, we consider a space-time variational setting, i.e., time is treated as just another dimension. More specifically, we apply integration by parts in time as well as in space, leading to a space-time variational formulation with different trial and test spaces. Conforming discretizations of tensor-product type result in a Galerkin--Petrov finite element method that requires a CFL condition for stability. For this Galerkin--Petrov variational formulation, we study the CFL condition and its sharpness. To overcome the CFL condition, we use a Hilbert-type transformation that leads to a variational formulation with equal trial and test spaces. Conforming space-time discretizations then result in a new Galerkin--Bubnov finite element method that is unconditionally stable. In numerical examples, we demonstrate the effectiveness of this Galerkin--Bubnov finite element method. Furthermore, we investigate different projections of the right-hand side and their influence on the convergence rates. This paper is a first step towards more stable computations and a better understanding of vectorial wave equations in a conforming space-time approach.
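
A Hilbert-type transformation of the kind referred to above is commonly realized by mapping the temporal basis $\sin\big((\tfrac{\pi}{2}+k\pi)\,t/T\big)$ to $\cos\big((\tfrac{\pi}{2}+k\pi)\,t/T\big)$. Below is a small sketch assuming this construction, applied to a function by expanding it in the sine basis and re-synthesizing with cosines; the quadrature and truncation are crude and purely illustrative:

```python
import numpy as np

def hilbert_type_transform(v, T=1.0, n_modes=64, n_quad=4000):
    """Map sin(((pi/2)+k*pi) t/T) -> cos(((pi/2)+k*pi) t/T): expand v in the
    sine basis on (0, T) and re-synthesize the expansion with cosines."""
    t = np.linspace(0.0, T, n_quad)
    dt = t[1] - t[0]
    vt = v(t)
    Hv = np.zeros_like(t)
    for k in range(n_modes):
        lam = (np.pi / 2 + k * np.pi) / T
        phi = np.sin(lam * t)
        coeff = (2.0 / T) * np.sum(vt * phi) * dt   # L2(0,T) coefficient; basis norm is sqrt(T/2)
        Hv += coeff * np.cos(lam * t)
    return t, Hv

t, Hv = hilbert_type_transform(lambda s: s * (1.0 - s))
```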

We prove a convergence theorem for U-statistics of degree two, where the data dimension $d$ is allowed to scale with sample size $n$. We find that the limiting distribution of a U-statistic undergoes a phase transition from the non-degenerate Gaussian limit to the degenerate limit, regardless of its degeneracy and depending only on a moment ratio. A surprising consequence is that a non-degenerate U-statistic in high dimensions can have a non-Gaussian limit with a larger variance and asymmetric distribution. Our bounds are valid for any finite $n$ and $d$, independent of individual eigenvalues of the underlying function, and dimension-independent under a mild assumption. As an application, we apply our theory to two popular kernel-based distribution tests, MMD and KSD, whose high-dimensional performance has been challenging to study. In a simple empirical setting, our results correctly predict how the test power at a fixed threshold scales with $d$ and the bandwidth.
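
One of the two test statistics mentioned, MMD, is itself a degree-two U-statistic in its unbiased form, which makes the setting concrete. A short NumPy sketch with a Gaussian kernel follows; the sample sizes, dimension, and bandwidth are illustrative only:

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * bandwidth**2))

def mmd2_ustat(X, Y, bandwidth):
    """Unbiased MMD^2 estimator: a degree-two U-statistic with kernel
    h((x, y), (x', y')) = k(x, x') + k(y, y') - k(x, y') - k(x', y)."""
    n = X.shape[0]
    Kxx = gaussian_kernel(X, X, bandwidth); np.fill_diagonal(Kxx, 0.0)
    Kyy = gaussian_kernel(Y, Y, bandwidth); np.fill_diagonal(Kyy, 0.0)
    Kxy = gaussian_kernel(X, Y, bandwidth); np.fill_diagonal(Kxy, 0.0)
    return (Kxx.sum() + Kyy.sum() - 2 * Kxy.sum()) / (n * (n - 1))

rng = np.random.default_rng(0)
d = 200                                      # dimension comparable to the sample size
X = rng.standard_normal((100, d))
Y = rng.standard_normal((100, d)) + 0.05     # small mean shift
print(mmd2_ustat(X, Y, bandwidth=np.sqrt(d)))
```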
