
A locating-dominating set $D$ of a graph $G$ is a dominating set of $G$ in which each vertex not in $D$ has a unique neighborhood within $D$, and the Locating-Dominating Set problem asks whether $G$ contains such a dominating set of bounded size. The problem is known to be $\mathsf{NP}$-hard even on restricted graph classes, such as interval graphs, split graphs, and planar bipartite subcubic graphs. On the other hand, it is solvable in polynomial time on some graph classes, such as trees and, more generally, graphs of bounded cliquewidth. While these results have numerous implications for the parameterized complexity of the problem, little is known in terms of kernelization under structural parameterizations. In this work, we begin filling this gap in the literature. Our first result shows that Locating-Dominating Set, when parameterized by the solution size $d$, admits no $2^{o(d \log d)}$ time algorithm unless the Exponential Time Hypothesis (ETH) fails; as a corollary, we also show that no $n^{o(d)}$ time algorithm exists under ETH, implying that the naive $\mathsf{XP}$ algorithm is essentially optimal. We present an exponential kernel for the distance-to-cluster parameterization and show that, unless $\mathsf{NP} \subseteq \mathsf{coNP}/\mathsf{poly}$, no polynomial kernel exists for Locating-Dominating Set when parameterized by vertex cover number or by distance to clique. We then turn our attention to parameters bounded by neither of the previous two and exhibit a linear kernel when parameterizing by the max leaf number; in this context, we leave the parameterization by feedback edge set as the primary open problem in our study.
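To make the definition concrete, the following minimal Python sketch checks whether a candidate set $D$ is locating-dominating: every vertex outside $D$ must have a nonempty neighborhood inside $D$, and no two such vertices may share the same one. The adjacency-dictionary representation and the function name are illustrative choices, not taken from the paper.

```python
# Minimal sketch: check the locating-dominating property for a candidate set D.
# The graph is an adjacency dict mapping each vertex to the set of its neighbours;
# representation and names are illustrative, not from the paper.

def is_locating_dominating(adj, D):
    D = set(D)
    codes = {}
    for v in adj:
        if v in D:
            continue
        code = frozenset(adj[v] & D)   # neighbourhood of v restricted to D
        if not code:                   # v is not dominated by D
            return False
        if code in codes:              # two vertices outside D share the same code
            return False
        codes[code] = v
    return True

# Example: the path a-b-c-d. D = {a, c} dominates b and d and separates them,
# since b sees {a, c} in D while d sees only {c}.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
print(is_locating_dominating(adj, {'a', 'c'}))   # True
print(is_locating_dominating(adj, {'b'}))        # False: a and c share the code {b}
```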

Related content

We consider the problem of aggregating the judgements of a group of experts to form a single prior distribution representing the judgements of the group. We develop a Bayesian hierarchical model to reconcile the judgements of the group of experts based on elicited quantiles for continuous quantities and elicited probabilities for one-off events. Previous Bayesian reconciliation methods have not been used widely, if at all, in contrast to pooling methods and consensus-based approaches. To address this, we embed Bayesian reconciliation within the probabilistic Delphi method. The result is to furnish the outcome of the probabilistic Delphi method with a direct probabilistic interpretation, with the resulting prior representing the judgements of the decision maker. We can use the rationales from the Delphi process to group the experts for the hierarchical modelling. We illustrate the approach with applications to studies evaluating erosion in embankment dams and pump failures in a water pumping station, and assess the properties of the approach using the TU Delft database of expert judgement studies. We find that, even with an off-the-shelf implementation of the approach, it outperforms individual experts, equal weighting of experts, and the classical method based on the log score.
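As a rough illustration of the comparison criterion mentioned above, the sketch below computes the logarithmic score of an equal-weight pool of expert densities against the individual experts; the Gaussian expert summaries and the realised value are made-up toy inputs, not the paper's hierarchical model or data.

```python
# Hedged illustration: logarithmic scores of an equal-weight pool of expert densities
# versus the individual experts. The expert (mean, sd) pairs and the realised value
# below are made up for the example.
import numpy as np
from scipy.stats import norm

experts = [(10.0, 2.0), (12.0, 1.5), (9.0, 3.0)]   # elicited Gaussian summaries
realised = 11.2                                     # observed outcome

pool_density = np.mean([norm.pdf(realised, m, s) for m, s in experts])
print("equal-weight pool log score:", np.log(pool_density))
for i, (m, s) in enumerate(experts):
    print(f"expert {i} log score:", norm.logpdf(realised, m, s))
```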

The characterization of the solution set for a class of algebraic Riccati inequalities is studied. This class arises in the passivity analysis of linear time invariant control systems. Eigenvalue perturbation theory for the Hamiltonian matrix associated with the Riccati inequality is used to analyze the extremal points of the solution set.
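For readers unfamiliar with the construction, the sketch below forms the Hamiltonian matrix associated with a standard algebraic Riccati equation and inspects its eigenvalues; the Riccati inequality arising in passivity analysis has a different block structure in detail, so this is only a generic, assumed illustration.

```python
# Hedged sketch: the Hamiltonian matrix associated with a standard algebraic Riccati
# equation A^T X + X A - X B B^T X + Q = 0. The passivity setting of the paper leads to
# a related but different Hamiltonian, so this is a generic illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
Q = np.eye(n)

H = np.block([[A, -B @ B.T],
              [-Q, -A.T]])

# Hamiltonian spectra are symmetric with respect to the imaginary axis:
# if lambda is an eigenvalue, then so is -conj(lambda).
print(np.sort_complex(np.linalg.eigvals(H)))
```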

Common regularization algorithms for linear regression, such as LASSO and Ridge regression, rely on a regularization hyperparameter that balances the tradeoff between minimizing the fitting error and the norm of the learned model coefficients. As this hyperparameter is scalar, it can be easily selected via random or grid search optimizing a cross-validation criterion. However, using a scalar hyperparameter limits the algorithm's flexibility and potential for better generalization. In this paper, we address the problem of linear regression with $\ell_2$-regularization, where a different regularization hyperparameter is associated with each input variable. We optimize these hyperparameters using a gradient-based approach, wherein the gradient of a cross-validation criterion with respect to the regularization hyperparameters is computed analytically through matrix differential calculus. Additionally, we introduce two strategies tailored to sparse model learning problems, aimed at reducing the risk of overfitting to the validation data. Numerical examples demonstrate that our multi-hyperparameter regularization approach outperforms LASSO, Ridge, and Elastic Net regression. Moreover, the analytical computation of the gradient proves to be more efficient in terms of computational time compared to automatic differentiation, especially when handling a large number of input variables. An application to the identification of over-parameterized Linear Parameter-Varying models is also presented.
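The core idea of per-variable regularization with an analytic hyperparameter gradient can be sketched in a few lines of NumPy; the closed-form solve, the hold-out validation loss, and the plain gradient-descent loop below are simplifying assumptions and do not reproduce the paper's full method or its sparsity-oriented strategies.

```python
# Minimal sketch (not the paper's full method): ridge regression with one
# regularization weight per input variable, plus the analytic gradient of a
# hold-out validation loss with respect to those weights.
import numpy as np

def fit_ridge(X, y, lam):
    """Solve (X^T X + diag(lam)) w = X^T y."""
    A = X.T @ X + np.diag(lam)
    return np.linalg.solve(A, X.T @ y), A

def val_loss_and_grad(Xtr, ytr, Xva, yva, lam):
    w, A = fit_ridge(Xtr, ytr, lam)
    r = Xva @ w - yva                            # validation residual
    loss = float(r @ r)
    # dw/dlam_j = -A^{-1} e_j w_j, so dloss/dlam_j = -2 (A^{-1} Xva^T r)_j w_j.
    grad = -2.0 * np.linalg.solve(A, Xva.T @ r) * w
    return loss, grad

rng = np.random.default_rng(1)
Xtr, Xva = rng.standard_normal((40, 5)), rng.standard_normal((20, 5))
w_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
ytr = Xtr @ w_true + 0.1 * rng.standard_normal(40)
yva = Xva @ w_true + 0.1 * rng.standard_normal(20)

lam = np.ones(5)
for _ in range(100):                             # plain gradient descent on the weights
    loss, g = val_loss_and_grad(Xtr, ytr, Xva, yva, lam)
    lam = np.maximum(lam - 0.1 * g, 1e-8)        # keep the weights nonnegative
print("validation loss:", loss, "per-variable weights:", lam)
```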

We construct and analyze finite element approximations of the Einstein tensor in dimension $N \ge 3$. We focus on the setting where a smooth Riemannian metric tensor $g$ on a polyhedral domain $\Omega \subset \mathbb{R}^N$ has been approximated by a piecewise polynomial metric $g_h$ on a simplicial triangulation $\mathcal{T}$ of $\Omega$ having maximum element diameter $h$. We assume that $g_h$ possesses single-valued tangential-tangential components on every codimension-1 simplex in $\mathcal{T}$. Such a metric is not classically differentiable in general, but it turns out that one can still attribute meaning to its Einstein curvature in a distributional sense. We study the convergence of the distributional Einstein curvature of $g_h$ to the Einstein curvature of $g$ under refinement of the triangulation. We show that in the $H^{-2}(\Omega)$-norm, this convergence takes place at a rate of $O(h^{r+1})$ when $g_h$ is an optimal-order interpolant of $g$ that is piecewise polynomial of degree $r \ge 1$. We provide numerical evidence to support this claim.

We provide numerical bounds on the Crouzeix ratio for KLS matrices $A$ which have a line segment on the boundary of the numerical range. The Crouzeix ratio is the supremum over all polynomials $p$ of the spectral norm of $p(A)$ divided by the maximum absolute value of $p$ on the numerical range of $A$. Our bounds confirm the conjecture that this ratio is less than or equal to $2$. We also give a precise description of these numerical ranges.
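The definition above suggests a simple numerical experiment: sample boundary points of the numerical range and random low-degree polynomials to obtain a lower bound on the Crouzeix ratio. The sketch below does this for an arbitrary matrix; it is a crude illustration and makes no attempt to reproduce the paper's bounds for KLS matrices.

```python
# Hedged sketch: a crude numerical lower bound on the Crouzeix ratio of a matrix A,
# obtained by sampling boundary points of the numerical range and random low-degree
# polynomials. It does not reproduce the paper's bounds for KLS matrices.
import numpy as np

def numerical_range_boundary(A, num_angles=360):
    pts = []
    for theta in np.linspace(0.0, 2 * np.pi, num_angles, endpoint=False):
        H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2
        _, V = np.linalg.eigh(H)
        v = V[:, -1]                        # unit eigenvector of the largest eigenvalue
        pts.append(v.conj() @ A @ v)        # supporting point of the numerical range
    return np.array(pts)

def polyval_matrix(c, A):
    """Evaluate p(A) = c[0] A^d + ... + c[d] I by Horner's scheme."""
    P = np.zeros_like(A, dtype=complex)
    I = np.eye(A.shape[0])
    for coef in c:
        P = P @ A + coef * I
    return P

def crouzeix_ratio_lower_bound(A, degree=6, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    z = numerical_range_boundary(A)         # |p| over W(A) is maximised on its boundary
    best = 0.0
    for _ in range(trials):
        c = rng.standard_normal(degree + 1) + 1j * rng.standard_normal(degree + 1)
        ratio = np.linalg.norm(polyval_matrix(c, A), 2) / np.abs(np.polyval(c, z)).max()
        best = max(best, ratio)
    return best

A = np.array([[0.0, 1.0], [0.0, 0.0]])      # W(A) is a disc of radius 1/2
print(crouzeix_ratio_lower_bound(A))        # a lower bound; the conjecture caps it at 2
```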

We explore the maximum likelihood degree of a homogeneous polynomial $F$ on a projective variety $X$, $\mathrm{MLD}_F(X)$, which generalizes the concept of Gaussian maximum likelihood degree. We show that $\mathrm{MLD}_F(X)$ is equal to the count of critical points of a rational function on $X$, and give different geometric characterizations of it via topological Euler characteristic, dual varieties, and Chern classes.

We study the convergence of specific inexact alternating projections for two non-convex sets in a Euclidean space. The $\sigma$-quasioptimal metric projection ($\sigma \geq 1$) of a point $x$ onto a set $A$ consists of points in $A$ the distance to which is at most $\sigma$ times larger than the minimal distance $\mathrm{dist}(x,A)$. We prove that quasioptimal alternating projections, when one or both projections are quasioptimal, converge locally and linearly for super-regular sets with transversal intersection. The theory is motivated by the successful application of alternating projections to low-rank matrix and tensor approximation. We focus on two problems -- nonnegative low-rank approximation and low-rank approximation in the maximum norm -- and develop fast alternating-projection algorithms for matrices and tensor trains based on cross approximation and acceleration techniques. The numerical experiments confirm that the proposed methods are efficient and suggest that they can be used to regularise various low-rank computational routines.
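A minimal illustration of alternating projections for the nonnegative low-rank approximation problem mentioned above: project onto rank-at-most-$r$ matrices via the truncated SVD and onto the nonnegative orthant by clipping. Both projections here are exact rather than quasioptimal, and the sketch omits the paper's cross-approximation and acceleration techniques.

```python
# Minimal illustration (not the paper's accelerated cross-approximation method):
# alternating projections between the rank-at-most-r matrices (truncated SVD) and the
# nonnegative orthant (clipping), i.e. a basic nonnegative low-rank approximation.
import numpy as np

def project_rank(X, r):
    """Metric projection onto matrices of rank at most r via the truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def project_nonneg(X):
    """Metric projection onto the nonnegative orthant."""
    return np.maximum(X, 0.0)

def alternating_projections(M, r, iters=500):
    low_rank, nonneg = M.copy(), M.copy()
    for _ in range(iters):
        low_rank = project_rank(nonneg, r)
        nonneg = project_nonneg(low_rank)
    return low_rank, nonneg

rng = np.random.default_rng(2)
M = np.maximum(rng.standard_normal((30, 20)), 0.0)     # a nonnegative target matrix
low_rank, nonneg = alternating_projections(M, r=5)

print("gap between the two iterates:", np.linalg.norm(low_rank - nonneg))
print("relative approximation error:", np.linalg.norm(M - nonneg) / np.linalg.norm(M))
```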

The high-index saddle dynamics (HiSD) method [J. Yin, L. Zhang, and P. Zhang, {\it SIAM J. Sci. Comput.,} 41 (2019), pp. A3576-A3595] serves as an efficient tool for computing index-$k$ saddle points and constructing solution landscapes. Nevertheless, the conventional HiSD method often encounters slow convergence rates on ill-conditioned problems. To address this challenge, we propose an accelerated high-index saddle dynamics (A-HiSD) method by incorporating the heavy ball method. We prove linear stability theory for the continuous A-HiSD and subsequently estimate the local convergence rate of the discrete A-HiSD. Our analysis demonstrates that the A-HiSD method exhibits a faster convergence rate compared to the conventional HiSD method, especially when dealing with ill-conditioned problems. We also perform various numerical experiments, including on the loss function of a neural network, to substantiate the effectiveness and acceleration of the A-HiSD method.
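The flavour of heavy-ball acceleration for saddle search can be conveyed on a toy problem: the sketch below looks for an index-1 saddle of a two-dimensional energy by reflecting the gradient along the direction of smallest Hessian curvature and adding a momentum term. The energy, step sizes, and the use of an exact Hessian eigenvector are illustrative assumptions, not the A-HiSD scheme itself.

```python
# Hedged toy sketch: a heavy-ball (momentum) accelerated search for an index-1 saddle
# point, in the spirit of A-HiSD but greatly simplified: the unstable direction is taken
# as the exact eigenvector of the smallest Hessian eigenvalue of a toy energy.
import numpy as np

def grad(x):                                   # E(x, y) = x^4/4 - x^2/2 + y^2/2
    return np.array([x[0]**3 - x[0], x[1]])    # saddle at the origin, minima at (+-1, 0)

def hess(x):
    return np.diag([3.0 * x[0]**2 - 1.0, 1.0])

def saddle_force(x):
    _, V = np.linalg.eigh(hess(x))
    v = V[:, 0]                                # direction of smallest curvature
    g = grad(x)
    return -(g - 2.0 * (v @ g) * v)            # ascend along v, descend elsewhere

x_prev = x = np.array([0.5, 0.8])              # start near a minimum
h, beta = 0.1, 0.5                             # step size and heavy-ball momentum
for _ in range(200):
    x_next = x + h * saddle_force(x) + beta * (x - x_prev)
    x_prev, x = x, x_next
print(x)                                       # approximately [0, 0], the index-1 saddle
```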

The classical Zarankiewicz problem asks for the maximum number of edges in a bipartite graph on $n$ vertices which does not contain the complete bipartite graph $K_{t,t}$. In one of the cornerstones of extremal graph theory, K\H{o}v\'ari, S\'os, and Tur\'an proved an upper bound of $O(n^{2-\frac{1}{t}})$. In a celebrated result, Fox et al. obtained an improved bound of $O(n^{2-\frac{1}{d}})$ for graphs of VC-dimension $d$ (where $d<t$). Basit, Chernikov, Starchenko, Tao and Tran improved the bound for the case of semilinear graphs. At SODA'23, Chan and Har-Peled further improved Basit et al.'s bounds and presented (quasi-)linear upper bounds for several classes of geometrically-defined incidence graphs, including a bound of $O(n \log \log n)$ for the incidence graph of points and pseudo-discs in the plane. In this paper we present a new approach to Zarankiewicz's problem, via $\epsilon$-$t$-nets, a recently introduced generalization of the classical notion of $\epsilon$-nets. We show that the existence of `small'-sized $\epsilon$-$t$-nets implies upper bounds for Zarankiewicz's problem. Using the new approach, we obtain a sharp bound of $O(n)$ for the intersection graph of two families of pseudo-discs, thus both improving and generalizing the result of Chan and Har-Peled from incidence graphs to intersection graphs. We also obtain a short proof of the $O(n^{2-\frac{1}{d}})$ bound of Fox et al., and show improved bounds for several other classes of geometric intersection graphs, including a sharp $O(n\frac{\log n}{\log \log n})$ bound for the intersection graph of two families of axis-parallel rectangles.

We propose a threshold-type algorithm for the $L^2$-gradient flow of the Canham-Helfrich functional generalized to $\mathbb{R}^N$. The algorithm for the Willmore flow is derived as a special case in $\mathbb{R}^2$ or $\mathbb{R}^3$. The algorithm is constructed from an asymptotic expansion of the solution to the initial value problem for a fourth order linear parabolic partial differential equation whose initial data is the indicator function of the compact set $\Omega_0$. The crucial points are to prove that the boundary $\partial\Omega_1$ of the new set $\Omega_1$ generated by our algorithm is included in an $O(t)$-neighborhood of $\partial\Omega_0$ for small time $t>0$, and to show that the derivative of the threshold function in the direction normal to $\partial\Omega_0$ is bounded away from zero on a small time interval. Finally, numerical examples of planar curves governed by the Willmore flow are provided using our threshold-type algorithm.
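A toy version of one threshold step can be written with the FFT: diffuse the indicator function of a set under a fourth-order linear parabolic equation and then threshold the result. The specific equation $u_t = -\Delta^2 u$ and the threshold value $1/2$ used below are illustrative assumptions; the paper's threshold function and PDE differ in detail.

```python
# Hedged sketch of one threshold-type step: diffuse the indicator function of a set
# under a fourth-order linear parabolic PDE and threshold the result. The equation
# u_t = -Laplacian^2 u (solved exactly with the FFT on a periodic box) and the
# threshold value 1/2 are illustrative assumptions, not the paper's choices.
import numpy as np

n, L, dt = 256, 2 * np.pi, 1e-4
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = ((X - np.pi)**2 + (Y - np.pi)**2 < 1.0).astype(float)   # indicator of a disc

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)                  # wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
symbol = np.exp(-dt * (KX**2 + KY**2)**2)                   # solution operator of u_t = -Lap^2 u

for _ in range(10):                                         # a few threshold steps
    u = np.real(np.fft.ifft2(symbol * np.fft.fft2(u)))
    u = (u > 0.5).astype(float)                             # threshold to get the new set

print("area of the evolved set:", u.mean() * L**2)
```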
