
We present a new algorithm for finding isolated zeros of a system of real-valued functions in a bounded interval in $\mathbb{R}^n$. It uses the Chebyshev proxy method combined with a mixture of subdivision, reduction methods, and elimination checks that leverage special properties of Chebyshev polynomials. We prove that the method has R-quadratic convergence locally near simple zeros of the system. We also analyze the time complexity and the numerical stability of the algorithm, and provide numerical evidence in dimensions up to three that the method is both fast and accurate on a wide range of problems. The algorithm should also work well in higher dimensions. Our tests show that the algorithm outperforms other standard methods on the problem of finding all real zeros in a bounded domain. Our Python implementation of the algorithm is publicly available.
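
As a one-dimensional toy illustration of the Chebyshev proxy idea, the sketch below fits a Chebyshev interpolant to a function and reads zeros off the proxy via numpy's colleague-matrix root finder. It is not the authors' implementation (the paper's method is $n$-dimensional and adds subdivision, reduction, and elimination checks), and the helper name `cheb_proxy_roots` is hypothetical.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Minimal 1-D sketch of the Chebyshev proxy method: approximate f by a
# Chebyshev interpolant, then take the roots of the proxy polynomial.
# Hypothetical helper; the paper's algorithm is multidimensional.
def cheb_proxy_roots(f, a=-1.0, b=1.0, degree=64):
    k = np.arange(degree + 1)
    x = np.cos(np.pi * k / degree)          # Chebyshev-Lobatto nodes on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)   # mapped sample points in [a, b]
    coeffs = C.chebfit(x, f(t), degree)     # proxy polynomial coefficients
    r = C.chebroots(coeffs)                 # roots via the colleague matrix
    r = r[np.isreal(r)].real
    r = r[(r >= -1) & (r <= 1)]             # keep roots inside the interval
    return 0.5 * (b - a) * r + 0.5 * (b + a)

print(cheb_proxy_roots(np.sin, a=0.5, b=10.0))  # approximately pi, 2*pi, 3*pi
```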

Related content

A geometric $t$-spanner $\mathcal{G}$ on a set $S$ of $n$ point sites in a metric space $P$ is a subgraph of the complete graph on $S$ such that for every pair of sites $p,q$ the distance in $\mathcal{G}$ is at most $t$ times the distance $d(p,q)$ in $P$. We call a connection between two sites in the spanner a link. In some settings, such as when $P$ is a simple polygon with $m$ vertices and a link is a shortest path in $P$, links can consist of $\Theta (m)$ segments and thus have non-constant complexity. The total spanner complexity is a recently introduced measure of how compact a spanner is. In this paper, we study what happens if we are allowed to introduce $k$ Steiner points to reduce the spanner complexity. We study such Steiner spanners in simple polygons, polygonal domains, and edge-weighted trees. Surprisingly, we show that Steiner points have only limited utility. For a spanner that uses $k$ Steiner points, we provide an $\Omega(nm/k)$ lower bound on the worst-case complexity of any $(3-\varepsilon)$-spanner, and an $\Omega(mn^{1/(t+1)}/k^{1/(t+1)})$ lower bound on the worst-case complexity of any $(t-\varepsilon)$-spanner, for any constant $\varepsilon\in (0,1)$ and integer constant $t \geq 2$. These lower bounds hold in all settings. Additionally, we show NP-hardness for the problem of deciding whether a set of sites in a polygonal domain admits a $3$-spanner with a given maximum complexity using $k$ Steiner points. On the positive side, for trees we show how to build a $2t$-spanner that uses $k$ Steiner points and has complexity $O(mn^{1/t}/k^{1/t} + n \log (n/k))$, for any integer $t \geq 1$. We generalize this result to forests, and apply it to obtain a $2\sqrt{2}t$-spanner in a simple polygon or a $6t$-spanner in a polygonal domain, with total complexity $O(mn^{1/t}(\log k)^{1+1/t}/k^{1/t} + n\log^2 n)$.
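
To make the defining property concrete, the sketch below computes the dilation (the smallest valid $t$) of a candidate spanner for Euclidean point sites, using Floyd-Warshall for graph distances. Euclidean distances are an illustrative assumption; the paper's main settings use geodesic distances in polygons, where links have non-constant complexity.

```python
import itertools, math

# Dilation of a spanner: the maximum, over site pairs, of graph distance
# divided by metric distance. A graph is a t-spanner iff dilation <= t.
def dilation(sites, links):
    n = len(sites)
    d = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for i, j in links:
        w = math.dist(sites[i], sites[j])
        d[i][j] = d[j][i] = w
    for k in range(n):                       # Floyd-Warshall all-pairs distances
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return max(d[i][j] / math.dist(sites[i], sites[j])
               for i, j in itertools.combinations(range(n), 2))

sites = [(0, 0), (1, 0), (1, 1), (0, 1)]
links = [(0, 1), (1, 2), (2, 3), (3, 0)]     # 4-cycle on the unit square
print(dilation(sites, links))                # sqrt(2): it is a 1.42-spanner
```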

We show that, for every $k\geq 2$, $C_{2k}$-freeness can be decided in $O(n^{1-1/k})$ rounds in the \CONGEST{} model by a randomized Monte-Carlo distributed algorithm with one-sided error probability $1/3$. This matches the best round-complexities of previously known algorithms for $k\in\{2,3,4,5\}$ by Drucker et al. [PODC'14] and Censor-Hillel et al. [DISC'20], but improves the complexities of the known algorithms for $k>5$ by Eden et al. [DISC'19], which were essentially of the form $\tilde O(n^{1-2/k^2})$. Our algorithm uses colored BFS-explorations with threshold, but with an original \emph{global} approach that makes it possible to overcome a recent impossibility result by Fraigniaud et al. [SIROCCO'23] about using colored BFS-explorations with \emph{local} threshold for detecting cycles. We also show how to quantize our algorithm to achieve a round-complexity of $\tilde O(n^{\frac{1}{2}-\frac{1}{2k}})$ in the quantum setting for deciding $C_{2k}$-freeness. Furthermore, this allows us to improve the known quantum complexities of the simpler problem of detecting cycles of length \emph{at most}~$2k$ by van Apeldoorn and de Vos [PODC'22]. Our quantization is in two steps. First, the congestion of our randomized algorithm is reduced, at the cost of also reducing its success probability. Second, the success probability is boosted using a new quantum framework derived from sequential algorithms, namely Monte-Carlo quantum amplification.
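
For intuition about the decision problem itself, here is a brute-force centralized check of $C_{2k}$-freeness: the graph contains $C_{2k}$ iff some $2k$ vertices carry a cycle visiting all of them. This is illustration only; the paper's contribution is deciding this in the distributed CONGEST model.

```python
import itertools

# Brute-force check for a cycle of length exactly 2k (the subgraph C_{2k}).
# adj maps each vertex to its set of neighbors.
def has_C2k(adj, k):
    L = 2 * k
    for sub in itertools.combinations(list(adj), L):
        first = sub[0]
        for perm in itertools.permutations(sub[1:]):
            cyc = (first,) + perm
            if all(cyc[(i + 1) % L] in adj[cyc[i]] for i in range(L)):
                return True
    return False

cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}               # C_6
path6 = {i: {j for j in (i - 1, i + 1) if 0 <= j < 6} for i in range(6)}
print(has_C2k(cycle6, 3), has_C2k(path6, 3))                       # True False
```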

We modify a Bayesian nonparametric method of James, Lijoi \& Prunster (2009), used to predict future values of observations from normalized random measures with independent increments, so that it applies to a class of models based on negative binomial processes, for which the increments are not independent but are independent conditional on an underlying gamma variable. As in James et al., the new algorithm is formulated in terms of two variables: one a function of the past observations, and the other an update driven by a new observation. We outline an application of the procedure to population genetics, namely the construction of realisations of genealogical trees and coalescents from samples of alleles.
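
For a familiar baseline, the simplest normalized random measure is the Dirichlet process (a normalized gamma process), whose predictive rule is the Blackwell-MacQueen urn sketched below. This is not the paper's negative-binomial scheme; it only shows the general shape of such predictive updates, where each new observation either repeats a past value or draws a fresh one.

```python
import random

# Blackwell-MacQueen urn: the predictive rule of the Dirichlet process.
# Shown as a baseline only; the paper's models have conditionally
# independent (not independent) increments given a gamma variable.
def polya_urn(n, alpha, base=random.random):
    obs = []
    for i in range(n):
        # new value from the base measure w.p. alpha / (alpha + i),
        # otherwise repeat a uniformly chosen past observation
        if random.random() < alpha / (alpha + i):
            obs.append(base())
        else:
            obs.append(random.choice(obs))
    return obs

print(polya_urn(10, alpha=1.0))
```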

In the Determinant Maximization problem, given an $n\times n$ positive semi-definite matrix $\bf{A}$ in $\mathbb{Q}^{n\times n}$ and an integer $k$, we are required to find a $k\times k$ principal submatrix of $\bf{A}$ having the maximum determinant. This problem is known to be NP-hard and further proven to be W[1]-hard with respect to $k$ by Koutis. However, there is still room to explore its parameterized complexity in restricted cases, in the hope of overcoming the general-case parameterized intractability. In this study, we rule out the fixed-parameter tractability of Determinant Maximization even if an input matrix is extremely sparse or low rank, or an approximate solution is acceptable. We first prove that Determinant Maximization is NP-hard and W[1]-hard even if an input matrix is an arrowhead matrix; i.e., the underlying graph formed by nonzero entries is a star, implying that the structural sparsity is not helpful. By contrast, Determinant Maximization is known to be solvable in polynomial time on tridiagonal matrices. Thereafter, we demonstrate the W[1]-hardness with respect to the rank $r$ of an input matrix. Our result is stronger than Koutis' result in the sense that any $k\times k$ principal submatrix is singular whenever $k>r$. We finally give evidence that it is W[1]-hard to approximate Determinant Maximization parameterized by $k$ within a factor of $2^{-c\sqrt{k}}$ for some universal constant $c>0$. Our hardness result is conditional on the Parameterized Inapproximability Hypothesis posed by Lokshtanov, Ramanujan, Saurabh, and Zehavi, which asserts that a gap version of Binary Constraint Satisfaction Problem is W[1]-hard. To complement this result, we develop an $\varepsilon$-additive approximation algorithm that runs in $\varepsilon^{-r^2}\cdot r^{O(r^3)}\cdot n^{O(1)}$ time for the rank $r$ of an input matrix, provided that the diagonal entries are bounded.
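
As a reference point for the hardness results, the objective itself is easy to state in code: the brute-force search below tries every size-$k$ index set, which is exponential in $k$, consistent with the W[1]-hardness above (illustration only).

```python
import itertools
import numpy as np

# Brute-force Determinant Maximization: scan all k x k principal submatrices.
def max_det_submatrix(A, k):
    best_val, best_S = -np.inf, None
    for S in itertools.combinations(range(A.shape[0]), k):
        val = np.linalg.det(A[np.ix_(S, S)])
        if val > best_val:
            best_val, best_S = val, S
    return best_val, best_S

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T                                  # a random PSD matrix
print(max_det_submatrix(A, 2))
```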

In cut sparsification, all cuts of a hypergraph $H=(V,E,w)$ are approximated within $1\pm\epsilon$ factor by a small hypergraph $H'$. This widely applied method was generalized recently to a setting where the cost of cutting each hyperedge $e$ is provided by a splitting function $g_e: 2^e\to\mathbb{R}_+$. This generalization is called a submodular hypergraph when the functions $\{g_e\}_{e\in E}$ are submodular, and it arises in machine learning, combinatorial optimization, and algorithmic game theory. Previous work studied the setting where $H'$ is a reweighted sub-hypergraph of $H$, and measured the size of $H'$ by the number of hyperedges in it. In this setting, we present two results: (i) all submodular hypergraphs admit sparsifiers of size polynomial in $n=|V|$ and $\epsilon^{-1}$; (ii) we propose a new parameter, called spread, and use it to obtain smaller sparsifiers in some cases. We also show that for a natural family of splitting functions, relaxing the requirement that $H'$ be a reweighted sub-hypergraph of $H$ yields a substantially smaller encoding of the cuts of $H$ (almost a factor $n$ in the number of bits). This is in contrast to graphs, where the most succinct representation is attained by reweighted subgraphs. A new tool in our construction of succinct representation is the notion of deformation, where a splitting function $g_e$ is decomposed into a sum of functions of small description, and we provide upper and lower bounds for deformation of common splitting functions.
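
To fix the central definition, the sketch below evaluates a hypergraph cut in which each hyperedge $e$ contributes $g_e(S \cap e)$ for a vertex set $S$. For concreteness it uses the all-or-nothing splitting function, which recovers ordinary hypergraph cuts; this is an illustrative choice, not the general submodular setting.

```python
# Cut value with all-or-nothing splitting: a hyperedge pays its weight
# iff the cut S separates its vertices (some inside S, some outside).
def cut_value(hyperedges, weights, S):
    S = set(S)
    total = 0.0
    for e, w in zip(hyperedges, weights):
        inside = S & set(e)
        if inside and inside != set(e):      # e is split by S
            total += w
    return total

E = [(0, 1, 2), (2, 3), (0, 3)]
print(cut_value(E, [1.0, 2.0, 0.5], S={0, 1}))  # 1.5: first and third are cut
```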

We propose an $\widetilde{O}(n + 1/\varepsilon)$-time FPTAS (Fully Polynomial-Time Approximation Scheme) for the classical Partition problem. This is the best possible (up to a logarithmic factor) assuming SETH (Strong Exponential Time Hypothesis) [Abboud, Bringmann, Hermelin, and Shabtay'22]. Prior to our work, the best known FPTAS for Partition runs in $\widetilde{O}(n + 1/\varepsilon^{5/4})$ time [Deng, Jin and Mao'23, Wu and Chen'22]. Our result is obtained by solving a more general problem of weakly approximating Subset Sum.
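
For contrast with the near-linear scheme, the textbook trimming-based FPTAS for Subset Sum (in the style of CLRS) is sketched below; it runs in roughly $O(n^2/\varepsilon)$ time and is shown only as a baseline, not as the paper's algorithm.

```python
# Classic trimming FPTAS for Subset Sum: maintain the list of achievable
# sums (capped at the target), pruning values within a (1 + delta) factor.
def approx_subset_sum(items, target, eps):
    L = [0]
    delta = eps / (2 * len(items))
    for x in items:
        L = sorted(set(L + [v + x for v in L if v + x <= target]))
        trimmed = [L[0]]
        for v in L[1:]:
            if v > trimmed[-1] * (1 + delta):   # keep only well-spaced sums
                trimmed.append(v)
        L = trimmed
    return max(L)                               # >= OPT / (1 + eps)

print(approx_subset_sum([104, 102, 201, 101], target=308, eps=0.1))  # ~307
```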

We study the fundamental problem of estimating the mean of a $d$-dimensional distribution with covariance $\Sigma \preccurlyeq \sigma^2 I_d$ given $n$ samples. When $d = 1$, Catoni [2012] showed an estimator with error $(1+o(1)) \cdot \sigma \sqrt{\frac{2 \log \frac{1}{\delta}}{n}}$, with probability $1 - \delta$, matching the Gaussian error rate. For $d>1$, a natural estimator outputs the center of the minimum enclosing ball of one-dimensional confidence intervals to achieve a $1-\delta$ confidence radius of $\sqrt{\frac{2 d}{d+1}} \cdot \sigma \left(\sqrt{\frac{d}{n}} + \sqrt{\frac{2 \log \frac{1}{\delta}}{n}}\right)$, incurring a $\sqrt{\frac{2d}{d+1}}$-factor loss over the Gaussian rate. When the $\sqrt{\frac{d}{n}}$ term dominates by a $\sqrt{\log \frac{1}{\delta}}$ factor, Lee and Valiant [2022] showed an improved estimator matching the Gaussian rate. This raises a natural question: Is the $\sqrt{\frac{2 d}{d+1}}$ loss \emph{necessary} when the $\sqrt{\frac{2 \log \frac{1}{\delta}}{n}}$ term dominates? We show that the answer is \emph{no} -- we construct an estimator that improves over the above naive estimator by a constant factor. We also consider robust estimation, where an adversary is allowed to corrupt an $\epsilon$-fraction of samples arbitrarily: in this case, we show that the above strategy of combining one-dimensional estimates and incurring the $\sqrt{\frac{2d}{d+1}}$-factor \emph{is} optimal in the infinite-sample limit.
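
A quick Monte-Carlo sanity check of the error scale under discussion: for Gaussian data the empirical mean is exactly $N(0, \sigma^2 I_d / n)$-distributed, so its norm falls below $\sigma\left(\sqrt{\frac{d}{n}} + \sqrt{\frac{2 \log \frac{1}{\delta}}{n}}\right)$ with probability about $1 - \delta$. This simulates the rate only; it is not an estimator from the paper.

```python
import numpy as np

# Empirical check that the norm of the empirical mean of Gaussian samples
# stays below the sub-Gaussian radius with probability about 1 - delta.
rng = np.random.default_rng(1)
d, n, sigma, delta, trials = 20, 1000, 1.0, 0.01, 100_000
means = sigma / np.sqrt(n) * rng.standard_normal((trials, d))
errs = np.linalg.norm(means, axis=1)
bound = sigma * (np.sqrt(d / n) + np.sqrt(2 * np.log(1 / delta) / n))
print((errs <= bound).mean(), ">= about", 1 - delta)
```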

We consider composition orderings for linear functions of one variable. Given $n$ linear functions $f_1,\dots,f_n$ and a constant $c$, the objective is to find a permutation $\sigma$ that minimizes/maximizes $f_{\sigma(n)}\circ\dots\circ f_{\sigma(1)}(c)$. The problem was first studied in the area of time-dependent scheduling, and is known to be solvable in $O(n\log n)$ time if all functions are nondecreasing. In this paper, we present a complete characterization of optimal composition orderings for this case, by regarding linear functions as two-dimensional vectors. We also show several interesting properties of optimal composition orderings, such as the equivalence between local and global optimality. Furthermore, by using the above characterization, we provide a fixed-parameter tractable (FPT) algorithm for the composition ordering problem for general linear functions, parameterized by the number of decreasing linear functions. We next deal with matrix multiplication orderings as a generalization of composition of linear functions. Given $n$ matrices $M_1,\dots,M_n\in\mathbb{R}^{m\times m}$ and two vectors $w,y\in\mathbb{R}^m$, where $m$ denotes a positive integer, the objective is to find a permutation $\sigma$ that minimizes/maximizes $w^\top M_{\sigma(n)}\dots M_{\sigma(1)} y$. The problem can also be viewed as a generalization of flow shop scheduling through a limiting argument. By this extension, we show that the multiplication ordering problem for $2\times 2$ matrices is solvable in $O(n\log n)$ time if all the matrices are simultaneously triangularizable and have nonnegative determinants, and is FPT with respect to the number of matrices with negative determinants if all the matrices are simultaneously triangularizable. On the negative side, we finally prove that three natural generalizations are NP-hard: 1) when $m=2$, 2) when $m\geq 3$, and 3) the target version of the problem.
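
A brute-force reference makes the objective concrete: evaluate $f_{\sigma(n)}\circ\dots\circ f_{\sigma(1)}(c)$ over all permutations, with each $f_i(x) = a_i x + b_i$ given as a pair $(a_i, b_i)$. This is exponential in $n$ and only pins down the problem; `best_order` is an illustrative helper name.

```python
import itertools

# Exhaustive search over composition orderings of linear functions.
def best_order(funcs, c, maximize=False):
    def value(perm):
        x = c
        for i in perm:                   # f_{perm[0]} is applied first
            a, b = funcs[i]
            x = a * x + b
        return x
    opt = max if maximize else min
    perm = opt(itertools.permutations(range(len(funcs))), key=value)
    return perm, value(perm)

funcs = [(2.0, -1.0), (0.5, 3.0), (1.0, 1.0)]   # all nondecreasing (a >= 0)
print(best_order(funcs, c=0.0, maximize=True))  # ((1, 2, 0), 7.0)
```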

We study the task of efficiently sampling from a Gibbs distribution $d\pi^* = e^{-h}\, d\mathrm{vol}_g$ over a Riemannian manifold $M$ via (geometric) Langevin MCMC; this algorithm involves computing exponential maps in random Gaussian directions and is efficiently implementable in practice. The key to our analysis of Langevin MCMC is a bound on the discretization error of the geometric Euler-Maruyama scheme, assuming $\nabla h$ is Lipschitz and $M$ has bounded sectional curvature. Our error bound matches the error of Euclidean Euler-Maruyama in terms of its stepsize dependence. Combined with a contraction guarantee for the geometric Langevin diffusion under Kendall-Cranston coupling, we prove that the Langevin MCMC iterates lie within $\epsilon$-Wasserstein distance of $\pi^*$ after $\tilde{O}(\epsilon^{-2})$ steps, which matches the iteration complexity of Euclidean Langevin MCMC. Our results apply in general settings where $h$ can be nonconvex and $M$ can have negative Ricci curvature. Under the additional assumptions that the Riemannian curvature tensor has bounded derivatives and that $\pi^*$ satisfies a $CD(\cdot,\infty)$ condition, we analyze the stochastic gradient version of Langevin MCMC, and bound its iteration complexity by $\tilde{O}(\epsilon^{-2})$ as well.
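
The flat-space special case of the scheme is the familiar Euclidean Euler-Maruyama discretization, where the exponential map reduces to vector addition. A minimal sketch follows, with a standard Gaussian target ($h(x) = \|x\|^2/2$) chosen only so the output is easy to check.

```python
import numpy as np

# Unadjusted Langevin algorithm: Euler-Maruyama discretization of the
# Langevin diffusion, targeting pi(x) proportional to exp(-h(x)).
def langevin_mcmc(grad_h, x0, step, n_steps, rng):
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_h(x) + np.sqrt(2 * step) * noise
    return x

rng = np.random.default_rng(0)
samples = np.array([langevin_mcmc(lambda x: x, np.zeros(2), 0.01, 2000, rng)
                    for _ in range(500)])
print(samples.mean(axis=0), samples.var(axis=0))   # near [0, 0] and [1, 1]
```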

The study of the closest point(s) on a statistical model to a given distribution in the probability simplex, with respect to a fixed Wasserstein metric, gives rise to a polyhedral norm distance optimization problem. The complexity of determining the Wasserstein distance from a data point to a model has two components. One is the combinatorial complexity, governed by the combinatorics of the Lipschitz polytope of the finite metric in use. The other is the algebraic complexity, governed by the polar degrees of the Zariski closure of the model. We find formulas for the polar degrees of rational normal scrolls and of graphical models whose underlying graphs are star trees. We also compute the polar degrees of the graphical models on four binary random variables whose graphs are the path on four vertices and the four-cycle, as well as those of small no-three-way-interaction models. We investigate the algebraic degree of computing the Wasserstein distance to a small subset of these models, and observe that this algebraic degree is typically smaller than the corresponding polar degree.
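
The polyhedral structure comes from the fact that the Wasserstein distance between two distributions on a finite metric space is itself a linear program (optimal transport). A minimal sketch using scipy's linprog, with a small path metric as an illustrative input:

```python
import numpy as np
from scipy.optimize import linprog

# 1-Wasserstein distance on a finite metric space via the transport LP:
# minimize <D, T> over plans T >= 0 with row sums mu and column sums nu.
def wasserstein(mu, nu, D):
    n = len(mu)
    A_eq, b_eq = [], []
    for i in range(n):                   # row sums equal mu
        row = np.zeros(n * n); row[i * n:(i + 1) * n] = 1
        A_eq.append(row); b_eq.append(mu[i])
    for j in range(n):                   # column sums equal nu
        col = np.zeros(n * n); col[j::n] = 1
        A_eq.append(col); b_eq.append(nu[j])
    res = linprog(D.flatten(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None))
    return res.fun

D = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]])   # path metric on 3 points
print(wasserstein([1, 0, 0], [0, 0, 1], D))       # 2.0: all mass moves 2
```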
