
We study a fundamental problem in Computational Geometry, the planar two-center problem. In this problem, the input is a set $S$ of $n$ points in the plane and the goal is to find two smallest congruent disks whose union contains all points of $S$. A longstanding open problem has been to obtain an $O(n\log n)$-time algorithm for planar two-center, matching the $\Omega(n\log n)$ lower bound given by Eppstein [SODA'97]. Toward this goal, researchers have devoted considerable effort over the decades. The previous best algorithm, given by Wang [SoCG'20], solves the problem in $O(n\log^2 n)$ time. In this paper, we present an $O(n\log n)$-time (deterministic) algorithm for planar two-center, which completely resolves this open problem.

Related content

Enumeration problems aim at outputting, without repetition, the set of solutions to a given problem instance. However, outputting the entire solution set may be prohibitively expensive if it is too big. In this case, outputting a small, sufficiently diverse subset of the solutions would be preferable. This leads to the Diverse-version of the original enumeration problem, where the goal is to achieve a certain level d of diversity by selecting k solutions. In this paper, we look at the Diverse-version of the query answering problem for Conjunctive Queries and extensions thereof. That is, we study whether it is possible to achieve a certain level d of diversity by selecting k answers to the given query and, in the positive case, how to actually compute such k answers.
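To make the diversity objective concrete, here is a minimal sketch that greedily selects k answers so as to maximize the sum of pairwise Hamming distances. The greedy rule and the Hamming measure are illustrative assumptions, not the method studied in the paper.

```python
def hamming(a, b):
    """Number of positions where two equal-length answer tuples differ."""
    return sum(x != y for x, y in zip(a, b))

def diversity(solutions):
    """Diversity level: sum of pairwise Hamming distances of the selection."""
    return sum(hamming(a, b)
               for i, a in enumerate(solutions) for b in solutions[i+1:])

def greedy_diverse(answers, k):
    """Greedily pick k answers, each maximizing the diversity it adds."""
    chosen = [answers[0]]
    rest = list(answers[1:])
    while len(chosen) < k and rest:
        best = max(rest, key=lambda a: sum(hamming(a, c) for c in chosen))
        chosen.append(best)
        rest.remove(best)
    return chosen
```

For example, among the four binary answers (0,0), (0,1), (1,1), (1,0), picking k=2 greedily yields a pair at Hamming distance 2, the maximum achievable diversity for this instance.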

When rows of an $n \times d$ matrix $A$ are given in a stream, we study algorithms for approximating the top eigenvector of the matrix ${A}^TA$ (equivalently, the top right singular vector of $A$). We consider worst-case inputs $A$ but assume that the rows are presented to the streaming algorithm in a uniformly random order. We show that when the gap parameter $R = \sigma_1(A)^2/\sigma_2(A)^2 = \Omega(1)$, then there is a randomized algorithm that uses $O(h \cdot d \cdot \operatorname{polylog}(d))$ bits of space and outputs a unit vector $v$ that has a correlation $1 - O(1/\sqrt{R})$ with the top eigenvector $v_1$. Here $h$ denotes the number of \emph{heavy rows} in the matrix, defined as the rows with Euclidean norm at least $\|{A}\|_F/\sqrt{d \cdot \operatorname{polylog}(d)}$. We also provide a lower bound showing that any algorithm using $O(hd/R)$ bits of space can obtain at most $1 - \Omega(1/R^2)$ correlation with the top eigenvector. Thus, parameterizing the space complexity in terms of the number of heavy rows is necessary for high accuracy solutions. Our results improve upon the $R = \Omega(\log n \cdot \log d)$ requirement in a recent work of Price and Xun (FOCS 2024). We note that the algorithm of Price and Xun works for arbitrary-order streams whereas our algorithm requires a stronger assumption that the rows are presented in a uniformly random order. We additionally show that the gap requirements in their analysis can be brought down to $R = \Omega(\log^2 d)$ for arbitrary-order streams and $R = \Omega(\log d)$ for random-order streams. The requirement of $R = \Omega(\log d)$ for random-order streams is nearly tight for their analysis as we obtain a simple instance with $R = \Omega(\log d/\log\log d)$ for which their algorithm, with any fixed learning rate, cannot output a vector approximating the top eigenvector $v_1$.
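The single-pass streaming approach in this line of work is in the style of Oja's iteration, which updates a running unit vector per row. The sketch below runs it with a fixed, hand-picked learning rate on a synthetic random-order stream with a planted dominant direction; all parameter choices (eta, the noise scale, the planted gap) are illustrative, not the paper's settings.

```python
import numpy as np

def oja_top_eigenvector(rows, eta, seed=0):
    """One pass of Oja's iteration over a row stream:
    v <- normalize(v + eta * a * (a^T v)) for each row a."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(rows.shape[1])
    v /= np.linalg.norm(v)
    for a in rows:
        v += eta * a * (a @ v)
        v /= np.linalg.norm(v)
    return v

# Synthetic stream: small isotropic noise plus a planted direction u,
# so the top eigenvalue gap R is large.
rng = np.random.default_rng(0)
n, d = 2000, 20
u = np.zeros(d); u[0] = 1.0
A = 0.1 * rng.standard_normal((n, d)) + rng.standard_normal((n, 1)) * u
rng.shuffle(A)                      # rows presented in uniformly random order
v = oja_top_eigenvector(A, eta=0.05, seed=1)
corr = abs(v @ u)                   # correlation with the planted direction
```

With a large gap the single pass already achieves high correlation; the hard instances in the abstract are precisely those where a fixed learning rate fails.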

Several recent works have focused on carrying out non-asymptotic convergence analyses for AC algorithms. Recently, a two-timescale critic-actor algorithm was presented for the discounted cost setting in the look-up table case, where the timescales of the actor and the critic are reversed and only asymptotic convergence was shown. In our work, we present the first two-timescale critic-actor algorithm with function approximation in the long-run average reward setting, together with the first finite-time non-asymptotic as well as asymptotic convergence analysis for such a scheme. We obtain optimal learning rates and prove that our algorithm achieves a sample complexity of $\tilde{\mathcal{O}}(\epsilon^{-(2+\delta)})$, with $\delta > 0$ arbitrarily close to zero, for the mean squared error of the critic to be upper bounded by $\epsilon$, which is better than the one obtained for two-timescale AC in a similar setting. A notable feature of our analysis is that, in addition to the finite-time bounds, we show the almost sure asymptotic convergence of the (slower) critic recursion to the attractor of an associated differential inclusion, with actor parameters corresponding to local maxima of a perturbed average reward objective. We also report the results of numerical experiments on three benchmark settings and observe that our critic-actor algorithm performs best among all the algorithms compared.
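The timescale-separation idea underlying such schemes can be illustrated with a generic two-timescale stochastic-approximation pair: the fast recursion tracks the slow one, while the slow recursion evolves as if the fast one were already equilibrated. This is a toy illustration of the principle only, not the paper's critic-actor algorithm; the target value 2.0, step-size exponents, and noise scale are arbitrary.

```python
import numpy as np

# Fast iterate x (larger steps b_n) tracks the slow iterate y; the slow
# iterate y (steps a_n with a_n/b_n -> 0) is driven toward 2.0 while
# effectively seeing x as converged (x ~ y).
rng = np.random.default_rng(0)
x, y = 0.0, 0.0
for nstep in range(1, 20001):
    b = 1.0 / nstep**0.6            # faster timescale
    a = 1.0 / nstep                 # slower timescale
    xi = 0.1 * rng.standard_normal(2)   # martingale-difference noise
    x += b * (y - x + xi[0])        # fast recursion
    y += a * (2.0 - x + xi[1])      # slow recursion
# Both iterates approach the coupled fixed point x = y = 2.0.
```

In the paper's scheme the roles are reversed relative to standard AC: the critic is the slower recursion, which is why its limit is characterized through a differential inclusion driven by the (faster) actor.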

We consider the dunking problem: a solid body at uniform temperature $T_{\text i}$ is placed in an environment characterized by farfield temperature $T_\infty$ and a spatially uniform, time-independent heat transfer coefficient. We permit heterogeneous material composition: spatially dependent density, specific heat, and thermal conductivity. Mathematically, the problem is described by a heat equation with Robin boundary conditions. The crucial parameter is the Biot number -- a nondimensional heat transfer (Robin) coefficient; we consider the limit of small Biot number. We introduce first-order and second-order asymptotic approximations (in Biot number) for several quantities of interest, notably the spatial domain average temperature as a function of time; the first-order approximation is simply the standard engineering `lumped' model. We then provide asymptotic error estimates for the first-order and second-order approximations for small Biot number, and also, for the first-order approximation, alternative strict bounds valid for all Biot number. Companion numerical solutions of the heat equation confirm the effectiveness of the error estimates for small Biot number. The second-order approximation and the first-order and second-order error estimates depend on several functional outputs associated with an elliptic partial differential equation; the latter is derived from Biot-sensitivity analysis of the heat equation eigenproblem in the limit of small Biot number. Most important is $\phi$, the only functional output required for the first-order error estimates; $\phi$ admits a simple physical interpretation in terms of a conduction length scale. We investigate the domain and property dependence of $\phi$: most notably, we characterize spatial domains for which the standard lumped-model error criterion -- Biot number (based on the volume-to-area length scale) small -- is deficient.
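The first-order `lumped' model is just exponential relaxation with time constant $\rho c V/(hA)$. The sketch below compares it against a finite-difference solution of the heat equation for a homogeneous 1-D slab at small Biot number; the geometry, material values, and grid are hypothetical choices for illustration, not the paper's test cases.

```python
import numpy as np

# Hypothetical homogeneous 1-D slab: symmetry plane at x=0, Robin BC at
# x = L_half, with Biot = h*L_half/k = 0.05 (small).
k, rho, c = 1.0, 1.0, 1.0        # conductivity, density, specific heat
h, L_half = 0.05, 1.0            # Robin coefficient, half-thickness
T_i, T_inf = 1.0, 0.0
alpha = k / (rho * c)

# Explicit finite differences for the heat equation.
m = 50
dx = L_half / m
dt = 0.4 * dx**2 / alpha         # stable: alpha*dt/dx^2 <= 1/2
T = np.full(m + 1, T_i)
t_end = 10.0
for _ in range(int(t_end / dt)):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0] = T[1]                                        # insulated centerline
    T[-1] = (k / dx * T[-2] + h * T_inf) / (k / dx + h)  # Robin BC

# Trapezoidal domain-average of the PDE solution vs the lumped model.
avg_pde = (T[0] / 2 + T[1:-1].sum() + T[-1] / 2) * dx / L_half
tau = rho * c * L_half / h       # V/A = L_half per unit area for the slab
avg_lumped = T_inf + (T_i - T_inf) * np.exp(-t_end / tau)
```

At Biot number 0.05 the two averages agree to within a few percent, consistent with a first-order (in Biot) error for the lumped approximation.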

We study the problem of drawing samples from a logconcave distribution truncated on a polytope, motivated by computational challenges in Bayesian statistical models with indicator variables, such as probit regression. Building on interior point methods and the Dikin walk for sampling from uniform distributions, we analyze the mixing time of regularized Dikin walks. Our contributions are threefold. First, for a logconcave and log-smooth distribution with condition number $\kappa$, truncated on a polytope in $\mathbb{R}^n$ defined by $m$ linear constraints, we prove that the soft-threshold Dikin walk mixes in $\widetilde{O}((m+\kappa)n)$ iterations from a warm initialization. This improves upon prior work, which required the polytope to be bounded and whose bound depended on the radius of the bounded region. Moreover, we introduce the regularized Dikin walk using Lewis weights for approximating the John ellipsoid, and show that it mixes in $\widetilde{O}(n^{2.5}+\kappa n)$ iterations. Second, we extend the mixing time guarantees mentioned above to weakly log-concave distributions truncated on polytopes, provided that they have a finite covariance matrix. Third, going beyond worst-case mixing time analysis, we demonstrate that the soft-threshold Dikin walk can mix significantly faster when only a limited number of constraints intersect the high-probability mass of the distribution, improving the $\widetilde{O}((m+\kappa)n)$ upper bound to $\widetilde{O}(m + \kappa n)$. Additionally, the per-iteration complexity of the regularized Dikin walk and ways to generate a warm initialization are discussed to facilitate practical implementation.
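For orientation, here is a minimal sketch of one step of the plain (unregularized) Dikin walk for the *uniform* distribution on a polytope $Ax \le b$ — the building block these regularized walks extend. The proposal radius and the box example are illustrative choices.

```python
import numpy as np

def dikin_step(x, A, b, rng, r=0.5):
    """One Dikin-walk step targeting the uniform distribution on Ax <= b:
    propose from the Dikin ellipsoid at x, then apply a Metropolis filter."""
    n = len(x)
    def hess(z):                      # log-barrier Hessian at z
        s = b - A @ z                 # slacks (positive inside the polytope)
        return (A / s[:, None]**2).T @ A
    H = hess(x)
    # Propose y ~ N(x, (r^2/n) * H(x)^{-1}).
    C = np.linalg.cholesky(np.linalg.inv(H))
    y = x + (r / np.sqrt(n)) * (C @ rng.standard_normal(n))
    if np.any(b - A @ y <= 0):
        return x                      # proposal left the polytope: reject
    def logq(Hz, frm, to):            # log proposal density, up to constants
        d = to - frm
        return 0.5 * np.linalg.slogdet(Hz)[1] - (n / (2 * r**2)) * d @ Hz @ d
    log_acc = logq(hess(y), y, x) - logq(H, x, y)
    return y if np.log(rng.random()) < min(0.0, log_acc) else x

# Usage: uniform samples from the box [-1, 1]^2.
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.ones(4)
rng = np.random.default_rng(0)
x = np.zeros(2)
samples = []
for _ in range(3000):
    x = dikin_step(x, A, b, rng)
    samples.append(x)
samples = np.array(samples)
```

The ellipsoidal proposal automatically shrinks near the boundary (slacks enter the Hessian as $1/s_i^2$), which is what removes the need for a global boundedness assumption in this family of walks.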

In the first part of this paper, uniqueness of the strong solution is established for the Vlasov-unsteady Stokes problem in 3D. The second part deals with a semi-discrete scheme, which is based on the coupling of discontinuous Galerkin approximations for the Vlasov and the Stokes equations for the 2D problem. The proposed method is both mass and momentum conservative. Based on a special projection and also the Stokes projection, optimal error estimates are derived in the case of smooth, compactly supported initial data. Moreover, the generalization of the error estimates to the 3D problem is also indicated. Finally, based on a time-splitting algorithm, some numerical experiments are conducted whose results confirm our theoretical findings.

The B-spline copula function is defined by a linear combination of elements of the normalized B-spline basis. We develop a modified EM algorithm to maximize the penalized pseudo-likelihood function, wherein we use the smoothly clipped absolute deviation (SCAD) penalty function for the penalization term. We conduct simulation studies to demonstrate the stability of the proposed numerical procedure, show that penalization yields estimates with smaller mean-square errors when the true parameter matrix is sparse, and provide methods for determining tuning parameters and for model selection. As an example, we analyze a data set consisting of birth and death rates from 237 countries, available at the website ``Our World in Data,'' and we estimate the marginal density and distribution functions of those rates together with all parameters of our B-spline copula model.
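The SCAD penalty referenced here has the standard three-piece form of Fan and Li (2001): linear up to $\lambda$, a quadratic taper up to $a\lambda$, then constant. A direct elementwise implementation (with the conventional default $a = 3.7$):

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty (Fan and Li, 2001), applied elementwise:
    lam*|t|                              for |t| <= lam,
    (2*a*lam*|t| - t^2 - lam^2)/(2(a-1)) for lam < |t| <= a*lam,
    lam^2*(a+1)/2                        for |t| > a*lam."""
    t = np.abs(theta)
    quad = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))
    flat = lam**2 * (a + 1) / 2
    return np.where(t <= lam, lam * t, np.where(t <= a * lam, quad, flat))
```

Unlike the lasso penalty, SCAD levels off for large coefficients, so large B-spline coefficients are not shrunk, while small ones are driven to zero, yielding the sparse parameter matrices the simulations target.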

We consider the classical online bipartite matching problem in the probe-commit model. In this problem, when an online vertex arrives, its edges must be probed to determine if they exist, based on known edge probabilities. A probing algorithm must respect commitment, meaning that if a probed edge exists, it must be used in the matching. Additionally, each online vertex has a patience constraint which limits the number of probes that can be made to an online vertex's adjacent edges. We introduce a new configuration linear program (LP) which we prove is a relaxation of an optimal offline probing algorithm. Using this LP, we establish the following competitive ratios which depend on the model used to generate the instance graph, and the arrival order of its online vertices:

- In the worst-case instance model, an optimal $1/e$ ratio when the vertices arrive in uniformly at random (u.a.r.) order.
- In the known independently distributed (i.d.) instance model, an optimal $1/2$ ratio when the vertices arrive in adversarial order, and a $1-1/e$ ratio when the vertices arrive in u.a.r. order.

The latter two results improve upon the previous best competitive ratio of $0.46$ due to Brubach et al. (Algorithmica 2020), which only held in the more restricted known i.i.d. (independent and identically distributed) instance model. Our $1-1/e$-competitive algorithm matches the best known result for the prophet secretary matching problem due to Ehsani et al. (SODA 2018). Our algorithm is efficient and implies a $1-1/e$ approximation ratio for the special case when the graph is known. This is the offline stochastic matching problem, and we improve upon the $0.42$ approximation ratio for one-sided patience due to Pollner et al. (EC 2022), while also generalizing the $1-1/e$ approximation ratio for unbounded patience due to Gamlath et al. (SODA 2019).
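To make the probe-commit and patience constraints concrete, here is a toy greedy baseline (not the paper's LP-based algorithm): each arrival probes its available edges in decreasing probability order until a probe succeeds or patience runs out, and commitment forces any successful probe into the matching.

```python
import random

def greedy_probe_commit(arrivals, rng):
    """Toy baseline in the probe-commit model.
    arrivals: list of (edges, patience), where edges is a list of
    (offline_id, prob) pairs for the arriving online vertex."""
    matched = set()
    matching = []
    for edges, patience in arrivals:
        # Only probe edges to still-unmatched offline vertices,
        # highest probability first, at most `patience` probes.
        avail = sorted((e for e in edges if e[0] not in matched),
                       key=lambda e: -e[1])
        for u, p in avail[:patience]:
            if rng.random() < p:      # probe: the edge exists w.p. p
                matched.add(u)
                matching.append(u)
                break                 # commitment: must take this edge
    return matching
```

Even this simple rule respects both constraints; the difficulty addressed in the paper is choosing *which* edges to probe, and in what order, with provable guarantees against the configuration LP.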

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, at the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our approach for robust object detection in various domain shift scenarios.
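Adversarial training of a domain classifier is commonly implemented with a gradient reversal layer: the classifier is updated to distinguish domains, while the sign-flipped gradient pushes the feature extractor toward domain-invariant features. The 1-D toy below works out the gradients by hand to show the sign flip; it is a generic illustration of the trick, not the paper's network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_grads(w, v, x, d):
    """Gradients of the logistic domain loss in a 1-D toy model:
    feature f = w*x, domain logit z = v*f, domain label d in {0, 1}."""
    f = w * x
    p = sigmoid(v * f)           # predicted probability of domain 1
    dLdz = p - d                 # d(logistic loss)/d(logit)
    g_v = dLdz * f               # domain classifier: ordinary gradient step
    g_w = dLdz * v * x           # gradient arriving at the feature extractor
    return g_v, g_w, -g_w        # reversal layer flips the sign for w
```

Descending along `-g_w` makes the feature extractor *increase* the domain loss, i.e., maximize domain confusion, while the classifier simultaneously minimizes it, which is exactly the adversarial objective.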

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This allows us to relax both the optimal value and solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
