
We consider the online $k$-median clustering problem in which $n$ points arrive online and must be irrevocably assigned to a cluster on arrival. As there are lower bound instances showing that an online algorithm cannot achieve a competitive ratio that is a function of $n$ and $k$, we consider a beyond worst-case analysis model in which the algorithm is provided a priori with a predicted budget $B$ that upper bounds the optimal objective value. We give an algorithm that achieves a competitive ratio that is exponential in the number $k$ of clusters, and show that the competitive ratio of every algorithm must be linear in $k$. To the best of our knowledge, this is the first investigation in the literature that considers cluster consistency using competitive analysis.
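To make the model concrete, here is a minimal sketch of an online clustering loop in this setting. It is not the paper's algorithm: the threshold `B / k`, the one-dimensional points, and the greedy opening rule are all illustrative assumptions; the sketch only demonstrates how arriving points are irrevocably assigned given a predicted budget $B$.

```python
import math

def online_k_median_sketch(points, k, B):
    """Illustrative online clustering loop (NOT the paper's algorithm).
    Assumes 1-D points and a hypothetical opening threshold of B / k."""
    centers, assignment = [], []
    tau = B / k  # hypothetical threshold derived from the predicted budget
    for p in points:
        if centers:
            d, j = min((abs(p - c), j) for j, c in enumerate(centers))
        else:
            d, j = math.inf, -1
        if d > tau and len(centers) < k:
            centers.append(p)                    # open a new cluster at p
            assignment.append(len(centers) - 1)
        else:
            assignment.append(j)                 # irrevocable assignment
    return centers, assignment
```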

Related content

Phase aberration is one of the primary sources of image quality degradation in ultrasound. It is induced by spatial variations in sound speed across a heterogeneous medium, which disrupt the transmitted waves and prevent coherent summation of echo signals, resulting in suboptimal image quality. In real experiments, obtaining non-aberrated ground truths can be extremely challenging, if not infeasible. This hinders deep learning-based phase aberration correction techniques, which must rely solely on simulated data and therefore suffer from the domain shift between simulated and experimental data. Here, for the first time, we propose a deep learning-based method that does not require reference data to compensate for the phase aberration effect. We train a network in which both the input and the target output are randomly aberrated radio frequency (RF) data. Moreover, we demonstrate that a conventional loss function such as the mean square error is inadequate for training the network to achieve optimal performance. Instead, we propose an adaptive mixed loss function that employs both B-mode and RF data, resulting in more efficient convergence and enhanced performance. Source code is available at \url{//code.sonography.ai}.
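As a rough illustration of the mixed-loss idea, the sketch below combines an RF-domain error with a B-mode-domain error, where the B-mode image is approximated by FFT-based envelope detection and log compression. The weighting scheme `alpha`, the simplified B-mode pipeline, and all names are our assumptions, not the paper's implementation.

```python
import torch

def bmode(rf, eps=1e-6):
    """Rough B-mode proxy: envelope via an FFT-based analytic signal,
    then log compression (simplified; RF samples along the last dim)."""
    n = rf.shape[-1]
    spec = torch.fft.fft(rf, dim=-1)
    h = torch.zeros(n, device=rf.device)   # analytic-signal mask
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    env = torch.abs(torch.fft.ifft(spec * h, dim=-1))
    return 20.0 * torch.log10(env + eps)

def mixed_loss(pred_rf, target_rf, alpha):
    """Hypothetical mixed loss: weighted sum of RF-domain and
    B-mode-domain errors; alpha would be adapted during training."""
    rf_term = torch.mean((pred_rf - target_rf) ** 2)
    bm_term = torch.mean((bmode(pred_rf) - bmode(target_rf)) ** 2)
    return alpha * rf_term + (1.0 - alpha) * bm_term
```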

The link prediction problem has become increasingly prominent in many domains, such as social network analysis, bioinformatics, transportation networks, and criminal investigations. A variety of techniques have been developed for it, which can be categorized into 1) similarity-based approaches, which study a set of features to extract similar nodes; 2) learning-based approaches, which extract patterns from the input data; and 3) probabilistic statistical approaches, which optimize a set of parameters to establish a model that best computes the formation probability. However, the existing literature lacks approaches that exploit the strengths of each category by integrating them. To tackle the link prediction problem, we propose an approach that combines methods from the first and second categories, whereas existing works use just one of them. Our two-phase method first determines new features related to the position and dynamic behavior of nodes, which make the approach more effective than approaches relying on standard measures alone. Then, a subspace clustering algorithm is applied to group social objects based on the computed similarity measures, which differentiate the strength of clusters; the use of local and global indices, together with the clustering information, plays an imperative role in our link prediction process. Extensive experiments on real datasets, including Facebook, Brightkite and HepTh, indicate the good performance of our proposed method. We have also experimentally compared our approach with previous techniques in the area to demonstrate its superiority.
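For intuition, a bare-bones version of the similarity-scoring phase might look as follows. The positional/dynamic features and the subspace clustering phase of the proposed method are omitted, and the score combination here is purely illustrative.

```python
import networkx as nx
from itertools import combinations

def score_links(G):
    """Toy phase-1 sketch: score non-edges with two local similarity
    indices (common neighbors and Jaccard); illustrative only."""
    scores = {}
    for u, v in combinations(G.nodes(), 2):
        if G.has_edge(u, v):
            continue
        cn = len(list(nx.common_neighbors(G, u, v)))
        union = len(set(G[u]) | set(G[v]))
        jaccard = cn / union if union else 0.0
        scores[(u, v)] = cn + jaccard   # toy combination of the two indices
    # highest-scoring non-edges are predicted as future links
    return sorted(scores.items(), key=lambda kv: -kv[1])
```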

L-moments are expected values of linear combinations of order statistics that provide robust alternatives to traditional moments. The estimation of parametric models by matching sample L-moments -- a procedure known as the ``method of L-moments'' -- has been shown to outperform maximum likelihood estimation (MLE) in small samples from popular distributions. The choice of the number of L-moments to be used in estimation remains \textit{ad hoc}, though: researchers typically set the number of L-moments equal to the number of parameters, so as to achieve an order condition for identification. This approach is generally inefficient in larger sample sizes. In this paper, we show that, by properly choosing the number of L-moments and weighting these accordingly, we are able to construct an estimator that outperforms MLE in finite samples, and yet does not suffer from efficiency losses asymptotically. We do so by considering a ``generalised'' method of L-moments estimator and deriving its asymptotic properties in a framework where the number of L-moments varies with sample size. We then propose methods to automatically select the number of L-moments in a given sample. Monte Carlo evidence shows our proposed approach is able to outperform (in a mean-squared error sense) MLE in smaller samples, whilst working as well as MLE in larger samples.
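For reference, the sample L-moments being matched can be computed from probability-weighted moments of the order statistics; a minimal sketch of the first four, using the standard formulas:

```python
import numpy as np
from math import comb

def sample_l_moments(x, nmom=4):
    """First four sample L-moments via probability-weighted moments
    b_r = n^{-1} * sum_i [C(i, r) / C(n-1, r)] * x_(i) (0-indexed)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b = [sum(comb(i, r) / comb(n - 1, r) * x[i] for i in range(r, n)) / n
         for r in range(nmom)]
    l1 = b[0]                                   # location (the mean)
    l2 = 2 * b[1] - b[0]                        # scale
    l3 = 6 * b[2] - 6 * b[1] + b[0]             # related to skewness
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]  # related to kurtosis
    return l1, l2, l3, l4
```

The method of L-moments then chooses the parameter vector whose population L-moments match these sample quantities; the paper's generalised estimator uses more L-moments than parameters and weights them.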

This paper develops projection-free algorithms for online convex optimization with stochastic constraints. We design an online primal-dual projection-free framework that can incorporate any projection-free algorithm developed for online convex optimization without long-term constraints. With this general template, we deduce sublinear regret and constraint violation bounds for various settings. Moreover, for the case where the loss and constraint functions are smooth, we develop a primal-dual conditional gradient method that achieves $O(\sqrt{T})$ regret and $O(T^{3/4})$ constraint violation. Furthermore, for the setting where the loss and constraint functions are stochastic and strong duality holds for the associated offline stochastic optimization problem, we prove that the constraint violation can be reduced to have the same asymptotic growth as the regret.
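To illustrate the primal-dual template in a simplified, non-authoritative form: one iteration might couple a conditional-gradient (Frank-Wolfe) primal step, which needs only a linear minimization oracle rather than a projection, with a dual ascent step on the constraint. All function handles and step sizes below are placeholders, not the paper's method.

```python
def primal_dual_cg_step(x, lam, grad_loss, grad_constr, constr_val,
                        linear_oracle, eta, sigma):
    """One hypothetical primal-dual conditional-gradient step."""
    g = grad_loss(x) + lam * grad_constr(x)   # gradient of the Lagrangian
    s = linear_oracle(g)                      # argmin_{v in K} <g, v>; no projection
    x_new = x + eta * (s - x)                 # convex combination stays in K
    lam_new = max(0.0, lam + sigma * constr_val(x))  # dual ascent, lam >= 0
    return x_new, lam_new
```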

Persistent homology is a central methodology in topological data analysis that has been successfully implemented in many fields and is becoming increasingly popular and relevant. The output of persistent homology is a persistence diagram -- a multiset of points supported on the upper half plane -- that is often used as a statistical summary of the topological features of data. In this paper, we study the random nature of persistent homology and estimate the density of expected persistence diagrams from observations using wavelets; we show that our wavelet-based estimator is optimal. Furthermore, we propose an adaptive estimator that offers a sparse representation of the expected persistence diagram that achieves near-optimality.
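As a toy illustration of wavelet-based density estimation here, note that a level-$j$ Haar estimator over a bounded window reduces to a dyadic histogram averaged across observed diagrams; the window $[0, \text{box}]^2$ and the resolution below are assumptions for this sketch, not the paper's estimator.

```python
import numpy as np

def haar_density_estimate(diagrams, j=4, box=1.0):
    """Level-j Haar (dyadic histogram) estimate of the expected
    persistence diagram density; diagrams are lists of (birth, death)
    pairs assumed to lie in [0, box]^2."""
    m = 2 ** j
    dens = np.zeros((m, m))
    for dgm in diagrams:
        pts = np.asarray(dgm, dtype=float)
        idx = np.clip((pts / box * m).astype(int), 0, m - 1)
        for bi, di in idx:
            dens[bi, di] += 1.0
    cell_area = (box / m) ** 2
    return dens / (len(diagrams) * cell_area)  # average density per unit area
```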

Online contention resolution schemes (OCRSs) are effective rounding techniques for online stochastic combinatorial optimization problems. These schemes randomly and sequentially round a fractional solution to a relaxed problem that can be formulated in advance. We propose OCRSs for online stochastic generalized assignment problems. In this problem, sequentially arriving items are packed into a single knapsack, and their sizes are revealed only after insertion. The goal is to maximize the acceptance probability, i.e., the smallest probability, over all items, of being placed in the knapsack. Since the item sizes are unknown beforehand, a capacity overflow may occur. We consider two distinct settings: the hard constraint, where items that cause overflow are rejected, and the soft constraint, where such items are accepted. Under the hard constraint setting, we present an algorithm with an acceptance probability of $1/3$, and we also prove that no algorithm can achieve an acceptance probability greater than $3/7$. Under the soft constraint setting, we propose an algorithm with an acceptance probability of $1/2$, and we show that this is best possible.
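The model can be made concrete with a small simulation harness: a policy must commit to an item before its random size is revealed, and under the hard constraint an overflowing item is rejected. The policy interface and size distributions below are illustrative placeholders, not the algorithms from the paper.

```python
import random

def estimate_acceptance(scheme, size_dists, capacity=1.0, trials=20000):
    """Estimate per-item acceptance probabilities of an OCRS-style
    policy under the hard constraint (toy model; semantics simplified)."""
    accepted = [0] * len(size_dists)
    for _ in range(trials):
        used = 0.0
        for i, draw in enumerate(size_dists):
            if scheme(i, used):         # decide BEFORE seeing the size
                size = draw()           # size revealed only after insertion
                if used + size <= capacity:
                    used += size
                    accepted[i] += 1
                # else: overflow, item rejected (hard constraint)
    return [a / trials for a in accepted]
```

For example, `estimate_acceptance(lambda i, used: random.random() < 0.5, [lambda: random.random() * 0.5] * 5)` estimates the acceptance probabilities of a naive coin-flip policy; the objective studied in the paper is the minimum of these values.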

We propose an algorithm whose inputs are parameters $k$ and $r$ and a hypergraph $H$ of rank at most $r$. The algorithm either returns a tree decomposition of $H$ of generalized hypertree width at most $4k$, or 'NO'. In the latter case, it is guaranteed that the hypertree width of $H$ is greater than $k$. Most importantly, the runtime of the algorithm is \emph{FPT} in $k$ and $r$. The approach extends to fractional hypertree width, with a slightly worse approximation ($4k+1$ instead of $4k$). We hope that the results of this paper will give rise to a new research direction aimed at the design of FPT algorithms for the computation and approximation of hypertree width parameters for restricted classes of hypergraphs.

Inverse problems are in many cases solved with optimization techniques. When the underlying model is linear, first-order gradient methods are usually sufficient. With nonlinear models, due to nonconvexity, one must often resort to second-order methods that are computationally more expensive. In this work we aim to approximate a nonlinear model with a linear one and correct the resulting approximation error. We develop a sequential method that iteratively solves a linear inverse problem and updates the approximation error by evaluating it at the new solution. This treatment convexifies the problem and allows us to benefit from established convex optimization methods. We separately consider cases where the approximation is fixed over iterations and where the approximation is adaptive. In the fixed case we show theoretically under what assumptions the sequence converges. In the adaptive case, particularly considering the special case of approximation by first-order Taylor expansion, we show that with certain assumptions the sequence converges to a critical point of the original nonconvex functional. Furthermore, we show that with quadratic objective functions the sequence corresponds to the Gauss-Newton method. Finally, we showcase numerical results superior to the conventional model correction method. We also show that a fixed approximation can provide competitive results with considerable computational speed-up.
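A minimal sketch of the fixed-approximation variant, assuming a Tikhonov-regularized least-squares step (the regularizer, the fixed iteration count, and all names are our simplifications):

```python
import numpy as np

def sequential_model_correction(f, A, y, x0, reg, iters=20):
    """Fixed-approximation sketch: repeatedly solve a regularized LINEAR
    inverse problem with the current approximation-error estimate, then
    refresh the error at the new iterate. A is a linear surrogate of the
    nonlinear forward map f."""
    x = x0.copy()
    AtA = A.T @ A + reg * np.eye(A.shape[1])
    for _ in range(iters):
        eps = f(x) - A @ x                          # approximation error at x
        # minimize ||A x + eps - y||^2 + reg ||x||^2 in closed form
        x = np.linalg.solve(AtA, A.T @ (y - eps))
    return x
```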

We consider the well-studied Robust $(k, z)$-Clustering problem, which generalizes the classic $k$-Median, $k$-Means, and $k$-Center problems. Given a constant $z\ge 1$, the input to Robust $(k, z)$-Clustering is a set $P$ of $n$ weighted points in a metric space $(M,\delta)$ and a positive integer $k$. Further, each point belongs to one (or more) of $m$ different groups $S_1,S_2,\ldots,S_m$. Our goal is to find a set $X$ of $k$ centers such that $\max_{i \in [m]} \sum_{p \in S_i} w(p) \delta(p,X)^z$ is minimized. This problem arises in the domains of robust optimization [Anthony, Goyal, Gupta, Nagarajan, Math. Oper. Res. 2010] and algorithmic fairness. For polynomial time computation, an approximation factor of $O(\log m/\log\log m)$ is known [Makarychev, Vakilian, COLT 2021], which is tight under a plausible complexity assumption even in line metrics. For FPT time, there is a $(3^z+\epsilon)$-approximation algorithm, which is tight under GAP-ETH [Goyal, Jaiswal, Inf. Proc. Letters, 2023]. Motivated by the tight lower bounds for general discrete metrics, we focus on \emph{geometric} spaces such as the (discrete) high-dimensional Euclidean setting and metrics of low doubling dimension, which play an important role in data analysis applications. First, for a universal constant $\eta_0 >0.0006$, we devise a $3^z(1-\eta_{0})$-factor FPT approximation algorithm for discrete high-dimensional Euclidean spaces, thereby bypassing the lower bound for general metrics. We complement this result by showing that even the special case of $k$-Center in dimension $\Theta(\log n)$ is $(\sqrt{3/2}- o(1))$-hard to approximate for FPT algorithms. Finally, we complete the FPT approximation landscape by designing an FPT $(1+\epsilon)$-approximation scheme (EPAS) for metrics of sub-logarithmic doubling dimension.
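For small instances the objective is easy to evaluate directly, which clarifies the min-max structure of the problem; the brute-force center search below is only feasible for tiny inputs and is unrelated to the algorithms in the paper.

```python
import numpy as np
from itertools import combinations

def robust_kz_objective(P, w, groups, X, z):
    """max over groups S_i of sum_{p in S_i} w(p) * dist(p, X)^z,
    for Euclidean points P (n x d) and centers X (k x d)."""
    d = np.min(np.linalg.norm(P[:, None, :] - X[None, :, :], axis=2), axis=1)
    return max(sum(w[p] * d[p] ** z for p in S) for S in groups)

def brute_force_centers(P, w, groups, k, z):
    """Exhaustive search over k-subsets of P as centers (tiny instances only)."""
    best = min(combinations(range(len(P)), k),
               key=lambda C: robust_kz_objective(P, w, groups, P[list(C)], z))
    return P[list(best)]
```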

In a vertex-colored graph $G = (V, E)$, a subset $S \subseteq V$ is said to be consistent if every vertex has a nearest neighbor in $S$ with the same color. The problem of computing a minimum cardinality consistent subset of a graph is known to be NP-hard. On the positive side, Dey et al. (FCT 2021) show that this problem is solvable in polynomial time when input graphs are restricted to bi-colored trees. In this paper, we give a polynomial-time algorithm for this problem on $k$-colored trees with fixed $k$.
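The defining property is easy to verify for a candidate subset; a minimal sketch (assuming a connected graph, hop distances, and a nonempty $S$), using `networkx` for shortest paths:

```python
import networkx as nx

def is_consistent(G, color, S):
    """Checks that every vertex has, among its nearest S-vertices
    (by hop distance), at least one of its own color.
    Assumes G is connected and S is nonempty."""
    S = set(S)
    for v in G.nodes():
        dist = nx.single_source_shortest_path_length(G, v)
        dmin = min(dist[s] for s in S)
        nearest = [s for s in S if dist[s] == dmin]
        if not any(color[s] == color[v] for s in nearest):
            return False
    return True
```

The computational problem is then to find a smallest $S$ passing this check; the paper's contribution is a polynomial-time algorithm for this search on trees with a fixed number of colors.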
