
Shape is a powerful tool for understanding point sets. A formal notion of shape is given by $\alpha$-shapes, which generalize the convex hull and provide an adjustable level of detail. Many real-world point sets have an inherent temporal property, as natural processes often happen over time, like lightning strikes during thunderstorms or moving animal swarms. To explore such point sets, where each point is associated with one timestamp, interactive applications may utilize $\alpha$-shapes and allow the user to specify different time windows and $\alpha$-values. We show how to compute the temporal $\alpha$-shape $\alpha_T$, a minimal description of all $\alpha$-shapes over all time windows, in output-sensitive linear time. We also give complexity bounds on $|\alpha_T|$. We use $\alpha_T$ to interactively visualize $\alpha$-shapes of user-specified time windows without having to compute each requested $\alpha$-shape from scratch. Experimental results suggest that our approach outperforms an existing approach by a factor of at least $\sim$52 and that the description we compute has reasonable size in practice. The basis for our algorithm is an existing algorithm that computes all Delaunay triangles over all time windows using $\mathcal{O}(1)$ time per triangle. Our approach generalizes to higher dimensions with the same runtime for fixed $d$.
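
To make the underlying notion concrete, below is a minimal static 2-D sketch of an $\alpha$-complex: each Delaunay triangle is kept iff its circumradius is at most $\alpha$ (one common convention; some authors use $1/\alpha$). This illustrates only the shape construction itself, not the paper's temporal structure over time windows.

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_triangles(points, alpha):
    # Keep each Delaunay triangle iff its circumradius is at most alpha
    # (one common alpha-complex convention; some authors use 1/alpha).
    pts = np.asarray(points, float)
    tri = Delaunay(pts)
    kept = []
    for simplex in tri.simplices:
        a, b, c = pts[simplex]
        u, v = b - a, c - a
        area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])
        if area == 0.0:
            continue  # degenerate triangle
        # Circumradius R = |AB| * |BC| * |CA| / (4 * area).
        R = (np.linalg.norm(u) * np.linalg.norm(c - b) * np.linalg.norm(v)) / (4.0 * area)
        if R <= alpha:
            kept.append(simplex)
    return np.array(kept)

rng = np.random.default_rng(3)
print(len(alpha_shape_triangles(rng.random((50, 2)), alpha=0.2)))
```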

Related content

The IFIP TC13 Conference on Human-Computer Interaction is an important venue for researchers and practitioners in the field of human-computer interaction to present their work. Over the years, these conferences have attracted researchers from many countries and cultures.
February 1, 2024

We introduce a novel sufficient dimension-reduction (SDR) method that is robust against outliers, using $\alpha$-distance covariance (dCov) for dimension-reduction problems. Under very mild conditions on the predictors, the central subspace is effectively estimated, and the method retains the model-free advantage of not estimating the link function, based on projection onto the Stiefel manifold. We establish the convergence of the proposed estimator under some regularity conditions. We compare the performance of our method with existing SDR methods through simulation and real-data analysis and show that our algorithm improves computational efficiency and effectiveness.
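
As background, the sample ($\alpha$-)distance covariance that drives the method can be computed from doubly centred pairwise-distance matrices. The sketch below shows the standard V-statistic estimator with a distance exponent $\alpha \in (0, 2)$; the paper's full Stiefel-manifold SDR estimator is not reproduced here.

```python
import numpy as np

def alpha_dcov(x, y, alpha=1.0):
    # Sample (V-statistic) distance covariance with distance exponent
    # alpha in (0, 2); a sketch of the dependence measure only, not the
    # paper's full SDR procedure.
    x = np.asarray(x, float).reshape(len(x), -1)
    y = np.asarray(y, float).reshape(len(y), -1)

    def doubly_centred(z):
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1) ** alpha
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

    A, B = doubly_centred(x), doubly_centred(y)
    return np.sqrt(max((A * B).mean(), 0.0))

rng = np.random.default_rng(0)
x = rng.normal(size=200)
# Dependent pair yields a clearly larger value than an independent pair.
print(alpha_dcov(x, x ** 2), alpha_dcov(x, rng.normal(size=200)))
```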

The solution to empirical risk minimization with $f$-divergence regularization (ERM-$f$DR) is presented under mild conditions on $f$. Under such conditions, the optimal measure is shown to be unique. Examples of the solution for particular choices of the function $f$ are presented. Previously known solutions to common regularization choices are obtained by leveraging the flexibility of the family of $f$-divergences. These include the unique solutions to empirical risk minimization with relative entropy regularization (Type-I and Type-II). The analysis of the solution unveils the following properties of $f$-divergences when used in the ERM-$f$DR problem: $(i)$ $f$-divergence regularization forces the support of the solution to coincide with the support of the reference measure, which introduces a strong inductive bias that dominates the evidence provided by the training data; and $(ii)$ any $f$-divergence regularization is equivalent to a different $f$-divergence regularization with an appropriate transformation of the empirical risk function.
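
For the special case of Type-I relative entropy regularization mentioned above, the unique solution is the familiar Gibbs measure, which tilts the reference measure by the exponentiated negative empirical risk. A minimal discrete sketch, assuming a finite model set:

```python
import numpy as np

def gibbs_solution(empirical_risk, reference, lam):
    # Type-I relative-entropy (KL) regularization over a finite model set:
    # the optimal measure is P(theta) proportional to
    # Q(theta) * exp(-L(theta) / lam). A sketch of this special case,
    # not the general f-divergence solution.
    w = np.asarray(reference) * np.exp(-np.asarray(empirical_risk) / lam)
    return w / w.sum()

# Example with a uniform reference over four models; note the solution's
# support equals the reference's support, matching property (i).
print(gibbs_solution([0.1, 0.4, 0.2, 0.9], [0.25] * 4, lam=0.5))
```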

Time complexity is an important metric for comparing algorithms based on their growth rates. The commonly used notations for qualifying it are the Big-Oh, Big-Omega, Big-Theta, Small-Oh, and Small-Omega notations. All of them consider time a purely real quantity, i.e., time coincides with the real (horizontal) axis of the Argand plane. But what if time, rather than coinciding entirely with the real axis, makes some angle with it? We focus on the case when the time complexity has both real and imaginary components. For instance, if $T\left(n\right)=n\log{n}$, the existing asymptotic notations are capable of handling it; but if we come across a problem where $T\left(n\right)=n\log{n}+i\cdot n^2$, with $i=\sqrt{-1}$, the existing asymptotic notations cannot capture it. To mitigate this, in this research we propose the Zeta notation ($\zeta$), which qualifies time on both the real and imaginary axes of the Argand plane.
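
As a toy illustration of the gap being described, consider the hypothetical complex-valued running time $T(n)=n\log n + i\cdot n^2$ from the abstract: classical notations order only the real part, while the imaginary part grows at its own rate.

```python
import numpy as np

# Hypothetical complex running time T(n) = n log n + i * n^2 from the
# abstract: Big-Oh and friends order only the real part.
def T(n):
    return n * np.log(n) + 1j * n ** 2

for n in (10, 100, 1000):
    t = T(n)
    print(f"n={n:5d}  Re(T)={t.real:12.1f}  Im(T)={t.imag:12.1f}")
```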

We consider the problem of finding a geodesic disc of smallest radius containing at least $k$ points from a set of $n$ points in a simple polygon that has $m$ vertices, $r$ of which are reflex vertices. We refer to such a disc as a SKEG disc. We present an algorithm to compute a SKEG disc using higher-order geodesic Voronoi diagrams with worst-case time $O(k^{2} n + k^{2} r + \min(kr, r(n-k)) + m)$ ignoring polylogarithmic factors. We then present two $2$-approximation algorithms that find a geodesic disc containing at least $k$ points whose radius is at most twice that of a SKEG disc. The first algorithm computes a $2$-approximation with high probability in $O((n^{2} / k) \log n \log r + m)$ worst-case time with $O(n + m)$ space. The second algorithm runs in $O(n \log^{2} n \log r + m)$ expected time using $O(n + m)$ expected space, independent of $k$. Note that the first algorithm is faster when $k \in \omega(n / \log n)$.
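
The flavour of the 2-approximation can be seen already in the Euclidean (non-geodesic) setting: any input point inside an optimal $k$-enclosing disc of radius $r^*$ has its $k$-th nearest neighbour (counting itself) within $2r^*$, so minimizing the $k$-th nearest-neighbour distance over all input points gives a radius at most twice optimal. A brute-force sketch of this idea, not the paper's geodesic algorithms:

```python
import numpy as np

def two_approx_k_enclosing_radius(points, k):
    # Euclidean (not geodesic) sketch: the minimum over input points of
    # the distance to their k-th nearest neighbour (counting the point
    # itself) is at most twice the optimal k-enclosing-disc radius.
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    kth = np.sort(d, axis=1)[:, k - 1]  # index 0 is the point itself
    return float(kth.min())

rng = np.random.default_rng(2)
print(two_approx_k_enclosing_radius(rng.normal(size=(100, 2)), k=10))
```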

We consider a boundary value problem (BVP) modelling one-dimensional heat conduction with radiation, which is derived from the Stefan-Boltzmann law. The problem depends strongly on its parameters, making it difficult to estimate the solution. We use an analytical approach to determine upper and lower bounds on the exact solution of the BVP, which allows us to estimate the latter. Finally, we support our theoretical arguments with numerical data obtained by implementing them in the Maple computer algebra system.
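
For intuition, a radiation-type BVP in this spirit can be checked numerically. The sketch below uses a hypothetical model problem $u'' = \lambda u^4$ with unit boundary values; the paper's exact equation and parameter values are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Hypothetical model problem in the spirit of the abstract:
# u''(x) = lam * u(x)**4 on [0, 1] with u(0) = u(1) = 1.
lam = 1.0

def rhs(x, y):
    # y[0] = u, y[1] = u'; returns (u', u'') stacked row-wise.
    return np.vstack([y[1], lam * y[0] ** 4])

def bc(ya, yb):
    return np.array([ya[0] - 1.0, yb[0] - 1.0])

x = np.linspace(0.0, 1.0, 11)
y_guess = np.ones((2, x.size))
sol = solve_bvp(rhs, bc, x, y_guess)
print(sol.status, sol.y[0].min())  # status 0 means the solver converged
```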

In 2021, Casares, Colcombet and Fijalkow introduced the Alternating Cycle Decomposition (ACD), a structure used to define optimal transformations of Muller automata into parity automata and to obtain theoretical results about the possibility of relabelling automata with different acceptance conditions. In this work, we study the complexity of computing the ACD and its DAG-version, proving that this can be done in polynomial time for suitable representations of the acceptance condition of the Muller automaton. As corollaries, we obtain that we can decide typeness of Muller automata in polynomial time, as well as the parity index of the languages they recognise. Furthermore, we show that we can minimise in polynomial time the number of colours (resp. Rabin pairs) defining a Muller (resp. Rabin) acceptance condition, but that these problems become NP-complete when taking into account the structure of an automaton using such a condition.
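
For readers unfamiliar with the objects involved, a Muller acceptance condition is simply a family $\mathcal{F}$ of colour sets, and a run is accepting iff the set of colours it sees infinitely often lies in $\mathcal{F}$. A minimal illustration of the condition itself (not the ACD computation):

```python
# A Muller condition is a family F of colour sets; a run is accepting
# iff the set of colours visited infinitely often belongs to F.
def muller_accepts(inf_colours, condition):
    return frozenset(inf_colours) in condition

condition = {frozenset({0}), frozenset({0, 1})}
print(muller_accepts({0, 1}, condition))  # True
print(muller_accepts({1}, condition))     # False
```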

Diffusion models have recently emerged as a promising framework for Image Restoration (IR), owing to their ability to produce high-quality reconstructions and their compatibility with established methods. Existing methods for solving noisy inverse problems in IR consider only pixel-wise data fidelity. In this paper, we propose SaFaRI, a spatial-and-frequency-aware diffusion model for IR with Gaussian noise. Our model encourages images to preserve data fidelity in both the spatial and frequency domains, resulting in enhanced reconstruction quality. We comprehensively evaluate the performance of our model on a variety of noisy inverse problems, including inpainting, denoising, and super-resolution. Our thorough evaluation demonstrates that SaFaRI achieves state-of-the-art performance on both the ImageNet and FFHQ datasets, outperforming existing zero-shot IR methods in terms of LPIPS and FID metrics.
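
A data-fidelity term that penalizes discrepancies in both domains might look like the sketch below; this is an illustrative combination of a pixel-wise loss and an FFT-domain loss, not SaFaRI's actual guidance functional.

```python
import torch

def spatial_frequency_fidelity(x_hat, y, weight=0.5):
    # Illustrative combined data-fidelity term: an L2 loss in pixel
    # space plus an L2 loss between 2-D FFTs; `weight` (an assumed
    # hyperparameter) balances the two domains.
    spatial = torch.mean((x_hat - y) ** 2)
    freq = torch.mean(torch.abs(torch.fft.fft2(x_hat) - torch.fft.fft2(y)) ** 2)
    return (1.0 - weight) * spatial + weight * freq

# Example on random images (batch, channel, height, width).
x_hat = torch.rand(1, 3, 64, 64)
y = torch.rand(1, 3, 64, 64)
print(spatial_frequency_fidelity(x_hat, y).item())
```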

We investigate the complexity of solving stable or perturbation-resilient instances of $k$-Means and $k$-Median clustering in fixed-dimension Euclidean metrics (more generally, doubling metrics). The notion of stable (perturbation-resilient) instances was introduced by Bilu and Linial [2010] and Awasthi et al. [2012]. In our context, we say a $k$-Means instance is $\alpha$-stable if there is a unique optimum which remains optimal if distances are (non-uniformly) stretched by a factor of at most $\alpha$. Stable clustering instances have been studied to explain why heuristics such as Lloyd's algorithm perform well in practice. In this work we show that for any fixed $\epsilon>0$, $(1+\epsilon)$-stable instances of $k$-Means in doubling metrics can be solved in polynomial time. More precisely, we show that a natural multiswap local search algorithm finds OPT for $(1+\epsilon)$-stable instances of $k$-Means and $k$-Median in a polynomial number of iterations. We complement this result by showing that, under a new PCP theorem, this is essentially tight: when the dimension $d$ is part of the input, there is a fixed $\epsilon_0>0$ such that there is not even a PTAS for $(1+\epsilon_0)$-stable $k$-Means in $\mathbb{R}^d$ unless NP=RP. To do this, we consider a robust property of CSPs; we call an instance stable if there is a unique optimal solution $x^*$ and, for any other solution $x'$, the number of unsatisfied clauses is proportional to the Hamming distance between $x^*$ and $x'$. Dinur et al. have already shown that stable QSAT is hard to approximate for some constant $Q$; our hypothesis is simply that stable QSAT with bounded variable occurrence is also hard. Given this hypothesis, we consider "stability-preserving" reductions to prove our hardness result for stable $k$-Means. Such reductions seem to be more fragile than standard $L$-reductions and may be of further use in demonstrating that other stable optimization problems are hard.
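
The local search analysed here can be sketched in its simplest single-swap form (the paper analyses multiswap variants): repeatedly swap one centre for a non-centre point whenever doing so lowers the $k$-Means cost.

```python
import numpy as np

def local_search_kmeans(points, k, max_iters=100):
    # Single-swap local search for k-Means: swap one centre for a
    # non-centre point whenever the clustering cost drops. A sketch of
    # the basic heuristic, not the paper's multiswap analysis.
    pts = np.asarray(points, float)

    def cost(centre_idx):
        d = np.linalg.norm(pts[:, None, :] - pts[centre_idx][None, :, :], axis=-1)
        return float((d.min(axis=1) ** 2).sum())

    centres = list(range(k))  # arbitrary initial centres among the points
    best = cost(centres)
    for _ in range(max_iters):
        improved = False
        for i in range(k):
            for j in range(len(pts)):
                if j in centres:
                    continue
                trial = centres[:i] + [j] + centres[i + 1:]
                c = cost(trial)
                if c < best:
                    centres, best, improved = trial, c, True
        if not improved:
            break  # local optimum reached
    return pts[centres], best

rng = np.random.default_rng(1)
centres, obj = local_search_kmeans(rng.normal(size=(60, 2)), k=3)
print(obj)
```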

Principal component analysis (PCA) is widely used for data decorrelation and dimensionality reduction. However, the use of PCA may be impractical in real-time applications, or in situations where energy and computing constraints are severe. In this context, the discrete cosine transform (DCT) becomes a low-cost alternative for data decorrelation. This paper presents a method to derive computationally efficient approximations to the DCT. The proposed method aims at minimizing the angle between the rows of the exact DCT matrix and the rows of the approximated transformation matrix. The resulting transformation matrices are orthogonal and have extremely low arithmetic complexity. Considering popular performance measures, one of the proposed transformation matrices outperforms the best competitors in both matrix error and coding capabilities. Practical applications in image and video coding demonstrate the relevance of the proposed transformation. In fact, we show that the proposed approximate DCT can outperform the exact DCT for image encoding under certain compression ratios. The proposed transform and its direct competitors are also physically realized as digital prototype circuits using FPGA technology.
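
The row-angle objective is straightforward to evaluate. Below is a sketch that measures the mean angle between rows of the exact DCT-II matrix and a candidate approximation, tried here on a simple sign-based candidate; the paper's actual search procedure and resulting matrices are not reproduced.

```python
import numpy as np
from scipy.fft import dct

def mean_row_angle(T):
    # Mean angle (radians) between corresponding rows of the exact
    # DCT-II matrix and a candidate approximation T -- the objective
    # described in the abstract; a sketch, not the paper's search.
    n = T.shape[0]
    C = dct(np.eye(n), axis=0, norm='ortho')  # exact orthonormal DCT-II
    cos = [abs(C[i] @ T[i]) / (np.linalg.norm(C[i]) * np.linalg.norm(T[i]))
           for i in range(n)]
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

n = 8
C = dct(np.eye(n), axis=0, norm='ortho')
print(mean_row_angle(np.sign(C)))  # sign-based (signed-DCT-style) candidate
```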

In this paper, for any fixed integer $q>2$, we construct $q$-ary codes correcting a burst of at most $t$ deletions with redundancy $\log n+8\log\log n+o(\log\log n)+\gamma_{q,t}$ bits and near-linear encoding/decoding complexity, where $n$ is the message length and $\gamma_{q,t}$ is a constant that depends only on $q$ and $t$. Previous works gave constructions of such codes with redundancy $\log n+O(\log q\log\log n)$ bits or $\log n+O(t^2\log\log n)+O(t\log q)$ bits. The redundancy of our new construction is independent of $q$ and $t$ in the second term.
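
As classical background on deletion-correcting redundancy, the binary Varshamov-Tenengolts (VT) code corrects a single deletion using a $\log n$-bit syndrome constraint; the sketch below computes that syndrome (the paper's burst-deletion construction is substantially more involved).

```python
def vt_syndrome(bits):
    # Varshamov-Tenengolts syndrome: sum of i * x_i (1-indexed) modulo
    # (n + 1). Classical single-deletion-correcting background, not the
    # paper's burst-deletion construction.
    n = len(bits)
    return sum((i + 1) * b for i, b in enumerate(bits)) % (n + 1)

word = [1, 0, 1, 1, 0, 1, 0, 1]
# All codewords of VT_a(n) share the same syndrome value a.
print(vt_syndrome(word))
```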
