
Since its introduction to computational geometry by Alt and Godau in 1992, the Fr\'echet distance has been a mainstay of algorithmic research on curve similarity. The focus of this research has been on comparing polygonal curves, with the notable exception of an algorithm for the decision problem for planar piecewise smooth curves due to Rote (2007). We present an algorithm for the decision problem for piecewise smooth curves that is conceptually simpler and that naturally extends to yield the first algorithm for the problem in $\mathbb{R}^d$. We assume that the algorithm is given two continuous curves, each consisting of a sequence of $m$, resp.\ $n$, smooth pieces, where each piece belongs to a sufficiently well-behaved class of curves, such as the set of algebraic curves of bounded degree. We introduce a decomposition of the free space diagram into a controlled number of pieces, which allows the decision problem to be solved as in the polygonal case, in $O(mn)$ time; the Fr\'echet distance itself can then be computed in $O(mn\log(mn))$ time. Furthermore, we study approximation algorithms for piecewise smooth curves that are also $c$-packed for some fixed value $c$. We adapt the existing framework for $(1+\epsilon)$-approximations and show that an approximate decision can be computed in $O(cn/\epsilon)$ time for any $\epsilon > 0$.
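For intuition, here is a minimal sketch of the discrete analogue of the decision procedure: reachability is propagated monotonically through the free cells of an $m \times n$ diagram, the same structure the continuous algorithm exploits cell by cell. This is the textbook discrete Fr\'echet check on sampled point sequences, not the paper's algorithm for smooth pieces.

```python
import numpy as np

def discrete_frechet_decision(P, Q, eps):
    """Decide whether the discrete Frechet distance between point
    sequences P (m x d) and Q (n x d) is at most eps, in O(mn) time."""
    m, n = len(P), len(Q)
    # free[i, j] is True when P[i] and Q[j] are within distance eps.
    free = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2) <= eps
    reach = np.zeros((m, n), dtype=bool)
    for i in range(m):
        for j in range(n):
            if not free[i, j]:
                continue
            if i == 0 and j == 0:
                reach[i, j] = True
            else:
                # A cell is reachable via a monotone path from its
                # left, lower, or lower-left neighbour.
                reach[i, j] = ((i > 0 and reach[i - 1, j]) or
                               (j > 0 and reach[i, j - 1]) or
                               (i > 0 and j > 0 and reach[i - 1, j - 1]))
    return bool(reach[m - 1, n - 1])
```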

Related content

Genome assembly is a prominent problem in bioinformatics: reconstructing a source string from a set of its overlapping substrings. Classically, genome assembly uses assembly graphs built from this set of substrings to compute the source string efficiently, trading scalability against information loss. The scalable de Bruijn graphs come at the price of losing crucial overlap information, whereas the complete overlap information is stored in overlap graphs using quadratic space. Hierarchical overlap graphs (HOG) [IPL20] overcome both limitations, avoiding information loss while using only linear space. After a series of suboptimal improvements, Khan and Park et al. simultaneously presented two optimal algorithms [CPM2021], of which only the former seemed practical. We empirically analyze all the practical algorithms for computing the HOG; the optimal algorithm [CPM2021] outperforms the previous algorithms as expected, though at the expense of extra memory, and it relies on a non-intuitive approach and non-trivial data structures. We present what is arguably the most intuitive algorithm, using only elementary arrays, which is also optimal. Our algorithm empirically performs best among all the algorithms in both time and memory, highlighting its significance in both theory and practice. We further explore the applications of hierarchical overlap graphs to various forms of suffix-prefix queries on a set of strings. Loukides et al. [CPM2023] recently presented state-of-the-art algorithms for these queries, but those algorithms require complex black-box data structures and seem impractical. Our algorithms, despite not matching the state-of-the-art bounds theoretically, answer the various queries in 0.01-100 milliseconds on a data set of around a billion characters.
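For context, the elementary quantity a HOG organizes is the longest suffix-prefix overlap between pairs of strings. Below is a minimal sketch of computing a single such overlap with a KMP failure-function pass; the paper's algorithms build the whole hierarchy for all pairs at once, which this sketch does not attempt.

```python
def longest_suffix_prefix(s, t):
    """Length of the longest suffix of s that is a prefix of t,
    via the KMP failure function of t + '#' + s, where '#' is a
    separator assumed to occur in neither string."""
    combined = t + "#" + s
    fail = [0] * len(combined)
    for i in range(1, len(combined)):
        k = fail[i - 1]
        while k > 0 and combined[i] != combined[k]:
            k = fail[k - 1]
        if combined[i] == combined[k]:
            k += 1
        fail[i] = k
    return fail[-1]

# The longest suffix of "ATCGG" that is a prefix of "CGGTA" is "CGG".
assert longest_suffix_prefix("ATCGG", "CGGTA") == 3
```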

Blockchains are gaining momentum due to the interest of industries and the general public in \emph{decentralized applications} (Dapps), particularly those for trading assets through digital certificates secured on blockchain, called tokens. As a consequence, providing a clear, unambiguous description of any activity carried out on a blockchain has become crucial, and we feel it is urgent to achieve such a description at least for trading. This paper reports on how to leverage the \emph{Ontology for Agents, Systems, and Integration of Services} ("\ONT{}") as a general means for the semantic representation of smart contracts, stored on blockchain, as software agents. Special attention is paid to non-fungible tokens (NFTs), whose management through the ERC721 standard is presented as a case study.

We propose a new density estimation algorithm. Given $n$ i.i.d. samples from a distribution belonging to a class of densities on $\mathbb{R}^d$, our estimator outputs any density in the class whose ``perceptron discrepancy'' with the empirical distribution is at most $O(\sqrt{d/n})$. The perceptron discrepancy between two distributions is defined as the largest difference in mass that they place on any halfspace of $\mathbb{R}^d$. We show that this estimator achieves an expected total variation distance to the truth that is almost minimax optimal over the class of densities with bounded Sobolev norm and over Gaussian mixtures. This suggests that regularity of the prior distribution could be an explanation for the efficiency of the ubiquitous step in machine learning that replaces optimization over large function spaces with simpler parametric classes (e.g.\ in the discriminators of GANs). We generalize the above to show that replacing the perceptron discrepancy with the generalized energy distance of Sz\'ekely--Rizzo further improves the total variation loss. The generalized energy distance between empirical distributions is easily computable and differentiable, which makes it especially useful for fitting generative models. To the best of our knowledge, this is the first example of a distance with such properties for which minimax statistical guarantees are available.
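For concreteness, here is a small sketch of the classical empirical energy distance of Sz\'ekely and Rizzo; the generalized variant studied in the paper may differ (e.g.\ in the distance exponent), but the closed-form, differentiable character highlighted above is already visible.

```python
import numpy as np

def energy_distance(X, Y):
    """Plain V-statistic estimate of the (squared) energy distance
    2 E||X - Y|| - E||X - X'|| - E||Y - Y'|| between empirical
    samples X (n x d) and Y (m x d)."""
    def mean_pdist(A, B):
        # Mean pairwise Euclidean distance between rows of A and B.
        return np.mean(np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2))
    return 2 * mean_pdist(X, Y) - mean_pdist(X, X) - mean_pdist(Y, Y)
```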

Arrangements of pseudolines are classic objects in discrete and computational geometry, studied with increasing intensity since their introduction almost 100 years ago. The study of the number $B_n$ of non-isomorphic simple arrangements of $n$ pseudolines goes back to Goodman and Pollack, Knuth, and others. It is known that $B_n = 2^{\Theta(n^2)}$, and finding asymptotic bounds on $b_n = \frac{\log_2(B_n)}{n^2}$ remains a challenging task. In 2011, Felsner and Valtr showed that $0.1887 \leq b_n \leq 0.6571$ for sufficiently large $n$. The upper bound remains untouched, but in 2020 Dumitrescu and Mandal improved the lower bound constant to $0.2083$. Their approach utilizes the known values of $B_n$ for up to $n=12$. We tackle the lower bound with a dynamic programming scheme. Our new bound is $b_n \geq 0.2526$ for sufficiently large $n$. The result is based on a delicate interplay of theoretical ideas and computer assistance.

Most current anomaly detection models assume that the normal pattern remains the same over time. However, the normal patterns of Web services change dramatically and frequently, and a model trained on old-distribution data is outdated after such a change. Retraining the whole model after every change is expensive. Moreover, at the beginning of a change in the normal pattern there is not enough observed data from the new distribution, and retraining a large neural network model with limited data is vulnerable to overfitting. We therefore propose a Light and Anti-overfitting Retraining Approach (LARA) for deep variational auto-encoder based time series anomaly detection methods (VAEs). This work makes three novel contributions: 1) the retraining process is formulated as a convex problem, which converges at a fast rate and prevents overfitting; 2) a ruminate block leverages historical data without the need to store it; 3) we prove mathematically that, when fine-tuning the latent vectors and reconstructed data, linear formations achieve the smallest adjusting errors between the ground truths and the fine-tuned values. Moreover, our experiments verify that retraining LARA with as few as 43 time slots of data from the new distribution yields an F1 score competitive with state-of-the-art anomaly detection models trained on sufficient data. We also verify its light retraining overhead.
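To illustrate the flavour of contribution 3), here is a hypothetical sketch of an affine least-squares correction: restricting the retraining adjustment to a linear map makes the objective convex with a closed-form optimum, so a handful of new-distribution samples suffice. The function, shapes, and setup are illustrative assumptions, not LARA's actual formulation.

```python
import numpy as np

def linear_finetune(Z, Y):
    """Fit an affine correction Y ~ Z @ W + b by least squares.
    Z: (n x k) outputs of the stale model (e.g. latent vectors or
    reconstructions); Y: (n x k) targets from the new distribution.
    Hypothetical illustration only, not LARA's exact objective."""
    Z1 = np.hstack([Z, np.ones((len(Z), 1))])      # append a bias column
    coef, *_ = np.linalg.lstsq(Z1, Y, rcond=None)  # convex, closed form
    return coef[:-1], coef[-1]                     # W, b

# Usage: W, b = linear_finetune(Z, Y); adjusted = Z_new @ W + b
```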

Explainable AI methods facilitate the understanding of model behaviour, yet small, imperceptible perturbations to inputs can vastly distort explanations. Since these explanations are typically evaluated holistically, before model deployment, it is difficult to assess when a particular explanation is trustworthy. Some studies have tried to create confidence estimators for explanations, but none have investigated the link between uncertainty and explanation quality. We artificially simulate epistemic uncertainty in text input by introducing noise at inference time. In this large-scale empirical study, we insert different levels of noise perturbation and measure the effect on the output of pre-trained language models and on several uncertainty metrics. Realistic perturbations have minimal effect on performance and explanations, whereas masking has a drastic effect. We find that high uncertainty does not necessarily imply low explanation plausibility; the correlation between the two metrics can be moderately positive when noise is exposed during the training process. This suggests that noise-augmented models may be better at identifying salient tokens when uncertain. Furthermore, when predictive and epistemic uncertainty measures are over-confident, the robustness of a saliency map to perturbation can indicate model stability issues. Integrated Gradients shows the greatest overall robustness to perturbation while still showing model-specific patterns in performance; however, this phenomenon is limited to smaller Transformer-based language models.
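A rough sketch of the kind of perturbation setup described above: Gaussian noise is added to input embeddings at inference time, and the entropy of the averaged predictions serves as the uncertainty score. Here `predict_fn` is a hypothetical stand-in for a pre-trained model's forward pass; the study's exact noise types and uncertainty metrics may differ.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a predictive distribution (higher = more uncertain)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def uncertainty_under_noise(predict_fn, embeddings, sigma=0.1,
                            n_samples=20, seed=0):
    """Perturb input embeddings with Gaussian noise at inference time
    and score uncertainty by the entropy of the mean prediction.
    predict_fn: hypothetical callable mapping an embedding batch to
    class probabilities; sigma controls the perturbation level."""
    rng = np.random.default_rng(seed)
    preds = [predict_fn(embeddings + rng.normal(0.0, sigma, embeddings.shape))
             for _ in range(n_samples)]
    return predictive_entropy(np.mean(preds, axis=0))
```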

In [Incer Romeo, I. X., \textit{The Algebra of Contracts}. Ph.D. Thesis, UC Berkeley (2022)], an algebraic perspective on assume-guarantee contracts is proposed. This proposal relies heavily on a construction involving Boolean algebras. However, the structures thus proposed lack a clearly prescribed set of basic operations, which is necessary if we want to view them as a class of algebras (in the sense of Universal Algebra). In this article, by prescribing a suitable set of basic operations on contracts, we manage to describe these algebras as (a generating set of members of) well-known varieties.

Given a finite, simple, connected graph $G=(V,E)$ with $|V|=n$, we consider the associated graph Laplacian matrix $L = D - A$ with eigenvalues $0 = \lambda_1 < \lambda_2 \leq \dots \leq \lambda_n$. One can also consider the same graph equipped with positive edge weights $w:E \rightarrow \mathbb{R}_{> 0}$ normalized to $\sum_{e \in E} w_e = |E|$ and the associated weighted Laplacian matrix $L_w$. We say that $G$ is conformally rigid if constant edge weights maximize the second eigenvalue $\lambda_2(w)$ of $L_w$ over all $w$ and minimize $\lambda_n(w')$ of $L_{w'}$ over all $w'$, i.e., for all $w,w'$, $$ \lambda_2(w) \leq \lambda_2(1) \leq \lambda_n(1) \leq \lambda_n(w').$$ Conformal rigidity requires an extraordinary amount of symmetry in $G$. Every edge-transitive graph is conformally rigid. We prove that every distance-regular graph, and hence every strongly regular graph, is conformally rigid. Certain special graph embeddings can be used to characterize conformal rigidity. Cayley graphs can be conformally rigid but need not be; we prove a sufficient criterion. We also find a small set of conformally rigid graphs that do not belong to any of the above categories, including the Hoffman graph, the crossing number graph 6B, and others. Conformal rigidity can be certified via semidefinite programming; we provide explicit examples.
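The defining inequality can be checked numerically for any candidate graph. Below is a small sketch (assuming numpy and networkx) that samples random normalized edge weights on the Petersen graph, which is edge-transitive and hence conformally rigid:

```python
import numpy as np
import networkx as nx

def extreme_eigenvalues(G, w):
    """lambda_2 and lambda_n of the weighted Laplacian L_w, with the
    edge weights w rescaled so that they sum to |E|."""
    edges = list(G.edges())
    w = np.asarray(w, dtype=float) * len(edges) / np.sum(w)
    idx = {v: i for i, v in enumerate(G.nodes())}
    L = np.zeros((G.number_of_nodes(),) * 2)
    for (u, v), we in zip(edges, w):
        i, j = idx[u], idx[v]
        L[i, i] += we; L[j, j] += we
        L[i, j] -= we; L[j, i] -= we
    eig = np.sort(np.linalg.eigvalsh(L))
    return eig[1], eig[-1]

G = nx.petersen_graph()
l2_1, ln_1 = extreme_eigenvalues(G, np.ones(G.number_of_edges()))
rng = np.random.default_rng(1)
for _ in range(100):
    l2, ln = extreme_eigenvalues(G, rng.uniform(0.5, 1.5, G.number_of_edges()))
    assert l2 <= l2_1 + 1e-9 and ln >= ln_1 - 1e-9  # conformal rigidity
```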

We present a minimal phase oscillator model for learning quadrupedal locomotion. Each of the four oscillators is coupled only to itself and to its corresponding leg through local feedback of the ground reaction force, which can be interpreted as an observer feedback gain. We interpret the oscillator itself as a latent contact-state estimator. Through a systematic ablation study, we show that the combination of phase observations, simple phase-based rewards, and the local feedback dynamics induces policies with emergent gait preferences, while using a reduced set of simple rewards and without prescribing a specific gait. The code is open-source, and a video synopsis is available at //youtu.be/1NKQ0rSV3jU.
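A hypothetical minimal form of such an oscillator update, with each leg's ground reaction force entering as a purely local feedback term; the exact coupling used in the paper may differ.

```python
import numpy as np

def step_oscillators(phase, grf, omega=2 * np.pi, gain=1.0, dt=0.005):
    """One Euler step of four decoupled phase oscillators, each driven
    only by its own leg's ground reaction force. The feedback term
    advances or retards the phase depending on measured contact,
    acting like an observer correction toward the latent contact
    state. Hypothetical form; phase and grf have shape (4,)."""
    phase_dot = omega + gain * grf * np.cos(phase)  # local GRF feedback
    return (phase + dt * phase_dot) % (2 * np.pi)
```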

\texttt{Mixture-Models} is an open-source Python library for fitting Gaussian Mixture Models (GMMs) and their variants, such as parsimonious GMMs, mixtures of factor analyzers, MClust models, and mixtures of Student's t distributions. It streamlines the implementation and analysis of these models using various first- and second-order optimization routines, such as gradient descent and Newton-CG, through automatic differentiation (AD) tools. This helps extend these models to high-dimensional data, a capability that is the first of its kind among Python libraries. The library provides user-friendly model evaluation tools, such as BIC, AIC, and log-likelihood estimation. The source code is licensed under the MIT license and can be accessed at \url{//github.com/kasakh/Mixture-Models}. The package is highly extensible, allowing users to incorporate new distributions and optimization techniques with ease. We conduct a large-scale simulation comparing the performance of various gradient-based approaches against Expectation Maximization over a wide range of settings and identify the best-suited approach in each.
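As a generic illustration of the gradient-based alternative to EM that such AD-driven libraries enable, here is a self-contained sketch that fits a spherical GMM by maximizing the log-likelihood with a general-purpose optimizer. It does not use the \texttt{Mixture-Models} API, and it substitutes numerical differentiation for AD; constraints are handled by reparameterization (softmax weights, log-variances).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def fit_gmm_gradient(X, k, seed=0):
    """Fit a spherical k-component GMM to X (n x d) by minimizing the
    negative log-likelihood over unconstrained parameters."""
    n, d = X.shape
    rng = np.random.default_rng(seed)

    def unpack(theta):
        logits, means = theta[:k], theta[k:k + k * d].reshape(k, d)
        weights = np.exp(logits - logits.max())   # softmax for mixture weights
        return weights / weights.sum(), means, np.exp(theta[k + k * d:])

    def neg_log_lik(theta):
        weights, means, variances = unpack(theta)
        dens = np.column_stack([
            w * multivariate_normal.pdf(X, mean=mu, cov=v * np.eye(d))
            for w, mu, v in zip(weights, means, variances)])
        return -np.sum(np.log(dens.sum(axis=1) + 1e-300))

    # Initialize means at k random data points; uniform weights, unit variances.
    theta0 = np.concatenate([np.zeros(k),
                             X[rng.choice(n, k, replace=False)].ravel(),
                             np.zeros(k)])
    res = minimize(neg_log_lik, theta0, method="L-BFGS-B")
    return unpack(res.x)  # (weights, means, variances)
```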
