
The Exact Circular Pattern Matching (ECPM) problem consists of reporting every occurrence of a rotation of a pattern $P$ in a text $T$. In many real-world applications, particularly in computational biology, circular rotations are of interest because of their prominence in viral DNA. Thus, given no restrictions on pre-processing time, how quickly all such circular rotation occurrences can be reported is of interest to many areas of study. We present, to the best of our knowledge, a novel approach to the ECPM problem, together with four data structures that accompany this approach, each with its own time-space trade-off, as well as experimental results to determine the most computationally feasible data structure.
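As a point of reference, the baseline below illustrates the problem itself rather than the data structures studied in the paper: it uses the standard observation that a window $W$ is a rotation of $P$ exactly when $|W| = |P|$ and $W$ occurs as a substring of $P + P$. A minimal Python sketch:

```python
def circular_occurrences(pattern: str, text: str):
    """Report all positions in `text` where some rotation of `pattern` occurs.

    Uses the classic fact that W is a rotation of P iff len(W) == len(P)
    and W is a substring of P + P. This is a simple quadratic baseline,
    not the data-structure-based approach described above.
    """
    m = len(pattern)
    doubled = pattern + pattern  # contains every rotation of `pattern`
    return [i for i in range(len(text) - m + 1) if text[i:i + m] in doubled]

# Example: the rotations of "abc" are "abc", "bca", "cab".
print(circular_occurrences("abc", "xbcaycabz"))  # [1, 5]
```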


We present a robust framework to perform linear regression with missing entries in the features. By considering an elliptical data distribution, and specifically a multivariate normal model, we are able to conditionally formulate a distribution for the missing entries and present a robust framework that minimizes the worst-case error caused by the uncertainty about the missing data. We show that the proposed formulation, which naturally takes into account the dependency between different variables, ultimately reduces to a convex program, for which a customized and scalable solver can be devised. In addition to a detailed analysis to derive such a solver, we also asymptotically analyze the behavior of the proposed framework and present technical discussions on estimating the required input parameters. We complement our analysis with experiments performed on synthetic, semi-synthetic, and real data, and show how the proposed formulation improves prediction accuracy and robustness and outperforms competing techniques.
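To make the Gaussian-conditioning step concrete, the sketch below fills missing feature entries with their conditional means under a multivariate normal model and then fits ordinary least squares. This is only the conditioning building block mentioned in the abstract, under assumed known $(\mu, \Sigma)$; the paper's robust worst-case formulation is a separate convex program layered on top of this uncertainty model.

```python
import numpy as np

def conditional_mean_impute(X, mu, Sigma):
    """Fill missing entries (NaN) of each row of X with the conditional mean
    under a multivariate normal N(mu, Sigma):
    x_M | x_O ~ N(mu_M + S_MO S_OO^{-1} (x_O - mu_O), ...)."""
    X = X.copy()
    for x in X:
        miss = np.isnan(x)
        if miss.any() and not miss.all():
            obs = ~miss
            S_oo = Sigma[np.ix_(obs, obs)]
            S_mo = Sigma[np.ix_(miss, obs)]
            x[miss] = mu[miss] + S_mo @ np.linalg.solve(S_oo, x[obs] - mu[obs])
        elif miss.all():
            x[miss] = mu[miss]
    return X

# Usage: impute, then fit ordinary least squares on the completed features.
rng = np.random.default_rng(0)
mu, Sigma = np.zeros(3), np.eye(3) + 0.5
X = rng.multivariate_normal(mu, Sigma, size=100)
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(100)
X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.2] = np.nan  # 20% of entries missing
X_hat = conditional_mean_impute(X_miss, mu, Sigma)
print(np.linalg.lstsq(X_hat, y, rcond=None)[0])
```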

Data-adaptive (machine-learning-based) effect estimators are increasingly popular for reducing bias in high-dimensional bioinformatic and clinical studies (e.g., real-world data, target trials, -omic discovery). Their relative statistical efficiency (high power) is particularly valuable in these contexts, since sample sizes are often limited by practical and cost concerns. However, these methods are subject to technical limitations that are dataset specific and involve computational trade-offs. It is therefore challenging for analysts to identify when such methods may offer benefits, or to select among statistical methods. We present extensive simulation studies of several cutting-edge estimators, evaluating both performance and computation time. Critically, rather than using arbitrary simulation data, we generate synthetic datasets mimicking the observed data structure (plasmode simulation) of a real molecular epidemiologic cohort. We find that machine learning approaches may not always be indicated in such data settings, and that performance is highly context dependent. We present a user-friendly Shiny app, REFINE2 (Realistic Evaluations of Finite sample INference using Efficient Estimators), that enables analysts to simulate synthetic data from their own datasets and directly evaluate the performance of several cutting-edge algorithms in those settings. This tool may greatly facilitate the proper selection and implementation of machine-learning-based effect estimators in bioinformatic and clinical study contexts.
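For readers unfamiliar with plasmode simulation, the sketch below shows one generic recipe: resample the real covariate and treatment rows so their joint structure is preserved, then simulate the outcome from a model fitted to the real data with a known injected treatment effect, so the true estimand is known. This mirrors the general plasmode idea, not REFINE2's exact procedure; all column names and parameter values are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def plasmode_replicate(df, covariates, treatment, outcome, true_effect, rng):
    """One generic plasmode replicate: resampled real covariates/treatment,
    simulated binary outcome with a known log-odds treatment effect."""
    boot = df.sample(n=len(df), replace=True,
                     random_state=int(rng.integers(1 << 31)))
    # Fit the outcome model on the *real* data; the effect is injected below.
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[outcome])
    logit = (boot[covariates].to_numpy() @ model.coef_[0] + model.intercept_[0]
             + true_effect * boot[treatment].to_numpy())
    boot[outcome] = (rng.random(len(boot)) < 1 / (1 + np.exp(-logit))).astype(int)
    return boot

# Hypothetical cohort with two covariates, a binary treatment, and an event.
rng = np.random.default_rng(0)
df = pd.DataFrame({"age": rng.normal(50, 10, 500),
                   "biomarker": rng.normal(0, 1, 500),
                   "treated": rng.integers(0, 2, 500)})
df["event"] = (rng.random(500) < 0.3).astype(int)
sim = plasmode_replicate(df, ["age", "biomarker"], "treated", "event",
                         true_effect=np.log(2.0), rng=rng)
print(sim["event"].mean())
```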

Wireless sensor networks are among the most promising technologies of the current era because of their small size, low cost, and ease of deployment. With the increasing number of wireless sensors, the probability of generating missing data also rises. Such incomplete data could lead to disastrous consequences if used for decision making. There is a rich literature dealing with this problem; however, most approaches show performance degradation when a sizable amount of data is lost. Inspired by the emerging field of graph signal processing, this paper presents a new study of a Sobolev reconstruction algorithm in wireless sensor networks. Experimental comparisons on several publicly available datasets demonstrate that the algorithm surpasses multiple state-of-the-art techniques by margins of up to 54%. We further show that this algorithm consistently recovers the missing data even under massive data loss.
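A minimal sketch of Sobolev-regularized graph signal reconstruction is given below: the signal is recovered by minimizing a data-fit term at the observed nodes plus a Sobolev smoothness term $x^\top (L + \epsilon I)^\beta x$, which has a closed-form solution. The network graph, $\lambda$, $\epsilon$, and $\beta$ here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def sobolev_reconstruct(L, y, observed, lam=1e-2, eps=0.1, beta=1):
    """Reconstruct a graph signal from samples at `observed` nodes by
    minimizing ||x[observed] - y||^2 + lam * x^T (L + eps*I)^beta x.

    L: (n, n) graph Laplacian; y: values at the observed nodes;
    observed: boolean mask of sampled nodes."""
    n = L.shape[0]
    M = np.diag(observed.astype(float))     # diagonal sampling operator
    S = np.linalg.matrix_power(L + eps * np.eye(n), beta)
    b = np.zeros(n)
    b[observed] = y
    # Normal equations: (M^T M + lam*S) x = M^T y, with M^T M = M here.
    return np.linalg.solve(M + lam * S, b)

# Usage on a 4-node path graph with the two endpoint nodes observed.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A
obs = np.array([True, False, False, True])
print(sobolev_reconstruct(L, np.array([1.0, 4.0]), obs))
```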

Nowadays, the environments of smart systems for Industry 4.0 and the Internet of Things (IoT) are experiencing rapid industrial upgrading. Big data technologies such as decision making, event detection, and classification are developed to help manufacturing organizations achieve smart systems. By applying data analysis, the potential value of rich data can be maximized, helping manufacturing organizations complete another round of upgrading. In this paper, we propose two new algorithms for big data analysis, namely UFC$_{gen}$ and UFC$_{fast}$. Both algorithms are designed to mine three types of patterns that help determine the market positions of different product combinations (see the sketch below). We compare these algorithms on various types of datasets, both real and synthetic. The experimental results show that both algorithms can successfully achieve pattern classification by extracting three different types of interesting patterns from all candidate patterns, based on user-specified thresholds of utility and frequency. Furthermore, the list-based UFC$_{fast}$ algorithm outperforms the level-wise UFC$_{gen}$ algorithm in terms of both execution time and memory consumption.
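The brute-force sketch below illustrates what classifying patterns by utility and frequency thresholds looks like. The three pattern types are an assumed reading of the abstract (frequent and high-utility, frequent only, high-utility only), and the utility model is simplified to per-item external utilities; the UFC algorithms themselves rely on level-wise or list-based pruning rather than full enumeration.

```python
from itertools import combinations

def classify_patterns(transactions, utilities, min_freq, min_util, max_len=3):
    """Enumerate candidate itemsets and bucket them by user-specified
    frequency and utility thresholds (illustrative, not the UFC method)."""
    items = sorted({i for t in transactions for i in t})
    buckets = {"frequent_high_utility": [], "frequent_only": [],
               "high_utility_only": []}
    for k in range(1, max_len + 1):
        for cand in combinations(items, k):
            support = sum(1 for t in transactions if set(cand) <= t)
            # Simplified utility: per-item utility summed over supporting rows.
            util = support * sum(utilities[i] for i in cand)
            freq_ok, util_ok = support >= min_freq, util >= min_util
            if freq_ok and util_ok:
                buckets["frequent_high_utility"].append(cand)
            elif freq_ok:
                buckets["frequent_only"].append(cand)
            elif util_ok:
                buckets["high_utility_only"].append(cand)
    return buckets

tx = [{"a", "b"}, {"a", "b", "c"}, {"b", "c"}, {"a"}]
print(classify_patterns(tx, {"a": 2, "b": 1, "c": 5}, min_freq=2, min_util=10))
```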

Virtual telepresence is the future of online communication. Clothing is an essential part of a person's identity and self-expression. Yet, ground-truth data of registered clothes is currently unavailable at the resolution and accuracy required for training telepresence models for realistic cloth animation. Here, we propose an end-to-end pipeline for building drivable representations of clothing. The core of our approach is a multi-view patterned cloth tracking algorithm capable of capturing deformations with high accuracy. We further rely on the high-quality data produced by our tracking method to build a Garment Avatar: an expressive and fully drivable geometry model for a piece of clothing. The resulting model can be animated using a sparse set of views and produces highly realistic reconstructions that are faithful to the driving signals. We demonstrate the efficacy of our pipeline on a realistic virtual telepresence application, where a garment is reconstructed from two views and a user can pick and swap garment designs as they wish. In addition, we show that in a challenging scenario, when driven exclusively by body pose, our drivable garment avatar is capable of producing realistic cloth geometry of significantly higher quality than the state of the art.

Automated machine learning (AutoML) frameworks have become important tools in the data scientist's arsenal, as they dramatically reduce the manual work devoted to the construction of ML pipelines. Such frameworks intelligently search among millions of possible ML pipelines - typically containing feature engineering, model selection, and hyperparameter tuning steps - and finally output an optimal pipeline in terms of predictive accuracy. However, when the dataset is large, each individual configuration takes longer to execute, and therefore the overall AutoML running time becomes increasingly high. To this end, we present SubStrat, an AutoML optimization strategy that tackles the data size rather than the configuration space. It wraps existing AutoML tools, and instead of executing them directly on the entire dataset, SubStrat uses a genetic algorithm to find a small yet representative data subset that preserves a particular characteristic of the full data. It then employs the AutoML tool on the small subset and, finally, refines the resulting pipeline by executing a restricted, much shorter AutoML process on the large dataset. Our experimental results, obtained on two popular AutoML frameworks, Auto-Sklearn and TPOT, show that SubStrat reduces their running times by 79% (on average), with less than a 2% average loss in the accuracy of the resulting ML pipeline.
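The sketch below shows what a genetic search for a representative subset can look like. The preserved characteristic here is the per-column histogram entropy, which is an illustrative choice on our part and not necessarily SubStrat's exact measure; the mutation-only loop is likewise a minimal stand-in for a full genetic algorithm with crossover and elitism.

```python
import numpy as np

def column_entropy(X, bins=10):
    """Per-column histogram entropy: the data characteristic the subset
    is asked to preserve (an illustrative choice)."""
    out = []
    for col in X.T:
        p, _ = np.histogram(col, bins=bins)
        p = p[p > 0] / p.sum()
        out.append(-(p * np.log(p)).sum())
    return np.array(out)

def genetic_subset(X, k, pop_size=30, gens=50, seed=0):
    """Minimal genetic search for k rows whose column-entropy profile is
    closest to the full dataset's (higher fitness = smaller gap)."""
    rng = np.random.default_rng(seed)
    n, target = len(X), column_entropy(X)
    fitness = lambda idx: -np.abs(column_entropy(X[idx]) - target).sum()
    pop = [rng.choice(n, k, replace=False) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        kids = []
        for s in parents:                 # mutation-only loop for brevity
            child = s.copy()
            child[rng.integers(k)] = rng.choice(np.setdiff1d(np.arange(n), s))
            kids.append(child)
        pop = parents + kids
    return max(pop, key=fitness)

# Usage: run the AutoML tool on X[idx] instead of X, then refine on full X.
X = np.random.default_rng(1).normal(size=(1000, 5))
idx = genetic_subset(X, k=100)
```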

We revisit the classic regular expression matching problem: given a regular expression $R$ and a string $Q$, decide if $Q$ matches any of the strings specified by $R$. A standard textbook solution [Thompson, CACM 1968] solves this problem in $O(nm)$ time, where $n$ is the length of $Q$ and $m$ is the number of characters in $R$. More recently, several results that improve this bound by polylogarithmic factors have appeared. All of these solutions are essentially based on constructing and simulating a non-deterministic finite automaton. On the other hand, assuming the strong exponential time hypothesis, regular expression matching cannot be solved in $O((nm)^{1-\epsilon})$ time [Backurs and Indyk, FOCS 2016]. Hence, a natural question is whether we can design algorithms that take advantage of other parameters of the problem to obtain more fine-grained bounds. We present the first algorithm for regular expression matching that can take advantage of sparsity in the automaton simulation. More precisely, we define the \emph{density}, $\Delta$, of the instance to be the total number of states in a simulation of a natural automaton for $R$. The density is always at most $nm+1$ but may be significantly smaller in many typical scenarios, e.g., when a string only matches a small part of the regular expression. Our main result is a new algorithm that solves the problem in $$O\left(\Delta \log \log \frac{nm}{\Delta} + n + m\right)$$ time. This result essentially replaces $nm$ with $\Delta$ in the complexity of regular expression matching. Prior to this work, no non-trivial bound in terms of $\Delta$ was known. The key technical contribution is a new linear-space representation of the classic position automaton that supports fast state-set transition computation in near-linear time in the size of the input and output state sets.
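The toy simulation below makes the density parameter tangible: per-character work is proportional to the number of currently active states, so the total work tracks $\Delta$ rather than $nm$. The NFA is hand-built with epsilon moves pre-eliminated for brevity; the paper's contribution is a representation making each transition near-linear in the sizes of the input and output state sets, which plain Python sets do not guarantee.

```python
def simulate_nfa(transitions, start, accept, text):
    """Toy state-set simulation of an NFA given as
    transitions[(state, char)] -> set of successor states.
    Returns (accepted?, density), where density counts the total number
    of active states summed over all steps of the simulation."""
    active = {start}
    density = len(active)
    for ch in text:
        active = {t for s in active for t in transitions.get((s, ch), ())}
        density += len(active)
    return (not accept.isdisjoint(active)), density

# NFA for the regex (a|b)*abb over {a, b} (a standard textbook example).
trans = {
    (0, "a"): {0, 1}, (0, "b"): {0},
    (1, "b"): {2},
    (2, "b"): {3},
}
print(simulate_nfa(trans, 0, {3}, "ababb"))  # (True, 11)
```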

Current guidelines from the World Health Organization indicate that the SARS-CoV-2 coronavirus, which causes the novel coronavirus disease (COVID-19), is transmitted through respiratory droplets or by contact. Contact transmission occurs when contaminated hands touch the mucous membranes of the mouth, nose, or eyes, so hand hygiene is extremely important to prevent the spread of SARS-CoV-2 as well as of other pathogens. The vast proliferation of wearable devices such as smartwatches, containing acceleration, rotation, and magnetic-field sensors, together with modern artificial intelligence technologies such as machine learning and, more recently, deep learning, allows the development of accurate applications for the recognition and classification of human activities such as walking, climbing stairs, running, clapping, sitting, and sleeping. In this work, we evaluate the feasibility of a machine-learning-based system which, starting from inertial signals collected from wearable devices such as current smartwatches, recognizes when a subject is washing or rubbing their hands. Preliminary results, obtained on two different datasets, show a classification accuracy of about 95% for deep learning techniques and about 94% for standard machine learning techniques.
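A minimal sketch of the standard-learning side of such a system is shown below: slice a tri-axial inertial stream into fixed-size windows, extract simple per-axis statistics, and train a classifier. The window length, feature set, classifier, and synthetic stand-in data are all illustrative assumptions, not the configuration or data from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, win=100, step=50):
    """Slice a (T, 3) tri-axial inertial signal into windows and compute
    simple per-axis statistics (mean, std, mean absolute difference)."""
    feats = []
    for i in range(0, len(signal) - win + 1, step):
        w = signal[i:i + win]
        feats.append(np.concatenate([w.mean(0), w.std(0),
                                     np.abs(np.diff(w, axis=0)).mean(0)]))
    return np.array(feats)

# Synthetic stand-ins for labeled accelerometer recordings
# (hand-washing vs. other activity).
rng = np.random.default_rng(0)
washing = rng.normal(0, 1.5, (2000, 3))
other = rng.normal(0, 0.5, (2000, 3))
X = np.vstack([window_features(washing), window_features(other)])
y = np.array([1] * (len(X) // 2) + [0] * (len(X) - len(X) // 2))
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))
```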

Multi-label text classification refers to the problem of assigning each given document the most relevant labels from a label set. Commonly, the metadata of the given documents and the hierarchy of the labels are available in real-world applications. However, most existing studies focus only on modeling the text information, with a few attempts to utilize either metadata or hierarchy signals, but not both. In this paper, we bridge the gap by formalizing the problem of metadata-aware text classification in a large label hierarchy (e.g., with tens of thousands of labels). To address this problem, we present the MATCH solution -- an end-to-end framework that leverages both metadata and hierarchy information. To incorporate metadata, we pre-train the embeddings of text and metadata in the same space and also leverage fully connected attention to capture the interrelations between them. To leverage the label hierarchy, we propose different ways to regularize the parameters and output probability of each child label by its parents. Extensive experiments on two massive text datasets with large-scale label hierarchies demonstrate the effectiveness of MATCH over state-of-the-art deep learning baselines.
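One natural instantiation of regularizing a child label's output probability by its parent's is sketched below: penalize any child whose predicted probability exceeds its parent's, since an instance tagged with a child should plausibly also carry the parent. This generic hinge-style penalty is our illustration, not MATCH's exact loss.

```python
import numpy as np

def hierarchy_violation_penalty(probs, parent):
    """Sum of squared hinge violations max(0, p_child - p_parent)^2 over
    all child labels.  `parent[i]` is label i's parent index (-1 = root).
    An assumed, generic form of output regularization by the hierarchy."""
    penalty = 0.0
    for child, par in enumerate(parent):
        if par >= 0:
            penalty += max(0.0, probs[child] - probs[par]) ** 2
    return penalty

# Toy hierarchy: 0 is the root; 1 and 2 are children of 0; 3 is a child of 1.
parent = [-1, 0, 0, 1]
probs = np.array([0.6, 0.7, 0.2, 0.9])  # labels 1 and 3 violate the hierarchy
print(hierarchy_violation_penalty(probs, parent))  # 0.1^2 + 0.2^2 = 0.05
```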
