
This paper builds a novel bridge between algebraic coding theory and mathematical knot theory, with applications in both directions. We give methods to construct error-correcting codes starting from the colorings of a knot, describing through a series of results how the properties of the knot translate into code parameters. We show that knots can be used to obtain error-correcting codes with prescribed parameters and an efficient decoding algorithm.
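As a concrete illustration of how knot colorings can give rise to linear codes, consider Fox $p$-colorings: at each crossing with over-arc $b$ and under-arcs $a$ and $c$, a coloring must satisfy $a + c - 2b \equiv 0 \pmod p$, so the set of all colorings of a diagram with $n$ arcs is a subspace of $\mathbb{F}_p^n$, i.e. a linear code. The sketch below computes this code for the trefoil over $\mathbb{F}_3$; it only makes the coloring-to-code correspondence concrete and does not reproduce the constructions or decoding algorithm of the paper.

```python
# Minimal sketch: Fox p-colorings of the trefoil as a linear code over GF(p).
# At each crossing (under a, over b, under c) a coloring satisfies
# a + c - 2b = 0 (mod p); the solution space is a linear code in GF(p)^n.

p = 3  # color the trefoil modulo 3

# Trefoil diagram with arcs 0, 1, 2; one coloring equation per crossing.
crossings = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]  # (under a, over b, under c)
n = 3
rows = []
for a, b, c in crossings:
    row = [0] * n
    row[a] += 1
    row[c] += 1
    row[b] -= 2
    rows.append([x % p for x in row])

def nullspace_mod_p(rows, n, p):
    """Basis of the kernel of the coloring matrix over GF(p) (plain Gaussian elimination)."""
    m = [r[:] for r in rows]
    pivots, r = [], 0
    for col in range(n):
        piv = next((i for i in range(r, len(m)) if m[i][col] % p), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        inv = pow(m[r][col], -1, p)
        m[r] = [(x * inv) % p for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][col] % p:
                f = m[i][col]
                m[i] = [(x - f * y) % p for x, y in zip(m[i], m[r])]
        pivots.append(col)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for fc in free:
        v = [0] * n
        v[fc] = 1
        for i, pc in enumerate(pivots):
            v[pc] = (-m[i][fc]) % p
        basis.append(v)
    return basis

basis = nullspace_mod_p(rows, n, p)
print(f"dim = {len(basis)}, so the trefoil has {p ** len(basis)} Fox {p}-colorings")
# Expect dimension 2, i.e. the well-known 9 Fox 3-colorings of the trefoil.
```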

Related Content


We consider the online version of the piercing set problem, where geometric objects arrive one by one, and the online algorithm must maintain a valid piercing set for the objects that have already arrived by making irrevocable decisions. It is easy to observe that any deterministic algorithm solving this problem for intervals in $\mathbb{R}$ has a competitive ratio of at least $\Omega(n)$. This paper considers the piercing set problem for similarly sized objects. We propose a deterministic online algorithm for similarly sized fat objects in $\mathbb{R}^d$. For homothetic hypercubes in $\mathbb{R}^d$ with side length in the range $[1,k]$, we propose a deterministic algorithm with a competitive ratio of at most $3^d\lceil\log_2 k\rceil+2^d$. Finally, we show deterministic lower bounds on the competitive ratio for similarly sized $\alpha$-fat objects in $\mathbb{R}^2$ and homothetic hypercubes in $\mathbb{R}^d$. Note that piercing translated copies of a convex object is equivalent to the unit covering problem, which is well-studied in the online setup. Surprisingly, no upper bound on the competitive ratio was known for the unit covering problem when the corresponding object is anything other than a ball or a hypercube. Our result yields an upper bound on the competitive ratio for the unit covering problem when the corresponding object is any convex object in $\mathbb{R}^d$.
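The $\Omega(n)$ observation for intervals comes from a nesting adversary: whatever point a deterministic algorithm commits to, the adversary can present a sub-interval that avoids every point placed so far while all intervals shown still share a common point, so the offline optimum stays at 1. The toy sketch below plays this adversary against a simple midpoint-greedy strategy; the greedy is only a stand-in for an arbitrary deterministic online algorithm and is not the algorithm proposed in the paper.

```python
# Toy demonstration of the Omega(n) lower bound for online piercing of
# intervals in R.  The adversary keeps nesting intervals toward the right
# endpoint hi, so every presented interval contains hi (offline OPT = 1),
# yet each interval avoids all points the online algorithm has placed.

def online_greedy(piercing, interval):
    """Stand-in deterministic algorithm: irrevocably pierce an unpierced interval at its midpoint."""
    lo, hi = interval
    if not any(lo <= p <= hi for p in piercing):
        piercing.append((lo + hi) / 2.0)

def adversary(n):
    piercing = []
    lo, hi = 0.0, 1.0
    for _ in range(n):
        online_greedy(piercing, (lo, hi))
        p = piercing[-1]          # the point the algorithm just placed
        lo = (p + hi) / 2.0       # next interval avoids every placed point, still contains hi
    return len(piercing)

n = 20
print(f"online places {adversary(n)} points; the offline optimum needs 1")
```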

The classic Yannakakis framework proposed in 1981 is still the state-of-the-art approach for tackling acyclic join-aggregate queries defined over commutative semi-rings. It has been shown that the time complexity of the Yannakakis framework is $O(N + \mathrm{OUT})$ for any free-connex join-aggregate query, where $N$ is the input size of the database and $\mathrm{OUT}$ is the output size of the query result. This is already output-optimal. However, only a general upper bound of $O(N \cdot \mathrm{OUT})$ on the time complexity of the Yannakakis framework is known for the remaining class of acyclic but non-free-connex queries. We first show a lower bound of $\Omega\left(N \cdot \mathrm{OUT}^{1-\frac{1}{w}} + \mathrm{OUT}\right)$ for computing an acyclic join-aggregate query by semi-ring algorithms, where $w$ is identified as the out-width of the input query, $N$ is the input size of the database, and $\mathrm{OUT}$ is the output size of the query result. For example, $w=2$ for the chain matrix multiplication query, and $w=k$ for the star matrix multiplication query with $k$ relations. We give a tighter analysis of the Yannakakis framework and show that it is already output-optimal on the class of aggregate-hierarchical queries. However, for the large remaining class of non-aggregate-hierarchical queries, such as the chain matrix multiplication query, the Yannakakis framework indeed requires $\Theta(N \cdot \mathrm{OUT})$ time. We next explore a hybrid version of the Yannakakis framework and present an output-optimal algorithm that computes any acyclic join-aggregate query in $\tilde{O}\left(N \cdot \mathrm{OUT}^{1-\frac{1}{w}} + \mathrm{OUT}\right)$ time, matching the out-width-dependent lower bound up to a poly-logarithmic factor. To the best of our knowledge, this is the first polynomial improvement for computing acyclic join-aggregate queries since 1981.
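For intuition, the chain matrix multiplication query referred to above is the join-aggregate query $Q(a,c) = \sum_b R(a,b) \cdot S(b,c)$ over the sum-product semiring. The dictionary-based sketch below only illustrates this reading of a join-aggregate query on made-up relation contents; it is not the algorithm analyzed in the paper.

```python
# Illustrative join-aggregate query over the sum-product semiring:
# Q(a, c) = SUM_b R(a, b) * S(b, c)   (chain matrix multiplication).
from collections import defaultdict

# Sparse relations whose tuples are annotated with semiring values.
R = {("a1", "b1"): 2.0, ("a1", "b2"): 3.0, ("a2", "b1"): 1.0}
S = {("b1", "c1"): 4.0, ("b2", "c1"): 5.0, ("b1", "c2"): 6.0}

def chain_join_aggregate(R, S):
    """Group S by the join attribute b, then sum products per output tuple (a, c)."""
    S_by_b = defaultdict(list)
    for (b, c), v in S.items():
        S_by_b[b].append((c, v))
    Q = defaultdict(float)
    for (a, b), u in R.items():
        for c, v in S_by_b[b]:       # only matching b's, no Cartesian blow-up
            Q[(a, c)] += u * v       # semiring "+" and "*"
    return dict(Q)

print(chain_join_aggregate(R, S))
# {('a1', 'c1'): 23.0, ('a1', 'c2'): 12.0, ('a2', 'c1'): 4.0, ('a2', 'c2'): 6.0}
```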

Most modern reinforcement learning algorithms optimize a cumulative single-step cost along a trajectory. The optimized motions are often 'unnatural', representing, for example, behaviors with sudden accelerations that waste energy and lack predictability. In this work, we present a novel paradigm of controlling nonlinear systems via the minimization of the Koopman spectrum cost: a cost over the Koopman operator of the controlled dynamics. This induces a broader class of dynamical behaviors that evolve over stable manifolds such as nonlinear oscillators, closed loops, and smooth movements. We demonstrate that some dynamics characterizations that are not possible with a cumulative cost are feasible in this paradigm, which generalizes the classical eigenstructure and pole assignments to nonlinear decision making. Moreover, we present a sample efficient online learning algorithm for our problem that enjoys a sub-linear regret bound under some structural assumptions.
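A minimal sketch of one plausible ingredient of a "cost over the Koopman operator": fit a finite-dimensional Koopman approximation $K$ from trajectory snapshots by least squares (plain dynamic mode decomposition on the state observables) and score its spectrum, here by penalizing eigenvalues outside the unit circle. The specific cost, the observables, and the toy dynamics below are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch: fit a Koopman approximation by least squares (DMD) and
# evaluate a simple spectrum-based cost on it.
import numpy as np

def fit_koopman(X, Y):
    """Least-squares K with Y ~ K @ X; columns of X, Y are snapshots x_t and x_{t+1}."""
    return Y @ np.linalg.pinv(X)

def spectrum_cost(K):
    """Example spectral cost: how far the eigenvalues stick out of the unit circle."""
    eig = np.linalg.eigvals(K)
    return float(np.sum(np.maximum(np.abs(eig) - 1.0, 0.0)))

# Toy trajectory from a damped rotation (a stable oscillator-like system).
rng = np.random.default_rng(0)
A = 0.98 * np.array([[np.cos(0.3), -np.sin(0.3)], [np.sin(0.3), np.cos(0.3)]])
x = rng.normal(size=2)
traj = [x]
for _ in range(200):
    x = A @ x + 0.01 * rng.normal(size=2)
    traj.append(x)
traj = np.array(traj).T                      # shape (2, T+1)

K = fit_koopman(traj[:, :-1], traj[:, 1:])
print("spectrum cost:", spectrum_cost(K))    # ~0 for this stable system
```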

We introduce NCP (Neural Conditional Probability), a novel operator-theoretic approach for learning conditional distributions with a particular focus on inference tasks. NCP can be used to build conditional confidence regions and extract important statistics such as conditional quantiles, means, and covariances. It offers streamlined learning through a single unconditional training phase, facilitating efficient inference without the need for retraining even when the conditioning changes. By tapping into the powerful approximation capabilities of neural networks, our method efficiently handles a wide variety of complex probability distributions, effectively dealing with nonlinear relationships between input and output variables. Theoretical guarantees ensure both optimization consistency and statistical accuracy of the NCP method. Our experiments show that our approach matches or beats leading methods using a simple Multi-Layer Perceptron (MLP) with two hidden layers and GELU activations. This demonstrates that a minimalistic architecture with a theoretically grounded loss function can achieve competitive results, even against more complex architectures.
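For reference, the minimalistic baseline architecture mentioned above, a two-hidden-layer MLP with GELU activations, can be written in a few lines of PyTorch. The hidden width of 64 and the input/output dimensions are arbitrary placeholder choices, and the NCP training objective itself is not reproduced here.

```python
# Two-hidden-layer MLP with GELU activations (architecture only).
import torch
import torch.nn as nn

class TwoLayerGELUMLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = TwoLayerGELUMLP(in_dim=3, out_dim=8)
print(model(torch.randn(5, 3)).shape)   # torch.Size([5, 8])
```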

This paper addresses the need for deep learning models to integrate well-defined constraints into their outputs, driven by their application in surrogate models, learning with limited data and partial information, and scenarios requiring flexible model behavior to incorporate non-data sample information. We introduce Bayesian Entropy Neural Networks (BENN), a framework grounded in Maximum Entropy (MaxEnt) principles, designed to impose constraints on Bayesian Neural Network (BNN) predictions. BENN is capable of constraining not only the predicted values but also their derivatives and variances, ensuring a more robust and reliable model output. To achieve simultaneous uncertainty quantification and constraint satisfaction, we employ the method of multipliers approach. This allows for the concurrent estimation of neural network parameters and the Lagrangian multipliers associated with the constraints. Our experiments, spanning diverse applications such as beam deflection modeling and microstructure generation, demonstrate the effectiveness of BENN. The results highlight significant improvements over traditional BNNs and showcase competitive performance relative to contemporary constrained deep learning methods.
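To make the constraint-handling step concrete, here is the generic method-of-multipliers (augmented Lagrangian) loop on a toy problem: minimize $f(\theta)=\|\theta\|^2$ subject to $g(\theta)=\theta_1+\theta_2-1=0$, alternating gradient steps on the parameters with dual updates on the multiplier. This is only the standard optimization template the abstract refers to; the BNN-specific parts (posterior inference, derivative and variance constraints) are omitted.

```python
# Generic method of multipliers on a toy constrained problem:
# minimize ||theta||^2  subject to  g(theta) = theta[0] + theta[1] - 1 = 0.
import numpy as np

def f_grad(theta):
    return 2.0 * theta                      # gradient of ||theta||^2

def g(theta):
    return theta[0] + theta[1] - 1.0        # equality constraint

def g_grad(theta):
    return np.array([1.0, 1.0])

theta, lam, rho, lr = np.zeros(2), 0.0, 10.0, 0.01
for _ in range(100):                        # outer loop: multiplier updates
    for _ in range(200):                    # inner loop: roughly minimize the augmented Lagrangian
        grad = f_grad(theta) + (lam + rho * g(theta)) * g_grad(theta)
        theta -= lr * grad
    lam += rho * g(theta)                   # dual update on the Lagrange multiplier

print(theta, lam, g(theta))   # theta -> [0.5, 0.5], lam -> -1, constraint violation -> 0
```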

Despite the recent progress in deep learning, most approaches still go for a silo-like solution, focusing on learning each task in isolation: training a separate neural network for each individual task. Many real-world problems, however, call for a multi-modal approach and, therefore, for multi-tasking models. Multi-task learning (MTL) aims to leverage useful information across tasks to improve the generalization capability of a model. This thesis is concerned with multi-task learning in the context of computer vision. First, we review existing approaches for MTL. Next, we propose several methods that tackle important aspects of multi-task learning. The proposed methods are evaluated on various benchmarks. The results show several advances in the state-of-the-art of multi-task learning. Finally, we discuss several possibilities for future work.

Geometric deep learning (GDL), which is based on neural network architectures that incorporate and process symmetry information, has emerged as a recent paradigm in artificial intelligence. GDL bears particular promise in molecular modeling applications, in which various molecular representations with different symmetry properties and levels of abstraction exist. This review provides a structured and harmonized overview of molecular GDL, highlighting its applications in drug discovery, chemical synthesis prediction, and quantum chemistry. Emphasis is placed on the relevance of the learned molecular features and their complementarity to well-established molecular descriptors. This review provides an overview of current challenges and opportunities, and presents a forecast of the future of GDL for molecular sciences.

Non-IID data present a tough challenge for federated learning. In this paper, we explore a novel idea of facilitating pairwise collaborations between clients with similar data. We propose FedAMP, a new method that employs federated attentive message passing to encourage similar clients to collaborate more strongly. We establish the convergence of FedAMP for both convex and non-convex models, and propose a heuristic method to further improve the performance of FedAMP when clients adopt deep neural networks as personalized models. Our extensive experiments on benchmark data sets demonstrate the superior performance of the proposed methods.
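A minimal sketch of the attentive-aggregation idea, assuming flattened client parameter vectors and a Gaussian similarity kernel (both illustrative choices rather than the exact FedAMP update): each client receives a personalized aggregate in which similar clients get larger weights.

```python
# Similarity-weighted ("attentive") aggregation of client models: clients
# with closer parameters collaborate more.  Kernel and mixing weight are
# illustrative, not the exact FedAMP rule.
import numpy as np

def attentive_aggregate(models, temperature=1.0, self_weight=0.5):
    """models: array of shape (num_clients, dim) holding flattened client parameters."""
    m = len(models)
    agg = np.zeros_like(models)
    for i in range(m):
        d2 = np.sum((models - models[i]) ** 2, axis=1)   # squared distances to client i
        a = np.exp(-d2 / temperature)
        a[i] = 0.0                                       # self-contribution handled via self_weight
        a /= a.sum()
        agg[i] = self_weight * models[i] + (1.0 - self_weight) * (a @ models)
    return agg

clients = np.array([[1.0, 0.0], [1.1, 0.1], [5.0, 5.0]])  # two similar clients, one outlier
print(attentive_aggregate(clients))
```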

Clustering is one of the most fundamental and widespread techniques in exploratory data analysis. Yet the basic approach to clustering has not really changed: a practitioner hand-picks a task-specific clustering loss to optimize and fits the given data to reveal the underlying cluster structure. Some losses---such as k-means, its non-linear version kernelized k-means (centroid based), and DBSCAN (density based)---are popular choices due to their good empirical performance on a range of applications. Every so often, however, the clustering output obtained with these standard losses fails to reveal the underlying structure, and the practitioner has to custom-design their own variation. In this work we take an intrinsically different approach to clustering: rather than fitting a dataset to a specific clustering loss, we train a recurrent model that learns how to cluster. The model is trained on pairs of example datasets (as input) and their corresponding cluster identities (as output). By providing multiple types of training datasets as inputs, our model gains the ability to generalize well to unseen datasets (new clustering tasks). Our experiments reveal that, by training on simple synthetically generated datasets or on existing real datasets, we can achieve better clustering performance on unseen real-world datasets than standard benchmark clustering techniques. Our meta clustering model works well even for small datasets, where the usual deep learning models tend to perform poorly.
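As a small illustration of the training data described above, the sketch below generates one synthetic (dataset, cluster identities) pair from Gaussian blobs; a learned clustering model would be trained on many such pairs. The blob generator and its parameters are illustrative assumptions, not the paper's exact data pipeline.

```python
# Generate one (dataset, cluster identities) training pair from Gaussian blobs.
import numpy as np

def make_training_pair(rng, n_clusters=3, points_per_cluster=50, dim=2):
    centers = rng.uniform(-5.0, 5.0, size=(n_clusters, dim))
    X, y = [], []
    for k, c in enumerate(centers):
        X.append(c + rng.normal(scale=0.5, size=(points_per_cluster, dim)))
        y.extend([k] * points_per_cluster)
    return np.vstack(X), np.array(y)        # (dataset, cluster identities)

rng = np.random.default_rng(0)
dataset, labels = make_training_pair(rng)
print(dataset.shape, labels.shape)          # (150, 2) (150,)
```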

Benefiting from the rapid development of deep learning techniques, salient object detection has achieved remarkable progress recently. However, two major challenges still hinder its application in embedded devices: low-resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while keeping accuracy. Secondly, we further propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the current predicted salient regions from side-output features, the network can eventually explore the missing object parts and details, which results in high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, with advantages in terms of simplicity, efficiency (45 FPS), and model size (81 MB).
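A minimal PyTorch sketch of the reverse-attention refinement step described above, assuming a 64-channel side-output feature map and a simple two-convolution residual head (both illustrative choices): the upsampled coarse prediction is inverted into an attention map that erases already-predicted salient regions, and the residual branch refines what remains.

```python
# Reverse-attention refinement sketch: erase predicted regions from the
# side-output features, then learn a residual that recovers missing parts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttentionRefine(nn.Module):
    def __init__(self, side_channels=64):
        super().__init__()
        self.residual_head = nn.Sequential(
            nn.Conv2d(side_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, side_feat, coarse_pred):
        # Upsample the coarse prediction to the side-output resolution.
        coarse_up = F.interpolate(coarse_pred, size=side_feat.shape[2:],
                                  mode="bilinear", align_corners=False)
        reverse_att = 1.0 - torch.sigmoid(coarse_up)           # erase salient regions
        residual = self.residual_head(side_feat * reverse_att)
        return coarse_up + residual                            # refined prediction

refine = ReverseAttentionRefine()
side_feat = torch.randn(1, 64, 56, 56)       # a side-output feature map
coarse_pred = torch.randn(1, 1, 14, 14)      # coarse saliency logits from a deeper layer
print(refine(side_feat, coarse_pred).shape)  # torch.Size([1, 1, 56, 56])
```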
