
Coded caching schemes are used to reduce network traffic at peak times. To quantify the efficiency of such schemes, \cite{MN} defined their information rate and gave a construction of coded caching schemes with optimal rate. However, their construction requires splitting the data into a large number of packets, which may be impractical in real applications. Many researchers have since constructed new coded caching schemes that reduce the number of packets, but at the cost of a higher information rate. We define optimality of coded caching schemes under a constraint on the number of packets, which can be used to assess the efficiency of such schemes. We also give constructions for several infinite classes of coded caching schemes that are optimal under the new definition.
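
For context, the construction in \cite{MN} (recalled here from the literature, not restated from this paper) achieves, for $K$ users, $N$ files, and caches of $M$ files per user with $t = KM/N$ an integer, the following rate $R$ and number of packets $F$ per file:

```latex
% Rate and subpacketization of the scheme of \cite{MN},
% for K users, N files, cache size M, and t = KM/N an integer:
\[
  R(M) \;=\; \frac{K\,(1 - M/N)}{1 + KM/N} \;=\; \frac{K - t}{t + 1},
  \qquad
  F \;=\; \binom{K}{t}.
\]
```

Since $F = \binom{K}{t}$ grows exponentially in $K$ for fixed ratio $M/N$, this is precisely the packet-number blow-up that motivates the constraint introduced above.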

Related content

Modern techniques for physical simulations rely on numerical schemes and mesh-refinement methods to address trade-offs between precision and complexity, but these handcrafted solutions are tedious and require high computational power. Data-driven methods based on large-scale machine learning promise high adaptivity by integrating long-range dependencies more directly and efficiently. In this work, we focus on fluid dynamics and address a shortcoming of a large part of the literature, which relies on a fixed support for computations and predictions in the form of regular or irregular grids. We propose a novel setup to perform predictions in a continuous spatial and temporal domain while being trained on sparse observations. We formulate the task as a double observation problem and propose a solution with two interlinked dynamical systems defined on, respectively, the sparse positions and the continuous domain, which allows us to forecast and interpolate a solution from the initial condition. Our practical implementation involves recurrent GNNs and a spatio-temporal attention observer capable of interpolating the solution at arbitrary locations. Our model not only generalizes to new initial conditions (as standard auto-regressive models do) but also performs evaluation at arbitrary space and time locations. We evaluate on three standard datasets in fluid dynamics and compare to strong baselines, all of which are outperformed both in the classical settings and in the extended new task requiring continuous predictions.
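
As a rough illustration of this architecture, the sketch below (our own plain-PyTorch approximation, not the authors' implementation; all module names, dimensions, and message-passing details are assumptions) pairs a recurrent GNN that advances latent states at the sparse positions with an attention observer that decodes the solution at arbitrary $(x, y, t)$ queries:

```python
# A sketch of the two interlinked systems (illustrative assumptions
# throughout; this is not the authors' code): a recurrent message-passing
# network advances latent states at sparse sensor positions, and a
# spatio-temporal attention observer interpolates the solution at
# arbitrary continuous query points.
import torch
import torch.nn as nn

class SparseDynamics(nn.Module):
    """Recurrent GNN advancing per-node latent states h_t -> h_{t+1}."""
    def __init__(self, dim=64):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, h, edges):
        src, dst = edges  # (E,) index tensors over the sparse nodes
        m = self.msg(torch.cat([h[src], h[dst]], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum incoming messages
        return self.gru(agg, h)

class AttentionObserver(nn.Module):
    """Cross-attention from continuous (x, y, t) queries to sparse latents."""
    def __init__(self, dim=64, out_dim=2):
        super().__init__()
        self.embed = nn.Linear(3, dim)  # query: 2-D position + time
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.dec = nn.Linear(dim, out_dim)

    def forward(self, queries, latents):
        q = self.embed(queries).unsqueeze(0)  # (1, Q, dim)
        kv = latents.unsqueeze(0)             # (1, N, dim)
        out, _ = self.attn(q, kv, kv)
        return self.dec(out.squeeze(0))       # field value at each query

# Rollout: advance the sparse latents, then evaluate anywhere in space-time.
dyn, obs = SparseDynamics(), AttentionObserver()
h = torch.randn(10, 64)  # latent states at 10 sensor positions
edges = (torch.randint(0, 10, (30,)), torch.randint(0, 10, (30,)))
for _ in range(5):
    h = dyn(h, edges)
queries = torch.rand(100, 3)  # arbitrary (x, y, t) locations in [0, 1]^3
u = obs(queries, h)           # interpolated solution values, shape (100, 2)
```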

We extend Petkov\v{s}ek's algorithm for computing hypergeometric solutions of scalar difference equations to the case of difference systems $\tau(Y) = M Y$, with $M \in {\rm GL}_n(C(x))$, where $\tau$ is the shift operator. Hypergeometric solutions are solutions of the form $\gamma P$ where $P \in C(x)^n$ and $\gamma$ is a hypergeometric term over $C(x)$, i.e. ${\tau(\gamma)}/{\gamma} \in C(x)$. Our contributions concern efficient computation of a set of candidates for ${\tau(\gamma)}/{\gamma}$ which we write as $\lambda = c\frac{A}{B}$ with monic $A, B \in C[x]$, $c \in C^*$. Factors of the denominators of $M^{-1}$ and $M$ give candidates for $A$ and $B$, while another algorithm is needed for $c$. We use the super-reduction algorithm to compute candidates for $c$, as well as other ingredients to reduce the list of candidates for $A/B$. To further reduce the number of candidates $A/B$, we bound the so-called type of $A/B$ by bounding local types. Our algorithm has been implemented in Maple and experiments show that our implementation can handle systems of high dimension, which is useful for factoring operators.
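
As a toy illustration of these objects (our own example, not taken from the paper), consider the diagonal system below:

```latex
% Diagonal system with M = diag(x, 2x); tau is the shift x -> x + 1.
\[
  \tau(Y) = \begin{pmatrix} x & 0 \\ 0 & 2x \end{pmatrix} Y
  \quad\text{has hypergeometric solutions}\quad
  Y_1 = \Gamma(x)\begin{pmatrix}1\\0\end{pmatrix}, \qquad
  Y_2 = 2^{x}\,\Gamma(x)\begin{pmatrix}0\\1\end{pmatrix},
\]
\[
  \text{with}\quad
  \frac{\tau(\gamma_1)}{\gamma_1} = x \;\;(c = 1,\ A = x,\ B = 1),
  \qquad
  \frac{\tau(\gamma_2)}{\gamma_2} = 2x \;\;(c = 2,\ A = x,\ B = 1).
\]
```

Here the factor $x$ of the denominators of $M^{-1}$ supplies the candidate $A$, the trivial denominators of $M$ force $B = 1$, and a separate computation is needed to pin down the constant $c \in \{1, 2\}$, mirroring the three ingredients described above.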

Many important computational structures involve an intricate interplay between algebraic features (given by operations on the underlying set) and relational features (taking account of notions such as order or distance). This paper investigates algebras over relational structures axiomatized by an infinitary Horn theory, which subsume, for example, partial algebras, various incarnations of ordered algebras, quantitative algebras introduced by Mardare, Panangaden, and Plotkin, and their recent extension to generalized metric spaces and lifted algebraic signatures by Mio, Sarkis, and Vignudelli. To this end, we develop the notion of clustered equation, which is inspired by Mardare et al.'s basic conditional equations in the theory of quantitative algebras, at the level of generality of arbitrary relational structures, and we prove it to be equivalent to an abstract categorical form of equation earlier introduced by Milius and Urbat. Our main results are a family of Birkhoff-type variety theorems (classifying the expressive power of clustered equations) and an exactness theorem (classifying abstract equations by a congruence property).
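
To make the inspiration concrete, here is a basic conditional equation in the quantitative-algebra syntax of Mardare et al. (recalled from their setting, not from this paper); a clustered equation generalizes this shape from the metric relations $=_\varepsilon$ to arbitrary relational structures:

```latex
% Nonexpansiveness of a unary operation f as a basic conditional equation:
\[
  \{\, x =_{\varepsilon} y \,\} \;\vdash\; f(x) =_{\varepsilon} f(y)
  \qquad \text{for all } \varepsilon \in \mathbb{Q}_{\geq 0},
\]
% read: whenever x and y are at distance at most epsilon,
% so are f(x) and f(y).
```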

Computing schedulers that optimize reachability probabilities in MDPs is a standard verification task. To address scalability concerns, we focus on MDPs that are compositionally described in a high-level description formalism. In particular, this paper considers string diagrams, which specify an algebraic, sequential composition of subMDPs. Towards their compositional verification, the key challenge is to locally optimize schedulers on subMDPs without considering their context in the string diagram. This paper proposes to consider the schedulers in a subMDP that form a Pareto curve over a combination of local objectives. While considering all such schedulers is intractable, this perspective gives rise to a highly efficient and sound approximation algorithm. A prototype on top of the model checker Storm demonstrates the scalability of this approach.
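
A self-contained sketch of the local view on a subMDP (our own toy example; neither the paper's algorithm nor Storm's API): enumerate the memoryless deterministic schedulers of a small subMDP and keep those that are Pareto-optimal with respect to the vector of probabilities of reaching each exit.

```python
# Toy subMDP with two exits; Pareto-optimal local schedulers (illustrative).
import itertools

# state -> {action: [(prob, successor), ...]};
# 'e0' and 'e1' are the exits of the subMDP; 'sink' never reaches an exit.
mdp = {
    's0': {'a': [(0.5, 's1'), (0.5, 'e0')], 'b': [(1.0, 's1')]},
    's1': {'a': [(0.7, 'e1'), (0.3, 'e0')],
           'b': [(0.5, 'e1'), (0.3, 's0'), (0.2, 'sink')]},
}
exits = ['e0', 'e1']

def reach(sched, target, iters=500):
    """P[reach `target` from s0] under a memoryless deterministic scheduler."""
    p = {s: 0.0 for s in mdp}
    p.update({a: 1.0 if a == target else 0.0 for a in exits + ['sink']})
    for _ in range(iters):  # value iteration on the induced Markov chain
        for s in mdp:
            p[s] = sum(pr * p[t] for pr, t in mdp[s][sched[s]])
    return p['s0']

states = list(mdp)
points = [(tuple(reach(dict(zip(states, choice)), e) for e in exits),
           dict(zip(states, choice)))
          for choice in itertools.product(*(mdp[s] for s in states))]

# Keep only Pareto-optimal schedulers w.r.t. the exit-probability vectors.
pareto = [(v, s) for v, s in points
          if not any(all(w[i] >= v[i] for i in range(len(v))) and w != v
                     for w, _ in points)]
for v, s in pareto:
    print(s, [round(x, 3) for x in v])
```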

Perhaps the most widely used form of code documentation is the function header comment. We performed a large-scale survey of 367 developers to catalog their expectations from such documentation and to chronicle actual practice. Paradoxically, we found that developers appreciate the value of header comments and estimate that they are worth the investment in time, yet they tend not to write such documentation in their own code. Reasons for not writing header comments range from the belief that code should be self-documenting to the concern that documentation will not be kept up-to-date. A possible outcome of this situation is that developers may evade requirements to write documentation by using templates to generate worthless comments that do not provide any real information. We define a simple metric for information-less documentation based on its similarity to the function signature. Applying this metric to 21,140 files in GitHub Python projects shows that most functions are undocumented, but when header comments are written they typically do contain additional information beyond the function signature.
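
A minimal sketch of such a metric (an assumption-laden illustration, not the paper's exact definition): score a docstring by the fraction of its words that do not already appear in the function's name and parameter names.

```python
# A sketch of an information-less-documentation metric (illustrative, not
# the paper's exact definition): the fraction of docstring words that do
# not already appear in the function's name and parameter names.
import ast

def signature_words(func):
    """Words occurring in the function's name and parameter names."""
    words = set()
    for name in [func.name] + [a.arg for a in func.args.args]:
        words.update(part.lower() for part in name.split('_') if part)
    return words

def docstring_informativeness(source):
    """Yield (function name, fraction of docstring words not in signature)."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node)
            if doc is None:
                yield node.name, None  # undocumented function
                continue
            doc_words = {w.strip('.,:()').lower() for w in doc.split()} - {''}
            new = doc_words - signature_words(node)
            yield node.name, len(new) / max(len(doc_words), 1)

code = '''
def get_user_name(user):
    """Get user name."""
    return user.name
'''
# Score 0.0: the comment adds no information beyond the signature.
print(list(docstring_informativeness(code)))
```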

We introduce the calculus of neo-Peircean relations, a string diagrammatic extension of the calculus of binary relations that has the same expressivity as first order logic and comes with a complete axiomatisation. The axioms are obtained by combining two well known categorical structures: cartesian and linear bicategories.

We describe the new field of mathematical analysis of deep learning. This field emerged around a list of research questions that were not answered within the classical framework of learning theory. These questions concern: the outstanding generalization power of overparametrized neural networks, the role of depth in deep architectures, the apparent absence of the curse of dimensionality, the surprisingly successful optimization performance despite the non-convexity of the problem, understanding which features are learned, why deep architectures perform exceptionally well in physical problems, and which fine-grained aspects of an architecture affect the behavior of a learning task, and in what way. We present an overview of modern approaches that yield partial answers to these questions. For selected approaches, we describe the main ideas in more detail.

It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
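
A minimal sketch of the OE training objective for a classifier (consistent with the paper's formulation, though the weight $\lambda$ and all names here are illustrative): the usual cross-entropy on in-distribution data plus a term pushing the model's predictions on auxiliary outliers toward the uniform distribution.

```python
# A sketch of the Outlier Exposure objective for a classifier; lam and the
# helper names are illustrative, but the shape of the loss follows the
# formulation described above.
import torch
import torch.nn.functional as F

def oe_loss(model, x_in, y_in, x_out, lam=0.5):
    """Cross-entropy on in-distribution data plus a term pushing
    predictions on auxiliary outliers toward the uniform distribution."""
    ce = F.cross_entropy(model(x_in), y_in)
    logits_out = model(x_out)
    # cross-entropy between the uniform distribution and the softmax output
    uniform_ce = -logits_out.log_softmax(dim=1).mean(dim=1).mean()
    return ce + lam * uniform_ce

def anomaly_score(model, x):
    """Negative maximum softmax probability: higher means more anomalous."""
    return -model(x).softmax(dim=1).max(dim=1).values
```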

Neural machine translation (NMT) is a deep learning based approach to machine translation, which yields state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and monolingual corpora for in-domain translation, is therefore very important for domain-specific translation. In this paper, we give a comprehensive survey of the state-of-the-art domain adaptation techniques for NMT.

Image segmentation is an important component of many image understanding systems. It aims to group pixels in a spatially and perceptually coherent manner. Typically, such algorithms have a collection of parameters that control the degree of over-segmentation produced. It remains a challenge to properly select such parameters for human-like perceptual grouping. In this work, we exploit the diversity of segments produced by different choices of parameters. We scan the segmentation parameter space and generate a collection of image segmentation hypotheses (from highly over-segmented to under-segmented). These are fed into a cost-minimization framework that produces the final segmentation by selecting segments that (1) better describe the natural contours of the image, and (2) are more stable and persistent among all the segmentation hypotheses. We compare our algorithm's performance with state-of-the-art algorithms, showing that we can achieve improved results. We also show that our framework is robust to the choice of segmentation kernel that produces the initial set of hypotheses.
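
An illustrative sketch of the hypothesis-generation step (our own simplification using scikit-image's Felzenszwalb kernel; the paper's cost-minimization stage is more involved): sweep one parameter to obtain hypotheses from over- to under-segmented, then score segments by their persistence across the stack.

```python
# An illustrative sketch (our own simplification; the paper's cost
# minimization is more involved): sweep a segmentation kernel's scale
# parameter to build a stack of hypotheses, then score each segment by
# how persistently it reappears across the stack.
import numpy as np
from skimage import data, segmentation

image = data.astronaut()
scales = [50, 100, 200, 400, 800]  # from over- to under-segmentation
hypotheses = [segmentation.felzenszwalb(image, scale=s, sigma=0.8, min_size=50)
              for s in scales]

def stability(mask, hypotheses, thresh=0.8):
    """Fraction of hypotheses containing a segment with IoU > thresh."""
    hits = 0
    for labels in hypotheses:
        best = 0.0
        for lab in np.unique(labels[mask]):  # labels overlapping the mask
            seg = labels == lab
            best = max(best, (mask & seg).sum() / (mask | seg).sum())
        hits += best > thresh
    return hits / len(hypotheses)

# Score every segment of the mid-scale hypothesis by its persistence;
# stable, persistent segments are the ones the final selection prefers.
mid = hypotheses[len(hypotheses) // 2]
scores = {lab: stability(mid == lab, hypotheses) for lab in np.unique(mid)}
```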
