
Decoding algorithms for Reed--Solomon (RS) codes are of great interest for both practical and theoretical reasons. In this paper, an efficient algorithm, called the modular approach (MA), is devised for solving the Welch--Berlekamp (WB) key equation. By taking the MA as the key equation solver, we propose a new decoding algorithm for systematic RS codes. For $(n,k)$ RS codes, where $n$ is the code length and $k$ is the code dimension, the proposed decoding algorithm has both the best asymptotic computational complexity $O(n\log(n-k) + (n-k)\log^2(n-k))$ and the smallest constant factor achieved to date. By comparing the number of field operations required, we show that when decoding practical RS codes, the new algorithm is significantly superior to the existing methods in terms of computational complexity. When decoding the $(4096, 3584)$ RS code defined over $\mathbb{F}_{2^{12}}$, the new algorithm is 10 times faster than a conventional syndrome-based method. Furthermore, the new algorithm has a regular architecture and is thus suitable for hardware implementation.
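
As a rough illustration of how the claimed complexity scales, the hedged sketch below evaluates the $O(n\log(n-k) + (n-k)\log^2(n-k))$ bound for the $(4096, 3584)$ example; constant factors and the base of the logarithm are assumptions, so this is only a proxy for how the cost grows, not the field-operation counts reported in the paper.

```python
import math

def rs_decode_op_estimate(n: int, k: int) -> float:
    """Rough proxy for the asymptotic cost O(n*log(n-k) + (n-k)*log^2(n-k)).

    Constant factors are omitted and base-2 logarithms are assumed, so this
    only indicates how the cost scales with the code parameters.
    """
    r = n - k  # number of parity symbols
    return n * math.log2(r) + r * math.log2(r) ** 2

# Example parameters from the abstract: the (4096, 3584) RS code over GF(2^12).
print(rs_decode_op_estimate(4096, 3584))
```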

Related Content

Population protocols are a relatively novel computational model in which very resource-limited anonymous agents interact in pairs with the goal of computing predicates. We consider the probabilistic version of this model, which naturally allows a setting in which a small probability of an incorrect output is tolerated. The main focus of this thesis is the question of confident leader election, an extension of the regular leader election problem with the extra requirement that the eventual leader detect its uniqueness. Having a confident leader allows a population protocol to determine when its computation has converged. This behaviour is highly beneficial and has been shown to be feasible when the original model is extended in various ways. We show that, starting from a configuration with all agents in the same state, it takes a number of interactions linear in the population size for a probabilistic population protocol to have a non-zero fraction of agents in every reachable state. This leads us to the conclusion that confident leader election is out of reach even in the probabilistic version of the model.
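
To make the interaction model concrete, the following toy sketch simulates a population protocol under a uniformly random pairwise scheduler, using the textbook leader-election rule rather than any construction from the thesis; the state names and function are illustrative only. Note that the surviving leader has no way to detect its own uniqueness, which is exactly the gap that confident leader election asks to close.

```python
import random

def simulate_leader_election(n: int, seed: int = 0) -> int:
    """Toy population-protocol simulation: every agent starts as a leader ('L');
    when two leaders meet, one becomes a follower ('F').  Returns the number of
    pairwise interactions until a single leader remains."""
    rng = random.Random(seed)
    states = ["L"] * n
    interactions = 0
    while states.count("L") > 1:
        i, j = rng.sample(range(n), 2)  # scheduler picks a uniformly random pair
        if states[i] == "L" and states[j] == "L":
            states[j] = "F"
        interactions += 1
    return interactions

print(simulate_leader_election(100))
```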

Motivated by practical generalizations of the classic $k$-median and $k$-means objectives, such as clustering with size constraints, fair clustering, and Wasserstein barycenter, we introduce a meta-theorem for designing coresets for constrained-clustering problems. The meta-theorem reduces the task of coreset construction to one on a bounded number of ring instances with a much-relaxed additive error. This reduction enables us to construct coresets using uniform sampling, in contrast to the widely-used importance sampling, and consequently we can easily handle constrained objectives. Notably and perhaps surprisingly, this simpler sampling scheme can yield coresets whose size is independent of $n$, the number of input points. Our technique yields smaller coresets, and sometimes the first coresets, for a large number of constrained clustering problems, including capacitated clustering, fair clustering, Euclidean Wasserstein barycenter, clustering in minor-excluded graphs, and polygon clustering under Fr\'{e}chet and Hausdorff distances. Finally, our technique also yields smaller coresets for $1$-median in low-dimensional Euclidean spaces, specifically of size $\tilde{O}(\varepsilon^{-1.5})$ in $\mathbb{R}^2$ and $\tilde{O}(\varepsilon^{-1.6})$ in $\mathbb{R}^3$.
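
The sampling primitive behind the construction can be shown in a minimal, hedged sketch: draw points uniformly at random and reweight them, leaving the paper's ring-instance reduction and error analysis aside. The function name and parameters below are illustrative, not the paper's.

```python
import numpy as np

def uniform_coreset(points: np.ndarray, m: int, seed: int = 0):
    """Minimal uniform-sampling coreset sketch: draw m points uniformly at
    random and give each weight n/m.  The idea is that weighted clustering
    costs on the sample approximate costs on the full data set, up to the
    additive error that the paper's ring decomposition is designed to handle."""
    rng = np.random.default_rng(seed)
    n = len(points)
    idx = rng.choice(n, size=m, replace=False)
    weights = np.full(m, n / m)
    return points[idx], weights

# Hypothetical usage: a coreset of 500 points from 100k random 2-D points.
pts = np.random.default_rng(1).normal(size=(100_000, 2))
coreset_pts, coreset_w = uniform_coreset(pts, 500)
```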

We study the trace reconstruction problem for spider graphs. Let $n$ be the number of nodes of a spider and $d$ be the length of each leg, and suppose that we are given independent traces of the spider from a deletion channel in which each non-root node is deleted with probability $q$. This is a natural generalization of the string trace reconstruction problem in theoretical computer science, which corresponds to the special case where the spider has one leg. In the regime where $d\ge \log_{1/q}(n)$, the problem can be reduced to the vanilla string trace reconstruction problem. We thus study the more interesting regime $d\le \log_{1/q}(n)$, in which entire legs of the spider are deleted with non-negligible probability. We describe an algorithm that reconstructs spiders with high probability using $\exp\left(\mathcal{O}\left(\frac{(nq^d)^{1/3}}{d^{1/3}}(\log n)^{2/3}\right)\right)$ traces. Our algorithm works for all deletion probabilities $q\in(0,1)$.
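
For intuition about the channel, here is a hedged sketch of drawing a single trace, with the spider represented simply as a list of leg-label strings (a simplification of the paper's model); the names and parameters are illustrative.

```python
import random

def spider_trace(legs, q: float, seed=None):
    """Draw one trace of a spider from the deletion channel: each non-root
    node (i.e. each symbol on each leg) is deleted independently with
    probability q; a leg that loses all its nodes disappears from the trace."""
    rng = random.Random(seed)
    trace = []
    for leg in legs:
        kept = "".join(c for c in leg if rng.random() > q)
        if kept:
            trace.append(kept)
    return trace

# Hypothetical spider with 4 legs of length 3, labelled by bits.
print(spider_trace(["010", "110", "001", "111"], q=0.3, seed=42))
```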

Group-based cryptography is a relatively young family in post-quantum cryptography. In this paper we give the first dedicated security analysis of a central problem in group-based cryptography: the so-called Semidirect Product Key Exchange (SDPKE). We present a subexponential quantum algorithm for solving SDPKE. To do this we reduce SDPKE to the Abelian Hidden Shift Problem (for which there are known quantum subexponential algorithms). We stress that this does not per se constitute a break of SDPKE; rather, the purpose of the paper is to provide a connection to known problems.

Emergency shelters, which to a certain extent reflect a city's ability to respond to and deal with major public emergencies, are essential to a modern urban emergency management system. Based on spatial analysis methods, this paper uses the Analytic Hierarchy Process (AHP) to analyze the suitability of 28 emergency shelters in Wuhan City. The Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) is further used to evaluate the accommodation capacity of emergency shelters in the central urban areas, providing a reference for optimizing existing shelters and selecting sites for new ones, as well as a basis for improving shelter service capacity. The results show that the overall situation of emergency shelters in Wuhan is good, with 96\% of the shelters reaching the medium level or above, but the suitability still needs to be improved, especially in terms of effectiveness and accessibility. Among the seven central urban areas of Wuhan, Hongshan District has the strongest accommodation capacity while Jianghan District has the weakest, with noticeable differences between districts.
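
For readers unfamiliar with TOPSIS, the sketch below shows the standard computation of relative-closeness scores; the criteria, weights, and data are hypothetical and not taken from the paper.

```python
import numpy as np

def topsis(X: np.ndarray, w: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Standard TOPSIS closeness scores.

    X       : decision matrix, rows = alternatives (e.g. shelters), cols = criteria
    w       : criteria weights summing to 1
    benefit : boolean mask, True where larger criterion values are better
    Returns relative closeness to the ideal solution in [0, 1]; larger is better.
    """
    R = X / np.sqrt((X ** 2).sum(axis=0))            # vector-normalise each criterion
    V = R * w                                        # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)

# Hypothetical example: 3 districts, criteria = (capacity, distance to hazard).
X = np.array([[8.0, 2.0], [5.0, 1.0], [9.0, 4.0]])
scores = topsis(X, w=np.array([0.6, 0.4]), benefit=np.array([True, False]))
```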

In type-and-coeffect systems, contexts are enriched by coeffects modeling how they are actually used, typically through annotations on single variables. Coeffects are computed bottom-up, combining, for each term, the coeffects of its subterms, through a fixed set of algebraic operators. We show that this principled approach can be adopted to track sharing in the imperative paradigm, that is, links among variables possibly introduced by the execution. This provides a significant example of non-structural coeffects, which cannot be computed by-variable, since the way a given variable is used can affect the coeffects of other variables. To illustrate the effectiveness of the approach, we enhance the type system tracking sharing to model a sophisticated set of features related to uniqueness and immutability. Thanks to the coeffect-based approach, we can express such features in a simple way and prove related properties with standard techniques.
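
As a point of contrast, here is a minimal sketch of a structural, per-variable coeffect (usage counting) computed bottom-up by combining subterm coeffects; the toy syntax and helper are illustrative, and the sharing coeffects of the paper are precisely the kind that cannot be computed in this by-variable fashion.

```python
from collections import Counter

# Toy structural coeffect: per-variable usage counts, computed bottom-up.
# Terms are nested tuples:  ("var", x) | ("app", t1, t2) | ("lam", x, t)
def coeffects(term) -> Counter:
    tag = term[0]
    if tag == "var":
        return Counter({term[1]: 1})                     # a variable uses itself once
    if tag == "app":
        return coeffects(term[1]) + coeffects(term[2])   # combine subterm coeffects with +
    if tag == "lam":
        ctx = coeffects(term[2])
        ctx.pop(term[1], None)                           # bound variable leaves the context
        return ctx
    raise ValueError(tag)

# (\x. x x) y  uses y once (x is used twice inside the abstraction body).
print(coeffects(("app", ("lam", "x", ("app", ("var", "x"), ("var", "x"))), ("var", "y"))))
```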

We introduce two new tools to assess the validity of statistical distributions. These tools are based on components derived from a new statistical quantity, the \emph{comparison curve}. The first tool is a graphical representation of these components on a \emph{bar plot} (B plot), which can provide a detailed appraisal of the validity of the statistical model, in particular when supplemented by acceptance regions related to the model. The knowledge gained from this representation can sometimes suggest an existing goodness-of-fit test to supplement this visual assessment with a control of the type I error. Otherwise, an adaptive test may be preferable, and the second tool is the combination of these components into a powerful $\chi^2$-type goodness-of-fit test. Because the number of these components can be large, we introduce a new selection rule to decide, in a data-driven fashion, how many of them to take into consideration. In a simulation study, our goodness-of-fit tests are competitive in power with the best solutions that have been recommended in the context of a fully specified model, as well as when some parameters must be estimated. Practical examples show how to use these tools to derive principled information about where the model departs from the data.
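
The mechanics of combining components into a data-driven $\chi^2$-type statistic can be sketched generically in the Neyman-smooth style with a BIC-type penalty; the paper's comparison-curve components and its selection rule differ, so the function below is only an assumption-laden illustration of the general idea.

```python
import numpy as np

def data_driven_chi2(components: np.ndarray, n: int):
    """Generic data-driven chi^2-type statistic (Neyman-smooth style).

    components : array of (approximately) standardised component statistics
    n          : sample size, used in the BIC-type penalty
    Returns (statistic, selected number of components).  This only shows the
    mechanism of combining squared components and choosing, from the data,
    how many of them to use; it is not the paper's selection rule.
    """
    cum = np.cumsum(components ** 2)                 # T_k = sum of the first k squared components
    penalty = np.arange(1, len(cum) + 1) * np.log(n)
    k_hat = int(np.argmax(cum - penalty)) + 1
    return cum[k_hat - 1], k_hat
```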

Parallel-in-time methods for partial differential equations (PDEs) have been the subject of intense development over recent decades, particularly for diffusion-dominated problems. It has been widely reported in the literature, however, that many of these methods perform quite poorly for advection-dominated problems. Here we analyze the iterative parallel-in-time algorithm of multigrid reduction-in-time (MGRIT) for discretizations of constant-wave-speed linear advection problems. We focus on common method-of-lines discretizations that employ upwind finite differences in space and Runge-Kutta methods in time. Using a convergence framework we developed in previous work, we prove for a subclass of these discretizations that, if the standard approach of rediscretizing the fine-grid problem on the coarse grid is used, robust MGRIT convergence with respect to CFL number and coarsening factor is not possible. This poor convergence and lack of robustness are caused, at least in part, by an inadequate coarse-grid correction of smooth Fourier modes known as characteristic components. We propose an alternative coarse-grid operator that provides a better correction of these modes. This operator is related to previous work and uses a semi-Lagrangian discretization combined with an implicitly treated truncation error correction. Theory and numerical experiments show that the new coarse-grid operator yields fast MGRIT convergence for many of the method-of-lines discretizations considered, including both implicit and explicit discretizations of high order.
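
For context, here is a minimal sketch of the simplest discretization in the family studied: first-order upwind in space with forward Euler (the one-stage Runge-Kutta method) in time on a periodic grid. It only sets up the kind of fine-grid time-stepping problem the paper analyzes and shows none of the MGRIT machinery; the grid sizes and CFL number are arbitrary choices.

```python
import numpy as np

def upwind_advection(u0: np.ndarray, a: float, dx: float, dt: float, steps: int) -> np.ndarray:
    """First-order upwind in space, forward Euler in time, for u_t + a*u_x = 0
    on a periodic grid (assumes a > 0; stable for CFL = a*dt/dx <= 1)."""
    u = u0.copy()
    cfl = a * dt / dx
    for _ in range(steps):
        u = u - cfl * (u - np.roll(u, 1))  # backward difference, correct upwind side for a > 0
    return u

# Advect a Gaussian bump once around a periodic domain at CFL = 0.8.
x = np.linspace(0.0, 1.0, 128, endpoint=False)
u0 = np.exp(-200 * (x - 0.5) ** 2)
dx = x[1] - x[0]
u = upwind_advection(u0, a=1.0, dx=dx, dt=0.8 * dx, steps=160)
```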

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate that this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
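
The "residual networks are discretisations" observation can be made concrete with a minimal numpy sketch of an explicit-Euler neural ODE; the network, its sizes, and the step count are arbitrary, and no training or adjoint computation is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 16)) * 0.1, rng.normal(size=(16, 16)) * 0.1

def f(h):
    """A small vector field f_theta(h) (here just a fixed random 2-layer MLP)."""
    return np.tanh(h @ W1) @ W2

def neural_ode_euler(h0, n_steps=10, T=1.0):
    """Explicit-Euler discretisation of dh/dt = f_theta(h): each update
    h <- h + dt * f_theta(h) has exactly the form of a residual block."""
    h, dt = h0, T / n_steps
    for _ in range(n_steps):
        h = h + dt * f(h)
    return h

out = neural_ode_euler(rng.normal(size=(1, 16)))
```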

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
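
The re-weighting scheme follows directly from the effective-number formula above; a hedged sketch is given below (rescaling the weights to sum to the number of classes is a common normalisation choice, not something specified in the abstract).

```python
import numpy as np

def class_balanced_weights(samples_per_class: np.ndarray, beta: float = 0.999) -> np.ndarray:
    """Per-class weights from the effective number of samples
    E_n = (1 - beta**n) / (1 - beta); each class is weighted by 1/E_n,
    rescaled so the weights sum to the number of classes (a common choice)."""
    effective_num = (1.0 - np.power(beta, samples_per_class)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(samples_per_class) / weights.sum()

# Hypothetical long-tailed class counts.
print(class_balanced_weights(np.array([5000, 500, 50, 5])))
```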
