
We address the problem of sparse recovery using greedy compressed sensing recovery algorithms without explicit knowledge of the sparsity order. Estimating the sparsity order is crucial in many practical scenarios, e.g., wireless communications, where the exact sparsity order of the unknown channel may be unavailable a priori. In this paper we propose a new greedy algorithm, referred to as Multiple Choice Hard Thresholding Pursuit (MCHTP), which suitably modifies the popular hard thresholding pursuit (HTP) to iteratively recover the unknown sparse vector along with its sparsity order. We provide provable performance guarantees which ensure that, with noiseless measurements, MCHTP recovers both the sparsity order and the unknown sparse vector exactly. Simulation results corroborate the theoretical findings, demonstrating that with only a loose upper bound on the sparsity, MCHTP exhibits recovery performance almost identical to that of conventional HTP with exact sparsity knowledge. Furthermore, simulations show that MCHTP has much lower computational complexity than other state-of-the-art techniques such as MSP.
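
For context, the HTP building block that MCHTP modifies alternates a gradient step, hard thresholding to the s largest entries, and a least-squares debiasing on the selected support. The following is a minimal sketch of conventional HTP with a known sparsity s (the abstract's baseline); the outer loop by which MCHTP additionally estimates s is not specified here and is therefore omitted.

```python
import numpy as np

def htp(y, A, s, max_iter=50, tol=1e-8):
    """Conventional Hard Thresholding Pursuit with known sparsity s.

    Sketch of the baseline only; MCHTP wraps an HTP-like update inside a
    procedure that also estimates s, which is not reproduced here.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(max_iter):
        # Gradient step on the least-squares objective.
        u = x + A.T @ (y - A @ x)
        # Hard thresholding: keep the s largest-magnitude entries.
        support = np.argsort(np.abs(u))[-s:]
        # Debias: least-squares fit restricted to the selected support.
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_new = np.zeros(n)
        x_new[support] = sol
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1e-12):
            x = x_new
            break
        x = x_new
    return x
```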

Related content

Point cloud registration (PCR) is a popular research topic in computer vision. Recently, evolutionary registration methods have received continued attention because of their robustness to the initial pose and their flexibility in objective function design. However, most evolutionary registration methods cannot handle local optima well, and they have rarely investigated the success ratio, i.e., the probability of not falling into a local optimum, which is closely related to the practicality of the algorithm. Evolutionary multi-task optimization (EMTO) is a widely used paradigm that can boost exploration capability through knowledge transfer among related tasks. Inspired by this concept, this study proposes a novel evolutionary registration algorithm based on EMTO, where the multi-task configuration follows the idea of solution-space cutting. Concretely, a task searching in the cut space assists another task with a complex function landscape in escaping local optima, enhancing the success ratio of registration. To reduce unnecessary computational cost, a sparse-to-dense strategy is proposed. In addition, a novel fitness function robust to various overlap rates, together with a problem-specific metric of computational cost, is introduced. Compared with 7 evolutionary registration approaches and 4 traditional registration approaches on object-scale and scene-scale registration datasets, experimental results demonstrate that the proposed method achieves superior precision and better handling of local optima.
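
To make the "flexibility in objective function design" concrete, here is a minimal sketch of a generic evolutionary-registration objective: a candidate rigid pose is scored by the trimmed nearest-neighbour distance between the transformed source cloud and the target. This is a common baseline, not the paper's overlap-robust fitness; all names and parameters are illustrative.

```python
import numpy as np

def pose_to_transform(pose):
    """pose = (rx, ry, rz, tx, ty, tz); Euler angles in radians."""
    rx, ry, rz, tx, ty, tz = pose
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx, np.array([tx, ty, tz])

def registration_fitness(pose, source, target, trim_ratio=0.7):
    """Generic trimmed nearest-neighbour objective (smaller is better).

    Not the paper's fitness function; only a common baseline used by
    evolutionary registration methods.
    """
    R, t = pose_to_transform(pose)
    moved = source @ R.T + t
    # Brute-force nearest-neighbour distances (fine for small clouds).
    d = np.linalg.norm(moved[:, None, :] - target[None, :, :], axis=2).min(axis=1)
    # Trim the largest residuals to tolerate partial overlap.
    d = np.sort(d)[: int(trim_ratio * len(d))]
    return d.mean()
```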

We derive minimax testing errors in a distributed framework where the data are split over multiple machines and their communication to a central machine is limited to $b$ bits. We investigate both the $d$- and infinite-dimensional signal detection problems under Gaussian white noise. We also derive distributed testing algorithms that attain the theoretical lower bounds. Our results show that distributed testing is subject to fundamentally different phenomena that are not observed in distributed estimation. Among our findings, we show that testing protocols that have access to shared randomness can perform strictly better in some regimes than those that do not. We also observe that consistent nonparametric distributed testing is always possible, even with as little as $1$ bit of communication, and that the corresponding test outperforms the best local test using only the information available at a single local machine. Furthermore, we derive adaptive nonparametric distributed testing strategies and the corresponding theoretical lower bounds.
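
To illustrate what a 1-bit communication budget can mean in practice, the sketch below has each machine send the single bit given by its local z-test decision, and the central machine rejects when the vote count exceeds its null expectation. This only illustrates the idea of bit-limited protocols; it is not the paper's (optimal) construction, and the level and threshold choices here are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def one_bit_distributed_test(local_samples, local_level=0.5, mu0=0.0):
    """Toy 1-bit protocol for testing H0: mean = mu0 under unit-variance Gaussian noise.

    Each machine transmits one bit (its local z-test decision at level
    `local_level`); the center rejects when the number of positive votes
    exceeds what is expected under H0 (normal-approximation threshold,
    roughly a 5% level).
    """
    votes = []
    for x in local_samples:
        z = np.sqrt(len(x)) * (np.mean(x) - mu0)   # local z statistic
        votes.append(int(z > norm.ppf(1 - local_level)))
    m, k = len(votes), sum(votes)
    threshold = m * local_level + 1.645 * np.sqrt(m * local_level * (1 - local_level))
    return k > threshold
```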

In this paper, we propose a low-cost, parameter-free, and pressure-robust Stokes solver based on the enriched Galerkin (EG) method with a discontinuous velocity enrichment function. The EG method employs the interior penalty discontinuous Galerkin (IPDG) formulation to weakly impose the continuity of the velocity function. However, the symmetric IPDG formulation, despite the advantage of symmetry, requires considerable computational effort to choose an optimal penalty parameter and to compute the various trace terms. In order to reduce this effort, we replace the derivatives of the velocity function with weak derivatives computed from the geometric data of the elements. Therefore, our modified EG (mEG) method is a parameter-free numerical scheme with reduced computational complexity as well as optimal rates of convergence. Moreover, we achieve pressure-robustness for the mEG method by employing a velocity reconstruction operator on the load vector on the right-hand side of the discrete system. The theoretical results are confirmed through numerical experiments with two- and three-dimensional examples.
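
For context, the penalty parameter and trace terms referred to above appear in the standard symmetric interior penalty (SIPG) bilinear form for the viscous term, written here in a generic textbook form (not taken from the paper); $\sigma$ is the penalty parameter whose tuning the mEG method avoids:

$$
a_h(\mathbf{u},\mathbf{v}) \;=\; \sum_{T\in\mathcal{T}_h}\int_T \nabla\mathbf{u}:\nabla\mathbf{v}\,dx
\;-\;\sum_{e\in\mathcal{E}_h}\int_e\Big(\{\!\{\nabla\mathbf{u}\}\!\}\,\mathbf{n}_e\cdot[\![\mathbf{v}]\!]
+\{\!\{\nabla\mathbf{v}\}\!\}\,\mathbf{n}_e\cdot[\![\mathbf{u}]\!]\Big)\,ds
\;+\;\sum_{e\in\mathcal{E}_h}\frac{\sigma}{h_e}\int_e[\![\mathbf{u}]\!]\cdot[\![\mathbf{v}]\!]\,ds .
$$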

In this paper we deal with the problem of sequential testing of multiple hypotheses. We are interested in minimising a weighted average sample number under restrictions on the error probabilities. A computer-oriented method for constructing optimal sequential tests is proposed. For the particular case of sampling from a Bernoulli population we develop a full set of computer algorithms for the optimal design and performance evaluation of sequential tests, and implement them as computer code written in the R programming language. The tests we obtain are exact (neither asymptotic nor approximate). Extensions to other distribution families are discussed. A numerical comparison with other known tests (of MSPRT type) is carried out.
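
For reference, the MSPRT-type comparator mentioned above can be sketched as follows for simple Bernoulli hypotheses: sampling stops as soon as one hypothesis dominates every competitor's likelihood by a fixed factor. This is only the classical comparison test, not the computer-designed optimal test the paper constructs, and the threshold value is illustrative.

```python
import numpy as np

def msprt_bernoulli(sample_stream, thetas, A=20.0):
    """MSPRT-type sequential test between K simple Bernoulli hypotheses.

    Accept hypothesis i as soon as its log-likelihood exceeds every
    competitor's by log(A).  Returns (accepted index or None, sample size).
    """
    thetas = np.asarray(thetas, dtype=float)
    loglik = np.zeros(len(thetas))
    n = 0
    for n, x in enumerate(sample_stream, start=1):
        # Update the log-likelihood of each hypothesis with the new 0/1 observation.
        loglik += np.where(x == 1, np.log(thetas), np.log(1 - thetas))
        for i in range(len(thetas)):
            if np.all(loglik[i] - np.delete(loglik, i) >= np.log(A)):
                return i, n
    return None, n
```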

According to the public goods game (PGG) protocol, participants decide freely whether or not to contribute to a common pool, but the resulting benefit is distributed equally. A conceptually similar dilemma emerges when participants decide whether to claim a common resource while the related cost is covered equally by all group members. The latter establishes a reversed form of the original public goods game (R-PGG). In this work, we show that R-PGG is equivalent to PGG in several circumstances, from the traditional analysis, via the evolutionary approach in unstructured populations, to Monte Carlo simulations in structured populations. However, there are also cases when the behavior of R-PGG differs surprisingly from the outcome of PGG. When the key parameters are heterogeneous, for instance, the results of PGG and R-PGG can diverge even if we apply the same amplitude of heterogeneity. We find that heterogeneity in R-PGG generally impedes cooperation, while the opposite is observed for PGG. These diverse system reactions can be understood by following how the payoff functions change when heterogeneity is introduced into the parameter space. This analysis also reveals the distinct roles of cooperator and defector strategies in the two games. Our observations will hopefully stimulate further research on the potential differences between PGG and R-PGG under alternative, more complex conditions.
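
A minimal sketch of the two payoff structures may help fix ideas. The PGG payoffs below are standard; the R-PGG payoffs use one illustrative parameterization (private benefit per claim, group cost per claim shared equally), which is an assumption and not necessarily the paper's exact formulation.

```python
def pgg_payoffs(n_coop, group_size, c=1.0, r=3.0):
    """Standard PGG: contributors pay c, the pool is multiplied by r and shared equally."""
    share = r * c * n_coop / group_size
    return share - c, share            # (cooperator payoff, defector payoff)

def rpgg_payoffs(n_claim, group_size, b=1.0, cost=3.0):
    """Illustrative reversed PGG: each claimant gains b privately, while the
    group cost `cost` per claim is shared equally by all members.
    With cost > b, claiming benefits the individual but harms the group."""
    burden = cost * n_claim / group_size
    return -burden, b - burden         # (restrained payoff, claimant payoff)
```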

Integrated information theory (IIT) is a theoretical framework that provides a quantitative measure to estimate whether a physical system is conscious, its degree of consciousness, and the complexity of the qualia space that the system is experiencing. Formally, IIT rests on the assumption that if a surrogate physical system can fully embed the phenomenological properties of consciousness, then the system's properties must be constrained by the properties of the qualia being experienced. Following this assumption, IIT represents the physical system as a network of interconnected elements that can be thought of as a probabilistic causal graph, $\mathcal{G}$, where each node has an input-output function and the whole graph is encoded in a transition probability matrix. Consequently, IIT's quantitative measure of consciousness, $\Phi$, is computed with respect to the transition probability matrix and the present state of the graph. In this paper, we provide a random search algorithm that optimizes $\Phi$ in order to investigate, as the number of nodes increases, the structure of the graphs that attain higher $\Phi$. We also provide arguments showing the difficulties of applying more complex black-box search algorithms, such as Bayesian optimization or metaheuristics, to this particular problem. Additionally, we suggest specific research lines for these techniques to enhance search algorithms that guarantee maximal $\Phi$.
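
The outer random-search loop can be sketched as below. The $\Phi$ evaluator itself is the expensive part and is passed in as `compute_phi`, a hypothetical stand-in for an IIT software package; its signature `(tpm, state) -> float` is an assumption, as is the candidate-generation scheme.

```python
import numpy as np

def random_search_phi(compute_phi, n_nodes, n_iters=1000, seed=0):
    """Random search over transition probability matrices of binary-node graphs.

    `compute_phi` is a placeholder for an IIT Phi evaluator mapping
    (tpm, present_state) -> float.
    """
    rng = np.random.default_rng(seed)
    n_states = 2 ** n_nodes
    state = tuple(0 for _ in range(n_nodes))      # present state of the graph
    best_tpm, best_phi = None, -np.inf
    for _ in range(n_iters):
        # Candidate: for each current state, the probability that each node is ON next.
        tpm = rng.random((n_states, n_nodes))
        phi = compute_phi(tpm, state)
        if phi > best_phi:
            best_tpm, best_phi = tpm, phi
    return best_tpm, best_phi
```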

We develop a distributed Block Chebyshev-Davidson algorithm to solve large-scale leading-eigenvalue problems for spectral analysis in spectral clustering. First, the efficiency of the Chebyshev-Davidson algorithm relies on prior knowledge of the eigenvalue spectrum, which can be expensive to estimate. This issue can be lessened by the analytic spectrum estimates available for the Laplacian or normalized Laplacian matrices in spectral clustering, making the proposed algorithm very efficient in that setting. Second, to make the proposed algorithm capable of analyzing big data, a distributed and parallel version has been developed with attractive scalability. The speedup from parallel computing is approximately $\sqrt{p}$, where $p$ denotes the number of processes. Numerical results are provided to demonstrate its efficiency and its advantage over existing algorithms in both sequential and parallel computing.
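
The core primitive of Chebyshev-Davidson methods is the Chebyshev polynomial filter, which damps the unwanted part of the spectrum; for a normalized graph Laplacian that part lies inside the analytically known interval [0, 2], which is why no spectrum estimation is needed. Below is a minimal single-process sketch of the filter via the three-term recurrence (practical codes also rescale the recurrence to avoid overflow, which is omitted here).

```python
import numpy as np

def chebyshev_filter(A, x, degree, a, b):
    """Apply a degree-`degree` Chebyshev polynomial of A to the vector x,
    mapped so the unwanted spectral interval [a, b] goes to [-1, 1] and is damped,
    while eigencomponents outside [a, b] are amplified.
    """
    e = (b - a) / 2.0                   # half-width of the damped interval
    c = (b + a) / 2.0                   # center of the damped interval
    y_prev = x
    y = (A @ x - c * x) / e             # degree-1 term of the recurrence
    for _ in range(2, degree + 1):
        # T_k = 2 * B * T_{k-1} - T_{k-2}, with B = (A - c I) / e
        y_prev, y = y, 2.0 * (A @ y - c * y) / e - y_prev
    return y
```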

This manuscript makes two contributions to the field of change-point detection. In a general change-point setting, we provide a generic algorithm for aggregating local homogeneity tests into an estimator of change-points in a time series. Interestingly, we establish that the error rates of the collection of tests directly translate into detection properties of the change-point estimator. This generic scheme is then applied to various problems including covariance change-point detection, nonparametric change-point detection and sparse multivariate mean change-point detection. For the latter, we derive minimax optimal rates that are adaptive to the unknown sparsity and to the distance between change-points when the noise is Gaussian. For sub-Gaussian noise, we introduce a variant that is optimal in almost all sparsity regimes.
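
A minimal sketch of the generic "aggregate local tests" idea, using a simple mean-shift z-statistic on symmetric windows as the local homogeneity test. The choice of local test, window, and threshold here is illustrative only; the paper's scheme is stated for general tests and problem-specific calibrations.

```python
import numpy as np

def aggregate_local_tests(x, window, threshold):
    """Turn local homogeneity tests into change-point estimates.

    At each candidate location t, compare the means of `window` points to the
    left and right with a z-type statistic (unit variance assumed); report
    locations where the statistic exceeds `threshold` and is a local maximum.
    """
    n = len(x)
    stat = np.zeros(n)
    for t in range(window, n - window):
        left, right = x[t - window:t], x[t:t + window]
        stat[t] = np.sqrt(window / 2.0) * abs(right.mean() - left.mean())
    detections = []
    for t in range(window, n - window):
        if stat[t] > threshold and stat[t] == stat[max(0, t - window): t + window].max():
            detections.append(t)
    return detections
```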

In the cybersecurity setting, defenders are often at the mercy of their detection technologies and subject to the information and experiences that individual analysts have. In order to give defenders an advantage, it is important to understand an attacker's motivation and their likely next best action. As a first step in modeling this behavior, we introduce a security game framework that simulates the interplay between attackers and defenders in a noisy environment, focusing on the factors that drive decision making for attackers and defenders in the variants of the game with full knowledge and observability, knowledge of the parameters but no observability of the state (``partial knowledge''), and zero knowledge or observability (``zero knowledge''). We demonstrate the importance of making the right assumptions about attackers, given significant differences in outcomes. Furthermore, there is a measurable trade-off between false positives and true positives in terms of attacker outcomes, suggesting that a more false-positive-prone environment may be acceptable under conditions where true positives are also higher.
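
The false-positive/true-positive trade-off can be illustrated with a toy noisy interaction loop, shown below. This is not the paper's game; the action and alert model, parameter names, and scoring are all simplified assumptions.

```python
import numpy as np

def simulate_detection_game(steps, attack_prob, tpr, fpr, seed=0):
    """Toy noisy attacker/defender interaction.

    Each step the attacker acts with probability `attack_prob`; the defender
    sees an alert with probability `tpr` when an attack occurred and `fpr`
    otherwise, and responds to every alert.  Returns (undetected attacker
    successes, defender false alarms) to expose the TP/FP trade-off.
    """
    rng = np.random.default_rng(seed)
    successes = false_alarms = 0
    for _ in range(steps):
        attack = rng.random() < attack_prob
        alert = rng.random() < (tpr if attack else fpr)
        if attack and not alert:
            successes += 1        # undetected attack succeeds
        if alert and not attack:
            false_alarms += 1     # defender wastes effort on a false positive
    return successes, false_alarms
```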

The discrete cosine transform (DCT) is a relevant tool in signal processing applications, mainly known for its good decorrelation properties. Current image and video coding standards -- such as JPEG and HEVC -- adopt the DCT as a fundamental building block for compression. Recent works have introduced low-complexity approximations for the DCT, which become paramount in applications demanding real-time computation and low power consumption. The design of DCT approximations involves a trade-off between computational complexity and performance. This paper introduces a new multiparametric transform class encompassing the round-off DCT (RDCT) and the modified RDCT (MRDCT), two relevant multiplierless 8-point approximate DCTs. The associated fast algorithm is provided. Four novel orthogonal low-complexity 8-point DCT approximations are obtained by solving a multicriteria optimization problem. The optimal 8-point transforms are scaled to lengths 16 and 32 while keeping the arithmetic complexity low. The proposed methods are assessed by proximity and coding measures with respect to the exact DCT. Image and video coding experiments as well as hardware realizations are performed. The novel transforms perform close to or outperform the current state-of-the-art DCT approximations.
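
To make the "proximity to the exact DCT" assessment concrete, the sketch below builds the exact orthogonal 8-point DCT-II matrix and measures how far a candidate low-complexity transform is from it in the Frobenius norm. The candidate matrix is left to the caller; the specific RDCT/MRDCT and optimized matrices from the paper are not reproduced here, and this Frobenius measure is only one simple stand-in for the proximity figures of merit used in the literature.

```python
import numpy as np

def dct_matrix(n=8):
    """Exact n-point DCT-II matrix in its orthogonal form."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)          # DC row has a different normalization
    return C

def frobenius_proximity(T_approx, n=8):
    """Frobenius-norm distance between a candidate approximation and the exact DCT.

    `T_approx` would be a low-complexity (e.g. multiplierless, after
    orthogonalization) candidate supplied by the caller.
    """
    return np.linalg.norm(dct_matrix(n) - np.asarray(T_approx), ord="fro") ** 2
```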
