
The NP-hard problem of decoding random linear codes is central to both coding theory and cryptography. In particular, it underpins the security of many code-based post-quantum cryptographic schemes. The state-of-the-art algorithms for solving this problem are information set decoding (ISD) and its advanced variants. In this work, we consider syndrome decoding in the multiple-instance setting and apply two strategies for different scenarios. The first strategy solves all instances with the aid of precomputation: we adjust the current framework, distinguishing an offline phase from an online phase to reduce the amortized complexity, and discuss the impact on the concrete security of some post-quantum schemes. The second strategy solves one out of many instances. Adapting the analysis of an earlier algorithm, we discuss the effectiveness of using advanced ISD variants and confirm a related folklore conjecture.
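For readers unfamiliar with information set decoding, a minimal sketch of Prange's algorithm, the simplest ISD variant, may help fix ideas. This is a textbook illustration over GF(2), not the paper's precomputation framework; function and variable names are ours.

```python
import numpy as np

def prange_isd(H, s, w, max_iter=2000, seed=0):
    """Prange's ISD sketch: find e with H e^T = s (mod 2) and weight w.
    Repeatedly permute columns, Gaussian-eliminate H into [I | Q], and bet
    that all w errors fall on the identity block, so e there equals the
    reduced syndrome."""
    rng = np.random.default_rng(seed)
    H = H.astype(np.uint8)
    s = s.astype(np.uint8)
    r, n = H.shape
    for _ in range(max_iter):
        perm = rng.permutation(n)
        Hp = H[:, perm].copy()
        sp = s.copy()
        ok = True
        for row in range(r):  # Gaussian elimination mod 2 on first r columns
            pivots = np.nonzero(Hp[row:, row])[0]
            if pivots.size == 0:
                ok = False  # first r permuted columns singular; re-randomize
                break
            p = int(pivots[0]) + row
            if p != row:
                Hp[[row, p]] = Hp[[p, row]]
                sp[[row, p]] = sp[[p, row]]
            for other in range(r):
                if other != row and Hp[other, row]:
                    Hp[other] ^= Hp[row]
                    sp[other] ^= sp[row]
        if ok and sp.sum() == w:
            e = np.zeros(n, dtype=np.uint8)
            e[perm[:r]] = sp  # map back through the column permutation
            return e
    return None
```

Each iteration succeeds only when the random information set is error-free, which is exactly why the algorithm's cost is exponential in the weight, and why the amortized multiple-instance analysis in the abstract matters.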

Related content

Product ranking is a core problem for revenue-maximizing online retailers. To design proper product ranking algorithms, various consumer choice models have been proposed to characterize consumers' behavior when they are presented with a list of products. However, existing works assume that each consumer purchases at most one product, or keeps viewing the product list after purchasing a product, which does not agree with common practice in real scenarios. In this paper, we assume that each consumer can purchase multiple products at will. To model consumers' willingness to view and purchase, we assign each consumer a random attention span and purchase budget, which determine the maximum numbers of products they view and purchase, respectively. Under this setting, we first design an optimal ranking policy for the case where the online retailer can precisely model consumers' behavior. Based on this policy, we further develop the Multiple-Purchase-with-Budget UCB (MPB-UCB) algorithms with $\tilde{O}(\sqrt{T})$ regret, which estimate consumers' behavior and maximize revenue simultaneously in online settings. Experiments on both synthetic and semi-synthetic datasets demonstrate the effectiveness of the proposed algorithms.
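The UCB machinery behind MPB-UCB follows the standard optimism-in-the-face-of-uncertainty template. As a hedged illustration of that template only, here is plain UCB1 on Bernoulli arms; the paper's actual algorithm additionally handles rankings, attention spans, and budgets.

```python
import math
import random

def ucb1(arms, horizon, seed=0):
    """UCB1 on Bernoulli arms: pick the arm maximizing empirical mean plus
    an exploration bonus sqrt(2 ln t / n_i). Generic sketch, not MPB-UCB."""
    rng = random.Random(seed)
    n = len(arms)
    counts = [0] * n      # pulls per arm
    means = [0.0] * n     # running empirical means
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= n:
            a = t - 1     # initialization: play each arm once
        else:
            a = max(range(n),
                    key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = 1.0 if rng.random() < arms[a] else 0.0
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # incremental mean update
        total_reward += r
    return counts, total_reward
```

The $\tilde{O}(\sqrt{T})$ regret claimed in the abstract is the usual rate such confidence-bound constructions achieve once the confidence radii are adapted to the structured multiple-purchase feedback.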

We address the problem of sparse recovery using greedy compressed sensing recovery algorithms without explicit knowledge of the sparsity. Estimating the sparsity order is crucial in many practical scenarios, e.g., wireless communications, where the exact sparsity order of the unknown channel may be unavailable a priori. In this paper we propose a new greedy algorithm, referred to as Multiple Choice Hard Thresholding Pursuit (MCHTP), which suitably modifies the popular hard thresholding pursuit (HTP) to iteratively recover the unknown sparse vector along with its sparsity order. We provide provable performance guarantees which ensure that MCHTP estimates the sparsity order exactly and recovers the unknown sparse vector exactly from noiseless measurements. Simulation results corroborate the theoretical findings, demonstrating that even without exact sparsity knowledge, with only a loose upper bound on the sparsity, MCHTP exhibits recovery performance almost identical to that of conventional HTP with exact sparsity knowledge. Furthermore, the simulations demonstrate that MCHTP has much lower computational complexity than other state-of-the-art techniques such as MSP.
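For readers unfamiliar with the HTP baseline that MCHTP modifies, here is a minimal sketch of standard HTP with known sparsity $s$ (a textbook version, not the authors' code): alternate a gradient step, hard thresholding to the $s$ largest entries, and least-squares debiasing on the selected support.

```python
import numpy as np

def htp(A, y, s, max_iter=100):
    """Standard Hard Thresholding Pursuit with known sparsity s.
    Assumes A has roughly unit-norm columns (e.g. Gaussian / sqrt(m))."""
    n = A.shape[1]
    x = np.zeros(n)
    support = np.zeros(0, dtype=int)
    for _ in range(max_iter):
        g = x + A.T @ (y - A @ x)                        # gradient step
        new_support = np.sort(np.argsort(np.abs(g))[-s:])  # keep s largest
        if np.array_equal(new_support, support):
            break                                        # support stabilized
        support = new_support
        x = np.zeros(n)
        # debias: exact least squares restricted to the candidate support
        x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x
```

MCHTP's contribution, per the abstract, is to run this kind of iteration while also searching over candidate sparsity orders, so that the true $s$ need not be supplied.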

"What-if" questions are intuitively generated and commonly asked during the design process. Engineers and architects need to inherently conduct design decisions, progressing from one phase to another. They either use empirical domain experience, simulations, or data-driven methods to acquire consequential feedback. We take an example from an interdisciplinary domain of energy-efficient building design to argue that the current methods for decision support have limitations or deficiencies in four aspects: parametric independency identification, gaps in integrating knowledge-based and data-driven approaches, less explicit model interpretation, and ambiguous decision support boundaries. In this study, we first clarify the nature of dynamic experience in individuals and constant principal knowledge in design. Subsequently, we introduce causal inference into the domain. A four-step process is proposed to discover and analyze parametric dependencies in a mathematically rigorous and computationally efficient manner by identifying the causal diagram with interventions. The causal diagram provides a nexus for integrating domain knowledge with data-driven methods, providing interpretability and testability against the domain experience within the design space. Extracting causal structures from the data is close to the nature design reasoning process. As an illustration, we applied the properties of the proposed estimators through simulations. The paper concludes with a feasibility study demonstrating the proposed framework's realization.

Real-world optimization problems may have different underlying structures. In black-box optimization, the dependencies between decision variables remain unknown, although some techniques can discover such interactions accurately. In Large Scale Global Optimization (LSGO), problems are high-dimensional, and decomposing them into subproblems that are optimized separately has proven effective. The effectiveness of such approaches, however, depends heavily on the accuracy of the problem decomposition. Many state-of-the-art decomposition strategies derive from Differential Grouping (DG). However, if a given problem consists of non-additively separable subproblems, DG-based strategies may report many non-existing interactions. On the other hand, the monotonicity-checking strategies proposed so far do not report non-existing interactions for separable subproblems but may miss many of the existing ones. Therefore, we propose Incremental Recursive Ranking Grouping (IRRG), which suffers from neither of these flaws. IRRG consumes more fitness function evaluations than recent DG-based proposals, e.g., Recursive DG 3 (RDG3). Nevertheless, after embedding IRRG or RDG3, the effectiveness of the considered Cooperative Co-evolution frameworks was similar for problems with additively separable subproblems that suit RDG3. After replacing the additive separability with non-additive separability, embedding IRRG leads to results of significantly higher quality.
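The DG family decides whether two variables interact by comparing finite differences of the fitness function. A minimal sketch of that pairwise check follows; this is our simplified illustration of the core DG idea, not RDG3 or IRRG.

```python
import numpy as np

def dg_interact(f, x, i, j, delta=1.0, eps=1e-9):
    """Differential-Grouping-style pairwise check: variables i and j
    interact if shifting x_j changes the effect that shifting x_i has on
    f, i.e. the finite-difference cross term is nonzero."""
    e_i = np.zeros_like(x); e_i[i] = delta
    e_j = np.zeros_like(x); e_j[j] = delta
    d1 = f(x + e_i) - f(x)                # effect of moving x_i alone
    d2 = f(x + e_i + e_j) - f(x + e_j)    # same move after shifting x_j
    return abs(d1 - d2) > eps
```

For an additively separable pair such as $x_0^2 + x_1^2$ the two differences coincide and no interaction is reported, while for $x_0 x_1$ they differ. The abstract's point is that this additive criterion misfires on non-additively separable subproblems, which motivates IRRG's ranking-based alternative.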

Clustering is an unsupervised analysis method that consists in grouping samples into homogeneous and well-separated subgroups of observations, called clusters. To interpret the clusters, statistical hypothesis testing is often used to infer which variables significantly separate the estimated clusters from each other. However, the hypotheses are data-driven, since they are derived from the clustering results. This double use of the data causes traditional hypothesis tests to fail to control the Type I error rate, particularly because of uncertainty in the clustering process and the artificial differences it can create. We propose three novel statistical hypothesis tests that account for the clustering process. Our tests efficiently control the Type I error rate by identifying only the variables that contain a true signal separating groups of observations.

We consider a stationary Markov process that models certain queues with bulk service of a fixed number m of admitted customers. We derive an integral expression for its transition probability function in terms of certain multi-orthogonal polynomials, and we study the convergence of an appropriate scheme of simultaneous quadrature rules to design an algorithm for computing this integral expression.

Data integration has become increasingly popular owing to the availability of multiple data sources. This study considers quantile regression estimation when a key covariate has multiple proxies across several datasets. The proposed method incorporates, in a unified estimation procedure, multiple proxies that have various relationships with the unobserved covariate. The approach allows inference on both the quantile function and the unobserved covariate. Moreover, it does not require the quantile function to be linear and simultaneously accommodates both linear and nonlinear proxies. Simulation studies demonstrate that the methodology successfully integrates multiple proxies and reveals quantile relationships for a wide range of nonlinear data. The proposed method is applied to administrative data from the Survey of Household Finances and Living Conditions, provided by Statistics Korea, to characterize the relationship between assets and salary income in the presence of multiple income records.
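As background, quantile regression estimates a conditional quantile by minimizing the pinball (check) loss. A minimal numpy sketch of a linear quantile fit by subgradient descent follows; this is a baseline illustration of the loss only, not the paper's proxy-integration estimator.

```python
import numpy as np

def fit_linear_quantile(X, y, tau, lr=0.1, n_iter=10000):
    """Fit y ~ [1, X] @ beta at quantile level tau by subgradient descent
    on the pinball loss rho_tau(r) = r * (tau - 1{r < 0})."""
    n, d = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])   # prepend an intercept column
    beta = np.zeros(d + 1)
    for _ in range(n_iter):
        resid = y - Xb @ beta
        # subgradient of the mean pinball loss w.r.t. beta
        grad = -Xb.T @ (tau - (resid < 0)) / n
        beta -= lr * grad
    return beta
```

With tau = 0.5 this reduces to median regression; the study's contribution is to carry this kind of quantile objective through when the covariate itself is only observed via several noisy proxies.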

In domains where sample sizes are limited, efficient learning algorithms are critical. Learning using privileged information (LuPI) offers increased sample efficiency by allowing prediction models access to auxiliary information at training time that is unavailable when the models are deployed. Recent work showed that for prediction in linear-Gaussian dynamical systems, a LuPI learner with access to intermediate time-series data is never worse, and often better, in expectation than any unbiased classical learner. We provide new insights into this analysis and generalize it to nonlinear prediction tasks in latent dynamical systems, extending the theoretical guarantees to the case where the map connecting latent variables and observations is known up to a linear transform. In addition, we propose algorithms based on random features and representation learning for the case when this map is unknown. A suite of empirical results confirms the theoretical findings and shows the potential of using privileged time-series information in nonlinear prediction.

While multitask representation learning has become a popular approach in reinforcement learning (RL) to boost sample efficiency, theoretical understanding of why and how it works is still limited. Most previous analyses could only assume that the representation function is already known to the agent or belongs to a linear function class, since analyzing general function-class representations encounters non-trivial technical obstacles, such as establishing generalization guarantees and formulating confidence bounds in abstract function spaces. However, the linear-case analysis relies heavily on the particularity of the linear function class, whereas real-world practice usually adopts general non-linear representation functions such as neural networks, which significantly limits its applicability. In this work, we extend the analysis to general function-class representations. Specifically, we consider an agent playing $M$ contextual bandits (or MDPs) concurrently and extracting a shared representation function $\phi$ from a specific function class $\Phi$ using our proposed Generalized Functional Upper Confidence Bound (GFUCB) algorithm. We theoretically validate, for the first time, the benefit of multitask representation learning within a general function class for bandits and linear MDPs. Lastly, we conduct experiments demonstrating the effectiveness of our algorithm with neural network representations.

Solving large systems of equations is a challenge in modeling natural phenomena, such as simulating subsurface flow. To avoid systems that are intractable on current computers, it is often necessary to neglect information at small scales, an approach known as coarse-graining. For many practical applications, such as flow in porous, homogeneous materials, coarse-graining offers a sufficiently accurate approximation of the solution. Unfortunately, fractured systems cannot be accurately coarse-grained, as critical network topology exists at the smallest scales, including topology that can push the network across a percolation threshold. New techniques are therefore necessary to model important fracture systems accurately. Quantum algorithms for solving linear systems offer a theoretically exponential improvement over their classical counterparts, and in this work we introduce two quantum algorithms for fractured flow. The first, designed for future quantum computers that operate without error, has enormous potential, but we demonstrate that current hardware is too noisy for adequate performance. The second, designed to be noise-resilient, already performs well for problems of small to medium size (on the order of 10 to 1000 nodes), which we demonstrate experimentally and explain theoretically. We expect further improvements by leveraging quantum error mitigation and preconditioning.
