
The security of public-key cryptosystems relies on computationally hard problems that are classically analyzed by number-theoretic methods. In this paper, we introduce a new perspective on cryptosystems by interpreting the Diffie-Hellman key exchange as a nonlinear dynamical system. Employing Koopman theory, we lift this dynamical system into a higher-dimensional space to analytically derive a purely linear system that equivalently describes the underlying cryptosystem. In this form, analytic tools for linear systems allow us to reconstruct the secret integers of the key exchange by simple manipulations. Moreover, we provide an upper bound on the minimal lifting dimension required to obtain perfect accuracy. To demonstrate the potential of our method, we relate our findings to existing results on algorithmic complexity. Finally, we transfer this approach to a data-driven setting in which the Koopman representation is learned from data samples of the cryptosystem.
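
The following minimal sketch (not the paper's construction; the prime, generator, and exponent are illustrative toy values) shows the core lifting idea: the map $x \mapsto g \cdot x \bmod p$ becomes a purely linear system after embedding each residue as a one-hot vector, because modular multiplication merely permutes the residues.

```python
import numpy as np

# Toy sketch: one-hot lifting turns modular multiplication into a
# permutation matrix, i.e., a linear system in dimension p.
p, g = 11, 2          # illustrative toy prime and generator
a_secret = 6          # a party's secret exponent (unknown to an observer)

# Koopman/permutation matrix K: column x maps to row (g*x) % p
K = np.zeros((p, p))
for x in range(p):
    K[(g * x) % p, x] = 1.0

def lift(x):          # one-hot embedding of a residue
    e = np.zeros(p); e[x % p] = 1.0
    return e

# The public value g^a mod p is reached by a linear iterations of K
z = np.linalg.matrix_power(K, a_secret) @ lift(1)
public = int(np.argmax(z))
assert public == pow(g, a_secret, p)

# In the lifted linear system, recovering the secret amounts to counting
# how many applications of K map lift(1) to lift(public).  Brute-force
# counting here; the paper derives this analytically.
z, steps = lift(1), 0
while np.argmax(z) != public:
    z, steps = K @ z, steps + 1
print(steps)          # 6 == a_secret
```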

Related content

The analysis of compositional data has been dominated by the use of logratio transformations, which ensure exact subcompositional coherence and, in some situations, exact isometry as well. A problem with this approach is that data zeros, found in most applications, have to be replaced to allow the logarithmic transformation. An alternative approach, called the 'chiPower' transformation, allows data zeros: it combines the standardization inherent in the chi-square distance of correspondence analysis with the essential elements of the Box-Cox power transformation. The chiPower transformation is justified because it defines between-sample distances that tend to logratio distances for strictly positive data as the power parameter tends to zero, in which case it is equivalent to transforming to logratios. For data with zeros, a value of the power can be identified that brings the chiPower transformation as close as possible to a logratio transformation, without having to substitute the zeros. Especially for high-dimensional data, this alternative approach can achieve such a high level of coherence and isometry as to be a valid approach to the analysis of compositional data. Furthermore, in a supervised learning context, if the compositional variables serve as predictors of a response in a modelling framework, for example generalized linear models, then the power can be used as a tuning parameter to optimize the accuracy of prediction through cross-validation. The chiPower-transformed variables have a straightforward interpretation, since each is identified with a single compositional part, not a ratio.
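
The exact chiPower formula, with its chi-square standardization, is not reproduced here; the sketch below only checks numerically the Box-Cox limit underlying the key claim, namely that differences of power-transformed parts approach logratios as the power tends to zero, while remaining defined at zero for any positive power.

```python
import numpy as np

# Box-Cox limit behind the chiPower idea: (x**a - 1)/a -> log(x) as
# a -> 0, so differences of power-transformed parts tend to logratios
# for strictly positive data, yet x = 0 stays admissible for a > 0.
def box_cox(x, alpha):
    return (x**alpha - 1.0) / alpha if alpha != 0 else np.log(x)

x, y = 0.3, 0.05                       # two strictly positive parts
for alpha in [1.0, 0.5, 0.1, 0.01]:
    diff = box_cox(x, alpha) - box_cox(y, alpha)
    print(f"alpha={alpha:5.2f}  power diff={diff:8.4f}")
print(f"logratio log(x/y)    ={np.log(x / y):8.4f}")
```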

Bayesian linear mixed-effects models and Bayesian ANOVA are increasingly being used in the cognitive sciences to perform null hypothesis tests, where a null hypothesis that an effect is zero is compared with an alternative hypothesis that the effect exists and is different from zero. While software tools for Bayes factor null hypothesis tests are easily accessible, how to specify the data and the model correctly is often not clear. In Bayesian approaches, many authors use data aggregation at the by-subject level and estimate Bayes factors on aggregated data. Here, we use simulation-based calibration for model inference applied to several example experimental designs to demonstrate that, as with frequentist analysis, such null hypothesis tests on aggregated data can be problematic in Bayesian analysis. Specifically, when random slope variances differ (i.e., violated sphericity assumption), Bayes factors are too conservative for contrasts where the variance is small and they are too liberal for contrasts where the variance is large. Running Bayesian ANOVA on aggregated data can - if the sphericity assumption is violated - likewise lead to biased Bayes factor results. Moreover, Bayes factors for by-subject aggregated data are biased (too liberal) when random item slope variance is present but ignored in the analysis. These problems can be circumvented or reduced by running Bayesian linear mixed-effects models on non-aggregated data such as on individual trials, and by explicitly modeling the full random effects structure. Reproducible code is available from \url{//osf.io/mjf47/}.
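
A minimal simulation sketch of the data structure at issue (not the paper's simulation-based calibration; all variance components are illustrative): trial-level data with a per-subject random slope are aggregated to by-subject condition means, which is exactly the step that discards the information needed to separate residual variance from random slope variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Within-subject design with a random slope, then by-subject aggregation.
n_subj, n_trials, effect = 30, 40, 0.2
slope_sd, resid_sd = 0.5, 1.0                   # illustrative components
cond = np.tile([-0.5, 0.5], n_trials // 2)      # condition contrast
slopes = rng.normal(effect, slope_sd, n_subj)   # per-subject effects

trials = np.array([s * cond + rng.normal(0, resid_sd, n_trials)
                   for s in slopes])            # subject x trial matrix

# One mean per subject per condition: trial-level noise is averaged
# away, so a Bayes factor test on these means cannot distinguish
# residual variance from random slope variance -- the source of the
# miscalibration described above.
agg = np.stack([trials[:, cond < 0].mean(axis=1),
                trials[:, cond > 0].mean(axis=1)], axis=1)
print(agg.shape)    # (30, 2) instead of (30, 40) individual trials
```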

We show that any application of the technique of unbiased simulation becomes perfect simulation when coalescence of the two coupled Markov chains can be practically assured in advance. This happens when a fixed number of iterations is high enough that the probability of needing any more to achieve coalescence is negligible; we suggest a value of $10^{-20}$. This finding enormously increases the range of problems for which perfect simulation, which exactly follows the target distribution, can be implemented. We design a new algorithm to make practical use of the high number of iterations by producing extra perfect sample points with little extra computational effort, at a cost of a small, controllable amount of serial correlation within sample sets of about 20 points. Different sample sets remain completely independent. The algorithm includes maximal coupling for continuous processes, to bring together chains that are already close. We illustrate the methodology on a simple, two-state Markov chain and on standard normal distributions up to 20 dimensions. Our technical formulation involves a nonzero probability, which can be made arbitrarily small, that a single perfect sample point may have its place taken by a "string" of many points which are assigned weights, each equal to $\pm 1$, that sum to~$1$. A point with a weight of $-1$ is a "hole", which is an object that can be cancelled by an equivalent point that has the same value but opposite weight $+1$.
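
A sketch of the coupling idea on the two-state example mentioned above (the transition matrix is illustrative): two chains started in different states share the same uniform random numbers, so once they meet they stay together, and the probability of not having coalesced after a fixed number of iterations can be driven as low as desired.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two coupled copies of a two-state Markov chain driven by common
# random numbers; coalescence probability decays geometrically.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def coalescence_time(max_iter=1000):
    x, y = 0, 1                      # the two coupled chains
    for t in range(1, max_iter + 1):
        u = rng.random()             # common random number for both
        x = 0 if u < P[x, 0] else 1
        y = 0 if u < P[y, 0] else 1
        if x == y:
            return t
    return max_iter + 1

times = np.array([coalescence_time() for _ in range(100_000)])
# Pick a fixed iteration count n so that P(no coalescence by n) is
# negligible; the paper suggests pushing this probability below 1e-20.
for n in [5, 10, 20, 40]:
    print(n, (times > n).mean())
```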

In real applications, non-Gaussian distributions are frequently caused by outliers and impulsive disturbances, which impair the performance of the classical cubature Kalman filter (CKF) algorithm. In this letter, a modified generalized minimum error entropy criterion with fiducial point (GMEEFP) is studied to force the error to concentrate around zero, and a new CKF algorithm based on the GMEEFP criterion, called the GMEEFP-CKF algorithm, is developed. To demonstrate the practicality of the GMEEFP-CKF algorithm, several simulations are performed; the results show that the proposed GMEEFP-CKF algorithm outperforms existing CKF algorithms in the presence of impulse noise.
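
The GMEEFP criterion itself is specific to the paper; the sketch below shows only the standard cubature-point generation that every CKF variant, including GMEEFP-CKF, builds on: $2n$ equally weighted points on the $\sqrt{n}$-scaled coordinate axes, mapped through a square root of the covariance.

```python
import numpy as np

# Standard third-degree cubature points shared by all CKF variants.
def cubature_points(mean, cov):
    n = mean.size
    S = np.linalg.cholesky(cov)                 # covariance square root
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # 2n directions
    return mean[:, None] + S @ xi               # shape (n, 2n)

mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.3],
                [0.3, 1.0]])
pts = cubature_points(mean, cov)
# The sample mean/covariance of the points reproduce the inputs exactly:
print(pts.mean(axis=1))                         # ~ [1, -2]
print(np.cov(pts, bias=True))                   # ~ cov
```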

Formation control of multi-agent systems has been a prominent research topic, spanning both theoretical and practical domains, over the past two decades. Our study delves into the leader-follower framework, addressing two critical, previously overlooked aspects. First, we investigate the impact of an unknown nonlinear manifold, which adds complexity to the formation control problem. Second, we address the practical constraint of a limited follower sensing range, which makes it difficult for followers to localize the leader accurately. Our core objective is to employ Koopman operator theory and Extended Dynamic Mode Decomposition to construct a reliable prediction algorithm that lets the follower robot anticipate the leader's position effectively. Our experiments on an elliptical paraboloid manifold, using two omni-directional wheeled robots, validate the prediction algorithm's effectiveness.
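
A generic Extended Dynamic Mode Decomposition sketch (the dictionary and toy dynamics are illustrative, not the paper's robot setup): snapshot pairs are lifted with a small monomial dictionary and a linear Koopman matrix is fitted by least squares, which is then used for one-step prediction.

```python
import numpy as np

# EDMD: lift states, fit K with least squares, predict in lifted space.
def dictionary(X):                       # X: (2, m) stacked states
    x, y = X
    return np.vstack([np.ones_like(x), x, y, x*y, x**2, y**2])

rng = np.random.default_rng(0)
f = lambda s: np.array([0.9*s[0], 0.5*s[1] + 0.1*s[0]**2])  # toy dynamics

X = rng.uniform(-1, 1, (2, 500))         # snapshots
Y = f(X)                                 # successor snapshots
PsiX, PsiY = dictionary(X), dictionary(Y)
K = PsiY @ np.linalg.pinv(PsiX)          # Koopman matrix, least squares

# One-step prediction of a new state via the lifted linear model;
# the observables x and y sit in rows 1 and 2 of the dictionary.
s = np.array([0.4, -0.3])
pred = (K @ dictionary(s[:, None]))[1:3, 0]
print(pred, f(s))                        # close agreement on this toy
```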

Data complexity analysis quantifies the hardness of constructing a predictive model on a given dataset. However, the effectiveness of existing data complexity measures can be challenged by the presence of irrelevant features and feature interactions in biological microarray data. We propose a novel data complexity measure, depth, that leverages an evolution-inspired feature selection algorithm to quantify the complexity of microarray data. By examining feature subsets of varying sizes, the approach offers a novel perspective on data complexity analysis. Unlike traditional metrics, depth is robust to irrelevant features and effectively captures complexity stemming from feature interactions. On synthetic microarray data, depth outperforms existing methods in robustness to irrelevant features and in identifying complexity arising from feature interactions. Applied to case-control genotype and gene-expression microarray datasets, the results reveal that a single feature of gene-expression data can account for over 90% of the performance of a multi-feature model, confirming the adequacy of the commonly used differentially expressed gene (DEG) feature selection method for gene-expression data. Our study also demonstrates that constructing predictive models for genotype data is harder than for gene-expression data. These results provide evidence for the use of interpretable machine learning algorithms on microarray data.
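
A heavily simplified stand-in for the idea (the paper's evolutionary search is replaced here by greedy forward selection purely for brevity; the threshold, classifier, and synthetic data are all illustrative): read "depth" as the smallest feature-subset size whose cross-validated accuracy comes close to the best achievable.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Smallest subset size reaching near-best CV accuracy, as one way to
# interpret a subset-size-based complexity measure.  Greedy forward
# selection stands in for the paper's evolutionary algorithm.
def depth(X, y, tol=0.02, max_size=10):
    selected, scores = [], []
    remaining = list(range(X.shape[1]))
    for _ in range(max_size):
        best_f, best_s = max(
            ((f, cross_val_score(LogisticRegression(max_iter=1000),
                                 X[:, selected + [f]], y, cv=5).mean())
             for f in remaining), key=lambda t: t[1])
        selected.append(best_f); remaining.remove(best_f)
        scores.append(best_s)
    top = max(scores)
    return next(k + 1 for k, s in enumerate(scores) if s >= top - tol)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                   # mostly irrelevant features
y = (X[:, 3] + X[:, 17] > 0).astype(int)         # two informative features
print(depth(X, y))                               # expected: ~2
```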

We study the problem of reconstructing the Faber--Schauder coefficients of a continuous function $f$ from discrete observations of its antiderivative $F$. Our approach starts by formulating this problem through piecewise quadratic spline interpolation. We then provide a closed-form solution and an in-depth error analysis. These results lead to some surprising observations, which also shed new light on the classical topic of quadratic spline interpolation itself: they show that the well-known instabilities of this method can be located exclusively within the final generation of estimated Faber--Schauder coefficients, which suffer from non-locality and strong dependence on the initial value and the given data. By contrast, all other Faber--Schauder coefficients depend only locally on the data, are independent of the initial value, and admit uniform error bounds. We thus conclude that a robust and well-behaved estimator for our problem can be obtained by simply dropping the final generation from the estimated Faber--Schauder coefficients.
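
The sketch below is not the paper's closed-form spline solution: it estimates $f$ on a dyadic grid by difference quotients of $F$ and reads off Faber--Schauder coefficients with the standard midpoint formula $\theta_{m,k} = f\big(\tfrac{2k+1}{2^{m+1}}\big) - \tfrac12\big(f(\tfrac{k}{2^m}) + f(\tfrac{k+1}{2^m})\big)$ (normalization conventions vary). The paper's instability analysis concerns the exact reconstruction from $F$, where the final generation is the unstable part and is dropped.

```python
import numpy as np

n = 6                                    # finest dyadic generation
N = 2**n
t = np.linspace(0.0, 1.0, N + 1)

f_true = lambda s: np.sin(2 * np.pi * s)
F = -np.cos(2 * np.pi * t) / (2 * np.pi)           # observed antiderivative

# Proxy for f: difference quotients of F on the grid.
h = 1.0 / N
f_hat = np.empty(N + 1)
f_hat[1:-1] = (F[2:] - F[:-2]) / (2 * h)           # central differences
f_hat[0], f_hat[-1] = (F[1] - F[0]) / h, (F[-1] - F[-2]) / h

def fs_coeffs(vals, m):                  # generation-m midpoint formula
    step = N // 2**m                     # grid points per dyadic interval
    k = np.arange(2**m)
    left, right = vals[k * step], vals[(k + 1) * step]
    mid = vals[k * step + step // 2]
    return mid - 0.5 * (left + right)

for m in range(n):                       # generations m = 0, ..., n-1
    err = np.abs(fs_coeffs(f_hat, m) - fs_coeffs(f_true(t), m)).max()
    print(m, err)
```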

The application of deep learning to non-stationary temporal datasets can lead to overfitted models that underperform under regime changes. In this work, we propose a modular machine learning pipeline for ranking predictions on temporal panel datasets which is robust under regime changes. The modularity of the pipeline allows the use of different models, including Gradient Boosting Decision Trees (GBDTs) and Neural Networks, with and without feature engineering. We evaluate our framework on financial data for stock portfolio prediction, and find that GBDT models with dropout display high performance, robustness and generalisability with reduced complexity and computational cost. We then demonstrate how online learning techniques, which require no retraining of models, can be used post-prediction to enhance the results. First, we show that dynamic feature projection improves robustness by reducing drawdown in regime changes. Second, we demonstrate that dynamical model ensembling based on selection of models with good recent performance leads to improved Sharpe and Calmar ratios of out-of-sample predictions. We also evaluate the robustness of our pipeline across different data splits and random seeds with good reproducibility.
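
A sketch of the dynamical-ensembling step (the window length, selection rule, and metric are illustrative choices, not the paper's): at each period, average only those models whose trailing correlation with the target ranks in the top half, with no retraining involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Selection-based model ensembling using only recent performance.
T, n_models, window = 200, 5, 20
preds = rng.normal(size=(T, n_models))           # stand-in model outputs
target = preds[:, 0] * 0.5 + rng.normal(size=T)  # model 0 is informative

ensemble = np.full(T, np.nan)
for t in range(window, T):
    # trailing score of each model over the last `window` periods
    scores = np.array([np.corrcoef(preds[t-window:t, j],
                                   target[t-window:t])[0, 1]
                       for j in range(n_models)])
    keep = scores >= np.median(scores)           # select recent performers
    ensemble[t] = preds[t, keep].mean()          # no retraining involved

# Compare the selective ensemble with the equal-weight mean of all models.
sel = ~np.isnan(ensemble)
print(np.corrcoef(ensemble[sel], target[sel])[0, 1])
print(np.corrcoef(preds[sel].mean(axis=1), target[sel])[0, 1])
```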

We develop sampling algorithms to fit Bayesian hierarchical models, the computational complexity of which scales linearly with the number of observations and the number of parameters in the model. We focus on crossed random effect and nested multilevel models, which are used ubiquitously in applied sciences. The posterior dependence in both classes is sparse: in crossed random effects models it resembles a random graph, whereas in nested multilevel models it is tree-structured. For each class we identify a framework for scalable computation, building on previous work. Methods for crossed models are based on extensions of appropriately designed collapsed Gibbs samplers, where we introduce the idea of local centering; while methods for nested models are based on sparse linear algebra and data augmentation. We provide a theoretical analysis of the proposed algorithms in some simplified settings, including a comparison with previously proposed methodologies and an average-case analysis based on random graph theory. Numerical experiments, including two challenging real data analyses on predicting electoral results and real estate prices, compare with off-the-shelf Hamiltonian Monte Carlo, displaying drastic improvement in performance.
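
For orientation, here is a vanilla (non-collapsed) Gibbs sketch for the simplest crossed random effects model, with variances assumed known for brevity; the collapsed samplers with local centering proposed above improve on exactly this kind of sampler, whose mixing degrades as the factors grow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gibbs sampling for y_ij = a_i + b_j + eps_ij with known precisions.
I, J = 30, 40
tau_a = tau_b = 1.0; tau_e = 4.0                 # precisions (known here)
a_true = rng.normal(0, 1, I); b_true = rng.normal(0, 1, J)
y = a_true[:, None] + b_true[None, :] + rng.normal(0, 0.5, (I, J))

a, b = np.zeros(I), np.zeros(J)
for it in range(2000):
    # a_i | b, y: conjugate normal with precision-weighted mean
    prec = J * tau_e + tau_a
    a = rng.normal((tau_e * (y - b[None, :]).sum(axis=1)) / prec,
                   1.0 / np.sqrt(prec))
    # b_j | a, y: symmetric update for the other factor
    prec = I * tau_e + tau_b
    b = rng.normal((tau_e * (y - a[:, None]).sum(axis=0)) / prec,
                   1.0 / np.sqrt(prec))
print(np.corrcoef(a, a_true)[0, 1])              # draws track the truth
```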

By combining a logarithm transformation with a corrected Milstein-type method, the present article proposes an explicit, unconditionally boundary- and dynamics-preserving scheme for the stochastic susceptible-infected-susceptible (SIS) epidemic model, which takes values in $(0,N)$. The scheme applied to the model is first proved to have a strong convergence rate of order one. Furthermore, the dynamical behavior of the numerical approximations is analyzed, and it is shown that the scheme can unconditionally preserve both the domain and the dynamics of the model. More precisely, the proposed scheme produces numerical approximations that live in the domain $(0,N)$ and reproduce the extinction and persistence properties of the original model for any time-discretization step size $h > 0$, without any additional requirements on the model parameters. Numerical experiments are presented to verify our theoretical results.
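
To illustrate the transform idea only (the paper's exact transformation and Milstein correction term may differ; the SDE form follows Gray et al. (2011) and all parameters are illustrative): the map $y = \log\big(I/(N-I)\big)$ sends $(0,N)$ onto $\mathbb{R}$, so any step size keeps $I$ inside the domain after mapping back, and by Itô's formula the transformed diffusion coefficient $\sigma N$ is constant, making Euler--Maruyama on $y$ already of Milstein type.

```python
import numpy as np

rng = np.random.default_rng(0)

# SIS SDE: dI = [beta*I*(N-I) - (mu+gamma)*I] dt + sigma*I*(N-I) dW.
# Transformed state y = log(I/(N-I)); Ito's formula gives
#   dy = N*[beta - (mu+gamma)/(N-I) + 0.5*sigma^2*(2I - N)] dt
#        + sigma*N dW,
# with constant diffusion, so the Euler step below needs no extra
# Milstein term.  Back-transforming keeps I strictly inside (0, N).
N_pop, beta, mu, gamma, sigma = 100.0, 0.03, 0.5, 0.5, 0.01
T, h = 10.0, 0.01
steps = int(T / h)

I = 10.0
y = np.log(I / (N_pop - I))
path = np.empty(steps + 1); path[0] = I
for k in range(steps):
    drift = N_pop * (beta - (mu + gamma) / (N_pop - I)
                     + 0.5 * sigma**2 * (2.0 * I - N_pop))
    y += drift * h + sigma * N_pop * np.sqrt(h) * rng.standard_normal()
    I = N_pop / (1.0 + np.exp(-y))          # back-transform: I in (0, N)
    path[k + 1] = I
print(path.min(), path.max())               # strictly inside (0, 100)
```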
