
We propose a two-factor authentication (2FA) mechanism called 2D-2FA to address security and usability issues in existing methods. 2D-2FA has three distinguishing features: First, after a user enters a username and password on a login terminal, a unique $\textit{identifier}$ is displayed to her. She $\textit{inputs}$ the same identifier on her registered 2FA device, which ensures appropriate engagement in the authentication process. Second, a one-time PIN is computed on the device and $\textit{automatically}$ transferred to the server. Thus, the PIN can have very high entropy, making guessing attacks infeasible. Third, the identifier is also incorporated into the PIN computation, which renders $\textit{concurrent attacks}$ ineffective. Third-party services, such as push-notification providers and 2FA service providers, do not need to be trusted for the security of the system. The choice of identifiers depends on the device form factor and the context; users could choose to draw patterns, capture QR codes, etc. We provide a proof-of-concept implementation and evaluate the performance, accuracy, and usability of the system. We show that the system offers a lower error rate (about half) and better efficiency (2-3 times faster) compared to the commonly used PIN-2FA. Our study indicates a high level of usability, with a SUS score of 75, and a high perception of efficiency, security, accuracy, and adoptability.
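As a rough illustration of how the identifier can be folded into the PIN computation, here is a minimal Python sketch assuming an HMAC-based, TOTP-style derivation; the names (derive_pin, device_key) and the 30-second window are hypothetical, and the paper's actual construction may differ.

```python
# Hypothetical sketch of an identifier-bound one-time PIN in the spirit of 2D-2FA.
# The paper's exact key derivation and transport protocol are not reproduced here.
import hmac, hashlib, time, secrets

def derive_pin(device_key: bytes, identifier: str, window: int = 30) -> str:
    """Compute a high-entropy PIN bound to the displayed identifier and a time window."""
    t = int(time.time()) // window                    # coarse time step, TOTP-style
    msg = identifier.encode() + t.to_bytes(8, "big")  # bind the PIN to the session identifier
    return hmac.new(device_key, msg, hashlib.sha256).hexdigest()

# Both the device and the server can compute the PIN; because the device sends it
# automatically, it never needs to be short enough for a human to type.
key = secrets.token_bytes(32)
print(derive_pin(key, "identifier-shown-on-terminal"))
```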

Related Content

Mendelian randomization is a widely used method to estimate the unconfounded effect of an exposure on an outcome by using genetic variants as instrumental variables. Mendelian randomization analyses that use variants from a single genetic region (cis-MR) have gained popularity for being an economical way to provide supporting evidence for drug target validation. This paper proposes methods for cis-MR inference that use the explanatory power of many correlated variants to make valid inferences even in situations where those variants have only weak effects on the exposure. In particular, we exploit the highly structured nature of genetic correlations in single gene regions to reduce the dimension of genetic variants using factor analysis. These genetic factors are then used as instrumental variables to construct tests for the causal effect of interest. Since these factors may often be weakly associated with the exposure, size distortions of standard t-tests can be severe. Therefore, we consider two approaches based on conditional testing. First, we extend results of commonly used identification-robust tests to account for the use of estimated factors as instruments. Second, we propose a test which appropriately adjusts for first-stage screening of genetic factors based on their relevance. Our empirical results provide genetic evidence to validate cholesterol-lowering drug targets aimed at preventing coronary heart disease.
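To make the two-stage idea concrete, the sketch below builds correlated synthetic variants, extracts factors from their correlation structure by eigendecomposition, and plugs the factors into a textbook two-stage least squares fit. It is a simplified stand-in for the paper's estimator and does not implement the identification-robust or screening-adjusted tests.

```python
# Illustrative sketch (not the paper's exact estimator): reduce correlated variants
# to a few genetic factors, then use those factors as instruments in a 2SLS fit.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 2000, 50, 3                         # samples, correlated variants, retained factors
Sigma = 0.9 * np.ones((p, p)) + 0.1 * np.eye(p)   # strong LD-like correlation structure
G = rng.normal(size=(n, p)) @ np.linalg.cholesky(Sigma).T

U = rng.normal(size=n)                                            # unmeasured confounder
X = G @ rng.normal(scale=0.05, size=p) + U + rng.normal(size=n)   # weakly instrumented exposure
Y = 0.3 * X + U + rng.normal(size=n)                              # outcome; true causal effect 0.3

# Factor extraction from the genetic correlation matrix via eigendecomposition.
evals, evecs = np.linalg.eigh(np.corrcoef(G, rowvar=False))
F = G @ evecs[:, -k:]                         # top-k genetic factors as instruments

# Two-stage least squares using the factors as instruments.
X_hat = F @ np.linalg.lstsq(F, X, rcond=None)[0]
beta_iv = np.linalg.lstsq(X_hat.reshape(-1, 1), Y, rcond=None)[0][0]
print(f"2SLS estimate of the causal effect: {beta_iv:.3f}")
```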

Blades manufactured through flank and point milling will likely exhibit geometric variability. Gauging the aerodynamic repercussions of such variability, prior to manufacturing a component, is challenging enough, let alone predicting the amplified impact of any in-service degradation. While rules of thumb that govern the tolerance band can be devised based on expected boundary layer characteristics at known regions and levels of degradation, it remains a challenge to translate these insights into quantitative bounds for manufacturing. In this work, we tackle this challenge by leveraging ideas from dimension reduction to construct low-dimensional representations of aerodynamic performance metrics. These low-dimensional models can identify a subspace that contains designs that are invariant in performance -- the inactive subspace. By sampling within this subspace, we design techniques for drafting manufacturing tolerances and for quantifying whether a scanned component should be used or scrapped. We introduce the blade envelope as a computational manufacturing guide for a blade that is also amenable to qualitative visualization. In this paper, the first of two parts, we discuss its underlying concept and detail its computational methodology, assuming one is interested only in the single objective of ensuring that the loss of all manufactured blades remains constant. To demonstrate the utility of our ideas, we devise a series of computational experiments with the Von Karman Institute's LS89 turbine blade.
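A minimal sketch of the inactive-subspace sampling idea is given below; the gradient covariance matrix, dimensions, and perturbation scale are placeholders, and the paper's blade-envelope construction adds constraints that this toy version omits.

```python
# Minimal sketch of sampling the inactive subspace (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
d, m = 20, 2                                   # design dimension, active-subspace dimension
A = rng.normal(size=(d, d))
C = A @ A.T                                    # stand-in for the gradient covariance matrix
_, W = np.linalg.eigh(C)                       # eigenvalues ascending
W_active, W_inactive = W[:, -m:], W[:, :-m]    # split eigenvectors by eigenvalue magnitude

x_nominal = np.zeros(d)                        # nominal blade design (normalized coordinates)
# Perturb only along inactive directions: the performance metric whose gradients
# define C should stay approximately constant for these candidate geometries.
samples = x_nominal + (W_inactive @ rng.normal(scale=0.1, size=(d - m, 50))).T

# The active coordinates of every sample match the nominal design.
assert np.allclose(samples @ W_active, x_nominal @ W_active)
print(samples.shape)                           # 50 geometries inside the tolerance "envelope"
```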

A central problem in the study of human mobility is that of migration systems. Typically, migration systems are defined as a set of relatively stable movements of people between two or more locations over time. While these emergent systems are expected to vary over time, they ideally contain a stable underlying structure that could be discovered empirically. There have been some notable attempts to formally or informally define migration systems; however, they have been limited by being hard to operationalize and by defining migration systems in ways that ignore origin/destination aspects and/or fail to account for migration dynamics. In this work we propose a novel method, spatio-temporal (ST) tensor co-clustering, stemming from signal processing and machine learning theory. To demonstrate its effectiveness for describing stable migration systems, we focus on domestic migration between counties in the US from 1990 to 2018. Relevant data for this period has been made available through the US Internal Revenue Service. Specifically, we concentrate on three illustrative case studies: (i) US Metropolitan Areas, (ii) the state of California, and (iii) Louisiana, focusing on detecting exogenous events such as Hurricane Katrina in 2005. Finally, we conclude with a discussion of this approach and its limitations.
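As a rough stand-in for the tensor method, the sketch below co-clusters an origin-by-destination flow matrix aggregated over the time mode using scikit-learn's SpectralCoclustering; the flow data are synthetic, and the paper's spatio-temporal tensor co-clustering treats the time mode explicitly rather than collapsing it.

```python
# Rough stand-in (not the paper's tensor method): co-cluster an origin x destination
# flow matrix, aggregated over years, to surface stable migration "systems".
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(2)
n_counties, n_years = 60, 29                              # e.g. 1990-2018
flows = rng.poisson(5, size=(n_counties, n_counties, n_years)).astype(float)

aggregated = flows.sum(axis=2)                            # collapse the time mode
model = SpectralCoclustering(n_clusters=4, random_state=0).fit(aggregated)
print(model.row_labels_[:10])                             # origin memberships
print(model.column_labels_[:10])                          # destination memberships
```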

Real-time analysis of bio-heat transfer is very beneficial for improving clinical outcomes of hyperthermia and thermal ablative treatments, but it is challenging to achieve due to large computational costs. This paper presents a fast numerical algorithm well suited for real-time solutions of bio-heat transfer. It achieves real-time computation via (i) computationally efficient explicit dynamics in the temporal domain, (ii) element-level thermal load computation, (iii) computationally efficient finite elements, (iv) an explicit formulation for the unknown nodal temperature, and (v) pre-computation of constant simulation matrices and parameters, all of which lead to a significant reduction in computation time. The proposed methodology considers temperature-dependent thermal properties to capture the nonlinear characteristics of bio-heat transfer in soft tissue. Utilising parallel execution, the proposed method reduces computation time by factors of 107.71 and 274.57 compared to the commercial finite element codes with and without parallelisation, respectively, when temperature-dependent thermal properties are considered, and by factors of 303.07 and 772.58 when temperature-independent thermal properties are considered. This far exceeds the computational performance of the commercial finite element codes and presents great potential for real-time predictive analysis of tissue temperature in the planning, optimisation, and evaluation of thermo-therapeutic treatments. The source code is available at //github.com/jinaojakezhang/FEDFEMBioheat.
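The explicit update at the heart of such solvers can be illustrated with a one-dimensional Pennes-type toy problem; the sketch below uses forward-Euler time stepping with placeholder tissue parameters and is not the paper's three-dimensional finite element code.

```python
# Minimal 1-D sketch of an explicit bio-heat update (Pennes-type model, forward Euler).
import numpy as np

n, L = 101, 0.05                       # nodes, tissue depth [m]
dx = L / (n - 1)
k, rho, c = 0.5, 1050.0, 3600.0        # conductivity, density, specific heat (soft tissue)
w_b, rho_b, c_b, T_a = 0.5e-3, 1060.0, 3600.0, 37.0   # perfusion terms, arterial temperature
dt = 0.4 * rho * c * dx**2 / (2 * k)   # comfortably inside the explicit stability limit

T = np.full(n, 37.0)                   # initial and boundary temperature [deg C]
q = np.zeros(n)
q[n // 2] = 5e6                        # localized heat source [W/m^3], e.g. an ablation probe

for _ in range(2000):
    lap = np.zeros(n)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2        # second-difference stencil
    T[1:-1] += dt / (rho * c) * (k * lap[1:-1]
                                 + w_b * rho_b * c_b * (T_a - T[1:-1])
                                 + q[1:-1])                   # explicit nodal update
print(f"peak tissue temperature: {T.max():.1f} deg C")
```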

We study the asymptotic normality of two estimators of the integrated volatility of volatility based on the Fourier methodology, which does not require the pre-estimation of the spot volatility. We show that the bias-corrected estimator reaches the optimal rate 1/4, while the estimator without bias-correction has a slower convergence rate and a smaller asymptotic variance. Additionally, we provide simulation results that support the theoretical asymptotic distribution of the rate-efficient estimator and show the accuracy of the Fourier estimator in comparison with a rate-optimal estimator based on the pre-estimation of the spot volatility. Finally, we reconstruct the daily volatility of volatility of the S&P500 and EUROSTOXX50 indices over long samples via the rate-optimal Fourier estimator and provide novel insight into the existence of stylized facts about its dynamics.
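For orientation, the following is a structural sketch of the two-pass Fourier (Malliavin-Mancino) construction on the time interval rescaled to $[0, 2\pi]$, where $\eta$ denotes the spot volatility of volatility; the normalizations shown are illustrative, and the paper specifies the exact truncation rules and the bias correction that yields the rate-optimal estimator.

```latex
% First line: Fourier coefficients of the observed price increments.
% Second line: Bohr convolution recovering the Fourier coefficients of the volatility.
% Third line: the same device applied once more, giving the integrated vol-of-vol
% (boundary terms and the bias correction are omitted from this sketch).
\begin{align*}
  c_k(dp) &:= \frac{1}{2\pi}\sum_{j} e^{-\mathrm{i}k t_j}\,\bigl(p(t_{j+1})-p(t_j)\bigr), \\
  c_k(\sigma^2) &\approx \frac{2\pi}{2N+1}\sum_{|s|\le N} c_s(dp)\,c_{k-s}(dp), \\
  \widehat{\textstyle\int_0^{2\pi}\eta^2(t)\,dt} &\approx
      \frac{(2\pi)^2}{2M+1}\sum_{|k|\le M} c_k(d\sigma^2)\,c_{-k}(d\sigma^2),
      \qquad c_k(d\sigma^2) \approx \mathrm{i}k\,c_k(\sigma^2).
\end{align*}
```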

We give an exact characterization of admissibility in statistical decision problems in terms of Bayes optimality in a so-called nonstandard extension of the original decision problem, as introduced by Duanmu and Roy. Unlike the consideration of improper priors or other generalized notions of Bayes optimality, the nonstandard extension is distinguished, in part, by having priors that can assign "infinitesimal" mass in a sense that can be made rigorous using results from nonstandard analysis. With these additional priors, we find that, informally speaking, a decision procedure $\delta_0$ is admissible in the original statistical decision problem if and only if, in the nonstandard extension of the problem, the nonstandard extension of $\delta_0$ is Bayes optimal among the extensions of standard decision procedures with respect to a nonstandard prior that assigns at least infinitesimal mass to every standard parameter value. We use the above theorem to give further characterizations of admissibility: one related to Blyth's method; one related to a condition due to Stein that characterizes admissibility under some regularity assumptions; and, finally, one using finitely additive priors in decision problems meeting certain regularity requirements. Our results imply that Blyth's method is a sound and complete method for establishing admissibility. Buoyed by this result, we revisit the univariate two-sample common-mean problem, and show that the Graybill--Deal estimator is admissible among a certain class of unbiased decision procedures.

Application virtual memory footprints are growing rapidly in all systems, from servers to smartphones. To address this growing demand, system integrators are incorporating larger amounts of main memory, warranting a rethinking of memory management. In current systems, applications produce page faults whenever they access virtual memory regions that are not backed by a physical page. As application memory footprints grow, they induce more and more minor page faults. Handling each minor page fault can take a few thousand CPU cycles and blocks the application until the OS kernel finds a free physical frame. These page faults can be detrimental to performance when they occur frequently and are spread across the application's run-time. Our evaluation of several workloads indicates an overhead due to minor page faults as high as 29% of execution time. In this paper, we propose to mitigate this problem through a HW/SW co-design approach. Specifically, we first propose to parallelize portions of kernel page allocation so that they run ahead of fault time in a separate thread. Then we propose the Minor Fault Offload Engine (MFOE), a per-core HW accelerator for minor fault handling. MFOE is equipped with a pre-allocated frame table that it uses to service a page fault. On a page fault, MFOE quickly picks a pre-allocated page frame from this table, makes an entry for it in the TLB, and updates the page table entry to satisfy the page fault. The pre-allocated frame tables are periodically refreshed by a background thread, which also updates the data structures in the kernel to account for the handled page faults. We evaluate this system in the gem5 simulator with a modified Linux kernel running on top of simulated hardware. Our results show that MFOE improves the critical-path fault-handling latency by 37x and improves run-time across the evaluated applications by an average of 7.5%.
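The fast path can be pictured with a small software analogue; the sketch below is purely conceptual (the real MFOE is a per-core hardware unit that fills TLB and page-table entries directly), and the class and method names are hypothetical.

```python
# Conceptual simulation of MFOE's minor-fault fast path (illustrative only).
from collections import deque
from itertools import count

_frame_numbers = count()                    # stand-in for the kernel's frame allocator

class MinorFaultOffloadEngine:
    def __init__(self, prealloc_target: int = 64):
        self.prealloc_target = prealloc_target
        self.free_frames = deque()          # frame table filled ahead of fault time
        self.page_table = {}                # virtual page -> physical frame
        self.refill()

    def refill(self):
        """Background thread: periodically top up the pre-allocated frame table."""
        while len(self.free_frames) < self.prealloc_target:
            self.free_frames.append(next(_frame_numbers))

    def handle_minor_fault(self, vpage: int):
        """Fast path: grab a pre-allocated frame and install the mapping."""
        if not self.free_frames:            # table exhausted -> fall back to the kernel
            return None
        frame = self.free_frames.popleft()
        self.page_table[vpage] = frame      # in hardware, the TLB entry is filled here too
        return frame

mfoe = MinorFaultOffloadEngine()
print(mfoe.handle_minor_fault(vpage=0x7f32a1))   # served without trapping into the kernel
```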

This paper describes the development of the Microsoft XiaoIce system, the most popular social chatbot in the world. XiaoIce is uniquely designed as an AI companion with an emotional connection to satisfy the human need for communication, affection, and social belonging. We take into account both intelligence quotient (IQ) and emotional quotient (EQ) in system design, cast human-machine social chat as decision-making over Markov Decision Processes (MDPs), and optimize XiaoIce for long-term user engagement, measured in expected Conversation-turns Per Session (CPS). We detail the system architecture and key components, including the dialogue manager, core chat, skills, and an empathetic computing module. We show how XiaoIce dynamically recognizes human feelings and states, understands user intents, and responds to user needs throughout long conversations. Since its release in 2014, XiaoIce has communicated with over 660 million users and succeeded in establishing long-term relationships with many of them. Analysis of large-scale online logs shows that XiaoIce has achieved an average CPS of 23, which is significantly higher than that of other chatbots and even human conversations.

In this paper, a new video classification methodology is proposed which can be applied to both first- and third-person videos. The main idea behind the proposed strategy is to capture complementary appearance and motion information efficiently by running two independent streams over the videos. The first stream captures long-term motion from short-term motion by tracking how elements of the optical-flow images change over time. The optical-flow images are described by networks pre-trained on large-scale image datasets. A set of multi-channel time series is obtained by aligning these descriptors side by side. Motion features are extracted from these time series using the PoT representation together with a novel pooling operator, chosen for its several advantages. The second stream extracts appearance features, which are vital for video classification. The proposed method has been evaluated on both first- and third-person datasets, and the results show that it reaches state-of-the-art performance.
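Schematically, the pipeline can be pictured as follows; the arrays below are numpy stand-ins for the pre-trained CNN descriptors, and the simple pooling operators are placeholders for the PoT representation and the paper's novel pooling operator.

```python
# Schematic of the two-stream pipeline with synthetic stand-in descriptors.
import numpy as np

rng = np.random.default_rng(3)
n_frames, feat_dim = 120, 512

# Motion stream: per-frame descriptors of optical-flow images from a pre-trained CNN,
# aligned side by side to form a multi-channel time series (feat_dim channels).
flow_series = rng.normal(size=(n_frames, feat_dim))

# Simple temporal pooling over each channel (a stand-in for PoT plus the novel operator).
motion_feature = np.concatenate([
    flow_series.max(axis=0),
    flow_series.mean(axis=0),
    np.abs(np.diff(flow_series, axis=0)).sum(axis=0),   # how much each channel changed
])

# Appearance stream: a pooled descriptor of the RGB frames themselves.
appearance_feature = rng.normal(size=(n_frames, feat_dim)).mean(axis=0)

video_descriptor = np.concatenate([motion_feature, appearance_feature])
print(video_descriptor.shape)   # fed to a classifier in the final step
```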

Robust estimation is much more challenging in high dimensions than it is in one dimension: most techniques either lead to intractable optimization problems or to estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial-time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and make high-dimensional robust estimation a realistic possibility.
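The flavor of such algorithms can be conveyed with a simplified filtering-style robust mean estimator; the thresholds below are ad hoc, and this toy version omits the careful parameter choices and guarantees of the actual algorithms.

```python
# Simplified sketch of a filtering-style robust mean estimator (illustrative only).
import numpy as np

def filtered_mean(X: np.ndarray, eps: float = 0.1, n_iter: int = 10) -> np.ndarray:
    X = X.copy()
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)
        if evals[-1] < 1 + 3 * eps:               # covariance already looks clean
            break
        scores = ((X - mu) @ evecs[:, -1]) ** 2   # squared projections on the top eigenvector
        keep = scores < np.quantile(scores, 1 - eps / 2)
        X = X[keep]                               # filter out the most suspicious points
    return X.mean(axis=0)

rng = np.random.default_rng(4)
d = 50
clean = rng.normal(size=(4500, d))                # true mean is the origin
outliers = rng.normal(loc=2.0, size=(500, d))     # 10% corruption
data = np.vstack([clean, outliers])
print(np.linalg.norm(data.mean(axis=0)))          # naive mean is pulled away from 0
print(np.linalg.norm(filtered_mean(data)))        # filtered mean stays much closer to 0
```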
