Blockchain is a promising technology to enable distributed and reliable data sharing at the network edge. The high security of blockchain is undoubtedly a critical factor for a network handling important data items. On the other hand, owing to the inherent dilemma of blockchain, an overemphasis on distributed security leads to poor transaction-processing capability, which limits the application of blockchain in data sharing scenarios with high-throughput and low-latency requirements. To enable demand-oriented distributed services, this paper investigates the relationship between capability and security in blockchain from the perspective of block propagation and the forking problem. First, a Markov chain is introduced to analyze gossip-based block propagation among edge servers and to derive the block propagation delay and forking probability. Then, we study the impact of forking on blockchain capability and security metrics, in terms of transaction throughput, confirmation delay, fault tolerance, and the probability of malicious modification. The analytical results show that by adjusting the block generation time or block size, transaction throughput improves at the sacrifice of fault tolerance, and vice versa. Meanwhile, the decline in security can be offset by adjusting the confirmation threshold, at the cost of increased confirmation delay. This analysis of the capability-security trade-off provides a theoretical guideline for managing blockchain performance according to the requirements of the data sharing scenario.
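
As a rough illustration of the trade-off described above, the sketch below estimates the forking probability from the block propagation delay and turns it into an effective throughput figure. It assumes Poisson block arrivals and a single aggregate propagation delay, a common simplification that is not necessarily the paper's Markov-chain model; all parameter values are hypothetical.

```python
import math

def fork_probability(propagation_delay_s, block_interval_s):
    """Probability that a competing block is generated while a fresh
    block is still propagating, assuming Poisson block arrivals."""
    return 1.0 - math.exp(-propagation_delay_s / block_interval_s)

def effective_throughput(txs_per_block, block_interval_s, p_fork):
    """Transactions per second on the main chain; transactions in
    forked (orphaned) blocks are wasted."""
    return txs_per_block * (1.0 - p_fork) / block_interval_s

# Hypothetical numbers: 10 s propagation, 60 s block interval, 4000 tx/block.
p = fork_probability(10.0, 60.0)
print(f"fork probability ~ {p:.3f}")
print(f"throughput ~ {effective_throughput(4000, 60.0, p):.1f} tx/s")
```

Shortening the block interval raises raw throughput but also raises the fork probability, which is exactly the capability-security tension the paper analyzes.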

A blockchain is a distributed database system maintained by participating nodes. Its defining properties are that records can be neither altered nor forged, and it can also be understood as a ledger system. It is a core concept of Bitcoin: a complete copy of the Bitcoin blockchain records every transaction of its tokens, and from this information one can determine the value held by any address at any point in history.
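
To make the ledger analogy concrete, here is a minimal sketch of a hash-chained ledger; it illustrates only the tamper-evidence property (no consensus, networking, or signatures).

```python
import hashlib, json, time

def block_hash(block):
    """Deterministic SHA-256 over the block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class Ledger:
    def __init__(self):
        genesis = {"index": 0, "timestamp": time.time(),
                   "transactions": [], "prev_hash": "0" * 64}
        self.chain = [genesis]

    def append_block(self, transactions):
        prev = self.chain[-1]
        self.chain.append({"index": prev["index"] + 1,
                           "timestamp": time.time(),
                           "transactions": transactions,
                           "prev_hash": block_hash(prev)})

    def is_valid(self):
        """Tampering with any block breaks every later prev_hash link."""
        return all(self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.append_block([{"from": "A", "to": "B", "amount": 5}])
print(ledger.is_valid())  # True until any past block is modified
```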


We report an improvement over the conventional Echo State Network (ESN) across three benchmark chaotic time-series prediction tasks using fruit fly connectome data alone. We also investigate the impact of key connectome-derived structural features on prediction performance -- uniquely bridging neurobiological structure and machine-learning function -- and find that both increasing the global average clustering coefficient and modifying the positions of weights -- by permuting their synapse-synapse partners -- can lead to increased model variance and, in some cases, degraded performance. In all, we consider four topological point modifications to a connectome-derived ESN reservoir (the null model): we alter the network sparsity, re-draw nonzero weights from a uniform distribution, permute nonzero weight positions, and increase the network's global average clustering coefficient. We compare the four resulting ESN model classes -- and the null model -- with a conventional ESN by conducting time-series prediction experiments on size-variants of the Mackey-Glass 17 (MG-17), Lorenz, and Rossler chaotic time series, recording each model's performance and variance across train-validate trials.
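
For readers unfamiliar with ESNs, the following is a minimal sketch of a conventional ESN with a random sparse reservoir and a ridge-regression readout. The dimensions and the sine-wave stand-in for the MG-17 data are hypothetical; the paper's connectome-derived weights are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper instead derives W from the fly connectome.
N_RES, N_IN, SPECTRAL_RADIUS, LEAK = 300, 1, 0.9, 0.3

W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-1, 1, (N_RES, N_RES)) * (rng.random((N_RES, N_RES)) < 0.05)
W *= SPECTRAL_RADIUS / max(abs(np.linalg.eigvals(W)))  # echo state property

def run_reservoir(u):
    """Collect leaky-integrator reservoir states for input sequence u."""
    x, states = np.zeros(N_RES), []
    for u_t in u:
        pre = W_in @ np.atleast_1d(u_t) + W @ x
        x = (1 - LEAK) * x + LEAK * np.tanh(pre)
        states.append(x.copy())
    return np.array(states)

# Ridge-regression readout trained to predict the next value.
u = np.sin(np.linspace(0, 60, 3000))          # stand-in for MG-17 data
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N_RES), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```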

Attackers may attempt to exploit Internet of Things (IoT) devices to operate them unduly as well as to gather the legitimate device owners' personal data. Vulnerability Assessment and Penetration Testing (VAPT) sessions help to verify the effectiveness of the adopted security measures. However, VAPT targeted at IoT devices remains an open research challenge due to the variety of target technologies and the creativity it may require. Therefore, this article aims at guiding penetration testers through VAPT sessions on IoT devices by means of a new cyber Kill Chain (KC) termed PETIoT. Several practical applications of PETIoT confirm that it is general, while its main novelty lies in the combination of attack and defence steps. PETIoT is demonstrated on a relevant example, the best-selling IP camera on Amazon Italy, the TAPO C200 by TP-Link, assuming an attacker who sits on the same network as the device in order to assess all of the device's network interfaces. Additional knowledge is generated in the form of three zero-day vulnerabilities found and practically exploited on the camera, one with High severity and the other two with Medium severity according to the CVSS standard: camera Denial of Service (DoS), motion detection breach, and video stream breach. The application of PETIoT culminates in a proof-of-concept home-made fix, based on an inexpensive Raspberry Pi 4 Model B device, for the last vulnerability. Ultimately, our responsible disclosure with the camera vendor led to the release of a firmware update that fixes all found vulnerabilities, confirming that PETIoT has valid impact in real-world scenarios.
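
As a flavour of the reconnaissance step with which such a kill chain typically begins, the sketch below performs a minimal TCP connect scan. This is an illustration only (PETIoT prescribes a process, not a specific tool), the address is hypothetical, and scanning must only ever target devices you are authorized to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Minimal TCP connect scan for the reconnaissance step of a VAPT
    session. Only scan devices you are authorized to test."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the port accepted
                open_ports.append(port)
    return open_ports

# Hypothetical camera address; 554 is RTSP, common on IP cameras.
print(scan_ports("192.168.1.50", [80, 443, 554, 8800]))
```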

In day-ahead electricity markets based on uniform marginal pricing, small variations in the offering and bidding curves may substantially modify the resulting market outcomes. In this work, we deal with the problem of finding the optimal offering curve for a risk-averse profit-maximizing generating company (GENCO) in a data-driven context. In particular, a large GENCO's market share may imply that her offering strategy can alter the marginal price formation, which can be used to increase profit. We tackle this problem from a novel perspective. First, we propose an optimization-based methodology to summarize each GENCO's step-wise supply curves into a subset of representative price-energy blocks. Then, the relationship between the market price and the resulting energy block offering prices is modeled through a Bayesian linear regression approach, which also allows us to generate stochastic scenarios for the sensitivity of the market towards the GENCO's strategy, represented by the probability distributions of the regression coefficients. Finally, this predictive model is embedded in the stochastic optimization model by employing a constraint-learning approach. Results show how allowing the GENCO to deviate from her true marginal costs produces significant changes in her profits and in the market marginal price. Furthermore, these results have also been tested in an out-of-sample validation setting, showing that this optimal offering strategy is also effective in a real-world market context.
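
The sketch below illustrates the Bayesian linear regression step on toy data: a conjugate Gaussian model yields a posterior over the price-sensitivity coefficients, from which scenarios can be sampled for the downstream stochastic program. All numbers, and the known-noise-variance assumption, are ours rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: market clearing price vs. a GENCO's block offer prices.
n, d = 200, 3
X = rng.normal(40, 10, (n, d))              # offer prices of 3 energy blocks
beta_true = np.array([0.4, 0.2, 0.1])
y = X @ beta_true + rng.normal(0, 2, n)     # clearing price

# Conjugate Bayesian linear regression with known noise variance sigma2
# and Gaussian prior beta ~ N(0, tau2 * I).
sigma2, tau2 = 4.0, 10.0
precision = X.T @ X / sigma2 + np.eye(d) / tau2
cov = np.linalg.inv(precision)
mean = cov @ X.T @ y / sigma2

# Stochastic scenarios for the market's sensitivity to the offer prices,
# ready to be embedded in the stochastic offering model.
scenarios = rng.multivariate_normal(mean, cov, size=500)
print("posterior mean:", mean.round(3))
print("5%-95% band for beta_1:",
      np.percentile(scenarios[:, 0], [5, 95]).round(3))
```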

Hybrid ventilation (coupling natural and mechanical ventilation) is an energy-efficient solution for providing fresh air in most climates, provided it has a reliable control system. To operate such systems optimally, a high-fidelity control-oriented model is required. It should enable near-real-time forecasts of the indoor air temperature and humidity based on operational conditions such as window opening and HVAC schedules. However, widely used physics-based simulation models (i.e., white-box models) are labour-intensive and computationally expensive. Alternatively, black-box models based on artificial neural networks can be trained to be good estimators of building dynamics. This paper investigates the capabilities of a multivariate multi-head attention-based long short-term memory (LSTM) encoder-decoder neural network to predict the indoor air conditions of a building equipped with hybrid ventilation. The deep neural network is used to predict indoor air temperature dynamics as windows are opened and closed. Training and test data were generated from a detailed multi-zone office building model (EnergyPlus). The deep neural network is able to accurately predict the indoor air temperature of five zones whenever a window is opened or closed.
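
A minimal sketch of such an architecture is given below, using Keras: an LSTM encoder, an LSTM decoder unrolled over the forecast horizon, and a multi-head attention layer over the encoder states. The input/output shapes and layer sizes are hypothetical and not taken from the paper.

```python
import tensorflow as tf

# Hypothetical shapes: 24 past steps of 8 features (zone temps, humidity,
# window/HVAC schedules) -> 6 future temperatures for 5 zones.
N_PAST, N_FEAT, N_FUTURE, N_ZONES = 24, 8, 6, 5

enc_in = tf.keras.Input(shape=(N_PAST, N_FEAT))
enc_seq, h, c = tf.keras.layers.LSTM(64, return_sequences=True,
                                     return_state=True)(enc_in)

# Decoder unrolled over the forecast horizon, attending to encoder states.
dec_in = tf.keras.layers.RepeatVector(N_FUTURE)(h)
dec_seq = tf.keras.layers.LSTM(64, return_sequences=True)(
    dec_in, initial_state=[h, c])
attn = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=16)(
    query=dec_seq, value=enc_seq)
out = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(N_ZONES))(attn)

model = tf.keras.Model(enc_in, out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```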

The potential of Model Predictive Control in buildings has been shown many times, being successfully used to achieve various goals, such as minimizing energy consumption or maximizing thermal comfort. However, mass deployment has thus far failed, in part because of the high engineering cost of obtaining and maintaining a sufficiently accurate model. This can be addressed by using adaptive data-driven approaches. The idea of using behavioral systems theory for this purpose has recently found traction in the academic community. In this study, we compare variations thereof with different amounts of data used, different regularization weights, and different methods of data selection. Autoregressive models with exogenous inputs (ARX) are used as a well-established reference. All methods are evaluated by performing iterative system identification on two long-term data sets from real occupied buildings, neither of which includes artificial excitation for the purpose of system identification. We find that: (1) Sufficient prediction accuracy is achieved with all methods. (2) The ARX models perform slightly better, while having the additional advantages of fewer tuning parameters and faster computation. (3) Adaptive and non-adaptive schemes perform similarly. (4) The regularization weights of the behavioral systems theory methods show the expected trade-off characteristic with an optimal middle value. (5) Using the most recent data yields better performance than selecting data with similar weather as the day to be predicted. (6) More data improves the model performance.
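
For reference, the ARX baseline mentioned above can be identified with ordinary least squares, as in the following sketch on a toy first-order thermal system; the behavioral (data-driven) variants compared in the study are not reproduced here.

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares ARX(na, nb) identification:
    y[k] = a1*y[k-1] + ... + a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb].
    A minimal sketch of the reference method only."""
    k0 = max(na, nb)
    Phi = np.column_stack(
        [y[k0 - i:len(y) - i] for i in range(1, na + 1)] +
        [u[k0 - i:len(u) - i] for i in range(1, nb + 1)])
    theta, *_ = np.linalg.lstsq(Phi, y[k0:], rcond=None)
    return theta

# Toy first-order thermal system: y[k] = 0.9*y[k-1] + 0.1*u[k-1] + noise.
rng = np.random.default_rng(2)
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.9 * y[k - 1] + 0.1 * u[k - 1] + 0.01 * rng.normal()
print(fit_arx(y, u).round(3))  # roughly [0.9, 0, 0.1, 0]
```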

Block-based programming languages like Scratch are increasingly popular for programming education and end-user programming. Recent program analyses build on the insight that source code can be modelled using techniques from natural language processing. Many of the regularities of source code that support this approach are due to the syntactic overhead imposed by textual programming languages. This syntactic overhead, however, is precisely what block-based languages remove in order to simplify programming. Consequently, it is unclear how well this modelling approach performs on block-based programming languages. In this paper, we investigate the applicability of language models to the popular block-based programming language Scratch. We model Scratch programs using n-gram models, the most basic type of language model, and transformers, a popular deep learning model. Evaluation on the example tasks of code completion and bug finding confirms that blocks inhibit predictability, but that the use of language models is nevertheless feasible. Our findings serve as a foundation for improving tooling and analyses for block-based languages.
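
A count-based n-gram model of the kind used here fits in a few lines; the sketch below trains one on hypothetical Scratch block sequences and uses it for code completion (the transformer baseline is not shown).

```python
from collections import Counter, defaultdict

def train_ngram(token_seqs, n=3):
    """Count-based n-gram model over block sequences (a sketch)."""
    counts = defaultdict(Counter)
    for seq in token_seqs:
        padded = ["<s>"] * (n - 1) + seq + ["</s>"]
        for i in range(len(padded) - n + 1):
            ctx, nxt = tuple(padded[i:i + n - 1]), padded[i + n - 1]
            counts[ctx][nxt] += 1
    return counts

def complete(counts, context):
    """Code completion: most likely next block given the context."""
    dist = counts.get(tuple(context))
    return dist.most_common(1)[0][0] if dist else None

# Hypothetical Scratch block sequences.
programs = [["whenFlagClicked", "forever", "moveSteps", "ifOnEdgeBounce"],
            ["whenFlagClicked", "forever", "moveSteps", "turnDegrees"]]
model = train_ngram(programs, n=3)
print(complete(model, ["whenFlagClicked", "forever"]))  # 'moveSteps'
```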

With the advent of 5G commercialization, the need for more reliable, faster, and more intelligent telecommunication systems is envisaged for the next generation of beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are not just immensely popular in service-layer applications but have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, most of the existing surveys on B5G security focus on the performance and accuracy of AI/ML models, while often overlooking the accountability and trustworthiness of the models' decisions. Explainable AI (XAI) methods are promising techniques that allow system developers to identify the internal workings of AI/ML black-box models. The goal of using XAI in the B5G security domain is to make the decision-making processes of security systems transparent and comprehensible to stakeholders, making the systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as the RAN, zero-touch network management, and E2E slicing, together with the use cases that general users would ultimately enjoy. Furthermore, we present the lessons learned from recent efforts and future research directions on top of currently conducted projects involving XAI.
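
As a small taste of what XAI offers in this setting, the sketch below applies a simple model-agnostic explanation technique (permutation feature importance) to a toy intrusion classifier; the feature names and data are hypothetical, and the survey covers far richer methods (e.g., SHAP, LIME) not shown here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
features = ["pkt_rate", "mean_pkt_size", "conn_duration", "dst_entropy"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # 1 = flagged as malicious

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)

# Ranking features by how much shuffling them hurts accuracy makes the
# black-box classifier's behaviour auditable for security stakeholders.
for name, imp in sorted(zip(features, result.importances_mean),
                        key=lambda p: -p[1]):
    print(f"{name:15s} {imp:.3f}")
```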

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence the estimation error can be substantial. As such, we propose an alternative approach to constructing the estimators such that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and the proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural-network-based models, under different simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
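
To illustrate why ignoring confounding biases the estimate, the sketch below compares a naive difference-in-means estimator with standard inverse propensity weighting on simulated lending data; this is a textbook correction shown for illustration only, not the estimators proposed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 10_000
credit_score = rng.normal(size=n)                               # confounder
approve = rng.binomial(1, 1 / (1 + np.exp(-2 * credit_score)))  # decision
repay = 0.5 * credit_score + 0.3 * approve + rng.normal(0, 0.5, n)

# Naive estimator ignores that good-score borrowers get approved more often.
naive = repay[approve == 1].mean() - repay[approve == 0].mean()

# IPW: reweight by the estimated probability of the received decision.
ps = LogisticRegression().fit(credit_score[:, None], approve).predict_proba(
    credit_score[:, None])[:, 1]
ipw = np.mean(approve * repay / ps) - np.mean((1 - approve) * repay / (1 - ps))

print(f"naive: {naive:.3f}  IPW: {ipw:.3f}  true effect: 0.300")
```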

The concept of the smart grid has been introduced as a new vision of the conventional power grid to find an efficient way of integrating green and renewable energy technologies. Along these lines, the Internet-connected smart grid, also called the energy Internet, is emerging as an innovative approach to make energy available anywhere at any time. The ultimate goal of these developments is to build a sustainable society. However, integrating and coordinating a large number of growing connections can be a challenging issue for the traditional centralized grid system. Consequently, the smart grid is undergoing a transformation from its centralized form to a decentralized topology. Meanwhile, blockchain has some excellent features that make it a promising application for the smart grid paradigm. In this paper, we aim to provide a comprehensive survey of the application of blockchain in the smart grid. To this end, we identify the significant security challenges of smart grid scenarios that can be addressed by blockchain. Then, we present a number of recent blockchain-based research works from the literature that address security issues in the smart grid area. We also summarize several related practical projects, trials, and products that have emerged recently. Finally, we discuss essential research challenges and future directions for applying blockchain to smart grid security issues.

Multivariate time series forecasting has been studied extensively over the years, with ubiquitous applications in areas such as finance, traffic, and the environment. Still, concerns have been raised that traditional methods are incapable of modeling the complex patterns and dependencies present in real-world data. To address such concerns, various deep learning models, mainly Recurrent Neural Network (RNN) based methods, have been proposed. Nevertheless, capturing extremely long-term patterns while effectively incorporating information from other variables remains a challenge for time-series forecasting. Furthermore, a lack of explainability remains a serious drawback of deep neural network models. Inspired by the Memory Network proposed for the question-answering task, we propose a deep-learning-based model named Memory Time-series Network (MTNet) for time series forecasting. MTNet consists of a large memory component, three separate encoders, and an autoregressive component that are trained jointly. Additionally, the designed attention mechanism makes MTNet highly interpretable: we can easily tell which parts of the historic data are referenced the most.
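
The interpretability claim rests on inspecting the attention weights over the memory blocks; below is a minimal sketch of dot-product attention over encoded historic blocks (not MTNet's exact architecture, and all dimensions are hypothetical).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def memory_attention(memory_blocks, query):
    """Dot-product attention over encoded historic blocks: the weights
    reveal which part of the history the forecast relies on most."""
    scores = memory_blocks @ query            # one score per block
    weights = softmax(scores)
    context = weights @ memory_blocks         # weighted history summary
    return context, weights

rng = np.random.default_rng(5)
m, d = 6, 16                      # 6 historic blocks, 16-dim encodings
memory = rng.normal(size=(m, d))  # stand-in for the encoded long-term memory
query = memory[2] + 0.1 * rng.normal(size=d)   # current-window encoding

_, w = memory_attention(memory, query)
print("attention weights:", w.round(3))  # block 2 should dominate
```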
