
Cryptographic protocols (CPs) are distributed algorithms intended for secure communication in an insecure environment. They are used, for example, in electronic payments, electronic voting procedures, and systems for confidential data processing. Errors in CPs can lead to great financial and social damage, so it is necessary to use mathematical methods to substantiate the correctness and security of CPs. In this paper, a distributed process model of CPs is presented, which allows one to formally describe CPs and their properties. It is shown how to solve problems of CP verification on the basis of this model.

Related content

Processing is an open-source programming language and the name of its accompanying integrated development environment (IDE). Processing is used in the electronic arts and visual design communities to teach the fundamentals of programming, and it has been applied in a large number of new-media and interactive art works.

In this paper, we study an information-theoretic secret sharing problem, where a dealer distributes shares of a secret among a set of participants under the following constraints: (i) authorized sets of users can recover the secret by pooling their shares, and (ii) non-authorized sets of colluding users cannot learn any information about the secret. We assume that the dealer and participants observe the realizations of correlated Gaussian random variables and that the dealer can communicate with participants through a one-way, authenticated, rate-limited, and public channel. Unlike traditional secret sharing protocols, in our setting, no perfectly secure channel is needed between the dealer and the participants. Our main result is a closed-form characterization of the fundamental trade-off between secret rate and public communication rate.
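
For contrast with the paper's Gaussian, public-channel setting, constraints (i) and (ii) are easiest to picture in a classical threshold scheme. The following is a minimal sketch of Shamir's (t, n) secret sharing over a prime field; it is a baseline illustration only, not the protocol studied above, and the field size is an assumption chosen for this toy example.

```python
# Toy Shamir (t, n) secret sharing: any t shares recover the secret (authorized
# sets), while fewer than t shares reveal nothing about it (colluding sets).
import random

P = 2**61 - 1  # Mersenne prime used as the field size (toy choice)

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the degree-(t-1) polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any authorized set of 3 works
```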

We prove several new tight distributed lower bounds for classic symmetry breaking graph problems. As a basic tool, we first provide a new insightful proof that any deterministic distributed algorithm that computes a $\Delta$-coloring on $\Delta$-regular trees requires $\Omega(\log_\Delta n)$ rounds and any randomized algorithm requires $\Omega(\log_\Delta\log n)$ rounds. We prove this result by showing that a natural relaxation of the $\Delta$-coloring problem is a fixed point in the round elimination framework. As a first application, we show that our $\Delta$-coloring lower bound proof directly extends to arbdefective colorings. We exactly characterize which variants of the arbdefective coloring problem are "easy", and which of them instead are "hard". As a second application, which we see as our main contribution, we use the structure of the fixed point as a building block to prove lower bounds as a function of $\Delta$ for a large class of distributed symmetry breaking problems. For example, we obtain a tight lower bound for the fundamental problem of computing a $(2,\beta)$-ruling set. This is an exponential improvement over the best existing lower bound for the problem, which was proven in [FOCS '20]. Our lower bound even applies to a much more general family of problems that allows for almost arbitrary combinations of natural constraints from coloring problems, orientation problems, and independent set problems, and provides a single unified proof for known and new lower bound results for these types of problems. Our lower bounds as a function of $\Delta$ also imply lower bounds as a function of $n$. We obtain, for example, that maximal independent set, on trees, requires $\Omega(\log n / \log \log n)$ rounds for deterministic algorithms, which is tight.
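
To make the $(2,\beta)$-ruling set notion concrete: a set $S$ is a $(2,\beta)$-ruling set if nodes of $S$ are pairwise at distance at least 2 (i.e., $S$ is independent) and every node is within distance $\beta$ of $S$. Below is a small, hypothetical checker for this property on a centralized graph representation; it only verifies the definition and has nothing to do with the distributed lower-bound machinery above.

```python
# Checker for the (2, beta)-ruling set property on an undirected graph given
# as an adjacency dict. Illustrative helper, not the paper's algorithm.
from collections import deque

def is_ruling_set(adj, S, beta):
    S = set(S)
    # Pairwise distance >= 2: no two ruling nodes may be adjacent.
    if any(v in S for u in S for v in adj[u]):
        return False
    # Multi-source BFS from S; every node must be within beta hops of S.
    dist = {u: 0 for u in S}
    queue = deque(S)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return all(v in dist and dist[v] <= beta for v in adj)

# Example: on the path 0-1-2-3-4, the set {0, 3} is a (2, 1)-ruling set.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
assert is_ruling_set(path, {0, 3}, beta=1)
```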

Distributed computing often gives rise to complex concurrent and interacting activities. In some cases several concurrent activities may be working together, i.e., cooperating, to solve a given problem; in other cases, the activities may be independent but need to share common system resources for which they must compete. Many difficulties and limitations arise in the widely advocated objects-and-(trans)actions model when it is expected to support cooperating activities. We have previously introduced the concept of coordinated atomic (CA) actions [Xu et al. 1995]; this paper analyzes and examines the derived objects-and-CA-actions model for constructing fault-tolerant distributed systems and providing unified support for both cooperative and competitive concurrency. Our investigation reveals and clarifies several significant problems that have not previously been studied extensively, including the problem of ensuring consistent access to shared objects from a joint action as opposed to a set of independent actions. Conceptual and implementation-related solutions are proposed and illustrated.
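
The coordinated entry/exit discipline of a CA action can be pictured with the minimal sketch below: all participant threads enter the action together, run their roles, and leave together, with any raised exception recorded so the outcome is joint. The `CAAction` class and its API are hypothetical illustrations of the concept, not the implementation the paper discusses.

```python
# Minimal sketch of a CA action's coordinated entry/exit using barriers.
import threading

class CAAction:
    def __init__(self, n_roles):
        self.entry = threading.Barrier(n_roles)
        self.exit = threading.Barrier(n_roles)
        self.exceptions = []
        self.lock = threading.Lock()

    def run_role(self, role):
        self.entry.wait()            # coordinated entry: all roles start together
        try:
            role()
        except Exception as exc:     # record rather than escape: outcome is joint
            with self.lock:
                self.exceptions.append(exc)
        self.exit.wait()             # coordinated exit: nobody leaves early
        if self.exceptions:          # every role observes the joint failure
            raise RuntimeError(f"CA action aborted: {self.exceptions}")

action = CAAction(n_roles=2)
results = []
t1 = threading.Thread(target=action.run_role, args=(lambda: results.append("a"),))
t2 = threading.Thread(target=action.run_role, args=(lambda: results.append("b"),))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)  # both roles completed inside the action
```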

Advances in machine learning (ML) technologies have greatly improved Artificial Intelligence (AI) systems. As a result, AI systems have become ubiquitous, with applications prevalent in virtually all sectors. However, AI systems have prompted ethical concerns, especially as their usage crosses boundaries into sensitive areas such as healthcare, transportation, and security. As a result, users are calling for better AI governance practices in ethical AI systems, and AI development methods are encouraged to foster these practices. This research analyzes the ECCOLA method for developing ethical and trustworthy AI systems to determine whether it enables AI governance in development processes through ethical practices. The results demonstrate that while ECCOLA fully facilitates AI governance in terms of corporate governance practices across all of its processes, some of its practices do not fully foster data governance and information governance practices. This indicates that the method can be further improved.

Multi-access Edge Computing (MEC) is a 5G-enabling solution that aims to bring cloud-computing capabilities closer to the end-users. This paper focuses on mitigation techniques against Distributed Denial-of-Service (DDoS) attacks in the context of 5G MEC, providing solutions that involve the virtualized environment and the management entities of the MEC architecture. The proposed solutions aim to reduce the risk of affecting legitimate traffic during DDoS attacks. Our work supports the idea of using a network flow collector that sends the data to an anomaly detection system based on artificial intelligence techniques and, as an improvement over previous work, it redirects detected anomalies to a separate virtual machine for isolation. This virtual machine uses deep packet inspection tools to analyze the traffic and provides services until the final verdict. By isolating the suspicious traffic, we decrease the risk of compromising the virtual machine that serves legitimate users. The management entities of the MEC architecture make it possible to re-instantiate or reconfigure the virtual machines. Hence, if the machine inspecting the isolated traffic crashes because of an attack, the damaged machine can be restored while the services provided to legitimate users remain unaffected.
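
The control flow described above (collector, AI-based detector, isolation VM with deep packet inspection) can be sketched as a simple steering pipeline. All names below (`score_flow`, `steer_to`, the VM identifiers, the threshold value) are hypothetical placeholders, not entities from the paper.

```python
# Sketch of the mitigation pipeline: score each flow, steer suspicious flows
# to an isolation VM for DPI, and let normal traffic reach the service VM.
THRESHOLD = 0.8
SERVICE_VM, ISOLATION_VM = "mec-svc-vm", "mec-dpi-vm"

def score_flow(flow):
    # Placeholder for the AI-based anomaly detector (e.g., a trained classifier).
    return flow.get("anomaly_score", 0.0)

def steer_to(flow, vm):
    # Placeholder for the MEC management action that rewrites the flow's path.
    print(f"flow {flow['id']} -> {vm}")

def handle(flow):
    if score_flow(flow) >= THRESHOLD:
        steer_to(flow, ISOLATION_VM)   # DPI analyzes it until a final verdict
    else:
        steer_to(flow, SERVICE_VM)     # legitimate traffic is unaffected

for flow in [{"id": 1, "anomaly_score": 0.1}, {"id": 2, "anomaly_score": 0.95}]:
    handle(flow)
```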

Smart Grid (SG) research and development has drawn much attention from academia, industry, and government due to the great impact it will have on society, the economy, and the environment. Securing the SG is a significant challenge due to the increased dependency on communication networks to assist in physical process control, which exposes them to various cyber-threats. In addition to attacks that change measurement values using False Data Injection (FDI) techniques, attacks on the communication network may disrupt the power system's real-time operation by intercepting messages or by flooding the communication channels with unnecessary data. Addressing these attacks requires a cross-layer approach. In this paper, a cross-layer strategy is presented, called Cross-Layer Ensemble CorrDet with Adaptive Statistics (CECD-AS), which integrates the detection of faulty SG measurement data as well as inconsistent network inter-arrival times and transmission delays for more reliable and accurate anomaly detection and attack interpretation. Numerical results show that CECD-AS can detect multiple False Data Injection, Denial of Service (DoS), and Man-in-the-Middle (MITM) attacks with a high F1-score compared to current approaches that only use SG measurement data for detection, such as traditional physics-based State Estimation, the Ensemble CorrDet with Adaptive Statistics strategy, and other machine-learning classification-based detection schemes.
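
CorrDet-style detectors flag a sample when its squared Mahalanobis distance from running estimates of the mean and covariance exceeds a threshold, adapting those statistics on samples deemed normal. The sketch below assumes that formulation; the cross-layer idea then corresponds to concatenating SG measurements with network timing features into one feature vector. It is an illustrative sketch, not the CECD-AS implementation.

```python
# CorrDet-style anomaly detector with adaptive statistics (illustrative).
import numpy as np

class AdaptiveCorrDet:
    def __init__(self, mean, cov, threshold, alpha=0.05):
        self.mean, self.cov = mean, cov
        self.threshold, self.alpha = threshold, alpha

    def update(self, x):
        d = x - self.mean
        score = float(d @ np.linalg.inv(self.cov) @ d)  # squared Mahalanobis
        anomalous = score > self.threshold
        if not anomalous:  # adapt statistics only on samples deemed normal
            self.mean = (1 - self.alpha) * self.mean + self.alpha * x
            self.cov = (1 - self.alpha) * self.cov + self.alpha * np.outer(d, d)
        return anomalous, score

rng = np.random.default_rng(0)
det = AdaptiveCorrDet(mean=np.zeros(3), cov=np.eye(3), threshold=16.0)
print(det.update(rng.normal(size=3)))          # typically within the threshold
print(det.update(np.array([9.0, 9.0, 9.0])))   # flagged as anomalous
```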

Because of the superior feature-representation ability of deep learning, various deep Click-Through Rate (CTR) models have been deployed in commercial systems by industrial companies. To achieve better performance, it is necessary to train deep CTR models on huge volumes of training data efficiently, which makes speeding up the training process an essential problem. Unlike models with dense training data, the training data for CTR models is usually high-dimensional and sparse. To transform the high-dimensional sparse input into low-dimensional dense real-valued vectors, almost all deep CTR models adopt an embedding layer, which easily reaches hundreds of GB or even TB in size. Since a single GPU cannot accommodate all the embedding parameters, it is not practical to rely on data parallelism alone when performing distributed training. Therefore, existing distributed training platforms for recommendation adopt model parallelism: they use the CPU (Host) memory of servers to maintain and update the embedding parameters and utilize GPU workers to conduct the forward and backward computations. Unfortunately, these platforms suffer from two bottlenecks: (1) the latency of pull & push operations between Host and GPU; (2) parameter update and synchronization in the CPU servers. To address these bottlenecks, in this paper we propose ScaleFreeCTR (SFCTR): a MixCache-based distributed training system for CTR models. Specifically, SFCTR also stores the huge embedding table in CPU memory but utilizes the GPU instead of the CPU to conduct embedding synchronization efficiently. To reduce the latency of data transfer between both GPU-Host and GPU-GPU, a MixCache mechanism and a Virtual Sparse Id operation are proposed. Comprehensive experiments and ablation studies demonstrate the effectiveness and efficiency of SFCTR.
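
One way to picture the transfer-saving idea behind a virtual-id, cache-based design is that the raw (global) sparse ids in a batch are remapped to a compact range of local ids, so only the unique embedding rows cross the Host-GPU link. The NumPy sketch below illustrates that de-duplication under our own assumptions; it is not the SFCTR implementation.

```python
# De-duplicated embedding transfer: remap raw sparse ids to compact local ids,
# move only unique rows host -> GPU once, then gather. Illustrative sketch.
import numpy as np

host_table = np.random.randn(1_000_000, 8).astype(np.float32)  # CPU-resident

batch_ids = np.array([42, 7, 42, 13, 7, 42])  # raw sparse ids, heavy repeats
unique_ids, local_ids = np.unique(batch_ids, return_inverse=True)

# Only len(unique_ids) rows cross the Host->GPU link instead of len(batch_ids).
gpu_cache = host_table[unique_ids]          # stand-in for the device-side cache
batch_embeddings = gpu_cache[local_ids]     # gather with compact local ids

assert batch_embeddings.shape == (len(batch_ids), 8)
```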

Federated learning is a distributed machine learning method that aims to preserve the privacy of sample features and labels. In a federated learning system, ID-based sample alignment approaches are usually applied, with few efforts made to protect ID privacy. In real-life applications, however, the confidentiality of sample IDs, which are the strongest row identifiers, is also drawing much attention from many participants. To ease these concerns about ID privacy, this paper formally proposes the notion of asymmetrical vertical federated learning and illustrates how to protect sample IDs. The standard private set intersection protocol is adapted to achieve the asymmetrical ID alignment phase in an asymmetrical vertical federated learning system, and a Pohlig-Hellman realization of the adapted protocol is provided. This paper also presents a "genuine with dummy" approach to achieving asymmetrical federated model training. To illustrate its application, a federated logistic regression algorithm is provided as an example, and experiments validate the feasibility of this approach.
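
A Pohlig-Hellman realization of private set intersection rests on the commutativity of the exponentiation cipher: hashing an ID into the group and raising it to each party's secret exponent gives the same result in either order, so doubly-encrypted IDs can be matched without revealing them. The sketch below demonstrates only that commutative property with toy parameters (the modulus and keys are assumptions); a real deployment needs a large safe prime and keys coprime to $p-1$.

```python
# Commutativity check for the Pohlig-Hellman exponentiation cipher used in PSI.
import hashlib

P = 2**127 - 1  # toy Mersenne-prime modulus (assumption for illustration)

def h(identity):
    # Hash the sample ID into the group.
    return int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % P

def enc(value, key):
    return pow(value, key, P)  # Pohlig-Hellman exponentiation cipher

a_key, b_key = 0x1234567, 0x7654321   # each party's secret exponent (toy values)
for sample_id in ["user-001", "user-002"]:
    x = h(sample_id)
    # A encrypts first then B, or B first then A: the ciphertexts coincide,
    # so both parties can compare doubly-encrypted IDs to find the intersection.
    assert enc(enc(x, a_key), b_key) == enc(enc(x, b_key), a_key)
```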

Network embedding aims to learn latent, low-dimensional vector representations of network nodes that are effective in supporting various network analytic tasks. While prior art on network embedding focuses primarily on preserving network topology to learn node representations, recently proposed attributed network embedding algorithms attempt to integrate rich node content information with network topological structure to enhance the quality of network embedding. In reality, networks often have sparse content, incomplete node attributes, and a discrepancy between the node attribute feature space and the network structure space, which severely deteriorates the performance of existing methods. In this paper, we propose a unified framework for attributed network embedding, attri2vec, that learns node embeddings by discovering a latent node attribute subspace via a network-structure-guided transformation performed on the original attribute space. The resultant latent subspace can respect network structure in a more consistent way, towards learning high-quality node representations. We formulate an optimization problem which is solved by an efficient stochastic gradient descent algorithm, with time complexity linear in the number of nodes. We investigate a series of linear and non-linear transformations performed on node attributes and empirically validate their effectiveness on various types of networks. Another advantage of attri2vec is its ability to solve out-of-sample problems, where embeddings of newly arriving nodes can be inferred from their node attributes through the learned mapping function. Experiments on various types of networks confirm that attri2vec is superior to state-of-the-art baselines for node classification, node clustering, and out-of-sample link prediction tasks. The source code of this paper is available at //github.com/daokunzhang/attri2vec.
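
The core idea, that embeddings are a learned transformation of node attributes fit under structural guidance, which is also what enables the out-of-sample inference, can be sketched as follows. The loss below (pull linked nodes together, push a random negative pair apart) is our own illustrative stand-in, not the paper's exact objective, and the non-linear map is one arbitrary choice.

```python
# Toy attri2vec-style sketch: a linear map W followed by tanh turns attributes
# into embeddings; W is fit with a structure-guided contrastive-style loss.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))             # node attributes (5 nodes, 10 features)
edges = [(0, 1), (1, 2), (3, 4)]         # network structure
W = rng.normal(scale=0.1, size=(10, 4))  # attribute -> embedding map

def embed(X, W):
    return np.tanh(X @ W)                # one possible non-linear transformation

lr = 0.01
for _ in range(100):
    Z = embed(X, W)
    grad = np.zeros_like(W)
    for u, v in edges:
        neg = rng.integers(len(X))       # random negative sample
        # Minimize distance to the linked node v, maximize it to the negative.
        for tgt, sign in ((v, +1.0), (neg, -1.0)):
            diff = Z[u] - Z[tgt]
            grad += sign * (np.outer(X[u], diff * (1 - Z[u] ** 2))
                            - np.outer(X[tgt], diff * (1 - Z[tgt] ** 2)))
    W -= lr * grad

# Out-of-sample: a new node is embedded directly from its attributes.
new_node_embedding = embed(rng.normal(size=(1, 10)), W)
```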

Recommender systems rely on large datasets of historical data and entail serious privacy risks. A server offering recommendations as a service to a client might leak more information than necessary about its recommendation model and training dataset. At the same time, the disclosure of the client's preferences to the server is also a matter of concern. Providing recommendations while preserving privacy in both senses is a difficult task, which often comes into conflict with the utility of the system in terms of its recommendation accuracy and efficiency. General-purpose cryptographic primitives such as secure multi-party computation and homomorphic encryption offer strong security guarantees, but in conjunction with state-of-the-art recommender systems they yield far-from-practical solutions. We precisely define the above notion of security and propose CryptoRec, a novel recommendations-as-a-service protocol that encompasses a crypto-friendly recommender system. This model possesses two interesting properties: (1) it models user-item interactions in a user-free latent feature space, in which it captures personalized user features by an aggregation of item features. This means that a server with a pre-trained model can provide recommendations for a client without having to re-train the model with the client's preferences; nevertheless, re-training the model still improves accuracy. (2) It only uses addition and multiplication operations, making the model straightforwardly compatible with homomorphic encryption schemes.
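
The user-free property can be pictured as follows: the user's latent vector is not a free parameter but an aggregate (here, a plain weighted sum) of latent vectors of the items the user has interacted with, so prediction needs only additions and multiplications over the client's interaction vector. The sketch below shows that plaintext computation under our own assumptions about the model shape; it is not the CryptoRec protocol itself.

```python
# User-free, HE-friendly scoring sketch: the user vector is an aggregation of
# item vectors, so prediction uses only + and x -- the operations homomorphic
# encryption supports. Illustrative plaintext version.
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 6, 4
Q = rng.normal(size=(n_items, dim))   # item features used to aggregate users
P = rng.normal(size=(n_items, dim))   # item features used for scoring
item_bias = rng.normal(size=n_items)

r = np.array([5.0, 0.0, 3.0, 0.0, 0.0, 4.0])  # client's ratings (0 = unrated)

def predict(r, target_item):
    user_vec = r @ Q                  # user = weighted sum of item vectors
    return float(user_vec @ P[target_item] + item_bias[target_item])

# Only additions and multiplications touch the client's data, so `r` could be
# encrypted and the server could evaluate this on ciphertexts with its
# pre-trained Q, P, and biases.
print(predict(r, target_item=1))
```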
