
Information flow security ensures that the secret data manipulated by a program does not influence its observable output. Proving information flow security is especially challenging for concurrent programs, where operations on secret data may influence the execution time of a thread and, thereby, the interleaving between different threads. Such internal timing channels may affect the observable outcome of a program even if an attacker does not observe execution times. Existing verification techniques for information flow security in concurrent programs attempt to prove that secret data does not influence the relative timing of threads. However, these techniques are often restrictive (for instance because they disallow branching on secret data) and make strong assumptions about the execution platform (ignoring caching, processor instructions with data-dependent runtime, and other common features that affect execution time). In this paper, we present a novel verification technique for secure information flow in concurrent programs that lifts these restrictions and does not make any assumptions about timing behavior. The key idea is to prove that all mutating operations performed on shared data commute, such that different thread interleavings do not influence its final value. Crucially, commutativity is required only for an abstraction of the shared data that contains the information that will be leaked to a public output. Abstract commutativity is satisfied by many more operations than standard commutativity, which makes our technique widely applicable. We formalize our technique in CommCSL, a relational concurrent separation logic with support for commutativity-based reasoning, and prove its soundness in Isabelle/HOL. We implemented CommCSL in HyperViper, an automated verifier based on the Viper verification infrastructure, and demonstrate its ability to verify challenging examples.
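To make the notion of abstract commutativity concrete, here is a minimal, self-contained Python sketch (not the CommCSL logic itself): two appends to a shared list do not commute on the concrete representation, but they do commute on the abstraction that is eventually made public, so the interleaving cannot be observed through that abstraction. The list, the append operation, and the multiset abstraction are illustrative choices.

```python
# Toy illustration of abstract commutativity; the shared list, the append
# operation, and the multiset abstraction are illustrative stand-ins.
from collections import Counter

def append(state, x):        # concrete mutating operation on shared data
    return state + [x]

def abstraction(state):      # the view that is eventually leaked publicly
    return Counter(state)    # multiset of entries, order-insensitive

s0 = []
order1 = append(append(s0, "a"), "b")   # thread 1 runs before thread 2
order2 = append(append(s0, "b"), "a")   # thread 2 runs before thread 1

print(order1 == order2)                              # False: concrete states differ
print(abstraction(order1) == abstraction(order2))    # True: the public view agrees
```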

Related content

Blockchain-based IoT systems can manage IoT devices and achieve a high level of data integrity, security, and provenance. However, incorporating existing consensus protocols in many IoT systems limits scalability and leads to high computational cost and consensus latency. In addition, the location-centric characteristics of many IoT applications, paired with the limited storage and computing power of IoT devices, bring about further limitations, primarily due to the location-agnostic designs of blockchains. We propose a hierarchical and location-aware consensus protocol (LH-Raft) for IoT-blockchain applications, inspired by the original Raft protocol, to address these limitations. The proposed LH-Raft protocol forms local consensus candidate groups based on nodes' reputation and distance to elect the leaders in each sub-layer blockchain. It utilizes a threshold signature scheme to reach global consensus, and local and global log replication to maintain consistency for blockchain transactions. To evaluate the performance of LH-Raft, we first conduct an extensive numerical analysis based on the proposed reputation mechanism and the candidate group formation model. We then compare the performance of LH-Raft against the classical Raft protocol from both theoretical and experimental perspectives. We evaluate the proposed threshold signature scheme using the Hyperledger Ursa cryptography library to measure the signing and verification times of the consensus nodes. Experimental results show that the proposed LH-Raft protocol is scalable for large IoT applications and significantly reduces the communication cost, consensus latency, and agreement time for consensus processing.
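As a rough illustration of location-aware candidate-group formation, the following Python sketch selects, for a region, the highest-reputation nodes within a distance bound. The distance threshold, group size, and reputation values are assumptions made for illustration and are not taken from the protocol specification.

```python
# Hypothetical candidate-group formation: nodes close enough to a region
# centre, ranked by reputation. All parameters here are illustrative.
import math

def form_candidate_group(nodes, centre, max_dist=50.0, group_size=5):
    """nodes: list of dicts with 'id', 'pos' (x, y) and 'reputation'.
    Returns the top-reputation nodes within max_dist of the region centre."""
    near = [n for n in nodes if math.dist(n["pos"], centre) <= max_dist]
    near.sort(key=lambda n: n["reputation"], reverse=True)
    return near[:group_size]

nodes = [{"id": i, "pos": (i * 10.0, 0.0), "reputation": 0.5 + 0.05 * i}
         for i in range(10)]
print([n["id"] for n in form_candidate_group(nodes, centre=(0.0, 0.0))])
# [5, 4, 3, 2, 1] -- nodes within 50 units, highest reputation first
```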

Analog compute-in-memory (CIM) systems are promising for deep neural network (DNN) inference acceleration due to their energy efficiency and high throughput. However, as the use of DNNs expands, protecting user input privacy has become increasingly important. In this paper, we identify a potential security vulnerability wherein an adversary can reconstruct the user's private input data from a power side-channel attack, under proper data acquisition and pre-processing, even without knowledge of the DNN model. We further demonstrate a machine learning-based attack approach using a generative adversarial network (GAN) to enhance the data reconstruction. Our results show that the attack methodology is effective in reconstructing user inputs from analog CIM accelerator power leakage, even at large noise levels and after countermeasures are applied. Specifically, we demonstrate the efficacy of our approach on an example U-Net inference chip for brain tumor detection, and show that the original magnetic resonance imaging (MRI) medical images can be successfully reconstructed even at a noise level whose standard deviation is 20% of the maximum power-signal value. Our study highlights a potential security vulnerability in analog CIM accelerators and raises awareness of the potential for GANs to be used to breach user privacy in such systems.
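The sketch below shows, in generic PyTorch, the overall shape of a GAN-based reconstruction pipeline of this kind: a generator maps a power trace to an image and is trained with an adversarial loss plus an L1 reconstruction term. The architecture, trace length, image size, and loss weighting are all assumptions for illustration, not the attack described in the paper.

```python
# Generic conditional-GAN reconstruction sketch; shapes, layer sizes and the
# 100x L1 weight are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

TRACE_LEN, IMG = 512, 64 * 64          # assumed trace length / image size

G = nn.Sequential(                     # generator: power trace -> image
    nn.Linear(TRACE_LEN, 1024), nn.ReLU(),
    nn.Linear(1024, IMG), nn.Tanh())
D = nn.Sequential(                     # discriminator: image -> real/fake logit
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(trace, image):
    """One adversarial step: D separates real from reconstructed images,
    G is pushed to fool D while staying close to the ground truth."""
    fake = G(trace)
    # discriminator update
    opt_d.zero_grad()
    d_loss = bce(D(image), torch.ones(image.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(image.size(0), 1))
    d_loss.backward()
    opt_d.step()
    # generator update (adversarial + L1 reconstruction)
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(image.size(0), 1)) + \
             100.0 * nn.functional.l1_loss(fake, image)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# usage with random stand-in data
trace = torch.randn(8, TRACE_LEN)
image = torch.rand(8, IMG) * 2 - 1
print(train_step(trace, image))
```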

Block-based visual programming environments play an increasingly important role in introducing computing concepts to K-12 students. In recent years, they have also gained popularity in neuro-symbolic AI, serving as a benchmark to evaluate general problem-solving and logical reasoning skills. The open-ended and conceptual nature of these visual programming tasks makes them challenging, both for state-of-the-art AI agents and for novice programmers. A natural approach to providing assistance for problem-solving is to break down a complex task into a progression of simpler subtasks; however, this is not trivial given that the solution codes are typically nested and have non-linear execution behavior. In this paper, we formalize the problem of synthesizing such a progression for a given reference block-based visual programming task. We propose a novel synthesis algorithm that generates a progression of subtasks that are high-quality and well-spaced in terms of their complexity, and such that solving this progression leads to solving the reference task. We show the utility of our synthesis algorithm in improving the efficacy of AI agents (in this case, neural program synthesizers) for solving tasks in the Karel programming environment. Then, we conduct a user study to demonstrate that our synthesized progression of subtasks can assist a novice programmer in solving tasks in the Hour of Code: Maze Challenge by Code.org.
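One ingredient mentioned above is that subtasks should be well-spaced in complexity. The toy Python sketch below picks, from a pool of candidate subtasks, those whose complexities best match evenly spaced targets below the reference task's complexity; the candidates, the complexity scores, and the selection rule are made-up stand-ins, not the paper's synthesis algorithm.

```python
# Toy "well-spaced progression" selection; candidates and scores are hypothetical.
def pick_progression(candidates, reference_complexity, steps=3):
    """candidates: list of (name, complexity); returns `steps` subtasks whose
    complexities are closest to evenly spaced targets below the reference."""
    targets = [reference_complexity * (i + 1) / (steps + 1) for i in range(steps)]
    chosen = []
    for t in targets:
        best = min((c for c in candidates if c not in chosen),
                   key=lambda c: abs(c[1] - t))
        chosen.append(best)
    return chosen

candidates = [("sub_a", 2), ("sub_b", 3), ("sub_c", 5), ("sub_d", 7), ("sub_e", 9)]
print(pick_progression(candidates, reference_complexity=12))
# [('sub_b', 3), ('sub_c', 5), ('sub_e', 9)]
```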

This article introduces the randomized block Gram-Schmidt process (RBGS) for QR decomposition. RBGS extends the single-vector randomized Gram-Schmidt (RGS) algorithm and inherits its key characteristics, such as higher efficiency and at least as much stability as any deterministic (block) Gram-Schmidt algorithm. Block algorithms offer superior performance as they are based on BLAS-3 matrix-matrix operations and reduce communication cost when executed in parallel. Notably, our low-synchronization variant of RBGS can be implemented in a parallel environment using only one global reduction operation between processors per block. Moreover, block Gram-Schmidt orthogonalization is the key element in the block Arnoldi procedure for the construction of a Krylov basis, which in turn is used in GMRES, FOM, and Rayleigh-Ritz methods for the solution of linear systems and clustered eigenvalue problems. In this article, we develop randomized versions of these methods, based on RBGS, and validate them on nontrivial numerical examples.
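The following NumPy sketch conveys the flavor of a sketched block Gram-Schmidt step: inter-block projections are computed through a random sketch matrix, and each block is then orthonormalized locally. The Gaussian sketch, the block size, and the local QR step are illustrative choices under stated assumptions; this is not the algorithm of the article.

```python
# Minimal randomized block Gram-Schmidt sketch; sizes and sketch are illustrative.
import numpy as np

def rand_block_gs(X, block=4, sketch_rows=64, seed=0):
    """Orthonormalize the columns of X (n x m), processed block by block,
    with inter-block projections computed via a Gaussian sketch."""
    n, m = X.shape
    rng = np.random.default_rng(seed)
    Theta = rng.standard_normal((sketch_rows, n)) / np.sqrt(sketch_rows)
    Q_blocks = []
    for j in range(0, m, block):
        W = X[:, j:j + block].copy()
        if Q_blocks:
            Q = np.hstack(Q_blocks)
            # project out previous blocks using *sketched* inner products
            coeff, *_ = np.linalg.lstsq(Theta @ Q, Theta @ W, rcond=None)
            W -= Q @ coeff
        # local (deterministic) orthonormalization of the current block
        Qj, _ = np.linalg.qr(W)
        Q_blocks.append(Qj)
    return np.hstack(Q_blocks)

X = np.random.default_rng(1).standard_normal((200, 12))
Q = rand_block_gs(X)
# close to the identity, though not exactly: orthogonality only holds
# approximately in the sketched sense between blocks
print(np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1])))
```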

In this article, we propose a novel formulation for the resource allocation problem of a sliced and disaggregated Radio Access Network (RAN) and its transport network. Our proposal ensures an end-to-end delay bound for the Ultra-Reliable and Low-Latency Communication (URLLC) use case while jointly considering the number of admitted users, the transmission rate allocation per slice, the functional split of RAN nodes, and the routing paths in the transport network. We use deterministic network calculus theory to compute the delay along the transport network connecting disaggregated RANs that deploy network functions at the Radio Unit (RU), Distributed Unit (DU), and Central Unit (CU) nodes. The maximum end-to-end delay is a constraint in the optimization-based formulation, which aims to maximize Mobile Network Operator (MNO) profit using a cash flow analysis to model revenue and operational costs with data from one of the world's leading MNOs. The optimization model leverages a Flexible Functional Split (FFS) approach to provide a new degree of freedom to the resource allocation strategy. Simulation results reveal that, due to its non-linear nature, the proposed optimization problem has no trivial solution. Our proposal guarantees a maximum delay for URLLC services while satisfying minimal bandwidth requirements for enhanced Mobile BroadBand (eMBB) services and maximizing the MNO's profit.
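For readers unfamiliar with deterministic network calculus, the small Python example below shows the kind of closed-form delay bound such formulations build on: a token-bucket flow (burst b, rate r) crossing a tandem of rate-latency servers (R_i, T_i) experiences a delay of at most sum(T_i) + b / min(R_i), provided r does not exceed the bottleneck rate. The numbers are illustrative and unrelated to the paper's scenarios.

```python
# Classical network-calculus delay bound for a tandem of rate-latency servers.
def end_to_end_delay_bound(b, r, servers):
    """servers: list of (R_i, T_i) with service rate R_i and latency T_i.
    The tandem offers the rate-latency curve (min R_i, sum T_i), so the delay
    bound is sum(T_i) + b / min(R_i), valid when r <= min(R_i)."""
    R = min(Ri for Ri, _ in servers)
    T = sum(Ti for _, Ti in servers)
    assert r <= R, "flow rate must not exceed the bottleneck service rate"
    return T + b / R

# 0.2 (burst term) + 0.3 (summed latencies) = 0.5 time units
print(end_to_end_delay_bound(b=0.1, r=0.4, servers=[(0.5, 0.1), (0.8, 0.2)]))
```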

We give a structure theorem for Boolean functions on the biased hypercube which are $\epsilon$-close to degree $d$ in $L_2$, showing that they are close to sparse juntas. Our structure theorem implies that such functions are $O(\epsilon^{C_d} + p)$-close to constant functions. We pinpoint the exact value of the constant $C_d$.

The branch-and-bound algorithm based on decision diagrams introduced by Bergman et al. in 2016 is a framework for solving discrete optimization problems with a dynamic programming formulation. It works by compiling a series of bounded-width decision diagrams that can provide lower and upper bounds for any given subproblem. Eventually, every part of the search space will be either explored or pruned by the algorithm, thus proving optimality. This paper presents new ingredients to speed up the search by exploiting the structure of dynamic programming models. The key idea is to prevent the repeated exploration of nodes corresponding to the same dynamic programming states by storing and querying thresholds in a data structure called the Barrier. These thresholds are based on dominance relations between partial solutions found previously. They can be further strengthened by integrating the filtering techniques introduced by Gillard et al. in 2021. Computational experiments show that the pruning brought by the Barrier significantly reduces the number of nodes expanded by the algorithm. As a result, more benchmark instances of difficult optimization problems are solved in less time while using narrower decision diagrams.
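A simplified sketch of the threshold idea (assuming a maximization problem): cache, per dynamic programming state, the best value with which that state has been reached, and prune any later node that reaches the same state with a value that is no better. The class below illustrates this dominance-based pruning only; it is not the Barrier data structure as implemented in the paper.

```python
# Simplified threshold cache for dominance-based pruning (maximization assumed).
class Barrier:
    def __init__(self):
        self.thresholds = {}        # dp_state -> best value seen for that state

    def must_explore(self, state, value):
        """True iff a node with this state/value can still improve on what a
        previously seen node with the same state achieved."""
        best = self.thresholds.get(state)
        if best is not None and value <= best:
            return False            # dominated: same state already reached at >= value
        self.thresholds[state] = value
        return True

barrier = Barrier()
print(barrier.must_explore(("item3", 10), value=42))  # True, first visit
print(barrier.must_explore(("item3", 10), value=40))  # False, dominated
print(barrier.must_explore(("item3", 10), value=50))  # True, strictly better
```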

Multi-marginal Optimal Transport (mOT), a generalization of OT, aims at minimizing the integral of a cost function with respect to a distribution with some prescribed marginals. In this paper, we consider an entropic version of mOT with a tree-structured quadratic cost, i.e., a function that can be written as a sum of pairwise cost functions between the nodes of a tree. To address this problem, we develop the Tree-based Diffusion Schr\"odinger Bridge (TreeDSB), an extension of the Diffusion Schr\"odinger Bridge (DSB) algorithm. TreeDSB corresponds to a dynamic and continuous state-space counterpart of the multimarginal Sinkhorn algorithm. A notable use case of our methodology is the computation of Wasserstein barycenters, which can be recast as the solution of an mOT problem on a star-shaped tree. We demonstrate that our methodology can be applied in high-dimensional settings such as image interpolation and Bayesian fusion.
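For context, the classical two-marginal entropic-OT Sinkhorn iteration is sketched below in NumPy; the multimarginal scheme that TreeDSB is described as extending generalizes these alternating marginal projections to several marginals on a tree. The cost, grid, and histograms are toy choices.

```python
# Standard two-marginal Sinkhorn iteration for entropic optimal transport.
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=200):
    """Entropic OT between histograms a, b with cost matrix C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                # match the second marginal
        u = a / (K @ v)                  # match the first marginal
    return u[:, None] * K * v[None, :]   # transport plan

x = np.linspace(0, 1, 50)
a = np.ones(50) / 50                     # uniform source histogram
b = np.exp(-((x - 0.7) ** 2) / 0.01)     # peaked target histogram
b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2       # quadratic cost on the grid
P = sinkhorn(a, b, C)
print(P.sum(), np.abs(P.sum(1) - a).max(), np.abs(P.sum(0) - b).max())
```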

Cross-validation is the standard approach for tuning parameter selection in many non-parametric regression problems. However, its use is less common in change-point regression, perhaps because its prediction-error-based criterion may appear to permit small spurious changes and hence be less well suited to estimating the number and locations of change-points. We show that the problems with cross-validation under squared error loss are in fact more severe: it can lead to systematic under- or over-estimation of the number of change-points, and highly suboptimal estimation of the mean function, in simple settings where changes are easily detectable. We propose two simple approaches to remedy these issues, the first involving the use of absolute error rather than squared error loss, and the second involving modifying the holdout sets used. For the latter, we provide conditions that permit consistent estimation of the number of change-points for a general change-point estimation procedure. We show these conditions are satisfied for optimal partitioning using new results on its performance when supplied with an incorrect number of change-points. Numerical experiments show that the absolute error approach in particular is competitive with common change-point methods using classical tuning parameter choices when error distributions are well-specified, but can substantially outperform them in misspecified models. An implementation of our methodology is available in the R package crossvalidationCP on CRAN.
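The following self-contained Python sketch illustrates one way to combine a classical alternating (even/odd) holdout with an absolute error criterion: segmentations of the training half are computed by a simple optimal-partitioning dynamic program, and the number of segments is chosen to minimize the absolute error on the held-out half. This is an illustration under those assumptions, not the procedure implemented in crossvalidationCP.

```python
# Toy cross-validation for the number of change-points with absolute error loss.
import numpy as np

def seg_cost_fn(y):
    """Prefix sums so the squared-error cost of fitting one mean to y[i:j] is O(1)."""
    s1 = np.concatenate([[0.0], np.cumsum(y)])
    s2 = np.concatenate([[0.0], np.cumsum(y ** 2)])
    def cost(i, j):
        n = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / n
    return cost

def optimal_partition(y, k):
    """Split y into k constant-mean segments minimising squared error;
    returns the segments as (start, end) index pairs."""
    n, cost = len(y), seg_cost_fn(y)
    dp = np.full((k + 1, n + 1), np.inf)
    arg = np.zeros((k + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = dp[m - 1, i] + cost(i, j)
                if c < dp[m, j]:
                    dp[m, j], arg[m, j] = c, i
    segs, j = [], n
    for m in range(k, 0, -1):
        i = arg[m, j]
        segs.append((i, j))
        j = i
    return segs[::-1]

def cv_abs_error(y, k):
    """Fit on one alternating half, score with *absolute* error on the other."""
    train, test = y[0::2], y[1::2]
    err = 0.0
    for (i, j) in optimal_partition(train, k):
        mu = train[i:j].mean()
        err += np.abs(test[i:j] - mu).sum()   # held-out points in the same range
    return err

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
print(min(range(1, 6), key=lambda k: cv_abs_error(y, k)))  # typically 2 segments
```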

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This makes it possible to relax both the optimal value and the optimal solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
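A tiny NumPy sketch of the core mechanism: replacing the max in the Viterbi recursion by a smoothed maximum (log-sum-exp with a temperature, corresponding to an entropic regularizer) makes the DP value a differentiable function of its inputs and recovers the hard maximum as the temperature goes to zero. The chain length, state count, and potentials below are toy data, not the paper's experiments.

```python
# Smoothed max and a smoothed Viterbi value recursion (toy illustration).
import numpy as np

def smoothed_max(z, gamma):
    """gamma * logsumexp(z / gamma); tends to max(z) as gamma -> 0."""
    z = np.asarray(z, dtype=float)
    m = z.max()
    return m + gamma * np.log(np.exp((z - m) / gamma).sum())

def smoothed_viterbi_value(theta, A, gamma=1.0):
    """theta: (T, S) unary scores, A: (S, S) transition scores.
    Returns the smoothed value of the best-path (Viterbi) recursion."""
    T, S = theta.shape
    v = theta[0].copy()
    for t in range(1, T):
        v = theta[t] + np.array(
            [smoothed_max(v + A[:, j], gamma) for j in range(S)])
    return smoothed_max(v, gamma)

rng = np.random.default_rng(0)
theta, A = rng.normal(size=(6, 3)), rng.normal(size=(3, 3))
print(smoothed_viterbi_value(theta, A, gamma=1.0))    # smoothed score
print(smoothed_viterbi_value(theta, A, gamma=1e-3))   # close to the hard Viterbi score
```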
