
Serverless computing is an emerging cloud computing paradigm that allows software engineers to develop applications at the granularity of individual functions (called serverless functions). However, multiple identical runs of the same serverless function can show different performance (i.e., response latencies) due to the highly dynamic underlying environment in which these functions are executed. We conduct the first study of serverless function performance variance to raise awareness of this issue among researchers. We investigate 59 related research papers published in top-tier conferences and observe that only 40.68% of them use multiple runs to quantify the variance of serverless function performance. We then extract 65 serverless functions used in these papers and find that the performance of these functions can differ by up to 338.76% (44.15% on average), indicating a large magnitude of variance. Furthermore, we find that 61.54% of these functions can yield unreliable performance results at the low numbers of repetitions widely adopted in the serverless computing literature.
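
As a rough illustration of the kind of measurement involved (not the authors' tooling), the following Python sketch invokes a deployed function repeatedly and reports the relative spread of its response latencies; the endpoint URL and repetition count are hypothetical.

```python
# Minimal sketch: quantify response-latency variance of a serverless function
# across repeated invocations. Endpoint and repetition count are assumptions.
import statistics
import time
import urllib.request

ENDPOINT = "https://example.com/hypothetical-function"  # hypothetical endpoint
REPETITIONS = 50  # low repetition counts may give unreliable estimates

def invoke_once(url: str) -> float:
    """Invoke the function once and return the response latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

latencies = [invoke_once(ENDPOINT) for _ in range(REPETITIONS)]
mean = statistics.mean(latencies)
# Relative spread between fastest and slowest runs, analogous to the
# percentage differences reported in the abstract.
spread_pct = (max(latencies) - min(latencies)) / min(latencies) * 100
print(f"mean={mean:.3f}s  cv={statistics.stdev(latencies) / mean:.2%}  spread={spread_pct:.1f}%")
```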

Related Content

Deterministic finite automata (DFAs) are a classic tool for high-throughput matching of regular expressions, both in theory and practice. Due to their high space consumption, extensive research has been devoted to compressed representations of DFAs that still support efficient pattern matching queries. Kumar et al. [SIGCOMM 2006] introduced the delayed deterministic finite automaton (D2FA), which exploits the large redundancy between inter-state transitions in the automaton. They showed that it obtains up to two orders of magnitude compression on real-world DFAs, and their work formed the basis of numerous subsequent results. Their algorithm, and later algorithms based on their idea, have an inherent quadratic-time bottleneck, as they consider every pair of states to compute the optimal compression. In this work we present a simple, general framework based on locality-sensitive hashing for speeding up these algorithms to achieve sub-quadratic construction times for D2FAs. We apply the framework to speed up several algorithms to near-linear time and experimentally evaluate their performance on real-world regular expression sets extracted from modern intrusion detection systems. We find an order of magnitude improvement in compression times, with little or no loss of compression, and in some cases significantly better compression.
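
To make the locality-sensitive-hashing idea concrete, here is a small Python sketch (an illustration under assumed representations, not the paper's algorithm): each state's outgoing-transition table is summarized by a MinHash signature, states are bucketed by signature bands, and only states sharing a bucket become candidate pairs for default-transition compression, avoiding the all-pairs scan.

```python
# Bucket DFA states by MinHash signatures of their transition tables so that
# only states landing in a shared bucket are compared for default-transition
# (D2FA) compression. The toy DFA and parameters below are illustrative.
from collections import defaultdict

NUM_HASHES = 8
# Toy DFA: state -> {input symbol: next state}
dfa = {
    0: {"a": 1, "b": 2, "c": 0},
    1: {"a": 1, "b": 2, "c": 0},   # shares all transitions with state 0
    2: {"a": 1, "b": 2, "c": 2},   # shares two of three transitions
}

def minhash_signature(transitions, num_hashes=NUM_HASHES):
    """One MinHash value per salted hash function over (symbol, target) pairs."""
    return tuple(
        min(hash((salt, sym, tgt)) for sym, tgt in transitions.items())
        for salt in range(num_hashes)
    )

# Band the signature: each (band index, hash value) pair is a bucket key, so
# states that agree on any single band collide.
buckets = defaultdict(set)
for state, transitions in dfa.items():
    for band, h in enumerate(minhash_signature(transitions)):
        buckets[(band, h)].add(state)

candidate_pairs = set()
for group in buckets.values():
    ordered = sorted(group)
    for i, s in enumerate(ordered):
        for t in ordered[i + 1:]:
            candidate_pairs.add((s, t))

print(sorted(candidate_pairs))  # pairs to inspect instead of all state pairs
```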

Despite numerous advances in the field and a seemingly ever-increasing amount of investment, we are still some years away from seeing a production quantum computer in action. However, it is possible to make some educated guesses about the operational difficulties and challenges that may be encountered in practice. We can be reasonably confident that the early machines will be hybrid, with the quantum devices used in an apparently similar way to current accelerators such as FPGAs or GPUs. Compilers, libraries and the other tools currently relied upon for software development will have to evolve or be reinvented to support the new technology, and training courses will have to be rethought completely rather than "just" updated alongside them. The workloads likely to make best use of these hybrid machines will initially be few, before rapidly increasing in diversity, as we saw with the uptake of GPUs and other new technologies in the past. This will again be helped by the growing number of supporting libraries and development tools, and by the gradual redevelopment of existing software to make use of the new quantum devices. Unfortunately, despite many advances, the problem of error correction remains largely unsolved: quantum computation is very sensitive to noise, leading to frequent errors during execution. Quantum calculations, although asymptotically faster than their equivalents in "traditional" HPC, still take time. And while profiling tools and programming approaches will have to change drastically, many of the skills honed in the current HPC industry will not suddenly become obsolete but will continue to be useful in the quantum era.

Our comprehension of biological neuronal networks has profoundly influenced the evolution of artificial neural networks (ANNs). However, the neurons employed in ANNs deviate markedly from their biological analogs, mainly because they lack complex dendritic trees with local nonlinearity. Despite these disparities, previous investigations have demonstrated that point neurons can functionally substitute for dendritic neurons in executing computational tasks. In this study, we scrutinized the importance of nonlinear dendrites within neural networks. Employing machine-learning methodologies, we assessed the impact of dendritic nonlinearity on neural network performance. Our findings reveal that integrating dendritic structures can substantially enhance model capacity and performance while keeping signal-communication costs effectively restrained. This investigation offers pivotal insights with considerable implications for the development of future neural network accelerators.
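
As a toy contrast (my own construction, not the architecture studied in the paper), the sketch below compares a point neuron, which applies one global nonlinearity to a weighted sum, with a "dendritic" unit that first applies a local nonlinearity on each branch of inputs before the soma aggregates the branch outputs. All sizes and weights are arbitrary.

```python
# Point neuron vs. dendritic unit with per-branch local nonlinearity.
import numpy as np

rng = np.random.default_rng(0)

def point_neuron(x, w, b):
    """Single weighted sum followed by one global nonlinearity."""
    return np.tanh(x @ w + b)

def dendritic_neuron(x, branch_weights, soma_weights, b):
    """Each branch sees a disjoint slice of the input and applies its own
    nonlinearity before the soma aggregates the branch outputs."""
    branches = np.split(x, len(branch_weights))
    branch_out = np.array([np.tanh(xi @ wi) for xi, wi in zip(branches, branch_weights)])
    return np.tanh(branch_out @ soma_weights + b)

x = rng.normal(size=12)
w = rng.normal(size=12)
branch_w = [rng.normal(size=4) for _ in range(3)]  # 3 branches of 4 inputs each
soma_w = rng.normal(size=3)
print(point_neuron(x, w, 0.0), dendritic_neuron(x, branch_w, soma_w, 0.0))
```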

Blockchain technology has transformed the digital sphere by providing a transparent, secure, and decentralized platform for data security across a range of industries, including cryptocurrencies and supply chain management. However, blockchain's integrity and dependability have been jeopardized by a rising number of security threats, which have made it a target for cybercriminals. By summarizing suggested fixes, this research aims to offer a thorough analysis of how blockchain attacks can be mitigated. The objectives of the paper include identifying attacks that exploit blockchain weaknesses, evaluating various solutions, and determining how effective they are at preventing these attacks. The study also highlights how crucial it is to take into account the particular needs of each blockchain application. This study provides beneficial perspectives and insights for blockchain researchers and practitioners, making it essential reading for those interested in current and future trends in blockchain security research.

We consider the optimization of a smooth and strongly convex objective using constant step-size stochastic gradient descent (SGD) and study its properties through the prism of Markov chains. We show that, for unbiased gradient estimates with mildly controlled variance, the iterates converge to an invariant distribution in total variation distance. We also establish this convergence in Wasserstein-2 distance under a relaxed assumption on the gradient noise distribution compared to previous work. Thanks to the invariance property of the limit distribution, our analysis shows that the latter inherits sub-Gaussian or sub-exponential concentration properties when these hold for the gradient. This allows the derivation of high-confidence bounds for the final estimate. Finally, under such conditions in the linear case, we obtain a dimension-free deviation bound for the Polyak-Ruppert average of a tail sequence. All our results are non-asymptotic, and their consequences are discussed through a few applications.
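
The Markov-chain viewpoint can be illustrated with a toy simulation (my own, not the paper's analysis): constant step-size SGD on the 1-D quadratic f(x) = x^2/2 with Gaussian gradient noise. The iterates do not converge to the minimizer; after a burn-in they fluctuate according to a stationary distribution, and averaging the tail (a Polyak-Ruppert average) gives a much more accurate estimate. The step size and noise level are arbitrary.

```python
# Constant step-size SGD on a strongly convex quadratic with gradient noise.
import random
import statistics

random.seed(0)
STEP = 0.1          # constant step size
NOISE_STD = 1.0     # gradient noise level (assumed sub-Gaussian here)

def noisy_grad(x):
    """Unbiased estimate of f'(x) = x with additive Gaussian noise."""
    return x + random.gauss(0.0, NOISE_STD)

x = 5.0
tail = []
for k in range(20000):
    x -= STEP * noisy_grad(x)
    if k >= 10000:          # discard burn-in, keep the tail sequence
        tail.append(x)

# The tail concentrates around the minimizer 0 with a spread governed by the
# step size; the Polyak-Ruppert average of the tail is much closer to 0.
print(f"tail mean = {statistics.mean(tail):+.4f}, tail std = {statistics.stdev(tail):.4f}")
```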

Modern in-orbit satellites and other available remote sensing tools have generated a huge amount of public data, hosted on different servers and in different formats, waiting to be exploited. In this context, the ETL (extract, transform, load) formalism becomes relevant for the integration and analysis of the combined information from all these sources. Throughout this work, we present the theoretical and practical foundations to build a modular analysis infrastructure that allows the creation of ETLs to download, transform, and integrate data coming from different instruments in different formats. Part of this work is already implemented in a Python library, which is intended to be integrated into already available workflow management tools based on directed acyclic graphs, which in turn provide adapters to load the combined data into different warehouses.
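
The modular ETL idea can be sketched as follows (an illustration under assumed data formats and a stand-in warehouse, not the library described in the work): small extract, transform and load functions whose dependencies form a directed acyclic graph, so they can be wired into DAG-based workflow managers.

```python
# Two extract steps feed one transform step, which feeds one load step.
# Data sources, formats, and the in-memory "warehouse" are hypothetical.
import csv
import io
import json

def extract_csv(raw: str) -> list[dict]:
    """Extract: parse one remote-sensing product delivered as CSV text."""
    return list(csv.DictReader(io.StringIO(raw)))

def extract_json(raw: str) -> list[dict]:
    """Extract: parse a second product delivered as JSON text."""
    return json.loads(raw)

def transform_merge(a: list[dict], b: list[dict], key: str) -> list[dict]:
    """Transform: join the two sources on a shared key (e.g., tile id)."""
    index = {row[key]: row for row in b}
    return [{**row, **index.get(row[key], {})} for row in a]

def load(rows: list[dict], warehouse: list) -> None:
    """Load: append the integrated records to a (stand-in) warehouse."""
    warehouse.extend(rows)

warehouse: list[dict] = []
a = extract_csv("tile,ndvi\nT1,0.62\nT2,0.48\n")
b = extract_json('[{"tile": "T1", "cloud_cover": 0.1}, {"tile": "T2", "cloud_cover": 0.4}]')
load(transform_merge(a, b, key="tile"), warehouse)
print(warehouse)
```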

Iterated conditional expectation (ICE) g-computation is an estimation approach for addressing time-varying confounding in both longitudinal and time-to-event data. Unlike other g-computation implementations, ICE avoids the need to specify models for each time-varying covariate. For variance estimation, previous work has suggested the bootstrap. However, bootstrapping can be computationally intensive and sensitive to the number of resamples used. Here, we present ICE g-computation as a set of stacked estimating equations. Therefore, the variance of the ICE g-computation estimator can be estimated using the empirical sandwich variance estimator. Performance of the variance estimator was evaluated empirically in a simulation study. The proposed approach is also demonstrated with an illustrative example on the effect of cigarette smoking on the prevalence of hypertension. In the simulation study, the empirical sandwich variance estimator appropriately estimated the variance. When comparing runtimes between the sandwich variance estimator and the bootstrap for the applied example, the sandwich estimator was substantially faster, even when the bootstrap was run in parallel. The empirical sandwich variance estimator is a viable option for variance estimation with ICE g-computation.
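
For reference, the generic M-estimation form behind the approach (textbook notation, not the paper's specific estimating functions): stacking the nuisance-model and target-parameter equations gives a single vector estimating function, whose empirical sandwich variance has the usual bread-and-meat structure.

```latex
% Generic stacked estimating equations and empirical sandwich variance
% (standard M-estimation notation; the specific \psi for ICE g-computation
% is defined in the paper and not reproduced here).
\sum_{i=1}^{n} \psi(O_i; \hat\theta) = 0, \qquad
\widehat{V} = \frac{1}{n}\,\widehat{A}^{-1}\,\widehat{B}\,\big(\widehat{A}^{-1}\big)^{\top},
\qquad
\widehat{A} = -\frac{1}{n}\sum_{i=1}^{n}
  \left.\frac{\partial \psi(O_i;\theta)}{\partial \theta^{\top}}\right|_{\theta=\hat\theta},
\qquad
\widehat{B} = \frac{1}{n}\sum_{i=1}^{n} \psi(O_i;\hat\theta)\,\psi(O_i;\hat\theta)^{\top}.
```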

Reed-Solomon (RS) codes have been increasingly adopted by distributed storage systems in place of replication, because they provide the same level of availability with much lower storage overhead. However, a key drawback of RS-coded distributed storage systems is the poor latency of degraded reads, which can be triggered by data failures or hot spots and are not rare in production environments. To address this issue, we propose a novel parallel reconstruction solution called APLS. APLS leverages all surviving source nodes to send the data needed by degraded reads and chooses lightly loaded starter nodes to receive the reconstructed data of those degraded reads. Hence, the latency of degraded reads can be improved. Prototyping-based experiments are conducted to compare APLS with ECPipe, the state-of-the-art solution for improving degraded-read latency. The experimental results demonstrate that APLS effectively reduces the latency, particularly under heavy or medium workloads.
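
A toy illustration of the node-selection idea (a simplification for intuition, not the APLS protocol): given simulated per-node loads, the surviving nodes of the affected stripe act as parallel sources, and the least-loaded remaining node is chosen as the starter that receives and decodes their chunks.

```python
# Simulated cluster state and selection only; loads, stripe layout, and node
# names are made-up assumptions for illustration.
import random

random.seed(1)

load = {f"node{i}": random.randint(0, 100) for i in range(8)}  # queued work per node
stripe_nodes = ["node0", "node1", "node2", "node3", "node4", "node5"]  # nodes of one RS stripe
failed = "node3"

# All surviving nodes of the stripe can serve as sources in parallel...
sources = [n for n in stripe_nodes if n != failed]
# ...and a lightly loaded node outside the source set is picked as the starter
# that receives the chunks and performs the decode for the degraded read.
starter = min((n for n in load if n not in sources), key=load.get)

print(f"sources={sources}  starter={starter} (load {load[starter]})")
```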

In recent years, online social networks have been the target of adversaries who seek to introduce discord into societies, to undermine democracies and to destabilize communities. Often the goal is not to favor a certain side of a conflict but to increase disagreement and polarization. To get a mathematical understanding of such attacks, researchers use opinion-formation models from sociology, such as the Friedkin-Johnsen model, and formally study how much discord the adversary can produce when altering the opinions of only a small set of users. In this line of work, it is commonly assumed that the adversary has full knowledge about the network topology and the opinions of all users. However, the latter assumption is often unrealistic in practice, where user opinions are unavailable or simply difficult to estimate accurately. To address this concern, we raise the following question: Can an attacker sow discord in a social network even when only the network topology is known? We answer this question affirmatively. We present approximation algorithms for detecting a small set of users who are highly influential for the disagreement and polarization in the network. We show that when the adversary radicalizes these users, and if the initial disagreement/polarization in the network is not very high, then our method gives a constant-factor approximation relative to the setting in which the user opinions are known. To find the set of influential users, we provide a novel approximation algorithm for a variant of MaxCut in graphs with positive and negative edge weights. We experimentally evaluate our methods, which have access only to the network topology, and find that they perform similarly to methods that have access to both the topology and all user opinions. We further present an NP-hardness proof, resolving an open question posed by Chen and Racz [IEEE Trans. Netw. Sci. Eng., 2021].
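
For intuition on the signed-weight MaxCut subproblem, here is a plain local-search baseline in Python (a generic heuristic, not the approximation algorithm developed in the paper): a vertex switches sides whenever doing so increases the cut weight, and with a finite graph this terminates at a local optimum. The toy graph is arbitrary.

```python
# Local search for MaxCut on a graph with positive and negative edge weights.
from collections import defaultdict

edges = [(0, 1, 2.0), (1, 2, -1.0), (2, 3, 3.0), (0, 3, 1.5), (1, 3, -0.5)]
adj = defaultdict(list)
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))

side = {v: 0 for v in adj}  # start with every vertex on side 0

def gain(v):
    """Change in cut weight if v switches sides."""
    same = sum(w for u, w in adj[v] if side[u] == side[v])
    cross = sum(w for u, w in adj[v] if side[u] != side[v])
    return same - cross

improved = True
while improved:  # each flip strictly increases the cut, so this terminates
    improved = False
    for v in adj:
        if gain(v) > 0:
            side[v] ^= 1
            improved = True

cut = sum(w for u, v, w in edges if side[u] != side[v])
print(side, cut)
```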

Sampling methods (e.g., node-wise, layer-wise, or subgraph sampling) have become an indispensable strategy for speeding up the training of large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on graph structural information and ignore the dynamics of optimization, which leads to high variance in estimating the stochastic gradients. The high-variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method can be decomposed into embedding approximation variance in the forward stage and stochastic gradient variance in the backward stage, and that both types of variance must be mitigated to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance and explicitly reduces the variance introduced by embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and achieves better generalization than existing methods.
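
The two-part decomposition can be motivated by a generic bound (the standard inequality ||a + b||^2 <= 2||a||^2 + 2||b||^2, not the paper's exact result): writing the sampled gradient's error relative to an exact-embedding mini-batch gradient separates the two variance sources, which is why both must be controlled.

```latex
% \nabla L(\theta): full gradient; \hat{g}: mini-batch gradient with exact
% embeddings; \tilde{g}: mini-batch gradient with sampled (approximate)
% embeddings. Notation assumed for illustration.
\mathbb{E}\big\|\tilde{g} - \nabla L(\theta)\big\|^{2}
  \;\le\;
  2\,\underbrace{\mathbb{E}\big\|\tilde{g} - \hat{g}\big\|^{2}}_{\text{embedding approximation variance}}
  \;+\;
  2\,\underbrace{\mathbb{E}\big\|\hat{g} - \nabla L(\theta)\big\|^{2}}_{\text{stochastic gradient variance}} .
```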
