
With the increasing adoption of smart contracts, ensuring their security has become a critical concern. Numerous vulnerabilities and attacks have been identified and exploited, resulting in significant financial losses. In response, researchers have developed various tools and techniques to identify and prevent vulnerabilities in smart contracts. In this survey, we present a systematic overview of the quality assurance of smart contracts, covering vulnerabilities, attacks, defenses, and tool support. By classifying vulnerabilities based on known attacks, we identify patterns and common weaknesses that need to be addressed. Moreover, to support the effective protection of smart contracts, we created a labeled dataset with which we evaluate various vulnerability detection tools and compare their effectiveness.
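
As a concrete illustration of the tool-comparison step described above, here is a minimal Python sketch of scoring detection tools against a labeled dataset; the dataset format, tool-report interface, and vulnerability labels are illustrative assumptions, not the survey's actual benchmark.

```python
# Hypothetical sketch: comparing vulnerability detection tools against a labeled dataset.
from collections import defaultdict

def evaluate_tool(tool_reports, ground_truth):
    """Compute per-class precision/recall for one tool.

    tool_reports:  {contract_id: set of vulnerability labels reported by the tool}
    ground_truth:  {contract_id: set of labeled (true) vulnerabilities}
    """
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for cid, true_labels in ground_truth.items():
        reported = tool_reports.get(cid, set())
        for label in reported & true_labels:
            tp[label] += 1
        for label in reported - true_labels:
            fp[label] += 1
        for label in true_labels - reported:
            fn[label] += 1
    metrics = {}
    for label in set(tp) | set(fp) | set(fn):
        p = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
        r = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
        metrics[label] = {"precision": p, "recall": r}
    return metrics

# Toy usage with made-up contracts and labels.
truth = {"c1": {"reentrancy"}, "c2": {"integer-overflow"}}
reports = {"c1": {"reentrancy", "tx-origin"}, "c2": set()}
print(evaluate_tool(reports, truth))
```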

Related content

Modern language models capture a large body of factual knowledge. However, some facts can be incorrectly induced or become obsolete over time, resulting in factually incorrect generations. This has led to the development of various editing methods that allow updating facts encoded by the model. Evaluation of these methods has primarily focused on testing whether an individual fact has been successfully injected and whether similar predictions for other subjects remain unchanged. Here we argue that such evaluation is limited, since injecting one fact (e.g., "Jack Depp is the son of Johnny Depp") introduces a "ripple effect" in the form of additional facts that the model needs to update (e.g., "Jack Depp is the sibling of Lily-Rose Depp"). To address this issue, we propose a novel set of evaluation criteria that consider the implications of an edit on related facts. Using these criteria, we then construct RippleEdits, a diagnostic benchmark of 5K factual edits, capturing a variety of types of ripple effects. We evaluate prominent editing methods on RippleEdits, showing that current methods fail to introduce consistent changes in the model's knowledge. In addition, we find that a simple in-context editing baseline obtains the best scores on our benchmark, suggesting a promising research direction for model editing.
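
A minimal sketch of what an in-context editing baseline and a ripple-effect check could look like; the prompt format, the `generate` callable, and the answer-matching rule are assumptions for illustration, not the paper's or benchmark's actual protocol.

```python
# In-context editing: condition generation on the new fact instead of updating weights.
def in_context_edit(generate, new_fact: str, query: str) -> str:
    prompt = f"Imagine that: {new_fact}\nQuestion: {query}\nAnswer:"
    return generate(prompt)

def ripple_accuracy(generate, new_fact: str, ripple_queries: dict) -> float:
    # ripple_queries maps a question about an implied fact to its expected answer,
    # e.g. {"Who is the sibling of Lily-Rose Depp?": "Jack Depp"} for the edit
    # "Jack Depp is the son of Johnny Depp".
    hits = sum(
        expected.lower() in in_context_edit(generate, new_fact, q).lower()
        for q, expected in ripple_queries.items()
    )
    return hits / len(ripple_queries)

# Toy usage with a dummy generator that always answers "Jack Depp".
print(ripple_accuracy(lambda p: "Jack Depp",
                      "Jack Depp is the son of Johnny Depp",
                      {"Who is the sibling of Lily-Rose Depp?": "Jack Depp"}))
```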

Slow concept drift is a ubiquitous yet under-studied problem in practical machine learning systems. In such settings, although recent data is more indicative of future data, naively prioritizing recent instances risks losing valuable information from the past. We propose an optimization-driven approach to balancing instance importance over large training windows. First, we model instance relevance using a mixture of multiple timescales of decay, allowing us to capture rich temporal trends. Second, we learn an auxiliary scorer model that recovers the appropriate mixture of timescales as a function of the instance itself. Finally, we propose a nested optimization objective for learning the scorer, by which it maximizes forward transfer for the learned model. Experiments on a large real-world dataset of 39M photos over a 9-year period show up to 15% relative gains in accuracy compared to other robust learning baselines. We replicate our gains on two collections of real-world datasets for non-stationary learning, and extend our work to continual learning settings, where we also beat SOTA methods by large margins.
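
The following sketch illustrates the general idea of weighting instances with a mixture of decay timescales predicted by an auxiliary scorer; the specific timescales, scorer form, and training procedure are assumptions, not the paper's exact formulation.

```python
import numpy as np

TIMESCALES = np.array([30.0, 180.0, 720.0])  # decay timescales in days (assumed)

def instance_weight(age_days: float, mixture: np.ndarray) -> float:
    """Weight of an instance given its age and a per-instance mixture over timescales."""
    decays = np.exp(-age_days / TIMESCALES)
    return float(np.dot(mixture, decays))

def scorer(features: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Auxiliary scorer: maps instance features to a softmax mixture over timescales."""
    logits = features @ W
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Example: weight of a ~6-month-old instance under a (randomly initialized) scorer.
W = np.random.randn(4, len(TIMESCALES)) * 0.01
x = np.array([1.0, 0.2, -0.5, 0.3])
print(instance_weight(age_days=180, mixture=scorer(x, W)))
```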

We propose expanding a shared Transformer module to produce and initialize Transformers of varying depths, enabling adaptation to diverse resource constraints. Drawing an analogy to genetic expansibility, we term such a module a learngene. To identify the expansion mechanism, we delve into the relationship between a layer's position and its corresponding weight values, and find that a linear function appropriately approximates this relationship. Building on this insight, we present Transformer as Linear Expansion of learnGene (TLEG), a novel approach for flexibly producing and initializing Transformers of diverse depths. Specifically, to learn the learngene, we first construct an auxiliary Transformer linearly expanded from the learngene and train it with soft distillation. Subsequently, we can produce and initialize Transformers of varying depths by linearly expanding the well-trained learngene, thereby supporting diverse downstream scenarios. Extensive experiments on ImageNet-1K demonstrate that TLEG achieves performance comparable to or better than many individual models trained from scratch, while reducing training cost by around 2x. When transferring to several downstream classification datasets, TLEG surpasses existing initialization methods by a large margin (e.g., +6.87% on iNat 2019 and +7.66% on CIFAR-100). When models of varying depths must be produced to fit different resource constraints, TLEG achieves comparable results while reducing the parameters stored to initialize these models by around 19x and the pre-training cost by around 5x, compared to the pre-training and fine-tuning approach. When transferring a fixed set of parameters to initialize different models, TLEG offers better flexibility and competitive performance while reducing the stored initialization parameters by around 2.9x, compared to the pre-training approach.
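
A hedged sketch of the core expansion idea: each layer's weights are produced as a linear function of its normalized depth position from two shared parameter sets; the parameter names and the exact position normalization are assumptions for illustration.

```python
import torch

def expand_learngene(alpha: dict, beta: dict, num_layers: int) -> list:
    """Return per-layer state dicts where layer l gets alpha + t_l * beta,
    with t_l a normalized layer position in [0, 1]."""
    layers = []
    for l in range(num_layers):
        t = l / max(num_layers - 1, 1)
        layers.append({name: alpha[name] + t * beta[name] for name in alpha})
    return layers

# Toy example: one weight matrix per layer, expanded into a 12-layer and a 6-layer model.
alpha = {"attn.qkv.weight": torch.randn(768, 768)}
beta = {"attn.qkv.weight": torch.randn(768, 768) * 0.01}
deep = expand_learngene(alpha, beta, num_layers=12)
shallow = expand_learngene(alpha, beta, num_layers=6)
print(len(deep), len(shallow))
```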

We investigate the effectiveness of using a large ensemble of advanced neural language models (NLMs) for lattice rescoring on automatic speech recognition (ASR) hypotheses. Previous studies have reported the effectiveness of combining a small number of NLMs. In contrast, in this study, we combine up to eight NLMs, i.e., forward/backward long short-term memory/Transformer-LMs that are trained with two different random initialization seeds. We combine these NLMs through iterative lattice generation. Since these NLMs complement one another, combining them one by one at each rescoring iteration gradually refines the language scores attached to the lattice arcs and, consequently, gradually reduces errors in the ASR hypotheses. We also investigate the effectiveness of carrying over contextual information (previous rescoring results) across a lattice sequence of a long speech, such as a lecture. In experiments using a lecture speech corpus, combining the eight NLMs and using context carry-over yielded a 24.4% relative word error rate reduction from the ASR 1-best baseline. For further comparison, we performed simultaneous (i.e., non-iterative) NLM combination and 100-best rescoring using the large ensemble of NLMs, which confirmed the advantage of lattice rescoring with iterative NLM combination.
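
A much-simplified sketch of the iterative combination idea, operating on hypothesis-level scores rather than lattice arcs; the running-average interpolation and the score scaling are assumptions, not the paper's actual rescoring procedure.

```python
def iterative_rescore(hypotheses, acoustic_scores, lm_ensemble, lm_weight=0.5):
    """hypotheses: list of token sequences; lm_ensemble: list of scoring functions,
    each mapping a token sequence to a log-probability-like score."""
    combined_lm = [0.0] * len(hypotheses)
    best = None
    for i, lm in enumerate(lm_ensemble, start=1):
        # Refine the running language score by folding in one more model per iteration.
        combined_lm = [
            (prev * (i - 1) + lm(hyp)) / i for prev, hyp in zip(combined_lm, hypotheses)
        ]
        totals = [a + lm_weight * l for a, l in zip(acoustic_scores, combined_lm)]
        best = hypotheses[max(range(len(totals)), key=totals.__getitem__)]
    return best

# Toy usage: two hypotheses, two dummy LMs that both prefer the shorter hypothesis.
hyps = [["a", "b", "c"], ["a", "b"]]
print(iterative_rescore(hyps, [0.0, -0.1], [lambda h: -len(h), lambda h: -0.5 * len(h)], 1.0))
```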

While operating communication networks adaptively may improve utilization and performance, frequent adjustments also introduce an algorithmic challenge: the re-optimization of traffic engineering solutions is time-consuming and may limit the granularity at which a network can be adjusted. This paper is motivated by the question of whether the reactivity of a network can be improved by re-optimizing solutions dynamically rather than from scratch, especially if inputs such as link weights do not change significantly. We explore to what extent dynamic algorithms can be used to speed up fundamental tasks in network operations. We specifically investigate optimizations related to traffic engineering (namely shortest paths and maximum flow computations), but also consider spanning tree and matching applications. While prior work on dynamic graph algorithms focuses on link insertions and deletions, we are interested in the practical problem of link weight changes. We revisit existing upper bounds in the weight-dynamic model and present several novel lower bounds on the amortized runtime for recomputing solutions. In general, we find that the potential performance gains depend on the application, and that there are strict limitations on what can be achieved, even if link weights change only slightly.
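
To make the weight-dynamic setting concrete, here is a small sketch of the easy case: repairing single-source shortest-path distances after one link-weight decrease by propagating only from the affected endpoint instead of recomputing from scratch. Handling weight increases is harder and is not covered by this sketch.

```python
import heapq

def on_weight_decrease(graph, dist, u, v, new_w):
    """graph: {node: [(neighbor, weight), ...]} with edge (u, v) already set to new_w;
    dist: current shortest distances from a fixed source. Updates dist in place."""
    if dist.get(u, float("inf")) + new_w >= dist.get(v, float("inf")):
        return  # the change cannot improve any distance
    dist[v] = dist[u] + new_w
    heap = [(dist[v], v)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue
        for y, w in graph[x]:
            if d + w < dist.get(y, float("inf")):
                dist[y] = d + w
                heapq.heappush(heap, (dist[y], y))

# Toy usage: after lowering the weight of edge (a, b) to 1, repair distances from source s.
g = {"s": [("a", 2)], "a": [("b", 1)], "b": []}
dist = {"s": 0, "a": 2, "b": 7}   # stale distance for b from before the change
on_weight_decrease(g, dist, "a", "b", 1)
print(dist)   # {'s': 0, 'a': 2, 'b': 3}
```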

Classical locally recoverable codes (LRCs) have become indispensable in distributed storage systems, as they provide efficient recovery from localized errors. Quantum LRCs have very recently been introduced for their potential application in quantum data storage. In this paper, we use classical LRCs to investigate quantum LRCs. We prove that the parameters of quantum LRCs are bounded by their classical counterparts, and we deduce bounds on the parameters of quantum LRCs from bounds on the parameters of the classical ones. We establish a characterization of optimal pure quantum LRCs based on classical codes with specific properties. Using well-crafted classical LRCs as ingredients in the construction of quantum CSS codes, we offer the first construction of several families of optimal pure quantum LRCs.
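
For context, a well-known Singleton-type bound for a classical [n, k, d] LRC with locality r, the kind of classical bound such deductions can start from, is:

```latex
d \le n - k - \left\lceil \frac{k}{r} \right\rceil + 2
```

Bounds of this type relate the minimum distance d to the length n, dimension k, and locality r.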

Small obstacles on the ground often lead to a fall when caught with commercial prosthetic feet. Although some recently developed feet can actively control the ankle angle, for instance on slopes, their flat and rigid sole remains a cause of instability on uneven ground. Soft robotic feet were recently proposed to tackle this issue; however, they lack consistent experimental validation. Therefore, this paper describes the experimental setup realized to test soft and rigid prosthetic feet with lower-limb prosthetic users. It includes a wooden walkway and differently shaped obstacles. It was preliminarily validated with an able-bodied subject, the same subject walking on commercial prostheses through modified walking boots, and a prosthetic user. They walked first on even ground, and then on even ground while stepping on one of the obstacles. Results in terms of vertical ground reaction force and knee moments in both the sagittal and frontal planes show how the poor performance of commonly used prostheses is exacerbated in the presence of obstacles. The prosthetic user, indeed, relies noticeably on the sound leg to compensate for the stiff and unstable interaction of the prosthetic limb with the obstacle. Since, as expected, the limitations of non-adaptive prosthetic feet in dealing with obstacles emerge from the experiments, this study justifies the use of the setup for investigating the performance of soft feet on uneven ground and in obstacle negotiation.

For machine learning models to be reliable and trustworthy, their decisions must be interpretable. As these models find increasing use in safety-critical applications, it is important that not just the model predictions but also their explanations (as feature attributions) be robust to small, human-imperceptible input perturbations. Recent works have shown that many attribution methods are fragile and have proposed improvements in either these methods or the model training. We observe two main causes for fragile attributions: first, the existing metrics of robustness (e.g., top-k intersection) over-penalize even reasonable local shifts in attribution, thereby making random perturbations appear to be a strong attack; and second, the attribution can be concentrated in a small region even when there are multiple important parts in an image. To rectify this, we propose simple ways to strengthen existing metrics and attribution methods by incorporating the locality of pixels in robustness metrics and the diversity of pixel locations in attributions. Regarding the role of model training in attributional robustness, we empirically observe that adversarially trained models have more robust attributions on smaller datasets; however, this advantage disappears on larger datasets. Code is available at //github.com/ksandeshk/LENS.
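
As an illustration of incorporating pixel locality into a robustness metric, here is a sketch of a locality-aware variant of top-k intersection; the window size and matching rule are assumptions, not the paper's exact metric.

```python
import numpy as np

def topk_coords(attr: np.ndarray, k: int):
    """Coordinates of the k highest-attribution pixels."""
    flat_idx = np.argsort(attr.ravel())[-k:]
    return set(zip(*np.unravel_index(flat_idx, attr.shape)))

def local_topk_intersection(attr_a: np.ndarray, attr_b: np.ndarray, k: int, window: int = 2) -> float:
    """Fraction of attr_a's top-k pixels with an attr_b top-k pixel within `window` pixels."""
    a, b = topk_coords(attr_a, k), topk_coords(attr_b, k)
    matched = sum(
        any(abs(i - x) <= window and abs(j - y) <= window for x, y in b) for i, j in a
    )
    return matched / k

# Toy usage on random attribution maps.
rng = np.random.default_rng(0)
m1, m2 = rng.random((32, 32)), rng.random((32, 32))
print(local_topk_intersection(m1, m2, k=50))
```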

With the rapid progress of virtual reality (VR) technology, the scope of VR applications has greatly expanded across various domains. However, the superiority of VR training over traditional methods and its impact on learning efficacy are still uncertain. To investigate whether VR training is more effective than traditional methods, we designed virtual training systems for mechanical assembly on both VR and desktop platforms, and subsequently conducted pre-test and post-test experiments. A cohort of 53 students, all enrolled in an engineering drawing course and without differences in prior knowledge, was randomly divided into three groups: physical training, desktop virtual training, and immersive VR training. We used analysis of covariance (ANCOVA) to examine the differences in post-test scores among the three groups while controlling for pre-test scores. The group that received VR training showed the highest post-test scores. Another facet of our study examined the sense of presence afforded by the virtual system, for which we developed a specialized scale tailored to our research objectives. Our findings indicate that VR training can enhance the sense of presence, particularly in terms of sensory and realism factors. Moreover, correlation analysis uncovers connections between the various dimensions of presence. This study confirms that VR training can improve learning efficacy and the sense of presence in the context of mechanical assembly, surpassing traditional training methods. Furthermore, it provides empirical evidence supporting the integration of VR technology into higher education and engineering training, and serves as a reference for the practical application of VR technology in different fields.
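
A minimal sketch of the ANCOVA described above using statsmodels, with post-test score as the outcome, training group as the factor, and pre-test score as the covariate; the column names and synthetic data are illustrative, not the study's actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Synthetic stand-in data: three training groups with a pre-test covariate.
rng = np.random.default_rng(0)
n_per_group = 18  # toy sizes, not the study's actual group sizes
groups = np.repeat(["physical", "desktop", "vr"], n_per_group)
pre = rng.normal(50, 5, groups.size)
effect = {"physical": 0.0, "desktop": 2.0, "vr": 5.0}
post = pre + np.array([effect[g] for g in groups]) + rng.normal(0, 3, groups.size)
df = pd.DataFrame({"group": groups, "pre": pre, "post": post})

# ANCOVA: compare post-test scores across groups while controlling for pre-test scores.
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(anova_lm(model, typ=2))
print(model.params)  # adjusted group effects relative to the baseline group
```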

There is abundant interest in assessing the joint effects of multiple exposures on human health. This is often referred to as the mixtures problem in environmental epidemiology and toxicology. Classically, studies have examined the adverse health effects of different chemicals one at a time, but there is concern that certain chemicals may act together to amplify each other's effects. Such amplification is referred to as synergistic interaction, while chemicals that inhibit each other's effects have antagonistic interactions. Current approaches for assessing the health effects of chemical mixtures do not explicitly consider synergy or antagonism in the modeling, instead focusing on either parametric or unconstrained nonparametric dose-response surface modeling. The parametric case can be too inflexible, while nonparametric methods face a curse of dimensionality that leads to overly wiggly and uninterpretable surface estimates. We propose a Bayesian approach that decomposes the response surface into additive main effects and pairwise interaction effects, and then detects synergistic and antagonistic interactions. Variable selection decisions for each interaction component are also provided. This Synergistic Antagonistic Interaction Detection (SAID) framework is evaluated relative to existing approaches using simulation experiments and an application to data from NHANES.
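
A schematic form of such a main-effect plus pairwise-interaction decomposition (notation illustrative, not necessarily the paper's exact model) is:

```latex
f(x_1, \dots, x_p) = \beta_0 + \sum_{j=1}^{p} f_j(x_j) + \sum_{1 \le j < k \le p} f_{jk}(x_j, x_k)
```

Here a synergistic pair (j, k) is one whose interaction term pushes the response above what the main effects alone would give, while an antagonistic pair pulls it below.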
