
Irregular memory access patterns pose performance and user-productivity challenges on distributed-memory systems: they can lead to fine-grained remote communication, and the access patterns are often not known until runtime. The Partitioned Global Address Space (PGAS) programming model addresses these challenges by providing users with a view of a distributed-memory system that resembles a single shared address space. However, this view often leads programmers to write code that causes fine-grained remote communication, which can result in poor performance. Prior work has shown that the performance of irregular applications written in Chapel, a high-level PGAS language, can be improved by manually applying optimizations. However, applying such optimizations by hand reduces the productivity advantages provided by Chapel and the PGAS model. We present an inspector-executor-based compiler optimization for Chapel programs that automatically performs remote data replication. While similar compiler optimizations have been implemented for other PGAS languages, high-level features in Chapel such as implicit processor affinity lead to new challenges for automatic optimization. We evaluate the performance of our optimization across two irregular applications. Our results show that total runtime can be improved by as much as 52x on a Cray XC system with a low-latency interconnect and 364x on a standard Linux cluster with an InfiniBand interconnect, demonstrating that significant performance gains can be achieved without sacrificing user productivity.
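The general inspector-executor pattern the optimization builds on can be illustrated in a few lines. The following Python sketch is only a conceptual model under simple assumptions (a toy array split across two "locales", a dictionary standing in for remote memory, and an illustrative `remote_get` helper); it is not Chapel code and not the compiler transformation itself.

```python
# Minimal sketch of the inspector-executor pattern for remote-data replication.
# Purely illustrative: a toy two-"locale" setup in plain Python, not Chapel code
# and not the paper's actual compiler transformation.

N = 16
OWNER_CUT = N // 2                      # "locale 0" owns [0, 8), "locale 1" owns [8, 16)

# Pretend these live in two different memories.
local_part  = {i: i * 10 for i in range(0, OWNER_CUT)}        # owned by "this" locale
remote_part = {i: i * 10 for i in range(OWNER_CUT, N)}        # owned by the other locale

idx = [1, 9, 3, 9, 14, 2, 9]            # irregular access pattern, known only at runtime

def remote_get(i):
    """Stand-in for a fine-grained remote GET (expensive on a real system)."""
    return remote_part[i]

# --- Inspector: run once, before the main loop ---------------------------------
# Record which accessed indices are remote and replicate each of them exactly once.
remote_needed = {i for i in idx if i >= OWNER_CUT}
replica_cache = {i: remote_get(i) for i in remote_needed}      # one-time bulk communication

# --- Executor: the original loop, now reading only local memory ----------------
result = []
for i in idx:
    if i < OWNER_CUT:
        result.append(local_part[i])
    else:
        result.append(replica_cache[i])  # served from the local replica, no remote traffic

print(result)   # [10, 90, 30, 90, 140, 20, 90]
```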

Related Content

Signal Temporal Logic (STL) is capable of expressing a broad range of temporal properties that controlled dynamical systems must satisfy. In the literature, both mixed-integer programming (MIP) and nonlinear programming (NLP) methods have been applied to solve optimal control problems with STL specifications. However, neither approach has succeeded in solving problems with complex, long-horizon STL specifications within a realistic timeframe. This study proposes a new optimization framework, called STLCCP, which explicitly incorporates several structural properties of STL to mitigate this issue. The core of our framework is a structure-aware decomposition of STL formulas, which converts the original program into a difference-of-convex (DC) program. This program is then solved as a sequence of convex quadratic programs based on the convex-concave procedure (CCP). Our numerical experiments on several commonly used benchmarks demonstrate that the framework can effectively handle complex scenarios over long horizons, which have been challenging to address even with state-of-the-art optimization methods.
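The convex-concave procedure at the heart of the framework can be shown on a toy difference-of-convex objective. The sketch below is a generic CCP iteration under assumed example functions f(x) = x^4 and g(x) = 2x^2; it is not the paper's STL decomposition or its quadratic-program formulation.

```python
# Minimal sketch of the convex-concave procedure (CCP) on a toy difference-of-convex
# (DC) objective.  This only illustrates the generic CCP iteration that such frameworks
# build on; the STL-specific decomposition is not modeled here.
from scipy.optimize import minimize_scalar

# DC objective: minimize f(x) - g(x), with f and g both convex.
f  = lambda x: x**4          # convex part, kept as-is
g  = lambda x: 2.0 * x**2    # the subtracted convex part, to be linearized
dg = lambda x: 4.0 * x       # gradient of g

x = 0.5                      # initial guess
for _ in range(20):
    # Convexify: replace g by its affine expansion around the current iterate.  By convexity
    # this surrogate upper-bounds the true objective, so each step cannot increase it.
    surrogate = lambda y, xk=x: f(y) - (g(xk) + dg(xk) * (y - xk))
    x_new = minimize_scalar(surrogate).x    # solve the convex subproblem
    if abs(x_new - x) < 1e-8:
        break
    x = x_new

print(f"CCP iterate: x = {x:.6f}, objective = {f(x) - g(x):.6f}")   # expects x close to 1, value close to -1
```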

This study proposes a hybrid deep-learning-metaheuristic framework with a bi-level architecture for road network design problems (NDPs). We train a graph neural network (GNN) to approximate the solution of the user equilibrium (UE) traffic assignment problem, and use inferences made by the trained model to calculate fitness-function evaluations for a genetic algorithm (GA) that approximates solutions to NDPs. Using two NDP variants and an exact solver as benchmarks, we show that our proposed framework can provide solutions within a 5% gap of the global optimum in less than 1% of the time required to find the optimal results. Our framework can be utilized within an expert system for infrastructure planning to intelligently determine the best infrastructure management decisions. Given its flexibility, the framework can easily be adapted to many other decision problems that can be modeled as bi-level problems on graphs. Moreover, we observe many interesting future directions and therefore propose a brief research agenda for this topic. The key observation inspiring future research is that fitness-function evaluation using the GNN model's inferences took on the order of milliseconds, which points to an opportunity and a need for novel heuristics that (1) cope well with noisy fitness-function values provided by neural networks, and (2) use the significantly larger computation budget available to them to explore the search space effectively (rather than merely efficiently). This opens a new avenue for a modern class of metaheuristics crafted for use with AI-powered predictors.
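The bi-level loop described above, a genetic algorithm whose fitness is evaluated by a fast learned surrogate instead of an exact user-equilibrium solver, can be sketched schematically. In the Python sketch below, `surrogate_total_travel_time` is a stand-in stub for the trained GNN, and all problem sizes, operators, and the budget constraint are illustrative assumptions.

```python
# Schematic sketch of the bi-level idea: a genetic algorithm over network-design decisions
# whose fitness is evaluated by a fast learned surrogate rather than an exact user-equilibrium
# solver.  The surrogate below is a stand-in stub, not the paper's trained GNN.
import random

N_LINKS, POP, GENS, BUDGET = 10, 30, 50, 4     # toy sizes; a design = which links to build

def surrogate_total_travel_time(design):
    """Stand-in for GNN inference: milliseconds-fast, possibly noisy estimate of UE cost."""
    base = sum(5.0 for bit in design if bit == 0)          # unbuilt links are "slow"
    return base + random.gauss(0.0, 0.1)                   # noise mimics model error

def fitness(design):
    if sum(design) > BUDGET:                               # infeasible: exceeds build budget
        return float("inf")
    return surrogate_total_travel_time(design)

def mutate(design, rate=0.1):
    return [1 - b if random.random() < rate else b for b in design]

def crossover(a, b):
    cut = random.randrange(1, N_LINKS)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(N_LINKS)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness)                           # cheap: surrogate call, not a UE solve
    parents = population[: POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = min(population, key=fitness)
print("best design:", best, "estimated cost:", round(fitness(best), 2))
```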

Instruction tuning for large language models (LLMs) has gained attention from researchers due to its ability to unlock the potential of LLMs in following instructions. While instruction tuning offers advantages for adapting LLMs to downstream tasks as a fine-tuning approach, training models with tens of millions or even billions of parameters on large amounts of data results in unaffordable computational costs. To address this, we focus on reducing the data used in LLM instruction tuning to decrease training costs and improve data efficiency, dubbed Low Training Data Instruction Tuning (LTD Instruction Tuning). Specifically, this paper conducts a preliminary exploration into reducing the data used in LLM training and identifies several observations regarding task specialization for LLM training, such as the optimization of performance for a specific task, the number of instruction types required for instruction tuning, and the amount of data required for task-specific models. The results suggest that task-specific models can be trained using less than 0.5% of the original dataset, with a 2% improvement in performance over those trained on the full task-related data.
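The data-reduction step can be pictured as selecting a small, task-specific slice of the full instruction corpus. The sketch below is a hypothetical illustration only: the field names, the selection heuristic, and the 0.5% budget cap are assumptions used for demonstration, not the paper's actual selection method.

```python
# Minimal sketch of the data-reduction step described above: keep only instructions relevant
# to one target task and cap the subset at a small fraction of the full corpus.
# Field names and the selection heuristic are illustrative assumptions.
import random

def select_ltd_subset(dataset, target_task, fraction=0.005, seed=0):
    """dataset: list of dicts like {"task": ..., "instruction": ..., "output": ...}."""
    task_examples = [ex for ex in dataset if ex["task"] == target_task]
    budget = max(1, int(len(dataset) * fraction))          # e.g. under 0.5% of the original data
    random.Random(seed).shuffle(task_examples)
    return task_examples[:budget]

# Usage: the returned subset is what would be fed to standard instruction tuning.
full_data = [{"task": random.choice(["nli", "summarization", "qa"]),
              "instruction": f"example {i}", "output": "..."} for i in range(10_000)]
subset = select_ltd_subset(full_data, target_task="nli")
print(len(subset), "of", len(full_data), "examples kept")   # 50 of 10000
```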

Persistent memory (PMEM) devices present an opportunity to retain the flexibility of main-memory data structures and algorithms while augmenting them with reliability and persistence. The challenge in doing so is to combine replication (for reliability) and failure atomicity (for persistence) with concurrency (for fully utilizing persistent memory bandwidth). These requirements are at odds: replicating a log of updates is inherently sequential, whereas concurrent updates are necessary to fully exploit the path from CPU to memory. We present Blizzard, a fault-tolerant, PMEM-optimized persistent programming runtime. Blizzard addresses this fundamental tradeoff by combining (1) a coupled operations log that permits tight integration of a PMEM-specialized user-level replication stack with a PMEM-based persistence stack, and (2) explicit control over the commutativity among concurrent operations. We demonstrate the generality and potential of Blizzard with three illustrative applications with very different data structure requirements for their persistent state. These use cases demonstrate that, with Blizzard, PMEM-native data structures can deliver up to 3.6x performance benefit over alternative purpose-built persistent application runtimes, while being simpler and safer (by providing failure atomicity and replication).
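The idea of explicit commutativity control can be illustrated with a toy operations log in which commutative operations may be batched for concurrent replication and application, while a non-commutative operation forces a sequencing point. The Python sketch below is a conceptual model only; the `Op` and `OpsLog` types are illustrative and do not reflect Blizzard's actual interface.

```python
# Conceptual sketch of explicit commutativity control over a replicated operations log:
# operations marked commutative may be batched and applied concurrently, while a
# non-commutative operation forces a sequencing point.  Illustrative only; not Blizzard's API.
from dataclasses import dataclass, field

@dataclass
class Op:
    name: str
    commutative: bool

@dataclass
class OpsLog:
    entries: list = field(default_factory=list)

    def append(self, op: Op):
        self.entries.append(op)

    def batches(self):
        """Group the log into batches whose members may be replicated/applied in parallel."""
        batch = []
        for op in self.entries:
            if op.commutative:
                batch.append(op)
            else:
                if batch:
                    yield batch          # flush the concurrent batch first
                    batch = []
                yield [op]               # non-commutative op runs alone, in log order
        if batch:
            yield batch

log = OpsLog()
log.append(Op("counter.add(k1, 5)", commutative=True))
log.append(Op("counter.add(k2, 3)", commutative=True))
log.append(Op("map.clear()",        commutative=False))
log.append(Op("counter.add(k1, 1)", commutative=True))
for i, b in enumerate(log.batches()):
    print(f"batch {i}:", [op.name for op in b])
```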

In the next-generation Internet of Things, the overhead introduced by grant-based multiple access protocols may engulf the access network as a consequence of the unprecedented number of connected devices. Grant-free access protocols are therefore gaining increasing interest to support massive access from machine-type devices with intermittent activity. In this paper, coded random access (CRA) with massive multiple-input multiple-output (MIMO) is investigated as a solution for designing highly scalable massive multiple access protocols, taking into account stringent requirements on latency and reliability. With a focus on signal processing aspects at the physical layer and their impact on the overall system performance, critical issues of successive interference cancellation (SIC) over fading channels are first analyzed. Then, SIC algorithms and a scheduler are proposed that can overcome some of the limitations of current access protocols. The effectiveness of the proposed processing algorithms is validated by Monte Carlo simulation for different CRA protocols and by comparison with benchmarks developed for this purpose.
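The core SIC idea, decoding the strongest contribution first, reconstructing it, and subtracting it before decoding the next, can be demonstrated numerically on a toy two-user BPSK example. The sketch below uses assumed channel gains and noise levels and is not the paper's massive-MIMO receiver or scheduler.

```python
# Toy numerical sketch of successive interference cancellation (SIC): decode the strongest
# user's signal first, reconstruct and subtract its contribution, then decode the next one.
# Bare-bones illustration with assumed gains; not the paper's receiver design.
import numpy as np

rng = np.random.default_rng(0)
n = 64                                            # symbols per packet
bits = [rng.integers(0, 2, n) for _ in range(2)]  # two users, BPSK
syms = [2.0 * b - 1.0 for b in bits]
h    = [1.0, 0.35]                                # channel gains (user 0 is the stronger one)

y = h[0] * syms[0] + h[1] * syms[1] + 0.05 * rng.standard_normal(n)   # superimposed RX signal

residual, decoded = y.copy(), []
for u in (0, 1):                                  # strongest first
    est_bits = (residual / h[u] > 0).astype(int)  # hard-decision BPSK detection
    decoded.append(est_bits)
    residual -= h[u] * (2.0 * est_bits - 1.0)     # cancel the reconstructed contribution

for u in (0, 1):
    print(f"user {u}: bit errors =", int(np.sum(decoded[u] != bits[u])))
```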

Earth observation (EO) is a prime instrument for monitoring land and ocean processes, studying the dynamics at work, and taking the pulse of our planet. This article gives a bird's eye view of the essential scientific tools and approaches informing and supporting the transition from raw EO data to usable EO-based information. The promises, as well as the current challenges of these developments, are highlighted in dedicated sections. Specifically, we cover the impact of (i) Computer vision; (ii) Machine learning; (iii) Advanced processing and computing; (iv) Knowledge-based AI; (v) Explainable AI and causal inference; (vi) Physics-aware models; (vii) User-centric approaches; and (viii) the much-needed discussion of ethical and societal issues related to the massive use of ML technologies in EO.

We propose an end-to-end deep-learning approach for automatic rigging and retargeting of 3D models of human faces in the wild. Our approach, called Neural Face Rigging (NFR), holds three key properties: (i) NFR's expression space maintains human-interpretable editing parameters for artistic controls; (ii) NFR is readily applicable to arbitrary facial meshes with different connectivity and expressions; (iii) NFR can encode and produce fine-grained details of complex expressions performed by arbitrary subjects. To the best of our knowledge, NFR is the first approach to provide realistic and controllable deformations of in-the-wild facial meshes, without the manual creation of blendshapes or correspondence. We design a deformation autoencoder and train it through a multi-dataset training scheme, which benefits from the unique advantages of two data sources: a linear 3DMM with interpretable control parameters as in FACS, and 4D captures of real faces with fine-grained details. Through various experiments, we show NFR's ability to automatically produce realistic and accurate facial deformations across a wide range of existing datasets as well as noisy facial scans in the wild, while providing artist-controlled, editable parameters.

Machine learning has recently made significant strides in reducing design cycle time for complex products. Ship design, which currently involves years-long cycles and small-batch production, could greatly benefit from these advancements. By developing a machine learning tool for ship design that learns from the design of many different types of ships, tradeoffs in ship design could be identified and optimized. However, the lack of publicly available ship design datasets currently limits the potential for leveraging machine learning in generalized ship design. To address this gap, this paper presents a large dataset of thirty thousand ship hulls, each with design and functional performance information, including parameterization, mesh, point cloud, and image representations, as well as thirty-two hydrodynamic drag measures under different operating conditions. The dataset is structured to allow human input and is also designed for computational methods. Additionally, the paper introduces a set of twelve ship hulls from publicly available CAD repositories to showcase the proposed parameterization's ability to accurately reconstruct existing hulls. A surrogate model was developed to predict the thirty-two wave drag coefficients and was then used in a genetic algorithm case study to reduce the total drag of a hull by sixty percent while maintaining the shape of the hull's cross-section and the length of the parallel midbody. Our work provides a comprehensive dataset and application examples for other researchers to use in advancing data-driven ship design.
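A surrogate of the kind used in the case study, a regression model mapping a hull parameterization to many drag coefficients, can be sketched as follows. The data below is random placeholder data standing in for the released dataset, and the feature and target dimensions are assumptions rather than the dataset's actual schema.

```python
# Minimal sketch of a surrogate model mapping a hull parameterization to multiple drag
# coefficients, in the spirit of the case study described above.  Random placeholder data
# stands in for the released dataset; dimensions are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_hulls, n_params, n_drag = 2000, 45, 32           # assumed sizes, not the dataset's schema

X = rng.uniform(0.0, 1.0, size=(n_hulls, n_params))                      # hull design parameters
Y = X[:, :n_drag] ** 2 + 0.01 * rng.standard_normal((n_hulls, n_drag))   # fake drag targets

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, Y_tr)

print("held-out R^2:", round(surrogate.score(X_te, Y_te), 3))
# In an optimization loop (e.g. a genetic algorithm), surrogate.predict(candidate_params)
# replaces the expensive hydrodynamic simulation for each candidate hull.
```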

Fourteen years after the invention of Bitcoin, there has been a proliferation of many permissionless blockchains. Each such chain provides a public ledger that can be written to and read from by anyone. In this multi-chain world, a natural question arises: what is the optimal security an existing blockchain, a consumer chain, can extract by only reading and writing to k other existing blockchains, the provider chains? We design a protocol, called interchain timestamping, and show that it extracts the maximum economic security from the provider chains, as quantified by the slashable safety resilience. We observe that interchain timestamps are already provided by light-client based bridges, so interchain timestamping can be readily implemented for Cosmos chains connected by the Inter-Blockchain Communication (IBC) protocol. We compare interchain timestamping with cross-staking, the original solution to mesh security, as well as with Trustboost, another recent security sharing protocol.
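The timestamping mechanism can be illustrated conceptually: the consumer chain periodically posts checkpoints to a provider chain, and conflicting consumer forks are resolved in favor of the fork whose checkpoint was timestamped first. The Python sketch below is an illustrative model of this general idea under simplifying assumptions, not the paper's protocol specification or its security analysis.

```python
# Conceptual sketch of interchain timestamping: the consumer chain posts checkpoints
# (height, block hash) to a provider chain, whose append-only order defines the timestamps.
# Conflicting consumer forks are resolved by the earliest timestamped checkpoint.
# Illustrative model only, under simplifying assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Checkpoint:
    consumer_height: int
    block_hash: str

provider_ledger = []                       # append-only: position defines the timestamp order

def timestamp(cp: Checkpoint):
    """Write a consumer-chain checkpoint into the provider chain's ledger."""
    provider_ledger.append(cp)

def resolve_fork(fork_a_hashes, fork_b_hashes):
    """Prefer the fork whose checkpoint was timestamped first on the provider chain."""
    for cp in provider_ledger:             # earliest provider-chain entry wins
        if cp.block_hash in fork_a_hashes:
            return "fork A"
        if cp.block_hash in fork_b_hashes:
            return "fork B"
    return "undecided"

timestamp(Checkpoint(100, "0xaaa"))        # honest fork checkpointed first
timestamp(Checkpoint(100, "0xbbb"))        # conflicting fork checkpointed later
print(resolve_fork({"0xaaa"}, {"0xbbb"}))  # -> "fork A"
```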

Fail-operational systems are a prerequisite for autonomous driving. Without a driver who can act as a fallback in a critical failure scenario, the system has to be able to mitigate failures on its own and keep critical applications operational. To reduce redundancy cost, graceful degradation can be applied by repurposing hardware resources at run-time: critical applications can be kept operational by starting passive backups and shutting down non-critical applications instead, making sufficient resources available. In order to design such systems efficiently, the degradation effects on reliability and cost savings have to be analyzed. In this paper, we present an approach to formally analyze the impact of graceful degradation on the reliability of critical and non-critical applications. We then quantify this effect for both critical and non-critical applications in distributed automotive systems and compare the achieved cost reduction with conventional redundancy approaches. In our experiments, the redundancy overhead could be reduced by 80% compared to active redundancy in a scenario with a balanced mix of critical and non-critical applications using our graceful degradation approach. Overall, we present a detailed reliability and cost analysis of graceful degradation in distributed automotive systems. Our findings confirm that graceful degradation can reduce cost tremendously compared to conventional redundancy approaches, with no negative impact on the redundancy of critical applications, provided that a reliability reduction of non-critical applications can be accepted. Our results show that a trade-off must be made between the degradation's impact on the reliability of non-critical applications and the cost reduction.
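A toy reliability model makes the described trade-off concrete: graceful degradation raises the reliability of the critical application (primary plus an activatable passive backup) while lowering that of the non-critical application whose resources may be repurposed. The numbers and independence assumptions in the Python sketch below are illustrative, not the paper's formal analysis.

```python
# Toy reliability model illustrating the graceful-degradation trade-off described above.
# All numbers and the independence assumptions are illustrative, not the paper's analysis.

R_primary  = 0.990   # probability the critical app's primary node survives the mission
R_host     = 0.990   # probability the node hosting the non-critical app survives
p_activate = 0.999   # probability the passive backup starts successfully when needed

# Critical app with graceful degradation: operational if the primary survives, or if the
# primary fails but the backup host survives and the passive backup is activated.
R_critical = R_primary + (1.0 - R_primary) * R_host * p_activate

# Non-critical app: shut down whenever degradation is triggered (primary failure), so it
# stays operational only if both its own host and the critical app's primary survive.
R_noncritical_degraded = R_host * R_primary
R_noncritical_baseline = R_host            # without degradation it depends only on its own host

print(f"critical app reliability:        {R_critical:.6f}")
print(f"non-critical (with degradation): {R_noncritical_degraded:.6f}")
print(f"non-critical (baseline):         {R_noncritical_baseline:.6f}")
```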
