
Recent advances in large language models have demonstrated their potential for automated generation of hardware description language (HDL) code from high-level prompts. Researchers have used fine-tuning to enhance the ability of these large language models (LLMs) in the field of chip design. However, the scarcity of Verilog data hinders further improvement in the quality of Verilog generation by LLMs. Additionally, the absence of a Verilog and Electronic Design Automation (EDA) script data augmentation framework significantly increases the time required to prepare training datasets for LLM trainers. This paper proposes an automated design-data augmentation framework that generates a high volume of high-quality natural language aligned with Verilog and EDA scripts. For Verilog generation, it translates Verilog files to an abstract syntax tree and then maps nodes to natural language with predefined templates. For Verilog repair, it uses predefined rules to generate incorrect Verilog files and then pairs EDA tool feedback with the correct and incorrect Verilog files. For EDA script generation, it uses an existing LLM (GPT-3.5) to obtain descriptions of the scripts. To evaluate the effectiveness of our data augmentation method, we fine-tune Llama2-13B and Llama2-7B models on the dataset generated by our augmentation framework. The results demonstrate a significant improvement in Verilog generation tasks with LLMs. Moreover, the accuracy of Verilog generation surpasses that of the current state-of-the-art open-source Verilog generation model, increasing from 58.8% to 70.6% on the same benchmark. Our 13B model (ChipGPT-FT) improves the pass rate over GPT-3.5 in Verilog generation and outperforms it in EDA script (i.e., SiliconCompiler) generation with only 200 EDA script training samples.
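As a minimal sketch of the AST-to-description step, the fragment below walks a toy Verilog module representation and emits natural-language fragments from predefined templates. The node classes, the templates, and the `describe` helper are illustrative assumptions, not the paper's actual framework, and a real pipeline would start from a genuine Verilog parser.

```python
# Minimal sketch of mapping (toy) Verilog AST nodes to natural language
# with predefined templates. All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Port:
    direction: str  # "input" or "output"
    width: int
    name: str

@dataclass
class Module:
    name: str
    ports: list = field(default_factory=list)

TEMPLATES = {
    "module": "Implement a Verilog module named `{name}`.",
    "port": "It has an {direction} port `{name}` that is {width} bit(s) wide.",
}

def describe(module: Module) -> str:
    """Map AST nodes to natural language with predefined templates."""
    lines = [TEMPLATES["module"].format(name=module.name)]
    for p in module.ports:
        lines.append(TEMPLATES["port"].format(
            direction=p.direction, name=p.name, width=p.width))
    return " ".join(lines)

adder = Module("adder8", [Port("input", 8, "a"), Port("input", 8, "b"),
                          Port("output", 9, "sum")])
print(describe(adder))
```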

Related content

Automator is an application developed by Apple for their Mac OS X system. Simply by pointing, clicking, and dragging with the mouse, a series of actions can be combined into a workflow, helping you complete complex tasks automatically (and repeatably). Automator can also work across many different kinds of applications, including Finder, the Safari web browser, iCal, Address Book, and others. It can also work with third-party applications such as Microsoft Office, Adobe Photoshop, or Pixelmator.

Modern regression applications can involve hundreds or thousands of variables, which motivates the use of variable selection methods. Bayesian variable selection defines a posterior distribution on the possible subsets of the variables (which are usually termed models) to express uncertainty about which variables are strongly linked to the response. This can be used to provide Bayesian model-averaged predictions or inference, and to understand the relative importance of different variables. However, there has been little work on meaningful representations of this uncertainty beyond first-order summaries. We introduce Cartesian credible sets to address this gap. The elements of these sets are formed by concatenating sub-models defined on each block of a partition of the variables. Investigating these sub-models allows us to understand whether the models in the Cartesian credible set always/never/sometimes include a particular variable or group of variables, and provides a useful summary of model uncertainty. We introduce methods to find these sets that emphasize ease of understanding. The potential of the method is illustrated on regression problems with both small and large numbers of variables.
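A hedged sketch of the construction described above: models in the set are concatenations of one sub-model per block of a variable partition, and checking membership across the set answers the always/never/sometimes question. The blocks and the per-block sub-model sets below are made up for illustration.

```python
# Toy Cartesian credible set: concatenate sub-models chosen per block.
from itertools import product

# Partition of five variables into two blocks (illustrative).
blocks = [("x1", "x2"), ("x3", "x4", "x5")]
# Credible sub-models per block: each is a set of included variables.
sub_models = [
    [frozenset({"x1"}), frozenset({"x1", "x2"})],   # block 1
    [frozenset(), frozenset({"x4"})],               # block 2
]

# The Cartesian credible set: every concatenation of one sub-model per block.
cartesian_set = [frozenset().union(*combo) for combo in product(*sub_models)]

for var in ("x1", "x2", "x4"):
    status = {var in m for m in cartesian_set}      # membership per model
    verdict = ("always" if status == {True}
               else "never" if status == {False} else "sometimes")
    print(f"{var}: included {verdict}")
```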

Traversing complex 3-D environments has always been a significant challenge for legged locomotion. Existing methods typically rely on external sensors such as vision and lidar to acquire environmental information and react to obstacles preemptively. However, in scenarios like nighttime or dense forests, external sensors often fail to function properly, so robots must rely on proprioceptive sensors to perceive the diverse obstacles in the environment and respond promptly; this is an undeniably challenging task. Our research finds that methods based on collision detection can enhance a robot's perception of environmental obstacles. In this work, we propose an end-to-end learning-based quadruped robot motion controller that relies solely on proprioceptive sensing. The controller can accurately detect, localize, and agilely respond to collisions in unknown and complex 3-D environments, thereby improving the robot's traversability. We demonstrate in both simulation and real-world experiments that our method enables quadruped robots to successfully traverse challenging obstacles in various complex environments.
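To make the proprioception-only idea concrete, here is an illustrative sketch (not the authors' network): a small MLP consumes a short history of joint states and predicts per-leg collision probabilities that a downstream policy could consume. The history length, joint dimensions, and architecture are all assumptions.

```python
# Illustrative proprioceptive collision detector: joint-state history in,
# per-leg collision probability out. Sizes below are assumptions.
import torch
import torch.nn as nn

HIST, JOINT_DIM, LEGS = 10, 12 * 2, 4   # 12 joints x (position, velocity)

class CollisionDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(HIST * JOINT_DIM, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, LEGS),             # one logit per leg
        )

    def forward(self, joint_history):         # (batch, HIST, JOINT_DIM)
        logits = self.net(joint_history.flatten(1))
        return torch.sigmoid(logits)           # collision probability per leg

detector = CollisionDetector()
probs = detector(torch.randn(1, HIST, JOINT_DIM))
print(probs.shape)  # torch.Size([1, 4])
```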

This paper introduces an innovative approach to the design of efficient decoders that meet the rigorous requirements of modern communication systems, particularly in terms of ultra-reliability and low latency. We enhance an established hybrid decoding framework by proposing an ordered statistics decoding scheme augmented with a sliding-window technique. This novel component replaces a key element of the current architecture, significantly reducing average complexity. A critical aspect of our scheme is the integration of a pre-trained neural network model that dynamically determines whether the sliding-window process progresses or halts. Furthermore, we present a user-defined soft-margin mechanism that adeptly balances the trade-off between decoding accuracy and complexity. Empirical results, supported by a thorough complexity analysis, demonstrate that the proposed scheme holds a competitive advantage over existing state-of-the-art decoders, notably in addressing the decoding failures prevalent in neural min-sum decoders. Additionally, our research reveals that short LDPC codes can deliver performance comparable to that of short classical linear codes within the critical waterfall region of the SNR curve, highlighting their potential for practical applications.
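A schematic of the control loop this describes, under stated assumptions: positions are ordered by reliability, one window of reprocessing runs at a time, and a pre-trained model decides whether further windows are worth the cost, with the soft margin shifting that decision toward accuracy or toward complexity. The window ordering, the halt model, and `osd_window` are stand-ins; real OSD reprocessing is abstracted away.

```python
# Sliding-window decoding loop with a learned halt decision (sketch).
import numpy as np

def decode(llr, osd_window, halt_model, window=16, step=8, margin=0.0):
    """Slide over reliability-ordered positions; a pre-trained model
    decides whether to continue, stopping early when confident enough."""
    order = np.argsort(np.abs(llr))          # least reliable positions first
    best, best_metric = None, np.inf
    for start in range(0, len(llr) - window + 1, step):
        positions = order[start:start + window]
        cand, metric = osd_window(llr, positions)   # reprocess one window
        if metric < best_metric:
            best, best_metric = cand, metric
        # The user-defined margin trades accuracy against complexity.
        if halt_model(llr, best_metric) > 0.5 + margin:
            break
    return best

# Toy usage with dummy components standing in for OSD and the halt net.
dummy_osd = lambda llr, pos: (np.sign(llr), float(np.sum(np.abs(llr[pos]))))
dummy_halt = lambda llr, m: 1.0 if m < 5.0 else 0.0
print(decode(np.random.default_rng(0).normal(size=32), dummy_osd, dummy_halt))
```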

Subdivision surfaces are an extension of splines that accommodates models with complex topologies, making them useful for addressing PDEs on such models in isogeometric analysis. This has generated considerable interest in approximation in subdivision spaces. Quasi-interpolation offers a highly efficient approach to spline approximation, eliminating the need to solve large linear systems of equations. Nevertheless, the lack of analytical expressions at extraordinary points on subdivision surfaces makes traditional techniques for constructing B-spline quasi-interpolants inappropriate for subdivision spaces. To address this obstacle, this paper reframes the evaluation problem for subdivision surfaces as a relation between subdivision matrices and limit points, offering a thorough quasi-interpolation method designed specifically for subdivision surfaces. The resulting quasi-interpolant, termed the subdivision space projection operator, exactly reproduces the subdivision space. We provide explicit quasi-interpolation formulas for several typical subdivision schemes. Numerical experiments demonstrate that the quasi-interpolants for Catmull-Clark and Loop subdivision exhibit third-order approximation in the $L_2$ norm and second-order approximation in the $L_\infty$ norm. Furthermore, the modified Loop subdivision quasi-interpolant achieves optimal approximation rates in both the $L_2$ and $L_\infty$ norms.
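As a hedged 1-D illustration of the limit-point idea (the regular curve case, not the paper's surface construction): for cubic B-spline subdivision, the limit position of control point $i$ is $(c_{i-1} + 4c_i + c_{i+1})/6$, so a space-reproducing quasi-interpolant can invert this banded "limit matrix" to make the limit curve hit the sampled values. Extraordinary points need the paper's subdivision-matrix treatment; the end conditions below are a simplification.

```python
# 1-D limit-matrix quasi-interpolation sketch for cubic B-spline subdivision.
import numpy as np

n = 16
t = np.linspace(0.0, 1.0, n)
f = np.sin(2 * np.pi * t)                  # samples to approximate

# Limit matrix A: (A @ c)[i] is the limit position of control point i.
A = (np.eye(n) * 4 + np.eye(n, k=1) + np.eye(n, k=-1)) / 6.0
A[0, :2] = [1, 0]                          # clamp ends (simplified)
A[-1, -2:] = [0, 1]

c = np.linalg.solve(A, f)                  # control points of the q.i.
print(np.max(np.abs(A @ c - f)))           # limit points hit the samples
```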

Large language models (LLMs) based on transformers have made significant strides in recent years, a success driven by scaling up model size. Despite their high algorithmic performance, the computational and memory requirements of LLMs present unprecedented challenges. To tackle the high compute requirements of LLMs, the Mixture-of-Experts (MoE) architecture was introduced, which can scale its model size without proportionally scaling up its computational requirements. Unfortunately, MoE's high memory demands and dynamic activation of sparse experts restrict its applicability to real-world problems. Previous solutions that offload MoE's memory-hungry expert parameters to CPU memory fall short because the latency of migrating activated experts from CPU to GPU incurs high performance overhead. Our proposed Pre-gated MoE system effectively tackles the compute and memory challenges of conventional MoE architectures through algorithm-system co-design. Pre-gated MoE employs a novel pre-gating function which alleviates the dynamic nature of sparse expert activation, allowing our proposed system to address the large memory footprint of MoEs while also achieving high performance. We demonstrate that Pre-gated MoE improves performance and reduces GPU memory consumption while maintaining the same level of model quality. These features allow our Pre-gated MoE system to cost-effectively deploy large-scale LLMs on just a single GPU with high performance.
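A conceptual sketch of pre-gating, rendered under assumptions rather than as the authors' system: the router for layer l+1 runs during layer l, so a runtime could begin copying the selected experts from CPU to GPU before they are needed. The class, layer sizes, and the simple mean over active experts are all illustrative.

```python
# Pre-gating sketch: compute next layer's expert selection one layer early.
import torch
import torch.nn as nn

class PreGatedMoELayer(nn.Module):
    def __init__(self, d_model=512, n_experts=8, top_k=2):
        super().__init__()
        self.next_router = nn.Linear(d_model, n_experts)  # gates layer l+1
        # Experts notionally live in CPU memory; only activated ones move.
        self.experts = nn.ModuleList(
            nn.Linear(d_model, d_model) for _ in range(n_experts))
        self.top_k = top_k

    def forward(self, x, active_ids, device="cpu"):
        # 1) Pre-gating: select layer l+1's experts *now*, so a runtime
        #    could start prefetching those weights CPU -> GPU.
        next_scores = self.next_router(x.mean(dim=1))
        next_ids = next_scores.topk(self.top_k, dim=-1).indices
        # 2) Run this layer with the experts chosen one layer earlier.
        y = torch.stack([self.experts[i].to(device)(x)
                         for i in active_ids]).mean(dim=0)
        return y, next_ids

layer = PreGatedMoELayer()
x = torch.randn(2, 4, 512)
y, prefetch_ids = layer(x, active_ids=[0, 3])
print(y.shape, prefetch_ids.shape)
```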

We propose an interdisciplinary framework that combines Bayesian predictive inference, a well-established tool in Machine Learning, with Formal Methods rooted in the computer science community. Bayesian predictive inference allows for coherently incorporating uncertainty about unknown quantities by making use of methods or models that produce predictive distributions, which in turn inform decision problems. By formalizing these decision problems into properties with the help of spatio-temporal logic, we can formulate and predict how likely such properties are to be satisfied in the future at a certain location. Moreover, we can leverage our methodology to evaluate and compare models directly on their ability to predict the satisfaction of application-driven properties. The approach is illustrated in an urban mobility application, where the crowdedness in the center of Milan is proxied by aggregated mobile phone traffic data. We specify several desirable spatio-temporal properties related to city crowdedness such as a fault-tolerant network or the reachability of hospitals. After verifying these properties on draws from the posterior predictive distributions, we compare several spatio-temporal Bayesian models based on their overall and property-based predictive performance.
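The verification step lends itself to a simple Monte Carlo sketch: draw trajectories from a posterior predictive distribution, check the formalized property on each draw, and report the satisfaction frequency. The draws and the toy property below are placeholders, not the paper's Milan data or logic specifications.

```python
# Estimate P(property holds) over posterior predictive draws (sketch).
import numpy as np

rng = np.random.default_rng(0)
# Posterior predictive draws of crowdedness: (n_draws, n_locations, n_times)
draws = rng.gamma(shape=2.0, scale=50.0, size=(1000, 25, 24))

def property_holds(traj, threshold=150.0, horizon=6):
    """Toy spatio-temporal property: at some time within the first
    `horizon` hours, crowdedness is below `threshold` at every location."""
    return bool(np.any(np.all(traj[:, :horizon] < threshold, axis=0)))

satisfaction = np.mean([property_holds(d) for d in draws])
print(f"P(property satisfied) ~= {satisfaction:.3f}")
```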

Tow steering technologies, such as automated fiber placement, enable the fabrication of composite laminates with curvilinear fiber, tow, or tape paths. Designers may therefore tailor tow orientations locally according to the expected local stress state within a structure, such that the strong and stiff orientations of the tow are (for example) placed to provide maximal mechanical benefit. Tow path optimization can be an effective tool for automating this design process, yet it has a tendency to create complex designs that may be challenging to manufacture. In the context of tow steering, these complexities can manifest as defects such as tow wrinkling, gaps, and overlaps. In this work, we implement manufacturing constraints within the tow path optimization formulation to restrict the minimum tow turning radius and the maximum density of gaps between and overlaps of tows. This is achieved by bounding the local value of the curl and divergence of the vector field associated with the tow orientations. The resulting local constraints are enforced in the optimization framework through the Augmented Lagrangian method. The methodology is demonstrated by designing 2-D and 3-D structures with optimized tow orientation paths that maximize stiffness (minimize compliance) under various levels of manufacturing restriction. The optimized tow paths are shown to be structurally efficient and to respect the imposed manufacturing constraints. As expected, the more geometric complexity the feedstock tow and placement technology can achieve, the higher the stiffness of the resulting optimized design.
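The local measures being bounded can be sketched directly: finite-difference curl and divergence of a 2-D tow-orientation field on a grid, with pointwise bound checks standing in for the Augmented Lagrangian enforcement. The grid spacing, the example field, and the bound values are illustrative assumptions.

```python
# Curl/divergence of a 2-D tow-orientation vector field (sketch).
import numpy as np

h = 0.1
x, y = np.meshgrid(np.arange(0, 1, h), np.arange(0, 1, h), indexing="ij")
theta = 0.5 * np.pi * y                  # example orientation angles
u, v = np.cos(theta), np.sin(theta)      # unit tow-direction field

du_dx, du_dy = np.gradient(u, h)
dv_dx, dv_dy = np.gradient(v, h)

curl_z = dv_dx - du_dy                   # in-plane curl (scalar in 2-D)
div = du_dx + dv_dy

# Local constraint checks: |curl| ties to the minimum turning radius,
# |div| to the density of gaps/overlaps. Bounds are illustrative.
curl_max, div_max = 2.0, 1.0
print("curl violations:", int(np.sum(np.abs(curl_z) > curl_max)))
print("div  violations:", int(np.sum(np.abs(div) > div_max)))
```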

We propose an end-to-end real-estate recommendation system, RE-RecSys, which has been productionized in a real-world industry setting. We categorize any user into one of four categories based on the available historical data: i) cold-start users; ii) short-term users; iii) long-term users; and iv) short-long-term users. For cold-start users, we propose a novel rule-based engine based on locality popularity and user preferences. For short-term users, we propose a content-filtering model that recommends properties based on users' recent interactions. For long-term and short-long-term users, we propose a novel combination of content- and collaborative-filtering-based approaches that can be easily productionized in real-world scenarios. Moreover, based on conversion rates, we design a novel weighting scheme for the different impressions users make on the platform for training the content and collaborative models. Finally, we show the efficiency of the proposed pipeline, RE-RecSys, on real-world property and clickstream datasets collected from a leading real-estate platform in India. We show that the proposed pipeline is deployable in a real-world setting with an average latency of <40 ms while serving 1000 requests per minute.
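A toy sketch of the user routing described above; the interaction-count criteria and the strategy names are assumptions, not the production rules.

```python
# Route a user to a recommendation strategy by interaction history (sketch).
def route_user(n_recent: int, n_historic: int) -> str:
    """Pick a strategy from recent vs. older interaction counts."""
    if n_recent == 0 and n_historic == 0:
        return "rule_based_popularity"        # cold-start users
    if n_recent > 0 and n_historic == 0:
        return "content_filtering"            # short-term users
    if n_recent == 0:
        return "content_plus_collaborative"   # long-term users
    return "content_plus_collaborative"       # short-long-term users

for counts in [(0, 0), (5, 0), (0, 40), (5, 40)]:
    print(counts, "->", route_user(*counts))
```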

The covXtreme software provides functionality for estimation of marginal and conditional extreme value models, non-stationary with respect to covariates, and environmental design contours. Generalised Pareto (GP) marginal models of peaks over threshold are estimated, using a piecewise-constant representation for the variation of GP threshold and scale parameters on the (potentially multidimensional) covariate domain of interest. The conditional variation of one or more associated variates, given a large value of a single conditioning variate, is described using the conditional extremes model of Heffernan and Tawn (2004), the slope term of which is also assumed to vary in a piecewise constant manner with covariates. Optimal smoothness of marginal and conditional extreme value model parameters with respect to covariates is estimated using cross-validated roughness-penalised maximum likelihood estimation. Uncertainties in model parameter estimates due to marginal and conditional extreme value threshold choice, and sample size, are quantified using a bootstrap resampling scheme. Estimates of environmental contours using various schemes, including the direct sampling approach of Huseby et al. (2013), are calculated by simulation or numerical integration under fitted models. The software was developed in MATLAB for metocean applications, but is applicable generally to multivariate samples of peaks over threshold. The software and case study data can be downloaded from GitHub, with an accompanying user guide.
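A minimal Python analogue of the marginal step (covXtreme itself is MATLAB): fit a generalised Pareto distribution to exceedances of a high threshold. The synthetic data, threshold choice, and quantile are illustrative; covariate-dependent piecewise-constant parameters and the bootstrap are omitted.

```python
# Peaks-over-threshold GP fit on synthetic data (sketch).
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
data = rng.gumbel(loc=5.0, scale=2.0, size=5000)   # synthetic "storm peaks"

u = np.quantile(data, 0.95)                        # GP threshold
exceedances = data[data > u] - u
shape, _, scale = genpareto.fit(exceedances, floc=0.0)
print(f"threshold={u:.2f}, shape(xi)={shape:.3f}, scale={scale:.3f}")

# High quantile of the fitted exceedance distribution, above the threshold.
q99 = u + genpareto.ppf(0.99, shape, loc=0.0, scale=scale)
print(f"0.99 exceedance quantile: {q99:.2f}")
```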

We consider the community detection problem in sparse random hypergraphs under the non-uniform hypergraph stochastic block model (HSBM), a general model of random networks with community structure and higher-order interactions. When the random hypergraph has bounded expected degrees, we provide a spectral algorithm that outputs a partition with at least a $\gamma$ fraction of the vertices classified correctly, where $\gamma\in (0.5,1)$ depends on the signal-to-noise ratio (SNR) of the model. When the SNR grows slowly as the number of vertices goes to infinity, our algorithm achieves weak consistency, which improves on the previous results of Ghoshdastidar and Dukkipati (2017) for non-uniform HSBMs. Our spectral algorithm consists of three major steps: (1) Hyperedge selection: select hyperedges of certain sizes to provide the maximal signal-to-noise ratio for the induced sub-hypergraph; (2) Spectral partition: construct a regularized adjacency matrix and obtain an approximate partition based on singular vectors; (3) Correction and merging: incorporate the hyperedge information from adjacency tensors to improve the error-rate guarantee. The theoretical analysis of our algorithm relies on the concentration and regularization of the adjacency matrix for sparse non-uniform random hypergraphs, which can be of independent interest.
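A sketch of step (2), the spectral partition: regularize the adjacency matrix by trimming high-degree rows, take leading singular vectors, and split on their signs. The toy graph below is a two-block planted partition rather than an HSBM, and the degree-trimming rule is an assumption.

```python
# Regularized spectral partition on a toy two-block graph (sketch).
import numpy as np
from numpy.linalg import svd

rng = np.random.default_rng(2)
n, k = 200, 2
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], 0.15, 0.02)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetric, no loops

# Regularization: zero out rows/cols with degree far above average.
d = A.sum(1)
A[d > 3 * d.mean(), :] = 0
A[:, d > 3 * d.mean()] = 0

U, S, _ = svd(A)
X = U[:, :k]                                        # leading singular vectors

# Split on the sign of the second singular vector (two communities).
pred = (X[:, 1] > 0).astype(int)
acc = max(np.mean(pred == labels), np.mean(pred != labels))
print(f"fraction correctly classified: {acc:.2f}")
```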
